Hearing on Regulating Artificial Intelligence, C-SPAN, September 14, 2023, 11:29pm-1:53am EDT
Microsoft President Brad Smith testified on ways to regulate artificial intelligence. He joined other witnesses to discuss transparency and the idea of labeling products such as images and videos as being made by AI. This hearing, before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, is about two hours and 20 minutes.
>> The hearing of our Subcommittee on Privacy, Technology, and the Law will come to order. I want to welcome our witnesses and the audience who are here, and say a particular thanks to Senator Schumer, who has been very supportive and interested in what we are doing here, and also to Chairman Durbin, whose support has been invaluable in encouraging us to go forward. I am grateful to my partner in this effort, Senator Hawley; he and I produced a framework, basically a blueprint, for a path forward to achieve legislation. Our interest is in legislation, and this hearing, along with the two previous ones, has to be seen as a means to that end. We are very results oriented. I know you are, from your testimony, and I have been enormously encouraged and emboldened by the response so far, just in the past few days, and from my conversations with leaders in the industry. There is a deep appetite and hunger for rules and guardrails, basic safeguards for businesses and consumers, for people in general, against the panoply of potential perils, but there is also a desire for the potential benefits. Our effort is to provide for regulation in the best sense of the word: regulation that permits and encourages innovation, new business, and entrepreneurship, but provides those guardrails and enforceable safeguards that can encourage trust and confidence in this growing technology. It's not an entirely new technology; it's been around for a decade, but it is regarded as entering a new era. The question is how soon, and what. It should be regulation that encourages the best in American free enterprise but also provides the kind of protection that we do in other areas of our economic activity. To my colleagues who say there's no need for rules, that we have enough protecting the public: yes, we have laws that prohibit unfair and deceptive competition, and laws that regulate airline safety and drug safety, but nobody would argue that simply because we have those rules we don't need specific rules for medical devices or car safety. Just because we have rules that prohibit discrimination in the workplace doesn't mean we don't need rules that prohibit discrimination in voting. And we need to make sure that the rules are framed and targeted in a way that applies to the risks involved: risk-based
rules. Managing the risk is what we need to do here, so our principles are pretty straightforward, I think. We take pride of authorship, and we circulated the framework to encourage comment. We won't be offended by criticism; that's the way we can make this better and achieve legislation, we hope, or I hope, by the end of this year. The framework is basically: establishing a licensing regime for companies engaged in high risk AI development; creating an independent oversight body that has expertise with AI and works with other agencies to administer and enforce the law; protecting national and economic security, to make sure we are not enabling China or Russia and other adversaries to interfere in our democracy or violate human rights; requiring transparency about the limits and the uses of AI models, which at this point should include rules like watermarking, digital disclosure when AI is being used, and data access for researchers; and ensuring that companies can be held liable when their products breach privacy, violate civil rights, and endanger the public, with deep fakes and impersonations. We've all heard those terms, and we need to prevent those harms. Senator Hawley and I, as former attorneys general of our states, have a deep and abiding affection for the potential enforcement powers of those officials, state officials, but the point is there must be effective enforcement. Private rights of action, as well as federal enforcement, are very important. Let me close, before I turn to my colleague, by saying we are going to have more hearings. The way to build a coalition in support of these measures is to disseminate as widely as possible the information our colleagues need to understand what's at stake. We need to listen to the kind of industry leaders and experts we have before us today, and we need to act with dispatch, more than just deliberate speed. We need to learn from our experience that if we let this horse get out of the barn, it will be even more difficult to contain; we are seeing that on social media, the harm that it portends, right now as we speak. We are at the cusp of a new era. I asked at an earlier hearing what the greatest fear was, and I said my nightmare is the massive unemployment that could be created. That is an issue we don't deal with directly here, but it shows how wide the ramifications may be, and we need to deal with potential worker displacement and training. This new era portends enormous promise and peril, and we need to deal with both. I will turn now to the ranking member. >> Thank you, Mr. Chairman, and thank you for organizing this
hearing. This is now the third of these hearings we've done, and I've learned a lot in the previous couple. To sum up what we are learning about the potential of AI: some of it is exhilarating, and some of it is horrifying. I think what I hear the chairman saying, and I agree with this, is that we have a responsibility now to do our part to make sure that this new technology, which holds a lot of promise but also peril, works for the American people; that it's good for the American people; that it's good for families; and that we don't make the same mistakes that Congress made with social media. Thirty years ago now, Congress basically outsourced social media to the biggest corporations in the world, and that has been an unmitigated disaster: the biggest, most powerful corporations, not just in America but on the globe, in the history of the globe, doing whatever they want with social media, running experiments every day on America's kids, inflicting mental health harms the likes of which we've never seen, messing around in our elections in a way that is deeply corrosive to our way of life. We cannot make those mistakes again. So we are here, as Senator Blumenthal said, to try to find answers and to try to make sure that this technology is something that actually benefits the people of this country. I have no doubt, with all due respect to the corporate witnesses in front of us, that it's going to benefit your companies. What I want to make sure is that it benefits the American people, and I think that is the task we are engaged in. I look forward to it today. Thank you, Mr. Chairman. >> I want to introduce the witnesses, and then, as is our custom, I will swear them in and ask them to submit their testimony. Welcome to all of you. William Dally is NVIDIA's chief scientist. He joined NVIDIA in January 2009 as chief scientist after spending 12 years at Stanford University, where he was chairman of the computer science department. He has published over 250 papers and is the author of four textbooks. Brad Smith is the president and vice chair of Microsoft. As vice chair and president, he is responsible for spearheading the company's work, and representing it publicly, on a wide variety of critical issues involving the intersection of technology and society, including artificial intelligence, cybersecurity, privacy, environmental sustainability, human rights, digital safety, immigration, and products and business for nonprofit customers. We appreciate you being here. Professor Woodrow Hartzog is a professor of law and Class of 1960 Scholar at Boston University School of Law, a nonresident fellow at the Cordell Institute for Policy in Medicine and Law at Washington University, a faculty associate at the Berkman Klein Center for Internet and Society at Harvard University, and an affiliate scholar at the Center for Internet and Society at Stanford Law School. I could go on about each of you at much greater length with all of your credentials, but suffice it to say: very impressive. If you now stand, I will administer the oath. Do you solemnly swear that the testimony you are about to give is the truth, the whole truth,
and nothing but the truth, so help you God? Thank you. Why don't we begin with you. >> Chairman Blumenthal, Ranking Member Hawley, distinguished committee members, thank you for the privilege to testify today. I am NVIDIA's head of research, and I am glad to discuss artificial intelligence and its future. NVIDIA is at the forefront of accelerated computing and AI. The technology has the potential to transform industries and address global challenges, profoundly benefiting society. Since our founding in 1993, we have been committed to developing technology to empower people and improve the quality of life worldwide. Today, over 40,000 companies use NVIDIA platforms across media and entertainment, scientific computing, healthcare, financial services, internet services, automotive, and manufacturing, to solve the world's most difficult challenges and to bring new products
and services to consumers worldwide. At our founding in 1993, we were a 3D graphics startup, one of dozens of startups competing to create entirely new markets for accelerators to enhance computer graphics for games. In 1999, we invented the graphics processing unit, or GPU, which can perform a massive number of calculations in parallel. We launched it for gaming, but we recognized that GPUs could theoretically accelerate any application that could benefit from parallel processing. That bet paid off: today, researchers worldwide innovate on NVIDIA GPUs.
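[Editor's illustration.] Mr. Dally's point, that the same arithmetic applied independently across many data elements can run in parallel, can be shown with a minimal Python sketch. NumPy's vectorized kernels stand in for a GPU here; the array size and the element-wise operation are illustrative assumptions, not NVIDIA's actual workloads or benchmarks.

```python
# Data parallelism in miniature: an element-wise multiply-add over a large
# array, done one element at a time versus as one vectorized (parallel) call.
import time
import numpy as np

n = 10_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Serial version: one multiply-add per step, as a scalar loop would do.
# Only a slice is timed here, or the loop would dominate the demo.
start = time.perf_counter()
_ = [a[i] * b[i] + 1.0 for i in range(1000)]
serial_rate = 1000 / (time.perf_counter() - start)

# Data-parallel version: one vectorized call touches every element at once.
start = time.perf_counter()
_ = a * b + 1.0
parallel_rate = n / (time.perf_counter() - start)

print(f"serial ~{serial_rate:,.0f} elements/s, parallel ~{parallel_rate:,.0f} elements/s")
```

On typical hardware the vectorized call is orders of magnitude faster, which is the same property a GPU exploits at much larger scale.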
Through these collective efforts, we've made advances in AI that will revolutionize sectors across society and provide tremendous benefits in healthcare, medical research, education, business, cybersecurity, climate, and beyond. We also recognize that, like any new product or service, AI products and services have risks, and those who make, use, or sell AI-enabled products and services are responsible for their conduct. Fortunately, many uses of AI applications are subject to existing laws and regulations that govern the sectors in which they operate. AI-enabled services in high-risk sectors could be subject to enhanced licensing and certification requirements when necessary, while other applications with less risk of harm may need less stringent licensing or regulation. With clear, stable, thoughtful regulation, developers will work to benefit society while making their products and services as safe as possible. For our part, we are committed to the safe and trustworthy development and deployment of AI. For example, our NeMo Guardrails open-source software empowers developers to guide AI applications toward accurate, appropriate, and secure responses. NVIDIA has also implemented model risk management guidance and training, for comprehensive assessment and management of the risks associated with the models we develop. Today, NVIDIA announced it is endorsing the White House
voluntary commitments on AI. As we deploy AI more broadly, we can and will continue to identify and address risks. No discussion of AI would be complete without addressing what is often described as frontier AI models. Some have expressed fear that frontier models will evolve into uncontrollable artificial general intelligence, which could escape our control and cause harm. Fortunately, uncontrollable artificial general intelligence is science fiction, not reality. At its core, AI is a software program that is limited by its training, the input provided to it, and the nature of its output. In other words, humans will always decide how much decision-making power to cede to AI models. As long as we are thoughtful and measured, we can ensure the safe, trustworthy, and ethical deployment of these systems without suppressing innovation, and we can spur innovation by ensuring that AI tools are widely available to everyone, not concentrated in the hands of a few powerful firms. I will close with two
observations. First, the genie is already out of the bottle. AI algorithms are widely published and available to all. Software can be transmitted anywhere in the world at the press of a button, and many development tools, frameworks, and foundational models are open source. Second, no nation, and certainly no company, controls the chokepoints to AI development. There are computing platforms from companies around the world; while U.S. platforms may currently be the most energy efficient, cost efficient, and easiest to use, they are not the only viable alternatives abroad. Making AI safe and trustworthy will require multilateral cooperation, or it will not be effective. The United States is in a position to lead today, and with your help we can continue to lead on policy and innovation into the future. We stand ready to work with you to ensure that accelerated
computing and AI serve the best interests of all. Thank you for the opportunity to testify before this committee. >> Thank you very much. Mr. Smith? >> Chairman Blumenthal, Ranking Member Hawley, members of the subcommittee: my name is Brad Smith. I am the vice chair and president of Microsoft, and thank you for the opportunity to be here today, and, more importantly, for the work you've done to create the framework you've shared. Chairman Blumenthal, I think you put it very well: first we need to learn, and then act, with dispatch. Ranking Member Hawley, I think you offered real words of wisdom: let's learn from the experience the whole world has had with social media, and let's be clear about the promise and the peril in equal measure. I would say first that your framework, by design, does not attempt to answer every question, but it's a very strong and positive step in the right direction, and it puts the U.S.
government on a path to being a global leader in ensuring a balanced approach that will enable innovation to go forward with the right legal guardrails in place. As we all think about this more, I think it is worth keeping three goals in mind. First, let's prioritize safety and security, which your framework does. Let's require licenses for advanced AI models and uses in high risk scenarios. Let's have an agency that is independent and can exercise real and effective oversight over this category. And then let's couple that with the right kinds of controls that will ensure safety and the protection
of consumers. Let's prioritize national security, always. Let's think as well, as you have, about protecting privacy, civil rights, and the needs of kids, among the many other ways of working to ensure we get this right. Let's take the approach you are recommending, focused not only on companies that develop AI, like Microsoft, but also on companies that deploy AI, like Microsoft; in different categories we are going to need different levels of obligations. And as we go forward, let's think about the connection between the role of a central agency that will be on point for certain things and the obligations that will be laid out, which could be part of the work of many agencies, and indeed of our courts as well.
And let's do one other thing, maybe one of the most important things we need to do to ensure that the threats many people worry about remain part of science fiction and don't become a new reality: let's keep AI under the control of people. It needs to be safe, and to do that, as we've encouraged, there need to be safety brakes, especially for any AI application or system that can control critical infrastructure. If a company wants to use AI to take control of the electrical grid, or all of the self-driving cars on our roads, or the water supply, we need to learn from so many other technologies that do great things but can also go wrong. We need a safety brake, just as we have a circuit breaker in every building and home in this country to stop the flow of electricity if that's needed.
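[Editor's illustration.] The "safety brake" Mr. Smith describes can be sketched as a thin software layer between an AI controller and the infrastructure it commands. This is a hedged illustration, not Microsoft's design: the SafetyBrake class, the setpoint bounds, and the grid example are all hypothetical.

```python
# A human-owned kill switch wrapped around AI-issued commands to critical
# infrastructure, analogous to a circuit breaker: it can be tripped at any
# time, and even when off it bounds how far any single command can move things.
class SafetyBrake:
    def __init__(self, max_setpoint_change: float):
        self.max_setpoint_change = max_setpoint_change
        self.engaged = False  # a human operator can flip this at any time

    def engage(self) -> None:
        """Operator halts all AI-issued commands, like tripping a breaker."""
        self.engaged = True

    def check(self, current: float, proposed: float) -> bool:
        """Allow a command only if the brake is off and the change is bounded."""
        return not self.engaged and abs(proposed - current) <= self.max_setpoint_change


brake = SafetyBrake(max_setpoint_change=5.0)

def apply_ai_setpoint(current_load: float, ai_proposed_load: float) -> float:
    if brake.check(current_load, ai_proposed_load):
        return ai_proposed_load
    return current_load  # fail safe: keep the last human-approved state

print(apply_ai_setpoint(100.0, 103.0))  # allowed: within bounds
brake.engage()
print(apply_ai_setpoint(100.0, 103.0))  # blocked: brake engaged
```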
Then I would say let's keep a third goal in mind, and this is the one where I would suggest maybe doing a bit more, something to consider adding to the framework: let's remember the promise that this technology offers. Right now, if you go to state capitals, and to other countries, I think there's a lot of energy being put into that. When I see what Governor Newsom is doing in California, or what is happening in North Dakota, or Governor Youngkin in Virginia, I see them at the forefront of figuring out how to use AI to improve the delivery of healthcare, advance medicine, improve education for our kids, and, maybe most importantly, make government services more accessible and efficient. Let's see if we can find a way not only to make government better by using this technology, but cheaper, and to use the savings to provide more and better services to our people. That would be a good problem to have the opportunity to consider. In sum, this is not a time for
half measures. Let's go forward, as you have recommended. Let's be ambitious and get this right. Thank you. >> Thank you very much. I read your testimony, Professor, and you are very much against half measures, so we look forward to hearing the full measures you recommend. >> That's correct, Senator. Members of the committee, thank you for inviting me to appear before you today. I am a professor of law at Boston University, and my comments are based on a decade of researching law and technology issues, drawing from research on artificial intelligence policy that I conducted as a fellow with colleagues at the Cordell Institute at Washington University in St. Louis. Committee members, up to this point AI policy has largely been made up of industry-led approaches like encouraging transparency, mitigating bias, and promoting principles of ethics.
I'd like to make one simple point in my testimony today: these approaches are vital, but they are only half measures. They will not fully protect us. Lawmakers must go beyond them to ensure that AI systems are worthy of our trust. Half measures like audits, assessments, and certifications are necessary for data governance, but industry leverages checks like these to dilute our laws into managerial exercises that entrench harmful surveillance business models. A checklist is no match for the staggering fortunes available to those who exploit our data and labor to develop and deploy these systems, and it's no substitute for meaningful liability when the systems harm the public. Today I'd like to focus on three popular half measures and why lawmakers must do more.
First, transparency is a popular solution for opaque AI systems: if we understand the various parts of these systems, the thinking goes, lawmakers can intervene when the tools are harmful. A second laudable but insufficient approach is when companies work to mitigate bias. AI systems are notoriously biased along lines of race, class, gender, and ability. While mitigating bias is critical, self-regulatory efforts to make AI fair are half measures doomed to fail. It's easy to say systems shouldn't be biased, and difficult to find consensus on what that means and how to get there. Additionally, it's a mistake to assume that if a system is fair, it's safe for all people. Even if we ensure these systems work equally well for all communities, all we will have done is create a more effective tool that the powerful can use to dominate, manipulate, and
discriminate. A third half measure is committing to ethical principles. Ethics are important, and many of these principles sound impressive, but they are a poor substitute for law. It's easy to commit to principles, but industry doesn't have the incentive to leave money on the table for the good of society. I have three recommendations for moving beyond half measures. First, lawmakers must accept that AI systems are not neutral and regulate how they are designed. People often argue that lawmakers should avoid design rules for technologies because there are no bad AI systems, only bad AI users. This view of technology is wrong. There is no such thing as a neutral technology, and lawmakers have long regulated design through tools like liability for defective design and consumer protections targeting systems that
provide the means and instrumentalities of unfair and deceptive conduct. My second recommendation is to focus on substantive laws that limit abuses of power. AI systems are so complex and powerful that regulating them can seem like trying to regulate magic, but the broad risks and benefits of these systems are not so new: they bestow power, and that power is used to benefit some and harm others. Lawmakers should borrow from established approaches to remedying power imbalances: require broad, nonnegotiable duties of loyalty, care, and confidentiality, and implement robust bright line rules that limit harmful secondary uses and disclosures of personal data in AI systems. My final recommendation is to encourage lawmakers to resist the idea that AI is inevitable. When lawmakers go straight to putting up guardrails, they fail to ask questions about whether particular AI systems should exist at all.
That dooms us to half measures. Strong rules would include prohibitions on unacceptable practices, like emotion recognition, biometric surveillance in public spaces, and social scoring. In conclusion: to avoid the mistakes of the past, lawmakers must make the hard calls. Trust and accountability can only exist where the law provides meaningful protections for humans, and with AI, half measures will certainly not be enough. Thank you, and I welcome your questions. >> Thank you, Professor, and I take very much to heart your imploring us against half measures. I think, listening to both Senator Hawley and myself, you have a sense of our boldness and initiative, and we welcome all of your specific ideas, most especially, Mr. Smith, your suggestion that we can be more engaged and proactive, at the state level or the federal level, in making use of AI in the public
sector. But taking the thought that has been introduced, that AI technology in general is not neutral: how do we safeguard against the downsides of AI, whether it's discrimination or surveillance? Will this licensing regime and oversight entity be sufficient, and what kind of power do we need to give it? >> I would say, first of all, it is indispensable in certain high risk scenarios, but it won't be sufficient to address every issue. It is a critical start, though, because what it really ensures, especially for the frontier models that are most advanced, as well as certain applications that are highest risk, is that frankly you do need a license from the government before you go forward, and that is real accountability.
You can't drive a car until you get a license. You can't make the model or the application available until you pass through that gate. I do think it would be a mistake to think that one single agency or licensing regime would be the right recipe to address everything, especially when we think about the harms we need to address, and that's why I think it is equally critical that every agency in the government that is responsible for the enforcement of the law and the protection of people's rights master the capability to assess AI. I don't think we want to move the approval of every new drug from the FDA to this new agency, so by definition the FDA, for example, is going to need the capability to assess AI. That would be one of several additional specifics I think one can think about. >> That is an important point, because AI is going to be used in making automobiles, airplanes, toys for kids.
So the FAA, the FDA, the Federal Trade Commission, the Consumer Product Safety Commission: they all have presently existing rules and regulations, but there needs to be an oversight entity that uses some of those rules, adapts them, and adopts new rules, so that those harms can be prevented. There are a lot of different names we could call it; Connecticut now has an Office of Artificial Intelligence, a name we could use. But I think the idea is that we want to make sure the harms are prevented through a licensing regime focused on risk. You said that autonomous AI beyond human control is science fiction, but
science fiction has a way of coming true, and I wonder whether that is a potential fear. Certainly it is one that is widely shared at the moment; whether it's fact-based or not, it exists in the reality of human perception, and as you know, trust and confidence are very important. So I wonder how we counter the perception and prevent the science fiction from becoming reality. >> I should have said that artificial general intelligence getting out of control is science fiction, not autonomous AI; we use autonomous artificial intelligence, for example in autonomous vehicles, all the time. The way we make sure we keep control over AI of all sorts is, for any critical application, keeping the human in the loop. AI is a computer program that takes an input and produces an output, and if you don't connect that output to something that can cause harm, it can't cause that harm. So any time there is grievous harm that could happen, you want a human being between the output of the model and the cause of harm. I think as long as we are careful about how we deploy AI, keeping humans in the loop, we can assure that AI won't take over and shut down our power grid or cause airplanes to fall out of the sky. We can keep control over it.
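[Editor's illustration.] A minimal sketch of the human-in-the-loop pattern Mr. Dally describes might look like the following. The model stub, the operator prompt, and the turbine action are hypothetical stand-ins for a real deployment; the point is only that the harm-capable actuator sits behind a human decision, not behind the model.

```python
# Human-in-the-loop gate: the model only recommends; a person approves
# before anything harm-capable actually executes.
def model_recommendation(sensor_reading: float) -> str:
    # Stand-in for an AI model: input in, output out, nothing more.
    return "shut_down_turbine" if sensor_reading > 0.9 else "no_action"

def human_approves(action: str) -> bool:
    # In a real deployment this would be an operator console, not input().
    answer = input(f"AI recommends '{action}'. Approve? [y/N] ")
    return answer.strip().lower() == "y"

def actuate(action: str) -> None:
    print(f"executing: {action}")

reading = 0.95
action = model_recommendation(reading)
if action != "no_action" and human_approves(action):
    actuate(action)  # the harm-capable step happens only after human sign-off
else:
    print("no action taken")
```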
>> I have a lot more questions, but we are going to adhere to five-minute rounds; we have a very busy day with votes, as a matter of fact. I will turn to Senator Hawley. >> Thank you, Mr. Chairman, and thanks again to the witnesses for being here. I want to particularly thank you, Mr. Smith. I know that there's a group of your colleagues, your
counterparts in industry, who are gathering, I think, tomorrow. I appreciate you being willing to be here in public and answer questions in front of the press, where this is open to anybody who wants to see it; that is the way this ought to be done, and I appreciate you being willing to do that. You mentioned protecting kids. I want to start with that and ask a little bit about what Microsoft has done and is doing. Kids use your chatbot; is that fair to say? >> Yes. We have certain age controls, so we don't let a child of just any age use it, but yes, in general it is possible for children to register if they are a certain age. >> And that is? >> I'm trying to remember as I sit here. >> I think it's 13. >> I was going to say 12 or 13. >> Do you have some sort of age verification? How do we know what age a user is? Obviously the kid can put in whatever age he or she wants to.
Is there some sort of verification? >> We do have systems that involve getting permission from a parent, and we use this across our services, including for gaming. I don't remember off the top of my head exactly how it works, but I would be happy to get you the details. >> My impression is that it doesn't really have enforceable age verification; there's no way to know, but again, you can correct me if that's wrong. Let me ask you this: what happens to all of the information that our hypothetical 13-year-old is putting into the tool as he is having this chat? They could be chatting about anything, going back and forth on any number of subjects. What happens to that information? >> The most important thing I would say first is that it all has to be done in a manner that protects the privacy of children. >> And how is that? >> We follow the rules that exist to protect children's online
privacy, which forbid using it for tracking or advertising or other things, and which seek to put tight controls around the use and retention of that information. The second thing I would add is that, in addition to protecting privacy, we are hyper focused on ensuring that in most cases people of any age, but especially children, are not able to use something like Bing Chat in ways that would cause harm to themselves or others. >> And how do you do that? >> We have a safety architecture that we use across the board. Think about it like this: there are two things wrapped around the model. The first is called a classifier, so that if somebody asks "how can I commit suicide tonight" or "how can I blow up my school tomorrow," that hits a classifier that identifies a class of questions or prompts or issues. And second, there's what we call metaprompts, and if we intervene, the question is not answered. If someone asks how to commit suicide, we typically would provide a response that encourages them to get mental health assistance and counseling, and tells them how. If somebody wants to know how to build a bomb, it says you cannot use this to do that. That fundamental safety architecture is going to evolve and get better, but in a sense it is at the heart, if you will, of what we do and of the best practices in the industry, and I think part of what we are talking about here is how we take that architectural element and continue to strengthen it.
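[Editor's illustration.] The two-layer architecture Mr. Smith outlines, a classifier that recognizes harmful prompt categories plus an intervention that substitutes a safe response, could be sketched roughly as below. The keyword matching and canned responses are toy assumptions; a production classifier is a learned model, not a string check, and this is not Microsoft's actual implementation.

```python
# Safety wrapper in miniature: classify the prompt; if it falls in a harmful
# category, intervene with a safe response instead of answering.
from typing import Optional

SAFE_RESPONSES = {
    "self_harm": "If you are struggling, please reach out to a counselor "
                 "or call the 988 Suicide & Crisis Lifeline.",
    "violence": "I can't help with that request.",
}

def classify(prompt: str) -> Optional[str]:
    lowered = prompt.lower()
    if "suicide" in lowered or "hurt myself" in lowered:
        return "self_harm"
    if "build a bomb" in lowered or "blow up" in lowered:
        return "violence"
    return None  # no harmful category detected

def call_model(prompt: str) -> str:
    return f"[model response to: {prompt}]"  # hypothetical underlying model

def answer(prompt: str) -> str:
    category = classify(prompt)
    if category is not None:
        return SAFE_RESPONSES[category]  # intervene: question is not answered
    return call_model(prompt)

print(answer("how can i blow up my school tomorrow"))
```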
>> That's helpful. Let me ask about the information, back to the kids. Is it stored in the United States, or is it stored overseas? >> If the child is in the United States, the data is stored in the United States. That's true not only for children but for adults as well. >> And who has access to that
data? >> The child has access. The parents may or may not have access; typically we give -- >> Under what circumstances would the parents have access? >> I would have to get you the specifics. This is something we have implemented in the United States, though it isn't legally required in the United States as it is, as you may know, in Europe: people have the right to find out what information we have about them, they have the right to see it, the right to ask us to correct it if it's wrong, and the right to ask us to delete it if that's what they want us to do. >> And if they ask you to, you do? >> That's our promise, and we want to comply with it. >> I have a lot more questions; I'm going to try to adhere to the time. Five minutes, Mr. Chairman. [laughter] It's great news for us, not so much for the witnesses.
Before I leave the subject of kids' personal data and where it's stored: I'm asking you this because we have seen other technology companies in the social media space have major issues with where data is stored and who can access it, and I'm thinking in particular of China, where we've seen other social media companies say Americans' data is stored in America while lots of people in other countries can access it. So is that true for you? A child's data that's stored in the United States, as you said, if they are an American: can it be accessed, let's say in China, by a Microsoft engineer? >> I would love to go back and confirm that, but I don't believe it can. >> Could you get that for me for the record? I will have more questions later. Thanks, Mr. Chairman. >> Senator Klobuchar. >> Thank you, all of you. I think I will lead with some election questions, given my role as chair of
the Rules Committee. Mr. Smith, in your written testimony you talk about how watermarks could be helpful for the disclosure of AI-generated material. As you know, and we've talked about this, I have a bill that would require a disclaimer and some kind of mark. I think we have to go further, and we will get to that in a minute, but can you talk about what you mean when you write that the health of democracy and civic discourse will undoubtedly benefit from initiatives that help protect the public against deception or fraud facilitated by AI-generated content? >> Absolutely, and I do think things are moving quickly, in a positive direction, in terms of what we are seeing. On the positive side, we are seeing the industry come together, and a company like Adobe exercise leadership, and there's a recipe that I see emerging. It starts with a first principle: people should have the right to know if they are getting a phone call from a computer, or if there's content coming from an AI system rather than a human being. We then need to make that real and create what's called a provenance system, with watermarking for legitimate content, so that it can't be altered easily and so we can detect a deep fake. And we need to create an effort that brings industry and governments together, so we know what to do, and there is consensus, when we do spot deep fakes, especially those that have altered legitimate content.
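[Editor's illustration.] The provenance idea Mr. Smith describes, signed metadata declaring content AI-generated so that alteration is detectable, might be sketched as follows. Real systems such as C2PA use certificate chains and embedded manifests; the shared HMAC key here is a deliberate simplification for a self-contained example.

```python
# Label AI-generated content with a signed manifest, so both the "made by AI"
# claim and the content hash can be verified, and tampering is detectable.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a publisher's private signing key

def label_content(content: bytes, generator: str) -> dict:
    manifest = {
        "ai_generated": True,
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_label(content: bytes, manifest: dict) -> bool:
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"...synthetic image bytes..."
label = label_content(image, generator="example-image-model")
print(verify_label(image, label))                # True: intact and labeled
print(verify_label(image + b"tampered", label))  # False: content was altered
```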
>> And let's get to that. We have introduced our bill today, with Senator Collins, who led the Electoral Count Act reform, and Senator Coons, to ban the use
of deceptive AI-generated content in elections, so this would work in concert with it. This is about content pretending to be the elected official or the candidate when it is not, and we've seen this used against people on both sides of the aisle, which is why it was so important that we lead and be bipartisan in this work, and I want to thank him for his leadership, not only on the framework but on what we are doing. And I will go to you, Senator. I just promoted you. [laughter] >> Maybe. [laughter] >> In your testimony you advocate for the outright prohibitions we are talking about. We do of course have a constitutional exception for satire and humor, because we love it so much, the senators do, just kidding. But could you talk about why you believe there has to be an
outright ban on misleading content relating to federal candidates in political ads? >> Sure, absolutely. Thank you for the question. Keeping in mind, of course, the free expression constitutional protections that would apply to any sort of legislation, I do think bright line rules and prohibitions around such deceptive ads are critical, because, as I said in my testimony, the procedural walk-throughs often give the veneer of protection without protecting. So outright prohibiting these practices, I think, is really important. And I would even go potentially a step further and think about ways in which we can prohibit not just practices we would consider deceptive but practices we would consider abusive, those that leverage our desire to believe
or want to believe things against us; there is a body of law on abusive practices that runs alongside unfair and deceptive trade practices. >> I talked to someone I know well who has a kid in the Marines who is deployed; they got a call asking for money to be sent somewhere. Could you talk about what companies can do? I appreciate the work you've done to ensure the platforms are designed so they can't be used for criminal purposes, because that has got to be part of the work we do, and not just for elected officials. >> The best measures against deep fakes, and Mr. Smith mentioned this, are provenance systems, where you can have authentic images and voice recordings signed by the device, whether it's a camera or an audio recorder that recorded that voice, so that when a recording is presented it can be authenticated as genuine and not a deep fake. The flip side of that would be requiring a watermark on anything that is synthetically generated. Those technologies in combination can help people sort out what is real from what is fake, along with a certain amount of public education to make sure people understand what the technology is capable of and are on guard.
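[Editor's illustration.] Mr. Dally's flip side, device-signed authentic content, can be sketched with public-key signatures. This assumes the third-party `cryptography` package, and the in-memory camera key is a stand-in for a key held in the device's secure hardware.

```python
# The capture device signs genuine recordings with its private key; anyone
# holding the corresponding public key can later verify that the content is
# unaltered and actually came from that device.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice this key never leaves the camera; it is generated here only so
# the example is self-contained.
camera_key = Ed25519PrivateKey.generate()
camera_public_key = camera_key.public_key()

def capture(image_bytes: bytes) -> tuple[bytes, bytes]:
    """The device signs each recording at capture time."""
    return image_bytes, camera_key.sign(image_bytes)

def is_authentic(image_bytes: bytes, signature: bytes) -> bool:
    """Verification fails for anything not signed by the device."""
    try:
        camera_public_key.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False

photo, sig = capture(b"...pixels from the sensor...")
print(is_authentic(photo, sig))               # True: genuine, unaltered capture
print(is_authentic(b"deepfake pixels", sig))  # False: not signed by the device
```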
>> Last, Mr. Smith, back to where I started. Some platforms use local news content without compensating journalists and papers, including by using their content to train AI algorithms. The Journalism Competition and Preservation Act, which I have with Senator Kennedy, would allow local news organizations to negotiate with online platforms, including generative AI platforms that use their content
without compensation. Could you talk about the impact that could have on local journalism? You talked in your testimony about the importance of investment in quality journalism, but we've got to find a way to make sure the people actually doing the work are compensated, in many fields but also in journalism. >> I would say three quick things. Number one, we need to recognize that local journalism is fundamental to the health of the country and of the electoral system, and it's ailing, so we need to find ways to preserve and promote it. Number two, generally, I think we should let local journalists and publications make decisions about whether they want their content to be available for training or grounding and the like; that's a big topic, and it's worthy of more discussion. And we should certainly let them, in my view, negotiate collectively, because that is the only way they are going to negotiate effectively.
>> I appreciate your words. I'm going to get in trouble with Senator Blumenthal here. >> There are ways we can use AI to help local journalists, and we are interested in that, so let's add that to the list. >> Very good, and thank you, both of you. I talked about Senator Hawley's work, but thank you also for your leadership. >> Thank you for yours, Senator Klobuchar. Senator Hirono. >> Mr. Smith, it's good to see you again. Every time we have one of these hearings we learn something new, but the conclusion I have drawn is that AI is ubiquitous. Anybody can use AI; it can be used in any endeavor. When I hear testimony about how we shouldn't be taking half measures, I am not sure what that means. What does it mean not to take half measures on something as ubiquitous as AI, where there are
other regulatory schemes that can touch upon the endeavors that use AI? There's always a question I have when we address something as complex as AI: whether there are unintended consequences that we should care about. Would you agree? >> I would absolutely agree. I think we have to define what is a full measure and what is a half measure, but we can all agree that half measures are not good enough. >> That is the thing: how to recognize, going forward, what is going to help us with this powerful tool. So I have a question for you, Mr. Smith. It is a powerful tool that can be used to spread a lot of disinformation, and that happened during the disaster on Maui. The residents were subjected to
disinformation, some of it coming from foreign governments, i.e., Russia, looking to sow confusion and distress, including "don't sign up for FEMA because they cannot be trusted." I worry that with AI such disinformation will only become more rampant in future disasters. Do you share my concern about misinformation in the disaster context and the role that AI can play, and what can we do to prevent these foreign entities from pushing disinformation on people who are very vulnerable? >> I absolutely share your concern, and there are two things we need to think about doing. First, let's use the power of AI, as we are, to detect these kinds of activities when they are taking place, because that will enable us to move faster, as we did in that instance, where Microsoft and others used data technologies to identify what people were doing.
Number two, I think we just need to stand up as a country, with other governments and with the public, and say there need to be some clear red lines in the world today, regardless of how much else we disagree upon. Think about what typically happens in the wake of an earthquake or hurricane or tsunami or flood: the world comes together, people are generous, they help provide relief. Then look at what happened after the fire on Maui. It was the opposite of that. We had some people, not necessarily directed by the Kremlin but people who regularly spread Russian propaganda, trying to discourage people from going to the agencies that could help them. That's inexcusable. And we saw what we believe is Chinese-directed activity trying to persuade the world, in multiple languages, that the fire was caused by the United States government itself using a
meteorological weapon. Those are the things we should all try to bring the international community together to agree are off-limits. >> How do we identify that this is even occurring, that this is China- or Russia-directed misinformation? I didn't know this was happening, by the way. Even in the Energy Committee, where we had people testify, I asked, regarding the disaster, whether a witness was aware that there had been disinformation put out by a foreign government in that example, and he said yes. But I don't know that people recognized it was going on. So how do we even identify that it's going on, and then come forward and say this is happening, and name names, identify which country is spreading this kind of
misinformation? >> we have to think about two things. first a company like microsoft we have to lean in with data and infrastructure and experts and real-time capability to spot these threats, find the patterns and reach well-founded conclusions and the second thing this is where it's going to need all of your help, what do we do if wet find that a foreign government is deliberately trying to spread false information next year and a senate or presidential campaign about a candidate how do we create the room so that information can be shared and people will consider it? the most important word in your framework is bipartisan. how do we create the bipartisan framework so that when we find this, we create a climate where people can listen? i think we have to look at both of those parts of the problem together. >> i hope wege can do that and f you don't mind, mr. chairman,
12:23 am
one of the concerns from the worker standpoint is that their jobs will be gone. Professor, you mentioned that AI can result in job losses. For both you and Mr. Smith: what are the kinds of jobs that will be lost to AI? >> That's an excellent question, and it's difficult to predict the future, but I would start by saying it's not necessarily what can be automated effectively, but what those who control the purse strings think can be automated. And if it gets to the point where it appears it could be, I imagine you will see industry moving in that direction. >> Mr. Smith, I think you mentioned in your book, which I am listening to, that things like ordering something at a drive-
through, those things could be turned over to AI. >> Four years ago we published our book, with our co-author behind me, and asked: what's the first job we think might be eliminated by AI? We don't have a crystal ball, but I said it's taking an order at the drive-through of a fast food restaurant. You're not really establishing a rapport with a human being; all the person does is listen and type into a computer what you're saying. So when AI can hear as well as a person, it can enter the order as well. And indeed, I was struck a few months ago when Wendy's, I think, announced that they were starting to consider whether they would automate the drive-through. I think there's a lesson there, though, and it should give us both pause and, I think, a little bit of optimism: there's no creativity involved at the drive-through, at least in the listening and entering. There are so many jobs that do involve creativity, so the hope, I think, is to use AI to automate the routine, maybe even the work that's boring, to free people up so they can be more creative and focus more on paying attention to other people and helping them. If we apply that recipe more broadly, we can put ourselves on a path that is more promising.
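[Editor's illustration.] The drive-through example reduces to "listen, then enter the order," which is why it automates early. A toy sketch of the entry half, assuming the speech has already been transcribed by a separate speech-to-text model, might look like this; the menu, the grammar, and the parser are illustrative only.

```python
# Parse a transcribed customer utterance into a structured order and price it.
import re

MENU = {"burger": 5.99, "fries": 2.49, "shake": 3.99}

def parse_order(transcript: str) -> dict[str, int]:
    """Turn 'two burgers and a shake' into {'burger': 2, 'shake': 1}."""
    words_to_numbers = {"a": 1, "one": 1, "two": 2, "three": 3}
    order: dict[str, int] = {}
    for count_word, item in re.findall(r"(\w+)\s+(burgers?|fries|shakes?)",
                                       transcript.lower()):
        item = item if item == "fries" else item.rstrip("s")  # normalize plurals
        order[item] = order.get(item, 0) + words_to_numbers.get(count_word, 1)
    return order

order = parse_order("I'll take two burgers and a shake")
total = sum(MENU[item] * qty for item, qty in order.items())
print(order, f"${total:.2f}")  # {'burger': 2, 'shake': 1} $15.97
```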
>> Thank you, Senator. Senator Kennedy. >> Thank you, Mr. Chairman, and thank you for calling this hearing. If I am a recipient of content created by generative AI, do you think I should have the right to
know that content was generated by a robot? >> Yes, I think you do. I think the details would depend on the context, but in most cases, if I or anybody else received something, I would like to know: is this real, or was this generated? >> What I would say is, if you are listening to audio or watching a video or seeing an image that was generated by AI, I think people have the right to know. Where I think there is a nuance is if you are using it to, say, help you write something, maybe write a first draft, just as I don't think any of us would say, when our staff helps us write something, that we are obliged to give the speech and say "now I'm going to read the paragraph that my staff wrote." You make it your own. I think the written word is a little more complex, so we need to think that through, but as a broad principle, I agree.
>> There are situations where you probably wouldn't expect to be dealing with the product of generative AI. >> As times change, it's possible our expectations change, and so on. >> But as a principle, do you think people should have the right to know when they are being fed content from generative AI? >> I tell my students it depends on the context. If you are vulnerable to generative AI, then the answer is absolutely yes. >> What do you mean, "if you are vulnerable"? I'm just looking for, no disrespect, a straight answer. I like two things: breakfast food and straight answers. [laughter]
If a robot is feeding me information and I don't know it's a robot, am I entitled to know, as a consumer, straight up? >> I think the answer is yes. >> Am I entitled to know who owns that robot, and where the content came from? I know it came from a robot, but somebody had to make it give me that content. Am I entitled as a consumer to know who owns it? >> I think that is a harder question that depends on the particular context. If somebody is feeding me a video identified as AI, I now know that it's generated and it's not real. If it's being used, for example, in a political campaign --
>> Would it make a difference if it is owned by, let's say, President Biden or President Trump? Don't you want to know, in evaluating the content, who owns the robot, and who prompted it to give you this information? >> I would probably want to know that, but I don't know that I would need to be required to know who owns the program that's doing it. The only qualification, and you would know better than me: there are certain areas, in political speech, where one has to decide whether you want people to be able to act with anonymity. The Federalist Papers were first published under a pseudonym, and I think in
the world today, I'd rather have everybody know who's speaking. >> How do you feel about breakfast food? >> I'm pro breakfast food. I agree with Mr. Smith that there are circumstances where you would want to preserve anonymous speech, and in some you would absolutely want to know. >> I don't want to go over. Obviously this is an important subject, and the extent to which, let me rephrase that, the extent of most senators' knowledge of the nuances of AI: the general impression is that AI has extraordinary potential to make our lives better, if it doesn't make our lives worse first, and
that's about the extent of it. In my judgment, we are not nearly ready to pass a bill that looks like somebody designed it on purpose. We are more likely to take baby steps. I ask these questions predictably because I have a bill. It is very simple. It says if you own a robot that is going to spit out artificial content to consumers, they have the right to know that it was generated by a robot. But again, I want to thank my colleagues. Thank you all for coming. >> On behalf of the chairman, we are going to start a second round, and I guess I will go
first. We were talking about kids and their information. Let me give you an opportunity, in the best possible way. Thirteen: that is such a young age. I've got three kids at home, ten, eight, and two. I don't want my kids interacting with a chatbot anytime soon, and 13 is young. Would you commit to raising that age, and to a verifiable age verification procedure, so that parents can have a sense of confidence that their 12-year-old isn't just saying "yes, I'm 13," or "I'm 15, go ahead,
let's get into a back and forth with this robot"? Will you commit to those things on child safety today? >> As you can imagine, my colleagues have one principle they want me to remember: don't go out on something like this without talking to them first. >> But you're the boss. >> Most of the mistakes I've made, I made by myself. I'm happy to go back and talk more about what the right age should be. >> Don't you think 13 is awfully low to interact with a robot that could be telling you to do any number of things? >> Not necessarily. Let me describe a scenario. When I was in Korea a couple of months ago, we met with the deputy prime minister, and they are trying to create, for three topics that are very objective,
math, coding, and learning English, a digital textbook with an AI tutor, so that if you are doing math and you don't understand a concept, you can ask the AI tutor to help you solve the problem. And it's useful for the kids and the parents. Take, say, eighth grade algebra: I tried to help my kids with their homework, and they didn't believe I ever made it through the class. I think we want kids, in a controlled way with safeguards, to be able to use something like that. >> I'm talking about your AI chatbot. Famously, earlier this year, a technology writer for "The New York Times" wrote about this, and I'm looking at the article right now: the chatbot urging this person to break up his marriage.
Do we want 13-year-olds to be having those conversations? >> Of course not. >> Will you commit to raising -- >> We are not going to make that decision based on one exception. We have multiple tools; age is one red line. >> It is a very red line. >> And my point is that there is a safety architecture we can apply. >> That was an adult. Can you imagine the kinds of things your chatbot might say to a 13-year-old? I'm serious about this. Do you
think it is a good idea? >> Let's put this in context. At the point that technology had been rolled out to only 20,000 people, a journalist for "The New York Times" spent two hours ignoring his wife and interacting with a computer, trying to break the system, which he managed to do. We didn't envision that use, and the next day we had fixed it. >> Are you telling me that you envisioned the questions a 13-year-old would ask, and that we should be fine with that? That I should trust you the way "The New York Times" writer did? >> What I'm saying is that, as we go forward, we have an increasing capability to learn from the experience of people. >> That's what worries me. That is exactly what worries me. Do we have to have some failures
and then learn from the failures? This is what happened with social media. We had social media companies that made billions of dollars giving us a mental health crisis in this country. They got rich, the kids got depressed, committed suicide. Why would we want to run that experiment again? >> Let's also recognize that technology does require real users. What's different about this, what is fundamentally different from the social media experience, is that we not only have the capacity, we have the will, and we are applying that will to fix things in hours and days.
>> "We are going to do well with this." I'm just asking why we should believe that. >> I'm hoping we will work every day to earn it. That's why there is a licensing obligation in your framework. >> What I am asking you, as the president of the company, is to make a commitment on child safety and protection: to say, Microsoft, you can tell every parent in America that Microsoft is going to protect your kids; we will never use your kids as a science experiment; and therefore we are not going to target your kids, or allow your kids to be used by our chatbots as a source of information, if they are younger than 18. >> But I think you're talking about, with all due respect, two things.
We don't want to use kids as a source of information, to monetize, et cetera. But I am of the view that I don't want to cut off an eighth grader from the ability to use a tool that would help them with algebra or math in a way they couldn't a year ago. >> With all due respect, it wasn't algebra or math it was recommending when it was trying to break up a reporter's marriage. >> But now we are mixing things. >> We are talking about your chatbot. >> Of course. And I'm talking about the protection of children, and about how we make technology better. Yes, there was that episode back in February. Six months later, if that journalist tries to do the same thing again, it will not happen. >> Do you want me to be done, Senator?
>> Senator Klobuchar. >> Some of us haven't voted yet. I wanted to turn to you. In March, NVIDIA announced a partnership with Getty Images to develop models that generate new images using Getty's image library. Importantly, this provides royalties to content creators. Why is it important to partner with, and pay for, the content in these generative models? >> We believe in protecting people's intellectual property rights, and the photographers who produced the images the models are trained on are expecting income from those images. We didn't want to infringe, so we didn't scrape a bunch of images off the web; we partnered with Getty and trained our model on their library, and when people use Picasso to generate images, the people who provided the original content
receive royalties. We see this as the way of going forward: the people who provide the IP should benefit from the use of it. >> Today, the White House announced eight more companies committing to take steps toward safe, secure, and transparent development of AI, and NVIDIA is one of those. Can you talk about the steps you've taken, and the steps you plan to take, to foster that development? >> We've done a lot already. We've implemented NeMo Guardrails, to put guardrails around our own models, so that inappropriate prompts to a model don't get a response, and so that if a model were to generate something inappropriate, it is intercepted before it can reach the user.
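[Editor's illustration.] The guardrails Mr. Dally describes act on both sides of the model: an input rail that refuses inappropriate prompts, and an output rail that intercepts an inappropriate generation before it reaches the user. A toy sketch follows; the blocklists and the model stub are stand-ins, and NVIDIA's open-source NeMo Guardrails is far more sophisticated than keyword matching.

```python
# Two rails around a model: check the prompt going in and the text coming out.
BLOCKED_TOPICS = ("weapon", "malware")

def input_rail_ok(prompt: str) -> bool:
    return not any(topic in prompt.lower() for topic in BLOCKED_TOPICS)

def output_rail_ok(generation: str) -> bool:
    return not any(topic in generation.lower() for topic in BLOCKED_TOPICS)

def generate(prompt: str) -> str:
    return f"[model output for: {prompt}]"  # stand-in for the real model call

def guarded_generate(prompt: str) -> str:
    if not input_rail_ok(prompt):
        return "I can't help with that."  # prompt never reaches the model
    generation = generate(prompt)
    if not output_rail_ok(generation):
        return "I can't help with that."  # intercepted before the user sees it
    return generation

print(guarded_generate("explain how vaccines work"))
print(guarded_generate("help me write malware"))
```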
We have a set of guidance that we give on how models should be appropriately used, and we provide model cards that say where a model came from and what it was trained on. Then we test models very thoroughly, and depending on the use of a model, we test it for bias; we want to make sure that when it refers to a doctor, it doesn't assume it's a him. In certain cases we test for safety; we have a variant that's used in the medical profession, and we want to make sure the advice it gives is safe. And there are a number of other measures; I can give you the list if you want.
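[Editor's illustration.] The model cards Mr. Dally mentions are structured metadata shipped alongside a model: where it came from, what it was trained on, and what it was tested for. A hedged sketch, with entirely illustrative fields and values, might look like this:

```python
# A model card as plain structured data, which a deployer can check
# programmatically before putting a model into a given use.
model_card = {
    "name": "example-image-model",
    "version": "1.2.0",
    "training_data": ["licensed stock-photo library", "public-domain archives"],
    "intended_use": "image generation for creative tooling",
    "out_of_scope_use": ["biometric identification", "medical diagnosis"],
    "evaluations": {
        "bias_probe": "occupation prompts checked for gender skew",
        "safety_probe": "medical-advice variant reviewed by clinicians",
    },
}

def check_use(card: dict, proposed_use: str) -> bool:
    """Gate a deployment against the card's documented limits."""
    return proposed_use not in card["out_of_scope_use"]

print(check_use(model_card, "medical diagnosis"))  # False: documented as out of scope
```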
>> Thank you. Do you think Congress should be more focused on regulating the input and design of generative AI, or focus more on outputs and capabilities? >> Both. Certainly, I think the area that has been ignored up to this point has been the design of, and the inputs to, a lot of these tools, so to the extent that that area could use some revitalization, I would encourage it. But: inputs and outputs. >> And I suggest you look at these election bills, because, as we've all been talking about, I think we have to move quickly on those, and the fact that it's bipartisan has been a very positive thing. I want to thank Mr. Smith for wearing a tie. I know that maybe it was an AI-generated message that told you this would be a smart move with me after their loss on Sunday, and I would remind you that they are playing Thursday night. >> I can assure you that it was an accident. >> Very good. Thank you. We have a lot of work to do. >> Thank you, Mr. Chairman. Mr. Smith, I want to come to you
first and talk about china and the chinese communist party, the ways they have gone about. we've seen a lot of it they have these influence campaigns that they are running to influence certain thought processes with the american people. i know you all just did a report on china. you covered some of the disinformation. so, talk to me a little bit about how microsoft within the industry as a whole can combat some of these campaigns. >> i think there's a couple of things we can think more about and do more about. first we should want to ensure that our own products and systems and services are not used by foreign governments in thisis matter. and i think there's room for the
i think there is also room for a concept that has worked since the 1990s in the world of banking and financial services: the know your customer requirements. we have been advocates for those, so that if there is abuse of systems, the company that is offering the service knows who is doing it and is in a better position to stop it from happening.
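the banking-style know your customer idea smith describes can be pictured as a gate in front of service provisioning. a minimal sketch in which every registry and name is hypothetical; real kyc is a legal and compliance process, not a lookup table:

```python
from dataclasses import dataclass

# a minimal sketch of a know-your-customer gate for an ai service; the
# registries below are hypothetical stand-ins for real identity
# verification and sanctions-screening processes.

SANCTIONED_ENTITIES = {"blocked-entity-co"}        # illustrative
VERIFIED_CUSTOMERS = {"acme-bank": "netherlands"}  # id -> vetted country

@dataclass
class AccessRequest:
    customer_id: str
    declared_use: str

def approve(request: AccessRequest) -> bool:
    """Grant service access only to verified, non-sanctioned customers."""
    if request.customer_id in SANCTIONED_ENTITIES:
        return False
    if request.customer_id not in VERIFIED_CUSTOMERS:
        return False  # unknown customer: identity must be verified first
    # in a real system the declared use would be reviewed and audited,
    # so abuse can be traced back to a known customer and stopped.
    return True
```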
i think the other side of the coin is using ai to advance defensive technologies, starting with the ability to detect what is going on; we have been investing heavily in that space. that is what enabled us to produce the report we published, and what enables us to see the patterns in communications around the world. and we are seeking to be a voice, with many others, that calls on governments to, i will say, lift themselves to a higher standard, so that they are not using this kind of technology to interfere in other countries, and especially in other countries' elections. >> in that report you did looking at china, did you look at what i call the other members of the axis of evil: russia, iran, north korea? >> we did, and the specific report you are referring to covered east asia. we see especially prolific activity, some from china, some from iran, and really the most global actor in this space is russia, and we have seen that grow during the war. we have seen it really spiral in recent years, going back to the middle of the last decade. we estimate that the russian government is spending more than a billion dollars a year on what we call global cyber influence operations.
part of it targets the united states. i think the fundamental goal is to undermine confidence in everything the public cares about in the united states, but we see it in the south pacific and across africa, and i think it is a problem that we need to do more to counter. >> so you would see something like a know your customer requirement, or a swift-style system like the one in banking, to help weed out bad actors: companies increasing their due diligence to make certain that their systems are used appropriately, and being careful about doing business with countries that may misuse a certain technology? >> generally, yes. one can look at the specific scenarios, but beyond the know your customer requirement we have also talked about
know your cloud, so you know where the systems are deployed, in which data centers. >> let me come to you, professor. i think one of the things as we look at ai's detrimental impacts is that we shouldn't only look at the doomsday scenarios; we are also looking at some of the reports on surveillance, with china surveilling the uyghurs and with iran surveilling women, and i think there are other countries doing the same type of surveillance. so what can you do to prevent that? how do we prevent that? >> senator, i've argued in the past that facial recognition technologies and certain other sources of biometric surveillance are fundamentally dangerous, that there is no world in which they would be safe for any of us, and
that we should prohibit them outright. the prohibition of biometric surveillance in public spaces and the prohibition of emotion recognition are what i refer to as strong bright-line measures: absolute lines in the sand, rather than the procedural ones that i think have ultimately been entrenching this kind of harmful surveillance. >> mr. chairman, can i take another 30 seconds? mr. dally was shaking his head in agreement on some of the things i was asking about. do you want to weigh in before i close my questioning on either of these topics? >> i was in general agreement, i guess, shaking my head. i think we need to be careful who we sell to, that people are using it for good purposes and not to
suppress others, and we will continue that, because we don't want to see the technology misused to oppress anybody. >> thank you, senator blackburn. my colleague mentioned that we have an ai forum tomorrow, which i welcome. i think anything to aid in our education and enlightenment is a good thing, and i just want to express the hope that some of the folks who are appearing in that venue will also cooperate and appear before this subcommittee; we will certainly be inviting more than a few of them. and i want to express my thanks to all of you for being here, and especially mr. smith, who also has to be here tomorrow to talk to my colleagues privately.
our effort is complementary, not contradictory, to what senator schumer is doing, as you know. i'm very focused on election interference, because elections are upon us, and i want to thank my colleagues senators klobuchar, coons and collins for taking a first step toward addressing the harms that may result from deepfake impersonation and all of the potential perils we've identified here. it seems to me that authenticating the truth, that is, marking true images and voices, is one approach, and banning the deepfakes and impersonations is another approach, and obviously banning
anything in the public realm and public discourse risks running afoul of the first amendment, which is why disclosure is often the remedy that we seek, especially with campaign finance. so maybe i should ask all of you whether you see banning certain kinds of election interference as feasible; you have raised the specter of foreign interference, and of fraud and scams that could be perpetrated, as they were in 2016. i think it is one of those nightmares that should keep us up at night, because we are an open society and we welcome free expression, and ai is a form
of expression, whether we regard it as free or not, and whether it's generated and high-risk or simply touching up some of the background in a tv ad. maybe each of you can talk a little bit about what you see as the potential remedy. >> it is a concern that the american public may be misled by deepfakes of various kinds. i think, as you mentioned, the use of provenance to authenticate a true image or voice at its source, and then tracking it to its deployment, will let us know what a real image is; and if we insist that ai-generated content be identified as such, people are at least tipped off that this is generated and not the real thing.
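the provenance-plus-labeling approach dally outlines, authenticating content at the source and carrying the label with it, can be sketched with a signed manifest. the sketch below uses an hmac shared key for brevity; real provenance standards (c2pa-style content credentials, for example) use certificate-based signatures:

```python
import hashlib, hmac, json

# a minimal sketch of content provenance: the source attaches a signed
# manifest saying how the content was made; anyone with the verification
# key can later check the label. real systems use public-key certificates,
# not the shared secret used here for brevity.

SECRET_KEY = b"demo-signing-key"  # hypothetical; never hardcode real keys

def make_manifest(content: bytes, ai_generated: bool) -> dict:
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": ai_generated,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["tag"] = hmac.new(SECRET_KEY, payload, "sha256").hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    claim = {k: v for k, v in manifest.items() if k != "tag"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(expected, manifest["tag"])
            and claim["sha256"] == hashlib.sha256(content).hexdigest())

image = b"...image bytes..."
m = make_manifest(image, ai_generated=True)
assert verify(image, m)             # intact: the label can be trusted
assert not verify(image + b"x", m)  # tampered content fails verification
```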
i worry about having some foreign entity interfere, but at the same time ai-generated content is speech, and i think it would be a dangerous precedent to try to ban it. i think it's much better to have disclosure, as you suggested, than to ban something outright. >> number one, 2024 is a critical year for elections, and not only in this country. it's not only the united states; it's the united kingdom, india, across the european union. more than 2 billion people will vote for who is going to represent them, so this is a global issue for the world's democracies. number two, you are right to focus in particular on the first amendment, because it is a critical cornerstone for the rights that we all enjoy. and i would also be quick to add:
i don't think the russian government qualifies for protection under the first amendment, and if they are seeking to interfere in our elections, then i think the country needs to take a strong stand, and a lot of thought needs to be given to how to do that effectively. but number three, and i think this goes to the heart of your question and why it is a good one: i think it is going to require some thought and discussion for an ultimate consensus to emerge. let me frame it around one specific scenario. let's imagine for a moment that there is a video of a presidential candidate who originally was giving a speech, and then let's imagine that someone uses ai to put different words into the mouth of that candidate, and uses ai technology to perfect it to a level where it's difficult for people to recognize as fraudulent. then you get to this question of what we should do.
we've been trying to think this through, and i think we have two broad alternatives. one is we take it down; the other is we relabel it. the first makes me nervous: that isn't really our role, to act as censors, and the government really cannot do it under the first amendment. but relabeling it to ensure accuracy, that is probably a reasonable path. what this highlights is the discussion still to be had, and i think the urgency for that conversation to take place. >> i will say, and then i want to come to you, professor, that i agree emphatically with your point about the russian government, or the chinese government, or the saudi government, as potential interferers. they are not entitled to the
protection of our bill of rights when they are seeking to destroy those rights and purposefully trying to take advantage of our free and open society to, in effect, decimate our freedoms. so i think there's a distinction to be made there in terms of national security, and i think that rubric of national security, which is part of our framework, applies with great force in this area. and that is different from a presidential candidate putting up an ad that in effect puts words in the mouth of another candidate. as you know, we began a hearing with introductory remarks that were an impersonation, taken from my
comments on the floor, taking my voice from speeches i made on the floor of the united states senate, with content generated by chatgpt that sounded exactly like something i would say, in a voice that was indistinguishable from mine. obviously i disclosed that at the hearing, but in real time, as mark twain famously said, a lie travels halfway around the world before the truth gets out of bed, and we need to make sure that there is action in real time if you're going to do the kind of identification that you suggested. real time meaning real time in a campaign, which is measured in minutes and hours, not in days and months. >> thank you, senator. like you, i am nervous about
coming out and saying we are going to ban all forms of speech, particularly something as important as political speech, and like you, i also worry about disclosure alone as a half measure. earlier in this hearing it was asked what a half measure is, and i think that goes toward answering your question today. i think the best way to think about a half measure is an approach that is necessary but not sufficient: one that risks giving us the illusion that we've done enough but ultimately, and i think this is the point, doesn't really disrupt the business model and the financial incentives that have gotten us here in the first place. so to help answer your question, one thing i would recommend is bringing lots of different tools, which i applaud your bipartisan framework for doing, lots of different tools to bear on this problem, and thinking about the role that surveillance advertising plays in powering a lot of these
harmful technologies, the ecosystems that allow these systems to be created, to flourish and to be amplified. so i would think about rules and safeguards to limit those financial incentives, borrowing from standard principles of accountability: use disclosure where it's effective; where it's not effective, you have to make it safe; and if you can't make it safe, it shouldn't exist. >> i'm going to turn to senator hawley for more questions, but i think this is a real conundrum. we need to do something about it. we need more than half measures. we can't sit back with a false sense of comfort that we've solved the problem if we don't provide effective enforcement. and to be very blunt, the federal election commission often has
been less than fully effective, a lot less than fully effective, in enforcing rules relating to campaigns. so there again, oversight with strong enforcement authority, sufficient resources and the will to act is going to be very important if we are going to address this problem in real time. .. >> is it a good thing that workers lose their jobs to ai, whether it's at wendy's or walmart or the local hardware store? you pointed out that your
comment was that there was really no creativity involved in taking orders at the drive-thru. but that is a job, oftentimes the first job, for younger americans, and in this economy, where the wages of blue-collar workers have been flat for 30 or 40 years, what worries me is that oftentimes what we hear from the tech sector, to be honest with you, is that these jobs don't have creativity and don't have value. i'm frankly scared to death that ai will replace lots of jobs that the tech types think aren't creative, and that it will leave more blue-collar workers with no place to turn. my question to you is: are we going to see more of this, and is it really progress for folks to lose those kinds of jobs? i suspect it's not the best-paying job in the world, but at least it's a job. do we want to see more of these jobs lost? >> to be clear, i didn't say it was a good or bad thing; i was asked to predict which jobs would be
impacted, and identified that job as one that might be. but let's step back, because i think your question is critically important. let's first reflect on the fact that we have had 200 years of automation that have impacted jobs, sometimes for the better and sometimes for the worse. in wisconsin, where i grew up, and missouri, where my father grew up, if you go back 150 years it took 20 people to harvest an acre of wheat, and now it takes one, so 19 people don't work on that acre anymore. that's been an ongoing part of technology. the real question is this: how do we ensure technology advances so that we help people get better jobs and get the skills they need for those jobs, and hopefully do it in a way that broadens economic opportunity rather than narrows it?
i think the thing we should be most concerned by is this: since the 1990s, and i think this is the point you're making, if you look at the flow of digital technology, fundamentally we have flipped into a world that is widely divided economically. those people with a college or graduate education have seen their incomes rise in real terms, and those people with a high school diploma or less have seen their income level actually drop compared to where it was in the 1990s. so what do we do now? let me at least posit what i think our goal should be: can we use this technology to help advance productivity for a much broader range of people, including people who didn't have the good fortune to go, as you or i did, to college or law school? can we do it in a way that not only makes them more productive but lets them
reap some of the dividends for themselves in a growing income level? i think it's that conversation that we need to have. >> i agree with you, and i hope that is what ai can do. you talked about the farm, and how it used to take 20 people to do what one person can now do, and it used to take thousands of people to produce textiles and furniture where now it's zero. so we can tell the tale in different ways. i'm not sure seeing working-class jobs go overseas or be replaced entirely is a success; i would argue not at all. i would argue our economic policy in the last 30 years has been downright disastrous for working people. tech companies and financial institutions and banks and wall street have made huge profits while blue-collar workers can barely find a good-paying job. i don't want ai to be the latest accelerant of that trend.
i don't really want ai running every service station in america so that nobody can get their foot in the door and start the climb up the ladder. but let me ask you about something else you mentioned: national security. critically important. there is no national security threat more significant to the united states than china, so let me just ask you: is microsoft too entwined with china? you have had microsoft research set up in beijing since the late 1990s, and centers in shanghai and elsewhere. you have all kinds of cooperation with chinese state-owned businesses. i'm looking at an article here from protocol magazine where one source said microsoft had been the alma mater of chinese big tech. are you concerned about your degree of entanglement with
the chinese government, and do you need to be decoupling to make sure our national security is not compromised? >> i think it's something we need to focus on. to some degree, in some technology fields, microsoft is the alma mater of technologists in every country of the world, because of our role over the last 40 years. but when it comes to china today, we are and need to be very specific in our controls on who uses our technology, and for what, and how. that's why there is work we don't do there, whether on quantum computing or synthetic media or a whole variety of things. at the same time, starbucks has stores in china, and i think it is good that they can run their services in our data center rather than a chinese company's. >> just on facial recognition: back in 2016 your company released a database with
10 million faces in it, without the consent of the folks who were in the database. you took it down eventually, although it took three years, and chinese firms used that database to train much of their facial recognition software and technology. you said microsoft might be the alma mater of many companies in ai, but china is unique. china is running concentration camps using digital technology like we have never seen before. isn't that a problem, for your company to be in any way involved in that? >> we don't want to be involved in any way in that, and i don't think we are. >> are you going to close your center in shanghai? >> i don't think that would accomplish anything. >> you are running thousands of people back out into the chinese government and state-owned enterprises. isn't that a problem? >> there's a big premise there, and i don't embrace the premise that
that is what we are doing, the notion that we are running thousands of people through and on to the chinese government. >> i thought you had 10,000 employees in china, recruited from chinese state-owned agencies and chinese state-owned businesses. they work for you and then they go back. >> we have employees in china, and in fact, to my knowledge, that is not where they are coming from and that is not where they are going. we are not running a revolving door, and it's all about what we do and who we do it with. we think that is of paramount importance, and that's what we are focused on. >> do you condemn -- >> we do everything we can to ensure our technology is not used in any way for that kind of activity, in china and, by the way, around the world.
>> but do you condemn it? >> yes. >> what are the safeguards you have in place so your technology is not further enabling the chinese government? >> take something like facial recognition, which is part of your question. we have tight controls that limit the use of facial recognition in china, including controls that make it difficult if not impossible to use for any kind of real-time surveillance at all. and by the way, a thing we should remember: the u.s. is the leader in many ai fields, but china is a leader in facial recognition, and that traces back in part to the information they have been able to acquire. >> it's because they have the most data, and you gave it to them. you don't think that had anything to do with it? >> when you have a country of 1.4 billion people and you
have facial recognition used in so many places, it gives that country massive data. >> are you saying that the database microsoft released in 2016, are you saying that wasn't used by the chinese government to train their facial recognition? >> my goodness, the advances in facial recognition technology: if you go to another country where they are using facial recognition technology, it is highly unlikely it's american technology. it's highly likely that it's chinese technology, because they are such leaders in that field, which i think is fine; if you want to pick the fields where the united states should be the technology leader, i would not put facial recognition technology on that list. let's recognize theirs is homegrown. >> how much money has microsoft invested in ai development in china? >> i don't know, but i will tell
you this: the revenue we make in china, a country which accounts for one out of every six humans on this planet, is 1.5% of our global revenue. it's not the market for us that it is for other industries or other tech companies. >> it sounds like you can afford to decouple. >> but is that the right thing to do? >> yes, from a regime that's fundamentally inflicting atrocities on its own citizens, that, as you just alluded to, is doing that to the uyghurs, running modern-day concentration camps. >> there are two questions that are worth asking. number one, general motors wants to sell cars in china. do you want to create jobs for people in michigan or missouri so those cars can be sold in china? if the answer to that is yes, think about the second question: how do you put general motors
in china to run its operations, and where do you store its data? would you like it to be in a secure data center run by an american company, or would you like it to be run by a chinese company? which will better protect general motors' trade secrets? we should be there so that we can protect the data of american companies, european companies, japanese companies. we can disagree on everything else, but i believe that serves the country well. >> you are doing more than protecting data in china. you have major research centers, tens of thousands of employees. and to your question: do i want general motors to be building cars in china? no, i don't. i want them to make cars here in the united states with american workers. and i don't want american companies to be aiding in any way the chinese government with
their tactics. >> senator ossoff, would you like me to yield to you now? i've been very hesitant to interrupt a discussion and a conversation that is very interesting, but i want to call on senator ossoff. >> thank you, mr. chairman, and thank you all for your testimony. getting down to fundamentals, mr. smith: if we are going to move forward with a legislative framework, a regulatory framework, we have to define in legislative text precisely what it is that we are regulating. what is the scope of regulated activities, technologies and products? how should we consider that question, and how do we define the scope of technologies, the scope of services, the scope of products that should be subject to a regime of regulation that
is focused on ai? >> there are three layers of the technology stack on which we need to focus in defining the scope of legislation. first is the area that has been a central focus of 2023, in the executive branch and here on capitol hill: the so-called frontier foundation models that are the most powerful, like generative ai. in addition there are the applications of ai, or, as senators blumenthal and hawley have said, the deployers of ai. when an application calls on such a model in what we consider to be a high-risk scenario, meaning it could make a decision that would have an impact on, say, privacy rights, civil liberties, or the rights and needs of children, then i think we need to think hard about whether broad regulation is effective to protect americans. the third layer is the data center
infrastructure where these models and applications are deployed. we should ensure those data centers are secure, that there are cybersecurity requirements that the companies meet. and we should ensure there are safety systems, at one, two or all three of those levels, if there is an ai system that will automate and control, say, something like critical infrastructure such as the electrical grid. those are the areas where i would start: start there with clear thinking and allow the effort to learn and develop the detail.
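smith's three layers, the frontier models, the applications that deploy them, and the data centers underneath, can be pictured as a small lookup from layer to example obligations. a minimal sketch; the layer names and obligation strings are paraphrases of his testimony, not statutory text:

```python
# a sketch of the three regulatory layers smith describes, with example
# obligations paraphrased from his testimony; purely illustrative.

REGULATORY_LAYERS = {
    "frontier_model": [
        "licensing / safety review for the most powerful models",
    ],
    "application": [
        "extra scrutiny in high-risk scenarios "
        "(privacy, civil liberties, children)",
    ],
    "datacenter_infrastructure": [
        "cybersecurity requirements",
        "safety systems when ai controls critical infrastructure "
        "(e.g., the electrical grid)",
    ],
}

def obligations_for(layer: str) -> list[str]:
    """Look up the example obligations attached to one layer of the stack."""
    return REGULATORY_LAYERS.get(layer, [])
```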
>> as more and more models are trained and developed to higher levels of power and capability, there may be a proliferation of models, perhaps not frontier models, perhaps not those at the leading edge, but still powerful enough to have serious implications. so is it which models are the most powerful at a moment in time, or is there a threshold of capability or power that should define the scope of regulated technology? >> i think you pose a critical question that, frankly, a lot of people inside and across the government and academia are working on. the technology is evolving, and the conversation will have to evolve with it. let's just posit there's something like openai's gpt; i can name 10,000 things it does really well. it's expensive to create and relatively easy to regulate in the scheme of things, because there are one, two or ten of them. but now let's go to where you were going,
which is right: what does the future bring in terms of proliferation? imagine there's an academic at, say, professor hartzog's university who says, i want to create an open-source model. it won't require as many gpus, and it won't require as much data. and let's imagine that it could be used to create the next virus that could spread. then you'd say we really need to ensure that there is safety architecture and there are controls around that as well. that's the conundrum, and that's why this is a hard problem to solve, and it's why we are trying to build safety architecture into our data centers, so open-source models can still be used in ways that will prohibit that harm from taking place.
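one commonly discussed proxy for ossoff's threshold question is total training compute. a minimal sketch, using the widely cited approximation that dense transformer training costs roughly 6 x parameters x training tokens floating-point operations; the threshold constant here is purely hypothetical:

```python
# a minimal sketch of a compute-based licensing trigger, using the common
# approximation that dense transformer training takes ~6 * parameters *
# training tokens flops. the threshold is hypothetical, chosen only to
# illustrate the mechanism being probed, not any actual rule.

LICENSE_THRESHOLD_FLOPS = 1e26  # hypothetical trigger, not law

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough total training compute for a dense transformer."""
    return 6.0 * n_params * n_tokens

def needs_license(n_params: float, n_tokens: float) -> bool:
    return training_flops(n_params, n_tokens) >= LICENSE_THRESHOLD_FLOPS

# a 70e9-parameter model trained on 2e12 tokens is ~8.4e23 flops, well
# under this illustrative line, while a 2e12-parameter model trained on
# 1e14 tokens (~1.2e27 flops) would cross it.
print(needs_license(70e9, 2e12))   # False
print(needs_license(2e12, 1e14))   # True
```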
as you think about a licensing regime, this is one of the hard questions: who do you license? you don't want it to be so hard that only a small number of big companies can get a license, but you also need to make sure you aren't requiring a license of people who don't need one for what they are doing. the beauty of the framework, in my view, is that it starts to frame that issue and starts to define it. >> is it a license to train a model to a certain level of capability? is it a license to sell access to the model? or is it a license to purchase or deploy that model? and who is the licensing entity? >> those are questions that may have different answers in different scenarios. mostly i would say there may well be obligations to disclose to an independent authority when a training run starts, so that an oversight body knows, not unlike what
may happen when a company is building a commercial airplane. the good news is there's an emerging foundation of best practices: how a model should be trained, what kind of testing there should be, what harms should be addressed. it's a big topic. >> forgive me. when you say a license to deploy, does that mean, for example, that if the microsoft office product wishes to use a gpt model for some purpose within your suite, you would need a license to deploy gpt in that way? or do you mean gpt would require a license to be offered to microsoft? putting aside whether or not it's a plausible scenario, there's a question of the licensing arrangement. >> right. imagine, think about boeing: boeing builds a new plane, and before it can sell it to united airlines and united airlines can fly
it, the faa will certify it. now imagine we are at gpt-12, or whatever you want to call it. before that gets released for use, i think you can imagine a licensing regime where you would say it needs to be licensed after it has been deemed safe. then you ask yourself how to make that work so we don't have the government slowing everything down, and i would say you bring together three things. first, you need industry standards, so you have a common foundation for how the training should take place. second, you need national regulation. and third, if we are going to have a global economy in which we want these things to work across countries, you probably need a level of international coordination. and i would say look at the world of civil aviation: that's fundamentally how it has worked since the 1940s, so let's try to learn from it and see how we might apply something
like that, or other models, here. >> mr. dally, how would you reply to that question in a field where the technical capability is accelerating at a rapid rate, future rate unknown? where, and according to what standard or metric or definition of power, do we draw the line between what requires a license for deployment and what can be deployed without oversight by the government? >> it's a tough question, because you have to balance two important considerations. the first is the risk presented by a model, whatever its power. on the other side is the fact that we would like to ensure that the west stays ahead in this field, and we want to make sure individual academics and entrepreneurs with a good idea
can move forward and innovate and deploy models without huge barriers. so it is the capability of the model, and the risk presented, that should drive oversight. >> the thing is, we will have to write legislation, and the legislation is going to, in words, define the scope of regulated products. we will have to bound that which is subject to a licensing arrangement, or wherever it lands, and that which is not. >> it's dependent on the application, because if you have a model which is basically determining a medical procedure, there is a higher risk for that application than for another model which is controlling the temperature in your building; if that one gets it a little bit wrong, you may use a little too much power, or you aren't as comfortable as you might be, but
it's not a life-threatening situation. you need to regulate the things that are of high consequence. >> i'm mindful of the time. >> you had to wait, so we will give you a couple more. >> professor, and i'd be curious to hear from the others with respect to the chairman's follow-up: how does any of this work without international law? isn't it correct that a model, potentially a very powerful and dangerous model, for example one whose purpose is to unlock chemical, biological or radiological capabilities, or destructive virological capabilities, for a relatively sophisticated actor, once trained is relatively easy to transport? and without an international legal system, and a
level of surveillance that seems inconceivable as to the flow of data across the internet, how can that be controlled successfully? >> it's a great question, senator, and with respect, i would simply say there are going to be limits. even assuming we do need cooperation, and i would agree with you on that, we could start by thinking about ways in which, for example with the eu, which has already deployed significant ai regulations, we might design frameworks that are compatible with theirs. ultimately, what i worry about is deploying a level of surveillance, in an attempt to perfectly capture the entire chain, that is simply not possible. >> i share that concern about privacy, which is why i raise the point: how can we know what folks
are loading, a model trained and then brought down to perhaps a device that isn't even online? >> there are limits. >> do either of you want to take a stab? >> i would say there's a need for international coordination, and it's more likely to come from like-minded governments, rather than global bodies, in the initial years. i do think there's a lot we can learn. we were talking with senator blackburn about financial transactions, and somehow we have managed, globally and in the united states, for 30 years with know your customer obligations for banks. money is moved around the world; nothing is perfect, and that's why we have laws, but it has done a lot of good to protect against criminal uses of money that would cause concern. >> i think you are right in that
these models are very portable. you can put the parameters of both of these models onto a large usb drive and carry it with you somewhere, and you could also train them anywhere in the world. i think it's really the use of the model, and its deployment, that you can effectively regulate. it will be hard to regulate creation, because if people can't create these models here, they will create them somewhere else, so we have to be careful: we want the best people creating these models in the u.s., not going somewhere else where the regulatory climate is easier.
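dally's usb-drive point is easy to check with arithmetic: a checkpoint's size is just parameter count times bytes per parameter. a quick back-of-the-envelope with illustrative model sizes, none of them any specific product:

```python
# back-of-the-envelope check of dally's point that model parameters fit
# on a large usb drive: storage is parameter count times bytes per
# parameter. model sizes below are illustrative only.

def weights_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate checkpoint size in gigabytes (fp16 = 2 bytes/param)."""
    return n_params * bytes_per_param / 1e9

for n in (7e9, 70e9, 175e9):
    print(f"{n/1e9:>5.0f}B params: {weights_gb(n):7.0f} GB at fp16, "
          f"{weights_gb(n, 1):6.0f} GB at int8")
# even a 175b-parameter model is ~350 gb at fp16: large, but it fits on
# a commodity external drive and can be carried anywhere.
```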
>> thank you, senator ossoff. i hope you are okay with a few more questions. >> do we have a choice? [laughter] >> no. [laughter] >> thank you very much. it's been very useful. i want to follow up on a number of questions, first of all on the international issue. there are examples and models for international cooperation, and you mentioned civil aviation. the 737 max, i think i'm right, when it crashed was a plane that had to be redone, and airlines around the world looked to the united states for that redesign and then its approval. civil aviation, atomic energy: not always completely effective, but it has worked in many respects, and there are international models where the united states is in the lead.
frankly, in this instance the eu is ahead of us in many respects, as it was regarding social media, where we are following their leadership example. i want to come to this issue of having centers, whether in china or for that matter elsewhere in the world, and requiring safeguards so we are not allowing more technology to be used in china against the uyghurs, and preventing that technology from being stolen or serving bad purposes. are you satisfied, mr. smith, that in fact we are doing that in china and preventing the evils that could result from doing business there in that way?
>> i would say two things. first, i feel good about our track record and our vigilance, and about the constant need for us to be vigilant about what services we offer, to whom, and how they are used. and i would take from that what i think is probably the conversation we will need to have as a country about export controls more broadly. there are three fundamental areas of technology where the united states, i would argue, is today the global leader: first, the chips, from nvidia; second, the cloud infrastructure, from a company like microsoft; and third, the foundation models, from firms such as openai, where google and other companies are global leaders as well. i think if we want to feel we
are doing good in creating jobs in the united states by investing in manufacturing, which i completely endorse, and that the technology is being used properly, we probably need an export control regime that ties these three things together. for example, there might be a country in the world, and let's set aside china for a moment, another country where you all and the executive branch would say, we have some qualms, but we want u.s. technology to be present, and we want u.s. technology to be used properly, in a way that would make you feel good. you might say: let's export chips to that country, to be used in a data center by a company that we trust and that is licensed here for that use, with the model being used in a secure way in that data center, with know your customer requirements and
guardrails, and with certain kinds of use off-limits. that may well be where our government policy needs to go, and how the tech sector needs to support the government and work with the government to make it a reality. >> i think that answer is very insightful, and it raises other questions. i would analogize the situation to nuclear proliferation. we cooperate over safety in some respects with other countries, some of them adversaries, but we still do everything in our power to prevent american companies from helping china or russia in their nuclear programs. part of that nonproliferation effort is sanctions, and we have limits and rules around selling and sharing certain choke-point
technology related to nuclear enrichment, as well as biological warfare, surveillance and other national security risks, and our framework in fact envisions sanctions and safeguards precisely in those areas, for exactly the reasons we have been discussing here. last october the biden administration used existing legal authority as a first step in blocking the sale to china of some high-performance chips, and of equipment to make those chips, and our framework envisions export controls and sanctions as well. i guess the question, one we will be discussing but won't resolve today, regrettably, though we appreciate your input going forward, and i'm inviting anyone in the listening audience, here in
the room or elsewhere, to participate in this conversation on this issue and others, is this: how can we draw a line on the hardware and technology that american companies are allowed to provide anyone else in the world, whether adversaries or friends? because, as you've observed, mr. dally, all of this is easily proliferated. >> if i could comment on this: you drew an analogy to nuclear regulation and mentioned the word choke point. the difference here is that there really isn't a choke point, and there's a careful balance to be made between limiting where our chips go and what they are used for, and disadvantaging
american companies and the whole food chain that feeds them. it's not that we are the only people who make chips that can do ai; i wish we were, but we are not. there are companies around the world: american companies, asian companies, companies in europe. if people can't get what they need to do ai from us, they will get it somewhere else. and what will happen then, if it turns out our chips aren't things they can use, is that the software and models built on the standard chips people use to do ai will become something from, pick a country, singapore. all of a sudden all the software engineers will write all the software for those chips, they will become the dominant chips, and the leadership will have shifted from the u.s. to singapore or whatever other country. we have to be careful to balance
the national security considerations and the technology considerations against preserving the u.s. lead in this technology area. >> it's a really important point, and i would offer a counterargument, or for a moment let me challenge it. sometimes you can approach this and say, look, if we don't provide it, somebody else will, so let's not worry about it. but at the end of the day, whether you are a company or a country, i think you do have to have clarity about how you want your technology to be used. and i fully recognize there may be a day in the future, after i retire from microsoft, when i look back, and i don't want to have to say we did something bad because if we
didn't, somebody else would have. i want to say: no, we had values and we had principles; we had in place guardrails and protections; we turned down sales when somebody would have used our technology to abuse other people's rights; and if we lost the business, that's the best reason in the world to lose business. and what's true for a company is true for a country. i'm not saying your view shouldn't be considered; it should, and that's why this issue is complicated: how to strike that balance. >> professor hartzog, do you have any comments? >> i think that was well said, and i would only add that it's also worth considering, in this discussion about how we safeguard these incredibly dangerous technologies and the risk if they proliferate, it's worth coming back
to the existential question again, thinking not only about how we would put guardrails on this technology but about how we lead by example, which is what you brought up and which is really important: we don't win a race to violate human rights. it's not something we want to do. >> and it isn't simply a matter of importing chips from the united states and building your own data center. most ai companies rent capability from cloud providers, and we need to make sure the cloud providers aren't used to circumvent our export controls or sanctions. mr. smith, this goes to your know your customer rules: knowing your customers would have to apply to ai cloud providers.
if you are leasing out a supercomputer, you need to make sure the customer isn't the people's liberation army, that it isn't being used to subjugate the uyghurs, and that it isn't being used to do facial recognition on dissidents or opponents. you made a critical point, which is that there is a moral imperative, and i think there is a lesson in the history of this great country, the greatest country in the world: when we lose our moral compass, we lose our way, and when we act simply on economic or political interest, sometimes it's very short-sighted, and it produces a geopolitical swamp and quicksand.
so i think these kinds of issues are very important, and we win when we lead by example. i want to make a final point, and if senator hawley has questions we will let him ask them, on this issue of jobs. i mentioned at the outset that i think we are on the cusp of a new industrial revolution. we have seen this movie before, as they say, and it didn't turn out all that well. in the industrial revolution, workers were displaced en masse in textiles; factories and mills in this country and all around the world essentially went out of business or replaced the workers with automation and machinery. i would respond that i think we need
to train those workers. we need to provide education, and it needn't be a four-year college. in my state of connecticut, the defense contractors are going to need thousands of welders, electricians, trades people of all kinds, who will have not just a job but a career, one that requires skills. frankly, i wouldn't begin to know how to do that work and don't have the aptitude to do it. i think there are tremendous opportunities here, and not just in the creative sphere that you have mentioned, where higher human talents may come into play. jobs are being created daily in this country.
as i go around my state, the most common comment i hear is: we can't find enough people to do the jobs we have right now. we can't find people to fill the openings that we have. and that, in my view, may be the biggest challenge for the american people. >> it's such an important point, and it's worth putting in context. i wholeheartedly endorse what senator hawley said: we want people to have jobs. but first let's consider the demographic context in which jobs are created. the world has just entered a shift of a kind that literally has not been seen since the 1400s, with populations in
much of the world declining. one of the things we look at is, for every country, measured over a five-year period, whether the working-age population is increasing or decreasing, and by how much. from 2020 to 2025, the working-age population in this country, people aged 20 to 64, is only going to grow by 1 million people. the last time it grew by that small a number, do you know who was president of the united states? john adams. that's how far back you have to go. and if you look at a country like italy, that group of people over the next 20 years is going to decline by 41%, and what is true of italy is almost true to the same degree in germany, and it's happening in japan. we live in a world where, for many countries, and i suspect you encounter this when you go to hartford, or to st. louis or kansas city, you can't find
enough police officers, enough nurses, enough teachers, and that is a problem we desperately need to focus on solving. so how do we do that? i think ai is something that can help, in something like a call center, for example. one of the things that is fascinating to me: we have more than 3,000 customers around the world, and one fascinating one is a bank in the netherlands. if you go to a call center today, the desks of the workers look like a trading floor on wall street: they have six different terminals, and when somebody calls, they desperately try to find the answer to the question. with something like gpt-4 with our services, six terminals can become one. somebody who is working there can ask a question and the answer comes up, and what they are finding is that the person who is answering the phone, talking to a customer, can spend more time concentrating on the customer
and what they need. i appreciate all the challenges; there is so much uncertainty, and we desperately need to focus on skilling. but i do hope that this is an era where we can use this technology to, frankly, help people fill jobs. let me just put it this way: i'm excited about artificial intelligence. i'm even more excited about human intelligence, and if we can use artificial intelligence to help people exercise more human intelligence, and earn more money doing so, that would be something way more exciting to pursue than everything we have had to grapple with around social media this past decade. >> the framework very much focuses on providing more training, and that is something that this
entity we propose would have to do. it is definitely something it has to address, and not only working conditions but opportunities within the workplace, for promotions, to protect civil rights. we haven't talked about it in detail, but we deal with it in our framework in terms of transparency. china may try to steal our technology, but they can't steal our people, and china has its own population challenges, its own need for more people and skilled labor. i say about connecticut: we don't have gold mines or oil wells, but what we have is an able workforce.
i think ai can help promote that. senator hawley? >> i want to thank our staff for this hearing, and we will continue these hearings; they have been so helpful to us. i can go down our framework and tie its proposals to specific comments made by the witnesses who have testified before us, and we will enrich and expand our framework with the insights that you have given us. so i want to thank all of our witnesses, and again, we are continuing our bipartisan approach, and adopting full measures, not half measures. thank you all.