
Hearing on Regulating Artificial Intelligence, CSPAN, September 14, 2023, 4:13pm-6:36pm EDT

4:13 pm
>> democracy, it looks like this. americans can see democracy at work, word for word, from the nation's capital to wherever you are. this is what democracy looks like. c-span, powered by cable. >> brad smith testified on ways to regulate artificial intelligence. he joined other witnesses to discuss transparency laws and labeling images and videos as being made by ai. this hearing before the senate judiciary subcommittee on privacy, technology and the law is about two hours and 20 minutes.
4:14 pm
>> the subcommittee on privacy, technology and the law will come to order. i want to welcome our witnesses and the audience that is here. senator schumer has been very supportive in what we are doing here,
4:15 pm
encouraging us to go forward here. i have been grateful, especially to my partner in this effort, senator hawley, the ranking member. he and i have produced a framework, basically a blueprint for a path forward. our interest is in legislation. this hearing, along with the two previous ones, has to be seen as a means to that end. we are very, very results oriented, as i know you are from your testimony. i have been enormously encouraged and emboldened by the response so far, just in the past few days, and by my conversations with leaders in the industry like mr. smith. there is a deep appetite, hunger
4:16 pm
, for rules and guardrails, basic safeguards for businesses and consumers, for people in general, from potential perils. there is also a desire to make use of the potential benefits. our effort is to provide for regulation in the best sense of the word: regulation that permits and encourages innovation, new businesses in technology, and entrepreneurship, and at the same time provides those guardrails, enforceable safeguards that can encourage trust and confidence in this growing technology. it is not a new technology entirely. it has been around for decades,
4:17 pm
but artificial intelligence is regarded as entering a new era. make no mistake, there will be regulation. the only question is how soon and what kind. it should be regulation that encourages the best in american free enterprise, but at the same time provides the protections that we do in other areas of economic activity. to my colleagues who say there is no reason for new rules, that there are enough laws, laws prohibiting anticompetitive conduct, laws that regulate airline safety and drug safety: it does not follow that simply because we have those rules, we do not need specific protections for medical device
4:18 pm
safety. we have rules that prohibit discrimination in the workplace. it does not mean we don't need rules that prohibit discrimination in voting. we need to make sure that these protections are framed and targeted in a way that applies to the risks involved. risk-based rules. managing the risks is what we need to do here. so, our principles are pretty straightforward, i think. we have no pride of authorship. we have circulated this framework to encourage comment. we will not be offended by criticism from any quarter. that is the way we can make this framework better and eventually achieve legislation, i hope by the end of this year. the framework is basically establishing a licensing regime
4:19 pm
for companies engaged in high-risk ai development. creating an oversight body that has expertise with ai and works with other agencies to administer and enforce the law. protecting our national and economic security, making sure we are not enabling china or russia or other adversaries to interfere with our democracy or violate human rights. requiring transparency about the limits and use of ai models. at this point, that includes rules like watermarking, digital disclosure when ai is being used, and data access for researchers. ensuring that ai companies can be held liable when their products breach privacy, violate civil rights or endanger the public. impersonations, hallucinations, we've all heard
4:20 pm
the terms. we need to prevent those harms. as former attorneys general of our states, senator hawley and i have a deep and abiding affection for the potential enforcement powers of those officials. the point is there must be effective enforcement. we will have more hearings. we want these measures disseminated widely, because of what is at stake here.
4:21 pm
we are grateful to have you before us here today. we need to act with dispatch. we are experiencing what happens otherwise with social media. if we let it out of the barn, it will be even more difficult to contain than social media. we are seeking to act on social media right now as we speak. we are literally at the cusp of a new era. one of the perils is the massive unemployment that could be created.
4:22 pm
we do need to prepare for potential worker displacement. we need to work on both. >> thank you, mr. chairman, and thank you for organizing this. this is the third hearing that we have done. i have learned a lot in the previous couple. some of what we are learning, the potential, is exhilarating. some of it is horrifying. what i certainly agree with is that we have a responsibility here now to do our part to make sure that this new technology, which holds a lot of promise but also peril, actually works for the american people. it is good for families.
4:23 pm
that we do not make the same mistakes that congress made with social media, where 30 years ago now we outsourced social media to the biggest corporations in the world, the biggest in the history of the globe, doing whatever they want with social media, running experiments basically every day on american kids, with effects the likes of which we have never seen. we cannot make those mistakes again. we have to try to make sure that this technology is something that actually benefits the people of this country.
4:24 pm
to the heads of these corporations, i have no doubt it will benefit your companies. what i want to make sure is that it actually benefits the american people. i look forward to the testimony today. thank you. >> i want to introduce our witnesses. as is our custom, i will swear them in and ask them to submit their testimony. welcome to all of you. our first witness is nvidia's chief scientist. he joined in january 2009 as chief scientist after spending 10 years at stanford university, where he was chairman of the computer science department. he has published over 250 papers and holds 120 issued patents. he is the author of four textbooks. next, the vice chair and president of microsoft.
4:25 pm
as microsoft vice chair and president, he is responsible for spearheading the company's work and representing it on a wide variety of critical issues involving the intersection of technology and society, including artificial intelligence, cybersecurity, privacy, environmental sustainability, digital rights, philanthropy, and products and business for nonprofit customers. we appreciate your being here. professor hartzog is professor of law and class of 1960 scholar at boston university school of law. he is also a nonresident associate at harvard university and an affiliate
4:26 pm
scholar at the center for internet and society at stanford law school. i could go on about each of you at much greater length with all of your credentials, but suffice it to say, very impressive. if you now stand, i will administer the oath. >> do you solemnly swear that the testimony you are about to give is the truth, the whole truth and nothing but the truth? thank you. >> why don't we begin with you. >> thank you for the privilege to testify today. i am nvidia's chief scientist and head of research, and i am delighted to discuss artificial intelligence's journey and its future. these technologies at the forefront have the potential to transform entire industries and benefit
4:27 pm
societies. we have been committed to developing technology to empower people and improve the quality of life worldwide. today over 40,000 companies use our platforms across media and entertainment, scientific computing, healthcare, financial services, internet services, automotive and manufacturing, solving the world's most difficult challenges and bringing services to consumers worldwide. we began as a 3d graphics startup, one of dozens competing to create an entirely new market for accelerators to enhance computer graphics. in 1999 we invented the gpu, which performs a massive number of calculations in parallel. we started in gaming, but we recognized that the gpu could theoretically accelerate any application that could benefit from parallel processing. this bet paid off. researchers worldwide innovate
4:28 pm
on what they use. we have made advances in ai that will revolutionize industries and provide tremendous benefits to society across sectors such as healthcare, medical research, education, business, cybersecurity, climate and beyond. we also recognize that, like any new product or service, ai has risks. those who make, use or sell ai products and services are responsible for their conduct. many uses of ai applications are subject to existing laws and regulations that govern the sectors in which they operate. high-risk sectors could be subject to enhanced licensing and certification when necessary. other applications with less risk of harm may need less stringent licensing or regulation. ai developers will work to benefit society while making products and services as safe as possible. for our part, we are committed to the
4:29 pm
safe and trustworthy development and deployment of ai. for example, nemo guardrails is open-source software that empowers developers to guide generative ai applications to accurate, appropriate and secure responses. we have implemented model risk management guidance, ensuring comprehensive assessment and management of the risks associated with these models. today we are announcing that we are endorsing the white house's voluntary commitments on ai. we can and will continue to identify and address risks. no discussion of ai would be complete without addressing what is often described as frontier ai models. some have expressed fear that they will evolve into uncontrollable artificial general intelligence, which could escape our control and cause harm. fortunately, uncontrollable artificial general intelligence is science fiction, not reality. at its core, ai is a software program that is limited by its training,
4:30 pm
the inputs provided to it and the nature of its outputs. in other words, humans will always decide how much decision-making power to cede to ai models. as long as we are thoughtful and measured, we can ensure safe, trustworthy and ethical deployment without suppressing innovation. we can ensure that ai tools are widely available to everyone, not concentrated in the hands of a few powerful firms. first, the ai genie is already out of the bottle. ai algorithms are already published and available to all. ai software can be transmitted anywhere in the world at the press of a button. many development tools, frameworks and foundational models are open source. second, no nation, and certainly no company, controls a chokepoint to ai development. leading computing platforms are available around the world, and while u.s. companies may currently be the most energy efficient, cost efficient and easiest to use, other nations are developing systems
4:31 pm
with or without u.s. components. safe and trustworthy ai will require multilateral cooperation, or it will not be effective. the united states is in a remarkable position today. we can continue to lead on policy and innovation well into the future. we are ready to work with you to ensure that generative ai and accelerated computing serve the best interests of all. thank you for the opportunity to testify before this committee. >> thank you very much. >> chairman, members, my name is brad smith. thank you for the opportunity to be here today. i think, more importantly, thank you for the work you have done. chairman, i think you put it very well. first, we need to learn and act with dispatch. ranking member, i think you
4:32 pm
offered real words of wisdom. let's learn from the experience the whole world had with social media. let's be clear about the promise and the peril in equal measure as we look to the future of ai. i would first say i think your framework does that. it does not attempt to answer every question, by design. but it is a very strong and positive step in the right direction, and it puts the u.s. government on the path to be a global leader in ensuring a balanced approach that will enable innovation to go forward with the right legal guardrails in place. as we all think about this more, i think it's about keeping three goals in mind. first, let's prioritize safety and security, as your framework does. let's require licenses for advanced ai models and uses in high-risk scenarios. let's have an agency that is
4:33 pm
independent and can exercise real and effective oversight over this category. but then let's couple that with the right kinds of controls that will ensure safety, of the sort that we have already seen start to emerge in the white house commitments that were launched on july 21. second, let's prioritize, as you do, the protection of our citizens and consumers. let's prioritize national security, always in a sense the first priority of the federal government. let's think as well, as you have, about protecting the privacy, the civil rights and the needs of kids, among many other ways of working to ensure we get this right. let's take the approach that you are recommending, focusing not only on those companies that develop ai, like microsoft, but also companies that deploy ai, like microsoft.
4:34 pm
in different categories, we are going to need different levels of obligations. and as we go forward, let's think about the connection between the role of a central agency that will be on point for certain things, and the obligations that will be part of the work of many agencies, and, indeed, our courts as well. let's do one other thing as well. it is one of the most important things that we need to do to ensure that the threats we need to worry about remain part of science fiction and do not become a new reality. let's keep ai under the control of people. it needs to be safe. to do that, there need to be safety brakes, especially for any ai application or system that controls critical infrastructure. if a company wants to use ai to take control of the electrical grid
4:35 pm
or all the self-driving cars on our roads or the water supply, we need to learn from so many other technologies that do great things but also can go wrong. we need a safety brake, just like we have a circuit breaker in every building and home in this country to stop the flow of electricity if that is needed. then i would say, let's keep a third goal in mind as well. this is the one where i would suggest you might consider doing a bit more to add to the framework. let's remember the promise that this offers. right now, if you go to state capitals, if you go to other countries, i think there is a lot of energy being put on that. when i see what governor newsom is doing in california, governor burgum in north dakota, governor youngkin in virginia, i see them at the forefront of how to use ai to improve the delivery of healthcare, advance medicine, improve education for our kids.
4:36 pm
maybe most importantly, make government services more accessible and more efficient. let's see if we can find a way to use this technology to make government not only better, but cheaper, or use the savings to provide more and better services to our people. that would be a good problem to have. finally, the professor has said this is not a time for half measures. it is not. he is right. let's go forward as you have recommended. let's be ambitious and get this right. thank you. >> thank you. thank you very much. i have read your testimony and you are very much against half measures. we look forward to hearing what the full measures you recommend are. >> that is correct. thank you for inviting me to appear before you today. i am a professor of law at boston
4:37 pm
university. my comments today are based on a decade of research on technology issues, and i am drawing on policy work that i conducted with colleagues at the institute i am affiliated with at washington university in st. louis. committee members, up to this point ai policy has largely been made up of industry-led approaches like encouraging transparency, mitigating bias and promoting principles of ethics. i would like to make one simple point in my testimony today. these approaches are vital, but they are only half measures. they will not fully protect us. to bring ai within the rule of law, lawmakers must go beyond these half measures to ensure that ai systems and the actors that deploy them are worthy of our trust. half measures like audits, assessments and certifications are necessary. but when industry leverages procedural checks like these, they dilute our laws into managerial box-checking exercises that entrench harmful surveillance-based
4:38 pm
business models. a checklist is no match for the staggering fortunes available to those that exploit our data, our labor and our precarity to develop and deploy ai systems. it is no substitute for meaningful liability when ai systems harm the public. today i would like to focus on three popular half measures and why lawmakers must do more. first, transparency is a popular proposed solution for opaque systems, but it does not produce accountability on its own. even if we truly understand the various parts of ai systems, lawmakers must intervene when these tools are harmful. a second laudable but incomplete approach is when companies work to combat bias around race, class, gender and ability. self-regulatory efforts to mitigate bias are half measures doomed to fail.
4:39 pm
it is easy to say systems must not be biased. it is difficult to find consensus on what that means and how to get there. additionally, it is a mistake to assume that if a system is fair, then it is safe for all people. even if we ensure that ai systems work equally well for all communities, all we will have done is create a more effective tool for the powerful to dominate, manipulate and discriminate. a third ai half measure is committing to ethical principles. they sound impressive, but they are a poor substitute for laws. industry does not have the incentive to leave money on the table for the good of society. i have three recommendations for the committee to move beyond ai half measures. first, accept that ai systems are not neutral, and regulate how they are designed. people often argue that lawmakers should avoid design rules
4:40 pm
for technology because there are no bad ai systems, only bad ai uses. this is wrong. there is no such thing as a neutral technology, including ai systems. facial recognition empowers the watcher. generative ai displaces labor. lawmakers should embrace theories of accountability like defective design, or the consumer protection theory of providing the means and instrumentalities of unfair and deceptive conduct. my second recommendation is to focus on substantive laws that limit abuses of power. ai systems are so complex and powerful that it can seem like regulating magic, but the broader risks and benefits of ai systems are not so new. ai systems bestow power, and this power is used to benefit some and harm others. lawmakers should borrow from established legal approaches to
4:41 pm
remedying power imbalances to require broad, nonnegotiable duties of loyalty, care and confidentiality, and implement robust rules that limit harmful secondary uses and disclosures of personal data in ai systems. my final recommendation is to encourage lawmakers to resist the idea that ai is inevitable. when lawmakers go straight to putting up guardrails, they fail to ask questions about whether particular systems should exist at all. this dooms us to half measures. strong rules would include prohibitions on unacceptable ai practices like emotion recognition, biometric surveillance in public spaces and social scoring. in conclusion, to avoid the mistakes of the past, lawmakers must make the hard calls. trust and accountability can only exist where the law provides meaningful protection for humans. half measures certainly will not be enough. thank you, and i welcome your questions. >> i take very much to heart
4:42 pm
your imploring us against half measures. i think, listening to both senator hawley and myself, you have a sense of our boldness and initiative. we welcome all of your specific ideas. most respectfully, mr. smith, we can be more engaged at the state level in making use of ai in the public sector. but taking the thought that the professor so importantly introduced, that ai technology in general is not neutral: how do we safeguard against the downsides of ai, whether it is discrimination or surveillance? will the oversight entity be sufficient, and what kind of powers do we need to give it?
4:43 pm
>> i would say, first of all, i think it is indispensable in certain high-risk scenarios. it is a critical start. what i think it really ensures, especially for the frontier models as well as certain other applications, is that you need a license from the government before you go forward. that is where you have accountability. you cannot drive a car until you get a license. you cannot make the model available until you pass through that gate. i do think that it would be a mistake to think that one single agency or licensing regime would be the right recipe to address everything, especially when we think about the harms we need to address. that is why i think it's equally critical that every agency in the government that is responsible for the enforcement of the law and the protection of people's rights has the capability to assess ai. i don't think we want to move
4:44 pm
the approval of every new drug to this agency. by definition, the fda will need to have the capability to assess ai. that would be just one of several additional specifics. >> i think that that is a really important point. ai will be used in making automobiles, making airplanes, making toys for kids. the faa, the fda, the federal trade commission, the consumer product safety commission, they all have presently existing rules and regulations, but there needs to be an oversight entity that uses some of those rules, adapts them, and adopts new rules so that those harms can be prevented. there are a lot of different names we could call that entity. connecticut now has an office of artificial intelligence. you could use different terms.
4:45 pm
but i think the idea is that we want to make sure that the harms are prevented through a licensing regime focused on risk. you said that autonomous ai, ai beyond human control, is science fiction. but science fiction has a way of coming true. and i wonder whether that is a potential fear. certainly, it is one that is widely shared at the moment. whether it is fact-based or not, it exists in the reality of human perception. and, as you well know, perception is very, very important. so, i wonder how we counter the perception and prevent the science fiction from becoming
4:46 pm
reality. >> what i said is that artificial general intelligence that gets out of control is science fiction, not autonomous ai. we use autonomous artificial intelligence, for example autonomous vehicles, all the time. i think the way we make sure that we have control over ai of all sorts is, for any really critical application, keeping the human in the loop. ai is a computer program. it takes an input and produces an output, and if you do not connect that output to anything that could cause harm, it can't cause that harm. anytime there is harm that could happen, you want a human being between the model and the causing of harm. i think as long as we are careful about how we deploy ai, to keep humans in the critical loops, we can ensure that ai will not take over and shut down our power grid or,
4:47 pm
you know, cause airplanes to fall out of the sky. we can keep control over them. >> thank you. i have a lot more questions, but we will adhere to five-minute rounds. a very busy day with votes, as a matter of fact. i will turn it to senator hawley. >> thank you. thank you to the witnesses for being here. i want to thank you, mr. smith. i know that there is a group of your counterparts gathering tomorrow behind closed doors, and that is what it is. i appreciate you being willing to be here in public and answer questions in front of the press that is here. this is open to anybody that wants to see it, and i think that's the way this ought to be done. i appreciate you being willing to do that. you mentioned protecting kids. i want to ask you a little bit about what microsoft has done and is doing. kids use your bing chatbot. is that fair to say? >> yes. we have certain age controls.
4:48 pm
yes, in general, it is possible for children to register if they are a certain age. >> what is the age? >> i think it is 13. does that sound right? >> i was going to say 12 or 13. i was thinking 13. >> do you have some sort of age verification? obviously, the kid can put in whatever age he or she wants to. is there age verification? >> we do. we involve getting permission from a parent. i don't remember off the top of my head exactly how it works. >> my impression is that it does not really have enforceable age verification, no way really to know. but you correct me if that is wrong. what happens to all of the information that our hypothetical 13-year-old is putting into the tool during
4:49 pm
this chat? they could be chatting about anything, going back and forth on a number of subjects. what happens to that information? >> well, the most important thing i would say first is that all of this is done in a manner that protects the privacy of children. >> how is that? >> well, we follow the rules which exist to protect children's online privacy. that forbids using the information for tracking, forbids its use for advertising, among other things, and it puts limits around the use and retention of that information. the second thing that i would add is that, in addition to protecting privacy, we are hyper focused on ensuring that people of any age, and especially children, are not able to use something like bing chat in ways that could cause harm to themselves or others.
4:50 pm
>> how do you do that? >> we have a safety architecture that we use across the board. there are two things around the model. if somebody asks, how can i commit suicide tonight, or how can i blow up my school tomorrow, that hits a classifier that identifies a class of questions or prompts or issues. then there are meta-prompts, where we intervene so that the question is not answered. if someone asks how to commit suicide, we typically would provide a response that encourages somebody to get mental health assistance and counseling, and tells them how. if somebody wants to know how to build a bomb, it says, no, you cannot use this to do that. that is a fundamental safety architecture that will evolve and get better. in a sense, it is at the heart, if you will, of what we do.
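the two-layer pattern described in that answer, a classifier that flags risky prompts and an intervention layer that substitutes a safe response before the model ever answers, can be sketched roughly as follows. every category name, keyword and canned reply below is a hypothetical illustration for the reader, not microsoft's actual implementation (which would use learned classifiers, not keyword matching):

```python
# Illustrative sketch of a prompt-safety gate: a classifier assigns a risk
# category to an incoming prompt, and an intervention layer returns a safe
# canned response for flagged categories instead of passing the prompt on
# to the model. All categories, keywords, and responses are hypothetical.

SAFETY_RULES = {
    "self_harm": {
        "keywords": ["commit suicide", "kill myself"],
        "response": "If you are in crisis, please reach out to a mental "
                    "health professional or a crisis hotline.",
    },
    "violence": {
        "keywords": ["build a bomb", "blow up"],
        "response": "No, you cannot use this assistant to do that.",
    },
}

def classify(prompt):
    """Return the risk category a prompt falls into, or None if unflagged."""
    lowered = prompt.lower()
    for category, rule in SAFETY_RULES.items():
        if any(kw in lowered for kw in rule["keywords"]):
            return category
    return None

def respond(prompt):
    """Intervene before the model: flagged prompts get the safe reply."""
    category = classify(prompt)
    if category is not None:
        return SAFETY_RULES[category]["response"]
    # Unflagged prompts would be forwarded to the underlying model here.
    return "[model answers: " + prompt + "]"
```

a production system layers many such gates (on the prompt, on the model's draft output, and on the final response), but the control flow, classify first and intervene before answering, is the same shape as this toy version.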
4:51 pm
i think the best practices in the industry, and part of what this is all about, is how we take that architectural element and continue to strengthen it. >> very good. that is very helpful. let me ask you about the kids' information. is it stored in the united states? is it stored overseas? >> if the child is in the united states, the data is stored in the united states. that is true not only for children, but for adults as well. >> who has access to that data? >> the child. parents may or may not have access. >> under what circumstances would the parents have access? >> i would have to get you the specifics on that. this is something we have implemented in the united states even though it is not legally required in the united states. it is legally required, as you know, in europe. people have a right to find out what information we have about them. they have a right to see it. they have the right to ask us to correct it if it is wrong.
4:52 pm
they have the right to ask us to delete it if they want us to. >> if they ask you to delete it, you delete it? >> we better. that is our promise. >> i have a lot more questions. i will try to adhere to the time limit, mr. chairman. great news for us, not such great news for the witnesses. [laughter] last thing, just about the kids' personal data and where it is stored. we have seen other technology companies in the social media space have major issues about where data is stored. it should not be hard to guess; i am thinking in particular of china, where we have seen other social media companies say the data is stored in america, but guess what, lots of other people can access that data. if it is a child's data that they entered into bing chat,
4:53 pm
stored in the united states as you just said if they are an american citizen, can it be accessed by an engineer in china? >> i do not believe so. >> would you be able to get that for me for the record, please? i will have lots more questions later. >> thank you, senator. >> thank you very much. thank you, all of you. i am the chair of the rules committee. mr. smith, in your written testimony, you talk about how watermarks could be helpful with the exposure of ai-generated material. as you know, i have a bill that i lead, that representative clarke leads in the house, to require a disclaimer, some kind of mark, on ai-generated ads. i think we have to go further. could you talk about what you mean in your written testimony that the health of democracy and civic discourse
4:54 pm
would undoubtedly benefit from initiatives that help protect the public against deception or fraud facilitated by ai-generated content? >> absolutely. here i do think things are moving quickly, both in positive directions and in some directions that should concern us. on the positive side, you are seeing the industry come together, and a company like adobe exercising real leadership. there is a recipe that i see emerging. i think it starts with a first principle: people should have the right to know if they are getting a phone call from a computer, from ai, if there is content coming from a system rather than a human being, and to make that real with legal rights to back it up. we need to create a system of watermarking for legitimate content so that it cannot be altered easily without detection. and we need to create an effort that brings the industry, and i think government, together so we
4:55 pm
know what to do when we do spot deepfakes, deepfakes that have altered legitimate content. >> thank you. let's get to that hot off the press. senator hawley and i have introduced our bill today with senator collins, who led the electoral count reform act, and senator coons, to ban the use of deceptive ai-generated content in elections. this would work in concert with watermarked content. the deception is fraudulent content pretending to be the elected official or the candidate when it is not, and we have seen this used against people on both sides of the aisle, which is why it was so important that we be bipartisan in this work. i want to thank him for his leadership, not only on the framework, but on the work that we are doing. going to you, professor hartzog
4:56 pm
, in your testimony, you advocate for some outright prohibitions. we do have, of course, a constitutional exception for satire and humor. could you talk about why you believe that there has to be some outright ban on misleading ai content related to federal candidates and political ads? >> sure, absolutely. thank you for the question. of course, keeping in mind free speech protections that would apply to any legislation, i do think that bright-line rules around such deceptive ads are critical, because we know that procedural walk-throughs, as i said in my testimony, are
4:57 pm
likely to give us the veneer of protection without actually protecting us. to outright prohibit these practices is really important. i would potentially go a step further, covering not just those ads that we would consider to be deceptive, but those that we would even consider to be abusive, leveraging our desire to believe, or our want to believe, against us. there is a body of law that runs alongside unfair and deceptive trade practices, against abusive trade practices. >> all right. thinking of that, and talking to mr. smith about this as well: i know someone well who has a kid in the marines who was just deployed somewhere, and they don't even know where it is, and a scam asked for money to be sent somewhere in texas, i believe. could you talk about what they
4:58 pm
do to ensure that ai platforms are designed so they cannot be used for criminal purposes? it has to be part of the work that we do, not just scams against elected officials. >> i think the best measure, and mr. smith mentioned it in his testimony, is the use of authentication systems, where you can have authentic images and authentic voice recordings signed by the device, whether it's a camera or an audio recorder that has recorded that voice, and when it is presented it can be authenticated as being genuine, not a deep fake. that is the flip side of watermarks, where anything generated is identified as such. those two technologies in combination can really help sort things out, along with public education to make sure that people understand what the technology is capable of and are on guard, so they can tell what is real from what is fake.
4:59 pm
>> mr. smith, back to where we started here. some platforms use local news without compensating journalists and papers, including by using their content to train ai algorithms. the journalism competition and preservation act with senator kennedy would allow local news organizations to negotiate with platforms, including generative ai platforms, when they use their content without compensation. can you talk about the impacts on local journalism? you talked in your testimony about the importance of investment in quality journalism. we have to find a way to make sure that the people actually doing the work are compensated, in many fields, but also in journalism. mr. smith. >> three quick things. we need to recognize that local journalism is fundamental to the health of the country and the electoral system. we need to find ways to preserve
5:00 pm
and promote it. number two, i think that we should let journalists and publications make decisions about whether they want their content to be available for training or grounding and the like. that is a big topic and it is worthy of more discussion. we should certainly let them, in my view, negotiate collectively; that is the only way they will do that. >> i appreciate your words. i am going to get in trouble from the senator here. >> i will just say, there are ways that we can use ai to help local journalists. we are interested in that, too. let's add that to the list. >> thank you again, both of you. thank you for your leadership. thank you very much. >> thank you for yours, senator. >> thank you, mr. chairman. it is good to see you again. every time we have one of these
5:01 pm
hearings, we learn something new. a conclusion that i have drawn is that ai is ubiquitous; anybody can use ai. it was said that we should not be taking half measures, and i'm not sure what that means. what does it mean to take a half measure on something as ubiquitous as ai, where there are other regulatory schemes that can touch upon those endeavors that use ai? there is always the question i have, that when we look at something as complex as ai, there are unintended consequences that we should care about. would you agree, mr. smith? >> i would absolutely agree. we would have to define what is a full measure and what is a half measure, but we can agree half measures aren't good enough. >> how do we recognize going forward whether ai will help
5:02 pm
us as a powerful tool. mr. smith, it is a powerful tool that can be used for good, but it can also be used to spread misinformation, as happened during the disaster on maui. the maui residents were subject to disinformation, some of it coming from foreign governments, i.e. russia: don't sign up for fema, for example. i worry that with ai such information will become more rampant in future disasters. do you share my concern about misinformation in the disaster context and the role ai could play, and what would you do to prevent foreign entities from
5:03 pm
pushing ai disinformation to people who are very vulnerable? >> i absolutely share your concern, and there are two things we need to think about doing. first, let's use the power of ai to detect these kinds of activities when they are taking place, because ai can go faster, as it did in that instance, where microsoft and others used data technologies to identify these campaigns. number two, i think we need to stand up as a country, and with other governments and the public, and say there need to be some clear red lines in the world today, regardless of how much else or what else we disagree about. when you think about what happens typically in the wake of an earthquake or a hurricane, a tsunami or a flood, the world comes together and people are generous. they provide relief. now let's look at what happened after the fire in maui. it was the opposite of that. we had some people, not
5:04 pm
necessarily directed by the kremlin, but people who regularly spread russian propaganda, trying to discourage the people of lahaina from going to the agencies that could help them. that's inexcusable. and we saw what we believe is chinese-directed activity trying to persuade the world that the fire was caused by the united states government itself using a meteorological weapon. those are the things where we should all try to bring the international community together and agree they are off-limits. >> how do we identify that this is even occurring, that china or russia directed disinformation? i didn't know this was happening, by the way, and in the energy committee, on which i sit, i asked, regarding the maui disaster, the person testifying whether he was aware
5:05 pm
there was misinformation put out by a foreign government, and he said yes. but i don't know that the people of maui recognized what was going on. so how do we, number one, identify that this is going on and come forward to say this is happening, and name names and identify who it is that is spreading misinformation? >> two things. companies like microsoft need to lean in, like we are. we are data and infrastructure experts with the real-time capabilities to spot these threats and patterns and reach well-founded conclusions. and this is the harder one: what do we do if we find a foreign government is deliberately trying to spread false information next year in the presidential campaign? how do we create room so information can be shared and
5:06 pm
people will consider it? your framework is bipartisan. how do we create that bipartisan framework so we can create a climate where people will listen? i think we have to look at both of those parts of the problem together. >> i hope we can do that. and if you don't mind, one of the concerns about ai from a jobs standpoint is the jobs that will be gone, and you mentioned generative ai can result in job losses. for both you and mr. smith, what are the jobs that will be lost to ai? >> that's an excellent question. it is difficult to predict the future. i'd start by saying it's not necessarily the things that can be automated effectively; it's the things that those who
5:07 pm
control the purse strings think could be automated effectively. if they get to the point where it appears as though something could be automated, i would imagine the industry will move in that direction. >> i think you mentioned in your book, which i have listened to, that something like taking orders is one of those jobs that could be gone due to ai. >> when we published our book, we asked, what's the first job that would be eliminated by ai? we don't have a crystal ball, but i would bet it would be taking the order in the drive-thru of a fast food restaurant. you are not establishing a rapport with a human being; a person is being paid to listen and type into a computer what you are saying. so if ai can hear as well as a person, it can enter the order. and i was struck a few months ago when it was announced
5:08 pm
that wendy's was starting to consider whether they would automate the drive-thru with ai. i think there's a lesson in that, and it should give us pause, but also a little bit of optimism. there is little creativity involved in a drive-thru, listening to and entering an order. there are jobs that do involve creativity. the real hope, i think, is to use ai to automate the routine work, to free people up so they can be more creative, so they can focus more on paying attention to other people and helping them. if we just apply that recipe more broadly, i think we might put ourselves on a path that is promising. >> thank you. thank you, mr. chairman. >> thank you, senator hirono. >> thank you, mr. chairman, and thank you for calling this
5:09 pm
hearing. mr. dally, am i saying your name correctly? >> that's correct. >> mr. dally, if i am fed specific content created by generative ai, do you think i should have a right to know if that content was generated by a robot? >> yes, i think you do. i think the details would depend on the context, but in most cases i or anybody else would like to know, is this real or is this generated? >> mr. smith? >> generally, yes. if you are listening to audio or watching a video, if you are seeing an image generated by ai, i think people have a right to know. if you are using ai to help you
5:10 pm
write something, maybe helping you write the first draft, that is just as our staff help us write something. i don't think any of us would say we are obliged to give the speech and announce, now i'm going to read the paragraph that my staff wrote. you make it your own. the written word is a little more complex, so let's think that through, but as a broad principle i agree. >> professor. >> there are situations where you probably wouldn't expect to be dealing with the product of generative ai. >> well, that's the problem. >> it is possible that that will change. >> but as a principle, do you think people should have a right to know when they are being fed content through generative ai?
5:11 pm
>> i tell my students it depends on the context. if you're vulnerable to generative ai, the answer is absolutely yes. >> what do you mean, if you're vulnerable? no disrespect. a straight answer. >> absolutely. >> i like two things: breakfast food and straight answers. >> i love them. >> if a robot is seeking information and i don't know it's a robot, am i entitled to know it's a robot? if it's a concern, be straight up. >> i think the answer is yes. >> all right, back to mr. dally. am i entitled to know who owns that robot, and where that content came from? i know it came from a robot, but someone had to instruct the robot
5:12 pm
to make that. >> i think that's a harder question that depends on the particular context. if somebody is feeding me a video that is identified as being generated by ai, i know it's generated and it's not real. so suppose it's being used, for example, in a political campaign. >> let me stop you. suppose i am looking at a video generated by a robot. would it make any difference to you whether that robot was owned by, let's say, president biden or president trump? don't you want to know who owns the robot and who prompted it to give you this information? >> i would probably want to know that. i don't know that i would feel it would be required for me to know that. >> how about you, mr. smith?
5:13 pm
>> many people want to know not only that it's generated by a computer but whose program is doing it. the only thing i would offer, and you know this better than me, is that there are certain areas in political speech where you have to decide whether you want people to be able to speak under anonymity. the federalist papers were published that way, and i would rather know who is speaking. >> i hate to ask and not get a straight answer. >> how do you feel about breakfast food? >> i am pro breakfast food. i agree with mr. smith. i think there are circumstances where you want to preserve anonymous speech, and you are entitled to do that. >> i don't want to go too far over. obviously this is an important subject, and the extent to which i
5:14 pm
think, let me rephrase that. the extent of most senators' understanding of the nuances of ai is a general impression that ai has extraordinary potential to make our lives better, if it doesn't make our lives worse first, and that's about the extent of it. in my judgment, we are not really ready to be able to pass a big bill that looks like somebody designed it on purpose. i think we are more likely to take baby steps, and i asked you these questions deliberately, because senator schatz and i have a bill that is pretty simple. it says if you own a robot that's going to put out artificial content to consumers, consumers have the right to know if it's generated by a robot and who owns the robot, and i think
5:15 pm
it's a good place to start. again, i want to thank my colleagues here, my chair and ranking member. i want to hear their questions too. thank you all for coming. >> thank you, senator kennedy. on behalf of the chairman, we're going to start the second round, and i guess i will go first since i'm the only one sitting here. mr. smith, we were talking about kids and kids' privacy and safety; thanks for the information you are going to get me. let me give you an opportunity to make a little news today, in the best possible way. 13, the age limit, is such a young age. i have three kids at home: 10, 8, and 2. i don't want my kids to be on
5:16 pm
chatbots at all, but 13 is incredibly young. would you commit to raising that age, and would you commit to a verifiable age verification procedure, so the parents can know, can have some sense of confidence, that their 12-year-old is not just saying, yeah, i'm 13, and being waved right on ahead to get into a back-and-forth with this robot? would you commit to those things on behalf of child safety? >> as you can imagine, microsoft probably has one principle they want me to honor: don't make statements without talking to them first. >> but you are the boss. >> most mistakes you make are the ones you make by yourself. happy to go back and talk more about what the right age should
5:17 pm
be. >> don't you think 13 is low, though? >> it depends. >> to interact with a robot that could tell you to do any number of things? that's awfully young. >> let me describe this scenario. when i was in seoul, i met the deputy prime minister, who is also the minister of education, and they were trying to create, for three topics that are objective (math, coding and english), a digital textbook with an ai tutor, so if you are doing math and you don't understand the concepts, you can ask the ai tutor to help you solve the problem. and by the way, it's not useful just for the kids; it's useful for the parents. i think it's good. take their 14-year-old in, let's say, eighth-grade algebra. when my kids were in eighth-grade algebra, i tried to help them. i think we want kids, in a
5:18 pm
controlled way, with safeguards, to use something like that. >> i'm talking about your ai chat. famously, earlier this year, someone at "the new york times" wrote about how your chatbot was urging this person to break up his marriage. do we want 13-year-olds to be having those conversations? >> no, of course not. >> will you commit to raising the age? i don't want those chats with anybody. >> i don't either. >> but we are going to make the decision on the exception. >> but we have multiple tools. age is one very bright red line. >> it's a very bright red line, and that's why i like it.
5:19 pm
>> my point is there's a safety architecture that we can apply. >> your safety architecture didn't stop the chatbot from having a discussion with an adult in which it said, you don't really love your wife, and your wife isn't good for you; she doesn't really love you. this was an adult. can you imagine the kinds of things that chatbot could say to a 13-year-old? i'm serious about this. do you really think it's a good idea? >> wait a second. let's put that into context. at a point where the technology had rolled out to 20,000 people, a journalist from "the new york times" spent two hours on the evening of valentine's day ignoring his wife and interacting with a computer, trying to break the system, which he managed to do. we didn't envision that, and the next day we fixed it. >> are you telling me you will look at all the questions a 13-year-old might ask, and the parents would be fine with that,
5:20 pm
and i should trust you in the same way "the new york times" writer did? >> what i'm saying is, i think as we go forward we have an increasing capability to learn from the experience of real people. >> that is what worries me. that's exactly what worries me. if what you're saying is, we have tests and failures, i don't want 13-year-olds to be your guinea pig, and i don't want 14-year-olds to be your guinea pig. i don't want you to learn from their failures. if you want to learn from the failures of your scientists, go ahead; let's not learn from the failures of america's kids. this is what happened with social media. billions of dollars were made giving us a mental health crisis in this country. they got rich and the kids got depressed and committed suicide. why would we want to run that experiment with ai? why not raise the age? >> we shouldn't want anybody to be a guinea pig regardless of age. >> let's rule kids out.
5:21 pm
>> let's also recognize that this technology does require real users, and what's different about this technology, what is so different from the social media experience, is that we not only have the capacity, we have the will, and we are applying that will to fix things in hours and days. >> after the fact. i'm sorry, but it sounds to me like you're saying, trust us, we are going to do well with this, and i'm asking you why we should trust you with our children. >> i'm not asking you for trust, but we will work every day to build it. >> i'm asking you, as the president of this company, to make a commitment now for child safety protection, to say, you know what, you could tell every parent in america now, microsoft is going to protect your kids.
5:22 pm
we will never use your kids as a science experiment, ever, never, and therefore we won't target your kid, and we won't allow your kid to be used by our chatbot as a source of information if you are younger than 18. >> with all due respect, there is more than one thing you're talking about. >> i'm talking about kids, very simple. >> we don't want to use kids as a source of information and monetize them, etc. but i'm equally of the view that i don't want to cut off an eighth-grader today from the ability to use this tool that will help them learn algebra or math in a way that they couldn't a year ago. >> with all due respect, it wasn't algebra or math that your chatbot was recommending when breaking up a reporter's marriage. we are talking about your chatbot. we are talking about bing chat. >> of course we're talking about bing chat, and i'm talking about
5:23 pm
the protection of children, and yes, there was that episode in february, on valentine's day, and six months later, if the journalist tries to do the same thing again, it will not happen. >> do you want me to be done, senator? i don't want to miss my vote. >> senator klobuchar. >> you are very kind, thank you. some of us haven't voted yet, so i wanted to turn to you, mr. dally. in march, nvidia announced a partnership with getty images to develop models that generate new images, and this partnership provides royalties to content creators. why was it important to the company to partner this way in developing generative ai? >> with respect to intellectual
5:24 pm
property rights, we respect the rights of the photographers whose images the model was trained on; they are expecting income from those images, and we did not want to infringe on them. we partnered with getty and trained our model, picasso, and when people use picasso to generate images, the people who provided the original content receive royalties. we see this as a way of going forward in general, where the people providing the ip to train these models benefit from the use of their ip. >> today the white house announced eight more companies that are committing to take steps to move towards the safe, secure and trustworthy development of ai. could you talk about the steps that you have taken and the steps you plan to take in the responsible development of ai? >> we have done a lot already.
5:25 pm
we have implemented our nemo guardrails, so we can basically put guardrails around our own models, so that if we get a prompt that might lead the model to inadvertently generate something that might be offensive, it is detected and intercepted before it can reach the model. we have a set of guidance that we provide for all of our internally generated models on how they should be appropriately used, and we provide model cards that say where the model came from and what it was trained on, and we test these models very thoroughly. the testing depends upon the use, so for certain models we test for bias, to make sure, for example, that when we refer to a doctor we don't automatically assume the doctor is a he. we have a model called bionemo used in the medical profession, where we make sure that the advice it gives is safe. there are a number of other measures.
5:26 pm
>> very good, thank you. professor hartzog, should congress be more focused on regulating the inputs and design of generative ai, or its output capabilities? >> senator, if it can be answered, both. >> if they can. >> certainly the area that has been ignored up to this point has been the design of and inputs to these tools, so to the extent that area could use revitalization, i would encourage it, over output design and use. >> okay, and i suggest you look at these bills, because, as we have been talking about, i think we have to move quickly on those, and the fact that it's bipartisan is a very positive thing. i wanted to thank mr. smith for wearing a purple vikings tie.
5:27 pm
i know it was not an ai-generated message. and i've got to note, after their loss on sunday, that they play thursday night as well, i will remind you. >> as a native of wisconsin, i can assure you it was an accident. >> very good. thank you all. we have a lot of work to do. thanks. >> senator blackburn. >> thank you, mr. chairman. i want to talk about china and the chinese communist party, and we have seen a lot of it on tiktok. they have these influence campaigns that they are running to influence certain thought processes in the american people. i know you all just did a report on china. you covered some disinformation and some of the campaigns. talk to me a little bit about how microsoft and the industry as
5:28 pm
a whole can combat some of these campaigns? >> i think there are a couple of things we can think more about and do more about. the first is, we should all want to ensure that our own products and goods and services are not used by foreign governments in this manner, and i think there is room for the evolution of export controls, and next-generation export controls, to help prevent that. i think there's also room for a concept that has worked since the 1990s in the world of banking and financial services: know your customer. we've been advocates for that, so that if there is abuse of the system, the company that is offering the service knows who's doing it and is in a better position to stop it from happening. i think the other side of the point is using ai in advancing
5:29 pm
our defensive technologies, which start with our ability to detect what is going on, and we have been investing heavily in that space. that is what enabled us to produce the report that we published. it is what enables us to see the patterns in communications around the world, and we are seeking to be a voice, with many others, that really calls on governments to lift themselves to a higher standard, so they are not using this kind of technology to interfere with other countries and especially other countries' elections. >> in the report that you all had, when you were looking at china, did you look at what i call the other members of -- russia, iran, north korea? >> we did, and that specific report you're referring to is focused on east asia.
5:30 pm
we see especially prolific activity, some from china, some from iran, and really the most global actor in this space is russia, and we have seen that grow during the war, and we have seen it spiral over several years, going back to the middle of the last decade. the russian government is spending more than a billion dollars a year on what we call cyber influence operations, and part of it has targeted the united states. part of the goal is to undermine public confidence in everything that the public cares about in the united states. we see it in the south pacific, and we've seen it across africa, and i do think it's a problem, and we need to do more to counter it. >> so you would say something like a know-your-customer system, such as would apply in banking, is there to help weed out bad actors, and companies
5:31 pm
should increase their due diligence to make certain their systems are used appropriately, and then be careful about doing business with countries that may misuse a certain technology? >> generally, yes. in specific scenarios, a know-your-customer requirement, and we have also called for know-your-cloud requirements where these systems are deployed, for the security of the data center. >> one of the things, as we look at ai and its detrimental impact, we look at the doomsday scenarios, and looking at some of the reports on surveillance, with the ccp surveilling the uyghurs, with iran surveilling --, and i think there are other countries that are doing the same type of
5:32 pm
surveillance. what can you do to prevent that, and how do we prevent that? >> senator, i have argued in the past that facial recognition technology and biometric surveillance are fundamentally dangerous, that there is no world in which they are safe for any of us, and that we should prohibit them outright. a flat prohibition of biometric surveillance of public spaces and emotion recognition, which is what i have referred to, is a strong bright-line measure that draws an absolute line in the sand, rather than procedural ones that ultimately leave this harm in place. >> mr. chairman, may i take another couple of seconds, because his head was shaking
5:33 pm
in agreement. do you want to weigh in before i close my questioning? >> i was in general agreement, and that's why i was shaking my head. we need to be very careful about who we give our technology to, and at nvidia we try to sell to people who are using it for good commercial purposes and not to suppress others. and we will continue that. we don't want to see this technology misused to repress anybody. >> got it, thank you. >> thank you, senator blackburn. my colleague senator hawley mentioned we have a forum tomorrow, which i welcome for education and enlightenment, which is a good thing, and i just want to express the hope that some of
5:34 pm
the folks who are appearing in that venue will also cooperate before the subcommittee; we are certainly inviting more than a few of them. i want to express my thanks to all of you for being here, but especially mr. smith, who will be here tomorrow to talk to my colleagues privately. our effort is complementary, not contradictory, to what senator schumer is doing. i'm very focused on election interference, because elections are upon us, and i want to thank my colleagues senator klobuchar and hawley and -- for taking the next step toward addressing the harm that may come from all of the potential perils we have
5:35 pm
identified here. authenticating the truth, the true images and voices, is one approach, and banning the impersonations is another approach. obviously, banning anything in the public realm, in public discourse, risks running afoul of the first amendment, which is why disclosure is often the remedy that we see, especially in campaign finance. maybe i should ask all of you whether you see banning certain kinds of election interference as workable, and i have raised the specter
5:36 pm
of foreign interference and the fraud scams that could be perpetrated as they were in 2016, and i think it is one of those nightmares that should keep us up at night. we welcome free expression, and ai is a form of expression, whether it's regarded as free or not, and whether it's generating high-risk content or simply touching up some of the background in a tv ad. maybe each of you can talk a little bit about what you see as a potential remedy to stop this? >> it is a grave concern, with elections coming up, that the american public may be misled by deepfakes of various kinds.
5:37 pm
as you mentioned, the use of the technology to imitate a voice, and watermarking that will let us know. if we insist on ai-generated content being identified, then people are at least tipped off that this is generated and not the real thing. i think we need to avoid having especially a foreign entity interfere in our election. at the same time, ai-generated content is speech, and i think it would be a dangerous precedent to try to ban something. i think it's much better to have exposure, as you suggested, and demand that it be identified outright. >> three thoughts. number one, 2024 is a critical year for elections, not only in
5:38 pm
this country and not only for the united states, but the united kingdom, india, the european union. people will vote for who will represent them, so this is a global issue for numerous democracies. i think you are right on the first amendment, because it's such a critical cornerstone for americans and the rights that we all enjoy. yet i will also be quick to add, i don't think the russian government qualifies for protection under the first amendment, and if they are seeking to interfere in our elections, i think the country needs to take a strong stand, and a lot of thought needs to be given on how to do that effectively. and this goes to the heart of your question and why it's such a good one. i think it's going to require some real thought and discussion for an ultimate consensus to emerge. consider one specific scenario. let's imagine for a moment that
5:39 pm
there is a video that involves the president or a candidate giving a speech, and let's imagine that someone uses ai to put different words into the mouth of that candidate and uses ai technology to perfect it to a level that it is difficult for people to recognize as fraudulent. then you get to this question: what should we do? at least as we have been trying to think this through, i think we have two broad alternatives. one is we take it down, and the other is we relabel it. if we do the first, we are acting as censors, and that makes me nervous; i don't think that's our role, and the government cannot do it under the first amendment. relabeling to ensure accuracy, i think, is probably a reasonable path. what this highlights is the
5:40 pm
discussion still to be had and the urgency for that conversation to take place. >> and i will just say, mr. hartzog, i agree with your point about the russian government or the saudi government as potential interferers. they are not entitled to the protections of our bill of rights when they are destroying those rights and purposely trying to take advantage of a free and open society to decimate our freedoms. so there's a distinction to be made in terms of national security, and that rubric of national security, which is part of our framework, applies with great force in this area. and that is different from
5:41 pm
a president or a candidate putting up an ad that in effect puts words in the mouth of another candidate. and as you may know, we heard introductory remarks from me at an earlier hearing that were impersonations, taken from my comments, taking my voice from speeches that i made on the floor of the united states senate, with content generated by chatgpt that sounded exactly like something i would say, in a voice that is indistinguishable from mine. i disclosed that fact, but in real time, as mark twain famously said, a lie travels halfway around the world before we all
5:42 pm
get out of bed, and we need to make sure there's action in real time. and i do mean real time: real time in a campaign is measured in minutes and hours, rather than days or months. >> thank you, senator. like you, i'm nervous about coming out and saying we are going to ban all forms of speech, particularly when you're talking about something as important as political speech, and like you, i worry about disclosure alone. earlier in this hearing it was asked what a half measure is, and that goes towards answering your question today. i think the best way to think about half measures is as an approach that is necessary but not sufficient, one that gives us the illusion that we have done enough. ultimately, this is the pivotal point: it doesn't address what got us
5:43 pm
in the first place. to help answer your question, one thing i would recommend is bringing lots of different tools -- and i applaud your bipartisan framework for bringing lots of tools to bear on this problem -- and thinking about the role that surveillance advertising plays in empowering a lot of these harmful technologies, allowing the lie to be created, to flourish, and to be amplified. i think about rules and safeguards to help limit those financial incentives, borrowing from standard principles of accountability: things like using disclosure where it is effective, and where it's not effective, you have to make it safe, and if you can't make it safe, it shouldn't exist. >> i think i'll turn to senator hawley, but i think this is a
5:44 pm
real conundrum. we need to do something beyond half measures. we can't delude ourselves, with a false sense of comfort, into thinking we have solved the problem if we don't provide effective enforcement, and to be very blunt, the federal election commission often has been less than fully effective in enforcing rules relating to campaigns. there again, an oversight body with strong enforcement authority and the will to act is going to be very important if we are going to address this problem. senator hawley. >> mr. smith, let me go back to something you said. you talked about wendy's automating the drive-thru and
5:45 pm
suggesting this is a good thing. i just want to press on that a little bit. is it a good thing that workers lose their jobs to ai, whether it's at wendy's or at walmart or at the local hardware store? your comment was there's no creativity involved in taking orders at the drive-thru. that is a job, oftentimes the first job, for younger americans. in this economy, where the wages of blue-collar workers have been flat for 30 or 40 years and running, what worries me is that all the time we hear that jobs that don't have creativity don't have value. i'm scared to death that ai will replace lots of jobs that tech types think aren't creative, and leave even more blue-collar workers
5:46 pm
without a place to turn. my question to you is, can we expect more of this, and is it really progress for folks to lose those kinds of jobs? it's not the best paying job in the world, but at least it's a job, and we want to see more of these jobs. >> first, i didn't say it was a good or a bad thing. i identified that job as one that likely would be automated. but stepping back, i think your question is critically important. let's first reflect on the path we have had: 200 years of automation that have impacted jobs, sometimes for the better, sometimes for the worse. in wisconsin, where i grew up, and missouri, where my father grew up, if you go back 150 years, it took 20 people to harvest an acre of wheat or corn, and now it takes one, so 19 people don't work on that acre anymore.
5:47 pm
that's been an ongoing part of technology. the real question is this: how do we ensure technology advances so that we help people get better jobs, get the skills they need for those jobs, and hopefully do it in a way that broadens economic opportunity rather than narrows it? the thing that we should be the most concerned by -- since the 1990s, and i think this is the point you are making -- is that if you look at the flow of digital technology, fundamentally we have lived in a world where it has widened the economic divide. those people with a graduate education have seen their incomes rise in real terms. those people with a high school diploma or less have seen their income level actually drop compared to where it was in the 1990s. so what do we do now?
5:48 pm
i would posit what i think our goal should be: can we use this technology to help advance productivity for a much broader range of people, including people who didn't have the good fortune to go where you or i went, to college or law school, and can we do it in a way that not only makes them more productive but lets them reap some of the dividends of that productivity for themselves at a growing income level? i think it's that conversation that we need to have. >> i agree with you, and i hope that's what ai can do. you talked about the farm, and you said it takes 20 people to do what one person can do. it used to take thousands of people to produce textiles and furniture, where now it's zero. we can tell the tale in different ways. i'm not sure seeing working-class jobs go overseas or be replaced entirely is a success story.
5:49 pm
it's not a success story. i'd argue more broadly, our economic policy in the last 30 years has been downright disastrous for working people. tech companies and financial institutions, certainly banks and wall street, have reaped huge profits while blue-collar workers can barely find a good paying job. i don't want ai to be the latest accelerant of that trend. i don't want every service station in the country to be manned by no one, so no one can get their foot in the door and start the climb up the ladder, and that worries me. let me mention something else: you mentioned national security, critically important. there's no national security threat that is more significant to the united states than china. let me just ask you: is microsoft too entwined with china? you have had microsoft research asia in beijing since the late 1990s, you have centers in shanghai and
5:50 pm
elsewhere. you've got all kinds of cooperation with chinese state-owned businesses. i'm looking at an article here from protocol magazine where one of their contributors said microsoft had been the alma mater of chinese big tech. are you concerned about your degree of entanglement with the chinese government? do you need to be decoupling in order to make sure our security is protected? >> in some technology, microsoft is the alma mater of technology in every country in the world, because of the role that we have played over the last 40 years. but when it comes to china today, we do need to have very specific controls on who uses our technology, and for what, and how. that's why we don't, for example,
5:51 pm
do quantum computing or provide facial recognition services or services focused on synthetic media, among a whole variety of things, while at the same time, when starbucks has stores in china, i think it's good they can run their services in our data center rather than a chinese company's data center. >> on facial recognition: in 2016 your company released a database of 10 million faces without the consent of the folks who were in the database. you eventually took it down, although it took three years. chinese firms used that database to train much of their facial recognition software and technology. isn't that a problem? you said microsoft may be the alma mater of many companies in ai. china is running concentration camps using digital technology like we have never seen before. isn't it a problem for your company to be in any way involved in that? >> we don't want to be involved in that in any
5:52 pm
way. >> are you going to close your centers in beijing and shanghai? >> i don't think that will accomplish that. >> but you are running thousands of people through your centers and into the chinese government and state-owned enterprises. isn't that a problem? >> first of all, there's a big premise there, and i don't embrace the premise of what we are doing. >> which part is wrong? >> the notion that we are running thousands of people through the chinese government. >> i thought you had 10,000 employees in china, recruited from chinese state-owned agencies, chinese state-owned businesses. they work for you and then they go back to the state-owned entities. >> we have employees in china; in effect we have that number. to my knowledge, that is not where they are coming from and that is not where they are going. we are not running that kind of a -- and it's all about what we do and who we do it with.
5:53 pm
i think that's of paramount importance and that's what we are focused on. >> do you condemn what the chinese are doing to the uighurs in xinjiang province? >> we do everything we can to make sure our technology is not used in any way for that kind of activity, in china and around the world, by the way. >> but do you condemn it, to be clear? >> yes. >> what are the safeguards you have in place so your technology is not further enabling the chinese government, given the number of people you employ there? >> take something like facial recognition, which is at the heart of your question. we have tight controls that limit the use of facial recognition in china, including controls that in effect make it difficult if not impossible to use in any real-time surveillance at all. and by the way, one thing we should remember: the u.s. is a leader in many ai fields. china is a leader in facial recognition technology and ai.
5:54 pm
>> in part because of the information you helped them acquire, no? >> it's because they have the most data. >> yeah, but you gave it to them. you don't think that had anything to do with it? >> when you have a country of 1.4 billion people and you decide to have facial recognition used in so many places, it gives that country massive data. >> are you saying the database that microsoft released in 2016 -- are you saying that wasn't used by the chinese government? >> i am not familiar with that, but i would be happy to provide you with information. but my goodness, the advances in facial recognition technology -- if you go to another country where they are using facial recognition technology, it is highly unlikely that it's american technology. it's highly likely that it's
5:55 pm
chinese technology, because they are such leaders in the field, which i think is fine. if you want to think of a field where the united states is not going to be the technology leader, i would put facial recognition technology on that list. >> how much money has microsoft invested in ai development in china? >> i don't know, but i will tell you this: the revenue we make in china, which accounts for about one out of every six humans on this planet, is 1.5% of our total revenue. it's not the market for us that it is for other industries or some other tech companies. >> it sounds like you can afford to decouple. >> but is that the right thing to do? >> yes. with a regime that's fundamentally inflicting the kinds of atrocities on its citizens that you alluded to, and running modern-day concentration camps, i think it is.
5:56 pm
>> there are two questions that are worthy of thought. number one: do we want general motors to sell and manufacture cars -- let's say sell cars -- in china, and do we want to create jobs for people in michigan and missouri so cars can be sold in china? if the answer to that is yes, then think about the second question: how do you want general motors in china to run its operations, and where would you like them to store the data? would you like it to be in a secure data center run by an american company, or would you like it to be run by a chinese company? which will better protect general motors' trade secrets? we should be there so we can protect the data of american companies, european companies, japanese companies. even if you disagree on everything else, that i believe serves this country well. >> i think you are doing a lot more than just protecting data
5:57 pm
in china. you have major research centers, tens of thousands of employees. and your question, do i want general motors to be building cars in china? no, i don't. i want them to make cars here in the united states with american workers. and do i want american companies in any way assisting the chinese government and their tactics? senator ossoff, would you like for me to yield to you now? >> i have been very hesitant to interrupt this conversation. it's very interesting. i'm going to call on senator ossoff, and then i have a couple of follow-up questions. >> thank you, mr. chairman. thank you all for your testimony. getting down to fundamentals, mr. smith: if we are going to move forward with a legislative framework, we have to define clearly the legislative text and what it is we are regulating.
5:58 pm
what is the scope of regulated activity, technologies, products? how should we consider that question, and how do we define the scope of technologies, the scope of services, and the scope of products that should be subject to a regime of regulation that is focused on artificial intelligence? >> i think there are three layers of technology on which we need to focus the time and scope of legislation. first is the area that has been the central focus of 2023 in the executive branch and here on capitol hill: the so-called frontier foundation models that are the most powerful for generative ai. in addition, there are the applications that use ai, or as senators blumenthal and hawley have said, that deploy ai. if there's an application that's
5:59 pm
built on that model that we consider to be a high-risk scenario -- where it makes a decision that would have an impact on, say, privacy rights, civil liberties, or the rights and needs of children -- i think we need to think hard and have broad regulation that is effective in protecting americans. the third layer is the data center infrastructure, where these models and these applications are deployed. we should ensure those data centers are secure, that there are cybersecurity requirements that they need to meet, and we should ensure there are safety systems at one, two, or all three levels if there is an ai system that's going to automate and control something like critical infrastructure, such as the electrical grid. those are the areas where we would see clear thinking and a lot of effort to learn and apply the details and focus there.
6:00 pm
>> as more and more models are trained and developed to higher levels of capability, there will be a proliferation of models -- perhaps not the frontier models, or those at the leading edge that use the most compute of all, but powerful enough to have serious implications. is the question which models are the most powerful at this moment in time, or is there a threshold capability or power that should define the scope of regulated technology? >> you have just posed one of the critical questions that frankly a lot of people inside the tech sector and across the government are working to answer. the technology is evolving, and the conversation needs to evolve with it.
6:01 pm
let's just pause here. there's something like gpt-4 from openai. let's posit what it can do: 10,000 things really well. it's expensive to create, and it's relatively easy to regulate in the scheme of things, because there are one, two, or ten of them. but now let's go to where you were going, which is what the future brings in terms of proliferation. imagine there is a model from an academic at the professor's school. it's not going to do 10,000 things well; it's going to do four things well. it requires fewer gpus, and it won't require as much data. let's imagine it will be used to create the next virus that spreads around the planet. then you say we need to ensure there is safety architecture and controls around that as well. that's the conundrum, and that's
6:02 pm
why this is a hard problem to solve. it's why we are trying to build safety architecture in our data centers, so open-source models can run in them and still be used in ways that will prohibit that kind of harm from taking place. as you think about a licensing regime, this is one of the hard questions: who needs a license? you don't want to make it so hard that only a small number of big companies can get one, but you also need to make sure you aren't requiring people to get a license when they don't need one for what they are doing. the beauty of the framework, in my view, is that it starts to frame the issue. >> is it a license to train a model to a certain level of capability? is it a license to sell or license access to that model? or is it a license to purchase or deploy that model? who is the licensed entity? >> that's another question that
6:03 pm
can be answered in different ways for different scenarios. mostly, i would say it should be a license to deploy. there may well be obligations to disclose to an independent authority when the training run begins, and again when the training run ends, so an oversight body can follow it, just the way they might when a company is building a new commercial airplane. the good news is there's an emerging foundation of, call it best practices, for how a model should be trained, what kind of testing there should be, what harms should be addressed -- and that's a big topic. >> when you say a license to deploy: for example, if the microsoft office product suite wishes to use a gpt model for some user-serving purpose within your suite, would you need a license to deploy gpt in that way? or would the maker of gpt
6:04 pm
require a license to offer it to microsoft? putting aside whether or not this is a plausible scenario, the question is: what is the structure of a licensing arrangement? >> imagine that, say, boeing builds a new plane. before they can sell it to united airlines, the faa will certify it as safe. now imagine we are at gpt-12, or whatever you want to name it. before that gets released for use, i think you can imagine a licensing regime where you would say it needs to be licensed when it's certified as safe. then you have to ask yourself how you make that work so we don't have the government slowing everything down. i would say there are three things. first, you need a common foundation as to how training should take place. second, you
6:05 pm
need national regulation, and third, given we have a global economy and countries where we want these things to work, you probably need a level of international coordination. i would say look at the world of civil aviation; that's fundamentally how it has worked since the 1940s. let's try to learn from it and see how we might apply something like that, or other models, here. >> mr. dally, how would you respond to that question, in a field where the technical capabilities are accelerating at a rapid rate, future rate unknown? where, and according to what standard or metric or definition of power, do we draw the line for what requires a license for deployment and what can be freely deployed without oversight by the government? >> it's a tough question. you have to balance two
6:06 pm
important considerations. the first is the risk presented by a model of whatever power, and on the other side is the fact that we would like to ensure the u.s. stays ahead in this field, and we want to make sure individual academics and entrepreneurs with a good idea can move forward, innovate, and deploy models without huge barriers. so it's the capability, and it's the risk presented by its deployment without oversight. >> the thing is, if we are going to have legislation, the legislation is going to have to in words define the scope of regulated products. so we are going to have to bound that which is subject to a licensing arrangement, or wherever we land, and that which is not. >> it's dependent on the application. if you have a model which is
6:07 pm
basically determining a medical procedure, there's high risk with that, because the patient outcome depends on it. if you have another model which is controlling the temperature in your building and it gets it a little bit wrong, maybe a little too much power is used or you are not as comfortable as you might be, but it's not a life-threatening situation. you need to regulate the things that are of high consequence if the model goes awry. >> just tap the gavel when you want me to stop. professor hartzog -- and i'd be curious to hear from others, with respect to the chairman's follow-ups -- how does any of this work without international law? isn't it correct that a model, potentially a powerful and dangerous model, for example one that purports to unlock chemical and/or
6:08 pm
virological capabilities for a relatively unsophisticated actor, once trained, is relatively easy to transport? and without an international legal system, and absent a level of surveillance into the flow of data across the internet that seems inconceivable, how can it be controlled? >> it's a great question, senator, and with respect to being efficient in my answer, i will simply say there are going to be limits, even assuming we do get international cooperation, which i agree is needed. we have only started thinking about ways in which -- for example with the eu, which has deployed significant ai regulation -- we might design frameworks that are compatible with that.
6:09 pm
ultimately, what i worry about is deploying a level of surveillance never before seen in an attempt to perfectly capture the entire chain of ai. >> i share that concern about privacy, which is why i raised the point. how can we know when folks are loading a model onto, perhaps, a device that is not online anymore? >> there are limits. >> do you want to take a stab at it? >> senator, there's a need for international coordination, and that can come from like-minded governments, perhaps a smaller group of governments in the initial years. i do think there's a lot we can learn; we were talking with senator blackburn about the swift system for financial transactions, and somehow we have managed globally and in the united states for 30 years with know your
6:10 pm
customer requirements and obligations for banks. money is moved around the world, and nothing is perfect -- that's why we have laws -- but we can do a lot of good to protect against the criminal abuses that would cause concern. >> these models are very portable. you can put the parameters of even the large ones on a large usb drive and carry it with you somewhere. you can also train them in any data center in the world. it is the use of the model, the deployment, that you can effectively regulate. the creation will be hard to regulate, because if people can't create it here, they will create it somewhere else. we also have to be very careful if we want to stay ahead: we would love those people training these models to be in the u.s. and not to go somewhere else. >> thank you, mr. chairman. >> i hope you are okay with this.
6:11 pm
we've been at it for a while, and we are very grateful. thank you very much; it's been very useful. i want to follow up on a number of things. first of all, on the international issue, there are examples and models for international cooperation, and mr. smith, you mentioned civil aviation. the 737 max, i think i have it right, when it crashed, was a plane that had to be redone in many respects, and companies and airlines around the world looked to the united states for that redesign and then approval. in
6:12 pm
aviation and atomic energy, international cooperation has not been completely effective, but it has worked in many respects. i think there are international models, and the united states is a leader by example. best practices are used when we support them, and the eu is ahead of us in many respects regarding social media; we are following their leadership by example. i want to come to this issue of having centers, whether here or in china or for that matter elsewhere in the world, requiring safeguards so we are not allowing our technology to be misused in china against the uighurs, and preventing that
6:13 pm
technology from being stolen, or the people who trained there from putting it to bad uses. are you satisfied that this is possible -- the fact that you are doing it in china -- and that you are preventing the evils that could result from it? >> i would say two things. first, i feel good about our track record, but vigilance is needed, and we need to be vigilant about what services we offer, to whom, and how they are used. i would take from that what i think is probably the conversation we will need to have as a country about export controls. there are three fundamental areas of technology where the united states today, i would argue, is the global leader. first, the gpu chips from a
6:14 pm
company like nvidia; second, the cloud infrastructure of a company like microsoft; and third, the foundation models from a firm such as openai -- and other companies that are global leaders as well. i think if we want to feel good that we are creating jobs by manufacturing here, as you said, senator hawley, which i completely endorse, and that these technologies are being used properly, we probably need an export control regime that weaves those three things together. for example -- let's set aside china for a moment -- there might be another country where you all and the executive branch would say: we have some qualms, but we want u.s. technology to be present, and we want u.s. technology to be used properly, in a way that will make you feel
6:15 pm
good. youmi might say export ships to that country to be used in the data center of a company that we trust that is licensed per that use with the model being used in a secure way in that data center with a no your customer requirements and guardrails that would be off-limits. that mayay well be where government policy needs to go and how the tech sector needs to support the government and work with the government to make it a reality. >> i think that answer is very insightful and raises other questions. i would to analogize the situation to include proliferation. we cooperate over safety in some respects with other countries some of them adversaries but we
6:16 pm
still do everything in our power to prevent american companies from helping china or russia with their respective programs. part of that effort is through sanctions, and we have limits and rules around selling and sharing certain chokepoint technologies related to nuclear enrichment, as well as biological warfare, surveillance, and other national security risks. our framework envisions sanctions and safeguards precisely in those areas, exactly as we have been discussing here. last october the biden administration used existing legal authority as a first step in blocking the sale of some high-performance chips, and the equipment to make
6:17 pm
those chips, to china, and our framework calls for export controls and sanctions. i guess the question -- which we will be discussing, and won't be resolving today, regrettably -- is one where we would appreciate your input going forward, and we invite any of the listening audience, here in the room or elsewhere, to participate in this conversation on this issue and others: namely, where do we draw the line on the hardware and technology that we are allowed to provide anyone else in the world, any adversary or friend, because as you observed, mr. dally, in effect, once it's out there it's easily proliferated. >> if i could comment on this: if
6:18 pm
you drew an analogy to nuclear regulation and mentioned the word chokepoint, the difference here is there really isn't a chokepoint, and there's a careful balance to be made between limiting where our chips go and what they are used for, and disadvantaging american companies and the whole food chain that feeds them, because we aren't the only people who make chips that can do ai. i wish we were, but we are not. there are companies around the world -- american companies, companies in asia, companies in europe -- and if people can't get the chips they need to do ai from us, they will get them somewhere else. what will happen then? it turns out the chips aren't the only thing that makes them useful; it's the software, and if all of a sudden the standard chips for
6:19 pm
people to do ai come from -- pick a country -- singapore, all of a sudden all the software engineers start writing the software for those chips, those will become the dominant chips, and the leadership of that technology will have shifted from the u.s. to singapore or whatever country would be dominant. we have to be careful to balance the national security considerations and the technology considerations against preserving the u.s. lead in this technology area. >> it's a really important point. what you have is the argument and the counterargument, and what mr. dally raises is also important. sometimes you can approach this and say: look, if we don't provide this to somebody, somebody else will. i get it, but at the end of the day, whether you are a company or
6:20 pm
a country, i think you do have to have clarity about how you want your technology to be used. i fully recognize that there may be a day in the future, after i retire from microsoft, when i look back, and i don't want to say: oh, we did something bad, because if we didn't, someone else would have. i want to say: no, we had values and we had principles, we had in place guardrails and protections, and we turned down sales so that somebody couldn't use our technology to abuse others' rights, and if we lost business, that's the best reason in the world to lose business. and what's true of companies is true of the country. that's why this issue is complicated: how to strike the best balance.
6:21 pm
>> professor hartzog, do you have any comment? >> i think that was well said, and i would add it's also worth considering, in this discussion about how we safeguard this incredibly dangerous technology and the risks that could result if it were, for example, proliferated: if it's so dangerous, we need to look at the question again. i bring it back to thinking not only about how we put guardrails on, but how we lead by example, which is what you brought up, and it's really important. we don't want to win the race by violating human rights; that's not the way we want to do it. >> it is not simply a matter of importing chips from the united states and building your own data center. most ai companies get capabilities from cloud providers, and we need to make sure cloud providers don't circumvent our export
6:22 pm
sanctions. you raised the know your customer rules: know your customer would require ai cloud providers, in whose data centers models are deployed, to know which companies are using those models. if someone is using a supercomputer, you need to know it isn't the people's liberation army, and that it isn't being used to run facial recognition on opponents in iran, for example. >> i do think you have made a critical point, which is that there is a moral imperative, and there is a lesson in the history of this great country and the history of the world: when we lose our
6:23 pm
moral compass, we lose our way. when we bow to economic or political interests, sometimes it's very short-sighted and leads into a geopolitical swamp and quicksand. these kinds of issues are very important to keep in mind, and we lead by example. i want to offer a final point -- and if senator hawley has questions we will let him ask them -- on this issue of worker displacement. i mentioned at the outset, i think we are on the cusp of an industrial revolution. we have seen this movie before, as they say, and it didn't turn out that well: the industrial revolution, where workers were displaced, and those
6:24 pm
textile factories and the mills in this country and all around the world went out of business or replaced the workers with automation and mechanization. i would respond by saying we need to train those workers. we need to find an educational solution, and it need not even be a four-year college. in my state of connecticut, the defense contractors are going to need thousands of welders, electricians, tradespeople of all kinds, who will have not just jobs but careers that require skills that frankly i wouldn't begin to know how to perform.
6:25 pm
i think there are tremendous opportunities, not just in the creative sphere that you mentioned, where we may think the higher human talents come into play, but in all kinds of jobs that are being created daily already in this country. the most common comment i hear from businesses is: we can't find enough people to do the jobs we have right now. we can't find people to fill the openings that we have. and that, in my view, may be the biggest challenge for americans today. >> that is such a good point, and it's worth putting in perspective everything we think about jobs, because i wholeheartedly endorse what senator hawley said: we want people to have jobs and
6:26 pm
earn a living, etc. first, let's consider the demographic context in which jobs are created. the world has just entered a shift of the kind that it literally hasn't seen since the 1400s: populations that are leveling off, or in much of the world now declining. one of the things we look at for every country is whether the working age population is increasing or decreasing, and by how much. from 2020 to 2025, the working age population of this country -- people aged 20 to 64 -- is only going to grow by 1 million people. the last time it grew by that small a number, do you know who was president of the united states? john adams. that's how far back you have to go. and if you look at a country like italy, take that group of people: over the next 20 years it's going to decline by 41%, and
6:27 pm
what's true of italy is true to almost the same degree in germany, and it's already happening in japan. we live in a world where, for many countries, we suddenly encounter what you actually find, i suspect, when you go to hartford or kansas city: people can't find enough police officers, enough nurses, enough teachers, and that is a problem we desperately need to focus on. so how do we do that? i think ai is something that can help, even in something like call centers. one of the things that's fascinating to me: we have more than 3000 customers around the world running this technology. at a bank in the netherlands, you go to a call center and the desks of the workers look like a trading floor on wall street: six different terminals. somebody calls, and they are desperately trying to find the answer to a question. with
6:28 pm
something like gpt with our services, six terminals can become one. someone who is working there can ask a question, and the answer comes up, and what they are finding is that the person who is answering the phone, talking to a customer, can now spend more time concentrating on the customer and what they need. i appreciate all the challenges. there is so much uncertainty, and we desperately need to focus on skills, but i really do hope that this is an era where we can use this to frankly help people fill jobs. and let's just put it this way: i'm excited about artificial intelligence, and i'm even more excited about human intelligence. if we can use artificial intelligence to help people exercise more human intelligence and earn more money doing so, that would be something that
6:29 pm
would be way more exciting to pursue than everything we have had to grapple with for the last decade around social media.
>> our framework very much focuses on workers and providing training. it may not be something this entity will do, but it's definitely something that has to be addressed, and it's not only displacement, but working conditions and opportunities within the workplace, to protect civil rights. we haven't talked about it in detail, but we deal with it in our framework in terms of transparency in decision-making. and china may try to steal our technology, but they can't steal our people.
6:30 pm
china has its own population challenges, with the need for more people with skills. as i say about connecticut, we don't have gold mines or oil wells, but what we have is an able workforce, and that's going to be the key to the american economy, if ai can promote that development. you all have been really patient, and so has our staff. i want to thank our staff, and, most importantly, we will continue these hearings. it is so helpful to us. i can go down our framework and tie the proposals to the specific comments made by sam altman or others, who will enrich and expand our framework with
6:31 pm
the insights that you have given us. we are going to continue our bipartisan approach here. you made that point, mr. smith: we have to be bipartisan for this to be adopted. thank you all. this hearing is adjourned.