
tv   Hearing on Deployment Use of AI in National Security  CSPAN  May 27, 2024 3:09am-6:00am EDT

3:09 am
3:10 am
hill where the house committee is holding a hearing on how artificial intelligence can help defend and secure the u.s. live coverage here on c-span3.
3:11 am
without objection, the chair may declare the committee in recess at any point. the purpose of this hearing is to receive testimony from private sector stakeholders relating to the opportunities and challenges presented by the emergence of artificial intelligence and to discuss how the department can leverage a.i. technologies in support of the homeland security mission. i now recognize myself for an opening statement. in this era of rapidly advancing technology, i am especially proud to live in a nation of innovators, some of whom join us today. today, american ingenuity is paving the way once again. artificial intelligence, or a.i., promises to transform the global economy as we know it.
3:12 am
a.i. has the potential to create new jobs, catalyze productivity gains for americans, and of course, protect our men and women in uniform and law enforcement. throughout this congress, committees in both chambers have convened numerous hearings to understand the countless opportunities and challenges a.i. presents. like cybersecurity, a.i.'s impact is a complex and crosscutting issue that cannot be handled by one jurisdiction alone. therefore, we are here to examine what i believe to be one of the most promising areas in which to expand our use of a.i.: the security and defense of our homeland. the committee on homeland security has an obligation to make sure we harness it correctly. as with any technology, it presents new risks, and we must take the time to understand them. this includes prioritizing safety and security throughout a.i. development, deployment, and use.
3:13 am
it also requires us to treat a.i. with appropriate nuance so that we understand the impact of proposed regulatory measures on our businesses. today's full committee hearing follows up on a productive cybersecurity and infrastructure protection subcommittee hearing led by the subcommittee chairman last december. the subcommittee specifically considered the role of dhs in securing a.i., a topic we will continue to explore today. as that hearing reaffirmed, the threats facing our country are increasingly complex, and dhs plays a critical role in keeping americans safe and our country secure. dhs has a broad mission and has explored and even implemented a.i. for specific purposes aligned with its unique missions. for example, u.s. customs and border protection has used a.i.-powered systems to monitor cameras, which help identify suspicious activity and unauthorized crossings in real time. the transportation security administration is currently
3:14 am
examining the ways in which a.i. can enhance its security screening, including using a.i. to augment its x-ray imaging of travelers' carry-on baggage. it is also deploying technology powered by a.i. to identify security threats among the traveling public and enhance the prescreening process. while these a.i.-powered systems offer the promise of increased security and efficiency, they also bring significant risks that congress must carefully assess. for instance, a.i.-powered facial recognition presents substantial privacy concerns. we must ensure that the use of a.i.-powered facial recognition by tsa is balanced with strong
3:15 am
protections of privacy, civil liberties, and ethical standards. furthermore, u.s. immigration and customs enforcement is using a.i. to help identify and track illegal activities such as human trafficking and smuggling by analyzing large data sets and detecting patterns. the cybersecurity and infrastructure security agency is carefully examining the risks and opportunities presented by a.i. and the ways it can be leveraged to enhance our resilience against cyber threats. in the years ahead, it will play a critical role in addressing and managing risks at the nexus of a.i., cybersecurity, and critical infrastructure. considering the widespread push for a.i. adoption within dhs, it is critical that the department collaborate with relevant stakeholders, including those from the private sector, to manage a.i.'s complexities and risks. in addition to domestic concerns relating to the emergence of a.i., we must also consider the broader strategic implications. our nation's primary strategic
3:16 am
adversary, the people's republic of china, has made a.i. development a national priority and is investing heavily in its research, talent, and infrastructure. this matters not only economically, but also in terms of our national security. in fact, the 2024 threat assessment warns that, quote, malicious cyber actors have begun testing the capabilities of a.i.-developed malware and a.i.-assisted software development, technologies that have the potential to enable larger-scale, faster, more efficient, and more evasive cyber attacks against targets including pipelines, railways, and other infrastructure. this is extremely concerning, and as complex as these threats are, our efforts to combat them will be even more challenging if our adversaries outpace our own development and innovation. for these reasons, it is important for congress, dhs, and the private sector to work
3:17 am
together to ensure that we remain at the forefront of a.i. innovation while safeguarding our national security, economic competitiveness, and civil liberties. today, we will hear from a panel of experts with insights on the steps that we can take to ensure that the a.i. we use will be secure. to our witnesses, thank you for being here today and for your efforts to educate members of this committee and the american people on how we can responsibly advance a.i. innovation. i look forward to your testimony. i now recognize the ranking member, mr. thompson, for his opening statement. >> thank you very much, mr. chairman. good morning to our witnesses. i would like to thank you, mr. chairman, for holding this important hearing on the intersection of artificial intelligence and homeland security. artificial intelligence is not new. the department of homeland
3:18 am
security and its components have a long history of trying to understand how to most appropriately leverage the capabilities a.i. provides. the release of chatgpt in november of 2022 made clear that a.i. has transformative potential, and it accelerated efforts by the administration and congress to ensure the united states continues to lead the world on the responsible development and use of a.i. as we consider how a.i. can better secure the homeland, we must keep three critical principles in mind. first, we must ensure that the a.i. models we use and the data used to train them do not reinforce existing biases. that requires that a.i. used by the government be developed pursuant to specific policies designed to eliminate bias and be tested and retested to ensure it is not having that effect. eliminating bias with a.i. also requires a diverse a.i.
3:19 am
workforce, comprised of people from a variety of backgrounds who can identify potential biases and prevent biases from being encoded into the models. second, the government must rigorously assess appropriate use cases for a.i. and ensure that the deployment of a.i. will not jeopardize the civil rights, civil liberties, or privacy of the public. law enforcement and national security agencies in particular must implement an exacting review of potential infringements on those fundamental democratic principles. moreover, it is essential that the workforce be included in decision-making processes on how a.i. will be deployed. the workforce is in the best position to understand capability gaps and where a.i. can be effective. a.i.
3:20 am
is also a tool to help the workforce carry out their jobs more effectively. it is not, and should not ever be, a replacement for people. finally, the a.i. tools we use must be secure. security practices that already exist can be adapted to secure a.i. i commend the cybersecurity and infrastructure security agency, commonly called cisa, for working with the private sector to encourage the adoption of secure-by-design principles in the development of a.i. moving forward, we must determine vulnerabilities unique to a.i. and work together to address them. i commend president biden on last year's executive order on a.i., which put the government on the path to developing and
3:21 am
deploying a.i. in a manner consistent with these principles. as dhs continues to assess how it can use a.i. to carry out its missions, from cybersecurity to disaster response to aviation security, i am confident that it will do so in a manner that incorporates feedback and protects civil liberties and privacy. we cannot allow our optimism about the benefits of a.i. to short-circuit our scrutiny as we evaluate this new technology. at the end of the day, bad technology is bad for security. as we consider the potential benefits a.i. presents for the mission, we must also consider the new threats it poses. a.i. in the hands of our adversaries can jeopardize the security of federal and critical infrastructure networks, as well as the integrity of our elections.
3:22 am
we know that china, russia, and iran have spent the past four years honing their abilities to influence our elections, sow discord among the american public, and undermine confidence in our election results. advances in a.i. will only make their job easier, so we must redouble our efforts to identify this content and empower the public to identify malicious foreign influence operations. i look forward to a robust conversation about how the department of homeland security can use a.i. strategically to carry out its mission more effectively. i look forward to the witness testimony. mr. chairman, i yield back the balance of my time. >> i want to thank the ranking
3:23 am
member for his comments. members of the committee, i want to remind you that opening statements may be submitted for the record. i am pleased to have a distinguished panel of witnesses before us today, and i ask that our witnesses please stand and raise your right hand. do you solemnly swear that the testimony you give before the committee on homeland security of the united states house of representatives will be the truth, the whole truth, and nothing but the truth, so help you god? thank you, please be seated. i would now like to formally introduce our witnesses. mr. troy demmer is the cofounder and chief product officer of gecko robotics, a company combining climbing robots and a.i.-powered software to help ensure the reliability and sustainability of critical infrastructure. today, its innovative climbing robots capture data from the real world that was never before accessible, from pipelines, missile silos, and other critical asset types. gecko's a.i.-driven software platform enables human experts to
3:24 am
contextualize the data and translate it into operational improvements. mr. michael sikorski is the chief technology officer and vice president of engineering at palo alto networks. he has over 20 years of experience working on high-profile incidents and leading research and development teams, including at national security agencies. mr. sikorski is also an adjunct professor at columbia university. mr. ajay amlani -- how do you pronounce your name, sir? >> ajay amlani. >> got it. thank you. he currently serves as the president and head of the americas for iproov, a global provider of biometric authentication technology for online enrollment and verification. in addition, he is a well-recognized identity expert and serves as a strategic adviser for industry leaders working in artificial intelligence and e-commerce. our final witness, mr. jake laperruque, is the deputy director of the center for democracy and
3:25 am
technology's security and surveillance project. prior to joining cdt, jake worked as senior counsel at the constitution project at the project on government oversight. he also previously served as a program fellow at the open technology institute and as a law clerk at the senate subcommittee on technology and the law. again, i thank our witnesses for being here, and i now recognize mr. demmer for five minutes to summarize his opening statement. >> good morning, chairman green, ranking member thompson, and members of the committee. thank you for the opportunity to join you today. my name is troy demmer and i am the cofounder of gecko robotics, a company that uses robots, software, and a.i. to change how we understand the health and integrity of physical infrastructure. back in 2013, gecko started with a problem: the power plant near where my cofounder and i went to college kept shutting down due to critical assets failing.
3:26 am
the problem was obvious in the first meeting: they had almost no data. the data that had been collected was collected manually, with a gauge reader taking single-point sensor measurements. furthermore, the data was often collected by workers working off of ropes at elevated heights or in confined spaces, resulting in few measurement readings. that meant the power plant had to make reactive decisions rather than proactive ones. gecko was founded on the idea of using rapid technological advances to get better data and generate better outcomes. we build climbing robots armed with various sensor payloads that can gather 1,000 times more data at 10 times the speed compared to traditional methods. we also have a software platform that takes that data and combines it with other data sets to build a first-order understanding of the health of
3:27 am
critical infrastructure. where are the vulnerabilities? what do we need to do to be proactive, to fix problems before they occur? how do we prevent catastrophic disasters? those are the problems gecko is solving today for some of the most critical infrastructure that protects the american way of life. we are helping -- build the next generation of military equipment smarter. we are helping the u.s. air force create a digital baseline for the largest infrastructure program since the eisenhower interstate highway project. and we are working with various other critical public and commercial infrastructure projects across the country. in every one of these cases, the missing link is the data, and that brings us to the conversation we are having in today's hearing. a.i. models are only as good as the inputs they are trained on. trustworthy a.i. requires trustworthy data inputs: data inputs that are auditable and interrogatable, data inputs that provide a complete and undiluted answer to the questions we are asking. when it comes to our critical infrastructure, the infrastructure that powers the american way of
3:28 am
life and the protection of the homeland, those data inputs do not exist. without better data, even the most sophisticated a.i. models will be at best ineffective, and at worst, harmful. the way america collects data on infrastructure today hasn't fundamentally changed in 50 years. despite advances in data collection technologies like robots, drones, sensors, and smart probes, we are still largely gathering data manually, even on our most critical infrastructure like dams, pipelines, power plants, and more. to give you one more example, we have one customer who collects data both manually and with our robots. the manual process, using handheld sensors, collects a few hundred readings; the robots on the same asset collect more than 1 million. that is more than 2,600 times the data, data that multiplies the power of a.i. models. that is the scale of the difference between new
3:29 am
technology and the manual processes that we still largely rely on for critical infrastructure. without better data collection, a.i. will never meet its potential to secure the infrastructure that protects the homeland and the american way of life. as i conclude, i want to touch briefly on how we think about robotics and a.i. in the workforce. at gecko, we talk a lot about the workforce; it is a big priority for many of our partners. we have responded by hiring a significant number of former manual inspection workers and training them to operate our robots. more than 20% of our employees once did dangerous work at heights and are now operating robots. workers who built their careers inspecting assets are now helping our software developers build the tools to make that work better. thank you again for the opportunity to join today, and i look forward to your questions. >> thank you, mr. demmer. i now recognize mr. sikorski for five minutes to summarize his opening statement.
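Mr. Demmer's core argument, that sparse manual readings can miss a localized defect that a dense robotic scan will catch, can be sketched numerically. The asset dimensions, defect size, and reading counts below are hypothetical, chosen only to mirror the rough 2,600x ratio he cites; this is not Gecko's actual data or method.

```python
ASSET_LENGTH_MM = 100_000          # a hypothetical 100 m run of pipe wall
NOMINAL_THICKNESS_MM = 12.0
DEFECT_START, DEFECT_END = 41_260, 41_420   # a 16 cm thinned patch
DEFECT_THICKNESS_MM = 6.0

def thickness_at(x_mm: float) -> float:
    """Ground-truth wall thickness at position x (toy model)."""
    if DEFECT_START <= x_mm <= DEFECT_END:
        return DEFECT_THICKNESS_MM
    return NOMINAL_THICKNESS_MM

def survey(n_readings: int) -> float:
    """Sample thickness at n evenly spaced points; return the minimum seen."""
    step = ASSET_LENGTH_MM / n_readings
    return min(thickness_at(i * step) for i in range(n_readings))

manual_min = survey(400)        # sparse hand-held gauge survey
robot_min = survey(1_000_000)   # dense robotic crawl, ~2,600x more readings

print(f"manual survey min thickness: {manual_min} mm")  # reports nominal 12.0
print(f"robot survey min thickness:  {robot_min} mm")   # finds the 6.0 mm spot
```

The sparse survey's 250 mm spacing steps right over the thinned patch, so any model trained on that data would see a healthy asset; the dense survey resolves it.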
3:30 am
>> chairman green, ranking member thompson, and distinguished members of the committee, thank you for the opportunity to testify on the critical role that artificial intelligence plays in enhancing cybersecurity defenses and securing the homeland. my name is michael sikorski and i am the chief technology officer and vice president of engineering of unit 42, which is the threat intelligence and response division of palo alto networks. for those not familiar with palo alto networks, we are an american-headquartered company founded in 2005 that has since become the global cybersecurity leader. we support 95 of the fortune 100, the u.s. federal government, critical infrastructure operators, and a wide range of state and local partners. this means that we have deep and broad visibility into the cyber threat landscape. we are committed to being trusted national security partners with the federal government.
3:31 am
as my written testimony outlines, a.i. is central to realizing this commitment. we encourage everyone to embrace a.i.'s transformative impact for cyber defense. a reality of today's threat landscape is that adversaries are growing increasingly sophisticated, and a.i. will further amplify the scale and speed of their attacks. however, this fact only heightens the importance of maximizing the substantial benefits that a.i. offers for cyber defense. the a.i. impact for cyber defense is already very significant. by leveraging precision a.i., each day palo alto networks detects 2.3 million attacks that were not present the day before. this process of continuous discovery and analysis allows threat detection to stay ahead of the adversary, blocking 11.3 billion total attacks each day and remediating cloud threats 20 times faster. the bottom
3:32 am
line is that a.i. makes security data actionable for cyber defenders, giving them real-time visibility across their digital enterprises and the ability to prevent, detect, and respond to cyber attacks quickly. accordingly, palo alto networks firmly believes that to stop the bad guys from winning, we must aggressively leverage a.i. for cyber defense. my written testimony highlights a compelling use case of a.i.-powered cybersecurity that is showing notable results: up-leveling and modernizing the security operations center, also known as the s.o.c. legacy s.o.c.s force analysts to manually sift through alerts from many disparate tools. this creates an inefficient game of whack-a-mole, while critical alerts are missed and vulnerabilities remain exposed. we have seen transformative results from customer use of the a.i.-powered s.o.c. this includes the following: a reduction of time to respond
3:33 am
from 2 to 3 days to under --; a five-times increase in closeout rates; and a four-times increase in the amount of security data ingested and analyzed each day. outcomes like these are necessary to stop threat actors before they can encrypt systems or steal sensitive information, and none of this would be possible without a.i. this new a.i.-infused world we live in also necessitates what we like to call secure a.i. by design. organizations will need to: one, secure every step of the a.i. application lifecycle and supply chain; two, protect a.i. data from unauthorized access and leakage at all times; and three, oversee employee a.i. usage to ensure compliance with internal policies. these principles are aligned with and based on the security concepts already included within the a.i. risk management framework and
3:34 am
should be promoted for ecosystem-wide benefit. now, the issues we are discussing today are important to me also on a personal level. i have spent decades both as a cybersecurity practitioner working to stop threats and as an educator, training the cyber workforce of tomorrow. it is with that background that i can say confidently that homeland security, national security, and critical infrastructure resilience are being enhanced by a.i.-powered cyber defense as we speak, and we must keep the pedal to the metal because our adversaries are certainly not sitting on their hands. thank you again for the opportunity to testify, and i look forward to your questions and continuing this important conversation. >> thank you. i now recognize mr. amlani for his five minutes of opening statement. >> good morning, ranking member thompson and members of the committee. i have been building innovative solutions to help organizations
3:35 am
for the last 20 years. i serve as president and head of the americas at iproov. i started my federal service as a white house fellow to secretary tom ridge, the first secretary of homeland security, in the aftermath of the 9/11 attacks, at a time in which the federal government was rethinking how to manage its national security missions. a large part of that was finding new ways to benefit from the innovation happening in the private sector. over the past 20 years, i have forged partnerships between the federal government and the commercial sector that facilitate the utilization of commercial technology to augment national security initiatives. today, this committee is considering how to harness the power of a.i. as part of a multilayered defense against our adversaries. to best answer that question, we need to start with understanding how a.i. enables threat actors. what capabilities can dhs and its component agencies develop to combat these threats?
3:36 am
what actions can the department take to better work with industry as it promotes standards for a.i. adoption? a.i. exponentially increases the capabilities of, and the speed to deploy, new fraud and cyber attacks on the homeland. a.i. tools enable threat actors to dramatically shorten their development cycles. ultimately, a.i. technologies are unique in the way that they upskill threat actors. the actors themselves no longer have to be sophisticated. a.i. is democratizing the threat landscape by providing any aspiring cyber criminal easy-to-use, advanced tools capable of achieving sophisticated outcomes. crime-as-a-service offerings on the dark web are very affordable. the only way to combat a.i.-based attacks is to harness the power of a.i. in our cybersecurity strategies. at iproov, we develop a.i.
3:37 am
powered biometric solutions to answer a fundamental question: how can we be sure of someone's identity? iproov is trusted by financial institutions globally to verify that an individual is not only the right person, but also a real person. our technology is monitored and enhanced by an internal team of scientists specializing in computer vision, deep learning, and other a.i.-focused technologies. novel attacks are identified, investigated, and triaged in real time, and technology enhancements are continuous. this combination of human experts and a.i. is indispensable to harnessing a.i. in defending and securing the homeland. but equally important is the need for a.i.-based security technologies to be inclusive and to uphold privacy mandates by design. dhs and its components have embraced transparency and accountability, including performing routine self-assessments and collecting public input on matters of privacy protection and limitations on data use. those actions serve as a great
3:38 am
model for how dhs and other agencies should treat a.i. capabilities, both in adopting and in regulating a.i. the u.s. government has used biometrics in a growing number of programs over the past few decades to verify identity. with generative a.i., biometrics take on an expanded role of helping to ensure that someone is who they claim to be in digital ecosystems. for example, deepfakes and synthetic identities have recently become so realistic that they are imperceptible to the human eye. because of this, biometric verification plays a critical role in the security of the nation. congress should support the creation of more useful standards for systems and testing, and give dhs access to the best talent developing new technology tools, with the agility necessary to respond to the changing threat landscape. the silicon valley innovation
3:39 am
program is a very powerful model for acquiring the expertise of the nation's best engineering minds while also creating a collaborative testbed for proving new technologies. iproov has worked with -- in all phases of the program and can testify firsthand to the powerful impact that this program could have if expanded to scale with a broader group of stakeholders. another example that could be expanded upon to incorporate a wider range of perspectives is the work being done on biometric technologies to address future threats. in conclusion, we at iproov are completely focused on pioneering capabilities while collaborating with federal stakeholders to advance innovation. we seek to play a constructive role in responsible a.i. practices and hope the committee will see us as a resource as you consider a path forward. thank you, and i look forward to your questions. >> thank you, mr. amlani. i now recognize mr. laperruque to summarize his opening statement. >> chairman green, ranking member thompson, and members of
3:40 am
the committee, thank you for inviting me to testify on the topic of artificial intelligence and how we can ensure that its use aids america's national security as well as our values as a democracy. i am jake laperruque, deputy director of the security and surveillance project at the center for democracy and technology. cdt is a nonprofit, nonpartisan organization that has worked for nearly three decades to ensure that rapid technological advances such as a.i. promote our core values as a democratic society. a.i. technologies can only provide security if they are used in a responsible manner and, as chairman green said, treated with appropriate nuance. this is not only critical for keeping america safe, it is also necessary for protecting our constitutional values. today, i would like to offer a set of principles for the responsible use of a.i., as well as policy recommendations for the national security space. we must be wary that for a.i. technologies, garbage in will
3:41 am
lead to garbage out. too often, a.i. is treated as a sorcerer's stone that can turn lead into gold. in reality, it only performs as well as the data it is given. reckless deployment of a.i. technologies, such as using input data that is low quality or beyond the bounds of what any given system is designed to handle, will yield bad results. in the national security space, this can have dire consequences: wasted resources, and false alarms that leave genuine threats unattended. ensuring that a.i. is used responsibly is also critical to protecting our values as a democracy. a.i. is often framed as an arms race, especially in terms of national security, but we must take care in what we are racing towards. authoritarian regimes in china, russia, and iran have shown how a.i. technologies such as facial recognition can throttle dissent, oppress marginalized groups, and supercharge surveillance. truly winning the a.i. arms race does not mean deploying the
3:42 am
fastest tools on the broadest scale; it requires upholding our civil liberties. as ranking member thompson highlighted, responsible use requires exercising care from the creation and input of data into a.i. systems to the use of the results from those systems. to facilitate responsible use, government applications of a.i. should be centered on the following principles: a.i. should be built upon proper training data. it should be subject to independent testing. it should be deployed within the parameters it was designed for. it should be used by specially trained staff and corroborated by human review. it should be subject to strong internal governance regulations. it should be bound by safeguards to protect constitutional values, and it should be regulated by institutional mechanisms for transparency and oversight. although the degree of secrecy in the national security space makes upholding these principles especially challenging, we can and must find ways of promoting responsible use of a.i. cdt proposes two policies in furtherance of this goal.
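The "garbage in, garbage out" caution above can be made concrete with a small input-quality gate: before an a.i. system acts on a record, check that each feature falls inside the range observed in training, and route out-of-distribution inputs to human review instead of an automated decision. The feature names and bounds below are hypothetical, purely for illustration.

```python
def in_training_bounds(record: dict, bounds: dict) -> bool:
    """True if every feature lies inside the range observed in training data."""
    return all(lo <= record[k] <= hi for k, (lo, hi) in bounds.items())

# ranges observed in a hypothetical training set
TRAIN_BOUNDS = {"image_brightness": (0.2, 0.9), "face_pixels": (64, 1024)}

def screen(record: dict) -> str:
    """Gate one record: trust the model only on in-distribution inputs."""
    if not in_training_bounds(record, TRAIN_BOUNDS):
        return "route-to-human"   # out of distribution: don't trust the model
    return "model-decision"

print(screen({"image_brightness": 0.5, "face_pixels": 300}))   # model-decision
print(screen({"image_brightness": 0.05, "face_pixels": 300}))  # route-to-human
```

A gate like this operationalizes two of the principles listed above: deployment within the parameters the system was designed for, and corroboration by human review when those parameters are exceeded.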
3:43 am
first, congress should establish an oversight board for the use of a.i. in the national security context. this would be a bipartisan, independent entity within the executive branch, with members and staff given access to all use of a.i. within the national security sphere. the board would act to promote responsible use of a.i.; this would support compliance as well as lead to improved practices. the board's role would also allow for greater public knowledge and engagement, serving as a conduit for harnessing outside expertise and building public trust in the government's ongoing use of a.i. the privacy and civil liberties oversight board has demonstrated how effective this model can be. that board's role in counterterrorism oversight has operated in a manner that aided both security and civil liberties alike. a new board focused on the use of a.i. in the national security realm would be similarly beneficial. second, congress should enact transparency requirements for the use of
3:44 am
a.i. this should include a required declassification review of key documents such as a.i. --. it should also require annual public reporting of information such as the full set of a.i. technologies in use, the number --, and the nature of that impact. while we support prompt adoption of these measures, the problem will not be solved by a silver bullet. promoting responsible use of a.i. will require continual review, engagement, and adaptation. this work should be done in consultation with a broad set of stakeholders, impacted communities, and experts. thank you for your time, and i look forward to your questions. >> thank you, sir. members will be recognized in order of seniority for their five minutes of questioning, and an additional round of questioning may be called after all members have been recognized. i now recognize myself for five
3:45 am
minutes of questioning. mr. amlani, in the evolution of a.i., would you discuss your perception or opinion on whether or not you think, as the threat evolves, we are evolving ahead of the threat? >> thank you, mr. chairman, for the question. this is a very important component of the work that we do at iproov. we have a central operation center where we monitor all the threats happening globally, in singapore, the uk, australia, latin america, and the united states, and we are constantly using our human experts there, with phds, to be able to assess the threats that are actually occurring. we are migrating and adapting our a.i. technology to stay multiple steps ahead of the adversary, and i think this is very critical as we look at a.i. technology: building in an understanding of what threats are happening globally so we can continue to modify our systems. >> thank you for that.
3:46 am
i'm glad to hear that. it makes everybody breathe a little sigh of relief. i want to shift the subject a little bit, because we are talking about a.i. as a tool and, you know, all the positives and the challenge of it when it's on the attack side, but i want to ask something about workforce. what do we need to do uniquely? i think this would be for you, mr. sikorski, since, you know, your company employs a very large number of individuals in this fight. how do we need to be thinking about developing the workforce for a.i., as well as cybersecurity itself? >> that's a great question, congressman. it's super imperative. myself, i've trained cyber agents for the fbi at quantico, the department of energy, private companies, and i've been teaching at the university level for over a decade, and firsthand i've seen
3:47 am
that we really need to focus in, as a.i. and cybersecurity are here to stay, and we really need to make strides to continue to push forward, get people trained up, our workforce, our children, get them learning these concepts very early on. it's super important. and then from the palo alto networks perspective, we are very invested in this as well. as far as the unit 42 team goes, we have what is called the unit 42 academy, which takes undergraduates, gives them internships during their college years and hands-on training, and then they come to work with us as a class, like a cohort, where they learn a lot of up-leveling skills, and their first year is spent engaged in learning and growing. i'm proud to say that 80% of the unit 42 academy in the last year has been made up of women. >> do me a favor and let's go
3:48 am
just a little bit more granular on this question. what are the skills, the specific skills, that these individuals seem to have? what are we teaching them? i get, you know, the coding and all that stuff, but what specifically do you need to see in a candidate that you would want to hire? >> number one is having a background in understanding how to engage these technologies. we have spent a lot of time building technologies over the last 10 or 20 years in the cybersecurity industry, so: what do they get access to, and how can they leverage it to fight against, you know, the evil that's coming into these networks. i also think about things like an understanding of how forensics works, malware analysis, the ability to really dive deep into the technical details so that they can dig in and sift through the noise. and then
3:49 am
it also comes into play with a.i. -- do they have an understanding of a.i. and how it is being leveraged in the product? one thing we deal with heavily is these analysts being inundated with too many alerts, with all these products firing. so a.i. opens it up for them to dive deep into what's really there. >> so essentially the a.i. is now fighting the a.i., basically. the machines are fighting the machines. am i getting this right? >> to some extent, i think there is definitely that kind of thing, for sure, but at the end of the day, the cyber analyst is being upleveled in their ability to fight back. >> we've got some legislation coming out pretty soon on workforce development. i just want to make sure that the a.i. piece is captured in that.
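the triage idea described here -- collapsing a flood of duplicate alerts into a few ranked, actionable items so the analyst isn't inundated -- can be sketched in a few lines. the alert fields, values, and ranking rule below are invented for illustration and are not any vendor's actual schema:

```python
from collections import Counter

# toy alerts; the "host", "rule", and "severity" fields are hypothetical
alerts = [
    {"host": "web-01", "rule": "port-scan", "severity": 3},
    {"host": "web-01", "rule": "port-scan", "severity": 3},
    {"host": "db-02",  "rule": "malware-beacon", "severity": 9},
    {"host": "web-01", "rule": "port-scan", "severity": 3},
]

def triage(alerts):
    """collapse duplicate (host, rule) pairs and rank what's left by severity."""
    counts = Counter((a["host"], a["rule"]) for a in alerts)
    unique = {(a["host"], a["rule"]): a for a in alerts}
    ranked = sorted(
        unique.values(),
        key=lambda a: (a["severity"], counts[(a["host"], a["rule"])]),
        reverse=True,
    )
    # each item: (host, rule, how many raw alerts it absorbed)
    return [(a["host"], a["rule"], counts[(a["host"], a["rule"])]) for a in ranked]

print(triage(alerts))  # four raw alerts collapse to two ranked items
```

a real product would score on many more signals, but the shape is the same: deduplicate first, then surface the highest-risk items to the human.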
3:50 am
thank you. my time has expired. i now recognize the ranking member for his five minutes of questioning. >> thank you, mr. chairman. one of the things that we've become accustomed to is, if we are filling out something online, as we complete it, it wants to figure out if you are a robot or if you are a human, and so you've got to figure out now, is this a bus, is this a -- all of us are kind of trying to figure this out. so you gentlemen are giving us some comfort that there are some things out here. let me just give you a hypothetical question. how do you see the role of the government as a.i. develops, and what do we need to do to ensure the public that
3:51 am
that development does not allow our adversaries to become even more capable than they are? we will start with mr. demmer. >> thank you, mr. ranking member, for the question. i agree that the country needs to prioritize building a.i. that is safe, in line with the executive order. where i stand is that we need to collect highly accurate data that ultimately informs these models, and increasingly, what i think can be done is to help create testbeds for validating those models and creating the training data sets
3:52 am
that enable, you know, good a.i. to be created, leveraging only good data sets. that's my position. >> one thing i think of is my history in cybersecurity. we rushed as innovators to get the internet out, and we didn't build it while thinking about security, and that's the spot we are in. that's why it's really important that we build a.i. and have it secure by design. it falls into a few different categories: making sure that with the a.i. we are building -- we were rushing to get the technology into products and out to consumers, but we need to think about security as we build it. for example, what is the application development lifecycle? what is the supply chain of what's being built, and is that secure? thinking about these things as they are running in real time, are we protecting those applications so that they don't
3:53 am
get manipulated? how about the models we are building, how about the data? is that, you know, subject to manipulation? how are we monitoring it? and how are we monitoring our employees who are using this today? those are our areas of focus. >> in addition, when building the internet, identity was a layer that was never thought about. the challenge that you just described about the captchas, the buses -- i just encountered seaplanes versus normal planes. it's very difficult to decipher what's the difference, in terms of the captcha, as it's called. i think we need to constantly test all the tools to make sure that they are inclusive and to make sure that they are accurate. standards are another important component here that comes out of testing. it's also very important to keep leveraging standards organizations and to continue to invest in them. talent is another component, and much of that resides in the private sector, in partnerships with private sector companies. we surveyed the top engineering schools
3:54 am
about where students wanted to work after graduating, and there was no government agency other than nasa on that whole list. there was no defense or government contractor on that whole list, other than spacex. as we start to think about this, the way we actually get access to the top engineers across society is actually through partnerships with commercial companies. thank you. >> yeah, i would echo several things of what's been said already. we need well-trained systems, we need high standards to make sure that we are using good systems. we need proper data inputs going into these a.i. systems and proper treatment of what's coming out. i would also emphasize that in meeting our adversaries in this field, we should make sure we do not end up imitating those adversaries. right now, facial recognition is an example that keeps being harped on; it is used in a frightening way in regimes like china, russia, iran. right now, federally, we do not have any regulations, and although the cases are limited,
3:55 am
there are documented cases of it being used against peaceful protesters in the united states. that's the type of use we should be prohibiting. >> thank you very much. maybe i will submit more for the record. we have elections coming up in november. the last time, there was some involvement by russia and china, specifically, with our elections. are you in a position to say whether or not we are robust enough to defend against our adversaries for our elections, or to encourage us to be a little more attentive to any aspect of our elections? >> that's a great question. i think that certainly, generative a.i.
3:56 am
makes it easier for malicious actors to actually come after us in that way. we've actually already seen them in the cyber arena start to build more efficient phishing emails, so things like typos, grammar mistakes, all that kind of stuff -- that's a thing of the past. that won't be something that we encounter anymore. and -- >> in other words, there won't be any more typos. >> right. and they can also read someone's inbox and talk like that individual. they can leverage it in that way. i do think that cisa -- we are a member of the jcdc -- is taking election security as a big priority, and so we are assisting in those ways. i think that it's definitely super concerning and something that we need to lean into with the election cycle coming up. >> anyone else want to address that? >> that's a good question. i will let the gentleman's time continue if anyone else wants
3:57 am
to answer. >> i think from an identity perspective this is also very important in regards to who it is that is actually posting online. from an identity perspective, making sure that it's the right person, it's a real person that's actually posting and communicating, and making sure that person is, in fact, right there at that time, is a very important component of making sure that we know who it is that is actually generating content online. there is no identity layer to the internet today. we have a lot of work being done on digital credentials here in the united states; our country is one of the only ones in the western world that doesn't have a digital identity strategy. we have some work that has already been done in section 4.5, but it hasn't really been expanded upon, and i think that is some work that we should think about continuing and doing. >> if i may, you know, i have
3:58 am
some questions on that, too, that i might submit in writing, because this digital identification thing -- it's banking and all that goes on the phone, these digital ids -- it's critical. so, you know, thank you. >> thank you, ranking member. i now recognize mr. higgins, the gentleman from louisiana, for his five minutes of questioning. >> thank you, mr. chairman. chairman, i worked extensively with my colleagues on the house oversight committee regarding artificial intelligence, and i'm not necessarily opposed to the emerging technology -- even if i were, opposing it would be like opposing the incoming tide of the ocean. it's happening. i think it's important that congress provides a framework so that a.i. cannot be leveraged in any manner that is contrary to americans' individual rights
3:59 am
and liberties. i introduced the transparent automated governance act, the tag act, to set limits on automated government decision-making as a whole. the bill seeks to ensure federal agencies notify individuals when they are interacting with, or subject to decisions made using, a.i. or other automated systems, and directs federal agencies to establish a human review and appeal process -- to ensure that human beings have supervision of a.i.-generated decisions that can impact the lives of americans, and specifically our freedoms. so i have concerns about this technology, but we may as well embrace it.
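the notice-and-human-review mechanism described here could be modeled, very roughly, as a gate on automated decisions. this is a sketch of the concept only -- the class, field, and function names are invented for illustration and are not anything the bill actually specifies:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """a hypothetical record of an agency decision about an individual."""
    subject: str
    outcome: str
    automated: bool                       # was this made by an a.i./automated system?
    notices: list = field(default_factory=list)
    reviewed_by: str = ""                 # set once a human has reviewed it

def issue(decision: Decision) -> Decision:
    """notify the subject whenever an automated system made the call."""
    if decision.automated:
        decision.notices.append(
            f"{decision.subject}: this decision was made by an automated "
            "system; you may request human review."
        )
    return decision

def appeal(decision: Decision, reviewer: str, new_outcome: str) -> Decision:
    """a human reviewer can override the automated outcome on appeal."""
    decision.reviewed_by = reviewer
    decision.outcome = new_outcome
    return decision

# hypothetical flow: automated denial -> notice -> human review on appeal
d = issue(Decision(subject="applicant-42", outcome="denied", automated=True))
d = appeal(d, reviewer="officer-7", new_outcome="approved")
```

the point of the sketch is the invariant: no automated outcome reaches the subject without a notice, and every outcome remains overridable by a named human.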
4:00 am
i think it's crucial that america lead in the emergence of a.i. technologies and how they interface with our daily lives. and may i say, we could have hours and hours of discussion about this, but i have five minutes. so i'm going to ask you gentlemen about the use of a.i. as it could contribute toward more effective counterintelligence or criminal investigations, as those criminal investigations and counterintelligence efforts relate to homeland security -- trying to focus in on this committee, our committee here, and our responsibilities here. what are your thoughts, gentlemen, on how we can best deploy a.i. with our existing law
4:01 am
enforcement endeavors, for border security and the criminal investigations that result from our effort to secure and resecure our border? mr. sikorski, i'd like to start with you. >> that's a great question. one thing i look toward is the way we are already leveraging a.i. for cyber defenses. as i spoke about in my statement, we have been taking really large amounts of data and distilling it into very few actionable items. >> you say taking large amounts of data. where are you getting that data from? >> for us, as a cybersecurity company, we are focused on processing security data, which means ones and zeros: the malware that's found on systems, the vulnerabilities and enumerations of the systems, low-level
4:02 am
operating system information that enables us to tease out what is the actual threat, which is very distinct. >> so you are drawing on raw data from open sources and -- like, how are you accessing the raw data that you are analyzing? >> that's a great question. some of the data we are getting is from, you know, the customer collection that we have in our products that are spread throughout the world. >> from the fortune 100, you said -- the company that works with 95 of the top 100. >> that's right. >> okay, so those companies have agreements with their users -- that's the part we don't read. as it comes up on your phone, you renew the agreement, and nobody reads it; we go to the bottom and we click yes. so when the 95 companies have agreed to share that data, you are sharing that to analyze with a.i. for the purposes of, what?
4:03 am
>> yeah, in order to find new attacks. one of the things we are doing is, it's firewall data, network-level telemetry -- it's very different than, you know, say, personal information or something like that. instead, we are focused on the lower-level data, bringing it together, and leveraging a.i. to say how is that network attack related to what's happening on the actual computer, bringing that together quickly so that the analyst can find the threat and eliminate it very quickly. >> you're specifically referring to cybersecurity. but chairman, my time has expired. thank you for convening this hearing. it's very important. absolutely. >> i now recognize mr. carter, the gentleman from louisiana. >> thank you, mr. chairman. thank you to everyone joining today. a.i. technology and its demand for use poses significant
4:04 am
challenges. we must leverage a.i. to improve our national security. please support hr 8388, the task force act, which proposes the creation of a dedicated task force that focuses on the safety, security, and accountability challenges posed by a.i. it is crucial for the american people as we work to tackle these pressing issues. a.i. is not new. we know that. it is relatively new to the general public. some of its applications have enormous value. some of them can be quite frightening. national security, obviously, is major. i would like to ask each of you to take a crack at answering a question relative to civil rights and civil liberties in general for the american people. how are these contemplated as we develop further, delving into
4:05 am
a.i.? mr. demmer? >> i defer to my fellow witnesses on this issue. it is complicated. we are focused on building data sets in a way that promotes national security. >> i'm a cybersecurity practitioner, so i don't know all the ins and outs of policy itself, but my sense is that when we think about a.i. and regulation, we have to think about the purpose -- defining what is high risk, saying, you know, what are the a.i. use cases that are high risk, and focusing on the unique security requirements for those. on the cybersecurity side, the security data, the ones and
4:06 am
zeros i talked about earlier, are not high risk compared to other technologies being rolled out. >> at iproov, we work with every independent assessment organization out there, and we are urging them to stay ahead of the technology. many times, these organizations focus on threats or challenges from the past. we need to stay up to speed and go above and beyond. we build inclusivity by design -- a program that includes skin tone, but also cognitive and physical capabilities, making sure those are taken into consideration. socioeconomic class: many tools are expensive to obtain. i got the iphone 15 pro, and i can attest to that. age is also a very important
4:07 am
component, as well as gender, so making sure that all of those characteristics are embedded into the design of a.i. is an important component of success. >> there is a range in these systems. for example, damage assessment systems that dhs uses may not pose a risk to civil rights and liberties. other technologies, like face recognition or automated targeting systems, present significant risks to civil rights and civil liberties. the executive order puts a good stress on care for civil rights and civil liberties, and we hope that's a good step toward development. this is an area where oversight is essential -- evaluating the effectiveness of those rules, evaluating whether more steps are needed, and, as necessary, prompting congress to step in and make its own rules. with face recognition, there
4:08 am
are no federal regulations on law enforcement use. i think that needs to change. >> the information out is only as good as the information in. the data becomes paramount. how do we take into account cultural nuances and natural biases, so that we are not replicating biases that humans have, which then become part of the bias that is in a.i.? >> we operate in singapore and many african countries, as well as latin american countries. we take care to make sure we see the threats coming in, but we also look at the data and
4:09 am
make sure we have a representative database that we train on. we go over and above when it comes to making sure that our training data is representative of the overall population. i would encourage you to include standards, and to invest in those, to make sure other providers are doing the same. >> from our standpoint, as you have heard from the questions here, congress has a concern about making sure that we learn from the mistakes -- mistakes may not be the right word -- but we learn from how fast the internet came, facebook, instagram, and all of those things, and how they are constantly bringing about new challenges. the good, the bad, and the ugly. how do we learn from that to make sure that we are managing this a.i. as it unfolds and becomes more prevalent in our society? i realize that my time has expired.
4:10 am
mr. chairman, if you allow them to answer that, i will yield. >> the gentleman can ask the question. >> sure. that is directed at me, i assume. >> or anyone else that cares to take a crack at it. >> staying on top of all of the different technologies is very important -- making sure that we have independent organizations within the federal government that can have different companies submit their a.i. for testing, and making sure that we have the right people staffed within those organizations who can stay on top of all the latest opportunities and threats in the market. yes, there is a lot of interest across this overall industry in making sure that there are well-represented databases and common standards that we can all adhere to. but i think that making sure accurate solutions can be put in front of customers for biometric verification is also an important component. biometric verification is also something that is different than biometric recognition, and i want to make sure that we call out the differences between the two. >> the gentleman's time has expired.
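the distinction drawn here can be made concrete: verification is a 1:1 comparison of a probe against a single claimed identity, while recognition (identification) is a 1:N search across everyone enrolled. a toy sketch, where made-up two-number "templates" stand in for real biometric feature vectors and the threshold is arbitrary:

```python
# toy enrolled "templates": real systems use high-dimensional feature vectors
enrolled = {"alice": [0.1, 0.9], "bob": [0.8, 0.2], "carol": [0.5, 0.5]}

def similarity(a, b):
    """cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def verify(claimed_id, probe, threshold=0.95):
    """1:1 verification: compare the probe ONLY to the claimed identity."""
    return similarity(enrolled[claimed_id], probe) >= threshold

def identify(probe):
    """1:n identification: search every enrolled template for the best match."""
    return max(enrolled, key=lambda name: similarity(enrolled[name], probe))

probe = [0.12, 0.88]
print(verify("alice", probe))   # the probe matches the claimed identity
print(identify(probe))          # best match across all enrolled templates
```

the policy stakes differ too: verification answers "is this person who they claim to be?" with the subject's participation, while identification answers "who is this person?" against a whole database -- which is why the two are often regulated differently.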
4:11 am
the chair has been flexible and generous with time, but i ask the members to watch the clock. the gentleman from mississippi is recognized. >> thank you, mr. chair. i want to thank all of our guests for joining us today. we have seen the incredible growth of a.i. -- we know a.i. can be used to analyze data. we know it has offensive and defensive technologies. the technology can be used for good and evil. as we drill down today on homeland security, looking at the role that a.i. is playing in homeland security, i think it is easy for many of us to see the role that a.i. plays in cybersecurity, and we have heard testimony about the offensive and defensive capabilities of a.i. in the cyber world. but i want to talk a little bit about the role that a.i. may
4:12 am
play in securing our physical borders. one of the things this committee has been focused on is securing our southwest border. there were 3.2 million encounters on the southwest border, a record number of encounters, and we are on track to break that this year. we know from the secretary of state and the secretary of homeland security -- he reported that 85% of those individuals encountered are at some point released into the interior. and so my question is, how can we use a.i. to better identify the individuals that are being encountered? i have a great fear that we are not identifying and properly vetting those individuals before they are released into the interior. i'll start with you, mr.
4:13 am
amlani. you talked about biometrics, the use of biometrics, and the things that y'all are doing. so i would ask first if you could start off, and if anyone else would like to join in: how can we, within the department of homeland security, better use a.i. to identify the numerous individuals that we are encountering on a daily basis along the southwest border, so that we are not allowing people into the country that would cause ill will? people with criminal records and criminal backgrounds -- we see people apprehended all the time that have criminal backgrounds, individuals that have been previously arrested and convicted in other countries. how can we use a.i. to better vet those individuals, and do so in
4:14 am
a more timely fashion, before they are released? i will start with you and allow you to kick that off, and i would ask anyone who would like to join in to please continue the discussion. >> thank you, congressman, for the question. this is a very important question in my mind. i cannot speak on behalf of dhs, but i can speak from my experience in 2003 at the department of homeland security originally, launching biometrics at the border. secretary tom ridge did assess biometrics as one of the core technologies to improve department capabilities, improving throughput and security at the same time. biometrics was introduced to me in 2003, when it was rolled out at airports for the first time for people coming into the country who were not citizens; we used fingerprint technology at the borders. it was eye-opening for me, watching people walk up to customs and border patrol agents with a look
4:15 am
of fear, where they would have to be asked private questions. that interaction, for me, was something that lit a fire within me over the last 20 years: recognizing that this was not just a security tool and a capability focused on law enforcement. it is a tool for consumers to allow them to protect their privacy. >> yes? >> i would just add that this is a good example of how input data, what you bring into the system, can make such a difference. a photo that is taken with good lighting, close up, with a clean profile -- such as a dmv photo or a photo taken during processing and booking -- is more likely to yield accurate results
4:16 am
than something captured in the field from a distance or at night -- if i took my smartphone out now -- versus something taken during processing. that makes a difference. it is highly situational. >> i will give you an opportunity if anyone would like to add to this discussion. mr. chairman, i'm over time, so i yield back. >> the gentleman yields. the gentleman from maryland is recognized. >> thank you, mr. chairman. i appreciate that. to my republican colleagues, if you will relay to chairman greene my appreciation for having this -- i think this is an outstanding topic for us. in fact, we might need to have
4:17 am
additional hearings on this topic, because time goes so quickly. but thanks again for having it. and mr. amlani, welcome, from maryland. i'm the representative from prince george's county; you are just outside of our district. maybe we can ask you to move a little bit further south. i want to ask a question of you, mr. sikorski. there is something i have been worrying about quite a bit. we have identified a lot of threats, and the challenge, i think, is that sometimes they can replicate very quickly, faster than a litigation approach can mitigate them. let's take the deepfake types of imagery -- some of those pop up and can embarrass individuals, revenge porn kinds of things. since i don't know that litigation is the best way to do it, and i think you mentioned that your company is taking
4:18 am
steps to try to deal with that moving forward, i would like to hear a little bit more about it. but in addition to that, how can the government create incentives for the private sector to do more on that front? i think that is going to be better than the government fighting a.i. how can we go about it in a way that allows us to keep pace? >> that is a great question, congressman. i think there are some things happening with the government that are really great -- a few things i'm pretty passionate about myself. things like collaborative defense. back when i worked for the nsa, you didn't tell anybody that you worked there. now there's a collaboration center that realizes that we can go much further defending the nation if we work together, versus in complete and utter silos. so a continued push, like we have seen with the jcdc, for example, has been successful
4:19 am
and is moving the needle; pushing hard on that front, i think, is imperative. i also think it is important to think about cyber hygiene and what entities are doing -- companies that are out there, how are they protecting themselves with cyber hygiene. one thing that we lack somewhat is a report card on how effective some of these cybersecurity measures are, when you talk about research and healthcare and so forth. maybe we can roll out things we can track over time, like mean time to respond: how quickly are we responding to attacks and sifting through all that noise? >> i want to monitor my time, but just a follow-up -- if you could respond in writing, perhaps. the unit 42
4:20 am
academy was interesting to me. i was wondering how it might be replicated in other ways -- how the government could encourage private entities or, you know, colleges and universities that might be willing to do it, to find ways to expand that effort, too. and the too-many-alerts point you made earlier is another one i would like to find out a little bit more about. with respect to talent development, i appreciate the fact that, you know, there are efforts going on -- mr. sikorski, i think your company mentioned it, and mr. amlani, i think your company did, as well. i think that is a good solution for the intermediate and long term. for the short run, i think we are importing a lot of the talent. in some instances, we have had people come and testify before this committee that they are coming from hostile states, frankly. and so one of the things that i'm wondering about, since the
4:21 am
government monitors it to some extent on the way in through the immigration process, which has its challenges, for sure, with respect to these types. the other is, once these guys get in and go to these companies, how do we know that the companies are doing a good job of monitoring these individuals to make sure that they are staying on the right track and not misusing their position -- that there's no economic espionage going on? should we be confident that these companies are doing a good enough job to make sure that the united states is protected, and that their industry is protected, from those sources of attacks? i apologize to the chair for running over the line, but i appreciate the chair's indulgence on this. anyone who would like to answer. >> that is a great question.
4:22 am
i've been doing incident response for almost 12 years, and it varies. it's not just nation-states that are threats, and it's not just ransomware. another big threat is insiders. we see insider threats when unit 42 is responding -- not just the threat of them maybe putting something into the supply chain and getting it out the door, which we know nation-states are embedding employees to do, but also where we see them go out the door having stolen data, and then they start engaging in ransomware-like behavior, especially if their country doesn't have the ability to make money and has its economy cut off. those are just some ideas i have. >> anyone else? well, mr. chairman, thank you. i yield back. i appreciate it. >> i did see your kind words when i was out in the lobby. i appreciate that. i will tell you that, honestly -- and i would ask members for feedback on this -- we need more than the five minutes of the hearing, and we just aren't getting it. what i may do is have a townhall-type thing where we are the townhall, and we are
4:23 am
just asking questions. i think that would be more informative -- maybe some presentations, so to speak, on data poisoning for a.i. and all that kind of stuff, to help us understand a little better and ask more informed questions. thank you for saying that. we'll get more of this on the books. >> i appreciate that, mr. chair. >> i now recognize the gentleman from mississippi. >> thank you, mr. chair, and thank you all for being here today. this is something that we are all concerned about, and we appreciate you being here today. the capabilities of a.i. are quickly advancing. when i look at it, it feels futuristic. it holds the power to significantly improve the way we work and go about our daily lives. however, we also know that advancements in technology create new opportunities for
4:24 am
bad actors to operate effectively. i agree that congress must closely monitor this tool to ensure the application is not misused, but the government cannot get in the way of american leaders in this sector and their ability to improve the product. the chinese communist party has intense ambitions to win the a.i. race. when i talk to people in the private sector, it is clear we are in a global a.i. arms race. our adversaries are innovating quickly, and it is critical that we do not restrict the capabilities of american businesses. i would like to direct this to the entire panel: how can we ensure america's leadership in a.i., and what government actions could jeopardize it? i'll start with anyone that wants to answer. >> thank you so much for the question. this is a really critically important point for me.
4:25 am
i think continuing to stay ahead of our adversaries on technology requires investment in both our talent and our workforce. i just took my 15-year-old son on college visits, and i can tell you it is actually very difficult to get into an engineering university today. i think there is unprecedented demand from people wanting to study a.i., cybersecurity, and other types of engineering who are being left out of the workforce at the college stage, or who get intimidated by software engineering. making that a part of the high school curriculum leading into college, i think, will help, along with creating more educational opportunities for more individuals to get into the workforce and learn those skills -- not just at the college age, but also going forward, as they are progressing through their careers. in particular,
4:26 am
investing in companies, and making sure that we are actually hiring companies that have the best talent, is another component. these companies themselves can recruit the best talent; they provide entrepreneurial environments that allow individuals to create and solve problems in settings that are fun to do that in. and i think if we can make a concerted effort, through organizations like the silicon valley innovation program, to hire companies to solve our massive government problems, that is an important component of staying ahead. >> thank you very much. anyone else like to say anything? >> i would say that encouraging, through procurement and other incentives, responsible development of tools and proper
4:27 am
use of those tools is key -- making sure that we are not simply trying to barrel ahead with reckless development, or reckless applications that are not going to yield good results. as i said before, winning the a.i. arms race doesn't just mean the fastest and broadest buildup. it means the most efficient and effective one. it is also a matter of looking at our values. we look at countries like china, russia, et cetera; their deployment of a.i. is draconian in nature and supports those regimes. i think it is important that we not just be a global leader in innovation, but a global leader in values in this space, and make sure that when we are promoting these things, we promote use that is in conjunction with democratic values. >> thank you very much. mr. sikorski, china restricts access to information through censorship -- i assume they will continue to use restrictions on a.i. how does america's freedom in a capitalist economy help to attract global investment around a.i.? >> that is a great question.
4:28 am
i think one way we do that is just by that fact: we are more open with regard to how we are able to innovate, and in what areas we are able to innovate. putting that together with your previous question, i think about, you know, what are the high-risk a.i. models we are building that are impactful to our citizens -- things like someone applying for credit or for university. those kinds of things are high risk and something we should hunker down on, as far as how they are being built, versus the cybersecurity side, where i think the risk is a lot different. therefore, we are pouring in a ton of innovation, which we are doing in industry, and we see organizations doing the same -- so it's us, together with the government, integrating fast and thinking about the charge of how we sift through the noise of all these
4:29 am
-- security alerts -- and make sense of it to find the next big attack. >> thank you. mr. chairman, i yield back. >> as chairman, i will take a second here to shamelessly plug academic institutions in tennessee. vanderbilt university just hired general nakasone. they go out to high schools and middle schools. they're starting cyber at an early age. great schools in tennessee. sorry. i had to do that. i'm sorry. i looked at the nameplate.
4:30 am
>> i represent michigan and detroit. my question is for mr. sikorski. you mentioned in your testimony the need to secure every step of the ai app development lifecycle. as we saw with solarwinds, one vulnerability can have long-reaching consequences. >> that is a great question. i was heavily involved and helped reverse engineer the solarwinds backdoor. it was exciting to brief homeland security about that as we were unboxing it. i think about that.
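the supply-chain concern raised here -- tampered code slipped into a build and shipped to customers -- is commonly mitigated by pinning and verifying artifact hashes before deployment. a minimal sketch; the artifact name and pinned hash below are made up for illustration:

```python
import hashlib

# hypothetical pinned manifest: artifact name -> expected sha256
# (the name and placeholder hash are illustrative, not a real product)
PINNED = {
    "orion-plugin.dll": "a" * 64,
}

def sha256_of(data: bytes) -> str:
    """hex sha256 digest of a build artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(name: str, data: bytes) -> bool:
    """reject any artifact whose hash does not match its pin."""
    expected = PINNED.get(name)
    return expected is not None and sha256_of(data) == expected
```

in practice the pin is recorded when a trusted build is produced, and anything arriving through the supply chain that fails the check is refused.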
4:31 am
that is one of the biggest concerns we have when we think about cyber threats: the supply-chain attack. there have been numerous other examples since solarwinds, which is now a few years in the past, where our adversaries were able to get into the supply chain, deploy code, and it gets shipped out to customers, which gives them a backdoor into networks. at palo alto networks, we are focused on thinking about how to eliminate that, as you are building your code and as you are shipping out your products. so there are multiple tiers that we think about when it comes to ai security, but one of the biggest is supply-chain, because people very quickly develop this. they are pulling in technologies that are out there on the internet or otherwise. that makes a really good opportunity for an adversary to slip something in. and so we adopt technologies and have focused our ai on eliminating those threats. >> thank you. and can you elaborate on what
4:32 am
shortfalls the ai industry has demonstrated when it comes to securing ai app development lifecycles? >> i'm sorry. what was that? there was a horn. >> can you elaborate on what shortfalls the ai industry has demonstrated when it comes to securing ai app development? >> when we are rushing out new technologies, it is a race to see who can get this out first. we are talking about how fast ai development is moving, and we have all talked about how much it will evolve our lives over time, and it is inevitable. i think that when you do that, people end up rushing, doing
4:33 am
things and cutting corners to get those technologies out, not thinking about security. shocker. that is what we have done with a lot of innovation over time. i think making sure that we are actually putting in mechanisms to think through -- what is people's security posture -- are they doing the right things as they ship out applications? to a large extent, software development in our country is a national security issue when we talk about the supply-chain. >> this is a question for the panel. the world has not yet seen a major ai-driven cyber attack. on the scale of attacks, what is the likelihood that we see an attack on this scale or worse in the next five years? anybody. >> i will take that one, as well. i am a cybersecurity guy. one thing i think about that is
4:34 am
a huge threat and concern i have -- because we saw wannacry spread like wildfire, take down systems, like you mentioned. i remember seeing grocery stores knocked out. i think about my experience with solarwinds and the fact that -- just imagine if russia had been using ai in that attack. they would have been able to focus more on getting their hooks into systems through the solarwinds backdoor in an automated way, rather than using their human army. it would have been a much more efficient attack. >> all right. anybody else? thank you. >> it might be used to facilitate a phishing attempt.
4:35 am
instead of sending an email, you will get a fake call or voice message or even a video saying open up that email i just sent you, making it less suspicious. they can also target critical infrastructure. they use those in ways that might not be related to national security. if there is a vulnerability or manipulation of ai systems, then that could also be a vulnerability point. maybe how we might use ai to distribute power along the grid, or how we use it to check people coming into a city or a large-scale event. if those were the target of an attack or subject to manipulation, that could raise a host of risks. >> mr. chair, thank you. i'll take my own seat next time. >> poisoning the data there. >> thank you. sorry i got your name wrong. i now recognize the gentleman
4:36 am
from texas. >> thank you, mr. chair. i think it's an important discussion. i'm glad we're having it. thanks to the witnesses for being here. the public-private partnership that we have throughout the united states between government -- including the department of homeland security -- and other industries, including critical infrastructure, i don't know that it has ever been as important as it is today. i thank the witnesses for being here because i think the goal is for us to be able to keep up. one of the ways we can keep up and one of the ways we can continue to train the next generation is happening in my hometown at angelo state university. they are a center of excellence. they are a hispanic-serving institution. it is a rural area. they have a strong focus, in addition to just general cyber issues, on ai. in san angelo, we have
4:37 am
goodfellow air force base, which is a joint intelligence training base. you pair these up together. you have university work being done. you have the university that trains our intelligence professionals. the majority of them trained at that location. i want to put a plug in there for that. i may ask some questions on that. let me go to mr. sikorski. in your testimony, you talk about how adversaries are leveraging ai. i would like to get your view -- whether it is phishing or some other technique, or whether it is a new technique. maybe talk to me about how you see adversaries increasing the scope, the scale, and the threat, using ai. >> that is a great question, congressman. i think that goes to the question earlier. right. the what if. what if some of the attacks we saw in the past had leveraged ai in a wider attack -- i also think about the evolution. right. so the first thing that the attackers did -- we are
4:38 am
monitoring the dark web. we are seeing what they are talking about in the forums. what they are selling to each other. the access to networks. but also, talking about how to leverage this technology to create really efficient attacks. the big focus so far has been on social engineering attacks, which means things like phishing, like we talked about, and also manipulating you to get around multifactor authentication. you know, where you need the extra token to log in. they're focusing there, as well. where we start to see them poking around is using ai to be able to do better reconnaissance on the networks they are attacking, to know what holes are in networks across the world, and then also -- they are starting to wade into how they can develop malware efficiently, and variations of it, so it's not the same attack you see over and over again, which goes back
4:39 am
to the point of, how do you fight against that? which is, you need to develop technologies that are really efficient using ai to see those variations. >> i want to ask all of you in the last two minutes to speak to this model that angelo state university is using, where they train students. they may go into the nsa or the military. they may go to the private sector. what should the university be doing? we've got about a minute and a half. >> i am not a hard science person. i found in my field, it is invaluable for people in the policy space like me to learn from the people that understand the tech. it helps with how we can translate across to each other and it helps as you design systems to figure out how to do it and how it promotes what we care about as a society.
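the variation-detection idea mr. sikorski raises above -- recognizing malware that has been mutated so it is not the same attack seen over and over -- can be illustrated with a toy feature-overlap check. production systems train models over far richer features; the byte strings and threshold below are made up for illustration:

```python
def ngrams(data: bytes, n: int = 4) -> set:
    """byte n-gram feature set of a sample."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes) -> float:
    """jaccard similarity of two samples' n-gram sets."""
    fa, fb = ngrams(a), ngrams(b)
    if not fa or not fb:
        return 0.0
    return len(fa & fb) / len(fa | fb)

def is_variant(sample: bytes, known_malware: list, threshold: float = 0.5) -> bool:
    """flag a sample whose features heavily overlap a known family."""
    return any(similarity(sample, m) >= threshold for m in known_malware)
```

a lightly mutated sample still shares most of its n-grams with the original, so it scores high against the known family, while unrelated content scores near zero.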
4:40 am
>> i think there is a vocabulary around cybersecurity that is an important component to educate the youth on. spearphishing, which you mentioned, and all the other types of attacks. i think these are important things to be able to teach people about to make sure that they themselves are not attacked. they should understand what attacks are happening and how to guard against them. identity is the big component of all cyber attacks. when a credential has been hacked or compromised in some way, it can be exploited afterwards on an ongoing basis. having better identification mechanisms to make sure that individuals don't have to change their password every day. they don't have to have multiple things that are hard to remember. >> would you say that the
4:41 am
universities should be doing this in a classified setting, that they should be working on these types of techniques and partnering with government agencies as well? >> government agencies as well as the commercial sector. >> very good. my time has expired. i'm sorry, mr. chairman, i didn't get a chance to talk to you. >> you are recognized for five minutes. >> thank you, mr. chairman. thank you for doing this hearing. i'm interested in us doing some kind of off-site retreat to delve into this more in depth. when i listen to you and i think more and more about ai, the more terrified i become, quite frankly. there are great opportunities, but i'm scared. in 1974, i was in seventh grade, and i remember sister ruth at st. patrick's grade school saying technology is moving so fast these days that we don't have a chance to figure out how it's affecting us. how it's affecting our values. how it's affecting our families. how it's affecting our society.
4:42 am
it is moving too fast. that was because of the space race. atari had come out with pong in 1972. think about how fast everything is moving now. and think of all the impacts we have seen from social media on our young people. these polls come out and say, you know, 35% of kids are patriotic, or for boomers like me, it is 70-something percent. nobody believes in any religion anymore, or any institutions. we have all these kids with mental health issues related to their self-esteem. everything is moving so quickly, and i'm confident that you are going to figure out how to protect -- as mr. sikorski said, he represents 90 of the fortune 500. we will find ways to protect businesses and make sure they are looking out for their security and interest, and hopefully we will think of more things with our government.
4:43 am
but i'm worried about our kids. i'm worried about our senior citizens getting scammed. i'm worried about people using our voices to say things we didn't say. i'm worried about our society that is so divided and our foreign adversaries -- the chinese communist party, russia, iran -- trying to divide us more by using social media against our freedom. they will use deepfakes to try to divide us a little more. so i'm worried about this divisiveness. i'm worried about fraud. so i'm going to give you the rest of my time and i'm asking each of you, can you just talk about values? i'm worried about the values in our country, and so are the people that believe in our values. i'm worried about our own country and our values and promoting those values. each of you, just give me 30 seconds each, if you can. what is the number one thing in the big picture you think we need to be focused on right now
4:44 am
to address not the positive parts of ai, but the scary parts of ai. what is number one in your mind? >> i would say it has to be the comprehensive approach, from the creation of systems, to input of data in proper situations, to retrieval and use of results. that is something where there are a lot of factors that come together in the national security space. we have to find mechanisms for transparency and oversight. you don't typically have the light there that you can shine on the private sector or other parts of government. i think we have to find ways to promote that oversight and make sure we are upholding those principles for responsible use, even when you don't always have the same level of insight. >> oversight to watch the input of data? >> from procurement, to what is being input, to how they are using data and what kind of human review and corroboration you have. >> that sounds like an awfully big
4:45 am
effort. mr. amlani. >> as a father of three children, all of whom use digital tools regularly, i think, first off, we are placing a large responsibility on parents to be able to monitor the digital communications of our children. there aren't age verification mechanisms to provide a gateway to make sure that the right people are accessing the right systems at any point in time. it goes back to the identity layer of the internet that i mentioned before. as you mentioned, there are all kinds of online bullying. extremist recruiting, things like that, being done not just on the dark web, but in the open. it is because we don't have the ability to recognize who people are online, and it creates difficulties for making sure that we can actually have stronger values enforced and allow parents to be empowered. >> better identification of people.
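the "identity layer" mr. amlani describes is often built from signed attribute assertions: a trusted provider vouches that a user is, say, age-verified, so individual sites never see the underlying documents. a minimal sketch with a made-up shared secret (real deployments typically use public-key credentials rather than a shared key):

```python
import hmac
import hashlib
import json

# hypothetical shared secret between an identity provider and a site
KEY = b"demo-key-not-for-production"

def issue_assertion(user_id: str, age_verified: bool) -> dict:
    """the identity provider signs a claim about the user."""
    claim = json.dumps({"user": user_id, "age_verified": age_verified})
    sig = hmac.new(KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def check_assertion(token: dict) -> bool:
    """a site accepts the claim only if the signature checks out."""
    expected = hmac.new(KEY, token["claim"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])
```

a tampered claim fails verification, so a site can trust the attribute without ever handling the user's identity documents itself.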
4:46 am
>> mr. sikorski. >> the other idea i would put out there is the education piece. i talked heavily about cyber education for the workforce that is going to defend the nation next as being paramount. there is also the education piece for the rest of society when it comes to security and ai -- and disinformation, and knowing that what you are seeing on your phone may or may not be real. and people are at an age where people are getting news by scrolling through their phone second after second. i think that is something that needs to be considered, and how do we eliminate those kinds of things that are not real. >> mr. chairman, can we allow mr. demmer to answer? >> thank you for the question, mr. congressman. i agree with you on the societal advancements creating impacts that are unintended. i believe that the technology advancement that is happening
4:47 am
today creates a promising future for all workers to be able to upskill and have more fulfillment in their work, and to be able to work with these technologies, but it starts with the critical data. if we don't have good data going into these systems, then, you know, garbage in, garbage out. >> thank you. thank you, mr. chairman. >> i now recognize the chairman of the cyber subcommittee for five minutes of questioning. >> thank you. thank you, witnesses, for being here. my colleague was talking about 1974. where was i? i wasn't alive. 1984, though. recently, one of our biggest concerns has been the prepositioning of chinese state actors. they are postured for a cyber attack if a conflict with
4:48 am
the united states were to arise. the u.s. intelligence community has said that ai has been helping them detect this activity, given it is much harder to decipher than traditional tactics that involve malware. how have you been thinking about this challenge? >> absolutely. the threats are dual. there's the cyber threat. there's others on this witness stand that can best answer that. on the physical infrastructure, specifically critical assets, that is a vulnerability. we have seen it. gecko is a partner for the u.s. industrial base, both public and private sector. we need energy security -- critical infrastructure like roads, bridges, dams. these are susceptible to physical attacks. ultimately, we are creating wins for the industrial base. it allows us to fight these other threats, as well. >> thank you. appreciate it.
4:49 am
>> mr. sikorski, i understand you teach cybersecurity at columbia university, which means you have a front-row seat to our upcoming talent. in your view, what does the government need to do to bring more -- sufficient numbers of people to the cyber workforce? >> it's a great question. i think back to my experience, right. when i was in school, it wasn't until i was a senior in college in new york city and 9/11 happened that i was like, i want to go work for the nsa. that is what i want to go do. but i didn't really think about that before. it was things like video games and things like that. i think getting people excited earlier on is a really good place that we can focus. like, there are cybersecurity attacking clubs at universities and high schools. this gets people excited to
4:50 am
gamify it and play along. i think, while we look at the workforce gap, i think about our workforce capability -- where we have the academy and we are bringing people in who don't maybe have the hands-on experience -- we are giving it to them with on-the-job training. and then i also think about government programs like the one i was in when i went to the nsa, where i was in a technical development program and got ramped up on all the things beyond what i learned during my education previous to that. those types of programs that are feeders are powerful in getting people to level up the skills they didn't get in college. >> that answered my second question. not all the way. you have these clubs and feeder organizations. what is congress's role in further improving those? do we have a role? there are over half a million cyber job openings in the u.s. that is what keeps me up
4:51 am
at night, that we don't have the workforce to defend against these cyber attacks. ai can only bring us so far. we need that human element. do we have a role? what is it? have you thought that far? >> i think we absolutely do. there is an ability to create these types of programs where it makes it really easy for people to apply and get in. and to the point made earlier about, hey, it is hard to get into schools that have these programs available -- i think we often think that it needs to be a very specific cyber program that they are doing. some of it -- they can learn skills on the job when they get in. it is more about building out the broad base of technical capability in our workforce. i think that is one great area. i think there are a lot of great agencies like cisa that have information out there where people can learn and train up.
4:52 am
there are things like that going on that are very powerful. thinking about how to draw in that workforce -- some of that is earlier on. we are talking sometimes about giving people skills. when i taught at the fbi, i was like, all of a sudden, these people were cyber agents and they had none of the background in cyber. they had no technical computer science background. it was challenging. it is not just a snap of your fingers to train them in a month. it starts early. >> companies need to start focusing on skills-based hiring instead of degree-based hiring. mr. chairman, i yield back. >> the gentleman yields. i now recognize the congressman who represents palo alto, i think. mr. swalwell for five minutes. >> thank you for holding this hearing. we are best as a committee when we are taking on issues like this. i've enjoyed working with the gentleman from long island on the cyber committee.
4:53 am
i was hoping to talk about something that californians are concerned about. the entertainment industry is the second largest job creator. it is offscreen. ai is the future. there is no putting it back in the bottle. it is the future. we have to embrace it and put guardrails on it and contours around it. when it comes to the creative community, you know, the example over the weekend of what happened to scarlett johansson, with her voice essentially being stolen from her for an ai product -- what should we be thinking about to make sure
4:54 am
that we protect artists and creators from this, but, as i said, embrace ai and recognize that this is where we are going. >> it is important to find ways to be proactive and define what people's rights are going to be. in a situation like that, i'm not sure that was imagined or contemplated when the movie where she played an ai came out five years ago. this is something that has come up a lot -- i think in the recent writers' strike -- how do we build in those protections, not just for how ai is being used right now in this industry, but how will it be used in 10 years? this is a little outside of my field, but from the technology standpoint, it moves so fast, i
4:55 am
think it is important to be proactive thinking about, not just current risks and factors to care about, but what do we need to care about down the line that we might not be ready for. >> when we talk about creators, saying that they are anti-ai is such a basic, hot take. they don't oppose ai. they just want rights. they are willing to engage around their likeness and their voices, but they should be compensated. and the majority of people who are writers and actors are people you have never heard of, but this is their livelihood. in california, we are especially sensitive to that. i wanted to ask mr. amlani, because we are in the same backyard, the bay area. the chair alluded to how our culture has created, you know, so many innovators in the bay area, but i do worry about, with ai -- i have a community in my area called pleasanton, one of the most
4:56 am
wealthy communities in america. you've probably heard of it. i have other places like san lorenzo and hayward and union city, and they have some of the poorest communities in the country, with schools that don't have enough resources. those kids have dreams as big as kids in pleasanton. i just fear that from a child's earliest days in school, there are going to be two classes created: those who are given access to ai in their classrooms and those who are not. what can the private sector do -- because you often have some of the best solutions out there -- to partner with school districts to make sure that you are imparting your knowledge and skills in places where you may have the need to recruit talent down the track as well? >> sure. comments and questions. this is a pretty personal issue
4:57 am
for me. but with regards to actually allowing people to have access to the technology, and ai in particular, it is interesting the way that ai is democratizing itself and making itself available to everybody. it is as much of a concern to make sure that everyone actually has access to it and is able to have these tools -- but also, people that have gone through large universities and master's degrees and phd's -- now, a lot of that content and knowledge is available at the fingertips of people that have never gone through those programs. so with different ai tools that are now available at people's fingertips, you can code. you can write apps. you can create content. i have my 12-year-old creating music right now. this kind of democratization of ai capabilities is an opportunity but also a massive threat. it really does upskill many
4:58 am
cyber criminals around the globe to be able to attack systems. people that are not as well off, potentially, and would love to have the ability to create malware that could potentially generate a ransom payment. these opportunities to educate the youth and make sure they know how to use it responsibly and for the right purposes are things that our society needs to embrace and do a lot more with. >> thanks. thanks again. >> the gentleman yields. i now recognize mr. d'esposito from new york. >> this is something that affects the united states of america and the world and has real promise for the future. just last week, during police week, i chaired an emergency management subcommittee hearing with law enforcement, hearing about the use of drones. we heard a lot about the
4:59 am
expanding field of how law-enforcement agencies are utilizing drones to assist them in doing their jobs. it was good to hear, and promising, that they were embracing technology to handle issues that are plaguing so many cities across this country. as we embrace technology, and as the field expands, we find challenges. listening to all of you speak about the power of ai to assist the united states against attacks from our enemies, it seems that there may be a space for ai with these drones. so generally speaking -- any of you could answer -- is ai already being used in drones, either by those in law enforcement, the government, or privately? >> congressman, thank you for the question. being in a related field with
5:00 am
doing wall-climbing robots primarily, i can say that ai is useful for using these smart tools properly, everything from localizing data, ensuring that a data point is synced to a location on a real-world asset, to processing millions of data points, or in this case, visual images. we heard earlier about drones being used to secure the border. there are definitely applications here for that. >> you mentioned data. obviously, drones are utilized by law enforcement to collect that data, whether it is audio, visual, location data, gps, among others. so that information that is collected -- we need to make sure that it is collected correctly and kept private. so is there a role that ai can play in ensuring that this information remains secure and doesn't give the bad guys access? >> absolutely. i defer to my fellow witnesses
5:01 am
here on specific policy recommendations for cybersecurity, but gecko collects massive amounts of data on the physical world, and we take very seriously the responsibility of securing that data. that starts with the standards for securing the data and also providing training to the entire workforce so they know how to properly handle any type of information, as with classified information. >> do any other witnesses have recommendations with regards to that? >> i could just add that, you know, securing the models, and the data used to build them, is critical. one of the things we deal with the most when doing incident response is ransomware, and ransomware attacks have shifted.
5:02 am
they no longer just encrypt files on disk. they focus on stealing data and then using that to extort victims. and so securing the crown jewels of data, which most entities have, is paramount. >> sorry. did you have something? >> sure. when i was at the defense innovation unit, with the autonomous systems portfolio, part of my role years ago was to manage autonomous systems within drones. but i think one of the biggest concerns is actually chinese-made technology within drones, and making sure that we create a list of different drones that people could potentially use in law enforcement and other avenues -- a safe list of drones, which is something that the defense innovation unit did create. >> you are leading into my next question. it is almost like you planned this out. part of the conversation we had last week with the nypd was the fact that they currently utilize chinese technology in their drones, and are working to eliminate it from the fleet because of the issues and concerns that we have. my colleague from new york,
5:03 am
mrs. stefanik introduced legislation that would help law enforcement make sure they only purchase american-made technology and american-made drones. obviously, those chinese drones are still in our airspace, still being utilized by first responders and law-enforcement agencies. quickly, how can ai help us mitigate the threats that they pose? >> there's also a significant amount of work being done by the defense innovation unit and other agencies on mitigation of drones and counter-drone work. so ai being used for
5:04 am
>> it has changed so much, and just the progress of ai in the last six months to a year has been startling -- and how it can also improve our lives as americans. i also want to focus on the challenges it poses to our own democratic process. just in the last 24 hours, dhs issued a warning of threats that ai poses to our elections. it continues to threaten global democracy. and just last week,
5:05 am
the intelligence community. the intelligence officials have testified in the senate that since 2016, the threats are happening even more, as we know today. others are competing to influence not just their own countries but what's happening in our own elections here in the u.s. that is a danger to all of us and a concern for us as well. this past january, we know that voters were exposed to one of the most high-profile cases of ai misuse. we know that there was a robocall impersonating president joe biden telling people not to vote. the attempt was identified and stopped, and i think we're all very grateful for it. we can see that those types of efforts can cause enormous havoc, especially in elections that are close, in communities across the country. we already know that there have been unsuccessful and
5:06 am
successful attempts to undermine our elections, and folks are trying to do it again in 2024. the rise of ai is very concerning to all of us. >> one of the advantages in the u.s. is that our elections are decentralized. they are run by states, counties, towns, cities. many of them don't have the technology to know how to deal with the oncoming wave of ai. i'm really concerned about whether these smaller agencies, the city clerks, are actually able to take on ai and have the resources to do so. we know that there is a new task force coming out of
5:07 am
homeland security, which is really important. this is a huge responsibility of this committee -- how we provide and get assistance to these clerks across the country. what additional safeguards are needed, and how can congress better support all of these election organizations and officials across the country that have a decentralized system? how are they supposed to get support? i think that is very concerning. >> the decentralization in some ways provides a lot of assets, but it is a significant challenge when you're facing threats like this. over the last decade we have built up a robust network that provides support for election security for things such as cyber threats. we should supplement that by having the same entities serve
5:08 am
as a means for highlighting ai threats, but also for the general awareness and education that can be brought from the small set of federal entities to the broad network -- trying to educate about: here are the types of technologies; here is something that might be used, from trying to use ai to create spam to overload your office, to building up fake materials about polling information. >> our elections are obviously for president -- these are federal elections -- but they rely on data from local towns and counties. they just send their information up through the process. the real concern is that you can micro-target
5:09 am
precincts, small towns and cities with these ai disruptions in certain states. that is something that we really have to consider and think about as we move forward, along with the education that is needed to build up these defenses. >> i now recognize the gentleman. >> good afternoon. mr. sikorski, i would like to go back to your testimony where you mention that our cyber adversaries are utilizing ai to really enable some of their malicious behavior: finding new attack vectors, enhancing the speed of lateral movement once they do intrude
5:10 am
into that network. i am interested in hearing more about how we are using machine learning to build and enhance our cyber defenses against these kinds of attacks -- precision ai, including some revisions to how the socs are operating. would you elaborate for us on how machine learning can enhance our cyber defenses? >> actually, palo alto networks and i have been involved on this journey for over 10 years, before other technologies like that got really popular really quick. we have been building and tuning our technology to be able to detect malware on systems. we have been training models to do that for quite some time. and that detects variation, making sure that the variants
5:11 am
will be stopped due to that training. there is also the idea of leveraging ai to get rid of the noise. that is the recent evolution we have been focused on. everybody is inundated with all of these tools. when i go back to solarwinds, for example -- i did numerous incident responses after that came out, went onto corporate networks, and one of the big problems that they had was that they actually detected the attack. they detected the backdoor being dropped on the system, but they did not know it was there because they were flooded with so much noise. and so what we are doing is taking our technology and very much focusing on how to put these alerts together and reduce the amount of information hitting the brains of the analysts, who are getting burned out. instead,
5:12 am
it gives them a chance to zero in on the attack and actually move the needle. >> you also mentioned in your testimony a point about the unintended consequences of disclosure. i would like to go back to that. you particularly raised a concern that public disclosure requirements about how we are using ai, and the data we train it on, could unintentionally create a road map for the adversaries and reduce our overall security. if you were a policymaker, how would you balance disclosure requirements with not alerting our adversaries to the type of information that we don't want them to have? >> that is also a great question. what is your end goal with respect to the ai that you are
5:13 am
trying to get to customers or to protect the network? you have to think about the trade-off. the more that we regulate things with oversight, the more it could slow down the innovation and the protection. maybe that is the appropriate thing to do when we are talking about somebody applying for a home loan or something like that. but i think when we start to talk about cybersecurity, we have to focus on what the data is. is the data the ones and zeros, the malware, the detections needed to be able to limit attacks? and how important is it to make sure that we continue to make a difference with the technologies that we are building in the cyber war that we are fighting day in and day out? >> you also mentioned, and that reminds me of secure by design.
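the "ones and zeros" framing above, training models on raw malware bytes so that variants are still detected, can be illustrated with a toy sketch. this is not any vendor's actual pipeline; the corpora, labels, and centroid approach are all illustrative assumptions:

```python
import math
from collections import Counter

def byte_histogram(data: bytes) -> list:
    # Normalized 256-bin histogram of byte values: a crude but classic
    # feature vector for detecting variants of known malware families.
    counts = Counter(data)
    total = len(data) or 1
    return [counts.get(b, 0) / total for b in range(256)]

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def centroid(vectors):
    # Mean feature vector of one labeled family of training samples.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(256)]

def classify(sample: bytes, centroids: dict) -> str:
    # Assign the sample to the nearest family centroid. Small edits to
    # a file barely move its byte distribution, which is why this kind
    # of trained model still catches variants.
    feats = byte_histogram(sample)
    return max(centroids, key=lambda label: cosine(feats, centroids[label]))

# Hypothetical stand-ins for real training corpora.
benign_samples = [b"hello world, this is ordinary text" * 4]
packed_samples = [bytes(range(256)) * 4]  # high-entropy, packed-binary-like
models = {
    "benign": centroid([byte_histogram(s) for s in benign_samples]),
    "malicious": centroid([byte_histogram(s) for s in packed_samples]),
}

print(classify(b"hello hello world text", models))  # benign
print(classify(bytes(range(256)) * 2, models))      # malicious
```

production systems use far richer features and learned models, but the centroid comparison captures the core idea: training on known samples generalizes to the variants that share their statistical fingerprint.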
5:14 am
>> this goes back to the point that ai is here to stay no matter what any of us do. it is sort of like the internet. we didn't plan security into the internet, and much of what comes up for us as a cybersecurity company flows from that: we missed out on an opportunity to build things in a secure way. that's where, when it comes to securing ai by design, it is about what are you building, how are you building it, how are you protecting the application as it is running, how are you protecting the models and the data, everything as it's actually flowing out to customers or otherwise. i think that is where a really big focus on building things in a secure way is really
5:15 am
important. >> i now recognize -- for five minutes of questions. >> thank you, chairman, for having us here today, and thank you to all the witnesses. i want to start and build off that. i am incredibly proud of the work that they are doing. thank you for emphasizing the need to educate and train our cyber workforce. i am wondering, in your experience, what are you seeing as the most prominent barriers to building the cyber workforce? >> there are a few parts to that. i think the first is desire. i think that getting people at a much younger age focused on
5:16 am
getting excited about these technologies and wanting to get involved matters, and it goes back to what was said earlier about actually talking about cybersecurity at a very early age. to the point of what is happening in your district, those programs are really feeding the engine of all of this cybersecurity work. i think working with industry to make sure that students are lining up for the jobs is key, because a lot of cybersecurity companies struggle to hire. >> everyone is looking to
5:17 am
enhance their cybersecurity capabilities, which includes adding cybersecurity professionals, whether it is a private entity or any organization that is concerned about these issues. >> just quickly following up, you mentioned building desire at a younger age for people to engage in this field. are you seeing anything from that student demographic that draws them in, that maybe we should be using to highlight and amplify at a younger age? >> that is a great question. one thing i think about is the gamification of it. i think of myself personally: i wanted to be a video game programmer originally because that was really cool and i had the tech skills for it. once i realized that cybersecurity is like this good versus evil kind of situation going on in the real world, and it's only going to get bigger, i started to get really excited. and then there are hacking
5:18 am
competitions, and driving people to participate more because of the fun that can be had and the team building that can be had working toward that. that does exist out there: there are cybersecurity clubs where they focus on this, and they go to competitions together and it rallies them around a common goal. i think those types of things are great. >> i'm going to shift really quick. in your testimony you touched on how ai can be used to better secure the infrastructure through the collection of high quality, high quantity data. we are overseeing the major investment being made under the infrastructure investment and jobs act. it is building our future infrastructure. one of my favorite projects was the restoration of the george --, when they went through the cables, and it was a completely intricate project.
5:19 am
as we think about investing in our infrastructure so we don't have to make these huge investments later, how could the use of ai be used to better keep up our existing infrastructure? >> this is something we care deeply about. of course, protecting the infrastructure we have today means capturing the right data sets so that we can ensure those assets are here for us when we need them, not vulnerable to old age or some external threat. we also see the opportunity with these data sets to help us build more intelligently in the future. as we bring new systems online, how do we instrument them in ways where we can do that monitoring of what is going on with that equipment, so that we don't have the failures or the vulnerabilities, and hopefully lower the cost? because two thirds of the cost of infrastructure is usually
5:20 am
incurred after the initial build. >> what type of dollar investment could be made to quickly scale up this technology? >> there are very much technology-ready solutions out there for national security, and i would love to follow up with some guidance on that in terms of programs that can be utilized to help bring these technologies to the forefront and accelerate their adoption, whether it be investment in hardware technologies or ensuring the policy side recognizes that not all solutions are created equal. today we do a lot of things that seemingly placate people into thinking they have a handle on what is going on, but it is actually woefully inadequate in terms of how we are actually managing and maintaining. >> we appreciate the continuous
5:21 am
conversation. i am out of time, so i yield back. >> i now recognize the gentleman from north carolina. >> i yield my time to you. >> thank you. a quick question on creating a sense of security for the public. there is all this fear out there. are there requirements that we can place in these systems, kill switches, things like that? what are some things that we can do to give the public a sense of security that we are not going to create the terminator or something like that? anyone? you are smiling. >> control is one of several factors that is critical for something like this. you need strong principles from creation, to what data you are putting into systems, to what systems you are using it for, to what data you are taking out and how you are using it.
5:22 am
as you said, with human review, one of the key safeguards is that there should be human cooperation; it shouldn't just be ai making its own decisions, and we have to know how reliable ai is in certain circumstances. sometimes it can provide small bits of insight, sometimes it can be very reliable, and sometimes it gives a degree of accuracy that you have to treat with a bit of skepticism even though it still provides a bit of value. along those lines, you need not just human review but specially trained staff. that's why it is so important for folks to understand automation bias, the tendency for individuals to assume that systems work better than humans, which has been documented in many cases. you need to understand what types of biases might apply to any given system in that situation.
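the reliability point above, that ai output ranges from rough insight to near-certainty and a trained human should stay in the loop, can be sketched as a simple confidence-gated routing rule. the function, labels, and thresholds are illustrative assumptions, not any agency's real policy:

```python
def route_decision(label: str, confidence: float,
                   auto_threshold: float = 0.95) -> str:
    """Route an AI prediction to automation or a human reviewer.

    Only high-confidence output is acted on automatically; everything
    else goes to a trained analyst. Keeping a human in the loop this
    way also counters automation bias, the documented tendency to
    assume the system is right.
    """
    if confidence >= auto_threshold:
        return f"auto: act on '{label}'"
    if confidence >= 0.5:
        return f"human review: model suggests '{label}'"
    return "human review: low confidence, treat as insight only"

print(route_decision("phishing", 0.99))  # auto: act on 'phishing'
print(route_decision("phishing", 0.70))  # human review: model suggests 'phishing'
print(route_decision("benign", 0.30))    # human review: low confidence, treat as insight only
```

the threshold would need calibration per system; the design point is simply that the degree of skepticism applied is explicit rather than left to the operator's assumptions.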
5:23 am
>> there were some negative things said about facial recognition earlier. i think, if i heard correctly, maybe not negative, but concerning: the fairness, the use of it against large groups, population-level issues, law enforcement. what are your thoughts on the reliability of facial recognition? this is your field. i am aware of a company in my district that uses three-dimensional imaging of the hand where facial recognition has failed. so what are your thoughts? i think you're right that it sort of begins and ends with being able to make sure that the person is the person in the system. what are your thoughts on that? >> thank you, mr. chairman. this is actually a really important question. even going back to your prior question about building confidence in individuals and
5:24 am
ai, i think it ties to that as well. fundamentally, post 9/11, after the terrorist attacks, one of the steps we took was creating tsa. having federal agents at the checkpoints made people feel safer. that used to be just a private contracted workforce, but having federal agents there made people feel more comfortable. these steps are really important to put into place, because people do not feel comfortable with the identification steps that are necessary to access systems. take passwords: the most secure place to keep passwords, according to all of the major experts in the field right now, is actually in a personal book that you carry with you, because nobody overseas is trying to steal your personal book of passwords. they can access your systems, but your book of
5:25 am
passwords is something that is very difficult for them to get to. leveraging better authentication builds confidence. people have become very comfortable with face i.d. to secure their systems. people have no confidence in what member thompson mentioned in regards to captchas, and facial verification is an important component. >> i have a quick question on data poisoning. how big of a threat is that for systems that are using data to make very quick decisions, particularly in the defense world? and if we need to go
5:26 am
somewhere else to have that conversation, we can postpone it until then. can you just really quickly share a few concerns about that, if you have any? >> it goes back to secure ai by design. how are we securing that information, how are we securing the models themselves that make those decisions, as well as the training data that goes in there? there is a lot of research and a lot of thought about what attackers could do if they could manipulate that data, which would not even necessitate an attack against the technology itself; it is an attack against the underlying data. it's definitely a concern that needs to be taken into account and built in as the technology is being built out. >> thank you, i yield back. >> thank you all for being here. the worldwide threat assessment warns that our most
5:27 am
sophisticated adversaries, russia, iran, see the presidential election as an opportunity to undercut our democracy, and many of our law enforcement and intelligence agencies have confirmed that they are also seeing upcoming threats. we know russia used social media and cyber operations to interfere in our election in 2016. we know that these actors posed as proud boys in an online operation to intimidate voters in 2020. just recently we learned that china is experimenting with ai to enhance their influence operations. within the department of homeland security, cisa is charged with protecting the security of our elections. i know you work closely with
5:28 am
cisa on these issues, and i would love to ask you a relatively open-ended question, which is: how is cisa prepared, or how is our government prepared, to address the use of ai by foreign adversaries to undermine and interfere in our elections? >> that is an excellent question. i think cisa is doing a great job with collaboration, working with us and many other private entities on thinking through what we are actually seeing in the landscape. one of the things i am tasked with is running threat intelligence: how do we take all of the information that we are getting from other private entities, bring it all together, and share it back? this is where we are seeing these threat actors go, whether it be what china is
5:29 am
up to today, or what russia is doing, staying on top of these threats and finding the new ones. for example, we saw a novel attack where russia was sending phishing emails to ukrainian embassies, and we actually made that discovery and showed how it went down. that hyper-collaboration is definitely going to move the needle. >> on deep fakes, we know intelligence agencies said that they had spotted chinese and iranian deep fakes that were not used, but the fbi has recently identified that recent elections in slovakia were impacted by the use of deep fakes. how prepared are we to prohibit or prevent the use of deep fakes that might have a significant impact on our
5:30 am
election? >> what we have seen with ai is that it really lowers the bar for attackers to be able to generate believable communications, whether it is emails, phishing, voice, or even deep fake technology. i think that lowering of the bar makes it a lot easier to scale these operations. >> the deep fake technology, and the impact on trust in authentic content, is really concerning. >> that is actually what we do. we use green lighting from a cell phone screen or a desktop screen that reflects against live human skin, all simultaneously while you are
5:31 am
actually recording a video or doing a match, verifying that it is a live human being, the right person at the right time. being there at the creation of the video is an important component: you can tag the video and verify that it is in fact not a deep fake. >> are you coordinating with cisa on this? >> no. >> beyond just the deep fakes and the misinformation, one of the risks is the liar's dividend. when you are evaluating truthful information, someone can just say, no, that was just a deep fake. so it is not just the initial misinformation; it is trying to create an entire ecosystem of uncertainty. that is just to emphasize why it is such a significant threat
5:32 am
and why we need to make sure we have clear information and reliable ways to debunk this. >> you and many of the members on this committee are concerned about election security, and i would just urge you to encourage some of your colleagues who are trying to interfere with law enforcement efforts to instead coordinate on election security and prevent election interference through the cyber companies through which the adversaries try to exert influence. i hope that we don't see any more members of the republican party trying to cut funding for cisa, and that we work closely with cisa to make sure that our elections are safe. i yield back. >> the chair now recognizes mr. crane. >> thank you, mr. chairman. i realize that this hearing is
5:33 am
about utilizing ai to bolster homeland security. i want to bring up an interesting meeting that i had in my office before coming to the committee. i was with one of the largest tech companies in the united states, and arguably globally. they were talking about the major cybersecurity attacks that they are seeing. i was told that they used to see a range of tens of thousands a day; they are now seeing about 100,000 attacks a day. is that consistent with what you all are seeing and hearing in your space as well, an increase in cyber attacks? >> yes, that is a great question. we have actually seen a great increase in cyber attacks. the number of attacks we are stopping across all of our
5:34 am
customers, some 65,000 customers, is in the billions. that gives you a sense of how many new attacks are going on, and then we see the cadence of ransomware attacks and extortion continuing to increase as all of these tools have evolved. >> this company also told me that not only are the numbers of attacks on a daily basis increasing drastically, but the brazenness of some of these foreign actors is becoming more and more hostile as well. is that something that you all can verify that you are seeing? >> yes. the days of the ransomware attack where they just encrypt files and you are just paying for a key, we actually miss those days, because now they are
5:35 am
stealing the data and harassing our customers to get ransomware payments. they are taking the data, which has the customer information, and really going to a dark place in the level of harassment they're willing to go to: sending flowers to executives, and even going after a company's customers, pretending to be the company when in fact they are the threat actor, harassing those customers to drive the payment they are after. >> what do you attribute this drastic rise in aggression and in the amount of cyber attacks that we are now seeing against our own corporations and our infrastructure to? mr. demmer, i will start with you.
5:36 am
>> thank you, congressman, for the question. my expertise really lies in the physical infrastructure, the critical assets that we help maintain and protect. i can't say whether the threats are rising on our critical energy systems and on infrastructure like bridges, roadways, and dams, and although we haven't seen the pace of attacks that we are seeing on the cyber side, it is a real vulnerability and risk as our infrastructure develops more vulnerabilities. >> what about you, mr. sikorski? >> i think the threat actors, when we talk about ransomware, have become a business where it is actually not the same hacker breaking in and doing the ransomware and everything else. one group breaks in, and then they sell the access on the dark web to ransomware gangs. almost like franchises, like mcdonald's, they pop up and have a reputation score about
5:37 am
how likely they are to do what they say they're going to do, and that enables them to build the relationships and get the access that they need. it is operated like a nine-to-five job. >> let me ask you something. do you believe that some of these nation-states and some of these actors sense weakness here in the united states? do you think that has anything to do with it? >> i don't think i could speak to whether they sense weakness or not. it is more an opportunistic thing from what i see. we have seen them leverage vulnerabilities. >> have we created those opportunities? were they not present a couple of years ago? >> i wouldn't say that. i would say those opportunities have always been present. however, the availability and opportunity for them to figure out how to get in has increased
5:38 am
; they are better enabled now that they're operating in this model, which makes them more efficient and able to pull off more attacks. >> do you think we are doing enough offensively to deter individuals that would utilize these types of technologies to carry out these types of attacks against our country? >> when i think about that, i'm not a policymaker as far as thinking about what the best stick is for dealing with the cyber threat. one of the things that i always focus on is the defensive side and how we make sure we are doing everything we can to secure ourselves. big steps have been made in
5:39 am
recent years, the last few years. all of this collaboration that is happening is moving the needle, and i think that will help a lot. that being said, we are in an arms race right now, and the defenders need to win here. one opportunity we have is to remove the vulnerabilities i was talking about. >> thank you for the extension. >> some extra grace. i now recognize mr. kennedy from new york. >> thank you, chairman, and thank you to the panel today for your testimony. we are hearing a lot about advancements in ai and the upcoming elections. we want to make sure they are secure, and as november approaches there is
5:40 am
more and more concern about those that seek to undermine our election. just last week, a report from the department of homeland security warned that generative ai tools could be used for aggravating emerging events, disrupting election processes, and attacking election infrastructure. i worry about the rapid advances we are discussing here today. the cybersecurity and infrastructure security agency is responsible for election security. how can this agency best support election officials and united states voters so that they can assess the
5:41 am
authenticity of the information they see online? >> this is an area where we can learn a lot of lessons from the cybersecurity space over the last five to 10 years: using these types of federal agencies to distribute information out to a huge range of officials as well as the public. that comes both from providing education about how these technologies work and what to be aware of, and from providing information about specific threats, if there are new deep fake techniques or some new type of attack. because our election system is so decentralized and there is so much information out there, acting as that hub for getting out good information can be very important.
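for the hub model described above to work, the decentralized recipients need a way to check that an advisory actually came from the hub and was not altered in transit. a minimal sketch using an hmac tag over the message, with a hypothetical pre-shared key (a real deployment would more likely use public-key signatures and proper key management):

```python
import hmac
import hashlib

# Hypothetical pre-shared key, distributed to election offices out of band.
SHARED_KEY = b"example-key-distributed-out-of-band"

def sign_advisory(text: str, key: bytes = SHARED_KEY) -> str:
    # The hub computes an authentication tag over the advisory text.
    return hmac.new(key, text.encode(), hashlib.sha256).hexdigest()

def verify_advisory(text: str, tag: str, key: bytes = SHARED_KEY) -> bool:
    # A county office recomputes the tag and compares in constant time,
    # rejecting forged or altered bulletins.
    return hmac.compare_digest(sign_advisory(text, key), tag)

advisory = "new phishing campaign targeting county election offices"
tag = sign_advisory(advisory)
print(verify_advisory(advisory, tag))                # True
print(verify_advisory(advisory + " (edited)", tag))  # False
```

the design point is that authenticity travels with the message itself, so a small county office can trust a bulletin without a direct line back to the hub for every advisory.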
5:42 am
>> what do you see as a role for the cybersecurity and infrastructure security agency in authenticating content online in regards to the upcoming election? >> i think that will be a much more challenging question if we're talking about content from any layperson, as opposed to a message being targeted at an election administrator. that is something that my organization does a lot of research into. >> like what chairman green mentioned earlier, i want to plug the university at buffalo, which established the ub institute for artificial intelligence and data science in 2021, and just this
5:43 am
past year, the new ai center. you should look at buffalo and its engineering school. how can we better harness the institutes of higher ed, especially our institutions working on this cutting edge technology, and while they're training up our youth in this new technology, how do we make sure that they are developing it in a safe way with regard to ai? >> thank you so much for your question. my son and i would love to come to buffalo. fundamentally, i think there is
5:44 am
a level of distrust in some of the ai content and in not knowing who created it or where the content came from. it comes down to identity, and to verifying the content has not been tampered with after it was created by the initial creator. making sure that you are able to identify the initial creator is a very important component of trusting all of this content, and also for the property concerns that we have already discussed here today. using proper identification tools, things that can verify it is the right person, it is a real person, and it is right now, allows that individual to identify themselves with ease of use and show the content is coming from them. >> thank you, i yield my time. >> i now recognize myself for five minutes. for the purpose of this committee, how ai can help us
5:45 am
be able to protect entities within the united states, that's where i want to go here initially: the ability of your companies to help bolster border security, for those of you that are representing the free market. you had alluded to, and i have some information here on, ai-powered copilots. my understanding is that a copilot is a natural language assistant that can yield multiple operational advantages. i'm actually curious, when we are doing our initial claims, whether our border security officials can utilize that tool to better inform the cases of the future, so that people are
5:46 am
being coached for interviews, where a language barrier can exist and they can be told to say certain keywords, are you all aware of how we can make sure we get to the depth of the fraud? can we use ai to get past the language barrier with those applying, helping us get to the truth of the matter? >> it is a great question. i will take from my experience helping to build these at palo alto networks. when we say copilot, what we mean is you have a human who is using technology to find new threats. that's what we sell, but it's very difficult to use, so what we end up doing is building these technologies called copilots for our products so that users can engage very quickly to ask questions, and then the copilots will
5:47 am
process the question and give them the answer that is there, so that they don't have to figure out all of the complexities of our product. i think what you can do is take what we've done in our capacity and apply that to a use case like we were talking about. the officer is paying attention to what they are collecting, what information, and the ai maybe says to them, i saw things differently than you saw when i fed all of this in and put it against my model. you could put the two side by side, and you are more powerful than you were before. that is one thing i would think to go to. >> so you do think that officials could be empowered in those moments to help the case. but i am assuming there is also a defensive mechanism, that someone crossing illegally could also use that technology
5:48 am
to time delay, read the question, and, since i'm assuming the technology is booming, get a response to help their case that is also ai generated. >> i am definitely not an expert on the border and how those policies figure out what the best way is on immigration and other things. but the thought process is that the types of things we are doing with ai to enable customers to prevent cyber threats could be modeled similarly for really anybody who is collecting data to make better decisions that they might not otherwise make by themselves. >> the rise of ai raises significant questions and concerns, constitutional and electoral; ai can be very dangerous. social credit scores, and the ability to have facial recognition that may
5:49 am
produce a different outcome depending upon whether you adhere to the government's positions or not. there is a real fear, a credible fear, out there for those of us who think this could be weaponized against the nation's citizenry. i want to ask, from your perspective, about the significant concerns that you have regarding ai in the future, if anybody wants to talk about election fraud specifically: candidates can be made to look like they are saying something that they are not actually saying, and that can be generated. and the ability for a user, we're talking about government regulation here, for a user to have a fact check, something that can be encrypted, where i have the ability to determine whether that was ai-created or whether that was an actual video. anyone who wants to respond to that? >> the identity of the creator
5:50 am
is actually a very important component up front, mr. chairman, mr. congressman. i believe that right now that identity is not actually associated with most of the videos shared openly. you can never tell who created the original video or have confidence that the video itself was created by the initial creator. but there are some watermarking tools and other technologies being used by companies, and there is some investment in the space currently today, to be able to assess that. >> with that i yield to ms. clarke for her five minutes. >> thank you, mr. chairman, and i thank the ranking member for holding this important hearing. let me also thank our panel of witnesses for bringing their
5:51 am
expertise to bear today. the rapid advancements in artificial intelligence and cybersecurity represent significant new opportunities and challenges with respect to securing the homeland. every day, developers are creating new ai tools to better identify and respond to the increasing number of cyber attacks. but while ai is a critical tool in our cyber defenses, it is also still created and deployed by human beings and trained on data sets which often replicate human biases, and that bias is built into the very technology created to improve our lives and keep us safe. for example, ai designed for law enforcement purposes that was trained on historical police or crime data may serve to reproduce or expand existing inequities in policing, as those data are not an accurate reflection of crime but rather of police
5:52 am
activity and crime reporting, which can be fraught with bias. while artificial intelligence and cybersecurity will remain important elements in defending the country, these are not just technological issues but critical civil and human rights issues as well. i ask unanimous consent to enter into the record this article on the national security implications of cyber threats, which provides valuable context on their societal impacts and the need for cybersecurity strategies. developers and deployers of artificial intelligence, whether in the realm of securing the homeland or providing a service to consumers, must be intentional in the creation and rollout of this technology. similarly, we as policymakers must be deliberate and meticulous in our choices
5:53 am
as we work toward our effort in securing the homeland, as well as crafting a comprehensive framework, which remains foundational to our work on ai and a range of other issues. we must all take care to ensure that civil rights and democracy are baked into this emerging technology; once it gets out there, there will be no putting the genie back in the bottle. you have encouraged the government to consider different standards for different use cases, recognizing that certain uses of ai may not pose the same liberty concerns as others. can you elaborate on why you think such differentiation is important, and how you think the federal government should identify which uses of ai should trigger heightened scrutiny, while keeping americans' civil rights top of mind? >> that is an excellent question
5:54 am
. i think about how, whenever you are applying security to something, people see it as an inconvenience and they don't want to do it. so, especially when it comes to innovation, people are moving very fast with this technology, and we need to think about security as we do it rather than just rushing to get it out there. as you said, it is unstoppable once it is out. so i think that when i look at cybersecurity specifically and the defense challenges we have, we spoke earlier about the amount of threats and attacks going up over time, and we also touched upon our adversaries leveraging ai and trying to figure out how they can produce more attacks with it. it's very important to keep innovating very quickly on the security side and to focus on data that is the ones and zeros, the malware, what is
5:55 am
happening on a system, what is vulnerable, and using that as the inputs to make decisions. when it comes to the things you mentioned, policing, and you could go to employment, credit, education, college applications, wherever this ends up going, it is important to really take a look at those high risk cases and take an approach to regulation that has a purpose behind why you are doing it. those uses are really impacting people's lives in a way that cybersecurity maybe is not; with cybersecurity, we are helping people's lives by preventing them from getting attacked. >> thank you. do you agree that we should have different standards for different use cases, and what use cases do you believe pose the greatest civil rights and privacy risks? other witnesses are also welcome to chime in.
5:56 am
>> there are certainly some types of oversight that should be universal, the principles i mentioned for ensuring accurate results and efficacy. at the same time, there are certain forms of ai that do create heightened risk to individual rights and liberties that we do need to have standards and safeguards for. any type of ai, for example, that might be used to help make a designation that an individual should be a target of an investigation, subject to surveillance, or arrested, that is an example where there are severe rights-impacting ramifications to the use of ai that would warrant heightened scrutiny and a number of extra safeguards, to make sure that uses are efficacious and that you are not having problems such as the biases that you mentioned. >> with that, i want to thank the witnesses for your valuable testimony before this committee today. we would ask the
5:57 am
witnesses to respond to any additional questions in writing pursuant to committee rule 7d. without objection, the committee now stands adjourned.
5:58 am
5:59 am
