tv [untitled] October 18, 2024 12:30pm-1:01pm EDT
12:30 pm
this makes the scams look convincing, and thus people are more susceptible to falling for them. there's also an increase in crypto scams and disinformation at scale. bad actors are able to upscale their operations by using ai to auto-generate text, images, websites, videos and other content. this can be done through the creation of scam sites and the ability to rapidly disseminate crypto-related disinformation. for instance, to avoid detection by law enforcement, a bad actor running a scam operation may cycle through different sites with new marketing, creating a new site every time. this process can be seamless and low cost due to the use of ai. in addition to sustaining the scam infrastructure, which is necessary for the scam, scammers can use ai to accelerate and upscale their outreach to potential victims in order to generate illicit proceeds. this might include voice cloning, gpt-style chat tools as
12:31 pm
we described, deepfakes, and ai-aided social engineering to bypass security measures for personalized phishing, cryptocurrency confidence investment scams and other schemes. another example is using large language models such as chatgpt, claude, et cetera to facilitate cybercrime. as noted, these models have become increasingly multimodal, able to input and output not only text but also images, audio and video. generative ai models are able to generate new code and check existing code for bugs. this can be exploited in the crypto space by identifying vulnerabilities at scale and facilitating cybercrime. decentralized cryptocurrency applications rely on open source code to run their operations and hold user assets. hackers are able to exploit that transparency and open code vulnerabilities to steal billions from
12:32 pm
defi protocols, identifying the vulnerabilities in the code or security systems and analyzing investors' behavior patterns. on the flip side, however, ai can be used to check code as an audit function. large language models are also getting better at identifying and rejecting malicious prompts. this is the upside of all this. the web three ecosystem is already a space prone to hacks and exploits, and this space faces additional vulnerabilities when artificial intelligence enters the picture. this is primarily because ai adds a layer of sophistication to existing attack vectors, acting as an additional tool for bad actors.
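to make the audit-function upside mentioned above concrete, here is a minimal sketch of asking a general-purpose llm to review a smart contract. it assumes the openai python client and an api key in the environment; the model name, prompt and vulnerable contract are invented for illustration, not a vetted audit pipeline:

```python
# minimal sketch: asking an LLM to review a smart contract for common bugs.
# assumes the `openai` python package and OPENAI_API_KEY in the environment;
# model name and prompt are illustrative placeholders, not a real audit tool.
from openai import OpenAI

client = OpenAI()

contract_source = """
pragma solidity ^0.8.0;
contract Vault {
    mapping(address => uint256) public balances;
    function withdraw(uint256 amount) external {
        require(balances[msg.sender] >= amount);
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok);
        balances[msg.sender] -= amount;  // state updated after the external call
    }
}
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a smart-contract security reviewer. "
                    "List likely vulnerabilities and suggest fixes."},
        {"role": "user", "content": contract_source},
    ],
)
print(response.choices[0].message.content)
```

the toy contract deliberately updates its balance after the external call, the classic reentrancy pattern an automated reviewer should flag.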
12:33 pm
another concern is the increasingly prevalent use of llms to write the underlying blockchain code. this may save time between the idea stage and bringing a protocol to market, but the ai-generated code may not be secure. potential attacks include content created with data poisoning, and ai can pick up logical code bugs the same way a human can. this might prompt jailbreak attacks, which are a threat to llms that exploit loopholes and get a program to produce responses that bypass its ethical safeguards. so to protect against increasingly sophisticated attacks, the blockchain industry and protocols should adopt rigorous security audits, as you've heard from the panel previously, with a specific focus on ai systems, while more sophisticated encryption methods can help protect contracts from ai-powered cyber attacks. redundancy and backup controls also play an increasingly important role in ensuring there's always an option to manually shut down ai systems if suspicious activity is detected.
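as a rough illustration of that manual-shutdown point, one common pattern is to gate every automated action behind a flag a human operator controls; the file path and pacing below are assumptions for the sketch:

```python
# sketch of a manual kill switch for an automated AI component.
# the flag file path and check interval are illustrative assumptions.
import os
import time

KILL_SWITCH_FILE = "/etc/ai-agent/disabled"  # operator creates this file to halt the agent

def ai_system_enabled() -> bool:
    """the agent may act only while the operator has not flipped the switch."""
    return not os.path.exists(KILL_SWITCH_FILE)

def run_agent_loop(step):
    """run one automated step at a time, re-checking the switch before each step."""
    while True:
        if not ai_system_enabled():
            print("kill switch engaged; halting automated actions for human review")
            break
        step()           # one bounded, auditable unit of automated work
        time.sleep(1.0)  # pacing keeps a human able to intervene in time
```

a monitoring job that detects suspicious activity, or a human operator, simply creates the flag file, and the loop halts before taking another action.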
12:34 pm
other ways in which law enforcement can leverage ai to combat crypto-related crimes effectively include transaction monitoring and analysis. ai algorithms can analyze vast amounts of blockchain data to detect suspicious patterns and anomalies. this includes, for instance, identifying large and unusual transactions, rapid movement of funds or transactions involving known illicit addresses. also, enhanced forensics: ai-powered tools can assist in blockchain forensics by tracing the flow of funds across multiple transactions and wallets. this can help uncover complex money laundering schemes and identify the ultimate beneficiaries. also, deepfake detection: ai, typically using deep learning and specialized neural networks, can be used to detect deepfakes and other forms of digital manipulation used to deceive victims and law enforcement.
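a toy version of the transaction screening described above, flagging unusually large transfers, rapid movement of funds and known illicit addresses, might look like the following; the thresholds, field names and address set are invented for illustration, not any agency's actual rules:

```python
# toy transaction screen: large amounts, rapid movement, known-bad addresses.
# thresholds, field names, and the illicit-address set are illustrative only.
from dataclasses import dataclass

ILLICIT_ADDRESSES = {"0xbad...", "0xdeadbeef..."}  # e.g. sanctioned wallets
LARGE_AMOUNT = 100_000.0                           # flag transfers above this (USD)
RAPID_WINDOW_SECS = 60                             # funds in and out this fast looks like layering

@dataclass
class Tx:
    sender: str
    receiver: str
    amount_usd: float
    timestamp: float  # unix seconds

def flag_transactions(txs: list[Tx]) -> list[tuple[Tx, str]]:
    flagged = []
    last_seen = {}  # address -> timestamp of last incoming transfer
    for tx in sorted(txs, key=lambda t: t.timestamp):
        if tx.amount_usd > LARGE_AMOUNT:
            flagged.append((tx, "unusually large transfer"))
        if tx.sender in ILLICIT_ADDRESSES or tx.receiver in ILLICIT_ADDRESSES:
            flagged.append((tx, "known illicit address"))
        # rapid movement: funds arrive at an address, then leave again quickly
        arrived = last_seen.get(tx.sender)
        if arrived is not None and tx.timestamp - arrived < RAPID_WINDOW_SECS:
            flagged.append((tx, "rapid movement of funds"))
        last_seen[tx.receiver] = tx.timestamp
    return flagged
```

in practice flags like these would only generate leads for a human analyst, not automatic conclusions.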
12:35 pm
finally, ai text analytics can analyze communications on social media, forums and the dark web to identify discussions related to illegal activities involving cryptocurrencies. so as ai technology advances, we can expect more sophisticated tools for detecting and preventing crimes. while the ability of criminals to exploit ai and crypto is vast and presents several challenges, the intersection of ai, cryptocurrency and blockchain also presents opportunities for law enforcement to leverage these technologies to enhance its capabilities to combat financial crimes and protect the public. when deputy attorney general monaco announced the creation of ncet three years ago, she said that as the technology advances, so, too, the department evolves with it, so we are poised to root out abuse on these cryptocurrency platforms and ensure user confidence in these systems. the same principle should apply to artificial intelligence and other transformative technologies. the department will continue to
12:36 pm
be vigilant and ready to respond to new threats in the cryptocurrency space, particularly as criminal actors employ ai or other tools to enhance their schemes. thank you again for the opportunity to engage in this discussion. i do hope you enjoy the rest of the symposium. [applause] >> now is a good time to take a five-minute break. if people could go ahead and take your seats that would be great, we'll get started. let me turn it over to chris painter, whom most of you know. take it from there. thank you. >> thanks, and thank you all for being here. as you heard this morning, i'm chris painter, i'm a senior advisor at csis. also a long-term history in government, i was a prosecutor at justice for many years before i went
12:37 pm
on to state and other things. i currently work on cyber capacity building, so it's great to be with you. we have a really great group of panelists, so i'll have them introduce themselves, but i want to frame this a little bit and talk about the follow-up on what we're doing. the first panel you heard was really about some of the challenges of ai's attraction for criminal activities, and that's an important part. but the other half of the issue is how do you actually have ai used well in various federal agencies, including the department of justice. just as you saw in the first panel, people have fun with a technology in the beginning and are very scared of it afterwards. that's true for ai. that's true for almost any technology out there. technology is never in itself either good or bad; it's how it's used and how it's employed. it reminds me of the internet in
12:38 pm
some ways, because i remember when i was at the state department, hillary clinton, who was then the secretary of state, gave a speech on internet freedom, and the summary of her speech was: the internet, yeah. later, after we saw abuses by governments and things that were happening in the aftermath of the arab spring, she gave another speech which i think could be summarized as: the internet -- so that i think is the issue we're facing, the good and bad parts of it. people who know me well know i also am a fan of cyber films, which are almost always dystopian, and almost always, even in the earlier ones, including the first one where networked computers took over the world, called colossus: the forbin project, from 1970, the computer, the ai, is always what takes over the world. but there's a good part, and we can harness ai, but it's important to have policies in place to do that. it's a key part of that. every federal agency thinks it's unique. they are to some extent, but justice in particular in this
12:39 pm
area is a many-faceted agency and one that i think is both particularly challenging but also has a lot of opportunity to both use ai and have good governance. justice is not just a law enforcement and investigatory agency. it certainly is that. it's also a member of the intelligence community and a national security agency. it is a regulatory agency. it is a civil rights organization and has a civil rights division. it is a corporate compliance organization. it is a consumer protection agency. it is an antitrust agency. it is really made up of many things, and i think that poses challenges for how we look at this issue but at the same time opportunities too. just a couple days ago lisa monaco, the deputy attorney general, was speaking at the counter ransomware initiative meeting, and she's been heading the justice ai initiative, she's been having a lot of meetings over
12:40 pm
the years, well before the executive order, thinking about this ai issue. her quote that i wanted to pull out is that doj needs to be leading in the responsible use of ai. now, that's a big promise and that's a hard thing to do given all the challenges here. of course the larger ai discussion, in terms of how the justice department governs it and what the concerns and considerations are, is governed by the president's executive order on safe, secure and trustworthy ai, which frames all this. omb guidance that came afterwards, or really about the same time, said what agencies need to do in terms of structure and evaluating risk, and there are particular issues at risk for the department of justice in terms of law enforcement and other missions, and then a national security memo which hasn't happened yet but will happen. with that context i would like to introduce our panel. we have two folks from the department of justice, one person from a nonprofit but
12:41 pm
with former experience in a lot of different areas, and also someone from the privacy and civil liberties community, too. they're all pillars in the community. let's start with the doj folks. >> i'm peter winn, the head of the office of privacy and civil liberties at the department of justice, and our job really falls into two general categories. we are responsible for evaluating new policies, regulations, new laws, ways of doing new things to make sure privacy and civil liberties concerns are properly assessed in connection with that process. but the other function we have, and i would argue an equally if not more important function, is the compliance function, which is the responsibility for ensuring that all the components that chris was describing, the fbi, atf, bureau of prisons, dea,
12:42 pm
the marshals, all the litigating components, the antitrust folks, you can go on and on, comply with the rules that are established both by congress and through regulations and through executive orders. because at the end of the day the department of justice in all of its different functions depends upon the ability to access information about people. and if we don't handle information correctly, we're going to lose the authority we need to access that information. so trust is mission-critical, and the best way to lose trust is not to comply with the laws that have been established that you're supposed to comply with.
12:43 pm
and so >> before you get more into the substance, because i'll go to you next on that, i think that's a good framing of where you sit in this. >> yeah, it's about public trust. i think this conversation that we're having is one about how to maintain public trust, and you're going to lose public trust if you don't use new technologies effectively to protect the public with respect to public safety or with respect to national security and other missions that the department of justice has. at the same time you're going to lose public trust if you misuse that technology, if you use it in a way that turns out to be irresponsible. and in connection with ai we are busy >> before you get into that, i'm going to come back. i just wanted a quick introduction of folks and then we'll get back into exactly that. by the way, i forgot to make this announcement: just like in the first panel, there are cards in the middle of the table for you to write down questions. we will reserve time again for
12:44 pm
questions. really looking forward to those. so please write them down if you have them, and we will collect those. thank you. >> my turn? hi everybody, i'm becky, senior advisor for ai. i work with the chief artificial intelligence officer. before this, my work focused on government and tech accountability issues, responsible ai, machine learning fairness and ai principles. i followed the call of the ai talent surge to come to government. i was really inspired by the executive order on ai and its thoughtful and forward-looking approach, and i came to government as a presidential innovation fellow. so if anybody wants to learn more about that, let me know. but that's me. >> i'm duane blackburn from mitre, which is a not-for-profit which operates ffrdcs for the federal government. we're prohibited from lobbying, we don't develop products and we don't compete with industry. we basically just help the
12:45 pm
government solve problems for a safer world. i work for a small team called the center for data-driven policy, and what we do is take the insights that about 10,000 scientists and engineers gain from their work supporting agencies and get those to policymakers so they can make decisions that are evidence-based, actionable and effective. prior to joining mitre i spent eight years at the office of science and technology policy at the white house, where i was an assistant director over several portfolios, and identity was one of the areas i worked on. before that i was a program manager at the departments of defense and justice. >> i'm greg, director of the security and surveillance project at the center for democracy and technology. cdt is a washington, d.c.-based civil rights and civil liberties organization. we've gotten into ai in a big way. in fact, we do a weekly meeting that we call the big ai meeting. it involves roughly a third of
12:46 pm
the staff, because a lot of the different projects at cdt are involved in one aspect of ai or another. my piece of the pie is to try to keep government surveillance in line and to make it so we surveil the right people and not the wrong people, and ai is going to play a part in that. before i got to cdt 17 years ago, i was at the american civil liberties union here in washington for 12 years. >> so thank you for that. hopefully we'll have no hallucinations during this panel, but we will first turn, peter, to you, as you were beginning to give that broad overview of what the categories are and what doj's thinking is. >> i think greg set up the challenge for law enforcement and national security pretty effectively, which is you want to be surveilling the right people and not the wrong people.
12:47 pm
you know, when information was on paper you could lock it up, put it in your lockbox or your pocket, and you controlled the information. when it became electronic and existing on networked systems, it really became more like water flowing in a river, and when you dam up the river to get electricity you don't want to be finishing pouring the concrete before you ask how the salmon are going to get upstream. really, in connection with any new information technology, the goal is to manage the information so that the positive benefits outweigh the negative risks. the challenge of any information
12:48 pm
technology is to engage in that cost-benefit analysis effectively, harkening back to my point about not following the law being a great way of losing public trust. ultimately the goal is to maintain, hopefully enhance, public trust. one of the challenges that we forget with ai, because of all the hype about ai, it's so new, is that there are existing laws on the books that apply here. one of the most significant laws that you've never heard of is called the e-government act of 2002, which requires privacy impact assessments to be done in connection with new information technologies. the last i heard, ai was a new information technology.
12:49 pm
and so we've been doing, in a less organized way, assessments when ai products come on board since 2002. and we've got a structure in place involving privacy impact assessments that we've been using and developing, and we've got an organizational system where each of the components, 42 components, all have a process when we do this work. in connection with ai, where it's a transformative technology, at least generative ai in the forms that i think we're talking about, there's another definition of ai, which is: i didn't think computers could do that. but we're talking big data in general. there are a lot of unanswered questions, particularly how it performs in new situations. as greg indicated, circumstances that could
12:50 pm
significantly affect and impact individual liberties. and because of the nature of our law enforcement and national security and our litigation work, our uses of ai have particularly significant risks. it can impact people's liberty and rights. and so the goal is, you know, the stakes are higher for our mission. we are holistically reviewing all of our tools and systems and data across all of the doj components to make sure that we are focusing on the gaps or the overlooked processes, seeking to identify the risks along the way, establishing operational procedures to mitigate the risks. the thing about ai is, almost by definition, generative ai highlights human ignorance,
12:51 pm
but there are lots of risks that can be anticipated ex ante, and the goal is to try to spot those through the assessment process and then mitigate them if you can identify them. and then, as we become aware of new risks, because humans are, i mean, almost a condition of knowledge is the awareness of your ignorance, so as we become aware of our ignorance, we have opportunities to mitigate those risks in real time, to try to update our operational procedures to make sure we maintain that public trust that is so essential. because even a dog knows the difference between being tripped over or kicked. we're going to trip over some dogs, but we don't want to be doing it on purpose and we don't want to be doing it twice, okay? in terms of the structures we now have in place, particularly the structures in the most
12:52 pm
recent eo, i want to highlight there was a former eo at the end of the last administration that is just as important and as formative, really more of a consensus-based approach. that eo is still in place. we use ai in a lawful manner. we do it in connection with appropriate evaluation of the benefits and risks. we want to make sure the information that's actionable is reliable and effective, it's safe, it's understandable. we use it responsibly, and probably the most important thing that we've always put in place is to make sure there's a human in the loop. a scary example is facial recognition technology. the risks, i think, are what most people are focusing on, but there are significant benefits.
12:53 pm
well, and this is generally true for how the department is approaching particularly sensitive areas of use of ai, but in facial recognition nobody is ever allowed to act without corroborating evidence. no action is even permitted without a human in the loop, and often several humans in the loop, and the outputs of this technology and a lot of the ai technologies are treated as leads, always requiring human evaluation and corroborative evidence because of the high risks associated with how we use it and the kind of missions that we have. the most recent eo obviously involves development of guidelines, with particular sensitivity to the way ai can be used to have discriminatory impacts,
12:54 pm
the challenges associated with synthetic content, including abusive and nonconsensual imagery. i mean, i can go on, but you know, the challenge on the compliance side of the house is to make sure we are identifying all the different technologies that we're using, evaluating those technologies, and applying the appropriate testing to those technologies. becky will talk a little bit more about how we are implementing the various guidance that we received from omb. you mentioned deputy attorney general monaco's launching of the justice ai initiative. the goal there is to bring together all stakeholders to really share expertise and perspective on the positives and the negatives of ai.
12:55 pm
because as i said, humans are, we are conscious of our ignorance, this is a collective process. and if you are trying to regulate a new ecosystem, you don't just try to figure it out yourself. you engage all the stakeholders and bring in all that knowledge so that the decisions that you're making, affecting people's lives, are as well informed and involve as much light as possible from the public. i can give examples, but i would like to sort of stop at this point and pass it on to becky. >> great. >> becky, you're the ai person, the person who is, i guess, on the pointy end of the stick and helping to draft these policies and put them together. with that framing, and in more kind of, you know, concrete terms, since you're writing or helping write this thing, what are the considerations you were seeing? what are you trying to capture in
12:56 pm
this? what are you trying to make sure doesn't happen, and what are you trying to make sure does happen, as you have these policies which will naturally be shifting, as peter said, over time. >> sure. i think, i've been shocked as i come into this work how similar it is to the work i've done in the past. so the basic building blocks that you need for ai governance and risk management were outlined quite well in the memos that have been described. the omb memo, the most recent one in march, outlined some really foundational components of good governance and risk management for ai and emerging technology. it parallels the structure that any large complex organization would have to build to adequately manage these technologies. you have to know what ai you have. you have to understand the risks and benefits that that ai presents, and then to understand,
12:57 pm
based on those conditions and those characteristics, again as peter said, the controls that you put in place in different contexts of use. you have to enable humans along the full risk management lifecycle to make decisions consistently and effectively, through training or through frameworks or through policies. you have to monitor after you've deployed something and be able to respond, because even if you kick the tires forever before the system's launch, these are prediction machines; when they take inputs they may produce something you didn't expect, and you have to be able to monitor for that and change your processes accordingly. this involves both knowing what you use in the first place, so what ai you should use, but then, when you make a decision to use a certain type of ai, how you use it: the restrictions, the controls, the different characteristics that are in place to manage that system.
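as a concrete illustration of those building blocks, a single entry in an ai use-case inventory might look like the sketch below; the field names and the sample record are hypothetical, echoing the memo's themes rather than doj's actual schema:

```python
# hypothetical AI use-case inventory record; field names are illustrative,
# echoing the building blocks above: know what you have, assess risk,
# keep a human in the loop, and monitor after deployment.
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str                     # what the system is
    component: str                # which part of the organization owns it
    purpose: str                  # the context of use, which drives the risk tier
    risk_tier: str                # e.g. "rights-impacting", "safety-impacting", "low"
    human_in_the_loop: bool       # may it act without human review?
    controls: list[str] = field(default_factory=list)  # mitigations in place
    monitoring_plan: str = ""     # how outputs are watched post-deployment

inventory = [
    AIUseCase(
        name="document triage model",
        component="litigating division",
        purpose="rank case documents for reviewer attention",
        risk_tier="rights-impacting",
        human_in_the_loop=True,
        controls=["pre-deployment testing", "periodic bias audit"],
        monitoring_plan="sample outputs reviewed monthly",
    ),
]
```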
12:58 pm
substantively, a lot of this is unprecedented. we are still in the very early days of this field of responsible ai, responsible ai and risk management, et cetera. there are a lot of existing systems, as peter described, processes, infrastructure, practices that you have to integrate into to make this work part of everybody's daily life, make it second nature. but the substance of what you integrate is really on the cutting edge. a lot of the work involves human expertise, getting the right people in a room to understand how they are making decisions and how they're making judgments, and then to break that down into repeatable frameworks or repeatable criteria, so ideally over time the subjective becomes more objective. different people using different types of use cases, or the same types of use cases, can assess the system in the same
12:59 pm
way, can manage its risk in the same way and can spot the same issues at scale across a very large organization with diverse missions and diverse use cases. that's the very high level; we can double-click into each of those areas, what an inventory is, how you collect information about ai assets, all of that kind of stuff. i'm in the weeds with that now. >> i would say that's particularly difficult. i was nodding in agreement, because every time the government tries to inventory everything, anything, it's almost a hopeless task, because there's so much stuff people don't know, and being relatively new to the justice department you probably have seen this yourself. that's obviously an interesting task you have before you can actually apply these controls. duane, you bring more of sort of the outside perspective. you are outside of the government but help the
1:00 pm
government in this area, and you've also seen this evolve in your ostp days and other days, seen these issues evolve in other contexts beyond the ai context. what advice can you give this group, and also just generally, about how this should be approached given that other experience? and anything else you want to say. >> sure, thank you. i'm kind of the opposite of becky. i can barely spell ai, but i've spent time on emerging technologies, multiple emerging technologies. like you said, i'm kind of the outsider here in this group. there are a few consistent things that policymakers need to keep abreast of, aware of, as you start work on this. i will walk through some of those to make sure we're all aware of them. frankly i'm going to start with just recognizing how difficult their job is as they're developing policies on these emerging technologies in different applications.