tv [untitled] October 18, 2024 12:00pm-12:31pm EDT
12:00 pm
>> we are starting to see some positive developments: a nonprofit organization has put out voluntary principles that would guide these protections, a safety by design idea that a number of technology companies have signed on to, and the white house put out practices that companies have joined. there is an effort with our office to engage in red teaming to prevent these tools from being misused by the public. ...
12:01 pm
and again how we can help kids keep themselves safe, how we can help parents and caregivers and others in kids' lives keep kids safe. those are policy initiatives that are starting to come to fruition and are going to be really important for us all to build on in the coming years. there's a start for sure. >> actually i've been pretty impressed over the course of the last two years-ish by the surge of thought and policymaking around ai risk, more so than any other area. i don't think it's a bandwagon effect that everybody wants to get on
12:02 pm
board with. they see a major opportunity and a challenge and want to get their hands around it. things are happening with national governments, things like the executive order and the implementation efforts around that. the eu ai act and the work there. there are also nongovernmental organizations: the world economic forum has a massive ai governance working group with sub working groups on various things. i'm on a safe systems technologies working group looking at how to actually think about and design some of these safeguards. there's a lot happening all over the place. the question is, are we working quickly enough? if not, it's not for lack of trying. it's quite impressive how much attention is going into this topic. >> it's like this delicate balance where you want to allow innovation to happen. you want healthy competition. you don't want heavy restrictions and policies out
12:03 pm
there that reduce your innovation at any cost. i would definitely say that something i've also seen is that all these organizations and companies, startups, talk with legislators, and legislators are taking the time, with their staff, to look at ai and how to improve the situation. i really do applaud them. one of the concerns i have on this front is that, for example, colorado came out with their own ai law recently, and this becomes a concern because if each state comes out with its own, this may be a problem for security leaders trying to ensure they're being compliant. what we need is to come together and work as fast as we can to ensure we can all be compliant but also to ensure there's transparency as well. >> great. i want to try to leave some time for questions at the end. i have a couple i think the panel might want to address.
12:04 pm
before we get there i want to make sure our discussion ends on a positive note. so what are we doing right here? what are we doing right about addressing ai-enabled crime? [laughing] >> good question. >> again, in our space in particular, the child exploitation space, a lot of what i said: there's just a lot of understanding of the problem, and that is in turn generating sort of a flood of resources to deal with that problem, both on what i'll call the reactive end, the investigation and prosecution side, but also on the front end, the protective end, with prevention efforts, where we can engage in sort of safety by design engagement with technology companies.
12:05 pm
and that's just in our space. you know, the department is also bringing all kinds of resources to bear from all different subject matter experts, attorneys, technologists and others to make sure we have the best possible thinking on understanding the scope of the problem, being creative in our thinking, and collaborative in our efforts to work across the department and across the federal government to address these harms. i can't say i'm particularly optimistic about the nature and extent of the harms -- on kids right now, but i am optimistic about the way in which government and schools and ngos and state and locals and others are combining to confront this problem. >> so in addition to what i said earlier about the right attention being on this problem, you also have people thinking creatively and innovatively about how
12:06 pm
ai can make our lives better, make us more efficient. there's unbelievable potential there. we are experiencing it just in the basic everyday ways we access information and get answers to questions that we have. there's an incredible opportunity in the cybersecurity space to harness ai to make network defenses better, faster, and to help on the workforce side, where right now the challenge in first-level security operations center type roles is that they are just flooded with information and alerts, and ai can help to deal with some of that. in fact, replacing at the lowest level some of the basic functions, the drudgery work, that humans have had to do, and doing that much more efficiently, kind of freeing up human capital to work on higher-level problems that they enjoy doing better. there's lots of upside, and on the workforce side there's certainly
12:07 pm
plenty there, too. >> i would say seeing people volunteering their time to try to understand and work with entities that are really great, such as mitre atlas or mitre att&ck, great organizations working with the public to help. and it's so nice to see our community coming together trying to resolve the situation, to work together on something that's also nonpartisan, which makes things easier to work on and move faster as well. >> i want to echo what ross said to the extent he was talking about child exploitation. i think you're seeing the same thing in other places, and i have been really inspired by people across the department's components coming forward who have the will and interest and energy to take on some of these incredibly hard problems and to do it in a coordinated way. you are seeing quality collaboration across agencies and components of the department, and that will make a difference. so where we didn't have a deep bench of ai prosecutors before,
12:08 pm
we're going to get better, i think, as fast as possible. >> just to throw my two cents in, as someone who is something of a techno doomsayer: we have a system that is quite porous, that was initially engineered not to actually serve as a commercial platform but to serve as an information sharing and research platform. we tend to let tech run free. this is the first time i can recall when there's actually been a suggestion we pump the brakes and think, so i think there's room for optimism. we have time for a couple of questions. let's see. let me start with this one. for those working in earlier stage emerging technologies, for example quantum technology, what are the lessons learned or steps we could or should have taken sooner? what can be done proactively versus reactively? this is generally how we sort of deal with technological issues like this in crime.
12:09 pm
thoughts? >> one of the challenges in the -- early-stage startups, where a lot of this innovation is happening, is the funding model; it doesn't lend itself well to having governance and policy people and folks that do interaction with the government. so there's always a challenge of getting to the minimum viable product first, and then all that other stuff will come later. so sometimes safety and security issues become a bit of an afterthought. that also exists within the trust and safety practitioner space; early companies don't have these teams, frankly. it is a challenge, and even for innovators that are thinking of some of these concerns or downsides, there isn't the runway to hire people to deal with it. so that's a major challenge. more mature organizations can start to think ahead and
12:10 pm
start to build some of these governance structures and to think about some of the risks, as well as opportunities, and plan for those. but not a perfect answer. >> what else? i was in front of a very wise federal judge on monday who said to a defendant, there is right and there is wrong, and this was wrong. i just encourage people to keep that in mind. it's not usually hard to tell what those things are, and so if you feel you need advice on that, i suppose seek good legal advice or trust and safety advice. i don't think it's that hard. the harms we see are things that people can perceive and understand while they're designing them, and i just ask founders to keep that in mind as they're building tools, particularly tools they're intending to release publicly to consumers around the world. >> that touches on another question we received, a few questions that are in the section 230 lane. one in particular asks, are you
12:11 pm
suggesting technology platforms should bear liability for malicious actors misusing the platforms to carry out ai-enabled crime? i don't know if you want -- >> i would just say it's kind of a tightrope. think about it that way: we have to try to figure out, like, okay, if i had you sign these terms and conditions and you used this product wrong, who's responsible for this? you or your organization are probably the first place people will look. i think it's a complicated question, and i'm not a lawyer, so i'm going to pass it on over to the legal side of this panel, but that's just my two cents. >> i'm not a lawyer either, but i think -- that's never stopped me, right? i think over time an expectation of due care might start to emerge. maybe litigation might help to clarify what that looks like. organizations that are acting reasonably, to the extent they
12:12 pm
are aware of a particular risk and are building practices around that, and to the extent they're getting notification of fraud and misuse and are taking reasonable steps to prevent it, they are probably in good shape. to the extent it can be shown they are willfully blind to, or just allowing, their platforms being used for criminal schemes and other harms, a case can be made that they are part of that. >> what about something like cybersecurity insurance, basically, if you're found liable in a certain sense? that's something of interest, i would say. >> i think our final question, a question we've heard before: in prosecuting fraud, the critical and sometimes hardest part is proving intent. how is the department thinking about proving intent when making a charging recommendation? >> i will have to answer that
12:13 pm
question -- or half answer it. the intent standards in our criminal law don't differ by case. they are the same for every case and every defendant, and they are very well laid out in law. there's been an enormous amount of research and writing by legal experts and academics about new ideas in the artificial intelligence space. that's going to take legal development or statute to change. what you should expect is that enforcers will want to understand how the tool works, what information the ai company knows, what information they store, what information they access, and what did they say about it? that's the same thing we do in pretty much every single white-collar case, and that will apply as well in ai. >> i guess i would also add, often it's not really the tool itself that's acting. there's an actor posing the questions who has intent him or herself, and that's really what you are proving. if i'm
12:14 pm
researching ransomware targets while the tool is providing me with recommendations, i am actually the driver of the process: i intend to commit that crime and i'm collecting information to further it. there are ways in which this takes us back to the law of the horse: it falls within the normal framework that we have, and this is sort of reachable, notwithstanding the technology that's in the way. before i let you go, and this is only a little bit of a surprise, we're sitting here today in 2024. where are we headed? this is a lightning question down the line. let's take us out a few years. ai and crime. >> i think we need to be a little more proactive. at this rate, i think we've been very reactive, and i don't see any of
12:15 pm
these crimes slowing down, unfortunately. i think it will increase, and i think we need to do better as a society, especially on deepfakes like pornography, especially with boys doing this to their classmates or even -- it boggles my mind that we're still here as a society and we haven't done anything to address that. that should not be allowed, and there needs to be some form of accountability and a sense of punishment too, as well. these really impact women and girls for the rest of their lives. >> so i'm increasingly concerned about inauthentic content, and i don't know if we're going to be able to entirely get our arms around it through things like watermarking and other content authenticity efforts. increased focus needs to go into identifying and making sure that the source of information is known, so that the information itself can then be judged.
12:16 pm
that also gets into the phishing and fraud type of scenario: when i'm getting a text message, i need to know who that is from, a legitimate business or an individual in my contact list. we need to get back to first principles on everything from domain name registration to e-mail accounts, so there are ties to actual known humans or vetted known businesses on the other end and we can then filter everything else out. then we will be judging whether a viewpoint is correct or whether that image is real or not on the basis of the person that's on the other end, as opposed to trying to judge the content itself on its face. >> for me, i would say i am hopeful and optimistic that, again, in our space, the child-on-child sort of deepfake problem, we can make a meaningful dent in that problem through education and making sure kids
12:17 pm
understand that you can't just think of this as modern-day kind of bullying, teasing, flirting. there are real lasting consequences. that's something that is achievable and that we are actively working to achieve. i am worried more broadly that there will be a sort of constant kind of one-upsmanship between federal law enforcement solving a problem, and then the offender community finding a workaround, and then we fix that, and then another workaround, and it will be this kind of constant game of back and forth. because we have on the enforcement side very smart, very dedicated, very creative people working on this problem. the offender community also has people who are technologically savvy and committed to their illegal acts. so there is motivation on that
12:18 pm
side. i don't know how this is going to play out, but i am worried we're going to have to constantly be on guard that what we're doing is working, and sort of looking around the corner to what's going to be coming next. >> you get the last word. >> an optimistic one. i think, so far, since the first generation of some of these tools, people were not thinking as much about trust and safety as they could have been. it was exciting research, exciting development of tools. over the next couple of years you'll see a much better focus on that. by and large, legitimate actors who are building out ai tools understand the scrutiny. people are talking about it and seeing the risks associated with it, and that will be a good thing. i think there will be more emphasis on that from the provider side of things, and if we maintain the same enthusiasm and law enforcement coordination, maybe it's a problem that we can contain. >> if you could join me in thanking our panelists. [applause]
12:19 pm
>> let me run through the schedule real quick because i forgot something. we're going to have our first lightning talk. it will be followed by another panel on ai and governance at doj. we will then have lunch, and congressman beyer will speak. let me ask -- there you are. let me ask claudia to come up and talk about cryptocurrency. so, please. >> good morning. it's a pleasure to be speaking with you all today. i am claudia, deputy chief of the computer crime and intellectual property section, ccips, and director of the national cryptocurrency enforcement team. i would like to thank the center for strategic and international studies for hosting us, and my colleagues who are at the event.
12:20 pm
today i'm going to speak about a topic that is at the forefront of technological innovation and law enforcement: the intersection of artificial intelligence, cryptocurrency and blockchain technology. these advancements are not only reshaping industries but also presenting new challenges and opportunities for law enforcement agencies worldwide. unlike traditional currencies, cryptocurrencies operate on decentralized networks based on blockchain technology. a blockchain is a distributed ledger that records all transactions across a network of computers. cryptocurrencies have become a tool for various criminal activities. the anonymity and decentralization of cryptocurrencies make them attractive for illegal activities including money laundering, ransomware attacks and fraud. despite these challenges,
12:21 pm
blockchain technologies also provide unique opportunities for law enforcement. the immutable nature of blockchain records means that once a transaction is recorded it cannot be altered. this feature is invaluable for tracking and tracing illegal activities. so when one thinks about artificial intelligence, one might not readily associate it with cryptocurrency or the blockchain. but as these two things become increasingly prevalent in our society, there are opportunities for innovation and growth, but also for criminals to exploit new developments for illicit purposes. nonetheless, ai can also be a powerful tool in the fight against cryptocurrency-related crime. the department of justice has made combating crime involving cryptocurrency and digital assets a priority and has devoted significant resources to this effort. this is not new, however.
12:22 pm
indeed, the department has investigated and prosecuted crimes involving cryptocurrency for a long time. year after year the department has been taking down illicit marketplaces, cryptocurrency mixers and infrastructure actors, and holding their operators and administrators accountable. we have pursued hackers who have stolen millions in crypto, scammers who have deprived thousands of americans of their life savings in crypto confidence investment scams, and money laundering syndicates who have made the illicit flow of funds possible. we have also pursued illicit activity in decentralized finance, or defi, which is a continually growing ecosystem. we have continued to investigate and prosecute varied fraud schemes involving cryptocurrency, which include investment fraud schemes, multilevel marketing and ponzi schemes, and market manipulation. we have seized billions in illicit proceeds.
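the immutability the speaker described earlier comes from hash-chaining: each block commits to the hash of the block before it, so altering any recorded transaction breaks every later link and is immediately detectable. a minimal illustrative sketch in python, not any real blockchain implementation; the block layout and field names here are invented for the example:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically. Because the previous
    # block's hash is part of the contents, each block commits to the
    # entire history before it.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    # Link the new block to the hash of the last block (or zeros for
    # the genesis block), then store its own hash alongside it.
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev, "transactions": transactions}
    block = dict(body, hash=block_hash(body))
    chain.append(block)
    return chain

def verify(chain):
    # Valid only if every block's stored hash matches its contents and
    # points at the hash of the block before it.
    prev = "0" * 64
    for block in chain:
        body = {"prev_hash": block["prev_hash"],
                "transactions": block["transactions"]}
        if block["prev_hash"] != prev or block_hash(body) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, [{"from": "a", "to": "b", "amount": 5}])
append_block(chain, [{"from": "b", "to": "c", "amount": 2}])
assert verify(chain)

# Tampering with an already-recorded transaction invalidates the chain.
chain[0]["transactions"][0]["amount"] = 500
assert not verify(chain)
```

the same property is what makes the ledger useful for tracing: the recorded flow of funds cannot be quietly rewritten after the fact.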
12:23 pm
three years ago, deputy attorney general monaco announced the creation of the national cryptocurrency enforcement team to tackle complex investigations and prosecutions of criminal uses of cryptocurrency, particularly crimes committed by virtual currency exchanges, mixing and tumbling services, and money-laundering infrastructure actors. the team combines the expertise of the department of justice's money laundering and asset recovery section and ccips, both sections in the criminal division, with experts detailed from u.s. attorneys' offices. department prosecutors who investigate and prosecute illicit activity involving cryptocurrency, including the attorneys on my team, have had to work to adapt to a complex and rapidly evolving environment involving novel issues. so as we begin to see criminal actors use artificial intelligence for illicit means, we are not starting from a blank page.
12:24 pm
we are applying existing frameworks and proven legal tools to pursue new threats and hold those behind them accountable. so what are these threats? you heard about some of them from the last panel, but i'm going to talk about those that intersect with crypto and blockchain. artificial intelligence is a term used to refer to computer systems capable of performing tasks that typically require human intelligence. the thing to recognize is that artificial intelligence is not on the horizon; it's here, unlike cryptocurrency, which in many ways is still waiting for mass acceptance. in ai, neural networks mimic the way our brains process information, allowing them to recognize patterns and make decisions. ai is integrated into many of the tools we use every day, often without us even realizing it. many of you open your phone using face id. when that happens, your phone is using machine learning. it takes an image of your face using 30,000 points of infrared
12:25 pm
light and then compares the scan with the image it has already stored of your face. also, when you send an e-mail and the e-mail program suggests the end of your sentence, it is using artificial intelligence to determine what you are most likely to say next based on a vast body of other text that it was trained on. and when you turn on a streaming service and the platform recommends what it thinks you'll want next, it is using artificial intelligence to analyze your past viewing patterns. with data and computing resources going to the next level in terms of scale, however, ai is now becoming more accessible and better at performing activities that were previously exhibited only by humans. because of this, ai has become more salient than the examples i described. generative ai has made the barrier to entry much lower. now, practically anybody can use
12:26 pm
ai, and the technology is quite good with its multi-modal capabilities. this means we can prompt a program to generate not only text but also voice, images, and video. this has given an unprecedented scale to activities that were previously human intensive. naturally, criminals are leveraging these widely available capabilities to carry out their activities in several ways, especially in the cryptocurrency space. so first, let me give you some examples. ai has enhanced the ability to expand and create illicit economies for goods and services. this includes dark web listings, explicit deepfake generation as was discussed earlier, and creating fake identities and fake ids to circumvent know your customer requirements and checks at cryptocurrency services, open bank accounts, establish shell companies, and launder funds. let me give you an example. there's an underground website
12:27 pm
called onlyfake which has claimed to use neural networks to generate realistic-looking photos of fake ids for only $15. this service could radically disrupt the marketplace for fake identities and cybersecurity generally. by producing fake ids nearly instantly, this technology could streamline everything from bank fraud to laundering stolen funds. rather than painstakingly crafting a fake id by hand, a highly skilled criminal profession that could take years to master, or even waiting for a purchased one to arrive in the mail with the risk of interception, this service lets essentially anyone generate fake ids in minutes that may seem real enough to bypass various online verification systems. much like many financial and designated nonfinancial institutions, cryptocurrency exchanges require kyc checks when onboarding new users. this typically involves an identity verification system checking new
12:28 pm
users' documents. a service like the one i described would allow somebody to step through the identity verification process on crypto exchanges. in fact, a 2023 report by an id verification company noted that 70% of crypto companies observed an increase in the use of deepfakes for kyc, the apparent use of which in such cases grew by 128% from 2022 to 2023. but let me give you another example: ai-related crypto scams and market manipulation schemes. it's quite easy to create a token on the blockchain. scammers have exploited this through exit scams; this is where the scammers drive the hype to boost the token price, only to sell their reserves for a significant profit and leave their victims out to dry. there's also market manipulation, or pump and dump schemes, in which
12:29 pm
coordinated groups initiate certain purchases and sales of tokens to make a profit. ai has become the hype-generating term of choice for recent tokens. there are hundreds of tokens listed on several blockchains that have some variant of the term gpt in their name. while some of these may be well-intentioned, several have involved scammers deceitfully claiming an official association with chatgpt or a supposedly legitimate ai company. another way criminals are using ai is through a combination of machine learning, computer vision, and llms, the latest generative ai, to allow them to scale their illicit operations. think about investment scams. these are as old as time. criminals are leveraging deepfakes and generated materials to impersonate celebrities or authority figures to promote fraudulent cryptocurrencies.
12:30 pm
this makes the scams look convincing, and thus people are more susceptible to falling for them. there's also an increase in these kinds of crypto scams and in disinformation at scale. bad actors are able to upscale their operations by using ai to auto-generate text, images, websites, videos and other content. this can be done through the creation of scam sites and the ability to rapidly disseminate crypto-related disinformation. for instance, to avoid detection or evade law enforcement, a bad actor running a scam operation may cycle through different sites with new marketing, creating a new site every time. this process can be seamless and low cost through the use of ai. in addition to sustaining the scam infrastructure, which is necessary for the scam, scammers can use ai to accelerate and upscale their outreach to potential victims in order to generate illicit proceeds. this might include voice cloning, gpt-style