Big Tech Execs Discuss AI and the 2024 Elections | CSPAN | March 31, 2024, 4:28am-4:55am EDT

4:28 am
the four of you enough. maybe out of this panel will come the cooperation. let's get openai, facebook, and others together with people like jocelyn and michael, who have a lot of depth and breadth on where the money flows are, and let's see if there cannot be some cooperative effort between now and this election. if we don't try, we know what will happen. we know what's going to happen. and i think that, michael, you made a great point, we need more transparency and openness. and that should be declassifying information as quickly as possible so it gets out in the public, and frankly, governments need to get it out and not ask for permission, because it could influence the conversation going forward. but i think this idea of collaboration is better in a democracy, to have collaboration and bring people together. let's see what we can do to
4:29 am
follow up. thank you all very much. [applause] >> and the leader of the microsoft threat analysis center, which is part of customer security and trust. so where i want to begin with
4:30 am
each of you, because all three of you, in slightly different ways, are focused on looking at the threats and the risks that you are seeing across your platforms or across society. i think even more so in your case. i want to begin by you sharing with us what you are seeing, particularly with respect to the use of ai when it comes to election deception or other forms of information deception. yasmin, i will begin with you. jigsaw is a little bit of a different animal, because you are really looking at societal changes and what kind of interventions you can make, not even necessarily via your platforms, to effect change. give us a little bit of a sense of what you are seeing. yasmin: hello, everyone. i think the panel before us did a good job of surveying the landscape, including the threat.
4:31 am
i wanted to get a bit specific and build on what was said. so we talked about trust in the last panel, and one of our observations about the trust landscape is not that we are in a post-trust era, because as humans we have to make decisions and evaluate things. it's not that trust has evaporated, it's that it has migrated. trust is much less institutional and much more social. i think that's really important as we think about the risk posed by generative ai. we did a study with gen z to figure out how young people go about evaluating what they find online. so i wanted to do a survey of this room, since we have a good generational mix. just by a show of hands, how many people here read the comments underneath news
4:32 am
articles? that's about half, two thirds. i have to tell you, i do not read the comments. i thought the collective understanding was that we don't read the comments. gen z? nearly everybody. the interesting thing is not that they read them so much as when and why. when do they read the comments? they often go, and this is a little counterintuitive: headline, comments, then the article. why would they be doing it in that order? this is according to them: they want to know if the article is fake news. so you see the inversion, the article, the journalist, being
4:33 am
the authoritative curation, the expert, the authority. gen z, increasingly, are going to social places to look for signals. we call this information sensibility: they are looking for social signals about how to situate the claims and their relevance to them. so, we have the term alternative facts. this is alternative fact-checking. and we should be really concerned. and it's relevant for generative ai because of one of the things that we maybe emphasize less than we should, because it's the threat coming around the corner: in addition to content, we have accounts. we talked about this earlier, these human-presenting chatbots. and what we are seeing through one of the products jigsaw is
4:34 am
best known for, moderating comment spaces: we have comments that go through our thousands of partners, and we hear about accounts that are there and, at first, they are not sending you crypto scams or disinformation. they are active, so what are they doing? they are building a history of humanlike behavior. because in the future, yes, it will be really important for us to evaluate where we can do detection, to evaluate if something is a deepfake. but when there is a deepfake, where will people go to check? they will go to other people in social spaces. we need to invest in humans, and also invest in ensuring the human-presenting chatbots do not have any sway there. vivian: synthetic identities, not just synthetic content. fascinating. clint, i have known you for a number of
4:35 am
years. you have been with microsoft for two years, but you have been doing this deep digital forensics work for quite a long time. so you have seen that trajectory of history, how we have seen things evolve from before 2016 until now. give us a sense of what change you have seen, particularly since generative ai has taken off in the wake of the launch of chatgpt, and what risks you are seeing today. clint: in terms of timing, it was 10 years and two months ago that we encountered our first russian account that impersonated an american and would later go after the election. at the time we were tiny, and we were working from our houses, and we used a tool called microsoft excel, which is incredible, if you've ever checked that out. now we are at microsoft. it's a
4:36 am
major change from 10 years ago. and what's interesting is, in 2014, 2015, 2016 it was testing it on the russian population first, then ukraine, syria, libya, the battlefields, then it was taking it on the road to all the european elections and the u.s. elections. watching what has transpired since then, there is a little bit of a misunderstanding about how much things have changed in 10 years in terms of social media. speaking of gen z: gen z, would you read more than 200 words? i bet you would watch 200 videos. that's one of the biggest changes in 10 years with the technology. and that's not just about gen z, that's about my generation and everybody older. video is king today. if you are trying to write a 9,000-word blog, you are running uphill with a lot of weight on your back. our monitoring list in 2016 was
4:37 am
twitter and facebook accounts linking to blogspot. in 2020 it was twitter or facebook, a few other platforms, but mostly linking to youtube. today, if you go to it, it will be all video. my team tracks russia and china worldwide. we've got 30 on the team, we do 15 languages, and we are mostly based here in new york. nine months ago we did a dedicated focus on: what is the ai usage by these threat actors? so we have some results, and what i would say about 2024 is: there will be fakes, some will be deep, most will be shallow, and the simplest manipulations will travel the furthest on the internet. so in just the last few months, the most effective technique used by russian actors has been taking a picture and putting a real news organization's logo on that picture. i'm sure david can tell you more
4:38 am
about this: distributed across the internet, it gets millions of shares or views. there have been several deepfake videos in and around russia and ukraine and in some elections, and they have not gone very far, and they are not great, yet. this will all change. remember, this is march. things are moving quickly. what i would note is, there are five distinct things to look at. one is the setting: is it public versus private? i would love david's take on this. when you see a deepfake video go out, crowds are collectively good at saying, i have seen that video before, and that background, she didn't say that, he didn't say that. we have seen putin fakes. the crowd will throw that out and it kind of dies. the place to worry is private settings. when people are isolated, they believe things they would not normally believe. does anybody remember covid, when we were all at our houses? it was very easy to distribute all sorts of information. it was hard to know what was true or false or what to
4:39 am
believe. people had different perceptions of the pandemic. in terms of ai, the medium matters. videos are the hardest to make. text is the easiest, but text is hard to get people to pay attention to. video, people like to watch. audio is the one we should be worried about. ai audio is easier to create because your dataset is smaller. you can make it of a lot more people, it takes a much smaller dataset, and there are no contextual clues for the audience to evaluate. when you watch a deepfake video you go, i know how that person walks, i've seen how they talk, that's not how it is. audio, you give it a discount. people will say, on the phone, maybe they do sound like that, kind of garbled, but maybe. that is where to look. we have seen that in the slovak elections, we've seen it with the robocalls around president biden. in indonesia we have seen these
4:40 am
examples. there was a deepfake video that used a tom cruise ai voice; he's probably the most faked person, both video and audio, around the world. that brings me to the other thing to look for: the focus on fully synthetic ai content. the most effective stuff is real, a little bit of fake, then real. blending it in to change it just a little bit is hard to fact-check. it's tough to chase after. so when you look at private settings and audio, with a mix of real and fake, that's a powerful tool that can be used. a couple of other things to think about are the context and the timing. many of you probably saw what was incorrect about the baltimore bridge tragedy this week. people immediately rushed to things. when you are afraid, or there is something you have never seen before, you tend to believe things you would not normally believe. if it's a super contentious event, or there is some sort of
4:41 am
accident or tragedy, ai employed in that way can be a much more powerful tool. to do that you have to have staffing, people, you have to know the technology, you have to have compute and capacity. that's not a guy in his basement on the other side of the river. that's a well-organized operation with technology, one that has the infrastructure to do that and is ready to run on something instantly, i.e., the russian state disinformation system. we are talking about thousands of people that are working around the clock. and as we know, in all of the governments around the world, are there thousands of people working to counter disinformation day in and day out? we are just not set up the same way. that gives them a strategic advantage. 10 years ago we were tracking two activity sets of russia that ultimately went at the 2016 election. today my team tracks 70 activity sets tied to russia. that just tells you, in terms of
4:42 am
scale worldwide, the way things are going. that's something to look for. the last thing to think about is knowledge of the target. the secretary brought up a great point. when people know the target well, they are better at deciding whether something is fake or not, if they have seen it over and over again. but if they don't know the target or the context well, they are not as good at it. for one presidential candidate or another, there will be a deepfake, and it will make our heads explode. but if it's a person working at an election spot and a deepfake is made, maybe they are not even a real person. it's these contextual situations we have to be prepared for in terms of response. we work with google and meta, and i would just tell you, from my experience being on the outside of tech: 10 years ago, when i notified tech companies about the russians going after the election, they told me i was an idiot and that no one would believe that. now i work at a tech company and
4:43 am
we do exchanges all the time. i would just like to point out, i feel like we have great relationships; we have worked together for years on different projects. i think that's something else that's quite a bit different today. vivian: david, i want you to pick up where clint is leaving off. any additional context you can provide on what you are seeing out there, but then i also want you to pick up on something clint mentioned. it's one thing when it's a big splashy deepfake that is all over public forums, which could easily be debunked, and i agree the spectacular deepfake of one of the presidential candidates is unlikely to have an impact. but it's the stuff you cannot see because it's on messaging platforms. talk about what you are seeing there. david: building a bit on what clint mentioned about what we are seeing, our teams have taken down 250 different
4:44 am
influence operations around the world, including those from russia, china, iran, and domestic ones from around the world. maybe the key three things we see: one, these are increasingly cross-platform, cross-internet operations. the days of networks of fake accounts just on facebook or twitter are gone. now, i think the largest number we have ever seen is 300 different platforms implicated in a single operation from russia, including things like nextdoor, the one for your neighborhood, as well as more than 5,000 web domains used by a single russian operation called doppelganger that we reported on. what that means is the responsibility for countering these operations is significantly more diffuse. platform companies don't just have to protect their own platforms but also to share information. as was mentioned in the last panel, by sharing information amongst different
4:45 am
platforms, affected civil society groups and government organizations can take meaningful action in their own domain. the second big trend we have seen is that these operations are increasingly domestic and increasingly commercialized. commercial actors sell capabilities to do coordinated inauthentic behavior. maria's organization has written a lot about the philippines. the commercialization of these tools democratizes access to sophisticated capabilities, and it conceals the people that pay for them. it makes it a lot harder to hold the threat actor accountable, by making attribution harder for teams like ours or governments. and then the third piece is, on the use of ai, much like clint mentioned, we have generally seen cheap fakes, shallow fakes, not even ai-enabled, just things like photoshop or repurposed content, mainly being used by
4:46 am
sophisticated threat actors: russia, china, iran. but we are seeing ai-enabled things like deepfakes or text generation being used by scammers and spammers. scammers and spammers are some of the most innovative people in the online threat environment. they move the fastest, they are the least responsive to external pressure because they just want to make money, and they are in jurisdictions that won't do anything. what we should be alert to is the tactics and techniques that the spammers and scammers use being adopted by these threat actors over time. if you want to see what's coming, that's where i would be looking: what can be done, what's working, what isn't, especially in the example you used of ai-enabled capabilities being used in smaller and more private settings. this is where things like watermarking come in. more of what was talked about earlier: technical watermarking that cannot be removed, identifying whether content was edited or created
4:47 am
by an ai system, can be propagated by social media platforms. if a company that produces ai content, and ours is one of them, is willing to be part of the coalition, they ensure anything their models produce is marked as ai-generated, and when it shows up on twitter or our own platforms or on snapchat, it should carry through those standards. so in munich there was an accord amongst tech companies; google was part of that. the more we can raise the bar across the industry to require companies to be building in these capabilities early, before we get to the point where the bad things have already happened, the more we can build meaningful defenses. one thing from the last panel that stuck with me was, when anna was at the white house dealing with russia policy, i was in the u.s. government also dealing with that policy. and we were chasing after the problem at that point. it had left the station. we have an opportunity now to
4:48 am
build the safeguards in as the technology is taking off. i'm happy we are having the conversation now, and i'm happy everyone pulled this together. vivian: i want to stick with you for a second and go deeper on messaging platforms. meta owns one of the most used, most significant, largest encrypted messaging platforms in the world, which is whatsapp. so much of what we know is that this synthetic content, no matter what form factor it takes, video, text, images or audio, travels through whatsapp. how do you think about ensuring that those platforms do not become vectors for this kind of harmful synthetic content around elections? what are you doing about that, and also about the open parts of whatsapp as well? david: there's a really exciting
4:49 am
integration between the technical standards we have talked about, things like watermarking that can be programmatically carried through on platforms, and ensuring that robust and reliable encryption remains in place for people all over the world, so communications cannot be spied on by governments, particularly in authoritarian regimes. there are two different toolsets. one is ensuring that platforms, whether whatsapp, signal, or anyone else building end-to-end encrypted communication tools, are building in tools for the people who use the platform to identify and report problematic content, spam and things like disinformation. the other is that we are building in technologies, as the industry takes up more of the safeguards around ai systems, that can be programmatically propagated without needing to break fundamental encryption. you can imagine a future where we can get all of these companies that produce ai images
4:50 am
or ai-generated text to sign up to watermarking standards. and if that content ends up being sent through one of our platforms, the watermark can carry through without having to have someone in the middle saying that's ai-generated. i think that's one of several reasons why these technology standards are so important; they can hopefully be enshrined not just in industry agreements but also in regulatory conversations, because there is a world in which we can retain fundamental encryption standards while still making sure we are doing our due diligence and our responsibility to protect our broader information environment. vivian: so there are things meta can do to keep it from going viral while protecting encryption. i will ask you both to talk about what google is doing to eliminate the risks and stop the spread of ai-generated misleading election information.
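[a minimal conceptual sketch of the watermark carry-through david describes: a generator signs a provenance manifest bound to the content bytes, and a relaying platform can verify the mark without reading anything else in the message. the shared signing key and function names here are illustrative assumptions; real standards such as c2pa use certificate-based signatures, not a shared secret.]

```python
# conceptual sketch only: real provenance standards (e.g., c2pa) use
# certificate-based signatures, not the shared secret assumed here.
import base64
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-held-by-the-ai-generator"  # illustrative assumption

def attach_provenance(content: bytes, generator: str) -> dict:
    """create a provenance manifest bound to the exact content bytes."""
    manifest = {
        "generator": generator,
        "ai_generated": True,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    manifest["signature"] = base64.b64encode(signature).decode()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """a relaying platform checks the mark without inspecting the message."""
    claimed = dict(manifest)
    signature = base64.b64decode(claimed.pop("signature"))
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(signature, expected):
        return False  # manifest was forged or altered
    return claimed["content_sha256"] == hashlib.sha256(content).hexdigest()

image = b"...raw bytes of an ai-generated image..."
manifest = attach_provenance(image, generator="example-image-model")
assert verify_provenance(image, manifest)             # mark carries through
assert not verify_provenance(image + b"x", manifest)  # edits break the mark
```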
4:51 am
yasmin: quickly, this idea of, at the origination of the content, trying to stamp it in a way that is enduring, so that it can be identified as synthetic, is important. one of the things that i think is interesting is google kind of refusing to provide the gen ai service when the stakes are as high as they are with election queries. a lot of people understand intuitively that there's a tension for technology companies in wanting to make the experience safe, but not creating so much friction that people don't want to use the product. now, if you go to google's generative ai product and you ask for something election-related, it will give you a non-answer, which is a crappy
4:52 am
feeling, but they send you to search. they say go to search, and research shows people want an authoritative source. i think it's interesting thinking about this tension between authority and authenticity. those are the mental models that we have from the last decade of search and social media. it's coming from an institution that they trust, or even google search: there's a lot of trust there, so the stakes are really high, you better get it right. or social media: it's coming from my friend, they are in my social network and i trust them. generative ai is neither of those. it's not an authority summarizing what the internet says, and it's not a human i know. so i think we don't have mental models to deal with generative ai output at the moment. i think it's an interesting demonstration of commitment to trying to put election integrity first.
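[a minimal sketch of the refuse-and-redirect pattern yasmin describes: detect an election-related query and hand the user to search instead of generating an answer. the keyword matcher and function names are stand-in assumptions, not google's actual classifier.]

```python
# sketch of the refuse-and-redirect guardrail pattern described above;
# the keyword matcher is a stand-in assumption, not a production classifier.
ELECTION_TERMS = ("election", "ballot", "candidate", "polling place", "voting")

def answer(query: str) -> str:
    """route election-related queries to search instead of the model."""
    if any(term in query.lower() for term in ELECTION_TERMS):
        url = "https://www.google.com/search?q=" + query.replace(" ", "+")
        return f"i can't help with that right now. try searching: {url}"
    return generate_with_model(query)  # hypothetical call into the llm

def generate_with_model(query: str) -> str:
    return f"(model-generated answer to: {query})"  # placeholder stub

print(answer("who should i vote for in the 2024 election?"))
```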
4:53 am
it's giving users a pretty bad experience of generative ai. vivian: you are defaulting to sending people to search while you sort this out. we are quickly running out of time. tell us, what is microsoft doing? clint: we are going to get them this time. just conceptually, the russian concept of reflexive control is: you conduct an attack on your adversary, and then they attack themselves in response. that's some of what has happened over the last 10 years. they are winning through the force of politics rather than the politics of force. there are more than three nation-states that will do some sort of election influence and interference. my team is designed to focus on russia, iran, and china. absolutely, you will see that in the november report; we have another report focused on this coming. i think the key point is you have to raise the cost on the adversary at some point, rather than raising the cost on yourself to function as a democracy.
4:54 am
so there are lots of things we could do with policy and tech. we do that at microsoft, and we do data exchanges amongst ourselves. but ultimately we have to say: there is a hack here, a leak here, it's coming, and we are anticipating it, getting in front of it. it's inoculating the public, it's raising the cost for actors to do that. sometimes that is methods, and platforms communicating so we can put controls in place. but a lot of it is awareness: communicating to the european governments and the u.s. government, this is what we see, because we can often see it better from the private sector than the public. if it's something that is impactful, we do nation-state notifications. vivian: we are out of time, thank you so much. we could've gone longer. [applause] >> thanks, everybody, it's so good to be here. i direct the technology and media specialization
