
Washington Journal: Zeve Sanderson (CSPAN, January 25, 2024, 12:59pm-1:31pm EST)

12:59 pm
>> be part of the conversation. >> c-spanshop.org is c-span's online store. browse our apparel, books, and home decor. shop now or anytime at c-spanshop.org. >> a healthy democracy doesn't just look like this. it looks like this, where americans can see democracy at work, where citizens are truly informed, our republic thrives. get informed straight from the source on c-span. unfiltered, unbiased, word for word. from the nation's capital to wherever you are. the opinion that matters the most is your own.
1:00 pm
this is what democracy looks like. c-span, powered by cable. >> welcome
1:01 pm
1:02 pm
host: welcome back. we are joined by zeve sanderson of the center for social media and politics. welcome to the program. guest: thanks for having me. host: we are talking about artificial intelligence and deepfakes. can you explain what a deepfake is? guest: a deepfake is the manipulation or fabrication of an image, audio recording, or video through automated means, with the intended effect of making something appear to happen that didn't happen. it's been a potential threat in the last two election cycles, but this moment is different. we have long had technology to manipulate content in different mediums. viewers at home know hollywood has done this for many years. but it's expensive and time-consuming. it took trained specialists on specialized software, and it required many hours. now with ai we have a different situation: the technology to create deepfakes has been democratized, which has lowered the cost and increased the access. host: let's take a look. this was a fake robocall of what sounded like president biden calling new hampshire voters ahead of the primary, telling them not to vote. >> a bunch of malarkey. you know the value of voting democratic when our votes count. it's important that you save your vote for the november election. we need your help in electing democrats up and down the ticket. voting this tuesday only enables the republicans in their quest to elect donald trump again.
1:03 pm
it makes a difference in november, but not this tuesday. host: again, that was not president biden, although it sounded an awful lot like him. your comments on that? guest: it did sound a lot like him. if i had been on the other end of that phone call, i don't think i would have been able to tell that it wasn't him. this is a challenging moment in the information environment. one thing we're interested in is that a lot of the deepfakes we have seen have been audio only. they haven't been video. one of the reasons is that audio is hard to fact check. if someone wanted to create a fabricated video of joe biden, his every move is watched and recorded, so there might be only a few minutes or a few hours before it gets fact checked. audio is much harder. it's a challenge moving into 2024. it would have been easy to make that call using current technology. you can upload a few minutes of a politician's voice and create an ai version of it. the public should go out with a healthy bit of skepticism, but not too much skepticism. host: if you would like to join our conversation about ai and its use in campaign 2024, you can do so on our lines by party: democrats (202) 748-8000, republicans (202) 748-8001, and independents (202) 748-8002. you can also join us on text and social media. you said this is being democratized, but how easy is it to do this now? could i do it? what kind of technical background do you need? guest: i think you probably could do it, but i don't know what your technical background is. these technologies are very
1:04 pm
accessible. different companies have different policies. certain companies like openai prohibit this type of content from being made using any of their products. there are a number of tools out there, many of them are very good, and i would say they are not quite as easy as click and play, but they are getting there. host: another part of this would be what some people are calling the liar's dividend. can you explain that? guest: it refers to an information environment where there is a lot of anxiety around false or manipulated or fabricated content circulating. what it allows bad actors to do is claim that something that did happen, that actually did happen,
1:05 pm
that didn't happen. you can imagine in the context of 2016, an audio clip without video or a photo attached to it. as we just saw with the joe biden call, it's easy to make this sort of audio content, and you can imagine donald trump claiming it was false. it gives strategic actors a general pretext to claim something that's true is actually false. host: let's take a look at a couple of other examples. the first one is an ad put out by the republican national committee just after president biden announced his reelection campaign last year, and it features ai-generated fake images. >> we can now call the 2024 presidential race for joe biden.
1:06 pm
>> my fellow americans. >> this morning we invaded taiwan. >> financial markets are in freefall after 500 regional banks have shuttered their doors. >> the border was overrun by 80,000 illegals yesterday. >> they've closed the city of san francisco this morning, citing the escalating crime and fentanyl crisis. >> who is in charge here? it feels like the train is coming off the tracks. host: that ad used ai-generated images of an imagined future. should that be allowed? guest: lawmakers and regulators at both the federal level and the state level are considering what to do here. my colleagues and i argue two things. first, we should target the electoral harm being done, not the technologies being used. there are well-financed national
1:07 pm
campaigns, and there is now broad access to nascent technologies to fabricate material, so we should focus on the electoral harms we are nervous about rather than the underlying technology, given that the cat is out of the bag. second, we are still in early days here. rather than jumping the gun and focusing on the wrong things, a lot of policy should really focus on promoting our knowledge of how generative ai might impact elections in years like this one. there should be a big focus on transparency, so that any time generative ai is used the viewer at home would know it, but also on promoting research to understand what the effects of this sort of content are on voters.
1:08 pm
host: when you say transparency, if you've got some bad actors out there, they will not be transparent that they are using deepfakes. guest: they will not be transparent, true. it might put an onus on the mechanism by which it's reaching people, whether that's broadcast networks on television being required to do due diligence in order to apply labels, or social media companies applying labels. at the state level, they have been considering penalties for not disclosing. host: you wrote a piece for brookings that essentially says that ai can be misunderstood and that in some cases the fear of ai can be overhyped. explain that. guest: i think a lot of our fear moving into 2024 is specifically
1:09 pm
around the relationship between ai and misinformation. we start with a review of the literature. what do we know over the last 10 years about the way misinformation has or hasn't had an impact on elections? has it impacted votes? we make the argument that we've been potentially overly concerned about the impact of misinformation. news doesn't really make up a majority of people's media consumption, and when it does, people tend to consume high-quality media. when misinformation exists, it tends to be heavily concentrated in the extremes of either party. it does not tend to reach most americans. for us, rather than using that evidence to argue that ai will impact 2024, instead we ask what might be different about
1:10 pm
this moment we are in. we focus on three different ways ai might make misinformation impact elections. first, it can make more of it, and there is simple math there: misinformation is more likely to break into the mainstream because there is more of it. second, it can make it more persuasive or effective. think about 2016: those tweets had grammatical errors, and ai can clean that up. third, it could be better targeted. tiktok is a very different social media platform, which sends content to users who haven't opted in to a particular social network. i don't need to follow you to get your content; it's just reaching
1:11 pm
me. the way information diffusion is changing might change who the information is reaching, not just the effect it has. host: we will go to your calls shortly. democrats, (202) 748-8000, republicans (202) 748-8001, and independents (202) 748-8002. you can start calling in now. i want to ask you about public opinion on this. the ap did a poll in october of this past year, and i will read you some of the results and have you comment. how concerned are people about the spread and increase of false information? 58% said they were concerned. many have not read or heard much at all about ai, and 30% report they have used an ai chatbot or an image generator. what do you make of the
1:12 pm
numbers? guest: i focus on the first number, which is the level of anxiety. as we talked about after the joe biden phone call, i have a healthy level of skepticism. by healthy, i mean people should understand that these technologies exist and that they have the capacity to produce content like what you played on your show. however, we are a nonpartisan research center, and what we care about most is democracy and democratic society. what i'm nervous about is that there is now great anxiety around the use of these technologies to create false or fabricated content, and trust in the general information environment continues to erode. we are still seeing that people tend to consume high-quality
1:13 pm
and credible content from sources like c-span. i think going into 2024, some skepticism, an understanding that these technologies exist, is important, but not so much that people just stop trusting what they see. host: let's talk to viewers. deborah is up first, a republican in west chester, ohio. caller: good morning and thank you for taking my call. my concern is the position we may put ourselves in. when you look at section 230, i'm thinking by this time we should have solved and resolved that, and we will never know what social media platforms would have been had we held them accountable. the other concern is free speech as described in our constitution. we protect speech when it's generated by a human.
1:14 pm
what about speech that's generated by a chatbot? these are major constitutional questions, and we are not very good at resolving some of these. as we develop the technology, we discuss these issues, and then we wait until it's almost too late and everybody is used to these various things. there has to be something, perhaps a disclosure on every article or ad that says artificial intelligence was used in generating the images. host: what do you think? guest: i think transparency is great, and it should be a big piece of this. i want to pick up on something that was mentioned about social media. i agree that when it came to the development of social media and
1:15 pm
how quickly it penetrated society, the legal side responded slowly, though i won't comment directly on that. on the research side, when it came to giving researchers the access we needed to do this type of research, it was too slow. we are still catching up on understanding the effects of social media platforms. when it comes to the development of ai technologies, the pace of development is fast, and i hope we don't make the same mistake again. we need an all-of-society approach to understanding what the effects of these technologies are. when lawyers and policymakers go out and argue in court or in the halls of congress about what to do here, we hope they are given high-quality information in order to do their work.
1:16 pm
host: ira in louisburg, north carolina, democrat. caller: good morning. host: go right ahead. caller: the horse has already left the barn. i can tell you that law enforcement agencies are already using the technology you are speaking about. if there is a video or audio clip, that's one thing, but if you didn't record the video yourself, that video may have been changed. it will say something different, and the audio will say something different. it's already happening. law enforcement is not always held accountable for what they do; the only time they are is when people take video or audio. it's happening like that right now.
1:17 pm
every one of my audio clips and video clips has been altered. i filed something with the department of justice to have a case investigated. host: all right, your comment? guest: in terms of my direct research, i tend to focus on elections and electoral politics, but we are seeing these sorts of harms occur elsewhere, and the law-enforcement context is one of the primary ones people worry about. this is related to the liar's dividend. suddenly there is a general pretext for police officers who potentially said or did something. they can claim that video was fabricated using a tool.
1:18 pm
the transparency and accountability gains we've made over the last few years in holding law enforcement accountable are potentially at risk, given the dynamics i mentioned. that's worrying, and i hope lawmakers and the courts are able to understand the gravity of this technological moment we are in. host: we got a question from text -- do you know anything about who is behind these ai companies? guest: i think the challenge now is that there are many technologies. it's not a monolith, either in the actual technology itself and what models are being developed, or in the companies. i think right now, there are some companies that are acting in quite good faith, that are trying to consider the potential
1:19 pm
harms being done, companies like openai, which just released an election policy that i found to be quite considered. i applaud them and hope they continue in that regard. but there are other companies, in the u.s. and abroad, that are not putting up those sorts of guardrails. in terms of the funding sources, i'm not sure, but one of the places we do focus is trying to hold these companies to a high standard on both the policies they put on the books and the way they enforce those policies. host: hendersonville, north carolina, independent. caller: hey, i am currently studying artificial intelligence programming in college. one thing we recently discussed is how ai is used in society and how widely used it is. anybody that has a smartphone uses ai.
1:20 pm
it lives in every aspect of our lives. when it comes to technology, ai has been there for a while. as for the fear of what it can do, or what people can do using it in elections, because it creates a decent amount of fear, i don't want another ai winter to develop because people are afraid. i've been trying to get a degree in this and build a career in this. host: your comments? guest: it is true that ai is quite pervasive. one of the challenges when we talk about ai is what exactly we mean. it's quite a broad term. one of the jokes in computer science spaces is that ai is anything that doesn't work yet.
1:21 pm
one of the things that we are concerned about is that our fear of the effects will end up being greater than the actual effects. one of the consequences is that people may become too skeptical of the information they see, to the point where we just see a general erosion of trust in the media environment. there is also a potential economic impact of that fear, especially when it comes to global competition, and that's what the caller touched on. host: we also have another text -- i guess the new hampshire attorney general is investigating the robocall, so we will find out. that is the judicial side of it. do you think more groups will be encouraged to do this kind of
1:22 pm
thing? guest: yes, i think it's an omen of the year ahead. as i mentioned, especially when it comes to audio, that's going to be quite a bit more challenging, because there is so little other context for people to use in order to fact check it themselves. also, over the past couple of years, the number of robocalls has continued to expand. that could be a place where we see quite a lot of concern. people focus on scams or concerns about automated robocalls, but i think we are going to see strategic or motivated actors use these technologies in order to shift the information landscape. it's going to take an all-of-society approach. i think lawmakers should understand the gravity of the
1:23 pm
situation, but also make sure they are leveraging the highest quality information they have in order to pass really good laws. journalists who cover the space, researchers who study it, and private companies need to be really good actors and really try to understand that elections are an important democratic process, and they need to do their part. host: new milford, connecticut, independent. caller: good morning, thank you for taking my call. technology is one thing, and it's wonderful, and we all have to make an effort to learn as much as we can and understand it. but besides the technology, there's something called morals and ethics. as a society, if lying is the accepted thing to do, it's a problem for the majority of the population.
1:24 pm
if lying is accepted, then who is telling the truth? thank you very much. host: any comment there? guest: i think there needs to be public accountability for using these technologies in harmful ways. one of the things i wanted to pick up on is transparency. one of the callers mentioned that content should either be logged or laws passed requiring a label when artificial intelligence is used. one thing we focus on here, when it comes to truth, is that the devil is in the details. it matters whether the label or disclosure comes before or after. i hope, when we are out there trying to promote truth as a
1:25 pm
society, lawmakers focus on the little pieces that really impact the public. host: john is in warrenton, virginia, republican. good morning. caller: good morning. i like that question about dark money coming in from overseas. i almost see this video you showed as sort of like the initial stages of the computer age, when we played pong. ai will be used for more complicated and interesting things. i get 25 robocalls a day, and they've changed how i use my phone. i can't answer calls on my phone anymore because of robocalls. i don't really think they are just trying to sell me something. i think it's an attack, in some sort of weird way, to change my life. remember in the 90's, when the internet came on, how much it
1:26 pm
changed our lives, how it pushed the economy forward, and how many new businesses there were. i wonder if ai will do the same thing. it looks like it will be able to solve a lot of problems. and one more question, i can remember -- host: i have a question about how you think ai could impact the election. what do you think of that? caller: yeah, i'm an early adopter for c-span call-ins. i was listening the first day c-span was on radio, and i used to listen when i was in college. i was at umbc. how will it affect the election? the little bit of extra stuff they can do with ai will be nominal.
1:27 pm
the harm is nominal. we already don't believe what's going on. host: ok. let's talk to germantown, maryland. good morning. caller: with the last go-around with the social media companies, the government wasn't able to make any substantial change in the average american's life. with this, we are worried that even videos on our phones, maybe of an accident, or maybe a protest rally, maybe bigotry, could go against these companies. these companies are owned by major conglomerates, and if something goes against their bottom line, they want to get into our phones, and they can alter these things. the biggest challenge for the government that has affected the
1:28 pm
public is that it hasn't been that big or important in technology. they tend to let things go by, like recently with apple changing their product to be manufactured in india. there's something on the devices they changed, and we didn't hear about a big penalty from the government. they don't have stringent restrictions. i don't have faith that the government will do something. do they have a dedicated agency? when we created the space force, we knew how important it is to be able to gather intelligence and to communicate, and so we created a space force. host: we got your question. how impactful do you think the federal government can be?
1:29 pm
guest: the federal government can be very impactful. one of the glimmers of hope we see here is that the federal government, state governments, and governments internationally are trying to do what they didn't do at the beginning of the social media era, which is to understand the trade-offs between democratic benefits and potential democratic harms. part of this is fact-finding, getting good research so that the laws that are passed are high-quality. we saw yesterday that the national science foundation announced a large new funding program in ai to do exactly this. we've seen governments move much faster, whether it's joe biden's executive order or the bills we are seeing proposed at the state level. they are sort of learning from the mistakes of the social media era, and i think they can
1:30 pm
be quite impactful. host: you mentioned the executive order. i have the fact sheet here: president biden issues executive order on trustworthy artificial intelligence. that came out in october. what impact do you think that will have? guest: in terms of the impact it will have, i think it's too early to tell. some of the pieces of it will need to be updated as the technology changes. for example, it focuses on models and technology of a certain size. as the technology advances, we will need to rethink that: they've focused on large models, but increasingly capable models are quite small and can even run on handheld devices. in general, the orientation and
