
Washington Journal: Zeve Sanderson, CSPAN, January 25, 2024, 11:21pm-12:06am EST

11:21 pm
important issue of our time and why. >> c-span's voices 2024. be a part of the conversation. >> c-span, your unfiltered view of government. we are funded by these television companies and more, including charter communications. charter communications supports c-span as a public service, along with these other television providers, giving you a front-row seat to democracy. >> if you ever miss any of c-span's coverage, you can find it any time online at c-span.org,
11:22 pm
including videos of key hearings, debates, and other events. the timeline tool makes it easy to quickly get an idea of what was decided in washington. scroll through and spend a few minutes on c-span's points of interest. >> welcome back to washington journal. zeve sanderson is at new york university, where he is executive director of the center for social media and politics, and he's here talking about artificial intelligence and deepfakes. can you explain what that is? guest: a deepfake is the manipulation or fabrication of an image, audio recording, or video through automated means, with the intended effect of making something appear to
11:23 pm
happen that didn't happen. it's been a potential threat in the last two election cycles, but this moment is different. we have long had technology to manipulate content in different mediums. viewers at home know hollywood movies have been doing this for many years, but it's expensive and time-consuming. it took trained specialists using particular software, and it required many hours. now with ai, we have a different situation: the technology to create deepfakes has been democratized, which has lowered the cost and increased access. host: let's take a look. this was a fake robocall of what sounded like president biden calling new hampshire voters ahead of the primary, telling them not to vote. >> a bunch of malarkey. you know the value of voting
11:24 pm
democratic when our votes count. it's important that you save your vote for the november election. we need your help in electing democrats up and down the ticket. voting this tuesday only enables the republicans in their quest to elect donald trump again. your vote makes a difference in november, but not this tuesday. host: again, that was not president biden, although it sounded an awful lot like him. your comments on that? guest: it did sound a lot like him. if i had been on the other end of that phone call, i don't think i would have been able to tell that it wasn't him. this is a challenging moment in the information environment. one thing we're interested in when it comes to these deepfakes is that a lot of the deepfakes we have seen have been audio only. they haven't been video. one of the reasons is that audio is hard to fact check.
11:25 pm
if someone wanted to create a fabricated video of joe biden, his every move is watched and recorded, so there might be only a few minutes or a few hours before it gets fact checked. audio is much harder, and that's a challenge moving into 2024. it would have been easy to make that call using current technology: you can upload a few minutes of a politician's voice and create an ai version of it. the public should go in with a healthy bit of skepticism, but not too much skepticism. host: if you would like to join our conversation about ai and its use in campaign 2024, you can do so on our lines by party. democrats, (202) 748-8000; republicans, (202) 748-8001; and
11:26 pm
independents, (202) 748-8002. you can also join us on text and social media. you said this is being democratized, but how easy is it to do this now? could i do it? what kind of technical background do you need? guest: i think you probably could do it, but i don't know what your technical background is. these technologies are very accessible. different companies have different rules around what their technology can be used for. certain companies like openai prohibit this type of content from being made using any of their products. there are a number of products out there, many of them very good, and i would say they are not quite as easy as click and play, but they are getting there. host: another part of this would
11:27 pm
be what some people are calling the liar's dividend. can you explain that? guest: the liar's dividend refers to an information environment where there is a lot of anxiety around false, manipulated, or fabricated content circulating. what it allows bad actors to do is claim that something that actually did happen didn't happen. you can imagine, in the context of 2016, an audio clip without video or a photo attached to it. as we just saw with the joe biden call, it's easy to make this sort of audio content, and you can imagine donald trump claiming it was false. it will give strategic actors a general pretext to claim something that's true is
11:28 pm
actually false. host: let's take a look at a couple of other examples. the first one is an ad put out by the republican national committee just after president biden announced his reelection campaign last year, and it features ai-generated fake images. >> we cannot call the 2024 presidential race for joe biden. >> this morning, we invaded taiwan. >> financial markets are in freefall after 500 regional banks have shuttered their doors. >> the border was overrun by 80,000 illegals yesterday. >> escalating crime and the fentanyl crisis. >> who is in charge here? it feels like the train is coming off the tracks. host: that ad used ai-generated images of an imagined future. should that be allowed? guest: this is being considered at both the
11:29 pm
federal level and the state level. lawmakers and regulators are considering what to do here. my colleagues and i argue two things. first, we should target the electoral harm being done, not the technologies being used. everyone from well-financed national campaigns on down now has access to these nascent technologies to fabricate material, so we should focus on the electoral harms we are nervous about rather than the underlying technology, given that the cat is out of the bag. second, we are still in the early days here, so a lot of policy, rather than
11:30 pm
jumping the gun and focusing on the wrong things, should really focus on promoting our knowledge about how generative ai might impact elections, especially in these kinds of early years. there should be a big focus on transparency, to make sure that any time generative ai is used, a viewer at home would know it, but also on promoting research to understand what the effects of this sort of content are on voters. host: when you say transparency, if you've got some bad actors out there, they will not be transparent that they are using deepfakes. guest: they will not be transparent, true. so it might put an onus on the mechanisms by which content is reaching people, whether that's requiring broadcast television networks to do due diligence in order to apply labels, or social media companies applying labels.
11:31 pm
at the state level, they have been considering penalties for not disclosing. host: you wrote a piece for brookings that essentially says that ai can be misunderstood, and in some cases the fear of ai can be overhyped. explain that. guest: i think a lot of our fear moving into 2024 is specifically around the relationship between ai and misinformation. we start with a review of the literature: what do we know, over the last 10 years, about the way misinformation has or hasn't had an impact on elections? has it impacted votes? we make the argument that we have perhaps been overly concerned about
11:32 pm
the impact of misinformation. news doesn't really make up a majority of people's media consumption, and when people do consume news, they tend to consume high-quality media. when misinformation exists, it tends to be heavily concentrated in the extremes of either party; it does not tend to reach most americans. so for us, rather than using that evidence to argue about whether ai will impact 2024, instead we ask: what might be different about this moment we are in? we focus on three different ways ai might make misinformation more impactful or directly impact elections. it can make more of it, and there is simple math there: misinformation is more likely to break into the mainstream because there is more of it. it can make it more persuasive or effective. or it can make it
11:33 pm
better targeted. remember in 2016, the troll accounts' tweets had grammatical errors. on targeting, tiktok is a very different social media platform, one that sends content to users who haven't opted in to a particular social network. i don't need to follow you to get your content; it's just reaching me. that dynamic of information diffusion might change who the information is reaching, and not just the effect it has. host: we will get to your calls shortly. democrats, (202) 748-8000; republicans, (202) 748-8001; and independents, (202) 748-8002. you can start calling in now. i want to ask you about public opinion on this. the ap did a poll in october of this
11:34 pm
past year, and i will read you some of the results and have you comment. asked how concerned they are about ai increasing the spread of false information, 58% said they were concerned; 54% have not read or heard much at all about ai; and 30% report they have used an ai chatbot or an image generator. what do you make of those numbers? guest: i'll focus on the first number, which is the level of anxiety. as we talked about after the joe biden phone call, i have a healthy level of skepticism. by healthy, i mean people should understand that these technologies exist and that they have the capacity to produce content like what you played on your show. however, here, we are
11:35 pm
nonpartisan researchers, and what we care about most is a healthy democratic society. what i hear is great anxiety around, you know, the use of these technologies to create false or fabricated content. and so, you know, i think in 2024, a little bit of skepticism and understanding of these technologies is important, but not so much that people stop trusting altogether. >> let's talk to viewers. a republican in west chester, ohio. >> good morning, thank you for taking my call. my concern is that we may put ourselves in a position like we did with
11:36 pm
section 230, which never really resolved what social media platforms' responsibilities should have been or held them accountable. the other concern i have is protected speech as described in our constitution. we protect speech generated by a human; what about speech that's generated by a machine? these are major constitutional questions, and we've not been very good at resolving some of them as we develop the technology, which is when we should be discussing the issues. there has to be something, a disclosure on every political ad that says there was or there was
11:37 pm
not artificial intelligence used in generating the images. >> what do you think? >> yeah, i think transparency should be a big piece of the discussion. i want to pick up on something that was mentioned around social media. on the legal side, as a nonlawyer i'd never comment on that, but i can instead comment on the research side. when it came to the development of social media and a connected society, giving researchers like us the type of data and, importantly, the funding that we needed for this type of research was too slow, and we are still catching up on really understanding the effects of social media platforms. here, when it comes to the development of ai technologies, the pace
11:38 pm
of development is even faster. i hope we don't make the same mistake again, and that we really have an all-of-society approach to understanding what the effects of these technologies are, so that lawyers and policymakers, as they argue in courts or in the halls of congress about what to do, are given high-quality information to do their work. >> ira in louisburg, north carolina, democrat. >> good morning. can you all hear me okay? >> yeah, go right ahead. >> okay. as an attorney, i can tell you that law enforcement agencies are already using the technology that you are speaking about. if you don't make a copy of your audio or
11:39 pm
video, you will get back a video, but it will say something different; the audio will say something different. that's already happening. if you follow the news, the only time law enforcement gets held accountable for anything they do is when somebody takes video or audio. if you have not kept the original audio or video clip, they are not going to be held accountable. things like that are happening right now: they claim video clips have been altered when you file something with the department of justice to have a case heard. >> all right. zeve, your comments? >> yeah, in terms of my own research, i tend to focus on elections and electoral politics, but i will say that we are seeing
11:40 pm
these dynamics elsewhere, and the law enforcement context is one of the primary ones that i, as a citizen, am worried about. it suddenly gives a police officer a general pretext to claim that a video is fabricated using these tools. the transparency and accountability efforts that we've gained over the last few years to hold law enforcement accountable are suddenly, potentially, at risk given some of the dynamics i mentioned. all of that is worrying, and i hope both lawmakers and the courts are able to understand the gravity of this sort of technological moment we are in. >> does this technology get fed by dark money from overseas? do you know anything about that?
11:41 pm
>> you know, i think the challenge right now is that there are many technologies. it's not -- i wish i had said this earlier -- monolithic here, and neither are the companies. i think there are right now companies acting in quite good faith that are really trying to consider, you know, the potential harms being done. openai just last week released their elections policy, which i found to be quite considered; i applaud them on their efforts and hope they continue in that regard. there are other companies, both here and abroad, that are not putting those sorts of guardrails in place. in terms of funding sources, i'm not sure, but one of the places i hope we do focus is really trying to hold these companies to a high standard, with both the policies they put on the books and the way they enforce those policies.
11:42 pm
>> hendersonville, north carolina, independent. >> hey, so i am currently studying artificial intelligence programming in college, and one of the things we recently discussed is how ai is used in society and how widespread it is. anybody that has a smartphone has ai. it lives in every aspect of our lives when it comes to technology; ai has been there for a while. and with the fear of what it can do, or what people can do using it for elections, it can cause a decent amount of fear, and i don't want another ai winter to develop because people are afraid of this technology, as i'm trying to get a degree in this and make a career out of it.
11:43 pm
>> zeve, your comments? >> yeah, so it is true that ai is quite pervasive. i also think one of the challenges when we talk about ai is knowing exactly what we mean; it's quite a broad term. one of the jokes in computer science spaces is that ai is anything that doesn't work yet. one of the things we are concerned about is generalized fear, with the democratic consequence that people might become too skeptical, and we see a general erosion of trust in the media environment. there could also be potential economic impacts of that fear, especially when it comes to global competition and
11:44 pm
something that the caller just touched on. >> are the fake ai-generated robocalls in new hampshire an omen of the year ahead? i guess the new hampshire ag is investigating it, so we will have to find out as far as the judicial side of it, but do you think that more groups will be encouraged to do this type of thing? >> yes, it is an omen of the year ahead, especially when it comes to audio, which will be quite a bit more challenging because it's tough to fact check. also, as we have seen over the past couple of years, the number of robocalls has continued to expand, and so, you know, that could be a place where we see quite a lot of concern, and colleagues who focus
11:45 pm
on scams are especially concerned about automated robocalls. when it comes to elections, i think we are going to see strategic actors or motivated actors, both here and abroad, try to use these technologies in order to shift the information landscape. it's going to take an all-of-society approach, and i think that lawmakers should definitely understand the gravity of the situation but also, you know, make sure that they're leveraging the highest quality information they have in order to pass really good laws. it's going to require journalists to cover it and researchers to study it, and private companies to be good actors and really understand that elections are a very important democratic process and do their part. >> ben, an independent, good
11:46 pm
morning. >> thank you for taking my call. technology is one thing, and it's wonderful, and we all have to make an effort to learn as much as we can, and you said it. but besides the technology, there's something called morals and ethics, and we as a society today have made lying an accepted thing to do. so it's a problem for the majority of the population: if lying is accepted, then who is telling the truth? thank you very much. >> zeve, any comments there? >> yeah, i think there needs to be public accountability for using technology in harmful ways. one of the things that i wanted to mention: we were talking about transparency efforts in this regard, and one of the callers mentioned that for all of this content there should
11:47 pm
be a law passed to make sure that when artificial intelligence is used, it's labeled. one of the pieces that we really focus on here is that the devil is in the details. it matters whether the label comes at the beginning or at the end, whether it's in small text, and i hope, when we are trying to promote truth, especially among lawmakers, that they focus on the little pieces that really impact the public. >> john, in virginia, good morning. >> good morning. hey, i like that question about dark money coming from overseas. i almost see this video that you showed as sort of like the initial stages of the computer age, when we played pong. it's going to be used -- ai will be used for much more
11:48 pm
complicated and much more interesting things. for example, i get 25 robocalls a day. robocalls have changed my phone; i can't answer calls on my phone anymore because of robocalls. i have to say, i don't really think they are trying to sell me something. i think it's an attack, some sort of weird way to change my life. but i remember how much the internet, when it came on in the 90's, changed our lives and pushed the economy forward, and there were so many new businesses. i wonder if ai is going to do the same thing; it looks like it's going to be able to solve a lot of problems. and one more question -- i can't remember. >> well, john, i have a question for you: how do you think ai could impact the election? >> oh, yeah. i'm an early adopter of the c-span call-in. i was listening the first day
11:49 pm
c-span was on radio back in the 90's, and i was what they used to call a news junkie in the 90's when i was in college at the university of unbc. how is it going to affect the election? well, i have to say, you know, a little bit of extra lying with ai is nominal. that's my opinion. we already don't believe what's going on. >> okay. let's talk to zian, democrat. good morning. >> in the last go-around with the social media companies, the government wasn't able to make any substantial change in average americans' lives. people are worried that even videos on our phones, maybe of a protest rally, maybe of
11:50 pm
bigotry, that go against the opposition or these companies -- one major conglomerate -- could be altered; if it goes against their bottom line, they can get into the phones and alter these things. the government's biggest challenge is that their penalties haven't been that big, haven't been that important as of recent in technology, and they let things go by. like recently, apple changing the product to be manufactured in india led to some malware or something on the devices. apple caught it, but we didn't hear about big penalties from the government; we didn't hear about any stringent restrictions. so i don't have faith that the government will do something.
11:51 pm
are they trying -- do they have a certain agency? when we created the space force, we knew how important it is to be able to gather intelligence and to be able to communicate, and we created a space force. >> we've got your question. zeve, how impactful do you think the federal government can be? >> the federal government can be very impactful. one of the glimmers of hope that we see is that both the federal government here in the u.s. and governments internationally are trying to do what zian mentioned they didn't do at the beginning of the social media era: they are trying to understand these trade-offs between democratic benefits and potential democratic harms and move much more quickly. part of that, as i mentioned, is funding -- getting good research so that the
11:52 pm
laws that are passed are high quality. you saw yesterday that the national science foundation announced a funding program in ai to do exactly this. we have seen governments move much, much faster, whether it's joe biden's executive order or bills proposed. they are learning from some of the mistakes of the social media era, and i think they can be quite impactful. >> you just mentioned the executive order. i have the fact sheet here: president biden issues executive order on safe, secure, and trustworthy artificial intelligence. that came out in october. what impact do you think it's going to have? >> in terms of the impact it's going to have, i think it's a little bit too early to tell. some of the pieces of it are going to be updated as the technology changes -- for example, the focus on models, sort of
11:53 pm
technology of a certain size, and i think that as the technology advances we are going to rethink that. but in general, i think it gestures at a kind of policymaking that tries to move quickly, tries to learn much more quickly about the technology than i think we've seen the federal government do in the past. i mean, they brought experts into the federal government to really understand these technologies deeply. they've engaged with everyone from leaders of these companies to academics to, and this has been promising, civil society organizations and community groups that are interacting with the folks who ultimately are going to be impacted by the technologies, to make sure that the harms are mitigated.
11:54 pm
again, it gestures at policymaking aimed at acting more quickly than we did in the past. >> let's go to augusta, georgia, independent. max, good morning. >> good morning. thank you for c-span; we enjoy watching you, particularly on this subject. i would ask your guest: when the government says you have to label the ai, how quickly will everyone label that they may or may not have ai in their material? they'll be one step ahead of you. thank you. >> yeah. when it comes to labels, we have seen, actually even before ai, that timing is a really big piece. information moves so quickly in our digital
11:55 pm
information age that, you know, people tend to be exposed to information quite quickly on social media, and so if the label is applied hours or days later, it's not going to have the same impact. i think the devil will be in the details with policymaking -- whether the enforcement mechanisms are there. i agree with the caller that timing is a big piece, and nefarious actors are going to want to sidestep whatever policies exist, so we should think deeply about how to get ahead of them. >> john is a republican in massachusetts, good morning. >> hi, thank you so much for taking my call. being that this technology was created by darpa, all your
11:56 pm
social media and all your communications are controlled by the government. being that we have foreign dictator governments that we work with, that we help -- and this gentleman said that social media was more complicated than this ai -- other countries are building robots to replace human beings. also law enforcement, who have been killing the majority of people of color. >> do you have something specifically about the election? >> yeah, i think the whole system is rigged because every president that's ever been
11:57 pm
president is related to the king of england, okay? obama was even related to the king of england. >> we are getting off the subject there. let's talk to jake in statham, georgia. >> good morning. >> good morning. >> i'm completely for technology and artificial intelligence, but it seems like in the country itself, what is evil is called good and what is good is called evil. how do you think artificial intelligence is going to have an impact on that statement i just made? i think it can come from the republican side or the democratic side, or just from individuals who own corporations and whatnot. what is your thought on that, sir? and thank you. >> we also have this from
11:58 pm
michael, who wants to know who is going to decide what the reputable sources are. >> yeah, i can respond to both of those quickly. in terms of the general statement around good and evil, what stood out for me is that when we have a generalized fear of a technology, we tend to fall into one camp or the other: we see these technologies as either utopian or as ruining everything. where i see research being able to really come into this space is in identifying very specific benefits and harms, using that research to help policymakers and regulators do their jobs better, and helping the public understand these technologies in a richer, more nuanced way. in terms of the credible sources of information, i mean, this is something that we've -- i think
11:59 pm
that we've thought about since the creation of journalism and media, but certainly quite a bit over the last 25 years. you know, i'm not going to pick outlets; again, we are a nonpartisan research center, and we don't have investments, personal or academic, in this media company or that media company. instead, what i would focus on when it comes to credibility is whether journalists or media companies have processes in place that we consider to be credible: whether they fact-check stories and retract stories if they are shown to be false, whether they do things like disclose who their writers are and who their funding sources are. i see credibility as a process rather than an outcome. people can and should have diverse perspectives and ideologies; what matters is a commitment to a process of doing good journalism, and that can come from a variety of sources
12:00 am
with different views. >> alina in highland park, illinois, good morning. >> good morning. given what happened in the 2020 election, with the fbi and dhs interfering and coordinating with twitter -- the former twitter, which is now x -- as well as possibly facebook, in a nefarious way: you know, ai is supposed to be a new technology for good, to help humanity progress. but given that interference by our very own government, it worries me that it is maybe being designed to be used to control humanity, as opposed to helping humanity. that's what worries me.
12:01 am
12:02 am
that is probably ai ahead of everybody in order to control. >> okay. go ahead. any comment there? >> my comment here is that, of the things we hear -- and we picked this up from the public opinion polling you mentioned before -- there is general anxiety around this new technology. that is where i think researchers need to move quite
12:03 am
quickly in this space, so that we can bring high-quality evidence to the public. i don't think there is any evidence that ai is being used in the election context to control the public, but what i do hear is a general anxiety, and i hope that we can get good, high-quality information out to the public quickly. >> our last call: montgomery, independent. >> thank you for putting me through. i just heard the woman talking about how it is not necessarily the technology that is the issue, it is the people who will be in charge of it. we will be, you know, at their beck and call, because at the end of the day, people are writing these algorithms. there is a lot of corruption. i am not surprised that people are hesitant about it. we all have the capacity to do good and bad. it is a matter of choice.
12:04 am
as far as what they are saying about focusing on the outcomes, giving people evidence of whether this is good or bad -- statistics can be manipulated to say whatever you want. at the end of the day it will be a moral issue, a human nature issue. my final point: human problems need human solutions. people say it is just a tool, but i think it is a little bit deeper than that. that is all i have to say. thank you. >> last word. >> the last thing that the caller said is sort of speaking
12:05 am
my language. i completely agree. in addition to understanding the technologies themselves -- and i was happy to see the government building that sort of technical capacity -- we also need to approach this from a human-centered perspective, to understand how the general public is impacted. we are in the early days, and i don't think we have a full understanding of how people are impacted. we need a better understanding of how we can mitigate the harms that we are talking about. this will impact people in every way, whether it is their work life and labor or their social lives. so i think we really do need to approach this from a human-centric point of view. for people at home, that is the way they are thinking about it, and we should make sure that people are at the center of the way we are thinking here. >> zeve sanderson, executive director of the center for social media and politics at nyu.
12:06 am
you can find them at csmapnyu.org. thank you for joining us today. ♪♪ >> c-span's "washington journal": our live forum involving you to discuss the latest issues in government, politics, and public policy, from washington and across the country. coming up friday morning, the hudson institute's michael durant and defense priorities discuss the latest in the israel-hamas war and concerns about a wider conflict. richard rubin talks about congressional efforts on tax breaks for businesses and parents. c-span's "washington journal": join the conversation live at seven eastern friday morning on c-span, on c-span now, or online at c-span.org. >> for c-span's voices 2024, we're asking voters what issue is most important to them.
