Washington Journal: Neil Chilson, CSPAN, June 27, 2023, 10:01am-10:33am EDT
10:01 am
here is what is coming up live today on c-span: former united nations ambassador and current republican presidential candidate nikki haley talks about u.s. foreign policy and efforts to strengthen partnerships in the indo-pacific region. we will bring that to you live at 10:30 eastern today, and at 2:30 p.m. eastern, robert zoellick discusses ukraine's economy and reconstruction efforts with the washington post. you can watch all of our events live here on c-span, c-span now, our free video mobile app, or online at c-span.org. >> c-span is your unfiltered view of government. we are funded by these television companies and more, including mediacom.
10:02 am
mediacom supports c-span as a public service, along with these other television providers, giving you a front row seat to democracy. washington journal continues. host: a conversation now on the future of artificial intelligence. our guest is neil chilson, who served as chief technologist at the federal trade commission. he is now a senior research fellow at the center for growth and opportunity. there have been calls for a pause in ai development, and we will get to that issue. a headline on that topic yesterday from wired magazine: meet the ai protest group campaigning against human extinction. fears that artificial intelligence might wipe us out have fueled a rise in groups like
10:03 am
this. what is your idea about taking a pause in ai? guest: ai is actually many things. many tools, dozens of tools, use ai today. the tools that have prompted the most news, and maybe are most concerning to this group, are these large language models like chatgpt, which has a chat interface where you can type prompts and it generates text that is probable based on its understanding of everything on the internet. it gives you a likely sequence of tokens. i think if people thought about pausing all of the devices that they have that use ai and said we are not going to make these any better, we are talking about things like alexa or siri that can be beneficial to the disabled
10:04 am
community among others. a lot of these tools have a huge potential to benefit humanity. why would you pause that? we should think about the risks, but i don't think these large language models like chatgpt are an existential threat. people are worried about what comes in 40 or 50 years. does that mean we should pause development? if not a pause, there needs to be rules of the road. there was a conversation about this on capitol hill. we have privacy laws at the state level, and the federal trade commission, which polices unfair and deceptive practices. we have laws around how -- regulated sectors have a bunch of rules.
10:05 am
it is up to the agencies that manage these to take a close look and see if there are gaps that they need to fill. if there is a gap that congress needs to fill, congress can look at it. in some of the discussions on the hill, the hub-and-spoke model might be the right approach. host: explain. guest: there would be a central repository of expertise in the federal government. it might be something like the national institute of standards and technology. that expertise would then be used by all the agencies who have these sector-specific regulatory mandates, to understand how ai would apply to driverless vehicles, drones, drug development. each of the agencies --
10:06 am
host: so, the future of ai and calls to pause and regulate it. neil chilson is our guest, a senior research fellow for the center for growth and opportunity. republicans can call in at (202) 748-8001, democrats (202) 748-8000, independents (202) 748-8002. there was a conversation about this on capitol hill; chuck schumer talked about it last wednesday. here is the majority leader, a minute and a half of his comments. >> we do not know what artificial intelligence will be capable of in two years, 50 years, 100 years from now. in the hands of foreign adversaries, domestic groups interested in financial gain or
10:07 am
political upheaval, the dangers of ai could be extreme. we need to do everything we can to make sure these groups cannot use it for illicit and bad purposes. but we also need security for america's workforce, because ai, particularly generative ai, is disrupting the way people make a living. at greatest risk are those who live paycheck-to-paycheck, with ai displacing millions of low-income workers. energy production could be next. those with college educations and advanced degrees will be impacted. ai will shape and reshape the knowledge economy, impacting workers in sales and
10:08 am
marketing, coding, software development, banking, law, and other skilled professions. many assumed these jobs would always be safe, but that is not the case. the erosion of the middle class, already one of america's most serious problems, could get worse with ai if we ignore it and fail to take measures to prevent job loss or address the distribution of income. host: chuck schumer on capitol hill on wednesday. i want to pick up on the erosion of the middle class due to artificial intelligence. do you see that happening? guest: what is powerful about ai -- i've mentioned how we already use these tools. i think senator schumer is correct to point out the transformative potential of this technology, including for the workforce. in many ways, i think the early concerns about automation in factories and manufacturing did
10:09 am
directly impact blue-collar workers. these language models are aimed more at content creators, so it is more white-collar workers who will have the opportunity to transform their jobs with this technology. i say that with an optimistic bent, because i think these tools are powerful in enabling individuals to create sophisticated, complex image, video and text content. that means that many of us can express ourselves in ways maybe we have not been able to before in art. and there is the chance to streamline the most boring parts of our work and become more productive. there was a study out of stanford about call-center workers who were using ai-assisted chatbots to help them
10:10 am
answer customer questions. the research showed they were 14% more productive, customers liked them more, and the biggest benefits were to the people who were the newest and least experienced on the job -- the younger, less experienced people in the workforce. there are opportunities for gains across the board for these technologies to help us all get better at our jobs. host: when it comes to your job, former chief technologist at the federal trade commission from 2017 to 2018, what does a chief technologist do, and how has that role changed in five years due to ai? guest: it is primarily an advisor to the chair, and it weighs in on all of the areas in which the federal trade commission, which has both a competition and a consumer protection mission, might touch on technology. i think that job has probably
10:11 am
not changed that much due to ai, but the emphasis has changed on the types of cases that the ftc is bringing. the ftc is primarily a law enforcement agency. the more recent administration has shifted to more of a rulemaking mode, in which it would be a regulator over certain industries. that means the chief technologist has a harder job in some ways and has to try to predict the future of technologies, so that the rules that are put out don't hinder them, and instead maximize the benefits and minimize harm. if they say there is a law violation -- let's talk about the intricacies of how the law was violated and bring the case -- maybe that role has changed a little. host: neil chilson, what is the center for growth and opportunity?
10:12 am
guest: it is focused on igniting economic growth and abundance and thereby enabling all of us to have more opportunities to live better lives. we focus on immigration, energy, and, where i focus, technological innovation. we work with students, we have an international network of researchers, and we try to communicate the transformative solutions we believe will help spark economic growth to decision-makers. we are funded by a wide range of individuals, organizations and foundations, and the work of our researchers is not directed by them. host: the center for growth and opportunity. let's give a few people the opportunity to check in. democrat from virginia, good morning. caller: good morning.
10:13 am
i think regulations are just fine, but regulating the dark world is not going to work. i equate ai with the invention of the atomic bomb. the ability to destroy humankind is evident. i don't know how it is going to be -- look at the internet. in the 90's, the internet was a wonderful thing, and then the dark forces started working their way in and took over in many ways. i'm not sure -- how do you see regulation going? what kinds of regulations are going to keep a cap on this? and thank you for taking my question. host: answer the question. guest: i have heard the equivalence between the atom
10:14 am
bomb and ai before, and one way they are similar is they have big potential international implications. we can talk about the race between china and the u.s. and who is going to win the ai race. but they are different in other ways. ai is a general-purpose technology. we all use this technology all the time. every time you use talk-to-text on your phone, it is using algorithms that fall under many definitions of ai. this technology is here; it is being used widespread. john is talking about existential risk, and in many ways there is no clear path from the technologies that we have right now to the type of artificial general intelligence that people are afraid poses an existential risk to humans. even the experts will say that.
10:15 am
so regulation, i don't think, should be focused on that risk. we should have people thinking about it, just like we have people thinking about what we would do if there was an asteroid collision. but i don't think it ranks as a global priority. as humans, we have many global priorities that probably are more immediate. as you mentioned, we do have atom bombs, and nuclear destruction is a potential catastrophe for humankind. we have viruses and pandemics and things like climate change. these are much more immediate threats that probably are a higher policy priority than the risk of artificial general intelligence. host: you talk about the ai race. how does a country win? guest: maybe i embraced that phrase too quickly. technology is not a race. it produces benefits, and when
10:16 am
they become widespread, they can produce them for all of humankind. when i talk about a race between the u.s. and china, i'm talking about the different visions that a u.s.-led effort on ai would have versus a chinese-led effort on ai. the u.s. approach to technology -- software technology like ai, where we have quite a lead, and all of these companies are in the u.s. -- is focused on bottom-up delivery of solutions to customers and users. in china, everything has the predicate: how is it a threat to the state and the ruling party? with that, their technology will be aimed at supporting the party first rather than trying to deliver solutions. those competing visions for ai
10:17 am
are what i talk about when i talk about a race. host: this is jc, on the line for independents. caller: i have a question about artificial intelligence, ai, and how it impacts society, humanity. when we have our ones and zeros being controlled by somebody to shift onto the wrong path, how does that benefit society rather than going into those creations? who needs to guide us so it does not destroy the earth and the galaxies, the universe? host: a philosophical question. guest: i think ai is a tool, and i think tools are as useful and as harmful as the people behind them.
10:18 am
i think when people choose how to use tools, it is important that we think about the moral implications of using them. i do think that ai has a great potential to bring benefit and healing, both of the physical kind but also psychological, maybe even spiritual. but it is a tool, and we want to make sure that we encourage the uses that are beneficial while mitigating the ones that are harmful. host: is it a tool that can be more creative than a human in whatever industry -- writing books, for instance. we are seeing ai-generated books. do you think ai is going to eventually be able to be more creative and come up with better stories than a human writer? guest: i think people will always value something created by humans. that is the history of technology. we see that when people buy
10:19 am
furniture, handcrafted widgets from etsy. i think people will always value human creation. these are powerful tools, and in the hands of creators they can enable average, normal people to express their thoughts in extraordinary ways. i think there is a huge potential for an explosion of creativity in this space. will an ai-generated novel ever win the pulitzer? i don't know. it's possible. you can think of these large language models as generating the average of the internet on whatever prompt you give them. if i ask them to write something in the style of hemingway, it is trying to generate something that looks like the average understanding of hemingway. it is more derivative than it is creative, but it is a good
10:20 am
sounding board for people who are trying to be creative. you can say, what would it be like to talk about that in a poem? you can get a reaction, a first draft, and that can spark creativity in humans. host: a viewer on twitter with optimism on the human side: a real writer has more imagination. let's get over the notion that it will take over; it is a writing tool and nothing more. let's get to your calls. for republicans, (202) 748-8001. democrats, (202) 748-8000. independents, (202) 748-8002. neil chilson is with us for the
10:21 am
next 15 or 20 minutes. in colorado, good morning. caller: good morning. a quick comment for the audience and mr. chilson. i believe at its current state ai is more of an aggregator. it might be able to give off a false sense of creativity, but again, some of the more popular platforms were populated by mining reddit. as far as it being the average of the internet opinion, that rings true from my understanding. i have a question. given the revelations from the twitter files and how private organizations were working as agents for the u.s. government -- there are things the government is forbidden from doing -- is there that much of a gap between the chinese pursuit of ai and the
10:22 am
american pursuit of ai, given what we now understand from those revelations? guest: that is a complicated question. the twitter files debate is ongoing, and the role that government has in shaping large platforms is one that i think about a lot. i think we do have constitutional and statutory protections for u.s. citizens being able to express themselves, whether they are individuals or run a large platform. i think when we have large platforms collaborating with government officials, it needs close scrutiny. i still think that is quite different from the chinese system, where the technology itself is getting a yes/no restriction on whether or not it
10:23 am
can even be used in the first place or developed in the first place, and has to be calibrated so that it does not offend the communist party in china. that is very different from the u.s., both structurally and in effect. host: you were talking about chatgpt, the average of the internet, and the caller was talking about the early usage of reddit to get this content. who decides the content of the intelligence, and where does it come from? who says you can use this to create your chatgpt answer, but don't use this? guest: the way these large language models are created is by gathering a bunch of information, and there are certain repositories of information that people have gathered already. they are made up primarily of publicly available content on the internet, but there are supplemental pieces of content that are put in there.
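the idea of gathering a body of text and training a model that then predicts likely next words can be sketched in miniature. this is a deliberately toy illustration, not how chatgpt is actually built -- real large language models train neural networks over billions of tokens, and the tiny corpus and function names here are illustrative assumptions:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """gather text and count, for each word, which words follow it."""
    following = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """predict the word most often seen after this one during training."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# a toy "internet" to train on; the content choice happens entirely up front
model = train_bigram("the cat sat on the mat and the cat ran to the door")
print(predict_next(model, "the"))  # prints "cat": the most common word after "the"
```

note that the decision about what goes into the training corpus is made before `train_bigram` ever runs, which mirrors the guest's point that content choices are made on the front end.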
10:24 am
that is run through an algorithm that trains the model. what that means is at the end there is a model, which is just a bunch of numbers. none of the content is there. the decision is made on the front end about what all of the content will be that goes in there. once a model is trained, the content is sort of distilled down to its essence and the relationships within it. that is what the model has going forward. at that point, when you type a prompt into chatgpt, it requests from the model: predict what text would look like following this. it is like a fancy version of the autocomplete that's on your phone. that response goes through screens that the company might have in place, looking for mistakes or other types of
10:25 am
problems it doesn't want to release. then the result of the prompt is given to the user. there are points where you can shape what the result will be, but to the questioner's point, the decision is made at the front end, when you're training the model, about what content will go into it. host: if you are focused on chatgpt, would you be more comfortable with some sort of heavy hand of regulation on some other form of ai? guest: we tend to focus on chatgpt and these models, but these same tools are producing amazing science. we have something called alphafold, which is how we build models for how various
10:26 am
biological functions of our bodies work. it was able to produce in 18 months some 200 million of these predictions -- almost all of the proteins known to science -- compared to what the manual approach produced in the previous 60 years. this has huge implications for drug testing, plastic cleanup in the ocean. to the first part of your question, we tend to focus on chatgpt and large language models, but there is a lot of potential for ai in a wide number of spaces, with human benefits. do those areas have more risk? certainly, if you are developing drugs, we need to protect the safety of those. we have the food and drug
10:27 am
administration's approval process. ai does not really change that particular process; the safety concerns are adequately addressed by the fda. the question is, how can you maximize the use of ai to produce the drugs? host: you are on with neil chilson. an independent. caller: thank you for taking my call. computer knowledge has been expanding. i think of the ceiling of the sistine chapel, where we see the figure of god with his hand out to man, in participation with man. and here is man's hand, and he doesn't give a damn and does not
10:28 am
consider god trying to help him out. man has evolved over the last thousands of years, the last hundred years especially, and it seems like things are getting worse instead of better. we've got a thing called a television, telephones, handheld computers that isolate our children, in my opinion, because they have their nose in that all of the time. they don't have any human communication with other people. it seems like the computer phone has basically isolated the next generation. host: can ai make us feel less isolated from other humans? guest: that is a great question. the concern around the use of electronic devices, computers or
10:29 am
televisions, is recurring throughout history. each time there is a new technology, people point out the downsides and worry about it. then they acclimate to the technology, and maybe find new ways to use it. in my own personal example, my chat groups are the easiest way for me to stay in touch with my family, including my 71-year-old grandmother, who i can pick up my phone and text, and that is easy. while i'm just running around, i can send her pictures to a device she has in her house. we need to think about how we use these technologies, including ai. ai has the potential to appeal to -- there are people who like that mode of discovery, explanation, exploration,
10:30 am
and who will benefit. but we can keep an eye on how people use it to supplement rather than replace human connection. host: a good question from a viewer: why can't ai prevent hacking or phishing or other crimes? guest: it actually does. the biggest example, the one i'm sure all of your listeners and viewers have benefited from, is spam filtering. it is a perfect application for ai. because we are able to apply some of this machine learning, we can identify spam. these types of technologies are very good at absorbing a giant amount of data and identifying patterns in them.
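the spam-filtering idea -- absorb labeled examples and score new messages by the patterns in them -- can be sketched with a toy word-count classifier. the example messages and the simple scoring rule are illustrative assumptions; production filters use far richer features and probabilistic models:

```python
from collections import Counter

# tiny hand-labeled training set (hypothetical example messages)
SPAM = ["win free money now", "free prize click now"]
HAM = ["meeting moved to noon", "lunch with the team"]

def word_counts(messages):
    """absorb the example messages into per-word counts."""
    counts = Counter()
    for msg in messages:
        counts.update(msg.lower().split())
    return counts

SPAM_WORDS = word_counts(SPAM)
HAM_WORDS = word_counts(HAM)

def spam_score(message):
    """positive score: words seen mostly in spam; negative: mostly in ham."""
    return sum(SPAM_WORDS[w] - HAM_WORDS[w] for w in message.lower().split())

print(spam_score("free money"))    # positive, flag as likely spam
print(spam_score("team meeting"))  # negative, likely legitimate
```

the same absorb-and-score pattern is what lets larger systems flag unusual network activity for cybersecurity experts.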
10:31 am
the types of cybersecurity attacks that might be difficult to catch if a human was manually monitoring them are things ai systems can and do help cybersecurity experts identify. can they stop all of it? no. bad actors are creative as well. they have access to these tools and they try new things. but ai is one tool that the good guys can use to deal with some of these problems. host: you said you are an optimist on ai. do you think the overall narrative is more pessimistic? we started with this headline from yesterday about human extinction and ai. you heard the comments from majority leader schumer that we played. overall, who is trying to change the narrative on artificial intelligence? guest: historically, humans have
10:32 am
always been enthralled by something that looks human but is not. it goes back to frankenstein and much further. when people personify these technologies -- and frankly, it is so easy to do with chatgpt; it sounds very convincing -- and we don't understand the underlying technologies, it is easy to fear there is a replacement for humans, something people have long dreaded. right now the technology is not like that. i think the people who are -- host: or the people who are programming them have motives. guest: that is a concern, but it is less of a concern about ai than about other types of technology. the way these large language models are trained is through a massive amount of data.
10:33 am
the amount of direct control the designers have over the results is not as mechanical as it would be if they were blocking keywords on social media or something like that. so i do think that is a concern, and we should be aware of what the designers are intending. the ntia -- the national telecommunications and information administration, which advises on technology issues within the government -- recently asked for comments about how we can make ai accountable, and how we can make sure that when people design ais, they have the common good in mind. once
