tv Washington Journal Neil Chilson CSPAN July 2, 2023 4:14am-4:59am EDT
the federal trade commission. he is now a senior research fellow at the center for growth and opportunity. there has been a call for a pause in ai development to get around this issue. a headline on that topic yesterday from wired magazine: meet the ai protest group campaigning against human extinction. fears that artificial intelligence might wipe us out have fueled a rise in groups like this. what is your idea about taking a pause in ai? guest: ai is actually many things. many tools, dozens of tools, use ai today. the tools that have prompted the most news, and maybe are what is concerning this group, are these large language models like chatgpt, which has a chat interface where you can type prompts and it generates text that is probable based on its understanding of everything on the internet. it gives you a likely sequence of tokens. i think if people thought about pausing all of the devices that they have that use ai and saying we are not going to make these any better, we are talking about things like alexa or siri that can be beneficial to the disabled community, among others. a lot of these tools have a huge potential to benefit humanity. why would you pause that? we should think about the risks, but i don't think these large language models like chatgpt are an existential threat. the people calling for a pause are worried about what comes in 40 or 50 years. does that mean we should pause development? i don't think so. but if not a pause, there need to be rules of the road.
there was a conversation on capitol hill. we have privacy laws at the state level, and the federal trade commission, which prevents unfair and deceptive practices. we have laws around how various sectors operate; these sectors have a bunch of rules. it is up to the agencies that manage these to take a close look and see if there are gaps that they need to fill. if there is a gap that congress needs to fill, congress can look at it. in some of the discussions on the hill, a hub-and-spoke model might be the right approach. host: explain. guest: there would be a central repository of expertise in the federal government. it would be something like the national institute of standards and technology. that expertise would then be used by all the agencies that have these sector-specific regulatory mandates, to understand how ai would apply to driverless vehicles, drones, drug development. each of the agencies -- host: so on the future of ai and calls to pause and regulate it, neil chilson is our guest, a senior research fellow for the center for growth and opportunity. republicans call in at (202) 748-8001, democrats (202) 748-8000, independents (202) 748-8002.
this was a conversation on capitol hill; chuck schumer talked about it last wednesday. the majority leader, in a minute and a half of comments. >> we do not know what artificial intelligence will be capable of in two years, 50 years, 100 years from now. in the hands of foreign adversaries or domestic groups interested in financial gain or political upheaval, the dangers of ai could be extreme. we need to do everything we can to make sure these groups cannot use it for illicit and bad purposes. but we also need security for america's workforce, because ai, particularly generative ai, is disrupting the way people make a living. at greatest risk are those who live paycheck-to-paycheck; ai could displace millions of low-income workers. energy production could be next. those with college educations and advanced degrees will be impacted. ai will shape and reshape the knowledge economy, impacting workers in sales and marketing, coding, software development, banking, law, and other skilled professions. many assumed these jobs would always be safe, but that is not the case. the erosion of the middle class, already one of america's most serious problems, could get worse with ai if we ignore it and fail to take measures to prevent job loss or maldistribution of income. host: chuck schumer on capitol hill on wednesday.
i want to pick up on the erosion of the middle class due to artificial intelligence. do you see that happening? guest: what is powerful about ai -- i've mentioned how we already use these tools. i think senator schumer is correct to point out the technology's transformative potential, including for the workforce. in many ways, the early concerns about automation in factories and manufacturing were about direct impacts on blue-collar workers. these language models are aimed more at content creators, so it is more white-collar workers who will have the opportunity to transform their jobs with this technology. i say that with an optimistic bent, because i think these tools are powerful in enabling individuals to create sophisticated, complex image, video and text content. that means that many of us can express ourselves in ways maybe we have not been able to before, in art. and there is the chance to streamline the most boring parts of our work and become more productive. there was a study out of stanford about call-center workers who were using ai-assisted chatbots to help them answer customer questions. the research showed they were 14% more productive, customers liked them more, and the biggest benefits went to the people who were the newest and least experienced on the job -- the younger, less experienced people in the workforce. there are opportunities for gains across the board for these technologies to help us all get better at our jobs. host: when it comes to your job -- former chief technologist at the federal trade commission from 2017 to 2018 -- what does a chief
technologist do, and how has that role changed in five years due to ai? guest: it is primarily an advisor to the chair, and it weighs in on all of the areas in which the federal trade commission, which has both a competition mission and a consumer protection mission, might touch on technology. i think that job has probably not changed that much due to ai, but the emphasis has changed on the types of cases that the ftc is bringing. the ftc is primarily a law enforcement agency. the more recent administration has shifted to more of a rulemaking mode, in which it would be a regulator over certain industries. that means the chief technologist has a harder job in some ways and has to try to predict the future of technologies, so that the rules that are put out don't hinder them, and instead maximize the benefits and minimize the harms. if they say there is a law violation, let's talk about the intricacies of how the law was violated and bringing the case -- so maybe that role has changed a little. host: neil chilson, what is the center for growth and opportunity? guest: it is focused on igniting economic growth and abundance and thereby enabling all of us to have more opportunities to live better lives. we focus on immigration, energy and, where i focus, technological innovation. we work with students, we have an international network of researchers, and we try to communicate the transformative solutions we believe will help spark economic growth to
decision-makers. we are funded by a wide range of individuals, organizations and foundations, but the work of our researchers is not directed by our funders. host: the center for growth and opportunity. time to check in with a few callers. a democrat from virginia, good morning. caller: good morning. i think regulations are just fine, but regulating the dark world is not going to work. i equate ai with the invention of the atomic bomb. the ability to destroy humankind is evident. i don't know how it is going to be regulated -- look at the internet. in the 90's, the internet was a wonderful thing, and then the dark forces started working their way in, and they took over in many ways. i'm not sure -- how do you see regulation going? what kinds of regulations are going to keep a cap on this? and thank you for taking my question. host: answer the question. guest: i have heard the equivalence between the atom bomb and ai before, and one way they are similar is they have big potential international implications. we can talk about the race between china and the u.s. and who is going to win the ai race. they are different in other ways. ai is a general-purpose technology. we all use this technology all the time. every time you use talk-to-text on your phone, it is using algorithms that fall under many definitions of ai. this technology is here; it is being used widely. john is talking about existential risk, and in many ways there is no clear path from the technologies that we have right now to the type of artificial general intelligence that people are afraid poses an existential risk to humans. even the experts will say that. so i don't think regulation should be focused on that risk. we should have people thinking about it, just like we have people thinking about what we would do if there was an asteroid collision. but i don't think it ranks as a global priority. as humans, we have many global priorities that probably are more immediate. as you mentioned, we do have atom bombs, and nuclear destruction is a potential catastrophe for humankind. we have viruses and pandemics
and things like climate change. these are much more immediate threats that probably are a higher policy priority than the risk of artificial general intelligence. host: you talk about the ai race. how does a country win? guest: maybe i embraced that framing too quickly. technology is not a race. it produces benefits, and when they become widespread, they can produce them for all of humankind. when i talk about a race between the u.s. and china, i'm talking about the different visions that a u.s.-led effort on ai would have from a chinese-led effort on ai. the u.s., in software technology like ai, where we have quite a lead (all of these companies are in the u.s.), is focused on bottom-up delivery of solutions to customers and users. in china, everything has the predicate: how is it a threat to the state and the ruling party? with that, their technology will be aimed at supporting the party first rather than trying to deliver solutions. those competing visions for ai are what i mean when i talk about a race. host: this is jc, on the line for independents. caller: i have a question about artificial intelligence, ai, and how it impacts society, humanity. when we have your ones and zeros being controlled by somebody to shift them onto the wrong path, how
does that benefit society, rather than harming those creations? we need to guide it so it does not destroy the earth and the galaxies, the universe. host: a philosophical question. guest: i think computer ai is a tool, and i think tools are as useful and as harmful as the people behind them. when people choose how to use tools, it is important that we think about the moral implications of using them. i do think that ai has a great potential to bring benefit and healing, both of the physical kind but also psychological, maybe even spiritual. but it is a tool, and we want to make sure that we encourage the uses that are beneficial while mitigating the ones that are harmful. host: is it a tool that can be more creative than a human in whatever industry -- writing books, for example. we are seeing ai-generated books. do you think ai is going to eventually be able to be more creative and come up with better stories than a human writer? guest: i think people will always value something created by humans. that is the history of technology. we see that when people buy handmade furniture, handcrafted widgets from etsy. i think people will always value human creation. these are powerful tools, and in the hands of creators, they can make average, normal people able to express their thoughts in extraordinary ways. i think there is a huge potential for an explosion of creativity in this space. will an ai-generated novel ever
win the pulitzer? i don't know. it's possible. you can think of these large language models as generating the average of the internet on whatever prompt you give them. if i ask one to write something in the style of hemingway, it is trying to generate something that looks like the average understanding of hemingway. it is more derivative than it is creative, but it is a good sounding board for people who are trying to be creative -- to ask, what would it look like to talk about that in a poem? you can get a reaction, a first draft, and that can spark creativity in humans. host: people on twitter with optimism on the human side: a real writer has more imagination. let's get over the notion that it will take over; it is a writing tool and nothing more. let's get to your calls. for republicans, (202) 748-8001. democrats, (202) 748-8000. independents, (202) 748-8002. neil chilson is with us for the next 15 or 20 minutes. in colorado, good morning. caller: good morning. a quick comment for the audience and mr. chilson. i believe at its current state ai is more of an aggregator. it might be able to give off a false sense of creativity, given some of the more popular platforms were populated by mining reddit. as far as it being the average of the internet opinion, that
rings true from my understanding. i have a question. given the revelations from the twitter files, and how private organizations working as agents for the u.s. government can do things the government is forbidden from doing, is there that much of a gap between the chinese pursuit of ai and the american pursuit of ai, given what we now understand from those revelations? guest: complicated question. the twitter files debate is ongoing, and the role that government has in shaping large platforms is one that i think about a lot. i think we do have constitutional and statutory protections for u.s. citizens being able to express themselves, whether as individuals or if they run a large platform. i think when we have large platforms collaborating with government officials, it needs close scrutiny. i still think that is quite different from the chinese system, where the technology itself is getting a yes-or-no restriction on whether or not it can even be used or developed in the first place, and it has to be calibrated so that it does not offend the communist party in china. that is very different from the u.s., both structurally and in effect. host: you were talking about chatgpt being the average of the internet, and the caller was talking about the early usage of reddit to get this content.
who decides the content of the intelligence, and where does it come from? who says, you can use this to create your chatgpt answer, but don't use this? guest: the way these large language models are created is by gathering a bunch of information, and there are certain repositories of information that people have gathered already. they are made up primarily of publicly available content on the internet, but there are supplemental pieces of content that are put in there. that is run through an algorithm that trains the model. what that means is that at the end there is a model, which is just a bunch of numbers. none of the content is there. the decision is made on the front end about what all of the content will be that goes in there. once a model is trained, the content is sort of distilled down to its essence and the relationships within it. that is what the model has going forward.
at that point, when you type a prompt into chatgpt, it asks the model to predict what text would likely follow. it is like a fancy version of the autocomplete that's on your phone. that response goes through screens the company might have in place, looking for mistakes it might have made or other types of problems it doesn't want to release. then the answer, the result of the prompt, is given to the user. there are points at which you can shape what the result will be. but to the questioner's point, the decision is made at the front end, when you're training the model, about what content goes into it. host: if you are focused on chatgpt, would you be more comfortable with some sort of heavy hand of regulation on some other form of ai? guest: we tend to focus on chatgpt and these models, but these same tools are producing amazing science. we have something called alphafold, which predicts how proteins fold, helping us build models of how various biological functions of our bodies work. in about 18 months it was able to produce some 200 million of these predictions, covering almost all the proteins known to science, where the previous 60 years of the manual approach had covered only a fraction. that has huge implications for drug development and for things like plastic cleanup in the ocean.
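The guest's picture of a language model -- trained up front on a body of text, then asked to predict what text would likely follow, like "a fancy version of autocomplete" -- can be sketched in miniature. This is an illustration only: a toy bigram word counter, not a neural network, and the corpus and function names are invented for the example.

```python
# A toy "fancy autocomplete": "train" by counting which word follows which
# in a corpus, then complete a prompt by repeatedly picking the most
# probable next word. Real large language models use neural networks over
# tokens, but the predict-what-comes-next loop is the same in spirit.
from collections import Counter, defaultdict

def train(corpus_words):
    """'Training': for each word, count the words observed to follow it."""
    model = defaultdict(Counter)
    for prev, nxt in zip(corpus_words, corpus_words[1:]):
        model[prev][nxt] += 1
    return model

def complete(model, prompt_word, length=4):
    """Greedily extend a one-word prompt with the most probable next words."""
    out = [prompt_word]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # the model never saw this word; nothing to predict
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the model predicts the next word and the next word after that".split()
model = train(corpus)
print(complete(model, "the", 2))  # → the next word
```

Note that, as the guest says, none of the original text is stored: after training, the "model" is only counts (numbers), and everything about the output was decided by what went into the corpus on the front end.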
to the first part of your question, we tend to focus on chatgpt and large language models, but there is a lot of potential for ai in a wide number of spaces, with human benefits. do those areas have more risk? certainly, if you are developing drugs, we need to protect the safety of those; we have the fda's drug approval process. ai does not really change that particular process. the safety concerns are adequately addressed by the fda. the question is how you can maximize the use of ai to produce the drugs. host: you are on with neil chilson. an independent. caller: thank you for taking my call. since computer knowledge has been expanding -- i think of the roof of the sistine chapel, where we see the figure of god with his hand out to man, and he is in participation with man. and here is his hand, and man doesn't give a damn and does not consider god trying to help him out. man has evolved over the last thousands of years, the last hundred years especially, and it seems like things are getting worse instead of better. we've got a thing called television, telephones, handheld computers that isolate our children, in my opinion, because they have got their nose in that all of the time. they don't have any human communication with other people. it seems like the computer phone has basically isolated the next generation. host: can ai make us feel less isolated from other humans? guest: that is a great question. the concern around the use of electronic devices, computers or televisions, is recurring throughout history. each time there is a new technology, people point out the downside and worry about it. then they acclimate to the technology, and maybe find new ways to use it. in my own personal example, my chat groups are the easiest way for me to stay in touch with my family, including my 71-year-old grandmother. i can pick up my phone and text her, and that is easy. while i'm just running around, i
can send her pictures to a device she has in her house. we need to think about how we use these technologies, including ai. ai has the potential to appeal to people -- there are people who like that mode of discovery, explanation, exploration, and they will benefit. but we should keep an eye on how people use it, to supplement human connection rather than replace it. host: a good question: why can't ai prevent hacking or phishing or other crimes? guest: it actually does. the biggest example, the one i'm sure all of your listeners and viewers have benefited from, is spam filtering. it is a perfect application for ai.
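Spam filtering as described here is essentially pattern learning over examples: absorb labeled data, find the patterns, score new messages. A minimal sketch, with invented training data (real filters use far richer features and models than this word-count approach):

```python
# A toy spam filter: count how often each word appears in known spam vs.
# known legitimate ("ham") messages, then score a new message by which
# set its words resemble more (a simplified naive-Bayes-style log score).
from collections import Counter
import math

def train(messages, labels):
    """Count word occurrences separately for spam and ham examples."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in zip(messages, labels):
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Sum log-odds per word; positive total means 'looks more like spam'."""
    score = 0.0
    for word in text.lower().split():
        spam = counts["spam"][word] + 1  # add-one smoothing for unseen words
        ham = counts["ham"][word] + 1
        score += math.log(spam / ham)
    return "spam" if score > 0 else "ham"

train_msgs = ["win free money now", "free prize claim now",
              "meeting agenda for monday", "lunch with the team"]
train_labels = ["spam", "spam", "ham", "ham"]
counts = train(train_msgs, train_labels)
print(classify(counts, "claim your free money"))  # → spam
```

The point the guest makes next follows directly: the system is only as good as the patterns in the data it has absorbed, so creative attackers who write messages unlike past spam can still slip through.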
because we are able to apply this kind of learning, we can identify spam. these types of technologies are very good at absorbing a giant amount of data and identifying patterns in it. the types of cybersecurity attacks that might be difficult to catch if a human was manually monitoring -- these are things ai systems can and do help cybersecurity experts identify. can they stop all of it? no. bad actors are creative as well; they have access to tools and they try new things. but ai is one tool that the good guys can use to deal with some of these problems. host: you said you are an
optimist on ai. do you think the overall narrative is more pessimistic? we started with this headline from yesterday about human extinction and ai. you heard the comments from majority leader schumer that we played. overall, who is trying to change the narrative on artificial intelligence? guest: historically, humans have always been worried by something that looks human but is not. it goes back to frankenstein and much further. when people personify these technologies -- and frankly it is so easy to do with chatgpt, it sounds very convincing -- and we don't understand the underlying technologies, it is easy to fear there is a replacement for humans, the very thing that people have long dreaded. right now the technology is not like that. i think the people who are -- host: or the people who are programming them have motives. guest: it is a concern, but it is less of a concern with ai than with other types of technology. the way these large language models are trained is on a mass amount of data. the amount of direct control the designers have over the results is not as mechanical as it would be if they were blocking keywords on social media or something like that. so i do think that is a concern, and we should be aware of what the designers are intending. the ntia -- the national telecommunications and information administration, which handles technology issues within the government -- recently asked for comments on how we can make ai accountable, and how we can make sure that when people design ais, they have the common good in mind. one thing that was left out was the primary way, across all industries, that we make sure that people who produce products and services are accountable: market processes. if you're producing something people don't want, don't find useful, or find offensive, that is a bad business plan. so i think we need to think at least about that particular method of accountability, and about where there are gaps where we might need regulatory solutions.
host: paul in indianapolis, republican, good morning. caller: good morning. what we have to remember is that we have all been using what we call ai for a long time, and the original versions were just if-then statements, a primitive version of artificial intelligence. we mislead ourselves when we call it intelligence. it is simply machines executing binary instructions at a high rate. now the computers are so powerful that they execute those at a higher rate than we can grasp. we were just talking about the ill will of the programmers. i am not so much worried about the ill will of the programmers as the mistakes of the programmers -- the instructions that they left out.
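The caller's two points -- that early "ai" was just if-then statements, and that the real worry is the instructions a programmer left out -- can be shown with a deliberately primitive rule-based sketch (the thermostat rules are invented for illustration):

```python
# A primitive rule-based "ai": explicit if-then conditions written by a
# programmer. Its behavior is only as complete as the rules the programmer
# remembered to write.
def thermostat(temp_f):
    # Note the gap: no rule covers an impossible sensor reading like -999,
    # so it silently falls into "heat on" -- an unintended consequence of
    # a left-out instruction, not a design choice.
    if temp_f < 60:
        return "heat on"
    elif temp_f > 75:
        return "cooling on"
    else:
        return "off"

print(thermostat(68))    # → off
print(thermostat(-999))  # → heat on
```

Nothing here is "intelligent" in the caller's sense: the machine executes the conditions exactly as written, including the cases nobody thought about.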
something they did not intend that will have unintended consequences. as far as the intelligence, i want to bring up the work of lawrence joseph stockmeyer, who did a lot of work on complexity theory. he once said, as part of a joke, that to have a computer that could discuss the novel jane eyre, it would take a computer the size of the known universe. so i think when we call it artificial intelligence we mislead ourselves, and we have to be careful about that, about unintended mistakes. thank you. host: neil chilson. guest: you're pushing a lot of the buttons on the issues that we care about. i wrote a book called getting out of control that is about applying complexity theory to public policy and through a personal lens. i think you are right. when we think about the interaction between computers and complex systems, and some of the fears that computers are going to be as intelligent as a human -- we don't have good definitions of intelligence. philosophers fight over this a lot. there is some knowledge processing happening in computers that can resemble the ways that humans process information, but it is not the same. and especially, it is not the same as the complex efforts that we as individuals contribute to but don't control across society. when you think about the complexity of all of the things we as a society produce, and then think about how we could cram all of that knowledge and execution ability into a box, i'm less afraid of that. i think you are right that when we compare the intelligence a computer has to the complexity of the world, there is still a giant gap between those things.
host: getting out of control: emergent leadership in a complex world, the book, out in 2021. this is micah in nevada, republican. good morning. caller: good morning, thanks for taking my call. i called in on one of the lines -- i lean left of center, but it does not matter; we are all americans. it is interesting what you're talking about, but like someone said a few callers ago from indiana, i think we've got to be careful about humans being separated and not speaking to each other. as that happens, people become statistics. it gets orwellian, and easy to say the poorest people are going to be affected, this and that. if the interests of people are going to have opinions -- it is going to affect millions in america, and that is politically motivated. the people who control the intelligence, and the artificial intelligence -- that is where the problem is. i would ask, would you do anything you are doing without the security of housing? take away all of your income and imagine yourself as one of the poorest people. would you be anywhere but at home, trying to help yourself? i think government is basically corrupting. i don't want to say ignorance is bliss, but there are some ways that humans -- you drive on the freeway and people are driving like they want to kill you; i drive for a living. please address some of this. host: we will let our guest take it up.
guest: at the center for growth and opportunity, our goal is to have economic abundance that raises all people and gives everybody the opportunity to maximize their contribution to this world and live a good and fulfilling life. i really sympathize with your objectives. i think that over history, technology has been one of those ways to raise living standards. the last 200 years have been unprecedented in the amount of human growth that we have had, in population but also in prosperity and in the access that people have to information and resources. that does not mean we are done. we have plenty more to do. ai is not a silver bullet for those complicated problems that you are talking about. the one where it probably has the most relevance is around automated driving, and certainly driving can be frustrating.
there is the potential that automated driving would make it less frustrating. that still seems a little ways off. it does seem like other areas are taking off and are simpler to solve. but it shows that even where ai helps with the hardest problems, we will have to continue to deal with the barriers that are holding people back from reaching their full potential. host: to alabama, this is dixie, independent. caller: good morning. frontline, if you are familiar with that show on pbs, did a series about five years ago on ai. all of the scientists there at the end called for regulation, because there is going to be big job displacement. it was also a concern at the current senate hearings.
but you don't seem to be concerned about that. guest: i do think the economic effects on employment are important for us to look at. there are a bunch of ways we can deal with that. the question is, do we slow down tools that can make workers more productive, or do we find ways to get those tools into people's hands, along with the training that they need to move up in the value chain and be able to produce more and have more fulfilling work -- to get rid of the drudge work that maybe they have been enduring in the past? one of the big potential areas for ai is in education. when you think about what education is, in many cases teaching people the average knowledge around a subject is kind of the goal. if you can have an ai-based tutor responding to your questions on different topics, that can help close the education gap in many ways. khan academy is doing this with online courses now. you can ask about the specific math problem you're working on, and it will coach you through getting to the answer. it won't give you the answer, but it will coach you through each of the steps you need to take. there are lots of applications for job training and retraining. that is something where i think there could be a helpful role for government. we should look at how we can deploy these tools in order to make it possible for people to be more productive and happier in their work. host: this is doug, staten island, republican. you are on with neil chilson. caller: good morning, how are you doing? host: doing well, what is your question or comment?
we are running out of time. caller: last night we were kicking this around, and i think this whole ai chatgpt thing is being blown out of proportion so that venture capitalists can make a lot of money. i don't think it is going to be replacing -- is your opinion that this whole thing is the next graft to get a bunch of money built up, but it is not going to change our lives? what do you think about this? host: do you want to take up the topic of whether this is or is not the next big thing? guest: every technological trend has people who have not thought through what they're trying to build, or maybe they are trying to take advantage of people. but this technology has proven to have substantial uses that are profitable but also beneficial. i've mentioned the protein prediction. there have been lots of technologies that we use every day that were powerful and that produced benefits. i do think these new technologies have created a talking point, because they are so conversational, they can feel like a personality, and i think they have surprised a lot of people with their capability. but i do think if we step back and look 5 or 10 years down the road, we'll see this was the start of a new general-purpose technology that is going to get deployed throughout our economy. and investment is part of making that happen, but it is only one part of making it happen. host: thank you for the answer