tv Washington Journal Neil Chilson CSPAN November 6, 2023 3:14pm-3:31pm EST
3:14 pm
1950's, it explores the african-american community, race relations, gender roles and female poem putterment. the novel is considered a harlem renaissance classic and has had an impact on american literalture and feminist literature. we'll be joined by the author of "zora neale hurston: southern life." watch tonight at 9:00 p.m. eastern, c-span, on c-span now, or online at c-span.org. be sure to scan the q.r. code to listen to our companion podcast where you can learn more about the authors of the books featured. c-span is your unfiltered view of government. wie funded by these television companies and more including comcast. >> you think this is just a community center? no. it's way more than.
3:15 pm
that >> comcast is partnering with 1,000 community centers to create wi-fi enabled centers so students from low-income families can get the tools they need to be ready for anything. >> comcast supports c-span as a public service along with these other television providers giving you a front row seat to democracy. to talk about artificial intelligence and efforts to oversee it is neil chilson, with the center for growth and opportunity, a senior research fellow. he served as the former chief technologist for the federal trade commission. a little bit south of center, the position it takes when it comes to artificial intelligence. guest: the center for growth and opportunity is a nonprofit research organization. we focus on three front topic areas. emigration, energy and technology and innovation.
3:16 pm
all of these areas we work in with the goal of unlocking, igniting the abundance, prosperity that can help everybody reach their full potential. when it comes to technology, we are optimistic about the future. we understand there are always challenges as new technologies get integrated into society. overall, we want policies that enable innovation to happen and a cultural environment that embraces innovation, rather than fears it. host: how is the organization funded? guest: the organization is funded by a wide range of individuals, foundations and other types of organizations. we work with a wide network of international network of scholars, but the work is the scholars own. it is not cg owes work. host: i want to ask you about the federal trade commission, chief technologist. what does that mean? guest: at the federal trade commission, the chief technologist served a visiting
3:17 pm
professor. elevating a certain set of issues. often, it was a professor visiting from a university. elevating a certain set of issues the ftc might be concerned about. i worked on cryptocurrency issues. i worked on the economics of privacy. it is a focal point for the agency around the pacific -- the specific set of policy issues. different chief technologist have done things differently. it is not about running the technology of the agency. it with -- it is an outward facing role, thinking about the role of technology in society. host: during that time, was ai on the horizon as far as ways to think about it? guest: absolutely. one of the things that is most interesting about this debate is that the ftc has a different -- has at different times called this set of issues different things.
3:18 pm
there was a big data research we did for a while, there is a lot of privacy work around this use of information in computing. one of the difficulties is that ai blends easily into general computing. it can be hard to draw a line between what is computers and what is ai. that is one of the big challenges for this executive order that came out and regulatory approaches. host: there are components to the order -- first and foremost, including two the biden administration, would require computing -- would require companies to notify -- that is broad. what is he getting at in that broadness? what are they focusing on? guest: that particular requirement is one of the few mandates within the law -- sorry, within the executive order. it is aiming at thethese new las or other types of neural
3:19 pm
networks. it applies only to maybe a few companies or models, but is really focused on these biggest models out of the concern that somehow, these are different than the smaller models. i do not know how much that is true. they are better in capacity and capability in some ways, but that is the theory behind having them report and check in with the government, notifying them they are doing this type of work. host: can you give me an example about those bigger models, at least for people to get their hands around it? guest: so, the models that we are talking about are trained on massive amounts of data. so, that is the size the government is measuring. the end result of how big the model is, how much memory does it take up, how many pieces of processing doesn't represent?
3:20 pm
-- does it represent? these are the type chatgpt is. people have used chatgpt. that is one example of one of these foundational models that a company named openai developed. there is a lot of open foundation models used not just for chatbot's, but might be used for image generation or other applications. what these models are good at is recognizing patterns in massive amounts of data. given a prompt, a question, giving an answer that reflects that pattern of data that is analyzed. host: is the technology itself, its use, can it be innocuous? can it be dangerous? how does the federal government pursue those things? guest: it is a difficult problem. these are computing tools. often when people say ai, you can replace computer there and
3:21 pm
it would more mean the same thing. we are talking big computers, obviously. i think the harms that might come from them are largely driven by the uses people put them to. like any powerful tool, i think we should look to, how are we going to ensure that people will use them responsibly without killing all the many possible, powerful benefits that could come from them? host: our guest with us to talk about this topic and ask him questions, (202) 748-8000 for those in the eastern time zones. (202) 748-8001 in the mountain and pacific time zone. if you want to text us, (202) 748-8003. a couple more things from the executive order, something called the national institute of standards and technology would set red team testing standards. what is that? guest: nest is an independent
3:22 pm
organization inside the department of commerce. it sets different standards on technology for the country. nest has done a lot of work in ai. they have framework for assessing ai and computational risk. this executive order charges them with coming up with a set of standards with how companies should test their models. red teaming is this idea have a group of people who are trying to break the model. that is what red teaming is. it is not the only tool that can ensure nai model is useful -- an ai model is useful and safe. the executive order charges setting rules for this larger model. host: it is hard to boil this down in a couple of things. one of the elements of the executive order, the homeland security establishing an ai
3:23 pm
safety and security board to advise the government. guest: there is a lot of boards set up in the exec of order. the executive order is 100 pages long. it calls for dozens of agencies to do things. a lot of reports are setting up structures. there are different organizations that this executive order would set up within the government to advise other governmental bodies about how to use ai. it is a whole of government, kitchen sink approach to this issue in a way that i do not think is unprecedented in the history of u.s. technology regulations. host: you think it is the right approach? guest: i do not think it is the right approach for a different -- for a bunch of different reasons. when you contrast this approach of the biden administration to the clinton approach to the internet, the clinton administration on the early internet, they came out with a set of principles in 1997 that
3:24 pm
said, we are going to let private industry lead. where government does intervene, it is going to be minimal and it is going to be focused on addressing harms. here, this is airy much a government first approach instead of a innovator first approach. i think that has the risk of stifling a lot of benefits we could get from artificial intelligence. host: if i am an innovator, i see these recommendations from the federal government. what is the typical response of someone specifically in the ai world, do you think? guest: there are going to be dozens of government actions taken here, so dozens of reports. innovators are going to have a tough time keeping up with the activity the federal government is doing in this space. it will depend if you are working in health care or drug discovery, for working in transportation. the executive order has charges
3:25 pm
to all these different agencies that regulate these areas to do something about ai. i think the benefit is going to be sitting here waiting, what is the government going to be doing here? host: are they going to open their technology, particularly the results of the tests calls for? guest: there are calls to share -- has -- as you pointed out, there is calls to share results with the government. i think there are charges within the executive order about how government itself should use ai. both could be useful. i do not know whether there will be mandates that companies share their information with other parties, except sharing it with the government and hitting it to other parties that way. host: let's hear from a viewer. this is eva in california, you are on. caller: i have a problem. i have a watch for three years,
3:26 pm
and it is not working. thank god i have extension warranty. i call and the worker who came said re-update the program and the problem was computed. it is working now. we laugh. i said, is that artificial intelligence? well, that is going to leave the consumers when we have problems like that, which we do not know. guest: it is interesting. there are so many applications. i use chatgpt with my dad to troubleshoot his lawnmower two weeks ago. that was interesting. i think there are lots of applications in this space that will be direct to consumer. hopefully, another application might be in the consumer space that -- the dishwasher can point
3:27 pm
out what the problem is itself without you having to call somebody. host: it is ultimately going with information and how it is presented. one of the things i would think would be a concern is how much you trust the information. what are concerns there as far as the truthfulness of information being presented? guest: it is a big challenge. these models are trained on publicly available data, and maybe other sublimated -- supplemental data sets. they all use the internet. there is misinformation and direct lies in that. so, the ai companies are trying really hard to figure out how they can make the results reliable and useful to consumers. i think that will ultimately be the test. do people find these things trustworthy? are they useful in day to day life, the business applications we need?
3:28 pm
companies have a strong incentive to deliver that level of quality. but, it is a difficult, technical challenge. host: i want to play a bit of the vice president. she talked about the biden administration's approach and philosophy when it comes to this type of oversight. i want to get your response to it. [video clip] >> i believe history will show that this was the moment when we had the opportunity to lay the groundwork the future of ai. the urgency of this moment must then compel us to create a collective vision of what this future must be. a future where ai is used in advanced human rights and human dignity. where privacy is protected, and people have equal access to opportunity. where we make our democracies
3:29 pm
stronger and our world safer. a future where ai is used to advance the public interest. that is the future president joe biden and i are building. host: the overarching philosophy, what do you think of it going in as far as the technology? guest: it is hard to disagree with anything in their. we do want technology that advances human prosperity, but also protects individual rights and liberty, that supports democracy. the question is, how do we get there? the executive order takes a specific path, which is we are going to take every government agency charged with how to incorporate this technology into american lives and companies. that is very different than the approach we have taken to software thus far, which has
3:30 pm
very much been a bottom up. let's figure out how these technologies -- they are hard to predict how they are applied over time -- let's give people the tools, figure out how to use them and deal with problems as they come up. i do not disagree with the vision of what we want. the question is, how do we get there? host: mary from california, hello. caller: hi, i am hoping you can give us some examples and history of where the international community has come together to regulate a new technology. guest: that is a great question. mary, probably the most likely ones are around nuclear weapons and some agreements around their. there are great differences between nuclear weapons and ai. nuclear weapons have one purpose, they are used to destroy.
49 Views
IN COLLECTIONS
CSPANUploaded by TV Archive on
![](http://athena.archive.org/0.gif?kind=track_js&track_js_case=control&cache_bust=574803383)