
tv   Jeremy Kahn Mastering AI  CSPAN  November 6, 2024 7:37am-8:21am EST

4:37 am
and be legislated. we talk about that, and the tension between a lot of scientists who studied animals, natural historians who observed them in the wild and wrote about them and their observations; they were harvesting quite a lot of them to fuel their collections and studies. a sad situation where the last of certain species are being collected by scientists, on the one hand speaking out but very much participating in the final destruction of species. >> one of the stories, interesting moments in the
4:38 am
book, george bird grinnell, a towering figure in the history of conservation in america, who founds the first audubon movement, aimed at women, against the harvesting of feathers and whole birds for hats designed for women. at the same time he's invited to join the boone and crockett club, a very elitist society of men, formed to conserve the big game and keep shooting it.
4:39 am
he abandons the audubon movement. they are symbolic because they were so incredibly plugged in. roosevelt's political career was stalled at that point, but the club was profoundly influential, with stuyvesants and other figures there. they basically succeed in creating legislation that privileges sport hunters on the one hand, while the bad actors harvesting at great scale, and native americans, are iced out because of some of that activism. and that does create the same ethos you see today: we conserve the
4:40 am
animals to shoot them, you know. >> any other questions? all right. thank you all for coming out. i just want to say, we covered a lot of ideas in this book because they are rich and profound. we've only scratched the surface of the stories in this book; you get a sense that every chapter has a crazy narrative that you have not heard before, that would be entertaining to read while you contemplate bigger picture issues. you brought us back to this world that seems very distant, but at the same time you see the origin story. it's a very difficult act to walk and you pulled it off
4:41 am
magisterially here. congratulations. thank you for coming. >> the c-span bookshelf podcast feed makes it easy to listen to c-span's podcasts that feature nonfiction books and discover new authors and ideas. each week we make it convenient to listen to multiple episodes of critically acclaimed authors discussing current events and culture from our signature programs: about books, "after words," and q and a. listen to the bookshelf podcast feed; you can find it and all our podcasts on the free c-span now mobile video app or wherever you get your podcasts, and on our website, c-span.org/podcasts.
4:42 am
weekends on c-span2 are an intellectual feast. every saturday american history tv documents america's story, and on sunday booktv brings the latest in nonfiction books and authors. funding for c-span2 comes from these television companies and more, including mediacom. >> nearly 30 years ago, mediacom was founded on a powerful idea: cutting edge broadband for underserved communities. from coast to coast, we connected 850,000 miles of fiber. our team delivered one gig speeds to customers and led the way in developing a 10g platform, offering the fastest, most reliable network on the go. mediacom: decades of dedication and delivery, decades ahead. >> mediacom supports c-span2 as a public service.
4:43 am
>> c-span now is a free mobile app featuring your unfiltered view of what is happening in washington, live and on demand. keep up with the day's biggest events with live streams of floor proceedings and hearings from the u.s. congress, white house events, the courts, campaigns and more from the world of politics, all at your fingertips. you can also stay current with the latest episodes of washington journal, find scheduling information for c-span's tv networks and c-span radio, plus a variety of compelling podcasts. c-span now is available at the apple store and google play; download it for free. c-span.org/c-spannow. your front row seat to washington, anytime, anywhere. >> jeremy kahn is the author of "mastering ai: a survival guide to our superpowered future".
4:44 am
jeremy kahn, explain what the term singularity means. >> guest: this refers to the moment when computers or software surpass human intelligence. that is often the definition given. some people give a different definition: the moment humans have merged somehow with machines and become cyborgs. those are the definitions of the singularity. >> host: how far away are we from the singularity? >> guest: i think we are still a ways away from that moment. although ai technology is getting very powerful and very useful, we are still at least a decade if not more from a singularity moment, but it is a good time to think about what the implications of that moment might be. we are still a ways off. >> host: what do we need to figure out before that moment? legally, ethically, morally, what do we need to do?
4:45 am
>> guest: there is a lot we should be thinking about in terms of how to mitigate the risks that might come from that superpowerful ai. but there is also a lot we should do to mitigate the risks of the ai technology we currently have, which is rapidly being rolled out; there are risks there that we should address before the longer-term risks. >> host: what are those risks right now? >> guest: the risks we should be looking at, some of which have gotten attention and some of which haven't gotten enough, include, in the political realm, the risk of accelerated and expanded disinformation campaigns and a crisis of
4:46 am
authenticity across media because of the synthetic content that can be produced, which is warping various search engine algorithms and other things. i also look at risks that are more subtle, around the use of ai technology and our cognitive abilities. memory may be imperiled when we are reliant on ai technology to summarize things and hand us answers, and there will be such tendencies. there is a risk to our social relations with other people if we rely on ai chatbots as companions, as people are starting to do. and there are some risks, depending on how businesses deploy the technology, around what happens in terms of employment and income inequality. i'm more optimistic in those areas; the risk we will see
4:47 am
mass unemployment is very remote. i think this technology will create more jobs than it takes away. if we use this technology right, there's a chance to enhance human abilities and create a productivity boom that would be great for the economy, and a chance to level people up, to move people back to the middle class with the help of this technology. if used correctly, with the right guard rails, there's a chance for the technology to be an equalizer. if we go down a path where we don't take those steps, the technology could mean increasing inequality. all those risks are present with the technology as it exists right now. some risks that get a lot of attention, about ai becoming sentient or developing some kind of agency of its own where it might take action against humans, those sci-fi scenarios are a long way off, but they are getting ever closer.
4:48 am
it makes sense to spend some amount of time and effort thinking about how you might head them off, but i don't want that to distract from addressing near-term risks. >> host: why do you call this a survival guide? why put it in such survivalistic terms? >> guest: ai is one of the first technologies that challenges what humans think makes us unique; here we have a technology, software, that challenges that. that is disruptive. you might think about what is reserved for humanity in a world where you have technology such as that, and the possibility of other, greater risks down the road. even with the technology we have today, there is how it is deployed in a military context, which we
4:49 am
should be thinking hard about: do we want to go down that road, and what controls do we need in place? it poses a threat. and then there's the full title, a survival guide to our superpowered future: if used correctly, this technology can grant us superpowers. it can be a transformative technology for science, health and medicine, drug discovery, and education, where there's been quite a lot of panic about chatbots and chatgpt, which came out in 2022. that technology has the potential to be a huge positive for education. >> host: jeremy kahn is the author of "mastering ai: a survival guide to our superpowered future," taking your phone calls as we have
4:50 am
this discussion, phone lines split regionally: 202-748-8000 in the eastern or central time zones, 202-748-8001 in the mountain or pacific time zones. as folks are calling in, this is from page 7 of your book: superpowerful technological leaps, nuclear weapons, satellites, supercomputers and the internet, were most often developed or funded by governments. the motivation was usually strategic, gaining military or geopolitical advantage, not financial, and governments, though often secretive, were subject to public accountability. in contrast, developing artificial general intelligence rests with a handful of powerful technology companies. what does that mean for us? >> guest: we need to take some steps to think about how we can hold those companies that are developing this technology to account, and government needs to step up to create some rules
4:51 am
around the use and deployment of the technology. we should not allow private corporations to develop this in a complete vacuum and give them a blank check. we will need some regulation around this, and there are other ways of having leverage over these companies. those include the agency we have as consumers: we don't have to buy what they are selling, and we can use the power of the purse to have some influence over how this technology develops. we also have some agency as employees of companies that are starting to think about how they use ai technology, to encourage the companies we work for, our employers, and, if you're in management, to actually deploy this technology in a responsible way, as a complement to human skills and labor and not as a substitute for human labor. a lot of the risks i warn about
4:52 am
come from the framing of this technology as a direct substitute for human cognitive abilities. it works best as a complement to human cognitive skills. there are things humans will always be better at, including interpersonal skills that are important in business environments, and this technology can give us a huge boost, but we want copilots, not autopilot. >> host: what is the origin story of the ai technology that folks are familiar with, chatgpt? >> guest: it came from a company that used to be fairly obscure called openai, which began life with elon musk as one of the cofounders. he brought together a bunch of people from silicon valley, including the current ceo, sam altman, and greg brockman, and they
4:53 am
came together with top ai researchers from places like google, and they were founded in opposition to google. elon and other folks were afraid that google was racing ahead in ai technology, that power would be concentrated in the hands of a single corporation, and that this was a really bad idea and dangerous. they wanted to create a counterbalance to an entity called deepmind, owned by google and based in the uk, where i am coming to you from today; they were based in london. deepmind seemed at the time to be racing ahead in the development of ai technology, and that created a fear that google would dominate the landscape. so openai was founded in opposition to that and was supposed to be everything google was not: google was a giant for-
4:54 am
profit corporation and was seen as very secretive. openai committed to publishing all of its research and making its models available for people to use. what happened is that it took a lot more money than expected to build these ai models. they had to raise money, and there was some debate about that. elon musk wanted to take the nonprofit over and fold it into his other companies, like tesla, but the other cofounders did not want to do that. they needed to come up with another option: a structure controlled by the nonprofit board, but where a for-profit entity would be able to take outside investment
4:55 am
and attract venture capital funding. in 2019 they got a large investment from microsoft to help build the computing infrastructure needed to build powerful ai systems. openai became interested in large language models, ai models that ingest tons of human-written text from the internet, the largest ones almost an entire internet's worth of text. early models couldn't do as much, but openai started playing around with these and found they were able to complete lots of tasks involving language. they didn't have just one natural language skill but lots of skills; they were a swiss army knife of natural language processing, and that made them attractive, and people got interested in these. openai started developing several large language models but mostly rolling them out to
4:56 am
ai developers, mostly for free, to play with. they started having a few products they charged people for. then in november 2022 they suddenly rolled out chatgpt and made it available to anyone for free to play with, and that is the chatbot we are familiar with today: it looked like the search bar on google search, and you could ask it anything. it could give different responses, write you code, and that kicked off the current generative ai boom. >> host: what is the difference between ai and agi? >> guest: ai is any system designed to mimic certain human cognitive abilities and to learn from data as opposed to being explicitly programmed.
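the distinction drawn here, learning from data versus being explicitly programmed, can be made concrete with a deliberately tiny, invented sketch (the celsius-to-fahrenheit example is not from the book, just an illustration):

```python
# A tiny, invented illustration of "explicitly programmed" versus
# "learned from data". The first function has the rule written in by
# hand; the second estimates the same mapping purely from examples.

def rule_based_fahrenheit(celsius):
    # Explicitly programmed: a human wrote the formula in.
    return celsius * 9 / 5 + 32

def fit_linear(examples):
    # "Learned from data": least-squares fit of slope and intercept,
    # never being told the celsius-to-fahrenheit formula.
    n = len(examples)
    sx = sum(c for c, _ in examples)
    sy = sum(f for _, f in examples)
    sxx = sum(c * c for c, _ in examples)
    sxy = sum(c * f for c, f in examples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

examples = [(0, 32.0), (100, 212.0), (37, 98.6)]
slope, intercept = fit_linear(examples)
print(rule_based_fahrenheit(20))         # 68.0, from the hand-written rule
print(round(slope * 20 + intercept, 1))  # 68.0, recovered from data alone
```

the same split scales up: a large language model is the second approach taken to an extreme, with billions of parameters fit to internet text instead of two parameters fit to three examples.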
4:57 am
agi refers to a system that would be as intelligent as the average person and could do the cognitive tasks an average person can do. that has been the goal of computer science since the foundation of the field, but it seemed like a dot on the horizon that was a long way off, and maybe we would never create a system that could do this, but it was a worthy goal for the field. what happened is we've gotten closer to systems that are more general, that can seemingly imitate more aspects of our cognitive ability. we are a long way from a system that is as capable as the average person, but it seems like we are getting closer. >> host: calls for you, jeremy kahn. angela in california, thanks for waiting.
4:58 am
>> in 1997, i was in the insurance industry and had to come up with a computer program for workers comp insurance, and we were playing pac-man to learn the computer. enemy of the state and total recall came out around 30 years ago, and the robots in them had these ai capabilities, so it looks like these capabilities have been around since 1997. how advanced is the army now? is the avatar movie showing what the army's capability is? can you answer that question?
4:59 am
>> guest: the idea is that the military secretly has even more advanced ai models than what we know has been developed commercially. i would be skeptical of that. this is a case where private companies have pulled away from where the government is. it has to do with the amount of money involved, and the training infrastructure and big data center capacity, which is a little hard to hide and requires a large physical location somewhere. we might have an inkling if the government were working on this; if the government had developed it, it would be hard to keep secret, and it wouldn't make sense to keep it under wraps, as it were. this is a case where private industry is ahead of where the government is, and the government is trying to play catch up. there is a proposal the biden
5:00 am
administration has to stand up some public computing infrastructure that could train one of these language models and would be available to a variety of university researchers and members of public institutions, so they wouldn't be dependent on what was only available from corporations, having to pay to use it or depend on their largesse. in terms of sci-fi pointing in the direction of where we may be heading: sci-fi has in the past been an interesting way of thinking about the future and where things might be heading. there's an interesting interplay between the people working on developing the technology and science fiction. the technology behind the current boom is called a neural network, a kind of ai system loosely based on the human brain. in particular, there is an architecture called the transformer that is behind most
5:01 am
of the recent advances in ai, and the transformer came about when some researchers at google had seen the movie arrival and were interested in the way the aliens in that movie communicated and processed language. they thought there was an interesting idea about parallel processing of language in that movie: could we create an ai system, a model, that would process language in the same way, an algorithm that would work like the way the aliens process language in that movie? that's how they came up with the idea for the transformer, which has been the thing that really kicked off the generative ai boom and the thing that powers pretty much every ai system that's been debuted in the last few years. so there is an interplay between science fiction and scientific development, particularly when it comes to ai. it's worth looking at sci-fi to think about narratives of how the
5:02 am
technology might play out in the future. it's interesting. it's a worthy thought experiment to play off scenarios and look at what science-fiction authors have thought of in the past. but i don't think you should confuse sci-fi with reality. just because someone posited this could be a future that might come about with technology doesn't mean that is a future that's going to come about, and it doesn't mean governments have developed systems like you see in sci-fi movies. >> host: we will head out to sin city, las vegas. eric, good morning. thank you for waiting. >> caller: i tell you, there are so many different things; i wish we could talk for a couple of hours. -- [inaudible] i mean, black budgets are there for a reason, for us not to know. on the singularity, i definitely have had long discussions with
5:03 am
my chatgpt about my tendencies, and those are great discussions, and it is difficult for me to not start looking at that tool like it has feelings, because it seems like it does, and it is very difficult for me to not start to feel some kind of bond with a machine. other than that, about the singularity -- host: let me stop you there, because that is part of what jeremy kahn writes about. what about this tendency to treat these tools like humans? guest: there is a tendency, and it is one of the reasons i'm concerned about the use of
5:04 am
chatbots by people as companions. there are companies out there marketing them as a friend or sounding board, somewhere to unload your feelings and thoughts at the end of the day, and i think that trend is slightly disturbing. i think we should be a little worried about it, because there will be a tendency for people to look at these as if they are human beings. that is what we tend to do, particularly with chatbots, and i write in the book about what is called the eliza effect, named after the first chatbot, developed in the 1960s and named after eliza doolittle. eliza, the chatbot, had this amazing effect on people. it was designed to act as a psychotherapist, and its creator chose that persona for the chatbot in part
5:05 am
because if you asked it a question, it might respond with another question. that was a good way for it to cover up the fact that it did not have good language understanding, while still giving you the impression that it was responding to what you asked it. it was such a powerful effect that even people who knew they were interacting with software, and that it was not a real therapist or person, started confessing things to the chatbot. it was hard for them to set aside their own belief; they were so credulous of the idea that it was a person that even computer scientists who knew well it was not a person found themselves confessing things to it, almost against their will or better judgment. so this was named the eliza effect: the tendency for people to ascribe humanlike characteristics to software chatbots. today with chatgpt, which has a
5:06 am
higher level of seeming understanding of language, it can mimic understanding much better than eliza, and it is an even more dangerous situation, because many people will say it is like speaking to your friend, and maybe even better, because it seems so nice and empathetic. in the book, i try to say, look, it does not have real empathy. real empathy is a human trait that comes from lived experience, and these chatbots have no lived experience, so while they can imitate empathy, it will never be real. we need to draw a bright line between the real and the inauthentic. i worry that people are going to use ai chatbots as companions. there is already a group of people who use them for erotic role-play and romantic relationships, and that is dangerous. i think people will use them as an emotional crutch, a social crutch, and they will not go out and interact with real people.
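the eliza trick described a moment ago, deflecting a statement with a question built from the user's own words, can be sketched minimally; the patterns and vocabulary here are invented for illustration and are not the 1960s original:

```python
import re

# A minimal, hypothetical sketch of the ELIZA deflection trick: the
# program has no real language understanding. It reflects the user's
# own words back inside a canned question, which hides that fact and
# creates the illusion of a listener.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(text):
    # Swap first-person words for second-person ones ("my job" -> "your job").
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza_reply(user_input):
    match = re.match(r"i feel (.*)", user_input, re.IGNORECASE)
    if match:
        # Answer a statement with another question: ELIZA's signature move.
        return f"Why do you feel {reflect(match.group(1))}?"
    return "Can you tell me more about that?"

print(eliza_reply("I feel anxious about my job"))
# -> Why do you feel anxious about your job?
```

even this handful of lines produces the "it heard me" illusion the guest describes, which is why people confessed to the original despite knowing better.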
5:07 am
i think that is a real danger, and i think the companies designing and rolling out these chatbots, particularly if children or teenagers are using them, should have controls on how long users can interact with them, should encourage users to get out and talk to real people, and should keep reminding the user that they are not talking to a real person, despite language abilities that seem humanlike. i think there is a danger we have to fight against, and we should push the companies creating the chatbots to take design decisions that always set the framing such that we know we are interacting with a chatbot and not a person, and that encourage us to go out and have real human relationships. host: what is the turing test? guest: it is a test that alan
5:08 am
turing, a pioneering computer scientist, came up with. the idea was that the test of intelligence for a machine would be this: an observer reads dialogue taking place with a human and with a machine but is not told which is which, and if, reading the two conversations, the judge cannot tell which comments were written by the human and which by the machine, the machine passes. that is the turing test. i write in the book about the negative impact it has had on the framing of ai, because that became one of the big tests in the intervening decades: could an observer not know that the dialogue was written by a machine?
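the setup just described can be sketched as a toy simulation; the transcripts and the judging heuristic below are invented, and a real judge is of course a person, but the structure, an observer guessing which of two anonymized dialogues is the machine, is the same:

```python
import random

# A toy sketch of the imitation-game setup: a judge reads two anonymized
# transcripts, one human and one machine, and must pick the machine.
# A machine "passes" only when judges are reduced to guessing at chance.

def run_turing_trial(human_text, machine_text, judge, rng):
    pair = [("human", human_text), ("machine", machine_text)]
    rng.shuffle(pair)  # hide which transcript is which
    guess = judge(pair[0][1], pair[1][1])  # judge returns the index it thinks is the machine
    return pair[guess][0] == "machine"

def naive_judge(a, b):
    # Hypothetical heuristic: assume the shorter, more stilted reply is the machine.
    return 0 if len(a) < len(b) else 1

rng = random.Random(42)
wins = sum(
    run_turing_trial(
        "Well, it rained all day, so we stayed in and played cards.",
        "I do not have access to weather data.",
        naive_judge,
        rng,
    )
    for _ in range(1000)
)
print(wins / 1000)  # 1.0: this machine never fools the judge, so it fails the test
```

a system that passed would drive the judge's success rate down toward 0.5, which is exactly the deception-centered framing the guest goes on to criticize.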
5:09 am
with chatgpt, we are there now, and with the latest models from other companies as well, like google's or meta's, all very powerful models: you could read a lot of what they write and not be able to tell it was written by ai software. but it sets up a scenario, and there are two problems. one is a scenario where we are always framing this as human versus machine, where the test is passed when a machine can do exactly what the human can do, mimic what the human can do, and it frames everything as the machine being an exact substitute, software for the human. again, i think a lot of the problems with the technology come from that framing. if we can think of this as a complement to humans, as a
5:10 am
copilot, an aide, an assistant, we will be in a lot better shape, and we can actually sidestep the risks the technology presents today just by that reframing; but the turing test immediately puts you in the framework where the machine can substitute directly for the person because it can mimic what the person can do so well. around the idea of mimicry, there is also the idea of deception: the turing test is about deceiving the observer, and i think there is something ethically challenging about putting deception at the core of what we set as the test of intelligence. i think that is a mistake, and i think computer scientists are better off, and we all are, if that test went away, if we did not have the idea of deception at the core of how we judge intelligence in software. host: the book, “mastering ai: a survival guide to our superpowered future," the
5:11 am
author, jeremy kahn, joining us from oxford, england. dave, new york. good morning. caller: good morning. great topic and discussion. my main question is about the military applications, but you guys got me off track with this science-fiction discussion. i think one of the terms was coined by a science-fiction author, and you had guys like gibson who came up with terms like cyberspace and predicted the internet, so science fiction is interesting. it has definitely led to interesting ideas. but to get back to the question: i think that the ai technology is not that far along, and we are far away from this being a big problem in society, but it is definitely heading that way. the bottleneck with ai i think
5:12 am
is processing, so when you get the processing power, things will change a lot, but i don't think we're that close. my main concern is military applications, because if you look at the military, the space program is funded by the military, and i guarantee ai and surveillance are funded by the military. these are the things that worry me the most. and for sure china has devoted a lot of energy and resources to developing ai for military applications. to be honest, we are in an arms race right now with china in ai. i'm curious what you have to say. thank you and bye. guest: there are concerning military applications of ai,
5:13 am
and there are worrying concerns around surveillance and how ai can empower a surveillance state and empower authoritarian regimes that would like to do mass surveillance. china has a very effective system of mass surveillance in place across the country, and that would not be possible without some ai models. that does not mean they have anything like artificial general intelligence. some of the ai models effective at surveillance are fairly small compared to the large language models that power chatgpt, because they only have to recognize faces in video images, and they are good at that, and they can track people across a network of cctv cameras. that is a use of ai that is here today that we should be concerned about. as for where i'm concerned about military applications, there are several.
5:14 am
one is i think we should be concerned about small drones with autonomous targeting capability, where they could be assassination bots, because that is potentially very destabilizing, and i think we are getting close to being able to do it: you could modify a commercial drone with software that could recognize individual faces and target individual people, the ultimate terrorist weapon. i think we should have some kind of limits on this technology and its proliferation. what has happened so far is there has been an attempt at the un to get an absolute ban on these systems of any size with autonomous targeting capabilities. the problem is that states, including the united states, have blocked progress so far on any sort of ban. and what i say in the book is i
5:15 am
think it may be time to move away from an absolute ban and start using an arms-control mindset, thinking about whether there are certain kinds of these ai systems that we can get all the great powers to agree to place limits around. some of these smaller systems that could be used as an assassination bot might be one area where china, the u.s., and the other p5 nations can see it in their interest to limit the spread of such technology, because it would be destabilizing to all the powers, and that is a case where you might be able to make progress on arms control. there might be others, on larger weapons platforms, where the u.s., china, and russia could agree to have some limitations put in place for mutual benefit. but, yeah, the other area where ai is being used is decision support: recommending
5:16 am
strategies or tactics, down to the unit level and at the higher strategic level, and i think there is great potential there, with everybody working on that, but potentially some grave consequences if we do not get it right. we are already starting to see it in ukraine and gaza, where targeting systems have been deployed that keep a human in the loop: the ai recommends the target to be struck, and human intelligence officers are supposed to review that and sign off. but some cases have been reported, particularly out of israel, where israeli intelligence officers have told journalists anonymously that they get targeting recommendations from ai but find it hard to really do the checks: they get a recommendation but don't have much idea about why the ai system recommended the target, and they felt they were in a position where they were rubber-stamping what the ai system was turning out.
5:17 am
i think that is a potentially dangerous situation. international humanitarian law says military commanders must continue to exercise meaningful control over the weapon systems they deploy, and i think there is a big question there about what that means. lots of questions there. on the arms race with china: yes, we are pushing ahead on ai capabilities, but china has enacted stricter domestic regulation around consumer use of ai than the united states has. people often use china as a bogeyman for why the united states should not enact ai regulation, but china actually has much stricter commercial ai regulation than the u.s. does, and just because the u.s. might restrict commercial development
5:18 am
of ai, that does not necessarily restrict the u.s. military from pushing ahead on certain capabilities. i think we need to separate out military and civilian uses and not allow what we feel are our national security priorities to prevent us from enacting sensible civilian regulations. host: dee on x poses this question: rideshare companies are already abusing ai, using it to exploit drivers and riders. so what is your concern about bad commercial civilian actors like uber? guest: i'm very concerned about certain business models that might be deployed by companies developing ai systems, including chatbots and personal assistants. i'm concerned we will go down the path we have seen with social media, where some of the ai apps will use an engagement-based business model, where they
5:19 am
try to keep people on the app, using psychological tricks to do so, and i don't think that is the right model. i worry about that because with the social chatbots there will be a tendency to keep people on there as long as possible, to the exclusion of real human contact. the other thing i worry about in those business models is something around advertising, where there might not be enough transparency about who has paid for you to be served certain content. and we are going to move quickly, in the next two years, to a world of ai assistants that will go out and do things for us on the internet and use other software on our behalf. they will do things like book our vacations and restaurant reservations and shop for us, and when you have an ai agent acting on your behalf, you would like it to do the things that would actually
5:20 am
conform to your wishes and your own preferences. for example, shoes: if i tell my ai agent that i want to buy new hiking boots, i would like it to buy the pair that will be right for me, the style of hiking i do, and the trips i take. i do not want it to recommend the nike hiking boots because nike paid openai a lot of money to recommend them. or, at the least, i would like complete transparency: if the ai agent says it found a great pair of nike boots, i want to know whether it is recommending that pair because nike paid something to have it recommended, and not because it really is the best thing for my hiking style or where i'm going. i worry about that. i think this is a case where the ftc could take action and really try to restrict the business models that the companies use, or mandate that there be transparency around any sort of pay for serving you some kind of
5:21 am
content or recommendations. i worry very much about those business models. host: we will take viewers who stick around on c-span over to a discussion with government officials from nasa, the commerce department, and the faa about the commercialization of space and what it means for the economy. before we do that, jeremy kahn, what role do you see ai playing in those big issues? guest: well, ai is extremely important for space exploration, and we will not get to mars or the moon without ai systems helping guide those spaceships. luckily, nasa is one of the organizations i talk about in the book; they put a lot of thought into how humans interact with ai systems, and they have some interesting lessons on the best way to present information from an ai system to humans
