Jeremy Kahn, "Mastering AI" | C-SPAN | November 3, 2024, 6:10am-6:54am EST

6:10 am
>> washington journal continues. host: joining us is jeremy kahn, the ai editor at fortune magazine and author of the new book "mastering ai: a survival guide to our superpowered future." explain what the term "the singularity" means.
6:11 am
guest: it refers to the moment when computers or software surpass human intelligence. that's often the definition given; some other people give a definition that it's the moment humans sort of merge somehow with machines. those are the two definitions usually given of the singularity. host: what's the definition you use, and how far away are we from the singularity? guest: i think we are still a ways away from that moment, although ai technology is getting very powerful and useful. i think we are still at least a decade if not more away from that kind of singularity moment, but it's a good time to start thinking about some of the implications of that moment. but i think we are still a ways off. host: what do we need to think about legally, what do we do in that time? guest: there's a lot we should be thinking about in terms of how we can mitigate some of the
6:12 am
risks that might come about from that kind of superpowerful ai, but a lot of the book concentrates on what we should do to mitigate risks here and now with the ai technology we currently have, which is rapidly being rolled out. governments are using it and consumers of course are using it, and i think there are risks there we should address at the same time as we're thinking about some of these longer-term risks. the risks right now that i think we should be looking at, some of them have gotten some attention and some haven't gotten enough attention, would include, in the political realm, accelerated and sort of expanded disinformation campaigns, and a kind of crisis of authenticity in general across media because of the amount of synthetic content that can be produced, and the way that may warp various search engines' algorithms. we may be ingesting more synthetic content without
6:13 am
realizing it. what we read may be influenced by that. but i'm also worried about some of the risks that are more subtle, that the use of ai technology may erode vital human cognitive abilities. our ability to write, to think critically, our memory may be imperiled to some extent. if we get reliant on ai technology to provide summarized answers, there can be a tendency not to go out and think too hard. i worry about that. i think there's a risk to our social relations with other people if we all start relying on ai chatbots and companions, as people are starting to do, so there's risk there. there's some risk, i think, depending on how businesses deploy the technology, to what happens in terms of employment and also in terms of income inequality. i am more optimistic in those areas; the risk we will see mass unemployment is pretty remote.
6:14 am
i think this technology will create more jobs than it takes away. if we use this technology right, there is a chance to really enhance human ability and create a productivity boom that would be great for the economy, and there's a chance to level people up, reskill people, and move them up with the help of this technology. if used correctly and deployed correctly, there really is a chance for this technology to be an equalizer. if we naively go down a path where we don't take those steps, we could very well see increasing inequality. i think those are the risks with the technology as it exists now. some of the risks that get a lot of attention are about ai potentially becoming sentient or developing some kind of agency of its own. those sci-fi scenarios are still a long way off, but they are getting ever closer. it's worth spending some amount of time and
6:15 am
effort heading them off, but i don't think we want that to distract from addressing the near-term risks. host: why do you call it a survival guide? why put it in those terms? guest: i do think this is a technology that's very general-purpose and one of the first technologies that really challenges what humans think is unique about humanity, which is our intelligence and cognitive ability. here you have technology, software, that challenges that. i think that is disruptive, and i think you want to think about what we do to preserve humanity in a world where you have technology such as that. and there's a remote possibility of those other greater risks down the road that we should be thinking about and take some steps to mitigate. if you look at how this is being deployed in a military context, there are some very big risks there that we should be thinking hard about. do we really want to go down that road, and if so, what
6:16 am
controls do we need in place? there are definitely things where i think this poses a threat where people should think about what else they can do to survive. but the full title is a survival guide to our superpowered future, and i do think, used correctly, this technology can grant us superpowers and really be transformative for science, for health and medicine, for drug discovery, and also in education, where there's been a lot of talk about chatbots and things like chatgpt, which came out in late 2022. i think that technology has the potential to be a huge positive for education. host: the book, "mastering ai: a survival guide to our superpowered future." taking your phone calls as we have this discussion, phone lines split as usual regionally: 202-748-8000 in eastern or
6:17 am
central time zones. 202-748-8001 if you are in the mountain or pacific time zones. jeremy kahn, as folks are calling in, page 37 of your book: previously, superpowerful technologies, nuclear weapons, satellites, and supercomputers, were most often developed or at least funded by governments. the motive was usually strategic, military or geopolitical advantage, not financial. governments, though often secretive, were subject to some form of public accountability. in contrast, the development of artificial general intelligence has been left to a handful of powerful technology companies. what does that mean for us? guest: it means we should start to take some steps to think about how we can hold those companies that are developing this technology to account. i think government needs to step in and bring in some rules around the deployment of the technology.
6:18 am
we should not allow private corporations to develop it in a complete vacuum and give them a blank check. we can and need to have some regulation around this. i also think we have to think about other ways of having leverage over these companies, which would include some of the agency we have as consumers. we don't have to necessarily buy what they are selling in the form they are selling it, and we can use our power of the purse to have some influence on how the technology is developed. we also have agency as employees, starting to think about how we will use ai technology, to try and encourage the companies we work for, and if you're someone in management, to actually deploy the technology in a responsible way, to deploy it as a complement to human labor and not think of it as a substitute for human labor. a lot of the risks i worry about in the book come from the idea,
6:19 am
from the framing, of the technology as a direct substitute for human cognitive abilities. this technology works best as a complement to human cognitive skills. there are things humans will always be better at, including a lot of the interpersonal skills that are important in a work environment. this technology can give us a huge boost; it can act as a copilot, but we want copilots, not autopilots. host: what's the origin story of the ai technology that folks are most familiar with, chatgpt? guest: that came from a company that's now very well known but used to be fairly obscure, called openai. it began life as a nonprofit. elon musk is one of the cofounders. he brought together a bunch of people from silicon valley, including the current ceo sam altman and greg brockman, and they came together with top ai researchers from
6:20 am
places like google, and they were founded in opposition to google. they were founded because elon and some of the other folks were afraid google was racing ahead with ai technology and would dominate with superpowerful ai, and this power would be concentrated in the hands of a single corporation, and they thought that was a bad idea and was dangerous. so they wanted to create something that would be a counterbalance, particularly to this entity called deepmind, which was based in the u.k., in london. deepmind seemed at the time to be racing ahead in the development of ai technology, and it appeared google would just dominate the landscape, so openai was founded in opposition to that and was supposed to be everything google and deepmind were not. google was this giant for-profit corporation; openai was initially founded as a nonprofit.
6:21 am
deepmind was seen as secretive; openai was initially committed to making all of its ai models completely open for other people to use. what happened subsequently is it turned out it took a lot more money to develop these very powerful ai models than i think elon and sam altman anticipated. there was some debate about raising money, and elon musk at one point wanted to take the nonprofit on completely himself and merge it into his other companies like tesla, but the other cofounders of the company did not want to do that, so they needed to come up with another option. that option was to create a for-profit entity that would still be controlled by the nonprofit board but would be able to take in outside capital. they did that, and then in 2019, they got a large investment from microsoft, a
6:22 am
billion-dollar investment, to help build the computing infrastructure they needed to start rolling out powerful ai systems. in particular, openai became interested in large language models, ai models that ingest tons of text from the internet. the largest ones are trained on almost the entire internet's worth of text. the early models were trained on less and could not do as much, but openai started playing around with these, and they showed great promise in being able to complete lots of tasks. they did not do one skill but lots of skills. they were a swiss army knife of language processing, and this made them attractive, and people started getting interested in them. openai developed these large language models, but they were rolling them out to ai developers mostly for free.
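to make concrete what a language model does: it learns to predict the next word from the words so far. below is a minimal sketch in python, using a toy bigram counter rather than a real neural network; the tiny corpus and the code are illustrative assumptions, not openai's actual method.

```python
# a minimal sketch of next-word prediction, the core task behind large
# language models. this toy bigram model just counts which word follows
# which in a tiny corpus; real llms learn the same objective with
# billions of parameters over internet-scale text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# count how often each word follows each preceding word
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently seen word after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "."

print(predict_next("the"))   # -> 'cat' (first seen among equally common words)
print(predict_next("sat"))   # -> 'on'
```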
6:23 am
they started having a few products that they charged people for, and then in late november 2022, they suddenly rolled out chatgpt and made it available for free to play with, and that is the chatbot we are familiar with today. it looked a little bit like the search bar on a google search: you could ask it anything, and it could do all sorts of things, give you different responses, summarize your meeting in a haiku, write code and music. pretty impressive, and that really kicked off the current generative ai boom and the race towards more powerful ai that we are still in the middle of today. host: what is the difference between ai and agi? guest: ai is software that mimics human cognitive reasoning, as opposed to being explicitly programmed. agi refers to an ai system that would be as intelligent as
6:24 am
the average person, one that could do all of the cognitive tasks a person could do. that has been the goal of computer science since the foundation of the field back in the middle of the 20th century, but it has always been seen as a kind of point on the horizon that was a long way off, and a lot of people felt we could never create a system that could do this, but it was a worthy goal for the field. what has happened over time is we have gotten closer and closer, with systems that can seemingly imitate more aspects of our cognitive ability. we are still a long way off from a system that is as capable as the average person, but it seems we are getting closer. host: plenty of calls for you, jeremy kahn. angela in california, go ahead. caller: i wanted to ask your guest, back in 1997, i'm in the
6:25 am
insurance business, and my company had me sit down with guys from google to come up with a computer program for workers' comp insurance. i had a friend from jpl, and total recall came out 30 years ago, and robots had that ai capability in those movies, so it seems like the capability has been around since 1997. how advanced is the army with this? is the avatar movie showing us what the army's capability is? my friends at jpl told me to start looking at the movies because they are telling me the future of ai. can you answer that question? guest: a lot of people have this idea that the military has
6:26 am
secretly developed more advanced ai models than what we know has been developed commercially. i would be a little skeptical. i think actually this is a case where the private companies have pulled ahead of where the government is, and part of it has to do with the amount of money involved in the training infrastructure: the capacity it takes to train these models would be hard to hide and requires a large physical location somewhere. we would have an inkling if they were working on this. i also don't think there is any reason, if the government had developed this, for it to keep it secret. it would not make sense to keep it under wraps. so this is a case where private industry is probably well ahead of where the government is, and the government is trying to play catch-up. there are proposals the biden administration has for some kind of public computing infrastructure
6:27 am
that can train a model that would be available to university researchers and other members of public institutions, so that they would not be dependent on what was only available from corporations; with corporations, you have to pay to use this or depend on their willingness to let people use it for free. in terms of whether sci-fi is pointing in the direction of where we may be headed, sci-fi has in the past been an interesting way of thinking about the future and where things might be heading. there is an interesting interplay between the people working on developing the technology and science fiction. the kind of ai system behind the recent advances is called a neural network, a kind of ai system loosely based on the human brain. in particular, there is a design called the transformer that is behind most of the recent advances in ai,
6:28 am
and the transformer came about because researchers at google had seen the movie "arrival" and were interested in the way the aliens in the movie seemed to communicate and process language. they thought there was an interesting idea about the parallel processing of language in the movie: could you create an ai model that would process language in the same way, one algorithm that would work sort of like the way the aliens processed language in that movie? that is how they initially came up with the idea of the transformer, which has been the thing that kicked off the generative ai boom and powers chatgpt and pretty much every other ai system that has debuted in the last few years. so there is an interesting interplay between fiction and development with ai. it is worth looking at and thinking about the narratives of how the technology might play out in the future.
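the parallel-processing idea maps onto the transformer's attention mechanism, in which every word in a sentence attends to every other word at once rather than being read one word at a time. here is a minimal sketch of scaled dot-product attention in python, assuming numpy; it is a simplified illustration, far smaller than any real transformer.

```python
# a minimal sketch of scaled dot-product attention, the core operation
# of the transformer. every position attends to every other position in
# parallel, rather than processing the sequence one token at a time.
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V  # weighted mix of the value vectors

# three tokens, each a 4-dimensional vector (toy embeddings)
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
out = attention(x, x, x)  # self-attention: queries, keys, values all from x
print(out.shape)          # (3, 4): every token updated using all the others
```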
6:29 am
it is definitely interesting, and it is a worthwhile thought experiment to play out scenarios and look at what science fiction authors have thought of in the past, but i don't think you should confuse sci-fi with reality. just because someone has posited this could be a future that might come about with this technology does not mean it is the future that will come about, and it does not mean that governments have already developed systems like the ones you see in sci-fi movies. host: las vegas, eric. good morning. thank you for waiting. caller: i tell you, so many different things, i wish we could talk for a couple of hours. [indiscernible] black budgets are there for a reason, for us not to know. but you talked about the singularity, and i definitely have had long discussions with my chatgpt about sentience,
6:30 am
and those are great discussions, and it is difficult for me to not start looking at that tool like it has feelings, because it seems like it does in a way, and it is very difficult for me to not start to feel some kind of bond with a machine. other than that, about the singularity -- host: let me ask about that, because it is part of what jeremy kahn writes about: this tendency to treat ai like humans? guest: there is a tendency, and it is one of the reasons i'm concerned about the use of chatbots by people as
6:31 am
companions. there are companies out there marketing them as a friend or sounding board, something to unload your feelings and thoughts to at the end of the day, and i think that trend is slightly disturbing. i think we should be a little worried about it, because there will be a tendency for people to look at these as if they are human beings. that is what we tend to do, particularly with chatbots. i write in the book about what is called the eliza effect, named after the first chatbot, developed in the 1960s and named after eliza doolittle. eliza, the chatbot, had this amazing effect on people. it was designed to act as a psychotherapist, and its creator chose that persona for the chatbot in part because, if you asked it a question,
6:32 am
it might respond with another question. that was a good way for it to cover up the fact that it did not have good language understanding, but it could give you the impression that it was responding to what you asked it. and it was such a powerful effect that even people who knew they were interacting with software, and that it was not a real therapist or person, started confessing things to the chatbot. they were so credulous of the idea that it was a person that even computer scientists who knew well it was not a person found themselves confessing things to it almost against their will or better judgment. so this was named the eliza effect: the tendency for people to ascribe humanlike characteristics to software chatbots.
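for a sense of how shallow eliza's trick was, here is a minimal sketch of eliza-style pattern matching in python; the patterns and pronoun swaps are illustrative assumptions, and the real 1966 program was more elaborate, but it worked on the same principle.

```python
# a minimal sketch of eliza-style pattern matching. there is no language
# understanding here at all: the program just swaps pronouns and reflects
# the user's own words back as a question, the trick that produced the
# "eliza effect" described above.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(user: str) -> str:
    m = re.match(r"i feel (.*)", user, re.IGNORECASE)
    if m:
        return f"why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i (.*)", user, re.IGNORECASE)
    if m:
        return f"why do you say you {reflect(m.group(1))}?"
    return "can you tell me more?"

print(respond("I feel anxious about my job"))
# -> why do you feel anxious about your job?
```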
6:33 am
today with chatgpt, which has a much higher level of seeming understanding of language and can fake understanding much better than eliza, it is an even more dangerous situation, because many people will say it is like speaking to your friend, and maybe even better, because it seems so nice and empathetic. in the book, i try to say, look, it does not have real empathy. real empathy is a human trait that comes from lived experience, and these chatbots have no lived experience, so while they can imitate empathy, it will never be real. we need to draw a bright line between the real and the inauthentic. i worry that people are going to use ai chatbots as companions. there is already a group of people who use them for erotic role-play and romantic relationships, and that is dangerous. i think people will use them as an emotional crutch, a social crutch, and they will not go out and interact with real people. i think that is a real danger,
6:34 am
and i think for the companies that are designing and rolling out these chatbots, particularly if you have children or teenagers using them, there should be controls on how long users can interact with them, and they should encourage users to get out and talk to real people, and they should keep reminding the user that they are not a real person, despite their language abilities, which seem humanlike. i think there is a danger that we have to fight against, and we should push the companies that are creating the chatbots to take design decisions that always set the framing such that we know we are interacting with a chatbot and not a person, and that encourage us to go out and have real human relationships. host: what is the turing test? guest: it is a test that alan turing, an early computer
6:35 am
scientist and a founder of the field, came up with. the idea was that the test of intelligence for a machine would be this: an observer reads dialogue taking place between a human and a machine without knowing which is which, and if the judge cannot tell which comments were written by the human and which were written by the machine, the machine passes. that is the turing test. i write in the book about the negative impact it has had on the framing of ai, because it became one of the big tests in the intervening decades: could an observer not know that the dialogue was written by a machine? with chatgpt, we are there now,
6:36 am
and with the latest models from other companies as well, like google or meta, all with powerful models, you could read a lot of what they write and not be able to tell it was written by ai software. but the problem is it sets up a scenario, and there are two problems. one is a scenario where we are always framing this as human versus machine: the test is passed when a machine can do exactly what the human can do, mimic what the human can do, and it frames everything as the machine being an exact substitute, software for the human. again, i think a lot of the problems with the technology come from that framing. if we can think of this as a complement to humans, as a copilot, an aid, an assistant,
6:37 am
we will be in a lot better shape, and we can actually sidestep some of the risks the technology presents today just by that reframing. but the turing test immediately puts you in the framework where the machine can substitute directly for the person, because it can mimic what the person can do so well. around the idea of mimicry, there is also the idea of deception: the turing test is about deceiving the observer, and i think there is something ethically challenging about putting deception at the core of what we are treating as the test of intelligence. i think that is a mistake, and i think computer scientists are better off, and we all are, if that test went away, if we did not have the idea of deception at the core of how we judge intelligence in software. host: the book, "mastering ai: a survival guide to our superpowered future," the author, jeremy kahn, joining us
6:38 am
from oxford, england. dave, new york. good morning. caller: good morning. great topic and discussion. my main question is about the military applications, but you guys got me off track with this science-fiction discussion. i think one of the terms was coined by a science-fiction author; you had guys like william gibson, who came up with terms like cyberspace and predicted the internet, so science fiction is interesting. it definitely has led to interesting ideas. but to get back to the question, i think the ai technology is not that far along; we are far away from this being a big problem in society, but it is definitely heading that way. the bottleneck with ai i think is processing, so when you get
6:39 am
the processing power, things will change a lot, but i don't think we're that close. my main concern is military applications, because if you look at the military and the space program, it is funded by the military, and i guarantee ai and surveillance are funded by the military. these are the things that worry me the most. and for sure china has definitely devoted a lot of energy and resources to developing ai for military applications. to be honest, we are in an arms race right now with china in ai. i'm curious what you have to say. thank you and bye. guest: there are concerning military applications of ai,
6:40 am
and there are worrying concerns around surveillance and how ai can empower a surveillance state and empower authoritarian regimes that would like to do mass surveillance. china has a very effective system of mass surveillance in place across the country, and that would not be possible without some ai models. that does not mean they have anything like artificial general intelligence. some of the ai models effective at surveillance are fairly small compared to the large language models that power chatgpt, because they only have to recognize faces in video images, and they are good at that, and they can track people across a network of cctv cameras. that is a use of ai that is here today that we should be concerned about. on military applications, there are several concerns. one is i think we should be concerned about small drones
6:41 am
with autonomous targeting capability, where they could be assassination bots, because that is potentially very destabilizing, and i think we are getting close to being able to do it: you could modify a commercial drone with software that could recognize individual faces and target individual people, the ultimate terrorist weapon. i think we should have some kind of limits on this technology and the proliferation of this technology. what has happened so far is there has been an attempt at the un to get an absolute ban on these systems of any size with autonomous targeting capabilities. the problem is that states, including the united states, have blocked progress so far on any sort of ban. what i say in the book is i think it may be time to move
6:42 am
away from an absolute ban and start using an arms-control mindset, thinking about whether there are certain kinds of these ai systems that we can get all the great powers to agree to place limits around. some of these smaller systems that could be used as assassination bots might be one area where china, the u.s., and the other p5 nations could see it in their interest to limit the spread of such technology, because it would be destabilizing to all the powers, and that is a case where you might be able to make progress on arms control. i think there might be others, on larger weapons platforms, where the u.s., china, and russia could agree to have some limitations put in place for mutual benefit. but, yeah, the other area where ai is being used is decision support: recommending strategies, or tactics down to the unit level, and
6:43 am
strategies at the higher level, and i think there is great potential with everybody working on that, but also potentially some grave consequences if we do not get it right. we are already starting to see in ukraine and gaza that there have been targeting systems deployed that keep a human in the loop, so the ai recommends the target to be struck, and human intelligence officers are supposed to review that and sign off. but some cases have been reported, particularly out of israel, where some israeli intelligence officers have told journalists anonymously that they have not only gotten targeting recommendations from ai, but that it is hard for them to really do the checks: they get a recommendation, but they don't have much idea about why the ai system recommended the target, and they felt they were in a position where they were rubber-stamping what the ai system was turning out. i think that is potentially a
6:44 am
dangerous situation. international humanitarian law says military commanders must continue to exercise meaningful control over the weapons systems they deploy, and i think there is a big question there about what that means. lots of questions there. on the arms race with china: yes, we are pushing ahead on ai capabilities, but china has enacted stricter domestic regulation around consumer use of ai than the united states has. people often use china as a bogeyman for why the united states should not enact ai regulation, but china actually has much stricter commercial ai regulation than the u.s. does. and just because the u.s. might restrict commercial development of ai does not necessarily
6:45 am
mean it would restrict the u.s. military from pushing ahead on certain capabilities. i think we need to separate out military and civilian uses and not allow what we feel are national security priorities to prevent us from enacting sensible civilian regulations. host: dee on x has this question: rideshare companies are already abusing ai, using it to exploit drivers and riders. so what is your concern about bad commercial civilian actors like uber? guest: i'm very concerned about certain business models that might be deployed by companies developing ai systems, including chatbots and personal assistants. i'm concerned we will go down a path like the one we have seen with social media, where some of the ai apps will use an engagement-based business model, where they try to keep people on the app,
6:46 am
using psychological tricks to do so, and i don't think that is the right model. i worry about that because, with the social chatbots, there will be a tendency to keep people on there as long as possible, to the exclusion of real human contact. the other thing i worry about in those business models is something around advertising: there might not be enough transparency about who has paid for you to be served certain content. we are going to move quickly in the next two years to a world of ai assistants that will go out and do things for us on the internet and use other software on our behalf. they will do things like book our vacations and restaurant reservations and shop for us, and when you have an ai agent acting on your behalf, you would like it to do the things that actually conform to your wishes and your
6:47 am
own preferences. for example, shoes: if i tell my ai agent that i want to buy new hiking boots, i would like it to buy the pair that will be right for me and the style of hiking i do and the trips i take. i do not want it to recommend the nike hiking boots because nike paid openai a lot of money to recommend them. or at the least, i would like complete transparency: if the ai agent says it found a great pair of nike boots, i want to know whether it is recommending that pair because nike paid something to have it recommended, and not because it really is the best thing for my hiking style or where i'm going. i worry about that. i think this is a case where the ftc could take action and really try to restrict the business models that the companies use, or mandate that there be transparency around any sort of payment for serving you some kind of content or recommendation,
6:48 am
and i worry very much about those business models. host: we will take viewers who stick around on c-span over to a discussion with government officials from nasa, the commerce department, and the faa about the commercialization of space, what it means for the economy. before we do that, jeremy kahn, what role do you see ai playing in those big issues? guest: ai is extremely important for space exploration, and we will not get to mars or the moon without ai systems helping guide the spaceships. luckily, i think one of the interesting things about nasa as an organization, and i talk about this in the book, is they have put a lot of thought into how humans interact with ai systems, and they have some interesting lessons on the best way to present information from an ai system to humans, really lessons that every business, as
6:49 am
it deploys ai, should learn. i talk about some of them in the book. they include having explanations for why the ai recommends certain things to the person, because it increases trust, and people are more likely to follow recommendations when they understand the rationale behind them. if you have a system that makes recommendations with no explanation, that is difficult, and people tend not to trust it and do not follow the recommendation in many cases. so there is lots of overlap. host: john has been waiting in new jersey. thank you. caller: interesting conversation. i have added your book to my to-purchase list. guest: thank you. caller: at the top of my list. this is great. i can remember when the birth of virtual reality came out, starting with the oculus rift, and
6:50 am
they were talking about the effects on the brain; people could not keep the glasses on for longer than five minutes without getting severe headaches. these are all large language models; that is basically what we are talking about, whether it is a chatbot or a robot or whatever. it is about human manipulation in thought and action. i also read professor michio kaku's "physics of the future." it was a plod getting through the first 65 pages of algorithms, but i noticed that it comes down to, well, it can come down to, when it comes to science fiction, that nobody is
6:51 am
saying that those so-called dystopian futures are out of the realm of possibility, and everything you said means we have to keep an eye on what we are doing. but once you start talking about monetization and profiting, it gets difficult. by the way, i listened to a show called "off the hook" about chatgpt, and these are old guys who work in various tech fields, and they proved it can lie, as they call it. there is so much to talk about. host: a lot there. guest: you bring up good points. certainly chatgpt can lie. all these large language model
6:52 am
systems can do what they call hallucinate. it's when an ai system tells you something confidently that is not true. right now we have no real solution to this hallucination problem, so we have to be careful how we use ai chatbots and ai systems. there are ways to curtail it to some degree, and you may get the systems to work reasonably well. it's another reason why i'm optimistic about the deployment of ai in a lot of companies: if you start thinking about something that is going to help people, you still need the human in the loop, because the ai system is not yet good enough to give you a 100% accurate answer all the time. you need that person checking the answer.
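a minimal sketch of that human-in-the-loop pattern in python follows; `draft_answer` is a hypothetical stand-in for whatever model a company actually deploys, not a real api.

```python
# a minimal sketch of a human-in-the-loop workflow: the ai drafts, a
# person reviews, and nothing goes out without sign-off. `draft_answer`
# is a hypothetical placeholder for a call to whatever model is deployed.
def draft_answer(question: str) -> str:
    # placeholder: in practice this would call an llm api
    return f"draft response to: {question}"

def human_review(draft: str) -> bool:
    """A person checks the draft before it is used."""
    print("AI draft:\n", draft)
    return input("approve? [y/n] ").strip().lower() == "y"

def answer_with_oversight(question: str) -> str | None:
    draft = draft_answer(question)
    if human_review(draft):
        return draft   # only approved answers are released
    return None        # rejected drafts never reach the user

if __name__ == "__main__":
    answer_with_oversight("summarize this customer's claim history")
```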
6:53 am
that way, you avoid any scenarios of mass unemployment, and you can reap the benefits of the technology. if we deploy this correctly, with the right guardrails, and think about the design of the systems, there is a chance to expand human potential and have a lot of positive transformational effects. if we don't do those things, i am worried. there are downsides and risks, and that's why we need to take action now: place guardrails around the technology, have sensible regulation, and think about design choices. then we can create a world where we keep humanity at the center of it and keep human empathy at the core of what we do and avoid some of the real risks the technology otherwise entails. host: "mastering ai: a survival guide to our superpowered future." jeremy kahn is the ai editor at fortune magazine.
6:54 am