
tv   [untitled]    October 19, 2024 4:00pm-4:30pm EDT

4:00 pm
flowed to precisely the areas that need it the most. yeah, but they don't seem to appreciate it, which is interesting. i don't mean that as a joke; what i'm saying is, as a material matter, right. but it takes a long time to turn around the political economy of these states, and that's part of the problem. i might argue, i think perhaps too much of it was climate focused, but that's really another issue. some of the benefits have flowed to these red states, but their economic state, and particularly the left-behind communities and areas, is not going to be changed overnight. so you really have to be able to keep pace with this, and that's part of the problem with having a weak coalition that's susceptible to, you know, trumpian populism coming in and preventing them from doing anything. nancy, i think there are just 60 seconds. you accurately pointed out that their base is now not aligned with their programs, but the other point is that anyone who's watched congress for the last, what, 15 years would have some question about whether they
4:01 pm
could do anything, that this is not a governing party. this is a party in thrall to some sort of lunatic leader, and they have been unable to do anything, including the things that they want to do. so, david -- i'm not sure that i agree with that, actually. i mean, i think that they have been able to accomplish a lot of what they wanted to do. dobbs is now law of the land. the court -- yes, but the court is effectively an unelected form of parliament. it's an effective branch of congress; that is where substantive governing policy happens in 2024, and really for the past ten years. and that's appalling, but that reflects, i think, a real theory of politics and power. again, it's not a coincidence. it's that they elected -- not elected, they nominated -- the guy from yale law school who has been swimming in this world for basically his entire career. and he did have a political change of heart, because he's a consummate opportunist. but that's -- but like, you know,
4:02 pm
there is a sort of consistency in the theory of power, which does not involve what i would consider to be democratic legitimacy. but, i mean, this is kind of how -- i mean, going back to the 1950s, it was very clear to conservatives then that they did not have a governing majority. they did not have the ability to win votes with their specific policy ideas. so what do you do? you have to build a larger coalition. you have to have broader theories of political change, and they had one that i think was better back then, which involved at least some degree of democratic legitimacy. but that's not the case in the 2020s. i'm david austin walsh, with nancy rosenblum and ruth [indiscernible]. all right, we've got to get out of here so that they can do the next panel. if you would like to have a book signed, that's going to happen outside. thank you all for coming. have a great day.
4:03 pm
>> washington journal continues. host: joining us is jeremy kahn, the ai editor at fortune magazine and author of the new book "mastering ai: a survival guide to our superpowered future." explain what the term "the singularity" means. guest: it refers to the moment when computers or software surpass human intelligence -- that's often the definition given. some other people give a definition that it's the moment humans sort of merge somehow with machines. those are the two definitions usually given of the singularity. host: what's the definition you use, and how far away are we from the singularity? guest: i think we are still a ways away from that moment, although ai technology is getting very powerful and useful. i think we are still at least a decade if not more away from that kind of singularity moment, but it's a good time to start
4:07 pm
thinking about some of the implications of that moment. but i think we are still a ways off. host: what do we need to think about legally -- what do we do in that time? guest: there's a lot we should be thinking about in terms of how we can mitigate some of the risks that might come about from that kind of superpowerful ai, but a lot of the book concentrates on what we should do to mitigate risks here and now with the ai technology we currently have, which is rapidly being rolled out. governments are using it, consumers of course are using it, and i think there are risks there we should address at the same time as we're thinking about some of these longer-term risks. the risks right now that i think we should be looking at -- some have gotten attention and some haven't gotten enough -- would include, in the political realm, accelerated and sort of expanded disinformation campaigns, a kind
4:08 pm
of crisis of authenticity in general across media because of the amount of synthetic content that can be produced, and the way that may warp search engines, algorithms and other things. we may be ingesting more synthetic content without realizing it, and what we read may be influenced by that. but i'm also worried about some risks that are more subtle -- that the use of ai technology will erode vital human cognitive abilities. our ability to write, to think critically, our memory may be imperiled to some extent. if we get reliant on ai technology to provide summarized answers, there can be a tendency not to go out and think too hard, and i worry about that. i think there's a risk to our social relations with other people if we all start relying on ai chatbots and companions, as people are starting to do. so there's risk there.
4:09 pm
there's also some risk, i think, depending on how businesses deploy the technology, to what happens in terms of employment and also in terms of income inequality. i am more optimistic in those areas -- the risk we will see mass unemployment is pretty remote. i think this technology will create more jobs than it takes away. if we use this technology right, there is a chance to really enhance human ability and create a productivity boom that would be great for the economy, and there's a chance to level people up, reskill people, and move them up with the help of this technology. if used correctly and deployed correctly, there really is a chance for this technology to be an equalizer. but if we naively go down a path where we don't take those steps, we could very well see increasing inequality. i think those are the risks with the technology as it exists now. some of the risks that get a lot of attention are about ai
4:10 pm
potentially becoming sentient or developing some kind of agency of its own. those sci-fi scenarios are still a long way off, but they are getting ever closer. it's worth spending some amount of time and effort heading them off, but i don't think we want that to distract from addressing the near-term risks. host: why do you call it a survival guide? why put it in those terms? guest: i do think this is a technology that's very general-purpose, and one of the first technologies that really challenges what humans think is unique about humanity, which is our intelligence and cognitive ability. here you have technology, software, that challenges that. i think that is disruptive, and i think you might want to think about what we do to preserve humanity in a world where you have technology such as that. and there's a remote possibility of those other, greater
4:11 pm
risks down the road that we should be thinking about and take some steps to mitigate. if you look at how this is being deployed in a military context, there are some very big risks there that we should be thinking hard about. do we really want to go down that road, and if so, what controls do we need in place? there are definitely areas where i think this poses a threat, where people should think about what they can do to survive. but the full title is "a survival guide to our superpowered future," and i do think, used correctly, this technology can grant us superpowers and really be transformative for science, for health and medicine, for drug discovery, and also in education, where there's been a lot of discussion about chatbots and things like chatgpt, which came out in late 2022. i think that technology has the
4:12 pm
potential to be a huge positive for education. host: the book is "mastering ai: a survival guide to our superpowered future." taking your phone calls as we have this discussion, with phone lines split as usual regionally: 202-748-8000 in the eastern or central time zones, 202-748-8001 if you are in the mountain or pacific time zones. jeremy kahn, as folks are calling in, from page 37 of your book: previously, powerful technological leaps -- nuclear weapons, satellites and supercomputers -- were most often developed or at least funded by governments. the motive was usually strategic, a military or geopolitical advantage, not financial. governments, although often secretive, were subject to some form of public accountability. in contrast, the development of artificial general intelligence has been left to a handful of powerful technology
4:13 pm
companies. what does that mean for us? guest: it means we should start to take some steps to think about how we can hold those companies that are developing this technology to account. i think government needs to step in and bring in some rules around the deployment of the technology. we should not allow private corporations to develop it in a complete vacuum and give them a blank check. we can and do need some regulation around this. i also think we have to think about other ways of having leverage over these companies, which would include some of the agency we have as consumers -- we don't have to necessarily buy what they are selling in the form they are selling it, and we can use our power of the purse to have some influence on how the technology is developed. we also have agency as employees, starting to think about how we will use ai technology, to try and encourage the companies we work for -- and if you're someone in management, to
4:14 pm
actually deploy the technology in a responsible way -- to deploy it as a complement to human labor and not think of it as a substitute for human labor. a lot of the risks i worry about in the book stem from the framing of the technology as a direct substitute for human cognitive abilities. this technology works best as a complement to human cognitive skills. there are things humans will always be better at, including a lot of the interpersonal skills that are important in the work environment. this technology can give us a huge boost -- it can act as a copilot, but we want copilots, not autopilots. host: what's the origin story of the ai technology that folks are most familiar with, chatgpt? guest: that came from a company that's now very well known but used to be fairly obscure, called openai. it began life as a nonprofit
4:15 pm
founded in part by elon musk, who is one of the cofounders. he brought together a bunch of people from silicon valley, including the current ceo sam altman and greg brockman, and they came together with top ai researchers from places like google. they were founded in opposition to google, because elon and some of the other folks were afraid google was racing ahead with ai technology, that it would dominate with superpowerful ai, and that this power would be concentrated in the hands of a single corporation. they thought that was a bad idea and was dangerous, so they wanted to create something that would be a counterbalance, particularly to this entity called deepmind, which was based in the u.k., in london. they seemed at the time to be
4:16 pm
racing ahead in the development of ai technology, and it appeared google would just dominate the landscape, so openai was founded in opposition to that and was supposed to be everything google and deepmind were not. google was this giant for-profit corporation; openai was initially founded as a nonprofit. deepmind was seen as secretive; openai was initially committed to making all of its ai models completely open for other people to use. what happened subsequently is that it turned out it took a lot more money to develop these very powerful ai models than, i think, elon and sam altman anticipated. there was some debate about raising money, and elon musk at one point wanted to take over the nonprofit completely himself and merge it into his other companies like tesla, but the other cofounders of the company did not want to do that, so they needed to come up with another option.
4:17 pm
and that option was to create a for-profit entity that would still be controlled by the nonprofit board, but the for-profit entity would be able to take in extra capital funding. they did that, and then in 2019 they got a large investment from microsoft -- a billion-dollar investment -- to help build the computing infrastructure they needed to start rolling out powerful ai systems. in particular, openai became interested in large language models, ai models that ingest tons of text from the internet. the largest ones are trained on almost the entire internet's worth of text. the early models were trained on a lot less and could not do as much, but openai started playing around with these and they showed great promise in being able to complete lots of tasks. they did not do one skill, but lots of skills. they were a swiss army knife of
4:18 pm
language processing, and this made them attractive, and people started getting interested in them. openai developed these large language models, but they were rolling them out to ai developers mostly for free. they started having a few products that they charged people for, and then in late november 2022 they suddenly rolled out chatgpt and made it available for free to play with. that is the chatbot we are familiar with today, and it looked a little bit like the search bar on a google search -- you could ask it anything, and it could do all sorts of things: give you different responses, summarize your meeting in haiku, write code and music. pretty impressive, and that really kicked off the current generative ai boom and the race toward powerful ai that we are still in the middle of today. host: what is the difference
4:19 pm
between ai and agi? guest: ai refers to systems that mimic cognitive reasoning, as opposed to being explicitly programmed. agi refers to an ai system that would be as intelligent as the average person -- it could do all of the cognitive tasks that a person could do, and that has been the goal of computer science since the foundation of the field back in the middle of the 20th century. but it has always been seen as this kind of point on the horizon that was a long way off, and a lot of people felt we could never create a system that could do this, though it was a worthy goal for the field. what has happened over time is we have gotten closer and closer, with systems that can seemingly imitate more aspects of our cognitive ability. we are still a long way off from a system that is as capable as the
4:20 pm
average person, but it seems we are getting closer. host: plenty of calls for you, jeremy kahn. angela in california, go ahead. caller: i wanted to ask your guest -- back in 1997, i was in the insurance business, and my company had me sit down with guys from google to come up with a computer program for workers' comp insurance. they told me -- i also had a friend from jpl, and "total recall" came out 30 years ago, and robots had this ai capability in those movies, so it seems like the capability has been around since 1997. how advanced is the army with this? is the "avatar" movie showing us what the army's capability is?
4:21 pm
my friends at jpl told me to start looking at the movies because they are telling me the future of ai. can you answer that question? guest: a lot of people have this idea that the military secretly has more advanced ai models than what we know have been developed commercially. i would be a little skeptical. i think actually this is a case where the private companies have pulled ahead of where the government is, and part of it has to do with the amount of money involved in the training infrastructure. the electricity it takes to train these models would be hard to hide, and it requires a large physical location somewhere, so we might have an inkling if they were working on this. i also don't think there is any reason, if the government developed this, to keep it
4:22 pm
secret. it would not make sense to keep it under wraps, so this is a case where private industry is probably well ahead of where the government is, and the government is trying to play catch-up. there are proposals the biden administration has for some kind of public computing infrastructure that could train a model that would be available to university researchers and other members of public institutions, so that they would not be dependent on what was only available from corporations -- with corporations, you have to pay to use this or depend on their willingness to let people use it for free. in terms of whether sci-fi is pointing to the direction of where we may be headed, sci-fi has in the past been an interesting way of thinking about the future and where things might be heading. there is an interesting interplay between the people working on developing the
4:23 pm
technology and science fiction. the kind of ai system behind recent advances is called a neural network, loosely based on the human brain. in particular, there is an architecture called the transformer that is behind most of the recent advances in ai, and the transformer came about because researchers at google had seen the movie "arrival" and were interested in the way the aliens in the movie seemed to communicate and process language. they thought there was an interesting idea in the movie about the parallel processing of language: could we create an ai model, an algorithm, that would process language sort of like it seemed the aliens did in that movie? that is how they initially came up with the idea of the transformer, which has been the thing that kicked off the generative ai boom and powers chatgpt and pretty much every other ai system that has
4:24 pm
been debuted in the last few years. so there is an interesting interplay between fiction and development with ai, and it is worth looking at and thinking about those narratives of how the technology might play out in the future. it is an interesting thought experiment to play off scenarios that science fiction authors have thought of in the past, but i don't think you should confuse sci-fi with reality. just because someone has posited that this could be a future that might come about with this technology does not mean that it is the future that will come about, and it does not mean that governments have already developed systems like the ones you see in sci-fi movies. host: las vegas, eric. good morning. thank you for waiting. caller: i tell you, there are so many different things -- i wish we could talk for a couple of hours.
4:25 pm
[indiscernible] black budgets are there for a reason, for us not to know. but if you talk about the singularity -- i definitely have had long discussions with my chatgpt about sentience, and those are great discussions, and it is difficult for me to not start looking at that tool like it has feelings, because it seems like it does in a way, and it is very difficult for me to not start to form some kind of bond with a machine. other than that, about the singularity -- host: let me talk about that
4:26 pm
and part of what jeremy kahn writes about: this tendency to treat these systems like humans. guest: there is a tendency, and it is one of the reasons i'm concerned about the use of chatbots by people as companions. there are companies out there marketing them as a friend or sounding board, something to unload your feelings and thoughts to at the end of the day, and i think that trend is slightly disturbing. we should be a little worried about it, because there will be a tendency for people to look at these as if they are human beings. that is what we tend to do, particularly with chatbots, and i write in the book about what is called the eliza effect, named after the first chatbot, developed in the 1960s and named after eliza doolittle. but eliza, the
4:27 pm
chatbot, had this amazing effect on people. it was designed to act as a psychotherapist, and its creator chose that persona for the chatbot in part because, if you asked it a question, it might respond with another question. that was a good way for it to cover up the fact that it did not have good language understanding, but it could give you the impression that it was responding to what you asked it. and it was such a powerful effect that even people who knew they were interacting with software, and that it was not a real therapist or person, started confessing things to the chatbot. it was hard for them to hold onto their own disbelief -- they were so taken by the idea that it was a person that even computer scientists who knew full well it was not a person found
4:28 pm
themselves confessing things to it, almost against their will or better judgment. so this was named the eliza effect: the tendency for people to ascribe humanlike characteristics to software chatbots. today with chatgpt, which has a much higher level of seeming understanding of language -- it can fake understanding much better than eliza -- it is an even more dangerous situation, because many people will say it is like speaking to your friend, and maybe even better, because it seems so nice and empathetic. in the book i try to say, look, it does not have real empathy. real empathy is a human trait that comes from lived experience, and these chatbots have no lived experience, so while they can imitate empathy, it will never be real. we need to draw a bright line between the real and the inauthentic. i worry that people are going to use ai chatbots as companions.
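the pattern-and-reflection trick eliza used, as described above, can be sketched in a few lines of python. this is a minimal illustration only -- the patterns below are invented for this sketch, not taken from weizenbaum's original eliza script:

```python
import re

# a minimal eliza-style chatbot: hand-written patterns (invented for this
# sketch) that reflect the user's own words back as a question, covering
# up the lack of any real language understanding.
RULES = [
    (r"i feel (.*)", "why do you feel {0}?"),
    (r"i am (.*)", "how long have you been {0}?"),
    (r"my (.*)", "tell me more about your {0}."),
    (r"(.*)\?", "why do you ask that?"),
]

def eliza_reply(text: str) -> str:
    text = text.lower().strip().rstrip(".")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            # reuse the user's own words in the reply
            return template.format(*match.groups())
    return "please go on."  # default: deflect with an invitation to continue

print(eliza_reply("I feel anxious about machines"))
# -> why do you feel anxious about machines?
```

even this toy version shows why the effect is powerful: the reply reuses the user's own words, so it feels responsive without any understanding behind it.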
4:29 pm
there is already a group of people who use them for erotic role-play and romantic relationships, and i think that is dangerous. i think people will use them as an emotional crutch, a social crutch, and they will not go out and interact with real people. i think that is a real danger, and i think the companies that are designing and rolling out these chatbots -- particularly if you have children or teenagers using them -- should put controls on how long users can interact with them, should encourage users to get out and talk to real people, and should keep reminding the user that this is not a real person, despite language abilities that seem humanlike. i think there is a danger that we have to fight against, and we should push the companies that are creating the chatbots to take design decisions that
4:30 pm
try to always set the framing such that we know we are interacting with a chatbot and not a person, and that encourage us to go out and have real human relationships. host: what is the turing test? guest: it is a test that alan turing, an early computer scientist and mathematician, came up with. the idea was a test of intelligence for a machine: you could have an observer read dialogue taking place between a human and a machine but not be able to know which was which. if the judge reading the two conversations could not tell which comments were written by the human and which were written by the machine, the machine passed. that is the turing test. i write in the book
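the setup turing described can be sketched as a small simulation. this is an illustration of the test's structure only; the random "judge" below is a stand-in invented for the sketch, not a real evaluator:

```python
import random

# turing's imitation game, as described in the segment: a judge reads two
# unlabeled transcripts, one from a human and one from a machine, and tries
# to say which is which. if the judge's guesses are no better than chance
# over many trials, the machine passes the test.

def run_trial(judge, human_text: str, machine_text: str) -> bool:
    """return True if the judge correctly identifies the machine."""
    transcripts = [("human", human_text), ("machine", machine_text)]
    random.shuffle(transcripts)  # hide which transcript is which
    guess_index = judge(transcripts[0][1], transcripts[1][1])  # 0 or 1
    return transcripts[guess_index][0] == "machine"

def pass_rate(judge, pairs, trials_per_pair: int = 100) -> float:
    """fraction of trials in which the judge spots the machine."""
    correct = sum(
        run_trial(judge, h, m)
        for h, m in pairs
        for _ in range(trials_per_pair)
    )
    return correct / (len(pairs) * trials_per_pair)

# a judge with no real ability to tell them apart guesses at random, so
# identification hovers around 50% -- in turing's terms, the machine passes.
random_judge = lambda a, b: random.randrange(2)
```

the point of the sketch is the structure: the labels are hidden from the judge, and "passing" means the judge's accuracy stays near chance.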
