
Lectures in History: History of Artificial Intelligence, CSPAN, February 23, 2024, 8:00am-8:49am EST

8:00 am
today we are going to do our second lecture on the history of artificial intelligence. we talked about it way back in september. and today we're going to be talking about how it is that data came
8:01 am
to overtake rules-based approaches in artificial intelligence. so in 2009, a trio of google researchers published a paper, "the unreasonable effectiveness of data." and what they said is that scientists and humanists alike had been looking for simple theories about language and other facets of human experience, things that would look like physics or math. but it turned out that embracing complexity, they said, was the way to go. taking data at enormous scale and analyzing it is what allowed you to translate language, to reproduce language, to understand pictures and whatnot. data won out, rather surprisingly to people of a mathematical mindset, over rules. not that long before, as we talked about some weeks ago, the progenitor of the term artificial intelligence, john mccarthy, had denounced the idea
8:02 am
that learning from sensory experience, from data, would ever produce complex behavior. but in 2009, the opposite seemed to be true. and today that's what we're talking about: how is it that we came to be in this situation? now, in the last, say, 7 to 8 years, the term artificial intelligence has gone from something that was seen as sort of a backward, older kind of approach to, quite precisely, one of the most exciting things happening right now, and what was understood by it was predictive algorithms using statistics and something called machine learning that had been trained using large data sets. it wasn't about rules and symbolic reasoning; it was about data and its analysis, and particularly data at scale, at facebook or google or amazon scale, or the scale of the human genome.
8:03 am
so this is what has produced the sort of things that we're struggling with today. so here, from this morning: i asked chatgpt to tell me about the coming into self-consciousness of an ai in the style of dr. seuss, which it was very happy to do. and then i asked one of the diffusion programs to make an image of princeton students listening to a lecture on the history of ai, and it produced something from what it had learned. so how did we get to this world? this is a world that is based on deep empirical granularity. what do i mean by that? it's based on sensory data, the vast incoming stream of experience, and it's granular in that it's about the detail of things in the world, not sort of extremely simple cases, but rather the full complexity of language in all of its different
8:04 am
meters, the irregular verbs, not just the regularities, but at huge scale. and one of the fundamental phenomena of our time is that predictive algorithms that are trained on large-scale data, historical and current data, have a very powerful ability to reproduce existing forms of inequality and inequity. that is, when there is structural inequality, for example structural racism, data at this scale tends to reproduce it. and this is a fundamental ethical and political question of our time. so the ai of today is often referred to as a dumpster fire. so how did we get here? what made it possible? and what are some of the more pressing concerns about it? so i'm going to give you a kind of whirlwind tour of how we got there, after reminding you a little bit about what we talked about in our first ai lecture.
8:05 am
so now, as i said, a.i. has been redefined. if you look at the longer history of it, it's a history both of the technical disciplines that are arrayed around this term, ai, and of a sort of fascinating moment of cultural focus on thinking through fundamental issues of the nature of humanity, of the nature of culture, of the interactions of nations, of corporations and of things, and those histories are deeply intertwined. and we need only think of all of these cultural touchstones to see how much of popular culture is involved in conversation about a.i. and in our talk at the very end, these very touchstones are important in thinking about real ai today. so let's return to where we began our conversation last time, which was alan turing and some of his friends at
8:06 am
bletchley park during the second world war. now, they were knee-deep in doing large-scale computational work on vast amounts of data that was produced by the collection of the communications of the axis powers. as one of them described it, in doing cryptography they were, quote, up to our elbows in automation of one kind or another. and in the evenings, they thought about what the implications of this would be in the future. now, i told you that they broke this into several ways of thinking about what it might be to be intelligent. one was learning from experience, like collecting lots of signals of germans making secret transmissions. the other was thinking about rules, logic and mathematics. many of them came out of domains like mathematics. now, in the history of
8:07 am
the first, say, 50 years of ai, the focus is largely upon rules plus a small number of facts, not the complexity of experience. and the keynote moment for this is a conference in the middle of the 1950s, organized by john mccarthy. and he was quite blunt, and you can find this online, where he says he invented the term artificial intelligence largely as a way of getting money. now, as i explained to you, the kind of ai that they focused on was not based on data in particular at all. it was rather focused on a vision, in some sense the self-image, of mathematicians and logicians and chess players, and it prized the idea that what really made human intelligence interesting was its high, symbolic nature. so the forms of programming they devised tried to combine
8:08 am
rules, symbolic reasoning and a certain amount of creativity. and that was made into the heart of intelligence. now, what happened to learning from experience? well, i've already told you mccarthy was down on this, and so down on it were he and his allies that they went after the biggest examples of it. so this alternate path is most identified with an apparatus called the perceptron, which was an attempt to literally reproduce in machines a kind of neural structure that would sense things and try to say whether what you're seeing is an a or an h. initially it was a large machine, but it became something that was very much algorithmic. for mccarthy and his allies, this was the antithesis of what artificial intelligence should be. and as i recounted for you, it was meant to get at the lowest
8:09 am
level of human and animal cognition, and not its heights, the symbolic heights. so they deliberately targeted it for death, quite successfully, and argued that the data-centric approach was not intelligence worth the name. okay, so that's all review, and i am casting it in, you know, black-and-white colors. so our question is, how did this data-focused approach come back? where did it come from? how did it come to dominate in the ways that it does in our world? well, you'll remember that at bletchley park, they talked about rules, but they also talked about learning from experience. they were in the heart of a data-driven enterprise. so when we think about alan turing, typically the stories are about alan turing the lone genius, the tortured genius, the person who suffered this awful persecution by the british state. but his work was very much done in an entire factory that i
8:10 am
described to you some weeks ago, of data analysis at large, using large infrastructures for the purpose of attempting to win the war. this approach to thinking about data did not disappear, but it wasn't known as ai at all. in fact, it was something very different. now, the ai of our current moment emerges from a long lineage of this work and has a bunch of components, only some of which i'm going to be able to talk to you about today. so the a.i. of the past half decade very much emerges from a data-centric approach. and it's enabled by relatively weak privacy and property protections.
8:11 am
we've been talking about this. it's tied to an organization of research and of labor, and it's undergirded by massive computing capacity. you need all of these ingredients to understand the emergence of ai in our terms, and the roots of that are very much in this world war two context and what happens to it afterward, in the cold war. so while the symbolic rules program is exploding and getting most of the good press, behind the scenes, mostly in classified domains, there's a kind of low road of instrumental computational data work on vast archives of data: first things like cryptologic work, combined with an approach using statistics, where the goal is not necessarily to produce great thought, but rather to solve concrete problems of the military.
8:12 am
it happens in lots of different places, and this is just one kind of example, a domain that becomes known as pattern recognition. and in pattern recognition you have fundamental issues like: you have vast amounts of imagery from satellites and spy planes, and it takes vast amounts of human labor to classify them. would it be possible for a computer to learn to classify this? this was not intelligence in the sense of can we reproduce the great mathematicians of the past, but rather can we put the labor of recognition on a computational platform? this approach developed into a whole array of algorithms, and any of you who've ever studied machine learning or computational statistics have learned many of these algorithms under various kinds of names. and it concerned using those kinds of algorithms on large data sets, not the kind of toy datasets that
8:13 am
were more the focus of mathematical statistics. now, this issue of how it is that you replace an expert was extremely challenging. the people in symbolic ai definitely wanted to do this, and the people in computational statistics wanted to do this, and it turned out to be very, very challenging. one of the great discoveries, and it happens in parallel in science and in social domains like history and sociology, is a recognition that getting people to explain how they are experts, how they make expert decisions, to ask, say, a doctor how he or she makes a clinical decision, is incredibly hard. there was a kind of hubris that it would be easy to elicit the thought processes of skilled professionals, but it turns out to be enormously hard. and this is a great example, a stanford study where a guy using a lathe is trying to explain how
8:14 am
he uses it. that's a kind of skilled activity, and it turns out it's enormously hard to do this. there's a program called expert systems which attempted to elicit rules through discussion with experts and put them into place. but it turns out to be both labor-intensive, very difficult and expensive to elicit the rules, and incredibly brittle. they're not expansive, not good at dealing with the complexity of the world. and they called this the knowledge acquisition bottleneck. so you needed a human expert to discuss for a very long time with a so-called knowledge engineer in order to elicit rules. and it came to dawn on a lot of people that the solution might not be to try to replicate the rules by which people think, but rather to create predictive algorithms that might work in an entirely different way, that would duplicate at a very high
8:15 am
percentage the kinds of decisions they would make. one of the interlocutors of turing way back in the forties wrote in 1985: mastery is not acquired by reading books. it's acquired by trial and error and teacher-supplied examples. this is how humans acquire skill. and if you think about this, this is both a statement about what the nature of human cognition and intelligence is, and a concrete program for thinking about what you'd want to do if you're going to build algorithms that would duplicate human skill. and the kind of thing they're thinking, because this may all be very abstract: imagine you have a vast sky survey of all the stars at very high resolution. and traditionally, as we discussed, you'd have astronomers and large pools of extremely learned women go through all these plates and classify stellar
8:16 am
objects. so the question was: could you do that not by asking the astronomers how you tell whether something is a pulsar or not, but by taking a large dataset where they classified objects and then producing a statistical predictor that's going to make the predictions the astronomers would make. now, i hope that's clear. it's not saying, how do you do it? it's saying, can we model something that's going to make predictions that line up with yours? that division is a really major one, because it no longer means that you're attempting to model the process of human cognition. you're attempting to model the outputs of human cognitive action. this turns out to be more successful than anything that had been done in rules-based a.i. and for those of you who've worked in this world, it consolidates what we think of as central parts of machine
8:17 am
learning, and particularly what is called supervised learning. and supervised learning is one in which you have a set of data that has been classified by a set of human actors, and you produce an algorithm that can duplicate that classification. so think sky charts, or if you had a whole bunch of people assessing credit card applications, then you build an algorithm that models that behavior and can do it at an incredible scale. now, all of this was going on in all kinds of academic and intelligence settings, but things were happening in parallel, which we've been discussing in the past few weeks, that created the conditions in which these could be applied at a larger scale and become familiar to all of us. those shifts involve, first, as we've discussed, the consolidation and expansion of vast infrastructures for the storage and analysis of data. the
8:18 am
particular focus on storage of data was one that businesses and spies were much more interested in than a lot of scientists. but an infrastructure actually comes to be built and expanded, and it continues to expand to this day. so you had to have this infrastructure, but you also had to have a change in norms about using data. and i think i showed you this some weeks ago, a meme making fun of the fact that in the 1960s there was this thriving concern about the privacy of our data, and yet today we expect the wiretap on our desk or on our wrist to answer all of our questions. there is a transformation of norms, and this was very much connected to those transformations in laws around privacy that we discussed, that particularly important moment in 1974, where it seemed that the united states and other jurisdictions would have robust privacy laws,
8:19 am
but that's not what ended up happening. the commercialization of data, by and large, was for use by commercial entities, for use, for sale, for analysis and whatnot. and it's central to the world in which we live. so you had this new kind of predictive technology, combined with ever greater infrastructures, changing norms around privacy, relatively weak laws around privacy, and, with the explosion of the internet, the incorporation of large companies that retained this data, stored this data and had many reasons to be using it. oh, and i forgot that the us government itself came also to have very controversial accounts of what kind of data it could and couldn't use about us persons and non-us persons. so in the technical world and in the business world and in the
8:20 am
cryptologic world, we've moved very, very far from the world of rules that i discussed before. in fact, an ethic that prized prediction over interpretation, algorithms that could predict on the basis of large-scale databases, came to be seen as not just good in a commercial domain or good in a military or intelligence domain, but as the most fundamental of algorithms, very legitimate objects, even though they were so far from the visions of what intelligence had been in more traditional ai. one of the touchstones of this transformation came in 2009, the year of that google paper. netflix had announced, a few years earlier, a crowdsourcing
8:21 am
challenge and. what netflix said was, we are going to give $1,000,000 to the team that can best improve our algorithm and at the time netflix wasn't streaming they were sending out dvds and so happened is a wide variety of people gained access to an extreme large by the time buy buy at the time data set a kind of commercial data set and and at the time it was quite difficult for ordinary say programmers or researchers or others to gain access to large scale commercial datasets only the big players, the googles, the facebooks, the amazon and the netflix had access to this kind of data. netflix released this and then said, see what you can do. and the story is quite interesting. people tried all kinds of
8:22 am
algorithms to try to interpret it and then try to predict on the basis of it, and various people came together into teams that would combine their algorithms. now, the winning approach, the group here that's called bellkor's pragmatic chaos, didn't have one predictor. rather, what they did is they produced a giant engine which combined a huge number of individual predictors into blends, which were combined with still further blends. now, this algorithmic prediction system was able to better predict the kind of movies that people liked, but it was, from the standpoint of any traditional scientific approach to knowledge, bizarre, because it was just this almost random-seeming combination of all kinds of different machine learning and statistical classifiers that were rammed together. and that
8:23 am
worked. it was an object that fundamentally worked without giving you any understanding at all, but it was really a touchstone for the power of what you could do if you harnessed a large amount of computational power, a large dataset and lots of predictors to make predictions along one particular metric. so it was at once a moment in which this ethos of prediction gained a sort of widespread attention, but it also was a model for how you might organize research itself. now, the google paper i began with said we'd been looking for the wrong kind of theory, that language is a sort of granular, complicated thing, and we needed to create knowledge systems that
8:24 am
recognize that. the netflix challenge sort of doubled down on that and said the way that we do that is not through, say, individual geniuses thinking and figuring out theories, but rather collectives whose results contribute towards a larger project of coming together and trying to maximize some form of metric. the netflix challenge then modeled an idea that lots of people working together in a competition, trying to maximize something, could do better than any one group could do. and in fact, this meshed perfectly with organizations who had fundamental metrics at the heart of their business. for example, oh, that was a misprint, but if you think about facebook: facebook becomes very early engaged in maximizing engagement on its website. that is,
8:25 am
its goal in some sense as a business is to have the largest number of people spend as much time as possible on its website, and thus its algorithms were designed primarily with that in mind, that single metric. now, this is what's called the secret sauce of machine learning, and it turns out to be enormously, spectacularly powerful, far more so than anyone ever had any right to think it was going to be. and it works in any domain in which you can sort out an agreed-upon metric and maximize it, whether it's engagement, whether it's a score in predicting what people like, whether it's getting high scores in a game, or indeed fundamental scientific issues. so, in 2012, that neural network approach, which i showed had been outright attacked by people in the symbolic tradition, came back with a vengeance.
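as an aside, the blending idea behind the netflix winner can be sketched very simply. this is a toy illustration with made-up numbers, not anything from the actual prize entries: several mediocre predictors, averaged together, can beat every one of them individually.

```python
# toy illustration of the netflix-prize "blend" idea:
# several imperfect predictors, averaged, beat each one alone.

def rmse(preds, truth):
    """root mean squared error between predictions and true values."""
    return (sum((p - t) ** 2 for p, t in zip(preds, truth)) / len(truth)) ** 0.5

truth = [float(x) for x in range(10)]

# three hypothetical predictors, each systematically off in a different way
pred_a = [t + 1.0 for t in truth]   # overshoots
pred_b = [t - 1.0 for t in truth]   # undershoots
pred_c = [t + 0.5 for t in truth]   # overshoots a little

# the "blend": just the average of the three predictions
blend = [(a + b + c) / 3 for a, b, c in zip(pred_a, pred_b, pred_c)]

errors = {name: rmse(p, truth) for name, p in
          [("a", pred_a), ("b", pred_b), ("c", pred_c), ("blend", blend)]}

# the blend's error is smaller than every individual predictor's error
assert errors["blend"] < min(errors["a"], errors["b"], errors["c"])
```

because the three predictors err in different directions, their mistakes partially cancel in the average; the real netflix blends did something far more elaborate, but the principle is the same.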
8:26 am
it was rebranded deep learning at first, and it's deep because it was a form of network that had many layers. and i won't go into the details, but what those many layers allowed it to do was overcome some of the formidable problems of a much simpler neural network. last time i told you that simple neural networks can't figure out how to replicate what's called an xor function, but if you have many layers, you can. now, people had known this in some sense for decades, but what they didn't have were three things. they didn't have enough computer time to train a neural network, because training a network is slow, computationally expensive and expensive in terms of
8:27 am
electricity. so they didn't have the compute time. they also didn't have the data. neural networks require huge amounts of data to train: to train a neural network to recognize a logical function takes a huge amount of data; to train a symbolic machine to do a logical thing takes no data at all. so they required lots of compute time and lots of data. but also, neural networks had always been suspect. why were they suspect? because you cannot, most of the time, figure out how it is that they are making the predictions that they do. now, think back to that netflix predictor i showed you. that predictor was this conjuring of all kinds of predictors. it was no model telling you, giving you an understanding of, people's cognitive states. it was a purely predictive thing. well, so were neural networks. and in fact, neural networks turned out to be even better, if
8:28 am
you had sufficient compute and cash to run them, and energy to run them, and data to do it, and you didn't mind if all you got was prediction. and by 2012, there were some very, very large corporations that had exactly that combination of elements. and hence deep learning, or neural networks, explodes onto the scene and turns out to be enormously good at a huge number of tasks, tasks that we use all the time, from voice recognition, to recommending what kind of websites you should see on any of your social media, to fundamental questions of thinking about protein folding, to indeed questions of understanding language itself. and above all, there was a moment in which there was a large database of images, and the deep learning algorithms just blew
8:29 am
the other algorithms out of the water when trained on a huge amount of visual images. it was at that point that deep learning and this entire approach gets rebranded, around 2015, as ai. now, as i said to you, ai for many people from the nineties through the aughts and into the twenty-tens was the kind of old-fashioned thing; it was precisely not what was exciting and important. but a.i. has always been a branding tool, and the explosion of powerful neural networks on large and extremely large computational platforms, with huge amounts of data, with fundamental predictive problems, turned out to be just the recipe for a rebranding of all of that as artificial intelligence.
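the xor point a moment ago can be made concrete. this is a minimal hand-built sketch, not anything trained or from the lecture itself: the weights below are picked by hand just to show that a network with one hidden layer can represent xor, which a single-layer perceptron provably cannot.

```python
# a single-layer perceptron cannot compute xor, but two layers can.
# weights here are fixed by hand (no training), purely for illustration.

def step(x):
    """threshold activation: fire (1) if input exceeds zero, else 0."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    # hidden layer: h1 computes OR, h2 computes AND
    h1 = step(a + b - 0.5)   # fires if at least one input is 1
    h2 = step(a + b - 1.5)   # fires only if both inputs are 1
    # output layer: "OR but not AND" is exactly xor
    return step(h1 - h2 - 0.5)

# check all four input combinations against python's xor operator
for a in (0, 1):
    for b in (0, 1):
        assert xor_net(a, b) == (a ^ b)
```

the extra layer is what matters: xor is not linearly separable, so no single threshold unit can compute it, but composing two layers of threshold units handles it trivially.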
8:30 am
and it happened relatively recently, and that produces what we are dealing with now. so the systems that we've become so familiar with in the last couple of years, notably chatgpt and then image production, are descendants of these, and like the other deep learning networks, even more so, they depend on enormous amounts of computational power and truly tremendous data sets, and they're fundamentally predictive. the numbers are so large, though, that only a small number of companies and an even smaller number of nation states are capable of training these kinds of models. they also depend on a really, really fantastic technical trick that was published by google in a paper called "attention is all
8:31 am
you need," which allows you to connect to large datasets and gives them a kind of memory. that is what allows us to ask these kinds of questions. so when you think about chatgpt, or all of the phenomena like it, just keep in mind it depends fundamentally on truly vast infrastructures of compute, large datasets and a willingness to pursue a kind of predictive ethic. and the amazing thing is that it turns out to be enormously good at producing natural english. it's actually unbelievably surprising. so in the paper i began with, the google researchers say it is a really strange state of affairs that the world is set up in a way that we can learn so much from data. and whatever the effects of chatgpt and its like in the next hundred years, one of the most far-reaching aspects of
8:32 am
them is that it challenges our own vision of what it is to know something, and the nature of the world. language is such, it turns out, that an algorithm which essentially predicts what the next letter should be is capable of producing language that we register as almost human-like, and is capable of organizing things in a way that is almost human-like. those are the epistemic implications, and how those will play out is going to be for all of you to decide, in some sense. now, there is a massive conversation going on about the implications of this, because deep learning hasn't just been rebranded as ai; it's that we are all attending to things that are starkly better than anything most of us anticipated might exist in our lifetimes. and this brings us to this
8:33 am
bigger conversation about how we think about a.i. when we think about a.i., we don't think about it neutrally, without sort of a broader cultural conversation. and here i've given you, these are just films, but of course there are many, many books as well. now, right now there's a very vibrant controversy about the harms and dangers of a.i., potential and current. and just last, i think it was two weeks ago, maybe, there was a major a.i. summit, nowhere less than bletchley park, where a large number of leaders came together to sort of make statements about thinking about the powers and harms of a.i. and at the same time, questions of a.i. dominance very much structure a lot of facets of trade policy, particularly the united states vis-a-vis the people's republic of china.
8:34 am
this is a bigger topic than i can deal with in this lecture. but just to conclude, i want you to think about two major schools that are really important and central in the conversation, and if you're going to get into this, you really need to wrestle with both of them. the first emerges from groups of people who've been concerned with how automatic decision-making systems are going to affect people in the here and now. you remember, back in my lecture on privacy, i talked about how people worried about privacy and data both as against the state and against corporations, and noted that data was most often going to be mobilized against the least empowered sets of people. with the explosion of a database economy and database governance, this has turned out to be very, very true, whether it's in
8:35 am
systems for judging whether people are going to be recidivists, or in building facial recognition systems, the everyday systems all around us, most of which we don't even think of as ai but now are classified as ai. one of the famous science fiction touchstones is a movie based on a philip k. dick story called minority report, in which tom cruise plays someone who uses what are called precogs, who judge and predict where crime is going to appear, and stop it before it happens. this mode of prediction is very powerful in thinking about the attempts to create predictive algorithms that say crime is likely to appear here, when those predictive algorithms are based on historical data that is very much grounded in the very inequality that is all around us and that frames our entire society. so you have a whole array of
8:36 am
civil society people and people in governance working on this, and it's among the most important conversations happening right now. in the last couple of years, another way of thinking has really come to the fore, and it is very much grounded in the narratives of existential risk that run through so much of science fiction, particularly popular science fiction, as in the terminator, where arnold schwarzenegger's character goes back in time to kill the person who is going to defeat the machines. so there's a big conversation, fed by a lot of large tech money, that is worried about ai algorithms as a fundamental risk to the species itself. and these two camps do not see much eye to eye. one is very much concerned with
8:37 am
questions of, say, drones and policing in the here and now. the other is worried that we won't, say, make it to mars if the machines kill us off. these are both central conversations, and they illustrate the way in which the cultural conversation around ai is going to be enormously important in the subsequent development of this technology. okay. so for next time, we are going to be talking about how this cluster of predictive algorithms came to shape the web. in the last couple of lectures, we've talked about the transformation from the early vision of a peer-to-peer, truly democratic internet, and we're looking at how it became the very different internet in which we live today. central to that story is the predictive algorithms that recommend the content you see, that create the information ecosystem
8:38 am
in which we live, the differential economic and information infrastructures in which we live. but that story is equally one about how shaping the web in order to make money was central in creating the very infrastructures that made these predictive algorithms possible. you don't have chatgpt, you don't have google search, you don't have facebook prediction algorithms unless you have the capacity to create them, and that is based on huge training data grounded in relatively weak privacy and intellectual property laws. and we need to track this in order to understand what comes to produce a.i. okay, i actually do have time for questions today. so, anything? yeah, please.
8:39 am
[question inaudible] oh, so the positive effects, right. so the question is, what about the positive effects? yeah, i haven't played those up, but there's no question that you can think about them in a variety of places. there are very easy ones, like the fact that voice recognition now works so enormously well, and that is a fundamentally accessibility-producing technology, right? it is transformative. we will probably get over a kind of, uh, shock and awe about gpt and other sorts of things and see those as fundamental tools, just as calculators are tools. and then, of course, in a wide variety of sciences, there are incredibly rich databases that previously were not tractable, so we simply can do very different kinds of science, whether it's protein folding or other kinds of things.
8:40 am
so you're absolutely right that i haven't played that up as much. now, some of that's pretty contested because, as i told you, the explosion of machine learning now is focused on prediction, and for many scientists prediction is but one part of the equation. when your fundamental interest is in, say, chemical mechanisms, the predictions you're able to make about those is not the same as providing a chemical mechanism. so in discipline after discipline, you see a tussle between new kinds of science, which are predictively powerful, based on huge data, and forms of reasoning which are based more on, say, causal modeling. and the funny thing is, this happens not just in the sciences, but it very much happens in commerce. so one of the earliest great successes of much of this, when it wasn't even called machine learning, it was just called data mining, was replacing
8:41 am
traditional experts at marketing at, say, a grocery store or a drugstore with the predictions of an algorithm that says, you know, actually, and this is the most famous anecdote, it would seem counterintuitive, but if you put diapers and beer together in a late-night drugstore, you're going to sell a lot more of both. and a way to think about that is you're changing the kinds of expertise behind fundamental decisions. and then it crosses the sciences, the humanities and all kinds of business practices. and we're going to see continuing tussling over the strengths and weaknesses of those different kinds of approaches. so that's a wonderful question. other thoughts? yeah. so this is asking you to do a little bit of forecasting, but you were mentioning these biases amongst society that are making their way into these datasets, and dr. benjamin here at princeton coined the term garbage in and garbage out,
8:42 am
and i wanted to know your thoughts on, you know, how we would eventually overcome that in terms of large data models moving forward. so the question is, if you have lots of garbage in your dataset, how are you going to prevent lots of garbage coming out? and this is an active problem, and it's easy to portray it as a kind of divide between those people who recognize these biases and those who don't. and how are those being resolved? well, it's a major research issue, and there are some people who think the answers are going to be technical. and so there's an entire enterprise within computer science that is committed to making algorithms more fair, and some of the great work is being done here. now, it turns out, and it's more technical than i can get into in this class, that the technical definitions of fairness are logically contradictory.
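[editor's note: a minimal numeric sketch of the contradiction mentioned here, in the spirit of the fairness impossibility results; the numbers are hypothetical. when two groups have different base rates, a classifier with identical error rates in both groups cannot also have equal positive predictive value.]

```python
# hypothetical illustration: same false positive rate (fpr) and false
# negative rate (fnr) in both groups, but different base rates, forces
# positive predictive value (ppv) to differ between the groups.
def ppv(base_rate, fpr, fnr):
    true_pos = (1 - fnr) * base_rate          # correctly flagged positives
    false_pos = fpr * (1 - base_rate)         # incorrectly flagged negatives
    return true_pos / (true_pos + false_pos)

ppv_a = ppv(0.5, fpr=0.1, fnr=0.2)  # group a: 50% base rate
ppv_b = ppv(0.2, fpr=0.1, fnr=0.2)  # group b: 20% base rate, same error rates
print(ppv_a, ppv_b)  # the two predictive values cannot be equal
```

so equalizing one fairness metric across groups breaks another, which is why, as the lecture says, it comes down to a policy choice.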
8:43 am
so it always has to come down to a human policy decision. but it's very clear that the solutions to this are not going to be merely technical; they're also going to lie in the right kind of collective decision-making. we don't know a lot about how gpt works under the hood. part of it is an engine, as i explained, that works on what's called semi-supervised learning, and it learns a lot of the awful aspects that are hardwired into the large language corpus. but if you try to get gpt to do a lot of things, it will utterly refuse to. for example, because i'm a deep geek, when it first became available i asked it to make lots of evil constitutions of the united federation of planets, because of course i did. and it wouldn't do it, for certain keywords. so clearly there had been work to hardwire it to prevent inegalitarian outcomes.
8:44 am
but when i asked it to use plato's tripartite division of society, which is a non-egalitarian society, it didn't hit any of those keywords, and it happily produced an inegalitarian one. so some of the solutions may come from sort of hardwired things, and some of them may come from really transforming the kinds of data sets and recognizing the inequities in data sets. but that's incredibly hard. for example, it's easy to show that if you have a large data set and you remove race from it completely, in the united states zip code is such a powerful proxy for race that you're still predicting on race. so it's a very hard technical and non-technical problem. it is one of the great challenges of our time, especially if your attitude isn't that we are just going to get rid of all these algorithms. if we say these are going to be built into our decision systems, then it's incumbent on all of us, technologists and everyone alike, to produce systems that comport with the kinds of societies we want.
8:45 am
great question. let's see. yeah, we have time for one more, please. john, how do you see the push for artificial general intelligence fitting into the conversation of, like, symbolic reasoning versus predictive a.i. that you spoke about? well, people are still really divided about this. a lot of the people in more traditional ai, and people in cognitive science, are quite explicit that whatever chat gpt and all the foundation models and all of the image models and the variety of other technologies, which i'm just capturing under those terms, can do, they still can't do basic things like logical reasoning or arithmetic, a whole set of things that we think are essential to human intelligence. so they're enormously good predictive engines. well, one term that a group of researchers uses is stochastic parrots, and on that view they're never going to be more than that.
8:46 am
so as a historian, what you see is a continuation of this long-term division between accounts of what it is to be intelligent. what is so shocking in some sense about these generative models is how many domains they perform better in than anything symbolic approaches could ever have produced. but at a fundamental level, to many people, they just will never be intelligent in the way we understand intelligence to be. there's a whole other way of looking at it, which is that in the past 30 years we've come to know so much more about the diverse forms of intelligence in the animal world, so my prediction is that we will come to appreciate taxonomies of different forms of animal intelligence, different forms of machine intelligence and different forms of human intelligence, rather than seeing
8:47 am
them as an either/or. we're going to recognize this explosion of different kinds, and the kind that i'm talking about today is this rather remarkable, vast empirical collection that can generalize and produce on the basis of incredibly large datasets. okay, our time is up. i actually finished on time. next time, as i said, we're going to look at what happens when these sorts of things are built into the web, and indeed how the companies that produce them, the platforms, are precisely the ones with the money, compute and data to produce these kinds of things. it's very much part of our situation. so i'll see you on thursday.
8:48 am
8:49 am
