
HARDtalk  BBC News  December 21, 2023 4:30am-5:01am GMT

4:30 am
welcome to hardtalk, i'm stephen sackur. are the machines about to take over? that basic fear seems to underpin much of the discussion about artificial intelligence, and parallel developments such as synthetic biology. the latest wave of tech advances offers us extraordinary new possibilities, but do we flawed human beings have the will and the means to contain and control them? well, my guest is mustafa suleyman, ceo of inflection ai and the author of a challenging book on ai and us. is that a doomed relationship?
4:31 am
mustafa suleyman, welcome to hardtalk. thanks for having me. it's a great pleasure to have you here. now, you, in your career, are wrestling with the complex relationship between us humans and increasingly intelligent machines. it seems, if i've got it right, that you're not so much worried about the machines — you're worried about us, our wisdom. is that right? it's a great way of putting it. i mean, i think the great challenge we have is one of governance. containment means that we should always be in control of the technologies that we create. and we need to make sure that they are accountable to us as a species, they work for us over many, many decades
4:32 am
and centuries, and they always do way, way, way more good than harm. with every new type of technology we have, there are new risks — risks that, at the time we experience them, feel really scary. they're new. we don't understand them. they could be completely novel in ways that could be very harmful. but that's no reason to feel powerless. these technologies are things that we create and, therefore, we have every ability to control. all right. and to be clear, as you've just described it, and i guess you would include innovations from, i don't know, the discovery of fire to the wheel to the sail to, in our more recent industrial age, the steam engine, maybe even all the way up to the internet. these game-changing technological advances. you seem to be suggesting that the inventors, the innovators, people like you are today, can't foresee where they'll go. we can't always foresee where they'll go, but designed with intention, we can shape them and provide guardrails around them, which ultimately affect their trajectory
4:33 am
in very fundamental ways. take aircraft, for example. i mean, flight, when it first appeared, seemed incredibly scary. how could i possibly get into a tin tube at 40,000ft and go 1,000 mph? but, over many decades, we've iterated on our process of safety and oversight. we've had regulators that force competitors to share insights between them. we have the black box recorder, the famous black box recorder that tracks all the telemetry on board the plane. it records everything that happens in the cockpit and shares those insights across the world. now, that's a great achievement for our species. i mean, it means that we can get all the benefits of flight with actually very, very few of the risks overall. and that's the approach that we need to take with ai. all right. so let's get down to ai. now, i began by positing a sort of widely held human fear that artificial intelligence leads inevitably to a sort of superintelligent machine that ultimately has sort of full autonomy and somehow goes rogue on humanity.
4:34 am
you have deep concerns but it seems that is not your deepest prime concern. unfortunately, the big straw man that has been created over the last few years in ai is that of terminator or skynet — this idea that you would have a recursively self-improving artificial intelligence, one that can update and improve its own code independently of human oversight, and then... right. sort of breaks the umbilical cord with the human creator. right. that's exactly right. and that would be the last thing we want. we want to make sure that we always remain in control and we get to shape what it can and can't do. and i think that the odds of this recursively self-improving ai causing an intelligence explosion, as it's often referred to, i think they're relatively low and we're quite far away from that moment.
4:35 am
and part of the problem at the moment is that we're so fixated on that possibility — mostly because sci-fi gives us an easy way in to talking about it when we don't have very long — that we're actually missing a lot of the more practical near-term issues which we should focus on. and that is what you call the coming wave. and you say the wave is all about understanding that where we are with artificial intelligence today is not going to be where we are in, say, five years' time. because i think the phrase you use is, "artificial capable intelligence is transformative." so i need you to explain to me how and why it's transformative. that's exactly right. so think of ai development over the last ten years in two ways. the first is that we have been training ais to classify — that is understand — our perceptual input. right? so, they have got good at identifying objects in images — so good, in fact, that they can be used to control, you know, self-driving cars. right? they have got really good at transcribing text. when you dictate to your phone, you know, you record that
4:36 am
speech and you translate it into written text. they can do language translation. these are all recognition. they're understanding and classifying content. now they've got so good at that that they can now generate new examples of those images, of that audio, of the music, and that's the new generative ai boom that we're currently in. the last year has been incredible with these large language models that can produce new text that is almost at human level in terms of accuracy. they are creating — these new chatbots and other sort of apps that we talk about — they are creating text, they're creating music, they're creating even visual art. and that gives us a sense that the machine somehow is developing its own consciousness, which of course it's not. it's simply got more and more and more data to use and sort of mould in a way that fits the instructions it's given. so, how far can this go? that's exactly right. and so i think we often have
4:37 am
a tendency to anthropomorphize and project more into these systems than they actually have. and that's understandable. three or four years ago, people would often say, "well, ai will never be creative, that will always be the preserve of humans." a few years ago, people would say, "well, ai will never have empathy." and now you can see that these conversational ais and chatbots are actually really good at that. the next wave of features that they're likely to develop over the next five years, as these models get bigger and bigger, are capabilities around planning. you know, you referred earlier to artificial capable intelligence. in the past, we've just defined intelligence on the basis of what an ai can say. now, we're defining intelligence on the basis of what an ai can do. these ais will co-ordinate multiple steps in complicated scenarios. they will call apis, they'll use websites, they will use back-end databases. they'll make phone calls to real humans, they'll make phone calls to other ais, and they'll use that to make really complicated plans over
4:38 am
extended periods of time. you paint this sort of near-to-medium-term future of the expansion of ai into every aspect of our lives, and you say that writing the book about it — you call it the coming wave — was a gut-wrenching experience. why was it gut-wrenching? are you...are you frightened? i'm not frightened, but i think new technologies always bring very new challenges. and in this case, there is the potential that very powerful systems will spread far and wide. and i think that has the potential to pose really significant, catastrophic threats to the future of the nation state. so, for example, all technologies in the history of our species, all the general purpose technologies that you just referred to, to the extent that they have been incredibly useful, they've always got cheaper, easier to use, and therefore they have spread all around the world. that has been the absolute engine of progress for centuries, and it's been a wonderful thing. it's delivered enormous benefits in every possible respect, in every domain.
4:39 am
if that trajectory continues, when we're actually talking about the creation of intelligence itself or, in synthetic biology, life itself, in some respects, then these units of power are going to get smaller and smaller and smaller and spread far and wide. everybody in 30 to 50 years may potentially get access to state-like powers, the ability to co-ordinate huge actions over extended time periods. and that really is a fundamentally different quality to the local effects of technologies in the past — you know, aeroplanes, trains, cars, really important technologies — but they have localised effects when they go wrong. these kinds of ais have the potential to have systemic impact if they go wrong in the future. i mean, there's so much that's profound and deep in what you've just said, i'm almost struggling to know where to start with it. but one... a couple of phrases
4:40 am
just come to my mind. you talked about, uh, the way to create intelligence and synthetic life. this is sort of godlike power that we humans are now looking at, contemplating. but with the best will in the world, probably none of us believe that we deserve godlike powers. we are too flawed. that's surely where the worry comes from. we need these powers more than ever. and that's the paradox. i mean, this is the ultimate prediction engine. we'll use these ais to make more efficient foods, for example, that are drought resistant, that are resistant to pests. we'll use these ais to reduce the cost of health care dramatically. right? we'll use these ais to help us with transportation and education. everyone is going to get access to a personal intelligence in their pocket, which is going to make them much, much smarter and more efficient at their job. that, i think, is going to unleash a productivity boom that is like a cambrian explosion. i mean...
4:41 am
well, hang on. a productivity boom for those like you who absolutely have the skills to be at the forefront of this transformation. but most of us humble humans do jobs which will disappear. it's quite possible, in this world of advanced ai, you won't need journalists. you won't necessarily need half the doctors we've currently got, or the lawyers or a whole bunch of other professions which thought they were safe from mechanisation but are certainly not safe from artificial intelligence. what on earth are human beings going to do when so much of what we need to do is done by machines? these are tools that make us radically smarter and more efficient. so if you are a doctor, you spend a vast portion of your day inputting data, writing notes, doing very laborious, painful work. these are tools that should save you an enormous amount of time so that you can focus on the key things that you need to...
4:42 am
i take your point, but you'll need less doctors. it's possible that you will need less doctors. we may need less of every possible role in the future. yes. to that, i would say, bear in mind that work is not the ultimate goal of our society. the goal of our society is to create peace and prosperity, wealth and happiness, and to reduce suffering. the long march of civilisation is not a march towards creating work. it is a march towards reducing work and increasing abundance. and i believe that over a 30- to 50-year period, we are on a path to producing radically more with radically less. that has been the story of history. but is it not possible that...? i do not mean to interrupt you when you're painting this incredibly positive picture, but is it not possible that you will challenge the mental health of human beings? so much of our self-worth comes from our sense of utility, our usefulness. a lot of that comes from work. in this world you are portraying, 30 years away, where work and productivity
4:43 am
is fundamentally different, machine-led, we humans may feel ourselves becoming progressively more useless. the question is, who is the we? so you and i get an enormous amount of health and wellbeing and identity out of our work, and we are very lucky. we're the privileged few. many, many people don't find that flow and peace and energy in their everyday work. and so i think it's important for us to remember that, over the multi-decade period, we have to be striving towards a world where we've solved the issues of creation and redistribution. the real challenge you're describing is the one that you alluded to at the top of the show. it's a governance question. how do we capture and redistribute the value that is being created in a fair way, that brings all of society with us? and that's a better challenge
4:44 am
to have than not having enough. right. so let's now get to the truly malign possibilities that come with this expansion of transformative ai. because you've just explained that what ai allows is for the sort of, the cost of being powerful to become ever less. it empowers people in ways we haven't imagined before, and it empowers non-state actors and states to do bad things in new ways. how do you ensure that doesn't happen? that's the great challenge. i mean, these models represent the best and the worst of all of us, and the challenge is to try to mitigate the downsides. many people will use these tools to spread more efficient forms of misinformation... it's already happening. to sow dissent, to increase anger and polarisation... and also, to enforce a new level of authoritarianism through surveillance, through the elimination of privacy. and that's why it's critical for us in the west, in europe and in the us and all over the world, to defend our freedoms, because these are clearly potential tools
4:45 am
which introduce new forms of surveillance. they might reduce the barrier to entry to surveillance. and so the challenge for us is figuring out how we don't rabbit-hole down that path. if we give in on our own set of values and accept that we have to then lunge towards more authoritarian surveillance, that would be a complete collapse of the values that we actually stand for. i talked about gut-wrenching emotions as you wrote this. you're clearly worried. however you dress up the positivity of so much that ai offers, you signed a joint statement that came from the center for ai safety earlier this year. it was very simple. it just called upon all of you in this business, including governments, including private-sector people like yourself, to mitigate the risk of extinction — that was what it was called — that could come from ai. that, you said in the statement, should be a global priority. i see no sign that it is becoming that global priority.
4:46 am
just as these ais reduce the barrier to entry, to be able to educate somebody that doesn't have access to education or provide the highest quality health care to someone who can only have a telephone call interaction with an ai health clinician, they also enable people who don't have the expertise in biology, for example, to develop biological or chemical weapons. they reduce the barrier to entry, to access knowledge and take actions. and that is fundamentally the great challenge before us. i think it's an intellectually honest position to not just praise the potential upsides, but look straight in the face of the potential downsides. and wisdom in the 21st century has got to be about holding these two competing directions in tension and being very open and clear about them so that we can mitigate and address the risks today. but let's start our look at where we might expect the mitigation to come from, the responsible, accountable governance of the ai world to come from, by addressing the private sector,
4:47 am
by addressing people like you who've made, let us be honest, hundreds of millions of dollars — in some people's cases billions of dollars — by being sort of market movers, pioneers in artificial intelligence. you have an extraordinary financial stake in constantly pushing the boundaries, don't you? i do. and i'm building a company. it is a public benefit corporation, which is a new type of company, a hybrid for-profit and nonprofit mission, entrenched in our legal charter. so it doesn't solve all of the issues of for-profit missions, but it's a first step in the right direction. and we create an ai called pi, which stands for personal intelligence. it is one of the safest ais in the world today. none of the existing jailbreaks and prompt hacks to try to destabilise these ais and get them to produce toxic or biased content work on pi. pi is very careful, it's very safe, it's very respectful. so i believe...
4:48 am
so you are so concerned about being responsible, being transparent and sort of being audited about what you are doing in this sphere, have you moved right away, then...? and you're based in silicon valley. the old silicon valley mantra was "move fast, break things". and if you talk about a whole generation of tech pioneers who are now obviously multibillionaires, from the gateses to the larry pages, to the zuckerbergs and the musks, these were people who sort of developed their ideas to the max and perhaps only later began to wrestle with some of the downsides. are you saying you've fundamentally changed that model? i think so. i think there is a new generation of ai leaders coming up who have been putting these issues on the table since the very beginning. i co-founded deepmind in 2010, and right from the outset we've been putting the question of ethics and safety in ai on the table, at the forefront. i mean, you mentioned audits. for example, just two months ago, i signed up to the voluntary commitments
4:49 am
at the white house and met with president biden and laid out a suite of proposals that would proactively subject us at inflection ai, and the other big providers of large language models, to audits by independent third parties, and to share those best practices not just with each other, but with the world. right. and when you, in your book, when you address how to mitigate the dangers of ai, you talk about a whole bunch of things which, to me, seem very idealistic. you say you're already being accountable. you're accepting audit from the outside of everything you do, but you also say the international community is going to have to work at this. there's going to have to be amazing cooperation and collaboration. there's no sign of that happening. the united states is looking at a voluntary code. you've been to the white house to discuss that. the eu is looking at legislation on ai, but then look at china. china is also moving as fast as it possibly can in this field. they are not interested in signing up to this sort of idealistic international co-operation that you write about. i've got no shame
4:50 am
in being an idealist. i think that you have to be an idealist to try and move civilisation forward. we have to raise the bar on ourselves and on our peers. now, i can't control what happens in china, but i can certainly praise and endorse the efforts that i see in the eu, for example. i mean, this is a very robust piece of regulation that has been developed in consultation over three-and-a-half years with a very sensible framework, and i've got a lot of praise for it. i think it's excellent. the great fear in europe is that, if they all sign up to this regulation, the chinese are going to have an immediate market advantage because they're not bound by the same rules, they're not bound by transparency, accountability, by fronting up to the international community about what they're doing. that's the pragmatic reality of life. we cannot let that lead to a race to the bottom on values just because they're doing it. it's no justification. we have to stick to our principles. well, yeah, but you talk about containment, containing this phenomenon that you say could be the most wonderful
4:51 am
boon and positive thing for the world, or could be the bringer of chaos, catastrophe and anarchy to the world. you can't have a containment policy if containment is accepted in europe, but not in china. what we can do is focus on trying to get the strategy right for ourselves first and, from that position, engaging constructively with china and other partners that have different sets of values from our own, and start by not demonising china and not excluding them from discussions and negotiations. we have very fundamental differences of values to them, but we have to work with them. but we know, because we see it, that china fundamentally sees the development of ai as a new exercise in state power and an exercise in imposing an ideology, that of the communist party. i'm mindful of a comparison between how we control ai and how we have controlled the potency of the nuclear weapon over the last, whatever it is, 80 years. isn't the truth that, in the nuclear example, we have reached a containment place because of the ultimate deterrent of mutually assured destruction?
4:52 am
will there be an element of that in our approach to ai? nuclear nonproliferation has been a great achievement of our civilisations. in fact, we've reduced the number of nuclear powers from 11 to eight over the last 60 years. that's a great thing. we've massively reduced the number of warheads. we've demonstrated that we can reach international agreement, make compromise, achieve consensus, you're right, where there is a mutually assured destruction incentive. i don't think we have that any time soon in ai, so i agree that there is going to be a period where we really are at odds with one another, and there isn't that kind of dramatic incentive to drive co-operation. however, that doesn't mean there is no incentive. we absolutely must share our safety best practices with our — quote, unquote — "adversaries". one area where there is an incentive is in the development
4:53 am
of synthetic biology. experimenting with pandemic-scale pathogens, engineering them to make them more transmissible and more lethal are capabilities that are soon going to be quite widely available, within five years. desktop synthesisers, those that can be used to actually engineer — that is print or manufacture — new strands of dna, enable people in their garage to experiment with new pathogens. this is a very frightening prospect, and there is very good reason why china, as well as us — europe, the uk, the us — everybody wants to basically try and stop the proliferation of these kinds of tools. final thoughts. going to have to be very quick. stephen hawking said this before he died, "the rise of powerful ai will either be the best or the worst thing ever to happen to humanity. we don't yet know which." when will we know? i have every confidence that it's going to be one of the most dramatic and positive impacts in the history of our species.
4:54 am
it is going to be a huge boon for productivity over the next few decades, and that will become very, very clear within ten years. mustafa suleyman, i thank you for being on hardtalk. thank you. hello. the weather in the run-up to christmas now is looking fairly unsettled, especially so over the next 24 hours because we've got a spell of really windy weather right across the uk. the danish met service have named storm pia. now, that's moving to the north of us, but it's going to bring really windy conditions wherever you are, especially the further north. some heavy showers in the mix, too. and there's the potential
4:55 am
for some significant travel disruption on thursday. so there's that area of low pressure, storm pia, heading towards denmark. lots of isobars on the map here and we've got really windy conditions for scotland, northern ireland, northern england, north wales too. even further south, it's going to be a blustery sort of day. so some heavy showers, especially towards the north and the west. a bit more sunshine returning across scotland through thursday morning. but look at the gusts of wind — 65-75mph, even 80mph up towards the northern isles. really windy, too, for northern ireland, the isle of man, into northern england — 50-60mph gusts. more around the exposed coasts and hills. windy too, but not quite as windy as further north for the likes of south wales and southern england. so cloud and patchy light rain in the south, sunshine and blustery, squally showers moving into the north. even a little bit of snow over the highest ground of scotland. temperatures just 5 degrees in aberdeen, but still up to around 12 down towards the london region. so we've got the mild, cloudy and fairly damp weather in the far southwest overnight. clearer skies elsewhere
4:56 am
as we head through into thursday morning, but then more rain returns from the west through the early hours. i think it's going to be frost-free again heading into friday morning, but the lowest temperatures will be across the northeast of the uk. so we're in the colder air there, but further towards the southwest, we've got milder air, this weather front that is the dividing line between those two air masses. heading through friday, and this area of milder air will spread its way across the uk, the winds coming in from a westerly direction. so that weather front will bring some rain initially across northern ireland, parts of england and wales, tracking its way eastwards — perhaps a little bit of snow for a time, again, over the highest ground across the north of scotland. but for most of us, it'll be rain showers and, again, temperatures between around about 5 to 12 degrees. colder than that, though, towards the northern isles. and then looking ahead towards the festive period, it's a little bit up and down, it's fairly unsettled. mild for the next few days, perhaps things a little bit colder into boxing day. bye-bye.
4:57 am
4:58 am
4:59 am
live from london, this is bbc news. as fighting continues in gaza — the israeli military says it has found a tunnel network used by the hamas leadership in gaza city.
5:00 am
the un security council again postpones a vote calling for a suspension of fighting as diplomats struggle to agree on the language of the draft resolution. a judge is due to rule today on whether the two teenagers convicted of murdering 16-year-old brianna ghey should be named publicly. american prisoners released by venezuela land back in the us. they were freed in a major prisoner swap. welcome to bbc news. hello, i'm lukwesa burak. the israeli military says its ground forces inside gaza city have found tunnel infrastructure — including a spiral staircase and a lift — that it believes served as a base for hamas leaders. an army spokesman said
5:01 am
the network branched out from properties registered to the hamas leader
