tv   HAR Dtalk  BBC News  December 21, 2023 10:30pm-11:01pm GMT

10:30 pm
around here, high pressure, rolling around it, this cloud, outbreaks of rain developing widely, hitting cold air in north—east scotland, at lower levels an icy and quite snowy night in shetland. that is where the cold air is, most of us mild into the morning. a fairly cloudy start. outbreaks of rain, extensive snow in the scottish mountains, further snow flurries in shetland. brightening up towards wales and the south—west, but in between, from northern ireland, south—west scotland, through northern england towards east anglia, this is where the weather front will remain in place, rain on and off during the day. mild for large parts of the uk, 9—13, but barely above freezing in shetland. high pressure down towards the south, the weather front starting to reinvigorate and it is a dividing line between the mild air and the cold air. most
10:31 pm
of us will be in the milder zone. some significant snow to begin with in the mountains in the north—east. on saturday there will be some sunny spells and temperatures up to about 12. warm christmas eve, back to you, sophie. thanks, matt. and that's bbc news at ten. this is bbc news. we will have the headlines at the top of the hour as newsday continues straight after hardtalk. welcome to hardtalk, i'm stephen sackur. are the machines about to take over? that basic fear seems to underpin much of the discussion about artificial intelligence, and parallel developments such as synthetic biology.
10:32 pm
the latest wave of tech advances offers us extraordinary new possibilities, but do we flawed human beings have the will and the means to contain and control them? well, my guest is mustafa suleyman, ceo of inflection ai and the author of a challenging book on ai and us. is that a doomed relationship? mustafa suleyman, welcome to hardtalk. thanks for having me. it's a great pleasure
10:33 pm
to have you here. now, you, in your career, are wrestling with the complex relationship between us humans and increasingly intelligent machines. it seems, if i've got it right, that you're not so much worried about the machines — you're worried about us, our wisdom. is that right? it's a great way of putting it. i mean, i think the great challenge we have is one of governance. containment means that we should always be in control of the technologies that we create. and we need to make sure that they are accountable to us as a species, they work for us over many, many decades and centuries, and they always do way, way, way more good than harm. with every new type of technology we have, there are new risks — risks that, at the time we experience them, feel really scary. they're new. we don't understand them. they could be completely novel in ways that could be very harmful.
10:34 pm
but that's no reason to feel powerless. these technologies are things that we create and, therefore, we have every ability to control. all right. and to be clear, as you've just described it, and i guess you would include innovations from, i don't know, the discovery of fire to the wheel to the sail to, in our more recent industrial age, the steam engine, maybe even all the way up to the internet. these game—changing technological advances. you seem to be suggesting that the inventors, the innovators, people like you are today, can't foresee where they'll go. we can't always foresee where they'll go, but designed with intention, we can shape them and provide guardrails around them, which ultimately affect their trajectory in very fundamental ways. take aircraft, for example. i mean, flight, when it first appeared, seemed incredibly scary. how could i possibly get into a tin tube at 40,000ft and go 1,000 mph? but, over many decades, we've iterated on our process of safety and oversight. we've had regulators that force
10:35 pm
competitors to share insights between them. we have the black box recorder, the famous black box recorder that tracks all the telemetry on board the plane. it records everything that happens in the cockpit and shares those insights across the world. now, that's a great achievement for our species. i mean, it means that we can get all the benefits of flight with actually very, very few of the risks overall. and that's the approach that we need to take with ai. all right. so let's get down to ai. now, i began by positing a sort of widely held human fear that artificial intelligence leads inevitably to a sort of superintelligent machine that ultimately has sort of full autonomy and somehow goes rogue on humanity. you have deep concerns but it seems that is not your deepest prime concern. unfortunately, the big straw man that has been created over the last few years in ai is that of terminator or skynet — this idea that
10:36 pm
you would have a recursively self—improving artificial intelligence, one that can update and improve its own code independently of human oversight, and then... right. sort of breaks the umbilical cord with the human creator. right. that's exactly right. and that would be the last thing we want. we want to make sure that we always remain in control and we get to shape what it can and can't do. and i think that the odds of this recursively self—improving ai causing an intelligence explosion, is how it's often referred to, i think they're relatively low and we're quite far away from that moment. and part of the problem at the moment is that we're so fixated on that possibility — mostly because sci—fi gives us an easy way in to talking about it when we don't have very long — that we're actually missing a lot of the more practical near—term issues which we should focus on. and that is what you call the coming wave. and you say the wave is all about understanding that where we are with artificial intelligence today is not
10:37 pm
going to be where we are in, say, five years' time. because i think the phrase you use is, "artificial capable intelligence is transformative." so i need you to explain to me how and why it's transformative. that's exactly right. so think of ai development over the last ten years in two ways. the first is that we have been training ais to classify — that is understand — our perceptual input. right? so, they have got good at identifying objects in images — so good, in fact, that they can be used to control, you know, self—driving cars. right? they have got really good at transcribing text. when you dictate to your phone, you know, you record that speech and you translate it into written text. they can do language translation. these are all recognition tasks. they're understanding and classifying content. now they've got so good at that that they can now generate new examples of those images, of that audio, of the music, and that's the new generative ai boom that we're currently in.
10:38 pm
the last year has been incredible with these large language models that can produce new text that is almost at human level in terms of accuracy. they are creating — these new chatbots and other sort of apps that we talk about — they are creating text, they're creating music, they're creating even visual art. and that gives us a sense that the machine somehow is developing its own consciousness, which of course it's not. it's simply got more and more and more data to use and sort of mould in a way that fits the instructions it's given. so, how far can this go? that's exactly right. and so i think we often have a tendency to anthropomorphize and project more into these systems than they actually have. and that's understandable. three or four years ago, people would often say, "well, ai will never be creative, that will always be the preserve of humans."
10:39 pm
a few years ago, people would say, "well, ai will never have empathy." and now you can see that these conversational ais and chatbots are actually really good at that. the next wave of features that they're likely to develop over the next five years, as these models get bigger and bigger, are capabilities around planning. you know, you referred earlier to artificial capable intelligence. in the past, we've just defined intelligence on the basis of what an ai can say. now, we're defining intelligence on the basis of what an ai can do. these ais will co—ordinate multiple steps in complicated scenarios. they will call apis, they'll use websites, they will use back—end databases. they'll make phone calls to real humans, they'll make phone calls to other ais, and they'll use that to make really complicated plans over extended periods of time. you paint this sort of near—to—medium—term future
10:40 pm
of the expansion of ai into every aspect of our lives, and you say that writing the book about it — you call it the coming wave — was a gut—wrenching experience. why was it gut—wrenching? are you...are you frightened? i'm not frightened, but i think new technologies always bring very new challenges. and in this case, there is the potential that very powerful systems will spread far and wide. and i think that has the potential to pose really significant, catastrophic threats to the future of the nation state. so, for example, all technologies in the history of our species, all the general purpose technologies that you just referred to, to the extent that they have been incredibly useful, they've always got cheaper, easier to use, and therefore they have spread all around the world. that has been the absolute engine of progress for centuries, and it's been a wonderful thing. it's delivered enormous benefits in every possible respect, in every domain. if that trajectory continues, when we're actually talking about the creation of intelligence itself or, in synthetic biology, life itself, in some respects, then these units of power are going to get smaller and smaller and smaller and spread far and wide. everybody in 30 to 50 years may
10:41 pm
potentially get access to state—like powers, the ability to co—ordinate huge actions over extended time periods. and that really is a fundamentally different quality to the local effects of technologies in the past — you know, aeroplanes, trains, cars, really important technologies — but have localised effects when they go wrong. these kinds of ais have the potential to have systemic impact if they go wrong in the future. i mean, there's so much that's profound and deep in what you've just said, i'm almost struggling to know where to start with it. but one... a couple of phrases just come to my mind. you talked about, uh, the way to create intelligence and synthetic life. this is sort of godlike power that we humans are now looking at, contemplating. but with the best will in the world, probably none of us believe that we deserve godlike powers. we are too flawed. that's surely where the worry comes. we need these powers
10:42 pm
more than ever. and that's the paradox. i mean, this is the ultimate prediction engine. we'll use these ais to make more efficient foods, for example, that are drought resistant, that are resistant to pests. we'll use these ais to reduce the cost of health care dramatically. right? we'll use these ais to help us with transportation and education. everyone is going to get access to a personal intelligence in their pocket, which is going to make them much, much smarter and more efficient at their job. that, i think, is going to unleash a productivity boom that is like a cambrian explosion.
10:43 pm
i mean... well, hang on. a productivity boom for those like you who absolutely have the skills to be at the forefront of this transformation. but most of us humble humans do jobs which will disappear. it's quite possible, in this world of advanced ai, you won't need journalists. you won't necessarily need half the doctors we've currently got, or the lawyers or a whole bunch of other professions which thought they were safe from mechanisation but are certainly not safe from artificial intelligence. what on earth are human beings going to do when so much of what we need to do is done by machines? these are tools that make us radically smarter and more efficient. so if you are a doctor, you spend a vast portion of your day inputting data, writing notes, doing very laborious, painful work. these are tools that should save you an enormous amount of time so that you can focus on the key things that you need to... i take your point, but you'll need less doctors. it's possible that you will need less doctors. we may need less of every possible role in the future. yes. to that, i would say, bear in mind that work is not the ultimate goal of our society. the goal of our society is to create peace and prosperity, wealth and happiness, and to reduce suffering.
10:44 pm
the long march of civilisation is not a march towards creating work. it is a march towards reducing work and increasing abundance. and i believe that over a 30— to 50—year period, we are on a path to producing radically more with radically less. that has been the story of history. but is it not possible that...? i do not mean to interrupt you when you're painting this incredibly positive picture, but is it not possible that you will challenge the mental health of human beings? so much of our self—worth comes from our sense of utility, our usefulness. a lot of that comes from work. in this world you are portraying, 30 years away, where work and productivity is fundamentally different, machine—led, we humans may feel ourselves becoming progressively more useless. the question is, who is the we? so you and i get an enormous amount of health and wellbeing and identity out of our work, and we are very lucky. we're the privileged few.
10:45 pm
many, many people don't find that flow and peace and energy in their everyday work. and so i think it's important for us to remember that, over the multi—decade period, we have to be striving towards a world where we've solved the issues of creation and redistribution. the real challenge you're describing is the one that you alluded to at the top of the show. it's a governance question. how do we capture and redistribute the value that is being created in a fair way, that brings all of society with us? and that's a better challenge to have than not having enough. right. so let's now get to the truly malign possibilities that come with this expansion of transformative ai. because you've just explained that what ai allows is for the sort of, the cost of being powerful to become ever less. it empowers people in ways we haven't imagined before, and it empowers non—state actors and states to do bad things in new ways.
10:46 pm
how do you ensure that doesn't happen? that's the great challenge. i mean, these models represent the best and the worst of all of us, and the challenge is to try to mitigate the downsides. many people will use these tools to spread more efficient forms of misinformation... it's already happening. to sow dissent, to increase anger and polarisation... and also, to enforce a new level of authoritarianism through surveillance, through the elimination of privacy. and that's why it's critical for us in the west, in europe and in the us and all over the world, to defend our freedoms, because these are clearly potential tools which introduce new forms of surveillance. they might reduce the barrier to entry to surveillance. and so the challenge for us is figuring out how we don't rabbit—hole down that path. if we give in on our own set of values and accept that we have to then lunge towards more authoritarian
10:47 pm
surveillance, that would be a complete collapse of the values that we actually stand for. i talked about gut—wrenching emotions as you wrote this. you're clearly worried. however you dress up the positivity of so much that ai offers, you signed a joint statement that came from the center for ai safety earlier this year. it was very simple. it just called upon all of you in this business, including governments, including private—sector people like yourself, to mitigate the risk of extinction — that was what it was called — that could come from ai. that, you said in the statement, should be a global priority. i see no sign that it is becoming that global priority. just as these ais reduce the barrier to entry, to be able to educate somebody that doesn't have access to education or provide the highest quality health care to someone who can only have a telephone call
10:48 pm
interaction with an ai health clinician, they also enable people who don't have the expertise in biology, for example, to develop biological or chemical weapons. they reduce the barrier to entry, to access knowledge and take actions. and that is fundamentally the great challenge before us. i think it's an intellectually honest position to not just praise the potential upsides, but look straight in the face of the potential downsides. and wisdom in the 21st century has got to be about holding these two competing directions in tension and being very open and clear about them so that we can mitigate and address the risks today. but let's start our look at where we might expect the mitigation to come from, the responsible, accountable governance of the ai world to come from, by addressing the private sector, by addressing people like you who've made, let us be honest, hundreds of millions of dollars — in some people's cases billions of dollars — by being sort of market movers, pioneers in artificial intelligence. you have an extraordinary financial stake in constantly pushing the boundaries, don't you?
10:49 pm
i do. and i'm building a company. it is a public benefit corporation, which is a new type of company, a hybrid for—profit and nonprofit mission, entrenched in our legal charter. so it doesn't solve all of the issues of for—profit missions, but it's a first step in the right direction. and we create an ai called pi, which stands for personal intelligence. it is one of the safest ais in the world today. none of the existing jailbreaks and prompt hacks to try to destabilise these ais and get them to produce toxic or biased content work on pi. pi is very careful, it's very safe, it's very respectful. so i believe... so you are so concerned about being responsible, being transparent and sort of being audited about what you are doing in this sphere, have you moved right away, then... ? and you're based in silicon valley. the old silicon valley mantra was "move fast, break things". and if you talk about a whole
10:50 pm
generation of tech pioneers who are now obviously multibillionaires, from the gateses to the larry pages, to the zuckerbergs and the musks, these were people who sort of developed their ideas to the max and perhaps only later began to wrestle with some of the downsides. are you saying you've fundamentally changed that model? i think so. i think there is a new generation of ai leaders coming up who have been putting these issues on the table since the very beginning. i co—founded deepmind in 2010, and right from the outset we've been putting the question of ethics and safety and ai on the table, at the forefront. i mean, you mentioned audits. for example, just two months ago, i signed up to the voluntary commitments at the white house and met with president biden and laid out a suite of proposals that would proactively subject us at inflection ai, and the other big providers of large language models, to audits by independent third parties, and to share those best practices not just with each other, but with the world. right.
10:51 pm
and when you, in your book, when you address how to mitigate the dangers of ai, you talk about a whole bunch of things which, to me, seem very idealistic. you say you're already being accountable. you're accepting audit from the outside of everything you do, but you also say the international community is going to have to work at this. there's going to have to be amazing cooperation and collaboration. there's no sign of that happening. the united states is looking at a voluntary code. you've been to the white house to discuss that. the eu is looking at legislation on ai, but then look at china. china is also moving as fast as it possibly can in this field. they are not interested in signing up to this sort of idealistic international co—operation that you write about. i've got no shame in being an idealist. i think that you have to be an idealist to try and move civilisation forward. we have to raise the bar on ourselves and on our peers. now, i can't control what happens in china, but i can certainly praise and endorse the efforts that i see in the eu, for example. i mean, this is a very robust piece of regulation that has been developed in consultation
10:52 pm
over three—and—a—half years with a very sensible framework, and i've got a lot of praise for it. i think it's excellent. the great fear in europe is that, if they all sign up to this regulation, the chinese are going to have an immediate market advantage because they're not bound by the same rules, they're not bound by transparency, accountability, by fronting up to the international community about what they're doing. that's the pragmatic reality of life. but that cannot mean a race to the bottom on values just because they're doing it. it's no justification. we have to stick to our principles. well, yeah, but you talk about containment, containing this phenomenon that you say could be the most wonderful boon and positive thing for the world, or could be the bringer of chaos, catastrophe and anarchy to the world. you can't have a containment policy if containment is accepted in europe,
10:53 pm
but not in china. what we can do is focus on trying to get the strategy right for ourselves first and, from that position, engaging constructively with china and other partners that have different sets of values from our own, and start by not demonising china and not excluding them from discussions and negotiations. we have very fundamental differences of values to them, but we have to work with them. but we know, because we see it, that china fundamentally sees the development of ai as a new exercise in state power and an exercise in imposing an ideology, that of the communist party. i'm mindful of a comparison between how we control ai and how we have controlled the potency of the nuclear weapon over the last, whatever it is, 80 years. isn't the truth that, in the nuclear example, we have reached a containment place because of the ultimate deterrent of mutually assured destruction? will there be an element of that in our approach to ai?
10:54 pm
nuclear nonproliferation has been a great achievement of our civilisations. in fact, we've reduced the number of nuclear powers from 11 to eight over the last 60 years. that's a great thing. we've massively reduced the number of warheads. we've demonstrated that we can reach international agreement, make compromise, achieve consensus, you're right, where there is a mutually assured destruction incentive. i don't think we have that any time soon in ai, so i agree that there is going to be a period where we really are at odds with one another, and there isn't that kind of dramatic incentive to drive co—operation. however, that doesn't mean there is no incentive. we absolutely must share our safety best practices with our — quote, unquote — "adversaries". one area where there is is in the development of synthetic biology. experimenting with pandemic—scale pathogens, engineering them to make them more transmissible and more lethal are capabilities that are soon going to be quite widely available within five years. desktop synthesisers, those that can be used
10:55 pm
to actually engineer — that is print or manufacture — new strands of dna, enable people in their garage to experiment with new pathogens. this is a very frightening prospect, and there is very good reason why china as well as us, europe, the uk, the us, everybody wants to basically try and stop the proliferation of these kinds of tools. final thoughts. going to have to be very quick. stephen hawking said this before he died, "the rise of powerful ai will either be the best or the worst thing ever to happen to humanity. we don't yet know which." when will we know? i have every confidence that it's going to be one of the most dramatic and positive impacts in the history of our species. it is going to be a huge boon for productivity over the next few decades, and that will become very, very clear within ten years. mustafa suleyman, i thank you for being on hardtalk. thank you.
10:56 pm
hello again. storm pia has been bringing some very strong winds, particularly to the northern half of the uk, and it has been bringing some issues. for example, here on the barton bridge just by the trafford centre on manchester's orbital m60 motorway, a lorry got blown over by the strong winds. reports of some transport disruption elsewhere. a top gust of 81 mph recorded in shetland, into the 70s across mainland scotland and over the very tops of the pennines as well. now the core of storm pia is actually now moving into scandinavia. we get a core of really strong winds going into denmark over the next few hours. gusts could reach 80—90 mph, strong enough to bring some disruption here and maybe even blow some roofs off buildings. across the uk, plenty of showers or lengthier outbreaks of rain
10:57 pm
across north—western areas of the country overnight. but with colder air in shetland, well, here the rain turns to snow, could be several centimetres, even blizzards for a time. icy conditions, then, to watch out for in the first part of friday morning. friday is going to be another unsettled day. still quite blustery. the north—westerly winds bringing showers or lengthier outbreaks of rain across northern and western areas. something a bit drier and brighter across eastern scotland, where it stays on the cold side. and we should have largely dry conditions across southern wales and much of southern england as well. heading through friday nighttime into the early part of saturday, we get this battle zone between the relatively mild air that most of the uk will have, the colder air feeding in across northern scotland, into that mix, we get this weather front moving in. it looks like we could well see a spell of snow getting down potentially to quite low elevations across the very far north of scotland, with a risk of icy stretches building in here. otherwise, friday night is going to be a mild night.
10:58 pm
no chance of any snow with temperatures for most of you at around 8—10 degrees. on into the start of the weekend, then, saturday sees further outbreaks of rain across scotland, milder air moving in here, so any snow turning back to rain. away from that, something a bit brighter across southern areas, very mild, temperatures around 12—13 degrees, staying on the blustery side. what about christmas eve? well, south—westerly winds dominate the country. outbreaks of rain around, mild weather conditions, particularly so across parts of eastern england, where we could see temperatures reaching around 14—15 celsius. and then for christmas day, for most of us, mild, still some rain around. could be a bit colder, though, for northern scotland. small chance of some snow here.
10:59 pm
hello, you're with newsday, live from singapore, the headlines. mass shooting in prague — more than fifteen people are killed as a gunman opens fire
11:00 pm
on his fellow students. still no vote in gaza at the un security council — the us says there are still serious concerns over the current draft. a top eu court rules that uefa's ban on a european super league is unlawful. we begin in the czech republic — police there say fourteen people have been killed in a mass shooting at a university in prague. twenty five others were injured, ten of them seriously. the gunman, who was a student at the university, was also found dead. it's the worst incident of its kind in the country's history.
11:01 pm
this was prague a short time ago —
