
TV  HARDtalk  BBC News  September 12, 2023 4:30am-5:01am BST

4:30 am
hey, i'm zof with the catch up! tonight — the latest on daniel khalife, an incredible athletic achievement and the hokey cokey. but first — an 11—year—old attacked by a dog in birmingham says american bully xl dogs should be banned. video of the attack has been going round on tiktok all weekend. the dog grabbed my hand and he started, like, moving me around. after he let go of my arm, he went on my shoulder. the home secretary's now looking at banning the dogs, but some say bans on breeds don't work. four dog breeds are on the banned list, with cross—breeds banned too. if you have one, you could get an unlimited fine or six months in prison. some other stories now. daniel khalife has appeared in court, after his escape from wandsworth prison last week. the court heard he might have used bedsheets to strap himself under a truck. the 21—year—old terror suspect
4:31 am
spent four days on the run before being arrested. and all 400 wilko stores will close by early october. 12,500 staff will probably lose their jobs because the business couldn't be saved. and amputee milly pickles has completed the world's steepest race, "the red bull 400" in slovenia. milly, who lost part of her leg when she was 20, said the slope sprint was "unbelievably tough". she is the first amputee to ever reach the finish line. time now, then, for 10 seconds of the hokey cokey. this is perhaps the most british thing ever — a hot weekend and hundreds were out in dorset doing the hokey cokey. i thought i'd give it a go... no, do you know what? i'm very glad i wasn't there. you're all caught up now. see ya!
4:32 am
voice-over: this is bbc news. we'll have the headlines for you at the top of the hour, which is straight after this programme. welcome to hardtalk, i'm stephen sackur. are the machines about to take over? that basic fear seems to underpin much of the discussion about artificial intelligence, and parallel developments such as synthetic biology. the latest wave of tech advances offers us extraordinary new possibilities, but do we flawed human beings have the will and the means to contain and control them? well, my guest is mustafa suleyman, ceo of inflection ai and the author of a challenging book on ai and us. is that a doomed relationship?
4:33 am
mustafa suleyman, welcome to hardtalk. thanks for having me. it's a great pleasure to have you here. now, you, in your career, are wrestling with the complex relationship between us humans and increasingly intelligent machines. it seems, if i've got it right, that you're not so much worried about the machines — you're worried about us, our wisdom. is that right? it's a great way of putting it. i mean, i think the great challenge we have is one of governance. containment means that we should always be in control of the technologies
4:34 am
that we create. and we need to make sure that they are accountable to us as a species, they work for us over many, many decades and centuries, and they always do way, way, way more good than harm. with every new type of technology we have, there are new risks — risks that, at the time we experience them, feel really scary. they're new. we don't understand them. they could be completely novel in ways that could be very harmful. but that's no reason to feel powerless. these technologies are things that we create and, therefore, we have every ability to control. all right. and to be clear, as you've just described it, and i guess you would include innovations from, i don't know, the discovery of fire to the wheel to the sail to, in our more recent industrial age, the steam engine, maybe even all the way up to the internet. these game—changing technological advances. you seem to be suggesting that the inventors, the innovators, people like you are today, can't
4:35 am
foresee where they'll go. we can't always foresee where they'll go, but designed with intention, we can shape them and provide guardrails around them, which ultimately affect their trajectory in very fundamental ways. take aircraft, for example. i mean, flight, when it first appeared, seemed incredibly scary. how could i possibly get into a tin tube at 40,000ft and go 1,000 mph? but, over many decades, we've iterated on our process of safety and oversight. we've had regulators that force competitors to share insights between them. we have the black box recorder, the famous black box recorder that tracks all the telemetry on board the plane. it records everything that happens in the cockpit and shares those insights across the world. now, that's a great achievement for our species. i mean, it means that we can get all the benefits of flight with actually very, very few of the risks overall. and that's the approach that we need to take with ai.
4:36 am
all right. so let's get down to ai. now, i began by positing a sort of widely held human fear that artificial intelligence leads inevitably to a sort of superintelligent machine that ultimately has sort of full autonomy and somehow goes rogue on humanity. you have deep concerns but it seems that is not your deepest prime concern. unfortunately, the big straw man that has been created over the last few years in ai is that of terminator or skynet — this idea that you would have a recursively self—improving artificial intelligence, one that can update and improve its own code independently of human oversight, and then... right. sort of breaks the umbilical cord with the human creator. right. that's exactly right. and that would be the last thing we want. we want to make sure that we always remain in control and we get to shape what it can and can't do. and i think that the odds of this recursively self—improving ai causing an intelligence explosion,
4:37 am
is how it's often referred to, i think they're relatively low and we're quite far away from that moment. and part of the problem at the moment is that we're so fixated on that possibility — mostly because sci—fi gives us an easy way in to talking about it when we don't have very long — that we're actually missing a lot of the more practical near—term issues which we should focus on. and that is what you call the coming wave. and you say the wave is all about understanding that where we are with artificial intelligence today is not going to be where we are in, say, five years' time. because i think the phrase you use is, "artificial capable intelligence is transformative." so i need you to explain to me how and why it's transformative. that's exactly right. so think of ai development over the last ten years in two ways. the first is that we have been training ais to classify — that is, understand — our perceptual input. right? so, they have got good at identifying objects in images — so good, in fact,
4:38 am
that they can be used to control, you know, self—driving cars. right? they have got really good at transcribing text. when you dictate to your phone, you know, you record that speech and you translate it into written text. they can do language translation. these are all recognition. they're understanding and classifying content. now they've got so good at that that they can now generate new examples of those images, of that audio, of the music, and that's the new generative ai boom that we're currently in. the last year has been incredible with these large language models that can produce new text that is almost at human level in terms of accuracy. they are creating — these new chatbots and other sort of apps that we talk about — they are creating text, they're creating music, they're creating even visual art. and that gives us a sense that the machine somehow is developing its own consciousness, which of course it's not.
4:39 am
it's simply got more and more and more data to use and sort of mould in a way that fits the instructions it's given. so, how far can this go? that's exactly right. and so i think we often have a tendency to anthropomorphize and project more into these systems than they actually have. and that's understandable. three or four years ago, people would often say, "well, ai will never be creative, that will always be the preserve of humans." a few years ago, people would say, "well, ai will never have empathy." and now you can see that these conversational ais and chatbots are actually really good at that. the next wave of features that they're likely to develop over the next five years, as these models get bigger and bigger, are capabilities around planning. you know, you referred earlier to artificial capable intelligence. in the past, we've just defined intelligence on the basis of what an ai can say. now, we're defining intelligence on the basis of what an ai can do. these ais will co—ordinate multiple steps in complicated scenarios. they will call apis, they'll use websites, they will use
4:40 am
back—end databases. they'll make phone calls to real humans, they'll make phone calls to other ais, and they'll use that to make really complicated plans over extended periods of time. you paint this sort of near—to—medium—term future of the expansion of ai into every aspect of our lives, and you say that writing the book about it — you call it the coming wave — was a gut—wrenching experience. why was it gut—wrenching? are you...are you frightened? i'm not frightened, but i think new technologies always bring very new challenges. and in this case, there is the potential that very powerful systems will spread far and wide. and i think that has the potential to pose really significant, catastrophic threats to the future of the nation state. so, for example, all technologies in the history of our species, all the general purpose technologies that you just referred to, to the extent that they have been incredibly useful, they've always got cheaper, easier to use, and therefore they have spread all
4:41 am
around the world. that has been the absolute engine of progress for centuries, and it's been a wonderful thing. it's delivered enormous benefits in every possible respect, in every domain. if that trajectory continues, when we're actually talking about the creation of intelligence itself or, in synthetic biology, life itself, in some respects, then these units of power are going to get smaller and smaller and smaller and spread far and wide. everybody in 30 to 50 years may potentially get access to state—like powers, the ability to co—ordinate huge actions over extended time periods. and that really is a fundamentally different quality to the local effects of technologies in the past — you know, aeroplanes, trains, cars, really important technologies — but have localised effects when they go wrong. these kinds of ais have the potential to have systemic impact if they go wrong in the future. i mean, there's so much
4:42 am
that's profound and deep in what you've just said, i'm almost struggling to know where to start with it. but one... a couple of phrases just come to my mind. you talked about, uh, the way to create intelligence and synthetic life. this is sort of godlike power that we humans are now looking at, contemplating. but with the best will in the world, probably none of us believe that we deserve godlike powers. we are too flawed. that's surely where the worry comes. we need these powers more than ever. and that's the paradox. i mean, this is the ultimate prediction engine. we'll use these ais to make more efficient foods, for example, that are drought resistant, that are resistant to pests. we'll use these ais to reduce the cost of health care dramatically. right? we'll use these ais to help us with transportation and education. everyone is going to get access to a personal intelligence
4:43 am
in their pocket, which is going to make them much, much smarter and more efficient at their job. that, i think, is going to unleash a productivity boom that is like a cambrian explosion. i mean... well, hang on. a productivity boom for those like you who absolutely have the skills to be at the forefront of this transformation. but most of us humble humans do jobs which will disappear. it's quite possible, in this world of advanced ai, you won't need journalists. you won't necessarily need half the doctors we've currently got, or the lawyers or a whole bunch of other professions which thought they were safe from mechanisation but are certainly not safe from artificial intelligence. what on earth are human beings going to do when so much of what we need to do is done by machines? these are tools that make us radically smarter and more efficient. so if you are a doctor, you spend a vast portion
4:44 am
of your day inputting data, writing notes, doing very laborious, painful work. these are tools that should save you an enormous amount of time so that you can focus on the key things that you need to... i take your point, but you'll need less doctors. it's possible that you will need less doctors. we may need less of every possible role in the future. yes. to that, i would say, bear in mind that work is not the ultimate goal of our society. the goal of our society is to create peace and prosperity, wealth and happiness, and to reduce suffering. the long march of civilisation is not a march towards creating work. it is a march towards reducing work and increasing abundance. and i believe that over a 30— to 50—year period, we are on a path to producing radically more with radically less. that has been the story of history. but is it not possible that...? i do not mean to interrupt you when you're painting this incredibly positive picture, but is it not possible that you will challenge the mental
4:45 am
health of human beings? so much of our self—worth comes from our sense of utility, our usefulness. a lot of that comes from work. in this world you are portraying, 30 years away, where work and productivity is fundamentally different, machine—led, we humans may feel ourselves to becoming progressively more useless. the question is, who is the we? so you and i get an enormous amount of health and wellbeing and identity out of our work, and we are very lucky. we're the privileged few. many, many people don't find that flow and peace and energy in their everyday work. and so i think it's important for us to remember that, over the multi—decade period, we have to be striving towards a world where we've solved the issues of creation and redistribution. the real challenge you're describing is the one that you alluded to at the top of the show. it's a governance question. how do we capture and redistribute the value that is being created
4:46 am
in a fair way, that brings all of society with us? and that's a better challenge to have than not having enough. right. so let's now get to the truly malign possibilities that come with this expansion of transformative ai. because you've just explained that what ai allows is for the sort of, the cost of being powerful to become ever less. it empowers people in ways we haven't imagined before, and it empowers non—state actors and states to do bad things in new ways. how do you ensure that doesn't happen? that's the great challenge. i mean, these models represent the best and the worst of all of us, and the challenge is to try to mitigate the downsides. many people will use these tools to spread more efficient forms of misinformation... it's already happening. to sow dissent, to increase anger and polarisation... and also, to enforce a new level of authoritarianism through surveillance, through the elimination of privacy.
4:47 am
and that's why it's critical for us in the west, in europe and in the us and all over the world, to defend our freedoms, because these are clearly potential tools which introduce new forms of surveillance. they might reduce the barrier to entry to surveillance. and so the challenge for us is figuring out how we don't rabbit—hole down that path. if we give in on our own set of values and accept that we have to then lunge towards more authoritarian surveillance, that would be a complete collapse of the values that we actually stand for. i talked about gut—wrenching emotions as you wrote this. you're clearly worried. however you dress up the positivity of so much that ai offers, you signed a joint statement that came from the center for ai safety earlier this year. it was very simple.
4:48 am
it just called upon all of you in this business, including governments, including private—sector people like yourself, to mitigate the risk of extinction — that was what it was called — that could come from ai. that, you said in the statement, should be a global priority. i see no sign that it is becoming that global priority. just as these ais reduce the barrier to entry, to be able to educate somebody that doesn't have access to education or provide the highest quality health care to someone who can only have a telephone call interaction with an ai health clinician, they also enable people who don't have the expertise in biology, for example, to develop biological or chemical weapons. they reduce the barrier to entry, to access knowledge and take actions. and that is fundamentally the great challenge before us. i think it's an intellectually honest position to not just praise the potential upsides, but look straight in the face of the potential downsides. and wisdom in the 21st century has got to be about holding these two competing directions in tension and being very open and clear about them so that we can mitigate and address the risks today.
4:49 am
but let's start our look at where we might expect the mitigation to come from, the responsible, accountable governance of the ai world to come from, by addressing the private sector, by addressing people like you who've made, let us be honest, hundreds of millions of dollars — in some people's cases billions of dollars — by being sort of market movers, pioneers in artificial intelligence. you have an extraordinary financial stake in constantly pushing the boundaries, don't you? i do. and i'm building a company. it is a public benefit corporation, which is a new type of company, a hybrid for—profit and nonprofit mission, entrenched in our legal charter. so it doesn't solve all of the issues of for—profit missions, but it's a first step in the right direction. and we create an ai called pi,
4:50 am
which stands for personal intelligence. it is one of the safest ais in the world today. none of the existing jailbreaks and prompt hacks to try to destabilise these ais and get them to produce toxic or biased content work on pi. pi is very careful, it's very safe, it's very respectful. so i believe... so you are so concerned about being responsible, being transparent and sort of being audited about what you are doing in this sphere, have you moved right away, then...? and you're based in silicon valley. the old silicon valley mantra was "move fast, break things". and if you talk about a whole generation of tech pioneers who are now obviously multibillionaires, from the gateses to the larry pages, to the zuckerbergs and the musks, these were people who sort of developed their ideas to the max and perhaps only later began to wrestle with some of the downsides. are you saying you've fundamentally changed that model? i think so. i think there is a new generation of ai leaders coming up who have been putting these issues on the table since the very beginning. i co—founded deepmind in 2010, and right from the outset we've
4:51 am
been putting the question of ethics and safety and ai on the table, at the forefront. i mean, you mentioned audits. for example, just two months ago, i signed up to the voluntary commitments at the white house and met with president biden and laid out a suite of proposals that would proactively subject us at inflection ai, and the other big providers of large language models, to audits by independent third parties, and to share those best practices not just with each other, but with the world. right. and when you, in your book, when you address how to mitigate the dangers of ai, you talk about a whole bunch of things which, to me, seem very idealistic. you say you're already being accountable. you're accepting audit from the outside of everything you do, but you also say the international community is going to have to work at this. there's going to have to be amazing cooperation and collaboration. there's no sign of that happening. the united states is looking at a voluntary code. you've been to the white house to discuss that. the eu is looking at legislation on ai, but then look at china.
4:52 am
china is also moving as fast as it possibly can in this field. they are not interested in signing up to this sort of idealistic international co—operation that you write about. i've got no shame in being an idealist. i think that you have to be an idealist to try and move civilisation forward. we have to raise the bar on ourselves and on our peers. now, i can't control what happens in china, but i can certainly praise and endorse the efforts that i see in the eu, for example. i mean, this is a very robust piece of regulation that has been developed in consultation over three—and—a—half years with a very sensible framework, and i've got a lot of praise for it. i think it's excellent. the great fear in europe is that, if they all sign up to this regulation, the chinese are going to have an immediate market advantage because they're not bound by the same rules, they're not bound by transparency, accountability, by fronting up to the international community about what they're doing. that's the pragmatic reality of life. we cannot mean that that leads
4:53 am
to a race to the bottom on values just because they're doing it. it's no justification. we have to stick to our principles. well, yeah, but you talk about containment, containing this phenomenon that you say could be the most wonderful boon and positive thing for the world, or could be the bringer of chaos, catastrophe and anarchy to the world. you can't have a containment policy if containment is accepted in europe, but not in china. what we can do is focus on trying to get the strategy right for ourselves first and, from that position, engaging constructively with china and other partners that have different sets of values from our own, and start by not demonising china and not excluding them from discussions and negotiations. we have very fundamental differences of values to them, but we have to work with them. but we know, because we see it, that china fundamentally sees the development of ai as a new exercise in state power and an exercise in imposing an ideology, that of the communist party.
4:54 am
i'm mindful of a comparison between how we control ai and how we have controlled the potency of the nuclear weapon over the last, whatever it is, 80 years. isn't the truth that, in the nuclear example, we have reached a containment place because of the ultimate deterrent of mutually assured destruction? will there be an element of that in our approach to ai? nuclear nonproliferation has been a great achievement of our civilisations. in fact, we've reduced the number of nuclear partners from 11 to eight over the last 60 years. that's a great thing. we've massively reduced the number of warheads. we've demonstrated that we can reach international agreement, make compromise, achieve consensus, you're right, where there is a mutually assured destruction incentive. i don't think we have that any time soon in ai, so i agree that there is going to be a period where we really are at odds with one another, and there isn't that kind of dramatic
4:55 am
incentive to drive co—operation. however, that doesn't mean there is no incentive. we absolutely must share our safety best practices with our — quote, unquote — "adversaries". one area where there is an incentive is in the development of synthetic biology. experimenting with pandemic—scale pathogens, engineering them to make them more transmissible and more lethal are capabilities that are soon going to be quite widely available within five years. desktop synthesisers, those that can be used to actually engineer — that is, print or manufacture — new strands of dna, enable people in their garage to experiment with new pathogens. this is a very frightening prospect, and there is very good reason why china as well as us, europe, the uk, the us, everybody wants to basically try and stop the proliferation of these kinds of tools. final thoughts. going to have to be very quick. stephen hawking said this before he died, "the rise of powerful ai
4:56 am
will either be the best or the worst thing ever to happen to humanity. we don't yet know which." when will we know? i have every confidence that it's going to be one of the most dramatic and positive impacts in the history of our species. it is going to be a huge boon for productivity over the next few decades, and that will become very, very clear within ten years. mustafa suleyman, i thank you for being on hardtalk. thank you. hello. our unprecedented run of september days over 30 degrees did not continue into monday.
4:57 am
that said, parts of the south and the east were still quite warm and humid — 27 celsius in suffolk. for tuesday, the process of things turning cooler and fresher does continue for most of us, although some warmth and humidity will hold on down towards the south. this humid air really quite stubborn, quite slow to clear, whereas fresher conditions are now filtering in across scotland and northern ireland. morning temperatures of around two or three degrees in parts of the highlands, whereas further south, 14, 15, 16 degrees in that humid air. through tuesday for scotland and for northern ireland, we will see lots of sunshine and just a few mostly light showers. but for parts of northern england, particularly yorkshire and lincolnshire and down into the midlands, we will have a lot of cloud, we'll have some outbreaks of rain continuing for a good part of the day, and then for east anglia and the southeast in that humid air, we've got the chance for some big thunderstorms to pop up. 23 or 24 degrees down towards the southeast.
4:58 am
further north, though, just 15 for aberdeen, 17 there in belfast. that cooler, fresher air continuing to work its way in and that process continues into tuesday night. still some cloud and rain across parts of eastern england, but clear spells elsewhere, one or two mist patches. and temperatures for some spots in the highlands, i think we'll get very close to freezing. there could be a touch of frost in places still, though, 15 or 16 in some coastal parts of eastern england. now for wednesday, high pressure temporarily, at least, builds its way in across the uk. so wednesday, probably one of the driest and brightest days of the week. there will be some spells of sunshine, but our next weather system looks set to bring cloud and rain and strengthening winds too into northern ireland and western scotland later in the day. fresher conditions even getting down into the southeast at this stage, 21 there for london. now, a lot of uncertainty in the forecast by thursday, there will be a weather
4:59 am
front pushing southwards. but the exact timing, the exact progress of that frontal system is still open to question. to the south of it, something warmer developing again, 24 degrees. live from london, this is bbc news. anger grows in morocco over the speed of the response to friday's earthquake as heavy
5:00 am
lifting equipment begins to arrive in the hardest hit areas of the atlas mountains. floods in libya are reported to have killed 2,000 people with many more still missing. russia says kim jong—un has arrived in russia as he prepares for talks with vladimir putin. and a major study suggests female surgeons working in nhs hospitals in the uk are subjected to a culture of sexual harassment and assaults by male colleagues. a very warm welcome to the programme. hello. i'm sally bundock. there's been criticism in morocco of the speed of the official response to friday's earthquake, which is now known to have killed more than 2,800 people.
5:01 am
heavy lifting equipment has begun to arrive in remote
