tv HARDtalk BBC News September 12, 2023 11:30pm-12:01am BST
11:30 pm
the uk's biggest pub chain is going to start charging you more for a pint when it's busy. stonegate group, which owns brands like yates and slug and lettuce, will add 20p at peak times as part of new dynamic pricing in 800 of its pubs. you'll have probably seen a similar thing from apps like uber and when booking hotel rooms or flights. higher demand means higher prices. it comes as the increase in what we're being paid has caught up with how fast prices are rising. average pay's up almost 8% compared to last year. that's the same as inflation, which is how much prices are going up by. it's the first time that's happened in two years, meaning your money's not losing its value for now, but prices have been rising faster and for longer so pay is still behind. let's get some other news then. more than 2,000 people are thought to have died in floods in libya.
11:31 pm
at least 10,000 more are missing after a huge storm hit the country. boys as young as 13 are being increasingly targeted with sextortion. they're coaxed into sending explicit images and told if they don't send the scammers money, those images will be shared online. police in scotland say it's a growing problem and there's another new iphone coming. the iphone 15, along with new airpods, will have a usb—c port instead of the usual iphone cable. tonight, i'm going to leave you with ten seconds of coastline canines. 50 dogs took part in this annual surf a thon in california, raising money for a rescue center near san diego. hopefully, the water wasn't too rough. right? we're done. see you later.
11:32 pm
this is bbc news. we'll have the headlines at the top of the hour, as newsday continues straight after hardtalk. welcome to hardtalk, i'm stephen sackur. are the machines about to take over? that basic fear seems to underpin much of the discussion about artificial intelligence, and parallel developments such as synthetic biology. the latest wave of tech advances offers us extraordinary new possibilities, but do we flawed human beings have the will and the means to contain and control them? well, my guest is mustafa suleyman, ceo of inflection ai and the author of a challenging book on ai and us. is that a doomed relationship?
11:33 pm
mustafa suleyman, welcome to hardtalk. thanks for having me. it's a great pleasure to have you here. now, you, in your career, are wrestling with the complex relationship between us humans and increasingly intelligent machines. it seems, if i've got it right, that you're not so much worried about the machines — you're worried about us, our wisdom. is that right? it's a great way of putting it. i mean, i think the great challenge we have is one of governance. containment means that we should always be in control of the technologies that we create. and we need to make sure
11:34 pm
that they are accountable to us as a species, they work for us over many, many decades and centuries, and they always do way, way, way more good than harm. with every new type of technology we have, there are new risks — risks that, at the time we experience them, feel really scary. they're new. we don't understand them. they could be completely novel in ways that could be very harmful. but that's no reason to feel powerless. these technologies are things that we create and, therefore, we have every ability to control. all right. and to be clear, as you've just described it, and i guess you would include innovations from, i don't know, the discovery of fire to the wheel to the sail to, in our more recent industrial age, the steam engine, maybe even all the way up to the internet. these game—changing technological advances. you seem to be suggesting that the inventors, the innovators, people like you are today, can't foresee where they'll go. we can't always foresee
11:35 pm
where they'll go, but designed with intention, we can shape them and provide guardrails around them, which ultimately affect their trajectory in very fundamental ways. take aircrafts, for example. i mean, flight, when it first appeared, seemed incredibly scary. how could i possibly get into a tin tube at 40,000ft and go 1,000 mph? but, over many decades, we've iterated on our process of safety and oversight. we've had regulators that force competitors to share insights between them. we have the black box recorder, the famous black box recorder that tracks all the telemetry on board the plane. it records everything that happens in the cockpit and shares those insights across the world. now, that's a great achievement for our species. i mean, it means that we can get all the benefits of flight with actually very, very few of the risks overall. and that's the approach that we need to take with al. all right. so let's get down to ai. now, i began by positing a sort of widely held human fear that
11:36 pm
artificial intelligence leads inevitably to a sort of superintelligent machine that ultimately has sort of full autonomy and somehow goes rogue on humanity. you have deep concerns but it seems that is not your deepest prime concern. unfortunately, the big straw man that has been created over the last few years in ai is that of terminator or skynet — this idea that you would have a recursively self—improving artificial intelligence, one that can update and improve its own code independently of human oversight, and then... right. sort of breaks the umbilical cord with the human creator. right. that's exactly right. and that would be the last thing we want. we want to make sure that we always remain in control and we get to shape what it can and can't do. and i think that the odds of this recursively self—improving ai causing an intelligence explosion, is how it's often referred to, i think they're relatively low and we're quite far away from that moment.
11:37 pm
and part of the problem at the moment is that we're so fixated on that possibility — mostly because sci—fi gives us an easy way in to talking about it when we don't have very long — that we're actually missing a lot of the more practical near—term issues which we should focus on. and that is what you call the coming wave. and you say the wave is all about understanding that where we are with artificial intelligence today is not going to be where we are in, say, five years' time. because i think the phrase you use is, "artificial capable intelligence is transformative." so i need you to explain to me how and why it's transformative. that's exactly right. so think of ai development over the last ten years in two ways. the first is that we have been training ais to classify — that is understand — our perceptual input. right? so, they have got good at identifying objects in images — so good, in fact, that they can be used to control, you know, self—driving cars. right?
11:38 pm
they have got really good at transcribing text. when you dictate to your phone, you know, you record that speech and you translate it into written text. they can do language translation. these are all recognition. they're understanding and classifying content. now they've got so good at that that they can now generate new examples of those images, of that audio, of the music, and that's the new generative ai boom that we're currently in. the last year has been incredible with these large language models that can produce new text that is almost at human level in terms of accuracy. they are creating — these new chatbots and other sort of apps that we talk about — they are creating text, they're creating music, they're creating even visual art. and that gives us a sense that the machine somehow is developing its own consciousness, which of course it's not. it's simply got more and more and more data to use and sort of mould in a way that fits
11:39 pm
the instructions it's given. so, how far can this go? that's exactly right. and so i think we often have a tendency to anthropomorphize and project more into these systems than they actually have. and that's understandable. three or four years ago, people would often say, "well, ai will never be creative, that will always be the preserve of humans." a few years ago, people would say, "well, ai will never have empathy." and now you can see that these conversational ais and chatbots are actually really good at that. the next wave of features that they're likely to develop over the next five years, as these models get bigger and bigger, are capabilities around planning. you know, you referred earlier to artificial capable intelligence. in the past, we've just defined intelligence on the basis of what an ai can say. now, we're defining intelligence on the basis of what an ai can do. these ais will co—ordinate multiple steps in complicated scenarios. they will call apis, they'll use websites, they will use back—end databases. they'll make phone calls to real humans, they'll make
11:40 pm
phone calls to other ais, and they'll use that to make really complicated plans over extended periods of time. you paint this sort of near—to—medium—term future of the expansion of ai into every aspect of our lives, and you say that writing the book about it — you call it the coming wave — was a gut—wrenching experience. why was it gut—wrenching? are you...are you frightened? i'm not frightened, but i think new technologies always bring very new challenges. and in this case, there is the potential that very powerful systems will spread far and wide. and i think that has the potential to pose really significant, catastrophic threats to the future of the nation state. so, for example, all technologies in the history of our species, all the general purpose technologies that you just referred to, to the extent that they have been incredibly useful, they've always got cheaper, easier to use, and therefore they have spread all around the world. that has been the absolute engine of progress for
11:41 pm
centuries, and it's been a wonderful thing. it's delivered enormous benefits in every possible respect, in every domain. if that trajectory continues, when we're actually talking about the creation of intelligence itself or, in synthetic biology, life itself, in some respects, then these units of power are going to get smaller and smaller and smaller and spread far and wide. everybody in 30 to 50 years may potentially get access to state—like powers, the ability to co—ordinate huge actions over extended time periods. and that really is a fundamentally different quality to the local effects of technologies in the past — you know, aeroplanes, trains, cars, really important technologies — but have localised effects when they go wrong. these kinds of ais have the potential to have systemic impact if they go wrong in the future. i mean, there's so much that's profound and deep
11:42 pm
in what you've just said, i'm almost struggling to know where to start with it. but one... a couple of phrases just come to my mind. you talked about, uh, the way to create intelligence and synthetic life. this is sort of godlike power that we humans are now looking at, contemplating. but with the best will in the world, probably none of us believe that we deserve godlike powers. we are too flawed. that's surely where the worry comes. we need these powers more than ever. and that's the paradox. i mean, this is the ultimate prediction engine. we'll use these ais to make more efficient foods, for example, that are drought resistant, that are resistant to pests. we'll use these ais to reduce the cost of health care dramatically. right? we'll use these ais to help us with transportation and education. everyone is going to get access to a personal intelligence in their pocket, which is going to make them much, much smarter and more efficient at their job.
11:43 pm
that, i think, is going to unleash a productivity boom that is like a cambrian explosion. i mean... well, hang on. a productivity boom for those like you who absolutely have the skills to be at the forefront of this transformation. but most of us humble humans do jobs which will disappear. it's quite possible, in this world of advanced ai, you won't need journalists. you won't necessarily need half the doctors we've currently got, or the lawyers or a whole bunch of other professions which thought they were safe from mechanisation but are certainly not safe from artificial intelligence. what on earth are human beings going to do when so much of what we need to do is done by machines? these are tools that make us radically smarter and more efficient. so if you are a doctor, you spend a vast portion of your day inputting data, writing notes, doing very laborious, painful work.
11:44 pm
these are tools that should save you an enormous amount of time so that you can focus on the key things that you need to... i take your point, but you'll need less doctors. it's possible that you will need less doctors. we may need less of every possible role in the future. yes. to that, i would say, bear in mind that work is not the ultimate goal of our society. the goal of our society is to create peace and prosperity, wealth and happiness, and to reduce suffering. the long march of civilisation is not a march towards creating work. it is a march towards reducing work and increasing abundance. and i believe that over a 30— to 50—year period, we are on a path to producing radically more with radically less. that has been the story of history. but is it not possible that...? i do not mean to interrupt you when you're painting this incredibly positive picture, but is it not possible that you will challenge the mental health of human beings? so much of our self—worth comes from our sense of utility, our usefulness.
11:45 pm
a lot of that comes from work. in this world you are portraying, 30 years away, where work and productivity is fundamentally different, machine—led, we humans may feel ourselves to becoming progressively more useless. the question is, who is the we? so you and i get an enormous amount of health and wellbeing and identity out of our work, and we are very lucky. we're the privileged few. many, many people don't find that flow and peace and energy in their everyday work. and so i think it's important for us to remember that, over the multi—decade period, we have to be striving towards a world where we've solved the issues of creation and redistribution. the real challenge you're describing is the one that you alluded to at the top of the show. it's a governance question. how do we capture and redistribute the value that is being created in a fair way, that brings all of society with us? and that's a better challenge
11:46 pm
to have than not having enough. right. so let's now get to the truly malign possibilities that come with this expansion of transformative ai. because you've just explained that what ai allows is for the sort of, the cost of being powerful to become ever less. it empowers people in ways we haven't imagined before, and it empowers non—state actors and states to do bad things in new ways. how do you ensure that doesn't happen? that's the great challenge. i mean, these models represent the best and the worst of all of us, and the challenge is to try to mitigate the downsides. many people will use these tools to spread more efficient forms of misinformation... it's already happening. to sow dissent, to increase anger and polarisation... and also, to enforce a new level of authoritarianism through surveillance, through the elimination of privacy. and that's why it's critical
11:47 pm
for us in the west, in europe and in the us and all over the world, to defend our freedoms, because these are clearly potential tools which introduce new forms of surveillance. they might reduce the barrier to entry to surveillance. and so the challenge for us is figuring out how we don't rabbit—hole down that path. if we give in on our own set of values and accept that we have to then lunge towards more authoritarian surveillance, that would be a complete collapse of the values that we actually stand for. i talked about gut—wrenching emotions as you wrote this. you're clearly worried. however you dress up the positivity of so much that ai offers, you signed a joint statement that came from the center for ai safety earlier this year. it was very simple. it just called upon all of you in this business, including governments, including private—sector people like yourself, to mitigate the risk of extinction — that was what it was called — that could come from ai. that, you said in the statement, should
11:48 pm
be a global priority. i see no sign that it is becoming that global priority. just as these ais reduce the barrier to entry, to be able to educate somebody that doesn't have access to education or provide the highest quality health care to someone who can only have a telephone call interaction with an ai health clinician, they also enable people who don't have the expertise in biology, for example, to develop biological or chemical weapons. they reduce the barrier to entry, to access knowledge and take actions. and that is fundamentally the great challenge before us. i think it's an intellectually honest position to not just praise the potential upsides, but look straight in the face of the potential downsides. and wisdom in the 21st century has got to be about holding these two competing directions in tension and being very open and clear about them so that we can mitigate and address the risks today. but let's start our look
11:49 pm
at where we might expect the mitigation to come from, the responsible, accountable governance of the ai world to come from, by addressing the private sector, by addressing people like you who've made, let us be honest, hundreds of millions of dollars — in some people's cases billions of dollars — by being sort of market movers, pioneers in artificial intelligence. you have an extraordinary financial stake in constantly pushing the boundaries, don't you? i do. and i'm building a company. it is a public benefit corporation, which is a new type of company, a hybrid for—profit and nonprofit mission, entrenched in our legal charter. so it doesn't solve all of the issues of for—profit missions, but it's a first step in the right direction. and we create an ai called pi, which stands for personal intelligence. it is one of the safest ais in the world today. none of the existing jailbreaks and prompt hacks to try to destabilise these ais and get them to produce toxic
11:50 pm
or biased content work on pi. pi is very careful, it's very safe, it's very respectful. so i believe... so you are so concerned about being responsible, being transparent and sort of being audited about what you are doing in this sphere, have you moved right away, then... ? and you're based in silicon valley. the old silicon valley mantra was "move fast, break things". and if you talk about a whole generation of tech pioneers who are now obviously multibillionaires, from the gateses to the larry pages, to the zuckerbergs and the musks, these were people who sort of developed their ideas to the max and perhaps only later began to wrestle with some of the downsides. are you saying you've fundamentally changed that model? i think so. i think there is a new generation of ai leaders coming up who have been putting these issues on the table since the very beginning. i co—founded deepmind in 2010, and right from the outset we've been putting the question of ethics and safety and ai on the table, at the forefront. i mean, you mentioned audits.
11:51 pm
for example, just two months ago, i signed up to the voluntary commitments at the white house and met with president biden and laid out a suite of proposals that would proactively subject us at inflection ai, and the other big providers of large language models, to audits by independent third parties, and to share those best practices not just with each other, but with the world. right. and when you, in your book, when you address how to mitigate the dangers of ai, you talk about a whole bunch of things which, to me, seem very idealistic. you say you're already being accountable. you're accepting audit from the outside of everything you do, but you also say the international community is going to have to work at this. there's going to have to be amazing cooperation and collaboration. there's no sign of that happening. the united states is looking at a voluntary code. you've been to the white house to discuss that. the eu is looking at legislation on ai, but then look at china. china is also moving as fast as it possibly can in this field. they are not interested
11:52 pm
in signing up to this sort of idealistic international co—operation that you write about. i've got no shame in being an idealist. i think that you have to be an idealist to try and move civilisation forward. we have to raise the bar on ourselves and on our peers. now, i can't control what happens in china, but i can certainly praise and endorse the efforts that i see in the eu, for example. i mean, this is a very robust piece of regulation that has been developed in consultation over three—and—a—half years with a very sensible framework, and i've got a lot of praise for it. i think it's excellent. the great fear in europe is that, if they all sign up to this regulation, the chinese are going to have an immediate market advantage because they're not bound by the same rules, they're not bound by transparency, accountability, by fronting up to the international community about what they're doing. that's the pragmatic reality of life. that cannot mean a race to the bottom on values just because they're doing it. it's no justification. we have to stick to our principles.
11:53 pm
well, yeah, but you talk about containment, containing this phenomenon that you say could be the most wonderful boon and positive thing for the world, or could be the bringer of chaos, catastrophe and anarchy to the world. you can't have a containment policy if containment is accepted in europe, but not in china. what we can do is focus on trying to get the strategy right for ourselves first and, from that position, engaging constructively with china and other partners that have different sets of values from our own, and start by not demonising china and not excluding them from discussions and negotiations. we have very fundamental differences of values to them, but we have to work with them. but we know, because we see it, that china fundamentally sees the development of ai as a new exercise in state power and an exercise in imposing an ideology, that of the communist party. i'm mindful of a comparison between how we control ai and how we have controlled the potency of the nuclear
11:54 pm
weapon over the last, whatever it is, 80 years. isn't the truth that, in the nuclear example, we have reached a containment place because of the ultimate deterrent of mutually assured destruction? will there be an element of that in our approach to ai? nuclear nonproliferation has been a great achievement of our civilisations. in fact, we've reduced the number of nuclear partners from 11 to eight over the last 60 years. that's a great thing. we've massively reduced the number of warheads. we've demonstrated that we can reach international agreement, make compromise, achieve consensus, you're right, where there is a mutually assured destruction incentive. i don't think we have that any time soon in ai, so i agree that there is going to be a period where we really are at odds with one another, and there isn't that kind of dramatic incentive to drive co—operation. however, that doesn't mean there is no incentive.
11:55 pm
we absolutely must share our safety best practices with our — quote, unquote — "adversaries". one area where there is is in the development of synthetic biology. experimenting with pandemic—scale pathogens, engineering them to make them more transmissible and more lethal are capabilities that are soon going to be quite widely available within five years. desktop synthesisers, those that can be used to actually engineer — that is print or manufacture — new strands of dna, enable people in their garage to experiment with new pathogens. this is a very frightening prospect, and there is very good reason why china as well as us, europe, the uk, the us, everybody wants to basically try and stop the proliferation of these kinds of tools. final thoughts. going to have to be very quick. stephen hawking said this before he died, "the rise of powerful ai will either be the best or the worst thing ever to happen to humanity. we don't yet know which." when will we know?
11:56 pm
i have every confidence that it's going to be one of the most dramatic and positive impacts in the history of our species. it is going to be a huge boon for productivity over the next few decades, and that will become very, very clear within ten years. mustafa suleyman, i thank you for being on hardtalk. thank you. hello. for at least part of wednesday, we can expect a window of fine weather overhead. if we take a look at the satellite picture, we can see the rain—bearing clouds that affected parts of the uk earlier. behind me, there is another
11:57 pm
weather system that will be working into the north—west later on wednesday. but in between, a slice of clear skies, quite a cool, in fact, chilly start for some on wednesday morning, but some spells of sunshine. starting the day then across the highlands, very close to freezing with a touch of frost in places. compare that with 16—17 around some coasts of eastern and south—eastern england, where we start the day with a bit more clouds, still some humid air in place, but it will turn fresher and brighter here as the day wears on. a big slice of sunshine, but then by lunchtime we'll see rain splashing into northern ireland and then that will get into western scotland by the middle of the afternoon with a strengthening wind. temperatures north to south, 14—21 degrees. and then as we head through wednesday evening, we'll see very wet weather for northern ireland, western scotland. that rain overnight getting down into parts of northern england, eventually parts of wales turning very windy in the north—west of scotland with gales for a time. it is going to be a mild start to thursday, certainly much milder in the highlands, but generally 11—14
11:58 pm
degrees for thursday. this area of low pressure to the north—west of scotland bringing some very strong winds. this dangling weather front bringing a band of cloud and rain. that front's to start off sitting across parts of mid—wales and into the midlands. it may well be that the rain peps up again as we head into the afternoon. to the south of that, some sunshine and some warmth. returning to the north of our weather front, a mix of sunny spells and showers. and then for friday, it looks as if our weather front will start to move northwards again. so some heavy and persistent rain for parts of northern ireland and perhaps most likely southern and central parts of scotland. feeling pretty disappointing in glasgow and edinburgh underneath that rain. further south with some sunshine, temperatures up to 25 degrees. and then as we head into the weekend, well, it looks like low pressure will try to push up from the south and that will bring showers or longer spells of rain in our direction. there could be some pretty intense downpours, maybe some thunderstorms and turning a little warmer
11:59 pm
12:00 am
rescue teams are facing a huge challenge to deliver aid to eastern libya hit by catastrophic floods. kim jong un says his visit to russia for talks with vladimir putin is of strategic importance. the republican controlled house of representatives is opening a formal impeachment inquiry into president joe biden. the us justice department has taken google to court and we'll hear from vanuatu's attorney general about why the pacific island nation is taking bigger nations to court over carbon emissions. live from our studio in singapore, this is bbc news. it's newsday. welcome to the programme. the big story coming out of libya — the united nations has called it a "calamity of epic proportions" —