tv HARDtalk BBC News October 14, 2019 4:30am-5:01am BST
4:30 am
the kurds of northern syria say they've done a deal with president assad's government in an attempt to stop the turkish invasion of their territory. syria's government has confirmed it will send troops to confront turkish aggression as turkey continues its military operation against the kurds. china's president xi jinping has issued a stern warning against dissent, as protests continue in hong kong. several rallies which began peacefully have descended into clashes between riot police and protesters. some of the demonstrators are now using the new tactic of attacking the territory's pro-beijing businesses. rescuers have been working through the night in japan to try to reach people affected by floods and landslides triggered by typhoon hagibis. japanese emergency services say at least 35 people were killed by the storm, which brought wind speeds of over 200 kilometers an hour. now on bbc news, it's hardtalk.
4:31 am
welcome to hardtalk, i'm stephen sackur. what is the most serious existential threat facing humanity? well, many of us might point to nuclear war or climate change. but some of the greatest minds in the tech sector are looking in a very different direction. artificial intelligence, warned the physicist stephen hawking, could spell the end of the human race. my guest tonight is stuart russell, a globally renowned computer scientist and sometime advisor to the uk government and un. right now ai is being developed as a tool to enhance human capability. is it fanciful to imagine the machines taking over?
stuart russell, welcome to hardtalk. hi. you have spent a career at the forefront of ideas on artificial intelligence. can you give me a working definition? very straightforwardly, it means making machines intelligent, and what that means, traditionally, is making machines that act so as to achieve their objectives. that's more or less the same definition we apply to human intelligence, and it's more or less the same definition we apply to any machine. i mean, we design a car to achieve the objective of travelling down the road in it, so is there something more about the intelligence that these machines that you work with and on are providing? yeah, so a car is designed to travel down the road, but that objective is our objective. the car doesn't know that's what it's for. but these machines, in some sense, do. they have an explicit internal model of what their objective is and they can figure out how to achieve it. so they might have the goal of winning a game of chess and they can figure out how to do that and beat humans at it. learning — what about that word, "learning" — where does that fit? so these days, following actually a recommendation by alan turing in 1950, we build many of the ai systems not by programming them explicitly, but by training them. so we have general-purpose learning algorithms that take examples from the real world, and then from those examples they extract general patterns, and those patterns allow them to make predictions about new cases. speech recognition, for example, is not trained by, you know, writing it to recognise a "ph", an "s" and a "p". instead, we just take hours and hours and hours of speech with a transcription, and then we feed that to a machine learning algorithm, and then it's able
4:32 am
to recognise new instances of speech and transcribe it. learning of course, very different from thinking. is the word "thinking" ever applicable in the world of artificial intelligence? it depends which philosopher you ask. and some argue, you know, is the word "swimming" applicable to submarines? in english, no, we don't say that. but in other languages i think — in russian they do use the word "swim" for submarines. and we use the word "fly" for aeroplanes, which we borrowed from birds. so are machines actually thinking? i would argue yes, in a meaningful sense. if you look at what a chess programme is doing, it is imagining possible moves that it could make and imagining possible moves that the opponent could play. it is plotting out these futures
4:33 am
and choosing the future that turns out best. one more word before we get into the heart of your work, and that is "autonomy", because it seems that autonomy is a very important word, once machines are able to learn and we can obviously argue until the cows come home about whether they are thinking, but once they have this separateness from us as their masters, they have clearly crossed a very important line. is autonomy at the heart of today's work on artificial intelligence? only in a very restricted sense, that we can allow freedom to the machine to choose how it achieves the objective but it doesn't choose the objective. so the standard model for how we build ai systems is that the machine is something that finds optimal solutions for problems that we specify. so we say, i want you to win this
4:34 am
chess game and it figures out how to do it. so it has autonomy in the decision-making process but not in the objective. and in fact, we don't really even know how a system might come up with its own objectives from scratch. elon musk, who i don't know if you know, obviously he's out on the west coast alongside you, elon musk of so many different ventures from electric cars to space exploration, said this recently, he said: "i'm very close to the cutting-edge in ai and it scares the hell out of me. it's capable of vastly more than almost anyone knows and the rate of improvement is exponential. digital superintelligence is a species-level risk." do you agree? i would have to say yes. i think elon is a brilliant man, he uses colourful metaphors, he talks about summoning the demon and other things, and he gets some flak for that, but the point he's making is the following: intelligence is what gives us power in the world, over other species. if we make something else
4:35 am
that is more intelligent than us, therefore more powerful than us, how do we make sure that it never has power over us, that it remains under our control for ever? for eternity. it's about that power dynamic. it's about control. and in a sense, most literally it's about the off button. yes, that's one way of capturing the essence of this problem: can you switch it off? and interestingly, alan turing, who was the founder of computer science back in the late 1930s and wrote one of the seminal papers on ai in 1950, was quite matter of fact. he said eventually we would have to expect the machines to take control. so, completely resigned. and he actually tried this idea out: perhaps we might be able to switch the power off at a strategic moment, but even so, as a species, we will be humbled.
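The planning described a moment ago, where a chess programme imagines its own possible moves, imagines the opponent's replies, and chooses the future that turns out best, is essentially minimax search. Below is a minimal sketch of that idea; the tiny two-ply payoff tree and its numbers are invented purely for illustration and bear no relation to real chess.

```python
# Minimal minimax sketch: look ahead through possible moves and counter-moves,
# assume the opponent picks whatever is worst for us, and choose the move
# whose worst case is best. The toy tree below is a made-up two-ply "game".

def minimax(node, maximising):
    """Value of `node`: a number means the game is over with that payoff for us;
    a list means it holds the positions reachable in one move."""
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximising) for child in node]
    return max(values) if maximising else min(values)

toy_tree = [
    [3, 5],   # move A: the opponent's best reply holds us to 3
    [2, 9],   # move B: the opponent's best reply holds us to 2
    [4, 6],   # move C: the opponent's best reply holds us to 4
]

best = max(range(len(toy_tree)), key=lambda i: minimax(toy_tree[i], maximising=False))
print("choose move", "ABC"[best])   # -> choose move C
```

Real engines add evaluation functions and pruning, but the point stands: the objective, winning the game, is fixed by us, and the machine's autonomy lies only in how it searches for moves.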
4:36 am
and unfortunately, you can't just switch the power off, any more than you can just play better moves than a superhuman chess programme. it is superhuman, you can't play better moves than it. and you can't out-think a superhuman machine with which you are in contact. so you're saying that this is a track where, once we go down it, there does appear to be an air of inevitability about the surpassing of human intelligence by the machine? which raises the question, and it gets to the very heart of where we are at this stage of humanity's development. should we consider not going down this path of artificial intelligence? is it even a possibility that we could say we understand the potentiality, but we also see the profound level of potential risk and threat and therefore, we won't do it?
4:37 am
so that's a great question. there is a novel by samuel butler written in 1863 that answers that question. he describes a civilisation that developed its machine technology and then reached that fork in the road and took the other fork. they decided to abandon machine technology and they consigned the machines to the museum. and said, we cannot take this risk because we don't want to lose control. so, i'm not sure we really have that choice. because the upside of ai is so enormous, if we achieve human-level ai or superhuman ai, and we remain in control over it, then this could be a golden age for humanity and the value is in the thousands of trillions of pounds. so, it's very hard to stop the momentum, right now it's hundreds of billions of dollars being invested every year,
4:38 am
all over the world. except in the beginning of this interview you conceded that the risk is potentially existential, therefore not only is the reward enormous, and you have just monetised it, but the risk is beyond any monetary value. it is indeed the future of our species. and so this is the question — do we follow alan turing and say the loss of control is inevitable? i don't think we should do that. i think what we need to do is understand, where is the source of the risk? this is not as if a superior alien civilisation just lands on earth and they're more intelligent and capable than we are and we're toast. this is something we are designing, so where does the problem come from? what is it about the way we design ai systems that leaves room for a conflict in which we lose control? do you think we understand where that point is and how it works? because i'm just thinking
4:39 am
right now, if one — and let's get to the nitty-gritty of what is happening in ai — we have ai being developed to the tune of tens and hundreds of billions of dollars across the world, both by corporate actors, you know, the big tech companies at the forefront that we can all name, and states as well, whether it be the us in terms of its defence department or the chinese and russian and other governments doing it at a state level. do you think those various actors understand precisely the dilemma that you have just laid out? so i can say for sure that the chinese government does, because their official policy position acknowledges that ai is an existential risk to mankind if it's not developed correctly. so to come back to the earlier question, where is the point where it goes wrong? what is the nature of the catastrophe?
4:40 am
and i believe it comes from the way we have been thinking about ai. so we might call it the standard model for ai, which is that we build machines that take an objective that we specify, that come up with a plan to achieve it, and then off they go. and the problem is something that we've known for thousands of years — that we don't know how to specify these objectives correctly. so king midas said, "i want everything i touch to turn to gold." and we know what happened to him, right? his water and his wine and his food and his family all turned to gold, and he died in misery and starvation. so as an engineering discipline, i would say it's just bad engineering if the only way you can get this to work correctly is to specify objectives perfectly... because that will never happen. right. it would be like saying, "ok, you can only fly this aeroplane if you have seven arms and five brains and if you don't,
4:41 am
well it's going to crash." that's a badly designed aeroplane for a human being to fly, and so the answer comes exactly from this point, that you can be pretty sure any objective the human states is not the true underlying objective that they really have for the future. therefore the machine needs to know that it doesn't know what the true objective is. what the human asks for is perhaps indicative evidence of what they might like, but certainly is not complete. so if i ask you for a cup of coffee, that doesn't mean that's my only objective in life and everything else doesn't matter, as if i'm allowed to mow down people at starbucks and crash the car on the way back with a coffee and so on. of course that's not what i mean by the objective. there's all kinds of unstated stuff, some of which i couldn't even state if you tried to get me to state it. so, so far we have been fairly abstract, fascinating but fairly abstract. so let's just bring you down, if we can briefly, to the here and now.
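The design shift Russell argues for here, a machine that treats a stated request as evidence about what the human wants rather than as the complete objective, can be illustrated with a toy expected-value calculation. This is only a sketch of the general idea; the probabilities, utilities and the "defer to the human" option are made-up numbers for illustration, not Russell's actual formulation.

```python
# Toy sketch: an agent that treats "fetch me a coffee" as evidence about what
# the human wants, not as a complete objective. All numbers are invented.

def expected_value(p_literal_is_complete, act_directly):
    """Expected utility to the human of the agent's choice.

    p_literal_is_complete: the agent's belief that the literal request
        ("get coffee, nothing else matters") really is the whole objective.
    act_directly: True  -> optimise the literal objective at any cost;
                  False -> defer: check with the human before doing anything drastic.
    """
    U_GOOD_COFFEE = 10    # coffee delivered, nothing else harmed
    U_SIDE_EFFECTS = -100  # literal optimisation tramples unstated preferences
    U_DEFER = 8            # slight delay, but the human stays in control

    if act_directly:
        return (p_literal_is_complete * U_GOOD_COFFEE
                + (1 - p_literal_is_complete) * U_SIDE_EFFECTS)
    return U_DEFER

for belief in (0.99, 0.5):
    direct = expected_value(belief, act_directly=True)
    defer = expected_value(belief, act_directly=False)
    choice = "act directly" if direct > defer else "defer to the human"
    print(f"belief={belief:.2f}: direct={direct:+.1f}, defer={defer:+.1f} -> {choice}")
```

With near-certainty (0.99) the agent simply fetches the coffee; with genuine uncertainty (0.5) deferring to the human has the higher expected value, so the agent stays correctable, which is the property being described.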
4:42 am
we know that, as i said, billions are being invested in ai, and in particular spheres there is a concentration. defence spending, for example, the search for what one might call "autonomous weapons." we have seen the rise of the drone. we now, thanks to the turkish government, know there are swarm drones which are, according to turkey, which has developed this capability, capable of using facial recognition technology, deciding for themselves whether that person represents a legitimate target, opening fire or smashing into them with explosives if they deem it to be the right target and, if not, coming home again, so they have finished the mission and decided there was no attack to be launched. right. that kind of weaponisation of ai sounds, to many people, extraordinarily scary. are you scared? yes, in the near term that's a very, very serious risk. in fact, two years ago we made a small film called slaughterbots, which described something that's
4:43 am
very like this turkish weapon. and many governments derided this film as science fiction, saying that this couldn't be possible for decades, and that it's not even worth discussing a treaty to ban this kind of weapon because it's so far off in the future. but already at that time the turkish government and the arms manufacturer, stm, were in the process of designing and manufacturing pretty much precisely the thing that we showed in the movie. so, we have been working very hard to bring a treaty about, but the way things work in diplomacy, if the united states, russia and the united kingdom are opposed to a treaty, it's very hard to move things forward. so when you look across the piece, would you say it is in the militarisation of ai that you, at the moment, find most cause for concern? so i think we have to think about
4:44 am
different timescales simultaneously. in the short term, to me, this is the most serious risk. and i think another big issue that's happening is the manipulation of opinions through social media, and part of that comes down to the way the automated algorithms work, and it's the same issue: you know, the objective of these automated algorithms is to maximise click-through, and the way they operate is in fact to manipulate people so that they become more predictable in their opinions and the kinds of things that they will consume. so that's had a dramatically negative effect on, in some sense, the entire world, and it was collateral damage of an algorithm operating to optimise the wrong objective. click-through is just a poorly designed objective because it doesn't take into account all of these externalities, these changes to people's opinions, which have such a negative effect globally. we see, as i've already said, states pouring money into ai
4:45 am
and we've referred to the weapons business, but surveillance in china, where we know now that facial recognition technology is being married with mass surveillance to, in a sense, create a surveillance state where your behaviours are tracked and then given credit scores, and are part of a means of defining your place in society. a really overarching surveillance notion. and then we've got russia which, and i'm going to quote vladimir putin here, appears to sort of see an ai "arms race", with putin saying that artificial intelligence is the future for not only russia but all humankind, and whoever becomes the leader in this sphere will become the ruler of the world. is that a mentality that you think is going to hasten us to this very dark place you talked about? i think it's quite possible, yes. if countries feel that they are in this live-or-die competition with other nations to be the first
4:46 am
to create superhuman ai, then they are going to cut corners. they are not going to wait for these more robust solutions that allow humans to retain control. they are going to go straight for pushing forward on the standard model in the hope that they get there first. in terms of the standard of living for your people, there is really no need to be saying we want to rule, we want leadership, we want ownership of this technology, and the large corporations have actually agreed, in an international organisation called the partnership on ai, that whoever achieves superhuman ai will share it with everybody. do you really believe that? this gets down to the idea of whether ai can be,
4:47 am
in any sense, regulated globally. we made an early sort of comparison with the potential threat of nuclear weapons. for 19 years, we have had the iaea, the international watchdog in vienna which is supposed to monitor the development of nuclear facilities around the world. in the field of ai, is it possible to imagine that level of global cooperation and transparency, both at corporate and state level? it's very interesting to look back at the future of nuclear technology, it's a potentially civilisation—ending technology but it was also thought to be something that could create a golden age of unlimited energy and also peace. it's that same risk—reward dynamic. even though there was a movement among scientists at the end of world war ii to share technology and put it in trust for humanity,
4:48 am
instead, it became used as a tool of domination, first by the us and then by the ussr obtaining the technology, and that became the nuclear arms race. so part of the problem is, they tried to put it in trust for humanity after the fact, when it was already owned by the united states. what we are trying to do now is solve this problem before the fact, before the technology is created, and do two things. one is to change the way we design it so that we remain in control of ai systems forever: eliminate the standard model and move to a new model which doesn't involve these fixed, known objectives that are put into the system. the second part of it is to essentially have unbreakable agreements that the technology will be shared for the benefit of humanity. do you think that the rise of ai is going to undermine our liberal notions of democracy and individualism? will it play into the hands of
4:49 am
authoritarian controlling systems? i think the chinese model is something that, my guess is, they are going to abandon, because what happens when you set up something like this social credit score, which is supposed to be a numerical calculation of how much that person is contributing to an overall harmonious, virtuous social order, is that people simply behave to optimise the score and not virtue and harmony. so you get bonus points, for example, for visiting your ageing parents, because chinese tradition says you should honour your parents and your ancestors, and so that's considered to be virtuous behaviour.
4:50 am
but of course if you are only doing it to get more points, you lose the virtue. you are encouraging what used to be virtuous behaviour but you are turning it into cynical behaviour, so you end up with a society of cynical point maximisers rather than anyone doing anything for a good reason. my final thought is this. throughout our conversation, we've sort of posed a juxtaposition between man and machine. it was an either/or. either man maintains control of the machines or, as they exponentially increase their capacity for data storage and thinking, learning anyway, they take over and we lose control and we can't turn them off. maybe it's not an either/or. maybe what we should think about for the future is the melding of the two, that somehow man and machine are married. and this is another thing that elon musk has suggested, he started a company, neuralink, whose goal is exactly that. so you have to ask yourself,
4:51 am
if we all need to have brain surgery, electronic implants, just to survive, did we make a mistake somewhere along the line? and i think the answer would be yes. so i don't find that to be an attractive future for the human race. but i think elon musk would say, we are halfway there already. the relationship we have with our mobile phone and the degree to which it now is a central part of almost our personality, the way we relate to the rest of the world. we are halfway there. if you can just sort of physically put the chip from your mobile phone into your brain, you will become some sort of cyborg and maybe that is the artificial intelligence future that we face. it could be. if it's just for communicating,
4:52 am
i don't really have too much of a problem with that, although i'm still opposed to the idea that you won't even get a job unless you've had the brain surgery. but if it's really enhancement of our cognitive function, that i think is problematic, because then what we think of as human in some sense vanishes, and what's left is whatever the next generation of some giant it corporation can dream up, and again, that's not the future i would like. this is where your optimism runs out. i think the human race will choose a different path. we need to get you back several years from now and see where you're at, but for now, we are out of time. stuart russell, thank you for being on hardtalk. thank you, delightful. thank you.
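Returning briefly to the social-media point made earlier in the interview, that an algorithm rewarded only on click-through can do better by making people more predictable than by serving what they already wanted, the toy simulation below illustrates the effect. The two-topic setup, drift rate and numbers are invented for the example; it is a caricature, not a description of any real recommender system.

```python
# Toy model: a recommender rewarded purely on clicks can earn more by
# narrowing the user's interests than by respecting them. Numbers invented.
import random

def run(nudge, steps=1000, seed=0):
    """Simulate a user whose taste for topic A drifts toward whatever they
    are repeatedly shown. Returns the total number of clicks collected."""
    rng = random.Random(seed)
    pref_a = 0.5          # probability the user clicks on topic A
    clicks = 0
    for _ in range(steps):
        if nudge:
            shown = "A"                          # always push topic A
            pref_a = min(1.0, pref_a + 0.002)    # repeated exposure shifts taste
        else:
            shown = "A" if rng.random() < pref_a else "B"
        p_click = pref_a if shown == "A" else 1 - pref_a
        clicks += rng.random() < p_click         # True counts as 1
    return clicks

print("respect current preferences:", run(nudge=False))
print("make the user predictable:  ", run(nudge=True))
```

The nudging policy collects more clicks precisely by changing the person, which is the collateral damage of optimising the wrong objective that Russell describes.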
4:53 am
hello. the weekend brought us a pretty unsettled spell, quite changeable weather. many places saw some rain, particularly on sunday. this was the scene as the sun set on sunday evening over london. so, fairly dramatic skies and big shower clouds around. we are going to be seeing more of that unsettled weather over the next week or so. things remaining unsettled through the week ahead. further spells of rain at times, but it's not going to be a washout this week, and there will be some sunshine on offer too. so, what we're going to have today, we've got low pressure moving in from the west, bringing some rain to northern ireland, and also a waving weather front heading in towards the south-east. so to start off your monday morning, that means we could well have heavy showers from the word go across the likes of kent, perhaps sussex as well. these showers in the south-east push their way north
4:54 am
through the day. they'll be hit and miss, not everywhere seeing them, but if you do catch these showers, they could be heavy, bringing thunderstorms with quite a lot of lying surface water as well. there will be some fairly persistent rain for northern ireland, pushing into western scotland later in the day. the wind is not too much of a feature for most places, but could be quite blustery with some of those heavy downpours in the south and the east. i think northern england and parts of northern and eastern scotland should have the best of the dry, bright weather through the day, with temperatures between about 13—17 degrees. into monday evening now, and this batch of heavy showers moves further northwards across england. we've got the showers gradually fading away from the west of scotland. so actually, things are drying up into the early hours of tuesday, perhaps just a bit of rain lingering for the north—east of england
4:55 am
perhaps eastern scotland as well. quite a murky start with quite a bit of low cloud and perhaps a mist and fog around as well first thing on tuesday. but tuesday will bring us a bit of a respite, a short window of slightly drier weather. we're in between weather systems during the day on tuesday. so once that rain does clear away from the east coast, quite a bit of dry weather to be seen. any morning mist and fog breaking up to leave some sunny spells. quite light winds, with the next area of rain waiting in the wings, but that won't arrive in the west until much later on in the day. before it gets there, temperatures fairly typical for the time of year, around about 12-15 degrees; light winds with some sunshine, it shouldn't feel too bad. into wednesday, this front is across much of scotland and england first thing, then it gradually clears towards the east. more sunshine working in from the west but also a few scattered showers, particularly for northern ireland. temperatures 12-16 degrees in the sunshine, so not too bad. the winds should ease after a bit of a blustery start to the day. and then further ahead, low pressure often in charge.
4:57 am
4:58 am
turkey continues its military operation against the kurds in syria, but damascus says it will send troops to confront turkish aggression. poland's governing law and justice party has decisively won parliamentary elections, keeping an overall majority. we talk to one of the most acclaimed directors of all time, martin scorsese, about his latest film, the irishman. it's about power, love and betrayal and, ultimately, the price you pay for the life you lead.