HARDtalk, BBC News, November 1, 2017, 12:30am-1:01am GMT
12:30 am
our top story: eight people have been killed in a terrorist attack in new york city. a man drove into people on a bike path in lower manhattan. he was shot by police and is now in custody — he's said to be a 29-year-old from central asia who lived in florida. new york city's mayor says this was a "cowardly act of terror aimed at innocent civilians". bill de blasio says their spirit will never be moved by an act of violence. stay with us. now on bbc news, it's time for hardtalk. welcome to hardtalk
12:31 am
with me, stephen sackur. for now, the bbc employs human beings like me to question the way the world works. but for how much longer? as research into artificial intelligence intensifies, is there any sphere of human activity that won't be revolutionised by ai and robotics? my guest today is alan winfield, a world-renowned professor of robot ethics. from driving, to education, to work and warfare, are we unleashing machines which could turn the dark visions of science fiction into science fact? alan winfield, welcome to hardtalk.
12:32 am
delighted to be here, stephen. you do have a fascinating title, director of robot ethics. i'm tempted to ask you, what's more important to you, the engineering, robotics, or the ethics, being an ethicist? both are equally important. i'm fundamentally an engineer so i bring an engineering perspective to robotics but more than half of my work now is thinking about... i'm kind of a professional worrier now. would you say the balance has shifted over the course of your career? you started out very much in computers and engineering but increasingly, as you have dug deeply into the subject, in a sense the more philosophical side of it has been writ large for you? absolutely right.
12:33 am
it was really getting involved in public engagement, robotics public engagement, 15 years ago that if you like alerted me and sensitised me to the ethical questions around robotics and ai. let's take this phrase, artificial intelligence. it raises an immediate question in my mind, how we define intelligence. i wonder if you could do that for me? it's really difficult. one of the fundamental philosophical problems with ai is we don't have a satisfactory definition for natural intelligence. here's a simple definition: it's doing the right thing at the right time. but that's not very helpful from a scientific point of view. one thing we can say about intelligence is it's not one thing we all have more or less of. what about thinking? are we really, as in the course of this conversation, talking about the degree to which human beings can make
12:34 am
machines that think? i think thinking is a dangerous word. it's an anthropomorphisation and in fact more than that, it's a humanisation of the term intelligence. a lot of the intelligence you and i have is nothing to do with conscious reflective thought. one of the curious things about ai is that what we thought would be very difficult 60 years ago, like playing board games, chess, go as it happens, has turned out to be not easy but relatively easy, whereas what we thought would be very easy 60 years ago, like making a cup of tea in somebody else's kitchen, has turned out to be enormously difficult. it's interesting you light upon board games so quickly because in the news in the last few days we've seen something quite interesting. google's deepmind department has this machine, computer, call it what you will, the alphago zero i think they call it, which has achieved i think astounding results playing this
12:35 am
game, i'm not familiar with it, a game called go, mainly played in china, extremely complex, more moves in it, more complexity than chess, and this machine is now capable of beating it seems any human grandmaster and the real thing about it is it's a machine that appears to learn unsupervised. that's right. i must admit, i'm somewhat baffled by this, you said don't think about thinking but it seems this is a machine that thinks. it's a machine that does an artificial analogue of thinking, it doesn't do it in the way you and i do. the technology is based on what are called artificial neural networks and they are if you like an abstract model of biological networks, neural networks, brains in other words, which actually we don't understand very well curiously but we can
12:36 am
still make very simple abstract models, and that's what the technology is. the way to think about the way it learns, and it is a remarkable breakthrough, i don't want to over-hype it because it only plays go, it can't make a cup of tea for you, but the very interesting thing is the early generations effectively had to be trained on data that was gleaned from human experts and many, many games of go. it had to be loaded with external information? essentially, that's right. that's what we call supervised learning, whereas the new version, and again, if i understand it correctly, i only scanned the nature paper this morning, is doing unsupervised learning. we actually technically call it reinforcement learning. the idea is that the machine is given nothing else than if you like the rules of the game and its world is the board, the go
12:37 am
board and the pieces, and then it just essentially plays against itself millions and millions of times. it's a bit like, you know, a human infant learning how to, i don't know, play with building blocks, lego, entirely on his or her own by just learning over and over again. of course, humans don't actually learn like that, mostly we learn with supervision, with parents, teachers, brothers and sisters, family and so on. you're prepared to use a word like learning? thinking you don't like, learning you're prepared to apply to a machine? yes. what i want to get to, before we go into the specifics of driverless cars and autonomous fighting machines and all of that, i still want to stay with big-picture stuff. the human brain, you've already
12:38 am
mentioned the human brain, it's the most complex mechanism we know on this planet. in the universe, in fact. is it possible, talking about the way that google deepmind and others are developing artificial intelligence, that we can all look to create machines that are as complex, with the billions and trillions of moving parts, if i can put it that way, that the human brain possesses? i would say in principle, yes, but not for a very long time. i think the problem of making an ai or robot if you like, a robot is just ai in a physical body, that is comparable in intelligence to a human being, an average human being if you like, an averagely intelligent human being, is extraordinarily difficult and part of the problem, part of the reason it's so difficult, is we don't actually have the design, if you like, the architecture of human minds.
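to make the self-play idea described a moment ago slightly more concrete, here is a minimal sketch in python. it is emphatically not deepmind's actual method: alphago zero pairs a deep neural network with monte carlo tree search on the full game of go, whereas this toy uses a plain lookup table and noughts and crosses, and the board encoding, epsilon-greedy move choice and update rule are all illustrative assumptions. the loop, though, is the one winfield describes: the program is given only the rules, plays against itself many times, and nudges its value estimates towards the results it actually achieves.

```python
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == '.']

# value[(board, player)] ~ estimated chance that `player`, about to move, eventually wins
value = defaultdict(lambda: 0.5)

def choose_move(board, player, epsilon):
    """Pick a move: mostly greedy on the learned values, occasionally random."""
    if random.random() < epsilon:
        return random.choice(legal_moves(board))
    opponent = 'O' if player == 'X' else 'X'
    def score(m):
        nxt = board[:m] + player + board[m + 1:]
        if winner(nxt) == player:          # an immediately winning move
            return 1.0
        return -value[(nxt, opponent)]     # otherwise leave the opponent the worst position
    return max(legal_moves(board), key=score)

def self_play_game(epsilon=0.1, alpha=0.2):
    """Play one game against itself, then update every visited position."""
    board, player = '.' * 9, 'X'
    visited = []
    result = None
    while True:
        visited.append((board, player))
        m = choose_move(board, player, epsilon)
        board = board[:m] + player + board[m + 1:]
        result = winner(board)
        if result or not legal_moves(board):
            break
        player = 'O' if player == 'X' else 'X'
    for state, p in visited:               # reinforce towards the actual outcome
        target = 1.0 if result == p else (0.0 if result else 0.5)
        value[(state, p)] += alpha * (target - value[(state, p)])

if __name__ == '__main__':
    for _ in range(50_000):                # "plays against itself millions of times", scaled down
        self_play_game()
    print(f'learned value estimates for {len(value)} positions')
```

after enough self-play games the table steers the program towards sensible moves without it ever having seen a human game, which is the sense in which the newer system learns without the supervision the earlier generations needed.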
12:39 am
but in principle you think we can get it? what i'm driving at really is this principal philosophical question of what the brain is. to you, professor, is the brain in the end chemistry? is it material? is it a lump of matter? yes. does it have any spiritual or any other intangible thing? is it chemistry? i'm a materialist, yes, the brain is thinking meat. that is a bit of a cop-out. you said thinking meat, it is meat and the way that meat is arranged means it could think, so you could create something artificial where, if it was as complex and well arranged as human capacity could make it one day, it could also think? i believe in principle, yes. but the key thing is architecture. in a sense, the way to think about the current work on artificial intelligence, we have these artificial neural networks which are almost like
12:40 am
the building blocks. it's a bit like having marble, but just having a lot of wonderful italian marble doesn't mean you can make a cathedral. you need to have the design, you need to have the architecture and the know-how to build that cathedral, and we don't have anything like that. one more general point and then i want to get down to the specifics. nick bostrom at oxford university, you know him, i know you do because he works in the same field as you, says you have to think of ai as a fundamental game changer for humanity. it could be the last invention that human intelligence ever needs to make, he says, because it's the beginning of a completely new era, the machine intelligence era, and in a sense we are a bit like children playing with something we have picked up and it happens to be an unexploded bomb and we don't even know the consequences that
12:41 am
could come with it. do you share that vision? i partially share it. where i disagree with nick is that i don't think we are under threat from a kind of runaway superintelligence, which is the thesis of his book on that subject, superintelligence, but i do think we need to be ever so careful. in a way, i alluded to this earlier, we don't understand what natural intelligence is, we don't have any general scientific theory of intelligence, so trying to build artificial general intelligence is a bit like trying to do particle physics at cern without any theory, without any underlying scientific theory. it seems to me that we need both some serious theory, which we don't have yet, we have some but it isn't unified, there isn't a single theory if you like, like the standard model in physics. we also need to do responsible research and innovation. in other words, we need to innovate ethically to make sure any, as it were, unintended consequences
12:42 am
are foreseen and we head them off. let's talk in a more practical sense, where unintended consequences may well come up. let's start with something i think most of us are aware of now, and regard as one of the most both challenging and perhaps exciting specific ai achievements, that is the driverless car. yes. it seems to me all sorts of issues are raised by a world in which cars are driverless. a lot of moral and ethical issues as well as practical ones. you work with people in this field, are you excited by driverless cars? i am, yes. i think driverless cars have tremendous potential for two things... do you see them as robots? i do, yes, a driverless car is a robot. typically once a robot becomes part
12:43 am
of normal life, we stop calling it a robot, like a vacuum cleaner. i think there are two tremendous advances from driverless cars we can look forward to. one is reducing the number of people killed in road traffic accidents significantly, if we can achieve that, and i'm going to be cautious when i say more on this. the other is giving mobility to people, elderly people, disabled people, who currently don't have that. both of those are very practical, but science magazine last year studied a group of almost 2,000 people, asking them what they wanted to see in terms of the morals of using driverless cars, how the programming of the car would be developed to ensure that, for example, in a hypothetical,
12:44 am
if a car was on the road and it was about to crash, but if it veered off the road to avoid the crash it would hit a group of schoolchildren being led by a teacher down the road. the public in this survey wanted to know that the car would in the end accept its own destruction and that of its driver, human passenger rather, as opposed to saving itself and ploughing into the children on the side of the road. how do you as a robot ethicist cope with this sort of challenge? the first thing i'd say is let's not get it out of proportion. you have to ask yourself as a human driver, probably like me you've got many years of experience of driving, have you ever encountered that situation? not in my case, but i want to know, if i ever step into a driverless car, that someone has thought about this. i think you're right. the ethicists and the lawyers are not clear. the point is we need to have a conversation. i think it's really important that
12:45 am
if we have driverless cars that make those kinds of ethical decisions, you know, that essentially decide whether to potentially harm the occupants... you're doing what you told me off for doing, you're anthropomorphising. it wouldn't be making an ethical decision, it would be reflecting the values of the programmer. those rules need to be decided by the whole of society. the fact is, whatever those rules are, there will be occasions when the rules result in consequences that we don't like, and therefore i think the whole of society needs to, if you like, own the responsibility for those cases. so you are making a call, be it driverless cars or any other examples we're thinking about with ai, for the technological developments to go in lockstep with a new approach to monitoring, regulation, universal standardisation. and a conversation, a big conversation in society
12:46 am
so that we own the ethics that we decide should be embedded. but that will not help. much of the development here, i mean, you work at bristol in a robotics lab, but a lot of cutting-edge work is being done in the private sector. some of it is done by secretive defence establishments. there is no standardisation, there is no cooperation. it is a deeply competitive world. it jolly well needs to be. my view is simple. the autopilot of a driverless car should be subject to the same levels of compliance with safety standards as the autopilot of an aircraft. we all accept... you and i would not get into an aircraft if we thought that the autopilot had not met those
12:47 am
very high standards. it is inconceivable that we could allow driverless cars on our roads that have not passed those kinds of safety certification processes. leaving driverless cars and going to areas that are, perhaps, more problematic for human beings. let's develop the idea of the machine, the future intelligent machine, taking jobs and roles that have traditionally always been done by human beings because they involve things like empathy and care. and compassion. i'm thinking about roles in social care and education. even, frankly, a sexual partner, because we all now read about the sexbots that are being developed. in these roles, do you feel comfortable with the notion that machines will take over from human beings?
12:48 am
no. and i do not think they will. but they already are... japan has care robots. a care robot may well be able to care for you, for example, for your physical needs, but it cannot care about you. only humans can care about either humans or any other animal or object. robots cannot care about people, or things for that matter. and the same is true for teachers. teachers typically care about their classes. you think some people are getting way overheated about this? one of the most well-known teachers here in britain now says that in his vision of a future education system, many children will be taught one-on-one in a spectacular new way by machines. he says it is like giving every child access to the best private school. ultimately, there may well be...
12:49 am
and we are talking about, into the future, some combination of machine teaching and human teaching. you cannot take the human out. an important thing to remember here is the particularly human characteristics of empathy, sympathy... theory of mind, the ability to anticipate, to read each other. these are uniquely human characteristics, as is our creativity and innovation and intuition. these are things we have no idea how to build artificially. jobs that involve those things are safe. interesting. people nowadays are looking at doomsday scenarios
12:50 am
with the development of robotics and ai where frankly, most jobs one can think of — and i was being flippant earlier about being replaced by a robot — but you suggest to me that so many different jobs, not just blue-collar but white-collar as well, will be done by machines... again, are we overstating it? i think we are, yes. i am not saying that will not happen eventually, but i think that what we have is much more time than people suppose to find a harmonious, if you like, accommodation between human and machine, that actually allows us to exploit the qualities of humans and the skills, the things that humans want to do, which is, you know... if you don't mind me saying, you seem both extraordinarily sanguine and comfortable and optimistic about the way
12:51 am
in which ai is developing under the control of human beings, and your faith in humanity's ability to co-operate on this and establish standards seems to run in the face of the facts. one area is weaponisation. the notion that ai and robotics will revolutionise warfare and warfighting. you were one of 1,000 senior scientists who signed an appeal for a ban on ai weaponry in 2015. that ban will not happen, will it? the ban... it may... did you see what vladimir putin said? he said that artificial intelligence is the future for russia and all of humankind — and this is the key bit, "whoever becomes the leader in this sphere will become the ruler of the world." i am an optimist, but i am also very worried about exactly this thing. we have already seen, if you like, the political weaponisation of ai. it is clear, isn't it, that the evidence is mounting that ai was used in recent elections? you are talking about the hacking? and some of that
12:52 am
you believe is from russia? that is political weaponisation, and we do need to be worried about these things. we do need to have ethical standards and we need to have worldwide agreement. i am optimistic about a ban on lethal autonomous weapons systems. the campaign is gaining traction, there have been all kinds of discussions and i know some of the people involved quite well in the united nations. but we know the limitations of the united nations and the limitations of politics, and we know that human nature usually leads to the striving to compete and to win, whether it be in politics or on the battlefield. can i leave you with this thought? it seems that there is a debate within science and you are on one side, being sanguine and optimistic. perhaps on the other side, stephen hawking recently said that the development of full artificial intelligence could spell
12:53 am
the end of the human race. do you find that kind of thought helpful or deeply unhelpful? deeply unhelpful. the problem is that it is not inevitable. what he is talking about here is a very small probability. if you like, it is a very long series of ifs: if this happens, then if that happens, then this, and so on and on. i wrote about this in the observer back in 2014, before stephen hawking got involved in this debate. my view is that we are worrying about an extraordinarily unlikely event, that is, the intelligence explosion... do you think we have actually been too conditioned by science fiction and by the terminator concept? we have. there is no doubt. the problem is that we are fascinated.
12:54 am
it is a combination of fear and fascination and that is why we love science fiction. but in your view it is fiction? that scenario is fiction, but you are quite right, there are things we should worry about now. we need to worry about jobs, we need to worry about the weaponisation of ai, and we need to worry about standards in driverless cars and care robots, in medical diagnosis ais. there are many things that are here-and-now problems, in the sense that they are kind of more to do with the fact that ai is not very intelligent, so we need to worry about artificial stupidity. that is a neat way of ending. alan winfield, thank you very much. hello.
12:55 am
mixed fortunes in our weather during the day ahead. southern areas should see more in the way of sunshine than they did on tuesday, with a feed of drier air from the near continent around an area of high pressure. but up to the north, it's all about this weather front, a weather front bringing a slow-moving band of rain, heavy rain for a time across southern and south-western scotland during the first part of the morning. to the north of the frontal system, there will be a mixture of sunny spells and showers. but it is this rain around the glasgow area, stretching towards edinburgh, which could cause some issues, with persistent heavy rain during the morning rush hour. into the midlands and east anglia
12:56 am
and the south-east, there'll be the odd fog patch through the first part of the morning, fog tending to clear, but it will be a fairly bright day with increasing amounts of sunshine. the south-west of england starting off on a bright note. again, there could be the odd fog patch. similar story across wales. more cloud in northern ireland, and here is the weather front again, just beginning to fringe towards the north coast at this stage. as we go on through the day, our frontal system will only move very slowly southwards, although the rain along it will tend to ease. to the north, a mixture of sunshine and heavy showers, and to the south, for england and wales, a dry day and an increasingly bright one. there should be some spells of sunshine. 16 in london. that won't feel too bad if the skies are blue. wednesday night, the frontal system moves on and pushes its way southwards, at this stage really just a band of cloud and spots of drizzle. to the north and south, fairly chilly, and across southern areas of england,
12:57 am
could be some fog around on thursday morning. on thursday, this area of cloud from the old weather front, with some spots of patchy rain and drizzle, will drift slowly southwards. elsewhere, sunny skies but a dip in the temperatures. eight degrees in aberdeen, 12 in cardiff. friday will be dry and bright enough for many of us, but a change up here to the north-west, another weather front sinking in, and it's a weak affair. but that frontal system on friday will bump into some warm air pushing in temporarily from the continent. some heavy rain across england and wales, and once all of that clears away, some really cold air into the weekend, coming all the way from the arctic. some sunny spells but also some showers as well.
12:58 am
perhaps wintry over the high ground and for all of us, a chilly wind. this is newsday on the bbc. i'm rico hizon, in singapore. the headlines: eight dead in new york and at least twelve others injured after a truck strikes people on a bike path in lower manhattan. a man is shot by police and is now in custody. police say the suspect moved to the us in 2010. the mayor of new york says the attack is being seen as deliberate but that there's no evidence so far of a wider plot. let me be clear, based on the information we have, at this moment, this was an act of terror and a particularly cowardly act of terror. i'm babita sharma, in london. also in the programme: president trump takes to twitter to address the events in manhattan, describing the attacker as "sick and deranged".