
tv HARDtalk BBC News October 31, 2017 12:30am-1:01am GMT

12:30 am
election has produced its first criminal charges. former trump campaign manager paul manafort has pleaded not guilty to 12 charges, including concealing earnings from his dealings with ukraine before he joined the trump team. his lawyer has denied any suggestion of collusion. a former foreign policy advisor to the trump campaign has admitted lying to the fbi about his contacts with russian officials. the white house has distanced itself from the arrests. and this video is trending on bbc.com: the white house has been hosting trick-or-treating children as part of a halloween event. president trump and his wife melania gave out high fives and the usual sweets to costumed kids and their parents. that's all from me now. stay with bbc world news. now on bbc news, it's time for hardtalk. welcome to hardtalk with me, zeinab badawi. for now bbc employs human beings
12:31 am
like me to question the way our world works, but for how much longer? as research into artificial intelligence intensifies, is there any sphere of human activity that won't be revolutionised by ai and robotics? my guest today is alan winfield, a world-renowned professor of robot ethics. are we unleashing machines which could turn the dark visions of science fiction into science fact? alan winfield,
12:32 am
welcome to hardtalk. you have a fantastic title, director of robot ethics. i'm tempted to ask you, what's more important to you, robotics or being an ethicist? both are important. i'm an engineer, so i bring an engineering perspective to robotics, but more than half of my work now is thinking about ethics and... i'm kind of a professional worrier now. would you say your science has shifted over the course of your career? you started out very much in computers and engineering, but increasingly, as you have dug deeply into the subject, in a sense the more philosophical side of it has been writ large for you?
12:33 am
absolutely right. it was really getting involved in public engagement, robotics public engagement, 15 years ago that, if you like, alerted me and sensitised me to the ethical questions around robotics and ai. let's take this phrase, artificial intelligence. it raises an immediate question in my mind: how do we define intelligence? it's really difficult. one of the fundamental philosophical problems with ai is we don't have a satisfactory definition for natural intelligence. here's a simple definition: it's doing the right thing at the right time. but that's not very helpful from a scientific point of view. one thing we can say about intelligence is it's not one thing we all have more or less of. what about thinking? are we really, in the course of this conversation, talking about the degree to which human beings can make machines that
12:34 am
think? i think thinking is a dangerous word. it's an anthropomorphisation and, more than that, it's a humanisation of the term intelligence. a lot of the intelligence you and i have is nothing to do with conscious reflective thought. one of the curious things about ai is that what we thought would be very difficult 60 years ago, like playing board games, chess, go as it happens, has turned out to be not easy but relatively easy, whereas what we thought would be very easy 60 years ago, like making a cup of tea in somebody else's kitchen, has turned out to be enormously difficult. it's interesting you go to board games so quickly, because in the news in the last few days we've seen something quite interesting. google's deepmind department has this machine, computer, call it what you will, alphago zero i think they call it,
12:35 am
which has achieved, i think, astounding results playing this game, i'm not familiar with it, a game called go, mainly played in china, extremely complex, more moves in it, more complexity than chess, and this machine is now capable of beating, it seems, any human grandmaster, and the real thing about it is it's a machine that appears to learn unsupervised. that's right. i must admit, i'm somewhat baffled by this. you said don't think about thinking, but it seems this is a machine that thinks. it's a machine that does an artificial analogue of thinking; it doesn't do it in the way you and i do. the technology is based on what are called deep neural networks, and they are, if you like, an abstract model of biological networks, neural networks, brain networks, which actually we don't understand very well, curiously, but we
12:36 am
can still make very simple abstract models, and that's what the technology is. the way to think about the way it learns, and it is a remarkable breakthrough, i don't want to over-hype it because it only plays go, it can't make a cup of tea for you, but the very interesting thing is the early generations effectively had to be trained on data that was gleaned from human experts and many, many games of go. it had to be loaded with external information? that's right. that's what we call supervised learning, whereas the new version, and again if i understand it correctly, i only scanned the nature paper this morning, is doing unsupervised learning. technically we call it reinforcement learning. the idea is that the machine is given nothing else than, if you like, the rules of the game, and its world is the board,
12:37 am
the go board and the pieces, and then it just essentially plays against itself millions and millions of times. it's a bit like, you know, a human infant learning how to, i don't know, play with building blocks, lego, entirely on his or her own by just learning over and over again. of course, humans don't actually learn like that; mostly we learn with supervision, we learn with parents, sisters, family and so on. so you're prepared to use a word like learning, thinking you don't like, learning you're prepared to apply to a machine? yes. what i want to get to, before we go into the specifics of driverless cars and autonomous fighting machines and all of that, i want to stay with big picture stuff.
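For readers curious what "learning by self-play" looks like in practice, here is a minimal sketch in Python. It is not DeepMind's actual system (AlphaGo Zero pairs deep neural networks with Monte Carlo tree search); it is a tiny tabular value-learning agent for noughts and crosses that is given nothing but the rules and improves purely by playing against itself. The parameter values and names are illustrative assumptions, not anything from the programme.

```python
# A tiny illustration of learning a game purely by self-play, given only the
# rules. NOT AlphaGo Zero; just tabular value learning on noughts and crosses.
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = {}                 # board string -> estimated win probability for 'x'
ALPHA, EPSILON = 0.2, 0.1   # learning rate and exploration rate (illustrative)

def value(board):
    w = winner(board)
    if w == 'x':
        return 1.0
    if w == 'o':
        return 0.0
    return values.setdefault(''.join(board), 0.5)

def choose_move(board, player):
    moves = [i for i, s in enumerate(board) if s == ' ']
    if random.random() < EPSILON:
        return random.choice(moves)           # occasional random exploration
    best = max if player == 'x' else min      # 'o' tries to minimise x's value
    def after(m):
        nxt = list(board)
        nxt[m] = player
        return value(nxt)
    return best(moves, key=after)

def self_play_episode():
    board, player = [' '] * 9, 'x'
    visited = [''.join(board)]
    while winner(board) is None and ' ' in board:
        board[choose_move(board, player)] = player
        visited.append(''.join(board))
        player = 'o' if player == 'x' else 'x'
    # back the final outcome up through the positions the game passed through
    target = value(list(visited[-1]))
    for state in reversed(visited[:-1]):
        v = values.setdefault(state, 0.5)
        values[state] = v + ALPHA * (target - v)
        target = values[state]

for _ in range(20_000):     # the real system plays millions of games
    self_play_episode()
print(f"value estimates learned for {len(values)} positions")
```

The point of the sketch is the one made in the interview: no human game records are supplied, only the rules and the outcome of each self-played game, and the value estimates improve from that feedback alone.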
12:38 am
the human brain, you've already mentioned the human brain, it's the most complex mechanism we know on this planet. in the universe, in fact. is it possible, talking about the way that google deepmind and others are developing artificial intelligence, that we can look to create machines that are as complex, with the billions and trillions of moving parts, if i can put it that way, that the human brain possesses? i would say in principle, yes, but not for a very long time. i think the problem of making an ai or robot, if you like, a robot is just ai in a physical body, that is comparable in intelligence to a human being, an average human being if you like, an averagely intelligent human being, is extraordinarily difficult, and part of the problem, part of the reason it is so difficult, is we don't actually have the design, if you like, the architecture of human
12:39 am
minds. but in principle you think we can get it? what i'm driving at really is this principal philosophical question of what the brain is. to you, professor, is the brain in the end chemistry? is it material, is it a lump of matter? yes. does it have any spiritual or any other intangible thing, or is it chemistry? i'm a materialist, the brain is thinking meat. that is a bit of a cop-out. you said thinking meat; it is meat, and the way that meat is arranged means it could think, so you could create something artificial where, if it was as complex and well arranged as human capacity could make it one day, it could also think? i believe in principle, yes. but the key thing is architecture. in a sense, the way to think about the current work on
12:40 am
artificial intelligence, we have these artificial neural networks, which are almost like the building blocks. it's a bit like having marble, but just having a lot of wonderful italian marble doesn't mean you can make a cathedral; you need to have the design, you need to have the architecture and the know-how to build that cathedral, and we don't have anything like that. one more general point and then i want to get down to the specifics. nick bostrom at oxford university, you know him, i know you do because he works in the same field as you, says you have to think of ai as a fundamental game changer for humanity. it could be the last invention that human intelligence ever needs to make, he says, because it's the beginning of a completely new era, the machine intelligence era, and in a sense we are a bit like children playing with something we have picked up and it happens to be an unexploded bomb, and we don't even know the consequences that could
12:41 am
come with it. do you share that vision? i partially share it. where i disagree with nick is that i don't think we are under threat from a kind of runaway superintelligence, which is the thesis of his book on that subject, superintelligence, but i do think we need to be ever so careful. in a way, i alluded to this earlier: we don't understand what natural intelligence is, we don't have any general scientific theory of intelligence, so trying to build artificial general intelligence is a bit like trying to do particle physics at cern without any underlying scientific theory. it seems to me that we need both some serious theory, which we don't have yet, we have some but it isn't unified, there isn't a single theory, if you like, like the standard model in physics. we also need to do
12:42 am
responsible research and innovation. in other words, we need to innovate ethically to make sure any, as it were, unintended consequences are foreseen and we head them off. let's talk in a more practical sense, where unintended consequences may well come up. let's start with something i think most of us are aware of now, and regard as one of the most both challenging and perhaps exciting specific ai achievements, that is the driverless car. yes. it seems to me all sorts of issues are raised by a world in which cars are driverless, a lot of moral and ethical issues as well as practical ones. you work with people in this field. are you excited by driverless cars? i am, yes. i think driverless cars have tremendous potential for two things... do you see them as robots? yes, a driverless car is a
12:43 am
robot. typically, once a robot becomes part of normal life we stop calling it a robot, like a vacuum cleaner. i think there are two tremendous advances from driverless cars we can look forward to. one is reducing the number of people killed in road traffic accidents significantly, if we can achieve that, so i'm going to be cautious when i speak more on this. the other is giving mobility to people, elderly people, disabled people, who currently don't have that. both of those are very practical, but science magazine last year studied a group of almost 2,000 people, asked them about what they wanted to see in terms of the morals, the ethics, of using driverless cars, how the programming of the car would be developed to ensure that, for example, in a hypothetical, if a car
12:44 am
was on the road and it was about to crash, but if it veered off the road to avoid a crash it would hit a group of schoolchildren being led by a teacher down the road. the public in this survey wanted to know that the car would in the end accept its own destruction and that of its driver, human passenger rather, as opposed to saving itself and ploughing into the children on the side of the road. how do you as a robot ethicist cope with this sort of challenge? the first thing i'd say is let's not get it out of proportion. you have to ask yourself, as a human driver, probably like me you've got many years of experience with driving, have you ever encountered that situation? not in my case, but i want to know, if i ever step into a driverless car, that someone has thought about this. i think you're right. the ethicists and the lawyers are not clear. the point is we need to have a conversation. i think it's really
12:45 am
important that if we have driverless cars that make those kinds of ethical decisions, you know, that essentially decide whether to potentially harm the occupants... you're doing what you told me not to do, you're anthropomorphising; it wouldn't be making an ethical decision, it would be reflecting the values of the programmer. those rules need to be decided by the whole of society. the fact is, whatever those rules are, there will be occasions when the rules result in consequences that we don't like, and therefore i think the whole of society needs to, if you like, own the responsibility for those cases.
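One way to see the point being made here, that such a car would not be "deciding" anything but simply reflecting values written down in advance, is a deliberately crude sketch like the one below. It is a hypothetical illustration only; the outcome names, harm scores and weights are invented for the example and do not describe any real vehicle's software.

```python
# A toy sketch of the point above: the machine's "choice" is just a rule
# table plus weights that a human wrote in advance. Whoever sets the weights
# is the one actually making the ethical call. Entirely hypothetical values.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    harm_to_occupants: int    # crude illustrative scores, not real metrics
    harm_to_bystanders: int

# society, not the machine, would have to agree on these numbers
WEIGHT_OCCUPANTS = 1.0
WEIGHT_BYSTANDERS = 1.0

def choose(outcomes):
    # pick the outcome with the lowest weighted harm under the fixed rules
    return min(outcomes,
               key=lambda o: WEIGHT_OCCUPANTS * o.harm_to_occupants
                             + WEIGHT_BYSTANDERS * o.harm_to_bystanders)

options = [
    Outcome("stay on course", harm_to_occupants=0, harm_to_bystanders=5),
    Outcome("swerve off the road", harm_to_occupants=3, harm_to_bystanders=0),
]
print(choose(options).description)  # the answer follows from the weights alone
```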
12:46 am
so you are making a call, be it driverless cars or any other example, for the technological developments to go in lockstep with a new approach to monitoring, regulation, universal standardisation. and a conversation, a big conversation in society, so that we own the ethics that we decide should be embedded. but that will not help. much of the development here, i mean, you work in bristol in a robotics lab, but a lot of cutting-edge work is being done in the private sector. some of it is done by secretive defence establishments. there is no standardisation, there is no cooperation. it is a deeply competitive world. it jolly well needs to be. my view is simple. the autopilot of a driverless car should be subject to the same levels of compliance with safety standards as the autopilot of an aircraft. we all accept... you and i would not get into an aircraft if we thought that
12:47 am
the autopilot had not met those safety standards. it is inconceivable that we could allow driverless cars on our roads that have not passed those kinds of safety certification processes. leaving driverless cars and going to areas that are, perhaps, more problematic for human beings: there is the idea of the machine, the future intelligent machine, taking jobs and roles that have traditionally always been done by human beings because they involve things like empathy and care and compassion. i'm thinking about roles in social care and education. even, frankly, a sexual partner, because we all now read about the sex robots that are being developed. in these roles, do you feel comfortable with the notion that machines will take
12:48 am
over from human beings? no, and i do not think they will. but they already are... japan has care robots. a care robot may well be able to care for you, for example, for your physical needs, but it cannot care about you. only humans can care about other humans, or any other animal or object. robots cannot care about people or things, for that matter. and the same is true for teachers. teachers typically care about their classes. you think some people are getting way overheated about this? one of the most well-known teachers here in britain now says that in his vision of a future education system, many children will be taught one-on-one
12:49 am
in a spectacular new way by machines. he says it is like giving every child access to the best private school. ultimately, there may well be, and we are talking about into the future, some combination of machine teaching and human teaching. you cannot take the human out. an important thing to remember here is the particularly human characteristics of empathy, sympathy... theory of mind, the ability to anticipate, to read each other. these are uniquely human characteristics, as are creativity and innovation and intuition. these are things we have no idea how to build artificially. jobs that involve those things are safe. interesting. people nowadays are looking at doomsday scenarios with the development of robotics and ai where, frankly, most jobs
12:50 am
one can think of, and i was being flippant earlier about being replaced by a robot, but you suggest to me that so many different jobs, not just blue-collar but white-collar as well, will be done by machines... again, are we overstating it? i think we are, yes. i am not saying that will not happen eventually, but i think that what we have is much more time than people suppose to find a harmonious, if you like, accommodation between human and machine, that actually allows us to exploit the qualities of humans and the skills, the things that humans want to do, which is, you know... if you don't mind me saying, you seem both extraordinarily sanguine and comfortable and optimistic about the way in which ai is developing, under the control of human beings, and your faith
12:51 am
in humanity's ability to cooperate on this and establish standards seems to fly in the face of it. one area is weaponisation, the notion that ai and robotics will revolutionise warfare and war fighting. you were one of 1,000 senior scientists who signed an appeal for a ban on ai weaponry in 2015. that ban will not happen, will it? the ban... it may... did you see what vladimir putin said? he said that artificial intelligence is the future for russia and all of humankind. whoever becomes the leader in this sphere will become the ruler of the world. i am an optimist, and i am also very worried about exactly this thing. we have already seen, if you like, the political weaponisation of ai. it is clear, isn't it, that the evidence is mounting that ai was used in recent elections. you are talking about the
12:52 am
hacking? and some of that, you believe, is from russia? that is political weaponisation, and we do need to be worried about these things. we do need to have ethical standards and we need to have worldwide agreement. i am optimistic about a ban on lethal autonomous weapons systems. the campaign is gaining traction, there have been all kinds of discussions in the united nations, and i know some of the people involved quite well. but we know the limitations of the united nations and the limitations of politics, and we know that human nature usually leads to the striving to compete and to win, whether it be in politics or on the battlefield. can i leave you with this thought? it seems that there is a debate within science and you are on one side, being sanguine and optimistic. perhaps on the other side, stephen hawking recently said
12:53 am
that the development of full artificial intelligence could spell the end of the human race. do you find that kind of thought helpful or deeply unhelpful? deeply unhelpful. the problem is that it is not inevitable. what he is talking about here is a very small probability. if you like, it is a very long series of ifs: if this happens, then if that happens, then this, and so on. i wrote about this in the observer back in 2014, before stephen hawking got involved in this debate. my view is that we are worrying about an extraordinarily unlikely event, that is, the intelligence explosion... do you think we have actually been too conditioned by science fiction and by the terminator concepts? we have. there is no doubt. the problem is
12:54 am
that we are fascinated. it is a combination of fear and fascination, and that is why we love science fiction. but in your view it is fiction? that scenario is fiction, but you are quite right, there are things we should worry about now. we need to worry about jobs, we need to worry about the weaponisation of ai, and we need to worry about standards in driverless cars and care robots, in medical diagnosis ais. there are many things that are here-and-now problems, in the sense that they are kind of more to do with the fact that ai is not very intelligent, so we need to worry about artificial stupidity. that is a neat way of ending. alan winfield, thank you very much. for many of us the weather is on the turn and it is turning
12:55 am
that bit milder. if we look at the temperatures we had at tulloch bridge on monday morning, reaching -5, but this morning, 10 celsius, that is a 15 degree rise. cloud and rain around, but it is that cloudy weather bringing in mild conditions across much of the country. the rain will be at its heaviest across western scotland, a bit patchy across the east. a little misty over the high ground but no desperate problems with visibility, just a little mist over the top of the hills. further southwards, across northern wales, partly cloudy with a few showers. some bright spells to start the day across southern counties of england, possibly an isolated shower for east anglia and the south-east. that will clear away quickly in the morning, so what we are left with is bright weather
12:56 am
across southern areas. the south-westerly winds bringing in mild conditions, but we could potentially see a spell of rain for a time during the afternoon affecting parts of wales and then moving on into parts of north-west england. always the wettest weather will actually be across western scotland. it should stay dry for most of the day across much of the midlands, southern england and the far south of wales. temperatures reaching 14 degrees or so. spooky night coming up for trick-or-treaters. bits of rain across the north-west and quite wet for western scotland as well. clearer spells further south. temperature-wise overnight we are looking at lows between eight and 11 celsius. this weather front is going to become very slow moving across western scotland, with the rain building up in those western hills as we go on through the day on wednesday. that weather front moves nowhere fast. for most of the uk, mild south to south-westerly winds coming in across the country, so it will be a mild day. temperatures reaching 15 degrees,
12:57 am
turning a little cooler and fresher perhaps across the far north-west of scotland. we have some cool air moving southwards as we go through wednesday night into thursday behind this cold front. we should start to see some bright spells working in, possibly the best across eastern scotland and parts of northern england. a bit of cloud further south, bits and pieces of light rain and drizzle around. towards the end of the day we will see the return of some cooler air coming back across the uk. that said, on friday we will have a reasonably quiet weather day with a little rain moving southwards. into the start of the weekend we will probably have a spell of heavy rain for a time in england and then some showers following through the north-west. hello, this is newsday. i'm rico hizon in singapore. our top stories: probing possible links with russia — donald trump's former campaign manager is accused of conspiracy against the united states.
12:58 am
meanwhile, a former trump aide pleads guilty to lying to the fbi about his contacts with russian officials. but the white house insists there was no collusion. we've been saying from day one there's been no evidence of trump-russia collusion and nothing in today's indictment changes that at all. i'm kasia madera in london. also in the programme: after five troubled years, australia's manus island detention
12:59 am
1:00 am
