tv HARDtalk BBC News December 7, 2017 4:30am-5:01am GMT
4:30 am
following president trump's formal recognition of the disputed city of jerusalem as the capital of israel. the move overturns decades of american policy towards the region. the move has been welcomed by the israeli government as 'historic'. in california, more than a thousand firefighters are battling huge wildfires. hundreds of buildings have already been destroyed and thousands of homes are under threat in ventura county, north of los angeles. the state governor has declared a state of emergency. no casualties have yet been recorded. more than 200 countries attending a un-backed conference in kenya have called for action to stop the flow of plastic into the world's oceans. however, scientists have criticised the lack of concrete proposals to prevent around eight million tonnes of plastic reaching the sea each year. let's have a look at the front pages of this morning's papers: the guardian leads with donald trump's recognition of jerusalem as israel's capital and his plans to move the us embassy
4:31 am
there from tel aviv. the paper also looks at the growing pressure theresa may is facing to strike a deal over brexit. brexit is also on the front page of the i, which takes a look at the ins and outs of the talks, from philip hammond and the divorce bill, to david davis and his lack of impact assessments. the telegraph says jean-claude juncker fears the uk government might collapse next week if a breakthrough in the brexit talks isn't found. the metro leads with the story of a man arrested after a brawl in a westminster bar on the parliamentary estate. the mirror leads with reports of a plot against prince george, with a story about a man who's appeared in court accused of urging jihadis to attack the young prince at school. the financial times' top story is about plans by a uk shopping centre owner to combat a high street slowdown, as retail sales shift online. the times has the story of a shortage
4:32 am
of medicines which is forcing some cancer patients and those with severe mental health issues to be turned away from pharmacies. and the express is concerned about the imminent arrival of storm caroline and her 90 mile per hour winds. now on bbc news, it's time for hardtalk. welcome to hardtalk. i'm stephen sackur. for now, the bbc employs human beings like me to question the way our world works. but, for how much longer? as research and development effort into artificial intelligence intensifies, is there any sphere of human
4:33 am
activity that won't be revolutionised by ai and robotics? my guest today is alan winfield, a world-renowned professor of robot ethics. from driving, to education, to work and warfare, are we unleashing machines which could turn the dark visions of science fiction into science fact? alan winfield, welcome to hardtalk. hi, delighted to be here, stephen. you do have a fascinating title, professor of robot ethics. i'm tempted to ask you first, what's most important to you, the engineering, the robotics or the ethics, being an ethicist? well, both are equally important. i am fundamentally an engineer, so i bring an engineering perspective to robot ethics. but, i would say, that more
4:34 am
than half of my work now is actually thinking about... and, you know, i'm kind of a professional worrier now. would you say the balance has shifted over the course of your career? because you started out very much in computers and engineering, and increasingly, as you have dug deep into the subject, in a sense, the more philosophical side of it has been writ large for you. absolutely right, yes. actually, it was really getting involved in public engagement, robotics public engagement, 15 years ago, that, if you like, alerted me, and sensitised me to ethical questions around robotics and ai. let's take this phrase, artificial intelligence. it raises an immediate question, in my mind, of how we define intelligence. so, i wondered if you could do that for me? it's really difficult. in fact, one of the fundamental, if you like, philosophical problems with ai, is we don't have a satisfactory definition for natural intelligence.
4:35 am
so, here is a simple definition. it's doing the right thing at the right time. but, that's not very helpful from a scientific point of view. one thing that we can say about intelligence, is that it's not one thing that we all have more or less of. what about thinking? are we really, in the course of this conversation, talking about the degree to which human beings can make machines that think? i think thinking's a dangerous word. it's an anthropomorphisation. and in fact, more than that, it's a humanisation of the term, intelligence. a lot of the intelligence that you and i have, actually, is nothing to do with conscious reflective thought. so, one of the curious things about ai, is that what we thought would be very difficult 60 years ago, like playing a board game, chess, has turned out to be, not easy, but relatively easy. whereas, what we thought would be very easy, 60 years ago, like making a cup of tea in somebody else's kitchen, has turned out to be enormously
4:36 am
difficult. it's interesting that you alight upon board games. in the news over the past few days, we have seen something really quite interesting. google's deepmind department has this machine, computer, call it what you will, alphago zero, i think they call it, which has achieved astounding results playing this game. i'm not familiar with it, but a game known as go. i think it's primarily played in china, and extraordinarily complex. it has more computations, more moves in it, more, sort of, complexity than chess. and, this machine is now capable of beating, it seems, any human grandmaster, and the real thing about it, is that it is a machine that appears to learn unsupervised. that's right. i must admit, i'm somewhat baffled by this, because i've just asked you about thinking, you say, no,
4:37 am
don't use that word, but it seems to me, this is a machine that thinks. well, it's a machine that does, if you like, an artificial analogue of thinking. it certainly doesn't do it in any way that you and i do. the technology's based on what is called artificial neural networks. and they are, if you like, an abstract model of biological networks, neural networks, brains, if you like, which actually, we don't understand very well. but, we can still make very simple abstract models, and that is what the technology is. but, i mean, the way to think about the way it's learning, and it is a remarkable breakthrough, i mean, it's... i don't want to over-hype it, because it only plays go, it can't make a cup of tea, but the very interesting thing is the earlier generations effectively had to be trained on data that was gleaned from human experts, and many, many games of go. it had to be loaded with external information. that's right. and that was what we called supervised learning, whereas the new version,
4:38 am
and again, if i understand correctly, i only scanned the paper this morning, the nature paper, is doing unsupervised learning. technically, we call it reinforcement learning. the idea is that the machine is given nothing else than the, if you like, the game, the rules of the game, and its world is the board. the go board and the pieces. and then, it just plays essentially against itself, millions and millions of times. it's a bit like, you know, you, or a human infant, learning how to, i don't know, play with building blocks, lego, entirely on his or her own, by just learning over and over again. of course, humans don't actually learn like that. mostly, we learn with supervision, with, you know, parents, teachers, brothers and sisters,
4:39 am
family and so on. but, it's interesting, you are prepared to use a word like learning. thinking, you don't like; learning, you're prepared to apply to a machine? that's right, yes. and what i want to get to, before we go into the specifics of driverless cars and autonomous fighting machines, and all of that, i still want to stay with the big picture stuff. the human brain, you've already mentioned the human brain, it is the most complex mechanism we know of. on this planet. in the universe, in fact. is it possible, talking about the way in which google deepmind and others are developing artificial intelligence, that we can ultimately look to create machines that are as complex, with the billions and trillions of moving parts, if i can put it that way, that the human brain possesses? i would say, in principle, yes. but, not for a very long time.
4:40 am
i think, the problem of making an ai, or a robot, if you like, a robot is just an ai in a physical body, that is comparable in intelligence to a human being, you know, an average human being, if you like, an averagely intelligent human being, is extraordinarily difficult, and part of the reason why it's so difficult is because we don't actually have the design, if you like, the architecture of human minds. but, in principle, you think we can get it. because what i am driving at is this fundamental, philosophical question of what the brain is. to you, professor, is the brain, in the end, chemistry? is it material? is it a lump of matter? yes, yes. you don't need to add any sort of spiritual or any other intangible thing, it is chemistry? in my view, but i am a materialist, yes, the brain is thinking meat. and yes, but that's a bit of a cop-out, because you have added, thinking meat.
4:41 am
it's meat, and the way that meat is arranged means that it can think. so, you can create something artificial, which, if it were as complex, and as well arranged, as human capacity can make it, one day, it could also think. i believe, in principle, yes. but, the key thing is, architecture. in a sense, the way to think about the current work on artificial intelligence, we have these artificial neural networks, which are almost like the building blocks. so, it's a bit like having marble. but, just having a lot of wonderful italian marble does not mean that you can make a cathedral. you need to have the design, you need to have the architecture, and the know-how to build that cathedral, and we do not have anything like that. just one more general point, and i don't want to get down to the specifics, but, nick bostrom, you know him, i know
4:42 am
you do, because he works in the same field as you, he says that you have to think of ai as a fundamental game changer for humanity. that it could be the last invention that human intelligence ever needs to make, he says, because it is the beginning of a completely new era, the machine intelligence era. in a sense, he says, we are a bit like children playing with a... something that we have picked up, and it happens to be an unexploded bomb, and we don't even know the consequences that could come with it. do you share that vision? i partially share it. so, where i disagree with nick, is that i don't think we are under threat from a sort of runaway superintelligence. that is the thesis of his book. however, i do think, that we need to be ever so careful, and in a way, i alluded to this earlier, we don't actually understand what natural intelligence is, and in fact, we have no general scientific theory of intelligence.
4:43 am
so, trying to build artificial general intelligence is a little bit like trying to do particle physics at cern without any theory, without any underlying scientific theory. so, it seems to me, that we need both some serious theory, which we don't have yet, we have some, but it is not unified, there is not a single theory, if you like, like the standard model in physics. and, we also need to do responsible research and innovation, in other words, we need to innovate ethically, to ensure that any, as it were, unintended consequences are foreseen and we head them off. so, let's talk now about... in a more practical sense, unintended consequences may well come up. let's start with something i think that most of us are aware of now, and regard as one of the most challenging and perhaps exciting specific ai achievements, that is the driverless car. now, it seems to me, all sorts of issues are raised by a
4:44 am
world in which cars are driverless, ethical and moral issues, as well as practical ones. i want to work through some of them with you. are you excited by driverless cars? i am, yes, and i think driverless cars have tremendous potential for two things. do you see them as robots? i do, yes. a driverless car is a robot. and typically, of course, once a robot goes into, becomes part of normal life, we stop calling it a robot, like a vacuum cleaner. so, i think there are two tremendous advances from driverless cars that we can look forward to. one is reducing the number of people who are killed in road traffic accidents significantly, if we can achieve that, so i'm going to be cautious when i speak... and the other is giving mobility to people, you know, elderly people, disabled people who currently don't have that. and, both of those
4:45 am
are very practical, but science magazine last year studied a group of almost 2000 people and asked them about what they wanted to see in terms of the morality, almost, of using driverless cars. how the programming of the car would be developed to ensure that, for example, in a hypothetical situation, if a car was on a road, and it was about to crash, and it veered off the road to avoid the crash, it would hit a group of schoolchildren being led by a teacher down the road, the public wanted to know that the car would, in the end, accept its own destruction, and that of its driver, its human passenger, rather, as opposed to saving itself and ploughing into the children on the side of the road. how do you, as a robot ethicist, cope with this sort of challenge? well, the first thing that i would say, is let's not get
4:46 am
it out of proportion. you have got to ask yourself, as a human driver, probably, like me, you've got many years of experience of driving, have you ever encountered that situation? well, no, in my case. but, nonetheless, i want to know, if i ever step into a driverless car, that somebody has thought about this? and you're right. i think the ethicists and the lawyers are not clear. we need to have a common vision about... i think it's really important that if we have driverless cars that make those kinds of ethical decisions, that essentially decide whether to potentially harm the occupant or... you are doing what you told me off for doing, you are anthropomorphising... the car would not be making an ethical decision. the car would be reflecting the values of the person who programmed it. exactly. but, i would say that those rules need to be decided by, if you like, the whole of society, because, the fact is, however those rules...
4:47 am
whatever those rules are, there will be occasions when the rules result in consequences that we don't like. and therefore, i think the whole of society needs to own the responsibility for those cases. so, this is you making a call, whether it be driverless cars, or any of the other examples we are currently thinking about with ai, for technological development in lockstep with a new approach to monitoring, regulation, sort of universal standardisation... and, the conversation, a big conversation in society, so that we, if you like, own the ethics that we decide should be embedded. but, that's not happening, is it, because at the moment, much of the development here, you work in bristol, in a robot lab, but a lot of the cutting-edge work in this field is done in the private sector, we have already mentioned google, there are many companies doing it, some of it is done by secretive defence establishments around the world, there is no standardisation, there is no cooperation, and in fact, it is a deeply competitive world. well, there jolly
4:48 am
well needs to be. i mean, my view is very simple, the autopilot of a driverless car should be subject to the same levels of compliance with safety standards as, you know, for instance, the autopilot of an aircraft. we all accept that we wouldn't, you and i would not, get into an aircraft if we thought that the autopilot had not met those very high standards. and, i think it's inconceivable that we could allow driverless cars on our roads that have not passed those kinds of safety certification processes. let's leave driverless cars, and go to areas which are perhaps more problematic for human beings, because they raise the idea of the machine, the future intelligent machine, taking jobs and roles that have traditionally always been done by human beings, because they involve things like empathy, and care, and compassion. now, i am thinking about roles as social carers, as educators, teachers, and even, frankly, a sexual partner, because we all now read
4:49 am
about the sex bots. so, in these roles, do you feel comfortable with the notion that machines will take over from human beings? no, and in fact, i don't think they will. they already are, japan has carers that are machines. yes, but, we need to make a distinction here, that a carer robot may well be able to care for you, in other words, for your physical needs. it cannot care about you. only humans can care about other humans, or any other animal. objects, you know, robots, cannot care about people or things, for that matter. and, the same is true for teachers. teachers typically care about their classes... so, do you think some people are getting way overheated about this? one of britain's most well-known
4:50 am
teachers, anthony seldon, who ran wellington college for a while, he now says that in his vision of a future education system, kids will be, many of them, taught one-on-one, in a spectacular new way, by machines. he said it is like giving every kid access to the best private schools. well, i think that ultimately, there might well be, and we are talking about some time into the future, some combination of machine teaching and human teaching. you cannot take the human out, and a really important thing to remember here is the peculiarly human characteristics of empathy, sympathy, theory of mind, the ability to anticipate, to read each other. these are uniquely human characteristics, and so, intuition, creativity, innovation, these are things that we have no idea how to build artificially. jobs that involve those things are safe. that's interesting, because a lot of people nowadays are looking at
4:51 am
doomsday scenarios in the development of robotics and ai, and frankly, most jobs one can think of, and i was sort of being a little bit flippant at the beginning about being a presenter who might be replaced by a robot, but you are suggesting to me that the idea that so many different jobs, not just blue-collar, but white-collar as well, are going to end up being done by machines. again, am i overstating it? i think we are overstating it. yes, significantly. i am not saying it won't happen eventually, but i think that what we will have is much more time than people suppose, to find, if you like, a harmonious, if you like, accommodation between human and machine.
4:52 am
that actually allows us to exploit the qualities of humans and, if you like, the skills, to do things that humans want to do. if you don't mind me saying, you seem both extraordinarily sanguine and comfortable and optimistic about the way in which ai is developing under the control of human beings, and your faith in humanity's ability to cooperate on this and establish standards seems to me to fly in the face of the facts, because one area which i want to end with, in a way, is weaponisation. it is the notion that ai and robotics are going to revolutionise warfare and war-fighting. now, you are one of a thousand senior scientists who signed an appeal, i think, for a ban on ai weaponry in 2015, but that's not going to happen, is it? well, the ban... did you see what vladimir putin said? artificial intelligence is the future for russia, for all of humankind, and, this is the key bit.
4:53 am
whoever becomes the leader in this sphere will become the ruler of the world. well, yes, i am an optimist, but i am also very worried about exactly this thing, and of course, we have already seen the political weaponisation of ai. it's pretty clear, isn't it, that the evidence is mounting that ai was used in the recent elections? you are talking about the hacking, and that came, we believe, from russia? indeed. and that is a political weaponisation, so we do need to be worried about these things, we do need to have ethical standards. we need to have worldwide agreement, and i am optimistic about a ban on lethal autonomous weapon systems. the campaign, you know, is getting traction, there have been all sorts of discussions, and i know some of the people involved very well. yes, but, we know the limitations of the
4:54 am
united nations, we know the limitations of politics, frankly, and we know that human nature usually leads us to strive to compete and to win, whether it be in politics, on the battlefield, or whatever. let me just leave you with this thought. it seems to me, there is a real debate within science right now, and you are on one side, relatively sanguine and optimistic. stephen hawking, perhaps on the other side, said this recently: the development of full artificial intelligence could spell the end of the human race. do you find that kind of thought helpful, or deeply unhelpful? i find it deeply unhelpful. i mean, the problem is, that it isn't inevitable. what he is talking about, here, is a very small probability, and it's, if you like, a very long series of if this happens, then if that happens, then if that happens, and so on... i wrote about this in the observer, back in 2014, before stephen hawking got involved in this debate. in my view, we are worrying about an extraordinarily unlikely event. that is the intelligence explosion. do you think we have
4:55 am
actually been too conditioned by science fiction and by the terminator concept... there is no doubt about that. and, the problem is of course that we are fascinated. it is a combination of fear and fascination. that is why we love science fiction. but, in your view, it is fiction. well, that scenario is fiction, but, you are quite right, stephen, there are tonnes of things that we should worry about now. we need to worry about, as you say, jobs, the weaponisation of ai, standards in driverless cars, in care robots, in medical diagnosis ais. there are lots of here-and-now problems, in a sense, that are kind of more to do with the fact that ai is not very intelligent. so, we need to worry about artificial stupidity. that is a very neat way of ending.
4:56 am
alan winfield, thank you very much. you are very welcome. it was a pleasure. thank you very much indeed. hello. the day ahead will bring some very windy weather and then we get plunged into the deep freeze. storm caroline, a deep area of low pressure, is drifting to the north of the british isles, a band of rain sinking southwards and eastwards, but look at all the white lines, the isobars on the chart, very, very windy, storm force winds are possible in northern areas, and then we open the floodgates to this very cold air plunging all the way in from the arctic. we're starting thursday on a fairly mild note, a wet note for some and a windy note for all of us. the mild weather will not last, though, because, as those bands of rain clear south and east, we will all get into cold air, wintry showers in northern ireland and scotland, but the real concern here is the strength of the wind. as you can see, we're expecting wind gusts in excess of 80mph across northern scotland particularly, and it could well be enough to cause some
4:57 am
disruption, yes, but also some damage. so the met office has issued an amber 'be prepared' warning for the strength of the wind, and even further south across central parts of scotland, just fringing into northern ireland, there's a yellow warning in force, gusts of 70mph possible here. elsewhere it starts wet and windy across the south-eastern corner, that rain will take a while to clear away. then the skies will brighten and then we get into the wintry showers blown in on this strong north-westerly wind, could easily be blizzard conditions in the snow showers and those temperatures coming down as the afternoon goes on. now, into thursday night, these snow showers will drift further southwards and eastwards. we could see a covering of snow just about anywhere, but mostly in places exposed to this north-westerly wind. there could be some icy stretches around as well. so a very wintry look to the weather on friday. yes, some sunshine,
4:58 am
still some snow showers, a bitterly cold north-westerly wind, your thermometer will read 2-5 degrees but it will feel subzero for many. now, saturday looks like it will bring something a little bit quieter, the winds easing from the west. still very cold but not as many showers at this stage, 1-5, those are the maximum temperatures. then as we head into the second half of the weekend, it's all eyes on this frontal system hurtling in from the atlantic. we will keep you posted. hello, this is the briefing. our top stories: protests in the middle east as president trump formally recognises jerusalem as the capital of israel. australia's parliament could be just moments away from a vote on legalising same-sex marriage. fancy a slice of world heritage? that may be about to join the list