Click (BBC News), December 15, 2021, 1:30am-2:01am GMT
1:30 am
this is bbc news. we will have the headlines and all the main news stories for you at the top of the hour straight after this programme. this week, what will it take to keep us all safe? will machines lead us into war? how do we stop killer robots from becoming a reality? and can the victims speak louder than the survivors? theme music
1:31 am
has there ever been a time when we haven't been at war? battles have been raging down through the ages, over lands, religion, and resources. every so often, a new technology comes along which gives one side a massive advantage and changes the shape of war forever. throughout the history of warfare there has been one common thread: it has been people who have made the decisions on who and how to fight. but now we have to ask the question: what would happen if you took the human out of the loop? weapons, guided and driven
1:32 am
by artificial intelligence, are no longer science fiction. over the next week, the un will discuss whether the development and deployment of this kind of technology should be left unrestricted, regulated, or banned. so far, the us and the uk have opposed binding agreements to regulate or ban the use of so—called killer robots. james clayton reports from silicon valley on the dawning reality that ai researchers say we need to start thinking about today. the nuclear bomb totally transformed warfare. there are those who now say we are on the cusp of something similar. it is a fast track to — i think "dystopia" is the right word for it. the nation is still recovering from the incident yesterday which officials are saying is some kind of automated attack which killed 11 us senators at the capitol
1:33 am
building. autonomous weapons combine different technologies: drones, facial recognition, artificial intelligence and big data, to create a sort of super—weapon that not only detects and destroys, but can make that decision itself, and can be owned not just by states, but potentially by organisations, terrorist groups, anyone. this is the kind of dystopian reality that has been painted by critics — assassinations, private armies of bots, computers deciding whether humans live or die. these are the types of weapons that could be easily deployed and moved through different environments, like a swarm, or embodiments of the robo—dog with a machine—gun, and they can proliferate easily and fall into the hands of groups we don't think of as traditional militaries.
1:34 am
this is not about prohibiting or banning ai usage in the military, or even in weapon systems. it is about drawing a red line on this specific use case of weapons, which are these smaller types of systems that target people. even the un secretary general is worried. the weaponisation of artificial intelligence is a serious danger. on december 13th, a review of conventional weapons is scheduled to be held at the un, in geneva, where they will be discussing killer robots. campaigners will be looking for an outright ban. but, already, that looks unlikely, with the us reportedly saying it would prefer a non—binding agreement. the discussion should be more about how we regulate it and how we try to define it and approach it rather than trying to outright ban it, which is not going to happen. russia, china and the us will go after these technologies, so they are very keen to avoid being put at a competitive disadvantage against what is increasingly looking
1:35 am
like a great power, cold war—type competition over the next 30—50 years. but if countries can't ban killer robots, what will that mean for humanity? it is a fast track to, i think, "dystopia" is the right word for it. it is a world in which we delegated and relegated the decision to take a human life to algorithms, right? but it's not quite as simple as that. others argue that autonomous weapons are often mischaracterised. it's not being given the authority to decide its own mission set. no commander in the world would ever want a weapons system that decided what it wanted to do at any given moment. these would operate according to preprogrammed rules of engagement that are legally screened to make sure they meet the requirements of the law of armed conflict. the machine, following rules of engagement which have been preset and screened on legal grounds, may make cleaner decisions than a stressed pilot who is trying to do a million things at once. that may be the case
1:36 am
with a sophisticated military, but that is not necessarily what we're talking about here. if anyone has the ability to access a type of weapon that can selectively target a group of people, just lay that framing onto all of the types of conflict that we see today. whether we think about conflicts within countries, rogue states, terrorist groups, cartels, or violent crime. now you are powering those types of conflicts with a weapon that can target at scale, right? to me, that is a very, very scary future. autonomous weapons aren't a distant possibility. much of the tech needed to create them already exists and some believe that if humans cannot get together to ban them, it could be one of
1:37 am
humanity's greatest mistakes. that was james clayton, and i have been speaking to professor stuart russell, whose bbc reith lecture this week warned about the dangers of ai—controlled weaponry. the lecture raised the possibility of children playing with toy guns being accidentally targeted by killer robots. he was involved in the original slaughterbots short film from 2017, which in itself was shockingly realistic. sound of drones applause did you see that? before it was premiered publicly i showed it to some of my ai colleagues and when they watched the ceo of the arms company demonstrating the capabilities of this new technology and the kind of uses you can put it to, they thought this was a documentary.
1:38 am
they didn't think this was fictional at all. your kids probably have one of these, right? when it premiered in geneva, actually, at the negotiations on autonomous weapons, the russian ambassador sort of sneered at this and said, "why are we even discussing this? "this is science—fiction. "it won't be possible for even 25—30 years." three weeks after we premiered the movie, turkish arms company stm actually announced a weapon and they advertised capabilities for autonomous hits on humans, face recognition, human tracking, all of the things that the ceo talks about. could those genuine bots exhibit the same kind of intelligence and autonomy as in the film? i would imagine they would be manually controlled and flown into things? you might think that, but, actually, no. they are fully autonomous
1:39 am
and the un has a report, showing that they were actually used autonomously to hunt down retreating troops in libya in march 2020. there are many different arguments people make. one is a moral one, that it is just morally unacceptable to turn over to a machine the decision to kill a human being. you can basically launch weapons by the million. enough to kill half a city. the bad half. type in a rough description of the mission, you know, "wipe out everyone in this city between the age of 12 and 60." just characterise him, release the swarm and rest easy. so you create this weapon of mass destruction that is more effective than nuclear weapons, cheaper, easier to build, easier to proliferate, and doesn't really leave behind a huge radioactive smoking crater.
1:40 am
is the answer to always keep a human in the loop? and is the problem with that, which human? i think the answer is "yes." to disallow attacks where there is no human supervision, no human who is looking at the actual situation and the actual target and saying, "yes, this is ok." even under the assumption that the machine is programmed by someone who has the best legal training and the most humanitarian of aims, even in that situation, we face problems of not being able to make the decisions correctly. the problem is that, with these slaughterbots, all the bits can be bought in a decent supermarket, probably with the exception of a small bit of explosive. so, what do we do? they're technically already available, so how would you ever ban them? so, we ban many things that are already available. so, biological weapons — it wouldn't be that hard
1:41 am
for someone with the knowledge to create a biological weapon, but we still ban them. chemical weapons are widely available in industrial products. the companies that make them are required to account for those products, to check that their customers are real customers and not fake shell companies. companies that receive an order for 5 million quadcopters would need to check who is buying the quadcopters, and we can do this in ways that will not be perfect, but will prevent the kinds of weapons of mass destruction that i am most concerned about. hello and welcome to the week in tech. it was the week the better.com ceo had a better day than the 900 people he fired over zoom. your employment here is terminated. twitter closed thousands
1:42 am
of fake china state—linked accounts pushing propaganda. and here is a look at nasa's latest crop of artemis astronauts, some of whom might take giant steps walking on the moon. it was also the week instagram announced a new feature to help manage time spent on the app. parents will be able to see, and possibly be horrified by, how much time their children spend on the app, and set time limits, while teens get their scrolling interrupted by 'take—a—break' reminders. it comes after leaked internal research suggested teens blame the app for increased anxiety. researchers from mit's csail have found a new way to exercise the brains and bodies of soft robots inside a computer program. welcome to the evolution gym, a simulator allowing large—scale testing of machine—learning algorithms, which choose how to grow bodies to complete tasks like climbing and flipping. some are evolving like animals. finally, those of us who have always wanted to swap hands for wheels are in for a treat! that's what researchers at eth
1:43 am
zurich did with this, the newest iteration of anymal, its four—legged robot. it simply locks its wheels when it needs to walk, but why walk when, with a top speed of 14 mph, it can roll? it can even roll up and down staircases! that is a wheely good idea! the pandemic came with many challenges, but at least for the majority of us, being at home means being in a place of safety. not for everyone, though. calls and messages to domestic abuse charity refuge increased 60% between april last year and february this year. some have described the situation as an epidemic beneath a pandemic. this is not an issue in the uk alone. in israel, domestic violence cases between partners quadrupled last year. i was michal sela.
1:44 am
in 2019, i was murdered by the man who was my husband. this was from a campaign highlighting the issue, by giving a voice to those who never got the chance to speak up for themselves. we have been training our deep learning tools for quite a long time on hundreds of thousands of faces and videos, so it understands really, really well how faces move. it is able, when you give it a video of someone moving their face, to transfer all that movement to a still image. it understands where the different markers on the face are, and what will look realistic, and it is very, very skilful at taking every little gesture, every little blink and adding it to that new image.
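not the campaign's actual system, but a minimal sketch of the landmark-tracking idea described above, using the open-source mediapipe library: it extracts the facial markers from each frame of a driving video, which a separate, learned motion-transfer model would then map onto a still photograph. the file name is a placeholder.

import cv2
import mediapipe as mp

# NOT the campaign's actual tool: just the landmark-tracking step, using
# mediapipe's face mesh. the motion-transfer model that animates a still
# photograph from these markers is a separate, far more complex system.
face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1)

cap = cv2.VideoCapture("driving_video.mp4")  # placeholder: video of an actress speaking
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # mediapipe expects rgb frames
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        # 468 normalised (x, y, z) markers per frame describe how the face moves
        landmarks = results.multi_face_landmarks[0].landmark
        print(len(landmarks), round(landmarks[0].x, 3), round(landmarks[0].y, 3))

cap.release()
face_mesh.close()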
1:45 am
we were able to take videos of famous actresses speaking, bringing their story to life, and our technology allowed us to transfer that movement to their actual image, so it appeared as though they were telling their own story. the shocking murder of michal sela shook israel in 2019. her husband was found guilty of stabbing her to death in front of her young daughter. her case is not an isolated one. i spoke to her sister, who has become a leading voice in raising awareness of domestic abuse in israel, setting up a charity and hoping to use technology to save the next potential victim. after it happened, i had so many questions. how come it happened to us? how come i didn't see anything? why didn't she share anything with me? what was she going through? the biggest question: could we have saved michal? could she be with us today? and you have actually taken your grief to start a charity and help raise awareness, haven't you?
1:46 am
heading this second campaign is shiran melamdovsky somech. the first time i heard her idea, to take my sister's picture and make her alive, i was shocked, i was like, "oh, my god — it is too hard." but then i thought again, and i told her, "you know what, shiran? "you are a genius. "this is what the world needs. "the world needs to be shocked!" what do you hope to achieve overall? my vision is zero femicide per year. an ecosystem of hundreds of thousands of start—ups to save the lives of women in their own homes. so we can beat this, we can combat domestic violence with technology, innovation, humankind. we have solved much harder problems than this. now, many of us may feel quite uncomfortable when we have no choice but to walk home alone, especially after dark, so shona mccallum has been taking a look at some of the latest technology that is hoping to help
1:47 am
keep us safe. there is always that thought at the back of your mind — will i get home safe tonight? between leaving your friends and opening your front door, there might be a time you are travelling alone. we might call our mum, share our location on whatsapp, or have our keys to hand — anything to make us feel safer. but what i am interested to find out is: is there something better? one in three women in the world experience violence at some point in their lives, and even more say they have been harassed or felt unsafe. we did experience being followed, we had cat—calling, and we did receive quite a lot of unwanted attention. we did see flashers quite frequently on our school run.
1:48 am
experiences like that led emma kay to create walksafe, an app to help women map out a safer route home. so we have got our walksafe map, you and i are on our way home. what i can do is, i can have a look at our routes. i can zoom out here and, as you can see, it will show me where all the recent crime data is. so that has been a recent pickpocketing. this has been where an assault has taken place. looks like somewhere we would want to avoid, right? exactly. the app also has a feature called tapsafe, where you tap the phone to alert your friends or family that you feel under threat. but some might feel having this kind of information in your pocket could make you more paranoid. looking around this park, it is lovely and, yeah, i feel pretty safe, but, just a couple of hours later... ..and in the dark, things feel pretty different.
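a rough sketch of the idea behind the crime layer described above, not walksafe's actual code: the uk's public data.police.uk api returns recent street-level crimes near a point, which an app could plot along a planned route. the endpoint is the published one; the coordinates below are arbitrary examples.

import requests

def crimes_near(lat, lng):
    # public uk police open-data api: street-level crimes within roughly
    # a mile of the given point, for the most recent available month.
    # this illustrates the concept only; it is not walksafe's backend.
    url = "https://data.police.uk/api/crimes-street/all-crime"
    resp = requests.get(url, params={"lat": lat, "lng": lng}, timeout=10)
    resp.raise_for_status()
    return resp.json()

# example: a handful of recent crimes near an arbitrary central london point
for crime in crimes_near(51.5074, -0.1278)[:5]:
    print(crime["category"], "near", crime["location"]["street"]["name"])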
1:49 am
and if you are getting assaulted, it is perhaps unlikely that you would reach for your phone and bring up a safety app. so i've travelled to the university of bath to meet the team who are developing a smartwatch app that senses distress by monitoring heart rate and body motion to send an automatic alert if you are under attack. alert blares. i often feel scared when i am out walking or running alone. so when i saw that apple watches were being used to detect heart—attacks, i had a kind of lightbulb moment and i thought, maybe this could be applied to women's safety? but as the team have discovered, it is difficult to simulate distress. we looked at loads and loads of police reports and loads and loads of cctv footage of people being attacked
1:50 am
and we tried to find things that were common amongst everything, and we saw the struggling motion, the pushing motion, that kind of change from being very regular and repetitive to something that was very erratic. 98-67. so they created a test which allowed them to collect the data to train an algorithm which could recognise similar events. 26 plus 58. this smartwatch tech is in early development, but if the team manage to crack it, it could be something we see being used by women in the future. the bsafe app for smartphones takes a similar approach. phone: all guardians alerted. the idea is that an emergency alarm is activated by your voice, and then it automatically starts livestreaming video and audio to your chosen contacts and records what is happening. the company believes these recordings could be used as evidence in court cases, which are often hard to prove.
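a toy sketch of the general approach the bath team describes, not their actual model: short windows of wrist motion and heart rate are reduced to simple features, and a classifier learns to separate ordinary, repetitive movement from erratic, struggle-like movement. all the data here is synthetic and the numbers are made up.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def features(accel_window, heart_rate):
    # struggling shows up as high variance and jerk in the wrist accelerometer
    jerk = np.diff(accel_window, axis=0)
    return [accel_window.std(), np.abs(jerk).mean(), heart_rate]

# synthetic training data: calm, regular movement vs erratic movement
normal = [features(rng.normal(0, 1.0, (50, 3)), rng.normal(80, 5)) for _ in range(200)]
erratic = [features(rng.normal(0, 4.0, (50, 3)), rng.normal(130, 10)) for _ in range(200)]
X = np.array(normal + erratic)
y = np.array([0] * len(normal) + [1] * len(erratic))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# classify a new short window from the watch (also synthetic here)
window = features(rng.normal(0, 3.8, (50, 3)), 128)
if clf.predict([window])[0] == 1:
    print("possible distress - send automatic alert to chosen contacts")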
1:51 am
making women safer on our streets should be a priority for all of society, and what is clear is that this huge problem cannot be solved by technology alone. but what i have discovered is that tech could have a role to play in addressing some of the fears that women feel on their journeys home. for years, efforts in developing autonomous systems have focused on replicating the human brain through machine—learning, but one uk—based start—up thinks there may be a better model — the honey bee. engineers at opteran think nature has already solved autonomy, and they have been researching the brains of honey bees to derive algorithms to teach robots and drones collision avoidance. the honey bee is a fabulous visual navigator. bees can understand the world they are in over a 10km radius.
1:52 am
they can fly point—to—point and return home, and they can share information with other bees in the hive. and then those bees can go to that location. that is exactly what robots need to do. we have robots in warehouses that need to move around and get to certain locations, and then come back, and do that dynamically without bumping into things, even if the environment changes. we have the same in the air with drones doing inspections, or self—balancing autonomous bicycles navigating the streets. essentially, what is happening is that machines need to move more naturally. the system enables drones and robots to take panoramic views of the world around them, just like an insect would. then, like a honey bee, the sensing technology calculates the optic flow across the field of vision, considering any obstacles, and taking action to avoid collisions. the system has already been tested on drones and a robot dog named hopper.
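opteran's bee-derived algorithm is proprietary, but the optic-flow idea it builds on can be illustrated with opencv's dense farneback method: per-pixel motion between consecutive camera frames, with the robot steering away from the side where the flow grows largest. the video file name is a placeholder.

import cv2

cap = cv2.VideoCapture("onboard_camera.mp4")  # placeholder footage
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # dense optic flow: a motion vector for every pixel between two frames.
    # a generic opencv illustration, not opteran's own algorithm.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # obstacles loom as large flow on one side of the view, so steer away
    # from the side with the bigger average flow magnitude
    left = magnitude[:, : magnitude.shape[1] // 2].mean()
    right = magnitude[:, magnitude.shape[1] // 2 :].mean()
    print("steer right" if left > right else "steer left")
    prev_gray = gray

cap.release()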
1:53 am
nature likes to solve things in the simplest way possible. it finds solutions that stop you from having to do a lot of computation that you don't need to do. one of the ways it does that is in how it perceives the world. so, every animal that we know of that uses vision to navigate works out the motion of the world around it. we call it optical flow. and that is exactly how our technology works. and the company claims that basing algorithms on insect brains could be a more efficient way of creating ai. importantly, it would also be a more affordable method for lower—cost robotic applications, and that is the market that opteran is targeting. the current approach to ai is fundamentally flawed because it is really based on a caricature of how real brains work. it is kind of a tiny piece of the puzzle that has just been scaled up and applied to lots of different data with lots of computing power. but that means
1:54 am
it is very fragile. it doesn't work the way real brains work and it is very opaque. we don't really understand what it is doing half the time, and that is not a good enough solution for autonomous systems. nature has come up with weird and wonderful ways of doing things that humans could never figure out, so maybe there is a lot more technologists can learn from the other creatures around us. that's it from us for this week. as ever, you can keep up with the team on social media, find us on youtube, instagram, facebook and twitter at @bbcclick. thanks for watching, and we'll see you soon. bye— bye. hello.
1:55 am
well, the key message in the weather forecast is that the weather is going to be very settled over the next few days. a big area of high pressure is going to establish itself across the uk, and that means settled conditions, i think, in the run—up to christmas. and on the satellite picture, you can see that area of high pressure across spain, portugal and france. it's building here and it's extending northwards. and as it extends northwards, it's going to push the fronts away to the north as well. but for the time being, we still have quite a few isobars — these pressure lines — so a stronger wind and a weather front close to northern ireland and across western parts of scotland and the north, as well, during the course of wednesday. so, here, it will be at times cloudy, but it's mild with that cloudy, rainy weather, around 8—9 degrees celsius. to the south of that, very mild, too. ten degrees, but it is dry. now, the cloud cover on wednesday will vary considerably across the uk. we still have that weakening weather front
1:56 am
in the north west here, so dribs and drabs of rain. and at the very least, it will be cloudy, but plenty of bright if not even sunny weather around merseyside, parts of the midlands, lincolnshire, also northern and eastern parts of scotland. in the south, we have thicker cloud because it's drifting in from the southern climes here. now, this high pressure really will be in place across the uk by thursday. you can see the weather fronts have been pushed to the north, so that means it's drying out right across the uk. still a little bit of rain maybe early in the morning, flirting with the very far north—west of scotland and the northern isles, but you can see the bulk of the country is dry on thursday. and again, a lot of variation in the cloud cover, but wherever you will be on thursday, i think the temperature will be more or less the same, around 10—12 degrees celsius. now, this is what we call a blocking high, and this happens when the jet stream sort of wraps around it in the shape of the greek letter omega. so, you can just about make out that omega shape. when that happens in the atmosphere, things don't tend to move around an awful lot.
1:57 am
2:00 am
welcome to bbc news. i'm mark lobel. our top stories: most countries probably have omicron cases, and it's spreading faster than previous variants. the world health organization warns the world to prepare. even if omicron does cause less severe disease, the sheer number of cases could, once again, overwhelm unprepared health systems. noes to the left, 126. wow! gosh! here in london, despite the pm claiming success, dozens of his own mps vote against new coronavirus restrictions in the latest blow to boris johnson's authority. two close friends who lost