tv   Click  BBC News  February 25, 2018 4:30am-5:00am GMT

4:30 am
the un security council has unanimously passed a resolution calling for a 30 day humanitarian ceasefire across syria. it follows a week of intense bombardment of the rebel—held enclave of eastern ghouta, in the suburbs of damascus. the truce would allow for aid and medical evacuations. the international olympic committee has voted not to lift its ban on russia, so russian athletes, who were allowed to compete as neutral athletes, cannot march under their own flag at the winter olympics closing ceremony. parents who fear their daughters have been abducted by boko haram jihadists in nigeria have released a list of more than 100 names. up to now, it hadn't been clear how many girls were missing following the attack on a boarding school. now on bbc news, it's time for click. this week, lady becomes model.
4:31 am
man becomes ape. and spen becomes trump. as donald trump: great, the best. ok, movie quiz time. five points if you can name this film. correct — it's raiders of the lost ark. no, that is not harrison ford, that is the face of nicolas cage. ok, try this one. yes, it is the fellowship of the ring. 100 points if you spotted nicolas cage, nicolas cage and nicolas cage.
4:32 am
so, what on earth is going on? we're just about getting used to the idea that there are loads of fakes online. fake news, fake tweets, fake photoshopped images, but these videos are a whole level above anything that we've seen before, and they may have consequences that go far beyond just switching out a few movie stars. a lot of what we talk about over the dinner table is, you know, we live in a diverse world... researchers at the university of washington released this video last year, which used a computer vision algorithm to very convincingly doctor obama's mouth movements to make him lipsync to something he said in a different interview. a lot of kids, the doors that have been opened to me aren't open to them. and with the tricks and tools of machine learning becoming better and easier to use, it's now possible to do this without a particularly powerful computer. remember the nick cage
4:33 am
videos from earlier? well, this mix of donald trump and angela merkel was created using the same tool, a tool called deepfakes. to be clear, this is not just a face swap like you might see on snapchat. this is artificial intelligence that's learned what trump's face looks like and then made it copy merkel's facial expressions. what's fascinating is that these weren't made by a team of researchers, or a hollywood visual effects department. these were made by individuals following an online tutorial on a desktop machine. now, to see how easy it is, we're going to do it. we're gonna take my face and make me president. we trained a neural network by feeding it video of some of my past appearances. we mixed it with president trump's state of the union address. the software broke the video
4:34 am
into individual frames, ran them through the network, and in less than a day, this was the result. all of us, together, as one team... so, this is the original video of trump. and this is me, on his head. we all share the same home. i'm not sure it's an improvement, but that does seem to be president spenley trump. the other half of the experiment didn't go quite so well. this is click presenter, donald kelly. now, this was a very short and quick experiment. it's far from perfect. it's blurry, you can see the edges and sometimes, well, it's just downright scary. but had we left the network to train for longer, on better videos, we could have got much more convincing results. now, deepfakes has hit the headlines in recent weeks, but it's not because of trump or nicolas cage, or even me. it's because of porn.
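the face—swap idea described above — one shared encoder that captures an expression, plus one decoder per person — can be sketched in plain python. everything here is illustrative: the pooling "encoder" and the fixed "style" value stand in for trained neural networks, and none of this is the actual deepfakes code.

```python
import random

random.seed(0)

PIXELS = 16   # size of a flattened toy "face"
LATENT = 4    # size of the shared expression code

# toy "encoder": pool the face down to a small expression code
def encode(face):
    step = len(face) // LATENT
    return [sum(face[i:i + step]) / step for i in range(0, len(face), step)]

# each person gets their own "decoder"; here a fixed style value stretched
# back to full size stands in for a per-person trained decoder network
def make_decoder(style):
    def decode(code):
        stretch = PIXELS // LATENT
        return [c + style for c in code for _ in range(stretch)]
    return decode

decode_spencer = make_decoder(style=0.5)  # hypothetical "spencer-ness"

# the swap: break the video into frames, encode trump's expression in each
# frame, then decode every frame with spencer's decoder
video = [[random.uniform(0, 1) for _ in range(PIXELS)] for _ in range(10)]
faked = [decode_spencer(encode(frame)) for frame in video]
```

the real tool trains the encoder and both decoders jointly on thousands of frames of each face, which is why it needs hours or days of computation rather than milliseconds.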
4:35 am
an online community has been using ai to put hollywood actresses' faces onto adult movie stars' bodies. now, obviously we can't show you much of the resulting videos, but they feature a number of female celebrities. the videos are stolen, and so are the faces. i was surprised but, at the same time, not really, because technology is advancing so much that it just seems kind of inevitable. however, the context of the material, taking a celebrity or a well—known face — or even anyone's face, for that matter — and putting it on a porn performer's body, and essentially creating this fake, non—consensual porn, it's disturbing. it was definitely a disturbing thing to see. are people going to take, like, pictures of their ex—girlfriends and put them on porn performers? issues surrounding consent
4:36 am
and copyright have led the website hosting this content to ban deepfake pornography, and the communities making it. reddit too, the site where the community started, changed its rules to forbid involuntary fake pornography. away from porn, it doesn't take much imagination to see how one could create international outrage by making fake statements from world leaders. something that may become possible very soon, thanks to some software that we looked at last year. this is lyrebird. the idea here is that i can train a neural network with samples of my voice and then it will be able to speak like me. harry hoped he would see some success from the current project. parents should look out for... the software asks you to read out at least 30 sentences of its choosing, from which it can pull out the basic building blocks of words, the phonemes, that can then be put back together in any order.
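the phoneme idea is simple enough to sketch: store one audio snippet per phoneme, then concatenate snippets in whatever order a new sentence needs. the inventory and the number lists below are purely illustrative stand-ins for lyrebird's learned acoustic units.

```python
# toy phoneme inventory: each phoneme maps to a stand-in audio snippet
# (a real system stores learned acoustic units, not tiny number lists)
PHONEMES = {
    "HH": [0.1, 0.2], "EH": [0.3], "L": [0.15], "OW": [0.4, 0.1],
    "W": [0.2], "ER": [0.25], "D": [0.05],
}

def synthesize(phoneme_sequence):
    """stitch the snippets together in the requested order."""
    audio = []
    for p in phoneme_sequence:
        audio.extend(PHONEMES[p])
    return audio

# the same building blocks, pulled from the 30 training sentences,
# can be reassembled into words the speaker never actually said
hello = synthesize(["HH", "EH", "L", "OW"])
world = synthesize(["W", "ER", "L", "D"])
```

real systems also have to smooth the joins between units and model pitch and timing, which is where the neural network earns its keep.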
4:37 am
in other words, "in other words". i've always been a big fan of one direction. they were, quite frankly, better than the beatles. spencer laughs. although the creators of lyrebird are aware that this technology could be misused, they say that by releasing it as a free tool, well, at least the public will become aware that fake voices are already a reality. as donald trump: great, the best! one idea that we're considering is to watermark the audio samples that we produce. so we are able to detect immediately if it's generated by us. so, how do we protect ourselves from having our online photos, videos and sound recordings used to create fake versions of us? at the moment, we are in a wild, wild west situation. we don't know the attitude of the courts to this problem. we don't have a clear piece of legislation that would cover it.
4:38 am
we have piecemeal laws on privacy, copyright, trademark and passing off that would be useful to somebody in trying to stop this from happening. but we don't have a clear legal definition and we don't have a clear piece of legislation that is exactly on point. and until we have that, this legal uncertainty will continue. the morality and the legality of deepfakes are murky issues. just as we are wrestling with the fact that we can't trust what we read, very soon we will need to confront the fact that we can't trust anything we see or hear either. hello, and welcome to the week in tech. it was the week that google's dedicated health arm announced it has created an algorithm that can predict high blood pressure, as well as risk of heart attacks or strokes, simply by scanning a person's eyes. as boston dynamics keep on showing
4:39 am
us how intelligent robots are becoming, experts this week warned that artificial intelligence in the wrong hands is ripe for exploitation. drones turned into missiles, fake videos manipulating public opinion, and automated hacking are just three of the threats they identified. and amazon ceo jeff bezos is getting a new clock, but it's not one you can pick up from a shop. costing a huge $42 million — or £30 million, it's designed to run for 10,000 years and construction began this week inside a hollowed out mountain in texas. the vice president of facebook's adverts found himself in hot water this week, after tweeting that russian—backed ads were not designed to sway the us elections. rob goldman's tweet came after 13 russians were charged with meddling in the election via social media. his views were not those of facebook itself. and a conspiracy theory video claiming that survivors of last week's florida shooting are "crisis actors" somehow became youtube's number one trending clip before being removed. and finally, pictures
4:40 am
of samsung's latest phone, the galaxy s9, were leaked ahead of its launch at the mobile world congress, starting on sunday — ironically, via an app released by the company itself. we'll move on now to video game news. remember nintendo's switch, its hugely successful console that's both mobile and which plugs into a tv? well, the japanese gaming giant has now created a host of rather unusual new peripherals which wildly alter how the machine is used. and marc cieslak has been getting all bent out of shape over it. he plays a musical scale. you'd be forgiven for thinking that this cardboard was the packaging for the new peripherals for the nintendo switch console. however, the cardboard sheets are the peripherals themselves.
4:41 am
called labo, it's a range of devices, which includes things like a piano, motor bike handlebars, fishing rod, and even a robot suit. straps on the shoes... i might look like i'm stomping around in a slightly weird way, but this game asks you to really get into the character of a giant robot. and, if i pull down my visor... i activate first—person mode. for precision destruction. called toy—cons, they're all constructed from folded cardboard. some use elastic bands and all use the switch's motion sensing controllers. i think labo's a big deal for nintendo switch, just because it proves that nintendo is capable of continuing to innovate on an already innovative product. the fact that it's made out of cardboard and your existing controllers fit in, i think will blow parents' minds and, more importantly, blow children's minds as well. but before you can play
4:42 am
with your toy—con, you've got to build it first, something you might worry requires the prowess of an origami expert crossed with the advanced flatpack furniture building skills of a self—assembly sensei. building these devices takes varying lengths of time. more complicated toy—cons, like the robot suit, can take up to eight hours to complete. but that's part of the appeal of labo, taking pleasure from the building of the devices that you're about to use, and understanding how they go together. a little bit of patience and some deft folding results in this. nintendo reckon this is a radio controlled car. last time i looked, cars had wheels. this is my completed toy—con, which i can make move around: because the switch controllers have got hd rumble, you can have different levels of rumble, allowing this particular toy—con to move about. now, this only took me
4:43 am
about ten minutes to build, all in all. and because the switch controllers have an ir camera in them, on the switch itself, you can see where the toy—con is going. each one of the toy—cons comes with a game. some are more complicated than others, but all require an element of physical control, which comes courtesy of the folded cardboard. the games themselves are more like mini—games. but that's not the point. this is more about creativity and making something than it is a hard—core gaming experience. but, i do question the durability of cardboard peripherals. how does that go back in there? not very, based on my time with them. we've managed to have a pit stop with our very own cardboard mechanic. ok, fantastic. so, while i managed to damage my cardboard motorcycle, repairs are really quite easy. there are two different offerings so far, the variety pack, which includes five different toy—cons, priced at £59.99, and the robosuit,
4:44 am
which costs £69.99. that seems like a lot of money for cardboard toys with bits of string for guts. nintendo hasn't yet said whether they are going to give you replacement parts for that, or whether you are going to have to scavenge cardboard from supermarkets or things like that. so it's going to be interesting to see how much nintendo are expecting you to spend on top of the base game and cardboard kits. this week, caterpillar announced the release of a new smartphone. you'd be forgiven for not even knowing they produced such a thing. these devices are specifically aimed at the construction industry, but this one has a few interesting features. an upgrade to their flir thermal imaging camera, the addition of a laser beam for measuring distances or room size, and
4:45 am
the standout feature — a nose. yes, it can smell. or more specifically, has an indoor air quality sensor which aims to alert users if there are high levels of volatile organic compounds — or vocs — in the air, something commonly found in paint, solvents and cleaning products. sound a bit niche? well, its creators don't think so. builders, plumbers, electricians, carpenters, farmers. these types of people kind of generally get overlooked by the everyday phone vendors. so, what we are doing is understanding the technology that we can integrate into our products that really makes their lives better. next week on the show, we'll be bringing you all of the latest news and releases from mwc in barcelona. that was lara. now, earlier we looked at how one face can be
4:46 am
transplanted onto another. next, it's time to meet the people who took this man's face and body and turned him into a completely different species. the team behind war for the planet of the apes is hoping to win the best visual effects oscar next weekend. if it does, it will be the second oscar in a row for dan lemon. bad place. human zoo. bad human. bad humans. soldiers... war for the planet of the apes represents the effort of over 500
4:47 am
people in visual effects. some of us worked on the movie for, like, two years. the movie runs about two hours and 20 minutes, and all but 15 shots in the movie had visual effects. many of the heaviest shots in the movie took over 1,100 hours per frame to render. so, that's in core hours, and we have a lot of those processors working at the same time. maybe the frames wouldn't come back for two days, three days. i find long time ago, after zoo... so, the razor's edge that we walk is figuring out how to take our apes and adjust them, and make the performance readable to a human audience, so it is legible as something that you and i can connect with. but also, not to take it so far that suddenly it doesn't feel ape—like any more. somehow it feels too human. that is the fine line that we ride and that requires our animators to really... you know, it's their kind of skill and experience that comes into play there. this particular movie plays
4:48 am
as sort of this epic western, you know? where they are travelling from one location to the next and you get to see a lot of different environments, and spaces. the big environment was the prison camp. that is where most of the second half of the movie takes place. the prison camp, we actually built it near the vancouver airport in british columbia. it was a 70,000 square foot set. so it was a really big set. but it was surrounded by over 100,000 square feet of green screen. we needed all that screen because we had big wide shots that were set quite low. and beyond this set, we needed to put the sierra nevada mountains. we used a process called photogrammetry, where we went up in a helicopter and shot reference up and down the sierra nevadas in that kind of region. and then we turned those still photographs into three—dimensional geometry using the photogrammetry process. and we were able to use those pieces of geometry sort of like a kit set. and in the kind of traditional model—making sense, we were able to take geometry from those mountains, cut them up and then put them together into a way where we could achieve specific compositions that matched what the director
4:49 am
was after for that set. we developed this new piece of software called totara. totara is an ecosystem simulator. so, we built the terrain and we sort of map resources. we said, this is where the good soil is, this is where the bad soil is. and then we let the plants grow from these seeds. we ran this simulation over the equivalent of 80 years inside the computer. and it allowed the trees to grow up. and as the trees grew, they competed with one another for resources. so, they would kind of try to reach higher and higher. and what happened was a lot of complexity emerged from that simulation. it looked a lot more realistic, and we got a lot more realistic variation and naturalism from that simulation. one of the things that is so great about our job is not knowing what the next thing is.
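the competition mechanism described there can be sketched in a few lines. the soil values, growth constants and one-dimensional layout below are all invented for illustration; totara's actual model is far richer.

```python
import random

random.seed(1)

PATCHES = 20
soil = [random.uniform(0.2, 1.0) for _ in range(PATCHES)]  # good vs bad soil
heights = [0.1] * PATCHES                                   # seedlings

def grow_one_year(heights, soil):
    new = []
    for i, h in enumerate(heights):
        # a tree shaded by a taller neighbour grows more slowly
        shade = max(heights[max(0, i - 1):i + 2]) - h
        growth = soil[i] * max(0.0, 1.0 - shade)
        new.append(h + 0.1 * growth)
    return new

# run the simulation for the equivalent of 80 years
for year in range(80):
    heights = grow_one_year(heights, soil)
```

even this toy version shows emergent variation: trees on good soil outgrow their neighbours, and once they do, the shade term slows those neighbours down further.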
4:50 am
and that, for us, is the thing that is so much fun, to be able to be, like, ok, here is the creative challenge. how can we take our technical tools and bend them to tell this story, or what can we invent, what can we make up to be able to do that? we also spoke with andy serkis about his role as caesar in war for the planet of the apes, and the art of motion capture in general. and you can watch that again right now on our youtube channel. and next week we will get exclusive, behind—the—scenes access to another visual effects oscar nominee, star wars: the last jedi. now, as it happens, the visual effects company that powers star wars has also been involved in something rather different recently. just in time for fashion week season, the london college of fashion has teamed up
4:51 am
with industrial light & magic to put on a show with a difference. two years in the making, ilmxlab is debuting its live cgx technology in an augmented reality catwalk experience. real—life models are joined by a virtual avatar that is controlled by a human, wearing ilm's signature performance capture gear backstage. as the performer walks around, a specially designed rig, motion capture and depth sensing cameras track her every move. the avatar mirrors her physicality in real—time.
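at its core, real-time mirroring is per-frame retargeting: every captured frame of the performer's joint positions is copied onto the avatar's skeleton before the next frame arrives. the joint names and the runway offset here are hypothetical, just to show the data flow.

```python
# toy per-frame retargeting: copy the performer's joints onto the avatar,
# shifted so the avatar appears out on the catwalk rather than backstage
def mirror(performer_frame, offset=(2.0, 0.0)):
    return {joint: (x + offset[0], y + offset[1])
            for joint, (x, y) in performer_frame.items()}

# one captured frame from the (hypothetical) motion capture rig
captured = {"head": (0.0, 1.7), "hand_l": (-0.4, 1.0), "hand_r": (0.4, 1.0)}
avatar_pose = mirror(captured)
```

the hard part in a live system is doing this, plus rendering, within a few milliseconds per frame so the avatar never visibly lags the performer.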
