RIK Rossiya 24 (RUSSIA24) — May 3, 2024, 10:30am-11:01am MSK

10:30 am
Star combo on May 6 at Vkusno i Tochka. Dear friends, I want to bring to your attention a new release of the author's program Besogon TV, which will be called "When the Eyes Are More Than the Mouth". I hope you will understand why we named the program that way. I believe it will be interesting, and I'm looking forward to seeing you.
10:31 am
10:32 am
Stand! Who are they? New guys. This one is a passenger with me, a reporter, a director. I came for my brother. I'm a battalion commander, call sign Trigger. Everyone has call signs. I'm Adam. Why Adam and not Adam? I'm a Chechen. An artist? Can you sing "White Roses"? I can. Sing something, then. Bruce Lee? She's alive. We don't need names. Not a trosh, or what? Not a trosh, not yours. Well, you're Daisy. Light of God, to the machine gun! What do we do, commander? You, shoot. Call sign Passenger. Call sign Passenger, I suggest changing to the call sign Rebin. No, that's my brother's call sign. You remain Passenger. That's right.
10:33 am
Hello, my name is Alexander Gasnikov, I am the rector of the university. This is Emily Pellegrini, an aspiring fashion model, a beginner, but already very successful: in just a few months she gained more than 100 thousand subscribers on one of the social networks and was able to earn about $1000. She has an ideal figure, she is always cheerful and carefree, and she constantly delights fans with new outfits. But there is one nuance: she was generated by a neural network. She does not exist. Artificial intelligence is blurring the line between the real world and the fictional one, and it is not always harmless...
10:34 am
Almost eight times, no joke. Scammers can now copy the identity of anyone; for example, they recently tried to deceive me in a similar way: a scammer who contacted me via video conference pretended to be the mayor of Moscow. How are deepfakes created, why study them, by what signs can a digital mask be recognized, and what solutions are scientists looking for so that these technologies do not do harm? This is a question of science. Our guest is Andrey Kuznetsov, candidate of technical sciences, head of the FusionBrain scientific group at the AIRI Institute of Artificial Intelligence. Andrey,
10:35 am
hello. Hello, Alexander. In preparation for this program, I set out to find one of the first prototypes of a fake, and this is what I managed to find. This... It is probably worth delving into history and looking at how these technologies developed and how they were used for various tasks that humans solve. Take the situation you just described with Abraham Lincoln: you yourself specifically emphasized that he did not like to be photographed, so in essence this technology helped a person. That is, if
10:36 am
it had not existed, would Abraham Lincoln even exist for everyone, if there were only portraits of him and no photographs? Of course, in this situation no one... I was thinking about some kind of bad connotation of using the technology, and there are plenty of such examples, but now we have the technology, we have social networks, we have cell phones, and we have great capabilities. Almost every application related to image processing has a lot of built-in filters, built-in technologies and algorithms that let you improve your face: these are the so-called beauty tasks.
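The built-in face-improvement filters mentioned here are, at their core, ordinary image-processing operations. As a rough, hedged sketch (an illustrative assumption, not the algorithm any particular app uses), a minimal "beauty" pass can be a blur blended back into the original image, since the blur suppresses high-frequency detail such as blemishes:

```python
import numpy as np

def box_blur(img, k=5):
    """Naive box blur: average each pixel over a k x k window."""
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].mean()
    return out

def beautify(img, strength=0.6):
    """Blend a blurred copy with the original; higher strength = smoother skin."""
    return (1 - strength) * img + strength * box_blur(img)

# Toy grayscale "face": flat brightness plus high-frequency noise (blemishes).
rng = np.random.default_rng(0)
face = np.full((32, 32), 128.0) + rng.normal(0, 20, (32, 32))
smoothed = beautify(face)
# Smoothing reduces local variation while keeping overall brightness.
print(round(face.std(), 1), round(smoothed.std(), 1))
```

Real apps typically use edge-preserving filters restricted to detected skin regions; the box blur above only illustrates the blend-with-smoothed idea.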
10:37 am
And of course, with the development of computing power, of hardware and of new architectures, and with the emergence of generative adversarial networks in 2014, all these methods began to move into everyday use by ordinary people; it became possible to create new tools with which people could process photographs and videos and thereby simplify certain tasks. In fact, modern definitions differ one way or another, but I would define a deepfake as the result of using technology to modify or synthesize voice, video and
10:38 am
images, that is, multimedia content. And within this definition, if we talk about pictures and videos, we talk about
10:39 am
two key directions. The first is working from a single photo: when we have only one photo and no large volume of data about a person, we need to learn to extract the maximum knowledge from it in order to transfer that knowledge to a new image. In this sense there are two key components. The first is an extractor of these features, a model that can... they are called
10:40 am
feature extractors: they extract features at different layers, a hierarchical representation of the picture, so to speak, in order to understand where the small facial features are and where the large ones are, at different resolutions. The next model then learns to extract the attributes that need to be transferred to the new picture, and those attributes are transferred. In fact, when models are compared, they are compared according to different characteristics,
10:41 am
including gaze direction and the stability of a person's identity, that is, how recognizable the person remains after the transfer, because the geometry of the face has a very strong influence during transfer: complexion, face shape, textures and so on. Each of these characteristics reflects a certain ability of the model to extract those features correctly from the source. Do scientists themselves find the correct features, or are the correct features determined during training? That is actually the essence of deep learning. Who finds the features that should stand out is a very important question from the point of view of mathematics and of science, because there are a great many facial features to extract, and a person can make completely different movements, and...
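The hierarchical representation described here can be illustrated with a deliberately toy sketch. Everything below is an assumption for illustration, with a hand-written gradient filter and average pooling standing in for learned convolutional layers: it computes an edge statistic at several downsampled scales and stacks the results into a small feature vector, fine detail first, coarse layout last:

```python
import numpy as np

def edge_response(img):
    """Mean horizontal+vertical gradient magnitude (a stand-in for a conv layer)."""
    gx = np.abs(np.diff(img, axis=1)).mean()
    gy = np.abs(np.diff(img, axis=0)).mean()
    return gx + gy

def downsample(img):
    """Halve the resolution by 2x2 average pooling."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def hierarchical_features(img, levels=3):
    """Feature vector: one edge statistic per scale (fine -> coarse)."""
    feats = []
    for _ in range(levels):
        feats.append(edge_response(img))
        img = downsample(img)
    return np.array(feats)

rng = np.random.default_rng(1)
face = rng.random((64, 64))  # placeholder "face" image
f = hierarchical_features(face)
print(f.shape)  # one number per scale
```

A real extractor learns its filters from data, exactly the point made on air; the fixed gradient filter here only shows how fine and coarse layers combine into one description of the picture.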
10:42 am
The real task for scientists is, first, to collect large amounts of data, that is, large arrays of high-quality images of faces, and then to be able to train models that extract the various features of shape, color and texture, everything related to a person's face. These extracted features are subsequently used to superimpose one face onto another as well as possible, if we are talking about... Andrey, great, but what you
10:43 am
are talking about now is used not only by artists and musicians for their own purposes; it can also be used by attackers. Could you tell us a little more about this aspect? Well, in fact, any technology has two sides: it can be used both for good and for harm. From the point of view of the various situations in which deepfakes have been used to compromise someone and...
10:44 am
so, an attacker in the modern world can find a photograph of a person on the internet and can find video recordings, because people post a lot of personal content on social networks, and he can use this to build various deceptive schemes. Well, now let's see how deepfake technology works live. My colleague Mikhail Seryogin, head of the information security department at Innopolis University, joins the conversation. Mikhail, hello. Tell me, which deepfake technologies are currently the most popular among
10:45 am
scammers: video or audio? Good afternoon. In fact, this technology is not used all that often for audio deepfakes, because it is much easier simply to call on behalf of a stranger, some very influential person supposedly from the security service of Sberbank, to convince us that we must transfer some amount of money to someone. But the technology is already out there. For example, they set up a room to look like a police office, seated a man in a police uniform, and pulled a mask of the actor who played Iron Man, Robert Downey Jr., over his face. This whole combination, when they then started calling people on different messengers and introducing themselves as police officers, looked pretty convincing, because a person of status and respectability immediately puts you at ease, and the actor already has that kind of appearance.
10:46 am
They deceived people in this way. And it's worth considering that you can encounter deepfakes not only for fraudulent purposes. Not long ago there was a story on the internet in which an experienced employee of an IT company decided to bring a friend of his into the field; the friend had recently completed courses but did not yet have relevant experience. Together they created a lot of different photographs, a storyboard, and trained a neural network on the friend's face; by the way, this took several weeks. In the end a neural model was ready with which the friend's face could be pulled over the face of the experienced employee. They then made calls this way: the experienced employee spoke on the interviews, trying to get the friend a job at different companies, and on the second try they succeeded. So you need to keep in mind that even in our everyday or professional activities we may well encounter deepfakes, and not necessarily from outright scammers. Well, nevertheless
10:47 am
fewer words, more action. I promised to demonstrate something to you. Now, I had a device here... no, just a joke; actually I don't need any device, it all works digitally. So, in this case I decided to turn on the face of the actor who played Mr. Bean, a famous comedy character. You see that regardless of the lighting this works out quite well; even with different emotions, everything is rendered quite clearly. And we don't have to stop there: we can turn on other masks, for example Elon Musk. You see, the face has already changed; they look a little different, of course, because I am switching them quickly. Or the same Jackie Chan, and here it is clear that if the shape of the man's head does not coincide with mine,
10:48 am
a certain dissonance already arises, but at the same time the technology works quite well. Or, for example, a character like this... this is how it looks with glasses. You probably noticed various artifacts when I put these glasses on: the neural network is trying to draw the face of this character even though he did not wear glasses. It copes not too badly, though I can't say it copes well. And if you turn your face, for example... that is, if you suspect your interlocutor
10:49 am
of not being real, you can try to put him in a non-standard situation: ask him to turn somewhere, to look to the side, or, for example, to scratch his nose, or to pass his hands in front of his face. There are many ways; the point is to get away from the standard situation in which a person simply speaks plain text into the camera, and then all the telltale artifacts will show up. Mikhail, everything that you demonstrated to us, you did on your work computer, without any additional equipment? Yes, and that is what makes this technology so striking. Moreover, it is not just accessible, it is even a little outdated; I am fairly sure that within a few months neural networks will appear that are much richer in number of parameters and operating efficiency, able to animate photographs even better than what was just shown to you here through the video camera stream. So this technology turns out to be accessible, it has been tested, it's
10:50 am
been around for more than a year, it is not difficult to master, and it does not require very large computing resources. I wouldn't say any laptop or computer will do, but a gaming computer copes with this task with a bang. Thank you, Mikhail. That was Mikhail Seryogin, head of the information security department at Innopolis University. Let's go back to the studio. Andrey, a question for you right away: is it possible to make programs, for example for smartphones, that can determine that something is a deepfake? In fact, the research community is now actively engaged in this; the area has been developing for quite a long time, since about 2008-2009...
10:51 am
a normal eye will be able to see... But if you use artificial intelligence as a kind of prism that looks at the picture a little differently, from its own mathematical angle, and extracts special values from the face area, then you can decide with fairly high probability that it is a fake. Such algorithms exist, and many well-known teams around the world, including in Russia, are developing such models. In fact, this is one of the ways to increase trust or confidence in artificial intelligence. And why? Because we
10:52 am
are all sitting on social networks now, reading the news, looking at everything shown to us in videos and photographs, and of course the human brain works in such a way that it perceives visual information very quickly; criticality is...
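The detector idea described above, a model that extracts numerical values from the face area and outputs a real/fake decision with some probability, can be sketched in a toy, hedged form. The features and data below are synthetic stand-ins, not anything from an actual detection model: a tiny logistic-regression classifier is trained on two made-up feature distributions (imagine a "texture smoothness" statistic that tends to differ between real and generated faces):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 2-D "face statistics": real and fake samples drawn from
# slightly different distributions (purely an illustrative assumption).
real = rng.normal([0.3, 0.5], 0.1, size=(200, 2))
fake = rng.normal([0.6, 0.2], 0.1, size=(200, 2))
X = np.vstack([real, fake])
y = np.array([0] * 200 + [1] * 200)  # 0 = real, 1 = fake

# Logistic regression trained by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted P(fake)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Production detectors replace the hand-made statistics with features learned by deep networks, but the final step is the same: a probability that the input is synthetic, thresholded into a decision.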
10:53 am
Well, they had no options, unless they know how, given the source code, to change something; that is one option. But then a second point arises. If we proceed from what you said, an opposite trend emerges: since there are tests that can identify deepfakes, new generations of networks will try to pass those tests, and this is an arms race, like the cheetah and the roe deer. The tests become more and more advanced, the corresponding neural networks become more and more advanced, and then the question arises: is there a limit to this, where are we going, how far
10:54 am
can this even take us? Will it get so realistic that we will not be able to tell the difference? In your opinion, what are the prospects? In fact, video and image synthesis technologies are now really reaching a level where it is very difficult to distinguish a real picture from a synthesized one. You mean Sora? Including Sora. And if we talk about pictures, then Midjourney, one of the solutions that generates photos and faces of simply perfect quality, so realistic that they cannot be distinguished. And there is the recently released model from OpenAI that showed such realistic content that even on close examination of the video recordings... it is very difficult to find any traces of the fact that it is all synthetic. The one about the kitten and the vacuum cleaner? Yes, yes, including
10:55 am
the one about the kitten and the vacuum cleaner. Only when you storyboard the video and peer closely at the contours... I always look at the contours: when there is a moving object, it is always interesting to watch how its contours change, because if an object has a rigid shape in the physical world, like a vacuum cleaner, then most likely its contours should not change, unless it is camera distortion. So, of course, realistic content already exists, and sometimes it is very difficult to distinguish from synthesized content; everyone has been moving toward realism for a long time. And of course you need to learn to look closely at everything you see and not simply trust what is directly in the picture: it may be not at all what it seems. In fact, probably the main question among TV viewers is how not to become a victim of scammers who have such technologies. Yes, Alexander, this is a really
10:56 am
important question. When... a colleague wrote to me, and there were audio messages in his voice, I did this: I asked the person a question to which only he could give the answer, and that specific question led to the person no longer answering me; after some time the account was blocked. So whenever there is even a small amount of doubt, you need to try to check by asking a counter-question. While you are in the role of answering questions, you do not control the dialogue; but as soon as you ask counter-questions, each counter-question requires an answer that the attacker does not have in his script, because the attacker's script is built around a set of questions that he must ask you in sequence, pushing toward his goal of extracting information from you. And of course, whatever content appears, audio, video or photo, and however skillfully the attackers try to approach
10:57 am
this... In the face of attempts to deceive ordinary people, you do not need to be a super-technological specialist; you just need always to apply a critical eye to what you hear, what you see and what you are told, and to try to control the dialogue by asking questions. If it is a stranger: I will call you back. If it is a person you know well, ask a question whose answer only he knows. As soon as you switch and change the course of the game, the attacker is lost, and it all comes to the point where he stops answering you; the script breaks down, and it is much harder for the attacker to continue the game with you. The Federation Council is now preparing a voice protection bill, and it is nice that artificial
10:58 am
intelligence experts are taking part in the development of this bill; I myself was at one of the meetings, to contribute to this. Right now everything is fine with this in our country, if, for example, we compare with Europe. I really hope that on the one hand we will stay on that wave, and on the other hand we will actively think about how to find legislative tools that prevent attackers from doing the kinds of things we talked about today. In our studio was Andrey
10:59 am
Kuznetsov. Andrey, thank you very much. See you soon. Thank you, Alexander, all the best, goodbye. It is now 2124... I'm just delighted, I'm amazed, very cool, exciting, simple. I love it; kids, this is real, unreal cinema, this is a real level, this is history. So this is a guest from the future, a film about the future? From the future to the past, from the past to the future. Once again, did I understand correctly, the future? Yes. And the music, the graphics... I liked the humor. I'm not from the parent committee... nothing has changed, the main thing remains love. I see how he looks at her, and the way he looks, well, the way they look at each other, there is so much in it: surprise, well, of course, like that.
11:00 am
10 out of 10, a brilliant choice, it will be interesting for everyone. We will meet in the future. Tell me, when? In 100 years, in the future. We begin our program with the exhibition of captured equipment in Pobeda Park on Poklonnaya Hill in Moscow. Today delegations of military officials and diplomats from other countries, foreign specialists, will be able to see for themselves that Western weapons are largely overhyped by their own manufacturers. My colleague Egor Grigoriev is working on Poklonnaya Gora; he is in direct contact with us. Egor, greetings. Tell me, from what countries will diplomats be arriving today, and how many visitors are there in general? How are things? Tatyana, hello. By this moment it is known that the invitation was received by the military attachés and representatives of all the countries that...
