tv   The Whistleblowers  RT  August 2, 2024 11:30pm-12:01am EDT

11:30 pm
actually sounded like a real person, and it sometimes had bursts of anger. At one point the microwave demanded that Rizzotto put his head inside so that the microwave could kill him. When Rizzotto pretended to do so, the microwave actually turned itself on. That was two years ago, and two years is a lifetime in the world of artificial intelligence. Now, many of us who are not tech wizards are struggling with news and discussions about AI and its relation to things like predictive modeling, adaptive control, generative AI, even neural networks, and something called a hive mind. And that doesn't even touch on things like deepfakes, which already are causing a stir in elections around the world. How big of an impact is AI going to have on our daily lives? We'll try to get an answer to that question from our next guest. Sean O'Brien is a subject matter expert in cybersecurity, privacy, Web3, and free and open-source
11:31 pm
software, called FOSS. Sean is a visiting lecturer at Yale Law School, where he teaches cybersecurity and where he founded the Privacy Lab initiative. He's the chief technology officer at PanQuake, recently launching a link cleaning, shortening, and archiving service. Sean developed a Web3 and blockchain class at Yale, as well as hacking and cybersecurity at the Lawfare Institute, and was founding head tutor at Oxford University's Cybersecurity for Business Leaders programme. Sean's expertise has appeared in The New York Times, Forbes magazine, Bloomberg, Popular Science, the Associated Press, NBC News, the Financial Times, Wired, and The New Yorker. Sean, thanks so much for being with us. I've been looking forward to this conversation for a long time. I'm so happy to be here. It's a pleasure. I think the best place to start is with the very basics of AI. Can you tell us, what exactly is AI,
11:32 pm
when did technologists begin to develop it, and to what end? Sure. So artificial intelligence, or this thing that we're now so used to calling AI, it's really something that goes back to the 1950s, believe it or not. When I think about the origins of AI, I want to go all the way back to Alan Turing. Turing was the big pioneer of the computing systems, the things we now call computers and computer software and so on, and the big pioneer from that period who really designed the way in which computers, you know, so-called think, right. And in 1950 Turing had a paper called Computing Machinery and Intelligence, and I encourage folks to go back to that paper. It's very easy to read. He asks in that paper, can machines think? And then the important part is he says he believes it to be, quote, too meaningless to deserve discussion. Okay. And what he sort of means by that is, you know,
11:33 pm
we don't talk about whether or not submarines swim, right? Or, you know, whether airplanes really fly in the way the birds fly, right? We just sort of use these analogies for technology without sort of getting all philosophical about it. And computers certainly can do some amazing tasks, but they are not human brains, right? And I think that's a really important point that gets lost in all of this discussion about artificial intelligence, and certainly the branding around calling it intelligence, right? Computers are good at some kinds of problems. They literally will never solve other problems that humans can solve, right? And humans are also very good at certain tasks that computers are very bad at. So we can't crunch large prime numbers, right? Or do these complex mathematical problems the same way computers can. But we see, for example, faces in everything, right? When we look at a light socket or so on. And that's because we have biological systems in our brain
11:34 pm
that basically, those types of structures are, um, you know, allowing us to do that very easily, very simply. Now, fast forward to the 1970s, 1980s, and we start having more miniaturization of computer components. And this is the era really where artificial intelligence and the concept sort of takes off, and there was a lot of research done around it in that period. And also periods of what they call AI winters. So if you look into the history, you'll find there was sort of a cooling on research and funding for academic research, certainly in artificial intelligence, over a number of periods, you know, on and off through the nineties and so on and so forth. And what we really now have as AI, you know, ChatGPT, OpenAI, all these other tools, these things are really, you know, sort of built on the corpus of information that the internet can feed into what's called a large language model, right?
11:35 pm
These LLMs, and that's what we're really calling AI. So it's something that was born sort of in the early 2000s, mid-2010s. Um, and we can go into the details about who's funding it and why they're funding it and so on, but obviously it's making a big impact on our world. I had no idea that it's been around so long. Okay, Sean, the technology is developing and advancing at lightning speed, it seems. Is it possible to rein it in? Sure. So, you know, um, AI regulation is something my colleagues at Yale and other folks have been working on for a really long time. The writing was sort of on the wall that these kinds of systems, AI, LLMs, etc., would be used in some context, you know, even in the legal systems, but certainly governing the rules and the processing of information in this huge global information network we have called the internet, right? So there's been a lot of work on regulation, certainly in the United States, but nothing really solid right now. I do think regulation has a place,
11:36 pm
but anybody who's been involved in drafting legislation, and I'm sure you've had a hand in that a little bit, knows it's a difficult thing. You know, by the time these regulations go through all the pipelines and through all the structures that we have, the bureaucracies and so on, they've shifted in such a way that they're often not very useful. To take an example, right, the GDPR, the General Data Protection Regulation in Europe, you know, that's had a real impact on the landscape of privacy worldwide. And the version of that that we sort of emulated in the United States is the CCPA, the California Consumer Privacy Act. And that act is sort of GDPR-lite, as I like to call it. Um, it removes some of the strong, um, you know, controls that GDPR has, and I've seen that a little bit with this, uh, regulation already. Um, that said, I think really, you know, grassroots movements, you know, especially local movements, uh, you know,
11:37 pm
um, petitioning, uh, municipal governments and, you know, getting things like bans on face recognition technology, which have been very successful in the United States. And, you know, those sorts of things are powerful, and they do have a real-world impact. Certainly, you know, getting your local town or city to stop scanning your face from street cameras, you know, is a good thing as far as I'm concerned. And that's something that can be done sort of at the regulatory level. At the larger level, I'm not so convinced, and certainly when we have, you know, no control over entities like these private entities, big tech and so on, Amazon Ring cameras and everything else. You know, that sort of thing isn't something that can be so easily controlled where our information systems are concerned. We don't have control over them now anyway, right? The Facebooks and the Googles and so on. So this is just another layer that is unfortunately quite problematic, and I don't think regulation is going to have a really big impact there. Let's get into some of the more complicated aspects of AI. We hear terms that I have no understanding of, like predictive modeling,
11:38 pm
adaptive control, generative AI. Can you tell us what these terms mean and what the uses are for these technologies? Sure. So what we're most familiar with, and certainly a lot of people got really familiar with in the last couple of years, is generative AI. ChatGPT and these other prompt-based systems, you know, the new helper systems, the Copilots that Microsoft is putting into, you know, shoving into Microsoft Office and GitHub and all these things. I call them nagging systems, and that's actually being kind of polite to them. But those systems are generative AI. So they're provided with a prompt or some other starting point by the user, and then they create text or images or some other content that they generate. So you take a huge amount of data, you shove it into a learning model, and then statistically, you know, the algorithm is able to figure out, and this is
11:39 pm
a huge oversimplification, but it's able to figure out a certain amount of rules and guess basically what you're looking for based upon the inputs that you give it. Thank you, Sean. We're going to take a short break, and when we come right back, we're going to speak with Sean about what to expect in the near future from the development of AI, and how we protect ourselves and our information. Stay tuned.
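Sean's description of a generative system — shove a huge amount of data into a learning model, then statistically guess what comes next from a prompt — can be sketched with a toy bigram model. This is a drastic simplification of a real large language model; the tiny corpus and function names here are invented purely for illustration.

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then generate text by repeatedly sampling a statistically likely next word.
corpus = (
    "machines can compute machines can learn "
    "machines can guess what you are looking for"
).split()

# Bigram counts: table[w] maps each word to how often each successor follows it.
table = defaultdict(lambda: defaultdict(int))
for word, nxt in zip(corpus, corpus[1:]):
    table[word][nxt] += 1

def generate(start, length=8, seed=0):
    """Continue `start` by guessing each next word from the observed counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:  # word never seen mid-corpus: nothing to predict
            break
        words = list(followers)
        weights = [followers[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("machines"))
```

Real LLMs condition on long contexts with billions of learned parameters rather than raw pair counts, but the core move is the same: predict a plausible continuation from statistics over the training data.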
11:40 pm
11:41 pm
Welcome back to The Whistleblowers. I'm John Kiriakou. We're speaking with Sean O'Brien. He's
11:42 pm
a lecturer in law at Yale Law School with expertise in cybersecurity, privacy, and mobile device forensics. He's the assistant director for technology at Yale's Office of International Students and Scholars and a Ph.D. candidate in law at the University of South Africa. He founded Yale's Privacy Lab in 2017, an initiative of the Information Society Project at Yale Law School. Okay, thanks again for being with us, Sean. Happy to be here. It's been an amazing conversation. One of the major issues related to AI in the immediate term is the problem of deepfakes. These are videos that look to be 100 percent legitimate but are actually completely fake. Many of them are very crude, and we see those every day on Facebook and TikTok and elsewhere. But some of them are actually quite sophisticated. How can we stop people from creating videos that make it look like, for example, world leaders are admitting to crimes on camera or taking harmful policy positions
11:43 pm
or making themselves look foolish? So, unfortunately, deepfakes are really problematic. I'm very glad you brought that up. So certainly it's a topic that will be more and more prominent as, you know, for example, the election cycle here in the U.S. keeps up. We saw that very much the other day with the debate. Um, but, uh, at any rate, you know, uh, the first thing we need to think about, and the first lesson we should learn from this, is talking about, you know, where these deepfakes came from and how the software was trained to do this sort of thing. And the way that happened was, large databases were built of people's facial information, right? Academics, you know, private researchers, um, you know, photos scraped off of the internet, many, many millions of them. You know, all of the surveillance cameras that have sort of just suddenly appeared around us everywhere, certainly in places like airports and so on, but well beyond that,
11:44 pm
you know, the grocery store even. These cameras have been collecting data about our faces, the relation between different parts of our faces, you know, getting very, very good at learning, you know, the same way I was talking about these other models learning, you know, like ChatGPT learning to talk back to us or to regurgitate information in a way that's convincing. These models were learning how to create faces, right? How to understand faces, how to hypothetically recognize faces, although we can get into how biased that technology has been. But certainly, you know, deepfakes have proven, you know, that you can create a likeness of someone else and create a video, and certainly create propaganda and all other types of mischief around that. But I think the important thing to remember here first, and it's good news, at least for now, is that we do have software that can do a reasonably good job of detecting these kinds of things, right?
11:45 pm
The same way that software is reasonably good at detecting a big block of text that came out of ChatGPT, right? Um, we do have software that can recognize, okay, the edges on the face here, the way that the compression artifacts are in this video. We can tell it's been edited, that it's not the original video, that there's a problem with mismatch of colors and resolution and lighting and all that sort of thing, right? Um, but really, when we get down to it, this conversation is about the information system, right? It's about, okay, um, do we allow information to flow through this beautiful internet that we have? Do we allow people to literally, you know, look at things that are difficult conversations, but still be able to learn when they're, you know, very young, when they're getting older and becoming adults, and certainly when they're taking action in the world? And some of these folks become rather powerful and are able to make quick actions that affect a lot of people. Are these individuals, you know, are they, um, knowledgeable as
11:46 pm
in, have they really learned how to distinguish between something that's real and what's not real? Likely or unlikely, are they going to pause and take a deep breath before they take some kind of action, right, before they just kind of type something out on the keyboard? I always tell folks, you know, when you're angry, when you're upset, you know, as a rule for email and everything else, certainly, you know, Twitter, X, whatever it's called: do not just start typing on the keyboard and just sort of regurgitating onto the internet. That stuff does live forever, and certainly can live forever. But another big problem, at least here in the United States, is the creation of deepfake nudes, where people, in many cases children, are made to appear to be nude in videos. In fact, these are computer-generated bodies attached to actual children, or the entire image is computer-generated, and it's a crime. So how do we protect ourselves and our children from this kind of activity? Sure, sure. So, I mean, look, the attention economy, that's a bad enough problem with deepfakes,
11:47 pm
but I'm glad you sort of touched on this, because this is obviously extremely serious. Um, so the acronym that's been used for this sort of content is CSAM, or child sexual abuse material. It's sort of a law enforcement acronym, but it's become what folks have been using, and it does encompass this kind of thing, you know, so-called revenge porn and, you know, the child material, whether, you know, putting someone's face on a body, and this awful stuff, right? So it's obviously extremely problematic. It's obviously something that most folks object to quite strongly. I do think it's something that will exist in some form as humans, you know, have the kinds of, um, issues that they have. Um, now, I can give an example in the real world where, uh, you know, sort of one of these big, uh, image generation, um, software models in AI is
11:48 pm
a software called Stable Diffusion. And they did actually go through their data set and start removing content that could be objectionable, including CSAM. Now, I don't know how they detected that it was a CSAM image, right, but supposedly they removed it. And those sorts of controls, you know, um, are hit or miss, and we can have, you know, societal conversations around them. We certainly should. But of course, you know, you can't put the genie back in the bottle, right? This is really a whack-a-mole game, not that different than trying to, you know, stop malware and ransomware and, you know, all kinds of bad stuff on the internet that can infect computers and disrupt networks. So you're going to have a hard time completely removing this stuff. We've been talking about militaries and their development of AI, but what role should academia have? How can these outside substantive experts be helpful? I think academia could have
11:49 pm
the role, and should have the role, that it has in other situations, you know, where it needs to reflect on the societies around it. You know, provide expertise, as you said. You know, provide research, and actually invest time, energy, and money, of course, into researching these things, and then provide that information to the public for free so they can access it, which often doesn't happen with these journal systems, right? Um, and also in a form that the public can understand and actually have a real dialogue about. Academics should be courageous. They tend not to be, right? So one of the things that happens in academia is you get entrenched and you get complacent, and everybody sort of thinks, you know, there's going to be some day when I'm so high up on, you know, this sort of ladder in academia that I can speak my mind. And certainly some places, you know, and some folks, you know, get away with quite a lot, so to speak, are able to speak their minds, are able to do the research that
11:50 pm
they'd like to do. So I don't have to tell you, I'm sure, there are certainly a lot of examples of academics being, you know, sort of removed or put to the sideline or blacklisted in some way, um, you know, because of their views, because of what they said, because of the type of research they were doing, even if they had very strict standards. So I think academia's role should be highlighting the issues, and I do know some very courageous academics who have been working on this. I mentioned some of the regulatory stuff. There's a lot of other research out there. It's certainly very fertile ground for a conversation. I want to see the conversation go beyond the ivory tower, you know, and being at the university, of course, you know, I mean, I really do believe that we need to actually have real conversations with the public, with the surrounding communities around these universities and so on. You know, I'm a native New Havener and someone who does that. So I really see the issues that can happen, the clashes that can happen, between the general public and the university, a very
11:51 pm
powerful and rich university, you know, and the general public in the community surrounding that university. So we need to have those conversations, and we need to be serious about it. The role academics have traditionally played, unfortunately, uh, with AI is actually building these systems and sort of unquestioningly saying, hey, we'll worry about the ethics later. There's been a lot of that. And I'm not just talking about so-called artificial intelligence or large language models and so on. I'm talking about these big sort of data surveillance systems. There's been a lot of private investment in universities, and certainly university endowments are wrapped up in big tech, the Facebooks, Googles, etc. And, you know, there's been a lot of work on how to analyze data on individuals, on people, and study them, you know, from these sort of automated, semi-automated, huge surveillance systems. And AI being tacked onto that is problematic. Um, you know, the facial recognition technology,
11:52 pm
those systems that are now so commonplace. And a lot of that work was done in academia, and done with data sets where they were taking pictures of people's faces at universities. And that early work, that information, is now still a part of these big data sets that, you know, power OpenAI and all these other things, you know, Stable Diffusion, Midjourney, all these other models. So I'm not sure what academia's role will be, but, you know, I hope that we can turn the corner here and we can have a serious adult conversation, and it's not just going to be about, you know, money. And finally, Sean, I want to get back to this crazy story that I told in the introduction about blogger Lucas Rizzotto, the microwave oven that later tried to kill him. Well, we've all seen the movie I, Robot. The prime directive is supposed to be to not harm human life. Is that just in Hollywood? Is AI something that should cause us to fear for our safety if,
11:53 pm
if it's not regulated? The so-called internet of things, right, whether we use that term, IoT, anymore in the future, I don't know, it seems to be dying, but, you know, these devices which are internet-connected, they're almost always there, connected, but then also, you know, way, quote unquote, smarter than, you know, they should be. Microwaves that are giving you recipes for some reason, um, you know, these kinds of things, um, are problematic. Um, I don't see them as killer robots, um, for some of the reasons I stated earlier. And I think the real problems we have to worry about from sort of a security standpoint are the fact that these systems are really difficult to patch and upgrade. So, you know, they're not based on, you know, open-source software that can be, you know, easily modified and observed. When they are, they're baked into firmware, baked into chipsets, locked down and sandboxed in such a way that even the manufacturer will have
11:54 pm
a hard time pushing patches and security updates to these devices. And so they tend to be abused. As we've certainly demonstrated in some of our classes and so on, they can be used in these huge botnets, where they really do damage on the internet. Or they can attack healthcare systems, they can impact all kinds of systems, and they can deliver ransomware. And we're seeing a huge issue, which affected me directly, with car dealerships in the U.S. You know, I had to go get a used car because my other one is a piece of crap, frankly, and, uh, you know, it was hours and hours of faxing and paperwork, and so on, going back to the old systems, and people literally walking down the street to check in with folks, because, you know, those computer systems are gone. That's car dealerships. We're not talking about things that are much more vital and can have much bigger impact on people's livelihood and health and safety and so on. I brought up hospitals. We know about the big WannaCry issues, and, you know, that was not quite
11:55 pm
a decade ago. But, um, you know, still, we've been living with ransomware for nearly a decade now. And these backdoors, frankly, and obviously we should bring this back full circle here to some of the real central issues, these backdoors are built and funded by entities like the NSA, right? They're made inside the CIA. They're using military contractors, Pentagon contractors, who are building these systems, um, to have backdoors in devices like microwaves so that they can be exploited and used as so-called cyber weapons. So we know that ransomware comes from an NSA backdoor that was purposefully inserted into Microsoft software, right? And we know that it was patched quietly, and Microsoft basically tried to cover that up in a very sort of clunky and clumsy way. But they tried to, and now we're all living with that. It's having very serious
11:56 pm
implications for everybody, and not just for industry but for people's real livelihoods and their health, right? Um, so the microwave is one thing, um, this prime directive stuff, you know. I mean, these aren't thinking animals or humans you can build ethics into. In that way, you can build certain rule sets so that machines try to not do certain things. But of course, machines can be hacked and modified, rules can be changed, and it's something, obviously, that I teach. When we have governments and these extra-governmental entities purposefully building and storing exploits, right, and vulnerabilities, and trying to break into systems in so-called cyberwar, these devices are going to continue to be dangerous. It doesn't matter what rule sets we program them with if they're purposefully broken by others and used, you know, in some malicious way. Thank you, Sean, for explaining to us this marvelous and sometimes scary world of
11:57 pm
AI. Just about everybody involved in the development of AI supports some form of government intervention and regulation. Sam Altman, the CEO of OpenAI, has said that AI could design biological pathogens and hack into computer systems. Elon Musk said that AI's development is moving too quickly to be healthy for society. The reality is that if Elon Musk is wrong about AI and we regulate it anyway, who cares? But if he is right about AI and we don't regulate it, everybody in the world will care, but by then it might be too late. I'd like to thank our guest Sean O'Brien for being with us today, and thank you to our viewers for joining us for another episode of The Whistleblowers. I'm John Kiriakou. Please follow me on Substack at John Kiriakou. We'll see you next time.
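Sean noted earlier that detection software can flag an edited video by mismatches in color, resolution, and lighting between a pasted-in region and the rest of the frame. A minimal sketch of that one cue follows; real deepfake detectors are far more sophisticated, and the image representation and threshold here are invented purely for illustration.

```python
# Toy illustration of one cue detection tools look for: a spliced-in region
# whose brightness statistics don't match the rest of the frame.

def region_mean(img, rows, cols):
    """Average pixel value of img (a 2D list of grayscale ints) over a region."""
    vals = [img[r][c] for r in rows for c in cols]
    return sum(vals) / len(vals)

def looks_spliced(img, threshold=40):
    """Flag a frame whose left and right halves differ sharply in brightness."""
    h, w = len(img), len(img[0])
    left = region_mean(img, range(h), range(w // 2))
    right = region_mean(img, range(h), range(w // 2, w))
    return abs(left - right) > threshold

# A uniform frame versus one with a much brighter right half "pasted in".
clean = [[100] * 8 for _ in range(8)]
spliced = [[100] * 4 + [200] * 4 for _ in range(8)]
print(looks_spliced(clean), looks_spliced(spliced))
```

Production tools combine many such signals, compression-artifact analysis, edge consistency, lighting direction, and learned classifiers, rather than a single brightness comparison.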
