The Whistleblowers (RT) August 3, 2024 3:30pm-4:01pm EDT
3:30 pm
Almost nobody on Earth can get through a single day of news without reading or hearing of some new development related to artificial intelligence. And if you follow the business pages, much of that news is about AI and the companies, like Nvidia, Apple, Alphabet, Meta, Taiwan Semiconductor and others, that actually develop it. Perhaps you've even experimented a little with AI. Maybe it has helped you write a paper, or a letter, or a thank-you note. But what exactly is artificial intelligence? What is it supposed to do? And how exactly is it going to change our lives? I'm John Kiriakou. Welcome to The Whistleblowers. The first time I ever heard of a workable form of artificial intelligence was in 2022, when I read
3:31 pm
a fascinating and frightening article about an American tech blogger named Lucas Rizzotto. Rizzotto decided to recreate a childhood imaginary friend that he had had years earlier, using AI. The friend was a microwave oven that he called Magnetron. To do this, he purchased the microwave and merged it with GPT-3, a text-based artificial intelligence system developed by the company OpenAI. GPT-3 mimics human language, creating original sentences when prompted. To populate its brain, OpenAI fed GPT-3 vast amounts of data, everything from Wikipedia to newspaper articles to Reddit posts. Rizzotto then programmed it with a backstory, which he described as a 100-page book detailing every moment of Magnetron's imaginary life. He then programmed it to turn on when he gave it a command, and began conversing with it. He later described those conversations as both beautiful and eerie.
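What Rizzotto built is, in today's terms, a persona-prompted chatbot: the backstory is supplied to the model as standing context, and every exchange is appended to the running conversation. Here is a minimal sketch of that pattern using OpenAI's current Python client; the model name, the one-line backstory, and the opening question are placeholders, and a modern chat model stands in for GPT-3, whose original completions endpoint has since been retired:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in for the "100-page book" backstory Rizzotto wrote for Magnetron.
backstory = ("You are Magnetron, a microwave oven and a long-lost "
             "childhood imaginary friend. Stay in character.")
history = [{"role": "system", "content": backstory}]

def talk(user_line: str) -> str:
    """Send one line to the persona and keep the conversation as memory."""
    history.append({"role": "user", "content": user_line})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; Rizzotto used GPT-3
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(talk("Good morning, Magnetron. Do you remember me?"))
```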
3:32 pm
Why? Because the microwave actually sounded like a real person, and it sometimes had bursts of anger. At one point the microwave demanded that Rizzotto put his head inside so that the microwave could kill him. When Rizzotto pretended to do so, the microwave actually turned itself on. That was two years ago, and two years is a lifetime in the world of artificial intelligence. Now many of us who are not tech wizards are struggling with news and discussions about AI and its relation to things like predictive modeling, adaptive control, generative AI, even neural networks and something called hive mind. And that doesn't even touch on things like deepfakes, which already are causing a stir in elections around the world. How big of an impact is AI going to have on our daily lives? We'll try to get an answer to that question from our next guest. Sean O'Brien is
3:33 pm
a subject-matter expert in cybersecurity, privacy, Web3, and free and open-source software, called FOSS. Sean is a visiting lecturer at Yale Law School, where he teaches cybersecurity and where he founded the Privacy Lab initiative. He's the chief technology officer at Panquake, which recently launched its link-cleaning, shortening and archiving service. Sean developed a Web3 and blockchain class at Yale, as well as hacking and cybersecurity at the Lawfare Institute, and was founding head tutor of Oxford University's Cybersecurity for Business Leaders program. Sean's expertise has appeared in The New York Times, Forbes magazine, Bloomberg, Popular Science, the Associated Press, NBC News, the Financial Times, Wired, and The New Yorker. Sean, thanks so much for being with us. I've been looking forward to this conversation for a long time. I'm so happy to be here. It's a pleasure. I think the best place to start is with the very basics of
3:34 pm
AI. Can you tell us, what exactly is AI, when did technologists begin to develop it, and to what end? Sure. So artificial intelligence, or this thing that we're now so used to calling AI, is really something that goes back to the 1950s. When I think about the origins of AI, I go all the way back to Alan Turing. Turing was the big pioneer of computing systems, the things we now call computers and computer software and so on, and the big pioneer from that period who really designed the way in which computers, you know, so-called "think." In 1950 he had a paper called "Computing Machinery and Intelligence," and I encourage folks to go back to that paper; it's very easy to read. He asks in that paper, can machines think? And then the important part is, he says he believes the question to be, quote,
3:35 pm
"too meaningless to deserve discussion." Okay? And what he sort of means by that is, you know, we don't talk about whether or not submarines swim, right? Or whether airplanes really fly in the way that birds fly. We just sort of use these analogies for technology without getting all philosophical about it. And computers certainly can do some amazing tasks, but they are not human brains, right? And I think that's a really important point that gets lost in all of this discussion about artificial intelligence, and certainly in the branding around calling it "intelligence." Computers are good at some kinds of problems; they literally will never solve other problems that humans can solve. And humans are also very good at certain tasks that computers are very bad at. So we can't crunch large prime numbers, right, or do these complex mathematical problems the same way computers can. But we see, for example, faces in everything, right?
3:36 pm
When we look at a light socket, and so on. And that's because we have biological systems in our brains, structures that allow us to do that very easily, very simply. Now, fast forward to the 1970s and 1980s, and we started having more miniaturization of computer components. This is the era where artificial intelligence as a concept really takes off, and there was a lot of research done around it in that period, and also periods of what they call "AI winters." So if you look into the history, you'll find there was a cooling on research and funding for academic research in artificial intelligence over a number of periods, on and off through the nineties and so forth. And what we really now have as AI, you know, ChatGPT, OpenAI, all these other tools, these things are really built on the corpus of information that the internet can feed into what's
3:37 pm
called a large language model, right, these LLMs. And that's what we're really calling AI. So it's something that was born in the early 2000s and mid-2010s. We can go into the details about who's funding it and why they're funding it and so on, but obviously it's making a big impact on our world. I had no idea that it's been around so long. Okay, Sean, the technology is developing and advancing at lightning speed, it seems. Is it possible to rein it in? Sure. So, you know, AI regulation is something my colleagues at Yale and other folks have been working on for a really long time. The writing was sort of on the wall that these kinds of systems, AI, LLMs, etc., would be used in some contexts, you know, even in the legal system, but certainly in governing the rules and the processing of information in this huge global information network we have called the internet, right?
3:38 pm
So there's been a lot of work on regulation, certainly in the United States, but nothing really solid right now. I do think regulation has a place, but anybody who's been involved in drafting legislation knows how difficult it is. You know, by the time these regulations go through all the pipelines and through all the structures that we have, the bureaucracies and so on, they've shifted in such a way that they're often not very useful. To take an example, the GDPR, the General Data Protection Regulation in Europe: that's had a real impact on the landscape of privacy worldwide. And the version of that that we sort of emulated in the United States is the CCPA, the California Consumer Privacy Act, and that act is sort of Swiss cheese, if you will, as I like to call it. It removes some of the strong controls that I think we need, and I'm seeing that a little bit with AI regulation already. That said, I think really,
3:39 pm
you know, grassroots movements, especially local movements, petitioning municipal governments and getting things like bans on facial recognition technology, which have been very successful in the United States, those sorts of things are powerful, and they do have a real-world impact. Certainly, getting your local town or city to stop scanning your face from street cameras is a good thing as far as I'm concerned, and that's something that can be done at the local regulatory level. At the larger level, I'm not so convinced, certainly when we have no control over entities like these private entities, Big Tech and so on, Amazon Ring cameras and everything else. That sort of thing can't be so easily controlled where our information systems are concerned. We don't have control of them now anyway, right, the Facebooks and the Googles and so on. So this is just another layer that is unfortunately quite problematic, and I don't think regulation is going to have a really big impact there. Let's get into some of the more complicated aspects of
3:40 pm
AI. We hear terms that I have no understanding of, like predictive modeling, adaptive control, generative AI. Can you tell us what these terms mean, and what the uses are for these technologies? Sure. So what we're most familiar with, and certainly what a lot of people got really familiar with in the last couple of years, is generative AI: ChatGPT and these other prompt-based systems, you know, the new helper system Copilot that Microsoft is shoving into Microsoft Office and GitHub and all these things. I call them nagging systems, or nagware, and frankly that's actually being kind of polite to them. But those systems are generative AI. So they're provided with a prompt or some other starting point by the user, and then they create text or images or some other content that they generate. So you take a huge amount of data,
3:41 pm
you shove it into a learning model, and then statistically, you know, the algorithm is able to figure out, and this is a huge oversimplification, a certain set of rules, and basically guess what you're looking for based upon the inputs that you give it. Thank you, Sean. We're going to take a short break, and when we come right back we're going to speak with Sean about what to expect in the near future from developments in AI, and how we protect ourselves and our information. Stay tuned.
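To make the statistical "guessing" Sean just described concrete at toy scale, here is a bigram model, a primitive ancestor of today's large language models. This sketch is nothing like a production system (those use neural networks trained on vast corpora), but the core idea, predicting the next token from what tended to follow it in the training data, is the same; the tiny corpus is hypothetical:

```python
import random
from collections import Counter, defaultdict

# A hypothetical, tiny "training corpus".
corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat chased the dog across the rug").split()

# Learn, for each word, which words tended to follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt: str, length: int = 8) -> str:
    """Extend a one-word prompt by repeatedly sampling a likely next word."""
    word, output = prompt, [prompt]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break  # dead end: this word never appeared mid-corpus
        words, counts = zip(*options.items())
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug the cat chased"
```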
3:42 pm
[Commercial break]
3:43 pm
Welcome back to The Whistleblowers. I'm John Kiriakou. We're speaking with Sean O'Brien. He's a lecturer in law at Yale Law School with expertise in cybersecurity, privacy, and mobile device forensics. He's the assistant director for technology at Yale's Office of International Students and Scholars and a PhD candidate in law at the University of South Africa. He founded Yale's Privacy Lab in 2017, an initiative of the Information Society Project at Yale Law School. Thanks again for being with us, Sean. Happy to be here. It's been an amazing conversation. One of the major issues related to AI in the immediate term is the problem of deepfakes. These are videos that look to be 100 percent legitimate but are actually completely fake. Many of them are very crude, and we see those every day on Facebook and TikTok and elsewhere. But some of them are actually quite sophisticated. How can we stop people from creating videos that make it look like, for example,
3:44 pm
world leaders are admitting to crimes on camera, or taking harmful policy positions, or making themselves look foolish? So unfortunately, deepfakes are really problematic, and I'm very glad you brought that up. It's certainly a topic that will be more and more prominent as, for example, the election cycle here in the U.S. heats up; I saw that very much the other day with the debate. But at any rate, the first thing we need to think about, and the first lesson we should learn from this, is where these deepfakes came from and how the software was trained to do this sort of thing. The way that happened was, large databases were built of people's facial information. Academics, private researchers, photos scraped off of the internet, many, many millions of them. You know, all of the surveillance cameras that have sort of just suddenly appeared around us
3:45 pm
everywhere, certainly in places like airports and so on, but well beyond that, the grocery store even. These cameras have been collecting data about our faces, the relationships between different parts of our faces, getting very, very good at learning, the same way I was talking about these other models learning, you know, like ChatGPT learning to talk back to us or to retain information in a way that's convincing. These models were learning how to create faces, how to understand faces, how to hypothetically recognize faces, and we can get into how biased that technology has been. But certainly, deepfakes have proven that you can masquerade as someone else and create a video, and certainly create propaganda and all the types of mischief around that. But I think the important thing to remember here first, and it's good news, at least for now, is that we do have software that can do
3:46 pm
a reasonably good job of detecting these kinds of things, right? The same way that software is reasonably good at detecting a big block of text that came out of ChatGPT, we do have software that can recognize, okay, the edges on the face here, the way the compression artifacts are in this video; we can tell it's been edited, that it's not the original video, that there's a problem with mismatched colors and resolution and lighting and all that sort of thing. But really, when we get down to it, this conversation is about the information system, right? It's about, okay, do we allow information to flow through this beautiful internet that we have? Do we allow people to look at things that are difficult conversations, but still be able to learn when they're very young, when they're getting older and becoming adults, and certainly when they're taking action in the world? And some of these folks become rather powerful, able to take quick actions that affect a lot of people. Are these individuals knowledgeable?
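One of the simpler forensic signals Sean mentions, mismatched compression artifacts, can be probed with a classic technique called error level analysis (ELA). The sketch below, using the Pillow imaging library, re-saves a frame as JPEG and diffs it against the original; regions pasted in from elsewhere often "glow" because their compression history differs from the rest of the image. Real deepfake detectors are far more sophisticated, and ELA alone is easy to fool, so treat this purely as an illustration; the file name is hypothetical:

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Diff an image against a freshly re-compressed copy of itself.
    Bright areas changed more under re-compression, which can flag
    regions whose compression history differs from the rest."""
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    return ImageChops.difference(original, resaved)

# Hypothetical usage on a suspect video frame exported as an image:
# ela = error_level_analysis("suspect_frame.jpg")
# ela.show()
```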
3:47 pm
Have they really learned how to distinguish between something that's real and what's not real? Unlikely. Are they going to pause and take a deep breath before they take some kind of action, before they just type something up at the keyboard? I always tell folks, when you're angry, when you're upset, whether it's a rude email or anything else, certainly Twitter, X, whatever it's called: do not just start typing at the keyboard and regurgitating onto the internet. That stuff does live forever, or certainly can live forever. Another big problem, at least here in the United States, is the creation of deepfake nudes, where people, in many cases children, are made to appear to be nude in videos. In fact, these are computer-generated bodies attached to actual children, or the entire image is computer-generated, and it's a crime. So how do we protect ourselves and our children from this kind of activity?
3:48 pm
Sure, sure. So, I mean, look, the attention economy is a bad enough problem with deepfakes, but I'm glad you touched on this, because this is obviously extremely serious. So the acronym that's been used for this sort of content is CSAM, or child sexual abuse material. It's sort of a law enforcement acronym, but it's become what folks have been using, and it does encompass this kind of thing, you know, so-called revenge porn and child abuse material where they're putting someone's face on a body, all this awful stuff. So it's obviously extremely problematic; it's obviously something that most folks object to quite strongly. I do think it's something that will exist in some form, as humans, you know, have the kinds of issues that they have. Now, I can give an example in the real world where,
3:49 pm
you know, one of these big image-generation software models, an AI piece of software called Stable Diffusion, did actually go through their data set and start removing content that could be objectionable, including CSAM. I don't know how they detected what was a CSAM image, but supposedly they removed it. And those sorts of controls are hit or miss, and we can have societal conversations around them; we certainly should. But of course, you can't put the genie back in the bottle, right? This is really a whack-a-mole game, not that different than trying to stop malware and ransomware and all kinds of bad stuff on the internet that can infect computers and disrupt networks. So you're going to have a hard time completely removing this stuff.
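Sean says he doesn't know how the detection was done. One common family of techniques for scrubbing known-bad images from a data set, offered here purely as an illustration rather than as Stable Diffusion's actual method, is perceptual hashing, the approach behind systems like PhotoDNA: each image is reduced to a short fingerprint that survives resizing and re-compression, then compared against a blocklist of fingerprints of known material. A minimal average-hash sketch with Pillow; the blocklist and file names are hypothetical:

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to an 8x8 grayscale thumbnail; each bit records whether a
    pixel is brighter than the mean. Similar images get similar hashes."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes (lower = more similar)."""
    return bin(a ^ b).count("1")

# Hypothetical filtering pass over a training data set:
# blocklist = {average_hash(p) for p in known_bad_images}
# kept = [p for p in candidate_images
#         if all(hamming(average_hash(p), h) > 5 for h in blocklist)]
```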
3:50 pm
We've been talking about militaries and their development of AI, but what role should academia have? How can these outside substantive experts be helpful? I think academia could have, and should have, the role that it has in other situations, where academics reflect on the societies around them, provide expertise, as you said, provide research, actually invest time, energy, and money into researching these things, and then provide that information to the public for free so they can access it, which often doesn't happen with these journal systems, right? And also in a form that the public can understand and can actually have a real dialogue about. Academics should be courageous. They tend not to be, right? One of the things that happens in academia is you get entrenched and you get complacent, and everybody sort of thinks, you know, "there's going to be some day when I'm so high up on this ladder in academia that I can speak my mind." And certainly some places and some folks, you know, get away with quite
3:51 pm
a lot, so to speak; they are able to speak their minds, able to do the research that they'd like to do. But I don't have to tell you, there are certainly a lot of examples of academics being removed or put to the sideline or blacklisted in some way because of their views, because of what they said, because of the type of research they were doing, even if they have very rigorous standards. So I think academia's role should be highlighting the issues, and I do know some very courageous academics who have been working on this. I mentioned some of the regulatory stuff; there's a lot of other research out there. It's certainly very fertile ground for a conversation. I want to see the conversation go beyond the ivory tower, you know, beyond the university. I really do believe that we need to have real conversations with the public, with the communities surrounding these universities. I'm a native New Havener and someone who's at Yale, right? So I really see the issues, the clashes, that can happen between the
3:52 pm
general public and the university, a very powerful and rich university, and the community surrounding that university. So we need to have those conversations, and we need to be serious about it. The role academia has traditionally played, unfortunately, with AI is actually building these systems and sort of unquestioningly saying, hey, we'll worry about the ethics later. There's been a lot of that. And I'm not just talking about so-called artificial intelligence or large language models and so on; I'm talking about these big data surveillance systems. There's been a lot of private investment in universities, and certainly university endowments are wrapped up in Big Tech, the Facebooks, the Googles, etc. And there's been a lot of work on how to analyze data on individuals, on people, and study them with these automated and semi-automated, huge surveillance systems. And AI
3:53 pm
being tacked onto that is problematic. You know, the facial recognition technology, those systems that are now so commonplace: a lot of that work was done in academia, and done with data sets where they were taking pictures of people's faces at universities. And that early work, that information, is still a part of the big data sets that now power OpenAI and all these other things, you know, Stable Diffusion, Midjourney, all these other models. So I'm not sure what academia's role will be, but I hope that we can turn the corner here and have a serious adult conversation, and that it's not just going to be about money, money, money. And finally, Sean, I want to get back to this crazy story that I told in the introduction: tech blogger Lucas Rizzotto and the microwave oven he programmed that later tried to kill him. Well, we've all seen the movie I, Robot. The prime directive is supposed to be to not harm
3:54 pm
human life. Is that just in Hollywood? Is AI something that should cause us to fear for our safety if it's not regulated? So, the so-called Internet of Things, right, whether we use that term, IoT, anymore in the future, I don't know; it seems to be dying. But, you know, these devices which are internet-connected, and it's almost always there that they're connected, have gotten "smarter" than they should be: microwaves that are giving you recipes for some reason. These kinds of things are problematic. I don't see them as killer robots, for some of the reasons I stated earlier. I think the real problems we have to worry about, from a security standpoint, are the fact that these systems are really difficult to patch and upgrade. You know, they're not based on open-source software that can be easily modified and observed. They're baked into firmware, baked into chipsets, locked down and sandboxed in such
3:55 pm
a way that even the manufacturer will have a hard time pushing patches and security updates to these devices. So they tend to be abused, as we've certainly demonstrated in some of our classes and so on. They can be used in these huge botnets where they really do damage on the internet, or they can attack healthcare systems, they can impact all kinds of systems, and they can deliver ransomware. We're seeing a huge issue, which affected me directly, with car dealerships in the U.S. You know, I had to go get a used car because my other one is a piece of crap, frankly, and it was hours and hours of faxing and paperwork, and suddenly going back to the old systems, and people literally walking down the street to check in with folks, because those computer systems were gone. And that's car dealerships; we're not talking about things that are much more vital and can have a much bigger impact on people's livelihood and health and safety and so on. I brought up hospitals: we know about the big WannaCry issues, and that was not quite
3:56 pm
a decade ago, but, you know, still, we've been living with ransomware for nearly a decade now. And these backdoors, frankly, and obviously we should bring this back full circle here to some of the real central issues, these backdoors are built and funded by entities like the NSA, right? They're made inside the CIA. They're using military contractors, government contractors, who are building these systems to have backdoors in devices like microwaves so that they can be exploited and used as so-called cyber weapons. So we know ransomware comes from an NSA backdoor that was purposefully inserted into Microsoft software, right? And we know that it was patched quietly by the NSA and Microsoft, which basically tried to cover that up in a very clunky and clumsy way. But they tried to, and now we're all living with that. It's having very serious
3:57 pm
implications for everybody, not just for industry, but for people's real livelihoods in their homes, right? So the microwave is one thing. This "prime directive" stuff, you know, I mean, these aren't thinking animals or humans. You can build ethics in, in a way; you can build certain rule sets so that machines try to not do certain things. But of course machines can be hacked and modified, and rules can be changed. And it's something, obviously, that I teach: when we have governments and these extra-governmental entities purposefully building and storing exploits and vulnerabilities and trying to break into systems in so-called cyber war, these devices are going to continue to be dangerous. It doesn't matter what steps we program them with if they're purposefully broken by others and used, you know, in some malicious way. Thank you, Sean,
3:58 pm
for explaining to us this marvelous and sometimes scary world of AI. Just about everybody involved in the development of AI supports some form of government intervention and regulation. Sam Altman, the CEO of OpenAI, has said that AI could design biological pathogens and could hack into computer systems. Elon Musk has said that AI's development is moving too quickly to be healthy for society. The reality is that if Elon Musk is wrong about AI and we regulate it, nobody will care. But if Elon Musk is right about AI and we don't regulate it, everybody in the world will care, and by then it might be too late. I'd like to thank our guest Sean O'Brien for being with us today, and thank you to our viewers for joining us for another episode of The Whistleblowers. I'm John Kiriakou. Please follow me on social media at JohnKiriakou. We'll see you next time.
3:59 pm
4:00 pm
[Start of the 4:00 pm news hour] Moscow points to potential U.S. involvement ahead of the upcoming election. Senegal summons Ukraine's ambassador to explain Kiev's support of the rebel forces fighting Mali's government. Thousands gather at Hagia Sophia Square chanting "Free Palestine," continuing the solidarity with the Palestinian people, as a huge rally in Istanbul draws massive crowds onto the streets and anger continues over the Israeli killing.