tv The Whistleblowers RT August 3, 2024 7:30am-8:01am EDT
7:30 am
... observers from many countries were here, among them managers and lawyers from Brazil. And all the international observers attest that we had free elections, held without any kind of problem during the ballot process. And I guess they are trying to create a situation, a coup or something like that. We'll be back with a fresh look at what's happening around the world in just about 30 minutes, but don't go anywhere, because John Kiriakou is up next with The Whistleblowers.
7:31 am
Almost nobody on Earth can get through a single day of news without reading or hearing of some new development related to artificial intelligence. And if you follow the business pages, much of that news is about AI and the companies like Nvidia, Apple, Alphabet, Meta, Taiwan Semiconductor and others that actually develop it. Perhaps you've even experimented a little with AI. Maybe it has helped you to write a paper or a letter or a thank-you note. But what exactly is artificial intelligence? What is it supposed to do, and how exactly is it going to change our lives? I'm John Kiriakou. Welcome to The Whistleblowers.
7:32 am
The first time I ever heard of a workable form of artificial intelligence was in 2022, when I read a fascinating and frightening article about an American tech blogger named Lucas Rizzotto. Rizzotto decided to recreate an imaginary childhood friend that he had had years earlier, using AI. The friend was a microwave oven that he called Magnetron. To do this, he purchased a microwave and merged it with GPT-3, a text-based artificial intelligence system developed by the company OpenAI. GPT-3 mimics human language, creating original sentences when prompted. To populate its brain, OpenAI fed GPT-3 vast amounts of data, everything from Wikipedia to newspaper articles to Reddit posts. Rizzotto then programmed it with a backstory,
7:33 am
which he described as a 100-page book detailing every moment of Magnetron's imaginary life. He then programmed it to turn on when he gave it a command, and he began conversing with it. He later described those conversations as both beautiful and eerie. Why? Because the microwave actually sounded like a real person, and it sometimes had bursts of anger. At one point the microwave demanded that Rizzotto put his head inside so that the microwave could kill him. When Rizzotto pretended to do so, the microwave actually turned itself on. That was two years ago, and two years is a lifetime in the world of artificial intelligence. Now many of us who are not tech wizards are struggling with news and discussions about AI and its relation to things like predictive modeling, adaptive control, generative AI, even neural networks and something called hive mind. And that doesn't even touch on things like deepfakes, which already are causing a stir in elections around the world.
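For readers curious how a persona like Magnetron gets wired up, here is a minimal sketch of the general technique, persona prompting against a text-completion model. It assumes the older openai Python SDK (pre-1.0) and a GPT-3-era completion endpoint; the model name, the condensed backstory, and the helper function are illustrative, not Rizzotto's actual code.

```python
# A minimal sketch of persona prompting against a GPT-3-style completion API.
# Assumes the pre-1.0 "openai" Python SDK; model name and backstory are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# A condensed stand-in for the "100-page book" of backstory Rizzotto described.
BACKSTORY = (
    "You are Magnetron, a microwave oven and the childhood imaginary friend "
    "of your owner. You remember every moment of your shared past, and you "
    "speak in the first person, warmly but sometimes with flashes of anger."
)

def ask_magnetron(user_line: str) -> str:
    """Send one line of conversation, prefixed by the persona backstory."""
    prompt = f"{BACKSTORY}\n\nOwner: {user_line}\nMagnetron:"
    response = openai.Completion.create(
        model="text-davinci-002",  # a GPT-3-era model
        prompt=prompt,
        max_tokens=100,
        temperature=0.9,           # higher temperature -> more erratic replies
        stop=["Owner:"],           # stop before the model invents the next turn
    )
    return response.choices[0].text.strip()

print(ask_magnetron("Magnetron, do you remember me?"))
```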
7:34 am
How big of an impact is AI going to have on our daily lives? We'll try to get an answer to that question from our next guest. Sean O'Brien is a subject-matter expert in cybersecurity, privacy, Web3, and free and open-source software, called FOSS. Sean is a visiting lecturer at Yale Law School, where he teaches cybersecurity and where he founded the Privacy Lab initiative. He's the chief technology officer at Panquake, recently launching Panquake's link-cleaning, shortening and archiving service. Sean developed a Web3 and blockchain class at Yale, as well as Hacking and Cybersecurity at the Lawfare Institute, and was founding head tutor at Oxford University's Cybersecurity for Business Leaders programme. Sean's expertise has appeared in The New York Times, Forbes, Bloomberg, Popular Science, the Associated Press, NBC News,
7:35 am
the Financial Times, Wired, and The New Yorker. Sean, thanks so much for being with us. I've been looking forward to this conversation for a long time. I'm so happy to be here. It's a pleasure. I think the best place to start is with the very basics of AI. Can you tell us: what exactly is AI? When did technologists begin to develop it, and to what end? Sure. So artificial intelligence, or this thing that we're now so used to calling AI, is really something that goes back to the 1950s, believe it or not. When I think about the origins of AI, I like to go all the way back to Alan Turing. Turing for us was the big pioneer of computing systems, the things we now call computers and computer software and so on, and the big pioneer from that period who really designed the way in which computers, you know, so-called think, right? And in 1950 Turing had
7:36 am
a paper called "Computing Machinery and Intelligence," and I encourage folks to go back to that paper; it's very easy to read. He asks in that paper, can machines think? And then the important part is, he says he believes the question to be, quote, "too meaningless to deserve discussion." Okay? And what he sort of means by that is, you know, we don't talk about whether or not submarines swim, right? Or whether airplanes really fly the way that birds fly, right? We just sort of use these analogies for technology without getting all philosophical about it. And computers certainly can do some amazing tasks, but they are not human brains, right? And I think that's a really important point that gets lost in all this discussion about artificial intelligence, and certainly the branding around calling it intelligence, right? Computers are good at some kinds of problems; they literally will never solve other problems that humans can solve, right?
7:37 am
And humans are also very good at certain tasks that computers are very bad at. So we can't crunch large prime numbers, right, or do these complex mathematical problems the same way computers can. But we see, for example, faces in everything, right, when we look at a light socket or so on. And that's because we have biological systems in our brain, basically those kinds of structures, that allow us to do that very easily, very simply. Now fast-forward to the 1970s and 1980s, and we started having more miniaturization of computer components. And this is the era, really, where artificial intelligence as a concept sort of takes off, and there was a lot of research done around it in that period, and also periods of what they call AI winters. So if you look into the history, you'll find there was sort of a cooling on research and funding, certainly for academic research in artificial intelligence, over a number of periods, you know,
7:38 am
on and off through the nineties and so on and so forth. And what we really now have as AI, you know, ChatGPT, OpenAI, all these other tools, these things are really, you know, built on the corpus of information that the internet can feed into what's called a large language model, right, these LLMs. And that's what we're really calling AI. So it's something that was born sort of in the early 2000s, mid-2010s. And we can go into the details about who's funding it and why they're funding it and so on, but obviously it's making a big impact on our world. I had no idea that it's been around so long. Okay, Sean, the technology is developing and advancing at lightning speed, it seems. Is it possible to rein it in? Sure. So, you know, AI regulation is something my colleagues at Yale and other folks have been working on for a really long time. The writing was sort of on the wall that these kinds of systems,
7:39 am
AI, LLMs, etc., would be used in some contexts, you know, even in the legal system, but certainly in governing the rules and the processing of information in this huge global information network we have called the internet, right? So there's been a lot of work on regulation, certainly in the United States, but nothing really solid right now. I do think regulation has a place, but anybody who's been involved in drafting legislation, and I've played my role in that a little bit, knows it's a difficult thing. You know, by the time these regulations go through all the pipelines and through all the structures that we have, the bureaucracies and so on, they've shifted in such a way that they're often not very useful. To take an example, right, GDPR, the General Data Protection Regulation in Europe, you know, that's had a real impact on the landscape of privacy worldwide. And the version of that that we sort of emulated in the United States is the CCPA, right,
7:40 am
the California Consumer Privacy Act, and that act is sort of, like, GDPR-lite, as I like to call it. It removes some of the strong, you know, controls that I think GDPR had. And I've seen that a little bit with the AI regulation already. So, that said, I think really, you know, grassroots movements, especially local movements, you know, petitioning municipal governments and, you know, getting things like bans on face recognition technology, which have been very successful in the United States, you know, those sorts of things are powerful and they do have a real-world impact. Certainly, you know, getting your local town or city to stop scanning your face from street cameras, you know, is a good thing as far as I'm concerned. And that's something that can be done at the regulatory level. At the larger level, I'm not so convinced, certainly when we have, you know, no control over entities like these private entities, big tech and so on, Amazon Ring cameras and everything else. You know,
7:41 am
that sort of thing isn't something that can be so easily controlled where our information systems are concerned. We don't have control over them now anyway, right, the Facebooks and the Googles and so on. So this is just another layer that is unfortunately quite problematic, and I don't think regulation is going to have a really big impact there. Let's get into some of the more complicated aspects of AI. We hear terms that I have no understanding of, like predictive modeling, adaptive control, generative AI. Can you tell us what these terms mean and what the uses are for these technologies? Sure. So what we're most familiar with, and certainly what a lot of people got really familiar with in the last couple of years, is generative AI: ChatGPT and these other prompt-based systems, you know, the new helper system Copilot that Microsoft is putting into, you know, shoving into Microsoft Office and GitHub and all these things. I call them nagging systems, but that's actually being kind of polite to them. But those systems are generative
7:42 am
AI. So they're provided with a prompt or some other starting point by the user, and then they create text or images or some other content that they generate. So you take a huge amount of data, you shove it into a learning model, and then statistically, you know, the algorithm is able to figure out, and this is a huge oversimplification, but it's able to figure out a certain amount of rules and guess, basically, what you're looking for based upon the inputs that you give it. Thank you, Sean. We're going to take a short break, and when we come right back, we're going to speak with Sean about what to expect in the near future from developments in AI and how we protect ourselves and our information. Stay tuned.
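As a concrete illustration of the "statistically guess what you're looking for" idea Sean describes above, here is a toy sketch: a bigram model that learns word-to-word transition counts from a tiny corpus and then generates text from a prompt word. Real LLMs use neural networks over vastly more data; treat this purely as a minimal analogy, with the corpus text invented for the example.

```python
# A toy "generative" model: learn bigram (word -> next word) counts from a
# corpus, then statistically guess each next word. A crude analogy for what
# large language models do at enormous scale with neural networks.
import random
from collections import defaultdict

corpus = (
    "the microwave sounded like a real person and the microwave "
    "sometimes had bursts of anger and the person was afraid"
).split()

# Record every observed continuation of each word; repeats make frequent
# continuations more likely to be sampled.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(prompt: str, length: int = 8) -> str:
    """Start from a prompt word and repeatedly sample a likely next word."""
    word, output = prompt, [prompt]
    for _ in range(length):
        candidates = transitions.get(word)
        if not candidates:  # dead end: no observed continuation
            break
        word = random.choice(candidates)  # sample proportionally to counts
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the microwave sounded like a real person and"
```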
7:43 am
Take a fresh look around. There's a life kaleidoscopic. Is it just a shifted reality, distortion by power, division with no real alternatives, fixtures designed to simplify that will only confuse? Who really wants a better world, and is it just for a chosen few? Fractured images presented to us, but can you see through their illusion, going underground? Welcome back to The Whistleblowers. I'm John Kiriakou. We're speaking with Sean O'Brien. He's
7:44 am
a lecturer in law at Yale Law School with expertise in cybersecurity, privacy and mobile device forensics. He's the assistant director for technology at Yale's Office of International Students and Scholars, and a PhD candidate in law at the University of South Africa. He founded Yale's Privacy Lab in 2017, an initiative of the Information Society Project at Yale Law School. Thanks again for being with us, Sean. Happy to be here. It's been an amazing conversation. One of the major issues related to AI in the immediate term is the problem of deepfakes. These are videos that look to be 100 percent legitimate but are actually completely fake. Many of them are very crude, and we see those every day on Facebook and TikTok and elsewhere. But some of them are actually quite sophisticated. How can we stop people from creating videos that make it look like, for example, world leaders are admitting to crimes on camera, or taking harmful policy positions, or making themselves look foolish? So unfortunately,
7:45 am
deepfakes are really problematic. I'm very glad you brought that up. Certainly it's a topic that will be more and more prominent as, you know, for example, the election cycle here in the U.S. keeps up. We saw that very much the other day with the debate. But at any rate, you know, the first thing we need to think about, and the first lesson we should learn from this, is where these deepfakes came from and how the software was trained to do this sort of thing. And the way that happened was, large databases were built of people's facial information, right? Academics, you know, private researchers, um, you know, photos scraped off of the internet, many, many millions of them. You know, all of the surveillance cameras that have sort of just suddenly appeared around us everywhere, certainly in places like airports and so on, but well beyond that, you know, the grocery store
7:46 am
even. These cameras have been collecting data about our faces, the relations between different parts of our faces, you know, getting very, very good at learning, the same way I was talking about these other models learning, you know, like ChatGPT learning to talk back to us or to retain information in a way that's convincing. These models were learning how to create faces, right, how to understand faces, how to hypothetically recognize faces, although we can get into how biased that technology has been. But certainly, you know, deepfakes have proven, you know, that you can masquerade as someone else and create a video, and certainly create propaganda and all other types of mischief around that. But I think the important thing to remember here first, and it's good news, at least for now, is that we do have software that can do a reasonably good job of detecting these kinds of things, right? The same way that software is reasonably good at detecting
7:47 am
a big block of text that came out of ChatGPT, right? We do have software that can recognize, okay, the edges on the face here, the way the compression artifacts are in this video; we can tell it's been edited, that it's not the original video, that there's a problem with a mismatch of colors and resolution and lighting and all that sort of thing, right? But really, when we get down to it, this conversation is about the information system, right? It's about, okay, do we allow information to flow through this beautiful internet that we have? Do we allow people to literally, you know, look at things that are difficult conversations, but still be able to learn, when they're, you know, very young, when they're getting older and becoming adults, and certainly when they're taking action in the world? And some of these folks become rather powerful and are able to make quick decisions that affect a lot of people. Are these individuals, you know, knowledgeable? Have they really learned how to distinguish between what's real and what's not real, what's likely and unlikely?
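One classic forensic heuristic behind the kind of detection software Sean mentions is error level analysis: re-compress an image at a known JPEG quality and look for regions that respond differently, which often betray pasted-in or edited areas. A minimal sketch using the Pillow imaging library follows; the file name is a placeholder, and real deepfake detectors combine many stronger signals than this.

```python
# Error level analysis (ELA): a classic heuristic for spotting edited regions.
# Recompressing a JPEG at a fixed quality changes pristine and tampered areas
# differently; unusually bright spots in the difference image deserve scrutiny.
# Requires Pillow (pip install Pillow). "suspect.jpg" is a placeholder path.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Return the per-pixel difference between an image and its re-save."""
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)  # re-compress in memory
    buffer.seek(0)
    resaved = Image.open(buffer)
    return ImageChops.difference(original, resaved)

diff = error_level_analysis("suspect.jpg")
# Scale the difference so faint artifacts become visible, then inspect by eye.
max_channel = max(diff.getextrema(), key=lambda lo_hi: lo_hi[1])[1] or 1
diff.point(lambda value: min(255, value * 255 // max_channel)).show()
```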
7:48 am
Are they going to pause and take a deep breath before they take some kind of action, right, before they just kind of type something up on the keyboard? I always tell folks, you know, when you're angry, when you're upset, you know, as a rule for email and everything else, certainly, you know, Twitter, X, whatever it's called, do not just start typing at the keyboard and regurgitating onto the internet. That stuff does live forever, or certainly can live forever. But another big problem, at least here in the United States, is the creation of deepfake nudes, where people, in many cases children, are made to appear to be nude in videos. In fact, these are computer-generated bodies attached to actual children, or the entire image is computer-generated, and it's a crime. So how do we protect ourselves and our children from this kind of activity? Sure, sure. So, I mean, look, the attention economy is a bad enough problem with deepfakes, but I'm glad you sort of touched on this, because this is obviously extremely
7:49 am
serious. Um, so the acronym that's been used for this sort of content is CSAM, or child sexual abuse material. It's sort of a law enforcement acronym, but it's become what folks have been using, and it does encompass this kind of thing, you know, so-called revenge porn or, you know, child material where they're, you know, putting someone's face on a body, and this awful stuff, right? It is obviously extremely problematic. It's obviously something that most folks object to quite strongly. I do think it's something that will exist in some form as long as humans, you know, have the kinds of issues that they have. Now, I can give an example in the real world where, you know, one of these big image-generation software models,
7:50 am
a piece of software called Stable Diffusion, actually went through their data set and started removing content that could be objectionable, including CSAM. I don't know how they detected what was CSAM imagery, but supposedly they removed it. And those sorts of controls, you know, are hit or miss, and we can have, you know, societal conversations around them; we certainly should. But of course, you know, you can't put the genie back in the bottle, right? This is really a whack-a-mole game, not that different from trying to, you know, stop malware and ransomware and, you know, all kinds of bad stuff on the internet that can infect computers and disrupt networks. And you're going to have a hard time completely removing this stuff. We've been talking about militaries and their development of AI, but what role should academia have? How can these outside substantive experts be helpful? So I think academia could have the role, and should have the role, that it should have in other situations, you know,
7:51 am
where they need to reflect on the societies around them, you know, provide expertise, as you said, you know, provide research and actually invest time, energy and money, of course, into researching these things, and then provide that information to the public for free so they can access it, which often doesn't happen with these journal systems, right? And also in a form that the public can understand and can actually have a real dialogue about. The academics should be courageous. They tend not to be, right? One of the things that happens in academia is you get entrenched and you get complacent, and everybody sort of thinks, you know, there's going to be some day when I'm so high up, you know, on this sort of ladder in academia, that I can speak my mind. And certainly some places, you know, and some folks, you know, get away with quite a lot, so to speak, and are able to speak their minds, are able to do the research that they'd like to do. But I don't have to tell you, there are certainly
7:52 am
a lot of examples of academics being, you know, sort of removed or sidelined or blacklisted in some way, you know, because of their views, because of what they've said, because of the type of research they were doing, even if they had very strict standards. So I think academia's role should be highlighting the issues, and I do know some very courageous academics who have been working on this. I mentioned some of the regulatory stuff; there's a lot of other research out there. It's certainly very fertile ground for a conversation. I want to see that conversation go beyond the ivory tower, you know, and being at the university, of course, you know, I mean, I really do believe that we need to actually have real conversations with the public, with the communities surrounding these universities and so on. You know, I'm a native New Havener and someone who's at Yale, right? So I really see the issues that can happen, the clashes that can happen, between the general public and a very powerful and rich university, you know,
7:53 am
and the general public and the community surrounding that university. So we need to have those conversations, and we need to be serious about it, because the role academics have traditionally played, unfortunately, with AI is actually building these systems, sort of unquestioningly, saying, hey, we'll worry about the ethics later. There's been a lot of that, and I'm not just talking about so-called artificial intelligence or large language models and so on. I'm talking about these big sort of data surveillance systems. There's a lot of private investment in universities, and certainly university endowments are wrapped up in big tech, the Facebooks, Googles, etc. And, you know, there's been a lot of work on how to analyze data on individuals, on people, and study them, you know, through these sort of automated, semi-automated, huge surveillance systems, and AI being tacked onto that is problematic. Um, you know, the facial recognition technology, those
7:54 am
systems that are now so commonplace. A lot of that work was done in academia, and done with data sets where they were taking pictures of people's faces at universities. And that early work, that information, is now still a part of these big data sets that, you know, power OpenAI and all these other things, you know, Stable Diffusion, Midjourney, all these other models. So I'm not sure what academia's role will be, but, you know, I hope that we can turn the corner here, and we can have a serious adult conversation, and it's not just going to be about, you know, money, money, money. And finally, Sean, I want to get back to this crazy story that I told in the introduction, about blogger Lucas Rizzotto, who programmed the microwave oven that later tried to kill him. Well, we've all seen the movie I, Robot. The prime directive is supposed to be to not harm human life. Is that just in Hollywood? Is AI something that should cause us to fear for our safety if it's not regulated? So, the so-called Internet of Things,
7:55 am
right, whether we use that term, IoT, anymore in the future, I don't know, it seems to be dying, but, you know, these devices which are internet-connected, and they're almost always internet-connected, but then also, you know, way, quote-unquote, smarter than they, you know, should be, microwaves that are giving you recipes for some reason, um, you know, these kinds of things, um, are problematic. I don't see them as killer robots, for some of the reasons I stated earlier, and I think the real problems we have to worry about from sort of a security standpoint are the fact that these systems are really difficult to patch and upgrade. So, you know, they're not based on, you know, open-source software that can be, you know, easily modified and observed. Where they are, they're baked into firmware, baked into chipsets, locked down and sandboxed in such a way that even the manufacturer will have
7:56 am
a hard time pushing patches and security updates to these devices. So they tend to be used, and we've certainly demonstrated this in some of our classes and so on, they can be used in these huge botnets, where they really do damage on the internet, or they can attack healthcare systems; they can attack all kinds of systems, and they can deliver ransomware. We're seeing a huge issue, which affected me directly, with car dealerships in the U.S. You know, I had to go get a used car, because my other one is a piece of crap, frankly, and, you know, it was hours and hours of faxing and paperwork, and so on, going back to the old systems, and people literally walking down the street to check in with folks, because, you know, those computer systems are gone. But that's just car dealerships. We're not talking about things that are much more vital and can have a much bigger impact on people's livelihoods and health and safety and so on. I brought up hospitals. We know about the big WannaCry issues, and, you know, that was not quite a decade ago, but, you know, still, we've been living with ransomware
7:57 am
for nearly a decade now. And these backdoors, frankly, and obviously we should bring this back full circle here to some of the real central issues, these backdoors are built and funded by entities like the NSA, right? They're made inside the CIA; they're using military contractors, the kind of government contractors who are building these systems, to have backdoors in devices like microwaves so that they can be exploited and then used as so-called cyber weapons. So we know that ransomware comes from an NSA backdoor that was purposefully inserted into Microsoft software, right? We know that it was patched quietly by the NSA and Microsoft, who basically tried to cover that up in a very sort of clunky and clumsy way. But they tried to, and now we're all living with that. It's having very serious implications for everybody, not just for industry,
7:58 am
but for people's real livelihoods and their health, right? Um, so the microwave is one thing. Um, this prime directive stuff, you know, I mean, these aren't thinking animals or humans. You can, you can build ethics into them in a way; you can build certain rule sets so that machines try to not do certain things. But of course, machines can be hacked and modified, rules can be changed, and that's something, obviously, that I teach. When we have governments and these extra-governmental entities purposefully building and storing exploits, right, and vulnerabilities, and trying to bring down systems in so-called cyber war, these devices are going to continue to be dangerous. It doesn't matter what rule sets we program them with if they're purposefully broken by others and used, you know, in some malicious way. Thank you, Sean, for explaining to us this marvelous and sometimes scary world of AI. Just about everybody involved in the development of
7:59 am
AI supports some form of government intervention and regulation. Sam Altman, the CEO of OpenAI, has said that AI could design biological pathogens and that it could hack into computer systems. Elon Musk has said that AI's development is moving too quickly to be healthy for society. The reality is that if Elon Musk is wrong about AI and we regulate it anyway, who cares? But if Musk is right about AI and we don't regulate it, everybody in the world will care, and by then it might be too late. I'd like to thank our guest Sean O'Brien for being with us today, and thank you to our viewers for joining us for another episode of The Whistleblowers. I'm John Kiriakou. Please follow me on Substack at John Kiriakou. We'll see you next time.
8:00 am
Seventy Ukrainian drones are intercepted as Kiev launches an attack on seven Russian regions, but some did get through, hitting an oil depot and residential buildings. Iran says the U.S. is complicit in the assassination of Hamas political chief Ismail Haniyeh, and sees it as a regional escalation, as the U.S. sends navy carriers flanked by destroyers to the Middle East to defend Israel against a promised response.