The 360 View, RT, June 9, 2023, 4:30pm-5:01pm EDT
4:30 pm
precedent that actually did not exist. The debacle drew attention to the potential problems of relying on artificial intelligence, which I discussed a bit earlier in the program with our panel of guests. They do not know to what extent it is right: it has started stating things, it has started writing research papers that drive research outcomes. So it might be believed that there is real judgment behind it, that it has assessed papers, when it is writing those research papers at scale to arrive at a desired outcome. Is there a way to regulate, to prevent these kinds of mistakes? Beyond regulation, we need to make a decision as humans as to what extent we're willing to flatten our own intelligence into machine intelligence, to what extent we're willing to let AI run our society. The head of the company that made ChatGPT, Sam Altman, recently met with India's prime
4:31 pm
minister to discuss, I guess, the advantages and risks of using AI. Just for me, that's kind of worrying. Why would he sit down with the Indian prime minister? What are they talking about? What are the implications here? See, everybody's concerned about the reach of AI. As of today, no human being can beat the computer at chess, right? What happens when that extends further? What is tomorrow? No human being will be able to beat AI in an election. What you're saying, sir, is very, very disturbing. Michael, back to you, if I can. I mean, do you think AI is going to replace the need for human workers, and would that then, I guess, result in layoffs? Or am I being a bit extreme? Labor redundancy is certainly a possibility, but I think the more urgent question is to what extent AI is going to curate a reality for us. It's creating a kind of worldview and
4:32 pm
a world system of its own, which has been programmed into it, but from which programming it is capable of making inferences and extending that worldview. So we're replicating some human biases and human problems in AI, and then reproducing them massively. In no time, AI may develop emotions of its own, a consciousness of its own. So the human history that we know, the history in which the human is dominant, we might not know it the way we know it right now. I think we need to look at the question of whether we really want to merge our intelligence with AI and effectively outsource our worldviews, our history, our future to artificial intelligence. And I really enjoyed that discussion we had, I guess, just a short time ago here on the program. Although, you know,
4:33 pm
when it comes to ChatGPT, you know, there's, I mean, a lot of stuff out there; you can read more about that, if you want, on our website. The concept of artificial intelligence has been in movies for decades, but since 2020, a type of technology designed to generate human-like responses to a given prompt or input has made this AI software an increasingly popular tool for all ages, though I'm scared of using it. On this edition of The 360 View, we're going to ask whether the development of ChatGPT will enhance our daily lives or lead to our own demise. Let's get started.
4:34 pm
Artificial intelligence was developed during World War II to merge the functioning of machines and human beings. But recently a new model called ChatGPT, or Generative Pre-trained Transformer, has gained significant attention and interest from various communities since the technology's launch in 2020. Many people use language models for a wide range of purposes, such as generating creative writing, automating customer service, assisting with research, and even generating new ideas. This trend is expected to grow as more people become aware of the benefits of AI language models. While AI has many potential benefits, it also has some possible drawbacks users should be aware of. Let's start with bias. AI systems are only as unbiased as the data they are trained on. If the data has biases, the AI system will also have those biases, which can lead to unfair and discriminatory
4:35 pm
outcomes. Next, the lack of empathy: AI systems are not capable of emotions or empathy like humans, which means they cannot understand human feelings or experiences. This can make it difficult for them to provide appropriate responses to emotional situations. Then there's dependence on data. AI systems require a large amount of data to train and improve their accuracy. This means they are only as good as the data they have access to, and if that data is incomplete or inaccurate, the AI system will also be flawed. There's also a worry about causing unemployment, as AI has the potential to automate many jobs, which could lead to unemployment for some workers. That's going to have significant social and economic impacts, particularly for those who are already disadvantaged. And lastly, and probably most importantly, the security risk: AI systems can be vulnerable to attacks and hacking, which could lead to the exposure of sensitive information or the manipulation of data for
4:36 pm
malicious purposes. While AI has many potential benefits, it is important to be aware of these drawbacks and work to address them to ensure ChatGPT and other AI systems are used ethically and responsibly. To discuss, let's bring in our guest, Ian Khan, CEO and founder of Futuracy Research. Thank you again. So do you think ChatGPT and other chatbots are a danger in that they acquire personal data from individuals? There's so much happening with ChatGPT-3 and generative AI. When it comes to the question of collecting data and collecting information, we have to realize that these platforms and this generative technology are trained on a lot of data. A lot of that learning has already been done on tons of data, and it's impossible to tell exactly what that data comprises. It could be
4:37 pm
proprietary information, it could be personal data, it could be personal information from people, individuals, companies, and all this information is kind of the trade secret of whoever is operating that generative AI platform, ChatGPT-3 in this case, as one example. So yes, it's possible that it contains private and personal information, but that it has been redacted from the responses that ChatGPT-3 would offer. A typical example of this is that if you ask ChatGPT-3 a controversial question, it's not going to answer you; it's going to say that it can't answer it. So yes, it's very much possible that personal data can be used, has been used, and potentially will be used. What would that technology actually reveal about someone? So when it comes to information collection and information protection, we have to realize that when we use third-party technologies, let's say ChatGPT-3, or something that Google has designed, or Microsoft has designed, or one of the
4:38 pm
hundreds of different technology companies design, when we start using their platforms, there's this thing called the end user license agreement. Now, we never read it; the average person never reads the EULA. And the EULA contains all of these things in legal terminology: hey, we're going to use your data, whatever information you put in, we're going to use it, we're going to repurpose it, and so on and so forth. And that's kind of the condition for us to use those platforms. Now, when you create an account with any online platform, any online technology, it's a regular practice to be asked for your name, age, date of birth, and in some cases credit card information. So companies are collecting this information, and how are they using it? Will they use it? Where will they use it? And will they get their generative AI to train on this data? That's the big question, but nobody's going to answer it in
4:39 pm
a very straightforward manner, because it's controversial. But does that mean they're not collecting that information? No, they definitely are. And that's the challenge we are in right at this moment: where do we trade off our privacy against the use, or the benefit, that these generative AI technologies are providing? Do you fear the manipulation of these kinds of chatbots by outside third parties as a means to influence individuals, or incite them or incite groups to certain types of behavior? Generative AI technologies, including ChatGPT-3 as an example, are just beginning to emerge. This means that the possibilities are numerous. We're seeing ChatGPT-3 being used by university students or school students to write their essays to get into graduate programs, and that's a very initial use case. Now, can a third party use these platforms to influence certain people and get them to do things?
4:40 pm
Absolutely, yes. The biggest danger of generative AI technology is how you can influence other people through automation: by creating content, and not just text content, by writing automated blogs, by creating content that has hate, that has other kinds of things in it, including video content, including deepfakes. All this content that can be created is generative AI-powered content, and how you use that information, how you use those outputs, is really left to the imagination. It's very much possible that fighting crime in the near future will get much more complex, because now you're not dealing with a person with a firearm, you're not dealing with a terrorist physically harming people, but you're dealing with corporations that are going to harm individuals and
4:41 pm
people in society by creating content that's hateful, by creating content that can harm others. And so we're headed into very interesting and scary times. There are some reports that hackers and researchers are able to actually use ChatGPT to create malware and other viruses that can escape detection in some cases. Now, is this technology going to change the face of the internet and threaten the security of our lives on the web? So when it comes to generative AI technologies and the structure of the internet, a lot is happening. We started with something, you know, called the World Wide Web, which then moved into Web 2.0, and we're talking about Web 3.0 now, all these different, complex ways in which the web is created and exists, the internet exists. There's also the metaverse, which is a completely different world. Now, when it comes to generative AI and what it can do with us, or for us, when we are on the internet,
4:42 pm
it can definitely change our interactions. There's the case of malware being created, or viruses being created, by generative AI. And the complexity of the situation is that AI is much better at creating these viruses or this malware because it's got unlimited possibilities and innovations: it can create and test code at a much more rapid pace than a human, right? Traditionally, viruses would be coded by a person; a person would create the code and release it out there in the world, and that would be the virus. But now you have powerful technology such as artificial intelligence that's creating a virus, making it better every single iteration, and finding ways to make it more penetrative, more damaging. And so cybersecurity is going to become much more complex in the next few years, because now we're battling an enemy which is not a physical thing, which is not a person, but
4:43 pm
a technology. And we have to arm ourselves, really, in a very strategic way to protect ourselves. So I'm interested: how can consumers protect themselves from this in the present and in the future? First of all, in terms of protecting ourselves in the future, we absolutely need to hold technology companies accountable and responsible for everything that they're creating. The process of creating technology, whether ChatGPT as an example, generative AI video, or generative AI audio content: all of these are not just about recording something; computers are creating code, computers are creating content, and there are no controls. Now, if we let this out of our hands and let the technology, and these companies, determine what they should be doing, then nothing is in our hands; we are just at the mercy of some specific people or organizations or corporations
4:44 pm
that dictate how we behave, how we interact with the world, and what happens to people in the end. I think it's time to take control of how AI companies are actually creating products. There has to be regulation on how and what organizations and big tech can create, and what they are allowed to create. In the last few years, in the history of the internet, there's been very little control over technology companies, and little holding them accountable for the content they create, the systems they create, and how they monetize them. Every big tech company creates content and systems and monetizes our information to benefit itself. But now things have to be a little bit different, because now it's getting much more complex. One of the core architectural things that can change is moving to a new architecture of the web, called Web3, where the people who are on the other side, you and I, can be in charge of our own information and decide whether to allow or disallow people
4:45 pm
from using our information, to monetize it and benefit from it. So because of this, do you think it's better for policies to be made by individual countries, or should a global policy be created? On a country-to-country basis, it's very difficult for everybody to come to a consensus, right? We have disagreements even in rules and regulations, differences between the Americas and the EU, and both of these regions, as an example, proceed with rules and regulations with respect to technology in different ways. There are some countries that are making a little bit more progress than we are, and let me tell you, we are really, really behind the curve right now in creating this regulation. Surprisingly, if you look at the Middle East, you look at the United Arab Emirates, where you hear about cities like Dubai and Abu Dhabi, they are progressive enough to say, hey, we need a minister of artificial intelligence to really
4:46 pm
understand what to do with AI. And that's progressive thinking. So if I see regulation coming from anywhere, it's either going to be from small pockets in Europe, maybe in Estonia, or in Asia, in Singapore, or maybe in the UAE, or even Saudi Arabia, which is really, really taking AI seriously and developing it. When it comes to the Americas, when it comes to the Western world, there are far too many powerful technology companies that exist here, that control the dialogue, that control the conversations that happen in, you know, the big houses of government. And so that's the big challenge. I think we have to have two different types of dialogue: one is a global dialogue, at a global level, with every country involved, and another is at a local level, to try and strengthen our own walls, to strengthen our own communities and countries. Ian, you stick around. You know, this sounds like a great time to take a break.
4:48 pm
Let us continue our conversation about AI and the growing role it's playing in society with Ian Khan, CEO and founder of Futuracy Research. Ian, do you see chatbots as posing a threat to our social fabric and the social well-being of individuals? Are people going to become deluded into thinking they can have a relationship, for example, with a chatbot? So when it comes to the future of chatbots, it's very much possible that if we let things get out of control, if we don't control what we see, how we see it, where we see it, then things will go totally out of hand. It's not completely impossible that
4:49 pm
we will end up having relationships with chatbots, and people will get emotionally involved with a chatbot, or an avatar of some kind, that's driven by AI technologies. And because it's so easy to create content, create that engagement, create this artificial persona of an entity that doesn't even exist but behaves and feels like a human being on a screen, it's very much possible that certain people in the future will be in intimate relationships with these entities and these avatars that are created. We have to be careful about where we're headed, because right now the future of humanity is kind of in peril if things go out of control. How can we trust the developers of these types of technology to be objective and unbiased? I mean, we are already seeing that ChatGPT has a clear left-leaning bias, from numerous examples. Are people on the right just going to have to create their own artificial intelligence,
4:50 pm
or can we have a technology that is more neutral? In many cases, artificial intelligence today is biased. There's a whole area of conflict within AI where it has different biases, and the reason for that bias is the training data, and also who has programmed that technology. And depending on these two factors, your AI could be totally anti a certain group of people, or anti a certain ideal or philosophy. And that's where we are right now. The creation and the design of AI is not perfect, the training data it uses is not perfect, and at the end of the day it really depends on whoever is programming it, creating those algorithms and writing that actual code: what are they thinking about, and what kind of output do they desire from this technology? I really feel that it's
4:51 pm
an area of very complicated decision-making, of accountability, and also of really thinking as a community, as a whole, whether you are right- or left-leaning, about how we can come together and create something that benefits humanity. We have to shun our own personal agendas and come together to solve this problem; otherwise, it might just get completely out of hand. Is there any fear that artificial intelligence can become fully conscious and autonomous in the future? Are we staring down the barrel of the Terminator or Matrix movies anytime soon? So when you think about 100 percent autonomy of AI systems, or beyond autonomy, you're looking at AI being sentient: it's thinking, it has a mind of its own, and it can make decisions about good or bad, or about certain outcomes. It is possible, but not today. Today it's not possible; AI right now is considered to operate in a very narrow field, so it's known as artificial narrow intelligence, and it can do certain things. But we're slowly headed towards the
4:52 pm
next phase of AI, called artificial general intelligence, where it has much more intelligence and it can power different things, it can automate many different things, and it can make certain decisions as well. And the stage after that, which is what we're talking about when technology becomes sentient, is artificial superintelligence, where sentient AI will be able to manage everything, create things, build things. And you're looking at the Terminator 3 scenario, the Terminator 4 scenario, these future scenarios where possibly, you know, we won't control anything, but technology will be able to dictate things. And that is an era that not even the AI scientists are looking forward to. I've spoken to dozens of scientists over the last few years filming our documentary, AI: The Next Frontier, that's coming up later this year, and they're not looking forward to artificial superintelligence. And if we stayed in the AGI
4:53 pm
era for years to come, they would be rather happy. It is possible, and you never know what's happening behind closed doors, in the labs, in the AI labs that are designing things. We don't know about things that are being done that don't make it to the public arena, but it's definitely possible. So let's say society decided AI was going to be detrimental to society. Is it too late to stop or limit AI integration into the world if we even wanted to? So there are a few different things that are happening in the world as we know it. There are the vendors, the providers, the companies that are building technology, and we as consumers don't control that; we can do nothing about it, because it's behind many, many walls of, you know, legal implications and frameworks. And so we are unable to dictate what these companies do as a whole. At a personal level, we interact with technologies that are out there in the world in
4:54 pm
our everyday lives, and there is some control that we can exert on that: whether we end up using a technology or not using it, or using it in a way that harms others or benefits ourselves, and so on. There are many different ways of looking at it. When it comes to being productive, when it comes to, you know, creating a better world, then absolutely, I mean, we cannot live without technology. Think about stopping using your cellphone or your computer; we cannot live in the modern world without using certain parts of technology. And, funnily enough, right now our relationship with technology is also changing; the way we interact with technology is changing, right? Back in the day, when the internet was first released and was a fresh new thing, I remember many people were hesitant to go online and log onto their bank and check their balances and so on, because they were afraid that somebody would steal their money. Come today, 2023, we cannot live without an app when we have to
4:55 pm
transfer money, or pay with our phone, and so on. So that's a relationship change that's happening right now. Today we may be afraid of AI, but maybe in the next five years, if it progresses in a nice way and adds more benefit to our lives, we might end up liking it and doing things that will impact the world in a positive way. So it really depends on how the public is educated, and on these technology companies, these creators of the technologies and algorithms: what do they create? Do they create fear in the world, or do they create good, positive things in the world? And can we actually maybe solve some of the biggest challenges that the world faces today? Leaders like Bill Gates and, you know, Elon Musk seem to have different perspectives on AI; both actually have made their money off technology. Do you think their debate is helping or hurting this conversation? My personal opinion: I think it's beneficial to some extent, but at
4:56 pm
a larger level it's very confusing, because it's two very famous, very influential people debating things that average, ordinary people do not understand. You and I, and everybody else in the world, need to understand the implications of these technologies on our daily lives, rather than on the corporations and the big factories and the different organizations these people run; those conversations are two different things. I really feel a lot of it is very harmful; it's not productive, it doesn't help in any way. And I really wish there were other leaders, or these leaders, who would help people understand what these technologies can really do and how they can benefit people and humanity as a whole. So yeah, that's my opinion; I think the dialogue needs to change at that level. Thank you, Ian Khan, for joining us. You know, the development of technology can be a good thing for society as long as it is done responsibly. You know,
4:57 pm
a chat platform like ChatGPT is not perfect and could still make mistakes or generate biased responses if it is not trained properly or if it is used inappropriately. These are real concerns, which become serious problems if not addressed during the initial phase of its creation and integration. Sadly, we are already seeing those whose only goal is profit choosing to compromise current ethical standards, which those in the future will have to answer for. I'm Scottie Nell Hughes, and this has been your 360 View of the news affecting you. Thanks for watching.
5:00 pm
Headlines on RT International: comments from the Kremlin as President Vladimir Putin says Kyiv's counteroffensive, currently on day five, has not achieved any of its objectives and Ukrainian forces have been stopped in the northwest. India shrugs off a US congressional panel's suggestion to join NATO, saying the NATO Plus framework is not applicable for New Delhi,