
The Cost of Everything : RT : November 22, 2023 6:00pm-6:31pm EST

6:00 pm
The breaking news this hour: the FBI finds no evidence of a terror attack in its investigation of a vehicle explosion on the Rainbow Bridge, which connects the US and Canada, according to the governor of New York. Israeli troops press on with attacks on Gaza on the eve of a four-day truce with Hamas that includes a prisoner swap. Also, 5,300 Palestinian children have already reportedly been killed; the toll among children is sickening, UNICEF says, raising the alarm over the imminent danger of lethal epidemics in Gaza as the humanitarian group pushes for access to water and sanitation for children. A local doctor
6:01 pm
shares some distressing details; here's the latest now. "We have received many patients in a critical condition from the north. This includes children; some are unconscious when they reach us, and some of these children do not survive." And relatives of hostages still held by Hamas take to the streets of Tel Aviv, expressing mixed feelings over the truce, as only 50 out of more than 200 captives are expected to return home. "The fear of everybody is that this will give Hamas time to reorganize," one of those present said. Well, those are the headlines we're following here on RT International. Stay with us; up next, The Cost of Everything.
6:02 pm
From the gadgets we use to operating rooms, AI is revolutionizing the way that we perceive the world and pushing the boundaries of what is possible. I'm Christy Ai, and you're watching The Cost of Everything, where today we'll dive deep into the heart of AI, uncovering its impact and the transformative potential that it holds for our future. While ChatGPT has been used by everyone around the world, from writing a child's book report to creating a company PowerPoint presentation, no one has really bothered to ask how much it even costs to run such an impressive platform. A new report estimates the cost to run ChatGPT per day, including how much each query approximately costs. The chatbot set off an AI revolution in November 2022, but it has proven
6:03 pm
extremely expensive to maintain. The research firm behind the report found that it costs approximately $700,000 per day, or $0.36 per query, to keep the chatbot up and running. It is so expensive that Microsoft is developing its own proprietary AI chips to assist in the maintenance of OpenAI's operation of ChatGPT. Microsoft's Azure cloud is hosting ChatGPT so that OpenAI does not have to invest in a physical server room. But even so, it is much more expensive than a regular query to a search engine, with one chat costing about seven times more than a Google request. ChatGPT has struggled with high traffic and capacity issues, slowing down and crashing its servers. The company attempted to fix this by introducing a paid ChatGPT Plus tier at $20 a month. OpenAI currently uses Nvidia's GPUs to maintain its ChatGPT processes.
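As a rough back-of-the-envelope check of what those quoted figures imply (the sketch below uses only the numbers cited above; the derived values are illustrative arithmetic, not figures from the report itself):

```python
# Back-of-the-envelope check of the quoted ChatGPT running costs.
# Inputs are the figures cited in this segment; outputs are illustrative only.

daily_cost = 700_000        # estimated running cost per day, USD
cost_per_query = 0.36       # estimated cost per query, USD
google_ratio = 7            # one chat is said to cost ~7x a Google search

implied_queries_per_day = daily_cost / cost_per_query
implied_search_query_cost = cost_per_query / google_ratio
implied_annual_cost = daily_cost * 365

print(f"Implied queries per day:          {implied_queries_per_day:,.0f}")    # ~1.9 million
print(f"Implied cost of a search query:   ${implied_search_query_cost:.3f}")  # ~$0.05
print(f"Implied annualized running cost:  ${implied_annual_cost:,.0f}")       # ~$255 million
```

On these numbers, the service would be handling on the order of two million queries a day to reach a $700,000 daily bill.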
6:04 pm
Industry experts expect that the company will likely require an additional 30,000 GPUs from Nvidia to maintain its commercial performance for 2023. Microsoft already has AI chips in the works, codenamed Athena; they are currently being tested internally with the company's own teams and are expected to be introduced next year. In recent years, the AI race has been heating up, with the US and China at the forefront of AI-driven research. Corporations like Google, Facebook, Microsoft, Tencent, and Alibaba are all actively pushing capabilities to new heights. China's Baidu has recently rolled out its own ChatGPT rival, Ernie Bot, to the public, making it the first domestic app of its kind to be fully available in China. Alibaba also rolled out Tongyi Qianwen, its large language model system that is trained on huge amounts
6:05 pm
of data in order to recognize and generate content. Other countries such as Canada, Japan, and South Korea are also quietly forging their own initiatives. India ranks fifth in terms of investments received by startups offering AI-based products and services; last year Indian startups received $3.24 billion in total investments, following the US, China, the UK, and Israel. India is planning to launch its own ChatGPT rival soon, and this development could propel India to the forefront of the global industry and solidify its position as a major leader. Many are excited about the potential to create meaningful and lasting change across various sectors, ultimately benefiting its people and empowering millions as well. People are divided on how artificial intelligence is going to shape the job market. There are
6:06 pm
studies that have found AI drives productivity, and Indian organizations are leading the way towards adopting AI and automation, with 75 percent of desk employees using AI at work; this makes India the world leader in AI use. And today we're joined by Tom Taulli, author of Artificial Intelligence Basics. Now Tom, AI has witnessed remarkable advancements in recent years. How do you believe these technological strides are shaping our future, and what benefits can AI bring to various industries? Yeah, I mean, AI has been around for decades, but it really has been in the last 10 years that we've seen a lot of the advances, primarily because of access to a lot of data, with our phones in particular. So we have a lot of data to work with, and that data is critical, because that's what the models, sophisticated models like deep learning, use to create their magic, whether it be predictions or,
6:07 pm
as we see with ChatGPT, content creation. In terms of the benefits to society or to business, in particular it tends to save us time. There's just so much tedium in the work day, all the tedious activities you're engaged in; if that could be automated, it means you have much more time to devote to the things that are important. So I look at AI as being a helper; it's like a virtual assistant. It can help us do the things that we humans can do, but why waste our time on things that are really not that important and that AI can do? But with some of the latest things that we've seen, like ChatGPT, there are things that we didn't even think were possible before, like creating content, reading documents and summarizing them for us, and figuring things out for us. These advances are just happening every day, so yeah,
6:08 pm
the next year is going to be pretty exciting to look forward to. AI research requires substantial investments. Could you share insights into the level of funding and resources being channelled into AI development, and how this investment is driving technological breakthroughs? Yes, so the investment comes from different sources for AI. One is venture capital; these are funds that focus on earlier-stage companies. OpenAI, for instance, was at one point an early-stage company, and Nvidia was an early-stage company, and they sought out venture capital. The other important source of capital is the big mega tech corporations like Microsoft; Microsoft wound up being the biggest funder for OpenAI. And we're seeing the big companies, like Google, Oracle and so forth, get more active in their investments in smaller companies,
6:09 pm
but also internally, for their own development. And then, why do we need all this money? For example, OpenAI has raised over $10 billion from Microsoft. Why do you need $10 billion? Well, first of all, data scientists are not cheap; it's very hard to find PhDs who understand how to create and use and maintain these models. So that's part of it: you could easily spend seven figures on data scientists, and there's a lot of competition for them. But the other part is just the infrastructure. For GPT in particular, Microsoft had to build a supercomputer to make that happen, because these models are just massive: hundreds of billions of parameters, huge amounts of memory, it has to be done very quickly, complicated models,
6:10 pm
and to do that at scale. So if you have 100 million users using AI, you need all that bandwidth. And so the big cost right now is just the infrastructure: the data centers, the energy, the semiconductors. For a company like Nvidia, it's good news, because you need these semiconductors to create these computers. So part of the cost is the data scientists, but that's a fraction of it; most of the cost right now is just the infrastructure of making this happen. But a lot of companies like Google are trying to make their models leaner and more streamlined to reduce these costs, so that's a big movement in AI, to try to make these models not as expensive as they are today. AI's applications extend to space exploration and environmental monitoring. How are AI algorithms being employed to analyze vast data sets from satellites,
6:11 pm
and to aid in climate research and disaster response? This is more of the traditional sweet spot for AI; it would be more like machine learning, and then some of the more advanced deep learning. This type of capability has been around for decades; it used to be called analytics, or maybe business intelligence, but it's really about big data: you take a huge amount of data and you find patterns in that data. That was difficult 20 years ago, because you just didn't have the data, but we now have thousands of satellites in space collecting huge amounts of data. We have so many sensors: your car collects huge amounts of data and sends it into the cloud, your refrigerator creates data, your TV, your phone. Pretty much everything has a computer, and it is spinning off data. So we have a lot of data to work with, and
6:12 pm
part of this is to try to find those patterns. These models can find patterns in these data sets that humans really can't, because it's just so much data, and sometimes we don't see certain patterns that the machines can see or correlate. So we see a lot of progress with areas like climate change: weather forecasts use really sophisticated AI models, and they do get better and better over time. I realize the weather forecasting isn't great here, but usually in other parts of the world the weather forecasting is better, and a lot of that has to do with the availability of the data and the sophistication of the underlying models. But I think there's a misconception that it's all about the models, and that's not the case; a lot of it has to do with the data. You can have really nice models, but if you don't
6:13 pm
have the right data sources, it really doesn't matter much. And besides, a lot of these models are off the shelf, open source, and available from academic research. So a lot of what makes AI successful is how you use the data, not so much the models. Autonomous weapons and AI-driven cyber attacks are also emerging threats. How do we regulate and control the development of AI technologies with the potential for causing harm or engaging in malicious activities? As of now, at least for the United States (it's different in other countries), there is no federal regulation of AI. There's really no state regulation either; at the state level you might have privacy regulations about how you can use private information, but if someone goes and uses AI as a way to create a weapon or to create malware, or to hack into a system, there's no law against that specifically. Now,
6:14 pm
there are clearly laws against using weapons of mass destruction, or against hacking into systems, but the creation of those things using AI is not prohibited per se. The only regulation that we have at this point is self-regulation. So if I go to ChatGPT, there are guardrails in there: if I try, say, to create some malware and try to trick it, usually it figures out my tricks and won't do it. These AI companies usually have usage policies saying you can't use the tools to create weapons or malware, or to harass people, or to engage in certain activities, and they do try to stamp that out and prevent it. But there are ways people can get around this; if someone wants to do something evil and they have the time, they just have to be right once. That is known as a jailbreak, or prompt injection: going into the systems and tricking them into engaging in certain nefarious activities. Thank you so much, Tom, but please stick around. Author
6:15 pm
Tom Taulli will stay with us right here after the break. And when we come back: tech giants are still battling it out over who will dominate the AI world. We'll have more after the break.
6:16 pm
The rivalry between Mark Zuckerberg and Elon Musk continues to heat up over their pushes into the artificial intelligence world. Zuckerberg's Meta is set to release an open-source version of its AI language model,
6:17 pm
allowing startups and developers to build tools and software on their own using this tech. This approach runs counter to the proprietary systems developed by Musk's OpenAI. Musk has often expressed concerns about the potential dangers of AI and has called for more regulation and oversight over AI development. He has also warned that AI could potentially become a threat to humanity if not controlled properly, while calling the Facebook CEO's knowledge of the field limited. Zuckerberg, on the other hand, has a more optimistic view of artificial intelligence. He believes it has the potential to bring significant benefits to society, such as improving health care, education, and transportation, and he also says that AI can be used to solve many of the world's most pressing problems. But Musk isn't the only one, as Geoffrey Hinton, often referred to as the godfather of
6:18 pm
AI, quit his role at Google suddenly so he could speak freely about the dangers of the technology he helped create. In an interview, Hinton said that chatbots right now are not more intelligent than us, but soon they may be, and that bad actors may use AI in ways that could have a detrimental impact on society, such as manipulating elections or instigating violence. Others in the field, such as Demis Hassabis of DeepMind and Sam Altman of OpenAI, share the same sentiment, warning against the risk of extinction from AI and calling for a pause in AI development. However, others doubt these predictions, pointing to the inability of AI systems to handle even the most mundane tasks like driving a car. Despite years of effort and billions of dollars invested, AI can't even handle this one challenge; what chance does the technology have
6:19 pm
of matching every other human accomplishment in the coming years? For this and more, let's bring in again Tom Taulli, author of Artificial Intelligence Basics. Now Tom, AI's rapid growth has raised concerns about its ethical implications. How can we ensure that AI technologies are developed and used responsibly, and what safeguards are necessary to prevent unintended consequences? I think the key word is "unintended", and the other key word is "ethics". Depending on what country you're from, ethics could be different; different cultures and societies have different views on what ethics is. Some things are fairly generally agreed upon, but other things maybe not so much, so that's a problem with ethics: whose values do we promote? The other part is about unintended consequences. And I do think that generally the data
6:20 pm
scientists who create these systems are good people. I don't think they go out there like mad scientists and try to destroy the world or mankind. Now, there are the hackers who try to do that, but I think those who work at companies and have a regular job as data scientists are mostly on the up and up. But it's the unintended consequences. Maybe I'm white and male and I went to Stanford, and all my friends are white and male and went to Stanford, and I might have a world view where I may not see certain things in my analysis, or may overlook certain things that someone who is not part of that category would look at and say, wait a minute, you're doing something here. You're not doing it intentionally, but just because of the way you were brought up and the way you look at things,
6:21 pm
you're creating certain systems that have these types of outputs. And also, they'll say, well, maybe don't use gender in the data, for example, in the analysis. So if we have a credit system to extend credit to an individual, we won't use gender, for example. But there's other data that can reflect gender, and that can be very nuanced: maybe the type of profession they have, or what school they went to, or what degrees they have. That might reflect gender, and that may skew things in a certain direction that may discriminate, or create bias in the outcomes. So these are really difficult questions, and I do think part of dealing with this is having more diversity in these companies, in who is creating these models, and I think that will go a long way in helping us out.
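To make the proxy point concrete, here is a minimal sketch, not something Taulli describes on air, of how features that stand in for a protected attribute might be flagged before a model is trained; the data, column names, and 0.3 threshold are hypothetical.

```python
# Minimal sketch: flag columns that still act as proxies for a protected
# attribute (e.g., gender) after that attribute is dropped from a model's
# inputs. The data, column names, and threshold below are hypothetical.
import pandas as pd

def find_proxy_features(df: pd.DataFrame, protected: str, threshold: float = 0.3) -> dict:
    """Return {column: score} for columns whose correlation with the
    protected attribute exceeds `threshold`."""
    # One-hot encode categorical columns (profession, school, degree, ...)
    # so everything can be compared numerically.
    categorical = [c for c in df.columns if df[c].dtype == object]
    encoded = pd.get_dummies(df, columns=categorical).astype(float)
    protected_cols = [c for c in encoded.columns
                      if c == protected or c.startswith(protected + "_")]
    flagged = {}
    for col in encoded.columns:
        if col in protected_cols:
            continue
        score = max(abs(encoded[col].corr(encoded[p])) for p in protected_cols)
        if score > threshold:
            flagged[col] = round(score, 2)
    return flagged

# Hypothetical applicant data: gender itself is excluded from the model,
# but other columns may still carry it.
applicants = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M"],
    "profession": ["nurse", "engineer", "nurse", "engineer", "teacher", "engineer"],
    "years_experience": [4, 6, 3, 7, 5, 6],
})
print(find_proxy_features(applicants, protected="gender"))
```

In this toy table, the profession columns surface as stand-ins for gender even though gender never enters the model's inputs, which is exactly the kind of nuance described above.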
6:22 pm
But right now, if you look at a lot of these companies, it tends to be very monolithic in terms of who is creating these models. One of the major concerns with AI is bias and discrimination in algorithmic decision-making. How can we address the issue of bias, especially in sensitive areas like criminal justice and hiring processes? Well, some companies just don't do it. PepsiCo does not use AI to hire employees; they may use technology to help streamline certain processes, but they're not going to use it to screen employees, and I think other companies are doing the same. We're also seeing that even companies that create facial recognition software, which has been used in the justice system, companies like Microsoft and IBM, are saying we don't want this technology used for that purpose. So again, it's more like self-regulation, saying
6:23 pm
we don't think it should be used this way. And I think part of it is the accuracy: these systems are not 100 percent correct. If they're not 100 percent correct and the stakes are extremely high, whether someone has their liberty or not, or is put in jail, then the accuracy has to be there; and if it's not, if you can't prove it, it shouldn't happen. The problem is we don't have regulation on that in the United States, so there are jurisdictions that will use this technology. I think there was a case recently where a woman was misidentified as being a suspect, and she was put in jail, but it wasn't her. Besides the horrible situation that the person was put into, when these incidents start happening, it lowers the trust in AI.
6:24 pm
So it's incumbent upon the industry to create the standards, because if people see this being used for something bad, for harm and evil, they won't want any part of what AI has to offer. I think the industry is trying to be responsible here, but there's probably a lot more that needs to be done. Now, the rise of AI-powered deepfake technology has the potential to disrupt trust and authenticity. What strategies can we adopt to combat the spread of misinformation and manipulated media? Wow, that's a tough one. It's kind of like cybersecurity, where just when you feel like you've prevented something from happening, a hacker comes up with something new. I think that's the case with deepfakes, and deepfakes are achieving levels where it is really difficult to differentiate between real and
6:25 pm
not real. Because of this generative AI technology, it's not even just audio but also video, and the videos are getting more sophisticated. It may come to the point where you talk to somebody and think you're talking to a real person, but you're not. It could be on a Zoom call or anything like that, and then maybe it could result in a fraud where they want to get some money or have you make a money transfer, something like that. So how do you combat this? What we've done so far is take more of a cybersecurity approach to it: we have systems out there that can, a lot of times, find irregularities in these fakes, and they will do so over time. It'll be things like the angle of the face versus someone's eyes, or
6:26 pm
the lips being almost there but not quite when it comes to synchronization with the voice. But at some point you're not going to be able to tell, and when that happens, I'm not sure what we do. So I think it could be that we just have double checks in our systems: we don't always take things at face value, because even though we think we're talking to a human, there's always the possibility we're not. So if there's something important, make sure you double check. If someone says on a Zoom call, I want to make this wire transfer, don't do it just based on that call; make a call somewhere else, to whatever department, the payables department, and just say, I want to make sure I'm supposed to do this and that everything's okay. So that will
6:27 pm
probably add friction, unfortunately, to activities. I don't think we're there yet, but there have been examples of where that's happened, where there's been fraud committed and people thought they were talking to somebody who was actually AI-generated. Thank you so much, Tom, for all your time today. Now, the rise of technology giants such as Google or Microsoft has meant that these companies have a near-total monopoly on how people access information through search, and ChatGPT has created a one-stop place where you can ask anything and get a simple, satisfactory answer back. So instead of a big web with millions of sites, a few companies will hold all of the answers. The monopolization of the internet market in the past couple of years is not even close to what could happen next, as AI queries quickly become the new norm. I'm Christy Ai, thanks for watching, and we'll see you right back here next time on The Cost of
6:28 pm
Everything. Take a fresh look around: is life kaleidoscopic, or isn't it just a shifted reality, distortion by powers of division with no real opinions, pictures designed to simplify or confuse? Who really wants a better world, and is it just for a chosen few? Fractured images presented as facts.
6:29 pm
Can you see through their illusion? Going Underground. can the so i used to be starting the school curriculum. that's why i decided now let me start this up. one was in, which is looking at truth of what really happened here in these off of the, the german. so just bear with the,
6:30 pm
with the one with taking the picture. they were proud of the most of the, these pictures rustic in between menu whole for an animal age. and so when you see the central leasing it becomes and the person knows of was even seen. everything is out on the mission, the bidding, the for my definitely didn't even see this of the shower and he's funny. so i had also back and change the a lot of sort of that. so that is actually in those kind of what's not taking balances, nothing is just the german soldiers. and so the people for that goes and vision kids that
6:31 pm
the
