The Cost of Everything | RT | November 23, 2023 1:30am-2:01am EST
1:30 am
is this... i think i put my phone on record of him the second time. when i saw him coming to me again, i took my phone out again, because again he started the same thing. i just said... i didn't know that before, but now i'm scared about what he can do to me, because i'm just working here. i don't know anybody, you know. well, it turns out the man in the video wasn't just anybody. he's a former obama administration official, stuart seldowitz, who worked with the national security council and has been very prominent and involved in the israel-palestine conflict. people find it shocking that such a prominent official, who was involved with the us state department and its activities around the world, would say such hateful things in his everyday life. it's horrible to learn about an ex-government official going down that far, to talk about a religion and a people and humanity like that. he did not answer, and the remarks seem to represent his real feelings. here
1:31 am
in new york city, tensions between rival communities run high. you have a lot of strong supporters of israel, and you also, at the same time, have many muslim americans. the two communities don't see eye to eye on the israel-palestine conflict and recent global events, and these kinds of incidents that go viral across social media seem to represent the ongoing tension between the two communities. caleb maupin, rt, new york. and as mentioned, some news is just emerging: qatar's foreign ministry says the exact timing of the first truce between israel and hamas will be announced in the coming hours, the result of what's being called positive progress on the hostage talks mediated by officials in qatar. it was announced on wednesday that the ceasefire deal between israel and gaza would come into place, we believed, this morning at 10 am, but israeli officials said it's been delayed until at least friday. we'll keep you
1:32 am
updated on the developments as and when they come to us. alright, we are heading to the cost of everything studios next, as christy ai once again helps us navigate a financial world seemingly growing ever more complicated by the day. stay close for that, but i'll catch you again in 30. from art galleries to operating rooms, ai is revolutionizing the way that we perceive the world and pushing the boundaries of what is possible. i'm christy ai, and you're watching the cost of everything, where today we'll dive deep into the heart of ai, uncovering its impact and the transformative potential that it holds for our future. well, chatgpt
1:33 am
has been used by everyone around the world, from writing a child's book report to creating a company powerpoint presentation. but no one has really bothered to ask: how much does it even cost to run such an impressive platform? well, a new report estimates the cost to run chatgpt per day, including how much each query approximately costs. the chatbot set off an ai revolution in november 2022, but it has proven extremely expensive to maintain. the research firm estimates that it costs approximately $700,000 per day, or $0.36 per query, to keep the chatbot up and running. it is so expensive that microsoft is developing its own proprietary ai chips to assist in the maintenance of openai's operation of chatgpt. the microsoft azure cloud is hosting chatgpt so that openai does not have to invest in a physical server room. but even so, it is much more expensive than
1:34 am
a regular query to a search engine, with one chat costing about seven times more than a google request. chatgpt has struggled with high traffic and capacity issues, slowing down and crashing its servers. the company attempted to fix this by introducing a paid chatgpt plus tier at $20 a month. openai currently uses nvidia gpus to maintain the chatgpt service, and industry experts expect that the company will likely require an additional 330,000 gpus from nvidia to maintain its commercial performance for 2023. microsoft already has ai chips in the works, code-named athena; they are currently being tested internally with the brand's own teams and are expected to be introduced next year.
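To put those figures in perspective, here is a rough back-of-the-envelope calculation in Python using only the numbers quoted above; the implied query volume and the annualized total are illustrative arithmetic, not figures from the report.

```python
# Back-of-the-envelope check of the operating-cost figures quoted above.
# The $700,000/day and $0.36/query numbers come from the report cited in
# the segment; everything derived from them here is illustrative only.

DAILY_COST_USD = 700_000       # estimated cost to keep ChatGPT running per day
COST_PER_QUERY_USD = 0.36      # estimated cost per query

# Implied daily query volume if both estimates hold at the same time.
implied_queries_per_day = DAILY_COST_USD / COST_PER_QUERY_USD

# "About seven times more than a Google request" implies a search query
# would cost roughly this much under the same assumptions.
implied_search_query_cost = COST_PER_QUERY_USD / 7

annualized_cost = DAILY_COST_USD * 365

print(f"Implied queries per day: {implied_queries_per_day:,.0f}")
print(f"Implied cost of a comparable search query: ${implied_search_query_cost:.3f}")
print(f"Annualized operating cost: ${annualized_cost:,.0f}")
```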
1:35 am
in recent years, the ai race has been heating up, with the us and china at the forefront of ai-driven research. corporations like google, facebook, microsoft, tencent, and alibaba are all actively pushing capabilities to new heights. china's baidu has recently rolled out its own chatgpt rival, ernie bot, to the public, making it the first domestic app to be fully available in china. alibaba also rolled out tongyi qianwen, its large language model system, which is trained on huge amounts of data in order to recognize and generate content. other countries, such as canada, japan and south korea, are also quietly forging their own initiatives. india ranked fifth in terms of investments received by startups offering ai-based products and services last year; india received $3.24 billion in total investments, following the us, china,
1:36 am
the uk and israel. india is planning to launch its own chatgpt rival soon, and this development could propel india to the forefront of the global industry and solidify its position as a major leader. many are excited for the potential to create meaningful and lasting change across various sectors, ultimately benefiting its people and empowering millions. while people are divided on how artificial intelligence is going to shape the job market, there are studies that have found ai drives productivity, and indian organizations are leading the way towards adopting ai and automation, with 75 percent of desk employees using ai at work, which makes india the world leader for ai use. and today we're joined by tom taulli, author of artificial intelligence basics. now tom, ai has witnessed remarkable advancements in recent years. how do you believe these technological strides are
1:37 am
shaping our future, and what benefits can ai bring to various industries? yeah, i mean, ai has been around for decades, but it really has been the last 10 years that we've seen a lot of the advances, primarily because of access to a lot of the data, you know, with our phones in particular. so we have a lot of data to work with, and that data is critical, because that's what the models, the sophisticated models like deep learning, use to create their magic, whether it be predictions or, as we see with chatgpt, creating content. in terms of the benefits to society or to business in particular, it tends to save us time. you know, there's just so much in the work day, all the tedious activities you're engaged in; if that could be automated, it means that you have much more time to devote to the things that are important. so i look at ai as being a helper. it's like a virtual assistant. it can help us,
1:38 am
you know, do the things that we humans can do, but why waste our time on things that are really not that important and that ai can do? but with some of the latest things, like chatgpt, there are things that we didn't even think were possible before, like creating content, reading documents and summarizing them for us, and figuring things out for us. so these advances are just happening every day, and the next year is going to be pretty exciting. and the pursuit of ai research requires substantial investments. could you share insights into the level of funding and resources being channelled into ai development and how this investment is driving technological breakthroughs? yes, so the investment comes from different sources for ai. one is venture capital, you know, funds that focus on earlier-stage companies; openai at one point was an early-stage company,
1:39 am
and nvidia was another early-stage company, and they sought out venture capital. the other important source of capital is the big mega tech corporations like microsoft; microsoft wound up being the biggest funder of openai. and we're seeing the big companies, like google, oracle and so forth, get more active in their investments in smaller companies, but also internally, for their own development. and then, you know, why do we need all this money? for example, openai has raised over $10 billion from microsoft. why do you need $10 billion? well, first of all, data scientists are not cheap. it's hard to find people who understand how to create and use and maintain these models. so that's part of it; you could spend seven figures easily on data scientists, and there's competition for them
1:40 am
. there's a lot of competition for them. but the other part is just the infrastructure. for gpt in particular, microsoft had to build a supercomputer to make that happen, because these models are just massive: hundreds of billions of parameters, huge amounts of memory. it has to run very quickly, with complicated models, and do that at scale; if you have 100 million users using ai, you need all that bandwidth. so the big cost right now is just the infrastructure: the data centers, the energy, the semiconductors. for companies like nvidia, it's good news, because you need these semiconductors to create these computers.
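The scale Tom describes is easier to grasp with a quick memory estimate. The sketch below is illustrative only: the 175-billion-parameter model size, the 16-bit precision and the 80 GB accelerator are assumptions made for the sake of the arithmetic, not details from the interview.

```python
# Rough memory-footprint estimate for a large language model, to illustrate
# why serving these models needs data-center-scale infrastructure.
# The parameter count, precision and GPU size are illustrative assumptions.
import math

params = 175e9            # hypothetical model size: 175 billion parameters
bytes_per_param = 2       # 16-bit (half-precision) weights
gpu_memory_gb = 80        # memory of one high-end accelerator (assumed)

weights_gb = params * bytes_per_param / 1e9
gpus_just_for_weights = math.ceil(weights_gb / gpu_memory_gb)

print(f"Weights alone: ~{weights_gb:,.0f} GB")
print(f"Accelerators needed just to hold one copy of the weights: {gpus_just_for_weights}")
# In practice you also need memory for activations and caches, plus many
# replicas to serve millions of concurrent users, which is where the
# 'supercomputer' scale comes from.
```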
1:41 am
part of the cost is the data scientists, but that's just a fraction; most of the cost right now is the infrastructure of making this happen. but a lot of companies like google are trying to make their models leaner and more streamlined to reduce these costs, so that's a big movement in ai, to try to make these models not as expensive as they are today. ai's applications extend to space exploration and environmental monitoring. so how are ai algorithms being employed to analyze vast data sets from satellites and aid in climate research and disaster response? this is more of the traditional sweet spot for ai; it would be more like machine learning, and then some of the more advanced deep learning. this type of capability has been around for decades; it used to be called analytics, or maybe business intelligence, but it's really about big data. you take a huge amount of data and you find patterns in that data. and you know,
1:42 am
it was difficult 20 years ago because you just didn't have the data. but we have thousands of science satellites in space collecting huge amounts of data. we have so many sensors: your car collects huge amounts of data and sends it into the cloud, your refrigerator creates data, your tv creates data, your phone too; pretty much everything has a computer and is streaming data, spinning off data, so we have a lot of data to work with. part of this is to try to find those patterns, and these models can find patterns in these data sets that humans really can't, because it's just so much data; sometimes we don't see certain patterns that the machines can see or correlate.
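As a concrete illustration of the kind of pattern-finding Tom is describing, here is a minimal sketch using an off-the-shelf, open-source model to flag unusual readings in a large sensor dataset; the synthetic "satellite readings" and feature names are purely hypothetical.

```python
# Minimal sketch: an off-the-shelf anomaly detector scanning a large set of
# sensor readings for values that don't fit the overall pattern.
# All data and feature names here are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend these are hourly readings from a weather satellite:
# columns = [surface_temperature_K, humidity_fraction]
normal = rng.normal(loc=[288.0, 0.55], scale=[3.0, 0.05], size=(10_000, 2))
unusual = rng.normal(loc=[305.0, 0.15], scale=[1.0, 0.02], size=(20, 2))
readings = np.vstack([normal, unusual])

# Off-the-shelf model; fit_predict returns -1 for readings flagged as anomalous.
detector = IsolationForest(contamination=0.005, random_state=0)
labels = detector.fit_predict(readings)

print(f"Flagged {np.sum(labels == -1)} of {len(readings)} readings as unusual")
```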
1:43 am
we see a lot of progress in areas people are interested in, like climate change: weather forecasts now run on really sophisticated ai models, and they do get better and better over time. the weather forecasting isn't as great here, but usually, in other parts of the world, the weather forecasting is better, and a lot of that has to do with the availability of the data and the sophistication of the underlying models. but i think there's a misconception that it's all about the models, and that's not the case; a lot of it has to do with the data. you can have really nice models, but if you don't have the right data sources, it really doesn't matter much. and besides, a lot of these models are off the shelf, open source and available from academic research, so a lot of what makes ai successful is how you use the data, not so much the models. tom, autonomous weaponry and ai-driven cyber attacks are also emerging threats. how do we regulate and control the development of ai technologies with the potential for causing harm or engaging in malicious activities? as of now, at least for the united states,
1:44 am
which is different from other countries, the united states does not have any federal regulation of ai. there's really no state regulation either; at the state level you might have privacy regulations about how you can use private information, but if someone goes and uses ai as a way to create a weapon or to create malware, you know, to hack into a system, there's no law against that as such. now, there are clearly laws against using weapons of mass destruction or hacking a system, but the creation of them using ai is not prohibited per se. the only regulation that we have at this point is self-regulation. so if i go to chatgpt, there are guardrails in there: if i try, say, to create some malware and try to trick it, usually it figures out my tricks and it won't do it. these ai companies usually have usage policies that say you can't use these tools
1:45 am
to create weapons or malware, or to harass people, or to engage in certain activities, and they do try to stamp that out and prevent it. but there are ways people can get around that; if someone wants to do something evil and they have the time, they just have to be right once. that is known as a jailbreak or prompt injection: going into the systems and tricking them into engaging in certain nefarious activities. thank you so much, tom, but please stick around. author tom taulli will stay with us right here after the break. and when we come back: tech giants are still battling out who will dominate the ai world. we'll have more after the break.
1:46 am
1941: with the nazis' help, radical ultranationalists, the ustasha, proclaim the independent state of croatia. shortly after seizing power, they build the jasenovac concentration camp, a place associated with the worst atrocities committed in yugoslavia during world war two. the ustasha used the camp system to isolate and exterminate serbs, roma, jews and other non-catholic minorities, and political opponents of the fascist regime. conditions in the jasenovac camp were horrendous; the guards tortured and terrorized the prisoners they sent to the concentration camps, so most of them died. it was incredible genocide.
1:47 am
the rivalry between mark zuckerberg and elon musk continues to heat up over their pushes into the artificial intelligence world. zuckerberg's meta is set to release an open-source version of its ai language model, allowing startups and developers to build tools and software of their own using this tech. this approach runs counter to the proprietary systems developed by musk's openai. musk has often expressed concerns about the potential dangers of ai and has called for more regulation and oversight over ai development. he also warned that ai could potentially become a threat to humanity if not controlled properly, while calling the facebook ceo's knowledge of the field limited. zuckerberg, on the other hand, has a more optimistic view of artificial intelligence. he believes that it has the
1:48 am
potential to bring significant benefits to society, such as improving health care, education, and transportation. he also says that ai can be used to solve many of the world's most pressing problems. but musk isn't the only one: geoffrey hinton, often referred to as the godfather of ai, quit his role at google suddenly so he could speak freely about the dangers of the technology he helped create. in an interview, hinton expressed that chatbots right now are not more intelligent than us, but soon they may be, and that bad actors may use ai in ways that could have a detrimental impact on society, such as manipulating elections or instigating violence. others in the field, such as demis hassabis of deepmind and sam altman of openai, share the same sentiment and are warning against the risk of extinction from ai and calling for a pause in ai development. however,
1:49 am
others doubt these predictions, as they point to the inability of ai systems to handle even the most mundane tasks, like driving a car. despite years of effort and billions of dollars invested, ai can't even handle this one challenge; what chance does the technology have of matching every other human accomplishment in the coming years? so for this and more, let's bring in again tom taulli, author of artificial intelligence basics. now tom, ai's rapid growth has raised concerns about its ethical implications. how can we ensure that ai technologies are developed and used responsibly, and what safeguards are necessary to prevent unintended consequences? i think that the key word is 'unintended'. the other key word is ethics: depending on what country you're from, ethics could be different; different cultures and societies have different views on what ethics is. and
1:50 am
there are some things that are fairly generally agreed upon, but other things maybe not so much, and that's a problem with ethics: whose values do we promote? the other issue is unintended consequences. i do think that, generally, the data scientists who create these systems are good people; i don't think they go out there like mad scientists and try to destroy the world or mankind. there are those, the hackers, who try to do that, but those who work at companies and have a regular job as data scientists are mostly on the up and up. but it's the unintended consequences. because maybe i'm white and male, and i went to stanford, and all my friends are white and male and went to stanford, and i might have a certain world view. i have
1:51 am
a world view, and i may not see certain things in my analysis, or may overlook certain things, that someone who is not part of that category would look at and say, wait a minute, you're doing something. you're not doing it intentionally, but just because of the way you were brought up and the way you look at things, you're creating certain systems that have these types of outputs. and also, they'll say, well, maybe don't use gender in the data, for example, in the analysis. so if we have a credit system that extends credit to an individual, we won't use gender. but there may be other data that can reflect gender, and that can be very nuanced: maybe the type of profession they have, or what school they went to, or what degrees they have, and that might reflect gender, and that may skew it in a certain direction and discriminate, or introduce bias into the outcomes.
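Tom's point about proxy variables can be made concrete with a small sketch: even after gender is dropped from a hypothetical credit dataset, a correlated feature can still carry the same signal. All data, feature names and numbers below are synthetic and purely illustrative.

```python
# Minimal sketch of the proxy problem: a protected attribute (gender) is
# dropped, but a correlated feature still lets a model recover it, so
# decisions based on the remaining features can still skew by gender.
# All data and feature names are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000

gender = rng.integers(0, 2, size=n)                       # protected attribute
# A "profession code" strongly (but not perfectly) associated with gender.
profession = np.where(rng.random(n) < 0.8, gender, 1 - gender)
income_k = rng.normal(50.0, 15.0, size=n)                 # income in $1,000s, unrelated

# The modeller drops gender and keeps only the "neutral" columns...
X = np.column_stack([profession, income_k])

# ...but a simple probe model can still recover gender from the proxy.
probe = LogisticRegression(max_iter=1000).fit(X, gender)
print(f"Gender recoverable from 'neutral' features: {probe.score(X, gender):.0%} accuracy")
```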
1:52 am
so these are really difficult questions, and i do think part of dealing with them is having more diversity in the companies and people who are creating these models. i think that will go a long way in helping this out, but right now, if you look at a lot of these companies, it tends to be very monolithic in terms of those who are creating these models. one of the major concerns with ai is bias and discrimination in algorithmic decision-making. how can we address the issue of bias, especially in sensitive areas like criminal justice and hiring processes? well, some companies just don't do it; some large companies do not use ai to hire employees. they may use technology to help streamline certain processes, but they're not going to use it to screen employees, and i think other companies are
1:53 am
doing the same. and we're also seeing that even with companies that create facial recognition, which has been used in the justice system, there are companies like microsoft and ibm that are saying we don't want this technology used for that purpose. so again, it's more like self-regulation. and i think part of it is the accuracy: these systems are not 100 percent correct, and if they're not 100 percent correct and the stakes are extremely high, you know, whether someone has their liberty or is put in jail, then the accuracy has to be there, and if it's not, if you can't prove it, it should not happen. the problem is we don't have regulation on that in the united states, so there are jurisdictions that will use this technology. and i think there was a case recently where a woman was misidentified as being a suspect, and she was
1:54 am
put in jail for something that wasn't her doing. besides the horrible situation that the person was put into, when these incidents start happening, it lowers the trust in ai. so it's incumbent upon the industry to create standards, because if people see this as being a force for bad, and harm, and evil, they won't want any part of what ai has to offer. so i think the industry is trying to be responsible here, but there's probably a lot more that needs to be done. now, the rise of ai-powered deepfake technology has the potential to disrupt trust and authenticity. what strategies can we adopt to combat the spread of misinformation and manipulated media? oh, that's a tough one. it's kind of like cyber security, where just when you feel like
1:55 am
you've prevented something from happening, a hacker comes up with something new. i think that's the case with deepfakes: deepfakes are achieving levels where it is really difficult to differentiate between real and not real. and because of this generative ai technology, it's not even just audio, it's video; the videos are getting more sophisticated. it may come to the point where you're talking to somebody and think you're talking to a real person, but you're not, and it could be on a zoom call or anything like that. and then maybe it could result in a fraud where they want to get some money or make a money transfer or something like that. so how do you combat this? what we've done so far is kind of taking more of
1:56 am
a cyber security approach to it. we have systems out there that can, a lot of times, find irregularities in these fakes, and they do so in real time; it'll be things like the angle of the face versus someone's eyes, or the lips being almost there but not quite when it comes to the synchronization with the voice. but at some point you're not going to be able to tell it's a deepfake, and i'm not sure what happens then. so i think it could be that we just have double-checks in our systems; we just don't always take things at face value, because even though we think we're talking to a human, there's always the possibility that we're not. so if there's something important, because of that, make sure you double-check. and so if someone says, you know,
1:57 am
on a zoom call, 'i want to make this wire transfer,' don't do it just based on that call. maybe make a call somewhere else, to whatever department, the payables department, and just say, i want to make sure i'm supposed to do this and everything's okay. so it will probably add friction, unfortunately, to activities. i don't think we're there yet, but there have been examples of where that's happened, where there's been fraud committed and people thought they were talking to somebody who was actually ai-generated. thank you so much, tom, for all your time today. now, the rise of technology giants such as google or microsoft has meant that these companies have a near-total monopoly on how people access information through search and chat. chatgpt has created a one-stop place where you can ask anything and get a simple, satisfactory answer back. so instead of a big web with millions of sites,
1:58 am
a few companies will hold all of the answers. the monopolization of the internet market in the past couple of years was not even close to what could happen next, as ai queries quickly become the new norm. i'm christy ai, thanks for watching, and we'll see you right back here next time on the cost of everything.
1:59 am
this is indeed reality. actually, this is a very bold decision by the german government to even stop, because when the agreement was reached, what they are giving us just goes to the polluting countries, and with what has been happening here, it will open a pandora's box: the next thing, they will also have to pay the
2:00 am
we did not sleep last night; we had very loud noise coming as a result of the ongoing clashes between the idf soldiers and fighters in gaza, a relentless idf assault on gaza. israel has agreed upon a four-day truce with hamas, qatar said within the past hour, and the exact time will be announced. some locals in the enclave say they're not optimistic about the deal; a ceasefire, they say, feels like a mockery, and they will continue to survive on the bare essentials even if the truce goes ahead.