tv Democracy Now LINKTV June 1, 2023 8:00am-9:01am PDT
8:00 am
06/01/23 [captioning made possible by democracy now!] amy: from new york, this is democracy now! >> the nightmare scenario i can imagine with ai is robots that have become so powerful they are able to control or eliminate humans without their knowledge. this could lead to a society where the rights of individuals are no longer respected. amy: a group of leading artificial intelligence experts
8:01 am
are warning ai poses a societal-level risk of extinction on par with pandemics and nuclear war. we will speak to one of the godfathers of ai and two other leading experts on the rapid rise of ai technology and the risks it poses. plus, as the house approves a bipartisan deal to suspend the debt ceiling, we will look at why student debt relief advocates oppose a controversial part of the deal. >> the current debt ceiling agreement as it stands now has a provision that completely ends the pause on student debt payments and codifies it so it can never be extended again. amy: all that and more, coming up. welcome to democracy now!, democracynow.org, the war and peace report. i'm amy goodman.
8:02 am
the house of representatives has approved legislation to suspend the debt ceiling just days before the u.s. is set to run out of money to pay its bills. wednesday's vote was 314-117, with the majority of house democrats voting in favor. if approved by the senate, it will cap domestic spending below the current rate of inflation while allowing larger increases to the military budget. house minority whip katherine clark said republicans had forced democrats' hand. >> there is no perfect negotiation when you are the victims of extortion. nobody likes to pay a ransom note, and that is exactly what tonight's vote is. amy: some republicans voted against the deal saying it did not cut enough from social programs. meanwhile, nearly 40 members of the congressional progressive caucus voted no. congressmember pramila jayapal of washington slammed the legislation saying it "rips food assistance away from poor people and disproportionately black and
8:03 am
brown women, pushes forward pro-corporate permitting and a pipeline in direct violation of the community's input, and claws back nearly 25% of the funding democrats allocated for the irs to go after wealthy tax cheats." the bill now heads to the senate, which will need to act quickly to avoid a catastrophic default on u.s. debts before a june 5 deadline set by the treasury department. in sudan, at least 19 people were killed and more than 100 injured as shells fired from a tank hit a market in a neighborhood south of khartoum. the deadly violence came after sudan's army said it was abandoning u.s.- and saudi-brokered ceasefire talks with the rival rapid support forces paramilitary group. on wednesday, the u.n.'s world food programme said aid workers had been able to reach civilians in the khartoum metropolitan area for the first time since fighting broke out april 15.
8:04 am
the wfp says sudan's conflict could increase the number of people facing food insecurity by about 2.5 million in the coming months. ukrainian officials say russian air strikes on kyiv overnight killed three people, including a mother and her 11-year-old daughter. those attacks came as pro-russia officials said five people had been killed by artillery fire in a russian-occupied village in eastern ukraine. russia also said ukrainian drones struck two oil refineries near russia's biggest oil export terminals, triggering a large fire at one of the sites. in washington, d.c., white house spokesperson john kirby announced the biden administration is sending ukraine an additional $300 million in military equipment, including patriot air defense systems, anti-aircraft stinger missiles, and more ammunition. north korea has failed in its first attempt to put a spy satellite into orbit. on wednesday, a north korean rocket crashed into the sea west of south korea after its second stage reportedly lost thrust.
8:05 am
kim yo jong, the powerful sister of leader kim jong un, said afterwards north korea would attempt more satellite launches. the rocket's launch triggered air raid sirens in south korea and japan. in seoul, a presidential alert sent to all mobile phones warned 10 million citizens to prepare to evacuate, triggering panic. seoul's mayor later defended the alert. >> the emergency message could be seen as an overreaction, but it was not a mistake. our principle was to respond in a manner that could be seen as excessive; there can be no compromise on safety. amy: the tokyo electric power company is facing mounting opposition over its plans to pump 1.3 million tons of contaminated wastewater from the fukushima nuclear plant into the sea. the water contains dangerous radionuclides from the 2011 meltdown of three reactors at the site, triggered by a massive earthquake and tsunami.
8:06 am
this week the u.n.'s international atomic energy agency signed off on the planned release of the waste water by the plant's operator, the tokyo electric power company. the plan has triggered protests in japan and in south korea, where anti-nuclear activists recently staged demonstrations. >> we can't believe the japanese government's plan to release nuclear contaminated water has gone as far as it has. the impact will be enormous. i think south korea and the neighboring countries should work together to postpone a decision on nuclear polluted water discharge. amy: in tennessee, a 32-year-old woman has received an emergency hysterectomy after doctors initially refused to provide an abortion to end her high-risk pregnancy. the surgery left mayron hollis unable to bear more children. hollis was finally granted surgery nearly a week after she was first admitted to a tennessee hospital for excessive bleeding. she only survived after receiving a large transfusion of
8:07 am
blood. tennessee law criminalizes performing an abortion as a felony punishable by up to 15 years in prison, with only a narrow exception for medical emergencies. meanwhile, the biden administration says it's reconsidering a plan to move the headquarters of the u.s. space command to alabama, citing the state's near-total ban on abortion. in response, republican lawmakers from alabama threatened to freeze federal spending on space command's temporary headquarters in colorado springs. in news from georgia, a police swat team raided a home in atlanta wednesday and arrested three organizers with the atlanta solidarity fund. the nonprofit group has been raising money to bail out protesters opposed to the construction of a massive police training facility known as cop city. lawyers have described the raids and arrests on the bail fund as unprecedented. on wednesday, kamau franklin with community movement builders
8:08 am
spoke at a rally outside the dekalb county jail. >> we will not be intimidated. we will not be stopped. we will not be out-organized. we will not let them, time and time again, use the only thing they have at their disposal, which is the brutal nature of their police force, to stop us. we will fight back here in the city of atlanta. we will fight back in the state of georgia. [indiscernible] amy: the raid on the bail fund comes as 42 activists face domestic terrorism charges for opposing cop city. the atlanta city council is expected to hold a key vote on cop city on june 5. the city recently admitted the public cost of the project will top $67 million, twice as high as originally stated.
8:09 am
to see all our interviews with activists about cop city, go to democracynow.org. a jury in los angeles has found former sitcom star danny masterson guilty of raping two women he met through the church of scientology. the jury hung on a third rape charge. witnesses testified during the trial officials in the church of scientology pressured them not to talk to police about the rape allegations. masterson is best known for his starring role in the sitcom "that '70s show." he faces 30 years to life in prison at a sentencing hearing scheduled for august. nasa has held its first public meeting of a panel studying unidentified aerial phenomena, commonly known as ufo's. the 16-member panel said more high-quality data is needed to explain many of the sightings but that there's no evidence any of them could be explained by extraterrestrial origins. wednesday's event at nasa headquarters in washington, d.c., came after congress last year held its first hearings on the subject in over half a century.
8:10 am
the 2024 field of republican presidential candidates is expected to soon grow. former vice president mike pence, former new jersey governor chris christie, and north dakota governor doug burgum are all expected to formally launch campaigns next week. many political analysts say the crowded field could help increase donald trump's chances of securing the republican nomination again. and here in new york, the elected student speaker at the cuny school of law is facing death threats after she criticized israel's treatment of palestinians during her speech. fatima mohammed, who is yemeni-american, addressed the law school's graduating class in early may after being elected to speak by her classmates. >> israel continues to indiscriminately rain bullets and bombs on worshipers, murdering the old, the young, attacking even funerals and graveyards as it encourages
8:11 am
lynch mobs to target palestinian homes and businesses as it imprisons its children, as it continues its project of settler colonialism, expelling palestinians from their homes. amy: she also spoke out against white supremacy and the new york police department. she has since faced a torrent of criticism from "the new york post" and other right-wing media outlets and republican politicians, as well as some democrats, including congressmember ritchie torres and new york city mayor eric adams. on tuesday, more than two weeks after mohammed spoke, the cuny board of trustees and the chancellor responded to the criticism by declaring mohammed's remarks to be a form of "hate speech." however, mohammed has received public support from the jewish law students association at cuny and other groups. the new york city chapter of jewish voice for peace issued a statement saying -- "we decry the false characterization of her speech as anti-semitic simply because she accurately describes the
8:12 am
conditions palestinians live under every day." and those are some of the headlines. this is democracy now, democracynow.org, the war and peace report. i'm amy goodman, joined by my democracy now! co-host nermeen shaikh. hi, nermeen. nermeen: hi, amy. welcome to all of our listeners and viewers from around the country and around the world. amy: coming up, a group of leading artificial intelligence experts are warning ai poses a societal level risk of extinction on par with pandemics and nuclear war. stay with us. ♪♪ [music break]
8:14 am
amy: that was from the movie "m3gan," a horror comedy about a mean girl robot named megan. this is democracy now, democracynow.org, the war and peace report. i'm amy goodman with nermeen shaikh. we begin today's show looking at growing alarm over the potential for artificial intelligence to lead to the extinction of humanity. the latest warning comes from hundreds of artificial intelligence, or ai, experts as well as tech executives, scholars, and others like climate activist bill mckibben who signed onto an ominous, one-line statement released tuesday that reads -- "mitigating the risk of extinction from ai should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." among the signatories to the letter released by the center for ai safety is geoffrey hinton, considered one of three godfathers of ai. he recently quit google so he could speak freely about the dangers of the technology he
8:15 am
helped build, such as artificial general intelligence, or agi, in which machines could develop cognitive abilities akin or superior to humans sooner than previously thought. >> i have always assumed that the brain was better than the computer models we had. i had always assumed that by making the computer models more like the brain, we would improve them. my epiphany was a couple of months ago, when i suddenly realized maybe the computer models we have are better than the brain. and if that is the case, then maybe quite soon they will be better than us. so the idea of super intelligence, instead of being something in the distant future, might come sooner than expected. the existential threat, that it might wipe us out, that is like nuclear weapons, because nuclear weapons have the possibility
8:16 am
they would wipe out everybody. and that is why people could cooperate on preventing that. the existential threat, i think maybe the u.s. and china and europe and japan can all cooperate on trying to avoid that existential threat, but the question is, how should they do that? i think stopping development is infeasible. amy: many have called for a pause on introducing new ai technology until strong government regulations and a global regulatory framework are in place. joining hinton in signing the letter was a second ai godfather, yoshua bengio, who joins us now for more. he is a professor at the university of montreal, founder and scientific director of the mila -- quebec artificial intelligence institute. in 2018 he shared the prestigious computer science prize, the turing award, with geoffrey hinton and yann lecun. professor bengio is also a
8:17 am
signatory of the future of life institute open letter calling for a pause on large ai experiments. professor bengio, welcome to democracy now! as we talk about an issue i think most people cannot begin to comprehend. if you could start off by talking about why you have signed this letter warning of extinction of humanity, but talk about what ai is first. >> thank you for having me. thank you for talking about this issue that requires more awareness. the reason i signed this is, like geoff, i changed my mind in the last few months. what triggered this for me is interacting with chatgpt and seeing how far we had moved much
8:18 am
faster than i anticipated. i used to think that reaching human-level intelligence with machines could take many more decades, if not centuries. the progress of science seemed to be, well, slow. as researchers, we tend to focus on what does not work. but right now, we have machines that pass what is called the turing test, which means they can converse with us and they could easily pass as being humans. that was supposed to be a milestone for human-level intelligence. i think they are still missing a few things, but that kind of technology could already be dangerous to destabilize democracy through disinformation, for example. but because of the research that is currently going on to bridge the gap with what is missing from current large language models,
8:19 am
it is possible the horizon i was placing many decades in the future is just a few years in the future. that could be very dangerous. it would suffice for a small organization or somebody with crazy beliefs -- conspiracy theorists, terrorists, military organizations -- to decide to use this without the right safety mechanisms, and it could be catastrophic for humanity. nermeen: professor, it would be accurate then to say that the reason artificial intelligence and concerns about artificial intelligence have become the center of public discussion in a way they have not previously been is because the advances that have occurred in the field have surprised even those who are
8:20 am
participating in it, including the lead researchers in it. if you could elaborate on the question of super intelligence, especially the concerns that have been raised about unaligned super intelligence. and also the speed at which we are likely to get to unaligned super intelligence. >> yeah. the reason it was surprising is that in the current systems, from a scientific perspective, the methods used are not that different from the things we knew just a few years ago. it is the scale at which they have been built, the amount of data, the amount of engineering that has made this really surprising progress possible. so we could have similar progress in the future because of the scale of things. the problem -- first of all, why are we concerned about super intelligence?
8:21 am
the question is, is it even possible to build machines that will be smarter than us? the consensus in the scientific community is that our brain is a very complicated machine. there is no reason to think that in principle we couldn't build machines that would be at least as smart as us. there's the question of how long it will take. we will discuss that. in addition, as geoffrey hinton was saying in the clip we heard, computers have advantages that brains don't have. for example, they can talk to each other at very high speeds and exchange information. for us, we are limited by the very few bits of information per second that language allows us. that gives them a huge advantage to learn faster. for example, the systems today already can read the whole internet very quickly, whereas a human would require 10,000 years
8:22 am
of their life reading all the time to achieve the same thing. they can have access to information and share that information in ways that humans don't. it is very likely, as we make progress towards understanding the principles behind human intelligence, that we will be able to build machines that are smarter than us. why is it dangerous? because if they are smarter than us, they might act in ways that do not agree with what we intend, with what we want them to do. it could be for several reasons, but this question of alignment is difficult to state -- to instruct a machine to behave in a way that agrees with our values, our needs, and so on. we can say it in language, but it might be understood in a different way. that can lead to catastrophes, as has been argued many times.
8:23 am
but this is something that already happens -- this alignment problem already happens. for example, you can think of corporations not being quite aligned with what society wants. society would like corporations to provide useful goods and services, but we can't dictate that to corporations directly. instead, we have given them a framework where they maximize profit under the constraints of the laws, and that may work reasonably well but also has side effects. for example, corporations can find loopholes in those laws or, even worse, they can influence the laws themselves. this can happen with ai systems that we are trying to control. they might find ways to satisfy the letter of our instructions but not the intention, the spirit of the law. that is very scary.
8:24 am
we don't fully understand how these scenarios would unfold, but there is enough danger and enough uncertainty that i think a lot of attention should be given to these questions. nermeen: i would like to bring in our other guest, max tegmark, an mit professor focused on artificial intelligence research. he wrote a recent piece for "time magazine" titled "the 'don't look up' thinking that could doom us with ai." >> i am not hearing anything. nermeen: could we please take care of that? can you hear me now? >> i am hearing professor bengio. when does this end? nermeen: professor bengio, let's go back to you. if you could explain whether you think it will be difficult to regulate this industry,
8:25 am
artificial intelligence, despite all of the advances that have already occurred, how difficult will regulation be? >> now i am hearing amy. when does this segment end? nermeen: professor bengio, can you hear me? >> i do. i am also hearing max. nermeen: if you could explain how difficult you think regulating artificial intelligence will be despite so many advances occurring so rapidly. >> even if something seems difficult, like dealing with climate change, and even if we feel it is a hard task to do the job and convince enough people in society to change in the right ways, we have a moral duty to try our best. the first thing we have to do with ai risks is get on with
8:26 am
regulation. set up governance frameworks, both in individual countries and internationally. when we do that, it is going to be useful for all the risks -- we have been talking a lot about extinction risks, but there are other risks that are shorter-term. risks to destabilize democracy. if democracy is destabilized, that is bad in itself, but it is also going to hurt our ability to deal with existential risks. there are other risks that are going on with ai. discrimination, bias, privacy, and so on. we need legislative and regulatory bodies, and what we need is a regulatory framework that
8:27 am
will be adaptive. there are a lot of unknowns. it is not like we know precisely what things could happen. we need to do a lot more in terms of monitoring and validating, and we need to control access so that not any bad actor can easily get their hands on the dangerous technologies. we need the body that will regulate, or the bodies across the world, to be able to change their rules as nefarious uses show up or technology advances. that is a challenge, but i think we need to go in that direction. amy: i want to try once again to bring max tegmark into the conversation. max tegmark is an mit professor focused on artificial intelligence. his recent time magazine article is "the 'don't look up' thinking
8:28 am
that could doom us with ai." if you could explain that point. also, why you think right now -- you know, many people have just heard the term chatgpt for the first time in the last months. the general public has become aware of this. and how you think it is most effective to regulate ai technology? >> thank you. thank you for the great question. i wrote this piece comparing what is happening now in ai with the movie "don't look up" because we are all living this film. we are, as a species, confronting the most dramatic thing that has ever happened to us, where we may be losing control over our future, and almost no one is talking about it. i'm so grateful to you and others for starting to have that
8:29 am
conversation now. that is of course why we had these open letters that you just referred to here, to really help mainstream this conversation that we have to have. people previously used to make fun of you if you brought up that we could lose control of this and go extinct, for example. nermeen: you have drawn analogies when it comes to regulation with the regulations that were put in place on biotech and physics. could you explain how that might apply to artificial intelligence? >> yeah. to appreciate what a huge deal it is when the top scientists in ai are warning about extinction, it is good to compare with the other two times in history that it has happened, that leading scientists warned about the very thing they were making happen. once in the 1940's when they
8:30 am
started warning about nuclear armageddon, and it happened again in the early 1970's with biologists saying, hey, maybe we should not start making clones of humans and editing the dna of our babies. the biologists have been a big success story here. it should inspire ai researchers today, because it was deemed so risky that we would lose control over our species back in the 1970's that we decided as a world society to not do human cloning, to not edit the dna of our offspring. and here we are with a really flourishing biotech industry that is doing so much good in the world. so the lesson here for ai is that we should become more like biology. we should recognize that in biology, no company has the right to just launch a new
8:31 am
medicine and start selling it in supermarkets without first convincing experts from the government that it is safe. that is why we have the food and drug administration in the u.s., for example. with particularly high-risk uses of ai, we should aspire to something similar, where the onus is on the companies to prove something extremely powerful is safe before it gets deployed. amy: last fall, the white house office of science and technology policy published a blueprint for an ai bill of rights and called it a vision for protecting our civil rights in the algorithmic age. this comes amid growing awareness about racial biases embedded in artificial intelligence and how that impacts the use of facial recognition programs by law enforcement and more. i want to bring into this conversation tawana petty,
8:32 am
director of policy and advocacy at the algorithmic justice league and a longtime digital- and data-rights activist. tawana petty, welcome to democracy now! you're not only warning people about the future, you're talking about the uses of ai right now and how they can be racially discriminatory. can you explain? >> thank you for having me, amy. absolutely. i must say the contradictions have been heightened with the godfather of ai and others speaking out and offering these particular letters where they're talking about these futuristic potential harms. however, many women have been warning about existing harms of artificial intelligence for many years prior to now.
8:33 am
you just mentioned dr. nelson's blueprint for an ai bill of rights, which is asking for five core principles -- safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration, and fallback. at the algorithmic justice league, we have been responding to discrimination that dates back many years prior to this most robust narrative-reshaping conversation that has been happening over the last several months with artificial general intelligence. so we are already seeing harms with algorithmic discrimination in medicine. we are seeing the pervasive surveillance that is happening with law enforcement using face detection systems to target community members during protests, squashing not only our
8:34 am
civil liberties and rights to organize and protest, but also the misidentifications that have led to false arrests -- we have seen two prominent cases that started off in detroit. so there are many examples of existing harms, and it would've been really great to have these voices -- mostly white men in the tech industry who did not pay attention to the voices of all those women who were lifting up these issues many years ago. and they're talking about futuristic possible risks when we have so many risks happening today. nermeen: professor max tegmark, if you could respond to what tawana petty said and the fact that others have also said the risks have been largely overstated in the letter and, more importantly, given what tawana petty has said, that it distracts from already existing effects of artificial
8:35 am
intelligence that are widely in use already? >> i think this is a really important question. there are people who say one of these kinds of risks distracts from the other. i strongly support everything we heard here from tawana petty. these are important problems, examples of how we are already giving too much control to machines. but i strongly disagree that we have to choose between worrying about one kind of risk or the other. that is like saying we should stop working on cancer prevention because it distracts from stroke prevention. these are all incredibly important risks. i've spoken up a lot on social justice risks and threats as well. it plays into the hands of the tech lobbyists if it looks like there is infighting between people who are trying to rein in big tech for one reason and people trying to rein in big
8:36 am
tech for other reasons. let's work together. society can work on both cancer prevention and stroke prevention. we have the resources. we should be able to make sure we don't go extinct and look at social justice issues. we might be losing total control of our society relatively soon. it can happen in the next few years. it could happen in a decade. once we are all extinct, all of these other issues cease to matter. let's work together to tackle all the issues so we can actually have a good future for everybody. amy: tawana petty, and then i want to bring back yoshua bengio -- tawana petty, what needs to happen at the national level? u.s. regulation.
8:37 am
then i want to compare what is happening here with what is happening in canadian regulation and the eu, the european union, which seems like it is about to put in place the first comprehensive set of regulations. >> so the blueprint was a good model to start with, and we are seeing some states adopt it and try to roll out their versions of an ai bill of rights. the president issued an executive order on racial equity and support for underserved communities which addresses algorithmic discrimination specifically. you have the national institute of standards and technology that issued an ai risk management framework that breaks down the various types of biases that we find within algorithmic systems, like systemic, computational and statistical, and human-cognitive. and there are so many other legislative opportunities that are happening on the federal level. you see the
8:38 am
federal trade commission, which has spoken up on algorithmic discrimination. you have the equal employment opportunity commission, which has issued statements. you have the consumer financial protection bureau, which has been adamant about the impact algorithmic systems have on us when data brokers are amassing massive amounts of data that have been extracted from community members. i agree that there needs to be some collaboration and cooperation, but we have seen situations like people being terminated from google before chatgpt was launched. cooperation has not been lacking on the side of the folks who work in ethics. to the contrary, these companies have terminated their ethics departments and the people who have been warning about existing harms. amy: professor bengio, if you can talk about the level of regulation and what you think
8:39 am
needs to happen, and who is putting forward models that you think could be effective? >> first of all, i would like to make a correction here. i have been involved in really working towards dealing with the negative social impacts of ai for many years. in 2016, i worked on the montréal declaration for the responsible development of ai, which is very much centered on ethics and social justice. since then, i've created an organization within the research center that i head which is completely focused on human rights. i think these accusations are false. as max was saying, we don't need to choose between fighting cancer and fighting heart disease. we need to do all of those things. but better than that, what is
8:40 am
needed in the short term, at least, is building up these regulations. it is going to help mitigate all of those risks. i think we should work together rather than having these accusations. nermeen: professor bengio, i would like to ask you about some of the work you have done with respect to human rights and artificial intelligence. earlier this month, a conference on artificial intelligence was held in rwanda. you were among those who were pushing for the conference to take place in africa. could you explain what happened at that conference -- 2,000 people, i believe, attended -- and what african researchers and scientists had to say about the goods, the public good that could come from artificial intelligence? in fact, one of the questions raised was why there wasn't more discussion about the public good
8:41 am
rather than just the immediate risk or future risks? >> yes. in addition to the ethics questions, i have been working on the applications of ai in what is called ai for social good. that includes things like medical applications, environmental applications, social justice applications. in those areas, it is particularly important that we bring to the fore the voices of the people who could most benefit and also most suffer. in particular, the voices of africans have not been very present. as we know, the development of this technology is mostly in rich countries in the west. as a member of the board of the conference, one of the main conferences in the field, i've been pushing for many years for us to have the event take
place in africa. so this year was the first. it was supposed to happen before the pandemic, but it was pushed back. what we saw was an amazing presence of african researchers and students, at levels that we could not see before. there are many reasons, but mostly it is a question of accessibility. currently, in many western countries, visas for researchers from africa or other developing countries are very difficult to get. i was fighting, for example, with the canadian government a few years ago when we had a conference in canada. there were hundreds of african researchers who were denied a visa, and we had to go one by one in order to make the case for them. i think it is important that the decisions we make collectively, decisions which affect everyone
about ai, be taken in the most inclusive possible ways. for that reason, we need to think not just about what is going on in the u.s. or canada, but across the world. we need not just to think about the risks of ai we have been discussing today, but also about how we invest more in areas of application that are not profitable for companies but are important to address, for example, the u.n. sustainable development goals, to help reduce misery and deal with medical issues that are not present in the west, like infectious diseases that are mostly in poorer countries.
amy: can you talk about ai and not only nuclear war but, for example, the issue jody williams, the nobel laureate, has been
trying to bring attention to for years, killer robots that can kill with their bare hands? the whole issue of ai when it comes to war and who fights these wars?
>> this is also something i've been actively involved in for many years, in campaigns to raise awareness about the danger of killer robots, also known more precisely as lethal autonomous weapons. when we did this five or 10 years ago, it still sounded like science fiction. but actually, there have been reports that drones have been equipped with ai capabilities, especially computer vision capabilities like face recognition, that have been used in the field in syria, and maybe this is happening in ukraine. so it is already something that
we know -- we know the science behind building these killer drones -- not killer robots. we don't know yet how to build robots that work really well. but if you take drones, we know how to fly them in a fairly autonomous way, and if these drones have weapons on them and these drones have cameras, then ai could be used to target the drone to specific people and kill, in an illegal way, specific targets. that is incredibly dangerous. it can destabilize the military balance we know today. i don't think people are paying enough attention to that. and in terms of the existential risk, the real issue here is if a superintelligent ai also has control of dangerous weapons, i think it is just going to be very difficult for us to reduce
the risks -- the catastrophic risks. we don't want to put guns in the hands of people who are unstable, or in the hands of children, who could act in ways that could be dangerous. and that is the same problem here.
nermeen: professor max tegmark, if you could respond on the military uses -- possible military uses -- of artificial intelligence, and the fact, for instance, that a japanese publication's study earlier this year concluded china is now producing more research papers on artificial intelligence than the u.s. is. you have said this is not akin to an arms race but rather to a suicide race. if you could talk about the regulations that are already in place from the chinese government on the applications
of artificial intelligence compared to the eu and the u.s.
>> you are right, a great question. the recent change this week, when the idea of extinction from ai went mainstream, i think will actually help the geopolitical rivalry between east and west get more harmonious. until now, most policymakers have viewed ai as something that gives you great power. everybody wanted it first. there was this idea that whoever gets artificial general intelligence that can outsmart humans somehow wins. it could easily end up with everybody losing. the big winners are the machines left over after we are all
extinct. it suddenly gives the same incentives to the chinese government and the american government and the european governments -- the chinese government does not want to lose control over its society any more than any western government does. for this reason, we can see that china has already put tougher restrictions on its own tech companies than we in america have on american companies. we don't have to persuade the chinese, in other words, to take precautions, because it is not in their interest to go extinct. it doesn't matter if you are american or canadian once you are extinct.
amy: i know --
>> i should add, so this idea of extinction does not sound like hyperbole, it is important to remember roughly half the species on this planet that were
here a few thousand years ago have been driven extinct already by humans. so extinction happens. it is also important to remember why we drove all of these other species extinct. it wasn't necessarily because we hated the west african black rhinoceros or certain species that live in coral reefs. when we chopped down the rain forest or ruined the coral reefs through climate change, that was kind of a side effect. we just wanted resources. we had other goals, and those goals did not align with the goals of those species. because we were more intelligent than them, they were powerless to stop us. this is exactly what yoshua bengio was warning about for humanity, too. if we lose control of our planet to more intelligent entities and their goals are not aligned with ours, we will be powerless to prevent massive changes they
might do to our biosphere. that is the way in which we might get wiped out, the same way the other half of those species did. let's not do that. there's so much goodness, so much wonderful stuff that ai can do for all of us. let's work together to harness it and steer it in a good direction, curing the diseases that have stumped us, lifting people out of poverty, stabilizing the climate, and helping life on earth flourish for a very long time to come. i hope that by raising awareness of the risks, we will get to work together to build that great future with ai.
amy: tawana petty, moving from the global to the local -- we are here in new york, and new york city mayor eric adams has announced the new york police department is acquiring some new semi-autonomous robotic dogs in this period.
you have looked particularly at their use, and their discriminatory use, in communities of color.
>> i will also say that in michigan, where i live, they have also acquired robot dogs. these are situations that are currently happening on the ground, with law enforcement that is still suffering from systemic racial bias, over-policing and hyper-surveilling marginalized communities. we are looking at these robots now being given the opportunity to police and surveil already hyper-surveilled communities. i would like an opportunity to briefly address the previous comment: my commentary is not to attack any of the existing efforts or the years' worth of work that these two gentlemen have been involved in. i greatly respect efforts to address racial equity and
ethics in artificial intelligence. i agree we need to have some collaborative effort in order to address these existing things that we are experiencing. people are already dying from health discrimination with algorithms. people are already being misidentified by police using facial recognition. government services are utilizing corporations like id.me's facial recognition for access to benefits. we have a lot of opportunities to collaborate now to prevent the existing threats we are currently facing.
amy: tawana petty, speaking to us from detroit, director of policy and advocacy at the algorithmic justice league. yoshua bengio is a professor at the university of montreal. and max tegmark is an mit professor. we will link to your time magazine piece, "the 'don't look
up' thinking that could doom us with ai." coming up, we look at student debt as the house approves a bipartisan deal to suspend the debt ceiling. back in 20 seconds.
♪♪ [music break]
amy: "if i own today" by shayna steele. this is democracy now! i'm amy goodman with nermeen shaikh. we look at how suspending the debt ceiling would end the student loan payment pause,
force payments to resume september 1, and limit new student debt moratoriums. on tuesday, democratic congressmember ayanna pressley filed an amendment to remove this section saying --
for more, we're joined by braxton brewington, press secretary of the debt collective, a group working to end the student loan crisis. welcome back. we only have a few minutes. can you respond to the passage of the debt ceiling legislation and how it impacts student loans?
>> this is president biden turning his back on student debtors. what this provision does is essentially codify an end to the student loan pause: not only do student debtors have to resume payments on september 1, but the pause can never be extended again. what that risks is, should june
come this month and the supreme court rule against student debt relief, student loan borrowers are going to be in what could be the worst financial position they have ever been in: going through covid, having the student debt crisis, but not having their student debt relief, and having to resume costly payments.
nermeen: what is the scale of the student debt crisis, and what impact is this likely to have on literally tens of millions of student debtors in the u.s.?
>> student debt is the second-largest household debt in the entire country. in 2019, 70 people defaulted on their student debt every 60 seconds. senior citizens had their social security checks garnished, veterans had their wages garnished, people were not able to purchase a home.
the crisis of student debt was a large one before covid. now it is exacerbated by covid, where people have lost their health insurance and their wages have stagnated or declined. people after the pandemic may not actually have any relief, because student debt relief may not go through because of the supreme court. that is why the backstop of the pause is so important: it is the only thing keeping student debtors afloat, keeping them from falling into financial decline.
amy: what do you think could happen right now?
>> the vote is set to go through soon on the senate side, and we're still pressuring members of congress to vote no on the provision codifying the end of the payment pause. we're pressuring president biden to uphold his promise. this conservative court, which has frankly ignored the facts of
this case through every step of the process -- there was no fact-finding process -- we are nervous that the supreme court is going to ignore the rule of law, ignore reason, and rule against student debt relief. what president biden can do is use other legal tools at his disposal to ensure people get the relief that they have already applied for, that they have already been approved for, so we don't come out of covid-19 with people in such a really bad financial situation. the biden administration should declare student debt an emergency on its own. there's no reason for the biden administration to put its hands behind its back and allow the republicans to tie it to ending the payment pause and never being able to take action on this really important domestic issue again.
amy: you tweeted --
have you posed this question to the biden administration?
>> we have. so far the biden administration has said they remain confident in the case that they argued before the supreme court. i think our response to that is that the white house can't see the future. so if we are in a scenario with this conservative supreme court -- and we know several members of the court have been corrupted and even bribed -- if we are in a position where the supreme court rules against student debt relief, the biden administration, because of this debt ceiling deal, will have removed their leverage in ensuring student debtors are able to stay financially afloat. it is not too late. the vote has not happened yet. we are encouraging members of congress and the biden administration to strike this provision from the deal so that student loan borrowers are not in such a precarious, bad economic situation, akin to the student debt crisis before covid.
amy: braxton brewington, thank you for being with us, press secretary of the debt collective, a group working to end the student loan crisis. that does it for our show. democracy now! is looking for feedback from people who appreciate the closed captioning. e-mail your comments to outreach@democracynow.org or mail them to democracy now! p.o. box 693 new york, new york 10013. [captioning made possible by democracy now!] ♪♪