
Public Affairs Events | C-SPAN | November 21, 2016, 3:33pm-5:34pm EST

3:33 pm
assisting jody williams in coordinating the international campaign to ban land mines, co-laureate of the 1997 nobel peace prize. finally, the general. he served in the corps of signals of the indian army and retired after 40 years of active military service in april this year. his last appointment was commandant of the military college of telecommunication engineering, which carries out training in the fields of ict, electronic warfare, and cyber operations, and is also designated a center of excellence for the indian army in these disciplines. the general officer has received many awards. i want to call out a few of them. he has been the recipient of an award from the president for distinguished service in the defense forces. he has also been awarded by the department of defense production for r&d work. and last year he was conferred the coveted distinguished alumnus award by the indian institute of technology bombay and is the only defense officer to ever
3:34 pm
hold such an honor. with that, thank you, panel, and i'll turn it over to george. >> great, thank you. [ applause ] what we want to do is have as much of a conversation as possible, first amongst ourselves up here, and then with you all, to draw out a number of the dilemmas in this area and to help identify the questions that might be most worth pursuing as different countries and different actors move down this agenda. so to start us, i want to ask the general to build on what david said a bit. certainly there must be other drivers beyond dealing with numerical asymmetries that would make autonomous systems attractive to a military and to a government in terms of
3:35 pm
problems they solve and advantages they confer. can you give us your perspective: what are the attractions of autonomy in this space? >> i'll start by saying that one can't get away from the fact that weapons are meant to destroy and kill. but they are supposed to destroy military potential, and the idea is not to affect the noncombatants. the noncombatants have to be saved. so a major question you have to ask is, does ai have the potential of reducing the negative, of saving the noncombatants? i feel that, by its character, artificial intelligence has great potential
3:36 pm
towards this goal. having said that as an opening, let's see how warfare has actually changed in the last few decades. two things have happened. on one front, there is a change in the nature of warfare from the conventional to what is normally referred to as fourth generation warfare, where the lines of politics and military are blurring. so there is a different context in fourth generation warfare. india happens to have the context of both conventional warfare as well as fourth generation warfare, and so some of the things which come up in these discussions, at least my examples, really relate to how the benefits turn up in each. the other change in warfare is to do with the information age. here again, you have on the one hand cyber warfare and electronic warfare happening in the information age.
3:37 pm
coming to the relationship with artificial intelligence: because of information coming into the weapons systems, what you have had over the years is greater and greater precision in weapons systems. now, ai again has the potential of increasing this precision, and discrimination, as we'll be discussing, i'm sure, as part of the panel. and that is where, again, fewer and fewer noncombatant casualties will occur. now, when it comes to specifics as to the types of systems, there is an increasing degree of what ai can do. so let's start with just four different examples. you can have a defensive system.
3:38 pm
in a defensive system, for example the handling and defusing of explosive devices, the adversary is not involved, and ai can do a lot in coming up with such systems in any case. at the next level, you have defensive ai. so we talk of systems like phalanx, which have been deployed for decades now. there, missiles are coming in and you're destroying the missiles; autonomous systems, ai autonomous systems, are in place so that casualties are reduced. third, you have precision coming in. so you can have offensive systems. for example, you have armed drones; you already have armed drones in effect, but now you have autonomous armed
3:39 pm
drones, without the pilots -- that's the third level, where offense comes in. at the fourth level, if the graduation of the ai takes place and it develops to the extent where it can also mimic the empathy and judgment aspects, when it graduates to that stage, further savings will be possible. >> thank you. that was a brilliant setup. you raised a number of issues that i think we'll dive further into, including the questions of offense, defense, and other functions. let me turn to mary and in a sense ask you to respond, but in
3:40 pm
particular to the extent this capability allows one to be more discriminating and precise. when you look at parsing what can be advantageous in these capabilities from what should be avoided, can you home in on what the distinctions should be? >> you talked about the dangerous tasks autonomy has been used for in the military: cleaning ships, bomb disposal, robots to assist the soldiers. and now we're moving into a phase where we see greater autonomy in weapons systems, as with an autonomous aircraft that can fly great distances and carry a great payload.
3:41 pm
we mentioned some of these systems in the first report we did back at human rights watch, called "losing humanity." in our view, they were not fully autonomous. they had a degree or nature of autonomy in them, but they were not fully autonomous. we called for a ban on fully autonomous systems. the call is for a future ban, not one on the systems in use today. but we did that because we looked at where the technology was headed, and we said, we're concerned about where this is headed. we're worried about this.
3:42 pm
that was part of the rationale behind forming the campaign to stop killer robots, which launched in 2013 and is still going. it's a global coalition; i coordinate it on behalf of human rights watch. you know, this is not a campaign against autonomy in the military sense. it's not a campaign against artificial intelligence. there are many people working in autonomy and artificial intelligence. it's a campaign to draw the line and establish how far we want to take this. so you can view the call of the campaign as a negative one, calling for a preemptive ban on the development, production, and use of fully autonomous weapons, or you can view it in a positive way, in terms of how we want to retain meaningful human control over weapons systems: not over every aspect of the weapons system, but over the two critical functions
3:43 pm
which in our mind are the selection of a target and the use of force. we know that sounds very easy; it's harder to put into practice. but this is where the debate has been centering for the last few years when it comes to autonomous weapons systems. >> let me draw you out, and then i'll turn to daniel. you talk about drawing the line, and what i take as drawing the line is basically at target selection and the decision to fire, as it were, saying there should be a human there. i get that in a sense, but in terms of objectives, if an objective were to minimize casualties or the risk of indiscriminate, civilian, or
3:44 pm
non-targeted deaths, then if different versions of these weapons could be demonstrated to provide more precision and reduce collateral damage and inadvertent deaths, why should it matter whether a human was in the loop or not? i'm not trying to argue with you. i'm trying to draw you out about the principle of a person in the loop as distinct from the outcomes. i'm related to people that i don't want in the loop. i'm of croatian descent. tell me what's wrong with that.
3:45 pm
>> there are many benefits to employing autonomy in the military sphere. our concern is we're going to have stupid autonomous weapon systems that are weaponized and deployed before we have the smart ones that can do level four, the mimicking of human judgment and empathy, which is further in the future as we understand it. it was first the roboticists and the ai experts who came to us and said, you don't understand what could go wrong when these are deployed in the field. we have many technical concerns. there will be unanticipated
3:46 pm
consequences, and unanticipated things will happen there. but then the other elements of the campaign that have come on board, the faith leaders and nobel peace laureates, are worried this will make it easier to go to war, because you can send the machine rather than the human. of course, we want to try and keep humans out of the fighting as much as possible. but the fear is that if the human soldiers are not in there, just the machines, it will be a worse situation on the battlefield for civilian populations. this is why we see a need to draw the line. >> daniel, let me draw you in on any of this, but in particular
3:47 pm
on how you have thought about whether there's a valid difference between offense and defense, or territoriality. i'm listening to mary, thinking, i totally get that if you're operating on someone else's territory. >> let me start by saying autonomous weapons systems are already here. the issue is no longer only forward facing; it's also current facing. and while we don't know all the autonomous systems out there, because some of them are closely guarded secrets, we know a lot of them. and i think mary is right in one respect: the capability to deploy autonomous systems is still outpacing the capability to train them to be human replacements.
3:48 pm
now, i say that in spite of the fact that computers can beat human beings in chess and in fact anything that requires thinking in speed or numbers of calculations, et cetera. one of the problems we face is that what we want to train the autonomous weapon system to do, we're not sure how to do. let me go into that for one minute, because you'll see me sort of sitting in between the two positions. i used to train soldiers to comply with the laws of war. and when we train human beings to do so, we have a system. it's more or less the same in military organizations. we have a specific set of rules. there's the principle of discrimination: you have to discriminate between a legitimate combatant and a noncombatant. when we think about how to try to teach a computer to do this,
3:49 pm
we're not sure how to do that as human beings. artificial intelligence doesn't learn like a human being. it learns differently, and there are different ways to teach computers, but none of them is putting them in a classroom, giving them a lecture, then taking them into the field and trying out a few dry runs. we learned that the old ways we taught don't work on computers. so the first point i want to stress is that we see a chasm opening between the ability to deploy autonomous systems and the capability of teaching them what the rules are. obviously, that gap will close as computer systems continue to develop, and that is quite possible, but
3:50 pm
to be fair, i think the military hardware is outpacing the ai side currently. that's my first point. my second point is about what is relatively easy to field. we discussed this in e-mails before the panel. most autonomous systems today are still stationary. most. why? because movement for autonomous systems is complex; there are a lot of different types of capabilities involved. now, the oldest autonomous weapons system is the land mine. some people would say it is semiautonomous because of the way it works. if you want to go into detail, take the acoustic mines from
3:51 pm
the 1960s and '70s: small computers loaded with the signatures of enemy ships, and they would only target enemy vessels which met specific acoustic signatures. very primitive. they have been running 40, 50 years right now. they were the first solid autonomous weapons systems in the world. however, they don't go and try to find a target. that adds a level of complexity, which is huge.
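[editor's note: below is a minimal, hypothetical sketch of the kind of acoustic-signature matching described above. the class names, signature values, and tolerances are invented for illustration; this is not the logic of any actual fielded weapon.]

```python
# hypothetical sketch of 1960s/70s-style acoustic signature matching:
# the weapon is stationary and purely reactive, firing only when a
# contact matches a stored enemy signature. all numbers are invented.
from dataclasses import dataclass

@dataclass
class Signature:
    blade_rate_hz: float   # propeller blade rate
    tonals_hz: tuple       # characteristic machinery tones

# stored signatures of enemy vessel classes (illustrative values)
ENEMY_SIGNATURES = [
    Signature(blade_rate_hz=4.2, tonals_hz=(60.0, 120.0)),
    Signature(blade_rate_hz=6.8, tonals_hz=(50.0, 150.0)),
]

def matches(observed: Signature, known: Signature, tol: float = 0.5) -> bool:
    # crude tolerance comparison; the real systems were analog filters
    if abs(observed.blade_rate_hz - known.blade_rate_hz) > tol:
        return False
    return all(
        any(abs(o - k) <= tol for k in known.tonals_hz)
        for o in observed.tonals_hz
    )

def engage(contact: Signature) -> bool:
    # note what is absent: no search, no pursuit, no judgment
    return any(matches(contact, s) for s in ENEMY_SIGNATURES)

print(engage(Signature(blade_rate_hz=4.3, tonals_hz=(60.2, 119.8))))  # True
```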
3:52 pm
the autonomous machine guns connected to land radars, in the countries i know of, et cetera, are staying in one place. when you go into territory where the machine has to learn the environment, that is a very complicated experiment: looking at the surroundings and identifying human from nonhuman and friend from foe. to do that is a complicated experiment. now, i say that with one final comment. i'm not even 100% sure the rules we have actually fit robots. and i'll explain why. you see, we built the rules for combat today for humans, and they come with a few hidden assumptions. one, human beings make mistakes, and we are okay with that. we accept a certain level of risk in combat for human soldiers. you're allowed to make a mistake if you are a soldier. i'll tell you a very sad story. in one of the military operations 14 years ago, terrorists had fielded one-ton
3:53 pm
ieds under the roads to blow up tanks, and the tanks couldn't withstand the blast; they were blowing up very big. one tank was traveling in a location where the crew were really on guard for that, and suddenly they heard this huge boom from the tank. they thought surely they must have gone over an ied. they were searching from the turret, so they looked in the periscope, saw two people, and shot them. then they realized it wasn't an ied. the tank had gone over a huge boulder, and it had sounded like a huge explosion. the two people were innocent. the reality is, in a combat
3:54 pm
situation, the crew killed two people because they thought they were in a confrontational situation. would we be willing to reach the same conclusion with an automated system? are we willing to give the computer the benefit of a mistake? the defenses in criminal proceedings: will we give autonomous systems the defense of necessity? if you want to prevent a bigger harm, you're allowed to -- all of our systems are geared for human beings. so the bottom line is, not only is it difficult to train the robots, i'm not even 100% sure the rules are ready for artificial intelligence. >> i think you're on to something that's obviously
3:55 pm
extremely important, in terms of whether the challenge is to develop rules to deal with ai or rules to deal with an even broader category, and what the expectations are. my sense is that we can all learn a lot in terms of how we think about this, and how we might think about it, by coming from the liability side rather than trying to define autonomy or not and saying, well, autonomy should be avoided. how do you go about it? case law, i think, helps enormously. so i want to come back to that. but i want to pick up on a couple of things you said and bring the general in, and mary also. from anyone's perspective, is the distinction between stationary and mobile an
3:56 pm
important distinction? especially, mary, in the sense that if one thinks about prohibitions or what could be avoided, does it matter? relatedly, the distinction between defense of one's own territory versus action out of territory, which implies mobility. so i'm just trying to see how to think about this. do you want to jump in on those two points and then -- >> can i say a couple of things? >> sure. >> before answering that, a couple of points on what mary said in the last interaction. i think the feeling was that weapons will be deployed before they are ready, and that that is not acceptable. that presumes, i feel, that the
3:57 pm
people who are deploying the weapons are doing it in an irresponsible manner. that feeling is there, but i feel that is not the way the induction of technology is done. we have to look at what is inherently wrong with fully autonomous weapons systems. it is fully autonomous systems and meaningful human control that are being looked at, and there is a weakness in the definition of what a fully autonomous system is: a weapons system which can select and engage targets without human intervention. now, that "and" is important. only selection is accepted.
3:58 pm
only engagement is also accepted; after all, we are only engaging, we are not selecting, and only engaging is acceptable. but selecting and engaging together: that is where the line is being drawn. the decision, it is felt, should not be left to the machines; that the decision should be left to the human is one point of view which could possibly come up. and i would like to make the point that if we are looking at the various technologies, the chain we talk about, where you first identify, nobody objects to
3:59 pm
the navigation, and nobody objects to the selection. in fact, if we look at the u.s. directive 3000.09, it specifically says autonomy in all of these functions is permitted, and it is only the decision to kill that is reserved. the point i want to make is that the complexity is not in that decision. the decision is only one aspect of it. as long as the human is there, there is no technology involved in it. ai brings in the rest of the functions which go into making an autonomous system. that is one point i want to make. if you are thinking about banning the technology, that decision is not the most important part as far as the technology is concerned. now, coming to the question you
4:00 pm
asked about the defense side: any military person would know that when you say defense, it's not only defense; offense is also part of defense. so conceptually the same aspects come into it, and that also comes into offense. so going into offense would really be the same. >> how about on your own territory? you can avoid that question about offense and defense by saying you can operate on your own territory but not outside your territory. >> okay. i have a little more on that. i'll take an example from india. so, for example, the line of
4:01 pm
control. its sanctity is such that it cannot be crossed. if we are looking at that scenario, then if you defend, that defense means preventing entry, and you can have nonmobile robots looking at defense. but in a conventional operation, you also go across; you attack. so what i'm saying is, depending on which backdrop you are looking at, defense may or may not be separate. that's why i'm saying that in general, the distinction between offense and defense may not really be correct from a technology point of view. however, it would be more acceptable to those who do not
4:02 pm
want to delegate to the machines; a defensive system would be more respected. that's the distinction again. >> daniel, just on the territoriality: i was thinking of iron dome, the israeli system which operates over its airspace. and then there's a wall. so i'm thinking of analogies, because the general is talking about the line of control, which separates the parts of kashmir, where there has been firing and lots of movement; in better times there's not meant to be. so you can imagine that kind of boundary where one might put autonomous weapons so there is no infiltration coming across.
4:03 pm
on the other hand, presumably, like last month when india's -- there's movement going back and forth, you might want to turn those systems off so you don't hurt your own, or manage that in some way. so given israel's experience and your experience here, does the distinction of territoriality matter, practically or legally, or no? >> first of all, realistically, if we take the iron dome system, it has been very public: you have a manual setting, semiautomatic, and automatic. so it's a missile defense system. the idea is you want to shoot down the missile over a safe location. it is therefore for the computer to not only know -- every system works like this: it identifies with the radar the incoming missile, or whatever
4:04 pm
is coming in. then it calculates where it's going to hit: because it's on a ballistic trajectory, it will not veer from its track, so you know where it's hitting. so you warn the people in that specific area that they should take cover, et cetera. but then, if it's going to hit in a dangerous place, it calculates where to shoot it down so as to minimize the damage. now, theoretically at least, boundaries are not relevant for that. if you can catch the missile early enough, we wouldn't care if it landed in another country's territory, although the calculation of landing it in an unpopulated area would still be the same for the system. the idea is that the system is not supposed to take boundaries into consideration. it is supposed to take saving human lives into consideration.
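[editor's note: a minimal sketch of the interception logic daniel describes, predicting the ballistic impact point and intercepting only when a populated area is threatened. the drag-free physics, one-dimensional zones, and all values are invented assumptions for illustration, not the actual iron dome algorithm.]

```python
# illustrative sketch: predict where the incoming rocket will land,
# warn that area, and intercept only if the impact point is populated.
# physics is simplified (no drag); every number here is invented.
import math

GRAVITY = 9.81  # m/s^2

def predict_impact_x(x0, y0, vx, vy):
    """Ground-impact x-coordinate for a simple drag-free ballistic arc."""
    # solve y0 + vy*t - 0.5*g*t^2 = 0 for the positive root
    t = (vy + math.sqrt(vy**2 + 2 * GRAVITY * y0)) / GRAVITY
    return x0 + vx * t

def decide(x0, y0, vx, vy, populated_zones):
    impact_x = predict_impact_x(x0, y0, vx, vy)
    for start, end in populated_zones:
        if start <= impact_x <= end:
            # threatened zone: issue the warning, then intercept at a
            # point chosen so the debris falls outside the zone
            return ("warn_and_intercept", impact_x)
    # headed for open ground: let it land, save the interceptor
    return ("ignore", impact_x)

# rocket detected with a populated zone 10-15 km downrange
print(decide(x0=0.0, y0=1000.0, vx=550.0, vy=60.0,
             populated_zones=[(10_000, 15_000)]))
```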
4:05 pm
so my gut feeling is that the stationary versus mobile issue is just a technological difference in complexity. again, i think people will find it easier to accept that you would field such things in your own territory than that you would send them to another country. on the moral, public opinion side, there are arguments to be made that these are initial steps down the road. but from the technological and even from the legal side, i don't really think the distinction holds. >> just on the complexity which you mentioned: the statement, i think, is that systems targeting, let us say, mobile targets are much more complex. now let me just paint the picture. again, i take an example from a conventional war. for example, you have, in an area
4:06 pm
of, let's say, 10 kilometers by 10 kilometers, 100 tanks. so it is a combat environment. now, instead of what you normally do, and this has to do with the military, these are the models in which ai is coming into place in the military: the blue forces will be destroying the tanks. the two sides are on par, except one has autonomous armed drones instead of tanks. so now i'm trying to analyze
4:07 pm
what is the complexity, compared to the technology which is already there, of these drones finding and destroying the tanks? i think the complexity gap is hardly anything for technology which already exists. you only have to pick up signatures, which is pretty simple. in such a scenario, the complexity is not there. the complexity is there where the terrorist is mixed up in a population, so a much finer distinction is needed. that's a complex problem. i just wanted to comment on the
4:08 pm
complexity. >> mary, come in and sort all this out for us. >> i'm just thinking about the international talks we have been participating in for three years. not three years of talks, but three weeks of talks over the past three years. they look for common ground on which the governments will agree; there are about 90 countries agreeing on this. i thought i heard them all saying these don't exist yet: autonomous weapons do not exist yet. we do have some autonomy in place at the moment, but there was pretty widespread acknowledgment that what we're concerned about, the lethal autonomous weapons systems, are still to come. the other thing the states seem to agree on is that international law applies, international humanitarian law, the requirement to review and test your weapons. of course it applies to all of this. and there is the notion of what we are talking about: a weapons
4:09 pm
system that has no further human intervention. there's a fair amount of convergence around that as well. what they haven't been able to do is break it down and get into the nitty-gritty details. they need to spend a week just talking through the aspects, or the elements, or the characteristics that are concerning to us. is it that it's mobile rather than stationary, or that it is targeting personnel rather than materiel targets? is it defensive or offensive, although those words are not so helpful for us either. what kind of environment is it operating in? is it complex, like an urban environment, or are we talking about out at sea or out in the desert? and finally, this one has not really been talked about: what is the time period in which it is operating? because it's no coincidence that the campaign to stop killer robots
4:10 pm
was founded by the people who campaigned to ban land mines. we are concerned that one of these machines could be programmed to go out and search for its target not just for the next few hours, but for weeks, months, years in advance. and then where is the responsibility, if you're putting a device out like that? so that's some of the kind of breakdown we need to have in the process to really get our heads around what the most problematic aspects are here. not every aspect is problematic. that will tell us how to draw the line and how we move forward. >> if states have agreed that the laws of conflict and other international law apply, it seems we have somewhat different circumstances to play out than if they don't agree. dan is shaking his head. tell me why you are shaking
4:11 pm
your head. pick up on this too. >> mary is absolutely right. you know, when i grew up, there was a band called supertramp. >> yeah. we're dating ourselves. >> one of my favorite songs growing up had the opening lyric, take a look at my girlfriend, she's the only one i've got. now, international law is like that: we have no alternative. we don't have a plan b. so as a very old-time international lawyer who deals with this issue, i don't have the rules to apply to this. we have no choice but to say, at the international convention, we will apply existing rules. the part we're not telling them is we don't know how to do that. that's one of the problems. the rules don't work on non-humans, even if you think they do. and because of that, in
4:12 pm
principle, i am one million percent convinced that international law will be applied to artificial intelligence as if they were human beings. in reality, we will be asked to translate it, and we will have a huge new challenge. so that's one of them. >> let me jump right in on this, and we can continue as a conversation. that seems to me one of the strongest arguments for at least a pause, if not a ban: a moratorium. to the extent that what you just said obtains, let's wait until we can sort this out then. so tell me what's wrong with that, if anything; whether the problem is that it's not practical, or something from a legal point of view. >> okay. so i am also a cynical international lawyer, and the reason i am is because i used to do this for a living.
4:13 pm
now, if you look at the list of countries participating in the process, you will not be surprised that the primary candidates are less involved than the countries who are not supposed to be fielding those weapons. in fact, if we take the land mine as a specific example, the countries who joined were, with very few notable exceptions, all countries who do not have land mines. so it is divided into two big groups: some say no more, and, with the exception of three countries, the rest have not joined the regime. as a result, it is not a rule of international law; it is only binding on the member states, which creates a strange principle, because international law is different for every single country. this is part of international law; it is how the system works. for example, for canada, it is unlawful to develop or field an
4:14 pm
antipersonnel land mine; for israel it is totally legitimate to do so. in the event canada and israel fight, israel could use them and canada could not. what will happen with autonomous weapons systems, and why i am not waving the ban flag together with mary, is that i know the countries that are going to field them are not going to be bound by any result of that process. the last thing i want to happen concerns the normal countries who have very complicated processes for fielding weapons, like india, which began a robotic revolution 15 years ago; i think they have the biggest program in the world today, sidebar. but they took this problem on board as one of the issues they need to tackle. i would trust
4:15 pm
them much more to handle it effectively than a country where they don't care about the collateral damage as much. my concern is the process could achieve the opposite. the good guys, who will take care to field systems only after they know they can achieve all the good results we think they can, won't field them until they're ready, with a small mistake rate probably. but other people will field them earlier. and that is not necessarily a reality i want to live in. so that is where i come in. >> mary, how do you respond to that? >> i mean, just to say, the treaty is called the convention on conventional weapons, a geneva framework convention. all the countries interested in developing autonomous weapons technology are involved in it and participating in it. nothing would be discussed without the agreement of all of these countries.
4:16 pm
we do have china, russia, the united states, israel, south korea, and the uk in there debating this subject. and just to come back on the land mines side: we have 162 countries now who have banned these weapons. 45 million land mines have been destroyed from stockpiles. we have gone from 50 countries producing them down to 10 as a result of the international treaty, and the international treaty includes former users and producers of land mines. the remaining problem is in stocks that have been mass manufactured. we're not talking about doing a land mines-style treaty on autonomous weapons, not yet. we are talking about doing it in this particular framework. we are quite sincere in wanting this to work. if we cannot do this with everybody around the table, then you might end up with those other kinds of efforts. right now there is consensus at
4:17 pm
least to talk about it, not so much on what to do about it yet. >> and how about -- what has been the thinking about a moratorium as distinct from a ban? and i ask it in this form: if there's the possibility that smart versions of these weapons could be more discriminating and have other positive values from a humanitarian and other point of view, then an indefinite or permanent ban seems to me, a priori, something one would want to question. on the other hand, because people stipulate that they don't quite know how to apply international law and other things, arguing for a moratorium until that is worked out would make sense to people. which is how i try to think about things. so take me through the moratorium versus the ban. i know you're working on a ban.
4:18 pm
i'm not asking you to endorse something you're not working on. you, and then i'll ask the general. >> the moratorium call came from the un special rapporteur on summary executions, christof heyns, who issued a report in 2013 in which one of his major findings was that there should be a moratorium. so it wasn't a proposal from the campaign. when he was on his way out this year, he issued more reports calling for a ban. we haven't talked about a whole lot of other concerns that are raised with these weapons systems, but you are seeing that delegating the responsibility to take a human life to a machine is something that people are not comfortable with, and they want to debate this. it's not just countries like the holy see; it is countries who
4:19 pm
have been the victims of drone attacks; they have already seen the effect of some degree of autonomy and they don't want to cross that moral line. there are also a lot of countries talking about security and instability, and what happens when one country has the weapons and the other doesn't. what does it do to the nature of conflict and war fighting when one side has the high-tech whiz bang and the other side does not? the other question is, are we going to level the playing field so everybody has these weapons systems, or is it better that nobody have them? at this moment there's still time to sort this out. there's still time to put down the rules. and there is still time to prevent the fully autonomous weapons from coming into place. >> i think again you used the
4:20 pm
terminology of fully autonomous. that is very important. we would put a moratorium only on the fully autonomous system which replaces a human in the decision to kill, and that's all. that's all the ban covers. so we are not putting a moratorium on -- this proposal is not trying to put a moratorium on the use of autonomy in all the other functions. essentially there is no moratorium on ai deployment in any of these systems. the decision to kill does not require ai; it is the implementation of a weapons system. so that's one part. so when you say moratorium, in effect, nothing will happen on the ground. the second part that you
4:21 pm
mentioned was -- the last part of what you said was on? >> in terms of who has the systems -- >> the third offset strategy of the u.s. now, that rationale is to have this military capability, to have this technology, to have this dominance. so that logic can only draw particular attention to such systems. and the idea of this new technology, other than having a technological edge, is also that bringing ai into the weapons systems will lead to a cleaner form of war.
4:22 pm
take pgms vis-a-vis the standard bombs that drop from aircraft: they are better because you have lesser noncombatant loss, and less loss of property. in a similar manner, more intelligence means more discrimination. even if we don't reach the final stage, it will mean more precise targeting of what you want to target. so to that extent, on the one hand you're building military capability; on the other, a cleaner form of war, which there are benefits to having. in summary, i would say that just declaring a moratorium is not really doing anything positive on the ground. if we decide from other points of view -- if you
4:23 pm
look at the human race, and from that perspective and that point of view one wants to sort of ban the buildup of this technology at this stage, well, that is worth considering as a point. >> thank you. dan, and then i want to open it to the broader discussion. >> i think the point i want to make is that several different legitimate agendas are at work here. one school of thought says we're not ready to field such autonomous systems yet. i think they are currently right. i think we haven't solved the technological requirements to ensure the statistical accuracy, not in the simple scenarios, but in
4:24 pm
the complex ones -- i don't think anybody has solved the ai problem yet. it requires so many different schools of technology. you need a machine to be able to do this under a lot of stress, physical stress. lots of challenges. i call them technical, but they are really hard technical difficulties. but they will be solved. they are just not ready today. so one group is saying, wait until you're sure before you allow a machine to press a button which shoots and kills a human being. that is one school of thought. another group actually says something wider: we don't want machines to kill people, period. irrespective of how good they are at doing it, we don't think this should take place.
4:25 pm
now, this is a more philosophical, important discussion on a totally different level, which has nothing to do with the technology involved. i will point out here that we have already gone through a partial robotic revolution in the civilian sphere; the robots have just become invisible. but if i go back in time, one of our favorite stories is, you know, the first elevators in the world were built in chicago when they had the first high-rises. like you saw in the old movies, there were elevator operators who used to operate the elevators to stop at the floor. but then they built buildings too high for human operators. so they built the first machine-operated elevator. the problem is, when people went in and there was no operator -- they said this is the first ever
4:26 pm
machine-operated elevator, it is perfectly safe to use. nobody would use it in the beginning, because they thought it was unsafe. how can a machine know where to stop? an elevator is a very primitive form of what we have today, especially when an autonomous machine can kill you. traffic lights. aircraft. they make decisions which, if they make a mistake, people can die from. we have long since accepted the fact that computers can make decisions for us that can kill us. what has happened for the first time is that we have reached a state where they can do it on purpose. this is a decision point, and we need to decide if we're crossing it or not. being the cynic i am, i think
4:27 pm
we have already crossed it. but i'm glad we're having the conversation now and not 20 years from now. and the final school of thought: do we want it? there are two schools of thought on that one. one wants the more accurate weapons systems. coming from israel, remember, we are advocates of accurate missile systems, because the fewer civilians we hit, the less israel is targeted for doing something wrong. so we have a vested interest in using more accurate weapons. the other says: the cleaner you make the battlefield, the easier it is to fight. part of the reason there are not so many wars is that war is dirty. if you just kill the combatants, they will be happier to go to war. i'm showing you the different
4:28 pm
schools of thought. each one does something different. >> that was a great summation and taxonomy of the discussion. i want to thank each of our panelists for a terrifically sharp and informed conversation. let's open it to discussion. you know the procedure: i call on you, you say who you are, they bring you the microphone. there is a lady here midway and a gentleman. ladies first, at least for the next eight days, or until january 20th. >> diane from the center for analysis. i was wondering how you think this discussion applies to cyber warfare, particularly thinking of where cyber weapons could be useful.
4:29 pm
>> okay. cyber warfare, the cyber domain, is very much part of this discussion of automated systems. this discussion is about human lives, killing human beings, and cyber seems innocent; it can affect human lives, but in a different sense. when you talk of cyber defense, or let's say a cyber attack, an autonomous attack, that's part of the cyber domain.
4:30 pm
that's what i would say. >> dan, you want to jump in on that? >> yeah. i think it's part of the discussion. one of the reasons i say so is because i don't actually know where cyber stops and kinetic begins anymore. i used to know. i don't know anymore. one of the discussions, for example, we've been having on fielding robotic systems is what type of protection you want to give those systems against being hacked. one of the examples that came up in a discussion a few years ago is, maybe we need a kill switch on weaponized autonomous systems. then someone said, but someone can hack it. the reality is that i think most of the discussions are the same. i totally agree with the general that direct cyber attacks are usually not focused on killing
4:31 pm
human beings, but indirectly they can do tons of damage. so i think the -- an autonomous cyber weapon would create a copy of itself and go out into the battlefield; we already know how to do that with computer viruses. so i actually think the cyber autonomy world is even scarier, because it has the potential of us losing control more than the kinetic side. but that's another discussion. >> halloween is the scariest
4:32 pm
discussion. >> that gentleman there. >> jessie kirkpatrick, professor at george mason university. i want to pick up on a point that dan raised about the varying levels of autonomy we have in technology currently. we're also on the cusp of different types of autonomous systems that can take lives, and i want to point to one that already does, and that's self-driving cars. right? they make moral decisions to kill. they are going to crash as a matter of the laws of physics or statistical probability, and there will need to be programming; that is a life-and-death decision. so i would like to hear a little bit about the distinctions that the panelists see between this type of technology and autonomous weapons. >> you mentioned that in your opening comments. the short answer: no one has a
4:33 pm
good answer for what they are supposed to do with an autonomous car. being a procedural lawyer, the question becomes not what do we do, but who is responsible for doing it. and so we now have a discussion that goes something like this. option number one -- this was a discussion two weeks ago, by the way, with some of the companies that do this. option one, you allow the guy who buys the car the decision. would you rather suicide, or would you prefer not to, sir? i don't know how many of you would buy the car with that, you know, presumption. but it is a decision. one of the people in the meeting said, let's agree we give different colors to those cars so you know who they are on the road. it's a true example. another way of doing it is
4:34 pm
saying, no, the car comes hardwired with the decision. do we tell the people who buy the car? the answer is you can't, because it is an algorithm and it won't explain itself. it will do whatever the guy who wrote the code told it to do. there is no way we can summarize it in a way that the customer will understand. i'm taking you through this because when we move the analogy to the warfare side, the main difference, and i think this was your question, is that on the warfare side this is all intentional, right? but the reality is, on the warfare side the big problem the general is afraid of is the distinction part. when you have people in the battlefield and you want to
4:35 pm
identify who is a noncombatant, you need to optimize what you are going to do so you minimize harm. it is exactly the same question, if you take away all the fluff. then who is going to make the decision? are you going to ask the commander what level of casualties is acceptable, which is option one? that is actually easier for me to say; that's how major operations work today. or are you going to allow the manufacturer of the autonomous weapons system to hardwire that into the system, and me, if i went back into my military career, i have no idea what the system is going to do when i press the button. for all i know it is going to kill one or two civilians, or none at all. if it does, i have no way of controlling that.
4:36 pm
so the questions are exactly the same, although the scenario is different. you're exactly right: we are facing the same issues on the civilian front as we will face on the military front in the very near future. >> i think one of the things happening in these discussions is that we talk about the systems in general, but there are degrees of systems that can be used in different contexts. so what daniel calls a simplistic situation, in today's context, may or may not be simplistic. a system attacking enemy airfields, that is much easier. you go in and bomb.
4:37 pm
you send that system. the next, more complicated case is when the distances are in meters, when you come closer to a situation where a company is going in against bunkers. when they go in against this company, then to that extent there is more of an issue. that is the more difficult, closer situation. humans would be there, and we are sensing whether humans are there. what we are deploying, i would
4:38 pm
say, has to match whatever technology level has been reached; that type of system should be permitted to be deployed in a responsible manner. as such, we are already in that regime. there are already systems on the ground; they have been there for decades. mines are being -- i mean, there's a convention against mines for similar reasons. so they are already there. as with any new technology, there is a responsible manner in which they should be deployed. in general, on the moral aspects of the question, there is a much bigger stage: if at all the systems can mimic empathy and judgment, that will be much later. if it is perfected to that extent -- that brings me to a second point, and that is about who is
4:39 pm
accountable. the accountability question was raised: is it the manufacturer, the commander, the state? if the system malfunctions on the ground, the manufacturer is definitely responsible. but the person who is there is responsible for deploying it within the bounded capability of the system. that angle has to be really strong, the more complex the systems that will be deployed. >> i suspect, as with the vehicles, so if we go this
4:40 pm
direction with military systems, that latter point will be more debatable. do we want cleaner wars, was the question; do we want fewer traffic fatalities. the answer may be yeah, but i'd rather -- it's easier to have a system where a driver and a soldier are accountable than one that is safer and cleaner but where now a big supplier is accountable, the state is accountable. it's something, what issues this is going to bring up, including for financial reasons. i would rather not take on the liability; i would rather you have the liability. brave new world. this gentleman here right in the middle. yeah. why don't we take two. and this lady with the blue and white striped shirt here, if we can get another microphone. let's take two in the interest of time. i think we're bumping up against it.
4:41 pm
it. >> in keeping with the things their scary, you talked about autonomous offensive and defensive capabilities. but we only talked about auto ton eupl deterrence. where you would put an input on second strike nuclear talk or in the line of cyber, a retaliatory attack before your systems go off entirely. how do you bring it into autonomous weapons systems. you make it much more likely that the things will get out of control. >> let's take the other question and then we can parallel process. >> hi. my name is lori green. i was in hoeistic essay assessor and scored the test of english as foreign language for six
4:42 pm
years, until the advent of an autonomous system replaced the human reader, and now i'm becoming autonomous. are you not aware that artificial intelligence, and programming weaponry to think about humans, is destroying our own language processing, because we are granting these machines so much importance as to cancel our own reasoning out of the process? >> wow. i didn't do that well on my s.a.t., so i don't know if i understand your question, but i am trying to process it. >> we are trying to create a system, like a robot, that is going to think like a human about reasoning: when to strike, or what to strike, or how to strike. and so in the process we are maybe even expecting a computer system to reason when it would be appropriate to strike. so we are granting this to
4:43 pm
algorithms, and it is canceling out our own ability to think spontaneously and reasonably, as demonstrated i think even today with some of the explanations that you provided, lacking a real critical target in your arguments. there was a lot of just open processing without really making a definitive answer in some cases. also, the process for deterring autonomous weaponry is entirely too slow. i think most people are critically aware there is a lot of apathy toward the idea of just altogether canceling out the prospect of fully autonomous weaponry. and i'm wondering if that's because so much money is invested in the artificial
4:44 pm
intelligence process and not enough in human capacity. >> i think the first part of the question was about delegating. you're saying that a machine can be more reasonable, take more reasonable decisions, and be able to arrive at the correct decision in a better manner than a human? >> no, i think the opposite of that. >> she's questioning that. >> she is saying that it is not going to work; we're destroying our own capacity to reason and think by pursuing it. >> okay. she's saying a machine with ai will never do better than
4:45 pm
humans? is that what she is saying? >> yes. >> that is for ai to say. >> why would we ever want that? >> it is not the technology; it is how far the ai goes -- whether it will ever be able to understand at a human level. today we would say no. but you see what is happening here today. my own belief, with a layman's knowledge of ai, is that anything the human mind can do, including empathy, ai can at some level eventually do. it is not at that stage yet, but there is no scientific reason to believe otherwise. >> it is only mimicking judgment. it is not rationally judging. >> dan, do you want to jump in on this? >> i want to take the two questions together. it is all a question of
4:46 pm
delegation. you used that word in your introduction, and you're questioning whether it is right to delegate some forms of decisions to machines. you have an assumption that it is a bad idea. i do not totally agree with you on every scenario, but i think it is a legitimate question. you went one step further: should we delegate authority to use a significant amount of power in a disastrous situation where a human being may not be able to respond quickly enough, effectively enough, or intelligently enough in order to counterattack or whatever. and these are great questions, because they raise the question of what we are developing a.i. for. if we set aside pure research
4:47 pm
and scientific gains, it is supposed to be something that makes our lives better and easier. that runs throughout the entire culture. for example, if it can make a good decision quicker than a human being and save a life, most people would say that's a good thing. and watching technology develop, i personally, being a technological layman who was working in this field, can tell you that i have seen numerous examples where computers are much better than human beings at making the decisions i want them to make. human beings are scared, human beings are tired, human beings don't have all the information, and human beings act on instinct, which turns out to produce decisions which are sometimes really good and sometimes really, really bad. now, it may not always be a good thing to delegate authority to a machine. and i think the decision we need
4:48 pm
to make is where we agree the machines come to help. and your scenario, which is an extreme scenario: i personally would rather not have a machine make that decision. but i can definitely identify parts of life where i want machines to help me out, where i really like the fact that i don't need to trust human beings with all the communications. but i do not want them to replace us in things which i care about. and this is the type of discussion i think we should have now, before we let technological companies and market pressure push us in a direction we are not necessarily willing to go. >> if no one else -- go ahead. >> just to say, we have heard quite a bit from the artificial intelligence community, the guys in silicon valley, about how artificial intelligence can be beneficial to humanity. this is their big catch phrase. and they're investing money in trying to determine ways
4:49 pm
in which it could be beneficial to humanity, but delegating authority to a machine on the battlefield is a line which many of them draw. we haven't talked about policing. we haven't talked about border control. we are talking about armed conflict, but it is not just armed conflict we're concerned about; it is much broader than that. the campaign to stop killer robots is about the point at which it is weaponized, but it is a much broader, bigger debate, and we don't have all the answers to it. >> well, i want to thank all the panelists and all of you for at least here beginning the process of this debate and helping us really, i think, home in on what some of the key questions and issues are. thank you all again. and thanks, dan, general, and mary. [ applause ] >> thank you all for coming for
4:50 pm
this first part. we hope to see you december 2nd in pittsburgh or via live stream; we'll be looking at cyber panels on december 2nd. in the meantime, i encourage you to download the carnegie app. and, last but not least, i would like to thank the team helping with this event, lauren and rachel, who helped with the organization. thank you very much. [ applause ]
4:51 pm
today, three former white house chiefs of staff discuss presidential transitions and challenges facing the administration, starting at 3:00 p.m. eastern on c-span. tonight at 9:00 on c-span 2, civil rights activists, historians, and political operatives on president obama's legacy. here is a portion. >> a president can't wave a magic wand and say civil rights repaired, right. that doesn't happen. the executive is constrained in very real ways. now, one sign that at least gives me some hope is that this justice department and civil rights division has been
4:52 pm
busier than it has been in any administration prior, maybe except for johnson, right. it has been -- it's been busy. the laws that congress passes, though, constrain its reach in very, very real ways. i'll give you one example, and then i'm going to pass it on and hopefully we can talk more. there was a mention of trayvon martin in the last panel. you know, it was the correct decision for the department of justice not to intervene in the trayvon martin case, because the law is written in a way that makes it nearly impossible for them to intervene in that way. they would have to show that at the time zimmerman dealt the death blow, he was motivated solely by racial animus.
4:53 pm
more from civil rights activists, scholars, historians, and political operatives on civil rights and race, tonight at 9:00 eastern on c-span 2. here are some of our featured programs thursday, thanksgiving day, on c-span. just after 11:00 a.m. eastern, nebraska senator ben sasse on american values, the founding fathers, and the purpose of government. >> there is a huge civic mindedness in american history, but it is not compelled by the government. >> former senator tom harkin on healthy food and childhood obesity in the u.s. >> from everything from monster burgers with 1,420 calories and 107 grams of fat to 20-ounce cokes and pepsis with 12 to 15 teaspoons of sugar, feeding an epidemic of
4:54 pm
child obesity. then at 3:30, wikipedia founder, jimmy whales, challenge of providing global access to information. >> 1,000 en triers, a small community there, 5 to 10 really active users, another 20 to 30 they know a little bit, and they start to think of themselves as community. >> a little after 7:00 eastern, an inside look at the year long effort to repair and restore the capitol dome. justice, elena kagan talks about her career. >> i did my thesis, it talk me an incredible amount, but also taught me what it was like to be a serious historian and to sit in archives all day everyday, and i realized it just wasn't for me. >> followed by justice clarence thomas at 9:00. >> genius is not putting a $2
4:55 pm
idea in a $20 sentence. it is putting a $20 idea in a $2 sentence, without any loss of meaning. >> and just after 10:00, an exclusive ceremony at the white house: president obama will present the medal of freedom to 21 recipients, including michael jordan, singer bruce springsteen, cicely tyson, and bill and melinda gates. listen on the free c-span radio app. tonight on the communicators, former fcc commissioner robert mcdowell on how the fcc could change under the trump administration and the telecommunications issues it could be facing. >> as a country, we'll start to tackle those issues with the future of the internet, going beyond network neutrality: what does it mean with artificial intelligence, what does it mean for jobs, what about the consolidation and
4:56 pm
commercialization. >> i sense that what chairman wheeler plans for business services, and then the set-top box item, where there wasn't any sort of consensus among the democrats for starters, is probably also not going to get off the ground. >> watch the communicators tonight at 8:00 eastern on c-span 2. next, analysts and correctional officials from state and federal government discuss inmate job training and education during incarceration. they look at some of the barriers that prevent access to opportunities. >> thank you. we're all ready to go and transition to our first panel now. >> thank you, everyone. good to have you all here. i'm scott stossel, head of the
4:57 pm
"atlantic" magazine. i want to set up this conversation before introducing the folks we have on the panel. as all of you know, and as we heard earlier, it is a fact that people who are incarcerated have lower levels of educational attainment than the general public. we also know that that is a by-product of larger systemic issues that cut across multiple dimensions. in addition to that, we know that correctional education has been proven to reduce recidivism and to offer positive benefits to returning citizens. this is why this panel is so important. so on this panel, which has perspectives from the state level, federal level, and institutional level, we're going
4:58 pm
to try, in the course of a brisk 50 minutes, to address policies and practices at all levels, consider what policy levers can be used to improve correctional education and training and build opportunities, and finally, to examine the topic through a primarily racial lens, because that's, i think, the most important one. and we're also going to cover different types of correctional education and training -- technical education, english language learning, post secondary education -- and the needs associated with all of them. and actually, before i begin, i want to thank wayne, who sort of set this all up and did all the logistics, and david sokolow, who also was responsible, and nick turner, who just set this whole thing up. so let me introduce our
4:59 pm
panelists. you can find more detailed biographies in your packets. let me start with shawn addy, the director of correctional education at the office of career, technical and adult education at the u.s. department of education. welcome. next, here, is bianca. i knew i was going to mangle her name. bianca vanharoron. brandt is the superintendent of correctional education at the california department of corrections and rehabilitation. and fred patrick, director of sentencing and corrections. he is also working on the implementation of the second chance pell pilot program, which is a hugely important project and very much a subject for today.
5:00 pm
so with that, as kind of the setup, i wanted to start by asking each of you guys individual questions and then open it up. at the end, there will be time for questions. but brandt, let me start with you. a lot of these issues are state driven. california is probably, you know, among the more progressive states in the union, and certainly in its approach to correctional education. from where you sit, what are some of the keys to california's success, what barriers does the state have to overcome, and what lessons can you extract from that for the country as a whole? >> california has gone through some significant changes in the past few years, and there are various reasons for that. one is because we have a
5:01 pm
governor who thoroughly supports rehabilitative programs, and a bipartisan effort, and multiple bills that have passed in california in the last several years that have not only reduced the prison population, allowing us to actually use classroom space to provide rehabilitative programs, but have allowed us to provide community college for many people. recently, in 2014, senate bill 1391 allowed for the community colleges to come into the prison system and teach face-to-face classes, and have that be funded by the state. so we now have face-to-face
5:02 pm
college programs at 29 of our 35 prisons up and running, and hopefully in the next six months, we'll have all 35 up and running. and i also can't forget to mention just the tremendous support that we have in california from our leadership at cdcr, the california department of corrections and rehabilitation. everybody is behind it. so it is almost like the stars are aligning right now for us to do these innovative things. it is not that there haven't been barriers. i think the barriers that we face in california are similar to any state's, and that would be lack of program space, older facilities that need to be remodeled, just the whole, i guess, the correctional staff maybe not buying into it completely. and that's changing, though. it is changing because of our leadership within our department, and the message is we are all
5:03 pm
about rehabilitation. it is difficult to blame the correctional officers, because they signed up to be providing safety and security inside the prison system, and all of a sudden we're telling them, you're now going to become a counselor. and that's a difficult shift for them to make. but some great things are happening in california, and across the country, in terms of people buying into the -- >> and i want to move on to other folks quickly, but briefly, you've talked about how important it is that you've got the leadership from governor brown on down. is this model exportable to, say, mississippi or louisiana or any, you know, other states across the country, or does it require the kind of installment of leadership along the lines of california's? >> it goes to the adage of if there is a will, there is a way, and if the top person is behind
5:04 pm
it and wants to make changes, it can happen. but the governor's office, the legislature has to get behind it, not just from a policy standpoint but from a funding standpoint as well. >> let me ask you, turning to shawn: the federal government also supports states in correctional education. what is the -- this actually follows neatly on what brandt was talking about. what is the federal role? brandt is talking about what they were able to achieve at the state level, sort of independent of any kind of federal support for the most part, obviously. what is the federal role in correctional education, and what do you think it should be? and how do you balance state investment, both financial and kind of just resource commitment, time and all that, versus federal? >> so i think the federal role
5:05 pm
is, in some cases, to provide education. i think it is also to coordinate education, and also to support education. so when i say provide, there is the bureau of prisons, which has 122 correctional facilities across the country, and they provide education across those facilities. that is focusing on education. when i say coordinate, there is a federal interagency working group, the re-entry council, and that's -- i don't remember the exact number, between 15 and 20 federal agencies that are working across agencies to help people facilitate re-entry, to make sure re-entry is smoother, share resources, share grant opportunities, and then part of that is the education aspect. so you know, talking about second chance pell, other federal agencies can help. and i think the other aspect is the support that i talked about.
5:06 pm
second chance pell -- nick turner talked briefly about that. that's one aspect. we also have discretionary grant programs, where we're working across many jurisdictions providing funding for correctional education. it is also really important when we talk about correctional education to think about it as -- i mean, language is so important in this field -- to think about it as education in correctional facilities. it is not somehow debased because it's inside a prison or a halfway house. we think about it as, these are teachers, instructors, the same type of rigor. but that's a quick answer. it's important to talk about that. as i mentioned with the support, we have discretionary grant programs -- two big programs i'm sure people are familiar with, the workforce act and the perkins act, and there is funding set aside in both of those grant programs. they are ceilings, not floors,
5:07 pm
so states can spend up to a certain percentage. second chance pell -- that's getting a lot of attention, but it's something we're really excited about, building the continuum of education from adult basic education to post secondary. >> perfect segue. second chance pell is a huge issue, i think an important topic here. so to you, fred. given -- actually, i don't know, at some point the federal government outlawed pell grants, if i'm not mistaken, for prisoners? >> state and federal prisoners. >> a, is there a chance we can roll that back? and b, you know, given -- well, either way, i mean, there is sort of, you know, if pell grants are
5:08 pm
available to prisoners or if they're not, two different trajectories. but how do we ensure, in investing in post secondary education, that incarcerated individuals are receiving what they need? can they get the services they need, or is, really, you know, the second chance pell grant the most important thing we can -- we can do as citizens and policymakers? >> as was talked about earlier, we are currently in the midst of an experimental site initiative; those generally last three to four years in duration. although there are 69 colleges in 28 states, it is important to know there were over 200 colleges from 47 states that applied. so there is real interest in doing this all across america, and a real need for it. when we think about the impact and benefit of college in
5:09 pm
prison, we should think about it as a public good. so it is not just that the incarcerated individual receives it and benefits from it, but the entire community. we talked earlier about how many individuals are parents, and it is very clear that the education of parents is predictive for their children. we talked about it in the context of economic and social mobility. the reality is, by the year 2020, almost two-thirds of job vacancies will require some form of post secondary education or credential. so whether you look at it from how do you sort of help renew communities -- and when you think about the sort of incarceration that matters, individuals returning to certain communities being able to get jobs, take care of themselves and their families -- there is the public safety benefit. individuals who participate in education in prison, and this is all forms of education, not just
5:10 pm
post secondary, are 43% less likely to recidivate. from the taking care of myself and my family angle, it is the investment we ought to be making. i think it is important to realize, as nick talked about in the keynote speech, that this is an experiment -- this second chance pell is not the victory. so the reality is we need to do a lot of work over the next three or four years to ensure there is a repeal of the congressional ban. that's what it will take. until then, we may find ourselves in three or four years back to 1994. >> i'll just note an irony here. about a year ago, we published a piece in the "atlantic" magazine about angola prison,
5:11 pm
the warden, and a lot of prisoners are getting full access to ged education, which they didn't have access to before they ended up in prison. and there is a certain, like, ass backwards notion that if they could have had access to all of these social services and education before they were put into prison, could you have, you know, theoretically avoided a lot of the criminality that landed them there in the first place. >> i think that's a great point. when you think about the communities that are the largest producers of individuals who are incarcerated, there is a lot to be said about the lack of quality secondary education, lots to be said about the lack of social services and other things a vibrant community would consist of. over 600,000 people are coming home every year, and we should shift to begin to see those individuals as part of a talent pipeline that can help restore
5:12 pm
vibrancy to those communities. >> i guess the question is, given that incarcerated individuals come from all levels of educational background, from negligible to, you know, in some cases, more substantial -- what is your model, ideally, given some are illiterate and some are not? how do we address that? >> right. so you know, at john jay, we're educating for justice. we're thinking about how we can organize our educational initiatives along a continuum, right. so the core of our program, the core of the prison-to-college pipeline, is credit bearing college in prison; the core is our credit
5:13 pm
bearing courses. but we do recognize that students and prospective students are anywhere along that continuum. so we begin the continuum inside prison offering developmental ed, preparing folks who may already have a high school diploma or equivalent to be ready for college level work. the next kind of notch on the continuum are credit bearing courses that are fully city university courses; the students are earning the credits, and they're transferable when they get into the community. at the next level -- because, you know, as we all know, funding has been really scarce, and our capacity to serve the need is really small, but we want to prepare as many people as possible to enroll in school in the community -- we provide access to entrance exams across seven prisons in new york state, so that folks are one step closer to being able to enroll
5:14 pm
if we don't have the space to enroll them in our program. we really think about, like, how can we meet folks at every level or at every point in the continuum. and it is not enough. i just have to say this piece too: it is not enough to be doing the work on the inside. what happens in our communities when people come home? it is wonderful to be able to provide access to higher ed in a correctional setting, and we want to make sure people are transitioning and landing on college and university campuses where the full richness of what it means to access higher ed is available to them. the last thing i'll say is that anybody who does this work knows that any educational program, particularly higher ed, changes the correctional institution. so while our program is focused on preparing folks and providing
5:15 pm
higher ed for folks, we also know that it has an impact on the abe courses in the facility, it has an impact on the hse programs, such that the -- that community becomes a learning community, and the students who are in those precollege programs have something to aspire to. >> well, i want to open it up to the panel as a whole, and i'll start with this question, which may be unanswerable or beyond the scope of this panel. one of my closest friends in washington is a guy who in the 1970s got arrested for robbing banks, was in prison, came out, tried to clean himself up, became a crack dealer, went back to prison. got out, went back to college, got a b.a. and a master's. and now, you know,
5:16 pm
despite having both a b.a. and a master's from catholic university, it is really, really hard for him to find a job because he has a criminal record. what do we do about that? anyone? >> so first, you know, we have over 45,000 laws and regulations across this country that are real barriers to individuals successfully reintegrating. they're called collateral consequences, and i don't like that term, because it masks what it is. those barriers, restrictions -- 45,000 of what i call barriers to social mobility and civic engagement -- range from voting, to the ability to get an occupational license, such as for barbering, to not having a driver's license while you're still on parole. lots of things that get in the way of our stated goals of people doing their time, paying their dues, and coming home and moving on.
5:17 pm
and so one of the things we need to do is begin to roll back all of those laws and regulations that are real barriers. i think the other thing is we have to identify, amplify and support champions. you have lots of organizations, such as the johns hopkins health system in baltimore, one of the largest employers in the state of maryland, who do a great job of hiring and supporting and advancing, by the way, formerly incarcerated individuals. they have done rigorous studies on how those individuals compare to individuals who have not been justice involved. on every measure -- in terms of misconduct, promotional opportunities, on every measure -- no difference. the reality is, individuals coming home are usually motivated, want to do a good job, appreciate the second chance, and we have to do a better job of figuring out how we expand that. >> that's my experience with my
5:18 pm
friend. do you guys think -- should the -- should we erase the idea that you have to declare your criminal record? like, once you've cleaned up -- i mean, how -- how do we handle this problem? once you go into the criminal justice system, you have the mark of cain on your forehead, and it makes it that much harder to find -- >> i was going to answer at a level a little bit more, you know, really at a basic level. i think the question that we need to ask ourselves is what kind of society we want, right. if we want an enlightened society, a society that is safe, a society where people have the opportunity to benefit from the strengths that everyone among us has, then we have to remove, you know, the "f" on the forehead, which means removing boxes from applications, so that when a person's sentence ends, there
5:19 pm
aren't 45,000 laws that operate as perpetual punishment. we're looking to those folks to bring their strengths to our community. when we use our criminal justice system, and by extension our education system, to create silos that we put people in that are impossible to get out of, right, then we -- then we have painted ourselves into this corner, >> yeah. >> where we will have scores and hundreds of thousands of people who will come home who will not be welcome in our society, because what we decided is that what we want is us and them. until we make that shift and recognize that without the contribution of those 2.2 million people, we are far worse off than if we are opening ourselves to receiving the gifts that they have. >> that's really interesting.
5:20 pm
i guess getting to this -- to the crux of the work that you are doing, and anyone can, please, leap in -- what are the biggest policy barriers to the work you're doing now, whether from, you know, a political, practical, or equity perspective? and -- well, i'll stop. let me ask that, and anyone should leap in. >> so i think a couple of things. one -- i think a lot of things. i think one of the issues is, and this hearkens back to separating punishment from rehabilitation: your punishment is, you committed a crime, you go to prison for five years, you come out, your punishment is over. your punishment should end. really, the time should be used for rehabilitation. you should have access to education and training. we know that works.
5:21 pm
we know that can help people come out and contribute to their communities, and pass it on to their children and family members. but in terms of what you were saying per policy, second chance pell is an example. if we had full pell reinstatement, i think that would be great in terms of changing how correctional systems think about education. you know, bianca mentioned how it changes the whole culture of an institution when you have a college, because that provides hope for people. there is something more that they can do: you know, if they have a ged, they can go get an associate's or bachelor's degree. if they don't have their ged, there is a reason to pursue it, because there is something beyond that. that's part of it. >> i'll take it further. the punishment is the sentence itself, the deprivation of liberty. the question is how do you
5:22 pm
get them to succeed. we should reimagine prisons so people get the skills and tools they need to go home, support themselves and their families, and be contributing members of society. >> one of the things that helps us overcome some of these barriers is looking at the research studies that have been conducted, and there have been some great ones in the past several years. you mentioned the rand 2013 study, and i think everybody quotes that every day -- only two sentences of it, you know, the 43% less likely to recidivate. and that's helpful to us, because it helps us work with the policymakers and those that run the prisons, and helps them understand why we need to overcome some of these barriers.
5:23 pm
>> there is another part of that study that people should be quoting, and i didn't earlier. if we're wondering whether we are being good stewards, right -- we should want that -- well, guess what: education in prison, for every dollar invested, there is a four to five dollar savings in reduced incarceration costs. so we're talking about a 400% return on investment, because they don't recidivate. so at a time in which we care about how we spend our dollars, education in prison -- again, all forms -- is hugely impactful. >> people talk about, in a lot of communities, this school to prison pipeline, which is a catastrophe, you know, sort of
5:24 pm
a catastrophe, and that's, you know, contributing to our mass incarceration problem. but there is a way, a theoretical way out, which is to have the prison to college or, you know, post secondary education pipeline. again, what are the policy levers -- you guys are coming from different perspectives here -- that can be used to improve the quality and accessibility and salience or usefulness of prison education? and, you know, viewing this through a racial lens too, which is i think the right lens, there is a social justice, racial equality component to this too. i guess, boiled down, how do we maximize or optimize prison education in order to get people who are in the criminal justice
5:25 pm
system, as they come out of it, prepared for, you know, living outside of that, and living outside of criminality? and you know, we talked on the phone a couple of days ago -- you know, there is a difference between job training, crucially important, because, you know, in order to earn a living, you need to come out and have a job, and, particularly for, you know, post secondary education, there is inherent value in that, one could argue. so just any of you, weigh in on that. >> i would say a couple of things. i mean, i would say absolutely more federal and state investment in education across the board, right, but particularly for vulnerable populations like people who are in prison, right. so i think more federal and state investment. i think we need to pass the real act and get some more, you know, funding for people who are currently in prison. i think we also need to
5:26 pm
recognize that we are in a moment where, you know, we're deciding that brown people shouldn't get a higher education, right. and if we're not going to the places where we put brown people, then we -- then we're going to continue to make that choice not to provide, you know, access. i think on the outside, as folks -- you mentioned when people are coming home -- i think on the outside, we also need to be thinking really critically. so yes, it is important to have a job. it is important to have, you know, job training. but i think that when we're talking about folks in the criminal justice system, we tend to think about higher education and college as job training, because we also made a decision that folks are only good for the jobs that they can do for other people. and i think that we need to prioritize higher education in the way that we do for the general public. we need to see higher education and employment for the value that it actually has.
5:27 pm
and i think when we do that, then, you know, you don't have parole officers telling folks that, you know, going to college is not an option for them, that they need to get a job, right. you provide more opportunities for folks than currently exist. i notice you -- >> yeah, i think we need -- we talked about the rand study a lot, but sometimes people look over the fact that it is the rand study funded by the department of justice bureau of justice assistance. we need more research. i think we also need state investments. so, you know, if we bring back pell, we also need to bring back state funds. you had a situation in new york, there was the tuition assistance program; after pell went away, that went away too. if we're viewing this as an investment, and i think it is, for people,
5:28 pm
they should have access to the same type of resources so they can invest in themselves. someone coming back from the eastern shore of maryland or upstate new york to, you know, baltimore or new york city -- that's an investment that the state should be making to go along with the federal investment. >> do you guys have thoughts on that? >> the incarcerated students do have access to the state funds for higher education, and i see that as an incredible incentive. you know, there are some that think incentivizing on a prison yard is accomplished by giving them more movies to watch, or letting them stay up later, or maybe a warm cooked meal or something like that. but i don't see that working as much as providing opportunities for education. and where we have people that are working on a.a. degrees and
5:29 pm
bachelor's degrees, that filters down to the abe classes and those working on high school equivalency, which creates people that are better critical thinkers. it may not lead to a specific career, but it helps produce critical thinkers with the soft skills to succeed in a job. >> and i'll add -- i'll simply amplify that. education in prison does exactly what we all know it to do. you learn how to think critically. you write better. you communicate better. you problem solve. all the things that we're taught pretty early on as to why you should pursue education, and why we should do it on a full scale in prisons. the other thing is, bianca talked about and brandt talked about
5:30 pm
how it lifts all forms of attainment. you have college education in prison and then you also see greater attainment rates in other forms. the other thing it does is it makes facilities safer, from a quality of work life perspective for the staff. it is about funding, from both state and federal sources, and it is also about how do you support the correctional leaders, superintendents and directors, in terms of how do they get the requisite support so they're not choosing, do i have enough funding for staffing, or can i also have counselors and tutors and mentors and other supports that are needed to make it successful. >> i want to open this up to audience questions. i should have said this earlier -- basically, i think there is someone who will be walking around with note cards. if anyone has questions, write them down and they'll send them to me. while we're doing that, i want to ask -- a couple of you guys
5:31 pm
talked about how we need more research. what is the nature of the research we should ideally be focusing on? >> so we have second chance pell right now, and there is a myriad of programs being offered across different levels of correctional facilities, different types of programs, from post secondary to career and technically focused, and we need to figure out what works for this program -- what's the most effective of those programs, given the limited resources. i think that's just scratching the surface. >> i think that kind of research is what you need to do from a quality improvement aspect, but let's not use that as a basis for not moving forward. there is plenty of research that suggests that education in prison is extremely impactful. it is a balancing act, right. you want to get inside, as rand said, the black box, to figure
5:32 pm
out which elements work better, and you always want to look at ways to improve. it is balancing that need for additional insight versus the need to keep pushing. >> so actually, we've got a bunch of audience questions here already. let me quickly try to go through some of these. so the first question is: you know, it is great there are education programs and training offered, but what mechanisms are in place to ensure quality? and the questioner adds, by way of detail: i know federal prisons that offer ged programs, but students don't have access to materials, or are using outdated materials. so how do you make sure that at any given prison, the quality of the education is high? >> i think one way to do it, you know -- in new york, the cost of, you know, doing business really was that it needed to be a credit bearing, you know, accredited program. that's one way.
5:33 pm
i think, with respect to materials, having the syllabi and course materials exactly the same as they are on campus, and, you know, sticking to that, to the extent that that's possible, right. so in new york state, students don't have access to the internet. there are things we have to do to adjust to make that possible. but i think one way to get at the quality, right, is really making what you provide inside comparable, to the extent that the facility will allow, to what you're doing on the outside. >> including credentials. you don't want to just pick a person up off the street and say, you go and teach that course. >> how do you incentivize people to teach in prison? >> our experience is you don't have to incentivize. there is no shortage of people interested in teaching in a correctional setting.
