
Public Affairs Events | C-SPAN | November 8, 2016, 4:00pm-6:01pm EST

4:00 pm
>> so, we will now get started with the first panel discussion. before we start, let me briefly outline the two key ideas that have been driving this event. when david and i started with the planning for this, the first one was essentially to bring together the technical experts of carnegie mellon university and the policy experts from the carnegie endowment. that is why each panel is opening with a stage-setting presentation by one of the technical experts from carnegie mellon university, followed by a panel discussion. the second idea was to bring in people from around the world for the panel discussion. i'm particularly pleased to not only welcome our partners from pittsburgh, but also to welcome, for example, a panelist from hong kong. please also join the event on december 2nd, and please drop your business cards outside or send us an e-mail. i would now like to introduce andrew moore from carnegie
4:01 pm
mellon university. the computer science school has been ranked as the number one school by u.s. news repeatedly in the last few years for its graduate programs. andrew, prior to becoming dean for the last few years, was vice president of engineering at google commerce. he's been on the faculty of cmu since 1993 and was named a fellow of the association for artificial intelligence in 2005. keeping with the global theme of this event, he originally hails from the united kingdom, and it's a pleasure to have you, andrew. >> thank you very much. [ applause ] >> so, this is a really interesting and exciting time in the world of artificial intelligence. for many people, for regular consumers, it's got great promise; for companies, it is an absolutely critical differentiator.
4:02 pm
we do have options here to make the world a much better place through careful application of technology. what i would like to do to set the stage here is talk about two things which at first sight sound like clear goods. personalization, i'll explain what that means, and privacy. two extremely important issues. then i'm going to run through a series of cases where these two great principles start to bump into each other. and they will get increasingly sophisticated, and by the end of this stage setting, i hope to have everyone kind of squirming in their seats, because it's so annoying that two wonderful and important things, privacy and personalization, which seem like clear goods, confront us with societal challenges. all right. so let's begin with privacy. it's a clear right.
4:03 pm
and almost all of us would agree that anyone who intentionally violates privacy by revealing information which they gained in confidence is doing something bad, and there are laws in our international legal system and in all our domestic legal systems which deal with that issue. so that's important. personalization. personalization is probably one of the most critical features of a world based on artificial intelligence and machine learning. i'll explain some places where it's obviously good. many great institutions, including carnegie mellon under dr. suresh's leadership, are pushing very hard to understand how we can help children learn more effectively. if it turns out that i, as a child, have problems with understanding when to use the letters ck while i'm writing, it
4:04 pm
makes a lot of sense for an automated tutor to personalize its instruction to me so it can practice that particular issue with me. no doubt about it. that seems like a sensible thing. if i'm a patient in a hospital, and it becomes clear that unlike most patients i cannot tolerate more than a certain amount of ibuprofen within 20 minutes of a meal, as we learn that, of course, it makes sense to personalize my treatment. so that is good. so far, there's no difficulty involved. here's where it gets interesting. some aspects of personalization, like, for instance, how i am likely to react to some liver cancer medications, are not things we can personalize just by looking at what's happened to me over my lifetime, because probably this is the first time i've had those medications. when you're an artificial
4:05 pm
intelligence engineer building a personalization system, the way you power it is to find out about me, and then ask the question: to make things good for andrew, what should i do, and what can i learn from other people like andrew? and that's suddenly where you begin to see this big conflict. "other people like andrew" is something which could help me a lot, because if it turns out that everyone who's over 6'3" with a british accent is virulently opposed to, for example, the electric light orchestra, it's an extremely useful thing to know, so that i can make sure that's never recommended to me. so it makes sense to use other people's information in aggregate to help personalize things for me. and in many examples, that can really make things better.
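[editor's note: a minimal sketch of the "people like andrew" idea above -- neighbor-based personalization over a toy ratings matrix. the data, similarity measure, and weighting are invented for illustration and are not the speaker's system.]

```python
import numpy as np

# toy ratings matrix: rows are users, columns are items (e.g. movies); 0 = not yet rated.
ratings = np.array([
    [5, 4, 0, 1],   # "andrew"
    [5, 5, 2, 1],   # a user with similar tastes
    [1, 0, 5, 4],   # a user with opposite tastes
    [4, 4, 1, 0],   # another broadly similar user
], dtype=float)

def similarity(a, b):
    """cosine similarity between two users, over items both have rated."""
    both = (a > 0) & (b > 0)
    if not both.any():
        return 0.0
    a, b = a[both], b[both]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict(user, item, ratings):
    """predict a rating as a similarity-weighted average of other users' ratings."""
    sims, vals = [], []
    for other in range(len(ratings)):
        if other == user or ratings[other, item] == 0:
            continue
        sims.append(similarity(ratings[user], ratings[other]))
        vals.append(ratings[other, item])
    if not sims:
        return None
    sims, vals = np.array(sims), np.array(vals)
    return float(sims @ vals / sims.sum())

# what would "andrew" (row 0) think of item 2, which he has never seen?
# the answer is driven mostly by the users who look most like him.
print(round(predict(0, 2, ratings), 2))
```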
4:06 pm
recommendations of movies is an obvious one. then when you start to think of information on the web, for example, if i like to browse news every day, and we notice that i'm typical of people who perhaps in the mornings are very interested in policy-related news, but in the evenings when i'm relaxing i tend to like technology-related news, that's useful information to help make sure that i'm a happier person when i'm reading the news. that's the upside of personalization. personalization uses machine learning. machine learning is exactly the technology which looks at data and figures out the patterns, to usefully say, what would other people like andrew want, and what is the definition of someone who is similar to me or dissimilar to me. it's the thing that powers gmail, powers movie recommendations, and it's the thing that helps the personalized
4:07 pm
medicine initiative figure out how to treat you, probably with a different treatment from someone else. and now i'm going to go through four examples of increasing squirminess, of why this stuff is hard, why privacy and personalization actually start to conflict with each other. the first case is one where i actually don't think we have any trouble with policy. it's a simple case of what we think society is going to do. if someone publishes unauthorized data about me, they are breaking the law and that should be remedied. that's the simplest case. the responsible approach there, in a good company or well-functioning government, is that you actually have the legislation in place, you have clear rules, and if somebody does, for example, look up the bank account of a famous celebrity, just so they can gloat about it, that person's going to get fired.
4:08 pm
and in some cases, if the consequences are serious, there's a more significant penalty. now, cases two, three and four are ones where it starts to get a little fuzzier. case two: someone uses your data in a way that you didn't expect, but it turns out you kind of agreed to it. a famous example is a firefighter in everett, washington, who was suspected of actually starting fires. and one of the ways in which the police really came to understand that this was a serious risk was they went to his grocery coupon supplier and looked at the things this particular person had purchased in the last couple of months, and they found a huge number of fire-starting kits. in another case, someone was suing a supermarket for a slip-and-fall accident, and part of the supermarket's defense was
4:09 pm
they produced sales records for that person showing that they were buying excessive, in their eyes, amounts of alcohol. now, those are not actually illegal. both of those were covered under the terms of service, and also the laws of the land regarding law enforcement use of data. that's difficult. at that point, we've already hit something where the general public is going to be very uncomfortable. it's the thing that makes us all feel uneasy when we sign these terms of service. those are difficult ones. now i'm going to get to the ninja difficult ones, which are just beginning to emerge and make things very interesting for artificial intelligence engineers who are trying to do good, but could quite easily accidentally do bad. this next example is where we're using machine learning to really help people, but inadvertently,
4:10 pm
accidentally, the machine learning system starts to look like a bigot, or make decisions which most of us would think a reasonable human would not make. and a good example of this is from a member of jim garrett's faculty, his school of engineering at carnegie mellon university, who ran a little experiment with google's advertising system, where he looked at the ads which are shown in response to a query about job searches. and he used google's personalization system to issue exactly the same queries to google when the revealed identity of the user was male, and when it was female. and horribly, it turned out that the ads shown when the person was revealed to be female were for jobs with lower pay. you look at that, and
4:11 pm
anyone would think that if that machine learning algorithm were a person, it would be both a jerk and, in fact, doing something illegal. just this morning, an example with facebook, which introduced an ethnic affiliation term in its advertising system, has fallen afoul of a very similar issue. now, why would a machine learning system do this? none of the engineers had any intent of causing harm. the reason was the machine learning system had just observed in the data prior to this that, all else being equal, which is a dangerous phrase to use, the women who were clicking on ads tended to click on ads for lower paying jobs than the men. so this machine learning algorithm, which we humans built, has a kind of defense: it can
4:12 pm
really say, i'm just showing people what they're most likely to click on. it's not my fault society is set up in such a way that my data is showing that women are clicking on lower paid ads. now, this is complicated, and i don't have the right answer for you. if it helps, i should note that this experiment is particularly unrealistic in the sense that it's very rare that a machine learning system only sees and identifies gender. usually the machine learning system sees many other things about a person. it actually has part of the history of the kind of things that person wants to do, other interests of that person. and so it will actually find that there are other features of that person, much more important than gender or race, that show their particular interests. it still makes us feel uncomfortable.
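[editor's note: a minimal sketch of how a purely click-maximizing ad server can reproduce the disparity just described. the click log, rates, and pay categories below are synthetic and invented for illustration; this is not the experiment or system discussed above.]

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

# synthetic historical log of (gender, ad pay level, clicked), with a built-in skew:
# assume, purely for illustration, that past clicks lean the way the speaker describes.
def make_log(n=20000):
    log = []
    for _ in range(n):
        gender = rng.choice(["f", "m"])
        pay = rng.choice(["low", "high"])
        rate = 0.16 if (gender, pay) in {("f", "low"), ("m", "high")} else 0.10
        log.append((gender, pay, rng.random() < rate))
    return log

# "model": estimated click-through rate per (gender, pay level) cell.
def fit_ctr(log):
    clicks, shows = defaultdict(int), defaultdict(int)
    for gender, pay, clicked in log:
        shows[(gender, pay)] += 1
        clicks[(gender, pay)] += int(clicked)
    return {cell: clicks[cell] / shows[cell] for cell in shows}

ctr = fit_ctr(make_log())

# a click-maximizing server now shows each gender whichever pay level has higher ctr,
# faithfully reproducing a disparity that was only ever present in the historical data.
for gender in ["f", "m"]:
    best = max(["low", "high"], key=lambda pay: ctr[(gender, pay)])
    print(gender, "-> mostly shown", best, "paying job ads")
```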
4:13 pm
so that is what i regard as the most difficult part of machine learning and personalization at the moment. it is very hard, and i do not know of a piece of research that i fully trust to prevent these things from being, if you like, bigots. finally, i'm going to mention the ninja hard case. now, this is pretty simple. it is the case that if you really want to preserve privacy, you can cost other people their lives. there are examples of this in many law enforcement situations. but another simple one is in medicine, where if you're involved in a drug trial, and suppose you had 20 hospitals all trying out some new drug treatment on 20 different patients, then it is definitely in the interests of those patients for the hospitals to pool their data, to actually share data with each other so
4:14 pm
that one central body can do the machine learning with a large n, and statistical significance, to find out if the treatment is working or not. now, if you decide not to do that, if you're so worried about privacy that you're not going to let the hospitals reveal details about the patients to each other, then you can still actually get some statistically significant results as to whether the medication is effective or not; it's just going to take you considerably longer, many more patients will have to be in the trial, and you'll have to wait longer before you get the answers. and matt fredrikson, a computer science faculty member at carnegie mellon, has shown very clear cases of the actual analysis of privacy levels versus years of lives saved.
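[editor's note: a minimal sketch of the "large n" point -- an approximate sample-size calculation for a two-arm trial, contrasted with what any single hospital holds. the recovery rates, per-hospital counts, and test are invented for illustration.]

```python
import math

def patients_needed(p_control, p_treated):
    """approximate patients per arm to detect p_control vs p_treated with a
    two-proportion z-test at alpha = 0.05 (two-sided) and 80% power."""
    z_alpha, z_beta = 1.96, 0.84
    p_bar = (p_control + p_treated) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_control * (1 - p_control)
                                      + p_treated * (1 - p_treated))) ** 2
    return math.ceil(numerator / (p_control - p_treated) ** 2)

# hypothetical recovery rates: 30% on the old treatment, 40% on the new one.
n_per_arm = patients_needed(0.30, 0.40)
print("patients needed per arm:", n_per_arm)          # roughly 350+

# if each of 20 hospitals keeps its own 20 patients private, no single site comes
# close to that n; only by pooling data across sites (and over time) does the trial
# reach significance sooner -- the privacy-versus-time (and lives) tradeoff in the talk.
print("largest single-hospital sample:", 20)
print("pooled sample across 20 hospitals:", 20 * 20)
```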
4:15 pm
unfortunately, and that's exactly what this room doesn't want to hear, there's a tradeoff curve there. it's almost certain in our mind that we don't want to be on the extreme end of that tradeoff curve, but we do need to decide where we are in the middle of it. so, hopefully we're squirming. i've tried to show you that no extreme position is good, neither personalization at the expense of privacy, nor privacy at the expense of personalization. none of those extreme positions are useful. we have to use our technological smarts, and our policy smarts, to try to find the right place in the middle. and that's the setup for this panel discussion. [ applause ] >> thank you. >> at this point i would like to introduce our panelists. yuet tham, who is an expert on cross-border compliance, and
4:16 pm
international agreements regarding data use. if you want to come up to the chair. paul timmers, the director for the sustainable and secure society at the european commission, has been head of the ict for inclusion and e-government units. so we have experts here from asia and from europe who are helping us discuss this issue. next i'm pleased to introduce ed felten, a computer scientist and the deputy director of the white house office of science and technology policy, who has been leading a bunch of intense strategic thinking about artificial intelligence over the next few years. and then i would like to introduce our moderator, ben scott, senior adviser to new america's open technology institute, and also a nonresident fellow at the
4:17 pm
center of the society at stanford. >> thank you very much, andrew, for that introduction. we're going to jump right into the discussion with our expert panelists who, as you see, strategically represent different regions of the world, so they can offer perspectives on these questions from across the globe. if i may quickly summarize the policy conundrum, it is this. machine learning and a.i. benefit from the personalization of data used in learning algorithms. personalization requires large data sets. it requires the collection and processing of data at a large scale. that raises two key questions. one is, what are the rules governing the processing of data for commercial uses of a.i.; the other is, what are
4:18 pm
the rules for collection and processing of data for government use of a.i. underneath that sits the basic question of algorithmic accountability. if you decide it is unacceptable to have an algorithm that reflects gender bias in employment practices, how do you regulate that? and if you decide you regulate that at the national level, how do you coordinate that at the international level when data markets are global? these are the problems that we are all facing in government. and i think it's fair to say that the technological reach of machine learning and artificial intelligence has exceeded the grasp of our policy frameworks to contain and shape these new forms of digital power in the public interest. so what i'd like to do is start with setting a baseline of where different parts of the world are coming down on these
4:19 pm
issues, and what the building blocks look like at the regional level. there have been lots of efforts in the u.s. to address these questions. there have been lots of debates in the european union to address these questions. i would say less so in asia, although i'll be interested to hear more from yuet about what's happening in asian markets. but i want to first begin by allowing all of our panelists to speak from their own perspectives about what's happening in this field in their region. what is the approach to regulating or establishing a policy framework for these most difficult questions: big data collection and the application of artificial intelligence. maybe i'll begin with you, ed. >> okay. well, first i should start with the disclaimer that i'm not a lawyer. so do not treat me as an authority on u.s. law on this issue. but i can talk about the policy
4:20 pm
approach that has been taken in the united states. and it is rooted in the longer term policy approach that the u.s. has taken with respect to privacy. and that involves generally regulation of certain sectors where privacy is particularly salient, whether it involves things like health care, or practices related to credit and employment and so on. and it also involves a broader consumer protection framework around privacy that is rooted in notions of notice and consent. and so we have a framework for privacy which the u.s. has used and is continuing to use. and that involves both laws and involves also enforcement of those laws. when it comes to the particular issues that are raised by a.i. and machine learning, there are
4:21 pm
a bunch of things that have been done. and i point in particular to, over the last few years, the work that the administration has done on big data, and then more recently on artificial intelligence. on both of those areas, and i think they're tightly intertwined, the administration has engaged in a series of public outreach activities, and then published reports. the idea being to try to drive a public conversation about these policy challenges, and to try both to move forward on making rules and making policy in a fundamentally positive way, but also to heighten the attention to and interest in these issues, and to try to drive a public debate. because i believe strongly that the institutions, the companies
4:22 pm
that are collecting data and using it in this way almost universally want to use it, collect it, and engage in a.i. activities in a way that is responsible and positive. and sustainable. because i think people recognize that if you push the envelope too much, that the public will not allow that to stand. and so we've really tried to drive public discussion, we've tried to raise the level of dialogue, and that's been fundamentally one of the areas in which the administration has worked. we also recognize the importance of -- we also recognize the ways in which these issues operate across borders, and the need to work with international partners and to make sure that as data flows across borders, and as citizens everywhere encounter the companies, the institutions of other nations, that we can work together reasonably, and we
4:23 pm
have an international system for dealing with these things. >> thanks, ed. what is the view from brussels? >> perhaps i should put in another kind of disclaimer, in a certain sense, in that i think if you look at what is happening in policy development, the engagement with stakeholders, public debates like here, or whether you go in the direction of official public policy or law and regulation, you have to put it actually against the reality of what is happening around technology and around the use of technology. so i think the examples that andrew gave are really interesting and challenging. for example, the case where machine learning doesn't have access to your personal data, even if it would be good for other people. it's a very interesting case, because you have to look at how you could apply today's framework to that. to a degree, the law is strong in the
4:24 pm
european union for fundamental rights. and we look at fundamental rights, but fundamental rights are also not absolute. public health is one of those reasons to start using some of that personal data, individual data, but with the appropriate safeguards. and that may mean that you put a challenge to the technology. can you encrypt sufficiently? can you use new technology so you can have accountability after the fact, after it has been used? i think it is that dialogue that we very much are looking for in the european setting. fundamental rights are very, very important in the european setting. if we say privacy, privacy is a fundamental right; as a matter of fact, we even split it into privacy from the perspective of the protection of your private life and your communications, versus the protection of your personal data. there are differences. the fundamental right is at play there. based upon that, we have laws,
4:25 pm
but also policy development. it's a very actively moving field. at the moment we are working on a policy initiative around the free flow of data and platforms, and precisely those are being put to the test by machine learning, a.i., the questions that we have on the table. >> how does it look in asia, yuet? >> i'm not a computer scientist, so i will approach this from a different perspective. one of the challenges about asia is that, you know, it's not even bifurcated, just in terms of the regulations coming out of the region. you talk about asia, but what do we really mean? different people have different views about asia as well. but i think when you talk about privacy laws in the asia-pacific, the countries that come to mind, at the forefront of regulations, would be japan and korea.
4:26 pm
and to some extent australia and new zealand. and then countries such as singapore, hong kong, taiwan, and the philippines; some of them actually put their laws into place in 2012. singapore is a country where they have made changes, they are progressive, but the fact is that they implemented privacy laws for the first time in 2012. that, again, gives you some idea as to the importance placed on privacy. and then in the last category you've got countries such as indonesia, vietnam, and china. and these are the countries where, even though we call them privacy laws, they're not really based on privacy, not individual privacy anyway. and i heard today a lot about human rights, how privacy is a
4:27 pm
human right. for a lot of these countries, these laws emanate not just from a motivation to protect human rights; a lot of it would be consumer rights, and i think some people would argue that consumer rights are to some extent human rights as well. and for a lot of these laws that come into play in that last category of countries, the challenge is that they don't have a single data privacy regulation. and i tell my clients, a little facetiously, but it's true to some extent, sometimes the more laws the country has -- i mean, i do fcpa corruption investigations, for example, and as a result of that, we take a lot of e-mails throughout the region; that's why we have the familiarity with the privacy rules and regulations. i joke with some of my clients, don't look at the transparency index to see how risky a country is when it comes to corruption.
4:28 pm
look at how many laws they have. the more anti-corruption laws they have, the more problematic corruption tends to be in that country. it is the same for countries such as china, and vietnam, and indonesia. you find little bits and pieces of information. they refer to how privacy is, you know, a right of all citizens, but they don't really tell you how that's going to be enforced. that is the kind of regulation that you see in china. i think some of the challenges in asia are just trying to harmonize the regulations for a lot of companies, a lot of our clients, who are trying to operate and transfer data across the borders. so japan, for example, has got a new law that will come into force in about two years, and that's probably the first time where we actually talk about personalization. in terms of all the other countries, i think the idea of
4:29 pm
artificial intelligence is not even something that the countries have seriously considered. there are things like guidelines issued by some of the regulators, but these are just guidelines. >> let me pick out a point which i think is implicit in what you said, which is, you've all described the approach of the united states, europe, and a variety of asian countries to these questions from a commercial data privacy perspective: regulating the market, commercial actors gathering data, applying artificial intelligence to produce particular outcomes. but i think at the core of this question, from a regulatory and especially from a political perspective, when you collect a lot of data and begin to produce some of these outcomes, government access to that data is inextricably combined with the data protection regulations. the recent tensions between the
4:30 pm
united states and europe have been about commercial data practices. but ultimately it is rooted in u.s. government access to the commercial data that is collected by american companies. so my question is, do you believe that even if we were able to find a harmonization, a standard for commercial data regulations that apply to big data collection and the application of artificial intelligence, it is all undermined at the end of the day by individual governments' interests and their unwillingness to give up access to that data for national security or law enforcement purposes? >> i can actually give a quick example before we go to europe and the u.s. china has got a provision where, you know, there are a few examples of localization.
4:31 pm
if there is information that relates to the medical information, or health, of the citizens, it has to be stored in servers in china. another example is singapore's data privacy provisions: the singapore government and all state entities are excluded from their scope. that's a very good example of where the state's interests and citizens' rights conflict. >> perhaps building on that, i think this whole question about national security and sovereignty, perhaps you also have to generalize a little bit to all the interests that are certainly governmental interests, or that should be addressed for society at scale, such as safeguarding democracy. so i think one of the concerns, if you look at a speech last week, they talked about the transparency of the platforms. this is in order to keep consumers properly informed. but it's also, what is the kind
4:32 pm
of bias that may creep in through the algorithms in terms of the provision of news. that's got everything to do with the way you execute democracy. so there is a debate about avoiding a situation where democracy gets polarized into echo chambers and we don't have a real debate anymore. that's also a set of interests. i think where you're talking about the values, to what extent are they shared internationally -- now, i think you can be optimistic or pessimistic about that. if we talk about data protection, we have been able to make an agreement between europe and the united states, even if you do not have exactly the same starting point as regards data protection, let alone as regards national security: the privacy shield. i know it's going to be put to the test, and that's how it should be. nonetheless, we go a lot further than we did at the time of safe harbor. we have addressed in that area the access by government, for
4:33 pm
national security purposes, to the data being transferred in the transatlantic context, and the safeguards for that. so it is possible, if you negotiate, to make an agreement on certain types of issues; whether you can do that for everything, and across the world, i think is very doubtful. there are many places where norms and values don't work. so if we bring it to the field of cybersecurity, as we clearly see it, with what governments are doing about norms in cybersecurity, which has everything to do with a.i. also -- are we getting very far? it's little steps. i think there's not a single type of answer to this question. there is a degree of progress between, let's say, those that have a degree of like-mindedness. but there are many, many areas where we should be pessimistic. >> there are plenty of areas in
4:34 pm
which government access to data for purposes of national security or law enforcement is relatively uncontroversial. i think we don't want to forget those. and of course, the international discussions around this issue have been going on longer than the -- than the conversation about a.i. these are not -- these issues are not simple. but i think if you look at privacy shield, for example, it is an example of the way in which it is possible for us to engage internationally, and to get to a point where we can work together. as to these issues about fairness and nondiscrimination, i think this is another area in which there is a broad alignment of interests internationally, and in which i think there's a lot of progress we can make by working together. >> let me present a more
4:35 pm
pessimistic vision and get a response to this. to me it stands to reason that as the private sector grows more sophisticated with machine learning technologies, collects more data, applies more powerful a.i. algorithms to that data, it will be irresistible for government to reach into those companies, for legitimate reasons in many cases, but also perhaps for illegitimate ones, to gain access to that power. the example that you raised of the firefighter buying arson kits -- i don't know where you buy those or where you have the coupons for them -- but the idea that law enforcement may not only tap your phone calls or your e-mails, but also look at your purchasing records or your health data and put together a portrait of you and compare you against others and determine you may have committed a crime, is something i think the government in legitimate cases would want to use. but what that says to me is that, ultimately, every country is going to want to control that
4:36 pm
data for themselves, in their own sovereign territory. so my question is, number one, are we headed for a global data sovereignty movement where everyone tries to have data rules where a.i. operated by domestic companies is used as a geopolitical asset? second, on algorithmic accountability: if they say, we'll show you how the algorithm works, and it reflects the actual behaviors of users and reflects back to them the things that they are most likely to click on, do you then regulate that algorithm and tell facebook, you have to change that algorithm? and then how do you hold them
4:37 pm
accountable? how do you determine whether they have done so in a way that measures up to a particular standard? so i guess two questions. one is, are we headed to a hard power regime of localization, in your view, at the global level; and two is, even if we're able to use transparency as a tool to push back against excesses of a.i., does it even work? >> let me start by taking the second part of that, about the value of transparency, which i think really goes to a desire for governance and accountability. and one way to try to get there, to increase accountability, would be to say, well, open up everything, tell us everything about what your algorithm is, tell us everything about what your data is. but here i think is a place where we can apply technical
4:38 pm
approaches to try to provide accountability, to try to provide evidence of fairness or nondiscrimination or accountability along certain dimensions, without necessarily needing to reveal everything. i think one of the traps we can fall into in thinking about this issue is to think that this is a problem caused by technology, which can be addressed only by laws and regulations. i think it's important to recognize, as i think the discussion today has, that technology can be an important part of addressing these problems. that there are technologies of accountability, and that we need to think in a creative way about how to put those things together. we also need to think, i think, about the ways in which forces short of legal prohibition can constrain the behavior of companies and authorities when it comes to the use of data. to the extent that what is
4:39 pm
happening is known to the public, to the extent that there is an opportunity to provide evidence of fairness, evidence of accountability, that in itself creates a dynamic in which companies and authorities may -- will often voluntarily provide that kind of accountability. we've seen that to some extent in privacy, where companies would like to be able to make strong promises to consumers, for consumer comfort, while knowing that they will be held to those promises. you get a dynamic in which companies can compete based on privacy. to the same extent, if we have technologies and mechanisms of soft accountability, that can lead, number one, to a competition to provide a service in a way that's more friendly in order to bring people in, and it can also lead to the kind of
4:40 pm
accountability that occurs when bad behavior is revealed. so i think there are a lot more opportunities there to do softer forms of governance and try to use technology to get to that issue. >> do you think the regulation is sufficiently flexible for softer forms of -- >> absolutely. well, i think what that says -- i find that really challenging. i think indeed, technology needs to be invited to make things work really well, like the underlying intentions of the data protection regulation. if you talk about informed consent, even informed consent about automated processing, that is a real challenge for technology. you can bounce back and say it's impossible, because with the algorithms we don't even know what's happening inside. but that's not sufficient as an answer. there are other approaches, and i think you're referring to that: there are other approaches where you can measure things like fairness, things like, did you actually understand what is
4:41 pm
happening, the decision-making. also, i must say, we're getting a little bit away from the monolithic notion of consent. there's a dialogue you can continue to have, and that's where the technology can mediate, when you talk about consent as the use of the data evolves. so i'm kind of optimistic about the opportunities that are there in technology. when you talk about localization, again, probably a nuanced approach to that is necessary. because there is a real risk, i think you pointed to that, that data localization happens. it's happening already today. and actually, that you do not necessarily get an internet by country, but perhaps by region. at the same time we have initiatives, as you heard earlier, like the privacy shield. that's a way to avoid the
4:42 pm
localization, and there we are talking about personal data. with the free flow of data initiative, actually, we can remove restrictions on the localization of data. and i think we probably want to differentiate which domains we are talking about. when we talk about a public health problem, like the rise of the zika virus, i think we have a more globalized approach to that. we have the w.h.o., and the professional collaboration in the field of health, that allows us to do big data, a.i. types of analysis on the data we're getting on zika all over the world, as a matter of fact. for me, in this debate we need to involve the governance that already exists. almost any kind of government institution we have in the world that works will be exposed to the question: what are you doing with the data and with a.i.?
4:43 pm
make use of those institutions, too. that may be in a more differentiated way, but there are certainly domains where it will work. what about the data we have from self-driving cars? i'm not sure. >> perhaps a necessarily complex, but therefore differentiated, sector-by-sector approach -- >> and i think you learn from what you do in one sector and apply it to the others. it is not impossible to come up with governance, and to come up with new governance approaches, though not all will work. the realtime threat of cyber incidents may not be quite compatible with the type of governance that we have set up between people and organizations, which is relatively slow. and so we will also have to review the type of governance that we already have. >> and i think for asia, this is
4:44 pm
a little self-serving, but i still think we need regulations, because a lot of the countries still don't have what we take for granted in the rest of the world. for those jurisdictions that have the laws in place, i think the question is how the enforcement works, and the positions and guidelines issued by the regulators; but there are so many other countries in asia that still don't even have basic privacy laws. i think at the end of the day you still need those to be in place, i mean, for the framework. and a lot of asia follows the consent principle that is adopted in the rest of the world. i think in terms of localization, a lot is being done for various reasons, and it's usually not because of privacy. for example, indonesia, i mean, they talked about localizing
4:45 pm
data. the reason for that was a misguided belief that localizing data was going to help improve their economy. they didn't realize that it would put off a lot of the multinational corporations from investing in the country. so they held back from that. which brings me to my, you know, last point: i think what we have seen, with all the multinational companies that have set up operations in asia, is that they bring along with them the regulations that they have to follow, because, for example, they are dealing with data from the european union or the u.s., and because of that, they tend to follow the standard that's set the highest. so when you have consumers in asia who see, hey, this is the way my data should be treated, and this is the way an
4:46 pm
international corporation would deal with my data, my privacy, then you start expecting that from the institutions within the country. so i think there has been a lot of that cascading of privacy standards, even though the regulations aren't in place, because you've got the economic pressure, to a large extent. >> i want to put one more provocation to the panel. but before i do that, i want to invite you all to start thinking about questions you may have for the panelists. we'll reserve the last section of this panel for audience questions. so start thinking about that while i put this question to the panel, which is: all three of you have now raised notice and consent. it is the basis of privacy law across the world at the moment. and yet, even before a.i., it was already under fire, already under attack about whether it would ever be sufficient, for various reasons.
4:47 pm
there's an argument that notice and consent is a sham, because you're presenting a consumer with a 15-page document for a service they want to buy, and no one ever reads it. they have no idea what they consented to, even though they've been given notice. once you click that box and say, i agree, all the rights you had up to that point are gone. not all, but many. second, as we collect more and more data, and companies become diversified horizontally across many product platforms, a company may not know exactly what it is that they're going to do with your data, and they may not know to give you notice at that point. and at what point do you build in multiple notification points? i've recently had occasion to talk to a number of founders of new silicon valley startups, and also in europe, where i spent the last several years in berlin, and data is the new value
4:48 pm
property. people are building companies based purely on the idea that they're collecting lots and lots of data. what they will do with that data, how they will monetize it, how they will pool that resource with other resources, how they will be acquired and integrated into a larger enterprise -- big question mark, but undeniably not a deterrent for venture capital flowing into those companies. once again, this draws into question the basic notion of notice and consent. if we come into a world where data is pooled intentionally in a fashion to maximize personalization, it might not even be reasonable to ask a company to predict in advance all the ways in which that data may be used, and they may not be the only ones who gain access to that data and use it for purposes that may benefit or harm the user. so my question to the panel is, if we root the idea of an international standard on
4:49 pm
privacy policy, as it applies to big data and algorithmic accountability, on an old framework of notice and consent, are we setting ourselves up for failure from the beginning? >> i think notice and consent is not bad at all. what we are challenged to do is to make sure that consent is informed, meaningful, freely given, that there is a choice, and that's simply not often implemented. the 40-page contract is not meaningful. how do you translate it into something that is meaningful? it must be said that in order to process your personal data, it's not only consent that may be a lawful ground; at least in the eu, there are other grounds, and public health is one of those.
4:50 pm
certainly there are issues around public safety, security, et cetera, that may be grounds to process personal data without consent. there's actually even a legitimate interest for direct marketing purposes that, as the law may assess, may constitute a legitimate ground to process personal data. now, the question is how do you interact? because in all those cases you still will have to interact with the data subject, the one that is providing you data. how do you do that in a meaningful way? i'm still a little bit puzzled why interaction with the user is a problem. from the company point of view, you would probably say i want to interact more with the user rather than less, because each point where i interact is another opportunity to engage in the discovery of value, to differentiate. >> let me ask a followup. what if the user is dead? we're coming soon to a moment in our history where there are terabytes of data out there about people who are
4:51 pm
no longer living. yet that data will undoubtedly have value to the company that owns it and the governments that may gain access to that data. how do you deal with that? >> so you will probably be thinking of a case where you would like to invoke public interest, for example public health. and again, certainly the european law says that if it is in the public interest, you can use that as a legitimate ground to start processing data. so there are possibilities. really, there are big technical difficulties, or underlying difficulties, that have to do with algorithmic accountability, which is not resolved. actually, we have a broad debate with the health community in europe; radiologists are saying, what do i do with all those data that i have, that i now start to put again under data protection, and how do i make sure that folks who want to be forgotten can apply that? there are serious implementation challenges. they will not always have the
4:52 pm
most ideal answer. but in a certain sense, we are looking back into a legacy. a legacy that we can improve as the implementation of the law evolves. all of the communities that are involved in personal data, in europe for sure, are also called upon to look at what the technology and the law makes possible, and provide their interpretation of that, a common interpretation rather than a fragmented one. that's a challenge that needs to happen from public administration to radiologists. >> but in answer to your second question, actually, about -- there are a number of laws in asia where if the subject is dead, the concept of privacy doesn't apply anymore. >> open season on that data. >> yes, unfortunately. and i think just in terms of the other question that you raised about notice and consent, so i
4:53 pm
say this a little facetiously, again: we used to joke, we can draft these consent agreements and put in as much as we like and nobody is going to disagree. everybody will just click "agree." i read this book called "future crimes." i have to say, after reading that book, i refuse to load apps on my iphone to the extent that i can. it is very difficult to live without apps, but i probably have one of the fewest numbers of apps in the whole of asia on my phone after reading that book. i remember some of the statistics that i saw in that book, about how i think the privacy policy for facebook is double the length of the u.s. constitution. and, you know, i think it was either paypal or ebay, i don't remember which company, where the privacy policy is longer than "hamlet." i've given presentations, a lot of presentations, in asia about privacy and data security.
4:54 pm
and i've always asked this question: how many times have you actually said "i don't agree," you know, when that privacy policy pops up? and in all the presentations that i've given, only one person put up their hand, and that was a lecturer from one of the universities, an academic, to some extent. but i think most people will just click yes, because they don't have much of a choice, or because they don't think it's important. it's not because people don't value privacy, but i think the difficulty is that there aren't many avenues for them to seek redress. and, you know, because we don't have the concept of class action litigation, and it's not a litigious society in general in asia, it's very difficult for individuals or consumers to get together and change the laws and the policies. >> so this is a fundamentally
4:55 pm
difficult issue, right? the uses to which some data may be put may be extremely complex. and the implications of those uses for a particular user may be even more complex. so if we were to start with the principle that something should not be collected if its use would not have been acceptable to the user, it's not at all clear how you could put that into effect in practice, right? we know that telling users every last detail of what will happen and every last detail of the implications, asking them to read that before they disclose anything ever, is not practical, and is not the way that people behave. that said, there are a few strange people like academics and privacy lawyers who do read these things, and there are people who have built tools that look for changes and so on and analyze them. so if a company does change its
4:56 pm
longer-than-"hamlet" privacy policy, there is some chance that will be noticed and trigger some public debate over that. there are methods of accountability other than all the users reading all the things, which we know doesn't happen. still, it's a fundamentally difficult question. if we were to offload that decision to someone else, that doesn't make it terribly much easier to figure out what the right answer is as to which uses would be acceptable to the user or which uses are socially beneficial. >> maybe the issue is, it's got to be meaningful notice and meaningful consent. and i think one of the things, you know, just from a policy perspective, is that the notion of these consent agreements is that they shift everything onto the individual consumer, who doesn't really have the ability to, you know, reject the terms. and so i think, you know, just
4:57 pm
in terms of the policy, when it comes to ai and all the other provisions, i think it's important for the governments to actually think about shifting a lot of the responsibility back to the corporations, for self-assessment and things like that. >> i'm wondering if you also cannot start splitting it up, in the sense that, especially with automated processing, you have to explain the significance and the consequences for the user. and the point i think we're making is, first of all, it's very difficult for a user to understand and read all about that, and fundamentally it may be very difficult to say that right at the beginning. still, that raises the question: why would you assume that it is only at the beginning that you ask for consent? why don't you have a repeated approach to interaction with the user, as the system actually also develops and learns, and draw the consequences? at the moment in time that a consequence becomes relevant, you could ask, and in a number of
4:58 pm
situations, i'm not saying all, it may be simpler to understand than a whole long text about what potentially will happen. >> there's an answer to that, my clients say they want all the consent up front. they don't want the obligation to go back to the consumer or the customers. usually when we draft these policy provisions for them or these agreements, they tell us right up front, can we make it as inclusive as possible. so that is what they do, because there is nothing to prevent us from doing that. i think that's difficult. >> i mean, i think what you're suggesting is thinking of it more as a matter of user interaction design or user experience design rather than perhaps asking for everything up front or trying to get an extremely broad consent or ask for extremely broad consent up front, that you might ask for
4:59 pm
some consent initially, more later. how and when you do that may be difficult depending on the nature of the product and whether there even is a touch point with the user that comes later. certainly i think thinking of these negotiations of consent in terms of user experience design, user interaction design, can be a fruitful way to get closer to a strong notion of consent in a way that is less burdensome on the user. >> to go back to that point as well, i think a lot of consumers don't really need to know the algorithm or to understand it. what they want to know is whether it's going to be used in a different way, the purpose it's going to be used for, not so much the algorithm. i've heard that excuse before; some companies say, well, there's no point in us explaining the algorithms to the consumers or the customers because they're not going to understand them. that's not the concern they have. it's the change in use. >> a quick followup for you.
5:00 pm
is it possible to have a discussion in the abstract about the notice and consent regime without looking at the market concentration in many markets for digital products and services? because if you're choosing between two or three mobile phone companies, or two or three search engines, or two or three social media platforms, or mortgage lenders or hospitals, asking someone to opt out because they disagree with the consent provisions is inviting them to not participate in modern society. so there, i think, is a relationship between market structure and privacy policy which in many cases is definitive, in that you don't really have a realistic alternative other than to consent. >> it must be said that i thought it was interesting, what was going around on twitter -- i
5:01 pm
don't know who attached a statement there from the fcc, which had just issued some privacy guidelines for internet access providers, describing the situation as one of no choice. they say you have to be additionally careful when there actually is no choice, which may be the case there. i think there is a certain sensitivity around the notion of fairness, which includes the notion of choice. >> let me at this point invite all of you to raise your hands. tim, are we passing a microphone around in order to get everyone on the recording? i'll start over here and make my way across the room. identify your name and affiliation before you give your question, so our panelists will know who they're talking to. >> thank you, i'm the dean of the college of engineering at carnegie mellon university. i want to come back to this idea of privacy versus
5:02 pm
personalization. it would seem the last part of this discussion raises the question, why don't we apply personalization to privacy? so, you know, rather than the only choice being that i have to take one blanket consent form, it's either that or nothing, what if there were some way for me to fill out a privacy profile, so that it described what i wanted to share, what i didn't want to share, how i wanted to share my data? could that not then be applied against whatever the company is saying is their privacy policy, so i don't have to read every one of them; i simply spend the time saying what i'm about, and let the interaction happen -- more like a personalization applied to privacy. >> that seems to remind me of the point you brought up about how competition in the private sector can potentially mitigate against abuses of privacy policies. maybe this is a question you can respond to. >> sure.
5:03 pm
and there are a couple of avenues that i think come to mind here. one is this idea that a user might check some boxes or slide some sliders in a user interface and give some idea of their preferences with respect to privacy, and there would be enforcement of that on the user's behalf, or some kind of automated negotiation between the user's technology, their browser or app, and a company's technology, so things would only happen within the bounds that the user had said were acceptable. and there have been various attempts to build those sorts of technologies. none of them have taken hold, for reasons that i think are largely contingent. it could easily have turned out that such a thing became popular, but for reasons too complicated to go into here, i
5:04 pm
think that has mostly not happened. the other approach is one that takes more of a machine learning kind of approach, where you're trying to ask the user a relatively limited number of specific questions about what they want, and then you have a technology that, on the user's behalf, tries to infer what decisions they would make in other cases. the idea of a system that operates on the user's behalf is one of the technological vehicles that could develop. and again, you have sort of contingent questions of technological development that may make that more likely, and may make it easier or more likely to be deployable. but certainly that, i think, is one direction in which users may be able to put technology to work on their behalf to manage this stuff, because the complexity of these choices, if the user has to make every single detailed choice, is too much.
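[editor's note: a minimal sketch of the two ideas just described -- a stated privacy profile checked against what a policy requests, plus inferring unstated preferences from a few answered questions. the data categories, policy format, and nearest-neighbor guess are all invented for illustration; none of this reflects an actual deployed system.]

```python
# data categories a policy might request and a user might allow; purely illustrative.
CATEGORIES = ["location", "contacts", "browsing", "purchases", "health"]

def granted(profile, requested):
    """uses the company asks for that the user's stated profile already allows."""
    return [c for c in requested if profile.get(c, False)]

def needs_consent(profile, requested):
    """uses the company asks for that would require an explicit new consent prompt."""
    return [c for c in requested if not profile.get(c, False)]

def infer_profile(partial_answers, known_profiles):
    """fill in unanswered categories from the most similar fully-known profile
    (a crude 1-nearest-neighbor guess, standing in for the ml approach described)."""
    def agreement(full):
        return sum(full[c] == v for c, v in partial_answers.items())
    nearest = max(known_profiles, key=agreement)
    return {c: partial_answers.get(c, nearest[c]) for c in CATEGORIES}

# previously collected full profiles (made up), used only to guess missing answers.
known_profiles = [
    {"location": True,  "contacts": False, "browsing": True,  "purchases": True,  "health": False},
    {"location": False, "contacts": False, "browsing": False, "purchases": False, "health": False},
]

# the user answers only two questions; the rest is inferred on their behalf.
user = infer_profile({"location": True, "health": False}, known_profiles)
policy_request = ["location", "browsing", "health"]

print("granted automatically:", granted(user, policy_request))
print("requires an explicit prompt:", needs_consent(user, policy_request))
```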
5:05 pm
>> i think it's a very interesting idea. the question is whether it will hold against all of the four cases. it may not hold against the third case, where you can use the personal data in ways that come back on society. personalization applied to privacy merits being discussed, but would it actually eliminate the risk of things being bad for society? >> we'll take another question. right in the back. the microphone is coming. >> yes, it's jose colon from the state department. i believe you are focusing on only part of the issue of privacy, because there are other means of data collection that don't involve people clicking on the internet. when you go to a store, there are many cameras following you, you pay with a credit card.
5:06 pm
they sell information that they are getting, and with machine learning, they are using it for many purposes. we had the case of samsung with tvs listening in on conversations with people. how would you address those? that's as big an issue as you clicking on something on the internet. >> your responses? >> i think this gets to the issue of, if you have a model based on notice and consent, how can you talk about consent in a case where collection of data happens in the environment, such as with cameras or with microphones that are out in the world. and the cases that are in a public place are, i think, some of the most difficult here. if there's a product in your home which has a microphone or camera and that's turned on without your consent, that seems to me not a difficult case from a policy standpoint.
5:07 pm
but in a public place, where there is not an interaction with the user in which consent could naturally be sought, i think this becomes a pretty difficult issue. i don't think we have all the answers to those by any means. >> for us there's also a real part of the debate here, because the two parts are fundamental rights: the confidentiality of communications in your private life, and the data protection part. what you mentioned touches upon both aspects. your confidentiality, when you go somewhere where you are not to be tracked, even if that doesn't necessarily and immediately involve personal data, is still a right to be protected. so it's really part of the debate in europe. >> yes, right in the back. >> hi, my name is andrew hannah, i'm with politico. you've talked about shifting
5:08 pm
responsibility back to corporations in terms of privacy agreements, and others have talked about softer forms of governance in terms of shaping what data can be used. i was wondering if you could be a little more concrete and talk about tangible initiatives that could be taken on a policy level to allow for this to happen. >> let me start. i think this already is happening. if you look at the dynamics that drive the privacy policies of some of the large companies and the ways in which companies use data, there is a competitive dynamic that operates, in which companies on the one hand would like to be able to use data to optimize their business goals, but on the other hand would like to be able to promise to consumers that the use of data is limited to things that consumers would find acceptable. and of course those promises, once made, have legal force. so i think you see this
5:09 pm
operating already. it's inherent in a model of notice and consent that consumers may either withhold consent or take their business somewhere else if they don't like what's being done in a particular setting. this is a dynamic that operates already, and it is something that is driven both by the enforcement of law, for example by the ftc with respect to companies keeping their privacy promises to consumers, and also by some of the public dialogue, the public debate, and some of the press coverage about privacy practices. all those things, i think, push companies to try to make stronger promises to consumers, which they then have to keep. >> i think there was one in the middle. yes, ma'am.
5:10 pm
>> hi, my name is kerry ann from the organization of american states. the question is kind of tied to the gentleman in the back who asked about other forms of data collection. most of you would recall, when she came online, in terms of how she collected data, there's data that's private, and there are algorithms you can build from all those sources that are open. how is that tied back to consumer protection if there's actually no obligation on the person who may be developing this new ai, that we don't know about, that's actually collecting it? how does privacy come in, when we're pushing our data out there for anyone to use? >> great question. >> strictly speaking, if you are able to start reidentifying and it becomes personal data, you still fall under the data
5:11 pm
protection law. you have to look at how far you push the boundary in also using open data to reidentify, and the case that you mentioned was real. that's where people have to take responsibility, or at least in the european context they would be liable under the law. >> hi, my name is al gombas, i work for the state department. i'm curious, if we were to create a scenario where we can negotiate the privacy restrictions, what might happen then, i think, is that companies will incentivize consumers to give more data, offering discounts or something if they want to get more data or consent from the individual. and i'm wondering how that might play out, if you think that's a good idea, bad idea, whether we
5:12 pm
should have a blanket law saying you can't do that, you have to offer the same discounts to everybody regardless of the amount of privacy they require of the company or not, and how a consumer may be taken advantage of, for example consumers may be in a position where they feel like they have to give up data because they can't afford the service without it. >> i think there was a study that showed that consumers actually prefer giving some information and then having the ability to consent if additional information or different uses are going to be made of the data. and i think the other point this study showed, i can't remember the name actually, was that consumers generally are willing to give more information if they get something in return. i think that's -- again, we go back to the notion of fairness. one of the -- you know, the
5:13 pm
problematic areas that we have is that either the consumer or the customer doesn't know how the data has been used, or it has been used in a different way and no notification has been given. and i think the third thing is that the companies that want to benefit have been able to monetize the data, use it for marketing reasons, but the consumer hasn't actually benefitted additionally from that different use or that additional information. so i think that at the end of the day, you go back to notice and consent, not right at the start of the relationship but perhaps as that relationship progresses. >> perhaps i can add something to that. for me there are two dimensions in it. one is, indeed, do you provide fairness in the perception of the user while the data is being used, and then people are saying that's not the case because you
5:14 pm
get value out of it and you don't give part of that value back to me. the other part of the debate is, does the consumer have a fair choice in the beginning, especially when somebody is in a monopolistic situation. if you look back at the statement the fcc made last week about access, it's essentially, i think, what they are saying: you cannot be forced to give up your browser data, your browser preference data, et cetera, or otherwise you don't have access to my service. there's not so much choice in that service. there's also the aspect of, is there a reasonable balance at the moment an essential service is being provided versus the use of personal data -- you can't stop people from having access to the service. >> it's not different from other regulations, where you need the government to step in to start the ball rolling. if none of the internet providers are actually -- i mean, i think it's going to be quite difficult in some sectors
5:15 pm
to wait for the companies to take the initiative to regulate themselves. i think this is one of those issues where you have to have the government step in and just start the ball rolling. >> yes, ma'am, right here in the front. >> my name is erica bassou. i'm a ph.d. student at american university. my question is about the notion of democracy in all of this. while we are speaking to a roomful of people who have a fairly good idea of some of the terms that we're using, like notice and consent, terms of service, data privacy, and ai, i'm just wondering what this all means in terms of access to even this information about what these terms are. and is it just a conversation
5:16 pm
between policymakers and corporations who have access to these definitions? or is it really a conversation with the users who get affected? >> great question about literacy. i mean, in practice you see a lot of discussion, a lot of chatter among policy experts, and you see more occasional flare-ups of direct public interest in some of these issues and some of the practices. and as is often the case in governance, the elites are sweating the details every day, and there is a corrective of the public noticing something that seems quite wrong to them and speaking up loudly. i think that is how these things often do operate. and we do
5:17 pm
certainly see those flare-ups of direct public interest from time to time. >> one of the points in the debate in europe is also whether machine learning, ai, should be made more widely available, kind of an open ai type of environment, which actually could be quite an interesting point for international cooperation. that's kind of democratizing the tools themselves. >> yes, sir, in the front. >> thank you. daniel reisner from israel. my question is, you mentioned a phrase, old frameworks, when discussing this issue. one of my questions relates to one of the oldest frameworks we're using, which is the concept of the state in the framework of the discussion.
5:18 pm
because we all realize that we've globalized every element of the discussion. the data, in spite of localization efforts, is global. companies hold data. the same piece of information is split between two or three different locations, not even on the same server. and some of my clients are split up over different continents. you don't actually get the same piece of data in any one location anywhere. the company holding the data is actually a multinational structure and sits in 25 different locations as well. and so on the one hand, the data is globalized, the players are globalized. that raises the question, what is the role of the state? i'll give you an example which i faced relatively recently in israel. the israeli government -- not the whole government, but parts of it -- called me up one day and said that they had decided to regulate an
5:19 pm
international cloud services provider. i asked, why do you think you should regulate this company? they're not active in israel per se. they say, it's very simple, they offered the services to an israeli government entity. but i said, the cloud sits somewhere in europe, i think, the company is an american company, et cetera, et cetera. they said, yes, but the service is being offered in israel so it's our job to regulate. and i pushed back and i said, well, if you want to regulate it for that purpose, then 211 other countries in the world could legitimately make the same argument, because it's a global service. i said, do you really think it makes sense? they said, we never thought of that, we'll take it under advisement, and i haven't heard from them since. [ laughter ] what do you think we should be doing? governments are still our main tools of policy. but when we all recognize that facebook has more to say about the privacy of its constituents than any government in the world, with apologies to all
5:20 pm
governments represented, are we still having the discussion in the right forum? should we be thinking of a different mechanism in which we have a discussion and engagement with the right players in the right forum? >> a very simple question. [ laughter ] >> you see something similar happening in the debate around cyber security, which is considered by some an international issue, but global companies are saying, i want to buy the best cyber security in the world, i don't care really where it comes from, but i need to have the best because i'm a global company. is that necessarily contradictory? i don't think in all cases. does it mean that you need to go for some form of global governance? at least a form of international governance, yes, because you need to have an idea of what the quality of cyber security is. i think the supply-side cooperation, if i simplify that concept, could be quite fruitful
5:21 pm
in a case like this. what companies are actually asking for when they talk about data protection and privacy and machine learning, and how that is different from the more nationally determined cultural values around it -- that's a plea for the community; the cultural values discussion is part of the debate around machine learning and ai. i don't think you can get very far if you do this only nationally. >> a quick followup question, because i think his question is really an important one: do you think there are any global institutions that could channel national interests effectively, at least at a minilateral level? meaning the largest number of states that are willing to meaningfully participate in a single standard. >> i guess it's not really
5:22 pm
organized or named as such. but i pointed earlier to certain sectors in which you can start talking about the governance of data. so you can build upon some of the existing governance that is there and make that more ai and machine learning aware. we don't need to invent something new, but perhaps we do need to talk about an additional -- 'institutionalized,' in quotation marks -- form of governance to tackle this. some people have an interesting proposal on the table; a financing organization in the uk talks about creating a machine intelligence commission that would work more on the basis of the notions around common law, as you get exposed to the practice, and it would really bring experience together. >> other comments on this point? we have about five minutes left. i'm going to try to take a few more questions. yes, sir.
5:23 pm
>> carl hedler from the george washington university privacy and research institute. we seem to be relying on lawsuits largely to control corporate behavior in regard to privacy. i'm wondering, in that case people have to identify harms. i'm concerned about the ability in the context of ai and machine learning -- [ inaudible ] -- think we have that ability? >> that's a tough question. and it gets to some deep technical issues, as you know. the question of why an ai system did a particular thing, and what that system might have done had conditions been a bit different, can be difficult to answer. but depending on what kind of decision it is that the system
5:24 pm
made or assisted in, there are different legal regimes that may operate, at least in the u.s., and different legal burdens may apply to the company or institution that is making that decision with the help of ai. so i think it's more of a detailed question as to what kind of decision it is and what kind of governance is needed. i also think that, to the extent that people are naturally skeptical of whether complex ai-based decisions are being made in a way that is fair and justifiable, using these technologies in a way that is really sustainable in the longer run will require greater effort at being able to explain why a particular decision was made, or to be able to produce evidence to justify the fairness or efficacy of the
5:25 pm
decision that's being made. it's not a simple issue. i do think that the public in protecting themselves, and government in protecting the public, against the sort of harms you're talking about are not without legal or technical capabilities. >> let me ask a question that sums up several that i've heard so far, which is: given the inherent weaknesses of notice and consent, but recognizing it's a tool that we have, and recognizing the challenges of identifying harm and adjudicating it, is there a combination of tools that might be used that are rooted in transparency? what does this algorithm do and what is it intended to do? thereby we get a better sense of whether it is producing a harm or may produce a harm, and that harm, or some
5:26 pm
approximation of that risk, should be disclosed in the notice regime. what is the combination of tools that might best produce a framework for handling these technologies as we move forward? do you want to jump in on that? >> i think that's a very good question, and ed's comment is just right. there is something very interesting about being an ai engineer building one of these systems. it's sometimes very hard to diagnose why your system did something, but you always have to write down something called an objective function. for instance, if i decided tomorrow to release a program to help people navigate the streets of washington in traffic efficiently by tracking everyone, all the cabs and all the other vehicles, and if i write down my objective as, for each user, to get them to their location, their destination, as
5:27 pm
quickly as possible, then even if i'm doing some fancy algorithms which you don't quite understand to accomplish that, i can show that objective to a lawyer or policy maker: this is why my algorithm is pulling data from many people. on the other hand, if i supplement it a little bit because i'm getting paid by a coffee company to route people past their coffee shops, then again, that will be sitting there in the code. so when you think about an ai or machine learning algorithm being written, and someone says, it's so complicated we can't explain it, that's not a legitimate answer. because when i write an ai algorithm, i always have to write the objective function, what is the thing that the ai system is trying to do. if you want companies or governments to be clear about what their ais are doing, it is legitimate to say, show me the objective function.
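a minimal sketch of the point being made here, in python. the traffic model, the routes, and the sponsor term are invented for illustration, but it shows how the objective function is a single, inspectable piece of code even when the optimization around it is opaque.

```python
# illustrative sketch (hypothetical): the objective an ai routing system optimizes
# is written down explicitly, even when the optimizer itself is complex.

def travel_time(route):
    # stand-in for a traffic model estimating minutes for a candidate route
    return sum(segment["minutes"] for segment in route)

def passes_sponsor(route):
    # stand-in check for whether the route goes past a sponsor's coffee shops
    return any(segment.get("sponsor_coffee") for segment in route)

def objective(route, sponsor_bonus=0.0):
    # the declared goal: minimize travel time. a nonzero sponsor_bonus term
    # quietly rewards detours past coffee shops -- and it sits right here in
    # the code for a lawyer or policymaker to read.
    score = -travel_time(route)
    if passes_sponsor(route):
        score += sponsor_bonus
    return score

routes = [
    [{"minutes": 9}, {"minutes": 6}],                           # fastest route
    [{"minutes": 8}, {"minutes": 9, "sponsor_coffee": True}],   # coffee detour
]

# with an honest objective the fast route wins; with a hidden bonus it doesn't
print(max(routes, key=lambda r: objective(r)))
print(max(routes, key=lambda r: objective(r, sponsor_bonus=5.0)))
```

the honest objective picks the fastest route; the sponsor bonus flips the choice, and both facts are visible to anyone allowed to read the objective.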
5:28 pm
>> maybe we will leave it there, with andrew's optimistic vision of a possible way forward. please join me in thanking all of our great panelists. [ applause ] beginning at 8:00 eastern tonight, president carter's concession speech and ronald reagan's acceptance speech. and from the 2000 election, george w. bush and al gore's speeches. programs on presidential leadership and the 1789 debate over the official title of george washington, all of this tonight on american history tv on c-span3. election night, tonight on c-span. watch the results and be part of a national conversation about the outcome. be on location at the hillary
5:29 pm
clinton and donald trump election night headquarters and watch victory and concession speeches in key senate, house, and governor's races, staying live throughout wednesday. watch live tonight on c-span, on demand at c-span.org, or listen to our live coverage using the c-span radio app. we'll be simulcasting the canadian broadcasting corporation's live coverage of the u.s. elections. more now from a conference on artificial intelligence, privacy, and security, with specialists talking about autonomous weapons systems and their current and potential uses by militaries around the world, also the practical and ethical questions in delegating decisions to computers as well as the constraints on their use in battle situations.
5:30 pm
good afternoon. thank you all for joining us this afternoon, including those of you who are joining us just now. also welcome to those of you on the live stream who are joining us today. the hashtag for the event is #carnegiedigital. this is the second panel of the first part of the carnegie colloquium. this panel focuses on autonomy in the context of military operations. as i explained earlier this morning for the first panel, this event is designed to combine the tech expertise of
5:31 pm
carnegie mellon university with the expertise of the carnegie endowment. it will be followed by a panel discussion with experts from around the world. so we are particularly pleased and delighted to have people from israel and india who came all the way specifically for this event. it's now my pleasure to introduce you to david brumley, director of the security and privacy institute at carnegie mellon university. he's also ceo of a company called forallsecure, which won the darpa cyber grand challenge this year. it's a great pleasure to have him here. with that, i look forward to the panel discussion. thank you. [ applause ]
5:32 pm
robots to fight on the battlefield. the u.s. navy is developing swarms of unmanned drones. and darpa commissions a fully autonomous cyber bot commission. these highlight the increasing role of autonomy in the military. in the second panel we'll take an international perspective on what autonomy and counterautonomy mean in military operations. as tim mentioned, my name is david brumley. i'm a professor and director of cmu, security and privacy institute. i also consider myself a hacker as i run this hacking team that many people have talked about. my job for the next ten minutes is to give a highly level overview of the issue, why it's so exciting, why it's so timely, and why it's so important to get absolutely right as we go forward. this panel's issue in a nutshell is countries around the world, including the u.s., russia, israel, china, india, are increasingly deploying and investing in artificial intelligence and autonomy in their operations.
5:33 pm
autonomous technology, once the work of science fiction, is here today. for example, in pittsburgh, you can use your uber app to summon a completely autonomous vehicle to take you home from a steelers game to your house. don't just think physical. think of cyber space. think of social. for example, in august this year, darpa demonstrated that it's possible to build fully autonomous cyber bots in full spectrum offense and defense. it then went on to demonstrate that these bots can supplement human capabilities in the manual defcon competition. we also need to think about social networks where autonomous systems can be used to sway the opinion of a population. key pros include faster and better decisionmaking in weapons systems, cyber space operations, and it even creates the possibility of fully robotic soldiers in warfare. these are all significant benefits that lower the cost and lead to better protection of human life. however, there are significant
5:34 pm
policy, legal, and ethical questions. many questions revolve around how much control we should cede to machines. what sort of actions should we allow machines to take and when. and how do we handle the case when machines have mistakes, when there's bugs that could be exploited by your variations. autonomy results from delegation of a decision to authorize and take action within specific boundaries. we'll be talking about delegation of a decision. in the context of this panel, we delegate that decision to a computer program. an app, if you will. everyone is familiar with apps like games and web browser. but these are not autonomous. they follow a fixed set of rules and interact with the user in a very limited way. an autonomous system must be more than an app following a prescriptive set of rules. it must be able to make a decision about how its actions
5:35 pm
will affect the environment. today we focus on autonomous decisions where we delegate a decision to take action and that action has been ceded to a computer app. that app interacts with the world and the world interacts with it. i also want to set the stage for the size and the scope of the investment in autonomy. i want to use the u.s. department of defense in history as a illustrative lens. autonomy and ai are center stage. this strategy is called the third offset strategy. when i heard this phrase offset fragi strategy, i didn't really understand what it meant. it seeks to offset a numerically superior force with technical supremacy. an offset strategy allows someone like the u.s. to win without matching the enemy tank for tank for plane for plane. to get a sense of the scale, the very first offset was our nuclear weapons strategy. the u.s. invested heavily in nuclear weapons, especially
5:36 pm
battlefield and technical nuclear weapons, because it provided an effective deterrent. we didn't have to match the enemy tank for tank, plane for plane. in the mid-'70s, russia reached nuclear parity with the u.s. and the offset was no longer an offset. the u.s. and other countries started looking for other offsets. the u.s. came up with the second offset, where the idea was using accurate guided munitions delivered by effective delivery systems, you could achieve the same effect without the collateral damage. this investment led to huge advances in science that went beyond the military domain. things like gps wouldn't have been possible if the u.s. didn't invest in this idea. we expect the investment and radical change in international policy to be just as specific. the race to autonomy is not only happening in the u.s. and to implement these sort of offset strategies. it's also in other countries.
5:37 pm
for example, it includes russia and china, which i mentioned just a few minutes ago, are investing in roboticized armies. in 2014, a bank of america report states that japanese companies invested more than $2 billion in autonomous systems led by tech companies such as facebook and google. we don't get to just deploy autonomous systems and call it done, though. once we deploy them, they themselves may become targets. that leads to a notion of counterautonomy, where adversaries may go after the autonomous systems themselves as a way of getting at their intended target. as an example, just to kind of put this in scope, there's a very famous chess playcomputer called ripka. it was defeated because someone found a flaw in the engine. in chess, if you go more than 50 moves without moving a pawn, it's a draw. the chess engine had a flaw
5:38 pm
where it would try to avoid a flaw under all circumstances. this player would go after the autonomous system by offering a piece as a sacrifice. the computer thought it was piece up. the player would move 49 moves without a pawn move. the computer would say, oh, no, a draw is coming up. it would try to avoid it. and the player could go to town. this is going after the algorithm, not just the test game. autonomy is going to be huge. it's critical we get it right. the stakes are extremely high for many reasons. one of them is autonomy will drive us to take decisive action faster and faster. these actions will be in the cyber domain and the kinetic domain. remember what i said. autonomy is a delegation to an authorized entity to take action within a specific boundary. i want to think about a couple of different dimensions. what decision is being delegated? second, in what circumstances? and third, what are the appropriate boundaries for using this sort of technology? and to dig a little deeper, the
5:39 pm
decision being delegated is a different question. countries are forming stakes on the ground on how they're going to think about this. the deputy secretary of defense in 2014 said humans in the united states' conception will always be the ones who make the decision to use lethal force, period, end of story. when he was questioned whether a computer would ever take lethal action. but the pace of technology makes applying these high level philosophy and principles to different situations difficult. for example, should an autonomous system shoot a suicide bomber before they have an effect? is that okay? is that defense? is that offense? when is the decision ceded? he goes on to say, and he qualifies him that there may be times when it's okay for the computer to take control, for example suppose you got 60 mills coming at you. there's no way a human is going to be able to sort all that out. the human will make the decision but make it ahead of time for the computer to be able to react
5:40 pm
to that. this isn't a hypothetical conversation. it's here today. for example, consider for a minute the fire and forget missile systems. we've all heard of these probably in the newspaper. one example is the uk bridgestone missiles, which groups that one of our panelists serves on, illustrates there are no clear lines when we've ceded control. the fire and forget systems are often described as autonomous. some will say they're semi autonomous. it really just depends on which definition you're looking at. the british air force described it as effective against all known and protected armored threats. brimstone's radar seek and searches, comparing them to known target signatures in its memory, the missiles rejects returns which do not much and continues searching and comparing. the missiles can be programmed not to search for targets,
5:41 pm
allowing them to safely overfly friendly forces or only to accept targets in a designated boxed area, thus avoiding collateral damage. an interesting question. someone has decided to use lethal action but it was up to the computer to identify who to take lethal action against. there's another more subtle question, what do we do when there's a bug in the software, that it maybe misidentifies where it's supposed to go? what are the constraints? if we go back to the uber example in pittsburgh, suppose a pedestrian walks out in front of a self-driving car and it can only miss the human by driving off a bridge. who should you save? the driver or the pedestrian? a good question. there's no clear solution. and military operations, we often have similar questions. who are we going to save when given the choice? how are we going to program the objective functions in these military operations? so with that framing, i would like to introduce our moderator
5:42 pm
so with that framing, i would like to introduce our moderator and speakers. our moderator is the vice president for studies at the carnegie endowment for international peace. george, can you please step up. his work is primarily on nuclear strategy and nonproliferation issues and on south asian security. george is the author of the prize-winning book, "india's nuclear bomb," called an extraordinary and definitive account of 50 years of indian policy making. george has been a member of the national academy of sciences committee on arms control and international security, the council on nuclear policy, and many other such advisory committees. thank you, george, for joining us today. our panelist is daniel reisner. can you please come up. a partner at the herzog law office, he joined in 2008 as the firm's public international law, security, and defense partner, and is recognized as one of israel's
5:43 pm
leading public law experts. for ten years he served as head of the international law department of the israel defense forces, where he was the senior lawyer responsible for advising the israeli leadership. i hope you can advise us on this issue as well. i would also like to invite the director of the arms division, where she has led the human rights watch advocacy against particularly problematic weapons that pose significant threats to civilians. she is also serving as the global coordinator of the campaign to stop killer robots, and she is one of the people i quoted earlier on the uk brimstone. she worked for the vietnam veterans of america foundation, assisting jody williams and coordinating the international campaign to ban landmines, co-laureate of the 1997 nobel peace prize. finally, the general. he served in the corps of
5:44 pm
signals, indian army. he retired in april this year after 40 years of active military service in the corps of signals. his last appointment was commandant of the military college of telecommunication engineering, which carries out training in the fields of ict, electronic warfare, and cyber operations, and is also designated a center of excellence for the indian army in these disciplines. the general officer has received many awards. i want to call out a few of them. he has been decorated by the president for distinguished service in the defense forces. he has also been given an award by the department of defense production for r&d work. and last year he was conferred the coveted distinguished alumnus award by the indian institute of technology bombay, and is the only defense officer to ever hold such an honor. with that, thank you, panel, and i'll turn it over to george. >> great, thanks a lot. [ applause ] great, thank you.
5:45 pm
what we want to do is have as much of a conversation as possible, first amongst ourselves up here, and then with you all, to basically draw out a number of the dilemmas in this area and to help identify what questions might be the most worth pursuing as different countries and different actors move down this agenda. so to start us, i want to ask the general to build on what david said a bit. certainly there must be other drivers beyond dealing with numerical asymmetries that would make autonomous systems attractive to a military and to a government, in terms of problems they solve and advantages they confer. can you give us your perspective: what are the attractions of autonomy in this space? >> i'll start by saying that one
5:46 pm
can't get away from the fact that weapons are meant to destroy and kill. but they are supposed to destroy and kill military potential, and the idea is not to affect the noncombatants. the noncombatants have to be saved. a major question you have to ask is, does ai have the potential of reducing the negative of harming the noncombatants? i feel that, in a sense, by its character, artificial intelligence has great potential towards this goal. having said that as an opening, let's see how warfare has actually been changing in the last few decades. there are two things which have
5:47 pm
happened. on one front, there is a change in the nature of warfare from the conventional to what is normally referred to as fourth generation warfare, where the lines of politics and military are blurring. so there is a different context in fourth generation warfare. india happens to have the context of both conventional warfare as well as fourth generation warfare, and so some of the things in the discussions which come up -- at least my examples -- really get related to how the benefits turn up here. the other change in warfare which is happening has to do with the information age. now, here again, you have on the one hand cyber warfare, electronic warfare -- one thing that is happening in the information age. coming to the relationship with artificial intelligence: before that, because of information coming into the weapons systems, what you have had over the years is greater and greater
5:48 pm
precision in the weapons systems. now, ai, again, has the potential of increasing this precision, and discrimination, as we'll be discussing, i'm sure, as part of the panel. and that is where, again, the aspect of having fewer and fewer noncombatant casualties is going to come in. now, when it comes to specifics as to what the types of systems are, in increasing degrees of what ai can do, let's start with just four different examples. you can have a defensive system, and in a defensive system -- like, for example, the handling or defusing of explosive devices -- the adversary is not involved, and ai can do a lot in
5:49 pm
coming up with the systems in any case. at the next level, you have defensive ai. so we talk of, you know, systems like phalanx, which have been deployed for decades now. there, the missiles are coming in and you're destroying the missiles; autonomous systems, ai autonomous systems, are in place so that casualties are reduced. and third, you have precision coming in now, so you can have offensive systems. for example, you have armed drones which are autonomous -- okay, you already have armed drones in effect, but now autonomous armed drones, without the pilots -- that's the third level, where the offense is coming in. at the fourth level, if the graduation of the ai takes place and it develops to the extent
5:50 pm
where it can also mimic the empathy and judgment aspects, when it graduates to that, there are many other benefits, which we can talk about. but in increasing degrees of complexity as ai graduates, i would say these are the four areas we can talk of as a starting point. i think i'll stop right there. >> thank you. that was a brilliant setup. you raised a number of issues that i think we'll dive farther into, including the questions of offense, defense, and other functions. let me turn to mary and in a sense ask you to respond, but in particular to the extent this capability allows one to be more
5:51 pm
discriminating and precise. when you look at kind of parsing what can be advantageous in these capabilities from what should be avoided, can you hone in on what the distinctions should be? >> you talked about the dangerous tasks autonomy has been used for in the military -- cleaning ships, bomb disposal, robots to assist the soldiers -- and now we're moving into a phase where we see greater autonomy in weapons systems, and that's seen with an autonomous aircraft that can fly great distances and carry a great payload.
5:52 pm
we mentioned some of these systems in the first report we did back at human rights watch, called "losing humanity." in our view, they were not fully autonomous. they had a degree or nature of autonomy in them, but they were not fully autonomous. we called for a ban on fully autonomous systems. the call is for a future ban, not a ban on the ones that are in use today. but we did that because we looked at where the technology was headed. we said we're concerned about where this is headed. we're worried about this. that was part of the rationale behind forming this campaign to stop killer robots that launched in 2013 and is still going. it's a global coalition. i coordinate it on behalf of human rights watch.
5:53 pm
you know, this is not a campaign against autonomy in the military sense. it's not a campaign against artificial intelligence. there are many people working in autonomy and artificial intelligence. it's a campaign to draw the line and establish how far we want to take this. so you can view the call of the campaign as being a negative one, calling for a preemptive ban on the development, production, and use of fully autonomous weapons, or you can view it in a positive way, in terms of how we want to retain or keep meaningful human control over weapons systems -- not over every aspect of the weapons system, but over the two critical functions, which in our mind are the selection of a target and the use of force. we know that sounds very easy. it's harder to put into practice, but this is where the debate has been centering for the last few years when it comes
5:54 pm
to autonomous weapons systems. >> let me draw you out, and then i'll turn to daniel. you talk about drawing the line, and what i take as drawing the line is basically at target selection and the decision to fire, as it were, saying there should be a human there. i get that in a sense, but in terms of objectives, if an objective were to minimize casualties or the risk of indiscriminate -- civilian or non-targeted -- deaths, i would say, and if different versions of these weapons could be demonstrated to provide more precision and
5:55 pm
reduce collateral damage and inadvertent deaths, why should it matter whether a human was in the loop or not? i'm not trying to argue with you. i'm trying to draw you out about the principle of a person in the loop, as distinct from the outcomes -- i'm related to people that i don't want in the loop. i'm of croatian descent. tell me what's wrong with that. >> there are many benefits to
5:56 pm
employing autonomy in the military sphere. our concern is we're going to have stupid systems that are weaponized before we have the smart ones that can do the level four, the mimicking of human judgment and empathy. our concern is we're going to have stupid autonomous weapons systems being deployed before we have these super smart ones, which are further in the future as we understand it. it was first the roboticists and the ai experts who came to us and said, you don't understand what could go wrong when these are deployed in the field. we have many technical concerns. there will be unanticipated consequences, and unanticipated things will happen there. but then the other kinds of elements of the campaign that have come on board, the faith
5:57 pm
leaders and nobel peace laureates, are worried this will make it easier to go to war, because you can send the machine rather than the human. of course, we want to try and keep humans out of fighting as much as possible. but the fear is that if the human soldiers are not in there and it's just the machines, it will be a worse situation on the battlefield for civilian populations. this is why we see a need to draw the line. >> daniel, let me draw you in on any of this, but in particular on how you have thought about whether there's a valid difference between offense and defense, or territoriality. i'm listening to mary and thinking, i totally get that if you're
5:58 pm
operating on someone else's territory. >> let me start by saying autonomous weapons systems are already here. the issue is no longer forward facing. it's also current facing. and while we don't know all the autonomous systems out there, because some of them are closely guarded secrets, we know a lot of them. and i think mary is right in one respect about that: the capability to deploy autonomous systems is still outpacing the capability to train them to be human replacements. now i say that in spite of the fact that computers can beat human beings in chess and in fact in anything that requires thinking in speed or numbers of calculations, et cetera. one of the problems we face is that what we want to train the
5:59 pm
autonomous weapon system to do, we're not sure how to do. let me go into that for one minute, because you'll see i'm sort of sitting in between the two positions. i used to train soldiers to comply with the laws of war. and when we trained human beings to do so, we had a system. it's more or less the same in military organizations. we have a specific set of rules. there's the principle of discrimination: you have to discriminate between a legitimate combatant and a non-combatant. when we think about how to try to teach a computer to do this, we're not sure how to do that as
6:00 pm
human beings. artificial intelligence doesn't learn like a human being. it learns differently, and there are different ways to teach computers, but none of them involves putting them in a classroom, giving them a lecture, and then taking them into the field and trying out a few dry runs. we learned that the old ways we taught the system don't work on computers. the first point i want to stress is that we see a chasm opening between the ability to deploy the autonomous systems and the capability of teaching them what the rules are. obviously, that gap will close as computer systems continue to develop. and that is quite possible, but to be fair i think the military hardware is outpacing the
