tv Public Affairs Events CSPAN November 21, 2016 1:33pm-3:34pm EST
1:33 pm
tonight at 9:00 eastern on c-span2. here are some of our featured programs thursday, thanksgiving day, on c-span. just after 11:00 a.m. eastern, nebraska senator on american values, the founding fathers, and the purpose of government. >> there's a huge civic mindedness in american history, but it's not compelled by the government. >> followed at noon with former senator tom harkin on healthy food and the rise of childhood obesity in the u.s. >> for everything from monster thick burgers with 1420 calories and 107 grams of fat to 20 ounce cokes and pepsis, 12 to 15 teaspoons of sugar, feeding an epidemic of childhood obesity. >> then at 3:30 wikipedia founder jimmy wales talks about the evolution of the online
1:34 pm
encyclopedia and the challenge of providing global access to information. >> once there's 1,000 entries, i know there's a small community there, five to ten really active users, another 20 to 30 that know a little bit and they sort of think of themselves as a community. >> a little after 7:00 eastern, an inside look at the year-long effort to repair and restore the capitol dome. at 8:00 justice elena kagan reflects on her life and career. >> then i did my senior thesis, which is a great thing to have done. it taught me an incredible amount. it also taught me what it was like to be a serious historian and sit in archives all day every day. i realized it just wasn't for me. >> followed by justice clarence thomas at 9:00. >> genius is not putting a $2 idea in a $20 sentence, it's putting a $20 idea in a $2 sentence without any loss of meaning.
1:35 pm
>> just after 10:00 at an exclusive ceremony in the white house, president obama will present the medal of freedom, the nation's highest civilian award, to 21 recipients including nba star michael jordan, singer bruce springsteen, actress cicely tyson and bill and melinda gates. watch on c-span and c-span.org or listen on the free c-span radio app. >> now to a conference on artificial intelligence, privacy and security. this is about consumer policies and legal issues tied to massive data collection and sharing. >> good morning, everybody. welcome to the carnegie endowment. my name is tim maurer, and i co-direct the cyber policy initiative. together with david brumley, who is the director of the security and privacy institute at carnegie mellon university, we're delighted to welcome all of you here in person. for those joining us on the live
1:36 pm
stream online, the hashtag is #carnegiedigital. i now have the pleasure of introducing ambassador bill burns for the welcoming remarks and look forward to this day with you. thank you very much. [ applause ] >> well, good morning everyone. let me begin by congratulating tim and david for putting together this extraordinary colloquium. i'm delighted to launch it with suresh, whose remarkable leadership of carnegie mellon reminds me how fortunate i am. as a diplomat for 33 years before this, i had the privilege of welcoming heads of
1:37 pm
state, military generals, foreign ministers, university presidents and distinguished thinkers and doers of all stripes, but i've never had the privilege of introducing a robot, let alone several. so it's a great pleasure to welcome them and their friends. like all of you, i look forward to getting a glimpse of our robotic future in today's program. robots are not today's only first. today is also the first of two events we're holding for the first time with carnegie mellon university, one of the premier universities and fellow member of the impressive group of institutions founded by andrew carnegie more than a century ago. andrew carnegie created institutions at a critical historical juncture. the foundations of international order that had prevailed for most of the 19th century were beginning to crack. catastrophic war and disorder loomed. the last great surge of the industrial revolution was
1:38 pm
transforming the global economy. the carnegie endowment together with sister organizations sought to help establish and reinforce the new system of order that emerged out of the two world wars, a system that produced more prosperity in the second half of the 20th century than andrew carnegie could have imagined. it's hard to escape the sense that the world is again at a transformative moment: the underpinnings of order are under strain, with a return of rivalry and conflict after many years of decline, the growing use of new information technologies both as drivers for human advancement and as levers of disruption and division in and among countries, the shift in economic dynamism, and the rejection by societies in many regions of a western-led order, and
1:39 pm
angry nationalism. here at carnegie we're trying to meet these head on across our programs and six global centers. we are focusing, with carnegie mellon, on one of the most significant of these challenges: the intersection of emerging technologies and innovation with international affairs. technology's capacity, as all of you know very well, to simultaneously advance and challenge global peace and security is increasingly apparent. in too many areas, the scale and scope of technological innovation is outpacing the development of rules and norms intended to maximize its benefits while minimizing its risks. in today's world no single country will be able to dictate rules and norms. as a global institution with expertise, decades of experience in nuclear policy and significant reach into some of the most technologically capable governments and societies, the
1:40 pm
carnegie endowment is well positioned to identify and help bridge these gaps. earlier this year we launched a cyber policy initiative to do just that, working quietly with government officials, experts and businesses in key countries, our team is developing norms and measures to manage the cyber threats of greatest strategic significance. these include threats to the integrity of financial data, unresolved tensions between governments and private actors regarding how to actively defend against cyber attack, systemic corruption of the information and communication technology supply chain and attacks on command and control of strategic weapons systems. our partnership with carnegie mellon seeks to deepen the exchange of ideas among our scholars and the global community with technical experts and practitioners wrestling with the whole range of governance and security issues. today's event will focus on artificial intelligence and its
1:41 pm
implications in the civilian and military domains. tim and david have curated panels with diverse and international perspectives. on december 2nd we'll reconvene in pittsburgh for an equally exciting conversation on internet governance and cybersecurity norms. our hope is that this conversation will be the beginning of a sustained collaboration between our two institutions and with all of you. there's simply too much at stake for all of us to tackle this problem separately. we can and, indeed, we must tackle it together if we hope to sustain andrew carnegie's legacy. i'd like to conclude by thanking the carnegie corporation of new york for making this colloquium possible. let me welcome to the stage suresh, an extraordinary leader of an extraordinary institution and a
1:42 pm
terrific co-conspirator in this endeavor. thank you all very much. [ applause ] >> thank you, bill. i also want to thank tim and david for all their efforts. welcome to the inaugural carnegie colloquium, part of an initiative to shape norms of cooperation in artificial intelligence, machine learning, and cyber security. first and foremost, i would like to thank ambassador bill burns for hosting this event today. as two organizations that reflect the strong legacy of andrew carnegie, carnegie mellon university and the carnegie endowment for international peace have formed a powerful partnership to examine technology and diplomacy across a set of emerging areas
1:43 pm
critical to our collective future. it's my sincere hope that this event, as well as the follow-up colloquium which will take place at carnegie mellon university on december 2nd, forms the basis for a broader relationship between our institutions. let me also add my thanks to dr. gregorian, president of the carnegie corporation of new york, who provided support for both of these events, in fact based on a conversation that ambassador burns and i had a few months ago. dr. gregorian was very enthusiastic and supportive of this effort. to understand cyber and security, we must
1:44 pm
first recognize cmu as a place where pioneering work in artificial intelligence took place decades ago. ever since herbert simon and allen newell created artificial intelligence in the 1950s, before the terminology was even broadly recognized, cmu has remained at the cutting edge of this field. carnegie mellon took the bold step a generation later to create its software engineering institute, which has served the nation through the department of defense and served industry by acquiring, developing, operating, and sustaining innovative software systems that are affordable, enduring, and
1:45 pm
trustworthy. designing safe software systems and attempting to recreate the learning abilities of the human brain were a natural progression to two of the modern world's most pressing concerns, cyber security and privacy. to meet this challenge, carnegie mellon's cyber security and privacy research is multi-disciplinary, encompassing a broad range of disparate disciplines. it incorporates faculty from across the university in areas of policy development, risk management and modeling. our aim is to build a new generation of technologies that deliver quantifiable computer security and sustainable communication systems, and the policy guidelines to maximize their effectiveness.
1:46 pm
cmu's premier research center on the subject is cylab, a visionary public-private partnership that has become a world leader in technological research, education, and security awareness among cyber citizens of all ages. by drawing on the expertise of more than 100 cmu professors from various disciplines, cylab is a world leader in the technical development of artificial intelligence, cyber offense and defense, and is a pipeline for public and private leadership in organizations as varied as the nsa and google. the work of cylab was featured in a "60 minutes" report on
1:47 pm
machine learning and many other aspects. in particular, one professor's facial recognition programming helped match a very blurry surveillance photo with the boston marathon bomber from a database of 1 million faces. you'll have an opportunity to see the professor's work in action today during the lunchtime demonstrations downstairs. today you'll hear from cylab's director david brumley, who led a cmu team just a couple of months ago that won this year's super bowl of hacking, darpa's $2 million cyber grand challenge. congratulations, david. [ applause ] just a week after that, david took a team of cmu
1:48 pm
students to def con, where they won again in another hacking competition. you'll also hear from andrew moore, dean of the school of computer science, who was also featured in the "60 minutes" report on artificial intelligence. i would also like to acknowledge dr. jim garrett, the dean of the college of engineering at carnegie mellon university, who joins us, along with rick siger, who played an important role helping put together this event which joins carnegie mellon and the carnegie endowment. cmu's advancements in cyber security will be highlighted in the colloquium today, which is an outgrowth of the partnership between our two organizations. you will learn more about this in the two panel discussions today. we hope that these discussions on the future of consumer privacy, autonomy and military
1:49 pm
operations will lay a strong foundation for future colloquia and better inform ongoing thinking on technology and diplomacy in these critical areas. we'd like to welcome you to the colloquium today and would also like to close by thanking the ambassador again. [ applause ] >> so we will now get started with the first panel discussion. before we start, let me briefly outline two key ideas that have been driving this event when david and i started with the planning for it. the first one was essentially to bring together technical experts from carnegie mellon university and policy experts from the carnegie endowment. that is why each panel is preceded by a stage-setting presentation from one of the technical experts from carnegie mellon university, which will be followed by the panel discussion. the second idea was to draw on
1:50 pm
carnegie mellon's global network to bring in people from around the world for the panel discussions. i'm particularly pleased to welcome not only partners from pittsburgh but also panelists who have come from hong kong. if you're interested in joining the event on december 2 in pittsburgh, be sure to drop your business card off outside or send us an e-mail. i would now like to introduce andrew moore, the dean of computer science at carnegie mellon university. the computer science school at carnegie mellon has been ranked as the number one school by u.s. news repeatedly in the past few years for its grad school program. prior to becoming dean two years ago, andrew was vice president of engineering at google commerce, has been on the faculty of cmu since 1993 and has been involved with the association for the advancement of artificial intelligence. keeping with the global theme of this event, he hails from
1:51 pm
bournemouth in the united kingdom. thank you very much. >> so this is an interesting and exciting time in the world of artificial intelligence for many people. for regular consumers it's got great promise. for companies it's an absolutely critical differentiator, and for societies in general we do have options here to make the world a much better place through careful application of technology. what i'd like to do to set the stage here is talk about two things which at first sight sound like clear goods, personalization -- i'll explain what that means -- and privacy, two extremely important issues. then i'm going to run through a series of cases where these two great principles start to bump into each other. and they will get increasingly
1:52 pm
sophisticated, and by the end of this stage setting i hope to have everyone squirming in their seats, because it's so annoying that two wonderful and important things, privacy and personalization, which seemed like clear goods, lead us to very difficult societal and technical challenges. so that's what i'm going to try to do in the next couple of minutes. so let's begin with privacy. it's a clear right, and almost all of us would agree that anyone who intentionally violates privacy by revealing information which they gained in confidence is doing something bad, and there are laws in our international legal system and in all our domestic legal systems which deal with that issue. so that's important. personalization is probably one of the most critical features of a world based on artificial intelligence and machine learning, and i'll explain places where it's
1:53 pm
obviously good. many great institutions, including carnegie mellon under dr. suresh's leadership, have developed ways to help children learn more effectively. if it turns out that i as a child have problems with understanding when to use the letters "ck" while i'm writing, it makes a lot of sense for an automated tutor to personalize its instruction so it can practice that particular issue with me. no doubt about it, that seems like a sensible thing. if i'm a patient in a hospital and it becomes pretty clear that unlike most patients i cannot tolerate more than a certain amount of ibuprofen within 20 minutes of a meal, as we learn that, of course it makes sense to personalize my treatment. so that is good, and in that case there's no
1:54 pm
difficulty involved. here's where it gets interesting. some aspects of personalization -- like, for instance, how i'm likely to react to some liver cancer medications -- it's not like we can personalize it by just looking at what's happened to me over my lifetime. when you're building a personalization system, the way you power it is to find out about me and then ask the question: to make things good for andrew, what should i do, and what can i learn from other people like andrew? and that is suddenly where you begin to see this conflict. other people like andrew is something which can help me a lot, because if it turns out that everyone who's over 6'3" with a british accent is virulently opposed to, for example, the
1:55 pm
electric light orchestra, it's an extremely useful thing to know so i can make sure that's never recommended to me. so it makes sense to use other people's information in aggregate to help personalize things for me, and in many examples that can really make things better. recommendations of movies is an obvious one, and then when you start to think of information on the web, for example if i like to browse news every day and we notice that i'm typical of people who perhaps in the mornings are very interested in policy-related news but in the evening when i'm relaxing tend to like technology-related news, that's useful information to make sure i'm a happier person when i'm reading the news. so this is the upside of personalization. personalization uses machine
1:56 pm
learning. machine learning is exactly the technology which looks at data and figures out the patterns to usefully say what would other people like andrew want, and it learns the definition of what it means for someone to be like me or dissimilar. it's the thing which powers ads in gmail and movie recommendations, and the thing which helps the personalized medicine initiative figure out how to treat you; you'll probably need different treatment from someone else. and now i'm going to go through four examples of increasing squirminess of why this stuff is hard, why privacy and personalization actually start to conflict with each other. the first is a simple case of things we'd like to think society is going to do. if someone publishes unauthorized data about me they are breaking the law and that should be remedied.
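a minimal sketch of the "other people like andrew" idea described above, using a made-up ratings matrix and a simple nearest-neighbor recommender; the data, names, and scoring here are illustrative assumptions, not any production system:

```python
# sketch of "what can i learn from other people like andrew?"
# made-up ratings matrix: rows are users, columns are items (say, movies).
import numpy as np

ratings = np.array([
    [5, 4, 0, 1],   # andrew (hasn't seen item 2)
    [5, 5, 4, 1],   # someone a lot like andrew
    [1, 0, 5, 4],   # dissimilar user
    [2, 1, 4, 5],   # dissimilar user
], dtype=float)

def similarity(a, b):
    # one possible "definition of what it means for someone to be like me": cosine similarity
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def recommend(user_idx, ratings, k=1):
    # score every other user by how similar they are to this user
    sims = np.array([similarity(ratings[user_idx], ratings[j]) if j != user_idx else -1.0
                     for j in range(len(ratings))])
    neighbors = sims.argsort()[::-1][:k]          # the k most similar people
    unseen = np.where(ratings[user_idx] == 0)[0]  # items this user hasn't rated
    predicted = ratings[neighbors].mean(axis=0)   # borrow the neighbors' opinions
    return [(int(item), float(predicted[item])) for item in unseen]

print(recommend(user_idx=0, ratings=ratings))     # -> [(2, 4.0)]: item 2 looks promising for andrew
```

the same pattern, scaled up and with far richer features, is what sits behind the movie recommendations and personalized news mentioned above; it only works because many people's data is pooled.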
1:57 pm
that's the simplest case, and the responsibility there in a good company or a well-functioning government is you actually have the legislation in place, you have clear rules, and if somebody does, for example, look up the bank account of a famous celebrity just so they can blog about it, that person is going to get fired, and in some cases the consequences are serious and there's a more significant penalty. now, cases two, three, and four are ones where it starts to get a little fuzzier. case two. someone uses your data in a way you didn't expect but it turns out you kind of agreed to it. and a famous example is a firefighter in everett, washington, who was suspected of actually starting fires, and one of the ways in which the police really got to understand that
1:58 pm
this was a serious risk was they then went to their grocery coupon supplier and looked at the things that this particular person had purchased in the last couple of months, and they found a huge number of fire-starting kits from that. in another case, someone who was suing a supermarket for a slip-and-fall accident, part of the supermarket's defense was they produced sales records for that person showing that they were buying excessive, in their eyes, amounts of alcohol. those are not actually illegal. both of those were covered under the terms of service and also the laws of the land regarding law enforcement use of data. that's difficult, and at that point we've already hit something where the general public is going to be very uncomfortable, and it's the thing which means we all feel uneasy
1:59 pm
when we sign these terms of service. those are difficult; now i'll get to the ninja difficult ones where engineers are trying to do good but could do bad. this next example is where we're using machine learning to really help people, but inadvertently, accidentally, the machine learning system starts to look like a bigot or make decisions which most of us would think a reasonable human would not make. a good example of this is from a member of jim garrett's faculty in the school of engineering at carnegie mellon university, who showed an experiment with google's advertising system where he looked at the ads which were shown in response to
2:00 pm
a query about job searches, and he used google's personalization system to give exactly the same queries to google when the identity of the user was male and when they were female. horribly, it turned out the ads shown when the person was reported to be female were for jobs with lower pay. you look at that, and anyone would think that if that machine learning algorithm was a person they are both a jerk and, in fact, doing something illegal. facebook has been looking to use an ethnic affinity term in their algorithms, and that's fallen afoul of the same concern. why would a machine learning system do this? none of the engineers were -- i would assume none of the engineers had any intent of causing harm. the reason was the machine
2:01 pm
learning system had confirmed in the prior data that, all else being equal -- which is a very dangerous phrase to use -- it was seeing that the women who were clicking on ads tended to click on ads for lower-paying jobs than the men. so this machine learning algorithm which we humans build has got a kind of defense. it can really say i am just showing people what they're most likely to click on; it's not my fault if society is set up in such a way that my data is showing that women are clicking on lower-paid ads. now this is complicated and i don't have the right answer for you. if it helps, i should note that this experiment is particularly unrealistic in the sense that it's very rare that a
2:02 pm
machine learning system only sees an identified gender. usually the machine learning system sees many other things about a person. it has the past history of the kind of things that person wants to do, other interests of that person, and so you will actually find that there are other features of that person much more important than gender or race for showing their particular interests. so that is what i would regard as the most difficult part of machine learning and personalization at the moment. it's very hard, and i do not know of a piece of research that i fully trust to prevent these things from being, if you like, bigots. finally i'm going to mention the ninja hard case. and this is pretty simple. it's the case that if you really want to preserve privacy you can cost other people their lives.
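before turning to that case, here is a minimal sketch, on invented data, of the defense attributed to the algorithm in the job-ads example above: a system that only maximizes predicted clicks will reproduce whatever disparity sits in its training log. the log, field names, and numbers are assumptions made up for illustration:

```python
# synthetic click log of (gender, ad_pay_tier, clicked) -- invented for illustration only.
# in this made-up history, women clicked low-paying job ads more often than men did.
from collections import defaultdict

log = [("f", "low", 1), ("f", "low", 1), ("f", "high", 0), ("f", "high", 1),
       ("m", "low", 0), ("m", "low", 1), ("m", "high", 1), ("m", "high", 1)]

# estimate a click-through rate for each (gender, ad) pair --
# "i am just showing people what they're most likely to click on"
clicks, shows = defaultdict(int), defaultdict(int)
for gender, ad, clicked in log:
    shows[(gender, ad)] += 1
    clicks[(gender, ad)] += clicked

def best_ad(gender):
    # serve whichever ad tier has the highest estimated click-through rate for this gender
    ctr = {ad: clicks[(gender, ad)] / shows[(gender, ad)] for ad in ("low", "high")}
    return max(ctr, key=ctr.get), ctr

for g in ("f", "m"):
    print(g, "->", best_ad(g))
# prints f -> low and m -> high: the click-maximizing policy shows women the
# lower-paying ads, purely because that disparity was present in the historical data.
```

nothing in this objective encodes an intent to discriminate; the disparity comes entirely from the data it was trained on, which is exactly the defense, and the problem, described above.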
2:03 pm
there are examples of this in many law enforcement situations, but another simple one is in medicine, where if you're involved in a drug trial and suppose you had 20 hospitals all trying out some new drug treatment on 20 different patients, then it's definitely in the interest of those patients for the hospitals to pool their data, to share data with each other so that one central body can do the machine learning with a large n for statistical significance to find out if the system is working or not. now if you are worried about privacy and you decide you're not going to let the hospitals see details about one another, then you can still get significant results as to whether the medication is effective or not, it's just
2:04 pm
going to take you considerably longer, and many more patients will have to be in the trial, and you'll have to wait longer before you get the answers. and matt fredrikson, a computer science faculty member, has shown very clear cases of the analysis of privacy levels versus lives saved or years of life saved, and unfortunately -- and it's what this room doesn't want to hear -- there's a tradeoff curve there. we have to decide where we are within the center of it. so hopefully we're squirming. i've tried to show you the two extreme positions, personalization is good, screw privacy, or privacy is good, screw personalization. neither of those extreme positions is useful. we have to use our technological smarts and our policy smarts to try to find the right place in the middle, and that's the setup
2:05 pm
for this panel discussion. [ applause ] thank you. at this point i would like to introduce our panelists. yuet tham, from the law firm of sidley austin, is an expert on cross-border compliance and international agreements regarding data use. if you want to come up to the chair. paul timmers, the director of the sustainable and secure society directorate at the european commission, has been head of its ict for inclusion and e-government units. so we have experts here from asia and from europe who are helping us discuss this issue. next i'm pleased to introduce ed
2:06 pm
fell on the, a hero to all computer scientists because he's a computer scientist and the and the deputy director of the white house science and technology policy who has been needing a bunch of intense strategic thinking about artificial intelligence over the next few years. and then i would like to introduce our moderator ben scott, the senior advisor from new america who is the senior advisor to the open technology institute and also a non-residential fellow at the center for internet and society at stanford. good to meet you. >> thank you very much, andrew, far introduction. we're going to jump into a discussion with our expert panelists who as you see tra strategically represent different regions of the world and can offer perspectives from across the globe. if i may quickly summarize the policy conundrum that sits behind the cases that andrew laid out, it's this.
2:07 pm
machine learning and ai benefit from the personalization of data use in learning algorithms; personalization requires large data sets to compare individual cases to lots of other cases. that requires the collection and processing of data at a large scale. that raises two key questions. one is, what are the rules governing the collection and processing of data for commercial uses of ai? and the second is, what are the rules for the collection and processing of data for government uses of ai? underneath that sits the basic question of algorithmic accountability. if you decide it's unacceptable to have an algorithm that reflects gender bias in employment practices, how do you regulate that? and if you decide to regulate that at the national level, how do you coordinate that at the international level when data markets are global? these are the problems that we are all facing in government, and it's fair to say that the
2:08 pm
technological reach of machine learning and artificial intelligence has exceeded the grasp of our policy frameworks to contain and shape these new forms of digital power in the public interest. so what i'd like to do is start with setting a baseline of where different parts of the world are coming down on these issues and what the building blocks look like at the regional level. there has been less talk in asia, although i would like to hear more from yuet about what's happening in asian markets, but i would like to allow our panelists to speak from their perspectives about what's happening in this field in their region. what is the approach to regulating or establishing a policy framework for these most
2:09 pm
difficult questions of big data collection and the application of artificial intelligence? maybe i'll begin with you, ed. >> okay. well, first i should start with the disclaimer that i am not a lawyer, so don't treat me as an authority on u.s. law on this issue, but i can talk about the policy approach that has been taken in the united states, and it is rooted in the longer term policy approach that the u.s. has taken with respect to privacy. and that involves generally regulation of certain sectors where privacy is particularly salient, whether it involves things like health care or practices related to credit and employment and so on, and it also involves a broader consumer protection framework
2:10 pm
around privacy that is rooted in notions of notice and consent. and so we have a framework for privacy which the u.s. has used and is continuing to use, and that involves both laws and also enforcement of those laws. when it comes to the particular issues that are raised by ai and machine learning, there are a bunch of things that have been done, and i'd point in particular to the work that the administration has done over the last few years on big data and then more recently on artificial intelligence. in both of those areas, and i think they're tightly intertwined, the administration has engaged in a series of public outreach activities and then published reports, the idea being to try to drive a public conversation about these policy
2:11 pm
challenges and to try to move -- both to move the debate about making rules and making policy in a fundamentally positive way, but also to heighten the attention to and interest in these issues and to try to drive a public debate, because i believe strongly that the institutions, the companies that are collecting data and using it in this way, almost universally want to use it, collect it, and engage in ai activities in a way that is responsible and positive and sustainable, because i think people recognize if you push the envelope too much the public will not allow that to stand. so we've tried to drive public discussion. we've tried to raise the level of dialogue, and that's been fundamentally one of the areas in which the administration has worked. we also recognize the importance
2:12 pm
of -- we also recognize the ways in which these issues operate across borders and the need to work with international partners and to make sure that as data flows across borders and as citizens everywhere encounter the companies and institutions of other nations, we can work together reasonably and we have an international system for dealing with these things. >> thanks. paul, what's the view from brussels? >> the view from brussels, perhaps i should put in another kind of disclaimer in a certain sense, that is, i think if you look at what is happening in policy development, whether that is engagement with stakeholders, public debates like here, or whether you go in the direction of official public policy or law and regulation, you have to put it against the reality of what is happening around technology and the use of technology. so the examples that he gave were interesting and challenging.
2:13 pm
take the case where machine learning doesn't have access to your personal data even though access would be good for other people. it's a very interesting case because you have to look at how you could apply today's frameworks, including law, to that. so to a degree law has been strong in the european union, and we would look at fundamental rights, but fundamental rights are not absolute, so public health is one of those reasons that you can actually start using someone's personal data, also individual data, but with appropriate safeguards. and this may mean you put a challenge to technology: can you encrypt sufficiently? can you use new technologies? so it is, i think, that dialogue that we are also very much looking for on the european scene. it must be said, fundamental
2:14 pm
rights are important in the european setting. so if we say privacy, privacy is a fundamental right; as a matter of fact, we even split it into privacy from the perspective of the protection of your private life and the protection of your communication versus the protection of personal data. so there are differences, there's more than one fundamental right at play there. based upon that we have law, but we have also policy development, and it's a very actively moving field. for example, at the moment we are working on a policy initiative around the free flow of data and around platforms, and precisely those are being put to the test by machine learning, by ai, precisely by the questions that we have here on the table. >> yuet, how does it look in asia? >> so i'm not a computer scientist, i'm a lawyer, so i'm going to approach this from a different perspective. one of the challenges about asia is that it's not even bifurcated just in terms of the laws and regulations that are coming out
2:15 pm
of the region. i mean, in fact when we talk about asia, what do we mean? different people have different views as well. when we talk about privacy laws in asia-pacific, the countries that come to mind as being at the forefront of regulations would be japan and korea and to some extent australia and new zealand. and then following that would be countries such as singapore and hong kong, taiwan and the philippines where they've got fairly new laws, some of them were put into place in 2012. singapore is a country where -- i used to be from the attorney general's chambers and -- in terms of the laws they are progressive but the fact is they implemented privacy laws for the first time in 2012.
2:16 pm
for a lot of these countries, these laws emanate not just from a motivation to protect human rights, although a lot of it would be consumer rights, and i think some people would argue that consumer rights are to some extent human rights as well. and then there is the last category of countries, and the challenge about them is that they don't have a single data privacy regulation. and i tell my clients, a little facetiously, but it's true to some extent, sometimes the more
2:17 pm
laws the country has, i mean -- i do fcpa corruption investigations, for example, and as a result of that, we take a lot of e-mails throughout the region. that's why we have the familiarity with the privacy rules and regulations. i joke with some of my clients, don't look at the transparency index and see how risky a country is when it comes to corruption. look at how many laws they have. the more anti-corruption laws they have, the more problematic corruption tends to be in that country. it is the same for countries such as china, and vietnam, and indonesia. you find little bits and pieces of information. they refer to how privacy is, you know, a right of all citizens. but they don't really tell you how that's going to be enforced. that is a regulation that you see in china. i think some of the challenges in asia are just trying to harmonize the regulations for a lot of companies, a lot of our
2:18 pm
clients who are trying to operate and transfer data across borders. you have a lot of -- so japan, for example, has got a new law that will come into force in about two years, and that's probably the first time where we actually talk about personalization. in terms of all the other countries, i think the idea of artificial intelligence is not even something that the countries have seriously considered. there are things like guidelines, you know, introduced by some of the regulators. but these are just guidelines, and there are no teeth to any of them. >> let me pick out a point which i think is implicit in what you said, which is, you've all described the approach of the united states, europe, and a variety of asian countries to these questions from a commercial data privacy perspective, regulating the market, commercial actors gathering data, applying artificial intelligence
2:19 pm
algorithms to produce particular outcomes. but i think at the core of this question, from a regulatory and especially from a political perspective, when you collect a lot of data and begin to produce some of these outcomes, is government access to data, and that is inextricably combined with the data protection regulations. the recent tensions between the united states and europe have been about commercial data practices, but ultimately they are rooted in u.s. government access to the commercial data that is collected by american companies. so my question is, do you believe that even if we were able to find a harmonization, a standard for commercial data regulations that apply to big data collection and the application of artificial intelligence algorithms, machine learning, is it all undermined at the end of the day by individual governments and their
2:20 pm
unwillingness to give up access to that data for national security or law enforcement purposes? >> i can actually give a quick example before we go to europe and the u.s. china has got a provision where, you know, there are a few examples of data localization. any data related to health, medical information or the health of the citizens, has to be stored in servers in china. another example is singapore's data privacy provisions. the singapore government and all state entities are excluded from its provisions. that's a very good example of where the state's interests conflict. >> perhaps building on that, i think this whole question about national security and sovereignty, perhaps you also have to generalize a little bit to all the interests, too, that
2:21 pm
are certainly governmental interests, or should be addressed for society at large, one of which is safeguarding democracy. so i think one of the concerns, if you look at merkel's speech last week, she talked about transparency of algorithms on platforms. this is in order to keep consumers properly informed. but it's also, what is the kind of bias that may creep in through the algorithms in terms of the provision of news? that's got everything to do with the way you execute democracy. so there is a debate about avoiding a situation where democracy gets polarized into echo chambers, and we don't have a real debate anymore. that's also a serious interest. i think where you're talking about the values, to what extent are they shared internationally, now, i think you can be optimistic or pessimistic about that. if we talk about data protection, we have been able to make an agreement between europe
2:22 pm
and the united states, even if you do not have exactly the same starting point as regards data protection, let alone as regards national security: the privacy shield. i know it's going to be put to the test, and that's how it should be. nonetheless, we go a lot further than we did at the time of safe harbor. we started to address the area of access by government for national security purposes to the data being transferred in the transatlantic context, and the safeguards for that. so it is possible, if you negotiate, to make an agreement on certain types of issues. whether you can do that for everything, and across the world, i think is very doubtful. there are many places where those norms and values don't carry. so if we bring it to the field of cybersecurity, as we clearly see it, we do negotiate internationally about norms and values in relationship with cyber security, which has everything to do with ai also. are we getting very far? it's little steps.
2:23 pm
i think there's not a single answer to this question. there is a degree of progress between, let's say, those that have a degree of like-mindedness. but there are also many, many areas where we should be -- pessimistic. >> there are plenty of areas in which government access to data for purposes of national security or law enforcement is relatively uncontroversial. i think we don't want to forget those. and of course, the international discussions around this issue have been going on longer than the -- than the conversation about a.i. these issues are not simple. but i think if you look at privacy shield, for example, it is an example of the way in which it is possible for us to engage internationally, and to get to a point where we can work
2:24 pm
together. as to these issues about fairness and nondiscrimination, i think this is another area in which there is a broad alignment of interests internationally, and in which i think there's a lot of progress we can make by working together. >> let me present a more pessimistic vision and ask your responses to this. to me it stands to reason that as the private sector grows more sophisticated with machine learning technologies, collects more data, applies more powerful a.i. algorithms to that data, it will be irresistible for government to reach into those companies for legitimate reasons, in many cases, but also perhaps for illegitimate ones, to gain access to that power. the example that you raised of the firefighter buying arson kits, i don't know where you buy those or where you have the coupons for them, but the idea
2:25 pm
that law enforcement may not only tap your phone calls or your e-mails, but may also look at your purchasing records or your health data and put together a portrait of you and compare you against others and determine you may have committed a crime is, i think, an extraordinary development, one which government in legitimate cases would want to use. but what that says to me is that ultimately every country is going to want to control that data for themselves, in their own sovereign territory. so my question is, number one, are we headed for a global data sovereignty movement where everyone tries to have data rules where a.i. operated by domestic companies is used as a geopolitical asset? second, on transparency, if they say we'll show you
2:26 pm
how the algorithm works, and it reflects the actual behavior of users -- if facebook said we'll show you how our algorithm works for news feeds, does that solve the problem? yes, it reflects the behavior of users and the things they will most likely click on. do you then regulate that outcome and tell facebook you have to change that algorithm? how do you hold them accountable? how do you determine they have done so in a way that measures up to a particular standard? so two questions. one, are we headed to a hard power regime of localization, in your view, at the global level? and two, even if we're able to use transparency as a tool to push back against excesses of a.i., does it even work? >> let me start by taking the second part of that, about the value of transparency, which i think really goes to a
2:27 pm
desire for governance and accountability. and one way to try to get there, to increase accountability, would be to say, well, open up everything, tell us everything about what your algorithm is, tell us everything about what your data is. but here i think is a place where we can apply technical approaches to try to provide accountability, to try to provide evidence of fairness or nondiscrimination or accountability along certain dimensions, without necessarily needing to reveal everything. i think one of the traps we can fall into in thinking about this issue is to think that this is a problem caused by technology, which can be addressed only by laws and regulations. i think it's important to recognize, as i think the discussion today has, that technology can be an important
2:28 pm
part of addressing these problems. that there are technologies of accountability, and that we need to think in a creative way about how to put those things together. we also need to think, i think, about the ways in which forces short of legal prohibition can constrain the behavior of companies and authorities when it comes to the use of data. to the extent that what is happening is known to the public, to the extent that there is an opportunity to provide evidence of fairness, evidence of accountability, that in itself creates a dynamic in which companies and authorities may -- will often voluntarily provide that kind of accountability. we've seen that to some extent in privacy, where companies would like to be able to make strong promises to consumers, for
2:29 pm
consumer comfort, but knowing that they will be held to those promises. you get a dynamic in which companies can compete based on privacy. to the same extent, if we have technologies and mechanisms of soft accountability, that can lead, number one, to a competition to provide a service in a way that's more friendly in order to bring people in, and it can also lead to the kind of accountability that occurs when bad behavior is revealed. so i think there are a lot more opportunities there to do softer forms of governance and try to use technology to get to that issue around fairness in governance. >> do you think the regulatory environment is sufficiently flexible for softer forms of -- >> absolutely. well, i think what that says, i find that really challenging. i think indeed, technology needs to be invited to make things work really well, like the underlying intentions of data protection. if you talk about informed
2:30 pm
consent, even informed consent about automated processing, that is a real challenge for technology. you can bounce back and say it's impossible because with these algorithms we don't even know what's happening inside. but that's not adequate, that's not sufficient as an answer. there are other approaches, and i think you're referring to that: there are other approaches where you can measure things like fairness, things like did you actually understand what is happening in the decision-making. also, i must say, we're getting a little bit away from the monolithic notion of consent. there's an interaction you can continue to have, and that's where the technology can mediate when you talk about consent, as the use of the data evolves. so i'm kind of optimistic about the opportunities that are there in technology. when you talk about localization, again, probably a nuanced approach to that is necessary, because there is a real risk, i think you pointed to that, that
2:31 pm
data localization happens. it's happening already today, and actually, that you do not necessarily get an internet by country, but perhaps by region. at the same time we have initiatives, you heard earlier about the privacy shield. that's a way to avoid that localization, and there we are talking about personal data. we also have a free flow of data initiative, to actually remove any undue restriction on the localization of data. and i think we probably want to differentiate which domains we are talking about. when we talk about a public health problem, like the rise of the zika virus, i think we have a more globalized approach to that. we have governance systems like the w.h.o. and professional collaboration in the field of health that allow us to do big
2:32 pm
data, a.i. types of analysis on the data we're getting on zika from all over the world, as a matter of fact. for me, in this debate we need to involve the governance that already exists. almost any kind of governance institution we have in the world that works will be exposed to the question of what are you doing with the data, with a.i. make use of those institutions, too. that may be in a more differentiated way; it may not work in every domain, but certainly in some domains it will work. what about the data we have from self-driving cars? i'm not sure. >> perhaps a necessarily complex, but therefore differentiated, sector-by-sector approach -- >> and i think you learn from what you do in one sector for the others. it's not necessarily the case that it is impossible to come up with governance. it must be said there is a strong plea in europe also to come up with new governance approaches and also to say not all governance
2:33 pm
approaches will work. the real-time threat of cyber incidents may not be quite compatible with the type of governance that we have set up between people and organizations, which is relatively slow. and so we will also have to review the type of governance that we already have. >> and i think for asia, this is a little self-serving, but i still think we need regulations, because so many of the countries don't have something that would be taken for granted in the rest of the world. for those jurisdictions that have the laws in place, i think the question is how is the enforcement and what are the policy positions, in terms of the guidelines issued by the regulators. but there are so many other countries in asia that still don't even have basic privacy laws. i think at the end of the day you still need those to be in place for the framework at the very least.
2:34 pm
and a lot of asia follows the consent principle that is adopted in the rest of the world. i think in terms of data localization, a lot of it is done for various reasons, and they're usually not because of privacy. for example, indonesia, i mean, they talked about localizing data. the reason for that was because they thought, and this was a misguided belief, that it was going to help improve their economy, by localizing data. what the government didn't realize was that it was going to put off a lot of the multinational corporations from investing in the country, so they held back from that. which brings me to my, you know, last point, in terms of what we have seen with all the international, multi-national companies that have set up operations in asia: they bring along with them the regulations
2:35 pm
that they have to follow, because, for example, they are dealing with data from the european union, or the u.s., and because of that, they tend to follow the highest standard that's set. so when you have consumers in asia who see, hey, this is the way my data should be treated, and this is the way an international corporation would deal with my data, my privacy, then you start expecting that from the institutions within the country. so i think there has been a lot of that, you know, the cascading of privacy standards, even though the regulations aren't in place, but you've got the economic pressure to a large extent. >> i want to put one more provocation to the panel. but before i do that, i want to invite you all to start thinking about questions you may have for the panelists. we're going to reserve the last section of this panel for audience questions. so start thinking about that
2:36 pm
while i put this question to the panel, which is this: all three of you have raised notice and consent. it is the basis of privacy law across the world at the moment. and yet, even before ai, it was already under fire, already under attack about whether it would ever be sufficient, for various reasons. there's an argument that notice and consent is a sham, because you're presenting a consumer with a 15-page document for a service they want to buy, and no one ever reads it. they have no idea what they consented to even though they've been given notice. once you click that box and say, i agree, all the rights you had up to that point are gone. not all, but many. second, as we collect more and more data, and companies become diversified horizontally across multiple product platforms, companies may not know exactly what it is
2:37 pm
that they are going to do with your data, and they may not know to give you notice at that point. and at what point do you build in multiple notification points? i've recently had occasion to talk to a number of founders of new silicon valley startups, and not just silicon valley but europe, where i spent time in berlin. today, data is the new value proposition. people are building companies based purely on the idea that they're collecting lots and lots of data. what they will do with that data, how they will monetize it, how they will pool that resource with other resources, how they will be acquired and integrated into a larger enterprise, big question mark, but undeniably not a deterrent for venture capital flowing into those companies. once again, that draws into question this basic notion of notice and consent. if we come into a world where data is pooled intentionally in a fashion to maximize the utility of personalization, it
2:38 pm
might not even be reasonable to ask a company to predict in advance all the ways in which that data may be used. they may not be the only ones who gain access to that data and use it for purposes that might harm or benefit the user. so my question to the panel is, if we root the idea of an international standard on privacy policy, as it applies to big data and algorithmic accountability, on an old framework of notice and consent, are we setting ourselves up for failure from the beginning? >> i think notice and consent is not bad at all. what we are challenged to do is to make sure that consent is informed, meaningful, freely given, that there is a choice, and that's simply not often implemented. the 40-page contract is not meaningful.
2:39 pm
how do you translate it into something which is meaningful? it must be said, in order to process your personal data, it's not only consent that may be a lawful ground, as the gentleman from the data protection agency said, at least in europe; there are also other legitimate grounds, and public health is one of those. certainly there are issues around public safety, security, et cetera, that may be grounds to process personal data without consent. there's actually even a legitimate interest for direct marketing purposes; it may constitute, as the law says, a legitimate ground to process personal data. now, the question is how do you interact? because in all those cases you still will have to interact with the data subject, the one that is providing you data. how do you do that in a meaningful way? i'm still a little bit puzzled why interaction with the user is a problem. from the company point of view, you would probably say i want to
2:40 pm
interact more with the user rather than less, because each point where i interact is another opportunity to engage in the discovery of value, the delivery of value, to differentiate. >> let me ask a followup. what if the user is dead? we're coming soon to a moment in our history where there are terabytes of data out there about people who are no longer living. yet that data will undoubtedly have value to the company that owns it and the governments that may gain access to that data. how do you deal with that? >> so you will probably be thinking of a case where you would like to invoke public interest, for example, public health. and again, certainly european law says that if it is in the public interest, you can use that as a legitimate ground to start processing data. so there are possibilities. really the big technical difficulties, i think, are underlying difficulties that have to do with algorithmic
2:41 pm
accountability, which is not resolved. actually we have a broad debate with the health community in europe. radiologists are saying, what do i do with all those data that i have, that i now start to put again under data protection, and how do i make sure that folks that want to be forgotten can apply that? there are serious implementation challenges. they will not always have the most ideal answer. but in a certain sense, we are looking back at a legacy, a legacy that we can improve as the implementation of the law evolves. all of the communities that are involved in personal data, in europe for sure, are also called upon to look at what the technology and the law make possible, and provide their interpretation of that, a common interpretation rather than a fragmented one. that is clearly a challenge that needs to happen, from public administration to radiologists. >> but in answer to your second
2:42 pm
question, actually, about -- there are a number of laws in asia where if the subject is dead, the concept of privacy doesn't apply anymore. the law doesn't protect that data. >> open season on that data. >> yes, unfortunately. and i think just in terms of the other question that you raised about notice and consent, so i say this a little facetiously, again, we used to joke, we can draft these consent agreements and put in as much as we like and nobody is going to disagree. everybody will just click "agree." i read this book called "future crimes." i have to say, after reading that book, i refuse to load apps on my iphone to the extent that i can. it is very difficult to live without apps. but i probably have one of the fewest apps in the whole of asia on my phone, after reading that book. i remember some of the
2:43 pm
statistics that i saw in that book about how i think the privacy policy for facebook is double the length of the u.s. constitution. and, you know, i think it was either paypal or ebay, i don't remember which company, where the privacy policy is longer than "hamlet." i've given presentations, a lot of presentations in asia, about privacy and data security, and i've always asked this question: how many times have you actually said i don't agree, you know, when that privacy policy pops up? and in all the presentations that i've given, only one person put up their hand, and that was a lecturer from one of the universities, just an academic to some extent. but i think most people will just click yes, because they don't have much of a choice, or because they don't think it's important. it's not because people don't
2:44 pm
value privacy, but i think the difficulty is that there aren't many avenues for them to seek redress. and because we don't have the concept of class action litigation, and it's not a litigious society in general in asia, it's very difficult for individuals or consumers to get together and change the laws and the policies. >> so this is a fundamentally difficult issue, right? the uses to which some data may be put may be extremely complex. and the implications of those uses for a particular user may be even more complex. so if we were to start with the principle that something should not be collected if its use would not have been acceptable to the user, it's not at all clear how you could put that into effect in practice, right? we know that telling users every last detail of what will happen and every last detail of the implications, asking them to
2:45 pm
read that before they disclose anything ever, is not practical, and is not the way that people behave. that said, there are a few strange people like academics and privacy lawyers who do read these things, and there are people who have built tools that look for changes and so on and analyze them. so if a company does change its very long, longer-than-hamlet privacy policy, there is some reasonable chance it will be noticed and trigger some public debate. so there are methods of accountability other than all the users reading all the things, which we know doesn't happen. still, it's a fundamentally difficult question. if we were to offload that decision to someone else, that doesn't make it terribly much easier to figure out what the right answer is as to which uses would be acceptable to the user or which uses are socially beneficial. >> maybe the issue is, it's got to be meaningful notice and
2:46 pm
meaningful consent. and i think, just from a policy perspective, the notion of these consent agreements is that they shift everything onto the individual consumer, who doesn't really have the ability to reject the terms. and so i think, just in terms of the policy, when it comes to ai and all the other provisions, it's important for governments to actually think about shifting a lot of the responsibility back to the corporations, for self-assessment and things like that. >> i'm wondering if you also cannot start splitting it up, in the sense that especially with the automated process, you have to explain the significance and the consequences for the user. and the point i think we're making is, first of all it's very difficult for a user to
2:47 pm
understand and read all about that, and fundamentally it may be very difficult to say that right at the beginning. still, that raises the question, why would you assume that it is only at the beginning that you ask for consent? why don't you have a repeated approach to interaction with the user as the system actually also develops and learns, and draw the consequences? at that moment in time that a consequence becomes relevant, you could ask, in a number of situations, i'm not saying always, but could ask again. it may be simpler to understand than a whole long text about what potentially will happen. >> there's a simple answer to that. you talk to any of my clients and they want all the consent up front. they don't want the obligation to go back to the consumer or the customers. usually when we draft these policy provisions for them or these agreements, they tell us right up front, can we make it as inclusive as possible. so that is what they do, because there is nothing to prevent us from doing that.
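to make the "ask again at the moment a consequence becomes relevant" idea from this exchange concrete, here is a minimal illustrative sketch in python. the ConsentStore class, the purpose strings, and the prompt_user callback are all hypothetical, invented for illustration; nothing here is drawn from a real product, regulation, or anything the panelists built.

```python
from typing import Callable, Dict

class ConsentStore:
    """tracks which processing purposes a user has already agreed to.

    illustrative only: purpose strings and the prompt callback are
    hypothetical, not taken from any real product or regulation.
    """

    def __init__(self, prompt_user: Callable[[str], bool]):
        self.grants: Dict[str, bool] = {}   # purpose -> the user's answer
        self.prompt_user = prompt_user      # asked at the moment of use

    def allowed(self, purpose: str) -> bool:
        # ask only when this purpose first becomes relevant, not all up front
        if purpose not in self.grants:
            self.grants[purpose] = self.prompt_user(
                "may we use your data for: " + purpose + "?")
        return self.grants[purpose]

# example: consent is requested when a new use actually arises.
store = ConsentStore(prompt_user=lambda q: input(q + " [y/n] ") == "y")
if store.allowed("route personalization"):
    pass  # proceed with that specific use
if store.allowed("sharing with advertisers"):
    pass  # asked separately, at the moment it becomes relevant
```

the design point is simply that consent requests are deferred and cached per purpose, rather than bundled into a single up-front agreement of the kind the lawyer on the panel describes drafting.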
2:48 pm
i think that's difficult. >> i mean, i think what you're suggesting is thinking of it more as a matter of user interaction design or user experience design: rather than asking for everything up front or trying to get an extremely broad consent up front, you might ask for some consent initially and more later. how and when you do that may be difficult depending on the nature of the product and whether there even is a touch point with the user that comes later. certainly i think thinking of these negotiations of consent in terms of user experience design, user interaction design, can be a fruitful way to get closer to a strong notion of consent in a way that is less burdensome on the user. >> to go back to that point as well, i think a lot of consumers, they don't really
2:49 pm
need to know the algorithm or to understand it. what they want to know is whether the data will be used in a different way, the purpose it's going to be used for, not so much the algorithm. i've heard that excuse before, some companies saying, well, there's no point in us explaining the algorithms to the consumers or the customers because they're not going to understand that. that's not the concern they have. it's the change in use. >> a quick followup for you. is it possible to have a discussion in the abstract about the notice and consent regime without looking at the market concentration in many markets for digital products and services? because if you're choosing between two or three mobile phone companies or two or three search engines or two or three social media platforms or mortgage lenders or hospitals, asking someone to opt out because they disagree with the consent provisions is inviting them to not participate in modern society.
2:50 pm
so there, i think, is a relationship between market structure and privacy policy which in many cases is definitive, in that you don't really have a realistic alternative other than to consent. >> it must be said that i thought it was interesting, what was going around on twitter. i don't know who posted it, but there was a statement from the fcc, which has just issued some privacy guidelines for internet access providers, describing the situation as one of no choice. they say you have to be additionally careful when there actually is no choice, which may be the case there. i think there is a certain sensitivity around the notion of fairness, which includes the notion of choice. >> let me at this point invite all of you to raise your hands. tim, are we passing a microphone around in order to get everyone on the recording? i'll start over here and work my way across the room.
2:51 pm
please give your name and affiliation before you ask your question, so our panelists will know who they're talking to. >> thank you, i'm the dean of the college of engineering at carnegie mellon university. i want to come back to this idea of privacy versus personalization. it would seem the last part of this discussion raises the question, why don't we apply personalization to privacy? right now the only choice is that i have to take one blanket consent form, it's either that or nothing. whereas if there were some way for me to fill out a privacy profile, so that it described what i wanted to share, what i didn't want to share, how i wanted to share my data, could that not then be applied against whatever the company is saying is their privacy policy, so i don't have
2:52 pm
to read every one of them, i simply spend the time saying what i'm about, and let the interaction happen, more like a personalization applied to privacy. >> that seems to remind me of the point you brought up about how competition in the private sector can potentially mitigate against abuses of privacy policies. maybe this is a question you can respond to. >> sure. and there are a couple of avenues that i think come to mind here. one is this idea that a user might check some boxes or slide some sliders in a user interface and give some idea of their preferences with respect to privacy, and there would be enforcement of that on the user's behalf or some kind of automated negotiation between the user's technology, their browser or app, and a company's technology, so things would only happen within the bounds that
2:53 pm
the user had said were acceptable. and there have been various attempts to build those sorts of technologies. none of them have taken hold, for reasons that i think are largely contingent. it could easily have turned out that such a thing became popular, but for reasons too complicated to go into here, i think that has mostly not happened. the other approach takes more of a machine learning kind of approach, where you're trying to ask the user a relatively limited number of specific questions about what they want, and then you have a technology that, on the user's behalf, tries to infer what decisions they would make in other cases. the idea of a system that operates on the user's behalf is one of the technological vehicles that could develop. and again, you have sort of contingent questions of technological development that
2:54 pm
may make that more likely, and may make it easier or more likely to be deployable. but certainly that, i think, is one direction in which users may be able to put technology to work on their behalf to manage this stuff, because the complexity of these choices, if the user has to make every single detailed choice, is too much. >> i think it's a very interesting idea. the question is whether it will hold against all of the four cases. it may not hold against the third case: you can use the personal data, and it is bad for society. personalization of privacy merits discussion, but would it actually eliminate the risk of things being bad for society? >> we'll take another question. right in the back. the microphone is coming.
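a rough, purely hypothetical sketch of the two avenues just described: the user records a few explicit (data type, purpose) preferences, and a crude rule infers an answer for requests the user never answered directly. the category names and the inference rule are invented for illustration; a real system would use a learned model, and, as the panelist notes, earlier attempts at machinery of this kind (p3p-style preference languages, for example) never took hold.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# a privacy "profile": the user's explicit answers for (data type, purpose) pairs.
# all category names here are hypothetical placeholders for illustration.
Profile = Dict[Tuple[str, str], bool]

@dataclass
class Request:
    data_type: str   # e.g. "location"
    purpose: str     # e.g. "advertising"

def decide(profile: Profile, req: Request) -> bool:
    """return True if the request fits what the user said is acceptable.

    1) exact match on (data_type, purpose) if the user answered it directly.
    2) otherwise infer: allow only if the user allowed this purpose for
       every data type they were asked about. a real system would use a
       learned model; this is the crudest possible stand-in.
    """
    key = (req.data_type, req.purpose)
    if key in profile:
        return profile[key]
    answers = [v for (_, p), v in profile.items() if p == req.purpose]
    return bool(answers) and all(answers)

user_profile: Profile = {
    ("location", "navigation"): True,
    ("location", "advertising"): False,
    ("contacts", "advertising"): False,
}
print(decide(user_profile, Request("browsing history", "advertising")))  # False
print(decide(user_profile, Request("location", "navigation")))           # True
```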
2:55 pm
>> yes. it's jose colon from the state department. i believe you are focusing on only part of the issue of privacy, because there are other means of data collection that don't involve people clicking on the internet. when you go to a store, for example, target, there are many cameras following you, you pay with a credit card. they sell information that they are getting from machine learning. they are using it for many purposes. we had the case of samsung with the smart tvs listening in on people's conversations. how would you address those? that's as big an issue as you clicking on something on the internet. >> your responses? >> i think this gets to the issue of, if you have a model based on notice and consent, how can you talk about consent in a case where collection of data happens in the environment, such
2:56 pm
as with cameras or with microphones that are out in the world. and the cases that are in a public place are, i think, some of the most difficult here. if there's a product in your home which has a microphone or camera and that's turned on without your consent, that seems to me not a difficult case from a policy standpoint. but in a public place, where there is not an interaction with the user in which consent could naturally be sought, i think this becomes a pretty difficult issue. i don't think we have all the answers to those by any means. >> for us this is also a real part of the debate, because there are two fundamental rights: the confidentiality of communications in your private life, and the data protection part. what you mentioned touches upon both aspects. your confidentiality, when you go somewhere you are not to be tracked, even if that doesn't
2:57 pm
necessarily and immediately involve personal data; it is still a right to be protected. so it's really part of the debate in europe. >> yes, right in the back. >> hi, my name is andrew hannah, i'm with "politico." you've talked about shifting responsibility back to corporations in terms of privacy agreements, and others have talked about softer forms of governance in terms of shaping what data can be used. i was wondering if you could be a little more concrete and talk about tangible initiatives that could be taken on a policy level to allow for this to happen. >> let me start. i think this already is happening. if you look at the dynamics that drive the privacy policies of some of the large companies and the ways in which companies use data, there is a competitive dynamic that operates, in which
2:58 pm
companies on the one hand would like to be able to use data to optimize their business goals, but on the other hand would like to be able to promise to consumers that the use of data is limited to things that consumers would find acceptable. and of course those promises, once made, have legal force. so i think you see this operating already. it's inherent in a model of notice and consent that consumers may either withhold consent or take their business somewhere else if they don't like what's being done in a particular setting. this is a dynamic that operates already, and it is something that is driven by the enforcement of law, for example by the ftc with respect to companies keeping their privacy promises to consumers. it's also driven by some of the public dialogue, and driven as well by the public debate and by
2:59 pm
some of the press coverage about privacy practices. all those things, i think, push companies to try to make stronger promises to consumers, which they then have to keep. >> i think there was one in the middle. yes, ma'am. >> hi, my name is kerry ann from the organization of american states. the question is kind of tied to the gentleman in the back who asked about other forms of data collection. most of you would recall, when she came online, what happened to her, how she collected data, the result. in terms of privacy, there is so much data available in blogs, in private, personalized facebook pages. you have algorithms you can build from all those sources that are open. how is that tied back to consumer protection if there's actually no obligation on the person who may be developing
3:00 pm
this new ai that we don't know about that's actually collecting it? how does privacy come in, when we're pushing our data out there for anyone to use? i wondered about your thoughts on that. >> great question. >> strictly speaking, if you are able to start reidentifying and it becomes personal data, you still fall under the data protection law. you have to look at how far you push the boundary in also using open data to reidentify, and the case that you mentioned was real. that's where people have to take responsibility, or at least in the european consideration, they would be liable under the law. >> hi, my name is al gombas, i work for the state department. i'm curious, if we were to create a scenario where we can
3:01 pm
negotiate the privacy restrictions, what might happen then, i think, is that companies will incentivize consumers to give more data, offering discounts or something if they want to get more data or consent from the individual. and i'm wondering how that might play out, whether you think that's a good idea or a bad idea, whether we should have a blanket law saying you can't do that, you have to offer the same discounts to everybody regardless of the amount of privacy they require of the company, and how a consumer may be taken advantage of. for example, consumers may be in a position where they feel like they have to give up data because they can't afford the service without it. >> i think there was a study that showed that consumers prefer giving some information, and then having the ability to consent if additional information or different uses are going to be
3:02 pm
made of the data. and i think the other point this study showed, i can't remember the name actually, was that consumers generally are willing to give more information if they get something in return. again, we go back to the notion of fairness. one of the problematic areas that we have is that the consumer or the customer doesn't know how the data has been used, or it has been used in a different way and no notification has been given. and i think the third thing is that the companies have been able to benefit, they've been able to monetize the data, use it for marketing reasons, but the consumer hasn't actually benefitted additionally from that different use or that additional information. so i think, at the end of the day, you go back to notice and consent, not right at the start of the relationship
3:03 pm
but perhaps as that relationship progresses. >> perhaps i can add something to that. for me there are two dimensions in it. one is, indeed, do you provide fairness in the perception of the user while the data is being used, and there people are saying that's not the case, because you get this disproportionate value out of it and you don't give part of that value back to me. that's one part of the debate. the other part of the debate is, does the consumer have a fair choice in the beginning, especially in a de facto monopolistic situation. if you look back at the statement the fcc made last week about access to the internet, what they are essentially saying is that you cannot be forced to give up your browser data, your browsing preference data, et cetera, or else you don't get access to my service. there's not so much choice in that case. there's also the aspect of, is there a reasonable balance at the moment that there is an
3:04 pm
essential service being provided versus the use of personal data; you cannot start excluding people from having access to a service. >> it's not different from other regulation, where you need the government to step in to start the ball rolling. if none of the internet providers are actually -- i mean, i think it's going to be quite difficult in some sectors to wait for the companies to take the initiative to regulate themselves. i think this is one of those issues where you have to have the government step in and just start the ball rolling. >> yes, ma'am, right here in the front. >> my name is erica bassou. i'm a ph.d. student at american university. my question is about the notion of democracy in all of this, and while we are speaking to a room
3:05 pm
full of people who have a fairly good idea of some of the terms that we're using like notice and consent in terms of service and data privacy and ai, i'm just wondering what this all means in terms of access to even this information about what these terms are. and is it just a conversation between policymakers and corporations who have access to these definitions? or is it really a conversation that you're having with the users who get affected? >> great question about literacy. i mean, i think you see, in practice you see a lot of discussion, a lot of chatter among policy experts. and you see more occasional flare-ups of direct public interest in some of these issues and some of the practices. and as is often the case in governance, the elites are
3:06 pm
sweating the details every day. and there is a corrective, of the public noticing something that seems quite wrong to them and speaking up loudly. i think that is how these things often do operate, and we do certainly see those flare-ups of direct public interest from time to time. >> one of the points in the debate in europe is also whether machine learning, ai, should be made more widely available, kind of an open ai type of environment, which actually could be quite an interesting point for international cooperation. that's kind of democratizing the tools themselves. >> yes, sir, in the front.
3:07 pm
>> thank you. daniel reisner from israel. my question is, you mentioned a phrase, old frameworks, when discussing this issue. one of my questions relates to one of the oldest frameworks we're using, which is the concept of the state, in the framework of this discussion. because we all realize that we've globalized every element of the discussion. the data, in spite of localization efforts, is global. companies hold data. the same piece of information is split between two or three different locations on the same server. and some of my clients are split up over different continents. you don't actually get the same piece of data in any one location anywhere. the company holding the data is actually a multi-structure and sits in 25 different locations as well. and so on the one hand, the data is globalized.
3:08 pm
the players are globalized. that raises the question, what is the role of the state? i'll give you an example which i faced relatively recently in israel. the israeli government, not the whole government, but parts of it, called me up one day and said they had decided to regulate an international cloud services provider. i asked, why do you think you should regulate them? they're not an israeli company, they're not active in israel per se. they buy the products online. they said, it's very simple, they offered the services to an israeli government entity. but i said, the cloud sits somewhere in europe, i think, the company is an american company, et cetera, et cetera. they said, yes, but the service is being offered in israel so it's our job to regulate. and i pushed back and i said, well, if you want to regulate it for that purpose, then 211 other countries in the world could legitimately make the same
3:09 pm
argument because it's a global service, i said, do you really think it makes sense? they said, we never thought of that, we'll take it under advisement, and i haven't heard from them since. what do you think we should be doing? governments are still our main tools of policy. but when we all recognize that facebook has more to say about the privacy of its constituent elements than any government in the world, with apologies to all governments represented, are we still having the discussion in the right forum? should we be thinking of a different mechanism in which we have a discussion and engagement with the right players and the right forum? >> a very simple question. [ laughter ] >> you see something similar happening in the debate around cyber security, which is considered by some very much a national issue but global companies are saying i want to
3:10 pm
buy the best cyber security in the world, i don't care really where it comes from, but i need to have the best because i'm a global company. is that necessarily contradictory? i don't think so in all cases. does it mean that you need to go for some form of global governance? at least a form of international governance, yes, because you need to have an idea of what is the quality of cyber security. i think supply side cooperation, if i can simplify the concept, could be quite fruitful in a case like this. what companies are actually asking for when they talk about data protection and privacy and machine learning, and how that is different from the more nationally determined cultural values around it, that's a plea to the community to make sure the ethics and cultural values discussion is really part of the debate around ai and machine learning, not only for academics but also for the institutions involved in that. i don't think you can get very far if you do this only nationally.
3:11 pm
>> quick followup question, because i think his question is really an important one. do you think there are any global institutions that could channel national interests effectively, at least at a minilateral level, meaning the largest number of states, and companies, that are willing to meaningfully participate in a single standard? >> i guess it's not really organized or named as such. but i pointed earlier to certain sectors in which you can start talking about the governance of data. so you can build upon some of the existing governance that is there and make that more ai and machine learning aware. use that. we don't need to invent something new, but perhaps we do need to talk about an additional, institutionalized, in quotation marks, form of governance to tackle this. some people have an interesting proposal on the table, a think tank and financing organization
3:12 pm
in the uk that talks about creating a machine intelligence commission that would work more on the basis of notions around common law, as it gets exposed to the practice, and it would really bring experience together. >> other comments on this point? we have about five minutes left. i'm going to try to take a few more questions. yes, sir. >> carl hedler from the george washington university privacy and research institute. we seem to be relying largely on lawsuits to control corporate behavior in regard to privacy. i'm wondering, in that case people have to identify harms. i'm concerned about the ability to do that in the context of ai and machine learning -- [ inaudible ] >> that's a tough question.
3:13 pm
and it gets to some deep technical issues, as you know. the question of why an ai system did a particular thing, and what that system might have done had conditions been a bit different, can be difficult to answer. but depending on what kind of decision it is that the system made or assisted in, there are different legal regimes that may operate, at least in the u.s., and different burdens, different legal burdens, may apply to the company or institution that is making that decision with the help of ai. so i think it's more a question of detail as to what kind of decision it is, and what kind of showing and what kind of governance is needed. i also think that, to the extent that people are naturally skeptical of whether complex ai based decisions are being made in a way that is fair and
3:14 pm
justifiable, using these technologies in a way that is really sustainable in the longer run will require a deliberate effort at being able to explain why a particular decision was made, or to be able to produce evidence to justify the fairness or efficacy of the decision that's being made. it's not a simple issue. i do think that the public, in protecting themselves, and government, in protecting the public against the sort of harms you're talking about, are not without legal or technical capabilities. >> let me ask a question that sums up several that i've heard so far, which is: given the inherent weaknesses of notice and consent, but recognizing it's a tool that we have, and
3:15 pm
recognizing the challenges of identifying harm and adjudicating it, is there a combination of tools that might be used that are rooted in transparency? what does this algorithm do, and what is it intended to do? from that we get a better sense of whether it is producing a harm or may produce a harm, and that harm, or some approximation of that risk, should be disclosed in the notice regime. what is the combination of tools that might best produce a framework for handling these technologies as we move forward? do you want to jump in on that? >> i think that's a very good question. ed's comment is just right. there is something very interesting about being an ai engineer building one of these systems. it's sometimes very hard to diagnose why your system did something.
3:16 pm
but you always have to write down something called an objective function, which describes what i'm trying to accomplish. i can show that to a lawyer or policymaker: this is why my algorithm is pulling data from many people. on the other hand, if i've supplemented it a little bit because i'm getting paid by a coffee company to route people past their coffee shops, again, that will be sitting there in the code. so when you think about an a.i. or machine learning algorithm being written, if someone says it's so complicated we can't explain it, that's not a legitimate answer, because when
3:17 pm
you write an a.i. algorithm, you always have to write the objective function, what is the thing that the a.i. system is trying to do. so if you want companies, or governments, to be clear about what their a.i.s are doing, it is legitimate to say, show me the objective function. >> maybe we will leave it there, with andrew's optimistic vision about a possible way forward. really appreciate that. please join me in thanking all of our great panelists for the discussion today. [ applause ] today, three former white house chiefs of staff discuss presidential transitions and the challenges facing the incoming administration. live coverage begins at 6:30 p.m. eastern on c-span. tonight on the communicators.
3:18 pm
former fcc commissioners robert mcdowell and michael copps on how the fcc could change under the trump administration, and on tech and telecommunications issues. >> if we're smart, we'll tackle what's the future of the internet, going beyond network neutrality. what does it mean with artificial intelligence, what does it mean for jobs? what about consolidation and commercialization? >> i sense that the plans for how companies can team up for services, and the set top box item, where there wasn't any sort of unanimity among the democrats, are also not going to get off the ground. >> watch the communicators tonight at 8:00 eastern on c-span 2. we have more now from a conference on artificial intelligence. this is about autonomous weapon systems and their current and potential military uses around the world
3:19 pm
and delegating decisions to computers, as well as the constraints on their use in battle situations. good afternoon. thank you all for joining us this afternoon, those of you who are joining us just now, and also welcome to those of you joining us on the live stream today. the hashtag for the event is #carnegiedigital. this is the second panel of the first part of the carnegie colloquium. drop your business card off
3:20 pm
outside or send us an e-mail. this panel focuses on autonomy in the context of military operations. as i explained earlier this morning for the first panel, this event is designed to combine the tech expertise of carnegie mellon university with the expertise of the carnegie endowment. each panel begins with remarks to set the stage, followed by a panel discussion with experts from around the world. so we are particularly pleased and delighted to have people from israel and india who came all the way specifically for this event. it's now my pleasure to introduce david brumley, director of the security and privacy institute at carnegie mellon university. he's also ceo of a company called forallsecure, which won the darpa cyber grand challenge this year. it's a great pleasure to have him here to give the stage-setting remarks. with that, i look forward to the panel discussion. thank you. [ applause ]
3:21 pm
>> thank you, everyone. if you read the headlines today, you'll come across headlines such as: russia is building robots to fight on the battlefield. the u.s. navy is developing swarms of unmanned drones. and darpa commissions a fully autonomous cyber bot competition. these highlight the increasing role of autonomy in the military. in the second panel we'll take an international perspective on what autonomy and counterautonomy mean in military operations. as tim mentioned, my name is david brumley. i'm a professor and the director of cmu's security and privacy institute. i also consider myself a hacker, as i run the hacking team that many people have talked about. my job for the next ten minutes is to give a high level overview of the issue, why it's so exciting, why it's so timely, and why it's so important to get absolutely right as we go
3:22 pm
forward. this panel's issue in a nutshell is that countries around the world, including the u.s., russia, israel, china, and india, are increasingly deploying and investing in artificial intelligence and autonomy in their operations. autonomous technology, once the work of science fiction, is here today. for example, in pittsburgh, you can use your uber app to summon a completely autonomous vehicle to take you home from a steelers game. don't just think physical. think of cyber space. think of social. for example, in august this year, darpa demonstrated that it's possible to build fully autonomous cyber bots for full spectrum offense and defense. it then went on to demonstrate that these bots can supplement human capabilities in the manual defcon competition. we also need to think about social networks, where autonomous systems can be used to sway the opinion of a population. key pros include faster and better decisionmaking in weapons
3:23 pm
systems, cyber space operations, and even the possibility of fully robotic soldiers in warfare. these are all significant benefits that lower cost and lead to better protection of human life. however, there are significant policy, legal, and ethical questions. many questions revolve around how much control we should cede to machines. what sort of actions should we allow machines to take, and when? and how do we handle the case when machines make mistakes, when there are bugs that could be exploited by adversaries? what does autonomy mean? autonomy results from the delegation of a decision to an authorized entity to take action within specific boundaries. we'll be talking about delegation of a decision. in the context of this panel, we delegate that decision to a computer program. an app, if you will. everyone is familiar with apps like games and web browsers. but these are not autonomous. they follow a fixed set of rules and interact with the user in a
3:24 pm
very limited way. an autonomous system must be more than an app following a prescriptive set of rules. it must be able to make a decision about how its actions will affect the environment. today we focus on autonomous decisions where we delegate a decision to take action, and that action has been ceded to a computer app. that app interacts with the world and the world interacts with it. i also want to set the stage for the size and the scope of the investment in autonomy. i want to use the u.s. department of defense and its history as an illustrative lens. autonomy and ai are center stage in its strategy, which is called the third offset strategy. when i first heard this phrase, offset strategy, i didn't really understand what it meant. it seeks to offset a numerically superior force with technical supremacy. an offset strategy allows
3:25 pm
someone like the u.s. to win without matching the enemy tank for tank or plane for plane. to get a sense of the scale, the very first offset was our nuclear weapons strategy. the u.s. invested heavily in nuclear weapons, especially battlefield and tactical nuclear weapons, because they provided an effective deterrent. we didn't have to match the enemy tank for tank, plane for plane. in the mid-'70s, russia reached nuclear parity with the u.s. and the offset was no longer an offset. the u.s. and other countries started looking for other offsets. the u.s. came up with the second offset, where the idea was that by using accurate guided munitions delivered by effective delivery systems, you could achieve the same effect without the collateral damage. this investment led to huge advances in science that went beyond the military domain. things like gps wouldn't have been possible if the u.s. hadn't invested in this idea. we expect the investment and
3:26 pm
radical change in international policy to be just as significant. the race to autonomy is not only happening in the u.s. to implement these sorts of offset strategies. it's also happening in other countries. for example, russia and china, which i mentioned just a few minutes ago, are investing in roboticized armies. it's also in industry. in 2014, a bank of america report stated that japanese companies invested more than $2 billion in autonomous systems, led by tech companies such as facebook and google. we don't get to just deploy autonomous systems and call it done, though. once we deploy them, they themselves may become targets. that leads to the notion of counterautonomy, where adversaries may go after the autonomous systems themselves as a way of getting at their intended target. as an example, just to kind of
3:27 pm
put this in scope, there's a very famous chess computer called rybka. it was defeated because someone found a flaw in the engine. in chess, if you go more than 50 moves without moving a pawn, it's a draw. the chess engine had a flaw where it would try to avoid a draw under all circumstances. this player went after the autonomous system by offering a piece as a sacrifice. the computer thought it was a piece up. the player would then make 49 moves without a pawn move. the computer would say, oh, no, a draw is coming up, and it would try to avoid it. and the player could go to town. this is going after the algorithm, not just the chess game. autonomy is going to be huge. it's critical we get it right. the stakes are extremely high for many reasons. one of them is that autonomy will drive us to take decisive action faster and faster. these actions will be in the cyber domain and the kinetic domain. remember what i said: autonomy is a delegation to an authorized entity to take action
3:28 pm
within a specific boundary. i want to think about a couple of different dimensions. first, what decision is being delegated? second, in what circumstances? and third, what are the appropriate boundaries for using this sort of technology? to dig a little deeper, the decision being delegated is a difficult question. countries are putting stakes in the ground on how they're going to think about this. the deputy secretary of defense in 2014, when he was questioned whether a computer would ever take lethal action, said that humans, in the united states' conception, will always be the ones who make the decision to use lethal force, period, end of story. but the pace of technology makes applying these high level philosophies and principles to different situations difficult. for example, should an autonomous system shoot a suicide bomber before they have an effect? is that okay? is that defense? is that offense? when is the decision ceded? he goes on to say, and he qualifies this, that there may be
3:29 pm
times when it's okay for the computer to take control. for example, suppose you've got 60 missiles coming at you. there's no way a human is going to be able to sort all that out. the human will make the decision, but make it ahead of time, for the computer to be able to react to that. this isn't a hypothetical conversation. it's here today. for example, consider for a minute fire and forget missile systems. we've all probably heard of these in the newspaper. one example is the uk brimstone missile, which, as groups that one of our panelists serves on have pointed out, illustrates that there are no clear lines about when we've ceded control. these fire and forget systems are often described as autonomous. some will say they're semi autonomous. it really just depends on which definition you're looking at. the british air force described it as effective against all known and projected armored threats. brimstone's radar seeker searches, comparing returns to known target signatures in its
3:30 pm
memory. the missile rejects returns which do not match and continues searching and comparing. the missiles can be programmed not to search for targets, allowing them to safely overfly friendly forces, or only to accept targets in a designated box area, thus avoiding collateral damage. this raises an interesting question: someone has decided to use lethal action, but it was up to the computer to identify who to take lethal action against. there's another, more subtle question: what do we do when there's a bug in the software, so that it maybe misidentifies where it's supposed to go? what are the constraints? if we go back to the uber example in pittsburgh, suppose a pedestrian walks out in front of a self-driving car and it can only miss the human by driving off a bridge. who should you save, the driver or the pedestrian? a good question.
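before the speaker introductions that follow, a minimal, purely illustrative sketch of what an explicit, inspectable acceptance rule of the kind just described might look like: accept a sensor return only if it matches a known signature and lies inside a designated box. the signature labels, coordinates, and the accept function are hypothetical placeholders, not parameters of any real weapon system; the point, echoing the earlier "show me the objective function" remark, is only that the delegated boundary can be written down and audited.

```python
from dataclasses import dataclass

@dataclass
class SensorReturn:
    """a single sensor return. all fields and values are hypothetical."""
    signature: str   # label produced by some upstream classifier
    x: float         # position, arbitrary units
    y: float

# the delegated "boundary": a designated box plus a set of accepted
# signatures. these are invented placeholders, not real system parameters.
ACCEPTED_SIGNATURES = {"known_armored_vehicle"}
BOX = (0.0, 0.0, 10.0, 10.0)   # x_min, y_min, x_max, y_max

def accept(ret: SensorReturn) -> bool:
    """accept a return only if it matches a known signature AND lies inside
    the designated box; otherwise keep searching. the point is only that
    this rule is explicit and can be shown to a reviewer."""
    x_min, y_min, x_max, y_max = BOX
    inside = x_min <= ret.x <= x_max and y_min <= ret.y <= y_max
    return inside and ret.signature in ACCEPTED_SIGNATURES

print(accept(SensorReturn("known_armored_vehicle", 3.0, 4.0)))   # True
print(accept(SensorReturn("unknown", 3.0, 4.0)))                 # False: no signature match
print(accept(SensorReturn("known_armored_vehicle", 20.0, 4.0)))  # False: outside the box
```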
3:31 pm
there's no clear solution. and in military operations, we often have similar questions. who are we going to save when given the choice? how are we going to program the objective functions in these military operations? so with that framing, i would like to introduce our moderator and speakers. our moderator is the vice president for studies at the carnegie endowment for international peace. george, can you please step up. his work is primarily on nuclear strategy and nonproliferation issues and on south asian security. george is the author of the prize winning book, "india's nuclear bomb," called an extraordinary and definitive account of 50 years of indian policy making. george has been a member of the national academy of sciences committee on arms control and international security, the council on nuclear policy, and many other such advisory committees. thank you, george, for joining us today. our first panelist is daniel reisner, can you please come up, a partner at the herzog law office.
3:32 pm
he joined in 2008 as the firm's public international law, security, and defense partner. recognized as one of israel's leading public international law experts, he served for ten years as head of the international law department, where he was the senior lawyer responsible for advising the israeli leadership. i hope you can advise us on this issue as well. i would also like to invite the director of the arms division at human rights watch, where she led the advocacy against particularly problematic weapons that pose significant threats to civilians. she is also serving as the global coordinator of the campaign to stop killer robots, one of the people i quoted earlier on the uk brimstone. she worked for the vietnam veterans of america foundation,
3:33 pm
assisting jody williams and coordinating the international campaign to ban landmines, co-laureate of the 1997 nobel peace prize. finally, the general. he retired in april this year after 40 years of active military service in the corps of signals of the indian army. his last appointment was commandant of the military college of telecommunication engineering, which carries out training in the fields of ict, electronic warfare, and cyber operations, and is also designated a center of excellence for the indian army in these disciplines. the general officer has received many awards. i want to call out a few of them. he has been the recipient of the president's award for distinguished service in the defense forces. he has also been recognized by the department of defense production for r&d work. and last year he was conferred the coveted distinguished alumnus award by the indian institute of technology bombay and is the