
Key Capitol Hill Hearings | C-SPAN | November 18, 2016, 11:58am-1:59pm EST

11:58 am
over time and to make sure we are communicating that effectively and finally on speeding up decision making, it's an excellent question and jorge was right to put that into a slide. to me the logical thing always would have been to make a clear distinction between political decisions and the implementation of political decisions. and it is possible, i believe, to preserve consensus decision making at the north atlantic council on political issues. but once decisions at a political level are taken, even if it's well, well in advance, they ought to then be implemented through the nato authorities, both civilian and military, without having to go back for additional political decision making. and that is the part that always slows things down. from the kosovo campaign it was about targeting.
11:59 am
when it comes to deployments of forces or how forces would act in a given situation, say, in afghanistan, it always came back to the nac for further decisions. that's something i think we should try to separate out. we take initial political decisions, we can take future big political decisions, but in between we need to let nato authorities get on with their work. >> i wanted to mention on moldova. i want to mention a book we have out we're calling europe's gray zone. the reality is we're facing a vast space of europe in europe's east, non-nato, non-eu europe. it's turbulent, it's violent, there are lingering issues, not only russian engagement but corrupt elites, legacy issues, those trying to block reform within many of these countries, and these
12:00 pm
people -- the people of that region don't know where their future lies and i would argue we haven't been all that engaged in helping them find that out. so this is my other point: i don't know that nato leads on that, though. at the moment, frankly, the european union should be having a real rethink of its eastern partnership policies. the eastern partnership has been the way the eu has engaged these countries, particularly the ones we're talking about, and i think people in the region don't understand it, because this is a process, like many of the eu processes, where a country like moldova has a stack of paper in front of it. it's got to do all this stuff in order to really move forward, and it's not going to become luxembourg tomorrow. it's so many procedural issues, so much of it elite and bureaucratic, so far removed from public attention, that people in moldova, people in ukraine, people in these countries, they don't know what the eu would
12:01 pm
mean to them. there's nothing tangible. and so i think we need a tailored approach to the eastern partnership, attuned to the basic needs of those countries, and we need to be a bit more willing to deal with the corruption issue -- in fact, with how we enable the corruption. the banking fraud in moldova: all the money funneled out of moldova ended up funneled through latvian banks. a member of the eu, a member of nato. there are thousands of examples of this, of how we don't enforce our own laws, and that allows this type of thing to happen. so there are things we can do for ourselves to help moldova -- we don't even need to go there -- but we don't do it. so i think there's a lot of stuff we have to realize: we're going to be facing this unstable east for some time to come. the european commission has said no enlargement under this current commission. i would bet there's no enlargement under the next commission. we're facing a decade at least
12:02 pm
of uncertainty and instability and we have to create new tools by which we engage, and nato is one, but not the only one. we have to be realistic: this is the europe we're facing. it's not fixed, it's not a stable place, the potential for continuing violence is very high, particularly in this part of europe. if there's one lesson of history we've learned, it's that wherever we turn away from the gray zones of europe, we end up paying a higher price later. so we need to engage up front and think about these things now. >> if i could just add, with respect to the public support question, the pew research center has done a lot of public opinion research in the united states and in nato countries on this. if you look at their data from april this year, among people who lean republican, 75% of them said nato was good for the united states. among people who lean democrat it was 81% who said nato was good for the united states,
12:03 pm
independents 78%. they even broke it down by whom you support, and for supporters of donald trump, who has been the most critical of nato in this election campaign, even among his supporters 64% of people said nato is good for the united states. so i think that speaks to the reservoir of understanding and of support in the u.s. public. with regard to the question on ukraine, i agree with curt volker's answer. i don't agree with the facts as you laid them out. i think the facts are quite different, and even if you take one bit, the allegation that nato is moving its military infrastructure closer, i think if you look at the numbers of forces that nato has placed in the east, they are in no way a threat to russia, and it was only done after a long and careful evaluation. thanks.
12:04 pm
>> i agree with what was said in the other answers. i'll just briefly focus on the decision making. i think it's time for nato -- for the north atlantic council, the political leadership -- to realize that it's not in the military implementation business. it has created these forces, it can decide whether these forces can enter combat, but i think it should delegate to nato military commanders the authority to train these forces, to have snap exercises for these forces, to move them two to three times a year across the alliance. i think this will improve the readiness of these forces, which is a key need, and it will improve deterrence in europe to have these forces more readily deployable and more quickly deployed by military commanders. >> just a quick note on the decision-making question. let me be the first to admit
12:05 pm
that of course there's room for improvement in nato decision making, having spent hours of my life in windowless rooms like this having nato meetings. but having said that, it's also our experience that in a time of real crisis nato can take important political decisions very quickly. i was at the u.s. delegation at nato on 9/11, and within 24 hours of the declaration of article v being put on the table, the allies agreed to declare article v, and when the united states requested certain measures of support from allies, within 24 hours everything the united states requested had been granted by the allies. so when there is a crisis in the offing, sometimes they can move quickly. that doesn't mean they can't do better. >> once again, we're way over time, so that's why you've asked me to talk. i want to thank our distinguished panelists for a
12:06 pm
superb discussion today. i would only add that, of course, i'm an old timer and over the hill, but when i had the privilege of commanding the nato strategic reserve forces in the late '70s and early '80s, we didn't worry so much about central europe. our whole effort was on the flanks of nato, and we had a maritime thought process and a maritime strategy, and we talked about a much larger kind of thought process and a much larger kind of integrated and adaptable strategy. and i go back to what i said earlier: there should be a nato strategy and it should be a free world strategy, and we don't have to worry too much about russia, i don't think, if we just put our thinking caps on and remember that it's in
12:07 pm
nobody's interest, russia's interest or anybody else's interest, to get too fancy or start too much of a problem anywhere in the world. you know, we're in an era of globalization now. any strategy has to be an economic thought process, a political thought process, a societal thought process, technology is a part of it and, of course, the military, and we simply don't seem to have that kind of not only national but international thought process going on. so if i were asked to advise the new president, i would think along those kinds of lines, but don't worry, they're not going to ask me. thank you for being with us. [ applause ] [ indistinct conversation ] the supreme court recently heard oral argument in two consolidated cases brought by the city of miami against the
12:08 pm
bank of america and wells fargo. the court will decide if miami can sue the banks under the fair housing act for discriminatory mortgages given to african-american and hispanic home buyers that resulted in loan defaults, foreclosures and less tax revenue for the city. watch that tonight at 8:00 p.m. eastern on our companion network c-span 2. we have a special web page at cspan.org to help you follow the supreme court. go to cspan.org and select "supreme court" at the top right of the page. once on our supreme court page you'll see four of the most recent oral arguments heard by the court this term, and you can click on the "view all" link to see all the oral arguments covered by c-span. in addition you can find recent appearances by many of the supreme court justices or watch justices in their own words, including one-on-one interviews in the past few months with justices kagan, thomas and
12:09 pm
ginsburg. there's also a calendar for this term, a list of all current justices with links to quickly see all their appearances on c-span as well as many other supreme court videos available on demand. follow the supreme court at cspan.org. now, it's a discussion about artificial intelligence, privacy and security. the carnegie endowment for international peace and carnegie mellon university recently co-hosted this event. this next portion features a panel on consumer privacy and the legal considerations, restrictions and implications of mass data collection and sharing. this is about an hour and forty-five minutes. [ indistinct conversation ]
12:10 pm
>> good morning, everybody. i co-direct the cyber policy initiative at the carnegie endowment, and together with carnegie mellon, we're delighted that you joined us. the hashtag for this event is "carnegie digital" and i now have the pleasure of introducing ambassador bill burns to you for the welcoming remarks. i look forward to spending this day with you. thank you very much. [ applause ] >> well, good morning, everyone, and welcome again. let me begin by congratulating tim and david and their carnegie endowment and carnegie mellon colleagues for putting together this extraordinary colloquium. i'm delighted to launch today's
12:11 pm
event. their leadership reminds me of how fortunate i am to be a part of the extended carnegie family. as president of the carnegie endowment for nearly the past two years, and as a diplomat for 33 years before that, i've had the privilege of welcoming heads of state, military generals, foreign ministers, university presidents and distinguished thinkers and doers of all stripes, but i've never had the privilege of introducing a robot, let alone several, so it's a great pleasure to welcome snake bot, ball bot and their friends to today's event. like all of you, i look forward to getting a glimpse of our robotic future later in today's program. robots are not today's only first. today is also the first of two events we're holding for the first time with carnegie mellon university, one of the world's premier universities and a fellow member of the impressive group of institutions founded by andrew carnegie more than a century
12:12 pm
ago. andrew carnegie created these institutions at a critical historical juncture. the foundations of the international order that prevailed for most of the 19th century were beginning to crack. catastrophic war and disorder loomed, and the last great surge of the industrial revolution was transforming the global economy. the carnegie endowment, together with its sister organizations, sought to help establish and reinforce the new system of order that emerged out of the two world wars, a system that produced more peace and prosperity in the second half of the 20th century than andrew carnegie could ever have imagined. it's hard to escape the feeling that the world is once again at a transformative moment. profound forces are shaking the underpinnings of international order: the return of great power rivalry and the rise of conflict after many years of decline; the growing use of new information technologies both as drivers of human advancement and
12:13 pm
as levers of disruption and division within and among countries. the shift of economic die 'nam irk from west to east and growing pressures of economic dislocation and stagnation and the rejection by societies of many regions of western-led globalization and the embrace of an angry fortress like nationalism. here at the carnegie endowment, we're trying to meet these challenges head on across our programs and six global centers, we focus this colloquim and partnership with carnegie mellon and one of the most significant of these challenges, the intersection of emerging technologies, innovation and international affairs. technology l's capacity as all of you know very well to simultaneously advance and challenge global peace and security is increasingly apparent. in too many areas the scale and soap of technological innovation is outpacing the development of rules and norms intended to maximize its benefits while
12:14 pm
minimizing its risks. in today's world, no single country will be able to dictate these rules and norms. as a global institution with deep expertise, decades of experience in nuclear policy and significant reach into some of the most technologically capable governments and societies, the carnegie endowment is well positioned to identify and help bridge these gaps. earlier this year, we launched a cyber policy initiative to do just that. working quietly with government officials, experts and businesses in key countries, our team is developing norms and measures to manage the cyber threats of greatest strategic significance. these include threats to the integrity of financial data, unresolved tensions between governments and private actors regarding how to actively defend against cyber attack, systemic corruption of the information and communication technology supply chain, and attacks on command and control of strategic
12:15 pm
weapons systems. our partnership with carnegie mellon seeks to deepen the exchange of ideas among our scholars and the global community of technical experts and practitioners wrestling with the whole range of digital governance and security issues. today's event will focus on artificial intelligence and its implications in the civilian and military domains. tim and david have curated an exceptional set of panels with diverse international and professional perspectives. on december 2 we will convene in pittsburgh for more conversation. our hope is that this conversation will be the beginning of a sustained collaboration between our two institutions and with all of you. there is simply too much at stake for all of us to tackle this problem separately. we can, and indeed we must, tackle it together if we hope to sustain andrew carnegie's legacy. i'd like to conclude by thanking
12:16 pm
the carnegie corporation of new york for making this colloquium possible and for everything they've done and continue to do to contribute to a more peaceful world, and let me thank and welcome an extraordinary leader of an extraordinary institution and a terrific co-conspirator in this endeavor. thank you all very much. [ applause ] >> thank you, bill. i also want to thank tim and david for all their efforts. welcome to the inaugural carnegie colloquium, part of an initiative to inform and shape global norms and modes of cooperation in artificial intelligence, machine learning and cyber security. i would like to thank first and foremost ambassador bill burns
12:17 pm
for hosting this event today. as two organizations that reflect the strong legacy of andrew carnegie, carnegie mellon university and the carnegie endowment for international peace have formed a powerful partnership to examine technology and diplomacy across a set of emerging areas critical to our collective future. it's my sincere hope that this event, as well as the follow-up colloquium which will take place at carnegie mellon university on december 2, forms the basis of an even broader and closer relationship between our two institutions. let me also add my thanks to dr. gregorian, president of the carnegie corporation of new york, who provided support for both of these events, which, in fact, are based on a conversation that
12:18 pm
ambassador burns and i had a few months ago; dr. gregorian was enthusiastic and supportive of this effort. to understand carnegie mellon university's role in artificial intelligence, machine learning and cyber security, we must first recognize cmu as a place where pioneering work in computer science and artificial intelligence took place decades ago. ever since herbert simon and allen newell created the ingredients of artificial intelligence in the 1950s, before the terminology was even recognized broadly, cmu has remained at the cutting edge of this field. carnegie mellon took the bold step, a generation later, to create its software engineering
12:19 pm
institute and serve the industry by acquiring, developing, operating and sustaining innovative software systems that are affordable, enduring and trustworthy. designing safe software systems and attempting to recreate the learning abilities of the human brain were natural progressions toward two of the modern world's most pressing concerns. to meet these challenges, carnegie mellon's approach is multidisciplinary, encompassing a broad range of disparate disciplines. it incorporates faculty from across the university with strengths such as policy
12:20 pm
development, risk management and modeling. our aim is to have sustainable communications systems and the policy guidelines to maximize their effectiveness. cmu's cylab is a partnership that has become a leader in technological research, education and security awareness among cyber citizens of all ages. by drawing on the expertise of more than 100 cmu professors from various disciplines, cylab is a world leader in artificial intelligence and cyber defense, and is a pipeline for public and
12:21 pm
private sector leadership in organizations as varied as nsa and google. the work of cylab professor marios savvides on facial recognition, machine learning and many other aspects was featured on nova. in particular, the professor's facial recognition programming helped match a very blurry surveillance photo with the boston marathon bomber from a database of one million faces. you will have an opportunity to see the professor's work in action today during the lunchtime demonstrations downstairs. today you will also hear from cylab's director david brumley, who led a cmu team a couple of months ago that won the super
12:22 pm
bowl of hacking, darpa's $2 million challenge. congratulations, david. just a week later, david took a team of cmu students to defcon in las vegas, where they won again in another hacking competition. finally you will hear from andrew moore, the dean of our school of computer science. i would also like to acknowledge dr. jim garrett, the dean of engineering at carnegie mellon university, who joins us along with rick seiger, who played an important role in helping put together this event between carnegie mellon and the carnegie
12:23 pm
endowment. all of this is highlighted in the colloquium today, an outgrowth of the partnership between our two organizations. you will learn more about this in the two panel discussions today. we hope these discussions on the future of consumer privacy and autonomy in military operations will lay a strong foundation for future colloquia, and that they will better inform ongoing technology and diplomacy work in these critical areas. i'd like to welcome you today and i would like to close by thanking again ambassador burns. thank you. [ applause ] >> before we start, let me outline the two key ideas that have been driving this event since david and i started planning it. the first
12:24 pm
one was to bring together the technical experts of carnegie mellon university and the policy experts from the carnegie endowment. that's why each panel is preceded by a setting-the-stage presentation from one of the technical experts from carnegie mellon university, followed by the panel discussion. the second idea was to draw on the carnegie endowment's reach to bring people from around the world into the discussion, so i'm pleased not only to welcome our partners from pittsburgh but also to welcome, for example, people who have come from hong kong. if you're interested in joining the event on december 2 in pittsburgh, be sure to drop your business card off outside or send us an e-mail. i would like to introduce andrew moore, the dean of computer science at carnegie mellon university. the computer science school at carnegie mellon university has repeatedly been ranked as the number one school by u.s. news in the past few years for its grad school program, and prior to
12:25 pm
becoming dean two years ago, andrew was vice president of engineering at google commerce, has been on the faculty of cmu since 1993, and has long been involved in the advancement of artificial intelligence. keeping with the global theme of this event, he hails from bournemouth in the united kingdom. thank you very much. >> so this is an interesting and exciting time in the world of artificial intelligence. for many people, for regular consumers, it's got great promise. for companies it's an absolutely critical differentiator, and for societies in general we do have options here to make the world a much better place through careful application of technology. what i'd like to do to set the stage here is talk about two things which at first sight sound like clear
12:26 pm
goods: personalization -- i'll explain what that means -- and privacy, two extremely important issues. then i'm going to run through a series of cases where these two great principles start to bump into each other. they will get increasingly sophisticated, and by the end of this stage setting i hope to have everyone squirming in their seats, because it's so annoying that two wonderful and important things, privacy and personalization, which seemed like clear goods, lead us to very difficult societal and technical challenges. so that's what i'm going to try to do in the next couple of minutes. so let's begin with privacy. it's a clear right, and almost all of us would agree that anyone who intentionally violates privacy by revealing information which they gained in confidence is doing something bad, and there are laws in our
12:27 pm
international legal system and in all our domestic legal systems which deal with that issue. so that's important. personalization is probably one of the most critical features of a world based on artificial intelligence and machine learning, and i'll explain places where it's obviously good. many great institutions, including carnegie mellon under dr. suresh's leadership, have developed ways to help children learn more effectively. if it turns out that i as a child have problems with understanding when to use the letters "ck" while i'm writing, it makes a lot of sense for an automated tutor to personalize its instruction so it can practice that particular issue with me. no doubt about it, that seems like a sensible thing.
12:28 pm
if i'm a patient in a hospital and it becomes pretty clear that, unlike most patients, i cannot tolerate more than a certain amount of ibuprofen within 20 minutes of a meal, then as we learn that, of course it makes sense to personalize my treatment. so that is good, and at the moment there's no difficulty involved. here's where it gets interesting. some aspects of personalization -- like, for instance, how i'm likely to react to some liver cancer medications -- it's not like we can personalize by just looking at what's happened to me over my lifetime. when you're building a personalization system, the way you power it is to find out about me and then ask the question: to make things good for andrew, what should i do, and
12:29 pm
what can i learn from other people like andrew? and that is suddenly where you begin to see this conflict. other people like andrew is something which can help me a lot, because if it turns out that everyone who's over 6'3" with a british accent is virulently opposed to, for example, the electric light orchestra, it's an extremely useful thing to know, so i can make sure that's never recommended to me. so it makes sense to use other people's information in aggregate to help personalize things for me, and in many examples that can really make things better. recommendations of movies is an obvious one, and then when you start to think of information on the web: for example, if i like to browse news every day and we notice that i'm typical of people who perhaps in the
12:30 pm
mornings are very interested in policy-related news but in the evening when i'm relaxing tend to like technology-related news, that's useful information to make sure i'm a happier person when i'm reading the news. so this is the upside of personalization. personalization uses machine learning. machine learning is exactly the technology which looks at data and figures out the patterns to usefully say: what would other people like andrew want? and the key question is what it means for someone to be like me or dissimilar to me. it's the thing which powers ads in gmail and movie recommendations, and the thing which helps the personalized medicine initiative figure out how to treat you -- you'll probably need different treatment from someone else.
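to make "people like andrew" concrete, here is a minimal sketch of user-based collaborative filtering, the simplest form of the similarity-driven personalization being described. all names, ratings and numbers below are invented for illustration; real recommender systems use far larger data and richer models.

```python
# minimal sketch of user-based collaborative filtering ("people like andrew").
# all users, items and ratings are invented for illustration only.
import numpy as np

# rows: users, columns: items (say, bands); 0 means "hasn't rated it yet".
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],   # andrew (item 2, "elo", unrated)
    [5.0, 5.0, 1.0, 1.0],   # a user very similar to andrew, who rated elo low
    [1.0, 0.0, 5.0, 4.0],   # a dissimilar user who loves elo
])

def cosine(a, b):
    """similarity between two users' rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def predict_unseen(user, k=1):
    """predict the user's score for unrated items from the k most similar users."""
    target = ratings[user]
    sims = sorted(((cosine(target, ratings[j]), j)
                   for j in range(len(ratings)) if j != user), reverse=True)[:k]
    preds = {}
    for item in np.where(target == 0.0)[0]:
        num = sum(s * ratings[j, item] for s, j in sims)
        den = sum(abs(s) for s, _ in sims)
        preds[int(item)] = round(num / den, 2) if den else 0.0
    return preds

# andrew's nearest neighbor rated item 2 low, so the system predicts a low
# score and never recommends it -- personalization powered by other people.
print(predict_unseen(user=0))   # -> {2: 1.0}
```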
12:31 pm
and now i'm going to go through four examples of increasing squirminess, of why this stuff is hard -- why privacy and personalization actually start to conflict with each other. the first is a simple case of things we'd like to think society is going to do. if someone publishes unauthorized data about me, they are breaking the law and that should be remedied. that's the simplest case, and the responsibility there in a good company or a well-functioning government is you actually have the legislation in place, you have clear rules, and if somebody does, for example, look up the bank account of a famous celebrity just so they can blog about it, that person is going to get fired, and in some cases the consequences are serious and there's a more significant penalty. now, cases two, three, and four are ones where it starts to get a little fuzzier. case two: someone uses your data in a way
12:32 pm
you didn't expect, but it turns out you kind of agreed to it. a famous example is a firefighter in everett, washington, who was suspected of actually starting fires, and one of the ways in which the police really came to understand that this was a serious risk was they went to his grocery coupon supplier and looked at the things that this particular person had purchased in the last couple of months, and they found a huge number of fire-starting kits. in another case, someone was suing a supermarket over a slip-and-fall accident, and part of the supermarket's defense was to produce sales records for that person showing that they were buying, in the supermarket's eyes, excessive amounts of alcohol. those are not actually illegal. both of those were covered under
12:33 pm
the terms of service and also the laws of the land regarding law enforcement use of data. that's difficult, and at that point we've already hit something where the general public is going to be very uncomfortable, and it's the thing which means we all feel uneasy when we sign these terms of service. those are difficult. now i'll get to the ninja difficult ones, where engineers are trying to do good but could do bad. this next example is where we're using machine learning to really help people, but inadvertently the machine learning system starts to look like a bigot or make decisions which most of us would think a reasonable human would not make. a good example of this is from a member of jim garrett's
12:34 pm
faculty, the school of engineering at carnegie mellon university, who showed an experiment with google's advertising system where he looked at the ads which were shown in response to a query about job searches, and he used google's personalization system to give exactly the same queries to google when the identity of the user was male and when it was female. horribly, it turned out the ads shown when the person was inferred to be female were for jobs with lower pay. you look at that and anyone would think that if that machine learning algorithm were a person, they would be both a jerk and, in fact, doing something illegal.
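the audit just described can be sketched in a few lines. this is a hedged illustration, not the actual study: fetch_ads is a hypothetical stand-in for a real ad system, stubbed with invented probabilities, and the point is only to show how a demographic gap in served ads can be quantified.

```python
# sketch of an ad-bias audit: identical queries under simulated profiles,
# then compare how often a high-paying job ad is shown. `fetch_ads` and its
# probabilities are invented stand-ins, not any real ad system's behavior.
import random

random.seed(0)

def fetch_ads(profile, n_trials):
    """stub for an ad system: count trials where a high-paying ad appeared."""
    p_high = {"male": 0.18, "female": 0.09}[profile]   # made-up rates
    return sum(random.random() < p_high for _ in range(n_trials))

n = 10_000
male_rate = fetch_ads("male", n) / n
female_rate = fetch_ads("female", n) / n

# a large demographic-parity gap flags the disparate treatment described
# above, even though no engineer wrote "show women worse ads" anywhere.
print(f"male {male_rate:.3f}  female {female_rate:.3f}  "
      f"gap {male_rate - female_rate:+.3f}")
```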
12:35 pm
facebook has been looking to use an ethnic affinity category in its ad algorithms, and that's fallen afoul of similar concerns. why would a machine learning system do this? i would assume none of the engineers had any intent of causing harm. the reason was the machine learning system found in the prior data that, all else being equal -- which is a very dangerous phrase to use -- the women who were clicking on ads tended to click on ads for lower-paying jobs than the men. so this machine learning algorithm which we humans built has got a kind of defense. it can say: i am just showing people what they're most likely to click on; it's not my fault if society is set up in such a way that my data shows women clicking on lower-paid ads.
12:36 pm
now, this is complicated and i don't have the right answer for you. if it helps, i should note that this experiment is particularly unrealistic in the sense that it's very rare a machine learning system only sees an identified gender. usually the machine learning system sees many other things about a person: the past history of the kinds of things that person wants to do, other interests of that person, and so you actually find that there are other features of that person much more important than gender or race for predicting their particular interests. so that is what i would regard as the most difficult part of machine learning and personalization at the moment. it's very hard, and i do not know of a piece of research that i fully trust to prevent these systems from being,
12:37 pm
if you like, bigots. finally, i'm going to mention the ninja hard case, and this is pretty simple. it's the case that if you really want to preserve privacy, you can cost other people their lives. there are examples of this in many law enforcement situations, but another simple one is in medicine. if you're involved in a drug trial, and suppose you had 20 hospitals all trying out some new drug treatment on 20 different patients each, then it's definitely in the interest of those patients for the hospitals to pool their data -- to share data with each other so that one central body can do the machine learning with a large n for statistical significance, to find out if the treatment is working or not.
12:38 pm
now, if you are worried about privacy and you decide you're not going to let the hospitals see details about one another's patients, then you can still get significant results as to whether the medication is effective or not; it's just going to take you considerably longer, many more patients will have to be in the trial, and you'll have to wait longer before you get the answers. matt fredrikson, a computer science faculty member, has shown very clear cases of the analysis of privacy levels versus lives saved or years of life saved, and unfortunately -- and it's what this room doesn't want to hear -- there's a tradeoff curve there. we have to decide where we are on it. so hopefully we're squirming.
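to see why the privacy safeguard costs statistical power, here is a small simulation sketch. the laplace noise stands in for a differential-privacy-style protection of individual records; the effect size, privacy budget and all other numbers are assumptions chosen only to make the tradeoff visible.

```python
# sketch of the privacy vs. statistical-power tradeoff: estimating a drug's
# average effect from exact pooled outcomes vs. outcomes noised locally
# before sharing (a differential-privacy-style safeguard). all numbers are
# invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.5    # assumed average improvement under the drug
EPSILON = 0.5        # privacy budget: smaller = more private = noisier

def std_error(n_patients, private, n_sims=2000):
    """empirical std. error of the estimated effect across simulated trials."""
    estimates = []
    for _ in range(n_sims):
        outcomes = rng.normal(TRUE_EFFECT, 1.0, n_patients)
        if private:
            # each record is perturbed before anyone else sees it
            outcomes = outcomes + rng.laplace(0.0, 1.0 / EPSILON, n_patients)
        estimates.append(outcomes.mean())
    return float(np.std(estimates))

for n in (100, 400, 1600):
    print(f"n={n:5d}  pooled={std_error(n, False):.3f}  "
          f"private={std_error(n, True):.3f}")
# the private column only catches up to the pooled column as n grows:
# the same confidence costs roughly (1 + 2/EPSILON**2)x as many patients.
```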
12:39 pm
i've tried to show you that no extreme position is good -- not "personalization is good, screw privacy," and not "privacy is good, screw personalization." neither of those extreme positions is useful. we have to use our technological smarts and our policy smarts to try to find the right place in the middle, and that's the setup for this panel discussion. [ applause ] thank you. at this point i would like to introduce our panelists. yuet ming tham, from the law firm of sidley austin, is an expert on cross-border compliance and international agreements regarding data use -- if you want to come up to the chair. paul timmers, the director of the sustainable and secure society directorate at the european commission, has been head
12:40 pm
of the ict for inclusion and e-government units. so we have experts here from asia and from europe who are helping us discuss this issue. next i'm pleased to introduce ed felten, a hero to all computer scientists because he's a computer scientist, and the deputy director at the white house office of science and technology policy, who has been leading a bunch of intense strategic thinking about artificial intelligence over the next few years. and then i would like to introduce our moderator, ben scott, senior advisor to the open technology institute at new america and also a non-resident fellow at the center for internet and society at stanford. good to meet you. >> thank you very much, andrew, for the introduction. we're going to jump into a discussion with our expert
12:41 pm
panelists, who as you see strategically represent different regions of the world and can offer perspectives from across the globe. if i may quickly summarize the policy conundrum that sits beneath the cases that andrew laid out, it's this: machine learning and ai benefit from the personalization of data use in learning algorithms; personalization requires large data sets to compare individual cases to lots of other cases; and that requires the collection and processing of data at a large scale. that raises two key questions: one, what are the rules governing the collection and processing of data for commercial uses of ai; and two, what are the rules for the collection and processing of data for government uses of ai? underneath that sits the basic question of algorithmic
12:42 pm
accountability. if you decide it's unacceptable to have an algorithm that reflects gender bias in employment practices, how do you regulate that? and if you decide to regulate that at the national level, how do you coordinate it at the international level, when data markets are global? these are the problems that we are all facing in government, and it's fair to say that the technological reach of machine learning and artificial intelligence has exceeded the grasp of our policy frameworks to contain and shape these new forms of digital power in the public interest. so what i'd like to do is start with setting a baseline of where different parts of the world are coming down on these issues and what the building blocks look like at the regional level.
12:43 pm
there has been less talk in asia, although i would like to hear more from yuet about what's happening in asian markets, but i would like to allow our panelists to speak from their perspectives about what's happening in this field in their region. what is the approach to regulating or establishing a policy framework for these most difficult questions of big data collection and the application of artificial intelligence? maybe i'll begin with you, ed. >> okay. well, first i should start with the disclaimer that i am not a lawyer, so don't treat me as an authority on u.s. law on this issue, but i can talk about the policy approach that has been taken in the united states, and it is rooted in the longer term policy approach that the u.s. has taken with respect to privacy. and that involves generally
12:44 pm
regulation of certain sectors where they're -- where privacy is particularly salient, whether it involves things like health care or practices related to credit and employment and so on and it also involves a broader consumer protection framework around privacy that is rooted in notions of notice and consent. and so we have a framework for privacy which the u.s. has used and is continuing to use and that involves both laws and also enforcement of those laws. when it comes to the particular issues that are raised by ai and machine learning, there are a bunch of things that have been done and i'd point to in particular over the last few years the work that the administration has done on big data and then more recently on
12:45 pm
artificial intelligence. on both of those areas, and i think they're tightly intertwined, the administration has engaged in a series of public outreach activities and then published reports, the idea being to try to drive a public conversation about these policy challenges -- both to move the debate about making rules and making policy in a fundamentally positive way, and also to heighten the attention to and interest in these issues -- because i believe strongly that the institutions, the companies that are collecting data and using it in this way, almost universally want to collect it, use it, and engage in ai activities in a way that is responsible and positive and sustainable, because i think
12:46 pm
people recognize that if you push the envelope too much, the public will not allow that to stand. so we've tried to drive public discussion. we've tried to raise the level of dialogue, and that's been fundamentally one of the areas in which the administration has worked. we also recognize the ways in which these issues operate across borders and the need to work with international partners, to make sure that as data flows across borders and as citizens everywhere encounter the companies and institutions of other nations, we can work together reasonably and we have an international system for dealing with these things. >> thanks. paul, what's the view from brussels? >> the view from brussels -- perhaps i should put in another kind of disclaimer in a certain
12:47 pm
sense, which is that if you look at what is happening in policy development, whether that is engagement with stakeholders and public debates like here, or whether you go in the direction of official public policy or law and regulation, you have to put it against the reality of what is happening around technology and the use of technology. the examples that he gave were interesting and challenging: machine learning that doesn't have access to your personal data even though access would be good for other people. it's a very interesting case, because you have to look at how you could apply today's frameworks, including law, to that. to a degree, law has been strong in the european union, and we would look at fundamental rights, but fundamental rights are not absolute, so public health is one of those reasons that you can actually start using someone's personal data, also individual data, but
12:48 pm
with appropriate safeguards, and this may mean you put a challenge to technology: can you encrypt sufficiently? can you use new technologies? so it is, i think, that dialogue that we are also very much looking for on the european scene. it must be said, fundamental rights are important in the european setting, so if we say privacy, privacy is a fundamental right; as a matter of fact, we even split it into privacy from the perspective of the protection of your private life and the protection of your communication, versus personal data. so there are differences; there's more than one fundamental right at play there. based upon that we have law, but we have also policy development, and it's a very actively moving field. for example, at the moment we are working on a policy initiative around the free flow of data and around platforms, and precisely those are being put to the test
12:49 pm
by machine learning, by ai, precisely by the questions that we have here on the table. >> yuet, how does it look in asia? >> so i'm not a computer scientist, i'm a lawyer, so i'm going to approach this from a different perspective. one of the challenges about asia is that it's not even bifurcated just in terms of the laws and regulations that are coming out of the region. in fact, when we talk about asia, what do we mean? different people have different views as well. when we talk about privacy laws in asia-pacific, the countries that come to mind as being at the forefront of regulation would be japan and korea, and to some extent australia and new zealand. and then following that would be countries such as singapore and hong kong, taiwan and the philippines, where they've got fairly new laws, some of them put into place in 2012.
12:50 pm
singapore is a country where -- i used to be from the attorney general's chambers there -- they are progressive. but the fact is that they implemented privacy laws for the first time in 2012. that, again, gives you some idea as to the importance placed on privacy. and then in the last category you've got countries such as indonesia, vietnam, and china. these are countries where we call them privacy laws, but they're not really based on privacy -- not individual privacy, anyway. and i heard today a lot about human rights, how privacy is a human right. for a lot of these countries, these laws emanate not just from a motivation to protect human rights; a lot of it would be consumer rights, and i think some people would argue that
12:51 pm
consumer rights are to some extent human rights as well. a lot of these laws come into play in the last category of countries, and the challenge about them is that they don't have a single data privacy regulation. and i tell my clients, a little facetiously, but it's true to some extent: sometimes the more laws a country has -- i do fcpa corruption investigations, for example, and as a result of that, we review a lot of e-mails throughout the region; that's why we have the familiarity with the privacy rules and regulations -- i joke with some of my clients, don't look at the transparency index to see how risky a country is when it comes to corruption. look at how many laws they have. the more anti-corruption laws they have, the more problematic corruption tends to be in that country. it is the same for countries such as china, vietnam, and indonesia.
12:52 pm
you find little bits and pieces of information. they refer to how privacy is, you know, a right of all citizens, but they don't really tell you how that's going to be enforced. that is the kind of regulation you see in china. i think one of the challenges in asia is just trying to harmonize the regulations for a lot of companies, a lot of our clients, who are trying to operate and transfer data across borders. so japan, for example, has got a new law that will come into force in about two years, and that's probably the first time we actually talk about personalization. in terms of all the other countries, i think the idea of artificial intelligence is not even something that the countries have seriously considered. you might see guidelines, you know, introduced by some of the regulators,
12:53 pm
but these are just guidelines, and there are no teeth to any of them. >> let me pick out a point which i think is implicit in what you said. you've all described the approach of the united states, europe, and a variety of asian countries to these questions from a commercial data privacy perspective: regulating the market, commercial actors gathering data, applying artificial intelligence algorithms to produce particular outcomes. but i think at the core of this question, from a regulatory and especially from a political perspective, when you collect a lot of data and begin to produce some of these outcomes, is government access to data, and that is inextricably combined with the data protection regulations. the recent tensions between the united states and europe have been about commercial data practices. but ultimately they are rooted in u.s. government access to the
12:54 pm
commercial data that is collected by american companies. so my question is, do you believe that even if we were able to find a harmonization, a standard for commercial data regulations that applies to big data collection and the application of artificial intelligence algorithms and machine learning, is it all undermined at the end of the day by individual states' interests and their unwillingness to give up access to that data for national security or law enforcement purposes? >> i can actually give a quick example before we go to europe and the u.s. china has got a provision -- there are a few examples of data localization -- where any data related to health, medical information or the health of the citizens has to be stored on servers in china. another example is singapore's data privacy provisions.
12:55 pm
the singapore government and all state entities are excluded from its provisions. that's a very good example of where state interests and privacy conflict. >> perhaps building on that, i think with this whole question about national security and sovereignty, perhaps you also have to generalize a little bit to all the interests that are genuinely governmental interests, or that should be addressed for society at scale. one of those is safeguarding democracy. so i think one of the concerns -- if you look at merkel's speech last week, she talked about transparency of algorithms and platforms. this is in order to keep consumers properly informed, but it's also about what kind of bias may creep in through the algorithms in terms of the provision of news. that's got everything to do with the way you execute democracy. so there is a debate about
12:56 pm
avoiding a situation where democracy gets polarized into echo chambers and we don't have a real debate anymore. that's also a serious interest. when you're talking about these values, to what extent are they shared internationally? i think you can be optimistic or pessimistic about that. if we talk about data protection, we have been able to make an agreement between europe and the united states, even if we do not have exactly the same starting point as regards data protection, let alone as regards national security: the privacy shield. i know it's going to be put to the test, and that's how it should be. nonetheless, we go a lot further than we did at the time of safe harbor. we made a start in that area of access by government, for national security purposes, to the data being transferred in the transatlantic context, and the safeguards for that. so it is possible, if you negotiate, to make an agreement on certain types of issues. whether you can do that for everything, and across the world, i think is
12:57 pm
very doubtful. there are many places where norms and values don't match. so if we bring it to the field of cybersecurity, as we clearly see it, we do negotiate internationally about norms and values in relationship with cyber security, which has everything to do with ai also. are we getting very far? it's little steps. i think there's not a single type of answer to this question. there is a degree of progress between, let's say, those that have a degree of like-mindedness. but there are also many, many areas where we should be -- pessimistic. >> there are plenty of areas in which government access to data for purposes of national security or law enforcement is relatively uncontroversial. i think we don't want to forget those. and of course, the international discussions around this issue have been going on longer than
12:58 pm
the -- than the conversation about a.i. these are not -- these issues are not simple. but i think if you look at privacy shield, for example, it is an example of the way in which it is possible for us to engage internationally, and to get to a point where we can work together. as to these issues about fairness and nondiscrimination, i think this is another area in which there is a broad alignment of interests internationally, and in which i think there's a lot of progress we can make by working together. >> let me present a more pessimistic vision and ask your responses to this. to me it stands to reason that as the private sector grows more sophisticated with machine learning technologies, collects more data, applies more powerful a.i. algorithms to that data, it will be irresistible for
12:59 pm
government to reach into those companies, for legitimate reasons in many cases, but also perhaps for illegitimate ones, to gain access to that power. the example that you raised of the firefighter buying arson kits -- i don't know where you buy those or where you get the coupons for them -- but the idea that law enforcement may not only tap your phone calls or your e-mails but also look at your purchasing records or your health data, put together a portrait of you, compare you against others, and determine that you may have committed a crime, is an extraordinary development, one which government in legitimate cases would want to use. but what that says to me is that ultimately every country is going to want to control that data for itself, in its own sovereign territory. so my question is, number one, are we headed for a global data sovereignty movement where
1:00 pm
everyone tries to have data rules where a.i. operated by domestic companies is used as a geopolitical asset? and second, on algorithmic transparency: if facebook said, we'll show you how our algorithm works for news feeds, does that solve the problem? yes, it reflects the behavior of users and the things they will most likely click on. but do you then regulate that outcome and tell facebook it has to change the algorithm? how do you hold them accountable? how do you determine they have done so in a way that measures up to a particular standard? so the two questions are: one, are we headed to a hard-power
1:01 pm
regime of localization, in your view, at the global level; and two, even if we're able to use transparency as a tool to push back against excesses of a.i., does it even work? >> let me start by taking the second part of that, about the value of transparency, which i think really goes to a desire for governance and accountability. one way to try to get there, to increase accountability, would be to say: well, open up everything, tell us everything about what your algorithm is, tell us everything about what your data is. but here i think is a place where we can apply technical approaches to try to provide accountability -- to provide evidence of fairness or nondiscrimination, or accountability along certain dimensions -- without necessarily needing to reveal everything.
1:02 pm
i think one of the traps we can fall into in thinking about this issue is to think that this is a problem caused by technology, which can be addressed only by laws and regulations. i think it's important to recognize as, i think, the discussion today has, that technology can be an important part of addressing these problems. that there are technologies of accountability, and that we need to think in a creative way about how to put those things together. we also need to think, i think, about the ways in which forces short of legal prohibition can constrain the behavior of companies and authorities when it comes to the use of data. to the extent that what is happening is known to the public, to the extent that there is an opportunity to provide evidence of fairness, evidence of accountability. that in itself creates a dynamic
1:03 pm
in which companies and authorities will often voluntarily provide that kind of accountability. we've seen that to some extent in privacy, where companies would like to be able to make strong promises to consumers, for consumer comfort, knowing that they will be held to those promises. you get a dynamic in which companies can compete based on privacy. to the same extent, if we have technologies and mechanisms of soft accountability, that can lead, number one, to a competition to provide a service in a way that's more friendly in order to bring people in, and it can also lead to the kind of accountability that occurs when bad behavior is revealed. so i think there are a lot more opportunities there to do softer forms of governance and to try to use technology to get at that issue around fairness in governance. >> do you think the regulation
1:04 pm
is sufficiently flexible for softer forms of -- >> absolutely. well, i find that really challenging, and i think indeed technology needs to be invited to make things work really well, to deliver the underlying intentions of, for example, data protection law. if you talk about informed consent, even informed consent about automated processing, that is a real challenge for technology. you can bounce back and say it's impossible, because with these algorithms we don't even know what's happening inside, but that's not an adequate answer. there are other approaches, and i think you were referring to them: approaches where you can measure things like fairness, things like whether you actually understood what is happening in the decision-making. also, i must say, we're getting a little bit away from the monolithic notion of consent. there's an interaction you can continue to have, and that's where the technology can mediate,
1:05 pm
when you talk about consent, as the use of the data evolves. so i'm kind of optimistic about the opportunities that are there in technology. when you talk about localization, again, probably a nuanced approach is necessary, because there is a real risk, as i think you pointed to, that data localization happens. it's happening already today, and actually you do not necessarily get an internet by country, but perhaps by region. at the same time we have initiatives -- you heard earlier about the privacy shield. that's a way to avoid localization where we are talking about personal data. we are also working on the free flow of data, to actually remove any undue restriction on the localization of data.
1:06 pm
and i think we probably want to differentiate which domains we are talking about. when we talk about a public health problem, like the rise of the zika virus, i think we have a more globalized approach to that. we have governance systems like the w.h.o. and professional collaboration in the field of health that allow us to do big data, a.i. types of analysis on the data we're getting on zika from all over the world, as a matter of fact. for me, in this debate we need to involve the governance that already exists. almost any kind of governance institution we have in the world that works will be exposed to the question: what are you doing with the data, with a.i.? make use of those institutions, too. that may be in a more differentiated way; it may not work in every domain, but there are certainly domains where it will work. what about the data we have from
1:07 pm
the self-driving cars? i'm not sure. >> perhaps a necessarily complex, but therefore differentiated, sector-by-sector approach -- >> and i think you take what you learn in one sector to the others. it's not necessarily impossible to come up with governance. it must be said there is a strong plea in europe also to come up with new governance approaches, and also to recognize that not all governance approaches will work. the realtime threat of cyber incidents may not be quite compatible with the type of governance that we have set up between people and organizations, which is relatively slow. so we will also have to review the type of governance that we already have. >> and i think for asia -- this is a little self-serving, but i still think we need regulations, because so many of the countries don't have things that would be taken for granted in the rest of the world. for those jurisdictions that have the laws in place, i think
1:08 pm
the question is how enforcement and policy positions develop, in terms of the guidelines issued by the regulators. but there are so many other countries in asia that still don't even have basic privacy laws. i think at the end of the day you still need those to be in place, for the framework at the very least. and a lot of asia follows the consent principle that is adopted in the rest of the world. in terms of data localization, a lot of it is done for various reasons, and usually not because of privacy. for example, indonesia talked about localizing data, and the reason for that was a misguided belief that localizing data was going to help improve their economy. what the government didn't realize was that that was going to
1:09 pm
put off a lot of the multinational corporations from investing in the country, so they held back from that. which brings me to my last point: i think what we have seen, with all the international, multi-national companies that have set up operations in asia, is that they bring along with them the regulations that they have to follow -- because, for example, they are dealing with data from the european union or the u.s. -- and because of that, they tend to follow the highest standard that's set. so when you have consumers in asia who see, hey, this is the way my data should be treated, this is the way an international corporation would deal with my data, my privacy, then you start expecting that from the institutions within the country. so i think there has been a lot of that cascading of privacy standards, even though the regulations aren't in
1:10 pm
place. but you've got the economic pressure, to a large extent. >> i want to put one more provocation to the panel. but before i do that, i want to invite you all to start thinking about questions you may have for the panelists. we're going to reserve the last section of this panel for audience questions. so start thinking about that while i put this question to the panel, which is about notice and consent -- all three of you have raised it. it is the basis of privacy law across the world at the moment. and yet, even before ai, it was already under fire, already under attack, about whether it would ever be sufficient, for various reasons. there's an argument that notice and consent is a sham, because you're presenting a consumer with a 15-page document for a service they want to buy, and no one ever reads it. they have no idea what they consented to even though they've
1:11 pm
been given notice. once you click that box and say, i agree, all the rights you had up to that point are gone. not all, but many. second, as we collect more and more data, and companies become diversified horizontally across multiple product platforms, a company may not know exactly what it is going to do with your data. and it may not know to give you notice at that point. and at what point do you build in multiple notification points? i've recently had occasion to talk to a number of founders of new startups, not just in silicon valley but in europe, where i spent time in berlin. today, data is the new valuable property. people are building companies based purely on the idea that they're collecting lots and lots of data. what they will do with that data, how they will monetize it, how they will pool that resource with other resources, how they will be acquired, integrated
1:12 pm
into a larger enterprise -- big question mark, but undeniably not a deterrent for venture capital flowing into those companies. once again, this draws into question the basic notion of notice and consent. if we come into a world where data is pooled intentionally in a fashion that maximizes the utility of personalization, it might not even be reasonable to ask a company to predict in advance all the ways in which that data may be used. and they may not be the only ones who gain access to that data and use it in ways that might harm or benefit the user. so my question to the panel is, if we root the idea of an international standard on privacy policy, as it applies to big data and algorithmic accountability, in an old framework of notice and consent, are we setting ourselves up for failure from the beginning?
1:13 pm
>> i think notice and consent is not bad at all. what we are challenged to do is to make sure that consent is informed, meaningful, freely given, and that there is a choice -- and that's simply not often implemented. the 40-page contract is not meaningful. how do you translate it into something which is meaningful? it must be said that in order to process personal data, consent is not the only lawful ground. as the gentleman from the data protection agency said, at least in europe, there are also other legitimate grounds -- public health is one of those. certainly there are grounds around public safety, security, et cetera, that may justify processing personal data without consent. there is actually even a legitimate interest for direct marketing purposes -- it may constitute, as the law says, a legitimate ground to process personal data.
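to make the shape of that point concrete, here is a minimal sketch in python -- the names are hypothetical, loosely modeled on the european-style lawful grounds just described, not on any real compliance library -- of how a system might record a lawful basis other than consent for each processing purpose:

    from enum import Enum

    # lawful grounds loosely following the european model described above;
    # illustrative only, not legal advice
    class LawfulGround(Enum):
        CONSENT = "consent"
        PUBLIC_HEALTH = "public_health"
        PUBLIC_SAFETY = "public_safety"
        LEGITIMATE_INTEREST = "legitimate_interest"  # e.g., direct marketing

    def may_process(ground: LawfulGround, consent_given: bool = False) -> bool:
        # consent is only one possible basis; the others stand on their own,
        # though each should be documented and justified
        if ground is LawfulGround.CONSENT:
            return consent_given  # must be informed, meaningful, freely given
        return True

    # direct marketing may proceed on legitimate interest even without consent
    print(may_process(LawfulGround.LEGITIMATE_INTEREST))  # True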
1:14 pm
now, the question is, how do you interact? because in all those cases you still will have to interact with the data subject, the one that is providing you data. how do you do that in a meaningful way? i'm still a little bit puzzled why interaction with the user is seen as a problem. from the company point of view, you would probably say i want to interact more with the user, not less, because each point where i interact is another opportunity to engage in the discovery of value, the delivery of value, to differentiate. >> let me ask a followup. what if the user is dead? we're coming soon to a moment in our history where there are terabytes of data out there about people who are no longer living. yet that data will undoubtedly have value to the company that owns it and the governments that may gain access to that data. how do you deal with that? >> so you will probably be thinking of a case where you
1:15 pm
would like to invoke the public interest, for example, public health. and again, certainly european law says that if it is in the public interest, you can use that as a legitimate ground to start processing data. so there are possibilities. really the big technical difficulties, i think -- or underlying difficulties -- have to do with algorithmic accountability, which is not resolved. actually, we have a broad debate with the health community in europe. radiologists are saying, what do i do with all those data that i have, that i now have to put again under data protection, and how do i make sure that folks who want to be forgotten can exercise that right? there are serious implementation challenges, and they will not always have the most ideal answer. but in a certain sense, we are looking back at a legacy -- a legacy that we can improve as the implementation of the law evolves. all of the communities that are
1:16 pm
involved in personal data, in europe for sure, are also called upon to look at what the technology and the law make possible, and provide their interpretation of that -- a common interpretation rather than a fragmented one. that is clearly a challenge, and it needs to happen everywhere from public administration to radiologists. >> in answer to your second question, actually -- there are a number of laws in asia where, if the subject is dead, the concept of privacy doesn't apply anymore. the law doesn't protect that data. >> open season on that data. >> yes, unfortunately. and just in terms of the other question that you raised about notice and consent -- i say this a little facetiously -- we used to joke that we can draft these consent agreements and put in as much as we like and nobody is going to disagree. everybody will just click "agree."
1:17 pm
i read this book called "future crimes." i have to say, after reading that book, i refuse to load apps on my iphone to the extent that i can. it is very difficult to live without apps, but i probably have one of the fewest numbers of apps in the whole of asia on my phone after reading that book. i remember some of the statistics that i saw in that book -- i think the privacy policy for facebook is double the length of the u.s. constitution. and i think it was either paypal or ebay, i don't remember which company, where the privacy policy is longer than "hamlet." i've given a lot of presentations in asia about privacy and data security, and i've always asked this question: how many times have you actually said "i don't agree" when that privacy policy pops up? and in all the presentations that i've given, only one person put
1:18 pm
up their hand, and that was a lecturer from one of the universities -- an academic, to some extent. but most people will just click yes, because they don't have much of a choice, or because they don't think it's important. it's not because people don't value privacy; the difficulty is that there aren't many avenues for them to seek redress. and because we don't have the concept of class-action litigation, and it's not a litigious society in general in asia, it's very difficult for individuals or consumers to get together and change the laws and the policies. >> so this is a fundamentally difficult issue, right? the uses to which some data may be put may be extremely complex. and the implications of those uses for a particular user may be even more complex. so if we were to start with the
1:19 pm
principle that something should not be collected if its use would not have been acceptable to the user, it's not at all clear how you could put that into effect in practice, right? we know that telling users every last detail of what will happen and every last detail of the implications, and asking them to read that before they disclose anything, ever, is not practical, and is not the way that people behave. that said, there are a few strange people, like academics and privacy lawyers, who do read these things, and there are people who have built tools that look for changes in these policies and analyze them. so if a company does change its very long, longer-than-hamlet privacy policy, there is some reasonable chance it will be noticed and trigger some public debate. so there are methods of accountability other than all the users reading all the things, which we know doesn't happen.
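the change-watching tools mentioned here can be quite simple. a minimal sketch, assuming a hypothetical policy url and cache file -- a real monitor would diff and archive the text rather than just hash it:

    from hashlib import sha256
    from pathlib import Path
    from urllib.request import urlopen

    POLICY_URL = "https://example.com/privacy"  # hypothetical policy location
    CACHE = Path("policy.sha256")               # last-seen fingerprint

    def policy_changed() -> bool:
        # fingerprint the current policy text and compare with the stored one
        current = sha256(urlopen(POLICY_URL).read()).hexdigest()
        previous = CACHE.read_text() if CACHE.exists() else None
        CACHE.write_text(current)
        return previous is not None and previous != current

    if policy_changed():
        print("privacy policy changed -- worth a closer look")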
1:20 pm
still, it's a fundamentally difficult question. if we were to offload that decision to someone else, that doesn't make it terribly much easier to figure out what the right answer is as to which uses would be acceptable to the user or which uses are socially beneficial. >> maybe the issue is that it's got to be meaningful notice and meaningful consent. and one of the things, just from a policy perspective: the problem with these consent agreements is that they shift everything onto the individual consumers, who don't really have the ability to reject the terms. and so, just in terms of policy, when it comes to ai and all the other provisions, i think it's important for governments to actually think about shifting a lot of the responsibility back to the corporations for
1:21 pm
self-assessment and things like that. >> i'm wondering if you also cannot start splitting it up, in the sense that, especially with automated processing, you have to explain the significance and the consequences for the user. and the point i think we're making is, first of all, it's very difficult for a user to understand and read all about that, and fundamentally it may be very difficult to say it all right at the beginning. still, that raises the question, why would you assume that it is only at the beginning that you ask for consent? why don't you have a repeated approach to interaction with the user as the system actually develops and learns, and draw the consequences? at the moment in time that a consequence becomes relevant, you could ask again -- in a number of situations, i'm not saying always. it may be simpler to understand than a whole long text about what potentially will happen. >> there's a simple answer to
1:22 pm
that. you talk to any of my clients and they want all the consent up front. they don't want the obligation to go back to the consumer or the customers. usually when we draft these policy provisions or these agreements for them, they tell us right up front: can we make it as inclusive as possible? so that is what they do, because there is nothing to prevent them from doing that. i think that's the difficulty. >> i mean, i think what you're suggesting is thinking of it more as a matter of user interaction design or user experience design. rather than asking for everything up front, or trying to get an extremely broad consent up front, you might ask for some consent initially, and more later. how and when you do that may be difficult depending on the nature of the product and whether there even is a touch point with the user that comes later.
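a minimal sketch of the repeated, just-in-time consent flow being discussed -- the class and the purpose string are hypothetical, and a real implementation would persist the record and handle refusals gracefully:

    # consent is recorded per purpose; a purpose that becomes relevant later
    # triggers a new prompt instead of being buried in one up-front agreement
    class ConsentLedger:
        def __init__(self) -> None:
            self.granted: dict[str, bool] = {}

        def allowed(self, purpose: str) -> bool:
            if purpose not in self.granted:
                # just-in-time prompt at the moment the purpose matters
                answer = input(f"may we use your data for {purpose}? [y/n] ")
                self.granted[purpose] = answer.strip().lower() == "y"
            return self.granted[purpose]

    ledger = ConsentLedger()
    if ledger.allowed("personalized recommendations"):
        pass  # proceed with that use only; other purposes prompt separately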
1:23 pm
certainly i think thinking of these negotiations of consent in terms of user experience design, user interaction design, can be a fruitful way to get closer to a strong notion of consent in a way that is less burdensome on the user. >> to go back to that point as well, i think a lot of consumers don't really need to know the algorithm or to understand it. what they want to know is the purpose the data is going to be used for, not so much the algorithm. i've heard that excuse before -- some companies say, well, there's no point in us explaining the algorithms to the consumers or the customers because they're not going to understand them. but that's not the concern consumers have. it's the change in use. >> a quick followup for you. is it possible to have a discussion in the abstract about the notice and consent regime without looking at the market concentration in many markets for digital products and services? because if you're choosing
1:24 pm
between two or three mobile phone companies, or two or three search engines, or two or three social media platforms, or mortgage lenders, or hospitals, asking someone to opt out because they disagree with the consent provisions is inviting them not to participate in modern society. so there, i think, is a relationship between market structure and privacy policy which in many cases is definitive -- you don't really have a realistic alternative other than to consent. >> it must be said, i thought it was interesting what was going around on twitter -- i don't know who attached a statement there from the fcc, which had just issued some privacy guidelines for internet access providers, describing the situation as one of no choice. they say you have to be additionally careful when there actually is no choice, which may
1:25 pm
be the case there. i think there is a certain sensitivity around the notion of fairness, which includes the notion of choice. >> let me at this point invite all of you to raise your hands. tim, are we passing a microphone around, in order to get everyone on the recording? i'll start over here and work my way across the room. give your name and affiliation before you ask your question, so our panelists will know who they're talking to. >> thank you, i'm the dean of the college of engineering at carnegie mellon university. i want to come back to this idea of privacy versus personalization. it would seem the last part of this discussion raises the question, why don't we apply personalization to privacy? so, you know, rather than the only choice being that i have to take one blanket consent form -- it's either that
1:26 pm
or nothing -- what if there were some way for me to fill out a privacy profile, so that it described what i wanted to share, what i didn't want to share, how i wanted to share my data? could that not then be applied against whatever the company says is their privacy policy, so i don't have to read every one of them? i simply spend the time saying what i'm about, and let the interaction happen -- more like personalization applied to privacy. >> that reminds me of the point you brought up about how competition in the private sector can potentially mitigate against abuses of privacy policies. maybe this is a question you can respond to. >> sure. and there are a couple of avenues that come to mind here. one is this idea that a user might check some boxes or slide some sliders in a user interface
1:27 pm
and give some idea of their preferences with respect to privacy, and there would be enforcement of that on the user's behalf, or some kind of automated negotiation between the user's technology -- their browser or app -- and a company's technology, so things would only happen within the bounds that the user had said were acceptable. and there have been various attempts to build those sorts of technologies. none of them have taken hold, for reasons that i think are largely contingent. it could easily have turned out that such a thing became popular, but for reasons too complicated to go into here, that has mostly not happened. the other approach takes more of a machine learning kind of approach, where you're trying to ask the user a relatively limited number of
1:28 pm
specific questions about what they want, and then you have a technology that, on the user's behalf, tries to infer what decisions they would make in other cases. the idea of a system that operates on the user's behalf is one of the technological vehicles that could develop. and again, you have contingent questions of technological development that may make that more likely, or may make it easier or more likely to be deployable. but certainly that, i think, is one direction in which users may be able to put technology to work on their behalf to manage this stuff, because the complexity of these choices, if the user has to make every single detailed choice, is too much.
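a minimal sketch of that second, inference-based approach -- the features and answers are hypothetical, and scikit-learn's nearest-neighbor classifier stands in for whatever learning method such an agent might actually use:

    from sklearn.neighbors import KNeighborsClassifier

    # each proposed data use is described by a few hypothetical features:
    # [health_related, shared_with_third_party, used_for_ads]
    asked = [[1, 0, 0], [0, 1, 1], [0, 0, 1], [1, 1, 0]]
    answers = ["allow", "deny", "deny", "deny"]  # the user's explicit choices

    model = KNeighborsClassifier(n_neighbors=3).fit(asked, answers)

    # a new use the user was never asked about: third-party sharing, no ads
    print(model.predict([[0, 1, 0]]))  # decision inferred on the user's behalf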
1:29 pm
>> i think it's a very interesting idea. the question is, will it hold against all four of the cases? it may not hold against the third case -- you can use the personal data in ways that are bad for society. personalization of privacy merits being discussed, but would it actually eliminate the risk of things being bad for society? >> we'll take another question. right in the back. the microphone is coming. >> yes, it's jose colon from the state department. i believe you are focusing on only part of the issue of privacy, because there are other means of data collection that don't involve people clicking on the internet. when you go to a store, for example, target, there are many cameras following you, you pay with a credit card. they sell information that they are getting, using machine learning, for many purposes. we had the case of samsung with the smart tvs recording conversations with people.
1:30 pm
how would you address those? that's as big an issue as you clicking on something on the internet. >> your responses? >> i think this gets to the issue of, if you have a model based on notice and consent, how can you talk about consent in a case where collection of data happens in the environment, such as with cameras or with microphones that are out in the world? the cases that occur in a public place are, i think, some of the most difficult here. if there's a product in your home which has a microphone or camera and that's turned on without your consent, that seems to me not a difficult case from a policy standpoint. but in a public place, where there is not an interaction with the user in which consent could naturally be sought, i think this becomes a pretty difficult issue. i don't think we have all the answers to those by any means. >> for us there's also a real
1:31 pm
part of the debate, because the two parts are fundamental rights: the confidentiality of communications in your private life, and the data protection part. what you mentioned touches upon both aspects. your confidentiality -- when you go somewhere, you are not to be tracked. even if that doesn't necessarily and immediately involve personal data, it is still a right to be protected. so it's really part of the debate in europe. >> yes, right in the back. >> hi, my name is andrew hannah, i'm with "politico." you've talked about shifting responsibility back to corporations in terms of privacy agreements, and others have talked about softer forms of governance, in terms of shaping what data can be used. i was wondering if you could be a little more concrete and talk
1:32 pm
about tangible initiatives that could be taken on a policy level to allow for this to happen. >> let me start. i think this already is happening. if you look at the dynamics that drive the privacy policies of some of the large companies and the ways in which companies use data, there is a competitive dynamic that operates, in which companies on the one hand would like to be able to use data to optimize their business goals, but on the other hand would like to be able to promise consumers that the use of data is limited to things that consumers would find acceptable. and of course those promises, once made, have legal force. so i think you see this operating already. it's inherent in a model of notice and consent that consumers may either withhold consent or take their business somewhere else if they
1:33 pm
don't like what's being done in a particular setting. this is a dynamic that operates already, and it is driven both by the enforcement of law -- for example, by the ftc with respect to companies keeping their privacy promises to consumers -- and by some of the public dialogue, the public debate, and the press coverage about privacy practices. all those things, i think, push companies to try to make stronger promises to consumers, which they then have to keep. >> i think there was one in the middle. yes, ma'am. >> hi, my name is kerry ann, from the organization of american states. the question is tied to the gentleman in the back who asked about other forms of data collection. most of you would recall when
1:34 pm
she came online, what happened to her, how she collected data, and the result. in terms of privacy, there is so much data available -- in blogs that are private, in personalized facebook pages. you can build algorithms from all those sources that are open. how is that tied back to consumer protection, if there's actually no obligation on the person who may be developing this new ai -- that we don't know about -- that's actually collecting it? how does privacy come in when we're pushing our data out there for anyone to use? i wondered about your thoughts on that. >> great question. >> strictly speaking, if you are able to start reidentifying people, and it becomes personal data, you still fall under the data protection law. you have to look at how far you push the boundary, using also open data to reidentify -- and the case that you mentioned was real. that's where people have to take responsibility, or at least in the european interpretation, they would be liable under the law.
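a minimal sketch of the kind of reidentification being described -- toy records, joining an "anonymized" data set to an open one on shared quasi-identifiers:

    # an "anonymized" record becomes personal data again when its
    # quasi-identifiers (zip code, birth date, sex) match an open source
    anonymized = [
        {"zip": "20052", "dob": "1980-03-14", "sex": "f", "diagnosis": "asthma"},
    ]
    public = [
        {"zip": "20052", "dob": "1980-03-14", "sex": "f", "name": "jane doe"},
    ]

    keys = ("zip", "dob", "sex")
    for a in anonymized:
        for p in public:
            if all(a[k] == p[k] for k in keys):
                # the join reidentifies the record; data protection applies again
                print(p["name"], "->", a["diagnosis"])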
1:35 pm
>> hi, my name is al gombas, i work for the state department. i'm curious -- if we were to create a scenario where we can negotiate the privacy restrictions, what might happen then, i think, is that companies will incentivize consumers to give more data -- give discounts or something if they want to get more data or more consent from the individual. and i'm wondering how that might play out: whether you think that's a good idea or a bad idea, whether we should have a blanket law saying you can't do that, that you have to offer the same discounts to everybody regardless of the amount of privacy they require of the company. and how might a consumer be taken
1:36 pm
advantage of -- for example, consumers may be in a position where they feel like they have to give up data because they can't afford the service without it. >> i think there was a study that showed that consumers prefer giving some information, and then having the ability to consent if additional information or different uses are going to be made of the data. and i think the other point this study showed -- i can't remember the name, actually -- was that consumers generally are willing to give more information if they get something in return. again, we go back to the notion of fairness. one of the problematic areas that we have is that the consumer or the customer doesn't know how the data has been used, or it has been used in a different way and no notification has been given. and i think the third thing is that the companies have been able to
1:37 pm
benefit -- they've been able to monetize the data, use it for marketing reasons -- but the consumer hasn't actually benefitted additionally from that different use or that additional information. so i think that at the end of the day, you go back to notice and consent, not just at the start of the relationship but as that relationship progresses. >> perhaps i can add something to that. for me there are two dimensions in it. one is, indeed, do you provide fairness in the perception of the user while the data is being used -- and there people are saying that's not the case, because you get this disproportional value out of it and you don't give part of that value back to me. that's one part of the debate. the other part of the debate is, does the consumer have a fair choice in the beginning, in an actually de facto monopolistic situation? if you look back at the
1:38 pm
statement the fcc made last week about access to the internet -- you cannot be forced, is essentially what they are saying, you cannot be forced to give up your browser data, your browser preference data, et cetera, otherwise you don't have access to my service. there's not so much choice in that service. so there's also that aspect: is there a reasonable balance, at the moment that an essential service is being provided, versus the use of personal data? you cannot start excluding people from having access to a service. >> it's not different from other regulations, where you need the government to step in to start the ball rolling. if none of the internet providers are actually -- i mean, i think it's going to be quite difficult in some sectors to wait for the companies to take the initiative to regulate themselves. i think this is one of those issues where you have to have the government step in and just start the ball rolling. >> yes, ma'am, right here in the
1:39 pm
front. >> my name is erica bassou. i'm a ph.d. student at american university. my question is about the notion of democracy in all of this. while we are speaking to a room full of people who have a fairly good idea of some of the terms that we're using -- notice and consent, terms of service, data privacy, ai -- i'm just wondering what this all means in terms of access to even this information about what these terms are. is it just a conversation between policymakers and corporations who have access to these definitions? or is it really a conversation that you're having with the users who get affected?
1:40 pm
>> a great question about literacy. i mean, in practice you see a lot of discussion, a lot of chatter among policy experts, and you see more occasional flare-ups of direct public interest in some of these issues and some of the practices. and as is often the case in governance, the elites are sweating the details every day, and there is a corrective of the public noticing something that seems quite wrong to them and speaking up loudly. i think that is how these things often do operate. and we do certainly see those flare-ups of direct public interest from time to time. >> one of the points in the debate in europe is also whether machine learning, ai, should also be made more widely
1:41 pm
available -- kind of an open ai type of environment -- which actually could be quite an interesting point for international cooperation. that's kind of democratizing the tools themselves. >> yes, sir, in the front. >> thank you. daniel reisner from israel. my question is about a phrase you mentioned -- old frameworks -- when discussing this issue. one of my questions relates to one of the oldest frameworks we're using, which is the concept of the state in the framework of the discussion. because we all realize that we've globalized every element of the discussion. the data, in spite of localization efforts, is global. companies hold data. the same piece of information is split between two or three
1:42 pm
different locations on the same server, and some of my clients are split up over different continents. you don't actually get the same piece of data in any one location anywhere. the company holding the data is actually a multinational structure and sits in 25 different locations as well. so on the one hand, the data is globalized, the players are globalized. that raises the question, what is the role of the state? i'll give you an example which i faced relatively recently in israel. the israeli government -- not the whole government, but parts of it -- called me up one day and said that they had decided to regulate an international cloud services provider. i asked, why do you think you should regulate them? they're not an israeli company, they're not active in israel per se, customers buy the products online. they said, it's very simple: they offered the services to an
1:43 pm
israeli government entity. but i said, the cloud sits somewhere in europe, i think; the company is an american company, et cetera, et cetera. they said, yes, but the service is being offered in israel, so it's our job to regulate. and i pushed back and i said, well, if you want to regulate it for that purpose, then 211 other countries in the world could legitimately make the same argument, because it's a global service. i said, do you really think it makes sense? they said, we never thought of that, we'll take it under advisement, and i haven't heard from them since. what do you think we should be doing? governments are still our main tools of policy. but when we all recognize that facebook has more to say about the privacy of its constituent elements than any government in the world -- with apologies to all governments represented -- are we still having the discussion in the right forum? should we be thinking of a different mechanism in which we have a discussion and engagement with the right players in the right forum? >> a very simple question. [ laughter ]
1:44 pm
>> you see something similar happening in the debate around cyber security, which is considered by some very much a national issue, but global companies are saying, i want to buy the best cyber security in the world, i don't care really where it comes from, but i need to have the best because i'm a global company. is that necessarily contradictory? i don't think so in all cases. does it mean that you need to go for some form of global governance? at least a form of international governance, yes, because you need to have an idea of what is the quality of cyber security. i think supply-side cooperation, if i can simplify that concept, could be quite fruitful in a case like this. as for what companies are actually asking when they talk about data protection and privacy and machine learning, and how that is different from the more nationally determined cultural
1:45 pm
values around that -- there's a plea in the community to make sure the ethics and cultural values discussion is really part of the debate around ai and machine learning, so not only for academics but also for the institutions involved. i don't think you can get very far if you do this only nationally. >> a quick followup question -- i think his question is really an important one. do you think there are any global institutions that could channel national interests effectively, at least at a mini-lateral level? meaning the largest number of states that are willing to meaningfully participate in a single standard. >> i guess it's not really organized or named as such. but i pointed earlier to certain sectors in which you can start talking about the governance of data. so you can build upon some of the existing governance that is there and make that more ai- and
1:46 pm
machine learning aware. use that. we don't need to advance something new, but perhaps we do need to talk about an additional -- well, "institutionalized," in quotation marks -- form of governance to tackle this. some people have an interesting proposal on the table: a think tank and financing organization in the uk talks about creating a machine intelligence commission that would work more on the basis of notions around common law, as you get exposed to the practice. and it would really bring experience together. >> other comments on this point? we have about five minutes left. i'm going to try to take a few more questions. yes, sir. >> carl hedler from the george washington university privacy and research institute. we seem to be relying on lawsuits largely to control corporate behavior in regard to privacy.
1:47 pm
i'm wondering -- in that case, people have to identify harms, and i'm concerned about the ability to do that in the context of ai and machine learning. [ inaudible ] >> that's a tough question. and it gets to some deep technical issues, as you know. the question of why an ai system did a particular thing, and what that system might have done had conditions been a bit different, can be difficult to answer. but depending on what kind of decision it is that the system made or assisted in, there are different legal regimes that may
1:48 pm
operate, at least in the u.s., and different legal burdens may apply to the company or institution that is making that decision with the help of ai. so i think it's more a question of detail as to what kind of decision it is, and what kind of governance is needed. i also think that, to the extent that people are naturally skeptical of whether complex ai-based decisions are being made in a way that is fair and justifiable, using these technologies in a way that is really sustainable in the longer run will require a concerted effort at being able to explain why a particular decision was made, or to be able to produce evidence to justify the fairness or efficacy of the decision that's being made. it's not a simple issue.
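a minimal sketch of one such piece of evidence -- a demographic-parity check over a toy log of decisions; the field names are hypothetical, and real audits use richer criteria:

    # one simple, auditable fairness statistic: compare approval rates
    # across groups in a log of the system's decisions
    decisions = [
        {"group": "a", "approved": True},
        {"group": "a", "approved": True},
        {"group": "a", "approved": False},
        {"group": "b", "approved": True},
        {"group": "b", "approved": False},
        {"group": "b", "approved": False},
    ]

    def approval_rate(group: str) -> float:
        rows = [d for d in decisions if d["group"] == group]
        return sum(d["approved"] for d in rows) / len(rows)

    gap = abs(approval_rate("a") - approval_rate("b"))
    print(f"demographic parity gap: {gap:.2f}")  # large gaps demand justification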
1:49 pm
but i do think that the public, in protecting themselves, and government, in protecting the public against the sort of harms you're talking about, are not without legal or technical capabilities. >> let me ask a question that sums up several that i've heard so far. given the inherent weaknesses of notice and consent, but recognizing it's a tool that we have, and recognizing the challenges of identifying harm in adjudications, is there a combination of tools that might be used that are rooted in transparency -- what does this algorithm do, and what is it intended to do -- so that we get a better sense of whether it is producing a harm or may produce a harm, and so that harm, or some approximation of that risk, can be disclosed in the notice regime? what is the combination of tools that might best produce a
1:50 pm
framework for handling these technologies as we move forward? do you want to jump in on that? >> ed's comment is just right, but there's something very interesting about being an a.i. engineer building one of these systems. it's sometimes very hard to diagnose why your system did something, but you always have to write down something called an objective function. for instance, if i decided tomorrow to release a program to help people navigate the streets of washington in traffic efficiently by tracking everyone -- all the cabs and all the other vehicles -- and i write down that my objective is, for each user, to get them to their destination as quickly as possible, then even if i'm using some fancy algorithms which i don't understand to accomplish that, i can show that objective to a lawyer or policymaker: this is why my algorithm is pulling data from many people.
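a minimal sketch of the two objective functions being contrasted in this example -- the route fields and the bonus term are hypothetical; the point is that the stated objective sits plainly in the code, whatever search algorithm optimizes it:

    from dataclasses import dataclass

    @dataclass
    class Route:
        travel_time: float        # minutes to destination
        coffee_shops_passed: int  # sponsored shops along the way

    # the honest objective: get each user to their destination as fast as possible
    def objective(r: Route) -> float:
        return r.travel_time  # lower is better

    # the supplemented variant discussed next: the same travel-time term,
    # plus a paid incentive for routing users past sponsored coffee shops
    def sponsored_objective(r: Route, bonus: float = 5.0) -> float:
        return r.travel_time - bonus * r.coffee_shops_passed

    print(objective(Route(22.0, 0)), sponsored_objective(Route(25.0, 2)))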
1:51 pm
on the other hand, if i've supplemented it a little bit, because i'm getting paid by a coffee company to route people past their coffee shops, again, that will be sitting there in the code. so when you think about an a.i. or machine learning algorithm being written -- someone says it's so complicated we can't explain it -- that's not a legitimate answer, because when you write an a.i. algorithm, you always have to write the objective function: the thing that the a.i. system is trying to do. so if you want companies -- or governments -- to be clear about what their a.i.s are doing, it is legitimate to say: show me the objective function. >> maybe we will leave it there, with andrew's optimistic vision about a possible way forward. really appreciate that. please join me in thanking all of our great panelists for the discussion today. [ applause ] the supreme court recently
1:52 pm
heard oral argument in two consolidated cases brought by the city of miami against bank of america and wells fargo. the court will decide if miami can sue the banks under the fair housing act for discriminatory mortgages given to african-american and hispanic home buyers that resulted in loan defaults, foreclosures and less tax revenue for the city. watch that tonight at 8:00 p.m. eastern on our companion network c-span2. we have a special web page at c-span.org to help you follow the supreme court. go to c-span.org and select supreme court near the top right of the page. once on our supreme court page, you'll see four of the most recent oral arguments heard by the court this term. and click on the view all link to see all the oral arguments covered by c-span. in addition, you can find recent appearances by many of the supreme court justices or watch justices in their own words
1:53 pm
including one-on-one interviews in the past few months with justices kagan, thomas and ginsburg. there's also a calendar for this term, a list of all current justices with links to quickly see all their appearances on c-span, as well as many other supreme court videos available on demand. follow the supreme court at c-span.org. a signature feature of c-span2's book tv is our coverage of book fairs and festivals. and this coming weekend book tv will be live from the 33rd annual miami book fair. saturday's coverage begins at 10:00 a.m. eastern. here's some of what you'll see. "new york times" book review editor pamela paul on "by the book: writers on literature and the literary life from the new york times book review." "the washington post"'s wesley lowery with his book "they can't kill us all: ferguson, baltimore, and a new era in america's racial justice movement." and former democratic
1:54 pm
presidential candidate bernie sanders takes your phone calls and talks about "our revolution: a future to believe in." sunday gets underway at 10:30 a.m. eastern and features fox news host and former white house press secretary dana perino with her latest book, "how my best friend became america's dog." pulitzer prize winning journalist susan faludi on her book "in the darkroom." national book award finalist colson whitehead. and co-founder of the miami book fair and owner of miami's books & books bookstore, mitchell kaplan. live coverage saturday at 10:00 a.m. eastern and sunday at 10:30 a.m. eastern. go to booktv.org for the complete weekend schedule. next, it's a debate on the causes of the 2008 financial crisis. the george washington university law school hosted policy experts and others who discussed whether and to what extent u.s. government housing policy contributed to the crisis, as well as the implications for
1:55 pm
potential future economic downturns. this is an hour and 50 minutes. >> good afternoon. i'm neil ruiz, the executive director of the center for law, economics and finance here at gw law school. the center is a think tank designed to be a focal point here in d.c. for the study and debate of major economic issues for the u.s. and the global community. we're excited about today's program. we have some of the best experts, coming from different perspectives, to discuss: did u.s. housing policy cause the 2008 financial crisis, and what is the right policy for the future? for all the audience members here, we're actually live right now on c-span2, just so you know.
1:56 pm
if you ask questions, please identify yourself; it will be broadcast to the world. i would like to introduce dean alan morrison. >> thank you. i'll be very brief. i want to express my thanks to my old and dear friend peter wallison. i bought his book. i read it. it's very interesting and very challenging. i'm sure we'll have a wonderful day today. peter, thank you for coming. now, the real moderator, my colleague, professor arthur wilmarth. >> thank you, alan. welcome, everybody. we're delighted to have you here for a stimulating and interesting discussion on two issues -- i guess we're looking backwards and forwards. first, we're looking backwards at the financial crisis and
1:57 pm
asking the question of whether u.s. housing policy before the crisis played an important role in essentially sowing the seeds of the crisis. then, secondly, we're going to be asking the question of what u.s. housing policy should be going forward. i'm not going to anticipate the commentators' remarks, but obviously all of you are aware that fannie mae and freddie mac are in a quasi-limbo state at the moment. they're essentially controlled by the federal government through conservatorships, and there is periodic continuing debate on what the future of those organizations should be, or what any alternative organization might be, in terms of federal participation in the mortgage market. i'd like to now introduce our three speakers. as dean morrison mentioned, we are indeed greatly privileged to have all three of these speakers
1:58 pm
here today. they are nationally recognized experts on the mortgage market and on financial regulation more generally. i'm also delighted to welcome all three of them back to g.w. law school. all of them have been here for some of our prior programs, and we are very grateful that they have come back once again. our first speaker will be peter wallison. peter is co-director of the american enterprise institute's program on financial policy studies, and he is a longtime expert analyst and commentator on financial regulatory matters generally. he was general counsel of the u.s. treasury department under president reagan and then served as white house counsel under president reagan. earlier this year, he published his book, which dean morrison made reference to, "hidden in plain sight: what really caused