
Key Capitol Hill Hearings, C-SPAN, November 13, 2014, 1:00am-3:01am EST

that you give us, and we are always with the state of israel. thank you for your time and thank you for being with us at this ga. coming up on c-span 3, a number of panels look at privacy, civil liberties and counterterrorism. topics include technology, the government's oversight role and private sector challenges. on thursday, dennis hastert, the longest-serving republican speaker, will join us on washington journal to discuss the 2014 election results and how republicans should govern in the 114th congress. you can see the former speaker live at 8:30 a.m. eastern on our companion network, c-span. now a conversation on privacy and civil liberties; this panel focuses on technology and congress' oversight role. it is hosted by the privacy and civil liberties oversight board, which is appointed by the president and was a recommendation of the 9/11 commission. >> thank you, mr. chairman, and good morning again to members of the audience, and particularly good morning to our second panel. the title of our panel is privacy interests in the
counterterrorism context and the impact of technology. i have no opening statement of my own, so we can go straight to the opening statements by the witnesses. i'll introduce each of them in turn. we're going to go right down the row, which happens also to be alphabetical order. i remind the witnesses that we would ask them to keep their opening remarks to seven minutes. there is a timekeeper, whom you might not have seen, in the front row here, renee, who will be holding up a yellow card for your two-minute warning and then a red card for "time's up." thereafter there will be a round of questioning by the board members and, again, the possibility of questions submitted by members of the audience. staff members throughout the audience have little index cards, and so if, during the course of the panel, a question occurs to you, raise your hand and someone will bring you over a little 3 x 5. our first speaker on this panel is annie anton. she is professor in and chair of the school of interactive computing at georgia tech. she has a ph.d. in computer science and is one of the country's leading experts on issues at the intersection of technology and policy. so, annie, please. >> thank you, good morning and thank you for the opportunity. let's try that again. good morning and thanks for the opportunity to testify. we're in an ever-changing world where terrorists and criminals are getting smarter and more sophisticated. their offensive techniques are surpassing our ability to
protect our nation. providing strong technical protections for privacy and civil liberties is a counterterrorism weapon. today i focus primarily on three technology considerations. first, strong encryption is an essential technology for fighting terrorism. second, de-identification, while not perfect, may be a reasonable approach given a thorough risk analysis. and, third, improved privacy threat modeling is critical for counterterrorism. our national cyber infrastructure must be resilient to attacks from foreign powers, terrorists, and criminals. requiring government back doors in commercial products, stockpiling exploits and weakening software security standards are all practices that weaken our nation's cybersecurity posture and make it easier for attackers to infiltrate these systems for nefarious purposes. the latest apple and google phones build in encryption by default. both companies are configuring this encryption so that they cannot decrypt the information for anyone, including law enforcement. these measures have been sharply criticized by the director of the fbi and the attorney general. as a technologist, i can assert that this will yield a system that can better withstand intrusions and denial of service attacks as well as limit access to authenticated users. requiring companies to provide back doors for law enforcement or national security hurts both individual privacy and our nation's overall security. moreover, the security benefits are questionable at best because sophisticated terrorists and criminals will simply use international products or other more secure, less convenient alternatives.
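[a rough sketch of the kind of default, user-held-key encryption being described, added for illustration: the key is derived from a passcode that only the device owner knows, so anyone holding the stored ciphertext alone, the vendor included, cannot decrypt it. this assumes the third-party python "cryptography" package and invented values; it is not apple's or google's actual design.]

    # sketch: passcode-derived encryption, so only the passcode holder can decrypt.
    import base64
    import os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def derive_key(passcode: str, salt: bytes) -> bytes:
        # stretch the passcode into a 32-byte key; the vendor never sees the passcode
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(passcode.encode()))

    salt = os.urandom(16)                      # stored on the device, not secret
    key = derive_key("owner-passcode", salt)   # exists only while the owner is unlocking
    ciphertext = Fernet(key).encrypt(b"contacts, messages, photos ...")

    # whoever holds only ciphertext + salt (e.g., the vendor) cannot recover the data;
    # decryption requires re-deriving the key from the owner's passcode.
    assert Fernet(derive_key("owner-passcode", salt)).decrypt(ciphertext).startswith(b"contacts")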
technology and policy scholars are actively debating the merits of de-identification and anonymization techniques. the distinction is critical because privacy rules only apply to identifiable data. technology scholars emphasize that there is no way to mathematically prove that an anonymized data set cannot be reidentified. in contrast, policy scholars believe that anonymization provides real practical protection to most of the people most of the time. consider the locks on your door at home: they are pretty good but not good enough to keep a determined intruder at bay. that's the idea behind anonymization. there are some contexts where it is critical to protect a person's identity. for example, for victims of domestic abuse we need to ensure their location is protected and cannot be reidentified by their abuser. however, in many settings, if we apply effective but not perfect de-identification procedures, overall privacy protection may be increased and data may be more useful. in such cases, the perfect should not be the enemy of the good. it might be considered how to determine in practice when agencies should insist on technically strict de-identification versus when effective but not perfect de-identification may address the bulk of the risk.
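[a minimal sketch of the effective-but-not-perfect de-identification described above: direct identifiers are dropped and quasi-identifiers are coarsened so each released record is harder to tie to one person. the field names and thresholds are invented for illustration, not a standard.]

    # sketch: drop direct identifiers, generalize quasi-identifiers.
    def deidentify(record: dict) -> dict:
        return {
            # direct identifiers (name, phone, exact address) are suppressed entirely
            "zip3": record["zip"][:3],                      # 5-digit zip -> 3-digit region
            "age_band": f"{(record['age'] // 10) * 10}s",   # 37 -> "30s"
            "visit_month": record["visit_date"][:7],        # 2014-11-13 -> 2014-11
            "diagnosis": record["diagnosis"],               # the useful payload is kept
        }

    raw = {"name": "jane doe", "zip": "30332", "age": 37,
           "visit_date": "2014-11-13", "diagnosis": "flu"}
    print(deidentify(raw))
    # {'zip3': '303', 'age_band': '30s', 'visit_month': '2014-11', 'diagnosis': 'flu'}
    # good enough for many analyses, but an adversary with auxiliary data may still
    # re-identify some records, which is exactly the residual risk at issue.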
finally, threat modeling is critical for counterterrorism and we must improve it to achieve two goals. first, we must develop privacy-oriented threat models. most threat modeling techniques have been developed entirely in a security context with little privacy consideration. the latter is crucial given the rise of big data analytics and the internet of things. second, as a nation, we do not want insiders leaking state secrets to foreign journalists to become a common way to influence public policy decisions and debates. insiders with access to sensitive information must be considered as potential threats simply because of the extreme damage that a leak could do, either direct costs by providing useful information to enemies or indirect costs with respect to public relations or erosion of trust. a good threat model makes risk analysis feasible for any organization. in closing, as a technologist and privacy scholar, i believe we should encourage strong encryption by default, use practical de-identification technologies now rather than wait for theoretically perfect solutions, and expand threat modeling to include privacy as well as security. in addition, ed felten mentioned the importance of having technologists in the room. i can't help but note that the review group did not have a technologist, and that the pclob -- and i appreciate all that you are doing -- again doesn't have a technologist in the room. having technologists on panels is helpful, but i would like to see us move forward to having more technologists involved in the decision making. and so i'd like to thank the privacy and civil liberties oversight board for its commitment to finding ways for the government to protect privacy and also to meet our critical security needs as a nation. thank you. >> let me just -- thank you for your testimony, but actually we have a technologist in the second row. >> oh, great. >> and we have a technologist outside as well. so we do value the role of having technologists and have two full time on our staff. >> good, and i look forward to meeting them. thank you. >> thank you. our second witness is alvaro bedoya. alvaro is the executive director of the center on privacy,
technology, and the law at georgetown university law school and was previously chief counsel to the senate judiciary subcommittee on privacy, technology, and the law. alvaro? >> thank you. good morning and thank you for the opportunity to speak with you today. we have a problem right now in privacy and it's a problem for government and industry. government and industry have developed extraordinarily powerful data analysis tools. these tools let them analyze data sets that had previously been too large or too messy. they let them process that data faster and they let them find latent value in data sets that had previously seemed old and worthless. in short, these processes create enormous value, and that value is driving both government and industry to collect as much information as possible and to retain it as long as possible. the problem is that this is hitting up against long-established privacy values ingrained in the fipps. the fipps encourage limited collection, they encourage data minimization and the restriction of data use to the purpose for which it was collected, and so right now, both in the intelligence community and in industry, there's effectively an effort to redefine privacy. privacy used to be about collecting only what you needed to collect. under the new model, you collect as much information as possible and you protect privacy through after-the-fact, post-collection use restrictions. i'm here to encourage you to resist this new model. in my written testimony, i argue four points. the first is that collection still matters. the collection of personal data impacts a person's core right to privacy regardless of what happens to that data after the fact. second, this was discussed in the first panel, but there's a misconception, i think, that the fipps are primarily useful for commercial privacy.
in my written testimony i talk about the fact that the fipps remain a critical benchmark against which to measure the privacy impacts of counterterrorism policies, and i'll just add, given the previous discussion, that literally since their inception in 1973, the committee that wrote the report dedicated a section to how, of course, not all of the fipps can apply in the intelligence context, but clearly some of them must because the risk is too high. third, in my testimony i talk about the need to remember that privacy is about the taking and not just about the sharing. and fourth and finally, i think that americans do expect a degree of privacy in public. now, given my limited time here, i want to focus my oral testimony on just that first point, collection. i think it's the most important. after the snowden disclosures on the telephone records program last summer, the ic's first line of argument was that, you know, we may collect a lot of this information but we only look at a tiny part of it. the problem is that this is not how people think about privacy. if a police officer knocked on your door and said, "hey, i want you to give me a list of every person you've spoken with in the last week," then said, "don't worry, we're probably never going to look at this stuff," would that reassure you? i think that most people would say no, and i think that this highlights the fact that the forcible collection of sensitive data in and of itself invades what this board has called "the core concept of information privacy," and that's "the ability of individuals to control information about themselves." it's not just a concept; as you know, it implicates first amendment and fourth amendment interests, and i elaborate on that in my written testimony. but to my mind, the single biggest reason to resist a privacy model that primarily relies on post-collection use restrictions is the disparate impact that it might have on vulnerable communities. in a use restriction model you collect everything and protect privacy by banning harmful uses of data after it's been
collected. the problem is that there's basically what i'll call a moral lag in the way we treat data. what i mean by that is that we as a society are often very slow to realize that a particular use of data is harmful, especially when it involves data of racial and ethnic minorities, lgbt people, and others who have lacked political power. in fact, the two most prominent examples of this moral lag involve the department of defense, formerly the department of war. during world war ii, japanese americans volunteered detailed information about themselves and their families in the census. they volunteered that information under a statutory promise from the federal government that that data would remain confidential. this was a use restriction. what happened? as you know, in 1942 congress waived the confidentiality provisions and the department of war used detailed census data to monitor and relocate japanese americans to internment camps. after world war ii, a similar story unfolded for gay and lesbian service members. they were prohibited from serving openly, so many turned to military chaplains, psychologists, physicians. yet routinely, and even after "don't ask, don't tell," the military used that confidentially collected data to out and dishonorably discharge lgbt service members. now, today, with the benefit of hindsight, we recognize that these events were discrimination. but at the time, the picture was less clear for a lot of people, and that took a long time to change. the census bureau only acknowledged the full extent of wartime sharing of census data in 2007, and congress only repealed the ban on openly serving gay and lesbian service members in 2011. that was three years ago. so let me be clear, my point is not to cast aspersions on the department of defense. rather, my point is that all of us as a society are consistently slow to recognize what's a harmful use of data when it comes to vulnerable communities. it often takes us decades to figure that out. far too often today's
discrimination was yesterday's national security measure. what this means for our data and what this means for privacy is that we cannot solely rely on use restrictions. what this means is that collection matters, and that the simplest and most powerful way to protect privacy is to limit data collection, particularly for the government. i urge you to continue to protect that core right. thank you. >> thank you very much. our next witness is mike hintze, chief privacy counsel at microsoft, where he's been for 16 and a half years, really at the epicenter of the evolution of technology and privacy. mike? >> thank you for the opportunity to speak with you today in this important discussion. i come to this discussion from the perspective of advising on and managing privacy and related issues in the private sector. i've done that for nearly two decades, first as an associate here at a d.c. law firm and, as you mentioned, for the last 16-plus years at microsoft. at microsoft, we approach the issue of privacy from a core belief that privacy is an essential value, both to us and to our customers. we have a strong commitment to privacy because we recognize that customer trust is critical to the adoption of online and cloud services. our customers -- from individual consumers to large enterprises -- will not use our products and services unless they trust them, unless they trust their private data will remain private. we seek to build that trust with our customers by adhering to a robust set of policies and standards. these policies and standards guide how we do business and design our products and services in a way that protects customer privacy. they are based on the fair information practices, which we agree remain relevant today, including transparency about the data we collect and how we use it, minimization with regard to
the data collected and how long it's retained, choice about collection and use of data, strong security to ensure that the data is protected, and accountability to ensure that we are living up to our commitments. these standards are not just a rule book we created and hope that our employees follow. instead, we built them into the processes we use to operate our business. for example, they're built into the tools that are used in our software development life cycle, and there are checkpoints that prevent a product or service from shipping without a privacy signoff. in sum, we've taken what's often referred to as a privacy by design approach to how we operate the company and how we develop and run our services. and this approach is supported by a mature privacy program that includes dedicated personnel with privacy expertise who sit in both centralized roles and are embedded throughout the business. the program includes incident management, response and escalation processes. further, we have developed and deployed comprehensive role-based training for engineers, sales and marketing personnel, as well as those in hr, customer service and other roles that touch and handle personal data. and our program includes executive-level accountability for privacy compliance. but that investment in privacy and the trust we've worked to build is undermined if customers believe that the government can freely access their data. concern about government access to data collected by the private sector can foster a lack of trust in private sector services, and when those concerns are focused on access to data by the u.s. government, that lack of trust becomes focused on u.s. companies. that's why we've been vocal about the need for surveillance reform in the united states. there have been positive steps in this regard in the last year, but there's more that needs to be done. we've laid out several things the u.s. government should do to help restore the trust that's been damaged by last year's revelations. first, bulk data collection programs should end. we have been clear that we have
not received any orders for bulk data collection, but we strongly feel that surveillance should be focused on specific targets rather than bulk collection of data related to ordinary people's activities and communications. the recommendations of this board on the section 215 program are encouraging, as are the comments of the president, and we urge the administration to end the existing program and we urge congress to enact prohibitions on any such orders in the future. second, we should do more to increase transparency. transparency is a key element of any program for protecting privacy. it facilitates accountability and enables public debate around policies and programs. here, too, we've seen positive developments; in particular, the government has declassified more information about its surveillance programs and the workings of the fisa court. additionally, we and other companies filed lawsuits last year against the u.s. government arguing that we have a legal and constitutional right to disclose more detailed information about the demands we've received under u.s. national security laws. earlier this year, we came to an agreement with the government enabling us to publish some aggregated data about the fisa orders and national security letters we've received. it was a good step that helped foster better understanding of the type and volume of such orders that service providers receive, though we believe there can be and should be more detailed reporting permitted.
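[the aggregate reporting described here works by publishing counts only in coarse bands rather than exact figures. a minimal sketch of that bucketing, assuming an illustrative band width of 1,000:]

    # sketch: report an order count only as a band, e.g. 0-999, 1000-1999, ...
    # the band width of 1,000 is an assumption for illustration.
    def reporting_band(count: int, width: int = 1000) -> str:
        lower = (count // width) * width
        return f"{lower}-{lower + width - 1}"

    print(reporting_band(0))     # "0-999"
    print(reporting_band(1432))  # "1000-1999"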
third, we support reforms of how the fisa court operates. in order to foster greater confidence that surveillance programs and government access to data are appropriately balanced against privacy and other individual rights, surveillance activities must be subject to judicial oversight. we need a continued increase in the transparency of the fisa court's proceedings and rulings, but effective judicial review requires a true adversarial process where more than one side is heard. we urge congress to act on fisa reform. fourth, government should provide assurances that it will not attempt to hack into data centers and cables. in the year since the "washington post" reported on alleged hacking by the nsa of cables running between data centers of some of our competitors, there's not yet been any public commitment by the government that it will not seek to obtain data by hacking into internet companies. we believe the constitution requires that the government seek information from american companies within the rule of law and through authorized government access, and we've taken steps to prevent such attempts by increasing and strengthening our use of encryption across our networks and services. nevertheless, we and others in the industry will press for clear government assurances. fifth, although recent revelations have focused on the u.s. government and many of the subsequent debates have focused on the privacy rights of u.s. persons, we must recognize that this is a global issue. as we seek to sell our products and services to customers around the world, discussions that focus exclusively on the rights of u.s. persons are not enough. many people around the world do view privacy as a fundamental human right and they have a very real concern about whether and how governments can access their data. in that regard, we appreciate the steps that president obama announced in january, which acknowledged the need to address protections for non-u.s. citizens. along those lines, in the law enforcement context, we've challenged a federal court warrant in the u.s. courts seeking customer e-mail content that's held in our data center in ireland. further, we've called for governments to come together to create a new international legal framework that allows for new streamlined processes for cross-border data access that can supplement existing rules. none of this should be taken to suggest that we don't value and appreciate the absolutely critical work that our law enforcement and security agencies do every day to keep us all safe. in fact, we work closely with the u.s. and other governments to help fight cybercrime and other threats. we want to ensure those agencies have the tools and information that they need to protect us from terrorism and other threats to our safety and security, but there needs to be a balance between safety and the personal
freedoms that people around the world, especially law-abiding citizens and institutions, enjoy. this balance is rarely an easy one. as chief justice roberts recognized in "riley v. california," privacy comes at a cost. but the court's unanimous decision makes clear that privacy is an inherent value that must be protected. while there's not always a perfect analogy between protecting privacy in the private sector and in the national security context, we deal with questions of striking the right balance between privacy and other needs in each of these contexts, and as technology evolves we need to continually re-evaluate that balance. many of the principles that have proved useful in striking and maintaining that balance, the fair information principles, continue to be relevant today. >> mike, can you wrap up? >> i'm wrapping it up. sorry. >> super, thanks. >> we'll come back to some of those issues in the questions. our final member of this panel is hadi nahari, the chief security architect at nvidia, the company that designs and builds high performance computer systems. he's a photographer and computer scientist. welcome and please proceed. >> thanks for the opportunity to testify today. i appreciate it. i'm here as a technologist, not as a lawyer, and in silicon valley we say the "i'm not a lawyer" rule applies. our concern is about building systems that are buildable and creating rules that are enforceable, so i wish to provide some technology background to the panel and to the conversation. from our perspective, security is to systems what harmony is to
music. providing security as a foundation for establishing rules of privacy is our model. we build systems that are able to enforce rules, and that is the context of security as we see it. security is one of the intersections between technology and civil liberty, and we deal with issues such as trust and active adversaries in a system. this is how we build and design our systems. our world used to be simpler, and sometimes i bring samples of that simple world. you all remember this as a mobile phone. this is from the time that phones were actually doing just that: they were phones. and some of these devices were statements of class. you remember this, right? this was a phone. this was a mobile phone. i worked at this company. one of my favorites in the collection -- this used to send and receive text messages. some of these -- oh, yeah. this was your personal digital assistant. i have some others -- oh, yeah. [ laughter ] palm. there used to be a company by that name; this was one of the darlings of the valley. oh, yeah, of course this was also a very important device that everyone carried. these are from the time that the world was very simple and we built systems that did very basic things. and, as thomas friedman put it --
and i quote here -- "when i sat down to write 'the world is flat,' facebook didn't exist, twitter was still a sound, the cloud was still in the sky, 4g was a parking place, linkedin was a prison, applications were what you sent to college and skype was a typo." so on june 29, 2007, the iphone was introduced. the world changed. the world for us technologists changed, and probably for everybody else in the room, non-technologists and technologists alike, it also changed, and we are dealing with devices that are not as simple as what we used to carry. so that's part of the problem: from my perspective, i'm interested in the ramifications of the changes in this technology for the subject we are talking about. it's only been seven and a half years. it's only seven and a half years ago. so i don't believe there is any other event in history that in this short amount of time has run through everything and tried to change so much of the foundation of our society. in the old, pre-2007 world we said things like "you cannot enumerate all the attacks." in cryptography it is a known statement that you cannot define a secure state of a system. it was difficult back then, with those devices. it has just become worse. we don't know anything about our future, but a couple of things i can guarantee right here: things will only get faster. we're going to build things that are faster. they're going to become smaller, a lot smaller, they're going to
become cheaper, and these devices are going to become a lot more abundant. for some of them, we no longer care about building devices that are usable for a long period of time; it's a lot more economical to build devices that are basically throwaway devices. that's a concept we are following. and they're becoming more connected. everything is becoming more connected. you have heard of things such as iot, the internet of things or, as i call it, the thingsternet. everything is becoming very talkative. all of these devices are very chatty. they talk a lot. so you all have phones, smart phones, in your pockets. from the time that i started, which was about five minutes ago, until now, each one of those devices, without you even touching them, has transmitted, sent, and received about half a megabyte of data. this abundance of information, generated without you interacting, is having a lot of ramifications for what we are doing. we heard a lot about how data is only accumulating; it's not going away. we are generating more data than we can manage or fathom. a hundred hours of video. a hundred hours of video is uploaded to youtube, and youtube is not the only such service; other companies also have these services. a hundred hours of video is uploaded to youtube every single minute. every single minute.
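[taking the quoted rate at face value, a back-of-the-envelope calculation of what 100 hours of video per minute adds up to:]

    # 100 hours of video uploaded per minute, taken at face value
    hours_per_minute = 100
    hours_per_day = hours_per_minute * 60 * 24      # 144,000 hours of video per day
    years_per_day = hours_per_day / 24 / 365        # roughly 16.4 years of viewing added daily
    print(hours_per_day, round(years_per_day, 1))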
so we are building systems to manage and compartmentalize and define and work with this data, and this data, as we have heard in the two panels, is not going away. it is not disappearing. and in the new world, maintaining security is even harder. so as a citizen i am very carefully following what is happening with this esteemed board as to what the ramifications of the decisions that we are making are, and whether they are enforceable, whether we can build systems that enforce these rules, because right now being a security professional and creating doable, enforceable security is as unpopular as being an atheist in jerusalem. no one likes you. so i'm hoping that we can come up with a system that is also buildable. lastly, i close my remarks and i'm looking forward to the questions. one more thing that i can guarantee is that attacks are going to increase, and they're going to become simpler and easier to mount. by one measure the number of attacks in 2013 was three trillion, only counting those affecting private information, at an average of $27.3 per attack -- about $100 billion, the cost of these attacks. this data is from 2013. none of the target, home depot, or linkedin breaches, none of those attacks are included here. so with that i close my remarks and look forward to answering questions. thank you. >> thank you. we'll now go through a round of questioning, and board members as well will be subject to the time limits here. i think i have 20 minutes and then
each board member will have five minutes, and then still the possibility of questions from members of the audience. i wanted to build my first question off of the point that i think hadi was making at the end there, which is that there seems to be this inexorable trend towards more sophisticated devices collecting, generating, sharing, emitting autonomously, automatically disclosing more and more information. i think i'll go to professor anton first and maybe come back to hadi with this. but looking at the phenomenon and the seeming inexorability of it, the seeming inevitability of it, first on the technology design side, what do you see as any potential at all for limiting that growth, controlling the flow of that information? you talked to some extent about the possibility of technology protecting privacy. how does that square with this tremendous ongoing growth of information? >> thank you. so as was mentioned in the earlier panel, systems are getting more and
more complex, which makes compliance more and more difficult as well. i really hope we don't limit growth and limit the ingenuity of new technologies that might have great applications in the future and solve wonderful, really important problems. by the same token, there's work that's been done, especially work being done at georgia tech, on how we design the internet of things or internet of devices such that we are taking privacy and security into consideration, given all of the outputs and all of the possible inputs. engineers just simply need better tools and heuristics for how to do that. privacy by design is thinking about these issues early on and not after the fact. in terms of controlling information, i think what we want is to secure the flow of information but not limit the flow of information. these are all things researchers are working on in universities and research labs and industry as well. >> i've written myself about the potential for privacy enhancing technology and the value of privacy by design, but at the same time i just don't see it happening. or -- let me put it this way. while i see it happening, and i take mike's point that microsoft has incorporated privacy by design as a corporate concept, there are these other hugely dominant trends that almost seem to be overwhelming. >> so within the context of counterterrorism i think there's a lot of policies and laws that are in
place. when i mentioned earlier that i'd like to see more technologists in the room, it's not just to study it after the fact but actually to be involved in forming the policy, because a lot of times the policy and the law are written in a way that we could have avoided. and so what i'd like to see is more technologists involved in the discussion up front, really informing the decisions about laws that are going to be passed and about policies we're going to adopt, because we could write them in a way that makes it easier to comply with the law. when -- >> do you have an example in mind? >> excuse me? >> do you have an example in mind? >> so i work a lot with hipaa, for instance. with the new change with meaningful use, i had one ph.d. student trying to predict how the change was going to come out, because when they finally make that decision we'll have very little time to implement that change in systems to make sure we're compliant. had we had more technologists involved in that process, we'd be able to more quickly adapt our systems and we'd have a better community of practice, if you will, about how to satisfy those laws and have systems that make sure that only the right people are having access to the right information at the right time, and in compliance with the law. >> and certainly you would agree that we need both better, clearer laws as well as more mindful technology? >> absolutely. >> not that one or the other will solve this problem? >> absolutely. we need both. >> i want to go to alvaro, but there was one point in your written testimony which you didn't mention and i want you to talk about it now because i think it's very, very important.
a lot of our constitutional law of privacy is based upon the concept of reasonable expectation of privacy. and there's a lot of worry and legitimate concern that with these changes in technology our expectations of privacy diminish. you talked about the fact that, in fact, with changes in technology our expectations of privacy may actually be growing. >> the point here is that the katz test cuts both ways. usually when the courts talk about katz and society, they say everyone's becoming inured to this idea, they're surrendering to the ubiquitous collection of their data. but i think technology is helping people learn what privacy is, and the best example of this is location technology and facial recognition technology. previously, people had no occasion to develop an opinion on whether or not they expected the sum total of their movements to be compiled in a profile. suddenly, it's becoming radically cheaper to conduct that surveillance, so i think that, in the same way that you only realize what you had when you start losing it, for the first time a reasonable expectation of privacy is crystallizing in people's minds. so i would say, you know what? maybe when i go to the grocery store or i drive down the street or go to work, i expect my colleagues at work to see me, the people i know at the store to see me, my neighbors to see me. but i don't expect anyone to know i'm at all those places at all times no matter where i go. so i think the technology can expand our expectation of privacy. >> and mike, over the past 15 or 16 years that you've been at microsoft, do you think it's
fair to say that your customers have become less interested and less concerned about privacy, or do they expect more of microsoft and other companies when it comes to privacy? >> i think they expect more. i think i agree that expectations of privacy in some ways have increased. they've certainly changed as technology evolves. people learn about it, they adapt. there's certainly data sharing going on that people wouldn't have contemplated or accepted a number of years ago, but that doesn't mean people don't care about privacy anymore. it's very clear to us that our customers care about privacy now more than ever. and you see that in the amount of resources and attention and focus that we've put on privacy. it really is one of the top legal issues we're dealing with. it's one of the top customer issues we're dealing with. we hear every day from customers who have questions about how their data is being treated, how it's being protected, how it's used. people's expectations of privacy are not fading away. >> just to put a nail in the coffin here, i think the government argues, and there's obviously supreme court precedent to support it, that a person surrenders his privacy rights when he discloses information to a third party such as microsoft in the course of using microsoft products or services. but it seems to me from what you're saying that microsoft does not believe that its customers have surrendered their privacy rights when they've used a microsoft product or service, and where microsoft has thereby acquired information, microsoft does not believe that that information has zero privacy interest, does it? >> absolutely not. on the contrary.
to the extent the third party doctrine ever made any sense, it doesn't make sense today. people increasingly are putting all of the information they used to keep in their homes, in file cabinets, online and in cloud services, and as recent court decisions have recognized, particularly in "riley," there's even more data created, and the most private, intimate details of people's lives are in cloud services, in the hands of third parties, more so than what's in people's homes, and the expectations of privacy around that data are quite profound. >> and that's true in your view both of content, so to speak, and noncontent or metadata or transactional data. there's sensitivity there in both categories. >> absolutely. i don't like the term metadata because it encompasses too much. we should talk about what we're talking about, and there is a broad range of data that's collected or created or inferred through the use of online services, and some of it's fairly benign. we put the metadata label on things like the amount of storage you're using in an online storage service or the average file size, but even that has privacy implications, and we embrace the ideas of transparency and consent and all of the fipps around that kind of data, too. but as you go up the scale, with maybe content being at the end as the most private, the stuff that people have the highest expectation of privacy in, other things like who you're communicating with are right up there, right up against content in terms of what that can reveal
about people's relationships, associations, thoughts, beliefs, et cetera. and there are very important privacy implications about that as well. >> you mentioned the transborder issues and the fact that people around the world recognize privacy as an interest and in many cases as a human right. where do we stand, and what are you aware of? is there any progress being made multilaterally or bilaterally in terms of developing standards for transborder surveillance and transborder government access? anything in the works there that we should be aware of? >> not that i'm aware of specifically. you know, there are certainly more discussions happening in recent years than there have been in the past among a number of constituencies and interested parties on privacy around the globe. the chairman and i were recently at an international data protection conference where these issues were loudly and vigorously discussed and debated, so that dialogue is happening, but in terms of actual progress towards making headway on an international framework for this stuff, there's certainly a lot more work to be done. >> i would just ask you, and i would ask others as well, members of the audience and additional panelists, if and when you do become aware of things that are making progress, please let us know. obviously we remain interested in that, in the transborder question.
we've talked about privacy by design. in your experience, do technologists give adequate consideration to privacy as they design products, and what more could be done to encourage or promote privacy by design? >> in technology we build things that are reasonably well defined, so i recognize that in the previous panel there was a discussion that you don't necessarily need to define privacy to be able to enforce it. on the technology side, if we are able to build a model that represents a need, then we are very good at building it. i think part of the problem is mapping a very human, very societal concept such as privacy into the devices that we build and the services that we build and use; sometimes it's simple, sometimes it's not. to answer your question, i see a great deal of attention, a great deal of interest in the notion of privacy, privacy by design, secure by design, trustworthy by design, especially in the field that we are dealing with. our model is that the security of the device, when we release it and it goes to the field, exists in a mutually distrusting system. so you don't really know. it's one thing -- let me take a step back. it's one thing to build a server that resides in someone's data center where you have full control over the actual device and you have full control of the flow of information, the software that is there and how it's used. it's another thing to build a device and leave it in the hands of the users, guessing
what they want to do. then it's one thing to have a notion of privacy as we do and build a system based on that. it's another thing when you take a look at this -- should i call it a generation gap? there's this company called snapchat, and they had promised that whatever picture you take will disappear. anyone who works in technology knows that things like this are not possible. you could take a picture of that device. we call it job security. then when they realized that this is not really possible, they announced it, and they're under the oversight of the government for about 20, i think, years to make sure that they do things right, and i know they're paying a lot of attention to make sure they get things right. but then you look at the users. i think a stat was released last week or the week before: they asked college students, and more than 50% of college students said, yeah, we will still use snapchat. they're aware. they understand. i don't know how to reconcile that. there's a new generation that has -- i don't know whether it's more or less, but certainly a different expectation and definition of privacy. and there is a vagueness about what that means in terms of a system that could be built. once those rules are, you know, in a reasonable state, we are really good at building systems that satisfy them, hence my opening remarks as to our model in industry and technology: we understand the rules, others are very good at creating those rules, and we are good at building devices and services that enforce those rules, but it has to be buildable and it has to be enforceable. the attention is certainly
there. >> the first premise is the rules have to be clear, and if they're not clear then you don't know what to build to? >> semi-clear will do. we used to live in a world, before 2007, where everything had to be really, really well defined. that world no longer exists. we have a new generation of hackers that do not abide by the rules. therefore we have to create systems that are almost right. we are seeing it in programming languages, in the design of systems, in self-correcting systems. sometimes, somehow, somewhat accurate will do. >> annie, did you want to respond to that in the minute remaining? >> sure. this reminds me a little bit of what i was saying about practical encryption and anonymization. i think there are times and certain applications where that kind of risk is fine, and there are other instances where it's not fine. and that's where guidance from pclob can be helpful in terms of trying to figure out when it is that we can have pretty good rules and when we have to have very tight, accurate, 100% certainty kinds of rules. >> okay. thank you. at this point, other members of the board will pose questions under the five-minute rule, and we'll go in sort of reverse order down the line here, starting with rachel brand. >> thank you, jim. and thanks to all of you for being here. that's really a good segue. the first question i was planning on asking -- i was interested in what you were saying about the domestic violence context: there you want it to be perfect; in other contexts, good enough will do. can you explain what you mean by that? what's an example of a de-identification method that might be good enough but perhaps not perfect? i'm not a technologist, as you know, so if you could help me out, that
would be great. >> all right. so, there are certain studies that have been done -- for instance, when netflix put their data online and researchers went and looked at the internet movie database to see if they could reidentify people. they had resources; it was readily available information, and in that context i don't think anyone was personally hurt by it, but there might be cases where that kind of reidentification could be extremely damaging. we talked earlier about aggregation of databases and how the ability to link different kinds of information across different kinds of databases could actually be detrimental. it can also help us find the bad guy, though. that's the tension, right? so when is it okay and when is it not okay? and are there instances -- for instance, for netflix or something that's available online that's just not that important, you know, where you went to school or where you had dinner, it may not be really necessary to worry about it. but in the context of a group that is actively trying to mount a terrorist attack, that's really important. >> so i guess that makes sense in terms of when it's important and when it is not important. how do you do it? for example, how do you do it perfectly in the domestic violence context? >> i think it's very difficult. i think we have technology that's pretty good but not perfect. and so the idea is, do you keep the data unencrypted and easily accessible because it's not very important, or do you actually encrypt it and then use reasonable, practical anonymization on top of that? and so it just depends. and i think this is one of those cases where technologists would welcome guidance in helping us figure out what the risk profiles are. technologists sometimes don't have access to what the risks are.
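[the netflix study worked by linking quasi-identifiers shared between an "anonymized" release and public data. a minimal sketch of that kind of linkage, with invented records:]

    # sketch: re-identification by joining an "anonymized" release with public data
    # on shared quasi-identifiers (here: movie title + rating date). records invented.
    anonymized_release = [
        {"user": "subscriber_8812", "movie": "brazil", "rated_on": "2005-03-02"},
        {"user": "subscriber_8812", "movie": "alphaville", "rated_on": "2005-03-09"},
    ]
    public_reviews = [
        {"name": "j. smith", "movie": "brazil", "reviewed_on": "2005-03-02"},
        {"name": "j. smith", "movie": "alphaville", "reviewed_on": "2005-03-09"},
    ]

    matches = {}
    for a in anonymized_release:
        for p in public_reviews:
            if a["movie"] == p["movie"] and a["rated_on"] == p["reviewed_on"]:
                matches.setdefault(a["user"], set()).add(p["name"])

    # a few overlapping, dated ratings are often enough to pin a pseudonym to a name
    print(matches)   # {'subscriber_8812': {'j. smith'}}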
>> for mr. bedoya. you said something along the lines of, in the national security context some of the fipps must apply, even when they all can't. >> sure. the first is a historical point. when the hew report was issued -- i was just reading its pages 74-75 -- the committee said, okay, we set out these standards; clearly all of them can't apply to all intelligence records, but some of them must apply, because the risk is too high if we don't have some protections. to put that more concretely, obviously the difficult ones are individual participation and transparency. i think there are ways to address these, at least on an aggregate level, that would be really powerful. so, you know, i think in the 702 context the board has -- and to take a step back, i think it is shocking that one and a half years after the disclosures the american public doesn't have even a rough sense of how many of them have had their data collected. people think it's everyone, but then you have news reports saying 30% of calls are actually recorded. so in the 702 context the board recommended various measures to identify the scope. in all my time in the senate, i never saw anything that would lead me to believe that it would actually be impossible for the nsa to produce an estimate, based on statistical sampling, of the number of u.s. persons collected in 702 data.
there's a number of things you could do to quantify scope; one of them could be releasing the number of queries done at the 12333 level. i think there are ways to address this. >> anybody else have a thought? >> i do, in terms of transparency. this is another way in which, for instance, fisc technologists could be helpful. if hadi whispers in mike's ear, "i spoke with jim about the panel," by the time it gets to jim it will be "i spoke with him about wearing flannel." when you get lawyers from the nsa and the fisc talking about technology and you don't have a technologist there to ask questions or make suggestions -- well, have you thought about including this kind of metric, or instrumenting this software in certain ways -- we could actually improve the ability to have more transparency and more oversight of technology by bringing everyone into the room for those discussions. >> thank you. >> i'm going to try to get a question in for each panelist. i would appreciate brief responses. annie, you said that encryption is good for counterterrorism. i would like to understand more. i understand mandating a back door weakens protections, but it seems that terrorists can now hide their communications, which seems to be detrimental to counterterrorism. >> it's a better world when everyone can hide their information. there was a case in greece where, because of a back door, someone was able to listen in through a wiretap on the prime minister. that's what happens when you don't have encryption and security by default. to think that the terrorists
aren't going to do the same thing, i think, is naive. >> alvaro, you talked about the expectation of privacy, and if i heard you correctly -- tell me if i'm wrong -- you're suggesting that we talk not about what people expect their privacy to be, because i can put up a sign saying i'm conducting video surveillance and destroy that, but about their expectation of what privacy should be? >> i'm not saying that. that's a separate wonderful, powerful argument. what i'm saying is that technology is making us realize that we do expect privacy in scenarios that didn't exist 10 or 15 years ago. so i think technology can expand your notion of privacy, but i also think the fourth amendment doesn't protect me and you, it protects us as a society, and it sets a baseline for the relationship between a government and its citizens that also needs to be protected. >> on the fourth amendment, mike, you talked about the balance between government requests and your customers' privacy. do you think the government should have a warrant every time it accesses your customers' records, particularly if they're american customers? >> yeah. certainly in the law enforcement context we've advocated for reforms that would in effect require a warrant for access to any content regardless of its age, to precise location information, and to other sensitive data. you know, i'm not sure we would go so far as to say that a warrant is required in every single case for every single data type. we certainly need to update the rules so that there is appropriate judicial review of surveillance programs and of the specific requests that we get for data. >> in terms of the third-party doctrine, would you then essentially not have it be an absolute exception to the fourth amendment, but essentially work with it to provide some protection, though not necessarily full warrant protection? >> yeah. the laws that we deal with in
the law enforcement context provide a sliding scale, in effect. they provide some reasonable oversight and protection, something below warrant and probable cause, and we've taken the position that that's appropriate for some types of subscriber data, et cetera. >> thanks. hadi, you talked about -- i want to put this in the context of how much information should be collected. you talked about enforceable rules for collection, but you also said that collection is going to be faster and cheaper, we're all going to be more connected, attacks will increase, and even compliance with rules may be more difficult. professor felten talked about potential abuse of information and increased possibilities of breach. how would you strike the balance between collection rules and, essentially, use rules? >> that's a very good question, a very difficult one. i don't know, on the technology side of the house, i don't know if we really know where the balance is. we take a look at the attacks, we look at the system, we look at the capabilities, we look at the mere fact that all of these attacks and exploits are becoming so advanced. to give you one concrete example: i used to need to be physically near things that you touched to be able to lift your fingerprint, then have access to your phone, and then use that fingerprint to mount an attack. with the resolution of the cameras that we have these days, sometimes a very high resolution camera, i just need to have your picture that was taken somewhere in china to be able to zoom in, zoom in, zoom in and lift your fingerprint and mount an attack. how do you reflect things like this? should we build systems that, whenever there's a
fingerprint in a picture, smudge it so we don't expose it? there are rules like that which would encompass all those cases and would be buildable. what i'm trying to get across is that coming up with the rules that define those capabilities, the things that should or shouldn't be done, is a very complex problem. >> thank you. >> so, thank you, guys, for another excellent panel. my first question, and this goes back to what i had said on a previous panel, which is that i view our job to be translating these ideas, these concepts, these concerns into practical recommendations. so, starting with you: what have you found effective as a privacy officer to ensure your very large work force, your complicated work force dealing with emerging issues, takes privacy seriously, that your rules are enforced, and that from the beginning privacy is a part of your culture? this is free advice to the new privacy officer over at nsa. >> well, becky, as i alluded to in my opening remarks, you know, one, there's no silver bullet. you need to take a number of approaches, and we've taken a number of approaches to drive awareness and sensitivity around privacy throughout our work force: mandatory training that's required for all employees and covers a range of ethical and compliance issues; deeper role-based training that's specific to software engineers, to sales and marketing people, to the different roles that people play in a company that impact customer privacy. as i mentioned, we have not just sort of told people what the rules are and then crossed
our fingers and hoped they abide by them. we have put in checkpoints in the way that we have developed our internal systems, the way you develop software and get it out the door: it has to go through certain checkpoints and reviews to ensure that privacy issues aren't missed or overlooked. there's a number of things that we've done along those lines to make sure that people are aware and have the tools available to them to do privacy right. but then there are also different checks along the way to ensure that mistakes don't get made. nothing is perfect, of course, but we try to take a multifaceted or multi-layered approach to make sure we catch those things. >> let me follow up on this, and it's a somewhat specific example but hypothetical. have you found training to be more effective, or effective enough, in the absence of pairing it with mechanisms and processes -- that was a horrible question, so i'm just going to start over again and say: the 702 program has certain legal requirements. in the private sector, would you train to those legal requirements, or would you also have, for example, when an analyst is sitting there attempting to target or select or whatever they're going to do, at each stage of the screen or the process or however they're doing it, the rules reflected in the computer system that they're attempting to use? >> we do both. to the extent that you can use technology to enforce policy, that's always super effective because you get past -- or you reduce -- the potential for human error. but that's not always possible. you can't completely prevent mistakes, oversights, or
intentional bad acts. so you need to do more than that. you have to build the awareness so that the inadvertent stuff is reduced. you have to build in the technology tools to prevent that from happening. then you need some level of checks to make sure that everything went right, so that if you have somebody trying to circumvent a policy for whatever reason, there's some way to catch that before it creates a negative impact. >> and so i think i have time for one other quick question. in the section 215 program, one of the features was that, in fact, not all of the call detail records went to the government. names are not originally provided to the government, nor subscriber information, simply numbers to numbers. would that be an example of de-identification and anonymization? that was my only question.
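[numbers-to-numbers analysis of this sort can be thought of as keyed pseudonymization: each phone number is replaced with a consistent token, so records can still be chained together, while mapping a token back to a subscriber requires a secret key. a minimal sketch of the general technique, with an invented key; it is not a description of how the 215 program actually worked.]

    # sketch: deterministic, keyed pseudonyms for phone numbers.
    # the same number always maps to the same token, so contact chaining still works,
    # but reversing a token requires the secret key (or a lookup held by the provider).
    import hmac, hashlib

    SECRET_KEY = b"illustrative-key-held-by-the-provider"   # assumption, not a real key

    def pseudonym(phone_number: str) -> str:
        digest = hmac.new(SECRET_KEY, phone_number.encode(), hashlib.sha256).hexdigest()
        return digest[:16]

    call_detail = ("202-555-0101", "202-555-0199")
    print(tuple(pseudonym(n) for n in call_detail))   # tokens, not subscriber identities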
>> i have a couple of very sort of brief questions, which i think you can answer very quickly. that way i'll get them all in. okay. i'll begin with -- you talked about how it would be good for us, and we already do have technologists on board -- based upon your knowledge here, does the government have technologists who worry at all about privacy? i know they have technologists, obviously, but based on your observations and study of the field, is this something that they consult with the technologists about: hey, we need this kind of information for national security, but how would we like to get it, or as much of it as we can? what's the balance? does any of that kind of thing go on inside the government with technologists? >> right. so having worked a lot with the government, i know that they consult technologists a great deal on security, on privacy, on compliance issues and on how to engineer software that takes all of that into consideration. i think if we look at the past five or six years or so, you'll see that the nsa was really, really focused on compliance. i think the results of the reports and the oversight have shown that they've done a really good job with that. when there's been an issue, they've dealt with it. i think someone mentioned the new cpo at nsa. what we'll see that's different now is that not only is complying with the law going to be something that's factored into all of the software that's developed and all of the tools and techniques and procedures, but also now: well, just because it complies with the law, should we really be doing it? and what's the extra step we're going to take to really consider
privacy at the onset? >> so you sound reasonably satisfied with the fact that they're taking it seriously and doing the best they can? >> i absolutely do. i actually feel very comforted by the fact that the government has a ton of oversight and a ton of laws to comply with. and i personally am much more worried about the large amount of collection that's taking place in industry that people don't really understand. >> all right. so i can get on to my next -- mr. bedoya, you talked about how important it was to limit collection to what was necessary or purposeful, et cetera. but in light of the fact that so many of the experts on both panels talked about an almost inevitable momentum of collection, collection, collection, where would you look -- what part of the government or where would you look for the mechanism to try and limit the collection or get that kind of impediment or balance done? >> certainly. so i think folks have been saying that it's inevitable that industry is going to collect all this data. i don't think folks have been saying it's inevitable that government will collect it. taking that as a given, i think the question is about reconstructing the firewall between government and industry with respect to data collection. and so i would be surprised if anyone on this panel, or the previous panels, thinks that it's inevitable the government will collect all this data. one other quick point on your previous question: i believe that the congressional committees that conduct oversight on fisa and on foreign intelligence, certainly the senate judiciary committee, lack technologists. >> we talked a little bit about that in our first report, on fisa reform there. okay.
mr. hintze, you talked earlier -- you said one of your principles was there shouldn't be any bulk data collection. now, terminology varies all over the place; it would help me if i knew what you meant by bulk collection. at a gathering i was at, they talked about the great importance of public health data, especially for when epidemics come along or that sort of thing, so wouldn't some of that come under your ban against all bulk data collection? >> i was talking specifically about government surveillance programs. >> okay. i just wanted to clarify that. and what do you mean by -- give us an example of what you call bulk data. there has been a debate over whether this program or that program falls under bulk data. >> certainly. i had in mind the 215 program in particular, where the government goes -- >> it's not targeted. >> yes, it's not targeted. correct. >> okay. i think that's all i have right now. >> we may be able to go back to board members for additional questions. i would like to continue with this panel up until the top of the hour. we have one question from the audience, which i will read, and we welcome others if others want to pose questions. in 2005, the national academy of sciences studied whether pattern-based data mining can anticipate who is likely to be a future terrorist. it concluded that this wasn't feasible. the question is: pattern-based data mining in the terrorism context, is it feasible today and will it be feasible ten years from now? would anybody like to address
that? hadi? >> i don't know specifically about terrorism, mindful of what ed mentioned, that we have limited data. but there is a program that has been running in las vegas and with the lapd. we may not necessarily be able to identify specific criminals, but our predictive modeling systems have been at work; they're able to make a reasonably good prediction about where criminal activities are more likely. it is not precisely the question that you're asking, but i can assure you that it is just getting better. i can assure you that any service provider that has the amount of data that we are generating, and more and more is being generated, is honing and fine-tuning and polishing their models.
