Data Breaches Cyber Threats Panel, CSPAN, May 31, 2018, 11:10am-12:18pm EDT
the consumer federation of america hosted their morning forum bringing together government, industry and public policy leaders to discuss consumer issues. we continue now with remarks from technology experts on the recent data challenges at equifax and facebook and how those corporations are dealing with the fallout from security breaches. this is about an hour. >> good morning. if you will have a seat, we will
commence with the last session of the conference. good morning. can you take your seats, please? thank you. well, thanks to all of you for staying for this session about the insecure digital world. i'm susan grant, the director of consumer protection and privacy at the consumer federation of america. it's interesting: just this morning the issue of cyber security was touched on many times in the sessions. it's come up throughout the whole conference. so this is certainly very
timely. with so much in the news about data breaches, internet-connected devices that can spy on us, and our data from our online activities being collected and used in ways that we never would have expected, it's no wonder that so many americans are concerned about the security of their personal information, and that's what we are going to talk about today with three excellent panelists. michelle richardson is deputy director of the freedom, security and technology project at the center for democracy and technology. she specializes in privacy and security issues and previously worked for the aclu and on capitol hill. justin brookman is the director of consumer privacy and technology policy at consumers union, the policy arm of consumer reports.
previously he was policy director of the ftc's office of technology research and investigation, and before that he was with cdt. and steven roosa is a partner in holland & knight's new york office, where he focuses on advising companies on a wide spectrum of technology and legal issues pertaining to privacy and security. we are going to have a conversation amongst us and we will save time for questions at the end for all of you. okay. so i'd like to start with a general question, or really a series of issues, for all of the panelists to respond to. maybe it's never going to be possible to have 100% perfect security, but it seems like in
many of the data breaches that we hear about, they could have been avoided, and i think the poster child for that is the equifax breach, where there was a software vulnerability that they knew about and they simply failed to patch it, allowing hackers to steal very sensitive information about millions of americans. and just this morning there was another revelation that thousands of people's passport information was also compromised in that breach. so what i'd like to ask the panelists is: what are the main factors that contribute to these kinds of security failures? is it just not paying enough attention to security? is it not having the right procedures in place? is it not committing enough resources to it?
what's the problem? >> sure. so i think you're absolutely right, a lot of the data breaches we hear about are completely avoidable, and equifax is a good example of that. you still to this day hear about lots of cases where an unencrypted laptop left in the back of a car got stolen. the key is the lack of incentives: companies don't bear the cost of poor data security in a lot of cases. in some cases data breach notification might be triggered, and that's annoying and expensive and you have to hire steve. so there is definitely a cost there, but they aren't the ones who bear it. when the consumer's credit is attacked, when they have identity theft, the companies don't bear the cost of that. the ftc maybe has data security authority; they've brought a lot of cases; they use an old statute from 100 years ago to say that
companies are required to use reasonable data security, but that's being attacked in court right now, and even if the ftc can bring an action they can't get penalties. the ftc can say you used unreasonable security, and the company signs an order saying yes, i did, i will stop doing that. so the companies can treat poor security as a cost of doing business. it probably should be a cost. they can't have 100% security, but i think it should be a greater cost than it is today. >> michelle, do you have any thoughts? >> sure. i think we usually find that many times companies will say that because security is complicated, we can't endorse a list of best practices. but when these breaches come to light, they are not using even basic industry practices that are actually really cheap and baked into services right now. it's things like encryption, multi-factor authentication, least-privilege access -- things that don't actually require a huge technical skill set. it's
something that your i.t. contractors absolutely can do; someone just needs to make the decision to do it. so when you hear, well, you can never account for these sorts of situations and there is no way to fend them off, that's really just not true. i think the joke is always that there are two types of people: those that have had breaches and those who just haven't found out yet that they have had breaches, right? it's not really about whether you can fend off an advanced persistent threat. if you are being targeted by north korea or iran for your intellectual property, that's one situation, but what we see is that even consumer-level data that's going to be resold for pennies on the black market is not getting these basic protections. it really does go back to the incentives. there is no consequence at this point. we have seen in the last few years that states are getting out there because the federal government is totally paralyzed, and so we are hoping that these state laws are able to do things that we just can't get here in
d.c.: accountability, enforcement and penalties that really make it worthwhile for them to fix their systems in a systematic way and not sort of brush off each breach. >> all righty, then. so to push back on some of that, just gently: working on the data security side of things in a law firm, we have to respond to incidents and also do preparedness and response. you know, from where i sit, my observation is that it's actually really hard to get the level of security that you would want in large organizations. complexity -- any security professional will tell you complexity is the enemy of security. when you get really large organizations you've got all kinds of different systems, some legacy systems, et cetera, and security is absolutely hard and it costs money. and a lot of these companies that have been involved in breaches
actually have really top-drawer security departments and spend a lot on security. take microsoft, for example, in wannacry -- and i'm not pointing fingers at microsoft; there is actually a way to point fingers at the u.s. government, and we can talk about that later -- but they spend a billion dollars a year on development security, right, when they're making, i don't know, maybe $20 billion in profits. that's a huge budget outlay. so i think it's really hard. and let's take updates and patches, even. if you are in a large organization and you get updates and patches, you can't roll those out right away. they could introduce additional vulnerabilities and you need to test them. if you had done immediate updates, for example, for the intel vulnerabilities, meltdown and spectre, you would have ended up bricking all of your machines, because it turns out that the firmware update for certain
classes of hardware wasn't going to work. so there is a danger in the updates themselves, and that's hard. and then the last piece i would say, and this is going to be the most controversial one, i think, is in terms of harm and cost of doing business. you know, in a data breach -- by the way, i would say also that the cost of a data breach, even if there is not a lawsuit where you end up with damages against the company, just in responding to it, both with the security response vendors and the law firms, is a hell of a cash burn per month, even for large companies. you know, when companies have data breaches they're terrified. this is not something that they take lightly. they don't like the brand hit, they don't like the cash burn, and there are job security issues for the people who own the outcomes for all of this. so there's that. but then the damage on the consumer: you know, if you have
identity protection or you can put a credit lock on your account -- like, is it annoying? yes. are some people really harmed? yes. but in the aggregate, what are we really looking at in terms of consumer harm? you know, i don't know. i don't know if it's as outsized as all of the horrific headlines in the media that we see. that's the other side of the conversation. >> i think one thing we're seeing, though, too, is that there is a rapid move to software as a service and the use of the cloud, and all of this is being centralized, right? it's not the world where you had 20 different software programs; we're very quickly moving to a place where you are going to have fewer decision-makers in the security setting, and they will be able to make smart decisions on a scale that maybe individual people cannot make. but the other thing, too, i think we need to push back on the companies about is the extent to which some of these breaches are about individuals
having their passwords guessed, right? or being phished into clicking on a link that compromises their entire computer. we have technology systems that make those things way too easy. you still sometimes go to log in to an account and it says, well, you need 20 characters and you need an at sign and an underscore and three capitals. it has been debunked for years now -- this is not a way that people can manage their passwords. it means they write them down and they use the same one over and over again. but someone on the tech side of the company still has not switched to something that's going to be usable for your average person. so to the extent that we can get companies moving in that way, they will actually be helping their consumers to better control their information.
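the usability research michelle refers to points toward checking length plus a breached-password blocklist instead of composition rules. a minimal sketch of that approach, in the spirit of nist sp 800-63b; the blocklist here is a tiny illustrative stand-in for a real breach corpus, not a shipped dataset:

```python
# a minimal sketch of length-plus-blocklist password checking, in the
# spirit of nist sp 800-63b; COMMON_PASSWORDS is an illustrative stand-in
# for a real breached-password corpus.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def password_acceptable(candidate: str) -> tuple[bool, str]:
    """accept long passphrases; reject known-breached values; no symbol quotas."""
    if len(candidate) < 8:
        return False, "too short: use a longer passphrase"
    if candidate.lower() in COMMON_PASSWORDS:
        return False, "appears in breach corpora: pick something else"
    return True, "ok"

if __name__ == "__main__":
    for pw in ("qwerty", "correct horse battery staple"):
        print(pw, "->", password_acceptable(pw))
```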
>> i will add, i agree that there are some companies that do a pretty good job of it. sure, microsoft invests a ton of money in the space; microsoft was supporting vista, a 12-year-old operating system, until a couple months ago. then you look at some of the iot products coming out today and you look at some of the default settings and they are objectively terrible. you look at the ftc's case against d-link, and they were using ridiculous practices. the case against wyndham hotels, a very sophisticated company -- a lot of the things they were doing could have been stopped. again, microsoft does a pretty good job, apple does a pretty good job. your operating system on your desktop computer gets regular updates, and they do a good job with that. your phones -- some of them, like apple phones, get a lot of updates; android phones, maybe the flagships, maybe not. the ftc did a report that i worked on when i was there that said it's a crapshoot. even super expensive phones sometimes get zero support. you look at iot devices like refrigerators, routers, anything else -- do they even have a process for getting security updates? a lot of them don't.
so i agree, yeah, the ones that are in the news do often have systems in place, though even for the intels of the world, who bears the cost of spectre and meltdown? intel has to do some patches, but ultimately that falls on the consumer. for a lot of other companies out there, i think, again, the incentives are not in place for them to take it seriously enough. >> thanks. we are going to turn to incentives in a bit, but, steve, your firm has an actual laboratory that it uses to test whether your clients are actually protecting data as they claim. >> yeah. >> can you describe what that is and how it works? >> yeah. no, sure, thanks. so it's sort of unusual for a firm to maybe have this type of thing, but we have an internal testing lab where, for clients that have consumer-facing websites, mobile apps, iot devices, we will set up a
special network environment, capture all the network traffic to and from the device during test sessions, also do a code analysis, and really get after issues like data leakage and designed-in data sharing at the code level that may not be known even to the people who own those outcomes in the company. so, in terms of data breach, i don't know if it necessarily gets at those issues per se, but it does get at video privacy protection act issues, children's online privacy protection issues, gdpr issues. to take two seconds on it, because it plugs into larger issues here: one of the things you see with development is increasing modularity. so we have third-party code we're going to take from here and third-party code we are going to take from there to perform different functions for whatever our software is, whether it's a mobile app or different types of software. those third-party code libraries
will call out during their operation, and a lot of times what the developers think that code does versus what the code actually does are two very different things, and that impacts privacy and security. so we try to really get into the weeds on that. thank you.
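to illustrate the traffic-capture side of the kind of lab steve describes -- a minimal sketch assuming a mitmproxy-style intercepting proxy, with the device under test routed through it; the script name and setup are illustrative:

```python
# a minimal sketch of logging every third-party host an app under test
# calls out to, assuming a mitmproxy-style intercepting proxy; run with
# `mitmdump -s log_hosts.py` while the device routes traffic through it.
from mitmproxy import http

seen_hosts: set[str] = set()

def request(flow: http.HTTPFlow) -> None:
    """record each new outbound host so designed-in sharing shows up."""
    host = flow.request.pretty_host
    if host not in seen_hosts:
        seen_hosts.add(host)
        print(f"outbound: {flow.request.method} {host}{flow.request.path}")
```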
>> thanks. it really sounds like a great service. let's turn to the security concerns about internet-connected devices. we couldn't bring our colleague finn myrstad from the norwegian consumer council here, but we're going to show a video that his organization made. they studied several different brands of smart watches for children that parents can use to keep track of them and communicate with them, and as you will see from the study that they commissioned from a security consultancy firm, other people can communicate with and track the children as well through these devices. >> uh-oh. volume? ♪ ♪ >> these smart watches for kids might seem like a good idea, but what about the kids' right to privacy, and how secure are they really? we are about to find out. good to see you, sir. >> good to see you. >> so, harrison, you've been looking at the security of these smart watches. how easy was it to circumvent them? >> a lot easier than we expected. they were missing standard best practices that you would expect to see on these types of devices, and as a result we had a lot of
security findings. >> in what ways can a person or attacker get access to these watches without actually having them in hand? >> an attacker would need a unique identifier for the watch, such as an imei, and use that as part of the registration process. i type in the verification code that i have, i forward the request along to the server, and now i have associated finn's watch with a new account that didn't originally have access to it. >> and this number is something that you could also find online. you don't need to have access to the watch to get this number. >> no, absolutely. you do not need physical access to the watch to get this number. this is just one of the methods that we have identified. >> in the research on these smart watches, what were the most surprising findings? >> we identified that there's the possibility to use these devices as a spy device without
the kid ever having to activate any functions on the watch or even being aware that something is happening. i'm going to send the text message here. it will automatically call me back; i can press answer on my phone here and -- >> am i talking back to you now? >> one of the key functions of these watches is that parents can track their kids. on my way here i wore all of these watches on my arm. what can an attacker do with this information? >> an attacker would have access to all the location history that would be stored in the parents' app. we also identified some problems with location history where an attacker can manipulate where the location of the watch looks to be in the app. >> here i can see all the location history. >> this was my walk here, actually. >> absolutely. and i can see the exact time that you were at this location. >> wow. >> right. so with the attack that we have done here, we've changed the location data that was sent from
the watch, or between the watch and the server, so we make it look as if the watch is in london while in reality it's sitting here with us. >> these smart watches have an emergency sos button. did you find any problem with it? >> we did identify problems with the emergency functionality. normally a child could press the emergency button and it would initiate a phone call back to the parents, but an attacker with control over the app could change the phone numbers that are supposed to be called, or they can delete them entirely. >> the sos has showed up, so it should have activated the sos function. okay. so now it's calling back to my phone. >> so you've taken over; you are now a person who should not have access to this phone. >> right. >> you have put in a different phone number than the parents' phone. these smart watches collect a lot of sensitive information about children. is it stored safely? >> not as safely as you might think. on some of the watches the
communication was not encrypted, so anybody sitting on the network could see that information passing back and forth. some of the servers don't protect the information the way it should be protected. with one of the watches it was actually possible to retrieve data for other users and see location information about other people. >> as we have just seen, these watches are clearly not safe for children, and they also violate children's right to privacy. until these issues have been resolved they should be removed from the shelves, and in the long term we need better rules to protect children and grown-ups from privacy-violating products. >> well, removing these things from the shelves is really difficult.
these watches, for instance, mostly are made in china under many, many different brand names, and contacting the companies behind them, which finn's group tried to do, proved really difficult in many cases. the federal trade commission certainly doesn't have the ability to issue recalls, and as we heard from acting chairman buerkle this morning, the cpsc doesn't see, at least at this point, that it's within its remit to deal with these kinds of security issues, which could actually put children physically in danger. so what to do about this? one thing that's happening, justin, is that consumers union has launched a new program to test and report on the privacy and security of these kinds of
internet-connected devices. so can you tell us why cu started the digital standard program, how it works and what you've found so far? >> yeah, so it's really more about consumer reports. consumers union is the advocacy and policy wing, but consumer reports has an 80-year history of testing products and looking at tvs -- do they work, are they good -- and cars especially; a lot of people rely on cr for cars. so we're really good at evaluating products, but now products have new dimensions: they're internet-connected, there are other things to be worried about. are there data security concerns? so about a year ago the organization put out this digital standard, which is the metric by which we're going to start looking at companies on some of these new values. we partnered with a few organizations that have a lot of experience in this area: ranking digital rights, which is part of oti, which did a lot of work looking at policies and evaluating which ones are better
and worse; disconnect.me, which is the maker of an ad blocker, with a lot of experience in the ad tech space; and then the cyber independent testing lab, which had a lot of experience with testing cyber security issues. so we put the standard out about a year ago: what some of the privacy best practices are, what security best practices we are looking for, but also other issues, too, like repairability -- can you actually repair your own device -- interoperability, and ownership: do we even own our devices anymore? michelle talked about how everything is running software as a service now. do we really own our refrigerator, or is it a refrigerator as a service that can be bricked remotely when someone wants to? these are the values we talked about, and now the challenge is translating that into actually rating products. so we started to do this. we announced our first test case looking at a bunch of smart tvs; this is an issue i worked on at the ftc and cared a lot about.
tvs traditionally were just a screen: you plugged your cable into it and you would watch it. now they have a lot of software on them. you can use them to connect to netflix and amazon, and you can use them as skype or a web browser. a lot of these companies -- this is a thin-margin business with a lot of chinese companies who are just churning out screens for low cost -- are looking for ways to make more money. what a lot of the companies are doing is they want to watch what you're watching, so they run software in the background to send screenshots back to their servers so they can process that and say, oh, justin is watching a basketball game, justin is watching "the good place", and they can build a detailed log of the things you're doing. they look at google and facebook, who are printing money by serving targeted ads, and they are interested in doing the same thing. we looked at these under the standard and tried to say which ones are better, which ones are
worse. it's interesting: they all kind of try to get permission for it. there was an ftc case against the manufacturer vizio a couple years ago that said that tv viewing is sensitive, so you have to get permission. so they all kind of get permission; they all throw up a screen where they talk about content recognition or targeted content, but you can tell in a lot of cases they are definitely phrased to get you to hit accept -- the skip button is hidden in a corner. how do we evaluate and assign a score to that, to a user interface? so for the first one we didn't feel comfortable picking winners and losers, but we learned a lot from it. i have my thoughts about it, but cr is a careful organization that wants to get it right. we have a history of testing things and a scientific process in place, so i would like to just say whatever justin feels, but we actually have to have a little more rigorous process than that. the goal for the next set of these, which we will be releasing soon, is to pick best and worst, to rank them, and then with the
goal that one day we will be able to feel comfortable assigning a score, to say this tv gets a 30 on privacy because they don't tell you that they are watching what you're doing, they don't get security updates, they don't do this and that. that's the initiative we've launched, and the goal is to make it part of our standard evaluation of products, because this is a feature, an area of interest, for more and more products. >> so this is obviously going to be helpful to consumers, because they will be able to see these rankings and take them into account if they are the kind of consumers that we like, that research before they buy things like appliances. maybe it will put some pressure on the producers, although i'm not sure how much these chinese companies are going to care. >> the big companies do. >> yeah. >> i would say the samsungs and the people who make a lot of products, especially in the car space, where i think people trust cr on cars -- they are very
interested, so they've all called and wanted to talk to us about it. >> that's good news. so, michelle, what consumer reports is doing and what holland & knight's testing lab is doing for its clients is really helpful, but is it enough? and what more do you think needs to be done? cdt recently issued a really interesting paper about strict liability and how that might force companies to pay more attention to security. could you talk about that and share your thoughts? >> sure. so we're very happy to see what consumer reports is doing. it's kind of funny: i spent a lot of time with the executive branch in convenings and interagency blah blah, and for years the companies have been saying, don't regulate us, let's just do something like energy star for the internet of things. and then consumer reports came out with an energy star for the
internet of things, and they were like, wait, we thought we were going to do this for ourselves. it has really changed the conversation. there hasn't been a critical mass of reviews yet, but i think it has kicked people in the pants to realize that consumers are going to organize, and there's always value in consumer self-defense. it's only one piece of the system, but where we are now, where the political process and legal system are failing us, it's more important than ever. and so we did try to look at how the legal system is handling these cases, and we ended up focusing on strict products liability because it seems to be the only way to recover right now: only if someone is hurt seriously or something is destroyed are people finding their way into court. often these cases are actually settled quite early, so there is not evolving case law here. to the extent that people have died, it's in car crashes; another popular area is medical devices --
i think we've seen in our paper that it's things like cancer treatments, pain administration. all these things now are actually hooked to the internet, and the failures are really life-and-death issues. to the extent that this can evolve, i think it makes sense, because these producers, whether it's the hardware or the software, are the only people in a position to make serious decisions about the security. this is unlike other things that have a physical component or things that people intuitively understand. you cannot make informed decisions for yourself on every connected device you possibly own. i think there are a lot of people who have connected devices and don't realize it. you don't realize your comcast remote is always on, listening, and that's the default when you got it. or your tvs, and the dumb things that are now being put on the
internet, with toothbrushes and, i don't know, flashlights -- it's pretty much endless, right? for a very long time they said, just don't buy it, right? but someone said, you know, we are not going to have a choice much longer. the first thing to fall was cars, right? you are not able to go out and buy an unconnected car; you are not going to be able to make that choice to say, i don't want to put myself at risk in that way. and then there are the right-to-repair-related issues: the licensing agreement they put in says that you actually can't disconnect the car -- it will void your warranty. so there is going to be a point where we can't opt out. if we are going to create that system, that means only one group of people is going to be responsible, and it's going to be the people who built it. i would say that the companies are getting nervous about there being some sort of negligence regime possible. we are seeing more and more legislation here in d.c. with liability protections, and they
are usually incredibly broad. they will say something like, if there is a reasonable standard of care you are not liable, and while that may seem like, okay, maybe that's trying to explain negligence, what they do on the back end is say, but you can't actually bring suit as a consumer. we are going to kick it to your ags, but then the ags have to bring a single suit all together, it has to be before the d.c. circuit court, the ftc can intervene -- they are making sure recovery will never happen. the other thing they have going is a limit on damages of a million dollars per incident, not per account. for equifax, that would be a single $1 million fine on behalf of every single consumer in america. that's the proposal before congress right now. so i've gotten myself worked up over this -- >> that's okay. >> we're really not moving in the right direction yet.
i will just wrap up to say i do think cambridge analytica has been a turning point, partly just because people were shocked about how much data was there and how it was being used. i think a lot of us who followed it knew this; we knew how bad the terms of service were, but it was never explained to average people in a way they could understand. we're finally turning that corner, and i think serious privacy and security legislation could be a reality in the next couple of years. >> maybe. i'm going to turn to justin and steve for their thoughts, but i do want to pick up on one thing you said. yesterday we heard from illinois attorney general madigan about the good things some states have done to enact privacy and security laws, but we also see things like a recent bill in ohio which would create a safe
harbor shielding companies from liability for bad security if they have followed certain voluntary standards, like the ones developed by nist or other sorts of voluntary bodies. you know, we oppose this bill because we think that it's a really bad idea to prevent people from being able to bring actions to hold companies accountable. some might argue, though, that it would actually give them an incentive to do better. so why don't you respond to that, as well as give your thoughts on liability in general. >> sure. so on the point of the strict liability proposal, for example, i would take a step back and say, when you look at tort law for product liability generally --
and i'm not a product liability lawyer, but here it goes -- when you've got a license or a consumer contract, the harm that results that's purely economic in nature will be governed by the contract, and if there's personal injury you can sue in tort. this is something that has sort of held sway across the landscape of consumer products for quite a while. and in terms of strict liability, i believe that you have to have known dangerous instrumentalities in order to actually get out of normal product liability and into strict. even in strict liability you still have to prove a product defect and, you know, that it's the cause. so in terms of what in law school they would call the least-cost avoider, i'm not so sure it's the companies that are actually making the products, because, again, on the modularity piece, both for code and for hardware, it's
not like the companies that are putting the products on the shelves have made most of the hardware that's in the product or written most of the code; it's probably unlikely. they have made some of the hardware and maybe written some of the code. and then there is the issue of what we are talking about: are we talking about pacemakers and autonomous vehicles and bloody wrecks on the highway -- are people dying because they're getting zapped by their pacemaker -- or are we talking about data breach and things like that? i would counsel caution on any change in the default structure of our tort liability system, which i think, because of the way that it's set up, actually leaves some play in the joints for innovation and for folks to make mistakes that aren't grievous and not have companies be put out of business for that. think about, for example, the htc case from several years ago
by the ftc, where there were issues with the way that htc was putting phones on the market with pre-installed android permissions that really were not done in a secure way, but the installed user base was something like 18 million or 20 million people. there was not a single incident that i know of, or that i have read of in the media, of a known compromise, a known vulnerability, a known harm. should we hold htc to a strict liability standard and put the company out of business with no known harm? i would say to put the brakes on in terms of strict liability. i think that would have to be considered very carefully, and probably there is a big difference in terms of how we think about pacemakers and autonomous vehicles, possibly, versus network-connected fish tank thermometers and things like that.
that was the cause of a data breach in the case of a casino. we don't need strict liability to know that we shouldn't hook up fish tank thermometers to our company network. >> that might help. i mean, i think it's a really interesting idea. i really enjoyed the paper; i recommend people at least take a look at it, because i've often wondered about the intersection of strict liability and warranty law and all the things that are on the internet of things. look, we already have strict liability for a lot of data breaches. data breach notification is a strict liability cost. if you have a data breach and you lose someone's social security number, it doesn't matter if you have the world's best security -- there is no reasonableness inquiry into it. you have to notify people. that's a cost, and i think that's a feature and not a bug. we are actually trying to extend that in a bunch of states to other sorts of information, like online cloud storage. if your online account gets hacked you should be told about it: one, because you should know; also, it's good to impose costs on companies for going through this. a lot of money goes to steve, and maybe it's not the most
efficient allocation of resources in the world, but it puts costs on companies -- >> pretty efficient. i was just kidding. >> it puts costs on companies to make them take security more seriously. i'm interested in this idea of doing it more generally, where you don't even need to consider the reasonableness; you just bear the cost of what went wrong. i think steve makes a good point that the way it's been interpreted in the past doesn't quite translate over here. we talked about defects -- i mean, the problem is code is inherently defective. all code has defects. whatever the next version of android that comes out -- google will pump trillions of dollars into it with the smartest people in the world making it, but it's going to have defects. so the key for security is to recognize that, and then have in place a system to recognize defects after the fact and patch them and deploy the patches and get people to install the patches and remediate as best as possible. so how does that affect strict liability? i'm not entirely sure. i also think it's a good
question about where in the stack the liability should lie. android and linux are technically open source -- android is fairly open source. google changes the code and pushes changes to it, but it's really a question of how htc or samsung implements that code, and which of those players bears the responsibility for that. so i strongly agree that the incentives need to change. i have traditionally thought about it as, if you use bad practices then you bear some costs, but i'm interested in the strict liability idea. it is a little bit fraught, but, again, something needs to change in order to get the majority of the players in this space to take it more seriously. again, there are still the supply chain issues. the ftc brought a case against a smartphone manufacturer that was white-labeling a chinese phone. do they bear the responsibility for the defects that the chinese
manufacturer put in place? the ftc said yeah. there are complicated issues that need to be unraveled. >> any of the other panelists have additional ideas for what might provide good incentives for better security? >> so just to hop in very quickly in response to justin: as between government action changing the default settings of tort law, versus having a private company that sets up standards and holds companies publicly accountable to those, it seems to me that the private sector solution -- unless deemed to be an utter failure, and it's of course just getting off the ground -- is the forum that probably makes the most sense and has the least chance of disrupting innovation, and could actually be an engine for innovation. so i actually think what
consumer reports does is great. again, i think that with the ftc, certainly there are some of these investigations that we understand, and then there are others that we think are just head-scratchers and whatnot. so i really applaud the private sector efforts of consumer reports. >> thank you. private sector self-regulation has a role. self-regulation is most effective when there is a threat of real legislation, and right now we are in one of those moments; right now there might be incentives for things to be put in place. in the privacy space in 2012 there was interest in privacy legislation, especially around ad tech. industry said, we're going to do do-not-track and we will take care of this for you. then some of the momentum behind legislation fell away, and do not track fell away. self-regulation will always play a role, but it will not be sufficient by itself.
it has not been sufficient in this space, given the extensive data security failures of a lot of companies. so, again, recognizing that perfect security is both impossible and undesirable, i think still it is a continuum and we need to move the needle farther in the direction of security. >> and i realized when i was speaking about strict liability i kind of bled into some of the efforts to do something below that, at a more negligence level, and that's what some of the states have started doing with outcome-based sanctions. so there is always the option of some sort of legislative response, and that's what you are eventually going to need. i think we kind of get in this rock-and-a-hard-place situation where they
say you can't write standards for the industry, it's too complicated; let us figure this out as things go wrong. one thing i'm excited about with the trump administration is that they want to get harder on their contractors and improve their own security, and they're going through an i.t. modernization process where they're trying to allow agencies to save up money and make big purchases and shared services, and sort of slingshot past the constant updating of crappy old systems -- they only have enough money to update their ten-year-old windows over and over again. so as a purchaser, the government can affect the private market. so i'm hoping that through this process they are really clear about what minimum standards are.
they can affect what the big companies actually put out, because that has happened in the past and it's something that they are considering. i know they've said they have reached out to ul and others because they want to try to use their purchasing power as opposed to direct regulation. i would say, though, too, that just as with how privacy works in this country, security may follow where we do it industry by industry. the agencies and congress are much more involved in select areas like health devices and cars, and there might be other areas -- highly regulated industries, systems that are already reviewed for safety -- where cyber security is just sort of layered on as the next thing. there are so many cyber-physical risks anyway, right? if you are trying to understand how these things work, it doesn't make sense not to consider cyber security while you're doing it. that hopefully is something that can lift all boats. >> just for two seconds, again sounding the caution bell for how well privacy legislation or regulation has worked: in the case of coppa,
originally the government wanted to protect kids from porn, and it turns out that you can't really do that effectively because of the first amendment. so instead what we ended up having is a regulation that protects kids from lego ads, because you can't have targeted or interest-based advertising, so kids get ads for sushi. it is a regulation that has cost -- geez, it's friday -- has cost industry millions of dollars. it has taken content off of the market that would otherwise be available, because it's too expensive for small providers to put content out there, and kids get sushi ads instead of legos. it is a regulation that has never protected anybody from anything. we need to be careful before we go and do that. >> oh, i'm making a note: we're going to have a session on coppa next year for sure. we're going to go an extra ten minutes because we started late, so there's one more thing that we're going to talk about before
we open it up to questions, and that is the latest outrage, facebook and cambridge analytica. some people are calling it a data breach, which i don't think is technically accurate, but there is no question that there are companies such as facebook that are collecting tons of information across various platforms and making it available to others to use, without necessarily having a system in place to ensure that it's only being shared and used for the purposes that are intended and that it's secure. so my final question for the panelists is: what should be done to monitor the trail of the data after it's left your hands? >> the easy thing to do is to lock down the platform, which is what facebook eventually did; it just closed the barn door a little too late. in facebook's case specifically,
it will be interesting to see what the ftc ends up doing with that because, as i said, in most cases the ftc can't fine you when you do bad things, but if you have previously done a bad thing and signed an order saying you are not going to do bad things, then they can get catastrophically large penalties, subject to the 8th amendment. a lot of people are saying this is an easy case. i think it's an interesting case. i think the ftc order is worded very specifically, so some of the theories some people are proposing i don't think are going to work. but the system they had in place was obviously prone to abuse, right? we wrote a letter to facebook about this in 2010, when i was at cdt, joined by epic and cdd and a bunch of other folks, saying this is ridiculous: any app that someone installs can scrape all your friends' information; you have to shut this down. they waited five years to do
that and, you know -- and to their credit, they did shut it down. i think they get credit for that, which is why it's interesting that a lot of the conversations now are switching to other facebook problems. the hearing with mark zuckerberg focused somewhat on cambridge analytica, but a lot of it was about all the other things they do, how they track you off of facebook. a lot of how facebook responds to that now is that that door is shut, but there are lots of things we can talk about facebook-wise. it will be interesting to see how they react, both on all the extra data collection they do and also on how they police their platform for being abused. but those are kind of separate issues. you get to the issue of, is it a data breach or not? it's an interesting question, because the end result was someone who should not have had the data got the data. does that make it a security incident? i don't know. whether the ftc would go after that by itself is an interesting question. they might. it also depends on what people
are told, what people's reasonable expectations are -- if they were told or had reason to believe that what they put on facebook would just be shared with their friends, not with whatever stupid farmville or whatever app they were playing at the time. >> i know we back at the cdt office are trying to think more holistically, so i don't want to say here is the solution. i think we're trying to talk to a lot of people and come up with a plan that will stand the test of time, right? that will account for facebook, but other situations too. i think one interesting idea my boss talks about is sort of a covenant that runs with your data, right? so often our terms of service say, well, yeah, you could use it for other things to give me my service and make my service better, and people assume that to really mean within the four corners of the service they have, and the sort of crazy way that spins out is really not
a fair way to use the data. so is there a way to better enforce this idea of what your permission means? i think some of the things that we need to separate out are security and privacy. to me this is very clearly privacy. security is objective: it is whether you can control who accesses and uses your data, right? but privacy is a normative value of what we decide about how it can be used, right? you are now in the box of what's legal and what's under your control. here, this is definitely a privacy violation. they knew what they were doing; they chose to set up the system like that, and it just didn't sink in for people what that meant. security and privacy are very closely tied, but they're different, and i think there can be different consequences for each. i think what we're hoping will
come of this is that maybe there are some types of data that are absolutely protected because they're just so crucial, right? i mean, the recent dna case was very interesting. talk about something that you can never get back and never change -- how do we think about dna? and i know our goal, too, is to get beyond the notice-and-consent model as sort of a generic out, because that has not worked; it's not been meaningful and it is not serving people in any useful way. >> steve, any last thoughts about audit trails? >> yeah, so it's hard, i think. not that that's helpful. but companies are trying to deal with this the best they can. we're seeing companies with an immediate need to actually stand up systems that do this because of complying with gdpr for eu-facing activities: you basically have to know what's going to happen to your data, and if you're going to share it with a third party you're putting your neck out,
potentially for 4% of annual revenue -- not profit, revenue. and so there are various ideas that folks are trying to implement in terms of recordation and tracking. there are certain required consents under that regime, especially for sensitive categories of data, which would include political beliefs and philosophical beliefs and whatnot. graft that onto the social network scene, and it's probably a very different group of consents that would be necessary to have released this data here. but then again you are going to have the problems of what happens when there's a breach or what happens when there is a deviation. i do think that, especially with the way that this is tied into the political situation, you're talking about potential society-wide impact for this type of data. i think there is
a sea change, and part of it goes to the fact that when you have a company whose entire economic model is leveraging your data to the max, this is a tension that is awfully hard to resolve. it's hard for everybody, and i would be lying to you if i said there was some great solution at this point.
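one hedged illustration of the recordation idea steve mentions: an append-only ledger of disclosure events tied to the consent they were made under, hash-chained so after-the-fact edits are detectable in an audit. the field names and structure here are illustrative, not any regulator's required format:

```python
# a minimal sketch of a consent/disclosure ledger: an append-only,
# hash-chained log of data-sharing events; field names are illustrative.
import hashlib, json, time
from dataclasses import dataclass, field, asdict

@dataclass
class DisclosureRecord:
    subject_id: str    # whose data was shared
    recipient: str     # third party receiving it
    purpose: str       # purpose stated to the user
    consent_ref: str   # version of the consent text agreed to
    timestamp: float = field(default_factory=time.time)

class ConsentLedger:
    """append-only log; each entry is chained to the previous entry's hash,
    so after-the-fact edits are detectable during an audit."""
    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64

    def append(self, rec: DisclosureRecord) -> None:
        entry = {"record": asdict(rec), "prev": self._last_hash}
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)
```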
>> we could all agree it's not easy, or it would have been solved already. i think we've had a good discussion describing what the problems and the issues are, and now it's time to open it up for questions. is somebody going to pass a mic around? thanks. rachel -- she does everything. please identify yourselves if you're willing; if you wish to remain anonymous that's okay, too. make it brief, whether it's a comment or a question; don't make a long speech. >> hi. i'm dan mccrory with the chicago consumer coalition. the question is for mr. roosa. in the last 30 seconds of your original presentation you introduced the concepts of brand hit and cash burn. indeed, the entire conversation seemed to be that there are corporations and there are individuals, and there are different ways of measuring their damages. now, i thought the supreme court just last year decided for us that corporations were individuals and we should judge them that way. it seems our entire conversation challenges that. am i misinterpreting some things?
>> i'm not sure that i actually grasped the question, per se, although i will certainly concede that in the legal realm, the fact that you have real concerns and you have legal concerns ends up with some odd results, for sure. my point on the cash burn thing was simply that i think the impact to companies is significant even for data breaches that don't end up in lawsuits, per se, and that -- at least this was my view -- there isn't necessarily a lack of incentives. i think there's a lot of scrambling and investment by companies, and disincentives to having data breaches and adverse incidents, but they happen anyway notwithstanding that those costs are borne and real and are felt. >> hi, i'm mike litt with u.s. pirg. just this week it was reported in fortune magazine that 57% of
the global fortune 100 companies have continued to install flawed versions of apache struts, which is the software that led to the equifax breach and which the fbi warned about over a year ago. y'all kind of touched on this a little bit, but i'd love to hear everyone's thoughts: if we don't require immediate implementation of patches or have penalties, what requirements or consequences will protect our data? because clearly there's still a problem. >> so i'm not familiar with the fortune story, but certainly the idea that even large, sophisticated companies fail to make updates is, given equifax, certainly plausible. how to regulate security is an interesting question. i've seen provisions as simplistic as thou shalt use reasonable security, and i've seen things like the massachusetts law, which is fairly prescriptive.
as for a mandate to install patches -- there are patches and there are patches. there are critical patches; do you want to tie something to a cve designation? i personally lean against more prescriptive language. i don't think government can specify, say, what level of encryption you use; the law can't keep pace with that even if the ftc had rulemaking. i err more towards a general reasonableness standard, but with more of a stick. i mean, i take steve's point that it hurts to have a bad security incident. it should hurt more.
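as a sketch of what tying patch urgency to a cve designation could look like: severity maps to a required patch window. the nvd endpoint is the real public feed, but the response-parsing details, thresholds and deadlines here are illustrative assumptions, not any standard:

```python
# a minimal sketch of mapping a cve's cvss severity to a patch window;
# the deadlines are illustrative policy choices, not a standard.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def patch_deadline_days(cve_id: str) -> int:
    """look up a cve in the public nvd feed and return a patch window."""
    data = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30).json()
    metrics = data["vulnerabilities"][0]["cve"]["metrics"]
    # use whichever cvss version the feed provides for this cve
    key = next((k for k in ("cvssMetricV31", "cvssMetricV30", "cvssMetricV2")
                if k in metrics), None)
    severity = metrics[key][0]["cvssData"].get("baseSeverity", "") if key else ""
    return {"CRITICAL": 7, "HIGH": 30, "MEDIUM": 90}.get(severity, 180)

# e.g. the apache struts flaw behind the equifax breach:
#   patch_deadline_days("CVE-2017-5638")  # -> 7 (critical)
```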
>> yeah, i mean, in terms of the fact that you've got software with known vulnerabilities still being installed -- one of the questions, too, is what company is installing it? not all back-end databases are created equal. if you are a credit bureau or a data broker or something like that, with lots of sensitive fields in your data set, that's one thing. if you are a company whose data is not nearly of that magnitude, yeah, maybe it's not a good idea, but you're probably not causing societal harm. >> if microsoft's clippy page is running an old version -- and that's another reason why it's hard to prescribe it -- then i think the ftc, or other regulators, or potentially private actors, should have the ability to come in after the fact and say, okay, that was unreasonable, and then the case law will build out what that looks like. >> hi. lee glass. i'm with the montgomery county office of consumer protection. i guess, mr. roosa, i'd like to start with you, or if someone else has some comments, of course. i'm going back to the idea of strict liability. your examples of htc -- and this is not my field -- and the pornography seemed, well, i don't know, a
little logical-fallacy-ish, if you don't mind me saying. with equifax, where there is an element of knowing, do you not think that strict liability would be appropriate if there is actual knowledge? i mean, just as a jurisprudential type of question. >> equifax is not a client; i can't speak for them, and i wouldn't concede necessarily that there was scienter. but let's unpack that -- >> [ inaudible ]. >> okay. but what i want to do is take your example sort of as a hypothetical, even if we are not attaching it to a specific company, and ask, well, what about strict liability? if you are talking about an entity where there's scienter for a bad act and you have resulting damages, why would you need strict liability? you have fault, you have intent, you have causation -- you don't need strict liability. strict liability is: there was a
vulnerability, but we don't know if it was your fault or whose fault it is, and we're going to impose damages. i actually think the existing tort regime responds potentially to that -- again, in the hypothetical. i'm not looking to throw anybody under the bus. >> but consider the counterfactual. let's say equifax had the world's best security in place and they tried everything, and then someone still found a way in, and no one would say, even looking after the fact, oh, man, no one could have possibly thought of that. but then as a result, 150 million people's information was breached. who should bear the cost of that? i mean, is there still a rationale that, okay, this is the cost we have, this is the cost society has in having online, outward-facing databases containing super sensitive information, and the credit bureau system should bear that cost? i can see the argument for that. i think that's actually one of the easier cases. i feel a little bit more comfortable with that. some of the other areas where it breaks down are when there is
unclear responsibility, right, because there are different layers of the stack, some of it being open source. but there, in that case, especially given the sensitivity of the data, i can see more of an argument for it. >> especially knowing how that information is used to make incredibly important decisions about whether we can even buy homes or participate in the economy, and that we don't have a voluntary relationship with them. it's not like, if they screw up, i'm not going to use their service anymore. there is no way out of them. they are one of those industries that need to be much more carefully regulated, and they should be kicked out of the system if they are not able to secure the data. i don't think the current tort system -- well, i hope the current tort system works. i think the problem with the demonstration of harm is going to catch up with this sort of thing, because they will say, well, you don't know if this information is actually being used somewhere. right?
how do you actually show this has been bought by criminals, and how do we know that when they used it, it was actually from this breach and not the 800 other breaches that have happened, with target and all these other things? and that's where something like a strict liability regime is so important. there are some things where the cost is so great and so distributed that you need to stop it on the front end, and there just may never be a back-end fix for things like this. >> thank you. steve, do you want to say anything before we close? >> sure. >> oh, that steve. >> let me thank the panel very much. this was a great conversation and we will continue to debate these issues. join me in thanking them. [ applause ] >> i know this conference has posed many very substantial challenges to those of us who are seeking to represent consumers, but i also think that the conference offered many solutions, offered us a better
understanding of these issues, how to attack them, and also identified some opportunities and strategies that we can take advantage of. listen, thank you all so much for participating. those of you who are with member organizations, the annual meeting will be at 1:00. safe travels home. [ applause ]