tv Data Breaches Cyber Threats Panel CSPAN May 12, 2018 3:35am-4:44am EDT
3:35 am
>> this forum held by the consumer federation of america continues now with remarks from technology experts looking at the recent data breaches at equifax and facebook. they spoke about how the corporations are dealing with the fallout from security breaches, and what can be done to better protect consumers' private information. this is an hour. good morning, if you will have a seat we will commence with the last session of the conference.
3:36 am
>> good morning, can you take your seats please? thank you. thanks to all of you for staying for this session about the insecure digital world. i am susan grant, the director of consumer protection and privacy at the consumer federation of america. it is interesting -- just this morning in the sessions, the issue of cyber security was touched on many times. it has come up throughout the whole conference, so this is certainly very timely. with so much in the news about data breaches, internet
3:37 am
connected devices that can spy on us, and data from online activities being collected and used in ways that we never would have expected, it is no wonder that so many americans are concerned about the security of their personal information. that is what we will talk about today with three excellent panelists. michelle richardson is deputy director of the freedom, security and technology project at the center for democracy and technology. she specializes in privacy and security issues and previously worked for the aclu and on capitol hill. justin brookman is the director of consumer privacy and technology policy at consumers union, the policy arm of consumer reports. previously, he was policy director of the ftc's office of technology research and
3:38 am
investigation, and before that actually, he was with cdt. stephen roosa is a partner in holland and knight's new york office where he focuses on advising companies on a wide spectrum of technology and legal issues pertaining to privacy and security. we will have a conversation amongst us and we will save time for questions at the end for all of you. okay, i would like to start with a series of general questions for all of the panelists to respond to. maybe it will never be possible to have 100% perfect security, but it seems like in many of the data breaches that we hear about, they could have been
3:39 am
avoided. the poster child for that is the equifax breach, where there was a software vulnerability that they knew about and simply failed to patch, allowing hackers to steal sensitive information about millions of americans. just this morning there was another revelation that thousands of people had passport information compromised in that breach as well. so, what i would like to ask the panelists is: what are the main factors that contribute to these kinds of security failures? is it just not paying enough attention to security? is it not having the right procedures in place? is it not committing enough resources to it? what is the problem?
3:40 am
>> sure, you are absolutely right, many of the data breaches are avoidable; equifax is a classic example. still, to this day, you hear about many cases where an unencrypted laptop in the back of a car was the cause. for me, from the policy perspective, the key is a lack of incentives: companies do not bear the cost of poor data security in many cases. in some cases, data breach notification could be triggered, and that is annoying and expensive -- there is definitely a cost there -- but they are not the ones who bear it; when the consumer has their credit attacked or suffers identity theft, the companies do not bear that cost. the ftc maybe has some authority; they have brought a lot of cases using an old statute to say companies are required to use reasonable data security, but that is being attacked in court. even if the ftc can bring
3:41 am
an action, they cannot get penalties; they can say you used unreasonable security, and the company signs an order saying yes, i did, my mistake. the companies can treat this as a cost of doing business. they cannot have 100% security, but insecurity should have a greater cost than it does today. >> michelle, do you have any thoughts? >> sure, i think many times we find that companies will say, because security is complicated, we cannot endorse a list of best practices. but when the breaches come to light, they are not using best-in-industry practices that are cheap and baked into services right now -- things like encryption, multifactor authentication, least-privilege access, things that do not actually require a huge technical skill set. that is something i.t. contractors can do; someone just needs to make the decision to do it. when you hear that you can
3:42 am
never account for these situations and there is no way to fend them off, that is really not true. i think the joke is always that there are two types of people, those that have had breaches and those that have not found out yet. it is not really about whether you can fend off an advanced persistent threat -- if you are targeted by north korea or iran, that is one situation -- but the basic stuff, like consumer data that will be resold for pennies on the black market, is not getting basic protections. it goes back to the incentives; there is no consequence at this point. we have seen over the last few years that states are getting out there, while the federal government is totally paralyzed. we are hoping that the state laws are able to do things that we just cannot do here in dc, with accountability, enforcement
3:43 am
and penalties that make it worthwhile for them to fix their systems in a systematic way and not brush off each breach. >> all right, then. to push back on some of that, gently: working on the data security side of things at a law firm, we have to respond to incidents and also do preparedness and response. from where i sit, my observation is, it is actually really hard to get the level of security that you would want in large organizations. complexity -- any security professional will say complexity is the enemy of security. when you get really large organizations, with many systems, legacy systems, etc., security is absolutely hard and costs money. many of these companies involved in breaches have top security departments and spend a
3:44 am
lot on security. microsoft, for example -- many have pointed fingers at microsoft, and there is a way to point fingers at the u.s. government that we can discuss later -- but they spend something like $1 billion per year on security development when they are making maybe $20 billion in profit. that is a huge budget outlay. i think it is really hard. let's take updates and patches, even: if you are in a large organization with updates and patches, you cannot roll them out right away; they could have additional vulnerabilities that need to be tested. if you did immediate updates for the intel vulnerabilities, meltdown and spectre, you would have ended up breaking machines, because it turned out the firmware update was only for certain classes of hardware and would not work on others. so there is a danger in the
3:45 am
updates themselves, and that is hard. the last piece that i would say, and probably the most controversial, is in terms of harm and the cost of doing business in a data breach. by the way, i would also say that the cost of a data breach, even if there is no lawsuit where you end up with damages against the company, is responding to it, both with the security response vendors and the law firms; it is a hell of a cash burn per month, even for large companies. when companies have data breaches, they are terrified. this is not something they take lightly; they do not like the brand hit or the cash burn. there are job security issues for the people who own these outcomes. there is that. then, the damage on the consumer: if you have identity protection or can put a credit lock on your account, is it
3:46 am
annoying, yes, is it harmful, yes, but in the aggregate, what are we looking at with consumer harm? i do not know if it is as outsized as the horrific headlines in the media; that is the other side of it. >> i think that one thing that we are seeing is that there are rapid moves to software as a service and the use of the cloud. all of this is being centralized, and this is not the world where you had 20 different software programs; we are quickly moving to a place where you have fewer decision-makers in the security setting, and they will be able to make smart decisions at a scale that maybe individual people cannot. the other thing as well, we need to push back on the companies: to the extent that some of these breaches are about individuals having their passwords guessed, or phishing, clicking on a link
3:47 am
that corrupts the entire computer, we have technology systems that make these things far too easy. you still sometimes go to log in on an account and it says you need 20 characters, an @ sign and three capitals; this has been debunked for years. this is not a way that people can remember their passwords -- they write them down and use the same one over and over again -- but no one on the tech side of the company has switched to something that is usable for the average person. to the extent that we can get companies leaning in that way, they will help their consumers to better control their information. >> i will add, some companies do a good job. microsoft invested a ton of money into the space; they were supporting vista until a couple of months ago -- like a 12-year-old operating system. if you look
3:48 am
at the iot devices coming out today and some of the default settings, they are objectively terrible. the ftc case against d-link -- that is one of the bigger companies out there -- involved ridiculous practices, and some of the stuff they were doing could have easily been stopped. microsoft does a pretty good job, apple does a pretty good job: your operating system on your desktop gets regular updates; your phone, some of them -- apple phones get a lot of updates, android phones, maybe the flagship, maybe not. the ftc did a report a couple of months ago, when i was there, that said it is a crap shoot -- even superexpensive phones get zero support. some phones do, but if you look at other iot devices like refrigerators, routers, do they even have a process for security updates? the ones that are in the news do often have things in place.
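a sketch of the most basic piece of such an update process -- checking a downloaded image against a digest from a manifest the device already trusts -- might look like the following. the function and manifest field are invented for illustration, not any vendor's real update api; real update systems also verify a digital signature over the manifest itself.

```python
import hashlib

# Hypothetical sketch: before installing a downloaded update, a device can at
# minimum verify the image against a digest published in a manifest it already
# trusts. A mismatch means the download was tampered with or corrupted.
def update_is_authentic(image_bytes: bytes, manifest_sha256: str) -> bool:
    """Return True only if the downloaded image matches the expected digest."""
    return hashlib.sha256(image_bytes).hexdigest() == manifest_sha256
```

even this minimal check means a bad download is rejected rather than installed; a device with no such process at all, as the panel notes, has no safe way to receive fixes.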
3:49 am
even for the intels of the world, who will bear the cost of spectre and meltdown when people buy new devices? intel has to do patches, but ultimately it falls upon the consumer. for many other companies out there, the incentives are not in place for them to take it seriously enough. >> thanks, we will turn to incentives in a bit. steve, your firm has an actual laboratory that you use to test whether your clients are actually protecting data as they claim; can you describe what that is and how it works? >> sure, thanks. it is sort of unusual for a firm to have this type of thing, but we have an internal testing lab for clients with consumer-facing websites, mobile apps, iot devices. we will set up a special networking environment, capture the traffic to and from the device, and during test sessions
3:50 am
do a code analysis. then we will get after issues like data leakage and data sharing at the code level that might not be known even to the people that own the product in the company. in terms of data breach, i do not know if it gets at the issue, but it does go to the children's online privacy protection act and other issues. this is sort of an interesting issue, so to take a couple of seconds: one of the things that you see with development these days is increasing modularity. we have third-party code that we will take from here, and third-party code from there, to perform all sorts of different functions for our software -- a mobile app, different software. then the third-party code libraries will call out during operation, and oftentimes what the developers think that the code does versus what the
3:51 am
code actually does are very different things, and that impacts privacy and security. we really try to get into the weeds on that. thank you. >> thanks, it sounds like a great service. let us turn to the security concerns about internet-connected devices. our colleagues from the norwegian consumer council could not be here, but we will show a video that their organization made. they studied several different brands of smart watches for children that parents can use to keep track of them and communicate with them. as you will see from the study, commissioned from a security consulting firm, other people can communicate with and track the children as well through these devices.
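the traffic capture and code analysis steve described for the lab can be approximated even at a small scale. as a minimal sketch -- the domain names and function name here are made up for the example -- one basic step is flagging requests from a test session that go to hosts outside the app maker's own domain:

```python
from urllib.parse import urlparse

# Illustrative sketch of one step in the kind of traffic analysis described:
# given URLs captured from a test session, flag requests that go to hosts
# outside the app maker's first-party domain.
def third_party_hosts(request_urls, first_party_domain):
    """Return the set of hostnames that do not belong to the first party."""
    flagged = set()
    for url in request_urls:
        host = urlparse(url).hostname or ""
        # Keep the first-party domain itself and any of its subdomains.
        if host != first_party_domain and not host.endswith("." + first_party_domain):
            flagged.add(host)
    return flagged
```

a real analysis goes much further -- inspecting payloads for identifiers and comparing behavior against the privacy policy -- but host-level flagging is often where the surprises first show up.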
3:52 am
>> audio? >> ♪ >> the smart watches for kids could seem like a good idea, but what about a kid's right to privacy, and how secure are they? we are about to find out. >> good to see you, sir. >> good to see you. >> harrison, you have been looking at the security of smart watches, how is it? >> they were missing a lot of standard best practices that you would expect to see on these kinds of devices. as a result, there were many security findings.
3:53 am
>> in what ways can a person or attacker get access to the watches without having them in their hands? >> you need a unique identifier for the watch, which is used as part of the registration process. i type in the verification code, i forward the request along to the server, and now i have associated this watch with a new account that did not originally have access to it. >> the imei number can be found online? you don't need access to the watch? >> you do not need physical access to get the number; this is just one of the methods that we have identified. >> in the research, what were the most surprising findings? >> we identified that there is a possibility to use these devices as a spy device without the kid ever having to activate any functions on the watch or
3:54 am
being aware that something is happening. i will send a text message here. >> it will automatically call me back and i can answer on my phone. >> am i talking to you now? >> one of the key functions is that parents can track their kids. i wore all of these on my arm on the way here; what can an attacker do with this information? >> an attacker would have access to all of the location history stored in the parents' app. we also identified other problems with location history, where an attacker can manipulate where the location of the watch appears to be in the app. i can see all of the location history here. i can see the exact time that you were at this location. with the attack that we have done here, we changed the location data that was sent between the watch and the data server.
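the location-history exposure described here is, at bottom, a missing server-side ownership check. a minimal sketch of the check that was evidently absent -- the data model and names are invented for illustration and do not come from any real watch vendor's code:

```python
# Hypothetical server-side sketch: the flaw described is an endpoint that
# returns location history for any watch identifier. The missing piece is a
# check that the requesting account actually owns the watch.
WATCH_OWNERS = {"watch-123": "parent-account-1"}   # watch id -> owning account
LOCATION_HISTORY = {
    "watch-123": [("2018-05-01T09:00", 59.91, 10.75)],  # timestamp, lat, lon
}

def get_location_history(requesting_account: str, watch_id: str):
    """Return history only to the watch's registered owner."""
    if WATCH_OWNERS.get(watch_id) != requesting_account:
        raise PermissionError("account does not own this watch")
    return LOCATION_HISTORY.get(watch_id, [])
```

the same ownership check also blocks the registration takeover described earlier: knowing a watch's identifier should never be enough to attach it to a new account.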
3:55 am
we make it look as if the watch is in london; in reality, it is sitting with us. >> the smart watches have an emergency sos button, did you find any issues? >> we did find some problems with the functionality. normally, a child could press the emergency button and it would initiate a phone call to the parents. an attacker with control over the app could change the phone numbers that are supposed to be called, or delete them entirely. so now the sos function has been activated, and it is calling back to my phone. >> you have taken over -- you should not have access to this phone, and you have put in a different phone number than the parents' phone. these smart watches collect much sensitive information; is it stored safely? >> not as safely as you would think. on some of the watches it was not encrypted; anyone
3:56 am
sitting on the network could see the information going back and forth. some of the servers do not protect the information the way it should be protected. with one of the watches, it was actually possible to retrieve data for other users and see location information about other people. >> as you just saw, these watches are not safe for children, and they violate children's right to privacy. still, the issues have not been resolved; they should be removed from the shelves, and in the long term we need better rules to protect children and adults alike from unsafe, privacy-violating products. >> while removing these things from the shelves is very difficult -- these watches, for instance, are mostly made in china under many different
3:57 am
brand names, and contacting the companies behind them, which the group tried to do, proved difficult in many cases. the federal trade commission certainly does not have the ability to issue recalls, and as we heard from the acting chairwoman this morning, the cpsc does not believe, at least at this point, that it is within its remit to deal with these kinds of security issues, which could actually put children in physical danger. what to do about this? one thing that is happening, justin, is that consumers union has launched a new program to test and report on privacy and security for these kinds of internet-connected devices. can you tell us why the digital
3:58 am
standards program started and how it works and what you have found so far? >> so, consumers union and consumer reports are really different arms of the same organization, but consumer reports looks at tvs and cars and whether they work, and many people rely on consumer reports for cars; we are good at evaluating products. but many products have new dimensions: they are internet-connected, and there are other things to worry about -- data concerns. about one year ago we put out the digital standard, which is the metric by which we will start evaluating companies on these new values. we partnered with a few organizations: ranking digital rights, which has a lot of experience evaluating which companies' policies are better and worse, and disconnect, a maker of an ad
3:59 am
blocker, with a lot of experience in the ad tech space, and others on cyber security issues. we put the standard out about one year ago, and it looks at some of the privacy best practices and security best practices, but also other issues like repairability -- can you actually repair your own device? -- interoperability, ownership -- do we even own our devices anymore? michelle talked about how everything is software as a service: do we own a refrigerator, or is it a service that can be bricked remotely? are things updated? these are the values that we have talked about, and the challenge is translating that to rating products. we started to do this, and we announced the first test case looking at smart tvs. i have looked at this and heard a lot about it. tvs traditionally were like a screen and then you would plug
4:00 am
your cable into it and watch it. now they have a lot of software. sometimes that is cool -- you can connect to netflix or amazon, or use it as a guide or web browser. a lot of these companies are in low-margin businesses; a lot of chinese companies are churning out screens for low cost, looking to make more money. many companies are trying to watch what you are watching. they run software in the background to send screenshots to their servers for them to process and say, justin is watching a basketball game, justin is watching this or whatever; they can see detailed logs of what you are doing. i am not sure how they are monetizing this, but they look at google and facebook, who are printing money by serving targeted ads. we looked at the standard and asked, which is better, which is worse? they all try to get permission, and there was an ftc
4:01 am
case against vizio saying tv viewing data is sensitive and you have to have permission. they all kind of had permission; they talk about targeted content, but you could tell that in many cases they were definitely trying to get you to hit accept, and the skip button was hidden in the corner. the challenge is assigning a score to the user interface. the first time, we did not feel comfortable picking winners or losers, and we learned a lot. i have my thoughts about it, but cr is a careful organization that would like to get it right; they have a history of testing things and there is a scientific process in place. rather than whatever justin feels, we need a more rigorous process. the goal for the next set, which we will release soon, is to pick best and worst, to rank them. then the goal is that one day we would feel comfortable assigning a score, and it would say, this tv gets
4:02 am
a 30 on privacy because they do not tell you that they are watching you, they do not get security updates, this and that. that is the initiative that we launched, and the goal is to make it part of the standard evaluation of products, because this is a feature or area of interest for more and more products. >> this would be helpful to consumers because they can see the rankings and take them into account, if they are the type of consumers who, like us, research things like appliances. maybe this will put some pressure on the producers, although i am not sure how much these chinese companies are going to care. >> big companies do, i would say -- the samsungs, and the people that make many products, like the car space; cr rates cars, and they are very interested, they have all called and wanted to talk to us about it. that is good news. what
4:03 am
consumer reports is doing and what the testing lab is doing for its clients is really helpful, but is this enough, and what more do you think needs to be done? cdt recently issued an interesting paper on strict liability and how it could force companies to pay more attention to security; can you talk about this and share your thoughts? >> sure, we are very happy with what consumer reports is doing. it is funny -- i spent a lot of time with the executive branch in inter-agency convenings, and for years companies have said, do not regulate us, let us just do something like energy star for the internet of things. and then consumer reports came out with an energy star for the internet of things, and they
4:04 am
were like, wait, we thought we would do this ourselves. it really changed the conversation. there has not been a critical mass of use yet, but it has kicked people in the pants to realize that consumers will organize, and there is always value in consumer self-defense. it is always just one piece of the system, but where we are now, the political and legal system is telling us it is more important than ever. we did try to look at how the legal system is handling these cases, and we focused on strict product liability. that seems to be the only way to recover right now: only if someone is hurt very seriously or something is destroyed are people finding their way into court, and these cases are often settled early, so there is not evolving case law here. to the extent that people have died in car crashes -- or, another popular area would be medical devices -- we have seen in our paper
4:05 am
things like cancer treatments, pain administration -- all of these things now are actually hooked to the internet, and the failures are really life-and-death issues. to the extent that this can evolve, i think it makes sense, because these producers, whether it is the hardware or the software, are the only people in a position to make serious decisions about security. unlike other things that have a physical component that people intuitively understand, you cannot make informed decisions for yourself on every connected device that you possibly own. many people have connected devices and do not realize it -- you do not realize that the comcast remote is always on, listening, and that is the default, or your tv. the dumb things that are now being put on the internet -- toothbrushes and flashlights -- it is pretty much endless. for a very
4:06 am
long time i said, do not buy it, and someone said, you know, we will not have a choice much longer. the first thing to fall was cars: you are not able to go out and buy an unconnected car. you cannot make the choice to say i do not want to put myself at risk in that way, because that is the right-to-repair issue -- they put it in your licensing agreement that you cannot disconnect the car or it voids the warranty. there will be a point where we cannot opt out. if we create this system, it means only one group of people will be responsible, and that will be the people that wrote it. i would say that the companies are getting nervous about there being some sort of negligence regime possible, and we are seeing more and more legislation here in dc with liability protections. they are usually incredibly broad, and they will say something maybe like, if there is a reasonable standard of
4:07 am
care, you are not liable. that could just be them restating negligence, but what they do on the back end is say, you cannot bring a suit as a consumer, we will kick it to the ags -- but the ags have to bring a single suit together, and it has to be before the dc circuit court, and the ftc can intervene. they are making sure that recovery will never happen. the other request that they have going is a limit on damages of $1 million per incident -- not per account. equifax: that would be a single $1 million fine on behalf of every single consumer in america. that is the proposal before congress right now. i have gotten myself worked up over this. we are really not moving in the right direction yet. i will wrap up by saying that i think cambridge analytica has been a turning point, and it is
4:08 am
probably because people were shocked about how much data was there and how it would be used. i think a lot of us that followed it knew this and knew how bad the terms of service were, but it was not explained in a way that the average person could understand. we are turning this corner, and serious privacy and security legislation could be a reality in the next few years. >> maybe. i will turn to justin and steve for their thoughts, but i want to pick up on one thing that you said. yesterday we heard from the illinois attorney general about the good things that some states have done to enact privacy and security laws. we also see things like the recent bill in ohio that would create a safe harbor shielding companies from liability for bad security if they had
4:09 am
followed certain voluntary standards, like the ones developed by standards bodies. we oppose the bill because we think it is a bad idea to prevent people from being able to bring actions to hold companies accountable. some could argue that it might actually give them an incentive to do better. when you respond to this, as well as in general, what are your thoughts on liability? >> sure. on the point of the strict liability proposal, for example, i would take a step back and say, when you look at the law of product liability generally -- and i am not a product liability lawyer, but here goes.
4:10 am
when you have got a license or a consumer contract, the harm that results, if it is purely economic in nature, will be governed by the contract, and if there is personal injury, you can sue in court. this has held sway across the landscape of consumer products for quite some time. in terms of strict liability, i believe you have to have particularly known dangerous instrumentalities to actually get out of normal product liability and into strict liability. even in strict liability you have to prove a product defect and show that it is the cause. i think in terms of what in law school they would call the least-cost avoider, i am not so sure it is the companies that are making the product. on the modularity piece for code and hardware, it is not like the companies that are putting the products on the shelves have
4:11 am
made most of the hardware that is in the product or written most of the code -- it is unlikely; they have made some of the hardware and written some of the code. then certainly there is the issue of, are we talking about pacemakers and autonomous vehicles -- are people dying because they are being zapped by a pacemaker -- or are we talking about data breaches and things like that? it would seem to me that i would counsel caution on any change in the default system, except where the way it is set up leaves room for egregious situations. if you think about the htc case from several years ago, there were issues with the way that
4:12 am
htc was putting phones on the market with preinstalled android permissions that were not done in a secure way. the installed user base was something like 18 million people. there was not a single incident that i know of where there was a known compromise of a vulnerability. should we hold htc to a strict liability standard and put them out of business with no known harm? i would say to put the brakes on with strict liability. there would have to be a big difference in how we think about pacemakers and autonomous vehicles, possibly, versus network-connected fish tank thermometers and things like that -- that actually was the cause of a data breach at a casino, but we do not need strict liability to know we should not
4:13 am
hook up fish tank thermometers to a corporate network. >> i recommend that people take a look at this -- i enjoyed the paper, and i wonder about the intersection between strict liability, warranty law, and all of the internet of things. we already have strict liability for many data breach things. if you have a data breach and you lose a social security number, it does not matter -- there is no reasonableness inquiry, you have to notify people, and that is a cost. it is a feature and not a bug. we are trying to extend this to other information, like online cloud storage: if your online account is hacked, you should be told about it. you should know, but it is also good to impose costs on companies. a lot of the money goes to steve, and you can question whether that is the efficient allocation, but it does put
4:14 am
cost on companies, and they will take security more seriously. i am interested in the idea of doing it more generally: you just bear the cost of what went wrong. >> steve makes a good point that the way defect was interpreted in the past does not map on exactly -- code is inherently defective. whatever the next version of android that comes out, google will pump enormous amounts of money into it, and it will still have defects. the key for security is to recognize that and have a system in place to find problems after the fact, patch, deploy the patches, get people to install the patches, and remediate as best as possible. how does that affect strict liability? i am not entirely sure. android, technically, and
4:15 am
linux, is open source -- fairly open source, anyway. this really is a question of htc and samsung implementing the code. i strongly agree that the incentives need to change. i traditionally thought about this as, if you use bad practices you bear the cost, but i'm interested in this strict liability idea. it is fraught, but something needs to change to get the majority of the players to take it more seriously. again, there are still the supply chain issues. the ftc brought a case against a smart phone manufacturer that was white-labeling a chinese phone: do they bear responsibility for what the chinese manufacturer put in place? the ftc said yes in that case,
4:16 am
but there are many complicated issues to be unraveled. >> do any of the panelists have additional ideas for good incentives for better security? >> between government action changing default settings and a private company that sets up standards and holds companies privately accountable, it seems to me that the private-sector solution -- unless it is deemed to be an utter failure, and of course that is just ceding the ground -- i would think that is the form that probably makes the most sense and has the least chance of disrupting innovation, and could actually be an engine for innovation. i think what consumer reports does is great. again, with the
4:17 am
ftc, there are certainly some investigations that we understand, and then there are others that we think are just head-scratchers and whatnot. >> thank you. >> private-sector self-regulation has a role; it is most effective with the threat of real legislation. right now, we are in one of those moments, and you can be thankful for things being put in place. i remember in the privacy space in 2012 or so there was a lot of interest in privacy legislation, and many people said, we will do do-not-track and take care of this. then the momentum fell away, and do-not-track fell apart. self-regulation will always play a role but will not be sufficient by itself. there are a lot of
4:18 am
companies. i recognize that perfect security was in perfect and undesirable. there is a continual and we need to move farther in the direction of security. >> i realized i was inking about strict reliability that led into the efforts to do something below this at a negligent level. this is what some states have started to do, outcome-based sanctions. there is always the option that there is a legislative response. we get in this rock and hard place is too complicated and let us figure this out as go wrong. then things do go wrong and you want to recover in court and they say, we do not want to spend 20 years litigating, shouldn't there be a list? you cannot get out of the loop where you pick a path and go forward with it.
i would say, as someone who is dc-based, and apart from consumer actions, one thing i am excited about with the trump administration is that they want to get harder on their contractors and improve their own security. they are going through an i.t. modernization process right now where they try to save up money, make big purchases, and slingshot past crappy old systems. otherwise they only have enough money to update windows versions that are 10 years old over and over again, and they never get to significantly upgrade. as a purchaser, the government can affect the private market. i'm hoping in this process that they are clear about minimum standards and can affect what the companies put out. i know they want to use purchasing power as opposed to direct regulation. i would say, the way privacy works in this country, security may follow: we do this industry by industry, with the agencies and congress more involved in some areas, and highly regulated industries that are reviewed for safety. cybersecurity is the next thing, because there are many cyber-physical risks anyway. if regulators already examine how these things work, it does not make sense not to consider cybersecurity while doing so. hopefully that is something that can lift all boats. >> just for a couple of seconds, again, sounding the caution bell about how well privacy legislation works, think of the coppa regulation. originally the
government wanted to protect kids from porn; it turns out, you cannot do that effectively due to the first amendment. instead, we ended up with a regulation that protected kids from ads, because you cannot have targeted ads. this regulation has cost industry millions of dollars. it has taken content off the market that would otherwise be available, because it is too expensive for providers to put it out there, and kids get sushi ads instead of legos. we need to be careful before we do that. >> i am making a note, we will have a session on that for sure. we will go an extra 10 minutes because we started late. there's one more thing to talk about before we open up to questions: facebook and cambridge analytica. some
people call it a data breach, which is not technically accurate. there is no question that there are companies like facebook that are collecting tons of information across various platforms and making it available to others to use, without necessarily having a system in place to ensure that it is only being shared and used for the intended purposes and that it is secure. >> the final question for the panelists: what should be done to monitor the trail of the data after it has left your hands? >> the easiest thing to do is lock down the platform, which facebook eventually did, but they closed the barn door a little too late. in the case of facebook specifically, it is interesting to see what the ftc will do with that. in most cases they cannot fine you when you do bad things, but if you previously did a bad thing and signed an order that said you will not do it again, there can be substantial penalties -- catastrophically large. a lot of people say this is an easy case; it is an interesting case. the order is worded specifically, and a lot of the questions rest on theories i do not think will work. the system was prone to abuse. we wrote a letter to facebook about this in 2010, joined with epic and other people, saying, this is ridiculous, an app that someone installs could scrape all of your friends' information. they waited five years to address that and, to their credit, they did shut it down, and they will get credit for that. a lot of the
conversation is switching to other facebook problems. the hearing with mark zuckerberg focused somewhat on cambridge analytica, but there were a lot of other things, like tracking on facebook, and a lot of facebook's responses were scattershot. there are a lot of things to discuss, and it will be interesting to see how they react, and also how they police the platform. those are separate issues. then you get to the issue of, is it a data breach or not? the end result was that someone who should not have had the data got the data. does that make it a security incident? i do not know. whether the ftc will go after it itself is an interesting question; they might. it depends on reasonable expectations -- whether people really thought, or had reason to believe, that what they put on facebook would just be shared with friends and not with whatever stupid app they were playing at the time. >> i know at the cdt office we are trying to think more holistically. i don't want to say this is the solution, but we want to come up with a plan that will stand the test of time. one interesting idea that my boss talks about is a covenant that runs with your data. the terms of service often say, you can give this to me and it will make the service better, and people assume that to be the four corners of the deal, and the crazy ways the data then travels are not a fair way to use the data. is there a better way to
enforce the idea of what the permission means? i think some of the things that we need to separate out are security and privacy, and to me this is clearly privacy. security is objective; it is whether you can control who accesses and uses your data. privacy is a normative value: what do we decide, how can it be used? now you are in the box of what is legal and under your control. here, this is a privacy violation. they knew what they were doing and chose to set up a system like that; it just did not sink in to people what that meant. they are closely tied, security and privacy, but they are different, and i think there can be different consequences for each. what we are hoping will come of this is that maybe there is data that is absolutely protected because it is so crucial.
the recent dna case was interesting. talk about something you cannot get back or change -- how do we think about dna? i know that our goal is to get beyond the notice and consent model as sort of a generic out, because it has not worked, it has not been meaningful, and it is not serving people in any useful way. >> steve, any last thoughts on audit trails? >> it is hard, not that that is helpful. companies are trying to deal with this the best they can. we are seeing companies with an immediate need to actually stand up systems that do this, complying with gdpr. for those activities, you basically have to know what will happen to the data, and if you are going to share it with a third party, you are putting your neck out, potentially for 4% of annual revenue -- not profit, revenue. there are various ideas that people are trying to implement in terms of recordation and tracking. there are certain required consents under that regime, specifically for sensitive categories of data, including political and philosophical beliefs. graft that onto the social network scene, and it is probably a different group of consents that would be necessary to have released the data. then again, you will have the problem of what happens when there is a breach or deviation. especially given how this is tied into the political situation, you are talking about potential society-wide impacts for this type of data. i think there is a tension, and part of this goes to when you have a company whose entire
economic model is leveraging your data to the max. this is a tension that is hard -- it is hard for everyone. i would be lying to you if i said there were easy answers. >> i think we can all agree that if it were easier, it would have been solved already. i think we have had a good discussion describing what the problems and issues are. now it is time to open up for questions. is someone going to pass the microphone around? rachel does everything. >> so please identify yourself and make it brief; if it is a comment or question, do not make a long speech. >> i am dan with the chicago
consumer coalition. in the last 30 seconds, mr. roosa, you introduced concepts about corporations, individuals, and different ways of measuring damages. i thought the supreme court just last year decided for us that corporations were individuals and we should judge them that way. it seems the entire conversation challenges that -- am i misinterpreting? >> yes -- i am not sure that i actually grasp the question per se, but i certainly see that in the legal realm, the fact that you have real people and legal persons can end with odd results, for sure. my point on the cost question is simply that i think the impact on companies is significant even for data breaches that do not end up in lawsuits per se. it is not necessarily -- at least this was my view -- that there is a lack of incentive. i think there is a lot of scrambling and investment by companies, and there are disincentives to data breaches and adverse incidents, but they happen anyway. >> notwithstanding that those costs are borne. >> just this week it was reported in forbes magazine that 57% of the global fortune 100 companies continue to
install a flawed version of the software that led to the equifax breach, which was warned about over one year ago. you touched on this a little bit, but i would love to hear your thoughts on, if we do not require immediate implementation of patches, or have penalties, what requirements and consequences will protect the data? what we have clearly still has a problem. >> i am not familiar with that story, but certainly the idea that large, sophisticated companies have failed to make updates -- that is quite possible, given equifax. how to regulate security is an interesting question. i have seen provisions saying thou shalt use reasonable security, and things like the massachusetts law being fairly prescriptive. as for mandating patches -- there are patches and patches, critical ones and otherwise, and you would be tying something to a designation. i personally lean against prescriptive language. i do not think government can say what level of encryption to use; the law cannot keep pace. i lean more towards a general reasonableness standard with more of a stick. i take the point that it hurts to have a bad security incident; it should hurt more. >> in terms of the fact that you have software with known vulnerabilities being installed, one of the questions is, what company is installing it? not all backend databases are created equal. if you are a credit bureau or a data broker or something like that with lots of sensitive features in your data set, that is one thing. if you are a
company whose data is not nearly of that magnitude, yes, maybe it is not a good idea, but you are probably not causing societal harm. >> microsoft is no longer supporting an old version, and that is another reason it is hard to prescribe. the ftc, or another regulator, should have the ability after the fact to say, that was unreasonable, and judge it that way based on what it looks like. >> yes, glass, with the montgomery county office of consumer protection. i guess, mr. roosa, i will start with you. i am going back to the idea of strict liability and your examples of htc and -- this is not my field -- pornography, and seeing this logical fallacy. with equifax, there is actually an element of scienter, actual knowing. do you not think that strict liability would be appropriate if there is actual knowledge? just as a general question. >> right -- equifax is not a client and i cannot speak for them, and i would not necessarily concede that there was scienter, but let us unpack that. what i want to do is take the example just as a hypothetical, even if we are not attaching it to a specific company, and ask, what about strict liability? if you are talking about an entity where there is scienter for a bad act, why do you need strict liability? you have fault, intent, causation; you do not need strict liability.
the existing regime responds to that. hypothetically -- i do not want to throw anyone under the bus -- consider the counterfactual: if equifax had the world's best security in place, they tried everything, and someone still found a way in, even after the fact no one would say anyone could have thought of that. as a result, 150 million people had their information breached. who will bear the cost? is there still a rationale that says, this is the cost that society bears in having outward-facing databases with sensitive information, and the credit bureau system should bear that cost? i can see the argument for that. it is one of the easier cases, and i feel more comfortable with it. some of the other areas where it breaks down are where there is unclear responsibility, because there are different layers and some of it is open source. in that case, especially given the sensitivity of the data, you can see more of an argument for it. >> especially knowing how the information is used to make incredibly important decisions about buying homes or participating in the economy, and that we do not have a voluntary relationship with them. it is not like, if they screw up, i will not use the service anymore -- there is no way out. they are one of the industries that needs to be more carefully regulated. they should be kicked out of the system if they cannot secure the data. i do not think the current tort system -- i hope it works, but i think the problem of demonstrating harm is going to catch up with this sort of thing. they will say, you do not know if this information is actually being used somewhere. how do you actually show that it has been bought by criminals? how do we know that when they used it, it came from this breach and not the 800 others that have happened, like target?
the cost is so great that you need safety on the front end; there cannot be a back-end fix for something like this. >> thank you. >> steve, do you want to say anything before we close? >> thank the panel very much. this was a great conversation, and we will continue to debate the issues during the evening. >> [ applause ] >> i know this conference has posed many substantial challenges to those of us seeking to represent consumers, but i also think that the conference offered many solutions, offered us a better understanding of these issues and how to attack them, and also identified some opportunities and strategies that we can take advantage of. thank you all so much for participating, and safe travels home. >> [ applause ] >> the annual leadership forum was held in dallas earlier this week. saturday we will show the event, with speakers like ted cruz and john cornyn. watch at 8:30 pm eastern on c-span. we look at the cold war as the backdrop for the events of
1968, including the vietnam war, the presidential campaign, and the space race. joining us to discuss the turbulent time are a historian and documentary filmmaker and mark kramer, program director of the project on cold war studies at harvard university. watch 1968: america in turmoil, live on sunday at 8:30 am eastern on c-span's washington journal, and on american history tv on c-span 3. sunday is newsmakers, with the north carolina chairman speaking on the mueller investigation and why he will request a gao audit. the washington examiner interviews him. watch at 10 am and 6 pm eastern on c-span. later that day, remarks from howard schultz, starbucks executive chairman, on the role and responsibility of global companies. he spoke at the atlantic council in washington. that will be at 6:30 pm eastern, also on c-span. sunday on q&a, a university of california professor on his book, inseparable, about the life and times of conjoined twins. >> you can imagine, two married couples -- they cannot sleep in the same bed. so they set up these separate households one mile from each other. they stick to a rigid schedule: they will, say, live in chang's house
for three days with chang's wife. during these three days, chang basically is the master of the house and can do whatever he wants, and his brother gives up his free will. that is the alternating arrangement. three days later, they move to the other house, and the other brother gives up his free will. >> did it work? >> apparently -- they had 21 children. >> watch q&a, sunday night at 8 pm eastern on c-span. sunday night on after words, a journalist and author talks about his book, killing the deep state. he is interviewed by investigative journalist sharyl attkisson. >> i have heard some of these phrases the past few years but did not attach a lot of meaning to them until recently. maybe you can define, in your view, deep state, shadow government, and swamp. are they the same thing, or how do you differentiate? >> my terminology is the deep state; others could call it the shadow government, because they effect their own bureaucratic wishes rather than the wishes of the people -- electing donald trump, for instance. donald trump has termed it the swamp, and this is probably the term that most americans immediately understand, because washington was at one point like that: the creatures coming out of the swamp are biting and fighting for turf. >> watch after words sunday night at 9 pm eastern on c-span 2's book tv. >> catherine boudreau joins us, a reporter covering food and agriculture. what are the key