DW News (Deutsche Welle), April 10, 2018, 9:00pm-9:16pm CEST
9:00 pm
The Chairman: ...early this year, you did not actively monitor whether that data was transferred by such developers to other parties. Moreover, your policies only prohibit transfers by developers to parties seeking to profit from such data. Number one: besides Professor Kogan's transfer, and now potentially CubeYou, do you know of any instances where user data was improperly transferred to a third party in breach of Facebook's terms? If so, how many times has that happened, and was Facebook only made aware of that transfer by some third party?

Mr. Zuckerberg: Mr. Chairman, thank you. As I mentioned, we're now conducting a full investigation into every single app that had access to a large amount of information before we locked down the platform to prevent developers from accessing this information, around 2014. We believe that we're going
9:01 pm
to be investigating many apps, tens of thousands of apps, and if we find any suspicious activity, we're going to conduct a full audit of those apps to understand how they're using their data and if they're doing anything improper. If we find that they're doing anything improper, we will ban them from Facebook, and we will tell everyone affected. As for past activity, I don't have all the examples of apps that we've banned here, but if you'd like, I can have my team follow up with you after this.

The Chairman: Have you ever required an audit to ensure the deletion of improperly transferred data, and if so, how many times?

Mr. Zuckerberg: Mr. Chairman, yes we have. I don't have the exact figure on how many times we have, but overall, the way we've enforced our platform policies in the past is we have looked at patterns of how apps have used our APIs and accessed information, as well as looked into reports that people have made to us about apps that might be doing sketchy things. Going forward, we're going to take
9:02 pm
a more proactive position on this and do much more regular spot checks and other reviews of apps, as well as increasing the amount of audits that we do. And again, I can make sure that our team follows up with you on anything about the specific past stats that would be interesting.

The Chairman: I was going to assume that, sitting here today, you have no idea, and if I'm wrong on that, you're telling me, I think, that you're able to supply those figures to us, at least as of this point.

Mr. Zuckerberg: Mr. Chairman, I will have my team follow up with you on what information we have.

The Chairman: Okay, but right now you have no certainty of whether or not, or how much of, that's going on, right? Okay. Facebook collects massive amounts of data from consumers, including content, networks, contact lists, device information, location, and information from third parties, yet your data policy is only
9:03 pm
a few pages long and provides consumers with only a few examples of what is collected and how it might be used. The examples given emphasize benign uses, such as connecting with friends, but your policy does not give any indication of more controversial uses of such data. My question: why doesn't Facebook disclose to users all the ways the data might be used by Facebook and other third parties, and what is Facebook's responsibility to inform users about that information?

Mr. Zuckerberg: Mr. Chairman, I believe it's important to tell people exactly how the information that they share on Facebook is going to be used. That's why, every single time you go to share something on Facebook, whether it's a photo on Facebook or a message in Messenger or WhatsApp, every single time there's a control right there about who you're going to be sharing it with, whether it's your friends or public or
9:04 pm
a specific group, and you can change and control that in line. To your broader point about the privacy policy, this gets into an issue that I think we and others in the tech industry have found challenging, which is that long privacy policies are very confusing. And if you make it long and spell out all the detail, then you're probably going to reduce the percent of people who read it and make it accessible to them. So one of the things that we've struggled with over time is to make something that is as simple as possible so people can understand it, as well as giving them controls in line, in the product, in the context of when they're trying to actually use them, taking into account that we don't expect that most people will want to go through and read a full legal document.

The Chairman: Senator Nelson.

Sen. Nelson: Thank you, Mr. Chairman. Yesterday when we talked, I gave the relatively harmless example that I'm communicating with my friends on Facebook and indicate that I
9:05 pm
love a certain kind of chocolate, and all of a sudden I start receiving advertisements for chocolate. What if I don't want to receive those commercial advertisements? Your chief operating officer, Ms. Sandberg, suggested on the NBC Today show that Facebook users who do not want their personal information used for advertising might have to pay for that protection. Pay for it. Are you actually considering having Facebook users pay for you not to use that information?

Mr. Zuckerberg: Senator, people have
9:06 pm
a control over how their information is used in ads in the product today. So if you want to have an experience where your ads aren't targeted using all the information that we have available, you can turn off third-party information. What we've found is that even though some people don't like ads, people really don't like ads that aren't relevant. And while there is some discomfort, for sure, with using information in making ads more relevant, the overwhelming feedback that we get from our community is that people would rather have us show relevant content there than not. So we offer this control that you're referencing. Some people use it. It's not the majority of people on Facebook, and I think that that's a good level of control to offer. I think what Sheryl was saying was that, in order to not run ads at all, we would still need some sort of business model.

Sen. Nelson: And that is your business model. So I take it that, and I used the harmless example of chocolate,
9:07 pm
but if it got into a more personal thing, communicating with friends, and I want to cut it all off, I'm going to have to pay you in order for you not to send me, using my personal information, something that I don't want. That, in essence, is what I understood Ms. Sandberg to say. Is that correct?

Mr. Zuckerberg: Yes, Senator. Although, to be clear, we don't offer an option today for people to pay to not show ads. We think offering an ad-supported service is the most aligned with our mission of trying to help connect everyone in the world, because we want to offer a free service that everyone can afford. That's the only way that we can reach billions of people.

Sen. Nelson: Okay. So therefore, you consider my personally identifiable data the company's data, not my data. Is that it?

Mr. Zuckerberg: No, Senator. Actually, the first
9:08 pm
line of our terms of service says that you control and own the information and content that you put on Facebook.

Sen. Nelson: Well, the recent scandal is obviously frustrating, not only because it affected 87 million, but because it seems to be part of a pattern of lax data practices by the company going back years. So back in 2011, there was a settlement with the FTC, and now we discover yet another instance where the data failed to be protected. When you discovered that Cambridge Analytica had fraudulently obtained all of this information, why didn't you inform those 87 million?

Mr. Zuckerberg: When we learned in 2015 that Cambridge Analytica had
9:09 pm
bought the data from an app developer on Facebook that people had shared it with, we did take action. We took down the app, and we demanded that both the app developer and Cambridge Analytica delete and stop using any data that they had. They told us that they did this. In retrospect, it was clearly a mistake to believe them.

Sen. Nelson: And you should have followed up and done a full audit then.

Mr. Zuckerberg: And that is not a mistake that we will make.

Sen. Nelson: Yes, you did that, and you apologized for it, but you didn't notify them. Do you think that you have an ethical obligation to notify 87 million Facebook users?

Mr. Zuckerberg: Senator, when we heard back from Cambridge Analytica that they had told us that they weren't using the data and had deleted it, we considered it a closed case. In retrospect, that was clearly a mistake. We shouldn't have taken their word for it. And we've updated our policies and how we're going to operate the company to make sure that we don't make that mistake
9:10 pm
again.

Sen. Nelson: Did anybody notify the FTC?

Mr. Zuckerberg: No, Senator, for the same reason: that we considered it a closed case.

Sen. Thune: And Mr. Zuckerberg, you would do that differently today, presumably, as you did in response to Senator Nelson's question?

Mr. Zuckerberg: Yes, having to do it over.

Sen. Thune: This may be your first appearance before Congress, but it's not the first time that Facebook has faced tough questions about its privacy policies. Wired magazine recently noted that you have a 14-year history of apologizing for ill-advised decisions regarding user privacy, not unlike the one that you made just now in your opening statement. After more than a decade of promises to do better, how is today's apology different, and why should we trust Facebook to make the necessary changes to ensure user privacy and give people a clearer picture of your privacy policies?

Mr. Zuckerberg: Thank you, Mr. Chairman. So
9:11 pm
we have made a lot of mistakes in running the company. I think it's pretty much impossible, I believe, to start a company in your dorm room and then grow it to be at the scale that we're at now without making some mistakes. And because our service is about helping people connect and information, those mistakes have been different. We try not to make the same mistake multiple times, but in general, a lot of the mistakes are around how people connect to each other, just because of the nature of the service. Overall, I would say that we're going through a broader philosophical shift in how we approach our responsibility as a company. For the first ten or twelve years of the company, I viewed our responsibility as primarily building tools: that if we could put those tools in people's hands, then that would empower people to do good things. What I think we've learned now, across a number of issues, not just data privacy but also fake news and foreign interference in elections, is that we need to take a more proactive role and
9:12 pm
a broader view of our responsibility. It's not enough to just build tools; we need to make sure that they're used for good. And that means that we need to now take a more active view in policing the ecosystem, and in watching and kind of looking out and making sure that all of the members in our community are using these tools in a way that's going to be good and healthy. So at the end of the day, this is going to be something where people will measure us by our results on this. It's not that I expect that anything I say here today will necessarily change people's view, but I'm committed to getting this right, and I believe that over the coming years, once we fully work all these solutions through, people will see real differences.

Sen. Thune: Well, I'm glad that you will have gotten that message. As we discussed in my office yesterday, the line between legitimate political discourse and hate speech can sometimes be hard to identify, especially when relying on artificial intelligence and other technologies for the initial discovery. Can you discuss what
9:13 pm
steps Facebook currently takes when making these evaluations, the challenges that you face, and any examples of where you may draw the line between what is and what is not hate speech?

Mr. Zuckerberg: Yes, Mr. Chairman. I'll speak to hate speech, and then I'll talk about enforcing our content policies more broadly. So actually, maybe if you're okay with it, I'll go in the other order. From the beginning of the company in 2004, I started it in my dorm room; it was me and my roommate. We didn't have AI technology that could look at the content that people were sharing, so we basically had to enforce our content policies reactively. People could share what they wanted, and then, if someone in the community found it to be offensive or against our policies, they'd flag it for us, and we'd look at it reactively. Now, increasingly, we're developing AI tools that can
9:14 pm
identify certain classes of bad activity proactively and flag it for our team at Facebook. By the end of this year, by the way, we're going to have more than 20,000 people working on security and content review, working across all these things. So when content gets flagged to us, we have those people look at it, and if it violates our policies, then we take it down. Some problems lend themselves more easily to AI solutions than others. So hate speech is one of the hardest, because determining if something is hate speech is very linguistically nuanced, right? You need to understand, you know, what is a slur and whether something is hateful, and not just in English; the majority of people on Facebook use it in languages that are different across the world. Contrast that, for example, with an area like finding terrorist propaganda, which we've actually been very successful at deploying AI tools on already. Today, as we sit here, 99 percent of the ISIS and al Qaeda content that we take down on Facebook,
9:15 pm
our AI systems flag before any human sees it. So that's a success in terms of rolling out AI tools that can proactively police and enforce safety across the community. Hate speech: I am optimistic that, over a five-to-ten-year period, we will have AI tools that can get into some of the nuances, the linguistic nuances, of different types of content to be more accurate in flagging things for our systems. But today, we're just not there on that. So a lot of this is still reactive. People flag it to us; we have people look at it. We have policies to try to make it as not subjective as possible, but until we get it more automated, there's a higher error rate than I'm happy with.

Sen. Feinstein: Thanks, Mr. Chairman. Mr. Zuckerberg, what is Facebook doing to prevent foreign actors from interfering in U.S. elections?

Mr. Zuckerberg: Thank you, Senator.