Future of Facebook - CSPAN - April 25, 2018, 1:17am-2:15am EDT
created a political atmosphere where, in broadest terms, we don't go after the press for publishing things even where statutes seem to say that we could. >> watch landmark cases, with guests floyd abrams, who represented the times in its case against the nixon administration, and the former solicitor general under president george w. bush, monday night at eight eastern. >> at the heritage foundation, the head of facebook global policy management talked about the social media company's future and how facebook monitors content from its users. this is an hour. [applause] >> good morning and welcome to the heritage foundation.
facebook has more than 2 billion monthly active users and approximately 1.5 billion log in every day, and there are some estimates that say every minute of every day there are 510,000 comments and 293,000 status updates posted to the social network. so facebook has come to define how many communicate with their family, how they plan and share their social lives, and for many, particularly overseas with offerings like free basics, it is increasingly how people are getting online in the first place. so facebook's influence and role in the lives
of its users is growing, and with this growth of influence comes a great deal of responsibility, and their understanding of those responsibilities appears to be evolving. to help us better understand that, we are fortunate to have monica join us today. she's head of product policy and counterterrorism. she and her team manage the policies that govern what content can be posted as well as how advertisers and developers are able to engage and use the network. prior to joining, monica was a legal adviser in thailand as well as a federal prosecutor in dc and chicago. she has a ba from rice university and a jd from harvard law. please welcome monica. [applause]
>> to help people understand your expertise, talk about your background and the team that you are managing. >> i was at the u.s. justice department, where i spent more than a decade [inaudible] they gave a search warrant asking for information. i took over the team that manages the policies about five and a half years ago, and since then they've continued to expand, where now they manage the
content standards and the external engagement. they are constantly evolving, and that includes reaching out to safety groups and freedom of expression academics around the world through the stakeholder engagement team. we have a strategic response team, and anytime there is, say, a terrorist attack or some sort of natural disaster that has led to casualties, their job is to understand it and make sure we are prepared to deal with the content we might see posted as a result. so a lot of different teams, and subject matter experts as well. we have a team that focuses on child safety and maintains relationships with safety experts around the world. secondly, two years ago we created that counterterrorism
team that isn't just policy people. we have engineers who help us find and review potential terrorist content. we have lawyers who are helping us proactively send threats we might find to law enforcement and review the legal process we get, and we have the former director of a counterterrorism research center and experts on extremist organizations to bring that expertise onto the team and help us maintain relationships with those who are working in the field outside of facebook on what we are doing. >> i would like to ask a couple of big picture questions about how facebook is thinking about things right now, and then i want to go deeper into the content policy issue and pull the string
on a couple of issues. just to begin, when mark was testifying here recently, one of the themes he was hitting again and again is that facebook had not previously taken a broad enough view of its responsibilities and that it intended to broaden that understanding going forward. what was the narrow view, and what's changing now? >> the company has certainly grown, both in terms of numbers, people who use the service and work at the company, but also in terms of the approach. if i think back to where we were when i joined, there were 2,000 employees at that time. the engineers who were working on these issues, on developing new products and features, were
like you'd find in any smaller startup. they had great ideas and wouldn't necessarily think about how they could be used or abused by bad actors, and i remember back then having conversations with engineers who were launching or building new products and features, and we would say, okay, we need to make sure the safety mechanisms are in place so that people can report abuse, and the policies that will govern it, and they had a way of saying, i don't think anybody's going to use this for bad reasons. that was just the natural idealism of people building new products, envisioning the wonderful ways they could be used and not seeing the bad ways they could be used. as a company, over the years we have really evolved. with my team, i think back to five years ago, we would collectively
have to say, let's have a conversation about this, and now this is all very much a part of a philosophy of safety by design. when engineers are envisioning new products for the first time, they are thinking, help me make sure this is done consistently with the type of community we are trying to build. so we have people who specialize just in new products and features, and they work closely with the product teams. >> do you have a group of folks whose job is to think of terrible ways to use the tools? >> we try to think not only from our own experience. as a federal prosecutor there are certain things i've seen a lot of and would naturally think of, and we try to build a team that has the expertise. when i think about the people on my team, i have had a woman who was a crisis counselor and i have a
man who was a child exploitation prosecutor. we have people with subject matter expertise and they bring that to the table, but really important to this is maintaining relationships with safety organizations around the world. there are literally hundreds of groups we talk to on a regular basis, and part of that is, if there's something you think we are not getting right or there's a new trend, sometimes they are small. it could be in brazil, and this is a good example, there was a trend around a challenge people were sharing with one another, and we want to make sure if there's something like that we become aware of it and put policies in place. >> in your title is the term counterterrorism, and maybe that sounds normal now, but i remember a time when that would have been especially peculiar. i'm sure in the post-9/11 world, as propaganda has been
online, that became a core mission of what you've done. specifically, i note you've made advancements in identifying propaganda. talk about that a little bit. >> yesterday actually [inaudible] if you go to the newsroom you will find a post we put out on the most recent data. in the last quarter we took down 1.9 million posts for violating our terrorism policies. if it was somebody sharing a photograph of the isis flag to raise awareness, in those cases we leave the content up. in 99% of cases we found the content before anybody flagged it for us. we do
that using primarily technical tools, like image matching that identifies that content. we've come a long way in the past couple of years. first of all, you are right, when i first added it to my title there was a lot of surprise from people outside facebook. they would see my business card and say, you have a counterterrorism team? and i would say, we do indeed. terrorists are trying to do what everyone else is doing, they are using social media, and we don't want them on facebook or on social media generally. it's not just about what we do at facebook but as an industry. so when i think about how far we've come, through these technical advances we are now identifying so much at the time of upload or shortly thereafter, and i think the median time for takedown of
those posts was less than a minute. but we are also working with industry. so years ago it was just what facebook was doing to remove terror propaganda, and now we've launched the global internet forum to counter terrorism, and that is an initiative that was a formalization of a bunch of social media companies working together over the past two years on sharing best practices and technical tools for removing terrorist content. last june we announced this group that is committed to working together to keep propaganda and terrorists off the sites, including by sharing technical tools. so for instance we have a hashing technology that allows us to take a video, let's say you have a beheading video, you
run it through the software and it converts that to what we call a digital fingerprint, basically a numeric digital fingerprint, and then we store that in a database that the other companies that participate have access to, and any company that wants to keep out that particular video can take that hash, and if somebody tries to upload it to their site they will catch it and say this violates our policy. so we contribute hashes to the database, there are tens of thousands of hashes now, and other companies contribute as well, and we get to benefit from their expertise too. so it's the industry working together.
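that hash-and-share flow can be sketched in a few lines. the following is a minimal illustration, not facebook's implementation: real systems use perceptual hashes (facebook later open-sourced pdq and tmk+pdqf for exactly this purpose) so that re-encoded copies still match, while sha-256 here is a simplifying stand-in, and all names are hypothetical.

```python
import hashlib

# Minimal sketch of the shared hash database described above. SHA-256 is a
# stand-in; production systems use perceptual hashes so near-duplicates match.
shared_hash_db: set[str] = set()  # fingerprints contributed by member companies

def fingerprint(video_bytes: bytes) -> str:
    """Convert a video into a numeric 'digital fingerprint' (a hash)."""
    return hashlib.sha256(video_bytes).hexdigest()

def contribute(video_bytes: bytes) -> None:
    """One company adds a known propaganda video's hash to the shared database."""
    shared_hash_db.add(fingerprint(video_bytes))

def check_upload(video_bytes: bytes) -> bool:
    """At upload time, any participating company can match against the database."""
    return fingerprint(video_bytes) in shared_hash_db

contribute(b"<bytes of a known propaganda video>")
assert check_upload(b"<bytes of a known propaganda video>")  # caught at upload
```

the design point is that only fingerprints are shared, never the videos themselves, so companies can cooperate on blocking without redistributing the content.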
>> thinking about the challenge more broadly, it seems there's been this migration of counterterrorism responsibilities into the private sector. it wasn't long ago it was a government function, and now, because of the presence of the material and people able to use it, a portion has migrated to you. do you have a sense of other national security priorities beginning to migrate into your purview? we think of cybersecurity generally, but then is foreign influence a problem for you? >> for those of you who don't know, most people who use facebook are outside the u.s., more than 85% are outside the united states, so when we think about things like combating disinformation or making sure we are helping with election integrity, we see many policymakers use facebook for great reasons around elections, to reach out to constituents and
connect, but also people trying to use the same tools to interfere, so making sure we are getting that right, not just here but with the coming elections in brazil and india and mexico, that's very important to us. >> it's not reliance but certainly engagement on these issues. so one of the questions i have is, is the federal government's ability to engage these issues disrupted somewhat because of where the activities are occurring, and are you feeling a more direct engagement? >> i think even way back when i was a prosecutor there were things we could do, but there were different roles for different people to
play, and i think that's true in this space as well. we are a piece of the puzzle. there is removing propaganda and doing our part to identify credible threats and respond to law enforcement in the wake of a terror attack or during their investigations, so we are an important component of that, and we acknowledge that there are also insights the government has that we will not have, intelligence and a broad picture, and i think that translates to these other areas as well. so when we think about election integrity, there are steps we are taking within the company to ensure greater transparency, and we are taking down fake accounts. fake accounts tend to be partially responsible for bad content, those are the bad actors, and we've
removed tens of thousands. we knew they were more likely to be contributing [inaudible] so that's what we're doing internally, and then we are trying to engage the other pieces of the puzzle. for instance, we have launched a research initiative where we will be partnering with academic researchers looking at the overall impact of these platforms on elections, so we can understand that and get ahead of it, and then we are also engaging with civil society and, where appropriate, electoral commissions, so we can understand the threats they think they will be facing and how we make sure we are responding to those
quickly. >> before the transition, you mentioned how in the early days facebook was thinking about the cool applications and how it could improve people's lives, and now there is a fuller approach to development. do you see some of the idealism of the earlier days now giving way to a kind of realism in the senior leadership ranks of facebook? >> in my time i have always seen a serious commitment from senior leaders who understand how we are impacting the world, and the conversations that we have on these issues often include senior leadership. so when we were thinking about how to build our counterterrorism team, those were not decisions i was making alone, saying we ought to invest here or do this. those are conversations that do include senior leadership.
i think what has changed is that at the broad level in silicon valley there is more of an awareness that the tools might be misused, that bad actors will use good technology for bad purposes. one change is companies investing, and number two is companies being willing to work together. five years ago i don't know that that existed, but now it's in other areas as well. if there is something where we are getting banged up, say a photo of a child being abused or a video circulating, one of the first things i do is reach out to youtube and twitter, and they do the same for us. there is a lot of cross industry collaboration on these issues. >> something similar exists where, if you think you have found an inauthentic account,
you inform other companies about it? >> we are starting to talk more about election integrity in the industry. there are a couple of things we've had for some time in the area of cybersecurity and influence operations. we have had some cooperative efforts, one that we just announced, this is a group of 31 companies that have come together and said we are going to take a stand on protecting users from any type of cyber threat, and we are going to commit to helping governments with this type of threat and commit to working together. we also have something called the threat exchange, and it is for
companies to share information they have about those who are trying to attack the infrastructure. it's a broad group of industries, but the idea is, if we become aware of somebody trying to attack the infrastructure and it's a threat, we put it in the threat exchange, and other companies can access that and use it to stay safe. >> let's transition to content policy. i will ask you a simple question, and then as you answer it would be helpful to explain the announcement you just made this morning. so, hate speech. what is it? >> there isn't a universal definition. at facebook we have a definition, which is we don't allow speech that attacks a person or group of people based on a sensitive
characteristic like race, religion, sexual orientation or gender. there's a longer list you can find in the policies, and i should give a plug for the policies, which you can find in the community standards. once upon a time they were pretty high level, don't harass anybody, don't post threats or violence, and then we launched a more detailed version of those policies. today we've released all the internal details of how we enforce the policies. so if you go to the community standards, which even if you are not on facebook are publicly available, you can search facebook community standards and you will find them. you will see our values, how we look at these issues at the highest level, and here's what we mean when we say don't post harassment or hate speech, and
then you can click more and you will see the guidance and the rules we have to enforce. what i mean is we have a team of people based around the world that review potential violations of our content standards. if you are on facebook and you see a page or group or profile or photo that you think violates the standards, you can report that to us and it will go to the community operations reviewers. we also use technical tools to proactively find violating content, and sometimes a technical tool is good enough that it can make a decision on its own, but in most cases it will say this might be a violation and it will go to review.
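a rough sketch of that routing, reports and proactive detection feeding either an automatic action or the human review queue, might look like the following. the thresholds, names and placeholder score are assumptions for illustration; the real system's signals and cutoffs are not public.

```python
from dataclasses import dataclass

# Hypothetical confidence cutoffs, not Facebook's actual values.
AUTO_ACTION_THRESHOLD = 0.99
HUMAN_REVIEW_THRESHOLD = 0.50

@dataclass
class Post:
    post_id: int
    text: str

def classifier_score(post: Post) -> float:
    """Stand-in for a trained model estimating how likely the post violates policy."""
    return 0.7  # placeholder value for illustration

def route(post: Post, user_reported: bool) -> str:
    score = classifier_score(post)
    if score >= AUTO_ACTION_THRESHOLD:
        return "automatic action"    # the tool is confident enough to decide alone
    if user_reported or score >= HUMAN_REVIEW_THRESHOLD:
        return "human review queue"  # goes to one of the 7,500+ reviewers
    return "no action"

print(route(Post(1, "reported example"), user_reported=True))  # human review queue
```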
these are real people, more than 7,500 of them, based around the world, reviewing this content, and in order to make sure we are doing it consistently, if you report a piece of content, it needs to get the same decision whether the person who reviewed it is in india or texas. so that's why we have to have this very granular, objective guidance. that is now all public, so when i say we define hate speech as attacks based on somebody's sensitive characteristics, you can see what those characteristics are and what separates an attack from other speech. >> so if someone says christians are bigots, is that hate speech? >> in some places in the world it would be and in others it would not. we have three tiers of hate speech, and they are based on the severity of the attacks, and for
the most severe we have the broadest protections, and for the least severe we have narrower protections. i should also say we know this isn't the only way to define hate speech. in fact it's a challenging area, and people have different ideas around the world. this doesn't match, for example, european law. we understand that, so there will be cases where people will disagree. second, the policies continue to evolve. the hate speech policy years ago had one tier of attacks. we have since expanded and narrowed it. now, if you read the standards we launched today, you will see that for the most severe tier, somebody who is suggesting violence against a particular group will be removed even if it isn't a credible threat. if somebody's just saying it would be great if
these people were hanged or something like that, violent speech, we will remove that very broadly, meaning we will remove it whether somebody mentions a sensitive characteristic or even a nonsensitive characteristic. so whether they attack christians, or christian teachers, or taxi drivers, if they are using that violent language we would remove it as hate speech. when you go down to tier three, that is calls for segregation or exclusion, and that language isn't the sort of speech that is used to inflame passions so much as it tends toward political speech, so to make sure we are not removing political speech we have narrower protections there. if somebody wanted to discuss, for instance, immigration policy, we want to make sure we are protecting that, so there we don't apply the policy if somebody mentions a nonprotected characteristic, and we make sure things like immigration speech we are allowing as political speech.
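one way to picture the tier structure is as a small rule table. the encoding below is invented for illustration, with an abbreviated characteristic list; it is not facebook's actual rule set, just a sketch of the severity-scaled logic she describes.

```python
# Illustrative encoding of the three-tier structure; not Facebook's rules.
PROTECTED = {"race", "ethnicity", "religion", "sexual orientation", "gender"}

TIERS = {
    1: {"kind": "violent speech",       "requires_protected_class": False},
    2: {"kind": "dehumanizing attacks", "requires_protected_class": True},
    3: {"kind": "calls for exclusion",  "requires_protected_class": True},
}

def violates(tier: int, characteristic: str, targets_people: bool) -> bool:
    """Attacks on people can violate; attacks on ideas or institutions do not."""
    if not targets_people:
        return False  # criticism of religions, countries, ideas is allowed
    if TIERS[tier]["requires_protected_class"]:
        return characteristic in PROTECTED
    return True  # tier 1 is removed even for nonprotected groups (e.g., taxi drivers)

assert violates(1, "taxi drivers", targets_people=True)   # violent speech: removed
assert not violates(3, "religion", targets_people=False)  # attacks an idea: allowed
```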
>> i will give you two potential posts and you tell me how you would engage them. on the one hand, you have an individual who posts, this christian baker is a bigot, don't go to their store, a call for segregation, and then on the other hand someone says, this mosque teaches hate, stay away. >> the second one would be allowed, and here's why. the distinction is that if you are attacking people, we will remove it. if you are attacking countries or ideas or institutions or a religion, we will allow it. why do we draw the line there? because
facebook is fundamentally about bringing people together and helping people connect with each other, and we know they won't do that if they don't have a safe place. so when we see attacks targeting people, we will remove those and consider them hate speech, but we want people to be able to engage in political speech, and that is going to include speech some people will find offensive. some criticisms of countries or religions will be upsetting to people, and that's why we give them control. you don't have to follow this page, you can block this person if you want to, but we think it's important to preserve the ability for people to have these conversations. >> and facebook is a private company, so your relationship is different than the government's, and a group like heritage understands that. >> and we also recognize this
line we are drawing doesn't always get it right. we get millions of reports every week, and we have technical tools, and when you are talking about enforcing policies on that volume of content and you want to do it consistently and make sure you get the right policy enforcement, you need to have these well-defined lines, and that means you will always have edge cases. we will sit in a room and look at a photo and say, it feels like hate speech, but under our policy it is on the other side of the line, where we would leave it up. those conversations happen, and we try to make our lines better, but i guess my point here is it's never going to be a perfect rule that pleases people all the time, so we need to be clear about where the rule is
and then look for the feedback that comes in. >> in several parts of the conversation, both today and in the testimony, reference was made to one of the key objectives being for users to feel safe using the platform, and that is easy to understand and agree with. you don't want people endangered when they are on the platform. but in an age where universities and other organizations are stifling free speech to establish safe spaces, this gets a little more complicated, and i think senator sasse brought this up when he was questioning mark. he said, can you imagine a scenario in the future where pro-life advocacy on your platform might be deemed hate speech because users identify it as causing them psychic or emotional harm?
i want to read his response because i don't know that it was as strong as many anticipated. he said, i certainly wouldn't want that to be the case. that doesn't go very far, and for conservatives in general, who, rightly or wrongly, have a perception of feeling marginalized sometimes on social media, and i'm not putting a lot into any one piece of social commentary, but it's a broad feeling, what are the values when facebook considers free speech on the platform, and how does it weigh balances like this as it thinks about that? >> it's important for us to allow the different perspectives on controversial and upsetting issues. we have a meeting every two weeks, the content standards forum, where we consider updates to the policies, and
these are things that have been flagged by some internal team or group seeing an issue, saying maybe we should change our policy. we will look at the data, talk to experts around the world and consider the options. one of the ones we looked at a month or so ago was what to do with photographs of aborted fetuses. it's a controversial topic. we know some people view it as upsetting, and to others it's important speech. we reached out to groups that are both pro-choice and pro-life and said, how can we make this work, and we decided we will leave up the content unless somebody is sharing it to celebrate or in a sadistic way mocking death, because we have other policies about that. but basically the position was, this is important speech and we need to protect it.
we are a long way from policies that would take that down. this is a place where we want people to express their diverse views, and we think a lot of good can come from that. when people have the chance to see how others think, it is a good thing. at the same time, not everybody wants to be part of those conversations, and that's why we give the tools i mentioned earlier. if you don't want to engage with a certain person or content, you don't have to. >> for a lot of other people across the spectrum, particularly in the conservative worldview, it isn't out of the realm of possibility that the policies evolve toward that, so those conversations that i think were quite normal five years ago, and i want to be careful here
because i think there's still an open debate on them, but now they are largely foregone, and so i think that is an open issue for folks. >> a reason we talk to the groups we do around the world is because this is a concern not just in the united states but something we see in a lot of places. our standards are not based on european law or anybody's law. it's about making this a safe place and giving people control to not see the content they don't want to see. if something violates the law in a country, that government can come to us and ask us to restrict the speech, and we have a process whereby we will see if it is consistent with
international norms. sometimes we end up blocking content. for instance, the german hate speech law is broader than our definition. it's a crime there to deny the holocaust, so if we become aware of that content in germany and they say you need to block it in germany, we do. it's not available there, but it's available to people in the united states.
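that country-scoped blocking reduces to a per-country restriction check at view time. a minimal sketch, with hypothetical names and structure:

```python
# Sketch of country-scoped restriction: content that violates a local law is
# withheld in that country but remains visible elsewhere. Names are hypothetical.
geo_restrictions: dict[int, set[str]] = {}  # post_id -> ISO country codes

def restrict_in_country(post_id: int, country: str) -> None:
    """Apply a government-requested restriction after legal review."""
    geo_restrictions.setdefault(post_id, set()).add(country)

def is_visible(post_id: int, viewer_country: str) -> bool:
    return viewer_country not in geo_restrictions.get(post_id, set())

restrict_in_country(42, "DE")    # blocked in germany
assert not is_visible(42, "DE")  # not available there
assert is_visible(42, "US")      # still available in the united states
```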
>> i think what we will do now is start to turn to questions from the audience. if you have a question, just raise your hand. please be quick in your question, and identify yourself and your affiliation. >> [inaudible] we expect platforms to remove objectionable content and protect users. in the united states we have section 230, which allows you to take down speech without becoming liable for moderating all speech or all content. are there similar laws around the world, and if not, is that something you would like to see, where private platforms can take down objectionable content without suddenly absorbing liability for all content moderation? >> we see some degree of that. one thing i will say is, because we are based in the united states, section 230 is fundamental to our operations. when you get requests from governments to block content around the world, it is sometimes a different kind of question.
the way we interact is sometimes based on making sure we are meeting the expectations of the community and making sure we can continue to provide a service to people in that community. so there are times, for instance, when if we don't comply with the laws of a particular country, facebook wouldn't be able to operate there, so section 230 is fundamental, but in other places the letter of the law is a consideration that we have. >> we have western journal, and in january we were the fourth most engaged publication on facebook, right behind fox news and others. after your algorithm change, as a conservative site we went down to number 22, whereas cnn went from 16 to number one.
fox had been the number one engaged publication for almost seven years, and they went to number three. steve scalise showed a chart about how the independent conservative sites have seen their traffic reduced by 14% since january, whereas the liberal leaning sites increased by 17%. what happened with the algorithm, and would you be willing to publish what you consider whitelisting publishers as opposed to blacklisting publishers? >> i don't manage that issue, because mine is the content policy, but i do know about it. when you come to facebook and go to your newsfeed, which is where you see most content, there are two options. you can choose to see the content in reverse chronological order, or you can
see what has been surfaced to you based on factors like content going viral, or most recently posted, or who you tend to interact with most, and so on. that is the algorithm, and back in january we made a change where we said people are going to see more content posted by family and friends and less content that is posted by news organizations.
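those two modes, reverse chronological versus ranked, can be pictured with a toy scorer. the factors below (virality, recency, interaction, the january boost for friends and family) are the ones mentioned here; the weights and formula are invented for illustration and are not facebook's.

```python
from dataclasses import dataclass

@dataclass
class FeedItem:
    author_is_friend: bool
    virality: float              # e.g., normalized share rate
    age_hours: float
    interaction_affinity: float  # how often the viewer interacts with the author

def score(item: FeedItem) -> float:
    s = 2.0 * item.virality + 1.5 * item.interaction_affinity - 0.1 * item.age_hours
    if item.author_is_friend:
        s += 1.0  # the january change: boost friends and family over publishers
    return s

def build_feed(items: list[FeedItem], chronological: bool = False) -> list[FeedItem]:
    if chronological:
        return sorted(items, key=lambda i: i.age_hours)  # reverse-chronological option
    return sorted(items, key=score, reverse=True)        # ranked option
```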
around the same time we came out with another announcement saying we are testing something in the united states that looks at survey results from a broad group of people across the political spectrum saying which publications have the broadest trust, and the goal, rather than a popularity test of which one you think is best, was to see across the political spectrum which organizations people tend to think are reliable, even if they are way over here and you are way over there, and then those will see increased visibility. some of this we are testing. i don't think we've got it right yet. we understand the ramifications are huge for media companies. >> [inaudible] are you blocking us from reaching our own likes? i would love to be able to help you so we can enter a dialogue on that, and our ceo is very keen that we are addressing this, but we would like a dialogue about it. >> we will definitely follow up and make that happen.
even when we are running tests, we know the ramifications are huge and we want to make sure we get that right. we kicked something off back in january of 2017, the facebook journalism project, and that is working more closely with publishers to understand how social media can work for them, because for the entire journalism industry social media has required some massive changes, and it has not all been good, and we are trying to hear what we can do to get better. >> [inaudible] a terrorism research center, and in the blog you put out yesterday
you defined terrorism as any nongovernmental organization that engages in premeditated acts of violence in order to intimidate a civilian population or government. so intimidation is the sort of intent behind the terrorist behavior, and i wondered if you have thought about broadening the definition, because they also seek to elicit angry responses, and intimidation is only one of many emotions they are trying to incite, and they play into the hands of government overreaction. i wondered if you would consider broadening the definition to accommodate things like polarization and inciting a heavy-handed military response. >> violence for a political objective, our view is that [inaudible]
it helps us streamline the review process and get content to the people who have the right language skills and expertise. we don't primarily use it to make these decisions. it is flagging the content for review and helping us. i see more situations where we can use the technology to actually make a decision without it having to go to a person. i mentioned earlier we now have the technology that can stop beheading videos at the time of upload. that is technically simple compared to a lot of the artificial intelligence work we are trying to do, so that is an area where the tools can make the decisions, but if you think about something like using technology to try to identify a
credible threat of violence or harassment or hate speech, those areas are inherently so contextual. i could write a post where i'm insulting somebody, attacking them, and use all sorts of words in that post, and then i could write a post where i say, today on the subway somebody said this to me and it made me feel awful, and i could use the same words. so finding technology that will help us understand the context is hard. we are investing in it, but it's going to take time. we are using technical tools to identify the worst operations. >> one of the key points that you brought up previously, talk a little bit about the outside review that goes on for individual content moderators. you have a regular review process to assess how well they
are doing. >> that's right. with thousands of reviewers and millions of content reports, even if we are 99% accurate, we are going to have many mistakes every day. and so we have some controls in place right now to improve accuracy, and some that we announced today. every reviewer is audited against the policies, and what that means is we sometimes double or triple review the content that a content reviewer has decided, so we can see, is she getting the decisions right, or does she need training in this area, or is she not up to the job, and that is the way that we make that quality assessment. we then have a smaller but deeper review where we are looking at all the surrounding context based on the overall [inaudible]
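the scale point is worth making concrete. the worked numbers below assume one million reports a week, the low end of the "millions" figure mentioned earlier; the audit helper is a sketch of the double-review idea, not facebook's tooling.

```python
# Worked version of the scale point: even 99% accuracy leaves a large
# absolute error count at this volume. Weekly volume is assumed.
weekly_reports = 1_000_000
accuracy = 0.99

mistakes_per_week = weekly_reports * (1 - accuracy)
print(mistakes_per_week)      # about 10,000 wrong decisions per week
print(mistakes_per_week / 7)  # roughly 1,400 per day

def audit_agreement(original: list[str], re_reviewed: list[str]) -> float:
    """Share of double-reviewed decisions where the auditor agreed with the reviewer."""
    matches = sum(a == b for a, b in zip(original, re_reviewed))
    return matches / len(original)

print(audit_agreement(["remove", "keep", "keep"], ["remove", "keep", "remove"]))  # ~0.67
```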
>> [inaudible] whether you do any sort of training, or whether you have any thoughts about the political biases of your employees, given that obviously a lot of content filtering is done by human beings. if you recall when president obama was in office, there was that memorable moment when he went to facebook and [inaudible] will you raise my taxes, and there are a lot of instances where our impression of facebook is that not just the company itself but the people that work for facebook predominantly lean one way politically, and i think that is what makes a lot of people concerned, because if the people that are filtering the content and deciding what should go up and come down might
naturally consider a certain type of legitimate speech to be hate speech, then it becomes highly problematic. and so i'm wondering, in terms of intellectual diversity, i know you have very impressive people, but someone with three degrees from yale might consider a lot of what goes on in this building to be hate speech, and that might be somebody who works on your content policy team, so i'm just wondering what sort of safeguards you have and what improvements you plan to make in that area? >> we have to take it seriously, there is no question. first, we have to get the lines and the rules right, because if we don't have consistency the process falls apart. so in terms of what kind of training we do, the audit process that i just described is one way of seeing if the reviewers are complying with these very
detailed and granular guidelines. also, the team is not monolithic. the team is based in 11 offices around the world, and they have contact with people on both sides of the aisle very regularly. the stakeholder engagement team i mentioned, their job is to make sure we are building perspectives into the policies and the granular guidance, and we look at it not just in the u.s., conservatives and liberals, but also where there are strongly held views about what's going on in a particular country. we do that by engaging with academics. the example i mentioned earlier about fetuses is a prime example of that. it's, what do the pro-life and pro-choice groups say, and how do we find something that's going to allow that.
>> something i heard you say previously is that the reason why the standards are so granular is to help weigh against bias. >> when i first came into this role, our standards around nudity were very granular, and some people internally asked, does it have to be that way? it seems a little silly, and it would be better if we told the reviewers, if it is pornography, remove it, and leave it up if it is artistic or something like scientific knowledge or education. so we did a test internally, and it showed people simply did not agree. same thing with hate speech. people will not agree if you just say, remove what's offensive to somebody's religion, and so instead we write these very detailed rules, this qualifies as an attack, but assessing a religion doesn't qualify as an attack, and we have this whole approach now that you
see in the standards, and this is the way of taking the bias out of the process and holding the reviewers accountable to the policy. >> the question was, who reviews the reviewers? if the policy is politically biased to begin with, i don't know how much it matters that the standards are that granular. >> everything i just described is how we make sure we are writing the policies from both sides. as far as who holds the reviewers accountable, there are the quality audits, and when we dig deeper into the appeals process, that process is managed by multiple teams.
>> so if you go through those and see something you don't like. >> and we are having public sessions, i think we are having one in washington, d.c., but also around the world, that are designed as feedback sessions, and people will see the way that we implement and enforce the policies right now, and if there are things where they say, this isn't right, you're getting it wrong, we want to be able to hear that. >> thank you very much for a wonderful conversation, and please join me in saying thank you. [applause]