Future of Facebook -- C-SPAN, April 24, 2018, 8:49pm-9:47pm EDT
Facebook monitors information from its users. This is half an hour. [applause] >> Good morning, and welcome to the Heritage Foundation. Facebook has more than two billion monthly active users, and approximately 1.5 billion users log in every day. Some estimates say that every minute of every day there are 510,000 comments, 293,000 status updates, and 136,000 photos posted to the social network. Facebook has come to define how many communicate with their family,
how they plan and share and socialize, and for many, particularly those overseas with offerings like the Free Basics service, it is increasingly how they get online in the first place. Facebook's influence and role in the lives of its users is growing. With this comes a great deal of responsibility, and Facebook's understanding of those responsibilities appears to be evolving. To help us better understand that evolution, we are very honored to have with us Facebook's head of product policy and counterterrorism. She and her team manage the policies that govern what content can be posted, as well as how advertisers and developers are able to engage with and use the product and the network. Prior to joining Facebook, Monika served as a legal adviser in
Thailand, as well as an assistant U.S. attorney in D.C. and in Chicago. She has a B.A. in economics and English from Rice University and a J.D. from Harvard Law. Please welcome Monika Bickert. [applause] >> Thanks for having me. >> Monika, real quick, before we get into the specific questions, just set the stage and help people understand your expertise and where you are coming from. Talk a little bit about you, your background, and what is behind what you are doing. >> I'd like to mention that before I came to Facebook, I was at the U.S. Justice Department, where I spent more than a decade as a criminal prosecutor. [inaudible] the team that manages how we respond to government requests, when governments ask Facebook for account
information. I took over that team about five and a half years ago, and since then it has continued to expand. Right now we have a team that manages our content standards and external engagement. The standards are constantly evolving, and a lot of that involves reaching out to safety groups and freedom-of-expression academics around the world. We have a response team. Their job, any time there is, let's say, a terror attack or some sort of natural disaster that has led to a lot of casualties, is to understand the events on the ground and make sure we are prepared for comments we might see posted on Facebook as a result. We have a team that focuses on child safety, for instance, and
maintains relationships with child-safety groups around the world. About two years ago we created our counterterrorism team at Facebook. This team is not just policy people. We have engineers who help us find and remove content. We have lawyers who help us send credible threats we might find to law enforcement, and we have others, such as academics. We have the former director of West Point's counterterrorism research center. We have a Ph.D. in extremist organizations. So that expertise is on the team, and it helps us maintain relationships with those who are working on these issues in the field outside of Facebook.
>> The way I'd like to do this is to ask just a couple of big-picture questions about how Facebook is thinking about things right now, and then dig deeper into content policy issues. Just to begin: when Mark was testifying recently, one of the key themes he sounded again and again was that Facebook had not previously taken a broad enough view of its responsibilities, and that it intended to broaden that understanding. You have been at Facebook for a little over six years, which makes you an old-timer. What was the narrower view, and what is changing now? >> Certainly the company has grown, both in terms of people using the service and people who work at the company, but also in terms
of the approach we take. If I go back to where we were as a company when I joined, we had about 2,000 employees at that time. The engineers who were working on these issues, on developing new products and features, were like those you'd find at any small startup. These are people who have great ideas, and they would not necessarily think about how their ideas would be used, or could be abused, by others. I remember back then having conversations with engineers who were launching or building new products or features, and we would say, okay, we need to make sure that we have safety measures in place, that people can report abuse, and that we have policies that will govern this, and they would say, I don't think anybody is going to use this for bad. It was a natural idealism that
comes with people who are envisioning the wonderful ways a product can be used and not seeing the bad ways it might be used. As a company, over the years, we have really evolved. My team, if I think back five years ago, would have to find the engineers and say, let's have a conversation about this. Now this is all very much part of the philosophy: when engineers are envisioning new products or features for the first time, they are making sure this is done consistent with the community we are all trying to build. On my team we have people specialized in particular products and features, and they work very closely with the product teams. >> Does Facebook have a group of folks whose job it is to think of terrible ways to use it? >> In my job, not only do I draw on my own
experiences as a federal prosecutor -- there are certain abuse types that I would naturally think of -- I have also tried to build a team that has that specialty. We might have a woman who prosecuted rape cases and a man who was a child exploitation prosecutor. We have people with subject-matter expertise in different types of abuse. They bring that to the table, but what is really important is maintaining relationships with groups around the world. There are literally hundreds that we talk to on a regular basis, and part of it is: you tell us if there's something you think we are not getting right, and sometimes those things are small. In Brazil, there is a trend -- a challenge that young people are sharing with one another. If there is something like that, we want to be aware of it so we can put policies in place.
>> In your title is the term counterterrorism, and maybe that sounds normal now, but I remember a time when that would have been especially peculiar. I'm sure as terrorist propaganda began to spread online, that became a core mission of what you have done specifically, and I know you have made significant advancements. Talk about that a little bit. >> Yesterday, actually -- if you go to the newsroom, you will find a post that we put out on the most recent data from the last quarter, Q1 of 2018. We took down 1.9 million posts for violating our terrorism policies -- unless it was, for instance, somebody sharing a photo of ISIS to condemn it or raise awareness.
It allows us to take a video -- the software converts it to what we call a digital fingerprint -- and we store that in a shared database that the other participating companies have access to. Any company that wants to keep that particular video off its site can take that fingerprint, and if somebody tries to upload the video to their site, they will catch it. They contribute fingerprints as well, and we get the benefit of theirs, so it's the industry working together. >> Thinking about the challenge more broadly: in the same way that counterterrorism, not long ago, was a government function -- and now, because of the presence of material on the platform and people being able to use it, a portion has migrated to you -- do you have a sense of other national security priorities beginning to migrate into your purview? When we think about cybersecurity generally, that then becomes a problem for you. >> For those of you who don't know, more than 85% of the people using Facebook are outside of the United States, so when we think
about disinformation and making sure we are helping with election integrity and things like that: we see many policymakers use Facebook to reach out to constituents and to connect with them, but also people using the same tools to interfere, so making sure we are getting that right -- not just here, but in upcoming elections in Brazil and India and Mexico -- is very important. >> In terms of government engagement on these issues, one of the questions I have is: is the ability to engage these issues being disrupted somewhat because of where it is occurring, and are you feeling a more direct engagement for
assistance on these issues? >> i think even way back when when i was a prosecutor there were things we could do a load that for most things there were different rules for different people to play and i think that's true in this space as well. there's removing propaganda inta into doing our part to identify threats and quickly respond to law enforcement in the wake of a terror attack during an investigation so we are an important component and acknowledge that there is also insights governmentthere'salso e that we won't have. they have intelligence i intelld sometimes a broader picture and that translates to other areas as well so when we think of election integrity there are steps we are taking to ensure greater transparency.
We have seen that fake accounts tend to be disproportionately responsible for that content -- it is often the bad actors -- and the machine-learning tools have gotten better, so we removed tens of thousands of accounts. We knew that they were more likely to be contributing to that through the content we might see. So that's what we are doing, and then there's what we are doing to engage with the other pieces of the puzzle. For instance, we have launched a research initiative where we will be partnering with a broad group of academic researchers who will be looking at the overall impact of
Facebook on elections, so we can understand that and get ahead of it. And then we are also engaging with civil society in countries around the world to understand: what are the threats you think you will be facing, and how do we make sure we respond to those quickly? >> You mentioned that in the early days, engineers would think about the cool applications and how those would encourage thriving, and now there is a formal approach to development. Would you say some of that thinking has reached the senior leadership ranks of Facebook? >> Always in my time I have seen a commitment to understanding exactly how we are impacting the world, and the conversations we have on these issues include senior leadership. So when we are
thinking of how to build the counterterrorism team, those are not decisions I am making alone -- whether we have the wherewithal to invest here and do this, those are conversations that include senior leadership, so it has been there. What has changed is that, at a broader level in Silicon Valley and at companies generally, there is more of an awareness that the tools might be abused. There will always be bad actors. One change is companies investing and being willing to work together. Five years ago? I don't think so. Now it is not just an attacker in a company's own space but other areas as well: if there is something we are getting wind of -- there is a photo of a child being abused, or a video circulating -- one of the first things a company will reach
out to is YouTube and Twitter, and that kind of industry collaboration on these issues wasn't there five years ago. >> What about things like, we think they are pushing this narrative -- do other companies speak to that? >> We are starting to talk more about that space. A couple of things we have had for some time are in the area of cybersecurity, which disinformation operations are a part of. We have had some efforts before, and one that we just announced is a group of 31 companies -- though I think it's grown. This is a group of companies that have come together and said, we are going to take a stand on protecting users from any type of cyber threat. We are going to
commit to not helping governments with any type of cyberattack, and we commit to working together. And there's a framework we have had in place since 2015 called the ThreatExchange. That is for companies to share information they have about those who are trying to attack their infrastructure. It's a broad group of industries, but the idea is: if we become aware of somebody attacking the infrastructure, whether a state or nonstate actor, we put it in that exchange, and other companies that are part of it can access that and use it to keep their sites safe. >> Let's transition to content policy. I will ask a simple question, and as you answer, it will be helpful if you explain the announcement that was made this
morning. So, hate speech: what is it? >> There is no universal definition. At Facebook we have a definition, which is that we don't allow speech that attacks a person or group of people based on a sensitive characteristic like race, religion, sexual orientation, or gender. There's a longer list you can find in the policies -- and I can do a plug for the policies. There are policies for what you can post, and once upon a time they were pretty high-level: don't harass anybody, don't post violence. Then in 2015 we launched a more detailed version of the policies. Today we released all of our internal details about how we enforce the policies, so if you go to the community standards -- even if you're not on Facebook,
they are publicly available -- you will find them. You will see the values, how we think about these issues at the highest level, and here's what we mean when we say don't post harassment or hate speech. And then, if you want, you can click further and see the guidance we give the reviewers -- the rules they have to enforce. What I mean is, we have a team of people based around the world who review potential violations of our content standards. They review content that has either been flagged by people on Facebook -- if you see a page or group or profile or photo that you think violates the standards, you can report it to us, and it will go to the community operations reviewers -- or we use some of our technical tools to proactively find violating content, and if we
find that, sometimes the technology is good enough that it can make the decision on its own, but in most cases it will say, this may be a violation, and it will go to a content reviewer. So these are real people, more than 7,500 of them, based around the world, reviewing this content. And to make sure we are doing that consistently -- if you report a piece of content, we need to make sure the reviewer reaches the same decision whether they are based in India or Texas -- that's why we have to have these very granular, objective rules. So when we define hate speech as an attack based on somebody's sensitive characteristics, you can see: what are those characteristics, what does Facebook mean, how do we treat these. >> So if someone says Christians are bigots, is that hate speech?
>> In some places in the world it would be; in others it would not. At Facebook, we would look at it. We have three tiers of hate speech, and they are based on the severity of the attack: for the most severe, we have the broadest protections; for the least severe, we have narrower protections. And I should say two things. One, we know this isn't the only way to define hate speech. People have different ideas around the world, and this doesn't match European law. We understand that, so there will be cases where people disagree with us. The other thing is that the policies continue to evolve. The policy three years ago had one tier of attacks, and we have since expanded and narrowed it, and now, when you read the standards
we've launched today, the most severe tier would be something like somebody suggesting violence against a particular group. Even if it's not a credible threat -- somebody said it would be great if these people were hanged, or something like that -- we would remove that very broadly, meaning we will remove it if somebody mentions a sensitive characteristic even if they are also mentioning a non-sensitive characteristic. So if they attack Christians, or said Christian teachers or Christian taxi drivers -- if they are using that most aggressive language, we would remove it as hate speech. When you go down to tier three, it is things like calls for segregation or exclusion, and for that, language that might
be used to inflame passions is also the language of political speech, so we want to make sure we are not removing political speech -- hence the narrower protections. If somebody wants to discuss immigration policy, we want to make sure we are protecting that. And if somebody mentions a non-protected characteristic, there are no protections, and we make sure that speech is allowed. >> I will give you two potential posts, and you tell me how you would engage them. On the one hand, an individual posts: this Christian baker is a bigot, don't go to their store. And on the other hand, someone says: this mosque teaches hate, stay away.
>> The second one would be allowed, and here's why. Our distinction is: if you are attacking people, we will remove it. If you are attacking concepts or ideas or institutions or religions, we will allow it. Why do we draw the line there? Facebook is fundamentally about bringing people together and helping people connect with each other, and we know they won't do that if they are in a place full of attacks targeting people, so we remove those and consider them hate speech. But we want people to engage in political speech, and that is going to include speech that some people will find offensive. Some criticisms of countries or religions will be upsetting, and that's why we give people controls: you don't have to follow the page, you can block a person if you don't want to see their content. It is important to preserve the ability to have those conversations. >> It's important to stipulate that Facebook is a private company, so
your relationship with the First Amendment is different from that of the government with a group. >> That's right -- the First Amendment does not apply to us in the same way, but we also recognize that, with the lines we are drawing, we will not always get it right. We get millions of reports every week, plus the technical tools identifying other content. When you are talking about enforcing policies on that volume of content, if you want to do it consistently and reach the right decision whether the person is in India or Texas, like I said earlier, then you need to have these well-defined lines. It means you will always have hard cases. I will sit in a room and look at a photo and say, it feels like hate speech, but under our policies it's just on the other side of the line, where we would leave it up.
Those conversations happen, and then we try to make it better. But I guess my point is, it's never going to be a perfect rule that pleases all people all of the time. We want to be clear about where the rule is, and then look for feedback and continue to get better over time. >> In the conversation both today and in the testimony, reference was made to the idea that one of Facebook's key objectives is for people to feel safe using the platform. That is easy to understand and agree with -- you don't want people to feel in danger when they are on the platform -- but in an age where universities and other organizations are deliberately stifling free speech in an effort to establish safe spaces, this gets a little more complicated, and I think one senator brought this up when he was questioning Mark. He said,
can you imagine a scenario in the future where advocacy -- pro-life advocacy -- might be deemed hate speech because users identify it as causing them psychic or emotional harm? And I want to read his response, because I don't think it was as strong as many anticipated. He said: I certainly wouldn't want that to be the case. That doesn't rule it out, so I think conservatives in general, who rightly or wrongly have the perception of feeling marginalized --
>> We want to allow different perspectives on very controversial and upsetting issues. I will give you an example. We had a meeting about whether we should change a policy, and we looked at the data. One of the things we looked at is images of fetuses -- it is a controversial topic. We know that some people will see such an image as very upsetting. We reached out to groups that are both pro-choice and pro-life and said, how can we make this work, and we decided we would leave the content up unless somebody
is sharing it to celebrate in a sadistic way, because we have other policies about that. But basically the decision was, it's important speech and we need to protect it. We are a long way from policies that would take down unpopular beliefs. Facebook is a place where we want people to express their views, and we think a lot of good can come from that. If you don't want to engage with a certain kind of content or with a certain person, you don't have to.
who you represent. >> We expect platforms to remove objectionable content, and in the United States we have Section 230, which allows you to take down speech without becoming liable for moderating all speech or content. Are there similar laws elsewhere in the world, and if not, is that something you would like to see, where platforms can take down objectionable content without absorbing liability for all content moderation?
>> There are times when, for instance, if we don't comply with the laws in a particular country, we would not be able to operate there, so it isn't just about a Section 230 framework -- in other places there's a broad spectrum of considerations that we have. >> Gabriel Joseph -- thanks for your talk. We are with The Western Journal, and
we were the fourth-largest publication on Facebook, right behind Fox News and others. When you did your algorithm change, we went from one of the top conservative sites down to number 22, whereas CNN went from 16 to number one, and Fox, which had the number one engagement for almost seven years, went to number three. Steve Scalise showed a chart when Mark was testifying on Capitol Hill: traffic to conservative sites was reduced by 14% in January, whereas liberal-leaning sites increased by 17%. What happened with the algorithm, number one? And number two, would you consider publishing a whitelist of publishers, as opposed to blacklisting publishers?
>> I don't manage that issue, but we know about it. When you come to Facebook and go to your News Feed, which is where you see most content, there are two options: you can choose to see it in reverse chronological order, or ranked based on factors like what is going viral, what was most recently posted, what you have interacted with most, and so forth. Back in January we made a change where we said people are going to see more content posted by friends and family, which necessarily means they will see less content, as a percentage, posted by news organizations. Around the same time, we made another announcement saying we are testing something in the United States that looks at
survey results from a broad group of people across the political spectrum, asking which publications have the broadest trust. The goal, rather than a popularity contest of which ones people like best, is to look across the political spectrum at which media organizations people tend to think are reliable, whether they lean one way or the other. It's something we are just starting off with, and we want to understand what it means for media companies. I would love to connect you with the team doing the testing so you can enter into a dialogue
with them. We know that publications are huge for us, and we want to make sure we understand the effects if we take something off. Back in January 2017 we launched the Facebook Journalism Project, and part of that initiative is working more closely with publishers to understand, because for the entire journalism industry, social media has required some changes -- and they have not all been good -- so we are trying to hear what we can do better.
>> In the report that you put out yesterday, you define a terrorist organization as any nongovernmental organization that engages in acts of violence in order to intimidate a civilian population or government for an ideological goal. So intimidation is the intent behind the behavior, and I wonder if you have thought about broadening that, because terrorist organizations also seek to elicit angry responses, and intimidation is only one of many reactions they are trying to elicit, and many of the others could also play into the hands of terrorists, like a government overreaction.
I wonder if you have considered broadening beyond intimidation to accommodate things like polarization, or inviting a heavy-handed military response. >> Our intention is to remove that, and our view is that it would be sort of indirectly covered, but we are totally open to [inaudible] >> During the hearing, one of the things that was talked about is using AI and other tools to automate the process. So as the technology grows and develops, what role do you see it playing in terms of how we look
at content and other policy concerns that could emerge? >> We use it to help identify potential violations and to help us streamline the review process and get content to the people who have the right language skills and expertise. We don't primarily use it to make the final decisions; it is flagging the content for review and helping us prioritize. I see more situations coming where we can use the technology to actually make a decision without it having to go to a person. I mentioned earlier we now have video-matching technology that can stop beheading videos at the time of upload.
That is technically simple compared to a lot of the artificial intelligence work we are trying to do, so that is an area where, as we announced, the technology can make the decisions. But if you think about something like using technology to try to identify a credible threat of violence, or harassment, or hate speech, those areas are so inherently contextual. I could write a post where I'm insulting somebody, attacking them, and use all sorts of words in that post, and then I could write another post that says, today on the subway somebody said this to me and it made me feel awful, and I could use the same words. So finding technology that will help us understand the context is hard. We are investing in it, but it's going to take time. We are also using technical tools to identify inauthentic operations.
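The "technically simple" upload-time matching she contrasts with contextual AI can be sketched as a shared fingerprint lookup. This is an illustrative sketch only: the function names and the in-memory set standing in for the cross-company database are invented for the example, and production systems use perceptual hashes that survive re-encoding and cropping rather than the exact SHA-256 digest used here.

```python
import hashlib

# Stand-in for the shared database that participating companies can
# both contribute to and query.
shared_hash_db = set()

def fingerprint(video_bytes: bytes) -> str:
    """Convert media bytes to a compact 'digital fingerprint'."""
    return hashlib.sha256(video_bytes).hexdigest()

def register_known_bad(video_bytes: bytes) -> None:
    """A participating company contributes a fingerprint of known terror content."""
    shared_hash_db.add(fingerprint(video_bytes))

def should_block_upload(video_bytes: bytes) -> bool:
    """At upload time, check the fingerprint against the shared database."""
    return fingerprint(video_bytes) in shared_hash_db

# One company registers a known video; every participant can now block it.
register_known_bad(b"known propaganda video bytes")
assert should_block_upload(b"known propaganda video bytes")
assert not should_block_upload(b"unrelated home video bytes")
```

The key design point is that only fingerprints, not the media itself, need to be shared, which is why an exchange like this can work across competing companies.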
>> One of the key points you brought up previously -- talk a little bit about the review that goes on for individual content moderators. Do you have a regular review process to assess how well they are doing? >> That's right. With thousands of reviewers and millions of reports, even if we are 99% accurate, we are going to have many mistakes every day. And so we have some controls in place right now to improve accuracy, and some that we announced today. Every reviewer is audited against the policies, and what that means is we sometimes double or triple the review of a piece of content that a content reviewer has decided, so we can see: is she getting the decisions right, or does she need more training in this area, or is she not up to the job? That is the way we make that
quality assessment. We then have a smaller but deeper review, where we look at all the surrounding context based on the overall spirit and protections of the policy. And we will now allow people to appeal decisions we made on individual pieces of content: if we removed your photo or post, you can appeal it to us, and somebody will look at it and see if there is additional context.
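Her scale point -- 99% accuracy still means many daily mistakes -- and the sampled re-review she describes can be illustrated with a short sketch. The one-million-decisions-per-day figure, the function names, and the toy data below are assumptions for illustration only; the transcript says just "millions of reports every week."

```python
import random

# Back-of-the-envelope: at large volume, high per-decision accuracy
# still leaves many errors. One million decisions/day is an assumed
# round number for illustration.
daily_decisions = 1_000_000
accuracy = 0.99
expected_errors = int(daily_decisions * (1 - accuracy))
print(f"Expected wrong decisions per day: {expected_errors:,}")  # 10,000

# Sketch of the audit she describes: independently re-review a random
# sample of one reviewer's decisions and estimate accuracy from the
# agreement rate.
def estimated_accuracy(decisions, rereview, sample_size=100, seed=0):
    rng = random.Random(seed)
    sample = rng.sample(decisions, min(sample_size, len(decisions)))
    agreed = sum(1 for item, label in sample if rereview(item) == label)
    return agreed / len(sample)

# Toy data: a reviewer who labeled everything "allow", audited against
# a re-reviewer who says 10% of items should have been removed.
decisions = [((i,), "allow") for i in range(200)]
def rereviewer(item):
    return "remove" if item[0] % 10 == 0 else "allow"

rate = estimated_accuracy(decisions, rereviewer)  # roughly 0.9
```

Sampling only a fraction of decisions keeps the audit affordable while still surfacing reviewers who need retraining.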
>> [inaudible] -- whether you do any sort of training, or whether you have any thoughts about the political biases of your employees, given that obviously a lot of content filtering is done by human beings. If you recall, when President Obama was in office, there was that memorable moment when he went to Facebook and an employee asked, will you raise my taxes? There are a lot of instances where our impression of Facebook -- not just
the company itself -- is that the people who work for Facebook predominantly lean one way politically, and I think that is what makes a lot of people concerned. If the people who are filtering the content and deciding what should go up and come down naturally consider a certain type of speech that's legitimate to us to be hate speech, then it becomes highly problematic. So I'm wondering about intellectual diversity -- I know you have very impressive people, but someone with three degrees from Yale might consider a lot of what goes on in this building to be hate speech, and that might be somebody who works on your content policy. So what sort of safeguards do you have, and what improvements do you plan to make in that area? >> We have to take it seriously; there is no question. First, we have to get the lines
drawn properly, because without consistency the process falls apart. In terms of what kind of training we do: the audit process I just described is one way of seeing if the reviewers are complying with these very detailed and granular guidelines. Also, the team writing the policies is not monolithic -- we are based in 11 offices around the world, and we have contact with people on both sides of the aisle very regularly. The stakeholder engagement team I mentioned -- their job is to make sure we are building perspectives into the policies and the granular guidance, and we look at it not just in the U.S.; outside the U.S. it isn't the usual conservative-liberal divide, and there are very strongly held views about what's going on in a particular country.
We do that by engaging with academics. The example I mentioned earlier about images of fetuses is a prime example of that: what do the pro-life and pro-choice groups say, and how do we find something that is going to allow that speech? >> Something I heard you say previously is that the reason the standards are so granular is to help guard against bias. >> When I first came into this role, our standards around nudity were already very granular, and some people internally asked, does it have to be that way? It seems a little silly; wouldn't it be better if we just told the reviewers, if it is pornography, remove it, and if it is artistic or something for scientific knowledge or education, leave it up? So we did a test internally, and it showed conclusively that people did not agree. Same thing with hate speech: people will not agree if you just ask,
is this offensive to somebody's religion? So instead we write these very detailed rules -- this qualifies as an attack, but criticizing a religion doesn't qualify as an attack -- and we have this whole approach now that you see in the standards. This is a way of taking the bias out of the process and holding the reviewers accountable to the policy. >> The question was, who reviews the reviewers? If the policy is politically biased to begin with, I don't know how much it matters that the standards are granular. >> Everything I just described is how we make sure we are writing the policy from both sides.
As far as who holds the reviewers accountable: the quality audits where we dig deeper, and the appeals process -- that process is managed by multiple teams. >> So if you go through those and see something you don't like? >> We are also having public sessions -- I think we are having one in Washington, D.C., and others around the world -- that are designed as feedback sessions, where people will see the way we implement and enforce the policies right now, and there will be things they say where, this is not right, you're getting it wrong, and we want to be able to hear that. >> Thank you very much for a wonderful conversation, and please say thank you. [applause]