tv Outnumbered FOX News November 17, 2020 9:00am-10:00am PST
9:00 am
senators. we have 48 on the democratic side. if it's a tie, kamala harris will be the tiebreaker and chuck schumer would be the senate majority leader, sandra. >> sandra: that's it for us from "america's newsroom." thanks for joining us, everybody. outnumbered starts right now. ♪ >> harris: and we are awaiting that hearing with big tech ceos to resume on capitol hill. we will take you there expeditiously when that happens. all of this is playing out as tensions remain high over the 2020 election results. earlier today facebook ceo mark zuckerberg and twitter ceo jack dorsey faced accusations of suppressing conservative views and interfering in the election. senators were asking some tough questions about big tech's handling of president trump's statements before and after the election. and republicans pressed dorsey on twitter's blocking of "new york post" stories on hunter biden's business dealings. watch.
9:01 am
>> if you are not a newspaper at twitter or facebook, then why do you have editorial control over the "new york post"? that, to me, seems like you are the ultimate editor. >> you do realize that by taking down that story you probably gave it more prominence and more visibility than it ever would have gotten had you left it alone? >> we realize that and we recognize it as a mistake that we made. >> harris: the twitter chief also defended flagging certain content that may be false. >> more than a year ago the public asked us to offer additional context to help make potentially misleading information more apparent. we did exactly that, applying labels to over 300,000 tweets from october 27th to november 11th, which represented about 2.2% of
9:02 am
all u.s. election related tweets. >> harris: meanwhile the media research center released a study showing that twitter has censored or flagged tweets by president trump and his campaign 194 times since may of 2018. however, it has not censored joe biden or his campaign a single time. you are watching outnumbered, i'm harris faulkner. here today, townhall editor and fox news contributor katie pavlich; from fox news headlines 24/7, carley shimkus; executive director of serve america pac and fox news contributor marie harf; and joining us today, former arkansas governor and fox news contributor mike huckabee. great to see you. governor huckabee, you are prolific on twitter, and you often make us laugh. what is your take on how this hearing is going so far? >> mike: these guys crawl up to capitol hill every now and then, or maybe they do it remotely. however they do it, it's the
9:03 am
same old song and dance. oh, we are sorry. yeah, we have made some mistakes. look, these are not mistakes. they are intentional attempts to tilt all conversation in the country toward the left. i think anybody who denies that is just denying reality. they ought to put a big disclaimer over all of their sites, including google and youtube, and just say, look, we are about as reliable in providing factual information as a carnival barker promising you are going to win the big teddy bear if you just put in another buck. just let people use the forum. let people decide for themselves what's true, what's not. we don't need some silicon valley 22-year-old who just got out of college and is idealistic and liberal deciding what we should be able to see. and that's how it is. it ought to be like the phone company. if i call you, harris, and we have a conversation, the phone company doesn't get to say we don't like what you are saying, we are going to take
9:04 am
your phone off the hook. that's not how it ought to work. >> harris: unfortunately, i was being spoken to in my ear the entire time you were finishing that. i bet it was funny and very good. >> it was great. >> marie: it was. >> harris: i know it was, it's the governor. as they get things set and restart at this hearing we will monitor it and go right back to it. katie, you heard senator mike lee. i have been watching the hearing, and one thing that stood out to me was that very point that governor huckabee just made about the censorship and how many times more it happens for a certain side of the political aisle, which is how senator lee put it. >> katie: it's very obvious all the data goes in one direction when it comes to certain political candidates who are censored, or the context features that are added to tweets, tweets that are taken down and banned. it's been portrayed that the hunter biden story was taken
9:05 am
down because it was a conservative viewpoint. but the fact is that the hunter biden story was based on journalism, facts, documents, and witnesses. it was still censored in the final weeks of an election. more importantly, this is about how congress is going to handle big tech companies. mike lee made a very interesting point about representation and consumer law. if twitter and facebook represent themselves as a certain type of open forum platform but the company is doing something other than what they represent to their customers, then that's fraud. looking at that is one thing. you also had senator lindsey graham asking about whether twitter and facebook were addictive companies or addictive services, which opens up a whole other avenue for congress to regulate these companies. censorship is a huge issue, but we also have these other things that these senators are looking at in terms of how to regulate these companies.
9:06 am
>> harris: yeah, you know, carley, that was about the point of the hearing that i wanted them to fact check, because the answer from one of the ceos of big tech -- i can't remember whether it was dorsey or zuckerberg, i want to say zuckerberg, so correct me if i am wrong -- while talking about addiction was, well, you know, we don't know about the total results and findings on addiction with the platform. but they try to make it more about the content and not about making people stay for longer periods of time. i'm skeptical. >> the addiction element of this whole thing is interesting. i believe it's josh hawley who proposed a limit on how far you can scroll down on instagram, because really, there is no end in sight to the scrolling, and believe you me, i have tested it from time to time. but, you know, these hearings
9:07 am
are really a result of years of allegations of conservative censorship and years of facebook and twitter denying any wrongdoing. this all started in 2016, when a contractor accused facebook of downplaying conservative stories in their trending section, and then when president trump won the election the floodgates were opened. you had video of google executives crying over the election results, and then allegations of shadow banning conservatives. and the more they denied wrongdoing, the less believable it became. these companies are filled with very smart, very politically active, very liberal employees, so of course they treat conservative posts differently than liberal posts. the question is really what to do about it. section 230 is always batted around. i feel like if you remove that, it has some unintended consequences of actually creating more censorship on social media rather than less. i think that president trump and
9:08 am
lawmakers could fix this in the blink of an eye just by leaving -- they post a lot of critical tweets on twitter. >> harris: that's interesting, too. >> it's more business for them. if you want to leave, use your free market power and go somewhere else. >> harris: you know what's so interesting about that, too, marie, as i come to you. you talk about the president wanting to get involved in any of this -- you know, he has talked about it, he has tweeted about it. but if he wants to have power, whether he is in a second term or not, all he has to do is jump off twitter. you would no longer get those policy updates from him first person, firsthand, first thing in the morning, whatever it is. but you also don't get the power of the 87-plus million followers then retweeting and putting stuff out there. i mean, it is a huge platform that the user can have, depending on the user. >> well, these are also private companies, harris. republicans talk ad nauseam
9:09 am
about taking regulations off of private companies, letting private corporations make their own decisions, whether it's the oil and gas industry, whether it's gun manufacturers -- except in this one area, which i find slightly hypocritical. but if you take facebook, for example, on any given day a majority of the top posts are from conservative outlets. in the last 24 hours, dan bongino has two of the top ten. fox news has two of the top ten. donald trump has two of the top ten. so the idea that conservatives -- just talking about facebook -- somehow don't have a platform: almost every single day, if you go back weeks and months, which i have done, a majority of the top performing posts are conservative. and the reason donald trump gets flagged on twitter more than joe biden is because donald trump lies. and he says things that aren't true every single day, including just this week that he won the election. joe biden just doesn't do that. and so, look, republicans claim
9:10 am
to love free enterprise and love the private sector, except when they don't. >> harris: i'm going to have to interrupt you, marie. >> marie: i'm perplexed. >> harris: it's not just one area. free enterprise is part of it, but it's also free speech. the man you are looking at on the screen right now talks about it all the time: senator ted cruz of texas. let's go back to the hearing now. big tech on the hill. >> voices that senate democrats disagree with more. that is very dangerous if we want to maintain a free and fair democracy, if we want to maintain free speech. there was a time when democrats embraced and defended the principles of free speech. there was a time when democrats embraced and defended the principles of a free press. and yet there is an absolute silence from democrats speaking up for the press outlets censored by big tech. there is an absolute silence from
9:11 am
democrats speaking out for the citizens silenced by big tech. instead, there is a demand that they use even more power to silence dissent, and that is the totalitarian instinct that i think is very dangerous. at the same time that big tech exercises massive power, it also enjoys massive corporate welfare through the effect of section 230, a special immunity from liability that nobody else gets. congress has given big tech, in effect, a subsidy while they have become some of the wealthiest corporations on the face of the planet. mr. dorsey, i want to focus primarily on twitter and ask you initially, is twitter a publisher? >> is twitter a publisher? >> yes. >> no, we are not. we distribute information. >> so what is a publisher? >> an entity that is publishing
9:12 am
under editorial guidelines and decisions. >> well, your answer happens to be contrary to the text of federal statute, particularly section 230, which defines an information content provider as any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the internet or any other interactive computer service. let me ask you, was twitter being a publisher when it censored the "new york post"? >> no. we have very clear policies on the conduct we enable on the platform, and if there is a violation we take enforcement action. people choose to commit to those policies and to those terms of service. >> except your policies are applied in a partisan and selective manner. you claim it was hacked materials, and yet you didn't
9:13 am
block the distribution of the "new york times" story that purported to discuss president trump's tax returns, even though a federal statute makes it a crime to distribute someone's tax returns without their consent. you didn't block any of that discussion, did you? >> our policy was focused on distribution of the actual hacked materials, and the "new york times" -- >> did you block distribution of the president's tax return material? >> in the "new york times" case we interpreted it as reporting about the hacked material. >> did you block edward snowden when he illegally released material? >> well, i don't have the answer to that. >> the answer is no. you have used this in a selective manner. let me ask you, were you being a publisher when you forced politico and other journalistic outlets to take down their tweets on a topic that you had deemed impermissible? >> no. we were enforcing our policy and our terms of service. >> so on october 15th, jake sherman, a reporter at politico,
9:14 am
tweeted the following: i tweeted a link to the "new york post" story right after it dropped yesterday morning. i immediately reached out to the biden campaign to see if they had any answer. i wish i had given the story a closer read before tweeting it. twitter suspended me. so you actually have a reporter reporting on a story, asking the other side for comment, and twitter says, hi @jakesherman, your account, @jakesherman, has been locked for violating rules. what did the reporter do? he immediately tweets after that: my goal was not to spread misinformation. well, that's a little worrisome in and of itself. my goal was to raise questions about the story -- oh, my overlords in silicon valley, you don't understand, i was attacking the "new york post," as i did in subsequent tweets -- and to see how the biden campaign was going to respond. they later did respond. and then, not long after, jake sherman comes back with: my account is clearly no longer
9:15 am
suspended. i deleted the tweet. when twitter is editing and censoring and silencing the "new york post," the newspaper with the fourth highest circulation in the country, and politico, one of the leading newspapers in the country, is twitter behaving as a publisher when it's deciding what stories reporters are allowed to write and publish and what stories they are not? >> no. that account is not suspended. on the hacked materials policy, we realized that there was an error in that policy and the enforcement. >> hold on, i'm literally looking at the tweet from twitter that says your account has been locked. you are telling me that this is not accurate? >> that's a lock, and it can be unlocked when you delete the tweet. >> i understand that you have the star chamber power. your answer is always, well, once we silence you, we can choose to allow you to speak. but you are engaged in publishing decisions. let me shift to a different topic. mr. dorsey, does voter fraud
9:16 am
exist? >> i don't know for certain. >> are you an expert in voter fraud? >> no, i'm not. >> well, why then is twitter right now putting purported warnings on virtually any statement about voter fraud? >> we're simply linking to a broader conversation so that people have more information. >> no, you're not. you put up a page that says, quote, voter fraud of any kind is exceedingly rare in the united states. that's not linking to a broader conversation. that's taking a disputed policy position, and you are a publisher when you are doing that. you are entitled to take a policy position, but you don't get to pretend you are not a publisher and get a special benefit under section 230 as a result. >> that link is pointing to a broader conversation with tweets from publishers and people all around the country. >> mr. dorsey, would the following statement violate twitter's policies? quote: absentee ballots remain the largest source of potential
9:17 am
voter fraud? >> i imagine that we would label it so that people can have more context. >> how about this one? quote: voter fraud is particularly possible where third party organizations, candidates, and political party activists are involved in handling absentee ballots. would you flag that as potentially misleading? >> i don't know the specifics of how we might enforce that, but i imagine a lot of these would have a label pointing people to a bigger conversation. >> well, you are right, you would label them, because you have taken the political position right now that voter fraud doesn't exist. i would note both of those quotes come from the carter-baker commission on federal election reform -- that is, democratic president jimmy carter and former secretary of state james baker -- and twitter's position is essentially that voter fraud does not exist. are you aware that just two
9:18 am
weeks ago in the state of texas a woman was charged with 134 counts of election fraud? are you aware of that? >> i'm not aware of that. >> if i tweeted that statement with a link to the indictment, would you put a warning on it that says, well, the democratic party position right now is voter fraud doesn't exist? >> i don't think it's useful to get into hypotheticals, but i don't believe so. >> you don't believe so. well, we are going to test that. we will tweet that, and let's see what you put on it. yesterday we spent a considerable amount of time on the phone, and you said you wanted to embrace transparency. so i want to ask you. i have asked twitter and facebook multiple times: how many times have you blocked republican candidates for office -- their tweets or their posts -- in 2016 and 2018 and 2020? how many times have you blocked democratic candidates for office? how many times have you blocked republican office holders? how many times have you blocked
9:19 am
democratic office holders? twitter has repeatedly refused to answer that question with specific, hard data and cataloging examples. in the interest of transparency, which you said you want to embrace, will you commit in this hearing right now to answer those questions in writing? >> that's exactly what we want to do. >> i'm sorry, mr. dorsey, i didn't hear you. >> that's exactly what we are pushing for as we think about building upon 230 -- transparency, not just -- >> you will answer those questions in writing? >> transparency not just of outcomes but also of our process as well. >> is that a yes, that you will answer those questions in writing? >> we'll certainly look into it and see what we can do. >> and actually answer them, and not give lawyerly double speak about why you are not going to give specifics. answer them. will you commit to this committee that you will answer those questions? >> we are going to work toward broader transparency around outcomes. >> that's a no. mr. zuckerberg, how about you? will you commit that facebook will answer those specific
9:20 am
questions cataloging the number of instances in which democrats in '16, '18, and '20 have been silenced versus the number of instances in which republicans have been silenced on facebook? >> senator, i'm not sure if we have that data available, but i will follow up with you or your team. >> i will take that as a yes, or, as with twitter, that you don't intend to provide it. >> senator durbin. >> thank you, mr. chairman. we live in a dangerous world. issues of national security, the worst pandemic and public health crisis in modern times in america, and we are being challenged as to whether there is going to be a peaceful transition of power in america for the presidency. at that moment in time we decided none of those topics
9:21 am
were important, and what was important was to determine whether or not social media was discriminating against republicans. it's an interesting question. i think there are more important and timely questions. we have a recount underway in georgia. we have allegations made by the election officials there -- republican election officials -- who have faced literally death threats. we are trying to determine whether or not the social media instruments of america are fair to the republican party. i'm trying to struggle with this issue because i want to put it in a context, and maybe i can't. maybe it's -- this is unique. we certainly know what the constitution says when it comes to free speech. and we know what it has meant over the years -- "new york times" v. sullivan and other cases involving
9:22 am
publications. we certainly didn't suggest that anyone who used a telephone line for nefarious, illegal, banned activity somehow implicated the telephone company in it by its nature. and then came radio and tv, and we had to come up with new rules in terms of, at one time, equal time, fair content and so forth. and now we have this relatively new mechanism of communicating information, and we're trying to determine what to do with it -- whether it is like a newspaper publishing or some kind of communications network alone. section 230 is an attempt to do that, and i'm sure everybody finds fault with it. i would like to ask the two witnesses if they would comment on the historical aspects of this particular debate if they
9:23 am
have any thoughts. mr. zuckerberg? >> senator, one of the points in the discussion that i find interesting is people ask if the regulatory model should be more like kind of a news industry or more like telcos. but from my perspective these platforms are a new industry and should have a different regulatory model that is distinct from either of those other two. i think it is not the case that we are like a telco, and there are clearly categories of content, whether it's terrorism or child exploitation, that people expect us to moderate and address. but we're also clearly not like a news publisher in that we don't create the content and we don't choose up front what we
9:24 am
publish. we give people a voice to be able to publish things. i do think that we have responsibilities, and it may make sense for there to be liability for some of the content that is on the platform. but i don't think that the analogies to these other industries that have been created previously will ever be kind of fully the right way to look at this. i think it deserves its own regulatory framework to be built here. >> thank you. would the other witness care to respond? >> from a historical perspective, 230 has created so much goodness and innovation, and, you know, if we didn't have those protections when we started twitter 14 years ago, we could not have started. that's what we are most concerned with: making sure that we continue to enable new
9:25 am
companies to contribute to the internet, to contribute to conversation. and we do have to be very careful and thoughtful about changes to 230, because going one direction might box out new competitors and startups. going another might create a demand for an impossible amount of resources to handle it. and going yet another might encourage even more blocking of voices -- or what's being raised here, which is censorship of voices -- and change the internet dramatically. >> so let's -- go ahead. >> i believe that we can build upon 230. i think we can make sure that we're earning people's trust by encouraging more transparency around content moderation and our process of it. i think we need much more straightforward appeals. and i think the biggest point to
9:26 am
really focus on going forward is algorithms and how they are managing and creating these experiences, and being able to have choice in how to use those algorithms on platforms like ours. >> let me get into a specific, mr. zuckerberg. on october 10th the detroit free press reported that 13 men charged thursday in a conspiracy to kidnap michigan governor gretchen whitmer used facebook and secure messaging apps to connect and plot their attack. the group's use of facebook spans almost a full year. members began to use the social media platform as a recruitment tool in november 2019, according to an affidavit by brian russell, a detective sergeant with the michigan state police. once recruited, members communicated by a secure encrypted messaging platform. according to news reports, facebook alerted the fbi about the michigan kidnappers' online activity several months before the arrests, thank goodness.
9:27 am
however, in august a facebook page from the kenosha guard militia, which advocated violence in the aftermath of the shooting of jacob blake, was reportedly flagged 455 times to facebook. however, the page was deemed non-violating and left up. more than 4,000 people responded to that event, and hundreds of armed militia members showed up. a member of this group, a teenager from illinois, later shot and killed two people on the streets of kenosha. mr. zuckerberg, you described facebook's handling of this militia page as an operational mistake. can you explain the exact reason why the kenosha militia page was not taken down? >> senator, yes. first, what happened in kenosha was obviously terrible. what happened here was we rolled out a strengthened policy around militia pages in general.
9:28 am
whereas before that we would have allowed a group that was a militia as long as it wasn't planning or organizing violence directly, in the lead-up to the election we strengthened the policy to disallow more of those groupings, because we were on high alert and were treating this situation as very volatile around potential civil unrest around the election. we had just put that policy into place, and for a number of reasons it had not yet been fully rolled out, and all of the content reviewers across the company hadn't been fully trained on it. so we made mistakes in assessing whether that group should be taken down. but upon appeal, when it was escalated to a more senior level of content review -- folks who have more specific expertise in these areas -- we recognized that it did violate the policy
9:29 am
and we took it down. it was a mistake. it was certainly an issue, and we're debriefing and figuring out how we can do better. although one other piece that i would add is the person who carried out the shootings was not in any way connected to that page or linked to any of the content there, from anything that we or others could tell. >> mr. chairman, if i could ask one more question. yesterday the fbi released its annual hate crime incident report. the report found that more people were killed in hate-motivated violence in 2019 than in any year since the fbi began collecting hate crime data in 1990. the report also found that race-based hate crimes remain the most common type of hate crime, and last year it documented an increase in religion-based hate crimes, anti-hispanic hate crimes, and hate crimes targeting individuals based on gender identity. given these statistics, it
9:30 am
appears to me that it's more important than ever for social media to combat hate on their platforms. i might add, to what one of my colleagues stated earlier: this is not antifa. these are documented hate crimes from the fbi. muslim advocates -- muslims -- have reached out to you many times, mr. zuckerberg, about this issue relating to published content that reflects on certain religious groups. and you said at a hearing that you do not allow hate groups on facebook. yet in may 2020 the tech transparency project found more than 100 american white supremacist groups, many of them explicitly anti-muslim, active on the platform, both on their own pages as well as in auto-generated content. facebook did alter some of the content, but the hate groups largely remained. are you looking the other way,
9:31 am
mr. zuckerberg, as a potentially dangerous situation? >> no, senator. this is incredibly important. we take hate speech as well as incitement of violence seriously. we have banned more than 250 white supremacist organizations and treat them the same as terrorist organizations around the world. and we have ramped up our capacity to identify hate speech and incitement to violence before people even see it on the platforms, with our ai and human review teams. you can track our results in the transparency reports that we issue. we now take down about 94% of the hate speech that we find on our platforms before anyone even reports it to us, which is a dramatic amount of progress from where we were a few years ago, when we were just starting to ramp up
9:32 am
on this and were taking about 20% of it down before people had to report it to us. there is still more progress to make. we are very invested in this, and you have my commitment that we view this as an issue of the highest severity and one that we are very focused on. >> thank you very much. >> senator sasse? >> thank you, mr. chairman. thank you for hosting this hearing. clearly important topics around content moderation. i'm a skeptic of the content moderation policies that exist, both because i don't think the standards are very transparent and i don't think the execution is very consistent. that said, i'm more skeptical than a lot of my colleagues, i think on both sides of the aisle, about whether or not there is a regulatory fix that will make it better instead of worse. i especially think it's odd that so many in my party are zealous to do this right now, when you would have an incoming administration of the other party that would be writing the rules and regulations about it.
9:33 am
and i think it's telling that a number of folks on the other side of the dais -- i think of senator blumenthal, a guy i like but who seemed to almost be giddy about the prospect of a new government regulatory agency to police online speech -- and i think a lot of people on my side should take pause at the idea that so many on the other side of the aisle are excited about having the next administration get to write these rules and regulations. but, to the broader question. first, just to get to kind of a level set -- and i want to thank both the witnesses for being here today -- senator lee laid out some of the issues about, you know, every human community being situated in a different place on policy commitments and priorities and beliefs. but when senator lee said that 93% of facebook employees who contribute to politics do so on the left, and 99%, i think it was, of twitter employees who contribute do so on the left, i would just be interested to see if either of the two of you
9:34 am
thinks that has implications in the shepherding of your organizations. i recognize fully that yours are private organizations; i'm more skeptical of a governmental fix for a lot of the problems i'm talking about today. mr. zuckerberg and mr. dorsey, i'll start with facebook. i'm curious as to whether or not you think it's likely that there is systematic bias in your organization's content moderation policies, given that your employee base is so unrepresentative of america in general. >> senator, i think it's a good question, and certainly i think it means that we have to be more intentional about what we do and thoughtful. our principle and goal is to give everyone a voice and to be a platform for all ideas. as you mention, i do think it's undisputed that our employee base, at least the full time
9:35 am
folks, politically would be somewhat -- or maybe more than just a little somewhat -- to the left of where our overall community is. our community spans a wide variety of people across society. so i do think that means we need to be careful and intentional so that bias doesn't seep into decisions that we make. although i'd point out a couple of things. one is that, you know, people have a lot of different views outside of work; we expect, and i think generally see, that people conduct themselves professionally. and, second, the folks who are doing the content review work -- we have about 35,000 people doing content review -- are typically not based in silicon valley. they are based in places all over the country and all over the world, because we serve people in countries all over the world. so i think that the geographic
9:36 am
diversity of that is more representative of the community that we serve than just the full-time employee base in our headquarters in the bay area. >> thank you very much, mr. zuckerberg. mr. dorsey? >> yeah. you know, this is obviously not something we interview for, so we don't have a full understanding of where people in the company stand. but with that understanding, we intend to make sure that both our policy and our enforcement are objective. and i realize that it looks rather opaque, and certainly the outcomes might not always match up with our intention, and people's perception of those outcomes may not match up either. that's why i think it's so important that we are transparent not just around our policies, but around the actual operations of our content moderation.
9:37 am
if people don't trust our intent, if people are questioning that, that's a failure, and that is something that we need to fix and intend to fix. and i think it would benefit the industry as well. but i do, again, point back to something i said earlier in the testimony, which is that a lot of these decisions are not being made by humans anymore. they are being made by algorithms, enforcement decisions around what you see and what you don't see. to me that is the body of work, that is the conversation that we should be focused on, because that is the enduring use case for everyone who interacts with these services. >> thank you. i wish it were true that these were all easy, you know, objective questions, that the questions were, if somebody says the sky is green, that's an objective question; the sky is blue and white, not green.
9:38 am
but most of the things we're talking about here, and the places where you are applying content moderation labels, are not really simply objective questions. they are mostly subjective questions. if we talked about medicare for all being, you know, easily paid for inside a 10-year budget window on assumptions x, y, and z that don't raise taxes, that's not true. there isn't any math by which medicare for all pays for itself inside some short-term window. i don't think any of us believe you are going to slap a label on that saying this is disputed math, policy, or projections. really what's happening is there is a prioritization grid that people are going through, even as they build the algorithms, driven by policy-oriented individuals. i may be wrong about this. my suspicion is that your employee base is not 99% left of center; i believe it's less than
9:39 am
that. i would speculate less than 1% of your employees give money to candidates on the right, because there is a social stigma attached to having conservative views inside your organization, and i would guess that those same sort of internal cultural biases inform the subjectivity of which issues end up labeled. so, again, this is sort of an odd place to be in, in that i am skeptical that the content moderation policies are thought out well. they are not transparent enough for us to really know. but i'm definitely skeptical that they are consistently applied. and, yet, i'm not really on the side of thinking there is some easy governmental fix here. there is a lot about section 230 that we could debate. some of the things senator durbin said about how, in the era of telephones, nobody blamed the phone company for other people having spread misinformation by the phone -- exactly. that's what would be the case if section 230 were actually neutral. you are applying content moderation policies, and seemingly in a way that's not
9:40 am
objective. i know i'm nearly at time, but i think it would be useful for us to hear from both of you. give us sort of a three- to five-year window into the future if there isn't new legislation. what is changing, besides just saying we are moving from humans to more ai? what qualitatively changes inside your organizations short of a new regulatory scheme? can you tell us where you think you are actually improving and what problems you are trying to solve? mr. zuckerberg, you first, please. >> senator, one of the areas that we are very focused on is transparency, both in the process and in the results. so we're already at the point where every quarter we issue a community standards enforcement report that basically details the prevalence of each category of harmful content and how effective we are at addressing it before people even have to report it to us. over time, we would like to
9:41 am
fill that out, add more detail, and make it more robust. we have already committed to an independent external audit of those metrics, so that people can trust them even more. people have lots of different types of requests for where we might go with that in the future, whether that's breaking down the stats by country or language or into more granular buckets, or adding more data around precision. i think that would all be very helpful, so that people can see and hold us accountable for how we are doing. for what it is worth, i think that would be a valuable part of a regulatory framework that would not feel particularly overreaching to me. it's something that could be put in law that would create an apples-to-apples framework where all companies in this space would have to report on the outcomes and effectiveness of their programs in that way, so at least we can see how everyone is doing. that seems like a sensible step to me. >> thank you, mr. dorsey.
9:42 am
>> senator whitehouse -- i'm sorry. mr. dorsey, i'd ask you the same question and then give it back to you in a hurry. >> i'm sorry, i missed that? >> it's the junior acting chairman. mr. dorsey? >> three to five years out, a centralized global content moderation system does not scale, and we need to rethink how we operate these services. and i would point to -- we certainly need transparency around any process that we have, and around the practice and the outcomes of those moderations. but i think having more control so that individuals can moderate themselves, you know, pushing the power of moderation to the edges and to our customers and to the individuals using the service, is something we will see more of. and i also believe that having more choice around how
9:43 am
algorithms are altering my experience and creating my experience is important. so being able to turn off ranking algorithms, being able to choose different algorithms that are written by third-party developers, and somewhat of an algorithmic marketplace, is a future that would excite us. >> thank you. i appreciate my interaction with both of your companies in the run-up to this. both of you have said some meaty things about moves toward greater transparency. i will follow up again. thank you. mr. chairman? >> thank you. senator whitehouse? >> thank you, chairman. let me start with a moment in history to give context to my questions. when the tobacco industry discovered that its product was deadly, it responded to that news with a systematized program
9:44 am
of denying that state of facts. the upshot for the tobacco industry was not great. it was found in federal court to be engaged in massive fraud and was put under court order to cease its fraudulent behavior. at around the same time, the fossil fuel industry began to run into a similar problem regarding the effects of its product, and it picked up the tobacco industry's denial operation. these are persistent, highly motivated, very well funded, and complex information operations, not unlike what a hostile intelligence service would run. and they are quite secretive. and we're now seeing a new form.
9:45 am
i guess you would call it election denial, happening around our country right now. that's the background that i come at this from. and i am wondering if you see a difference between individual error and basically mass disinformation. is there a difference between odd people with fringe views who offer personal opinions, and an orchestrated plan of deliberate misinformation or disinformation that is driven by motivated interests, whether foreign or domestic? >> senator, i absolutely think that there is a difference, and you can see it in the patterns of use on the platforms. and in our policies and operations, we view these coordinated inauthentic behavior
9:46 am
operations -- networks of fake accounts, sometimes combining with real accounts, to push out a message but make it seem like it's coming from a different place than it is, or that it might be more popular than it is. this is what we saw the internet research agency out of russia do in 2016. and since then, a number of other governments and private organizations, including some like what you have mentioned, have engaged in this behavior. now, the good news is that i think the industry has generally gotten its systems to be a lot more sophisticated in defending against that in the last several years. it's a combination of ai systems that we have built to find networks of accounts that aren't behaving the way that a normal person would, coupled with a large number of content reviewers, sometimes with expertise in counter-terrorism or counterintelligence, and then some signal sharing,
9:47 am
whether it's with the intelligence community, law enforcement, different groups that have expertise in different areas, and with other tech platforms. this is a big effort on, i think, all of our sides, to make sure that we can defend against this kind of interference, and i think we are getting better and better at it. >> well, let me encourage you to persist. as you know, the last time you were here, you were asked about advertising paid for on facebook denominated in rubles, which was not a very sophisticated scheme to be able to penetrate, but facebook was unable to penetrate it, and your upgrade to that original setup was simply to allow a shell corporation to intermediate between the real actor and you. so i encourage you to continue to try to make sure that real voices are what are heard on facebook. mr. dorsey, let me turn to you and ask you the same question in
9:48 am
the context of bots. brown university recently did a study that showed about 25% of all tweets about climate change are generated by bots, most of them obviously pushing out climate denial of the kind i described in that operation. what is twitter's capacity to identify a bot, as opposed to a real customer? >> to build off your question, i do think there is a difference. many campaigns seek to manipulate the public conversation, to divide people all around the world, to confuse, and generally to distract, and we do have policies and enforcement to prevent as much of this as possible. it's a growing threat, and it shows no signs of slowing down. bots are one way that entities
9:49 am
do this. sometimes it may look like a bot, but it's actually a human that is organized with other humans for a particular agenda. so it is challenging. we are doing work right now to better identify bots on our service. >> let me just interject, mr. dorsey, real quick. as a baseline proposition, do you agree that a bot does not deserve a voice on your platform, that it should actually be people and organizations? >> i don't agree with that at a high level. i think we should be labeling bots so that people have greater context for what they are interacting with. >> fair enough. >> there are plenty of bots on our service that provide a valuable function, and i wouldn't want to take that away. >> let me ask both of you, and maybe you can supplement this with an answer in writing for the record, because my time is getting short and this is a
9:50 am
complicated question. but the question is: when does it matter to twitter, and when does it matter to facebook, to know who the actual entity is who is using your platform? let me start with you, mr. dorsey, since mr. zuckerberg went first last time, and you can defer to a written answer if you would like, as my time is running very short. >> we will add to this conversation with a written answer. but i do believe that anonymity is important. it has usefulness for activists and for whistleblowers, and i think that is critical. but certainly there are times, judged by the severity of potential outcomes, where we need to dig into identity and take actions. >> we will follow up with that. and let me just ask you, since my
9:51 am
time has expired, mr. zuckerberg, to respond or have your organization respond in writing. thank you. >> thank you. before we -- senator whitehouse brought up something very important, and i want to ask this as directly as i can. to facebook and twitter: do you have any internal research or evidence to suggest that your platforms can be addictive? mr. zuckerberg? >> senator, i think we can follow up with a summary of research that we have, but from what i have seen so far, it's inconclusive, and most of the research suggests that the vast majority of people do not perceive or experience these services as addictive or have issues. but i do think that there should be controls given to people to help them manage their experience better.
9:52 am
and this is something that we're very focused on. >> mr. dorsey? >> i'm not aware of internal research, but we can follow up. i do think, like anything else, these tools can be addictive, and we should be aware of that, acknowledge it, and make sure that we are making our customers aware of better patterns of usage. so the more information, the better here. >> thank you. senator hawley? >> thank you, mr. chairman. in the late 19th century, the heads of the biggest corporations in america, the robber barons, got together and set rates and prices and determined how they would control the flow of information. they determined how they would get rid of information. this time, you are the robber barons. your companies are the most powerful companies in the world, and i want to talk about how you are coordinating together to control information. in recent days my office was contacted by a facebook
9:53 am
whistleblower, a former employee of the company with direct knowledge of the company's content moderation practices. and i want to start by talking about an internal platform called tasks that facebook uses to coordinate projects, including censorship. the tasks platform allows facebook employees to communicate about projects they are working on together. that includes facebook's censorship teams, including the so-called community well-being team, the integrity team, and the hate speech engineering team, who all use the tasks platform to discuss which individuals or hashtags or websites to ban. mr. zuckerberg, you are familiar with the tasks platform? >> senator, we use the task system for, i think, as you say, for people coordinating all kinds of work across the company, although i'm not sure if i would agree with the characterization specifically around content moderation that you gave. >> well, let's get into that, and
9:54 am
let me see if we can refresh your memory and provide folks at home an example. here over my shoulder is an example, a screenshot of the task platform in use. you will notice, as the camera zooms in, several references to election integrity throughout on these lists of tasks. again, this is shared across facebook sites, company locations, by working groups. what particularly intrigued me is that the platform reflects censorship input from google and twitter as well. so facebook, as i understand it -- facebook censorship teams communicate with counterparts at twitter and google and enter those companies' suggestions for censorship onto the task platform, so that facebook can then follow up with them and effectively coordinate their censorship efforts. mr. zuckerberg, let me just ask you directly under oath now: does facebook coordinate its content moderation policies or efforts in any way with google
9:55 am
or twitter? >> senator, let me be clear about this. we do coordinate on and share signals around security-related topics. so, for example, if there is a signal around a terrorist attack or around child exploitation or around a foreign government creating an influence operation, that is an area where the companies do share signals about what they see. but i think it's important to be very clear that that is distinct from the content moderation policies that we or the other companies have, where, once we share intelligence or signals between the companies, each company makes its own assessment of the right way to address and deal with that information. >> i'm talking about content moderation. i'm talking about individuals, websites, hashtags, phrases to ban. is it your testimony that you do
9:56 am
not communicate with twitter or google about content moderation, about individuals, websites, phrases, hashtags to ban? "yes" or "no," do you communicate with twitter or google about coordinating your policies in this way? >> senator, we do not coordinate our policies. >> do your facebook content moderation teams communicate with twitter or google? >> senator, i'm not aware of anything specific, but i think it would be probably pretty normal for people to talk to their peers and colleagues in the industry. >> it would be normal, but you don't know if you do it? >> no, i'm saying that i'm not aware of any particular conversation, but i would expect that some level of communication probably happens. that is different, though, from coordinating what our policies are or our responses in specific instances. >> fortunately, i understand that the task platform is searchable. so will you provide a list of every mention of google or
9:57 am
twitter from the task platform to this committee? >> senator, that's something that i can follow up on with you and your team afterward. >> "yes" or "no" -- i'm sure you can follow up with a list. why don't you commit while i have got you here under oath? it's so much better to do this under oath. will you commit now to providing a list from the task platform of every mention of google or twitter? >> senator, respectfully, without having looked into this, i'm not aware of any sensitivity that might exist around that, so i don't think it would be wise for me to commit to that right now. i would be happy to -- >> how many items on the task platform reflect that facebook, twitter, and google are sharing information about websites or hashtags or platforms that they want to suppress? >> senator, i do not know. >> will you provide a list of every website and hashtag that facebook content moderation
9:58 am
teams have discussed banning on the task platform? >> senator, again, i would be happy to follow up with you or your team to discuss further how we might move forward on that. but, without -- >> will you commit to it here? senator cruz and senator lee both asked you for lists -- individuals, websites, entities that have been subject to content moderation. you expressed doubt whether any such information exists. yet you have also acknowledged that the task platform exists and that it is searchable. so will you commit to providing the information you have logged on the task platform about content moderation that your company has undertaken, yes or no? >> senator, i think it would be better to follow up once i have had a chance to discuss with my team what any sensitivity around that would be that might prevent the kind of sharing that you are talking about. but, once i have done that, i would be happy to follow up. >> so you won't commit to do it here. we could, of course, subpoena
9:59 am
this information, but i would much rather get it from you voluntarily. let everybody take note that mr. zuckerberg has refused to provide information that he knows that he has and has now acknowledged that he has. let me switch to a different topic. mr. zuckerberg, tell me about sentra. what is the facebook internal tool called sentra? >> senator, i'm not aware of any tool with that name. >> well, let me see if this refreshes your memory. there is a demonstrative over my shoulder. sentra is a tool that facebook uses to track its users, not just on facebook but across the entire internet. sentra tracks different profiles that a visitor visits, linked accounts, the pages they visit around the web that have facebook buttons. sentra also uses behavioral and demographic data to monitor accounts, even if they're registered under a different name. you can see a screenshot here provided of the sentra platform. we blocked out the user's name in the interest of privacy.
10:00 am
you can see this individual's birth date and age, when they first started using facebook, last log-in, as well as all manner of tracking: how many different devices have they used to access facebook? how many different accounts are associated with their name? what accounts have they visited? what photos have they tagged? and on and on and on. mr. zuckerberg, how many accounts in the united states have been subject to review and shut down through sentra? >> senator, i do not know, because i'm not actually familiar with the name of that tool. i'm sure we have tools that help us with our platform and community integrity work, but i am not familiar with that name. >> do you have a tool that does exactly what i have described, that you can see here over my shoulder? or are you saying that that doesn't exist? >> senator, i'm saying that i'm not familiar with it, and that i would be happy to follow up and get you and your team the information that you would like on this.