tv Cavuto Coast to Coast FOX Business November 17, 2020 12:00pm-2:00pm EST
12:00 pm
susan: mark zuckerberg is number three. stuart: bezos, gates, musk, zuckerberg all worth $100 billion. susan: throw in lvmh. stuart: why not. time up for me. dow down 180. neil, it is yours. neil: right after that list varney, away you go. thank you very much, sir. we are following all those developments. also following this committee hearing, senate judiciary committee hearing or shall i say grilling of these tech executives, heads of facebook and twitter respectively here. before we dip back into that here because both parties are really calling into question whether these guys have gotten a little too arrogant, too big for their britches, we'll get into details. first to gillian turner if they landed any blows. gillian what do you think? reporter: so far for the two ceos, it is back and forth. they're trying to outdo each
12:01 pm
other, whose platform has done more to protect the 2020 presidential election. mark zuckerberg is going all in on the claim that facebook's moves have been historic. they have done more than any other company in history to make sure bad, fake information about this election is staying off of their platform. i will paraphrase something he said. they introduced new policies to combat voter suppression and misinformation and we worked with local election officials to remove hundreds of thousands of false claims about polling conditions that he worried could have led to voter suppression. there is one thing so far in this hearing everyone agrees on, that is which posts stay up on these platforms, which come down and which get slapped with warning labels should definitely not be up to the u.s. government. take a listen. >> when it comes time to flag content as being reliable or not reliable do either one of you believe that the government should do that? >> i don't believe so. i think that would be very
12:02 pm
challenging. >> okay. >> i would agree with your sentiment that, that is not something that government should be deciding. reporter: republicans say warning labels getting slapped on president trump's posts are evidence of deep-seated, deep-rooted anti-conservative bias. democrats on the panel are making the opposite argument here. they're saying warning labels on the president's posts don't actually go far enough. take a listen. >> how many times is he allowed to call for the murder of government officials before facebook suspends his account? will you commit to taking down that account, steve bannon's account? >> senator, no. that is not what our policy would suggest that we should do. reporter: so really the big question now, neil, as we move forward from this hearing, are these lawmakers really going to do anything meaningful to rein in these tech companies, or will they continue to drag them in for the hearings, drag them in
12:03 pm
virtually and threaten to pass new legislation to regulate them? it seems like we're heading towards a lot more of the same. which is really a lot of nothing, neil. neil: all right. i like that, a lot of nothing. gillian, thank you very much. as gillian nicely summarized there, republicans are seeing a lot of these companies, particularly twitter, particularly facebook, moving too aggressively against conservative thought. democrats saying not nearly aggressively enough. let's go back to the hearing right now and see how this ping-ponging goes. >> even having to get facebook's approval over what they publish? >> thank you. mr. dorsey? >> we'll make our reports and findings public also so everyone can learn. >> i look forward to the reading of them. i'm actually -- [inaudible] one member of the senate will actually read them, thank you. because you look at some of the
12:04 pm
things that we're -- there is, i know senator blumenthal and others raised this question about the steve bannon, putting on a video. think of what it did. it called for the murder, the beheading of dr. fauci and the director of the fbi, christopher wray. think what that does? i mean the fbi director travels with security all the time. dr. fauci and his family are private citizens. they're calling for their beheading and it was seen by i think 200,000 people on facebook. we have somebody threatening to murder somebody. what do you do, what do you do about that? how do you -- i mean i was a prosecutor. i prosecuted murders and we had
12:05 pm
to face this kind of threat at that time but what do you do when hundreds of thousands of people see a threat to go murder somebody? >> senator, that content violated our policies and we took it down. as has been the subject of some of the other questions, if someone had multiple offenses like that we would remove their whole account. >> i'm sure that, the threat that they do it multiple times, say go out and murder somebody, cut off their head, we're going to have a real problem. facebook will take down the posting -- oh, my goodness, what a deterrent. >> senator what we try to do is identify content that violates our policy before anyone in the community has to see it or even report it to us.
12:06 pm
and for some categories like terrorism which i cited before, you know, about 98 or 99% of the content that we take down our a.i. and human systems find before anyone has to report it to us. on hate speech we're up to 94% of the content we take down our a.i. systems and content reviewers find before people have to report it to us. the way we try to drive more effectiveness is basically finding more and more of that harmful content earlier before it is seen broadly across our system. >> let me ask you about that because you know, we've had these discussions before. i'm deeply concerned about facebook's role in spreading hate speech in myanmar. hate speech that helped fuel a genocide against muslim --
12:07 pm
people. i mean horrible. i've seen the pictures. i've seen -- the genocide. you made some progress on this since you and i talked about it last but my understanding is that facebook shuts down specific accounts that violate your content-related policy but then the user can of course just create a new account. in myanmar for example, on october 8th, facebook took down 38 inauthentic accounts controlled by myanmar military authority to prevent anti--- content. i compliment you for doing that. in the meantime the military turned around and created new accounts that promoted the same content. in some way you have a "whack-a-mole" problem here but is there a way that you can stop these things, not just
12:08 pm
at the account level, but at the user level? i use that as an example because people are being murdered in a systemic genocide. >> please answer senator leahy's question. then we'll need to move on. go ahead. >> i'm sorry to take long but the previous questioner took all his time plus time allotted to me. >> no, we're 2 1/2 minutes. let's wrap it up. go ahead and answer the question. >> senator, you're correctly pointing out that we did designate certain generals in the myanmar military as dangerous figures, and they are not allowed to sign up for new accounts but as you point out these kind of integrity problems are not ones where there is a silver bullet where you could ever fully solve them. we will always be working to
12:09 pm
help minimize the prevalence of harm the same way a city will never eliminate all crime. you try to reduce it and have it be as little as possible and that's what we try to do through a combination of building a.i. systems to identify harmful content up front, hiring thousands of people, tens of thousands of people to do content review and partnering with organizations, whether in the intelligence community, law enforcement, election officials or in myanmar, local civil society, to help us flag things that we should be aware of and on high alert about. >> thank you, mr. chairman. i will have some questions for the record for both the witnesses. >> thank you very much, senator leahy. i appreciate that. senator cruz. >> thank you, mr. chairman.
12:10 pm
facebook, twitter and google have massive power. they have a monopoly on public discourse in the online arena. i will say it is dismaying listening to the questions from our democratic colleagues because consistently the message from senate democrats is for facebook and twitter and google to censor more, to abuse their power more, to silence voices that senate democrats disagree with more. that is very dangerous if we want to maintain a free and fair democracy. if we want to maintain free speech. there was a time when democrats embraced and defended the principles of free speech. there was a time when democrats embraced and defended the principles of a free press and yet there is an absolute silence from democrats speaking up for the press outlets censored by big tech. there is absolute silence from
12:11 pm
democrats speaking out for the citizens silenced by big tech. instead there is a demand to use even more power to silence dissent and that totalitarian instinct i think is very dangerous. at the same time that big tech exercises massive power it also enjoys massive corporate welfare through the effect of section 230, a special immunity from liability that nobody else gets. congress has given big tech in effect a subsidy while they have become some of the wealthiest corporations on the face of the planet. mr. dorsey i want to focus primarily on twitter and ask you initially, is twitter a publisher? >> is twitter a publisher? >> yes. >> no, we are not. we distribute information. >> so what is a publisher?
12:12 pm
>> an entity that is publishing under editorial guidelines and decisions. >> well your answer happens to be contrary to the text of federal statute, in particular section 230, which defines an information content provider as any person or entity that is responsible in whole or in part for the creation or development of information provided through the internet or any other interactive computer service. let me ask you, was twitter being a publisher when it censored the "new york post"? >> no. we have very clear policies on the conduct we enable on the platform and if there is a violation we take enforcement action and people choose to commit to those policies and to those terms of service. >> except your policies are applied in a partisan and selective manner. you claim it was hacked materials yet you didn't block
12:13 pm
the distribution of "the new york times" story that purported to talk about president trump's tax returns even though a federal statute makes it a crime to distribute someone's tax returns without their consent? you didn't block any of that discussion, did you? >> our policy was focused on distribution of the actual hacked materials. >> did you block the discussion of the president's tax return material? >> in "the new york times" case we interpreted it as reporting about the hacked material. >> did you block edward snowden when he illegally released material? >> i don't have the answer to that. >> the answer is no. you have used this in a selective manner -- let me ask you, were you being a publisher when you forced "politico" and other journalist outlets to take down their tweets on a topic you deemed impermissible? >> no. we were enforcing our policy and our terms of service. >> so on october 15th, jake
12:14 pm
sherman, a reporter at "politico", tweeted the following: i tweeted a link to the "new york post" story right after it dropped yesterday morning. i immediately reached out to the biden campaign to see if they had any answer. i wish i had given the story a closer read before reporting it. twitter suspended me. a reporter reported on a story, asking the other side for comment, and twitter says, hi, jake sherman, your account @jakesherman has been locked for violating rules. what did the "politico" reporter do? my goal was not to spread misinformation -- that is worrisome in and of itself -- my goal was to raise questions about the story. oh, my overlords in silicon valley, i was attacking "the new york post." you don't understand, i was attacking them, as i did in subsequent tweets, and seeing how the biden campaign was going to respond. they later did respond and not long after jake sherman comes back with, my account is clearly no longer suspended.
12:15 pm
i deleted the tweet. when twitter is editing and censoring and silencing the "new york post," the newspaper with the fourth highest circulation in the country, and "politico," one of the leading newspapers in the country, is twitter behaving as a publisher when it is deciding what stories reporters are allowed to publish and not publish? >> no. that account was not suspended, it followed the hacked materials policy. we realized there was an error in the policy and enforcement. >> hold on. i'm literally looking at the tweet from twitter that says your account has been locked. you're telling me that this is not an accurate -- >> that is a lock and can be unlocked when you delete the tweet. >> i understand that you have the star chamber power. your answer is always, once we silence you we can choose to allow you to speak but you are engaged in publishing decisions. let me shift to a different
12:16 pm
topic, mr. dorsey. does voter fraud exist? >> i don't know for certain. >> are you an expert in voter fraud? >> no, i'm not. >> well why then is twitter right now putting purported warnings on virtually any statement about voter fraud? >> we're simply linking to a broader conversation so that people have more information. >> no, you're not. you put up a page that says, quote, voter fraud of any kind is exceedingly rare in the united states. that is not linking to a broader conversation. that is taking a disputed policy position and you're a publisher when you're doing that. you're entitled to take a policy position but you don't get to pretend you're not a publisher and get a special benefit under section 230 as a result. >> that link is pointing to a broader conversation with tweets from publishers and people all around the country. >> mr. dorsey, would the following statement violate twitter's policies, quote, absentee ballots remain the largest source of potential
12:17 pm
voter fraud? >> i imagine that we would label it so that people can have more context. >> how about this quote? quote, voter fraud is particularly possible where third party organizations, candidates and political party activists are involved in handling absentee ballots. would you flag that as potentially misleading? >> i don't know the specifics of how we might enforce that but i imagine a lot of these would have a label pointing people to a bigger conversation. >> well, you're right. you would label them because you have taken the political position right now that voter fraud doesn't exist. i would note both of those quotes come from the carter-baker commission on federal election reform. that is democratic president jimmy carter and former secretary of state james baker. twitter's position is essentially voter fraud does not exist. are you aware that just two
12:18 pm
weeks ago in the state of texas a woman was charged with 134 counts of election fraud? are you aware of that? >> i'm not aware of that. >> if i tweeted that statement with a link to the indictment would you put a warning on it that says, well the democratic party position right now is voter fraud doesn't exist? >> i don't think it is useful to get into hypotheticals but i don't believe so. >> you don't believe so? we'll test that because i'm going to tweet that. we'll see what you put on it. all right, yesterday, mr. dorsey, you and i spent a considerable amount of time on the phone. you said that you wanted to embrace transparency. so i want to ask you, i have asked twitter, i have asked facebook multiple times how many times have you blocked republican candidates for office, their tweets or posts, in 2016 and 2018 and 2020. how many times have you blocked democratic candidates for office. how many times have you
12:19 pm
blocked republican officeholders and democratic officeholders. twitter refused to answer the question with specific hard data cataloging the examples. in the interest of transparency which you said you want to embrace, will you commit in this hearing right now to answer those questions in writing? >> that is exactly what we want to do. >> answer the question. >> i'm sorry, mr. dorsey, i didn't hear you. >> that is exactly what we're pushing for as we think about building upon 230. >> is that a yes, you will answer those questions in writing? >> transparency not just about accounts but also our process as well. >> is that a yes that you will answer those questions in writing? >> we'll certainly look into it and see -- >> actually answer them and not give lawyerly doublespeak about why you're not going to give specifics. answer them. will you commit to the committee that you will answer those questions? >> we'll work toward answering with broader transparency around our -- >> that is no. mr. zuckerberg how about you, will you commit facebook will
12:20 pm
answer those specific questions, cataloging the number of instances in which democrats in '16, '18 and '20 have been silenced versus the number of instances where republicans have been silenced on facebook? >> senator, i'm not sure if we have that data available. i will follow up with you and your team. >> i will take that as a yes. as for twitter, we'll see if it is a yes or if the transparency is bogus and they don't intend to provide it. >> senator durbin. >> thank you, mr. chairman. we live in a dangerous world. issues of national security, the worst pandemic, public health crisis in modern times in america and we are being challenged as to whether there is going to be a peaceful transition of power in america in the presidency. at that moment in time we decided none of those topics
12:21 pm
were important and what was important was to determine whether or not social media was discriminating against republicans. it is an interesting question. i think there are more important and timely questions. we have a recount underway in georgia. we have allegations made by the election officials there -- republican election officials -- where they have faced literally death threats. we are trying to determine whether or not the social media instruments of america are fair to the republican party. i'm trying to struggle with this issue because i want to put it in a context and maybe i can't. maybe this is unique. we certainly know what the constitution says when it comes to free speech and we know what it meant over the years. "new york times vs. sullivan"
12:22 pm
and others with publications. we certainly didn't suggest that anyone that used a telephone line for nefarious, illegal, banned activity somehow implicated the telephone company in it by its nature. and then came radio and tv and we had to come up with new rules in terms of, at one time, equal time, fair content and so forth. and now we have this new, relatively new mechanism of communicating information and we're trying to determine what to do with it, whether to treat it like a newspaper publishing or treat it like some sort of a communications network alone. section 230 is an attempt to do that and i'm sure everybody finds fault with it. i would like to ask the two witnesses if they would comment on the historical aspects of this particular debate, if they
12:23 pm
have any thoughts? mr. zuckerberg? >> senator, one of the points in the discussion that i find interesting is people ask if the regulatory model should be more like, kind of, the news industry or more like telcos but from my perspective these platforms are a new industry and should have a different regulatory model that is distinct from either of those other two. i think it is not the case that we're like a telco in that there are clearly some types of content, whether terrorism or child exploitation, that people expect us to moderate and address but we're also clearly not like a news publisher in that we don't create the content and we don't choose up front
12:24 pm
what we, what we publish. we give people a voice to be able to publish things. so i do think we have responsibilities and it may make sense for there to be liabilities for some of the content that is on the platform but i don't think that the analogies to these other industries that have been created previously will ever be kind of fully the right way to look at this. i think it deserves and needs its own regulatory framework to get built here. >> thank you. the other witness care to respond? >> from a historical perspective 230 has created so much goodness and innovation and you know, if we didn't have those protections when we started twitter 14 years ago we could not have started and that is what we're most concerned with, making sure we continue to
12:25 pm
enable new companies to contribute to the internet, to contribute to conversation and we do have to be very careful and thoughtful about changes to 230 because going one direction might box out new competitors and new startups. going another might create a demand for an impossible amount of resources to handle it. going yet another might encourage even more blocking of voices or what is being raised here, which is censorship of voices, thus changing the internet dramatically. >> so -- go ahead. >> i believe we can build upon 230. i think we can make sure that we're earning people's trust by encouraging more transparency around content moderation and our process a bit. i think we need much more straightforward appeals and i think the biggest point to
12:26 pm
really focus on going forward is algorithms and how they are managing and creating these experiences, being able to have choice in how algorithms are used on platforms like ours. >> let me get into a specific, mr. zuckerberg. october 10th, "detroit free press" reported that 13 men charged thursday in a conspiracy to kidnap michigan governor gretchen whitmer used facebook and secure messaging apps to connect and plot their attack. the use of facebook spans almost a full year. members began to use the social media platform as a recruitment tool in november 2019, according to an affidavit by brian russell, detective sergeant, michigan state police. once recruited, members communicated via a secure encrypted messaging platform. according to news reports facebook alerted the fbi about the michigan kidnappers' online activity several months before the arrests. thank goodness.
12:27 pm
however in august a facebook page for the kenosha guard militia, which advocated violence in the aftermath of the shooting of jacob blake, was reportedly flagged over 455 times to facebook. however the page was deemed non-violating and left up. more than 4,000 people responded to that event. hundreds of armed militia members showed up. a member of this group, a teenager from illinois, later shot and killed two people on the streets of kenosha. mr. zuckerberg you described facebook's handling of this militia page as an operational mistake. can you explain the exact reason why the kenosha militia page was not taken down? >> senator, yes. first, what happened in kenosha was obviously terrible. what happened here was we rolled out a strengthened policy around
12:28 pm
militia pages in general. whereas before that we would have allowed a group that was a militia, as long as it wasn't planning or organizing violence directly, in the lead-up to the election we strengthened the policy to disallow more of those groups because we were on high alert and we were treating the situation as very volatile around potential civil unrest around the election. we had just put that policy into place and for a number of reasons it had not yet been fully rolled out and all of the content reviewers across the company hadn't been fully trained on that. so we made mistakes in assessing whether that group should be taken down but upon appeal, when it was escalated to a more senior level of content review, folks who have more specific expertise in these areas, we
12:29 pm
recognized it did violate the policy and we took it down. it was a mistake. it was certainly an issue and we're debriefing and figuring out how we can do better, although one other piece i would add is the person who carried out the shootings was not in any way connected to that page or linked to any of the content there from anything we or others can tell. >> mr. chairman. if i could ask one more question? yesterday the fbi released its annual hate crime incident report. it found more people were killed in hate-motivated violence in 2019 than any year since the fbi began collecting hate crime data in 1990. the report also found that race-based hate crimes remained the most common type of hate crimes last year and documented an increase in religion-based hate crimes, anti-hispanic hate crimes, hate crimes targeting
12:30 pm
individuals based on gender identity. given the statistics it appears to me all the more important for social media companies to combat hate on their platforms. one of my colleagues stated earlier, this is not antifa, these are documented hate crimes from the fbi. muslim advocates have reached out to you many times, mr. zuckerberg, about this issue relating to published content that reflects on certain religious groups and you said at a hearing you do not allow hate groups on facebook. yet in may 2020, the tech transparency project found more than 100 american white supremacy groups, many of them explicitly anti-muslim, active on the platform, on their group pages as well as in auto-generated content. facebook altered some of the content but the hate groups largely remained. are you looking the other way,
12:31 pm
mr. zuckerberg at a potentially dangerous situation? >> no, senator. this is incredibly important and we take hate speech as well as incitement of violence extremely seriously. we banned more than 250 white supremacist organizations and treat them the same as terrorist organizations around the world and we have ramped up our capacity to identify hate speech and incitement of violence before people see it on the platforms. our a.i. and human review teams, you can track our results in the transparency reports that we issue. we now take down about 94% of the hate speech that we find on our platforms before anyone even reports it to us, which is a dramatic amount of progress from where we were a few years ago where, when we were just starting to ramp up on this,
12:32 pm
we were taking about 20% of it down before people had to report it to us. there is still more progress to make. we're very invested in this. you have my commitment that we view this as an issue of the highest severity and one we are very focused on. >> thank you very much. >> senator sasse. >> thank you, mr. chairman. thank you for hosting this hearing. clearly important topics around content moderation. i'm a skeptic of the content moderation policies that exist both because i don't think the standards are very transparent and i don't think the execution is very consistent. that said, i'm more skeptical than a lot of my colleagues i think on both sides of the aisle about whether or not there is a regulatory fix that will make it better instead of worse. i especially think it is odd that so many in my party are zealous to do this right now when you would have an incoming administration of the other party that would be writing the
12:33 pm
rules and regulations about it. i think it is telling that a number of folks on the other side of the dais, i think of senator blumenthal, a guy i like, seemed to be almost giddy about the prospect of a new government regulatory agency to police online speech. i think a lot of people on my side should take pause that so many on the other side of the aisle are excited about the next administration getting to write these rules and regulations. to the broader question, to get to a level set, i want to thank both witnesses for being here today, but when senator lee lays out some of the issues he did about, you know, just every human community is going to be situated in a different place about policy commitments and priorities and beliefs, but when senator lee said that 93% of facebook employees who contribute to politics do so on the left and 99% i think it was of twitter employees contribute on the left, i would just be interested to see if either of the two of you think that has
12:34 pm
implications in the shepherding of your organizations? i recognize fully that you're private organizations. so again i'm more skeptical of a governmental fix for a lot of the problems we're talking about here today. i'm curious whether or not mr. zuckerberg and mr. dorsey, i guess we'll start with facebook, i'm curious whether or not it's likely there is systemic bias inside of your organization in the execution of content moderation policies given that your employee base is so unrepresentative of america in general? >> senator, i think it is a good question and certainly, i think it means we have to be more intentional about what we do and thoughtful. our principle and goal is to give everyone a voice and to be a platform for all ideas. as you mentioned i do think it is undisputed our employee base, at least the full-time folks,
12:35 pm
politically would be somewhat or maybe more than just a little somewhat to the left of where our overall community is, where the community basically spans almost the whole wide variety of people across society. so i do think that means that we need to be careful and intentional internally to make sure that bias doesn't seep into decisions that we make. although, i would point out a couple of things. one is that people have a lot of different views outside of work. we expect and i think generally see that people conduct themselves professionally and second, the folks who are doing the content review work, we have 35,000 people doing content review, are typically not based in silicon valley. they're based in places all over the country and all over the world because we serve people in countries all over the world.
12:36 pm
so i think that the geographic diversity of that is more representative of the community that we serve than just the full-time employees based in our headquarters in the bay area. >> thanks, mr. zuckerberg. mr. dorsey? >> yeah, you know, this is obviously not something we interview for or have an understanding when people are in the company and with that understanding we intend to make sure that both our policy and our enforcement is objective and i realize that it looks rather opaque and certainly the outcomes might not always match up with that intention, with our intention and that the perception of those outcomes may not match up but that is where i think it is so important we're not just transparent around our policies but the actual
12:37 pm
operations of our content moderation. if people don't trust our intent, if people are questioning that, that is a failure and that is something that we need to fix and intend to fix. i think it would benefit the industry as well. but i do again point back to something i said earlier in the testimony, which is, a lot of these decisions are not being made by humans anymore. they're being made by algorithms. that is certainly enforcement decisions, but certainly decisions around what you see or don't see. to me that is the body of work, the conversation we should be focused on because that is the enduring use case for everyone who interacts with these services. >> thank you. and i wish it were true that these were easy, objective questions. if somebody says the sky is green, that is an objective question; the
12:38 pm
sky is blue and white, not green. most of the things we're talking about here, in the places where you're applying content moderation labels, are not really simply objective questions. they are mostly subjective questions. if we talked about "medicare for all" being, you know, easily paid for inside of a 10-year budget window, on assumptions xyz that don't raise taxes, that is not true. there isn't any math by which "medicare for all" pays for itself in some short-term window but i don't think any of us really think you will slap a label on that saying this is disputed accounting or math or policy projections. and so really what is happening there is a prioritization grid that people are going through as they build even the algorithms, even those not driven by humans, they're driven by policy priorities of situated individuals. i may be wrong about this but my suspicion is your employee base is not actually 99% left of center. i bet it is less than that and i
12:39 pm
would speculate part of the reason less than 1% of your employees give to candidates on the right is because there is a social stigma attached to having conservative views inside of your organization. i would guess those same sort of internal cultural biases inform the subjectivity of which issues end up labeled. this is sort of an odd place to be in that i am skeptical that the content moderation policies are thought out well. they're not transparent for us to really know but i'm definitely skeptical they're consistently applied yet i'm not on the side of thinking there is a governmental fix here. there is a lot about section 230 we could debate. some of the things senator durbin said, in the era of telephones nobody blamed the phone company for spreading misinformation by phone. that would exactly be the case if section 230 were actually neutral.
12:40 pm
you're applying content moderation policies seemingly not in an objective way. i am out of time but it would be useful for both of you to give us a three or five-year window into the future. if there isn't new legislation what is changing besides just saying we're moving from humans to more a.i.? what qualitatively is changing in the way content moderation happens inside of your organizations short of a new regulatory scheme? can you tell us where you think you're actually improving and what problems you're trying to solve? mr. zuckerberg, you first please. >> senator, one of the areas we're very focused on is transparency both in the process and in the results. we're already at a point where every quarter we issue a community standards enforcement report that basically details the prevalence of each category of harmful content and how effective we are at addressing it before people even have to report it to us.
12:41 pm
over time we would like to fill that out and have more detail on that, make it more robust. we already committed to an independent external audit of those metrics so that people can trust them even more. people have lots of different kinds of requests for where we go with that in the future, whether that is breaking down the stats by country or language or into more granular buckets, or adding more data around precision, but i think that would all be very helpful so that people can see and hold us accountable for how we're doing. for what it is worth i think that would be a valuable part of a regulatory framework that would not feel particularly overreaching to me. it is something that could be put in law that would create an apples-to-apples framework that all companies in the space would have to report on the outcome and effectiveness of their programs so we can see how everyone is doing. that seems like a sensible step
12:42 pm
to me. >> thank you. mr. dorsey. >> senator whitehouse. >> mr. dorsey, same question, i will give it back to you. >> missed that. >> junior acting chairman. mr. dorsey. >> thank you. i mean if we're considering -- [inaudible] i think the realization is that a centralized global content moderation system does not scale and we need to rethink how we, how we operate these services and i would point to, we certainly need transparency around any process that we have and around the practice and the outcomes of those moderations but i think having more control so that individuals can moderate themselves, you know, pushing the power of moderation to the edges and to our customers and to the individuals using the service, is something we'll see more of. and i also believe that having
12:43 pm
more choice around how algorithms are altering my experience and creating my experience is important -- being able to turn off ranking algorithms, being able to choose ranking algorithms that are written by third party developers in somewhat of an algorithmic marketplace -- i think is important and is a future that would excite and energize us. >> thank you. appreciate my interaction with both of your companies in the runup to this and i think both of you said some meaty things about how we can move towards greater transparency. so i will follow up again. thank you, mr. chairman. >> senator whitehouse. >> thank you, chairman. gentlemen, let me start with a moment of history to give context to my questions. when the tobacco industry discovered that its product was deadly it responded to that news with a systemized program of
12:44 pm
denying that set of facts. the upshot for the tobacco industry was not great. it was found in federal court to have engaged in massive fraud and was put under court order to cease its fraudulent behavior. at around the same time the fossil fuel industry began to run into a similar problem regarding the effects of its product and it picked up the tobacco industry's scheme where it left off, using some of the same individuals, some of the same entities, many of the same methods as the tobacco industry's denial operation. these are persistent, highly motivated, very well-funded and complex information operations not unlike a hostile intelligence service would run and they are quite secretive. we're now seeing a new form, i
12:45 pm
guess you call it election denial happening around our country right now. so that's the background that i come at this from seeing and i'm wondering if each of you see a difference between individual error and basically mass disinformation? is there a difference between odd people with fringe views who offer personal opinions and an orchestrated plan of deliberate misinformation or disinformation that is driven by motivated interests whether foreign or domestic? >> senator, i absolutely think that there is a difference and you can see it in the patterns of use on the platforms. in our policies and operations we view these coordinated,
12:46 pm
inauthentic behavior operations, networks of fake accounts, sometimes combined with real accounts, pushing out a message but making it seem like it is coming from a different place than it is or might be more popular than it is. this is what we saw the internet research agency out of russia do in 2016 and since then a number of other governments and private organizations, including some companies like what you've mentioned, have engaged in this behavior. now the good news is that i think the industry has generally gotten its systems to be more sophisticated to defend against that in the last several years. a combination of a.i. systems that we built to find networks of accounts that are not really behaving the way a normal person would, coupled with large numbers of content reviewers, sometimes with expertise in counterterrorism or counterintelligence, and then
12:47 pm
some signal sharing, whether with the intelligence community, law enforcement, different groups that have expertise in different areas and with other tech platforms, but this is a big effort on, i think, all of our sides to make sure that we can defend against this kind of interference and i think we are getting better and better at it. >> well let me encourage you to persist. as you know the last time you were here you were asked about advertising paid for on facebook denominated in rubles, which was not a very sophisticated scheme to be able to penetrate, but facebook was unable to penetrate it, and your upgrade from that original setup was simply to allow a shell corporation to intermediate between the real actor and you. so i encourage you to continue to try to make sure that real voices are what are heard on facebook. mr. dorsey, let me turn to you
12:48 pm
to ask you the same question in the context of bots. brown university recently did a study that showed about 25% of all tweets about climate change are generated by bots. most of them obviously push out climate denial as i described that operation. what is twitter's capacity to identify a bot as opposed to a real customer? >> well to build off your previous question i do think there is a difference as mark said. i do think there are many coordinated campaigns to manipulate the public conversation, to divide people all around the world, to confuse and generally to distract and we do have policies and enforcement to prevent as much of this as possible. it is a growing threat and it shows no signs of slowing down.
12:49 pm
bots are one way that entities do this. sometimes it may look like a bot but it is actually a human that is organized with other humans for a particular agenda. so it is challenging. we are doing work right now to better identify bots on our service. >> let me just interject, mr. dorsey, real quick. as a baseline proposition do you agree that a bot does not deserve a voice on your platform? that it should be actual people and organizations? >> i don't agree with that at a high level. i think we should be labeling bots so that people have greater context for what they're interacting with. >> fair enough. >> there are plenty of bots on our service that provide a valuable function. and i wouldn't want to take that away. >> let me ask both of you, and maybe you can supplement this with an answer in writing for the record because my time
12:50 pm
is getting short and this is a complicated question. but the question is when does it matter to twitter and when does it matter to facebook to know who the actual entity is who is using your platform? let me start with you, mr. dorsey, since mr. zuckerberg went first last time. you can defer to a written answer if you like. my time is running very short. >> we'll add to this conversation with a written answer but i, i do believe that anonymity is important. we have seen its usefulness with activists and with whistle-blowers and i think that is critical but certainly there are times, and it is judged by severity of potential outcomes, where we need to dig into identity and take actions. >> we'll follow up with that and
12:51 pm
let me just ask you, since my time has expired, mr. zuckerberg, to respond or have your organization respond in writing. thank you. >> thank you. before we, senator whitehouse brought up something very important. i will ask this as directly as i can, to facebook and twitter, do you have any internal research or evidence to suggest that your platforms can be addictive? mr. zuckerberg? >> senator, i think we can follow up with a summary of research that we have but, from what i have seen so far, it is inconclusive and most of the research suggests that the vast majority of people do not perceive or experience these services as addictive or have issues but i do think that there should be controls given to people to help them manage their
12:52 pm
experience better and this is something we're very focused on. >> mr. dorsey? >> i'm not aware of internal research but we can follow up, but i do think that like anything else these tools can be addictive. we should be aware of that, acknowledge it, and make sure we are making our customers aware of better patterns of usage. so the more information the better here. >> thank you. senator hawley. >> thank you, mr. chairman. in the late 19th century the heads of the biggest corporations in america, the robber-barons, got together and they set rates, they set prices, they determined how they would control information flow, they determined how they would get rid of competition. and i will be darned if we aren't right back there again, except for this time you're the robber-barons, your companies are the most powerful companies in the world, and i want to talk about how you're coordinating together to control information. in recent days my office was contacted by a facebook
12:53 pm
whistle-blower, a former employee of the company with direct knowledge of the company's content moderation practices, and i want to start by talking about an internal platform called tasks that facebook uses to coordinate projects, including censorship. the tasks platform allows facebook employees to communicate about projects they're working on together. that includes facebook's censorship teams, including the so-called community well-being team, the integrity team and the hate speech engineering team, who all use the task platform to discuss which individuals or hashtags or websites to ban. mr. zuckerberg you're familiar with the task platform, aren't you? >> senator, we use the task system for, i think it is, as you say, for people coordinating all kinds of work across the company, although i am not sure i would agree with the characterization specifically around content moderation that you gave. >> well, let's get into that.
12:54 pm
let me see if we can refresh your memory and provide, for folks at home watching, here over my shoulder, an example, a screen shot of the task platform in use. you will notice as the cameras zoom in several references to election integrity throughout on these lists of tasks. this is shared across facebook's sites, company locations, by working groups. what particularly intrigued me is that the platform reflects censorship input from google and twitter as well. so facebook as i understand it, facebook censorship teams communicate with their counterparts at twitter and google and then enter those companies' suggestions for censorship on to the task platform so that facebook can follow up with them and effectively coordinate their censorship efforts. mr. zuckerberg, let me ask you, directly, under oath now, does facebook coordinate its content moderation policies or efforts in any way with google or
12:55 pm
twitter? >> senator, let me be clear about this, we, we do coordinate on and share signals on security-related topics. so for example, if there is a signal around a terrorist attack or around child exploitation imagery or around a foreign government creating an influence operation, that is an area where the companies do share signals about what they see. but i think it is important to be very clear that that is distinct from the content moderation policies that we or the other companies have, where once we share intelligence or signals between the companies, each company makes its own assessment of the right way to address and deal with that information. >> i'm talking about content moderation. i'm talking about individuals, websites, hashtags, phrases to ban. is it your testimony that you do
12:56 pm
not communicate with twitter or google about content moderation, about individuals, websites, phrases, hashtags to ban? just yes or no, do you communicate with twitter or google about coordinating your policies? >> senator, we do not coordinate our policies. >> do your facebook content moderation teams communicate with their counterparts at twitter or google? >> senator, i'm not aware of anything specific, but i think it would be probably pretty normal for people to talk to their peers or colleagues in the industry -- >> you don't do it? >> no, i'm saying that i'm not aware of any particular conversation, but i would expect that some level of communication probably happens -- that's different from coordinating what our policies are or our responses in specific instances. >> fortunately, i understand that the task platform is
12:57 pm
searchable, so will you provide a list of every mention of google or twitter from the task platform to this committee? >> senator, that's something that i can follow up with you and your team on after. >> yes or no, i'm sure you can follow up with the list, but why don't you commit while i've got you here under oath. it's so much better to do this under oath. will you commit now to providing a list from the tasks platform of every mention of google or twitter? >> senator, respectfully, i'm -- without having looked into this, i'm not aware of any sensitivity that might exist around that, so i don't think it would be wise for me to commit to that right now. so i would ask to follow up. >> how many items on the task platform reflect that facebook, twitter and google are sharing information about web sites or hashtags or platforms that they want to suppress? >> senator, i do not know -- >> will you provide a list of every web site and hashtag that
12:58 pm
facebook content moderation teams have discussed banning on the task platform? >> senator, again, i would be happy to follow up with you or your team to discuss further how we might move forward on that. but without -- >> will you commit to it here? senator cruz and senator lee both asked you for lists of individuals, web sites, entities that have been subject to content moderation. you expressed doubt about whether any such information exists, but you've also acknowledged the task platform exists, that it is searchable. so will you commit to providing the information you have logged on the task web site about content moderation that your company has undertaken, yes or no? >> senator, i think it would be better to follow up once i've had a chance to discuss with my team what any sensitivity around that would be that might prevent the kind of sharing that you're talking about. but once i've done that, i would be happy to follow up. >> all right. so you won't commit to do it here. we could, of course, subpoena
12:59 pm
this information, but i'd much rather get it voluntarily. let everybody take note that mr. zuckerberg has repeatedly refused to provide information that he knows that he has and has now acknowledged that the task -- let me switch to a different topic. mr. zuckerberg, tell me about sentra. what is the internal tool called sentra? >> senator, i'm not aware of a tool with that name. >> well, let me see if this refreshes your memory. sentra is a tool that facebook uses to track its users not just on facebook, but across the entire internet. sentra tags different profiles that a user visits, their message recipients, their linked accounts, the pages they visit around the web that have facebook buttons. sentra also uses behavioral data to monitor users' accounts, even if they're registered under a different name, and you can see a screen shot of the sentra platform. we blocked out the user's name,
1:00 pm
although you can see this individual's birth date and age, their last log-in as well as all manner of tracking: how many different devices they have used to access facebook, how many different accounts are associated with their name, how many accounts they have visited and on and on and on. mr. zuckerberg, how many accounts have been subject to review and shut down through sentra? >> senator, i do not know because i'm not actually familiar with the name of that tool. i'm sure that we have tools that help us with our platform and community integrity work, but i am not familiar with that name. >> do you have a tool that does exactly what i've described, as you can see here over my shoulder, or are you saying that doesn't exist? >> senator, i'm saying that i'm not familiar with it and that i'd be happy to follow up and get you and your team the information that you would like on this. but i'm, i'm limited in what i can,
1:01 pm
what i'm familiar with and share today. >> always amazing to me, mr. chairman, how many people before this committee suddenly develop amnesia. maybe it is something about the air in the room. let me ask you this, when a facebook employee accesses a user's private information like their private messages or their personally-identifiable data, is a record made of that, mr. zuckerberg? >> sorry, senator, could you repeat that? >> is a record made of anytime a facebook employee accesses a user's private information, personally-identifiable information, for example, messages? is a record made anytime a facebook employee does that? >> senator, i believe so. >> does it trigger an audit? >> senator, i think sometimes it may -- >> how many audits -- [inaudible] >> senator, i do not know the exact number. >> can you give me a list? >> senator, we can follow up on
1:02 pm
that to see what would be useful here. >> will you -- i'm almost finished, mr. chair. will you commit to giving us a list of the number of times facebook employees have accessed users' personal account information without their knowledge? yes or no? >> senator, we should follow up on what would be useful here. it is, of course, in the operations of the company, if someone reports something, sometimes it's necessary for people at the company to go review and understand the context around what is happening when someone reports something. so this is fairly frequent. and as a matter of course, we do have security systems that can detect anomalous patterns, but we should follow up in more detail on what you're interested in. >> mr. chairman, i'll just say in closing that what we have here is clear evidence of coordination between twitter, google and facebook. mr. zuckerberg knows he has the
1:03 pm
tools to track this, yet he doesn't remember or won't commit to letting us see it. we have evidence of facebook tracking its own users across the web, mr. zuckerberg can't remember the name, isn't sure if the tool is deployed in this way and won't commit to giving us basic information. i submit this is totally unacceptable and totally predictable because it is exactly what these tech companies have done to the american people and to congress for years now, which is why it is time we took action against these modern-day robber barons. thank you, mr. chairman. >> senator klobuchar. >> thank you very much, mr. chairman. i'm, as you know, the lead democrat on the antitrust subcommittee, and i'm going to take a little different approach than mr. hawley did when it comes to competition policy, because i understand why they might be coordinating when it comes to security. what i want to focus on is what i think we're seeing all over this country, not just in tech. we're seeing a start-up slump.
1:04 pm
we're seeing more and more consolidation, and throughout history we've seen that that is not good for small businesses, it's not good for consumers, and it's not good for capitalism in the end. even successful companies, even popular companies and innovative companies are subject to the antitrust laws of this country. when i asked mr. pichai about this at the commerce committee hearing a few weeks ago, he said -- he told me google was happy to take feedback, and my response was that the justice department already provided feedback in the form of a federal antitrust complaint. and i know there is an investigation reportedly going on out of the ftc right now regarding your company, mr. zuckerberg. so i want to start with exclusionary conduct by limiting interoperability with the facebook platform. the investigation that we saw in the house recently gave us a number of examples of companies,
1:05 pm
excluded companies including vine, message me and arc. and my view is this conduct, this exclusionary conduct, not only damaged the ability of these smaller businesses to compete, but it deprived customers of convenient access. you're one of the most successful companies, biggest companies in the world, mr. zuckerberg. facebook. do you think that this is fair competition or not? with regard to the interoperability and how you've conducted yourself with these other companies. >> senator, i'm generally strongly in favor of interoperability and building platform and api access for companies to be able to access. that's why we built the facebook platform in 2007. some of the policies that you mention, i think, came about because what we were seeing was not necessarily start-ups, but larger competitors like google
1:06 pm
and some of our chinese rivals trying to access our systems in order to use their scale to compete with us better, and it just felt to us at the time that that wasn't the intent of what we were trying to enable. >> okay, but we may have a non-chinese example here. i just want to know, i know that maybe we could hear from mr. dorsey, and i have concerns about facebook's treatment of twitter subsidiary vine. it's my understanding that once facebook recognized vine as a competitor after twitter acquired it in 2013, it cut off vine's ability to interoperate with facebook so that vine users couldn't upload their videos to facebook. and i think that twitter shut down vine in 2016. mr. dorsey, could you tell me about the actual impact of facebook's actions on vine's business, on vine's ability to compete and on your decision to shut down the service? and i know you're not a chinese company.
1:07 pm
>> well, i don't know about the intent on the other side, but i know our own experience was we found it extremely challenging to compete with vine. and so, ultimately, we decided that the moment had passed us, and we shut it down. again, i don't know the specifics and the tactics in what was done, but we did find it a very, very challenging market to enter even though we existed prior to some of our peers doing the same thing. >> okay. i'm going to move to something else quickly. instagram and whatsapp. we have some released internal facebook e-mails in which you, mr. zuckerberg, wrote that instagram was nascent, and if they grow to a large scale, they could be very disruptive to us. and in a later e-mail, you confirmed that one of the purposes of facebook acquiring instagram would be to neutralize a competitor.
1:08 pm
you wrote those e-mails that were mentioned in that house report, is that right, mr. zuckerberg? >> senator, i believe so. and i've always distinguished between two things, though. one is that we had some competition with instagram in the growing space of camera apps and photo-sharing apps. but at the time i don't think we or anyone else viewed instagram as a competitor, as a kind of large, multipurpose social platform. in fact, people at the time kind of mocked our acquisition because they thought that we had spent dramatically more money than we should have to acquire something that was viewed as primarily a camera and photo-sharing app at the time. >> okay, well -- [inaudible conversations] >> we don't know how it would have done, and when we look at your e-mails, it kind of leads us down this road, as well as with whatsapp, that a part of the purchase of these nascent competitors is to, i'll use the
1:09 pm
words of ftc chairman joe simons, who just said a monopolist can squash a nascent competitor by buying it, not just by using anti-competitive activity. i know this is the subject of investigation. maybe we'll be hearing something soon, but i think it's something that committee members had better be aware of, not just with facebook, but with these deals that have gone through and how it has led to more and more consolidation, and how we as the senate -- and i just talked to chairman graham about this last week -- could actually do something about this by changing some of the standards in our laws to make it easier to bring these cases, and not just involving tech. so i want to go to something here at the end, the political ad discussion we had in front of the commerce committee, mr. zuckerberg. i know you said that facebook had made over $2 billion on political ads over the last -- you said this was, quote, a relatively small part of your revenue. i know that. but it's kind of a big part of
1:10 pm
the lives of politics, when that much money is being spent on ads. this is a bill i actually have with senator graham, yet we have seen these political ads that keep creeping through despite your efforts to police them on your own, and this is why i would so badly like to pass the honest ads act. one ad that went through said in three battleground states ballots marked for donald trump had been discarded. poll: will voter fraud only increase through november? paid ad: ballots marked for donald trump have been discarded. this played between september 29th and october 7th, 2020, and had up to 200,000 impressions. does this ad violate facebook's policy? >> sorry, can you repeat what the ad was? >> the ad was an american action news ad. they've advertised a lot on your platform, and it said in three
1:11 pm
battleground states ballots marked for donald trump had been discarded. this was pre-election. >> senator, i don't know off the top of my head if that specific ad violates our policies. i'd be happy to follow up with you on that. >> would you commit to a policy where actual people's eyes could review these ads instead of just algorithmic review? >> senator, we do have review and verification of political advertisers before they can advertise. >> okay. so does every ad go through a human being? >> senator, i don't know if every -- >> hmm? >> our policy is that we want to verify the authenticity of anyone who is doing political or social issue advertising. and i think it's worth noting that our human reviewers are not in all cases more
1:12 pm
accurate than the technical systems. >> okay. so you're saying a human being reviews every ad -- it's really yes or no or i don't know. >> senator, i don't know. >> okay. we'll follow up then in the written questions. and then you brought a cease and desist order against nyu for publishing a report noting that over the last two years facebook has not properly labeled approximately 37 million in political ads. why would you not support this project? why would you bring a cease and desist against them? >> senator, is that the project that was scraping the data in a way that might have been -- >> by your definition, but -- >> -- against the consent decree that we have? >> the reason it's happening is we haven't passed the honest ads act. so they're trying. they're not violating privacy. they're trying to get the ads so people can see the ads. other campaigns, journalists, everyone. >> senator, you know that i support the honest ads act and
1:13 pm
agree that we should have that passed, and even before that we'd implemented it across our systems. but i think in the case that you're referring to, that project was scraping data in a way that we agreed in our ftc consent decree around privacy that we would not allow. so we have to follow up on that and make sure that we take steps to stop that violation. >> okay. last, mr. dorsey, do you think there should be more transparency with algorithms? part of this is not just the ads now, i'm asking just generically. part of this is that people don't know how this data is going across the systems and across the platforms, and people basically are buying access, has been my impression, so that even if you ask, like, what's the news in the last 24 hours, old stuff comes up. something's gone awry from the beginning of this.
1:14 pm
would it be helpful, do you think, if there was more transparency with algorithms? >> i do think it would be helpful, but it's technically very, very challenging to enforce that. i think a better option is providing more choice to be able to turn off the algorithms or choose a different algorithm so that people can see how it affects one's experience. >> okay, thank you. and i ask that both of you look at the bill that senator kennedy and i have, the journalism competition and preservation act, to help content providers negotiate with digital platforms. thank you. >> thank you. senator tillis. >> thank you, mr. chairman. thank you, gentlemen, for joining. mr. chairman, i know you've asked the question a couple of times about whether or not these platforms can be addictive. i think they probably can be, based on what i've read, in one of two ways. one could be just the nature of the personality and engagement in a tool that they can somehow relate to.
1:15 pm
but i also think there's a transactional addiction, and i think you also mentioned "the social dilemma." i think that's the use of analytics -- which i don't criticize among the platforms -- but it's the use of analytics to addict you to go down a certain path to produce a certain outcome. and that could either be an outcome forming an opinion or an outcome buying something you didn't even think about 30 minutes before you started going down that path. so i think there are things that we've got to look at, and i do agree with mr. zuckerberg and mr. dorsey. it's not conclusive, but common sense would tell you it's a problem already, and it could become a bigger problem. mr. zuckerberg, i'd like to go back to the task platform for a minute. when i looked at the screen shot that senator hawley put up, it looked a lot like a work management tool. can you tell me a little bit about that and how many people are actually engaged as users on that platform at facebook?
1:16 pm
>> senator, yes, thank you. i was a bit surprised by senator hawley's focus on our task system because all this is, it's a basic internal project management tool. it's exactly what the name sounds like. it's used by people across our company thousands of times a day to assign projects and track them. and it's used for all manner of different types of tasks across different people and teams. >> and do you know roughly how many facebook contractors or full-time employees are actually users of the task platform? >> i think that probably the majority of facebook employees and people we work with have some interaction with the task system as some part of their work. it's basically just a
1:17 pm
community-wide to-do list. >> the other platform that senator hawley mentioned was the centra platform. you said you weren't familiar with that one, but i think that is something that would be helpful maybe as a follow-up, to really understand the nature of that platform. i won't press you on it today because you said you weren't specifically familiar with the name of the tool, but i would be more interested in how it's used. mr. dorsey, does twitter have a platform similar to the task platform for work management communication among staff? >> absolutely. even the smallest companies use these tools. we use a tool called jira. >> i was involved in implementing these in my time in the technology sector, so i can see why you have these platforms. but, mr. zuckerberg, you mentioned you didn't think there was systematic coordination between google and twitter, but you could conceive of how --
1:18 pm
conceive of how people in similar professions, you know, may have a discussion, have a relationship, maybe talk about it over a beer. so could you see why -- how a skeptic could see how these platforms could be used across platforms to force certain outcomes? let's say you had a hundred people at facebook, a hundred people at twitter and a hundred people at google that all had a political bent. they get together, they share notes and then they go back and make decisions that could make it appear like it's a corporate initiative, but it could be an initiative by maybe some well-intentioned but misguided staff. could you at least conceive of that being possible? >> senator, i understand the concern, and i think that coordination specifically on policies or enforcement decisions could be problematic in the way that you're saying, which is why i really wanted to
1:19 pm
make sure it was clear that what we do is share signals around potential harms that we're seeing, whether it's, you know, specific content in the aftermath of a terrorist attack that people are trying to share virally, so that if one platform is seeing it, another platform can be prepared that they will probably see that content soon too, or signals around foreign interference in elections. but i think it's quite important that each company deals with those signals in the way that is in line with their own policies. and that, i think, is very different from saying that the companies are kind of coordinating to figure out what the policies should be. i understand what the concern would be around that, and that's why i wanted to be clear about what we do and don't do there. >> i agree with that. i would find it horribly irresponsible to think that this was some sort of a systematic approach across the platforms. but just with the sheer numbers
1:20 pm
of people that you all employ now, i could see how some of what's been suggested here in the hearing could actually occur with just small groups of people trying to manipulate certain outcomes. i don't want to get into details there except to note that the task platform, if it's similar to ones that i have experience with, has a lot of logging, has a lot of data, to where maybe you could do yourself a service by saying, you know, i hear what's been suggested here, but by analyzing the interactions between groups and people and seeing maybe some aberrations, some people being more active and more geared towards one outcome or another, it could help you allay some of our concerns with the way these platforms are being manipulated. i'm not going to have time to drill down into some of the specific questions, and i'm glad to hear that you all are open to some regulatory outcome. i will tell you, if you listen to my colleagues on both sides of the aisle today, i fully
1:21 pm
expect that congress is going to act in the next congress, that we're going to produce an outcome. and some people think that that's not possible because maybe the republicans and democrats are far apart. but if you listen to what they're asking you, they're concerned in equal measure with the kinds of outcomes that they didn't like on social media. so i do believe that you would be well served to come to the table as an industry and identify things. mr. zuckerberg, i like what you've said about transparency, and, mr. dorsey, i do think that the algorithms, when you talk about the sheer scale, are probably the most sustainable way to go. but we're still going to have to have some confidence. i like your concept on choice as well. but we're going to have to have more visibility into what's occurred and what's produced certain outcomes, like a veterans
1:22 pm
day post that i did after the election. it was actually after my opponent had conceded. i just posted a picture thanking veterans, and for a period of time i think it was suspended and directing people towards election results. i would like to think, if that was the result of an algorithmic decision, that my opponent, who almost certainly posted a veterans ad, and every other person who was up for election got similar treatment, because if they didn't, it would seem to me that there was some other factor in play, that these algorithms are being applied unevenly, in that case to political commentary from elected officials or candidates. so i view this hearing as an opportunity to seek your commitment on two things. one i mentioned to you all yesterday. i've got an intellectual property subcommittee hearing in the middle of december. i would like to have a facebook and twitter representative there. i know you're very different
1:23 pm
platforms, but i think you would figure very prominently in a hearing that senator coons, sitting across from me now, and i have planned, so it would be helpful to get your commitment to have witnesses for that hearing in the middle of december. mr. zuckerberg, can i get that commitment? >> senator, yes, we will make sure that we have the right subject matter expert in your hearing. >> thank you. mr. dorsey? >> we'll follow up with determining the best course of action. >> thank you. and then we'll be following up on a series of questions that i'd like to ask to get my head around some of the analytics information that i think you almost certainly have and, hopefully, be willing to share. but we'll do that in a collaborative way in my office. thank you for being here today. >> thank you. we're going to take a five-minute break. our witnesses certainly have earned it. if that's okay with you, senator coons, we'll come back in about
1:24 pm
five minutes. thank you. neil: all right. they're taking a few minutes to break here. it's interesting, as we were discussing this during the hearing, how the approaches of both the democrats and the republicans are pretty much to go after these guys, the heads of facebook and twitter respectively, but from different sides. for example, to hear ted cruz go at it, they know exactly what they're doing, exactly how they're policing conservative thought. and then to hear some of the more liberal members of the democrats, dick durbin, of course, of illinois, saying i don't think you're doing enough when it comes down to cracking down on violent speech or hate speech, you could be doing a lot more. edward lawrence has been following all of this from washington. i don't know what will come of this, edward. i know, certainly, mr. dorsey and mr. zuckerberg have appeared a number of times before senate committees, a little more than, what, a month and a half ago before the commerce committee. but the message here, i think,
1:25 pm
coming from both parties is we are watching you, and i wouldn't be relaxing if i were you. >> reporter: watching for very different reasons, but it does seem that they're inching, taking baby steps, towards some sort of change to section 230. and that's been discussed. both mark zuckerberg and jack dorsey said they would agree to changes to 230 in some way, and that's the section in the communications decency act that gives social media companies immunity from lawsuits. now, very interesting line of questioning from senator graham. he actually brought in a new topic that has not been discussed in this hearing or the one before that. he talked about possibly being addicted to social media as an issue with these platforms. listen to what he says on this. >> so here's what i think we're going to do on the committee over time, i hope, is to ask the question more directly: are these social media sites addictive?
1:26 pm
do they have a public health component that needs to be addressed? >> reporter: you know, and that's a new wrinkle in all of this. but it was senator josh hawley that really stole the day related to this. he came out and said he has a whistleblower inside who showed a screen shot of what facebook uses as an internal mechanism to sort of track users. mark zuckerberg was not prepared to answer a lot of those questions; he said he wanted to get with his team. but, you know, it's interesting, senator ted cruz went to the heart of this issue when he was talking about all of this. he asked whether facebook and twitter are publishers, and they pointed to that new york post story, where twitter banned the sharing of that story outright, and facebook limited the distribution of that story, which said that hunter biden accepted money from either foreign government officials or foreign companies while the former vice president was still in office. listen to what ted cruz said when he confronted these ceos.
1:27 pm
>> let me ask you, was twitter being a publisher when it censored "the new york post"? >> no. we have very clear policies on the conduct we enable on the platform. and if there's a violation, we take enforcement action. >> reporter: and, in fact, jack dorsey went a little farther on this talking about that new york post story. he said, you know, they had blocked the account of the new york post, but within 24 hours decided they made a mistake in limiting the distribution of that story, saying it was not really hacked material, where it came from, and this is the reversal that he says twitter made because of that story. listen. >> we did not have a practice around retroactively overturning prior enforcements. this incident demonstrated that we needed one, so we created one we believe is fair and appropriate. >> reporter: so changing what they do there, neil, because of
1:28 pm
"the new york post" story. it did take them something like 20 days to reunlock that account there to go forward there. so, neil, very interesting hearing. this will be another hearing on this, it does not seem like we're going to come to a conclusion on what exactly should be changed, but it does look like there are changes on the horizon. neil: interesting. thank you, my friend. i want to go to jerry kaplan now, the digital trends editor-in-chief, very good read of all things technology. jeremy, always good seeing you, my friend, and thank you. what do you make of this? i try to get the 5 miles high view of this, and i get a sense that neither party -- [laughter] i don't know if that can be a good or bad thing, they're coming from it from different directions, but they don't like them. they don't like the unwielded power and the fact that they use it, you know, indiscriminately at least depending on the party. i'm wondering where this goes. there's no safe harbor for these guys in a biden administration
1:29 pm
with or without a republican senate. your thoughts. >> yeah, neil, thanks for having me on the show. it's always a pleasure to see you too. i think you put it well just a minute ago. it's very clear that people are watching these platforms. however, what's going to actually come out of it? senator thom tillis said just a minute ago that he was confident that the next congress was going to act, but how are they possibly going to act when there are four or five different conversations going on? what specifically are they going to act on? some people are talking misinformation, some talking bias, there's the question of antitrust, there's the question of whether section 230 needs to be transformed. all of these are very important conversations, but with different priorities from different senators, it's really hard to see them align on one of them and come to an agreement about what needs to happen here. neil: do we know where joe biden stands on all of this? i mean, he's been critical of
1:30 pm
tech overreach. he hasn't shared the view that it sidelines conservative points of view, but he has been concerned that these guys, for want of a better term, have gotten too big for their britches. do we know what policy he would take? >> i haven't heard anything specific from the incoming administration. he hasn't been as vocal as some of the more liberal democrats have been about breaking up big tech, even though everyone says something needs to happen. even zuckerberg himself said we're open to more regulation. what it looks like, of course, is a big question mark. yet we haven't heard much from biden about what he thinks should or shouldn't happen here. i don't see him being a force for antitrust regulation, for breaking apart any of these big companies, even though personally i feel like that might be a step in the right direction. neil: all right. thank you very much, my friend. i want to go back to the hearing, chris coons, the latest
1:31 pm
inquisitor, who's been touted as a potential secretary of state in a biden administration. let's listen in. >> we sent you a letter urging facebook to do more to address hate speech and calls to violence on the platform. we focused particularly on anti-muslim bias, an issue that warrants specific attention given the tragic consequences of anti-muslim hate speech in myanmar and sri lanka and new zealand and right here in the united states. and i appreciate that facebook has taken actions in response to these issues, but this letter points out why we need better metrics and transparency to actually evaluate your actions. so my colleagues and i urge better enforcement in particular of your call to arms policy, which could have made a difference in a recent tragedy in kenosha, wisconsin. you and i spoke last week, and i appreciated our conversation. can i count on you to provide specific and written responses to each of the questions in this letter, and then can we discuss
1:32 pm
them again? >> senator, yes. i read your letter, and i commit to getting back in detail with our team to address the important topics that you've raised. and one of your questions i can actually answer right now, i think it's your second question, about reporting in our quarterly transparency reports on the prevalence of hate speech that we find on our platforms. we will actually be adding that metric into our transparency reports this thursday when we announce our latest transparency report. >> thank you, mr. zuckerberg. let me just make sure i hear you right about prevalence, because that's one of my areas of concern. you mean you'll be reporting not just what percentage of hate
1:33 pm
speech on the platform you are identifying, catching proactively, removing, but the total volume? >> senator, that's my understanding, yeah, the prevalence of that content as a percentage of content on the platform. and over time our goal is going to be to get into more detail, which is the subject of some of the questions that you've asked here. as well, we've already committed to an independent audit of the community standards enforcement reports, that way people can have full confidence in all of the numbers that we're putting out. and we've been doing these reports for less than a few years now, and we'll continue to flesh them out and add more details, that way people can apply the appropriate oversight and scrutiny to the work. >> thank you. i want to move on for a moment, if i could, to your call to arms policy. you said earlier today that facebook made an operational
1:34 pm
mistake in not taking down an event page that called for people to bring weapons to a public park in kenosha. as i think we all know, there was a tragic incident of vigilantism in kenosha where a young man brought his ar-15, and it ended with two protesters dead and one injured. you indicated this operational mistake was because facebook had just adopted its militia policy a week earlier, and contractors without specialized training didn't pick up the violation. and i appreciate your frankness as to that in your answers to questions earlier today from senator durbin. but your response to senator durbin didn't mention that the event page also violated a separate call to arms policy in place for over a year that contractors aren't tasked to enforce. so i just have to ask as a follow-up: why didn't you before, and also today, reference the call to arms policy when reviewing what went wrong in
1:35 pm
kenosha? >> senator, my understanding is that that post did not necessarily violate that call to arms policy at the time. the call to arms policy does not prohibit anyone from saying, you know, let's go get our guns and do something. you know, for example, people organizing a hunting trip, that's obviously not going to be something that should be against the policies. but what we do on some of these policies, which i think i'm glad to get the opportunity to address, is some of these are context-specific and just require a higher level of context and expertise in the area to enforce, so we don't necessarily have all of the 35,000 reviewers assess every single one of these policies. so i can follow up in more detail if you'd like on the call to arms policy and the nuance
1:36 pm
there specifically, but that's also a bit on how we operationalize these policies. >> thank you for that answer. i do want to follow up, because just facially it seemed to me that this was a violation of your own call to arms policy, but i look forward to that conversation. mr. dorsey, if i might, at a house energy committee hearing i think it was two years ago, you committed to something that i was just discussing with mr. zuckerberg, an independent civil rights audit, but in your case of twitter. and the audit released by facebook in july has proven invaluable in bringing sunlight to some key areas in which facebook needs to improve. will you follow through on your agreement to this independent audit of twitter? >> so we work with civil rights groups all over the country and around the world to get feedback. we're in constant conversation with them. and we do believe that being more transparent and making our
1:37 pm
transparency report a lot more robust -- which today still has some gaps -- is important for any entity to audit us independently. we believe that's important because an audit could take away from the work that we need to do. we'd rather provide the information through our formats so that people can do that work. >> if i heard you right, you aren't going to pursue an independent civil rights audit, but you are going to continue to release data and consult with civil rights groups. i'd welcome a more thorough answer as to in which way having an independent outside audit would actually harm your transparency efforts. >> i don't mean it would harm it, i mean that we want to provide enough information so that people can do this work independently of us on their own timelines, and that's where we need to make our transparency report more robust. and, as i said, we have regular conversations with these groups
1:38 pm
and take feedback regularly. >> you do, mr. dorsey, have policies against deep fakes or manipulated media, against covid-19 misinformation, against things that violate civic integrity, but you don't have a stand-alone climate change misinformation policy. why not? >> well, misleading information, as you are aware, is a large problem. it's hard to define it completely and cohesively. we wanted to scope our approach to start by focusing on the highest severity of harm: we focus on manipulated media, which you mentioned, civic integrity around the election specifically, and public health, specifically around covid. you know, we wanted to make sure that the resources we have have the greatest impact where we believed the greatest severity of harm was going to be.
1:39 pm
our policies are living documents. they will evolve. we will add to them. but we thought it important that we focus our energies and prioritize the work as much as we could. >> well, mr. dorsey, i'll close with this: i just cannot think of a greater harm than climate change, which is literally transforming our planet and causing harm to our entire world. i think we're experiencing significant harm as we speak. i recognize the pandemic and misinformation about covid-19, manipulated media also would cause harm, but i urge you to reconsider that, because helping to disseminate climate denialism, in my view, further facilitates and accelerates one of the greatest existential threats to our world. thank you to both of our witnesses. thank you, mr. chairman. >> senator ernst. >> thank you, mr. chair. and, mr. zuckerberg and mr. dorsey, thank you both for being here with us today
1:40 pm
virtually and for your commitment to constantly improving the way your platforms are serving people across the country. there has been a lot of talk today, many of us have been listening from our offices or online, about the censorship of ideas and news on your platforms. and these are the things that have been at the forefront of americans' minds in the leadup to the election as well as in the weeks since our 2020 general election. and, you know, the people that i hear from, of course, believe that conservatives were wrongfully being silenced while those on the left were given basically free rein of your platforms. and one of the points of contention that is often brought up is that you do recruit heavily from california, which leads to your employee base skewing quite heavily to the left. so my first question is for both
1:41 pm
of you. do you have concerns about your ability to monitor disinformation on both sides of the political aisle equally, given that the majority of your employees typically do lean towards the more progressive side? and again, to both of you, have you taken any steps at all to make your employee base more representative of the country as a whole when it comes to political affiliation? and, mr. zuckerberg, if we can start with you, please. >> thank you, senator. those are both really important topics. in terms of assessing what is misinformation, i think it's important that we don't become the deciders on everything that is true or false ourselves, which is why we've tried to
1:42 pm
build a program of independent fact-checkers that we can work with on this. and those fact-checkers are accredited not by us, but by the independent poynter institute for journalism, and it includes fact-checkers that, i think, span the political spectrum, as well as a majority who would call themselves apolitical. so we've tried to address the issue of making sure that there isn't a bias in our actions by actually having us not be the deciders on that type of content ourselves. and to your second question about taking steps to diversify the employee base, this is a sensitive area in that i don't think it would be appropriate for us to ask people, on the way in when they were interviewing, what
1:43 pm
their political affiliation is, which, of course, makes it hard to know what the actual breakdown of the company is on this. but one of the areas where i'm more optimistic over time is i think we're going to see more people working remotely around the country and also around the world, which will mean that fewer people, a smaller percent of our employees, will have to come to, you know, the cities and areas like the bay area where our headquarters is, and we'll be able to employ an increasing number of people across all the different geographies in the country. >> very good, thank you. and, mr. dorsey? >> first and foremost, the most important thing is that we build systems and frameworks independent of any one particular employee or individual in our company. and included in that system are checkpoints. checkpoints to make sure that we are removing any bias that we find, checkpoints to do
1:44 pm
qa and monitoring of all the decisions that we make, and having an appeals process, which is an external checkpoint on whether we need to correct an enforcement action or not. so we want to build something that's independent of the people that we hire, and that is our focus in building a system. second, like mark, i'm really excited that we are at a stage where we can decentralize our company even more, that we do not need people to move to san francisco, that we can hire people all over the country. they can stay where they want to be, wherever they feel most creative. and that's not just in this country, it's around the world. and i think the tools are in a state where we can do that more easily. we've, obviously, been forced to do it with covid, and i don't think it's a state that we will return from. the days of having one centralized, massive corporate headquarters in any one particular city are certainly over, for us at least.
1:45 pm
and i think for many other entrepreneurs starting companies today. >> very good. i really appreciate that, and i think that covid has taught us all a very important lesson, and for those who are able to work remotely, i think you will find greater diversity in thought, which is very important, i think, for the types of platforms that you both represent. now i'd like to move on to an entirely different topic. since i began my career here in the senate, i have been committed to, of course, protecting those who need it most, and folks -- our children are the most in need, and it's our job as lawmakers to respond to the ongoing threats against them. and social media has created a whole new world for all of us, and it can help us share information and resources with the public about human trafficking and child exploitation. and it can also help us to track
1:46 pm
sexual predators and ensure our children are safe from those known threats. and, in fact, i've been working on legislation that would help update what information sexual predators have to provide about their online identities. as we all know, however, social media can also be incredibly harmful. child sexual abuse material, csam, is present on nearly every single social media platform that exists. and in such polarized times, i am grateful that it is a subject where we do find, it doesn't matter if you're on the left or the right, we can come together to find solutions for this issue. and, mr. zuckerberg, i know that you and i touched upon this briefly last week when we spoke over the phone, and i do hope, mr. dorsey, that you also share mr. zuckerberg's commitment to fighting these types of issues on your platforms. so just very briefly here as i'm
1:47 pm
running out of time, but, mr. zuckerberg, i do understand that facebook is planning to outfit facebook messenger with end-to-end encryption. how do you hope to prevent the dissemination of child sexual abuse material if neither law enforcement nor you can access that messenger data? is there some sort of apparatus that you will have in place that can help law enforcement with those situations? and then, mr. dorsey, we'll go to you next as well. >> senator, thank you for this. i think you're right on every count in what you just said, both that child sexual exploitation is one of the gravest threats that we focus the most on, and that it is also an area where we'll face new challenges as we move to end-to-end encryption
1:48 pm
across our messaging systems. of course, the reason why we're moving to end-to-end encryption is because people want greater privacy and security in their messaging systems, and over time we're choosing systems that can provide them more privacy and security, and that's something that i think it makes sense for us to offer. i think encryption broadly is good. but it is going to mean that we're going to need to find and develop some new tactics. a lot of what we have found around the best ways to identify bad actors on our systems is not actually by looking at the specific content itself, but by looking at patterns of activity and where it is that a group or a person is not behaving in the way that a normal person would, so we can flag and review that. we've grown increasingly sophisticated at that. that goes across the foreign interference prevention work that we do, and it also will be
1:49 pm
a factor here. and i'd be happy to follow up in more detail on what we have planned, but overall i would say that this is something that we're very focused on, and i agree with your concern. >> okay, thank you very much. and, mr. dorsey, you as well. for those that are on twitter, making sure that law enforcement would have access if at all possible, if you can give me an overview of that, please. >> child exploitation is absolutely terrible, and we don't tolerate it on our service at all. we regularly work with law enforcement to address anything that we see, inclusive of the patterns that mark has mentioned. the majority of twitter is public, so we don't have as much activity in private channels, so it's a different approach, but still, you know, we still see the same activity, and it's one of our highest priorities in its severity. >> thank you both very much for being accessible to us today.
1:50 pm
truly appreciate your input. mr. chairman, thank you. >> thank you. mr. zuckerberg, i really want to appreciate what facebook has done in the area of sexual exploitation of children. y'all have done a very good job of trying to help law enforcement in that area. senator hirono. >> thank you, mr. chairman. mr. zuckerberg, for the second time in three weeks you've been called before a senate committee, and republican colleagues have beaten you up over claims that you're supposedly biased against the -- [inaudible] the fact of the matter is these allegations are baseless. everyone who has systematically looked at social media, from media matters to the cato institute to former republican senator jon kyl, has found absolutely no evidence of anti-conservative bias. data show that far-right content from the likes of fox news, ben
1:51 pm
shapiro dominates the daily top ten most engaged pages on facebook. so all of these allegations about the fact that you hire, or that all of your employees are -- [laughter] -- left of center is -- [inaudible] of nothing. certainly not -- [inaudible] so the way i see it, this hearing is an effort by my republican colleagues to -- [inaudible] and unfortunately, it is working. two weeks ago "the washington post" reported that facebook has bent over backwards to avoid claims that it was biased against conservatives, and it removed a strike against donald trump jr.'s instagram account that would have penalized him as a repeat offender, apparently one of several strikes removed from trump campaign members. [audio difficulty]
1:52 pm
america first action was allowed to post material flagged by facebook's third-party fact-checkers without penalty. and these are just two examples, and they are nothing new. in 2019 facebook added -- a web site described by its cofounder as a platform for the alt-right -- as one of its trusted news sources. in 2019 facebook -- [inaudible] the daily caller, another such site, to be one of its third-party fact-checkers. and joel kaplan, former deputy chief of staff to george w. bush -- [audio difficulty] changes designed to make facebook's algorithm less divisive because the changes would disproportionately affect -- [inaudible] according to kaplan. so, mr. zuckerberg, you founded facebook, a company with a
1:53 pm
market capitalization of approximately $800 billion, and you control a majority share of the company's stock. mr. zuckerberg, i'm wondering at what point do you stop giving in to baseless claims of anti-conservative bias and start exercising your control over facebook to stop driving division and -- [inaudible] as you say is the mission. my question is to both of you. a recent harvard study found that president trump was the single biggest source of voting misinformation in the runup to the presidential election. since the election, president trump has only continued the lies on twitter and facebook, also claiming that he won reelection and that the election is being stolen from him. but the truth is joe biden won the election, as far as major news networks and the associated
1:54 pm
press are concerned. your responses to president trump's lies have, at most, added a warning label while still allowing the president's misinformation to remain online. [audio difficulty] from senator feinstein, who defended the labels saying they point people to a broader conversation around the election. i have serious questions about the effectiveness of these labels. what evidence do you have that these labels are effective in addressing president trump's lies? response, please. >> so i think mark mentioned this earlier as well, we are doing a retrospective on the effectiveness of all of our actions around the election. we believe the labels point to, as you said, a broader conversation so that people can see
1:55 pm
what's happening with the election and with the results. we don't want to put ourselves in a position of calling an election. that is not our job. so we're pointing to sources and pillars that have traditionally done this in the past, and that is the intention of the policy, that's the intention of the labeling system. >> mr. zuckerberg? >> senator, we view the additional context that we put on posts as part of an overall response and effort to make sure that people have reliable information about the election. so we don't expect that it's only when people are seeing a post that may be casting doubt on a legitimate form of voting or may have misinformation that we can correct it and help people understand how they could really vote, for example. so that's why we put the voter
1:56 pm
information center prominently on facebook and instagram for months leading up to the election and kept it up afterwards so people can see reporting on the results. as i mentioned in my opening statement, 140 million americans visited that. i think this was the largest voting information campaign in the history of our country. so i think, when taken together, these actions were quite a strong effort to communicate accurate and reliable information to people at the times when they needed it about how they can vote in the election, encouraging them to vote, having confidence in the election system, knowing who won and when the election had been -- >> my time is running out. all of the information, actual information, that the voter information center
1:57 pm
provides is one thing, but we're talking about all of this misinformation, actually outright lies, put out by the president. you want to be able to -- i really have questions as to whether or not this kind of labeling -- and i'm glad mr. dorsey is determining whether these labels do anything -- actually creates a larger framework for discussion. i really seriously -- [inaudible] actually happening. since i'm running out of time, i wanted to get to, you know, donald trump as president. his posts get a pass on whether they contain misinformation, especially on whether he won the election and covid, you name it, the fraudulent elections that he alleges, et cetera. i wonder what both of you are prepared to do regarding donald trump's use of your platform after he stops being president? will he still be deemed
1:58 pm
newsworthy? will he still get to use your platform to spread this misinformation? >> senator, let me clarify my last answer: we are also having academic studies done on the effectiveness of all of our election measures, and they will be publishing those results publicly. in terms of president trump and moving forward, there are a small number of policies where we have exceptions for politicians under the principle that people should be able to hear what their elected officials, and candidates for office, are saying, but by and large the vast majority of our policies have no newsworthiness or political exception. so if the president or anyone else is spreading hate speech or inciting violence or posting content that delegitimizes the election or valid forms of voting, those will receive the
1:59 pm
same treatment as anyone else saying those things. that will continue to be the case. >> remains to be seen. mr. dorsey? >> we do have a policy around public interest where, for global leaders, we do make exceptions: if a tweet violates our terms of service, we may leave it up, but we leave it up behind an interstitial, and people are not allowed to share it more broadly. a lot of the sharing is disabled except for quoting it on top of your own account. if somebody is not a world leader anymore, that particular policy goes away. >> thank you. >> so i am running out of time. mr. chairman, i would like to enter into the record a number of studies, particularly a november 1st, 2020, article in the "washington post" entitled, trump is largely
2:00 pm
unconstrained by facebook rules or -- [inaudible]. a may 26, 2020, article in the "wall street journal" titled, facebook executives shut down efforts to make the site less divisive. three studies from media matters finding no anti-conservative bias on facebook. an article by a doctor titled, no, big tech is not censoring conservatism. [inaudible]. >> without objection. charles: you are watching, you've been watching, the senate hearings. you've got two major players in the social media scene being grilled, in part because of the bias, which even now they admit exists, between their social media platforms and particularly conservative commentators. both republicans and democrats are taking shots at both these guests, and we're going to continue to monitor it. we'll be covering it as well. good afternoon, i'm charles payne. this is "making money."