tv   Tech Check  CNBC  October 5, 2021 11:00am-12:00pm EDT

11:00 am
content moderation process and importantly, in the context we're talking about today with this hearing, with the major big tech whistleblower, the pact act would explore the viability of a federal program for big tech employees to blow the whistle on wrongdoing inside the companies where they work. in my view we should encourage employees in the tech sector like you to speak up about questionable practices of big tech companies so we can, among other things, ensure that americans are fully aware of how social media platforms are using artificial intelligence and opaque algorithms to keep them hooked on the platform. so let me, ms. haugen, just ask you: we've learned from the information that you provided that facebook conducts what's called engagement-based ranking, which you've described as very dangerous. can you talk more about why engagement-based ranking is dangerous, and do you think congress should seek to pass legislation like the filter bubble transparency act that would give users the ability to avoid engagement-based ranking
11:01 am
altogether? >> facebook is going to say you don't want to give up engagement-based ranking. you're not going to like facebook as much if we're not picking out the content for you. that's just not true. facebook likes to present things as false choices -- like, you have to choose between having lots of spam. imagine we ordered our feeds by time, like on imessage, or like other forms of social media that are chronologically based. they're going to say you're going to get spammed, that you're not going to enjoy your feed. the reality is that those experiences have a lot of permutations. there are ways that we can make those experiences where computers don't regulate what we see; we together socially regulate what we see. but they don't want us to have that conversation, because facebook knows that when they pick out the content that we focus on using computers, we spend more time on their platform, they make more money
11:02 am
the dangers of engagement-based ranking are that facebook knows that content that elicits an extreme reaction from you is more likely to get a click, a comment or a reshare. and it's interesting because those clicks and comments and reshares aren't even necessarily for your benefit; it's because they know that other people will produce more content if they get the likes and comments and reshares. they prioritize content in your feed so that you will give little hits of dopamine to your friends so they will create more content. and they have run experiments on people, producer-side experiments, where they have confirmed this >> so, in part of the information you provided to "the wall street journal," it has been found that facebook altered its algorithm in an attempt to boost these meaningful social interactions, or msi. but rather than strengthening bonds between family and friends on the platform, the algorithm rewarded more sensationalism and outrage. and facebook would say its
11:03 am
algorithms are used to connect with friends and family in ways that are largely positive. do you think that facebook's algorithms make its platform a better place for more users, and should consumers have the option to use facebook and instagram without being manipulated by algorithms designed to keep them engaged on the platform? >> i strongly believe -- i spent most of my career working on systems like engagement-based ranking. when i come to you and say these things, i'm basically damning ten years of my own work. facebook says we can do it safely because we have a.i. you know, the artificial intelligence will find the bad content that we know our engagement-based ranking is promoting. they have written blog posts about how they know this is dangerous, but the a.i. will save us. facebook's own research says they cannot adequately identify dangerous content and, as a result, those dangerous algorithms they admit are picking up the extreme
11:04 am
sentiments, the division. they can't protect us from the harms that they know exist in their own system. and so i don't think it's just a question of saying should people have the option of choosing not to be manipulated by their algorithms. i think if we had appropriate oversight, or if we reformed 230 to make facebook responsible for the consequences of their intentional ranking decisions, i think they would get rid of engagement-based ranking. because it is causing teenagers to be exposed to more anorexia content, it is pulling families apart, and in places like ethiopia, it's literally fanning ethnic violence. i encourage reform of these platforms, not picking and choosing individual ideas, instead making the platforms themselves safer, less twitchy, less reactive, less viral, because that's how we scalably solve these problems. >> thank you. ms. chair, i would simply say let's get to work. we got some things we can do here. thanks. >> i agree
11:05 am
thank you. senator schatz? >> thank you, mr. chairman, ranking member. thank you for your courage in coming forward. was there a particular moment when you came to the conclusion that reform from the inside was impossible and that you decided to be a whistleblower? >> there was a long series of moments where i became aware that when facebook encountered conflicts of interest between its own profit and public safety, facebook consistently chose to prioritize its profits. i think the moment when i realized we needed to get help from the outside, that the only way these problems would be solved is to solve them together, not solve them alone, is when civic integrity was dissolved following the 2020 election. it really felt like a betrayal of the promises that facebook had made to people who had sacrificed a great deal to keep the election safe, by basically dissolving our community and integrating it into just other parts of the company. >> and when -- i know their
11:06 am
responses that they've sort of distributed the duties. >> yeah. >> that's an excuse, right >> i cannot see into the hearts of other men. i don't know what -- >> let me say it this way, it won't work, right? >> i can tell you that when i left the company, the people who i worked with were disproportionately, maybe 75% of my pod of 7 people -- those are product managers, program managers -- most of them had come from civic integrity. all of us left the inauthentic behavior pod, either for other parts of the company or the company entirely, over the same six-week period of time. six months after the reorganization, we had clearly lost faith that those changes were coming. >> you said in your opening statement that they know how to make facebook and instagram safer. >> uh-huh. >> so, thought experiment: you are now the chief executive officer and chairman of the
11:07 am
company. what changes would you immediately institute? >> i would immediately establish a policy of how to share information and research from inside the company with appropriate oversight bodies like congress. i would give proposed legislation to congress saying here is what an effective oversight agency would look like. i would actively engage with academics to make sure that the people who are confirming whether facebook's marketing messages are true have the information they need to confirm these things. and i would immediately implement the, quote, soft interventions that were identified to protect the 2020 election. so that's things like requiring someone to click on a link before resharing it, because other companies like twitter have found that that significantly reduces misinformation. no one is censored by being forced to click on a link before resharing it >> thank you. i want to pivot back to instagram's targeting of kids.
11:08 am
we all know that they announced a pause, but that reminds me of what they announced when they were going to issue a digital currency, and they got beat up by the u.s. senate banking committee, and they said, never mind. and now they're coming back around hoping that nobody notices that they are going to try to issue a currency. now, let's set aside for the moment this sort of -- the business model, which appears to be gobble up everything, do everything; that's the growth strategy. do you believe they're actually going to discontinue instagram kids, or are they just waiting for the dust to settle >> i would be sincerely surprised if they do not continue working on instagram kids. i would be amazed if a year from now we don't have this conversation again >> why >> facebook understands that if they want to continue to grow, they have to find new users. they have to make sure that the next generation is just as
11:09 am
engaged with instagram as the current one. and the way they'll do that is by making sure that children establish habits before they have good self-regulation. >> by hooking kids. >> by hooking kids. i would like to emphasize, one of the documents we sent in examined problematic use by age, and that peaked with 14-year-olds. it's just like cigarettes. teenagers don't have good self-regulation. they say explicitly, i feel bad when i use instagram and yet i can't stop. we need to protect the kids. >> just my final question. i have a long list of misstatements, misdirections, and outright lies from the company. i don't have the time to read them, but you're as intimate with all of these deceptions as i am, so i will just jump to the end. if you were a member of this panel, would you believe what facebook is saying >> i would not believe --
11:10 am
facebook has not earned our right to just have blind trust in them. trust -- last week one of the most beautiful things i heard in the committee was, trust is earned, and facebook has not earned our trust >> thank you >> thanks, senator schatz. senator moran, and then we've been joined by the chair; senator cantwell will be next. we're going to break at about 11:30 if that's okay because we have a vote. then we'll reconvene >> okay. >> mr. chairman, thank you. the conversation so far reminds me that you and i ought to resolve our differences and introduce legislation. so, as senator thune said, let's go to work. >> our differences are very minor, or they seem very minor in the face of revelations that we have now seen, so i'm hoping we can move forward. >> i share that view, mr.
11:11 am
chairman. thank you. thank you very much for your testimony. what examples do you know -- we've talked about particularly children, teenage girls specifically, but what other examples do you know about where facebook or instagram knew its decisions would be harmful to its users but still proceeded with the plan and executed those harmful behaviors >> facebook's internal research is aware that there are a variety of problems facing children on instagram. they know that severe harm is happening to children. for example, in the case of bullying, facebook knows that instagram dramatically changes the experience of high school. so when we were in high school, when i was in high school, most kids -- >> you looked at me and changed your wording. >> sorry. when i was in high school, you
11:12 am
know, most kids have positive home lives. it doesn't matter how bad it is at school, kids can go home and reset for 16 hours. kids who are bullied on instagram, the bullying follows them home. it follows them into their bedrooms. the last thing they see before they go to bed at night is someone being cruel to them, or the first thing they see in the morning is someone being cruel to them. kids are learning that their own friends, people who they care about, are cruel to them. think about how that's going to impact their domestic relationships when they become 20-somethings or 30-somethings, to believe that people who care about you are mean to you. facebook knows that parents today, because they didn't experience these things, they never experienced this addictive experience with a piece of technology, give their children bad advice. they say things like, why don't you stop using it. facebook's own research is aware that children express feelings of loneliness and struggling with these things because they can't even get support from their own parents. i don't understand how facebook
11:13 am
can know all these things and not escalate it to someone like congress for help and support in navigating these problems. >> let me ask the question in a broader way. besides teenagers or besides girls or besides youth, are there other practices at facebook or instagram that are known to be harmful but yet are pursued? >> facebook is aware of choices that were made in establishing meaningful social interactions -- so, engagement-based ranking that didn't care if you bullied someone or committed hate speech in the comments; that was meaningful. they know that that change directly changed publishers' behavior, that companies like buzzfeed wrote in and said the content that is most successful on our platform is some of the content we're most ashamed of; you have a problem with your ranking. and they did nothing. they know that politicians are being forced to take positions they know their own constituents don't like or approve of, because those are the ones that get
11:14 am
distributed on facebook. that's a huge, huge negative impact. facebook also knows -- they have admitted in public -- that engagement-based ranking is dangerous without integrity and security systems, but then has not rolled out those integrity and security systems to most of the languages in the world. and that's what's causing things like ethnic violence in ethiopia. >> thank you for your answer. what is the magnitude of facebook's revenues or profits that come from the sale of user data >> oh, i'm sorry, i've never worked on that. i'm not aware. >> thank you. what regulations or legal actions by congress or by administrative action do you think would be feared most by facebook, instagram or aligned companies? >> i strongly encourage reforming section 230 to exempt decisions about algorithms. right? so modifying 230 around content
11:15 am
i think has -- it's very complicated, because user-generated content is something that companies have less control over. they have 100% control over their algorithms. and facebook should not get a free pass on choices it makes to prioritize growth and virality and reactiveness over public safety. they shouldn't get a free pass on that because they're paying for their profits right now with our safety. so i strongly encourage reform of 230 in that way. i also believe there needs to be a dedicated oversight body, because right now the only people in the world who are trained to analyze these experiments, to understand what's happening inside of facebook, are people who grew up inside of facebook or pinterest or another social media company. and there needs to be a regulatory home where someone like me can do a tour of duty after working at a place like this, and have a place to work on things like regulation, to bring that information out to the oversight boards that have the right to do oversight. >> a regulatory agency within the federal government
11:16 am
>> yes >> thank you very much. thank you, chairman. >> senator >> thank you, senator. thank you for holding this hearing. i think my colleagues brought up a lot of important issues, so i think i want to continue in that vein. first of all, the privacy act that i introduced along with several of my colleagues actually does have ftc oversight of algorithm transparency in some instances. i hope you take a look at that and tell us what other areas you think we should add to that level of transparency, but clearly that's the issue at hand here. i think in your coming forward -- thank you again for your willingness to do that -- the documentation that you say now exists is the level of transparency about what's going on that people haven't been able to see. so your information that you say has gone up to the highest levels at facebook, is that they purposefully knew that their
11:17 am
algorithms were continuing to have misinformation and hate information, and that when presented with information about this terminology, downstream msi, meaningful social interactions, knowing that it was this choice -- you could continue this wrongheaded information, hate information about the rohingya, and continue to get higher click-through rates. i know you don't know about profits, but you know if you click through to the next page, i'm pretty sure there's a lot more ad revenue than if you didn't click through. so you're saying that documents exist that at the highest level at facebook, you had information discussing these two choices, and that people chose, even though they knew that it was misinformation and hurtful and maybe even costing people their lives, they continued to choose profit. >> we have submitted documents to congress outlining that mark zuckerberg was directly presented with a list of, quote,
11:18 am
soft interventions. so a hard intervention is like taking a piece of content off facebook, taking a user off facebook. soft interventions are about making slightly different choices to make the platform less viral, less twitchy. mark was presented with these options and chose to not remove downstream msi in april of 2020, even though -- even just isolated in at-risk countries, countries at risk of violence -- if it had any impact on the overall msi metric. so, he chose -- >> which in translation means less money. >> yeah. he said -- >> right. was there another reason given why they would do it, other than they thought it would really affect their numbers >> i don't know for certain. jeff horwitz and i struggled with this. we sat there and read these minutes. how is this possible? like, we've just read 100 pages on how downstream msi expands hate speech, misinformation,
11:19 am
violence-inciting content, graphic violent content -- why wouldn't you get rid of this? the best theory we came up with, and i want to emphasize this is our interpretation, is that people's bonuses are tied to msi. right? people stay or leave the company based on what they get paid, and if you hurt msi, a bunch of people weren't going to get their bonuses. >> so you're saying this practice even still continues today. like we're still in this environment? i'm personally -- >> oh, yeah. >> very frustrated by this, because we presented information to facebook from one of my own constituents in 2018 talking about this issue with the rohingya, pleading with the company. we pleaded with the company, and they continued to not address this issue. now, you're pointing out that these same algorithms are being used, and they know darn well in ethiopia that it's causing and inciting violence, and again they are still today choosing profit over taking this information down. is that correct? >> when rioting began in the
11:20 am
united states in the summer of last year, they turned off downstream msi only for when they detected content was health content, probably covid, and civic content. but facebook's own algorithms are bad at finding this content. it's still in the raw form for 80, 90% of even that sensitive content. in countries where they don't have integrity systems in the local language -- and in the case of ethiopia, there are 100 million people in ethiopia speaking six languages, and facebook only supports two of those languages for integrity systems -- this strategy of focusing on language-specific, content-specific systems, relying on a.i. to save us, is doomed to fail. >> i need to get to one -- first of all, i'm sending a letter to facebook today. they better not delete any information about the rohingya or investigations into how they proceeded on this, in light of your information or the documents. but, aren't we also now talking about advertising fraud? aren't you selling something to advertisers that's not really
11:21 am
what they're getting? we know about this because of the newspaper issues. we're trying to say that journalism basically has to meet a different standard, a public interest standard, that basically is out there proving itself every day, or they can be sued. these guys are a social media platform that doesn't have to live with that and the consequences. they're telling their advertisers that this was -- we see it. we see it -- people are coming back to local journalism because they're like, we want to be with the trusted brand; we don't want to be on your website. so i think your filing with the s.e.c. is an interesting one, but i think that we also have to look at what are the other issues here, and one of them is, did they defraud advertisers in telling them this was the advertising content that you were going to be advertising with, when in reality it was something different, based on a different model. >> we have multiple examples of questions and answers for the advertising staff, the sales staff, where advertisers asked, after the riots last summer, should we come back to facebook, or after the
11:22 am
insurrection, should we come back to facebook. and facebook said, in the talking points that they gave to advertisers, we're doing everything in our power to make this safer, or, we take down all the hate speech when we find it. >> that was not true. >> that was not true. they get 3 to 5% of hate speech. >> thank you. thank you, mr. chairman. >> thank you, senator. and if you want to make your letter available to other members of the committee, i would be glad to join you myself. >> thank you. thank you. >> thank you for suggesting it. >> thank you >> senator lee >> thank you, mr. chairman, and thank you, ms. haugen, for joining us this week. it's very, very helpful. we're grateful that you're willing to make yourself available. last week we had another witness from facebook, ms. davis. she came and testified before this committee, and she focused on the extent to which facebook targets ads to children, including ads that are either sexually suggestive or
11:23 am
geared toward adult-themed products or themes in general. now, while i appreciated her willingness to be here, i didn't get the clearest answers in response to some of those questions. so i'm hoping that you can help shed some light on some of those issues related to facebook's advertising processes here today. as we get into this, i want to first read you a quote that i got from ms. davis last week. here is what she said during her questioning. quote, when we do ads to young people, there are only three things that an advertiser can target around: age, gender, location. we also prohibit certain ads to young people, including weight loss ads. we don't allow tobacco ads at all to young people, we don't allow them to children, we don't allow them to minors, close quote. now, since that exchange happened last week, there are a number of individuals and groups
11:24 am
including a group called the technology transparency project, or ttp, that have indicated that that part of her testimony was inaccurate, that it was false. ttp noted that it had conducted an experiment just last month, and their goal was to run a series of ads that would be targeted to children ages 13 to 17, to users in the united states. now, i want to emphasize that ttp didn't end up running these ads; they stopped them from being distributed to users. but facebook did, in fact, approve them, and as i understand it, facebook approved them for an audience of up to 9.1 million users, all of whom were teens. so i brought a few of these to show you today. this is the first one i wanted to showcase. this first one is a colorful graphic encouraging kids to,
11:25 am
quote, throw a skittles party like no other, which, as the graphic indicates, and as the slang also independently suggests, involves kids getting together randomly to abuse prescription drugs. the second graphic displays an "ana tip" -- that is, a tip specifically designed to encourage and promote anorexia -- and it's on there. now, the language of the ana tip itself independently promotes that; the ad also promotes it insofar as it was suggesting these are images you ought to look at when you need motivation to be more anorexic, i guess you could say. now, the third one invites children to find their partner online and to make a love connection: you look lonely, find your partner now to make a love connection. now, look, it could be an
11:26 am
entirely different kettle of fish if this were targeted to an adult audience. it's not. it's targeted to 13 to 17-year-olds. now, obviously i don't support, and ttp does not support, these messages, particularly when targeted to impressionable children. and again, just to be clear, ttp did not end up pushing the ads out after receiving facebook's approval, but it did, in fact, receive facebook's approval. so i think this says something; one could argue it proves that facebook is allowing and perhaps facilitating the targeting of harmful, adult-themed ads to our nation's children. so could you please explain to me, ms. haugen, how these ads with a target audience of 13 to 17-year-old children, how would they possibly be approved by
11:27 am
facebook, and is a.i. involved in that >> i did not work directly on the ad approval system. what was resonant for me about your testimony is that facebook has a deep focus on scale. scale is, can we do things cheaply for a huge amount of people, which is partly why they rely on a.i. it's possible that none of those ads were seen by a human. what we have seen from repeated documents within my disclosures is that facebook's a.i. systems only catch a very tiny minority of offending content. best case scenario, in the case of something like hate speech, at most they will ever get 10 to 20%. in the case of children, that means drug paraphernalia ads like that -- it's likely, if they rely on computers and not humans, they will never get more than 10 to 20% of those ads. >> understood. mr. chairman, i have one minor followup question which should be easy to answer.
11:28 am
>> go ahead. >> so, while facebook may claim that it only targets ads based on age, gender and location -- even though these things seem to counteract that, but let's set that aside for a minute -- and that they're not basing ads on specific categories, does facebook still collect interest category data on teenagers, even if they aren't at that moment targeting ads at teens based on those interest categories >> i think it's very important to differentiate between what targeting advertisers are allowed to specify and what targeting facebook may learn for an ad. let's imagine you had some text on an ad. it would likely extract out features that it thought were relevant for that ad. for example, in the case of something about partying, it would learn that partying is a concept. i'm very suspicious that personalized ads are still not
11:29 am
being delivered to teenagers on instagram, because the algorithms learn correlations. they learn interactions, where your party ad may still go to kids interested in partying, because facebook almost certainly has a ranking model in the background that says this person wants more party-related content. >> interesting. thank you. that's very helpful. and what that suggests to me is that while they are saying they're not targeting teens with those ads, the algorithm might do some of that work for them, which might explain why they collect the data even while claiming they're not targeting those ads in that way. >> i can't speak to whether or not that's the intention, but the reality is it's very, very, very difficult to understand these algorithms today, and over and over and over again we saw these biases the algorithms unintentionally learned. so, yeah, it's very hard to disentangle these factors as long as you have engagement-based ranking. >> thank you, ms. haugen. >> thank you very much, senator
11:30 am
lee. senator markey. >> thank you, mr. chairman, very much. thank you, ms. haugen. you are a 21st century american hero >> thank you. >> warning our country of the danger for young people, for our democracy, and our nation owes you just a huge debt of gratitude for the courage you're showing here today. so thank you. ms. haugen, would you agree that facebook actively seeks to attract children and teens on to its platforms? >> facebook actively markets to children under the age of 18 to get on instagram, and definitely targets children as young as 8 to be on messenger kids. >> an internal facebook document from 2020 that you revealed reads, why do we care about tweens? they are a valuable but untapped
11:31 am
audience. so facebook only cares about children to the extent that they are of monetary value. last week facebook's global head of safety, antigone davis, told me that facebook does not allow targeting of certain harmful content to teens. ms. davis stated, we don't allow weight loss ads to be shown to people under the age of 18. yet a recent study found that facebook permitted targeting of teens as young as 13 with ads that showed a young woman's thin waist, promoting websites that glorify anorexia. ms. haugen, based on your time at facebook, do you think facebook is telling the truth? >> i think facebook has focused on scale over safety, and it is likely that they are using artificial intelligence to try to identify harmful ads without allowing public oversight to see what is the actual effectiveness of those
11:32 am
safety systems. >> you unearthed facebook's research about its harm to teens. did you raise this issue with your supervisors >> i did not work directly on anything involving teen mental health. this research is freely available to anyone in the company. >> ms. davis testified last week, quote, we don't allow tobacco ads at all. we don't allow them to children either. we don't allow alcohol ads to minors. however, researchers also found that facebook does allow targeting of teens with ads on vaping. ms. haugen, based on your time at facebook, do you think facebook is telling the truth >> i do not. i have context on that issue. i assume that if they are using artificial intelligence to catch those vape ads, unquestionably ads are making their way through. >> okay. so from my perspective, listening to you and your incredibly courageous testimony: time and time again facebook says one thing and does
11:33 am
another. time and time again facebook fails to abide by the commitments that they have made. time and time again facebook lies about what they are doing. yesterday facebook had a platform outage, but for years it has had a principles outage. its only real principle is profit. facebook's platforms are not safe for young people. as you said, facebook is like big tobacco, enticing young kids with that first cigarette, that first social media account designed to hook kids as users for life. ms. haugen, your whistleblowing shows that facebook uses harmful features that push manipulative influencer marketing, amplify harmful content to teens, and last week in this committee facebook wouldn't even commit to not using these features on 10-year-olds. facebook is built on computer codes of misconduct.
11:34 am
senator blumenthal and i have introduced the kids internet design and safety act, the kids act. you have asked us to act as a committee, and facebook has scores of lobbyists in the city right now, coming in right after this hearing to tell us we can't act, and they have been successful for a decade. >> uh-huh. >> in blocking this committee from acting. so, let me ask you a question. the kids internet design and safety act, or the kids act -- here is what the legislation does. it includes outright bans on children's app features that, one, quantify popularity with likes and follower counts; two, promote influencer marketing; and, three, amplify toxic posts. and it would prohibit facebook
11:35 am
from using its algorithms to promote toxic posts. should we pass that legislation? >> i strongly encourage reforms that push us towards human-scale social media and not computer-driven social media. those amplification harms are caused by computers choosing what's important to us, not our friends and family, and i encourage any system that children are exposed to to not use amplification systems. >> so you agree that congress has to enact these special protections for children and teens that stop social media companies from manipulating young users and threatening their well-being, to stop using its algorithm to harm kids. you agree with that? >> i do believe congress must act to protect children. >> and children and teens also need a privacy online bill of rights. i'm the author of the children's online privacy protection act of 1998, but it's only for kids under 13, because the industry stopped me from making it age 16 in 1998
11:36 am
because it was already their business model. but we need to update that law for the 21st century. tell me if this should pass: one, create an online eraser button so that young users can tell websites to delete the data they have collected about them, two, give young teens under the age of 16 and their parents control of their information, and three, ban targeted ads to children. >> i support all those actions. >> thank you. and, finally, i've also introduced the algorithmic justice and online platform transparency act, which would, one, open the hood on facebook and big tech's algorithms so we know how facebook is using our data to decide what content we see and, two, ban discriminatory
11:37 am
algorithms that harm vulnerable populations online, like showing employment and housing ads to white people but not to black people in our country. should congress pass that bill? >> algorithmic bias issues are a major issue for our democracy. during my time at pinterest i became very aware of the challenges of, like i mentioned before, how difficult it is for us to understand how these algorithms actually act and perform. facebook is aware of complaints today by people like african-americans saying that reels doesn't give african-americans the same distribution as white people, and until we have transparency and the ability to confirm ourselves that facebook's marketing messages are true, we will not have a system that is compatible with democracy. >> thanks. and i thank senator lee. i agree with you and your line of questioning. i wrote facebook asking them to explain that discrepancy because
11:38 am
facebook, i think, is lying about targeting 13 to 15-year-olds. so, here is my message for mark zuckerberg: your time of invading our privacy, promoting toxic content and preying on children and teens is over. congress will be taking action. you can work with us or not work with us, but we will not allow your company to harm our children and our families and our democracy any longer. thank you, ms. haugen. we will act. >> thanks, senator markey. we're going to turn to senator blackburn and then we'll take a break. i know that there's some interest in another round of questions. maybe -- well, maybe we'll turn
11:39 am
to senator -- >> no, we've got cruz and scott. >> we'll come back after. >> mr. chairman, i have to go to sit in the chair starting at noon today. >> why don't we turn to you. you have one question. >> i do. i have one question. this relates to what mr. markey was asking. does facebook ever employ child psychologists or mental health professionals to deal with these children online issues that we're discussing? >> facebook has many researchers with ph.d.s. i assume some of them are -- i know that some have psychology degrees. i'm not sure if they're child specialists. facebook also works with external agencies that are specialists at children's rights online. >> senator, and then at the conclusion of your questions we'll take
11:40 am
a break and come back at noon. >> thank you, mr. chairman, and i appreciate the indulgence of the committee. ms. haugen, last week the committee heard directly from ms. davis, the global head of safety for facebook. during the hearing the company contested its own internal research as if it does not exist. yes or no, does facebook have internal research indicating that instagram harms teens, particularly harming perceptions of body image, which disproportionately affects young women? >> yes, facebook has extensive research on the impacts of its products on teenagers, including young women. >> thank you for confirming these reports. last week i requested facebook make the basis of this research, the data set minus any personally identifiable information, available to this committee. do you believe it is important for transparency and safety that facebook release the basis of
11:41 am
this internal research, the core data set, to allow for independent analysis? >> i believe it is vitally important for our democracy that we establish mechanisms where facebook's internal research must be disclosed to the public on a regular basis, and that we need to have privacy-sensitive data sets that allow independent researchers to confirm whether or not facebook's marketing messages are actually true. >> beyond this particular research, should facebook make its internal primary research, not just secondary slide decks of cherry-picked data but the underlying data, public by default? can this be done in a way that respects user privacy? >> i believe in collaboration with academic and other researchers we can develop privacy-conscious ways of exposing radically more data than is available today. it's important, to understand how algorithms work and how facebook shapes the information we get to see, that we have these publicly available for scrutiny. >> is facebook capable of making the right decision here on its
11:42 am
own, or is regulation needed to create real transparency at facebook? >> we need action from congress. >> last week i asked ms. davis about shadow profiles for children on the site and she answered that no data is ever collected on children under 13 because they are not allowed to make accounts. this tactically ignores the issue. facebook knows children use their platform. however, rather than seeing this as a problem to be solved, facebook views this as a business opportunity. yes or no, does facebook conduct research on children under 13 examining the business opportunities of connecting these young children to facebook's platforms? >> i want to emphasize how vital it is that facebook publish this research. i am aware that facebook is doing research on children under the age of 13, and those studies are included in my disclosure. >> you have shared your concerns
11:43 am
about how senior management at facebook has continuously prioritized revenue over potential user harm and safety, and i have a few questions on facebook's decision making. last week i asked ms. davis, quote, has facebook ever found a change to its platform would potentially inflict harm on users but facebook moved forward because the change would also grow users or increase revenue? ms. davis said in response, quote, it's not been my experience at all at facebook. that's just not how we would approach it. yes or no, has facebook ever found a feature on its platform harmed its users but the feature moved forward because it would also grow users or increase revenue? >> facebook likes to paint that these issues are really complicated. there are a lot of simple issues, for example, requiring someone to click through on a link before you reshare it. that's not a large imposition, but it does decrease growth a
11:44 am
tiny little amount, because in some countries reshares make up 35% of all the content that people see. facebook prioritized that content on the system, the reshares, over the impacts to misinformation, hate speech or violence incitement. >> did these decisions ever come from mark zuckerberg directly or from other senior management at facebook? >> we have a few choice documents that contain notes from briefings with mark zuckerberg where he chose metrics defined by facebook, like meaningful social interactions, over changes that would have significantly decreased misinformation, hate speech and other inciting content. >> this is the reference you shared earlier, april of 2020. >> the soft interventions. >> facebook appears to be able to count on the silence of its work force for a long time, even as it knowingly continued practices and policies that continue to cause and amplify harm. facebook content moderators called out, quote, a culture of fear and secrecy within the
11:45 am
company that prevented them from speaking out. is there a culture of fear at facebook around whistleblowing and external accountability? >> facebook has a culture that emphasizes that insularity is the path forward, that if information is shared with the public, it will just be misunderstood. and i believe that relationship has to change. the only way that we will solve these problems is by solving them together, and we will have much better, more democratic solutions if we do it collaboratively than in isolation. >> one final question: is there a senior-level executive at facebook, like an inspector general, who is responsible for ensuring complaints from facebook employees are taken seriously and that employees' legal, ethical and moral concerns receive consideration with the real possibility of instigating change to company policies? >> i'm not aware of that role, but the company is large and it may exist. >> i appreciate that. it's my understanding that there's a gentleman by the name
11:46 am
of roy austin, who is the vice president of civil rights, who described himself as an inspector general, but he does not have the authority to make these internal conflicts public. the oversight board was created by facebook to review moderation policies related to public content specifically. it was not created to allow employees to raise concerns. so, again, another area of interest i believe that we have to act on. i thank you for coming forward today. >> my pleasure. happy to serve. >> the committee is in recess. you have been listening to testimony from facebook whistleblower and former product manager frances haugen in front of the senate subcommittee on consumer protection. we'll take it back for a few moments here. a detailed, pretty remarkable conversation in front of a fairly receptive audience about engagement-based ranking and what she says was mark zuckerberg's refusal to alter
11:47 am
algorithms in april of 2020, saying, quote, there is no one currently holding mark accountable but himself. one thing is for sure, julia boorstin, jon fortt, there are very few willing to publicly defend facebook today. julia, you first. >> no, this was pretty much a full-court press here, attacks on facebook and making it very clear that mark zuckerberg is where the buck stops, this idea that mark zuckerberg is responsible for what is going on at this company. frances haugen making it very clear also that mark zuckerberg had the choice to be able to minimize the divisiveness that facebook can cause and that he chose to prioritize the metrics and the data that would be more profitable and cause more engagement. and i think it's really interesting here that, you know, a number of people on the committee are calling for mark zuckerberg to come forward, to be able to talk to him directly about this, and also the sense
11:48 am
that there is definitely a need for more legislation, privacy legislation, more clear legislation around children. you know, coppa has been around for well over 20 years now, the privacy legislation for children, but the idea is that maybe that needs to be expanded, and it does feel like there's a bipartisan push to hold facebook to account, jon. >> yeah. you know, this is remarkable testimony, i think a very important moment, for a few reasons. first, i think there's been a wholesale shift that we're witnessing in the conversation about algorithms, about data, child safety, section 230. frances haugen, i mean, the bipartisan embrace of her here and the fact, carl, that she's been so measured in the way she talked about facebook. she has not been attacking facebook per se as much as the lawmakers in both parties have, and i think frances haugen, she might just be the unofficial national algorithm czar right
11:49 am
now. they're bringing their legislative ideas before her, asking her what she thinks about them. it reminds me, carl, of the way they ask the fed chair about economic policy, and the fed chair usually tries to wiggle and not answer the question, except she's answering the question, and they're saying, oh, we want to follow up with you and ask you about this legislation that we have or this idea that you have. the conversation has gotten a lot smarter, and in a way that facebook is not engineering, and that should be a concern to facebook. >> indeed. >> i was impressed by -- yeah. yeah, carl, just really impressed by how smart the questions were and this idea that mark zuckerberg was presented with soft interventions, ways to, as she said, make the algorithm less twitchy. there needs to be more human moderation and less of a reliance on this sort of twitchy viral algorithm to have the quick sharing of data and perhaps the quick sharing of
11:50 am
misinformation, and i think that it was really interesting that she said that he was presented with these things. the questions are being asked in a very smart way, carl. >> yes. to jon's point, very specific discussion of inside industry terms like downstream msi, and the argument she made that not only is it tied to employee bonuses, but obviously tied to profitability. take a listen to this. >> mark was presented with these options and chose to not remove downstream msi in april of 2020, even though he -- even just isolating at-risk countries, countries at risk of violence -- if it had any impact on the overall msi metric. >> which in translation means less money. >> let's bring in nilay patel, editor in chief of the verge. i'm curious about your thoughts.
11:51 am
we talked about the receptive committee, her recommendation of pushing section 230 to address some of these ranking-based algorithms. >> first, i want to agree with all of you. frances haugen is an utterly compelling witness. she's confident and in command. she's clear when she doesn't know the answer to a question. she's not overstepping her expertise, and i think that is a powerful move here. her idea about amending 230 so that facebook does not have an exemption, because they're in control of it, that's powerful and it's been taken up on the hill. i think that's great. more importantly, facebook is presenting us with false choices. we need to break out of our regulatory frameworks and get away from facebook's focus on 230 and privacy. if anything comes out of this, it's facebook losing control of
11:52 am
the frame of the debate and the narrative, which they have controlled for a long time. now here you have a compelling facebook insider with receipts who is saying, look, all of the criticisms that people have leveled against facebook for years are real, and facebook's hemming and hawing about tradeoffs, about privacy and free speech, are in facebook's control. i think that is the takeaway from this hearing, and there's more hearing to come, which has got to suck for facebook. but here you have the beginning of the 230 change, which lawmakers are interested in, and permission to go beyond that in a way that has not existed before. >> and nilay, we've been watching to hear what facebook's response is to the hearing, and andy stone of facebook communications did tweet at the top of the hour, pointing out the fact that frances haugen did not work on child safety or instagram or research these issues and has no direct knowledge of the topic from her
11:53 am
work at facebook. to what degree can she be discredited when she's worked for four different types of social networks? >> they will try to discredit her in various ways. she has the support of lawmakers. she largely has the support of the media. i'm assuming after today's coverage she'll have the support of the public. she's given a command performance. facebook's answer is to show us the data. if anything will discredit the testimony, it is the data, allowing us to see whether her claims are true or not, just sharing it with the world and saying, look, look at what facebook is doing, that data doesn't show what you say. >> and let me just say, facebook coming out against frances haugen in that way really strikes me as disingenuous. it makes me trust the company less. she never said she was an expert on that. she just cited the data from
11:54 am
facebook's own research. so they're digging themselves a hole here, julia. >> well, yeah. i think they'll have to be very careful about how they approach her as an expert, and the fact that she was so methodical about the way she collected information and brought it out of the company, knowing that she was going to be criticized and that she would otherwise have no chance of effecting any change. and what i think is so interesting is she is urging facebook to share its internal research and share its internal data so that data can be used to help better regulate the company, saying facebook cannot regulate itself, and what we've seen over the past several years has proven that out. what i'm wondering now, nilay, we both covered this company so closely, what is facebook thinking right now? what do they think they have to do
11:55 am
to combat the firestorm over everything and how they're prioritizing profits over safety? >> my sense of where the company is at is they kind of feel like they've lived through this before and they'll live through it again, and the real thing that facebook cares about is whether it can recruit well, whether advertisers keep showing up and whether they can keep turning out the quarterly performances they've been turning out. they're in the middle of a perfect storm. apple's changes in ios 14 and 15 have led them to release guidance saying next quarter's revenues will be lower. they're under fire on capitol hill and in various courtrooms, and attorneys general want facebook broken up. there is a perfect storm of activity around facebook right now that will keep them from hiring some of the people, that will keep them from retaining some of the best people. that might be a long-term problem for them, but that's the
11:56 am
one they've had to focus on. while they believe they have weathered it before and they'll weather it again, i'm not sure they'll weather this one without regulation. you heard senator markey say you're either on our team or you're not, and that's as strong as it gets with a bipartisan consensus. >> now we want to bring in facebook shareholder natasha lamb and new york times senior technology correspondent mike isaac. in the couple of minutes we have left, natasha, what is your response as an investor to frances haugen, the recommendations that she's making about how facebook should be regulated, and the insights that she's given into the criticisms and how facebook's been led thus far? >> well, i mean, wow. what stunning testimony. and i think the difference between what we've seen in the past and what we've seen today is that the cat is out of the bag.
11:57 am
for the last five years as investors we've tried to get the company to confront these issues. we haven't gotten far, frankly, because of that unilateral control by zuckerberg, and here you have haugen imploring the government to take action because the company has not. we wanted the company to take action in order to protect itself, to protect its growth path, to have a safe product so that regulators didn't need to get involved, but here we are. the cat's out of the bag, and there is so little partisan divide. you know, there was an incredible moment of reaching across the aisle by senators blumenthal and moran when they said we need to enact legislation on this, we need to protect our children, and i think that that was a key moment of this testimony. and in terms of regulation, this narrative that has been pushed, of a false choice that you're giving up free speech for safety, is
11:58 am
just fundamentally untrue, and haugen has made that clear. this is an issue from the ground up, of an algorithm that amplifies negativity. >> an issue from the ground up and an issue from the top down, about how the buck stops with mark zuckerberg. mike, you've reported so extensively for "the new york times." what do you think the result of today's testimony is going to be? do you think facebook is going to take action as we await some of this potential regulation? >> yeah. you know, i've been asking myself the same question as this has gone on. if you all remember, when we were covering cambridge analytica it was leading newspapers and leading nightly broadcasts, and thus far we haven't seen any real consequential bills, if any, be made into law as a result of that.
11:59 am
one thing we did see is the ftc settlement with facebook that sort of instituted new privacy practices. i'm very curious about inside the company. we just did some reporting, and there was a question as to whether they would want to carry out any of this research at all, particularly because any time people get their hands on it, it's used against the company. and so they're still mulling that at the top right now. nick clegg, i was told, is advocating for it, but it's still in debate. so i think there is a real tension inside over whether facebook feels, are we going to continue giving people this information, even the little information we give them, because it's so harmful to our reputation with the public. and that's still an open question. >> wow, that's fascinating, mike. we apologize for keeping it brief, but we're coming up to the top of the hour. nilay patel, thank you. and facebook is up about 2%.
12:00 pm
we should mention the market bounce here. santoli describes it as respectable as we are above monday's intraday high, the best day for the s&p since july 20 and the best day for the nasdaq since may 20th. we've worked our way through some data like the ism. we'll get adp tomorrow as we build up to the jobs number on friday. let's get to the half. that does it for "tech check." all right, carl, thanks so much. welcome to "the halftime report." i'm scott wapner. why one of our committee members is buying tech today, you'll find out who and what. we will also debate the state of your money and the markets. joining me for the hour today, stephanie link, jim lebenthal and jon najarian, co-founder of marketrebellion.com. we have a nice tick-up in stocks today. take a look at the dow, s&p, nasdaq, everything's positive. nasdaq's getting the biggest bounce from a percentage standpoint, almost .5%.
