Public Affairs Events | C-SPAN | December 1, 2021, 5:00am-6:07am EST

... hate speech that it finds on the platform. the documents that you've published suggest it finds only about 3% of the hate speech that's there. does that mean the claims around those numbers are essentially meaningless?
>> i think it's important to understand there's a pattern of behavior at facebook, which is that they're very good at dancing with data. if you go into the transparency reports, the fraction they're presenting is not total hate speech caught versus total hate speech that is there. the fraction they're presenting is the stuff the robots got, divided by the stuff the robots got plus what humans reported and they took down. it's true that 97% or so of what they take down happens because of their robots. but that's not the question we want answered. the question we want answered is: did you take the hate speech down? and the number i've seen there is more like 3 to 5%, though i wouldn't be surprised if there was variation across the documents.
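To make concrete how a 97% figure and a 3-5% figure can coexist, here is a toy calculation; the numbers are invented for illustration and are not Facebook's internal figures:

```python
# Toy numbers (invented) to contrast the two statistics described above.
total_hate_speech = 20_000   # all hate speech actually on the platform
caught_by_ai = 970           # removals initiated by classifiers ("robots")
caught_by_reports = 30       # removals initiated by user reports

removed = caught_by_ai + caught_by_reports

# The headline figure: share of *removals* that were AI-initiated.
proactive_rate = caught_by_ai / removed             # 0.97

# The question regulators want answered: share of *all* hate speech removed.
takedown_rate = removed / total_hate_speech         # 0.05

print(f"proactive rate: {proactive_rate:.0%}")      # 97%
print(f"actual takedown rate: {takedown_rate:.0%}") # 5%
```

Both statistics are arithmetically true at once; they simply have different denominators.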
>> part of what's being proposed is essentially the creation of an independent regulator for the tech sector, and we need to know what the right questions to ask are as well, when the statistics are misleading.
>> part of why i came forward is that i know i have a specific set of expertise: i've worked at four social networks, i'm an algorithmic specialist, i worked at google and at pinterest, and i have an understanding of how ai can unintentionally behave. facebook has not solved for polarizing content. part of the reason i came forward is that i'm extremely, extremely worried about the choices facebook has made and how they play out in broad ways. i'm specifically worried about engagement-based ranking, which facebook has acknowledged before — mark zuckerberg put out a white paper saying engagement-based ranking is dangerous unless the ai can take out the bad things, and they're catching only 0.8% of violence-inciting content. engagement-based ranking prioritizes and amplifies that content. i'm deeply concerned about their underinvestment in non-english languages, and how they mislead the public by saying they support 50 languages when, in reality, most of those languages get a tiny fraction of the safety systems that english gets. u.k. english is sufficiently different that i would not be surprised if the safety systems developed primarily for american english were under-enforcing in the u.k. — and facebook hasn't disclosed those differences. and i'm deeply concerned about the false choices facebook presents. they routinely try to reduce the discussion to things like: you can have transparency or privacy — which do you want? if you want safety, you have to have censorship. when in reality they have lots of non-content-based choices that would shave off half a percentage point of growth, a percentage point of growth — and facebook is unwilling to give up those slivers for our safety. and i came forward now because
now is the most critical time to act. when we see something like an oil spill, that oil spill doesn't make it harder for society to regulate oil companies. but right now the failures of facebook are making it harder for us to regulate facebook.
>> looking at the way the platform is moderated today, if there's no change, do you think it makes it more likely we'll see events like the insurrection in washington on january 6th — that we'll see more of those events?
>> i have no doubt that the events we're seeing around the world — things like myanmar and ethiopia — are the opening chapters, because engagement-based ranking amplifies divisive, polarized content. facebook says only a sliver of our content is hate, only a sliver is violent. one, i don't know if i trust those numbers; two, it's hyper-concentrated in 5% of the population — and you only need 3% of the population on the streets to have a revolution. that's dangerous.
>> that hyper-concentration touches on the areas you worked on in particular, including facebook groups. it's been said by facebook executives that the only way you could drive content through the platform is advertising. you say that's not true, and that groups are increasingly used to shape that experience. we talk a lot about the impacts of algorithmic biases, but to what extent do you think groups are shaping the experience of many people on facebook?
>> groups play a huge and critical role in driving the experience on facebook. when i worked on civic misinformation — this is based on recollection, i don't have the documents — i believe something like 60% of the content was in groups. and i think it's important for this committee to know that facebook has been trying to extend sessions — to have each user consume longer sessions and
more content. the only way they can do that is by multiplying the content on the platform, and the ways they do that are things like groups and reshares. if i put a post into a half-million-person group, then, combined with the rankings, that group might produce 500 or a thousand pieces of content a day, and only three get delivered to any given user — and if the ranking favors extreme, polarizing, divisive content, it's like viral variants. those groups put out lots and lots of content, and the content most likely to spread is what goes out.
>> and 60% of the people who joined facebook groups promoting extremist content did so at facebook's active recommendation — and this is clearly something facebook has researched.
>> i don't know the exact actions taken in the last six months or year. but regarding extremist groups being actively promoted to users: facebook shouldn't be able to just say, this is a hard problem and we're working on it. they should have to articulate, here is our five-point plan — because when facebook operates in a non-transparent way, it leads to more tragedies.
>> do you think that --
>> i don't know if they have a five-point plan.
>> or any plan?
>> yeah, i don't know.
>> to what extent should regulators be considering groups? from what you're saying, they're significant in engagement — engagement is the problem in the way facebook has designed the platform, and groups must be a big part of that, too.
>> part of what's dangerous about groups is — we sometimes talk about this idea of, is this an individual problem or a societal problem? one of the things that happens
in aggregate is that people who have mainstream interests get pushed to extreme interests: people get pushed toward the radical left and the radical right; if you're looking for healthy recipes, you get pushed to anorexia content. another thing that happens with groups is that people see echo chambers that create social norms. if i'm in a group full of covid misinformation and someone says covid vaccines work, encourages people to get vaccinated, they get completely pounced upon, torn apart. people learn that certain ideas are acceptable or unacceptable, and when that context is around hate, you see a normalization of hate, a normalization of dehumanizing others — and that's what leads to violence.
>> and groups, particularly large groups with hundreds of thousands of members in them — should they be easier for the platform to moderate, because people are gathering in a common place?
>> i strongly recommend that, above a certain size, groups should be required to provide their own moderators and moderate every post. this would naturally, in a content-agnostic way, regulate the impact of those large groups: if a group is valuable enough, it will have no trouble recruiting volunteers. but if that group is just an amplification point — as we see with foreign information operations using groups like this, reality-hacking by borrowing viral content from other places to fill the group — then it won't. if you were to launch an advertising campaign with misinformation in it, we'd at least have a credit card to track you with. but if you start a group and invite a thousand people every day — the limit is something like 2,200 people you can invite per day — you can build up that group, and your content will land in their news feeds for a month, longer if they engage with it.
and things like that become very, very dangerous, and they drive outsized impact.
>> so if, say, an agency wanted to influence what a group of people on facebook would see, you'd probably set up a group to do that, more than you would use facebook pages and advertising?
>> it's definitely a technique used by information operations. another one that's used — and it's quite dangerous — is creating a new account and, within five minutes, posting into a million-person group. right? there's no accountability, and there's no trace. you can find a group to target any interest you want, very, very finely. even if you removed microtargeting from ads, you could still microtarget with a group.
>> and again, what do you think the company's strategy is for dealing with this? there were changes made to facebook groups, i think in 2017 or 2018, to create more of a community experience, and i think they were good for engagement. but it seems, similar to other changes the company put in place, that they were good for engagement but not necessarily good for safety.
>> i think we need to move away from binary choices. with groups, it's a similar dynamic: once a group gets above a certain size, maybe 10,000 people, you need to start moderating it, and that alone creates natural rate limits. the question is where we add friction to the system so it's safe in every language — you don't need ai to find the bad content.
>> in your experience, is facebook testing its systems all the time? does facebook experiment with the way the system works, with how it can increase engagement, with how content gets distributed? we know it has experimented with how news is favored. how does facebook work in
terms of experimenting with these tools?
>> facebook is continuously running many experiments in parallel on little slices of the data it has. i'm a strong proponent that facebook should have to publish a feed of all the experiments they're running — even just the experiment id — and even just seeing the results data would allow us to establish patterns of behavior, because the real thing we're seeing here is facebook accepting little tradeoffs: how much harm in exchange for how much growth? right now we can't benchmark it — you're running these experiments, but are you running them for the public good? if we had that data, we could see patterns of behavior and trends.
>> you worked in the civic integrity team. at facebook, who would you report concerns to?
>> this is a huge weak spot. if i drove a bus in the united states, there would be a phone number in my break room saying: if you see something that endangers public safety, call this number. someone — like the department of transportation — would take you seriously and listen to you. when i worked on counterespionage, i saw things i was concerned about from a national security perspective, and i had no idea how to escalate them, because i didn't have faith in my chain of command. i didn't believe they would take it seriously, and we were told just to accept under-resourcing.
>> you would report to your line manager? would it then be up to them whether to escalate it?
>> i flagged repeatedly when i worked on civic integrity that the teams were critically understaffed, and i was told: at facebook, we accomplish unimaginable things with far fewer resources than anyone would think possible. there's a culture that lionizes that, and in my opinion it's irresponsible — the idea that the person who can figure out how to move the metric by cutting the most corners is good.
the reality is, it doesn't matter that facebook is spending $14 billion a year on safety; if they should be spending $25 billion or $30 billion, that's the real question. and right now there are no incentives internally: if you make noise saying we need more help, you will not get rallied around, because everyone is underwater.
>> in many organizations, ultimately, that sort of culture exists and people inside the organization don't share problems with the people at the top. what do you think people like mark zuckerberg know about these things?
>> i think it's important to note that all facts are viewed through a lens of interpretation, and there's a pattern across a lot of the people who run the company, the senior leaders: this may be the only job they've ever had. mark came in when he was 19, and he's still ceo. there are vps and directors for whom this is the only job they've ever had. the people who have been promoted were the people who focused on the goals they were given, and not necessarily the ones who asked questions around public safety. and i think there's a real thing where people are exposed to all this and say, look at the good we're doing. yes, that's true — but we didn't invent ethnic violence, and that's not the question. the question is, what is facebook doing to amplify or expand hate, or violence?
>> and you think it's making hate worse.
>> thank you for coming and talking to us. first of all, just on that last discussion you were having: you talked about how, if you were calling for help, you wouldn't necessarily get that help. would it be the same if you were working in pr or communications at facebook?
>> i never worked in pr or communications. but i was shocked to hear recently that facebook wants to double down on the metaverse, and that they're hiring 10,000 engineers in europe to work on the metaverse. i thought, wow, do you know what we could have done with safety if we had 10,000 more engineers? and that focus is short-term thinking: facebook's own research has shown that when people have worse integrity experiences on the site, they're less likely to retain. i think regulation could be good for facebook's long-term success, because it would force facebook to make it more pleasant to be on facebook, and that could be better for the long-term growth of the company.
>> let me go back to the discussion about facebook groups — by which we're essentially talking about private groups. if you were asked to be the regulator of a platform like facebook, how do you get transparency about what's going on in private groups, given that they're private?
>> this is about where the bar really sits. maybe we have a conversation as a society about how many people need to see something before it's no longer truly private. is that number 10,000? is it 25,000? because i think there's an argument facebook will make — that there might be a sensitive group which someone posts into, and we wouldn't want to share that even if 25,000 people saw it — which i think is actually more dangerous: people are lulled into a sense of safety, believing no one is going to see their hate speech, or something more sensitive, like maybe they haven't come out yet. that's dangerous, because those spaces are not safe. when 100,000 people see something, you don't know who saw it or what they might do. i'm a big proponent of pointing out that both google and twitter are more transparent than facebook.
people publish papers about google, and because google knows this happens, they staff software engineers who work on search quality. twitter knows that 10 percent of all public tweets go out on the firehose, and people analyze those and find information-operation networks. because twitter knows someone is watching, they behave better. and i think with facebook and private groups, there's some bar where we say enough people have seen it that it's not private — we should have a firehose like twitter's. if we want to catch national security threats, like information operations, we need 10,000 researchers looking at it, and accountability on things like bias, or on understanding whether or not our children are safe.
>> that's helpful. just on twitter and algorithmic bias: they published a report on friday suggesting there was an algorithmic bias politically. do you think that's unique to twitter, or would you say it would also be the case at facebook — implicit in the way these platforms, with all of their algorithms, are designed to optimize for clicks? is there something about certain types of political content that makes it more extreme, something endemic across the social media companies?
>> i am not aware of any research that demonstrates a political bias at facebook. i am familiar with lots of research that says the way engagement-based ranking was designed — facebook calls it meaningful social interactions, though the 'meaningful' could have been hate speech and bullying up until 2020; it's really just social-interaction ranking — i've seen lots of research saying that kind of ranking, engagement-based ranking, prioritizes polarizing, extreme, divisive content.
it doesn't matter if it's on the left or the right: it pushes you to the extremes, and it fans hate. anger and hate is the easiest way to grow on facebook. people figure out the tricks for optimizing facebook: good actors, good publishers are already publishing all the content they can, but bad actors have an incentive to play the algorithm, and they figure out all the ways to optimize facebook. so the current system is biased towards bad actors, and biased towards the people who push others to the extremes.
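A minimal sketch of how an engagement-based scorer can favor provocative posts regardless of their substance. The weights and event counts below are invented for illustration; they are not Facebook's actual meaningful-social-interactions coefficients:

```python
# Hypothetical engagement-based scoring: posts are ranked purely by how much
# interaction they attract, with no notion of content quality or harm.
ENGAGEMENT_WEIGHTS = {"like": 1, "comment": 5, "reshare": 10, "angry": 2}

def engagement_score(post_events: dict) -> int:
    """Sum weighted interactions; higher score means wider distribution."""
    return sum(ENGAGEMENT_WEIGHTS.get(kind, 0) * n
               for kind, n in post_events.items())

measured_post = {"like": 120, "comment": 4}                         # calm, informative
provocative_post = {"like": 30, "comment": 80, "reshare": 40, "angry": 90}

# The provocative post wins the ranking, regardless of its content.
print(engagement_score(measured_post))     # 140
print(engagement_score(provocative_post))  # 1010
```

Under a scheme like this, a publisher who provokes comment-thread fights outranks one who informs quietly, which is the bias toward bad actors described above.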
>> thank you. currently we have a draft bill which focuses on individual harm rather than societal harm. given the work you've done around democracy as part of your work at facebook, do you think it's a mistake to omit societal harm?
>> i think it's a grave danger to democracies around the world to omit societal harm. i looked at the consequences of the choices facebook was making in the global south, and i believe situations like ethiopia are just part of the opening chapters of a novel that is going to be horrific to read. we have to care about societal harm, not just for the global south but for our own societies. like i said before: when an oil spill happens, it doesn't make it harder for us to regulate oil companies. but right now facebook is closing the door on our being able to act. we have a slight window of time to regain people-driven control over ai, and we have to take advantage of this moment.
>> my final question — and thank you. you've analyzed in a lot of detail the dangers around how different user journeys work. is there any relationship between paid-for advertising and users being moved into some of these dangerous private groups, and possibly then being moved into messaging services, into encrypted messaging? are there user journeys like that we should be concerned about, particularly given that paid-for advertising is currently excluded from this bill?
>> i'm concerned about excluding paid-for advertising from bills like this. ads are priced partly on the likelihood that people will like them, share them, click on a link — interact with them in some way. an ad that gets more engagement is a cheaper ad. and we've seen over and over again in facebook's research that it's easier to provoke people to anger than to empathy or compassion — so we're effectively subsidizing anger. it's cheaper to run an angry ad than a compassionate, empathetic one. there's a need for discussing disclosures: what rates people are paying for ads, full transparency on the ad stream, and understanding what those biases are and how ads are targeted. in terms of user journeys from ads to extreme groups, i don't have documents regarding that, but i can imagine it happening.
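A hypothetical sketch of the pricing dynamic she describes, in which predicted engagement acts as a discount on an ad's effective price. The formula and numbers are assumptions for illustration, not Facebook's actual auction:

```python
# Engagement-subsidized pricing: the more interaction an ad is predicted to
# get, the less the advertiser effectively pays per impression.
def effective_cost_per_impression(bid: float, predicted_engagement: float) -> float:
    """Toy model: engagement discounts the bid; values are illustrative."""
    return bid / (1.0 + predicted_engagement)

calm_ad = effective_cost_per_impression(bid=10.0, predicted_engagement=0.2)
angry_ad = effective_cost_per_impression(bid=10.0, predicted_engagement=1.5)

print(f"calm ad:  {calm_ad:.2f} per impression")   # 8.33
print(f"angry ad: {angry_ad:.2f} per impression")  # 4.00
```

If anger reliably generates more engagement, a structure like this makes the angry ad systematically cheaper — the subsidy she objects to.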
>> thank you, frances, and thank you for being here and taking a personal risk to be here — we're grateful. i want to ask a number of questions that speak to the fact that this system is entirely engineered for a particular outcome. maybe you could start by telling us: what is facebook optimized for?
>> i think it's not necessarily obvious to us as consumers of facebook, but facebook is actually a two-sided marketplace — it's driven by producers and consumers, and you can't consume content without someone producing it. when they moved to engagement-based ranking, they said, we believe it's important for people to interact with each other; and a large part of what's disclosed in the documents is that a big factor motivating the change was that people were producing less content. facebook has run things called producer-side experiments, where they artificially give people more distribution to see what the impact on your future behavior is of getting more likes, more reshares — because they know that when you get those little hits, you're more likely to produce more content. so while facebook has said repeatedly that it's not in their business interest to optimize for hate, not in their business interest to give people bad experiences, it is in facebook's interest to keep the flywheel churning: you produce content, others engage with it, you keep coming back and look at ads along the way. it allows the wheel to keep turning.
>> that leads to my next question. i was struck not so much by the harms — in a funny way, the documents just gave evidence for what a lot of people have been saying, and experiencing, for a long time. what was super interesting was that again and again the documents show facebook employees saying, oh, you could do this, you could do that — and i think a lot of people don't understand what you could do. so i would love you to unpack that a little for the committee. what were facebook employees saying you could do about the issues on instagram? what were they saying about ethnic violence, and what were they saying about the democratic harms you were describing?
>> facebook has said repeatedly that i'm a plant trying to get censorship — but the documents contain lots and lots of solutions that don't involve picking which ideas are good or bad. they're about designing the platform for safety: slowing the platform down. when you focus on giving people more content from family and friends, they get less hateful, divisive content and less misinformation — because the biggest part of what's driving misinformation is these hyper-distribution nodes, groups where a post goes out to 500,000 people. one example of a non-content-based intervention: alice posts something, bob reshares it, carol reshares it, and when it lands in dan's news feed, the share button is grayed out — he has to copy and paste it to pass it along. that one change has the same impact as the entire third-party fact-checking system, and it works globally: it doesn't require a language-by-language system. moving to human scale, instead of having ai tell us where to focus, is the safest way to design social media. i want to remind people that we liked social media before there was an algorithmic feed. facebook has said, if you move to a chronological feed, you're not going to like it — but facebook has choices it could make in different ways. there are discord servers where everything is chronological and people break out into new rooms when one gets too crowded. that's a human intervention, a human-scale solution, not an ai-driven solution. slowing the platform down, content-agnostic strategies, human-scale solutions — that's where we need to go.
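A minimal sketch of the two-hop reshare friction she describes. All names and the hop threshold are illustrative assumptions, not Facebook's implementation:

```python
from dataclasses import dataclass

RESHARE_HOP_LIMIT = 2  # alice -> bob -> carol; dan sees a grayed-out button

@dataclass
class Post:
    author: str
    reshare_depth: int = 0  # 0 for original content

def reshare(post: Post, by: str) -> Post:
    """Create a reshare one hop further from the original author."""
    return Post(author=by, reshare_depth=post.reshare_depth + 1)

def share_button_enabled(post: Post) -> bool:
    """Content-agnostic friction: disable one-click sharing past the limit.

    The viewer can still copy and paste the content, but that extra step
    slows viral chains in every language, with no content classifier."""
    return post.reshare_depth < RESHARE_HOP_LIMIT

if __name__ == "__main__":
    original = Post(author="alice")
    via_bob = reshare(original, by="bob")
    via_carol = reshare(via_bob, by="carol")
    print(share_button_enabled(via_bob))    # True: one hop from alice
    print(share_button_enabled(via_carol))  # False: dan must copy and paste
```

Note how the rule never inspects what the post says — which is why, as she argues, it generalizes across languages where classifiers don't exist.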
and on each one of these interventions — take reshares: there are some countries in the world where 35% of all the content in the news feed is a reshare. the reason facebook doesn't crack down on reshares, or at least add friction to them, is that they don't want to lose that growth. they don't want 1% shorter sessions, because that's 1% less revenue. facebook has been unwilling to accept even slivers of profit sacrificed for safety, and that's not acceptable.
>> i wanted to ask you in particular what a 'break the glass' measure is, if you would tell us.
>> facebook's current safety strategy is that, within those engagement-based rankings, the ai picks out the bad things. but a place like myanmar didn't have misinformation classifiers or labeling systems, because the language wasn't spoken by enough people. so they allow the temperature to get hotter and hotter — and then, oh no, we need to break the glass and slow the platform down.
>> ... these are really safety-by-design strategies; these are all just saying, make your product fit for purpose. can you say whether you think those could be made mandatory in the bill we're looking at?
>> facebook has characterized the reason it turned those measures off after the 2020 election as being that they don't believe in censorship. but these measures had largely nothing to do with content. they're questions like, how much do you amplify live video — do you use a 600x multiplier or a 60x one? little questions where facebook optimized for growth over safety. we need to think safety-by-design first, and facebook has to demonstrate that it has assessed the risks: they must be mandated to assess the risks, and we need to grade how good those risk assessments are, because facebook will give you a bad one if they can. and we need to mandate that they articulate solutions, because facebook is not coming out with a five-point plan to solve these things.
>> there's also the issue of whitelisting. a lot of the bill talks about terms and conditions — being very clear about them, and having a regulatory sort of relationship around upholding them. but what does that mean when there's whitelisting, where some people are exempt from the terms and conditions?
>> for those who are not familiar with the reporting by the wall street journal: there's a program called crosscheck, a system where about 5 million people around the world — maybe 5.7 million — were given special privileges that allowed them to skip the line, if you will, for safety systems. the majority of safety systems inside facebook didn't have enough staffing to manually review everything. facebook claims this is just a second check to make sure the rules are applied correctly — but because facebook was unwilling to invest in reviewers, they just let those people through. i think there's a need for more avenues to understand what's going on inside the company. for example,
imagine if facebook were required to release its research on a one-year lag. if there are tens of billions of dollars in profit, they can afford to solve problems on a one-year lag, right? we should be able to know that systems like this exist — because the only reason no one knew about this system is that facebook lied to their own oversight board about it.
>> the last thing i want to ask about: obviously all the documents you bring come from facebook, but we can't really regulate just for this company in this moment. we have to look at the sector as a whole, and we have to look into the future. i just wonder whether you have any advice for that. we're not trying to — we're trying to make the digital world better and safer for its users.
>> engagement-based ranking is a problem on all sites. it's easier to provoke humans to anger, and engagement-based ranking figures out our vulnerabilities and panders to them. i think having mandatory risk assessments and mandatory remediation strategies — we need ways to hold these companies accountable, because companies will figure out how to sidestep specific rules, and we need to make sure our processes are flexible and can evolve with the companies over time.
>> and finally, on the scope of the bill: do you think that's wise, or should we be looking for a more systemic solution, more broadly?
>> that's a great question. i think any platform that has a reach of more than a couple million people — the public has a right to understand how it's impacting society, because we're entering an age where technology is accelerating faster and faster, right?
democratic processes take time, if they're done well. and we need to be able to think about how we'll know when the next danger is looming. for example, in my case, because facebook is a public company, i could file for whistleblower protection. if i had worked at tiktok, which is growing very, very fast, that's a private company, and i wouldn't have had any avenue to be a whistleblower. i think there's a real question of, for tech companies that have a large impact, how we get information out of those companies — because, for example, you can't take a college class today to understand the integrity systems inside facebook. the only people who understand them are people inside facebook. thinking systemically, for the larger tech companies, about how we get the information we need to make decisions is vital.
>> thank you so much.
>> thank you, chairman. i know you'll be meeting with the oversight board. they themselves have been looking at the information you've been publishing and the issues you've been discussing. do you think the oversight board should insist on more transparency, or disband itself?
>> i always reject binary choices — i'm not an a-or-b person, i love c and d. i think there's a great opportunity for the oversight board to experiment with what its bounds are. this is a defining moment for the oversight board: what relationship does it want to have with facebook? i hope the oversight board takes this moment to step up and demand a relationship that has more transparency, because they should be asking the question: why was facebook able to lie to us in this way, and what enabled that? because if facebook can come in and just actively mislead the oversight board — which is what they did — i don't know what the purpose of the oversight board is.
>> the hindsight board, then, not the oversight board. [laughing]
>> frances, hello. you've been very eloquent about the algorithm. you've talked about ranking pushing extreme content, and 'addiction driver' is a phrase i think you've used. this follows on from what we've discussed about the oversight board, or a regulator over here, or indeed trying to construct a safety-by-design regime. what do we need to know about the algorithm, and how do we get that, basically? should it be about the output of an algorithm, or should we actually be inspecting the entrails, the code? when we talk about transparency, it's very easy just to say we need to be much more transparent about the operation of these algorithms — but what does it really mean?
>> it's always important to think about facebook as a concert of algorithms. there are many different algorithmic systems, and they interact in different ways. some are amplification systems; some are down-regulation systems. understanding how all those parts work, and how they work together, is important. i'll give you an example. facebook has said engagement-based ranking is dangerous unless you have this ai that can pick out the extreme content — but facebook has never published which languages are supported, and which integrity systems are supported in those languages. because of that, they're actively misleading the speakers of most of the large languages in the world by saying 'we support 50 languages,' when most of those countries have a fraction of the safety systems that english has.
when we ask how the algorithm works, we need to be thinking about what the experience of the algorithm is for lots of individual populations, because the experience of facebook's news feed algorithm in a place that doesn't have integrity systems turned on is very different from, say, the experience in menlo park. some of what needs to happen is privacy-sensitive disclosure of what we call segmentation. imagine you divide the united states up into 600 communities based on what pages and groups people interact with — their interests. you don't need to say this cluster is 35-to-40-year-old white women who live in the south; you can just put a number on the cluster. but you need to understand: are some segments disproportionately getting covid misinformation? right now, 4% of those segments are getting 80% of all the misinformation. we didn't know that until my disclosure. for hate speech it's the same way; for violence incitement it's the same way. so when we say, do we understand the algorithm — we should really be asking, do we understand the experiences of the algorithm?
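A hypothetical sketch of that segmentation analysis: assign exposure events to interest-based segments, then measure how concentrated misinformation exposure is across them. The data is simulated so that a handful of segments dominate, echoing the 4%/80% figure she cites; none of this is Facebook's pipeline:

```python
import random
from collections import Counter

random.seed(0)
NUM_SEGMENTS = 600
HOT_SEGMENTS = 24  # 4% of 600 — invented so the toy data echoes the disclosure

# Simulated misinformation-exposure events: 80% land in the hot segments.
exposures = [random.randrange(HOT_SEGMENTS) if random.random() < 0.8
             else random.randrange(NUM_SEGMENTS)
             for _ in range(100_000)]

by_segment = Counter(exposures)
total = sum(by_segment.values())

# How small a share of segments accounts for 80% of all exposure?
running, segments_needed = 0, 0
for _, count in by_segment.most_common():
    running += count
    segments_needed += 1
    if running >= 0.8 * total:
        break

print(f"{segments_needed / NUM_SEGMENTS:.1%} of segments receive 80% of the misinformation")
```

The point of the technique is exactly what she says: the clusters are anonymous numbers, so the disclosure can show concentration without identifying anyone.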
and with facebook, if you aggregate the data, it will likely hide the dangers of the systems, because the experience of the 95th percentile for every single integrity harm is radically different — even more than radically different — from the median experience. and i want to be really clear: the people who go and commit acts of violence are the people who get hyper-exposed to this dangerous content. so we need to be able to break out those experiences.
>> that's really interesting. do you think it's practical for facebook to produce that? would they need to do further research, or do they have ready access to that kind of information?
>> facebook could produce that information today. that segmentation was one of the projects i founded when i was at facebook; it has since been used for different problem areas, like covid misinformation, and they already produce many of these integrity statistics. part of why it's extremely important for facebook to publish which integrity systems exist, and in which languages, is this: let's imagine we're looking at self-harm content for teenagers, and we want to understand how self-harm content is concentrated across the segments. facebook's most recent position, according to a source i talked to, is basically: we don't track self-harm content, so we don't know who's seeing it. if they were forced to publish which classifiers exist, we could say — wait, why don't you have a self-harm classifier? you need one, so we can answer the question of whether the self-harm content is focused on 5% of the population. we can't answer that question without that data.
>> so we should add that, rapidly, to the risk assessment we'd require them to give us.
>> if i were writing standards on risk assessments, one mandatory provision i would include is that you need to segment the data, because the median experience on facebook is a pretty good experience.
the real danger is the 20% of the population having a horrible experience, or an experience that's dangerous.
>> is that the core of what we would need by way of information from facebook or other platforms? or is there other information, other data? what else do we need to be really effective in risk assessment?
>> i think there's another opportunity: for each of those integrity systems, facebook could have to share a sampling of content, so we could come in and check it. a problem i'm concerned about is that facebook has trouble differentiating, in many languages, between terrorism content and counterterrorism content. think about the role of counterterrorism content in society — it's how people make society safer. because facebook's ai didn't work very well for the language in question — i believe it was arabic — 76% of counterterrorism content was being labeled as terrorism. if facebook had to disclose content at different scores, we could go and check and say: interesting, this is where your systems are weak, and in which languages — because each language performs differently. and i think there's real importance here: if there were a firehose for facebook, and facebook had to disclose what the scoring parameters were, i guarantee you researchers would develop techniques for understanding the roles of those scores and identifying which kinds of content slip through.
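A minimal sketch of the per-language audit such a disclosure would enable, assuming researchers could compare classifier labels against human review on a released sample. The records below are invented placeholders, not real Facebook data:

```python
from collections import defaultdict

# (language, classifier_flagged_as_terrorism, human_says_terrorism)
sample = [
    ("arabic", True, False),   # counterterrorism speech mislabeled
    ("arabic", True, True),
    ("arabic", True, False),
    ("english", True, True),
    ("english", False, False),
]

stats = defaultdict(lambda: [0, 0])  # language -> [false positives, flagged]
for language, flagged, is_terrorism in sample:
    if flagged:
        stats[language][1] += 1
        if not is_terrorism:
            stats[language][0] += 1

# Per-language mislabel rate among flagged posts — exactly the kind of
# weakness (e.g., the 76% arabic figure) a firehose would let outsiders find.
for language, (false_pos, flagged_total) in stats.items():
    print(f"{language}: {false_pos / flagged_total:.0%} of flagged posts were mislabeled")
```

With scores disclosed rather than just binary labels, the same audit could be run at every threshold, showing where each language's classifier actually breaks down.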
>> john nicolson.
>> thank you very much indeed, chair, and thank you for joining us. you may be interested to know that you are trending on twitter. [laughing] so people are listening. i thought the most chilling sentence you've come out with so far this afternoon — and i wrote it down — was that anger and hate is the easiest way to grow on facebook. i mean, that's shocking, isn't it? what a horrendous insight into contemporary society on social media, that that should be the case.
>> one report from facebook demonstrates how different kinds of feedback cycles all play in concert. when you look at the hostility of a comment thread — take a single publisher at a time, look at all their content on facebook and the average hostility of the comment threads — the more hostile the thread, the more likely a click will go out to that publisher. anger incites traffic outwards, and that means profit. so we see people who want to grow really fast using that technique: harvesting viral content from other groups and pouring it into their own pages and groups, with a bias towards the stuff that gets an emotional reaction — and the easiest emotional reaction to get is anger. research has shown that for decades.
>> those of us who are adults, or aspiring adults like members of this committee, will find that hard enough to deal with. but for children this is particularly challenging, isn't it? i'd like to follow up on some of the questions specifically about harm to children. perhaps you could tell us, for people who don't know: what percentage of british teenagers can trace their desire to kill themselves — i can't even believe i'm saying that sentence — back to instagram?
>> i don't remember the exact figure — i think it was 13%.
>> yes, it's exactly that. and body image is also made much worse, isn't it? why should that be, for people who don't understand it? why should being on instagram make you feel bad about the way your body looks?
>> facebook's own reports say that it's not just that instagram is dangerous for teenagers — it's actually more dangerous than other forms of social media.
>> why?
>> because tiktok is about doing fun activities with your friends. snapchat is about faces. reddit is at least vaguely about ideas. but instagram is about social comparison, and about bodies — about people's lifestyles. and that's what ends up being worse for kids. there's also an effect where a number of things are different about life mediated by instagram versus what high school used to be like. when i was in high school, it didn't matter if your experience of high school was horrible — most kids had good homes to go home to, and at the end of the day they could disconnect. they'd get a break for 16 hours. facebook's own research says that now the bullying follows children home: it goes into their bedrooms. the last thing they see at night is someone being cruel to them; the first thing they see in the morning is a hateful statement. that is just so much worse.
>> so you don't get a moment's peace?
>> they don't.
>> you're being bullied all the time. you've already talked to the senate, and told us as well, about what facebook could do to address some of these issues — but some of your answers were quite complicated. perhaps you could tell us, in a really simple way that anybody can get: what could facebook do to address these issues — children who want to kill themselves, children who are being bullied, children who are obsessed with their body image in an unhealthy way, and all the other issues you've addressed? what could facebook do now, without difficulty, to solve those issues?
>> there are a number of factors that interplay and drive those issues.
on the most basic level: children don't have as good self-regulation as adults do — that's why they're not allowed to buy cigarettes. when kids describe their usage of instagram, facebook's own research describes it as an addict's narrative. kids say: this makes me unhappy; i feel like i don't have the ability to control my usage of it; and i feel like if i left, i'd be ostracized. i am deeply worried that it may not be possible to make instagram safe for a 14-year-old, and i sincerely doubt it's possible to make it safe for a 10-year-old.
>> then they shouldn't be on it.
>> i would be surprised — i would love to see a proposal from an established, independent agency with a picture of what a safe version of instagram for a 14-year-old would look like.
>> you don't think such a thing exists?
>> i'm not aware of one.
>> does facebook care whether or not instagram is safe for a 10-year-old?
>> what i find deeply misleading about facebook's statements regarding children is that they say things like: we built instagram kids because kids are going to lie about their age anyway, so we might as well have a safe thing for them. facebook should have to publish what it does to detect 13-year-olds on the platform, because i guarantee you what they're doing today is not enough. facebook's own research shows that facebook can guess how old you are with a great deal of precision, because they can look at who your friends are and how you interact with them.
>> and whether you're in school — that's a giveaway. if you're wearing a school uniform, chances are you're a child.
>> i don't want to disclose one specific thing, but this is something the senate found when we disclosed the documents: facebook had gone back and estimated the ages of teenagers — worked backwards to figure out how many kids had lied about their ages — and they found that in some cohorts, 10 to 15% of 10-year-olds were on the platform. facebook should have to publish those stats every year so we can grade how good they are at keeping kids off the platform.
>> so facebook can resolve this, if it wants to do so.
>> facebook is hugely incentivized to get kids onto the platform: they know that young users are the future of the platform, and the earlier they get them, the more likely they are to get them hooked.
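A hypothetical illustration of friend-graph age inference along the lines she describes — guessing a user's age from the ages of their friends. The data, the median heuristic, and the three-year threshold are all invented for illustration:

```python
from statistics import median

# Self-reported ages of an account's friends (invented). School cohorts
# cluster tightly in age, so the median is a strong guess at the true age.
friend_ages = [13, 12, 14, 13, 13, 15, 12, 14]

def estimate_age(ages: list[int]) -> float:
    return median(ages)

claimed_age = 18                      # what the account holder typed at signup
estimated = estimate_age(friend_ages)

# Flag accounts whose claimed age is far above what the friend graph implies.
if claimed_age - estimated >= 3:
    print(f"flag: claimed {claimed_age}, friends suggest ~{estimated:.0f}")
```

A real system would presumably use many more signals (interaction patterns, group membership), but even this toy version shows why "we can't tell how old users are" is a hard claim to sustain.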
>> and of course young users are no different from the rest of us: they're getting to see all the disinformation about covid and everything else, like the rest of us. just remind us — what percentage of that disinformation is being taken down by facebook?
>> i actually don't know that stat off the top of my head.
>> i believe, from what i understand, it's three to 5%.
>> that's the figure for hate speech, but i'm sure it's approximately the same.
>> i guess so.
>> it's probably even less, in the sense that the only misinformation that gets taken down on facebook is what's been verified by the third-party fact-checking system, and that only catches viral misinformation — things seen by half a million, a million people. and this is the most important part for the uk: i don't believe there's anywhere near as much third-party fact-checking coverage for the uk as there is for the united states.
>> so, ten to 20% for disinformation and three to 5% for hate speech — a vast amount of disinformation and hate speech is getting through to children, which must present children with a very peculiar, distorted sense of the world. and we have absolutely no idea how those children are going to grow up, change, develop and mature, having lived in this very poisonous environment at this very delicate stage in their development.
>> i am extremely worried about the developmental impacts of instagram on children. with things like eating disorders, the consequence may be osteoporosis for the rest of your life — there will be women walking around with brittle bones because of choices facebook made. the second thing i'm super scared about is that kids are learning that the people they care about treat them cruelly — because kids on instagram, compared with watching someone cry in person, are much more hateful, much meaner to each other, even to their friends. imagine what the domestic relationships will be like for these kids when they're 30, if the people who care about them are mean to them.
>> that's a very disturbing thought. the other very disturbing thing you've told us about, which i think most people haven't focused on, is the idea that language matters. so we think facebook is bad now — but what we don't tend to realize in our anglocentric culture is that all the other languages around the world are getting hardly any moderation of any kind at all.
>> the thing that should scare you even more, living in the uk, is that the uk is a diverse society. the harms in those languages are not happening only in africa. mark has said himself — engagement-based ranking is dangerous without ai; that's what mark zuckerberg said. there are people living in the uk being fed misinformation in those languages that is dangerous, that radicalizes people. so language coverage is not just a good-for-the-individual thing — it's a national security issue.
>> that's interesting. on the societal front, you've pointed out that there might be differences between the united kingdom and the united states. i've given this example to the committee before, but i have personal experience of this on twitter: i'm a gay man, and i was called a 'greasy bender' on twitter. i reported it, and twitter wrote back and said there was nothing wrong with being called a greasy bender — even after i wrote back giving the exact chapter and verse from the community standards showing it was unacceptable. so somebody, presumably in california, told me it was absolutely acceptable. to be generous, perhaps they didn't know what a bender was, because the word isn't in use in the united states — though you'd hope they might have found out why this particular word was being reported. in a nutshell, what do you want us to do? what's the most useful thing we could do in addressing the concerns you've raised here?
>> i want to be clear: bad actors have already tested facebook. they've gone and tried to hit the rate limits; they've run experiments with content; they know facebook's limitations. the only people in the dark about facebook's limitations are good actors. facebook needs to disclose what its integrity systems are, which languages they work in, and their performance per language — or per dialect — because i guarantee you the safety systems designed for english don't work as well on uk english as on american english.
>> all this makes facebook sound relatively benign, doesn't it? but what your evidence has shown us is that facebook is failing to prevent harm to children, it's failing to prevent the spread of disinformation, and it's failing to prevent hate speech. it does have the power to deal with these issues — it's just choosing not to. which makes me wonder whether facebook is just fundamentally evil. is facebook evil?
>> i cannot see into the hearts of men. i think there's a real thing of — facebook is overwhelmingly full of conscientious, kind, empathetic people. good people who are embedded in systems with bad incentives are led to bad actions, and there's a real pattern of people who are willing to look the other way being promoted more than people who raise alarms.
>> we know where that leads in history, don't we? so could we compromise: facebook is not evil — maybe that's a moralistic word — but some of the outcomes of facebook's behavior are evil?
>> i think it's negligence.
>> malevolent?
>> malevolence implies intent, and i cannot see into the hearts of men. but there is a pattern of inadequacy: facebook is unwilling to acknowledge its own power. it believes in a world of flatness, which hides differences — like the fact that children are not adults, right? they believe in flatness, and they won't accept the consequences of their actions. i think it is negligence and it is ignorance, but i can't see into their hearts, so i don't want to go further than that.
>> i respect your desire, obviously, to answer the question in your own way. but given the evidence you've given us, i think a reasonable person running facebook, seeing the consequences of the company's behavior — the way the company was performing and the outcomes — would, i imagine, have to conclude that it was wrong, and want to do something about it.
>> i would certainly hope so.
>> just on that point about --
>> might i rest my voice for five minutes? can we take a break for a second? or -- sorry, i don't know how long we're going to go. never mind.
>> thank you. one more point about intent: someone may not have intended to do a bad thing, but if their actions are causing harm and they don't change their strategy, what do you say about them then?
>> i'm a big proponent of looking at systems and how systems perform. and this is actually a real problem inside facebook: facebook has this philosophy that if they establish the right metrics, they can give people free rein. like i said, they're intoxicated by flatness — the largest open-floor-plan office in the world, a quarter-mile long, one room. they believe in flatness. they believe that if you pick a metric, you can let people do whatever they want to move that metric, and you'll get good actions. but that ignores what happens when you learn from the data that the metric is leading to harm — which is what happened with meaningful social interactions. the metric gets embedded, because now there are thousands of people all trying to move it, and people get scared to change the metric, because changing it might mean people don't get their bonuses. so i think there's a real thing of: there is no will at the top. mark zuckerberg has unilateral control over 3 billion people, right? there is no will at the top to make sure these systems are run in an adequately safe way, and i think until we bring in a counterweight, they will be operated for the shareholders' interests and not for the public interest.
>> thank you, chair, and thank you again for joining us today. it's incredibly important, and your testimony has been heard loud and clear. the point i want to pick up on is this: if facebook were optimizing its algorithms the same way a drug company optimizes the production of a product, it would probably be regulated very carefully. i wonder if you could explore a bit further this world of addiction, and whether facebook is doing something that we have perhaps never seen in history before — creating an addictive product that is, perhaps — sorry — like taking a drug, as it were. [inaudible]
>> inside facebook, there are many euphemisms that are meant to hide the emotional impact of things. for example, the team working on ethnic violence is called the social cohesion team. for addiction, the euphemism is 'problematic use': people are not addicted — they have problematic use. the reality is that, using large-scale studies — these are 100,000-person studies — facebook has found that problematic use is much worse in young people than in people who are older. and the bar for problematic use is high: you have to be self-aware enough, and honest enough with yourself, to admit that you don't have control over your usage and that it's harming your physical health, your schoolwork, or your employment. for 14-year-olds, it's often their first year on the platform, so many haven't exhibited problematic use yet — but between 5.8% and 8% of kids say they have problematic use. that's a huge problem. if that many 14-year-olds are that self-aware and that honest, the real number is probably 15 or 20%. i am deeply concerned about facebook's role in hurting the most vulnerable among us. facebook's research shows that the people most exposed to misinformation are people who have been recently widowed, recently divorced, who have moved to a new city, who are socially isolated. and i'm deeply concerned that they've made products that can lead people away from their real communities and isolate them in rabbit holes, in filter bubbles. because what you find is that, when people are sent targeted misinformation to a group, it can make it hard to reintegrate into larger society, because now they don't have shared facts.
i like to talk about the idea of a misinformation burden, because it is a burden on us when we encounter misinformation. facebook right now doesn't have any disincentive against long sessions, no incentive toward high-quality, shorter sessions. imagine if there were a tax of, say, a penny an hour — on the order of a few dollars a year per user for facebook. imagine a tax like that pushing facebook toward shorter sessions of higher quality. nothing today incentivizes them to do this: if they can get you to stay on the platform longer, they get more ad revenue and make more money.
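Back-of-envelope arithmetic for the session tax floated above, assuming a penny an hour; the usage figure is an invented assumption:

```python
# Toy arithmetic for a usage-based tax: a penny per hour of attention.
TAX_PER_HOUR = 0.01            # dollars (the rate suggested above)
avg_daily_minutes = 50         # assumed average time on platform per user

hours_per_year = avg_daily_minutes / 60 * 365
tax_per_user_year = hours_per_year * TAX_PER_HOUR
print(f"~${tax_per_user_year:.2f} per user per year")  # ~$3.04
```

Even at this scale, the levy's point would be less the revenue than the changed incentive: longer sessions would finally carry a cost.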
>> thank you — that's very helpful. much of the discussion around the bill we're looking at concerns the comparison to a publisher, a publishing platform. but should we look at this much more with a product approach — a product which, in essence, is causing addiction, as you say, in young people? you mentioned earlier the dopamine hits in the brain. we've had previous testimony from experts highlighting that children's brains seem to be changed by using facebook and other platforms to a large extent over many, many hours. if children were being given a white powder that had the same effect, we would be very quick to cut down on that access; but because it comes via a screen, we assume everyone is using it safely, and that doesn't happen. so, on the impacts on children: should we be looking at facebook as a product rather than a platform?
>> what i find telling is that in silicon valley, if you look at the most elite private schools, they often have zero-social-media policies: they try to establish cultures where you don't use phones and you don't connect with each other on social media.
the fact that that is the trend in the elite high schools of silicon valley should be a warning to us all. it's super scary to me that we're not taking a safety-first perspective with regard to children. safety by design is so essential for kids, because the burden we've set up until now is the idea that the public has to prove that facebook is dangerous; facebook has never had to prove that its product is safe for children. we need to flip that script. with pharmaceuticals, a long time ago, we said it's not the obligation of the public to show that a medicine is dangerous — it's the obligation of the producer to show that the medicine is safe. that's been done over and over again, and this is the right moment to act — the moment to change that relationship with the public.
>> if i may, just on that point — sorry, my screen seems to be switching off — with regard to addiction: have there been any studies within facebook, within the documents you've seen, where they've actually looked at how the algorithm can increase addiction?
>> i have not seen any documents as explicit as saying facebook is trying to make addiction worse. but i have seen documents where, on one side, someone says that the number of sessions per day someone has is indicative of their risk of exhibiting problematic use — and on the other side, because the teams aren't talking to each other, someone says: interesting, an indicator that people will still be on the platform in three months is that they have more sessions every day; we should figure out how to drive more sessions. this is an example of facebook's problem: because their management style is so flat, there isn't enough cross-pollination, and
the side that's responsible for growing the company is kept away from the side that highlights harms. in a world where those aren't integrated, that causes dangers and makes the problem worse.
>> thank you for that. it's 25 to --
