
tv   Public Affairs Events  CSPAN  April 28, 2021 1:58am-4:20am EDT

1:58 am
1:59 am
>> facebook, twitter and youtube executives testified before the senate judiciary subcommittee on the use of algorithms on their platforms. committee members inquired about how algorithms can amplify hate speech, misinformation and polarization, and about the data tech companies collect on users. this is two hours and 20
2:00 am
minutes. [silence]
2:01 am
>> thank you to the witnesses participating today, and a particular thank you to mr. harris, who i understand is joining us from hawaii, where it is 4:00 a.m. i'd like to thank ranking member senator sasse for working with me to put this hearing together. i am truly grateful we have been able to work together on this important topic; it's too important to let it fall victim to the typical gridlock in washington. thank you to chairman durbin for joining today. generally, when people hear the word algorithm, you might think of a complicated medical formula or a piece of computer code, but as many of us have become increasingly aware, algorithms impact what literally billions of people read and watch, and impact what they think, every day. facebook, twitter and youtube, the three major tech companies represented at today's hearing, use
2:02 am
algorithms to determine what appears on your screen when you open and engage with their applications. there's nothing inherently wrong with that: with billions or even trillions of pieces of content to choose from on each platform, it makes sense that they should have a way to help us sift through it for what they think users are looking for and what we are actually seeking. advances in machine learning have made the technology possible, and in other contexts machine learning is driving innovation across many industries, helping medical science and companies deliver better service. but many have recently argued that when this technology is harnessed into algorithms designed to maximize time and attention on social media, the results can be harmful to kids' attention spans, to the quality of public discourse, to public health, and even to our democracy itself. what happens when algorithms become so good at amplification
2:03 am
and showing content that a computer thinks you will like that you or your kids or family members spend hours each day staring at a screen? what happens when algorithms become so hyper-tailored to you and your habits and interests that you stop being exposed to ideas you might find disagreeable, or even so different from yours as to be offensive? what happens when they amplify content that may be popular but also hateful or plainly false? we worked on this hearing, and one of the main reasons is that we truly don't see these as partisan questions. we don't come to this hearing with a specific regulatory or legislative agenda, but this is an area that requires urgent attention. as mark zuckerberg put it, when left unchecked, people will engage disproportionately with sensationalist and provocative content, and that can undermine discourse and lead to greater
2:04 am
polarization. when we are so polarized and angry that we no longer hear each other's views, our democracy itself erodes. my plan is to use this as an opportunity to learn how these algorithms work, what steps may have been taken to reduce algorithmic amplification that is harmful, and what can be done to build on that and consider a potential path forward, whether voluntary, regulatory or legislative. i look forward to hearing from the representatives of facebook, twitter and youtube, who have agreed to testify. each platform has taken measures in recent years to curb some of the harms algorithmic amplification can cause, and it's my hope these platforms can build upon good practices, learn from each other and make a significant difference. we will also hear from two outside experts who can help us ask bigger-picture questions and narrow in on some of the strategies and tactics we could or should follow moving forward, including whether and how legislation might improve
2:05 am
the practices that these and many other platforms use. thank you. i'm going to turn to my ranking member for his opening remarks. >> thank you, and congratulations on having gaveled for the first time in six years; hopefully it's not long before the next, but i have enjoyed the preparations for this hearing with you and your team. i appreciate your opening statement, and thank you to all four of you. it's 85 degrees this afternoon in d.c., so you didn't have to join us from hawaii and testify at 4:00 a.m., but thank you for participating nonetheless. i want to applaud your opening statement; it's too easy for us to take any complicated issue, reduce it to heroes and villains, and slam whatever regulatory or legislative tool was predetermined down on the newly
2:06 am
defined problem. i think you underscored a number of important points; the simplest one is that algorithms, like almost all new technologies, have costs and benefits. they can make the world a better place or make the world a worse place, and one of the most fundamental questions before us as a people, not just as legislators or regulators, is the attention economy. if a product is free, you're probably the product, and the american people need to understand, we as parents and neighbors need to understand, that we are being given access to unbelievably powerful tools that can be used for lots of good, but because they're free, there's somebody who would like to capture our attention, shorten our attention spans and drive us into echo chambers. so algorithms have great potential for good, but they can also be misused, and we
2:07 am
the american people need to be reflective and thoughtful about that, first and foremost. for the tech companies who showed up today and those of you adjacent to silicon valley organizations, thank you for your interest and attention to this conversation. it's important for us to push back on the idea that complicated qualitative problems have simple quantitative solutions. on this topic, across all the technology-related big tech hearings we've had over the course of the last two or three years in this committee, with the hard problems we've wrestled with, we've been told that as soon as the supercomputers work better, they would solve these problems. we need to distinguish between qualitative and quantitative problems. i appreciate the chairman's perspective on the way we begin this hearing: this is not a rush to pretend politicians know a lot more about these problems than we really do, it's an
2:08 am
acknowledgment that there are problems and challenges in this area, and that prudence and humility and transparency are the best way to begin, and i'm grateful to this committee for this particular hearing. >> thank you, and i'll turn to chairman durbin for opening remarks. >> i appreciate the opportunity to join you; congratulations on taking the reins of this subcommittee, which i was pleased to reconstitute in this congress. you have demonstrated significant leadership, and i look forward to your work and efforts in bringing policy and legislation before the committee. this country stands at a crossroads as we grapple with technology, social media and culture. senator sasse summarized it: there's a plus and a minus, and today we are trying to look at the minus side. the right to privacy, especially for children, is one of the persistent concerns i have shared
2:09 am
with members of this committee. every day, internet companies collect reams of personal data on americans, including kids, but we cannot expect children to fully understand the consequences of their internet use and this collection process. kids deserve a chance to request a clean slate once they are old enough to appreciate the nature of internet data collection. later this week i will reintroduce the clean slate for kids online act to give every american the legal right to demand that website companies delete all personal information collected from or about a person when he or she was a child under the age of 13. the right to privacy and access to data could keep this subcommittee occupied, but there's a lot more to explore, including in today's hearing, in which we will examine how social media platforms use highly targeted algorithms to captivate us in every aspect of our lives.
2:10 am
algorithms influence how we engage, and they don't just affect our personal lives, they have effects on a global basis. an independent civil rights audit last year found facebook is not sufficiently attuned to the way its algorithms fuel extreme and polarizing content and can drive people into self-reinforcing echo chambers of extremism. following that audit, chairman coons and i wrote to facebook calling on the company to do more to mitigate the spread of anti-muslim extremism and bigotry on the platform. in november, when facebook ceo mark zuckerberg testified, i asked him about recent incidents of hate and conspiracy groups using facebook to plan and recruit, including an organized conspiracy to kidnap michigan's governor, gretchen whitmer, and the so-called kenosha guard militia posting a quote 'call to arms' on facebook in the aftermath of the
2:11 am
shooting of jacob blake in kenosha, wisconsin. that call to arms spread widely and was read by 17-year-old kyle rittenhouse, who traveled from illinois to wisconsin, where he allegedly shot and killed two people in the streets of kenosha on august 25, 2020. that militia page was reportedly flagged 455 times to facebook, but facebook found the page did not violate its standards, so it was left up. mr. zuckerberg's response at the hearing was that it was a mistake, certainly an issue we are debriefing and figuring out how to do better on. unfortunately, it's clear they didn't figure out how to do better fast enough. not even two months later, a mob of domestic terrorists and violent extremists stormed this capitol building on january 6, fueled by lies and conspiracy theories claiming the election
2:12 am
had been stolen from the former president. while the efforts to overturn the election were ultimately unsuccessful, the trauma of that harrowing day lingers on. after january 6, the role of rampant hate and misinformation on social media platforms has never been clearer. we need social media companies to finally take real action to address the abuse and misuse on their platforms and the role algorithms play in amplifying it. i look forward to hearing from the witnesses, and i am hopeful this subcommittee can, under chairman coons' leadership, accomplish what we are expecting and move this country in the right direction. >> thank you, mr. chairman. i will briefly introduce our witnesses for today and then bring them in. monika bickert, facebook's vice president of content policy, originally joined facebook in 2012 as counsel advising the company on child safety and law enforcement. prior to joining facebook she was a legal advisor at a u.s. embassy. she specialized in southeast
2:13 am
asian legal development in response to human trafficking and served as a prosecutor at the department of justice for 11 years in washington. lauren culbertson, twitter's head of u.s. public policy, based in washington, d.c., leads the company's federal and state public policy teams and initiatives, serving as twitter's lead on intermediary liability policy and the company's efforts to help combat the opioid crisis. she previously worked in the u.s. senate for my friend, senator johnny isakson of georgia, and founded a business, millennial bridge, to promote public policy. alexandra veitch is youtube's director of government affairs and public policy for the americas, where she advises the company on public policy issues around user-generated content. she previously served as special assistant to president obama and deputy assistant secretary at the department of homeland security. she served on speaker pelosi's senior staff and began her career working for a senator from maryland. her private sector experience includes
2:14 am
leading north american government affairs for tesla and csra. tristan harris has spent his career studying how major technology platforms have increasingly become the social fabric by which we live and think and communicate. he is the cofounder and president of the center for humane technology, which aims to catalyze a shift toward humane technology that operates for the common good. he was a primary subject of the netflix documentary the social dilemma and led the time well spent movement, which sparked changes at facebook, apple and google. dr. joan donovan is a leading public scholar and disinformation researcher specializing in media manipulation, critical internet studies and online extremism. she is the research director of the harvard kennedy school's shorenstein center and directs its technology and social change project. she is a cofounder of the misinformation review, and her research can be found in
2:15 am
peer-reviewed academic journals, including information, communication & society and social studies of science, and she is a columnist at mit technology review. you are all virtual, which makes this next step a little different, or novel, for me. would our four witnesses please stand to be sworn and raise your right hand. i can't see you all to confirm you are doing that, but do you affirm that the testimony you are about to give before this committee will be the truth, the whole truth and nothing but the truth, so help you god? >> i do. >> thank you. we will now proceed with witness statements. each of you will have five minutes to make an opening statement. >> thank you.
2:16 am
chairman coons, ranking member sasse and distinguished members of the subcommittee, thank you for the opportunity to be here with you today. i am monika bickert and i lead content policy for facebook. we use algorithms for many product features, including enforcing our policies, but when people refer to facebook's algorithms, often they are referring to the content-ranking algorithm that helps us order content for people's news feeds, so i'll dive into that briefly. without the algorithm, people would have so much potential content they could see: the average facebook user has thousands of eligible posts every day that she could see in her news feed, and they are all there, but what we do is try to save her the time of sorting through it and use our ranking algorithm
2:17 am
that ranks the posts and puts at the top the content she would find most meaningful. it looks at many things, including how often the user typically comments on or likes content from this particular source, how recently the content was posted, and what type of content it is, such as a photo or video she can engage with. this process produces a news feed unique to each person. naturally, users don't see the underlying code that makes up the algorithm, but we do publish information about how the process works, including describing the inputs that go into it, and we put out a blog post whenever we make significant changes to how we rank content in the algorithm. people can click on any post in the news feed, go to the menu, and go to where it says why am i
2:18 am
seeing this post? they will see an explanation of why the algorithm put that piece of content where it did, and this helps people understand the algorithm and why they are seeing what they see. i do want to underscore that people can opt out of this algorithm and go to a most recent news feed, which basically means all of the eligible content you could see is simply ordered in chronological order, and they can also choose an option we call favorites feed, which basically allows you to select accounts that are favorites of yours, and those are the only things that will be ranked. we recently released a feature that allows people to toggle among those. as we work to bring more transparency to the algorithm and give
2:19 am
people more control over how it works for them, we are also working to improve the way the ranking system works, and we announced changes including expanding surveys to understand what is meaningful to people and worth their time, and making it easier for people to give us feedback on individual posts, and that is what we take from them and build into the ranking algorithm, with the hope that as we make it better and better, people leave facebook feeling more inspired. it's not the only thing that determines what people might see when they come to facebook; we have a set of community standards that say there are certain categories of content that are simply not allowed. those are public standards we've had for years, and we publish a quarterly report on how we are doing at finding that content and removing it, and it shows we have gotten better and
2:20 am
better over the past years. if content is removed for violating our standards, then it does not appear in the news feed at all, and there are other kinds of content that don't violate our standards but that people don't want to see, like clickbait or borderline content, and those the algorithm ranks lower. the reality is, it is not in our financial or reputational interest to push people toward increasingly extreme content. if we did something like that to keep somebody on the site for a few extra minutes, it would give them a worse experience and make them less likely to use our products, and that is self-defeating. our interest is to make sure people continue to value our products down the road. algorithms are a key part of how people connect and share, and of how we fight harmful content on the site, and we will continue to help people understand how it works
2:21 am
and how they can control their experience. i look forward to your questions. [inaudible] >> thank you very much. please proceed with your testimony. >> thank you for inviting me to appear before you today. i am the director of government affairs and public policy for youtube in the americas, and i appreciate the opportunity to explain how algorithms and machine learning support youtube's mission to give everyone a voice and show them the world. through the uncertainty of the last year, youtube has helped bring people together even as we stay apart. more people than ever have come to youtube to learn new skills,
2:22 am
understand the world more deeply, and be delighted by stories that can't be found elsewhere. youtube relies on the trust of our users, creators and advertisers, and that's why responsibility is our number one priority. our approach is based on what we call the four rs: we remove content that violates our community guidelines, raise authoritative voices, reduce the spread of borderline content, and reward trusted creators. my written submission explains each in detail, but i want to focus my comments today on how machine learning supports our responsibility work when it comes to recommendations. recommendations on youtube help users discover content they will enjoy and surface new subjects. to recommend content to our users, our systems rely on a number of signals including, if enabled, a user's watch history, and consider factors like country and time of
2:23 am
day, and help our systems surface and raise authoritative voices. we also give users significant control over how their recommendations are personalized. users can view, pause, edit or clear their search and watch history at any time, and we give users the opportunity to rate recommendations so they can tell us if they are not useful. we also believe we have a responsibility to limit recommendations of content that can be harmful, and that's why in january 2019 we launched more than 30 changes to our recommendation system to limit the spread of harmful misinformation and borderline content, content that comes close to, but doesn't quite cross the line of, violating our community guidelines. as a result, we saw a 70% drop in watch
2:24 am
time of this content from recommendations in the u.s. that year. this content is a fraction of 1% of what is watched on youtube in the u.s.; we know that is still too much, and we are committed to driving this number down. we know there is interest in the quality of the content we recommend to our users. researchers around the world have found that youtube's recommendation systems steer users in the direction of popular and authoritative content. our systems raise content from authoritative sources and reduce recommendations of borderline content and harmful misinformation, even if that reduces engagement. we are proud of our record here but are also continuously working to improve. and because responsibility and transparency go hand in hand, i'd like to close with three
2:25 am
recent transparency efforts we've undertaken to facilitate a better understanding of our platform. first, in may 2020, we collaborated with google to launch the first threat analysis group bulletin, which regularly discloses actions that we have taken to combat coordinated influence operations from around the world. second, in june 2020, we launched a website called how youtube works to answer frequently asked questions; it explains our products and policies in detail and provides information on critical topics such as child safety, misinformation and copyright. third, earlier this month, we added a new metric to our quarterly community guidelines enforcement report: the violative view rate, the percentage of views that come from content violating our policies. last quarter this number was 0.16 to 0.18 percent, meaning that out of every 10,000 views on youtube, only 16 to 18 come from this content.
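the violative view rate described in the testimony is, at its core, a simple ratio. as a rough sketch (the function names and figures here are illustrative assumptions, not youtube's actual measurement pipeline, which samples and human-reviews views), the metric is violative views divided by total views, often quoted per 10,000:

```python
def violative_view_rate(violative_views: int, total_views: int) -> float:
    """Fraction of views that land on policy-violating content."""
    if total_views <= 0:
        raise ValueError("total views must be positive")
    return violative_views / total_views

def per_10k(rate: float) -> float:
    """Express a rate as views per 10,000, the form quoted in the report."""
    return rate * 10_000

# illustrative numbers only: a 0.17% rate is about 17 views per 10,000
rate = violative_view_rate(violative_views=17, total_views=10_000)
print(round(per_10k(rate), 2))  # 17.0
```

expressing the rate per 10,000 views rather than as a bare percentage is a presentation choice; it keeps small rates legible without scientific notation.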
2:26 am
this is down by over 70% compared to the same quarter of 2017, thanks in large part to our investments in machine learning. the open nature of our platform comes with important responsibility work, and we appreciate the feedback we receive from policymakers and will continue to do more. thank you again for the opportunity to appear before you today, and i look forward to your questions. >> thank you. >> thank you, members of the subcommittee, for the opportunity to testify on behalf of twitter today on the role of algorithms in the amplification of content. twitter's purpose is to serve the public conversation. in the early days, we were sharing 140-character status updates. now our service has become the go-to place to see what's happening in the world and
2:27 am
conversations about a wide range of topics, including current events, sports, entertainment and politics. while much has changed since twitter was founded 15 years ago, we believe our mission is more important than ever. while many of the challenges we grapple with today are not new, the creation and evolution of the online world have affected the scale and scope of these issues. moreover, we must confront these issues amid increasing global threats to free expression. we believe addressing the global challenges of internet services requires a free and open internet. we are guided by the following principles to build trust with the people we serve: increasing transparency, providing more consumer choice, and ensuring procedural fairness. we stand on the principle of consumer control and choice, and that is particularly relevant to today's discussion on algorithmic
2:28 am
choice. we give people on twitter control over the algorithm to determine their own timeline: in the top right corner of your screen, you can choose to see your tweets in reverse chronological order. when we implemented this, some suggested it would be bad for our business, but we thought it was the right thing to do for our users, and it has been a core feature ever since. further, in line with our commitment to choice and control, twitter is funding bluesky, an independent team of open-source engineers and designers working to develop an open, decentralized standard for social media. it is our hope this will eventually allow twitter and other companies to contribute to and access open recommendation algorithms that promote healthy conversation and ultimately provide individual choice. these standards support
2:29 am
innovation, making it easier for startups to address issues like harmful content. we recognize this effort is complex and unprecedented and will take time, but we are currently planning to provide the necessary exploratory resources to push it forward. as we make investments to provide more transparency and choice, we've also launched our responsible machine learning initiative to conduct in-depth analyses and studies to assess potential harms in the algorithms we use, and we plan to implement our findings and share them through an open process for feedback. finally, as policymakers and members of congress here consider internet regulation, i urge you to consider how algorithms and machine learning make twitter and other services a safer place for the public conversation. technology is essential for rooting out harmful content such as
2:30 am
child sexual exploitation. we rely heavily on machine learning to surface potentially abusive content for human moderators to review. simply put, we must ensure regulation enables companies to help solve some of the problems that technology poses. in summary, we believe moving toward more open systems will increase transparency, provide more consumer control and choice, and increase competition in our industry. this will ultimately lead to more innovation and solve hard challenges. we appreciate the enormous privilege of hosting some of the most important conversations in the world. we are committed to working with a broad group to get it right for the future of the internet and the future of our society. thank you for the opportunity to be here with you today. >> thank you. next we will hear from mr. harris of the center for
2:31 am
humane technology. >> thank you, i'm honored to be here with you today. my background, before the film the social dilemma, which many of you might have seen, has been with the insiders who understood how these technologies were built in the first place and how they affected society. my friends from college, mike and kevin, worked at these companies in the early days, and what we are missing in this conversation is a focus on the business model and the nature of what these platforms are about, not because they are evil; none of the people here with us today are intentionally causing harm, and neither do i believe the companies have intentionally
2:32 am
wanted this to happen. but if we don't diagnose the situation correctly, we are going to be in trouble. while you are hearing from the folks here today about reducing borderline content, hiring content moderators and so on, it's not very convincing, because at the end of the day, a business model that preys on human attention means we are worth more as humans and citizens of this country when we are addicted, outraged, polarized, narcissistic and misinformed, because that means the business model was successful at steering our attention using automation. and we are now reaping the results of ten years of this psychological process, which has warped our national communications and fragmented the shared window of reality we need in order to coordinate and deal with our real problems: climate change, the rise of china, pandemics, education and infrastructure. so long as these companies
2:33 am
profit by turning the american conversation into a war of all against all, because that is what the business model rewards, not the advertising per se but the model of everyone getting a chance to go viral, and so long as there is personalization, we are each going to be steered into a different reality. if you click on a couple of articles that say masks don't work or the data was different, you will see more evidence that masks don't work, and people are pitted against each other within this infinite reality where anything can go viral. fundamentally, this is breaking many aspects of the nation's fundamental life: for children, it increases cyberbullying and increases suicide, and it takes momentary drama and turns it into
2:34 am
snowballs that affect teachers and classrooms, with teachers spending two hours on monday morning clearing up all the drama that occurred on social media over the weekend. it can reverse progress we've made in civil rights, as racial stereotypes and online harassment of minorities increase in ways that are demeaning, and it can inhibit progress on climate change because disinformation has gone viral. it can be a national security threat: instead of flying a plane into our buildings, an adversary can fly an information bomb into the u.s., aimed by an algorithm from one of these companies at whatever zip code you would like to target. it is the opposite of defense; it removes the value of the millions and billions of dollars spent on an f-35, because the department of homeland
2:35 am
security's protections become irrelevant when the attack is virtual; all those protections go away. most important, if we are not coordinated, we cannot even recognize each other as americans, and that is the only thing that matters: if we don't have something we can agree on, we cannot change or do anything about any of this. we are at a moment in history where we are transitioning into a digital society, and we've already got a brain implant for our society: there is the chinese brain implant, with control and behavior modification, or the western brain implant that has turned us into a performative culture, in which we fall into a kind of amusing ourselves to death, constantly immersed instead of focusing on real
2:36 am
problems. someone is going to be controlling the 21st century, an open society or a closed society, and either it's done in a digital way we don't want, or we figure out how to be a digital open society that doesn't lose to that. that is our task, and if we don't figure it out, the american experiment could be in question. >> thank you very much. you can now give your opening statement. >> thank you to the subcommittee chairman and ranking member for inviting me, and to the other witnesses as well. i appreciate the opportunity to talk about how algorithms shape public discourse. i am the research director for the shorenstein center at the kennedy school, and i study the internet. i want to remind everyone that the internet is a truly global
2:37 am
technology requiring massive amounts of international labor, so whatever policy comes from the u.s. will undoubtedly become the default settings for the rest of the world. i also want to begin by saying i believe a public interest internet is possible, and i have to believe that to do the heinous job of researching hate, incitement, harassment and disinformation on these social media products. a public interest internet means, practically, policy that draws together the best insights across many different professional sectors, matched with rigorous independent research into how automation and amplification shape the quality of public life. we should begin by creating public interest obligations for social media timelines and newsfeeds, requiring companies to serve timely, accurate information as well as providing robust content moderation services and options. but today, let's name the problem of
2:38 am
misinformation at scale and its impact. in the u.s., when we talk about politics, we're talking about media about politics, and when the information flows are polluted with strategic misinformation, a simple search for something like coronavirus origin or mail-in ballots can lead people down a rabbit hole of medical misinformation or political disinformation. in october 2020, i testified about misinformation at scale having harmful societal impacts akin to secondhand smoke, and it took a whole-of-society approach to address the harms caused by secondhand smoke, which led us to clean air. so when i say misinformation at scale, i'm not complaining that someone is wrong on the internet; i mean the way social media products amplify novel, outrageous statements to millions of people faster than timely, local, relevant and accurate information can reach them.
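the dynamic described here, engagement-optimized ranking outpacing accuracy, can be illustrated with a toy model. this is a sketch of the general idea only, not any platform's actual ranking code, and the post texts, signal weights and field names are invented for illustration: a feed sorted purely by engagement will surface an outrageous post ahead of an accurate but less-clicked one, because accuracy is simply not an input to the score.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int      # engagement signals accumulated so far
    shares: int
    accurate: bool   # ground truth, invisible to the ranker

def engagement_score(post: Post) -> float:
    """Toy ranking score: engagement only; accuracy plays no part."""
    return post.clicks + 3.0 * post.shares  # shares weighted higher (an assumption)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order a feed by engagement score, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("local health dept: masks reduce transmission", clicks=40, shares=5, accurate=True),
    Post("shocking: masks secretly don't work!", clicks=90, shares=60, accurate=False),
])
print(feed[0].accurate)  # the top slot goes to the more-engaging, inaccurate post
```

in this toy model the accurate post scores 55 and the outrageous one 270, so the ranker's objective alone determines what reaches the most people first.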
2:39 am
post-2020, our society must contend with misinformation at scale and its deadly consequences. scammers use social media to sell bogus products, impersonate social movements and push conspiracies. what i've learned over the past decade studying the internet is that everything open will be exploited, and misinformation at scale is a feature of social media. what do i mean when i say that? for example, a researcher studying white supremacists can go down a rabbit hole, pulled into an online subculture by a keyword, where the norms are unfamiliar but the content is nevertheless plentiful, and there are four aspects of social media algorithms that can lead someone into this rabbit hole; fittingly, they also are four rs. repetition relates to seeing the same
2:40 am
thing over and over on a single product: likes, shares, retweets. redundancy is seeing the same thing across different products, the same thing on youtube that you see on twitter, which tends to produce a feeling of something being more true. responsiveness is how social media and search engines always provide some answer, even if it is wrong, unlike other forms of media. and last, reinforcement refers to the way algorithms work to connect people and content: search for a slogan or keyword and algorithms will reinforce it time and time again. nowhere is this more prevalent than on youtube, where any search for conspiracy or white supremacist content using the keywords of the group surfaces numerous recommendations and even offers up engagement with these communities and influencers. if you recently searched for content like rittenhouse, qanon,
2:41 am
proud boys or antifa, likely to enter a rabbit hole extracting yourself ranging from the difficult to the impossible. the rabbit hole is best understood as algorithms pattern distribution content to maximize engagement and revenue. i have a few things companies could implement if we want to talk about that but tackling a problem will require federal oversight long-term. we didn't do it overnight but tech companies fly the plane with nowhere to land. the cost of doing nothing is nothing short of democracy. thank you. >> thank you very much for your testimony and all witnesses given the live induced number. i want to say, your efforts to
down-rank borderline content and to improve transparency are all positive steps, and we need to continue to find ways to preserve the positive benefits of algorithms, which show people content that is meaningful to them, while addressing the clear threat and challenge, the very real potential for harmful impacts of algorithmic amplification. The questions I have are to get a better understanding of how one might further build on your efforts and strike the right balance. One proposal is for social media platforms to create circuit breakers. We are familiar with the phrase "blowing up" on the internet. The idea is to detect content that is rapidly gaining widespread viewership so humans can review whether it complies with platform policies before it racks up tens or hundreds of millions of views. Professor Donovan, could you briefly explain why this mechanism might be particularly valuable?
>> One of the things we know now from decades of tracking flagging, especially in these communities, is that users only tend to flag things as a way of getting retribution on one another; others are searching for the content because they enjoy it. The flagging systems built into these products don't tend to work when it comes to strategic misinformation, and especially harassing content, as well. As a result, what you need to do as a corporation is look for it. I know there have been a couple of instances recently where corporations have found and rooted it out themselves, but it has to be part of the business process: content moderation has to seek out content that is essentially out of view, using signals from the past.
>> Thank you.
>> That's one way to incorporate this.
>> Thank you. Facebook said last fall it was piloting this concept. What did you find from this experience? Do you expect to further roll it out broadly? Explain briefly if you might.
>> I'm glad to get to that question. We don't ourselves decide, when we look at this, whether or not something violates our policies; that is referred to fact-checkers. The fact-checkers are more than 80 independent organizations we work with. They can find content on their own, or we can send it to them, and user reports factor in, too. Either way, if they rate something false, that's when we put on a label saying the content is false, directing people to the fact-check, and we reduce redistribution of the content. And yes, we are seeing these efforts paying off. We see that when we put one of the informational labels on top of the content, people are far less likely to click through and see the content than they would if we didn't have that label.
>> I appreciate that. Facebook announced steps it was taking around the Derek Chauvin verdict, and one step was limiting the spread of content that its systems predicted was likely to violate community standards in the areas of hate speech, violence
and incitement. Facebook's statement noted the company had done this in other emergency situations in the past. My question is, why wouldn't Facebook always limit the rapid spread of content likely to violate its standards? Could you help us understand this?
>> Yes. What we are doing, and I put that blog post out, what I meant was that we use systems to proactively identify when content is likely to violate or may be borderline. Often what that can help us with is sending it to reviewers and having them assess whether or not it violates. Not all of that content violates; there will be some false positives, and so there is a cost to, for instance, taking action on that content, and to having people look at it. So generally we use those measures to find content we can send to reviewers, but in situations where we know there is an extreme, finite risk, such as an election in a country in civil unrest, or the situation in Minneapolis around the Chauvin trial, as a temporary measure we will demote content that this technology, the algorithms, predicts is likely to violate.
>> Let me ask a last question of the company representatives here. Facebook has said, and I think you have said, that it is not in its long-term interest to pursue maximum engagement if it comes at the cost of spreading polarizing, sensationalized content; that it's not in the long-term financial interest of the company, let alone the interests of the country, to have algorithms that
amplify harmful, divisive content. I am concerned about the underlying incentives, on all three of your platforms, for those who have to make decisions day in and day out about how your companies operate. MIT Technology Review reported last month that pay incentives at Facebook are still tied to growth metrics and engagement metrics. If I am an employee who works on these systems, are the metrics the company has set up to measure my performance directly related simply to engagement and growth, or are they in some way broader and more positive? Could you each answer briefly: do you provide pay incentives to your algorithm teams, directly or indirectly, based on engagement- and growth-related metrics?
>> Engineers are not specifically given incentives to increase engagement. The focus is making sure we build products and services people find useful and will want to use for years to come.
>> A top priority for our company is to serve a healthy public conversation, and I would love to share with you what we share with our investors and advertisers on all the concerns and priorities we've talked about so far today. What we are telling you is exactly what we tell investors and advertisers, because they have the same concerns. We have no incentive to have toxic or unhealthy conversation on the service.
>> Thanks.
>> Similarly, responsibility is our number one priority, and we set our goals around what we define as responsible growth. We make it our goal to encourage adoption of the product in the future, but we also watch for misuse and help make sure that growth is responsible.
>> Mr. Harris, could you provide a brief comment on your understanding of the incentives of employees and how they align with responsible growth versus growth at all costs?
>> My understanding is that there was a brief experiment at Facebook with non-engagement-based metrics for social impact, but that has largely gone away, and performance is still a measure of engagement. Daily active user growth is still the focus. And with everything else we are talking about today, it's almost like having BP and Shell here and asking what you're doing to stop climate change. Those business models are what is creating a society that is outraged, polarized, and performative, awash in disinformation.
That is fundamental to how it works, and while they can try to skim the harm off the top and do what they can, and we want to celebrate that, the model is just fundamentally trapped, and they can't change it.
>> Thank you all.
>> My first question is where Mr. Harris finished. I do think constructive engagement beats soundbites, and I'm not trying to get you all to fight. The truth is, this hearing format would work a lot better if we were in the same room, so we didn't have to try to bring you all into dialogue. But the last three answers from the social media companies and Mr. Harris's answers were ultimately not responsive to one another, so I want to go back. I'll start with Facebook's witness, on the claim of healthy engagement as opposed to more engagement by quantity. Mr. Harris argues that even if you skim the most destructive practices off the top, the business model is addiction: money directly correlated to the amount of time people spend on the site. What would be useful for me is hearing the three of you say what you think is wrong with that argument. One answer is about content, the misinformation content that you all, well intentioned as your companies are, try to curtail. But the argument is more broadly about the business model, and that it is about addiction, isn't it? What is he missing?
>> I'll say two things I hope will be helpful. For us, this focus is on the long term, and I will give one example. In January 2018, we put out a post announcing we would prioritize content from family and friends over, say, news content; it was called meaningful social interactions. We suspected it would lead to less time spent on the service, and it did: it led to people spending tens of millions of fewer hours on Facebook every day. But that was something we did because we thought, longer term, it was more important for people to see content they would find meaningful, so they would continue to use the site. That is the long-term picture. The other thing is the teams I work with, including engineers focused on safety issues, removing content like hate speech, and engineers focused on the way we reduce and label disinformation on the site. A key statistic for them is the prevalence of violating content; that is their goal, and we put out public reports on it. So that is an example of how we are focused, long term, on making sure we stop abuse.
>> I want to be clear, I am not targeting the three of you; my opening statement was very sincere. I think there is a danger in politics and governance where, if you agree there is a problem, then there must be a definitive regulatory solution that can be fast and easy, and on the other hand, if you're not persuaded by the regulatory solution, you have to deny there's a problem. I am sort of in between on this, and I don't have clarity about the regulatory fix, but I think we should admit that there is a problem. In the last 12 to 14 years, as we consume more
digital information, that has correlated with benefits but also some very real costs, and I don't think it is just your companies. There are reports in the New York Times about their own internal deliberations, how they would like more Americans engaging in healthy content, but they are just printing money right now over the course of the last four or five years, because engagement is much higher when people are angry. When the content is angry, it leads to more engagement, and I don't think any of you are going to dispute that. But I would like to stay where the question was a few minutes ago: I would love it if you could tell me what you think is wrong with the argument.
>> We are focused on serving the public conversation, including having a place where people can control their experience. When we talk about algorithms, Twitter does just one thing: we do tweets. As we talk about algorithms, we have algorithms designed to show what is relevant to you. As for screen time, or how much time you spend on a service, I think what is relevant is that, as a user of Twitter myself, I rely on the site to see what is happening in the day and what people are talking about, and then I log off and move on with my day. It is important to look at that and recognize that algorithms can also be helpful in providing a more valuable experience for people.
>> Sure, but the reality is, given the products being produced and our own instincts: is it or is it not true that when somebody tweets something anger-invoking and outrageous and it goes viral, but then two hours later they realize they were wrong and correct it, the correction usually gets something like 3 percent of the traffic the original outrage drew?
>> ...I think you have to look at the greater picture.
>> We are basically out of time. Do you have anything to say about where Mr. Harris is wrong? That would be useful right now; we are not getting direct engagement. I think we are hearing responses that are only around the margins.
>> I think in the opening statements there were, but I would make two quick points. First, misinformation is not in our interest, and neither is a model that relies on advertisers chasing single pieces of content. We want to build relationships over the long term, and that is why it is right in the product, with things like timers and "take a break" reminders. Those exist so we can build relationships with users for the long term.
>> The tools to manage our level of engagement that you are bringing out are important innovations, so I applaud those tools.
>> Thank you, Senator. Chairman Durbin.
>> Thanks a lot, Chairman Coons. It's a pleasure to be with you.
First, two disclaimers. Number one, I'm a liberal arts lawyer, not nearly as tech savvy as I should be for this hearing. Number two, my experience in government, which has been over several decades, suggests we are slow to recognize issues that are fast-breaking, and we have a very spotty record when it comes to responding in a thoughtful way. I hope this is an exception. If I could address Mr. Harris first, aloha, and then ask this question: I've been reading and trying to understand why the European Union is taking such an apparently bold and innovative approach to this subject while we are so slow to respond. Any thoughts?
>> I think we value free speech above other values, and that makes it hard to regulate an environment where the composition of what constitutes speech is contested. One of the quotes we reference is that the fundamental problem of humanity is that we have medieval institutions and accelerating, godlike technology. It isn't an insult; it is just to say the challenge you pose is how we deal with the first derivative of these issues. We are still having conversations we had four years ago about spreading misinformation and things like that, while the acceleration of new kinds of threats and issues, the growth rate of that, is going far faster than the growth rate of our capacity to mitigate or respond to the threats. I was speaking with someone in the fact-checking network who said there are 200 billion pieces of content going through Facebook and they get about 100 fact-checks per day. Think about a bank being that overleveraged, and how much risk that carries. As the senator said, there is a decentralized incentive for yellow journalism, a slow climate change of culture, 2 percent more outrage at a time, as you were saying. It seems like you get a few more hits if you keep doing it, and suddenly that heats up the global conversation, and how --
>> I want to take you down a different path as I try to read and absorb the European Union's risk-based approach to this AI issue. They say two things they find unacceptable, to use examples. One is manipulating human behavior. I think that is at the heart of it, as I understand the explanation. I've heard people from Facebook talk about making your Facebook experience more meaningful, and folks from YouTube and Twitter talking about healthy dialogue, but the bottom line is it appears, like it or not, there is a factor where our behavior is being affected by what we are seeing and what we are reading and experiencing. And that seems to violate the basic premise of the EU regulation. The second one, in the extreme, is social credit scoring, which they use as an example that apparently is rampant in places like China, and takes the manipulation and analysis and algorithms to the point that people are disqualified from being able to get on a fast train in China because of their social credit score. On the manipulation front, that
would disqualify just about all three of the companies here, including TikTok, which doesn't get much attention and is dominating children's minds on a daily basis. If you have seen the film The Social Dilemma, we think about persuasive technology, and these technologies are designed to be persuasive. When you hear Twitter talk about healthy conversations, they are still persuading and manipulating, but trying to do it in healthy ways. When you hear Facebook talk about this, they are still creating the dopamine loop, getting you to invite your friends, dripping out notifications one at a time, but in the most meaningful way that they can. They try to do the best they can, but to the senator's point earlier, it is almost like listening to a hostage in a hostage video: nothing they say makes sense until you realize they have their business model held to their head, causing them to say the things they are saying.
>> The kids online act is a small but very important part of the conversation. Thank you, Mr. Chairman, and thank you for joining us today. Senator Hawley.
>> Thank you, Mr. Chairman, and thank you to the witnesses. Mr. Harris, I would like to come back to you if I could; I think this is very important. You talk about the business model of the companies before us today, all of the dominant platforms. Your point, I think, and I would like you to elaborate, is that the business model of these companies is advertising, which works by getting more and more users to spend more and more time online, so that the companies can get more personal information to sell more stuff.
Can you tell us more about this core business model of these dominant platforms?
>> It all comes from that original advertising business model. To figure out a way to grow, they had to borrow Twitter's model, so you can follow any user, and then they send the follow-up notifications. Every day or two you see you have two more people who followed you, and that creates a viral loop, a click loop, that gets you to come back into the service. Right now they are testing increases to the suggested users you may know: can we get them to invite more friends? That is literally what they are doing right now, saying here are some channels you might want to follow, and they are very good at predicting the next person. I've even done this myself. It creates this treadmill, and it's almost like a digital drug: if I uncheck the notifications for these services, they dial up how aggressively they show the notifications and e-mails, and they will send more and more, each of the services. They wouldn't do this if the business model wasn't preying on addiction, and these are from the persuasive frameworks that all the people I came up with in the tech industry learned: just, little by little, do what works, and it keeps us churning on the treadmill of attention.
>> The amount of control that this business model gives the companies over our lives is absolutely unbelievable. There's that infamous experiment that Facebook ran on its users in 2014, on over half a million users, to see if it could depress them or change their mood by tweaking the algorithm that decided the content they saw. And the amazing and scary thing is that they could, in fact, directly influence users' moods and change how the users felt about the day, or about a particular story or a particular event, by tweaking their algorithm, because they controlled what the users see.
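The experiment described here, shifting mood by adjusting the feed's composition, amounts to re-weighting posts by sentiment. A minimal sketch of that idea, with hypothetical, invented field names and not Facebook's actual experiment code:

```python
# Toy sketch of sentiment-biased feed ranking (hypothetical; not
# Facebook's actual 2014 study). Each post carries a sentiment score
# in [-1, 1]; a mood_bias of +1 promotes positive posts, -1 promotes
# negative ones.
def rank_by_mood(posts, mood_bias):
    """Sort posts so those matching the desired mood come first."""
    return sorted(posts, key=lambda p: p["sentiment"] * mood_bias, reverse=True)

posts = [
    {"text": "Great news from friends", "sentiment": 0.8},
    {"text": "Upsetting story", "sentiment": -0.6},
]
uplifting = rank_by_mood(posts, +1.0)  # positive content surfaces first
downbeat = rank_by_mood(posts, -1.0)   # negative content surfaces first
```

The point of the testimony is that whoever sets `mood_bias`, not the user, decides which emotional diet the feed serves.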
They controlled the interactions, and they increasingly control how much time people spend online. These companies say they are about social media, and really, they once were; they used to be social networks. Back in 2006, when Facebook introduced the news feed, there is a post Mark Zuckerberg wrote called "Calm down. Breathe. We hear you," in which he assured users that it wouldn't be a big deal. He said, we've been getting a lot of feedback and we think these are great products, but we know many of you are not immediate fans. For those worried about privacy, he went on, nothing you do is being broadcast; rather, it is being shared with people who care about what you do. Now it is advertisers that are in the business of trying to manipulate us. So let me ask you this: the companies have been able to do this, they've been able to manipulate content, push particular content to users, and try to interfere. With this amplification and behavioral advertising, why should they get Section 230 immunity for the algorithmic amplification?
>> Well, Section 230 is a difficult and complex debate to get into. What I would say is, whether the companies want to or not, even if they took their hands off the steering wheel, they are still manipulating people's emotions, because the way the machines value engagement means literally the most extreme content would rise to the top, and that would also be a form of manipulation. If you compare side by side how many restrictions we place in a research setting, where you're going to experiment on 14 people, they are on a regular basis tinkering with the minds of 3 billion people daily with no oversight. What we need to do is compare side by side what regulations and protections would be applied in the one domain but not in the other. The focus should be more about the design and fundamental oversight of the way the platforms fundamentally operate. Part of that is money, and we all know that.
These companies spend enormous sums of money trying to influence this body, the regulators, and the federal government, and it's time that this Congress did something about it, to show who's in charge. We certainly have to take action to stop this kind of rampant manipulation.
>> A recent report from MIT Technology Review found that Facebook's ad delivery algorithms discriminate based on gender, and there are numerous other examples. They describe what factors are used to target ads, including information about you from your Facebook account such as your age and gender. Facebook also allows ads to be targeted based on things like zip codes, a proxy for race. Are you concerned that Facebook's reliance on these factors results in discrimination? Can you give a yes or no answer, please?
>> Senator, thank you. Making sure that the service is fair and without discrimination is always a priority, and we have policies in place to prevent discriminatory targeting. I'm happy to follow up with you, in the interest of time, on some of the specifics.
>> How does Facebook ensure that it doesn't violate the law when targeting ads for housing, employment, and financial services?
>> Senator, thank you. We have policies around when people can use certain targeting criteria. So, for instance, we don't allow some of the more sensitive criteria that you have mentioned for certain types of advertisements, such as financial service advertisements. But I can follow up with you on some of the specifics of how we ensure that.
>> Have you addressed the concerns raised by the MIT Technology Review report that found discrimination on the basis of gender, for example? Have you addressed the concerns raised by this report?
>> Making sure we do not have discriminatory ads has been a priority. We have made a number of improvements, and I can follow up with those.
>> I hope you can follow up; they are the ones who raised the concerns. When Facebook has been sued for discrimination by its ads and targeting algorithms, it has often hid behind Section 230. Earlier, I joined Senators Warner and Klobuchar in introducing a bill that would remove Section 230 immunity for violations of the civil rights laws. We are not talking about the total removal of Section 230, just as it relates to civil rights. Do you agree Facebook should not be immune under Section 230 when it discriminates in delivering ads?
>> Senator, thank you for the question. I agree that there should be regulation to hold the social media companies accountable. I think there is a lot to consider in crafting that regulation, including whether to eliminate immunity for violations of the civil rights laws and discrimination.
>> This is Dr. Donovan. I think that we do need to have some carveouts, especially ones that would require oversight of advertising. When automation is matched with amplification and there is no review of the ads, we don't know who we are doing business with. Not only does Facebook not know, but then we see a bunch of different games played with some of the rules related to advertisers disclosing who they really are. Overwhelmingly, over the pandemic, we have seen all kinds of scams and hoaxes that violate people's rights, so we do need this.
>> You think there should be a carveout for civil rights violations?
>> I'm not sure exactly of the way the legislation is written, but I would be in support of that. The important thing to recognize is that the companies make money by not having oversight and discernment.
>> I would like to ask one last question. Apple released a software update that gives users control over whether apps are able to track them when they use other apps and surf the web. Giving users greater control strikes me as a positive thing. This is for Mr. Harris: what do you think the impact of this change will be on the issue of the misinformation we are talking about today, and on the system of surveillance capitalism that companies like Facebook and Google rely on more broadly?
>> I applaud Apple for taking these steps, but it's like a small carbon tax on the advertising business model, the business model that treats us as the product and not the customer. Removing the micro-targeted advertising, the hyper-personalization across applications, think of it as moving closer to the 1970s model of billboards that are depersonalized. That isn't completely true, because they can still target you from within their own ecosystem, so it isn't going to affect misinformation or polarization that much.
It's a subtle carbon tax pertaining to this model.
>> Thank you. Thanks, Mr. Chairman.
>> Thank you. Senator Grassley.
>> I use Twitter regularly, and Facebook and YouTube are also popular social media platforms where users disseminate views and opinions to millions of users around the world. Just here in the United States, in 2019 an estimated 72 percent of Americans used at least one social media site. People can make their voices heard, share their opinions, and interact. Increasingly, however, these big tech companies are deciding what we can and cannot say and infringing on Americans' freedom of speech. I constantly hear from Iowans about their concerns with the control that big tech has over the discourse in this country, as well as the bias that the platforms have against conservative voices and middle America. I've heard numerous stories of posts being deleted, businesses removed, and creators silenced. Many times this happens without warning and with very little, if any, due process. These platforms have monopoly power, with very few competitors; they are not constrained by market forces, and consumers have no alternative. Big tech is also immune under Section 230. This, combined with monopoly, allows them to censor, block, and ban whatever they want. We must look at the power and control that a handful of companies have over speech and the silencing of voices. So my question: when you decide to remove content from your platforms, do you believe that you do so consistent with First Amendment free-speech principles such as viewpoint neutrality? And if you believe that you are doing that, why is it that conservative voices are consistently the ones being censored?
>> Senator, thank you for that question. We are a platform for people across the spectrum, and I do believe we enforce our policies without regard to political affiliation. I do hear questions from both sides of the aisle, if you will, about whether or not we are fair in our content policy enforcement, but I can tell you that we enforce without regard to political ideology.
>> Senator, I also appreciate the question here. We want YouTube to be a place where
diverse viewpoints are heard. When content is removed from a creator, there is an e-mail explaining that. For the last quarter of 2020, we received 223,000 appeals and reinstated 83,000 videos, showing we don't always get this right, but we certainly want to apply our policies evenly.
>> We love to see the tweets on Twitter, and as you've probably appreciated, Twitter wouldn't be Twitter if we didn't have diverse viewpoints, and we welcome the diverse perspectives that make up the service. We have rules in place, and we enforce them impartially. I know people have concerns and believe that companies like ours should be more transparent. That's why we have put forth three solutions that we think would go a long way to addressing some of the concerns. First is increased transparency. Second is more user control and choice over algorithms. And third is enhanced due process: if we make a mistake, users should have the ability to appeal and have decisions reviewed against our terms.
>> Content gets removed by the platforms stating that it is misinformation, when it's actually just viewpoints that liberals disagree with. What are your platforms doing to ensure they are not using pretextual reasons to censor differing opinions? And then my last question.
>> I'm happy to take this one. Twitter has taken a very narrow scope on misinformation at this time. We have three categories that govern our policies: first is manipulated media, second is civic integrity, and third is COVID-19 misinformation. On potential misinformation beyond that, we want to bring more voices in to help us with that work.
>> Senator, we do have robust community guidelines to help keep people safe, and to your point, those guidelines are available for any user to review.
>> The third witness didn't want to comment. Given the problems of disinformation, extremist content, and bias, if there is anyone with an alternative model on disinformation and content moderation, can they succeed in the marketplace?
>> Thank you for the question, Senator.
There's something in the literature on network effects, where value grows with the number of participants, and what we have between the social media platforms is winner-take-all: once you have a dominant platform, there's rarely an alternative. The market concentration means that even if there are alternatives, if you are a venture capitalist you are going to fund companies knowing the likely exit pathway is acquisition by the existing giants, and we've all learned the lessons of competing platforms being acquired by these companies.
>> I think that point can't be lost, because there are regulations we can put in place, that's one way to do it, and you can do both at once. But if you have a company that buys out everyone from under them, in the words of Mr. Zuckerberg, then we will never know if companies like Instagram and WhatsApp would have developed the bells and whistles to help with this misinformation, because there is no competition. Do you want to comment more on that?
>> Just as you said, what if we are letting the alternative reality be subverted? What if they were going to spend billions more on content moderation because they want to be the platform people can trust? They can't make that choice, because Facebook bought them, and now they are integrated into however much Facebook is working on the problem. There is also a way of sweeping the garbage under somebody else's rug in the collaboration between the platforms; in some cases we have seen, look how bad their problems are, because we don't want you to pay attention to ours.
>> Dr. Donovan, in your research you have looked at medical misinformation and the role of the social media platforms. Can you comment on the sheer size of the platforms and how they affect the problems that we should be addressing?
>> Thank you, Senator. I look forward to reading your book. The problem with medical misinformation, of course, is one that was exacerbated by the pandemic. The anti-vaccination movement has a history of using social media in order to bypass the public understanding of science, but during the pandemic, the way in which the tech companies have turned to address medical misinformation is really like putting a band-aid on an open wound. Right now what we need is a comprehensive plan to ensure people have access to local and accurate information, like the public interest obligations we have had in broadcast. What we have instead is a very slapdash approach to whatever the breaking news event of the day is. So I do think that the size of the platforms, and the way in which medical misinformation scales much more quickly than the needed intervention, is probably the most pressing public health issue of our time.
>> Thank you. A recent poll found nearly one in four Americans said they wouldn't get the vaccine. A recent report from the Center for Countering Digital Hate identified 12 specific content producers as the original source of an estimated 65 percent of the coronavirus vaccine disinformation online. Another senator and I, after a hearing he conducted, sent a letter about this to Jack Dorsey and Mark Zuckerberg.
>> Senator, thank you, and thank you for the letter as well. I know that we have assessed that content and removed the
3:29 am
accounts. but more broadly, and i think this is an important issue, we know we have to get it right when it comes to the misinformation around covid-19. our focus is to help people get vaccinated, working with local and national health authorities and making sure we are directing people to authoritative health information, including where they can get vaccinated. we have connected 2 billion people with those authoritative health resources. and since the very beginning we've been partnering with the cdc to remove content that contradicts their guidance and could lead to an increased risk that people contract or spread the virus, and that includes removing 12 million pieces of safety-related misinformation.
>> regarding what data is used in the social media companies' algorithms, do you give customers the ability to control the content and advertising systems? >> senator, thank you. we do give people a number of controls. that includes everything from the ability to download your own information, to controlling who can see your posts; you can opt out of the algorithms, and you can see who can see your content at any time. >> is the company then supportive of the bill on privacy from senator cantwell? >> i would have to have the u.s. public policy team follow up with you on that.
thank you. i appreciate that. i can see senator coons is raising his eyebrows at me, and i can see that enough is enough. >> the chair welcomes the questions from the author of the outstanding book. [laughter] >> i will ask one more question then of ms. culbertson from twitter, just on the original question that i had asked about the disinformation dozen, as we call them. some of what i'm saying about market power isn't as applicable to twitter, as a more competitive platform, but could you at least answer the question here about this disinformation? >> thank you for the question. we continue to review this particular group of individuals
against the policies, and we have taken enforcement actions on several of these individuals. the team will be following up this week with all of the details around that. and i just want to note that while we are competitors, we are partners in addressing a lot of harmful content issues. we've collaborated and worked together on terrorism, sexual exploitation, opioids. so i take issue with the claim made earlier; there are various collaborations across the industry to address the most harmful content, and we invest heavily in the partnerships around covid-19. we worked closely with the cdc and the white house to ensure credible information on the service. >> do you take issue with something i said, or -- >> no, senator.
one of the other panelists suggested that we have a competitive edge in addressing these harms, where we actually collaborate in a lot of these areas. >> thank you. appreciate it. >> we now go to senator kennedy. can you hear us, senator kennedy? >> i can hear you, mr. chairman. can you hear me? >> the time is yours. take it away. >> it seems to me that in the guise of giving consumers what they want, a lot of our social media platforms first use surveillance to identify a person's hot buttons, and then they use algorithms. this is called, as you know,
optimizing engagement. the social media platform wants people to visit often. that's how they make more money advertising. in any event, the person we are talking about is shown content that pushes those hot buttons as a result of these algorithms -- not every time, but quite frequently. that is why you can still find truth in america, but you have to go off-line to do it. mr. harris, i would like a straight answer from you. i have a bill, and others have a similar bill.
a bill that would say section 230 will no longer apply to a social media platform that optimizes engagement. if you were a senator, would you vote for it? >> i would have to see the way the bill is written. >> don't do that to me, mr. harris. give me a straight answer. we all want to read the bill. would you vote for it or not? >> i would be in support of a bill that had technology companies not measure their success by engagement metrics -- >> that's swell, but -- i don't like to waste time in these hearings -- if the bill said no section 230 immunity if you optimize for engagement, would you vote for it? if you don't want to answer,
just tell me. >> it sounds like an interesting proposal. i'm sorry for not being more clear. >> you're being very clear. you are dodging the answer. dr. donovan, would you vote for it? >> when it comes to bills, the reason why i'm in research is so i don't have to make those decisions. but i would say, when we are talking about what the companies optimize for and the way in which they have optimized -- >> would you vote for the bill? >> i would vote for some form of the bill that required oversight of these algorithmic systems. >> all right. we have these hearings, and i appreciate them, but we never get down to it. we all talk.
at some point you've got to get down to it. and that's where i'm coming from. i'm not trying to be rude. i'm just trying to get an answer out of you. both of you are critics, and i'm looking for solutions, not just for us all to show how intelligent we are. i'm going to run out of time, so let me ask -- i'm thinking about introducing a bill to take the principles of the general data protection regulation in the eu -- i never thought i would do something like this -- take the principles of the general data protection regulation in the eu and have those principles apply here in the united states.
would you support that bill? >> senator, i focus on content, but i know there are people in the company that would follow up on that. >> would you vote for -- >> we certainly comply with gdpr, but we welcome longer conversations about this. generally, yes. >> yes? >> yes, senator. >> if you were a senator, would you vote for it? >> i'm not an expert. i can tell you on privacy what we want to do is give the users -- >> i know you want privacy, but
your whole model is finding out everything you can about me other than my dna, and you may have that for all i know. i'm not trying to be rude, but i can't tell you the number of hearings that i've been to. i learn something every time. but when we get down to it -- what are we going to do about it? -- no one wants to answer, and you're supposed to be our experts. i would strongly encourage you to come to these hearings with positions, on behalf of yourselves or your companies, that you are ready to take. don't just say you'll work with us. we are trying to solve a problem. >> senator kennedy, i have to ask you for a yes or no answer. do you realize you've gone over time? >> i realize that, yes, and i realize everybody else has gone over time. >> take another minute and then
please, wrap it up. >> i'm done. >> thank you, sir. the next senator joins remotely. >> thank you, mr. chairman, and to the panel. ms. bickert, much of the public discussion is focused on facebook's moderation practices, but there is a compelling argument that the problem is not the quality of the moderation or the nature of the algorithm, but the underlying business model -- your scale and power. while you clearly have an obligation to remove certain content, for example incitement of violence, i am not excited about huge multinational tech companies becoming the arbiters of legitimate speech and expression, especially when the decisions about what you may
suppress algorithmically are often made in secret and under heavy pressure from politicians, advertisers, and public opinion. so, on the subject of your scale and power, i would like to ask: does facebook anticipate that it will embark on further acquisitions of competitor services, in light of the suit that you are already facing from the ftc and a number of state attorneys general alleging that the acquisitions of instagram and whatsapp constitute anticompetitive activity? >> senator, thank you for the question. of course i can't comment on any litigation. i can tell you, because i'm responsible for the content policies and a lot of what we do around moderation, that we do take very seriously both the balance between expression and safety, but also the need for transparency. and so, with, for instance, our
algorithm, over the past few years we have put out a number of blog posts and other communications where we've explained what goes into the ranking algorithm and any significant ranking changes. we've introduced a tool where, on any post on facebook, you can click on it and it will tell you why that post is appearing and where it came from. and then, significantly, we have made it more visible how you can opt out of the newsfeed ranking algorithm, so if people want to see the content -- >> ms. bickert, respectfully, i heard some of these points earlier in the hearing, and i'm not asking you to comment on any specific litigation. to be clear, my point is that everything you just said about improving the quality of the
moderation practices, disclosing some of the conditions in the algorithm, does not address the root issue. the root issue is that facebook has too much power, and one company perhaps should not be such a massive gatekeeper that determines what ideas prosper and what ideas don't. that's why the question i asked was: does facebook anticipate that it will embark on any further acquisitions of competitor services? >> senator, this isn't my area at all -- i am focused on content -- but i can tell you that from where i sit, with my perspective, it is a highly competitive space, and i know that, not least from being the parent of two teenage daughters, both of whom use
social media; there are a lot of services people use. nevertheless, i do think it is important that we recognize that these content moderation rules are really important, and we have to be transparent about what they are so people can make informed choices about whether they want to use the services. >> ms. bickert, apple's recent ios update will require apps to seek additional authorization from users in order for those apps, presumably some of your products included, to continue tracking users across the internet. tracking cookies and other technologies allow facebook and other entities to monitor virtually all of their users' web browsing activities. i want to commend apple for taking this step and ask whether you will take significant steps in the short term to reduce your ubiquitous tracking of your
users' web activity, location data, and the technology that they use, and whether you will consider extending the feature that allows the removal of personal data from facebook to include the removal of personal data not just from facebook, but from any entities to whom facebook sold such data -- including, in your contracts with those to whom you sell data, a provision that they must delete all data that they purchased from facebook at the command of the user. so again, two questions: will you follow apple's lead in limiting the tracking of users across the web, and will you include such a provision in contracts with those to whom you sell data, requiring them to remove the user's data on request? thank you so much.
>> senator, that is the way the advertising works: advertisers select from different targeting criteria and then we deliver the ad to a relevant audience. it is the advertisers that are selecting that information, and we will follow up with you. >> thank you, mr. chairman. i appreciate the witnesses and the hearing today. and i think that all the
witnesses are hearing that americans are pretty much fed up with the arrogance of big tech. you are seeing it from all sides, and certainly with twitter. jack dorsey had his contempt for congress on full display at the house energy and commerce committee hearing, i think it was last month, when he tweeted a poll on possible answers to a question, basically treating the hearing as a joke. so do you agree that it is unacceptable for the ceo to tweet while he is testifying before congress, yes or no? >> he is the ceo and creator -- >> i asked for a yes or no, but i will say i am pleased you are
looking and appearing more presentable than your ceo did in his testimony before us. when he behaves disrespectfully in a congressional hearing, before the american people, he embarrasses twitter. it's just proof of how out of touch it is with the rest of the country. in my opinion, big tech is destroying free speech, competition, and original content. it is responsible also for what it is doing to our children's minds. this is something that bothers me: the power of these platforms and their algorithms to drive social media addiction among babies, toddlers, tweens and teens. this is something that should terrify each of us.
youtube deploys these algorithms, and they do it because it pays well. our children's brains are being trashed so silicon valley ceos can pocket billions of dollars in ad revenue. youtube's algorithms create an automated reward system: videos with little educational content are amplified to unsuspecting toddlers and kids and to their unsuspecting parents. with the transparency act, we would force them to disclose whether secret algorithms are manipulating customers. youtube has a history of
exploiting children to profit off of their viewing history. isn't it true youtube has illegally collected data on kids under age 13 in violation of coppa and marketed the data to companies? >> thank you for the question, senator. that was a novel interpretation of the law, and we worked directly with the ftc to reach an agreement about how we treat this content. >> you reached a settlement and paid a record $170 million fine. the ftc order doesn't require you to police the channels that
designate their content. however, the commissioner said youtube should have to take the extra step of creating an algorithmic classifier. i know your engineers are capable of designing algorithms for all sorts of purposes. is the engineering team capable of designing an algorithm that can identify child-directed content and turn off behavioral advertising? >> they are capable of that. we do require the creators to designate their content, but we also run classifiers, as you mentioned, to check the system and determine
what content is appropriate to be made for children. also, just to be clear, we do not allow personalized advertising on made for kids content. >> are you prioritizing profit over children? >> no, senator. child safety on the platform is our top priority. we build our products with parental controls. >> the ftc is prioritizing children and taking steps to safeguard them. under the settlement you promised to stop illegally marketing ads to children, and videos can now be labeled as made for kids, as you just mentioned. they will no longer include a comments section or screens that allow viewers to subscribe to
channels. so, is this behavioral advertising turned off? >> yes, senator. we do not serve personalized advertisements on made for kids content. >> i have a question for you that i will submit for the record. thank you, mr. chairman. >> thank you. senator blumenthal. >> thank you to all of the witnesses for being part of this hearing and to the chairman for holding it. it's a very, very critically important topic and hearing, and i apologize that i am late coming in, because i was chairing the commerce subcommittee on consumer protection, dealing with covid-19 and approving the bipartisan act
which i led alongside senator moran. we've known for a long time that hate crimes are underreported. the videos of individual crimes on facebook, twitter and youtube, no matter how horrifying, are only part of the story; we need improved hate crime reporting and more effective action against hate crimes. we know that the platforms play a role in hate crimes and hate
speech, and researchers recently found that as many as one in three americans experienced hate-based harassment online. following those concerning findings, facebook spoke last week about the break-the-glass measures it was taking to dial down the hate and disinformation on its platform. ms. bickert, you wrote a post about turning the dial down on hate speech, violence and incitement as the company was
anticipating the chauvin trial. facebook does in fact have a dial for hateful content. can the company dial it down now? and to all of the representatives from youtube and twitter as well as facebook: will you commit to providing access to data for independent researchers to help us better understand and address the scourge of hate online? >> senator, thank you. let me start by saying i completely agree that hate speech and hate crime are very concerning, and it is a priority for us. i will point to one quick example, which is that we started
publishing the prevalence of hate speech, so we now publish how much hate speech we missed for a significant subset of content. ... >> let me give you an example. in the run-up to the trial, for instance, we took some very aggressive
measures with the distribution of content that might violate our policies. with the chauvin trial itself, those measures aren't perfect: content that does not violate our policies gets flagged by technology and its distribution reduced when it should not be. there is a balance between trying to prevent abuse and providing freedom of expression and being fair, so we take those measures only when there is additional risk of abuse. >> thank you. my time has expired. >> thank you, senator. we will have a second round of questions, only by the ranking member and myself, as they
are actively voting on the floor. let me thank our five witnesses again and those who have come to question. i understand that 70 percent of the views on youtube are driven by the recommendation algorithm. with 2 billion users worldwide and over 1 billion hours watched every day, that makes your recommendation algorithm incredibly powerful. members of the public can see how many times any video has been viewed, but what they can't see is how many times it has been recommended. i understand youtube does collect this information and gives it to content providers, so if a video was taken down by youtube for violating content policies, we have no way to know how many times it was recommended by your algorithm before it was
ultimately removed. can you commit today to provide more transparency about your recommendation algorithm and its impact? >> thank you for the question. generally speaking, if content violates our policies we want to remove it as quickly as possible. as you will see in the public community guidelines enforcement report, of the violative videos, more than 70 percent were removed. you do have an interesting idea; we are always looking to expand transparency. one way we have done this recently is our violative view rate, which is the percentage of views on the platform that violate community guidelines -- last quarter around .16 percent. >> i just want to know if you are willing to release the data you are already
collecting about how many times videos that violate your content standards had been recommended by your algorithm. >> i can't commit to that today, but it is an interesting idea and we will work with you on that. >> i look forward to getting an answer as soon as reasonably possible. several publications have reported that significant portions of misinformation and polarizing content on facebook come from a small set of readily identifiable users that generate a lot of activity. dr. donovan, can you comment on how these hyperactive users create problems? then i want to ask if facebook intends to tackle the challenge. >> yes. you are referring to the buzzfeed article on the internal memo from facebook, which shows there is a power law at play: misinformation tends to be most potent with a
highly coordinated small group of people working around the clock to get their groups in front of the public. what's interesting about reading that document internal to facebook is that even as they countered these super-inviters, their own systems and teams could not overcome that coordinated small network. there is a lot the company needs to do to address those adversarial movements; we are looking at ways of stopping the formation of those groups. >> the wall street journal reported last year that facebook considered seriously, but ultimately declined to take, measures to put limits on these users' activities. there was a proposal reportedly that would have
reduced the spread of content disproportionately favored by the so-called hyperactive users. can you speak to how facebook is intending to approach this issue? >> let me say we did actually put a restriction, a limit on the number of invites a user could send out in a day to a group, in place during the election. but i also want to speak to the point that was raised. i completely agree there are sophisticated networks of bad actors that try to use social media to achieve their objectives. the way those networks work is something that we have been focused on for the past few years, and i know there is expertise on this panel in this area, in terms of identifying
those that are engaged in shell games and other attempts to hide their authentic identities. we have removed 100 such networks since the beginning of 2017. we are public when we do; we publish the results, and we have gotten better generally at identifying fake accounts near the time of creation every day. >> thank you. i look forward to delving into this further with you and other folks at facebook. let me ask two or three more questions. i know it is common for employees at major tech companies to be required to sign nondisclosure agreements as a condition of employment. that's a common practice in the businesses i practiced law for. do each of your companies
generally require your employees to sign nondisclosure agreements, yes or no? >> i don't know the answer, but i will follow up. >> i want to be careful because i am not an employment lawyer, but i do believe we have standard agreements to protect proprietary information with employees. >> i would want to come back with the answer, but of course we have certain provisions in place to make sure people are not sharing private data they are handling. generally, though, they can share their perspective; sometimes employees will tweet about the different services. >> thank you. in general, my concern is that if a former
employee from one of your companies wants to question or criticize the company or its decision-making, they might risk facing legal action. i welcome more input following this hearing on that dynamic and whether or not nda's prevent some of the most relevant information about algorithms from getting out to the general public. i appreciate the information shared about how algorithms work at a high level, but many independent researchers say it is critical to know the details of the algorithms to understand how the components that drive decisions are weighted -- how much a metric like meaningful social interaction is correlated with the engagement that mr. harris has asserted the business model of social media requires you to accelerate. given the immense impact of your algorithms, potentially
both positive and negative, greater transparency about how your algorithms work and how you make those decisions is critical. could you speak to whether your companies are considering the release of more details of this information, or enhanced transparency measures or audits of the impact of your companies' algorithms, moving forward? >> i'm happy to start. we constantly think about how we can be more transparent about any actions we take, including algorithms. that's the aim of our initiative, and we are happy to provide more details. in the interest of time, i will say that we have interdisciplinary groups at twitter looking at the algorithms and machine learning, and we will be
sharing our findings with the public to be open throughout the process. more broadly, we totally agree we should be more transparent and provide more consumer control and choice, and we are committed to procedural fairness. we have invested in an independent project called bluesky, aimed at open protocols, which could potentially create more controls for those who use our services as well as transparency. >> my last comment is this. mr. harris spoke forcefully and pointedly about how the business model of social media is attention harvesting. after a decade of positive and negative impacts of social media, which has accelerated
into one of the most important forces in society today, we have more often than not seen the toxic impacts of division and misinformation. he has asserted the entire business model is based on dividing society. as we transition into a digitized society, in order for western open democratic societies to survive, we have to develop more humane standards for how social media works. it is my hope that the next time we convene, it might be to consider what sorts of steps are possible, necessary or appropriate to make the progress mr. harris speaks about. >> thank you, chairman, and to all five of you for being here. i want to put another question to mr. harris, but before i start i would like to briefly address colleagues on both sides of the aisle, because both republican and democratic colleagues today have said a number of things that presume
we know more about the problem than we have actually identified -- most readily of all in the 230 discussion. i think i am a lot more skeptical than maybe most on this committee of pushing to a regulatory solution at this stage, and in particular the conversation about 230 has been off point to the actual topic at hand today. much of that zeal to regulate is driven by short-term partisan agendas, and it is more useful for us to stay closer to the topic the chairman identified for the hearing. it's important for members of congress to constantly remind ourselves that we are bound by first amendment constraints in our job, and some of the lines of questioning on the right and left on the panel talk as if the first amendment is a
marginal topic; we have to be objectively concerned about that. yet we have to draw a distinction, under the first amendment, between the true public square, regulated by the powers of government, and the companies we talk about. amy klobuchar has raised important issues about scale, but they are private companies. there are a number of distinctions we should be attending to a little more closely than maybe we did today. mr. harris, tell us what discussions you have seen or have been a part of about potential different business models besides the attention-centric model. can you give us the blue-sky view quickly? >> a fantastic question. obviously there are subscription models like
wikipedia, but i want to make an additional distinction: it is the design model. it works because of a design that says we are all unpaid journalists creating content. previously they paid journalists $100,000 per year to write content to get people to look at it -- that is the cost. but what if you can harvest all of us, taking five minutes of our moral outrage, to generate attention production for free? we are the unpaid labor for that attention economy, duped into sharing information with each other. the technology companies don't pay for the editorial side at $100,000 per year; we have algorithms instead, and in that process harm keeps showing up in all the blind
spots. then someone says there's this problem, and the company will respond and say, now we deal with these three problems -- but the model destroys democracy faster than the friends in the community raising the alarms can keep up with it. that is fundamentally the core design model, more than the funding model. it could be funded for the public interest, with revenue put into a regenerative fund; there are a bunch of models we could do. just like energy companies have a perverse incentive -- they make more money the longer you leave the lights on and the faucets running -- but under a regulated model, after a certain amount each day they double or triple the charge to disincentivize your energy use, and that goes off the balance sheet into a regenerative fund. imagine a technology company profiting
from engagement keeping only a small portion of that, with the rest put into a regenerative public interest fund -- the fourth estate, fact checkers, researchers, things in the public interest like that -- because we need to organize a comprehensive shift for the digital open society to compete. >> if we had more time, i was going to ask more questions about whether you think, in your role as an ethicist, there is a debate about the optimal user time on these platforms, and whether there is a distinction between a fully consenting 49-year-old like myself and how those platforms look at a 13 or 17-year-old. so i will
just echo the thanks to all five of you for the discussion. >> let me conclude by thanking all five of the witnesses for appearing today and my 11 colleagues who engaged in robust questioning. i appreciate the willingness of the witnesses to answer direct and difficult questions about their platforms and business models. i'm encouraged to see these are topics broadly of interest, where i believe there could be a bipartisan solution. none of us want to live in a society that, as the price of remaining open and free, is hopelessly politically divided, where kids are hooked on their phones and exposed to reprehensible material. but i am conscious of the fact that we don't want to needlessly constrain the most innovative,
fastest growing businesses in the west. striking that balance requires more conversations. i look forward to working with the ranking member on these matters, whether by a roundtable or additional hearing, or by seeking regulation or legislation -- on how best to align incentives with the rest of society and to ensure greater transparency and user choice. we have to approach these issues with humility and urgency; the stakes demand nothing less. members may submit questions for the record, due by 5:00 p.m. may 4th. with that, the hearing is adjourned. [inaudible conversations]