TV Sports Life, Deutsche Welle, February 15, 2020, 4:15pm-4:29pm CET
4:15 pm
In the summer of 2018 you were quoted as saying that you would need a year and a half to fully, quote unquote, retool these content and security rules. Are you saying now that you're comfortable that this work has come to an end?

Well, back then, obviously, we did not yet have 35,000 people doing content and security review. The AI to identify this type of harmful content did not exist 16 years ago at the level that it does today. So basically, the way that the company ran for the first 12 years was that if people in the community saw something they thought was harmful, they would flag it for us and we would look at it reactively. For a while I thought that was reasonable, but then we got to a point where we were a large enough company that we should be able to mount a multi-billion-dollar effort on content and security review. The AI technology has reached the point where we can now proactively identify a lot of different types of content, so we have
4:16 pm
a responsibility to do that. But going from reactive to proactive on this was a multi-year journey. Elections are one type of area that we're worried about, but there are about 20 different areas of dangerous and harmful content that we track: everything from terrorist propaganda to child exploitation to incitement of violence to hate speech, going down the list. There are about 20 different categories, and the way that we judge ourselves is that every six months we issue a transparency report on how much of this type of content we are finding on the service, and what percentage of it our AI and other systems identify and take down before it is reported to us by someone else. In an area where we're doing quite well, terrorist propaganda, for example, 99 percent of the terrorist propaganda from ISIS and al-Qaeda and the like that we take down, our AI systems identify and remove before anyone on our network sees it. That's a good result, and we need to make sure that we get there on all of the different
4:17 pm
categories of content. Some are harder than others. Hate speech, for example, is a particularly challenging one, because we have to be able to train AI systems to detect really small nuances. Is someone posting a video of a racist attack because they're condemning it, which probably means they should be able to post it, or are they subtly encouraging other people to copy that attack? Multiply that linguistic subtlety by the roughly 150 languages around the world where we operate, and by the possibility of making mistakes and taking down the wrong kind of thing. But we're making progress: 24 months ago, zero percent of the hate speech we took down was taken down proactively, and I think today we're at around 80 percent. So it's accelerating. It's a hard problem, and I don't know if we'll get that one to 99 percent anytime soon, but AI keeps improving, that's a tailwind, and as we keep investing in the technology we'll be able to keep doing better and better on this. But it's a long-term investment.
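A minimal sketch, in Python, of the proactive-rate metric described in this answer: of all the content taken down, what share did automated systems flag before any user reported it. The function name and the counts are illustrative assumptions, not Facebook's actual reporting code.

```python
def proactive_rate(flagged_by_systems_first: int, reported_by_users_first: int) -> float:
    """Share of actioned content that automated systems caught before any user report."""
    total_actioned = flagged_by_systems_first + reported_by_users_first
    if total_actioned == 0:
        return 0.0  # nothing was actioned in this period
    return flagged_by_systems_first / total_actioned

# Illustrative counts chosen to match the rough percentages quoted above:
print(f"terrorist propaganda: {proactive_rate(9900, 100):.0%}")   # 99%
print(f"hate speech:          {proactive_rate(8000, 2000):.0%}")  # 80%
```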
4:18 pm
Let me raise an issue that concerns many people and experts as a problem for society at large: the concern that the algorithms being used will present the democratic citizen not with an accurate reflection of reality but with a sort of virtual reality, so-called echo chambers and filter bubbles. Tell us how you think about this, and to what extent you believe it is harmful to the kind of societies we want, where people can have the full picture of information and are not confined to some rather narrow flow of information.

Yes. So clearly, in
4:19 pm
a lot of countries polarization is increasing, and that can be bad. The mission of the company is to give people the power to build community and bring the world closer together. Our whole thing is about bringing communities together, bringing societies together, and bringing the world together, so I obviously do not want our services to be contributing to polarization; to the contrary, I want us to be a force for bringing people closer together. The way that we do this is by helping people stay connected with the people they care about and build new kinds of communities. Most communities, people are not worried about: if you're joining a church community or a community around a sport, most people aren't thinking, okay, that's going to be polarizing. It's the more extreme ideological communities that I think people worry about, and there we do think we have a responsibility to make sure that if there are groups that are spreading a lot of misinformation, or committing other kinds of violations, or are just
4:20 pm
polarizing, we're not recommending that people join those groups. If you want to be a part of those groups, in general, as long as they're not violating our rules, that's fine; you should be able to do that, you should get to seek out the communities that you want. But we're not going to be a force for recommending them and trying to push you towards them.

Overall, I do think that the narrative in how some people talk about this is out of step with some of the research that has been done. For example, researchers at Stanford University studying polarization have come up with some results that kind of go against what you're saying. One result: they studied the US specifically after 2016, and they found that the parts of the population that were the most polarized were actually the ones that were the least likely to be using the internet at all. Now, that's correlation, not causation, but it definitely suggests that you want to look outside of just the internet as
4:21 pm
the primary factor that's causing this. More recently they did a follow-up study, a long-term study across many countries around the world, including a number in Europe, the US, Canada, Australia, a lot of different places, tracking polarization over time. What they found is that polarization is not trending consistently in every place. In places like the US it is growing quickly; in a number of other countries it's growing; in many places it is flat or consistent over the last 20 years; and in some places it's even down. But the internet and social media are pretty much everywhere, so if they were the dominant force causing polarization, you would not expect to see different trajectories of polarization in all these different places. Again, that does not mean we don't have work to do to make sure that we're a very positive force on this, but it should at least cause people to question what I think has become a very popular narrative, that this is primarily because of the internet and social media. From
4:22 pm
a lot of the research that's being done, it's not clear that that's true. But we also have work to do: we don't want to merely not be a negative force, we want to be a positive force on this as well.

All right. I'll open for a couple of questions in just one second, but I have one other question to ask you. I understand that after this weekend with these nice people here in Munich you're going to go to Brussels. Actually, you could have the discussion with the EU Commission right here, because half of the EU Commission is actually in the room, or maybe somewhere in one of the bilateral rooms. But my question is this: the subject, obviously, when you are in Brussels is going to be regulation. If so, which, and what kind? I'd like to hear from you what kind of regulation you think is good for our societies but also acceptable to the company. What kind of regulation would you suggest should be embraced by the EU,
4:23 pm
and what should be rejected, in your view?

So I think that there needs to be regulation in at least four areas touching our company: elections and political discourse; content moderation more broadly; privacy; and data portability. The reason why I really believe this needs to happen is that there are a lot of decisions in these areas that are really just balances between different social values. What should be the balance between free expression and safety? What is political discourse, and what's the boundary between that and political interference? To what extent do we want companies to be locking down data, and to what extent do we want to encourage them to make it more open, to encourage more innovation and competition and academic research? I believe our responsibility is to build the operational muscle to
4:24 pm
proactively enforce whatever the policies and regulations are: to make sure that we can fight election interference, take down content that is dangerous, and have good auditing and controls on the data that we hold for people and businesses, so that people can have confidence in that. But at some level, I do think we don't want private companies making so many decisions about how to balance social equities without a more democratic process. So where I think the line should be drawn is that there should be more guidance and regulation from the states on, basically, what discourse should be allowed (take political advertising as an example), or, on the balance between free expression and what some people call harmful expression, where you draw the line and what kinds of systems companies have to develop. In the
4:25 pm
absence of that kind of regulation, we will continue doing our best. We're going to build up the muscle to do it, to basically find harmful content as proactively as possible, and we will try to draw the lines in the right places. But I actually think, on a lot of these questions that involve balancing different social equities, it's not just about coming up with the right answer; it's about coming up with an answer that society feels is legitimate, that it can get behind and understand: you drew the line here on the balance between free expression and safety. It's not that there's one right answer; people need to feel like, okay, enough people weighed in, that's why the answer should be this, and we can get behind it. And I just don't think a private company will ever have the ability to create that kind of legitimacy. That's why I'm arguing for this: I think it's fundamentally very important for building the kind of trust that will be necessary in the internet and in our industry. And even if I'm not going to agree with every regulation in the near term,
4:26 pm
I do think it's going to be the thing that helps create trust and better governance of the internet, and that will benefit everyone, including us, over the long term.

Where were those hands? I think the one that I saw first was Ronen Bergman. Can Ronen have a microphone, please?

If I had more time with you I would have loved to also include a question about your currency project, but here goes. Ronen Bergman from The New York Times. Thank you, Mr. Zuckerberg. Two quick questions. The first, and I know you and your team have already addressed this, but I didn't understand the previous explanations, so maybe you can elaborate: if a journalist publishes a story in a paper, it's not just the journalist who can be sued for libel but the paper as well. Yet if someone posts something on Facebook, I think
4:27 pm
your standard is that Facebook should not be liable and could not be prosecuted. In fact, there were a few times when I highlighted to your team things that were published that were not true, and the answer was that it was not for you to judge whether something is true or not. Second question: Facebook and WhatsApp sued an Israeli hacking company, claiming that it used vulnerabilities in order to hack into phones. It is not confirmed that it did, but the company says governments use its system, its hacking software, in order to catch the real bad guys. So maybe this lawsuit will damage governments' ability to work against terrorists, proliferators, and other criminal organizations. Thank you.

Sure. So, in terms of the regulatory framework on content: I do think that there should be regulation on harmful
4:28 pm
content. I think there's a question about which framework you use for this, and right now there are two frameworks that I think people take from existing industries. There's the newspaper and existing-media model, which is the analogy that you drew, and then there's the telco-type model, where the data just flows through you and you don't hold the telco responsible if someone says something harmful on a phone line. I actually think where we should be is somewhere in between. The newspaper analogy is clearly wrong, because more than 100 billion pieces of content are shared on our services every day; the idea that we should have some kind of human editor go and check each one to make sure it is okay is just not analogous to what happens at a newspaper or other media company. Now, as AI gets better, we will be able to more
4:29 pm
efficiently filter out more of the bad stuff, and I think we have a responsibility to do that better and with increasing precision. And I think that companies should have to publish transparency reports like we do, on the volume of content that they find or that is reported to them, and should have to publish what percent they're able to...
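A back-of-the-envelope sketch of the scale argument in this answer, assuming the 100 billion pieces-per-day figure quoted above and the 35,000 reviewers mentioned earlier; the per-item review time is an invented assumption, purely for illustration.

```python
# Figures from the talk: ~100 billion pieces of content per day, 35,000 reviewers.
# The per-item review time is an assumption for illustration only.
ITEMS_PER_DAY = 100_000_000_000
REVIEWERS = 35_000
SECONDS_PER_ITEM = 10

items_per_reviewer = ITEMS_PER_DAY / REVIEWERS                     # ~2.9 million items each
hours_per_reviewer = items_per_reviewer * SECONDS_PER_ITEM / 3600  # ~7,900 hours per day

print(f"{items_per_reviewer:,.0f} items per reviewer per day")
print(f"{hours_per_reviewer:,.0f} review hours per reviewer per day")
```

On these assumptions each reviewer would need roughly 7,900 hours per day, which is why the newspaper-editor analogy breaks down at this scale.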
