Big Data and Politics -- C-SPAN, August 17, 2017, 11:43am-12:54pm EDT
11:43 am
StudentCam 2018 starts in September with the theme "The Constitution and You." We're asking students to choose any provision of the U.S. Constitution and create a video explaining why the provision is important. >> Next, the future of big data analytics with a Stanford University scientist analyzing an individual's personality traits based on their social media activities. Also, how these profiles have been used in business as well as politics, including our presidential campaign and Brexit voting. [Captioning performed by the National Captioning Institute, which is responsible for its caption content and accuracy. Visit ncicap.org] Tonight we're going to explore one of the most important facts about our present and future. Large-scale data collection and analysis has changed and will continue to change politics in the developing and, most especially, the developed world. Increasingly, private firms are assembling profiles of
11:44 am
-- for example, every eligible voter in the United States -- with a depth of information that would have made J. Edgar Hoover weep with envy. [laughter] The use of this so-called big data, and the inferences made from it, serves a singular purpose: to craft and deliver messaging that will shape the future behavior of an individual -- from buying a particular brand of whitening toothpaste, to discouraging a citizen from choosing one ride-sharing service over another, to voting one way or another on profound decisions like the U.K. Brexit and the U.S. presidential election. These profiles are assembled from what our guest this evening calls our digital footprints: the traces of our daily lives that are captured, often sold, and analyzed. Many of us do not even realize we are leaving these footprints behind through our use of credit cards, our web
11:45 am
browsing, our online purchases, our smartphone use, what we listen to on streaming services, and what we watch on cable TV. Our guest tonight, Dr. Michal Kosinski of Stanford University, will introduce us to his work making assessments of individuals from these digital footprints, particularly their Facebook likes and profile pictures. He will also help us begin to see how assessments like these are being, and could be, used to shape our political and social reality. Join me in welcoming Michal Kosinski to the stage. [applause] Michal: Hi, David. Good evening, everyone. David: Thanks again for helping us peer behind this curtain of the use of big data and our
11:46 am
digital footprints to assess and influence each other. Maybe we could begin by having you describe your work for us and what you believe can be learned about us from our Facebook likes and profile pictures. Michal: Well, thanks again for having me here. I am a computational psychologist, which means that I am working mostly with data, in particular big data. So instead of spending time with research subjects in my lab, running experiments or maybe learning about people using surveys, I look at the digital footprints that you so nicely introduced before, the ones we are leaving behind while using digital products and services. It's a great time to be a computational psychologist. It's a great time to be me at the moment. [laughter] Because you guys are -- well, we
11:47 am
are all leaving an enormous amount of digital footprints behind. Back in 2012, IBM estimated that we're leaving about 50 megabytes of digital footprints per day, per person, which is an enormous amount of data. If you wanted to back it up on paper, by printing it out on letter-sized paper in size 12 font, and you wanted to stack it up -- just one day's worth of data -- the stack of paper would reach from here to the sun four times over. Well, hopefully, you guys -- [laughter] -- we'll store them in the museum over here. We're all generating an enormous amount of information. And now this information, of course, contains our trail of
11:48 am
behaviors, thoughts, feelings, social interactions, communications, purchases -- even the things that we never intended to say. I'm not sure you realize that if you type a message on Facebook and then decide, OK, it's 2:00 a.m. and maybe I drank too much wine, I probably shouldn't be sending it, and you abandon the message and close the window -- guess what. The message is still being saved and analyzed. And this is not just this one platform. In most cases data is preserved even if you think that you have deleted it. In my research, my main goal is to try to take this data and learn something new about human psychology or human behavior. One of the byproducts of doing that is that I will produce models that take your digital footprints and try to predict your future behavior.
11:49 am
Or try to predict your psychological traits such as personality, political views, religiosity, sexual orientation, and so on. Well, what was really shocking to me when I started working in this field is how accurate those predictions are. So this is one shocking thing. The other shocking thing is that in fact those models are also very difficult to understand. I know a computer can predict your future behavior. A computer can reveal your psychological traits from your digital footprints. But it's very difficult for a human scientist to understand how exactly the computer is doing it, which brings me to this black box problem, which basically means that it might be that human psychologists or human scientists will be replaced one day by AI running science. But in the meantime, you know, we have those models that we don't really understand very well -- how they do
11:50 am
such an amazing job at predicting your future behavior, psychological traits, and so on. I worked with Facebook likes a lot, not because Facebook likes are the best type of digital footprint we are leaving behind -- not at all. Facebook likes are not so revealing. Why? Because liking happens in a public space. So when you like something on Facebook, you probably realize that your friends will now see what you have liked. So you wouldn't like anything embarrassing, or something really boring, or something that you want to hide from your friends. But when you use your web browser, or you search for something on Google, or you go and buy something, you basically have much less choice. You will search for things you
11:51 am
would never like on Facebook. You would visit websites that you would never like on Facebook, and you would buy stuff that you would never like on Facebook. Like, you would buy medicine that is very revealing about your health, and most of us don't really like the medicine we are taking on Facebook. Which basically means that if you get access to your credit card data, your web data, your search data, records from your mobile phone -- these digital footprints would be way more revealing than whatever I can do using Facebook likes. So whatever findings I'm coming up with, they are just conservative estimates of what can be done with more revealing data. You can actually see that entire industries -- not just one industry -- are moving towards
11:52 am
basically building their models on top of the data we are producing. And my favorite example is credit cards. How many of you guys have actually paid for a credit card recently? OK, we have a few people that maybe didn't do their research online properly, but most of us, including me -- we don't pay for credit cards. Now, guess what. If you are not paying for something -- think about it for a second. A credit card is just an amazing, magical thing that allows you to buy stuff without carrying cash around. There's a complicated network behind it, computers crunching data, and so on. Yet we're not paying for it. Why? Because we're the product of the credit card company. And when you -- it's not a secret. You can go to the website of Visa or Mastercard or any other credit card operator and you will see that they see
11:53 am
themselves not as a financial company anymore. They started as a financial company -- it was helping to channel payments. Now they see themselves as a customer insights company. By observing the things you're buying, and when you are buying them, and how much you're spending, they can learn a lot about you on the individual level, but they can also extract interesting information on the broader level. They can see that, you know, recently people in San Francisco started buying certain things, or going to certain restaurants or whatnot, and this is very valuable information that can be sold. So basically, if you're not paying for something, you're most likely the product. So now think about your web browser that you probably didn't pay for, your Facebook account, your search engine, and one of the gazillions of apps
11:54 am
you have on your phone, and now think about how much data you're sharing with the others. David: Is your use of Facebook -- I guess at the time, initially, you were a graduate student at the University of Cambridge, correct? And at the time, I believe, Facebook likes were public. Anyone could see your Facebook likes. So did that make that kind of data available to you, since it was just public on Facebook? Is that what led you to use that data? Michal: Yes. You're pointing out here a reason why -- another reason for using Facebook likes -- which is that I was very lucky to get a huge dataset of volunteers that donated their Facebook likes to me, as well as their political views, personality and other psychological scores, and
11:55 am
basically other parts of their Facebook profiles. In 2006 or 2007, my friend David Stillwell started this online personality questionnaire where you could take a standard personality test and then get feedback on your scores. It went viral. More than six million people took the test. And half of them generously gave us access to their basic Facebook profiles. When you finished your test, we would ask if, in return for us offering this interesting thing, you would be willing to give us access to your Facebook profile, which we would later use for our scientific research. And more than six million people, in fact, took the test, and we got around three million Facebook profiles.
11:56 am
At the beginning, in fact -- you know, people like to say, oh, when I graduated from high school, I had already planned, you know, this research 20 years later. No, it wasn't the case. In my case, I kind of stumbled into it, kind of got into this research by accident. What happened is I was working with traditional personality questionnaires. And traditional personality questionnaires are composed of questions such as: I'm always on time for work, or I like poetry, or I don't care about abstract ideas. And I had this dataset of Facebook likes where basically people like, or don't like, poetry, abstract ideas, or reading. What struck me is: why would we even ask people this question if we can just go to their Facebook profile, look at their Facebook likes, and just,
11:57 am
you know, fill in the questionnaire for them? [laughter] So I started running those machine-learning -- simple machine-learning models -- that take your Facebook likes and try to predict what your personality score would be. It worked pretty well, which actually was pretty depressing for me, because I spent so much time developing those bloody questionnaires and a computer can do the same thing in a fraction of a second for millions of people. Then we started -- we had other data in our dataset. We were like, OK, so it can predict personality. I wonder if it can predict religiosity, sexual orientation, whether your parents were divorced or not. And each time we asked this question, the computer would think for a few seconds and then it could predict it. It's amazing. Pretty quickly, we were suspicious. So at the beginning I would
11:58 am
rerun the models with different pieces of data or rewrite my entire code, thinking that I must be doing something wrong, given that a computer can look at your Facebook likes and predict, with very high accuracy, close to perfect, whether you're gay or not. Most people don't really like anything obviously gay on Facebook -- well, some do, but it's actually a very small fraction of people. For most of the users we were running predictions for, this was really based on the movies they watched or the books they read. It looked very counterintuitive to me at the time that you could do it. Now I'm a bit older and have spent more time running those models, and it's actually pretty obvious to me. Let me maybe try to offer you a short introduction to how those models work. It's actually pretty intuitive. Look, if I told you that there is this anonymous person and they like Hello Kitty -- it's a
11:59 am
true story, I'm told. [laughter] You would probably be able to figure out, if you know what Hello Kitty is, that this person is female, young, an iPhone user, and you can go from here and make some other inferences about her that are actually very likely correct. 99% of people who like Hello Kitty are women. So you don't need a rocket scientist, or even a computer scientist, to make inferences of this kind. Most of your Facebook likes, or most of your purchases on Amazon, or most of the locations that you visited with your phone, or most of the search queries that you put into Google, are not so strongly revealing about your intimate traits. But it doesn't mean they are not revealing at all. They are revealing some of them, to a very tiny degree.
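To make the mechanism concrete, here is a minimal sketch in the spirit of the published like-based prediction work, which reduced a sparse users-by-likes matrix with SVD and fit linear models on top; the data below is randomly generated placeholder data and the exact settings are assumptions, not the study's actual code.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_users, n_likes = 2000, 5000

# Sparse users-by-likes matrix: X[i, j] = 1 if user i liked page j.
# (Random placeholder data; a real matrix would come from observed likes.)
X = sparse_random(n_users, n_likes, density=0.01, random_state=0,
                  data_rvs=lambda n: np.ones(n)).tocsr()
y = rng.integers(0, 2, size=n_users)   # placeholder binary trait label per user

# Compress the huge sparse matrix into a few dense dimensions, then fit a
# plain linear classifier: each like is a tiny clue; the combination is the signal.
model = make_pipeline(TruncatedSVD(n_components=100, random_state=0),
                      LogisticRegression(max_iter=1000))
print("cross-validated AUC:",
      cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
# On this random placeholder data the AUC hovers around 0.5; with real likes
# and real labels, the same pipeline is what turns weak signals into predictions.
```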
12:00 pm
12:01 pm
But when you combine thousands of them, they add up to an accurate prediction of what your traits basically are. And this is the paper we published in 2013. I was very excited about the promise of this technology, and I am still excited about it. It is used to improve our lives in many different ways; we don't even realize how many different ways it improves our lives. Think about Netflix or Spotify, or your newsfeed, which is so engaging that people spend two hours a day on average, if I remember correctly, looking at it. Now, they don't look at it because it's bloody boring -- the AI behind it made a prediction about what your character is and adjusted the content in such a way as to make it most engaging. Now, there are also downsides, which
12:02 pm
we'll be talking about today. And -- well, basically when the paper was published in 2013, it got a lot of press coverage, but at that time most of the press coverage was like, this is so cool, you can predict whether someone is -- I don't know -- a Republican from Facebook likes. A nice, shiny gadget. I was like, no, no, no, wait, you have to realize this has tremendous consequences for the future of our society. No, it's so cool you can predict it from likes -- and that is how it went. But now, interestingly, this is how the general public treated the results; policy makers and companies, though, took notice. For instance, two weeks after the results were published, Facebook changed their privacy settings in such a way that Facebook likes were no longer public. Before 2013 -- I think it was March when we published the paper -- before
12:03 pm
that, likes were public for everyone to see. You didn't have to be someone's friend on Facebook to see everything they liked. Now, our paper, our work, showed that by seeing what you liked, I can also determine your sexual orientation, political views, and other intimate traits people are not happy to share. It was a great thing that Facebook took notice and, to preserve your privacy, basically switched that off. But you also had the U.S. and E.U. governments take notice and start working on legislation to protect their citizens from some shortcomings of these phenomena. David: Let's pivot and talk about political uses of this kind of work. I want to hear what you think about how private firms are
12:04 pm
using big data analytics to shift voting results one way or another by micro-targeting messaging that is defined by its intended persuasion rather than its accuracy. One of these firms is Cambridge Analytica, involved in Brexit and the Trump campaign. Much of what we know about this developing story is due to recent investigative journalism, especially by The Guardian. I thought I could provide our audience with a quick review of the story before you tell us what you think about it. Cambridge Analytica is a U.S. firm owned by Robert Mercer, which until 2016 had Steve Bannon as vice president
12:05 pm
and corporate secretary. Mercer is one of the most successful hedge fund managers, a backer of Breitbart News, and a supporter of the Trump campaign. Bannon has since left his executive positions at Breitbart and Cambridge Analytica. Cambridge Analytica reportedly employed social media data mining, along with government records and data sold by corporations, as we discussed, to build a definitive dossier on every voter -- first used by the Ted Cruz campaign and then by the Trump campaign to microtarget and influence voters. Relatedly, a Canadian firm, AggregateIQ, has been a central consultant for this kind of work with the various U.K. organizations that pushed for the
12:06 pm
Brexit vote in 2016. Cambridge Analytica appears to be the owner of AggregateIQ's intellectual property. Time magazine reported that U.S. investigators are looking at Cambridge Analytica in the investigation of Russian activity in the U.S. presidential election, which evidently may have included Russian elements using techniques like those used by Cambridge Analytica. In short, quite a tangled web. Tell us about how your work relates to this whole thing, and how we should think about Cambridge Analytica and the Trump campaign. Michal: Very good questions. There was a lot there, but first of all, we
12:07 pm
don't really know how effective Cambridge Analytica was. When you listen to Cambridge Analytica, they start by telling you how amazing and efficient they were, but when they realized that some of the things they had done had become public, and some things were not entirely legal, they suddenly changed their spiel, and now they say it didn't work at all and we're just making stuff up. Which means they are either lying now or were lying then, so it is difficult to say. What I can tell you for sure is that, first of all, we have a lot of evidence that we produced in my lab showing such approaches work really well. We also see it is not only the Trump
12:08 pm
campaign or the Brexit campaign; we see all of the serious players now employing methods of this kind in their campaigns. And in fact, Barack Obama was the first major politician to do it on a massive scale, and I don't really remember any outrage, especially on the left side of the political spectrum, at that time. Hillary Clinton not only spent three times more money than Donald Trump on doing targeting on social media, but also hired better people to do it, in my opinion. But she lost, and she didn't lose because Trump was using some kind of magical methods; the difference was massive and caused by something else. People ask me, did
12:09 pm
data analytics and political marketing win the election? The answer is, well, yes and no. Yes, because it is a fact of political campaigning; it is a fact of life when running a political campaign, like TV spots and articles and putting ads in the papers. But because everyone is using it, it is not giving anyone any unfair advantage, and the only unfair usage here I can think of is Barack Obama, who was the first one to use it on a massive scale, which might have given him some unfair advantage then. But also, I think, people -- humans -- we kind of like to focus on the negative. It is great that we focus on the negative; this is clearly a great psychological trait that allowed us to be as
12:10 pm
successful a species as we are, probably even too successful to some degree. But let's put aside focusing on the negative and risky, and think about the advantages of politicians being able to personalize their message. There are a few interesting outcomes of that which people fail to notice. One is, if I can talk with you guys one-on-one -- that is really what you do on social media, using algorithms to help you talk one-on-one about the things most relevant to you. The algorithms help me understand your character, your interests, your dreams and your fears, to make the message more interesting and relevant to you. This, first of all, has one outcome, which is that messages became more important. In the past, I could say 'yes we can,' spend a gazillion dollars
12:11 pm
on showing it on every TV station, and I could be successful. Moreover, I couldn't really do anything else, because I lacked the ability to communicate one-on-one. I had to settle on some kind of average message, a lowest common denominator, a message that would reach the broadest population yet wasn't aimed at anyone in particular. Now I can talk with you about things relevant to you, and about something else relevant to someone else. The message, the content of the program, became more important. And it is not only that, because this in turn has led to other important outcomes. If I make a message more relevant to you, you become more engaged in politics, which is great for democracy -- when we have voters more engaged in the messages they are getting, more people engaged in politics, and this is just
12:12 pm
great, I believe. But second of all, it also makes me think about, okay, what is important for you, David? You know, in the past I could just say, yes we can. Now I have to think hard about what is important to you, and perhaps thinking about it would make me update my beliefs about what my political agenda should be. Specifically when it comes to minorities. If I can just broadcast my message on TV, I focus on the majority interest, right? This is the logic of broadcasting. If I talk with people one-on-one, I can now develop an interest, naturally, as a politician, in what they have to say and what they are interested in. So this is the first change: more and more importance of the message over emotional slogans. We have seen it in the recent election with politicians like Trump, but also Bernie
12:13 pm
Sanders, bringing people into the political process that traditionally were not so interested, who were disengaged from politics, who thought there was nothing in it for them. And I believe that even if you disagree with the new people that were brought into the political process, you should still recognize it is good for democracy when people are engaged in politics. Another outcome of personalized marketing: if I can talk to you about things relevant to you, I don't need to show you the slogan 20 times. I can have one serious conversation with you through social media. I'm exaggerating now, but basically I'm saying I don't need to spend so much money communicating; I can communicate with you using more relevant messages, which is another great outcome for democracy, which is decreasing the entry cost into politics.
12:14 pm
Again, we've seen it in the recent election -- we've seen it with both Donald Trump and Bernie Sanders, who were not establishment candidates. They didn't have much money compared with other politicians that were in the race, and they also didn't have experience in running big-scale presidential campaigns, so they had to do it on a budget, in a new way. What they started doing was talking to people directly about the issues people cared about, which basically means that -- and you may disassociate yourself from one of these two candidates or both of them -- I believe it is good for democracy that you don't need a lot of money and industry and establishment backing to be part of the political process. We've seen the same with Brexit, for instance. Again, I'm not a big fan of Brexit myself -- it seems actually the people campaigning for Brexit are not big fans of Brexit itself -- but,
12:15 pm
you know, some of this campaign was just a rag-tag political militia, and yet they were able to enter the political process. David: I'd like to challenge you a little bit on this characterization of it. You know, saying that this micro-targeting is producing messages that are more relevant -- you know, the relevance of these messages is their persuasive quality. It has nothing to do with their accuracy, nothing to do with their contribution to civic discourse. The be-all and end-all of the messaging is persuasion. What I see in this work is actually a sort of assault on consensus reality. If we are all -- if there is
12:16 pm
pervasive and endemic and ceaseless micro-targeting, especially of messages that are intended to be politically persuasive, the result of that, I fear, is that people will end up living in essentially different informational worlds and no longer have the ability to agree on what reality is. How can you have democracy if you can't agree on what is real and what is not real? [applause] That is not against you, just -- Michal: I'm glad you guys are clapping, because you know where there was a perfect consensus reality, where there was perfect consensus reality? In Soviet Russia -- perfect. You had one TV station, government approved, one truth, government approved, and if you knew something that other people didn't know, you
12:17 pm
probably ended up in a concentration camp. The fact that you guys clapped was a bit ironic to me. I'm sorry -- let me actually -- why? Thank you. No, actually, I reacted too aggressively to that, and it is because I was born in a country behind the Iron Curtain where we had a perfect information bubble -- and you know what, this bubble was exactly the same for everyone. America luckily tried to break that information bubble by dropping printing presses on Poland so the underground press could produce and spread some 'fake news,' as the communist regime called it. So yes, basically there is this myth now that people believe -- and you know
12:18 pm
what, I believed it for the longest time. Everyone tells you, now we live in information bubbles, and recommender systems are giving us information selected for us, and that somehow is making this society worse -- we now know different things, our bubbles are not fully overlapping. I was thinking, this is bad, this is bad, very bad, everyone says it is bad. But then, in the context of my findings, I actually started thinking about it, and look, guys, I now strongly believe -- and I did some research on that as well -- that it is not bad at all. First of all, there is no evidence -- okay, let me start differently. Humans are just -- we are just destined to live in an echo chamber. It is called confirmation bias in psychology, and it basically means that if you have a certain world view and you get new
12:19 pm
information, and this new information contradicts your world view, you will have a tendency to reject it. Not that you always reject it, but you like information that confirms your world view even more. This is not a bad trait to have; if we didn't have it, you would be crazy -- any new information would be like, of course, you know, these are the people running the world, very interesting. No -- luckily, our brains like to keep consistent mental structures. One of the effects is that we will have a preference for information that confirms our views, and recommender systems recognize that; they will give us information that confirms our views, so we kind of do, indeed, live in echo chambers.
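As an editorial aside, the feedback loop he describes -- a system learns your preference for confirming information and then ranks content by similarity to it -- can be illustrated with a toy content-based recommender. Everything below (the two-dimensional "opinion space", the item positions, the user history) is invented for illustration and is not from the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical items placed in a 2-D "opinion space" (invented for illustration).
items = rng.uniform(-1, 1, size=(1000, 2))

# Suppose the user has so far engaged only with items on one side of that space.
history = items[items[:, 0] > 0.3][:20]
profile = history.mean(axis=0)            # inferred preference vector

# Rank everything by similarity to the profile and take the top 10.
scores = items @ profile
recommended = items[np.argsort(scores)[::-1][:10]]

print("user profile:       ", profile.round(2))
print("mean recommendation:", recommended.mean(axis=0).round(2))
# The recommendations cluster near the user's existing position: "information
# that confirms your views" emerges from similarity ranking alone.
```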
12:20 pm
But our echo chambers are larger than they have ever been in the history of humankind. Let me give you an extreme example. If you were born in a little town anywhere in the world, including America, 100 years ago, you only knew what your teacher or your community told you. And guess what -- 99%, perhaps, of that news was fake. Now you live in a society where it is not only other people, teachers and your friends at work; our social networks and journalists are trying to constantly give you additional information, to get you out of your comfortable bubble and give you a personalized view of the world. What happens is your bubble is expanding, and in the process of the bubbles expanding, they are also not perfectly overlapping. So now if you, David, live in your bubble -- that, by the way, is larger than whatever bubble
12:21 pm
was occupied by any human before now -- and I live in my bubble, our bubbles grow apart, which sometimes makes me go, oh my god, he's in a different bubble, an information bubble, tragedy, tragedy. If you lived in Soviet Russia, you were never able to look at people with a different information bubble. You never thought, oh my god, an information bubble. You were happy and content in a small, tiny echo chamber. Now, the related topic of fake news. I have not seen any evidence -- and I really encourage you to send me whatever evidence you have -- that we are somehow surrounded by more fake news today than we were 10 or 20 or 30 or more years ago. Quite the opposite. [laughter] Michal: I'll take that. Thank you. Quite the opposite. The amount of valid information
12:22 pm
that we have, not only as a society but also as each individual person in this room, is larger than whatever people had in the past. One of the outcomes of us having so much valid information out there, and being able to access any information that humanity has ever produced, including sensitive data -- because it is within one click of your mouse on the internet -- is that if, you know, you encounter fake news, if you heard something, you can very quickly debunk it. What happens in the present, current society is that we've gotten so quick at debunking that you basically hear the phrase fake news, fake news, all the time, which somehow creates an impression that we are surrounded by fake news. Well, of course we are -- there is
12:23 pm
a lot of fake information around, but it is also way less than ever in history, in the past. And if someone has evidence to the contrary, I would really appreciate it -- please send it to me. David: Well, I think, you know, from my perspective, having those bubbles overlap is a really important thing. But let's turn -- let's pivot to talk about some of your very interesting work that's going on right now working with photographs of people's faces, particularly Facebook profile pictures. In recent talks, you say that these neural network machine-learning algorithms you employ can identify a man as straight or gay from a picture of his face, or identify a woman as extroverted or
12:24 pm
introverted from a picture of her face. How is that possible? Michal: Well, so, yes, in my recent work I got interested in whether a computer algorithm can determine intimate traits of people based on a still image of their faces. Now, when I mention it, especially when I mention it to people in academia, they say, my god, we lost Michal, he was a scientist, and it is very bad. What they keep forgetting is that, in fact, humans are great at predicting intimate traits, at determining intimate traits of other people from their faces -- so good at it that we don't realize we are doing it all the time. Let me give you an example. Gender is a pretty intimate trait, yet you have no trouble determining it from a short glimpse of someone's face.
12:25 pm
Emotions are intimate, psychological, latent traits, yet we can quickly detect other people's emotions just by a quick glimpse at their face. Even when people are trying to hide their emotions, we are still able to detect them. Now think about race, think about health issues, like certain genetic disorders we can recognize by looking at someone's face. But also, another kind of expression of genes is recognizing that someone is a child of someone else. When you say, you look like your father, what you are saying is: I looked at your face, read your genome, and it is similar to the genome of this other guy, so he seems to be your father. This is -- this is like a skill. Now, humans are good at predicting those intimate traits, but we are not good at predicting other intimate
12:26 pm
traits, such as personality or sexual orientation or political views. And this might mean there is no information about your political views on your face, but it also might be that your brain did not evolve, or did not learn, to extract this information from a human face. It might be that your sexual orientation is as clearly displayed on your face as your gender; it's just that the human brain is bad at reading it. And guess what? Computer algorithms can do it pretty well. In terms of accuracy for sexual orientation, more than 90% accuracy in distinguishing between straight and gay men; predictions of personality are comparable with a short personality questionnaire. How do computers do that? It's actually not easy to answer, because again, it is a
12:27 pm
black box. I would love to understand it, but my human brain is too weak to do it. It seems that computers are great at picking up little giveaways on our faces, on our images, and then putting together hundreds of little pieces of evidence into an accurate prediction of a given trait -- pretty much as with Facebook likes. And there is some evidence there, which is that if you take one picture of an extroverted person, you won't know whether they are extroverted or not. But if you take 100 pictures of extroverted people and make an average face out of them, suddenly a human being is able to distinguish reliably between extrovert and introvert. The information is there; it is just too weak for a human being to predict from, but for a computer, it is pretty easy.
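A rough sketch of how such a face-based classifier can be assembled -- conceptually similar to the published approach, which paired features from a pretrained face-recognition network with a simple linear model, though the placeholder data and exact settings below are assumptions for illustration, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-ins: in a real experiment each row of X would be the embedding of one
# face photo from an off-the-shelf pretrained face-recognition network, and
# y the 0/1 trait label for that photo. Random data keeps the sketch runnable.
n_photos, embedding_dim = 1000, 128
X = rng.standard_normal((n_photos, embedding_dim))
y = rng.integers(0, 2, size=n_photos)

# A plain linear classifier on top of the embedding is enough: it only has to
# find the directions in embedding space that correlate with the labeled trait.
clf = LogisticRegression(max_iter=2000)
auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")   # ~0.5 here, since the data is random
```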
12:28 pm
David: With the neural networks you are using to do this classification work -- I mean, they are explicitly and fundamentally black boxes in how they do things, right? You teach a classifier with pictures and you tell it, you know, yes or no, yes or no, and it starts to be able to, you know, identify pictures, cats, or whatever. But it is just a trained classifier, so in the case of your work on straight or gay faces, whether it's picking up some sort of subtle social and cultural information in the picture, or picking up something that's actually morphology of the face -- there's no way to know, fundamentally no way to know,
12:29 pm
right? Michal: We can test it, let's say by changing morphological features -- taking a picture and photoshopping some elements, like making some feature of the face broader or shorter, or whatnot, and seeing whether the computer will change its mind about you. David: Okay. Michal: By doing that, you can experiment with a computer and reveal what features of the face are predictive of being extroverted or introverted, or gay or straight. Actually, we could have a separate conversation just about what makes your face look extroverted or gay, but at the end of the day, it doesn't really matter whether those are morphological features or something cultural, or maybe environmental, that allows the prediction, because at the end of the day, we see the prediction is possible.
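The probing experiment he describes -- edit one part of the image and watch whether the prediction moves -- is essentially an occlusion-sensitivity test. A minimal sketch follows; the tiny stand-in model and random "images" are placeholders so the example runs on its own, whereas a real experiment would probe the actual face classifier with deliberate edits (hairdo, glasses, the shape of a feature).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
h = w = 32
X = rng.standard_normal((500, h * w))       # stand-in "images", flattened
y = rng.integers(0, 2, 500)                 # stand-in binary trait labels
model = LogisticRegression(max_iter=1000).fit(X, y)

face = X[0].reshape(h, w)
baseline = model.predict_proba(face.reshape(1, -1))[0, 1]

# Blank out an 8x8 patch at each grid position and record how much the
# predicted probability moves -- large changes mark regions the model relies on.
sensitivity = np.zeros((h // 8, w // 8))
for i in range(0, h, 8):
    for j in range(0, w, 8):
        probe = face.copy()
        probe[i:i + 8, j:j + 8] = 0.0       # "photoshop" the patch away
        p = model.predict_proba(probe.reshape(1, -1))[0, 1]
        sensitivity[i // 8, j // 8] = abs(p - baseline)
print(sensitivity.round(3))
```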
12:30 pm
And when people say, what if I somehow tried to change my hairdo and fool the algorithm? Guess what -- the moment everyone started doing that, the algorithm would learn that the hairdo is no longer helpful in predicting a trait, and it would move on. What is crucial and key in those studies showing you can take a face and predict intimate traits is the following: we can talk about introducing good policies and good technologies that will protect our data from being used to reveal our intimate traits, but at the end of the day, it is very difficult to imagine this working out. Why? First of all, it is very difficult to police data. Data can move across borders really quickly, can be encrypted, can be stolen without a person even noticing.
12:31 pm
If organizations such as the NSA cannot defend their data, how can we expect that individuals will be able to protect their data? Which basically means that in the future there will be more data about us out there. Now, algorithms are getting better at turning data into accurate predictions of our intimate traits, our future behavior, and so on. Moreover, if you gave me full control over my data, there is a lot I want to share with others. I want this discussion to be available to people afterwards, I want my blog to be read by people, I want my Twitter feed to be public, and most importantly, I want to talk without covering my face. So basically, my conclusion is that going forward, there is going to be no privacy whatsoever, and the sooner we realize that, the sooner
12:32 pm
we can start talking about how to make sure that this post-privacy world is still a habitable and safe and nice place to live in. People say, how can you say that -- we should still work on technology protecting privacy. Yes, let's do it, but you also have to realize this is a distraction, because it somehow makes people believe that maybe we could have privacy in the future. And this belief, I believe now, is completely wrong. And now let me stress the importance of having a conversation here about how the invasion of privacy, how revealing our intimate traits, can be used to, let's say, get us to buy products or maybe vote for candidates. Well, it's creepy, it makes me feel uneasy, and so on. But we have to realize that outside of our
12:33 pm
Western, liberal, free bubble, losing privacy can literally be a matter of life and death. David: Uh-huh. Michal: Think about countries where homosexuality is punished by death. A security officer can take a smartphone, take a photo of someone, and reveal their sexual orientation. This is really, really bad news for gay men and women all around the world. Think about political views, think about other intimate traits, like intelligence and so on. Basically, the sooner we make sure that we act, that we think, taking as a basic assumption that there is going to be no privacy in the world -- how do we change international politics? How do we make sure that the lives of sexual
12:34 pm
minorities, religious minorities, political minorities in other countries are preserved, even in the post-privacy world? I think this is the question we should be talking about now, and not how to urge policy makers to protect privacy better. Because guess what? We already lost. Not just a battle -- we have lost this war already. David: Well, we could really talk a lot about that, because, preparing for this, thinking about your very provocative position -- it's a post-privacy world, deal with it -- I really, truly wonder: is democracy possible in a post-privacy world? If we don't make the effort to somehow mitigate or prevent such a condition, does
12:35 pm
that make it sort of an existential question? I don't think we can solve that this evening. I want to turn to questions from the audience. Apparently lots of people have this question: can you change your digital profile by liking items you don't like, or by searching for misleading things -- a sort of digital chaff to cover your tracks? Michal: That's a great question. It's a great question, and my answer would be, no, you can't. Because maybe you can like some random things on Facebook, but first of all, this would make you look pretty weird to your friends -- people just wouldn't do it, right? Imagine posting 100 status updates where only one is real and the rest is some
12:36 pm
gibberish -- no one would do it, not to mention a computer would have no trouble figuring out which one of the 100 is the actual status update and which is not. So no, you cannot do it, because it would make all of the platforms such as Facebook and Netflix completely not fun anymore, but it would also make your life not fun. Imagine buying 100 random things with your credit card to hide whatever you actually wanted to buy. It's not possible. Moreover, let me tell you, even with Facebook likes it is not possible. Let me give you an anecdote. There was a TED talk that mentioned one of the results that I wrote about in my studies, which was that people who like curly fries on
12:37 pm
Facebook tend to have higher intelligence. The correlation actually is very small, because just the fact that you like curly fries doesn't make you smart; there is a little, tiny correlation, and when you combine it over a gazillion likes, you can then reveal that a person has high IQ. So there was this TED talk, and when this was mentioned, the next day the number of likes of curly fries on Facebook went through the roof. Everyone liked curly fries on Facebook. And guess what -- the algorithm discovered it. In the next iteration of the algorithm -- that is why we call it machine learning -- the machine learning realized curly fries were no longer diagnostic of high intelligence. David: Does that suggest that it's the pattern of data points, rather than any particular data point, that is the signal you can find
12:38 pm
for these traits? Michal: So obviously -- David: I love curly fries, of course. Michal: We all do. David: And everyone in this room. You know, it doesn't make sense that it could have anything to do with intelligence, except by some baroque argument about some cultural signal in liking curly fries, but I don't see it. Does it rather suggest it is not any particular data point, but patterns? Michal: The power here is in numbers. David: Uh-huh. Michal: The more data you have about someone, the more accurate the prediction will be. And, you know, even when you look at signal theory -- I didn't want to bring it up -- when you look at signal theory, if you have a real signal and then you add random noise around it, you just need slightly more of the signal and you are still going to discover what was -- what is the truth there.
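His "the power is in numbers" point can be seen in a few lines of simulation -- my own toy numbers, not figures from the talk: each signal carries a trace of the trait far too small to matter on its own, yet the sum of a thousand of them tracks the trait closely.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_signals = 5000, 1000
trait = rng.standard_normal(n_people)

# Each "like"-style signal is mostly noise with a tiny trace of the trait.
signals = 0.05 * trait[:, None] + rng.standard_normal((n_people, n_signals))

single = np.corrcoef(trait, signals[:, 0])[0, 1]           # one weak signal
combined = np.corrcoef(trait, signals.sum(axis=1))[0, 1]   # all of them summed
print(f"one signal:   r = {single:.2f}")    # around 0.05
print(f"1000 summed:  r = {combined:.2f}")  # around 0.8
```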
12:39 pm
So unfortunately, this is a great idea, but it's not going to work. David: Here is another representative question that many people asked. You argue that data analytics can be used to engage more people in politics, to which the writer of this question agrees; however, couldn't it also be used to dissuade voters from voting? Arguably, certain political parties benefit from lower voter turnout and may be incentivized to try to achieve this through the sort of thing we've been talking about. Michal: That's a great question. There is no question that trying to discourage people from voting is just an awful, anti-democratic thing to do. But you guys have to realize it's not the fault of technology that this is possible; it is the fault of the legal environment in this democracy, where it is
12:40 pm
legal to, say, run negative advertising on TV telling bad things about the other politicians. [laughter] Michal: So if you are concerned with people being discouraged from voting, I think that it will be, again, nearly impossible to come up with regulation that would prevent politicians from doing it, because maybe they won't do it themselves -- they will just ask friends in other countries to unleash some bots that will do it for them. And also, when I'm discouraging you from voting, I'm not saying, guess what, don't go vote -- I just make up a story and tell you about a pizza place, say. So there are other ways that are very difficult to police. If you are concerned about people not going to vote, you have to think of a smarter way of
12:41 pm
solving this problem than policing social media or traditional media -- like, for instance, and I'm not an expert on political processes, but one thing that comes to my mind is making voting obligatory, making it not only a privilege but a duty we have as citizens. David: Uh-huh. [applause] David: The French example. I think this is a very interesting question, so I'll ask it. Can, or maybe how can, the sort of tools and techniques we've been discussing be used by individuals to see how they are being profiled -- what they look like in these data views? Michal: That's a great question. There are quite a few apps out there that are basically aimed at doing
12:42 pm
this. You can go and open such an app, and the app will tell you how you could potentially be seen by digital advertisers or platforms. One such app is hosted by my previous academic university; it is applymagicsauce.com. You can go there, and it will take your Facebook likes and tell you what predictions can be made -- what can be determined from your Facebook likes. There is also a simpler way of checking how you're being perceived by the platforms. Open Facebook in the morning, and the stories you see there are basically the stories that the Facebook AI believes you want to see. If you see stories about UFOs and
12:43 pm
the like, you are probably, according to the Facebook AI, a conspiracy theorist; if you see stories about Hillary or another politician, that is probably how Facebook sees your political views. David: I think you should find this to be a very encouraging question: how do I get into this field of research? What skills do I need? Michal: That's a great question. It's a great question because I encourage everyone, first of all, to get into this field of research, and second of all, to continue studying, because it is fun. I nearly dropped out of my high school because I was so excited about running my start-up, and then I, in fact, dropped out of my college three times because I was so excited about running my start-up, and it was great. But then I discovered science, by
12:44 pm
accident again, to some extent. It's just the best thing in the world. I have the best, absolutely best job that there is, and I would gladly pay to do it -- which actually people do: they spend years, take courses, basically pay to be able to work in academia. There is another angle to it, which is that I strongly encourage every single person in this room, whatever you do in your life, whether you're an artist, a politician, or a journalist, to learn programming. Now, programming is actually fun, like a computer game, except you don't shoot at people; you build toys in a virtual space. But it also changes your thinking, and you start thinking in a way more organized way; you start thinking with procedures and with for loops and so on --
12:45 pm
maybe I'm taking it a bit too far here. I'm sure you would not lose your current way of thinking, but it will enrich your thinking. I think this is really important, not only because it's fun, and not only because there are great jobs out there, and not only because AI is becoming way more important -- computers, algorithms, are becoming more important in society, which means we are surrounded by products and services that are run by software. So understanding the language of software will enable you to understand those entities, our basically-AI overlords; it will allow you to understand, to think a bit more like successful people think nowadays. Think about it for a second: among the most successful people there is a huge overrepresentation of nerds and people who can program. And those guys are shaping not only the products you are using, but also the
12:46 pm
societies we live in. Now, being able to think like them, being able to understand them more, will, I think, empower -- could empower -- anyone. By the way, I'm a social scientist, not a software engineer; I learned software engineering. I learned writing my code mainly via Google, and if a psychologist could do it, I'm sure everyone in the room can do it as well. David: We only have a few minutes left, but I did want to -- I'll just use my prerogative as the interviewer in getting in the last question. Another provocative thing that I saw in your work, in your recent papers, is the suggestion that these sorts of classification algorithms could be used in
12:47 pm
employment and employment decision-making. I think it would be interesting -- I believe this is at play in society. So if you could just tell us what you see as your concerns about that and what you see as hopeful about that? Michal: That's a great question, thank you for bringing this topic up. So I probably don't really have to talk about the concerns, because the concerns are widely known. As humans focus on the negative, I wanted to focus on the hopeful part for a second. Do you remember the stories about the first cars being introduced in the U.S. and across Europe, and they had to, like, have people running with a red flag in front of the car, warning everyone that a car is coming? What I am saying, what I want to -- how I will open this
12:48 pm
conversation is that we had examples in the past where basically new technologies scared people a lot, and they focused just on the negative and had a tendency to overlook the positive changes that the technology was bringing -- and we probably have more examples than just cars. It is much the same with the use of algorithms in hiring. They have plenty of shortcomings, but look at it in context, and the context is how we hire people at the moment -- and we're doing it in an awful way. Why? Because we're being unfair and racist and sexist and ageist, and even a very bad algorithm would be less ageist and racist than even a very good recruiter. Why? Because we all in this room are racist and sexist and ageist, and if
12:49 pm
you don't think you are, you are the most dangerous, because you still are; you are just not taking steps to prevent yourself from damaging and hurting other people and being unfair to other people because of those negative traits that all humans possess. Now, how do we go about hiring people? We interview people. Interviews are great, especially for the interviewers, who get to feel so important and empowered, but there is a lot of scientific evidence -- one of the best, most well-established facts in industrial-organizational psychology -- that interviews have close to zero predictive validity for how well you'll perform at work. Now, apart from a few jobs -- if you are going to be a spokesperson, then it does matter, you know, how likeable you are in an
12:50 pm
interview. But for many of the other jobs, the majority of other jobs, it doesn't matter. Moreover, what happens when you interview people -- what the interview is doing is measuring how much you like a person. And again, you like people that are like you. If you don't have a tattoo, and in your community no one has a tattoo, and there is a young lady with a tattoo on her arm, whatever she is telling you, you will like her less than someone who doesn't have a tattoo, because it is something unusual, new, strange, and so on. Replacing biased humans with algorithms is going to bring fairer hiring and better hiring and recruitment decisions. An even more important context is letting people out of prisons or deciding on the length of sentences. It sounds really -- it makes me
12:51 pm
feel very uneasy when you're telling me that algorithms should decide whether to put someone in prison or not. But guess what, we are doing it already. It is still a judge's job to say whether you will be released on parole, but an algorithm is making the decision in many places in the U.S. -- it is already happening. The judge, before he or she makes the decision, will find out from an AI algorithm how likely this person is to reoffend, and guess what, the judge will follow it -- that is what people do, follow what the algorithm told him or her. It makes me uneasy, but when you look at the effects of this policy -- which is that we can release many times more people from prison while keeping reoffending rates stable -- I am also hopeful, because it means that in the future we'll have more free people out there, fewer people in prisons, while having a society as safe as today or safer,
12:52 pm
hopefully. It also means in the future we'll have people being hired in a more fair way than today and being way more fulfilled by their jobs. I'm actually hopeful looking to the future of AI and human interaction. David: That's a very interesting use case in employment. And it made me think of a phrase I heard recently, in contrast to meritocracy: 'mirror-tocracy,' a kind of confirmation bias. Well, it is very important then to have good algorithms. I think we've come to the end of our time this evening. Thank you very much for just a fascinating conversation. Michal: Thank you, David. Thank you, everyone. [applause] David: Yeah, yeah.
12:53 pm
Thank you all very much. >> Secretary of State Rex Tillerson and Defense Secretary James Mattis are meeting with their Japanese counterparts at this hour, talking about mutual concerns over North Korea -- the Japanese fearing, among other things, missiles traveling through their air space or falling on their territories. The secretaries will talk with reporters live at 1:30 Eastern. Monday, a total solar eclipse will be visible from coast to coast in the U.S. for the first time in 99 years. Live coverage begins at 7:00 a.m. with "Washington Journal," at the NASA flight center, and we'll have a simulcast of NASA TV. Here is a preview.