tv   [untitled]    October 19, 2024 1:00pm-1:30pm EDT

1:00 pm
the washington post. thank you for being with us this morning. guest: thank you so much. host: that does it for today's "washington journal" and we will be back tomorrow morning at 7:00 a.m. eastern. until then, enjoy your day. ♪ ♪ >> c-span's "washington journal," our live forum involving you to discuss the latest public policy issues from washington and across the country. coming up sunday morning, we will talk about this year's campaign in the battleground of
1:01 pm
michigan, first with rick pluta of michigan public radio and later with oakland university political science professor david dulio. join in on air, on c-span now, or online at c-span.org. >> this afternoon, kamala harris campaigns with entertainer lizzo in detroit, live on c-span, c-span now, our free mobile video app, or online at c-span.org. [gavel] >> the house will be in order. >> this year, c-span celebrates 45 years of covering congress like no other. we have been your primary source for capitol hill since 1979, providing balanced, unfiltered
1:02 pm
coverage of government, where policy is debated and decided, with support from america's cable companies. c-span, 45 years and counting, powered by cable. >> next, the president of ipsos, clifford young, talks about his book "polls, pollsters and public opinion." the conversation includes challenges to polling accuracy and the role of pollsters. hosted by johns hopkins university school of advanced international studies in washington, d.c. this is about one hour and 10 minutes. [indistinct conversation]
1:03 pm
>> ok. good afternoon, everyone. and thanks a lot for joining us as part of our continuing americas focus area lunch speaker series. today, we actually have one of our own, dr. cliff young, joining us. he teaches a couple of classes here at sais, one of which he will talk about, and the other of which is our capstone to brazil. we have invited him here. he is currently the president of polling and societal trends at ipsos. before that he was a cluster president. he has had a number of years at ipsos. polling is his expertise. so we are really proud to have him here.
1:04 pm
he has recently released his new book, "polls, pollsters, and public opinion." i highly recommend the book. he really goes into depth as to what the role of a pollster is. to be quite frank, i am consumed by news and always watching the polls for upcoming elections, whether it be the presidential election here or any other presidential election or referendum down in latin america. and i thought i quite understood it until i read this book. you do an amazing job talking about what polling is and what the roles of pollsters are. you can call them data scientists, fortunetellers, and spin doctors. it is an important way of talking about all the different roles. i don't know if you wanted to maybe share with us what was the motivation for writing the book
1:05 pm
and then maybe go into more detail on the contents of the book. dr. young: great to be here doing this talk at johns hopkins sais. the book is for someone like you. maybe you thought you knew everything about polling. hopefully it gives you insight into what a pollster actually is. a pollster is someone or something, an organization, that measures and analyzes public opinion. a pollster is not an upholsterer. i had a long conversation with an aunt who asked me if i wanted to redo her sofas and sitting chairs. i'm not an expert on that. the motivation at one level is the memorialization of my own
1:06 pm
journey along 25 years of doing what i do as a pollster. i, like all baby pollsters, was confronted with a series of challenges. by definition it is multidisciplinary. we are data scientists, social psychologists, economists, we are decision tree experts. we are statisticians. different pollsters are trained in different ways. i was trained as a sociologist and statistician. i trained in a number of other disciplines. we were challenged with the issues at hand because we didn't have all the tools needed at the time in front of us. the profession of a pollster is
1:07 pm
very artisanal. it is very much hands-on. it is very much an apprentice-like profession. what i wanted to do was improve the learning curve for young pollsters and analysts, and i will come back to that in a second, so that they didn't have to go through all the trials and tribulations i went through. that there was one place that was synthetic in nature, that laid out the profession in a way that made sense from a practical perspective. and ultimately on that journey, i have had 16 or so years of students here, and for 13 of those years, students at columbia university who were my guinea pigs, where i tested out materials and frameworks and ways of thinking about being a pollster and how to organize it.
1:08 pm
ultimately that is what the book is. the book is memorializing my own journey. it was a journey as a young pollster to understand the profession, to professionalize the profession. ultimately it is not just for the pollster. at the end of the book i say it is for the non-pollster, for anyone who wants to analyze public opinion data, but we tell the story through the eyes and the lens of the pollster. mr. marckwardt: thank you for that. as i was reading the book, when i first started thinking about polling, i thought of it more as just the science. numbers, statistics. i think what you laid out really well, you talk about in the beginning, you had your first client and you looked at your data sets and your biases and looked at how the interpretation was made and you presented it. what i found was in this book at
1:09 pm
least there is a whole other side of it, if you will, maybe the art of it, which is using some of your own know-how and experience to figure out different questions that might be asked of you, or understanding different biases you might not have thought of before. i don't know if you want to talk about your first story, when you went up to the client and they asked you questions you were not even anticipating. dr. young: i will jump in. the book has four vignettes for the four sections of the book. those vignettes are these initial experiences in my career for which i did not have answers, and that indeed set me on the journey to organize this book. the first vignette is about my first paying client. i lived in brazil for 10 years and it was my first client ever. it was the brazilian horse
1:10 pm
racing club of rio de janeiro. and yes, i did a poll for them. there was a presidential race going on in the club. and yes, those individuals spent money on polls to know who was up in front in that race. they had a lot of money and resources, obviously. it is horse racing, and people going to a horse racing club probably have resources. so i organized my poll. i did the sample design, the questionnaire -- are the questions unbiased? have i organized my analytic framework so i am ready to tell a clear story at the end? i got it all together and i actually did not brief the client in person. i did it by phone. i tend to do it by phone when it comes to politics. i didn't speak portuguese at the time. i was learning words while they were talking; that is how sketchy my portuguese was.
1:11 pm
they ultimately asked me, so, who's going to win, are we going to win or are they going to win? i sat there and i didn't have an answer because i didn't know. i did a robust poll, done in a scientifically rigorous way, but i had no idea if they were going to win or not. i didn't say that. i was unclear. i said, let's reflect on that. you are always better off saying 50-50 if you really don't know. that is what we are doing today; no one knows. but ultimately, that was really the first challenge of me realizing that i had some of the tools to exercise the profession of polling, but i didn't have all the tools. i didn't have rules of thumb or
1:12 pm
context or any way to talk about the relative odds of winning. and that was jarring. the next day they called again and they are like, the other side -- we got the other side's poll. i am like, oh my gosh, the other side has polls too, this is incredible. they are running for the presidency of the club and they both bought polls. they said their poll says the other side is going to win. so which poll is right? yeah, once again, another example of not having the requisite gray hair, not having the sufficient amount of apprenticeship, not knowing what those rules of thumb were. just going back a bit to your initial question, this book, i would not call it art. i would call it context. what this book does above and beyond the technical side of
1:13 pm
saying what a margin of error is, or what's a good or bad sample, or how you correct for bias -- that is the more technical aspect. a lot of the book is about context, because we only understand things within context. how do we contextualize elections? how do we contextualize changes in public opinion? how do we understand it if public opinion appears unstable? and so the book is really about frameworks to provide meaning and context. mr. marckwardt: what i really enjoyed is how you break down the book. you cover the different vignettes, and really the vignettes are based on your own experiences. you talk about different biases, you talk about nonresponse bias, coverage bias. you talk about these specifically as they come up in the 2016 presidential election.
1:14 pm
you talk about how you felt a lot of confidence over the years after that first experience at the racing club. and then your entire community, not just yourself but polling professionals everywhere, ran into the 2016 presidential election, and that shocked the whole entire industry. i don't know if you would like to share some thoughts about that. dr. young: yeah. we ultimately have an external benchmark for the profession, which is elections. if we get it right we are awesome, and if we get it wrong we are bums. i use the example of -- 2008 is when i first came back from brazil; i had been in brazil for 10 years. i got to do the obama win, which was historic. i was there for lula's win as
1:15 pm
well. we were high on the hog in 2008 because we nailed it. we didn't nail it more or less, we nailed it to the decimal point. you fast-forward to 2016, and we were wrong like everyone else was wrong. i think it is an interesting case study. same pollster, same country, but ultimately two very different results. sometimes that happens. what we do as a profession is we reflect on our method and assess in more detail specifically what happened. it is not just an individual like myself who does this. it is also professional associations like the american association for public opinion research. every other country has similar associations.
1:16 pm
but honing our craft, assessing why we made mistakes, is critical. the last point i will make is reemphasizing the three personas. i call it the three-hatted pollster. multidisciplinary. the first hat is the data scientist. the second is the fortuneteller, that is the persona that predicts. and the third is the spin doctor, which is aligning with public opinion. i use spin doctor kind of tongue-in-cheek. going back to the data scientist, that is really the focus of our craft: how to make polling optimally robust, what are the biases associated with polling, assess where we got it wrong and
1:17 pm
correct what we need to correct looking forward. mr. marckwardt: in that section you also talk a lot about the different types of biases. one question i had is you talk a lot about nonresponse bias, which was really evident in the 2016 campaign. that is to say, a lot of people who were going to vote for trump preferred not to respond to the polls, which then gave you a bias, especially in swing states, where it was a lot more evident. and so, is that continuing today? how are you adjusting for that? is that something you are seeing more of, not just in the united states but in other countries as well, that there is a tendency in some groups to lean more towards not responding and throwing off a lot of polls? dr. young: what i would say first and foremost is we don't have one problem, we have multiple problems. the tricky thing about what we do is to isolate each of these problems.
1:18 pm
obviously in 2016 there were a number of things going on. i think it was also an error of forecasting more so than of the method itself. and the signals were there, but we will come back around to that. ultimately i would argue that politics today globally is wreaking havoc on our methods. and so what do i mean by that? what do we find, what does ipsos find? we find that there is a rise in anti-system, antiestablishment sentiment: widespread belief the system is broken, widespread belief that parties and politicians no longer care about the average person, widespread belief the system is rigged, and the feeling that there is a need for a strong leader to take the country back even if they must break the law. does that sound familiar? we find that not just in the
1:19 pm
united states but around the world. we find this in brazil, in turkey, in france, in mexico, in the u.k., in south africa, in indonesia, in india; we find it everywhere. it is a global phenomenon. what happens is in this context you have strong populist brands like erdogan, like le pen, like bolsonaro, who are attracting individuals who heretofore participated in nothing -- nothing -- they are nonparticipating. they have checked out of society; they don't want to participate in polls or anything. and this is ultimately our problem: they are voting now, and we have to find these individuals who heretofore had not participated in politics before. before 2015 or so, really we
1:20 pm
didn't care if we didn't capture them in our sample. so what are we doing today as an industry? we are doing things at the design stage, designing better samples to try and capture these individuals from the get-go. then we have what we call post-survey adjustments. we have ways to correct the sample to hopefully minimize the fact that we have these individuals that are not responding. do we have a perfect solution? i think we have a solution. will it play out now in this electoral cycle in the united states? will we be ok? in 2016 and 2020, the average error was about three points. hopefully not this time. hopefully we have it right, but we will see. mr. marckwardt: you mentioned there is more than one bias and one reason that influenced the 2016 election. maybe if we bring in the social media aspect and whatnot, and
1:21 pm
you talk about coverage bias as well and how you are able to reach out to the different populations of people that you are polling. in the united states it is easier to reach people by telephone or cell phone, but that might not be the case elsewhere; you create a bias in other countries where there is not much cell phone penetration. but you talk about so many different biases, and i think you just mentioned that before 2015 you just didn't account for so many different factors. are we entering an era in which the methodologies that were pretty consistent for longer periods of time will have to be adjusted on a quicker cycle, given the changes going on around the world? dr. young: yeah. first, we have always had these problems. maybe these problems were not as acute in the past, or at certain moments in the past, but we have had some pretty profound misses all over the place.
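as a rough illustration of the post-survey adjustments dr. young describes, the sketch below rakes a toy sample toward invented population targets. the respondent records, categories, and target shares are hypothetical placeholders, not ipsos' actual data or procedure; it is only a minimal sketch of the general weighting idea.

# minimal raking (iterative proportional fitting) sketch.
# all respondent records and population targets below are invented
# placeholders, not data from any real poll.
from collections import defaultdict

respondents = [
    {"edu": "college",    "age": "18-44", "w": 1.0},
    {"edu": "college",    "age": "45+",   "w": 1.0},
    {"edu": "college",    "age": "45+",   "w": 1.0},
    {"edu": "no_college", "age": "18-44", "w": 1.0},
    {"edu": "no_college", "age": "45+",   "w": 1.0},
]

# population shares the weighted sample should match
targets = {
    "edu": {"college": 0.35, "no_college": 0.65},
    "age": {"18-44": 0.45, "45+": 0.55},
}

def rake(resps, targets, iterations=50):
    for _ in range(iterations):
        for var, shares in targets.items():
            # current weighted share of each category on this variable
            totals = defaultdict(float)
            for r in resps:
                totals[r[var]] += r["w"]
            weight_sum = sum(totals.values())
            # rescale each weight so this margin matches its target share
            for r in resps:
                current_share = totals[r[var]] / weight_sum
                r["w"] *= shares[r[var]] / current_share
    return resps

rake(respondents, targets)
for r in respondents:
    print(r)

the idea is simply that respondents from under-represented groups, such as the low-engagement voters mentioned above, end up with weights above 1, so their answers count for more in the adjusted result.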
1:22 pm
what i would say is the following. the way humans communicate and interact is changing profoundly. therefore the way we contact or communicate with our respondents must change as well. a study showed that never before have we had so many different methods being employed to capture individuals: sometimes face-to-face, sometimes by mail, sometimes by phone, sometimes online, sometimes a combination of all of them or some of them. so as pollsters, we're going to have to refine our methods. it will probably be increasingly heterogeneous because we want the most robust swath of the population. but without a doubt we are challenged today compared to an era where we knocked on the door, or an era when people only
1:23 pm
had a landline to communicate. mr. marckwardt: one concept you talk about, especially when you get into the fortuneteller as one of the roles -- i highlighted this, which i found a really interesting way of describing two different kinds of people: hedgehogs and foxes. hedgehogs being those who are really knowledgeable in one area, and foxes being those who are not as knowledgeable in one area but have more breadth of knowledge, maybe not as much depth. when it comes to polling and pollsters, the foxes are usually the smaller population, but they forecast more accurately than the hedgehogs. so why are there more hedgehogs than there are foxes, and will this change? are people going to develop more breadth in their understanding going forward, as there seems to be
1:24 pm
more of a need for that? dr. young: by the way, this analogy is not my own. i wish i could say it was mine, but i take it from tetlock, who has written a number of books on forecasting. he did an analysis of the relative accuracy of forecasts. it is actually taken from an isaiah berlin short story. ultimately what we are talking about, i like to call it learning styles. do you have a single-input learning style, or a multi-input learning style? by the way, this goes for not just elections; this can go for anything we analyze. what is the difference between a fox and a hedgehog? a hedgehog has one way to react
1:25 pm
to threats, and that is to curl up in a little ball. by the way, hedgehogs are really cute. they are, actually. they are small and -- i don't know if they are furry, but they are cute. but they only react one way. a fox, when it comes to danger or threats, reacts in a multitude of ways. it could run away, hide, play dead -- a multitude of ways. so this sort of analogy, this sort of framework, helps us reflect a bit on forecasting. what do we find? we find that a forecast that is using multiple inputs is typically superior to a forecast with just a single input. hedgehog forecasts have higher rates of failure than fox forecasts. so this is an important sort of insight. we can build these methods -- i
1:26 pm
call it triangulation. and two things. nate silver had a fox logo because of this analogy. i think 2016 is a classic case of a single-input failure. if you looked at the polls, there was no doubt clinton would win. if you looked at other models and inputs, it wasn't so clear. that doesn't mean we would have forecasted exactly correctly, but at the very least we should have had doubt; our level of conviction about our forecast should have been lower. in other words, the signals were there, but because everyone was behaving like a hedgehog rather than a fox for the most part, as a market we missed those other signals. mr. marckwardt: again, you touched there a little on
1:27 pm
triangulation. i don't know if maybe you want to explain triangulation, what it means to pollsters. dr. young: just in general for forecasting, triangulation is nothing more than being a fox and taking multiple inputs and putting them together, somehow combining them. you can combine them in different ways. you can combine them quantitatively, qualitatively. at the very least it is having multiple inputs in one place to basically assess a potential future outcome. it could be polls. it could be things like, is the economy doing well or not? if the economy is doing well, it will favor the government candidate. if it is not doing well, it will favor the opposition. could we use other indicators? the candidate strongest on the main problem, we know, wins about 85% of the time -- that is another model. could we use fundamental models?
1:28 pm
we know the incumbent has a threefold advantage. we know that at a 40% approval rating, a successor nonincumbent has a 6% chance of winning. who is the successor at a 40% approval rating today? kamala harris. so at the very least we have these other models, inputs that we can use, and we can assess them together. mr. marckwardt: you talk afterwards about the argentinian election in 2019. i don't know if you want to share some lessons learned from forecasting in that election. dr. young: we talked about single-input learning styles versus multiple-input learning styles, the hedgehog versus the fox. another bias is confirmation bias.
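as a toy illustration of the triangulation dr. young describes above -- combining a poll-based signal with fundamentals-style signals rather than relying on a single input -- here is a minimal sketch in python. the probabilities and weights are invented placeholders, not ipsos' model or numbers.

# toy triangulation sketch: combine several signals about an election
# outcome instead of relying on one (the fox rather than the hedgehog).
# all probabilities and weights below are invented placeholders.

def triangulate(signals):
    """signals: list of (win_probability, weight) pairs; returns the weighted average."""
    total_weight = sum(weight for _, weight in signals)
    return sum(prob * weight for prob, weight in signals) / total_weight

signals = [
    (0.55, 2.0),  # poll-based estimate, e.g. a lead converted to a win probability
    (0.40, 1.0),  # economic fundamentals: a weak economy hurts the government side
    (0.45, 1.0),  # approval-rating model: low incumbent approval lowers the odds
]

print(f"combined win probability: {triangulate(signals):.2f}")

the point, as dr. young notes about 2016, is that when the inputs disagree, the combined estimate should carry less conviction than any single input read in isolation.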
1:29 pm
it has to do with what we already believe. the 2016 election in the u.s. was very much like that. we cite a lot of different experts and pundits in the book basically saying there is no way someone like trump could win. he looks strange, he is orange, he can't win, his hair is crazy. no, he's a celebrity, it is not serious, no way a celebrity could win. so we had this confirmation bias, and what happens ultimately is you begin to sort of understand your own data differently. we actually have a citation of a very well-known political scientist and modeler who basically had a model that suggested that trump would win, but he was questioning the assumptions of his model. a great example of confirmation bias. the output doesn't make sense
1:30 pm
according to the way i think the world is organized, therefore i have to make sense of it. right now pollsters are trying to understand the scenario today. 2019, which was sort of a primary -- it is more than a primary because everyone is on the ballot -- but in argentina the polls, and the pundits more importantly, seriously overstated the president's chances of winning, and they were surprised on election day. why? because he looked the part, he was orthodox from an economic perspective. he went to all the right schools. he circulated in the right circles. obviously things were going to improve over time. there was even
