tv   [untitled]    October 18, 2024 5:00am-5:31am EDT

5:00 am
really hard to -- going back to the data scientist, that is really the focus of our craft: how to make polling optimally robust, what the biases associated with polling are, assessing where we got it wrong and correcting what we need to correct looking forward. mr. marckwardt: in that section you also talk a lot about the different types of biases. one question i had is you talk a lot about nonresponse bias, which was really evident in the 2016 campaign. that is to say, a lot of people who were going to vote for trump preferred not to respond to the polls, which then gave you a bias, especially in swing states, that was a lot more evident. and so, is that continuing today? how are you adjusting for that? is that something you are seeing
5:01 am
more of, not just in the united states but in other countries as well, that there is a tendency in some groups to be more leaning towards not responding, throwing off a lot of polls? dr. young: what i would say first and foremost is we don't have one problem, we have multiple problems. the tricky thing about what we do is isolating each of these problems. obviously in 2016 there were a number of things going on. i think it was also an error of forecasting more so than of the method itself. and the signals were there, but we will come back around to that. ultimately i would argue that politics today globally is wreaking havoc on our methods. so what do i mean by that? what does ipsos find? we find that there is a rise in
5:02 am
anti-system, antiestablishment sentiment: widespread belief the system is broken, widespread belief that parties and politicians no longer care about the average person, widespread belief the system is rigged, and the widespread belief that there is a need for a strong leader to take the country back even if they must break the law. does that sound familiar? we find that not just in the united states but around the world. we find this in brazil, in turkey, in france, in mexico, in the u.k., in south africa, in indonesia, in india, we find it everywhere. it is a global phenomenon. what happens is, in this context you have strong populist brands like erdogan, like le pen, like bolsonaro, who are attracting individuals who heretofore participated in nothing -- nothing -- they were nonparticipants. they have checked out of
5:03 am
society, they don't want to participate in polls or anything. and this, ultimately, is our problem: they are voting now. we have to find these individuals who heretofore hadn't participated in politics. before 2015 or so, we really didn't care if we didn't capture them in our sample. so what are we doing today as an industry? we are doing things at the design stage, designing better samples to try and capture these individuals from the get-go. then we have what we call post-survey adjustments: ways to correct the sample to hopefully minimize the fact that these individuals are not responding. do we have a perfect solution? i think we have a solution. will it play out now in this electoral cycle in the united states? will we be ok?
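[editor's note: the "post-survey adjustments" dr. young mentions are, in their simplest form, weighting corrections. the sketch below is a minimal illustration of cell weighting with made-up numbers, not ipsos's actual adjustment procedure: respondents from a group that answers polls less often get weights above 1 so the weighted sample matches the known population shares.]

```python
# a minimal sketch of a post-survey weighting adjustment (cell weighting).
# the sample, groups, and population shares are illustrative assumptions.

def cell_weights(sample, population_shares):
    """Return one weight per respondent so weighted group shares match targets."""
    n = len(sample)
    counts = {}
    for group in sample:
        counts[group] = counts.get(group, 0) + 1
    # weight = (target share) / (observed share) for each respondent's group
    return [population_shares[g] / (counts[g] / n) for g in sample]

# toy poll: non-college voters are 40% of respondents but 60% of the electorate,
# a stand-in for the under-responding groups described in the transcript
sample = ["college"] * 6 + ["non_college"] * 4
targets = {"college": 0.4, "non_college": 0.6}

w = cell_weights(sample, targets)
weighted_nc = sum(wi for wi, g in zip(w, sample) if g == "non_college") / sum(w)
print(round(weighted_nc, 2))  # 0.6 -- weighted share now matches the target
```

real pollsters extend this idea to many variables at once (raking / iterative proportional fitting), but the correction logic per cell is the same ratio of target share to observed share.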
5:04 am
because in 2016 and 2020 the average error was about three points. hopefully not again. hopefully we have it right, but we will see. mr. marckwardt: you mentioned there is more than one bias and one reason that influenced the 2016 election. maybe if we bring in the social media aspect and whatnot -- you talk about coverage bias as well, and how you are able to reach out to different populations of people when polling. in the united states it is easier to reach people by telephone or cell phone, but that might not be the case elsewhere; you create a bias in other countries where there is not much cell phone penetration. you talk about so many different biases, and i think you just mentioned that before 2015 you just didn't account for so many different factors. are we entering an era in which the methodologies that were pretty consistent for longer
5:05 am
periods of time will have to be adjusted on a quicker cycle, given the changes going on around the world? dr. young: yeah. first, we have always had these problems. maybe these problems were not as acute in the past, or at certain moments in the past, but we have had some pretty profound misses all over the place. what i would say is the following. the way humans communicate and interact is changing profoundly. therefore the way we contact and communicate with our respondents must change as well. never before have we had so many different methods that we're employing to capture individuals: sometimes face-to-face, sometimes by mail, sometimes by phone, sometimes online,
5:06 am
sometimes a combination of all or some of them. so as pollsters, we're going to have to refine our methods. they will probably be increasingly heterogeneous, because we want the most robust swath of the population. but without a doubt we are challenged today compared to an era when we knocked on the door, or an era when people only had a landline to communicate. mr. marckwardt: one concept you talk about, especially when you get into the fortuneteller as one of the roles -- i highlighted this, which i found a really interesting way of describing two different kinds of people: hedgehogs and foxes. hedgehogs being those who are really knowledgeable in one area, and foxes being those who are not as knowledgeable in one area but have more breadth of knowledge, maybe not as much depth. when it comes to polling and pollsters, it is usually the
5:07 am
foxes are the smaller population, but they forecast more accurately than the hedgehogs. so why are there more hedgehogs than there are foxes, and will this change? are people going to develop more breadth in their understanding going forward, as there seems to be more of a need for that? dr. young: by the way, this analogy is not my own. i wish i could say it was mine, but i take it from tetlock, who has written a number of books on forecasting. he did an analysis of the relative accuracy of forecasts. it is actually taken from an isaiah berlin essay. ultimately what we are talking
5:08 am
about, i like to call it learning styles. do you have a single-input learning style, or a multi-input learning style? by the way, this goes for not just elections; this can go for anything we analyze. what is the difference between a fox and a hedgehog? a hedgehog has one way to react to threats, and that is to curl up in a little ball. by the way, hedgehogs are really cute. they are, actually. they are small and -- i don't know if they are furry, but they are cute. but they only react one way. a fox, when it comes to danger or threats, reacts in a multitude of ways. it could run away, hide, play dead -- a multitude of ways. so this sort of analogy, this sort of framework, helps us reflect a bit on forecasting. what do we find? we find that a forecast that is using multiple inputs is
5:09 am
typically superior to a forecast with just a single input. hedgehog forecasts have higher rates of failure than fox forecasts. so this is an important sort of insight. we can build these methods -- i call it triangulation. and, as an aside, nate silver had a fox logo because of this analogy. i think 2016 is a classic case of a single-input failure. if you looked at the polls, there was no doubt clinton would win. if you looked at other models and inputs, it wasn't so clear. that doesn't mean we would have forecasted exactly correctly. but at the very least we should have had doubt; our level of
5:10 am
conviction about our forecast should have been lower. in other words, the signals were there, but because everyone was behaving like a hedgehog rather than a fox, for the most part as a market we missed those other signals. mr. marckwardt: again, you talk a little about triangulation there. i don't know if maybe you want to explain triangulation, what it means to pollsters. dr. young: just in general for forecasting, triangulation is nothing more than being a fox: taking multiple inputs and putting them together, somehow combining them. you can combine them in different ways -- quantitatively, qualitatively. at least have multiple inputs in one place to assess a potential future outcome. it could be polls. it could be indicators like: is the
5:11 am
economy doing well or not? if the economy is doing well, it will favor the government candidate. if it is not doing well, it will favor the opposition. could we use other indicators? the candidate strongest on the main problem wins about 85% of the time -- that is another model. could we use fundamental models? we know the incumbent has a threefold advantage. we know that at a 40% approval rating, a successor nonincumbent has a 6% chance of winning. who is the successor at a 40% approval rating today? kamala harris. so at the very least we have these other models, these inputs that we can use, and we can assess them together. mr. marckwardt: afterwards you talk about the argentinian election in 2019.
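[editor's note: the triangulation dr. young describes -- assessing polls, issue leadership, and fundamentals together rather than trusting any single input -- can be sketched as a weighted average of forecasts. the probabilities and equal weights below are illustrative assumptions, not the actual ipsos model.]

```python
# a minimal sketch of triangulation: combining several forecast inputs
# instead of relying on the polls alone. all numbers here are hypothetical.

def triangulate(inputs):
    """Combine (probability, weight) pairs into one weighted-average forecast."""
    total = sum(w for _, w in inputs)
    return sum(p * w for p, w in inputs) / total

# hypothetical 2016-style inputs: the poll-based model alone looked like a
# near-lock, but the other inputs were far less certain
inputs = [
    (0.85, 1.0),  # poll-based model
    (0.50, 1.0),  # "strongest on the top issue" indicator
    (0.45, 1.0),  # fundamentals model (approval, incumbency)
]
print(round(triangulate(inputs), 2))  # 0.6 -- far less certain than polls alone
```

the point of the exercise is not the exact combined number but that a multi-input forecast forces the analyst's conviction down when the inputs disagree.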
5:12 am
i don't know if you want to share some lessons learned from forecasting in that election. dr. young: we talked about single-input learning styles versus multiple-input learning styles, the hedgehog versus the fox. another bias is confirmation bias. it has to do with what we believe. the 2016 election in the u.s. was very much like that. we cite a lot of different experts and pundits in the book basically saying there is no way someone like trump could win. he looks strange, he is orange, his hair is crazy -- can't win. no, he's a celebrity, it is not serious, there's no way a celebrity could win. so we had this confirmation bias, and what happens ultimately, you
5:13 am
begin to sort of understand your own data differently. we actually have a citation of a very well-known political scientist and modeler who basically had a model suggesting that trump would win, but he was questioning the assumptions of his own model. a great example of confirmation bias: the output doesn't make sense according to the way i think the world is organized, therefore i have to make sense of it. right now pollsters are trying to understand the scenario today. in argentina in 2019 there was sort of a primary -- it is more than a primary because everyone is on the ballot -- and the polls, and the pundits more importantly, seriously overstated the president's chances of winning, and they were
5:14 am
surprised on election day. why? because he looked the role. he was orthodox from an economic perspective. he went to all the right schools. he moved in the right circles. obviously things were going to improve over time. there was even this notion that somehow there was a trend towards him up to election day. whether that was the case or not is another question. on the flip side you had the opposition candidate, who didn't look the role, wasn't found in the right circles, and who ultimately won the election and became president of argentina. the problem is that the polls were shaky. there were lots of new methodologies, a lot of online polling, untested. and so the polls were, i would say, questionable at best,
5:15 am
misleading at worst. if you looked at all the indicators, they suggested he was going to get creamed. you had an economy that was imploding. the economy was the number one issue. fernandez was leading by about 10 points on the economy. just those data points suggested that maybe i shouldn't trust the polls. that is just a great example where you had mixed signals, and if analysts had taken all the signals together, they would have had a lower level of conviction. ultimately, the day after election day, the market was very surprised. mr. marckwardt: so, there are events that change public opinion really quickly, such as 9/11 -- black swan events. kind of taking it a little more
5:16 am
contemporarily, you mentioned the incumbent approval rating being 40% or higher. now we have a change in the election. we have kamala harris replacing biden. not so much to predict what's going to come, but are pollsters trying to work with that change and the fact that there is not really an incumbent -- or there is, and it is arguable one way or another because she is the vice president? how do you account for that change, or are you still trying to figure out how you might be able to account for that difference? dr. young: we are in a complicated moment, because a lot of events have happened, and they have muddied the scenario. we have a former president running again. we have a vice president who took the place of the sitting president. i think that all we can do in a
5:17 am
situation like this is go to our basic indicators and look at them at face value. a lot is happening right now, and if you listen to the market today, everyone is questioning all the indicators. maybe we need to step back and say maybe they are just right and we just have to read them for what they are. on the polling side, up until about four days ago harris was in the lead in most places. that is slowly shifting. the number one problems are the economy and inflation. trump leads by about seven on that. we have a historically weak incumbent, and the successor is running. all these sorts of things should suggest a trump victory by a lot. but it's pretty close. and why is that the case, perhaps? perhaps because he has a ceiling. i think he does have a ceiling. he is not liked by everyone. he has not ingratiated himself
5:18 am
with the majority of the population. but ultimately this is an election about inflation, about cost of living, and there are constituencies that are really suffering still today, and they don't trust the sitting government. that's the summation of our indicators. yeah, probably it's an election that should be leaning trump. should be. but there is a lot of second-guessing right now. and why second-guessing right now? because the polls are so close. mr. marckwardt: so, kind of taking a broader look at latin america, what have been the successes and failures of polling, and what are some of the trends in latin america? obviously there is a lot of discussion of democratic backsliding in a lot of countries, and tendencies towards more authoritarian figures and policies. taking it back to the polling,
5:19 am
is that something that will continue over the long term? can polling, as a fortuneteller, even ascertain whether those tendencies will continue? or might what you are seeing be temporary? dr. young: part of the role of the fortuneteller is to say something discrete about the future. we work with other subject matter like referenda or impeachments, for instance. but a lot of the work we do, a lot of the work i do, is also setting the stage, the broad stage, for the future. what i can say about latin america is, first and foremost, like in the united states there is a widespread belief that the system is broken. this antiestablishment sentiment is not going anywhere. like in the united states, in
5:20 am
latin america there has been a relative breakdown in consensus. i think we should expect more anti-system politics looking forward into the medium to long term. i think that is a given. brazil is the place i pay most attention to, and without a doubt you can see it play out every day. lula right now is an anti-system political actor who is exercising his role as such. i see this next election in 2026 repeating that. i think this is a generalized trend. mr. marckwardt: what i wanted to do is open it up for some questions from our audience. we do have a microphone, so we will have it passed over here. please. >> thanks a lot. i wanted to ask many questions, but i will only ask one.
5:21 am
when you talk about your hat as a spin doctor, and put it in the context of the 2016 election, i wonder if the failure at the time wasn't one of misinterpreting a probabilistic situation as a certainty. to some extent, i think it feels like when you say there is a 70% chance that candidate x is going to win, people interpret that as candidate x getting 70% of the vote -- a certain outcome. i felt that when biden was in place people were doing that, but in the opposite direction: trump has a 70% chance of winning, and treating that as a certainty. how do you think about communicating results in a world where people make this type of
5:22 am
interpretation error -- how do you marry the fortuneteller with the spin doctor in a way that gets around that, if that is at all possible? dr. young: that is a great question. that is super difficult and a huge challenge. we talked about cognitive biases: single-input learning styles and confirmation bias. in that chapter there is a third one, which is probabilistic versus deterministic thinking, and the importance of the analyst thinking probabilistically. the critique on 2016 is that we had the probabilities wrong. it was more of a 50-50 election -- whether it was 60-40, it was leaning clinton but it was much closer. and we got that wrong because we didn't incorporate all the signals. if we had, i think our expectations at least would have been more reasonable.
5:23 am
but communicating this to the larger public is very difficult, because most people think deterministically. they think trump will win or will not win; harris will win or will not win. i would just say the only answer i have is that every day of my professional career i explain to someone the notion of variance, even though i don't say "variance." how do you communicate uncertainty in a way that people understand? i think it is incumbent upon us professionals to try and provide the context needed to explain things, and i do that obviously with clients and with the media; i am doing it here in part. but that is the essential challenge, because most people think deterministically, and we have these sorts of problems of interpretation ultimately.
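[editor's note: the questioner's confusion -- reading a 70% win probability as a 70% vote share -- can be made concrete with a small simulation. the vote share and polling-error figures below are illustrative assumptions chosen so the win probability lands near 70%.]

```python
# a minimal sketch of why a 70% win probability is not a 70% vote share:
# a candidate expected to get 52% of the two-party vote, with a plausible
# polling error, still loses roughly 3 elections in 10. numbers are illustrative.
import random

random.seed(1)
mean_share, polling_error = 0.52, 0.038  # assumed mean share and error (sd)
trials = 100_000

# simulate many elections; count how often the realized share clears 50%
wins = sum(random.gauss(mean_share, polling_error) > 0.5 for _ in range(trials))
print(f"expected vote share: {mean_share:.0%}, win probability: {wins / trials:.0%}")
```

the gap between the two percentages is exactly the distinction between probabilistic and deterministic thinking discussed above: the forecast is a distribution over outcomes, not a share of the vote.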
5:24 am
that is not an easy answer to that question, but it definitely is a challenge we face every day. mr. marckwardt: yes, up here in the front. or in the back. >> what are your thoughts on alan lichtman, and academics like him who have these, what, 13 keys, or? dr. young: his keys? it's a model. it's a model that i don't understand 100%, but i think there is a bit of a question about how he measures things. there is a measurement issue. only he knows how to use the keys, it seems. if you can't open up the method from a scientific perspective and replicate it, then we have a problem. just as an aside, i teach another
5:25 am
class at some other places on forecasting. what we do is take the last electoral cycle, decompose all the models, and recompose them. because they are transparent, we know how they are put together. we can take them apart, put them back together, and figure out what went wrong. there's not a lot of transparency there. i don't know how he is measuring his variables. but he has some sort of probabilistic model, so it is no different. there is no special sauce. it is like any other model that exists: it has an outcome, which is the election, and a series of other variables. his are mostly structural. but i do not think there is a lot of mystery. the only mystery is what the framework is. >> thank you very much. i don't envy your position as the avatar of all pollsters.
5:26 am
one comment, which is that, looking back on it, to me the 2016 miss was partially an improper focus. the focus pushed was the aggregate popular vote. of course hillary clinton indeed won the aggregate popular vote. but due to the nature of our system, it's the electoral college that counts, and trump took that by a good margin. somehow the polling was saying look at the popular vote, look at the popular vote, she's winning, she's winning -- and don't look at this, which is indicating he's not winning. let's not publicize that. i really picked that up with nate silver a lot. like, we want hillary, so the stuff that doesn't look good, we
5:27 am
simply aren't going to make public. which really undermines credibility. there is that factor, and there is also the expense factor. i heard they had to dramatically cut back sample sizes because they simply can't afford the big person-to-person sampling -- as you mentioned, personal contacts on the phone or face-to-face, or elaborate questionnaires. so do you think those kinds of things are seriously eroding public confidence, at least in polling? thank you. dr. young: there is a lot to unpack there. let's just take this step by step. i agree with you about the swing states. actually, in the book i go through systematically and show
5:28 am
if we had focused just on the swing states, we would have gotten it right. the signals were there; the polls were overstated towards clinton, but the other signals told a different story. i think we have gotten better in the united states at understanding that there is a certain degree of divergence between the swing states and the national result. i do a lot of brazilian media, talking about the u.s. elections in brazil. they will not talk to me about the national polls. they have been taught by someone, i don't know who, to only look at the swing states. i think that is an innovation in the general sense for all of us, since the swing states don't simply mirror the national result. when it comes to accuracy, if you look at the long-term trends, we have only become more
5:29 am
accurate. the problem is we have erred in some massively important elections on the side that undermines confidence: we picked the wrong winner. actually, we were more off in 2020 than we were in 2016, but we picked the right winner. all we can do is control what we can control. when we make an error like that, we admit it. we are open and transparent about it. pollsters don't do anything individually; we work as a group, and we dig down into the data to understand what went wrong, and then we come up with solutions looking forward. that is the only way i believe we stay credible in a very complex world doing a very complex thing: open up the door and let people in. >> thank you so much for this. i have two related questions.
5:30 am
you described the 2016 election as a classic example of a single-input failure. what was that input? could you elaborate? then, there has been a lot of conversation about the fact that pollsters have learned from 2016 and are now becoming much more sophisticated -- i guess as a fox would be -- to include other variables that would not miss the silent trump voter or whatever. the related question is that i am looking at a gallup poll that came out either yesterday or today that shows -- and this is testimony to the degree of polarization in the united states -- that the top five issues for republicans and the top five issues for democrats, and those leaning in each direction, are completely different. for republicans it is the economy, immigration,
