I predict a Kamala Harris victory, with a 309-229 Electoral College majority and a 52-47 popular vote majority. I arrive at this prediction by using the One True Pollster model, outlined below.
The One True Pollster model is grounded in respect for representative-sample phone polling. Political analysts respect polls because phone polling could generate a representative sample, which supports good predictions. Such polls flourished and could be usefully aggregated in 2012, when I started this blog and celebrated the rise of poll aggregators.
Such polls are now almost gone, and I return to chart the demise of the era in which this blog rose. Response rates have collapsed, making it very hard to get a representative sample. Poll aggregators have lost their value, clogged by things that aren't really polls, the worst of which exist simply to distort public opinion. Catastrophic polling failures, little noticed by the media, demonstrate this.
Here I lay out the three principles of the One True Pollster model, and in doing so, explain how I arrived at the map above. The model isn't some kind of neural net with opaque parameters; it simply applies polling data in line with the three principles below.
1. Ann Selzer is the One True Pollster. Her commitment to true representative-sample polling with minimal additional assumptions sets her apart methodologically. It explains her unparalleled record of surprising, informative, and extremely accurate results.
This doesn't mean she's infallible – the One True Pollster is, after all, a pollster. She slightly underestimated Trump in the primary, and we calibrate accordingly. But she's the only pollster I'm sure is running a real poll these days.
As I read "I keep my dirty fingers off my data" in a recent interview with The Bulwark, my hand moved toward my heart, as if she were singing the National Anthem. On sticking to her rigorous methodology, she says: "I’m prepared that one day it will not work and I’ll blow up into tiny little pieces and be scattered across the city of Des Moines." Holy Saraswati, Goddess of Knowledge!
Selzer found Harris 47, Trump 44 in Iowa – Harris +3. Since she underestimated Trump's margin over Haley by 4 points in the primary, we'll shift her result 4 points toward Trump and expect Trump +1 in Iowa. If Harris is keeping it that close in Iowa, she's doing so well among the sorts of people who vote there that she's winning MI, WI, PA, and Omaha for 270 electoral votes and victory. We'll try to maintain conservative assumptions apart from "Harris is winning much bigger than expected with Iowa-type people."
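The calibration step is simple enough to spell out. A minimal sketch, with the post's numbers hard-coded and the helper name my own invention:

```python
# A toy version of the calibration arithmetic above. The numbers come
# from the post; the function `calibrate` is just for illustration.

def calibrate(poll_margin, past_error):
    """Shift a poll's margin by the pollster's past miss.

    poll_margin: Harris minus Trump in the new poll, in points.
    past_error: how many points the pollster underestimated Trump
                by last time (positive = Trump did better than polled).
    """
    return poll_margin - past_error

# Selzer's Iowa poll: Harris 47, Trump 44 -> Harris +3.
raw_margin = 47 - 44
# She underestimated Trump's margin over Haley by 4 points in the primary.
adjusted = calibrate(raw_margin, past_error=4)
print(adjusted)  # -1, i.e. Trump +1 in Iowa
```

The sign convention is the only subtlety: a positive past error means Trump beat his polls, so it gets subtracted from the new Harris margin.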
The map has Harris winning all other swing states but AZ, where rightward shifts among Latino men favor Republicans. In NC, Mark Robinson's disastrous 'Black Nazi' candidacy for Governor sensitizes voters to Trump's awfulness. I've seen some early voting data suggesting strong turnout among women and Zoomers in Georgia; they carry our hopes. NV is the hardest to predict because confounding factors have devoured the utility of the early voting data, but Jon Ralston has a perfect record since 2010 and he says Harris by 0.3, so I'll just run with that. I'm guessing that's a Harris +5% map?
2. The NYT/Siena poll is informative too, but its predictive power is severely limited. It has enough resources to detect something, but if Harris is doing spectacularly well, NYT/Siena won't be able to tell you that.
Here Nate Cohn explains why, and admits that he and most of his professional community have stopped running real polls. This is the kind of thing we should have heard months ago from some responsible steward of American political polling. But of course we hear it only now, after we've been consuming their product frantically for months and cautiously herded polls have made the truth unavoidable:
With all its resources, NYT/Siena is actually doing polls, which is why they aren't herding like so many other pollsters. But they're shying away from publishing any especially strong Harris result. When the guy running the polls tells you that, you have to adjust their polls accordingly. If reality is Harris +7 in PA, they won't show it to you. I'm guessing it's because of a baked-in assumption of their turnout model, but maybe it's somewhere else in the system.
There is a systematic perversity behind Cohn's admission. When polls favor Democrats and Democrats lose, Democrats blame the polls. When polls favor Republicans and Republicans lose, Republicans blame shenanigans in the real elections. (They may also do this when polls don't favor Republicans.) This has the perverse effect of making pollsters prefer election deniers to people who hold pollsters accountable for their failures.
3. Other polls must be ignored, due to their systematic and severe errors in the 2022 Trump-endorsed Senate campaigns and especially the 2024 Republican primary. Low response rates have destroyed the representative-sample polling we once knew and loved. These polls systematically failed to find the sorts of Republicans who dislike Trump-endorsed Senate candidates and voted for Nikki Haley. This is why they wildly overestimated Trump's margin in primary after primary. Selzer did better in Iowa than any polling average did in any other state.
The surprise of this election will be delivered by silent Haley voters. Selzer's methods alone detect them – longtime Republicans who prefer stability and order, and disdain both lefty-activist and MAGA chaos. Many have business or military backgrounds, or are married to people who do. Pollsters used to focus mainly on these people because they were the most reliable voters, many regarding voting as a patriotic duty.
Haley voters don't like Trump, but many were scared into sticking with him for a while. They voted against Hillary in 2016 because of 25 years of Republican propaganda against her. BLM may have scared them into staying Republican in 2020.
But then January 6 happened, and shocked these law-and-order loving people. They were on Liz Cheney's side, and were shocked again by her demotion and defeat. Some still voted for Republicans until 2022 in hopes that they could retake the party from Trump, but they wouldn't come out for Trump-endorsed Senate candidates like Dr. Oz and Herschel Walker, who generally underperformed their poll numbers while other Republicans didn't. Now the Haley voters are fully giving up on the party, and many are becoming Harris voters.
Most pollsters used to model Republicans as Haley voters. After 2016 and 2020, they model Republicans as Trump voters. These days, Haley voters may be less likely to pick up the phone than Trump voters. I don't know how it was in 2012; it may have been the opposite back then, when a ringing phone seemed more of an obligation. Selzer alone finds Haley voters, because her random-dialing methods carry no frightened modeling assumptions.
Or maybe other true pollsters are running real polls like Selzer. I apologize to them and hope they will soon be revealed to me. But I don't have much confidence that I can find them. I'm sure data can be extracted from some other polls in aggregate, perhaps with appropriate adjustments like I'm making with NYT/Siena. But I have no way of aggregating that data without including lots of fake polls. I despair of finding another needle in that haystack full of snakes.
Poll aggregators are worthless. After herding together for safety worked fairly well in 2022, lots of pollsters are incentivized to herd again. Now they're herding around the safest, least committal results – the swing states even, Harris up a point or two nationally.
Aggregators are also clogged with highly rated fake polls from companies like AtlasIntel that 'predicted well' in previous years. I don't trust new firms' past predictions anymore. Scammers running highly manipulated online polls can follow the herd to a safe result one year, and sell out to some billionaire who wants to manipulate media coverage the next year. They could also profit by running pump-and-dumps on prediction markets. Someone could even run five totally fake 'polling firms', predict a range of results, and end up certain of having a 'highly rated' polling firm at the end of it. Goodhart's Law – “when a measure becomes a target, it ceases to be a good measure” – has ruined aggregators' pollster ratings.
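The five-fake-firms trick is just a coverage argument: spread your 'predictions' across the plausible range, and one of them is nearly guaranteed to land close to the actual result. A toy simulation of that argument, with every number my own assumption:

```python
import random

random.seed(0)

def best_fake_firm_error(true_margin, n_firms):
    """Fake 'firms' spread predictions evenly across the plausible range;
    return the error of whichever one happens to land closest."""
    # Spread fake predictions evenly from -5 to +5 points.
    fakes = [-5 + 10 * i / (n_firms - 1) for i in range(n_firms)]
    return min(abs(f - true_margin) for f in fakes)

# Simulate many elections with a true margin drawn from +/-5 points.
trials = 10_000
errors = [best_fake_firm_error(random.uniform(-5, 5), 5) for _ in range(trials)]
print(sum(errors) / trials)  # average best-firm error: well under a point
```

With five firms covering a ten-point range, the closest fake prediction can never be more than 1.25 points off, so one firm always ends up looking 'highly rated' no matter what happens.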
AtlasIntel is especially dubious, pumping out 'interactive polls' at very high speed near the election using a 'proprietary' and unexplained methodology. Interactive polls should be ignored entirely, as their opt-in sampling methods attract chaotic respondents. When Pew ran an opt-in survey asking people whether they were licensed to operate a nuclear submarine, an implausible share of respondents claimed they were.
People today are calling anything a 'poll' and throwing it into easily manipulated aggregators. One light shines amidst this darkness. She is Ann Selzer, the One True Pollster.
One poll to ring them all
One poll to find them
One poll to bring them all
And in the spreadsheet bind them