We all know political polls are increasingly unreliable. That's why forecasting outfits like 538 aim to separate the signal from the noise by assigning grades to pollsters and weighting polls from the best-graded pollsters more heavily in their forecasts.
It seemed like a good plan. So why did it backfire so spectacularly?
Flip Pidot, Peter Hurford and Harry Crane investigate Nate Silver's utterly failed attempt to distinguish good pollsters from bad.
Follow along with our interactive polls and forecasts at OpenModelProject.org. Follow us on Twitter at @OpenModelProj.
If you'd like to see more independent forecasting and unbiased polling in your world (and gain exclusive access to it before anyone else), please consider supporting us on Patreon at patreon.com/openmodel.
Raising the Bar for Polling NYC and Vaccine Hesitancy
NYC Mayoral Candidates: Poked, Prodded, Weighed and Measured
Forecast Feud: Are Polls Useless or Just Bad?
Ranked Choice Voting and the NYC Mayoral Race
The Model Went Down to Georgia
Introducing the Open Model Project