“The political polling profession is done,” Republican pollster Frank Luntz told Axios on Nov. 4. “It is devastating for my industry.”
The frustration with polling was further echoed by a New York Times opinion piece titled, “Can We Finally Agree to Ignore Election Forecasts?” The Atlantic also piled on with a piece titled, “The Polling Catastrophe.”
Our hunch is that no, we can’t ignore election forecasts, and the polling industry will stick around. Pollsters will keep revising their methods and try to do a better job in the future, particularly in down-ballot races, but the industry won’t go away, for several reasons.
Pollsters exist because people crave certainty about the future. Who is likely to win a race? What issues does the public care about? These questions will always matter in politics, whether pollsters have a good year or a bad one in answering them.
Pollsters who feel beaten up right now should have a drink at the bar with Wall Street analysts. The analyst profession is similar: when forecasts go wrong, investors love to use analysts as a punching bag.
But the analysts have a job that won’t go away: They provide the certainty, or some semblance of it, that investors will always crave. A need for certainty, for an answer to the question “What is going to happen?” is baked into the human psyche.
Then, too, Wall Street models are like political polling models: sometimes they get things spectacularly wrong.
Prior to the global financial crisis, Wall Street went all in on quantitative analyst models that said U.S. home prices could never decline on a nationwide basis. That assumption turned out to be wrong, and hundreds of billions of dollars were lost.
Whenever investment institutions lose a huge sum of money, there is probably a bad model, the Wall Street equivalent of a bad polling forecast, involved somewhere in the process. Consider the models that said commodities were a permanently investable asset class, for example, and that buying timber, copper, or crude oil was like buying stocks.
At the peak of the commodity supercycle, which roughly coincided with the peak of the subprime bubble, these bad models convinced pension funds to buy long-dated oil futures contracts above $130 per barrel before the price of oil collapsed.
The political polling industry will have to adjust its methods, in some cases by a lot. But the way people interpret the data will have to change, too.
A good portion of the problem is not the data, but the incorrect way the data is interpreted. If the data says event X has a 70% chance of happening, and users just interpret 70% as meaning certainty because it is a large number, then everyone will get upset when the 30% scenario occurs.
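To make that concrete, here is a minimal Python sketch (not from the original piece; the race setup is hypothetical) simulating a large number of races where the forecast gives the favorite a 70% chance. The "upset" scenario still shows up roughly three times in ten:

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

N = 100_000  # number of simulated races

# In each race, the favorite has a 70% chance; the 30% scenario is the upset.
upsets = sum(1 for _ in range(N) if random.random() < 0.30)

print(f"Underdog wins in {upsets / N:.1%} of simulated races")
```

A 70% forecast that never produced an upset would actually be a miscalibrated forecast; the 30% tail is part of the claim, not a failure of it.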
At the core of all this, you have uncertainty. Humans, as a rule, do not like uncertainty. They want to know what will happen, preferably with zero risk of error. But this is usually impossible.
Some of the problems in the forecasting business come from trying to fulfill a market demand for certainty when no real certainty can be had. “Tell me what is going to happen” is a very different request than, say, “tell me what the odds are and give me the scenarios.” All too often, people want an answer rather than a forecast. They want certainty.
For investors (and pollsters, too) there are three types of uncertainty. It’s helpful to understand the difference, because the three are very different — although they can overlap and blend into each other.
The three types of uncertainty are:
- Aleatoric Uncertainty: The uncertainty of quantifiable probabilities.
- Epistemic Uncertainty: The uncertainty of knowledge.
- Knightian Uncertainty: The uncertainty of nonquantifiable risk.
Investors, traders, and poker players work with all three forms of uncertainty on a routine basis. Managing uncertainty in its three main forms — and converting it to certainty when possible — is at the heart of making money, or winning votes.
Aleatoric uncertainty takes its name from the Latin word “alea,” meaning dice, as in games of chance.
Aleatoric uncertainty relates to probability-weighted outcomes, and the type of data you can plug into a spreadsheet. If you can quantify the odds of an outcome, but you don’t know which outcome you will get, that is aleatoric uncertainty.
For example, say you are in a high stakes poker hand with an ace-high flush draw. As part of your decision on whether to bet, you can calculate the odds of completing your flush with the next card.
Nine of the 45 unknown cards can complete the flush, and 9/45 reduces to 1 in 5, which means the next card will complete your flush 20% of the time.
That is aleatoric uncertainty: You don’t know which card will come, but you know the probability, or range of probabilities, that you are working with. Aleatoric uncertainty can also refer to statistical variability within an experiment. You know you will get some noise or randomness around various measurements, but it falls within a definable range.
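The arithmetic can be checked in a few lines of Python, using the counts from the example above (9 outs, 45 unseen cards):

```python
from fractions import Fraction

outs = 9      # remaining cards of the flush suit
unknown = 45  # unseen cards, per the example above

p = Fraction(outs, unknown)  # exact probability of hitting the flush

print(p, float(p))  # prints: 1/5 0.2
```

Using exact fractions rather than floats keeps the "1 out of 5" reduction visible, which is the whole point of an aleatoric calculation: the odds are fully knowable even though the next card is not.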
Epistemic uncertainty is harder. It relates to knowledge and whether or not the things you know are true.
If your experiment has epistemic uncertainty, you may not be sure your data is correct. If your knowledge inputs are wrong, then all the conclusions from the models are likely to be wrong or useless (or both). A simplified version of this is the idea of “garbage in, garbage out.”
Epistemic uncertainty is a huge issue when it comes to polling, because of the connection between bad information and bad conclusions.
If a polling model is built on questionable data, then epistemic uncertainty can lead to a false sense of aleatoric confidence. If polling numbers show a given Senate candidate has a 10-point lead, but the numbers are based on bad data, the percentage assumption might be worthless.
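A toy simulation (all numbers hypothetical) shows what this failure mode looks like in practice: the sampling frame is skewed toward one candidate's supporters, so the poll reports a narrow margin of error, conveying aleatoric confidence in an epistemically wrong number:

```python
import random

random.seed(7)  # fixed seed for repeatability

# Hypothetical setup: true support is 48%, but supporters of this candidate
# are more likely to answer the phone, an epistemic flaw the model can't see.
true_support = 0.48
response_bias = 0.07
n = 1_000  # respondents

sample = [random.random() < true_support + response_bias for _ in range(n)]
estimate = sum(sample) / n

# Standard 95% margin of error, computed as if the sample were unbiased.
moe = 1.96 * (estimate * (1 - estimate) / n) ** 0.5

print(f"Poll says {estimate:.1%} +/- {moe:.1%}, but true support is {true_support:.0%}")
```

The reported margin of error is honest about sampling noise (aleatoric) and completely silent about the biased frame (epistemic), which is exactly how a confident-looking poll ends up worthless.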
To avoid epistemic uncertainty errors, some will want to avoid forecasting models. But if a strong desire still exists to get a handle on the future — to get some clarity on what’s likely to happen — the same issues crop right back up. Every hunch is based on a mental model of some kind, even if the hunch is just instinct.
The final type of uncertainty, Knightian uncertainty, is the trickiest of all. It was introduced by Frank Knight, a University of Chicago economist, in the 1920s.
Knightian uncertainty speaks to the unknown unknowns — the knowledge you don’t have and the risks you don’t see. It is the hardest to deal with because you can’t address it directly.
If an entrepreneur starts a business, they will deal with all three types of uncertainty at once.
- They might face aleatoric uncertainty in planning for a range of outcomes based on statistical models for sales forecasts and product costs.
- They would then have epistemic uncertainty in verifying that the data and knowledge inputs used in their models were correct, and that no key pieces of information were missing.
- And they would need to address Knightian uncertainty in safeguarding against the unknown.
Investors and traders face all three of these in running a portfolio and deciding how much risk to take while managing positions.
- Aleatoric uncertainty impacts profit targets, levels of conviction, and probability assigned to scenario outcomes for investments and trades.
- Epistemic uncertainty applies to data analysis, and the quality of information that goes into assumptions about an investment or a trade.
- Knightian uncertainty covers the unknown forms of risk that have to be guarded against — a good reason for using stop losses or protective risk points, keeping leverage in check, and having a buffer of cash on hand.
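The three bullets above can be sketched in a small scenario table for a single hypothetical trade (all probabilities and payoffs are made up for illustration): the probabilities are aleatoric estimates, the quality of those estimates is an epistemic question, and the stop loss guards against Knightian surprises outside the table entirely.

```python
# Hypothetical scenario analysis for one trade.
# Each entry: (label, probability, profit/loss per share in dollars)
scenarios = [
    ("bull case", 0.30, +40.0),
    ("base case", 0.50, +10.0),
    ("bear case", 0.20, -25.0),
]

# Sanity check: the scenario probabilities should cover the whole space.
assert abs(sum(p for _, p, _ in scenarios) - 1.0) < 1e-9

# Probability-weighted expected profit/loss across the scenarios.
expected_pnl = sum(p * pnl for _, p, pnl in scenarios)

print(f"Expected P&L per share: {expected_pnl:+.2f}")  # prints: Expected P&L per share: +12.00
```

The expected value is only as good as the inputs: a stop loss is still needed, because the scenario list cannot contain the risks no one thought to list.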
An understanding of uncertainty can also improve the process of analyzing outside data.
When looking at polling forecasts, for example, aleatoric uncertainty can speak to what probability ranges actually mean, e.g., understanding, on a gut level, that a 30% probability is meaningful.
Epistemic uncertainty, meanwhile, is a reminder that if the assumptions or data behind a forecast are wrong, the forecast itself could be way off and is best taken with a grain of salt. And finally, Knightian uncertainty is a reminder that, in all things, sometimes risk comes flying in out of nowhere. There are always events we hadn’t planned for, or possibilities we hadn’t considered, that could upend the model or turn things upside down. The best safeguards are the ones that do a good job of protecting against risks we didn’t see coming.