Models That Explain vs. Models That Predict

Nov 21, 2019

The chart above illustrates the distinction. On the Y-axis: early navigators used a model of the earth and stars to predict the correct route to their destination. The model worked even though it was wrong; they believed the Sun revolved around the earth. Thus, their model predicted but didn't explain. On the X-axis: evolution, which does a great job of explaining how species change but doesn't predict what species will evolve into in the future. Source: a talk by Galit Shmueli.

A model is a simplified representation of some aspect of reality. Models can be a two or three-dimensional depiction of something, such as a diagram, a mathematical formula, or even a story or analogy. Models are important for understanding the current and future states of the world.

“All models are wrong, but some models are useful.” – George Box

Models, by their very nature as simplifications of reality, cannot be 100% correct. For a map to be 100% correct, it would have to be the same as the territory it represents. As such, George Box's quote above is spot on: all models are wrong. But simplifying reality is useful, and thus models are beneficial.

Explanatory vs. Predictive Models

Models can be thought of as falling within one of two categories:

  1. Models that explain, and
  2. Models that predict

An essential point is that explanation and prediction are separate things. According to Galit Shmueli, a pioneer in this area, “explanatory power and predictive power are totally different things. You can’t infer one from another, yet that is what happens in most sciences.” Most models only do one thing well – they either explain or they predict. Very few models have both explanatory and predictive power.

Models That Predict


In 2006, when Netflix was still in the DVD-by-mail business, it announced a prize for whoever could improve its algorithm for recommending movies to users. The prize was $1 million, and participants were given access to data on users, the movies they had rented, and the ratings they gave those movies. To win, a team had to improve on Netflix's recommendation engine by at least 10%. It turns out that it is really hard to predict which movie a person will like next based on their past likes; cult movies such as Napoleon Dynamite and Monty Python and the Holy Grail are not very susceptible to algorithmic prediction. The winning team, from an AT&T lab, produced the greater-than-10% improvement. Read more about the contest here.

Models that suggest which movie, book, or music you might like are models that predict. But they don't explain why people like the movies they do. The new Netflix model didn't explain why I might love Napoleon Dynamite but not like Dumb and Dumber.

Scott Page, in his fantastic book The Model Thinker, provides these examples of models that predict without explaining:

Deep-learning algorithms can predict product sales, tomorrow’s weather, price trends, and some health outcomes, but they offer little in the way of explanation. Such models resemble bomb-sniffing dogs. Even though a dog’s olfactory system can determine whether a package contains explosives, we should not look to the dog for an explanation of why the bomb is there, how it works, or how to disarm it.

Robust predictive models usually come from data mining or machine learning. They use associations and correlations to predict future behavior.
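A minimal sketch of how such correlation-based prediction works, in the spirit of a movie recommender (user-based collaborative filtering). The users, titles, and ratings below are invented for illustration; real engines like Netflix's are far more sophisticated:

```python
# Toy user-based collaborative filtering: predict a rating from the
# ratings of similar users. All data here is illustrative.
ratings = {
    "alice": {"Napoleon Dynamite": 5, "Dumb and Dumber": 2, "Holy Grail": 5},
    "bob":   {"Napoleon Dynamite": 4, "Dumb and Dumber": 1, "Holy Grail": 5},
    "carol": {"Napoleon Dynamite": 1, "Dumb and Dumber": 5, "Holy Grail": 2},
    "dave":  {"Napoleon Dynamite": 5, "Holy Grail": 4},
}

def similarity(a, b):
    """Agreement score: inverse of the mean absolute rating gap on
    movies both users have rated (1.0 = identical tastes)."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    gap = sum(abs(ratings[a][m] - ratings[b][m]) for m in shared) / len(shared)
    return 1.0 / (1.0 + gap)

def predict(user, movie):
    """Similarity-weighted average of other users' ratings. The model
    outputs a number; it says nothing about *why* the user would
    like or dislike the movie."""
    pairs = [(similarity(user, other), ratings[other][movie])
             for other in ratings
             if other != user and movie in ratings[other]]
    total = sum(w for w, _ in pairs)
    return sum(w * r for w, r in pairs) / total if total else None

print(round(predict("dave", "Dumb and Dumber"), 2))  # → 2.18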

Models That Explain

Other models are developed to explain the world. They provide cause and effect. Examples include:

  • Science has developed a robust plate tectonic model of the earth. With this model we can explain the occurrence of earthquakes and volcanoes. However, the plate tectonic model is incapable of predicting when major earthquakes or volcanoes will occur.
  • Evolution is a model that explains the change of species over time. While it does a great job of explaining changes it does not predict how a species will change in the future.
  • Models of climate change explain how and why the earth's climate changes, including the effects of human-generated greenhouse gas emissions. However, climate models do not do a very good job of predicting what the climate will be in the future. In fact, one of the biggest criticisms from climate change skeptics is that past climate predictions have been wrong. What the scientific community has not communicated well is that the explanatory climate models are very good, while the predictive climate models, due to the challenge of modeling so many factors, are not very accurate.

The social sciences, such as economics and psychology, produce mainly explanatory models. In other words, very good models exist to explain why the economy and people behave as they do, but it is very difficult to predict how the economy and people will behave in specific instances in the future. As such, good predictive models are rare.

Models that Both Predict and Explain

While most models don’t both predict and explain, some models do both. Again from The Model Thinker:

Electrical engineering models that explain voltage patterns can also predict voltages. Spatial models that explain politicians’ past votes can also predict future votes. In perhaps the most famous example of applying an explanatory model to predict, the French mathematician Urbain Le Verrier applied the Newtonian laws created to explain planetary movements to evaluate the discrepancies in the orbit of Uranus. He discovered the orbits to be consistent with the presence of a large planet in the outer region of the solar system. On September 18, 1846, he sent his prediction to the Berlin Observatory. Five days later, astronomers located the planet Neptune exactly where Le Verrier had predicted it would be.

Models that do both are the exception, not the rule.

Key Point and Why This Matters

Very few models are good at both predicting and explaining; you'll rarely find one model that does both well.

Unfortunately, we tend to conflate the two types of models. Even scientists confuse the applicability of models and often try to use explanatory models to predict and prediction models to explain. It is very important to understand what type of model you are dealing with and limit the model to its proper category.

“An economist is an expert who will know tomorrow why the things he predicted yesterday didn’t happen today.” – Laurence J. Peter

In my profession (wealth management), confusing explanatory models with predictive models is a common error. Many models exist that explain the behavior of the markets or economy, but very few models (if any) do a good job of predicting what will happen in the future. People constantly take models that explain the markets or economy and try to use them to predict what will happen. The quote above is spot on: economists (and market pundits) can explain what has happened very well, but lack the ability to predict what will happen.

An example is looking at Price-to-Earnings ratios (P/E) for individual stocks or the market as a whole. P/E ratios can explain whether the market or a stock has become more or less expensive on an absolute or relative basis. This is useful information. However, P/E ratios have very low predictive ability; they are of almost no help in telling you whether to buy or sell at any given moment. Disagree with this statement? If so, you are confusing the explanatory and predictive power of P/E models. Studies have found that P/E ratios have an R² of less than 0.10 with the next year's return and about 0.40 with the next ten years' return.
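To see what a low R² means in practice, here is a sketch with made-up P/E ratios and next-year returns (invented for illustration, not real market data). An R² near zero says a line fitted on P/E accounts for almost none of the variance in subsequent returns:

```python
# Least-squares R²: the share of variance in y explained by a line fit on x.
def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical P/E ratios and the following year's returns (illustrative only):
pe = [12, 15, 18, 22, 25, 30, 14, 20]
next_year_return = [0.11, -0.04, 0.09, 0.15, -0.08, 0.02, 0.07, -0.01]
print(round(r_squared(pe, next_year_return), 2))  # → 0.09 for this toy data
```

For this invented data the R² comes out around 0.09: knowing the P/E would account for under 10% of the variation in the next year's return. That is descriptive insight without meaningful predictive power.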


Comments

  1. Very helpful, thanks John. In our business, helping people write publishable business articles, I have long been aware of the distinction between (what I have called) diagnostic and prescriptive frameworks, which people often confuse. Not quite the same as your clearer distinction, but analogous. We have just edited a 10,000-word paper by a leading consulting firm about the business ecosystems of massive companies such as Amazon and Ping An. The heavily researched paper deduces several apparently inherent ecosystem strategies, which is fine. But it makes the leap to asserting that other companies can build ecosystems by using one of these strategies, without anyone apparently wondering whether the descriptive model is also a prescriptive one.

