Climate models

How climate models might be better than we think

Chaos theory encompasses vast swaths of mathematics and physics, but it was Edward Lorenz who immortalized it in popular culture. His now-famous 1972 presentation, which summarized his decade of work in the field, focused on a single provocative question: Can the flap of a butterfly’s wings in Brazil set off a tornado in Texas? Although he declined to answer the question definitively, his “butterfly effect” has changed the way climatologists and meteorologists view causation in atmospheric science.

The presentation argued that long-term weather patterns are extremely sensitive to even the smallest disturbances. Using early computer simulations, Lorenz had discovered that the atmosphere can be highly unstable, allowing nearly identical weather systems to evolve in entirely different ways. In this sense, he argued, the flap of a butterfly’s wings* could mean the difference between fair sailing and a hurricane.

This does not mean that we could eliminate natural disasters with a sufficiently powerful pesticide. Although this point has since been omitted from many depictions of the butterfly effect in popular culture, Lorenz was quick to point out that while something as small as a butterfly can affect the weather on the other side of the globe, it would be impossible to ever disentangle the individual components that combine to create the perfect storm. Moreover, he argued, if a butterfly can cause a tornado, it is just as likely to prevent one. “I propose that over the years, minute disturbances neither increase nor decrease the frequency of occurrence of various weather events,” his abstract states. “The most they can do is alter the order in which these events occur.”

Ever since Lorenz’s research took the field by storm, climatologists have had to give up the traditional scientific attachment to causality. Weather and climate models are technically deterministic – if a perfect computer simulation had perfectly accurate measurements of the millions of factors affecting the atmosphere, it could hypothetically predict future behavior exactly. In practice, however, that task is impossible, and even a small error in the initial measurements – a miscount of passing butterflies, say – can quickly compound. It is for this reason that meteorologists often phrase weather forecasts in terms of probabilities, acknowledging this uncertainty in the initial conditions.
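To see this sensitivity in action, here is a minimal sketch of the three-variable “toy atmosphere” from Lorenz’s 1963 paper. The equations and parameter values are the standard ones; the step size, run length, and the size of the nudge are arbitrary choices for illustration.

```python
# Sensitivity to initial conditions in Lorenz's 1963 three-variable system.
# Two trajectories start a hair's breadth apart and are integrated with
# simple Euler steps; their separation grows until they are unrelated.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system by one Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)          # reference trajectory
b = (1.0 + 1e-8, 1.0, 1.0)   # the "butterfly": a one-part-in-10^8 nudge

for step in range(5001):
    if step % 1000 == 0:
        sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:5.1f}  separation = {sep:.2e}")
    a = lorenz_step(*a)
    b = lorenz_step(*b)
```

Run it and the separation climbs by orders of magnitude until the two trajectories bear no resemblance to each other – the hallmark of deterministic chaos.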


Spaghetti plots are a familiar sight during hurricane season. Because no single model can adequately account for every factor influencing a storm’s track, meteorologists often rely on a composite of many different models to determine the most likely outcomes. Although the models often largely agree, indicating a high degree of confidence in the predicted result, it is not uncommon for individual projections to differ significantly.

In the short term, i.e. for weather less than about two weeks out, these uncertainties are manageable. But when climatologists want to make seasonal or even longer climate projections, the resulting chaos makes it extremely difficult to produce high-quality forecasts.

To counter this problem, scientists typically run a series of simulations, each using slightly different initial temperatures, wind speeds, and other parameters, then combine the results into a map of likely outcomes. These “ensemble predictions” tend to lose their predictive power on longer time scales – think of a spaghetti plot with strands pointing in entirely different directions – but they capture the inherently uncertain nature of the science.
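As a rough sketch of the idea – a toy built on the Lorenz model above, not any operational forecasting system – one can perturb the starting state many times over, run every copy forward, and summarize the bundle with its mean and spread. The perturbation size and member count here are arbitrary.

```python
import random

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz-63 system (same toy model as above)."""
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

random.seed(0)
n_members, n_steps = 50, 1500

# Each ensemble member starts from the same analysis, nudged by a small
# random error standing in for imperfect measurements.
members = [(1.0 + random.gauss(0, 1e-3),
            1.0 + random.gauss(0, 1e-3),
            1.0 + random.gauss(0, 1e-3)) for _ in range(n_members)]

for _ in range(n_steps):
    members = [lorenz_step(*m) for m in members]

xs = [m[0] for m in members]
mean = sum(xs) / n_members
spread = (sum((v - mean) ** 2 for v in xs) / n_members) ** 0.5
print(f"ensemble mean x = {mean:+.2f}, spread = {spread:.2f}")
# A small spread means the "spaghetti" strands still agree; a large one
# means they have fanned out and the forecast is genuinely uncertain.
```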

Or do they? Last year, Adam Scaife and Doug Smith from the UK Met Office published a journal article in npj Climate and Atmospheric Science highlighting what they see as a “paradox” in ensemble forecasting. The crux of the matter is this: for many models, the ensemble prediction provides a poor guide to any single simulated outcome. On its own, this would simply suggest that chaos reigns and there is little predictability – except for the surprising fact that the same ensemble produces much more accurate predictions of the single outcome that actually occurs in the real world. In other words, these models predict the real world better than they predict themselves!



Figure 1. This graph shows how ensemble predictions are better at predicting the actual North Atlantic Oscillation (in black) than a simulated one (in blue). The horizontal axis shows the number of individual simulations contributing to each ensemble forecast, while the vertical axis measures the correlation between the ensemble-mean forecast and the year-to-year variations of the North Atlantic Oscillation (real or simulated). Image credit: Scaife and Smith

This may seem like a non-issue: if the models are better than we think, what’s the problem? But it means that scientists have tended to underestimate how reliable their models’ predictions really are. More importantly, Scaife and Smith showed that some phenomena previously thought to be unpredictable, including the fluctuations in atmospheric pressure known as the North Atlantic Oscillation, can in fact be predicted rather well with careful handling of the data.

Scientists don’t know exactly what causes this paradox, but Scaife thinks it may have a relatively simple interpretation. Predictions from individual climate models typically capture the inherent variability of the systems being observed, he says, but most of that variability is due to noise in the data – regions where the results are unpredictable and unreliable. “This means that the model predictions each contain a smaller proportion of predictable variability than is found in the real world,” he says.


The large amount of noise in the simulations means that the models are, generally speaking, less predictable than the real world. When an average is taken over many individual simulations, however, as in an ensemble prediction, the noise tends to cancel out, leaving only the predictable “signal”. Because any single simulation contains a lot of noise, it is likely to disagree with the ensemble-mean prediction, while the real-world outcome, which contains less noise, agrees with it better. Hence the paradoxical result that the model predicts the real world better than it predicts itself.
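This mechanism is easy to reproduce in a toy statistical model – my own construction for illustration, not the authors’ analysis. Give the “real world” and every ensemble member the same predictable signal, but give the members a larger share of noise; averaging over members then washes out their noise, and the ensemble mean correlates better with reality than with any one of its own members.

```python
import random
import statistics

random.seed(1)

def corr(a, b):
    """Pearson correlation between two equal-length sequences."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / (sum((x - ma) ** 2 for x in a) ** 0.5 *
                  sum((y - mb) ** 2 for y in b) ** 0.5)

n_years, n_members = 200, 40
signal = [random.gauss(0, 1) for _ in range(n_years)]  # predictable part

# The real world: the signal plus modest unpredictable noise.
truth = [s + random.gauss(0, 1) for s in signal]

# Each model member: the SAME signal, but drowned in more noise,
# mimicking models whose predictable fraction is too small.
members = [[s + random.gauss(0, 2) for s in signal]
           for _ in range(n_members)]

# Averaging the members cancels their noise, leaving mostly signal.
ens_mean = [statistics.fmean(m[y] for m in members)
            for y in range(n_years)]

print(f"ensemble mean vs real world: r = {corr(ens_mean, truth):.2f}")
print(f"ensemble mean vs one member: r = {corr(ens_mean, members[0]):.2f}")
```

With these (arbitrary) noise levels, the first correlation comes out noticeably higher than the second – the paradox in miniature.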


There is no easy resolution to this paradox, and some scientists aren’t even sure it really exists. Still, the study provides intriguing evidence that climate and weather could be much more predictable than we thought.


Maybe our days of blaming the butterflies are finally over.

–Eleanor Hook


Eleanor Hook is a freelance science writer based in Chapel Hill, North Carolina. She is a regular contributor to Physics Buzz, where she writes about everything from dead fish to lasers in space.


*Lorenz actually added the caveat that, since a butterfly’s influence is confined to a very small volume, its effect is likely to grow to larger scales only in turbulent air.