What COVID forecasters can learn from climate models


A second national lockdown began in England on November 5. Credit: Hollie Adams/AFP/Getty

Epidemiologists predicting the spread of COVID-19 should adopt climate modeling methods to make predictions more reliable, say computer scientists who have spent months auditing one of the pandemic’s most influential models.

In a study uploaded to the preprint platform Research Square on November 6, researchers commissioned by the Royal Society in London used a powerful supercomputer to re-examine CovidSim, a model developed by a group at Imperial College London. In March, that simulation helped convince British and American politicians to introduce lockdowns to avert the deaths it predicted, but it has since come under scrutiny from researchers who question the reliability of its results.

The analysis, which has yet to be peer-reviewed, shows that because the researchers did not appreciate how sensitive CovidSim was to small changes in its inputs, their results overestimated the extent to which a lockdown was likely to reduce deaths, says Peter Coveney, a chemist and computer scientist at University College London, who led the study.

Coveney is reluctant to criticize the Imperial group, led by epidemiologist Neil Ferguson, which he says did the best job possible under the circumstances. And the model rightly showed that “doing nothing at all would have dire consequences,” he says. But he argues that epidemiologists should stress-test their simulations by running “ensemble” models, in which thousands of versions of the model are run with a range of assumptions and inputs, to provide a spread of scenarios with different probabilities. These “probabilistic” methods are common in fields of high-performance computing from weather forecasting to molecular dynamics. Coveney’s team has now done this for CovidSim: the results suggest that, had the model been run as an ensemble, it would have predicted a range of probable death tolls under lockdown whose average was twice as high as the original prediction, and closer to the actual numbers.
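To make the ensemble idea concrete, here is a minimal Python sketch: a deliberately simple stand-in model is run thousands of times with inputs drawn from plausible ranges, and the output is reported as a distribution rather than a single number. The toy model, parameter names, and ranges are all illustrative assumptions, not CovidSim’s actual inputs.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_epidemic_model(r0, ifr, contact_reduction, population=66_000_000):
    """Toy stand-in for one simulator run: projected deaths under an
    intervention that scales down the reproduction number."""
    r_eff = r0 * (1 - contact_reduction)
    # Crude final-size approximation: no large outbreak unless R_eff > 1.
    attack_rate = 1 - 1 / r_eff if r_eff > 1 else 0.0
    return population * attack_rate * ifr

# Each ensemble member draws its own plausible inputs.
deaths = np.array([
    toy_epidemic_model(
        r0=rng.uniform(2.0, 3.5),                 # basic reproduction number
        ifr=rng.uniform(0.005, 0.015),            # infection fatality ratio
        contact_reduction=rng.uniform(0.4, 0.8),  # effect of the lockdown
    )
    for _ in range(5_000)
])

# Report a distribution, not a single point estimate.
print(f"median deaths: {np.median(deaths):,.0f}")
print(f"5th-95th percentile: {np.percentile(deaths, 5):,.0f} "
      f"to {np.percentile(deaths, 95):,.0f}")
```

The percentile spread is the point: it conveys how uncertain the projection is, which a single run cannot.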

“CovidSim may be presented as the most complicated epidemiological model, but it is almost like a toy compared with really high-end supercomputing applications,” says Coveney, who was asked to check the model’s performance as part of the Royal Society’s Rapid Assistance in Modelling the Pandemic (RAMP) initiative.

Ensemble calculations

Coveney’s team used the Eagle supercomputer at the Poznan Supercomputing and Networking Center in Poland to perform 6,000 separate runs of CovidSim, each with a unique set of input parameters. These represent characteristics of the pandemic, including the infectivity and lethality of the virus, the likely number of contacts people make in various settings, and the estimated success of measures such as telling people to work from home. In March, the inputs for many of these parameters were educated guesses, some drawn from preliminary data on the virus and others based on experience with illnesses such as the flu.

Models that predict the spread of disease are often based on hundreds of parameters, which can introduce uncertainty. “The people who set up the RAMP initiative were concerned that the models epidemiologists are working with contain an absurd number of parameters, and that they might not be right,” Coveney says.

His team found 940 parameters in the CovidSim code, but narrowed them down to the 19 that most affected the output. And up to two-thirds of the differences in the model’s results could be attributed to changes in just three key variables: the length of the latency period during which an infected person has no symptoms and cannot transmit the virus; the effectiveness of social distancing; and how long after being infected a person is isolated.
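The variance attribution described above can be estimated with first-order sensitivity indices, S_i = Var(E[Y|X_i]) / Var(Y), which measure the share of output variance explained by each input on its own. Below is a minimal Python sketch of that estimator on a toy 19-parameter model; the model and the binning estimator are illustrative assumptions, not the study’s actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy stand-in: output depends nonlinearly on a few of many inputs.
    latency, distancing, isolation_delay = x[0], x[1], x[2]
    return (np.exp(2 * latency) + 3 * distancing
            + isolation_delay**2 + 0.1 * x[3:].sum())

n_params, n_samples = 19, 20_000
X = rng.uniform(0, 1, size=(n_samples, n_params))
Y = np.apply_along_axis(model, 1, X)

def first_order_index(xi, y, bins=40):
    """Estimate S_i = Var(E[Y | X_i]) / Var(Y) by binning X_i."""
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(xi, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

S = [first_order_index(X[:, i], Y) for i in range(n_params)]
for i in np.argsort(S)[::-1][:3]:   # the three most influential inputs
    print(f"parameter {i}: S_i = {S[i]:.2f}")
```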

The study suggests that small variations in these parameters can have a disproportionate, nonlinear impact on the model’s output. For example, most of the team’s thousands of runs suggested that the death toll under a UK lockdown would be much higher than the Imperial team’s initial projections, five to six times higher in some cases. Even the ensemble average suggested twice as many deaths as the Imperial group had predicted.

In one modeled scenario, which assumed that the United Kingdom would lock down once 60 people a week were being admitted to intensive care, the March report predicted a total of 8,700 deaths in the country. The probabilistic results produced by Coveney’s group put this figure at around 15,000 on average, but indicate that more than 40,000 deaths are possible, depending on the parameters used. It is difficult to compare these projections with actual figures for COVID-19 deaths in the UK, because the lockdown began a week later than the modeled scenario assumed, by which time significantly more disease was circulating.

“They didn’t get it right,” Coveney says. “They ran the simulation correctly; they just didn’t know how to get the right probabilistic description out of it. That would mean having to run ensembles of calculations.” Coveney says he cannot tell whether running an ensemble model would have changed policy, but Rowland Kao, an epidemiologist and data scientist at the University of Edinburgh, UK, points out that the government compares and synthesizes the results of several different COVID-19 models. “It would be an oversimplification to suggest that decision-making rests on a single model,” he says.

Improved models

Ferguson accepts most of Coveney’s arguments about the benefits of probabilistic predictions, but says “we just weren’t able to do that in March”. The Imperial group has significantly improved its models since then, he adds, and can now produce probabilistic results. For example, it now represents the uncertainty in CovidSim’s inputs using Bayesian statistical tools, which are already common in some epidemiological models of diseases such as foot-and-mouth disease in livestock. And a simpler model, he adds, was used to inform the UK government’s decision to reintroduce lockdown measures in England this month. That model is more nimble than CovidSim: “Because we can run it multiple times a week, it’s much easier to fit it to the data in real time, which allows for uncertainty,” says Ferguson.
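As a hedged illustration of the Bayesian approach Ferguson describes, the sketch below fits one uncertain input, an epidemic growth rate, to synthetic daily admissions data and returns a posterior distribution rather than a point estimate. The growth model, prior, and data are made up for illustration; this is not the Imperial group’s actual tooling.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observed" daily admissions, generated for illustration only.
days = np.arange(30)
observed = rng.poisson(10 * np.exp(0.08 * days))

# Grid-approximate the posterior over the growth rate r, assuming a
# uniform prior on [0, 0.2] and Poisson counts with mean 10 * exp(r * t)
# (the log(k!) term is constant in r, so it is dropped).
r_grid = np.linspace(0.0, 0.2, 500)
log_post = np.array([
    np.sum(observed * (np.log(10) + r * days) - 10 * np.exp(r * days))
    for r in r_grid
])
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Summarize the input's uncertainty; this distribution, not a single
# value, is what gets propagated through the epidemic model.
mean_r = np.sum(r_grid * post)
cdf = np.cumsum(post)
lo = r_grid[np.searchsorted(cdf, 0.025)]
hi = r_grid[np.searchsorted(cdf, 0.975)]
print(f"posterior mean r: {mean_r:.3f}, 95% interval: [{lo:.3f}, {hi:.3f}]")
```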

“It sounds like a step in the right direction and is consistent with the conclusions of our article,” Coveney says.

The choice of technique often comes down to a computational trade-off, says Ferguson. “If you want to characterize all the uncertainties correctly on a routine basis, then it’s much easier with a less computationally intensive model.”

Bayesian tools are an improvement, says Tim Palmer, a climate physicist at the University of Oxford, UK, who pioneered the use of ensemble modeling in weather forecasting. But only ensemble modeling techniques performed on the most powerful computers will provide the most reliable pandemic projections, he says. Such techniques have transformed the reliability of climate models, he adds, aided by the coordination of the Intergovernmental Panel on Climate Change (IPCC).

“We need something like the IPCC for these pandemic models. We need some kind of international facility where these models can be developed properly,” Palmer says. “This was rushed because of the urgency of the situation. But to get this right, we need some kind of international organization that can work on synthesizing epidemiological models from around the world.”
