Newswise — Weather forecasters rely on statistical models to find and sort patterns in large amounts of data. Still, the weather remains stubbornly difficult to predict because it is constantly changing.

“When we measure the current state of the atmosphere, we are not measuring every point in three-dimensional space,” says Paul Roebber, a meteorologist at the University of Wisconsin-Milwaukee. “We’re interpolating what happens in the in-between.”

To boost accuracy, forecasters don’t rely on just one model. They use “ensemble” modeling, which takes an average of many different weather models. But ensemble modeling isn’t as accurate as it could be unless new data are collected and added, and that can be expensive.
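The ensemble idea itself is simple to express. The sketch below averages the predictions of several forecast models; the model names and temperature values are invented for illustration and are not from Roebber's work:

```python
# Toy illustration of ensemble averaging: each "model" gives a forecast
# of tomorrow's high temperature in degrees Celsius (made-up numbers).
forecasts = {
    "model_a": 21.4,
    "model_b": 23.1,
    "model_c": 22.0,
    "model_d": 24.5,
}

# The ensemble forecast is simply the mean of the individual forecasts.
ensemble_mean = sum(forecasts.values()) / len(forecasts)
print(round(ensemble_mean, 2))  # 22.75
```

Averaging tends to cancel out the individual models' errors, which is why ensembles usually beat any single member.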

So Roebber applied a mathematical equivalent of Charles Darwin’s theory of evolution to the problem. He devised a method in which one computer program sorts through 10,000 others, improving them over time using strategies such as heredity, mutation and natural selection.

“This was just a pie-in-the-sky idea at first,” says Roebber, a UWM distinguished professor of atmospheric sciences, who has been honing his method for five years. “But in the last year, I’ve gotten $500,000 of funding behind it.”

His forecasting method has outperformed the models used by the National Weather Service. When compared to standard weather prediction modeling, Roebber’s evolutionary methodology performs particularly well on longer-range forecasts and extreme events, when an accurate forecast is needed the most.

Between 30 and 40 percent of the U.S. economy is somehow dependent on weather prediction. So even a small improvement in the accuracy of a forecast could save millions of dollars annually for industries like shipping, utilities, construction and agribusiness. 

The trouble with ensemble models is that the data they contain tend to be too similar. That makes it difficult to distinguish relevant variables from irrelevant ones – what statistician Nate Silver calls the “signal” and the “noise.”

How do you gain diversity in the data without collecting more of it? Roebber was inspired by how nature does it.

Nature favors diversity because it foils the possibility of one threat destroying an entire population at once. Darwin observed this in a population of Galapagos Islands finches in 1835. The birds divided into smaller groups, each residing in different locations around the islands. Over time, they adapted to their specific habitat, making each group distinct from the others.

Applying this to weather prediction models, Roebber began by subdividing the existing variables into conditional scenarios: The value of a variable would be set one way under one condition, but be set differently under another condition.
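A conditional variable of this kind is easy to picture in code. The snippet below is a hypothetical illustration only; the parameter name, the humidity condition, and the values are invented, not taken from Roebber's models:

```python
def conditional_weight(humidity):
    """Hypothetical conditional variable: the same model parameter
    takes one value under one atmospheric condition and a different
    value otherwise."""
    if humidity > 0.8:
        return 0.6  # value used under humid conditions
    return 0.3      # value used under all other conditions

print(conditional_weight(0.9))  # 0.6
print(conditional_weight(0.5))  # 0.3
```

Splitting a single variable into condition-dependent versions is one way to inject diversity into a population of models without collecting any new observations.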

The computer program he created picks out the variables that best accomplish the goal and then recombines them. In terms of weather prediction, that means the “offspring” models improve in accuracy because they block more of the unhelpful attributes.

“One difference between this and biology is, I wanted to force the next generation [of models] to be better in some absolute sense, not just survive,” Roebber said.
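The ingredients described above – heredity, mutation, selection, and a requirement that offspring beat the current best rather than merely survive – can be sketched with a generic evolutionary algorithm. Everything here (the target weights, fitness function, population size) is an invented toy problem, not Roebber's actual system:

```python
import random

random.seed(1)  # fixed seed so the toy run is repeatable

# Each candidate "model" is a list of three weights; fitness is mean
# squared error against an invented target. Lower error is better.
TARGET = [0.2, 0.5, 0.3]  # hypothetical ideal weights

def error(weights):
    """Mean squared distance from the target."""
    return sum((w - t) ** 2 for w, t in zip(weights, TARGET)) / len(TARGET)

def crossover(a, b):
    """Heredity: each weight is inherited from one parent at random."""
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(weights, scale=0.1):
    """Mutation: small Gaussian perturbation of each weight."""
    return [w + random.gauss(0, scale) for w in weights]

population = [[random.random() for _ in range(3)] for _ in range(50)]
initial_error = min(error(p) for p in population)

for generation in range(100):
    best = min(population, key=error)
    best_error = error(best)
    parents = sorted(population, key=error)[:10]  # natural selection
    next_gen = []
    while len(next_gen) < len(population):
        child = mutate(crossover(*random.sample(parents, 2)))
        # Roebber's stricter criterion: a child joins the next
        # generation only if it beats the current best in an absolute
        # sense; otherwise a copy of the best survives in its place.
        next_gen.append(child if error(child) < best_error else best)
    population = next_gen

final_error = min(error(p) for p in population)
```

Because every member of each new generation is either an improvement on the previous best or a copy of it, the population's best error can never get worse – the computational analogue of forcing each generation to be better, not just to survive.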

He is already using the technique to forecast minimum and maximum temperatures for seven days out.

Roebber often thinks across disciplines in his research. Ten years ago, he was at the forefront of building forecast simulations that were organized like neurons in the brain. From that work, he created an “artificial neural network” tool, now used by the National Weather Service, that significantly improves snowfall prediction.