Two of today’s most important global trends are the return of nationalism and the explosion of privately held high-quality data. The potential side effects of both are on vivid display in one unexpected endeavor: weather forecasting.
In his new book, “The Weather Machine: A Journey Inside the Forecast,” journalist Andrew Blum explains how rapidly forecasts have been improving. Quality is gaining roughly a day a decade, so that a 5-day forecast is now about as good as a 4-day forecast was a decade ago, and a 2-day forecast 30 years ago. The improvements, Blum notes, have been achieved not by a single government agency or company but by “an international construction, a carefully conceived and continuously running system of systems, tuned to an endless loop of observing the weather, predicting the weather, and observing it all over again.”
My father, a math professor at MIT who studied computational fluid dynamics, played a modest role in this history. Our family spent part of 1970 in Boulder, Colorado, while he visited the National Center for Atmospheric Research. I remember wondering at the time how someone like my father, who specialized in computer models of turbulence, could help with practical weather forecasting. But there is a connection, which Sergej Zilitinkevich, a Finnish scholar of meteorology, described: “Turbulence is the key to the atmospheric ‘machine.’ We cannot understand weather systems if we do not understand the connections between their parts.” Being able to use a computer to efficiently simulate turbulent flows was thus crucial to building better weather models.
Improvements in weather forecasting are built upon a collection of such advances, one after another. The two key drivers have been computer modeling and satellites — and these have advanced through international cooperation, as weather data and modeling improvements have been shared across countries for well over a century. As the World Meteorological Organization states in its mandate, “weather, climate and the water cycle know no national boundaries.”
Those who believe the US can do almost anything better by itself, with no help from others, are clearly mistaken in weather forecasting. They should take note, for example, of European excellence in this area. Blum pays particular tribute to the European Centre for Medium-Range Weather Forecasts, whose model is informally known as the “Euro.” Compared with other global models, including those developed in the US and the UK, he says, “the Euro is the most accurate the furthest out in time (if sometimes only slightly). It is also the most improved, the most often.”
Maintaining international cooperation on weather forecasting becomes harder as nationalist pressures build up in other arenas and as people worry more about cyber espionage. The Chinese, for example, have been accused of hacking US weather systems.
At the same time, as the quality of privately collected data continues to improve, the question arises of how to integrate public and private data. This same question comes up in economic forecasting, as I’ve noted previously, but it is also salient in weather forecasting. Weather models have been able to advance as far as they have because public-sector data and information are freely exchanged. Will private companies keep the data they collect to themselves and use it to construct their own proprietary models? How such private activities can best complement public weather data and modeling remains an open question.
If there is a decline in international cooperation and data sharing, the quality of weather forecasting will stagnate, even within the US. As David Grimes, the outgoing head of the World Meteorological Organization, has said, “It would take three days before the United States would realize that they couldn’t live in a vacuum.” –Bloomberg