
Implications of AI Usage in Weather Prediction

Google's GraphCast model showed a neural net's ability to outperform a traditional numerical weather prediction (NWP) model in creating weather forecasts for the next ten days (see the paper linked below). What implications does this have for how we think about weather, and for the usefulness of meteorological knowledge at all?

by Max H. Balsmeier - July 7th, 2023

Weather prediction, the old way

Traditional weather prediction comprises several complicated steps. It begins with observations from satellites, radiosondes, weather stations, aircraft, and ships. This data is heterogeneous, and its spatial distribution is highly irregular. Furthermore, each kind of observation is different: a satellite measures something different than a terrestrial weather station, for example. Based on this data, model variables are calculated in a process called data assimilation. In data assimilation, the observations and the model's ability to simulate physics are combined to produce the very first step of a model run, the analysis.
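
To make the idea of data assimilation a bit more concrete, here is a minimal sketch in Python. It uses a strongly simplified, scalar optimal-interpolation-style update on synthetic data; the grid, the error variances, and the gain are purely illustrative and do not correspond to any operational scheme.

```python
import numpy as np

# Minimal illustration of the data assimilation idea: blend a model
# background field with sparse observations, weighted by assumed error
# variances (a strongly simplified, scalar optimal-interpolation update).

np.random.seed(0)

n_grid = 100                                    # 1-D grid of model points (illustrative)
background = 15.0 + np.random.randn(n_grid)     # prior model state, e.g. temperature in deg C

obs_idx = np.array([5, 30, 62, 90])             # grid points where observations exist
obs = background[obs_idx] + np.random.randn(obs_idx.size)  # synthetic observations

var_background = 1.0                            # assumed background error variance
var_obs = 0.5                                   # assumed observation error variance
gain = var_background / (var_background + var_obs)  # scalar Kalman-type gain

# The analysis equals the background, nudged towards the observations at
# the observed points; a real scheme also spreads the correction to
# neighbouring grid points via error covariances.
analysis = background.copy()
analysis[obs_idx] += gain * (obs - background[obs_idx])

print("mean correction at observed points:",
      np.mean(np.abs(analysis[obs_idx] - background[obs_idx])))
```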

The next step is the execution of the model itself: it calculates the next weather state based on the current state. This step is repeated hundreds of times, until a complete forecast (for one day, one week, or even longer) has been produced. Examples of weather models are GFS, ICON, AROME, and IFS.
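
The stepping itself can be pictured as a simple loop. The following sketch is purely schematic: `step` is a placeholder standing in for the actual model dynamics, and the time step and horizon are arbitrary example values.

```python
# Schematic forecasting loop: the model advances the state one time step
# at a time until the forecast horizon is reached.
# `step` is a placeholder for the actual model dynamics.

def step(state, dt_hours):
    # Placeholder dynamics: a real model would solve the governing
    # physical equations here.
    return {k: v for k, v in state.items()}

def run_forecast(analysis, horizon_hours=240, dt_hours=0.5):
    state = analysis
    n_steps = int(horizon_hours / dt_hours)   # hundreds of steps for a 10-day forecast
    for _ in range(n_steps):
        state = step(state, dt_hours)
    return state

forecast = run_forecast({"temperature": 15.0, "pressure": 1013.25})
```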

After the model has produced a forecast, further steps are executed. One such step is model output statistics (MOS), which national meteorological agencies have used for decades to improve model output. Errors in model output can be categorized into statistical errors and systematic errors. Statistical errors are randomly distributed; they can only be reduced by running the model at a higher resolution or through other improvements in the model or the data assimilation. Systematic errors, on the other hand, follow a pattern; they are not totally random. For example, it could be that a forecast model tends to underestimate daily maximum temperatures in sunny weather by one degree Celsius. If one knows this, one can correct the error by adding an additional degree Celsius to the model output. Many more such patterns can be detected. Today, such a procedure would be labelled machine learning, because the more data is collected, the better the patterns can be detected and the more effectively they can be corrected for.
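
As a minimal sketch of such a MOS-style correction, the following snippet estimates and removes a constant bias from synthetic data. The one-degree bias mirrors the example above; all numbers are made up for illustration.

```python
import numpy as np

# Minimal MOS-style correction of a systematic error, on synthetic data.
# Mirrors the example in the text: the model underestimates daily maximum
# temperature by about one degree Celsius.

np.random.seed(1)

true_tmax = 25.0 + 3.0 * np.random.randn(365)                   # "observed" daily maxima
forecast_tmax = true_tmax - 1.0 + 0.8 * np.random.randn(365)    # biased model output

# Learn the systematic part of the error from past forecast/observation pairs.
bias = np.mean(true_tmax - forecast_tmax)                       # roughly +1 deg C

# Apply the learned correction to the model output.
corrected = forecast_tmax + bias
print(f"estimated bias: {bias:.2f} deg C")
print(f"mean abs error before: {np.mean(np.abs(forecast_tmax - true_tmax)):.2f}")
print(f"mean abs error after:  {np.mean(np.abs(corrected - true_tmax)):.2f}")
```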

Further steps are graphical visualization - in weather apps, on maps, in vertical cross-sections, in meteograms, etc. - and other forms of post-processing.
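
As a small illustration of this post-processing step, the following sketch draws a meteogram - forecast values at one location plotted against lead time - from synthetic data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch of one visualization product: a meteogram, i.e. forecast
# variables at a single location as a function of lead time.
# The data here is synthetic.

lead_time_h = np.arange(0, 240, 3)  # 10-day forecast, 3-hourly output
temperature = 15 + 8 * np.sin(2 * np.pi * lead_time_h / 24) + 0.01 * lead_time_h

plt.plot(lead_time_h, temperature)
plt.xlabel("lead time (hours)")
plt.ylabel("2 m temperature (deg C)")
plt.title("Meteogram (synthetic example)")
plt.show()
```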

The impact of AI on the weather prediction workflow

Now, what changes in this workflow when AI is used? First of all, it depends on how AI is actually employed to improve the forecasting routine. If AI is used to improve the MOS part of the procedure by detecting error patterns in model output, or to improve details deep within the mechanics of the models themselves or of the data assimilation, not much is expected to change. Certain steps of the procedure would be improved, maybe even dramatically, by AI, but the workflow itself would stay the same.

Google's GraphCast model, however, is different. It is not a weather model as we are used to. Traditional weather models of the kind used operationally today to produce forecasts are based on the simulation of physical laws from very different areas of physics: classical mechanics, thermodynamics, and even quantum mechanics. These laws, and even more so the methods to simulate them efficiently, have been worked on for decades or even centuries. Models are the product of a long struggle by hundreds of universities, institutes, professors, working groups, scientists, programmers, and other specialists for an improved understanding and for the implementation of this understanding. Neural-net-based models, if they can even be called models, are very different. Rather than being based on knowledge of physical laws and mathematics, they are primarily based on data that is used to train a neural net. The human effort of casting scientific knowledge into computer code is largely circumvented - and outperformed - by AI.

GraphCast is able to replace one step of the current weather prediction workflow - the model - at least for global coarse-resolution medium-range forecasts, while the same principle can also be applied to higher-resolution or limited-area models. It should not be impossible, however, to also replace the very first step, the data assimilation, with a neural-net-based approach. Data assimilation generates regular, gridded data from irregular, heterogeneous observations - a task that a neural net could be trained to do, given the right training data. A challenge would be the varying number and type of observations, and the fact that sensors and equipment break, producing unreliable data that needs to be filtered out. Once this is achieved, meteorological observations could be mapped to a ten-day forecast probably within minutes of computing time - compared to the hours needed for the current workflow. On top of that, MOS also becomes obsolete when employing AI-based models, since the AI will be able to detect and minimize error patterns in the forecast by itself.
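
To illustrate the idea of learning data assimilation (not GraphCast, and not any operational scheme), the following sketch trains a small neural net on synthetic data to map a fixed set of irregularly placed observations onto a gridded field. Handling a varying number of observations and broken sensors would require additional machinery such as masking and quality control; everything here is illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative sketch: train a neural net to map a fixed-size vector of
# irregularly placed observations to a gridded analysis field.
# Synthetic 1-D data only.

rng = np.random.default_rng(0)
n_samples, n_obs, n_grid = 2000, 40, 100

# Synthetic "truth": smooth 1-D fields built from a few random sine waves.
x = np.linspace(0, 2 * np.pi, n_grid)
truth = np.stack([
    np.sum([rng.normal() * np.sin((k + 1) * x + rng.uniform(0, 2 * np.pi))
            for k in range(3)], axis=0)
    for _ in range(n_samples)
])

# Observations: the truth sampled at fixed irregular locations, plus noise.
obs_locations = rng.choice(n_grid, size=n_obs, replace=False)
observations = truth[:, obs_locations] + 0.1 * rng.normal(size=(n_samples, n_obs))

# Train on most samples, evaluate the reconstruction on the rest.
model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=300)
model.fit(observations[:1500], truth[:1500])

reconstructed = model.predict(observations[1500:])
rmse = np.sqrt(np.mean((reconstructed - truth[1500:]) ** 2))
print(f"RMSE of reconstructed gridded field: {rmse:.3f}")
```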

Is studying science still a wise thing to do?

The logic behind replacing large portions of the weather prediction workflow with AI can be applied far beyond the scope of weather forecasting and even science. It raises the broader question of the usefulness of studying to learn about science, nature, and the many other disciplines needed to actually understand something. Why bother to learn all this if in the end AI will be better anyway? Sure, one could argue that the solution is to learn AI instead of sciences like physics, chemistry, or meteorology. But a small team of AI specialists has been able to outperform what generations of model developers built before them - indicating that the number of AI specialists needed to reach breakthrough successes is much smaller than the number of traditional (non-AI) scientists that would have been required to reach the same success.

Links