ENSEMBLE FORECASTING IN NEW ZEALAND
(Tony Simmers)
Meteorological Service of New Zealand Limited, PO Box 722, Wellington, New Zealand
Abstract: The Meteorological Service of New Zealand (MetService) is in the early stages of deciding how best to use ensemble prediction system data in its operational products. Current use is limited to the provision of customised forecasts for individual clients on a case-by-case basis. We are seeking to develop more automated ways of using the data, while retaining a degree of forecaster involvement in the process.
Forecasters at MetService already appreciate the benefits of taking an ensemble approach to their choice of global model. They currently use several simple tools to compare different models with each other, and to compare the drift in solution across sequential runs of the same model. All our forecasters have also completed a web-based training module that introduces them to the idea of ensemble prediction systems. This module is based around the Southern Hemisphere images NCEP make available on their ftp server, and it refers to the excellent material prepared by EuroMet and the AMS review article by Sivillo et al (1997).
The next steps towards making use of ensemble data are to manage the volume of data effectively, and to generate forecasts that customers (and forecasters) can readily understand. As modelling centres increase the resolution and the number of members in each ensemble, obtaining the data becomes more difficult. If sectors of limited area were available, file sizes would decrease dramatically, as they would with the provision of some pre-processed fields such as ensemble average and spread. However, ensemble averages tend to be bland, especially at long lead times, and lay people find it hard to understand the reasons for this. MetService is interested in investigating techniques, such as clustering, that would allow us to provide a forecast of high temporal and spatial resolution that is in some way typical of the ensemble set, together with information about the underlying uncertainty.
Introduction
MetService does not currently have an in-house ensemble forecast system. We have, however, kept a watching brief as larger centres have developed their own ensemble systems. In 1999, the increasing availability of ensemble data and the growing understanding of how ensemble systems behaved meant we could start planning how to use ensemble data in New Zealand.
As a first step, all forecasters were introduced to the concepts behind ensemble systems by way of a short, web-based course. This course drew heavily on material available on the EuroMet web site1, the review paper by Sivillo et al (1997), and the information and data from the NCEP ensemble web pages2. The aims of the course were to introduce forecasters to the idea of ensembles, and to some of the ways the data could be displayed, before they needed to use the data operationally. Although there were practical examples in the course, no attempt was made to introduce changes to operational practice. Since the initial training, forecasters have had access to southern-hemisphere spaghetti and standard deviation charts from NCEP, although only the shift responsible for forecasts on days 3-5 regularly looks at the images.
Operational Use of Ensemble Ideas
The ensemble approach to forecasting has been used by forecasters in New Zealand for as many years as they have had a choice of models. When evaluating the available guidance, a key decision for forecasters is which model, perhaps with small adjustments, to base the forecast room's guidance on. To do this, forecasters look at the consistency of the last several runs from a single centre, and they also look for differences between the various models at the same validity time. One of the tools they have to help them in this task is a simple, table-based web page (figure 1). Forecasters can also loop charts with the same validity time that have come from successive runs of a single model.
Using techniques like those above has developed forecasters' ability to pick the 'right' model. In an informal seven-month study of days when there were significant differences between the American Medium Range Forecast model (MRF) and the global run of the UK Met Office's Unified Model, forecasters picked the best solution two times out of three. Pattern recognition, and the identification of differences when they are significant, is at the heart of MetService's ability to add value to the model data we receive. As sets of ensemble data are introduced to the forecast room we anticipate forecasters will continue to be involved in the production of our forecasts.
______
1http://euromet.meteo.fr/Rb.html (password required)
2http://sgi62.wwb.noaa.gov:8080/ens/enshome.html
Forecasts for days 5-15
Apart from a two-monthly outlook, MetService does not routinely issue forecasts for lead times greater than five days. The main reason for this is a lack of comfort with the accuracy, and ultimately the value, of products based on a single run of a deterministic model. However, we do issue forecasts out to two weeks on a case-by-case basis to individual customers. Customers requesting this service are typically organising a large, weather-critical event and are able to afford the relatively high cost of a one-off forecast.
Customer response to the two-week forecasts has been good. For each day out to day 14 the forecasts contain a brief worded forecast together with the probability of precipitation (>1 mm, >10 mm), a maximum temperature within a range of 3-4°C, and an indication of overall confidence. Because the forecasts are prepared for individual customers, there is the opportunity to stress to them that timings in the second week are indicative only, and that trends are likely to prove more reliable than specific values of a weather element.
Figure 1: Screen shot of the model comparison web page used at MetService. When the consistency of a model is being examined, the panels display spaghetti charts of previous runs verifying at the same time; when models are compared, the solutions from each of the three models are plotted on each chart.
The forecasts are based on the ensemble charts available from the NCEP web site. It was quickly discovered that additional interpretation seemed to improve the forecast significantly. Forecasts for a particular location are 'sharpened up' by using knowledge of how weather systems affect that place. One way this is done is by relating the forecast ensemble-mean 1000-500 hPa thickness to the climatological value of the thickness. The difference is used to estimate how much the actual temperature will be above or below normal. Temperature and rainfall probabilities are also adjusted in light of the significant influence the topography of New Zealand exerts on climate. Although objective verifications are not available because of the small number of forecasts that have been issued, customers are finding them useful and sometimes request a series of forecasts to cover the period of interest.
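Both adjustments amount to simple calculations over the ensemble members. The Python sketch below is illustrative only: the scaling factor relating thickness anomaly to surface temperature anomaly, the array names, and the example values are assumptions, not MetService's operational settings.

import numpy as np

# Illustrative scaling only: an assumed surface temperature response per
# decametre of 1000-500 hPa thickness anomaly, not an operational value.
THICKNESS_TO_TEMP = 0.6  # degC per dam (assumed)

def temp_anomaly_from_thickness(member_thickness_dam, climatological_thickness_dam):
    """Estimate how far the temperature will be above or below normal from
    the ensemble-mean 1000-500 hPa thickness anomaly at one location."""
    anomaly_dam = np.mean(member_thickness_dam) - climatological_thickness_dam
    return THICKNESS_TO_TEMP * anomaly_dam

def precip_probability(member_rain_mm, threshold_mm):
    """Fraction of ensemble members exceeding a rainfall threshold, with all
    members treated as equally likely."""
    return float(np.mean(np.asarray(member_rain_mm) > threshold_mm))

# Hypothetical 15-member values for one site on one day
thickness = [540.1, 541.3, 539.8, 542.0, 540.6, 538.9, 541.1, 540.4,
             539.5, 542.3, 540.9, 541.7, 539.2, 540.0, 541.5]
rain = [0.0, 2.5, 0.4, 12.0, 1.1, 0.0, 3.8, 0.2, 6.5, 0.0, 1.9, 0.7, 15.2, 0.0, 4.4]

print(temp_anomaly_from_thickness(thickness, 540.0))              # degC above/below normal
print(precip_probability(rain, 1.0), precip_probability(rain, 10.0))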
Issues for the Future
Managing the volume of data associated with ensemble sets is a practical but very real problem. As communication capacity increases and costs come down, so the number of ensemble members and their resolution increase. As more centres share their data, 'super ensembles' may become feasible, increasing the volume of data still further. Access to data over sectors of the globe, and to pre-processed fields, may help reduce the data volume; however, it may be more effective to make pragmatic choices about which fields are really necessary and whether a super ensemble would introduce more problems than it solves.
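As a rough illustration of how much a sector plus a few derived fields can save, the Python sketch below cuts a global ensemble field down to a New Zealand sector and keeps only the ensemble mean and spread. The grid, sector bounds, and field are hypothetical.

import numpy as np

def reduce_ensemble(field, lats, lons, lat_bounds, lon_bounds):
    """Cut a global ensemble field (member, lat, lon) down to a sector and
    return only the ensemble mean and spread over that sector."""
    lat_mask = (lats >= lat_bounds[0]) & (lats <= lat_bounds[1])
    lon_mask = (lons >= lon_bounds[0]) & (lons <= lon_bounds[1])
    sector = field[:, lat_mask][:, :, lon_mask]
    return sector.mean(axis=0), sector.std(axis=0)

# Hypothetical 15-member, 1-degree global set of 500 hPa heights
lats = np.arange(-90.0, 90.1, 1.0)
lons = np.arange(0.0, 360.0, 1.0)
z500 = np.random.normal(5500.0, 60.0, size=(15, lats.size, lons.size))

# Illustrative New Zealand sector
mean, spread = reduce_ensemble(z500, lats, lons, (-55.0, -25.0), (160.0, 185.0))
print(mean.shape, spread.shape)   # two small fields in place of 15 global ones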
The initial training conducted at MetService assumed that a given ensemble set covers the range of possible solutions adequately and that the individual ensemble members have equal weight. Future training will need to explore these assumptions more fully. However, monitoring the performance of a real ensemble set is at least as difficult as creating forecasts from the data. Good information from the issuing centre about the characteristics of the ensemble, and about whether any members should be weighted differently, would help users of the data to concentrate on how well the final forecasts verify.
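If an issuing centre did indicate that some members deserve less weight, the forecast calculations would change only slightly. A minimal sketch, assuming the weights are supplied alongside hypothetical member values:

import numpy as np

def weighted_ensemble_summary(members, weights, threshold):
    """Weighted ensemble mean and weighted probability of exceeding a
    threshold at one point. With equal weights this reduces to the ordinary
    mean and simple member counting."""
    members = np.asarray(members, dtype=float)
    w = np.asarray(weights, dtype=float) / np.sum(weights)  # normalise to sum to one
    return float(np.sum(w * members)), float(np.sum(w[members > threshold]))

# Hypothetical 5-member example with one downweighted member
print(weighted_ensemble_summary([12.0, 14.5, 13.2, 11.8, 19.0],
                                [1.0, 1.0, 1.0, 1.0, 0.5], 15.0))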
The feedback from customers is that they like, and feel they understand, the current type of forecast, in which daily values are presented against a background indication of the uncertainty in the forecast. Surveys have shown the public has a poor understanding of existing weather terminology, which suggests they will struggle to pick up any more than the most basic concepts of an ensemble forecast. For these reasons we anticipate the need to make forecasts that summarise the spread of values forecast by the members of an ensemble set rather than simply the mean of all the possibilities. Such an approach would require post-processing of each ensemble member (for example, by a MOS scheme) to take account of local effects, but would make it easier to produce forecasts such as a 'likelihood of frost'.
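A 'likelihood of frost' forecast of this kind reduces to post-processing each member's minimum temperature for the site and counting how many members fall to or below a frost threshold. In the sketch below a simple linear correction stands in for a full MOS scheme; the coefficients, the threshold, and the example values are assumed, not an operational calibration.

import numpy as np

# Stand-in for a MOS-style correction of each member's raw minimum
# temperature to the site; the coefficients are illustrative only.
MOS_SLOPE, MOS_INTERCEPT = 0.9, -1.2
FROST_THRESHOLD_C = 0.0

def frost_likelihood(raw_min_temps_c):
    """Post-process every ensemble member, then report the fraction of
    members giving a site minimum at or below the frost threshold."""
    corrected = MOS_SLOPE * np.asarray(raw_min_temps_c) + MOS_INTERCEPT
    return float(np.mean(corrected <= FROST_THRESHOLD_C))

# Hypothetical 10-member minimum temperatures for one night
print(frost_likelihood([2.1, 0.5, 3.4, 1.0, -0.2, 2.8, 1.5, 0.9, 4.0, 1.2]))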
Customers also want to see a 'single most likely scenario' in their forecast. They seem to understand that in such a forecast the trends will be more significant than specific values on individual days. The use of clustering techniques, especially with forecaster involvement, could make it easier to pick the single ensemble member that represents a 'typical solution' for some or all of the period.
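One simple way to extract such a scenario is to cluster the members on a forecast field and then take the member closest to the centroid of the largest cluster. The k-means sketch below (using scikit-learn) shows one possible form such a scheme could take; it is an illustration under assumed inputs, not a description of an operational system.

import numpy as np
from sklearn.cluster import KMeans

def representative_member(fields, n_clusters=3, random_state=0):
    """fields: array (n_members, ny, nx) of one forecast field.
    Returns the index of the member nearest the centroid of the most
    populated cluster -- one crude notion of a 'typical solution'."""
    flat = fields.reshape(fields.shape[0], -1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(flat)
    counts = np.bincount(km.labels_, minlength=n_clusters)
    biggest = int(np.argmax(counts))                 # most populated cluster
    members = np.where(km.labels_ == biggest)[0]
    distances = np.linalg.norm(flat[members] - km.cluster_centers_[biggest], axis=1)
    return int(members[np.argmin(distances)])

# Hypothetical 15-member set of 500 hPa height fields
fields = np.random.normal(5500.0, 60.0, size=(15, 40, 60))
print(representative_member(fields))

In practice a forecaster could be shown the members of each cluster and choose among them, rather than relying on the distance to the centroid alone.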
Conclusion
The production of a two-week forecast that has value and is scientifically justifiable requires the use of an ensemble prediction system. Making a forecast that is relevant to customers' needs, and that they will find easy to understand, is likely to require in-house processing of the individual members of an ensemble set. Forecasters will still have a role to play in producing these forecasts, particularly where worded summaries are needed or when a single most likely scenario is being chosen. The amount of data required to support these forecasts will have to be carefully considered, as will the method used to verify the forecasts.
Reference
Sivillo, Joel K., Jon E. Ahlquist, and Zoltan Toth, 1997: An Ensemble Forecasting Primer. Weather and Forecasting, 12, 809-818.