On the relation between weather-related disaster impacts, vulnerability and climate change

Authors: Hans Visser1*, Arthur C. Petersen1,2,3 and Willem Ligtvoet1

SUPPLEMENTARY MATERIAL

Appendix A The EM-DAT database

As described in Section 2.1, all the analyses in this article are based on the EM-DAT emergency database. This database is openly accessible and has been referenced in many publications (e.g. Birkmann 2013). Comparable databases are NatCat (Munich Re) and Sigma (Swiss Re), which are run on a commercial basis. Regional databases are also in use, for example the SHELDUS database for the United States (Gall et al. 2009; Preston 2013). EM-DAT is a global database maintained by the World Health Organization (WHO) and the Centre for Research on the Epidemiology of Disasters (CRED) at the University of Louvain, Belgium (Guha-Sapir et al. 2012). The database contains disaster events from 1900 onwards, presented on a country basis. Applications can be found in Guha-Sapir and Santos (2012) and references therein.
The EM-DAT database provides three disaster impact indicators for each disaster event: (i) economic losses, (ii) number of people affected, and (iii) number of people killed. These are defined as follows (Guha-Sapir et al. 2012). Economic losses are direct damage costs, a direct consequence of weather or climate events; they refer to the cost of all physical impacts on the lives and health of directly affected persons and on all types of tangible assets, including private dwellings, agriculture, commercial and industrial stocks and facilities, infrastructure (roads, bridges, ports, water supplies, telecommunications) and natural resources. The number of people affected is the sum of people injured, people needing immediate assistance for shelter and people requiring immediate assistance during a period of emergency (this may include displaced or evacuated people). The number of people killed is the sum of people confirmed dead and/or missing and/or presumed dead.
Since the quality of the analyses is only as good as that of the underlying data, we briefly address three uncertainties associated with the use of EM-DAT (and related databases): (i) the role of ‘reporting bias’, (ii) data comparison across databases, and (iii) definitional issues. More information is given in Guha-Sapir and Below (2002) and Visser et al. (2012 – Appendix A).

Reporting bias

Reporting bias is an important source of uncertainty in disaster databases. It refers to the phenomenon that the number of disasters coded in a database increases over time, not only because of a growing population, increasing wealth or climate change, but also because sources of disaster reporting become sparser further back in time. For example, we checked the disasters reported for a small country – the Netherlands – in EM-DAT and compared these data with a detailed overview of disasters in Buisman (2011). No disasters were reported in EM-DAT before 1950, whereas all disasters after 1950 were correctly included. For these reasons CRED advises using its database from 1980 onwards, even though the disaster database starts in the year 1901. This advice has been followed in this study.
A second measure was taken to remove reporting bias as much as possible: we selected only major disasters, that is, disasters in severity classes 4, 5 and 6, following the severity definitions of Munich Re. These are disasters with an economic loss of over 250 million USD2010 and/or 100 or more fatalities. The reason for this precaution is that reporting bias manifests itself mainly in disasters with smaller impacts.
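To make this selection rule concrete, the sketch below expresses it as a predicate on per-event records. The record fields, field names and example events are hypothetical illustrations, not the actual EM-DAT schema or data.

```python
from dataclasses import dataclass

@dataclass
class DisasterEvent:
    # Hypothetical per-event record carrying the three EM-DAT impact indicators
    year: int
    country: str
    losses_musd2010: float   # direct economic losses, in million USD (2010 prices)
    affected: int            # number of people affected
    killed: int              # number of people killed

def is_major(event: DisasterEvent) -> bool:
    """Severity classes 4-6: losses over 250 million USD2010 and/or 100 or more fatalities."""
    return event.losses_musd2010 > 250.0 or event.killed >= 100

# Illustrative events: the first two pass the selection, the third is excluded
events = [
    DisasterEvent(1998, "NLD", 300.0, 10_000, 0),
    DisasterEvent(2003, "FRA", 50.0, 2_000, 150),
    DisasterEvent(2005, "BEL", 20.0, 500, 3),
]
major_events = [ev for ev in events if is_major(ev)]
```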

Data comparison across databases

One way of checking the reliability of EM-DAT is to compare it with other databases. This has been done by Guha-Sapir and Below (2002), who compared EM-DAT with two commercial databases: NatCat, maintained by Munich Re, and Sigma, maintained by Swiss Re. Database comparison is not easy, as each institute uses its own definitions, disaster thresholds and geographical units. The same conclusion was drawn by Gall et al. (2009) in a comparison of four economic loss databases (EM-DAT, NATHAN, SHELDUS and Storm Events). Despite these differences, we found a good correspondence between global economic loss data in EM-DAT and the Munich Re NatCat database (R = 0.94 over the period 1980 to 2009). We also compared global numbers of people affected over the period 1990 to 2010 and found a correlation of 0.84 (this relatively low value is due to one outlier in 1993).
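As a sketch of such a cross-database check, the Pearson correlation between two aligned annual series of global losses can be computed directly; the numbers below are synthetic stand-ins, not the actual EM-DAT or NatCat totals.

```python
import numpy as np

# Synthetic annual global loss totals (billion USD), 1980-2009, for two hypothetical databases
rng = np.random.default_rng(0)
common_signal = 20.0 + 2.0 * np.arange(30) + rng.normal(0.0, 10.0, 30)
emdat_losses = np.clip(common_signal + rng.normal(0.0, 5.0, 30), 1.0, None)
natcat_losses = np.clip(common_signal + rng.normal(0.0, 5.0, 30), 1.0, None)

# Pearson correlation between the two series
r = np.corrcoef(emdat_losses, natcat_losses)[0, 1]
print(round(r, 2))
```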

Definitional issues

There are three issues worth mentioning in relation to the CRED database. Firstly, the loss data in EM-DAT are direct losses. Direct losses reflect damage sustained by public infrastructure, buildings, machinery or crops. In the case of complete destruction, direct losses are often equivalent to the replacement costs. However, indirect losses may outweigh direct losses. As a result, the losses presented in this study may be only a fraction of the total losses due to a specific disaster (Gall et al. 2009).
Secondly, the assignment of a disaster to a type (climatological, hydrological or meteorological) is not always clear-cut. For example, Hurricane Katrina is categorized in EM-DAT as a meteorological disaster, although much of the disaster burden was due to flooding, which is a hydrological disaster. CRED does not apply any ‘fuzzy attribution’ if a disaster belongs to two disaster types.
Thirdly, the geographical attribution of a disaster may become complicated if countries break up. A recent example is the split of Sudan into Sudan and South Sudan; other examples are the break-ups of the Soviet Union, Yugoslavia and Czechoslovakia. As long as analyses are aggregated over large regions, as in this study, no distortion of the data will arise.

Appendix B Trend estimation and the Kalman filter

The trend model almost exclusively applied in the field of disaster management is the ordinary least squares (OLS) straight line. This model has the advantage of being simple and of generating uncertainty information for any trend difference $[\mu_t - \mu_s]$ (the indices $t$ and $s$ are arbitrary time points within the sample period). More formally, the OLS linear trend model reads as:

$y_t = \mu_t + \varepsilon_t \quad \text{and} \quad \mu_t = a + b\,t$ ,  (1)

where parameter $a$ is the intercept, $b$ the slope and $\varepsilon_t$ a white noise process. Now, the variance of any trend difference $[\mu_t - \mu_s]$ follows from $\mathrm{var}(\mu_t - \mu_s) = (t - s)^2\,\mathrm{var}(b)$.
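As a minimal sketch, assuming a hypothetical annual series rather than the EM-DAT data, model (1) can be fitted by OLS and the variance of a trend difference computed from var(b):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical annual series, 1980-2010 (illustrative values only)
rng = np.random.default_rng(0)
time = np.arange(31)                       # t = 0, ..., 30 (1980-2010)
y = 2.0 + 0.5 * time + rng.normal(0.0, 3.0, time.size)

# OLS linear trend model (1): y_t = a + b*t + eps_t
fit = sm.OLS(y, sm.add_constant(time)).fit()
a, b = fit.params
var_b = fit.cov_params()[1, 1]             # var(b)

# Trend difference mu_t - mu_s = b*(t - s) and its variance (t - s)^2 * var(b)
t_idx, s_idx = 30, 0
trend_diff = b * (t_idx - s_idx)
se_diff = abs(t_idx - s_idx) * np.sqrt(var_b)
print(trend_diff, se_diff)
```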
Throughout this study a sub-model from the class of structural time series models (STMs) has been applied: the Integrated Random Walk (IRW) model. This model is attractive since it is flexible while generating uncertainty bands in the same way as model (1) (Visser and Molenaar 1995; Visser 2004; Visser and Petersen 2009). The IRW trend model has the following form:

$y_t = \mu_t + \varepsilon_t \quad \text{and} \quad \mu_t = 2\mu_{t-1} - \mu_{t-2} + \eta_t$ ,  (2)

where $y_t$ denotes a measurement at time $t$, and $\eta_t$ and $\varepsilon_t$ are independent, normally distributed white noise processes with zero mean and variances $\sigma_\eta^2$ and $\sigma_\varepsilon^2$, respectively. To estimate trends from this model using the Kalman filter, model (2) needs to be rewritten in state-space form:

$\begin{pmatrix} \mu_{t+1} \\ \lambda_{t+1} \end{pmatrix} = \begin{pmatrix} 2 & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \mu_t \\ \lambda_t \end{pmatrix} + \begin{pmatrix} \eta_t \\ 0 \end{pmatrix} \quad \text{and} \quad y_t = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} \mu_t \\ \lambda_t \end{pmatrix} + \varepsilon_t$ ,  (3)

where the term $\lambda_t$ equals $\mu_{t-1}$, $\eta_t \sim N(0, \sigma_\eta^2)$ and $\varepsilon_t \sim N(0, \sigma_\varepsilon^2)$.
Under these assumptions of normality, the Kalman filter provides optimal estimates $\hat{\mu}_t$ of the trend $\mu_t$: the filter yields the minimum mean square estimator (MMSE) for the state vector $(\mu_t, \lambda_t)'$, based on observations up to and including time $t$. If the noise processes are not normally distributed, the filter generates the minimum mean square linear estimator (MMSLE); the estimates are then optimal only within the class of linear estimators, so the filter is less powerful. For more information on the Kalman filter please refer to Harvey (1984; 1989), Durbin and Koopman (2001) and Chandler and Scott (2011 – Section 5.5). A historical overview, with applications in aerospace, is given by Grewal and Andrews (2010).
The IRW trend model yields both linear and flexible trends, depending on the noise variance $\sigma_\eta^2$. If this variance is set to zero, the IRW trend equals the OLS linear trend (model (1)). On the other hand, when $\sigma_\eta^2$ is set to a large number, the trend will be extremely flexible. Since the value of this noise variance steers the flexibility of the trend, $\sigma_\eta^2$ is also known as the ‘smoothing parameter’. The optimal value for $\sigma_\eta^2$ can be obtained using maximum likelihood estimation (Harvey 1984; 1989). This implies a minimization of the sum of squared one-step-ahead prediction errors. In this way the flexibility is ‘adapted’ to the data.
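A minimal sketch of such an IRW fit, assuming a synthetic log-transformed series and the 'smooth trend' specification of statsmodels' UnobservedComponents (which, to our understanding, corresponds to an IRW-type trend); fit() obtains the noise variances, including the smoothing parameter, by maximum likelihood:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic log-transformed annual series (illustrative only, not EM-DAT data)
rng = np.random.default_rng(0)
z = 0.03 * np.arange(31) + rng.normal(0.0, 0.2, 31)

# 'smooth trend': a trend whose second differences are white noise (IRW-type model)
model = sm.tsa.UnobservedComponents(z, level='smooth trend')
res = model.fit(disp=False)                      # maximum likelihood estimation of the variances

trend = res.level['smoothed']                    # smoothed trend estimates mu_t
trend_se = np.sqrt(res.level['smoothed_cov'])    # their standard errors
print(res.params)                                # estimated sigma_eps^2 and sigma_eta^2
```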
All the results shown in Figures 2 and 3 are obtained by log-transforming the data first. Thus, the data $y_t$ are transformed to $z_t = \ln(y_t)$, and model (2) is applied to $z_t$; trends are estimated on $z_t$ and back-transformed afterwards. If we denote the trend in $y_t$ by $\mu_t'$, it easily follows that the trend ratio $[\mu_t'/\mu_s']$ equals $\exp(\mu_t - \mu_s)$, where the trend difference $[\mu_t - \mu_s]$, and uncertainties therein, follow from model (2). An example of such a trend ratio is given in Figure 2, lower panel.
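A minimal numeric sketch of this back-transform, with purely illustrative values for the log-scale trend estimates and for the variance of their difference:

```python
import numpy as np

# Illustrative log-scale trend estimates at times t and s, and var(mu_t - mu_s)
mu_t, mu_s = 4.8, 3.9
var_diff = 0.02

ratio = np.exp(mu_t - mu_s)                       # trend ratio mu'_t / mu'_s on the original scale
ci_95 = np.exp((mu_t - mu_s) + np.array([-1.96, 1.96]) * np.sqrt(var_diff))
print(ratio, ci_95)                               # point estimate and approximate 95% band
```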
For model (2) to hold, the residuals should be white noise. Normality is an attractive property but not a necessary condition. In multiple regression models such checks are performed on the model residuals; model checks for Kalman filtering differ in that the one-step-ahead prediction errors, or innovations for short, are used to check for whiteness and normality. For details please see Harvey (1989 – Section 5.4). These conditions were fulfilled for all the models estimated. Estimated autocorrelation functions (ACFs) showed no serial correlation in the innovation series at hand.
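A sketch of such diagnostic checks, reusing the synthetic example from above; test_serial_correlation and test_normality in statsmodels operate on the standardized one-step-ahead prediction errors (the innovations):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import acf

# Refit the synthetic example so this snippet is self-contained
rng = np.random.default_rng(0)
z = 0.03 * np.arange(31) + rng.normal(0.0, 0.2, 31)
res = sm.tsa.UnobservedComponents(z, level='smooth trend').fit(disp=False)

# Whiteness and normality checks on the innovations
print(res.test_serial_correlation(method='ljungbox'))   # Ljung-Box test
print(res.test_normality(method='jarquebera'))          # Jarque-Bera test
print(acf(res.resid, nlags=10))                         # sample ACF of the innovations
```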

References

Birkmann J (ed.) (2013) Measuring vulnerability to natural hazards. Tokyo: United Nations University Press

Buisman J (2011) Extreme Weather. A summary of cold winters and hot summers, hail and tornados, storms and floodings (in Dutch). Franeker: Van Wijnen publishers

Chandler RE, Scott EM (2011) Statistical methods for trend detection and analysis in the environmental sciences. Wiley and Sons Ltd, Chichester UK

Durbin J, Koopman SJ (2001) Time series analysis by state space methods. Oxford: Oxford Statistical Science Series 24

Gall M, Borden KA, Cutter S (2009) When do losses count? Six fallacies of natural hazards loss data. BAMS, June issue:799-809

Grewal MS, Andrews AP (2010) Applications of Kalman filtering in Aerospace 1960 to the present. IEEE Control Systems Magazine, June issue:69-78

Guha-Sapir D, Below R (2002) The quality and accuracy of disaster data: a comparative analysis of three global datasets. ProVention Consortium report

Guha-Sapir D, Santos I (2012) The economic impacts of natural disasters. Oxford University Press, Oxford

Guha-Sapir D, Vos F, Below R, Ponserre S (2012) Annual disaster statistical review 2011: the numbers and trends. Brussels, CRED

Harvey AC (1984) A unified view of statistical forecasting procedures. J. of Forecasting, 3: 245-275.

Harvey AC (1989) Forecasting, structural time series models and the Kalman filter. New York: Cambridge University Press

Preston BL (2013) Local path dependence of U.S. socioeconomic exposure to climate extremes and the vulnerability commitment. Global Environmental Change, 23:719-732

Visser H, Petersen AC (2009) The likelihood of holding outdoor skating marathons in the Netherlands as a policy-relevant indicator of climate change. Climatic Change 93:39-54

Visser H, Molenaar J (1995) Trend estimation and regression analysis in climatological time series: an application of structural time series models and the Kalman filter. J. of Climate 8(5):969-979

Visser H, Bouwman A, Petersen AC, Ligtvoet W (2012) A statistical study of weather-related disasters: past, present and future. PBL research report 555076001, Bilthoven, the Netherlands
