New Appendix explains differences between EPIs, includes back-casted rankings
Many people want to know how the 2012 Environmental Performance Index compares with previous editions. We’ve produced a methodological addendum (Appendix III) detailing differences between the 2012 and 2010 versions of the EPI, back-casted rankings from 2000 to 2010, and more information on why some countries were not included in the latest 2012 EPI. Here’s a brief summary of what’s included:
Comparing the 2012 and 2010 EPIs
By far the most anticipated change in EPI methodology was this year’s creation of a pilot trend EPI. This project aimed to uncover changes in environmental performance over time, using datasets that spanned at least the time period 2000-2010 to produce a dynamic, rather than simply static, picture of environmental performance.
The greatest challenge to this effort was finding reliable time series data. The EPI team had to consider not just the existence of data for multiple years, but whether that data was truly comparable across time periods (see our discussion of comparing 2006, 2008, 2010 and 2012 EPI scores at http://epi.yale.edu/about/faq#_How_do_the). In addition, we chose datasets for which there was an established intent for continued data collection; this will make producing a trend EPI possible in future editions of the EPI.
We also made some data source changes between the 2012 and 2010 EPIs, most notably the inclusion of satellite-derived indicators. In 2012, we use satellite-derived PM2.5 data, normalized by population and averaged over time, to get a more realistic picture of human exposure to health-relevant particulates. The 2010 particulate matter data was akin to a sample, whereas the 2012 data was closer to a census, and it also measured a more relevant indicator of air-related human health impacts. Since many countries do not measure PM2.5, satellite-derived PM2.5 data provided consistent “wall-to-wall” measurement methods that filled in previous gaps.
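To give a rough sense of what “normalized by population and averaged over time” means, here is a minimal sketch of a population-weighted, time-averaged exposure calculation. The grid cells, populations, and concentrations are invented for illustration, and the actual EPI data pipeline is considerably more involved:

```python
# Hypothetical illustration only: population-weighted PM2.5 exposure,
# averaged across years. All numbers below are invented.

def population_weighted_exposure(pm25_by_year, population):
    """Average PM2.5 (ug/m3) over years, weighting each grid cell
    by the number of people living in it, so that pollution over
    populated areas counts more than pollution over empty ones."""
    total_pop = sum(population)
    yearly = []
    for pm25 in pm25_by_year:
        weighted = sum(c * p for c, p in zip(pm25, population)) / total_pop
        yearly.append(weighted)
    return sum(yearly) / len(yearly)

# Three grid cells (one dense city, two rural), two years of estimates:
pm25_by_year = [[35.0, 12.0, 8.0],    # year 1
                [33.0, 14.0, 9.0]]    # year 2
population   = [900_000, 50_000, 50_000]

print(round(population_weighted_exposure(pm25_by_year, population), 1))
# → 31.7
```

The point of the weighting is visible in the example: the unweighted cell average would be far lower, but most of the population lives in the high-PM2.5 cell, so the exposure estimate stays close to that cell's concentration.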
We also used satellite data for our Forest Loss indicator. In this case, we were able to circumvent some of the issues of self-reported data by using remote sensing of forest cover to produce the inputs for Forest Loss. While using this data represented an important step toward better forest measurement, the satellite data did not include afforestation, and thus did not recognize and reward countries’ reforestation efforts. For this reason, we combined the satellite data for Forest Loss with FAO indicators used in previous EPIs, recognizing that the FAO indicators do have reporting challenges (noted in Appendix I and Section 4 of the EPI report).
In Appendix III, we list countries omitted from the 2012 EPI due to data and indicator gaps. Table 1 details the indicators missing for each country that prevented calculation of an EPI score. In the past, our team dealt with missing values by interpolating them so that we could still produce an EPI score with one or more indicators absent. However, experience has shown that interpolation based on regional or GDP averages is not always accurate; policymakers have contacted us to question estimated values. For this version of the EPI, we’ve produced scores only for those countries with enough data to make the EPI score calculation.
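The change in policy amounts to a completeness check before scoring. The sketch below illustrates the idea with invented country names, indicator names, and values (and a simple average standing in for the EPI's actual weighted aggregation):

```python
# Hypothetical sketch: score only countries with complete indicator data,
# rather than imputing missing values. All names and numbers are invented,
# and a plain average stands in for the real EPI aggregation.

REQUIRED = ["pm25", "water_access", "forest_loss"]

countries = {
    "Atlantis":  {"pm25": 60.0, "water_access": 80.0, "forest_loss": 90.0},
    "Freedonia": {"pm25": 55.0, "water_access": 70.0},  # forest_loss missing
}

def scoreable(indicators):
    """A country is scored only if every required indicator is present."""
    return all(k in indicators for k in REQUIRED)

scores = {
    name: sum(vals[k] for k in REQUIRED) / len(REQUIRED)
    for name, vals in countries.items()
    if scoreable(vals)
}

omitted = sorted(set(countries) - set(scores))
print(scores, omitted)
```

Under the old approach, "Freedonia" would have received an estimated value for the missing indicator; under the new one it is simply listed as omitted, which is what Table 1 of Appendix III documents.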
We revise and update our methodology with each edition of the EPI. This means that while our methods are continuously improving, EPI scores are typically not comparable across years. But with the 2012 EPI, using time series data, we were able to produce back-casted rankings for the period 2000-2010. For the first time, countries can see what their rank would have been in a given year based on the latest EPI methodology and consistent data. A table of these back-casted rankings appears in Appendix III.
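Conceptually, back-casting means holding the methodology fixed and re-running it on each year's data. A minimal sketch of that idea, with invented country names and scores (the real EPI involves many indicators, weights, and normalizations):

```python
# Hypothetical sketch of back-casting: apply one consistent scoring
# method to each year's data, then rank. All data below are invented.

def rank(scores):
    """Map country -> rank (1 = best), where a higher score is better."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {country: i + 1 for i, country in enumerate(ordered)}

# Scores already computed with the *latest* methodology for each year:
scores_by_year = {
    2000: {"Atlantis": 62.0, "Freedonia": 58.0, "Sylvania": 70.0},
    2010: {"Atlantis": 71.0, "Freedonia": 66.0, "Sylvania": 68.0},
}

backcast = {year: rank(s) for year, s in scores_by_year.items()}
print(backcast)
```

Because the same method and consistent data are used for every year, a country's movement in the back-cast table (here, the invented "Atlantis" rising from rank 2 to rank 1) reflects a change in performance rather than a change in methodology.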
The 2012 EPI puts environmental performance into sharper focus, and the pilot trend EPI pushes the envelope by producing a dynamic view of global environmental performance, an effort we’ll continue to refine in future editions.