Ecological Archives A025-088-A2

Chi Xu, Milena Holmgren, Egbert H. Van Nes, Fernando T. Maestre, Santiago Soliveres, Miguel Berdugo, Sonia Kéfi, Pablo A. Marquet, Sebastian Abades, and Marten Scheffer. 2015. Can we infer plant facilitation from remote sensing? A test across global drylands. Ecological Applications 25:1456–1462. http://dx.doi.org/10.1890/14-2358.1

Appendix B. Detailed description of image processing.

Remotely sensed images at high spatial resolution covering most of the Earth’s land surface are freely available through Google Earth™ (http://www.google.com/earth/). Google Earth™ data come from various sources, such as satellite images and aerial photographs, which span a range of baseline spatial resolutions from 0.1 m to 15 m (e.g., ~0.6 m for QuickBird imagery, ~0.4 m for GeoEye imagery, ~0.4–0.5 m for WorldView imagery, and ~0.2 m for aerial photographs). Visual inspection of the images, supported by ground information, showed that plant patches in the studied drylands can be clearly distinguished only at very high spatial resolution (less than 0.5 m). We therefore selected the 65 study sites from the global dryland dataset for which Google Earth™ images at such high resolution are available. For each study site, the Google Earth™ image was geometrically corrected using more than 15 evenly distributed ground control points, a second-order polynomial transformation, and nearest-neighbour resampling (Liu and Mason 2009). Images were projected in the UTM (Universal Transverse Mercator) system with the WGS84 datum.
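The geometric-correction step above can be sketched as follows. This is a minimal illustration in Python/NumPy of fitting a second-order polynomial transformation to ground control point (GCP) pairs by least squares; the function names are our own and the actual correction was performed in GIS software, not with this code.

```python
import numpy as np

def design_matrix(pts):
    """Second-order polynomial terms [1, x, y, x^2, x*y, y^2] per point."""
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

def fit_polynomial(src_gcp, dst_gcp):
    """Least-squares fit of a second-order polynomial mapping image
    coordinates (src_gcp) to map coordinates (dst_gcp).

    A second-order polynomial has 6 coefficients per axis, so at least
    6 GCPs are required; using more (e.g., the >15 per site mentioned
    above) over-determines the fit and averages out GCP error.
    """
    A = design_matrix(src_gcp)
    coef_x, *_ = np.linalg.lstsq(A, dst_gcp[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, dst_gcp[:, 1], rcond=None)
    return coef_x, coef_y

def transform(coef_x, coef_y, pts):
    """Apply the fitted polynomial to a set of image coordinates."""
    A = design_matrix(pts)
    return np.column_stack([A @ coef_x, A @ coef_y])
```

In a full warp, each cell of the corrected output grid would then take the value of the nearest input pixel (nearest-neighbour resampling), which preserves the original pixel values rather than interpolating new ones.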

Vegetation indices (e.g., the normalized difference vegetation index, NDVI) have been used effectively to distinguish vegetated from non-vegetated areas in remotely sensed data (Liang 2005). However, calculating vegetation indices requires near-infrared bands, which are not available from Google Earth™ images. In this study we therefore used a standard supervised classification approach (with the maximum likelihood method) to distinguish plant patches from bare soil. Isolated patches one pixel in size were eliminated because, at that scale, plants cannot be discriminated clearly and can easily be confounded with other soil surface features (i.e., the mixed pixel problem; Tso and Mather 2003). After classification, we randomly selected 50 points (pixels) each of plant patch and bare soil from the classified image of each study site and assessed whether each point was correctly or incorrectly classified. Because logistical constraints prevented us from verifying the in situ status of these points, this assessment was conducted visually by overlaying the points on the high-resolution Google Earth™ images. Using these points as validation data, we derived the classification error matrix and calculated the overall classification accuracy (Appendix A; see, e.g., Lillesand et al. 2004 for methodological details). The grasslands in our study sites are composed of large tussocks (mostly Stipa spp.). These landscapes present relatively coarse-grained spatial patterns and a relatively sharp contrast with background features (usually bare soil) in Google Earth™ images, which allowed us to identify vegetation patches clearly and achieve high classification accuracy (>90%).
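The maximum-likelihood decision rule and the accuracy assessment described above can be sketched as follows. This minimal illustration assumes a multivariate Gaussian model per class with equal priors, applied to RGB pixel values; the function names are our own, and the actual classification was performed with standard remote-sensing software rather than this code.

```python
import numpy as np

def train_ml_classifier(samples):
    """samples: dict mapping class name -> (n, bands) training pixels.
    Estimates per-class Gaussian statistics (mean, inverse covariance,
    log-determinant of covariance) for the maximum likelihood rule."""
    stats = {}
    for name, X in samples.items():
        mu = X.mean(axis=0)
        cov = np.atleast_2d(np.cov(X, rowvar=False))
        stats[name] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return stats

def classify(stats, pixels):
    """Assign each pixel (row of `pixels`) to the class with the highest
    Gaussian log-likelihood, assuming equal prior probabilities."""
    names = list(stats)
    scores = []
    for name in names:
        mu, icov, logdet = stats[name]
        d = pixels - mu
        mahal = np.einsum('ij,jk,ik->i', d, icov, d)  # Mahalanobis distances
        scores.append(-0.5 * (mahal + logdet))
    return np.array(names)[np.argmax(scores, axis=0)]

def error_matrix(truth, pred, classes):
    """Classification error (confusion) matrix:
    rows = reference class, columns = mapped class."""
    truth, pred = np.asarray(truth), np.asarray(pred)
    return np.array([[np.sum((truth == t) & (pred == p)) for p in classes]
                     for t in classes])

def overall_accuracy(matrix):
    """Overall accuracy = sum of the matrix diagonal (correctly
    classified validation points) divided by the total."""
    return np.trace(matrix) / matrix.sum()
```

Applied to 50 validation points per class, `error_matrix` yields the 2 × 2 table from which the overall accuracies reported in Appendix A would be computed.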

Literature cited

Liang, S. 2005. Quantitative remote sensing of land surfaces. John Wiley & Sons.

Lillesand, T. M., R. W. Kiefer, and J. W. Chipman. 2004. Remote sensing and image interpretation. John Wiley & Sons.

Liu, J. G., and P. Mason. 2009. Essential image processing and GIS for remote sensing. John Wiley & Sons.

Tso, B., and P. Mather. 2003. Classification methods for remotely sensed data. CRC Press.