exactextractr/NAMESPACE

# Generated by roxygen2: do not edit by hand

exportMethods(coverage_fraction)
exportMethods(exact_extract)
exportMethods(exact_resample)
exportMethods(rasterize_polygons)
import(raster)
import(sf)
importFrom(Rcpp,evalCpp)
importFrom(methods,setMethod)
useDynLib(exactextractr)

exactextractr/tools/winlibs.R

if(getRversion() < "3.3.0") {
  stop("Your version of R is too old. This package requires R-3.3.0 or newer on Windows.")
}
VERSION <- commandArgs(TRUE)
if(!file.exists(sprintf("../windows/gdal2-%s/include/gdal/gdal.h", VERSION))){
  download.file(sprintf("https://github.com/rwinlib/gdal2/archive/v%s.zip", VERSION), "lib.zip", quiet = FALSE)
  dir.create("../windows", showWarnings = FALSE)
  unzip("lib.zip", exdir = "../windows")
  unlink("lib.zip")
}

exactextractr/README.md

# exactextractr

[![Build Status](https://gitlab.com/isciences/exactextractr/badges/master/pipeline.svg)](https://gitlab.com/isciences/exactextractr/-/pipelines)
[![coverage report](https://gitlab.com/isciences/exactextractr/badges/master/coverage.svg)](https://isciences.gitlab.io/exactextractr/coverage.html)
[![CRAN](http://www.r-pkg.org/badges/version/exactextractr)](https://cran.r-project.org/package=exactextractr)
[![cran checks](https://badges.cranchecks.info/worst/exactextractr.svg)](https://cran.r-project.org/web/checks/check_results_exactextractr.html)

`exactextractr` is an R package that quickly and accurately summarizes raster values over polygonal areas, commonly referred to as _zonal statistics_. Unlike most zonal statistics implementations, it handles grid cells that are partially covered by a polygon.
Despite this, it performs faster than other packages for many real-world applications.

![Example Graphic](https://gitlab.com/isciences/exactextractr/-/raw/assets/readme/brazil_precip.png)

Calculations are performed using the C++ [`exactextract`](https://github.com/isciences/exactextract) tool. Additional background and a description of the method are available [here](https://github.com/isciences/exactextract#background). Full package reference documentation is available [here](https://isciences.gitlab.io/exactextractr/reference).

### Basic Usage

The package provides an [`exact_extract`](https://isciences.gitlab.io/exactextractr/reference/exact_extract.html) method that operates analogously to the [`extract`](https://www.rdocumentation.org/packages/raster/topics/extract) method in the [`raster`](https://CRAN.R-project.org/package=raster) package. The snippet below demonstrates the use of this function to compute monthly mean precipitation for each municipality in Brazil.

```r
library(raster)
library(sf)
library(exactextractr)

# Pull municipal boundaries for Brazil
brazil <- st_as_sf(getData('GADM', country='BRA', level=2))

# Pull gridded precipitation data
prec <- getData('worldclim', var='prec', res=10)

# Calculate vector of mean December precipitation amount for each municipality
brazil$mean_dec_prec <- exact_extract(prec[[12]], brazil, 'mean')

# Calculate data frame of min and max precipitation for all months
brazil <- cbind(brazil, exact_extract(prec, brazil, c('min', 'max')))
```

#### Summary Operations

`exactextractr` can summarize raster values using several named operations as well as arbitrary R functions. Where applicable, a named operation will provide better performance and reduced memory usage relative to an equivalent R function. Named operations are specified by providing a character vector with one or more operation names to the `fun` parameter of [`exact_extract`](https://isciences.gitlab.io/exactextractr/reference/exact_extract.html).
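For example, several statistics can be requested in a single call by passing multiple operation names. This sketch uses a small synthetic raster and polygon (the same ones as the package's `coverage_fraction` example) rather than downloaded data:

```r
library(raster)
library(sf)
library(exactextractr)

# A small synthetic raster and polygon; no downloads required
rast <- raster(matrix(1:100, ncol = 10), xmn = 0, ymn = 0, xmx = 10, ymx = 10)
poly <- st_as_sfc('POLYGON ((2 2, 7 6, 4 9, 2 2))')

# Requesting several named operations at once returns a data frame
# with one column per operation
exact_extract(rast, poly, c('mean', 'min', 'max'))
```

Passing a single operation name returns a vector rather than a data frame (unless `force_df = TRUE`).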
The following summary operations are supported: | Name | Description | | ---------------------- |--------------- | | `count` | Sum of all cell coverage fractions. | | `majority` (or `mode`) | The raster value with the largest sum of coverage fractions. | | `max` | Maximum value of cells that intersect the polygon, ignoring coverage fractions. | | `mean` | Mean value of cells that intersect the polygon, weighted by the fraction of the cell that is covered. | | `median` | Median value of cells that intersect the polygon, weighted by the fraction of the cell that is covered. | | `quantile` | Arbitrary quantile value of cells that intersect the polygon, weighted by the fraction of the cell that is covered. | | `min` | Minimum value of cells that intersect the polygon, ignoring coverage fractions. | | `minority` | The raster value with the smallest sum of coverage fractions. | | `sum` | Sum of values of raster cells that intersect the polygon, with each raster value weighted by its coverage fraction. | | `variety` | The number of distinct raster values in cells wholly or partially covered by the polygon. | | `variance` | The population variance of cell values, weighted by the fraction of each cell that is covered by the polygon. | | `stdev` | The population standard deviation of cell values, weighted by the fraction of each cell that is covered by the polygon. | | `coefficient_of_variation` | The population coefficient of variation of cell values, weighted by the fraction of each cell that is covered by the polygon. | | `frac` | Fraction of covered cells that are occupied by each distinct raster value. 
| Three additional summary operations require the use of a second weighting raster, provided in the `weights` argument to [`exact_extract`](https://isciences.gitlab.io/exactextractr/reference/exact_extract.html): | Name | Description | | ---------------------- |--------------- | | `weighted_mean` | Mean value of defined (non-`NA`) cells that intersect the polygon, weighted by the product of the coverage fraction and the value of a second weighting raster. | | `weighted_sum` | Sum of defined (non-`NA`) values of raster cells that intersect the polygon, multiplied by the coverage fraction and the value of a second weighting raster. | | `weighted_variance` | Population variance of defined (non-`NA`) values of cells that intersect the polygon, weighted by the product of the coverage fraction and the value of a second weighting raster. | | `weighted_stdev` | Population standard deviation of defined (non-`NA`) values of raster cells that intersect the polygon, multiplied by the coverage fraction and the value of a second weighting raster. | | `weighted_frac` | Fraction of covered cells that are occupied by each distinct raster value, with coverage fractions multiplied by the value of a second weighting raster. | Weighted usage is discussed in more detail [below](#weighted-usage). Undefined (`NA`) values are ignored in all of the named summary operations when they occur in the value raster. When they occur in the weighting raster, they cause the result of the summary operation to be `NA`. #### Summary Functions In addition to the summary operations described above, [`exact_extract`](https://isciences.gitlab.io/exactextractr/reference/exact_extract.html) can accept an R function to summarize the cells covered by the polygon. 
Because [`exact_extract`](https://isciences.gitlab.io/exactextractr/reference/exact_extract.html) takes into account the fraction of the cell that is covered by the polygon, the summary function must take two arguments: the value of the raster in each cell touched by the polygon, and the fraction of that cell area that is covered by the polygon. (This differs from [`raster::extract`](https://www.rdocumentation.org/packages/raster/topics/extract), where the summary function takes the vector of raster values as a single argument and effectively assumes that the coverage fraction is `1.0`.) An example of a built-in function with the appropriate signature is [`weighted.mean`](https://www.rdocumentation.org/packages/stats/topics/weighted.mean). Some examples of custom summary functions are: ```r # Number of cells covered by the polygon (raster values are ignored) exact_extract(rast, poly, function(values, coverage_fraction) sum(coverage_fraction)) # Sum of defined raster values within the polygon, accounting for coverage fraction exact_extract(rast, poly, function(values, coverage_fraction) sum(values * coverage_fraction, na.rm=TRUE)) # Number of distinct raster values within the polygon (coverage fractions are ignored) exact_extract(rast, poly, function(values, coverage_fraction) length(unique(values))) # Number of distinct raster values in cells more than 10% covered by the polygon exact_extract(rast, poly, function(values, coverage_fraction) length(unique(values[coverage_fraction > 0.1]))) ``` ### Weighted Usage [`exact_extract`](https://isciences.gitlab.io/exactextractr/reference/exact_extract.html) allows for calculation of summary statistics based on multiple raster layers, such as a population-weighted temperature. The weighting raster must use the same coordinate system as the primary raster, and it must use a grid that is compatible with the primary raster. 
(The resolutions and extents of the rasters need not be the same, but the higher resolution must be an integer multiple of the lower resolution, and the cell boundaries of both rasters must coincide with cell boundaries in the higher-resolution grid.)

One application of this feature is the calculation of zonal statistics on raster data in geographic coordinates. The previous calculation of mean precipitation amount across Brazilian municipalities assumed that each raster cell covered the same area, which is not correct for rasters in geographic coordinates (latitude/longitude). We can correct for varying cell areas by creating a weighting raster with the area of each cell in the primary raster using the [`area`](https://www.rdocumentation.org/packages/raster/topics/area) function from the `raster` package.

#### Weighted Summary Operations

Performing a weighted summary with the `weighted_mean` and `weighted_sum` operations is as simple as providing a weighting [`RasterLayer`](https://www.rdocumentation.org/packages/raster/topics/Raster-class) or [`RasterStack`](https://www.rdocumentation.org/packages/raster/topics/Raster-class) to the `weights` argument of [`exact_extract`](https://isciences.gitlab.io/exactextractr/reference/exact_extract.html). The area-weighted mean precipitation calculation can be expressed as:

```r
brazil$mean_dec_prec_weighted <- exact_extract(prec[[12]], brazil, 'weighted_mean', weights = area(prec))
```

With the relatively small polygons used in this example, the error introduced by assuming constant cell area is negligible. However, for large polygons that span a wide range of latitudes, this may not be the case.

#### Weighted Summary Functions

A weighting raster can also be provided when an R summary function is used. When a weighting raster is provided, the summary function must accept a third argument containing the values of the weighting raster.
An equivalent to the `weighted_mean` usage above could be written as:

```r
brazil$mean_dec_prec_weighted <- exact_extract(prec[[12]], brazil,
  function(values, coverage_frac, weights) {
    weighted.mean(values, coverage_frac * weights)
  },
  weights = area(prec))
```

Or, to calculate the area-weighted mean precipitation for all months:

```r
brazil <- cbind(brazil,
  exact_extract(prec, brazil,
    function(values, coverage_frac, weights) {
      weighted.mean(values, coverage_frac * weights)
    },
    weights = area(prec),
    stack_apply = TRUE))
```

In this example, the `stack_apply` argument is set to `TRUE` so that the summary function will be applied to each layer of `prec` independently. (If `stack_apply = FALSE`, the summary function will be called with all values of `prec` in a 12-column data frame.)

### Additional Usages

#### Multi-Raster Summary Functions

A multi-raster summary function can also be written to implement complex behavior that requires that multiple layers in a [`RasterStack`](https://www.rdocumentation.org/packages/raster/topics/Raster-class) be considered simultaneously. Here, we compute an area-weighted average temperature by calling [`exact_extract`](https://isciences.gitlab.io/exactextractr/reference/exact_extract.html) with a [`RasterStack`](https://www.rdocumentation.org/packages/raster/topics/Raster-class) of minimum and maximum temperatures, and a [`RasterLayer`](https://www.rdocumentation.org/packages/raster/topics/Raster-class) of cell areas.
```r
tmin <- getData('worldclim', var = 'tmin', res = 10)
tmax <- getData('worldclim', var = 'tmax', res = 10)
temp <- stack(tmin[[12]], tmax[[12]])

brazil$tavg_dec <- exact_extract(temp, brazil,
  function(values, coverage_fraction, weights) {
    tavg <- 0.5*(values$tmin12 + values$tmax12)
    weighted.mean(tavg, coverage_fraction * weights)
  },
  weights = area(prec))
```

When [`exact_extract`](https://isciences.gitlab.io/exactextractr/reference/exact_extract.html) is called with a [`RasterStack`](https://www.rdocumentation.org/packages/raster/topics/Raster-class) of values or weights and `stack_apply = FALSE` (the default), the values or weights from each layer of the [`RasterStack`](https://www.rdocumentation.org/packages/raster/topics/Raster-class) will be provided to the summary function as a data frame. In the example above, the summary function is provided with a data frame of values (containing the values for each layer in the `temp` stack), a vector of coverage fractions, and a vector of weights.

#### Multi-Valued Summary Functions

In some cases, it is desirable for a summary function to return multiple values for each input feature. A common application is to summarize the fraction of each polygon that is covered by a given class of a categorical raster. This can be accomplished by writing a summary function that returns a one-row data frame for each input feature. The data frames for each feature will be combined into a single data frame using `rbind` or, if it is available, `dplyr::bind_rows`. In this example, the mean precipitation for each municipality is returned for each altitude category.
```r
altitude <- getData('alt', country = 'BRA')

prec_for_altitude <- exact_extract(prec[[12]], brazil,
  function(prec, frac, alt) {
    # ignore cells with unknown altitude
    prec <- prec[!is.na(alt)]
    frac <- frac[!is.na(alt)]
    alt <- alt[!is.na(alt)]

    low <- !is.na(alt) & alt < 500
    high <- !is.na(alt) & alt >= 500

    data.frame(
      prec_low_alt = weighted.mean(prec[low], frac[low]),
      prec_high_alt = weighted.mean(prec[high], frac[high])
    )
  },
  weights = altitude)
```

### Rasterization

`exactextractr` can rasterize polygons through computation of the coverage fraction in each cell. The [`coverage_fraction`](https://isciences.gitlab.io/exactextractr/reference/coverage_fraction.html) function returns a [`RasterLayer`](https://www.rdocumentation.org/packages/raster/topics/Raster-class) with values from 0 to 1 indicating the fraction of each cell that is covered by the polygon. Because this function generates a [`RasterLayer`](https://www.rdocumentation.org/packages/raster/topics/Raster-class) for each feature in the input dataset, it can quickly consume a large amount of memory. Depending on the analysis being performed, it may be advisable to manually loop over the features in the input dataset and combine the generated rasters during each iteration.

### Performance

For typical applications, `exactextractr` is much faster than the `raster` package and somewhat faster than the `terra` package.
An example benchmark is below:

```r
library(microbenchmark)
library(raster)
library(sf)
library(terra)
library(exactextractr)

brazil <- st_as_sf(getData('GADM', country='BRA', level=1))
brazil_spat <- as(brazil, 'SpatVector')

prec_rast <- getData('worldclim', var='prec', res=10)
prec_terra <- rast(prec_rast)

prec12_rast <- prec_rast[[12]]
prec12_terra <- rast(prec_rast[[12]])

microbenchmark(
  extract(prec_rast, brazil, mean, na.rm = TRUE),
  extract(prec_terra, brazil_spat, mean, na.rm = TRUE),
  exact_extract(prec_rast, brazil, 'mean', progress = FALSE),
  exact_extract(prec_terra, brazil, 'mean', progress = FALSE),
  extract(prec12_rast, brazil, mean, na.rm = TRUE),
  extract(prec12_terra, brazil_spat, mean, na.rm = TRUE),
  exact_extract(prec12_rast, brazil, 'mean', progress = FALSE),
  exact_extract(prec12_terra, brazil, 'mean', progress = FALSE),
  times = 5)
```

| Package       | Raster Type | Layers | Expression                                                      | Time (ms) |
| ------------- | ----------- | ------ | --------------------------------------------------------------- | --------- |
| raster        | RasterLayer | 1      | `extract(prec12_rast, brazil, mean, na.rm = TRUE)`              | 10148     |
| terra         | SpatRaster  | 1      | `extract(prec12_terra, brazil_spat, mean, na.rm = TRUE)`        | 266       |
| exactextractr | RasterLayer | 1      | `exact_extract(prec12_rast, brazil, "mean", progress = FALSE)`  | 222       |
| exactextractr | SpatRaster  | 1      | `exact_extract(prec12_terra, brazil, "mean", progress = FALSE)` | 112       |
| raster        | RasterStack | 12     | `extract(prec_rast, brazil, mean, na.rm = TRUE)`                | 48708     |
| terra         | SpatRaster  | 12     | `extract(prec_terra, brazil_spat, mean, na.rm = TRUE)`          | 436       |
| exactextractr | RasterStack | 12     | `exact_extract(prec_rast, brazil, "mean", progress = FALSE)`    | 1541      |
| exactextractr | SpatRaster  | 12     | `exact_extract(prec_terra, brazil, "mean", progress = FALSE)`   | 129       |

Actual performance is a complex topic that can vary dramatically depending on factors such as:

- the number of layers in the input raster(s)
- the data type of input rasters (for best performance, use a `terra::SpatRaster`)
- the raster file format (GeoTIFF, netCDF, etc.)
- the chunking strategy used by the raster file (striped, tiled, etc.)
- the relative size of the area to be read and the GDAL block cache

If `exact_extract` is called with `progress = TRUE`, messages will be emitted if the package detects a situation that could lead to poor performance, such as a raster chunk size that is too large to allow caching of blocks between vector features.

If performance is poor, it may be possible to improve performance by:

- increasing the `max_cells_in_memory` parameter
- increasing the size of the GDAL block cache
- rewriting the input rasters to use a different chunking scheme
- processing inputs as batches of nearby polygons

### Accuracy

Results from `exactextractr` are more accurate than other common implementations because raster pixels that are partially covered by polygons are considered. The significance of partial coverage increases for polygons that are small or irregularly shaped. For the 5500 Brazilian municipalities used in the example, the error introduced by incorrectly handling partial coverage is less than 1% for 88% of municipalities and reaches a maximum of 9%.

### Dependencies

Installation requires version 3.5 or greater of the [GEOS](https://libgeos.org/) geometry processing library. It is recommended to use the most recent released version for best performance. On Windows, GEOS will be downloaded automatically as part of package install. On macOS, it can be installed using Homebrew (`brew install geos`). On Linux, it can be installed from system package repositories (`apt-get install libgeos-dev` on Debian/Ubuntu, or `yum install libgeos-devel` on CentOS/RedHat).
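The rasterization workflow described above can be tried without any external data. This sketch, adapted from the package's `coverage_fraction` example, uses a small synthetic raster; the check at the end relies on the fact that coverage fractions sum to the polygon's area measured in cell units:

```r
library(raster)
library(sf)
library(exactextractr)

# A 10x10 unit-cell raster and a triangular polygon; no downloads required
rast <- raster(matrix(1:100, ncol = 10), xmn = 0, ymn = 0, xmx = 10, ymx = 10)
poly <- st_as_sfc('POLYGON ((2 2, 7 6, 4 9, 2 2))')

# One RasterLayer per feature; values are the fraction of each cell
# covered by the polygon, ranging from 0 to 1
cov_frac <- coverage_fraction(rast, poly)[[1]]

# With 1x1 cells, the coverage fractions sum to the polygon's area
cellStats(cov_frac, 'sum')
```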
exactextractr/man/exactextractr-package.Rd

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/exactextractr-package.R
\docType{package}
\name{exactextractr-package}
\alias{exactextractr}
\alias{exactextractr-package}
\title{exactextractr: Fast Extraction from Raster Datasets using Polygons}
\description{
Quickly and accurately summarizes raster values over polygonal areas
("zonal statistics").
}
\seealso{
Useful links:
\itemize{
  \item \url{https://isciences.gitlab.io/exactextractr/}
  \item \url{https://github.com/isciences/exactextractr}
  \item Report bugs at \url{https://github.com/isciences/exactextractr/issues}
}
}
\author{
\strong{Maintainer}: Daniel Baston \email{dbaston@isciences.com}

Other contributors:
\itemize{
  \item ISciences, LLC [copyright holder]
}
}
\keyword{internal}

exactextractr/man/exact_resample.Rd

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/exact_resample.R
\name{exact_resample}
\alias{exact_resample}
\alias{exact_resample,RasterLayer,RasterLayer-method}
\alias{exact_resample,SpatRaster,SpatRaster-method}
\title{Resample a raster to a new grid}
\usage{
\S4method{exact_resample}{RasterLayer,RasterLayer}(x, y, fun, coverage_area = FALSE)

\S4method{exact_resample}{SpatRaster,SpatRaster}(x, y, fun, coverage_area = FALSE)
}
\arguments{
\item{x}{a \code{RasterLayer} or \code{SpatRaster} to be resampled}

\item{y}{a raster of the same class as \code{x} with a grid definition to
which \code{x} should be resampled}

\item{fun}{a named summary operation or R function to be used for the
resampling}

\item{coverage_area}{use cell coverage areas instead of coverage fractions
in \code{fun}}
}
\value{
a resampled version of \code{x}, returned as a \code{RasterLayer} or \code{SpatRaster},
depending on the values of \code{x} and \code{y}
}
\description{
Resample a raster to a new grid
}

exactextractr/man/coverage_fraction.Rd

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/coverage_fraction.R
\name{coverage_fraction}
\alias{coverage_fraction}
\alias{coverage_fraction,RasterLayer,sf-method}
\alias{coverage_fraction,RasterLayer,sfc_MULTIPOLYGON-method}
\alias{coverage_fraction,RasterLayer,sfc_POLYGON-method}
\alias{coverage_fraction,SpatRaster,sf-method}
\alias{coverage_fraction,SpatRaster,sfc_MULTIPOLYGON-method}
\alias{coverage_fraction,SpatRaster,sfc_POLYGON-method}
\title{Compute the fraction of raster cells covered by a polygon}
\usage{
\S4method{coverage_fraction}{RasterLayer,sf}(x, y, crop = FALSE)

\S4method{coverage_fraction}{RasterLayer,sfc_MULTIPOLYGON}(x, y, crop)

\S4method{coverage_fraction}{RasterLayer,sfc_POLYGON}(x, y, crop)

\S4method{coverage_fraction}{SpatRaster,sf}(x, y, crop = FALSE)

\S4method{coverage_fraction}{SpatRaster,sfc_MULTIPOLYGON}(x, y, crop)

\S4method{coverage_fraction}{SpatRaster,sfc_POLYGON}(x, y, crop)
}
\arguments{
\item{x}{a (possibly empty) \code{RasterLayer} whose resolution and extent
will be used for the generated \code{RasterLayer}.}

\item{y}{a \code{sf} object with polygonal geometries}

\item{crop}{if \code{TRUE}, each generated \code{RasterLayer} will be
cropped to the extent of its associated feature.}
}
\value{
a list with a \code{RasterLayer} for each feature in \code{y}.
Values of the raster represent the fraction of each
cell in \code{x} that is covered by \code{y}.
}
\description{
Compute the fraction of raster cells covered by a polygon
}
\examples{
rast <- raster::raster(matrix(1:100, ncol=10), xmn=0, ymn=0, xmx=10, ymx=10)
poly <- sf::st_as_sfc('POLYGON ((2 2, 7 6, 4 9, 2 2))')

cov_frac <- coverage_fraction(rast, poly)[[1]]
}

exactextractr/man/exact_extract.Rd

% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/exact_extract.R
\name{exact_extract}
\alias{exact_extract}
\alias{exact_extract,Raster,sf-method}
\alias{exact_extract,Raster,SpatialPolygonsDataFrame-method}
\alias{exact_extract,Raster,SpatialPolygons-method}
\alias{exact_extract,Raster,sfc_MULTIPOLYGON-method}
\alias{exact_extract,Raster,sfc_POLYGON-method}
\alias{exact_extract,Raster,sfc_GEOMETRY-method}
\alias{exact_extract,Raster,sfc_GEOMETRYCOLLECTION-method}
\alias{exact_extract,SpatRaster,sf-method}
\alias{exact_extract,SpatRaster,SpatialPolygonsDataFrame-method}
\alias{exact_extract,SpatRaster,SpatialPolygons-method}
\alias{exact_extract,SpatRaster,sfc_MULTIPOLYGON-method}
\alias{exact_extract,SpatRaster,sfc_POLYGON-method}
\alias{exact_extract,SpatRaster,sfc_GEOMETRY-method}
\alias{exact_extract,SpatRaster,sfc_GEOMETRYCOLLECTION-method}
\title{Extract or summarize values from rasters}
\usage{
\S4method{exact_extract}{Raster,sf}(
  x,
  y,
  fun = NULL,
  ...,
  weights = NULL,
  append_cols = NULL,
  coverage_area = FALSE,
  default_value = NA_real_,
  default_weight = NA_real_,
  include_area = FALSE,
  include_cell = FALSE,
  include_cols = NULL,
  include_xy = FALSE,
  force_df = FALSE,
  full_colnames = FALSE,
  stack_apply = FALSE,
  summarize_df = FALSE,
  quantiles = NULL,
  progress = TRUE,
  max_cells_in_memory = 3e+07,
  grid_compat_tol = 0.001,
  colname_fun = NULL
)

\S4method{exact_extract}{Raster,SpatialPolygonsDataFrame}(x, y, ...)

\S4method{exact_extract}{Raster,SpatialPolygons}(x, y, ...)
\S4method{exact_extract}{Raster,sfc_MULTIPOLYGON}( x, y, fun = NULL, ..., weights = NULL, append_cols = NULL, coverage_area = FALSE, default_value = NA_real_, default_weight = NA_real_, include_area = FALSE, include_cell = FALSE, include_cols = NULL, include_xy = FALSE, force_df = FALSE, full_colnames = FALSE, stack_apply = FALSE, summarize_df = FALSE, quantiles = NULL, progress = TRUE, max_cells_in_memory = 3e+07, grid_compat_tol = 0.001, colname_fun = NULL ) \S4method{exact_extract}{Raster,sfc_POLYGON}( x, y, fun = NULL, ..., weights = NULL, append_cols = NULL, coverage_area = FALSE, default_value = NA_real_, default_weight = NA_real_, include_area = FALSE, include_cell = FALSE, include_cols = NULL, include_xy = FALSE, force_df = FALSE, full_colnames = FALSE, stack_apply = FALSE, summarize_df = FALSE, quantiles = NULL, progress = TRUE, max_cells_in_memory = 3e+07, grid_compat_tol = 0.001, colname_fun = NULL ) \S4method{exact_extract}{Raster,sfc_GEOMETRY}( x, y, fun = NULL, ..., weights = NULL, append_cols = NULL, coverage_area = FALSE, default_value = NA_real_, default_weight = NA_real_, include_area = FALSE, include_cell = FALSE, include_cols = NULL, include_xy = FALSE, force_df = FALSE, full_colnames = FALSE, stack_apply = FALSE, summarize_df = FALSE, quantiles = NULL, progress = TRUE, max_cells_in_memory = 3e+07, grid_compat_tol = 0.001, colname_fun = NULL ) \S4method{exact_extract}{Raster,sfc_GEOMETRYCOLLECTION}( x, y, fun = NULL, ..., weights = NULL, append_cols = NULL, coverage_area = FALSE, default_value = NA_real_, default_weight = NA_real_, include_area = FALSE, include_cell = FALSE, include_cols = NULL, include_xy = FALSE, force_df = FALSE, full_colnames = FALSE, stack_apply = FALSE, summarize_df = FALSE, quantiles = NULL, progress = TRUE, max_cells_in_memory = 3e+07, grid_compat_tol = 0.001, colname_fun = NULL ) \S4method{exact_extract}{SpatRaster,sf}( x, y, fun = NULL, ..., weights = NULL, append_cols = NULL, coverage_area = FALSE, default_value = 
NA_real_, default_weight = NA_real_, include_area = FALSE, include_cell = FALSE, include_cols = NULL, include_xy = FALSE, force_df = FALSE, full_colnames = FALSE, stack_apply = FALSE, summarize_df = FALSE, quantiles = NULL, progress = TRUE, max_cells_in_memory = 3e+07, grid_compat_tol = 0.001, colname_fun = NULL ) \S4method{exact_extract}{SpatRaster,SpatialPolygonsDataFrame}(x, y, ...) \S4method{exact_extract}{SpatRaster,SpatialPolygons}(x, y, ...) \S4method{exact_extract}{SpatRaster,sfc_MULTIPOLYGON}( x, y, fun = NULL, ..., weights = NULL, append_cols = NULL, coverage_area = FALSE, default_value = NA_real_, default_weight = NA_real_, include_area = FALSE, include_cell = FALSE, include_cols = NULL, include_xy = FALSE, force_df = FALSE, full_colnames = FALSE, stack_apply = FALSE, summarize_df = FALSE, quantiles = NULL, progress = TRUE, max_cells_in_memory = 3e+07, grid_compat_tol = 0.001, colname_fun = NULL ) \S4method{exact_extract}{SpatRaster,sfc_POLYGON}( x, y, fun = NULL, ..., weights = NULL, append_cols = NULL, coverage_area = FALSE, default_value = NA_real_, default_weight = NA_real_, include_area = FALSE, include_cell = FALSE, include_cols = NULL, include_xy = FALSE, force_df = FALSE, full_colnames = FALSE, stack_apply = FALSE, summarize_df = FALSE, quantiles = NULL, progress = TRUE, max_cells_in_memory = 3e+07, grid_compat_tol = 0.001, colname_fun = NULL ) \S4method{exact_extract}{SpatRaster,sfc_GEOMETRY}( x, y, fun = NULL, ..., weights = NULL, append_cols = NULL, coverage_area = FALSE, default_value = NA_real_, default_weight = NA_real_, include_area = FALSE, include_cell = FALSE, include_cols = NULL, include_xy = FALSE, force_df = FALSE, full_colnames = FALSE, stack_apply = FALSE, summarize_df = FALSE, quantiles = NULL, progress = TRUE, max_cells_in_memory = 3e+07, grid_compat_tol = 0.001, colname_fun = NULL ) \S4method{exact_extract}{SpatRaster,sfc_GEOMETRYCOLLECTION}( x, y, fun = NULL, ..., weights = NULL, append_cols = NULL, coverage_area = FALSE, 
default_value = NA_real_, default_weight = NA_real_, include_area = FALSE, include_cell = FALSE, include_cols = NULL, include_xy = FALSE, force_df = FALSE, full_colnames = FALSE, stack_apply = FALSE, summarize_df = FALSE, quantiles = NULL, progress = TRUE, max_cells_in_memory = 3e+07, grid_compat_tol = 0.001, colname_fun = NULL ) } \arguments{ \item{x}{a \code{RasterLayer}, \code{RasterStack}, \code{RasterBrick}, or \code{SpatRaster}} \item{y}{a \code{sf}, \code{sfc}, \code{SpatialPolygonsDataFrame}, or \code{SpatialPolygons} object with polygonal geometries} \item{fun}{an optional function or character vector, as described below} \item{...}{additional arguments to pass to \code{fun}} \item{weights}{a weighting raster to be used with the \code{weighted_mean} and \code{weighted_sum} summary operations or a user-defined summary function. When \code{weights} is set to \code{'area'}, the cell areas of \code{x} will be calculated and used as weights.} \item{append_cols}{when \code{fun} is not \code{NULL}, an optional character vector of columns from \code{y} to be included in returned data frame.} \item{coverage_area}{if \code{TRUE}, output pixel \code{coverage_area} instead of \code{coverage_fraction}} \item{default_value}{an optional value to use instead of \code{NA} in \code{x}} \item{default_weight}{an optional value to use instead of \code{NA} in \code{weights}} \item{include_area}{if \code{TRUE}, and \code{fun} is \code{NULL}, augment the data frame for each feature with a column for the cell area. If the units of the raster CRS are degrees, the area in square meters will be calculated based on a spherical approximation of Earth. Otherwise, a Cartesian area will be calculated (and will be the same for all pixels.) 
If \code{TRUE} and \code{fun} is not \code{NULL}, add \code{area} to the data frame passed to \code{fun} for each feature.} \item{include_cell}{if \code{TRUE}, and \code{fun} is \code{NULL}, augment the data frame for each feature with a column for the cell index (\code{cell}). If \code{TRUE} and \code{fun} is not \code{NULL}, add \code{cell} to the data frame passed to \code{fun} for each feature.} \item{include_cols}{an optional character vector of column names in \code{y} to be added to the data frame for each feature that is either returned (when \code{fun} is \code{NULL}) or passed to \code{fun}.} \item{include_xy}{if \code{TRUE}, and \code{fun} is \code{NULL}, augment the returned data frame for each feature with columns for cell center coordinates (\code{x} and \code{y}). If \code{TRUE} and \code{fun} is not \code{NULL}, add \code{x} and \code{y} to the data frame passed to \code{fun} for each feature.} \item{force_df}{always return a data frame instead of a vector, even if \code{x} has only one layer and \code{fun} has length 1} \item{full_colnames}{include the names of \code{x} and \code{weights} in the names of the data frame for each feature, even if \code{x} or \code{weights} has only one layer. This is useful when the results of multiple calls to \code{exact_extract} are combined with \code{cbind}.} \item{stack_apply}{if \code{TRUE}, apply \code{fun} independently to each layer or \code{x} (and its corresponding layer of \code{weights}, if provided.) The number of layers in \code{x} and \code{weights} must equal each other or \code{1}, in which case the single layer raster will be recycled. 
If \code{FALSE}, apply \code{fun} to all layers of \code{x} (and \code{weights}) simultaneously.}

\item{summarize_df}{pass values, coverage fraction/area, and weights to \code{fun} as a single data frame instead of separate arguments.}

\item{quantiles}{quantiles to be computed when \code{fun = 'quantile'}}

\item{progress}{if \code{TRUE}, display a progress bar during processing}

\item{max_cells_in_memory}{the maximum number of raster cells to load at a given time when using a named summary operation for \code{fun} (as opposed to a function defined using R code). If a polygon covers more than \code{max_cells_in_memory} raster cells, it will be processed in multiple chunks.}

\item{grid_compat_tol}{require value and weight grids to align within \code{grid_compat_tol} times the smaller of the two grid resolutions.}

\item{colname_fun}{an optional function used to construct column names. Should accept arguments \code{values} (name of value layer), \code{weights} (name of weight layer), \code{fun_name} (value of \code{fun}), \code{fun_value} (value associated with \code{fun}, for \verb{fun \%in\% c('quantile', 'frac', 'weighted_frac')}), \code{nvalues} (number of value layers), and \code{nweights} (number of weight layers)}
}
\value{
a vector, data frame, or list of data frames, depending on the type of \code{x} and the value of \code{fun} (see Details)
}
\description{
Extracts the values of cells in a raster (\code{RasterLayer}, \code{RasterStack}, \code{RasterBrick}, or \code{SpatRaster}) that are covered by polygons in a simple feature collection (\code{sf} or \code{sfc}) or \code{SpatialPolygonsDataFrame}. Returns either a summary of the extracted values or the extracted values themselves.
}
\details{
\code{exact_extract} extracts the values of cells in a raster that are covered by polygonal features in a simple feature collection (\code{sf} or \code{sfc}) or \code{SpatialPolygonsDataFrame}, as well as the fraction or area of each cell that is covered by the feature.
Pixels covered by all parts of the polygon are considered. If an (invalid) multipart polygon covers the same pixels more than once, the pixel may have a coverage fraction greater than one. The function can either return pixel values directly to the caller, or can return the result of a predefined summary operation or user-defined R function applied to the values. These three approaches are described in the subsections below. \subsection{Returning extracted values directly}{ If \code{fun} is not specified, \code{exact_extract} will return a list with one data frame for each feature in the input feature collection. The data frame will contain a column with cell values from each layer in the input raster (and optional weighting raster) and a column indicating the fraction or area of the cell that is covered by the polygon. If the input rasters have only one layer, the value and weight columns in the data frame will be named \code{values} or \code{weights}. When the input rasters have more than one layer, the columns will be named according to \code{names(x)} and \code{names(weights)}. The column containing pixel coverage will be called \code{coverage_fraction} when \code{coverage_area = FALSE}, or \code{coverage_area} when \code{coverage_area = TRUE}. Additional columns can be added to the returned data frames with the \code{include_area}, \code{include_cell}, and \code{include_xy} arguments. If the output data frames for multiple features are to be combined (e.g., with \code{rbind}), it may be useful to include identifying column(s) from the input features in the returned data frames using \code{include_cols}. } \subsection{Predefined summary operations}{ Often the individual pixel values are not needed; only one or more summary statistics (e.g., mean, sum) is required for each feature. Common summary statistics can be calculated by \code{exact_extract} directly using a predefined summary operation. 
Where possible, this approach is advantageous because it allows the package to calculate the statistics incrementally, avoiding the need to store all pixel values in memory at the same time. This allows the package to process arbitrarily large data with a small amount of memory. (The \code{max_cells_in_memory} argument can be used to fine-tune the amount of memory made available to \code{exact_extract}.)

To summarize pixel values using a predefined summary operation, \code{fun} should be set to a character vector of one or more operation names. If the input raster has a single layer and a single summary operation is specified, \code{exact_extract} will return a vector with the result of the summary operation for each feature in the input. If the input raster has multiple layers, or if multiple summary operations are specified, \code{exact_extract} will return a data frame with a row for each feature and a column for each summary operation / layer combination. (The \code{force_df} option can be used to always return a data frame instead of a vector.)
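As a sketch of the return-shape behavior described above (the raster values and polygon geometry here are illustrative):

```r
library(raster)
library(sf)
library(exactextractr)

# a 10x10 raster over [0, 10] x [0, 10]
rast <- raster(matrix(1:100, ncol = 10), xmn = 0, ymn = 0, xmx = 10, ymx = 10)
poly <- st_as_sfc('POLYGON ((2 2, 7 6, 4 9, 2 2))')

# single layer, single operation: numeric vector, one element per feature
exact_extract(rast, poly, 'mean')

# single layer, multiple operations: data frame, one column per operation
exact_extract(rast, poly, c('min', 'max'))

# force_df = TRUE returns a data frame even in the single-stat case
exact_extract(rast, poly, 'mean', force_df = TRUE)
```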
The following summary operations are supported: \itemize{ \item \code{min} - the minimum non-\code{NA} value in any raster cell wholly or partially covered by the polygon \item \code{max} - the maximum non-\code{NA} value in any raster cell wholly or partially covered by the polygon \item \code{count} - the sum of fractions of raster cells with non-\code{NA} values covered by the polygon \item \code{sum} - the sum of non-\code{NA} raster cell values, multiplied by the fraction of the cell that is covered by the polygon \item \code{mean} - the mean cell value, weighted by the fraction of each cell that is covered by the polygon \item \code{median} - the median cell value, weighted by the fraction of each cell that is covered by the polygon \item \code{quantile} - arbitrary quantile(s) of cell values, specified in \code{quantiles}, weighted by the fraction of each cell that is covered by the polygon \item \code{mode} - the most common cell value, weighted by the fraction of each cell that is covered by the polygon. Where multiple values occupy the same maximum number of weighted cells, the largest value will be returned. \item \code{majority} - synonym for \code{mode} \item \code{minority} - the least common cell value, weighted by the fraction of each cell that is covered by the polygon. Where multiple values occupy the same minimum number of weighted cells, the smallest value will be returned. \item \code{variety} - the number of distinct values in cells that are wholly or partially covered by the polygon. \item \code{variance} - the population variance of cell values, weighted by the fraction of each cell that is covered by the polygon. \item \code{stdev} - the population standard deviation of cell values, weighted by the fraction of each cell that is covered by the polygon. \item \code{coefficient_of_variation} - the population coefficient of variation of cell values, weighted by the fraction of each cell that is covered by the polygon. 
\item \code{weighted_mean} - the mean cell value, weighted by the product of the fraction of each cell covered by the polygon and the value of a second weighting raster provided as \code{weights}
\item \code{weighted_sum} - the sum of defined raster cell values, multiplied by the fraction of each cell that is covered by the polygon and the value of a second weighting raster provided as \code{weights}
\item \code{weighted_stdev} - the population standard deviation of cell values, weighted by the product of the fraction of each cell covered by the polygon and the value of a second weighting raster provided as \code{weights}
\item \code{weighted_variance} - the population variance of cell values, weighted by the product of the fraction of each cell covered by the polygon and the value of a second weighting raster provided as \code{weights}
\item \code{frac} - returns one column for each possible value of \code{x}, with the fraction of defined raster cells that are equal to that value.
\item \code{weighted_frac} - returns one column for each possible value of \code{x}, with the fraction of defined cells that are equal to that value, weighted by \code{weights}.
}

In all of the summary operations, \code{NA} values in the primary raster (\code{x}) are ignored (i.e., \code{na.rm = TRUE}). If \code{NA} values occur in the weighting raster, the result of the weighted operation will be \code{NA}. \code{NA} values in both \code{x} and \code{weights} can be replaced on-the-fly using the \code{default_value} and \code{default_weight} arguments.
}

\subsection{User-defined summary functions}{
If no predefined summary operation is suitable, a user-defined R function may be provided as \code{fun}. The function will be called once for each feature and must return either a single value or a data frame. The results of the function for each feature will be combined and returned by \code{exact_extract}.
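As a quick sketch of such a function using the default argument convention (for a single-layer raster the arguments arrive as vectors; the raster and polygon are illustrative):

```r
library(raster)
library(sf)
library(exactextractr)

rast <- raster(matrix(1:100, ncol = 10), xmn = 0, ymn = 0, xmx = 10, ymx = 10)
poly <- st_as_sfc('POLYGON ((2 2, 7 6, 4 9, 2 2))')

# a coverage-weighted sum, written by hand
cov_weighted_sum <- function(values, coverage_fraction) {
  sum(values * coverage_fraction, na.rm = TRUE)
}

# should agree with the predefined 'sum' operation
exact_extract(rast, poly, cov_weighted_sum)
exact_extract(rast, poly, 'sum')
```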
The simplest way to write a summary function is to set argument \code{summarize_df = TRUE}. (For backwards compatibility, this is not the default.) In this mode, the summary function takes the signature \verb{function(df, ...)} where \code{df} is the same data frame that would be returned by \code{exact_extract} with \code{fun = NULL}. With \code{summarize_df = FALSE}, the function must have the signature \verb{function(values, coverage_fractions, ...)} when weights are not used, and \verb{function(values, coverage_fractions, weights, ...)} when weights are used. If the value and weight rasters each have a single layer, the function arguments will be vectors; if either has multiple layers, the function arguments will be data frames, with column names taken from the names of the value/weight rasters. Values brought in through the \code{include_xy}, \code{include_area}, \code{include_cell}, and \code{include_cols} arguments will be added to the \code{values} data frame. For most applications, it is simpler to set \code{summarize_df = TRUE} and work with all inputs in a single data frame. 
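A sketch of the \code{summarize_df = TRUE} style, in which all inputs arrive in one data frame. (The column selection below is deliberately positional, since the value column name depends on the raster; \code{coverage_fraction} is the coverage column when \code{coverage_area = FALSE}.)

```r
library(raster)
library(sf)
library(exactextractr)

rast <- raster(matrix(1:100, ncol = 10), xmn = 0, ymn = 0, xmx = 10, ymx = 10)
poly <- st_as_sfc('POLYGON ((2 2, 7 6, 4 9, 2 2))')

# df is the same data frame that fun = NULL would return; for a single-layer
# raster, the first column holds the cell values
frac_above_50 <- function(df) {
  v <- df[[1]]
  sum(df$coverage_fraction[v > 50]) / sum(df$coverage_fraction)
}

exact_extract(rast, poly, frac_above_50, summarize_df = TRUE)
```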
} } \examples{ rast <- raster::raster(matrix(1:100, ncol=10), xmn=0, ymn=0, xmx=10, ymx=10) poly <- sf::st_as_sfc('POLYGON ((2 2, 7 6, 4 9, 2 2))') # named summary operation on RasterLayer, returns vector exact_extract(rast, poly, 'mean') # two named summary operations on RasterLayer, returns data frame exact_extract(rast, poly, c('min', 'max')) # named summary operation on RasterStack, returns data frame stk <- raster::stack(list(a=rast, b=sqrt(rast))) exact_extract(stk, poly, 'mean') # named weighted summary operation, returns vector weights <- raster::raster(matrix(runif(100), ncol=10), xmn=0, ymn=0, xmx=10, ymx=10) exact_extract(rast, poly, 'weighted_mean', weights=weights) # custom summary function, returns vector exact_extract(rast, poly, function(value, cov_frac) length(value[cov_frac > 0.9])) } exactextractr/man/dot-valueWeightIndexes.Rd0000644000176200001440000000117514500104457020571 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/exact_extract_helpers.R \name{.valueWeightIndexes} \alias{.valueWeightIndexes} \title{Compute indexes for the value and weight layers that should be processed together} \usage{ .valueWeightIndexes(num_values, num_weights) } \arguments{ \item{num_values}{number of layers in value raster} \item{num_weights}{number of layers in weighting raster} } \value{ list with \code{values} and \code{weights} elements providing layer indexes } \description{ Compute indexes for the value and weight layers that should be processed together } \keyword{internal} exactextractr/man/dot-resultColumns.Rd0000644000176200001440000000151114500104457017636 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/exact_extract_helpers.R \name{.resultColumns} \alias{.resultColumns} \title{Return column names to be used for summary operations} \usage{ .resultColumns( value_names, weight_names, fun, full_colnames, quantiles = numeric(), unique_values = numeric(), 
colname_fun = NULL ) } \arguments{ \item{value_names}{names of value raster layers} \item{weight_names}{names of weighting raster layers} \item{fun}{functions or names of summary operations} \item{full_colnames}{return a complete column name even when there is no ambiguity?} \item{quantiles}{quantiles to use when \code{stat_names} contains \code{quantile}} } \value{ character vector of column names } \description{ Return column names to be used for summary operations } \keyword{internal} exactextractr/man/rasterize_polygons.Rd0000644000176200001440000000235414500104457020143 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/rasterize.R \name{rasterize_polygons} \alias{rasterize_polygons} \alias{rasterize_polygons,sf,RasterLayer-method} \alias{rasterize_polygons,sf,SpatRaster-method} \title{Create a raster approximation of a polygon coverage} \usage{ \S4method{rasterize_polygons}{sf,RasterLayer}(x, y, min_coverage = 0) \S4method{rasterize_polygons}{sf,SpatRaster}(x, y, min_coverage = 0) } \arguments{ \item{x}{a \code{sf} or \code{sfc} object with polygonal geometries} \item{y}{a (possibly empty) \code{RasterLayer} whose resolution and extent will be used for the generated \code{RasterLayer}.} \item{min_coverage}{minimum fraction of a cell that must be covered by polygons to be included in the output} } \value{ a \code{RasterLayer} or \code{SpatRaster}, consistent with the type of \code{y} } \description{ Returns a raster whose values indicate the index of the polygon covering each cell. Where multiple polygons cover the same cell, the index of the polygon covering the greatest area will be used, with the lowest index returned in the case of ties. Cells that are not covered by any polygon, or whose total covered fraction is less than \code{min_coverage}, will be set to \code{NA}. 
} exactextractr/DESCRIPTION0000644000176200001440000000203514502525462014646 0ustar liggesusersPackage: exactextractr Title: Fast Extraction from Raster Datasets using Polygons Version: 0.10.0 Authors@R: c( person("Daniel Baston", email = "dbaston@isciences.com", role = c("aut", "cre")), person("ISciences, LLC", role="cph")) Description: Quickly and accurately summarizes raster values over polygonal areas ("zonal statistics"). Depends: R (>= 3.4.0) License: Apache License (== 2.0) SystemRequirements: GEOS (>= 3.5.0) Imports: Rcpp (>= 0.12.12), methods, raster, sf (>= 0.9.0), URL: https://isciences.gitlab.io/exactextractr/, https://github.com/isciences/exactextractr BugReports: https://github.com/isciences/exactextractr/issues LinkingTo: Rcpp Suggests: dplyr, foreign, knitr, ncdf4, rmarkdown, testthat, terra (>= 1.5.17) Encoding: UTF-8 RoxygenNote: 7.1.2 VignetteBuilder: knitr NeedsCompilation: yes Packaged: 2023-09-12 15:54:24 UTC; dan Author: Daniel Baston [aut, cre], ISciences, LLC [cph] Maintainer: Daniel Baston Repository: CRAN Date/Publication: 2023-09-20 08:20:02 UTC exactextractr/build/0000755000176200001440000000000014500104660014226 5ustar liggesusersexactextractr/build/vignette.rds0000644000176200001440000000041014500104660016560 0ustar liggesusersuP=o0u> RL7 .Ua膬 8rLQ;^s|{ݳDŽAHha٧ȗRGIERsVZk#2*[_,d{T-<ɍ`_0aR~^]/>vJ"3ג8$5юv⾁ ]A oa'_7pK^ ujS=4~ u}vIZ9G7ߜ~ }M#exactextractr/tests/0000755000176200001440000000000014500103446014272 5ustar liggesusersexactextractr/tests/testthat/0000755000176200001440000000000014502525462016142 5ustar liggesusersexactextractr/tests/testthat/test_helper_blocksize.R0000644000176200001440000000312614500103446022642 0ustar liggesusers# Copyright (c) 2021-2023 ISciences, LLC. # All rights reserved. # # This software is licensed under the Apache License, Version 2.0 (the "License"). # You may not use this file except in compliance with the License. You may # obtain a copy of the License ta http://www.apache.org/licenses/LICENSE-2.0. 
# # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. context('block size detection') test_that('blockSize reports block size in row-col order', { landcov_fname <- system.file(file.path('sao_miguel', 'clc2018_v2020_20u1.tif'), package='exactextractr') expect_equal( .blockSize(raster::raster(landcov_fname)), c(2, 3840) ) expect_equal( .blockSize(terra::rast(landcov_fname)), c(2, 3840) ) # netCDF uses a different code path, so copy our test input to netCDF format # and repeat. GDAL doesn't let us control the block size, so hopefully it # is stable. if ('netCDF' %in% terra::gdal(drivers=TRUE)$name) { nc_fname <- tempfile(fileext = '.nc') suppressWarnings({ terra::writeRaster(terra::rast(landcov_fname), nc_fname, gdal=c('FORMAT=NC4', 'COMPRESS=DEFLATE')) }) expect_equal( .blockSize(raster::raster(nc_fname)), c(1, 3840) ) expect_equal( .blockSize(terra::rast(nc_fname)), c(1, 3840) ) file.remove(nc_fname) } }) exactextractr/tests/testthat/test_exact_resample_terra.R0000644000176200001440000001043014500103446023503 0ustar liggesusers# Copyright (c) 2020-2022 ISciences, LLC. # All rights reserved. # # This software is licensed under the Apache License, Version 2.0 (the "License"). # You may not use this file except in compliance with the License. You may # obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
context('exact_resample (terra)')

test_that("exact_resample supports SpatRaster arguments", {
  set.seed(123)

  # generate a random raster with a strange extent and resolution
  src <- raster::raster(matrix(runif(10000), nrow=100),
                        xmn=runif(1), xmx=runif(1) + 9,
                        ymn=runif(1), ymx=runif(1) + 9)

  # resample it to a raster with a larger grid with different resolution
  dst <- raster::raster(xmn=0, xmx=10, ymn=0, ymx=10, res=c(1, 2), crs=raster::crs(src))

  dst <- exact_resample(src, dst, 'sum')
  dst_terra <- exact_resample(terra::rast(src), terra::rast(dst), 'sum')

  expect_equal(terra::rast(dst), dst_terra)
})

test_that("resampling can be weighted with coverage areas instead of coverage fractions", {
  r <- terra::rast(nrows = 10, ncols = 10, xmin = 0, xmax = 10, ymin = 60, ymax = 70,
                   crs = 'EPSG:4326')
  terra::values(r) <- seq(1, terra::ncell(r))

  r2 <- terra::rast(nrows = 1, ncols = 1, xmin = 0, xmax = 10, ymin = 60, ymax = 70,
                    crs = 'EPSG:4326')

  unweighted <- exact_resample(r, r2, 'mean')
  area_weighted <- exact_resample(r, r2, 'mean', coverage_area = TRUE)

  expect_true(area_weighted[1] > unweighted[1])
})

test_that("an R function can be used for resampling", {
  r1 <- make_square_rast(1:100)
  r2 <- terra::rast(nrows = 4, ncols = 4, xmin = 0, xmax = 10, ymin = 0, ymax = 10,
                    crs = terra::crs(r1))

  r2_rfun <- exact_resample(r1, r2, function(value, cov_frac) { sum(value * cov_frac) })
  r2_stat <- exact_resample(r1, r2, 'sum')

  expect_equal(terra::values(r2_rfun), terra::values(r2_stat))
})

test_that("a multi-layer SpatRaster can be provided to an R summary function", {
  r1 <- make_square_rast(1:100, crs = 'EPSG:4326')
  r2 <- terra::rast(nrows = 4, ncols = 4, xmin = 0, xmax = 10, ymin = 0, ymax = 10,
                    crs = terra::crs(r1))

  # calculate an area-weighted mean by putting areas in a second layer
  r1_area <- terra::cellSize(r1)
  r1_stk <- terra::rast(list(r1, r1_area))

  result_a <- exact_resample(r1_stk, r2, function(values, coverage_fraction) {
    weighted.mean(values[,1], values[,2] * coverage_fraction)
  })
  # compare this to the more straightforward method of setting coverage_area = TRUE
  result_b <- exact_resample(r1, r2, 'mean', coverage_area = TRUE)

  expect_equal(
    terra::values(result_a),
    terra::values(result_b),
    tolerance = 1e-3
  )

  expect_error(
    exact_resample(r1_stk, r2, 'mean'),
    'must have a single layer'
  )
})

test_that("error thrown if R function returns non-scalar value", {
  r1 <- make_square_rast(1:100)
  r2 <- terra::rast(nrows = 4, ncols = 4, xmin = 0, xmax = 10, ymin = 0, ymax = 10,
                    crs = terra::crs(r1))

  expect_error(
    exact_resample(r1, r2, function(value, cov_frac) { return(1:2) }),
    'must return a single value'
  )

  expect_error(
    exact_resample(r1, r2, function(value, cov_frac) { return(numeric()) }),
    'must return a single value'
  )

  expect_error(
    exact_resample(r1, r2, function(value, cov_frac) { return(NULL) }),
    'Not compatible'
  )

  expect_error(
    exact_resample(r1, r2, function(value, cov_frac) { 'abc' }),
    'Not compatible'
  )
})

test_that("error thrown if R function has wrong signature", {
  r1 <- make_square_rast(1:100)
  r2 <- terra::rast(nrows = 4, ncols = 4, xmin = 0, xmax = 10, ymin = 0, ymax = 10,
                    crs = terra::crs(r1))

  expect_error(
    exact_resample(r1, r2, sum),
    'does not appear to be of the form'
  )
})
exactextractr/tests/testthat/test_exact_extract_errors.R0000644000176200001440000004035114500103446023551 0ustar liggesusers# Copyright (c) 2018-2021 ISciences, LLC.
# All rights reserved.
#
# This software is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. You may
# obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
library(testthat)
library(exactextractr)

context('exact_extract input validation')

test_that('Error thrown if weighted stat requested but weights not provided', {
  rast <- make_square_raster(1:9)
  square <- make_circle(2, 2, 0.5, sf::st_crs(rast))

  for (stat in c('weighted_mean', 'weighted_sum')) {
    expect_error(exact_extract(rast, square, stat), 'no weights provided')
  }
})

test_that('Warning raised if weights provided but weighted stat not requested', {
  rast <- make_square_raster(1:9)
  square <- make_circle(2, 2, 0.5, sf::st_crs(rast))

  for (stat in c('count', 'sum', 'mean', 'min', 'max', 'minority', 'majority', 'mode', 'variety')) {
    expect_warning(exact_extract(rast, square, stat, weights=rast),
                   'Weights provided but no.*operations use them')
  }
})

test_that('Generic sfc_GEOMETRY fails if a feature is not polygonal', {
  rast <- make_square_raster(1:100)

  features <- st_as_sfc(c('POLYGON ((0 0, 2 0, 2 2, 0 2, 0 0))', 'POINT (2 7)'),
                        crs=sf::st_crs(rast))

  expect_error(exact_extract(rast, features, 'sum', progress = FALSE),
               'must be polygonal')
})

test_that('Incorrect argument types are handled gracefully', {
  data <- matrix(1:9, nrow=3, byrow=TRUE)
  rast <- raster::raster(data, xmn=0, xmx=3, ymn=0, ymx=3,
                         crs='+proj=longlat +datum=WGS84')

  point <- sf::st_sfc(sf::st_point(1:2), crs=sf::st_crs(rast))
  linestring <- sf::st_sfc(sf::st_linestring(matrix(1:4, nrow=2)), crs=sf::st_crs(rast))
  multipoint <- sf::st_sfc(sf::st_multipoint(matrix(1:4, nrow=2)), crs=sf::st_crs(rast))
  multilinestring <- sf::st_sfc(
    sf::st_multilinestring(list(
      matrix(1:4, nrow=2),
      matrix(5:8, nrow=2)
    )), crs=sf::st_crs(rast))
  geometrycollection <- sf::st_sfc(
    sf::st_geometrycollection(list(
      sf::st_geometry(point)[[1]],
      sf::st_geometry(linestring)[[1]])),
    crs=sf::st_crs(rast))

  expect_error(exact_extract(rast, point), 'unable to find.* method')
  expect_error(exact_extract(rast, linestring), 'unable to find.* method')
  expect_error(exact_extract(rast, multipoint), 'unable to find.* method')
  expect_error(exact_extract(rast, multilinestring), 'unable to find.* method')
  expect_error(exact_extract(rast, geometrycollection), 'unable to find.* method')
})

test_that('Warning is raised on CRS mismatch', {
  rast <- raster::raster(matrix(1:100, nrow=10),
                         xmn=-180, xmx=180, ymn=-90, ymx=90,
                         crs='+proj=longlat +datum=WGS84')

  poly <- sf::st_buffer(
    sf::st_as_sfc('POINT(442944.5 217528.7)', crs=32145),
    150000)

  expect_warning(exact_extract(rast, poly, weighted.mean, na.rm=TRUE),
                 'transformed to raster')
})

test_that('Warning is raised on undefined CRS', {
  rast <- raster::raster(matrix(1:100, nrow=10), xmn=0, xmx=10, ymn=0, ymx=10)
  weights <- raster::raster(matrix(runif(100), nrow=10), xmn=0, xmx=10, ymn=0, ymx=10)
  poly <- make_circle(8, 4, 0.4, crs=NA_integer_)

  # neither has a defined CRS
  expect_silent(exact_extract(rast, poly, 'sum'))

  # only raster has defined CRS
  raster::crs(rast) <- '+proj=longlat +datum=WGS84'
  expect_warning(exact_extract(rast, poly, 'sum'),
                 'assuming .* same CRS .* raster')

  # weights have no defined CRS
  expect_warning(exact_extract(rast, poly, 'weighted_mean', weights=weights),
                 'No CRS .* weighting raster.* assuming .* same CRS')

  # both have defined crs
  sf::st_crs(poly) <- sf::st_crs(rast)
  expect_silent(exact_extract(rast, poly, 'sum'))

  # only polygons have defined crs
  raster::crs(rast) <- NULL
  expect_warning(exact_extract(rast, poly, 'sum'),
                 'assuming .* same CRS .* polygon')
})

test_that('Error thrown if value raster and weighting raster have different crs', {
  values <- make_square_raster(runif(100), crs=NA)
  weights <- make_square_raster(runif(100), crs=NA)
  poly <- make_circle(8, 4, 1.5, crs=NA_real_)

  # no CRS for values or weights
  exact_extract(values, poly, 'weighted_mean', weights=weights)

  # values have defined CRS, weights do not
  raster::crs(values) <- '+proj=longlat +datum=WGS84'
  raster::crs(weights) <- '+proj=longlat +datum=NAD83'
  expect_error(
    exact_extract(values, poly, 'weighted_mean', weights=weights),
    'Weighting raster does not
have .* same CRS as value raster')
})

test_that('Error thrown if value raster and weighting raster have incompatible grids', {
  poly <- make_circle(5, 4, 2, NA_integer_)
  values <- raster::raster(matrix(runif(10*10), nrow=10), xmn=0, xmx=10, ymn=0, ymx=10)

  # weights have same extent as values, higher resolution
  weights <- raster::raster(matrix(runif(100*100), nrow=100), xmn=0, xmx=10, ymn=0, ymx=10)
  exact_extract(values, poly, 'weighted_mean', weights=weights)

  # weights have same extent as values, lower resolution
  weights <- raster::raster(matrix(1:4, nrow=2), xmn=0, xmx=10, ymn=0, ymx=10)
  exact_extract(values, poly, 'weighted_mean', weights=weights)

  # weights have offset extent from values, same resolution, compatible origin
  weights <- raster::raster(matrix(runif(10*10), nrow=10), xmn=1, xmx=11, ymn=2, ymx=12)
  exact_extract(values, poly, 'weighted_mean', weights=weights)

  # weights have offset extent from values, same resolution, incompatible origin
  weights <- raster::raster(matrix(runif(10*10), nrow=10), xmn=0.5, xmx=10.5, ymn=2, ymx=12)
  expect_error(exact_extract(values, poly, 'weighted_mean', weights=weights),
               'Incompatible extents')
})

test_that('Error is raised if function has unexpected signature', {
  rast <- make_square_raster(1:100)
  poly <- make_circle(5, 5, 3, sf::st_crs(rast))

  # unweighted, standard form
  for (fun in c(length, sum, median, mean, sd)) {
    expect_error(
      exact_extract(rast, poly, fun),
      'function .* not .* of the form')
  }
  expect_silent(exact_extract(rast, poly, weighted.mean))

  # unweighted, summarize_df
  expect_error(
    exact_extract(rast, poly, function() {}, summarize_df = TRUE),
    'function .* not .* of the form'
  )

  # weighted, standard form
  expect_error(
    exact_extract(rast, poly, weights = rast, fun = function(x, frac) {}),
    'function .* not .* of the form')
  expect_error(
    exact_extract(rast, poly, weights = rast, fun = function(x) {}),
    'function .* not .* of the form')
  expect_error(
    exact_extract(rast, poly, weights = rast, fun = function() {}),
'function .* not .* of the form') # weighted, summarize_df expect_error( exact_extract(rast, poly, weights = rast, fun = function() {}, summarize_df = TRUE), 'function .* not .* of the form' ) }) test_that('Error is raised for unknown summary operation', { rast <- make_square_raster(1:100) poly <- make_circle(5, 5, 3, sf::st_crs(rast)) expect_error(exact_extract(rast, poly, 'whatimean'), 'Unknown stat') }) test_that('Error is raised if arguments passed without R summary function', { rast <- make_square_raster(1:100) poly <- make_circle(5, 5, 3, sf::st_crs(rast)) expect_error(exact_extract(rast, poly, 'sum', na.rm=TRUE), 'does not accept additional arguments') expect_error( exact_extract(rast, poly, cookie = FALSE), 'Unexpected arguments' ) }) test_that('Error is raised for invalid max_cells_in_memory', { rast <- make_square_raster(1:100) poly <- make_circle(5, 5, 3, sf::st_crs(rast)) expect_error(exact_extract(rast, poly, 'mean', max_cells_in_memory=-123), 'Invalid.*max_cells') expect_error( exact_extract(rast, poly, 'mean', max_cells_in_memory = NA), 'must be a single numeric') expect_error( exact_extract(rast, poly, 'mean', max_cells_in_memory = numeric()), 'must be a single numeric') expect_error( exact_extract(rast, poly, 'mean', max_cells_in_memory = integer()), 'must be a single numeric') expect_error( exact_extract(rast, poly, 'mean', max_cells_in_memory = NULL), 'must be a single numeric') }) test_that('Error is thrown when using include_* with named summary operation', { rast <- make_square_raster(1:100) circles <- st_sf( fid = c(2, 9), size = c('large', 'small'), geometry = c( make_circle(5, 4, 2, sf::st_crs(rast)), make_circle(3, 1, 1, sf::st_crs(rast)))) expect_error(exact_extract(rast, circles, 'sum', include_xy = TRUE), 'include_xy must be FALSE') expect_error(exact_extract(rast, circles, 'sum', include_area = TRUE), 'include_area must be FALSE') expect_error(exact_extract(rast, circles, 'sum', include_cell = TRUE), 'include_cell must be FALSE') 
expect_error(exact_extract(rast, circles, 'sum', include_cols = 'fid'), 'include_cols not supported') }) test_that('Error is thrown when using include_cols or append_cols with nonexisting columns', { rast <- make_square_raster(1:100) circles <- st_sf( fid = c(2, 9), size = c('large', 'small'), geometry = c( make_circle(5, 4, 2, sf::st_crs(rast)), make_circle(3, 1, 1, sf::st_crs(rast)))) # append_cols specified but sfc has no attribute columns expect_error( exact_extract(rast, st_geometry(circles), 'mean', append_cols = 'fid', progress = FALSE), 'only supported for sf') expect_error( exact_extract(rast, st_geometry(circles), weighted.mean, append_cols = 'fid', progress = FALSE), 'only supported for sf') # append_cols specified for misspelled column expect_error( exact_extract(rast, circles, 'mean', append_cols = 'fidd', progress = FALSE), 'undefined columns' ) expect_error( exact_extract(rast, circles, weighted.mean, append_cols = 'fidd', progress = FALSE), 'undefined columns' ) # include_cols specified for sfc expect_error( exact_extract(rast, st_geometry(circles), include_cols = 'fidd', progress = FALSE), 'only supported for sf' ) # include_cols specified for misspelled column expect_error( exact_extract(rast, circles, include_cols = 'fidd', progress = FALSE), 'undefined columns' ) }) test_that('Error is thrown if quantiles not specified or not valid', { rast <- make_square_raster(1:100) square <- make_rect(2, 2, 4, 4, crs=sf::st_crs(rast)) expect_error(exact_extract(rast, square, 'quantile'), 'Quantiles not specified') expect_error(exact_extract(rast, square, 'quantile', quantiles=NA), 'must be between 0 and 1') expect_error(exact_extract(rast, square, 'quantile', quantiles=c(0.5, 1.1)), 'must be between 0 and 1') expect_error(exact_extract(rast, square, 'quantile', quantiles=numeric()), 'Quantiles not specified') }) test_that('Warning emitted when value raster is disaggregated', { r1 <- make_square_raster(1:100) r2 <- make_square_raster(runif(100)) r1d <- 
raster::disaggregate(r1, 2) r2d <- raster::disaggregate(r2, 2) circle <- make_circle(2, 7, 3, sf::st_crs(r1)) # no warning, values and weights have same resolution expect_silent(exact_extract(r1, circle, weights=r2)) # no warning, values have higher resolution than weights expect_silent(exact_extract(r1d, circle, weights=r2)) # warning, weights have higher resolution than values expect_warning(exact_extract(r1, circle, weights=r2d), 'value .* disaggregated') }) test_that('Error raised when value raster is disaggregated and unweighted sum/count requested', { r1 <- make_square_raster(1:100) r1d <- raster::disaggregate(r1, 2) circle <- make_circle(2, 7, 3, sf::st_crs(r1)) # no error, requested operations either expect disaggregation # or are not impacted by it expect_silent(exact_extract(r1, circle, c('weighted_sum', 'weighted_mean', 'mean'), weights=r1d)) # on the other hand, "count" would be messed up by the disaggregation expect_error(exact_extract(r1, circle, c('weighted_sum', 'count'), weights=r1d), 'raster is disaggregated') # as would "sum" expect_error(exact_extract(r1, circle, c('weighted_sum', 'count'), weights=r1d), 'raster is disaggregated') # no problem if the weights are disaggregated, though expect_silent(exact_extract(r1d, circle, c('weighted_sum', 'count'), weights=r1)) }) test_that('We get an error if using stack_apply with incompatible stacks', { vals <- stack(replicate(3, make_square_raster(runif(100)))) names(vals) <- c('a', 'b', 'c') weights <- stack(replicate(2, make_square_raster(runif(100)))) names(weights) <- c('d', 'e') circle <- make_circle(2, 7, 3, sf::st_crs(vals)) expect_error( exact_extract(vals, circle, function(v, c, w) 1, weights=weights, stack_apply=TRUE), "Can't apply") }) test_that('Error thrown if summarize_df set where not applicable', { rast <- make_square_raster(1:100) circle <- make_circle(7.5, 5.5, 4, sf::st_crs(rast)) expect_error( exact_extract(rast, circle, 'mean', summarize_df = TRUE), 'can only be used when .* 
function') expect_error( exact_extract(rast, circle, summarize_df = TRUE), 'can only be used when .* function') }) test_that('Error thrown if stack_apply set where not applicable', { rast <- make_square_raster(1:100) circle <- make_circle(7.5, 5.5, 4, sf::st_crs(rast)) expect_error( exact_extract(rast, circle, stack_apply = TRUE), 'can only be used when .* is a summary operation or function' ) }) test_that('Error thrown if append_cols set where not applicable', { rast <- make_square_raster(1:100) circle <- st_sf(make_circle(7.5, 5.5, 4, sf::st_crs(rast))) expect_error( exact_extract(rast, circle, append_cols = TRUE), 'can only be used when .* is a summary operation or function' ) }) test_that('Error thrown if scalar args have length != 1', { rast <- make_square_raster(1:100) circle <- make_circle(7.5, 5.5, 4, sf::st_crs(rast)) flags <- c( 'coverage_area', 'force_df', 'full_colnames', 'include_area', 'include_cell', 'include_xy', 'progress', 'stack_apply', 'summarize_df') for (flag in flags) { base_args <- list(rast, circle) for (bad_value in list(logical(), c(TRUE, TRUE), NA)) { args <- base_args args[[flag]] <- bad_value expect_error( do.call(exact_extract, args), 'must be TRUE or FALSE' ) } } }) test_that('Error thrown if fun is empty', { rast <- make_square_raster(1:100) circle <- make_circle(7.5, 5.5, 4, sf::st_crs(rast)) expect_error( exact_extract(rast, circle, character()), 'No summary operations' ) }) test_that('Error thrown if fun is incorrect type', { rast <- make_square_raster(1:100) circle <- make_circle(7.5, 5.5, 4, sf::st_crs(rast)) expect_error( exact_extract(rast, circle, 44), 'must be a character vector, function') expect_error( exact_extract(rast, circle, list(function() {}, function() {})), 'must be a character vector, function') }) test_that('Error thrown if default values have incorrect type/length', { rast <- make_square_raster(1:100) circle <- make_circle(7.5, 5.5, 4, sf::st_crs(rast)) expect_error( exact_extract(rast, circle, 'mean', 
default_value = numeric()), 'must be a single numeric value' ) expect_error( exact_extract(rast, circle, 'mean', default_value = c(3, 8)), 'must be a single numeric value' ) expect_error( exact_extract(rast, circle, 'mean', default_value = NULL), 'must be a single numeric value' ) expect_error( exact_extract(rast, circle, 'mean', default_value = FALSE), 'must be a single numeric value' ) }) exactextractr/tests/testthat/test_rasterize.R0000644000176200001440000000564314500103446021334 0ustar liggesusers# Copyright (c) 2022 ISciences, LLC. # All rights reserved. # # This software is licensed under the Apache License, Version 2.0 (the "License"). # You may not use this file except in compliance with the License. You may # obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
context('rasterize_polygons') test_that('value is assigned to polygon with greatest coverage area', { polys <- st_as_sf( data.frame(id = 1:3, geom = c( 'POLYGON ((10 0, 10 5, 5 5, 10 0))', 'POLYGON ((0 0, 10 0, 5 5, 1 10, 0 10, 0 0))', 'POLYGON ((5 5, 10 5, 10 10, 1 10, 5 5))' )), wkt = 'geom' ) rt <- terra::rast(xmin = 0, xmax = 10, ymin = 0, ymax = 10, res = 2) r <- rasterize_polygons(polys, rt) # lower-right is a tie, so it goes to the first feature encountered expect_equal( extract(r, cbind(9, 1))[[1]], 1) # center cell is touched by all three, goes to polygon that covers the greatest area expect_equal( extract(r, cbind(5, 5))[[1]], 2) }) test_that('min_coverage excludes cells with small coverage area', { rt <- terra::rast(xmin = 0, xmax = 10, ymin = 0, ymax = 10, res = 1) circ <- make_circle(5, 5, 3.5, crs=st_crs(rt)) circ_pieces <- st_sf(st_intersection(circ, st_make_grid(circ, 1))) cfrac <- coverage_fraction(rt, circ, crop = FALSE)[[1]] # by default, all touched cells are included in output r <- rasterize_polygons(circ_pieces, rt) expect_equal( values(cfrac) > 0, !is.na(values(r)) ) # min_coverage excludes cells with small coverage area r <- rasterize_polygons(circ_pieces, rt, min_coverage = 0.5) expect_equal( values(cfrac) > 0.5, !is.na(values(r)) ) }) test_that('input type is preserved', { rt <- terra::rast(xmin = 0, xmax = 10, ymin = 0, ymax = 10, res = 2) rr <- raster::raster(rt) circ <- st_sf(make_circle(5, 5, 3.5, crs=st_crs(rt))) r <- rasterize_polygons(circ, rt) expect_s4_class(r, 'SpatRaster') r <- rasterize_polygons(circ, rr) expect_s4_class(r, 'RasterLayer') }) test_that('no error when polygon does not intersect raster', { rt <- terra::rast(xmin = 0, xmax = 10, ymin = 0, ymax = 10, res = 2, crs=NA) circ <- st_sf(make_circle(500, 500, 3.5, crs=st_crs(rt))) r <- rasterize_polygons(circ, rt) expect_true(all(is.na(values(r)))) }) test_that('no error when polygon partially intersects raster', { rt <- terra::rast(xmin = 0, xmax = 10, ymin = 0, ymax = 
10, res = 2, crs=NA) circ <- st_sf(make_circle(10, 5, 3.5, crs=st_crs(rt))) expect_invisible( r <- rasterize_polygons(circ, rt) ) }) exactextractr/tests/testthat/test_exact_extract.R0000644000176200001440000013736314500103446022167 0ustar liggesusers# Copyright (c) 2018-2021 ISciences, LLC. # All rights reserved. # # This software is licensed under the Apache License, Version 2.0 (the "License"). # You may not use this file except in compliance with the License. You may # obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. library(testthat) library(exactextractr) context('exact_extract') test_that("Basic stat functions work", { # This test just verifies a successful journey from R # to C++ and back. The correctness of the algorithm # is tested at the C++ level. 
data <- matrix(1:9, nrow=3, byrow=TRUE) rast <- raster::raster(data, xmn=0, xmx=3, ymn=0, ymx=3, crs='+proj=longlat +datum=WGS84') square <- make_rect(0.5, 0.5, 2.5, 2.5, sf::st_crs(rast)) dat <- exact_extract(rast, square) # Calling without a function returns a data frame with values and coverage fractions expect_equal(dat[[1]], data.frame(value=1:9, coverage_fraction=c(0.25, 0.5, 0.25, 0.5, 1, 0.5, 0.25, 0.5, 0.25)) ) # Calling with a function(w, v) returns the result of the function expect_equal(exact_extract(rast, square, fun=weighted.mean), 5) # Calling with a string computes a named operation from the C++ library expect_equal(exact_extract(rast, square, fun='count'), 4) expect_equal(exact_extract(rast, square, fun='mean'), 5) expect_equal(exact_extract(rast, square, fun='median'), 5) expect_equal(exact_extract(rast, square, fun='quantile', quantiles=0.25), 3.5) expect_equal(exact_extract(rast, square, fun='quantile', quantiles=0.75), 6.5) expect_equal(exact_extract(rast, square, fun='min'), 1) expect_equal(exact_extract(rast, square, fun='max'), 9) expect_equal(exact_extract(rast, square, fun='mode'), 5) expect_equal(exact_extract(rast, square, fun='majority'), 5) expect_equal(exact_extract(rast, square, fun='minority'), 1) expect_equal(exact_extract(rast, square, fun='variety'), 9) expect_equal(exact_extract(rast, square, fun='variance'), 5) expect_equal(exact_extract(rast, square, fun='stdev'), sqrt(5)) expect_equal(exact_extract(rast, square, fun='coefficient_of_variation'), sqrt(5)/5) # Can also do multiple stats at once expect_equal(exact_extract(rast, square, fun=c('min', 'max', 'mode')), data.frame(min=1, max=9, mode=5)) expect_equal(exact_extract(rast, c(square, square), fun=c('min', 'max', 'mode'), progress = FALSE), data.frame(min=c(1, 1), max=c(9, 9), mode=c(5, 5))) }) test_that('Weighted stat functions work', { data <- matrix(1:9, nrow=3, byrow=TRUE) rast <- raster::raster(data, xmn=0, xmx=3, ymn=0, ymx=3, crs='+proj=longlat +datum=WGS84') 
equal_weights <- raster::raster(matrix(1, nrow=3, ncol=3), xmn=0, xmx=3, ymn=0, ymx=3, crs='+proj=longlat +datum=WGS84') bottom_row_only <- raster::raster(rbind(c(0, 0, 0), c(0, 0, 0), c(1, 1, 1)), xmn=0, xmx=3, ymn=0, ymx=3, crs='+proj=longlat +datum=WGS84') square <- make_rect(0.5, 0.5, 2.5, 2.5, sf::st_crs(rast)) # equal weights expect_equal(exact_extract(rast, square, 'weighted_mean', weights=equal_weights), exact_extract(rast, square, 'mean')) expect_equal(exact_extract(rast, square, 'weighted_sum', weights=equal_weights), exact_extract(rast, square, 'sum')) expect_equal(exact_extract(rast, square, 'weighted_stdev', weights=equal_weights), exact_extract(rast, square, 'stdev')) expect_equal(exact_extract(rast, square, 'weighted_variance', weights=equal_weights), exact_extract(rast, square, 'variance')) # unequal weights expect_equal(exact_extract(rast, square, 'weighted_mean', weights=bottom_row_only), (0.25*7 + 0.5*8 + 0.25*9)/(0.25 + 0.5 + 0.25)) expect_equal(exact_extract(rast, square, 'weighted_sum', weights=bottom_row_only), (0.25*7 + 0.5*8 + 0.25*9)) expect_equal(exact_extract(rast, square, 'weighted_stdev', weights=bottom_row_only), 0.7071068, tolerance = 1e-7) # Weighted.Desc.Stat::w.sd(x = c(7, 8, 9), mu = c(0.25, 0.5, 0.25))) expect_equal(exact_extract(rast, square, 'weighted_variance', weights=bottom_row_only), 0.5) # Weighted.Desc.Stat::w.var(x = c(7, 8, 9), mu = c(0.25, 0.5, 0.25))) }) test_that('Grouped stat functions work', { rast <- raster::raster(matrix(rep(1:3, each = 3), nrow=3, byrow=TRUE), xmn=0, xmx=3, ymn=0, ymx=3, crs='+proj=longlat +datum=WGS84') weights <- raster::raster(matrix(rep(3:1, each = 3), nrow = 3, byrow=TRUE), xmn=0, xmx=3, ymn=0, ymx=3, crs='+proj=longlat +datum=WGS84') square1 <- make_rect(0.5, 0.5, 1.0, 1.0, sf::st_crs(rast)) square2 <- make_rect(0.5, 0.5, 2.5, 2.5, sf::st_crs(rast)) squares <- c(square1, square2) expect_equal( exact_extract(rast, squares, c('count', 'frac'), progress = FALSE), rbind( data.frame(count = 
0.25, frac_1 = 0, frac_2 = 0, frac_3 = 1.00), data.frame(count = 4.00, frac_1 = 0.25, frac_2 = 0.5, frac_3 = 0.25))) expect_equal( exact_extract(rast, squares, c('weighted_frac', 'sum'), weights = weights, progress = FALSE), rbind( data.frame(weighted_frac_1 = 0, weighted_frac_2 = 0, weighted_frac_3 = 1, sum = 0.75), data.frame(weighted_frac_1 = 0.375, weighted_frac_2 = 0.5, weighted_frac_3 = 0.125, sum = 8) )) }) test_that('Grouped stat functions work (multilayer)', { rast <- raster::raster(matrix(rep(1:3, each = 3), nrow=3, byrow=TRUE), xmn=0, xmx=3, ymn=0, ymx=3, crs='+proj=longlat +datum=WGS84') rast <- raster::stack(list(a = rast, b = rast + 1)) weights <- raster::raster(matrix(rep(3:1, each = 3), nrow = 3, byrow=TRUE), xmn=0, xmx=3, ymn=0, ymx=3, crs='+proj=longlat +datum=WGS84') square1 <- make_rect(0.5, 0.5, 2.5, 2.5, sf::st_crs(rast)) square2 <- make_rect(0.5, 0.5, 1.0, 1.0, sf::st_crs(rast)) squares <- c(square1, square2) stats <- c('count', 'frac', 'quantile') quantiles <- c(0.25, 0.75) single_layer_a <- exact_extract(rast[['a']], squares, stats, quantiles = quantiles, progress = FALSE) single_layer_b <- exact_extract(rast[['b']], squares, stats, quantiles = quantiles, progress = FALSE) multi_layer <- exact_extract(rast, squares, stats, quantiles = quantiles, progress = FALSE) # for each layer (a, b) we get a column with the coverage fraction of each # value that occurs in a OR b. If a value does not occur for a given layer, # the values in its associated column will be zero. 
for (col in unique(c(names(single_layer_a), names(single_layer_b)))) { if (col %in% names(single_layer_a)) { expect_equal(single_layer_a[[col]], multi_layer[[paste(col, 'a', sep='.')]]) } else { expect_equal(c(0, 0), multi_layer[[paste(col, 'a', sep='.')]]) } if (col %in% names(single_layer_b)) { expect_equal(single_layer_b[[col]], multi_layer[[paste(col, 'b', sep='.')]]) } else { expect_equal(c(0, 0), multi_layer[[paste(col, 'b', sep='.')]]) } } }) test_that('Raster NA values are correctly handled', { data <- matrix(1:100, nrow=10, byrow=TRUE) data[7:10, 1:4] <- NA # cut out lower-left corner rast <- raster::raster(data, xmn=0, xmx=10, ymn=0, ymx=10, crs='+proj=longlat +datum=WGS84') # check polygon entirely within NA region circ <- sf::st_sfc(sf::st_buffer(sf::st_point(c(2,2)), 0.9), crs=sf::st_crs(rast)) expect_equal(0, exact_extract(rast, circ, 'count')) expect_equal(NA_real_, exact_extract(rast, circ, 'mean')) expect_equal(NA_real_, exact_extract(rast, circ, weighted.mean)) # check polygon partially within NA region square <- make_rect(3.5, 3.5, 4.5, 4.5, sf::st_crs(rast)) expect_equal(43.5, exact_extract(rast, square, 'sum')) expect_equal(NA_real_, exact_extract(rast, square, weighted.mean)) expect_equal(58, exact_extract(rast, square, weighted.mean, na.rm=TRUE)) }) test_that('MultiPolygons also work', { data <- matrix(1:100, nrow=10, byrow=TRUE) rast <- raster::raster(data, xmn=0, xmx=10, ymn=0, ymx=10, crs='+proj=longlat +datum=WGS84') multipoly <- sf::st_sfc( sf::st_multipolygon(list( sf::st_polygon( list( matrix( c(0.5, 0.5, 2.5, 0.5, 2.5, 2.5, 0.5, 2.5, 0.5, 0.5), ncol=2, byrow=TRUE))), sf::st_polygon( list( matrix( 4 + c(0.5, 0.5, 2.5, 0.5, 2.5, 2.5, 0.5, 2.5, 0.5, 0.5), ncol=2, byrow=TRUE))))), crs=sf::st_crs(rast)) expect_equal(exact_extract(rast, multipoly, fun='variety'), 18) }) test_that('sp inputs supported', { rast <- make_square_raster(1:100) circles <- c( make_circle(3, 2, 4, sf::st_crs(rast)), make_circle(7, 7, 2, sf::st_crs(rast)) ) 
circles_sf <- sf::st_sf(id = 1:2, geometry = circles) result <- exact_extract(rast, circles, 'mean', progress = FALSE) # SpatialPolygons circles_sp <- sf::as_Spatial(circles) result_sp <- exact_extract(rast, circles_sp, 'mean', progress = FALSE) expect_equal(result, result_sp) # SpatialPolygonsDataFrame circles_spdf <- sf::as_Spatial(circles_sf) result_spdf <- exact_extract(rast, circles_spdf, 'mean', progress = FALSE) expect_equal(result, result_spdf) }) test_that('Generic sfc_GEOMETRY works if the features are polygonal', { rast <- make_square_raster(1:100) polys <- st_as_sfc(c('POLYGON ((0 0, 2 0, 2 2, 0 2, 0 0))', 'MULTIPOLYGON (((2 2, 4 2, 4 4, 2 4, 2 2)), ((4 4, 8 4, 8 8, 4 8, 4 4)))'), crs=sf::st_crs(rast)) expect_equal(exact_extract(rast, polys, 'count', progress = FALSE), c(4, 4+16)) }) test_that('GeometryCollections are supported if they are polygonal', { rast <- make_square_raster(1:100) gc <- st_as_sfc('GEOMETRYCOLLECTION( POLYGON ((0 0, 2 0, 2 2, 0 2, 0 0)), POLYGON ((2 2, 4 2, 4 4, 2 4, 2 2)))', crs = st_crs(rast)) mp <- st_as_sfc('MULTIPOLYGON (((0 0, 2 0, 2 2, 0 2, 0 0)), ((2 2, 4 2, 4 4, 2 4, 2 2)))', crs = st_crs(rast)) expect_equal( exact_extract(rast, gc), exact_extract(rast, mp) ) }) test_that('We ignore portions of the polygon that extend outside the raster', { rast <- raster::raster(matrix(1:(360*720), nrow=360), xmn=-180, xmx=180, ymn=-90, ymx=90, crs='+proj=longlat +datum=WGS84') rect <- make_rect(179.5, 0, 180.5, 1, sf::st_crs(rast)) cells_included <- exact_extract(rast, rect, include_xy=TRUE)[[1]][, c('x', 'y')] expect_equal(cells_included, data.frame(x=179.75, y=c(0.75, 0.25)), check.attributes=FALSE) index_included <- exact_extract(rast, rect, include_xy=TRUE, include_cell = TRUE)[[1]][, c('x', 'y', 'cell')] expect_equivalent(as.matrix(cells_included[c("x", "y")]), raster::xyFromCell(rast, index_included$cell)) expect_equal(index_included$cell, raster::cellFromXY(rast, cbind(cells_included$x, cells_included$y))) }) test_that('Additional 
arguments can be passed to fun', { data <- matrix(1:9, nrow=3, byrow=TRUE) rast <- raster::raster(data, xmn=0, xmx=3, ymn=0, ymx=3, crs='+proj=longlat +datum=WGS84') square <- make_rect(0.5, 0.5, 2.5, 2.5, sf::st_crs(rast)) exact_extract(rast, square, function(x, w, custom) { expect_equal(custom, 6) }, progress=FALSE, 6) }) test_that('We can extract values from a RasterStack', { rast <- raster::raster(matrix(1:16, nrow=4, byrow=TRUE), xmn=0, xmx=4, ymn=0, ymx=4, crs='+proj=longlat +datum=WGS84') stk <- raster::stack(rast, sqrt(rast)) square <- make_rect(0.5, 0.5, 2.5, 2.5, sf::st_crs(rast)) extracted <- exact_extract(stk, square)[[1]] expect_equal(names(extracted), c('layer.1', 'layer.2', 'coverage_fraction')) expect_equal(extracted[, 'layer.2'], sqrt(extracted[, 'layer.1'])) expect_equal(extracted[extracted$coverage_fraction==0.25, 'layer.1'], c(5, 7, 13, 15)) expect_equal(extracted[extracted$coverage_fraction==0.50, 'layer.1'], c(6, 9, 11, 14)) expect_equal(extracted[extracted$coverage_fraction==1.00, 'layer.1'], 10) }) test_that('We can pass extracted RasterStack values to an R function', { population <- raster::raster(matrix(1:16, nrow=4, byrow=TRUE), xmn=0, xmx=4, ymn=0, ymx=4, crs='+proj=longlat +datum=WGS84') income <- sqrt(population) square <- make_rect(0.5, 0.5, 2.5, 2.5, sf::st_crs(population)) mean_income <- exact_extract(raster::stack(list(population=population, income=income)), square, function(vals, weights) { weighted.mean(vals[, 'population']*vals[, 'income'], weights) }) expect_equal(mean_income, 32.64279, tolerance=1e-5) }) test_that('We can pass extracted RasterStack values to a C++ function', { rast <- raster::raster(matrix(runif(16), nrow=4), xmn=0, xmx=4, ymn=0, ymx=4, crs='+proj=longlat +datum=WGS84') square <- make_rect(0.5, 0.5, 2.5, 2.5, sf::st_crs(rast)) stk <- raster::stack(list(a=rast, b=sqrt(rast))) brk <- raster::brick(stk) for (input in c(stk, brk)) { expect_equal( exact_extract(input, square, 'variety'), data.frame(variety.a=9, 
variety.b=9) ) twostats <- exact_extract(input, square, c('variety', 'mean')) expect_equal(nrow(twostats), 1) expect_named(twostats, c('variety.a', 'variety.b', 'mean.a', 'mean.b')) } }) test_that('We can apply the same function to each layer of a RasterStack', { set.seed(123) stk <- raster::stack(list(a = make_square_raster(runif(100)), b = make_square_raster(runif(100)))) circles <- c( make_circle(5, 4, 2, sf::st_crs(stk)), make_circle(3, 1, 1, sf::st_crs(stk))) # by default layers are processed together expect_error( exact_extract(stk, circles, weighted.mean, progress=FALSE), 'must have the same length' ) # but we can process them independently with stack_apply means <- exact_extract(stk, circles, weighted.mean, progress=FALSE, stack_apply=TRUE) expect_named(means, c('weighted.mean.a', 'weighted.mean.b')) # results are same as we would get by processing layers independently for (i in 1:raster::nlayers(stk)) { expect_equal(means[, i], exact_extract(stk[[i]], circles, weighted.mean, progress=FALSE)) } }) test_that('Layers of a RasterBrick can be processed independently with stack_apply', { # https://github.com/isciences/exactextractr/issues/54 data <- matrix(1:100, nrow=10, byrow=TRUE) data[7:10, 1:4] <- NA # cut out lower-left corner rast <- raster::raster( data, xmn=0, xmx=10, ymn=0, ymx=10, crs='+proj=longlat +datum=WGS84' ) rast_brick <- brick(rast, rast) square <- make_rect(3.5, 3.5, 4.5, 4.5, sf::st_crs(rast)) expect_equal( exact_extract(rast_brick, square, weighted.mean, stack_apply = T), data.frame(weighted.mean.layer.1 = NA_real_, weighted.mean.layer.2 = NA_real_)) }) test_that('We can summarize a RasterStack / RasterBrick using weights from a RasterLayer', { set.seed(123) stk <- raster::stack(list(a = make_square_raster(1:100), b = make_square_raster(101:200))) weights <- make_square_raster(runif(100)) circle <- make_circle(5, 4, 2, sf::st_crs(stk)) # same weights get used for both expect_equal(exact_extract(stk, circle, 'weighted_mean', 
weights=weights), data.frame(weighted_mean.a = 63.0014, weighted_mean.b = 163.0014), tolerance=1e-6) # error when trying to use a non-raster as weights expect_error(exact_extract(stk, circle, 'weighted_mean', weights='stk'), "Weights must be a Raster") }) test_that('We get acceptable default values when processing a polygon that does not intersect the raster', { rast <- raster::raster(matrix(runif(100), nrow=5), xmn=-180, xmx=180, ymn=-65, ymx=85, crs='+proj=longlat +datum=WGS84') # extent of GPW poly <- make_rect(-180, -90, 180, -65.5, sf::st_crs(rast)) # extent of Antarctica in Natural Earth # RasterLayer expect_equal(list(data.frame(value=numeric(), coverage_fraction=numeric())), exact_extract(rast, poly)) expect_equal(list(data.frame(value=numeric(), x=numeric(), y=numeric(), cell=numeric(), coverage_fraction=numeric())), exact_extract(rast, poly, include_xy=TRUE, include_cell=TRUE)) expect_equal(0, exact_extract(rast, poly, function(x, c) sum(x))) expect_equal(0, exact_extract(rast, poly, 'count')) expect_equal(0, exact_extract(rast, poly, 'sum')) expect_equal(0, exact_extract(rast, poly, 'variety')) expect_equal(NA_real_, exact_extract(rast, poly, 'majority')) expect_equal(NA_real_, exact_extract(rast, poly, 'minority')) expect_equal(NA_real_, exact_extract(rast, poly, 'minority')) expect_equal(NA_real_, exact_extract(rast, poly, 'mean')) expect_equal(NA_real_, exact_extract(rast, poly, 'min')) expect_equal(NA_real_, exact_extract(rast, poly, 'max')) # RasterStack rast2 <- as.integer(rast) raster::dataType(rast2) <- 'INT4S' stk <- raster::stack(list(q=rast, xi=rast2, area=raster::area(rast))) expect_equal(list(data.frame(q=numeric(), xi=integer(), area=numeric(), coverage_fraction=numeric())), exact_extract(stk, poly)) expect_equal(list(data.frame(q=numeric(), xi=integer(), area=numeric(), x=numeric(), y=numeric(), cell=numeric(), coverage_fraction=numeric())), exact_extract(stk, poly, include_xy=TRUE, include_cell=TRUE)) exact_extract(stk, poly, 
function(values, cov) { expect_equal(values, data.frame(q=numeric(), xi=integer(), area=numeric())) expect_equal(cov, numeric()) }) }) test_that('Coverage area can be output instead of coverage fraction (projected)', { rast_utm <- disaggregate(make_square_raster(1:100), c(2, 3)) circle <- make_circle(5, 5, 5, crs=st_crs(rast_utm)) df_frac <- exact_extract(rast_utm, circle, include_area = TRUE)[[1]] df_area <- exact_extract(rast_utm, circle, coverage_area = TRUE)[[1]] expect_named(df_area, c('value', 'coverage_area')) expect_equal(df_frac$coverage_fraction * df_frac$area, df_area$coverage_area) }) test_that('Coverage area can be output instead of coverage fraction (geographic)', { rast <- raster::raster(matrix(1:54000, ncol=360), xmn=-180, xmx=180, ymn=-65, ymx=85, crs='+proj=longlat +datum=WGS84') suppressMessages({ circle <- make_circle(0, 45, 15, crs=st_crs(rast)) }) df_frac <- exact_extract(rast, circle, include_area = TRUE)[[1]] df_area <- exact_extract(rast, circle, coverage_area = TRUE)[[1]] expect_equal(df_frac$coverage_fraction * df_frac$area, df_area$coverage_area) }) test_that('coverage_area argument can be used with named summary operations', { rast1 <- raster(matrix(1:54000, ncol=360), xmn=-180, xmx=180, ymn=-65, ymx=85, crs='+proj=longlat +datum=WGS84') rast2 <- sqrt(rast1) suppressMessages({ circle <- make_circle(0, 45, 15, crs=st_crs(rast1)) }) # using only area as weighting expect_equal(exact_extract(rast1, circle, 'weighted_mean', weights = 'area'), exact_extract(rast1, circle, 'mean', coverage_area = TRUE)) # using area x weight as weighting expect_equal( exact_extract(rast1, circle, 'weighted_mean', weights = rast2, coverage_area = TRUE), exact_extract(rast1, circle, fun = function(x, cov, w) { weighted.mean(x, cov * w$rast2 * w$area) }, weights = stack(list(rast2 = rast2, area = area(rast2)))), tol = 1e-2 ) }) test_that('We can weight with cell areas (projected coordinates)', { rast_utm <- raster(matrix(1:100, ncol=10), xmn=0, xmx=5, ymn=0, 
ymx=5, crs='+init=epsg:26918') circle1 <- make_circle(5, 5, 5, crs=st_crs(rast_utm)) # for projected (Cartesian coordinates), means with cell area and # coverage fraction are the same expect_equal(exact_extract(rast_utm, circle1, 'mean'), exact_extract(rast_utm, circle1, 'weighted_mean', weights='area')) # same result with R summary function expect_equal( exact_extract(rast_utm, circle1, 'weighted_mean', weights='area'), exact_extract(rast_utm, circle1, function(x,c,w) { weighted.mean(x, c*w) }, weights='area'), 1e-5 ) # name doesn't pop out in data frame columns expect_named( exact_extract(rast_utm, circle1, c('sum', 'weighted_mean'), weights='area', force_df = TRUE), c('sum', 'weighted_mean')) # sums differ by the cell area expect_equal(prod(res(rast_utm)) * exact_extract(rast_utm, circle1, 'sum'), exact_extract(rast_utm, circle1, 'weighted_sum', weights='area')) # when using area weighting, disaggregating does not affect the sum expect_equal(exact_extract(rast_utm, circle1, 'weighted_sum', weights='area'), exact_extract(disaggregate(rast_utm, 8), circle1, 'weighted_sum', weights='area')) }) test_that('We can weight with cell areas (geographic coordinates)', { rast <- raster::raster(matrix(1:54000, ncol=360), xmn=-180, xmx=180, ymn=-65, ymx=85, crs='+proj=longlat +datum=WGS84') accuracy_pct_tol <- 0.01 suppressMessages({ circle <- make_circle(0, 45, 15, crs=st_crs(rast)) }) # result is reasonably close to what we get with raster::area, which uses # a geodesic calculation expected <- exact_extract(rast, circle, 'weighted_sum', weights = area(rast) * 1e6) actual <- exact_extract(rast, circle, 'weighted_sum', weights = 'area') expect_true(abs(actual - expected) / expected < accuracy_pct_tol) }) test_that('Correct results obtained when max_cells_in_memory is limited', { rast <- make_square_raster(1:100) poly <- make_circle(5, 5, 3, sf::st_crs(rast)) expect_equal(exact_extract(rast, poly, 'mean'), exact_extract(rast, poly, 'mean', max_cells_in_memory=1)) }) 
test_that('Weighted stats work when polygon is contained in weight raster but only partially contained in value raster', { values <- raster(matrix(1:15, nrow=3, ncol=5, byrow=TRUE), xmn=0, xmx=5, ymn=2, ymx=5) weights <- raster(sqrt(matrix(1:25, nrow=5, ncol=5, byrow=TRUE)), xmn=0, xmx=5, ymn=0, ymx=5) poly <- make_circle(2.1, 2.1, 1, NA_real_) value_tbl <- exact_extract(values, poly, include_xy=TRUE)[[1]] weight_tbl <- exact_extract(weights, poly, include_xy=TRUE)[[1]] tbl <- merge(value_tbl, weight_tbl, by=c('x', 'y')) expect_equal( exact_extract(values, poly, 'weighted_mean', weights=weights), weighted.mean(tbl$value.x, tbl$coverage_fraction.x * tbl$value.y), tol=1e-6 ) }) test_that('When part of a polygon is within the value raster but not the weighting raster, values for unweighted stats requested at the same time as weighted stats are correct', { values <- raster(matrix(1:25, nrow=5, ncol=5, byrow=TRUE), xmn=0, xmx=5, ymn=0, ymx=5) weights <- raster(sqrt(matrix(1:15, nrow=3, ncol=5, byrow=TRUE)), xmn=0, xmx=5, ymn=2, ymx=5) poly <- make_circle(2.1, 2.1, 1, NA_real_) expect_equal( exact_extract(values, poly, 'sum'), exact_extract(values, poly, c('sum', 'weighted_mean'), weights=weights)$sum ) }) test_that('When polygon is entirely outside the value raster and entirely within the weighting raster, we get NA instead of an exception', { values <- raster(matrix(1:25, nrow=5, ncol=5, byrow=TRUE), xmn=5, xmx=10, ymn=5, ymx=10) weights <- raster(matrix(1:10, nrow=10, ncol=10, byrow=TRUE), xmn=0, xmx=10, ymn=0, ymx=10) poly <- make_circle(2.1, 2.1, 1, NA_real_) expect_equal(NA_real_, exact_extract(values, poly, 'weighted_mean', weights=weights)) }) test_that('Z dimension is ignored, if present', { # see https://github.com/isciences/exactextractr/issues/26 poly <- st_as_sfc('POLYGON Z ((1 1 0, 4 1 0, 4 4 0, 1 1 0))') values <- raster(matrix(1:25, nrow=5, ncol=5, byrow=TRUE), xmn=0, xmx=5, ymn=0, ymx=5) expect_equal(exact_extract(values, poly, 'sum'), 70.5) # CPP code 
path expect_equal(exact_extract(values, poly, function(x,f) sum(x*f)), 70.5) # R code path }) test_that('No error thrown when weighting with different resolution grid (regression)', { poly <- st_as_sfc(structure(list( '01060000000200000001030000000100000008000000065bb0055b7866401c222222223233c0454444242e776640338ee338842d33c0abaaaacac0776640338ee338962733c0676666469f776640a4aaaaaa362033c03a8ee3784f7866404f555555a41c33c0a64ffa840b7966406c1cc771522133c0454444645a796640f4a44ffa9c2b33c0065bb0055b7866401c222222223233c0010300000001000000080000004b9ff4499f7c6640a3aaaaaaaaaa32c0bdbbbb3b747a6640f8ffff7f549632c0ea933e09aa7b664004b6608b399132c0b1055bb0637e6640dc388e63278f32c0d9822d58827e6640dc388ee3109432c09a999979837c6640590bb660159c32c0676666867c7d664070777777039c32c04b9ff4499f7c6640a3aaaaaaaaaa32c0'), class='WKB'), EWKB=TRUE) v <- raster(matrix(1:360*720, nrow=360, ncol=720), xmn=-180, xmx=180, ymn=-90, ymx=90) w <- raster(matrix(1:360*720*36, nrow=360*6, ncol=720*6), xmn=-180, xmx=180, ymn=-90, ymx=90) exact_extract(v, poly, 'weighted_sum', weights=w) succeed() }) test_that('when force_df = TRUE, exact_extract always returns a data frame', { rast <- make_square_raster(1:100) names(rast) <- 'z' poly <- c(make_circle(5, 5, 3, sf::st_crs(rast)), make_circle(3, 1, 1, sf::st_crs(rast))) vals <- exact_extract(rast, poly, 'mean', progress=FALSE) # named summary operation vals_df <- exact_extract(rast, poly, 'mean', force_df=TRUE, progress=FALSE) expect_s3_class(vals_df, 'data.frame') expect_equal(vals, vals_df[['mean']]) # R function vals2_df <- exact_extract(rast, poly, weighted.mean, force_df=TRUE, progress=FALSE) expect_s3_class(vals2_df, 'data.frame') expect_equal(vals, vals2_df[['result']], tol=1e-6) }) test_that('We can include the input raster name in column names even if the input raster has only one layer', { rast <- make_square_raster(1:100) names(rast) <- 'z' poly <- c(make_circle(5, 5, 3, sf::st_crs(rast)), make_circle(3, 1, 1, sf::st_crs(rast))) vals <- 
exact_extract(rast, poly, c('mean', 'sum'), progress=FALSE) expect_named(vals, c('mean', 'sum')) # named summary operations vals_named <- exact_extract(rast, poly, c('mean', 'sum'), full_colnames=TRUE, progress=FALSE) expect_named(vals_named, c('mean.z', 'sum.z')) }) test_that('We can summarize a categorical raster by returning a data frame from a custom function', { set.seed(456) # smaller circle does not have class 5 classes <- c(1, 2, 3, 5) rast <- raster::raster(xmn = 0, xmx = 10, ymn = 0, ymx = 10, res = 1) values(rast) <- sample(classes, length(rast), replace = TRUE) circles <- c( make_circle(5, 4, 2, sf::st_crs(rast)), make_circle(3, 1, 1, sf::st_crs(rast))) # approach 1: classes known in advance result <- exact_extract(rast, circles, function(x, c) { row <- lapply(classes, function(cls) sum(c[x == cls])) names(row) <- paste('sum', classes, sep='_') do.call(data.frame, row) }, progress = FALSE) expect_named(result, c('sum_1', 'sum_2', 'sum_3', 'sum_5')) # check a single value expect_equal(result[2, 'sum_3'], exact_extract(rast, circles[2], function(x, c) { sum(c[x == 3]) })) if (requireNamespace('dplyr', quietly = TRUE)) { # approach 2: classes not known in advance (requires dplyr::bind_rows) result2 <- exact_extract(rast, circles, function(x, c) { found_classes <- unique(x) row <- lapply(found_classes, function(cls) sum(c[x == cls])) names(row) <- paste('sum', found_classes, sep='_') do.call(data.frame, row) }, progress = FALSE) for (colname in names(result)) { expect_equal(result[[colname]], dplyr::coalesce(result2[[colname]], 0)) } } }) test_that('We can append columns from the source data frame in the results', { rast <- make_square_raster(1:100) circles <- st_sf( fid = c(2, 9), size = c('large', 'small'), geometry = c( make_circle(5, 4, 2, sf::st_crs(rast)), make_circle(3, 1, 1, sf::st_crs(rast)))) result_1 <- exact_extract(rast, circles, 'mean', append_cols = c('size', 'fid'), progress = FALSE) expect_named(result_1, c('size', 'fid', 'mean')) result_2 
<- exact_extract(rast, circles, weighted.mean, append_cols = c('size', 'fid'), progress = FALSE) # result_2 won't be identical to result_1 because the column names are different # instead, check that the naming is consistent with what we get from the force_df argument expect_identical(result_2, cbind(sf::st_drop_geometry(circles[, c('size', 'fid')]), exact_extract(rast, circles, weighted.mean, force_df = TRUE, progress = FALSE))) }) test_that('We can get multiple quantiles with the "quantiles" argument', { rast <- make_square_raster(1:100) circles <- st_sf( fid = c(2, 9), size = c('large', 'small'), geometry = c( make_circle(5, 4, 2, sf::st_crs(rast)), make_circle(3, 1, 1, sf::st_crs(rast)))) result <- exact_extract(rast, circles, 'quantile', quantiles=c(0.25, 0.50, 0.75), progress=FALSE) expect_true(inherits(result, 'data.frame')) expect_named(result, c('q25', 'q50', 'q75')) }) test_that('Both value and weighting rasters can be a stack', { vals <- stack(replicate(3, make_square_raster(runif(100)))) names(vals) <- c('a', 'b', 'c') weights <- stack(replicate(2, make_square_raster(rbinom(100, 2, 0.5)))) names(weights) <- c('w1', 'w2') circle <- make_circle(2, 7, 3, sf::st_crs(vals)) extracted <- exact_extract(vals, circle, weights=weights)[[1]] expect_named(extracted, c('a', 'b', 'c', 'w1', 'w2', 'coverage_fraction')) # stack of values, stack of weights: both passed as data frames exact_extract(vals, circle, function(v, c, w) { expect_true(is.data.frame(v)) expect_true(is.data.frame(w)) expect_named(v, names(vals)) expect_named(w, names(weights)) }, weights = weights) # stack of values, single layer of weights: weights passed as vector exact_extract(vals, circle, function(v, c, w) { expect_true(is.data.frame(v)) expect_true(is.vector(w)) }, weights = weights[[1]]) # single layer of values, stack of weights: values passed as vector exact_extract(vals[[1]], circle, function(v, c, w) { expect_true(is.vector(v)) expect_true(is.data.frame(w)) }, weights = weights) # 
single layer of values, single layer of weights: both passed as vector exact_extract(vals[[1]], circle, function(v, c, w) { expect_true(is.vector(v)) expect_true(is.vector(w)) }, weights = weights[[1]]) }) test_that('Named summary operations support both stacks of values and weights', { vals <- stack(replicate(3, make_square_raster(runif(100)))) names(vals) <- c('v1', 'v2', 'v3') weights <- stack(replicate(3, make_square_raster(rbinom(100, 2, 0.5)))) names(weights) <- c('w1', 'w2', 'w3') circle <- make_circle(2, 7, 3, sf::st_crs(vals)) stats <- c('sum', 'weighted_mean') # stack of values, stack of weights: values and weights are applied pairwise result <- exact_extract(vals, circle, stats, weights=weights) expect_named(result, c( 'sum.v1', 'sum.v2', 'sum.v3', 'weighted_mean.v1.w1', 'weighted_mean.v2.w2', 'weighted_mean.v3.w3')) expect_equal(result$sum.v1, exact_extract(vals[[1]], circle, 'sum')) expect_equal(result$sum.v2, exact_extract(vals[[2]], circle, 'sum')) expect_equal(result$sum.v3, exact_extract(vals[[3]], circle, 'sum')) expect_equal(result$weighted_mean.v1, exact_extract(vals[[1]], circle, 'weighted_mean', weights=weights[[1]])) expect_equal(result$weighted_mean.v2, exact_extract(vals[[2]], circle, 'weighted_mean', weights=weights[[2]])) expect_equal(result$weighted_mean.v3, exact_extract(vals[[3]], circle, 'weighted_mean', weights=weights[[3]])) # stack of values, layer of weights: weights are recycled result <- exact_extract(vals, circle, stats, weights=weights[[1]]) expect_named(result, c( 'sum.v1', 'sum.v2', 'sum.v3', 'weighted_mean.v1', 'weighted_mean.v2', 'weighted_mean.v3')) expect_equal(result$sum.v1, exact_extract(vals[[1]], circle, 'sum')) expect_equal(result$sum.v2, exact_extract(vals[[2]], circle, 'sum')) expect_equal(result$sum.v3, exact_extract(vals[[3]], circle, 'sum')) expect_equal(result$weighted_mean.v1, exact_extract(vals[[1]], circle, 'weighted_mean', weights=weights[[1]])) expect_equal(result$weighted_mean.v2, 
exact_extract(vals[[2]], circle, 'weighted_mean', weights=weights[[1]])) expect_equal(result$weighted_mean.v3, exact_extract(vals[[3]], circle, 'weighted_mean', weights=weights[[1]])) # layer of values, stack of weights: values are recycled result <- exact_extract(vals[[3]], circle, stats, weights=weights) expect_named(result, c('sum', 'weighted_mean.w1', 'weighted_mean.w2', 'weighted_mean.w3')) expect_equal(result$sum, exact_extract(vals[[3]], circle, 'sum')) expect_equal(result$weighted_mean.w1, exact_extract(vals[[3]], circle, 'weighted_mean', weights=weights[[1]])) expect_equal(result$weighted_mean.w2, exact_extract(vals[[3]], circle, 'weighted_mean', weights=weights[[2]])) expect_equal(result$weighted_mean.w3, exact_extract(vals[[3]], circle, 'weighted_mean', weights=weights[[3]])) }) test_that('We can use stack_apply with both values and weights', { vals <- stack(replicate(3, make_square_raster(runif(100)))) names(vals) <- c('v1', 'v2', 'v3') weights <- stack(replicate(3, make_square_raster(rbinom(100, 2, 0.5)))) names(weights) <- c('w1', 'w2', 'w3') circle <- make_circle(2, 7, 3, sf::st_crs(vals)) weighted_mean <- function(v, c, w) { expect_equal(length(v), length(c)) expect_equal(length(v), length(w)) weighted.mean(v, c*w) } # stack of values, stack of weights: values and weights are applied pairwise result <- exact_extract(vals, circle, weighted_mean, weights = weights, stack_apply = TRUE) expect_named(result, c('fun.v1.w1', 'fun.v2.w2', 'fun.v3.w3')) expect_equal(result$fun.v2.w2, exact_extract(vals[[2]], circle, 'weighted_mean', weights=weights[[2]]), tol = 1e-6) # stack of values, layer of weights: weights are recycled result <- exact_extract(vals, circle, weighted_mean, weights = weights[[2]], stack_apply = TRUE, full_colnames = TRUE) expect_named(result, c('fun.v1.w2', 'fun.v2.w2', 'fun.v3.w2')) expect_equal(result$fun.v1.w2, exact_extract(vals[[1]], circle, 'weighted_mean', weights=weights[[2]]), tol = 1e-6) # layer of values, stack of weights: 
values are recycled result <- exact_extract(vals[[3]], circle, weighted_mean, weights = weights, stack_apply = TRUE, full_colnames = TRUE) expect_named(result, c('fun.v3.w1', 'fun.v3.w2', 'fun.v3.w3')) expect_equal(result$fun.v3.w1, exact_extract(vals[[3]], circle, 'weighted_mean', weights=weights[[1]]), tol = 1e-6) }) test_that('Layers are implicitly renamed if value layers have same name as weight layers', { # this happens when a stack is created and no names are provided # raster package assigns layer.1, layer.2 # here we assign our own identical names to avoid relying on raster package # implementation detail vals <- stack(replicate(2, make_square_raster(runif(100)))) names(vals) <- c('a', 'b') weights <- stack(replicate(2, make_square_raster(runif(100)))) names(weights) <- c('a', 'b') circle <- make_circle(2, 7, 3, sf::st_crs(vals)) result <- exact_extract(vals, circle, weights=weights)[[1]] expect_named(result, c('a', 'b', 'a.1', 'b.1', 'coverage_fraction')) }) test_that('Progress bar updates incrementally', { rast <- make_square_raster(1:100) npolys <- 13 polys <- st_sf(fid = seq_len(npolys), geometry = st_sfc(replicate(npolys, { x <- runif(1, min=0, max=10) y <- runif(1, min=0, max=10) r <- runif(1, min=0, max=2) make_circle(x, y, r, crs=sf::st_crs(rast)) }), crs=sf::st_crs(rast))) for (fun in list('sum', weighted.mean)) { for (input in list(polys, sf::st_geometry(polys))) { output <- capture.output(q <- exact_extract(rast, input, fun)) lines <- strsplit(output, '\r', fixed=TRUE)[[1]] numlines <- lines[endsWith(lines, '%')] len <- nchar(numlines[1]) pcts <- as.integer(substr(numlines, len - 3, len - 1)) expect_length(pcts, 1 + npolys) expect_equal(pcts[1], 0) expect_equal(pcts[length(pcts)], 100) expect_false(is.unsorted(pcts)) } } }) test_that('generated column names follow expected pattern', { values <- c('v1', 'v2', 'v3') weights <- c('w1', 'w2', 'w3') stats <- c('mean', 'weighted_mean') test_mean <- function(x, c) { weighted.mean(x, c) } # layer of
values, no weights # named summary operations expect_equal(.resultColNames(values[[2]], NULL, c('mean', 'sum'), TRUE), c('mean.v2', 'sum.v2')) expect_equal(.resultColNames(values[[2]], NULL, c('mean', 'sum'), FALSE), c('mean', 'sum')) # generic method (we can recover its name) expect_equal(.resultColNames(values[[2]], NULL, weighted.mean, TRUE), 'weighted.mean.v2') expect_equal(.resultColNames(values[[2]], NULL, weighted.mean, FALSE), 'weighted.mean') # regular function (we can't recover its name) expect_equal(.resultColNames(values[[2]], NULL, test_mean, TRUE), 'fun.v2') expect_equal(.resultColNames(values[[2]], NULL, test_mean, FALSE), 'fun') # stack of values, no weights for (full_colnames in c(TRUE, FALSE)) { expect_equal(.resultColNames(values, NULL, c('mean', 'sum'), full_colnames), c('mean.v1', 'mean.v2', 'mean.v3', 'sum.v1', 'sum.v2', 'sum.v3')) expect_equal(.resultColNames(values, NULL, test_mean, full_colnames), c('fun.v1', 'fun.v2', 'fun.v3')) } # values, weights processed in parallel for (full_colnames in c(TRUE, FALSE)) { expect_equal(.resultColNames(values, weights, stats, full_colnames), c('mean.v1', 'mean.v2', 'mean.v3', 'weighted_mean.v1.w1', 'weighted_mean.v2.w2', 'weighted_mean.v3.w3')) expect_equal(.resultColNames(values, weights, test_mean, full_colnames), c('fun.v1.w1', 'fun.v2.w2', 'fun.v3.w3')) } # values recycled (full names) expect_equal(.resultColNames(values[1], weights, stats, TRUE), c('mean.v1', 'mean.v1', 'mean.v1', 'weighted_mean.v1.w1', 'weighted_mean.v1.w2', 'weighted_mean.v1.w3')) expect_equal(.resultColNames(values[1], weights, test_mean, TRUE), c('fun.v1.w1', 'fun.v1.w2', 'fun.v1.w3')) expect_equal(.resultColNames(values[1], weights, 'weighted_frac', full_colnames = TRUE, unique_values = c(4, 8)), c('weighted_frac_4.v1.w1', 'weighted_frac_4.v1.w2', 'weighted_frac_4.v1.w3', 'weighted_frac_8.v1.w1', 'weighted_frac_8.v1.w2', 'weighted_frac_8.v1.w3')) # here the values are always the same so we don't bother adding them to the names 
expect_equal(.resultColNames(values[1], weights, stats, FALSE), c('mean', 'mean', 'mean', 'weighted_mean.w1', 'weighted_mean.w2', 'weighted_mean.w3')) expect_equal(.resultColNames(values[1], weights, test_mean, FALSE), c('fun.w1', 'fun.w2', 'fun.w3')) # weights recycled (full names) expect_equal(.resultColNames(values, weights[1], stats, TRUE), c('mean.v1', 'mean.v2', 'mean.v3', 'weighted_mean.v1.w1', 'weighted_mean.v2.w1', 'weighted_mean.v3.w1')) expect_equal(.resultColNames(values, weights[1], test_mean, TRUE), c('fun.v1.w1', 'fun.v2.w1', 'fun.v3.w1')) # here the weights are always the same so we don't bother adding them to the name expect_equal(.resultColNames(values, weights[1], stats, FALSE), c('mean.v1', 'mean.v2', 'mean.v3', 'weighted_mean.v1', 'weighted_mean.v2', 'weighted_mean.v3')) expect_equal(.resultColNames(values, weights[1], test_mean, FALSE), c('fun.v1', 'fun.v2', 'fun.v3')) # custom colnames_fun expect_equal( .resultColNames(values, weights[1], stats, full_colnames = FALSE, colname_fun = function(fun_name, values, weights, ...) 
{ paste(weights, values, fun_name, sep = '-') }), c('NA-v1-mean', 'NA-v2-mean', 'NA-v3-mean', 'w1-v1-weighted_mean', 'w1-v2-weighted_mean', 'w1-v3-weighted_mean') ) }) test_that('We can replace NA values in the value and weighting rasters with constants', { set.seed(05401) x <- runif(100) x[sample(length(x), 0.5*length(x))] <- NA y <- runif(100) y[sample(length(y), 0.5*length(y))] <- NA rx <- make_square_raster(x) ry <- make_square_raster(y) poly <- make_circle(4.5, 4.8, 4, crs=st_crs(rx)) # manually fill the missing values with 0.5 and missing weights with 0.3 rx_filled <- make_square_raster(ifelse(is.na(x), 0.5, x)) ry_filled <- make_square_raster(ifelse(is.na(y), 0.3, y)) expected <- exact_extract(rx_filled, poly, 'weighted_mean', weights = ry_filled) # fill values on the fly and verify that we get the same result expect_equal( exact_extract(rx, poly, 'weighted_mean', weights = ry, default_value = 0.5, default_weight = 0.3), expected) # check same calculation but using R summary function expect_equal( exact_extract(rx, poly, weights = ry, default_value = 0.5, default_weight = 0.3, fun = function(value, cov_frac, weight) { weighted.mean(value, cov_frac*weight) }), expected, 1e-6) # check substitution in raw returned values expect_equal( which(is.na(exact_extract(rx, poly)[[1]]$value)), which(44 == exact_extract(rx, poly, default_value = 44)[[1]]$value) ) }) test_that('All summary function arguments combined when summarize_df = TRUE', { rast <- make_square_raster(1:100) values <- stack(list(a = rast - 1, b = rast, c = rast + 1)) weights <- sqrt(values) names(weights) <- c('d', 'e', 'f') circle <- st_sf( id = 77, make_circle(7.5, 5.5, 4, sf::st_crs(rast))) # in the tests below, we check names inside the R summary function # to verify that our checks were actually hit, we have the summary # function return NULL and check for it with `expect_null`. 
# values only expect_null( exact_extract(values, circle, summarize_df = TRUE, fun = function(df) { expect_named(df, c('a', 'b', 'c', 'coverage_fraction')) NULL })[[1]]) expect_null( exact_extract(rast, circle, coverage_area = TRUE, summarize_df = TRUE, fun = function(df) { expect_named(df, c('value', 'coverage_area')) NULL })[[1]]) expect_null( exact_extract(values[[1]], circle, coverage_area = TRUE, summarize_df = TRUE, fun = function(df) { expect_named(df, c('value', 'coverage_area')) NULL })[[1]]) # values and weights expect_null( exact_extract(values, circle, summarize_df = TRUE, fun = function(df) { expect_named(df, c('a', 'b', 'c', 'd', 'e', 'f', 'coverage_fraction')) NULL }, weights = weights)[[1]]) expect_null( exact_extract(values, circle, include_cell = TRUE, include_xy = TRUE, include_area = TRUE, include_cols = 'id', summarize_df = TRUE, fun = function(df, extra_arg) { expect_named(df, c('id', 'a', 'b', 'c', 'd', 'e', 'f', 'x', 'y', 'cell', 'area', 'coverage_fraction')) expect_equal(extra_arg, 600) NULL }, weights = weights, extra_arg = 600)[[1]]) # values and weights, stack_apply = TRUE expect_equal( exact_extract(values, circle, weights = weights, summarize_df = TRUE, stack_apply = TRUE, fun = function(df, extra_arg) { expect_named(df, c('value', 'weight', 'coverage_fraction')) extra_arg }, extra_arg = 30809), data.frame(fun.a.d = 30809, fun.b.e = 30809, fun.c.f = 30809)) }) test_that('floating point errors do not cause an error that "logical subsetting requires vectors of identical size"', { rast <- raster(matrix(1:100, nrow=10), xm=0, xmx=1, ymn=0, ymx=1) poly <- make_rect(0.4, 0.7, 0.5, 0.8, crs = st_crs(rast)) val <- exact_extract(rast, poly, weights = rast, fun = NULL, include_cell = TRUE)[[1]] expect_equal(val$value, rast[val$cell]) expect_equal(val$weight, rast[val$cell]) }) test_that("append_cols works correctly when summary function returns multi-row data frame", { rast <- make_square_raster(1:100) circles <- st_sf( id = c('a', 'b'), geom = 
c( make_circle(3, 2, 4, sf::st_crs(rast)), make_circle(7, 7, 2, sf::st_crs(rast)) )) expect_silent({ result <- exact_extract(rast, circles, function(x, cov) data.frame(x = 1:3, x2 = 4:6), append_cols = 'id', progress = FALSE) }) expect_named(result, c('id', 'x', 'x2')) expect_equal(result$id, c('a', 'a', 'a', 'b', 'b', 'b')) expect_equal(result$x, c(1:3, 1:3)) expect_equal(result$x2, c(4:6, 4:6)) }) test_that("append_cols works correctly when summary function returns vector with length > 1", { rast <- make_square_raster(1:100) circles <- st_sf( id = c('a', 'b'), geom = c( make_circle(3, 2, 4, sf::st_crs(rast)), make_circle(7, 7, 2, sf::st_crs(rast)) )) expect_silent({ result <- exact_extract(rast, circles, function(x, cov) 1:3, append_cols = 'id', progress = FALSE) }) expect_named(result, c('id', 'result')) expect_equal(result$id, c('a', 'a', 'a', 'b', 'b', 'b')) expect_equal(result$result, c(1:3, 1:3)) }) test_that("append_cols works correctly when summary function returns data frame with length 0", { rast <- make_square_raster(1:100) circles <- st_sf( id = c('a', 'b'), geom = c( make_circle(3, 2, 4, sf::st_crs(rast)), make_circle(7, 7, 2, sf::st_crs(rast)) )) expect_silent({ result <- exact_extract(rast, circles, function(x, cov) data.frame(x = character(0), x2 = numeric(0)), append_cols = 'id', progress = FALSE) }) expect_named(result, c('id', 'x', 'x2')) expect_equal(nrow(result), 0) expect_equal(class(result$id), class(circles$id)) expect_equal(class(result$x), 'character') expect_equal(class(result$x2), 'numeric') }) exactextractr/tests/testthat/test_exact_extract_include_args.R0000644000176200001440000001632114500103446024674 0ustar liggesusers# Copyright (c) 2018-2022 ISciences, LLC. # All rights reserved. # # This software is licensed under the Apache License, Version 2.0 (the "License"). # You may not use this file except in compliance with the License. You may # obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.
# # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. library(testthat) library(exactextractr) context('exact_extract include* arguments') test_that('when include_xy = TRUE, center coordinates are included in the output', { rast <- raster::raster(matrix(1:100, nrow=10), xmn=0, xmx=10, ymn=0, ymx=10, crs='+proj=longlat +datum=WGS84') poly <- sf::st_sfc(sf::st_polygon( list( matrix( c(3.5, 4.4, 7.5, 4.5, 7.5, 6.5, 3.5, 6.5, 3.5, 4.4), ncol=2, byrow=TRUE ) ) ), crs=sf::st_crs(rast)) results <- exact_extract(rast, poly, include_xy=TRUE, include_cell=TRUE)[[1]] # check that correct ranges of X,Y values are output expect_equal( c(3.5, 4.5, 5.5, 6.5, 7.5), sort(unique(results[, 'x']))) expect_equal( c(4.5, 5.5, 6.5), sort(unique(results[, 'y']))) expect_equal(results[, 'cell'], raster::cellFromXY(rast, results[, c('x', 'y')])) # check the XY values of an individual cell with a known coverage fraction expect_equal( results[results[, 'x']==3.5 & results[,'y']==4.5, 'coverage_fraction'], 0.2968749999999998, tolerance=1e-8, check.attributes=FALSE) # we can also send the weights to a callback exact_extract(rast, sf::st_sf(data.frame(id=1), geom=poly), include_xy=TRUE, fun=function(values, weights) { expect_equal(3, ncol(values)) }, progress=FALSE) }) test_that('We can use the stack_apply argument with include_xy and include_cols', { set.seed(123) stk <- raster::stack(list(a = make_square_raster(runif(100)), b = make_square_raster(runif(100)))) circles <- c( make_circle(5, 4, 2, sf::st_crs(stk)), make_circle(3, 1, 1, sf::st_crs(stk))) result <- exact_extract(stk, circles, include_xy = TRUE, stack_apply = TRUE, progress = FALSE, function(df, frac) { weighted.mean(df$value[df$y > 1], frac[df$y > 1]) })
expect_named(result, c('fun.a', 'fun.b')) }) test_that('when include_area = TRUE, cell areas are included in output (geographic) and are accurate to 1%', { rast <- raster::raster(matrix(1:54000, ncol=360), xmn=-180, xmx=180, ymn=-65, ymx=85, crs='+proj=longlat +datum=WGS84') accuracy_pct_tol <- 0.01 suppressMessages({ circle <- make_circle(0, 45, 15, crs=st_crs(rast)) }) results <- exact_extract(rast, circle, include_cell = TRUE, include_area = TRUE)[[1]] expected_areas <- raster::area(rast)[results$cell] actual_areas <- results$area / 1e6 expect_true(all(abs(actual_areas - expected_areas) / expected_areas < accuracy_pct_tol)) }) test_that('when include_area = TRUE, cell areas are included in output (projected)', { rast_utm <- make_square_raster(1:100) circle <- make_circle(5, 5, 5, crs=st_crs(rast_utm)) areas <- exact_extract(rast_utm, circle, include_area = TRUE)[[1]]$area expect_true(all(areas == 1)) }) test_that('include_cols copies columns from the source data frame to the returned data frames', { rast <- make_square_raster(1:100) circles <- st_sf( fid = c(2, 9), size = c('large', 'small'), geometry = c( make_circle(5, 4, 2, sf::st_crs(rast)), make_circle(3, 1, 1, sf::st_crs(rast)))) combined_result <- do.call(rbind, exact_extract(rast, circles, include_cols = 'fid', progress = FALSE)) expect_named(combined_result, c('fid', 'value', 'coverage_fraction')) }) test_that('When disaggregating values, xy coordinates refer to disaggregated grid', { rast <- make_square_raster(1:100) rast2 <- raster::disaggregate(rast, 4) circle <- make_circle(7.5, 5.5, 0.4, sf::st_crs(rast)) xy_disaggregated <- exact_extract(rast2, circle, include_xy = TRUE)[[1]][, c('x', 'y')] suppressWarnings({ xy_weighted <- exact_extract(rast, circle, include_xy = TRUE, weights = rast2)[[1]][, c('x', 'y')] xy_weighted2 <- exact_extract(rast2, circle, include_xy = TRUE, weights = rast)[[1]][, c('x', 'y')] }) expect_equal(xy_weighted, xy_disaggregated) expect_equal(xy_weighted2, xy_disaggregated) }) 
test_that('When value and weighting rasters have different grids, cell numbers refer to value raster', { anom <- raster(xmn=-180, xmx=180, ymn=-90, ymx=90, res=10) values(anom) <- rnorm(length(anom)) pop <- raster(xmn=-180, xmx=180, ymn=-65, ymx=85, res=5) values(pop) <- rlnorm(length(pop)) circle <- make_circle(17, 21, 18, sf::st_crs(anom)) suppressWarnings({ extracted <- exact_extract(anom, circle, weights=pop, include_cell=TRUE)[[1]] }) expect_equal(extracted$value, anom[extracted$cell]) }) test_that('include_ arguments supported with weighted summary function', { rast1 <- 5 + make_square_raster(1:100) rast2 <- make_square_raster(runif(100)) circle <- st_sf( id = 77, make_circle(7.5, 5.5, 4, sf::st_crs(rast1))) x <- exact_extract(rast1, circle, function(v, c, w) { expect_is(v, 'data.frame') expect_named(v, c('value', 'id')) expect_true(all(v$id == 77)) expect_is(c, 'numeric') expect_is(w, 'numeric') }, weights=rast2, include_cols = 'id') x <- exact_extract(rast1, circle, function(v, c, w) { expect_is(v, 'data.frame') expect_named(v, c('value', 'id', 'x', 'y', 'cell')) expect_true(all(v$id == 77)) expect_equal(v$value, rast1[v$cell]) expect_equal(w, rast2[v$cell]) expect_equal(v$x, raster::xFromCell(rast1, v$cell)) expect_equal(v$y, raster::yFromCell(rast1, v$cell)) expect_is(c, 'numeric') expect_is(w, 'numeric') }, weights=rast2, include_cols = 'id', include_cell = TRUE, include_xy = TRUE) }) test_that('we get a zero-row data frame for a polygon not intersecting a raster', { # https://github.com/isciences/exactextractr/issues/68 rast <- raster(matrix(0, nrow = 100, ncol = 100)) nonoverlap_poly <- st_sf(st_sfc(st_polygon(list(matrix(c(0, 0, 1, 0, 1, -0.25, 0, -0.25, 0, 0), ncol = 2, byrow = TRUE))))) df <- exact_extract(rast, nonoverlap_poly)[[1]] expect_named(df, c('value', 'coverage_fraction')) expect_equal(nrow(df), 0) df <- exact_extract(rast, nonoverlap_poly, include_xy = TRUE)[[1]] expect_named(df, c('value', 'x', 'y', 'coverage_fraction')) 
expect_equal(nrow(df), 0) df <- exact_extract(rast, nonoverlap_poly, include_cell = TRUE)[[1]] expect_named(df, c('value', 'cell', 'coverage_fraction')) expect_equal(nrow(df), 0) df <- exact_extract(rast, nonoverlap_poly, include_area = TRUE)[[1]] expect_named(df, c('value', 'area', 'coverage_fraction')) expect_equal(nrow(df), 0) }) exactextractr/tests/testthat/test_exact_extract_terra.R0000644000176200001440000000665214500103446023360 0ustar liggesusers# Copyright (c) 2021-2022 ISciences, LLC. # All rights reserved. # # This software is licensed under the Apache License, Version 2.0 (the "License"). # You may not use this file except in compliance with the License. You may # obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
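The terra tests that follow all assert the same invariant: passing a `SpatRaster` gives the same result as the equivalent `RasterLayer`. A minimal standalone sketch of that equivalence check (toy data; not part of the test suite):

```r
library(raster)
library(terra)
library(sf)
library(exactextractr)

# a 10x10 raster with no CRS, and a circle well inside its extent
r <- raster::raster(matrix(1:100, nrow = 10), xmn = 0, xmx = 10, ymn = 0, ymx = 10)
circ <- sf::st_sfc(sf::st_buffer(sf::st_point(c(5, 5)), 2))

# the same named summary operation through both raster classes
m_raster <- exact_extract(r, circ, 'mean', progress = FALSE)
m_terra  <- exact_extract(terra::rast(r), circ, 'mean', progress = FALSE)
stopifnot(isTRUE(all.equal(m_raster, m_terra)))
```

The mixed-input cases below (terra values with raster weights, and vice versa) follow the same pattern.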
library(testthat) library(exactextractr) context('exact_extract (terra)') test_that('terra inputs supported (single layer)', { ras <- make_square_raster(1:100) terra_ras <- terra::rast(ras) circ <- make_circle(3, 2, 4, sf::st_crs(ras)) expect_equal( exact_extract(ras, circ), exact_extract(terra_ras, circ) ) expect_equal( exact_extract(ras, circ, 'mean'), exact_extract(terra_ras, circ, 'mean') ) expect_equal( exact_extract(ras, circ, weighted.mean), exact_extract(terra_ras, circ, weighted.mean) ) }) test_that('terra inputs supported (single layer, weighted)', { ras <- make_square_raster(1:100) ras_w <- sqrt(ras) terra_ras <- terra::rast(ras) terra_ras_w <- terra::rast(ras_w) circ <- make_circle(3, 2, 4, sf::st_crs(ras)) expect_equal( exact_extract(ras, circ, weights = ras_w), exact_extract(terra_ras, circ, weights = terra_ras_w) ) expect_equal( exact_extract(ras, circ, 'weighted_mean', weights = ras_w), exact_extract(terra_ras, circ, 'weighted_mean', weights = terra_ras_w) ) # mixed inputs supported: terra values, raster weights expect_equal( exact_extract(ras, circ, 'weighted_mean', weights = ras_w), exact_extract(terra_ras, circ, 'weighted_mean', weights = ras_w) ) # mixed inputs supported: raster values, terra weights expect_equal( exact_extract(ras, circ, 'weighted_mean', weights = ras_w), exact_extract(ras, circ, 'weighted_mean', weights = terra_ras_w) ) expect_equal( exact_extract(ras, circ, weighted.mean), exact_extract(terra_ras, circ, weighted.mean) ) }) test_that('terra inputs supported (multi-layer)', { stk <- raster::stack(list(a = make_square_raster(1:100), b = make_square_raster(101:200))) terra_stk <- terra::rast(stk) circ <- make_circle(3, 2, 4, sf::st_crs(stk)) expect_equal( exact_extract(stk, circ, 'mean'), exact_extract(terra_stk, circ, 'mean') ) expect_equal( exact_extract(stk, circ), exact_extract(terra_stk, circ) ) }) test_that('terra inputs supported (weighted, multi-layer)', { stk <- raster::stack(list(a = make_square_raster(1:100), a = 
make_square_raster(101:200))) stk <- terra::rast(stk) names(stk) <- c('a', 'a') ras <- terra::rast(make_square_raster(runif(100))) ras <- terra::disagg(ras, 2) circ <- make_circle(3, 2, 4, sf::st_crs(ras)) expect_error( exact_extract(stk, circ, 'mean'), 'names.*must be unique' ) }) test_that('include_* arguments supported for terra inputs', { ras <- make_square_raster(1:100) terra_ras <- terra::rast(ras) circ <- make_circle(3, 2, 4, sf::st_crs(ras)) expect_equal( exact_extract(terra_ras, circ, include_cell = TRUE, include_xy = TRUE), exact_extract(ras, circ, include_cell = TRUE, include_xy = TRUE) ) }) exactextractr/tests/testthat/test_exact_resample.R0000644000176200001440000000633014500103446022312 0ustar liggesusers# Copyright (c) 2020-2022 ISciences, LLC. # All rights reserved. # # This software is licensed under the Apache License, Version 2.0 (the "License"). # You may not use this file except in compliance with the License. You may # obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
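The first `exact_resample` test below checks that the `'sum'` operation is mass-preserving. A small sketch of that property outside the test harness (toy grids chosen for illustration):

```r
library(raster)
library(exactextractr)

# source values at 1x1 resolution; coarser 2x2 target on the same extent
src <- raster::raster(matrix(runif(64), nrow = 8), xmn = 0, xmx = 8, ymn = 0, ymx = 8)
dst <- raster::raster(xmn = 0, xmx = 8, ymn = 0, ymx = 8, res = 2,
                      crs = raster::crs(src))

# 'sum' apportions each source cell by its exact coverage fraction of the
# target cell, so the grand total is preserved rather than approximated
dst <- exact_resample(src, dst, 'sum')
stopifnot(isTRUE(all.equal(raster::cellStats(src, 'sum'),
                           raster::cellStats(dst, 'sum'))))
```

The same check holds when the target resolution is finer than the source, as the second half of the test verifies.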
context('exact_resample') test_that("exact_resample preserves values", { set.seed(123) # generate a random raster with a strange extent and resolution src <- raster::raster(matrix(runif(10000), nrow=100), xmn=runif(1), xmx=runif(1) + 9, ymn=runif(1), ymx=runif(1) + 9) # resample it to a raster with a larger grid with different resolution dst <- raster::raster(xmn=0, xmx=10, ymn=0, ymx=10, res=c(1, 2), crs=raster::crs(src)) dst <- exact_resample(src, dst, 'sum') # total values should be preserved expect_equal(cellStats(src, 'sum'), cellStats(dst, 'sum')) # resample it to a raster with a larger grid and a smaller resolution dst <- raster::raster(xmn=0, xmx=10, ymn=0, ymx=10, res=c(0.01, 0.02), crs=raster::crs(src)) dst <- exact_resample(src, dst, 'sum') # total values should be preserved expect_equal(cellStats(src, 'sum'), cellStats(dst, 'sum')) }) test_that("error thrown if multiple or no stats provided", { src <- make_square_raster(1:100) dst <- make_square_raster(1:4) expect_error( exact_resample(src, dst, c('sum', 'mean')), 'Only a single') expect_error( exact_resample(src, dst, character()), 'Only a single') }) test_that("error thrown if weighted stat provided", { r <- raster::raster(resolution = 2) target <- raster::shift(r, 2.5, 1) expect_error( exact_resample(r, target, fun = "weighted_mean"), 'cannot be used for resampling' ) }) test_that("error thrown if rasters have different CRS", { src <- make_square_raster(1:100, crs='+init=epsg:4326') dst <- make_square_raster(1:100, crs='+init=epsg:4269') expect_error( exact_resample(src, dst, 'sum'), 'same CRS') }) test_that("warning raised if one CRS undefined", { a <- make_square_raster(1:100, crs='+init=epsg:4326') b <- make_square_raster(1:100, crs=NA) expect_warning( exact_resample(a, b, 'sum'), 'No CRS specified for destination' ) expect_warning( exact_resample(b, a, 'sum'), 'No CRS specified for source' ) }) test_that("stats requiring stored values can be used", { # 
https://github.com/isciences/exactextractr/issues/47 r <- raster::raster(resolution = 2) target <- raster::shift(r, 2.5, 1) set.seed(1111) raster::values(r) = as.integer(round(rnorm(raster::ncell(r), 0, 1))) vals <- unique(raster::getValues(r)) mode_vals <- sort(unique(raster::getValues(exactextractr::exact_resample(r, target, fun = "mode")))) expect_true(length(mode_vals) > 1) expect_true(all(mode_vals %in% vals)) }) exactextractr/tests/testthat/test_coverage_fraction.R0000644000176200001440000001364614500103446023006 0ustar liggesusers# Copyright (c) 2018-2020 ISciences, LLC. # All rights reserved. # # This software is licensed under the Apache License, Version 2.0 (the "License"). # You may not use this file except in compliance with the License. You may # obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. context('coverage_fraction') test_that("Coverage fraction function works", { # This test just verifies a successful journey from R # to C++ and back. The correctness of the algorithm # is tested at the C++ level. 
square <- sf::st_sfc(sf::st_polygon( list( matrix( c(0.5, 0.5, 2.5, 0.5, 2.5, 2.5, 0.5, 2.5, 0.5, 0.5), ncol=2, byrow=TRUE)))) rast <- raster::raster(xmn=0, xmx=3, ymn=0, ymx=3, nrows=3, ncols=3, crs=NA) weights <- coverage_fraction(rast, square)[[1]] expect_s4_class(weights, 'RasterLayer') expect_equal(as.matrix(weights), rbind( c(0.25, 0.5, 0.25), c(0.50, 1.0, 0.50), c(0.25, 0.5, 0.25) ), check.attributes=FALSE) }) test_that("Output can be cropped to the extent of the input feature", { square <- sf::st_sfc(sf::st_polygon( list( matrix( c(0.5, 0.5, 2.5, 0.5, 2.5, 2.5, 0.5, 2.5, 0.5, 0.5), ncol=2, byrow=TRUE)))) rast <- raster::raster(xmn=0, xmx=10, ymn=0, ymx=10, nrows=10, ncols=10, crs=NA) weights <- coverage_fraction(rast, square, crop=TRUE)[[1]] expect_equal(raster::res(weights), raster::res(rast)) expect_equal(raster::crs(weights), raster::crs(rast)) expect_equal(raster::extent(weights), raster::extent(0, 3, 0, 3)) }) test_that("When output is not cropped, cells outside of the processed area are 0, not NA", { square <- sf::st_sfc(sf::st_polygon( list( matrix( c(0.5, 0.5, 2.5, 0.5, 2.5, 2.5, 0.5, 2.5, 0.5, 0.5), ncol=2, byrow=TRUE)))) rast <- raster::raster(xmn=0, xmx=10, ymn=0, ymx=10, nrows=10, ncols=10, crs=NA) weights <- coverage_fraction(rast, square, crop=FALSE)[[1]] expect_false(any(is.na(as.matrix(weights)))) }) test_that('Raster returned by coverage_fraction has same properties as the input', { r <- raster::raster(xmn=391030, xmx=419780, ymn=5520000, ymx=5547400, crs=NA) raster::res(r) = c(100, 100) raster::values(r) <- 1:ncell(r) p <- sf::st_as_sfc('POLYGON((397199.680921053 5541748.05921053,402813.496710526 5543125.03289474,407103.299342105 5537246.41447368,398470.733552632 5533962.86184211,397199.680921053 5541748.05921053))') w <- coverage_fraction(r, p) expect_length(w, 1) expect_is(w[[1]], 'RasterLayer') expect_equal(raster::res(r), raster::res(w[[1]])) expect_equal(raster::extent(r), raster::extent(w[[1]])) expect_equal(raster::crs(r),
raster::crs(w[[1]])) }) test_that('Raster returned by coverage_fraction has same properties as the input (terra)', { r <- terra::rast(xmin=391030, xmax=419780, ymin=5520000, ymax=5547400, crs='EPSG:32618') terra::res(r) = c(100, 100) terra::values(r) <- 1:ncell(r) p <- sf::st_as_sfc('POLYGON((397199.680921053 5541748.05921053,402813.496710526 5543125.03289474,407103.299342105 5537246.41447368,398470.733552632 5533962.86184211,397199.680921053 5541748.05921053))', crs = sf::st_crs(r)) w <- coverage_fraction(r, p) expect_length(w, 1) expect_is(w[[1]], 'SpatRaster') expect_equal(terra::res(r), terra::res(w[[1]])) expect_equal(terra::ext(r), terra::ext(w[[1]])) expect_equal(terra::crs(r), terra::crs(w[[1]])) }) test_that('Coverage fractions are exact', { r <- raster::raster(xmn=391030, xmx=419780, ymn=5520000, ymx=5547400, crs=NA) raster::res(r) = c(100, 100) raster::values(r) <- 1:ncell(r) p <- sf::st_as_sfc('POLYGON((397199.680921053 5541748.05921053,402813.496710526 5543125.03289474,407103.299342105 5537246.41447368,398470.733552632 5533962.86184211,397199.680921053 5541748.05921053))') w <- coverage_fraction(r, p) cell_area <- prod(raster::res(w[[1]])) ncells <- raster::cellStats(w[[1]], 'sum') expect_equal(sf::st_area(sf::st_geometry(p)), ncells*cell_area) }) test_that('Warning is raised on CRS mismatch', { rast <- raster::raster(matrix(1:100, nrow=10), xmn=-75, xmx=-70, ymn=41, ymx=46, crs='+proj=longlat +datum=WGS84') poly <- sf::st_buffer( sf::st_as_sfc('POINT(442944.5 217528.7)', crs=32145), 150000) expect_warning(coverage_fraction(rast, poly), 'transformed to raster') }) test_that('Warning is raised on undefined CRS', { rast <- raster::raster(matrix(1:100, nrow=10), xmn=0, xmx=10, ymn=0, ymx=10) poly <- sf::st_buffer(sf::st_as_sfc('POINT(8 4)'), 0.4) # neither has a defined CRS expect_silent(coverage_fraction(rast, poly)) # only raster has defined CRS raster::crs(rast) <- '+proj=longlat +datum=WGS84' expect_warning(coverage_fraction(rast, poly), 'assuming .* 
same CRS .* raster') # both have defined crs sf::st_crs(poly) <- sf::st_crs(rast) expect_silent(coverage_fraction(rast, poly)) # only polygons have defined crs raster::crs(rast) <- NULL expect_warning(coverage_fraction(rast, poly), 'assuming .* same CRS .* polygon') }) test_that('Z dimension is ignored, if present', { # see https://github.com/isciences/exactextractr/issues/26 polyz <- st_as_sfc('POLYGON Z ((1 1 0, 4 1 0, 4 4 0, 1 1 0))') poly <- st_as_sfc('POLYGON ((1 1, 4 1, 4 4, 1 1))') values <- raster(matrix(1:25, nrow=5, ncol=5, byrow=TRUE), xmn=0, xmx=5, ymn=0, ymx=5) expect_equal(coverage_fraction(values, poly), coverage_fraction(values, polyz)) }) exactextractr/tests/testthat/test_exact_extract_eager_load.R0000644000176200001440000001051014500103446024311 0ustar liggesusers# Copyright (c) 2018-2022 ISciences, LLC. # All rights reserved. # # This software is licensed under the Apache License, Version 2.0 (the "License"). # You may not use this file except in compliance with the License. You may # obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
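A minimal illustration of the identity exercised by the 'Coverage fractions are exact' test above — for a polygon fully inside the raster, the coverage fractions summed over all cells, times the cell area, equal the polygon area. This is a sketch for the reader, not part of the test suite:

```r
library(raster)
library(sf)
library(exactextractr)

# 10x10 unit-cell raster and an axis-aligned rectangle of area 30
r <- raster::raster(matrix(0, nrow = 10, ncol = 10),
                    xmn = 0, xmx = 10, ymn = 0, ymx = 10)
poly <- sf::st_as_sfc('POLYGON((2 2, 8 2, 8 7, 2 7, 2 2))')

w <- coverage_fraction(r, poly)[[1]]
cell_area <- prod(raster::res(w))

# sum of coverage fractions times cell area recovers the polygon area (30)
raster::cellStats(w, 'sum') * cell_area
```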
context('exact_extract eager loading') test_that("message emitted when working area doesn't fit in memory", { rast_fname <- system.file(file.path('sao_miguel', 'clc2018_v2020_20u1.tif'), package = 'exactextractr') poly_fname <- system.file(file.path('sao_miguel', 'concelhos.gpkg'), package = 'exactextractr') r <- terra::rast(rast_fname) polys <- st_read(poly_fname, quiet = TRUE) # no output when everything fits in memory capture.output({ msg <- capture_messages({ exact_extract(r, polys, 'mode', progress = TRUE, max_cells_in_memory = 1e7) }) }) expect_equal(msg, character()) # message emitted when it doesn't fit capture.output({ expect_message( exact_extract(r, polys, 'mode', progress = TRUE, max_cells_in_memory = 1e6), 'Cannot preload' ) }) # if progress is disabled, so are hints expect_silent( exact_extract(r, polys, 'mode', progress = FALSE, max_cells_in_memory = 1e6) ) # get additional warning by blowing out the GDAL block cache prevCacheSize <- terra::gdalCache() terra::gdalCache(1) capture.output({ expect_message( exact_extract(r, polys, 'mode', max_cells_in_memory = 1e6), 'GDAL block size cache is only 1 MB' ) }) # get additional warning if we are using a RasterStack capture.output({ expect_message( exact_extract(stack(r), polys, 'mode', max_cells_in_memory = 1e6), 'It is recommended to use a SpatRaster' ) }) terra::gdalCache(prevCacheSize) }) test_that('cropping does not introduce grid incompatibility', { rast_fname <- system.file(file.path('sao_miguel', 'clc2018_v2020_20u1.tif'), package = 'exactextractr') poly_fname <- system.file(file.path('sao_miguel', 'concelhos.gpkg'), package = 'exactextractr') weight_fname <- system.file(file.path('sao_miguel', 'gpw_v411_2020_density_2020.tif'), package = 'exactextractr') r <- terra::rast(rast_fname) p <- st_read(poly_fname, quiet = TRUE) w <- terra::rast(weight_fname) expect_silent({ exact_extract(r, p, weights = w, grid_compat_tol = 1e-3, progress = FALSE) }) }) test_that("eager loading does not change values", { # 
this will fail if terra::crop is not called with snap = 'out' rast_fname <- system.file(file.path('sao_miguel', 'clc2018_v2020_20u1.tif'), package = 'exactextractr') poly_fname <- system.file(file.path('sao_miguel', 'concelhos.gpkg'), package = 'exactextractr') weight_fname <- system.file(file.path('sao_miguel', 'gpw_v411_2020_density_2020.tif'), package = 'exactextractr') r <- terra::rast(rast_fname) p <- st_read(poly_fname, quiet = TRUE) w <- terra::rast(weight_fname) no_eager_load <- exact_extract(r, p, weights = w, include_xy = TRUE, include_cell = TRUE, max_cells_in_memory = 2000, progress = FALSE) eager_load <- exact_extract(r, p, weights = w, include_xy = TRUE, include_cell = TRUE, progress = FALSE) expect_equal(eager_load, no_eager_load, tol = 2e-7) }) test_that('eager loading does not error when geometry is outside extent of raster', { ras <- terra::rast(matrix(1:100, nrow=10)) touches_corner <- make_rect(xmin = 10, xmax = 20, ymin = 10, ymax = 20, crs = sf::st_crs(ras)) loaded <- .eagerLoad(ras, touches_corner, Inf, '') expect_equal( nrow(exact_extract(loaded, touches_corner)[[1]]), 0) }) exactextractr/tests/testthat/helper_functions.R0000644000176200001440000000324214500103446021625 0ustar liggesusers# Copyright (c) 2018-2022 ISciences, LLC. # All rights reserved. # # This software is licensed under the Apache License, Version 2.0 (the "License"). # You may not use this file except in compliance with the License. You may # obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
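The eager-loading tests above exercise `max_cells_in_memory`: lowering it forces `exact_extract()` to read the raster in chunks rather than preloading the working area, trading speed for memory while leaving the results unchanged. A sketch of that trade-off using the same São Miguel fixtures (illustrative, not part of the test suite):

```r
library(terra)
library(sf)
library(exactextractr)

r <- terra::rast(system.file('sao_miguel/clc2018_v2020_20u1.tif',
                             package = 'exactextractr'))
p <- sf::st_read(system.file('sao_miguel/concelhos.gpkg',
                             package = 'exactextractr'), quiet = TRUE)

# small max_cells_in_memory -> chunked reads; default -> eager preload
chunked   <- exact_extract(r, p, 'mode', max_cells_in_memory = 1e6,
                           progress = FALSE)
preloaded <- exact_extract(r, p, 'mode', progress = FALSE)

# same statistics either way
stopifnot(isTRUE(all.equal(chunked, preloaded)))
```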
default_proj <- '+init=epsg:26918' # UTM 18N; avoid wgs84 to keep cartesian calcs in sf make_rect <- function(xmin, ymin, xmax, ymax, crs) { sf::st_sfc( sf::st_polygon( list( matrix( c(xmin, ymin, xmax, ymin, xmax, ymax, xmin, ymax, xmin, ymin), ncol=2, byrow=TRUE))), crs=crs) } make_circle <- function(x, y, r, crs) { suppressWarnings(sf::st_buffer( sf::st_sfc( sf::st_point(c(x, y)), crs=crs), r)) } make_square_raster <- function(vals, crs=default_proj) { n <- sqrt(length(vals)) stopifnot(as.integer(n) == n) raster::raster(matrix(vals, nrow=n, byrow=TRUE), xmn=0, xmx=n, ymn=0, ymx=n, crs=crs) } make_square_rast <- function(vals, crs=default_proj) { n <- sqrt(length(vals)) stopifnot(as.integer(n) == n) x <- terra::rast(nrows = n, ncols = n, xmin=0, xmax=n, ymin=0, ymax=n, crs = gsub("+init=", "", crs, fixed = TRUE)) terra::values(x) <- vals x } exactextractr/tests/testthat.R0000644000176200001440000000010614500103446016252 0ustar liggesuserslibrary(testthat) library(exactextractr) test_check("exactextractr") exactextractr/configure.ac0000644000176200001440000000734414500103446015426 0ustar liggesusers# adapted from configure.ac used in rgeos package (Roger Bivand) define([pkgversion], esyscmd([sh -c "grep Version: DESCRIPTION | cut -d' ' -f2 | tr -d '\n'"])) AC_INIT(exactextractr, [pkgversion], dbaston@isciences.com) AC_MSG_NOTICE([${PACKAGE_NAME}: ${PACKAGE_VERSION}]) AC_CONFIG_SRCDIR(src/exact_extract.cpp) # find R home and set correct compiler + flags : ${R_HOME=`R RHOME`} if test -z "${R_HOME}"; then AC_MSG_ERROR([cannot determine R_HOME. 
Make sure you use R CMD INSTALL!]) fi RBIN="${R_HOME}/bin/R" # pick all flags for testing from R : ${CXX=`"${RBIN}" CMD config CXX14`} : ${CXXFLAGS=`"${RBIN}" CMD config CXX14FLAGS`} : ${LDFLAGS=`"${RBIN}" CMD config LDFLAGS`} if test [ -z "$CXX" ] ; then AC_MSG_ERROR(["No C++14 compiler identified by R CMD config CXX14"]) fi GEOS_CONFIG="geos-config" GEOS_CONFIG_SET="no" AC_ARG_WITH([geos-config], AS_HELP_STRING([--with-geos-config=GEOS_CONFIG], [the location of geos-config]), [geos_config=$withval]) if test [ -n "$geos_config" ] ; then GEOS_CONFIG_SET="yes" AC_SUBST([GEOS_CONFIG],["${geos_config}"]) AC_MSG_NOTICE(geos-config set to $GEOS_CONFIG) fi if test ["$GEOS_CONFIG_SET" = "no"] ; then AC_PATH_PROG([GEOS_CONFIG], ["$GEOS_CONFIG"], ["no"]) if test ["$GEOS_CONFIG" = "no"] ; then AC_MSG_ERROR([geos-config not found or not executable]) fi else AC_MSG_CHECKING(geos-config exists) if test -r "${GEOS_CONFIG}"; then AC_MSG_RESULT(yes) else AC_MSG_RESULT(no) AC_MSG_ERROR([geos-config not found - configure argument error.]) fi AC_MSG_CHECKING(geos-config executable) if test -x "${GEOS_CONFIG}"; then AC_MSG_RESULT(yes) else AC_MSG_RESULT(no) AC_MSG_ERROR([geos-config not executable.]) fi fi AC_MSG_CHECKING(geos-config usability) if test `${GEOS_CONFIG} --version`; then GEOS_VER=`${GEOS_CONFIG} --version` GEOS_VER_DOT=`${GEOS_CONFIG} --version | sed 's/[[^0-9]]*//g'` GEOS_CXXFLAGS=`${GEOS_CONFIG} --cflags` GEOS_CLIBS=`${GEOS_CONFIG} --clibs` GEOS_STATIC_CLIBS=`${GEOS_CONFIG} --static-clibs | sed 's/-m/-lm/g'` AC_MSG_RESULT(yes) else AC_MSG_RESULT(no) AC_MSG_ERROR([${GEOS_CONFIG} not usable]) fi AC_MSG_NOTICE([GEOS version: ${GEOS_VER}]) AC_MSG_CHECKING([geos version at least 3.5.0]) if test ${GEOS_VER_DOT} -lt 350 ; then AC_MSG_RESULT(no) AC_MSG_RESULT([Upgrade GEOS to version 3.5.0 or greater.]) else AC_MSG_RESULT(yes) fi AC_MSG_CHECKING(compiling and building against geos_c) [cat > geos_test.cpp << _EOCONF #include #include int main() { GEOSContextHandle_t handle = 
initGEOS_r(NULL, NULL); finishGEOS_r(handle); return 0; } _EOCONF] ${CXX} ${CXXFLAGS} ${GEOS_CXXFLAGS} -o geos_test geos_test.cpp ${LDFLAGS} ${GEOS_CLIBS} 2> errors.txt if test `echo $?` -ne 0 ; then geosok=no AC_MSG_RESULT(no) else CXXFLAGS="${CXXFLAGS} ${GEOS_CXXFLAGS}" LDFLAGS="${LDFLAGS} ${GEOS_CLIBS}" AC_MSG_RESULT(yes) fi if test "${geosok}" = no; then AC_MSG_CHECKING(geos: linking with ${GEOS_STATIC_CLIBS}) ${CXX} ${CXXFLAGS} ${GEOS_CXXFLAGS} -o geos_test geos_test.cpp ${GEOS_STATIC_CLIBS} 2> errors.txt if test `echo $?` -ne 0 ; then geosok=no AC_MSG_RESULT(no) cat errors.txt AC_MSG_NOTICE([Compilation and/or linkage problems.]) AC_MSG_ERROR([initGEOS_r not found in libgeos_c.]) else geosok=yes CXXFLAGS="${CXXFLAGS} ${GEOS_CXXFLAGS}" LDFLAGS="${LDFLAGS} ${GEOS_STATIC_CLIBS}" AC_MSG_RESULT(yes) fi fi rm -f geos_test errors.txt geos_test.cpp AC_SUBST([PKG_CXX], ["${CXX}"]) AC_SUBST([PKG_CXXFLAGS], ["${CXXFLAGS}"]) AC_SUBST([PKG_LIBS], ["${LDFLAGS}"]) AC_MSG_NOTICE([PKG_CXX: ${PKG_CXX}]) AC_MSG_NOTICE([PKG_CXXFLAGS: ${PKG_CXXFLAGS}]) AC_MSG_NOTICE([PKG_LIBS: ${PKG_LIBS}]) AC_CONFIG_FILES(src/Makevars) AC_OUTPUT exactextractr/src/0000755000176200001440000000000014500104660013716 5ustar liggesusersexactextractr/src/geos_r.h0000644000176200001440000000444314500103446015353 0ustar liggesusers// Copyright (c) 2018-2020 ISciences, LLC. // All rights reserved. // // This software is licensed under the Apache License, Version 2.0 (the "License"). // You may not use this file except in compliance with the License. You may // obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
// [[Rcpp::plugins("cpp14")]] #pragma once #include <memory> #include <geos_c.h> #include <Rcpp.h> using geom_ptr = std::unique_ptr<GEOSGeometry, std::function<void(GEOSGeometry*)>>; using wkb_reader_ptr = std::unique_ptr<GEOSWKBReader, std::function<void(GEOSWKBReader*)>>; // GEOS warning handler static void geos_warn(const char* fmt, ...) { char buf[BUFSIZ] = { '\0' }; va_list msg; va_start(msg, fmt); vsnprintf(buf, BUFSIZ*sizeof(char), fmt, msg); va_end(msg); Rcpp::Function warning("warning"); warning(buf); } // GEOS error handler static void geos_error(const char* fmt, ...) { char buf[BUFSIZ] = { '\0' }; va_list msg; va_start(msg, fmt); vsnprintf(buf, BUFSIZ*sizeof(char), fmt, msg); va_end(msg); Rcpp::stop(buf); } // GEOSContextHandle wrapper to ensure finishGEOS is called. struct GEOSAutoHandle { GEOSAutoHandle() { handle = initGEOS_r(geos_warn, geos_error); } ~GEOSAutoHandle() { finishGEOS_r(handle); } GEOSContextHandle_t handle; }; // Return a smart pointer to a Geometry, given WKB input static inline geom_ptr read_wkb(const GEOSContextHandle_t & context, const Rcpp::RawVector & wkb) { wkb_reader_ptr wkb_reader{ GEOSWKBReader_create_r(context), [context](GEOSWKBReader* r) { GEOSWKBReader_destroy_r(context, r); } }; geom_ptr geom{ GEOSWKBReader_read_r(context, wkb_reader.get(), std::addressof(wkb[0]), wkb.size()), [context](GEOSGeometry* g) { GEOSGeom_destroy_r(context, g); } }; if (geom.get() == nullptr) { Rcpp::stop("Failed to parse WKB geometry"); } return geom; } exactextractr/src/rasterize.cpp0000644000176200001440000000333614500103446016440 0ustar liggesusers// Copyright (c) 2022 ISciences, LLC. // All rights reserved. // // This software is licensed under the Apache License, Version 2.0 (the "License"). // You may not use this file except in compliance with the License. You may // obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and // limitations under the License. // [[Rcpp::plugins("cpp14")]] #include <Rcpp.h> #include "geos_r.h" #include "raster_utils.h" #include "exactextract/src/raster_cell_intersection.h" // [[Rcpp::export]] void CPP_update_max_coverage(Rcpp::NumericVector & extent, Rcpp::NumericVector & res, Rcpp::NumericMatrix & max_coverage, Rcpp::IntegerMatrix & max_coverage_index, Rcpp::NumericMatrix & tot_coverage, const Rcpp::RawVector & wkb, int index) { GEOSAutoHandle geos; auto grid = make_grid(extent, res); auto coverage_fraction = exactextract::raster_cell_intersection(grid, geos.handle, read_wkb(geos.handle, wkb).get()); auto ix = grid.row_offset(coverage_fraction.grid()); auto jx = grid.col_offset(coverage_fraction.grid()); for (size_t i = 0; i < coverage_fraction.rows(); i++) { for (size_t j = 0; j < coverage_fraction.cols(); j++) { auto cov = coverage_fraction(i, j); if (cov > 0) { tot_coverage(i + ix, j + jx) += cov; if (cov > max_coverage(i + ix, j + jx)) { max_coverage(i + ix, j + jx) = cov; max_coverage_index(i + ix, j + jx) = index; } } } } } exactextractr/src/numeric_vector_raster.h0000644000176200001440000000254614500103446020503 0ustar liggesusers// Copyright (c) 2018-2020 ISciences, LLC. // All rights reserved. // // This software is licensed under the Apache License, Version 2.0 (the "License"). // You may not use this file except in compliance with the License. You may // obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License.
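`CPP_update_max_coverage` above accumulates, per cell, the total coverage and the index of the polygon with the greatest coverage fraction; it backs the package's exported `rasterize_polygons()` method (see NAMESPACE), which assigns each cell to the polygon covering the largest share of it. A hedged R-level sketch of that behavior — argument names are illustrative and may differ from the actual signature:

```r
library(raster)
library(sf)
library(exactextractr)

# two rectangles that split a 10x10 raster unevenly at x = 6
squares <- sf::st_sf(
  id = 1:2,
  geometry = sf::st_sfc(
    sf::st_polygon(list(rbind(c(0,0), c(6,0), c(6,10), c(0,10), c(0,0)))),
    sf::st_polygon(list(rbind(c(6,0), c(10,0), c(10,10), c(6,10), c(6,0))))))

r <- raster::raster(nrows = 10, ncols = 10,
                    xmn = 0, xmx = 10, ymn = 0, ymx = 10)

# each cell takes the index of the polygon with the greatest coverage,
# computed exactly via the C++ kernel above
result <- rasterize_polygons(squares, r)
```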
#pragma once #include <Rcpp.h> #include "exactextract/src/raster.h" // Construct a Raster using an R vector for storage // This class uses row-major storage, consistent with the return value of // raster::getValuesBlock, but inconsistent with the representation of // matrices in R. class NumericVectorRaster : public exactextract::AbstractRaster<double> { public: NumericVectorRaster(const Rcpp::NumericVector & vec, const exactextract::Grid<exactextract::bounded_extent> & g) : AbstractRaster<double>(g), m_vec(vec) {} double operator()(size_t row, size_t col) const final { return m_vec[row*cols() + col]; } const Rcpp::NumericVector vec() const { return m_vec; } private: const Rcpp::NumericVector m_vec; }; exactextractr/src/exactextract/0000755000176200001440000000000014500104660016415 5ustar liggesusersexactextractr/src/exactextract/vend/0000755000176200001440000000000014500103446017352 5ustar liggesusersexactextractr/src/exactextract/vend/optional.hpp0000644000176200001440000013600414500103446021714 0ustar liggesusers// // Copyright (c) 2014-2018 Martin Moene // // https://github.com/martinmoene/optional-lite // // Distributed under the Boost Software License, Version 1.0. // (See accompanying file LICENSE.txt or copy at http://www.boost.org/LICENSE_1_0.txt) #pragma once #ifndef NONSTD_OPTIONAL_LITE_HPP #define NONSTD_OPTIONAL_LITE_HPP #define optional_lite_MAJOR 3 #define optional_lite_MINOR 2 #define optional_lite_PATCH 0 #define optional_lite_VERSION optional_STRINGIFY(optional_lite_MAJOR) "." optional_STRINGIFY(optional_lite_MINOR) "." optional_STRINGIFY(optional_lite_PATCH) #define optional_STRINGIFY( x ) optional_STRINGIFY_( x ) #define optional_STRINGIFY_( x ) #x // optional-lite configuration: #define optional_OPTIONAL_DEFAULT 0 #define optional_OPTIONAL_NONSTD 1 #define optional_OPTIONAL_STD 2 #if !defined( optional_CONFIG_SELECT_OPTIONAL ) # define optional_CONFIG_SELECT_OPTIONAL ( optional_HAVE_STD_OPTIONAL ?
optional_OPTIONAL_STD : optional_OPTIONAL_NONSTD ) #endif // Control presence of exception handling (try and auto discover): #ifndef optional_CONFIG_NO_EXCEPTIONS # if defined(__cpp_exceptions) || defined(__EXCEPTIONS) || defined(_CPPUNWIND) # define optional_CONFIG_NO_EXCEPTIONS 0 # else # define optional_CONFIG_NO_EXCEPTIONS 1 # endif #endif // C++ language version detection (C++20 is speculative): // Note: VC14.0/1900 (VS2015) lacks too much from C++14. #ifndef optional_CPLUSPLUS # if defined(_MSVC_LANG ) && !defined(__clang__) # define optional_CPLUSPLUS (_MSC_VER == 1900 ? 201103L : _MSVC_LANG ) # else # define optional_CPLUSPLUS __cplusplus # endif #endif #define optional_CPP98_OR_GREATER ( optional_CPLUSPLUS >= 199711L ) #define optional_CPP11_OR_GREATER ( optional_CPLUSPLUS >= 201103L ) #define optional_CPP11_OR_GREATER_ ( optional_CPLUSPLUS >= 201103L ) #define optional_CPP14_OR_GREATER ( optional_CPLUSPLUS >= 201402L ) #define optional_CPP17_OR_GREATER ( optional_CPLUSPLUS >= 201703L ) #define optional_CPP20_OR_GREATER ( optional_CPLUSPLUS >= 202000L ) // C++ language version (represent 98 as 3): #define optional_CPLUSPLUS_V ( optional_CPLUSPLUS / 100 - (optional_CPLUSPLUS > 200000 ? 
2000 : 1994) ) // Use C++17 std::optional if available and requested: #if optional_CPP17_OR_GREATER && defined(__has_include ) # if __has_include( ) # define optional_HAVE_STD_OPTIONAL 1 # else # define optional_HAVE_STD_OPTIONAL 0 # endif #else # define optional_HAVE_STD_OPTIONAL 0 #endif #define optional_USES_STD_OPTIONAL ( (optional_CONFIG_SELECT_OPTIONAL == optional_OPTIONAL_STD) || ((optional_CONFIG_SELECT_OPTIONAL == optional_OPTIONAL_DEFAULT) && optional_HAVE_STD_OPTIONAL) ) // // in_place: code duplicated in any-lite, expected-lite, optional-lite, value-ptr-lite, variant-lite: // #ifndef nonstd_lite_HAVE_IN_PLACE_TYPES #define nonstd_lite_HAVE_IN_PLACE_TYPES 1 // C++17 std::in_place in : #if optional_CPP17_OR_GREATER #include namespace nonstd { using std::in_place; using std::in_place_type; using std::in_place_index; using std::in_place_t; using std::in_place_type_t; using std::in_place_index_t; #define nonstd_lite_in_place_t( T) std::in_place_t #define nonstd_lite_in_place_type_t( T) std::in_place_type_t #define nonstd_lite_in_place_index_t(K) std::in_place_index_t #define nonstd_lite_in_place( T) std::in_place_t{} #define nonstd_lite_in_place_type( T) std::in_place_type_t{} #define nonstd_lite_in_place_index(K) std::in_place_index_t{} } // namespace nonstd #else // optional_CPP17_OR_GREATER #include namespace nonstd { namespace detail { template< class T > struct in_place_type_tag {}; template< std::size_t K > struct in_place_index_tag {}; } // namespace detail struct in_place_t {}; template< class T > inline in_place_t in_place( detail::in_place_type_tag /*unused*/ = detail::in_place_type_tag() ) { return in_place_t(); } template< std::size_t K > inline in_place_t in_place( detail::in_place_index_tag /*unused*/ = detail::in_place_index_tag() ) { return in_place_t(); } template< class T > inline in_place_t in_place_type( detail::in_place_type_tag /*unused*/ = detail::in_place_type_tag() ) { return in_place_t(); } template< std::size_t K > inline 
in_place_t in_place_index( detail::in_place_index_tag /*unused*/ = detail::in_place_index_tag() ) { return in_place_t(); } // mimic templated typedef: #define nonstd_lite_in_place_t( T) nonstd::in_place_t(&)( nonstd::detail::in_place_type_tag ) #define nonstd_lite_in_place_type_t( T) nonstd::in_place_t(&)( nonstd::detail::in_place_type_tag ) #define nonstd_lite_in_place_index_t(K) nonstd::in_place_t(&)( nonstd::detail::in_place_index_tag ) #define nonstd_lite_in_place( T) nonstd::in_place_type #define nonstd_lite_in_place_type( T) nonstd::in_place_type #define nonstd_lite_in_place_index(K) nonstd::in_place_index } // namespace nonstd #endif // optional_CPP17_OR_GREATER #endif // nonstd_lite_HAVE_IN_PLACE_TYPES // // Using std::optional: // #if optional_USES_STD_OPTIONAL #include namespace nonstd { using std::optional; using std::bad_optional_access; using std::hash; using std::nullopt; using std::nullopt_t; using std::operator==; using std::operator!=; using std::operator<; using std::operator<=; using std::operator>; using std::operator>=; using std::make_optional; using std::swap; } #else // optional_USES_STD_OPTIONAL #include #include // optional-lite alignment configuration: #ifndef optional_CONFIG_MAX_ALIGN_HACK # define optional_CONFIG_MAX_ALIGN_HACK 0 #endif #ifndef optional_CONFIG_ALIGN_AS // no default, used in #if defined() #endif #ifndef optional_CONFIG_ALIGN_AS_FALLBACK # define optional_CONFIG_ALIGN_AS_FALLBACK double #endif // Compiler warning suppression: #if defined(__clang__) # pragma clang diagnostic push # pragma clang diagnostic ignored "-Wundef" #elif defined(__GNUC__) # pragma GCC diagnostic push # pragma GCC diagnostic ignored "-Wundef" #elif defined(_MSC_VER ) # pragma warning( push ) #endif // half-open range [lo..hi): #define optional_BETWEEN( v, lo, hi ) ( (lo) <= (v) && (v) < (hi) ) // Compiler versions: // // MSVC++ 6.0 _MSC_VER == 1200 (Visual Studio 6.0) // MSVC++ 7.0 _MSC_VER == 1300 (Visual Studio .NET 2002) // MSVC++ 7.1 _MSC_VER 
== 1310 (Visual Studio .NET 2003) // MSVC++ 8.0 _MSC_VER == 1400 (Visual Studio 2005) // MSVC++ 9.0 _MSC_VER == 1500 (Visual Studio 2008) // MSVC++ 10.0 _MSC_VER == 1600 (Visual Studio 2010) // MSVC++ 11.0 _MSC_VER == 1700 (Visual Studio 2012) // MSVC++ 12.0 _MSC_VER == 1800 (Visual Studio 2013) // MSVC++ 14.0 _MSC_VER == 1900 (Visual Studio 2015) // MSVC++ 14.1 _MSC_VER >= 1910 (Visual Studio 2017) #if defined(_MSC_VER ) && !defined(__clang__) # define optional_COMPILER_MSVC_VER (_MSC_VER ) # define optional_COMPILER_MSVC_VERSION (_MSC_VER / 10 - 10 * ( 5 + (_MSC_VER < 1900 ) ) ) #else # define optional_COMPILER_MSVC_VER 0 # define optional_COMPILER_MSVC_VERSION 0 #endif #define optional_COMPILER_VERSION( major, minor, patch ) ( 10 * (10 * (major) + (minor) ) + (patch) ) #if defined(__GNUC__) && !defined(__clang__) # define optional_COMPILER_GNUC_VERSION optional_COMPILER_VERSION(__GNUC__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__) #else # define optional_COMPILER_GNUC_VERSION 0 #endif #if defined(__clang__) # define optional_COMPILER_CLANG_VERSION optional_COMPILER_VERSION(__clang_major__, __clang_minor__, __clang_patchlevel__) #else # define optional_COMPILER_CLANG_VERSION 0 #endif #if optional_BETWEEN(optional_COMPILER_MSVC_VERSION, 70, 140 ) # pragma warning( disable: 4345 ) // initialization behavior changed #endif #if optional_BETWEEN(optional_COMPILER_MSVC_VERSION, 70, 150 ) # pragma warning( disable: 4814 ) // in C++14 'constexpr' will not imply 'const' #endif // Presence of language and library features: #define optional_HAVE(FEATURE) ( optional_HAVE_##FEATURE ) #ifdef _HAS_CPP0X # define optional_HAS_CPP0X _HAS_CPP0X #else # define optional_HAS_CPP0X 0 #endif // Unless defined otherwise below, consider VC14 as C++11 for optional-lite: #if optional_COMPILER_MSVC_VER >= 1900 # undef optional_CPP11_OR_GREATER # define optional_CPP11_OR_GREATER 1 #endif #define optional_CPP11_90 (optional_CPP11_OR_GREATER_ || optional_COMPILER_MSVC_VER >= 1500) #define 
optional_CPP11_100 (optional_CPP11_OR_GREATER_ || optional_COMPILER_MSVC_VER >= 1600) #define optional_CPP11_110 (optional_CPP11_OR_GREATER_ || optional_COMPILER_MSVC_VER >= 1700) #define optional_CPP11_120 (optional_CPP11_OR_GREATER_ || optional_COMPILER_MSVC_VER >= 1800) #define optional_CPP11_140 (optional_CPP11_OR_GREATER_ || optional_COMPILER_MSVC_VER >= 1900) #define optional_CPP11_141 (optional_CPP11_OR_GREATER_ || optional_COMPILER_MSVC_VER >= 1910) #define optional_CPP14_000 (optional_CPP14_OR_GREATER) #define optional_CPP17_000 (optional_CPP17_OR_GREATER) // Presence of C++11 language features: #define optional_HAVE_CONSTEXPR_11 optional_CPP11_140 #define optional_HAVE_IS_DEFAULT optional_CPP11_140 #define optional_HAVE_NOEXCEPT optional_CPP11_140 #define optional_HAVE_NULLPTR optional_CPP11_100 #define optional_HAVE_REF_QUALIFIER optional_CPP11_140 // Presence of C++14 language features: #define optional_HAVE_CONSTEXPR_14 optional_CPP14_000 // Presence of C++17 language features: #define optional_HAVE_NODISCARD optional_CPP17_000 // Presence of C++ library features: #define optional_HAVE_CONDITIONAL optional_CPP11_120 #define optional_HAVE_REMOVE_CV optional_CPP11_120 #define optional_HAVE_TYPE_TRAITS optional_CPP11_90 #define optional_HAVE_TR1_TYPE_TRAITS (!! optional_COMPILER_GNUC_VERSION ) #define optional_HAVE_TR1_ADD_POINTER (!! 
optional_COMPILER_GNUC_VERSION ) // C++ feature usage: #if optional_HAVE( CONSTEXPR_11 ) # define optional_constexpr constexpr #else # define optional_constexpr /*constexpr*/ #endif #if optional_HAVE( IS_DEFAULT ) # define optional_is_default = default; #else # define optional_is_default {} #endif #if optional_HAVE( CONSTEXPR_14 ) # define optional_constexpr14 constexpr #else # define optional_constexpr14 /*constexpr*/ #endif #if optional_HAVE( NODISCARD ) # define optional_nodiscard [[nodiscard]] #else # define optional_nodiscard /*[[nodiscard]]*/ #endif #if optional_HAVE( NOEXCEPT ) # define optional_noexcept noexcept #else # define optional_noexcept /*noexcept*/ #endif #if optional_HAVE( NULLPTR ) # define optional_nullptr nullptr #else # define optional_nullptr NULL #endif #if optional_HAVE( REF_QUALIFIER ) // NOLINTNEXTLINE( bugprone-macro-parentheses ) # define optional_ref_qual & # define optional_refref_qual && #else # define optional_ref_qual /*&*/ # define optional_refref_qual /*&&*/ #endif // additional includes: #if optional_CONFIG_NO_EXCEPTIONS // already included: #else # include #endif #if optional_CPP11_OR_GREATER # include #endif #if optional_HAVE( INITIALIZER_LIST ) # include #endif #if optional_HAVE( TYPE_TRAITS ) # include #elif optional_HAVE( TR1_TYPE_TRAITS ) # include #endif // Method enabling #if optional_CPP11_OR_GREATER #define optional_REQUIRES_0(...) \ template< bool B = (__VA_ARGS__), typename std::enable_if::type = 0 > #define optional_REQUIRES_T(...) \ , typename = typename std::enable_if< (__VA_ARGS__), nonstd::optional_lite::detail::enabler >::type #define optional_REQUIRES_R(R, ...) \ typename std::enable_if< (__VA_ARGS__), R>::type #define optional_REQUIRES_A(...) 
\ , typename std::enable_if< (__VA_ARGS__), void*>::type = nullptr #endif // // optional: // namespace nonstd { namespace optional_lite { namespace std11 { #if optional_CPP11_OR_GREATER using std::move; #else template< typename T > T & move( T & t ) { return t; } #endif #if optional_HAVE( CONDITIONAL ) using std::conditional; #else template< bool B, typename T, typename F > struct conditional { typedef T type; }; template< typename T, typename F > struct conditional { typedef F type; }; #endif // optional_HAVE_CONDITIONAL } // namespace std11 #if optional_CPP11_OR_GREATER /// type traits C++17: namespace std17 { #if optional_CPP17_OR_GREATER using std::is_swappable; using std::is_nothrow_swappable; #elif optional_CPP11_OR_GREATER namespace detail { using std::swap; struct is_swappable { template< typename T, typename = decltype( swap( std::declval(), std::declval() ) ) > static std::true_type test( int /*unused*/ ); template< typename > static std::false_type test(...); }; struct is_nothrow_swappable { // wrap noexcept(expr) in separate function as work-around for VC140 (VS2015): template< typename T > static constexpr bool satisfies() { return noexcept( swap( std::declval(), std::declval() ) ); } template< typename T > static auto test( int /*unused*/ ) -> std::integral_constant()>{} template< typename > static auto test(...) 
-> std::false_type; }; } // namespace detail // is [nothow] swappable: template< typename T > struct is_swappable : decltype( detail::is_swappable::test(0) ){}; template< typename T > struct is_nothrow_swappable : decltype( detail::is_nothrow_swappable::test(0) ){}; #endif // optional_CPP17_OR_GREATER } // namespace std17 /// type traits C++20: namespace std20 { template< typename T > struct remove_cvref { typedef typename std::remove_cv< typename std::remove_reference::type >::type type; }; } // namespace std20 #endif // optional_CPP11_OR_GREATER /// class optional template< typename T > class optional; namespace detail { // for optional_REQUIRES_T #if optional_CPP11_OR_GREATER enum class enabler{}; #endif // C++11 emulation: struct nulltype{}; template< typename Head, typename Tail > struct typelist { typedef Head head; typedef Tail tail; }; #if optional_CONFIG_MAX_ALIGN_HACK // Max align, use most restricted type for alignment: #define optional_UNIQUE( name ) optional_UNIQUE2( name, __LINE__ ) #define optional_UNIQUE2( name, line ) optional_UNIQUE3( name, line ) #define optional_UNIQUE3( name, line ) name ## line #define optional_ALIGN_TYPE( type ) \ type optional_UNIQUE( _t ); struct_t< type > optional_UNIQUE( _st ) template< typename T > struct struct_t { T _; }; union max_align_t { optional_ALIGN_TYPE( char ); optional_ALIGN_TYPE( short int ); optional_ALIGN_TYPE( int ); optional_ALIGN_TYPE( long int ); optional_ALIGN_TYPE( float ); optional_ALIGN_TYPE( double ); optional_ALIGN_TYPE( long double ); optional_ALIGN_TYPE( char * ); optional_ALIGN_TYPE( short int * ); optional_ALIGN_TYPE( int * ); optional_ALIGN_TYPE( long int * ); optional_ALIGN_TYPE( float * ); optional_ALIGN_TYPE( double * ); optional_ALIGN_TYPE( long double * ); optional_ALIGN_TYPE( void * ); #ifdef HAVE_LONG_LONG optional_ALIGN_TYPE( long long ); #endif struct Unknown; Unknown ( * optional_UNIQUE(_) )( Unknown ); Unknown * Unknown::* optional_UNIQUE(_); Unknown ( Unknown::* 
optional_UNIQUE(_) )( Unknown ); struct_t< Unknown ( * )( Unknown) > optional_UNIQUE(_); struct_t< Unknown * Unknown::* > optional_UNIQUE(_); struct_t< Unknown ( Unknown::* )(Unknown) > optional_UNIQUE(_); }; #undef optional_UNIQUE #undef optional_UNIQUE2 #undef optional_UNIQUE3 #undef optional_ALIGN_TYPE #elif defined( optional_CONFIG_ALIGN_AS ) // optional_CONFIG_MAX_ALIGN_HACK // Use user-specified type for alignment: #define optional_ALIGN_AS( unused ) \ optional_CONFIG_ALIGN_AS #else // optional_CONFIG_MAX_ALIGN_HACK // Determine POD type to use for alignment: #define optional_ALIGN_AS( to_align ) \ typename type_of_size< alignment_types, alignment_of< to_align >::value >::type template< typename T > struct alignment_of; template< typename T > struct alignment_of_hack { char c; T t; alignment_of_hack(); }; template< size_t A, size_t S > struct alignment_logic { enum { value = A < S ? A : S }; }; template< typename T > struct alignment_of { enum { value = alignment_logic< sizeof( alignment_of_hack ) - sizeof(T), sizeof(T) >::value }; }; template< typename List, size_t N > struct type_of_size { typedef typename std11::conditional< N == sizeof( typename List::head ), typename List::head, typename type_of_size::type >::type type; }; template< size_t N > struct type_of_size< nulltype, N > { typedef optional_CONFIG_ALIGN_AS_FALLBACK type; }; template< typename T> struct struct_t { T _; }; #define optional_ALIGN_TYPE( type ) \ typelist< type , typelist< struct_t< type > struct Unknown; typedef optional_ALIGN_TYPE( char ), optional_ALIGN_TYPE( short ), optional_ALIGN_TYPE( int ), optional_ALIGN_TYPE( long ), optional_ALIGN_TYPE( float ), optional_ALIGN_TYPE( double ), optional_ALIGN_TYPE( long double ), optional_ALIGN_TYPE( char *), optional_ALIGN_TYPE( short * ), optional_ALIGN_TYPE( int * ), optional_ALIGN_TYPE( long * ), optional_ALIGN_TYPE( float * ), optional_ALIGN_TYPE( double * ), optional_ALIGN_TYPE( long double * ), optional_ALIGN_TYPE( Unknown ( * )( Unknown 
) ), optional_ALIGN_TYPE( Unknown * Unknown::* ), optional_ALIGN_TYPE( Unknown ( Unknown::* )( Unknown ) ), nulltype > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > alignment_types; #undef optional_ALIGN_TYPE #endif // optional_CONFIG_MAX_ALIGN_HACK /// C++03 constructed union to hold value. template< typename T > union storage_t { //private: // template< typename > friend class optional; typedef T value_type; storage_t() optional_is_default explicit storage_t( value_type const & v ) { construct_value( v ); } void construct_value( value_type const & v ) { ::new( value_ptr() ) value_type( v ); } #if optional_CPP11_OR_GREATER explicit storage_t( value_type && v ) { construct_value( std::move( v ) ); } void construct_value( value_type && v ) { ::new( value_ptr() ) value_type( std::move( v ) ); } template< class... Args > void emplace( Args&&... args ) { ::new( value_ptr() ) value_type( std::forward(args)... ); } template< class U, class... Args > void emplace( std::initializer_list il, Args&&... args ) { ::new( value_ptr() ) value_type( il, std::forward(args)... 
); }

#endif

    void destruct_value()
    {
        value_ptr()->~T();
    }

    optional_nodiscard value_type const * value_ptr() const
    {
        return as<value_type>();
    }

    value_type * value_ptr()
    {
        return as<value_type>();
    }

    optional_nodiscard value_type const & value() const optional_ref_qual
    {
        return * value_ptr();
    }

    value_type & value() optional_ref_qual
    {
        return * value_ptr();
    }

#if optional_CPP11_OR_GREATER

    optional_nodiscard value_type const && value() const optional_refref_qual
    {
        return std::move( value() );
    }

    value_type && value() optional_refref_qual
    {
        return std::move( value() );
    }

#endif

#if optional_CPP11_OR_GREATER

    using aligned_storage_t = typename std::aligned_storage< sizeof(value_type), alignof(value_type) >::type;
    aligned_storage_t data;

#elif optional_CONFIG_MAX_ALIGN_HACK

    typedef struct { unsigned char data[ sizeof(value_type) ]; } aligned_storage_t;

    max_align_t hack;
    aligned_storage_t data;

#else
    typedef optional_ALIGN_AS(value_type) align_as_type;

    typedef struct { align_as_type data[ 1 + ( sizeof(value_type) - 1 ) / sizeof(align_as_type) ]; } aligned_storage_t;

    aligned_storage_t data;

#   undef optional_ALIGN_AS

#endif // optional_CONFIG_MAX_ALIGN_HACK

    optional_nodiscard void * ptr() optional_noexcept
    {
        return &data;
    }

    optional_nodiscard void const * ptr() const optional_noexcept
    {
        return &data;
    }

    template< typename U >
    optional_nodiscard U * as()
    {
        return reinterpret_cast<U*>( ptr() );
    }

    template< typename U >
    optional_nodiscard U const * as() const
    {
        return reinterpret_cast<U const *>( ptr() );
    }
};

} // namespace detail

/// disengaged state tag

struct nullopt_t
{
    struct init{};
    explicit optional_constexpr nullopt_t( init /*unused*/ ) optional_noexcept {}
};

#if optional_HAVE( CONSTEXPR_11 )
constexpr nullopt_t nullopt{ nullopt_t::init{} };
#else
// extra parenthesis to prevent the most vexing parse:
const nullopt_t nullopt(( nullopt_t::init() ));
#endif

/// optional access error

#if !
optional_CONFIG_NO_EXCEPTIONS class bad_optional_access : public std::logic_error { public: explicit bad_optional_access() : logic_error( "bad optional access" ) {} }; #endif //optional_CONFIG_NO_EXCEPTIONS /// optional template< typename T> class optional { private: template< typename > friend class optional; typedef void (optional::*safe_bool)() const; public: typedef T value_type; // x.x.3.1, constructors // 1a - default construct optional_constexpr optional() optional_noexcept : has_value_( false ) , contained() {} // 1b - construct explicitly empty // NOLINTNEXTLINE( google-explicit-constructor, hicpp-explicit-conversions ) optional_constexpr optional( nullopt_t /*unused*/ ) optional_noexcept : has_value_( false ) , contained() {} // 2 - copy-construct optional_constexpr14 optional( optional const & other #if optional_CPP11_OR_GREATER optional_REQUIRES_A( true || std::is_copy_constructible::value ) #endif ) : has_value_( other.has_value() ) { if ( other.has_value() ) { contained.construct_value( other.contained.value() ); } } #if optional_CPP11_OR_GREATER // 3 (C++11) - move-construct from optional optional_constexpr14 optional( optional && other optional_REQUIRES_A( true || std::is_move_constructible::value ) // NOLINTNEXTLINE( performance-noexcept-move-constructor ) ) noexcept( std::is_nothrow_move_constructible::value ) : has_value_( other.has_value() ) { if ( other.has_value() ) { contained.construct_value( std::move( other.contained.value() ) ); } } // 4a (C++11) - explicit converting copy-construct from optional template< typename U > explicit optional( optional const & other optional_REQUIRES_A( std::is_constructible::value && !std::is_constructible & >::value && !std::is_constructible && >::value && !std::is_constructible const & >::value && !std::is_constructible const && >::value && !std::is_convertible< optional & , T>::value && !std::is_convertible< optional && , T>::value && !std::is_convertible< optional const & , T>::value && 
!std::is_convertible< optional const &&, T>::value && !std::is_convertible< U const & , T>::value /*=> explicit */ ) ) : has_value_( other.has_value() ) { if ( other.has_value() ) { contained.construct_value( T{ other.contained.value() } ); } } #endif // optional_CPP11_OR_GREATER // 4b (C++98 and later) - non-explicit converting copy-construct from optional template< typename U > // NOLINTNEXTLINE( google-explicit-constructor, hicpp-explicit-conversions ) optional( optional const & other #if optional_CPP11_OR_GREATER optional_REQUIRES_A( std::is_constructible::value && !std::is_constructible & >::value && !std::is_constructible && >::value && !std::is_constructible const & >::value && !std::is_constructible const && >::value && !std::is_convertible< optional & , T>::value && !std::is_convertible< optional && , T>::value && !std::is_convertible< optional const & , T>::value && !std::is_convertible< optional const &&, T>::value && std::is_convertible< U const & , T>::value /*=> non-explicit */ ) #endif // optional_CPP11_OR_GREATER ) : has_value_( other.has_value() ) { if ( other.has_value() ) { contained.construct_value( other.contained.value() ); } } #if optional_CPP11_OR_GREATER // 5a (C++11) - explicit converting move-construct from optional template< typename U > explicit optional( optional && other optional_REQUIRES_A( std::is_constructible::value && !std::is_constructible & >::value && !std::is_constructible && >::value && !std::is_constructible const & >::value && !std::is_constructible const && >::value && !std::is_convertible< optional & , T>::value && !std::is_convertible< optional && , T>::value && !std::is_convertible< optional const & , T>::value && !std::is_convertible< optional const &&, T>::value && !std::is_convertible< U &&, T>::value /*=> explicit */ ) ) : has_value_( other.has_value() ) { if ( other.has_value() ) { contained.construct_value( T{ std::move( other.contained.value() ) } ); } } // 5a (C++11) - non-explicit converting move-construct 
from optional template< typename U > // NOLINTNEXTLINE( google-explicit-constructor, hicpp-explicit-conversions ) optional( optional && other optional_REQUIRES_A( std::is_constructible::value && !std::is_constructible & >::value && !std::is_constructible && >::value && !std::is_constructible const & >::value && !std::is_constructible const && >::value && !std::is_convertible< optional & , T>::value && !std::is_convertible< optional && , T>::value && !std::is_convertible< optional const & , T>::value && !std::is_convertible< optional const &&, T>::value && std::is_convertible< U &&, T>::value /*=> non-explicit */ ) ) : has_value_( other.has_value() ) { if ( other.has_value() ) { contained.construct_value( std::move( other.contained.value() ) ); } } // 6 (C++11) - in-place construct template< typename... Args optional_REQUIRES_T( std::is_constructible::value ) > optional_constexpr explicit optional( nonstd_lite_in_place_t(T), Args&&... args ) : has_value_( true ) , contained( T( std::forward(args)...) ) {} // 7 (C++11) - in-place construct, initializer-list template< typename U, typename... Args optional_REQUIRES_T( std::is_constructible&, Args&&...>::value ) > optional_constexpr explicit optional( nonstd_lite_in_place_t(T), std::initializer_list il, Args&&... args ) : has_value_( true ) , contained( T( il, std::forward(args)...) 
) {} // 8a (C++11) - explicit move construct from value template< typename U = value_type > optional_constexpr explicit optional( U && value optional_REQUIRES_A( std::is_constructible::value && !std::is_same::type, nonstd_lite_in_place_t(U)>::value && !std::is_same::type, optional>::value && !std::is_convertible::value /*=> explicit */ ) ) : has_value_( true ) , contained( T{ std::forward( value ) } ) {} // 8b (C++11) - non-explicit move construct from value template< typename U = value_type > // NOLINTNEXTLINE( google-explicit-constructor, hicpp-explicit-conversions ) optional_constexpr optional( U && value optional_REQUIRES_A( std::is_constructible::value && !std::is_same::type, nonstd_lite_in_place_t(U)>::value && !std::is_same::type, optional>::value && std::is_convertible::value /*=> non-explicit */ ) ) : has_value_( true ) , contained( std::forward( value ) ) {} #else // optional_CPP11_OR_GREATER // 8 (C++98) optional( value_type const & value ) : has_value_( true ) , contained( value ) {} #endif // optional_CPP11_OR_GREATER // x.x.3.2, destructor ~optional() { if ( has_value() ) { contained.destruct_value(); } } // x.x.3.3, assignment // 1 (C++98and later) - assign explicitly empty optional & operator=( nullopt_t /*unused*/) optional_noexcept { reset(); return *this; } // 2 (C++98and later) - copy-assign from optional #if optional_CPP11_OR_GREATER // NOLINTNEXTLINE( cppcoreguidelines-c-copy-assignment-signature, misc-unconventional-assign-operator ) optional_REQUIRES_R( optional &, true // std::is_copy_constructible::value // && std::is_copy_assignable::value ) operator=( optional const & other ) noexcept( std::is_nothrow_move_assignable::value && std::is_nothrow_move_constructible::value ) #else optional & operator=( optional const & other ) #endif { if ( (has_value() == true ) && (other.has_value() == false) ) { reset(); } else if ( (has_value() == false) && (other.has_value() == true ) ) { initialize( *other ); } else if ( (has_value() == true ) && 
(other.has_value() == true ) ) { contained.value() = *other; } return *this; } #if optional_CPP11_OR_GREATER // 3 (C++11) - move-assign from optional // NOLINTNEXTLINE( cppcoreguidelines-c-copy-assignment-signature, misc-unconventional-assign-operator ) optional_REQUIRES_R( optional &, true // std::is_move_constructible::value // && std::is_move_assignable::value ) operator=( optional && other ) noexcept { if ( (has_value() == true ) && (other.has_value() == false) ) { reset(); } else if ( (has_value() == false) && (other.has_value() == true ) ) { initialize( std::move( *other ) ); } else if ( (has_value() == true ) && (other.has_value() == true ) ) { contained.value() = std::move( *other ); } return *this; } // 4 (C++11) - move-assign from value template< typename U = T > // NOLINTNEXTLINE( cppcoreguidelines-c-copy-assignment-signature, misc-unconventional-assign-operator ) optional_REQUIRES_R( optional &, std::is_constructible::value && std::is_assignable::value && !std::is_same::type, nonstd_lite_in_place_t(U)>::value && !std::is_same::type, optional>::value && !(std::is_scalar::value && std::is_same::type>::value) ) operator=( U && value ) { if ( has_value() ) { contained.value() = std::forward( value ); } else { initialize( T( std::forward( value ) ) ); } return *this; } #else // optional_CPP11_OR_GREATER // 4 (C++98) - copy-assign from value template< typename U /*= T*/ > optional & operator=( U const & value ) { if ( has_value() ) contained.value() = value; else initialize( T( value ) ); return *this; } #endif // optional_CPP11_OR_GREATER // 5 (C++98 and later) - converting copy-assign from optional template< typename U > #if optional_CPP11_OR_GREATER // NOLINTNEXTLINE( cppcoreguidelines-c-copy-assignment-signature, misc-unconventional-assign-operator ) optional_REQUIRES_R( optional&, std::is_constructible< T , U const &>::value && std::is_assignable< T&, U const &>::value && !std::is_constructible & >::value && !std::is_constructible && >::value && 
!std::is_constructible const & >::value && !std::is_constructible const && >::value && !std::is_convertible< optional & , T>::value && !std::is_convertible< optional && , T>::value && !std::is_convertible< optional const & , T>::value && !std::is_convertible< optional const &&, T>::value && !std::is_assignable< T&, optional & >::value && !std::is_assignable< T&, optional && >::value && !std::is_assignable< T&, optional const & >::value && !std::is_assignable< T&, optional const && >::value ) #else optional& #endif // optional_CPP11_OR_GREATER operator=( optional const & other ) { return *this = optional( other ); } #if optional_CPP11_OR_GREATER // 6 (C++11) - converting move-assign from optional template< typename U > // NOLINTNEXTLINE( cppcoreguidelines-c-copy-assignment-signature, misc-unconventional-assign-operator ) optional_REQUIRES_R( optional&, std::is_constructible< T , U>::value && std::is_assignable< T&, U>::value && !std::is_constructible & >::value && !std::is_constructible && >::value && !std::is_constructible const & >::value && !std::is_constructible const && >::value && !std::is_convertible< optional & , T>::value && !std::is_convertible< optional && , T>::value && !std::is_convertible< optional const & , T>::value && !std::is_convertible< optional const &&, T>::value && !std::is_assignable< T&, optional & >::value && !std::is_assignable< T&, optional && >::value && !std::is_assignable< T&, optional const & >::value && !std::is_assignable< T&, optional const && >::value ) operator=( optional && other ) { return *this = optional( std::move( other ) ); } // 7 (C++11) - emplace template< typename... Args optional_REQUIRES_T( std::is_constructible::value ) > T& emplace( Args&&... args ) { *this = nullopt; contained.emplace( std::forward(args)... ); has_value_ = true; return contained.value(); } // 8 (C++11) - emplace, initializer-list template< typename U, typename... 
Args optional_REQUIRES_T( std::is_constructible&, Args&&...>::value ) > T& emplace( std::initializer_list il, Args&&... args ) { *this = nullopt; contained.emplace( il, std::forward(args)... ); has_value_ = true; return contained.value(); } #endif // optional_CPP11_OR_GREATER // x.x.3.4, swap void swap( optional & other ) #if optional_CPP11_OR_GREATER noexcept( std::is_nothrow_move_constructible::value && std17::is_nothrow_swappable::value ) #endif { using std::swap; if ( (has_value() == true ) && (other.has_value() == true ) ) { swap( **this, *other ); } else if ( (has_value() == false) && (other.has_value() == true ) ) { initialize( std11::move(*other) ); other.reset(); } else if ( (has_value() == true ) && (other.has_value() == false) ) { other.initialize( std11::move(**this) ); reset(); } } // x.x.3.5, observers optional_constexpr value_type const * operator ->() const { return assert( has_value() ), contained.value_ptr(); } optional_constexpr14 value_type * operator ->() { return assert( has_value() ), contained.value_ptr(); } optional_constexpr value_type const & operator *() const optional_ref_qual { return assert( has_value() ), contained.value(); } optional_constexpr14 value_type & operator *() optional_ref_qual { return assert( has_value() ), contained.value(); } #if optional_HAVE( REF_QUALIFIER ) && ( !optional_COMPILER_GNUC_VERSION || optional_COMPILER_GNUC_VERSION >= 490 ) optional_constexpr value_type const && operator *() const optional_refref_qual { return std::move( **this ); } optional_constexpr14 value_type && operator *() optional_refref_qual { return std::move( **this ); } #endif #if optional_CPP11_OR_GREATER optional_constexpr explicit operator bool() const optional_noexcept { return has_value(); } #else optional_constexpr operator safe_bool() const optional_noexcept { return has_value() ? 
&optional::this_type_does_not_support_comparisons : 0; } #endif // NOLINTNEXTLINE( modernize-use-nodiscard ) /*optional_nodiscard*/ optional_constexpr bool has_value() const optional_noexcept { return has_value_; } // NOLINTNEXTLINE( modernize-use-nodiscard ) /*optional_nodiscard*/ optional_constexpr14 value_type const & value() const optional_ref_qual { #if optional_CONFIG_NO_EXCEPTIONS assert( has_value() ); #else if ( ! has_value() ) { throw bad_optional_access(); } #endif return contained.value(); } optional_constexpr14 value_type & value() optional_ref_qual { #if optional_CONFIG_NO_EXCEPTIONS assert( has_value() ); #else if ( ! has_value() ) { throw bad_optional_access(); } #endif return contained.value(); } #if optional_HAVE( REF_QUALIFIER ) && ( !optional_COMPILER_GNUC_VERSION || optional_COMPILER_GNUC_VERSION >= 490 ) // NOLINTNEXTLINE( modernize-use-nodiscard ) /*optional_nodiscard*/ optional_constexpr value_type const && value() const optional_refref_qual { return std::move( value() ); } optional_constexpr14 value_type && value() optional_refref_qual { return std::move( value() ); } #endif #if optional_CPP11_OR_GREATER template< typename U > optional_constexpr value_type value_or( U && v ) const optional_ref_qual { return has_value() ? contained.value() : static_cast(std::forward( v ) ); } template< typename U > optional_constexpr14 value_type value_or( U && v ) optional_refref_qual { return has_value() ? std::move( contained.value() ) : static_cast(std::forward( v ) ); } #else template< typename U > optional_constexpr value_type value_or( U const & v ) const { return has_value() ? contained.value() : static_cast( v ); } #endif // optional_CPP11_OR_GREATER // x.x.3.6, modifiers void reset() optional_noexcept { if ( has_value() ) { contained.destruct_value(); } has_value_ = false; } private: void this_type_does_not_support_comparisons() const {} template< typename V > void initialize( V const & value ) { assert( ! 
has_value() );
        contained.construct_value( value );
        has_value_ = true;
    }

#if optional_CPP11_OR_GREATER

    template< typename V >
    void initialize( V && value )
    {
        assert( ! has_value() );
        contained.construct_value( std::move( value ) );
        has_value_ = true;
    }

#endif

private:
    bool has_value_;
    detail::storage_t< value_type > contained;
};

// Relational operators

template< typename T, typename U >
inline optional_constexpr bool operator==( optional<T> const & x, optional<U> const & y )
{
    return bool(x) != bool(y) ? false : !bool( x ) ? true : *x == *y;
}

template< typename T, typename U >
inline optional_constexpr bool operator!=( optional<T> const & x, optional<U> const & y )
{
    return !(x == y);
}

template< typename T, typename U >
inline optional_constexpr bool operator<( optional<T> const & x, optional<U> const & y )
{
    return (!y) ? false : (!x) ? true : *x < *y;
}

template< typename T, typename U >
inline optional_constexpr bool operator>( optional<T> const & x, optional<U> const & y )
{
    return (y < x);
}

template< typename T, typename U >
inline optional_constexpr bool operator<=( optional<T> const & x, optional<U> const & y )
{
    return !(y < x);
}

template< typename T, typename U >
inline optional_constexpr bool operator>=( optional<T> const & x, optional<U> const & y )
{
    return !(x < y);
}

// Comparison with nullopt

template< typename T >
inline optional_constexpr bool operator==( optional<T> const & x, nullopt_t /*unused*/ ) optional_noexcept
{
    return (!x);
}

template< typename T >
inline optional_constexpr bool operator==( nullopt_t /*unused*/, optional<T> const & x ) optional_noexcept
{
    return (!x);
}

template< typename T >
inline optional_constexpr bool operator!=( optional<T> const & x, nullopt_t /*unused*/ ) optional_noexcept
{
    return bool(x);
}

template< typename T >
inline optional_constexpr bool operator!=( nullopt_t /*unused*/, optional<T> const & x ) optional_noexcept
{
    return bool(x);
}

template< typename T >
inline optional_constexpr bool operator<( optional<T> const & /*unused*/, nullopt_t /*unused*/ )
optional_noexcept { return false; } template< typename T > inline optional_constexpr bool operator<( nullopt_t /*unused*/, optional const & x ) optional_noexcept { return bool(x); } template< typename T > inline optional_constexpr bool operator<=( optional const & x, nullopt_t /*unused*/ ) optional_noexcept { return (!x); } template< typename T > inline optional_constexpr bool operator<=( nullopt_t /*unused*/, optional const & /*unused*/ ) optional_noexcept { return true; } template< typename T > inline optional_constexpr bool operator>( optional const & x, nullopt_t /*unused*/ ) optional_noexcept { return bool(x); } template< typename T > inline optional_constexpr bool operator>( nullopt_t /*unused*/, optional const & /*unused*/ ) optional_noexcept { return false; } template< typename T > inline optional_constexpr bool operator>=( optional const & /*unused*/, nullopt_t /*unused*/ ) optional_noexcept { return true; } template< typename T > inline optional_constexpr bool operator>=( nullopt_t /*unused*/, optional const & x ) optional_noexcept { return (!x); } // Comparison with T template< typename T, typename U > inline optional_constexpr bool operator==( optional const & x, U const & v ) { return bool(x) ? *x == v : false; } template< typename T, typename U > inline optional_constexpr bool operator==( U const & v, optional const & x ) { return bool(x) ? v == *x : false; } template< typename T, typename U > inline optional_constexpr bool operator!=( optional const & x, U const & v ) { return bool(x) ? *x != v : true; } template< typename T, typename U > inline optional_constexpr bool operator!=( U const & v, optional const & x ) { return bool(x) ? v != *x : true; } template< typename T, typename U > inline optional_constexpr bool operator<( optional const & x, U const & v ) { return bool(x) ? *x < v : true; } template< typename T, typename U > inline optional_constexpr bool operator<( U const & v, optional const & x ) { return bool(x) ? 
v < *x : false; } template< typename T, typename U > inline optional_constexpr bool operator<=( optional const & x, U const & v ) { return bool(x) ? *x <= v : true; } template< typename T, typename U > inline optional_constexpr bool operator<=( U const & v, optional const & x ) { return bool(x) ? v <= *x : false; } template< typename T, typename U > inline optional_constexpr bool operator>( optional const & x, U const & v ) { return bool(x) ? *x > v : false; } template< typename T, typename U > inline optional_constexpr bool operator>( U const & v, optional const & x ) { return bool(x) ? v > *x : true; } template< typename T, typename U > inline optional_constexpr bool operator>=( optional const & x, U const & v ) { return bool(x) ? *x >= v : false; } template< typename T, typename U > inline optional_constexpr bool operator>=( U const & v, optional const & x ) { return bool(x) ? v >= *x : true; } // Specialized algorithms template< typename T #if optional_CPP11_OR_GREATER optional_REQUIRES_T( std::is_move_constructible::value && std17::is_swappable::value ) #endif > void swap( optional & x, optional & y ) #if optional_CPP11_OR_GREATER noexcept( noexcept( x.swap(y) ) ) #endif { x.swap( y ); } #if optional_CPP11_OR_GREATER template< typename T > optional_constexpr optional< typename std::decay::type > make_optional( T && value ) { return optional< typename std::decay::type >( std::forward( value ) ); } template< typename T, typename...Args > optional_constexpr optional make_optional( Args&&... args ) { return optional( nonstd_lite_in_place(T), std::forward(args)...); } template< typename T, typename U, typename... Args > optional_constexpr optional make_optional( std::initializer_list il, Args&&... 
args ) { return optional( nonstd_lite_in_place(T), il, std::forward(args)...); } #else template< typename T > optional make_optional( T const & value ) { return optional( value ); } #endif // optional_CPP11_OR_GREATER } // namespace optional_lite using optional_lite::optional; using optional_lite::nullopt_t; using optional_lite::nullopt; using optional_lite::bad_optional_access; using optional_lite::make_optional; } // namespace nonstd #if optional_CPP11_OR_GREATER // specialize the std::hash algorithm: namespace std { template< class T > struct hash< nonstd::optional > { public: std::size_t operator()( nonstd::optional const & v ) const optional_noexcept { return bool( v ) ? std::hash{}( *v ) : 0; } }; } //namespace std #endif // optional_CPP11_OR_GREATER #if defined(__clang__) # pragma clang diagnostic pop #elif defined(__GNUC__) # pragma GCC diagnostic pop #elif defined(_MSC_VER ) # pragma warning( pop ) #endif #endif // optional_USES_STD_OPTIONAL #endif // NONSTD_OPTIONAL_LITE_HPP exactextractr/src/exactextract/CMakeLists.txt0000644000176200001440000002022714500103446021161 0ustar liggesuserscmake_minimum_required(VERSION 3.8) project(exactextract) set(DEFAULT_BUILD_TYPE "Release") set(LIB_NAME exactextract) set(BIN_NAME exactextract_bin) set(CMAKE_CXX_STANDARD 14) set(CMAKE_CXX_STANDARD_REQUIRED ON) set(CMAKE_CXX_EXTENSIONS OFF) if (CMAKE_COMPILER_IS_GNUCC AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 5.0) # gcc 4.9 doesn't fully support C++14, yet CMake doesn't bail when we # set CMAKE_CXX_STANDARD_REQUIRED # https://cmake.org/pipermail/cmake/2017-March/065102.html message(FATAL_ERROR "gcc 5.0+ is required to build exactextract") endif() include(GNUInstallDirs) set(CMAKE_MODULE_PATH ${CMAKE_SOURCE_DIR}/cmake) include(VersionSource) find_package(GEOS REQUIRED) #Configure some options the various components this module can build option(BUILD_CLI "Build the exactextract cli binary" ON) #requires gdal, cli11 option(BUILD_TEST "Build the exactextract tests" ON) 
#requires catch option(BUILD_DOC "Build documentation" ON) #requires doxygen if(BUILD_CLI) # Create our main program, statically linked to our library # Unlike the library, this depends on GDAL find_package(GDAL) if (GDAL_FOUND) # Check GDAL version (requires CMake 3.14) if (${CMAKE_VERSION} VERSION_LESS 3.14.0) message(WARNING "GDAL 2.0+ is required but detected GDAL version is unknown.") elseif(${GDAL_VERSION} VERSION_LESS 2.0) unset(GDAL_FOUND) endif() endif() #GDAL_FOUND if (NOT GDAL_FOUND) message(FATAL_ERROR "GDAL version >= 2.0 was not found. It is still possible to build and test libexactextract, but the " "exactextract executable cannot be built or installed.") endif() #NOT GDAL_FOUND # Download CLI11 (header-only library) set(CLI11_INCLUDE_DIR ${CMAKE_BINARY_DIR}/CLI11) set(CLI11_INCLUDE ${CLI11_INCLUDE_DIR}/CLI11.hpp) if (NOT EXISTS ${CLI11_INCLUDE}) file(DOWNLOAD https://github.com/CLIUtils/CLI11/releases/download/v1.6.0/CLI11.hpp ${CLI11_INCLUDE} SHOW_PROGRESS) endif() #Configure the exactextract CLI target set(BIN_SOURCES src/exactextract.cpp src/gdal_raster_wrapper.h src/gdal_raster_wrapper.cpp src/gdal_dataset_wrapper.h src/gdal_dataset_wrapper.cpp src/gdal_writer.h src/gdal_writer.cpp src/processor.h src/feature_sequential_processor.cpp src/feature_sequential_processor.h src/raster_sequential_processor.cpp src/raster_sequential_processor.h ) add_executable(${BIN_NAME} ${BIN_SOURCES}) set_target_properties(${BIN_NAME} PROPERTIES OUTPUT_NAME "exactextract") target_compile_definitions(${BIN_NAME} PRIVATE GEOS_USE_ONLY_R_API) target_link_libraries( ${BIN_NAME} PRIVATE ${LIB_NAME} ${GDAL_LIBRARY} ${GEOS_LIBRARY} ) target_include_directories( ${BIN_NAME} PRIVATE ${CMAKE_BINARY_DIR}/generated ${CMAKE_SOURCE_DIR}/src ${GEOS_INCLUDE_DIR} ${GDAL_INCLUDE_DIR} ) # Include CLI11 as a system include so that -Wshadow warnings are suppressed. 
target_include_directories( ${BIN_NAME} SYSTEM PRIVATE ${CLI11_INCLUDE_DIR} ) target_compile_options( ${BIN_NAME} PRIVATE $<$:-Werror -Wall -Wextra -Wshadow> $<$:-Werror -Wall -Wextra -Wshadow -Wdouble-promotion>) install(TARGETS ${BIN_NAME} RUNTIME DESTINATION bin) endif() #BUILD_CLI if(BUILD_TEST) #Build the test suite # Download Catch (header-only library) set(CATCH_INCLUDE_DIR ${CMAKE_BINARY_DIR}/catch) set(CATCH_INCLUDE ${CATCH_INCLUDE_DIR}/catch.hpp) if (NOT EXISTS ${CATCH_INCLUDE}) file(DOWNLOAD https://github.com/catchorg/Catch2/releases/download/v2.13.8/catch.hpp ${CATCH_INCLUDE} SHOW_PROGRESS) endif() set(TEST_SOURCES test/test_box.cpp test/test_cell.cpp test/test_geos_utils.cpp test/test_grid.cpp test/test_main.cpp test/test_perimeter_distance.cpp test/test_raster.cpp test/test_raster_area.cpp test/test_raster_cell_intersection.cpp test/test_raster_iterator.cpp test/test_traversal_areas.cpp test/test_stats.cpp test/test_utils.cpp) # Create an executable to run the unit tests add_executable(catch_tests ${TEST_SOURCES}) target_include_directories( catch_tests PRIVATE ${CATCH_INCLUDE_DIR} ${GEOS_INCLUDE_DIR} ${CMAKE_SOURCE_DIR}/src ) target_link_libraries( catch_tests PRIVATE ${LIB_NAME} ${GEOS_LIBRARY} ) endif() #BUILD_TEST message(STATUS "Source version: " ${EXACTEXTRACT_VERSION_SOURCE}) configure_file(src/version.h.in ${CMAKE_CURRENT_BINARY_DIR}/generated/version.h) if (GEOS_VERSION_MAJOR LESS 3 OR GEOS_VERSION_MINOR LESS 5) message(FATAL_ERROR "GEOS version 3.5 or later is required.") endif() # Define coverage build type set(CMAKE_CXX_FLAGS_COVERAGE "-fprofile-arcs -ftest-coverage") # Make sure we know our build type if(NOT CMAKE_BUILD_TYPE) message(STATUS "Setting build type to '${DEFAULT_BUILD_TYPE}' as none was specified") set(CMAKE_BUILD_TYPE "${DEFAULT_BUILD_TYPE}") endif() set(PROJECT_SOURCES src/measures.cpp src/measures.h src/box.h src/box.cpp src/cell.cpp src/cell.h src/coordinate.cpp src/coordinate.h src/crossing.h src/floodfill.cpp 
    src/floodfill.h
    src/geos_utils.cpp
    src/geos_utils.h
    src/grid.h
    src/grid.cpp
    src/matrix.h
    src/perimeter_distance.cpp
    src/perimeter_distance.h
    src/raster.h
    src/raster_area.h
    src/raster_cell_intersection.cpp
    src/raster_cell_intersection.h
    src/raster_stats.h
    src/side.cpp
    src/side.h
    src/traversal.cpp
    src/traversal.h
    src/traversal_areas.cpp
    src/traversal_areas.h
    src/output_writer.h
    src/output_writer.cpp
    src/operation.h
    src/raster_source.h
    src/stats_registry.h
    src/utils.h
    src/utils.cpp
    src/weighted_quantiles.h
    src/weighted_quantiles.cpp
    src/variance.h
    vend/optional.hpp)

add_library(${LIB_NAME} ${PROJECT_SOURCES})

# Check matrix bounds for debug builds
set_target_properties(${LIB_NAME} PROPERTIES COMPILE_DEFINITIONS $<$:MATRIX_CHECK_BOUNDS>)

target_include_directories(
        ${LIB_NAME}
        PRIVATE
            ${GEOS_INCLUDE_DIR}
)

target_compile_definitions(
        ${LIB_NAME}
        PRIVATE
            GEOS_USE_ONLY_R_API
)

target_compile_options(
        ${LIB_NAME}
        PRIVATE
            $<$:-Werror -Wall -Wextra -Wshadow -Wdouble-promotion>
            $<$:-Werror -Wall -Wextra -Wshadow -Wdouble-promotion>
)

target_link_libraries(
        ${LIB_NAME}
        PUBLIC
            ${GEOS_LIBRARY}
)

set_target_properties(${LIB_NAME} PROPERTIES OUTPUT_NAME ${LIB_NAME})

if(BUILD_DOC)
    # Doxygen configuration from https://vicrucann.github.io/tutorials/quick-cmake-doxygen/
    # check if Doxygen is installed
    find_package(Doxygen)
    if (DOXYGEN_FOUND)
        # set input and output files
        set(DOXYGEN_IN ${CMAKE_SOURCE_DIR}/docs/Doxyfile.in)
        set(DOXYGEN_OUT ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile)

        # request to configure the file
        configure_file(${DOXYGEN_IN} ${DOXYGEN_OUT} @ONLY)
        message("Doxygen build started")

        # the ALL option builds the docs together with the application
        add_custom_target( doc_doxygen ALL
                COMMAND ${DOXYGEN_EXECUTABLE} ${DOXYGEN_OUT}
                WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
                COMMENT "Generating API documentation with Doxygen"
                VERBATIM )
    else (DOXYGEN_FOUND)
        message("Doxygen needs to be installed to generate the doxygen documentation")
    endif (DOXYGEN_FOUND)
endif()
#BUILD_DOC exactextractr/src/exactextract/LICENSE0000644000176200001440000002613614500103446017433 0ustar liggesusers Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
exactextractr/src/exactextract/doc/0000755000176200001440000000000014500103446017163 5ustar liggesusersexactextractr/src/exactextract/doc/readme_example_weights.svg0000644000176200001440000000175514500103446024416 0ustar liggesusers 5 6 7 8 exactextractr/src/exactextract/doc/readme_example_values.svg0000644000176200001440000000175514500103446024243 0ustar liggesusers 1 2 3 4 exactextractr/src/exactextract/doc/exactextract.svg0000644000176200001440000002254014500103446022406 0ustar liggesusers image/svg+xml exactextractr/src/exactextract/README.md0000644000176200001440000003570414500103446017706 0ustar liggesusers# exactextract [![Build Status](https://gitlab.com/isciences/exactextract/badges/master/pipeline.svg)](https://gitlab.com/isciences/exactextract/pipelines) [![codecov](https://codecov.io/gl/isciences/exactextract/branch/master/graph/badge.svg)](https://codecov.io/gl/isciences/exactextract) [![Doxygen](https://img.shields.io/badge/Doxygen-documentation-brightgreen.svg)](https://isciences.gitlab.io/exactextract) `exactextract` provides a fast and accurate algorithm for summarizing values in the portion of a raster dataset that is covered by a polygon, often referred to as **zonal statistics**. Unlike other zonal statistics implementations, it takes into account raster cells that are partially covered by the polygon. ### Background Accurate zonal statistics calculation requires determining the fraction of each raster cell that is covered by the polygon. In a naive solution to the problem, each raster cell can be expressed as a polygon whose intersection with the input polygon is computed using polygon clipping routines such as those offered in [JTS](https://github.com/locationtech/jts), [GEOS](https://github.com/OSGeo/geos), [CGAL](https://github.com/CGAL/cgal), or other libraries. 
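Conceptually, the quantity being computed is simple: the fraction of each cell's area that falls inside the polygon. The toy sketch below (hypothetical helper functions, written for the special case of an axis-aligned rectangular polygon so the intersection reduces to plain arithmetic; it is not `exactextract`'s algorithm) contrasts that exact fraction with the all-or-nothing centroid test used by several of the implementations discussed in this section:

```python
# Toy illustration only: compare the exact covered fraction of a grid cell
# with the whole-cell centroid approximation, for an axis-aligned rectangle.
# Both `cell` and `rect` are (xmin, ymin, xmax, ymax) tuples.

def exact_coverage(cell, rect):
    """Fraction of `cell`'s area covered by `rect` (intersection area / cell area)."""
    w = max(0.0, min(cell[2], rect[2]) - max(cell[0], rect[0]))
    h = max(0.0, min(cell[3], rect[3]) - max(cell[1], rect[1]))
    area = (cell[2] - cell[0]) * (cell[3] - cell[1])
    return (w * h) / area

def centroid_coverage(cell, rect):
    """1.0 if the cell centroid falls inside `rect`, else 0.0 (all-or-nothing)."""
    cx, cy = (cell[0] + cell[2]) / 2, (cell[1] + cell[3]) / 2
    inside = rect[0] <= cx <= rect[2] and rect[1] <= cy <= rect[3]
    return 1.0 if inside else 0.0

cell = (0.0, 0.0, 1.0, 1.0)
rect = (0.0, 0.0, 0.4, 1.0)           # covers 40% of the cell
print(exact_coverage(cell, rect))     # 0.4
print(centroid_coverage(cell, rect))  # 0.0 -- the cell is dropped entirely
```

The 40%-covered cell contributes nothing under the centroid test, which is exactly the kind of error that partial-coverage handling avoids.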
However, polygon clipping algorithms are relatively expensive, and the performance of this approach is typically unacceptable unless raster resolution and polygon complexity are low. To achieve better performance, most zonal statistics implementations sacrifice accuracy by assuming that each cell of the raster is either wholly inside or outside of the polygon. This inside/outside determination can take various forms, for example: - ArcGIS rasterizes the input polygon, then extracts the raster values from cells within the input polygon. Cells are interpreted to be either wholly within or outside of the polygon, depending on how the polygon is rasterized. - [QGIS](https://qgis.org/en/site/) compares the centroid of each raster cell to the polygon boundary, initially considering cells to be wholly within or outside of the polygon based on the centroid. However, if fewer than two cell centroids fall within the polygon, an exact vector-based calculation is performed instead ([source](https://github.com/qgis/QGIS/blob/d5626d92360efffb4b8085389c8d64072ef65833/src/analysis/vector/qgszonalstatistics.cpp#L266)). - Python's [rasterstats](https://pythonhosted.org/rasterstats/) also considers cells to be wholly within or outside of the polygon, but allows the user to decide to include cells only if their centroid is within the polygon, or if any portion of the cell touches the polygon ([docs](https://pythonhosted.org/rasterstats/manual.html#rasterization-strategy)). - R's [raster](https://cran.r-project.org/web/packages/raster/index.html) package also uses a centroid test to determine if cells are inside or outside of the polygon. It includes a convenient method of disaggregating the raster by a factor of 10 before performing the analysis, which reduces the error incurred by ignoring partially covered cells but reduces performance substantially ([source](https://github.com/cran/raster/blob/4d218a7565d3994682557b8ae4d5b52bc2f54241/R/rasterizePolygons.R#L415)). 
The [velox](https://cran.r-project.org/web/packages/velox/index.html) package provides a faster implementation of the centroid test but does not provide a method for disaggregation. ### Method used in `exactextract` `exactextract` computes the portion of each cell that is covered by a polygon using an algorithm that proceeds as follows: 1. Each ring of a polygon is traversed a single time, making note of when it enters or exits a raster cell. 2. For each raster cell that was touched by a ring, the fraction of the cell covered by the polygon is computed. This is done by identifying all counter-clockwise-bounded areas within the cell. 3. Any cell that was not touched by the ring is known to be either entirely inside or outside of the polygon (i.e., its covered fraction is either `0` or `1`). A point-in-polygon test is used to determine which, and the `0` or `1` value is then propagated outward using a flood fill algorithm. Depending on the structure of the polygon, a handful of point-in-polygon tests may be necessary. ### Additional Features `exactextract` can compute statistics against two rasters simultaneously, with a second raster containing weighting values. The weighting raster does not need to have the same resolution and extent as the value raster, but the resolutions of the two rasters must be integer multiples of each other, and any difference between the grid origin points must be an integer multiple of the smallest cell size. ### Compiling `exactextract` requires the following: * A C++14 compiler (e.g., gcc 5.0+) * CMake 3.8+ * [GEOS](https://github.com/libgeos/geos) version 3.5+ * [GDAL](https://github.com/osgeo/GDAL) version 2.0+ (for the CLI binary) It can be built on Linux as follows: ```bash git clone https://github.com/isciences/exactextract cd exactextract mkdir cmake-build-release cd cmake-build-release cmake -DCMAKE_BUILD_TYPE=Release .. make sudo make install ``` There are three options available to control what gets compiled.
They are each ON by default. - `BUILD_CLI` will build the main program (which requires GDAL) - `BUILD_TEST` will build the catch_test suite - `BUILD_DOC` will build the doxygen documentation if doxygen is available To build just the library and test suite, you can use these options to turn off the CLI (which means GDAL is not required) and disable the documentation build. The library and tests are built, the tests are run, and the library is installed if the tests pass: ```bash git clone https://github.com/isciences/exactextract cd exactextract mkdir cmake-build-release cd cmake-build-release cmake -DBUILD_CLI:=OFF -DBUILD_DOC:=OFF -DCMAKE_BUILD_TYPE=Release .. make ./catch_tests && sudo make install ``` ### Using `exactextract` `exactextract` provides a simple command-line interface that uses GDAL to read a vector data source and one or more raster files, perform zonal statistics, and write output to CSV, netCDF, or other tabular formats supported by GDAL. In addition to the command-line executable, an R package ([`exactextractr`](https://github.com/isciences/exactextractr)) allows some functionality of `exactextract` to be used with R `sf` and `raster` objects. Command line documentation can be accessed with `exactextract -h`. A minimal usage is as follows, in which we want to compute a mean temperature for each country: ```bash exactextract \ -r "temp:temperature_2018.tif" \ -p countries.shp \ -f country_name \ -s "mean(temp)" \ -o mean_temperature.csv ``` In this example, `exactextract` will summarize temperatures stored in `temperature_2018.tif` over the country boundaries stored in `countries.shp`. * The `-r` argument provides the location of the raster input and specifies that we'd like to refer to it later on using the name `temp`. The location may be specified as a filename or any other location understood by GDAL. For example, a single variable within a netCDF file can be accessed using `-r temp:NETCDF:outputs.nc:tmp2m`.
In files with more than one band, the band number (1-indexed) can be specified using square brackets, e.g., `-r temp:temperature.tif[4]`. * The `-p` argument provides the location of the polygon input. As with the `-r` argument, this can be a file name or some other location understood by GDAL, such as a PostGIS vector source (`-p "PG:dbname=basins[public.basins_lev05]"`). * The `-f` argument indicates that we'd like the field `country_name` from the shapefile to be included as a field in the output file. * The `-s` argument instructs `exactextract` to compute the mean of the raster we refer to as `temp` for each polygon. These values will be stored as a field called `temp_mean` in the output file. * The `-o` argument indicates the location of the output file. The format of the output file is inferred by GDAL using the file extension. With reasonable real-world inputs, the processing time of `exactextract` is divided roughly evenly between (a) I/O (reading raster cells, which may require decompression) and (b) computing the area of each raster cell that is covered by each polygon.
In common usage, we might want to perform many calculations in which one or both of these steps can be reused, such as: * Computing the mean, min, and max temperatures in each country * Computing the mean temperature for several different years, each of which is stored in a separate but congruent raster file (having the same extent and resolution) The following more advanced usage shows how `exactextract` might be called to perform multiple calculations at once, reusing work where possible: ```bash exactextract \ -r "temp_2016:temperature_2016.tif" \ -r "temp_2017:temperature_2017.tif" \ -r "temp_2018:temperature_2018.tif" \ -p countries.shp \ -f country_name \ -s "min(temp_2016)" \ -s "mean(temp_2016)" \ -s "max(temp_2016)" \ -s "min(temp_2017)" \ -s "mean(temp_2017)" \ -s "max(temp_2017)" \ -s "min(temp_2018)" \ -s "mean(temp_2018)" \ -s "max(temp_2018)" \ -o temp_summary.csv ``` In this case, the output `temp_summary.csv` file would contain the fields `min_temp_2016`, `mean_temp_2016`, etc. Each raster would be read only a single time, and each polygon/raster overlay would be performed a single time, because the three input rasters have the same extent and resolution. Another more advanced usage of `exactextract` involves calculations in which the values of one raster are weighted by the values of a second raster. For example, we may wish to calculate both a standard and a population-weighted mean temperature for each country: ```bash exactextract \ -r "temp:temperature_2018.tif" \ -r "pop:world_population.tif" \ -p countries.shp \ -f country_name \ -s "mean(temp)" \ -s "pop_weighted_mean=weighted_mean(temp,pop)" \ -o mean_temperature.csv ``` This also demonstrates the ability to control the name of a stat's output column by prefixing the stat name with an output column name. Further details on weighted statistics are provided in the section below. ### Supported Statistics The statistics supported by `exactextract` are summarized in the table below.
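The example results shown in the table can be reproduced directly from the formulas. As a sketch (in Python, for illustration), using the cell values 1-4, weights 5-8, and coverage fractions 0.5, 0, 1, and 0.25 from the example rasters described in this section:

```python
# Recompute the worked "example result" values from the statistics table,
# using the example value raster (1-4), weighting raster (5-8), and the
# coverage fractions of the four cells shown in the README figures.

x = [1, 2, 3, 4]           # cell values
w = [5, 6, 7, 8]           # cell weights
c = [0.5, 0.0, 1.0, 0.25]  # coverage fractions

count = sum(c)                                           # count: 1.75
total = sum(xi * ci for xi, ci in zip(x, c))             # sum: 4.5
mean = total / count                                     # mean: ~2.57
wsum = sum(xi * ci * wi for xi, ci, wi in zip(x, c, w))  # weighted_sum: 31.5
wmean = wsum / sum(ci * wi for ci, wi in zip(c, w))      # weighted_mean: ~2.74
variance = sum(ci * (xi - mean) ** 2 for xi, ci in zip(x, c)) / count
stdev = variance ** 0.5                                  # ~1.10 and ~1.05

print(count, total, round(mean, 2), wsum, round(wmean, 2))  # 1.75 4.5 2.57 31.5 2.74
```

Note that `min`, `max`, `minority`, `majority`, and `variety` ignore the coverage-fraction weighting entirely or use it only for counting, as described in the table.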
A formula is provided for each statistic, in which x<sub>i</sub> represents the value of the *i*th raster cell, c<sub>i</sub> represents the fraction of the *i*th raster cell that is covered by the polygon, and w<sub>i</sub> represents the weight of the *i*th raster cell. Values in the "example result" column refer to the value and weighting rasters shown below. In these images, values of the "value raster" range from 1 to 4, and values of the "weighting raster" range from 5 to 8. The area covered by the polygon is shaded purple.

| Example Value Raster | Example Weighting Raster |
| -------------------- | ------------------------ |
| | |

| Name | Formula | Description | Typical Application | Example Result |
| -------------- | ------- | ----------- | ------------------- | -------------- |
| count | Σc<sub>i</sub> | Sum of all cell coverage fractions. | | 0.5 + 0 + 1 + 0.25 = 1.75 |
| sum | Σx<sub>i</sub>c<sub>i</sub> | Sum of values of raster cells that intersect the polygon, with each raster value weighted by its coverage fraction. | Total population | 0.5×1 + 0×2 + 1.0×3 + 0.25×4 = 4.5 |
| mean | (Σx<sub>i</sub>c<sub>i</sub>)/(Σc<sub>i</sub>) | Mean value of cells that intersect the polygon, weighted by the percent of the cell that is covered. | Average temperature | 4.5/1.75 = 2.57 |
| weighted_sum | Σx<sub>i</sub>c<sub>i</sub>w<sub>i</sub> | Sum of raster cells covered by the polygon, with each raster value weighted by its coverage fraction and weighting raster value. | Total crop production lost | 0.5×1×5 + 0×2×6 + 1.0×3×7 + 0.25×4×8 = 31.5 |
| weighted_mean | (Σx<sub>i</sub>c<sub>i</sub>w<sub>i</sub>)/(Σc<sub>i</sub>w<sub>i</sub>) | Mean value of cells that intersect the polygon, weighted by the product of the coverage fraction and the weighting raster value. | Population-weighted average temperature | 31.5 / (0.5×5 + 0×6 + 1.0×7 + 0.25×8) = 2.74 |
| min | - | Minimum value of cells that intersect the polygon, not taking coverage fractions or weighting raster values into account. | Minimum elevation | 1 |
| max | - | Maximum value of cells that intersect the polygon, not taking coverage fractions or weighting raster values into account. | Maximum temperature | 4 |
| minority | - | The raster value occupying the least number of cells, taking into account cell coverage fractions but not weighting raster values. | Least common land cover type | - |
| majority | - | The raster value occupying the greatest number of cells, taking into account cell coverage fractions but not weighting raster values. | Most common land cover type | - |
| variety | - | The number of distinct raster values in cells wholly or partially covered by the polygon. | Number of land cover types | - |
| variance | (Σc<sub>i</sub>(x<sub>i</sub> - x̅)<sup>2</sup>)/(Σc<sub>i</sub>) | Population variance of cell values that intersect the polygon, taking into account coverage fraction. | - | 1.10 |
| stdev | √variance | Population standard deviation of cell values that intersect the polygon, taking into account coverage fraction. | - | 1.05 |
| coefficient_of_variation | stdev / mean | Population coefficient of variation of cell values that intersect the polygon, taking into account coverage fraction. | - | 0.41 |

exactextractr/src/exactextract/docs/0000755000176200001440000000000014500103446017346 5ustar liggesusersexactextractr/src/exactextract/docs/Doxyfile.in0000644000176200001440000031744614500103446021470 0ustar liggesusers# Doxyfile 1.8.11 # This file describes the settings to be used by the documentation system # doxygen (www.doxygen.org) for a project. # # All text after a double hash (##) is considered a comment and is placed in # front of the TAG it is preceding. # # All text after a single hash (#) is considered a comment and will be ignored. # The format is: # TAG = value [value, ...] # For lists, items can also be appended using: # TAG += value [value, ...] # Values that contain spaces should be placed between quotes (\" \").
#--------------------------------------------------------------------------- # Project related configuration options #--------------------------------------------------------------------------- # This tag specifies the encoding used for all characters in the config file # that follow. The default is UTF-8 which is also the encoding used for all text # before the first occurrence of this tag. Doxygen uses libiconv (or the iconv # built into libc) for the transcoding. See http://www.gnu.org/software/libiconv # for the list of possible encodings. # The default value is: UTF-8. DOXYFILE_ENCODING = UTF-8 # The PROJECT_NAME tag is a single word (or a sequence of words surrounded by # double-quotes, unless you are using Doxywizard) that should identify the # project for which the documentation is generated. This name is used in the # title of most generated pages and in a few other places. # The default value is: My Project. PROJECT_NAME = "exactextract" # The PROJECT_NUMBER tag can be used to enter a project or revision number. This # could be handy for archiving the generated documentation or if some version # control system is used. PROJECT_NUMBER = # Using the PROJECT_BRIEF tag one can provide an optional one line description # for a project that appears at the top of each page and should give viewer a # quick idea about the purpose of the project. Keep the description short. PROJECT_BRIEF = # With the PROJECT_LOGO tag one can specify a logo or an icon that is included # in the documentation. The maximum height of the logo should not exceed 55 # pixels and the maximum width should not exceed 200 pixels. Doxygen will copy # the logo to the output directory. PROJECT_LOGO = # The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path # into which the generated documentation will be written. If a relative path is # entered, it will be relative to the location where doxygen was started. If # left blank the current directory will be used. 
OUTPUT_DIRECTORY = @CMAKE_CURRENT_BINARY_DIR@ # If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub- # directories (in 2 levels) under the output directory of each output format and # will distribute the generated files over these directories. Enabling this # option can be useful when feeding doxygen a huge amount of source files, where # putting all generated files in the same directory would otherwise causes # performance problems for the file system. # The default value is: NO. CREATE_SUBDIRS = NO # If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII # characters to appear in the names of generated files. If set to NO, non-ASCII # characters will be escaped, for example _xE3_x81_x84 will be used for Unicode # U+3044. # The default value is: NO. ALLOW_UNICODE_NAMES = NO # The OUTPUT_LANGUAGE tag is used to specify the language in which all # documentation generated by doxygen is written. Doxygen will use this # information to generate all constant output in the proper language. # Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese, # Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States), # Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian, # Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages), # Korean, Korean-en (Korean with English messages), Latvian, Lithuanian, # Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian, # Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish, # Ukrainian and Vietnamese. # The default value is: English. OUTPUT_LANGUAGE = English # If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member # descriptions after the members that are listed in the file and class # documentation (similar to Javadoc). Set to NO to disable this. # The default value is: YES. 
BRIEF_MEMBER_DESC = YES # If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief # description of a member or function before the detailed description # # Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the # brief descriptions will be completely suppressed. # The default value is: YES. REPEAT_BRIEF = YES # This tag implements a quasi-intelligent brief description abbreviator that is # used to form the text in various listings. Each string in this list, if found # as the leading text of the brief description, will be stripped from the text # and the result, after processing the whole list, is used as the annotated # text. Otherwise, the brief description is used as-is. If left blank, the # following values are used ($name is automatically replaced with the name of # the entity):The $name class, The $name widget, The $name file, is, provides, # specifies, contains, represents, a, an and the. ABBREVIATE_BRIEF = # If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then # doxygen will generate a detailed section even if there is only a brief # description. # The default value is: NO. ALWAYS_DETAILED_SEC = NO # If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all # inherited members of a class in the documentation of that class as if those # members were ordinary class members. Constructors, destructors and assignment # operators of the base classes will not be shown. # The default value is: NO. INLINE_INHERITED_MEMB = NO # If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path # before files name in the file list and in the header files. If set to NO the # shortest path that makes the file name unique will be used # The default value is: YES. FULL_PATH_NAMES = YES # The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path. # Stripping is only done if one of the specified strings matches the left-hand # part of the path. 
# The tag can be used to show relative paths in the file list.
# If left blank the directory from which doxygen is run is used as the path to
# strip.
#
# Note that you can specify absolute paths here, but also relative paths, which
# will be relative from the directory where doxygen is started.
# This tag requires that the tag FULL_PATH_NAMES is set to YES.

STRIP_FROM_PATH        =

# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the
# path mentioned in the documentation of a class, which tells the reader which
# header file to include in order to use a class. If left blank only the name of
# the header file containing the class definition is used. Otherwise one should
# specify the list of include paths that are normally passed to the compiler
# using the -I flag.

STRIP_FROM_INC_PATH    =

# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but
# less readable) file names. This can be useful if your file system doesn't
# support long names like on DOS, Mac, or CD-ROM.
# The default value is: NO.

SHORT_NAMES            = NO

# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the
# first line (until the first dot) of a Javadoc-style comment as the brief
# description. If set to NO, the Javadoc-style will behave just like regular Qt-
# style comments (thus requiring an explicit @brief command for a brief
# description.)
# The default value is: NO.

JAVADOC_AUTOBRIEF      = NO

# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first
# line (until the first dot) of a Qt-style comment as the brief description. If
# set to NO, the Qt-style will behave just like regular Qt-style comments (thus
# requiring an explicit \brief command for a brief description.)
# The default value is: NO.

QT_AUTOBRIEF           = NO

# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a
# multi-line C++ special comment block (i.e. a block of //! or /// comments) as
# a brief description. This used to be the default behavior. The new default is
# to treat a multi-line C++ comment block as a detailed description. Set this
# tag to YES if you prefer the old behavior instead.
#
# Note that setting this tag to YES also means that Rational Rose comments are
# not recognized any more.
# The default value is: NO.

MULTILINE_CPP_IS_BRIEF = NO

# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the
# documentation from any documented member that it re-implements.
# The default value is: YES.

INHERIT_DOCS           = YES

# If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new
# page for each member. If set to NO, the documentation of a member will be part
# of the file/class/namespace that contains it.
# The default value is: NO.

SEPARATE_MEMBER_PAGES  = NO

# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen
# uses this value to replace tabs by spaces in code fragments.
# Minimum value: 1, maximum value: 16, default value: 4.

TAB_SIZE               = 4

# This tag can be used to specify a number of aliases that act as commands in
# the documentation. An alias has the form:
# name=value
# For example adding
# "sideeffect=@par Side Effects:\n"
# will allow you to put the command \sideeffect (or @sideeffect) in the
# documentation, which will result in a user-defined paragraph with heading
# "Side Effects:". You can put \n's in the value part of an alias to insert
# newlines.

ALIASES                =

# This tag can be used to specify a number of word-keyword mappings (TCL only).
# A mapping has the form "name=value". For example adding "class=itcl::class"
# will allow you to use the command class in the itcl::class meaning.

TCL_SUBST              =

# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
# only. Doxygen will then generate output that is more tailored for C. For
# instance, some of the names that are used will be different. The list of all
# members will be omitted, etc.
# The default value is: NO.

OPTIMIZE_OUTPUT_FOR_C  = NO

# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or
# Python sources only. Doxygen will then generate output that is more tailored
# for that language. For instance, namespaces will be presented as packages,
# qualified scopes will look different, etc.
# The default value is: NO.

OPTIMIZE_OUTPUT_JAVA   = NO

# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran
# sources. Doxygen will then generate output that is tailored for Fortran.
# The default value is: NO.

OPTIMIZE_FOR_FORTRAN   = NO

# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL
# sources. Doxygen will then generate output that is tailored for VHDL.
# The default value is: NO.

OPTIMIZE_OUTPUT_VHDL   = NO

# Doxygen selects the parser to use depending on the extension of the files it
# parses. With this tag you can assign which parser to use for a given
# extension. Doxygen has a built-in mapping, but you can override or extend it
# using this tag. The format is ext=language, where ext is a file extension, and
# language is one of the parsers supported by doxygen: IDL, Java, Javascript,
# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:
# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:
# Fortran. In the latter case the parser tries to guess whether the code is
# fixed or free formatted code; this is the default for Fortran type files),
# VHDL. For instance, to make doxygen treat .inc files as Fortran files (default
# is PHP), and .f files as C (default is Fortran), use: inc=Fortran f=C.
#
# Note: For files without extension you can use no_extension as a placeholder.
#
# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
# the files are not read by doxygen.

EXTENSION_MAPPING      =

# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
# according to the Markdown format, which allows for more readable
# documentation. See http://daringfireball.net/projects/markdown/ for details.
# The output of markdown processing is further processed by doxygen, so you can
# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in
# case of backward compatibility issues.
# The default value is: YES.

MARKDOWN_SUPPORT       = YES

# When enabled doxygen tries to link words that correspond to documented
# classes, or namespaces to their corresponding documentation. Such a link can
# be prevented in individual cases by putting a % sign in front of the word or
# globally by setting AUTOLINK_SUPPORT to NO.
# The default value is: YES.

AUTOLINK_SUPPORT       = YES

# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
# to include (a tag file for) the STL sources as input, then you should set this
# tag to YES in order to let doxygen match functions declarations and
# definitions whose arguments contain STL classes (e.g. func(std::string);
# versus func(std::string) {}). This also makes the inheritance and
# collaboration diagrams that involve STL classes more complete and accurate.
# The default value is: NO.

BUILTIN_STL_SUPPORT    = NO

# If you use Microsoft's C++/CLI language, you should set this option to YES to
# enable parsing support.
# The default value is: NO.

CPP_CLI_SUPPORT        = NO

# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
# http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen
# will parse them like normal C++ but will assume all classes use public instead
# of private inheritance when no explicit protection keyword is present.
# The default value is: NO.

SIP_SUPPORT            = NO

# For Microsoft's IDL there are propget and propput attributes to indicate
# getter and setter methods for a property. Setting this option to YES will make
# doxygen replace the get and set methods by a property in the documentation.
# This will only work if the methods are indeed getting or setting a simple
# type.
# If this is not the case, or you want to show the methods anyway, you
# should set this option to NO.
# The default value is: YES.

IDL_PROPERTY_SUPPORT   = YES

# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
# tag is set to YES then doxygen will reuse the documentation of the first
# member in the group (if any) for the other members of the group. By default
# all members of a group must be documented explicitly.
# The default value is: NO.

DISTRIBUTE_GROUP_DOC   = NO

# If one adds a struct or class to a group and this option is enabled, then also
# any nested class or struct is added to the same group. By default this option
# is disabled and one has to add nested compounds explicitly via \ingroup.
# The default value is: NO.

GROUP_NESTED_COMPOUNDS = NO

# Set the SUBGROUPING tag to YES to allow class member groups of the same type
# (for instance a group of public functions) to be put as a subgroup of that
# type (e.g. under the Public Functions section). Set it to NO to prevent
# subgrouping. Alternatively, this can be done per class using the
# \nosubgrouping command.
# The default value is: YES.

SUBGROUPING            = YES

# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions
# are shown inside the group in which they are included (e.g. using \ingroup)
# instead of on a separate page (for HTML and Man pages) or section (for LaTeX
# and RTF).
#
# Note that this feature does not work in combination with
# SEPARATE_MEMBER_PAGES.
# The default value is: NO.

INLINE_GROUPED_CLASSES = NO

# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions
# with only public data fields or simple typedef fields will be shown inline in
# the documentation of the scope in which they are defined (i.e. file,
# namespace, or group documentation), provided this scope is documented. If set
# to NO, structs, classes, and unions are shown on a separate page (for HTML and
# Man pages) or section (for LaTeX and RTF).
# The default value is: NO.

INLINE_SIMPLE_STRUCTS  = NO

# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or
# enum is documented as struct, union, or enum with the name of the typedef. So
# typedef struct TypeS {} TypeT, will appear in the documentation as a struct
# with name TypeT. When disabled the typedef will appear as a member of a file,
# namespace, or class. And the struct will be named TypeS. This can typically be
# useful for C code in case the coding convention dictates that all compound
# types are typedef'ed and only the typedef is referenced, never the tag name.
# The default value is: NO.

TYPEDEF_HIDES_STRUCT   = NO

# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This
# cache is used to resolve symbols given their name and scope. Since this can be
# an expensive process and often the same symbol appears multiple times in the
# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small
# doxygen will become slower. If the cache is too large, memory is wasted. The
# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range
# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536
# symbols. At the end of a run doxygen will report the cache usage and suggest
# the optimal cache size from a speed point of view.
# Minimum value: 0, maximum value: 9, default value: 0.

LOOKUP_CACHE_SIZE      = 0

#---------------------------------------------------------------------------
# Build related configuration options
#---------------------------------------------------------------------------

# If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in
# documentation are documented, even if no documentation was available. Private
# class members and static file members will be hidden unless the
# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.
# Note: This will also disable the warnings about undocumented members that are
# normally produced when WARNINGS is set to YES.
# The default value is: NO.

EXTRACT_ALL            = YES

# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will
# be included in the documentation.
# The default value is: NO.

EXTRACT_PRIVATE        = NO

# If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal
# scope will be included in the documentation.
# The default value is: NO.

EXTRACT_PACKAGE        = NO

# If the EXTRACT_STATIC tag is set to YES, all static members of a file will be
# included in the documentation.
# The default value is: NO.

EXTRACT_STATIC         = NO

# If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined
# locally in source files will be included in the documentation. If set to NO,
# only classes defined in header files are included. Does not have any effect
# for Java sources.
# The default value is: YES.

EXTRACT_LOCAL_CLASSES  = YES

# This flag is only useful for Objective-C code. If set to YES, local methods,
# which are defined in the implementation section but not in the interface are
# included in the documentation. If set to NO, only methods in the interface are
# included.
# The default value is: NO.

EXTRACT_LOCAL_METHODS  = NO

# If this flag is set to YES, the members of anonymous namespaces will be
# extracted and appear in the documentation as a namespace called
# 'anonymous_namespace{file}', where file will be replaced with the base name of
# the file that contains the anonymous namespace. By default anonymous namespace
# are hidden.
# The default value is: NO.

EXTRACT_ANON_NSPACES   = NO

# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
# undocumented members inside documented classes or files. If set to NO these
# members will be included in the various overviews, but no documentation
# section is generated. This option has no effect if EXTRACT_ALL is enabled.
# The default value is: NO.

HIDE_UNDOC_MEMBERS     = NO

# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all
# undocumented classes that are normally visible in the class hierarchy. If set
# to NO, these classes will be included in the various overviews. This option
# has no effect if EXTRACT_ALL is enabled.
# The default value is: NO.

HIDE_UNDOC_CLASSES     = NO

# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
# (class|struct|union) declarations. If set to NO, these declarations will be
# included in the documentation.
# The default value is: NO.

HIDE_FRIEND_COMPOUNDS  = NO

# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any
# documentation blocks found inside the body of a function. If set to NO, these
# blocks will be appended to the function's detailed documentation block.
# The default value is: NO.

HIDE_IN_BODY_DOCS      = NO

# The INTERNAL_DOCS tag determines if documentation that is typed after a
# \internal command is included. If the tag is set to NO then the documentation
# will be excluded. Set it to YES to include the internal documentation.
# The default value is: NO.

INTERNAL_DOCS          = NO

# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file
# names in lower-case letters. If set to YES, upper-case letters are also
# allowed. This is useful if you have classes or files whose names only differ
# in case and if your file system supports case sensitive file names. Windows
# and Mac users are advised to set this option to NO.
# The default value is: system dependent.

CASE_SENSE_NAMES       = YES

# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with
# their full class and namespace scopes in the documentation. If set to YES, the
# scope will be hidden.
# The default value is: NO.

HIDE_SCOPE_NAMES       = NO

# If the HIDE_COMPOUND_REFERENCE tag is set to NO (default) then doxygen will
# append additional text to a page's title, such as Class Reference. If set to
# YES the compound reference will be hidden.
# The default value is: NO.

HIDE_COMPOUND_REFERENCE= NO

# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of
# the files that are included by a file in the documentation of that file.
# The default value is: YES.

SHOW_INCLUDE_FILES     = YES

# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each
# grouped member an include statement to the documentation, telling the reader
# which file to include in order to use the member.
# The default value is: NO.

SHOW_GROUPED_MEMB_INC  = NO

# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include
# files with double quotes in the documentation rather than with sharp brackets.
# The default value is: NO.

FORCE_LOCAL_INCLUDES   = NO

# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the
# documentation for inline members.
# The default value is: YES.

INLINE_INFO            = YES

# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the
# (detailed) documentation of file and class members alphabetically by member
# name. If set to NO, the members will appear in declaration order.
# The default value is: YES.

SORT_MEMBER_DOCS       = YES

# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief
# descriptions of file, namespace and class members alphabetically by member
# name. If set to NO, the members will appear in declaration order. Note that
# this will also influence the order of the classes in the class list.
# The default value is: NO.

SORT_BRIEF_DOCS        = NO

# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the
# (brief and detailed) documentation of class members so that constructors and
# destructors are listed first. If set to NO the constructors will appear in the
# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.
# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief
# member documentation.
# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting
# detailed member documentation.
# The default value is: NO.

SORT_MEMBERS_CTORS_1ST = NO

# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy
# of group names into alphabetical order. If set to NO the group names will
# appear in their defined order.
# The default value is: NO.

SORT_GROUP_NAMES       = NO

# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by
# fully-qualified names, including namespaces. If set to NO, the class list will
# be sorted only by class name, not including the namespace part.
# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
# Note: This option applies only to the class list, not to the alphabetical
# list.
# The default value is: NO.

SORT_BY_SCOPE_NAME     = NO

# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper
# type resolution of all parameters of a function it will reject a match between
# the prototype and the implementation of a member function even if there is
# only one candidate or it is obvious which candidate to choose by doing a
# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still
# accept a match between prototype and implementation in such cases.
# The default value is: NO.

STRICT_PROTO_MATCHING  = NO

# The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo
# list. This list is created by putting \todo commands in the documentation.
# The default value is: YES.

GENERATE_TODOLIST      = YES

# The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test
# list. This list is created by putting \test commands in the documentation.
# The default value is: YES.

GENERATE_TESTLIST      = YES

# The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug
# list. This list is created by putting \bug commands in the documentation.
# The default value is: YES.

GENERATE_BUGLIST       = YES

# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO)
# the deprecated list. This list is created by putting \deprecated commands in
# the documentation.
# The default value is: YES.

GENERATE_DEPRECATEDLIST= YES

# The ENABLED_SECTIONS tag can be used to enable conditional documentation
# sections, marked by \if ... \endif and \cond
# ... \endcond blocks.

ENABLED_SECTIONS       =

# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the
# initial value of a variable or macro / define can have for it to appear in the
# documentation. If the initializer consists of more lines than specified here
# it will be hidden. Use a value of 0 to hide initializers completely. The
# appearance of the value of individual variables and macros / defines can be
# controlled using \showinitializer or \hideinitializer command in the
# documentation regardless of this setting.
# Minimum value: 0, maximum value: 10000, default value: 30.

MAX_INITIALIZER_LINES  = 30

# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at
# the bottom of the documentation of classes and structs. If set to YES, the
# list will mention the files that were used to generate the documentation.
# The default value is: YES.

SHOW_USED_FILES        = YES

# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This
# will remove the Files entry from the Quick Index and from the Folder Tree View
# (if specified).
# The default value is: YES.

SHOW_FILES             = YES

# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces
# page. This will remove the Namespaces entry from the Quick Index and from the
# Folder Tree View (if specified).
# The default value is: YES.

SHOW_NAMESPACES        = YES

# The FILE_VERSION_FILTER tag can be used to specify a program or script that
# doxygen should invoke to get the current version for each file (typically from
# the version control system).
# Doxygen will invoke the program by executing (via
# popen()) the command <command> <input-file>, where <command> is the value of
# the FILE_VERSION_FILTER tag, and <input-file> is the name of an input file
# provided by doxygen. Whatever the program writes to standard output is used as
# the file version. For an example see the documentation.

FILE_VERSION_FILTER    =

# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed
# by doxygen. The layout file controls the global structure of the generated
# output files in an output format independent way. To create the layout file
# that represents doxygen's defaults, run doxygen with the -l option. You can
# optionally specify a file name after the option, if omitted DoxygenLayout.xml
# will be used as the name of the layout file.
#
# Note that if you run doxygen from a directory containing a file called
# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE
# tag is left empty.

LAYOUT_FILE            =

# The CITE_BIB_FILES tag can be used to specify one or more bib files containing
# the reference definitions. This must be a list of .bib files. The .bib
# extension is automatically appended if omitted. This requires the bibtex tool
# to be installed. See also http://en.wikipedia.org/wiki/BibTeX for more info.
# For LaTeX the style of the bibliography can be controlled using
# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the
# search path. See also \cite for info how to create references.

CITE_BIB_FILES         =

#---------------------------------------------------------------------------
# Configuration options related to warning and progress messages
#---------------------------------------------------------------------------

# The QUIET tag can be used to turn on/off the messages that are generated to
# standard output by doxygen. If QUIET is set to YES this implies that the
# messages are off.
# The default value is: NO.
QUIET                  = NO

# The WARNINGS tag can be used to turn on/off the warning messages that are
# generated to standard error (stderr) by doxygen. If WARNINGS is set to YES
# this implies that the warnings are on.
#
# Tip: Turn warnings on while writing the documentation.
# The default value is: YES.

WARNINGS               = YES

# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate
# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag
# will automatically be disabled.
# The default value is: YES.

WARN_IF_UNDOCUMENTED   = YES

# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for
# potential errors in the documentation, such as not documenting some parameters
# in a documented function, or documenting parameters that don't exist or using
# markup commands wrongly.
# The default value is: YES.

WARN_IF_DOC_ERROR      = YES

# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that
# are documented, but have no documentation for their parameters or return
# value. If set to NO, doxygen will only warn about wrong or incomplete
# parameter documentation, but not about the absence of documentation.
# The default value is: NO.

WARN_NO_PARAMDOC       = NO

# If the WARN_AS_ERROR tag is set to YES then doxygen will immediately stop when
# a warning is encountered.
# The default value is: NO.

WARN_AS_ERROR          = NO

# The WARN_FORMAT tag determines the format of the warning messages that doxygen
# can produce. The string should contain the $file, $line, and $text tags, which
# will be replaced by the file and line number from which the warning originated
# and the warning text. Optionally the format may contain $version, which will
# be replaced by the version of the file (if it could be obtained via
# FILE_VERSION_FILTER)
# The default value is: $file:$line: $text.

WARN_FORMAT            = "$file:$line: $text"

# The WARN_LOGFILE tag can be used to specify a file to which warning and error
# messages should be written. If left blank the output is written to standard
# error (stderr).

WARN_LOGFILE           =

#---------------------------------------------------------------------------
# Configuration options related to the input files
#---------------------------------------------------------------------------

# The INPUT tag is used to specify the files and/or directories that contain
# documented source files. You may enter file names like myfile.cpp or
# directories like /usr/src/myproject. Separate the files or directories with
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.

INPUT                  = @CMAKE_CURRENT_SOURCE_DIR@/src/ @CMAKE_CURRENT_SOURCE_DIR@/docs

# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
# libiconv (or the iconv built into libc) for the transcoding. See the libiconv
# documentation (see: http://www.gnu.org/software/libiconv) for the list of
# possible encodings.
# The default value is: UTF-8.

INPUT_ENCODING         = UTF-8

# If the value of the INPUT tag contains directories, you can use the
# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
# *.h) to filter out the source-files in the directories.
#
# Note that for custom extensions or not directly supported extensions you also
# need to set EXTENSION_MAPPING for the extension otherwise the files are not
# read by doxygen.
#
# If left blank the following patterns are tested: *.c, *.cc, *.cxx, *.cpp,
# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,
# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,
# *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f, *.for, *.tcl,
# *.vhd, *.vhdl, *.ucf, *.qsf, *.as and *.js.

FILE_PATTERNS          = *.cpp *.h

# The RECURSIVE tag can be used to specify whether or not subdirectories should
# be searched for input files as well.
# The default value is: NO.

RECURSIVE              = NO

# The EXCLUDE tag can be used to specify files and/or directories that should be
# excluded from the INPUT source files. This way you can easily exclude a
# subdirectory from a directory tree whose root is specified with the INPUT tag.
#
# Note that relative paths are relative to the directory from which doxygen is
# run.

EXCLUDE                =

# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
# directories that are symbolic links (a Unix file system feature) are excluded
# from the input.
# The default value is: NO.

EXCLUDE_SYMLINKS       = NO

# If the value of the INPUT tag contains directories, you can use the
# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
# certain files from those directories.
#
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories for example use the pattern */test/*

EXCLUDE_PATTERNS       = */test/*

# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
# (namespaces, classes, functions, etc.) that should be excluded from the
# output. The symbol name can be a fully qualified name, a word, or if the
# wildcard * is used, a substring. Examples: ANamespace, AClass,
# AClass::ANamespace, ANamespace::*Test
#
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories use the pattern */test/*

EXCLUDE_SYMBOLS        =

# The EXAMPLE_PATH tag can be used to specify one or more files or directories
# that contain example code fragments that are included (see the \include
# command).

EXAMPLE_PATH           =

# If the value of the EXAMPLE_PATH tag contains directories, you can use the
# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and
# *.h) to filter out the source-files in the directories. If left blank all
# files are included.
EXAMPLE_PATTERNS       =

# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
# searched for input files to be used with the \include or \dontinclude commands
# irrespective of the value of the RECURSIVE tag.
# The default value is: NO.

EXAMPLE_RECURSIVE      = NO

# The IMAGE_PATH tag can be used to specify one or more files or directories
# that contain images that are to be included in the documentation (see the
# \image command).

IMAGE_PATH             =

# The INPUT_FILTER tag can be used to specify a program that doxygen should
# invoke to filter for each input file. Doxygen will invoke the filter program
# by executing (via popen()) the command:
#
#   <filter> <input-file>
#
# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
# name of an input file. Doxygen will then use the output that the filter
# program writes to standard output. If FILTER_PATTERNS is specified, this tag
# will be ignored.
#
# Note that the filter must not add or remove lines; it is applied before the
# code is scanned, but not when the output code is generated. If lines are added
# or removed, the anchors will not be placed correctly.
#
# Note that for custom extensions or not directly supported extensions you also
# need to set EXTENSION_MAPPING for the extension otherwise the files are not
# properly processed by doxygen.

INPUT_FILTER           =

# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
# basis. Doxygen will compare the file name with each pattern and apply the
# filter if there is a match. The filters are a list of the form: pattern=filter
# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how
# filters are used. If the FILTER_PATTERNS tag is empty or if none of the
# patterns match the file name, INPUT_FILTER is applied.
#
# Note that for custom extensions or not directly supported extensions you also
# need to set EXTENSION_MAPPING for the extension otherwise the files are not
# properly processed by doxygen.
FILTER_PATTERNS        =

# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
# INPUT_FILTER) will also be used to filter the input files that are used for
# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).
# The default value is: NO.

FILTER_SOURCE_FILES    = NO

# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file
# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and
# it is also possible to disable source filtering for a specific pattern using
# *.ext= (so without naming a filter).
# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.

FILTER_SOURCE_PATTERNS =

# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that
# is part of the input, its contents will be placed on the main page
# (index.html). This can be useful if you have a project on for instance GitHub
# and want to reuse the introduction page also for the doxygen output.

USE_MDFILE_AS_MAINPAGE =

#---------------------------------------------------------------------------
# Configuration options related to source browsing
#---------------------------------------------------------------------------

# If the SOURCE_BROWSER tag is set to YES then a list of source files will be
# generated. Documented entities will be cross-referenced with these sources.
#
# Note: To get rid of all source code in the generated output, make sure that
# also VERBATIM_HEADERS is set to NO.
# The default value is: NO.

SOURCE_BROWSER         = NO

# Setting the INLINE_SOURCES tag to YES will include the body of functions,
# classes and enums directly into the documentation.
# The default value is: NO.

INLINE_SOURCES         = NO

# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any
# special comment blocks from generated source code fragments. Normal C, C++ and
# Fortran comments will always remain visible.
# The default value is: YES.

STRIP_CODE_COMMENTS    = YES

# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
# function all documented functions referencing it will be listed.
# The default value is: NO.

REFERENCED_BY_RELATION = NO

# If the REFERENCES_RELATION tag is set to YES then for each documented function
# all documented entities called/used by that function will be listed.
# The default value is: NO.

REFERENCES_RELATION    = NO

# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set
# to YES then the hyperlinks from functions in REFERENCES_RELATION and
# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will
# link to the documentation.
# The default value is: YES.

REFERENCES_LINK_SOURCE = YES

# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
# source code will show a tooltip with additional information such as prototype,
# brief description and links to the definition and documentation. Since this
# will make the HTML file larger and loading of large files a bit slower, you
# can opt to disable this feature.
# The default value is: YES.
# This tag requires that the tag SOURCE_BROWSER is set to YES.

SOURCE_TOOLTIPS        = YES

# If the USE_HTAGS tag is set to YES then the references to source code will
# point to the HTML generated by the htags(1) tool instead of doxygen built-in
# source browser. The htags tool is part of GNU's global source tagging system
# (see http://www.gnu.org/software/global/global.html). You will need version
# 4.8.6 or higher.
#
# To use it do the following:
# - Install the latest version of global
# - Enable SOURCE_BROWSER and USE_HTAGS in the config file
# - Make sure the INPUT points to the root of the source tree
# - Run doxygen as normal
#
# Doxygen will invoke htags (and that will in turn invoke gtags), so these
# tools must be available from the command line (i.e. in the search path).
# # The result: instead of the source browser generated by doxygen, the links to # source code will now point to the output of htags. # The default value is: NO. # This tag requires that the tag SOURCE_BROWSER is set to YES. USE_HTAGS = NO # If the VERBATIM_HEADERS tag is set the YES then doxygen will generate a # verbatim copy of the header file for each class for which an include is # specified. Set to NO to disable this. # See also: Section \class. # The default value is: YES. VERBATIM_HEADERS = YES # If the CLANG_ASSISTED_PARSING tag is set to YES then doxygen will use the # clang parser (see: http://clang.llvm.org/) for more accurate parsing at the # cost of reduced performance. This can be particularly helpful with template # rich C++ code for which doxygen's built-in parser lacks the necessary type # information. # Note: The availability of this option depends on whether or not doxygen was # generated with the -Duse-libclang=ON option for CMake. # The default value is: NO. CLANG_ASSISTED_PARSING = NO # If clang assisted parsing is enabled you can provide the compiler with command # line options that you would normally use when invoking the compiler. Note that # the include paths will already be set by doxygen for the files and directories # specified with INPUT and INCLUDE_PATH. # This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES. CLANG_OPTIONS = #--------------------------------------------------------------------------- # Configuration options related to the alphabetical class index #--------------------------------------------------------------------------- # If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all # compounds will be generated. Enable this if the project contains a lot of # classes, structs, unions or interfaces. # The default value is: YES. ALPHABETICAL_INDEX = YES # The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in # which the alphabetical index list will be split. 
# Minimum value: 1, maximum value: 20, default value: 5.
# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.

COLS_IN_ALPHA_INDEX    = 5

# In case all classes in a project start with a common prefix, all classes will
# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag
# can be used to specify a prefix (or a list of prefixes) that should be ignored
# while generating the index headers.
# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.

IGNORE_PREFIX          =

#---------------------------------------------------------------------------
# Configuration options related to the HTML output
#---------------------------------------------------------------------------

# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output.
# The default value is: YES.

GENERATE_HTML          = YES

# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it.
# The default directory is: html.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_OUTPUT            = html

# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each
# generated HTML page (for example: .htm, .php, .asp).
# The default value is: .html.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_FILE_EXTENSION    = .html

# The HTML_HEADER tag can be used to specify a user-defined HTML header file for
# each generated HTML page. If the tag is left blank doxygen will generate a
# standard header.
#
# To get valid HTML, the header file must include any scripts and style sheets
# that doxygen needs, which depend on the configuration options used (e.g. the
# setting GENERATE_TREEVIEW). It is highly recommended to start with a default
# header using
# doxygen -w html new_header.html new_footer.html new_stylesheet.css YourConfigFile
# and then modify the file new_header.html. See also section "Doxygen usage"
# for information on how to generate the default header that doxygen normally
# uses.
# Note: The header is subject to change so you typically have to regenerate the
# default header when upgrading to a newer version of doxygen. For a description
# of the possible markers and block names see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_HEADER            =

# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
# generated HTML page. If the tag is left blank doxygen will generate a standard
# footer. See HTML_HEADER for more information on how to generate a default
# footer and what special commands can be used inside the footer. See also
# section "Doxygen usage" for information on how to generate the default footer
# that doxygen normally uses.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_FOOTER            =

# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
# sheet that is used by each HTML page. It can be used to fine-tune the look of
# the HTML output. If left blank doxygen will generate a default style sheet.
# See also section "Doxygen usage" for information on how to generate the style
# sheet that doxygen normally uses.
# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
# it is more robust and this tag (HTML_STYLESHEET) will in the future become
# obsolete.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_STYLESHEET        =

# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
# cascading style sheets that are included after the standard style sheets
# created by doxygen. Using this option one can overrule certain style aspects.
# This is preferred over using HTML_STYLESHEET since it does not replace the
# standard style sheet and is therefore more robust against future updates.
# Doxygen will copy the style sheet files to the output directory.
# Note: The order of the extra style sheet files is of importance (e.g. the last
# style sheet in the list overrules the setting of the previous ones in the
# list). For an example see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_EXTRA_STYLESHEET  =

# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
# other source files which should be copied to the HTML output directory. Note
# that these files will be copied to the base HTML output directory. Use the
# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these
# files. In the HTML_STYLESHEET file, use the file name only. Also note that the
# files will be copied as-is; there are no commands or markers available.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_EXTRA_FILES       =

# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
# will adjust the colors in the style sheet and background images according to
# this color. Hue is specified as an angle on a colorwheel, see
# http://en.wikipedia.org/wiki/Hue for more information. For instance the value
# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300
# purple, and 360 is red again.
# Minimum value: 0, maximum value: 359, default value: 220.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE_HUE    = 220

# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors
# in the HTML output. For a value of 0 the output will use grayscales only. A
# value of 255 will produce the most vivid colors.
# Minimum value: 0, maximum value: 255, default value: 100.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE_SAT    = 100

# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the
# luminance component of the colors in the HTML output. Values below 100
# gradually make the output lighter, whereas values above 100 make the output
# darker. The value divided by 100 is the actual gamma applied, so 80 represents
# a gamma of 0.8. The value 220 represents a gamma of 2.2, and 100 does not
# change the gamma.
# Minimum value: 40, maximum value: 240, default value: 80.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE_GAMMA  = 80

# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML
# page will contain the date and time when the page was generated. Setting this
# to YES can help to show when doxygen was last run and thus if the
# documentation is up to date.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_TIMESTAMP         = NO

# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
# documentation will contain sections that can be hidden and shown after the
# page has loaded.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_DYNAMIC_SECTIONS  = NO

# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries
# shown in the various tree structured indices initially; the user can expand
# and collapse entries dynamically later on. Doxygen will expand the tree to
# such a level that at most the specified number of entries are visible (unless
# a fully collapsed tree already exceeds this amount). So setting the number of
# entries 1 will produce a full collapsed tree by default. 0 is a special value
# representing an infinite number of entries and will result in a full expanded
# tree by default.
# Minimum value: 0, maximum value: 9999, default value: 100.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_INDEX_NUM_ENTRIES = 100

# If the GENERATE_DOCSET tag is set to YES, additional index files will be
# generated that can be used as input for Apple's Xcode 3 integrated development
# environment (see: http://developer.apple.com/tools/xcode/), introduced with
# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a
# Makefile in the HTML output directory. Running make will produce the docset in
# that directory and running make install will install the docset in
# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
# startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html
# for more information.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_DOCSET        = NO

# This tag determines the name of the docset feed. A documentation feed provides
# an umbrella under which multiple documentation sets from a single provider
# (such as a company or product suite) can be grouped.
# The default value is: Doxygen generated docs.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_FEEDNAME        = "Doxygen generated docs"

# This tag specifies a string that should uniquely identify the documentation
# set bundle. This should be a reverse domain-name style string, e.g.
# com.mycompany.MyDocSet. Doxygen will append .docset to the name.
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_BUNDLE_ID       = org.doxygen.Project

# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify
# the documentation publisher. This should be a reverse domain-name style
# string, e.g. com.mycompany.MyDocSet.documentation.
# The default value is: org.doxygen.Publisher.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_PUBLISHER_ID    = org.doxygen.Publisher

# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.
# The default value is: Publisher.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_PUBLISHER_NAME  = Publisher

# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
# additional HTML index files: index.hhp, index.hhc, and index.hhk.
# The index.hhp file is a project file that can be read by Microsoft's HTML Help
# Workshop (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138)
# on Windows.
#
# The HTML Help Workshop contains a compiler that can convert all HTML output
# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
# files are now used as the Windows 98 help format, and will replace the old
# Windows help format (.hlp) on all Windows platforms in the future. Compressed
# HTML files also contain an index, a table of contents, and you can search for
# words in the documentation. The HTML workshop also contains a viewer for
# compressed HTML files.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_HTMLHELP      = NO

# The CHM_FILE tag can be used to specify the file name of the resulting .chm
# file. You can add a path in front of the file if the result should not be
# written to the html output directory.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

CHM_FILE               =

# The HHC_LOCATION tag can be used to specify the location (absolute path
# including file name) of the HTML help compiler (hhc.exe). If non-empty,
# doxygen will try to run the HTML help compiler on the generated index.hhp.
# The file has to be specified with full path.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

HHC_LOCATION           =

# The GENERATE_CHI flag controls if a separate .chi index file is generated
# (YES) or that it should be included in the master .chm file (NO).
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

GENERATE_CHI           = NO

# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)
# and project file content.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

CHM_INDEX_ENCODING     =

# The BINARY_TOC flag controls whether a binary table of contents is generated
# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it
# enables the Previous and Next buttons.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

BINARY_TOC             = NO

# The TOC_EXPAND flag can be set to YES to add extra items for group members to
# the table of contents of the HTML help documentation and to the tree view.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

TOC_EXPAND             = NO

# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that
# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help
# (.qch) of the generated HTML documentation.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_QHP           = NO

# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify
# the file name of the resulting .qch file. The path specified is relative to
# the HTML output folder.
# This tag requires that the tag GENERATE_QHP is set to YES.

QCH_FILE               =

# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
# Project output. For more information please see Qt Help Project / Namespace
# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace).
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_NAMESPACE          = org.doxygen.Project

# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
# Help Project output. For more information please see Qt Help Project / Virtual
# Folders (see:
# http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual-folders).
# The default value is: doc.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_VIRTUAL_FOLDER     = doc

# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
# filter to add.
# For more information please see Qt Help Project / Custom Filters (see:
# http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-filters).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_CUST_FILTER_NAME   =

# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
# custom filter to add. For more information please see Qt Help Project / Custom
# Filters (see:
# http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-filters).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_CUST_FILTER_ATTRS  =

# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
# project's filter section matches. See Qt Help Project / Filter Attributes
# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_SECT_FILTER_ATTRS  =

# The QHG_LOCATION tag can be used to specify the location of Qt's
# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the
# generated .qhp file.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHG_LOCATION           =

# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be
# generated; together with the HTML files, they form an Eclipse help plugin. To
# install this plugin and make it available under the help contents menu in
# Eclipse, the contents of the directory containing the HTML and XML files needs
# to be copied into the plugins directory of eclipse. The name of the directory
# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.
# After copying, Eclipse needs to be restarted before the help appears.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_ECLIPSEHELP   = NO

# A unique identifier for the Eclipse help plugin. When installing the plugin
# the directory name containing the HTML and XML files should also have this
# name. Each documentation set should have its own identifier.
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.

ECLIPSE_DOC_ID         = org.doxygen.Project

# If you want full control over the layout of the generated HTML pages it might
# be necessary to disable the index and replace it with your own. The
# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top
# of each HTML page. A value of NO enables the index and the value YES disables
# it. Since the tabs in the index contain the same information as the navigation
# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

DISABLE_INDEX          = NO

# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
# structure should be generated to display hierarchical information. If the tag
# value is set to YES, a side panel will be generated containing a tree-like
# index structure (just like the one that is generated for HTML Help). For this
# to work a browser that supports JavaScript, DHTML, CSS and frames is required
# (i.e. any modern browser). Windows users are probably better off using the
# HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can
# further fine-tune the look of the index. As an example, the default style
# sheet generated by doxygen has an example that shows how to put an image at
# the root of the tree instead of the PROJECT_NAME. Since the tree basically has
# the same information as the tab index, you could consider setting
# DISABLE_INDEX to YES when enabling this option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_TREEVIEW      = NO

# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that
# doxygen will group on one line in the generated HTML documentation.
#
# Note that a value of 0 will completely suppress the enum values from appearing
# in the overview section.
# Minimum value: 0, maximum value: 20, default value: 4.
# This tag requires that the tag GENERATE_HTML is set to YES.

ENUM_VALUES_PER_LINE   = 4

# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
# to set the initial width (in pixels) of the frame in which the tree is shown.
# Minimum value: 0, maximum value: 1500, default value: 250.
# This tag requires that the tag GENERATE_HTML is set to YES.

TREEVIEW_WIDTH         = 250

# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to
# external symbols imported via tag files in a separate window.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

EXT_LINKS_IN_WINDOW    = NO

# Use this tag to change the font size of LaTeX formulas included as images in
# the HTML documentation. When you change the font size after a successful
# doxygen run you need to manually remove any form_*.png images from the HTML
# output directory to force them to be regenerated.
# Minimum value: 8, maximum value: 50, default value: 10.
# This tag requires that the tag GENERATE_HTML is set to YES.

FORMULA_FONTSIZE       = 10

# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
# generated for formulas are transparent PNGs. Transparent PNGs are not
# supported properly for IE 6.0, but are supported on all modern browsers.
#
# Note that when changing this option you need to delete any form_*.png files in
# the HTML output directory before the changes have effect.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.

FORMULA_TRANSPARENT    = YES

# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
# http://www.mathjax.org) which uses client side Javascript for the rendering
# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
# installed or if you want the formulas to look prettier in the HTML output.
# When enabled you may also need to install MathJax separately and configure the
# path to it using the MATHJAX_RELPATH option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

USE_MATHJAX            = NO

# When MathJax is enabled you can set the default output format to be used for
# the MathJax output. See the MathJax site (see:
# http://docs.mathjax.org/en/latest/output.html) for more details.
# Possible values are: HTML-CSS (which is slower, but has the best
# compatibility), NativeMML (i.e. MathML) and SVG.
# The default value is: HTML-CSS.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_FORMAT         = HTML-CSS

# When MathJax is enabled you need to specify the location relative to the HTML
# output directory using the MATHJAX_RELPATH option. The destination directory
# should contain the MathJax.js script. For instance, if the mathjax directory
# is located at the same level as the HTML output directory, then
# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
# Content Delivery Network so you can quickly see the result without installing
# MathJax. However, it is strongly recommended to install a local copy of
# MathJax from http://www.mathjax.org before deployment.
# The default value is: http://cdn.mathjax.org/mathjax/latest.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_RELPATH        = http://cdn.mathjax.org/mathjax/latest

# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
# extension names that should be enabled during MathJax rendering. For example
# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_EXTENSIONS     =

# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
# of code that will be used on startup of the MathJax code. See the MathJax site
# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an
# example see the documentation.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_CODEFILE       =

# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
# the HTML output. The underlying search engine uses javascript and DHTML and
# should work on any modern browser. Note that when using HTML help
# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
# there is already a search function so this one should typically be disabled.
# For large projects the javascript based search engine can be slow, then
# enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to
# search using the keyboard; to jump to the search box use <access key> + S
# (what the <access key> is depends on the OS and browser, but it is typically
# <CTRL>, <ALT>/