BiocParallel/DESCRIPTION

Package: BiocParallel
Type: Package
Title: Bioconductor facilities for parallel evaluation
Version: 1.36.0
Authors@R: c(
    person("Martin", "Morgan", email = "mtmorgan.bioc@gmail.com",
        role=c("aut", "cre")),
    person("Jiefei", "Wang", role = "aut"),
    person("Valerie", "Obenchain", role="aut"),
    person("Michel", "Lang", email="michellang@gmail.com", role="aut"),
    person("Ryan", "Thompson", email="rct@thompsonclan.org", role="aut"),
    person("Nitesh", "Turaga", role="aut"),
    person("Aaron", "Lun", role = "ctb"),
    person("Henrik", "Bengtsson", role = "ctb"),
    person("Madelyn", "Carlson", role = "ctb",
        comment = "Translated 'Random Numbers' vignette from Sweave to RMarkdown / HTML."),
    person("Phylis", "Atieno", role = "ctb",
        comment = "Translated 'Introduction to BiocParallel' vignette from Sweave to Rmarkdown / HTML."),
    person("Sergio", "Oller", role = "ctb",
        comment = c(
            "Improved bpmapply() efficiency.",
            "ORCID" = "0000-0002-8994-1549")))
Description: This package provides modified versions and novel
    implementations of functions for parallel evaluation, tailored to
    use with Bioconductor objects.
URL: https://github.com/Bioconductor/BiocParallel
BugReports: https://github.com/Bioconductor/BiocParallel/issues
biocViews: Infrastructure
License: GPL-2 | GPL-3
SystemRequirements: C++11
Depends: methods, R (>= 3.5.0)
Imports: stats, utils, futile.logger, parallel, snow, codetools
Suggests: BiocGenerics, tools, foreach, BBmisc, doParallel,
    GenomicRanges, RNAseqData.HNRNPC.bam.chr14,
    TxDb.Hsapiens.UCSC.hg19.knownGene, VariantAnnotation, Rsamtools,
    GenomicAlignments, ShortRead, RUnit, BiocStyle, knitr, batchtools,
    data.table
Enhances: Rmpi
Collate: AllGenerics.R DeveloperInterface.R prototype.R bploop.R
    ErrorHandling.R log.R bpbackend-methods.R bpisup-methods.R
    bplapply-methods.R bpiterate-methods.R bpstart-methods.R
    bpstop-methods.R BiocParallelParam-class.R bpmapply-methods.R
    bpschedule-methods.R bpvec-methods.R bpvectorize-methods.R
    bpworkers-methods.R bpaggregate-methods.R bpvalidate.R
    SnowParam-class.R MulticoreParam-class.R
    TransientMulticoreParam-class.R register.R SerialParam-class.R
    DoparParam-class.R SnowParam-utils.R BatchtoolsParam-class.R
    progress.R ipcmutex.R worker-number.R utilities.R rng.R bpinit.R
    reducer.R worker.R bpoptions.R cpp11.R BiocParallel-defunct.R
LinkingTo: BH, cpp11
VignetteBuilder: knitr
RoxygenNote: 7.1.2
git_url: https://git.bioconductor.org/packages/BiocParallel
git_branch: RELEASE_3_18
git_last_commit: ba4ec29
git_last_commit_date: 2023-10-24
Date/Publication: 2023-10-24
NeedsCompilation: yes
Packaged: 2023-10-24 20:28:01 UTC; biocbuild
Author: Martin Morgan [aut, cre], Jiefei Wang [aut], Valerie Obenchain
    [aut], Michel Lang [aut], Ryan Thompson [aut], Nitesh Turaga [aut],
    Aaron Lun [ctb], Henrik Bengtsson [ctb], Madelyn Carlson [ctb]
    (Translated 'Random Numbers' vignette from Sweave to RMarkdown /
    HTML.), Phylis Atieno [ctb] (Translated 'Introduction to
    BiocParallel' vignette from Sweave to Rmarkdown / HTML.), Sergio
    Oller [ctb] (Improved bpmapply() efficiency.)
Maintainer: Martin Morgan <mtmorgan.bioc@gmail.com>
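As a quick orientation to the Description above, a minimal sketch (assuming BiocParallel is installed): the same `bplapply()` call works with any back-end; `SerialParam()` is used here only so the example runs anywhere.

```r
## Minimal BiocParallel usage: lapply()-like evaluation with a
## configurable back-end. SerialParam() evaluates sequentially in the
## current process; MulticoreParam()/SnowParam() parallelize the same
## call unchanged.
library(BiocParallel)

param <- SerialParam()
res <- bplapply(1:4, function(i) i^2, BPPARAM = param)
unlist(res)  # 1 4 9 16
```

Registering a default back-end with `register()` lets `BPPARAM = bpparam()` pick it up implicitly.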
BiocParallel/NAMESPACE

useDynLib("BiocParallel", .registration = TRUE)

import(methods)
importFrom(stats, setNames, terms, runif)
importFrom(utils, capture.output, find, head, tail, relist,
    setTxtProgressBar, txtProgressBar)
importFrom(parallel, nextRNGStream, nextRNGSubStream)
importFrom(codetools, findGlobals)

### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Export S4 classes
###

exportClass(
    BiocParallelParam, MulticoreParam, SnowParam, DoparParam,
    SerialParam, BatchtoolsParam, BPValidate
)

### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Export non-generic functions
###

export(
    MulticoreParam, SnowParam, DoparParam, SerialParam, BatchtoolsParam,
    ## register
    register, registered, bpparam,
    ## accessor
    bpnworkers,
    ## error handlers
    bptry,
    ## accessor for the errors
    bpresult,
    ## helpers
    bploop,  # worker, manager loops
    multicoreWorkers, snowWorkers, batchtoolsWorkers,
    batchtoolsCluster, batchtoolsRegistryargs, batchtoolsTemplate,
    bpvalidate, bpok, bperrorTypes, bprunMPIworker,
    ## iteration
    bpiterateAlong,
    ## ipcmutex
    ipcid, ipcremove, ipclock, ipctrylock, ipcunlock, ipclocked,
    ipcyield, ipcvalue, ipcreset,
    bpoptions,
    .registerOption
)

### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Export S4 generics and methods defined in BiocParallel
###

export(
    bplapply, bpvec, bpvectorize, bpmapply, bpaggregate, bpiterate,
    ## bp-controls
    bpworkers, "bpworkers<-",
    bpbackend, "bpbackend<-",
    bptasks, "bptasks<-",
    bpjobname, "bpjobname<-",
    bpstart, bpstop, bpisup,
    bpstopOnError, "bpstopOnError<-",
    bpprogressbar, "bpprogressbar<-",
    bpRNGseed, "bpRNGseed<-",
    bptimeout, "bptimeout<-",
    bpexportglobals, "bpexportglobals<-",
    bpexportvariables, "bpexportvariables<-",
    bpforceGC, "bpforceGC<-",
    bpfallback, "bpfallback<-",
    bplog, "bplog<-",
    bplogdir, "bplogdir<-",
    bpthreshold, "bpthreshold<-",
    bpresultdir,
    "bpresultdir<-",
    ## schedule
    bpschedule
)

### Same list as above.
exportMethods(
    bplapply, bpvec, bpvectorize, bpmapply, bpaggregate, bpiterate,
    ## bp-controls
    bpworkers, "bpworkers<-",
    bpbackend, "bpbackend<-",
    bptasks, "bptasks<-",
    bpjobname, "bpjobname<-",
    bpstart, bpstop, bpisup,
    bpstopOnError, "bpstopOnError<-",
    bpprogressbar, "bpprogressbar<-",
    bpRNGseed, "bpRNGseed<-",
    bptimeout, "bptimeout<-",
    bplog, "bplog<-",
    bplogdir, "bplogdir<-",
    bpthreshold, "bpthreshold<-",
    bpresultdir, "bpresultdir<-",
    ## schedule
    bpschedule
)

### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Export S4 methods for generics not defined in BiocParallel
###

exportMethods(
    show
)

### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Export S3 methods
###

S3method(print, remote_error)
S3method(print, bplist_error)
S3method(bploop, lapply)
S3method(bploop, iterate)

### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Export 'developer' API for extending BiocParallelParam
###

export(
    .BiocParallelParam_prototype, .prototype_update, .prettyPath,
    .send_to, .recv_any, .send, .recv, .close, .send_all, .recv_all,
    .bpstart_impl, .bpstop_impl, .bpworker_impl,
    .bplapply_impl, .bpiterate_impl,
    .error_worker_comm,
    .manager, .manager_send, .manager_recv,
    .manager_send_all, .manager_recv_all,
    .manager_capacity, .manager_flush, .manager_cleanup,
    .task_const, .task_dynamic, .task_remake
)

BiocParallel/NEWS

CHANGES IN VERSION 1.34
-----------------------

NEW FEATURES

    o (1.33.2) limit worker number via environment variables.
      https://github.com/Bioconductor/BiocParallel/issues/229

    o (v1.33.3) bpmapply() does not send the whole list of arguments
      to all workers. Instead, it takes the arguments and slices them,
      passing the corresponding slice to each worker. Thanks Sergio
      Oller!
      https://github.com/Bioconductor/BiocParallel/issues/229

USER VISIBLE CHANGES

    o (1.33.1) Mark BatchJobsParam, bprunMPIslave as defunct.

    o (1.33.9) Change default force.GC= to FALSE in MulticoreParam().

    o (1.33.11) change content of 'traceback' on error to include the
      stack from the location of the error up to the invocation of
      FUN. Previously, the traceback was from FUN to the top-level of
      worker code, providing limited insight into nested errors.

    o (1.33.12) 'force' function arguments to avoid consequences of
      lazy evaluation discussed in

BUG FIXES

    o (1.33.6) Restore 'exported' global variables in SerialParam()
      https://github.com/Bioconductor/BiocParallel/issues/234

    o (1.33.7) 'configure.ac' uses C++ compiler and checks for
      existence of required header

    o (1.33.8 / v 1.32.5) set socket idle timeout to a large value, to
      avoid premature worker termination and to be consistent with
      snow / parallel defaults.

    o (1.33.10 / v 1.32.6) be sure to clean up TransientMulticoreParam
      state at start of each job.

CHANGES IN VERSION 1.32
-----------------------

NEW FEATURES

    o (v 1.31.10) bpiterate() when ITER is not a function will use
      bpiterateAlong() to attempt to iterate over elements ITER[[1]],
      ITER[[2]], etc.
      https://stat.ethz.ch/pipermail/bioc-devel/2022-July/019075.html

USER VISIBLE CHANGES

    o (v 1.31.3) Deprecate BatchJobsParam in favor of BatchtoolsParam

    o (v 1.31.11) Replace Random Number .Rnw vignette with Rmd (html)
      version (thanks Madelyn Carlson!)
      https://github.com/Bioconductor/BiocParallel/pull/215

    o (v 1.31.12) clarify default number of cores, and use on shared
      clusters (thanks Dario Strbenac)
      https://github.com/Bioconductor/BiocParallel/pull/218
      https://github.com/Bioconductor/BiocParallel/issues/217

    o (v 1.31.15) Replace Introduction to BiocParallel .Rnw vignette
      with Rmd (html) version (thanks Phylis Atieno!)
      https://github.com/Bioconductor/BiocParallel/pull/226

BUG FIXES

    o (v 1.31.1) suppress package startup messages on workers
      https://github.com/Bioconductor/BiocParallel/issues/198

    o (v 1.31.1) coerce timeout to integer (typically from numeric)
      https://github.com/Bioconductor/BiocParallel/issues/200

    o (v 1.31.2) avoid segfault when ipcmutex() functions generate C++
      errors. This occurs very rarely, for instance when the directory
      used by boost for file locking (under /tmp) was created by
      another user.
      https://github.com/Bioconductor/BiocParallel/pull/202

    o (v 1.31.2) resetting bpRNGseed() after bpstart() is reproducible
      https://github.com/Bioconductor/BiocParallel/pull/204

    o (v 1.31.5) enable logs for multiple managers sharing the same
      workers.
      https://github.com/Bioconductor/BiocParallel/pull/207

    o (v 1.31.13 / v 1.30.4) only export variables in `.GlobalEnv` or
      `package:`

    o (v 1.31.14) Reduce bpmapply memory usage. Thanks Sergio Oller.

CHANGES IN VERSION 1.30
-----------------------

USER VISIBLE CHANGES

    o (v 1.29.1) Report first remote error in its entirety.
      https://github.com/Bioconductor/BiocParallel/issues/165

    o (v 1.29.4) Add bpresult() (extract result vector from return
      value of tryCatch(bplapply(...))) and allow direct use of
      tryCatch(bplapply(...)) return value as argument to
      bplapply(BPREDO= ...). Closes #157

    o (v 1.29.8) The default timeout for worker computation changes
      from 30 days to .Machine$integer.max (no timeout), allowing for
      performance improvements when not set.

    o (v 1.29.11) The timeout for establishing a socket connection is
      set to getOption("timeout") (default 60 seconds).

    o (v 1.29.15) Check for and report failed attempts to open
      SnowParam ports.

    o (v 1.29.18) add bpfallback= option to control use of `lapply()`
      (fallback) when 0 or 1 workers are available.

    o (v 1.29.19) add bpexportvariables= option to automatically
      export global variables, or variables found in packages on the
      search path, in user-provided `FUN=` functions.
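The error-handling entries above (bptry(), bpresult(), BPREDO=) compose into the following recover-and-redo sketch, assuming BiocParallel is installed; the failing input "oops" is purely illustrative.

```r
## One element fails; bpok() flags it, and BPREDO= recomputes only the
## failed element once the input is fixed.
library(BiocParallel)

param <- SerialParam(stop.on.error = FALSE)
res <- bptry(bplapply(list(1, "oops", 9), sqrt, BPPARAM = param))
bpok(res)  # TRUE FALSE TRUE

## fix the offending input and redo only the failed element
res2 <- bplapply(list(1, 4, 9), sqrt, BPPARAM = param, BPREDO = res)
unlist(res2)  # 1 2 3
```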
BUG FIXES

    o (v 1.29.2) Fix regression in use of debug() with SerialParam.
      https://github.com/Bioconductor/BiocParallel/issues/128

    o (v 1.29.3) Fix regression in progress bar display with
      bplapply().
      https://github.com/Bioconductor/BiocParallel/issues/172

    o (v 1.29.5) Fix default seed generation when user has non-default
      generator.
      https://github.com/Bioconductor/BiocParallel/pull/176

    o (v 1.29.9) Fix validity when workers, specified as character(),
      are more numerous than (non-zero) tasks.
      https://github.com/Bioconductor/BiocParallel/pull/181

CHANGES IN VERSION 1.28
-----------------------

USER VISIBLE CHANGES

    o (v 1.27.3) Setting `progressbar = TRUE` for SnowParam() or
      MulticoreParam() changes the default value of `tasks` from 0 to
      `.Machine$integer.max`, so that progress on each element of `X`
      is reported.

    o (v 1.27.3) `tasks` greater than `length(X)` are set to
      `length(X)`. Thus `.Machine$integer.max`, for instance, assures
      that each element of `X` is a separate task.

    o (v 1.27.5) Use of random numbers is robust to the distribution
      of jobs across tasks for SerialParam(), SnowParam(), and
      MulticoreParam(), for both bplapply() and bpiterate(), using the
      RNGseed= argument to each *Param(). The change is NOT backward
      compatible -- users wishing to exactly reproduce earlier results
      should use a previous version of the package.

    o (v 1.27.8) Standardize SerialParam() constructor to enable
      setting additional fields. Standardize coercion of other
      BiocParallelParam types (e.g., SnowParam(), MulticoreParam()) to
      SerialParam() with as(., "SerialParam").

    o (v. 1.27.9) By default, do _not_ run garbage collection after
      every call to FUN(), except under MulticoreParam(). R's garbage
      collection algorithm fails to do well only when forked processes
      (i.e., MulticoreParam) assume that they are the only consumers
      of process memory.

    o (v 1.27.11) Developer-oriented functions bploop.*() arguments
      changed.
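The RNGseed= behavior described for v 1.27.5 can be sketched as below (assuming BiocParallel >= 1.28 is installed); with the same RNGseed=, random results are designed not to depend on the back-end or on how jobs are split across workers and tasks.

```r
## Sketch of RNGseed= reproducibility. Two different back-ends, same
## seed: the v 1.27.5 change is intended to make r1 and r2 identical.
library(BiocParallel)

r1 <- unlist(bplapply(1:4, function(i) runif(1),
    BPPARAM = SerialParam(RNGseed = 100)))
r2 <- unlist(bplapply(1:4, function(i) runif(1),
    BPPARAM = SnowParam(workers = 2, tasks = 4L, RNGseed = 100)))
```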
    o (v 1.27.12) Ignore set.seed() and never increment the global
      random number stream. This reverts a side-effect of behavior
      introduced in v. 1.27.5 to behavior more consistent with version
      1.26.

    o (v 1.27.16) Better BPREDO support for previously started
      BPPARAM, and 'transient' BPPARAM without RNGseed.

BUG FIXES

    o (v 1.27.10) Typo in coercion to SerialParam when only a single
      worker specified.
      https://github.com/Bioconductor/BiocParallel/issues/151

CHANGES IN VERSION 1.26
-----------------------

USER VISIBLE CHANGES

    o (v 1.25.2) bpvalidate() gains an argument to control warning /
      error / silent signaling, and returns a 'BPValidate' object.

BUG FIXES

    o (v 1.26.1) bptry(bplapply(X, ...)) returns a list of length X,
      appropriately annotated, when SerialParam(stop.on.error = TRUE).
      See https://github.com/Bioconductor/BiocParallel/issues/142

CHANGES IN VERSION 1.24
-----------------------

BUG FIXES

    o (v.1.23.1) bpvalidate() detects variables defined in parent
      environments; warns on use of global variables.

    o (v.1.23.2) bplapply() runs gc() after each evaluation of
      `FUN()`, so that workers do not accumulate excessive memory
      allocations (memory on a per-process basis is not excessive, but
      cluster-wise could be). See
      https://github.com/Bioconductor/BiocParallel/pull/124

    o (v.1.24.1) Add 'topLevelEnvironment' to list of blocked global
      variable exports to address performance regression introduced by
      testthat 3.0. See
      https://github.com/Bioconductor/BiocParallel/issues/127

CHANGES IN VERSION 1.22
-----------------------

USER VISIBLE CHANGES

    o (v 1.20.2) don't advance random number stream when used
      'internally'. This behavior alters reproducibility of existing
      scripts relying (probably implicitly) on the advancing stream.
      https://github.com/Bioconductor/BiocParallel/issues/110

BUG FIXES

    o (v 1.20.1) bplapply(), bpmapply(), bpvec() propagate names on
      arguments more correctly,
      https://github.com/Bioconductor/BiocParallel/issues/108

CHANGES IN VERSION 1.20
-----------------------

BUG FIXES

    o (v 1.19.2) Improve efficiency of MulticoreParam() when state
      does not persist across calls to bplapply().

CHANGES IN VERSION 1.18
-----------------------

USER VISIBLE CHANGES

    o (v 1.17.6) Initial use of registered BPPARAM does not advance
      random number seed, see
      https://stat.ethz.ch/pipermail/bioc-devel/2019-January/014526.html

    o (v 1.17.7) Loading package does not advance random number seed,
      see
      https://stat.ethz.ch/pipermail/bioc-devel/2019-January/014535.html

    o (v. 1.17.7) removed deprecated functions bplasterror(),
      bpresume(), bpcatchError() and field catch.error.

    o (v. 1.17.7) Make logdir, resultdir fields of BiocParallelParam.

    o (v. 1.17.7) replaced internal use of BatchJobs:::checkDir()
      (testing existence and read / write ability of log and other
      directories) with BiocParallelParam validity check.

    o (v. 1.17.7) expose 'developer' interface, `?DeveloperInterface`

    o (v. 1.17.11) on Windows, coerce `MulticoreParam(n)` to
      `MulticoreParam(1)` == `SerialParam()`

BUG FIXES

    o (v 1.17.4) port 1.16.3 (no '>' on SnowParam() worker end) and
      1.16.4 (bpRNGseed<-() accepts NULL)

    o (v 1.17.5) port 1.16.4 (bpRNGseed() can reset seed to NULL),
      1.16.5 (number of available cores defaults to 1 if cannot be
      determined).

CHANGES IN VERSION 1.16
-----------------------

NEW FEATURES

    o (v 1.15.9) BatchtoolsParam() gains resources=list() for template
      file substitution.

    o (v 1.15.12) bpexportglobals() for all BPPARAM exports global
      options (i.e., base::options()) to workers. Default TRUE.
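The bpexportglobals= behavior in the last entry can be sketched as follows (assuming BiocParallel is installed; the option name is made up for illustration):

```r
## Global options() set in the manager are exported to workers when
## exportglobals=TRUE (the default). The option name below is
## hypothetical.
library(BiocParallel)

options(example.verbose = TRUE)
param <- SnowParam(workers = 1, exportglobals = TRUE)
res <- bplapply(1, function(i) getOption("example.verbose"),
    BPPARAM = param)
res[[1]]  # TRUE if the option was exported to the worker
```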
BUG FIXES

    o (v 1.15.6) bpiterate,serial-method does not return a list() when
      REDUCE present
      (https://github.com/Bioconductor/BiocParallel/issues/77)

    o (v 1.15.7) bpaggregate,formula-method failed to find BPREDO
      (https://support.bioconductor.org/p/110784)

    o (v 1.15.13) bplapply,BatchtoolsParam() coerces List to list
      (https://github.com/Bioconductor/BiocParallel/issues/82)

    o (v 1.15.14) implicit loading of BiocParallel when loading a
      third-party package failed because reference class `initialize()`
      methods are not installed correctly. This bug fix results in
      significant revision in the implementation, so that valid
      objects must be constructed through the public constructors,
      e.g., `BatchtoolsParam()`

    o (v 1.16.3) do not print '>' for each terminating SnowParam()
      worker

    o (v 1.16.4) allow bpRNGseed() to reset seed to NULL

    o (v 1.16.5) number of available cores defaults to 1 on machines
      where number of cores available cannot be determined. See
      https://github.com/Bioconductor/BiocParallel/issues/91.

CHANGES IN VERSION 1.14
-----------------------

BUG FIXES

    o (v 1.13.1) bpiterate,serial-method does not unlist the result of
      FUN before passing to REDUCE.

CHANGES IN VERSION 1.12
-----------------------

BUG FIXES

    o (v. 1.11.1) Change registered backend initialization to first
      invocation, rather than on load.

    o (v 1.11.8) Ensure registry is initialized before (public) use.
      Issue #65

NEW FEATURES

    o (v. 1.11.2) bpiterate() gains a progress counter.

    o (v. 1.11.5) ipclock(), etc: inter-process locks and counters

CHANGES IN VERSION 1.10
----------------------

BUG FIXES

    o (v. 1.9.6) use of logdir= no longer tries to double-close log
      file.

CHANGES IN VERSION 1.8
----------------------

BUG FIXES

    o (v. 1.7.4) Allow more than 125 MPI nodes,
      https://github.com/Bioconductor/BiocParallel/issues/55

NEW FEATURES

    o Throttle number of cores used on Bioconductor build systems
      (with environment variable BBS_HOME set) to 4

CHANGES IN VERSION 1.6
----------------------

NEW FEATURES

    o stop.on.error returns catchable 'remote_error'

    o bplapply() signals a 'bplist_error' when any element is an
      error.

    o 'bplist_error' includes an attribute 'result' containing
      computed results; when stop.on.error = FALSE, the result vector
      is parallel to (has the same geometry as) the input vector.

    o bpvec() signals a 'bpvec_error' when length(FUN(X)) != length(X)

USER-VISIBLE CHANGES

    o Rename bpslaveLoop to (S3 generic) bploop

    o bpiterate() returns values consistent with REDUCE, rather than
      wrapping in list()

    o BatchJobsParam() passes more arguments to BatchJobs'
      makeRegistry(), setConfig(), submitJobs()

BUG FIXES

    o workers=1, tasks=0 assigns all elements of X in one chunk

    o SerialParam() respects stop.on.error

    o bpmapply,ANY,* methods did not honor all arguments, particularly
      MoreArgs.

CHANGES IN VERSION 1.2.0
------------------------

NEW FEATURES

    o Add support for iterative REDUCE in .bpiterate_serial()

    o Refactor BiocParallelParam class:
      - add 'log', 'tasks', 'threshold', 'logdir', 'resultdir' fields
      - 'tasks' is used by SnowParam and MulticoreParam only

    o MulticoreParam now uses SnowParam(..., type=FORK)

    o Add bpvalidate()

MODIFICATIONS

    o Add check to bpiterate() for Windows

    o Invoke REDUCE without '...'
      in .bpiterate_serial()

    o Update README and bpvec() man page

    o Change default BPPARAM to SnowParam() for Windows

    o Update bpiterate() man pages for Windows

    o Add note to vignette re: module load in template file from
      Thomas Girke

    o SnowParam:
      - bpmapply() now dispatches to bplapply()
      - remove BPRESUME
      - logging, gc output on worker
      - write results or logs to file
      - new error handling with futile.logger

    o Lighten the NAMESPACE by importing only parallel, snow

    o Modify which params are registered at load time:
      - Windows: SnowParam(), SerialParam()
      - Non-Windows: MulticoreParam(), SnowParam(), SerialParam()

    o bpvalidate() looks for symbols in 'fun' environment, NAMESPACE
      of loaded libraries, and the search path

BUG FIXES

    o Bug fix in bpiterate_multicore(); update doc examples

    o Bug fix in ordering of bpiterate() results, from Martin

    o Bug fix in .bpiterate_serial() when REDUCE is given

CHANGES IN VERSION 1.0.0
------------------------

NEW FEATURES

    o Add vignette sections for cluster managers, AMI

    o Add bpiterate generic and methods

    o Add REDUCE to bpiterate()

    o Add 'reduce.in.order' to bpiterate()

MODIFICATIONS

    o Update vignette examples, reorganize sections

    o Allow 'workers' in BiocParallelParam to be character or integer

    o Enhance register() man page; add examples

    o Improve default registration for SnowParam:
      - max 8 cores
      - use detectCores() / mc.cores if available

    o Modify .convertToSimpleError() to convert NULL to NA_character_

BUG FIXES

    o Fix recursion problem for BPPARAM as list

    o Modify bpaggregate() to run in parallel

BiocParallel/R/AllGenerics.R

setGeneric("bplapply", signature=c("X", "BPPARAM"),
    function(X, FUN, ..., BPREDO=list(), BPPARAM=bpparam(),
        BPOPTIONS = bpoptions())
    standardGeneric("bplapply"))

setGeneric("bpmapply", signature=c("FUN", "BPPARAM"),
    function(FUN, ..., MoreArgs=NULL,
        SIMPLIFY=TRUE, USE.NAMES=TRUE, BPREDO=list(),
        BPPARAM=bpparam(), BPOPTIONS = bpoptions())
    standardGeneric("bpmapply"))

setGeneric("bpiterate", signature=c("ITER", "FUN", "BPPARAM"),
    function(ITER, FUN, ..., BPREDO=list(), BPPARAM=bpparam(),
        BPOPTIONS = bpoptions())
    standardGeneric("bpiterate"))

setGeneric("bpvec", signature=c("X", "BPPARAM"),
    function(X, FUN, ..., AGGREGATE=c, BPREDO=list(), BPPARAM=bpparam(),
        BPOPTIONS = bpoptions())
    standardGeneric("bpvec"))

setGeneric("bpvectorize",
    function(FUN, ..., BPREDO=list(), BPPARAM=bpparam(),
        BPOPTIONS = bpoptions())
    standardGeneric("bpvectorize"))

setGeneric("bpaggregate", signature=c("x", "BPPARAM"),
    function(x, ..., BPREDO=list(), BPPARAM=bpparam(),
        BPOPTIONS = bpoptions())
    standardGeneric("bpaggregate"))

##
## accessors
##

setGeneric("bpworkers", function(x) standardGeneric("bpworkers"))

setGeneric("bpworkers<-", function(x, value) standardGeneric("bpworkers<-"))

setGeneric("bptasks", function(x) standardGeneric("bptasks"))

setGeneric("bptasks<-", function(x, value) standardGeneric("bptasks<-"))

setGeneric("bpjobname", function(x) standardGeneric("bpjobname"))

setGeneric("bpjobname<-", function(x, value) standardGeneric("bpjobname<-"))

setGeneric("bpRNGseed", function(x) standardGeneric("bpRNGseed"))

setGeneric("bpRNGseed<-", function(x, value) standardGeneric("bpRNGseed<-"))

setGeneric("bpforceGC", function(x) standardGeneric("bpforceGC"))

setGeneric("bpforceGC<-", function(x, value) standardGeneric("bpforceGC<-"))

setGeneric("bpfallback", function(x) standardGeneric("bpfallback"))

setGeneric("bpfallback<-", function(x, value) standardGeneric("bpfallback<-"))

## errors

setGeneric("bpstopOnError", function(x) standardGeneric("bpstopOnError"))

setGeneric("bpstopOnError<-",
    function(x, value) standardGeneric("bpstopOnError<-"))

## logging / progress

setGeneric("bpprogressbar", function(x) standardGeneric("bpprogressbar"))

setGeneric("bpprogressbar<-",
    function(x, value) standardGeneric("bpprogressbar<-"))
setGeneric("bptimeout", function(x) standardGeneric("bptimeout"))

setGeneric("bptimeout<-", function(x, value) standardGeneric("bptimeout<-"))

setGeneric("bpexportglobals", function(x) standardGeneric("bpexportglobals"))

setGeneric("bpexportglobals<-",
    function(x, value) standardGeneric("bpexportglobals<-"))

setGeneric("bpexportvariables",
    function(x) standardGeneric("bpexportvariables"))

setGeneric("bpexportvariables<-",
    function(x, value) standardGeneric("bpexportvariables<-"))

setGeneric("bplog", function(x) standardGeneric("bplog"))

setGeneric("bplog<-", function(x, value) standardGeneric("bplog<-"))

setGeneric("bplogdir", function(x) standardGeneric("bplogdir"))

setGeneric("bplogdir<-", function(x, value) standardGeneric("bplogdir<-"))

setGeneric("bpthreshold", function(x) standardGeneric("bpthreshold"))

setGeneric("bpthreshold<-", function(x, value) standardGeneric("bpthreshold<-"))

setGeneric("bpresultdir", function(x) standardGeneric("bpresultdir"))

setGeneric("bpresultdir<-", function(x, value) standardGeneric("bpresultdir<-"))

## control

setGeneric("bpstart", function(x, ...)
    standardGeneric("bpstart"))

setGeneric("bpstop", function(x) standardGeneric("bpstop"))

setGeneric("bpisup", function(x) standardGeneric("bpisup"))

setGeneric("bpbackend", function(x) standardGeneric("bpbackend"))

setGeneric("bpbackend<-", function(x, value) standardGeneric("bpbackend<-"))

## scheduling

setGeneric("bpschedule", function(x) standardGeneric("bpschedule"))

BiocParallel/R/BatchtoolsParam-class.R

### ================================================================
### BatchtoolsParam objects
### ----------------------------------------------------------------

.BATCHTOOLS_CLUSTERS <- c(
    "socket", "multicore", "interactive", "sge", "slurm", "lsf",
    "openlava", "torque"
)

### -------------------------------------------------
### Helper functions
###

batchtoolsWorkers <- function(cluster = batchtoolsCluster())
{
    switch(
        match.arg(cluster, .BATCHTOOLS_CLUSTERS),
        interactive = 1L,
        socket = snowWorkers("SOCK"),
        multicore = multicoreWorkers(),
        stop("specify number of workers for '", cluster, "'")
    )
}

batchtoolsCluster <- function(cluster)
{
    if (missing(cluster)) {
        if (.Platform$OS.type == "windows") {
            cluster <- "socket"
        } else {
            cluster <- "multicore"
        }
    } else {
        cluster <- match.arg(cluster, .BATCHTOOLS_CLUSTERS)
    }
    cluster
}

.batchtoolsClusterAvailable <- function(cluster)
{
    switch(
        cluster,
        socket = TRUE,
        multicore = .Platform$OS.type != "windows",
        interactive = TRUE,
        sge = suppressWarnings(
            system2("qstat", stderr=NULL, stdout=NULL) != 127L),
        slurm = suppressWarnings(
            system2("squeue", stderr=NULL, stdout=NULL) != 127L),
        lsf = suppressWarnings(
            system2("bjobs", stderr=NULL, stdout=NULL) != 127L),
        openlava = suppressWarnings(
            system2("bjobs", stderr=NULL, stdout=NULL) != 127L),
        torque = suppressWarnings(
            system2("qselect", stderr=NULL, stdout=NULL) != 127L),
        .stop(
            "unsupported cluster type '", cluster, "'; ",
            "supported types (when available):\n",
            " ", paste0("'", .BATCHTOOLS_CLUSTERS, "'",
                collapse = ", ")
        )
    )
}

batchtoolsTemplate <- function(cluster)
{
    if (!cluster %in% .BATCHTOOLS_CLUSTERS)
        stop("unsupported cluster type '", cluster, "'")
    if (cluster %in% c("socket", "multicore", "interactive"))
        return(NA_character_)

    message("using default '", cluster, "' template in batchtools.")
    if (cluster == "torque")
        tmpl <- "torque-lido.tmpl"
    else
        tmpl <- sprintf("%s-simple.tmpl", tolower(cluster))
    ## return template
    system.file("templates", tmpl, package="batchtools")
}

batchtoolsRegistryargs <- function(...)
{
    args <- list(...)
    ## our defaults...
    registryargs <- as.list(formals(batchtools::makeRegistry))
    registryargs$file.dir <- tempfile(tmpdir=getwd())
    registryargs$conf.file <- registryargs$seed <- NULL
    registryargs$make.default <- FALSE
    ## ...modified by user
    registryargs[names(args)] <- args

    registryargs
}

### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Constructor
###

setOldClass("Registry")
setOldClass(c("NULLRegistry", "Registry"))

.NULLRegistry <- function()
{
    structure(list(), class=c("NULLRegistry", "Registry"))
}

print.NULLRegistry <- function(x, ...)
{
    cat("NULL Job Registry\n")
}

setOldClass("ClusterFunctions")

.BatchtoolsParam_prototype <- c(
    list(
        cluster = NA_character_,
        template = NA_character_,
        registry = .NULLRegistry(),
        registryargs = list(),
        saveregistry = FALSE,
        resources = list()
    ),
    .BiocParallelParam_prototype
)

.BatchtoolsParam <- setRefClass(
    "BatchtoolsParam",
    contains="BiocParallelParam",
    fields = list(
        cluster = "character",
        template = "character",
        registry = "Registry",
        registryargs = "list",
        saveregistry = "logical",
        resources = "list"
    ),
    methods = list(
        show = function() {
            callSuper()
            .registryargs <- .bpregistryargs(.self)
            .resources <- .bpresources(.self)
            cat(" cluster type: ", bpbackend(.self),
                "\n", .prettyPath(" template", .bptemplate(.self)),
                "\n registryargs:",
                paste0("\n ", names(.registryargs), ": ", .registryargs),
                "\n saveregistry: ", .bpsaveregistry(.self),
                "\n resources:",
                if (length(.resources))
                    paste0("\n ", names(.resources), ": ", .resources),
                "\n", sep="")
        }
    )
)

BatchtoolsParam <- function(
    workers = batchtoolsWorkers(cluster),
    ## Provide either cluster or template
    cluster = batchtoolsCluster(),
    registryargs = batchtoolsRegistryargs(),
    saveregistry = FALSE,
    resources = list(),
    template = batchtoolsTemplate(cluster),
    stop.on.error = TRUE,
    progressbar = FALSE, RNGseed = NA_integer_,
    timeout = WORKER_TIMEOUT, exportglobals = TRUE,
    log = FALSE, logdir = NA_character_, resultdir = NA_character_,
    jobname = "BPJOB")
{
    if (!requireNamespace("batchtools", quietly=TRUE))
        stop("BatchtoolsParam() requires 'batchtools' package")
    if (!.batchtoolsClusterAvailable(cluster))
        stop("'", cluster, "' supported but not available on this machine")
    if (length(resources) && is.null(names(resources)))
        stop("'resources' must be a named list")

    workers <- .enforceWorkers(workers)
    prototype <- .prototype_update(
        .BatchtoolsParam_prototype,
        workers = as.integer(workers),
        cluster = cluster,
        registry = .NULLRegistry(),
        registryargs = registryargs,
        saveregistry = saveregistry,
        resources = resources,
        jobname =
            jobname,
        progressbar = progressbar,
        log = log,
        logdir = logdir,
        resultdir = resultdir,
        stop.on.error = stop.on.error,
        timeout = as.integer(timeout),
        exportglobals = exportglobals,
        RNGseed = as.integer(RNGseed),
        template = template
    )
    param <- do.call(.BatchtoolsParam, prototype)
    validObject(param)
    param
}

### - - - - - - - - - - - - - - - - - - - - - - - - -
### Validity
###

setValidity("BatchtoolsParam", function(object) {
    msg <- NULL
    if (!bpbackend(object) %in% .BATCHTOOLS_CLUSTERS) {
        types <- paste(.BATCHTOOLS_CLUSTERS, collapse = ", ")
        msg <- c(msg, paste("'cluster' must be one of", types))
    }
    if (is.null(msg)) TRUE else msg
})

### - - - - - - - - - - - - - - - - - - - - - - - - -
### Methods - control
###

setMethod("bpisup", "BatchtoolsParam", function(x) {
    !is(x$registry, "NULLRegistry")
})

.bpregistryargs <- function(x)
{
    x$registryargs
}

.bpsaveregistry <- function(x)
{
    x$saveregistry
}

.bpsaveregistry_path <- function(x)
{
    ## update registry location
    pattern <- "\\-[0-9]+$"
    file.dir <- .bpregistryargs(x)$file.dir
    dirname <- dirname(file.dir)
    basename <- basename(file.dir)
    dirs <- dir(dirname, paste0(basename, pattern))
    n <- 0L
    if (length(dirs))
        n <- max(as.integer(sub(".*\\-", "", dirs)))
    file.path(dirname, paste0(basename, "-", n + 1L))
}

.bpresources <- function(x)
{
    x$resources
}

.bptemplate <- function(x)
{
    x$template
}

.composeBatchtools <- function(FUN)
{
    force(FUN)
    function(fl, ...) {
        x <- readRDS(fl)
        FUN(x, ...)
    }
}

setMethod("bpbackend", "BatchtoolsParam", function(x) {
    x$cluster
})

setMethod("bpstart", "BatchtoolsParam", function(x) {
    if (bpisup(x))
        return(invisible(x))

    cluster <- bpbackend(x)
    registryargs <- .bpregistryargs(x)

    oopt <- options(batchtools.verbose = FALSE)
    on.exit(options(oopt))

    seed <- bpRNGseed(x)
    if (!is.na(seed))
        registryargs$seed <- seed
    if (.bpsaveregistry(x)) {
        ## the registry$file.dir gets -0, -1, -2...
        ## for each bpstart on the same parameter
        registryargs$file.dir <- .bpsaveregistry_path(x)
    }
    registry <- do.call(batchtools::makeRegistry, registryargs)
    registry$cluster.functions <- switch(
        cluster,
        interactive = batchtools::makeClusterFunctionsInteractive(),
        socket = batchtools::makeClusterFunctionsSocket(bpnworkers(x)),
        multicore = batchtools::makeClusterFunctionsMulticore(bpnworkers(x)),
        sge = batchtools::makeClusterFunctionsSGE(template = .bptemplate(x)),
        ## Add multiple cluster support
        slurm = batchtools::makeClusterFunctionsSlurm(template=.bptemplate(x)),
        lsf = batchtools::makeClusterFunctionsLSF(template=.bptemplate(x)),
        openlava = batchtools::makeClusterFunctionsOpenLava(
            template=.bptemplate(x)
        ),
        torque = batchtools::makeClusterFunctionsTORQUE(
            template=.bptemplate(x)
        ),
        default = stop("unsupported cluster type '", cluster, "'")
    )
    x$registry <- registry              # toggles bpisup()

    invisible(x)
})

setMethod("bpstop", "BatchtoolsParam", function(x) {
    wait <- getOption("BIOCPARALLEL_BATCHTOOLS_REMOVE_REGISTRY_WAIT", 5)
    if (!.bpsaveregistry(x))
        suppressMessages({
            batchtools::removeRegistry(wait = wait, reg = x$registry)
        })
    x$registry <- .NULLRegistry()       # toggles bpisup()

    invisible(x)
})

### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Methods - evaluation
###

setMethod("bplapply", c("ANY", "BatchtoolsParam"),
    function(X, FUN, ..., BPREDO = list(), BPPARAM=bpparam(),
        BPOPTIONS = bpoptions())
{
    FUN <- match.fun(FUN)
    if (!length(X))
        return(.rename(list(), X))

    if (is(X, "List"))                  # hack; issue 82
        X <- as.list(X)

    idx <- .redo_index(X, BPREDO)
    if (length(idx))
        X <- X[idx]
    nms <- names(X)

    ## start / stop cluster
    if (!bpisup(BPPARAM)) {
        BPPARAM <- bpstart(BPPARAM)
        on.exit(bpstop(BPPARAM), TRUE)
    }

    ## progressbar / verbose
    if (bpprogressbar(BPPARAM)) {
        opts <- options(
            BBmisc.ProgressBar.style="text",
            batchtools.verbose = TRUE
        )
        on.exit({
            ## message("")              # clear progress bar
            options(opts)
        }, TRUE)
    } else {
        opts <- options(
BBmisc.ProgressBar.style="off", batchtools.verbose = FALSE ) on.exit(options(opts), TRUE) } registry <- BPPARAM$registry OPTIONS <- .workerOptions( log = bplog(BPPARAM), stop.on.error = bpstopOnError(BPPARAM), timeout = bptimeout(BPPARAM), exportglobals = bpexportglobals(BPPARAM) ) FUN <- .composeTry( FUN, OPTIONS = OPTIONS, SEED = NULL ) ## Make registry / map / submit / wait / load ids = batchtools::batchMap( fun=FUN, X, more.args = list(...), reg = registry ) ids$chunk = batchtools::chunk(ids$job.id, n.chunks = bpnworkers(BPPARAM)) batchtools::submitJobs( ids = ids, resources = .bpresources(BPPARAM), reg = registry ) batchtools::waitForJobs( ids = ids, reg = registry, timeout = .batch_bptimeout(BPPARAM), stop.on.error = bpstopOnError(BPPARAM) ) res <- batchtools::reduceResultsList(ids = ids, reg = registry) ## Copy logs from log dir to bplogdir before clearing registry if (bplog(BPPARAM) && !is.na(bplogdir(BPPARAM))) { logs <- file.path(.bpregistryargs(BPPARAM)$file.dir, "logs") ## Create log dir if (!file.exists(bplogdir(BPPARAM))) dir.create(bplogdir(BPPARAM)) ## Recursive copy logs file.copy(logs, bplogdir(BPPARAM) , recursive=TRUE, overwrite=TRUE) } ## Clear registry if (bpprogressbar(BPPARAM)) message("Clearing registry ...") if (!.bpsaveregistry(BPPARAM)) ## WARNING Save a registry in a folder with extension, ## _saved_registry. BatchtoolsParam('saveregistry=TRUE') option ## should be set only when debugging. This can be extremely ## time and space intensive. 
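## A commented sketch of 'saveregistry' use (not run; 'cluster' and
## 'saveregistry' are real BatchtoolsParam() arguments, directory
## names are illustrative). Each bpstart() writes a fresh registry
## next to the original 'file.dir', numbered -1, -2, ...:
##     param <- BatchtoolsParam(workers = 2, cluster = "socket",
##                              saveregistry = TRUE)
##     res <- bplapply(1:4, sqrt, BPPARAM = param)
## after which a saved registry can be inspected with
## batchtools::loadRegistry().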
suppressMessages({ batchtools::clearRegistry(reg=registry) }) if (!is.null(res)) names(res) <- nms if (length(BPREDO) && length(idx)) { BPREDO[idx] <- res res <- BPREDO } if (!.bpallok(res)) stop(.error_bplist(res)) res }) setMethod("bpiterate", c("ANY", "ANY", "BatchtoolsParam"), function(ITER, FUN, ..., REDUCE, init, reduce.in.order=FALSE, BPREDO = list(), BPPARAM=bpparam(), BPOPTIONS=bpoptions()) { ITER <- match.fun(ITER) FUN <- match.fun(FUN) if (missing(REDUCE)) { if (reduce.in.order) stop("REDUCE must be provided when 'reduce.in.order = TRUE'") if (!missing(init)) stop("REDUCE must be provided when 'init' is given") } if (!bpschedule(BPPARAM) || bpnworkers(BPPARAM) == 1L) { param <- as(BPPARAM, "SerialParam") return( bpiterate(ITER, FUN, ..., REDUCE=REDUCE, init=init, BPREDO = BPREDO, BPPARAM=param, BPOPTIONS=BPOPTIONS) ) } if (!identical(BPREDO, list())) stop("BPREDO is not supported by the BatchtoolsParam yet!") ## start / stop cluster if (!bpisup(BPPARAM)) { bpstart(BPPARAM) on.exit(bpstop(BPPARAM)) } OPTIONS <- .workerOptions( log = bplog(BPPARAM), stop.on.error = bpstopOnError(BPPARAM), timeout = bptimeout(BPPARAM), exportglobals = bpexportglobals(BPPARAM) ) ## composeTry FUN <- .composeTry( FUN, OPTIONS = OPTIONS, SEED = NULL ) FUN <- .composeBatchtools(FUN) ## Call batchtoolsIterate with arguments bploop(structure(list(), class="iterate_batchtools"), ITER, FUN, BPPARAM, REDUCE, init, reduce.in.order, ...) 
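## A commented sketch of calling this method (not run; assumes a
## started BatchtoolsParam in 'param'). 'ITER' must be a function
## returning the next element, or NULL when the data are exhausted:
##     make_iter <- function(n) {
##         i <- 0L
##         function() if (i < n) { i <<- i + 1L; i } else NULL
##     }
##     bpiterate(make_iter(3L), sqrt, REDUCE = `+`, BPPARAM = param)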
}) BiocParallel/R/BiocParallel-defunct.R0000644000175200017520000000104614516004410020424 0ustar00biocbuildbiocbuildbprunMPIslave <- function() { .Defunct("bprunMPIworker") } BatchJobsParam <- function(workers=NA_integer_, cleanup=TRUE, work.dir=getwd(), stop.on.error=TRUE, seed=NULL, resources=NULL, conffile=NULL, cluster.functions=NULL, progressbar=TRUE, jobname = "BPJOB", timeout = WORKER_TIMEOUT, reg.pars=list(seed=seed, work.dir=work.dir), conf.pars=list(conffile=conffile, cluster.functions=cluster.functions), submit.pars=list(resources=resources), ...) { .Defunct("BatchtoolsParam") } BiocParallel/R/BiocParallelParam-class.R0000644000175200017520000002311214516004410021060 0ustar00biocbuildbiocbuild### ========================================================================= ### BiocParallelParam objects ### ------------------------------------------------------------------------- .BiocParallelParam_prototype <- list( workers=0L, tasks=0L, jobname="BPJOB", log=FALSE, logdir = NA_character_, threshold="INFO", resultdir = NA_character_, stop.on.error=TRUE, timeout=WORKER_TIMEOUT, exportglobals=TRUE, exportvariables=TRUE, progressbar=FALSE, RNGseed=NULL, RNGstream = NULL, force.GC = FALSE, fallback = TRUE ) .BiocParallelParam <- setRefClass("BiocParallelParam", contains="VIRTUAL", fields=list( workers="ANY", tasks="integer", jobname="character", progressbar="logical", ## required for composeTry log="logical", logdir = "character", threshold="character", resultdir = "character", stop.on.error="logical", timeout="integer", exportglobals="logical", exportvariables="logical", RNGseed = "ANY", # NULL or integer(1) RNGstream = "ANY", # NULL or integer(); internal use only force.GC = "logical", fallback = "logical", ## cluster management .finalizer_env = "environment", .uid = "character" ), methods=list( show = function() { cat("class: ", class(.self), "\n", " bpisup: ", bpisup(.self), "; bpnworkers: ", bpnworkers(.self), "; bptasks: ", bptasks(.self), "; bpjobname: ", 
bpjobname(.self), "\n", " bplog: ", bplog(.self), "; bpthreshold: ", bpthreshold(.self), "; bpstopOnError: ", bpstopOnError(.self), "\n", " bpRNGseed: ", bpRNGseed(.self), "; bptimeout: ", bptimeout(.self), "; bpprogressbar: ", bpprogressbar(.self), "\n", " bpexportglobals: ", bpexportglobals(.self), "; bpexportvariables: ", bpexportvariables(.self), "; bpforceGC: ", bpforceGC(.self), "\n", " bpfallback: ", bpfallback(.self), "\n", .prettyPath(" bplogdir", bplogdir(.self)), "\n", .prettyPath(" bpresultdir", bpresultdir(.self)), "\n", sep="") }) ) ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Validity ### setValidity("BiocParallelParam", function(object) { msg <- NULL ## workers and tasks workers <- bpworkers(object) if (is.numeric(workers)) if (length(workers) != 1L || workers < 0) msg <- c(msg, "'workers' must be integer(1) and >= 0") tasks <- bptasks(object) if (!is.numeric(tasks)) msg <- c(msg, "bptasks(BPPARAM) must be an integer") if (length(tasks) > 1L) msg <- c(msg, "length(bptasks(BPPARAM)) must be == 1") if (!is.na(tasks) && tasks < 0L) msg <- c(msg, "bptasks(BPPARAM) must be >= 0 or 'NA'") if (is.character(workers)) { if (length(workers) < 1L) msg <- c(msg, "length(bpworkers(BPPARAM)) must be > 0") if (!is.na(tasks) && tasks > 0L && tasks < length(workers)) msg <- c(msg, "number of tasks is less than number of workers") } if (!.isTRUEorFALSE(bpexportglobals(object))) msg <- c(msg, "'bpexportglobals' must be TRUE or FALSE") if (!.isTRUEorFALSE(bpexportvariables(object))) msg <- c(msg, "'bpexportvariables' must be TRUE or FALSE") if (!.isTRUEorFALSE(bplog(object))) msg <- c(msg, "'bplog' must be logical(1)") ## log / logdir dir <- bplogdir(object) if (length(dir) != 1L || !is(dir, "character")) { msg <- c(msg, "'logdir' must be character(1)") } else if (!is.na(dir)) { if (!bplog(object)) msg <- c(msg, "'log' must be TRUE when 'logdir' is given") if (!.dir_valid_rw(dir)) msg <- c(msg, "'logdir' must exist with read / write 
permission") } ## resultdir dir <- bpresultdir(object) if (length(dir) != 1L || !is(dir, "character")) { msg <- c(msg, "'resultdir' must be character(1)") } else if (!is.na(dir) && !.dir_valid_rw(dir)) { msg <- c(msg, "'resultdir' must exist with read / write permissions") } levels <- c("TRACE", "DEBUG", "INFO", "WARN", "ERROR", "FATAL") threshold <- bpthreshold(object) if (length(threshold) > 1L) { msg <- c(msg, "'bpthreshold' must be character(0) or character(1)") } else if ((length(threshold) == 1L) && (!threshold %in% levels)) { txt <- sprintf("'bpthreshold' must be one of %s", paste(sQuote(levels), collapse=", ")) msg <- c(msg, paste(strwrap(txt, indent=2, exdent=2), collapse="\n")) } if (!.isTRUEorFALSE(bpstopOnError(object))) msg <- c(msg, "'bpstopOnError' must be TRUE or FALSE") if (!.isTRUEorFALSE(bpforceGC(object))) msg <- c(msg, "'force.GC' must be TRUE or FALSE") if (is.null(msg)) TRUE else msg }) ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Getters / Setters ### setMethod("bpworkers", "BiocParallelParam", function(x) { x$workers }) setMethod("bptasks", "BiocParallelParam", function(x) { x$tasks }) setReplaceMethod("bptasks", c("BiocParallelParam", "ANY"), function(x, value) { x$tasks <- as.integer(value) validObject(x) x }) setMethod("bpjobname", "BiocParallelParam", function(x) { x$jobname }) setReplaceMethod("bpjobname", c("BiocParallelParam", "character"), function(x, value) { x$jobname <- value x }) setMethod("bplog", "BiocParallelParam", function(x) { x$log }) setMethod("bplogdir", "BiocParallelParam", function(x) { x$logdir }) setReplaceMethod("bplogdir", c("BiocParallelParam", "character"), function(x, value) { if (bpisup(x)) stop("use 'bpstop()' before setting 'bplogdir()'") x$logdir <- value validObject(x) x }) setMethod("bpthreshold", "BiocParallelParam", function(x) { x$threshold }) setMethod("bpresultdir", "BiocParallelParam", function(x) { x$resultdir }) setReplaceMethod("bpresultdir", 
c("BiocParallelParam", "character"), function(x, value) { if (bpisup(x)) stop("use 'bpstop()' before setting 'bpresultdir()'") x$resultdir <- value validObject(x) x }) setMethod("bptimeout", "BiocParallelParam", function(x) { x$timeout }) setReplaceMethod("bptimeout", c("BiocParallelParam", "numeric"), function(x, value) { x$timeout <- as.integer(value) x }) setMethod("bpexportglobals", "BiocParallelParam", function(x) { x$exportglobals }) setReplaceMethod("bpexportglobals", c("BiocParallelParam", "logical"), function(x, value) { x$exportglobals <- value validObject(x) x }) setMethod("bpexportvariables", "BiocParallelParam", function(x) { x$exportvariables }) setReplaceMethod("bpexportvariables", c("BiocParallelParam", "logical"), function(x, value) { x$exportvariables <- value validObject(x) x }) setMethod("bpstopOnError", "BiocParallelParam", function(x) { x$stop.on.error }) setReplaceMethod("bpstopOnError", c("BiocParallelParam", "logical"), function(x, value) { x$stop.on.error <- value validObject(x) x }) setMethod("bpprogressbar", "BiocParallelParam", function(x) { x$progressbar }) setReplaceMethod("bpprogressbar", c("BiocParallelParam", "logical"), function(x, value) { x$progressbar <- value validObject(x) x }) setMethod("bpRNGseed", "BiocParallelParam", function(x) { x$RNGseed }) setReplaceMethod("bpRNGseed", c("BiocParallelParam", "NULL"), function(x, value) { x$RNGseed <- NULL .RNGstream(x) <- NULL validObject(x) x }) setReplaceMethod("bpRNGseed", c("BiocParallelParam", "numeric"), function(x, value) { x$RNGseed <- as.integer(value) .RNGstream(x) <- NULL validObject(x) x }) .RNGstream <- function(x) { if (length(x$RNGstream) == 0) .RNGstream(x) <- .rng_init_stream(bpRNGseed(x)) x$RNGstream } `.RNGstream<-` <- function(x, value) { value <- as.integer(value) if (anyNA(value)) stop("[internal] RNGstream value could not be coerced to integer") x$RNGstream <- value x } .bpnextRNGstream <- function(x) { ## initialize or get the next random number stream; 
increment the ## stream only in bpstart_impl .RNGstream(x) <- .rng_next_stream(.RNGstream(x)) } setMethod("bpforceGC", "BiocParallelParam", function(x) { x$force.GC }) setReplaceMethod("bpforceGC", c("BiocParallelParam", "numeric"), function(x, value) { x$force.GC <- as.logical(value) validObject(x) x }) setMethod("bpfallback", "BiocParallelParam", function(x) { x$fallback }) setReplaceMethod("bpfallback", c("BiocParallelParam", "logical"), function(x, value) { x$fallback <- value validObject(x) x }) ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Methods - evaluation ### setMethod("bpstart", "BiocParallelParam", .bpstart_impl) setMethod("bpstop", "BiocParallelParam", .bpstop_impl) setMethod("bplapply", c("ANY", "BiocParallelParam"), .bplapply_impl) setMethod("bpiterate", c("ANY", "ANY", "BiocParallelParam"), .bpiterate_impl) ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Helpers ### ## taken from S4Vectors .isTRUEorFALSE <- function (x) { is.logical(x) && length(x) == 1L && !is.na(x) } BiocParallel/R/DeveloperInterface.R0000644000175200017520000001106414516004410020214 0ustar00biocbuildbiocbuild## ## see NAMESPACE section for definitive exports ## ## Manager class .TaskManager <- setClass("TaskManager", contains = "environment") ## server setGeneric( ".send_to", function(backend, node, value) standardGeneric(".send_to"), signature = "backend" ) setGeneric( ".recv_any", function(backend) standardGeneric(".recv_any"), signature = "backend" ) setGeneric( ".send_all", function(backend, value) standardGeneric(".send_all"), signature = "backend" ) setGeneric( ".recv_all", function(backend) standardGeneric(".recv_all"), signature = "backend" ) ## client setGeneric( ".send", function(worker, value) standardGeneric(".send"), signature = "worker" ) setGeneric( ".recv", function(worker) standardGeneric(".recv"), signature = "worker" ) setGeneric( ".close", function(worker) standardGeneric(".close"), 
signature = "worker" ) ## task manager setGeneric( ".manager", function(BPPARAM) standardGeneric(".manager"), signature = "BPPARAM" ) setGeneric( ".manager_send", function(manager, value, ...) standardGeneric(".manager_send"), signature = "manager" ) setGeneric( ".manager_recv", function(manager) standardGeneric(".manager_recv"), signature = "manager" ) setGeneric( ".manager_send_all", function(manager, value) standardGeneric(".manager_send_all"), signature = "manager" ) setGeneric( ".manager_recv_all", function(manager) standardGeneric(".manager_recv_all"), signature = "manager" ) setGeneric( ".manager_capacity", function(manager) standardGeneric(".manager_capacity"), signature = "manager" ) setGeneric( ".manager_flush", function(manager) standardGeneric(".manager_flush"), signature = "manager" ) setGeneric( ".manager_cleanup", function(manager) standardGeneric(".manager_cleanup"), signature = "manager" ) ## default implementation -- SNOW backend setMethod( ".send_all", "ANY", function(backend, value) { for (node in seq_along(backend)) .send_to(backend, node, value) }) setMethod( ".recv_all", "ANY", function(backend) { replicate(length(backend), .recv_any(backend), simplify=FALSE) }) setMethod( ".send_to", "ANY", function(backend, node, value) { parallel:::sendData(backend[[node]], value) TRUE }) setMethod( ".recv_any", "ANY", function(backend) { tryCatch({ parallel:::recvOneData(backend) }, error = function(e) { ## indicate error, but do not stop .error_worker_comm(e, "'.recv_any()' data failed") }) }) setMethod( ".send", "ANY", function(worker, value) { parallel:::sendData(worker, value) }) setMethod( ".recv", "ANY", function(worker) { tryCatch({ parallel:::recvData(worker) }, error = function(e) { ## indicate error, but do not stop .error_worker_comm(e, "'.recv()' data failed") }) }) setMethod( ".close", "ANY", function(worker) { parallel:::closeNode(worker) }) ## default task
manager implementation ## ## define as plain function for re-use without method dispatch .manager_ANY <- function(BPPARAM) { manager <- .TaskManager() manager$BPPARAM <- BPPARAM manager$backend <- bpbackend(BPPARAM) manager$capacity <- length(manager$backend) availability <- rep(list(TRUE), manager$capacity) names(availability) <- as.character(seq_along(manager$backend)) manager$availability <- as.environment(availability) manager } setMethod(".manager", "ANY", .manager_ANY) setMethod( ".manager_send", "ANY", function(manager, value, ...) { availability <- manager$availability stopifnot(length(availability) > 0) ## send the job to the next available worker worker <- names(availability)[1] .send_to(manager$backend, as.integer(worker), value) rm(list = worker, envir = availability) }) setMethod( ".manager_recv", "ANY", function(manager) { result <- .recv_any(manager$backend) manager$availability[[as.character(result$node)]] <- TRUE list(result) }) setMethod( ".manager_send_all", "ANY", function(manager, value) .send_all(manager$backend, value) ) setMethod( ".manager_recv_all", "ANY", function(manager) .recv_all(manager$backend) ) setMethod( ".manager_capacity", "ANY", function(manager) { manager$capacity }) setMethod( ".manager_flush", "ANY", function(manager) manager ) setMethod( ".manager_cleanup", "ANY", function(manager) manager ) BiocParallel/R/DoparParam-class.R0000644000175200017520000000675714516004410017604 0ustar00biocbuildbiocbuild### ========================================================================= ### DoparParam objects ### ------------------------------------------------------------------------- ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Constructor ### .DoparParam_prototype <- .BiocParallelParam_prototype .DoparParam <- setRefClass("DoparParam", contains="BiocParallelParam", fields=list(), methods=list() ) DoparParam <- function(stop.on.error=TRUE, RNGseed = NULL) { if (!requireNamespace("foreach", quietly
= TRUE)) stop("DoparParam() requires the 'foreach' package", call. = FALSE) prototype <- .prototype_update( .DoparParam_prototype, stop.on.error=stop.on.error, RNGseed=RNGseed ) x <- do.call(.DoparParam, prototype) ## DoparParam is always up, so we need to initialize ## the seed stream here .bpstart_set_rng_stream(x) validObject(x) x } setMethod("bpworkers", "DoparParam", function(x) { if (bpisup(x)) foreach::getDoParWorkers() else 0L }) setMethod("bpisup", "DoparParam", function(x) { isNamespaceLoaded("foreach") && foreach::getDoParRegistered() && (foreach::getDoParName() != "doSEQ") }) ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Manager ### .DoparParamManager <- setClass("DoparParamManager", contains="TaskManager" ) ## constructor setMethod( ".manager", "DoparParam", function(BPPARAM) { .DoparParamManager( BPPARAM = BPPARAM, tasks = new.env(parent = emptyenv()) ) }) setMethod( ".manager_send", "DoparParamManager", function(manager, value, ...) { taskId <- length(manager$tasks) + 1L if (taskId == 1L) manager$const.value <- .task_const(value) manager$tasks[[as.character(taskId)]] <- .task_dynamic(value) }) setMethod( ".manager_recv", "DoparParamManager", function(manager) { stopifnot(length(manager$tasks) > 0L) tasks <- as.list(manager$tasks) tasks <- tasks[order(names(tasks))] const.value <- manager$const.value `%dopar%` <- foreach::`%dopar%` foreach <- foreach::foreach tryCatch({ results <- foreach(task = tasks)%dopar%{ task <- .task_remake(task, const.value) if (task$type == "EXEC") value <- .bpworker_EXEC(task) else value <- NULL list(value = value) } }, error=function(e) { stop( "'DoparParam()' foreach() error occurred: ", conditionMessage(e) ) }) ## cleanup the tasks remove(list = ls(manager$tasks), envir = manager$tasks) manager$const.value <- NULL results }) setMethod( ".manager_send_all", "DoparParamManager", function(manager, value) { nworkers <- bpworkers(manager$BPPARAM) for (i in seq_len(nworkers)) { 
.manager_send(manager, value) } }) setMethod( ".manager_recv_all", "DoparParamManager", function(manager) .manager_recv(manager) ) setMethod( ".manager_capacity", "DoparParamManager", function(manager) { .Machine$integer.max }) ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Methods - evaluation ### setMethod("bpiterate", c("ANY", "ANY", "DoparParam"), function(ITER, FUN, ..., BPREDO = list(), BPPARAM=bpparam(), BPOPTIONS=bpoptions()) { stop("'bpiterate' not supported for DoparParam") }) BiocParallel/R/ErrorHandling.R0000644000175200017520000000750314516004410017207 0ustar00biocbuildbiocbuild### ========================================================================= ### Error handling ### ------------------------------------------------------------------------- .bpeltok <- function(x, type = bperrorTypes()) { !inherits(x, type) } bpok <- function(x, type = bperrorTypes()) { x <- bpresult(x) type <- match.arg(type) vapply(x, .bpeltok, logical(1), type) } .bpallok <- function(x, type = bperrorTypes(), attrOnly = FALSE) { if (attrOnly) is.null(.redo_env(x)) else is.null(.redo_env(x)) && all(bpok(x, type)) } bptry <- function(expr, ..., bplist_error, bperror) { if (missing(bplist_error)) bplist_error <- bpresult if (missing(bperror)) bperror <- identity tryCatch(expr, ..., bplist_error=bplist_error, bperror=bperror) } bpresult <- function(x) { if (is(x, "bplist_error")) x <- attr(x, "result") x } .error <- function(msg, class=NULL) { structure(list(message=msg), class = c(class, "bperror", "error", "condition")) } .error_remote <- function(x, call) { structure(x, class = c("remote_error", "bperror", "error", "condition"), traceback = capture.output(traceback(call))) } .error_unevaluated <- function() { structure(list(message="not evaluated due to previous error"), class=c("unevaluated_error", "bperror", "error", "condition")) } .error_not_available <- function(msg) { structure(list(message=msg), class=c("not_available_error", 
"bperror", "error", "condition")) } .error_worker_comm <- function(error, msg) { msg <- sprintf("%s:\n %s", msg, conditionMessage(error)) structure(list(message=msg, original_error_class=class(error)), class=c("worker_comm_error", "bperror", "error", "condition")) } bperrorTypes <- function() { subclasses <- paste0( c("remote", "unevaluated", "not_available", "worker_comm"), "_error" ) c("bperror", subclasses) } .error_bplist <- function(result) { if (is.null(attr(result, "errors"))) { errors <- result total_error <- sum(!bpok(errors)) remote_error <- !bpok(errors, "remote_error") | !bpok(errors, "worker_comm_error") remote_idx <- which(remote_error) if (length(remote_idx)) first_error <- errors[[remote_idx[1]]] else first_error <- "" } else { errors <- attr(result, "errors") total_error <- length(errors) remote_error <- !bpok(errors, "remote_error") | !bpok(errors, "worker_comm_error") first_error_idx <- which(remote_error)[1] if (!is.null(first_error_idx)) first_error <- errors[[first_error_idx]] else first_error <- "" remote_idx <- as.integer(names(errors[remote_error])) } n_remote_error <- length(remote_idx) n_other_error <- total_error - n_remote_error fmt = paste( "BiocParallel errors", "%d remote errors, element index: %s%s", "%d unevaluated and other errors", "first remote error:\n%s", sep = "\n " ) class(first_error) <- tail(class(first_error), 2L) first_error_msg <- as.character(first_error) message <- sprintf( fmt, n_remote_error, paste(head(remote_idx), collapse = ", "), ifelse(length(remote_idx) > 6, ", ...", ""), n_other_error, first_error_msg ) err <- structure( list(message=message), result=result, class = c("bplist_error", "bperror", "error", "condition")) } print.remote_error <- function(x, ...) { NextMethod(x) cat("traceback() available as 'attr(x, \"traceback\")'\n") } `print.bplist_error` <- function(x, ...) 
{ NextMethod(x) cat("results and errors available as 'bpresult(x)'\n") } BiocParallel/R/MulticoreParam-class.R0000644000175200017520000000555214516004410020502 0ustar00biocbuildbiocbuild### ========================================================================= ### MulticoreParam objects ### ------------------------------------------------------------------------- multicoreWorkers <- function() .snowCores("multicore") ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Constructor ### .MulticoreParam_prototype <- .SnowParam_prototype .MulticoreParam <- setRefClass("MulticoreParam", contains="SnowParam", fields=list(), methods=list() ) MulticoreParam <- function(workers=multicoreWorkers(), tasks=0L, stop.on.error=TRUE, progressbar=FALSE, RNGseed=NULL, timeout= WORKER_TIMEOUT, exportglobals=TRUE, log=FALSE, threshold="INFO", logdir=NA_character_, resultdir=NA_character_, jobname = "BPJOB", force.GC = FALSE, fallback = TRUE, manager.hostname=NA_character_, manager.port=NA_integer_, ...) 
{ if (.Platform$OS.type == "windows") { warning("MulticoreParam() not supported on Windows, use SnowParam()") workers = 1L } if (progressbar && missing(tasks)) tasks <- TASKS_MAXIMUM clusterargs <- c(list(spec=workers, type="FORK"), list(...)) manager.hostname <- if (is.na(manager.hostname)) { local <- (clusterargs$type == "FORK") || is.numeric(clusterargs$spec) manager.hostname <- .snowHost(local) } else as.character(manager.hostname) manager.port <- if (is.na(manager.port)) { .snowPort() } else as.integer(manager.port) if (!is.null(RNGseed)) RNGseed <- as.integer(RNGseed) prototype <- .prototype_update( .MulticoreParam_prototype, .clusterargs=clusterargs, cluster=.NULLcluster(), .controlled=TRUE, workers=as.integer(workers), tasks=as.integer(tasks), stop.on.error=stop.on.error, progressbar=progressbar, RNGseed=RNGseed, timeout=as.integer(timeout), exportglobals=exportglobals, exportvariables=FALSE, log=log, threshold=threshold, logdir=logdir, resultdir=resultdir, jobname=jobname, force.GC = force.GC, fallback = fallback, hostname=manager.hostname, port=manager.port, ... 
) param <- do.call(.MulticoreParam, prototype) bpworkers(param) <- workers # enforce worker number validObject(param) param } ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Methods - control ### setReplaceMethod("bpworkers", c("MulticoreParam", "numeric"), function(x, value) { value <- as.integer(value) nworkers <- .enforceWorkers(value, x$.clusterargs$type) x$workers <- x$.clusterargs$spec <- nworkers x }) setMethod("bpschedule", "MulticoreParam", function(x) { if (.Platform$OS.type == "windows") FALSE else TRUE }) BiocParallel/R/SerialParam-class.R0000644000175200017520000000663714516004410017763 0ustar00biocbuildbiocbuild### ========================================================================= ### SerialParam objects ### ------------------------------------------------------------------------- ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Constructor ### .SerialParam_prototype <- c( list( workers = 1L, backend = NULL ), .BiocParallelParam_prototype ) .SerialParam <- setRefClass( "SerialParam", fields=list(backend = "ANY"), contains="BiocParallelParam", ) SerialParam <- function(stop.on.error = TRUE, progressbar=FALSE, RNGseed = NULL, timeout = WORKER_TIMEOUT, log=FALSE, threshold="INFO", logdir=NA_character_, resultdir = NA_character_, jobname = "BPJOB", force.GC = FALSE) { if (!is.null(RNGseed)) RNGseed <- as.integer(RNGseed) if (progressbar) { tasks <- TASKS_MAXIMUM } else { tasks <- 0L } prototype <- .prototype_update( .SerialParam_prototype, tasks = tasks, stop.on.error=stop.on.error, progressbar=progressbar, RNGseed = RNGseed, timeout = as.integer(timeout), log=log, threshold=threshold, logdir=logdir, resultdir = resultdir, jobname = jobname, force.GC = force.GC, fallback = FALSE, exportglobals = FALSE, exportvariables = FALSE ) x <- do.call(.SerialParam, prototype) validObject(x) x } setAs("BiocParallelParam", "SerialParam", function(from) { SerialParam( stop.on.error = 
bpstopOnError(from), progressbar = bpprogressbar(from), RNGseed = bpRNGseed(from), timeout = bptimeout(from), log = bplog(from), threshold = bpthreshold(from), logdir = bplogdir(from), resultdir = bpresultdir(from), jobname = bpjobname(from), force.GC = bpforceGC(from) ) }) ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Methods - control ### setMethod( "bpbackend", "SerialParam", function(x) { x$backend }) setMethod( "bpstart", "SerialParam", function(x, ...) { x$backend <- .SerialBackend() x$backend$BPPARAM <- x .bpstart_impl(x) }) setMethod( "bpstop", "SerialParam", function(x) { x$backend <- NULL .bpstop_impl(x) }) setMethod( "bpisup", "SerialParam", function(x) { is.environment(bpbackend(x)) }) setReplaceMethod("bplog", c("SerialParam", "logical"), function(x, value) { x$log <- value validObject(x) x }) setReplaceMethod( "bpthreshold", c("SerialParam", "character"), function(x, value) { x$threshold <- value x }) ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Backend method ### .SerialBackend <- setClass("SerialBackend", contains = "environment") setMethod(".send_to", "SerialBackend", function(backend, node, value){ backend$value <- value TRUE }) setMethod( ".recv_any", "SerialBackend", function(backend) { on.exit(backend$value <- NULL) msg <- backend$value if (inherits(msg, "error")) stop(msg) if (msg$type == "EXEC") { value <- .bpworker_EXEC(msg, bplog(backend$BPPARAM)) list(node = 1L, value = value) } }) setMethod("length", "SerialBackend", function(x){ 1L }) BiocParallel/R/SnowParam-class.R0000644000175200017520000003032014516004410017454 0ustar00biocbuildbiocbuild### ========================================================================= ### SnowParam objects ### ------------------------------------------------------------------------- ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Helpers ### .snowHost <- function(local=TRUE) { host <- if (local) { 
"localhost" } else Sys.info()[["nodename"]] host <- Sys.getenv("MASTER", host) host <- getOption("bphost", host) host } .snowPort <- function() { port <- Sys.getenv("R_PARALLEL_PORT", NA_integer_) port <- Sys.getenv("PORT", port) port <- getOption("ports", port) if (identical(tolower(port), "random") || is.na(port)) { .rng_internal_stream$set() on.exit(.rng_internal_stream$reset()) portAvailable <- FALSE for (i in 1:5) { port <- as.integer( 11000 + 1000 * ((stats::runif(1L) + unclass(Sys.time()) / 300) %% 1L) ) tryCatch( { ## User should not be able to interrupt the port check ## Otherwise we might have an unclosed connection suspendInterrupts( { con <- serverSocket(port) close(con) } ) portAvailable <- TRUE }, error = function(e) { message("failed to open the port ", port,", trying a new port...") } ) if (portAvailable) break } if (!portAvailable) .stop("cannot find an open port. For manually specifying the port, see ?SnowParam") } else { port <- as.integer(port) } port } .snowCoresMax <- function(type) { if (type == "MPI") { .Machine$integer.max } else { 128L - nrow(showConnections(all=TRUE)) } } .snowCores <- function(type) { if (type == "multicore" && .Platform$OS.type == "windows") return(1L) min(.defaultWorkers(), .snowCoresMax(type)) } snowWorkers <- function(type = c("SOCK", "MPI", "FORK")) { type <- match.arg(type) min(.defaultWorkers(), .snowCores(type)) } ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Constructor ### setOldClass(c("NULLcluster", "cluster")) .NULLcluster <- function() { cl <- list() class(cl) <- c("NULLcluster", "cluster") cl } .SnowParam_prototype <- c( list( cluster = .NULLcluster(), .clusterargs = list(spec=0, type="SOCK"), .controlled = TRUE, hostname = NA_character_, port = NA_integer_ ), .BiocParallelParam_prototype ) .SnowParam <- setRefClass("SnowParam", contains="BiocParallelParam", fields=list( cluster = "cluster", .clusterargs = "list", .controlled = "logical", hostname = "character", port = 
"integer" ), methods=list( show = function() { callSuper() cat(" cluster type: ", .clusterargs$type, "\n", sep="") }) ) SnowParam <- function(workers=snowWorkers(type), type=c("SOCK", "MPI", "FORK"), tasks=0L, stop.on.error=TRUE, progressbar=FALSE, RNGseed=NULL, timeout=WORKER_TIMEOUT, exportglobals=TRUE, exportvariables=TRUE, log=FALSE, threshold="INFO", logdir=NA_character_, resultdir=NA_character_, jobname = "BPJOB", force.GC = FALSE, fallback = TRUE, manager.hostname=NA_character_, manager.port=NA_integer_, ...) { type <- tryCatch(match.arg(type), error=function(...) { stop("'type' must be one of ", paste(sQuote(formals("SnowParam")$type), collapse=", ")) }) if (type %in% c("MPI", "FORK") && is(workers, "character")) stop("'workers' must be integer(1) when 'type' is MPI or FORK") if (progressbar && missing(tasks)) tasks <- TASKS_MAXIMUM clusterargs <- c(list(spec=workers, type=type), list(...)) manager.hostname <- if (is.na(manager.hostname)) { local <- (clusterargs$type == "FORK") || is.numeric(clusterargs$spec) manager.hostname <- .snowHost(local) } else as.character(manager.hostname) manager.port <- if (is.na(manager.port)) { .snowPort() } else as.integer(manager.port) if (!is.null(RNGseed)) RNGseed <- as.integer(RNGseed) prototype <- .prototype_update( .SnowParam_prototype, .clusterargs=clusterargs, .controlled=TRUE, workers=workers, tasks=as.integer(tasks), stop.on.error=stop.on.error, progressbar=progressbar, RNGseed=RNGseed, timeout=as.integer(timeout), exportglobals=exportglobals, exportvariables=exportvariables, log=log, threshold=threshold, logdir=logdir, resultdir=resultdir, jobname=jobname, force.GC = force.GC, fallback = fallback, hostname=manager.hostname, port=manager.port, ... 
) param <- do.call(.SnowParam, prototype) bpworkers(param) <- workers # enforce worker number validObject(param) param } ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Validity ### setValidity("SnowParam", function(object) { msg <- NULL if (!.isTRUEorFALSE(.controlled(object))) msg <- c(msg, "'.controlled' must be TRUE or FALSE") if (.controlled(object)) { if (!all(bpworkers(object) == object$.clusterargs$spec)) msg <- c(msg, "'bpworkers(BPPARAM)' must equal BPPARAM$.clusterargs$spec") } if (is.null(msg)) TRUE else msg }) ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Getters / Setters ### .hostname <- function(x) x$hostname .port <- function(x) x$port setReplaceMethod("bpworkers", c("SnowParam", "numeric"), function(x, value) { value <- as.integer(value) value <- .enforceWorkers(value, x$.clusterargs$type) x$workers <- x$.clusterargs$spec <- value x }) setReplaceMethod("bpworkers", c("SnowParam", "character"), function(x, value) { nworkers <- .enforceWorkers(length(value), x$.clusterargs$type) x$workers <- x$.clusterargs$spec <- head(value, nworkers) x }) ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Methods - control ### .bpstart_makeCluster <- function(cargs) { ## set internal stream to avoid iterating global random number ## stream in `parallel::makeCluster()`. Use the internal stream so ## that the random number generator advances on each call. 
state <- .rng_internal_stream$set() on.exit(.rng_internal_stream$reset()) do.call(parallel::makeCluster, cargs) } setMethod("bpstart", "SnowParam", function(x, lenX = bpnworkers(x)) { if (!.controlled(x)) stop("'bpstart' not available; instance from outside BiocParallel?") if (bpisup(x)) stop("cluster already started") if (bpnworkers(x) == 0 && lenX <= 0) stop("cluster not started; no workers specified") nnodes <- min(bpnworkers(x), lenX) if (x$.clusterargs$type != "MPI" && (nnodes > 128L - nrow(showConnections(all=TRUE)))) stop("cannot create ", nnodes, " workers; ", 128L - nrow(showConnections(all=TRUE)), " connections available in this session") if (x$.clusterargs$type == "FORK") { ## FORK (useRscript not relevant) bpbackend(x) <- .bpfork(nnodes, .hostname(x), .port(x)) } else { ## SOCK, MPI cargs <- x$.clusterargs cargs$spec <- if (is.numeric(cargs$spec)) { nnodes } else cargs$spec[seq_len(nnodes)] ## work around devtools::load_all() ## ## 'inst' exists when using devtools::load_all() libPath <- find.package("BiocParallel") if (dir.exists(file.path(libPath, "inst"))) libPath <- file.path(libPath, "inst") if (is.null(cargs$snowlib)) cargs$snowlib <- libPath if (!is.null(cargs$useRscript) && !cargs$useRscript) cargs$scriptdir <- libPath if (x$.clusterargs$type == "SOCK") { cargs$master <- .hostname(x) cargs$port <- .port(x) } bpbackend(x) <- .bpstart_makeCluster(cargs) } .bpstart_impl(x) }) setMethod("bpstop", "SnowParam", function(x) { if (!.controlled(x)) { warning("'bpstop' not available; instance from outside BiocParallel?") return(invisible(x)) } if (!bpisup(x)) return(invisible(x)) x <- .bpstop_impl(x) cluster <- bpbackend(x) for (i in seq_along(cluster)) .close(cluster[[i]]) bpbackend(x) <- .NULLcluster() invisible(x) }) setMethod("bpisup", "SnowParam", function(x) { length(bpbackend(x)) != 0L }) setMethod("bpbackend", "SnowParam", function(x) { x$cluster }) setReplaceMethod("bpbackend", c("SnowParam", "cluster"), function(x, value) { x$cluster <- value x 
})

### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Getters / Setters
###

.controlled <- function(x)
{
    x$.controlled
}

setReplaceMethod("bplog", c("SnowParam", "logical"),
    function(x, value)
{
    x$log <- value
    x
})

setReplaceMethod("bpthreshold", c("SnowParam", "character"),
    function(x, value)
{
    x$threshold <- value
    x
})

### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Coercion methods for SOCK and MPI clusters
###

### parallel::SOCKcluster types
setOldClass(c("SOCKcluster", "cluster"))

stopCluster.SOCKcluster <- parallel:::stopCluster.default

setAs("SOCKcluster", "SnowParam",
    function(from)
{
    .clusterargs <- list(spec=length(from),
        type=sub("cluster$", "", class(from)[1L]))
    prototype <- .prototype_update(
        .SnowParam_prototype,
        .clusterargs = .clusterargs,
        cluster = from,
        .controlled = FALSE,
        workers = length(from)
    )
    do.call(.SnowParam, prototype)
})

### MPIcluster
setOldClass(c("spawnedMPIcluster", "MPIcluster", "cluster"))

setAs("spawnedMPIcluster", "SnowParam",
    function(from)
{
    .clusterargs <- list(spec=length(from),
        type=sub("cluster", "", class(from)[1L]))
    .SnowParam(.clusterargs=.clusterargs, cluster=from, .controlled=FALSE,
        workers=length(from))
})

### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### task dispatching interface
###

setOldClass(c("SOCK0node", "SOCKnode"))  # needed for method dispatch

.SOCKmanager <- setClass("SOCKmanager", contains = "TaskManager")

setMethod(
    ".manager", "SnowParam",
    function(BPPARAM)
{
    manager <- callNextMethod()
    manager$initialized <- rep(FALSE, manager$capacity)
    as(manager, "SOCKmanager")
})

setMethod(
    ".manager_send", "SOCKmanager",
    function(manager, value, ...)
{
    availability <- manager$availability
    ## there must be at least one available worker to send to
    stopifnot(length(availability) > 0L)
    ## send the job to the next available worker
    worker <- names(availability)[1]
    id <- as.integer(worker)
    ## Cache the static task data only when the snow worker was
    ## created by our package.
if (.controlled(manager$BPPARAM)) { if (manager$initialized[id]) value <- .task_dynamic(value) else manager$initialized[id] <- TRUE } .send_to(manager$backend, as.integer(worker), value) rm(list = worker, envir = availability) manager }) setMethod( ".manager_cleanup", "SOCKmanager", function(manager) { manager <- callNextMethod() manager$initialized <- rep(FALSE, manager$capacity) if (.controlled(manager$BPPARAM)) { value <- .EXEC(tag = NULL, .clean_task_static, args = NULL) .send_all(manager$backend, value) msg <- .recv_all(manager$backend) } manager }) ## The worker class of SnowParam setMethod( ".recv", "SOCKnode", function(worker) { msg <- callNextMethod() if (inherits(msg, "error")) return(msg) ## read/write the static value(if any) .load_task_static(msg) }) BiocParallel/R/SnowParam-utils.R0000644000175200017520000000602114516004410017510 0ustar00biocbuildbiocbuild.connect_timeout <- function() { timeout <- getOption("timeout") timeout_is_valid <- length(timeout) == 1L && !is.na(timeout) && timeout > 0L if (!timeout_is_valid) stop("'getOption(\"timeout\")' must be positive integer(1)") timeout } ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### snow::MPI ### bprunMPIworker <- function() { comm <- 1 intercomm <- 2 Rmpi::mpi.comm.get.parent(intercomm) Rmpi::mpi.intercomm.merge(intercomm,1,comm) Rmpi::mpi.comm.set.errhandler(comm) Rmpi::mpi.comm.disconnect(intercomm) .bpworker_impl(snow::makeMPImaster(comm)) Rmpi::mpi.comm.disconnect(comm) Rmpi::mpi.quit() } ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### parallel::FORK ### .bpfork <- function (nnodes, host, port) { nnodes <- as.integer(nnodes) if (is.na(nnodes) || nnodes < 1L) stop("'nnodes' must be >= 1") if (length(host) != 1L || is.na(host) || !is.character(host)) stop("'host' must be character(1)") if (length(port) != 1L || is.na(port) || !is.integer(port)) stop("'port' must be integer(1)") connect_timeout <- .connect_timeout() idle_timeout <- 
IDLE_TIMEOUT cl <- vector("list", nnodes) for (rank in seq_along(cl)) { .bpforkChild(host, port, rank, connect_timeout, idle_timeout) cl[[rank]] <- .bpforkConnect( host, port, rank, connect_timeout, idle_timeout ) } class(cl) <- c("SOCKcluster", "cluster") cl } .bpforkChild <- function(host, port, rank, connect_timeout, idle_timeout) { parallel::mcparallel({ con <- NULL suppressWarnings({ while (is.null(con)) { con <- tryCatch({ socketConnection( host, port, FALSE, TRUE, "a+b", timeout = connect_timeout ) }, error=function(e) {}) } socketTimeout(con, idle_timeout) }) node <- structure(list(con = con), class = "SOCK0node") .bpworker_impl(node) }, detached=TRUE) } .bpforkConnect <- function(host, port, rank, connect_timeout, idle_timeout) { idle_timeout <- IDLE_TIMEOUT con <- socketConnection( host, port, TRUE, TRUE, "a+b", timeout = connect_timeout ) socketTimeout(con, idle_timeout) structure(list(con = con, host = host, rank = rank), class = c("forknode", "SOCK0node")) } ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### EXEC command cache ### ## read/write the static value .load_task_static <- function(value) { static_data <- .task_const(value) if (is.null(static_data)) { static_data <- options("BIOCPARALLEL_SNOW_STATIC")[[1]] .task_remake(value, static_data) } else { options(BIOCPARALLEL_SNOW_STATIC = static_data) value } } .clean_task_static <- function() { options(BIOCPARALLEL_SNOW_STATIC = NULL) } BiocParallel/R/TransientMulticoreParam-class.R0000644000175200017520000000526314516004410022371 0ustar00biocbuildbiocbuild.TransientMulticoreParam <- setRefClass( "TransientMulticoreParam", contains = "MulticoreParam" ) TransientMulticoreParam <- function(param) { param <- as(param, "TransientMulticoreParam") bpstart(param) } .TRANSIENTMULTICOREPARAM_JOBNODE <- new.env(parent=emptyenv()) .TRANSIENTMULTICOREPARAM_RESULT <- new.env(parent=emptyenv()) setMethod( "bpstart", "TransientMulticoreParam", function(x, ...) 
{ parallel::mccollect(wait=TRUE) rm( list=ls(envir = .TRANSIENTMULTICOREPARAM_JOBNODE), envir = .TRANSIENTMULTICOREPARAM_JOBNODE ) rm( list = ls(envir = .TRANSIENTMULTICOREPARAM_RESULT), envir = .TRANSIENTMULTICOREPARAM_RESULT ) .bpstart_impl(x) }) setMethod( "bpstop", "TransientMulticoreParam", function(x) { .bpstop_impl(x) }) setMethod( "bpbackend", "TransientMulticoreParam", function(x) { x }) setMethod( "length", "TransientMulticoreParam", function(x) { bpnworkers(x) }) ## ## send / recv ## setMethod( ".recv_all", "TransientMulticoreParam", function(backend) { replicate(length(backend), .recv_any(backend), simplify=FALSE) }) setMethod( ".send_to", "TransientMulticoreParam", function(backend, node, value) { if (value$type == "EXEC") { job <- parallel::mcparallel(.bpworker_EXEC(value)) id <- as.character(job$pid) .TRANSIENTMULTICOREPARAM_JOBNODE[[id]] <- node } TRUE }) setMethod( ".recv_any", "TransientMulticoreParam", function(backend) { .BUFF <- .TRANSIENTMULTICOREPARAM_RESULT # alias tryCatch({ while (!length(.BUFF)) { result <- parallel::mccollect(wait = FALSE, timeout = 1) for (id in names(result)) .BUFF[[id]] <- result[[id]] } id <- head(names(.BUFF), 1L) value <- .BUFF[[id]] rm(list = id, envir = .BUFF) node <- .TRANSIENTMULTICOREPARAM_JOBNODE[[id]] rm(list = id, envir = .TRANSIENTMULTICOREPARAM_JOBNODE) list(node = node, value = value) }, error = function(e) { ## indicate error, but do not stop .error_worker_comm(e, "'.recv_any()' data failed") }) }) setMethod( ".send", "TransientMulticoreParam", function(worker, value) { stop("'.send,TransientMulticoreParam-method' not implemented") }) setMethod( ".recv", "TransientMulticoreParam", function(worker) { stop("'.recv,TransientMulticoreParam-method' not implemented") }) setMethod( ".close", "TransientMulticoreParam", function(worker) { stop("'.close,TransientMulticoreParam-method' not implemented") }) setMethod(".manager", "TransientMulticoreParam", .manager_ANY) 
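## TransientMulticoreParam is an internal fallback: .bpinit() coerces an
## unstarted MulticoreParam to it, so each task is forked on demand with
## parallel::mcparallel() and collected via parallel::mccollect(). A minimal
## sketch of that round trip, assuming a Unix-alike (forking is unavailable
## on Windows, where MulticoreParam falls back to a single serial worker):

```r
library(BiocParallel)

## An unstarted MulticoreParam; bplapply() transparently swaps in a
## TransientMulticoreParam, forking one child per task.
p <- MulticoreParam(workers = 2)
res <- bplapply(1:4, function(i) i^2, BPPARAM = p)
unlist(res)  ## 1 4 9 16
```

The swap happens only because the param was never bpstart()ed; a started
MulticoreParam keeps its persistent workers.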
BiocParallel/R/bpaggregate-methods.R0000644000175200017520000000642114516004410020360 0ustar00biocbuildbiocbuild
### =========================================================================
### bpaggregate methods
### -------------------------------------------------------------------------

## All params use bpaggregate,data.frame,BiocParallelParam.
## bpaggregate() dispatches to bplapply() where errors and
## logging are handled.

setMethod("bpaggregate", c("ANY", "missing"),
    function(x, ..., BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions())
{
    bpaggregate(x, ..., BPREDO=BPREDO, BPPARAM=BPPARAM, BPOPTIONS = BPOPTIONS)
})

setMethod("bpaggregate", c("matrix", "BiocParallelParam"),
    function(x, by, FUN, ..., simplify=TRUE, BPREDO=list(), BPPARAM=bpparam(),
        BPOPTIONS = bpoptions())
{
    if (!is.data.frame(x))
        x <- as.data.frame(x)
    bpaggregate(x, by, FUN, ..., simplify=simplify, BPREDO=BPREDO,
        BPPARAM=BPPARAM, BPOPTIONS = BPOPTIONS)
})

setMethod("bpaggregate", c("data.frame", "BiocParallelParam"),
    function(x, by, FUN, ..., simplify=TRUE, BPREDO=list(), BPPARAM=bpparam(),
        BPOPTIONS = bpoptions())
{
    FUN <- match.fun(FUN)
    if (!is.data.frame(x))
        x <- as.data.frame(x)
    if (!is.list(by))
        stop("'by' must be a list")
    by <- lapply(by, as.factor)

    wrapper <- function(.ind, .x, .AGGRFUN, ..., .simplify) {
        sapply(.x[.ind,, drop=FALSE], .AGGRFUN, ..., simplify=.simplify)
    }

    ind <- Filter(length, split(seq_len(nrow(x)), by))
    grp <- rep(seq_along(ind), lengths(ind))
    grp <- grp[match(seq_len(nrow(x)), unlist(ind))]

    res <- bplapply(ind, wrapper, .x=x, .AGGRFUN=FUN, .simplify=simplify,
        BPREDO=BPREDO, BPPARAM=BPPARAM, BPOPTIONS = BPOPTIONS)
    res <- do.call(rbind, lapply(res, rbind))

    if (is.null(names(by)) && length(by)) {
        names(by) <- sprintf("Group.%i", seq_along(by))
    } else {
        ind <- which(!nzchar(names(by)))
        names(by)[ind] <- sprintf("Group.%i", ind)
    }

    tab <- as.data.frame(lapply(by, as.character), stringsAsFactors=FALSE)
    tab <- tab[match(sort(unique(grp)), grp),, drop=FALSE]
    rownames(tab) <- rownames(res) <- NULL
    tab <- cbind(tab, res)
    names(tab) <- c(names(by), names(x))
    tab
})

setMethod("bpaggregate", c("formula", "BiocParallelParam"),
    function(x, data, FUN, ..., BPREDO=list(), BPPARAM=bpparam(),
        BPOPTIONS = bpoptions())
{
    if (length(x) != 3L)
        stop("Formula 'x' must have both left and right hand sides")
    m <- match.call(expand.dots=FALSE)
    if (is.matrix(eval(m$data, parent.frame())))
        m$data <- as.data.frame(data)
    m$... <- m$FUN <- m$BPPARAM <- m$BPREDO <- m$BPOPTIONS <- NULL
    m[[1L]] <- quote(stats::model.frame)
    names(m)[[2]] <- "formula"
    if (x[[2L]] == ".") {
        rhs <- as.list(attr(terms(x[-2L]), "variables")[-1])
        lhs <- as.call(c(quote(cbind),
            setdiff(lapply(names(data), as.name), rhs)))
        x[[2L]] <- lhs
        m[[2L]] <- x
    }
    mf <- eval(m, parent.frame())
    if (is.matrix(mf[[1L]])) {
        lhs <- as.data.frame(mf[[1L]])
        bpaggregate(lhs, mf[-1L], FUN=FUN, ..., BPREDO=BPREDO,
            BPPARAM=BPPARAM, BPOPTIONS = BPOPTIONS)
    } else
        bpaggregate(mf[1L], mf[-1L], FUN=FUN, ..., BPREDO=BPREDO,
            BPPARAM=BPPARAM, BPOPTIONS = BPOPTIONS)
})

BiocParallel/R/bpbackend-methods.R0000644000175200017520000000035714516004410020023 0ustar00biocbuildbiocbuild
setMethod("bpbackend", "missing",
    function(x)
{
    x <- registered()[[1]]
    bpbackend(x)
})

setReplaceMethod("bpbackend", c("missing", "ANY"),
    function(x, value)
{
    x <- registered()[[1]]
    bpbackend(x) <- value
    x
})

BiocParallel/R/bpinit.R0000644000175200017520000000376514516004410015744 0ustar00biocbuildbiocbuild
.bpinit <- function(manager, BPPARAM, BPOPTIONS, ...)
{
    ## temporarily change the parameters in BPPARAM
    oldOptions <- .bpparamOptions(BPPARAM, names(BPOPTIONS))
    on.exit(.bpparamOptions(BPPARAM) <- oldOptions, TRUE, FALSE)
    .bpparamOptions(BPPARAM) <- BPOPTIONS

    ## fallback conditions (all must be satisfied):
    ## 1. BPPARAM has not been started
    ## 2. fallback is allowed (bpfallback(x) == TRUE)
    ## 3.
One of the following conditions is met: ## 3.1 the worker number is less than or equal to 1 ## 3.2 Parallel evaluation is disallowed (bpschedule(BPPARAM) == FALSE) ## 3.3 BPPARAM is of MulticoreParam class if (!bpisup(BPPARAM) && bpfallback(BPPARAM)) { ## use cases: ## bpnworkers: no worker available, or no benefit in parallel evaluation ## bpschedule: in nested parallel call where the same ## BPPARAM cannot be reused if (bpnworkers(BPPARAM) <= 1L || !bpschedule(BPPARAM)) { oldParam <- BPPARAM BPPARAM <- as(BPPARAM, "SerialParam") on.exit({ .RNGstream(oldParam) <- .RNGstream(BPPARAM) }, TRUE, FALSE) # add = TRUE, last = FALSE --> last in, # first out order } else if (is(BPPARAM, "MulticoreParam")) { ## use TransientMulticoreParam when MulticoreParam has not ## started oldParam <- BPPARAM BPPARAM <- TransientMulticoreParam(BPPARAM) on.exit({ .RNGstream(oldParam) <- .RNGstream(BPPARAM) }, TRUE, FALSE) } } ## start the BPPARAM if haven't if (!bpisup(BPPARAM)) { ## start / stop cluster BPPARAM <- bpstart(BPPARAM) on.exit(bpstop(BPPARAM), TRUE, FALSE) } ## iteration res <- bploop( manager, # dispatch BPPARAM = BPPARAM, BPOPTIONS = BPOPTIONS, ... ) if (!.bpallok(res, attrOnly = TRUE)) stop(.error_bplist(res)) res } BiocParallel/R/bpisup-methods.R0000644000175200017520000000021514516004410017405 0ustar00biocbuildbiocbuildsetMethod("bpisup", "ANY", function(x) FALSE) setMethod("bpisup", "missing", function(x) { x <- registered()[[1]] bpisup(x) }) BiocParallel/R/bpiterate-methods.R0000644000175200017520000000340114516004410020062 0ustar00biocbuildbiocbuildbpiterateAlong <- function(X) { n <- length(X) i <- 0L function() { if (i >= n) NULL else { i <<- i + 1L X[[i]] } } } ### ========================================================================= ### bpiterate methods ### ------------------------------------------------------------------------- ## All params have dedicated bpiterate() methods. 
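## bpiterateAlong() adapts a vector or list to the ITER protocol used by
## bpiterate(): each call returns the next element, then NULL once exhausted.
## A small illustration (SerialParam() avoids spinning up workers; this
## assumes bpiterateAlong() is exported, as in recent releases):

```r
library(BiocParallel)

it <- bpiterateAlong(list(1:3, 4:6))
it()  ## 1 2 3
it()  ## 4 5 6
it()  ## NULL -- iteration exhausted

## bpiterate() consumes such an iterator, optionally reducing on the fly
res <- bpiterate(bpiterateAlong(list(1:3, 4:6)), sum,
                 REDUCE = `+`, BPPARAM = SerialParam())
res  ## 21
```

With REDUCE supplied, results are combined as they arrive (6 + 15 = 21)
instead of being accumulated in a growing list.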
setMethod("bpiterate", c("ANY", "ANY", "missing"), function(ITER, FUN, ..., BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS=bpoptions()) { ITER <- tryCatch({ match.fun(ITER) }, error = function(e) { bpiterateAlong(ITER) }) FUN <- match.fun(FUN) bpiterate(ITER, FUN, ..., BPREDO = BPREDO, BPPARAM=BPPARAM, BPOPTIONS = BPOPTIONS) }) ## TODO: support BPREDO .bpiterate_impl <- function(ITER, FUN, ..., REDUCE, init, reduce.in.order = FALSE, BPREDO = list(), BPPARAM = bpparam(), BPOPTIONS=bpoptions()) { ## Required API ## ## - BiocParallelParam() ## - bpschedule(), bpisup(), bpstart(), bpstop() ## - .sendto, .recvfrom, .recv, .close ITER <- tryCatch({ match.fun(ITER) }, error = function(e) { bpiterateAlong(ITER) }) FUN <- match.fun(FUN) if (missing(REDUCE)) { if (!missing(init)) stop("REDUCE must be provided when 'init' is given") } ARGS <- list(...) manager <- structure(list(), class="iterate") # dispatch .bpinit( manager = manager, ITER = ITER, FUN = FUN, ARGS = ARGS, BPPARAM = BPPARAM, BPOPTIONS = BPOPTIONS, BPREDO = BPREDO, init = init, REDUCE = REDUCE, reduce.in.order = reduce.in.order ) } BiocParallel/R/bplapply-methods.R0000644000175200017520000000364414516004410017737 0ustar00biocbuildbiocbuild### ========================================================================= ### bplapply methods ### ------------------------------------------------------------------------- ## All params have dedicated bplapply methods. 
setMethod("bplapply", c("ANY", "missing"), function(X, FUN, ..., BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) { FUN <- match.fun(FUN) bplapply(X, FUN, ..., BPREDO=BPREDO, BPPARAM=BPPARAM, BPOPTIONS = BPOPTIONS) }) setMethod("bplapply", c("ANY", "list"), function(X, FUN, ..., BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) { FUN <- match.fun(FUN) if (!all(vapply(BPPARAM, inherits, logical(1), "BiocParallelParam"))) stop("All elements in 'BPPARAM' must be BiocParallelParam objects") if (length(BPPARAM) == 0L) stop("'length(BPPARAM)' must be > 0") myFUN <- if (length(BPPARAM) > 1L) { if (length(param <- BPPARAM[-1]) == 1L) function(...) FUN(..., BPPARAM=param[[1]]) else function(...) FUN(..., BPPARAM=param) } else FUN bplapply(X, myFUN, ..., BPREDO=BPREDO, BPPARAM=BPPARAM[[1]], BPOPTIONS = BPOPTIONS) }) .bplapply_impl <- function(X, FUN, ..., BPREDO = list(), BPPARAM = bpparam(), BPOPTIONS = bpoptions()) { ## abstract 'common' implementation using accessors only ## ## Required API: ## ## - BiocParallelParam() ## - bpschedule(), bpisup(), bpstart(), bpstop() ## - .send_to, .recv_any, .send, .recv, .close FUN <- match.fun(FUN) BPREDO <- bpresult(BPREDO) if (!length(X)) return(.rename(list(), X)) ARGS <- list(...) 
manager <- structure(list(), class="lapply") # dispatch .bpinit( manager = manager, X = X, FUN = FUN, ARGS = ARGS, BPPARAM = BPPARAM, BPOPTIONS = BPOPTIONS, BPREDO = BPREDO ) } BiocParallel/R/bploop.R0000644000175200017520000003155514516004410015750 0ustar00biocbuildbiocbuild### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Manager loop used by SOCK, MPI and FORK ## collect the results from the workers .collect_result <- function(manager, reducer, progress, BPPARAM) { data_list <- .manager_recv(manager) success <- rep(TRUE, length(data_list)) for(i in seq_along(data_list)){ ## each result is a list containing the element value passed ## in `.send` and possibly other elements used by the backend d <- data_list[[i]] value <- d$value$value njob <- d$value$tag ## reduce .reducer_add(reducer, njob, value) .manager_log(BPPARAM, njob, d) .manager_result_save(BPPARAM, njob, reducer$value()) ## progress progress$step(length(value)) ## whether the result is ok, or to treat the failure as success success[i] <- !bpstopOnError(BPPARAM) || d$value$success } success } ## These functions are used by all cluster types (SOCK, MPI, FORK) and ## run on the master. Both enable logging, writing logs/results to ## files and 'stop on error'. 
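## The logging and result-saving hooks below (.manager_log(),
## .manager_result_save()) are driven entirely by fields of the param. A
## sketch, assuming per-task logs written under tempdir():

```r
library(BiocParallel)

p <- SnowParam(2, log = TRUE, threshold = "INFO", logdir = tempdir())
res <- bplapply(1:2, function(i) { message("processing ", i); i },
                BPPARAM = p)
## .manager_log() names files "<jobname>.task<n>.log", so expect
## "BPJOB.task1.log" and "BPJOB.task2.log" here
dir(tempdir(), pattern = "^BPJOB.*log$")
```

With logdir unset (NA), the same output is instead echoed on the manager via
message().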
.clear_cluster <- function(manager, running, reducer, progress, BPPARAM) { tryCatch({ setTimeLimit(30, 30, TRUE) on.exit(setTimeLimit(Inf, Inf, FALSE)) while (running) { success <- .collect_result(manager, reducer, progress, BPPARAM) running <- running - length(success) } }, error=function(e) { message("Stop worker failed with the error: ", conditionMessage(e)) }) reducer } .manager_log <- function(BPPARAM, njob, d) { if (bplog(BPPARAM)) { con <- NULL if (!is.na(bplogdir(BPPARAM))) { fname <- paste0(bpjobname(BPPARAM), ".task", njob, ".log") lfile <- file.path(bplogdir(BPPARAM), fname) con <- file(lfile, open="a") on.exit(close(con)) } .bpwriteLog(con, d) } else if (length(d$value$sout)) { message(paste(d$value$sout, collapse="\n")) } } .manager_result_save <- function(BPPARAM, njob, value) { if (is.na(bpresultdir(BPPARAM))) return(NULL) fname <- paste0(bpjobname(BPPARAM), ".task", njob, ".Rda") rfile <- file.path(bpresultdir(BPPARAM), fname) save(value, file=rfile) } ## A dummy iterator for bploop.lapply .bploop_lapply_iter <- function(X, redo_index, elements_per_task) { redo_n <- length(redo_index) redo_i <- 1L x_n <- length(X) x_i <- 1L function() { if (redo_i <= redo_n && x_i <= x_n) { redo <- redo_index[redo_i] == x_i if (redo) { ## Maximize `len` such that ## - 1. all elements in X[x_i:(x_i + len)] should be redone ## - 2. 
the number of elements in the task must be ## limited by `elements_per_task` len <- 1L while (redo_i + len <= redo_n && redo_index[redo_i + len] == x_i + len && len < elements_per_task) { len <- len + 1L } redo_i <<- redo_i + len value <- X[seq.int(x_i, length.out = len)] } else { len <- redo_index[redo_i] - x_i value <- .bploop_rng_iter(len) } x_i <<- x_i + len ## Do not return the last seed iterator ## if no more redo element if (x_i > x_n && !redo) { list(NULL) } else { value } } else { list(NULL) } } } ## An iterator for bpiterate to handle BPREDO .bploop_iterate_iter <- function(ITER, reducer) { errors <- sort(.redo_index_iterate(reducer)) len <- reducer$total if(is.null(len)) len <- 0L i <- 0L function(){ if (i < len) { i <<- i + 1L value <- ITER() if (i%in%errors) list(value) else .bploop_rng_iter(1L) } else { list(ITER()) } } } ## This class object can force bploop.iterator to iterate ## the seed stream n times .bploop_rng_iter <- function(n) { structure(as.integer(n), class = c(".bploop_rng_iter")) } ## Accessor for the elements in the BPREDO argument ## Return NULL if not exists .redo_env <- function(x) { attr(x, "REDOENV") } .redo_reducer <- function(x) { .redo_env(x)$reducer } .redo_seed <- function(x) { .redo_env(x)$rng_seed } `.redo_env<-` <- function(x, value) { attr(x, "REDOENV") <- value x } `.redo_reducer<-` <- function(x, value) { .redo_env(x)$reducer <- value x } `.redo_seed<-` <- function(x, value) { .redo_env(x)$rng_seed <- value x } ## The core bploop implementation ## Arguments ## - ITER: Return a list where each list element will be passed to FUN ## 1. if nothing to proceed, it should return list(NULL) ## 2. 
if the task is to iterate the seed stream only, it should return ## an object from .bploop_rng_iter() ## - FUN: A function that will be evaluated in the worker ## - ARGS: the arguments to FUN .bploop_impl <- function(ITER, FUN, ARGS, BPPARAM, BPREDO, BPOPTIONS, reducer, progress.length) { manager <- .manager(BPPARAM) on.exit(.manager_cleanup(manager), add = TRUE) ## worker options OPTIONS <- .workerOptions( log = bplog(BPPARAM), threshold = bpthreshold(BPPARAM), stop.on.error = bpstopOnError(BPPARAM), timeout = bptimeout(BPPARAM), exportglobals = bpexportglobals(BPPARAM), force.GC = bpforceGC(BPPARAM) ) ## prepare the seed stream for the worker init_seed <- .redo_seed(BPREDO) if (is.null(init_seed)) { seed <- .RNGstream(BPPARAM) on.exit(.RNGstream(BPPARAM) <- seed, add = TRUE) init_seed <- seed } else { seed <- init_seed } ## Progress bar progress <- .progress( active=bpprogressbar(BPPARAM), iterate=missing(progress.length) ) on.exit(progress$term(), add = TRUE) progress$init(progress.length) ## detect auto export variables and packages globalVarNames <- as.character(BPOPTIONS$exports) packages <- as.character(BPOPTIONS$packages) if (bpexportvariables(BPPARAM)) { exports <- .findVariables(FUN) globalVarNames <- c(globalVarNames, exports$globalvars) packages <- c(packages, exports$pkgs) } globalVars <- lapply(globalVarNames, get, envir = .GlobalEnv) names(globalVars) <- globalVarNames ## The data that will be sent to the worker ARGFUN <- function(X, seed) list( X=X , FUN=FUN , ARGS = ARGS, OPTIONS = OPTIONS, BPRNGSEED = seed, GLOBALS = globalVars, PACKAGES = packages ) static.args <- c("FUN", "ARGS", "OPTIONS", "GLOBALS") total <- 0L running <- 0L value <- NULL ## keep the loop when there exists more ITER value or running tasks while (!identical(value, list(NULL)) || running) { ## send tasks to the workers while (running < .manager_capacity(manager)) { value <- ITER() ## If the value is of the class .bploop_rng_iter, we merely iterate ## the seed stream `value` 
times and obtain the next value. if (inherits(value, ".bploop_rng_iter")) { seed <- .rng_iterate_substream(seed, value) next } if (identical(value, list(NULL))) { if (total == 0L) warning("first invocation of 'ITER()' returned NULL") break } args <- ARGFUN(value, seed) task <- .EXEC( total + 1L, .workerLapply, args = args, static.fun = TRUE, static.args = static.args ) .manager_send(manager, task) seed <- .rng_iterate_substream(seed, length(value)) total <- total + 1L running <- running + 1L } .manager_flush(manager) ## If the cluster does not have any worker, waiting for the worker if (!running) next ## collect results from the workers success <- .collect_result(manager, reducer, progress, BPPARAM) running <- running - length(success) ## stop on error; Let running jobs finish and break if (!all(success)) { reducer <- .clear_cluster( manager, running, reducer, progress, BPPARAM ) break } } ## return results if (!is.na(bpresultdir(BPPARAM))) return(NULL) res <- .reducer_value(reducer) ## Attach the redo information when the error occurs if(!.reducer_ok(reducer) || !.reducer_complete(reducer)) { .redo_env(res) <- new.env(parent = emptyenv()) .redo_reducer(res) <- reducer .redo_seed(res) <- init_seed } res } ## ## bploop.lapply(): derived from snow::dynamicClusterApply. ## bploop <- function(manager, ...) { UseMethod("bploop") } ## X: the loop value after division ## ARGS: The function arguments for `FUN` bploop.lapply <- function(manager, X, FUN, ARGS, BPPARAM, BPOPTIONS = bpoptions(), BPREDO = list(), ...) { ## which need to be redone? redo_index <- .redo_index(X, BPREDO) ## How many elements in a task? 
ntask <- .ntask(X, bpnworkers(BPPARAM), bptasks(BPPARAM)) elements_per_task <- ceiling(length(redo_index)/ntask) ITER <- .bploop_lapply_iter(X, redo_index, elements_per_task) ntotal <- length(X) reducer <- .lapplyReducer(ntotal, reducer = .redo_reducer(BPREDO)) res <- .bploop_impl( ITER = ITER, FUN = FUN, ARGS = ARGS, BPPARAM = BPPARAM, BPOPTIONS = BPOPTIONS, BPREDO = BPREDO, reducer = reducer, progress.length = length(redo_index) ) if (!is.null(res)) names(res) <- names(X) res } ## ## bploop.iterate(): ## ## Derived from snow::dynamicClusterApply and parallel::mclapply. ## ## - length of 'X' is unknown (defined by ITER()) ## - results not pre-allocated; list grows each iteration if no REDUCE bploop.iterate <- function( manager, ITER, FUN, ARGS, BPPARAM, BPOPTIONS = bpoptions(), REDUCE, BPREDO, init, reduce.in.order, ... ) { ITER_ <- .bploop_iterate_iter(ITER, reducer = .redo_reducer(BPREDO)) reducer <- .iterateReducer(REDUCE, init, reduce.in.order, reducer = .redo_reducer(BPREDO)) .bploop_impl( ITER = ITER_, FUN = FUN, ARGS = ARGS, BPPARAM = BPPARAM, BPOPTIONS = BPOPTIONS, BPREDO = BPREDO, reducer = reducer ) } bploop.iterate_batchtools <- function(manager, ITER, FUN, BPPARAM, REDUCE, init, reduce.in.order, ...) { ## get number of workers workers <- bpnworkers(BPPARAM) ## reduce in order reducer <- .iterateReducer(REDUCE, init, reduce.in.order, NULL) ## progress bar. 
progress <- .progress(active=bpprogressbar(BPPARAM), iterate=TRUE) on.exit(progress$term(), TRUE) progress$init() def.id <- job.id <- 1L repeat{ value <- ITER() if (is.null(value)) { if (job.id == 1L) warning("first invocation of 'ITER()' returned NULL") break } ## save 'value' to registry tempfile fl <- tempfile(tmpdir = BPPARAM$registry$file.dir) saveRDS(value, fl) if (job.id == 1L) { suppressMessages({ ids <- batchtools::batchMap( fun = FUN, fl, more.args = list(...), reg = BPPARAM$registry ) }) } else { job.pars <- list(fl) BPPARAM$registry$defs <- rbind(BPPARAM$registry$defs, list(def.id, list(job.pars))) entry <- c(list(job.id, def.id), rep(NA, 10)) BPPARAM$registry$status <- rbind(BPPARAM$registry$status, entry) } def.id <- def.id + 1L job.id <- job.id + 1L } ## finish updating tables ids <- data.table::data.table(job.id = seq_len(job.id - 1)) data.table::setkey(BPPARAM$registry$status, "job.id") ids$chunk = batchtools::chunk(ids$job.id, n.chunks = workers) ## submit and wait for jobs batchtools::submitJobs( ids = ids, resources = .bpresources(BPPARAM), reg = BPPARAM$registry ) batchtools::waitForJobs( ids = BPPARAM$registry$status$job.id, reg = BPPARAM$registry, timeout = .batch_bptimeout(BPPARAM), stop.on.error = bpstopOnError(BPPARAM) ) ## reduce in order for (job.id in ids$job.id) { value <- batchtools::loadResult(id = job.id, reg=BPPARAM$registry) .reducer_add(reducer, job.id, list(value)) } ## return reducer value .reducer_value(reducer) } BiocParallel/R/bpmapply-methods.R0000644000175200017520000000662414516004410017741 0ustar00biocbuildbiocbuild### ========================================================================= ### bpmapply methods ### ------------------------------------------------------------------------- # see test_utilities.R:test_transposeArgsWithIterations() for all # USE.NAMES corner cases .transposeArgsWithIterations <- function(nestedList, USE.NAMES) { num_arguments <- length(nestedList) if (num_arguments == 0L) { return(list()) } 
## nestedList[[1L]] has the values for the first argument in all ## iterations num_iterations <- length(nestedList[[1L]]) ## count the iterations, and name them if needed iterations <- seq_len(num_iterations) if (USE.NAMES) { first_arg <- nestedList[[1L]] if (is.character(first_arg) && is.null(names(first_arg))) { names(iterations) <- first_arg } else { names(iterations) <- names(first_arg) } } ## argnames: argnames <- names(nestedList) ## on iteration `i` we get the i-th element from each list. Note ## that .getDotsForMapply() has taken care already of ensuring ## that nestedList elements are recycled properly lapply(iterations, function(i) { x <- lapply(nestedList, function(argi) { unname(argi[i]) }) names(x) <- argnames x }) } ## bpmapply() dispatches to bplapply() where errors and logging are handled. setMethod("bpmapply", c("ANY", "BiocParallelParam"), function(FUN, ..., MoreArgs=NULL, SIMPLIFY=TRUE, USE.NAMES=TRUE, BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) { ## re-package for lapply ddd <- .getDotsForMapply(...) 
FUN <- match.fun(FUN) if (!length(ddd)) return(list()) ddd <- .transposeArgsWithIterations(ddd, USE.NAMES) if (!length(ddd)) return(ddd) .wrapMapplyNotShared <- local({ function(dots, .FUN, .MoreArgs) { .mapply(.FUN, dots, .MoreArgs)[[1L]] } }, envir = baseenv()) res <- bplapply( X=ddd, .wrapMapplyNotShared, .FUN=FUN, .MoreArgs=MoreArgs, BPREDO=BPREDO, BPPARAM=BPPARAM, BPOPTIONS = BPOPTIONS ) .simplify(res, SIMPLIFY) }) setMethod("bpmapply", c("ANY", "missing"), function(FUN, ..., MoreArgs=NULL, SIMPLIFY=TRUE, USE.NAMES=TRUE, BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) { FUN <- match.fun(FUN) bpmapply(FUN, ..., MoreArgs=MoreArgs, SIMPLIFY=SIMPLIFY, USE.NAMES=USE.NAMES, BPREDO=BPREDO, BPPARAM=BPPARAM, BPOPTIONS = BPOPTIONS) }) setMethod("bpmapply", c("ANY", "list"), function(FUN, ..., MoreArgs=NULL, SIMPLIFY=TRUE, USE.NAMES=TRUE, BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) { FUN <- match.fun(FUN) if (!all(vapply(BPPARAM, inherits, logical(1), "BiocParallelParam"))) stop("All elements in 'BPPARAM' must be BiocParallelParam objects") if (length(BPPARAM) == 0L) stop("'length(BPPARAM)' must be > 0") myFUN <- if (length(BPPARAM) > 1L) { if (length(param <- BPPARAM[-1]) == 1L) function(...) FUN(..., BPPARAM=param[[1]]) else function(...) 
FUN(..., BPPARAM=param) } else FUN bpmapply(myFUN, ..., MoreArgs=MoreArgs, SIMPLIFY=SIMPLIFY, USE.NAMES=USE.NAMES, BPREDO=BPREDO, BPPARAM=BPPARAM[[1L]], BPOPTIONS = BPOPTIONS) }) BiocParallel/R/bpoptions.R0000644000175200017520000000752114516004410016466 0ustar00biocbuildbiocbuild.optionRegistry <- setRefClass(".BiocParallelOptionsRegistry", fields=list( options = "list"), methods=list( register = function(optionName, genericName) { if (!is.null(.self$options[[optionName]])) message("Replacing the function `", optionName, "` from the option registry") .self$options[[optionName]] <- genericName invisible(registered()) }, registered = function() { .self$options }) )$new() # Singleton ## Functions to register the S4generic for BPPARAM .registeredOptions <- function() { .optionRegistry$registered() } .registerOption <- function(optionName, genericName) { getter <- getGeneric(genericName) setter <- getGeneric(paste0(genericName, "<-")) if (is.null(getter)) stop("The S4 function '", genericName, "' is not found") if (is.null(setter)) stop("The S4 replacement function '", genericName, "' is not found") .optionRegistry$register(optionName, genericName) } .registerOption("workers", "bpworkers") .registerOption("tasks", "bptasks") .registerOption("jobname", "bpjobname") .registerOption("log", "bplog") .registerOption("logdir", "bplogdir") .registerOption("threshold", "bpthreshold") .registerOption("resultdir", "bpresultdir") .registerOption("stop.on.error", "bpstopOnError") .registerOption("timeout", "bptimeout") .registerOption("exportglobals", "bpexportglobals") .registerOption("exportvariables", "bpexportvariables") .registerOption("progressbar", "bpprogressbar") .registerOption("RNGseed", "bpRNGseed") .registerOption("force.GC", "bpforceGC") .registerOption("fallback", "bpfallback") ## functions for changing the paramters in BPPARAM .bpparamOptions <- function(BPPARAM, optionNames) { registeredOptions <- .registeredOptions() ## find the common parameters both BPPARAM and 
BPOPTIONS paramOptions <- intersect(names(registeredOptions), optionNames) getterNames <- unlist(registeredOptions[paramOptions]) setNames(lapply( getterNames, do.call, args = list(BPPARAM) ), paramOptions) } ## value: BPOPTIONS `.bpparamOptions<-` <- function(BPPARAM, value) { BPOPTIONS <- value registeredOptions <- .registeredOptions() optionNames <- names(BPOPTIONS) paramOptions <- intersect(names(registeredOptions), optionNames) setterNames <- paste0(unlist(registeredOptions[paramOptions]), "<-") for (i in seq_along(paramOptions)) { paramOption <- paramOptions[i] setterName <- setterNames[i] do.call( setterName, args = list(BPPARAM, BPOPTIONS[[paramOption]]) ) } BPPARAM } ## Check any possible issues in bpoptions .validateBpoptions <- function(BPOPTIONS) { bpoptionsArgs <- names(formals(bpoptions)) registeredOptions <- names(.registeredOptions()) allOptions <- c(bpoptionsArgs, registeredOptions) idx <- which(!names(BPOPTIONS) %in% allOptions) if (length(idx)) message( "unregistered options found in bpoptions:\n", " ", paste0(names(BPOPTIONS)[idx], collapse = ", ") ) } ## The function simply return a list of its arguments bpoptions <- function( workers, tasks, jobname, log, logdir, threshold, resultdir, stop.on.error, timeout, exportglobals, exportvariables, progressbar, RNGseed, force.GC, fallback, exports, packages, ...) { dotsArgs <- list(...) 
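    ## Example (illustration only): bpoptions(workers = 2, tasks = 0L)
    ## captures only the arguments actually supplied, returning
    ## list(workers = 2, tasks = 0L); option names not known to the
    ## registry are kept but reported by .validateBpoptions().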
passed <- names(as.list(match.call())[-1]) passed <- setdiff(passed, names(dotsArgs)) if (length(passed)) passedArgs <- setNames(mget(passed), passed) else passedArgs <- NULL opts <- c(passedArgs, dotsArgs) .validateBpoptions(opts) opts } BiocParallel/R/bpschedule-methods.R0000644000175200017520000000023014516004410020216 0ustar00biocbuildbiocbuildsetMethod("bpschedule", "ANY", function(x) TRUE) setMethod("bpschedule", "missing", function(x) { x <- registered()[[1]] bpschedule(x) }) BiocParallel/R/bpstart-methods.R0000644000175200017520000000465214516004410017573 0ustar00biocbuildbiocbuild### ========================================================================= ### ClusterManager object: ensures started clusters are stopped ### ------------------------------------------------------------------------- .ClusterManager <- local({ ## package-global registry of backends; use to avoid closing ## socket connections of unreferenced backends during garbage ## collection -- bpstart(MulticoreParam(1)); gc(); gc() uid <- 0 env <- environment() list(add = function(cluster) { uid <<- uid + 1L cuid <- as.character(uid) env[[cuid]] <- cluster # protection cuid }, drop = function(cuid) { if (length(cuid) && cuid %in% names(env)) rm(list=cuid, envir=env) invisible(NULL) }, get = function(cuid) { env[[cuid]] }, ls = function() { cuid <- setdiff(ls(env), c("uid", "env")) cuid[order(as.integer(cuid))] }) }) ### ========================================================================= ### bpstart() methods ### ------------------------------------------------------------------------- setMethod("bpstart", "ANY", function(x, ...) invisible(x)) setMethod("bpstart", "missing", function(x, ...) 
{ x <- registered()[[1]] bpstart(x) }) ## ## .bpstart_impl: common functionality after bpisup() ## .bpstart_error_handler <- function(x, response, id) { value <- lapply(response, function(elt) elt[["value"]][["value"]]) if (!all(bpok(value))) { on.exit(try(bpstop(x))) stop( "\nbpstart() ", id, " error:\n", conditionMessage(.error_bplist(value)) ) } } .bpstart_set_rng_stream <- function(x) { ## initialize the random number stream; increment the stream only ## in bpstart_impl .RNGstream(x) <- .rng_init_stream(bpRNGseed(x)) invisible(.RNGstream(x)) } .bpstart_set_finalizer <- function(x) { if (length(x$.uid) == 0L) { finalizer_env <- as.environment(list(self=x$.self)) reg.finalizer( finalizer_env, function(e) bpstop(e[["self"]]), onexit=TRUE ) x$.finalizer_env <- finalizer_env } x$.uid <- .ClusterManager$add(bpbackend(x)) invisible(x) } .bpstart_impl <- function(x) { ## common actions once bpisup(backend) ## initialize the random number stream .bpstart_set_rng_stream(x) ## clean up when x left open .bpstart_set_finalizer(x) } BiocParallel/R/bpstop-methods.R0000644000175200017520000000071414516004410017416 0ustar00biocbuildbiocbuildsetMethod("bpstop", "ANY", function(x) invisible(x)) setMethod("bpstop", "missing", function(x) { x <- registered()[[1]] bpstop(x) }) ## ## .bpstop_impl: common functionality after bpisup() is no longer TRUE ## .bpstop_nodes <- function(x) { manager <- .manager(x) .manager_send_all(manager, .DONE()) TRUE } .bpstop_impl <- function(x) { bpisup(x) && .bpstop_nodes(x) .ClusterManager$drop(x$.uid) invisible(x) } BiocParallel/R/bpvalidate.R0000644000175200017520000001461614516004410016567 0ustar00biocbuildbiocbuild.BPValidate <- setClass( "BPValidate", slots = c( symbol = "character", environment = "character", unknown = "character" ) ) BPValidate <- function(symbol = character(), environment = character(), unknown = character()) { if (is.null(symbol)) symbol <- character() if (is.null(environment)) environment <- character() .BPValidate(symbol = 
symbol, environment = environment, unknown = unknown) } .bpvalidateSymbol <- function(x) x@symbol .bpvalidateEnvironment <- function(x) x@environment .bpvalidateUnknown <- function(x) x@unknown .show_bpvalidateSearch <- function(x) { search <- data.frame( symbol = .bpvalidateSymbol(x), environment = .bpvalidateEnvironment(x), row.names = NULL ) output <- capture.output(search) text <- ifelse(NROW(search), paste(output, collapse = "\n "), "none") c("symbol(s) in search() path:\n ", text) } .show_bpvalidateUnknown <- function(x) { unknown <- .bpvalidateUnknown(x) text <- ifelse(length(unknown), paste(unknown, collapse = "\n "), "none") c("unknown symbol(s):\n ", text) } setMethod("show", "BPValidate", function(object) { cat( "class: ", class(object), "\n", .show_bpvalidateSearch(object), "\n\n", .show_bpvalidateUnknown(object), "\n\n", sep = "" ) }) ######################### ## Utils ######################### .filterDefaultPackages <- function(symbols) { pkgs <- c( "stats", "graphics", "grDevices", "utils", "datasets", "methods", "Autoloads", "base" ) drop <- unlist(symbols, use.names = FALSE) %in% paste0("package:", pkgs) symbols[!drop] } ## Filter the variables that will be available after `fun` loads ## packages .filterLibraries <- function(codes, symbols, ERROR_FUN) { warn <- err <- NULL ## 'fun' body loads libraries pkgLoadFunc <- c("require", "library") i <- grepl( paste0("(", paste0(pkgLoadFunc, collapse = "|"), ")"), codes ) xx <- lapply(codes[i], function(code) { withCallingHandlers(tryCatch({ ## convert character code to expression expr <- parse(text = code)[[1]] ## match the library/require function arguments expr <- match.call(eval(expr[[1]]), expr) ## get the package name from the function arguments pkg <- as.character(expr[[which(names(expr) == "package")]]) which(symbols %in% getNamespaceExports(pkg)) }, error=function(e) { err <<- append(err, conditionMessage(e)) NULL }), warning=function(w) { warn <<- append(warn, conditionMessage(w)) 
invokeRestart("muffleWarning") }) }) if (!is.null(warn) || !is.null(err)) ERROR_FUN("attempt to load library failed:\n ", paste(c(warn, err), collapse="\n ")) xx <- unlist(xx) if (length(xx)) symbols <- symbols[-xx] symbols } ## find the variables that needed to be exported .findVariables <- function(fun, ERROR_FUN = capture.output) { unknown <- findGlobals(fun) env <- environment(fun) codes <- deparse(fun) ## TODO: The location where the pkg is loaded is not considered here ## (should we consider it??) ## remove the symbols that will be loaded inside the function unknown <- .filterLibraries(codes, unknown, ERROR_FUN) ## Find the objects that will ship with the function while (length(unknown) && !identical(env, emptyenv()) && !identical(.GlobalEnv, env)) { i <- vapply(unknown, function(x) { !exists(x, envir = env, inherits = FALSE) }, logical(1)) ## Force evaluation of the known arguments to ## make sure they will be exported correctly known <- unknown[-i] for (nm in known) force(env[[nm]]) unknown <- unknown[i] env <- parent.env(env) } ## Find the objects that are defined in the search path ## (only if the function/expr depends on the global) inpath <- list() if (length(unknown) && identical(.GlobalEnv, env)) { inpath <- lapply(unknown, function(x) { where <- find(x) ## Includes only packages and variables in the global ## environment keep <- startsWith(where, "package:") | where == ".GlobalEnv" head(where[keep], 1L) }) names(inpath) <- unknown i <- as.logical(lengths(inpath)) unknown <- unknown[!i] inpath <- inpath[i] inpath <- .filterDefaultPackages(inpath) } ## The package required by the worker pkgs <- unique(unlist(inpath, use.names = FALSE)) ## variables defined in the global environment globalvars <- names(inpath)[pkgs == ".GlobalEnv"] pkgs <- pkgs[pkgs != ".GlobalEnv"] pkgs <- gsub("package:", "", pkgs, fixed = TRUE) list( unknown = unknown, pkgs = pkgs, globalvars = globalvars, inpath = inpath ) } ######################### ## validate funtions and 
vairables that need to be exported ######################### bpvalidate <- function(fun, signal = c("warning", "error", "silent")) { typeof <- suppressWarnings(typeof(fun)) if (!typeof %in% c("closure", "builtin")) stop("'fun' must be a closure or builtin") if (is.function(signal)) { ERROR_FUN <- signal } else { ERROR_FUN <- switch( match.arg(signal), warning = warning, error = stop, silent = capture.output ) } ## Filter the symbols that is loaded via library/require exports <- .findVariables(fun, ERROR_FUN = ERROR_FUN) inpath <- exports$inpath result <- BPValidate( symbol = names(inpath), environment = unlist(inpath, use.names = FALSE), unknown = exports$unknown ) ## error report msg <- character() test <- .bpvalidateEnvironment(result) %in% ".GlobalEnv" if (any(test)) msg <- c( msg, "symbol(s) in .GlobalEnv:\n ", paste(.bpvalidateSymbol(result)[test], collapse = "\n "), "\n" ) test <- .bpvalidateUnknown(result) if (length(test)) msg <- c( msg, "unknown symbol(s):\n ", paste(test, collapse = "\n "), "\n" ) if (length(msg)) ERROR_FUN("\n", paste(msg, collapse = ""), call. = FALSE) result } BiocParallel/R/bpvec-methods.R0000644000175200017520000000463014516004410017207 0ustar00biocbuildbiocbuild### ========================================================================= ### bpvec methods ### ------------------------------------------------------------------------- ## bpvec() dispatches to bplapply() where errors and logging are ## handled. 
setMethod("bpvec", c("ANY", "BiocParallelParam"), function(X, FUN, ..., AGGREGATE=c, BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) { if (!length(X)) return(.rename(list(), X)) FUN <- match.fun(FUN) AGGREGATE <- match.fun(AGGREGATE) BPREDO <- bpresult(BPREDO) if (!bpschedule(BPPARAM)) { param <- as(BPPARAM, "SerialParam") return( bpvec( X, FUN, ..., AGGREGATE=AGGREGATE, BPREDO=BPREDO, BPPARAM = param, BPOPTIONS = BPOPTIONS ) ) } si <- .splitX(seq_along(X), bpnworkers(BPPARAM), bptasks(BPPARAM)) otasks <- bptasks(BPPARAM) bptasks(BPPARAM) <- 0L on.exit(bptasks(BPPARAM) <- otasks) FUN1 <- function(i, ...) FUN(X[i], ...) res <- bptry(bplapply( si, FUN1, ..., BPREDO=BPREDO, BPPARAM=BPPARAM, BPOPTIONS = BPOPTIONS )) if (is(res, "error") || !all(bpok(res))) stop(.error_bplist(res)) if (any(lengths(res) != lengths(si))) stop(.error("length(FUN(X)) not equal to length(X)", "bpvec_error")) do.call(AGGREGATE, res) }) setMethod("bpvec", c("ANY", "missing"), function(X, FUN, ..., AGGREGATE=c, BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) { FUN <- match.fun(FUN) AGGREGATE <- match.fun(AGGREGATE) bpvec(X, FUN, ..., AGGREGATE=AGGREGATE, BPREDO=BPREDO, BPPARAM=BPPARAM, BPOPTIONS = BPOPTIONS) }) setMethod("bpvec", c("ANY", "list"), function(X, FUN, ..., BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) { FUN <- match.fun(FUN) if (!all(vapply(BPPARAM, inherits, logical(1), "BiocParallelParam"))) stop("All elements in 'BPPARAM' must be BiocParallelParam objects") if (length(BPPARAM) == 0L) stop("'length(BPPARAM)' must be > 0") myFUN <- if (length(BPPARAM) > 1L) { param <- BPPARAM[-1] if (length(param) == 1L) function(...) FUN(..., BPPARAM=param[[1]]) else function(...) 
FUN(..., BPPARAM=param) } else FUN bpvec( X, myFUN, ..., BPREDO=BPREDO, BPPARAM=BPPARAM[[1]], BPOPTIONS = BPOPTIONS ) }) BiocParallel/R/bpvectorize-methods.R0000644000175200017520000000062114516004410020440 0ustar00biocbuildbiocbuildsetMethod("bpvectorize", c("ANY", "ANY"), function(FUN, ..., BPREDO=list(), BPPARAM=bpparam()) { FUN <- match.fun(FUN) function(...) bpvec(FUN=FUN, ..., BPREDO=BPREDO, BPPARAM=BPPARAM) }) setMethod("bpvectorize", c("ANY", "missing"), function(FUN, ..., BPREDO=list(), BPPARAM=bpparam()) { FUN <- match.fun(FUN) bpvectorize(FUN, ..., BPREDO=BPREDO, BPPARAM=BPPARAM) }) BiocParallel/R/bpworkers-methods.R0000644000175200017520000000031514516004410020122 0ustar00biocbuildbiocbuildsetMethod("bpworkers", "missing", function(x) { x <- registered()[[1]] bpworkers(x) }) bpnworkers <- function(x) { n <- bpworkers(x) if (!is.numeric(n)) n <- length(n) n } BiocParallel/R/cpp11.R0000644000175200017520000000150014516004410015364 0ustar00biocbuildbiocbuild# Generated by cpp11: do not edit by hand cpp_ipc_remove <- function(id_sexp) { .Call(`_BiocParallel_cpp_ipc_remove`, id_sexp) } cpp_ipc_uuid <- function() { .Call(`_BiocParallel_cpp_ipc_uuid`) } cpp_ipc_locked <- function(id_sexp) { .Call(`_BiocParallel_cpp_ipc_locked`, id_sexp) } cpp_ipc_lock <- function(id_sexp) { .Call(`_BiocParallel_cpp_ipc_lock`, id_sexp) } cpp_ipc_try_lock <- function(id_sexp) { .Call(`_BiocParallel_cpp_ipc_try_lock`, id_sexp) } cpp_ipc_unlock <- function(id_sexp) { .Call(`_BiocParallel_cpp_ipc_unlock`, id_sexp) } cpp_ipc_value <- function(id_sexp) { .Call(`_BiocParallel_cpp_ipc_value`, id_sexp) } cpp_ipc_reset <- function(id_sexp, n) { .Call(`_BiocParallel_cpp_ipc_reset`, id_sexp, n) } cpp_ipc_yield <- function(id_sexp) { .Call(`_BiocParallel_cpp_ipc_yield`, id_sexp) } BiocParallel/R/ipcmutex.R0000644000175200017520000000116514516004410016305 0ustar00biocbuildbiocbuild## Utilities ipcid <- function(id) { uuid <- cpp_ipc_uuid() if (!missing(id)) uuid <- paste(as.character(id), 
uuid, sep="-") uuid } ipcremove <- function(id) { invisible(cpp_ipc_remove(id)) } ## Locks ipclocked <- function(id) cpp_ipc_locked(id) ipclock <- function(id) { cpp_ipc_lock(id) } ipctrylock <- function(id) { cpp_ipc_try_lock(id) } ipcunlock <- function(id) { cpp_ipc_unlock(id) } ## Counters ipcyield <- function(id) { cpp_ipc_yield(id) } ipcvalue <- function(id) { cpp_ipc_value(id) } ipcreset <- function(id, n = 1) { invisible(cpp_ipc_reset(id, n)) } BiocParallel/R/log.R0000644000175200017520000000453314516004410015232 0ustar00biocbuildbiocbuild.log_data <- local({ env <- new.env(parent=emptyenv()) env[["buffer"]] <- character() env }) .log_load <- function(log, threshold) { if (!log) { if (isNamespaceLoaded("futile.logger")) { futile.logger::flog.appender( futile.logger::appender.console(), 'ROOT' ) } return(invisible(NULL)) } ## log == TRUE if (!isNamespaceLoaded("futile.logger")) tryCatch({ loadNamespace("futile.logger") }, error=function(err) { msg <- "logging requires the 'futile.logger' package" stop(conditionMessage(err), msg) }) futile.logger::flog.appender(.log_buffer_append, 'ROOT') futile.logger::flog.threshold(threshold) futile.logger::flog.info("loading futile.logger package") } .log_warn <- function(log, fmt, ...) { if (log) futile.logger::flog.warn(fmt, ...) } .log_error <- function(log, fmt, ...) { if (log) futile.logger::flog.error(fmt, ...) 
} ## logging buffer .log_buffer_init <- function() { .log_data[["buffer"]] <- character() } .log_buffer_append <- function(line) { .log_data[["buffer"]] <- c(.log_data[["buffer"]], line) } .log_buffer_get <- function() { .log_data[["buffer"]] } ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### logs and results printed in the manager process ### .bpwriteLog <- function(con, d) { .log_internal <- function() { message( "############### LOG OUTPUT ###############\n", "Task: ", d$value$tag, "\nNode: ", d$node, "\nTimestamp: ", Sys.time(), "\nSuccess: ", d$value$success, "\n\nTask duration:\n", paste(capture.output(d$value$time), collapse="\n"), "\n\nMemory used:\n", paste(capture.output(gc()), collapse="\n"), "\n\nLog messages:\n", paste(trimws(d$value$log), collapse="\n"), "\n\nstderr and stdout:\n", if (!is.null(d$value$sout)) paste(noquote(d$value$sout), collapse="\n") ) } if (!is.null(con)) { on.exit({ sink(NULL, type = "message") sink(NULL, type = "output") }) sink(con, type = "message") sink(con, type = "output") } .log_internal() } BiocParallel/R/progress.R0000644000175200017520000000260014516004410016306 0ustar00biocbuildbiocbuild### ========================================================================= ### progress bar ### ------------------------------------------------------------------------- .progress <- function(style = 3, active = TRUE, iterate = FALSE, ...) { if (active) { ntasks <- 0L if (iterate) { list(init = function(x) { message("iteration: ", appendLF=FALSE) }, step = function(n) { ntasks <<- ntasks + 1L erase <- paste(rep("\b", ceiling(log10(ntasks))), collapse="") message(erase, ntasks, appendLF = FALSE) }, term = function() { message() # new line }) } else { ## derived from plyr::progress_text() txt <- NULL max <- 0 list(init = function(x) { txt <<- txtProgressBar(max = x, style = style, ...) 
setTxtProgressBar(txt, 0) max <<- x }, step = function(n) { ntasks <<- ntasks + n setTxtProgressBar(txt, ntasks) if (ntasks == max) cat("\n") }, term = function() { close(txt) }) } } else { list( init = function(x) NULL, step = function(n) NULL, term = function() NULL ) } } BiocParallel/R/prototype.R0000644000175200017520000000177214516004410016520 0ustar00biocbuildbiocbuild## There are three timeouts involved ## ## - establishing a socket connection, from getOption("timeout"), ## default 60 seconds ## - duration of allowed computations, from argument `timeout=` to ## *Param(), default WORKER_TIMEOUT (infinite) ## - duration of idle connections (no activity from the worker ## socket), default IDLE_TIMEOUT (30 days) beause (a) this is the ## snow behavior and (b) sockets appear to sometimes segfault & lead ## to PROTECTion imbalance if an attempt is made to write to a ## terminated socket. ## Timeout for individual worker tasks WORKER_TIMEOUT <- NA_integer_ ## Timeout for socket inactivity IDLE_TIMEOUT <- 2592000L # 60 * 60 * 24 * 30 = 30 day; consistent w/ parallel ## Maximum number of tasks, e.g., when using progress bar TASKS_MAXIMUM <- .Machine$integer.max .prototype_update <- function(prototype, ...) { args <- list(...) 
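    ## Example (illustration only): updating a named prototype list,
    ##   .prototype_update(list(timeout = NA_integer_, tasks = 0L), tasks = 2L)
    ## returns list(timeout = NA_integer_, tasks = 2L); argument names not
    ## already present in 'prototype' fail the stopifnot() check below.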
stopifnot( all(names(args) %in% names(prototype)) ) prototype[names(args)] <- unname(args) prototype } BiocParallel/R/reducer.R0000644000175200017520000001727714516004410016113 0ustar00biocbuildbiocbuild.Reducer <- setRefClass( "Reducer", fields = list( result = "ANY", total = "numeric", reduced.num = "numeric", reduced.index = "numeric", value.cache = "environment", redo.index = "numeric" ) ) .LapplyReducer <- setRefClass( "LapplyReducer", fields = list( exists.error = "logical" ), contains = "Reducer" ) .IterateReducer <- setRefClass( "IterateReducer", fields = list( REDUCE = "ANY", errors = "environment", reduce.in.order = "logical", appending.offset = "numeric", init.missing = "logical", REDUCE.missing = "logical" ), contains = "Reducer" ) setGeneric(".map_index", function(reducer, idx){ standardGeneric(".map_index") }) setGeneric(".reducer_add", function(reducer, idx, values){ standardGeneric(".reducer_add") }) setGeneric(".reducer_reduce", function(reducer, idx){ standardGeneric(".reducer_reduce") }) setGeneric(".reducer_ok", function(reducer){ standardGeneric(".reducer_ok") }) setGeneric(".reducer_complete", function(reducer){ standardGeneric(".reducer_complete") }) setGeneric(".reducer_value", function(reducer){ standardGeneric(".reducer_value") }) ######################### ## Reducer ######################### setMethod(".reducer_complete", signature = "Reducer", function(reducer) { reducer$total == reducer$reduced.num }) setMethod(".reducer_ok", signature = "Reducer", function(reducer) { length(reducer$errors) == 0L }) ######################### ## LapplyReducer ######################### .lapplyReducer <- function(ntotal, reducer = NULL) { if (is.null(reducer)) { result <- rep(list(.error_unevaluated()), ntotal) redo.index <- seq_len(ntotal) } else { result <- reducer$result redo.index <- which(!bpok(result)) ntotal <- length(redo.index) } .LapplyReducer( result = result, total = ntotal, reduced.index = 1L, reduced.num = 0L, value.cache = new.env(parent = 
emptyenv()), redo.index = redo.index, exists.error = FALSE ) } setMethod(".reducer_add", signature = "LapplyReducer", function(reducer, idx, values) { reducer$value.cache[[as.character(idx)]] <- values while (.reducer_reduce(reducer, reducer$reduced.index)) {} if(!all(bpok(values))) reducer$exists.error <- TRUE reducer }) setMethod(".reducer_reduce", signature = "LapplyReducer", function(reducer, idx) { ## obtain the cached value idx <- as.character(idx) if (!exists(idx, envir = reducer$value.cache)) return(FALSE) values <- reducer$value.cache[[idx]] rm(list = idx, envir = reducer$value.cache) ## Find the true index of the reduced value in the result idx <- reducer$redo.index[reducer$reduced.num + 1L] reducer$result[idx - 1L + seq_along(values)] <- values reducer$reduced.index <- reducer$reduced.index + 1L reducer$reduced.num <- reducer$reduced.num + length(values) TRUE }) setMethod(".reducer_value", signature = "LapplyReducer", function(reducer) { reducer$result }) setMethod(".reducer_ok", signature = "LapplyReducer", function(reducer) { !reducer$exists.error }) ######################### ## IterateReducer ######################### .redo_index_iterate <- function(reducer) { if (is.null(reducer)) return(integer()) finished_idx <- as.integer(names(reducer$value.cache)) missing_idx <- setdiff(seq_len(reducer$total), finished_idx) c(missing_idx, as.integer(names(reducer$errors))) } .iterateReducer <- function(REDUCE, init, reduce.in.order=FALSE, reducer = NULL) { if (is.null(reducer)) { if (missing(init)){ result <- NULL init.missing <- TRUE } else { result <- init init.missing <- FALSE } if (missing(REDUCE)) { REDUCE <- NULL REDUCE.missing <- TRUE } else { REDUCE.missing <- FALSE } .IterateReducer( result = result, total = 0L, reduced.num = 0L, reduced.index = 1L, value.cache = new.env(parent = emptyenv()), redo.index = integer(), REDUCE = REDUCE, errors = new.env(parent = emptyenv()), reduce.in.order = reduce.in.order, appending.offset = 0L, init.missing = 
init.missing, REDUCE.missing = REDUCE.missing ) } else { reducer <- reducer$copy() reducer$appending.offset <- reducer$total reducer$redo.index <- .redo_index_iterate(reducer) reducer$value.cache <- as.environment( as.list(reducer$value.cache, all.names=TRUE) ) reducer$errors <- as.environment( as.list(reducer$errors, all.names=TRUE) ) reducer } } setMethod(".map_index", signature = "IterateReducer", function(reducer, idx) { redo.index <- reducer$redo.index if (idx <= length(redo.index)) idx <- redo.index[idx] else idx <- idx - length(redo.index) + reducer$appending.offset idx }) setMethod(".reducer_add", signature = "IterateReducer", function(reducer, idx, values) { reduce.in.order <- reducer$reduce.in.order idx <- as.character(.map_index(reducer, idx)) value <- values[[1]] if (.bpeltok(value)) { if (exists(idx, envir = reducer$errors)) rm(list = idx, envir = reducer$errors) } else { reducer$errors[[idx]] <- idx } reducer$value.cache[[idx]] <- value reducer$total <- max(reducer$total, as.numeric(idx)) if (reduce.in.order) while (.reducer_reduce(reducer, reducer$reduced.index)) {} else .reducer_reduce(reducer, idx) reducer }) setMethod(".reducer_reduce", signature = "IterateReducer", function(reducer, idx) { idx <- as.character(idx) if (!exists(idx, envir = reducer$value.cache)) { return(FALSE) } ## stop reducing when reduce.in.order == TRUE ## and we have a pending error if (!.reducer_ok(reducer) && reducer$reduce.in.order) return(FALSE) value <- reducer$value.cache[[idx]] ## Do not reduce the erroneous result if (!.bpeltok(value)) return(FALSE) if (!reducer$REDUCE.missing) { if (reducer$init.missing && (reducer$reduced.num == 0)) { reducer$result <- value } else { reducer$result <- reducer$REDUCE(reducer$result, value) } ## DO NOT REMOVE, only set to NULL to keep track ## of the finished results reducer$value.cache[[idx]] <- NULL } reducer$reduced.num <- reducer$reduced.num + 1L reducer$reduced.index <- reducer$reduced.index + 1L TRUE }) 
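## Sketch of the reducer protocol (internal API, illustration only):
##   reducer <- .lapplyReducer(3)
##   .reducer_add(reducer, 1L, list("a"))       # cached, then reduced in order
##   .reducer_add(reducer, 2L, list("b", "c"))  # one task may carry >1 value
##   .reducer_complete(reducer)                 # TRUE once all 3 values land
##   .reducer_value(reducer)                    # list("a", "b", "c")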
setMethod(".reducer_value", signature = "IterateReducer", function(reducer) { value.cache <- reducer$value.cache if (!reducer$REDUCE.missing) { res <- reducer$result } else { ## remove the index of the meta elements and errors idx <- names(value.cache) idx <- setdiff(idx, names(reducer$errors)) res <- rep(list(NULL), reducer$total) for (i in idx) res[[as.integer(i)]] <- value.cache[[i]] } ## Attach the errors as an attribute if (!.reducer_ok(reducer) || !.reducer_complete(reducer)) { ## cannot attach attribute to NULL if (is.null(res)) res <- list() idx <- .redo_index_iterate(reducer) errors <- rep(list(.error_unevaluated()), length(idx)) names(errors) <- as.character(idx) for (i in names(reducer$errors)) errors[[i]] <- value.cache[[i]] attr(res, "errors") <- errors } res }) BiocParallel/R/register.R0000644000175200017520000000523114516004410016271 0ustar00biocbuildbiocbuild### ========================================================================= ### .registry object ### ------------------------------------------------------------------------- .registry <- setRefClass(".BiocParallelRegistry", fields=list( bpparams = "list"), methods=list( register = function(BPPARAM, default = TRUE) { BPPARAM <- eval(BPPARAM) if ((!length(BPPARAM) == 1) || !is(BPPARAM, "BiocParallelParam")) stop("'BPPARAM' must be a 'BiocParallelParam' instance") .self$bpparams[[class(BPPARAM)]] <- BPPARAM if (default) { idx <- match(class(BPPARAM), names(.self$bpparams)) .self$bpparams <- c(.self$bpparams[idx], .self$bpparams[-idx]) } invisible(registered()) }, registered = function(bpparamClass) { if (missing(bpparamClass)) .self$bpparams else .self$bpparams[[bpparamClass]] }) )$new() # Singleton .register <- .registry$register .registered <- .registry$registered .registry_init <- function() { multicore <- .defaultWorkers() > 1L tryCatch({ if ((.Platform$OS.type == "windows") && multicore) { .register(getOption( "SnowParam", SnowParam() ), TRUE) .register(getOption("SerialParam", 
SerialParam()), FALSE) } else if (multicore) { ## linux / mac .register(getOption( "MulticoreParam", MulticoreParam() ), TRUE) .register(getOption( "SnowParam", SnowParam() ), FALSE) .register(getOption("SerialParam", SerialParam()), FALSE) } else { .register(getOption("SerialParam", SerialParam()), TRUE) } }, error=function(err) { message( "'BiocParallel' did not register default BiocParallelParam:\n", " ", conditionMessage(err) ) NULL }) } register <- function(BPPARAM, default = TRUE) { if (length(.registry$bpparams) == 0L) .registry_init() .register(BPPARAM, default = default) } registered <- function(bpparamClass) { if (length(.registry$bpparams) == 0L) .registry_init() .registered(bpparamClass) } bpparam <- function(bpparamClass) { if (missing(bpparamClass)) bpparamClass <- names(registered())[1] default <- registered()[[bpparamClass]] result <- getOption(bpparamClass, default) if (is.null(result)) stop("BPPARAM '", bpparamClass, "' not registered() or in names(options())") result } BiocParallel/R/rng.R0000644000175200017520000000567114516004410015243 0ustar00biocbuildbiocbuild## .rng_get_generator(): get the current generator kind and seed .rng_get_generator <- function() { seed <- if (exists(".Random.seed", envir = .GlobalEnv, inherits = FALSE)) { get(".Random.seed", envir = .GlobalEnv, inherits = FALSE) } else NULL kind <- RNGkind() list(kind = kind, seed = seed) } ## .rng_reset_generator(): reset the generator to a state returned by ## .rng_get_generator() .rng_reset_generator <- function(kind, seed) { ## Setting RNGkind() changes the seed, so restore the original ## seed after restoring the kind RNGkind(kind[[1]]) if (is.null(seed)) { rm(.Random.seed, envir = .GlobalEnv) } else { assign(".Random.seed", seed, envir = .GlobalEnv) } list(kind = kind, seed = seed) } ## .rng_init_stream(): initialize the generator to a new kind, ## optionally using `set.seed()` to set the seed for the the ## generator. 
.rng_init_stream <- function(seed) { state <- .rng_get_generator() on.exit(.rng_reset_generator(state$kind, state$seed)) ## coerces seed to appropriate format for RNGkind; NULL seed (from ## bpRNGseed()) uses the global random number stream. if (!is.null(seed)) { RNGkind("default", "default", "default") set.seed(seed) ## change kind kind <- "L'Ecuyer-CMRG" RNGkind(kind) ## .Random.seed always exists after RNGkind() seed <- get(".Random.seed", envir = .GlobalEnv, inherits = FALSE) } else { .rng_internal_stream$set() seed <- get(".Random.seed", envir = .GlobalEnv, inherits = FALSE) ## advance internal stream by 1 runif(1) .rng_internal_stream$reset() } seed } ## .rng_next_stream(): return the next stream for a parallel job .rng_next_stream <- function(seed) { ## `nextRNGStream()` does not require that the current stream is ## L'Ecuyer-CMRG if (is.null(seed)) seed <- .rng_init_stream(seed) nextRNGStream(seed) } .rng_next_substream <- function(seed) { if (is.null(seed)) seed <- .rng_init_stream(seed) nextRNGSubStream(seed) } ## iterate the seed stream n times .rng_iterate_substream <- function(seed, n) { for (k in seq_len(n)) seed <- .rng_next_substream(seed) seed } ## a random number stream independent of the stream used by R. Use for ## port and other 'internal' assignments without changing the random ## number sequence of users. 
.rng_internal_stream <- local({ state <- .rng_get_generator() RNGkind("L'Ecuyer-CMRG") # sets .Random.seed to non-NULL value internal_seed <- .Random.seed .rng_reset_generator(state$kind, state$seed) list(set = function() { state <<- .rng_get_generator() internal_seed <<- .rng_reset_generator("L'Ecuyer-CMRG", internal_seed) }, reset = function() { internal_seed <<- .rng_get_generator()$seed .rng_reset_generator(state$kind, state$seed) }) }) BiocParallel/R/utilities.R0000644000175200017520000000607014516004410016462 0ustar00biocbuildbiocbuild.splitIndices <- function (nx, tasks) { ## derived from parallel i <- seq_len(nx) if (nx == 0L) list() else if (tasks <= 1L || nx == 1L) # allow nx, nc == 0 list(i) else { fuzz <- min((nx - 1L)/1000, 0.4 * nx / tasks) breaks <- seq(1 - fuzz, nx + fuzz, length.out = tasks + 1L) si <- structure(split(i, cut(i, breaks)), names = NULL) si[sapply(si, length) != 0] } } .ntask <- function(X, workers, tasks) { if (is.na(tasks)) { length(X) } else if (tasks == 0L) { workers } else { min(length(X), tasks) } } .splitX <- function(X, workers, tasks) { tasks <- .ntask(X, workers, tasks) idx <- .splitIndices(length(X), tasks) relist(X, idx) } .redo_index <- function(X, BPREDO) { if (length(BPREDO)) { if (length(BPREDO) != length(X)) stop("'length(BPREDO)' must equal 'length(X)'") idx <- which(!bpok(BPREDO)) if (!length(idx)) stop("no previous error in 'BPREDO'") idx } else { seq_along(X) } } ## re-apply names on X of lapply(X, FUN) to the return value .rename <- function(results, X) { names(results) <- names(X) results } .simplify <- function(results, SIMPLIFY=FALSE) { if (SIMPLIFY && length(results)) results <- simplify2array(results) results } .prettyPath <- function(tag, filepath) { wd <- options('width')[[1]] - nchar(tag) - 6 if (length(filepath) == 0 || is.na(filepath)) return(sprintf("%s: %s", tag, NA_character_)) if (0L == length(filepath) || nchar(filepath) < wd) return(sprintf("%s: %s", tag, filepath)) bname <- basename(filepath) wd1 
<- wd - nchar(bname) dname <- substr(dirname(filepath), 1, wd1) sprintf("%s: %s...%s%s", tag, dname, .Platform$file.sep, bname) } .getDotsForMapply <- function(...) { ddd <- list(...) if (!length(ddd)) return(list(list())) len <- vapply(ddd, length, integer(1L)) if (!all(len == len[1L])) { max.len <- max(len) if (max.len && any(len == 0L)) stop("zero-length and non-zero length inputs cannot be mixed") if (any(max.len %% len)) warning("longer argument not a multiple of length of vector") ddd <- lapply(ddd, rep_len, length.out=max.len) } ddd } .dir_valid_rw <- function(x) { all(file.access(x, 6L) == 0L) } .warning <- function(...) { msg <- paste( strwrap(paste0("\n", ...), indent = 2, exdent = 2), collapse="\n" ) warning(msg, call. = FALSE) } .stop <- function(...) { msg <- paste( strwrap(paste0("\n", ...), indent = 2, exdent = 2), collapse="\n" ) stop(msg, call. = FALSE) } ## batchtools signals no timeout with 'Inf', rather than NA; do not ## implement as bptimeout() method because NA is appropriate in other ## contexts, e.g., when 'show()'ing param. .batch_bptimeout <- function(BPPARAM) { timeout <- bptimeout(BPPARAM) if (identical(timeout, NA_integer_)) timeout <- Inf timeout } BiocParallel/R/worker-number.R0000644000175200017520000001164314516004410017250 0ustar00biocbuildbiocbuild.workerEnvironmentVariable <- function(variable, default = NA_integer_) { result <- withCallingHandlers({ value <- Sys.getenv(variable, default) as.integer(value) }, warning = function(w) { txt <- sprintf( paste0( "Trying to coerce the environment variable '%s' to an ", "integer caused a warning. The value of the environment ", "variable was '%s'. The warning was: %s" ), variable, value, conditionMessage(w) ) .warning(txt) invokeRestart("muffleWarning") }) if (!is.na(result) && (result <= 0L)) { txt <- sprintf( "The environment variable '%s' must be > 0.
The value was '%d'.", variable, result ) .stop(txt) } result } .defaultWorkers <- function() { ## assign default cores ## environment variables; least to most compelling result <- .workerEnvironmentVariable("R_PARALLELLY_AVAILABLECORES_FALLBACK") max_number <- .workerEnvironmentVariable("BIOCPARALLEL_WORKER_MAX", result) default_number <- .workerEnvironmentVariable("BIOCPARALLEL_WORKER_NUMBER", result) if (is.na(max_number)) { result <- default_number } else { result <- min(max_number, default_number, na.rm = TRUE) } ## fall back to detectCores() if necessary if (is.na(result)) { result <- parallel::detectCores() if (is.na(result)) result <- 1L result <- max(1L, result - 2L) } ## respect 'mc.cores', overriding env. variables and detectCores() result <- getOption("mc.cores", result) ## coerce to integer; check for valid value tryCatch({ result <- as.integer(result) if ( length(result) != 1L || is.na(result) || result < 1L ) stop("number of cores must be a positive integer") }, error = function(e) { msg <- paste0( conditionMessage(e), ". ", "Did you mis-specify R_PARALLELLY_AVAILABLECORES_FALLBACK, ", "BIOCPARALLEL_WORKER_NUMBER, or options('mc.cores')?" ) .stop(msg) }) ## override user settings by build-system configurations if (identical(Sys.getenv("IS_BIOC_BUILD_MACHINE"), "true")) result <- min(result, 4L) ## from R-ints.texi ## @item _R_CHECK_LIMIT_CORES_ ## If set, check the usage of too many cores in package @pkg{parallel}. If ## set to @samp{warn} gives a warning, to @samp{false} or @samp{FALSE} the ## check is skipped, and any other non-empty value gives an error when more ## than 2 children are spawned. ## Default: unset (but @samp{TRUE} for CRAN submission checks).
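The precedence that `.defaultWorkers()` applies — environment variables, then `detectCores()` minus two, then the `mc.cores` option, then build-machine and `R CMD check` caps — can be summarized in a short sketch. This Python re-implementation is illustrative only: the function name is invented here, the `env` dict stands in for the process environment, and `mc_cores` stands in for `options('mc.cores')`; invalid values are simplified to "fall back to the default" rather than warning as the R code does.

```python
def default_workers(env, detected_cores, mc_cores=None):
    """Sketch (not BiocParallel code) of the worker-count precedence."""

    def as_int(name, default=None):
        # env vars that are unset or not integers fall back to 'default'
        value = env.get(name)
        if value is None:
            return default
        try:
            return int(value)
        except ValueError:
            return default

    # least to most compelling environment variables
    fallback = as_int("R_PARALLELLY_AVAILABLECORES_FALLBACK")
    max_n = as_int("BIOCPARALLEL_WORKER_MAX", fallback)
    default_n = as_int("BIOCPARALLEL_WORKER_NUMBER", fallback)
    if max_n is None:
        result = default_n
    else:
        result = min(x for x in (max_n, default_n) if x is not None)

    if result is None:
        # leave two cores free, but always use at least one
        result = max(1, (detected_cores or 1) - 2)

    if mc_cores is not None:
        result = mc_cores          # options('mc.cores') wins over env vars

    if env.get("IS_BIOC_BUILD_MACHINE") == "true":
        result = min(result, 4)    # Bioconductor build-system cap
    check = env.get("_R_CHECK_LIMIT_CORES_")
    if check is not None and check.upper() != "FALSE":
        result = min(result, 2)    # R CMD check cap
    return result
```

For example, on an 8-core machine with no overrides the sketch yields 6 workers, while a set `_R_CHECK_LIMIT_CORES_` caps the same configuration at 2.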
check_limit_cores <- Sys.getenv("_R_CHECK_LIMIT_CORES_", NA_character_) check_limit_cores_is_set <- !is.na(check_limit_cores) && !identical(toupper(check_limit_cores), "FALSE") if (check_limit_cores_is_set) result <- min(result, 2L) result } .enforceWorkers <- function(workers, type = NULL) { ## Ensure that user 'workers' does not exceed hard limits; most- ## to least stringent. Usually on build systems ## R CMD check limit (though it applies outside check, too...) check_limit_cores <- Sys.getenv("_R_CHECK_LIMIT_CORES_", NA_character_) check_limit_cores_is_set <- !is.na(check_limit_cores) && !identical(toupper(check_limit_cores), "FALSE") if (workers > 2L && check_limit_cores_is_set) { if (!identical(check_limit_cores, "warn")) { .stop( "'_R_CHECK_LIMIT_CORES_' environment variable detected, ", "BiocParallel workers must be <= 2 (was ", workers, ")" ) } .warning( "'_R_CHECK_LIMIT_CORES_' environment variable detected, ", "setting BiocParallel workers to 2 (was ", workers, ")" ) workers <- 2L } ## Bioconductor build system test <- (workers > 4L) && identical(Sys.getenv("IS_BIOC_BUILD_MACHINE"), "true") if (test) { .warning( "'IS_BIOC_BUILD_MACHINE' environment variable detected, ", "setting BiocParallel workers to 4 (was ", workers, ")" ) workers <- 4L } worker_max <- .workerEnvironmentVariable("BIOCPARALLEL_WORKER_MAX") if (!is.na(worker_max) && workers > worker_max) { .warning( "'BIOCPARALLEL_WORKER_MAX' environment variable detected, ", "setting BiocParallel workers to ", worker_max, " ", "(was ", workers, ")" ) workers <- worker_max } ## limit on number of available sockets if (!is.null(type) && workers > .snowCoresMax(type)) { max <- .snowCoresMax(type) .warning( "worker number limited by available socket connections, ", "setting BiocParallel workers to ", max, " (was ", workers, ")" ) workers <- max } workers } BiocParallel/R/worker.R0000644000175200017520000002452614516004410015766 0ustar00biocbuildbiocbuild### - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - - - - - ### Utils ## Extract static and dynamic data from a task. Return NULL if no ## static data can be extracted .task_const <- function(value) { ## Supports EXEC task only if (value$type != "EXEC") return(NULL) if (isTRUE(value$dynamic.only)) return(NULL) if (value$static.fun) fun <- value$data$fun else fun <- NULL fullArgNames <- names(value$data$args) if (all(value$static.args %in% fullArgNames)) { args <- value$data$args[value$static.args] if (!length(args)) args <- NULL } else { args <- NULL } if (!is.null(fun) || !is.null(args)) list(fun = fun, args = args, fullArgNames = fullArgNames) else NULL } ## Extract the dynamic part from a task .task_dynamic <- function(value) { ## Supports EXEC task only if (value$type != "EXEC") return(value) if (value$static.fun) value$data$fun <- TRUE if (length(value$static.args)) value$data$args[value$static.args] <- NULL if (value$static.fun || length(value$static.args)) value$dynamic.only <- TRUE value } ## Recreate the task from the dynamic and static parts of the task. ## It is safe to call the function if the task is complete ## (not extracted by `.task_dynamic`) or `static_data` is NULL .task_remake <- function(value, static_data = NULL) { if (is.null(static_data)) return(value) if (value$type != "EXEC") return(value) if (!isTRUE(value$dynamic.only)) return(value) if (value$static.fun) value$data$fun <- static_data$fun if (length(value$static.args)) { value$data$args <- c(value$data$args, static_data$args) value$data$args <- value$data$args[static_data$fullArgNames] } value$dynamic.only <- NULL value } ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Worker commands ### Support for SOCK, MPI and FORK connections.
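The three task helpers above (`.task_const()`, `.task_dynamic()`, `.task_remake()`) exist so that the static part of an EXEC task — a function and arguments shared by every task — is serialized to a worker once, while subsequent messages carry only the dynamic part that the worker then reassembles. A simplified Python sketch of that round-trip follows; it is not the package's code, the dict keys only mirror the R list fields informally, and it assumes all static argument names are present in the task.

```python
def task_const(task):
    # extract the reusable (static) part of an EXEC task, or None
    if task["type"] != "EXEC" or task.get("dynamic_only"):
        return None
    fun = task["data"]["fun"] if task["static_fun"] else None
    names = list(task["data"]["args"])
    static = {k: task["data"]["args"][k] for k in task["static_args"]} or None
    if fun is None and static is None:
        return None
    return {"fun": fun, "args": static, "arg_names": names}

def task_dynamic(task):
    # strip the static part before sending the task (copy, don't mutate)
    task = {**task, "data": dict(task["data"])}
    task["data"]["args"] = dict(task["data"]["args"])
    if task["static_fun"]:
        task["data"]["fun"] = True       # placeholder for the cached fun
    for name in task["static_args"]:
        del task["data"]["args"][name]
    task["dynamic_only"] = True
    return task

def task_remake(task, const):
    # worker side: reassemble the full task from cached static data
    if const is None or not task.get("dynamic_only"):
        return task
    task = {**task, "data": dict(task["data"])}
    if task["static_fun"]:
        task["data"]["fun"] = const["fun"]
    args = {**task["data"]["args"], **(const["args"] or {})}
    task["data"]["args"] = {k: args[k] for k in const["arg_names"]}
    task["dynamic_only"] = False
    return task
```

The payoff is the same as in the R code: for a long sequence of tasks that share a large function or argument, only the first transmission pays the full serialization cost.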
### Derived from snow version 0.3-13 by Luke Tierney ### Derived from parallel version 2.16.0 by R Core Team .EXEC <- function(tag, fun, args, static.fun = FALSE, static.args = NULL) { list( type = "EXEC", data = list(tag = tag, fun = fun, args = args), static.fun = static.fun, static.args = static.args ) } .VALUE <- function(tag, value, success, time, log, sout) { list( type = "VALUE", tag = tag, value = value, success = success, time = time, log = log, sout = sout ) } .DONE <- function() { list(type = "DONE") } ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Worker options and function to run the task .workerOptions <- function( log = FALSE, threshold = "INFO", stop.on.error = TRUE, as.error = TRUE, timeout = WORKER_TIMEOUT, exportglobals = TRUE, force.GC = FALSE) { force(log) force(threshold) force(stop.on.error) force(as.error) force(timeout) force(force.GC) if (exportglobals) { blocklist <- c( "askpass", "asksecret", "buildtools.check", "buildtools.with", "pager", "plumber.swagger.url", "profvis.print", "restart", "reticulate.repl.hook", "reticulate.repl.initialize", "reticulate.repl.teardown", "shiny.launch.browser", "terminal.manager", "error", "topLevelEnvironment", "connectionObserver" ) globalOptions <- base::options() globalOptions <- globalOptions[!names(globalOptions) %in% blocklist] } else { globalOptions <- NULL } list( log = log, threshold = threshold, stop.on.error = stop.on.error, as.error = as.error, timeout = timeout, force.GC = force.GC, globalOptions = globalOptions ) } .composeTry <- function(FUN, OPTIONS, SEED) { FUN <- match.fun(FUN) ERROR_OCCURRED <- FALSE ## use `ERROR_CALL_DEPTH` to trim call stack. 
default: show all ERROR_CALL_DEPTH <- -.Machine$integer.max UNEVALUATED <- .error_unevaluated() # singleton log <- OPTIONS$log stop.on.error <- OPTIONS$stop.on.error as.error <- OPTIONS$as.error timeout <- OPTIONS$timeout force.GC <- OPTIONS$force.GC globalOptions <- OPTIONS$globalOptions handle_warning <- function(w) { .log_warn(log, "%s", w) w # FIXME: muffleWarning; don't rely on capture.output() } handle_error <- function(e) { ERROR_OCCURRED <<- TRUE .log_error(log, "%s", e) call <- rev(tail(sys.calls(), -ERROR_CALL_DEPTH)) .error_remote(e, call) } if (!is.null(SEED)) SEED <- .rng_reset_generator("L'Ecuyer-CMRG", SEED)$seed function(...) { if (!identical(timeout, WORKER_TIMEOUT)) { setTimeLimit(timeout, timeout, TRUE) on.exit(setTimeLimit(Inf, Inf, FALSE)) } if (!is.null(globalOptions)) base::options(globalOptions) if (stop.on.error && ERROR_OCCURRED) { UNEVALUATED } else { .rng_reset_generator("L'Ecuyer-CMRG", SEED) ## capture warnings and errors. Both are initially handled ## by `withCallingHandlers()`. ## ## 'error' conditions are logged (via `handle_error()`), ## annotated, and then re-signalled via `stop()`. The ## condition needs to be handled first by ## `withCallingHandlers()` so that the full call stack to ## the error can be recovered. The annotated condition ## needs to be resignalled so that it can be returned as ## 'output'; but the condition needs to be silenced by the ## outer `tryCatch()`. ## ## 'warning' conditions are logged (via ## `handle_warning()`). The handler returns the original ## condition, and the 'muffleWarning' handler is invoked ## somewhere above this point. output <- tryCatch({ withCallingHandlers({ ## emulate call depth from 'inside' FUN, to ## account for frames from tryCatch, ## withCallingHandlers ERROR_CALL_DEPTH <<- (\() sys.nframe() - 1L)() FUN(...) 
}, error = function(e) { annotated_condition <- handle_error(e) stop(annotated_condition) }, warning = handle_warning) }, error = identity) ## Trigger garbage collection to cut down on memory usage within ## each worker in shared memory contexts. Otherwise, each worker is ## liable to think the entire heap is available (leading to each ## worker trying to fill said heap, causing R to exhaust memory). if (force.GC) gc(verbose=FALSE, full=FALSE) SEED <<- .rng_next_substream(SEED) output } } } .workerLapply_impl <- function(X, FUN, ARGS, OPTIONS, BPRNGSEED, GLOBALS = NULL, PACKAGES = NULL) { state <- .rng_get_generator() on.exit(.rng_reset_generator(state$kind, state$seed)) ## FUN is not compiled when using MulticoreParam FUN <- compiler::cmpfun(FUN) if (!is.null(OPTIONS$globalOptions)) { oldOptions <- base::options() on.exit(base::options(oldOptions), add = TRUE) } ## Set log .log_load(OPTIONS$log, OPTIONS$threshold) for (pkg in PACKAGES) { suppressPackageStartupMessages(library(pkg, character.only = TRUE)) } ## Add variables to the global space and remove them afterward ## Recover the replaced variables at the end if necessary replaced_variables <- new.env(parent = emptyenv()) if (length(GLOBALS)) { for (i in names(GLOBALS)) { if (exists(i, envir = .GlobalEnv)) replaced_variables[[i]] <- .GlobalEnv[[i]] assign(i, GLOBALS[[i]], envir = .GlobalEnv) } on.exit({ remove(list = names(GLOBALS), envir = .GlobalEnv) for (i in names(replaced_variables)) assign(i, replaced_variables[[i]], envir = .GlobalEnv) }, add = TRUE) } composeFunc <- .composeTry(FUN, OPTIONS, BPRNGSEED) args <- c(list(X = X, FUN = composeFunc), ARGS) do.call(lapply, args) } ## reduce the size of the serialization of .workerLapply_impl from ## 124k to 3k .workerLapply <- eval( parse(text = "function(...) BiocParallel:::.workerLapply_impl(...)"), envir = getNamespace("base") ) ### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ### Worker loop. 
Error handling is done in .composeTry. .bpworker_EXEC <- function(msg, sink.sout = TRUE) { ## need local handler for worker read/send errors if (sink.sout) { on.exit({ sink(NULL, type="message") sink(NULL, type="output") close(file) }) file <- rawConnection(raw(), "r+") sink(file, type="message") sink(file, type="output") } t1 <- proc.time() value <- tryCatch({ do.call(msg$data$fun, msg$data$args) }, error=function(e) { ## return as 'list()' because msg$fun has lapply semantics list(.error_worker_comm(e, "worker evaluation failed")) }) t2 <- proc.time() if (sink.sout) { sout <- rawToChar(rawConnectionValue(file)) if (!nchar(sout)) sout <- NULL } else { sout <- NULL } success <- !(inherits(value, "bperror") || !all(bpok(value))) log <- .log_buffer_get() ## Reset the log buffer .log_buffer_init() value <- .VALUE( msg$data$tag, value, success, t2 - t1, log, sout ) } .bpworker_impl <- function(worker) { repeat { tryCatch({ msg <- .recv(worker) if (inherits(msg, "error")) ## FIXME: try to return error to manager break # lost socket connection? if (msg$type == "DONE") { .close(worker) break } else if (msg$type == "EXEC") { value <- .bpworker_EXEC(msg) .send(worker, value) } }, interrupt = function(e) { NULL }) } } BiocParallel/README.md0000644000175200017520000000134714516004410015404 0ustar00biocbuildbiocbuildBiocParallel ============ Bioconductor facilities for parallel evaluation (experimental) Possible TODO ------------- + map/reduce-like function + bpforeach? + Abstract scheduler + lazy DoparParam + SnowParam support for setSeed, recursive, cleanup + subset SnowParam DONE ---- + encapsulate arguments as ParallelParam() + Standardize signatures + Make functions generics + parLapply-like function + Short vignette + elaborate SnowParam for SnowSocketParam, SnowForkParam, SnowMpiParam, ... 
+ MulticoreParam on Windows github notes ------------ + commit one-liners with names git log --pretty=format:"- %h %an: %s" TO FIX ------------- + DoparParam does not pass foreach args (specifically access to .options.nws for chunking) BiocParallel/build/0000755000175200017520000000000014516024321015222 5ustar00biocbuildbiocbuildBiocParallel/build/vignette.rds0000644000175200017520000000056014516024321017562 0ustar00biocbuildbiocbuildBiocParallel/cleanup0000755000175200017520000000002314516024321015473 0ustar00biocbuildbiocbuildrm -f src/Makevars BiocParallel/configure0000755000175200017520000034170314516024321016042 0ustar00biocbuildbiocbuild#! /bin/sh # Guess values for system-dependent variables and create Makefiles. # Generated by GNU Autoconf 2.71 for BiocParallel 1.32.4. # # # Copyright (C) 1992-1996, 1998-2017, 2020-2021 Free Software Foundation, # Inc. # # # This configure script is free software; the Free Software Foundation # gives unlimited permission to copy, distribute and modify it. ## -------------------- ## ## M4sh Initialization. ## ## -------------------- ## # Be more Bourne compatible DUALCASE=1; export DUALCASE # for MKS sh as_nop=: if test ${ZSH_VERSION+y} && (emulate sh) >/dev/null 2>&1 then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else $as_nop case `(set -o) 2>/dev/null` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi # Reset variables that may have inherited troublesome values from # the environment. # IFS needs to be set, to space, tab, and newline, in precisely that order. # (If _AS_PATH_WALK were called with IFS unset, it would have the # side effect of setting IFS to empty, thus disabling word splitting.)
# Quoting is to prevent editors from complaining about space-tab. as_nl=' ' export as_nl IFS=" "" $as_nl" PS1='$ ' PS2='> ' PS4='+ ' # Ensure predictable behavior from utilities with locale-dependent output. LC_ALL=C export LC_ALL LANGUAGE=C export LANGUAGE # We cannot yet rely on "unset" to work, but we need these variables # to be unset--not just set to an empty or harmless value--now, to # avoid bugs in old shells (e.g. pre-3.0 UWIN ksh). This construct # also avoids known problems related to "unset" and subshell syntax # in other old shells (e.g. bash 2.01 and pdksh 5.2.14). for as_var in BASH_ENV ENV MAIL MAILPATH CDPATH do eval test \${$as_var+y} \ && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || : done # Ensure that fds 0, 1, and 2 are open. if (exec 3>&0) 2>/dev/null; then :; else exec 0&1) 2>/dev/null; then :; else exec 1>/dev/null; fi if (exec 3>&2) ; then :; else exec 2>/dev/null; fi # The user is always right. if ${PATH_SEPARATOR+false} :; then PATH_SEPARATOR=: (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && { (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 || PATH_SEPARATOR=';' } fi # Find who we are. Look in the path if we contain no directory separator. as_myself= case $0 in #(( *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac test -r "$as_dir$0" && as_myself=$as_dir$0 && break done IFS=$as_save_IFS ;; esac # We did not find ourselves, most probably we were run as `sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! -f "$as_myself"; then printf "%s\n" "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2 exit 1 fi # Use a proper internal environment variable to ensure we don't fall # into an infinite loop, continuously re-executing ourselves. 
if test x"${_as_can_reexec}" != xno && test "x$CONFIG_SHELL" != x; then _as_can_reexec=no; export _as_can_reexec; # We cannot yet assume a decent shell, so we have to provide a # neutralization value for shells without unset; and this also # works around shells that cannot unset nonexistent variables. # Preserve -v and -x to the replacement shell. BASH_ENV=/dev/null ENV=/dev/null (unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV case $- in # (((( *v*x* | *x*v* ) as_opts=-vx ;; *v* ) as_opts=-v ;; *x* ) as_opts=-x ;; * ) as_opts= ;; esac exec $CONFIG_SHELL $as_opts "$as_myself" ${1+"$@"} # Admittedly, this is quite paranoid, since all the known shells bail # out after a failed `exec'. printf "%s\n" "$0: could not re-execute with $CONFIG_SHELL" >&2 exit 255 fi # We don't want this to propagate to other subprocesses. { _as_can_reexec=; unset _as_can_reexec;} if test "x$CONFIG_SHELL" = x; then as_bourne_compatible="as_nop=: if test \${ZSH_VERSION+y} && (emulate sh) >/dev/null 2>&1 then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on \${1+\"\$@\"}, which # is contrary to our usage. Disable this feature. alias -g '\${1+\"\$@\"}'='\"\$@\"' setopt NO_GLOB_SUBST else \$as_nop case \`(set -o) 2>/dev/null\` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi " as_required="as_fn_return () { (exit \$1); } as_fn_success () { as_fn_return 0; } as_fn_failure () { as_fn_return 1; } as_fn_ret_success () { return 0; } as_fn_ret_failure () { return 1; } exitcode=0 as_fn_success || { exitcode=1; echo as_fn_success failed.; } as_fn_failure && { exitcode=1; echo as_fn_failure succeeded.; } as_fn_ret_success || { exitcode=1; echo as_fn_ret_success failed.; } as_fn_ret_failure && { exitcode=1; echo as_fn_ret_failure succeeded.; } if ( set x; as_fn_ret_success y && test x = \"\$1\" ) then : else \$as_nop exitcode=1; echo positional parameters were not saved. 
fi test x\$exitcode = x0 || exit 1 blah=\$(echo \$(echo blah)) test x\"\$blah\" = xblah || exit 1 test -x / || exit 1" as_suggested=" as_lineno_1=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_1a=\$LINENO as_lineno_2=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_2a=\$LINENO eval 'test \"x\$as_lineno_1'\$as_run'\" != \"x\$as_lineno_2'\$as_run'\" && test \"x\`expr \$as_lineno_1'\$as_run' + 1\`\" = \"x\$as_lineno_2'\$as_run'\"' || exit 1" if (eval "$as_required") 2>/dev/null then : as_have_required=yes else $as_nop as_have_required=no fi if test x$as_have_required = xyes && (eval "$as_suggested") 2>/dev/null then : else $as_nop as_save_IFS=$IFS; IFS=$PATH_SEPARATOR as_found=false for as_dir in /bin$PATH_SEPARATOR/usr/bin$PATH_SEPARATOR$PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac as_found=: case $as_dir in #( /*) for as_base in sh bash ksh sh5; do # Try only shells that exist, to save several forks. as_shell=$as_dir$as_base if { test -f "$as_shell" || test -f "$as_shell.exe"; } && as_run=a "$as_shell" -c "$as_bourne_compatible""$as_required" 2>/dev/null then : CONFIG_SHELL=$as_shell as_have_required=yes if as_run=a "$as_shell" -c "$as_bourne_compatible""$as_suggested" 2>/dev/null then : break 2 fi fi done;; esac as_found=false done IFS=$as_save_IFS if $as_found then : else $as_nop if { test -f "$SHELL" || test -f "$SHELL.exe"; } && as_run=a "$SHELL" -c "$as_bourne_compatible""$as_required" 2>/dev/null then : CONFIG_SHELL=$SHELL as_have_required=yes fi fi if test "x$CONFIG_SHELL" != x then : export CONFIG_SHELL # We cannot yet assume a decent shell, so we have to provide a # neutralization value for shells without unset; and this also # works around shells that cannot unset nonexistent variables. # Preserve -v and -x to the replacement shell. 
BASH_ENV=/dev/null ENV=/dev/null (unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV case $- in # (((( *v*x* | *x*v* ) as_opts=-vx ;; *v* ) as_opts=-v ;; *x* ) as_opts=-x ;; * ) as_opts= ;; esac exec $CONFIG_SHELL $as_opts "$as_myself" ${1+"$@"} # Admittedly, this is quite paranoid, since all the known shells bail # out after a failed `exec'. printf "%s\n" "$0: could not re-execute with $CONFIG_SHELL" >&2 exit 255 fi if test x$as_have_required = xno then : printf "%s\n" "$0: This script requires a shell more modern than all" printf "%s\n" "$0: the shells that I found on your system." if test ${ZSH_VERSION+y} ; then printf "%s\n" "$0: In particular, zsh $ZSH_VERSION has bugs and should" printf "%s\n" "$0: be upgraded to zsh 4.3.4 or later." else printf "%s\n" "$0: Please tell bug-autoconf@gnu.org about your system, $0: including any error possibly output before this $0: message. Then install a modern shell, or manually run $0: the script under such a shell if you do have one." fi exit 1 fi fi fi SHELL=${CONFIG_SHELL-/bin/sh} export SHELL # Unset more variables known to interfere with behavior of common tools. CLICOLOR_FORCE= GREP_OPTIONS= unset CLICOLOR_FORCE GREP_OPTIONS ## --------------------- ## ## M4sh Shell Functions. ## ## --------------------- ## # as_fn_unset VAR # --------------- # Portably unset VAR. as_fn_unset () { { eval $1=; unset $1;} } as_unset=as_fn_unset # as_fn_set_status STATUS # ----------------------- # Set $? to STATUS, without forking. as_fn_set_status () { return $1 } # as_fn_set_status # as_fn_exit STATUS # ----------------- # Exit the shell with STATUS, even in a "trap 0" or "set -e" context. as_fn_exit () { set +e as_fn_set_status $1 exit $1 } # as_fn_exit # as_fn_nop # --------- # Do nothing but, unlike ":", preserve the value of $?. as_fn_nop () { return $? } as_nop=as_fn_nop # as_fn_mkdir_p # ------------- # Create "$as_dir" as a directory, including parents if necessary. 
as_fn_mkdir_p () { case $as_dir in #( -*) as_dir=./$as_dir;; esac test -d "$as_dir" || eval $as_mkdir_p || { as_dirs= while :; do case $as_dir in #( *\'*) as_qdir=`printf "%s\n" "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #'( *) as_qdir=$as_dir;; esac as_dirs="'$as_qdir' $as_dirs" as_dir=`$as_dirname -- "$as_dir" || $as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_dir" : 'X\(//\)[^/]' \| \ X"$as_dir" : 'X\(//\)$' \| \ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null || printf "%s\n" X"$as_dir" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` test -d "$as_dir" && break done test -z "$as_dirs" || eval "mkdir $as_dirs" } || test -d "$as_dir" || as_fn_error $? "cannot create directory $as_dir" } # as_fn_mkdir_p # as_fn_executable_p FILE # ----------------------- # Test if FILE is an executable regular file. as_fn_executable_p () { test -f "$1" && test -x "$1" } # as_fn_executable_p # as_fn_append VAR VALUE # ---------------------- # Append the text in VALUE to the end of the definition contained in VAR. Take # advantage of any shell optimizations that allow amortized linear growth over # repeated appends, instead of the typical quadratic growth present in naive # implementations. if (eval "as_var=1; as_var+=2; test x\$as_var = x12") 2>/dev/null then : eval 'as_fn_append () { eval $1+=\$2 }' else $as_nop as_fn_append () { eval $1=\$$1\$2 } fi # as_fn_append # as_fn_arith ARG... # ------------------ # Perform arithmetic evaluation on the ARGs, and store the result in the # global $as_val. Take advantage of shells that can avoid forks. The arguments # must be portable across $(()) and expr. if (eval "test \$(( 1 + 1 )) = 2") 2>/dev/null then : eval 'as_fn_arith () { as_val=$(( $* )) }' else $as_nop as_fn_arith () { as_val=`expr "$@" || test $? -eq 1` } fi # as_fn_arith # as_fn_nop # --------- # Do nothing but, unlike ":", preserve the value of $?. as_fn_nop () { return $? 
} as_nop=as_fn_nop # as_fn_error STATUS ERROR [LINENO LOG_FD] # ---------------------------------------- # Output "`basename $0`: error: ERROR" to stderr. If LINENO and LOG_FD are # provided, also output the error to LOG_FD, referencing LINENO. Then exit the # script with STATUS, using 1 if that was 0. as_fn_error () { as_status=$1; test $as_status -eq 0 && as_status=1 if test "$4"; then as_lineno=${as_lineno-"$3"} as_lineno_stack=as_lineno_stack=$as_lineno_stack printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: $2" >&$4 fi printf "%s\n" "$as_me: error: $2" >&2 as_fn_exit $as_status } # as_fn_error if expr a : '\(a\)' >/dev/null 2>&1 && test "X`expr 00001 : '.*\(...\)'`" = X001; then as_expr=expr else as_expr=false fi if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then as_basename=basename else as_basename=false fi if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then as_dirname=dirname else as_dirname=false fi as_me=`$as_basename -- "$0" || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)' \| . 2>/dev/null || printf "%s\n" X/"$0" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/ q } /^X\/\(\/\/\)$/{ s//\1/ q } /^X\/\(\/\).*/{ s//\1/ q } s/.*/./; q'` # Avoid depending upon Character Ranges. as_cr_letters='abcdefghijklmnopqrstuvwxyz' as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ' as_cr_Letters=$as_cr_letters$as_cr_LETTERS as_cr_digits='0123456789' as_cr_alnum=$as_cr_Letters$as_cr_digits as_lineno_1=$LINENO as_lineno_1a=$LINENO as_lineno_2=$LINENO as_lineno_2a=$LINENO eval 'test "x$as_lineno_1'$as_run'" != "x$as_lineno_2'$as_run'" && test "x`expr $as_lineno_1'$as_run' + 1`" = "x$as_lineno_2'$as_run'"' || { # Blame Lee E. McMahon (1931-1989) for sed's syntax. 
:-) sed -n ' p /[$]LINENO/= ' <$as_myself | sed ' s/[$]LINENO.*/&-/ t lineno b :lineno N :loop s/[$]LINENO\([^'$as_cr_alnum'_].*\n\)\(.*\)/\2\1\2/ t loop s/-\n.*// ' >$as_me.lineno && chmod +x "$as_me.lineno" || { printf "%s\n" "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2; as_fn_exit 1; } # If we had to re-execute with $CONFIG_SHELL, we're ensured to have # already done that, so ensure we don't try to do so again and fall # in an infinite loop. This has already happened in practice. _as_can_reexec=no; export _as_can_reexec # Don't try to exec as it changes $[0], causing all sort of problems # (the dirname of $[0] is not the place where we might find the # original and so on. Autoconf is especially sensitive to this). . "./$as_me.lineno" # Exit status is that of the last command. exit } # Determine whether it's possible to make 'echo' print without a newline. # These variables are no longer used directly by Autoconf, but are AC_SUBSTed # for compatibility with existing Makefiles. ECHO_C= ECHO_N= ECHO_T= case `echo -n x` in #((((( -n*) case `echo 'xy\c'` in *c*) ECHO_T=' ';; # ECHO_T is single tab character. xy) ECHO_C='\c';; *) echo `echo ksh88 bug on AIX 6.1` > /dev/null ECHO_T=' ';; esac;; *) ECHO_N='-n';; esac # For backward compatibility with old third-party macros, we provide # the shell variables $as_echo and $as_echo_n. New code should use # AS_ECHO(["message"]) and AS_ECHO_N(["message"]), respectively. as_echo='printf %s\n' as_echo_n='printf %s' rm -f conf$$ conf$$.exe conf$$.file if test -d conf$$.dir; then rm -f conf$$.dir/conf$$.file else rm -f conf$$.dir mkdir conf$$.dir 2>/dev/null fi if (echo >conf$$.file) 2>/dev/null; then if ln -s conf$$.file conf$$ 2>/dev/null; then as_ln_s='ln -s' # ... but there are two gotchas: # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail. # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable. # In both cases, we have to default to `cp -pR'. 
ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe || as_ln_s='cp -pR' elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else as_ln_s='cp -pR' fi else as_ln_s='cp -pR' fi rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file rmdir conf$$.dir 2>/dev/null if mkdir -p . 2>/dev/null; then as_mkdir_p='mkdir -p "$as_dir"' else test -d ./-p && rmdir ./-p as_mkdir_p=false fi as_test_x='test -x' as_executable_p=as_fn_executable_p # Sed expression to map a string onto a valid CPP name. as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'" # Sed expression to map a string onto a valid variable name. as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'" test -n "$DJDIR" || exec 7<&0 &1 # Name of the host. # hostname on some systems (SVR3.2, old GNU/Linux) returns a bogus exit status, # so uname gets run too. ac_hostname=`(hostname || uname -n) 2>/dev/null | sed 1q` # # Initializations. # ac_default_prefix=/usr/local ac_clean_files= ac_config_libobj_dir=. LIBOBJS= cross_compiling=no subdirs= MFLAGS= MAKEFLAGS= # Identity of this package. PACKAGE_NAME='BiocParallel' PACKAGE_TARNAME='biocparallel' PACKAGE_VERSION='1.32.4' PACKAGE_STRING='BiocParallel 1.32.4' PACKAGE_BUGREPORT='' PACKAGE_URL='' # Factoring default headers for most tests. 
ac_includes_default="\
#include <stddef.h>
#ifdef HAVE_STDIO_H
# include <stdio.h>
#endif
#ifdef HAVE_STDLIB_H
# include <stdlib.h>
#endif
#ifdef HAVE_STRING_H
# include <string.h>
#endif
#ifdef HAVE_INTTYPES_H
# include <inttypes.h>
#endif
#ifdef HAVE_STDINT_H
# include <stdint.h>
#endif
#ifdef HAVE_STRINGS_H
# include <strings.h>
#endif
#ifdef HAVE_SYS_TYPES_H
# include <sys/types.h>
#endif
#ifdef HAVE_SYS_STAT_H
# include <sys/stat.h>
#endif
#ifdef HAVE_UNISTD_H
# include <unistd.h>
#endif"

ac_header_cxx_list=
ac_subst_vars='LTLIBOBJS
LIBOBJS
OBJEXT
EXEEXT
ac_ct_CXX
CPPFLAGS
LDFLAGS
CXXFLAGS
CXX
target_alias
host_alias
build_alias
LIBS
ECHO_T
ECHO_N
ECHO_C
DEFS
mandir
localedir
libdir
psdir
pdfdir
dvidir
htmldir
infodir
docdir
oldincludedir
includedir
runstatedir
localstatedir
sharedstatedir
sysconfdir
datadir
datarootdir
libexecdir
sbindir
bindir
program_transform_name
prefix
exec_prefix
PACKAGE_URL
PACKAGE_BUGREPORT
PACKAGE_STRING
PACKAGE_VERSION
PACKAGE_TARNAME
PACKAGE_NAME
PATH_SEPARATOR
SHELL'
ac_subst_files=''
ac_user_opts='
enable_option_checking
'
ac_precious_vars='build_alias
host_alias
target_alias
CXX
CXXFLAGS
LDFLAGS
LIBS
CPPFLAGS
CCC'


# Initialize some variables set by options.
ac_init_help=
ac_init_version=false
ac_unrecognized_opts=
ac_unrecognized_sep=
# The variables have the same names as the options, with
# dashes changed to underlines.
cache_file=/dev/null
exec_prefix=NONE
no_create=
no_recursion=
prefix=NONE
program_prefix=NONE
program_suffix=NONE
program_transform_name=s,x,x,
silent=
site=
srcdir=
verbose=
x_includes=NONE
x_libraries=NONE

# Installation directory options.
# These are left unexpanded so users can "make install exec_prefix=/foo"
# and all the variables that are supposed to be based on exec_prefix
# by default will actually change.
# Use braces instead of parens because sh, perl, etc. also accept them.
# (The list follows the same order as the GNU Coding Standards.)
bindir='${exec_prefix}/bin' sbindir='${exec_prefix}/sbin' libexecdir='${exec_prefix}/libexec' datarootdir='${prefix}/share' datadir='${datarootdir}' sysconfdir='${prefix}/etc' sharedstatedir='${prefix}/com' localstatedir='${prefix}/var' runstatedir='${localstatedir}/run' includedir='${prefix}/include' oldincludedir='/usr/include' docdir='${datarootdir}/doc/${PACKAGE_TARNAME}' infodir='${datarootdir}/info' htmldir='${docdir}' dvidir='${docdir}' pdfdir='${docdir}' psdir='${docdir}' libdir='${exec_prefix}/lib' localedir='${datarootdir}/locale' mandir='${datarootdir}/man' ac_prev= ac_dashdash= for ac_option do # If the previous option needs an argument, assign it. if test -n "$ac_prev"; then eval $ac_prev=\$ac_option ac_prev= continue fi case $ac_option in *=?*) ac_optarg=`expr "X$ac_option" : '[^=]*=\(.*\)'` ;; *=) ac_optarg= ;; *) ac_optarg=yes ;; esac case $ac_dashdash$ac_option in --) ac_dashdash=yes ;; -bindir | --bindir | --bindi | --bind | --bin | --bi) ac_prev=bindir ;; -bindir=* | --bindir=* | --bindi=* | --bind=* | --bin=* | --bi=*) bindir=$ac_optarg ;; -build | --build | --buil | --bui | --bu) ac_prev=build_alias ;; -build=* | --build=* | --buil=* | --bui=* | --bu=*) build_alias=$ac_optarg ;; -cache-file | --cache-file | --cache-fil | --cache-fi \ | --cache-f | --cache- | --cache | --cach | --cac | --ca | --c) ac_prev=cache_file ;; -cache-file=* | --cache-file=* | --cache-fil=* | --cache-fi=* \ | --cache-f=* | --cache-=* | --cache=* | --cach=* | --cac=* | --ca=* | --c=*) cache_file=$ac_optarg ;; --config-cache | -C) cache_file=config.cache ;; -datadir | --datadir | --datadi | --datad) ac_prev=datadir ;; -datadir=* | --datadir=* | --datadi=* | --datad=*) datadir=$ac_optarg ;; -datarootdir | --datarootdir | --datarootdi | --datarootd | --dataroot \ | --dataroo | --dataro | --datar) ac_prev=datarootdir ;; -datarootdir=* | --datarootdir=* | --datarootdi=* | --datarootd=* \ | --dataroot=* | --dataroo=* | --dataro=* | --datar=*) datarootdir=$ac_optarg ;; 
-disable-* | --disable-*) ac_useropt=`expr "x$ac_option" : 'x-*disable-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid feature name: \`$ac_useropt'" ac_useropt_orig=$ac_useropt ac_useropt=`printf "%s\n" "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "enable_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--disable-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval enable_$ac_useropt=no ;; -docdir | --docdir | --docdi | --doc | --do) ac_prev=docdir ;; -docdir=* | --docdir=* | --docdi=* | --doc=* | --do=*) docdir=$ac_optarg ;; -dvidir | --dvidir | --dvidi | --dvid | --dvi | --dv) ac_prev=dvidir ;; -dvidir=* | --dvidir=* | --dvidi=* | --dvid=* | --dvi=* | --dv=*) dvidir=$ac_optarg ;; -enable-* | --enable-*) ac_useropt=`expr "x$ac_option" : 'x-*enable-\([^=]*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid feature name: \`$ac_useropt'" ac_useropt_orig=$ac_useropt ac_useropt=`printf "%s\n" "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "enable_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--enable-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval enable_$ac_useropt=\$ac_optarg ;; -exec-prefix | --exec_prefix | --exec-prefix | --exec-prefi \ | --exec-pref | --exec-pre | --exec-pr | --exec-p | --exec- \ | --exec | --exe | --ex) ac_prev=exec_prefix ;; -exec-prefix=* | --exec_prefix=* | --exec-prefix=* | --exec-prefi=* \ | --exec-pref=* | --exec-pre=* | --exec-pr=* | --exec-p=* | --exec-=* \ | --exec=* | --exe=* | --ex=*) exec_prefix=$ac_optarg ;; -gas | --gas | --ga | --g) # Obsolete; use --with-gas. 
with_gas=yes ;; -help | --help | --hel | --he | -h) ac_init_help=long ;; -help=r* | --help=r* | --hel=r* | --he=r* | -hr*) ac_init_help=recursive ;; -help=s* | --help=s* | --hel=s* | --he=s* | -hs*) ac_init_help=short ;; -host | --host | --hos | --ho) ac_prev=host_alias ;; -host=* | --host=* | --hos=* | --ho=*) host_alias=$ac_optarg ;; -htmldir | --htmldir | --htmldi | --htmld | --html | --htm | --ht) ac_prev=htmldir ;; -htmldir=* | --htmldir=* | --htmldi=* | --htmld=* | --html=* | --htm=* \ | --ht=*) htmldir=$ac_optarg ;; -includedir | --includedir | --includedi | --included | --include \ | --includ | --inclu | --incl | --inc) ac_prev=includedir ;; -includedir=* | --includedir=* | --includedi=* | --included=* | --include=* \ | --includ=* | --inclu=* | --incl=* | --inc=*) includedir=$ac_optarg ;; -infodir | --infodir | --infodi | --infod | --info | --inf) ac_prev=infodir ;; -infodir=* | --infodir=* | --infodi=* | --infod=* | --info=* | --inf=*) infodir=$ac_optarg ;; -libdir | --libdir | --libdi | --libd) ac_prev=libdir ;; -libdir=* | --libdir=* | --libdi=* | --libd=*) libdir=$ac_optarg ;; -libexecdir | --libexecdir | --libexecdi | --libexecd | --libexec \ | --libexe | --libex | --libe) ac_prev=libexecdir ;; -libexecdir=* | --libexecdir=* | --libexecdi=* | --libexecd=* | --libexec=* \ | --libexe=* | --libex=* | --libe=*) libexecdir=$ac_optarg ;; -localedir | --localedir | --localedi | --localed | --locale) ac_prev=localedir ;; -localedir=* | --localedir=* | --localedi=* | --localed=* | --locale=*) localedir=$ac_optarg ;; -localstatedir | --localstatedir | --localstatedi | --localstated \ | --localstate | --localstat | --localsta | --localst | --locals) ac_prev=localstatedir ;; -localstatedir=* | --localstatedir=* | --localstatedi=* | --localstated=* \ | --localstate=* | --localstat=* | --localsta=* | --localst=* | --locals=*) localstatedir=$ac_optarg ;; -mandir | --mandir | --mandi | --mand | --man | --ma | --m) ac_prev=mandir ;; -mandir=* | --mandir=* | --mandi=* | 
--mand=* | --man=* | --ma=* | --m=*) mandir=$ac_optarg ;; -nfp | --nfp | --nf) # Obsolete; use --without-fp. with_fp=no ;; -no-create | --no-create | --no-creat | --no-crea | --no-cre \ | --no-cr | --no-c | -n) no_create=yes ;; -no-recursion | --no-recursion | --no-recursio | --no-recursi \ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r) no_recursion=yes ;; -oldincludedir | --oldincludedir | --oldincludedi | --oldincluded \ | --oldinclude | --oldinclud | --oldinclu | --oldincl | --oldinc \ | --oldin | --oldi | --old | --ol | --o) ac_prev=oldincludedir ;; -oldincludedir=* | --oldincludedir=* | --oldincludedi=* | --oldincluded=* \ | --oldinclude=* | --oldinclud=* | --oldinclu=* | --oldincl=* | --oldinc=* \ | --oldin=* | --oldi=* | --old=* | --ol=* | --o=*) oldincludedir=$ac_optarg ;; -prefix | --prefix | --prefi | --pref | --pre | --pr | --p) ac_prev=prefix ;; -prefix=* | --prefix=* | --prefi=* | --pref=* | --pre=* | --pr=* | --p=*) prefix=$ac_optarg ;; -program-prefix | --program-prefix | --program-prefi | --program-pref \ | --program-pre | --program-pr | --program-p) ac_prev=program_prefix ;; -program-prefix=* | --program-prefix=* | --program-prefi=* \ | --program-pref=* | --program-pre=* | --program-pr=* | --program-p=*) program_prefix=$ac_optarg ;; -program-suffix | --program-suffix | --program-suffi | --program-suff \ | --program-suf | --program-su | --program-s) ac_prev=program_suffix ;; -program-suffix=* | --program-suffix=* | --program-suffi=* \ | --program-suff=* | --program-suf=* | --program-su=* | --program-s=*) program_suffix=$ac_optarg ;; -program-transform-name | --program-transform-name \ | --program-transform-nam | --program-transform-na \ | --program-transform-n | --program-transform- \ | --program-transform | --program-transfor \ | --program-transfo | --program-transf \ | --program-trans | --program-tran \ | --progr-tra | --program-tr | --program-t) ac_prev=program_transform_name ;; -program-transform-name=* | 
--program-transform-name=* \ | --program-transform-nam=* | --program-transform-na=* \ | --program-transform-n=* | --program-transform-=* \ | --program-transform=* | --program-transfor=* \ | --program-transfo=* | --program-transf=* \ | --program-trans=* | --program-tran=* \ | --progr-tra=* | --program-tr=* | --program-t=*) program_transform_name=$ac_optarg ;; -pdfdir | --pdfdir | --pdfdi | --pdfd | --pdf | --pd) ac_prev=pdfdir ;; -pdfdir=* | --pdfdir=* | --pdfdi=* | --pdfd=* | --pdf=* | --pd=*) pdfdir=$ac_optarg ;; -psdir | --psdir | --psdi | --psd | --ps) ac_prev=psdir ;; -psdir=* | --psdir=* | --psdi=* | --psd=* | --ps=*) psdir=$ac_optarg ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) silent=yes ;; -runstatedir | --runstatedir | --runstatedi | --runstated \ | --runstate | --runstat | --runsta | --runst | --runs \ | --run | --ru | --r) ac_prev=runstatedir ;; -runstatedir=* | --runstatedir=* | --runstatedi=* | --runstated=* \ | --runstate=* | --runstat=* | --runsta=* | --runst=* | --runs=* \ | --run=* | --ru=* | --r=*) runstatedir=$ac_optarg ;; -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb) ac_prev=sbindir ;; -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \ | --sbi=* | --sb=*) sbindir=$ac_optarg ;; -sharedstatedir | --sharedstatedir | --sharedstatedi \ | --sharedstated | --sharedstate | --sharedstat | --sharedsta \ | --sharedst | --shareds | --shared | --share | --shar \ | --sha | --sh) ac_prev=sharedstatedir ;; -sharedstatedir=* | --sharedstatedir=* | --sharedstatedi=* \ | --sharedstated=* | --sharedstate=* | --sharedstat=* | --sharedsta=* \ | --sharedst=* | --shareds=* | --shared=* | --share=* | --shar=* \ | --sha=* | --sh=*) sharedstatedir=$ac_optarg ;; -site | --site | --sit) ac_prev=site ;; -site=* | --site=* | --sit=*) site=$ac_optarg ;; -srcdir | --srcdir | --srcdi | --srcd | --src | --sr) ac_prev=srcdir ;; -srcdir=* | --srcdir=* | --srcdi=* | --srcd=* | --src=* | 
--sr=*) srcdir=$ac_optarg ;; -sysconfdir | --sysconfdir | --sysconfdi | --sysconfd | --sysconf \ | --syscon | --sysco | --sysc | --sys | --sy) ac_prev=sysconfdir ;; -sysconfdir=* | --sysconfdir=* | --sysconfdi=* | --sysconfd=* | --sysconf=* \ | --syscon=* | --sysco=* | --sysc=* | --sys=* | --sy=*) sysconfdir=$ac_optarg ;; -target | --target | --targe | --targ | --tar | --ta | --t) ac_prev=target_alias ;; -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*) target_alias=$ac_optarg ;; -v | -verbose | --verbose | --verbos | --verbo | --verb) verbose=yes ;; -version | --version | --versio | --versi | --vers | -V) ac_init_version=: ;; -with-* | --with-*) ac_useropt=`expr "x$ac_option" : 'x-*with-\([^=]*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid package name: \`$ac_useropt'" ac_useropt_orig=$ac_useropt ac_useropt=`printf "%s\n" "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "with_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--with-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval with_$ac_useropt=\$ac_optarg ;; -without-* | --without-*) ac_useropt=`expr "x$ac_option" : 'x-*without-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid package name: \`$ac_useropt'" ac_useropt_orig=$ac_useropt ac_useropt=`printf "%s\n" "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "with_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--without-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval with_$ac_useropt=no ;; --x) # Obsolete; use --with-x. 
with_x=yes ;; -x-includes | --x-includes | --x-include | --x-includ | --x-inclu \ | --x-incl | --x-inc | --x-in | --x-i) ac_prev=x_includes ;; -x-includes=* | --x-includes=* | --x-include=* | --x-includ=* | --x-inclu=* \ | --x-incl=* | --x-inc=* | --x-in=* | --x-i=*) x_includes=$ac_optarg ;; -x-libraries | --x-libraries | --x-librarie | --x-librari \ | --x-librar | --x-libra | --x-libr | --x-lib | --x-li | --x-l) ac_prev=x_libraries ;; -x-libraries=* | --x-libraries=* | --x-librarie=* | --x-librari=* \ | --x-librar=* | --x-libra=* | --x-libr=* | --x-lib=* | --x-li=* | --x-l=*) x_libraries=$ac_optarg ;; -*) as_fn_error $? "unrecognized option: \`$ac_option' Try \`$0 --help' for more information" ;; *=*) ac_envvar=`expr "x$ac_option" : 'x\([^=]*\)='` # Reject names that are not valid shell variable names. case $ac_envvar in #( '' | [0-9]* | *[!_$as_cr_alnum]* ) as_fn_error $? "invalid variable name: \`$ac_envvar'" ;; esac eval $ac_envvar=\$ac_optarg export $ac_envvar ;; *) # FIXME: should be removed in autoconf 3.0. printf "%s\n" "$as_me: WARNING: you should use --build, --host, --target" >&2 expr "x$ac_option" : ".*[^-._$as_cr_alnum]" >/dev/null && printf "%s\n" "$as_me: WARNING: invalid host type: $ac_option" >&2 : "${build_alias=$ac_option} ${host_alias=$ac_option} ${target_alias=$ac_option}" ;; esac done if test -n "$ac_prev"; then ac_option=--`echo $ac_prev | sed 's/_/-/g'` as_fn_error $? "missing argument to $ac_option" fi if test -n "$ac_unrecognized_opts"; then case $enable_option_checking in no) ;; fatal) as_fn_error $? "unrecognized options: $ac_unrecognized_opts" ;; *) printf "%s\n" "$as_me: WARNING: unrecognized options: $ac_unrecognized_opts" >&2 ;; esac fi # Check all directory arguments for consistency. 
for ac_var in exec_prefix prefix bindir sbindir libexecdir datarootdir \ datadir sysconfdir sharedstatedir localstatedir includedir \ oldincludedir docdir infodir htmldir dvidir pdfdir psdir \ libdir localedir mandir runstatedir do eval ac_val=\$$ac_var # Remove trailing slashes. case $ac_val in */ ) ac_val=`expr "X$ac_val" : 'X\(.*[^/]\)' \| "X$ac_val" : 'X\(.*\)'` eval $ac_var=\$ac_val;; esac # Be sure to have absolute directory names. case $ac_val in [\\/$]* | ?:[\\/]* ) continue;; NONE | '' ) case $ac_var in *prefix ) continue;; esac;; esac as_fn_error $? "expected an absolute directory name for --$ac_var: $ac_val" done # There might be people who depend on the old broken behavior: `$host' # used to hold the argument of --host etc. # FIXME: To remove some day. build=$build_alias host=$host_alias target=$target_alias # FIXME: To remove some day. if test "x$host_alias" != x; then if test "x$build_alias" = x; then cross_compiling=maybe elif test "x$build_alias" != "x$host_alias"; then cross_compiling=yes fi fi ac_tool_prefix= test -n "$host_alias" && ac_tool_prefix=$host_alias- test "$silent" = yes && exec 6>/dev/null ac_pwd=`pwd` && test -n "$ac_pwd" && ac_ls_di=`ls -di .` && ac_pwd_ls_di=`cd "$ac_pwd" && ls -di .` || as_fn_error $? "working directory cannot be determined" test "X$ac_ls_di" = "X$ac_pwd_ls_di" || as_fn_error $? "pwd does not report name of working directory" # Find the source files, if location was not specified. if test -z "$srcdir"; then ac_srcdir_defaulted=yes # Try the directory containing this script, then the parent directory. ac_confdir=`$as_dirname -- "$as_myself" || $as_expr X"$as_myself" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_myself" : 'X\(//\)[^/]' \| \ X"$as_myself" : 'X\(//\)$' \| \ X"$as_myself" : 'X\(/\)' \| . 2>/dev/null || printf "%s\n" X"$as_myself" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` srcdir=$ac_confdir if test ! 
-r "$srcdir/$ac_unique_file"; then srcdir=.. fi else ac_srcdir_defaulted=no fi if test ! -r "$srcdir/$ac_unique_file"; then test "$ac_srcdir_defaulted" = yes && srcdir="$ac_confdir or .." as_fn_error $? "cannot find sources ($ac_unique_file) in $srcdir" fi ac_msg="sources are in $srcdir, but \`cd $srcdir' does not work" ac_abs_confdir=`( cd "$srcdir" && test -r "./$ac_unique_file" || as_fn_error $? "$ac_msg" pwd)` # When building in place, set srcdir=. if test "$ac_abs_confdir" = "$ac_pwd"; then srcdir=. fi # Remove unnecessary trailing slashes from srcdir. # Double slashes in file names in object file debugging info # mess up M-x gdb in Emacs. case $srcdir in */) srcdir=`expr "X$srcdir" : 'X\(.*[^/]\)' \| "X$srcdir" : 'X\(.*\)'`;; esac for ac_var in $ac_precious_vars; do eval ac_env_${ac_var}_set=\${${ac_var}+set} eval ac_env_${ac_var}_value=\$${ac_var} eval ac_cv_env_${ac_var}_set=\${${ac_var}+set} eval ac_cv_env_${ac_var}_value=\$${ac_var} done # # Report the --help message. # if test "$ac_init_help" = "long"; then # Omit some internal or obsolete options to make the list less imposing. # This message is too long to be a string in the A/UX 3.1 sh. cat <<_ACEOF \`configure' configures BiocParallel 1.32.4 to adapt to many kinds of systems. Usage: $0 [OPTION]... [VAR=VALUE]... To assign environment variables (e.g., CC, CFLAGS...), specify them as VAR=VALUE. See below for descriptions of some of the useful variables. Defaults for the options are specified in brackets. Configuration: -h, --help display this help and exit --help=short display options specific to this package --help=recursive display the short help of all the included packages -V, --version display version information and exit -q, --quiet, --silent do not print \`checking ...' 
messages --cache-file=FILE cache test results in FILE [disabled] -C, --config-cache alias for \`--cache-file=config.cache' -n, --no-create do not create output files --srcdir=DIR find the sources in DIR [configure dir or \`..'] Installation directories: --prefix=PREFIX install architecture-independent files in PREFIX [$ac_default_prefix] --exec-prefix=EPREFIX install architecture-dependent files in EPREFIX [PREFIX] By default, \`make install' will install all the files in \`$ac_default_prefix/bin', \`$ac_default_prefix/lib' etc. You can specify an installation prefix other than \`$ac_default_prefix' using \`--prefix', for instance \`--prefix=\$HOME'. For better control, use the options below. Fine tuning of the installation directories: --bindir=DIR user executables [EPREFIX/bin] --sbindir=DIR system admin executables [EPREFIX/sbin] --libexecdir=DIR program executables [EPREFIX/libexec] --sysconfdir=DIR read-only single-machine data [PREFIX/etc] --sharedstatedir=DIR modifiable architecture-independent data [PREFIX/com] --localstatedir=DIR modifiable single-machine data [PREFIX/var] --runstatedir=DIR modifiable per-process data [LOCALSTATEDIR/run] --libdir=DIR object code libraries [EPREFIX/lib] --includedir=DIR C header files [PREFIX/include] --oldincludedir=DIR C header files for non-gcc [/usr/include] --datarootdir=DIR read-only arch.-independent data root [PREFIX/share] --datadir=DIR read-only architecture-independent data [DATAROOTDIR] --infodir=DIR info documentation [DATAROOTDIR/info] --localedir=DIR locale-dependent data [DATAROOTDIR/locale] --mandir=DIR man documentation [DATAROOTDIR/man] --docdir=DIR documentation root [DATAROOTDIR/doc/biocparallel] --htmldir=DIR html documentation [DOCDIR] --dvidir=DIR dvi documentation [DOCDIR] --pdfdir=DIR pdf documentation [DOCDIR] --psdir=DIR ps documentation [DOCDIR] _ACEOF cat <<\_ACEOF _ACEOF fi if test -n "$ac_init_help"; then case $ac_init_help in short | recursive ) echo "Configuration of BiocParallel 1.32.4:";; 
esac
  cat <<\_ACEOF

Some influential environment variables:
  CXX         C++ compiler command
  CXXFLAGS    C++ compiler flags
  LDFLAGS     linker flags, e.g. -L<lib dir> if you have libraries in a
              nonstandard directory <lib dir>
  LIBS        libraries to pass to the linker, e.g. -l<library>
  CPPFLAGS    (Objective) C/C++ preprocessor flags, e.g. -I<include dir> if
              you have headers in a nonstandard directory <include dir>

Use these variables to override the choices made by `configure' or to help
it to find libraries and programs with nonstandard names/locations.

Report bugs to the package provider.
_ACEOF
ac_status=$?
fi

if test "$ac_init_help" = "recursive"; then
  # If there are subdirs, report their specific --help.
  for ac_dir in : $ac_subdirs_all; do test "x$ac_dir" = x: && continue
    test -d "$ac_dir" ||
      { cd "$srcdir" && ac_pwd=`pwd` && srcdir=. && test -d "$ac_dir"; } ||
      continue
    ac_builddir=.

case "$ac_dir" in
.) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;;
*)
  ac_dir_suffix=/`printf "%s\n" "$ac_dir" | sed 's|^\.[\\/]||'`
  # A ".." for each directory in $ac_dir_suffix.
  ac_top_builddir_sub=`printf "%s\n" "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'`
  case $ac_top_builddir_sub in
  "") ac_top_builddir_sub=. ac_top_build_prefix= ;;
  *)  ac_top_build_prefix=$ac_top_builddir_sub/ ;;
  esac ;;
esac
ac_abs_top_builddir=$ac_pwd
ac_abs_builddir=$ac_pwd$ac_dir_suffix
# for backward compatibility:
ac_top_builddir=$ac_top_build_prefix

case $srcdir in
  .)  # We are building in place.
    ac_srcdir=.
    ac_top_srcdir=$ac_top_builddir_sub
    ac_abs_top_srcdir=$ac_pwd ;;
  [\\/]* | ?:[\\/]* )  # Absolute name.
    ac_srcdir=$srcdir$ac_dir_suffix;
    ac_top_srcdir=$srcdir
    ac_abs_top_srcdir=$srcdir ;;
  *) # Relative name.
    ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix
    ac_top_srcdir=$ac_top_build_prefix$srcdir
    ac_abs_top_srcdir=$ac_pwd/$srcdir ;;
esac
ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix

    cd "$ac_dir" || { ac_status=$?; continue; }
    # Check for configure.gnu first; this name is used for a wrapper for
    # Metaconfig's "Configure" on case-insensitive file systems.
if test -f "$ac_srcdir/configure.gnu"; then echo && $SHELL "$ac_srcdir/configure.gnu" --help=recursive elif test -f "$ac_srcdir/configure"; then echo && $SHELL "$ac_srcdir/configure" --help=recursive else printf "%s\n" "$as_me: WARNING: no configuration information is in $ac_dir" >&2 fi || ac_status=$? cd "$ac_pwd" || { ac_status=$?; break; } done fi test -n "$ac_init_help" && exit $ac_status if $ac_init_version; then cat <<\_ACEOF BiocParallel configure 1.32.4 generated by GNU Autoconf 2.71 Copyright (C) 2021 Free Software Foundation, Inc. This configure script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it. _ACEOF exit fi ## ------------------------ ## ## Autoconf initialization. ## ## ------------------------ ## # ac_fn_cxx_try_compile LINENO # ---------------------------- # Try to compile conftest.$ac_ext, and return whether this succeeded. ac_fn_cxx_try_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext conftest.beam if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_cxx_werror_flag" || test ! 
-s conftest.err } && test -s conftest.$ac_objext then : ac_retval=0 else $as_nop printf "%s\n" "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_cxx_try_compile # ac_fn_cxx_try_link LINENO # ------------------------- # Try to link conftest.$ac_ext, and return whether this succeeded. ac_fn_cxx_try_link () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext conftest.beam conftest$ac_exeext if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_link") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_cxx_werror_flag" || test ! -s conftest.err } && test -s conftest$ac_exeext && { test "$cross_compiling" = yes || test -x conftest$ac_exeext } then : ac_retval=0 else $as_nop printf "%s\n" "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi # Delete the IPA/IPO (Inter Procedural Analysis/Optimization) information # created by the PGI compiler (conftest_ipa8_conftest.oo), as it would # interfere with the next link command; also delete a directory that is # left behind by Apple's compiler. We do this before executing the actions. 
rm -rf conftest.dSYM conftest_ipa8_conftest.oo eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_cxx_try_link # ac_fn_cxx_check_header_compile LINENO HEADER VAR INCLUDES # --------------------------------------------------------- # Tests whether HEADER exists and can be compiled using the include files in # INCLUDES, setting the cache variable VAR accordingly. ac_fn_cxx_check_header_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 printf %s "checking for $2... " >&6; } if eval test \${$3+y} then : printf %s "(cached) " >&6 else $as_nop cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 #include <$2> _ACEOF if ac_fn_cxx_try_compile "$LINENO" then : eval "$3=yes" else $as_nop eval "$3=no" fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext fi eval ac_res=\$$3 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 printf "%s\n" "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_cxx_check_header_compile ac_configure_args_raw= for ac_arg do case $ac_arg in *\'*) ac_arg=`printf "%s\n" "$ac_arg" | sed "s/'/'\\\\\\\\''/g"` ;; esac as_fn_append ac_configure_args_raw " '$ac_arg'" done case $ac_configure_args_raw in *$as_nl*) ac_safe_unquote= ;; *) ac_unsafe_z='|&;<>()$`\\"*?[ '' ' # This string ends in space, tab. ac_unsafe_a="$ac_unsafe_z#~" ac_safe_unquote="s/ '\\([^$ac_unsafe_a][^$ac_unsafe_z]*\\)'/ \\1/g" ac_configure_args_raw=` printf "%s\n" "$ac_configure_args_raw" | sed "$ac_safe_unquote"`;; esac cat >config.log <<_ACEOF This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. It was created by BiocParallel $as_me 1.32.4, which was generated by GNU Autoconf 2.71. 
Invocation command line was $ $0$ac_configure_args_raw _ACEOF exec 5>>config.log { cat <<_ASUNAME ## --------- ## ## Platform. ## ## --------- ## hostname = `(hostname || uname -n) 2>/dev/null | sed 1q` uname -m = `(uname -m) 2>/dev/null || echo unknown` uname -r = `(uname -r) 2>/dev/null || echo unknown` uname -s = `(uname -s) 2>/dev/null || echo unknown` uname -v = `(uname -v) 2>/dev/null || echo unknown` /usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null || echo unknown` /bin/uname -X = `(/bin/uname -X) 2>/dev/null || echo unknown` /bin/arch = `(/bin/arch) 2>/dev/null || echo unknown` /usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null || echo unknown` /usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null || echo unknown` /usr/bin/hostinfo = `(/usr/bin/hostinfo) 2>/dev/null || echo unknown` /bin/machine = `(/bin/machine) 2>/dev/null || echo unknown` /usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null || echo unknown` /bin/universe = `(/bin/universe) 2>/dev/null || echo unknown` _ASUNAME as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac printf "%s\n" "PATH: $as_dir" done IFS=$as_save_IFS } >&5 cat >&5 <<_ACEOF ## ----------- ## ## Core tests. ## ## ----------- ## _ACEOF # Keep a trace of the command line. # Strip out --no-create and --no-recursion so they do not pile up. # Strip out --silent because we don't want to record it for future runs. # Also quote any args containing shell meta-characters. # Make two passes to allow for proper duplicate-argument suppression. 
ac_configure_args= ac_configure_args0= ac_configure_args1= ac_must_keep_next=false for ac_pass in 1 2 do for ac_arg do case $ac_arg in -no-create | --no-c* | -n | -no-recursion | --no-r*) continue ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) continue ;; *\'*) ac_arg=`printf "%s\n" "$ac_arg" | sed "s/'/'\\\\\\\\''/g"` ;; esac case $ac_pass in 1) as_fn_append ac_configure_args0 " '$ac_arg'" ;; 2) as_fn_append ac_configure_args1 " '$ac_arg'" if test $ac_must_keep_next = true; then ac_must_keep_next=false # Got value, back to normal. else case $ac_arg in *=* | --config-cache | -C | -disable-* | --disable-* \ | -enable-* | --enable-* | -gas | --g* | -nfp | --nf* \ | -q | -quiet | --q* | -silent | --sil* | -v | -verb* \ | -with-* | --with-* | -without-* | --without-* | --x) case "$ac_configure_args0 " in "$ac_configure_args1"*" '$ac_arg' "* ) continue ;; esac ;; -* ) ac_must_keep_next=true ;; esac fi as_fn_append ac_configure_args " '$ac_arg'" ;; esac done done { ac_configure_args0=; unset ac_configure_args0;} { ac_configure_args1=; unset ac_configure_args1;} # When interrupted or exit'd, cleanup temporary files, and complete # config.log. We remove comments because anyway the quotes in there # would cause problems or look ugly. # WARNING: Use '\'' to represent an apostrophe within the trap. # WARNING: Do not start the trap code with a newline, due to a FreeBSD 4.0 bug. trap 'exit_status=$? # Sanitize IFS. IFS=" "" $as_nl" # Save into config.log some information that might help in debugging. { echo printf "%s\n" "## ---------------- ## ## Cache variables. 
## ## ---------------- ##" echo # The following way of writing the cache mishandles newlines in values, ( for ac_var in `(set) 2>&1 | sed -n '\''s/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'\''`; do eval ac_val=\$$ac_var case $ac_val in #( *${as_nl}*) case $ac_var in #( *_cv_*) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline" >&5 printf "%s\n" "$as_me: WARNING: cache variable $ac_var contains a newline" >&2;} ;; esac case $ac_var in #( _ | IFS | as_nl) ;; #( BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #( *) { eval $ac_var=; unset $ac_var;} ;; esac ;; esac done (set) 2>&1 | case $as_nl`(ac_space='\'' '\''; set) 2>&1` in #( *${as_nl}ac_space=\ *) sed -n \ "s/'\''/'\''\\\\'\'''\''/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\''\\2'\''/p" ;; #( *) sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p" ;; esac | sort ) echo printf "%s\n" "## ----------------- ## ## Output variables. ## ## ----------------- ##" echo for ac_var in $ac_subst_vars do eval ac_val=\$$ac_var case $ac_val in *\'\''*) ac_val=`printf "%s\n" "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;; esac printf "%s\n" "$ac_var='\''$ac_val'\''" done | sort echo if test -n "$ac_subst_files"; then printf "%s\n" "## ------------------- ## ## File substitutions. ## ## ------------------- ##" echo for ac_var in $ac_subst_files do eval ac_val=\$$ac_var case $ac_val in *\'\''*) ac_val=`printf "%s\n" "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;; esac printf "%s\n" "$ac_var='\''$ac_val'\''" done | sort echo fi if test -s confdefs.h; then printf "%s\n" "## ----------- ## ## confdefs.h. 
## ## ----------- ##" echo cat confdefs.h echo fi test "$ac_signal" != 0 && printf "%s\n" "$as_me: caught signal $ac_signal" printf "%s\n" "$as_me: exit $exit_status" } >&5 rm -f core *.core core.conftest.* && rm -f -r conftest* confdefs* conf$$* $ac_clean_files && exit $exit_status ' 0 for ac_signal in 1 2 13 15; do trap 'ac_signal='$ac_signal'; as_fn_exit 1' $ac_signal done ac_signal=0 # confdefs.h avoids OS command line length limits that DEFS can exceed. rm -f -r conftest* confdefs.h printf "%s\n" "/* confdefs.h */" > confdefs.h # Predefined preprocessor variables. printf "%s\n" "#define PACKAGE_NAME \"$PACKAGE_NAME\"" >>confdefs.h printf "%s\n" "#define PACKAGE_TARNAME \"$PACKAGE_TARNAME\"" >>confdefs.h printf "%s\n" "#define PACKAGE_VERSION \"$PACKAGE_VERSION\"" >>confdefs.h printf "%s\n" "#define PACKAGE_STRING \"$PACKAGE_STRING\"" >>confdefs.h printf "%s\n" "#define PACKAGE_BUGREPORT \"$PACKAGE_BUGREPORT\"" >>confdefs.h printf "%s\n" "#define PACKAGE_URL \"$PACKAGE_URL\"" >>confdefs.h # Let the site file select an alternate cache file if it wants to. # Prefer an explicitly selected file to automatically selected ones. if test -n "$CONFIG_SITE"; then ac_site_files="$CONFIG_SITE" elif test "x$prefix" != xNONE; then ac_site_files="$prefix/share/config.site $prefix/etc/config.site" else ac_site_files="$ac_default_prefix/share/config.site $ac_default_prefix/etc/config.site" fi for ac_site_file in $ac_site_files do case $ac_site_file in #( */*) : ;; #( *) : ac_site_file=./$ac_site_file ;; esac if test -f "$ac_site_file" && test -r "$ac_site_file"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: loading site script $ac_site_file" >&5 printf "%s\n" "$as_me: loading site script $ac_site_file" >&6;} sed 's/^/| /' "$ac_site_file" >&5 . "$ac_site_file" \ || { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? 
"failed to load site script $ac_site_file See \`config.log' for more details" "$LINENO" 5; } fi done if test -r "$cache_file"; then # Some versions of bash will fail to source /dev/null (special files # actually), so we avoid doing that. DJGPP emulates it as a regular file. if test /dev/null != "$cache_file" && test -f "$cache_file"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: loading cache $cache_file" >&5 printf "%s\n" "$as_me: loading cache $cache_file" >&6;} case $cache_file in [\\/]* | ?:[\\/]* ) . "$cache_file";; *) . "./$cache_file";; esac fi else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: creating cache $cache_file" >&5 printf "%s\n" "$as_me: creating cache $cache_file" >&6;} >$cache_file fi # Test code for whether the C++ compiler supports C++98 (global declarations) ac_cxx_conftest_cxx98_globals=' // Does the compiler advertise C++98 conformance? #if !defined __cplusplus || __cplusplus < 199711L # error "Compiler does not advertise C++98 conformance" #endif // These inclusions are to reject old compilers that // lack the unsuffixed header files. #include <cstdlib> #include <exception> // <cassert> and <cstring> are *not* freestanding headers in C++98. extern void assert (int); namespace std { extern int strcmp (const char *, const char *); } // Namespaces, exceptions, and templates were all added after "C++ 2.0". using std::exception; using std::strcmp; namespace { void test_exception_syntax() { try { throw "test"; } catch (const char *s) { // Extra parentheses suppress a warning when building autoconf itself, // due to lint rules shared with more typical C programs. assert (!(strcmp) (s, "test")); } } template <typename T> struct test_template { T const val; explicit test_template(T t) : val(t) {} template <typename U> T add(U u) { return static_cast<T>(u) + val; } }; } // anonymous namespace ' # Test code for whether the C++ compiler supports C++98 (body of main) ac_cxx_conftest_cxx98_main=' assert (argc); assert (!
argv[0]); { test_exception_syntax (); test_template<double> tt (2.0); assert (tt.add (4) == 6.0); assert (true && !false); } ' # Test code for whether the C++ compiler supports C++11 (global declarations) ac_cxx_conftest_cxx11_globals=' // Does the compiler advertise C++ 2011 conformance? #if !defined __cplusplus || __cplusplus < 201103L # error "Compiler does not advertise C++11 conformance" #endif namespace cxx11test { constexpr int get_val() { return 20; } struct testinit { int i; double d; }; class delegate { public: delegate(int n) : n(n) {} delegate(): delegate(2354) {} virtual int getval() { return this->n; }; protected: int n; }; class overridden : public delegate { public: overridden(int n): delegate(n) {} virtual int getval() override final { return this->n * 2; } }; class nocopy { public: nocopy(int i): i(i) {} nocopy() = default; nocopy(const nocopy&) = delete; nocopy & operator=(const nocopy&) = delete; private: int i; }; // for testing lambda expressions template <typename Ret, typename Fn> Ret eval(Fn f, Ret v) { return f(v); } // for testing variadic templates and trailing return types template <typename V> auto sum(V first) -> V { return first; } template <typename V, typename... Args> auto sum(V first, Args...
rest) -> V { return first + sum(rest...); } } ' # Test code for whether the C++ compiler supports C++11 (body of main) ac_cxx_conftest_cxx11_main=' { // Test auto and decltype auto a1 = 6538; auto a2 = 48573953.4; auto a3 = "String literal"; int total = 0; for (auto i = a3; *i; ++i) { total += *i; } decltype(a2) a4 = 34895.034; } { // Test constexpr short sa[cxx11test::get_val()] = { 0 }; } { // Test initializer lists cxx11test::testinit il = { 4323, 435234.23544 }; } { // Test range-based for int array[] = {9, 7, 13, 15, 4, 18, 12, 10, 5, 3, 14, 19, 17, 8, 6, 20, 16, 2, 11, 1}; for (auto &x : array) { x += 23; } } { // Test lambda expressions using cxx11test::eval; assert (eval ([](int x) { return x*2; }, 21) == 42); double d = 2.0; assert (eval ([&](double x) { return d += x; }, 3.0) == 5.0); assert (d == 5.0); assert (eval ([=](double x) mutable { return d += x; }, 4.0) == 9.0); assert (d == 5.0); } { // Test use of variadic templates using cxx11test::sum; auto a = sum(1); auto b = sum(1, 2); auto c = sum(1.0, 2.0, 3.0); } { // Test constructor delegation cxx11test::delegate d1; cxx11test::delegate d2(); cxx11test::delegate d3(45); } { // Test override and final cxx11test::overridden o1(55464); } { // Test nullptr char *c = nullptr; } { // Test template brackets test_template<::test_template<int>> v(test_template<int>(12)); } { // Unicode literals char const *utf8 = u8"UTF-8 string \u2500"; char16_t const *utf16 = u"UTF-16 string \u2500"; char32_t const *utf32 = U"UTF-32 string \u2500"; } ' # Test code for whether the C++ compiler supports C++11 (complete). ac_cxx_conftest_cxx11_program="${ac_cxx_conftest_cxx98_globals} ${ac_cxx_conftest_cxx11_globals} int main (int argc, char **argv) { int ok = 0; ${ac_cxx_conftest_cxx98_main} ${ac_cxx_conftest_cxx11_main} return ok; } " # Test code for whether the C++ compiler supports C++98 (complete).
ac_cxx_conftest_cxx98_program="${ac_cxx_conftest_cxx98_globals} int main (int argc, char **argv) { int ok = 0; ${ac_cxx_conftest_cxx98_main} return ok; } " as_fn_append ac_header_cxx_list " stdio.h stdio_h HAVE_STDIO_H" as_fn_append ac_header_cxx_list " stdlib.h stdlib_h HAVE_STDLIB_H" as_fn_append ac_header_cxx_list " string.h string_h HAVE_STRING_H" as_fn_append ac_header_cxx_list " inttypes.h inttypes_h HAVE_INTTYPES_H" as_fn_append ac_header_cxx_list " stdint.h stdint_h HAVE_STDINT_H" as_fn_append ac_header_cxx_list " strings.h strings_h HAVE_STRINGS_H" as_fn_append ac_header_cxx_list " sys/stat.h sys_stat_h HAVE_SYS_STAT_H" as_fn_append ac_header_cxx_list " sys/types.h sys_types_h HAVE_SYS_TYPES_H" as_fn_append ac_header_cxx_list " unistd.h unistd_h HAVE_UNISTD_H" # Check that the precious variables saved in the cache have kept the same # value. ac_cache_corrupted=false for ac_var in $ac_precious_vars; do eval ac_old_set=\$ac_cv_env_${ac_var}_set eval ac_new_set=\$ac_env_${ac_var}_set eval ac_old_val=\$ac_cv_env_${ac_var}_value eval ac_new_val=\$ac_env_${ac_var}_value case $ac_old_set,$ac_new_set in set,) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&5 printf "%s\n" "$as_me: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&2;} ac_cache_corrupted=: ;; ,set) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' was not set in the previous run" >&5 printf "%s\n" "$as_me: error: \`$ac_var' was not set in the previous run" >&2;} ac_cache_corrupted=: ;; ,);; *) if test "x$ac_old_val" != "x$ac_new_val"; then # differences in whitespace do not lead to failure. 
ac_old_val_w=`echo x $ac_old_val` ac_new_val_w=`echo x $ac_new_val` if test "$ac_old_val_w" != "$ac_new_val_w"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' has changed since the previous run:" >&5 printf "%s\n" "$as_me: error: \`$ac_var' has changed since the previous run:" >&2;} ac_cache_corrupted=: else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: warning: ignoring whitespace changes in \`$ac_var' since the previous run:" >&5 printf "%s\n" "$as_me: warning: ignoring whitespace changes in \`$ac_var' since the previous run:" >&2;} eval $ac_var=\$ac_old_val fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: former value: \`$ac_old_val'" >&5 printf "%s\n" "$as_me: former value: \`$ac_old_val'" >&2;} { printf "%s\n" "$as_me:${as_lineno-$LINENO}: current value: \`$ac_new_val'" >&5 printf "%s\n" "$as_me: current value: \`$ac_new_val'" >&2;} fi;; esac # Pass precious variables to config.status. if test "$ac_new_set" = set; then case $ac_new_val in *\'*) ac_arg=$ac_var=`printf "%s\n" "$ac_new_val" | sed "s/'/'\\\\\\\\''/g"` ;; *) ac_arg=$ac_var=$ac_new_val ;; esac case " $ac_configure_args " in *" '$ac_arg' "*) ;; # Avoid dups. Use of quotes ensures accuracy. *) as_fn_append ac_configure_args " '$ac_arg'" ;; esac fi done if $ac_cache_corrupted; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;} { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: changes in the environment can compromise the build" >&5 printf "%s\n" "$as_me: error: changes in the environment can compromise the build" >&2;} as_fn_error $? "run \`${MAKE-make} distclean' and/or \`rm $cache_file' and start over" "$LINENO" 5 fi ## -------------------- ## ## Main body of script. 
## ## -------------------- ## ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu CXX=`"${R_HOME}/bin/R" CMD config CXX` if test -z "$CXX"; then as_fn_error $? "No C++ compiler is available" "$LINENO" 5 fi CXXFLAGS=`"${R_HOME}/bin/R" CMD config CXXFLAGS` CPPFLAGS=`"${R_HOME}/bin/R" CMD config CPPFLAGS` ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu if test -z "$CXX"; then if test -n "$CCC"; then CXX=$CCC else if test -n "$ac_tool_prefix"; then for ac_prog in g++ c++ gpp aCC CC cxx cc++ cl.exe FCC KCC RCC xlC_r xlC clang++ do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_CXX+y} then : printf %s "(cached) " >&6 else $as_nop if test -n "$CXX"; then ac_cv_prog_CXX="$CXX" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_CXX="$ac_tool_prefix$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CXX=$ac_cv_prog_CXX if test -n "$CXX"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CXX" >&5 printf "%s\n" "$CXX" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$CXX" && break done fi if test -z "$CXX"; then ac_ct_CXX=$CXX for ac_prog in g++ c++ gpp aCC CC cxx cc++ cl.exe FCC KCC RCC xlC_r xlC clang++ do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_CXX+y} then : printf %s "(cached) " >&6 else $as_nop if test -n "$ac_ct_CXX"; then ac_cv_prog_ac_ct_CXX="$ac_ct_CXX" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CXX="$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CXX=$ac_cv_prog_ac_ct_CXX if test -n "$ac_ct_CXX"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CXX" >&5 printf "%s\n" "$ac_ct_CXX" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$ac_ct_CXX" && break done if test "x$ac_ct_CXX" = x; then CXX="g++" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CXX=$ac_ct_CXX fi fi fi fi # Provide some information about the compiler. printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for C++ compiler version" >&5 set X $ac_compile ac_compiler=$2 for ac_option in --version -v -V -qversion; do { { ac_try="$ac_compiler $ac_option >&5" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_compiler $ac_option >&5") 2>conftest.err ac_status=$? if test -s conftest.err; then sed '10a\ ... rest of stderr output deleted ... 10q' conftest.err >conftest.er1 cat conftest.er1 >&5 fi rm -f conftest.er1 conftest.err printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } done cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main (void) { ; return 0; } _ACEOF ac_clean_files_save=$ac_clean_files ac_clean_files="$ac_clean_files a.out a.out.dSYM a.exe b.out" # Try to create an executable without -o first, disregard a.out. # It will help us diagnose broken compilers, and finding out an intuition # of exeext. { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the C++ compiler works" >&5 printf %s "checking whether the C++ compiler works... " >&6; } ac_link_default=`printf "%s\n" "$ac_link" | sed 's/ -o *conftest[^ ]*//'` # The possible output files: ac_files="a.out conftest.exe conftest a.exe a_out.exe b.out conftest.*" ac_rmfiles= for ac_file in $ac_files do case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; * ) ac_rmfiles="$ac_rmfiles $ac_file";; esac done rm -f $ac_rmfiles if { { ac_try="$ac_link_default" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_link_default") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } then : # Autoconf-2.13 could set the ac_cv_exeext variable to `no'. # So ignore a value of `no', otherwise this would lead to `EXEEXT = no' # in a Makefile. We should not override ac_cv_exeext if it was cached, # so that the user can short-circuit this test for compilers unknown to # Autoconf. for ac_file in $ac_files '' do test -f "$ac_file" || continue case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; [ab].out ) # We found the default executable, but exeext='' is most # certainly right. 
break;; *.* ) if test ${ac_cv_exeext+y} && test "$ac_cv_exeext" != no; then :; else ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'` fi # We set ac_cv_exeext here because the later test for it is not # safe: cross compilers may not add the suffix if given an `-o' # argument, so we may need to know it at that point already. # Even if this section looks crufty: it has the advantage of # actually working. break;; * ) break;; esac done test "$ac_cv_exeext" = no && ac_cv_exeext= else $as_nop ac_file='' fi if test -z "$ac_file" then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } printf "%s\n" "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error 77 "C++ compiler cannot create executables See \`config.log' for more details" "$LINENO" 5; } else $as_nop { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5 printf "%s\n" "yes" >&6; } fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for C++ compiler default output file name" >&5 printf %s "checking for C++ compiler default output file name... " >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_file" >&5 printf "%s\n" "$ac_file" >&6; } ac_exeext=$ac_cv_exeext rm -f -r a.out a.out.dSYM a.exe conftest$ac_cv_exeext b.out ac_clean_files=$ac_clean_files_save { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for suffix of executables" >&5 printf %s "checking for suffix of executables... " >&6; } if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; } then : # If both `conftest.exe' and `conftest' are `present' (well, observable) # catch `conftest.exe'. For instance with Cygwin, `ls conftest' will # work properly (i.e., refer to `conftest.exe'), while it won't with # `rm'. for ac_file in conftest.exe conftest conftest.*; do test -f "$ac_file" || continue case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; *.* ) ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'` break;; * ) break;; esac done else $as_nop { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot compute suffix of executables: cannot compile and link See \`config.log' for more details" "$LINENO" 5; } fi rm -f conftest conftest$ac_cv_exeext { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_exeext" >&5 printf "%s\n" "$ac_cv_exeext" >&6; } rm -f conftest.$ac_ext EXEEXT=$ac_cv_exeext ac_exeext=$EXEEXT cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <stdio.h> int main (void) { FILE *f = fopen ("conftest.out", "w"); return ferror (f) || fclose (f) != 0; ; return 0; } _ACEOF ac_clean_files="$ac_clean_files conftest.out" # Check that the compiler produces executables we can run. If not, either # the compiler is broken, or we cross compile. { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether we are cross compiling" >&5 printf %s "checking whether we are cross compiling... " >&6; } if test "$cross_compiling" != yes; then { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$?
= $ac_status" >&5 test $ac_status = 0; } if { ac_try='./conftest$ac_cv_exeext' { { case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_try") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; }; then cross_compiling=no else if test "$cross_compiling" = maybe; then cross_compiling=yes else { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error 77 "cannot run C++ compiled programs. If you meant to cross compile, use \`--host'. See \`config.log' for more details" "$LINENO" 5; } fi fi fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $cross_compiling" >&5 printf "%s\n" "$cross_compiling" >&6; } rm -f conftest.$ac_ext conftest$ac_cv_exeext conftest.out ac_clean_files=$ac_clean_files_save { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for suffix of object files" >&5 printf %s "checking for suffix of object files... " >&6; } if test ${ac_cv_objext+y} then : printf %s "(cached) " >&6 else $as_nop cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF rm -f conftest.o conftest.obj if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; } then : for ac_file in conftest.o conftest.obj conftest.*; do test -f "$ac_file" || continue; case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM ) ;; *) ac_cv_objext=`expr "$ac_file" : '.*\.\(.*\)'` break;; esac done else $as_nop printf "%s\n" "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot compute suffix of object files: cannot compile See \`config.log' for more details" "$LINENO" 5; } fi rm -f conftest.$ac_cv_objext conftest.$ac_ext fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_objext" >&5 printf "%s\n" "$ac_cv_objext" >&6; } OBJEXT=$ac_cv_objext ac_objext=$OBJEXT { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the compiler supports GNU C++" >&5 printf %s "checking whether the compiler supports GNU C++... " >&6; } if test ${ac_cv_cxx_compiler_gnu+y} then : printf %s "(cached) " >&6 else $as_nop cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { #ifndef __GNUC__ choke me #endif ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO" then : ac_compiler_gnu=yes else $as_nop ac_compiler_gnu=no fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ac_cv_cxx_compiler_gnu=$ac_compiler_gnu fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_cxx_compiler_gnu" >&5 printf "%s\n" "$ac_cv_cxx_compiler_gnu" >&6; } ac_compiler_gnu=$ac_cv_cxx_compiler_gnu if test $ac_compiler_gnu = yes; then GXX=yes else GXX= fi ac_test_CXXFLAGS=${CXXFLAGS+y} ac_save_CXXFLAGS=$CXXFLAGS { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $CXX accepts -g" >&5 printf %s "checking whether $CXX accepts -g... 
" >&6; } if test ${ac_cv_prog_cxx_g+y} then : printf %s "(cached) " >&6 else $as_nop ac_save_cxx_werror_flag=$ac_cxx_werror_flag ac_cxx_werror_flag=yes ac_cv_prog_cxx_g=no CXXFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO" then : ac_cv_prog_cxx_g=yes else $as_nop CXXFLAGS="" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO" then : else $as_nop ac_cxx_werror_flag=$ac_save_cxx_werror_flag CXXFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO" then : ac_cv_prog_cxx_g=yes fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ac_cxx_werror_flag=$ac_save_cxx_werror_flag fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cxx_g" >&5 printf "%s\n" "$ac_cv_prog_cxx_g" >&6; } if test $ac_test_CXXFLAGS; then CXXFLAGS=$ac_save_CXXFLAGS elif test $ac_cv_prog_cxx_g = yes; then if test "$GXX" = yes; then CXXFLAGS="-g -O2" else CXXFLAGS="-g" fi else if test "$GXX" = yes; then CXXFLAGS="-O2" else CXXFLAGS= fi fi ac_prog_cxx_stdcxx=no if test x$ac_prog_cxx_stdcxx = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CXX option to enable C++11 features" >&5 printf %s "checking for $CXX option to enable C++11 features... " >&6; } if test ${ac_cv_prog_cxx_11+y} then : printf %s "(cached) " >&6 else $as_nop ac_cv_prog_cxx_11=no ac_save_CXX=$CXX cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ $ac_cxx_conftest_cxx11_program _ACEOF for ac_arg in '' -std=gnu++11 -std=gnu++0x -std=c++11 -std=c++0x -qlanglvl=extended0x -AA do CXX="$ac_save_CXX $ac_arg" if ac_fn_cxx_try_compile "$LINENO" then : ac_cv_prog_cxx_cxx11=$ac_arg fi rm -f core conftest.err conftest.$ac_objext conftest.beam test "x$ac_cv_prog_cxx_cxx11" != "xno" && break done rm -f conftest.$ac_ext CXX=$ac_save_CXX fi if test "x$ac_cv_prog_cxx_cxx11" = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5 printf "%s\n" "unsupported" >&6; } else $as_nop if test "x$ac_cv_prog_cxx_cxx11" = x then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none needed" >&5 printf "%s\n" "none needed" >&6; } else $as_nop { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cxx_cxx11" >&5 printf "%s\n" "$ac_cv_prog_cxx_cxx11" >&6; } CXX="$CXX $ac_cv_prog_cxx_cxx11" fi ac_cv_prog_cxx_stdcxx=$ac_cv_prog_cxx_cxx11 ac_prog_cxx_stdcxx=cxx11 fi fi if test x$ac_prog_cxx_stdcxx = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CXX option to enable C++98 features" >&5 printf %s "checking for $CXX option to enable C++98 features... " >&6; } if test ${ac_cv_prog_cxx_98+y} then : printf %s "(cached) " >&6 else $as_nop ac_cv_prog_cxx_98=no ac_save_CXX=$CXX cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ $ac_cxx_conftest_cxx98_program _ACEOF for ac_arg in '' -std=gnu++98 -std=c++98 -qlanglvl=extended -AA do CXX="$ac_save_CXX $ac_arg" if ac_fn_cxx_try_compile "$LINENO" then : ac_cv_prog_cxx_cxx98=$ac_arg fi rm -f core conftest.err conftest.$ac_objext conftest.beam test "x$ac_cv_prog_cxx_cxx98" != "xno" && break done rm -f conftest.$ac_ext CXX=$ac_save_CXX fi if test "x$ac_cv_prog_cxx_cxx98" = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5 printf "%s\n" "unsupported" >&6; } else $as_nop if test "x$ac_cv_prog_cxx_cxx98" = x then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none needed" >&5 printf "%s\n" "none needed" >&6; } else $as_nop { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cxx_cxx98" >&5 printf "%s\n" "$ac_cv_prog_cxx_cxx98" >&6; } CXX="$CXX $ac_cv_prog_cxx_cxx98" fi ac_cv_prog_cxx_stdcxx=$ac_cv_prog_cxx_cxx98 ac_prog_cxx_stdcxx=cxx98 fi fi ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for library containing shm_open" >&5 printf %s "checking for library containing shm_open... " >&6; } if test ${ac_cv_search_shm_open+y} then : printf %s "(cached) " >&6 else $as_nop ac_func_search_save_LIBS=$LIBS cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ namespace conftest { extern "C" int shm_open (); } int main (void) { return conftest::shm_open (); ; return 0; } _ACEOF for ac_lib in '' rt do if test -z "$ac_lib"; then ac_res="none required" else ac_res=-l$ac_lib LIBS="-l$ac_lib $ac_func_search_save_LIBS" fi if ac_fn_cxx_try_link "$LINENO" then : ac_cv_search_shm_open=$ac_res fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext if test ${ac_cv_search_shm_open+y} then : break fi done if test ${ac_cv_search_shm_open+y} then : else $as_nop ac_cv_search_shm_open=no fi rm conftest.$ac_ext LIBS=$ac_func_search_save_LIBS fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_shm_open" >&5 printf "%s\n" "$ac_cv_search_shm_open" >&6; } ac_res=$ac_cv_search_shm_open if test "$ac_res" != no then : test "$ac_res" = "none required" || LIBS="$ac_res $LIBS" fi ac_header= ac_cache= for ac_item in $ac_header_cxx_list do if test $ac_cache; then ac_fn_cxx_check_header_compile "$LINENO" $ac_header ac_cv_header_$ac_cache "$ac_includes_default" if eval test \"x\$ac_cv_header_$ac_cache\" = xyes; then printf "%s\n" "#define $ac_item 1" >> confdefs.h fi ac_header= ac_cache= elif test $ac_header; then ac_cache=$ac_item else ac_header=$ac_item fi done if test $ac_cv_header_stdlib_h = yes && test $ac_cv_header_string_h = yes then : printf "%s\n" "#define STDC_HEADERS 1" >>confdefs.h fi ac_fn_cxx_check_header_compile "$LINENO" "sys/mman.h" "ac_cv_header_sys_mman_h" "$ac_includes_default" if test "x$ac_cv_header_sys_mman_h" = xyes then : else $as_nop as_fn_error $? "cannot find required header sys/mman.h" "$LINENO" 5 fi ac_config_files="$ac_config_files src/Makevars" cat >confcache <<\_ACEOF # This file is a shell script that caches the results of configure # tests run on this system so they can be shared between configure # scripts and configure runs, see configure's option --config-cache. # It is not useful on other systems. 
If it contains results you don't # want to keep, you may remove or edit it. # # config.status only pays attention to the cache file if you give it # the --recheck option to rerun configure. # # `ac_cv_env_foo' variables (set or unset) will be overridden when # loading this file, other *unset* `ac_cv_foo' will be assigned the # following values. _ACEOF # The following way of writing the cache mishandles newlines in values, # but we know of no workaround that is simple, portable, and efficient. # So, we kill variables containing newlines. # Ultrix sh set writes to stderr and can't be redirected directly, # and sets the high bit in the cache file unless we assign to the vars. ( for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do eval ac_val=\$$ac_var case $ac_val in #( *${as_nl}*) case $ac_var in #( *_cv_*) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline" >&5 printf "%s\n" "$as_me: WARNING: cache variable $ac_var contains a newline" >&2;} ;; esac case $ac_var in #( _ | IFS | as_nl) ;; #( BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #( *) { eval $ac_var=; unset $ac_var;} ;; esac ;; esac done (set) 2>&1 | case $as_nl`(ac_space=' '; set) 2>&1` in #( *${as_nl}ac_space=\ *) # `set' does not quote correctly, so add quotes: double-quote # substitution turns \\\\ into \\, and sed turns \\ into \. sed -n \ "s/'/'\\\\''/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\\2'/p" ;; #( *) # `set' quotes correctly as required by POSIX, so do not add quotes. 
sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p" ;; esac | sort ) | sed ' /^ac_cv_env_/b end t clear :clear s/^\([^=]*\)=\(.*[{}].*\)$/test ${\1+y} || &/ t end s/^\([^=]*\)=\(.*\)$/\1=${\1=\2}/ :end' >>confcache if diff "$cache_file" confcache >/dev/null 2>&1; then :; else if test -w "$cache_file"; then if test "x$cache_file" != "x/dev/null"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: updating cache $cache_file" >&5 printf "%s\n" "$as_me: updating cache $cache_file" >&6;} if test ! -f "$cache_file" || test -h "$cache_file"; then cat confcache >"$cache_file" else case $cache_file in #( */* | ?:*) mv -f confcache "$cache_file"$$ && mv -f "$cache_file"$$ "$cache_file" ;; #( *) mv -f confcache "$cache_file" ;; esac fi fi else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: not updating unwritable cache $cache_file" >&5 printf "%s\n" "$as_me: not updating unwritable cache $cache_file" >&6;} fi fi rm -f confcache test "x$prefix" = xNONE && prefix=$ac_default_prefix # Let make expand exec_prefix. test "x$exec_prefix" = xNONE && exec_prefix='${prefix}' # Transform confdefs.h into DEFS. # Protect against shell expansion while executing Makefile rules. # Protect against Makefile macro expansion. # # If the first sed substitution is executed (which looks for macros that # take arguments), then branch to the quote section. Otherwise, # look for a macro that doesn't take arguments. ac_script=' :mline /\\$/{ N s,\\\n,, b mline } t clear :clear s/^[ ]*#[ ]*define[ ][ ]*\([^ (][^ (]*([^)]*)\)[ ]*\(.*\)/-D\1=\2/g t quote s/^[ ]*#[ ]*define[ ][ ]*\([^ ][^ ]*\)[ ]*\(.*\)/-D\1=\2/g t quote b any :quote s/[ `~#$^&*(){}\\|;'\''"<>?]/\\&/g s/\[/\\&/g s/\]/\\&/g s/\$/$$/g H :any ${ g s/^\n// s/\n/ /g p } ' DEFS=`sed -n "$ac_script" confdefs.h` ac_libobjs= ac_ltlibobjs= U= for ac_i in : $LIBOBJS; do test "x$ac_i" = x: && continue # 1. Remove the extension, and $U if already installed. ac_script='s/\$U\././;s/\.o$//;s/\.obj$//' ac_i=`printf "%s\n" "$ac_i" | sed "$ac_script"` # 2. 
Prepend LIBOBJDIR. When used with automake>=1.10 LIBOBJDIR # will be set to the directory where LIBOBJS objects are built. as_fn_append ac_libobjs " \${LIBOBJDIR}$ac_i\$U.$ac_objext" as_fn_append ac_ltlibobjs " \${LIBOBJDIR}$ac_i"'$U.lo' done LIBOBJS=$ac_libobjs LTLIBOBJS=$ac_ltlibobjs : "${CONFIG_STATUS=./config.status}" ac_write_fail=0 ac_clean_files_save=$ac_clean_files ac_clean_files="$ac_clean_files $CONFIG_STATUS" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: creating $CONFIG_STATUS" >&5 printf "%s\n" "$as_me: creating $CONFIG_STATUS" >&6;} as_write_fail=0 cat >$CONFIG_STATUS <<_ASEOF || as_write_fail=1 #! $SHELL # Generated by $as_me. # Run this file to recreate the current configuration. # Compiler output produced by configure, useful for debugging # configure, is in config.log if it exists. debug=false ac_cs_recheck=false ac_cs_silent=false SHELL=\${CONFIG_SHELL-$SHELL} export SHELL _ASEOF cat >>$CONFIG_STATUS <<\_ASEOF || as_write_fail=1 ## -------------------- ## ## M4sh Initialization. ## ## -------------------- ## # Be more Bourne compatible DUALCASE=1; export DUALCASE # for MKS sh as_nop=: if test ${ZSH_VERSION+y} && (emulate sh) >/dev/null 2>&1 then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else $as_nop case `(set -o) 2>/dev/null` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi # Reset variables that may have inherited troublesome values from # the environment. # IFS needs to be set, to space, tab, and newline, in precisely that order. # (If _AS_PATH_WALK were called with IFS unset, it would have the # side effect of setting IFS to empty, thus disabling word splitting.) # Quoting is to prevent editors from complaining about space-tab. as_nl=' ' export as_nl IFS=" "" $as_nl" PS1='$ ' PS2='> ' PS4='+ ' # Ensure predictable behavior from utilities with locale-dependent output. 
LC_ALL=C export LC_ALL LANGUAGE=C export LANGUAGE # We cannot yet rely on "unset" to work, but we need these variables # to be unset--not just set to an empty or harmless value--now, to # avoid bugs in old shells (e.g. pre-3.0 UWIN ksh). This construct # also avoids known problems related to "unset" and subshell syntax # in other old shells (e.g. bash 2.01 and pdksh 5.2.14). for as_var in BASH_ENV ENV MAIL MAILPATH CDPATH do eval test \${$as_var+y} \ && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || : done # Ensure that fds 0, 1, and 2 are open. if (exec 3>&0) 2>/dev/null; then :; else exec 0&1) 2>/dev/null; then :; else exec 1>/dev/null; fi if (exec 3>&2) ; then :; else exec 2>/dev/null; fi # The user is always right. if ${PATH_SEPARATOR+false} :; then PATH_SEPARATOR=: (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && { (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 || PATH_SEPARATOR=';' } fi # Find who we are. Look in the path if we contain no directory separator. as_myself= case $0 in #(( *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac test -r "$as_dir$0" && as_myself=$as_dir$0 && break done IFS=$as_save_IFS ;; esac # We did not find ourselves, most probably we were run as `sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! -f "$as_myself"; then printf "%s\n" "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2 exit 1 fi # as_fn_error STATUS ERROR [LINENO LOG_FD] # ---------------------------------------- # Output "`basename $0`: error: ERROR" to stderr. If LINENO and LOG_FD are # provided, also output the error to LOG_FD, referencing LINENO. Then exit the # script with STATUS, using 1 if that was 0. 
as_fn_error () { as_status=$1; test $as_status -eq 0 && as_status=1 if test "$4"; then as_lineno=${as_lineno-"$3"} as_lineno_stack=as_lineno_stack=$as_lineno_stack printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: $2" >&$4 fi printf "%s\n" "$as_me: error: $2" >&2 as_fn_exit $as_status } # as_fn_error # as_fn_set_status STATUS # ----------------------- # Set $? to STATUS, without forking. as_fn_set_status () { return $1 } # as_fn_set_status # as_fn_exit STATUS # ----------------- # Exit the shell with STATUS, even in a "trap 0" or "set -e" context. as_fn_exit () { set +e as_fn_set_status $1 exit $1 } # as_fn_exit # as_fn_unset VAR # --------------- # Portably unset VAR. as_fn_unset () { { eval $1=; unset $1;} } as_unset=as_fn_unset # as_fn_append VAR VALUE # ---------------------- # Append the text in VALUE to the end of the definition contained in VAR. Take # advantage of any shell optimizations that allow amortized linear growth over # repeated appends, instead of the typical quadratic growth present in naive # implementations. if (eval "as_var=1; as_var+=2; test x\$as_var = x12") 2>/dev/null then : eval 'as_fn_append () { eval $1+=\$2 }' else $as_nop as_fn_append () { eval $1=\$$1\$2 } fi # as_fn_append # as_fn_arith ARG... # ------------------ # Perform arithmetic evaluation on the ARGs, and store the result in the # global $as_val. Take advantage of shells that can avoid forks. The arguments # must be portable across $(()) and expr. if (eval "test \$(( 1 + 1 )) = 2") 2>/dev/null then : eval 'as_fn_arith () { as_val=$(( $* )) }' else $as_nop as_fn_arith () { as_val=`expr "$@" || test $? 
-eq 1` } fi # as_fn_arith if expr a : '\(a\)' >/dev/null 2>&1 && test "X`expr 00001 : '.*\(...\)'`" = X001; then as_expr=expr else as_expr=false fi if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then as_basename=basename else as_basename=false fi if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then as_dirname=dirname else as_dirname=false fi as_me=`$as_basename -- "$0" || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)' \| . 2>/dev/null || printf "%s\n" X/"$0" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/ q } /^X\/\(\/\/\)$/{ s//\1/ q } /^X\/\(\/\).*/{ s//\1/ q } s/.*/./; q'` # Avoid depending upon Character Ranges. as_cr_letters='abcdefghijklmnopqrstuvwxyz' as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ' as_cr_Letters=$as_cr_letters$as_cr_LETTERS as_cr_digits='0123456789' as_cr_alnum=$as_cr_Letters$as_cr_digits # Determine whether it's possible to make 'echo' print without a newline. # These variables are no longer used directly by Autoconf, but are AC_SUBSTed # for compatibility with existing Makefiles. ECHO_C= ECHO_N= ECHO_T= case `echo -n x` in #((((( -n*) case `echo 'xy\c'` in *c*) ECHO_T=' ';; # ECHO_T is single tab character. xy) ECHO_C='\c';; *) echo `echo ksh88 bug on AIX 6.1` > /dev/null ECHO_T=' ';; esac;; *) ECHO_N='-n';; esac # For backward compatibility with old third-party macros, we provide # the shell variables $as_echo and $as_echo_n. New code should use # AS_ECHO(["message"]) and AS_ECHO_N(["message"]), respectively. as_echo='printf %s\n' as_echo_n='printf %s' rm -f conf$$ conf$$.exe conf$$.file if test -d conf$$.dir; then rm -f conf$$.dir/conf$$.file else rm -f conf$$.dir mkdir conf$$.dir 2>/dev/null fi if (echo >conf$$.file) 2>/dev/null; then if ln -s conf$$.file conf$$ 2>/dev/null; then as_ln_s='ln -s' # ... but there are two gotchas: # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail. # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable. 
# In both cases, we have to default to `cp -pR'. ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe || as_ln_s='cp -pR' elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else as_ln_s='cp -pR' fi else as_ln_s='cp -pR' fi rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file rmdir conf$$.dir 2>/dev/null # as_fn_mkdir_p # ------------- # Create "$as_dir" as a directory, including parents if necessary. as_fn_mkdir_p () { case $as_dir in #( -*) as_dir=./$as_dir;; esac test -d "$as_dir" || eval $as_mkdir_p || { as_dirs= while :; do case $as_dir in #( *\'*) as_qdir=`printf "%s\n" "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #'( *) as_qdir=$as_dir;; esac as_dirs="'$as_qdir' $as_dirs" as_dir=`$as_dirname -- "$as_dir" || $as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_dir" : 'X\(//\)[^/]' \| \ X"$as_dir" : 'X\(//\)$' \| \ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null || printf "%s\n" X"$as_dir" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` test -d "$as_dir" && break done test -z "$as_dirs" || eval "mkdir $as_dirs" } || test -d "$as_dir" || as_fn_error $? "cannot create directory $as_dir" } # as_fn_mkdir_p if mkdir -p . 2>/dev/null; then as_mkdir_p='mkdir -p "$as_dir"' else test -d ./-p && rmdir ./-p as_mkdir_p=false fi # as_fn_executable_p FILE # ----------------------- # Test if FILE is an executable regular file. as_fn_executable_p () { test -f "$1" && test -x "$1" } # as_fn_executable_p as_test_x='test -x' as_executable_p=as_fn_executable_p # Sed expression to map a string onto a valid CPP name. as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'" # Sed expression to map a string onto a valid variable name. as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'" exec 6>&1 ## ----------------------------------- ## ## Main body of $CONFIG_STATUS script. 
## ## ----------------------------------- ## _ASEOF test $as_write_fail = 0 && chmod +x $CONFIG_STATUS || ac_write_fail=1 cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # Save the log message, to keep $0 and so on meaningful, and to # report actual input values of CONFIG_FILES etc. instead of their # values after options handling. ac_log=" This file was extended by BiocParallel $as_me 1.32.4, which was generated by GNU Autoconf 2.71. Invocation command line was CONFIG_FILES = $CONFIG_FILES CONFIG_HEADERS = $CONFIG_HEADERS CONFIG_LINKS = $CONFIG_LINKS CONFIG_COMMANDS = $CONFIG_COMMANDS $ $0 $@ on `(hostname || uname -n) 2>/dev/null | sed 1q` " _ACEOF case $ac_config_files in *" "*) set x $ac_config_files; shift; ac_config_files=$*;; esac cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 # Files that config.status was made for. config_files="$ac_config_files" _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 ac_cs_usage="\ \`$as_me' instantiates files and other configuration actions from templates according to the current configuration. Unless the files and actions are specified as TAGs, all are instantiated by default. Usage: $0 [OPTION]... [TAG]... -h, --help print this help, then exit -V, --version print version number and configuration settings, then exit --config print configuration, then exit -q, --quiet, --silent do not print progress messages -d, --debug don't remove temporary files --recheck update $as_me by reconfiguring in the same conditions --file=FILE[:TEMPLATE] instantiate the configuration file FILE Configuration files: $config_files Report bugs to the package provider." 
_ACEOF ac_cs_config=`printf "%s\n" "$ac_configure_args" | sed "$ac_safe_unquote"` ac_cs_config_escaped=`printf "%s\n" "$ac_cs_config" | sed "s/^ //; s/'/'\\\\\\\\''/g"` cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_cs_config='$ac_cs_config_escaped' ac_cs_version="\\ BiocParallel config.status 1.32.4 configured by $0, generated by GNU Autoconf 2.71, with options \\"\$ac_cs_config\\" Copyright (C) 2021 Free Software Foundation, Inc. This config.status script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it." ac_pwd='$ac_pwd' srcdir='$srcdir' test -n "\$AWK" || AWK=awk _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # The default lists apply if the user does not specify any file. ac_need_defaults=: while test $# != 0 do case $1 in --*=?*) ac_option=`expr "X$1" : 'X\([^=]*\)='` ac_optarg=`expr "X$1" : 'X[^=]*=\(.*\)'` ac_shift=: ;; --*=) ac_option=`expr "X$1" : 'X\([^=]*\)='` ac_optarg= ac_shift=: ;; *) ac_option=$1 ac_optarg=$2 ac_shift=shift ;; esac case $ac_option in # Handling of the options. -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r) ac_cs_recheck=: ;; --version | --versio | --versi | --vers | --ver | --ve | --v | -V ) printf "%s\n" "$ac_cs_version"; exit ;; --config | --confi | --conf | --con | --co | --c ) printf "%s\n" "$ac_cs_config"; exit ;; --debug | --debu | --deb | --de | --d | -d ) debug=: ;; --file | --fil | --fi | --f ) $ac_shift case $ac_optarg in *\'*) ac_optarg=`printf "%s\n" "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"` ;; '') as_fn_error $? "missing file argument" ;; esac as_fn_append CONFIG_FILES " '$ac_optarg'" ac_need_defaults=false;; --he | --h | --help | --hel | -h ) printf "%s\n" "$ac_cs_usage"; exit ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil | --si | --s) ac_cs_silent=: ;; # This is an error. -*) as_fn_error $? "unrecognized option: \`$1' Try \`$0 --help' for more information." 
;; *) as_fn_append ac_config_targets " $1" ac_need_defaults=false ;; esac shift done ac_configure_extra_args= if $ac_cs_silent; then exec 6>/dev/null ac_configure_extra_args="$ac_configure_extra_args --silent" fi _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 if \$ac_cs_recheck; then set X $SHELL '$0' $ac_configure_args \$ac_configure_extra_args --no-create --no-recursion shift \printf "%s\n" "running CONFIG_SHELL=$SHELL \$*" >&6 CONFIG_SHELL='$SHELL' export CONFIG_SHELL exec "\$@" fi _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 exec 5>>config.log { echo sed 'h;s/./-/g;s/^.../## /;s/...$/ ##/;p;x;p;x' <<_ASBOX ## Running $as_me. ## _ASBOX printf "%s\n" "$ac_log" } >&5 _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # Handling of arguments. for ac_config_target in $ac_config_targets do case $ac_config_target in "src/Makevars") CONFIG_FILES="$CONFIG_FILES src/Makevars" ;; *) as_fn_error $? "invalid argument: \`$ac_config_target'" "$LINENO" 5;; esac done # If the user did not use the arguments to specify the items to instantiate, # then the envvar interface is used. Set only those that are not. # We use the long form for the default assignment because of an extremely # bizarre bug on SunOS 4.1.3. if $ac_need_defaults; then test ${CONFIG_FILES+y} || CONFIG_FILES=$config_files fi # Have a temporary directory for convenience. Make it in the build tree # simply because there is no reason against having it here, and in addition, # creating and moving files from /tmp can sometimes cause problems. # Hook for its removal unless debugging. # Note that there is a small window in which the directory will not be cleaned: # after its creation but before its name has been assigned to `$tmp'. $debug || { tmp= ac_tmp= trap 'exit_status=$? : "${ac_tmp:=$tmp}" { test ! 
-d "$ac_tmp" || rm -fr "$ac_tmp"; } && exit $exit_status ' 0 trap 'as_fn_exit 1' 1 2 13 15 } # Create a (secure) tmp directory for tmp files. { tmp=`(umask 077 && mktemp -d "./confXXXXXX") 2>/dev/null` && test -d "$tmp" } || { tmp=./conf$$-$RANDOM (umask 077 && mkdir "$tmp") } || as_fn_error $? "cannot create a temporary directory in ." "$LINENO" 5 ac_tmp=$tmp # Set up the scripts for CONFIG_FILES section. # No need to generate them if there are no CONFIG_FILES. # This happens for instance with `./config.status config.h'. if test -n "$CONFIG_FILES"; then ac_cr=`echo X | tr X '\015'` # On cygwin, bash can eat \r inside `` if the user requested igncr. # But we know of no other shell where ac_cr would be empty at this # point, so we can use a bashism as a fallback. if test "x$ac_cr" = x; then eval ac_cr=\$\'\\r\' fi ac_cs_awk_cr=`$AWK 'BEGIN { print "a\rb" }' /dev/null` if test "$ac_cs_awk_cr" = "a${ac_cr}b"; then ac_cs_awk_cr='\\r' else ac_cs_awk_cr=$ac_cr fi echo 'BEGIN {' >"$ac_tmp/subs1.awk" && _ACEOF { echo "cat >conf$$subs.awk <<_ACEOF" && echo "$ac_subst_vars" | sed 's/.*/&!$&$ac_delim/' && echo "_ACEOF" } >conf$$subs.sh || as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 ac_delim_num=`echo "$ac_subst_vars" | grep -c '^'` ac_delim='%!_!# ' for ac_last_try in false false false false false :; do . ./conf$$subs.sh || as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 ac_delim_n=`sed -n "s/.*$ac_delim\$/X/p" conf$$subs.awk | grep -c X` if test $ac_delim_n = $ac_delim_num; then break elif $ac_last_try; then as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 else ac_delim="$ac_delim!$ac_delim _$ac_delim!! 
" fi done rm -f conf$$subs.sh cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 cat >>"\$ac_tmp/subs1.awk" <<\\_ACAWK && _ACEOF sed -n ' h s/^/S["/; s/!.*/"]=/ p g s/^[^!]*!// :repl t repl s/'"$ac_delim"'$// t delim :nl h s/\(.\{148\}\)..*/\1/ t more1 s/["\\]/\\&/g; s/^/"/; s/$/\\n"\\/ p n b repl :more1 s/["\\]/\\&/g; s/^/"/; s/$/"\\/ p g s/.\{148\}// t nl :delim h s/\(.\{148\}\)..*/\1/ t more2 s/["\\]/\\&/g; s/^/"/; s/$/"/ p b :more2 s/["\\]/\\&/g; s/^/"/; s/$/"\\/ p g s/.\{148\}// t delim ' >$CONFIG_STATUS || ac_write_fail=1 rm -f conf$$subs.awk cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 _ACAWK cat >>"\$ac_tmp/subs1.awk" <<_ACAWK && for (key in S) S_is_set[key] = 1 FS = "" } { line = $ 0 nfields = split(line, field, "@") substed = 0 len = length(field[1]) for (i = 2; i < nfields; i++) { key = field[i] keylen = length(key) if (S_is_set[key]) { value = S[key] line = substr(line, 1, len) "" value "" substr(line, len + keylen + 3) len += length(value) + length(field[++i]) substed = 1 } else len += 1 + keylen } print line } _ACAWK _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 if sed "s/$ac_cr//" < /dev/null > /dev/null 2>&1; then sed "s/$ac_cr\$//; s/$ac_cr/$ac_cs_awk_cr/g" else cat fi < "$ac_tmp/subs1.awk" > "$ac_tmp/subs.awk" \ || as_fn_error $? "could not setup config files machinery" "$LINENO" 5 _ACEOF # VPATH may cause trouble with some makes, so we remove sole $(srcdir), # ${srcdir} and @srcdir@ entries from VPATH if srcdir is ".", strip leading and # trailing colons and then remove the whole line if VPATH becomes empty # (actually we leave an empty line to preserve line numbers). 
if test "x$srcdir" = x.; then ac_vpsub='/^[ ]*VPATH[ ]*=[ ]*/{ h s/// s/^/:/ s/[ ]*$/:/ s/:\$(srcdir):/:/g s/:\${srcdir}:/:/g s/:@srcdir@:/:/g s/^:*// s/:*$// x s/\(=[ ]*\).*/\1/ G s/\n// s/^[^=]*=[ ]*$// }' fi cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 fi # test -n "$CONFIG_FILES" eval set X " :F $CONFIG_FILES " shift for ac_tag do case $ac_tag in :[FHLC]) ac_mode=$ac_tag; continue;; esac case $ac_mode$ac_tag in :[FHL]*:*);; :L* | :C*:*) as_fn_error $? "invalid tag \`$ac_tag'" "$LINENO" 5;; :[FH]-) ac_tag=-:-;; :[FH]*) ac_tag=$ac_tag:$ac_tag.in;; esac ac_save_IFS=$IFS IFS=: set x $ac_tag IFS=$ac_save_IFS shift ac_file=$1 shift case $ac_mode in :L) ac_source=$1;; :[FH]) ac_file_inputs= for ac_f do case $ac_f in -) ac_f="$ac_tmp/stdin";; *) # Look for the file first in the build tree, then in the source tree # (if the path is not absolute). The absolute path cannot be DOS-style, # because $ac_f cannot contain `:'. test -f "$ac_f" || case $ac_f in [\\/$]*) false;; *) test -f "$srcdir/$ac_f" && ac_f="$srcdir/$ac_f";; esac || as_fn_error 1 "cannot find input file: \`$ac_f'" "$LINENO" 5;; esac case $ac_f in *\'*) ac_f=`printf "%s\n" "$ac_f" | sed "s/'/'\\\\\\\\''/g"`;; esac as_fn_append ac_file_inputs " '$ac_f'" done # Let's still pretend it is `configure' which instantiates (i.e., don't # use $as_me), people would be surprised to read: # /* config.h. Generated by config.status. */ configure_input='Generated from '` printf "%s\n" "$*" | sed 's|^[^:]*/||;s|:[^:]*/|, |g' `' by configure.' if test x"$ac_file" != x-; then configure_input="$ac_file. $configure_input" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: creating $ac_file" >&5 printf "%s\n" "$as_me: creating $ac_file" >&6;} fi # Neutralize special characters interpreted by sed in replacement strings. 
case $configure_input in #( *\&* | *\|* | *\\* ) ac_sed_conf_input=`printf "%s\n" "$configure_input" | sed 's/[\\\\&|]/\\\\&/g'`;; #( *) ac_sed_conf_input=$configure_input;; esac case $ac_tag in *:-:* | *:-) cat >"$ac_tmp/stdin" \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 ;; esac ;; esac ac_dir=`$as_dirname -- "$ac_file" || $as_expr X"$ac_file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$ac_file" : 'X\(//\)[^/]' \| \ X"$ac_file" : 'X\(//\)$' \| \ X"$ac_file" : 'X\(/\)' \| . 2>/dev/null || printf "%s\n" X"$ac_file" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` as_dir="$ac_dir"; as_fn_mkdir_p ac_builddir=. case "$ac_dir" in .) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_dir_suffix=/`printf "%s\n" "$ac_dir" | sed 's|^\.[\\/]||'` # A ".." for each directory in $ac_dir_suffix. ac_top_builddir_sub=`printf "%s\n" "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'` case $ac_top_builddir_sub in "") ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_top_build_prefix=$ac_top_builddir_sub/ ;; esac ;; esac ac_abs_top_builddir=$ac_pwd ac_abs_builddir=$ac_pwd$ac_dir_suffix # for backward compatibility: ac_top_builddir=$ac_top_build_prefix case $srcdir in .) # We are building in place. ac_srcdir=. ac_top_srcdir=$ac_top_builddir_sub ac_abs_top_srcdir=$ac_pwd ;; [\\/]* | ?:[\\/]* ) # Absolute name. ac_srcdir=$srcdir$ac_dir_suffix; ac_top_srcdir=$srcdir ac_abs_top_srcdir=$srcdir ;; *) # Relative name. ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_build_prefix$srcdir ac_abs_top_srcdir=$ac_pwd/$srcdir ;; esac ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix case $ac_mode in :F) # # CONFIG_FILE # _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # If the template does not know about datarootdir, expand it. # FIXME: This hack should be removed a few years after 2.60. 
ac_datarootdir_hack=; ac_datarootdir_seen= ac_sed_dataroot=' /datarootdir/ { p q } /@datadir@/p /@docdir@/p /@infodir@/p /@localedir@/p /@mandir@/p' case `eval "sed -n \"\$ac_sed_dataroot\" $ac_file_inputs"` in *datarootdir*) ac_datarootdir_seen=yes;; *@datadir@*|*@docdir@*|*@infodir@*|*@localedir@*|*@mandir@*) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&5 printf "%s\n" "$as_me: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&2;} _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_datarootdir_hack=' s&@datadir@&$datadir&g s&@docdir@&$docdir&g s&@infodir@&$infodir&g s&@localedir@&$localedir&g s&@mandir@&$mandir&g s&\\\${datarootdir}&$datarootdir&g' ;; esac _ACEOF # Neutralize VPATH when `$srcdir' = `.'. # Shell code in configure.ac might set extrasub. # FIXME: do we really want to maintain this feature? cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_sed_extra="$ac_vpsub $extrasub _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 :t /@[a-zA-Z_][a-zA-Z_0-9]*@/!b s|@configure_input@|$ac_sed_conf_input|;t t s&@top_builddir@&$ac_top_builddir_sub&;t t s&@top_build_prefix@&$ac_top_build_prefix&;t t s&@srcdir@&$ac_srcdir&;t t s&@abs_srcdir@&$ac_abs_srcdir&;t t s&@top_srcdir@&$ac_top_srcdir&;t t s&@abs_top_srcdir@&$ac_abs_top_srcdir&;t t s&@builddir@&$ac_builddir&;t t s&@abs_builddir@&$ac_abs_builddir&;t t s&@abs_top_builddir@&$ac_abs_top_builddir&;t t $ac_datarootdir_hack " eval sed \"\$ac_sed_extra\" "$ac_file_inputs" | $AWK -f "$ac_tmp/subs.awk" \ >$ac_tmp/out || as_fn_error $? 
"could not create $ac_file" "$LINENO" 5 test -z "$ac_datarootdir_hack$ac_datarootdir_seen" && { ac_out=`sed -n '/\${datarootdir}/p' "$ac_tmp/out"`; test -n "$ac_out"; } && { ac_out=`sed -n '/^[ ]*datarootdir[ ]*:*=/p' \ "$ac_tmp/out"`; test -z "$ac_out"; } && { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: $ac_file contains a reference to the variable \`datarootdir' which seems to be undefined. Please make sure it is defined" >&5 printf "%s\n" "$as_me: WARNING: $ac_file contains a reference to the variable \`datarootdir' which seems to be undefined. Please make sure it is defined" >&2;} rm -f "$ac_tmp/stdin" case $ac_file in -) cat "$ac_tmp/out" && rm -f "$ac_tmp/out";; *) rm -f "$ac_file" && mv "$ac_tmp/out" "$ac_file";; esac \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 ;; esac done # for ac_tag as_fn_exit 0 _ACEOF ac_clean_files=$ac_clean_files_save test $ac_write_fail = 0 || as_fn_error $? "write failure creating $CONFIG_STATUS" "$LINENO" 5 # configure is writing to config.log, and then calls config.status. # config.status does its own redirection, appending to config.log. # Unfortunately, on DOS this fails, as config.log is still kept open # by configure, so config.status won't be able to write to it; its # output is simply discarded. So we exec the FD to /dev/null, # effectively closing config.log, so it can be properly (re)opened and # appended to by config.status. When coming back to configure, we # need to make the FD available again. if test "$no_create" != yes; then ac_cs_success=: ac_config_status_args= test "$silent" = yes && ac_config_status_args="$ac_config_status_args --quiet" exec 5>/dev/null $SHELL $CONFIG_STATUS $ac_config_status_args || ac_cs_success=false exec 5>>config.log # Use ||, not &&, to avoid exiting from the if with $? = 1, which # would make configure fail if this is the last instruction. 
$ac_cs_success || as_fn_exit 1 fi if test -n "$ac_unrecognized_opts" && test "$enable_option_checking" != no; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: unrecognized options: $ac_unrecognized_opts" >&5 printf "%s\n" "$as_me: WARNING: unrecognized options: $ac_unrecognized_opts" >&2;} fi BiocParallel/configure.ac0000644000175200017520000000106614516004410016411 0ustar00biocbuildbiocbuildAC_INIT([BiocParallel], 1.32.4) dnl package name, version CXX=`"${R_HOME}/bin/R" CMD config CXX` if test -z "$CXX"; then AC_MSG_ERROR([No C++ compiler is available]) fi CXXFLAGS=`"${R_HOME}/bin/R" CMD config CXXFLAGS` CPPFLAGS=`"${R_HOME}/bin/R" CMD config CPPFLAGS` AC_LANG(C++) dnl check librt for shm_open support dnl R usually is linked to librt but not always AC_SEARCH_LIBS([shm_open], [rt]) AC_CHECK_HEADER( [sys/mman.h], [], AC_MSG_ERROR([cannot find required header sys/mman.h])) AC_SUBST(LIBS) AC_CONFIG_FILES([src/Makevars]) AC_OUTPUT BiocParallel/inst/0000755000175200017520000000000014516024320015077 5ustar00biocbuildbiocbuildBiocParallel/inst/RMPInode.sh0000755000175200017520000000022214516004410017045 0ustar00biocbuildbiocbuild#! /bin/sh ${RPROG:-R} --vanilla < ${OUT:-/dev/null} 2>&1 loadNamespace("Rmpi") loadNamespace("snow") BiocParallel::bprunMPIworker() EOF BiocParallel/inst/RSOCKnode.sh0000755000175200017520000000041614516004410017164 0ustar00biocbuildbiocbuild#! /bin/sh # the & for backgrounding works in bash--does it work in other sh variants? 
${RPROG:-R} --vanilla < ${OUT:-/dev/null} 2>&1 & loadNamespace("snow") options(timeout=getClusterOption("timeout")) BiocParallel::.bpworker_impl(snow::makeSOCKmaster()) EOF BiocParallel/inst/doc/0000755000175200017520000000000014516024320015644 5ustar00biocbuildbiocbuildBiocParallel/inst/doc/BiocParallel_BatchtoolsParam.R0000644000175200017520000000510014516024235023464 0ustar00biocbuildbiocbuild## ----style, eval=TRUE, echo=FALSE, results="asis"-------------------------- BiocStyle::latex() ## ----setup, echo=FALSE----------------------------------------------------- suppressPackageStartupMessages({ library(BiocParallel) }) ## ----intro----------------------------------------------------------------- library(BiocParallel) ## Pi approximation piApprox <- function(n) { nums <- matrix(runif(2 * n), ncol = 2) d <- sqrt(nums[, 1]^2 + nums[, 2]^2) 4 * mean(d <= 1) } piApprox(1000) ## Apply piApprox over param <- BatchtoolsParam() result <- bplapply(rep(10e5, 10), piApprox, BPPARAM=param) mean(unlist(result)) ## -------------------------------------------------------------------------- registryargs <- batchtoolsRegistryargs( file.dir = "mytempreg", work.dir = getwd(), packages = character(0L), namespaces = character(0L), source = character(0L), load = character(0L) ) param <- BatchtoolsParam(registryargs = registryargs) param ## -------------------------------------------------------------------------- fname <- batchtoolsTemplate("slurm") cat(readLines(fname), sep="\n") ## ----simple_sge_example, eval=FALSE---------------------------------------- # library(BiocParallel) # # ## Pi approximation # piApprox <- function(n) { # nums <- matrix(runif(2 * n), ncol = 2) # d <- sqrt(nums[, 1]^2 + nums[, 2]^2) # 4 * mean(d <= 1) # } # # template <- system.file( # package = "BiocParallel", # "unitTests", "test_script", "test-sge-template.tmpl" # ) # param <- BatchtoolsParam(workers=5, cluster="sge", template=template) # # ## Run parallel job # result <- bplapply(rep(10e5, 100), 
piApprox, BPPARAM=param) ## ----demo_sge, eval=FALSE-------------------------------------------------- # library(BiocParallel) # # ## Pi approximation # piApprox <- function(n) { # nums <- matrix(runif(2 * n), ncol = 2) # d <- sqrt(nums[, 1]^2 + nums[, 2]^2) # 4 * mean(d <= 1) # } # # template <- system.file( # package = "BiocParallel", # "unitTests", "test_script", "test-sge-template.tmpl" # ) # param <- BatchtoolsParam(workers=5, cluster="sge", template=template) # # ## start param # bpstart(param) # # ## Display param # param # # ## To show the registered backend # bpbackend(param) # # ## Register the param # register(param) # # ## Check the registered param # registered() # # ## Run parallel job # result <- bplapply(rep(10e5, 100), piApprox) # # bpstop(param) ## ----sessionInfo, results="asis"------------------------------------------- toLatex(sessionInfo()) BiocParallel/inst/doc/BiocParallel_BatchtoolsParam.Rnw0000644000175200017520000002212214516004410024025 0ustar00biocbuildbiocbuild%\VignetteIndexEntry{2. 
Introduction to BatchtoolsParam}
%\VignetteKeywords{parallel, Infrastructure}
%\VignettePackage{BiocParallel}
%\VignetteEngine{knitr::knitr}

\documentclass{article}

<<style, eval=TRUE, echo=FALSE, results="asis">>=
BiocStyle::latex()
@

<<setup, echo=FALSE>>=
suppressPackageStartupMessages({
    library(BiocParallel)
})
@

\newcommand{\BiocParallel}{\Biocpkg{BiocParallel}}

\title{Introduction to \emph{BatchtoolsParam}}
\author{
    Nitesh Turaga\footnote{\url{Nitesh.Turaga@RoswellPark.org}},
    Martin Morgan\footnote{\url{Martin.Morgan@RoswellPark.org}}
}
\date{Edited: March 22, 2018; Compiled: \today}

\begin{document}

\maketitle
\tableofcontents

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

The \Rcode{BatchtoolsParam} class is an interface to the \CRANpkg{batchtools}
package from within \BiocParallel{}, for computing on a high performance
cluster managed by a scheduler such as SGE, TORQUE, LSF, SLURM, or OpenLava.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Quick start}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

This example demonstrates the easiest way to launch a large number of jobs
using batchtools. The first step is to create a \Rcode{BatchtoolsParam}
instance; the computation is then run with \Rcode{bplapply} and the results
collected.
<<>>=
library(BiocParallel)

## Pi approximation
piApprox <- function(n) {
    nums <- matrix(runif(2 * n), ncol = 2)
    d <- sqrt(nums[, 1]^2 + nums[, 2]^2)
    4 * mean(d <= 1)
}

piApprox(1000)

## Apply piApprox over a vector of inputs
param <- BatchtoolsParam()
result <- bplapply(rep(10e5, 10), piApprox, BPPARAM=param)
mean(unlist(result))
@

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{\emph{BatchtoolsParam} interface}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

The \Rcode{BatchtoolsParam} interface allows intuitive use of your high
performance cluster with \BiocParallel{}. The \Rcode{BatchtoolsParam}
class lets the user specify many arguments to customize their jobs; it
is applicable to clusters with formal schedulers.

\begin{itemize}

\item{\Rcode{workers}} The number of workers used by the job.

\item{\Rcode{cluster}} We currently support SGE, SLURM, LSF, TORQUE and
  OpenLava. The \Rcode{cluster} argument is supported only if the R
  environment knows how to find the job scheduler. Each cluster type
  uses a template to pass the job to the scheduler. If no template is
  given, the default template from the \CRANpkg{batchtools} package is
  used. The cluster can be accessed with \Rcode{bpbackend(param)}.

\item{\Rcode{registryargs}} The \Rcode{registryargs} argument takes a
  list of arguments to create a new job registry for your
  \Rcode{BatchtoolsParam}. The job registry is a data.table which
  stores all the information required to process your jobs. The
  arguments supported for \Rcode{registryargs} are:

  \begin{description}

  \item{\Rcode{file.dir}} Path where all files of the registry are
    saved. Note that some templates do not handle relative paths
    well. If nothing is given, a temporary directory will be used in
    your current working directory.
  \item{\Rcode{work.dir}} Working directory for the R process running
    the jobs.

  \item{\Rcode{packages}} Packages that will be loaded on each node.

  \item{\Rcode{namespaces}} Namespaces that will be loaded on each node.

  \item{\Rcode{source}} Files that are sourced before executing a job.

  \item{\Rcode{load}} Files that are loaded before executing a job.

  \end{description}

<<>>=
registryargs <- batchtoolsRegistryargs(
    file.dir = "mytempreg",
    work.dir = getwd(),
    packages = character(0L),
    namespaces = character(0L),
    source = character(0L),
    load = character(0L)
)
param <- BatchtoolsParam(registryargs = registryargs)
param
@

\item{\Rcode{resources}} A named list of key-value pairs to be
  substituted into the template file; see
  \Rcode{?batchtools::submitJobs}.

\item{\Rcode{template}} The template argument is unique to the
  \Rcode{BatchtoolsParam} class. It defines how jobs are submitted to
  the job scheduler. If no template is given and a cluster is chosen, a
  default template is selected from the \CRANpkg{batchtools} package.

\item{\Rcode{log}} The log option is logical (TRUE/FALSE). If set to
  TRUE, the logs in the registry are copied to the directory given by
  the user in the \Rcode{logdir} argument.

\item{\Rcode{logdir}} Path to the logs. Supply it only when
  \Rcode{log=TRUE}.

\item{\Rcode{resultdir}} Path to the directory where job output files
  are saved.

\end{itemize}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Defining templates}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

The job submission template controls how the job is processed by the
job scheduler on the cluster. Obviously, the format of the template
will differ depending on the type of job scheduler.
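The values supplied as \Rcode{resources} are substituted into the
template at submission time. As a minimal sketch (not part of this
vignette's examples), assuming a SLURM cluster whose template
references placeholders named \Rcode{walltime}, \Rcode{memory} and
\Rcode{ncpus}, a param might be constructed as:

<<resources_sketch, eval=FALSE>>=
## Hypothetical resource names; they must match the placeholders
## in the template actually in use.
resources <- list(walltime = 3600, memory = 4096, ncpus = 1)
param <- BatchtoolsParam(
    workers = 4, cluster = "slurm", resources = resources
)
@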
Let's look at the default SLURM template as an example:

<<>>=
fname <- batchtoolsTemplate("slurm")
cat(readLines(fname), sep="\n")
@

The \Rcode{<\%= =>} blocks are automatically replaced by the values of
the elements in the \Rcode{resources} argument in the
\Rcode{BatchtoolsParam} constructor. Failing to specify critical
parameters properly (e.g., wall time or memory limits too low) will
cause jobs to crash, usually rather cryptically. We suggest setting
parameters explicitly to provide robustness to changes to system
defaults. Note that the \Rcode{<\%= =>} blocks themselves do not
usually need to be modified in the template.

The part of the template most likely to require explicit customization
is the last line, containing the call to \Rcode{Rscript}. A more
customized call may be necessary if the R installation is not standard,
e.g., if multiple versions of R have been installed on a cluster. For
example, one might use instead:

\begin{verbatim}
echo 'batchtools::doJobCollection("<%= uri %>")' |\
ArbitraryRcommand --no-save --no-echo
\end{verbatim}

If such customization is necessary, we suggest making a local copy of
the template, modifying it as required, and then constructing a
\Rcode{BiocParallelParam} object with the modified template using the
\Rcode{template} argument. However, we find that the default templates
accessible with \Rcode{batchtoolsTemplate} are satisfactory in most
cases.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Use cases}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

As an example of a \Rcode{BatchtoolsParam} job run on an SGE cluster,
we use the same \Rcode{piApprox} function as defined earlier. The
example runs the function on 5 workers and submits 100 jobs to the SGE
cluster.
Example of SGE with minimal code:

<<eval=FALSE>>=
library(BiocParallel)

## Pi approximation
piApprox <- function(n) {
    nums <- matrix(runif(2 * n), ncol = 2)
    d <- sqrt(nums[, 1]^2 + nums[, 2]^2)
    4 * mean(d <= 1)
}

template <- system.file(
    package = "BiocParallel",
    "unitTests", "test_script", "test-sge-template.tmpl"
)
param <- BatchtoolsParam(workers=5, cluster="sge", template=template)

## Run parallel job
result <- bplapply(rep(10e5, 100), piApprox, BPPARAM=param)
@

Example of SGE demonstrating some of the \Rcode{BatchtoolsParam}
methods:

<<demo_sge, eval=FALSE>>=
library(BiocParallel)

## Pi approximation
piApprox <- function(n) {
    nums <- matrix(runif(2 * n), ncol = 2)
    d <- sqrt(nums[, 1]^2 + nums[, 2]^2)
    4 * mean(d <= 1)
}

template <- system.file(
    package = "BiocParallel",
    "unitTests", "test_script", "test-sge-template.tmpl"
)
param <- BatchtoolsParam(workers=5, cluster="sge", template=template)

## start param
bpstart(param)

## Display param
param

## To show the registered backend
bpbackend(param)

## Register the param
register(param)

## Check the registered param
registered()

## Run parallel job
result <- bplapply(rep(10e5, 100), piApprox)

bpstop(param)
@

\section{\Rcode{sessionInfo()}}

<<sessionInfo, results="asis">>=
toLatex(sessionInfo())
@

\end{document}

BiocParallel/inst/doc/BiocParallel_BatchtoolsParam.pdf
5JBݰDbY \6@z%8SsO Q lݝ% VJ+ƺ E1VhcE]=KɻjijlDS#r)s2;8b9긣gZ=+Le&B?b >Ck4nM gC.G J$)LE*>jn݃~[h4?VuOc-ќWYlʏ'Q6tZ/9?Q< }udE_V,.2&|؉:[F $cE9çu+xͮ2_s 9۬^ "aҔ1:ɬY&yK>w{C(Ey=@rW.ScjNr6H4U'7}A,.o.۲8qhf%?h6*֡gC#7M\;oTgs]؆ٷ;*KP6;&+'$ kk>c0YaX~ϸ! M޴I=+)Lq.nkٓfá5+T5ח$Y:ұY/!~Y?"ZV{PKWL ѰDPA}O``Oj?Ʈ$Ilξumer{g=kDҙgrD,]+ew.b%FSn 1hlpND2=+o ȞZ~/8IOk4KS|cYS{q"V|.Ө$?=Vx iNeL4/N3aPiqrD[[sf µ|^$& x(ɁGؗS$L0cIφ|a8F-Æh)8KEUXEq~Z2;',jT&3UE2٥*8XI ]al;TYC8Lfa5>B)CƐ)$KpD4| .:ЮbV=\IS/GR^ETp.\QsD(OSf>)SIA)& C7*#*:h>}Vf U5NrL,->?K9$^o6TNur,7ɴVꨆWMWk \|2rAxL(ϳoR96`htړe0gБ4!̿PHʄ³^J<( iD'gG:J8qwoy b!q:Bbq^ߨ&3Aŧ nt7rň5t|2|"|_Jx1C&@;n-9v>-'Nj=aw~,y.":tFA x 4Q/jTLo}ɎJ@ iN*2ӊ³a 0̡NE+״~$}cd/Jykc*\U]G  IRt#6Qh[f tX2]Nh$[=oBoD 1 2a{lsy ?(O7vZd gDO>D(hqJ}Am*Q:Sp~(/9"-tD"CɜEgɸw׭fSRRq;X/m)ۀ%[kt*'8$eH)*|L2m0etzą c ++d}v0USR J.'I^'ro좸ܝb#͎ _Yr?Cn/&5u MtldqTYW*$r,Jj-Gvs%aĬ}+,T ?]Gm?r[NAg==?9.,BQ,T걊 EF7tÒI5øܶr0ZyA( &`CC 90͂Q# gkBMt1&|zŐs>]8W34Y@q/>1M(d%* ۽o Z?0C5̖a[ L'wj$^VcxFn! Ƅ;V1ڰG@GR.gFsi,'hb++L e 8{@=R>q _.?ewvG~<lgR>\bo\>5DԗgE3~sTX3sdzl! t NFb !TMf"WڂL5֡0J[[3H5AN}O [o0}#`Nj6fJ=ClYٌr9{"]5+d-ܭ|OU)3B:S_ƬQF\n mUf7 mj`^6x:c:Y?,jBXOԿ`?w oCHbek(WK )' W3e+أŇ_ Y=j^2o7/s]k@7:ztGD_2Vm+TasjZfSjkN_O8ISlEml<̤fCCn x %4~8ܕ" ,SDڿ G|+QJ}[dEjuL1~T )$9vg޽ȭ2!}Zz9sEdr'_+ӁOΠ9߰Ï/״|&H$8_j1rwzj ;W]1˻q+C>K4/PZeγ$%%T-!EɢU?~i#x k#D}}H~qc>i((U 0`SDdcdG3ա,9_GJ!'YTZd'!:0ȗciaɀr;a&?h7zMGopw`2$4vtFBi yڈmHE"kO^?܍#%Th@']bI|# y3_IѕkgĂv(r ' 6E6) ;usW݀*(J z+e5Z*wkEMc׸#Hڅ_G/C݂&k#d. 
%9>?89lh) XS%pg]FزXMsHP}pi5/&J00_{ WL͠e~َ3ᢍ-`h>l̉ V7͢'h6'}EZĩCly (>1W߾h$H+Qe3"E濼8_UVtK18"~W|367=Cv.;E{H;z:o/׆)V6ǁ0W-/ԃ c+3ݠbM QewF2&C!ٌ:|!I*.9R&m>fOE~+b*ܝ􀀒/`^P)7[JM3uvxt*'𹯘T#CR>PO}'Yz4eHp4.Mɟ~cMhr4זZ-$Sԇ?0JQbZ >@emﶊ-T|'v$s#VbU~LXVEIL]t,ʃټn&C!Ck'"$dR_gjљ|*{ Ō}+3}pKCʣ"a Xh2S`M"*;]?'ȒaeY2ɧ!4,qy}5OB҄ZC"p3o$qڨ&jX5u;Gz]E: g{G.$ Eld|XT`^dTNީFa.%8Iz :䕣QX634Ͼ%ZϬ e«ـR48Ol-#<"зK񧙡t  M'U8=6i Qٳtg<A51r Qw̕(Ug@ endstream endobj 115 0 obj << /Length1 2012 /Length2 22435 /Length3 0 /Length 23713 /Filter /FlateDecode >> stream xڴeT5wwZ݃ x)bxqb݊CG}=#$s.{w1BM,ndq1 4@lN66.DjjI0 $eT,66~Dj,-%3@gPuruc67s}3Aֶ [7ONf?DK,<]mf K @9@3+@ ҐVȪhjг%pwvvI M-Y&4_zoP|'\IZS\SOU OF 𷴷P+_t6nn,n,N`kgiغ\͢Yi!P!7_kskb+)@kH!c*°u g {6kha;>Ƭ[|i~NȘBG-FsP XCAd\<JQiJ) TX:{u\d>۟ܚ&y?9P&w۽ϊrN+{ՅGWx*mAug0W>U.( VDլ'*I<BE_ɰ\Dwx,U7Q {36%\Qc?{ەFǞdSjfRƖ#r" { MF+YehA7+>:#\kX.ZGe:,n*]-`91 3"^fk%0h5rH;0:"R ~KuG\@i ܐD mfdxn(Cv QP9ແ=>Nԇ$ [ [?= --}?A6\6@y7|%O>WTqiw\(6%ꂪ?+6r=u#uyt+x'w zhxw#5k~a8D×<}ޡǻ + V,ΒmIOιqwrJ cKq_w~hgVI|EL Fsb^ qY4 :Jx1Bv„ Rncc>'ixgA_awa"imJBuq2 zdwUwLVb x3pc䇘WKb%7~Y5_ ^r+EaEڢ0#3eK A6w"jlFX5.+V/;tڲQkԅg> |m-0r2WIMRwCżBbvO6c\̙"ߋ;R= ` #ШNˏe鵣3T*U0?2SvJ>كTp)zOfqNݒdϛ۪RGydvc4]PvP7*~UaBy78Z_k%*h|)lʒm'KV"aV= u&%?&OzpQȇC&}v6V`R 1k7iw@e3ú*k+hg&{aݹO~Q"~&?\ /r:e&M| ֙Y^gqc$ے,5|͝u scLO#|gl`޸0I#~PL;qJ2aଭY V";S@(rsLuTNP^7PB5.~QݻUj:ZBNJL):stLqvTuxS_?Zoo*0D)@[,59 ҝ1eNҢ2c-*.(i}1Y(z;k쁚{*%,ܞ>tklf,VcrgDU]C;[x׫S]V?'l)&aXom +ҝ"7uL3i!73ISFe&`ͧI Orp6F=7M92§u‰a@_hPW\J䐬Btb'Ei~=;C@l@GS..~d\+ݮ"p5zQ h> nFЦ5: Vwx%A=TrCA 8asғW5A=' HɉZ6ũ>{6{/ks: h2ca,g3 <>B1JĒLL+qIs# ďt_ک& 3b]Z!~JF3ZI\.[ %/ey.6=W:C)C5aWlw=ܖ::eXה&p:7=v_6i"Z*bGH❆<{{Rq8F |赴qqH&EF-c`Fj= ɩ \9ģ4hP~w,TĉN˯jfC^ zqmWdqb@9FΛWYn,!wDe{cy}=DAq?-U bE'CI1-Y&J;*!M0+1Y 9w 3lr)i [j#4 A׋#%~BXe uĩ+%̡9ɌBlWf s΅jlkW.8?Mh2 !0ri?d&>Lv%:Kczkڧxa{ތy _cCs~Y~'3jS}HШIձW^-wܘsDiI%CӒcS0NaR.N'ezkbB79g:wq:ޞ+*taIR@52lDau.7XBn5}YlX\ ~QU *QU)u`y2Ŝkݮg4xsZ 5ј%@E_D OHSك'atebC#ydv $`A zD{! 
dhh|D܁8/ȸuPOI͋\A }Xz6YѤݸ3hs_snu5 k" ̌GPia[wWz ~I"x9b~Z`ՖIISMF!w W`2ah(4*@?\bFb|R]@͔ yAlp :  5!(8*RI@򆴒جb*,n9[* )B:/kӪ^ 4uL?L^]S^XZ:dǩaD 91Bh ֪"H-[m>m/b UgFp)r-_JKM7N(h8mS7g--#|HڸmYuXI!DаCfn\˦pϝҫ)*(;W? F #Wޤ8lp|TjWUAl %c8Q績8Z!t|rc7FyޑS[EU1'oNEhH xF^QDtHKs~#jwks:ʭDhӈ)PLHκyf͐>X0~67: GvIbgsE] ̜HϲiF%@@^4t 6p7v^j@\样Zl<7C ܈D.fH]{,9WEW@j;C[A 7zcj*>r{S[e$JaN%'t肱y ~kyQp+ "#F;#xҢlĀHQXp \/*,KmplDp%J ([ZSNpOcpҐiiC^` p Kx'tۯ)t}S`q#ub,{6A6&<g;R* iw8PD} +0ڛeN$ J tXFQ^} j@imP-ٸv Нϗ~% !c$֖ 0\y ٿԶXOֵ&Wg1a@5E;1I>RMU\XR1zi 9]K"zy*uꤩyCrw@|jZPQ/}ː jWWoQXzCE X[bjvӌ0]"c!hHõWJuZ6:qNzGTɹ}-ifwhpjwǦavOˈa26;fj>FaۍGgD/be#q}C2CktSe a`%S75XN)]HLUP*E9!?E\b~Q2d5xKެ+TUhA93>:Y 08?p)S[yB|pV25uZ(*՘n{3F-IR((E|y]JA:Ml=tC5ӱi1K磻v+)kn >0?ơ)SlՈϊ"jjvXJ1Z,tԔ-L0lW?.Vl5yV~;IJ]p5N/r \OHx76Gn.L g:aDBiUT(Hqusu-ҫ꿷+MaP@Dni FFp!JHGJ"`INĒPc?cC¡uQ)NP_}Y(z}98R>7:Oə>1epx2Xsf|P8>v227K Q'OyAJ#U }N#(6r&%$Fr%t~;mIdBܦF }Tt܏SaoYfe~!cT.ZTPxd :b2L.~]oaׂBHY4 =woj'T76n6'$]czŇRn]^V1}#yy2h^y6%f>z397BHDL_́YE 4Ģ8IE1xP9lޢk:$j=S 译 ^b(Q ҽnN mw98yWuV!vZqH"z oHdN< )&\ -f7 1"ȉoS94qˠ_o)E c=*?]w#C[;;H!Qʞm͘|b.ZBu-٧<IJ_h˼so̦xAO\wº5_U t+;,`jF$yjF&Q,ʨy&p'2Գ* dFknk_i.n7/?o.iv$z)^Ž7?|KDڻƣi5LedyI8m%g{#; 9t#T>B#"QG6Q5w,ĥ5!ɦ_l)vJ"x5g|I`fq": )|~9}>2 'uJ[UͷLDwP쎷 S%ſ( ߤ #Ǽl[ sŨgؾ_Ӭ1yX:u g?+k>A\r/G`:ؔQ9Ӣt.I38:ʉ 2n^tjq[ʊ&РOp xp}J41`'8`D&SwMob'XoJup3F#b[BEjM"TB%[\SЃ5ĂUw3AxO+qIzaǣ5+.Bj Hf.*6(lav̶Jn#۳Un!;KJ [ƟBtH q6}|G"߭G ­\- wlMx_KL0s{0̡aF=b}_7+h/c鐠 +Wk*'l;Kө'By~7QDrD(qkڤUOƪonC~^|aB KnfڳMepހ #ɠao)µAYiwah`+ B-%RXBb9ݼ/8r7kƉ\0N.9v%A2Y(45A)ᒶD< /fKJ=ʦ-K1Wz}7ϔk¿Ȁ М08)8hr^4zPCK !ܑ`F(X3BNɧz2& :C5iM{lQ=zZ! 
U#dE%'O 9į '2:86Mc< e{wn#~Q2a)4U5/R|#{~_ʳGeq'?K&'**d"΁ZCY ?E?([}Tͯ J>lzl ΪՏ/ځ8.o&Ztvr# *j<8#& w&6ZݴBt =k!U`Mυ}E/%h]Wߋw z8⇰uZ@+ϩGv߬I q]^.5?h07+,_&0 ؃-^ y 4șQH ;Զbu`h,ʵve1"٥B71ͱ;9Ys[K_%j'u[K&x?n!lٺo(&pi6!&f5[$o/ҙmpb !.pt^\td؃]$wXv1@!챟Ĝ1k-LVgfX4*{OoӀI;S詆N>sS ¬[?l̾e\rf'{/L+z~.mWnZC@ɶgcu_1_/`0ž!~)tHe|dvWt,9Nf ϨMcePD2egN[UYFb#cqWeUΖ~X acv\Z[Zxna[V AUFH q)`U6`Ls>.:戉@G40'#SH#'!M+zJqtxp ;k0L u㜢GZ#Eg3CfH6DNGDn٦K+W$I2{!:oϚ$mlrC/w+~k#gv:h>]2_02M MFQ(.K̰S Ρ2XczKh?9vhb-ilLc9hsr`>_R$AIfi癹u v] iL#7 èmxGwګEӚ$_{b736V|zGc͊^}յ:F-' µ$V|(58n$>W4܂εHK?'^@H˱&N!#GX/ćXB# ”tWSz%U5k{ix vjZįK)|ԡOI?B7y VIIDqnEVB3,IQ>\0hI@5 E4#!?)Ki 8ΰ&8C| ͟!=BH ַJZt,nN0$YG`X8X#qݑ//4 JbX}S''3Ԟ:\\Rՠ#-ta$<ؔ1.UӡZ!)tyG.Ҟq&(_iw-.:¬R9>c1SG48$?}e =#][=5D؎2&r׃lm3|M5z\oLҦx"jL dcNw#x4϶eЧlt{ӎ?era݉jYݳ5o`/SI1۳KׇYnV9gq3 ѓj8q-3yB%~՟]Rנ}A1qZ4Ti#,{sV2Z!\RC-1m?>hlmd4U0Qr4胡A`4Yؙp1L˳7s,QUJ GWd{츔RUn`?"߻Q5b]u1$DRZ;~LV,&]nAk (_-zuH3}IYg*s/H /pƌLOz-AlD{L(8k(H&3~jN~\֔( cR2MI!>G%!S 5-,Fka%,EwC"@(XM l6+xi4*40-:8^-'C̦nb RKx9A9KG?~qZUi؇y)|!TfmZ:)ّh7Ԡl4*T"2`EڣeGS"";wQӊ ZĖ6z*dx`h&$ i1 [qvD XW~Ifc]g>d_"131\|jǐEH^[8.Cmmq+#VN`dD_ݠ &3,kuBTxatC5ƦCyG"S>}mt$Dwk/Ax|JAEEU'-qHȖnw(Lykb,n}dH&zpJu.f]OvvV}w!P/T1ⴽBMM-3Gb;ԧ~Q {XCGR?Sg 3P?M\ne #[amP sgpMC-)YCqZ!BvWw<3FX̬AiAI_ZK&^=#F)Q$ϗc52LZnnC16nsk UlO\\B$[}S=47D1]6k kڢV`j=~ TASGlA L_ڥ)4 2AZK6-ޙ[$TXGI"!) 
(TI1+rUje4u]-"6ouVfrav0g /(o;o24,h XGl6&t&~j Y-Y}[~N|ǭW:ϲ|Z}5ˏİ2v3)\4"oV_oz89SiW2 ;:ԗdOTՃUSkgn՜5ΆD8*"C~wH-$k@fOȲ]Qz#Yl|mdkgPQ%2h;7qɟZ3 n7{ʖ:yA縹r2LJ-X[W۟l,%on y#v~l1)zA":h8)X/CH^( IMjmGGUD1i)1xhN GL{iQdL}'+1X%SY‘`sisz.~ڈcʵho]E/>>}5Mz>!LE1Z9Q..#ӋTRC l mZ^;$+IOCOԻAq棭pi#9/Ejk3e:"#A%0uiv8>C*>*VjԴACaY 5P_[VVm#Lq{J=dܬc=k~54/uEHA'N )Wetb8s8h_;kI.d_8:.my@¶F‹3H8sY ӎޏ \` „!r1M J'Xh?zaLչy±?{ ,Z G#B{or0‰T@ƌj`C?R#Q{E#W0&E `f%(h 8`U(( Ic6vQ0^]I:(tPklD\1ˣTT$,Ig)O՜QvF& XiyAnt9'dOCws+rfL`Z>,u/%5=?~./wpAv' ǫ) De"0fxڇ`he'5kS ;WW&;v\%{ji$Xq$ 'fx>b~qۺ(l%'):|mMKJUޅؑۃ nǧM{Q&5-엜/ىgf=[0,IeBN_H9,Gie%TG&L4=9V^mW(w _]6zluv?NEĠ~D yES@cQT58:GR6fԇD5$/bw(N: ǃiN=Kq[ffq㱸T0Lp%hKl˦rnՓ~ @I OȷDߐQ[ fh ܃Im3Eyx'lxZ^{ AҞ:/?AD^U3ޘveMbZbXWaeu XuU[߇eqttE VġOQKyD,ү{^auXeTCP[ ~"}^-3C~761Fq34wGj^鱌"MM(?6Z3pS$?4߇Z2%N#_fҠ:~I9#\Tl~EI|H8|m iu7ۤ Aֻ6 6KHl3f:tdvg<9fP?HP W*|j4ZI74K29jŹwf .,MS 0( h $>V_ @wV6_`97s^bA-PPЎbw[vbdE?|+0{5rɮ߯:5H1a(#:Xz#{DZm ki(]~ N%i]YM2>3_p!F(lmSF*1;|#q75/g)9kٗX66|[Gb zs&r^2ϣ)]Z @ 'g 0&Z$(u_牉#E@)mݳ`^h,֨ү[ 2 !ÚJrPP^s :/I6cuſ2MB &5VN~3 . u|fȡ3P% Z*'3QM`v9 f5 [27B{b=e{[jN~2ƾ.eeR/ag;UE+荈Ecb=^\m1ZLrJt?BEB!Q0_"Y[_R"X:4; AG5i ]2%"ApZ0!4O#~k@өqw|Es]Y#7z0hS/H RHs2 yz<' Ħ1TC (([$TP{` sъ?q=Gp^Q%CoQ>_[e4/w% tuUK_y`3 1X %;R;cXZM0`th{R<=|6Vx=}L#E#󔣷E[fefM K^~[ d ]D!0X@WIXPБxY] 9UeE6.昮fـ&g|m BBwT8T 6v_),榬0t ^Gˉ[1aL4JYr7ɵ2by zwE v:;.)wQH%WUMŌ2i K뛘\֪o=hVF vK4[ClQJ{[D.pZNR~vTI]6b—٤YFE6הIMj?_o55ݍKkmӆy zۮy٧~ثqq;y醎H"+_laM['frQ915`/ AD:H"XdQi,z4#")bE`G iUˠvȑ±מWl~"oȱSU"͛Ju:$]*ic#3!]R㍍nJH}I#Oz_ׅ֫zIw'i/ȿS[WzPZ1wP8Q&4mkWPvikju)򆸂0f_&F:6$7x9=6Rqrn;{!v_s]a򸍨bFv ,$ \ΉtlMer RFYNo+VêO0'>s"j:cAZ@ؤ>4p + 45_ugk!2e}B1yy:QP.LhcŸi<i]V8 5yL.z&h235$L<}(< (_¬r1$# agzf " -%HwG! miX) ["8IxYEA_ki(sI-9w7׭=4kT0ZF]$g181a8kWS+"ʧgjn)X^K~qZ$tVb!S;S3! [v[T4eA 3:!cv2[NTcU?)o=P}:2g?4;iz7]?>_5G$ jcm#]٪3B~ Inqd-ɛ%FI̒f_X%yǚ&: E{4zTQcRukם=ao1 >|jв{LܷFyU^\)jS恝@d@-wMYșRM, v )CMei=K0Hʩ;LxH+ u)uBD;ՆI(0@! 
7+:,xfwgW̟Xto7ʫߑQǷ(OIb0f rKld؏ Zlp;k53HN@nw][sb^Bz_]VMcSWUĎIJEClE ʳ"IqUM[,?&unPi#uр5Zs՘:1 mk+B|}Eҕ~o'sj6%4ιq|T!@Og=BA312•LhJ׵iY³Vyc`TPX{8*#3hlV b2b N:H.r~?BWORǞ@O xZoFR:f{U; yPK[Ŝkಿ?w\A8u˄46&:~NZҼu #s6-;s&cΞD&M!WVZZu]sO ^hgCG|gmU36|%G=C[C 㽦h".Cgw421!AlŹ J}xNgh1XѷsU>L9^l$K l9(QGh! 8?b?UO u,LjƊ_>5>"<<74LҾ!]ON(5? r㩝yC8$q=ƒ{pK逖#\у`{tݬ<2; \+ [BbsZLPW`6{Wϋ `^3LIz@u蛡7 CO7%f[&j"GUȟo7u:U (j,Jt5st|ğ-p~@_,p8q t \Eh[, u}G4H HqճZmt{ݒG e#Lcz2Z\m]>^4@OhN$ޔPig*A8|,<(g\7ywPlPa8H*PK* )]Yݓy/CȋÅ^BZ FdM| #SV1O@Q EkJ{&-ʸ¯FVr^Z?񎵢~\Ng-g$Vw.1o#nez+XK' {q{*{d!Yt'ΕܢO^w≱rIisiU tSLw6沾uW=lRcj11Q{QgU Uo]Jz? 7e@IQmG_J><-ltN3"ֽgR)ڐ4Sc|܅%L`WBυ٩<]kB3Җg m1'61:Sn}e[ }녈0aKumZ+"b4tU咴d |! r]<GmۇBĊ3BuG_'/{}^e=䟬c;USăoAkwR6 -Jm ev5W~QkzC7ㆧu8i@tC0ψ &K ~[ ܃g8ǯխx~&׻%db J)A e}*X5^lzۂ9+eׇ2OI&o$RO(J٧(j8%ɣcv`֔'īaVY5] 87:U%/ѷ_wb{Yc %8Ӝi@>Ȓm^ 4 .EthfB~YK/'ȪztwLnq#zjوZlaNNk}dQbD+< ~~`r0L*ٜQHHh,֡ I! ?=a AxJ;L FSA8!5ыa>=z<Ѡ`4LȬ{Q} UuT}[SmF+5.Q\V8Q #AIၱ'N_;?I@t%ÌXaD%@lKgb@ [#_TKmn}(9{ MPի"F4kݏБ|+Fp AzNJ0doMf}I -$R.~ܰҴ zMT~gno2jtT8y𗩬hIgY(ꥂH(w}H~k;%wAX ͥ^wNFR|(ϴ| E4f J̹ze!;}WtY~8tb3(J5nՇI.Ou[Ov|tgz!Yh9O0ze [PmLd } 1(8uV]ﴭeiJzU>2Lc>/;)k{XcoEo2Q',=! % 0ńx%g>^4> stream xڴeT5 5www'8)-Zi^^z[723szudPj0Y:@`fv6~ȍY`acBpm@f` ?lP82@h 0(f@v?@ lnnmA@ 'goW[k3gțY;y@y%;i ś6fV'+&P!QWRՠgyO_Hhhj0$Ŕ5@m&WM k&owǿJRbzR-ѼwC\)YY===Y,N,ictr_]qY +E(ZAnAN2:K΃w!s:26fn**lA` d6Lޟ@K5H&)߭;ߙߴmru+#`e۽5))IKih2+Y] |bl<v>.J,%߻vC+N`'WogAN YZUݙU /w 9k ^63-i N+37f]݁n`ik~͂Ov9_{'mgҿRK'7hȪ~%lOIxotNfauZڂ-l%x9싁fvZ7侟>wއtspc -a835I,,mAn7"(pps|ߧϬXY@N;`w=yxG6 >^VE|VAlVA.@.wl;go=BtNӹ|u7~ P`_Elk]:+dv2`{mwOݕ-.`|11϶z_@\^ph(2UKrZ++ 5I/YCMIQ( -TK獥YRX6VHRPE7/"RvRر_gc<U:aEZ!g{ :aT$;1ά_laִ(wLιk2Xy+7ޙ ,nBO.+)}JMUĉag7M{#k9C5$.qRxijƣg_fQ)Ϥqt-j[i!=f6@7byYCjd{ - #aA=$dHb>&k!џ%{3z3E׺o# n^ޕ|W `,)jbY+8F ?FX'M&Y }A9n58!V|6\摭2ޡ !+5ƒw9)Usrtz,(: ?=?c~p7=S ɸ P 3ͅ]-ltf',*,.4D>R drh B#Ns@v©LLW|p%Ao"Fv!}QܤsXѯwi5=P(PfUBڹ͏6,$3tɈEgu~$l](G٦! 
WUy~W]Nx ZRU9M&GbG]Ko 羅D'@R$rP-Req1@d93]iw,Cb(4ocP93f1]fGBM`5MlYv|}uuXE9crG8mz Yd؃bdZ8g!hZM̭cy**'rjko]ҏ,C|,bU֐WX]Pig+|yӱL|v7Tb},t~f/㿞D]<)B_%EQ0!ŝ!Jmώ|A5$hXX'G";Z !P%`O ei'gд2 TZi% h k8[j`X<\;o/Kص#D9W*kwo.K\`z:S~Ʃ&&!aaӄ <`jD!@hf.Fhua|9(NyS.pOfH=t`DNaĤ} ;1 V N'  K$' SŰ*61߫Ar=ZCp(!D\4h0JTFJrsUR39H4Hܕ!¾U6xxb}TNJt0Elc{YVTgKgZ%=H=KSxZRq2t&fyLJBn%%DjbO#akXw?ɯr&st-~|kd~O=2(@߭ &%T`Džv{0 pU3.sj>?'!i15)tm ϗ4~@CB~!x"Kz+xiy`׵a6P&3qI&mY54Y.+ U< {J=R~**dFڙvj7$mO #v7Ш |mk_:Yx} E BMR?s,WƝ-ДkW„]rTj/4oc0; T<hd+%(`UBi>qڔk)V <*p$_|xȬ*,U d0ECsV_"k9F蓌bTZ~J;a[@(Dy/ zGate?'ϤAJ"WO,*Fҟ~j%&5hv}$׳L $nǔU)[vENk(ڟ` Ub-<8+sU TE..J{H#h!s>h_s8&޿kEf5x̗([R&8D%賰D/)y%Fpsag/.Zݯ5EjJi1BSC?a]L`ܺ5a>k2({8OGb2_p~)2*HZ6z5Fqݭcvȿ3itR?i\ಒE4ܴR(Yr d.^^X&ȿlq[gܪ9ו8hZ)-րnxhZqFaƫ"v܍ZRkKQ/;; %Bm*rmqiX 9+3!%zˆRzD+#N SpwBL/6߭pZg| vM;Uj׋.o%79>Eo0AvA_E_'ٟ1b=3֦~%^aKa uLr9-Zar)83H%B M)>J9(h ֬{,ja#̇ (U wq{:.Jp~g7fY.d0( za=<+E}8@0{g[R-b~zM%h Bk\x_{)ޮ0/!"՘)ՄObIΛoǵ LQ.MVFq5&o?|:0TwńN-r#>XT,¨U~IqJ=C'h-.]K8T59K4D ?nТbX6W^zpZ^FiLA'\e=͋tȱXy{@_DTo*D>ak5p2BOӯ;_|jEˎӤ[ yHRV%->AOptVhiq>qc8 B:7l+>dlqxچw]DSg\A5J2/*Ųl_{ǿ{]GB/Oǘmdmar_,0$ʠ^yĖ3{!@nu܎;>z5jf9WLHTߛi]e[F +j}β/`{ ʹ|NݿKb;][=}V@2k0:-!+B T_@Q5$ΞʈC1.0pXًĝ UaҨX͚9_|Ssi6ҦjP>8tUX4MΪ [YL_|Kf)O0UF焰+X|kA7TufȾ^r)R QEit]Y'.r^,?'-8"ZĵNsMZ٭ cJEԱnfz4[>-QLJ W@OEt0!x_HUC+r̊qp'gV*!' 
eVs;LfbuxVѨj!K<|Cl9|X.ٷ2h߶]B/ -t.9VLb缒5*3 B)&%Ax`Nt?]El7AP|*"l=4R0_f,&zJgPy!mGagU5gYypmvWT)rHƘTBJg'V%=:uPMGZ?|htf&cyIlZB}dE:Ne-!N 'E2ϖYO2)4YW<4;FTHFZT YMӷh?ozmNԑHMA|LiIl FpQɘzcdq1WwF)ay]N}{7ǾgL uKJDG;gf܋/KJϵi󽴒!i/izUbԜ}"B)k͌tqPwܸ!$*[Ft2إ%e{sW`S8z].VNh= 7d0}>/T=(vV*-CS.SģG5s?-z1?^b|vͱΌ Y)S{|wۨ)p^\ ipɜ!y^E,邟.ӪM@0KIB=c!|~Q%R9r]?yaf`꯷~Yc+ߢ+Q.ϧϡѳnICtO"y IJ`?igo@is={0 ; B:}\9_VL䯘oWRTawB\=Kw -s)%In?XQ PV;b|Tx7`LbּVESIA]u|?VU:&BPq\]^޽#a- d0z#Ԉa][Ҽ QtzӖ@UmO5jO쨊ˋ|Qߵp }HӊpV"R'T]4s!Og{2F * $r7ۘI^GYƞLJ0%놨JkTijg:Gб͜H6пJ*{z[i%AA$̫'*3BmrhT)Ci4t-l:SQMfÊ+}RN[[ɌY z@\w}p63>$ =ea!!,+|%q=HaJO{o52F !=~^5fʃiu dp^D~u^Bbj6MQtv]m\hR-~}4fFvꮉS4Qȍ͈ J4QS\ οdrgش2ZS 9B:v7 \윍Լ4^dGuh+CE݃rlf?"o"D"oKl/v3pڭCm[i'h*Y`CX\Ij@\a+Wwvqk >> NrT]ŀ])M ߷Ѷ)`/VǃnQ[ߵ@f)m ¶(O$ x+M9Qe `{H22#m,5ES[!wZueĪ ,̓ZgP\ 'QZ$ AКHt"ȼ )Af%)?S|OYxbv_q0H~i f(LW1fY W/rڗM&J ֭}jAwNQ4Øz3Y]lBf` Z`5=N'TՆNuɈN(؆:mܨ9Cx% @d_*J6qy0YYD,u]n7%ؙrVV^NjRVpK_ ۤEyr+J b ͝,Ԫ2$AbAK<^pE4+QZE<ͣ j?VFAc$ɒy<-DK,4GwNbpaI}9o!r%bqBlVhEK7AɌ]\s*2;n6?[2[0Xr?&nrNɶY Lo2w1 2ѓ _sa3Pk/Xm?d(ŒOr Lu= &=d32|q&BD&|ÑLpǡ쇘DI<>*Æ_CBIʒvj}h S'`a524i_s?^I;6W>|6Rg`: |9slxMSʺ4%B6f0'޵2ȶJ!VΛGn-"Ovảr)Y©ِTyPS'y•DZ3Ug~*K7U[vB>#dӄXX:IID@׋j+1zk6Z3CiP!ǵSSh #\QDt\l9V(0_Vîi>~IN"i%j[Y|4VtU1wA5.0hȓcMW3̕N[ftJ#KI6Yp2jBttOJ4ܮ".[>, )\G38KAHDu,y,lT]S})U5d*N'z}%$OVaD? tK$nM +4gY)ʷvn!]w 8rI9|7a-kG!2Gl.aDg`AJu˲1Lc]xF*7hVn%Zϱ$zNUjHK"1$W>m!3!l;s rdv32ͩ uN<՟.ԭ^HMFZ( ͤ?"Nt,ƓqyeפƩDLk0w1D-qImQ#& {|)% A}* kRkױ7GB76=n1k ]C(RӴ.&>`N' ~HqfW <)Cݥ>7ciɲdh$0)$(CInEocMt_oI>o$pDɑ*zJ_|1'>ڋM.p$ݖ`AymJ0S`~+#N- (w]:[7(ZA>&UDl'ۄbQj/r|CUi9!:e׾:!{ZkevraGm{WHlOٮ] :IpΈc>"ENaC*/ސPG4(RP)__+g}ȫrKwM&^*پ:λk_A>_uRBh=8qQXoq16~W~>.'(Xx]`&s1YKG!n,p\JDl)$^։T2ZoL? 
w "qz _-*L~ew|sn zk7 9yܥҒ{HGSmm/M d=A{H֧GD_/ڹg/2abr LPr8,pRx\SDs=Wr#h"nQKCpgId)XL!l_gjʝvC30U߿Yo0@fϏS˜kt?ZkP U][&@Ya"qBU{~g82uzG?z Utt˅<#^W&l>وQ !,kCsj؊r^ /qEPLLJ'X+H\~-Ǔϲ%?laW#t[.%sQt)}+l?练#Ʀ xBjV˜;/χ~ 0yOAL"n5kgd@ x \ ?]V-EXlșlu?xCGq=NZ/.IԒ9i+0Q>Y]yQgӉX.ZYM,NRLCqZXg6:}+z;z:.cQd#GS?p)k;3wX}hl]8R1 H׳!be~_n3 ;{diR:=Œ=͚Fς͇` Jt[QKM7f4n߅F&HCC4cgDM#Q#"<vכMy}zͽ?E)(BM᥏>"Rx.֧_*Ec埴iTX(#J~ LvL;8~Mx|#Y1sD[EHzPZ@P[,\gY[5Sw.B/K_UhFeݥH@]jo1T@ʁaI!V>a:`;/b*tVdxD[|n@zC"/ӛG0_9 Ube['*aO+T4 a3r3PbB;,CT Ѷsh2nY)(/ێio._1}2@qc!Y ^`0i#YY̡`8}@UGG('D҈CB>]J"7[Votc>Ut?8"K܊tib o2+m&֨ZT2I(jW}LK}1Ool]2VmrL,0KUBGMul&<:SA9ҧ+<O&G#6TW.akΙVڊ1A߱y1Zd͠=&2#o9kn.,˯'6&cv 2KCĝשg6i\ ]v |XM =?>`)\K\c)4bsϷgTς$g@t|| L()Y}\_ԝJ͢\,>zO=,f]kmեZHr;o+ό;Y"3#'0'D*ø2f=ɢ%`moP;+c V6q L^N3x흝.>u98x>(ܭ L.6);$Dz\ÏByD?*+aO(V\v: R GK|!Ø5TwΝߙLp $-X44LLst/lazYG2ɕ6 ٢wR1b:9hUCȸV:Yi.գNHfĔV챼˼&ƔZq{11jO>8H3JiAXW>km0>m/@ވVLBb~az^xՌص>Zu҇\/ ˺U0RO%!+tuv12 qA⡻g%c!<ʮ`:(aFGԚ=0UF۶l KEzQ"rVQAKAk~%ԬeFJAIbzgmbz~UdS_#6_9>ں*dcNr=wi>~A'urV4mHo*SP/a|{@CZϲc7[S?[bWitg V7EBM UVa90(hp('ZE5iut5'J5G5ele>Y;i I\6dd'elkW)4>qxy1MKT3Ф"= X[ԀOSD4o{Ӵsil0_: @!AEnHF^Xaب5t\KQL%}ݹa3?sX tT!NDs)Ff>e )c'iy0-AdHۭd5x(q>Eii1u[6xD9_fi33y7=዆{r-ℋ0Zߟ'fK`RŻycEI.iK m E35ZD.'}&=Ly.C0,Z)3bFxb0rh2_Py iRXQǏ112yX)O#^׃n;hb.Rw(@q:㱇-+?S/s1ۥQmB{%"3 '9s65{g+|IKP41bE vY\NA!,)iba `vsp k58{ s14ka-]*\HJSZEi6E@ XZ&J*98;d*PctWo}1k6&&3xD@m}))3,MMi+H84 ^^x Ϥ y/f 0$ǂFGe:n}\9"oeU1_#HV]*=fͧALdh+\'%f#H_ᡃz:CHd_Th;8ٯgO~͟U?LG}S  ߷FLwvaD'6X1x!O<+ǖLL h'"zKE!L\"~XM7gJ`ݙ+QDt|!:zlxtu_M&4s nЌ6?b3jorS.i5#Kxk/.6a,Oȳx#^h(Lڜp6}VûHC3ښ;FLoAtqkez7=GE<:yq%V?#5oWҐ37ͷ$T.M,RH{ppNGdHp/{u'-J31%y1B:9\1XURwY&lƄ1]˗Se)J˔-*4}? 
\" D׌b'2nQf YQ:Ȯ>ӊ}@^_msԸxOD X-Fo啣6zͯnQnuAB8Kwgx,V=( YcڼdIbc9(W\m&fhx^_#c&UPwqԥ4 Dܐ ۸¯%й_MQtE1ooIu[vcO4'#B" "OFP3{@]Y*+JgXF(UpoRwZ&'u˂($TX&hn!bQc1h_@5>s hdgmb Ɗ_ڧFUCm2}W G ?kx9mh(!tHCϕe|\K-bHSP;x➹YiSXE7ƃ1|JMfC80`6XdfwlT/qhݒL+ |; p5&x3lWIc;nKv Me:CNmV{ҝ5F|(v~q?ף,eKI~E]a*5Jwy,jB,2w`t#/_8M3Zp*A |d@&0ٳOyIJK[ N^{)&V!PfWƇ Hlq+tM{h̐)v#⎿[OV, oM<m~6XWEt L K= J;J5&w5p/y> GpUMD*JaAbqvјХ1tCXU"2ˏ4)/%@o5U`@tkFCR;6+mH8$8[r*﹙y uoKAB1oh"yc~V̥V9˄-5qOP8yҍԻ 9j"FLN߭݌" [JFQ~imH"$ʔ:o6~._T##a+7C0&ϢӹGfch\)׫řCzU݄ d'Th%hݜSi_sP%_3b'QJCx0VR5>}[i xlk ,4pt & ɹIx[~e˯$+N3̟rw>ȖG4JRqr39V* *ܩo0wǺDxq!X ̦vZvlgZQNQMA /|[pq'g,[тbԧMk!j_Eώ|@UUh;:T~̾>/^FiSAy}*z@~$cC 1Sf+S|t<Sɵ3UV-EN/d=x춿1є;)[wH'Mߤ&Ra}Te4XZqˉ{F1)wkb'6Ch nz>=EͼÕ lD>5xwm=$K G=kWM`pkw]&Wu3HTIVxuJ>6m.]T#nFI fȯ 2N-[(vl {wj O xut[vSQf= da@fA'MbYEฐb5dѷK*V/^y= Q2Șw6H&E6~zKy̌^QɆ>vڝM׽W9p^ä%@xz E(iRMlqGHRucV;/k"ȗ2XEXR4&6HZqpH 6 v SYm#-H}V⓾W) vѐte3?9bѼN}/zgx;%͝f֌oJ{0y 0&S] j9 8ܓl̊d[UU ye^a_X*:_o^u2(6Jgw;P@nrkm%}a V5uMAؑH@'B " D6<'m Jv+lD?DR \[|(;wX~A]^' W@L`Qk?$Mb [gF*/=wmŹ;1LAgVr-eট̒\]%AVY5Eb,WgiXXiM%:nQ$0S!]F1ȌOVA{'1`A{7bfŅ{#kByx 91O@Hw[@[ 02Sҙ_ J|$\'8geQ~P '8XK!{зV]HCXbJ^nX;_ùm6vr.:eŰǨák^aO` eY ;)/=ܴߋZÅ)&NS~M&@vy Cyb+`up44.H"ix \X3©ꅲu酈KYa&'^ZGۄ*mu*ZG#PLtmkL (H'3rX!Si o9';o{ZU""T=M%6G}dJEa45ӲsYgR8 680un_dV/üD84#&Z~;GUl(@=jd޷/X 7M1iJk[ K!.pe_(6ߛjbY(50٫;j߳-~. G vV o('LތzrY/`/R 4%Y.6u8o?p w8~AWL9ˆ4P{؁; a fY_$5onnGj@hp(kprͭ{hIτMɓ~z{@r?kp)p&4բGMr317]x<wl(k}b*<=nSzӴH 1*n(aٛ{0ب"wZm=ҋOQ;y]=Ŝp%S$q|>[iQNF`wB,PO<{DJy{[*ݩ(De xoޥ/ }Kd_9A`1K5^.MV︘q-puL-CqvzA2(==)iAaV5?5MZ qN{C(ļW@ endstream endobj 119 0 obj << /Length1 1608 /Length2 5935 /Length3 0 /Length 6752 /Filter /FlateDecode >> stream xڭWeXvSH 1tI03(̀t !twHw t ߵ㽮Yk=׺\ɦg ABNhaA!ie`#p0F1BNNe$ C8ؠ(DDRRRe;f=2w/˯?<( uD8áNh Ŀ ;#Pu#: q:ꄂH * %RDl(g(uCP$Ba0i._0v;oBH&a(4 9z*jD;ؠF0n A]~ۇx60' uCe @`(gGwLn 3 d~jo8BQ( Wwqvvt;`hNPX䶇9  ,+Awԍd(JwbNO;zZqEpmŢݟ *?_KƼd %jxebzg\y᫖sK4eu`Se`բ}P2 tPiodS8U <]ƶz$q<Iغ"br%72-V'qMCX( @Z-@39n)L!Lަ1N^c>Aϸ _ їe.cLc˪cws\@k߹6-+!,tQܧ\-r(8 MRD(S`Xi<>[ab3N~? 
yB>Kv]xi"f {RUm(2o Z?/b ;X97tPwZ|r Pwq(v]-glQA=kī1]r;^Hо& &R{8^8މ7/lWL͸cEk!+oӉX$y#<Q Tj\sz`?63,~FD>.Ԇ|űH&zEbgB!v~2&LDh3'f@op+9ue:&N52$̅xݛ?hyUǫ\ǂmO9sV2-h jk$lKHLEӕ*r,?ٰ"<- |̡޽: ln}Zh/tp_h}$ ̹L<1*+3CYx:}?-9?8=bP73"8k<2Lhԁ >\bпYrì]\DS!'#I=& >UQ//ŚЖ+JQk,Ug <:OTS&~2u08݋Ja^ #,4H`1ߦdv9 tUO8Q[nnN)Oa{}|@l&T1b rvS^9Ip4G{R瓛5\IC&˙\+NQJi-@U- ʑaHa'] Df@Ѣ r&h死bT\Oآ<~Z%2 i5ASC'dz%)m9J|3|cSeRKϞN,*£EsK(r\?=>dͦz'A rU^Suy5‡bw~{}2΃{%ݜ*HZOD ,i PcE#[[3z>ƟSx|Y>Icc\%BC}V@vi^d/ޥ/t+L0UpP R4C7d]7Glvq'g&žX:xA۞jȥGHo Ԝjz[㦨L{9mjw9%?pQTՅ 2hG[,z5RD+7wq$w:AݐfD7"&h3VlM+ݏ 幰θVZǵJn1[⻶;MG <* FD>i!&X^[>q,貮I W?ۄUt}r b|>22{.nɰ=z3F4)ۋ lƹՒ璅Gt ve/ݼ"ߑVU@؊006}S=G x{`e7\/|2G ":ebz 2X0Dt@Kzb(c541n׺?nTٞu,-*A8<;o"@^AwosYIz)gr[DAgasPzr:jT}n46{ rٜ}..Ƒ/V烊NS4^Z'򯛧}CMfx }=ED^!G4>NV-q)6_N9-\qr;~j4YwMsg,upxOҁvԆlP_Fp bEd6/\V77<8K+ "MN};\MЛ*E4M`Xt+9;+ٗEI/l LԋUkO&ӎXr;.D]U#-,Cʄ8Ks@73iceAH~/dc"7;QD""ҼCG9Zx7'F VqQEXA.!r-;b+}Q!V9 E>䮫r:&3ѝr,ߤ3I袃иYB/ 7uʢ*@ȾCB%K ٜN)(1O%}{QKUiӔܽ]i&rj Ѳxԗ[sVu|;VwHp{O&ovlD5EE&][[`4%kl+p.-ggSL rqHR9{J$$L@InbB)ѧn=/D^{Lܭ?Nm'6j+ Kڹ}몆[i=z9~a%90Mtojh ~'gqm$I T& dWҿ%QbX w⶚rJ-ֽ\\ԥj>+Xߌ#l SWXBcfKuDUjgmDeвGt XaܓPzu/:/lc~#z=Sk R2aΦŠԕOPXzdvZݷtqXwBl cmdbB?`F="!S!:XDO?$ݍyELW f[gULfY=+*꿂? *rHgx&\Ey,8{=ɐ(Ziwt#k"%+ >;%:UJhz. sG޹oQ'J+8ɱ9?M08mSh~ZF\537e=[e|[kYoo!]ַd8̫._p&?cyqtxTI}}F g q)SJ9uQl2\dW0N *ry'悛NRm|%vm?,ZA0uWPr?mxS{E4h-Xv~XR7%V!-^>4Zt~n\ֆS ~4% -~W)dzL_(}+47E9D\x-o>XǬ.JVAL [>Yޚ^!:bp϶?Ww25~}k-(:ԻN3UA^h!#,cz6UޱhUoeal'52 e}p|`UoB|to)J/,A{6ν~T,Dت 4.p= o2'8Z4`1HZs#ض٩ךm rgr}zFk4->}[ -zGwZy{rE׸c+<9iՑI^m,=ZS2R}yUƒlR3bi,b*>QX)0NJUk+|iߓɒA-ZAU an}]1mrV, 9hfr:@TW_ hoUŒ`71&TӵN\gr]zJ.ګOu&\Y=& ~ ^0}|/J~v[P+@֛ } +"tCOf iu1~NPzh4#+$# %*u+۪/CjIϩQ& mm앳a s0y$W^߲+ͷGqjSd?-ТkKktUC>찎ÇJQq`Y(viƪ;j[&^B8FC!.,y>ybn͸pG0ḷA4JZb-=4n˂Sl- YJBɋ %lPLj5ׁuxxli2= J-P endstream endobj 121 0 obj << /Length1 1625 /Length2 6117 /Length3 0 /Length 6935 /Filter /FlateDecode >> stream xڭTeXF:. AZff`f.S:TDPRD@8{;9>^^b34R# 8ZHLXTuu@B*X 0*DCp5 "0j@\ &++KPE Nh#s>Y~|`"QP'8 !\!p4h #Zjkx5M8)t 8"_Cp)@ b oHABQ(78!p4h <`쎈?ܐ+Ð"Ph uC0Y 4҉vFA10#@Kah0(h7w. 
E>27$ /$  (;?|D#xS19AhLn'(Hh1ѿ``> G"}nGowj LYAt _!@W(\V;a&"$&.,ҀzCP4an C0(~b8CAO࿻/-׎":&iiY~ ~Bb22! QY @VJ"C$z 0d7u84h ٳ~ $3?WS?6񆀈 pL;}#j:p"^WW ڃ2"Wd*#kF}>Z蠅A ,,#*b4s<`JwJZlcu+f |Ԝgnd,wE{)?xzzۏ;יtl>j// ܔl៕xLdܗXNn1h2H oYyP '&J[hZfD$ :w6npbgp;,L;Zyď t kXE3bfʦ?6_1CJ6!cS$I23@'4EcUwX}xxiK>hS7|cb8X.sj^CmҡoÙ^ Do\ɊIhƒʑ12#$0<M{&pZ cHs6xk,3NVjf1UؐrG3MC{'"MϿ7+{練'3NcZr=В{Z g&{a&GUrY`2[@QB0>fU:eҥ)c,1ჱHT䱞iݺ|LR <ੈl[" fpz\UVdPle lݷ1{]|_q3Gb)RK9LboRQ.K>< Qx1UjC4s1詥>q80/qI ޻oἤbG`}kEc=NBT1VD|;ܭK$hW|xRNS9|CN'A櫬5:?\1"w4Dz{ `D_TvAn} H%UNhPʘYIH`QVd3{` YLϩ/zw߁5OYԩξooWTD>\潽?f@3 22G&Ih~iE|k*>P!I[_7pwՌ–ju9V 92RyU.y7KϙЫP~8HvwQ~ze*뛕tX/|9jt ɡo4Oe ,̣yQOnI4=5r8ʇ#rՅrTJ;wpd΋w8FwYMB./]-QӒ-"]Z̶ gT-ӹ&% =ob ,2 W 9(DW` G%O泳tCDNJUPh~ I<J#:pC;kdBH]D ҽ|z-v+O oǙq$M{ AsbS'Ľ YQg9vi2ULy28×*&rCo:EbՓ K(g"#UgxԄb,pn9]ykw2QSqSR:d}'kA#,CcDndnL:J 3gcTH0uN٢d Зǥ/hvUV27GB? w( r F!6DxBhn+M0zV ߑr$d.w8Kf MM=GS|L,Ч`1M0+$ʠWAaZ.^Ou36/ڵD%I_Oz}݊mde'B}4M{[^ToS󚪩 l&jU9M)1Oҙ\$ lwyƼEyr+$(Nl߂f"FMGp?їݐgGžn߿ןG4 G-)G15?>t%Z*S*-q#<_#ldcA;F>ptDfGxuѕϚpRC#s* ԝ`zW=O,[UnߜaeU9^P\0jb3*U; mӟ=aKX l?-e,m19#[im&Zb1ýD, s5O#[8Z-;zn,7!"EףMkAl |"t'W0v53^dY4:|n s{gwr]gAЏrfb{,7S|ӤC/֝$?{+¡t $+HK(QA0#$Ky6ҝZu';hUQ.r9dLX12G4?Ͻ5hcۀYȚxVk/_XEڨveVhC+}m恤J%sMZR ;Z`c[ں{&IS6IW:>j,<7fݡ563'J>b7&s ⮐A95l_JC|Iܒobc6ʰTѾHQ`ٲ˼֘~uMF<]]*и1rfc-8)߫Tn̯Ci LU377j6/aʈFi n} ~;sC#Zx?O7,&ock^ A.ɵar:GuD_'RO|YN+|$4%ߪ9PθY8LFbԋ<丅Mn SZԘ(zԗ򦖊:EM:דn;ΰ[ Aռd*EQ"޾)_L&6v7n٦Vo͹YN.g6&oYȝA]}Ag BIؠ^ˢcGtoņv5v;H&NbUK2&=U|)/E~:T8SRk]YKI-΅6' `2A0j6;-F@tC~ϭҡgUA0[[e2V{ rߺw, .SIn̒HH9GOggOIo%|c؟>Z\HT*?էq|Z-%ܴS _Y\ْY 7)ZiU:sY'ۜnv_\LMjjah #Ι/Om֨%ؗ]]ı6s:i>aZ NWS"+a~âVǐ^mwvg\k"1 7O}T{/|AfnkS$*Hzd3\pz{Ď$0 ї)tJ1HțDS5)<[A(u>?aB UۜYJi57.Gќat@ouIʔŦtW->>Z7xsbt}D- 񊌽uܦ G+$ I3 EF<ι8+F)92n2&l$ v'q4խ4 ;Qi!t7ICA0&_( ˵ /lS[߭RE$x?F/ze#t4YqJ|ͪV517 @DJ4 9 Խ\%m1nڂ-hkW W=tT#! r4 <ĵ)/ ҽ$˳z|rm6w7=A/7O `ŶFc}!!: V;~<ͺ/;1&-`tb440°OEԾXYoYT/yy[b>5!3i"X;")f?5}-d`#Էqjj,jnA[J>żWK8V$}݈8VUDN?/p+W'8K]bW]ϥ2&ei թ[M¤]ViX^u R9\c}|y &~ PcP} o)p o3AօjQfS6髻˨>]#T?GIJ5O4T_C d:> stream xusu\6HJY %0HJЍ4(5 30ttHIw7i yq}u]Xbc#l 8hBm0zpu^]=p [C07 b"^аFP(&! 
Y@IQPg'@`P 8Ch=  BWe qTY  jp=l<!P{8f!3H@M߱vJ`(wN4)Ϗ`|v|pMJp+)BQۛ<97'8%`;(gK` u@7"@8Pq@m@B$Aak8 @aGjQKh V!v5(;߿W7 FakZ;Cz QI^W@L)7‚TԶoE ؿw\o\ӰO@Cm!οMb?iP`W〛]߽ìQn yh55 j+{LPWe; E:ep8FBY/HNj޸O #zY~Ӷf<^7B [idcUpYKo#"'&!h߭.MB~%e)j:Ȕi+Eto# -j)M bαnQ4)o6:O&6+ڶ)8rC[:YÐݑyTpeKCy3w7+kC8f8: n҈-o5=i"*MBrR5V~meqYL[ z<'=^.KE-KUU>k§kJԡsL=a6Cк~G@IH +}o U$e[~<5hX^{'aś9 ^_0hGá[@#LOVlCO/O2OVώ= mR54 9*㯾aOe9$ȿ62_7rۏR.8 מJ<"lN ƿ>)9*cm@ݏ02dmd WZ QX̍.z9·ur~I1愂F+;|.48,qʾd&|qЖ&S6>$9#)NĐz46i1ndQbLWW9,C-gN%fKe\j=,0g ™}o[u\Uu^:oW(}^*eֱ -X݌L-FmxfT =L, l.#y~]`+bb]aل>+meL=Cm{l4l-!o"E$j{ zTȶ=8XJ-oŅr2t][6S6pO,'s]Kg=Sǚ u_zyE-ggSR>CHSOh|s7ݤ͵(`ȧ8}I76 Sd5YIP6V<5jT7ot(.JXJ><]ΌOwgR6RFڧuN]ׇ=U9 HRFLebG.iNF5(ŗ |S 47 [Jj;Fbn >0@LDdȭth:JzTodaHwB#xpͲ3"kE$ʢhUr|{ط0c]c-Y2[<|@¸*]D٥o%[$q C fY;/hG }GXxUŪ2CȦEZ}1٣NXCw hy@ G* P~yE^o)YQcOi+}T PV'Wma3D?o駼hŒ3me?|HQJؚ]v ^8$yE|y6$XL+(H8Q*!q) M.SBD(_sšZd|f/WT‘v!ˡ Б'9[ ŖNe<7]nqO6SZǜA02I`9yX[y,7A ¿Rj|y/<%YY:qti4ΪBq"X:7hKҼkˌPRk] GUn B+˗ nO}غ&PoxBB?O{YZWr&=v8?>$0>.L{un~2U>P[%a,NUr?374"+DP=b2囹J&WeQߗJ63h+R2ˬsQ[zein!nAenh^mm({xj`ߊ_c3i.gxQ#qW7 HN]cM;:*?LWY~p`ֶ4IOj~9~IF8kYKYKAR-L܄{]Rө^V=3J4g5kP1xC7CnoxhQ?5{Fmb \+I̫{4|,vXoi-%a¡U>5;Ni,i;PNѵDp|͖ÈF~@i`‚ڄg7C?:C N5zX(qrbNKuUCɤi떥ǖ;~Ƌ;2!ǟhq1x#@S{~~@M jJYOƒYA[~L;Yұ6V]Ć;uC|)'UvYB|ZĤouz||az(am1r+& )> ٬aGZo5Î6G` .&\eM rؑ#ߧ=ZD^\OlS4?Ǽuޗ9f׉;?n_3V&' i?RBEqht:cRFR$DGԈG9ɤQ4}pp>u o8O N ĩ]\WT}QafbkUB92!Mϑ^,jӐ%G3b[dg=>>;OE+{P֞dvF>~L;4j$m| slb1RΩ5x. 
!c /*|yN$A+k<weǰ8Ի'z dGa "/&U*2X#3] <YhV2>V]w>3u,Li5یج[0ʣK`ep$CR!rkʗݙ<3c*\" 8ȷ{E Z[nӠlVZWu 0Tz<|4KYZ=@`O`@BR#֊rXɩ&gه&XDH[{syf[$eyb0MȆ//̓F_٪YɊuYHHX?"h?[ˊѭP~ib w upY3=GW$aQ4wŠP 3nFV7FaS|ӥP-fL i~=r<-3C9ջœ7{wZ+NnI=?4jFӻj~1kP-x Kw >GxaL3jaG2䀇IDߛf haJCӜ+R1TЯR V*Qұ!Jbn\ L>o4TDw+7qɞG1s?p|-d3Dح'槕M~2!Ε>zYT(1 f._6!rDr1<쇀r84$e?Ghs-d:aTDy;׾G+0.\J/7EV\9b+x *hl$:l_sQ Nst/BWRm<)gVgF_!~bҖr} IzRMiʲ܂\\(2ׂft_ cv:%,$GXL.zpePSQx1XkUlrٷ v0w_|A۞3OtN`>J7tn=xF#sGGp裔wYI*!oT?4l_ $|(Yu$Ō6hضyxAJ`3,5bEUK!2C6ﭷQ4VezHݹ[i#}EsAIojEiJ*SMFT_-S6MfN_Ѽ:fa3:А"p!5vhQOKg tVG1 ug0a=}+(Fes7x9%z愧ꊓU|wj23.PAD #JBd'vJpY5J>+}\{ԊP8VW$\;d?`p;#IwdnJ͓.QlɆ:R|L)zzGFZU' "1Z:zL*f-Bhw)awY.u,MΔMBu+8,ZR~2AߊdbXW'ܸI]YCGEü;6ö"igX^Ƶ XXM:|VFԧqD\ alϺt.lpĐl""ruA8wu})XN=)W?$carE +GI-)5fYqax#(6r,Y-ytwI¯.W;cZʍ58픓u1% })tK_YX0uޭ2T7)zl%^@f ~v_B9֫Lj2Z!/22Sy:Z*Kbf[5 r|Z`< B"PE+aN2ps6Put}(dR&޼P}Iu)&R^o1q"F[yf1V_I;ODeDf8 xQ@8xύ?ݚmQzjSca[ ~2YD=`e^R3Fr~ k;ETvzM=#PF"n-[Z\㙏B>}d+~LyToFܣ\p۰K:Ź&N)/3>c, Q?W}hj8MhfaUQ;2ĈERFJɵ5*z/u*OݵzΛil[+Ol̓&Z,+ڶ8 & Ray>QS8'1[%_cIaH2ӆ>1}veT]%Bdݤ{#Զ)B[m1J^"n $wKBTI%2Ό&$kr?H,D#ϘԳ', wL9:lKzQH{W?'}'s C#K$#ȟ &%x* endstream endobj 125 0 obj << /Length1 1177 /Length2 3366 /Length3 0 /Length 4122 /Filter /FlateDecode >> stream xmSy3d,ɾMeIØaeIdKUdKd JeɒB)D'ZPsNӯ T sɀ \W@ =M{W4 Ź$d w0dt^ɯTt7GI# rD>W `H+Dej2@Pu(^ +W"y|~Xo% "_q^4?3(xHPf~<Yh/>yZr? P |p GƸtOEx)*8;wƓR 5$`X %S͈&ah DMQsٻ-`L1QQ7*oڎv)ʐD<%P弯jiV=:4W&FNciYh0I &g\0xt^\G6&0c=\ZPl0*=/Kcyk ݏ 1,&u$MX׹ʛ8ύ>A=>!W@KT55+~wo2=ܭ\ϰ^n )W^b9a^4(ͪ6.G8r_$UPIN(?Ō멍^Şqg3 QUD}(e:<]KZ0X6<y0Q? 
Zm2]B=|[봮;w %:9,yu0[(ei%]Ј}7m,$НEŵĶƚJ/>ܣq^_c AzҬNo|et}bFuV)Dvm =uc1"=.N>  [1'8ϸ9S 6w^+!6'I64]"_A_SK:hd\ꡑYAĚHQ)t52 5o/?xiS^U9VsZ߯aXVs.Z DVywX7Zks?q]|Dk2cՈn~/ ?MtbOy[/rӞ^%o#jXJyȭ%8B(`G-[#O3fXELwT*^X~r=6fW bϒ-5*d$O6,m.6Z)vr~u]#C4̬찁ԴG脊`/&pd({\EEcw?,C|Qaٴ\4pS&tVUAޏe.&u0HXmivY[7+nV<4an>.Z2M\yMF.hf$ *1~ UN!0}Vk3\g#DQz牮nxYrC|]bH9,*2A.RSIg!%fcO HV%ZɍO-nh,l;zjyWVZbON܁W-/$KZB>r)L Y,Eѵ,>TW&U[6 sjb;s%JΕ%lj4|l8ThCGB|dPŭ +lILĸq 'pE~h6\ʯǼ3u-؇tbV[ 'ܸߧ(_U'/N-1j\Y3d6Eߜ`Vsި9<ֿAm &-œ]=X4#B<ٹ>*i$30W)p~Y]D 2e4 ryCuYauBN↞:Fsn y!)ލ;_c o~ȳ+>' )Ru.O`e6shڂhUT] fݲhßc4]O/`K%ʛ;>}?N{7*ݪA'Ud存8]}4w)yW2sGMTonIHIaбWbj ͷ{LâdUm},ʗtBIXڻB z_> stream xmUMo0+J! ᫊"RVmk N7R$ݪ70W?g_,ɍehܬ=WWU\;;׺v7MOtҺ=po>fv8 | G՗_n}w̭][GL2sQ擾ݾk^!00jYV%H~~v}\; C}h{ϗC`Rރѩc~^ON6[7ݛ ZԲW/{FR^ww?U4H6!L@@B@q\s *G|F/+>㹴3Z~Z83f3[:٭ ߬Lg3t33 ~!>CO!>S 33>IY ?BXIAup*Çq G潪N$p|eO_:q;:'dE_kCvW endstream endobj 128 0 obj << /Length 842 /Filter /FlateDecode >> stream xmUn0CƆ"RVmk N7R$L̛O3 /~\k4~VzhO{|wޝn8O.oN?'uRG]>3dX;ҷ*נ_~vC̵:}W {1Esgq]ߍG@]dbڣH~z~ohTǰ9wxΏU]~NÛ Ju~*6{y~?xڰvtش~>ZjR˦YE3=sׁpuRA)`*R2$!`8li9UEХGSj043`4`4Ý(?Q  rt\e #q5p眛[q>x \iEܰpNMk l4\? 皞c:gN5^ ELOup3%M6`^ۘ1ل150ym 1F}3&ԗ0 bKl+֌>oRa Oѷ`)w`)?\֟agYg ֙P.L(ulgYˉx/N|N|&ٝ N|N'>cv'>7'>S} ~)>_Sϔ+>cR|&L|'a9i0K)cR{XTG5;)NͽRPs> stream xuUMo@Wla_BZXʡIW ld!fm웙7շĶM[؟McpuUӃsk/zfN꺼Ɠfn݅R^w}9qdMoXj_v}EQ>>pø;en>ڲ?`1&5vaj UkNAm<}\MxHM0}Z7WuI]ǽBnz/_ N{y;:ڰox\7nXw.kP^k3^Kյ u/A )`JbD>`2$`TY'``9&Dkx+0*NXXQQ3c w"]j~1F60aG+gıcW c rn q9Qܗ8% DMq.5Sh]`4$a]~9Vk ]8 IncT5obY:socsOPcYB?9Os֙3\Q.4ٰX3Z9#>^Z} ?L[ V|V|oV|3[: } B|)W|L| ,Y a!SMV,鸞:?8C8G潪N$ĸ<ޏ< Nuν_B,u7zl endstream endobj 130 0 obj << /Length 846 /Filter /FlateDecode >> stream xuUMo@Wla_BZXʡMW ldiof<ۻW_W7nzrc7)U7Nߜk]{7+wR}uN7|5s. 
)裮ݏk&8n~iyQqE0N[,g IM/*D@f`B9xczOpm`>W'9WRzL E]PwWqD`PދoSφ}= imX]ӷn<7̵^y]/׵Il/ܥ: ل0%1 " 0Z{q́0R0r0QK5<T`,if,1L.S5?׃[#M cL#F3X1+N978Nsk`q KpN8q )q4ϮEp O.5Ypc.Y7ь1O*ezl,d mY%0ymȋ,aYʘ8 xA} 3/Y1<*T71މf 97g19w(g1?\֟`g Yg 9LsQ.(ulgYˊx/V|V|&٭ V|N+>cv+>7+>S} ~!>_Sϔ+>cB|&LOr`B,&+jwRP{xᇣI^U E'b\o|s C:].cDܛX=oNܙ endstream endobj 131 0 obj << /Length 845 /Filter /FlateDecode >> stream xuUMo@+H.ȲrhQիԒ ؇6jo73o{q3mfѭVOn/Cf)rtskzf꺼Ɠpi?p>fv8coJ?< a9(})suזÌ\$qATh L}s6G 7o],jotuþ{UןtptZ|MÏѩNN6[7ݫ ZԲWO&suB`ilB =@ )U 9yI(ѥ S*043``MSiv|kiCXc, pDˆzA:x0)ljsn l9u}SrI4"nXCA8%&ٵ6AI cMϱXS_S/w"': fyRy(#c^g!ch"ƨ-kC^d cRx~h K^| МQV14Nd5cY9Y?C9돡'g ?%>O:ShYggΈrYgDg>[bghX|&^V|{ig33qgng3tZ[Yog,g-g B|B|\3gg3?f)O5[TT+&GUP#a#7a/c?w:'dEgtdbP2ڂ endstream endobj 132 0 obj << /Length 665 /Filter /FlateDecode >> stream xmTMk0WhFG*! miʲV6vFrBbތf}\xM}qV'7t羋<]swrո:܉Ǿ-w$mm o\1A+Z7!؛~B?Fߗb n;nX7U{[LG5 @@N,Gw͡ 1}ԿhWWq}QEݹ-r*FNL7uY~~l+l+7tE )b,#TTHy9)9>*QKr7P:MȡQ^s$LD6aȑ*s.$S56`>ƄmÁ#TL 5kd}WXssc*{Rh/#? bE$L|ږ8^y>eSQc̯bV̯cNa'O;Q~{5pX2]$\^snaK??q FqMyc0=) &l(mi,s|d &\cV ]͸&ӈ9w{d :mB Ƈ\..Ա g~n59&\pe[N 8\4<[n6|kq_]~&)a endstream endobj 133 0 obj << /Length 666 /Filter /FlateDecode >> stream xmTn0C6U@"mTt@Կyct+%13nU틛ķR<=]tuUӽsƷÝxrN:ۦ>P)Εrus ~v?'Ǿ5~D !8뇺mRn=MuSxHiQ)YiH޽'w66Z,^DӇr}ݼ-w{s d\{?:1 kmn_~߼h!R,6ew*ؔb%k e+Kӄ$a"1x*s.$S56P>Ƅm„A Fs 5577vر׾+uaя6R:!,əCxg+ѧy*JcL|*m:fvui0ܓ`†›F2g'I`2e?fyx0j5F̹k#n'im7>T20P-9[A˲,p~nE8|p9j7o-kݸJv?ƏVR`c endstream endobj 134 0 obj << /Length 665 /Filter /FlateDecode >> stream xmTMk0WhFG*! 
miʲVZCcYy#9톅ļ{3񼛤es^7箰 nn8l=hzI-._뫦~^JIu]f `tTsr*o8{&X,dew+mWos~X(2X.EiTz}ܟ^7uY~lVNMєo R.bY.֔O9؄b%9vsr(MXa#D$ar bqMDs!FKRLDP0.BEHQ#͸FuŎ577v}QȕanOd$g;A,əCR;6+ѧx**Ę$90q'oקfQ%n;5pX2]$^q~+s"F!CyhIh~CMnOf1$#h)r~hмj5F̹k#ni<7>Tsa>s\8s&wsaY1:+r1\ut[ZM,k4w6_%aJ endstream endobj 135 0 obj << /Length 666 /Filter /FlateDecode >> stream xmTn0CB*D rضj^pZ;olvR3ތm~<&i$͹+$o)'[֖wkuͷu5P.Υ/U} ~'C $D !8Rˬ9zLU]vރ8QBQVW,N4$  1}н`Еq}Eܶo KQ#U~'+xZZ9?ESھ/6XHfغ)Pb$b ab4aeILD!ID bq&"Q\H&(61*"TDDi5RH׮+&ElƮ}G= WA?Пe aLL\ږq8^9>eSQ!$"VFN??5J195wkdY]$^q~+s~"F!CyhIx~CMnOf1$#x)r<qh|utgmZdGGMYcu endstream endobj 136 0 obj << /Length 665 /Filter /FlateDecode >> stream xmTMk0WhFG*! miʲVZCcYy#9햅ļ{3񸟤e&Oo]&C]]Mq>zwt߉Ǯ)n.pCx?nڽVgx=itO"i [\l\WM}'ԭ̚t4pXeȉeU oq yM\-CnCW_Ey}wP dZz891euB)] W-\v\]~[S!8&+Zce"'2Ɍ5I@|"B2AQhSlLء28a}ɑFq5ҍnnbfǮCG= Wܢe$g;A,:sx l=NOTƘ$0_س/vЧQ%~Zx pX2]$^qnaK??q FqMyc0=) &l(mi,3|d &\c ]͹&ӈ9w{d-tx\ \cΜekqLJs?<@>qhx .׷8wl~1V<*m"mmDa endstream endobj 137 0 obj << /Length 666 /Filter /FlateDecode >> stream xmTn0C6U@"۪V{Mi@Կyct+%13nUķR<=]tuU*Wo;зΝu-M}mS+7F?h^q~M}k $|y'BpOu u+$bTy{!y1  GҢSX< {NmmX#N;{}y[D]`Ah;P5K_;'4S}}⢅Klkީ|cSs&^s 1eΘOd~`xՌk?s׾G0N-۰o|e>ha>6h Z8sseY1:@++܊psqsoZ׺q=7÷c endstream endobj 138 0 obj << /Length 701 /Filter /FlateDecode >> stream xuTn0+Cl m8(zu$:`K$Q4pufn}f)ɻ|tùA<]u6m;O޴\+$ޚv}qff0(h$iƃ}E>.>ttPRJ(:X/rߴu&^!3PZM5^F$o߇7 V+1ؿһ`׮o7qIݞO!Znz/~N̿Z䄦buUWᴫ\k\r-Ve\[3sB A `ehHiJ }*>`!â)dHUA^UwEZK5h"uS/g bρ#)p̹18yi r<ܗ8-pN(T1 PUF9a*~0'ujE5z4jgǺ4QSkj sE8-_ZQY\2=<"NNL>9fѓ@D9{&&gnI0䑱Ӊ3 hxRE"7Yp/hJXCKH eR3ə$Sޛ{cYrwDz~ !G9Kûq_nY3/Bu{XcD~ӺԝE?zO,Fez~ endstream endobj 2 0 obj << /Type /ObjStm /N 100 /First 827 /Length 4000 /Filter /FlateDecode >> stream x[[s6}ׯc;wn8M'mAi[YtD)M|I%Kvt 绂%2-cR1i )͔0#R&=3Yk̦s֣s)T1Y: 'Q)cp^Gkcb1=mҌi $1eF2ef#[#34s8!%NpY( ݅6ặ&N֏~)%pĭP$Yeb"$Ms`UEDz,8a@=qv)11~ ~ùϠ=udm =zp#ReXZ/$*͍Y,`tOKiJq,!)։΄ 3,xBJR)@čTR mCH:LkziTՙuJ *:ՄXQՑиQ j`Q4VeGk djIC'P9h.L44ZFϴ8=cr3`|}S'[U{U!UAgTR1⌊갹:vjyF՞u3cCo MWϧv|ifWݹC=KZt=oiը2jӞb;v?^7+w!eH=ۆ 
3)i!*M8xץQ{:'βÔ=[LIX8gɇijXL$8XWHta#y8Lwy9er*\|LŷwwiPvLk}"\ RĚE铔i͢JUo?!âHC9U" ,}`, !bDēAb3R(DlkLC>2HI2&v%#QHS9(EHGȈ?5^&Y"1!ӧEf_> J3]dr-F3kHAGd$Y".r:.ѣIOͯb풐K8O\Jq+/,A@b<NVԱ'tglVpZD.̿+h>6UNϰ!0Y"^-KzcUuz8Pc5ըRI1xCH _IBbb}JYEbBU$u誳jg*ʄ\VTi.X>mu=ex>CϜP/6ӳ2h2vk[t[-Z_y)6QosIĉ]zś$CSU-ՋњMi"Ij凌mڡF++~b`"OMyb{Z'\c.6;bJFruJ&ZP,U/K5BfpliQ&]X`q"7Cjg83L4BXMC EJj1&`iյЪ"7aޱam:%)ho[rާ+.+-|up_կKV8ϸ,E/4~%U=Y #Z~ڂU;֪7IuԄ!C_jQ!]m4C[^m?6QЫÊֿ2\ Lp%No7Uw2źJx[nڵ&RKSb3Z7֥< _BOhi1jEa.+_6L@E:B w4%Ė4\ =ek5?mL%-$v$3):5oM̺@Og1!]hԆ+֮0Zںl0aeפs[R۱ ֑P[!V_>rD:C& A`De6$ߣ2ṴUZ!+h:)TD0ު-d`SCG0m&Ɣ;UjT)NlɂebB-xx!-Ťh>]׊( `:YFSУ;ڑ$:xҺ'dwq1'g`///WϾ'7qyXGGUNEL<߁YzgS聆ǟχSiByY?Gb3o;n1.zb O>?o ~_+~c>p0 fWOɰOiK&t|k6T?ӦxaŨ3~VF$.?#W?M}@|̇?#Q^x8xvuOŘ@5OP@ IaqƯGfy^|t+(^2%g(_x$cʧ< >` I?;Oo6zwaW~< fRLW--ש#)7S.R-Rc()d}Cv"%!"˘߆O{qH|I(Anf( mĜ,}: Yzl[ (P̴ll׻&o 0sf6 W`Lȏɒ()!'S:*mM[zwœu'cOp=dwc6ǵh]|26Lb=7aDc^hmRJhgޖߑBvR N}BgjDya֨LO6rJ~u")84<^C,^h8.ގAYȾ쪭"Vո6V c]/6KMoζ"^DnAYv;P-:]Ě.a ;`] pXbǴ ѺEj'Kxv]/@UKˊ V=ֵy"% ť?ǾʼzR+ ; ].u3 %=gsv)(e7D1RO5go>? OCoʫ}'kFFh]7 w8aKJZqU{ jqW3SPou:hoDYzAu8 8/X endstream endobj 148 0 obj << /Producer (pdfTeX-1.40.22) /Author()/Title(\376\377\000I\000n\000t\000r\000o\000d\000u\000c\000t\000i\000o\000n\000\040\000t\000o\000\040\000B\000a\000t\000c\000h\000t\000o\000o\000l\000s\000P\000a\000r\000a\000m)/Subject()/Creator(LaTeX with hyperref)/Keywords() /CreationDate (D:20231024162711-04'00') /ModDate (D:20231024162711-04'00') /Trapped /False /PTEX.Fullbanner (This is pdfTeX, Version 3.141592653-2.6-1.40.22 (TeX Live 2022/dev/Debian) kpathsea version 6.3.4/dev) >> endobj 141 0 obj << /Type /ObjStm /N 9 /First 64 /Length 337 /Filter /FlateDecode >> stream x}RMk@+XR!Bsk~J "$ⶴkՄd;yV A)T3`nJ *PAgހQ)TFyWvO^d 8N/ʼnje;|okb佊"Bq.+E`έyjo8'ͱ(cP&"XƐ+!_ǐyM <573DF85129E0CEFB32694D9175A83971>] /Length 400 /Filter /FlateDecode >> stream x%;OaO]DpU@AElIi`c011&$$$PPXX'((pޡylvdD,Ѱ]2# W933"R>%mX4H Ȅ,Ȇ"R᧺bʳjfUo%^PEp]Er%09 000o-WP 0 P*PfUFN4鳛T)>kVhRilY3CJDZ;x 0Bt= }t}7?RWY<P]5Xu~뙆U4,ox kXK}J橱e|4čYƷc={ G?ẌŌXLjuX(!"!A endstream 
BiocParallel/inst/doc/Errors_Logs_And_Debugging.R

## ----style, eval=TRUE, echo=FALSE, results="asis"----------------------------
BiocStyle::latex()

## ----BiocManager, eval=FALSE-------------------------------------------------
# if (!requireNamespace("BiocManager", quietly = TRUE))
#     install.packages("BiocManager")
# BiocManager::install("BiocParallel")

## ----load--------------------------------------------------------------------
library(BiocParallel)

## ----messages, eval = FALSE--------------------------------------------------
# res <- bplapply(1:2, function(i) { message(i); Sys.sleep(3) })

## ----messages-immediate, eval = FALSE----------------------------------------
# res <- bplapply(1:2, function(i) {
#     sink(NULL, type = "message")
#     message(i)
#     Sys.sleep(3)
# })

## ----errors_constructor------------------------------------------------------
param <- SnowParam()
param

## ----errors_stopOnError------------------------------------------------------
param <- SnowParam(2, stop.on.error = TRUE)
param
bpstopOnError(param) <- FALSE

## ----errors_6tasksA_stopOnError----------------------------------------------
X <- list(1, "2", 3, 4, 5, 6)
param <- SnowParam(3, tasks = length(X), stop.on.error = TRUE)

## ----errors_6tasksA_stopOnError_output---------------------------------------
result <- tryCatch({
    bplapply(X, sqrt, BPPARAM = param)
}, error=identity)
result
bpresult(result)

## ----errors_6tasks_nonstopOnError--------------------------------------------
X <- list("1", 2, 3, 4, 5, 6)
param <- SnowParam(3, tasks = length(X), stop.on.error = FALSE)
result <- tryCatch({
    bplapply(X, sqrt, BPPARAM = param)
}, error=identity)
result
bpresult(result)

## ----error_bptry-------------------------------------------------------------
bptry({
    bplapply(X, sqrt, BPPARAM=param)
})

## ----errors_3tasksA_stopOnError----------------------------------------------
X <- list(1, 2, "3", 4, 5, 6)
param <- SnowParam(3, stop.on.error = TRUE)

## ----errors_3tasksA_stopOnError_output---------------------------------------
bptry(bplapply(X, sqrt, BPPARAM = param))

## ----errors_bpok_bplapply----------------------------------------------------
param <- SnowParam(2, stop.on.error=FALSE)
result <- bptry(bplapply(list(1, "2", 3), sqrt, BPPARAM=param))

## ----errors_bpok-------------------------------------------------------------
bpok(result)

## ----errors_traceback--------------------------------------------------------
attr(result[[which(!bpok(result))]], "traceback")

## ----redo_error--------------------------------------------------------------
X <- list(1, "2", 3)
param <- SnowParam(2, stop.on.error=FALSE)
result <- bptry(bplapply(X, sqrt, BPPARAM=param))
result

## ----errors_BPREDO_input-----------------------------------------------------
X.redo <- list(1, 2, 3)

## ----redo_run----------------------------------------------------------------
bplapply(X.redo, sqrt, BPREDO=result, BPPARAM=param)

## ----logs_constructor--------------------------------------------------------
param <- SnowParam(stop.on.error=FALSE)
param

## ----logs_accessors----------------------------------------------------------
bplog(param) <- TRUE
bpthreshold(param) <- "TRACE"
param

## ----logs_bplapply-----------------------------------------------------------
tryCatch({
    bplapply(list(1, "2", 3), sqrt, BPPARAM = param)
}, error=function(e) invisible(e))

## ----logs_FUN----------------------------------------------------------------
FUN <- function(i) {
    futile.logger::flog.debug(paste("value of 'i':", i))
    if (!length(i)) {
        futile.logger::flog.warn("'i' has length 0")
        NA
    } else if (!is(i, "numeric")) {
        futile.logger::flog.debug("coercing 'i' to numeric")
        as.numeric(i)
    } else {
        i
    }
}

## ----logs_FUN_WARN-----------------------------------------------------------
param <- SnowParam(2, log = TRUE, threshold = "WARN", stop.on.error=FALSE)
result <- bplapply(list(1, "2", integer()), FUN, BPPARAM = param)
simplify2array(result)
## ----logs_FUN_DEBUG----------------------------------------------------------
param <- SnowParam(2, log = TRUE, threshold = "DEBUG", stop.on.error=FALSE)
result <- bplapply(list(1, "2", integer()), FUN, BPPARAM = param)
simplify2array(result)

## ----timeout_constructor-----------------------------------------------------
param <- SnowParam(timeout = 20, stop.on.error=FALSE)
param

## ----timeout_setter----------------------------------------------------------
param <- SnowParam(timeout = 2, stop.on.error=FALSE)
fun <- function(i) {
    Sys.sleep(i)
    i
}
bptry(bplapply(1:3, fun, BPPARAM = param))

## ----debug_sqrtabs-----------------------------------------------------------
fun1 <- function(x) {
    v <- abs(x)
    sapply(1:length(v), function(i) sqrt(v[i]))
}

## ----debug_fun1_debug--------------------------------------------------------
fun2 <- function(x) {
    v <- abs(x)
    futile.logger::flog.debug(
        paste0("'x' = ", paste(x, collapse=","), ": length(v) = ", length(v))
    )
    sapply(1:length(v), function(i) {
        futile.logger::flog.info(paste0("'i' = ", i))
        sqrt(v[i])
    })
}

## ----debug_param_debug-------------------------------------------------------
param <- SnowParam(3, log = TRUE, threshold = "DEBUG")

## ----debug_DEBUG-------------------------------------------------------------
res <- bplapply(list(c(1,3), numeric(), 6), fun2, BPPARAM = param)
res

## ----debug_sqrt--------------------------------------------------------------
res <- bptry({
    bplapply(list(1, "2", 3), sqrt, BPPARAM = SnowParam(3, stop.on.error=FALSE))
})
res

## ----debug_sqrt_wrap---------------------------------------------------------
fun3 <- function(i) sqrt(i)

## ----sessionInfo, results="asis"----------------------------------------------
toLatex(sessionInfo())

BiocParallel/inst/doc/Errors_Logs_And_Debugging.Rnw

%\VignetteIndexEntry{3. Errors, Logs and Debugging}
%\VignetteKeywords{parallel, Infrastructure}
%\VignettePackage{BiocParallel}
%\VignetteEngine{knitr::knitr}

\documentclass{article}

<<style, eval=TRUE, echo=FALSE, results="asis">>=
BiocStyle::latex()
@

\newcommand{\BiocParallel}{\Biocpkg{BiocParallel}}

\title{Errors, Logs and Debugging in \BiocParallel}
\author{Valerie Obenchain and Martin Morgan}
\date{Edited: December 16, 2015; Compiled: \today}

\begin{document}

\maketitle
\tableofcontents

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

This vignette is part of the \BiocParallel{} package and focuses on error
handling and logging. A section at the end demonstrates how the two can be
used together as part of an effective debugging routine.

\BiocParallel{} provides a unified interface to the parallel infrastructure
in several packages including \CRANpkg{snow}, \CRANpkg{parallel},
\CRANpkg{batchtools} and \CRANpkg{foreach}. When implementing error
handling in \BiocParallel{} the primary goals were to enable the return of
partial results when an error is thrown (vs just the error) and to
establish logging on the workers. In cases where error handling existed,
such as \CRANpkg{batchtools} and \CRANpkg{foreach}, those behaviors were
preserved. Clusters created with \CRANpkg{snow} and \CRANpkg{parallel} now
have flexible error handling and logging available through
\Rcode{SnowParam} and \Rcode{MulticoreParam} objects.

In this document the term ``job'' is used to describe a single call to a
bp*apply function (e.g., the \Rcode{X} in \Rcode{bplapply}). A ``job''
consists of one or more ``tasks'', where each ``task'' is run separately
on a worker.
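For example (a minimal sketch, not evaluated here; the worker and task
counts are illustrative only), the following call to \Rcode{bplapply} is a
single ``job''; with \Rcode{tasks = 2} its four elements are sent to the
two workers as two ``tasks'' of two elements each:

<<job-task-sketch, eval=FALSE>>=
## illustrative only: 2 workers, with the 4 elements of 'X' split into
## 2 tasks of 2 elements each
param <- SnowParam(workers = 2, tasks = 2)
res <- bplapply(1:4, sqrt, BPPARAM = param)
@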
The \Rpackage{BiocParallel} package is available at bioconductor.org and
can be downloaded via \Rcode{BiocManager::install}:

<<BiocManager, eval=FALSE>>=
if (!requireNamespace("BiocManager", quietly = TRUE))
    install.packages("BiocManager")
BiocManager::install("BiocParallel")
@

Load the package:

<<load>>=
library(BiocParallel)
@

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Error Handling}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Messages and warnings}

\BiocParallel{} captures messages and warnings in each job, returning the
output to the manager and reporting these to the user after the completion
of the entire operation. Thus

<<messages, eval = FALSE>>=
res <- bplapply(1:2, function(i) { message(i); Sys.sleep(3) })
@
%%
reports messages only after the entire \Rcode{bplapply()} is complete. It
may be desirable to output messages immediately. Do this using
\Rcode{sink()}, as in the following example:

<<messages-immediate, eval = FALSE>>=
res <- bplapply(1:2, function(i) {
    sink(NULL, type = "message")
    message(i)
    Sys.sleep(3)
})
@
%%
This could be confusing when multiple workers write messages at the same
time -- the messages will be interleaved in an arbitrary way -- or when the
workers are not all running on the same computer (e.g., with
\Rcode{SnowParam()}), so it should not be used in package code.

\subsection{Catching errors}

By default, \BiocParallel{} attempts all computations and returns any
warnings and errors along with successful results. The
\Rcode{stop.on.error} field controls whether the job is terminated as soon
as one task throws an error. This is useful when debugging or when running
large jobs (many tasks) and you want to be notified of an error before all
runs complete. \Rcode{stop.on.error} is \Rcode{TRUE} by default.
<<errors_constructor>>=
param <- SnowParam()
param
@

The field can be set when constructing the param or modified with the
\Rcode{bpstopOnError} accessor.

<<errors_stopOnError>>=
param <- SnowParam(2, stop.on.error = TRUE)
param
bpstopOnError(param) <- FALSE
@

In this example \Rcode{X} has length 6. By default, the elements of
\Rcode{X} are divided as evenly as possible over the number of workers and
run in chunks. The number of tasks is set equal to the length of \Rcode{X},
which forces each element of \Rcode{X} to be executed separately (6 tasks).

<<errors_6tasksA_stopOnError>>=
X <- list(1, "2", 3, 4, 5, 6)
param <- SnowParam(3, tasks = length(X), stop.on.error = TRUE)
@

Tasks 1, 2, and 3 are assigned to the three workers, and are evaluated.
Task 2 fails, stopping further computation. All successfully completed
tasks are returned and can be accessed with \Rcode{bpresult()}. Usually,
this means that the results of tasks 1, 2, and 3 will be returned.

<<errors_6tasksA_stopOnError_output>>=
result <- tryCatch({
    bplapply(X, sqrt, BPPARAM = param)
}, error=identity)
result
bpresult(result)
@

Using \Rcode{stop.on.error=FALSE}, all tasks are evaluated.

<<errors_6tasks_nonstopOnError>>=
X <- list("1", 2, 3, 4, 5, 6)
param <- SnowParam(3, tasks = length(X), stop.on.error = FALSE)
result <- tryCatch({
    bplapply(X, sqrt, BPPARAM = param)
}, error=identity)
result
bpresult(result)
@

\Rcode{bptry()} is a convenient way of trying to evaluate a
\Rcode{bpapply}-like expression, returning the evaluated results without
signalling an error.

<<error_bptry>>=
bptry({
    bplapply(X, sqrt, BPPARAM=param)
})
@

In the next example the elements of \Rcode{X} are grouped instead of run
separately. The default value for \Rcode{tasks} is 0, which means \Rcode{X}
is split as evenly as possible across the number of workers. There are 3
workers, so the first task consists of \Rcode{list(1, 2)}, the second is
\Rcode{list("3", 4)} and the third is \Rcode{list(5, 6)}.

<<errors_3tasksA_stopOnError>>=
X <- list(1, 2, "3", 4, 5, 6)
param <- SnowParam(3, stop.on.error = TRUE)
@

The output shows an error when evaluating the third element, but also that
the fourth element, in the same chunk as 3, was not evaluated.
All tasks are evaluated because they were assigned to workers before the
first error occurred.

<<errors_3tasksA_stopOnError_output>>=
bptry(bplapply(X, sqrt, BPPARAM = param))
@

Side Note: Results are collected from workers as they finish, which is not
necessarily the same order in which they were loaded. Depending on how
tasks are divided it is possible that the task with the error completes
after all others, so essentially all workers complete before the job is
stopped. In this situation the output includes all results along with the
error message and it may appear that \Rcode{stop.on.error=TRUE} did not
stop the job soon enough. This is just a heads up that the usefulness of
\Rcode{stop.on.error=TRUE} may vary with run time and distribution of
tasks over workers.

\subsection{Identify failures with \Rcode{bpok()}}

The \Rcode{bpok()} function is a quick way to determine which (if any)
tasks failed. In this example we use \Rcode{bptry()} to retrieve the
partially evaluated expression, including the failed elements.

<<errors_bpok_bplapply>>=
param <- SnowParam(2, stop.on.error=FALSE)
result <- bptry(bplapply(list(1, "2", 3), sqrt, BPPARAM=param))
@

\Rcode{bpok} returns \Rcode{TRUE} if the task was successful.

<<errors_bpok>>=
bpok(result)
@

Once errors are identified with \Rcode{bpok}, the traceback can be
retrieved with the \Rcode{attr} function. This is possible because errors
are returned as \Rcode{condition} objects with the traceback as an
attribute.

<<errors_traceback>>=
attr(result[[which(!bpok(result))]], "traceback")
@

Note that the traceback has been modified from the full traceback provided
by \R{} to include only the calls from the time the \Rcode{bplapply}
\Rcode{FUN} is evaluated.

\subsection{Rerun failed tasks with \Rcode{BPREDO}}

Tasks can fail due to hardware problems or bugs in the input data. The
\BiocParallel{} functions support a \Rcode{BPREDO} (re-do) argument for
recomputing only the tasks that failed. A list of partial results and
errors is supplied to \Rcode{BPREDO} in a second call to the function.
The failed elements are identified, recomputed and inserted into the
original results. The bug in this example is the second element of
\Rcode{X}, which is a character when it should be numeric.

<<redo_error>>=
X <- list(1, "2", 3)
param <- SnowParam(2, stop.on.error=FALSE)
result <- bptry(bplapply(X, sqrt, BPPARAM=param))
result
@

First fix the input data.

<<errors_BPREDO_input>>=
X.redo <- list(1, 2, 3)
@

Repeat the call to \Rcode{bplapply}, this time supplying the partial
results as \Rcode{BPREDO}. Only the failed calculations are computed, in
the present case requiring only one worker.

<<redo_run>>=
bplapply(X.redo, sqrt, BPREDO=result, BPPARAM=param)
@

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Logging}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

NOTE: Logging as described in this section is supported for SnowParam,
MulticoreParam and SerialParam.

\subsection{Parameters}

Logging in \BiocParallel{} is controlled by 3 fields in the
\Rcode{BiocParallelParam}:

\begin{verbatim}
log: TRUE or FALSE
logdir: location to write log file
threshold: one of "TRACE", "DEBUG", "INFO", "WARN", "ERROR", "FATAL"
\end{verbatim}

When \Rcode{log = TRUE} the \CRANpkg{futile.logger} package is loaded on
each worker. \BiocParallel{} uses a custom script on the workers to collect
log messages as well as additional statistics such as gc, runtime and node
information. Output to stderr and stdout is also captured. By default
\Rcode{log} is FALSE and \Rcode{threshold} is {\it INFO}.

<<logs_constructor>>=
param <- SnowParam(stop.on.error=FALSE)
param
@

Turn logging on and set the threshold to {\it TRACE}.
<<logs_accessors>>=
bplog(param) <- TRUE
bpthreshold(param) <- "TRACE"
param
@

\subsection{Setting a threshold}

All thresholds defined in \CRANpkg{futile.logger} are supported:
{\it FATAL}, {\it ERROR}, {\it WARN}, {\it INFO}, {\it DEBUG} and
{\it TRACE}. All messages greater than or equal to the severity of the
threshold are shown. For example, a threshold of {\it INFO} will print all
messages tagged as {\it FATAL}, {\it ERROR}, {\it WARN} and {\it INFO}.

Because the default threshold is {\it INFO} it catches the
{\it ERROR}-level message thrown when attempting the square root of a
character ("2").

<<logs_bplapply>>=
tryCatch({
    bplapply(list(1, "2", 3), sqrt, BPPARAM = param)
}, error=function(e) invisible(e))
@

All user-supplied messages written in the \CRANpkg{futile.logger} syntax
are also captured. This function performs argument checking and includes a
couple of {\it WARN} and {\it DEBUG}-level messages.

<<logs_FUN>>=
FUN <- function(i) {
    futile.logger::flog.debug(paste("value of 'i':", i))
    if (!length(i)) {
        futile.logger::flog.warn("'i' has length 0")
        NA
    } else if (!is(i, "numeric")) {
        futile.logger::flog.debug("coercing 'i' to numeric")
        as.numeric(i)
    } else {
        i
    }
}
@

Turn logging on and set the threshold to {\it WARN}.

<<logs_FUN_WARN>>=
param <- SnowParam(2, log = TRUE, threshold = "WARN", stop.on.error=FALSE)
result <- bplapply(list(1, "2", integer()), FUN, BPPARAM = param)
simplify2array(result)
@

Changing the threshold to {\it DEBUG} catches both {\it WARN} and
{\it DEBUG} messages.

<<logs_FUN_DEBUG>>=
param <- SnowParam(2, log = TRUE, threshold = "DEBUG", stop.on.error=FALSE)
result <- bplapply(list(1, "2", integer()), FUN, BPPARAM = param)
simplify2array(result)
@

\subsection{Log files}

When \Rcode{log == TRUE}, log messages are written to the console by
default. If \Rcode{logdir} is given the output is written out to files, one
per task. File names are prefixed with the name in
\Rcode{bpjobname(BPPARAM)}; default is 'BPJOB'.
\begin{verbatim}
param <- SnowParam(2, log = TRUE, threshold = "DEBUG", logdir = tempdir())
res <- bplapply(list(1, "2", integer()), FUN, BPPARAM = param)
## loading futile.logger on workers

list.files(bplogdir(param))
## [1] "BPJOB.task1.log" "BPJOB.task2.log"
\end{verbatim}

Read in BPJOB.task2.log:

\begin{verbatim}
readLines(paste0(bplogdir(param), "/BPJOB.task2.log"))
##  [1] "############### LOG OUTPUT ###############"
##  [2] "Task: 2"
##  [3] "Node: 2"
##  [4] "Timestamp: 2015-07-08 09:03:59"
##  [5] "Success: TRUE"
##  [6] "Task duration: "
##  [7] "   user  system elapsed "
##  [8] "  0.009   0.000   0.011 "
##  [9] "Memory use (gc): "
## [10] "         used (Mb) gc trigger (Mb) max used (Mb)"
## [11] "Ncells 325664 17.4     592000 31.7   393522 21.1"
## [12] "Vcells 436181  3.4    1023718  7.9   530425  4.1"
## [13] "Log messages:"
## [14] "DEBUG [2015-07-08 09:03:59] value of 'i': 2"
## [15] "DEBUG [2015-07-08 09:03:59] coercing 'i' to numeric"
## [16] "DEBUG [2015-07-08 09:03:59] value of 'i': "
## [17] "WARN [2015-07-08 09:03:59] 'i' has length 0"
## [18] ""
## [19] "stderr and stdout:"
## [20] "character(0)"
\end{verbatim}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Worker timeout}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

NOTE: \Rcode{timeout} is supported for SnowParam and MulticoreParam.

For long running jobs or untested code it can be useful to set a time
limit. The \Rcode{timeout} field is the time, in seconds, allowed for each
worker to complete a task; default is \Rcode{Inf}. If the task takes longer
than \Rcode{timeout} a timeout error is returned.
Time can be changed during param construction with the \Rcode{timeout}
arg,

<<timeout_constructor>>=
param <- SnowParam(timeout = 20, stop.on.error=FALSE)
param
@

A shorter timeout causes long-running tasks to fail; the \Rcode{bptimeout}
setter can also be used to change the field on an existing param.

<<timeout_setter>>=
param <- SnowParam(timeout = 2, stop.on.error=FALSE)
fun <- function(i) {
    Sys.sleep(i)
    i
}
bptry(bplapply(1:3, fun, BPPARAM = param))
@

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Debugging}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Effective debugging strategies vary by problem and often involve a
combination of error handling and logging techniques. In general, when
debugging \R{}-generated errors the traceback is often the best place to
start, followed by adding debug messages to the worker function. When
troubleshooting unexpected behavior (i.e., not a formal error or warning)
adding debug messages or switching to \Rcode{SerialParam} are good
approaches. Below is an overview of these different strategies.

\subsection{Accessing the traceback}

The traceback is a good place to start when tracking down \R{}-generated
errors. Because the function is executed on the workers it is not
accessible for interactive debugging with functions such as \Rcode{trace}
or \Rcode{debug}. The traceback provides a snapshot of the state of the
worker at the time the error was thrown.

This function takes the square root of the absolute value of a vector.
<<debug_sqrtabs>>=
fun1 <- function(x) {
    v <- abs(x)
    sapply(1:length(v), function(i) sqrt(v[i]))
}
@

Calling ``fun1'' with a character throws an error:

\begin{verbatim}
param <- SnowParam(stop.on.error=FALSE)
result <- bptry({
    bplapply(list(c(1,3), 5, "6"), fun1, BPPARAM = param)
})
result
## [[1]]
## [1] 1.000000 1.732051
##
## [[2]]
## [1] 2.236068
##
## [[3]]
##
## traceback() available as 'attr(x, "traceback")'
##
## attr(,"REDOENV")
##
\end{verbatim}

Identify which elements failed with \Rcode{bpok}:

\begin{verbatim}
bpok(result)
## [1]  TRUE  TRUE FALSE
\end{verbatim}

The error (i.e., third element of ``result'') is a \Rcode{condition}
object:

\begin{verbatim}
is(result[[3]], "condition")
## [1] TRUE
\end{verbatim}

The traceback is an attribute of the \Rcode{condition} and can be accessed
with the \Rcode{attr} function.

\begin{verbatim}
cat(attr(result[[3]], "traceback"), sep = "\n")
## 4: handle_error(e)
## 3: h(simpleError(msg, call))
## 2: .handleSimpleError(function (e)
##    {
##        annotated_condition <- handle_error(e)
##        stop(annotated_condition)
##    }, "non-numeric argument to mathematical function", base::quote(abs(x))) at #2
## 1: FUN(...)
\end{verbatim}

In this example the error occurs in \Rcode{FUN}; lines 2, 3, 4 involve
error handling.

\subsection{Adding debug messages}

When a \Rcode{numeric()} is passed to ``fun1'' no formal error is thrown
but the length of the second list element is 2 when it should be 1.

\begin{verbatim}
bplapply(list(c(1,3), numeric(), 6), fun1, BPPARAM = param)
## [[1]]
## [1] 1.000000 1.732051
##
## [[2]]
## [[2]][[1]]
## [1] NA
##
## [[2]][[2]]
## numeric(0)
##
## [[3]]
## [1] 2.44949
\end{verbatim}

Without a formal error we have no traceback, so we'll add a few debug
messages. The \CRANpkg{futile.logger} syntax tags messages with different
levels of severity. A message created with \Rcode{flog.debug} will only
print if the threshold is {\it DEBUG} or lower. So in this case it will
catch both INFO and DEBUG messages.
``fun2'' has debug statements that show the value of `x', length of `v'
and the index `i'.

<<debug_fun1_debug>>=
fun2 <- function(x) {
    v <- abs(x)
    futile.logger::flog.debug(
        paste0("'x' = ", paste(x, collapse=","), ": length(v) = ", length(v))
    )
    sapply(1:length(v), function(i) {
        futile.logger::flog.info(paste0("'i' = ", i))
        sqrt(v[i])
    })
}
@

Create a param that logs at a threshold level of {\it DEBUG}.

<<debug_param_debug>>=
param <- SnowParam(3, log = TRUE, threshold = "DEBUG")
@

<<debug_DEBUG>>=
res <- bplapply(list(c(1,3), numeric(), 6), fun2, BPPARAM = param)
res
@

The debug messages require close inspection, but focusing on task 2 we see

\begin{verbatim}
res
## ############### LOG OUTPUT ###############
## Task: 2
## Node: 2
## Timestamp: 2023-03-23 12:17:28.969158
## Success: TRUE
##
## Task duration:
##    user  system elapsed
##   0.156   0.005   0.163
##
## Memory used:
##           used (Mb) gc trigger (Mb) limit (Mb) max used (Mb)
## Ncells  942951 50.4    1848364 98.8         NA  1848364 98.8
## Vcells 1941375 14.9    8388608 64.0      32768  2446979 18.7
##
## Log messages:
## INFO [2023-03-23 12:17:28] loading futile.logger package
## DEBUG [2023-03-23 12:17:28] 'x' = : length(v) = 0
## INFO [2023-03-23 12:17:28] 'i' = 1
## INFO [2023-03-23 12:17:28] 'i' = 0
##
## stderr and stdout:
\end{verbatim}

This reveals the problem. The index for \Rcode{sapply} is along `v', which
in this case has length 0. This forces `i' to take values of `1' and `0',
giving an output of length 2 for the second element (i.e., \Rcode{NA} and
\Rcode{numeric(0)}). ``fun2'' can be fixed by using \Rcode{seq\_along(v)}
to create the index instead of \Rcode{1:length(v)}.

\subsection{Local debugging with \Rcode{SerialParam}}

Errors that occur on parallel workers can be difficult to debug. Often the
traceback sent back from the workers is too much to parse or not
informative. We are also limited in that our interactive strategies of
\Rcode{browser} and \Rcode{trace} are not available. One option for further
debugging is to run the code in serial with \Rcode{SerialParam}.
This removes the ``parallel'' component and is the same as running a
straight \Rcode{*apply} function. This approach may not help if the problem
was hardware related, but can be very useful when the bug is in the \R{}
code. We use the now familiar square root example with a bug in the second
element of \Rcode{X}.

<<debug_sqrt>>=
res <- bptry({
    bplapply(list(1, "2", 3), sqrt, BPPARAM = SnowParam(3, stop.on.error=FALSE))
})
res
@

\Rcode{sqrt} is an internal function. The problem is likely with our data
going into the function and not the \Rcode{sqrt} function itself. We can
write a small wrapper around \Rcode{sqrt} so we can see the input.

<<debug_sqrt_wrap>>=
fun3 <- function(i) sqrt(i)
@

Debug the new function:

\begin{verbatim}
debug(fun3)
\end{verbatim}

We want to recompute only elements that failed and for that we use the
\Rcode{BPREDO} argument. The BPPARAM has been changed to \Rcode{SerialParam}
so the job is run in the local workspace in serial.

\begin{verbatim}
> bplapply(list(1, "2", 3), fun3, BPREDO = res, BPPARAM = SerialParam())
Resuming previous calculation ...
debugging in: FUN(...)
debug: sqrt(i)
Browse[2]> objects()
[1] "i"
Browse[2]> i
[1] "2"
Browse[2]>
\end{verbatim}

The local browsing allowed us to see the problem input was the character
"2".
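Once the offending input has been identified, disable interactive
debugging of the wrapper with base \R{}'s \Rcode{undebug} (a small
housekeeping step, shown here without evaluation):

\begin{verbatim}
undebug(fun3)
\end{verbatim}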
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{\Rcode{sessionInfo()}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

<<sessionInfo, results="asis">>=
toLatex(sessionInfo())
@

\end{document}
ʱvq란rW%Y["Ja( ՜#˗YyR3QҶ`b44)CA<^sOdY-DXB.RFrjGYר"))Q\C,숷L %2ZRN1m{.\j g:pa:BBL>,'I;`a#qH.vqtrW "Hn`q\<(M,}Itwp$GraLM$UOnJPE?ۤR ,'*,7^(\|m"@Xxl!vNH$3+#O4h8$\1kh\sqRDG3&snPiƽ %nfhL,l&@Seg"qPBF!jWʳNٳC$E苒`WZr7nZTw,p#S-H̗!ȴ2>3W:YS~+>c JÖ">o,乣.,#3"_|K&56¤nG{zRpflv k~wx&X,ts5@kO mONBx0#K3jTAVb1F]f|6[?6z,^_+yu捙8xh~ށ-Q@;j2ck7MpF ,l՞UyM#ITC)le7^\m,i>rU9vms_Zn4 "*,j^5>蓣6mXrm5FU*opCo9!qBJR\c=t_I$U* h 4٣gMFRvg7h*uMQ&XP!o"QFz"=ABd6;?9prk!!n S a}x٤kڬ+W6 -5b> >EL5H=1wW#@7d-O%兖|5 ^UAcŔd|$yo~vL DFchW OkTۃϐ*`[搬wg>T!H[N􂉙cHd쵏];]MQ#daG9_GPFmn^G#hQHVnAn9\D DR#k@"2 @$o7U$ ERERu}/=1 Lk"Ĝ+4|?ҊtQXhKG &Lވ9/#e'qЁ%lS=I zꕰqƯ1VRð=(p"UD3}05I C,A+pNf:75 HN<sk+]As*89$MEF[LY$!r%?X9 ܇7xS\~gt5xTT NncoUE{!81a0?7"$sd 6:R gm Sm!")M1BAf`̃rxmh;axJyp5'eDbXmOWN}F;>ۻ  s]ev hP߉'2jM-%%#z=m Cg-kƯNo'I7qON-^m\t*Ssm)RM#}=W>t* 7f/8 uqq.f) ,ky{&(zBStB/(0*w_pBk,PǻdfxO?F`X:\-=@q2_k³J3ĝYܜw3"ңUW%!QL.̐hf)zLp&o/So;.V+@36UX'![¯l YgJW>fRvqAO;gpKx(Jg?>~tY߅)G\\+)@.~v:If b3*!_^ `q9 la@g \@G;OxW/@ U ĎE  ,tG33^mVPAHR:-#>G램>L*;4U">lphI^"]/Ǭ"Tۼ"%^J6ݣ=+x(4q{}dx/vܚyQ}ѻO~0L~SKL|fxxq>YMN[wɱ9 ε V%=%Y"bY)CUGVz7N_E&:>iX &&)ԙeS,Oq3EJUV$=!F =^H-Eíig޽"` F}^0 g#}3ۣSUD%tek  5uY hhvXʩyԆ른[Ǧ=&HphgГ ۍb~^o<=.n{a,a1갹s;}0NMTw6_bC 2xy_VMg\0&= p4O]S?Ov-M)[O-+cZuZd*+";Ε4Enp٧bk$ x 3Ti/7:JRn\GL2Nkw `Vy|7S@ ݨSznK(h~Obgشw{߄֥&s|hŰ#pH@O hL1v!Qg'O:Lߪ(jy}رqlw Q6 }P-"'R Ë f|Lo{k9.SF|a^kHC$/4=P8wuʓ|\GUQp^,0Z@(X2!+䧼#y١H+ :c\HCN]0#O;,ݐIZmJWeE-6^tT ڙu6< qRc nb?<}Iá%J_#ѫ@# r&泯 ~*U,7=&d"bjskD Q"n""Sg=(6ec7!D7h=q]v+u9swUkǏۇ8F<\iL8eᣚqOq}Zu j|Ե. 
ZtDet#ub)ISFȍYt[ˢlݥw>tw :1%hԡVĶ^;3,1O(dr\HĚ **=}yQ|!d=(hq[vgZ=H֞gz{F4F~CnOGI&o-Ep0&lf{o H|YBqpnqN ˅W&0I9͸V||[xRҵ0QjL*> $it}x 7Kzq<Pjf+_͜"nN 0ثƯsF$+&Ce͹}jFF-V=gy-Ikfd7^ƱbZ?8of4U:9Q6Wx\br[ᄵ[p܃w K+6?mW=?G7G[`Ԃ8hA 6jֻa<=flҎZJkKs) AGYvX:H"rj{cQr#\Ξ UrsQjJl0t˽wA])z"/,,5G^1/TS>mw9-8ct8M Phg)wڗ_Ez")ùv.pAK:w'jYaw/Q, E8R*w3U!qSGe?S!cáwsHwx ,|Л!hTQ,JSxi ǎuU7bIsހYF*/+.>)u9RHCh)n&ey8|qx^R;GO }oc)A|'('XXDWY>X m:pwbD9>qQ# r]ISD_{z8[ : K*;&7;r'6_ZhL`[K)]Xoms]ɨσ9dmۺnj4T U|1..vYRkzGVQO="?1f+3"_t'l_aGp1Pktk>j|Z_ޖ}_ so{?Ah2 c^sEv182'sHPT[w'Gi^qn{NL1ie,5UQ?@Cf쵏;&4,ҢEy`M|~/gQH J z˰iPЭ FtK 7kXVԹ&9*{ZFBE 8"Y.N;5.ivJZe;u>{UҌqERzriDmOU*;Y:€L{FlODZ}ULrK`E|aLȷ2w8TfB@gZb>hJ7iOqF(~8D,~EtՃT a("ZAd7qAT|nOf/Mo0zTG3^JTI`PW餓%I'B'w[Pɶ} op4O4vOhk"Ш6x1ΑCOZܫJ&Nxk:P{z{ FB\~gFt?r66͍'9v&-9PTU_Վx$)b!gDc6frMKJD߱`C!UxZjkJ~.Nӯ 2?J4m"qM$d`lGbիiDk֩ 36E<%<9MYЦPZ$}&!)oj) { VUo#^טy`RN^{Cnײ/3!՜Q#eT7'zjHxrX\4LOB \NMWFUBA|m`GChRyr st.$|Yn$0*nDس:K|Aɧ_-ֈ`dO8 5"N*ϒ<"*Of!}-nZ|qnBbtebm_EWף{r(/ Q:IN- ޚҰpj_giYMA8bh*q \L$; @z*QHJh^Y.6j`H$r_ӕGìL6z%n9-f|q:}TEK_lr s\u _0mkd{5`c%ۛFA XcR`dtxUj%hPQlݶ:*e<-{OF{!Xj@y+P~ trz %K3ƠVzDʇ5dE'qD:t4V./aBr{TiRnPn-@%Bl:`.)td9hxݘȎMh5j&`Fn@gWܟNy6}1nr$>=w%% r[ʴ^4N=U螖V,"\ydMx kheWa{0G$Uw3cѹB=H \R%"T=M%6G}dPi|\h$?~&j?ֵjg7;iO:3 )}L RY{ 8K bn cd6t1J ž(⪖(7@*+DWDu$20sEQ]^<آˏϖLjF)t j CStZTE 7 TL:9)ԓGDF1.]52M{M]!PZ`5,--u\9G 3apGAkW4r:QR_aKNTL6 Epvp2:?;R=8;ՌzbE#Ȉ/=Uh}$x Ľ0چ I!g6uSF$f``dTO缟>tF3!2P6kgRT>:lWL^ /M7G~Qq6/ nk$% 6!ukiA^&6FtlG㒤{/fm- +ODA7<1ūkϧX!h5娺fԡ/aͪX&/ 4Y6v*3eq6ъ@x,?xź ;2_x## s:-m‘v;PYJ"6oG"IJ6VgilV5@<CIwa.;X=h_<ƐglMTt #om*5@%p*eMl+ wFЁQE|*Xk\%mu]a iYWua=析&V OGr LjN^FepS>H֎~ξ}]|J4Hώ>̚&vy$M:8_n!#ϡzZHOFl( 2"YpľK)c-RB_YJ53Џ~XWB@`aVPt[{N B*FfYk=8g@I2b'Z zdt5 /$UMw;'|(LA!}GPPK[djo2ɓIY$ޣv;F=D˨ #Mm~ jpڗAVPvxg)[C^#9Xq<jQ@FWCSE{2ΫcŷΏ{}q EÙ/?i~x)'@ <`F._&ݼXPryet= [%R}mǓkJ #+>(uPn.?ol߾j+H, 渚"Xߦ VGqD=-Psi5P,XCA ZeLx!pe(RJA-خ2q?u 8#VtAz߽åA\hiY4/cЁ˱5\n(Zٚ7C lB%^-˪*C-RѾt[!'x#(2>*.cXSEN z2ŵ7.SzC;#\L5-KVFD37-G.U\fhdLXqJAs ?s^:9j36IrS- {xܧ!H۞W =Qxw}OG\γQ?iP-XԯkB?6ygZg>neTtI$}Nl=|vefi[uO3K 
AQz;oi*!$!|0_:D>3FYeƈ*l HAqQ+a,2G$QJcb([^={cAםRiw:BĎ(Jrwhռy7j_Gp+}kO8p*#ĩ׬]RwQ /GQNXp6bK[ %M1',`, ![MKbqT:KΥTR5=! >&8W繢k|V).Zu!.k‰ p ':#ye1O\0$+4hhn|U K7F2eȻMT?DZ~GNR>^0MphԱp0QAbX#vvVES"}G$6D>?TɕSBSI G !]gG@wg)j4 [Yf{u3w2 bO݀6Xӿf',||)is0c5"Ԛ۸jhGmcmB{ALjG`3pV&0>jq:%vS"#![zjgbPM6!ehRg(oھÊ\uư֛e)2PʨDȥ=zݲtb rrtH.,qMhjYn(2M+z#.Ndse kcd|?8p92=M[WQY}M3LW {j1+UiR!̽/ϱD(ȾvL~=n'x>ow 6ABY;h֒^Q2)]mynXު}!aA##b;JtQ.Aj.mw{׿e$TRΝ3e'(ǟ=Hg*mv"S仑P0lUs]tކEᏘX,T y>z'M3o*7 ^mV`>^I_ 4x,.}5δ4;ʪ)z8qQEud(7@}()e׬Z%"l"e8kH ;vM  ϼ#@"e)/d{G6f!:acLxVD-y+Ib Lek1ˎ Z-1F;ETԨ Jip,҃2mug;#ݸa‘Tf M6.+|o@"i㹠?t73ݳ7ߏNX?bq;###y'hvc-)DGꬹ2z^rY#.ہjU{\MznV=fFwƶd_%8 s+iםS="B_9~Ժ&j=' YP8(ecFĹJji1G5 R)c\2 1u2/Z(BX;U82ْ ʣ:-3=_d^H<LںTw>]V/er4V›Ske MkՏ] endstream endobj 260 0 obj << /Length1 1608 /Length2 7459 /Length3 0 /Length 8276 /Filter /FlateDecode >> stream xڭUeXFSZ$$``n[BiN 醃~g}_s]^Zza搲8'@`BT9G'?3C!@8HȂ,aaa  lm0j鱰;`I`|pC@#6ۀV`{@FC@I] PA@@{= A` `B,[q>bI@d~<r9A` Pbab 311 a`G8౪_<6@0cz̴ZnO1 !0]h1`8Ζ t'ttts'pފ5{Q VP_~K\A{gXI-{% K , `? D?qwhy{{u_@_@ǿ{/D |Q nN`<d [3ׅXУ[Lla=t?!ÛKV*_?Yu<wjP1O܂@O/[ w e Fb'p qpqv~Tml%A sPAiO}#]I4p3;7J̄QTΫ[R䒔6Ʃb/V",}fd~(QV6ݴI nJ̹Zi/%j! =.-m#Y{sxcJM{AsIn_Izi{xpj7!Ɂ+ 5w WYZFU=t<׷?J7PwiP(hȢXKcf%2fc\](rxzז(+q]{&`j1DO$ .]F^E֤qIꎇ؍RP d#Rc^^DY7NsI"bd|3_ EBÇ,BJ6= MKXx՞6t-m=a=uYGAWrV U d* js!O/Ȇ_j. e;*ȱp؋jR4)f Q9LiBKL:kLaz!:1ΠW(j鰇0@Av-} ~|K] j(FٷT{t/8\rfS<%B/$0v.R۫P7㳱SC}EVF&#zZU.BS2 drQSN荛S.CX{xd{㳇iѲX:fsk-b/i[2qMq:\?~h,wPFvN_I^%\(c "> )Qv 3H9'!vefp{/RoE;c~3j377&B٨ r)f SWJz =tDq%䮁4Bnǫqŝʺc k|E:<9yyzJ :6&1du//AK&j.DB۶ z9RR:{tY%e>g`zO J(N@agnt_Ӥnh_K'CޖN} [0*)c[ xks!d 5j5&}J~yݫp?*,8<#G؉mJ0p3H4t}d\{I9ja~8/W=,m5b0"׿Zd >v~K{AvSǼ$E5'кtcKIu{. 
ΣrHg%ØCTWy~OWm W3A"+f\fb`FE^m zλlq- _~wbj1nN oe+{ !bXM%)0.kNē_NՔ%txЁhU, ,+ 4y–x%ێI#f,Ue>x9^8YБVk/0ׇhX S~~sB=_S ѓ ]_$ ,ĹQf:G`RPL.95pp,}|`ͽe\vvUCw 7`#vS*4.L T:i̾%+}p2Ȁ=sX^2P;ߕ4ݝUP5SpZB}/ Rs(!çWpݎiB~K}L *X]5j*[MOp±NsE<.+R`xLXed[C0 c ޡ/ y+ڍS۴Xy?Y5zAsͯa?xXz3VJ8.2ak';&IrȈFbTގ{[AKluQupxeշ΂V<|9z.~GJ*'CXΰ~5z溬8itdC(Gtoo!*t?G4imrWNl KsPu^!FQ O_(=>^Q]X3 F~QQqRpL-Zǯ~ jQ醾_+dG47D z,IXБӈHs3qq'تj:5tS}S+?]F|/`+LMrO>GvY7KH{~u%"-CpN(6Zj*]lS[./Q#Q~̶̢7Ǝt[+PoЍ =xϋ/d;0\ﲋ3n|hQmJ%r3HGlĘO]hT d˸P*?H\b4,2{^uF_yΰ«*cn F$JyM\,lv@ 0߷Ӌ>s<)E3^h 5l um%)⺩Z |TBhmLZm{yJSԣ-hCFQ3OӺTX&P_QϢ7\l0UhWSXc}I3Y ~-ѕ[hh!ȹG0$'"q^ڀcvԩǷ ^g:qs,P']MyuÅ0޽JO0 ͜G96O0}uگ)6,l؂+$8Rs$&&6! _Z(iLT=16@(k׿y9,2"G3ك졯Yg, hD4{S4>_ d=aa%K1K'$e.[ֱK޶ɱr-=,_ɢ}v܎d#O2>wi܂HM0zaV0H|0nKL9{9{ ύ,ŧEs!jvP_xcIzח;Mw#0tttCoup`g byWK'iMwFH/{i_x c5#j=auP;}2ű!/gY~Ơ %2,!a&Iz6^:&QvvN,IϬ,:|ME.u[6[dѽv`NdҍF-˯GQ0j=c1ΐ[ WԒ[eqׅW 3"4-qKD>.!"PjЊvJ{>Lhzs37̈́' ϳ$q'5Ι9C||?SJRԒ8xU5& * tvt9JU);O#ym@) n>XNJ*4_ ]IS?7e&H@?r `_1A"EKiޔE@Bt; 7je]#!k5G1ύ~^"On)O}m~) B9;GNGmg}P%<);r`:K(G;eZ/_w2Y%Ywl32I߿r‰j=`}z2A8JΥrSۜRRJI lWRmXJf'hPoX⢗`ڞkSMHP{I41B;f,~:ð#ȎSG=+eD*%R \E^C^Xi2Y|ӮANˁk e\ypҝS)Z DwȢSѺ0'vS]`ڱ1fvT/hN/+t>\!`Qy-0ߖ8;p 4ۢP9*yyEPRjo0F۾pa>SbGTiUjX4$$&%JRGU볮&;%f*6%|'\# 360EK=?)Zw]Nd}ɤ!00`,dP$j,1Դ(z ZFNˆsk`.[[AdJ1?|4_N.mn817a%zq-TgUjA|ͳkri}UOrudZ뷢\#Jy~(>C3KtX LSZ&S*,C$U?dF᝚Kҫ '%f}}<3-;R9/-}#h]#EL4`CҔZZi7֖ C{{;Dﵛ- <ܕMU> 9ĹUW9Un$"͂?eEKdʼnT} 4P.ꬣL6)>5Qˆ8LSͦl] _z3!!;Yo . 
{z׃ᾯWYK9/80vľؼºU%S5 m\e#!Q,;FJ -8#qo;;QZHz>zJxH}HT9mL 2``9IvTDQ{n]s}H*b#㓨IY 6s52٭I Bܰ7Ί >WI6 DlJq ƗsaX٭,_N*Yf>Reu (k!3q~^(!9w~eڴEKlX7t(*HEUP""WxNJZc1 {}xcB]B {Ҫ8**f^CqٟYf(IeVnc p%L%$][H8-mI*6 V,/YymW&eW'(g+ _74wMI+֍ܫSSȖ1C!6m#7(L +Gtxyf3jVWL{ָ"5 Ͷ‡$Ki%8BF5h&X(z n7fK}+~L!!6CW2Nđ$(TJd'ܑzh!:[`6`TxIizUmލ0P~Ϊ*>~=N+=wyک]C2Nw&1y ׀`(Ew3W hy^ V~/~6Q2ob,ksdr  e}HHXh02'iN|Ubf$Ym맮7BӃk՟Wj!krr&xa mvSйIuieU@=Nk}Ur͈Թt ,*;cۼ8"׃3ȡpr(ۅ,;Lgbry\J$XCNDzx齼WouƁW'upvaFfrA4-?,H7aQ0ym"[NjAM:;ؠZ0WJ.^s+gTaqnܻ\y3W9 PXg3=*"/"m1 8n;B70'pꗆ 5giwlKYþlh]1<9LexLol=bD> G4C&pb(WoG^^eD>:}ϱfHόMY'tm"'qkjz# 'Q>s__#}RN>8Φ,!ii#O&+\ jpa9[v"b#/gQȶvz8QvG$Pgb0٪3}vq.jM< j'ӎcumL+?ʼ&*T4Xu-t1q[фﱐǗ]O,Y L0u4 %  endstream endobj 262 0 obj << /Length1 1625 /Length2 5524 /Length3 0 /Length 6350 /Filter /FlateDecode >> stream xڭWgTS۶]C/!AZ#!%Ih"EJUQ:JUJQw%nAx=www~=ߜ߬kx⚮h.R@>.8sH\ y<6#( Ftp @ZIIJbx( %OhC?7@9{ nHo@jA,zKL⍄p ?8Nȥ8 $!/H A`}8ܱ0X<D}]@>DHE88^:ĉ!0FtE}#Q< /_.+}0X0|qH_wiܿWxFW H<&A%-C }#QTEHK!w! $kfDA\(@+J].KsMZ4iѿ]};7C? ep#=hcAz+Z#߱?5QĎKHH!FtW(-D"H$IK @½PR !PاߑK@MiքGoA74kcx qi@ZZJ$/|&l c{) ))io4Pp믡1P9 b}iy=DNᙑayc]'Sܢ( 5T|R)Q7|8N\6]kgjKClrtTK:d[m7#IY,15sz|r{,H_Q6,r9bº`#ޞ]eN`^ 5$t<RM|w^[5v~MN0ǖc\ a2g5!ZG$Ɔ~~Cއ-Av-a-P:Ⱦ|fda7~nɼ!b(+G/*06Y: C_:= 1~FU-v!'bo kLM[3n-bf,!N0靼mCz( ~*X֔1Z[?Oc^ϙkޥ(pP%nҬ̒ˌz'75|c[lvlI!!; <.OnOʯy6IFq0޲Uµ&5_Ccr$x>5 <yڤsWsֵ~-(Y7 R ڋ冥jv;4$it߭~aFPr<)eFclFxzI .̏y%Aafhp?gKeКMk69赟{U&&@|i1]+G95E9 ѽ'X}`3!Үx(F]pCh &R*]6jIÌ+S+4>Hoz?(ZCuΊgqTh{1|K=AvU髄ٕ~*Y޲!vE9O"=c!1mݨёTR%;"a*sBi,5C63ү^:Q.{+Ywn\Sqk}Pܳn#W>e8,ٷ0?@:l7z^xQx`ʽ (9+VOL`;ZSe_:4t$V߸.Nò>; o*jԋ,|lP~u'ցKE;Ο%5^}LӴ=hj}W!_ ةOdÒbГ CG+^߻[NFZl{B+C` UVQ(aQ-^V/CD<Aweݺ/6Ԓ$tj79Céჼz܏_ݒ<[I}dg5%CuF@;V+뷨NU\=Hf_B_-cbVлԩ6#bS6v\cdԒ{:.w,1peqvozew3ϡV5HW&8$WN?w{ &Ah횠;uYNcQZȃaJ.;R{kn +5,7O2w씲/AsD8'I7iw=T-WɊ7x/3-=r+9ΰ_gb*ύdK oTZ 66=7tro\*/JNvio݌_ 'T~1^>za;XT48(βe~ҝSsV|=ﴽfL2-(}. 
Zfg NAJsu˶J\hE09kڨB0%LG|H\=wXD'fY|mu#N`][ώCL>/*d\d#s[/fu-]y8F{-T3A?+g_ŤPs`%XnO7@Ar/(=TĶpԿaTjӻwH)zq|5՞p`“3u}cBdNmg a>GjGb";+ˏ>ДPMM[WA?j5&4<'Ԇщa%I0A`RINzaY'b>uV:K<Ёg߇_ ^ K-5cKAL ͇-ӊF9 TZݮOR&sƄӐ{M\mqTnn' 6ykqRdžB#K208/޶p--;R$]:SWWeR˘Dϙ%psRG!yEL}['YPL/okSt|2mjqGX `UH<:@Ni $yIk]8$=UXhn҂O*w넡oƁ1~枚i"\b~xϭ5>{ 7իHIxۃl4>914+Om=@*|?>Rb"'B-35s"皾[OfBl#!x$ll…㦉4+AY|nz5d۠!ĩk< 4#AvZ7x<'lA)K.%m1kUD?Ufz)-JϡN"YcwG(w(DMӻ;0Ư)^(`%KRG%:J|HJR7PUj6:X^¸2b=6,\?̫Q^]9r-z=Q݆BLjyasRv_wGt(~-T_6؈m>n mYZsTZ4m_\U,/"ҧh @"G^lnI̓`ūZ& <.֕ Qyqw2tݳX"b4w^7ﭯՅ8ӿV eMHw4(nSy6՚ pckIQh5gs>B;#Z^}+82E"EGl{|\}Usj)Eg$b]NUb N3+6k &5 S`̸}lϩ`|i79dr^ 9.ÔK^ K܏Hb0`9Л&0 HbH`qfL:CVݎ X'QX`^ZLX!\,Fl8M"8k+7L%ɇi!ξ`[d:5Iud |Li&S4I1/㥟a;6?3$cJ3LȓÆs@/:0Ѡ.v [3M-Z`28J\1 4*m->=ZtVɛ|<[fIB'rɱCCo/?2}3?'(rE,]qO] Sn31Ń)d= o L*d3M[|2 ԙViM\`?ټspOa־mVS*wrOZ%@!<%̢("Xl+vËڽxN 9I%!j" I^DYsl4I݈g+ vcuMrbwmaˍѵU =m cVBng2YE XI9NcS 9[鷣OڞKK[_!(3kdԤ*gx*:R΀/Ԕ^G vz;1uבٜ.t %7˩xv D jlNh|YToZzr9j;ܤ>&`KK|6 muOU=B\j= ZRxݣgao%)oF⯋]{2Snp ÿ`IcdL7xVЙR$rStX1};џ_`qNe-qϒ|?9Ȥ Rt^ ֻ{ӕł endstream endobj 264 0 obj << /Length1 1144 /Length2 6929 /Length3 0 /Length 7689 /Filter /FlateDecode >> stream xusu\ٺ5NECp(@qPA VK)N]X8ݡ_f3ߙsǻZ^Ytela0(h@\VP5n= XᲰA98 An (PXT+h!.0;C<"tۀEvwVw@u 989F@"""k1y; `}XxadzPCmղR h@Y0w;(PV;36MysnNP'v-"]y7$XY ܿ1{0 `7Ɓzޮ?I+`g؁~ܭ< {lp ?`V8 ` A߿Wfj :{ax%+ { =p <8AQDZVae WW` J0 `$/<@y?-/W〇j?zwW 8@C_>(#!62P{ Ղ l2_>ւCxn vwpߟj P-jE< no ? 
z8@0 l;= q i(^?G?ǍuF]AРp*~T89K^A]5,Z!VID$M)jJ\C#`'՛[r=J Eg,rĺa0 M7M,j%8"O5EƾSTZkG.TLjwIJ-'d )"* D[[U[P۸r>m-~{6E >pش#O]#e;eQ>(j؊˳u|S0c{^Ҧ Quեt(4Am_e]邊 Edr-I - Y跳i7ubriuZ)4ͳWRԀNTA"gdգ띭 ˃faʳ\) 33u%PۥN~-' #Y &`a3mdl%ŭH9{14PrR<߸+t.2Lb ~M-,2W"LԃR':MBc@݀"vYu"5ߥ{f֡PYBߢpkn8%4M=k{ >R8lo@p*?n}qnx)y7(l5*!j`M>A _'%m@"GOt>p*͓^y\#\j=}p*GR:׃6CH1f@vaӼ <R&a#=iMi'# Ա2^1v83,~C'Wl"( CYojoj—0{K{=-zF{n%7ҴvS@ {9tg{kB S<3ZaY(!V_.bJsY r_% ږay[c(l=Em7~峅YhFK@{_r}V1vN[/LP$^va.MeQV\epC7dGsz-~J~|% ,i m]UNJu,lTKIL;oH_4;*y f EWDۖ\BW1^'Ňө+Txig9םx5p`Ѝc CI6f RԿ0Sk=+{!`,n,Lְhh&?>]=\|6'"YZ$yxG\R %g~$ u+0gkAIQbŠ]-cSed {`sN)V5ȴOs} E ɀĞI"|6bzjf uFCY,J&)^1Zɻ[[%90.Zmc IMQU2(CdHJ{dmu).}fc)(7!yQmqh)1G="Mn?rޒ/f(;*"/DPʊQ$J8Q/4iD=峨OڏaWBT-viދXF8tn5*> 7)4_rbɷu{骜%طHpO%׬Ѭn ,/?t3!qȃ!C}hDPK.YԴwʋ# e >V=s̼Q ܁2R'1qA6\}涞Vqn1T}FE8vtڡQVZ nK ' |YMFYصyqk,j]ͥzd"'>Tّd1xQޯ6cq vb`_-q02%:~m=P=h&CNLӴ?]ũ֗H2ghCSiOQiҊW(|c W;(vF<ǟ.}#Ld+:~RO5T^viUN W#Uo 6zɯ:g*_ۯÑ,hAV`TM>ZNIx;Q,| +orT,ӿpD0 XMl^(Yt^[ձDSAB"TjSIlٽP70:=W?4z%BagG޽ G*5\'S7ASu_g;>pܶh:.@H@WO Uhb} ^v5KWIM1$NݖϗIMl!~x**GŻ#Rm`gFKv՘&2쩚9V¨b&cCubC/'vD2| HT6Q @ J/]ILP̂+ĎѸC0d":gwG]2+LANaZS G£[/nD PaGs^5&3nDKDWT"CGN9/Q5JPܝef>}hDHb89 }:~q#+j9.mJ5Ը^G?7݇IO]*U̘b6єm] rϞDH'̅3DFzwmQ f_~|\HHrnN fB rÉRqY%u۸s,aܲy^W]ҥO:4@&8g鵣HA"2aG>MBqzt(hQJFYss|Y5Vm{ֆ`+LH~wLpT~~"Apk?.RiNJc[Q^IUY*vUיV{tNM=7B쫂&V1K>] [^{Y_19߼`$Ty'E0 4S_udEYw&P+Z m(pk['9+1J]1r X/D $^_pw/p{]f1)Oaf"đ}M{I-7- ifgN<7!ww O~SXҦ`5j#.1~z$cΫۼ1fdwt y>L"d:xlz"wn.~̞'%Y^M3`[ꛏbn͉p@շyR&BZo2*白g8UPhcG\||FMӊRTrF ƿ!5t " o cNDFwǵ~RH(m}o̝׾3* z' KejUS3K+$S͒zW|-D@y<dxy~#\Gg-b02.W5hWC~-rJd;[K4%jX:u:B+KRuۼ/i Q~Y(k}YJ3W9[;z%&%ԭ|G{M@>";D`E%Kc}~VVbDB*Jv>8sˏ}o[E=?ssMi0l51Ol٫4Vlnt-(zL[T""m̛^ rOs}TIJi/4ֆ^] zYx3hGT_T hKJ{+ETqpk cM}Ǿ:.LRp[I*]m45q*$*3*։Ϩ(AyvkMFcAX#~1-vͷ:9ևԒ`L-bP.Ȓ&:&.Y)DW4ZS!;hZR+Oz/>! z1X7^^Uѩ)BώNgH76@.9qv_߿=a+xv;Tkѥ:$[6 i1i]k#J;tl$hHSc6(]j@FS*ϾdLIJ*ٮKM9b$GMvmP+\ͨ|RX)îi7]xqa"; h[e{uMϣ1gPOŠgRY5/^QWk "NZE϶/8(m4ȵ/*z~~.>ƴTy`#/DLJL.[9N|dѨmF$-n-y~{0V@Ep:]^3-D[kT{`/`dzAQ} 0O\..}~7CrUFiS4z>^xDJ1<'2 <Z_+F\x_d u-9H_mT~1`(V0*gI?) 
hXHYlE)%@4dPo}|0=4Q/6!Q0uS,}l_"z;l䪭fA-V.m/w"čǟOH{|=8e>sZA׊e0g"ʩ>[:󧴎_:[S, k2 t[>;{4CN7%GÏ¡Wv,)D%pȄ_Q6448vACEů蝖V``8}{gVr۠M)(JUytfhĆ^$=%>Gw@72X,b:`ZxَX"~^]#;`.{yw'@G>rϷs3qWhͯ3XFqaks-mՔ7o{"OcGQ,Oq4:(OKtOtj[pHrvsa3;̿#<+qOmV:a< uۭ~ฏv%SVCٓ\#҇FMfѫ(WZ>7qGsOn TpIk?Ghw;$~i >R ` >฿o\'{sP7bm_c9j/W2jT[V"C|svɕ{-l+t=bϢIߧ_:Q&SHhNPd $oG^ˇ} rc-(=ѐUL+8jER3Pj  Th*_}d7i~K*D=O誃~m8$a3Ă6+3<#[+Lu2k癛AOّ;x1g#vZ,*dͿu a6i]UNP/'7zNّ!FA-ԥ얘i渡|l8Z|^ԛc' x9]#K7{\L?_ʈJvF5+FX_nH{vQm*†Ej '[3eXS endstream endobj 266 0 obj << /Length 843 /Filter /FlateDecode >> stream xmUMo0+J! ᫊"RVmk N7R$ݪ70W?g_,ɍehܬ=WWU\;;׺v7MOtҺ=po>fv8 | G՗_n}w̭][GL2sQ擾ݾk^!00jYV%H~~v}\; C}h{ϗC`Rރѩc~^ON6[7ݛ ZԲW/{FR^ww?U4H6!L@@B@q\s *G|F/+>㹴3Z~Z83f3[:٭ ߬Lg3t33 ~!>CO!>S 33>IY ?BXIAup*Çq G潪N$p|eO_:q;:'dE_kCvW endstream endobj 267 0 obj << /Length 845 /Filter /FlateDecode >> stream xuUMo@Wla_BZXʡIW ld!fm웙7շĶM[؟McpuUӃsk/zfN꺼Ɠfn݅R^w}9qdMoXj_v}EQ>>pø;en>ڲ?`1&5vaj UkNAm<}\MxHM0}Z7WuI]ǽBnz/_ N{y;:ڰox\7nXw.kP^k3^Kյ u/A )`JbD>`2$`TY'``9&Dkx+0*NXXQQ3c w"]j~1F60aG+gıcW c rn q9Qܗ8% DMq.5Sh]`4$a]~9Vk ]8 IncT5obY:socsOPcYB?9Os֙3\Q.4ٰX3Z9#>^Z} ?L[ V|V|oV|3[: } B|)W|L| ,Y a!SMV,鸞:?8C8G潪N$ĸ<ޏ< Nuν_B,u7zl endstream endobj 268 0 obj << /Length 846 /Filter /FlateDecode >> stream xuUMo@Wla_BZXʡMW ldiof<ۻW_W7nzrc7)U7Nߜk]{7+wR}uN7|5s. 
)裮ݏk&8n~iyQqE0N[,g IM/*D@f`B9xczOpm`>W'9WRzL E]PwWqD`PދoSφ}= imX]ӷn<7̵^y]/׵Il/ܥ: ل0%1 " 0Z{q́0R0r0QK5<T`,if,1L.S5?׃[#M cL#F3X1+N978Nsk`q KpN8q )q4ϮEp O.5Ypc.Y7ь1O*ezl,d mY%0ymȋ,aYʘ8 xA} 3/Y1<*T71މf 97g19w(g1?\֟`g Yg 9LsQ.(ulgYˊx/V|V|&٭ V|N+>cv+>7+>S} ~!>_Sϔ+>cB|&LOr`B,&+jwRP{xᇣI^U E'b\o|s C:].cDܛX=oNܙ endstream endobj 269 0 obj << /Length 845 /Filter /FlateDecode >> stream xuUMo@+H.ȲrhQիԒ ؇6jo73o{q3mfѭVOn/Cf)rtskzf꺼Ɠpi?p>fv8coJ?< a9(})suזÌ\$qATh L}s6G 7o],jotuþ{UןtptZ|MÏѩNN6[7ݫ ZԲWO&suB`ilB =@ )U 9yI(ѥ S*043``MSiv|kiCXc, pDˆzA:x0)ljsn l9u}SrI4"nXCA8%&ٵ6AI cMϱXS_S/w"': fyRy(#c^g!ch"ƨ-kC^d cRx~h K^| МQV14Nd5cY9Y?C9돡'g ?%>O:ShYggΈrYgDg>[bghX|&^V|{ig33qgng3tZ[Yog,g-g B|B|\3gg3?f)O5[TT+&GUP#a#7a/c?w:'dEgtdbP2ڂ endstream endobj 270 0 obj << /Length 665 /Filter /FlateDecode >> stream xmTMk0WhFG*! miʲV6vFrBbތf}\xM}qV'7t羋<]swrո:܉Ǿ-w$mm o\1A+Z7!؛~B?Fߗb n;nX7U{[LG5 @@N,Gw͡ 1}ԿhWWq}QEݹ-r*FNL7uY~~l+l+7tE )b,#TTHy9)9>*QKr7P:MȡQ^s$LD6aȑ*s.$S56`>ƄmÁ#TL 5kd}WXssc*{Rh/#? bE$L|ږ8^y>eSQc̯bV̯cNa'O;Q~{5pX2]$\^snaK??q FqMyc0=) &l(mi,s|d &\cV ]͸&ӈ9w{d :mB Ƈ\..Ա g~n59&\pe[N 8\4<[n6|kq_]~&)a endstream endobj 271 0 obj << /Length 666 /Filter /FlateDecode >> stream xmTn0C6U@"mTt@Կyct+%13nU틛ķR<=]tuUӽsƷÝxrN:ۦ>P)Εrus ~v?'Ǿ5~D !8뇺mRn=MuSxHiQ)YiH޽'w66Z,^DӇr}ݼ-w{s d\{?:1 kmn_~߼h!R,6ew*ؔb%k e+Kӄ$a"1x*s.$S56P>Ƅm„A Fs 5577vر׾+uaя6R:!,əCxg+ѧy*JcL|*m:fvui0ܓ`†›F2g'I`2e?fyx0j5F̹k#n'im7>T20P-9[A˲,p~nE8|p9j7o-kݸJv?ƏVR`c endstream endobj 272 0 obj << /Length 665 /Filter /FlateDecode >> stream xmTMk0WhFG*! 
miʲVZCcYy#9톅ļ{3񼛤es^7箰 nn8l=hzI-._뫦~^JIu]f `tTsr*o8{&X,dew+mWos~X(2X.EiTz}ܟ^7uY~lVNMєo R.bY.֔O9؄b%9vsr(MXa#D$ar bqMDs!FKRLDP0.BEHQ#͸FuŎ577v}QȕanOd$g;A,əCR;6+ѧx**Ę$90q'oקfQ%n;5pX2]$^q~+s"F!CyhIh~CMnOf1$#h)r~hмj5F̹k#ni<7>Tsa>s\8s&wsaY1:+r1\ut[ZM,k4w6_%aJ endstream endobj 273 0 obj << /Length 666 /Filter /FlateDecode >> stream xmTn0CB*D rضj^pZ;olvR3ތm~<&i$͹+$o)'[֖wkuͷu5P.Υ/U} ~'C $D !8Rˬ9zLU]vރ8QBQVW,N4$  1}н`Еq}Eܶo KQ#U~'+xZZ9?ESھ/6XHfغ)Pb$b ab4aeILD!ID bq&"Q\H&(61*"TDDi5RH׮+&ElƮ}G= WA?Пe aLL\ږq8^9>eSQ!$"VFN??5J195wkdY]$^q~+s~"F!CyhIx~CMnOf1$#x)r<qh|utgmZdGGMYcu endstream endobj 274 0 obj << /Length 665 /Filter /FlateDecode >> stream xmTMk0WhFG*! miʲVZCcYy#9햅ļ{3񸟤e&Oo]&C]]Mq>zwt߉Ǯ)n.pCx?nڽVgx=itO"i [\l\WM}'ԭ̚t4pXeȉeU oq yM\-CnCW_Ey}wP dZz891euB)] W-\v\]~[S!8&+Zce"'2Ɍ5I@|"B2AQhSlLء28a}ɑFq5ҍnnbfǮCG= Wܢe$g;A,:sx l=NOTƘ$0_س/vЧQ%~Zx pX2]$^qnaK??q FqMyc0=) &l(mi,3|d &\c ]͹&ӈ9w{d-tx\ \cΜekqLJs?<@>qhx .׷8wl~1V<*m"mmDa endstream endobj 275 0 obj << /Length 701 /Filter /FlateDecode >> stream xuTn0+Cl m8(zu$:`K$Q4pufn}f)ɻ|tùA<]u6m;O޴\+$ޚv}qff0(h$iƃ}E>.>ttPRJ(:X/rߴu&^!3PZM5^F$o߇7 V+1ؿһ`׮o7qIݞO!Znz/~N̿Z䄦buUWᴫ\k\r-Ve\[3sB A `ehHiJ }*>`!â)dHUA^UwEZK5h"uS/g bρ#)p̹18yi r<ܗ8-pN(T1 PUF9a*~0'ujE5z4jgǺ4QSkj sE8-_ZQY\2=<"NNL>9fѓ@D9{&&gnI0䑱Ӊ3 hxRE"7Yp/hJXCKH eR3ə$Sޛ{cYrwDz~ !G9Kûq_nY3/Bu{XcD~ӺԝE?zO,Fez~ endstream endobj 168 0 obj << /Type /ObjStm /N 100 /First 906 /Length 3464 /Filter /FlateDecode >> stream x\YsF~篘M-o$9eG>c( SG~~=HGf=3}|62#iyf` )Ǵea^t,h\: [)&eDJyZPCa o]gJ*4cJ $SF ) tѤFyKTPG\ dZPNBN[ .XFh=3b*!!hv̢f" *'cK rWDhZf Ym52M4D(^.RAN 3'T9L@6.Jڕ(8bTQ`a/h9 w`īH4uh{A*`n0HMZRF(aLb>b$C%@8H"H?L P)IBOB*[ʢ IWq ,ERC# i%)qC2JР6x$>P 4*!*Y5 %Ҙ]@Et* Q[r!m*g|[|g0<ˇS|;=lzy~joCC,‰ "uxX<oe[WoO^fkc|&ņ\o&1o~}ˢ0 ?NDUH- MڽA<`|τמ)o^b w+ "QSHU%:e@Q"~(PB,b#)6F\qq)| \9o;C%< ݁Q'4z4xdm $-QcZ ,/-\E;EyUxhbw⤾Ʒwj:ήʫB\\n{UPA}TA> B[PKhq/ 3hgK;p@̬+Eƫ~ut 0Šbf˻^Lc#DH ؗ^LVcec;sAiCJRFH(HcYI-$^je->i$đJMhqsF]uMы=U9C*;j} Vu+H1(禷i@SԸ[x5 :Q(!L z*_3NO \` Su<q8*딹#Vx+~F?ީphz}Ms;l4C-ǦGǔcLKMT :B(jO 
BiocParallel/inst/doc/Introduction_To_BiocParallel.R

## -----------------------------------------------------------------------------
library(BiocParallel)

## ----quick_start FUN----------------------------------------------------------
FUN <- function(x) { round(sqrt(x), 4) }

## ----quick_start registry-----------------------------------------------------
registered()

## ----configure_registry, eval=FALSE-------------------------------------------
# options(MulticoreParam=MulticoreParam(workers=4))

## ----quickstart_bplapply_default, eval=FALSE----------------------------------
# bplapply(1:4, FUN)

## ----quickstart_snow----------------------------------------------------------
param <- SnowParam(workers = 2, type = "SOCK")
bplapply(1:4, FUN, BPPARAM = param)

## ----BiocParallelParam_SerialParam--------------------------------------------
serialParam <- SerialParam()
serialParam

## ----BiocParallelParam_MulticoreParam-----------------------------------------
multicoreParam <- MulticoreParam(workers = 8)
multicoreParam

## ----register_registered------------------------------------------------------
registered()

## ----register_bpparam---------------------------------------------------------
bpparam()

## ----register_BatchtoolsParam-------------------------------------------------
default <- registered()
register(BatchtoolsParam(workers = 10), default = TRUE)

## ----register_BatchtoolsParam2------------------------------------------------
names(registered())
bpparam()

## ----register_restore---------------------------------------------------------
for (param in rev(default)) register(param)

## ----error-vignette, eval=FALSE-----------------------------------------------
# browseVignettes("BiocParallel")

## ----use_cases_data------------------------------------------------------------
library(RNAseqData.HNRNPC.bam.chr14)
fls <- RNAseqData.HNRNPC.bam.chr14_BAMFILES

## ----forking_gr, message=FALSE-------------------------------------------------
library(GenomicAlignments) ## for GenomicRanges and readGAlignments()
gr <- GRanges("chr14", IRanges((1000:3999)*5000, width=1000))

## ----forking_param--------------------------------------------------------------
param <- ScanBamParam(which=range(gr))

## ----forking_FUN----------------------------------------------------------------
FUN <- function(fl, param) {
    gal <- readGAlignments(fl, param = param)
    sum(countOverlaps(gr, gal))
}

## ----forking_default_multicore--------------------------------------------------
MulticoreParam()
## ----db_problems, eval = FALSE------------------------------------------------
# library(org.Hs.eg.db)
# FUN <- function(x, ...) {
#     ...
#     mapIds(org.Hs.eg.db, ...)
#     ...
# }
# bplapply(X, FUN, ..., BPPARAM = MulticoreParam())

## ----cluster_FUN---------------------------------------------------------------
FUN <- function(fl, param, gr) {
    suppressPackageStartupMessages({
        library(GenomicAlignments)
    })
    gal <- readGAlignments(fl, param = param)
    sum(countOverlaps(gr, gal))
}

## ----cluster_snow_param---------------------------------------------------------
snow <- SnowParam(workers = 2, type = "SOCK")

## ----cluster_bplapply-----------------------------------------------------------
bplapply(fls[1:3], FUN, BPPARAM = snow, param = param, gr = gr)

## ----db_solution_2, eval = FALSE------------------------------------------------
# register(SnowParam())      # default evaluation
# bpstart()                  # start the cluster
# ...
# bplapply(X, FUN1, ...)
# ...
# bplapply(X, FUN2, ...)     # re-use workers
# ...
# bpstop()

## ----cluster-MPI-work, eval=FALSE------------------------------------------------
# library(BiocParallel)
# library(Rmpi)
# FUN <- function(i) system("hostname", intern=TRUE)

## ----cluster-MPI, eval=FALSE------------------------------------------------------
# param <- SnowParam(mpi.universe.size() - 1, "MPI")
# register(param)

## ----cluster-MPI-do, eval=FALSE----------------------------------------------------
# xx <- bplapply(1:100, FUN)
# table(unlist(xx))
# mpi.quit()

## ----cluster-MPI-bpstart, eval=FALSE------------------------------------------------
# param <- bpstart(SnowParam(mpi.universe.size() - 1, "MPI"))
# register(param)
# xx <- bplapply(1:100, FUN)
# bpstop(param)
# mpi.quit()

## ----slurm---------------------------------------------------------------------------
tmpl <- system.file(package="batchtools", "templates", "slurm-simple.tmpl")
noquote(readLines(tmpl))

## ----cluster-batchtools, eval=FALSE----------------------------------------------------
# ## define work to be done
# FUN <- function(i) system("hostname", intern=TRUE)
# library(BiocParallel)
#
# ## register SLURM cluster instructions from the template file
# param <- BatchtoolsParam(workers=5, cluster="slurm", template=tmpl)
# register(param)
#
# ## do work
# xx <- bplapply(1:100, FUN)
# table(unlist(xx))

## ----devel-bplapply---------------------------------------------------------------------
system.time(x <- bplapply(1:3, function(i) { Sys.sleep(i); i }))
unlist(x)

## ----sessionInfo--------------------------------------------------------------------------
sessionInfo()

BiocParallel/inst/doc/Introduction_To_BiocParallel.Rmd

---
title: "1. Introduction to *BiocParallel*"
author:
- name: "Valerie Obenchain"
- name: "Vincent Carey"
- name: "Michael Lawrence"
- name: "Phylis Atieno"
  affiliation: "Vignette translation from Sweave to Rmarkdown / HTML"
- name: "Martin Morgan"
  email: "Martin.Morgan@RoswellPark.org"
date: "Edited: October, 2022; Compiled: `r format(Sys.time(), '%B %d, %Y')`"
package: BiocParallel
vignette: >
  %\VignetteIndexEntry{1. Introduction to BiocParallel}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
output:
  BiocStyle::html_document
---

# Introduction

Numerous approaches are available for parallel computing in R. The CRAN Task View for high performance and parallel computing provides useful high-level summaries and [package categorization](https://cran.r-project.org/web/views/HighPerformanceComputing.html). Most Task View packages cite or identify one or more of [*snow*](https://cran.r-project.org/package=snow), [*Rmpi*](https://cran.r-project.org/package=Rmpi), [*multicore*](https://cran.r-project.org/package=multicore) or [*foreach*](https://cran.r-project.org/package=foreach) as relevant parallelization infrastructure.
Direct support in *R* for *parallel* computing started with release 2.14.0, with the inclusion of the [parallel](https://cran.r-project.org/package=parallel) package, which contains modified versions of [*multicore*](https://cran.r-project.org/package=multicore) and [*snow*](https://cran.r-project.org/package=snow).

A basic objective of [*BiocParallel*][] is to reduce the complexity faced when developing and using software that performs parallel computations. With the introduction of the `BiocParallelParam` object, [*BiocParallel*][] aims to provide a unified interface to existing parallel infrastructure where code can be easily executed in different environments. The `BiocParallelParam` specifies the environment of choice as well as computing resources and is invoked by 'registration' or passed as an argument to the [*BiocParallel*][] functions.

[*BiocParallel*][] offers the following conveniences over the 'roll your own' approach to parallel programming.

- unified interface: `BiocParallelParam` instances define the method of parallel evaluation (multi-core, snow cluster, etc.) and computing resources (number of workers, error handling, cleanup, etc.).

- parallel iteration over lists, files and vectorized operations: `bplapply`, `bpmapply` and `bpvec` provide parallel list iteration and vectorized operations. `bpiterate` iterates through files, distributing chunks to parallel workers.

- cluster scheduling: When the parallel environment is managed by a cluster scheduler through [*batchtools*](https://cran.r-project.org/package=batchtools), job management and result retrieval are considerably simplified.

- support of `foreach`: The [*foreach*](https://cran.r-project.org/package=foreach) and [*iterators*](https://cran.r-project.org/package=iterators) packages are fully supported. Registration of the parallel back-end uses `BiocParallelParam` instances.
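The unified interface can be sketched in a few lines: the same call is evaluated under different back-ends simply by swapping the `BiocParallelParam` instance passed as `BPPARAM`. This is a minimal illustration (the worker function `square` is invented for the example):

```r
library(BiocParallel)

square <- function(x) x^2  # illustrative worker function

## serial evaluation -- convenient for debugging
res_serial <- bplapply(1:4, square, BPPARAM = SerialParam())

## the identical call, evaluated on a 2-worker socket cluster
res_snow <- bplapply(1:4, square, BPPARAM = SnowParam(workers = 2))

identical(res_serial, res_snow)  ## only the back-end changed
```

Because only the `BPPARAM` argument differs, code written this way can move from a laptop to a cluster without a re-write.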
# Quick start

The [*BiocParallel*][] package is available at bioconductor.org and can be downloaded via `BiocManager`:

```
if (!requireNamespace("BiocManager", quietly = TRUE))
    install.packages("BiocManager")
BiocManager::install("BiocParallel")
```

Load [*BiocParallel*][].

```{r}
library(BiocParallel)
```

The test function returns the square root of `x`, rounded to 4 digits.

```{r quick_start FUN}
FUN <- function(x) { round(sqrt(x), 4) }
```

Functions in [*BiocParallel*][] use the registered back-ends for parallel evaluation. The default is the top entry of the registry list.

```{r quick_start registry}
registered()
```

Configure your *R* session to always use a particular back-end by setting options named after the back-ends in an `.Rprofile` file, e.g.,

```{r configure_registry, eval=FALSE}
options(MulticoreParam=MulticoreParam(workers=4))
```

When a [*BiocParallel*][] function is invoked with no `BPPARAM` argument, the default back-end is used.

```{r quickstart_bplapply_default, eval=FALSE}
bplapply(1:4, FUN)
```

Environment-specific back-ends can be defined for any of the registry entries. This example uses a 2-worker SOCK cluster.

```{r quickstart_snow}
param <- SnowParam(workers = 2, type = "SOCK")
bplapply(1:4, FUN, BPPARAM = param)
```

# The *BiocParallel* Interface

## Classes

### `BiocParallelParam`

`BiocParallelParam` instances configure different parallel evaluation environments. Creating or `register()`ing a '`Param`' allows the same code to be used in different parallel environments without a code re-write. The Params listed are supported on all of Unix, Mac and Windows, except `MulticoreParam`, which is Unix and Mac only.

- `SerialParam`: Supported on all platforms.

  Evaluate [*BiocParallel*][]-enabled code with parallel evaluation disabled. This approach is useful when writing new scripts and trying to debug code.

- `MulticoreParam`: Supported on Unix and Mac. On Windows, `MulticoreParam` dispatches to `SerialParam`.
  Evaluate [*BiocParallel*][]-enabled code using multiple cores on a single computer. When available, this is the most efficient and least troublesome way to parallelize code. Windows does not support multi-core evaluation (the `MulticoreParam` object can be used, but evaluation is serial). On other operating systems, the default number of workers equals the value of the global option `mc.cores` (e.g., `getOption("mc.cores")`) or, if that is not set, the number of cores returned by `parallel::detectCores() - 2`; when the number of cores cannot be determined, the default is 1.

  `MulticoreParam` uses 'forked' processes with 'copy-on-change' semantics -- memory is only copied when it is changed. This makes it very efficient to invoke compared to other back-ends.

  There are several important caveats to using `MulticoreParam`. Forked processes are not available on Windows. Some environments, e.g., *RStudio*, do not work well with forked processes, assuming that code evaluation is single-threaded. Some external resources, e.g., access to files or data bases, maintain state in a way that assumes the resource is accessed only by a single thread. A subtle cost is that *R*'s garbage collector runs periodically and 'marks' memory as in use. This effectively triggers a copy of the marked memory. *R*'s generational garbage collector is triggered at difficult-to-predict times; the effect in a long-running forked process is that the memory is eventually copied. See [this post](https://support.bioconductor.org/p/70196/#70509) for additional details.

  `MulticoreParam` is based on facilities originally implemented in the [*multicore*](https://cran.r-project.org/package=multicore) package and subsequently the [*parallel*](https://cran.r-project.org/package=parallel) package in base.

- `SnowParam`: Supported on all platforms.

  Evaluate [*BiocParallel*][]-enabled code across several distinct instances, on one or several computers.
    This is a straightforward approach for executing parallel code on one or several computers, and is based on facilities originally implemented in the [*snow*](https://cran.r-project.org/package=snow) package. Different types of [*snow*](https://cran.r-project.org/package=snow) 'back-ends' are supported, including socket and MPI clusters.

- `BatchtoolsParam`:

    Applicable to clusters with formal schedulers.

    Evaluate [*BiocParallel*][]-enabled code by submitting to a cluster scheduler like SGE.

- `DoparParam`:

    Supported on all platforms.

    Register a parallel back-end supported by the [*foreach*](https://cran.r-project.org/package=foreach) package for use with [*BiocParallel*][].

The simplest illustration of creating a `BiocParallelParam` is

```{r BiocParallelParam_SerialParam}
serialParam <- SerialParam()
serialParam
```

Most parameters have additional arguments influencing behavior, e.g., specifying the number of 'cores' to use when creating a `MulticoreParam` instance

```{r BiocParallelParam_MulticoreParam}
multicoreParam <- MulticoreParam(workers = 8)
multicoreParam
```

Arguments are described on the corresponding help page, e.g., `?MulticoreParam`.

### `register()`ing `BiocParallelParam` instances

The list of registered `BiocParallelParam` instances represents the user's preferences for different types of back-ends. Individual algorithms may specify a preferred back-end, and different back-ends may be chosen when parallel evaluation is nested.

The registry behaves like a 'stack' in that the last entry registered is added to the top of the list and becomes the "next used" (i.e., the default).

`registered` invoked with no arguments lists all back-ends.

```{r register_registered}
registered()
```

`bpparam` returns the default from the top of the list.

```{r register_bpparam}
bpparam()
```

Add a specialized instance with `register`. When `default` is TRUE, the new instance becomes the default.
```{r register_BatchtoolsParam}
default <- registered()
register(BatchtoolsParam(workers = 10), default = TRUE)
```

`BatchtoolsParam` has been moved to the top of the list and is now the default.

```{r register_BatchtoolsParam2}
names(registered())
bpparam()
```

Restore the original registry

```{r register_restore}
for (param in rev(default))
    register(param)
```

## Functions

### Parallel looping, vectorized and aggregate operations

These are used in common functions, implemented as much as possible for all back-ends. The functions (see the help pages, e.g., `?bplapply`, for a full definition) include

`bplapply(X, FUN, ...)`: Apply in parallel a function `FUN` to each element of `X`. `bplapply` invokes `FUN` `length(X)` times, each time with a single element of `X`.

`bpmapply(FUN, ...)`: Apply in parallel a function to the first, second, etc., elements of each argument in `...`.

`bpiterate(ITER, FUN, ...)`: Apply in parallel a function to the output of function `ITER`. Data chunks are returned by `ITER` and distributed to parallel workers along with `FUN`. Intended for iteration through an undefined number of data chunks (e.g., records in a file).

`bpvec(X, FUN, ...)`: Apply in parallel a function `FUN` to subsets of `X`. `bpvec` invokes `FUN` as many times as there are cores or cluster nodes, with `FUN` receiving a subset (typically more than 1 element, in contrast to `bplapply`) of `X`.

`bpaggregate(x, data, FUN, ...)`: Use the formula in `x` to aggregate `data` using `FUN`.

### Parallel evaluation environment

These functions query and control the state of the parallel evaluation environment.

`bpisup(x)`: Query a `BiocParallelParam` back-end `x` for its status.

`bpworkers`; `bpnworkers`: Query a `BiocParallelParam` back-end for the number of workers available for parallel evaluation.

`bptasks`: Divides a job (e.g., a single call to a `*lapply` function) into tasks.
Applicable to `MulticoreParam` only; `DoparParam` and `BatchtoolsParam` have their own approaches to dividing a job among workers.

`bpstart(x)`: Start a parallel back-end specified by `BiocParallelParam` `x`, if possible.

`bpstop(x)`: Stop a parallel back-end specified by `BiocParallelParam` `x`.

### Error handling and logging

Logging and advanced error recovery are available in `BiocParallel` 1.1.25 and later. For more details see the vignette titled "Error Handling and Logging":

```{r error-vignette, eval=FALSE}
browseVignettes("BiocParallel")
```

### Locks and counters

Inter-process (i.e., single machine) locks and counters are supported using `ipclock()`, `ipcyield()`, and friends. Use these to synchronize computation, e.g., allowing only a single process to write to a file at a time.

# Use cases

Sample data are BAM files from a transcription profiling experiment available in the *RNAseqData.HNRNPC.bam.chr14* package.

```{r use_cases_data}
library(RNAseqData.HNRNPC.bam.chr14)
fls <- RNAseqData.HNRNPC.bam.chr14_BAMFILES
```

## Single machine

Common approaches on a single machine are to use multiple cores in forked processes, or to use clusters of independent processes. For purely *R*-based computations on non-Windows computers, there are substantial benefits, such as shared memory, to be had using forked processes. However, this approach is not portable across platforms, and fails when code uses functionality, e.g., file or data base access, that assumes only a single thread is accessing the resource. While use of forked processes with `MulticoreParam` is an attractive solution for scripts using pure *R* functionality, robust and complex code often requires use of independent processes and `SnowParam`.

### Forked processes with `MulticoreParam`

This example counts overlaps between BAM files and a defined set of ranges. First create a GRanges with regions of interest (in practice this could be large).
```{r forking_gr, message=FALSE}
library(GenomicAlignments) ## for GenomicRanges and readGAlignments()

gr <- GRanges("chr14", IRanges((1000:3999)*5000, width=1000))
```

A `ScanBamParam` defines regions to extract from the files.

```{r forking_param}
param <- ScanBamParam(which=range(gr))
```

`FUN` counts overlaps between the ranges in 'gr' and the files.

```{r forking_FUN}
FUN <- function(fl, param) {
    gal <- readGAlignments(fl, param = param)
    sum(countOverlaps(gr, gal))
}
```

All parameters necessary for running a job in a multi-core environment are specified in the `MulticoreParam` instance.

```{r forking_default_multicore}
MulticoreParam()
```

The [*BiocParallel*][] functions, such as `bplapply`, use information in the `MulticoreParam` to set up the appropriate back-end and pass relevant arguments to low-level functions.

```{verbatim}
> bplapply(fls[1:3], FUN, BPPARAM = MulticoreParam(), param = param)
$ERR127306
[1] 1185

$ERR127307
[1] 1123

$ERR127308
[1] 1241
```

Shared memory environments eliminate the need to pass large data between workers or to load common packages. Note that in this code the GRanges data was not passed to all workers in `bplapply`, and `FUN` did not need to load [*GenomicAlignments*](http://bioconductor.org/packages/GenomicAlignments) for access to the `readGAlignments` function.

Problems with forked processes occur when code implementing functionality used by the workers is not written in anticipation of use by forked processes. One example is the database connection underlying Bioconductor's `org.*` packages. This pseudo-code

```{r db_problems, eval = FALSE}
library(org.Hs.eg.db)
FUN <- function(x, ...) {
    ...
    mapIds(org.Hs.eg.db, ...)
    ...
}
bplapply(X, FUN, ..., BPPARAM = MulticoreParam())
```

is likely to fail, because `library(org.Hs.eg.db)` opens a database connection that is accessed by multiple processes. A solution is to ensure that the database is opened independently in each process

```
FUN <- function(x, ...)
{
    library(org.Hs.eg.db)
    ...
    mapIds(org.Hs.eg.db, ...)
    ...
}
bplapply(X, FUN, ..., BPPARAM = MulticoreParam())
```

### Clusters of independent processes with `SnowParam`

Both Windows and non-Windows machines can use the cluster approach to spawn processes. [*BiocParallel*][] back-end choices for clusters on a single machine are *SnowParam* for configuring a *snow* cluster, or *DoparParam* for use with the *foreach* package.

To re-run the counting example, `FUN` needs to be modified such that 'gr' is passed as a formal argument and required libraries are loaded on each worker. (In general, this is not necessary for functions defined in a package name space, see [Section 6](#sec:developers).)

```{r cluster_FUN}
FUN <- function(fl, param, gr) {
    suppressPackageStartupMessages({
        library(GenomicAlignments)
    })
    gal <- readGAlignments(fl, param = param)
    sum(countOverlaps(gr, gal))
}
```

Define a 2-worker SOCK *snow* cluster.

```{r cluster_snow_param}
snow <- SnowParam(workers = 2, type = "SOCK")
```

A call to `bplapply` with the *SnowParam* creates the cluster and distributes the work.

```{r cluster_bplapply}
bplapply(fls[1:3], FUN, BPPARAM = snow, param = param, gr = gr)
```

The `FUN` written for the cluster adds some overhead, due to the passing of the GRanges and the loading of [*GenomicAlignments*](http://bioconductor.org/packages/GenomicAlignments) on each worker. This approach, however, has the advantage that it works on most platforms and does not require a coding change when switching between Windows and non-Windows machines.

If several `bplapply()` statements are likely to require the same resource, it often makes sense to create a cluster once using `bpstart()`. The workers are re-used by each call to `bplapply()`, so they do not have to re-load packages, etc.

```{r db_solution_2, eval = FALSE}
register(SnowParam())  # default evaluation
bpstart()              # start the cluster
...
bplapply(X, FUN1, ...)
...
bplapply(X, FUN2, ...) # re-use workers
...
bpstop()
```

## *Ad hoc* cluster of multiple machines

We use the term *ad hoc* cluster to define a group of machines that can communicate with each other and to which the user has password-less log-in access. This example uses a group of compute machines ("the rhinos") on the FHCRC network.

### *Ad hoc* Sockets

On Linux and Mac OS X, a socket cluster is created across machines by supplying machine names as the `workers` argument to a *BiocParallelParam* instance, instead of a number. Each name represents an *R* process; repeated names indicate multiple workers on the same machine.

Create a *SnowParam* with 2 CPUs from 'rhino01' and 1 from 'rhino02'.

```
hosts <- c("rhino01", "rhino01", "rhino02")
param <- SnowParam(workers = hosts, type = "SOCK")
```

Execute `FUN` 4 times across the workers.

```{verbatim}
> FUN <- function(i) system("hostname", intern=TRUE)
> bplapply(1:4, FUN, BPPARAM = param)
[[1]]
[1] "rhino01"

[[2]]
[1] "rhino01"

[[3]]
[1] "rhino02"

[[4]]
[1] "rhino01"
```

When creating a cluster across Windows machines, the worker names must be IP addresses (e.g., "140.107.218.57") instead of machine names.

### MPI

An MPI cluster across machines is created with *mpirun* or *mpiexec* from the command line or a script. A list of machine names provided as the `-hostfile` argument defines the MPI universe. This hostfile requests 2 processors on each of 3 different machines.

```{verbatim}
rhino01 slots=2
rhino02 slots=2
rhino03 slots=2
```

From the command line, start a single interactive *R* process on the current machine.

```{verbatim}
mpiexec --np 1 --hostfile hostfile R --vanilla
```

Load [*BiocParallel*][] and create an MPI *snow* cluster. The number of `workers` should match the number of slots requested in the hostfile. Using a smaller number of workers uses a subset of the slots.

```{verbatim}
> library(BiocParallel)
> param <- SnowParam(workers = 6, type = "MPI")
```

Execute `FUN` 6 times across the workers.
```{verbatim}
> FUN <- function(i) system("hostname", intern=TRUE)
> bplapply(1:6, FUN, BPPARAM = param)
[[1]]
[1] "rhino01"

[[2]]
[1] "rhino02"

[[3]]
[1] "rhino02"

[[4]]
[1] "rhino03"

[[5]]
[1] "rhino03"

[[6]]
[1] "rhino01"
```

Batch jobs can be launched with `mpiexec` and `R CMD BATCH`. Code to be executed is in 'Rcode.R'.

```{verbatim}
mpiexec --hostfile hostfile R CMD BATCH Rcode.R
```

## Clusters with schedulers

Computer clusters are far from standardized, so the following may require significant adaptation; it is written from experience here at FHCRC, where we have a large cluster managed via SLURM. Nodes on the cluster have shared disks and common system images, minimizing complexity about making data resources available to individual nodes. There are two simple models for use of the cluster, cluster-centric and *R*-centric.

### Cluster-centric

The idea is to use cluster management software to allocate resources, and then arrange for an *R* script to be evaluated in the context of the allocated resources.

NOTE: Depending on your cluster configuration it may be necessary to add a line to the template file instructing workers to use the version of *R* on the master / head node. Otherwise the default *R* on the worker nodes will be used.

For SLURM, we might request space for 4 tasks (with `salloc` or `sbatch`), arrange to start the MPI environment (with `orterun`) and, on a single node in that universe, run an *R* script `BiocParallel-MPI.R`. The command is

```{verbatim}
$ salloc -N 4 orterun -n 1 R -f BiocParallel-MPI.R
```

The *R* script might do the following, using MPI for parallel evaluation.
Start by loading necessary packages and defining `FUN`, the work to be done

```{r cluster-MPI-work, eval=FALSE}
library(BiocParallel)
library(Rmpi)
FUN <- function(i) system("hostname", intern=TRUE)
```

Create a *SnowParam* instance with the number of nodes equal to the size of the MPI universe minus 1 (let one node dispatch jobs to workers), and register this instance as the default

```{r cluster-MPI, eval=FALSE}
param <- SnowParam(mpi.universe.size() - 1, "MPI")
register(param)
```

Evaluate the work in parallel, process the results, clean up, and quit

```{r cluster-MPI-do, eval=FALSE}
xx <- bplapply(1:100, FUN)
table(unlist(xx))
mpi.quit()
```

The entire session is as follows:

```{verbatim}
$ salloc -N 4 orterun -n 1 R --vanilla -f BiocParallel-MPI.R
salloc: Job is in held state, pending scheduler release
salloc: Pending job allocation 6762292
salloc: job 6762292 queued and waiting for resources
salloc: job 6762292 has been allocated resources
salloc: Granted job allocation 6762292
## ...
> FUN <- function(i) system("hostname", intern=TRUE)
>
> library(BiocParallel)
> library(Rmpi)
> param <- SnowParam(mpi.universe.size() - 1, "MPI")
> register(param)
> xx <- bplapply(1:100, FUN)
> table(unlist(xx))

gizmof13 gizmof71 gizmof86 gizmof88
      25       25       25       25
>
> mpi.quit()
salloc: Relinquishing job allocation 6762292
salloc: Job allocation 6762292 has been revoked.
```

One advantage of this approach is that the responsibility for managing the cluster lies firmly with the cluster management software -- if one wants more nodes, or needs special resources, then adjust the parameters to `salloc` (or `sbatch`).
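The non-interactive `sbatch` route mentioned above can be sketched as a submission script wrapping the same command; this sketch is illustrative only -- the `#SBATCH` resource values, and any site-specific module or partition lines, are assumptions to adapt to your cluster.

```{verbatim}
#!/bin/bash
#SBATCH --ntasks=4        ## size of the MPI universe, as with 'salloc -N 4'
#SBATCH --time=00:30:00   ## illustrative wall-clock limit

## start a single R process; BiocParallel / Rmpi spawn the remaining workers
orterun -n 1 R --vanilla -f BiocParallel-MPI.R
```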
Notice that workers are spawned within the `bplapply` function; it might often make sense to manage workers more explicitly with `bpstart` and `bpstop`, e.g.,

```{r cluster-MPI-bpstart, eval=FALSE}
param <- bpstart(SnowParam(mpi.universe.size() - 1, "MPI"))
register(param)
xx <- bplapply(1:100, FUN)
bpstop(param)
mpi.quit()
```

### R-centric

A more *R*-centric approach might start an *R* script on the head node, and use *batchtools* to submit jobs from within the *R* session. One way of doing this is to create a file containing a template for the job submission step, e.g., for SLURM; a starting point might be found at

```{r slurm}
tmpl <- system.file(package="batchtools", "templates", "slurm-simple.tmpl")
noquote(readLines(tmpl))
```

The *R* script, run interactively or from the command line, might then look like

```{r cluster-batchtools, eval=FALSE}
## define work to be done
FUN <- function(i) system("hostname", intern=TRUE)
library(BiocParallel)

## register SLURM cluster instructions from the template file
param <- BatchtoolsParam(workers=5, cluster="slurm", template=tmpl)
register(param)

## do work
xx <- bplapply(1:100, FUN)
table(unlist(xx))
```

The code runs on the head node until the call to `bplapply`, where the script interacts with the SLURM scheduler to request a SLURM allocation, run jobs, and retrieve results. The argument `workers=5` to `BatchtoolsParam` specifies the number of workers to request from the scheduler; `bplapply` divides the 100 jobs among the 5 workers. If `BatchtoolsParam` had been created without specifying any workers, then the 100 jobs implied by the argument to `bplapply` would be associated with 100 tasks submitted to the scheduler.
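The last point -- one scheduler task per job when `workers` is unspecified -- can be sketched as follows (not evaluated; it assumes a SLURM cluster and the `tmpl` template file from above):

```{r cluster-batchtools-tasks, eval=FALSE}
## with no 'workers' argument, each of the 100 elements becomes
## its own task submitted to the scheduler
param <- BatchtoolsParam(cluster="slurm", template=tmpl)
xx <- bplapply(1:100, FUN, BPPARAM = param)
```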
Because cluster tasks are running in independent *R* instances, and often on physically separate machines, a convenient 'best practice' is to write `FUN` in a 'functional programming' manner, such that all data required for the function is passed in as arguments or (for large data) loaded implicitly or explicitly (e.g., via an *R* library) from disk.

# Analyzing genomic data in *Bioconductor*

General strategies exist for handling large genomic data that are well suited to *R* programs. A manuscript titled *Scalable Genomics with R and Bioconductor*, by Michael Lawrence and Martin Morgan, reviews several of these approaches and demonstrates implementation with *Bioconductor* packages. Problem areas include scalable processing, summarization and visualization. The techniques presented include restricting queries, compressing data, iterating, and parallel computing. Ideas are presented in an approachable fashion within a framework of common use cases. This is a beneficial read for anyone tackling genomics problems in *R*.

# For developers {#sec:developers}

Developers wishing to use [*BiocParallel*][] in their own packages should include [*BiocParallel*][] in the `DESCRIPTION` file

```{verbatim}
Imports: BiocParallel
```

and import the functions they wish to use in the `NAMESPACE` file, e.g.,

```{verbatim}
importFrom(BiocParallel, bplapply)
```

Then invoke the desired function in the code, e.g.,

```{r devel-bplapply}
system.time(x <- bplapply(1:3, function(i) { Sys.sleep(i); i }))
unlist(x)
```

This will use the back-end returned by `bpparam()`, by default a `MulticoreParam()` on Linux / macOS, a `SnowParam()` on Windows, or the user's preferred back-end if they have used `register()`. The `MulticoreParam` back-end does not require any special configuration or set-up and is therefore the safest option for developers. Unfortunately, `MulticoreParam` provides only serial evaluation on Windows.
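As a minimal sketch of this approach (the function name `sqrtInParallel` is hypothetical, chosen for illustration), a package might wrap `bplapply()` so that the back-end defaults to `bpparam()` but can be overridden by the caller:

```{r devel-bpparam-sketch, eval=FALSE}
## hypothetical exported package function; callers may supply any back-end
sqrtInParallel <- function(x, BPPARAM = bpparam())
    bplapply(x, sqrt, BPPARAM = BPPARAM)

## the user can force serial evaluation, e.g., for debugging
sqrtInParallel(1:4, BPPARAM = SerialParam())
```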
Developers should document that their function uses [*BiocParallel*][] functions on the main help page, and should perhaps include in their function signature an argument `BPPARAM=bpparam()`. Developers should NOT use `register()` in package code -- this sets a preference that influences use of `bplapply()` and friends in all packages, not just their own.

Developers wishing to invoke back-ends other than `MulticoreParam`, or to write code that works across Windows, macOS and Linux, no longer need to take special care to ensure that required packages, data, and functions are available and loaded on the remote nodes. By default, [*BiocParallel*][] will export global variables to the workers (the `exportglobals` option is `TRUE` by default). Nonetheless, a good practice during development is to use independent processes (via `SnowParam`) rather than relying on forked processes (via `MulticoreParam`). For instance, independent-process clusters include the costs of setting up the computational environment (loading required packages, for instance) that may discourage use of parallelization when parallelization provides only marginal performance gains from the computation *per se*. Likewise, independent processes may be more sensitive to inappropriate calls to shared libraries, revealing errors that are only transient under forked processes.

In `bplapply()`, the environment of `FUN` (other than the global environment) is serialized to the workers. A consequence is that, when `FUN` is inside a package name space, other functions available in the name space are available to `FUN` on the workers.

# For server administrators {#sec:administrators}

If the package is installed on a server used by multiple users, then the default number of cores used can sometimes lead to many more tasks being run than the server has cores, if two or more users run a parallel-enabled function simultaneously. A more conservative number of cores than all of them minus 2 may be desirable, so that one user does not take all of the cores unless they explicitly specify so. This can be implemented with environment variables.
Setting the corresponding environment variables for all system users to the number of cores divided by the typical number of concurrent users is a reasonable approach to avoiding this scenario.

# sessionInfo

```{r sessionInfo}
sessionInfo()
```

[*BiocParallel*]: https://bioconductor.org/packages/BiocParallel

Contents

1 Introduction

Numerous approaches are available for parallel computing in R. The CRAN Task View for high performance and parallel computing provides useful high-level summaries and package categorization. Most Task View packages cite or identify one or more of snow , Rmpi, multicore or foreach as relevant parallelization infrastructure. Direct support in R for parallel computing started with release 2.14.0 with inclusion of the parallel package which contains modified versions of multicore and snow.

A basic objective of BiocParallel is to reduce the complexity faced when developing and using software that performs parallel computations. With the introduction of the BiocParallelParam object, BiocParallel aims to provide a unified interface to existing parallel infrastructure where code can be easily executed in different environments. The BiocParallelParam specifies the environment of choice as well as computing resources and is invoked by ‘registration’ or passed as an argument to the BiocParallel functions.

BiocParallel offers the following conveniences over the ‘roll your own’ approach to parallel programming.

  • unified interface: BiocParallelParam instances define the method of parallel evaluation (multi-core, snow cluster, etc.) and computing resources (number of workers, error handling, cleanup, etc.).

  • parallel iteration over lists, files and vectorized operations: bplapply, bpmapply and bpvec provide parallel list iteration and vectorized operations. bpiterate iterates through files distributing chunks to parallel workers.

  • cluster scheduling: When the parallel environment is managed by a cluster scheduler through *batchtools, job management and result retrieval are considerably simplified.

  • support of foreach : The foreach and iterators packages are fully supported. Registration of the parallel back end uses BiocParallelParam instances.

2 Quick start

The BiocParallel package is available at bioconductor.org and can be downloaded via BiocManager:

if (!requireNamespace("BiocManager", quietly = TRUE))
    install.packages("BiocManager")
BiocManager::install("BiocParallel")

Load BiocParallel

library(BiocParallel)

The test function simply returns the square root of “x”.

FUN <- function(x) { round(sqrt(x), 4) }

Functions in BiocParallel use the registered back-ends for parallel evaluation. The default is the top entry of the registry list.

registered()
## $MulticoreParam
## class: MulticoreParam
##   bpisup: FALSE; bpnworkers: 4; bptasks: 0; bpjobname: BPJOB
##   bplog: FALSE; bpthreshold: INFO; bpstopOnError: TRUE
##   bpRNGseed: ; bptimeout: NA; bpprogressbar: FALSE
##   bpexportglobals: TRUE; bpexportvariables: FALSE; bpforceGC: FALSE
##   bpfallback: TRUE
##   bplogdir: NA
##   bpresultdir: NA
##   cluster type: FORK
## 
## $SnowParam
## class: SnowParam
##   bpisup: FALSE; bpnworkers: 4; bptasks: 0; bpjobname: BPJOB
##   bplog: FALSE; bpthreshold: INFO; bpstopOnError: TRUE
##   bpRNGseed: ; bptimeout: NA; bpprogressbar: FALSE
##   bpexportglobals: TRUE; bpexportvariables: TRUE; bpforceGC: FALSE
##   bpfallback: TRUE
##   bplogdir: NA
##   bpresultdir: NA
##   cluster type: SOCK
## 
## $SerialParam
## class: SerialParam
##   bpisup: FALSE; bpnworkers: 1; bptasks: 0; bpjobname: BPJOB
##   bplog: FALSE; bpthreshold: INFO; bpstopOnError: TRUE
##   bpRNGseed: ; bptimeout: NA; bpprogressbar: FALSE
##   bpexportglobals: FALSE; bpexportvariables: FALSE; bpforceGC: FALSE
##   bpfallback: FALSE
##   bplogdir: NA
##   bpresultdir: NA

Configure your R session to always use a particular back-end configure by setting options named after the back ends in an .RProfile file, e.g.,

options(MulticoreParam=MulticoreParam(workers=4))

When a BiocParallel function is invoked with no BPPARAM argument the default back-end is used.

bplapply(1:4, FUN)

Environment specific back-ends can be defined for any of the registry entries. This example uses a 2-worker SOCK cluster.

param <- SnowParam(workers = 2, type = "SOCK")
bplapply(1:4, FUN, BPPARAM = param)
## [[1]]
## [1] 1
## 
## [[2]]
## [1] 1.4142
## 
## [[3]]
## [1] 1.7321
## 
## [[4]]
## [1] 2

3 The BiocParallel Interface

3.1 Classes

3.1.1 BiocParallelParam

BiocParallelParam instances configure different parallel evaluation environments. Creating or register() ing a ‘Param’ allows the same code to be used in different parallel environments without a code re-write. Params listed are supported on all of Unix, Mac and Windows except MulticoreParam which is Unix and Mac only.

  • SerialParam:

    Supported on all platforms.

    Evaluate BiocParallel-enabled code with parallel evaluation disabled. This approach is useful when writing new scripts and trying to debug code.

  • MulticoreParam:

    Supported on Unix and Mac. On Windows, MulticoreParam dispatches to SerialParam.

    Evaluate BiocParallel-enabled code using multiple cores on a single computer. When available, this is the most efficient and least troublesome way to parallelize code. Windows does not support multi-core evaluation (the MulticoreParam object can be used, but evaluation is serial). On other operating systems, the default number of workers equals the value of the global option mc.cores (e.g.,getOption("mc.cores") ) or, if that is not set, the number of cores returned by arallel::detectCores() - 2 ; when number of cores cannot be determined, the default is 1.

    MulticoreParam uses ‘forked’ processes with ‘copy-on-change’ semantics – memory is only copied when it is changed. This makes it very efficient to invoke compared to other back-ends.

    There are several important caveats to using MulticoreParam. Forked processes are not available on Windows. Some environments, e.g., RStudio, do not work well with forked processes, assuming that code evaluation is single-threaded. Some external resources, e.g., access to files or data bases, maintain state in a way that assumes the resource is accessed only by a single thread. A subtle cost is that R’s garbage collector runs periodically, and ‘marks’ memory as in use. This effectively triggers a copy of the marked memory. R’s generational garbage collector is triggered at difficult-to-predict times; the effect in a long-running forked process is that the memory is eventually copied. See this post for additional details.

    MulticoreParam is based on facilities originally implemented in the multicore package and subsequently the parallel package in base.

  • SnowParam:

    Supported on all platforms.

    Evaluate BiocParallel-enabled code across several distinct instances, on one or several computers. This is a straightforward approach for executing parallel code on one or several computers, and is based on facilities originally implemented in the snow package. Different types of snow ‘back-ends’ are supported, including socket and MPI clusters.

  • BatchtoolsParam:

    Applicable to clusters with formal schedulers.

    Evaluate BiocParallel-enabled code by submitting to a cluster scheduler like SGE.

  • DoparParam:

    Supported on all platforms.

    Register a parallel back-end supported by the foreach package for use with BiocParallel.

The simplest illustration of creating BiocParallelParam is

serialParam <- SerialParam()
serialParam
## class: SerialParam
##   bpisup: FALSE; bpnworkers: 1; bptasks: 0; bpjobname: BPJOB
##   bplog: FALSE; bpthreshold: INFO; bpstopOnError: TRUE
##   bpRNGseed: ; bptimeout: NA; bpprogressbar: FALSE
##   bpexportglobals: FALSE; bpexportvariables: FALSE; bpforceGC: FALSE
##   bpfallback: FALSE
##   bplogdir: NA
##   bpresultdir: NA

Most parameters have additional arguments influencing behavior, e.g., specifying the number of ‘cores’ to use when creating a MulticoreParam instance

multicoreParam <- MulticoreParam(workers = 8)
## Warning:   'IS_BIOC_BUILD_MACHINE' environment variable detected, setting
##   BiocParallel workers to 4 (was 8)
multicoreParam
## class: MulticoreParam
##   bpisup: FALSE; bpnworkers: 4; bptasks: 0; bpjobname: BPJOB
##   bplog: FALSE; bpthreshold: INFO; bpstopOnError: TRUE
##   bpRNGseed: ; bptimeout: NA; bpprogressbar: FALSE
##   bpexportglobals: TRUE; bpexportvariables: FALSE; bpforceGC: FALSE
##   bpfallback: TRUE
##   bplogdir: NA
##   bpresultdir: NA
##   cluster type: FORK

Arguments are described on the corresponding help page, e.g., ?MulticoreParam..

3.1.2 register()ing BiocParallelParam instances

The list of registered BiocParallelParam instances represents the user’s preferences for different types of back-ends. Individual algorithms may specify a preferred back-end, and different back-ends maybe chosen when parallel evaluation is nested.

The registry behaves like a ‘stack’ in that the last entry registered is added to the top of the list and becomes the “next used” (i.e., the default).

registered invoked with no arguments lists all back-ends.

registered()
## $MulticoreParam
## class: MulticoreParam
##   bpisup: FALSE; bpnworkers: 4; bptasks: 0; bpjobname: BPJOB
##   bplog: FALSE; bpthreshold: INFO; bpstopOnError: TRUE
##   bpRNGseed: ; bptimeout: NA; bpprogressbar: FALSE
##   bpexportglobals: TRUE; bpexportvariables: FALSE; bpforceGC: FALSE
##   bpfallback: TRUE
##   bplogdir: NA
##   bpresultdir: NA
##   cluster type: FORK
## 
## $SnowParam
## class: SnowParam
##   bpisup: FALSE; bpnworkers: 4; bptasks: 0; bpjobname: BPJOB
##   bplog: FALSE; bpthreshold: INFO; bpstopOnError: TRUE
##   bpRNGseed: ; bptimeout: NA; bpprogressbar: FALSE
##   bpexportglobals: TRUE; bpexportvariables: TRUE; bpforceGC: FALSE
##   bpfallback: TRUE
##   bplogdir: NA
##   bpresultdir: NA
##   cluster type: SOCK
## 
## $SerialParam
## class: SerialParam
##   bpisup: FALSE; bpnworkers: 1; bptasks: 0; bpjobname: BPJOB
##   bplog: FALSE; bpthreshold: INFO; bpstopOnError: TRUE
##   bpRNGseed: ; bptimeout: NA; bpprogressbar: FALSE
##   bpexportglobals: FALSE; bpexportvariables: FALSE; bpforceGC: FALSE
##   bpfallback: FALSE
##   bplogdir: NA
##   bpresultdir: NA

bpparam returns the default from the top of the list.

bpparam()
## class: MulticoreParam
##   bpisup: FALSE; bpnworkers: 4; bptasks: 0; bpjobname: BPJOB
##   bplog: FALSE; bpthreshold: INFO; bpstopOnError: TRUE
##   bpRNGseed: ; bptimeout: NA; bpprogressbar: FALSE
##   bpexportglobals: TRUE; bpexportvariables: FALSE; bpforceGC: FALSE
##   bpfallback: TRUE
##   bplogdir: NA
##   bpresultdir: NA
##   cluster type: FORK

Add a specialized instance with register. When default is TRUE, the new instance becomes the default.

default <- registered()
register(BatchtoolsParam(workers = 10), default = TRUE)
## Warning:   'IS_BIOC_BUILD_MACHINE' environment variable detected, setting
##   BiocParallel workers to 4 (was 10)

BatchtoolsParam has been moved to the top of the list and is now the default.

names(registered())
## [1] "BatchtoolsParam" "MulticoreParam"  "SnowParam"       "SerialParam"
bpparam()
## class: BatchtoolsParam
##   bpisup: FALSE; bpnworkers: 4; bptasks: 0; bpjobname: BPJOB
##   bplog: FALSE; bpthreshold: INFO; bpstopOnError: TRUE
##   bpRNGseed: NA; bptimeout: NA; bpprogressbar: FALSE
##   bpexportglobals: TRUE; bpexportvariables: TRUE; bpforceGC: FALSE
##   bpfallback: TRUE
##   bplogdir: NA
##   bpresultdir: NA
##   cluster type: multicore
##   template: NA
##   registryargs:
##     file.dir: /tmp/RtmpWndJbf/Rbuildb08e87897645f/BiocParallel/vignettes/fileb115f5def4835
##     work.dir: getwd()
##     packages: character(0)
##     namespaces: character(0)
##     source: character(0)
##     load: character(0)
##     make.default: FALSE
##   saveregistry: FALSE
##   resources:

Restore the original registry

for (param in rev(default))
    register(param)

3.2 Functions

3.2.1 Parallel looping, vectorized and aggregate operations

These commonly used operations are implemented, as far as possible, for all back-ends. The functions (see the help pages, e.g., ?bplapply, for full definitions) include

bplapply(X, FUN, ...):

Apply in parallel a function FUN to each element of X. bplapply invokes FUN length(X) times, each time with a single element of X.
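For instance (a minimal sketch; SerialParam() is used so the example runs on any platform, but any back-end could be substituted):

```r
library(BiocParallel)

## square each of the three elements; FUN is invoked once per element
res <- bplapply(1:3, function(i) i^2, BPPARAM = SerialParam())
unlist(res)
## [1] 1 4 9
```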

bpmapply(FUN, ...):

Apply in parallel a function to the first, second, etc., elements of each argument in ….
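A minimal illustration, assuming the same semantics as base mapply() (results simplified by default):

```r
library(BiocParallel)

## element-wise sum of the first, second, third elements of each vector
bpmapply(function(x, y) x + y, 1:3, 4:6, BPPARAM = SerialParam())
## [1] 5 7 9
```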

bpiterate(ITER, FUN, ...):

Apply in parallel a function to the output of function ITER. Data chunks are returned by ITER and distributed to parallel workers along with FUN. Intended for iteration though an undefined number of data chunks (i.e., records in a file).
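The protocol can be sketched as follows, using a hypothetical iterator factory make_iter() (not part of BiocParallel): ITER returns the next chunk on each call, and NULL when the data are exhausted.

```r
library(BiocParallel)

## iterator yielding the chunks 1, 2, 3 and then NULL to signal completion
make_iter <- function(n = 3L) {
    i <- 0L
    function() {
        i <<- i + 1L
        if (i > n) NULL else i
    }
}

res <- bpiterate(make_iter(), function(chunk) chunk * 10, BPPARAM = SerialParam())
unlist(res)
## [1] 10 20 30
```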

bpvec(X, FUN, ...):

Apply in parallel a function FUN to subsets of X. bpvec invokes FUN as many times as there are cores or cluster nodes, with FUN receiving a subset of X (typically more than 1 element, in contrast to bplapply).
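A sketch; note that, unlike the FUN of bplapply, the FUN here must itself be vectorized, returning one result per element of the subset it receives:

```r
library(BiocParallel)

## sqrt() is vectorized; each worker receives a contiguous subset of X
## and the sub-results are concatenated (by default with c())
res <- bpvec(1:8, sqrt, BPPARAM = SerialParam())
stopifnot(identical(res, sqrt(1:8)))
```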

bpaggregate(x, data, FUN, ...):

Use the formula in x to aggregate data using FUN.
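A minimal sketch, assuming the formula interface mirrors that of stats::aggregate() (see ?bpaggregate for the methods actually defined):

```r
library(BiocParallel)

## mean miles-per-gallon by cylinder count, via the formula method;
## compare with aggregate(mpg ~ cyl, data = mtcars, FUN = mean)
bpaggregate(mpg ~ cyl, data = mtcars, FUN = mean, BPPARAM = SerialParam())
```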

3.2.2 Parallel evaluation environment

These functions query and control the state of the parallel evaluation environment.

bpisup(x): Query a BiocParallelParam back-end x for its status.

bpworkers; bpnworkers: Query a BiocParallelParam back-end for the number of workers available for parallel evaluation.

bptasks: Divides a job (e.g., a single call to a *lapply function) into tasks. Applicable to MulticoreParam only; DoparParam and BatchtoolsParam have their own approach to dividing a job among workers.

bpstart(x): Start a parallel back-end specified by BiocParallelParam x, if possible.

bpstop(x): Stop a parallel back-end specified by BiocParallelParam x.
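The life cycle can be sketched as follows (return values shown as comments; a local 2-worker SOCK cluster is assumed to be available):

```r
library(BiocParallel)

p <- SnowParam(workers = 2)
bpisup(p)          ## FALSE: workers not yet started
p <- bpstart(p)    ## launch the worker processes
bpisup(p)          ## TRUE
bpnworkers(p)      ## 2
bpstop(p)          ## shut the workers down
```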

3.2.3 Error handling and logging

Logging and advanced error recovery are available in BiocParallel 1.1.25 and later. For more details, see the vignette titled “Error Handling and Logging”:

browseVignettes("BiocParallel")

3.2.4 Locks and counters

Inter-process (i.e., single machine) locks and counters are supported using ipclock(), ipcyield(), and friends. Use these to synchronize computation, e.g., allowing only a single process to write to a file at a time.
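A sketch of serializing file output with a mutex (the identifier, file path, and worker function here are illustrative):

```r
library(BiocParallel)

id <- ipcid()                     # shared identifier naming the mutex
log <- tempfile()
FUN <- function(i, id, log) {
    ipclock(id)                   # only one worker writes at a time
    cat(i, "\n", file = log, append = TRUE)
    ipcunlock(id)
    i
}
res <- bplapply(1:4, FUN, id = id, log = log, BPPARAM = SnowParam(2))
ipcremove(id)                     # release resources associated with the mutex
```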

4 Use cases

Sample data are BAM files from a transcription profiling experiment available in the RNAseqData.HNRNPC.bam.chr14 package.

library(RNAseqData.HNRNPC.bam.chr14)
fls <- RNAseqData.HNRNPC.bam.chr14_BAMFILES

4.1 Single machine

Common approaches on a single machine are to use multiple cores in forked processes, or to use clusters of independent processes.

For purely R-based computations on non-Windows computers, there are substantial benefits, such as shared memory, to be had using forked processes. However, this approach is not portable across platforms, and fails when code uses functionality, e.g., file or database access, that assumes only a single thread is accessing the resource. While use of forked processes with MulticoreParam is an attractive solution for scripts using pure R functionality, robust and complex code often requires use of independent processes and SnowParam.

4.1.1 Forked processes with MulticoreParam

This example counts overlaps between BAM files and a defined set of ranges. First create a GRanges with regions of interest (in practice this could be large).

library(GenomicAlignments) ## for GenomicRanges and readGAlignments()
gr <- GRanges("chr14", IRanges((1000:3999)*5000, width=1000))

A ScanBamParam defines regions to extract from the files.

param <- ScanBamParam(which=range(gr))

FUN counts overlaps between the ranges in ‘gr’ and the files.

FUN <- function(fl, param) {
    gal <- readGAlignments(fl, param = param)
    sum(countOverlaps(gr, gal))
}

All parameters necessary for running a job in a multi-core environment are specified in the MulticoreParam instance.

MulticoreParam()
## class: MulticoreParam
##   bpisup: FALSE; bpnworkers: 4; bptasks: 0; bpjobname: BPJOB
##   bplog: FALSE; bpthreshold: INFO; bpstopOnError: TRUE
##   bpRNGseed: ; bptimeout: NA; bpprogressbar: FALSE
##   bpexportglobals: TRUE; bpexportvariables: FALSE; bpforceGC: FALSE
##   bpfallback: TRUE
##   bplogdir: NA
##   bpresultdir: NA
##   cluster type: FORK

The BiocParallel functions, such as bplapply, use information in the MulticoreParam to set up the appropriate back-end and pass relevant arguments to low-level functions.

> bplapply(fls[1:3], FUN, BPPARAM = MulticoreParam(), param = param)
$ERR127306
[1] 1185

$ERR127307
[1] 1123

$ERR127308
[1] 1241

Shared memory environments eliminate the need to pass large data between workers or load common packages. Note that in this code the GRanges data was not passed to all workers in bplapply, and FUN did not need to load GenomicAlignments for access to the readGAlignments function.

Problems with forked processes occur when code implementing functionality used by the workers is not written in anticipation of use by forked processes. One example is the database connection underlying Bioconductor’s org.* packages. This pseudo-code

library(org.Hs.eg.db)
FUN <- function(x, ...) {
    ...
    mapIds(org.Hs.eg.db, ...)
    ...
}
bplapply(X, FUN, ..., BPPARAM = MulticoreParam())

is likely to fail, because library(org.Hs.eg.db) opens a database connection that is accessed by multiple processes. A solution is to ensure that the database is opened independently in each process

FUN <- function(x, ...) {
    library(org.Hs.eg.db)
    ...
    mapIds(org.Hs.eg.db, ...)
    ...
}
bplapply(X, FUN, ..., BPPARAM = MulticoreParam())

4.1.2 Clusters of independent processes with SnowParam

Both Windows and non-Windows machines can use the cluster approach to spawn processes. BiocParallel back-end choices for clusters on a single machine are SnowParam, for configuring a snow cluster, or DoparParam, for use with the foreach package.

To re-run the counting example, FUN needs to be modified so that ‘gr’ is passed as a formal argument and required libraries are loaded on each worker. (In general, this is not necessary for functions defined in a package name space; see Section 6.)

FUN <- function(fl, param, gr) {
    suppressPackageStartupMessages({
        library(GenomicAlignments)
    })
    gal <- readGAlignments(fl, param = param)
    sum(countOverlaps(gr, gal))
}

Define a 2-worker SOCK Snow cluster.

snow <- SnowParam(workers = 2, type = "SOCK")

A call to bplapply with the SnowParam creates the cluster and distributes the work.

bplapply(fls[1:3], FUN, BPPARAM = snow, param = param, gr = gr)
## $ERR127306
## [1] 1185
## 
## $ERR127307
## [1] 1123
## 
## $ERR127308
## [1] 1241

The FUN written for the cluster adds some overhead due to the passing of the GRanges and the loading of GenomicAlignments on each worker. This approach, however, has the advantage that it works on most platforms and does not require a coding change when switching between Windows and non-Windows machines.

If several bplapply() statements are likely to require the same resource, it often makes sense to create a cluster once using bpstart(). The workers are re-used by each call to bplapply(), so they do not have to re-load packages, etc.

register(SnowParam()) # default evaluation
bpstart() # start the cluster
...
bplapply(X, FUN1, ...)
...
bplapply(X, FUN2, ...) # re-use workers
...
bpstop()

4.2 Ad hoc cluster of multiple machines

We use the term ad hoc cluster to define a group of machines that can communicate with each other and to which the user has password-less log-in access. This example uses a group of compute machines ("the rhinos") on the FHCRC network.

4.2.1 Ad hoc Sockets

On Linux and Mac OS X, a socket cluster is created across machines by supplying machine names as the workers argument to a BiocParallelParam instance, instead of a number. Each name represents an R process; repeated names indicate multiple workers on the same machine.

Create a SnowParam with 2 CPUs from ‘rhino01’ and 1 from ‘rhino02’.

hosts <- c("rhino01", "rhino01", "rhino02")
param <- SnowParam(workers = hosts, type = "SOCK")

Execute FUN 4 times across the workers.

> FUN <- function(i) system("hostname", intern=TRUE)
> bplapply(1:4, FUN, BPPARAM = param)
[[1]]
[1] "rhino01"

[[2]]
[1] "rhino01"

[[3]]
[1] "rhino02"

[[4]]
[1] "rhino01"

When creating a cluster across Windows machines, the workers must be specified as IP addresses (e.g., "140.107.218.57") instead of machine names.

4.2.2 MPI

An MPI cluster across machines is created with mpirun or mpiexec from the command line or a script. A list of machine names provided as the -hostfile argument defines the mpi universe.

The hostfile requests 2 processors on 3 different machines.

rhino01 slots=2
rhino02 slots=2
rhino03 slots=2

From the command line, start a single interactive process on the current machine.

mpiexec --np 1 --hostfile hostfile R --vanilla

Load BiocParallel and create an MPI snow cluster. The number of workers in SnowParam should match the number of slots requested in the hostfile. Using a smaller number of workers uses a subset of the slots.

> library(BiocParallel)
> param <- SnowParam(workers = 6, type = "MPI")

Execute FUN 6 times across the workers.

> FUN <- function(i) system("hostname", intern=TRUE)
> bplapply(1:6, FUN, BPPARAM = param)
[[1]]
[1] "rhino01"

[[2]]
[1] "rhino02"

[[3]]
[1] "rhino02"

[[4]]
[1] "rhino03"

[[5]]
[1] "rhino03"

[[6]]
[1] "rhino01"

Batch jobs can be launched with mpiexec and R CMD BATCH. Code to be executed is in ‘Rcode.R’.

mpiexec --hostfile hostfile R CMD BATCH Rcode.R

4.3 Clusters with schedulers

Computer clusters are far from standardized, so the following may require significant adaptation; it is written from experience here at FHCRC, where we have a large cluster managed via SLURM. Nodes on the cluster have shared disks and common system images, minimizing complexity about making data resources available to individual nodes. There are two simple models for use of the cluster, Cluster-centric and R-centric.

4.3.1 Cluster-centric

The idea is to use cluster management software to allocate resources, and then arrange for an R script to be evaluated in the context of allocated resources. NOTE: Depending on your cluster configuration, it may be necessary to add a line to the template file instructing workers to use the version of R on the master / head node. Otherwise, the default R on the worker nodes will be used.

For SLURM, we might request space for 4 tasks (with salloc or sbatch), arrange to start the MPI environment (with orterun), and on a single node in that universe run an R script, BiocParallel-MPI.R. The command is

$ salloc -N 4 orterun -n 1 R -f BiocParallel-MPI.R

The R script might do the following, using MPI for parallel evaluation. Start by loading necessary packages and defining FUN, the work to be done

library(BiocParallel)
library(Rmpi)
FUN <- function(i) system("hostname", intern=TRUE)

Create a SnowParam instance with the number of nodes equal to the size of the MPI universe minus 1 (let one node dispatch jobs to workers), and register this instance as the default

param <- SnowParam(mpi.universe.size() - 1, "MPI")
register(param)

Evaluate the work in parallel, process the results, clean up, and quit

xx <- bplapply(1:100, FUN)
table(unlist(xx))
mpi.quit()

The entire session is as follows:

$ salloc -N 4 orterun -n 1 R --vanilla -f BiocParallel-MPI.R
salloc: Job is in held state, pending scheduler release
salloc: Pending job allocation 6762292
salloc: job 6762292 queued and waiting for resources
salloc: job 6762292 has been allocated resources
salloc: Granted job allocation 6762292
## ...
> FUN <- function(i) system("hostname", intern=TRUE)
>
> library(BiocParallel)
> library(Rmpi)
> param <- SnowParam(mpi.universe.size() - 1, "MPI")
> register(param)
> xx <- bplapply(1:100, FUN)
> table(unlist(xx))
gizmof13 gizmof71 gizmof86 gizmof88
25 25 25 25
>
> mpi.quit()
salloc: Relinquishing job allocation 6762292
salloc: Job allocation 6762292 has been revoked.

One advantage of this approach is that the responsibility for managing the cluster lies firmly with the cluster management software – if one wants more nodes, or needs special resources, then adjust parameters to salloc (or sbatch).

Notice that workers are spawned within the bplapply function; it might often make sense to more explicitly manage workers with bpstart and bpstop, e.g.,

param <- bpstart(SnowParam(mpi.universe.size() - 1, "MPI"))
register(param)
xx <- bplapply(1:100, FUN)
bpstop(param)
mpi.quit()

4.3.2 R-centric

A more R-centric approach might start an R script on the head node, and use batchtools to submit jobs from within the R session. One way of doing this is to create a file containing a template for the job submission step, e.g., for SLURM; a starting point might be found at

tmpl <- system.file(package="batchtools", "templates", "slurm-simple.tmpl")
noquote(readLines(tmpl))
##  [1] #!/bin/bash                                                                                                 
##  [2]                                                                                                             
##  [3] ## Job Resource Interface Definition                                                                        
##  [4] ##                                                                                                          
##  [5] ## ntasks [integer(1)]:       Number of required tasks,                                                     
##  [6] ##                            Set larger than 1 if you want to further parallelize                          
##  [7] ##                            with MPI within your job.                                                     
##  [8] ## ncpus [integer(1)]:        Number of required cpus per task,                                             
##  [9] ##                            Set larger than 1 if you want to further parallelize                          
## [10] ##                            with multicore/parallel within each task.                                     
## [11] ## walltime [integer(1)]:     Walltime for this job, in seconds.                                            
## [12] ##                            Must be at least 60 seconds for Slurm to work properly.                       
## [13] ## memory   [integer(1)]:     Memory in megabytes for each cpu.                                             
## [14] ##                            Must be at least 100 (when I tried lower values my                            
## [15] ##                            jobs did not start at all).                                                   
## [16] ##                                                                                                          
## [17] ## Default resources can be set in your .batchtools.conf.R by defining the variable                         
## [18] ## 'default.resources' as a named list.                                                                     
## [19]                                                                                                             
## [20] <%                                                                                                          
## [21] # relative paths are not handled well by Slurm                                                              
## [22] log.file = fs::path_expand(log.file)                                                                        
## [23] -%>                                                                                                         
## [24]                                                                                                             
## [25]                                                                                                             
## [26] #SBATCH --job-name=<%= job.name %>                                                                          
## [27] #SBATCH --output=<%= log.file %>                                                                            
## [28] #SBATCH --error=<%= log.file %>                                                                             
## [29] #SBATCH --time=<%= ceiling(resources$walltime / 60) %>                                                      
## [30] #SBATCH --ntasks=1                                                                                          
## [31] #SBATCH --cpus-per-task=<%= resources$ncpus %>                                                              
## [32] #SBATCH --mem-per-cpu=<%= resources$memory %>                                                               
## [33] <%= if (!is.null(resources$partition)) sprintf(paste0("#SBATCH --partition='", resources$partition, "'")) %>
## [34] <%= if (array.jobs) sprintf("#SBATCH --array=1-%i", nrow(jobs)) else "" %>                                  
## [35]                                                                                                             
## [36] ## Initialize work environment like                                                                         
## [37] ## source /etc/profile                                                                                      
## [38] ## module add ...                                                                                           
## [39]                                                                                                             
## [40] ## Export value of DEBUGME environemnt var to slave                                                         
## [41] export DEBUGME=<%= Sys.getenv("DEBUGME") %>                                                                 
## [42]                                                                                                             
## [43] <%= sprintf("export OMP_NUM_THREADS=%i", resources$omp.threads) -%>                                         
## [44] <%= sprintf("export OPENBLAS_NUM_THREADS=%i", resources$blas.threads) -%>                                   
## [45] <%= sprintf("export MKL_NUM_THREADS=%i", resources$blas.threads) -%>                                        
## [46]                                                                                                             
## [47] ## Run R:                                                                                                   
## [48] ## we merge R output with stdout from SLURM, which gets then logged via --output option                     
## [49] Rscript -e 'batchtools::doJobCollection("<%= uri %>")'

The R script, run interactively or from the command line, might then look like

## define work to be done
FUN <- function(i) system("hostname", intern=TRUE)
library(BiocParallel)

## register SLURM cluster instructions from the template file
param <- BatchtoolsParam(workers=5, cluster="slurm", template=tmpl)
register(param)

## do work
xx <- bplapply(1:100, FUN)
table(unlist(xx))

The code runs on the head node until bplapply(), where the script interacts with the SLURM scheduler to request an allocation, run jobs, and retrieve results. The argument workers = 5 to BatchtoolsParam specifies the number of workers to request from the scheduler; bplapply divides the 100 jobs among the 5 workers. If BatchtoolsParam had been created without specifying any workers, then the 100 jobs implied by the argument to bplapply would be associated with 100 tasks submitted to the scheduler.

Because cluster tasks are running in independent R instances, and often on physically separate machines, a convenient ‘best practice’ is to write FUN in a ‘functional programming’ manner, such that all data required for the function is passed in as arguments or (for large data) loaded implicitly or explicitly (e.g., via an R library) from disk.
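The contrast can be sketched with a toy example: the first function depends on a global variable that may not exist on a remote worker, while the second receives everything it needs as arguments.

```r
library(BiocParallel)

## fragile: relies on `offset` being found in the worker's environment
offset <- 10
fragile_FUN <- function(i) i + offset

## robust 'functional' style: all inputs are passed as arguments
robust_FUN <- function(i, offset) i + offset
res <- bplapply(1:3, robust_FUN, offset = 10, BPPARAM = SerialParam())
unlist(res)
## [1] 11 12 13
```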

5 Analyzing genomic data in Bioconductor

General strategies exist for handling large genomic data that are well suited to R programs. A manuscript titled Scalable Genomics with R and Bioconductor (http://arxiv.org/abs/1409.2864), by Michael Lawrence and Martin Morgan, reviews several of these approaches and demonstrates their implementation with Bioconductor packages. Problem areas include scalable processing, summarization and visualization. The techniques presented include restricting queries, compressing data, iterating, and parallel computing.

Ideas are presented in an approachable fashion within a framework of common use cases. This is a beneficial read for anyone tackling genomics problems in R.

6 For developers

Developers wishing to use BiocParallel in their own packages should include BiocParallel in the DESCRIPTION file

Imports: BiocParallel

and import the functions they wish to use in the NAMESPACE file, e.g.,

importFrom(BiocParallel, bplapply)

Then invoke the desired function in the code, e.g.,

system.time(x <- bplapply(1:3, function(i) { Sys.sleep(i); i }))
##    user  system elapsed 
##   0.036   0.080   3.068
unlist(x)
## [1] 1 2 3

This will use the back-end returned by bpparam(), by default a MulticoreParam() on Linux / macOS and a SnowParam() on Windows, or the user’s preferred back-end if they have used register().

The MulticoreParam back-end does not require any special configuration or set-up and is therefore the safest option for developers. Unfortunately, MulticoreParam provides only serial evaluation on Windows.

Developers should document that their function uses BiocParallel functions on the main page, and should perhaps include in their function signature an argument BPPARAM=bpparam(). Developers should NOT use ‘register()’ in package code – this sets a preference that influences use of ‘bplapply()’ and friends in all packages, not just their package.
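A hypothetical package function following these recommendations (the name and body are illustrative, not from BiocParallel):

```r
library(BiocParallel)

## exported function with a user-overridable back-end; the default
## BPPARAM = bpparam() respects whatever the user has register()ed
countChars <- function(files, BPPARAM = bpparam()) {
    unlist(bplapply(files, nchar, BPPARAM = BPPARAM))
}

## a caller can override the back-end without touching global state
countChars(c("a.bam", "bb.bam"), BPPARAM = SerialParam())
## [1] 5 6
```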

Developers wishing to invoke back-ends other than MulticoreParam, or to write code that works across Windows, macOS and Linux, no longer need to take special care to ensure that required packages, data, and functions are available and loaded on the remote nodes: by default, global variables are exported to the workers (the default exportglobals = TRUE). Nonetheless, a good practice during development is to use independent processes (via SnowParam) rather than relying on forked (via MulticoreParam) processes. For instance, independent-process clusters include the costs of setting up the computational environment (loading required packages, for instance) that may discourage use of parallelization when parallelization provides only marginal performance gains from the computation per se. Likewise, independent processes may be more sensitive to inappropriate calls to shared libraries, revealing errors that are only transient under forked evaluation.

In bplapply(), the environment of FUN (other than the global environment) is serialized to the workers. A consequence is that, when FUN is inside a package name space, other functions available in the name space are available to FUN on the workers.

7 For server administrators

If the package is installed on a server used by multiple users, the default number of workers can lead to many more tasks being run than the server has cores when two or more users run a parallel-enabled function simultaneously. A more conservative number of cores than all of them minus 2 may be desirable, so that one user does not take all of the cores unless they explicitly ask to. This can be implemented with environment variables: setting BIOCPARALLEL_WORKER_NUMBER for all system users to the number of cores divided by the typical number of concurrent users is a reasonable approach to avoiding this scenario.
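One possible implementation, as a sketch: the site-file path varies by installation, the numbers are illustrative, and BIOCPARALLEL_WORKER_NUMBER is the worker-count environment variable documented in ?SnowParam.

```shell
## In the site-wide Renviron (e.g., /etc/R/Renviron.site; path varies):
## on a 64-core server shared by ~8 concurrent users, cap each user's
## default BiocParallel worker count at 8
BIOCPARALLEL_WORKER_NUMBER=8
```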

8 sessionInfo

sessionInfo()
## R version 4.3.1 (2023-06-16)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 22.04.3 LTS
## 
## Matrix products: default
## BLAS:   /home/biocbuild/bbs-3.18-bioc/R/lib/libRblas.so 
## LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.10.0
## 
## locale:
##  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
##  [3] LC_TIME=en_GB              LC_COLLATE=C              
##  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
##  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
##  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
## 
## time zone: America/New_York
## tzcode source: system (glibc)
## 
## attached base packages:
## [1] stats4    stats     graphics  grDevices utils     datasets  methods  
## [8] base     
## 
## other attached packages:
##  [1] GenomicAlignments_1.38.0           Rsamtools_2.18.0                  
##  [3] Biostrings_2.70.0                  XVector_0.42.0                    
##  [5] SummarizedExperiment_1.32.0        Biobase_2.62.0                    
##  [7] MatrixGenerics_1.14.0              matrixStats_1.0.0                 
##  [9] GenomicRanges_1.54.0               GenomeInfoDb_1.38.0               
## [11] IRanges_2.36.0                     S4Vectors_0.40.0                  
## [13] BiocGenerics_0.48.0                RNAseqData.HNRNPC.bam.chr14_0.39.0
## [15] BiocParallel_1.36.0                BiocStyle_2.30.0                  
## 
## loaded via a namespace (and not attached):
##  [1] rappdirs_0.3.3          sass_0.4.7              SparseArray_1.2.0      
##  [4] bitops_1.0-7            lattice_0.22-5          stringi_1.7.12         
##  [7] hms_1.1.3               digest_0.6.33           grid_4.3.1             
## [10] evaluate_0.22           bookdown_0.36           fastmap_1.1.1          
## [13] Matrix_1.6-1.1          jsonlite_1.8.7          progress_1.2.2         
## [16] backports_1.4.1         BiocManager_1.30.22     brew_1.0-8             
## [19] codetools_0.2-19        jquerylib_0.1.4         abind_1.4-5            
## [22] cli_3.6.1               rlang_1.1.1             crayon_1.5.2           
## [25] DelayedArray_0.28.0     withr_2.5.1             cachem_1.0.8           
## [28] yaml_2.3.7              S4Arrays_1.2.0          tools_4.3.1            
## [31] parallel_4.3.1          debugme_1.1.0           checkmate_2.2.0        
## [34] base64url_1.4           GenomeInfoDbData_1.2.11 vctrs_0.6.4            
## [37] R6_2.5.1                lifecycle_1.0.3         zlibbioc_1.48.0        
## [40] pkgconfig_2.0.3         bslib_0.5.1             data.table_1.14.8      
## [43] xfun_0.40               batchtools_0.9.17       knitr_1.44             
## [46] htmltools_0.5.6.1       snow_0.4-4              rmarkdown_2.25         
## [49] compiler_4.3.1          prettyunits_1.2.0       RCurl_1.98-1.12
BiocParallel/inst/doc/Random_Numbers.R

## -----------------------------------------------------------------------------
library(BiocParallel)
stopifnot(
    packageVersion("BiocParallel") > "1.27.5"
)

## -----------------------------------------------------------------------------
result1 <- bplapply(1:3, runif, BPPARAM = SerialParam(RNGseed = 100))
result1

## -----------------------------------------------------------------------------
result2 <- bplapply(1:3, runif, BPPARAM = SerialParam(RNGseed = 100))
stopifnot(
    identical(result1, result2)
)
result3 <- bplapply(1:3, runif, BPPARAM = SerialParam(RNGseed = 200))
result3
stopifnot(
    !identical(result1, result3)
)

## -----------------------------------------------------------------------------
result4 <- bplapply(1:3, runif, BPPARAM = SnowParam(RNGseed = 100))
stopifnot(
    identical(result1, result4)
)
if (!identical(.Platform$OS.type, "windows")) {
    result5 <- bplapply(1:3, runif, BPPARAM = MulticoreParam(RNGseed = 100))
    stopifnot(
        identical(result1, result5)
    )
}

## -----------------------------------------------------------------------------
result6 <- bplapply(1:3, runif, BPPARAM = SnowParam(workers = 2, RNGseed = 100))
result7 <- bplapply(1:3, runif, BPPARAM = SnowParam(workers = 3, RNGseed = 100))
result8 <- bplapply(
    1:3, runif,
    BPPARAM = SnowParam(workers = 2, tasks = 3, RNGseed = 100)
)
stopifnot(
    identical(result1, result6),
    identical(result1, result7),
    identical(result1, result8)
)

## -----------------------------------------------------------------------------
ITER_FUN_FACTORY <- function() {
    x <- 1:3
    i <- 0L
    function() {
        i <<- i + 1L
        if (i > length(x))
            return(NULL)
        x[[i]]
    }
}

## ----collapse = TRUE----------------------------------------------------------
ITER <- ITER_FUN_FACTORY()
ITER()
ITER()
ITER()
ITER()

## -----------------------------------------------------------------------------
result9 <- bpiterate(
    ITER_FUN_FACTORY(), runif,
    BPPARAM = SerialParam(RNGseed = 100)
)
stopifnot(
    identical(result1, result9)
)

## -----------------------------------------------------------------------------
FUN1 <- function(i) {
    if (identical(i, 2L)) {
        ## error when evaluating the second element
        stop("i == 2")
    } else runif(i)
}
result10 <- bptry(bplapply(
    1:3, FUN1,
    BPPARAM = SerialParam(RNGseed = 100, stop.on.error = FALSE)
))
result10

## -----------------------------------------------------------------------------
FUN2 <- function(i) {
    if (identical(i, 2L)) {
        ## the random number stream should be in the same state as the
        ## first time through the loop, and rnorm(i) should return
        ## same result as FUN
        runif(i)
    } else {
        ## if this branch is used, then we are incorrectly updating
        ## already calculated elements -- '0' in the output would
        ## indicate this error
        0
    }
}
result11 <- bplapply(
    1:3, FUN2, BPREDO = result10,
    BPPARAM = SerialParam(RNGseed = 100, stop.on.error = FALSE)
)
stopifnot(
    identical(result1, result11)
)

## -----------------------------------------------------------------------------
set.seed(200)
value <- runif(1)
set.seed(200)
result12 <- bplapply(1:3, runif, BPPARAM = SerialParam(RNGseed = 100))
stopifnot(
    identical(result1, result12),
    identical(value, runif(1))
)

## -----------------------------------------------------------------------------
set.seed(100)
value <- runif(1)
set.seed(100)
result13 <- bplapply(1:3, runif, BPPARAM = SerialParam())
stopifnot(
    !identical(result1, result13),
    identical(value, runif(1))
)

## -----------------------------------------------------------------------------
param <- bpstart(SerialParam(RNGseed = 100))
result16 <- bplapply(1:3, runif, BPPARAM = param)
bpstop(param)
stopifnot(
    identical(result1, result16)
)

## -----------------------------------------------------------------------------
param <- bpstart(SerialParam(RNGseed = 100))
result16 <- bplapply(1:3, runif, BPPARAM = param)
result17 <- bplapply(1:3, runif, BPPARAM = param)
bpstop(param)
stopifnot(
    identical(result1, result16),
    !identical(result1, result17)
)

## -----------------------------------------------------------------------------
set.seed(100)
result20 <- lapply(1:3, runif)
stopifnot(
    !identical(result1, result20)
)

## ----echo = FALSE-------------------------------------------------------------
sessionInfo()

BiocParallel/inst/doc/Random_Numbers.Rmd

---
title: "Random Numbers in _BiocParallel_"
author:
- name: Martin Morgan
  affiliation: Roswell Park Comprehensive Cancer Center, Buffalo, NY
  email: Martin.Morgan@RoswellPark.org
date: "Edited: 7 September, 2021; Compiled: `r format(Sys.time(), '%B %d, %Y')`"
vignette: >
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteIndexEntry{4. Random Numbers in BiocParallel}
  %\VignetteEncoding{UTF-8}
output:
  BiocStyle::html_document:
    number_sections: yes
    toc: yes
    toc_depth: 4
---

[RPCI]: https://www.roswellpark.org/martin-morgan

# Scope

`r Biocpkg("BiocParallel")` enables use of random number streams in a
reproducible manner. This document applies to the following `*Param()`:

* `SerialParam()`: sequential evaluation in a single R process.
* `SnowParam()`: parallel evaluation in multiple independent R processes.
* `MulticoreParam()`: parallel evaluation in R sessions running in forked
  threads. Not available on Windows.

The `*Param()` can be used for evaluation with:

* `bplapply()`: `lapply()`-like application of a user-supplied function
  `FUN` to a vector or list of elements `X`.
* `bpiterate()`: apply a user-supplied function `FUN` to an unknown number
  of elements resulting from successive calls to a user-supplied function
  `ITER`.

The reproducible random number implementation also supports:

* `bptry()` and the `BPREDO=` argument, for re-evaluation of elements that
  fail (e.g., because of a bug in `FUN`).
# Essentials

## Use of `bplapply()` and `RNGseed=`

Attach `r Biocpkg("BiocParallel")` and ensure that the version is greater than 1.27.5

```{r}
library(BiocParallel)
stopifnot(
    packageVersion("BiocParallel") > "1.27.5"
)
```

For reproducible calculation, use the `RNGseed=` argument in any of the `*Param()` constructors.

```{r}
result1 <- bplapply(1:3, runif, BPPARAM = SerialParam(RNGseed = 100))
result1
```

Repeating the calculation with the same value for `RNGseed=` results in the same result; a different random number seed results in different results.

```{r}
result2 <- bplapply(1:3, runif, BPPARAM = SerialParam(RNGseed = 100))
stopifnot(
    identical(result1, result2)
)

result3 <- bplapply(1:3, runif, BPPARAM = SerialParam(RNGseed = 200))
result3
stopifnot(
    !identical(result1, result3)
)
```

Results are invariant across `*Param()`

```{r}
result4 <- bplapply(1:3, runif, BPPARAM = SnowParam(RNGseed = 100))
stopifnot(
    identical(result1, result4)
)

if (!identical(.Platform$OS.type, "windows")) {
    result5 <- bplapply(1:3, runif, BPPARAM = MulticoreParam(RNGseed = 100))
    stopifnot(
        identical(result1, result5)
    )
}
```

Parallel backends can adjust the number of `workers` (processes performing the evaluation) and `tasks` (how elements of `X` are distributed between workers). Results are invariant to these parameters. This is illustrated with `SnowParam()`, but applies also to `MulticoreParam()`.

```{r}
result6 <- bplapply(1:3, runif, BPPARAM = SnowParam(workers = 2, RNGseed = 100))
result7 <- bplapply(1:3, runif, BPPARAM = SnowParam(workers = 3, RNGseed = 100))
result8 <- bplapply(
    1:3, runif,
    BPPARAM = SnowParam(workers = 2, tasks = 3, RNGseed = 100)
)
stopifnot(
    identical(result1, result6),
    identical(result1, result7),
    identical(result1, result8)
)
```

Subsequent sections illustrate results with `SerialParam()`, but identical results are obtained with `SnowParam()` and `MulticoreParam()`.
## Use with `bpiterate()`

`bpiterate()` allows parallel processing of a ’stream’ of data as a series of tasks, with a task consisting of a portion of the overall data. It is useful when the data size is not known or easily partitioned into elements of a vector or list. A real use case might involve iterating through a BAM file, where a task represents successive records (perhaps 100,000 per task) in the file. Here we illustrate with a simple example – iterating through a vector `x = 1:3`

```{r}
ITER_FUN_FACTORY <- function() {
    x <- 1:3
    i <- 0L
    function() {
        i <<- i + 1L
        if (i > length(x))
            return(NULL)
        x[[i]]
    }
}
```

`ITER_FUN_FACTORY()` is used to create a function that, on each invocation, returns the next task (here, an element of `x`; in a real example, perhaps 100000 records from a BAM file). When there are no more tasks, the function returns `NULL`

```{r, collapse = TRUE}
ITER <- ITER_FUN_FACTORY()
ITER()
ITER()
ITER()
ITER()
```

In our simple example, `bpiterate()` is performing the same computations as `bplapply()` so the results, including the random number streams used by each task in `bpiterate()`, are the same

```{r}
result9 <- bpiterate(
    ITER_FUN_FACTORY(), runif,
    BPPARAM = SerialParam(RNGseed = 100)
)
stopifnot(
    identical(result1, result9)
)
```

## Use with `bptry()`

`bptry()` in conjunction with the `BPREDO=` argument to `bplapply()` or `bpiterate()` allows for graceful recovery from errors. Here a buggy `FUN1()` produces an error for the second element. `bptry()` allows evaluation to continue for other elements of `X`, despite the error. This is shown in the result.
```{r}
FUN1 <- function(i) {
    if (identical(i, 2L)) {
        ## error when evaluating the second element
        stop("i == 2")
    } else runif(i)
}
result10 <- bptry(bplapply(
    1:3, FUN1,
    BPPARAM = SerialParam(RNGseed = 100, stop.on.error = FALSE)
))
result10
```

`FUN2()` illustrates the flexibility of `bptry()` by fixing the bug when `i == 2`, but also generating incorrect results if invoked for previously correct values. The identity of the result to the original computation shows that only the error task is re-computed, and that the random number stream used by the task is identical to the original stream.

```{r}
FUN2 <- function(i) {
    if (identical(i, 2L)) {
        ## the random number stream should be in the same state as the
        ## first time through the loop, and runif(i) should return the
        ## same result as FUN1
        runif(i)
    } else {
        ## if this branch is used, then we are incorrectly updating
        ## already calculated elements -- '0' in the output would
        ## indicate this error
        0
    }
}
result11 <- bplapply(
    1:3, FUN2,
    BPREDO = result10,
    BPPARAM = SerialParam(RNGseed = 100, stop.on.error = FALSE)
)
stopifnot(
    identical(result1, result11)
)
```

## Relationship between `RNGseed=` and `set.seed()`

The global random number stream (influenced by `set.seed()`) is ignored by `r Biocpkg("BiocParallel")`, and `r Biocpkg("BiocParallel")` does NOT increment the global stream.

```{r}
set.seed(200)
value <- runif(1)

set.seed(200)
result12 <- bplapply(1:3, runif, BPPARAM = SerialParam(RNGseed = 100))
stopifnot(
    identical(result1, result12),
    identical(value, runif(1))
)
```

When `RNGseed=` is not used, an internal stream (not accessible to the user) is used and `r Biocpkg("BiocParallel")` does NOT increment the global stream.
```{r}
set.seed(100)
value <- runif(1)

set.seed(100)
result13 <- bplapply(1:3, runif, BPPARAM = SerialParam())
stopifnot(
    !identical(result1, result13),
    identical(value, runif(1))
)
```

## `bpstart()` and random number streams

In all of the examples so far `*Param()` objects are passed to `bplapply()` or `bpiterate()` in the ’stopped’ state. Internally, `bplapply()` and `bpiterate()` invoke `bpstart()` to establish the computational environment (e.g., starting workers for `SnowParam()`). `bpstart()` can be called explicitly, e.g., to allow workers to be used across calls to `bplapply()`.

The cluster random number stream is initiated with `bpstart()`. Thus

```{r}
param <- bpstart(SerialParam(RNGseed = 100))
result16 <- bplapply(1:3, runif, BPPARAM = param)
bpstop(param)
stopifnot(
    identical(result1, result16)
)
```

This allows a second call to `bplapply()` to represent a continuation of a random number computation – the second call to `bplapply()` results in different random number streams for each element of `X`.

```{r}
param <- bpstart(SerialParam(RNGseed = 100))
result16 <- bplapply(1:3, runif, BPPARAM = param)
result17 <- bplapply(1:3, runif, BPPARAM = param)
bpstop(param)
stopifnot(
    identical(result1, result16),
    !identical(result1, result17)
)
```

## Relationship between `bplapply()` and `lapply()`

The results from `bplapply()` are different from the results from `lapply()`, even with the same random number seed. This is because correctly implemented parallel random streams require use of a particular random number generator invoked in specific ways for each element of `X`, as outlined in the Implementation notes section.
```{r}
set.seed(100)
result20 <- lapply(1:3, runif)
stopifnot(
    !identical(result1, result20)
)
```

# Implementation notes

The implementation uses the L’Ecuyer-CMRG random number generator (see `?RNGkind` and `?parallel::clusterSetRNGStream` for additional details). This random number generator produces independent streams and substreams of random numbers. In `r Biocpkg("BiocParallel")`, each call to `bpstart()` creates a new stream from the L’Ecuyer-CMRG generator. Each element in `bplapply()` or `bpiterate()` creates a new substream. Each application of `FUN` is therefore using the L’Ecuyer-CMRG random number generator, with a substream that is independent of the substreams of all other elements.

Within the user-supplied `FUN` of `bplapply()` or `bpiterate()`, it is a mistake to use `RNGkind()` to set a different random number generator, or to use `set.seed()`. This would in principle compromise the independence of the streams across elements.

# `sessionInfo()`

```{r, echo = FALSE}
sessionInfo()
```

BiocParallel/inst/doc/Random_Numbers.html

Random Numbers in BiocParallel

1 Scope

BiocParallel enables use of random number streams in a reproducible manner. This document applies to the following *Param():

  • SerialParam(): sequential evaluation in a single R process.
  • SnowParam(): parallel evaluation in multiple independent R processes.
  • MulticoreParam(): parallel evaluation in R sessions running in forked processes. Not available on Windows.

The *Param() can be used for evaluation with:

  • bplapply(): lapply()-like application of a user-supplied function FUN to a vector or list of elements X.
  • bpiterate(): apply a user-supplied function FUN to an unknown number of elements resulting from successive calls to a user-supplied function ITER.

The reproducible random number implementation also supports:

  • bptry() and the BPREDO= argument, for re-evaluation of elements that fail (e.g., because of a bug in FUN).

2 Essentials

2.1 Use of bplapply() and RNGseed=

Attach BiocParallel and ensure that the version is greater than 1.27.5

library(BiocParallel)
stopifnot(
    packageVersion("BiocParallel") > "1.27.5"
)

For reproducible calculation, use the RNGseed= argument in any of the *Param() constructors.

result1 <- bplapply(1:3, runif, BPPARAM = SerialParam(RNGseed = 100))
result1
## [[1]]
## [1] 0.7393338
## 
## [[2]]
## [1] 0.8216743 0.7451087
## 
## [[3]]
## [1] 0.1962909 0.5226640 0.6857650

Repeating the calculation with the same value for RNGseed= results in the same result; a different random number seed results in different results.

result2 <- bplapply(1:3, runif, BPPARAM = SerialParam(RNGseed = 100))
stopifnot(
    identical(result1, result2)
)

result3 <- bplapply(1:3, runif, BPPARAM = SerialParam(RNGseed = 200))
result3
## [[1]]
## [1] 0.9757768
## 
## [[2]]
## [1] 0.6525851 0.6416909
## 
## [[3]]
## [1] 0.6710576 0.5895330 0.7686983
stopifnot(
    !identical(result1, result3)
)

Results are invariant across *Param()

result4 <- bplapply(1:3, runif, BPPARAM = SnowParam(RNGseed = 100))
stopifnot(
    identical(result1, result4)
)

if (!identical(.Platform$OS.type, "windows")) {
    result5 <- bplapply(1:3, runif, BPPARAM = MulticoreParam(RNGseed = 100))
    stopifnot(
        identical(result1, result5)
    )
}

Parallel backends can adjust the number of workers (processes performing the evaluation) and tasks (how elements of X are distributed between workers). Results are invariant to these parameters. This is illustrated with SnowParam(), but applies also to MulticoreParam().

result6 <- bplapply(1:3, runif, BPPARAM = SnowParam(workers = 2, RNGseed = 100))
result7 <- bplapply(1:3, runif, BPPARAM = SnowParam(workers = 3, RNGseed = 100))
result8 <- bplapply(
    1:3, runif,
    BPPARAM = SnowParam(workers = 2, tasks = 3, RNGseed = 100)
)
stopifnot(
    identical(result1, result6),
    identical(result1, result7),
    identical(result1, result8)
)

Subsequent sections illustrate results with SerialParam(), but identical results are obtained with SnowParam() and MulticoreParam().

2.2 Use with bpiterate()

bpiterate() allows parallel processing of a ’stream’ of data as a series of tasks, with a task consisting of a portion of the overall data. It is useful when the data size is not known or easily partitioned into elements of a vector or list. A real use case might involve iterating through a BAM file, where a task represents successive records (perhaps 100,000 per task) in the file. Here we illustrate with a simple example – iterating through a vector x = 1:3

ITER_FUN_FACTORY <- function() {
    x <- 1:3
    i <- 0L
    function() {
        i <<- i + 1L
        if (i > length(x))
            return(NULL)
        x[[i]]
    }
}

ITER_FUN_FACTORY() is used to create a function that, on each invocation, returns the next task (here, an element of x; in a real example, perhaps 100000 records from a BAM file). When there are no more tasks, the function returns NULL

ITER <- ITER_FUN_FACTORY()
ITER()
## [1] 1

ITER()
## [1] 2

ITER()
## [1] 3

ITER()
## NULL

In our simple example, bpiterate() is performing the same computations as bplapply() so the results, including the random number streams used by each task in bpiterate(), are the same

result9 <- bpiterate(
    ITER_FUN_FACTORY(), runif,
    BPPARAM = SerialParam(RNGseed = 100)
)
stopifnot(
    identical(result1, result9)
)
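
The per-task streams do not depend on how results are collected, so reproducibility is expected to carry over when results are combined on the fly. A minimal sketch (the REDUCE = c choice and the reduced1 / reduced2 names are illustrative, not from the original document):

```r
## combining results as they arrive; the random number streams are
## assigned per task, so the reduced value should be reproducible for
## a fixed RNGseed=
reduced1 <- bpiterate(
    ITER_FUN_FACTORY(), runif,
    REDUCE = c,
    BPPARAM = SerialParam(RNGseed = 100)
)
reduced2 <- bpiterate(
    ITER_FUN_FACTORY(), runif,
    REDUCE = c,
    BPPARAM = SerialParam(RNGseed = 100)
)
stopifnot(identical(reduced1, reduced2))
```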

2.3 Use with bptry()

bptry() in conjunction with the BPREDO= argument to bplapply() or bpiterate() allows for graceful recovery from errors. Here a buggy FUN1() produces an error for the second element. bptry() allows evaluation to continue for other elements of X, despite the error. This is shown in the result.

FUN1 <- function(i) {
    if (identical(i, 2L)) {
        ## error when evaluating the second element
        stop("i == 2")
    } else runif(i)
}
result10 <- bptry(bplapply(
    1:3, FUN1,
    BPPARAM = SerialParam(RNGseed = 100, stop.on.error = FALSE)
))
result10
## [[1]]
## [1] 0.7393338
## 
## [[2]]
## <remote_error in FUN(...): i == 2>
## traceback() available as 'attr(x, "traceback")'
## 
## [[3]]
## [1] 0.1962909 0.5226640 0.6857650
## 
## attr(,"REDOENV")
## <environment: 0x556c070f45f0>

FUN2() illustrates the flexibility of bptry() by fixing the bug when i == 2, but also generating incorrect results if invoked for previously correct values. The identity of the result to the original computation shows that only the error task is re-computed, and that the random number stream used by the task is identical to the original stream.

FUN2 <- function(i) {
    if (identical(i, 2L)) {
        ## the random number stream should be in the same state as the
        ## first time through the loop, and runif(i) should return the
        ## same result as FUN1
        runif(i)
    } else {
        ## if this branch is used, then we are incorrectly updating
        ## already calculated elements -- '0' in the output would
        ## indicate this error
        0
    }
}
result11 <- bplapply(
    1:3, FUN2,
    BPREDO = result10,
    BPPARAM = SerialParam(RNGseed = 100, stop.on.error = FALSE)
)
stopifnot(
    identical(result1, result11)
)

2.4 Relationship between RNGseed= and set.seed()

The global random number stream (influenced by set.seed()) is ignored by BiocParallel, and BiocParallel does NOT increment the global stream.

set.seed(200)
value <- runif(1)

set.seed(200)
result12 <- bplapply(1:3, runif, BPPARAM = SerialParam(RNGseed = 100))
stopifnot(
    identical(result1, result12),
    identical(value, runif(1))
)

When RNGseed= is not used, an internal stream (not accessible to the user) is used and BiocParallel does NOT increment the global stream.

set.seed(100)
value <- runif(1)

set.seed(100)
result13 <- bplapply(1:3, runif, BPPARAM = SerialParam())
stopifnot(
    !identical(result1, result13),
    identical(value, runif(1))
)
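
A practical consequence, sketched here under the assumption stated above (each call draws from a fresh internal stream when RNGseed= is not supplied), is that two otherwise identical calls are not expected to reproduce one another. The names result14 and result15 are illustrative, not from the original document:

```r
## without RNGseed=, each bplapply() call uses its own internal
## stream, so repeated calls should give different results
result14 <- bplapply(1:3, runif, BPPARAM = SerialParam())
result15 <- bplapply(1:3, runif, BPPARAM = SerialParam())
stopifnot(!identical(result14, result15))
```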

2.5 bpstart() and random number streams

In all of the examples so far *Param() objects are passed to bplapply() or bpiterate() in the ’stopped’ state. Internally, bplapply() and bpiterate() invoke bpstart() to establish the computational environment (e.g., starting workers for SnowParam()). bpstart() can be called explicitly, e.g., to allow workers to be used across calls to bplapply().

The cluster random number stream is initiated with bpstart(). Thus

param <- bpstart(SerialParam(RNGseed = 100))
result16 <- bplapply(1:3, runif, BPPARAM = param)
bpstop(param)
stopifnot(
    identical(result1, result16)
)

This allows a second call to bplapply() to represent a continuation of a random number computation – the second call to bplapply() results in different random number streams for each element of X.

param <- bpstart(SerialParam(RNGseed = 100))
result16 <- bplapply(1:3, runif, BPPARAM = param)
result17 <- bplapply(1:3, runif, BPPARAM = param)
bpstop(param)
stopifnot(
    identical(result1, result16),
    !identical(result1, result17)
)
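
Conversely, because the stream is initiated by bpstart(), stopping the param and starting a fresh one with the same RNGseed= begins the sequence again, exactly as in the first bpstart() example above (result18 is an illustrative name):

```r
## a fresh bpstart() with the same RNGseed= restarts the stream, so
## the first bplapply() again reproduces result1
param <- bpstart(SerialParam(RNGseed = 100))
result18 <- bplapply(1:3, runif, BPPARAM = param)
bpstop(param)
stopifnot(
    identical(result1, result18)
)
```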

2.6 Relationship between bplapply() and lapply()

The results from bplapply() are different from the results from lapply(), even with the same random number seed. This is because correctly implemented parallel random streams require use of a particular random number generator invoked in specific ways for each element of X, as outlined in the Implementation notes section.

set.seed(100)
result20 <- lapply(1:3, runif)
stopifnot(
    !identical(result1, result20)
)

3 Implementation notes

The implementation uses the L’Ecuyer-CMRG random number generator (see ?RNGkind and ?parallel::clusterSetRNGStream for additional details). This random number generator produces independent streams and substreams of random numbers. In BiocParallel, each call to bpstart() creates a new stream from the L’Ecuyer-CMRG generator. Each element in bplapply() or bpiterate() creates a new substream. Each application of FUN is therefore using the L’Ecuyer-CMRG random number generator, with a substream that is independent of the substreams of all other elements.

Within the user-supplied FUN of bplapply() or bpiterate(), it is a mistake to use RNGkind() to set a different random number generator, or to use set.seed(). This would in principle compromise the independence of the streams across elements.
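
As a sketch of this pitfall (FUN3 is a hypothetical function, not part of the package or vignette), seeding inside FUN discards the element's substream, so every element starts from the same generator state:

```r
## ANTI-PATTERN -- do not do this
FUN3 <- function(i) {
    set.seed(1)   # overrides the element's independent substream
    runif(i)
}
bad <- bplapply(1:3, FUN3, BPPARAM = SerialParam(RNGseed = 100))
## every element now begins with the same 'random' value
stopifnot(
    length(unique(vapply(bad, `[[`, numeric(1), 1))) == 1L
)
```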

4 sessionInfo()

## R version 4.3.1 (2023-06-16)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 22.04.3 LTS
## 
## Matrix products: default
## BLAS:   /home/biocbuild/bbs-3.18-bioc/R/lib/libRblas.so 
## LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.10.0
## 
## locale:
##  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
##  [3] LC_TIME=en_GB              LC_COLLATE=C              
##  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
##  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
##  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
## 
## time zone: America/New_York
## tzcode source: system (glibc)
## 
## attached base packages:
## [1] stats4    stats     graphics  grDevices utils     datasets  methods  
## [8] base     
## 
## other attached packages:
##  [1] GenomicAlignments_1.38.0           Rsamtools_2.18.0                  
##  [3] Biostrings_2.70.0                  XVector_0.42.0                    
##  [5] SummarizedExperiment_1.32.0        Biobase_2.62.0                    
##  [7] MatrixGenerics_1.14.0              matrixStats_1.0.0                 
##  [9] GenomicRanges_1.54.0               GenomeInfoDb_1.38.0               
## [11] IRanges_2.36.0                     S4Vectors_0.40.0                  
## [13] BiocGenerics_0.48.0                RNAseqData.HNRNPC.bam.chr14_0.39.0
## [15] BiocParallel_1.36.0                BiocStyle_2.30.0                  
## 
## loaded via a namespace (and not attached):
##  [1] rappdirs_0.3.3          sass_0.4.7              SparseArray_1.2.0      
##  [4] bitops_1.0-7            lattice_0.22-5          stringi_1.7.12         
##  [7] hms_1.1.3               digest_0.6.33           grid_4.3.1             
## [10] evaluate_0.22           bookdown_0.36           fastmap_1.1.1          
## [13] Matrix_1.6-1.1          jsonlite_1.8.7          progress_1.2.2         
## [16] backports_1.4.1         BiocManager_1.30.22     brew_1.0-8             
## [19] codetools_0.2-19        jquerylib_0.1.4         abind_1.4-5            
## [22] cli_3.6.1               rlang_1.1.1             crayon_1.5.2           
## [25] DelayedArray_0.28.0     withr_2.5.1             cachem_1.0.8           
## [28] yaml_2.3.7              S4Arrays_1.2.0          tools_4.3.1            
## [31] parallel_4.3.1          debugme_1.1.0           checkmate_2.2.0        
## [34] base64url_1.4           GenomeInfoDbData_1.2.11 vctrs_0.6.4            
## [37] R6_2.5.1                lifecycle_1.0.3         zlibbioc_1.48.0        
## [40] pkgconfig_2.0.3         bslib_0.5.1             data.table_1.14.8      
## [43] xfun_0.40               batchtools_0.9.17       knitr_1.44             
## [46] htmltools_0.5.6.1       snow_0.4-4              rmarkdown_2.25         
## [49] compiler_4.3.1          prettyunits_1.2.0       RCurl_1.98-1.12
BiocParallel/inst/snow/0000755000175200017520000000000014516004410016063 5ustar00biocbuildbiocbuildBiocParallel/inst/snow/RMPInode.R0000755000175200017520000000126114516004410017626 0ustar00biocbuildbiocbuildlocal({ snowlib <- Sys.getenv("R_SNOW_LIB") outfile <- Sys.getenv("R_SNOW_OUTFILE") args <- commandArgs() pos <- match("--args", args) args <- args[-(1 : pos)] for (a in args) { pos <- regexpr("=", a) name <- substr(a, 1, pos - 1) value <- substr(a,pos + 1, nchar(a)) switch(name, SNOWLIB = snowlib <- value, OUT = outfile <- value) } if (! (snowlib %in% .libPaths())) .libPaths(c(snowlib, .libPaths())) library(methods) ## because Rscript as of R 2.7.0 doesn't load methods loadNamespace("Rmpi") loadNamespace("snow") BiocParallel::bprunMPIworker() quit("no") }) BiocParallel/inst/snow/RSOCKnode.R0000644000175200017520000000160714516004410017741 0ustar00biocbuildbiocbuildlocal({ master <- "localhost" port <- "" snowlib <- Sys.getenv("R_SNOW_LIB") outfile <- Sys.getenv("R_SNOW_OUTFILE") ##**** defaults to ""; document args <- commandArgs() pos <- match("--args", args) args <- args[-(1 : pos)] for (a in args) { pos <- regexpr("=", a) name <- substr(a, 1, pos - 1) value <- substr(a,pos + 1, nchar(a)) switch(name, MASTER = master <- value, PORT = port <- value, SNOWLIB = snowlib <- value, OUT = outfile <- value) } if (! 
(snowlib %in% .libPaths())) .libPaths(c(snowlib, .libPaths())) library(methods) ## because Rscript as of R 2.7.0 doesn't load methods loadNamespace("snow") if (port == "") port <- getClusterOption("port") BiocParallel::.bpworker_impl(snow::makeSOCKmaster(master, port)) quit("no") }) BiocParallel/inst/unitTests/0000755000175200017520000000000014516004410017077 5ustar00biocbuildbiocbuildBiocParallel/inst/unitTests/test_BatchtoolsParam.R0000644000175200017520000003576614516004410023365 0ustar00biocbuildbiocbuildmessage("Testing BatchtoolsParam") .old_options <- NULL .setUp <- function() .old_options <<- options(BIOCPARALLEL_BATCHTOOLS_REMOVE_REGISTRY_WAIT = 1) .tearDown <- function() { options(.old_options) } .n_connections <- function() { gc() # close connections nrow(showConnections()) } test_BatchtoolsParam_constructor <- function() { param <- BatchtoolsParam() checkTrue(validObject(param)) checkTrue(is(param$registry, "NULLRegistry")) isWindows <- .Platform$OS.type == "windows" cluster <- if (isWindows) "socket" else "multicore" nworkers <- if (isWindows) snowWorkers() else multicoreWorkers() checkIdentical(cluster, bpbackend(param)) checkIdentical(nworkers, bpnworkers(param)) checkIdentical(3L, bpnworkers(BatchtoolsParam(3L))) cluster <- "socket" param <- BatchtoolsParam(cluster=cluster) checkIdentical(cluster, bpbackend(param)) checkIdentical(nworkers, bpnworkers(param)) cluster <- "multicore" if (BiocParallel:::.batchtoolsClusterAvailable(cluster)) { param <- BatchtoolsParam(cluster=cluster) checkIdentical(cluster, bpbackend(param)) checkIdentical(nworkers, bpnworkers(param)) } cluster <- "interactive" if (BiocParallel:::.batchtoolsClusterAvailable(cluster)) { param <- BatchtoolsParam(cluster=cluster) checkIdentical(cluster, bpbackend(param)) checkIdentical(1L, bpnworkers(param)) } cluster <- "sge" if (BiocParallel:::.batchtoolsClusterAvailable(cluster)) { param <- BatchtoolsParam(workers=2, cluster=cluster) checkIdentical(cluster, bpbackend(param)) } cluster 
<- "lsf" if (BiocParallel:::.batchtoolsClusterAvailable(cluster)) { param <- BatchtoolsParam(workers=2, cluster=cluster) checkIdentical(cluster, bpbackend(param)) } cluster <- "slurm" if (BiocParallel:::.batchtoolsClusterAvailable(cluster)) { param <- BatchtoolsParam(workers=2, cluster=cluster) checkIdentical(cluster, bpbackend(param)) } cluster <- "openlava" if (BiocParallel:::.batchtoolsClusterAvailable(cluster)) { param <- BatchtoolsParam(workers=2, cluster=cluster) checkIdentical(cluster, bpbackend(param)) } cluster <- "torque" if (BiocParallel:::.batchtoolsClusterAvailable(cluster)) { param <- BatchtoolsParam(workers=2, cluster=cluster) checkIdentical(cluster, bpbackend(param)) } cluster <- "unknown" checkException(BatchtoolsParam(cluster=cluster)) } test_BatchtoolsWorkers <- function() { socket <- snowWorkers() multicore <- multicoreWorkers() isWindows <- .Platform$OS.type == "windows" checkIdentical( if (isWindows) socket else multicore, batchtoolsWorkers() ) checkIdentical(socket, batchtoolsWorkers("socket")) cluster <- "multicore" if (BiocParallel:::.batchtoolsClusterAvailable(cluster)) checkIdentical(multicore, batchtoolsWorkers(cluster)) checkIdentical(1L, batchtoolsWorkers("interactive")) checkException(batchtoolsWorkers("unknown")) } .test_BatchtoolsParam_bpisup_start_stop <- function(param) { n_connections <- .n_connections() checkIdentical(FALSE, bpisup(param)) checkIdentical(TRUE, bpisup(bpstart(param))) checkIdentical(FALSE, bpisup(bpstop(param))) checkIdentical(n_connections, .n_connections()) } test_BatchtoolsParam_bpisup_start_stop_default <- function() { param <- BatchtoolsParam(workers=2) .test_BatchtoolsParam_bpisup_start_stop(param) } test_BatchtoolsParam_bpisup_start_stop_socket <- function() { cluster <- "socket" param <- BatchtoolsParam(workers=2, cluster=cluster) checkIdentical(cluster, bpbackend(param)) .test_BatchtoolsParam_bpisup_start_stop(param) } test_BatchtoolsParam_bpisup_start_stop_interactive <- function() { cluster <- 
"interactive" param <- BatchtoolsParam(workers=2, cluster=cluster) checkIdentical(cluster, bpbackend(param)) .test_BatchtoolsParam_bpisup_start_stop(param) } test_BatchtoolsParam_bplapply <- function() { n_connections <- .n_connections() fun <- function(x) Sys.getpid() ## Check for all cluster types cluster <- "interactive" param <- BatchtoolsParam(workers=2, cluster=cluster) result <- bplapply(1:5, fun, BPPARAM=param) checkIdentical(1L, length(unique(unlist(result)))) cluster <- "multicore" if (BiocParallel:::.batchtoolsClusterAvailable(cluster)) { param <- BatchtoolsParam(workers=2, cluster=cluster) result <- bplapply(1:5, fun, BPPARAM=param) checkIdentical(2L, length(unique(unlist(result)))) } cluster <- "socket" param <- BatchtoolsParam(workers=2, cluster=cluster) result <- bplapply(1:5, fun, BPPARAM=param) checkIdentical(2L, length(unique(unlist(result)))) checkIdentical(n_connections, .n_connections()) } ## Check registry test_BatchtoolsParam_registry <- function() { n_connections <- .n_connections() param <- BatchtoolsParam() checkTrue(is(param$registry, "NULLRegistry")) bpstart(param) checkTrue(!is(param$registry, "NULLRegistry")) checkTrue(is(param$registry, "Registry")) bpstop(param) checkIdentical(n_connections, .n_connections()) } ## Check bpjobname test_BatchtoolsParam_bpjobname <- function() { checkIdentical("BPJOB", bpjobname(BatchtoolsParam())) checkIdentical("myjob", bpjobname(BatchtoolsParam(jobname="myjob"))) } ## Check bpstopOnError test_BatchtoolsParam_bpstopOnError <- function() { checkTrue(bpstopOnError(BatchtoolsParam())) checkIdentical(FALSE, bpstopOnError(BatchtoolsParam(stop.on.error=FALSE))) } ## Check bptimeout test_BatchtoolsParam_bptimeout <- function() { checkEquals(BiocParallel:::WORKER_TIMEOUT, bptimeout(BatchtoolsParam())) checkEquals(123L, bptimeout(BatchtoolsParam(timeout=123))) } ## Check bpRNGseed test_BatchtoolsParam_bpRNGseed <- function() { n_connections <- .n_connections() ## Check setting RNGseed param <- 
BatchtoolsParam(RNGseed=123L) checkEqualsNumeric(123L, bpRNGseed(param)) ## Check reset RNGseed new_seed <- 234L bpRNGseed(param) <- new_seed checkEqualsNumeric(new_seed, bpRNGseed(param)) ## Check after bpstart bpstart(param) checkEqualsNumeric(new_seed, bpRNGseed(param)) checkEqualsNumeric(new_seed, param$registry$seed) bpstop(param) ## Check failure to reset ## ## Check NULL value param <- BatchtoolsParam() checkTrue(is.na(bpRNGseed(param))) ## ## Check fail checkException({bpRNGseed(param) <- "abc"}) checkIdentical(n_connections, .n_connections()) } test_BatchtoolsParam_bplog <- function() { n_connections <- .n_connections() ## Test param w/o log and logdir checkTrue(is.na(bplogdir(BatchtoolsParam()))) checkTrue(!bplog(BatchtoolsParam())) ## test param with log, w/o logdir param <- BatchtoolsParam(log=TRUE) checkTrue(bplog(param)) checkTrue(is.na(bplogdir(param))) ## Check if setter works temp_log_dir <- tempfile() dir.create(temp_log_dir) bplogdir(param) <- temp_log_dir checkIdentical(temp_log_dir, bplogdir(param)) ## test param without log and w logdir checkException(BatchtoolsParam(logdir=temp_log_dir)) ## check logs in logdir param <- BatchtoolsParam(log=TRUE, logdir=temp_log_dir) bplapply(1:5, sqrt, BPPARAM=param) checkTrue(file.exists(temp_log_dir)) checkTrue(file.exists(file.path(temp_log_dir, "logs"))) checkIdentical(n_connections, .n_connections()) } test_BatchtoolsParam_available_clusters <- function() { clusters <- BiocParallel:::.BATCHTOOLS_CLUSTERS checkTrue(all.equal( c("socket", "multicore", "interactive", "sge", "slurm", "lsf", "openlava", "torque"), clusters)) } test_BatchtoolsParam_template <- function() { .bptemplate <- BiocParallel:::.bptemplate cluster <- "socket" param <- BatchtoolsParam(cluster=cluster) checkTrue(is.na(.bptemplate(param))) cluster <- "multicore" if (BiocParallel:::.batchtoolsClusterAvailable(cluster)) { param <- BatchtoolsParam(cluster=cluster) checkTrue(is.na(.bptemplate(param))) } cluster <- "interactive" param <- 
BatchtoolsParam(cluster=cluster) checkTrue(is.na(.bptemplate(param))) ## Test clusters with template cluster <- "sge" if (BiocParallel:::.batchtoolsClusterAvailable(cluster)) { param <- BatchtoolsParam(workers=2, cluster=cluster) checkIdentical("sge-simple.tmpl", basename(.bptemplate(param))) } cluster <- "slurm" if (BiocParallel:::.batchtoolsClusterAvailable(cluster)) { param <- BatchtoolsParam(workers=2, cluster=cluster) checkIdentical("slurm-simple.tmpl", basename(.bptemplate(param))) } cluster <- "lsf" if (BiocParallel:::.batchtoolsClusterAvailable(cluster)) { param <- BatchtoolsParam(workers=2, cluster=cluster) checkIdentical("lsf-simple.tmpl", basename(.bptemplate(param))) } cluster <- "openlava" if (BiocParallel:::.batchtoolsClusterAvailable(cluster)) { param <- BatchtoolsParam(workers=2, cluster=cluster) checkIdentical("openlava-simple.tmpl", basename(.bptemplate(param))) } cluster <- "torque" if (BiocParallel:::.batchtoolsClusterAvailable(cluster)) { param <- BatchtoolsParam(workers=2, cluster=cluster) checkIdentical("torque-lido.tmpl", basename(.bptemplate(param))) } ## Check setting template to file path cluster <- "sge" if (BiocParallel:::.batchtoolsClusterAvailable(cluster)) { template <- system.file( "templates", "sge-simple.tmpl", package="batchtools" ) param <- BatchtoolsParam( workers=2, cluster=cluster, template=template ) checkIdentical(template, .bptemplate(param)) } } ## Run only of SGE clusters, this will fail on other machines test_BatchtoolsParam_sge <- function() { n_connections <- .n_connections() if (!BiocParallel:::.batchtoolsClusterAvailable("sge")) return() fun <- function(x) Sys.getpid() template <- system.file( package="BiocParallel", "unitTests", "test_script", "test-sge-template.tmpl" ) param <- BatchtoolsParam(workers=2, cluster="sge", template=template) bpstart(param) checkIdentical("SGE", param$registry$backend) result <- bplapply(1:5, fun, BPPARAM=param) checkIdentical(2L, length(unique(unlist(result)))) bpstop(param) 
checkIdentical(n_connections, .n_connections())
}

## TODO: write tests for other cluster types, slurm, lsf, torque, openlava

test_BatchtoolsParam_bpmapply <- function() {
    n_connections <- .n_connections()
    fun <- function(x, y, z) x + y + z
    ## Initial test
    param <- BatchtoolsParam()
    result <- bpmapply(fun, x = 1:3, y = 1:3, MoreArgs = list(z = 1),
                       SIMPLIFY = TRUE, BPPARAM = param)
    checkIdentical(c(3,5,7), result)
    cluster <- "interactive"
    param <- BatchtoolsParam(workers=2, cluster=cluster)
    result <- bpmapply(fun, x = 1:3, y = 1:3, MoreArgs = list(z = 1),
                       SIMPLIFY = TRUE, BPPARAM=param)
    checkIdentical(c(3,5,7), result)
    cluster <- "multicore"
    if (BiocParallel:::.batchtoolsClusterAvailable(cluster)) {
        param <- BatchtoolsParam(workers=2, cluster=cluster)
        result <- bpmapply(fun, x = 1:3, y = 1:3, MoreArgs = list(z = 1),
                           SIMPLIFY = TRUE, BPPARAM=param)
        checkIdentical(c(3,5,7), result)
    }
    cluster <- "socket"
    param <- BatchtoolsParam(workers=2, cluster=cluster)
    result <- bpmapply(fun, x = 1:3, y = 1:3, MoreArgs = list(z = 1),
                       SIMPLIFY = TRUE, BPPARAM=param)
    checkIdentical(c(3,5,7), result)
    checkIdentical(n_connections, .n_connections())
}

test_BatchtoolsParam_bpvec <- function() {
    ## Multicore
    param <- BatchtoolsParam(workers=2)
    result <- bpvec(1:10, seq_along, BPPARAM=param)
    target <- as.integer(rep(1:5, 2))
    checkIdentical(target, result)
    ## socket
    param <- BatchtoolsParam(workers=2, cluster="socket")
    result <- bpvec(1:10, seq_along, BPPARAM=param)
    target <- as.integer(rep(1:5, 2))
    checkIdentical(target, result)
}

test_BatchtoolsParam_bpvectorize <- function() {
    psqrt <- bpvectorize(sqrt)
    checkTrue(is(psqrt, "function"))
    ## Multicore
    param <- BatchtoolsParam(workers=2)
    bpseq_along <- bpvectorize(seq_along, BPPARAM=param)
    res <- bpseq_along(1:10)
    target <- as.integer(rep(1:5, 2))
    checkIdentical(as.integer(target), res)
    ## Socket
    param <- BatchtoolsParam(workers=2, cluster="socket")
    bpseq_along <- bpvectorize(seq_along, BPPARAM=param)
    res <- bpseq_along(1:10)
    target <- as.integer(rep(1:5, 2))
    checkIdentical(as.integer(target), res)
}

test_BatchtoolsParam_bpiterate <- function() {
    n_connections <- .n_connections()
    ## Iterator function
    ITER <- function(n=5) {
        i <- 0L
        function() {
            i <<- i + 1L
            if (i > n)
                return(NULL)
            rep(i, 100)
        }
    }
    ## test function
    FUN <- function(x, k) {
        sum(x) + k
    }
    ## Multicore cluster
    param <- BatchtoolsParam()
    res <- bpiterate(ITER=ITER(), FUN=FUN, k=5, BPPARAM=param)
    ## Check identical result
    target <- list(105, 205, 305, 405, 505)
    checkIdentical(target, res)
    ## socket cluster
    param <- BatchtoolsParam(cluster="socket")
    res <- bpiterate(ITER=ITER(), FUN=FUN, k=5, BPPARAM=param)
    ## Check identical result
    checkIdentical(target, res)
    ## Test REDUCE on socket
    res <- bpiterate(ITER=ITER(), FUN=FUN, k=5, REDUCE=`+`, BPPARAM=param)
    ## Check identical result
    checkIdentical(1525, res)
    ## Test REDUCE, init on multicore
    param <- BatchtoolsParam()
    res <- bpiterate(ITER=ITER(), FUN=FUN, k=5, REDUCE=`+`, init = 10,
                     BPPARAM=param)
    ## Check identical result
    checkIdentical(1535, res)
    checkIdentical(n_connections, .n_connections())
}

test_BatchtoolsParam_bpsaveregistry <- function() {
    .bpregistryargs <- BiocParallel:::.bpregistryargs
    .bpsaveregistry <- BiocParallel:::.bpsaveregistry
    .bpsaveregistry_path <- BiocParallel:::.bpsaveregistry_path
    file.dir <- tempfile()
    ## Set param with save registry
    registryargs <- batchtoolsRegistryargs(file.dir = file.dir)
    param <- BatchtoolsParam(saveregistry=TRUE, registryargs = registryargs)
    checkIdentical(.bpsaveregistry(param), TRUE)
    checkIdentical(.bpregistryargs(param)$file.dir, file.dir)
    ## increment path extension
    file.dir <- file.path(dirname(file.dir), basename(file.dir))
    checkIdentical(.bpsaveregistry_path(param), paste0(file.dir, "-1"))
    dir.create(.bpsaveregistry_path(param))
    checkIdentical(.bpsaveregistry_path(param), paste0(file.dir, "-2"))
    ## create registry
    path <- .bpsaveregistry_path(param)
    checkTrue(!dir.exists(path))
    res <- bplapply(1:5, sqrt, BPPARAM=param)
checkTrue(dir.exists(path)) } BiocParallel/inst/unitTests/test_BiocParallelParam.R0000644000175200017520000000137314516004410023577 0ustar00biocbuildbiocbuildmessage("Testing BiocParallelParam") test_BiocParallelParam <- function() { ## BiocParallelParam is a virtual class checkException(BiocParallel:::.BiocParallelParam(), silent=TRUE) ## minimal non-virtual class & constructor .A <- setRefClass("A", contains = "BiocParallelParam") A <- function(...) { prototype <- .prototype_update(.BiocParallelParam_prototype, ...) do.call(.A, prototype) } ## no arg constructor checkTrue(validObject(A())) ## non-default inherited slot checkIdentical("WARN", bpthreshold(A(threshold = "WARN"))) ## workers (specified as character()) more than tasks checkException( validObject(A(workers = rep("a", 3L), tasks = 2L)), silent = TRUE ) } BiocParallel/inst/unitTests/test_DoparParam.R0000644000175200017520000000524014516004410022310 0ustar00biocbuildbiocbuildmessage("Testing DoparParam") test_DoparParam_orchestration_error <- function() { test <- requireNamespace("foreach", quietly = TRUE) && requireNamespace("doParallel", quietly = TRUE) if (!test) DEACTIVATED("'foreach' or 'doParallel' not available") if (identical(.Platform$OS.type, "windows")) DEACTIVATED("'DoparParam' orchestration error test not run on Windows") y <- tryCatch({ cl <- parallel::makeCluster(1L) doParallel::registerDoParallel(cl) bplapply(1L, function(x) quit("no"), BPPARAM = DoparParam()) }, error = function(e) { conditionMessage(e) }, finally = { parallel::stopCluster(cl) }) checkTrue(startsWith(y, "'DoparParam()' foreach() error occurred: ")) } test_DoparParam_bplapply <- function() { test <- requireNamespace("foreach", quietly = TRUE) && requireNamespace("doParallel", quietly = TRUE) if (!test) DEACTIVATED("'foreach' or 'doParallel' not available") cl <- parallel::makeCluster(2L) on.exit(parallel::stopCluster(cl)) doParallel::registerDoParallel(cl) res0 <- bplapply(1:9, function(x) x + 1L, BPPARAM = SerialParam()) 
res <- bplapply(1:9, function(x) x + 1L, BPPARAM = DoparParam()) checkIdentical(res, res0) } test_DoparParam_bplapply_rng <- function() { test <- requireNamespace("foreach", quietly = TRUE) && requireNamespace("doParallel", quietly = TRUE) if (!test) DEACTIVATED("'foreach' or 'doParallel' not available") cl <- parallel::makeCluster(2L) on.exit(parallel::stopCluster(cl)) doParallel::registerDoParallel(cl) res0 <- bplapply(1:9, function(x) runif(1), BPPARAM = SerialParam(RNGseed = 123)) res <- bplapply(1:9, function(x) runif(1), BPPARAM = DoparParam(RNGseed = 123)) checkIdentical(res, res0) } test_DoparParam_stop_on_error <- function() { test <- requireNamespace("foreach", quietly = TRUE) && requireNamespace("doParallel", quietly = TRUE) if (!test) DEACTIVATED("'foreach' or 'doParallel' not available") cl <- parallel::makeCluster(2L) on.exit(parallel::stopCluster(cl)) doParallel::registerDoParallel(cl) fun <- function(x) { if (x == 2) stop() x } res1 <- bptry(bplapply(1:4, fun, BPPARAM = DoparParam(stop.on.error = F))) checkEquals(res1[c(1,3,4)], as.list(c(1,3,4))) checkTrue(is(res1[[2]], "error")) res2 <- bptry(bplapply(1:6, fun, BPPARAM = DoparParam(stop.on.error = T))) checkEquals(res2[c(1,4:6)], as.list(c(1,4:6))) checkTrue(is(res2[[2]], "error")) checkTrue(is(res2[[3]], "error")) } BiocParallel/inst/unitTests/test_MulticoreParam.R0000644000175200017520000000146614516004410023214 0ustar00biocbuildbiocbuildmessage("Testing MulticoreParam") test_MulticoreParam_progressbar <- function() { if (.Platform$OS.type == "windows") return() checkIdentical(bptasks(MulticoreParam()), 0L) checkIdentical(bptasks(MulticoreParam(tasks = 0L, progressbar = TRUE)), 0L) checkIdentical( bptasks(MulticoreParam(progressbar = TRUE)), BiocParallel:::TASKS_MAXIMUM ) } test_MulticoreParam_bpforceGC <- function() { if (.Platform$OS.type == "windows") return() checkIdentical(FALSE, bpforceGC(MulticoreParam())) checkIdentical(FALSE, bpforceGC(MulticoreParam(force.GC = FALSE))) 
checkIdentical(TRUE, bpforceGC(MulticoreParam(force.GC = TRUE))) checkException(MulticoreParam(force.GC = NA), silent = TRUE) checkException(MulticoreParam(force.GC = 1:2), silent = TRUE) } BiocParallel/inst/unitTests/test_SerialParam.R0000644000175200017520000000214014516004410022456 0ustar00biocbuildbiocbuildmessage("Testing SerialParam") test_SerialParam_bpnworkers <- function() { checkIdentical(1L, bpnworkers(SerialParam())) checkIdentical(1L, bpnworkers(bpstart(SerialParam()))) checkIdentical(1L, bpnworkers(bpstop(bpstart(SerialParam())))) } test_SerialParam_bpbackend <- function() { checkIdentical(NULL, bpbackend(SerialParam())) checkTrue(is(bpbackend(bpstart(SerialParam())), "SerialBackend")) checkIdentical(NULL, bpbackend(bpstop(bpstart(SerialParam())))) } test_SerialParam_bpforceGC <- function() { checkIdentical(FALSE, bpforceGC(SerialParam())) checkIdentical(FALSE, bpforceGC(SerialParam(force.GC = FALSE))) checkIdentical(TRUE, bpforceGC(SerialParam(force.GC = TRUE))) checkException(SerialParam(force.GC = NA), silent = TRUE) checkException(SerialParam(force.GC = 1:2), silent = TRUE) } test_SerialParam_bpisup_start_stop <- function() { param <- SerialParam() checkIdentical(FALSE, bpisup(param)) # not always up param <- bpstart(param) checkIdentical(TRUE, bpisup(param)) param <- bpstop(param) checkIdentical(FALSE, bpisup(param)) } BiocParallel/inst/unitTests/test_SnowParam.R0000644000175200017520000001405314516004410022173 0ustar00biocbuildbiocbuildmessage("Testing SnowParam") test_SnowParam_construction <- function() { checkException(SnowParam(logdir = tempdir())) p <- MulticoreParam(jobname = 'test') checkIdentical(bpjobname(p), 'test') } test_SnowParam_SOCK <- function() { if (!requireNamespace("snow", quietly=TRUE)) DEACTIVATED("'snow' package did not load") param <- SnowParam(2, "SOCK", tasks=2) checkIdentical(FALSE, bpisup(param)) exp <- bplapply(1:2, function(i) Sys.getpid(), BPPARAM=param) checkIdentical(2L, length(unique(unlist(exp)))) 
checkIdentical(FALSE, bpisup(param)) } test_SnowParam_SOCK_character <- function() { bpstop(bpstart(SnowParam("localhost"))) } test_SnowParam_MPI <- function() { if (.Platform$OS.type == "windows") DEACTIVATED("MPI tests not run on Windows") DEACTIVATED("MPI tests not run") param <- SnowParam(2, "MPI", tasks=2) checkIdentical(FALSE, bpisup(param)) exp <- bplapply(1:2, function(i) mpi.comm.rank(), BPPARAM=param) checkIdentical(c(1L, 2L), sort(unlist(exp))) checkIdentical(FALSE, bpisup(param)) } test_SnowParam_coerce_from_SOCK <- function() { if (!requireNamespace("snow", quietly=TRUE)) DEACTIVATED("'snow' package did not load") cl <- parallel::makeCluster(2L, "SOCK") p <- as(cl, "SnowParam") checkTrue(validObject(p)) obs <- tryCatch(bpstart(p), error=conditionMessage) exp <- "'bpstart' not available; instance from outside BiocParallel?" checkIdentical(exp, obs) obs <- tryCatch(bpstop(p), warning=conditionMessage) exp <- "'bpstop' not available; instance from outside BiocParallel?" checkIdentical(exp, obs) exp <- bplapply(1:2, function(i) Sys.getpid(), BPPARAM=p) checkIdentical(2L, length(unique(unlist(exp)))) checkIdentical(TRUE, bpisup(p)) parallel::stopCluster(cl) } test_SnowParam_coerce_from_MPI <- function() { if (.Platform$OS.type == "windows") DEACTIVATED("MPI tests not run on Windows") if (!requireNamespace("snow", quietly=TRUE) || !requireNamespace("Rmpi", quietly=TRUE)) DEACTIVATED("'snow' and/or 'Rmpi' package did not load") DEACTIVATED("MPI tests not run") cl <- parallel::makeCluster(2L, "MPI") p <- as(cl, "SnowParam") checkTrue(validObject(p)) obs <- tryCatch(bpstart(p), error=conditionMessage) exp <- "'bpstart' not available; instance from outside BiocParallel?" checkIdentical(exp, obs) obs <- tryCatch(bpstop(p), error=conditionMessage) exp <- "'bpstop' not available; instance from outside BiocParallel?" 
checkIdentical(exp, obs) exp <- bplapply(1:2, function(i) mpi.comm.rank(), BPPARAM=p) checkIdentical(c(1L, 2L), sort(unlist(exp))) checkIdentical(TRUE, bpisup(p)) parallel::stopCluster(cl) } test_SnowParam_workers <- function() { if (.Platform$OS.type == "windows") return() if (!requireNamespace("snow", quietly=TRUE) || !requireNamespace("Rmpi", quietly=TRUE)) DEACTIVATED("'snow' and/or 'Rmpi' package did not load") checkException(SnowParam("host", "MPI"), silent=TRUE) checkException(SnowParam("host", "FORK"), silent=TRUE) } test_SnowParam_progressbar <- function() { checkIdentical(bptasks(SnowParam()), 0L) checkIdentical(bptasks(SnowParam(tasks = 0L, progressbar = TRUE)), 0L) checkIdentical( bptasks(SnowParam(progressbar = TRUE)), BiocParallel:::TASKS_MAXIMUM ) } test_SnowParam_bpforceGC <- function() { checkIdentical(FALSE, bpforceGC(SnowParam())) checkIdentical(FALSE, bpforceGC(SnowParam(force.GC = FALSE))) checkIdentical(TRUE, bpforceGC(SnowParam(force.GC = TRUE))) checkException(SnowParam(force.GC = NA), silent = TRUE) checkException(SnowParam(force.GC = 1:2), silent = TRUE) } .test_cache <- function(msg) { ## No initial cache msg1 <- BiocParallel:::.load_task_static(msg) checkIdentical(msg, msg1) ## Extract the dynamic part of the task msg2 <- BiocParallel:::.task_dynamic(msg) checkTrue(!xor(msg2$static.fun, isTRUE(msg2$data$fun))) if (length(msg2$static.args)) checkTrue(!any(msg2$static.args %in% names(msg2$data$args))) else checkTrue(all(msg2$static.args %in% names(msg2$data$args))) ## rebuild the EXEC msg3 <- BiocParallel:::.load_task_static(msg2) checkIdentical(msg, msg3) ## create another different msg msg4 <- BiocParallel:::.EXEC("test", function(x)x, args = list(a=1,b=1,c=1), static.fun = TRUE, static.args = c("a","b") ) msg5 <- BiocParallel:::.load_task_static(msg4) checkIdentical(msg5, msg4) } test_SnowParam_taskCache <- function() { msg1 <- BiocParallel:::.EXEC("mytag1", identity, args = list(a=1,b=2,c=3) ) .test_cache(msg1) msg2 <- 
BiocParallel:::.EXEC("mytag2", identity, args = list(a=1,b=2,c=3), static.fun = TRUE ) .test_cache(msg2) msg3 <- BiocParallel:::.EXEC("mytag3", identity, args = list(a=1,b=2,c=3), static.fun = TRUE, static.args = c("a","b") ) .test_cache(msg3) msg4 <- BiocParallel:::.EXEC("mytag", identity, args = NULL, static.fun = TRUE ) .test_cache(msg4) } test_SnowParam_fallback <- function(){ ## trigger fallback p <- SnowParam(1) res <- bplapply(1, function(x) Sys.getpid(), BPPARAM = p)[[1]] checkTrue(res == Sys.getpid()) ## disable fallback bpfallback(p) <- FALSE res <- bplapply(1, function(x) Sys.getpid(), BPPARAM = p)[[1]] checkTrue(res != Sys.getpid()) ## enable fallback again bpfallback(p) <- TRUE res <- bplapply(1, function(x) Sys.getpid(), BPPARAM = p)[[1]] checkTrue(res == Sys.getpid()) ## no fallback p <- SnowParam(2) res <- bplapply(1, function(x) Sys.getpid(), BPPARAM = p)[[1]] checkTrue(res != Sys.getpid()) } BiocParallel/inst/unitTests/test_bpaggregate.R0000644000175200017520000000201414516004410022526 0ustar00biocbuildbiocbuildmessage("Testing bpaggregate") test_bpaggregate <- function() { x <- data.frame(a=1:10, b=10:1) by <- list(c(rep("a", 5), rep("b", 5))) simplify <- TRUE FUN <- mean x1 <- aggregate(x, by=by, FUN=FUN) param <- bpparam() bpworkers(param) <- 2 x2 <- bpaggregate(x, by=by, FUN=FUN, BPPARAM=param, simplify=simplify) checkEquals(x1, x2) by[[2]] <- c(rep("c", 8), rep("d", 2)) x1 <- aggregate(x, by=by, FUN=FUN) x2 <- bpaggregate(x, by=by, FUN=FUN, BPPARAM=param, simplify=simplify) checkEquals(x1, x2) closeAllConnections() TRUE } test_bpaggregate_formula <- function() { f <- Sepal.Length ~ Species iris1 <- iris iris1$Species <- # FIXME: bpaggregate doesn't respect factor as.character(iris1$Species) x1 <- aggregate(f, data=iris1, FUN=sum) x2 <- bpaggregate(f, data = iris1, FUN = sum) checkEquals(x1, x2) iris1 <- iris1[sample(nrow(iris1)),] x3 <- bpaggregate(f, data = iris1, FUN = sum) checkEquals(x2, x3) } 
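## Illustrative sketch (not part of the original suite): bpaggregate()
## is expected to mirror stats::aggregate() for a simple grouped mean,
## here pinned to an explicit SerialParam() backend so the comparison
## does not depend on the registered default. The test name and the
## toy data frame below are hypothetical, not drawn from the package.
test_bpaggregate_serial_sketch <- function() {
    df <- data.frame(value=1:6, grp=rep(c("a", "b"), each=3))
    x1 <- aggregate(value ~ grp, data=df, FUN=mean)
    x2 <- bpaggregate(value ~ grp, data=df, FUN=mean, BPPARAM=SerialParam())
    checkEquals(x1, x2)
}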
BiocParallel/inst/unitTests/test_bpexportglobals.R0000644000175200017520000000323514516004410023473 0ustar00biocbuildbiocbuildmessage("Testing bpexportglobals") test_bpexportglobals_params <- function() { ## Multicore if (.Platform$OS.type == "unix") { param <- MulticoreParam() checkIdentical(bpexportglobals(param), TRUE) bpexportglobals(param) <- FALSE checkIdentical(bpexportglobals(param), FALSE) param <- MulticoreParam(exportglobals=FALSE) checkIdentical(bpexportglobals(param), FALSE) } ## Snow param <- SnowParam() checkIdentical(bpexportglobals(param), TRUE) bpexportglobals(param) <- FALSE checkIdentical(bpexportglobals(param), FALSE) param <- SnowParam(exportglobals=FALSE) checkIdentical(bpexportglobals(param), FALSE) ## Batchtools param <- BatchtoolsParam() checkIdentical(bpexportglobals(param), TRUE) bpexportglobals(param) <- FALSE checkIdentical(bpexportglobals(param), FALSE) param <- BatchtoolsParam(exportglobals=FALSE) checkIdentical(bpexportglobals(param), FALSE) } test_bpexportglobals_bplapply <- function() { oopts <- options(BAR="baz") on.exit(options(oopts)) param <- SnowParam(2L, exportglobals=FALSE) current <- bplapply(1:2, function(i) getOption("BAR"), BPPARAM=param) checkIdentical(NULL, unlist(current)) param <- SnowParam(2L, exportglobals=TRUE) current <- bplapply(1:2, function(i) getOption("BAR"), BPPARAM=param) checkIdentical("baz", unique(unlist(current))) } test_bpexportglobals_lazyEvaluation <- function(){ foo <- function(k){ param <- SnowParam(2L, exportglobals=TRUE) bplapply(1:2, function(x){ k }, BPPARAM = param) } k <- 1 checkIdentical(foo(k), list(1, 1)) } BiocParallel/inst/unitTests/test_bpiterate.R0000644000175200017520000001036114516004410022241 0ustar00biocbuildbiocbuildmessage("Testing bpiterate") quiet <- suppressWarnings .lazyCount <- function(count) { i <- 0L function() { if (i >= count) return(NULL) i <<- i + 1L i } } test_bpiterate_Params <- function() { ## chunks greater than number of workers x <- 1:5 expected <- lapply(x, 
sqrt) FUN <- function(count, ...) sqrt(count) params <- list(serial=SerialParam(), snow=SnowParam(2)) if (.Platform$OS.type != "windows") params$mc <- MulticoreParam(2) for (p in params) { ITER <- .lazyCount(length(x)) quiet(res <- bpiterate(ITER, FUN, BPPARAM=p)) checkIdentical(expected, res) } ## chunks less than number of workers x <- 1:2 expected <- lapply(x, sqrt) FUN <- function(count, ...) sqrt(count) params <- list(serial=SerialParam(), snow=SnowParam(3)) if (.Platform$OS.type != "windows") params$mc <- MulticoreParam(3) for (p in params) { ITER <- .lazyCount(length(x)) quiet(res <- bpiterate(ITER, FUN, BPPARAM=p)) checkIdentical(expected, res) } cl <- parallel::makeCluster(2) doParallel::registerDoParallel(cl) params <- list(dopar=DoparParam()) for (p in params) { ITER <- .lazyCount(length(x)) checkException(bpiterate(ITER, FUN, BPPARAM=p), silent=TRUE) } ## clean up foreach::registerDoSEQ() parallel::stopCluster(cl) closeAllConnections() TRUE } test_bpiterate_REDUCE <- function() { ncount <- 3L params <- list(snow=SnowParam(ncount)) ## On Windows MulticoreParam dispatches to SerialParam where ## 'reduce.in.order' does not apply (always TRUE) if (.Platform$OS.type != "windows") params <- c(params, multi=MulticoreParam(ncount)) for (p in params) { ## no REDUCE FUN <- function(count, ...) rep(count, 10) ITER <- .lazyCount(ncount) res <- bpiterate(ITER, FUN, BPPARAM=p) checkTrue(length(res) == ncount) expected <- list(rep(1L, 10), rep(2L, 10), rep(3L, 10)) checkIdentical(expected, res) ## REDUCE FUN <- function(count, ...) rep(count, 10) ITER <- .lazyCount(ncount) res <- bpiterate(ITER, FUN, BPPARAM=p, REDUCE=`+`) checkIdentical(rep(6L, 10), res) FUN <- function(count, ...) 
{ Sys.sleep(3 - count) count } ## 'reduce.in.order' FALSE ITER <- .lazyCount(ncount) res <- bpiterate(ITER, FUN, BPPARAM=p, REDUCE=paste0, reduce.in.order=FALSE) checkIdentical("321", res) ITER <- .lazyCount(ncount) res <- quiet(bpiterate(ITER, FUN, BPPARAM=p, REDUCE=paste0, init=0, reduce.in.order=FALSE)) checkIdentical("0321", res) ## 'reduce.in.order' TRUE ITER <- .lazyCount(ncount) res <- bpiterate(ITER, FUN, BPPARAM=p, REDUCE=paste0, reduce.in.order=TRUE) checkIdentical("123", res) ITER <- .lazyCount(ncount) res <- bpiterate(ITER, FUN, BPPARAM=p, REDUCE=paste0, init=0, reduce.in.order=TRUE) checkIdentical("0123", res) } ## clean up closeAllConnections() TRUE } test_bpiterate_REDUCE_SerialParam <- function() { p <- SerialParam() FUN <- identity ## REDUCE missing, concatenate ITER <- .lazyCount(0) res <- suppressWarnings({ ## warning: first invocation of 'ITER()' returned NULL bpiterate(ITER, FUN, BPPARAM=p) }) checkIdentical(list(), res) ITER <- .lazyCount(1) res <- bpiterate(ITER, FUN, BPPARAM=p) checkIdentical(list(1L), res) ITER <- .lazyCount(5) res <- bpiterate(ITER, FUN, BPPARAM=p) checkIdentical(as.list(1:5), res) ## REDUCE == `+` ITER <- .lazyCount(0) res <- suppressWarnings({ ## warning: first invocation of 'ITER()' returned NULL res <- bpiterate(ITER, FUN, BPPARAM=p, REDUCE=`+`) }) checkIdentical(NULL, res) ITER <- .lazyCount(1) res <- bpiterate(ITER, FUN, BPPARAM=p, REDUCE=`+`) checkIdentical(1L, res) ITER <- .lazyCount(5) res <- bpiterate(ITER, FUN, BPPARAM=p, REDUCE=`+`) checkIdentical(15L, res) } BiocParallel/inst/unitTests/test_bplapply.R0000644000175200017520000001034414516004410022106 0ustar00biocbuildbiocbuildmessage("Testing bplapply") quiet <- suppressWarnings test_bplapply_Params <- function() { cl <- parallel::makeCluster(2) doParallel::registerDoParallel(cl) params <- list( serial=SerialParam(), snow=SnowParam(2), dopar=DoparParam() ) if (.Platform$OS.type != "windows") params$mc <- MulticoreParam(2) x <- 1:10 expected <- lapply(x, sqrt) 
for (param in names(params)) { current <- quiet(bplapply(x, sqrt, BPPARAM=params[[param]])) checkIdentical(expected, current) } # test empty input for (param in names(params)) { current <- quiet(bplapply(list(), identity, BPPARAM=params[[param]])) checkIdentical(list(), current) } ## clean up foreach::registerDoSEQ() parallel::stopCluster(cl) closeAllConnections() TRUE } test_bplapply_symbols <- function() { cl <- parallel::makeCluster(2) doParallel::registerDoParallel(cl) params <- list( serial=SerialParam(), snow=SnowParam(2), dopar=DoparParam() ) if (.Platform$OS.type != "windows") params$mc <- MulticoreParam(2) x <- list(as.symbol(".XYZ")) expected <- lapply(x, as.character) for (param in names(params)) { current <- quiet(bplapply(x, as.character, BPPARAM=params[[param]])) checkIdentical(expected, current) } ## clean up foreach::registerDoSEQ() parallel::stopCluster(cl) closeAllConnections() TRUE } test_bplapply_named_list <- function() { X <- list() Y <- character() checkIdentical(X, bplapply(X, identity)) checkIdentical(X, bplapply(Y, identity)) names(X) <- names(Y) <- character() checkIdentical(X, bplapply(X, identity)) checkIdentical(X, bplapply(Y, identity)) X <- list(a = 1:2) checkIdentical(X, bplapply(X, identity)) X <- list(c(a = 1)) checkIdentical(X, bplapply(X, identity)) X <- list(A = c(a = 1:2, b = 1:3), B = c(b = 1:2)) checkIdentical(X, bplapply(X, identity)) X <- list(a = 1:2, b = 3:4) checkIdentical(X, bplapply(X, identity)) X <- list(c(a = 1)) checkIdentical(X, bplapply(X, identity)) X <- list(A = c(a = 1, b=2), B = c(c = 1, d = 2)) checkIdentical(X, bplapply(X, identity)) } test_bplapply_named_list_with_REDO <- function(){ X = setNames(1:3, letters[1:3]) param = SerialParam(stop.on.error = FALSE) FUN1 = function(i) if (i == 2) stop() else i result <- bptry(bplapply(X, FUN1, BPPARAM = param)) checkIdentical(names(result), names(X)) FUN2 = function(i) 0 redo <- bplapply(X, FUN2, BPREDO = result, BPPARAM = param) checkIdentical(names(redo), 
names(X)) } test_bplapply_custom_subsetting <- function(){ ## We have a class A in the previous unit test .B <- setClass("B", slots = c(b = "integer")) setMethod("[", "B", function(x, i, j, ...) initialize(x, b = x@b[i])) setMethod("length", "B", function(x) length(x@b)) as.list.B <<- function(x, ...) lapply(seq_along(x), function(i) x[i]) x <- .B(b = 1:3) expected <- lapply(x, function(elt) elt@b) current <- quiet(bplapply(x, function(elt) elt@b, BPPARAM=SerialParam())) checkIdentical(expected, current) ## Remote worker does not have the definition of the class B res <- tryCatch( bplapply(x, function(elt) elt@b, BPPARAM=SnowParam(workers = 2)), error = identity ) checkTrue(is(res, "bplist_error")) rm(as.list.B, inherits = TRUE) } test_bplapply_auto_export <- function(){ p <- SnowParam(2, exportglobals = FALSE) ## user defined symbols assign("y", 10, envir = .GlobalEnv) on.exit(rm(y, envir = .GlobalEnv)) fun <- function(x) y environment(fun) <- .GlobalEnv bpexportvariables(p) <- TRUE res <- bplapply(1:2, fun, BPPARAM = p) checkIdentical(res, rep(list(10), 2)) bpexportvariables(p) <- FALSE checkException(bplapply(1:2, fun, BPPARAM = p), silent = TRUE) ## symbols defined in a package fun2 <- function(x) SerialParam() environment(fun2) <- .GlobalEnv bpexportvariables(p) <- TRUE res <- bplapply(1:2, fun2, BPPARAM = p) checkTrue(is(res[[1]], "SerialParam")) bpexportvariables(p) <- FALSE checkException(bplapply(1:2, fun2, BPPARAM = p), silent = TRUE) } BiocParallel/inst/unitTests/test_bploop.R0000644000175200017520000002427614516004410021567 0ustar00biocbuildbiocbuildmessage("Testing bploop") .lapplyReducer <- BiocParallel:::.lapplyReducer .iterateReducer <- BiocParallel:::.iterateReducer .reducer_value <- BiocParallel:::.reducer_value .reducer_add <- BiocParallel:::.reducer_add .reducer_ok <- BiocParallel:::.reducer_ok .reducer_complete <- BiocParallel:::.reducer_complete unevaluated <- BiocParallel:::.error_unevaluated() notAvailable <- 
BiocParallel:::.error_not_available("HI") ## Normal reduce process test_reducer_lapply_1 <- function() { r <- .lapplyReducer(10, NULL) result <- rep(list(unevaluated), 10) checkIdentical(result, .reducer_value(r)) checkIdentical(result, { .reducer_add(r, 2, list(3,4)); .reducer_value(r) }) result[1:4] <- list(1,2,3,4) checkIdentical(result, { .reducer_add(r, 1, list(1,2)); .reducer_value(r) }) result[5:6] <- list(5,6) checkIdentical(result, { .reducer_add(r, 3, list(5,6)); .reducer_value(r) }) checkTrue(.reducer_ok(r)) checkTrue(!.reducer_complete(r)) result[7:10] <- list(7,8,9,10) checkIdentical(result, { .reducer_add(r, 4, list(7,8,9,10)); .reducer_value(r) }) checkTrue(.reducer_ok(r)) checkTrue(.reducer_complete(r)) } ## REDO test_reducer_lapply_2 <- function() { r <- .lapplyReducer(10, NULL) result <- rep(list(unevaluated), 10) checkIdentical(result, .reducer_value(r)) result[1:4] <- list(1,2,3,4) checkIdentical(result, { .reducer_add(r, 1, list(1,2,3,4)); .reducer_value(r) }) checkTrue(.reducer_ok(r)) checkTrue(!.reducer_complete(r)) values <- list(notAvailable,notAvailable,notAvailable,8, notAvailable,notAvailable) result[5:10] <- values checkIdentical(result, { .reducer_add(r, 2, values); .reducer_value(r) }) checkTrue(!.reducer_ok(r)) checkTrue(.reducer_complete(r)) ## REDO r2 <- .lapplyReducer(10, r) checkIdentical(c(5:7,9:10), r2$redo.index) checkTrue(.reducer_ok(r2)) checkTrue(!.reducer_complete(r2)) checkIdentical(result, { .reducer_add(r2, 2, list(7)); .reducer_value(r2) }) checkIdentical(result, { .reducer_add(r2, 3, list(9,10)); .reducer_value(r2) }) result[c(5:7,9:10)] <- list(5,6,7,9,10) checkIdentical(result, { .reducer_add(r2, 1, list(5,6)); .reducer_value(r2) }) checkTrue(.reducer_ok(r2)) checkTrue(.reducer_complete(r2)) ## REDO with new error r3 <- .lapplyReducer(10, r) result[5:7] <- list(5,6,notAvailable) .reducer_add(r3, 1, list(5,6,notAvailable)) .reducer_add(r3, 2, list(9,10)) checkIdentical(result, .reducer_value(r3)) } ## default reducer 
and reduce in order test_reducer_iterate_1 <- function() { r <- .iterateReducer(reduce.in.order=TRUE, reducer = NULL) checkTrue(.reducer_ok(r)) ## The reducer has no idea about the length of the result checkTrue(.reducer_complete(r)) checkIdentical(list(), .reducer_value(r)) .reducer_add(r, 2, list(2)) expect <- structure(list(NULL,2), errors = list('1'=unevaluated)) checkIdentical(expect, .reducer_value(r)) checkTrue(.reducer_ok(r)) ## The reducer knows at least the result 1 is missing checkTrue(!.reducer_complete(r)) .reducer_add(r, 1, list(1)) expect <- list(1,2) checkIdentical(expect, .reducer_value(r)) .reducer_add(r, 3, list(3)) expect <- list(1,2,3) checkIdentical(expect, .reducer_value(r)) .reducer_add(r, 5, list(notAvailable)) expect <- structure( list(1,2,3,NULL,NULL), errors=list('4'=unevaluated,'5'=notAvailable) ) checkIdentical(expect, .reducer_value(r)) checkTrue(!.reducer_ok(r)) checkTrue(!.reducer_complete(r)) ## BPREDO r2 <- .iterateReducer(reducer = r) checkIdentical(4:5, r2$redo.index) checkTrue(!.reducer_ok(r2)) checkTrue(!.reducer_complete(r2)) .reducer_add(r2, 2, list(5)) expect <- structure( list(1,2,3,NULL,5), errors=list('4'=unevaluated) ) checkIdentical(expect, .reducer_value(r2)) checkTrue(.reducer_ok(r2)) checkTrue(!.reducer_complete(r2)) .reducer_add(r2, 1, list(4)) expect <- list(1,2,3,4,5) checkIdentical(expect, .reducer_value(r2)) checkTrue(.reducer_ok(r2)) checkTrue(.reducer_complete(r2)) .reducer_add(r2, 3, list(6)) expect <- list(1,2,3,4,5,6) checkIdentical(expect, .reducer_value(r2)) checkTrue(.reducer_ok(r2)) checkTrue(.reducer_complete(r2)) .reducer_add(r2, 4, list(notAvailable)) expect <- structure( list(1,2,3,4,5,6,NULL), errors=list('7'=notAvailable) ) checkIdentical(expect, .reducer_value(r2)) checkTrue(!.reducer_ok(r2)) checkTrue(!.reducer_complete(r2)) ## BPREDO 2 r3 <- .iterateReducer(reducer = r2) checkIdentical(7L, r3$redo.index) .reducer_add(r3, 1, list(7)) expect <- list(1,2,3,4,5,6,7) checkIdentical(expect, 
.reducer_value(r3)) checkTrue(.reducer_ok(r3)) checkTrue(.reducer_complete(r3)) } ## customized reducer and reduce in order test_reducer_iterate_2 <- function() { r <- .iterateReducer(`+`, init=0, reduce.in.order=TRUE, reducer = NULL) checkIdentical(0, .reducer_value(r)) .reducer_add(r, 1, list(1)) expect <- 1 checkIdentical(expect, .reducer_value(r)) .reducer_add(r, 3, list(3)) expect <- structure(1, errors = list('2' = unevaluated)) checkIdentical(expect, .reducer_value(r)) checkTrue(.reducer_ok(r)) checkTrue(!.reducer_complete(r)) .reducer_add(r, 2, list(2)) expect <- 6 checkIdentical(expect, .reducer_value(r)) .reducer_add(r, 5, list(notAvailable)) expect <- structure(6, errors = list('4' = unevaluated, '5' = notAvailable)) checkIdentical(expect, .reducer_value(r)) checkTrue(!.reducer_ok(r)) checkTrue(!.reducer_complete(r)) ## BPREDO round1 r2 <- .iterateReducer(reducer = r) checkIdentical(4:5, r2$redo.index) .reducer_add(r2, 2, list(5)) expect <- structure(6, errors = list('4' = unevaluated)) checkIdentical(expect, .reducer_value(r2)) .reducer_add(r2, 1, list(4)) expect <- 15 checkIdentical(expect, .reducer_value(r2)) checkTrue(.reducer_ok(r2)) checkTrue(.reducer_complete(r2)) .reducer_add(r2, 3, list(notAvailable)) expect <- structure(15, errors = list('6' = notAvailable)) checkIdentical(expect, .reducer_value(r2)) checkTrue(!.reducer_ok(r2)) checkTrue(!.reducer_complete(r2)) ## BPREDO round2 r3 <- .iterateReducer(reducer = r2) checkIdentical(6L, r3$redo.index) .reducer_add(r3, 1, list(6)) expect <- 21 checkIdentical(expect, .reducer_value(r3)) .reducer_add(r3, 2, list(7)) expect <- 28 checkIdentical(expect, .reducer_value(r3)) checkTrue(.reducer_ok(r3)) checkTrue(.reducer_complete(r3)) checkTrue(all(sapply(as.list(r3$value.cache), is.null))) } ## customized reducer and reduce not in order test_reducer_iterate_3 <- function() { r <- .iterateReducer(`+`, init=0, reduce.in.order=FALSE, reducer = NULL) checkIdentical(0, .reducer_value(r)) .reducer_add(r, 1, 
list(1)) expect <- 1 checkIdentical(expect, .reducer_value(r)) .reducer_add(r, 3, list(3)) expect <- structure(4, errors = list('2' = unevaluated)) checkIdentical(expect, .reducer_value(r)) checkTrue(.reducer_ok(r)) checkTrue(!.reducer_complete(r)) .reducer_add(r, 2, list(2)) expect <- 6 checkIdentical(expect, .reducer_value(r)) .reducer_add(r, 5, list(notAvailable)) expect <- structure(6, errors = list('4' = unevaluated, '5' = notAvailable)) checkIdentical(expect, .reducer_value(r)) checkTrue(!.reducer_ok(r)) checkTrue(!.reducer_complete(r)) ## BPREDO round1 r2 <- .iterateReducer(reducer = r) checkIdentical(4:5, r2$redo.index) .reducer_add(r2, 2, list(5)) expect <- structure(11, errors = list('4' = unevaluated)) checkIdentical(expect, .reducer_value(r2)) .reducer_add(r2, 1, list(4)) expect <- 15 checkIdentical(expect, .reducer_value(r2)) checkTrue(.reducer_ok(r2)) checkTrue(.reducer_complete(r2)) .reducer_add(r2, 3, list(notAvailable)) expect <- structure(15, errors = list('6' = notAvailable)) checkIdentical(expect, .reducer_value(r2)) checkTrue(!.reducer_ok(r2)) checkTrue(!.reducer_complete(r2)) ## BPREDO round2 r3 <- .iterateReducer(reducer = r2) checkIdentical(6L, r3$redo.index) .reducer_add(r3, 1, list(6)) expect <- 21 checkIdentical(expect, .reducer_value(r3)) .reducer_add(r3, 2, list(7)) expect <- 28 checkIdentical(expect, .reducer_value(r3)) checkTrue(.reducer_ok(r3)) checkTrue(.reducer_complete(r3)) checkTrue(all(sapply(as.list(r3$value.cache), is.null))) } ## Test for a marginal case where the result is NULL ## and contains error test_reducer_iterate_4 <- function() { r <- .iterateReducer(function(x,y)NULL, init=NULL, reduce.in.order=FALSE, reducer = NULL) checkIdentical(NULL, .reducer_value(r)) .reducer_add(r, 1, list(1)) expect <- NULL checkIdentical(expect, .reducer_value(r)) .reducer_add(r, 2, list(notAvailable)) expect <- structure(list(),errors=list('2'=notAvailable)) checkIdentical(expect, .reducer_value(r)) } test_iterator_lapply <- function() { 
    .bploop_lapply_iter <- BiocParallel:::.bploop_lapply_iter
    .bploop_rng_iter <- BiocParallel:::.bploop_rng_iter
    X <- 1:10
    redo_index <- c(1:2, 6:8)
    iter <- .bploop_lapply_iter(X, redo_index, 10)
    checkIdentical(iter(), 1:2)
    checkIdentical(iter(), .bploop_rng_iter(3L))
    checkIdentical(iter(), 6:8)
    checkIdentical(iter(), list(NULL))
    checkIdentical(iter(), list(NULL))

    iter <- .bploop_lapply_iter(X, redo_index, 2)
    checkIdentical(iter(), 1:2)
    checkIdentical(iter(), .bploop_rng_iter(3L))
    checkIdentical(iter(), 6:7)
    checkIdentical(iter(), 8L)
    checkIdentical(iter(), list(NULL))
    checkIdentical(iter(), list(NULL))

    redo_index <- 6:8
    iter <- .bploop_lapply_iter(X, redo_index, 1)
    checkIdentical(iter(), .bploop_rng_iter(5L))
    checkIdentical(iter(), 6L)
    checkIdentical(iter(), 7L)
    checkIdentical(iter(), 8L)
    checkIdentical(iter(), list(NULL))
    checkIdentical(iter(), list(NULL))
}

## ---- BiocParallel/inst/unitTests/test_bpmapply.R ----

message("Testing bpmapply")

quiet <- suppressWarnings

test_bpmapply_MoreArgs_names <- function() {
    ## https://github.com/Bioconductor/BiocParallel/issues/51
    f <- function(x, y) x
    target <- bpmapply(f, 1:3, MoreArgs=list(x=1L))
    checkIdentical(rep(1L, 3), target)
}

test_bpmapply_Params <- function() {
    cl <- parallel::makeCluster(2)
    doParallel::registerDoParallel(cl)
    params <- list(
        serial=SerialParam(),
        snow=SnowParam(2),
        dopar=DoparParam()
    )
    if (.Platform$OS.type != "windows")
        params$mc <- MulticoreParam(2)

    x <- 1:10
    y <- rev(x)
    f <- function(x, y) x + y
    expected <- x + y
    for (param in params) {
        current <- quiet(bpmapply(f, x, y, BPPARAM=param))
        checkIdentical(expected, current)
    }

    # test names and simplify
    x <- setNames(1:5, letters[1:5])
    for (param in params) {
        for (SIMPLIFY in c(FALSE, TRUE)) {
            for (USE.NAMES in c(FALSE, TRUE)) {
                expected <- mapply(identity, x, USE.NAMES=USE.NAMES,
                                   SIMPLIFY=SIMPLIFY)
                current <- quiet(bpmapply(identity, x, USE.NAMES=USE.NAMES,
                                          SIMPLIFY=SIMPLIFY, BPPARAM=param))
checkIdentical(expected, current) } } } # test MoreArgs x <- setNames(1:5, letters[1:5]) f <- function(x, m) { x + m } expected <- mapply(f, x, MoreArgs=list(m=1)) for (param in params) { current <- quiet(bpmapply(f, x, MoreArgs=list(m=1), BPPARAM=param)) checkIdentical(expected, current) } # test empty input for (param in params) { current <- quiet(bpmapply(identity, BPPARAM=param)) checkIdentical(list(), current) } ## clean up foreach::registerDoSEQ() parallel::stopCluster(cl) closeAllConnections() } test_bpmapply_symbols <- function() { cl <- parallel::makeCluster(2) doParallel::registerDoParallel(cl) params <- list(serial=SerialParam(), snow=SnowParam(2), dopar=DoparParam()) if (.Platform$OS.type != "windows") params$mc <- MulticoreParam(2) x <- list(as.symbol(".XYZ")) expected <- mapply(as.character, x) for (param in names(params)) { current <- bpmapply(as.character, x, BPPARAM=params[[param]]) checkIdentical(expected, current) } ## clean up foreach::registerDoSEQ() parallel::stopCluster(cl) closeAllConnections() TRUE } test_bpmapply_named_list <- function() { X <- list() Y <- character() checkIdentical(X, bpmapply(identity)) checkIdentical(X, bpmapply(identity, X)) checkIdentical(mapply(identity, Y), bpmapply(identity, Y)) checkIdentical(X, bpmapply(identity, USE.NAMES = FALSE)) checkIdentical(X, bpmapply(identity, X, USE.NAMES = FALSE)) checkIdentical(X, bpmapply(identity, Y, USE.NAMES = FALSE)) names(X) <- names(Y) <- character() checkIdentical(X, bpmapply(identity, X)) checkIdentical(X, bpmapply(identity, Y)) checkIdentical(list(), bpmapply(identity, X, USE.NAMES = FALSE)) checkIdentical(list(), bpmapply(identity, Y, USE.NAMES = FALSE)) Y1 <- setNames(letters, letters) Y2 <- setNames(letters, LETTERS) checkIdentical(mapply(identity, Y1), bpmapply(identity, Y1)) checkIdentical(mapply(identity, Y2), bpmapply(identity, Y2)) X <- list(c(a = 1)) checkIdentical(X, bpmapply(identity, X, SIMPLIFY = FALSE)) X <- list(a = 1:2) checkIdentical(X, bpmapply(identity, X, 
SIMPLIFY = FALSE)) X <- list(a = 1:2, b = 1:4) checkIdentical(X, bpmapply(identity, X, SIMPLIFY = FALSE)) X <- list(A = c(a = 1:3)) checkIdentical(X, bpmapply(identity, X, SIMPLIFY = FALSE)) X <- list(A = c(a = 1, b=2), B = c(c = 1, d = 2)) checkIdentical(X, bpmapply(identity, X, SIMPLIFY = FALSE)) ## named arguments to bpmapply Y <- 1:3 checkIdentical(Y, bpmapply(identity, x = Y)) } test_transposeArgsWithIterations <- function() { .transposeArgsWithIterations <- BiocParallel:::.transposeArgsWithIterations ## list() when `mapply()` invoked with no arguments, `mapply(identity)` checkIdentical( list(), .transposeArgsWithIterations(list(), USE.NAMES = TRUE) ) checkIdentical( list(), .transposeArgsWithIterations(list(), USE.NAMES = FALSE) ) ## list(X) when `mapply()` invoked with one argument, `mapply(identity, X)` X <- list() XX <- list(X) checkIdentical(list(), .transposeArgsWithIterations(XX, USE.NAMES = TRUE)) checkIdentical(list(), .transposeArgsWithIterations(XX, USE.NAMES = FALSE)) ## `mapply(identity, character())` returns a _named_ list() X <- character() XX <- list(X) checkIdentical( setNames(list(), character()), .transposeArgsWithIterations(XX, TRUE) ) checkIdentical(list(), .transposeArgsWithIterations(XX, FALSE)) ## named arguments to mapply() are _not_ names of return value... 
X <- list() XX <- list(x = X) checkIdentical(list(), .transposeArgsWithIterations(XX, TRUE)) checkIdentical(list(), .transposeArgsWithIterations(XX, FALSE)) ## ...except if the argument is a character() X <- character() XX <- list(x = X) checkIdentical( setNames(list(), character()), .transposeArgsWithIterations(XX, TRUE) ) checkIdentical(list(), .transposeArgsWithIterations(XX, FALSE)) ## with multiple arguments, names are from the first argument XX <- list(c(a = 1, b = 2, c = 3), c(d = 4, e = 5, f = 6)) checkIdentical( setNames(list(list(1, 4), list(2, 5), list(3, 6)), letters[1:3]), .transposeArgsWithIterations(XX, TRUE) ) checkIdentical( list(list(1, 4), list(2, 5), list(3, 6)), .transposeArgsWithIterations(XX, FALSE) ) ## ...independent of names on the arguments XX <- list(A = c(a = 1, b = 2, c = 3), B = c(d = 4, e = 5, f = 6)) checkIdentical( list(a = list(A=1, B=4), b = list(A=2, B=5), c = list(A=3, B=6)), .transposeArgsWithIterations(XX, TRUE) ) checkIdentical( list(list(A=1, B=4), list(A=2, B=5), list(A=3, B=6)), .transposeArgsWithIterations(XX, FALSE) ) ## when the first argument is an unnamed character vector, names ## are values XX <- list(A = c("a", "b", "c"), B = 1:3) checkIdentical( list( a = list(A="a", B=1L), b = list(A="b", B=2L), c = list(A="c", B=3L) ), .transposeArgsWithIterations(XX, TRUE) ) checkIdentical( list(list(A="a", B=1L), list(A="b", B=2L), list(A="c", B=3L)), .transposeArgsWithIterations(XX, FALSE) ) ## ...except if there are names on the first vector... 
    XX <- list(A = setNames(letters[1:3], LETTERS[1:3]), B = 1:3)
    checkIdentical(
        list(
            A = list(A="a", B=1L), B = list(A="b", B=2L), C = list(A="c", B=3L)
        ),
        .transposeArgsWithIterations(XX, TRUE)
    )
    checkIdentical(
        list(list(A="a", B=1L), list(A="b", B=2L), list(A="c", B=3L)),
        .transposeArgsWithIterations(XX, FALSE)
    )
}

## ---- BiocParallel/inst/unitTests/test_bpoptions.R ----

message("Testing bpoptions")

.checkMessage <- function(x) {
    message <- character()
    result <- withCallingHandlers(x, message = function(condition) {
        message <<- c(message, conditionMessage(condition))
        invokeRestart("muffleMessage")
    })
    checkTrue(length(message) > 0)
}

## Normal reduce process
test_bpoptions_constructor <- function() {
    opts <- bpoptions()
    checkIdentical(opts, list())
    opts <- bpoptions(tasks = 1)
    checkIdentical(opts, list(tasks = 1))
    .checkMessage(opts <- bpoptions(randomArg = 1))
    checkIdentical(opts, list(randomArg = 1))
    .checkMessage(opts <- bpoptions(tasks = 1, randomArg = 1))
    checkIdentical(opts, list(tasks = 1, randomArg = 1))
}

test_bpoptions_bplapply <- function() {
    p <- SerialParam()
    ## bpoptions only changes BPPARAM temporarily
    oldValue <- bptasks(p)
    opts <- bpoptions(tasks = 100)
    result0 <- bplapply(1:2, function(x) x, BPPARAM = p, BPOPTIONS = opts)
    checkIdentical(bptasks(p), oldValue)
    ## check if bpoptions really works
    opts <- bpoptions(timeout = 1)
    checkException(
        bplapply(1:2, function(x) {
            t <- Sys.time()
            ## spin...
while(difftime(Sys.time(), t) < 2) {} }, BPPARAM = p, BPOPTIONS = opts) ) ## Random argument has no effect on bplapply .checkMessage(opts <- bpoptions(randomArg = 100)) result1 <- bplapply(1:2, function(x) x, BPPARAM = p, BPOPTIONS = opts) checkIdentical(result0, result1) } test_bpoptions_manually_export <- function(){ p <- SnowParam(2, exportglobals = FALSE) bpstart(p) on.exit(bpstop(p), add = TRUE) ## global variables that cannot be found by auto export bar <- function() x environment(bar) <- .GlobalEnv foo <- function(x) bar() environment(foo) <- .GlobalEnv assign("x", 10, envir = .GlobalEnv) assign("bar", bar, envir = .GlobalEnv) on.exit(rm(x, bar, envir = .GlobalEnv), add = TRUE) ## auto export would not work here bpexportvariables(p) <- FALSE checkException(bplapply(1:2, foo, BPPARAM = p), silent = TRUE) ## still not work as no auto export opts <- bpoptions(exports = "x") checkException(bplapply(1:2, foo, BPPARAM = p, BPOPTIONS = opts), silent = TRUE) ## manually export all variables opts <- bpoptions(exports = c("x", "bar")) res <- bplapply(1:2, foo, BPPARAM = p, BPOPTIONS = opts) checkIdentical(res, rep(list(10), 2)) ## enable auto export would not solve the problem bpexportvariables(p) <- TRUE checkException(bplapply(1:2, foo, BPPARAM = p), silent = TRUE) ## manually export the variables which is missing from auto export opts <- bpoptions(exports = "x") res <- bplapply(1:2, foo, BPPARAM = p, BPOPTIONS = opts) checkIdentical(res, rep(list(10), 2)) ## manually export packages bar2 <- function(x) SerialParam() environment(bar2) <- .GlobalEnv foo2 <- function(x) bar2() environment(foo2) <- .GlobalEnv assign("x", 10, envir = .GlobalEnv) assign("bar2", bar2, envir = .GlobalEnv) on.exit(rm(bar2, envir = .GlobalEnv), add = TRUE) bpexportvariables(p) <- TRUE checkException(bplapply(1:2, foo2, BPPARAM = p), silent = TRUE) opts <- bpoptions(packages = c("BiocParallel")) res <- bplapply(1:2, foo2, BPPARAM = p, BPOPTIONS = opts) checkTrue(is(res[[1]], "SerialParam")) 
    ## https://github.com/Bioconductor/BiocParallel/issues/234
    opts <- bpoptions(exports = "x")
    res <- bplapply(1:2, foo, BPPARAM = SerialParam(), BPOPTIONS = opts)
    checkIdentical(res, rep(list(10), 2))
    checkIdentical(.GlobalEnv[["x"]], 10)
}

## ---- BiocParallel/inst/unitTests/test_bpvalidate.R ----

message("Testing bpvalidate")

BPValidate <- BiocParallel:::BPValidate

test_bpvalidate_basic_ok <- function() {
    target <- BPValidate()
    checkIdentical(target, bpvalidate(function() {} ))
    checkIdentical(target, bpvalidate(function(x) x ))
    checkIdentical(target, bpvalidate(function(x) x() ))
    checkIdentical(target, bpvalidate(function(..., x) x ))
    checkIdentical(target, bpvalidate(function(..., x) x() ))
    checkIdentical(target, bpvalidate(function(y, x) y(x) ))
    checkIdentical(target, bpvalidate(function(y, x) y(x=x) ))
    checkIdentical(target, bpvalidate(function(y, ...) y(...) ))
    checkIdentical(target, bpvalidate(local({i = 2; function(y) y + i})))
    checkIdentical(
        target,
        bpvalidate(local({i = 2; local({function(y) y + i})}))
    )
    checkIdentical(target, bpvalidate(sqrt))
}

test_bpvalidate_basic_fail <- function() {
    target <- BPValidate(unknown = "x")
    suppressWarnings({
        checkIdentical(target, bpvalidate(function() x ))
        checkIdentical(target, bpvalidate(function() x() ))
        checkIdentical(target, bpvalidate(function(y) x + y ))
        checkIdentical(target, bpvalidate(function(y) y(x) ))
        checkIdentical(target, bpvalidate(function(y) y(x=x) ))
        checkIdentical(target, bpvalidate(function(y, ...) y(x) ))
        checkIdentical(target, bpvalidate(function(y, ...) y(x=x) ))
    })
}

test_bpvalidate_search_path <- function() {
    target <- BPValidate(symbol = "x", environment = "package:.test_env")
    .test_env <- new.env(parent=emptyenv())
    .test_env$x <- NULL
    attach(.test_env, name = "package:.test_env")
    on.exit(detach("package:.test_env"))

    checkIdentical(target, bpvalidate(function() x ))
    checkIdentical(target, bpvalidate(function(...)
        x ))
    checkIdentical(target, bpvalidate(function(y, ...) y(x) ))
    checkIdentical(target, bpvalidate(function(y, ...) y(x=x) ))
    ## FIXME: should fail -- in search(), but not a function!
    ## checkIdentical(target, bpvalidate(function() x() ))
}

test_bpvalidate_defining_environment <- function() {
    target1 <- BPValidate()
    target2 <- BPValidate(unknown = "x")
    h = function() { x <- 1; f = function() x; function() f() }
    checkIdentical(target1, bpvalidate(h))
    h = function() { f = function() x; function() f() }
    checkIdentical(target2, bpvalidate(h, "silent"))
}

test_bpvalidate_library <- function() {
    target <- BPValidate()
    checkException(bpvalidate(function() library("__UNKNOWN__"),
                              signal = "error"), silent=TRUE)
    checkException(bpvalidate(function() require("__UNKNOWN__"),
                              signal = "error"), silent=TRUE)
    checkIdentical(target, bpvalidate(function() library(BiocParallel)))
    ## FIXME: bpvalidate expects unquoted arg to library() / require()
    ## bpvalidate(function() library("BiocParallel"))

    target1 <- BPValidate(
        symbol = "bpvalidate", environment = "package:BiocParallel"
    )
    checkIdentical(target1, bpvalidate(function() bpvalidate())) # inPath

    fun <- function() { library(BiocParallel); bpvalidate() }
    checkIdentical(target, bpvalidate(fun)) # in function
}

## ---- BiocParallel/inst/unitTests/test_bpvec.R ----

message("Testing bpvec")

test_bpvec_Params <- function() {
    cl <- parallel::makeCluster(2)
    doParallel::registerDoParallel(cl)
    params <- list(
        serial=SerialParam(),
        snow=SnowParam(2),
        dopar=DoparParam()
    )
    if (.Platform$OS.type != "windows")
        params$mc <- MulticoreParam(2)

    x <- rev(1:10)
    expected <- sqrt(x)
    for (param in names(params)) {
        current <- bpvec(x, sqrt, BPPARAM=params[[param]])
        checkIdentical(current, expected)
    }

    ## clean up
    foreach::registerDoSEQ()
    parallel::stopCluster(cl)
    closeAllConnections()
    TRUE
}

test_bpvec_MulticoreParam_short_jobs <- function() {
    ## bpvec should return min(length(X), bpnworkers())
    if
    (.Platform$OS.type == "windows")
        return(TRUE)

    exp <- 1:2
    obs <- bpvec(exp, c, AGGREGATE=list, BPPARAM=MulticoreParam(workers=4L))
    checkIdentical(2L, length(obs))
    checkIdentical(exp, unlist(obs))

    ## clean up
    closeAllConnections()
    TRUE
}

test_bpvec_invalid_FUN <- function() {
    res <- bptry(bpvec(1:2, class, BPPARAM=SerialParam()))
    checkTrue(inherits(res, "bpvec_error"))
}

test_bpvec_named_list <- function() {
    X <- list()
    Y <- character()
    checkIdentical(X, bpvec(X, length))
    checkIdentical(X, bpvec(Y, length))

    names(X) <- names(Y) <- character()
    checkIdentical(X, bpvec(X, length))
    checkIdentical(X, bpvec(Y, length))
}

## ---- BiocParallel/inst/unitTests/test_bpvectorize.R ----

message("Testing bpvectorize")

test_bpvectorize_Params <- function() {
    cl <- parallel::makeCluster(2)
    doParallel::registerDoParallel(cl)
    params <- list(
        serial=SerialParam(),
        snow=SnowParam(2),
        dopar=DoparParam()
    )
    if (.Platform$OS.type != "windows")
        params$mc <- MulticoreParam(2)

    x <- 1:10
    expected <- sqrt(x)
    for (param in names(params)) {
        psqrt <- bpvectorize(sqrt, BPPARAM=params[[param]])
        checkIdentical(expected, psqrt(x))
    }

    ## clean up
    foreach::registerDoSEQ()
    parallel::stopCluster(cl)
    closeAllConnections()
    TRUE
}

## ---- BiocParallel/inst/unitTests/test_errorhandling.R ----

message("Testing errorhandling")

## NOTE: On Windows, MulticoreParam() throws a warning and instantiates
## a single FORK worker using scripts from parallel. No logging or
## error catching is implemented.
checkExceptionText <- function(expr, txt, negate=FALSE, msg="") { x <- try(eval(expr), silent=TRUE) checkTrue(inherits(x, "condition"), msg=msg) checkTrue(xor(negate, grepl(txt, as.character(x), fixed=TRUE)), msg=msg) } test_composeTry <- function() { .composeTry <- BiocParallel:::.composeTry .workerOptions <- BiocParallel:::.workerOptions .error_unevaluated <- BiocParallel:::.error_unevaluated X <- as.list(1:6); X[[2]] <- "2"; X[[6]] <- -1 ## Evaluate all jobs regardless of errors ## e.g., SerialParam(stop.on.error=FALSE) OPTIONS <- .workerOptions(stop.on.error = FALSE) tsqrt <- .composeTry(sqrt, OPTIONS, NULL) current <- tryCatch(suppressWarnings(lapply(X, tsqrt)), error=identity) target <- list(length(X)) for (i in seq_along(X)) target[[i]] <- tryCatch(suppressWarnings(sqrt(X[[i]])), error=identity) tok <- !vapply(target, is, logical(1), "error") checkIdentical(tok, bpok(current)) checkIdentical(conditionMessage(target[[which(!tok)]]), conditionMessage(current[[which(!bpok(current))]])) checkIdentical(target[tok], current[bpok(current)]) ## stop evaluation when error occurs; entire vector returned with ## 'unevaluated' components. 
e.g., SnowParam(stop.on.error=TRUE) OPTIONS <- .workerOptions(stop.on.error = TRUE) tsqrt <- .composeTry(sqrt, OPTIONS, NULL) current <- lapply(X, tsqrt) checkTrue(is(current[[2]], "remote_error")) checkTrue(all(vapply(current[-(1:2)], is, logical(1), "unevaluated_error"))) ## illogical checkException(.composeTry(sqrt, FALSE, FALSE, TRUE, timeout=20L), silent=TRUE) } test_SerialParam_stop.on.error <- function() { X <- list(1, "2", 3) ## stop.on.error=TRUE; lapply-like p <- SerialParam() checkIdentical(TRUE, bpstopOnError(p)) checkException(bplapply(X, sqrt, BPPARAM=p), silent=TRUE) current <- tryCatch(bplapply(X, sqrt, BPPARAM=p), error=identity) checkTrue(is(current, "bplist_error")) target <- "BiocParallel errors\n 1 remote errors, element index: 2\n 1 unevaluated and other errors\n first remote error:\nError in FUN(...): non-numeric argument to mathematical function\n" checkIdentical(target, conditionMessage(current)) target <- tryCatch(lapply(X, sqrt), error=identity) checkIdentical( conditionMessage(target), conditionMessage(bpresult(current)[[2]]) ) result <- bptry(bplapply(X, sqrt, BPPARAM=p)) # issue #142 checkIdentical(c(TRUE, FALSE, FALSE), bpok(result)) ## stop.on.error=FALSE p <- SerialParam(stop.on.error=FALSE) # checkException(bplapply(X, sqrt, BPPARAM=p), silent=TRUE) current <- tryCatch(bplapply(X, sqrt, BPPARAM=p), error=identity) checkTrue(is(current, "bplist_error")) result <- bpresult(current) checkIdentical(c(TRUE, FALSE, TRUE), bpok(result)) checkTrue(is(result[[2]], "remote_error")) checkIdentical(list(sqrt(1), sqrt(3)), result[bpok(result)]) result <- bptry(bplapply(X, sqrt, BPPARAM=p)) checkIdentical(c(TRUE, FALSE, TRUE), bpok(result)) } test_stop.on.error <- function() { checkException(bplapply("2", sqrt), silent=TRUE) checkException(bplapply(c(1, "2"), sqrt), silent=TRUE) checkException(bplapply(c(1, "2", 3), sqrt), silent=TRUE) cls <- tryCatch(bplapply(c(1, "2", 3), sqrt), error=class) checkIdentical(c("bplist_error", "bperror", "error", 
"condition"), cls) } test_catching_errors <- function() { x <- 1:10 y <- rev(x) f <- function(x, y) if (x > y) stop("whooops") else x + y cl <- parallel::makeCluster(2) doParallel::registerDoParallel(cl) params <- list( snow=SnowParam(2, stop.on.error = FALSE), dopar=DoparParam(stop.on.error = FALSE) ) if (.Platform$OS.type != "windows") params$mc <- MulticoreParam(2, stop.on.error = FALSE) for (param in params) { res <- tryCatch({ bplapply(list(1, "2", 3), sqrt, BPPARAM=param) }, error=identity) checkTrue(is(res, "bplist_error")) result <- bpresult(res) checkTrue(length(result) == 3L) msg <- "non-numeric argument to mathematical function" checkIdentical(conditionMessage(result[[2]]), msg) } ## clean up foreach::registerDoSEQ() parallel::stopCluster(cl) closeAllConnections() } test_BPREDO <- function() { f = sqrt x = list(1, "2", 3) x.fix = list(1, 2, 3) cl <- parallel::makeCluster(2) doParallel::registerDoParallel(cl) params <- list( snow=SnowParam(2, stop.on.error = FALSE), dopar=DoparParam(stop.on.error = FALSE) ) if (.Platform$OS.type != "windows") params$mc <- MulticoreParam(2, stop.on.error = FALSE) for (param in params) { res <- tryCatch({ bplapply(x, f, BPPARAM=param) }, error=identity) checkTrue(is(res, "bplist_error")) result <- bpresult(res) checkIdentical(3L, length(result)) checkTrue(inherits(result[[2]], "remote_error")) ## data not fixed res2 <- tryCatch({ bplapply(x, f, BPPARAM=param, BPREDO=res) }, error=identity) checkTrue(is(res2, "bplist_error")) result <- bpresult(res2) checkIdentical(3L, length(result)) checkTrue(is(result[[2]], "remote_error")) checkIdentical(as.list(sqrt(c(1, 3))), result[c(1, 3)]) ## data fixed res3 <- bplapply(x.fix, f, BPPARAM=param, BPREDO=res2) checkIdentical(as.list(sqrt(1:3)), res3) } ## clean up foreach::registerDoSEQ() parallel::stopCluster(cl) closeAllConnections() } test_bpvec_BPREDO <- function() { f = function(i) if (6 %in% i) stop() else sqrt(i) x = 1:10 cl <- parallel::makeCluster(2) 
    doParallel::registerDoParallel(cl)
    params <- list(
        snow=SnowParam(2, stop.on.error = FALSE),
        dopar=DoparParam(stop.on.error = FALSE)
    )
    if (.Platform$OS.type != "windows")
        params$mc <- MulticoreParam(2, stop.on.error = FALSE)

    for (param in params) {
        res <- bptry(bpvec(x, f, BPPARAM=param), bplist_error=identity)
        checkTrue(is(res, "bplist_error"))
        result <- bpresult(res)
        checkIdentical(2L, length(result))
        checkTrue(inherits(result[[2]], "condition"))

        ## data not fixed
        res2 <- bptry(bpvec(x, f, BPPARAM=param, BPREDO=res),
                      bplist_error=identity)
        checkTrue(is(res2, "bplist_error"))
        result <- bpresult(res2)
        checkIdentical(2L, length(result))
        checkTrue(is(result[[2]], "remote_error"))

        ## data fixed
        res3 <- bpvec(x, sqrt, BPPARAM=param, BPREDO=res2)
        checkIdentical(sqrt(x), res3)
    }

    ## clean up
    foreach::registerDoSEQ()
    parallel::stopCluster(cl)
    closeAllConnections()
}

test_bpiterate_BPREDO <- function() {
    n <- 100L
    ntask <- n
    iter_factory <- function(n){
        i <- 0L
        function() if(i
0.4) }

test_ipccounter <- function() {
    checkIdentical(ipcyield(ipcid()), 1L)

    id <- ipcid()
    on.exit(ipcremove(id))
    result <- bplapply(1:5, function(i, id) {
        BiocParallel::ipcyield(id)
    }, id, BPPARAM=SnowParam(2))
    checkIdentical(sort(unlist(result, use.names=FALSE)), 1:5)
}

test_ipc_errors <- function() {
    ## Error : Expected 'character' actual 'double'
    checkException(ipclock(123))
    ## Error : 'id' must be character(1) and not NA
    checkException(ipclock(NA_character_))
    ## Error : 'id' must be character(1) and not NA
    checkException(ipclock(letters))
    ## expect no error
    id <- ipcid()
    ipcreset(id, 10)
    ## Error: Expected single integer value
    checkException(ipcreset(id, 1:3))
    ## Error: 'n' must not be NA
    checkException(ipcreset(id, NA_integer_))
    ipcremove(id)
}

## ---- BiocParallel/inst/unitTests/test_logging.R ----

message("Testing logging")

## This code tests 'log' and 'progressbar'.
## test_errorhandling.R tests 'stop.on.error'

test_log <- function() {
    ## SnowParam, MulticoreParam only
    params <- list(
        snow=SnowParam(2, log=FALSE, stop.on.error=FALSE),
        snowLog=SnowParam(2, log=TRUE, stop.on.error=FALSE))
    if (.Platform$OS.type != "windows") {
        params$multi=MulticoreParam(3, log=FALSE, stop.on.error=FALSE)
        params$multiLog=MulticoreParam(3, log=TRUE, stop.on.error=FALSE)
    }

    for (param in params) {
        res <- suppressMessages(tryCatch({
            bplapply(list(1, "2", 3), sqrt, BPPARAM=param)
        }, error=identity))
        checkTrue(is(res, "bplist_error"))
        result <- bpresult(res)
        checkTrue(length(result) == 3L)
        msg <- "non-numeric argument to mathematical function"
        checkIdentical(conditionMessage(result[[2]]), msg)
        checkTrue(length(attr(result[[2]], "traceback")) > 0L)
    }

    ## clean up
    closeAllConnections()
    TRUE
}

## ---- BiocParallel/inst/unitTests/test_refclass.R ----

message("Testing refclass")

test_SnowParam_refclass <- function() {
    p <- SnowParam(2)
    p2 <- p
    checkTrue(!bpisup(p))
    checkTrue(!bpisup(p2))
    bpstart(p)
    checkTrue(bpisup(p))
    checkTrue(bpisup(p2))
    bpstop(p)
    checkTrue(!bpisup(p))
    checkTrue(!bpisup(p2))
}

## ---- BiocParallel/inst/unitTests/test_rng.R ----

message("Testing rng")

test_rng_lapply <- function() {
    .rng_get_generator <- BiocParallel:::.rng_get_generator
    .rng_reset_generator <- BiocParallel:::.rng_reset_generator
    .workerLapply <- BiocParallel:::.workerLapply
    .RNGstream <- BiocParallel:::.RNGstream
    .rng_next_substream <- BiocParallel:::.rng_next_substream
    OPTIONS <- BiocParallel:::.workerOptions()

    state <- .rng_get_generator()
    on.exit(.rng_reset_generator(state$kind, state$seed))

    SEED <- .RNGstream(bpstart(SerialParam()))
    checkIdentical( ## same sequence of random number streams
        .workerLapply(1:2, function(i) rnorm(1), NULL, OPTIONS, SEED),
        .workerLapply(1:2, function(i) rnorm(1), NULL, OPTIONS, SEED)
    )

    SEED1 <-
.RNGstream(bpstart(SerialParam())) SEED2 <- .rng_next_substream(SEED1) target <- .workerLapply(1:2, function(i) rnorm(2), NULL, OPTIONS, SEED1) obs <- c( .workerLapply(1, function(i) rnorm(2), NULL, OPTIONS, SEED1), .workerLapply(1, function(i) rnorm(2), NULL, OPTIONS, SEED2) ) checkIdentical(target, obs) checkTrue(identical(state, .rng_get_generator())) } test_rng_bplapply <- function() { .rng_get_generator <- BiocParallel:::.rng_get_generator .rng_reset_generator <- BiocParallel:::.rng_reset_generator state <- .rng_get_generator() on.exit(.rng_reset_generator(state$kind, state$seed)) p1 <- SerialParam(RNGseed = 123) p2 <- SnowParam(3, RNGseed = 123) p3 <- SnowParam(5, RNGseed = 123) FUN <- function(i) rnorm(2) ## SerialParam / SnowParam same results target <- bplapply(1:11, FUN, BPPARAM = p1) checkIdentical(bplapply(1:11, FUN, BPPARAM = p2), target) ## SerialParam / SnowParam same results, different number of workers checkIdentical(bplapply(1:11, FUN, BPPARAM = p3), target) if (identical(.Platform$OS.type, "unix")) { ## SerialParam / TransientMulticoreParam same results p4a <- MulticoreParam(5, RNGseed = 123) checkIdentical(bplapply(1:11, FUN, BPPARAM = p4a), target) ## SerialParam / MulticoreParam same results p4b <- bpstart(MulticoreParam(5, RNGseed = 123)) checkIdentical(bplapply(1:11, FUN, BPPARAM = p4b), target) bpstop(p4b) } ## single worker coerced to SerialParam p5 <- SnowParam(1, RNGseed = 123) checkIdentical(bplapply(1:11, FUN, BPPARAM = p5), target, "p5") checkIdentical(state$kind, .rng_get_generator()$kind) } test_rng_bpiterate <- function() { .rng_get_generator <- BiocParallel:::.rng_get_generator .rng_reset_generator <- BiocParallel:::.rng_reset_generator state <- .rng_get_generator() on.exit(.rng_reset_generator(state$kind, state$seed)) FUN <- function(i) rnorm(2) ITER_factory <- function() { x <- 1:11 i <- 0L function() { i <<- i + 1L if (i > length(x)) return(NULL) x[[i]] } } p1 <- SerialParam(RNGseed = 123) p2 <- SnowParam(3, RNGseed = 123) p3 
<- SnowParam(5, RNGseed = 123) target <- bplapply(1:11, FUN, BPPARAM = p1) checkIdentical(target, bpiterate(ITER_factory(), FUN, BPPARAM = p1), "p1") checkIdentical(target, bpiterate(ITER_factory(), FUN, BPPARAM = p2), "p2") checkIdentical(target, bpiterate(ITER_factory(), FUN, BPPARAM = p3), "p3") if (identical(.Platform$OS.type, "unix")) { ## SerialParam / TransientMulticoreParam same results p4a <- MulticoreParam(5, RNGseed = 123) checkIdentical( target, bpiterate(ITER_factory(), FUN, BPPARAM = p4a), "p4a" ) ## SerialParam / MulticoreParam same results p4b <- bpstart(MulticoreParam(5, RNGseed = 123)) checkIdentical( target, bpiterate(ITER_factory(), FUN, BPPARAM = p4b), "p4b" ) bpstop(p4b) } checkIdentical(state$kind, .rng_get_generator()$kind) } test_rng_bpstart <- function() { .rng_get_generator <- BiocParallel:::.rng_get_generator .rng_reset_generator <- BiocParallel:::.rng_reset_generator state <- .rng_get_generator() FUN <- function(i) rnorm(2) ITER_factory <- function() { x <- 1:11 i <- 0L function() { i <<- i + 1L if (i > length(x)) return(NULL) x[[i]] } } ## bplapply p0 <- bpstart(SerialParam()) # random seed result1 <- unlist(bplapply(1:11, FUN, BPPARAM = p0)) result2 <- unlist(bplapply(1:11, FUN, BPPARAM = p0)) checkTrue(!any(result1 %in% result2)) bpstart(bpstop(p0)) # different random seed result3 <- unlist(bplapply(1:11, FUN, BPPARAM = p0)) checkTrue(!any(result3 %in% result1)) p0 <- bpstart(SerialParam(RNGseed = 123)) # set seed result1 <- unlist(bplapply(1:11, FUN, BPPARAM = p0)) result2 <- unlist(bplapply(1:11, FUN, BPPARAM = p0)) checkTrue(!any(result1 %in% result2)) bpstart(bpstop(p0)) # reset seed, same stream result3 <- unlist(bplapply(1:11, FUN, BPPARAM = p0)) result4 <- unlist(bplapply(1:11, FUN, BPPARAM = p0)) checkIdentical(result3, result1) checkIdentical(result4, result2) ## bpiterate p0 <- bpstart(SerialParam()) # random seed result1 <- unlist(bpiterate(ITER_factory(), FUN, BPPARAM = p0)) result2 <- unlist(bpiterate(ITER_factory(), 
FUN, BPPARAM = p0)) checkTrue(!any(result1 %in% result2)) bpstart(bpstop(p0)) # different random seed result3 <- unlist(bpiterate(ITER_factory(), FUN, BPPARAM = p0)) checkTrue(!any(result3 %in% result1)) p0 <- bpstart(SerialParam(RNGseed = 123)) # set seed result1 <- unlist(bpiterate(ITER_factory(), FUN, BPPARAM = p0)) result2 <- unlist(bpiterate(ITER_factory(), FUN, BPPARAM = p0)) checkTrue(!any(result1 %in% result2)) bpstart(bpstop(p0)) # reset seed, same stream result3 <- unlist(bpiterate(ITER_factory(), FUN, BPPARAM = p0)) result4 <- unlist(bpiterate(ITER_factory(), FUN, BPPARAM = p0)) checkIdentical(result3, result1) checkIdentical(result4, result2) checkIdentical(state$kind, .rng_get_generator()$kind) } .test_rng_bpstart_does_not_iterate_rng_seed <- function(param) { .rng_get_generator <- BiocParallel:::.rng_get_generator state <- .rng_get_generator() set.seed(123L) target <- runif(1L) ## bpstart() should not increment the random number seed set.seed(123L) bpstart(param) checkIdentical(target, runif(1L)) bpstop(param) ## bplapply does not increment stream set.seed(123) result <- bplapply(1:3, runif, BPPARAM = param) checkIdentical(target, runif(1L)) ## bplapply uses internal stream set.seed(123) result <- bplapply(1:3, runif, BPPARAM = param) checkTrue(!identical(result, bplapply(1:3, runif, BPPARAM = param))) checkIdentical(target, runif(1L)) target1 <- lapply(1:3, runif) checkTrue(!identical(result, target1)) checkIdentical(state$kind, .rng_get_generator()$kind) } test_rng_bpstart_does_not_iterate_rng_seed <- function() { .rng_get_generator <- BiocParallel:::.rng_get_generator .rng_reset_generator <- BiocParallel:::.rng_reset_generator TEST_FUN <- .test_rng_bpstart_does_not_iterate_rng_seed state <- .rng_get_generator() on.exit(.rng_reset_generator(state$kind, state$seed)) TEST_FUN(SerialParam()) TEST_FUN(SnowParam(2)) if (identical(.Platform$OS.type, "unix")) TEST_FUN(MulticoreParam(2)) } .test_rng_global_and_RNGseed_indepenent <- function(param_fun) { 
set.seed(123) target <- bplapply(1:3, runif, BPPARAM = param_fun()) current <- bplapply(1:3, runif, BPPARAM = param_fun(RNGseed = 123)) checkTrue(!identical(target, current)) } test_rng_global_and_RNGseed_independent <- function() { .rng_get_generator <- BiocParallel:::.rng_get_generator .rng_reset_generator <- BiocParallel:::.rng_reset_generator TEST_FUN <- .test_rng_global_and_RNGseed_indepenent state <- .rng_get_generator() on.exit(.rng_reset_generator(state$kind, state$seed)) TEST_FUN(SerialParam) TEST_FUN(SnowParam) if (identical(.Platform$OS.type, "unix")) TEST_FUN(MulticoreParam) } .test_rng_lapply_bpredo_impl <- function(param) { FUN <- function(i) rnorm(1) target <- unlist(bplapply(1:11, FUN, BPPARAM = param)) FUN0 <- function(i) { if (identical(i, 7L)) { stop("i == 7") } else rnorm(1) } result <- bptry(bplapply(1:11, FUN0, BPPARAM = param)) checkIdentical(unlist(result[-7]), target[-7]) checkTrue(inherits(result[[7]], "remote_error")) FUN1 <- function(i) { if (identical(i, 7L)) { ## the random number stream should be in the same state as the ## first time through the loop, and rnorm(1) should return ## same result as FUN rnorm(1) } else { ## if this branch is used, then we are incorrectly updating ## already calculated elements -- '0' in the output would ## indicate this error 0 } } result <- unlist(bplapply(1:11, FUN1, BPREDO = result, BPPARAM = param)) checkIdentical(result, target) bpstart(param) target1 <- unlist(bplapply(1:11, FUN, BPPARAM = param)) target2 <- unlist(bplapply(1:11, FUN, BPPARAM = param)) target3 <- unlist(bplapply(1:11, FUN, BPPARAM = param)) bpstop(param) bpstart(param) result1 <- bptry(bplapply(1:11, FUN0, BPPARAM = param)) result1_redo1 <- unlist(bplapply(1:11, FUN1, BPREDO = result1, BPPARAM = param)) result2 <- unlist(bplapply(1:11, FUN, BPPARAM = param)) result1_redo2 <- unlist(bplapply(1:11, FUN1, BPREDO = result1, BPPARAM = param)) result3 <- unlist(bplapply(1:11, FUN, BPPARAM = param)) checkIdentical(target1, result1_redo1) 
checkIdentical(target1, result1_redo2) checkIdentical(target2, result2) checkIdentical(target3, result3) } test_rng_lapply_bpredo <- function() { .rng_get_generator <- BiocParallel:::.rng_get_generator .rng_reset_generator <- BiocParallel:::.rng_reset_generator state <- .rng_get_generator() on.exit(.rng_reset_generator(state$kind, state$seed)) param <- SerialParam(RNGseed = 123, stop.on.error = FALSE) .test_rng_lapply_bpredo_impl(param) if (identical(.Platform$OS.type, "unix")) { param <- MulticoreParam(3, RNGseed = 123, stop.on.error = FALSE) .test_rng_lapply_bpredo_impl(param) } } .test_rng_iterate_bpredo_impl <- function(param) { FUN <- function(i) rnorm(1) target <- unlist(bplapply(1:11, FUN, BPPARAM = param)) FUN0 <- function(i) { if (identical(i, 7L)) { stop("i == 7") } else rnorm(1) } iter_factory <- function(n){ i <- 0L function() if(i ## Combining output/error messages into one file #$ -j y ## Giving the name of the output log file #$ -o <%= log.file %> ## One needs to tell the queue system to use the current directory as the working directory ## Or else the script may fail as it will execute in your top level home directory /home/username #$ -cwd ## Use environment variables #$ -V ## Use correct queue #$ -q all.q ## R settings module load R/3.4.3 ## Export value of DEBUGME environment var to worker export DEBUGME=<%= Sys.getenv("DEBUGME") %> Rscript -e 'batchtools::doJobCollection("<%= uri %>")' exit 0 BiocParallel/inst/unitTests/test_utilities.R0000644000175200017520000000677014516004410022306 0ustar00biocbuildbiocbuildmessage("Testing utilities") test_splitIndices <- function() { .splitIndices <- BiocParallel:::.splitIndices checkIdentical(list(), .splitIndices(0, 0)) checkIdentical(list(), .splitIndices(0, 1)) checkIdentical(list(), .splitIndices(0, 2)) checkIdentical(list(1:4), .splitIndices(4, 0)) checkIdentical(list(1:4), .splitIndices(4, 1)) checkIdentical(list(1:2, 3:4), .splitIndices(4, 2)) checkIdentical(as.list(1:4), .splitIndices(4, 4))
checkIdentical(as.list(1:4), .splitIndices(4, 8)) checkIdentical(list(1:4, 5:7), .splitIndices(7, 2)) } test_splitX <- function() { .splitX <- BiocParallel:::.splitX checkIdentical(list(), .splitX(character(), 0, 0)) checkIdentical(list(), .splitX(character(), 1, 0)) checkIdentical(list(), .splitX(character(), 0, 1)) checkIdentical(list(), .splitX(character(), 1, 1)) X <- LETTERS[1:4] checkIdentical(list(X), .splitX(X, 0, 0)) checkIdentical(list(X), .splitX(X, 1, 0)) checkIdentical(list(X[1:2], X[3:4]), .splitX(X, 2, 0)) checkIdentical(as.list(X), .splitX(X, 4, 0)) checkIdentical(as.list(X), .splitX(X, 8, 0)) checkIdentical(list(X[1:2], X[3:4]), .splitX(X, 2, 0)) checkIdentical(list(X), .splitX(X, 2, 1)) checkIdentical(list(X[1:2], X[3:4]), .splitX(X, 2, 2)) checkIdentical(list(X[1], X[2:3], X[4]), .splitX(X, 2, 3)) checkIdentical(as.list(X), .splitX(X, 2, 4)) } test_redo_index <- function() { .redo_index <- BiocParallel:::.redo_index err <- BiocParallel:::.error("") checkIdentical(integer(), .redo_index(list(), list())) checkIdentical(1L, .redo_index(list(1), list(err))) checkIdentical(2L, .redo_index(list(1, "2"), list(1, err))) ## all need recalculating checkIdentical(1:2, .redo_index(list("1", "2"), list(err, err))) ## X can be a vector checkIdentical(2L, .redo_index(1:2, list(1, err))) ## lengths differ checkException(.redo_index(list(1, 2), list(err)), silent=TRUE) ## no previous error checkException(.redo_index(list(1, 2), list(1, 2)), silent=TRUE) } test_rename <- function() { .rename <- BiocParallel:::.rename X <- list() Y <- character() Z <- list(X) W <- list(Y) checkIdentical(X, .rename(list(), X)) checkIdentical(X, .rename(list(), Y)) checkIdentical(X, .rename(list(), Z)) checkIdentical(X, .rename(list(), W)) names(X) <- names(Y) <- character() Z <- list(X) W <- list(Y) checkIdentical(X, .rename(list(), X)) checkIdentical(X, .rename(list(), Y)) checkIdentical(list(), .rename(list(), Z)) checkIdentical(list(), .rename(list(), W)) Z <- list(x = X) W <- 
list(x = Y) checkIdentical(list(x = 1), .rename(list(1), Z)) checkIdentical(list(x = 1), .rename(list(1), W)) X <- list(a = 1:2) exp0 <- vector("list", length(X)) checkIdentical(setNames(exp0, names(X)), .rename(exp0, X)) X <- list(c(a = 1)) exp0 <- vector("list", length(X)) checkIdentical(exp0, .rename(exp0, X)) Y <- c(x = "a") checkIdentical(Y, .rename(Y, Y)) X <- list(a = 1:2, b = 3:4) exp0 <- vector("list", length(X)) exp <- setNames(exp0, names(X)) checkIdentical(exp, .rename(exp0, X)) X <- list(c(a = 1)) exp0 <- vector("list", length(X)) checkIdentical(exp0, .rename(exp0, X)) X <- list(A = c(a = 1, b=2), B = c(c = 1, d = 2)) exp0 <- vector("list", length(X)) exp <- setNames(exp0, names(X)) checkIdentical(exp, .rename(exp0, X)) } BiocParallel/inst/unitTests/test_worker-number.R0000644000175200017520000001047414516004410023066 0ustar00biocbuildbiocbuild## .workerEnvironmentVariable ## .defaultWorkers() ## .enforceWorkers(workers, type) message("Testing worker-number") .resetEnv <- function(name, value) { if (is.na(value)) { Sys.unsetenv(name) } else { value <- list(value) names(value) <- name do.call("Sys.setenv", value) } } test_defaultWorkers <- function() { o_check_limits <- Sys.getenv("_R_CHECK_LIMIT_CORES_", NA) Sys.unsetenv("_R_CHECK_LIMIT_CORES_") o_bbs_home <- Sys.getenv("IS_BIOC_BUILD_MACHINE", NA) Sys.unsetenv("IS_BIOC_BUILD_MACHINE") o_worker_n <- Sys.getenv("BIOCPARALLEL_WORKER_NUMBER", NA) Sys.unsetenv("BIOCPARALLEL_WORKER_NUMBER") on.exit({ .resetEnv("_R_CHECK_LIMIT_CORES_", o_check_limits) .resetEnv("IS_BIOC_BUILD_MACHINE", o_bbs_home) .resetEnv("BIOCPARALLEL_WORKER_NUMBER", o_worker_n) }) checkIdentical(parallel::detectCores() - 2L, bpnworkers(SnowParam())) Sys.setenv(BIOCPARALLEL_WORKER_NUMBER = 5) checkIdentical(5L, bpnworkers(SnowParam())) Sys.setenv(IS_BIOC_BUILD_MACHINE="true") checkIdentical(4L, bpnworkers(SnowParam())) Sys.setenv(`_R_CHECK_LIMIT_CORES_` = TRUE) checkIdentical(2L, bpnworkers(SnowParam())) } test_enforceWorkers <- 
function() { o_check_limits <- Sys.getenv("_R_CHECK_LIMIT_CORES_", NA) Sys.unsetenv("_R_CHECK_LIMIT_CORES_") o_bbs_home <- Sys.getenv("IS_BIOC_BUILD_MACHINE", NA) Sys.unsetenv("IS_BIOC_BUILD_MACHINE") o_worker_max <- Sys.getenv("BIOCPARALLEL_WORKER_MAX", NA) Sys.unsetenv("BIOCPARALLEL_WORKER_MAX") on.exit({ .resetEnv("_R_CHECK_LIMIT_CORES_", o_check_limits) .resetEnv("IS_BIOC_BUILD_MACHINE", o_bbs_home) .resetEnv("BIOCPARALLEL_WORKER_MAX", o_worker_max) }) checkIdentical(6L, bpnworkers(SnowParam(6L))) Sys.setenv(BIOCPARALLEL_WORKER_MAX = 5L) warn <- FALSE withCallingHandlers({ obs <- bpnworkers(SnowParam(6)) }, warning = function(x) { warn <<- startsWith( trimws(conditionMessage(x)), "'BIOCPARALLEL_WORKER_MAX' environment variable detected" ) invokeRestart("muffleWarning") }) checkIdentical(5L, obs) checkTrue(warn) .resetEnv("BIOCPARALLEL_WORKER_MAX", o_worker_max) Sys.setenv(IS_BIOC_BUILD_MACHINE = "true") warn <- FALSE withCallingHandlers({ obs <- bpnworkers(SnowParam(6)) }, warning = function(x) { warn <<- startsWith( trimws(conditionMessage(x)), "'IS_BIOC_BUILD_MACHINE' environment variable detected" ) invokeRestart("muffleWarning") }) checkIdentical(4L, obs) checkTrue(warn) ## .resetEnv("IS_BIOC_BUILD_MACHINE", o_bbs_home) Sys.setenv(`_R_CHECK_LIMIT_CORES_` = "warn") warn <- FALSE withCallingHandlers({ obs <- bpnworkers(SnowParam(6)) }, warning = function(x) { warn <<- startsWith( trimws(conditionMessage(x)), "'_R_CHECK_LIMIT_CORES_' environment variable detected" ) invokeRestart("muffleWarning") }) checkIdentical(2L, obs) checkTrue(warn) Sys.setenv(`_R_CHECK_LIMIT_CORES_` = "false") warn <- FALSE withCallingHandlers({ obs <- bpnworkers(SnowParam(4)) }, warning = function(x) { warn <<- TRUE invokeRestart("muffleWarning") }) checkIdentical(4L, obs) checkTrue(!warn) Sys.setenv(`_R_CHECK_LIMIT_CORES_` = "true") checkException(SnowParam(4), silent = TRUE) } test_bpnworkers_integer_valued <- function() { ## https://github.com/Bioconductor/BiocParallel/issues/232 
checkTrue(inherits(snowWorkers(), "integer")) # default checkIdentical(2L, bpnworkers(SnowParam(c("foo", "bar")))) checkIdentical(2L, bpnworkers(SnowParam(2))) checkIdentical(2L, bpnworkers(SnowParam(2.1))) checkIdentical(2L, bpnworkers(SnowParam(2.9))) p <- SnowParam(2); bpworkers(p) <- 2 checkIdentical(2L, bpnworkers(p)) bpworkers(p) <- c("foo", "bar") checkIdentical(2L, bpnworkers(p)) if (!identical(.Platform$OS.type, "windows")) { checkIdentical(2L, bpnworkers(MulticoreParam(2.1))) checkIdentical(2L, bpnworkers(MulticoreParam(2.9))) checkIdentical(2L, bpnworkers(MulticoreParam(2))) p <- MulticoreParam(2); bpworkers(p) <- 2 checkIdentical(2L, bpnworkers(p)) } } BiocParallel/man/0000755000175200017520000000000014516004410014673 5ustar00biocbuildbiocbuildBiocParallel/man/BatchtoolsParam-class.Rd0000644000175200017520000002136114516004410021353 0ustar00biocbuildbiocbuild\name{BatchtoolsParam-class} \Rdversion{1.1} \docType{class} \alias{BatchtoolsParam-class} \alias{BatchtoolsParam} \alias{bpRNGseed,BatchtoolsParam-method} \alias{bpRNGseed<-,BatchtoolsParam,numeric-method} \alias{bpbackend,BatchtoolsParam-method} \alias{bpisup,BatchtoolsParam-method} \alias{bplapply,ANY,BatchtoolsParam-method} \alias{bplogdir,BatchtoolsParam-method} \alias{bplogdir<-,BatchtoolsParam,character-method} \alias{bpschedule,BatchtoolsParam-method} \alias{bpstart,BatchtoolsParam-method} \alias{bpstop,BatchtoolsParam-method} \alias{bpworkers,BatchtoolsParam-method} \alias{show,BatchtoolsParam-method} \alias{batchtoolsWorkers} \alias{batchtoolsCluster} \alias{batchtoolsTemplate} \alias{batchtoolsRegistryargs} \title{Enable parallelization on batch systems} \description{ This class is used to parameterize scheduler options on managed high-performance computing clusters using batchtools. \code{BatchtoolsParam()}: Construct a BatchtoolsParam-class object. \code{batchtoolsWorkers()}: Return the default number of workers for each backend. 
\code{batchtoolsTemplate()}: Return the default template for each backend. \code{batchtoolsCluster()}: Return the default cluster. \code{batchtoolsRegistryargs()}: Create a list of arguments to be used in batchtools' \code{makeRegistry}; see \code{registryargs} argument. } \usage{ BatchtoolsParam( workers = batchtoolsWorkers(cluster), cluster = batchtoolsCluster(), registryargs = batchtoolsRegistryargs(), saveregistry = FALSE, resources = list(), template = batchtoolsTemplate(cluster), stop.on.error = TRUE, progressbar = FALSE, RNGseed = NA_integer_, timeout = WORKER_TIMEOUT, exportglobals=TRUE, log = FALSE, logdir = NA_character_, resultdir=NA_character_, jobname = "BPJOB" ) batchtoolsWorkers(cluster = batchtoolsCluster()) batchtoolsCluster(cluster) batchtoolsTemplate(cluster) batchtoolsRegistryargs(...) } \arguments{ \item{workers}{\code{integer(1)}} Number of workers to divide tasks (e.g., elements in the first argument of \code{bplapply}) between. On 'multicore' and 'socket' backends, this defaults to \code{multicoreWorkers()} and \code{snowWorkers()}. On managed (e.g., slurm, SGE) clusters \code{workers} has no default, meaning that the number of workers needs to be provided by the user. \item{cluster}{\code{character(1)}} Cluster type being used as the backend by \code{BatchtoolsParam}. The available options are "socket", "multicore", "interactive", "sge", "slurm", "lsf", "torque" and "openlava". The cluster type, if available on the machine, is registered as the backend. Cluster types which need a \code{template} are "sge", "slurm", "lsf", "openlava", and "torque". If the template is not given then a default is selected from the \code{batchtools} package. \item{registryargs}{\code{list()}} Arguments given to the registry created by \code{BatchtoolsParam} to configure the registry and where it's being stored.
The \code{registryargs} can be specified by the function \code{batchtoolsRegistryargs()} which takes the arguments \code{file.dir}, \code{work.dir}, \code{packages}, \code{namespaces}, \code{source}, \code{load}, \code{make.default}. It's useful to configure these option, especially the \code{file.dir} to a location which is accessible to all the nodes on your job scheduler i.e master and workers. \code{file.dir} uses a default setting to make a registry in your working directory. \item{saveregistry}{\code{logical(1)}} Option given to store the entire registry for the job(s). This functionality should only be used when debugging. The storage of the entire registry can be time and space expensive on disk. The registry will be saved in the directory specified by \code{file.dir} in \code{registryargs}; the default locatoin is the current working directory. The saved registry directories will have suffix "-1", "-2" and so on, for each time the \code{BatchtoolsParam} is used. \item{resources}{\code{named list()}} Arguments passed to the \code{resources} argument of \code{batchtools::submitJobs} during evaluation of \code{bplapply} and similar functions. These name-value pairs are used for substitution into the template file. \item{template}{\code{character(1)}} Path to a template for the backend in \code{BatchtoolsParam}. It is possible to check which template is being used by the object using the getter \code{bpbackend(BatchtoolsParam())}. The template needs to be written specific to each backend. Please check the list of available templates in the \code{batchtools} package. \item{stop.on.error}{\code{logical(1)}} Stop all jobs as soon as one jobs fails (\code{stop.on.error == TRUE}) or wait for all jobs to terminate. Default is \code{TRUE}. \item{progressbar}{\code{logical(1)}} Suppress the progress bar used in BatchtoolsParam and be less verbose. Default is \code{FALSE}. \item{RNGseed}{\code{integer(1)}} Set an initial seed for the RNG. 
Default is \code{NULL} where a random seed is chosen upon initialization. \item{timeout}{\code{list()}} Time (in seconds) allowed for worker to complete a task. If the computation exceeds \code{timeout} an error is thrown with message 'reached elapsed time limit'. \item{exportglobals}{\code{logical(1)}} Export \code{base::options()} from manager to workers? Default \code{TRUE}. \item{log}{\code{logical(1)}} Option given to save the logs which are produced by the jobs. If \code{log=TRUE} then the \code{logdir} option must be specified. \item{logdir}{\code{character(1)}} Path to location where logs are stored. The argument \code{log=TRUE} is required before using the logdir option. \item{resultdir}{\code{logical(1)}} Path where results are stored. \item{jobname}{\code{character(1)}} Job name that is prepended to the output log and result files. Default is "BPJOB". \item{\dots}{name-value pairs} Names and values correspond to arguments from batchtools \code{\link[batchtools]{makeRegistry}}. } \section{BatchtoolsParam constructor}{ Return an object with specified values. The object may be saved to disk or reused within a session. } \section{Methods}{ The following generics are implemented and perform as documented on the corresponding help page: \code{\link{bpworkers}}, \code{\link{bpnworkers}}, \code{\link{bpstart}}, \code{\link{bpstop}}, \code{\link{bpisup}}, \code{\link{bpbackend}}. \code{\link{bplapply}} handles arguments \code{X} of classes derived from \code{S4Vectors::List} specially, coercing to \code{list}. } \author{Nitesh Turaga, \url{mailto:nitesh.turaga@roswellpark.org}} \seealso{ \code{getClass("BiocParallelParam")} for additional parameter classes. \code{register} for registering parameter classes for use in parallel evaluation. The batchtools package. 
} \examples{ ## Pi approximation piApprox = function(n) { nums = matrix(runif(2 * n), ncol = 2) d = sqrt(nums[, 1]^2 + nums[, 2]^2) 4 * mean(d <= 1) } piApprox(1000) ## Calculate piApprox 10 times param <- BatchtoolsParam() result <- bplapply(rep(10e5, 10), piApprox, BPPARAM=param) \dontrun{ ## see vignette for additional explanation library(BiocParallel) param = BatchtoolsParam(workers=5, cluster="sge", template="script/test-sge-template.tmpl") ## Run parallel job result = bplapply(rep(10e5, 100), piApprox, BPPARAM=param) ## bpmapply param = BatchtoolsParam() result = bpmapply(fun, x = 1:3, y = 1:3, MoreArgs = list(z = 1), SIMPLIFY = TRUE, BPPARAM = param) ## bpvec param = BatchtoolsParam(workers=2) result = bpvec(1:10, seq_along, BPPARAM=param) ## bpvectorize param = BatchtoolsParam(workers=2) ## this returns a function bpseq_along = bpvectorize(seq_along, BPPARAM=param) result = bpseq_along(1:10) ## bpiterate ITER <- function(n=5) { i <- 0L function() { i <<- i + 1L if (i > n) return(NULL) rep(i, n) } } param <- BatchtoolsParam() res <- bpiterate(ITER=ITER(), FUN=function(x,y) sum(x) + y, y=10, BPPARAM=param) ## save logs logdir <- tempfile() dir.create(logdir) param <- BatchtoolsParam(log=TRUE, logdir=logdir) res <- bplapply(rep(10e5, 10), piApprox, BPPARAM=param) ## save registry (should be used only for debugging) file.dir <- tempfile() registryargs <- batchtoolsRegistryargs(file.dir = file.dir) param <- BatchtoolsParam(saveregistry = TRUE, registryargs = registryargs) res <- bplapply(rep(10e5, 10), piApprox, BPPARAM=param) dir(dirname(file.dir), basename(file.dir)) } } BiocParallel/man/BiocParallel-defunct.Rd0000644000175200017520000000050514516004410021141 0ustar00biocbuildbiocbuild\name{BiocParallel-defunct} \alias{bprunMPIslave} \alias{BatchJobsParam} \title{Defunct Objects in Package \sQuote{BiocParallel}} \description{ These functions and objects are defunct and no longer available. } \details{ Defunct functions are: \code{bprunMPIslave()}. 
Defunct classes: \code{BatchJobsParam}. } BiocParallel/man/BiocParallel-deprecated.Rd0000644000175200017520000000033014516004410021605 0ustar00biocbuildbiocbuild\name{BiocParallel-deprecated} \alias{BiocParallel-deprecated} \title{Deprecated Functions in Package \sQuote{BiocParallel}} \description{ There are currently no deprecated functions in \sQuote{BiocParallel}. } BiocParallel/man/BiocParallel-package.Rd0000644000175200017520000000074414516004410021111 0ustar00biocbuildbiocbuild\name{BiocParallel-package} \alias{BiocParallel-package} \alias{BiocParallel} \docType{package} \title{Bioconductor facilities for parallel evaluation} \description{ This package provides modified versions and novel implementation of functions for parallel evaluation, tailored to use with Bioconductor objects. } \details{ This package uses code from the \code{\link{parallel}} package. } \author{ See \code{packageDescription("BiocParallel")}. } \keyword{package} BiocParallel/man/BiocParallelParam-class.Rd0000644000175200017520000002670414516004410021610 0ustar00biocbuildbiocbuild\name{BiocParallelParam-class} \Rdversion{1.1} \docType{class} % Class \alias{BiocParallelParam-class} \alias{BiocParallelParam} % Control \alias{bpbackend} \alias{bpbackend<-} \alias{bpbackend,missing-method} \alias{bpbackend<-,missing,ANY-method} \alias{bpisup} \alias{bpisup,ANY-method} \alias{bpisup,missing-method} \alias{bpstart} \alias{bpstart,ANY-method} \alias{bpstart,missing-method} \alias{bpstart,BiocParallelParam-method} \alias{bpstop} \alias{bpstop,ANY-method} \alias{bpstop,missing-method} \alias{bpstop,BiocParallelParam-method} \alias{bpnworkers} \alias{bpworkers} \alias{bpworkers<-} \alias{bpworkers,missing-method} \alias{bpworkers,BiocParallelParam-method} \alias{bptasks} \alias{bptasks,BiocParallelParam-method} \alias{bptasks<-} \alias{bptasks<-,BiocParallelParam-method} \alias{bptasks<-,BiocParallelParam,ANY-method} \alias{bpstopOnError} \alias{bpstopOnError,BiocParallelParam-method}
\alias{bpstopOnError<-} \alias{bpstopOnError<-,BiocParallelParam,logical-method} \alias{bpstopOnError<-,DoparParam,logical-method} \alias{bplog} \alias{bplog<-} \alias{bplog,BiocParallelParam-method} \alias{bpthreshold} \alias{bpthreshold<-} \alias{bpthreshold,BiocParallelParam-method} \alias{bplogdir} \alias{bplogdir<-} \alias{bplogdir,BiocParallelParam-method} \alias{bplogdir<-,BiocParallelParam,character-method} \alias{bpresultdir} \alias{bpresultdir<-} \alias{bpresultdir,BiocParallelParam-method} \alias{bpresultdir<-,BiocParallelParam,character-method} \alias{bptimeout} \alias{bptimeout<-} \alias{bptimeout,BiocParallelParam-method} \alias{bptimeout<-,BiocParallelParam,numeric-method} \alias{bpexportglobals} \alias{bpexportglobals<-} \alias{bpexportglobals,BiocParallelParam-method} \alias{bpexportglobals<-,BiocParallelParam,logical-method} \alias{bpexportvariables} \alias{bpexportvariables<-} \alias{bpexportvariables,BiocParallelParam-method} \alias{bpexportvariables<-,BiocParallelParam,logical-method} \alias{bpprogressbar} \alias{bpprogressbar,BiocParallelParam-method} \alias{bpprogressbar<-} \alias{bpprogressbar<-,BiocParallelParam,logical-method} \alias{bpjobname} \alias{bpjobname,BiocParallelParam-method} \alias{bpjobname<-} \alias{bpjobname<-,BiocParallelParam,character-method} \alias{bpRNGseed} \alias{bpRNGseed<-} \alias{bpRNGseed,BiocParallelParam-method} \alias{bpRNGseed<-,BiocParallelParam,NULL-method} \alias{bpRNGseed<-,BiocParallelParam,numeric-method} \alias{bpforceGC} \alias{bpforceGC,BiocParallelParam-method} \alias{bpforceGC<-} \alias{bpforceGC<-,BiocParallelParam,numeric-method} \alias{bpfallback} \alias{bpfallback,BiocParallelParam-method} \alias{bpfallback<-} \alias{bpfallback<-,BiocParallelParam,logical-method} % Other methods \alias{show,BiocParallel-method} \alias{print.remote_error} \title{BiocParallelParam objects} \description{ The \code{BiocParallelParam} virtual class stores configuration parameters for parallel execution. 
Concrete subclasses include \code{SnowParam}, \code{MulticoreParam}, \code{BatchtoolsParam}, \code{DoparParam}, and \code{SerialParam}. } \details{ \code{BiocParallelParam} is the virtual base class on which other parameter objects build. There are 5 concrete subclasses: \describe{ \item{\code{SnowParam}:}{distributed memory computing} \item{\code{MulticoreParam}:}{shared memory computing} \item{\code{BatchtoolsParam}:}{scheduled cluster computing} \item{\code{DoparParam}:}{foreach computing} \item{\code{SerialParam}:}{non-parallel execution} } The parameter objects hold configuration parameters related to the method of parallel execution such as shared memory, independent memory or computing with a cluster scheduler. } \section{Construction}{ The \code{BiocParallelParam} class is virtual and has no constructor. Instances of the subclasses can be created with the following: \itemize{ \item \code{SnowParam()} \item \code{MulticoreParam()} \item \code{BatchtoolsParam()} \item \code{DoparParam()} \item \code{SerialParam()} } } \section{Accessors}{ \subsection{Back-end control}{ In the code below \code{BPPARAM} is a \code{BiocParallelParam} object. \describe{ \item{\code{bpworkers(x)}, \code{bpworkers(x, ...)}:}{ \code{integer(1)} or \code{character()}. Gets the number or names of the back-end workers. The setter is supported for SnowParam and MulticoreParam only. } \item{\code{bpnworkers(x)}:}{ \code{integer(1)}. Gets the number of the back-end workers. } \item{\code{bptasks(x)}, \code{bptasks(x) <- value}:}{ \code{integer(1)}. Get or set the number of tasks for a job. \code{value} can be a scalar integer > 0L, or integer 0L for matching the worker number, or \code{NA} for representing an infinite task number. \code{DoparParam} and \code{BatchtoolsParam} have their own approach to dividing a job among workers. We define a job as a single call to a function such as \code{bplapply}, \code{bpmapply} etc. A task is the division of the \code{X} argument into chunks.
When \code{tasks == 0} (default), \code{X} is divided by the number of workers. This approach distributes \code{X} in (approximately) equal chunks. A \code{tasks} value of > 0 dictates the total number of tasks. Values can range from 1 (all of \code{X} to a single worker) to the length of \code{X} (each element of \code{X} to a different worker); values greater than \code{length(X)} (e.g., \code{.Machine$integer.max}) are rounded to \code{length(X)}. When the length of \code{X} is less than the number of workers each element of \code{X} is sent to a worker and \code{tasks} is ignored. Another case where the \code{tasks} value is ignored is when using the \code{bpiterate} function; the number of tasks are defined by the number of data chunks returned by the \code{ITER} function. } \item{\code{bpstart(x)}:}{ \code{logical(1)}. Starts the back-end, if necessary. } \item{\code{bpstop(x)}:}{ \code{logical(1)}. Stops the back-end, if necessary and possible. } \item{\code{bpisup(x)}:}{ \code{logical(1)}. Tests whether the back-end is available for processing, returning a scalar logical value. \code{bp*} functions such as \code{bplapply} automatically start the back-end if necessary. } \item{\code{bpbackend(x)}, \code{bpbackend(x) <- value}:}{ Gets or sets the parallel \code{bpbackend}. Not all back-ends can be retrieved; see \code{methods("bpbackend")}. } \item{\code{bplog(x)}, \code{bplog(x) <- value}:}{ Get or enable logging, if available. \code{value} must be a \code{logical(1)}. } \item{\code{bpthreshold(x)}, \code{bpthreshold(x) <- value}:}{ Get or set the logging threshold. \code{value} must be a \code{character(1)} string of one of the levels defined in the \code{futile.logger} package: \dQuote{TRACE}, \dQuote{DEBUG}, \dQuote{INFO}, \dQuote{WARN}, \dQuote{ERROR}, or \dQuote{FATAL}. } \item{\code{bplogdir(x)}, \code{bplogdir(x) <- value}:}{ Get or set an optional directory for saving log files. The directory must already exist with read / write ability. 
} \item{\code{bpresultdir(x)}, \code{bpresultdir(x) <- value}:}{ Get or set an optional directory for saving results as 'rda' files. The directory must already exist with read / write ability. } \item{\code{bptimeout(x)}, \code{bptimeout(x) <- value}:}{ \code{numeric(1)} Time (in seconds) allowed for a worker to complete a task. This value is passed to base::setTimeLimit() as both the \code{cpu} and \code{elapsed} arguments. If the computation exceeds \code{timeout} an error is thrown with message 'reached elapsed time limit'. } \item{\code{bpexportglobals(x)}, \code{bpexportglobals(x) <- value}:}{ \code{logical(1)} Export \code{base::options()} from manager to workers? Default \code{TRUE}. } \item{\code{bpexportvariables(x)}, \code{bpexportvariables(x) <- value}:}{ \code{logical(1)} Automatically export the variables which are defined in the global environment and used by the function from manager to workers. Default \code{TRUE}. } \item{\code{bpprogressbar(x)}, \code{bpprogressbar(x) <- value}:}{ Get or set whether a text progress bar is displayed. \code{value} must be a \code{logical(1)}. } \item{\code{bpRNGseed(x)}, \code{bpRNGseed(x) <- value}:}{ Get or set the seed for random number generation. \code{value} must be a \code{numeric(1)} or \code{NULL}. } \item{\code{bpjobname(x)}, \code{bpjobname(x) <- value}:}{ Get or set the job name. } \item{\code{bpforceGC(x)}, \code{bpforceGC(x) <- value}:}{ Get or set whether 'garbage collection' should be invoked at the end of each call to \code{FUN}. } \item{\code{bpfallback(x)}, \code{bpfallback(x) <- value}:}{ Get or set whether the fallback \code{SerialParam} should be used (e.g., for efficiency when starting a cluster) when the current \code{BPPARAM} has not been started and the worker number is less than or equal to 1. } } } \subsection{Error Handling}{ In the code below \code{BPPARAM} is a \code{BiocParallelParam} object. \describe{ \item{\code{bpstopOnError(x)}, \code{bpstopOnError(x) <- value}:}{ \code{logical()}.
Controls whether the job stops as soon as an error is hit. When \code{stop.on.error == TRUE}, all computations stop once the error is hit, and the output contains all successfully completed results up to and including the error. When \code{FALSE}, the job runs to completion and successful results are returned along with any error messages. } } } } \section{Methods}{ \subsection{Evaluation}{ In the code below \code{BPPARAM} is a \code{BiocParallelParam} object. Full documentation for these functions is on separate man pages: see ?\code{bpmapply}, ?\code{bplapply}, ?\code{bpvec}, ?\code{bpiterate} and ?\code{bpaggregate}. \itemize{ \item \code{bpmapply(FUN, ..., MoreArgs=NULL, SIMPLIFY=TRUE, USE.NAMES=TRUE, BPPARAM=bpparam())} \item \code{bplapply(X, FUN, ..., BPPARAM=bpparam())} \item \code{bpvec(X, FUN, ..., AGGREGATE=c, BPPARAM=bpparam())} \item \code{bpiterate(ITER, FUN, ..., BPPARAM=bpparam())} \item \code{bpaggregate(x, data, FUN, ..., BPPARAM=bpparam())} } } \subsection{Other}{ In the code below \code{BPPARAM} is a \code{BiocParallelParam} object. \itemize{ \item \code{show(x)} } } } \author{Martin Morgan and Valerie Obenchain.} \seealso{ \itemize{ \item \code{\link{SnowParam}} for computing in distributed memory \item \code{\link{MulticoreParam}} for computing in shared memory \item \code{\link{BatchtoolsParam}} for computing with cluster schedulers \item \code{\link{DoparParam}} for computing with foreach \item \code{\link{SerialParam}} for non-parallel execution } } \examples{ getClass("BiocParallelParam") ## For examples see ?SnowParam, ?MulticoreParam, ?BatchtoolsParam ## and ?SerialParam.
} \keyword{classes} \keyword{methods} BiocParallel/man/DeveloperInterface.Rd0000644000175200017520000002567514516004410020747 0ustar00biocbuildbiocbuild\name{DeveloperInterface} \alias{.BiocParallelParam_prototype} \alias{.prototype_update} \alias{.recv_all} \alias{.recv_all,ANY-method} \alias{.recv_any} \alias{.recv_any,ANY-method} \alias{.recv_any,SerialBackend-method} \alias{.send_all} \alias{.send_all,ANY-method} \alias{.send_to} \alias{.send_to,ANY-method} \alias{.send_to,SerialBackend-method} \alias{.send} \alias{.send,ANY-method} \alias{.recv} \alias{.recv,ANY-method} \alias{.recv,SOCKnode-method} \alias{.close} \alias{.close,ANY-method} \alias{.manager} \alias{.manager,ANY-method} \alias{.manager,SnowParam-method} \alias{.manager,DoparParam-method} \alias{.manager,TransientMulticoreParam-method} \alias{.manager_send} \alias{.manager_send,ANY-method} \alias{.manager_send,TaskManager-method} \alias{.manager_send,SOCKmanager-method} \alias{.manager_send,DoparParamManager-method} \alias{.manager_recv} \alias{.manager_recv,ANY-method} \alias{.manager_recv,TaskManager-method} \alias{.manager_recv,DoparParamManager-method} \alias{.manager_send_all} \alias{.manager_send_all,ANY-method} \alias{.manager_send_all,TaskManager-method} \alias{.manager_send_all,DoparParamManager-method} \alias{.manager_recv_all} \alias{.manager_recv_all,ANY-method} \alias{.manager_recv_all,TaskManager-method} \alias{.manager_recv_all,DoparParamManager-method} \alias{.manager_flush} \alias{.manager_flush,ANY-method} \alias{.manager_flush,TaskManager-method} \alias{.manager_cleanup} \alias{.manager_cleanup,ANY-method} \alias{.manager_cleanup,TaskManager-method} \alias{.manager_cleanup,SOCKmanager-method} \alias{.manager_capacity} \alias{.manager_capacity,ANY-method} \alias{.manager_capacity,TaskManager-method} \alias{.manager_capacity,DoparParamManager-method} \alias{.bpstart_impl} \alias{.bpstop_impl} \alias{.bpworker_impl} \alias{.bplapply_impl} \alias{.bpiterate_impl} \alias{.task_const} 
\alias{.task_dynamic} \alias{.task_remake} \alias{.registerOption} \title{Developer interface} \description{ Functions documented on this page are meant for developers wishing to implement \code{BPPARAM} objects that extend the \code{BiocParallelParam} virtual class to support additional parallel back-ends. } \usage{ ## class extension .prototype_update(prototype, ...) ## manager interface .send_to(backend, node, value) .recv_any(backend) .send_all(backend, value) .recv_all(backend) ## worker interface .send(worker, value) .recv(worker) .close(worker) ## task manager interface(optional) .manager(BPPARAM) .manager_send(manager, value, ...) .manager_recv(manager) .manager_send_all(manager, value) .manager_recv_all(manager) .manager_capacity(manager) .manager_flush(manager) .manager_cleanup(manager) ## supporting implementations .bpstart_impl(x) .bpworker_impl(worker) .bplapply_impl( X, FUN, ..., BPREDO = list(), BPPARAM = bpparam(), BPOPTIONS = bpoptions() ) .bpiterate_impl( ITER, FUN, ..., REDUCE, init, reduce.in.order = FALSE, BPREDO = list(), BPPARAM = bpparam(), BPOPTIONS = bpoptions() ) .bpstop_impl(x) ## extract the static or dynamic part from a task .task_const(value) .task_dynamic(value) .task_remake(value, static_data = NULL) ## Register an option for BPPARAM .registerOption(optionName, genericName) } \arguments{ \item{prototype}{ A named \code{list} of default values for reference class fields. } \item{x}{ A \code{BPPARAM} instance. } \item{backend}{ An object containing information about the cluster, returned by \code{bpbackend()}. } \item{manager}{ An object returned by \code{.manager()} } \item{worker}{ The object to which the worker communicates via \code{.send} and \code{.recv}. \code{.close} terminates the worker. } \item{node}{ An integer value indicating the node in the backend to which values are to be sent or received. } \item{value}{ Any R object, to be sent to or from workers. 
} \item{X, ITER, FUN, REDUCE, init, reduce.in.order, BPREDO, BPPARAM}{ See \code{bplapply} and \code{bpiterate}. } \item{\ldots}{ For \code{.prototype_update()}, name-value pairs to initialize derived and base class fields. For \code{.bplapply_impl()}, \code{.bpiterate_impl()}, additional arguments to \code{FUN()}; see \code{bplapply} and \code{bpiterate}. For \code{.manager_send()}, this is a placeholder for future development. } \item{static_data}{ An object extracted from \code{.task_const(value)}. } \item{BPOPTIONS}{ Additional options to control the behavior of parallel evaluation, see \code{\link{bpoptions}}. } \item{optionName}{ character(1), an option name for \code{BPPARAM}. The named options will be created by \code{\link{bpoptions}}. } \item{genericName}{ character(1), the name of the S4 generic function. This will be used to get or set the field in \code{BPPARAM}. The generic needs to support the replacement function defined by \code{\link{setReplaceMethod}}. } } \details{ Start a BPPARAM implementation by creating a reference class, e.g., extending the virtual class \code{BiocParallelParam}. Because of idiosyncrasies in reference class field initialization, an instance of the class should be created by calling the generator returned by \code{setRefClass()} with a list of key-value pairs providing default parameter arguments. The default values for the \code{BiocParallelParam} base class are provided in the list \code{.BiocParallelParam_prototype}, and the function \code{.prototype_update()} updates a prototype with new values, typically provided by the user. See the example below. BPPARAM implementations need to implement \code{bpstart()} and \code{bpstop()} methods; they may also need to implement \code{bplapply()} and \code{bpiterate()} methods. Each method usually performs implementation-specific functionality before calling the next (BiocParallelParam) method.
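In outline (the class \code{"A"} is hypothetical, in the spirit of the
example below), a method performs its own set-up or tear-down and then
calls the exported body of the next method:

\preformatted{
setMethod("bpstart", "A", function(x, ...) {
    ## implementation-specific set-up, e.g., launch and connect workers
    .bpstart_impl(x)    # then: logging, RNG streams, finalizers
})

setMethod("bpstop", "A", function(x) {
    ## implementation-specific clean-up of the backend
    .bpstop_impl(x)     # then: orderly shutdown of the started cluster
})
}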
To avoid the intricacies of multiple dispatch, the bodies of BiocParallelParam methods are available for direct use as exported symbols. \itemize{ \item \code{bpstart,BiocParallelParam-method} (\code{.bpstart_impl()}) initiates logging, random number generation, and registration of finalizers to ensure that started clusters are stopped. \item \code{bpstop,BiocParallelParam-method} (\code{.bpstop_impl()}) ensures appropriate clean-up of stopped clusters, including sending the DONE semaphore. \code{bpstart()} will usually arrange for workers to enter \code{.bpworker_impl()} to listen for and evaluate tasks. \item \code{bplapply,ANY,BiocParallelParam-method} and \code{bpiterate,ANY,BiocParallelParam-method} (\code{.bplapply_impl()}, \code{.bpiterate_impl()}) implement serial evaluation when only a single core or task is available, \code{BPREDO} functionality, and parallel lapply-like or iterative calculation. } Invoke \code{.bpstart_impl()}, \code{.bpstop_impl()}, \code{.bplapply_impl()}, and \code{.bpiterate_impl()} after any BPPARAM-specific implementation details. New implementations will also implement \code{bpisup()} and \code{bpbackend()} / \code{bpbackend<-()}; there are no default methods. The \emph{backends} (object returned by \code{bpbackend()}) of new BPPARAM implementations must support \code{length()} (number of nodes). In addition, the backends must support \code{.send_to()} and \code{.recv_any()} manager and \code{.send()}, \code{.recv()}, and \code{.close()} worker methods. Default \code{.send_all()} and \code{.recv_all()} methods are implemented as simple iterations along the length of the cluster, invoking \code{.send_to()} or \code{.recv_any()} on each iteration. The task manager is an optional interface for a backend that wants to control the task dispatching process. \code{.manager_send()} sends a task value to a worker; \code{.manager_recv()} returns a list with each element being a result received from a worker.
\code{.manager_capacity()} indicates how many task values can be processed simultaneously by the cluster. \code{.manager_flush()} flushes all the cached tasks (if any) immediately. \code{.manager_cleanup()} performs cleanup after the job is finished. The default methods for \code{.manager_flush()} and \code{.manager_cleanup()} are no-ops. In some cases it might be worthwhile to cache some objects in a task and reuse them in another task. This can reduce the bandwidth requirement for sending tasks out to the workers. \code{.task_const()} can be used to extract the objects from the task which are not going to change across all tasks. \code{.task_dynamic()} preserves only the dynamic components in a task. Given the static and dynamic task objects, the complete task can be recovered by \code{.task_remake()}. When no static data can be extracted (e.g., a task with no static component, or a task which has already been reduced by \code{.task_dynamic()}), \code{.task_const()} simply returns a \code{NULL} value. Calling \code{.task_remake()} is a no-op if the task has not been reduced by \code{.task_dynamic()} or if the static data is \code{NULL}. The function \code{.registerOption} allows the developer to register a generic function that can change the fields in \code{BPPARAM}. The developer does not need to register the fields that are already defined in \code{BiocParallel}. \code{.registerOption} should only be used to support new fields. For example, if the developer defines a \code{BPPARAM} which has a field \code{worker.password}, the developer may also define the getter / setter \code{bpworkerPassword} and \code{bpworkerPassword<-}. Then by calling \code{.registerOption("worker.password", "bpworkerPassword")}, the user can change the field in \code{BPPARAM} by passing \code{bpoptions(worker.password = "1234")} to any apply function.
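As a concrete sketch (the \code{worker.password} field, the
\code{bpworkerPassword()} getter / setter, and the \code{MyParam()}
constructor below are hypothetical, for illustration only):

\preformatted{
## developer code, run when the back-end package is loaded
setGeneric(
    "bpworkerPassword", function(x) standardGeneric("bpworkerPassword")
)
setGeneric(
    "bpworkerPassword<-",
    function(x, value) standardGeneric("bpworkerPassword<-")
)
## map the option name to the generic used to get / set the field
.registerOption("worker.password", "bpworkerPassword")

## user code: override the field for a single call
## bplapply(X, FUN, BPPARAM = MyParam(),
##          BPOPTIONS = bpoptions(worker.password = "1234"))
}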
} \value{ The return value of \code{.prototype_update()} is a list with elements in \code{prototype} substituted with key-value pairs provided in \code{\ldots}. All \code{send*} and \code{recv*} functions are endomorphic, returning a \code{cluster} object. The return value of \code{.manager_recv()} is a list with each element being a result received from a worker; \code{.manager_capacity()} returns an integer. The return values of the other \code{.manager_*()} functions are not restricted. } \examples{ \donttest{ ## ## Extend BiocParallelParam; `.A()` is not meant for the end user ## .A <- setRefClass( "A", contains = "BiocParallelParam", fields = list(id = "character") ) ## Use a prototype for default values, including the prototype for ## inherited fields .A_prototype <- c( list(id = "default_id"), .BiocParallelParam_prototype ) ## Provide a constructor for the user A <- function(...) { prototype <- .prototype_update(.A_prototype, ...) do.call(.A, prototype) } ## Provide an R function for field access bpid <- function(x) x$id ## Create and use an instance, overwriting default values bpid(A()) a <- A(id = "my_id", threshold = "WARN") bpid(a) bpthreshold(a) } } BiocParallel/man/DoparParam-class.Rd0000644000175200017520000000606114516004410020316 0ustar00biocbuildbiocbuild\name{DoparParam-class} \Rdversion{1.1} \docType{class} \alias{DoparParam-class} \alias{DoparParam} \alias{coerce,SOCKcluster,DoparParam-method} \alias{bpbackend,DoparParam-method} \alias{bpbackend<-,DoparParam,SOCKcluster-method} \alias{bpisup,DoparParam-method} \alias{bpstart,DoparParam-method} \alias{bpstop,DoparParam-method} \alias{bpworkers,DoparParam-method} \alias{show,DoparParam-method} \title{Enable parallel evaluation using registered dopar backend} \description{ This class is used to dispatch parallel operations to the dopar backend registered with the foreach package.
} \usage{ DoparParam(stop.on.error=TRUE, RNGseed = NULL) } \details{ \code{DoparParam} can be used for shared or non-shared memory computing depending on what backend is loaded. The \code{doSNOW} package supports non-shared memory, \code{doParallel} supports both shared and non-shared. When not specified, the default number of workers in \code{DoparParam} is determined by \code{getDoParWorkers()}. See the \code{foreach} package vignette for details on using the different backends: \url{http://cran.r-project.org/web/packages/foreach/vignettes/foreach.pdf} } \arguments{ \item{stop.on.error}{ \code{logical(1)} Stop all jobs as soon as one job fails (\code{stop.on.error == TRUE}) or wait for all jobs to terminate. Default is \code{TRUE}. } \item{RNGseed}{ \code{integer(1)} Seed for random number generation. The seed is used to set a new, independent random number stream for each element of \code{X}. The ith element receives the same stream seed, regardless of use of \code{SerialParam()}, \code{SnowParam()}, or \code{MulticoreParam()}, and regardless of worker or task number. When \code{RNGseed = NULL}, a random seed is used. } } \section{DoparParam constructor}{ Return a proxy object that dispatches parallel evaluation to the registered foreach parallel backend. There are no options to the constructor. All configuration should be done through the normal interface to the foreach parallel backends. } \section{Methods}{ The following generics are implemented and perform as documented on the corresponding help page (e.g., \code{?bpisup}): \code{\link{bpworkers}}, \code{\link{bpnworkers}}, \code{\link{bpstart}}, \code{\link{bpstop}}, \code{\link{bpisup}}, \code{\link{bpbackend}}, \code{\link{bpbackend<-}}, \code{\link{bpvec}}. } \author{Martin Morgan \url{mailto:mtmorgan@fhcrc.org}} \seealso{ \code{getClass("BiocParallelParam")} for additional parameter classes. \code{register} for registering parameter classes for use in parallel evaluation.
\code{foreach-package} for the parallel backend infrastructure used by this param class. } \examples{ \dontrun{ # First register a parallel backend with foreach library(doParallel) registerDoParallel(2) p <- DoparParam() bplapply(1:10, sqrt, BPPARAM=p) bpvec(1:10, sqrt, BPPARAM=p) ## set DoparParam() as the default for BiocParallel ## register(DoparParam(), default=TRUE) } } \keyword{classes} BiocParallel/man/MulticoreParam-class.Rd0000644000175200017520000004306214516004410021216 0ustar00biocbuildbiocbuild\name{MulticoreParam-class} \Rdversion{1.1} \docType{class} \alias{MulticoreParam} \alias{MulticoreParam-class} \alias{multicoreWorkers} \alias{bpisup,MulticoreParam-method} \alias{bpschedule,MulticoreParam-method} \alias{bpworkers<-,MulticoreParam,numeric-method} \alias{show,MulticoreParam-method} %% implementation detail \alias{.close,TransientMulticoreParam-method} \alias{.recv,TransientMulticoreParam-method} \alias{.recv_all,TransientMulticoreParam-method} \alias{.recv_any,TransientMulticoreParam-method} \alias{.send,TransientMulticoreParam-method} \alias{.send_to,TransientMulticoreParam-method} \alias{bpbackend,TransientMulticoreParam-method} \alias{bpstart,TransientMulticoreParam-method} \alias{bpstop,TransientMulticoreParam-method} \alias{length,TransientMulticoreParam-method} \title{Enable multi-core parallel evaluation} \description{ This class is used to parameterize single computer multicore parallel evaluation on non-Windows computers. \code{multicoreWorkers()} chooses the number of workers. } \usage{ ## constructor ## ------------------------------------ MulticoreParam(workers = multicoreWorkers(), tasks = 0L, stop.on.error = TRUE, progressbar = FALSE, RNGseed = NULL, timeout = WORKER_TIMEOUT, exportglobals=TRUE, log = FALSE, threshold = "INFO", logdir = NA_character_, resultdir = NA_character_, jobname = "BPJOB", force.GC = FALSE, fallback = TRUE, manager.hostname = NA_character_, manager.port = NA_integer_, ...) 
## detect workers ## ------------------------------------ multicoreWorkers() } \details{ \code{MulticoreParam} is used for shared memory computing. Under the hood the cluster is created with \code{makeCluster(..., type = "FORK")} from the \code{parallel} package. See \code{?BIOCPARALLEL_WORKER_NUMBER} to control the default and maximum number of workers. A FORK transport starts workers with the \code{mcfork} function and communicates between master and workers using socket connections. \code{mcfork} builds on \code{fork()}, so workers run on the same machine as the manager; a cluster spanning several machines is not supported. Because FORK clusters are POSIX-based they are not supported on Windows. When \code{MulticoreParam} is created or used on Windows it defaults to \code{SerialParam}, which is the equivalent of using a single worker. \describe{ \item{error handling:}{ By default all computations are attempted and partial results are returned with any error messages. \itemize{ \item \code{stop.on.error} A \code{logical}. Stops all jobs as soon as one job fails or waits for all jobs to terminate. When \code{FALSE}, the return value is a list of successful results along with error messages as 'conditions'. \item The \code{bpok(x)} function returns a \code{logical()} vector that is FALSE for any jobs that threw an error. The input \code{x} is a list output from a bp*apply function such as \code{bplapply} or \code{bpmapply}. } } \item{logging:}{ When \code{log = TRUE} the \code{futile.logger} package is loaded on the workers. All log messages written in the \code{futile.logger} format are captured by the logging mechanism and returned in real-time (i.e., as each task completes) instead of after all jobs have finished. Messages sent to \emph{stdout} and \emph{stderr} are returned to the workspace by default. When \code{log = TRUE} these are diverted to the log output.
Those familiar with the \code{outfile} argument to \code{makeCluster} can think of \code{log = FALSE} as equivalent to \code{outfile = NULL}; providing a \code{logdir} is the same as providing a name for \code{outfile} except that BiocParallel writes a log file for each task. The log output includes additional statistics such as memory use and task runtime. Memory use is computed by calling gc(reset=TRUE) before code evaluation and gc() (no reset) after. The output of the second gc() call is sent to the log file. } \item{log and result files:}{ Results and logs can be written to a file instead of returned to the workspace. Writing to files is done from the master as each task completes. Options can be set with the \code{logdir} and \code{resultdir} fields in the constructor or with the accessors, \code{bplogdir} and \code{bpresultdir}. } \item{random number generation:}{ For \code{MulticoreParam}, \code{SnowParam}, and \code{SerialParam}, random number generation is controlled through the \code{RNGseed = } argument. BiocParallel uses the L'Ecuyer-CMRG random number generator described in the parallel package to generate independent random number streams. One stream is associated with each element of \code{X}, and used to seed the random number stream for the application of \code{FUN()} to \code{X[[i]]}. Thus setting \code{RNGseed = } ensures reproducibility across \code{MulticoreParam()}, \code{SnowParam()}, and \code{SerialParam()}, regardless of worker or task number. The default value \code{RNGseed = NULL} means that each evaluation of \code{bplapply} proceeds independently. For details of the L'Ecuyer generator, see ?\code{clusterSetRNGStream}.
} } } \section{Constructor}{ \describe{ \item{ \code{MulticoreParam(workers = multicoreWorkers(), tasks = 0L, stop.on.error = TRUE, progressbar = FALSE, RNGseed = NULL, timeout = Inf, exportglobals=TRUE, log = FALSE, threshold = "INFO", logdir = NA_character_, resultdir = NA_character_, manager.hostname = NA_character_, manager.port = NA_integer_, ...)}:}{ Return an object representing a FORK cluster. The cluster is not created until \code{bpstart} is called. Named arguments in \code{...} are passed to \code{makeCluster}. } } } \arguments{ \item{workers}{ \code{integer(1)} Number of workers. Defaults to the maximum of 1 or the number of cores determined by \code{detectCores} minus 2 unless environment variables \code{R_PARALLELLY_AVAILABLECORES_FALLBACK} or \code{BIOCPARALLEL_WORKER_NUMBER} are set otherwise. } \item{tasks}{ \code{integer(1)}. The number of tasks per job. \code{value} must be a scalar integer >= 0L. In this documentation a job is defined as a single call to a function, such as \code{bplapply}, \code{bpmapply}, etc. A task is the division of the \code{X} argument into chunks. When \code{tasks == 0} (default, except when \code{progressbar = TRUE}), \code{X} is divided as evenly as possible over the number of workers. A \code{tasks} value of > 0 specifies the exact number of tasks. Values can range from 1 (all of \code{X} to a single worker) to the length of \code{X} (each element of \code{X} to a different worker). When the length of \code{X} is less than the number of workers, each element of \code{X} is sent to a worker and \code{tasks} is ignored. When the length of \code{X} is less than \code{tasks}, \code{tasks} is treated as \code{length(X)}. } \item{stop.on.error}{ \code{logical(1)} Enable stop on error. } \item{progressbar}{ \code{logical(1)} Enable progress bar (based on plyr:::progress_text).
Enabling the progress bar changes the \emph{default} value of \code{tasks} to \code{.Machine$integer.max}, so that progress is reported for each element of \code{X}. } \item{RNGseed}{ \code{integer(1)} Seed for random number generation. The seed is used to set a new, independent random number stream for each element of \code{X}. The ith element receives the same stream seed, regardless of use of \code{SerialParam()}, \code{SnowParam()}, or \code{MulticoreParam()}, and regardless of worker or task number. When \code{RNGseed = NULL}, a random seed is used. } \item{timeout}{ \code{numeric(1)} Time (in seconds) allowed for worker to complete a task. This value is passed to base::setTimeLimit() as both the \code{cpu} and \code{elapsed} arguments. If the computation exceeds \code{timeout} an error is thrown with message 'reached elapsed time limit'. } \item{exportglobals}{ \code{logical(1)} Export \code{base::options()} from manager to workers? Default \code{TRUE}. } \item{log}{ \code{logical(1)} Enable logging. } \item{threshold}{ \code{character(1)} Logging threshold as defined in \code{futile.logger}. } \item{logdir}{ \code{character(1)} Log files directory. When not provided, log messages are returned to stdout. } \item{resultdir}{ \code{character(1)} Job results directory. When not provided, results are returned as an \R{} object (list) to the workspace. } \item{jobname}{ \code{character(1)} Job name that is prepended to log and result files. Default is "BPJOB". } \item{force.GC}{ \code{logical(1)} Whether to invoke the garbage collector after each call to \code{FUN}. The value \code{TRUE} (explicitly call the garbage collector) can slow parallel computation, but is necessary when each call to \code{FUN} allocates a 'large' amount of memory. If \code{FUN} allocates little memory, considerable performance improvements are gained with the default setting \code{force.GC = FALSE}.
} \item{fallback}{ \code{logical(1)} When \code{TRUE}, fall back to using \code{SerialParam} when \code{MulticoreParam} has not been started and the number of workers is no greater than 1. } \item{manager.hostname}{ \code{character(1)} Host name of manager node. See 'Global Options' in \code{\link{SnowParam}}. } \item{manager.port}{ \code{integer(1)} Port on manager with which workers communicate. See 'Global Options' in \code{\link{SnowParam}}. } \item{\dots}{ Additional arguments passed to \code{\link{makeCluster}}. } } \section{Accessors: Logging and results}{ In the following code, \code{x} is a \code{MulticoreParam} object. \describe{ \item{\code{bpprogressbar(x)}, \code{bpprogressbar(x) <- value}:}{ Get or set the value to enable the text progress bar. \code{value} must be a \code{logical(1)}. } \item{\code{bpjobname(x)}, \code{bpjobname(x) <- value}:}{ Get or set the job name. } \item{\code{bpRNGseed(x)}, \code{bpRNGseed(x) <- value}:}{ Get or set the seed for random number generation. \code{value} must be a \code{numeric(1)} or \code{NULL}. } \item{\code{bplog(x)}, \code{bplog(x) <- value}:}{ Get or set the value to enable logging. \code{value} must be a \code{logical(1)}. } \item{\code{bpthreshold(x)}, \code{bpthreshold(x) <- value}:}{ Get or set the logging threshold. \code{value} must be a \code{character(1)} string of one of the levels defined in the \code{futile.logger} package: \dQuote{TRACE}, \dQuote{DEBUG}, \dQuote{INFO}, \dQuote{WARN}, \dQuote{ERROR}, or \dQuote{FATAL}. } \item{\code{bplogdir(x)}, \code{bplogdir(x) <- value}:}{ Get or set the directory for the log file. \code{value} must be a \code{character(1)} path, not a file name. The file is written out as LOGFILE.out. If no \code{logdir} is provided and \code{bplog=TRUE}, log messages are sent to stdout. } \item{\code{bpresultdir(x)}, \code{bpresultdir(x) <- value}:}{ Get or set the directory for the result files. \code{value} must be a \code{character(1)} path, not a file name.
Separate files are written for each job with the prefix JOB (e.g., JOB1, JOB2, etc.). When no \code{resultdir} is provided the results are returned to the session as a \code{list}. } } } \section{Accessors: Back-end control}{ In the code below \code{x} is a \code{MulticoreParam} object. See the ?\code{BiocParallelParam} man page for details on these accessors. \itemize{ \item \code{bpworkers(x)} \item \code{bpnworkers(x)} \item \code{bptasks(x)}, \code{bptasks(x) <- value} \item \code{bpstart(x)} \item \code{bpstop(x)} \item \code{bpisup(x)} \item \code{bpbackend(x)}, \code{bpbackend(x) <- value} } } \section{Accessors: Error Handling}{ In the code below \code{x} is a \code{MulticoreParam} object. See the ?\code{BiocParallelParam} man page for details on these accessors. \itemize{ \item \code{bpstopOnError(x)}, \code{bpstopOnError(x) <- value} } } \section{Methods: Evaluation}{ In the code below \code{BPPARAM} is a \code{MulticoreParam} object. Full documentation for these functions is on separate man pages: see ?\code{bpmapply}, ?\code{bplapply}, ?\code{bpvec}, ?\code{bpiterate} and ?\code{bpaggregate}. \itemize{ \item \code{bpmapply(FUN, ..., MoreArgs=NULL, SIMPLIFY=TRUE, USE.NAMES=TRUE, BPPARAM=bpparam())} \item \code{bplapply(X, FUN, ..., BPPARAM=bpparam())} \item \code{bpvec(X, FUN, ..., AGGREGATE=c, BPPARAM=bpparam())} \item \code{bpiterate(ITER, FUN, ..., BPPARAM=bpparam())} \item \code{bpaggregate(x, data, FUN, ..., BPPARAM=bpparam())} } } \section{Methods: Other}{ In the code below \code{x} is a \code{MulticoreParam} object. \describe{ \item{\code{show(x)}:}{ Displays the \code{MulticoreParam} object. } } } \section{Global Options}{ See the 'Global Options' section of \code{\link{SnowParam}} for manager host name and port defaults. } \author{Martin Morgan \url{mailto:mtmorgan@fhcrc.org} and Valerie Obenchain} \seealso{ \itemize{ \item \code{register} for registering parameter classes for use in parallel evaluation.
\item \code{\link{SnowParam}} for computing in distributed memory \item \code{\link{DoparParam}} for computing with foreach \item \code{\link{SerialParam}} for non-parallel evaluation } } \examples{ ## ----------------------------------------------------------------------- ## Job configuration: ## ----------------------------------------------------------------------- ## MulticoreParam supports shared memory computing. The object fields ## control the division of tasks, error handling, logging and ## result format. bpparam <- MulticoreParam() bpparam ## By default the param is created with the maximum available workers ## determined by multicoreWorkers(). multicoreWorkers() ## Fields are modified with accessors of the same name: bplog(bpparam) <- TRUE dir.create(resultdir <- tempfile()) bpresultdir(bpparam) <- resultdir bpparam ## ----------------------------------------------------------------------- ## Logging: ## ----------------------------------------------------------------------- ## When 'log == TRUE' the workers use a custom script (in BiocParallel) ## that enables logging and access to other job statistics. Log messages ## are returned as each job completes rather than waiting for all to finish. ## In 'fun', a value of 'x = 1' will throw a warning, 'x = 2' is ok ## and 'x = 3' throws an error. Because 'x = 1' sleeps, the warning ## should return after the error. X <- 1:3 fun <- function(x) { if (x == 1) { Sys.sleep(2) sqrt(-x) ## warning x } else if (x == 2) { x ## ok } else if (x == 3) { sqrt("FOO") ## error } } ## By default logging is off. Turn it on with the bplog()<- setter ## or by specifying 'log = TRUE' in the constructor. 
bpparam <- MulticoreParam(3, log = TRUE, stop.on.error = FALSE) res <- tryCatch({ bplapply(X, fun, BPPARAM=bpparam) }, error=identity) res ## When a 'logdir' location is given the messages are redirected to a file: \dontrun{ bplogdir(bpparam) <- tempdir() bplapply(X, fun, BPPARAM = bpparam) list.files(bplogdir(bpparam)) } ## ----------------------------------------------------------------------- ## Managing results: ## ----------------------------------------------------------------------- ## By default results are returned as a list. When 'resultdir' is given ## files are saved in the directory specified by job, e.g., 'TASK1.Rda', ## 'TASK2.Rda', etc. \dontrun{ dir.create(resultdir <- tempfile()) bpparam <- MulticoreParam(2, resultdir = resultdir, stop.on.error = FALSE) bplapply(X, fun, BPPARAM = bpparam) list.files(bpresultdir(bpparam)) } ## ----------------------------------------------------------------------- ## Error handling: ## ----------------------------------------------------------------------- ## When 'stop.on.error' is TRUE the job is terminated as soon as an ## error is hit. When FALSE, all computations are attempted and partial ## results are returned along with errors. In this example the number of ## 'tasks' is set to equal the length of 'X' so each element is run ## separately. (Default behavior is to divide 'X' evenly over workers.) ## All results along with error: bpparam <- MulticoreParam(2, tasks = 4, stop.on.error = FALSE) res <- bptry(bplapply(list(1, "two", 3, 4), sqrt, BPPARAM = bpparam)) res ## Calling bpok() on the result list returns TRUE for elements with no error. bpok(res) ## ----------------------------------------------------------------------- ## Random number generation: ## ----------------------------------------------------------------------- ## Random number generation is controlled with the 'RNGseed' field. 
## This seed is passed to parallel::clusterSetRNGStream ## which uses the L'Ecuyer-CMRG random number generator and distributes ## streams to members of the cluster. bpparam <- MulticoreParam(3, RNGseed = 7739465) bplapply(seq_len(bpnworkers(bpparam)), function(i) rnorm(1), BPPARAM = bpparam) } \keyword{classes} \keyword{methods} BiocParallel/man/SerialParam-class.Rd0000644000175200017520000001555314516004410020476 0ustar00biocbuildbiocbuild\name{SerialParam-class} \Rdversion{1.1} \docType{class} \alias{SerialParam-class} \alias{SerialParam} \alias{bpbackend,SerialParam-method} \alias{bpstart,SerialParam-method} \alias{bpstop,SerialParam-method} \alias{bpisup,SerialParam-method} \alias{bpworkers,SerialParam-method} \alias{bplog,SerialParam-method} \alias{bplogdir,SerialParam-method} \alias{bplog<-,SerialParam,logical-method} \alias{bpthreshold<-,SerialParam,character-method} \alias{bplogdir<-,SerialParam,character-method} \alias{length,SerialBackend-method} \title{Enable serial evaluation} \description{ This class is used to parameterize serial evaluation, primarily to facilitate easy transition from parallel to serial code. } \usage{ SerialParam( stop.on.error = TRUE, progressbar = FALSE, RNGseed = NULL, timeout = WORKER_TIMEOUT, log = FALSE, threshold = "INFO", logdir = NA_character_, resultdir = NA_character_, jobname = "BPJOB", force.GC = FALSE ) } \details{ \code{SerialParam} is used for serial computation on a single node. Using \code{SerialParam} in conjunction with \code{bplapply} differs from use of \code{lapply} because it provides features such as error handling, logging, and random number use consistent with \code{SnowParam} and \code{MulticoreParam}. \describe{ \item{error handling:}{ By default all computations are attempted and partial results are returned with any error messages. \itemize{ \item \code{stop.on.error} A \code{logical}. Stops all jobs as soon as one job fails or waits for all jobs to terminate.
When \code{FALSE}, the return value is a list of successful results along with error messages as 'conditions'. \item The \code{bpok(x)} function returns a \code{logical()} vector that is FALSE for any jobs that threw an error. The input \code{x} is a list output from a bp*apply function such as \code{bplapply} or \code{bpmapply}. } } \item{logging:}{ When \code{log = TRUE} the \code{futile.logger} package is loaded on the workers. All log messages written in the \code{futile.logger} format are captured by the logging mechanism and returned in real-time (i.e., as each task completes) instead of after all jobs have finished. Messages sent to \emph{stdout} and \emph{stderr} are returned to the workspace by default. When \code{log = TRUE} these are diverted to the log output. Those familiar with the \code{outfile} argument to \code{makeCluster} can think of \code{log = FALSE} as equivalent to \code{outfile = NULL}; providing a \code{logdir} is the same as providing a name for \code{outfile} except that BiocParallel writes a log file for each task. The log output includes additional statistics such as memory use and task runtime. Memory use is computed by calling gc(reset=TRUE) before code evaluation and gc() (no reset) after. The output of the second gc() call is sent to the log file. } \item{log and result files:}{ Results and logs can be written to a file instead of returned to the workspace. Writing to files is done from the master as each task completes. Options can be set with the \code{logdir} and \code{resultdir} fields in the constructor or with the accessors, \code{bplogdir} and \code{bpresultdir}. } \item{random number generation:}{ For \code{MulticoreParam}, \code{SnowParam}, and \code{SerialParam}, random number generation is controlled through the \code{RNGseed = } argument. BiocParallel uses the L'Ecuyer-CMRG random number generator described in the parallel package to generate independent random number streams.
One stream is associated with each element of \code{X}, and used to seed the random number stream for the application of \code{FUN()} to \code{X[[i]]}. Thus setting \code{RNGseed = } ensures reproducibility across \code{MulticoreParam()}, \code{SnowParam()}, and \code{SerialParam()}, regardless of worker or task number. The default value \code{RNGseed = NULL} means that each evaluation of \code{bplapply} proceeds independently. For details of the L'Ecuyer generator, see ?\code{clusterSetRNGStream}. } } } \section{Constructor}{ \describe{ \item{\code{SerialParam()}:}{ Return an object to be used for serial evaluation of otherwise parallel functions such as \code{\link{bplapply}}, \code{\link{bpvec}}. } } } \arguments{ \item{stop.on.error}{ \code{logical(1)} Enable stop on error. } \item{progressbar}{ \code{logical(1)} Enable progress bar (based on plyr:::progress_text). } \item{RNGseed}{ \code{integer(1)} Seed for random number generation. The seed is used to set a new, independent random number stream for each element of \code{X}. The ith element receives the same stream seed, regardless of use of \code{SerialParam()}, \code{SnowParam()}, or \code{MulticoreParam()}, and regardless of worker or task number. When \code{RNGseed = NULL}, a random seed is used. } \item{timeout}{ \code{numeric(1)} Time (in seconds) allowed for worker to complete a task. This value is passed to base::setTimeLimit() as both the \code{cpu} and \code{elapsed} arguments. If the computation exceeds \code{timeout} an error is thrown with message 'reached elapsed time limit'. } \item{log}{ \code{logical(1)} Enable logging. } \item{threshold}{ \code{character(1)} Logging threshold as defined in \code{futile.logger}. } \item{logdir}{ \code{character(1)} Log files directory. When not provided, log messages are returned to stdout. } \item{resultdir}{ \code{character(1)} Job results directory. When not provided, results are returned as an \R{} object (list) to the workspace.
} \item{jobname}{ \code{character(1)} Job name that is prepended to log and result files. Default is "BPJOB". } \item{force.GC}{ \code{logical(1)} Whether to invoke the garbage collector after each call to \code{FUN}. The default (\code{FALSE}, do not explicitly call the garbage collector) rarely needs to be changed. } } \section{Methods}{ The following generics are implemented and perform as documented on the corresponding help page (e.g., \code{?bpworkers}): \code{\link{bpworkers}}. \code{\link{bpisup}}, \code{\link{bpstart}}, \code{\link{bpstop}} are implemented, but do not have any side-effects. } \author{Martin Morgan \url{mailto:mtmorgan@fhcrc.org}} \seealso{ \code{getClass("BiocParallelParam")} for additional parameter classes. \code{register} for registering parameter classes for use in parallel evaluation. } \examples{ p <- SerialParam() simplify2array(bplapply(1:10, sqrt, BPPARAM=p)) bpvec(1:10, sqrt, BPPARAM=p) } \keyword{classes}

BiocParallel/man/SnowParam-class.Rd

\name{SnowParam-class} \Rdversion{1.1} \docType{class} % Class \alias{SnowParam} \alias{SnowParam-class} % Control \alias{snowWorkers} \alias{bpbackend,SnowParam-method} \alias{bpbackend<-,SnowParam,cluster-method} \alias{bpisup,SnowParam-method} \alias{bpstart,SnowParam-method} \alias{bpstop,SnowParam-method} \alias{bpworkers,SnowParam-method} \alias{bpworkers<-,SnowParam,numeric-method} \alias{bpworkers<-,SnowParam,character-method} % Accessors \alias{bplog,SnowParam-method} \alias{bplog<-,SnowParam,logical-method} \alias{bpthreshold,SnowParam-method} \alias{bpthreshold<-,SnowParam,character-method} % Other methods \alias{coerce,SOCKcluster,SnowParam-method} \alias{coerce,spawnedMPIcluster,SnowParam-method} \alias{show,SnowParam-method} \title{Enable simple network of workstations (SNOW)-style parallel evaluation} \description{ This class is used to parameterize simple network of workstations (SNOW) parallel
evaluation on one or several physical computers. \code{snowWorkers()} chooses the number of workers. } \usage{ ## constructor ## ------------------------------------ SnowParam(workers = snowWorkers(type), type=c("SOCK", "MPI", "FORK"), tasks = 0L, stop.on.error = TRUE, progressbar = FALSE, RNGseed = NULL, timeout = WORKER_TIMEOUT, exportglobals = TRUE, exportvariables = TRUE, log = FALSE, threshold = "INFO", logdir = NA_character_, resultdir = NA_character_, jobname = "BPJOB", force.GC = FALSE, fallback = TRUE, manager.hostname = NA_character_, manager.port = NA_integer_, ...) ## coercion ## ------------------------------------ ## as(SOCKcluster, SnowParam) ## as(spawnedMPIcluster,SnowParam) ## detect workers ## ------------------------------------ snowWorkers(type = c("SOCK", "MPI", "FORK")) } \details{ \code{SnowParam} is used for distributed memory computing and supports two cluster types: \sQuote{SOCK} (default) and \sQuote{MPI}. The \code{SnowParam} builds on infrastructure in the \code{snow} and \code{parallel} packages and provides the additional features of error handling, logging and writing out results. See \code{?BIOCPARALLEL_WORKER_NUMBER} to control the default and maximum number of workers. \describe{ \item{error handling:}{ By default all computations are attempted and partial results are returned with any error messages. \itemize{ \item \code{stop.on.error} A \code{logical}. Stops all jobs as soon as one job fails (\code{TRUE}) or waits for all jobs to terminate (\code{FALSE}). When \code{FALSE}, the return value is a list of successful results along with error messages as 'conditions'. \item The \code{bpok(x)} function returns a \code{logical()} vector that is FALSE for any jobs that threw an error. The input \code{x} is a list output from a bp*apply function such as \code{bplapply} or \code{bpmapply}. } } \item{logging:}{ When \code{log = TRUE} the \code{futile.logger} package is loaded on the workers.
All log messages written in the \code{futile.logger} format are captured by the logging mechanism and returned in real time (i.e., as each task completes) instead of after all jobs have finished. Messages sent to \emph{stdout} and \emph{stderr} are returned to the workspace by default. When \code{log = TRUE} these are diverted to the log output. Those familiar with the \code{outfile} argument to \code{makeCluster} can think of \code{log = FALSE} as equivalent to \code{outfile = NULL}; providing a \code{logdir} is the same as providing a name for \code{outfile} except that BiocParallel writes a log file for each task. The log output includes additional statistics such as memory use and task runtime. Memory use is computed by calling \code{gc(reset=TRUE)} before code evaluation and \code{gc()} (no reset) after. The output of the second \code{gc()} call is sent to the log file. } \item{log and result files:}{ Results and logs can be written to a file instead of returned to the workspace. Writing to files is done from the master as each task completes. Options can be set with the \code{logdir} and \code{resultdir} fields in the constructor or with the accessors, \code{bplogdir} and \code{bpresultdir}. } \item{random number generation:}{ For \code{MulticoreParam}, \code{SnowParam}, and \code{SerialParam}, random number generation is controlled through the \code{RNGseed = } argument. BiocParallel uses the L'Ecuyer-CMRG random number generator described in the parallel package to generate independent random number streams. One stream is associated with each element of \code{X}, and used to seed the random number stream for the application of \code{FUN()} to \code{X[[i]]}. Thus setting \code{RNGseed = } ensures reproducibility across \code{MulticoreParam()}, \code{SnowParam()}, and \code{SerialParam()}, regardless of worker or task number. The default value \code{RNGseed = NULL} means that each evaluation of \code{bplapply} proceeds independently.
For details of the L'Ecuyer generator, see ?\code{clusterSetRNGStream}. } NOTE: The \code{PSOCK} cluster from the \code{parallel} package does not support cluster options \code{scriptdir} and \code{useRscript}. \code{PSOCK} is not supported because these options are needed to re-direct to an alternate worker script located in BiocParallel. } } \section{Constructor}{ \describe{ \item{ \code{SnowParam(workers = snowWorkers(), type=c("SOCK", "MPI"), tasks = 0L, stop.on.error = FALSE, progressbar = FALSE, RNGseed = NULL, timeout = Inf, exportglobals = TRUE, exportvariables = TRUE, log = FALSE, threshold = "INFO", logdir = NA_character_, resultdir = NA_character_, jobname = "BPJOB", manager.hostname = NA_character_, manager.port = NA_integer_, ...)}:}{ Return an object representing a SNOW cluster. The cluster is not created until \code{bpstart} is called. Named arguments in \code{...} are passed to \code{makeCluster}. } } } \arguments{ \item{workers}{ \code{integer(1)} Number of workers. Defaults to the maximum of 1 or the number of cores determined by \code{detectCores} minus 2 unless environment variables \code{R_PARALLELLY_AVAILABLECORES_FALLBACK} or \code{BIOCPARALLEL_WORKER_NUMBER} are set otherwise. For a \code{SOCK} cluster, \code{workers} can be a \code{character()} vector of host names. } \item{type}{ \code{character(1)} Type of cluster to use. Possible values are \code{SOCK} (default) and \code{MPI}. Instead of \code{type=FORK} use \code{MulticoreParam}. } \item{tasks}{ \code{integer(1)}. The number of tasks per job. \code{value} must be a scalar integer >= 0L. In this documentation a job is defined as a single call to a function, such as \code{bplapply}, \code{bpmapply} etc. A task is the division of the \code{X} argument into chunks. When \code{tasks == 0} (default), \code{X} is divided as evenly as possible over the number of workers. A \code{tasks} value of > 0 specifies the exact number of tasks. 
Values can range from 1 (all of \code{X} to a single worker) to the length of \code{X} (each element of \code{X} to a different worker). When the length of \code{X} is less than the number of workers, each element of \code{X} is sent to a worker and \code{tasks} is ignored. } \item{stop.on.error}{ \code{logical(1)} Enable stop on error. } \item{progressbar}{ \code{logical(1)} Enable progress bar (based on plyr:::progress_text). } \item{RNGseed}{ \code{integer(1)} Seed for random number generation. The seed is used to set a new, independent random number stream for each element of \code{X}. The ith element receives the same stream seed, regardless of use of \code{SerialParam()}, \code{SnowParam()}, or \code{MulticoreParam()}, and regardless of worker or task number. When \code{RNGseed = NULL}, a random seed is used. } \item{timeout}{ \code{numeric(1)} Time (in seconds) allowed for worker to complete a task. This value is passed to \code{base::setTimeLimit()} as both the \code{cpu} and \code{elapsed} arguments. If the computation exceeds \code{timeout} an error is thrown with message 'reached elapsed time limit'. } \item{exportglobals}{ \code{logical(1)} Export \code{base::options()} from manager to workers? Default \code{TRUE}. } \item{exportvariables}{ \code{logical(1)} Automatically export the variables which are defined in the global environment and used by the function from manager to workers. Default \code{TRUE}. } \item{log}{ \code{logical(1)} Enable logging. } \item{threshold}{ \code{character(1)} Logging threshold as defined in \code{futile.logger}. } \item{logdir}{ \code{character(1)} Log files directory. When not provided, log messages are returned to stdout. } \item{resultdir}{ \code{character(1)} Job results directory. When not provided, results are returned as an \R{} object (list) to the workspace. } \item{jobname}{ \code{character(1)} Job name that is prepended to log and result files. Default is "BPJOB".
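The effect of the \code{tasks} argument can be sketched as follows (assumes BiocParallel is installed and SOCK workers can be started; the values are illustrative only):

```r
library(BiocParallel)

X <- 1:8

## tasks = 0 (default): X is split as evenly as possible across the
## workers, so each worker receives one chunk of several elements.
p0 <- SnowParam(workers = 2, tasks = 0L)

## tasks = length(X): each element of X is dispatched separately,
## giving finer-grained load balancing at higher communication cost.
p8 <- SnowParam(workers = 2, tasks = 8L)

res <- bplapply(X, sqrt, BPPARAM = p8)
unlist(res)
```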
} \item{force.GC}{ \code{logical(1)} Whether to invoke the garbage collector after each call to \code{FUN}. The default (\code{FALSE}, do not explicitly call the garbage collector) rarely needs to be changed. } \item{fallback}{ \code{logical(1)} When \code{TRUE}, fall back to using \code{SerialParam} when \code{SnowParam} has not been started and the number of workers is no greater than 1. } \item{manager.hostname}{ \code{character(1)} Host name of manager node. See 'Global Options', below. } \item{manager.port}{ \code{integer(1)} Port on manager with which workers communicate. See 'Global Options', below. } \item{\dots}{ Additional arguments passed to \code{\link{makeCluster}}. } } \section{Accessors: Logging and results}{ In the following code, \code{x} is a \code{SnowParam} object. \describe{ \item{\code{bpprogressbar(x)}, \code{bpprogressbar(x) <- value}:}{ Get or set the value to enable text progress bar. \code{value} must be a \code{logical(1)}. } \item{\code{bpjobname(x)}, \code{bpjobname(x) <- value}:}{ Get or set the job name. } \item{\code{bpRNGseed(x)}, \code{bpRNGseed(x) <- value}:}{ Get or set the seed for random number generation. \code{value} must be a \code{numeric(1)} or \code{NULL}. } \item{\code{bplog(x)}, \code{bplog(x) <- value}:}{ Get or set the value to enable logging. \code{value} must be a \code{logical(1)}. } \item{\code{bpthreshold(x)}, \code{bpthreshold(x) <- value}:}{ Get or set the logging threshold. \code{value} must be a \code{character(1)} string of one of the levels defined in the \code{futile.logger} package: \dQuote{TRACE}, \dQuote{DEBUG}, \dQuote{INFO}, \dQuote{WARN}, \dQuote{ERROR}, or \dQuote{FATAL}. } \item{\code{bplogdir(x)}, \code{bplogdir(x) <- value}:}{ Get or set the directory for the log file. \code{value} must be a \code{character(1)} path, not a file name. The file is written out as BPLOG.out. If no \code{logdir} is provided and \code{bplog=TRUE} log messages are sent to stdout.
} \item{\code{bpresultdir(x)}, \code{bpresultdir(x) <- value}:}{ Get or set the directory for the result files. \code{value} must be a \code{character(1)} path, not a file name. Separate files are written for each job with the prefix TASK (e.g., TASK1, TASK2, etc.). When no \code{resultdir} is provided the results are returned to the session as a \code{list}. } } } \section{Accessors: Back-end control}{ In the code below \code{x} is a \code{SnowParam} object. See the ?\code{BiocParallelParam} man page for details on these accessors. \itemize{ \item \code{bpworkers(x)}, \code{bpworkers(x) <- value}, \code{bpnworkers(x)} \item \code{bptasks(x)}, \code{bptasks(x) <- value} \item \code{bpstart(x)} \item \code{bpstop(x)} \item \code{bpisup(x)} \item \code{bpbackend(x)}, \code{bpbackend(x) <- value} } } \section{Accessors: Error Handling}{ In the code below \code{x} is a \code{SnowParam} object. See the ?\code{BiocParallelParam} man page for details on these accessors. \itemize{ \item \code{bpstopOnError(x)}, \code{bpstopOnError(x) <- value} } } \section{Methods: Evaluation}{ In the code below \code{BPPARAM} is a \code{SnowParam} object. Full documentation for these functions is on separate man pages: see ?\code{bpmapply}, ?\code{bplapply}, ?\code{bpvec}, ?\code{bpiterate} and ?\code{bpaggregate}. \itemize{ \item \code{bpmapply(FUN, ..., MoreArgs=NULL, SIMPLIFY=TRUE, USE.NAMES=TRUE, BPPARAM=bpparam())} \item \code{bplapply(X, FUN, ..., BPPARAM=bpparam())} \item \code{bpvec(X, FUN, ..., AGGREGATE=c, BPPARAM=bpparam())} \item \code{bpiterate(ITER, FUN, ..., BPPARAM=bpparam())} \item \code{bpaggregate(x, data, FUN, ..., BPPARAM=bpparam())} } } \section{Methods: Other}{ In the code below \code{x} is a \code{SnowParam} object. \describe{ \item{\code{show(x)}:}{Displays the \code{SnowParam} object.} \item{\code{bpok(x)}:}{ Returns a \code{logical()} vector: FALSE for any jobs that resulted in an error.
\code{x} is the result list output by a \code{BiocParallel} function such as \code{bplapply} or \code{bpmapply}. } } } \section{Coercion}{ \describe{ \item{\code{as(from, "SnowParam")}:}{ Creates a \code{SnowParam} object from a \code{SOCKcluster} or \code{spawnedMPIcluster} object. Instances created in this way cannot be started or stopped. } } } \section{Global Options}{ The environment variable \code{BIOCPARALLEL_WORKER_NUMBER} and the global option \code{mc.cores} influence the number of workers determined by \code{snowWorkers()} (described above) or \code{multicoreWorkers()} (see \code{\link{multicoreWorkers}}). Workers communicate to the master through socket connections. Socket connections require a hostname and port. These are determined by arguments \code{manager.hostname} and \code{manager.port}; default values are influenced by global options. The default manager hostname is "localhost" when the number of workers is specified as a \code{numeric(1)}, and \code{Sys.info()[["nodename"]]} otherwise. The hostname can be over-ridden by the environment variable \code{MASTER}, or the global option \code{bphost} (e.g., \code{options(bphost=Sys.info()[["nodename"]])}). The default port is chosen as a random value between 11000 and 11999. The port may be over-ridden by the environment variable \code{R_PARALLEL_PORT} or \code{PORT}, and by the option \code{ports}, e.g., \code{options(ports=12345L)}. } \author{Martin Morgan and Valerie Obenchain.} \seealso{ \itemize{ \item \code{register} for registering parameter classes for use in parallel evaluation.
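The port override described under 'Global Options' can be sketched as follows (assumes BiocParallel is installed and workers can be started; 12345L is an arbitrary port chosen for illustration, e.g., to satisfy firewall rules):

```r
library(BiocParallel)

## Pin the manager port instead of letting BiocParallel pick a
## random port in 11000-11999.
options(ports = 12345L)

p <- SnowParam(2)
bpstart(p)   ## workers connect back on the fixed port
bpisup(p)    ## TRUE while the cluster is running
bpstop(p)
```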
\item \code{\link{MulticoreParam}} for computing in shared memory \item \code{\link{DoparParam}} for computing with foreach \item \code{\link{SerialParam}} for non-parallel evaluation } } \examples{ ## ----------------------------------------------------------------------- ## Job configuration: ## ----------------------------------------------------------------------- ## SnowParam supports distributed memory computing. The object fields ## control the division of tasks, error handling, logging and result ## format. bpparam <- SnowParam() bpparam ## Fields are modified with accessors of the same name: bplog(bpparam) <- TRUE dir.create(resultdir <- tempfile()) bpresultdir(bpparam) <- resultdir bpparam ## ----------------------------------------------------------------------- ## Logging: ## ----------------------------------------------------------------------- ## When 'log == TRUE' the workers use a custom script (in BiocParallel) ## that enables logging and access to other job statistics. Log messages ## are returned as each job completes rather than waiting for all to ## finish. ## In 'fun', a value of 'x = 1' will throw a warning, 'x = 2' is ok ## and 'x = 3' throws an error. Because 'x = 1' sleeps, the warning ## should return after the error. X <- 1:3 fun <- function(x) { if (x == 1) { Sys.sleep(2) log(-x) ## warning } else if (x == 2) { x ## ok } else if (x == 3) { sqrt("FOO") ## error } } ## By default logging is off. Turn it on with the bplog()<- setter ## or by specifying 'log = TRUE' in the constructor. 
bpparam <- SnowParam(3, log = TRUE, stop.on.error = FALSE) tryCatch({ bplapply(X, fun, BPPARAM = bpparam) }, error=identity) ## When a 'logdir' location is given the messages are redirected to a ## file: \dontrun{ dir.create(logdir <- tempfile()) bplogdir(bpparam) <- logdir bplapply(X, fun, BPPARAM = bpparam) list.files(bplogdir(bpparam)) } ## ----------------------------------------------------------------------- ## Managing results: ## ----------------------------------------------------------------------- ## By default results are returned as a list. When 'resultdir' is given ## files are saved in the directory specified by job, e.g., 'TASK1.Rda', ## 'TASK2.Rda', etc. \dontrun{ dir.create(resultdir <- tempfile()) bpparam <- SnowParam(2, resultdir = resultdir) bplapply(X, fun, BPPARAM = bpparam) list.files(bpresultdir(bpparam)) } ## ----------------------------------------------------------------------- ## Error handling: ## ----------------------------------------------------------------------- ## When 'stop.on.error' is TRUE the process returns as soon as an error ## is thrown. ## When 'stop.on.error' is FALSE all computations are attempted. Partial ## results are returned along with errors. Use bptry() to see the ## partial results bpparam <- SnowParam(2, stop.on.error = FALSE) res <- bptry(bplapply(list(1, "two", 3, 4), sqrt, BPPARAM = bpparam)) res ## Calling bpok() on the result list returns TRUE for elements with no ## error. bpok(res) ## ----------------------------------------------------------------------- ## Random number generation: ## ----------------------------------------------------------------------- ## Random number generation is controlled with the 'RNGseed' field. 
## This seed is passed to parallel::clusterSetRNGStream ## which uses the L'Ecuyer-CMRG random number generator and distributes ## streams for each job bpparam <- SnowParam(3, RNGseed = 7739465) bplapply(seq_len(bpnworkers(bpparam)), function(i) rnorm(1), BPPARAM = bpparam) } \keyword{classes} \keyword{methods}

BiocParallel/man/bpaggregate.Rd

\name{bpaggregate} \alias{bpaggregate} \alias{bpaggregate,formula,BiocParallelParam-method} \alias{bpaggregate,matrix,BiocParallelParam-method} \alias{bpaggregate,data.frame,BiocParallelParam-method} \alias{bpaggregate,ANY,missing-method} \title{Apply a function on subsets of data frames} \description{ This is a parallel version of \code{\link[stats]{aggregate}}. } \usage{ \S4method{bpaggregate}{formula,BiocParallelParam}(x, data, FUN, ..., BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) \S4method{bpaggregate}{data.frame,BiocParallelParam}(x, by, FUN, ..., simplify=TRUE, BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) \S4method{bpaggregate}{matrix,BiocParallelParam}(x, by, FUN, ..., simplify=TRUE, BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions() ) \S4method{bpaggregate}{ANY,missing}(x, ..., BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions() ) } \arguments{ \item{x}{A \code{data.frame}, \code{matrix} or a formula. } \item{by}{A list of factors by which \code{x} is split; applicable when \code{x} is \code{data.frame} or \code{matrix}. } \item{data}{A \code{data.frame}; applicable when \code{x} is a \code{formula}. } \item{FUN}{Function to apply. } \item{...}{Additional arguments for \code{FUN}. } \item{simplify}{If set to \code{TRUE}, the return values of \code{FUN} will be simplified using \code{\link{simplify2array}}. } \item{BPPARAM}{An optional \code{\link{BiocParallelParam}} instance determining the parallel back-end to be used during evaluation.
} \item{BPREDO}{A \code{list} of output from \code{bpaggregate} with one or more failed elements. When a list is given in \code{BPREDO}, \code{bpok} is used to identify errors, tasks are rerun and inserted into the original results. } \item{BPOPTIONS}{ Additional options to control the behavior of the parallel evaluation, see \code{\link{bpoptions}}. } } \details{ \code{bpaggregate} is a generic with methods for \code{data.frame}, \code{matrix}, and \code{formula} objects. \code{x} is divided into subsets according to factors in \code{by}. Data chunks are sent to the workers, \code{FUN} is applied and results are returned as a \code{data.frame}. The function is similar in spirit to \code{\link[stats]{aggregate}} from the stats package but \code{\link[stats]{aggregate}} is not explicitly called. The \code{bpaggregate} \code{formula} method reformulates the call and dispatches to the \code{data.frame} method which in turn distributes data chunks to workers with \code{bplapply}. } \value{ See \code{\link[stats]{aggregate}}. } \author{ Martin Morgan \url{mailto:mtmorgan@fhcrc.org}.
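A minimal, self-contained illustration of the \code{data.frame} method, mirroring \code{stats::aggregate} semantics (hypothetical data; assumes BiocParallel is installed):

```r
library(BiocParallel)

## Toy data: six values in two groups.
df <- data.frame(value = c(1, 2, 3, 4, 5, 6),
                 group = rep(c("a", "b"), each = 3))

## Split 'value' by 'group'; mean() is applied to each subset on a
## worker and the results are assembled into a data.frame.
bpaggregate(df["value"], by = list(group = df$group), FUN = mean,
            BPPARAM = SerialParam())
```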
} \examples{ if (interactive() && require(Rsamtools) && require(GenomicAlignments)) { fl <- system.file("extdata", "ex1.bam", package="Rsamtools") param <- ScanBamParam(what = c("flag", "mapq")) gal <- readGAlignments(fl, param=param) ## Report the mean map quality by range cutoff: cutoff <- rep(0, length(gal)) cutoff[start(gal) > 1000 & start(gal) < 1500] <- 1 cutoff[start(gal) > 1500] <- 2 bpaggregate(as.data.frame(mcols(gal)$mapq), list(cutoff = cutoff), mean) } }

BiocParallel/man/bpiterate.Rd

\name{bpiterate} \alias{bpiterate} \alias{bpiterate,ANY,ANY,missing-method} \alias{bpiterate,ANY,ANY,SerialParam-method} \alias{bpiterate,ANY,ANY,BiocParallelParam-method} \alias{bpiterate,ANY,ANY,SnowParam-method} \alias{bpiterate,ANY,ANY,DoparParam-method} \alias{bpiterate,ANY,ANY,BatchtoolsParam-method} \alias{bpiterateAlong} \title{Parallel iteration over an indeterminate number of data chunks} \description{ \code{bpiterate} iterates over an indeterminate number of data chunks (e.g., records in a file). Each chunk is processed by parallel workers in an asynchronous fashion; as each worker finishes it receives a new chunk. Data are traversed a single time. When provided with a vector-like argument \code{ITER = X}, \code{bpiterate} uses \code{bpiterateAlong} to produce the sequence of elements \code{X[[1]]}, \code{X[[2]]}, etc. } \usage{ bpiterate( ITER, FUN, ..., BPREDO = list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions() ) \S4method{bpiterate}{ANY,ANY,missing}( ITER, FUN, ..., BPREDO = list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) \S4method{bpiterate}{ANY,ANY,BatchtoolsParam}( ITER, FUN, ..., REDUCE, init, reduce.in.order=FALSE, BPREDO = list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions() ) bpiterateAlong(X) } \arguments{ \item{X}{ An object (e.g., vector or list) with `length()` and `[[` methods available.
} \item{ITER}{ A function with no arguments that returns an object to process, generally a chunk of data from a file. When no objects are left (i.e., end of file) it should return NULL and continue to return NULL regardless of the number of times it is invoked after reaching the end of file. This function is run on the master. } \item{FUN}{ A function to process the object returned by \code{ITER}; run on parallel workers separate from the master. When \code{BPPARAM} is a \code{MulticoreParam}, \code{FUN} is `decorated` with additional arguments and therefore must have \dots in the signature. } \item{BPPARAM}{An optional \code{\link{BiocParallelParam}} instance determining the parallel back-end to be used during evaluation, or a \code{list} of \code{BiocParallelParam} instances, to be applied in sequence for nested calls to \code{bpiterate}. } \item{REDUCE}{Optional function that combines (reduces) output from \code{FUN}. As each worker returns, the data are combined with the \code{REDUCE} function. \code{REDUCE} takes 2 arguments; one is the current result and the other is the output of \code{FUN} from a worker that just finished.} \item{init}{Optional initial value for \code{REDUCE}; must be of the same type as the object returned from \code{FUN}. When supplied, \code{reduce.in.order} is set to TRUE.} \item{reduce.in.order}{Logical. When TRUE, REDUCE is applied to the results from the workers in the same order the tasks were sent out.} \item{BPREDO}{An output from \code{bpiterate} with one or more failed elements. This argument cannot be used with \code{BatchtoolsParam}. } \item{\dots}{Arguments to other methods, and named arguments for \code{FUN}.} \item{BPOPTIONS}{ Additional options to control the behavior of the parallel evaluation, see \code{\link{bpoptions}}. } } \details{ Supported for \code{SnowParam}, \code{MulticoreParam} and \code{BatchtoolsParam}.
\code{bpiterate} iterates through an unknown number of data chunks, dispatching chunks to parallel workers as they become available. In contrast, other \code{bp*apply} functions such as \code{bplapply} or \code{bpmapply} require the number of data chunks to be specified ahead of time. This quality makes \code{bpiterate} useful for iterating through files of unknown length. \code{ITER} serves up chunks of data until the end of the file is reached, at which point it returns NULL. Note that \code{ITER} should continue to return NULL regardless of the number of times it is invoked after reaching the end of the file. \code{FUN} is applied to each object (data chunk) returned by \code{ITER}. \code{bpiterateAlong()} provides an iterator for a vector or other object with \code{length()} and \code{[[} methods defined. It is used in place of the first argument \code{ITER=}. } \value{ By default, a \code{list} the same length as the number of chunks in \code{ITER()}. When \code{REDUCE} is used, the return is consistent with application of the reduction. When errors occur, the errors will be attached to the result as an attribute \code{errors}. } \author{ Valerie Obenchain \url{mailto:vobencha@fhcrc.org}. } \seealso{ \itemize{ \item \code{\link{bpvec}} for parallel, vectorized calculations. \item \code{\link{bplapply}} for parallel, lapply-like calculations. \item \code{\link{BiocParallelParam}} for details of \code{BPPARAM}. \item \code{\link{BatchtoolsParam}} for details of \code{BatchtoolsParam}.
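The iterator contract described above (serve chunks, then return NULL forever) is easy to satisfy with a closure. A sketch combining results on the fly with \code{REDUCE} (assumes BiocParallel is installed; \code{chunkIterator} is a hypothetical helper for illustration):

```r
library(BiocParallel)

## An ITER function that serves each chunk once, then NULL on every
## subsequent call -- as the contract requires.
chunkIterator <- function(chunks) {
    i <- 0L
    function() {
        i <<- i + 1L
        if (i > length(chunks)) NULL else chunks[[i]]
    }
}

ITER <- chunkIterator(list(1:3, 4:6, 7:9))
## Each chunk is summed by FUN; REDUCE folds the partial sums together
## as workers finish, so the full list of chunks is never held at once.
bpiterate(ITER, sum, REDUCE = `+`, BPPARAM = SerialParam())
```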
} } \examples{ ## A simple iterator ITER <- bpiterateAlong(1:10) result <- bpiterate(ITER, sqrt) ## alternatively, result <- bpiterate(1:10, sqrt) unlist(result) \dontrun{ if (require(Rsamtools) && require(RNAseqData.HNRNPC.bam.chr14) && require(GenomicAlignments) && require(ShortRead)) { ## ---------------------------------------------------------------------- ## Iterate through a BAM file ## ---------------------------------------------------------------------- ## Select a single file and set 'yieldSize' in the BamFile object. fl <- RNAseqData.HNRNPC.bam.chr14_BAMFILES[[1]] bf <- BamFile(fl, yieldSize = 300000) ## bamIterator() is initialized with a BAM file and returns a function. ## The returned function requires no arguments and iterates through the ## file returning data chunks the size of yieldSize. bamIterator <- function(bf) { done <- FALSE if (!isOpen(bf)) open(bf) function() { if (done) return(NULL) yld <- readGAlignments(bf) if (length(yld) == 0L) { close(bf) done <<- TRUE NULL } else yld } } ## FUN counts reads in a region of interest. roi <- GRanges("chr14", IRanges(seq(19e6, 107e6, by = 10e6), width = 10e6)) counter <- function(reads, roi, ...) { countOverlaps(query = roi, subject = reads) } ## Initialize the iterator. ITER <- bamIterator(bf) ## The number of chunks returned by ITER() determines the result length. bpparam <- MulticoreParam(workers = 3) ## bpparam <- BatchtoolsParam(workers = 3), see ?BatchtoolsParam bpiterate(ITER, counter, roi = roi, BPPARAM = bpparam) ## Re-initialize the iterator and combine on the fly with REDUCE: ITER <- bamIterator(bf) bpparam <- MulticoreParam(workers = 3) bpiterate(ITER, counter, REDUCE = sum, roi = roi, BPPARAM = bpparam) ## ---------------------------------------------------------------------- ## Iterate through a FASTQ file ## ---------------------------------------------------------------------- ## Set data chunk size with 'n' in the FastqStreamer object.
sp <- SolexaPath(system.file('extdata', package = 'ShortRead')) fl <- file.path(analysisPath(sp), "s_1_sequence.txt") ## Create an iterator that returns data chunks the size of 'n'. fastqIterator <- function(fqs) { done <- FALSE if (!isOpen(fqs)) open(fqs) function() { if (done) return(NULL) yld <- yield(fqs) if (length(yld) == 0L) { close(fqs) done <<- TRUE NULL } else yld } } ## The process function summarizes the number of times each sequence occurs. summary <- function(reads, ...) { ShortRead::tables(reads, n = 0)$distribution } ## Create a param. bpparam <- SnowParam(workers = 2) ## Initialize the streamer and iterator. fqs <- FastqStreamer(fl, n = 100) ITER <- fastqIterator(fqs) bpiterate(ITER, summary, BPPARAM = bpparam) ## Results from the workers are combined on the fly when REDUCE is used. ## Collapsing the data in this way can substantially reduce memory ## requirements. fqs <- FastqStreamer(fl, n = 100) ITER <- fastqIterator(fqs) bpiterate(ITER, summary, REDUCE = merge, all = TRUE, BPPARAM = bpparam) } } } \keyword{manip} \keyword{methods}

BiocParallel/man/bplapply.Rd

\name{bplapply} \alias{bplapply} \alias{bplapply,ANY,list-method} \alias{bplapply,ANY,missing-method} \alias{bplapply,ANY,BiocParallelParam-method} \alias{bplapply,ANY,DoparParam-method} \alias{bplapply,ANY,SerialParam-method} \alias{bplapply,ANY,SnowParam-method} \title{Parallel lapply-like functionality} \description{ \code{bplapply} applies \code{FUN} to each element of \code{X}. Any type of object \code{X} is allowed, provided \code{length}, \code{[}, and \code{[[} methods are available. The return value is a \code{list} of length equal to that of \code{X}, as with \code{\link[base]{lapply}}. } \usage{ bplapply(X, FUN, ..., BPREDO = list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) } \arguments{ \item{X}{ Any object for which methods \code{length}, \code{[}, and \code{[[} are implemented.
} \item{FUN}{ The \code{function} to be applied to each element of \code{X}. } \item{\dots}{ Additional arguments for \code{FUN}, as in \code{\link{lapply}}. } \item{BPPARAM}{ An optional \code{\link{BiocParallelParam}} instance determining the parallel back-end to be used during evaluation, or a \code{list} of \code{BiocParallelParam} instances, to be applied in sequence for nested calls to \pkg{BiocParallel} functions. } \item{BPREDO}{A \code{list} of output from \code{bplapply} with one or more failed elements. When a list is given in \code{BPREDO}, \code{bpok} is used to identify errors, tasks are rerun and inserted into the original results. } \item{BPOPTIONS}{ Additional options to control the behavior of the parallel evaluation, see \code{\link{bpoptions}}. } } \details{ See \code{methods(bplapply)} for additional methods, e.g., \code{method?bplapply("MulticoreParam")}. } \value{See \code{\link[base]{lapply}}.} \author{ Martin Morgan \url{mailto:mtmorgan@fhcrc.org}. Original code as attributed in \code{\link{mclapply}}. } \seealso{ \itemize{ \item \code{\link{bpvec}} for parallel, vectorized calculations. \item \code{\link{BiocParallelParam}} for possible values of \code{BPPARAM}. } } \examples{ methods("bplapply") ## ten tasks (1:10), so ten calls to FUN on the default registered ## parallel back-end. Compare with bpvec. fun <- function(v) { message("working") ## 10 tasks sqrt(v) } bplapply(1:10, fun) } \keyword{manip}

BiocParallel/man/bploop.Rd

\name{bploop} \Rdversion{1.1} % Class \alias{bploop} % managers \alias{bploop.lapply} \alias{bploop.iterate} \alias{bprunMPIworker} \title{Internal Functions for SNOW-style Parallel Evaluation} \description{ The functions documented on this page are primarily for use within \pkg{BiocParallel} to enable SNOW-style parallel evaluation, using communication between manager and worker nodes through sockets.
} \usage{ \S3method{bploop}{lapply}(manager, X, FUN, ARGS, BPPARAM, BPOPTIONS = bpoptions(), BPREDO, ...) \S3method{bploop}{iterate}(manager, ITER, FUN, ARGS, BPPARAM, BPOPTIONS = bpoptions(), REDUCE, BPREDO, init, reduce.in.order, ...) } \arguments{ \item{manager}{An object representing the manager node. For workers, this is the node to which the worker will communicate. For managers, this is the form of iteration -- \code{lapply} or \code{iterate}.} \item{X}{A vector of jobs to be performed.} \item{FUN}{A function to apply to each job.} \item{ARGS}{A list of arguments to be passed to \code{FUN}.} \item{BPPARAM}{An instance of a \code{BiocParallelParam} class.} \item{ITER}{A function used to generate jobs. No more jobs are available when \code{ITER()} returns \code{NULL}.} \item{REDUCE}{(Optional) A function combining two values returned by \code{FUN} into a single value.} \item{init}{(Optional) Initial value for reduction.} \item{reduce.in.order}{(Optional) logical(1) indicating that reduction must occur in the order jobs are dispatched (\code{TRUE}) or that reduction can occur in the order jobs are completed (\code{FALSE}).} \item{BPREDO}{(Optional) A \code{list} of output from \code{bplapply} or \code{bpiterate} with one or more failed elements.} \item{\ldots}{Additional arguments, ignored in all cases.} \item{BPOPTIONS}{ Additional options to control the behavior of the parallel evaluation, see \code{\link{bpoptions}}. } } \details{ Workers enter a loop. They wait to receive a message (\R list) from the \code{manager}. The message contains a \code{type} element, with evaluation as follows: \describe{ \item{\dQuote{EXEC}}{Execute the \R{} code in the message, returning the result to the \code{manager}.} \item{\dQuote{DONE}}{Signal termination to the \code{manager}, terminate the worker.} } Managers under \code{lapply} dispatch pre-determined jobs, \code{X}, to workers, collecting the results from and dispatching new jobs to the first available worker. 
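The EXEC/DONE message protocol described above can be sketched, purely schematically and in a single process, as follows. The message structure (\code{type}, \code{fun}, \code{args}) is invented for illustration; the real worker loop exchanges messages over socket connections and is not part of the user-facing API.

```r
## Schematic, single-process sketch of the SNOW-style worker loop.
## The 'inbox' stands in for messages received from the manager.
worker_loop <- function(inbox) {
    results <- list()
    for (msg in inbox) {
        if (msg$type == "DONE")
            break                               # terminate the worker
        if (msg$type == "EXEC")                 # evaluate, keep the result
            results[[length(results) + 1L]] <-
                do.call(msg$fun, msg$args)
    }
    results
}

inbox <- list(
    list(type = "EXEC", fun = sqrt, args = list(4)),
    list(type = "EXEC", fun = sum,  args = list(1:10)),
    list(type = "DONE")
)
worker_loop(inbox)
```

In the real implementation the result of each EXEC message is returned to the manager rather than accumulated locally.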
The manager returns a list of results, in a one-to-one correspondence with the order of jobs supplied, when all jobs have been evaluated. Managers under \code{iterate} dispatch an undetermined number of jobs to workers, collecting previous jobs from and dispatching new jobs to the first available worker. Dispatch continues until available jobs are exhausted. The return value is by default a list of results in a one-to-one correspondence with the order of jobs supplied. The return value is influenced by \code{REDUCE}, \code{init}, and \code{reduce.in.order}. } \author{ Valerie Obenchain, Martin Morgan. Derived from similar functionality in the \pkg{snow} and \pkg{parallel} packages. } \examples{ ## These functions are not meant to be called by the end user. } BiocParallel/man/bpmapply.Rd0000644000175200017520000000554614516004410017020 0ustar00biocbuildbiocbuild\name{bpmapply} \alias{bpmapply} \alias{bpmapply,ANY,list-method} \alias{bpmapply,ANY,missing-method} \alias{bpmapply,ANY,BiocParallelParam-method} \title{Parallel mapply-like functionality} \description{ \code{bpmapply} applies \code{FUN} to first elements of \code{...}, the second elements and so on. Any type of object in \code{...} is allowed, provided \code{length}, \code{[}, and \code{[[} methods are available. The return value is a \code{list} of length equal to the length of all objects provided, as with \code{\link[base]{mapply}}. } \usage{ bpmapply(FUN, ..., MoreArgs=NULL, SIMPLIFY=TRUE, USE.NAMES=TRUE, BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) \S4method{bpmapply}{ANY,missing}(FUN, ..., MoreArgs=NULL, SIMPLIFY=TRUE, USE.NAMES=TRUE, BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) \S4method{bpmapply}{ANY,BiocParallelParam}(FUN, ..., MoreArgs=NULL, SIMPLIFY=TRUE, USE.NAMES=TRUE, BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) } \arguments{ \item{FUN}{The \code{function} to be applied to each element passed via \code{...}. 
} \item{\dots}{Objects for which methods \code{length}, \code{[}, and \code{[[} are implemented. All objects must have the same length, or shorter objects will be replicated to have length equal to the longest. } \item{MoreArgs}{List of additional arguments to \code{FUN}. } \item{SIMPLIFY}{ If \code{TRUE} the result will be simplified using \code{\link{simplify2array}}. } \item{USE.NAMES}{If \code{TRUE} the result will be named. } \item{BPPARAM}{An optional \code{\link{BiocParallelParam}} instance defining the parallel back-end to be used during evaluation. } \item{BPREDO}{A \code{list} of output from \code{bpmapply} with one or more failed elements. When a list is given in \code{BPREDO}, \code{bpok} is used to identify errors, tasks are rerun and inserted into the original results. } \item{BPOPTIONS}{ Additional options to control the behavior of the parallel evaluation, see \code{\link{bpoptions}}. } } \details{ See \code{methods("bpmapply")} for additional methods, e.g., \code{method?bpmapply("MulticoreParam")}. } \value{See \code{\link[base]{mapply}}.} \author{ Michel Lang . Original code as attributed in \code{\link{mclapply}}. } \seealso{ \itemize{ \item \code{\link{bpvec}} for parallel, vectorized calculations. \item \code{\link{BiocParallelParam}} for possible values of \code{BPPARAM}.
} } \examples{ methods("bpmapply") fun <- function(greet, who) { paste(Sys.getpid(), greet, who) } greet <- c("morning", "night") who <- c("sun", "moon") param <- bpparam() original <- bpworkers(param) bpworkers(param) <- 2 result <- bpmapply(fun, greet, who, BPPARAM = param) cat(paste(result, collapse="\n"), "\n") bpworkers(param) <- original } \keyword{manip} BiocParallel/man/bpok.Rd0000644000175200017520000000730114516004410016116 0ustar00biocbuildbiocbuild\name{bpok} \alias{bpok} \alias{bperrorTypes} \alias{bpresult} \title{Resume computation with partial results} \description{ Identifies unsuccessful results returned from \code{bplapply}, \code{bpmapply}, \code{bpvec}, \code{bpaggregate} or \code{bpvectorize}. } \usage{ bpok(x, type = bperrorTypes()) bperrorTypes() bpresult(x) } \arguments{ \item{x}{ Results returned from a call to \code{bp*apply}. } \item{type}{ A character(1) error type, from the vector returned by \code{bperrorTypes()} and described below } } \details{ \code{bpok()} returns a \code{logical()} vector: FALSE for any jobs that resulted in an error. \code{x} is the result list output by \code{bplapply}, \code{bpmapply}, \code{bpvec}, \code{bpaggregate} or \code{bpvectorize}. \code{bperrorTypes()} returns a character() vector of possible error types generated during parallel evaluation. Types are: \itemize{ \item{\code{bperror}: Any of the following errors. This is the default value for \code{bpok()}.} \item{\code{remote_error}: An \emph{R} error occurring while evaluating \code{FUN()}, e.g., taking the square root of a character vector, \code{sqrt("One")}.} \item{\code{unevaluated_error}: When \code{*Param(stop.on.error = TRUE)} (default), a remote error halts evaluation of other tasks assigned to the same worker. 
The return value for these unevaluated elements is an error of type \code{unevaluated_error}.} \item{\code{not_available_error}: Only produced by \code{DoparParam()} when a remote error occurs during evaluation of an element of \code{X} -- \code{DoparParam()} sets all values after the remote error to this class.} \item{\code{worker_comm_error}: An error occurring while trying to communicate with workers, e.g., when a worker quits unexpectedly. When this type of error occurs, the length of the result may differ from the length of the input \code{X}. } } \code{bpresult()}, when applied to an object whose class is one of the error types, returns the list of task results. } \author{Michel Lang, Martin Morgan, Valerie Obenchain, and Jiefei Wang} \seealso{ \code{\link{tryCatch}} } \examples{
## -----------------------------------------------------------------------
## Catch errors:
## -----------------------------------------------------------------------

## By default 'stop.on.error' is TRUE in BiocParallelParam objects. If
## 'stop.on.error' is TRUE an ill-fated bplapply() simply stops,
## displaying the error message.
param <- SnowParam(workers = 2, stop.on.error = TRUE)
result <- tryCatch({
    bplapply(list(1, "two", 3), sqrt, BPPARAM = param)
}, error=identity)
result
class(result)
bpresult(result)

## If 'stop.on.error' is FALSE then the computation continues. Errors
## are signalled but the full evaluation can be retrieved.
param <- SnowParam(workers = 2, stop.on.error = FALSE)
X <- list(1, "two", 3)
result <- bptry(bplapply(X, sqrt, BPPARAM = param))
result

## Check for errors:
fail <- !bpok(result)
fail

## Access the traceback with attr():
tail(attr(result[[2]], "traceback"), 5)

## -----------------------------------------------------------------------
## Resume calculations:
## -----------------------------------------------------------------------

## The 'resume' mechanism is triggered by supplying a list of partial
## results as 'BPREDO'.
Data elements that failed are rerun and merged
## with previous results.

## A call of sqrt() on the character "2" returns an error. Fix the input
## data by changing the character "2" to a numeric 2:
X_mod <- list(1, 2, 3)
bplapply(X_mod, sqrt, BPPARAM = param, BPREDO = result)
} BiocParallel/man/bpoptions.Rd0000644000175200017520000000572114516004410017204 0ustar00biocbuildbiocbuild\name{bpoptions} \alias{bpoptions} \title{Additional options to parallel evaluation} \description{ This function is used to pass additional options to \code{bplapply()} and other functions. One use case is to use the argument \code{BPOPTIONS} to temporarily change a parameter of \code{BPPARAM} (e.g. enabling the progressbar). A second use case is to change the behavior of the parallel evaluation (e.g. manually exporting some variables to the worker). } \usage{ bpoptions( workers, tasks, jobname, log, logdir, threshold, resultdir, stop.on.error, timeout, exportglobals, exportvariables, progressbar, RNGseed, force.GC, fallback, exports, packages, ...
) } \arguments{ \item{workers}{integer(1) or character() parameter for \code{BPPARAM}; see \code{\link{bpnworkers}}.} \item{tasks}{integer(1) parameter for \code{BPPARAM}; see \code{\link{bptasks}}.} \item{jobname}{character(1) parameter for \code{BPPARAM}; see \code{\link{bpjobname}}.} \item{log}{logical(1) parameter for \code{BPPARAM}; see \code{\link{bplog}}.} \item{logdir}{character(1) parameter for \code{BPPARAM}; see \code{\link{bplogdir}}.} \item{threshold}{ A parameter for \code{BPPARAM}; see \code{\link{bpthreshold}}.} \item{resultdir}{character(1) parameter for \code{BPPARAM}; see \code{\link{bpresultdir}}.} \item{stop.on.error}{logical(1) parameter for \code{BPPARAM}; see \code{\link{bpstopOnError}}.} \item{timeout}{integer(1) parameter for \code{BPPARAM}; see \code{\link{bptimeout}}.} \item{exportglobals}{logical(1) parameter for \code{BPPARAM}; see \code{\link{bpexportglobals}}.} \item{exportvariables}{A parameter for \code{BPPARAM}; see \code{\link{bpexportvariables}}.} \item{progressbar}{logical(1) parameter for \code{BPPARAM}; see \code{\link{bpprogressbar}}.} \item{RNGseed}{integer(1) parameter for \code{BPPARAM}; see \code{\link{bpRNGseed}}.} \item{force.GC}{logical(1) parameter for \code{BPPARAM}; see \code{\link{bpforceGC}}.} \item{fallback}{logical(1) parameter for \code{BPPARAM}; see \code{\link{bpfallback}}.} \item{exports}{character() The names of the variables in the global environment which need to be exported to the global environment of the worker. This option works independently of the option \code{exportvariables}.} \item{packages}{character() The packages that need to be attached by the worker prior to the evaluation of the task. This option works independently of the option \code{exportvariables}.} \item{...}{ Additional arguments which may (or may not) work for some specific type of \code{BPPARAM}.
} } \value{ A list of options } \author{Jiefei Wang} \seealso{ \code{\link{BiocParallelParam}}, \code{\link{bplapply}}, \code{\link{bpiterate}}. } \examples{ p <- SerialParam() bplapply(1:5, function(x) Sys.sleep(1), BPPARAM = p, BPOPTIONS = bpoptions(progressbar = TRUE, tasks = 5L)) } \keyword{manip} BiocParallel/man/bpschedule.Rd0000644000175200017520000000224314516004410017301 0ustar00biocbuildbiocbuild\name{bpschedule} \alias{bpschedule} \alias{bpschedule,missing-method} \alias{bpschedule,ANY-method} \title{Schedule back-end Params} \description{ Use functions on this page to influence scheduling of parallel processing. } \usage{ bpschedule(x) } \arguments{ \item{x}{ An instance of a \code{BiocParallelParam} class, e.g., \code{\link{MulticoreParam}}, \code{\link{SnowParam}}, \code{\link{DoparParam}}. \code{x} can be missing, in which case the default back-end (see \code{\link{register}}) is used. } } \details{ \code{bpschedule} returns a logical(1) indicating whether the parallel evaluation should occur at this point. } \value{ \code{bpschedule} returns a scalar logical. } \author{ Martin Morgan \url{mailto:mtmorgan@fhcrc.org}. } \seealso{ \code{\link{BiocParallelParam}} for possible values of \code{x}. } \examples{ bpschedule(SnowParam()) # TRUE bpschedule(MulticoreParam(2)) # FALSE on windows p <- MulticoreParam() bpschedule(p) # TRUE bplapply(1:2, function(i, p) { bpschedule(p) # FALSE }, p = p, BPPARAM=p) } \keyword{manip} BiocParallel/man/bptry.Rd0000644000175200017520000000374514516004410016333 0ustar00biocbuildbiocbuild\name{bptry} \alias{bptry} \title{Try expression evaluation, recovering from bperror signals} \description{ This function is meant to be used as a wrapper around \code{bplapply()} and friends, returning the evaluated expression rather than signalling an error. 
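The wrapping idea behind \code{bptry()} can be illustrated with plain base R: evaluation is wrapped in \code{tryCatch()} and the condition object is returned instead of being re-signalled. This is a sketch of the pattern only, not \pkg{BiocParallel}'s implementation, and \code{try_quietly} is an invented name.

```r
## Base-R sketch of the bptry() idea: return the condition object
## rather than signalling an error to the caller.
try_quietly <- function(expr) {
    tryCatch(expr, error = function(condition) condition)
}

res <- try_quietly(sqrt("two"))   # evaluation fails ...
inherits(res, "error")            # ... but the condition is returned
conditionMessage(res)

try_quietly(sqrt(4))              # successful evaluation passes through
```

\code{bptry()} additionally installs handlers for the \code{bplist_error} and \code{bperror} condition classes so that partially evaluated results can be recovered.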
} \usage{ bptry(expr, ..., bplist_error, bperror) } \arguments{ \item{expr}{An R expression; see \code{\link{tryCatch}}.} \item{bplist_error}{ A \sQuote{handler} function of a single argument, used to catch \code{bplist_error} conditions signalled by \code{expr}. A \code{bplist_error} condition is signalled when an element of \code{bplapply} and other iterations contains an evaluation that failed. When missing, the default retrieves the \dQuote{result} attribute from the error, containing the partially evaluated results. Setting \code{bplist_error=identity} returns the evaluated condition. Setting \code{bplist_error=stop} passes the condition to other handlers, notably the handler provided by \code{bperror}. } \item{bperror}{ A \sQuote{handler} function of a single argument, used to catch \code{bperror} conditions signalled by \code{expr}. A \code{bperror} is the base class of all errors signalled by \pkg{BiocParallel} code. When missing, the default returns the condition without signalling an error. } \item{\dots}{ Additional named handlers passed to \code{tryCatch()}. These user-provided handlers are evaluated before default handlers \code{bplist_error}, \code{bperror}. } } \value{ The partially evaluated list of results. } \author{Martin Morgan \email{martin.morgan@roswellpark.org}} \seealso{ \code{\link{bpok}}, \code{\link{tryCatch}}, \code{\link{bplapply}}.
} \examples{ param = registered()[[1]] param X = list(1, "2", 3) bptry(bplapply(X, sqrt)) # bplist_error handler result <- bptry(bplapply(X, sqrt), bplist_error=identity) # bperror handler result bpresult(result) } \keyword{manip} BiocParallel/man/bpvalidate.Rd0000644000175200017520000001206114516004410017275 0ustar00biocbuildbiocbuild\name{bpvalidate} \alias{bpvalidate} \alias{BPValidate-class} \alias{show,BPValidate-method} \title{Tools for developing functions for parallel execution in distributed memory} \description{ \code{bpvalidate} interrogates the function environment and search path to locate undefined symbols. } \usage{ bpvalidate(fun, signal = c("warning", "error", "silent")) } \arguments{ \item{fun}{The function to be checked. \code{typeof(fun)} must return either \code{"closure"} or \code{"builtin"}.} \item{signal}{\code{character(1)} matching \code{"warning", "error", "silent"} or a function with signature \code{(..., call.)} to be invoked when reporting errors. Using \code{"silent"} suppresses output; \code{"warning"} and \code{"error"} emit warnings or errors when \code{fun} contains references to unknown variables or variables defined in the global environment (and hence not serialized to workers).} } \details{ \code{bpvalidate} tests if a function can be run in a distributed memory environment (e.g., SOCK clusters, Windows machines). \code{bpvalidate} looks in the environment of \code{fun}, in the NAMESPACE exports of libraries loaded in \code{fun}, and along the search path to identify any symbols outside the scope of \code{fun}. \code{bpvalidate} can be used to check functions passed to the bp* family of functions in \code{BiocParallel} or other packages that support parallel evaluation on clusters such as \code{snow}, \code{Rmpi}, etc. \describe{ \item{testing package functions}{ The environment of a function defined inside a package is the NAMESPACE of the package. 
It is important to test these functions as they will be called from within the package, with the appropriate environment. Specifically, do not copy/paste the function into the workspace; once this is done the GlobalEnv becomes the function environment. To test a package function, load the package, then call the function by name (myfun) or explicitly (mypkg:::myfun) if not exported. } \item{testing workspace functions}{ The environment of a function defined in the workspace is the GlobalEnv. Because these functions do not have an associated package NAMESPACE, the functions and variables used in the body must be explicitly passed or defined. See examples. Defining functions in the workspace is often done during development or testing. If the function is later moved inside a package, it can be rewritten in a more lightweight form by taking advantage of imported symbols in the package NAMESPACE. } } NOTE: \code{bpvalidate} does not currently work on Generics. } \value{ An object of class \code{BPValidate} summarizing symbols identified in the global environment or search path, or undefined in the environments the function was defined in. Details are only available via \code{show()}. } \author{ Martin Morgan \url{mailto:mtmorgan.bioc@gmail.com} and Valerie Obenchain.
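The core question \code{bpvalidate} asks -- which symbols does a function reference that it does not itself define -- can be explored with \pkg{codetools} (listed in this package's Imports). This sketch is only a flavor of the check; \code{bpvalidate}'s actual analysis of NAMESPACE exports and the search path is more thorough.

```r
## Locate symbols a function references but does not define.
library(codetools)

fun <- function(fl) countBam(fl)          # 'countBam' is not defined here
globals <- findGlobals(fun, merge = FALSE)
globals$functions                         # functions resolved outside 'fun'
"countBam" %in% globals$functions         # flagged for further scrutiny
```

A symbol reported this way is only a candidate problem: it may legitimately resolve in an attached package on the worker, which is the part of the analysis \code{bpvalidate} performs.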
} \examples{ ## --------------------------------------------------------------------- ## Interactive use ## --------------------------------------------------------------------- fun <- function() .__UNKNOWN_SYMBOL__ bpvalidate(fun, "silent") ## --------------------------------------------------------------------- ## Testing package functions ## --------------------------------------------------------------------- \dontrun{ library(myPkg) ## Test exported functions by name or the double colon: bpvalidate(myExportedFun) bpvalidate(myPkg::myExportedFun) ## Non-exported functions are called with the triple colon: bpvalidate(myPkg:::myInternalFun) } ## --------------------------------------------------------------------- ## Testing workspace functions ## --------------------------------------------------------------------- ## Functions defined in the workspace have the .GlobalEnv as their ## environment. Often the symbols used inside the function body ## are not defined in .GlobalEnv and must be passed explicitly. ## Loading libraries: ## In 'fun1' countBam() is flagged as unknown: fun1 <- function(fl, ...) countBam(fl) v <- bpvalidate(fun1) ## countBam() is not defined in .GlobalEnv and must be passed as ## an argument or made available by loading the library. fun2 <- function(fl, ...) { Rsamtools::countBam(fl) } v <- bpvalidate(fun2) ## Passing arguments: ## 'param' is defined in the workspace but not passed to 'fun3'. ## bpvalidate() flags 'param' as being found '.GlobalEnv' which means ## it is not defined in the function environment or inside the function. library(Rsamtools) param <- ScanBamParam(flag=scanBamFlag(isMinusStrand=FALSE)) fun3 <- function(fl, ...) { Rsamtools::countBam(fl, param=param) } v <- bpvalidate(fun3) ## 'param' is explicitly passed by adding it as a formal argument. 
fun4 <- function(fl, ..., param) {
    Rsamtools::countBam(fl, param=param)
}
bpvalidate(fun4)

## The corresponding call to a bp* function includes 'param':
\dontrun{
bplapply(files, fun4, param=param, BPPARAM=SnowParam(2))
}
} \keyword{manip} BiocParallel/man/bpvec.Rd0000644000175200017520000001043614516004410016265 0ustar00biocbuildbiocbuild\name{bpvec} \alias{bpvec} \alias{bpvec,ANY,missing-method} \alias{bpvec,ANY,list-method} \alias{bpvec,ANY,BiocParallelParam-method} \title{Parallel, vectorized evaluation} \description{ \code{bpvec} applies \code{FUN} to subsets of \code{X}. Any type of object \code{X} is allowed, provided \code{length} and \code{[} are defined on \code{X}. \code{FUN} is a function such that \code{length(FUN(X)) == length(X)}. The objects returned by \code{FUN} are concatenated by \code{AGGREGATE} (\code{c()} by default). The return value is \code{FUN(X)}. } \usage{ bpvec(X, FUN, ..., AGGREGATE=c, BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) } \arguments{ \item{X}{ Any object for which methods \code{length} and \code{[} are implemented. } \item{FUN}{ A function to be applied to subsets of \code{X}. The relationship between \code{X} and \code{FUN(X)} is 1:1, so that \code{length(FUN(X, ...)) == length(X)}. The return values of separate calls to \code{FUN} are concatenated with \code{AGGREGATE}. } \item{\dots}{Additional arguments for \code{FUN}. } \item{AGGREGATE}{A function taking any number of arguments \code{...} called to reduce results (elements of the \code{...} argument of \code{AGGREGATE}) from parallel jobs. The default, \code{c}, concatenates objects and is appropriate for vectors; \code{rbind} might be appropriate for data frames. } \item{BPPARAM}{ An optional \code{\link{BiocParallelParam}} instance determining the parallel back-end to be used during evaluation, or a \code{list} of \code{BiocParallelParam} instances, to be applied in sequence for nested calls to \pkg{BiocParallel} functions.
} \item{BPREDO}{A \code{list} of output from \code{bpvec} with one or more failed elements. When a list is given in \code{BPREDO}, \code{bpok} is used to identify errors, tasks are rerun and inserted into the original results. } \item{BPOPTIONS}{ Additional options to control the behavior of the parallel evaluation, see \code{\link{bpoptions}}. } } \details{ This method creates a vector of indices for \code{X} that divide the elements as evenly as possible given the number of \code{bpworkers()} and \code{bptasks()} of \code{BPPARAM}. Indices and data are passed to \code{bplapply} for parallel evaluation. The distinction between \code{bpvec} and \code{bplapply} is that \code{bplapply} applies \code{FUN} to each element of \code{X} separately whereas \code{bpvec} assumes the function is vectorized, e.g., \code{c(FUN(x[1]), FUN(x[2]))} is equivalent to \code{FUN(x[1:2])}. This approach can be more efficient than \code{bplapply} but requires the assumption that \code{FUN} takes a vector input and creates a vector output of the same length as the input, which does not depend on partitioning of the vector. This behavior is consistent with \code{parallel:::pvec} and the \code{?pvec} man page should be consulted for further details. } \value{ The result should be identical to \code{FUN(X, ...)} (assuming that \code{AGGREGATE} is set appropriately). When evaluation of individual elements of \code{X} results in an error, the result is a \code{list} with the same geometry (i.e., \code{lengths()}) as the split applied to \code{X} to create chunks for parallel evaluation; one or more elements of the list contain a \code{bperror} element, indicating that the vectorized calculation failed for at least one of the index values in that chunk. An error is also signaled when \code{FUN(X)} does not return an object of the same length as \code{X}; this condition is only detected when the number of elements in \code{X} is greater than the number of workers.
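The split-apply-combine strategy just described can be sketched in base R. Here a fixed \code{workers} count stands in for the values of \code{bpworkers()}/\code{bptasks()}, and evaluation is serial; only the chunking and \code{AGGREGATE} logic is illustrated, not \pkg{BiocParallel}'s implementation.

```r
## Base-R sketch of bpvec()'s strategy: split indices into roughly
## even chunks, apply the vectorized FUN to each chunk, combine with
## AGGREGATE.
pvec_sketch <- function(X, FUN, workers = 2L, AGGREGATE = c) {
    idx <- parallel::splitIndices(length(X), workers)  # even index chunks
    chunks <- lapply(idx, function(i) FUN(X[i]))       # one call per chunk
    do.call(AGGREGATE, chunks)                         # concatenate results
}

identical(pvec_sketch(1:10, sqrt), sqrt(1:10))
```

Because \code{FUN} is assumed to be vectorized, the combined result is identical to \code{FUN(X)} regardless of how the indices are partitioned.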
} \author{Martin Morgan \url{mailto:mtmorgan@fhcrc.org}.} \seealso{ \code{\link{bplapply}} for parallel lapply. \code{\link{BiocParallelParam}} for possible values of \code{BPPARAM}. \code{\link{pvec}} for background. } \examples{
methods("bpvec")

## ten tasks (1:10), called with as many back-end elements as are
## specified by BPPARAM. Compare with bplapply
fun <- function(v) {
    message("working")
    sqrt(v)
}
system.time(result <- bpvec(1:10, fun))
result

## invalid FUN -- length(class(X)) is not equal to length(X)
bptry(bpvec(1:2, class, BPPARAM=SerialParam()))
} \keyword{manip} BiocParallel/man/bpvectorize.Rd0000644000175200017520000000471214516004410017522 0ustar00biocbuildbiocbuild\name{bpvectorize} \alias{bpvectorize} \alias{bpvectorize,ANY,ANY-method} \alias{bpvectorize,ANY,missing-method} \title{Transform vectorized functions into a parallelized, vectorized function} \description{ This transforms a vectorized function into a parallel, vectorized function. Any function \code{FUN} can be used, provided its parallelized argument (by default, the first argument) has a \code{length} and \code{[} method defined, and the return value of \code{FUN} can be concatenated with \code{c}. } \usage{ bpvectorize(FUN, ..., BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) \S4method{bpvectorize}{ANY,ANY}(FUN, ..., BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) \S4method{bpvectorize}{ANY,missing}(FUN, ..., BPREDO=list(), BPPARAM=bpparam(), BPOPTIONS = bpoptions()) } \arguments{ \item{FUN}{A function whose first argument has a \code{length} and can be subset \code{[}, and whose evaluation would benefit by splitting the argument into subsets, each one of which is independently transformed by \code{FUN}. The return value of \code{FUN} must support concatenation with \code{c}. } \item{...}{Additional arguments to parallelization, unused. } \item{BPPARAM}{An optional \code{\link{BiocParallelParam}} instance determining the parallel back-end to be used during evaluation.
} \item{BPREDO}{A \code{list} of output from \code{bpvectorize} with one or more failed elements. When a list is given in \code{BPREDO}, \code{bpok} is used to identify errors, tasks are rerun and inserted into the original results. } \item{BPOPTIONS}{ Additional options to control the behavior of the parallel evaluation, see \code{\link{bpoptions}}. } } \details{ The result of \code{bpvectorize} is a function with signature \code{\dots}; arguments to the returned function are the original arguments \code{FUN}. \code{BPPARAM} is used for parallel evaluation. When \code{BPPARAM} is a class for which no method is defined (e.g., \code{\link{SerialParam}}), \code{FUN(X)} is used. See \code{methods{bpvectorize}} for additional methods, if any. } \value{ A function taking the same arguments as \code{FUN}, but evaluated using \code{\link{bpvec}} for parallel evaluation across available cores. } \author{ Ryan Thompson \url{mailto:rct@thompsonclan.org} } \seealso{\code{bpvec}} \examples{ psqrt <- bpvectorize(sqrt) ## default parallelization psqrt(1:10) } \keyword{interface} BiocParallel/man/ipcmutex.Rd0000644000175200017520000000663714516004410017034 0ustar00biocbuildbiocbuild\name{ipcmutex} \alias{ipclocked} \alias{ipclock} \alias{ipctrylock} \alias{ipcunlock} \alias{ipcid} \alias{ipcremove} \alias{ipcyield} \alias{ipcvalue} \alias{ipcreset} \title{Inter-process locks and counters} \description{ Functions documented on this page enable locks and counters between processes on the \emph{same} computer. Use \code{ipcid()} to generate a unique mutex or counter identifier. A mutex or counter with the same \code{id}, including those in different processes, share the same state. \code{ipcremove()} removes external state associated with mutex or counters created with \code{id}. \code{ipclock()} blocks until the lock is obtained. \code{ipctrylock()} tries to obtain the lock, returning immediately if it is not available. \code{ipcunlock()} releases the lock. 
\code{ipclocked()} queries the lock to determine whether it is currently held. \code{ipcyield()} returns the current counter, and increments the value for subsequent calls. \code{ipcvalue()} returns the current counter without incrementing. \code{ipcreset()} sets the counter to \code{n}, such that the next call to \code{ipcyield()} or \code{ipcvalue()} returns \code{n}. } \usage{
## Utilities
ipcid(id)
ipcremove(id)

## Locks
ipclock(id)
ipctrylock(id)
ipcunlock(id)
ipclocked(id)

## Counters
ipcyield(id)
ipcvalue(id)
ipcreset(id, n = 1)
} \arguments{ \item{id}{character(1) identifier string for mutex or counter. \code{ipcid()} ensures that the identifier is universally unique.} \item{n}{integer(1) value from which \code{ipcyield()} will increment.} } \value{ Locks: \code{ipclock()} creates a named lock, returning \code{TRUE} on success. \code{ipctrylock()} returns \code{TRUE} if the lock is obtained, \code{FALSE} otherwise. \code{ipcunlock()} returns \code{TRUE} on success, \code{FALSE} (e.g., because there is nothing to unlock) otherwise. \code{ipclocked()} returns \code{TRUE} when \code{id} is locked, and \code{FALSE} otherwise. Counters: \code{ipcyield()} returns an integer(1) value representing the next number in sequence. The first value returned is 1. \code{ipcvalue()} returns the value to be returned by the next call to \code{ipcyield()}, without incrementing the counter. If the counter is no longer available, \code{ipcyield()} returns \code{NA}. \code{ipcreset()} returns \code{n}, invisibly. Utilities: \code{ipcid()} returns a character(1) unique identifier, with \code{id} (if not missing) prepended. \code{ipcremove()} returns (invisibly) \code{TRUE} if external resources were released or \code{FALSE} if not (e.g., because the resources have already been released).
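The yield/value/reset contract can be modeled in a single process with a closure. The real functions share state across processes through external resources; this sketch (with the invented name \code{make_counter}) only mirrors the return-value semantics described above.

```r
## Single-process model of the counter contract:
## yield() returns then increments, value() peeks, reset() sets.
make_counter <- function() {
    n <- 1L
    list(
        yield = function() { v <- n; n <<- n + 1L; v },
        value = function() n,
        reset = function(new) { n <<- as.integer(new); invisible(new) }
    )
}

cnt <- make_counter()
cnt$yield()   # 1, the first value in sequence
cnt$yield()   # 2
cnt$value()   # 3, without incrementing
cnt$reset(10)
cnt$yield()   # 10
```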
} \examples{ ipcid() ## Locks id <- ipcid() ipclock(id) ipctrylock(id) ipcunlock(id) ipctrylock(id) ipclocked(id) ipcremove(id) id <- ipcid() system.time({ ## about 1s, .2s for each process instead of .2s if no lock result <- bplapply(1:2, function(i, id) { BiocParallel::ipclock(id) Sys.sleep(.2) time <- Sys.time() BiocParallel::ipcunlock(id) time }, id) }) ipcremove(id) diff(sort(unlist(result, use.names=FALSE))) ## Counters id <- ipcid() ipcyield(id) ipcyield(id) ipcvalue(id) ipcyield(id) ipcreset(id, 10) ipcvalue(id) ipcyield(id) ipcremove(id) id <- ipcid() result <- bplapply(1:2, function(i, id) { BiocParallel::ipcyield(id) }, id) ipcremove(id) sort(unlist(result, use.names=FALSE)) } BiocParallel/man/register.Rd0000644000175200017520000001024614516004410017011 0ustar00biocbuildbiocbuild\name{register} \alias{register} \alias{registered} \alias{bpparam} \title{Maintain a global registry of available back-end Params} \description{ Use functions on this page to add to or query a registry of back-ends, including the default for use when no \code{BPPARAM} object is provided to functions. } \usage{ register(BPPARAM, default=TRUE) registered(bpparamClass) bpparam(bpparamClass) } \arguments{ \item{BPPARAM}{ An instance of a \code{BiocParallelParam} class, e.g., \code{\link{MulticoreParam}}, \code{\link{SnowParam}}, \code{\link{DoparParam}}. } \item{default}{ Make this the default \code{BiocParallelParam} for subsequent evaluations? If \code{FALSE}, the argument is placed at the lowest priority position. } \item{bpparamClass}{ When present, the text name of the \code{BiocParallelParam} class (e.g., \dQuote{MulticoreParam}) to be retrieved from the registry. When absent, a list of all registered instances is returned. } } \details{ The registry is a list of back-ends with configuration parameters for parallel evaluation. The first list entry is the default and is used by \code{BiocParallel} functions when no \code{BPPARAM} argument is supplied. 
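A toy model of the registry's list semantics may make the \dQuote{first entry is the default} rule concrete. The function name \code{toy_register} and the entries are illustrative only, not \pkg{BiocParallel} internals.

```r
## Toy registry: a named list whose first element is the default.
## register(default = TRUE) moves an entry to the front; only one
## entry per type is kept.
toy_register <- function(registry, name, value, default = TRUE) {
    registry[[name]] <- value                  # one entry per type
    if (default)                               # move to front => default
        registry <- c(registry[name],
                      registry[setdiff(names(registry), name)])
    registry
}

registry <- list(SnowParam = "snow", SerialParam = "serial")
registry <- toy_register(registry, "SerialParam", "serial")
names(registry)[1]   # "SerialParam" is now the default
```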
At load time the registry is populated with default backends. On Windows these are \code{SnowParam} and \code{SerialParam} and on non-Windows \code{MulticoreParam}, \code{SnowParam} and \code{SerialParam}. When \code{snowWorkers()} or \code{multicoreWorkers()} returns a single core, only \code{SerialParam} is registered. The \code{\link{BiocParallelParam}} objects are constructed from global options of the corresponding name, or from the default constructor (e.g., \code{SnowParam()}) if no option is specified. The user can set customizations during start-up (e.g., in an \code{.Rprofile} file) with, for instance, \code{options(MulticoreParam=quote(MulticoreParam(workers=8)))}. The act of \dQuote{registering} a back-end modifies the existing \code{\link{BiocParallelParam}} in the list; only one param of each type can be present in the registry. When \code{default=TRUE}, the newly registered param is moved to the top of the list thereby making it the default. When \code{default=FALSE}, the param is modified \dQuote{in place} rather than being moved to the top. \code{bpparam()}, invoked with no arguments, returns the default \code{\link{BiocParallelParam}} instance from the registry. When called with the text name of a \code{bpparamClass}, the global options are consulted first, e.g., \code{options(MulticoreParam=MulticoreParam())} and then the value of \code{registered(bpparamClass)}. } \value{ \code{register} returns, invisibly, a list of registered back-ends. \code{registered} returns the back-end of type \code{bpparamClass} or, if \code{bpparamClass} is missing, a list of all registered back-ends. \code{bpparam} returns the back-end of type \code{bpparamClass} or, if \code{bpparamClass} is missing, the default back-end from the registry. } \author{ Martin Morgan \url{mailto:mtmorgan@fhcrc.org}. } \seealso{ \code{\link{BiocParallelParam}} for possible values of \code{BPPARAM}.
} \examples{ ## ---------------------------------------------------------------------- ## The registry ## ---------------------------------------------------------------------- ## The default registry. default <- registered() default ## When default = TRUE the last param registered becomes the new default. snowparam <- SnowParam(workers = 3, type = "SOCK") register(snowparam, default = TRUE) registered() ## Retrieve the default back-end, bpparam() ## or a specific BiocParallelParam. bpparam("SnowParam") ## restore original registry -- push the defaults in reverse order for (param in rev(default)) register(param) ## ---------------------------------------------------------------------- ## Specifying a back-end for evaluation ## ---------------------------------------------------------------------- ## The back-end of choice is given as the BPPARAM argument to ## the BiocParallel functions. None, one, or multiple back-ends can be ## used. bplapply(1:6, sqrt, BPPARAM = MulticoreParam(3)) ## When not specified, the default from the registry is used. bplapply(1:6, sqrt) } \keyword{manip} BiocParallel/man/worker-number.Rd0000644000175200017520000000726714516004410017775 0ustar00biocbuildbiocbuild\name{workers} % Environment variables \alias{BIOCPARALLEL_WORKER_NUMBER} \alias{BIOCPARALLEL_WORKER_MAX} \alias{R_PARALLELLY_AVAILABLECORES_FALLBACK} \title{Environment control of worker number} \description{ Environment variables, global options, and aspects of the computing environment controlling default and maximum worker number. } \details{ By default, BiocParallel \code{Param} objects use almost all (\code{parallel::detectCores() - 2}) available cores as workers. Several variables can determine alternative default number of workers. Elements earlier in the description below override elements later in the description. \describe{ \item{\code{_R_CHECK_LIMIT_CORES_}:}{Environment variable defined in base R, described in the 'R Internals' manual (\code{RShowDoc("R-ints")}). 
If defined and not equal to \code{"false"} or \code{"FALSE"}, the default is 2 workers.} \item{\code{IS_BIOC_BUILD_MACHINE}:}{Environment variable used by the Bioconductor build system; when defined, the default is 4 workers.} \item{\code{getOption("mc.cores")}:}{Global R option (initialized from the environment variable \code{MC_CORES}) with non-negative integer number of workers, also recognized by the base R 'parallel' package.} \item{\code{BIOCPARALLEL_WORKER_MAX}:}{Environment variable, non-negative integer number of workers. Use this to set both the default and maximum worker number to a single value.} \item{\code{BIOCPARALLEL_WORKER_NUMBER}:}{Environment variable, non-negative integer number of workers. Use this to set a default worker number without specifying \code{BIOCPARALLEL_WORKER_MAX}, or to set a default number of workers less than the maximum number.} \item{\code{R_PARALLELLY_AVAILABLECORES_FALLBACK}:}{Environment variable, non-negative integer number of workers, also recognized by the 'parallelly' family of packages.} } A subset of environment variables and other aspects of the computing environment also \emph{enforce} limits on worker number. Usually, a request for more than the maximum number of workers results in a warning message and creation of a 'Param' object with the maximum rather than the requested number of workers. \describe{ \item{\code{_R_CHECK_LIMIT_CORES_}:}{Environment variable defined in base R. \code{"warn"} limits the number of workers to 2, with a warning; \code{"false"} or \code{"FALSE"} does not limit worker number; any other value generates an error.} \item{\code{IS_BIOC_BUILD_MACHINE}:}{Environment variable used by the Bioconductor build system. When set, limits the number of workers to 4.} \item{\code{BIOCPARALLEL_WORKER_MAX}:}{Environment variable, non-negative integer.} \item{Number of available connections:}{R has an internal limit (126) on the number of connections open at any time.
'SnowParam()' and 'MulticoreParam()' use 1 connection per worker, and so are limited by the number of available connections.} } } \examples{ ## set up example original_worker_max <- Sys.getenv("BIOCPARALLEL_WORKER_MAX", NA_integer_) original_worker_n <- Sys.getenv("BIOCPARALLEL_WORKER_NUMBER", NA_integer_) Sys.setenv(BIOCPARALLEL_WORKER_MAX = 4) Sys.setenv(BIOCPARALLEL_WORKER_NUMBER = 2) bpnworkers(SnowParam()) # 2 bpnworkers(SnowParam(4)) # OK bpnworkers(SnowParam(5)) # warning; set to 4 ## clean up Sys.unsetenv("BIOCPARALLEL_WORKER_MAX") if (!is.na(original_worker_max)) Sys.setenv(BIOCPARALLEL_WORKER_MAX = original_worker_max) Sys.unsetenv("BIOCPARALLEL_WORKER_NUMBER") if (!is.na(original_worker_n)) Sys.setenv(BIOCPARALLEL_WORKER_NUMBER = original_worker_n) } BiocParallel/src/0000755000175200017520000000000014516024321014712 5ustar00biocbuildbiocbuildBiocParallel/src/Makevars.in0000644000175200017520000000002214516004410017002 0ustar00biocbuildbiocbuildPKG_LIBS = @LIBS@ BiocParallel/src/Makevars.ucrt0000644000175200017520000000002214516004410017351 0ustar00biocbuildbiocbuildPKG_LIBS=-lbcrypt BiocParallel/src/cpp11.cpp0000644000175200017520000000647014516004410016346 0ustar00biocbuildbiocbuild// Generated by cpp11: do not edit by hand // clang-format off #include "cpp11/declarations.hpp" #include // ipcmutex.cpp bool cpp_ipc_remove(cpp11::strings id_sexp); extern "C" SEXP _BiocParallel_cpp_ipc_remove(SEXP id_sexp) { BEGIN_CPP11 return cpp11::as_sexp(cpp_ipc_remove(cpp11::as_cpp>(id_sexp))); END_CPP11 } // ipcmutex.cpp cpp11::r_string cpp_ipc_uuid(); extern "C" SEXP _BiocParallel_cpp_ipc_uuid() { BEGIN_CPP11 return cpp11::as_sexp(cpp_ipc_uuid()); END_CPP11 } // ipcmutex.cpp bool cpp_ipc_locked(cpp11::strings id_sexp); extern "C" SEXP _BiocParallel_cpp_ipc_locked(SEXP id_sexp) { BEGIN_CPP11 return cpp11::as_sexp(cpp_ipc_locked(cpp11::as_cpp>(id_sexp))); END_CPP11 } // ipcmutex.cpp bool cpp_ipc_lock(cpp11::strings id_sexp); extern "C" SEXP 
_BiocParallel_cpp_ipc_lock(SEXP id_sexp) { BEGIN_CPP11 return cpp11::as_sexp(cpp_ipc_lock(cpp11::as_cpp>(id_sexp))); END_CPP11 } // ipcmutex.cpp bool cpp_ipc_try_lock(cpp11::strings id_sexp); extern "C" SEXP _BiocParallel_cpp_ipc_try_lock(SEXP id_sexp) { BEGIN_CPP11 return cpp11::as_sexp(cpp_ipc_try_lock(cpp11::as_cpp>(id_sexp))); END_CPP11 } // ipcmutex.cpp bool cpp_ipc_unlock(cpp11::strings id_sexp); extern "C" SEXP _BiocParallel_cpp_ipc_unlock(SEXP id_sexp) { BEGIN_CPP11 return cpp11::as_sexp(cpp_ipc_unlock(cpp11::as_cpp>(id_sexp))); END_CPP11 } // ipcmutex.cpp int cpp_ipc_value(cpp11::strings id_sexp); extern "C" SEXP _BiocParallel_cpp_ipc_value(SEXP id_sexp) { BEGIN_CPP11 return cpp11::as_sexp(cpp_ipc_value(cpp11::as_cpp>(id_sexp))); END_CPP11 } // ipcmutex.cpp int cpp_ipc_reset(cpp11::strings id_sexp, int n); extern "C" SEXP _BiocParallel_cpp_ipc_reset(SEXP id_sexp, SEXP n) { BEGIN_CPP11 return cpp11::as_sexp(cpp_ipc_reset(cpp11::as_cpp>(id_sexp), cpp11::as_cpp>(n))); END_CPP11 } // ipcmutex.cpp int cpp_ipc_yield(cpp11::strings id_sexp); extern "C" SEXP _BiocParallel_cpp_ipc_yield(SEXP id_sexp) { BEGIN_CPP11 return cpp11::as_sexp(cpp_ipc_yield(cpp11::as_cpp>(id_sexp))); END_CPP11 } extern "C" { static const R_CallMethodDef CallEntries[] = { {"_BiocParallel_cpp_ipc_lock", (DL_FUNC) &_BiocParallel_cpp_ipc_lock, 1}, {"_BiocParallel_cpp_ipc_locked", (DL_FUNC) &_BiocParallel_cpp_ipc_locked, 1}, {"_BiocParallel_cpp_ipc_remove", (DL_FUNC) &_BiocParallel_cpp_ipc_remove, 1}, {"_BiocParallel_cpp_ipc_reset", (DL_FUNC) &_BiocParallel_cpp_ipc_reset, 2}, {"_BiocParallel_cpp_ipc_try_lock", (DL_FUNC) &_BiocParallel_cpp_ipc_try_lock, 1}, {"_BiocParallel_cpp_ipc_unlock", (DL_FUNC) &_BiocParallel_cpp_ipc_unlock, 1}, {"_BiocParallel_cpp_ipc_uuid", (DL_FUNC) &_BiocParallel_cpp_ipc_uuid, 0}, {"_BiocParallel_cpp_ipc_value", (DL_FUNC) &_BiocParallel_cpp_ipc_value, 1}, {"_BiocParallel_cpp_ipc_yield", (DL_FUNC) &_BiocParallel_cpp_ipc_yield, 1}, {NULL, NULL, 0} }; } extern "C" 
attribute_visible void R_init_BiocParallel(DllInfo* dll){ R_registerRoutines(dll, NULL, CallEntries, NULL, NULL); R_useDynamicSymbols(dll, FALSE); R_forceSymbols(dll, TRUE); } BiocParallel/src/ipcmutex.cpp0000644000175200017520000000645514516004410017263 0ustar00biocbuildbiocbuild#define BOOST_NO_AUTO_PTR #include #include #include "cpp11.hpp" static boost::uuids::random_generator uuid_generator; std::string uuid_generate() { return boost::uuids::to_string(uuid_generator()); } #include #include using namespace boost::interprocess; class IpcMutex { protected: managed_shared_memory *shm; private: interprocess_mutex *mtx; bool *locked; public: IpcMutex(const char *id) { shm = new managed_shared_memory{open_or_create, id, 1024}; mtx = shm->find_or_construct("mtx")(); locked = shm->find_or_construct("locked")(); } ~IpcMutex() { delete shm; } bool is_locked() { return *locked; } bool lock() { mtx->lock(); *locked = true; return *locked; } bool try_lock() { *locked = mtx->try_lock(); return *locked; } bool unlock() { mtx->unlock(); *locked = false; return *locked; } }; class IpcCounter : IpcMutex { private: int *i; public: IpcCounter(const char *id) : IpcMutex(id) { i = shm->find_or_construct("i")(); } ~IpcCounter() {} int value() { return *i + 1; } int reset(int n) { lock(); *i = n - 1; unlock(); return n; } int yield() { int result; lock(); result = ++(*i); unlock(); return result; } }; #include // internal const char *ipc_id(cpp11::strings id) { if (id.size() != 1 || cpp11::is_na(id[0]) ) Rf_error("'id' must be character(1) and not NA"); return CHAR(static_cast(id[0])); } // utilities [[cpp11::register]] bool cpp_ipc_remove(cpp11::strings id_sexp) { const char *id = ipc_id(id_sexp); bool status = shared_memory_object::remove(id); return status; } // uuid [[cpp11::register]] cpp11::r_string cpp_ipc_uuid() { std::string uuid = uuid_generate(); return cpp11::r_string(uuid); } // mutex [[cpp11::register]] bool cpp_ipc_locked(cpp11::strings id_sexp) { IpcMutex mutex = 
IpcMutex(ipc_id(id_sexp)); bool status = mutex.is_locked(); return status; } [[cpp11::register]] bool cpp_ipc_lock(cpp11::strings id_sexp) { IpcMutex mutex = IpcMutex(ipc_id(id_sexp)); mutex.lock(); return true; } [[cpp11::register]] bool cpp_ipc_try_lock(cpp11::strings id_sexp) { IpcMutex mutex = IpcMutex(ipc_id(id_sexp)); bool status = mutex.try_lock(); return status; } [[cpp11::register]] bool cpp_ipc_unlock(cpp11::strings id_sexp) { IpcMutex mutex = IpcMutex(ipc_id(id_sexp)); bool status = mutex.unlock(); return status; } // count [[cpp11::register]] int cpp_ipc_value(cpp11::strings id_sexp) { IpcCounter cnt = IpcCounter(ipc_id(id_sexp)); return cnt.value(); } [[cpp11::register]] int cpp_ipc_reset(cpp11::strings id_sexp, int n) { IpcCounter cnt = IpcCounter(ipc_id(id_sexp)); if (cpp11::is_na(n)) Rf_error("'n' must not be NA"); return cnt.reset(n); } [[cpp11::register]] int cpp_ipc_yield(cpp11::strings id_sexp) { IpcCounter cnt = IpcCounter(ipc_id(id_sexp)); return cnt.yield(); } BiocParallel/tests/0000755000175200017520000000000014516004410015262 5ustar00biocbuildbiocbuildBiocParallel/tests/test.R0000644000175200017520000000005314516004410016362 0ustar00biocbuildbiocbuildBiocGenerics:::testPackage("BiocParallel") BiocParallel/vignettes/0000755000175200017520000000000014516024320016132 5ustar00biocbuildbiocbuildBiocParallel/vignettes/BiocParallel_BatchtoolsParam.Rnw0000644000175200017520000002212214516004410024313 0ustar00biocbuildbiocbuild%\VignetteIndexEntry{2. 
Introduction to BatchtoolsParam} %\VignetteKeywords{parallel, Infrastructure} %\VignettePackage{BiocParallel} %\VignetteEngine{knitr::knitr} \documentclass{article} <<>>= BiocStyle::latex() @ <<>>= suppressPackageStartupMessages({ library(BiocParallel) }) @ \newcommand{\BiocParallel}{\Biocpkg{BiocParallel}} \title{Introduction to \emph{BatchtoolsParam}} \author{ Nitesh Turaga\footnote{\url{Nitesh.Turaga@RoswellPark.org}}, Martin Morgan\footnote{\url{Martin.Morgan@RoswellPark.org}} } \date{Edited: March 22, 2018; Compiled: \today} \begin{document} \maketitle \tableofcontents %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Introduction} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% The \Rcode{BatchtoolsParam} class is an interface to the \CRANpkg{batchtools} package from within \BiocParallel{}, for computing on a high performance cluster with a scheduler such as SGE, TORQUE, LSF, SLURM, or OpenLava. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Quick start} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% This example demonstrates the easiest way to launch a large number of jobs using batchtools. The first step is to create a \Rcode{BatchtoolsParam} instance. Computations are then run with \Rcode{bplapply}, and the results collected.
<<>>= library(BiocParallel) ## Pi approximation piApprox <- function(n) { nums <- matrix(runif(2 * n), ncol = 2) d <- sqrt(nums[, 1]^2 + nums[, 2]^2) 4 * mean(d <= 1) } piApprox(1000) ## Apply piApprox over param <- BatchtoolsParam() result <- bplapply(rep(10e5, 10), piApprox, BPPARAM=param) mean(unlist(result)) @ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{\emph{BatchtoolsParam} interface} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% The \Rcode{BatchtoolsParam} interface allows intuitive use of your high performance cluster with \BiocParallel{}. The \Rcode{BatchtoolsParam} class allows the user to specify many arguments to customize their jobs. It is applicable to clusters with formal schedulers. \begin{itemize} \item{\Rcode{workers}} The number of workers used by the job. \item{\Rcode{cluster}} We currently support SGE, SLURM, LSF, TORQUE and OpenLava. The 'cluster' argument is supported only if the R environment knows how to find the job scheduler. Each cluster type uses a template to pass the job to the scheduler. If the template is not given we use the default templates as given in the 'batchtools' package. The cluster can be accessed with 'bpbackend(param)'. \item{\Rcode{registryargs}} The 'registryargs' argument takes a list of arguments to create a new job registry for your \Rcode{BatchtoolsParam}. The job registry is a data.table which stores all the required information to process your jobs. The arguments we support for registryargs are: \begin{description} \item{\Rcode{file.dir}} Path where all files of the registry are saved. Note that some templates do not handle relative paths well. If nothing is given, a temporary directory will be used in your current working directory.
\item{\Rcode{work.dir}} Working directory for the R processes running the jobs. \item{\Rcode{packages}} Packages that will be loaded on each node. \item{\Rcode{namespaces}} Namespaces that will be loaded on each node. \item{\Rcode{source}} Files that are sourced before executing a job. \item{\Rcode{load}} Files that are loaded before executing a job. \end{description} <<>>= registryargs <- batchtoolsRegistryargs( file.dir = "mytempreg", work.dir = getwd(), packages = character(0L), namespaces = character(0L), source = character(0L), load = character(0L) ) param <- BatchtoolsParam(registryargs = registryargs) param @ \item{\Rcode{resources}} A named list of key-value pairs to be substituted into the template file; see \Rcode{?batchtools::submitJobs}. \item{\Rcode{template}} The template argument is unique to the \Rcode{BatchtoolsParam} class. It is required by the job scheduler and defines how jobs are submitted to the scheduler. If the template is not given and the cluster is chosen, a default template is selected from the batchtools package. \item{\Rcode{log}} The log option is logical, TRUE/FALSE. If it is set to TRUE, the logs in the registry are copied to the directory given by the user with the \Rcode{logdir} argument. \item{\Rcode{logdir}} Path to the logs. It is given only if \Rcode{log=TRUE}. \item{\Rcode{resultdir}} Path to the directory where files produced by the job are saved. \end{itemize} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Defining templates} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% The job submission template controls how the job is processed by the job scheduler on the cluster. The format of the template will differ depending on the type of job scheduler.
Let's look at the default SLURM template as an example: <<>>= fname <- batchtoolsTemplate("slurm") cat(readLines(fname), sep="\n") @ The \Rcode{<\%= \%>} blocks are automatically replaced by the values of the elements in the \Rcode{resources} argument in the \Rcode{BatchtoolsParam} constructor. Failing to specify critical parameters properly (e.g., wall time or memory limits too low) will cause jobs to crash, usually rather cryptically. We suggest setting parameters explicitly to provide robustness to changes to system defaults. Note that the \Rcode{<\%= \%>} blocks themselves do not usually need to be modified in the template. The part of the template most likely to require explicit customization is the last line, containing the call to \Rcode{Rscript}. A more customized call may be necessary if the R installation is not standard, e.g., if multiple versions of R have been installed on a cluster. For example, one might use instead: \begin{verbatim} echo 'batchtools::doJobCollection("<%= uri %>")' |\ ArbitraryRcommand --no-save --no-echo \end{verbatim} If such customization is necessary, we suggest making a local copy of the template, modifying it as required, and then constructing a \Rcode{BatchtoolsParam} object with the modified template using the \Rcode{template} argument. However, we find that the default templates accessible with \Rcode{batchtoolsTemplate} are satisfactory in most cases. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Use cases} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% As an example of a \Rcode{BatchtoolsParam} job run on an SGE cluster, we use the same \Rcode{piApprox} function as defined earlier. The example runs the function on 5 workers and submits 100 jobs to the SGE cluster.
Example of SGE with minimal code: <<>>= library(BiocParallel) ## Pi approximation piApprox <- function(n) { nums <- matrix(runif(2 * n), ncol = 2) d <- sqrt(nums[, 1]^2 + nums[, 2]^2) 4 * mean(d <= 1) } template <- system.file( package = "BiocParallel", "unitTests", "test_script", "test-sge-template.tmpl" ) param <- BatchtoolsParam(workers=5, cluster="sge", template=template) ## Run parallel job result <- bplapply(rep(10e5, 100), piApprox, BPPARAM=param) @ Example of SGE demonstrating some of the \Rcode{BatchtoolsParam} methods: <<>>= library(BiocParallel) ## Pi approximation piApprox <- function(n) { nums <- matrix(runif(2 * n), ncol = 2) d <- sqrt(nums[, 1]^2 + nums[, 2]^2) 4 * mean(d <= 1) } template <- system.file( package = "BiocParallel", "unitTests", "test_script", "test-sge-template.tmpl" ) param <- BatchtoolsParam(workers=5, cluster="sge", template=template) ## start param bpstart(param) ## Display param param ## To show the registered backend bpbackend(param) ## Register the param register(param) ## Check the registered param registered() ## Run parallel job result <- bplapply(rep(10e5, 100), piApprox) bpstop(param) @ \section{\Rcode{sessionInfo()}} <<>>= toLatex(sessionInfo()) @ \end{document} BiocParallel/vignettes/Errors_Logs_And_Debugging.Rnw0000644000175200017520000005215614516004410023626 0ustar00biocbuildbiocbuild%\VignetteIndexEntry{3.
Errors, Logs and Debugging} %\VignetteKeywords{parallel, Infrastructure} %\VignettePackage{BiocParallel} %\VignetteEngine{knitr::knitr} \documentclass{article} <<>>= BiocStyle::latex() @ \newcommand{\BiocParallel}{\Biocpkg{BiocParallel}} \title{Errors, Logs and Debugging in \BiocParallel} \author{Valerie Obenchain and Martin Morgan} \date{Edited: December 16, 2015; Compiled: \today} \begin{document} \maketitle \tableofcontents %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Introduction} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% This vignette is part of the \BiocParallel{} package and focuses on error handling and logging. A section at the end demonstrates how the two can be used together as part of an effective debugging routine. \BiocParallel{} provides a unified interface to the parallel infrastructure in several packages including \CRANpkg{snow}, \CRANpkg{parallel}, \CRANpkg{batchtools} and \CRANpkg{foreach}. When implementing error handling in \BiocParallel{} the primary goals were to enable the return of partial results when an error is thrown (rather than just the error) and to establish logging on the workers. In cases where error handling existed, such as \CRANpkg{batchtools} and \CRANpkg{foreach}, those behaviors were preserved. Clusters created with \CRANpkg{snow} and \CRANpkg{parallel} now have flexible error handling and logging available through \Rcode{SnowParam} and \Rcode{MulticoreParam} objects. In this document the term ``job'' is used to describe a single call to a bp*apply function (e.g., the \Rcode{X} in \Rcode{bplapply}). A ``job'' consists of one or more ``tasks'', where each ``task'' is run separately on a worker.
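The job/task distinction can be made concrete with a small sketch (illustrative only; it uses the \Rcode{tasks} argument of \Rcode{SnowParam} described later in this vignette):

```r
library(BiocParallel)

## One 'job': a single call to bplapply() over 6 elements.
## With 2 workers and the default tasks = 0, the job is split into
## 2 tasks of 3 elements each.
param <- SnowParam(workers = 2)
res <- bplapply(1:6, sqrt, BPPARAM = param)

## With tasks = 6, each element is sent to a worker as its own task.
param <- SnowParam(workers = 2, tasks = 6)
res <- bplapply(1:6, sqrt, BPPARAM = param)
```

More tasks mean finer-grained error reporting and load balancing, at the cost of more communication between manager and workers.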
The \Rpackage{BiocParallel} package is available at bioconductor.org and can be downloaded via \Rcode{BiocManager::install}: <<>>= if (!requireNamespace("BiocManager", quietly = TRUE)) install.packages("BiocManager") BiocManager::install("BiocParallel") @ Load the package: <<>>= library(BiocParallel) @ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Error Handling} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Messages and warnings} \BiocParallel{} captures messages and warnings in each job, returning the output to the manager and reporting these to the user after the completion of the entire operation. Thus <<>>= res <- bplapply(1:2, function(i) { message(i); Sys.sleep(3) }) @ %% reports messages only after the entire \Rcode{bplapply()} is complete. It may be desirable to output messages immediately. Do this using \Rcode{sink()}, as in the following example: <<>>= res <- bplapply(1:2, function(i) { sink(NULL, type = "message") message(i) Sys.sleep(3) }) @ %% This could be confusing when multiple workers write messages at the same time -- the messages will be interleaved in an arbitrary way -- or when the workers are not all running on the same computer (e.g., with \Rcode{SnowParam()}), so it should not be used in package code. \subsection{Catching errors} By default, \BiocParallel{} attempts all computations and returns any warnings and errors along with successful results. The \Rcode{stop.on.error} field controls whether the job is terminated as soon as one task throws an error. This is useful when debugging or when running large jobs (many tasks) and you want to be notified of an error before all runs complete. \Rcode{stop.on.error} is \Rcode{TRUE} by default.
<<>>= param <- SnowParam() param @ The field can be set when constructing the param or modified with the \Rcode{bpstopOnError} accessor. <<>>= param <- SnowParam(2, stop.on.error = TRUE) param bpstopOnError(param) <- FALSE @ In this example \Rcode{X} is of length 6. By default, the elements of \Rcode{X} are divided as evenly as possible over the number of workers and run in chunks. Here the number of tasks is set equal to the length of \Rcode{X}, which forces each element of \Rcode{X} to be executed separately (6 tasks). <<>>= X <- list(1, "2", 3, 4, 5, 6) param <- SnowParam(3, tasks = length(X), stop.on.error = TRUE) @ Tasks 1, 2, and 3 are assigned to the three workers and are evaluated. Task 2 fails, stopping further computation. All successfully completed tasks are returned and can be accessed with \Rcode{bpresult()}. Usually, this means that the results of tasks 1, 2, and 3 will be returned. <<>>= result <- tryCatch({ bplapply(X, sqrt, BPPARAM = param) }, error=identity) result bpresult(result) @ Using \Rcode{stop.on.error=FALSE}, all tasks are evaluated. <<>>= X <- list("1", 2, 3, 4, 5, 6) param <- SnowParam(3, tasks = length(X), stop.on.error = FALSE) result <- tryCatch({ bplapply(X, sqrt, BPPARAM = param) }, error=identity) result bpresult(result) @ \Rcode{bptry()} is a convenient way of trying to evaluate a bp*apply-like expression, returning the evaluated results without signalling an error. <<>>= bptry({ bplapply(X, sqrt, BPPARAM=param) }) @ In the next example the elements of \Rcode{X} are grouped instead of run separately. The default value for \Rcode{tasks} is 0, which means 'X' is split as evenly as possible across the number of workers. There are 3 workers, so the first task consists of list(1, 2), the second is list("3", 4) and the third is list(5, 6). <<>>= X <- list(1, 2, "3", 4, 5, 6) param <- SnowParam(3, stop.on.error = TRUE) @ The output shows an error when evaluating the third element, but also that the fourth element, in the same chunk as the third, was not evaluated.
The elements in the remaining task are also evaluated, because all tasks were assigned to workers before the first error occurred. <<>>= bptry(bplapply(X, sqrt, BPPARAM = param)) @ Side note: results are collected from workers as they finish, which is not necessarily the order in which the tasks were loaded. Depending on how tasks are divided, it is possible that the task with the error completes after all the others, so essentially all workers complete before the job is stopped. In this situation the output includes all results along with the error message, and it may appear that \Rcode{stop.on.error=TRUE} did not stop the job soon enough. This is just a heads up that the usefulness of \Rcode{stop.on.error=TRUE} may vary with run time and the distribution of tasks over workers. \subsection{Identify failures with \Rcode{bpok()}} The \Rcode{bpok()} function is a quick way to determine which (if any) tasks failed. In this example we use \Rcode{bptry()} to retrieve the partially evaluated expression, including the failed elements. <<>>= param <- SnowParam(2, stop.on.error=FALSE) result <- bptry(bplapply(list(1, "2", 3), sqrt, BPPARAM=param)) @ \Rcode{bpok} returns TRUE if the task was successful. <<>>= bpok(result) @ Once errors are identified with \Rcode{bpok}, the traceback can be retrieved with the \Rcode{attr} function. This is possible because errors are returned as \Rcode{condition} objects with the traceback as an attribute. <<>>= attr(result[[which(!bpok(result))]], "traceback") @ Note that the traceback has been modified from the full traceback provided by \R{} to include only the calls from the time the \Rcode{bplapply} \Rcode{FUN} is evaluated. \subsection{Rerun failed tasks with \Rcode{BPREDO}} Tasks can fail due to hardware problems or bugs in the input data. The \BiocParallel{} functions support a \Rcode{BPREDO} (re-do) argument for recomputing only the tasks that failed. A list of partial results and errors is supplied to \Rcode{BPREDO} in a second call to the function.
The failed elements are identified, recomputed and inserted into the original results. The bug in this example is the second element of 'X', which is a character when it should be numeric. <<>>= X <- list(1, "2", 3) param <- SnowParam(2, stop.on.error=FALSE) result <- bptry(bplapply(X, sqrt, BPPARAM=param)) result @ First fix the input data. <<>>= X.redo <- list(1, 2, 3) @ Repeat the call to \Rcode{bplapply}, this time supplying the partial results as \Rcode{BPREDO}. Only the failed calculations are computed, in the present case requiring only one worker. <<>>= bplapply(X.redo, sqrt, BPREDO=result, BPPARAM=param) @ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Logging} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% NOTE: Logging as described in this section is supported for SnowParam, MulticoreParam and SerialParam. \subsection{Parameters} Logging in \BiocParallel{} is controlled by 3 fields in the \Rcode{BiocParallelParam}: \begin{verbatim} log: TRUE or FALSE logdir: location to write log file threshold: one of "TRACE", "DEBUG", "INFO", "WARN", "ERROR", "FATAL" \end{verbatim} When \Rcode{log = TRUE}, the \CRANpkg{futile.logger} package is loaded on each worker. \BiocParallel{} uses a custom script on the workers to collect log messages as well as additional statistics such as gc, runtime and node information. Output to stderr and stdout is also captured. By default \Rcode{log} is FALSE and \Rcode{threshold} is {\it INFO}. <<>>= param <- SnowParam(stop.on.error=FALSE) param @ Turn logging on and set the threshold to {\it TRACE}.
<<>>= bplog(param) <- TRUE bpthreshold(param) <- "TRACE" param @ \subsection{Setting a threshold} All thresholds defined in \CRANpkg{futile.logger} are supported: {\it FATAL}, {\it ERROR}, {\it WARN}, {\it INFO}, {\it DEBUG} and {\it TRACE}. All messages greater than or equal to the severity of the threshold are shown. For example, a threshold of {\it INFO} will print all messages tagged as {\it FATAL}, {\it ERROR}, {\it WARN} and {\it INFO}. Because the default threshold is {\it INFO}, it catches the {\it ERROR}-level message thrown when attempting the square root of a character ("2"). <<>>= tryCatch({ bplapply(list(1, "2", 3), sqrt, BPPARAM = param) }, error=function(e) invisible(e)) @ All user-supplied messages written in the \CRANpkg{futile.logger} syntax are also captured. This function performs argument checking and includes a couple of {\it WARN} and {\it DEBUG}-level messages. <<>>= FUN <- function(i) { futile.logger::flog.debug(paste("value of 'i':", i)) if (!length(i)) { futile.logger::flog.warn("'i' has length 0") NA } else if (!is(i, "numeric")) { futile.logger::flog.debug("coercing 'i' to numeric") as.numeric(i) } else { i } } @ Turn logging on and set the threshold to {\it WARN}. <<>>= param <- SnowParam(2, log = TRUE, threshold = "WARN", stop.on.error=FALSE) result <- bplapply(list(1, "2", integer()), FUN, BPPARAM = param) simplify2array(result) @ Changing the threshold to {\it DEBUG} catches both {\it WARN} and {\it DEBUG} messages. <<>>= param <- SnowParam(2, log = TRUE, threshold = "DEBUG", stop.on.error=FALSE) result <- bplapply(list(1, "2", integer()), FUN, BPPARAM = param) simplify2array(result) @ \subsection{Log files} When \Rcode{log = TRUE}, log messages are written to the console by default. If \Rcode{logdir} is given, the output is written to files, one per task. File names are prefixed with the name in \Rcode{bpjobname(BPPARAM)}; the default is 'BPJOB'.
\begin{verbatim} param <- SnowParam(2, log = TRUE, threshold = "DEBUG", logdir = tempdir()) res <- bplapply(list(1, "2", integer()), FUN, BPPARAM = param) ## loading futile.logger on workers list.files(bplogdir(param)) ## [1] "BPJOB.task1.log" "BPJOB.task2.log" \end{verbatim} Read in BPJOB.task2.log: \begin{verbatim} readLines(paste0(bplogdir(param), "/BPJOB.task2.log")) ## [1] "############### LOG OUTPUT ###############" ## [2] "Task: 2" ## [3] "Node: 2" ## [4] "Timestamp: 2015-07-08 09:03:59" ## [5] "Success: TRUE" ## [6] "Task duration: " ## [7] " user system elapsed " ## [8] " 0.009 0.000 0.011 " ## [9] "Memory use (gc): " ## [10] " used (Mb) gc trigger (Mb) max used (Mb)" ## [11] "Ncells 325664 17.4 592000 31.7 393522 21.1" ## [12] "Vcells 436181 3.4 1023718 7.9 530425 4.1" ## [13] "Log messages:" ## [14] "DEBUG [2015-07-08 09:03:59] value of 'i': 2" ## [15] "INFO [2015-07-08 09:03:59] coercing to numeric" ## [16] "DEBUG [2015-07-08 09:03:59] value of 'i': " ## [17] "WARN [2015-07-08 09:03:59] 'i' is missing" ## [18] "" ## [19] "stderr and stdout:" ## [20] "character(0)" \end{verbatim} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Worker timeout} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% NOTE: \Rcode{timeout} is supported for SnowParam and MulticoreParam. For long running jobs or untested code it can be useful to set a time limit. The \Rcode{timeout} field is the time, in seconds, allowed for each worker to complete a task; default is \Rcode{Inf}. If the task takes longer than \Rcode{timeout} a timeout error is returned. 
Time can be changed during param construction with the \Rcode{timeout} argument, <>= param <- SnowParam(timeout = 20, stop.on.error=FALSE) param @ or with the \Rcode{bptimeout} setter: <>= param <- SnowParam(stop.on.error=FALSE) bptimeout(param) <- 2 fun <- function(i) { Sys.sleep(i) i } bptry(bplapply(1:3, fun, BPPARAM = param)) @ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Debugging} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Effective debugging strategies vary by problem and often involve a combination of error handling and logging techniques. In general, when debugging \R{}-generated errors the traceback is often the best place to start, followed by adding debug messages to the worker function. When troubleshooting unexpected behavior (i.e., not a formal error or warning), adding debug messages or switching to \Rcode{SerialParam} are good approaches. Below is an overview of these different strategies. \subsection{Accessing the traceback} The traceback is a good place to start when tracking down \R{}-generated errors. Because the function is executed on the workers it is not accessible for interactive debugging with functions such as \Rcode{trace} or \Rcode{debug}. The traceback provides a snapshot of the state of the worker at the time the error was thrown. This function takes the square root of the absolute value of a vector. 
<>= fun1 <- function(x) { v <- abs(x) sapply(1:length(v), function(i) sqrt(v[i])) } @ Calling ``fun1'' with a character throws an error: \begin{verbatim} param <- SnowParam(stop.on.error=FALSE) result <- bptry({ bplapply(list(c(1,3), 5, "6"), fun1, BPPARAM = param) }) result ## [[1]] ## [1] 1.000000 1.732051 ## ## [[2]] ## [1] 2.236068 ## ## [[3]] ## ## traceback() available as 'attr(x, "traceback")' ## ## attr(,"REDOENV") ## \end{verbatim} Identify which elements failed with \Rcode{bpok}: \begin{verbatim} bpok(result) ## [1] TRUE TRUE FALSE \end{verbatim} The error (i.e., third element of ``result'') is a \Rcode{condition} object: \begin{verbatim} is(result[[3]], "condition") ## [1] TRUE \end{verbatim} The traceback is an attribute of the \Rcode{condition} and can be accessed with the \Rcode{attr} function. \begin{verbatim} cat(attr(result[[3]], "traceback"), sep = "\n") ## 4: handle_error(e) ## 3: h(simpleError(msg, call)) ## 2: .handleSimpleError(function (e) ## { ## annotated_condition <- handle_error(e) ## stop(annotated_condition) ## }, "non-numeric argument to mathematical function", base::quote(abs(x))) at #2 ## 1: FUN(...) \end{verbatim} In this example the error occurs in \Rcode{FUN}; lines 2, 3, 4 involve error handling. \subsection{Adding debug messages} When a \Rcode{numeric()} is passed to ``fun1'' no formal error is thrown, but the length of the second list element is 2 when it should be 0. \begin{verbatim} bplapply(list(c(1,3), numeric(), 6), fun1, BPPARAM = param) ## [[1]] ## [1] 1.000000 1.732051 ## ## [[2]] ## [[2]][[1]] ## [1] NA ## ## [[2]][[2]] ## numeric(0) ## ## [[3]] ## [1] 2.44949 \end{verbatim} Without a formal error we have no traceback, so we'll add a few debug messages. The \CRANpkg{futile.logger} syntax tags messages with different levels of severity. A message created with \Rcode{flog.debug} will only print if the threshold is {\it DEBUG} or lower. So in this case it will catch both INFO and DEBUG messages. 
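The threshold behavior can be previewed directly with \CRANpkg{futile.logger}, outside of any parallel evaluation. This is a minimal sketch (not part of the original example; messages print to the console):

\begin{verbatim}
library(futile.logger)
flog.threshold(WARN)       ## show WARN and more severe only
flog.debug("not shown")    ## below the threshold, suppressed
flog.warn("shown")         ## at the threshold, printed
flog.threshold(DEBUG)      ## now DEBUG and INFO messages print too
\end{verbatim}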
``fun2'' has debug statements that show the value of `x', length of `v' and the index `i'. <>= fun2 <- function(x) { v <- abs(x) futile.logger::flog.debug( paste0("'x' = ", paste(x, collapse=","), ": length(v) = ", length(v)) ) sapply(1:length(v), function(i) { futile.logger::flog.info(paste0("'i' = ", i)) sqrt(v[i]) }) } @ Create a param that logs at a threshold level of {\it DEBUG}. <>= param <- SnowParam(3, log = TRUE, threshold = "DEBUG") @ <>= res <- bplapply(list(c(1,3), numeric(), 6), fun2, BPPARAM = param) res @ The debug messages require close inspection, but focusing on task 2 we see \begin{verbatim} res ## ############### LOG OUTPUT ############### ## Task: 2 ## Node: 2 ## Timestamp: 2023-03-23 12:17:28.969158 ## Success: TRUE ## ## Task duration: ## user system elapsed ## 0.156 0.005 0.163 ## ## Memory used: ## used (Mb) gc trigger (Mb) limit (Mb) max used (Mb) ## Ncells 942951 50.4 1848364 98.8 NA 1848364 98.8 ## Vcells 1941375 14.9 8388608 64.0 32768 2446979 18.7 ## ## Log messages: ## INFO [2023-03-23 12:17:28] loading futile.logger package ## DEBUG [2023-03-23 12:17:28] 'x' = : length(v) = 0 ## INFO [2023-03-23 12:17:28] 'i' = 1 ## INFO [2023-03-23 12:17:28] 'i' = 0 ## ## stderr and stdout: \end{verbatim} This reveals the problem. The index for \Rcode{sapply} is along `v' which in this case has length 0. This forces `i' to take values of `1' and `0' giving an output of length 2 for the second element (i.e., \Rcode{NA} and \Rcode{numeric(0)}). ``fun2'' can be fixed by using \Rcode{seq\_along(v)} to create the index instead of \Rcode{1:length(v)}. \subsection{Local debugging with \Rcode{SerialParam}} Errors that occur on parallel workers can be difficult to debug. Often the traceback sent back from the workers is too much to parse or not informative. We are also limited in that our interactive strategies of \Rcode{browser} and \Rcode{trace} are not available. One option for further debugging is to run the code in serial with \Rcode{SerialParam}. 
This removes the ``parallel'' component and is the same as running a straight \Rcode{*apply} function. This approach may not help if the problem was hardware related, but can be very useful when the bug is in the \R{} code. We use the now familiar square root example with a bug in the second element of \Rcode{X}. <>= result <- bptry({ bplapply(list(1, "2", 3), sqrt, BPPARAM = SnowParam(3, stop.on.error=FALSE)) }) result @ \Rcode{sqrt} is an internal function. The problem is likely with our data going into the function and not the \Rcode{sqrt} function itself. We can write a small wrapper around \Rcode{sqrt} so we can see the input. <>= fun3 <- function(i) sqrt(i) @ Debug the new function: \begin{verbatim} debug(fun3) \end{verbatim} We want to recompute only the elements that failed and for that we use the \Rcode{BPREDO} argument. The BPPARAM has been changed to \Rcode{SerialParam} so the job is run in the local workspace in serial. \begin{verbatim} > bplapply(list(1, "2", 3), fun3, BPREDO = result, BPPARAM = SerialParam()) Resuming previous calculation ... debugging in: FUN(...) debug: sqrt(i) Browse[2]> objects() [1] "i" Browse[2]> i [1] "2" Browse[2]> \end{verbatim} The local browsing allowed us to see that the problem input was the character "2". %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{\Rcode{sessionInfo()}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% <>= toLatex(sessionInfo()) @ \end{document} --- title: "1. 
Introduction to *BiocParallel*" author: - name: "Valerie Obenchain" - name: "Vincent Carey" - name: "Michael Lawrence" - name: "Phylis Atieno" affiliation: "Vignette translation from Sweave to Rmarkdown / HTML" - name: "Martin Morgan" email: "Martin.Morgan@RoswellPark.org" date: "Edited: October, 2022; Compiled: `r format(Sys.time(), '%B %d, %Y')`" package: BiocParallel vignette: > %\VignetteIndexEntry{1. Introduction to BiocParallel} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} output: BiocStyle::html_document --- # Introduction Numerous approaches are available for parallel computing in R. The CRAN Task View for high performance and parallel computing provides useful high-level summaries and [package categorization](https://cran.r-project.org/web/views/HighPerformanceComputing.html). Most Task View packages cite or identify one or more of [*snow*](https://cran.r-project.org/package=snow) , [*Rmpi*](https://cran.r-project.org/package=Rmpi), [*multicore*](https://cran.r-project.org/package=multicore) or [*foreach*](https://cran.r-project.org/package=foreach) as relevant parallelization infrastructure. Direct support in *R* for *parallel* computing started with release 2.14.0 with inclusion of the [parallel](https://cran.r-project.org/package=parallel) package which contains modified versions of [*multicore*](https://cran.r-project.org/package=multicore) and [*snow*](https://cran.r-project.org/package=snow). A basic objective of [*BiocParallel*][] is to reduce the complexity faced when developing and using software that performs parallel computations. With the introduction of the `BiocParallelParam` object, [*BiocParallel*][] aims to provide a unified interface to existing parallel infrastructure where code can be easily executed in different environments. The `BiocParallelParam` specifies the environment of choice as well as computing resources and is invoked by 'registration' or passed as an argument to the [*BiocParallel*][] functions. 
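Both styles look like this in practice. A minimal sketch, assuming only that *BiocParallel* is installed:

```{r param_styles_sketch, eval=FALSE}
library(BiocParallel)
param <- SnowParam(workers = 2)

## pass the back-end explicitly to a BiocParallel function...
bplapply(1:4, sqrt, BPPARAM = param)

## ...or register it, so that subsequent calls use it by default
register(param)
bplapply(1:4, sqrt)
```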
[*BiocParallel*][] offers the following conveniences over the 'roll your own' approach to parallel programming. - unified interface: `BiocParallelParam` instances define the method of parallel evaluation (multi-core, snow cluster, etc.) and computing resources (number of workers, error handling, cleanup, etc.). - parallel iteration over lists, files and vectorized operations: `bplapply`, `bpmapply` and `bpvec` provide parallel list iteration and vectorized operations. `bpiterate` iterates through files, distributing chunks to parallel workers. - cluster scheduling: When the parallel environment is managed by a cluster scheduler through [*batchtools*](https://cran.r-project.org/package=batchtools), job management and result retrieval are considerably simplified. - support of `foreach`: The [*foreach*](https://cran.r-project.org/package=foreach) and [*iterators*](https://cran.r-project.org/package=iterators) packages are fully supported. Registration of the parallel back-end uses `BiocParallelParam` instances. # Quick start The [*BiocParallel*][] package is available at bioconductor.org and can be downloaded via `BiocManager`: ``` if (!requireNamespace("BiocManager", quietly = TRUE)) install.packages("BiocManager") BiocManager::install("BiocParallel") ``` Load [*BiocParallel*][]: ```{r} library(BiocParallel) ``` The test function simply returns the square root of "x", rounded to four decimal places. ```{r quick_start FUN} FUN <- function(x) { round(sqrt(x), 4) } ``` Functions in [*BiocParallel*][] use the registered back-ends for parallel evaluation. The default is the top entry of the registry list. ```{r quick_start registry} registered() ``` Configure your R session to always use a particular back-end by setting options named after the back-ends in an `.Rprofile` file, e.g., ```{r configure_registry, eval=FALSE} options(MulticoreParam=MulticoreParam(workers=4)) ``` When a [*BiocParallel*][] function is invoked with no `BPPARAM` argument the default back-end is used. 
```{r quickstart_bplapply_default, eval=FALSE} bplapply(1:4, FUN) ``` Environment-specific back-ends can be defined for any of the registry entries. This example uses a 2-worker SOCK cluster. ```{r quickstart_snow} param <- SnowParam(workers = 2, type = "SOCK") bplapply(1:4, FUN, BPPARAM = param) ``` # The *BiocParallel* Interface ## Classes ### `BiocParallelParam` `BiocParallelParam` instances configure different parallel evaluation environments. Creating or `register()`ing a '`Param`' allows the same code to be used in different parallel environments without a code re-write. The Params listed are supported on all of Unix, Mac and Windows except `MulticoreParam`, which is Unix and Mac only. - `SerialParam`: Supported on all platforms. Evaluate [*BiocParallel*][]-enabled code with parallel evaluation disabled. This approach is useful when writing new scripts and trying to debug code. - `MulticoreParam`: Supported on Unix and Mac. On Windows, `MulticoreParam` dispatches to `SerialParam`. Evaluate [*BiocParallel*][]-enabled code using multiple cores on a single computer. When available, this is the most efficient and least troublesome way to parallelize code. Windows does not support multi-core evaluation (the `MulticoreParam` object can be used, but evaluation is serial). On other operating systems, the default number of workers equals the value of the global option `mc.cores` (e.g., `getOption("mc.cores")`) or, if that is not set, the number of cores returned by `parallel::detectCores() - 2`; when the number of cores cannot be determined, the default is 1. `MulticoreParam` uses 'forked' processes with 'copy-on-change' semantics -- memory is only copied when it is changed. This makes it very efficient to invoke compared to other back-ends. There are several important caveats to using `MulticoreParam`. Forked processes are not available on Windows. Some environments, e.g., *RStudio*, do not work well with forked processes, assuming that code evaluation is single-threaded. 
Some external resources, e.g., access to files or data bases, maintain state in a way that assumes the resource is accessed only by a single thread. A subtle cost is that *R*'s garbage collector runs periodically, and 'marks' memory as in use. This effectively triggers a copy of the marked memory. *R*'s generational garbage collector is triggered at difficult-to-predict times; the effect in a long-running forked process is that the memory is eventually copied. See [this post](https://support.bioconductor.org/p/70196/#70509) for additional details. `MulticoreParam` is based on facilities originally implemented in the [*multicore*](https://cran.r-project.org/package=multicore) package and subsequently the [*parallel*](https://cran.r-project.org/package=parallel) package in base. - `SnowParam`: Supported on all platforms. Evaluate [*BiocParallel*][]-enabled code across several distinct instances, on one or several computers. This is a straightforward approach for executing parallel code on one or several computers, and is based on facilities originally implemented in the [*snow*](https://cran.r-project.org/package=snow) package. Different types of [*snow*](https://cran.r-project.org/package=snow) 'back-ends' are supported, including socket and MPI clusters. - `BatchtoolsParam`: Applicable to clusters with formal schedulers. Evaluate [*BiocParallel*][]-enabled code by submitting to a cluster scheduler like SGE. - `DoparParam`: Supported on all platforms. Register a parallel back-end supported by the [*foreach*](https://cran.r-project.org/package=foreach) package for use with [*BiocParallel*][]. 
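For example, a *foreach* back-end such as the one provided by the *doParallel* package can be used through `DoparParam`. A sketch (assuming *doParallel* is installed; not evaluated here):

```{r doparparam_sketch, eval=FALSE}
library(doParallel)              ## provides a 'foreach' back-end
registerDoParallel(cores = 2)    ## register it with 'foreach'
library(BiocParallel)
bplapply(1:4, sqrt, BPPARAM = DoparParam())  ## dispatches to doParallel
```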
The simplest illustration of creating `BiocParallelParam` is ```{r BiocParallelParam_SerialParam} serialParam <- SerialParam() serialParam ``` Most parameters have additional arguments influencing behavior, e.g., specifying the number of 'cores' to use when creating a `MulticoreParam` instance ```{r BiocParallelParam_MulticoreParam} multicoreParam <- MulticoreParam(workers = 8) multicoreParam ``` Arguments are described on the corresponding help page, e.g., `?MulticoreParam`. ### `register()`ing `BiocParallelParam` instances The list of registered `BiocParallelParam` instances represents the user's preferences for different types of back-ends. Individual algorithms may specify a preferred back-end, and different back-ends may be chosen when parallel evaluation is nested. The registry behaves like a 'stack' in that the last entry registered is added to the top of the list and becomes the "next used" (i.e., the default). `registered` invoked with no arguments lists all back-ends. ```{r register_registered} registered() ``` `bpparam` returns the default from the top of the list. ```{r register_bpparam} bpparam() ``` Add a specialized instance with `register`. When `default` is TRUE, the new instance becomes the default. ```{r register_BatchtoolsParam} default <- registered() register(BatchtoolsParam(workers = 10), default = TRUE) ``` `BatchtoolsParam` has been moved to the top of the list and is now the default. ```{r register_BatchtoolsParam2} names(registered()) bpparam() ``` Restore the original registry ```{r register_restore} for (param in rev(default)) register(param) ``` ## Functions ### Parallel looping, vectorized and aggregate operations These are used in common functions, implemented as much as possible for all back-ends. The functions (see the help pages, e.g., `?bplapply` for a full definition) include `bplapply(X, FUN, ...)`: Apply in parallel a function `FUN` to each element of `X`. 
`bplapply` invokes `FUN` `length(X)` times, each time with a single element of `X`. `bpmapply(FUN, ...)`: Apply in parallel a function to the first, second, etc., elements of each argument in `...`. `bpiterate(ITER, FUN, ...)`: Apply in parallel a function to the output of function `ITER`. Data chunks are returned by `ITER` and distributed to parallel workers along with `FUN`. Intended for iteration through an undefined number of data chunks (i.e., records in a file). `bpvec(X, FUN, ...)`: Apply in parallel a function `FUN` to subsets of `X`. `bpvec` invokes `FUN` as many times as there are cores or cluster nodes, with `FUN` receiving a subset (typically more than 1 element, in contrast to `bplapply`) of `X`. `bpaggregate(x, data, FUN, ...)`: Use the formula in `x` to aggregate `data` using `FUN`. ### Parallel evaluation environment These functions query and control the state of the parallel evaluation environment. `bpisup(x)`: Query a `BiocParallelParam` back-end `x` for its status. `bpworkers`; `bpnworkers`: Query a `BiocParallelParam` back-end for the number of workers available for parallel evaluation. `bptasks`: Divides a job (e.g., a single call to a \*lapply function) into tasks. Applicable to `MulticoreParam` only; `DoparParam` and `BatchtoolsParam` have their own approach to dividing a job among workers. `bpstart(x)`: Start a parallel back-end specified by `BiocParallelParam` `x`, if possible. `bpstop(x)`: Stop a parallel back-end specified by `BiocParallelParam` `x`. ### Error handling and logging Logging and advanced error recovery are available in `BiocParallel` 1.1.25 and later. For more details see the vignette titled "Error Handling and Logging": ```{r error-vignette, eval=FALSE} browseVignettes("BiocParallel") ``` ### Locks and counters Inter-process (i.e., single machine) locks and counters are supported using `ipclock()`, `ipcyield()`, and friends. Use these to synchronize computation, e.g., allowing only a single process to write to a file at a time. 
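A sketch of serializing writes to a shared file with these primitives (the file name is illustrative):

```{r ipc_sketch, eval=FALSE}
library(BiocParallel)
id <- ipcid()                      ## unique identifier naming the lock
result <- bplapply(1:4, function(i, id) {
    BiocParallel::ipclock(id)      ## only one worker holds the lock at a time
    cat(i, "\n", file = "shared-log.txt", append = TRUE)
    BiocParallel::ipcunlock(id)
    i
}, id = id)
ipcremove(id)                      ## release the identifier when done
```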
# Use cases Sample data are BAM files from a transcription profiling experiment available in the *RNAseqData.HNRNPC.bam.chr14* package. ```{r use_cases_data} library(RNAseqData.HNRNPC.bam.chr14) fls <- RNAseqData.HNRNPC.bam.chr14_BAMFILES ``` ## Single machine Common approaches on a single machine are to use multiple cores in forked processes, or to use clusters of independent processes. For purely *R*-based computations on non-Windows computers, there are substantial benefits, such as shared memory, to be had using forked processes. However, this approach is not portable across platforms, and fails when code uses functionality, e.g., file or data base access, that assumes only a single thread is accessing the resource. While use of forked processes with `MulticoreParam` is an attractive solution for scripts using pure *R* functionality, robust and complex code often requires use of independent processes and `SnowParam`. ### Forked processes with `MulticoreParam` This example counts overlaps between BAM files and a defined set of ranges. First create a GRanges with regions of interest (in practice this could be large). ```{r forking_gr, message=FALSE} library(GenomicAlignments) ## for GenomicRanges and readGAlignments() gr <- GRanges("chr14", IRanges((1000:3999)*5000, width=1000)) ``` A `ScanBamParam` defines regions to extract from the files. ```{r forking_param} param <- ScanBamParam(which=range(gr)) ``` `FUN` counts overlaps between the ranges in 'gr' and the files. ```{r forking_FUN} FUN <- function(fl, param) { gal <- readGAlignments(fl, param = param) sum(countOverlaps(gr, gal)) } ``` All parameters necessary for running a job in a multi-core environment are specified in the `MulticoreParam` instance. ```{r forking_default_multicore} MulticoreParam() ``` The [*BiocParallel*][] functions, such as `bplapply`, use information in the `MulticoreParam` to set up the appropriate back-end and pass relevant arguments to low-level functions. 
```{verbatim} > bplapply(fls[1:3], FUN, BPPARAM = MulticoreParam(), param = param) $ERR127306 [1] 1185 $ERR127307 [1] 1123 $ERR127308 [1] 1241 ``` Shared memory environments eliminate the need to pass large data between workers or load common packages. Note that in this code the GRanges data was not passed to all workers in `bplapply` and FUN did not need to load [*GenomicAlignments*](http://bioconductor.org/packages/GenomicAlignments) for access to the `readGAlignments` function. Problems with forked processes occur when code implementing functionality used by the workers is not written in anticipation of use by forked processes. One example is the database connection underlying Bioconductor's `org.*` packages. This pseudo-code ```{r db_problems, eval = FALSE} library(org.Hs.eg.db) FUN <- function(x, ...) { ... mapIds(org.Hs.eg.db, ...) ... } bplapply(X, FUN, ..., BPPARAM = MulticoreParam()) ``` is likely to fail, because `library(org.Hs.eg.db)` opens a database connection that is accessed by multiple processes. A solution is to ensure that the database is opened independently in each process ``` FUN <- function(x, ...) { library(org.Hs.eg.db) ... mapIds(org.Hs.eg.db, ...) ... } bplapply(X, FUN, ..., BPPARAM = MulticoreParam()) ``` ### Clusters of independent processes with `SnowParam` Both Windows and non-Windows machines can use the cluster approach to spawn processes. [*BiocParallel*][] back-end choices for clusters on a single machine are *SnowParam* for configuring a Snow cluster or *DoparParam* for use with the *foreach* package. To re-run the counting example, FUN needs to be modified such that 'gr' is passed as a formal argument and required libraries are loaded on each worker. (In general, this is not necessary for functions defined in a package name space, see [Section 6](#sec:developers).) 
```{r cluster_FUN} FUN <- function(fl, param, gr) { suppressPackageStartupMessages({ library(GenomicAlignments) }) gal <- readGAlignments(fl, param = param) sum(countOverlaps(gr, gal)) } ``` Define a 2-worker SOCK Snow cluster. ```{r cluster_snow_param} snow <- SnowParam(workers = 2, type = "SOCK") ``` A call to `bplapply` with the *SnowParam* creates the cluster and distributes the work. ```{r cluster_bplapply} bplapply(fls[1:3], FUN, BPPARAM = snow, param = param, gr = gr) ``` The FUN written for the cluster adds some overhead due to the passing of the GRanges and the loading of [*GenomicAlignments*](http://bioconductor.org/packages/GenomicAlignments) on each worker. This approach, however, has the advantage that it works on most platforms and does not require a coding change when switching between Windows and non-Windows machines. If several `bplapply()` statements are likely to require the same resource, it often makes sense to create a cluster once using `bpstart()`. The workers are re-used by each call to `bplapply()`, so they do not have to re-load packages, etc. ```{r db_solution_2, eval = FALSE} register(SnowParam()) # default evaluation bpstart() # start the cluster ... bplapply(X, FUN1, ...) ... bplapply(X, FUN2, ...) # re-use workers ... bpstop() ``` ## *Ad hoc* cluster of multiple machines We use the term *ad hoc* cluster to define a group of machines that can communicate with each other and to which the user has password-less log-in access. This example uses a group of compute machines (\"the rhinos\") on the FHCRC network. ### *Ad hoc* Sockets On Linux and Mac OS X, a socket cluster is created across machines by supplying machine names as the `workers` argument to a *BiocParallelParam* instance instead of a number. Each name represents an *R* process; repeated names indicate multiple workers on the same machine. Create a *SnowParam* with 2 workers on 'rhino01' and 1 on 'rhino02'. 
``` hosts <- c("rhino01", "rhino01", "rhino02") param <- SnowParam(workers = hosts, type = "SOCK") ``` Execute FUN 4 times across the workers. ```{verbatim} > FUN <- function(i) system("hostname", intern=TRUE) > bplapply(1:4, FUN, BPPARAM = param) [[1]] [1] "rhino01" [[2]] [1] "rhino01" [[3]] [1] "rhino02" [[4]] [1] "rhino01" ``` When creating a cluster across Windows machines, workers must be specified by IP addresses (e.g., \"140.107.218.57\") instead of machine names. ### MPI An MPI cluster across machines is created with *mpirun* or *mpiexec* from the command line or a script. A list of machine names provided as the -hostfile argument defines the mpi universe. The hostfile requests 2 processors on 3 different machines. ```{verbatim} rhino01 slots=2 rhino02 slots=2 rhino03 slots=2 ``` From the command line, start a single interactive process on the current machine. ```{verbatim} mpiexec --np 1 --hostfile hostfile R --vanilla ``` Load [*BiocParallel*][] and create an MPI Snow cluster. The number of `workers` should match the number of slots requested in the hostfile. Using a smaller number of workers uses a subset of the slots. ```{verbatim} > library(BiocParallel) > param <- SnowParam(workers = 6, type = "MPI") ``` Execute FUN 6 times across the workers. ```{verbatim} > FUN <- function(i) system("hostname", intern=TRUE) > bplapply(1:6, FUN, BPPARAM = param) [[1]] [1] "rhino01" [[2]] [1] "rhino02" [[3]] [1] "rhino02" [[4]] [1] "rhino03" [[5]] [1] "rhino03" [[6]] [1] "rhino01" ``` Batch jobs can be launched with mpiexec and R CMD BATCH. Code to be executed is in 'Rcode.R'. ```{verbatim} mpiexec --hostfile hostfile R CMD BATCH Rcode.R ``` ## Clusters with schedulers Computer clusters are far from standardized, so the following may require significant adaptation; it is written from experience here at FHCRC, where we have a large cluster managed via SLURM. 
Nodes on the cluster have shared disks and common system images, minimizing complexity about making data resources available to individual nodes. There are two simple models for use of the cluster, cluster-centric and R-centric. ### Cluster-centric The idea is to use cluster management software to allocate resources, and then arrange for an *R* script to be evaluated in the context of allocated resources. NOTE: Depending on your cluster configuration it may be necessary to add a line to the template file instructing workers to use the version of R on the master / head node. Otherwise the default R on the worker nodes will be used. For SLURM, we might request space for 4 tasks (with `salloc` or `sbatch`), arrange to start the MPI environment (with `orterun`) and on a single node in that universe run an *R* script `BiocParallel-MPI.R`. The command is ```{verbatim} $ salloc -N 4 orterun -n 1 R -f BiocParallel-MPI.R ``` The *R* script might do the following, using MPI for parallel evaluation. Start by loading necessary packages and defining the work to be done in `FUN` ```{r cluster-MPI-work, eval=FALSE} library(BiocParallel) library(Rmpi) FUN <- function(i) system("hostname", intern=TRUE) ``` Create a *SnowParam* instance with the number of nodes equal to the size of the MPI universe minus 1 (let one node dispatch jobs to workers), and register this instance as the default ```{r cluster-MPI, eval=FALSE} param <- SnowParam(mpi.universe.size() - 1, "MPI") register(param) ``` Evaluate the work in parallel, process the results, clean up, and quit ```{r cluster-MPI-do, eval=FALSE} xx <- bplapply(1:100, FUN) table(unlist(xx)) mpi.quit() ``` The entire session is as follows: ```{verbatim} $ salloc -N 4 orterun -n 1 R --vanilla -f BiocParallel-MPI.R salloc: Job is in held state, pending scheduler release salloc: Pending job allocation 6762292 salloc: job 6762292 queued and waiting for resources salloc: job 6762292 has been allocated resources salloc: Granted job allocation 6762292 ## ... 
> FUN <- function(i) system("hostname", intern=TRUE) > > library(BiocParallel) > library(Rmpi) > param <- SnowParam(mpi.universe.size() - 1, "MPI") > register(param) > xx <- bplapply(1:100, FUN) > table(unlist(xx)) gizmof13 gizmof71 gizmof86 gizmof88 25 25 25 25 > > mpi.quit() salloc: Relinquishing job allocation 6762292 salloc: Job allocation 6762292 has been revoked. ``` One advantage of this approach is that the responsibility for managing the cluster lies firmly with the cluster management software -- if one wants more nodes, or needs special resources, then adjust parameters to `salloc` (or `sbatch`). Notice that workers are spawned within the `bplapply` function; it might often make sense to more explicitly manage workers with `bpstart` and `bpstop`, e.g., ```{r cluster-MPI-bpstart, eval=FALSE} param <- bpstart(SnowParam(mpi.universe.size() - 1, "MPI")) register(param) xx <- bplapply(1:100, FUN) bpstop(param) mpi.quit() ``` ### R-centric A more *R*-centric approach might start an *R* script on the head node, and use *batchtools* to submit jobs from within the *R* session. One way of doing this is to create a file containing a template for the job submission step, e.g., for SLURM; a starting point might be found at ```{r slurm} tmpl <- system.file(package="batchtools", "templates", "slurm-simple.tmpl") noquote(readLines(tmpl)) ``` The *R* script, run interactively or from the command line, might then look like ```{r cluster-batchtools, eval=FALSE} ## define work to be done FUN <- function(i) system("hostname", intern=TRUE) library(BiocParallel) ## register SLURM cluster instructions from the template file param <- BatchtoolsParam(workers=5, cluster="slurm", template=tmpl) register(param) ## do work xx <- bplapply(1:100, FUN) table(unlist(xx)) ``` The code runs on the head node until `bplapply`, where the script interacts with the SLURM scheduler to request a SLURM allocation, run jobs, and retrieve results. 
The argument `workers=5` to `BatchtoolsParam` specifies the number of workers to request from the scheduler; `bplapply` divides the 100 jobs among the 5 workers. If `BatchtoolsParam` had been created without specifying any workers, then the 100 jobs implied by the argument to `bplapply` would be associated with 100 tasks submitted to the scheduler. Because cluster tasks are running in independent `R` instances, and often on physically separate machines, a convenient 'best practice' is to write `FUN` in a 'functional programming' manner, such that all data required for the function is passed in as arguments or (for large data) loaded implicitly or explicitly (e.g., via an *R* library) from disk. # Analyzing genomic data in *Bioconductor* General strategies exist for handling large genomic data that are well suited to *R* programs. A manuscript titled *Scalable Genomics with R and Bioconductor* by Michael Lawrence and Martin Morgan reviews several of these approaches and demonstrates implementation with *Bioconductor* packages. Problem areas include scalable processing, summarization and visualization. The techniques presented include restricting queries, compressing data, iterating, and parallel computing. Ideas are presented in an approachable fashion within a framework of common use cases. This is a beneficial read for anyone tackling genomics problems in *R*. 
# For developers {#sec:developers} Developers wishing to use [*BiocParallel*][] in their own packages should include [*BiocParallel*][] in the `DESCRIPTION` file ```{verbatim} Imports: BiocParallel ``` and import the functions they wish to use in the `NAMESPACE` file, e.g., ```{verbatim} importFrom(BiocParallel, bplapply) ``` Then invoke the desired function in the code, e.g., ```{r devel-bplapply} system.time(x <- bplapply(1:3, function(i) { Sys.sleep(i); i })) unlist(x) ``` This will use the back-end returned by `bpparam()`, by default a `MulticoreParam()` on Linux / macOS and a `SnowParam()` on Windows, or the user's preferred back-end if they have used `register()`. The `MulticoreParam` back-end does not require any special configuration or set-up and is therefore the safest option for developers. Unfortunately, `MulticoreParam` provides only serial evaluation on Windows. Developers should document that their function uses [*BiocParallel*][] functions on the main help page, and should perhaps include in their function signature an argument `BPPARAM=bpparam()`. Developers should NOT use `register()` in package code -- this sets a preference that influences use of `bplapply()` and friends in all packages, not just their package. Developers wishing to invoke back-ends other than `MulticoreParam`, or to write code that works across Windows, macOS and Linux, no longer need to take special care to ensure that required packages, data, and functions are available and loaded on the remote nodes; by default, global variables are exported to the workers. Nonetheless, a good practice during development is to use independent processes (via `SnowParam`) rather than relying on forked processes (via `MulticoreParam`). For instance, `SnowParam` clusters include the costs of setting up the computational environment (loading required packages, for instance) that may discourage use of parallelization when parallelization provides only marginal performance gains from the computation *per se*. 
Likewise, independent processes may be more sensitive to inappropriate calls to shared libraries, revealing errors that are only transient under forked evaluation.

In `bplapply()`, the environment of `FUN` (other than the global environment) is serialized to the workers. A consequence is that, when `FUN` is inside a package name space, other functions available in the name space are available to `FUN` on the workers.

# For server administrators {#sec:administrators}

If the package is installed on a server used by multiple users, the default number of cores can lead to many more tasks running than the server has cores when two or more users run a parallel-enabled function simultaneously. A more conservative default than 'all cores minus 2' may be desirable, so that one user does not take all of the cores unless they explicitly request them. This can be implemented with environment variables: setting `BIOCPARALLEL_WORKER_NUMBER` for all system users to the number of cores divided by the typical number of concurrent users is a reasonable approach to avoiding this scenario.

# sessionInfo

```{r sessionInfo}
sessionInfo()
```

[*BiocParallel*]: https://bioconductor.org/packages/BiocParallel

---
title: "Random Numbers in _BiocParallel_"
author:
- name: Martin Morgan
  affiliation: Roswell Park Comprehensive Cancer Center, Buffalo, NY
  email: Martin.Morgan@RoswellPark.org
date: "Edited: 7 September, 2021; Compiled: `r format(Sys.time(), '%B %d, %Y')`"
vignette: >
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteIndexEntry{4. Random Numbers in BiocParallel}
  %\VignetteEncoding{UTF-8}
output:
  BiocStyle::html_document:
    number_sections: yes
    toc: yes
    toc_depth: 4
---

[RPCI]: https://www.roswellpark.org/martin-morgan

# Scope

`r Biocpkg("BiocParallel")` enables use of random number streams in a reproducible manner.
This document applies to the following `*Param()`:

* `SerialParam()`: sequential evaluation in a single *R* process.
* `SnowParam()`: parallel evaluation in multiple independent *R* processes.
* `MulticoreParam()`: parallel evaluation in *R* sessions running in forked processes. Not available on Windows.

The `*Param()` can be used for evaluation with:

* `bplapply()`: `lapply()`-like application of a user-supplied function `FUN` to a vector or list of elements `X`.
* `bpiterate()`: apply a user-supplied function `FUN` to an unknown number of elements resulting from successive calls to a user-supplied function `ITER`.

The reproducible random number implementation also supports:

* `bptry()` and the `BPREDO=` argument, for re-evaluation of elements that fail (e.g., because of a bug in `FUN`).

# Essentials

## Use of `bplapply()` and `RNGseed=`

Attach `r Biocpkg("BiocParallel")` and ensure that the version is greater than 1.27.5

```{r}
library(BiocParallel)
stopifnot(
    packageVersion("BiocParallel") > "1.27.5"
)
```

For reproducible calculation, use the `RNGseed=` argument in any of the `*Param()` constructors.

```{r}
result1 <- bplapply(1:3, runif, BPPARAM = SerialParam(RNGseed = 100))
result1
```

Repeating the calculation with the same value for `RNGseed=` results in the same result; a different random number seed results in different results.
```{r}
result2 <- bplapply(1:3, runif, BPPARAM = SerialParam(RNGseed = 100))
stopifnot(
    identical(result1, result2)
)

result3 <- bplapply(1:3, runif, BPPARAM = SerialParam(RNGseed = 200))
result3
stopifnot(
    !identical(result1, result3)
)
```

Results are invariant across `*Param()`

```{r}
result4 <- bplapply(1:3, runif, BPPARAM = SnowParam(RNGseed = 100))
stopifnot(
    identical(result1, result4)
)

if (!identical(.Platform$OS.type, "windows")) {
    result5 <- bplapply(1:3, runif, BPPARAM = MulticoreParam(RNGseed = 100))
    stopifnot(
        identical(result1, result5)
    )
}
```

Parallel backends can adjust the number of `workers` (processes performing the evaluation) and `tasks` (how elements of `X` are distributed between workers). Results are invariant to these parameters. This is illustrated with `SnowParam()`, but applies also to `MulticoreParam()`.

```{r}
result6 <- bplapply(1:3, runif, BPPARAM = SnowParam(workers = 2, RNGseed = 100))
result7 <- bplapply(1:3, runif, BPPARAM = SnowParam(workers = 3, RNGseed = 100))
result8 <- bplapply(
    1:3, runif,
    BPPARAM = SnowParam(workers = 2, tasks = 3, RNGseed = 100)
)
stopifnot(
    identical(result1, result6),
    identical(result1, result7),
    identical(result1, result8)
)
```

Subsequent sections illustrate results with `SerialParam()`, but identical results are obtained with `SnowParam()` and `MulticoreParam()`.

## Use with `bpiterate()`

`bpiterate()` allows parallel processing of a 'stream' of data as a series of tasks, with a task consisting of a portion of the overall data. It is useful when the data size is not known or easily partitioned into elements of a vector or list. A real use case might involve iterating through a BAM file, where a task represents successive records (perhaps 100,000 per task) in the file.
Here we illustrate with a simple example -- iterating through a vector `x = 1:3`

```{r}
ITER_FUN_FACTORY <- function() {
    x <- 1:3
    i <- 0L
    function() {
        i <<- i + 1L
        if (i > length(x))
            return(NULL)
        x[[i]]
    }
}
```

`ITER_FUN_FACTORY()` is used to create a function that, on each invocation, returns the next task (here, an element of `x`; in a real example, perhaps 100,000 records from a BAM file). When there are no more tasks, the function returns `NULL`.

```{r, collapse = TRUE}
ITER <- ITER_FUN_FACTORY()
ITER()
ITER()
ITER()
ITER()
```

In our simple example, `bpiterate()` is performing the same computations as `bplapply()`, so the results, including the random number streams used by each task in `bpiterate()`, are the same

```{r}
result9 <- bpiterate(
    ITER_FUN_FACTORY(), runif,
    BPPARAM = SerialParam(RNGseed = 100)
)
stopifnot(
    identical(result1, result9)
)
```

## Use with `bptry()`

`bptry()` in conjunction with the `BPREDO=` argument to `bplapply()` or `bpiterate()` allows for graceful recovery from errors. Here a buggy `FUN1()` produces an error for the second element. `bptry()` allows evaluation to continue for other elements of `X`, despite the error. This is shown in the result.

```{r}
FUN1 <- function(i) {
    if (identical(i, 2L)) {
        ## error when evaluating the second element
        stop("i == 2")
    } else runif(i)
}
result10 <- bptry(bplapply(
    1:3, FUN1,
    BPPARAM = SerialParam(RNGseed = 100, stop.on.error = FALSE)
))
result10
```

`FUN2()` illustrates the flexibility of `bptry()` by fixing the bug when `i == 2`, but also generating incorrect results if invoked for previously correct values. The identity of the result to the original computation shows that only the errored task is re-computed, and that the random number stream used by the task is identical to the original stream.
```{r}
FUN2 <- function(i) {
    if (identical(i, 2L)) {
        ## the random number stream should be in the same state as the
        ## first time through the loop, and runif(i) should return the
        ## same result as FUN1
        runif(i)
    } else {
        ## if this branch is used, then we are incorrectly updating
        ## already calculated elements -- '0' in the output would
        ## indicate this error
        0
    }
}
result11 <- bplapply(
    1:3, FUN2,
    BPREDO = result10,
    BPPARAM = SerialParam(RNGseed = 100, stop.on.error = FALSE)
)
stopifnot(
    identical(result1, result11)
)
```

## Relationship between `RNGseed=` and `set.seed()`

The global random number stream (influenced by `set.seed()`) is ignored by `r Biocpkg("BiocParallel")`, and `r Biocpkg("BiocParallel")` does NOT increment the global stream.

```{r}
set.seed(200)
value <- runif(1)

set.seed(200)
result12 <- bplapply(1:3, runif, BPPARAM = SerialParam(RNGseed = 100))
stopifnot(
    identical(result1, result12),
    identical(value, runif(1))
)
```

When `RNGseed=` is not used, an internal stream (not accessible to the user) is used, and `r Biocpkg("BiocParallel")` does NOT increment the global stream.

```{r}
set.seed(100)
value <- runif(1)

set.seed(100)
result13 <- bplapply(1:3, runif, BPPARAM = SerialParam())
stopifnot(
    !identical(result1, result13),
    identical(value, runif(1))
)
```

## `bpstart()` and random number streams

In all of the examples so far, `*Param()` objects are passed to `bplapply()` or `bpiterate()` in the 'stopped' state. Internally, `bplapply()` and `bpiterate()` invoke `bpstart()` to establish the computational environment (e.g., starting workers for `SnowParam()`). `bpstart()` can be called explicitly, e.g., to allow workers to be used across calls to `bplapply()`. The cluster random number stream is initiated with `bpstart()`.
Thus

```{r}
param <- bpstart(SerialParam(RNGseed = 100))
result16 <- bplapply(1:3, runif, BPPARAM = param)
bpstop(param)
stopifnot(
    identical(result1, result16)
)
```

This allows a second call to `bplapply()` to represent a continuation of a random number computation -- the second call to `bplapply()` results in different random number streams for each element of `X`.

```{r}
param <- bpstart(SerialParam(RNGseed = 100))
result16 <- bplapply(1:3, runif, BPPARAM = param)
result17 <- bplapply(1:3, runif, BPPARAM = param)
bpstop(param)
stopifnot(
    identical(result1, result16),
    !identical(result1, result17)
)
```

## Relationship between `bplapply()` and `lapply()`

The results from `bplapply()` are different from the results from `lapply()`, even with the same random number seed. This is because correctly implemented parallel random streams require use of a particular random number generator invoked in specific ways for each element of `X`, as outlined in the Implementation notes section.

```{r}
set.seed(100)
result20 <- lapply(1:3, runif)
stopifnot(
    !identical(result1, result20)
)
```

# Implementation notes

The implementation uses the L'Ecuyer-CMRG random number generator (see `?RNGkind` and `?parallel::clusterSetRNGStream` for additional details). This generator supports independent streams and substreams of random numbers. In `r Biocpkg("BiocParallel")`, each call to `bpstart()` creates a new stream from the L'Ecuyer-CMRG generator. Each element in `bplapply()` or `bpiterate()` creates a new substream.
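The stream and substream mechanics can be seen directly with base *R*'s `parallel` package; this sketch illustrates the generator itself and is independent of `r Biocpkg("BiocParallel")`'s internal bookkeeping:

```r
library(parallel)

RNGkind("L'Ecuyer-CMRG")
set.seed(100)
seed <- .Random.seed                # an L'Ecuyer-CMRG seed (7 integers)

stream1 <- nextRNGStream(seed)      # a new stream, as bpstart() creates
sub1 <- nextRNGSubStream(stream1)   # a new substream, as each element uses
sub2 <- nextRNGSubStream(sub1)

## substreams are distinct, well-separated starting points within a stream
stopifnot(!identical(sub1, sub2))
```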
Each application of `FUN` is therefore using the L'Ecuyer-CMRG random number generator, with a substream that is independent of the substreams of all other elements.

Within the user-supplied `FUN` of `bplapply()` or `bpiterate()`, it is a mistake to use `RNGkind()` to set a different random number generator, or to use `set.seed()`. This would in principle compromise the independence of the streams across elements.

# `sessionInfo()`

```{r, echo = FALSE}
sessionInfo()
```