# parallelly: Enhancing the 'parallel' Package
The **parallelly** package provides functions that enhance the **parallel** package. For example, `availableCores()` gives the number of CPU cores available to your R process as given by R options and environment variables, including those set by job schedulers on high-performance compute (HPC) clusters. If R runs under 'cgroups' or in a Linux container, then their settings are acknowledged too. If nothing else is set, it falls back to `parallel::detectCores()`. Another example is `makeClusterPSOCK()`, which is backward compatible with `parallel::makePSOCKcluster()` while doing a better job of setting up remote cluster workers, without you having to know your local public IP address or configure your firewall to do port forwarding to your local computer. The functions and features added to this package are written to be backward compatible with the **parallel** package, such that they may be incorporated there later. The **parallelly** package comes with an open invitation for the R Core Team to adopt all or parts of its code into the **parallel** package.
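For example, here is a minimal sketch of these two functions (the two-worker cluster below is just an illustration):
```r
library(parallelly)

## Number of CPU cores available to this R process, respecting
## R options, environment variables, cgroups, and HPC job schedulers
ncores <- availableCores()
print(ncores)

## Set up a PSOCK cluster with two local R workers
cl <- makeClusterPSOCK(2)
print(cl)

parallel::stopCluster(cl)
```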
## Feature Comparison 'parallelly' vs 'parallel'
| | parallelly | parallel |
| ---------------------------------- | :-------------: | :--------: |
| remote clusters without knowing local public IP | ✓ | N/A |
| remote clusters without firewall configuration | ✓ | N/A |
| remote username in ~/.ssh/config | ✓ | R (>= 4.1.0) with `user = NULL` |
| set workers' library package path on startup | ✓ | N/A |
| set workers' environment variables on startup | ✓ | N/A |
| custom workers startup code | ✓ | N/A |
| fallback to RStudio's SSH and PuTTY's plink | ✓ | N/A |
| faster, parallel setup of local workers (R >= 4.0.0) | ✓ | ✓ |
| faster, little-endian protocol by default | ✓ | N/A |
| faster, low-latency socket connections by default | ✓ | N/A |
| validation of cluster at setup | ✓ | ✓ |
| attempt to launch failed workers multiple times | ✓ | N/A |
| collect worker details at cluster setup | ✓ | N/A |
| termination of workers if cluster setup fails | ✓ | R (>= 4.0.0) |
| shutdown of cluster by garbage collector | ✓ | N/A |
| combining multiple, existing clusters | ✓ | N/A |
| more informative printing of cluster objects | ✓ | N/A |
| check if local and remote workers are alive | ✓ | N/A |
| restart local and remote workers | ✓ | N/A |
| defaults via options & environment variables | ✓ | N/A |
| respecting CPU resources allocated by cgroups, Linux containers, and HPC schedulers | ✓ | N/A |
| early error if requesting more workers than possible | ✓ | N/A |
| informative error messages | ✓ | N/A |
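To give a flavor of a few of the features listed above, here is a minimal, local-only sketch (the worker counts are arbitrary):
```r
library(parallelly)

cl1 <- makeClusterPSOCK(2)
cl2 <- makeClusterPSOCK(2)

## More informative printing of cluster objects
print(cl1)

## Check that the workers are alive
isNodeAlive(cl1)

## Combine multiple, existing clusters
cl <- c(cl1, cl2)
print(cl)

parallel::stopCluster(cl)
```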
## Compatibility with the parallel package
Any cluster created by the **parallelly** package is fully compatible with the clusters created by the **parallel** package and can be used by all of **parallel**'s functions for cluster processing, e.g. `parallel::clusterEvalQ()` and `parallel::parLapply()`. The `parallelly::makeClusterPSOCK()` function can be used as a stand-in replacement for `parallel::makePSOCKcluster()`, or equivalently, `parallel::makeCluster(..., type = "PSOCK")`.
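For example, here is a short sketch that uses a **parallelly** cluster with **parallel**'s functions:
```r
cl <- parallelly::makeClusterPSOCK(2)

## Use the cluster with functions from the 'parallel' package
parallel::clusterEvalQ(cl, Sys.getpid())
parallel::parLapply(cl, 1:4, sqrt)

parallel::stopCluster(cl)
```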
Most of the **parallelly** functions also apply to clusters created by the **parallel** package. For example,
```r
cl <- parallel::makeCluster(2)
cl <- parallelly::autoStopCluster(cl)
```
makes the cluster created by **parallel** shut down automatically when R's garbage collector removes the cluster object. This lowers the risk of leaving stray R worker processes running in the background by mistake. Another way to achieve the above in a single call is to use:
```r
cl <- parallelly::makeClusterPSOCK(2, autoStop = TRUE)
```
### availableCores() vs parallel::detectCores()
The `availableCores()` function is designed as a better, safer alternative to `detectCores()` of the **parallel** package. It is designed to be a worry-free solution for developers and end-users to query the number of available cores - a solution that plays nice on multi-tenant systems, in Linux containers, on high-performance compute (HPC) clusters, on CRAN and Bioconductor check servers, and elsewhere.
Did you know that `parallel::detectCores()` might return `NA` on some systems, or that `parallel::detectCores() - 1` might return 0 on some systems, e.g. old hardware and virtual machines? Because of this, you have to use `max(1, parallel::detectCores() - 1, na.rm = TRUE)` to get a correct result. In contrast, `parallelly::availableCores()` is guaranteed to return a positive integer, and you can use `parallelly::availableCores(omit = 1)` to return all but one core and always at least one.
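Here is a short sketch of the difference; the commented values are what you might see on a single-core virtual machine:
```r
## May be NA, and minus one may be zero, on some systems
parallel::detectCores()                            ## e.g. NA or 1
max(1, parallel::detectCores() - 1, na.rm = TRUE)  ## always at least 1, but verbose

## Always a positive integer
parallelly::availableCores()                       ## e.g. 1
parallelly::availableCores(omit = 1)               ## all but one core, never less than 1
```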
Just like other software tools that "hijack" all cores by default, R scripts and packages that default to `detectCores()` parallel workers cause lots of suffering for fellow end-users and system administrators. For instance, a shared server with 48 cores will come to a halt after only a few users run parallel processing with `detectCores()` workers. This problem gets worse on machines with many cores, because they can host even more concurrent users. If these R users had used `availableCores()` instead, the system administrator could limit the number of cores each user gets to, say, two (2), by setting the environment variable `R_PARALLELLY_AVAILABLECORES_FALLBACK=2`.
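For example, a sysadmin could add the setting to a site-wide `Renviron.site` file; the sketch below emulates this within an R session (the value two is just an example):
```r
## Normally set in a site-wide Renviron.site file before R starts;
## emulated here for illustration only
Sys.setenv(R_PARALLELLY_AVAILABLECORES_FALLBACK = "2")

## If nothing else (job scheduler, cgroups, mc.cores, ...) limits the
## number of cores, availableCores() falls back to two
parallelly::availableCores()
```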
In contrast, it is _not_ possible to override what `parallel::detectCores()` returns, cf. [PR#17641 - WISH: Make parallel::detectCores() agile to new env var R_DEFAULT_CORES ](https://bugs.r-project.org/show_bug.cgi?id=17641).
Similarly, `availableCores()` is also agile to CPU limitations set by Unix control groups (cgroups), which are often used by Linux containers (e.g. Docker, Apptainer / Singularity, and Podman) and Kubernetes (K8s) environments. For example, `docker run --cpuset-cpus=0-2,8 ...` sets the CPU affinity so that the processes can only run on CPUs 0, 1, 2, and 8 on the host system. In this case, `availableCores()` detects this and returns four (4). Another example is `docker run --cpus=3.4 ...`, which throttles the CPU quota to on average 3.4 CPUs on the host system. In this case, `availableCores()` detects this and returns three (3), because it rounds to the nearest integer. In contrast, `parallel::detectCores()` completely ignores such cgroups settings and returns the number of CPUs on the host system, which results in CPU overuse and degraded performance. Continuous Integration (CI) services (e.g. GitHub Actions, Travis CI, and AppVeyor CI) and cloud services (e.g. RStudio Cloud) use these types of cgroups settings under the hood, which means `availableCores()` respects their CPU allocations.
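As an illustration, inside a container started with a CPU limit, one might see something like the following (the host size and limit are hypothetical):
```r
## Inside a container started with, say, 'docker run --cpus=2 ...'
## on a 64-core host
parallel::detectCores()       ## 64 - the number of CPUs on the host
parallelly::availableCores()  ##  2 - respects the cgroups CPU quota
```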
If running on an HPC cluster with a job scheduler, a script that uses `availableCores()` will use the number of parallel workers that the job scheduler has assigned to the job. For example, if we submit a Slurm job as `sbatch --cpus-per-task=16 ...`, then `availableCores()` returns 16, because it respects the `SLURM_*` environment variables set by the scheduler. On Son of Grid Engine (SGE), the scheduler sets `NSLOTS` when submitting using `qsub -pe smp 8 ...` and `availableCores()` returns eight (8). See `help("availableCores", package = "parallelly")` for currently supported job schedulers, which include Fujitsu Technical Computing Suite, Load Sharing Facility (LSF), Simple Linux Utility for Resource Management (Slurm), Sun Grid Engine/Oracle Grid Engine/Son of Grid Engine (SGE), Univa Grid Engine (UGE), and TORQUE/PBS.
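For example, inside a Slurm job submitted with `sbatch --cpus-per-task=16 ...`, one might see (hypothetical values):
```r
## Inside a Slurm job allocated 16 CPU cores
parallelly::availableCores()    ## 16, from SLURM_CPUS_PER_TASK
parallelly::availableWorkers()  ## the allotted hostnames, one entry per core
```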
Of course, `availableCores()` also respects R options and environment variables commonly used to specify the number of parallel workers, e.g. the R option `mc.cores` and the Bioconductor environment variable `BIOCPARALLEL_WORKER_NUMBER`. It will also detect when running `R CMD check` and limit the number of workers to two (2), which is the maximum number of parallel workers allowed by the [CRAN Policies](https://cran.r-project.org/web/packages/policies.html). This way you, as a package developer, know that your package will always play by the rules on CRAN and Bioconductor.
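For example (the two-core cap applies only while `R CMD check` is running):
```r
options(mc.cores = 4L)
parallelly::availableCores()  ## 4

## During 'R CMD check --as-cran', the _R_CHECK_LIMIT_CORES_ environment
## variable is set, and availableCores() returns at most 2 regardless
```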
If nothing is set that limits the number of cores, then `availableCores()` falls back to `parallel::detectCores()` and if that returns `NA_integer_` then one (1) is returned.
The table below summarizes the benefits:
| | availableCores() | parallel::detectCores() |
| --------------------------------------- | :--------------: | :---------------------------: |
| Guaranteed to return a positive integer | ✓ | no (may return `NA_integer_`) |
| Safely use all but some cores | ✓ | no (may return zero or less) |
| Can be overridden, e.g. by a sysadm | ✓ | no |
| Respects cgroups and Linux containers | ✓ | no |
| Respects job scheduler allocations | ✓ | no |
| Respects CRAN policies | ✓ | no |
| Respects Bioconductor policies | ✓ | no |
## Backward compatibility with the future package
The functions in this package originate from the **[future](https://cran.r-project.org/package=future)** package, where we have used and validated them for several years. I moved these functions to this separate package in 2020, because they are also useful outside of the future framework. For backward compatibility with the future framework, the R options and environment variables that are prefixed with `parallelly.*` and `R_PARALLELLY_*` can, for the time being, also be set with `future.*` and `R_FUTURE_*` prefixes.
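For example, the following two settings are, for the time being, treated as equivalent; the `availableCores()` fallback option is used here only as an illustration and the value is arbitrary:
```r
## Current name ...
options(parallelly.availableCores.fallback = 2L)

## ... and the legacy name, still recognized for backward compatibility
options(future.availableCores.fallback = 2L)
```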
## Roadmap
* [x] Submit **parallelly** to CRAN, with minimal changes compared to the corresponding functions in the **future** package (on CRAN as of 2020-10-20)
* [x] Update the **future** package to import and re-export the functions from the **parallelly** to maximize backward compatibility in the future framework (**future** 1.20.1 on CRAN as of 2020-11-03)
* [x] Switch to use 10-15% faster `useXDR=FALSE`
* [x] Implement same fast parallel setup of parallel PSOCK workers as in **parallel** (>= 4.0.0)
* [x] After having validated that there is no negative impact on the future framework, allow for changes in the **parallelly** package, e.g. renaming the R options and environment variables to be `parallelly.*` and `R_PARALLELLY_*` while falling back to `future.*` and `R_FUTURE_*`
* [ ] Migrate, currently internal, UUID functions and export them, e.g. `uuid()`, `connectionUuid()`, and `sessionUuid()` (https://github.com/HenrikBengtsson/Wishlist-for-R/issues/96). Because [R does not have a built-in md5 checksum function that operates on objects](https://github.com/HenrikBengtsson/Wishlist-for-R/issues/21), these functions require us to add a dependency on the **[digest](https://cran.r-project.org/package=digest)** package.
* [ ] Add vignettes on how to set up cluster running on local or remote machines, including in Linux containers and on popular cloud services, and vignettes on common problems and how to troubleshoot them
Initially, backward compatibility for the **future** package is of top priority.
## Installation
R package parallelly is available on [CRAN](https://cran.r-project.org/package=parallelly) and can be installed in R as:
```r
install.packages("parallelly")
```
### Pre-release version
To install the pre-release version that is available in Git branch `develop` on GitHub, use:
```r
remotes::install_github("HenrikBengtsson/parallelly", ref="develop")
```
This will install the package from source.
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utils,pid.R
\name{pid_exists}
\alias{pid_exists}
\title{Check whether a process PID exists or not}
\usage{
pid_exists(pid, debug = getOption2("parallelly.debug", FALSE))
}
\arguments{
\item{pid}{A positive integer.}
}
\value{
Returns \code{TRUE} if a process with the given PID exists,
\code{FALSE} if a process with the given PID does not exist, and
\code{NA} if it is not possible to check PIDs on the current system.
}
\description{
Check whether a process PID exists or not
}
\details{
There is no single go-to function in \R for testing whether a PID exists
or not. Instead, this function tries to identify a working one among
multiple possible alternatives. A method is considered working if the
PID of the current process is successfully identified as being existing
such that \code{pid_exists(Sys.getpid())} is \code{TRUE}. If no working
approach is found, \code{pid_exists()} will always return \code{NA}
regardless of PID tested.
On Unix, including macOS, alternatives \code{tools::pskill(pid, signal = 0L)}
and \code{system2("ps", args = pid)} are used.
On MS Windows, various alternatives of \code{system2("tasklist", ...)} are used.
Note, some MS Windows machines are configured to not allow using
\code{tasklist} on process IDs other than the current one.
}
\references{
\enumerate{
\item The Open Group Base Specifications Issue 7, 2018 edition,
IEEE Std 1003.1-2017 (Revision of IEEE Std 1003.1-2008)
\url{https://pubs.opengroup.org/onlinepubs/9699919799/functions/kill.html}
\item Microsoft, tasklist, 2021-03-03,
\url{https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/tasklist}
\item R-devel thread 'Detecting whether a process exists or not by its PID?',
2018-08-30.
\url{https://stat.ethz.ch/pipermail/r-devel/2018-August/076702.html}
}
}
\seealso{
\code{\link[tools]{pskill}()} and \code{\link[base]{system2}()}.
}
\keyword{internal}
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/isNodeAlive.R
\name{isNodeAlive}
\alias{isNodeAlive}
\title{Check whether or not the cluster nodes are alive}
\usage{
isNodeAlive(x, ...)
}
\arguments{
\item{x}{A cluster or a cluster node ("worker").}
\item{...}{Not used.}
}
\value{
A logical vector of length \code{length(x)} with values
FALSE, TRUE, and NA. If it can be established that the
process for a cluster node is running, then TRUE is returned.
If it is not running, then FALSE is returned.
If neither can be inferred, or it times out, then NA is returned.
}
\description{
Check whether or not the cluster nodes are alive
}
\details{
This function works by checking whether the cluster node process is
running or not. This is done by querying the system for its process
ID (PID), which is registered by \code{\link[=makeClusterPSOCK]{makeClusterPSOCK()}} when the node
starts. If the PID is not known, then NA is returned.
On Unix and macOS, the PID is queried using \code{\link[tools:pskill]{tools::pskill()}} with
fallback to \code{system("ps")}. On MS Windows, \code{system2("tasklist")} is used,
which may take a long time if there are a lot of processes running.
For details, see the \emph{internal} \code{\link[=pid_exists]{pid_exists()}} function.
}
\examples{
\donttest{
cl <- makeClusterPSOCK(2)
## Check if cluster node #2 is alive
print(isNodeAlive(cl[[2]]))
## Check all nodes
print(isNodeAlive(cl))
}
}
\seealso{
Use \code{\link[parallel:makeCluster]{parallel::stopCluster()}} to shut down cluster nodes.
If that's not sufficient, \code{\link[=killNode]{killNode()}} may be attempted.
}
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/availableCores.R
\name{availableCores}
\alias{availableCores}
\title{Get Number of Available Cores on The Current Machine}
\usage{
availableCores(
constraints = NULL,
methods = getOption2("parallelly.availableCores.methods", c("system", "cgroups.cpuset",
"cgroups.cpuquota", "cgroups2.cpu.max", "nproc", "mc.cores", "BiocParallel",
"_R_CHECK_LIMIT_CORES_", "Bioconductor", "LSF", "PJM", "PBS", "SGE", "Slurm",
"fallback", "custom")),
na.rm = TRUE,
logical = getOption2("parallelly.availableCores.logical", TRUE),
default = c(current = 1L),
which = c("min", "max", "all"),
omit = getOption2("parallelly.availableCores.omit", 0L)
)
}
\arguments{
\item{constraints}{An optional character specifying under what
constraints ("purposes") we are requesting the values.
For instance, on systems where multicore processing is not supported
(i.e. Windows), using \code{constraints = "multicore"} will force a
single core to be reported.
Using \code{constraints = "connections"}, will append \code{"connections"} to
the \code{methods} argument.
It is possible to specify multiple constraints, e.g.
\code{constraints = c("connections", "multicore")}.}
\item{methods}{A character vector specifying how to infer the number
of available cores.}
\item{na.rm}{If TRUE, only non-missing settings are considered/returned.}
\item{logical}{Passed to
\code{\link[parallel]{detectCores}(logical = logical)}, which,
\emph{if supported}, returns the number of logical CPUs (TRUE) or physical
CPUs/cores (FALSE).
At least as of R 4.2.2, \code{detectCores()} ignores this argument on Linux.
This argument is only used if argument \code{methods} includes \code{"system"}.
\item{default}{The default number of cores to return if no non-missing
settings are available.}
\item{which}{A character specifying which settings to return.
If \code{"min"} (default), the minimum value is returned.
If \code{"max"}, the maximum value is returned (be careful!)
If \code{"all"}, all values are returned.}
\item{omit}{(integer; non-negative) Number of cores to not include.}
}
\value{
Return a positive (>= 1) integer.
If \code{which = "all"}, then more than one value may be returned.
Together with \code{na.rm = FALSE} missing values may also be returned.
}
\description{
The current/main \R session counts as one, meaning the minimum
number of cores available is always at least one.
}
\details{
The following settings ("methods") for inferring the number of cores
are supported:
\itemize{
\item \code{"system"} -
Query \code{\link[parallel]{detectCores}(logical = logical)}.
\item \code{"cgroups.cpuset"} -
On Unix, query control group (cgroup) value \code{cpuset.cpus}.
\item \code{"cgroups.cpuquota"} -
On Unix, query control group (cgroup) value
\code{cpu.cfs_quota_us} / \code{cpu.cfs_period_us}.
\item \code{"cgroups2.cpu.max"} -
On Unix, query control group (cgroup v2) values \code{cpu.max}.
\item \code{"nproc"} -
On Unix, query system command \code{nproc}.
\item \code{"mc.cores"} -
If available, returns the value of option
\code{\link[base:options]{mc.cores}}.
Note that \code{mc.cores} is defined as the number of
\emph{additional} \R processes that can be used in addition to the
main \R process. This means that with \code{mc.cores = 0} all
calculations should be done in the main \R process, i.e. we have
exactly one core available for our calculations.
The \code{mc.cores} option defaults to environment variable
\env{MC_CORES} (and is set accordingly when the \pkg{parallel}
package is loaded). The \code{mc.cores} option is used by for
instance \code{\link[=mclapply]{mclapply}()} of the \pkg{parallel}
package.
\item \code{"connections"} -
Query the current number of available R connections per
\code{\link[=freeConnections]{freeConnections()}}. This is the maximum number of socket-based
\strong{parallel} cluster nodes that are possible to launch, because each
one needs its own R connection.
The exception is when \code{freeConnections()} is zero, then \code{1L} is
still returned, because \code{availableCores()} should always return a
positive integer.
\item \code{"BiocParallel"} -
Query environment variable \env{BIOCPARALLEL_WORKER_NUMBER} (integer),
which is defined and used by \strong{BiocParallel} (>= 1.27.2).
If the former is set, this is the number of cores considered.
\item \code{"_R_CHECK_LIMIT_CORES_"} -
Query environment variable \env{_R_CHECK_LIMIT_CORES_} (logical or
\code{"warn"}) used by \verb{R CMD check} and set to true by
\verb{R CMD check --as-cran}. If set to a non-false value, then a maximum
of 2 cores is considered.
\item \code{"Bioconductor"} -
Query environment variable \env{IS_BIOC_BUILD_MACHINE} (logical)
used by the Bioconductor (>= 3.16) build and check system. If set to
true, then a maximum of 4 cores is considered.
\item \code{"LSF"} -
Query Platform Load Sharing Facility (LSF) environment variable
\env{LSB_DJOB_NUMPROC}.
Jobs with multiple (CPU) slots can be submitted on LSF using
\verb{bsub -n 2 -R "span[hosts=1]" < hello.sh}.
\item \code{"PJM"} -
Query Fujitsu Technical Computing Suite (that we choose to shorten
as "PJM") environment variables \env{PJM_VNODE_CORE} and
\env{PJM_PROC_BY_NODE}.
The first is set when submitted with \verb{pjsub -L vnode-core=8 hello.sh}.
\item \code{"PBS"} -
Query TORQUE/PBS environment variables \env{PBS_NUM_PPN} and \env{NCPUS}.
Depending on PBS system configuration, these \emph{resource}
parameters may or may not default to one.
An example of a job submission that results in this is
\verb{qsub -l nodes=1:ppn=2}, which requests one node with two cores.
\item \code{"SGE"} -
Query Sun Grid Engine/Oracle Grid Engine/Son of Grid Engine (SGE)
and Univa Grid Engine (UGE) environment variable \env{NSLOTS}.
An example of a job submission that results in this is
\verb{qsub -pe smp 2} (or \verb{qsub -pe by_node 2}), which
requests two cores on a single machine.
\item \code{"Slurm"} -
Query Simple Linux Utility for Resource Management (Slurm)
environment variable \env{SLURM_CPUS_PER_TASK}.
This may or may not be set. It can be set when submitting a job,
e.g. \verb{sbatch --cpus-per-task=2 hello.sh} or by adding
\verb{#SBATCH --cpus-per-task=2} to the \file{hello.sh} script.
If \env{SLURM_CPUS_PER_TASK} is not set, then it will fall back to
use \env{SLURM_CPUS_ON_NODE} if the job is a single-node job
(\env{SLURM_JOB_NUM_NODES} is 1), e.g. \verb{sbatch --ntasks=2 hello.sh}.
To make sure all tasks are assigned to a single node, specify
\code{--nodes=1}, e.g. \verb{sbatch --nodes=1 --ntasks=16 hello.sh}.
\item \code{"custom"} -
If option
\code{\link[=parallelly.options]{parallelly.availableCores.custom}}
is set and a function,
then this function will be called (without arguments) and its value
will be coerced to an integer, which will be interpreted as a number
of available cores. If the value is NA, then it will be ignored.
It is safe for this custom function to call \code{availableCores()}; if
done, the custom function will \emph{not} be recursively called.
}
For any other value of a \code{methods} element, the \R option with the
same name is queried. If that is not set, the system environment
variable is queried. If neither is set, a missing value is returned.
}
\section{Avoid ending up with zero cores}{
Note that some machines might have a limited number of cores, or the R
process runs in a container or a cgroup that only provides a small number
of cores. In such cases, using
\preformatted{ncores <- availableCores() - 1
}
may end up with zero cores. Instead, use
\preformatted{ncores <- availableCores(omit = 1)
}
to put aside one of the cores from being used. Regardless how many cores
you put aside, this function is guaranteed to return at least one core.
}
\section{Advanced usage}{
It is possible to override the maximum number of cores on the machine
as reported by \code{availableCores(methods = "system")}. This can be
done by first specifying
\code{options(parallelly.availableCores.methods = "mc.cores")} and
then the number of cores to use, e.g. \code{options(mc.cores = 8)}.
}
\examples{
message(paste("Number of cores available:", availableCores()))
\dontrun{
options(mc.cores = 2L)
message(paste("Number of cores available:", availableCores()))
}
\dontrun{
## IMPORTANT: availableCores() may return 1L
options(mc.cores = 1L)
ncores <- availableCores() - 1 ## ncores = 0
ncores <- availableCores(omit = 1) ## ncores = 1
message(paste("Number of cores to use:", ncores))
}
\dontrun{
## Use 75\% of the cores on the system but never more than four
options(parallelly.availableCores.custom = function() {
ncores <- max(parallel::detectCores(), 1L, na.rm = TRUE)
ncores <- min(as.integer(0.75 * ncores), 4L)
max(1L, ncores)
})
message(paste("Number of cores available:", availableCores()))
## Use 50\% of the cores according to availableCores(), e.g.
## allocated by a job scheduler or cgroups.
## Note that it is safe to call availableCores() here.
options(parallelly.availableCores.custom = function() {
0.50 * parallelly::availableCores()
})
message(paste("Number of cores available:", availableCores()))
}
}
\seealso{
To get the set of available workers regardless of machine,
see \code{\link[=availableWorkers]{availableWorkers()}}.
}
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/availableConnections.R
\name{availableConnections}
\alias{availableConnections}
\alias{freeConnections}
\title{Number of Available and Free Connections}
\usage{
availableConnections()
freeConnections()
}
\value{
A non-negative integer, or \code{+Inf} if the available number of connections
is greater than 16384, which is a limit that can be set via option
\code{\link[=parallelly.options]{parallelly.availableConnections.tries}}.
}
\description{
The number of \link{connections} that can be open at the same time in \R is
\emph{typically} 128, where the first three are occupied by the always open
\code{\link[=stdin]{stdin()}}, \code{\link[=stdout]{stdout()}}, and \code{\link[=stderr]{stderr()}} connections, which leaves 125 slots
available for other types of connections. Connections are used in many
places, e.g. reading and writing to file, downloading URLs, communicating
with parallel \R processes over a socket connections (e.g.
\code{\link[parallel:makeCluster]{parallel::makeCluster()}} and \code{\link[=makeClusterPSOCK]{makeClusterPSOCK()}}), and capturing
standard output via text connections (e.g. \code{\link[utils:capture.output]{utils::capture.output()}}).
}
\section{How to increase the limit}{
In R (>= 4.4.0), it is possible to \emph{increase} the limit of 128 connections
to a greater number via command-line option \code{--max-connections=N}, e.g.
\if{html}{\out{
}}\preformatted{$ R --max-connections=512
}\if{html}{\out{
}}
For R (< 4.4.0), the limit can only be changed by rebuilding \R from
source, because the limit is hardcoded as a constant (128)
in \file{src/main/connections.c}.
}
\section{How the limit is identified}{
Since the limit \emph{might} change, for instance in custom \R builds or in
future releases of \R, we do not want to assume that the limit is 128 for
all \R installations. Unfortunately, it is not possible to query \R for what
the limit is.
Instead, \code{availableConnections()} infers it by trial and error, attempting
to open additional connections until it fails.
For efficiency, the result is memoized throughout the
current \R session.
}
\examples{
total <- availableConnections()
message("You can have ", total, " connections open in this R installation")
free <- freeConnections()
message("There are ", free, " connections remaining")
}
\references{
\enumerate{
\item 'WISH: Increase limit of maximum number of open connections (currently 125+3)', 2016-07-09,
\url{https://github.com/HenrikBengtsson/Wishlist-for-R/issues/28}
}
}
\seealso{
\code{\link[base:showConnections]{base::showConnections()}}.
}
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/isConnectionValid.R
\name{isConnectionValid}
\alias{isConnectionValid}
\alias{connectionId}
\title{Checks if a Connection is Valid}
\usage{
isConnectionValid(con)
connectionId(con)
}
\arguments{
\item{con}{A \link[base:connections]{connection}.}
}
\value{
\code{isConnectionValid()} returns TRUE if the connection is still valid,
otherwise FALSE. If FALSE, then character attribute \code{reason} provides
an explanation why the connection is not valid.
\code{connectionId()} returns a non-negative integer, -1, or \code{NA_integer_}.
For connections stdin, stdout, and stderr, 0, 1, and 2, are returned,
respectively. For all other connections, an integer greater or equal to
3 based on the connection's internal pointer is returned.
A connection that has been serialized, which is no longer valid, has
identifier -1.
Attribute \code{raw_id} returns the pointer string from which the above is
inferred.
}
\description{
Get a unique identifier for an R \link[base:connections]{connection}
and check whether or not the connection is still valid.
}
\section{Connection Index versus Connection Identifier}{
R represents \link[base:connections]{connections} as indices using plain
integers, e.g. \code{idx <- as.integer(con)}.
The three connections standard input ("stdin"), standard output ("stdout"),
and standard error ("stderr") always exists and have indices 0, 1, and 2.
Any connection opened beyond these will get index three or greater,
depending on availability as given by \code{\link[base:showConnections]{base::showConnections()}}.
To get the connection with a given index, use \code{\link[base:showConnections]{base::getConnection()}}.
\strong{Unfortunately, this index representation of connections is non-robust},
e.g. there are cases where two or more 'connection' objects can end up with
the same index and if used, the written output may end up at the wrong
destination, and files and databases might get corrupted. This can for
instance happen if \code{\link[base:showConnections]{base::closeAllConnections()}} is used (*).
\strong{In contrast, \code{id <- connectionId(con)} gives an identifier that is unique
to that 'connection' object.} This identifier is based on the internal
pointer address of the object. The risk for two connections in the same
\R session to end up with the same pointer address is very small.
Thus, in case we ended up in a situation where two connections \code{con1} and
\code{con2} share the same index---\code{as.integer(con1) == as.integer(con2)}---
they will never share the same identifier---
\code{connectionId(con1) != connectionId(con2)}.
Here, \code{isConnectionValid()} can be used to check which one of these
connections, if any, are valid.
(*) Note that there is no good reason for calling \code{closeAllConnections()}.
If called, there is a great risk that files get corrupted, etc.
See (1) for examples and details on this problem.
If you think there is a need to use it, it is much safer to restart \R
because that is guaranteed to give you a working \R session with
non-clashing connections.
It might also be that \code{closeAllConnections()} is used because
\code{\link[base:base-internal]{base::sys.save.image()}} is called, which might happen if \R is being
forced to terminate.
}
\section{Connections Cannot be Serialized Or Saved}{
A 'connection' cannot be serialized, e.g. it cannot be saved to file to
be read and used in another \R session. If attempted, the connection will
not be valid. This is a problem that may occur in parallel processing
when passing an \R object to parallel worker for further processing, e.g.
the exported object may hold an internal database connection which will
no longer be valid on the worker.
When a connection is serialized, its internal pointer address will be
invalidated (set to nil). In such cases, \code{connectionId(con)} returns -1
and \code{isConnectionValid(con)} returns FALSE.
}
\examples{
## R represents connections as plain indices
as.integer(stdin()) ## int 0
as.integer(stdout()) ## int 1
as.integer(stderr()) ## int 2
## The first three connections always exist and are always valid
isConnectionValid(stdin()) ## TRUE
connectionId(stdin()) ## 0L
isConnectionValid(stdout()) ## TRUE
connectionId(stdout()) ## 1L
isConnectionValid(stderr()) ## TRUE
connectionId(stderr()) ## 2L
## Connections cannot be serialized
con <- file(tempfile(), open = "w")
x <- list(value = 42, stderr = stderr(), con = con)
y <- unserialize(serialize(x, connection = NULL))
isConnectionValid(y$stderr) ## TRUE
connectionId(y$stderr) ## 2L
isConnectionValid(y$con) ## FALSE with attribute 'reason'
connectionId(y$con) ## -1L
close(con)
}
\references{
\enumerate{
\item \href{https://github.com/HenrikBengtsson/Wishlist-for-R/issues/81}{'BUG: A \code{connection} object may become corrupt and re-referenced to another connection (PATCH)'}, 2018-10-30.
\item R-devel thread \href{https://stat.ethz.ch/pipermail/r-devel/2018-October/077004.html}{PATCH: Asserting that 'connection' used has not changed + R_GetConnection2()}, 2018-10-31.
}
}
\seealso{
See \code{\link[base:showConnections]{base::showConnections()}} for currently open connections and their
indices. To get a connection by its index, use \code{\link[base:showConnections]{base::getConnection()}}.
}
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/availableWorkers.R
\name{availableWorkers}
\alias{availableWorkers}
\title{Get Set of Available Workers}
\usage{
availableWorkers(
constraints = NULL,
methods = getOption2("parallelly.availableWorkers.methods", c("mc.cores",
"BiocParallel", "_R_CHECK_LIMIT_CORES_", "Bioconductor", "LSF", "PJM", "PBS", "SGE",
"Slurm", "custom", "cgroups.cpuset", "cgroups.cpuquota", "cgroups2.cpu.max", "nproc",
"system", "fallback")),
na.rm = TRUE,
logical = getOption2("parallelly.availableCores.logical", TRUE),
default = getOption2("parallelly.localhost.hostname", "localhost"),
which = c("auto", "min", "max", "all")
)
}
\arguments{
\item{constraints}{An optional character specifying under what
constraints ("purposes") we are requesting the values.
Using \code{constraints = "connections"}, will append \code{"connections"} to
the \code{methods} argument.}
\item{methods}{A character vector specifying how to infer the number
of available cores.}
\item{na.rm}{If TRUE, only non-missing settings are considered/returned.}
\item{logical}{Passed as-is to \code{\link[=availableCores]{availableCores()}}.}
\item{default}{The default set of workers.}
\item{which}{A character specifying which set / sets to return.
If \code{"auto"} (default), the first non-empty set found.
If \code{"min"}, the minimum value is returned.
If \code{"max"}, the maximum value is returned (be careful!)
If \code{"all"}, all values are returned.}
}
\value{
Return a character vector of workers, which typically consists
of names of machines / compute nodes, but may also be IP numbers.
}
\description{
Get Set of Available Workers
}
\details{
The default set of workers for each method is
\code{rep("localhost", times = availableCores(methods = method, logical = logical))},
which means that each method will use at least as many parallel workers on the
current machine as \code{\link[=availableCores]{availableCores()}} allows for that method.
In addition, the following settings ("methods") are also acknowledged:
\itemize{
\item \code{"LSF"} -
Query LSF/OpenLava environment variable \env{LSB_HOSTS}.
\item \code{"PJM"} -
Query Fujitsu Technical Computing Suite (that we choose to shorten
as "PJM") the hostname file given by environment variable
\env{PJM_O_NODEINF}.
The \env{PJM_O_NODEINF} file lists the hostnames of the nodes allotted.
This function returns those hostnames each repeated \code{availableCores()}
times, where \code{availableCores()} reflects \env{PJM_VNODE_CORE}.
For example, for \verb{pjsub -L vnode=2 -L vnode-core=8 hello.sh}, the
\env{PJM_O_NODEINF} file gives two hostnames, and \env{PJM_VNODE_CORE}
gives eight cores per host, resulting in a character vector of 16
hostnames (for two unique hostnames).
\item \code{"PBS"} -
Query TORQUE/PBS environment variable \env{PBS_NODEFILE}.
If this is set and specifies an existing file, then the set
of workers is read from that file, where one worker (node)
is given per line.
An example of a job submission that results in this is
\verb{qsub -l nodes=4:ppn=2}, which requests four nodes each
with two cores.
\item \code{"SGE"} -
Query Sun Grid Engine/Oracle Grid Engine/Son of Grid Engine (SGE)
and Univa Grid Engine (UGE) environment variable \env{PE_HOSTFILE}.
An example of a job submission that results in this is
\verb{qsub -pe mpi 8} (or \verb{qsub -pe ompi 8}), which
requests eight cores on any number of machines.
\item \code{"Slurm"} -
Query Slurm environment variable \env{SLURM_JOB_NODELIST} (fallback
to legacy \env{SLURM_NODELIST}) and parse set of nodes.
Then query Slurm environment variable \env{SLURM_JOB_CPUS_PER_NODE}
(fallback \env{SLURM_TASKS_PER_NODE}) to infer how many CPU cores
Slurm have allotted to each of the nodes. If \env{SLURM_CPUS_PER_TASK}
is set, which is always a scalar, then that is respected too, i.e.
if it is smaller, then that is used for all nodes.
For example, if \code{SLURM_NODELIST="n1,n[03-05]"} (expands to
\code{c("n1", "n03", "n04", "n05")}) and \code{SLURM_JOB_CPUS_PER_NODE="2(x2),3,2"}
(expands to \code{c(2, 2, 3, 2)}), then
\code{c("n1", "n1", "n03", "n03", "n04", "n04", "n04", "n05", "n05")} is
returned. If in addition, \code{SLURM_CPUS_PER_TASK=1}, which can happen
depending on hyperthreading configurations on the Slurm cluster, then
\code{c("n1", "n03", "n04", "n05")} is returned.
\item \code{"custom"} -
If option
\code{\link[=parallelly.options]{parallelly.availableWorkers.custom}}
is set and a function,
then this function will be called (without arguments) and its value
will be coerced to a character vector, which will be interpreted as
hostnames of available workers.
It is safe for this custom function to call \code{availableWorkers()}; if
done, the custom function will \emph{not} be recursively called.
}
}
\section{Known limitations}{
\code{availableWorkers(methods = "Slurm")} will expand \env{SLURM_JOB_NODELIST}
using \command{scontrol show hostnames "$SLURM_JOB_NODELIST"}, if available.
If not available, then it attempts to parse the compressed nodelist based
on a best-guess understanding on what the possible syntax may be.
One known limitation is that "multi-dimensional" ranges are not supported,
e.g. \code{"a[1-2]b[3-4]"} is expanded by \command{scontrol} to
\code{c("a1b3", "a1b4", "a2b3", "a2b4")}. If \command{scontrol} is not
available, then any components that failed to be parsed are dropped with
an informative warning message. If no components could be parsed, then
the result of \code{methods = "Slurm"} will be empty.
}
\examples{
message(paste("Available workers:",
paste(sQuote(availableWorkers()), collapse = ", ")))
\dontrun{
options(mc.cores = 2L)
message(paste("Available workers:",
paste(sQuote(availableWorkers()), collapse = ", ")))
}
\dontrun{
## Always use two workers on host 'n1' and one on host 'n2'
options(parallelly.availableWorkers.custom = function() {
c("n1", "n1", "n2")
})
message(paste("Available workers:",
paste(sQuote(availableWorkers()), collapse = ", ")))
}
\dontrun{
## A 50\% random subset of the available workers.
## Note that it is safe to call availableWorkers() here.
options(parallelly.availableWorkers.custom = function() {
workers <- parallelly::availableWorkers()
sample(workers, size = 0.50 * length(workers))
})
message(paste("Available workers:",
paste(sQuote(availableWorkers()), collapse = ", ")))
}
}
\seealso{
To get the number of available workers on the current machine,
see \code{\link[=availableCores]{availableCores()}}.
}
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/freeCores.R
\name{freeCores}
\alias{freeCores}
\title{Get the Average Number of Free CPU Cores}
\usage{
freeCores(
memory = c("5min", "15min", "1min"),
fraction = 0.9,
logical = getOption2("parallelly.availableCores.logical", TRUE),
default = parallelly::availableCores()
)
}
\arguments{
\item{memory}{(character) The time period used to infer the system load,
with alternatives being 5 minutes (default), 15 minutes, or 1 minute.}
\item{fraction}{(non-negative numeric) A scale factor.}
\item{logical}{Passed as-is to \code{\link[=availableCores]{availableCores()}}.}
\item{default}{(integer) The value to be returned if the system load is
unknown, i.e. \code{\link[=cpuLoad]{cpuLoad()}} return missing values.}
}
\value{
A positive integer with attributes \code{loadavg} (named numeric),
\code{maxCores} (named integer), argument \code{memory} (character), and
argument \code{fraction} (numeric).
}
\description{
Get the Average Number of Free CPU Cores
}
\examples{
free <- freeCores()
print(free)
\dontrun{
## Make availableCores() agile to the system load
options(parallelly.availableCores.custom = function() freeCores())
}
}
\keyword{internal}
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/availableCores.R, R/makeClusterPSOCK.R,
% R/makeNodePSOCK.R, R/makeZZZ.R
\name{checkNumberOfLocalWorkers}
\alias{checkNumberOfLocalWorkers}
\alias{makeClusterPSOCK}
\alias{makeNodePSOCK}
\title{Create a PSOCK Cluster of R Workers for Parallel Processing}
\usage{
checkNumberOfLocalWorkers(workers)
makeClusterPSOCK(
workers,
makeNode = makeNodePSOCK,
port = c("auto", "random"),
...,
autoStop = FALSE,
tries = getOption2("parallelly.makeNodePSOCK.tries", 3L),
delay = getOption2("parallelly.makeNodePSOCK.tries.delay", 15),
validate = getOption2("parallelly.makeNodePSOCK.validate", TRUE),
verbose = getOption2("parallelly.debug", FALSE)
)
makeNodePSOCK(
worker = getOption2("parallelly.localhost.hostname", "localhost"),
master = NULL,
port,
connectTimeout = getOption2("parallelly.makeNodePSOCK.connectTimeout", 2 * 60),
timeout = getOption2("parallelly.makeNodePSOCK.timeout", 30 * 24 * 60 * 60),
rscript = NULL,
homogeneous = NULL,
rscript_args = NULL,
rscript_envs = NULL,
rscript_libs = NULL,
rscript_startup = NULL,
rscript_sh = c("auto", "cmd", "sh"),
default_packages = c("datasets", "utils", "grDevices", "graphics", "stats", if
(methods) "methods"),
methods = TRUE,
socketOptions = getOption2("parallelly.makeNodePSOCK.socketOptions", "no-delay"),
useXDR = getOption2("parallelly.makeNodePSOCK.useXDR", FALSE),
outfile = "/dev/null",
renice = NA_integer_,
rshcmd = getOption2("parallelly.makeNodePSOCK.rshcmd", NULL),
user = NULL,
revtunnel = NA,
rshlogfile = NULL,
rshopts = getOption2("parallelly.makeNodePSOCK.rshopts", NULL),
rank = 1L,
manual = FALSE,
dryrun = FALSE,
quiet = FALSE,
setup_strategy = getOption2("parallelly.makeNodePSOCK.setup_strategy", "parallel"),
action = c("launch", "options"),
verbose = FALSE
)
}
\arguments{
\item{workers}{The hostnames of workers (as a character vector) or the
number of localhost workers (as a positive integer).}
\item{makeNode}{A function that creates a \code{"SOCKnode"} or
\code{"SOCK0node"} object, which represents a connection to a worker.}
\item{port}{The port number of the master used for communicating with all
the workers (via socket connections). If an integer vector of ports, then
a random one among those is chosen. If \code{"random"}, then a random port
is chosen from \code{11000:11999}, or from the range specified by
environment variable \env{R_PARALLELLY_RANDOM_PORTS}.
If \code{"auto"} (default), then the default (single) port is taken from
environment variable \env{R_PARALLEL_PORT}, otherwise \code{"random"} is
used.
\emph{Note, do not use this argument to specify the port number used by
\code{rshcmd}, which typically is an SSH client. Instead, if the SSH daemon
runs on a different port than the default 22, specify the SSH port by
appending it to the hostname, e.g. \code{"remote.server.org:2200"} or via
SSH options \option{-p}, e.g. \code{rshopts = c("-p", "2200")}.}}
\item{\dots}{Optional arguments passed to
\code{makeNode(workers[i], ..., rank = i)} where \code{i = seq_along(workers)}.}
\item{autoStop}{If TRUE, the cluster will be automatically stopped
using \code{\link[parallel:makeCluster]{stopCluster}()} when it is
garbage collected, unless already stopped. See also \code{\link[=autoStopCluster]{autoStopCluster()}}.}
\item{tries, delay}{Maximum number of attempts done to launch each node
with \code{makeNode()} and the delay (in seconds) in-between attempts.
If argument \code{port} specifies more than one port, e.g. \code{port = "random"}
then a random port will be drawn and validated at most \code{tries} times.
Arguments \code{tries} and \code{delay} are used only when
\code{setup_strategy == "sequential"}.}
\item{validate}{If TRUE (default), after the nodes have been created,
they are all validated that they work by inquiring about their session
information, which is saved in attribute \code{session_info} of each node.}
\item{verbose}{If TRUE, informative messages are outputted.}
\item{worker}{The hostname or IP number of the machine where the worker
should run.
Attribute \code{localhost} can be set to TRUE or FALSE to manually indicate
whether \code{worker} is the same as the local host.}
\item{master}{The hostname or IP number of the master / calling machine, as
known to the workers. If NULL (default), then the default is
\code{Sys.info()[["nodename"]]} unless \code{worker} is \emph{localhost} or
\code{revtunnel = TRUE} in case it is \code{"localhost"}.}
\item{connectTimeout}{The maximum time (in seconds) allowed for each socket
connection between the master and a worker to be established (defaults to
2 minutes). \emph{See note below on current lack of support on Linux and
macOS systems.}}
\item{timeout}{The maximum time (in seconds) allowed to pass without the
master and a worker communicate with each other (defaults to 30 days).}
\item{rscript, homogeneous}{The system command for launching \command{Rscript}
on the worker and whether it is installed in the same path as the calling
machine or not. For more details, see below.}
\item{rscript_args}{Additional arguments to \command{Rscript} (as a character
vector). This argument can be used to customize the \R environment of the
workers before they are launched.
For instance, use \code{rscript_args = c("-e", shQuote('setwd("/path/to")'))}
to set the working directory to \file{/path/to} on \emph{all} workers.}
\item{rscript_envs}{A named character vector of environment variables to
set or unset on the workers at startup, e.g.
\code{rscript_envs = c(FOO = "3.14", "HOME", "UNKNOWN", UNSETME = NA_character_)}.
If an element is not named, then the value of that variable will be used as
the name and the value will be the value of \code{Sys.getenv()} for that
variable. Non-existing environment variables will be dropped.
These variables are set using \code{Sys.setenv()}.
A named element with value \code{NA_character_} will cause that variable to be
unset, which is done via \code{Sys.unsetenv()}.}
\item{rscript_libs}{A character vector of \R library paths that will be
used for the library search path of the \R workers. An asterisk
(\code{"*"}) will be resolved to the default \code{.libPaths()} \emph{on the
worker}. That is, to \code{prepend} a folder, instead of replacing the
existing ones, use \code{rscript_libs = c("new_folder", "*")}.
To pass down a non-default library path currently set \emph{on the main \R
session} to the workers, use \code{rscript_libs = .libPaths()}.}
\item{rscript_startup}{An \R expression or a character vector of \R code,
or a list with a mix of these, that will be evaluated on the \R worker
prior to launching the worker's event loop.
For instance, use \code{rscript_startup = 'setwd("/path/to")'}
to set the working directory to \file{/path/to} on \emph{all} workers.}
\item{rscript_sh}{The type of shell used where \code{rscript} is launched,
which should be \code{"sh"} if launched via a POSIX shell and \code{"cmd"} if
launched via an MS Windows shell. This controls how shell command-line
options are quoted, via
\code{\link[base:shQuote]{shQuote(..., type = rscript_sh)}}.
If \code{"auto"} (default), and the cluster node is launched locally, then it
is set to \code{"sh"} or \code{"cmd"} according to the current platform.
\emph{If launched remotely}, then it is set to \code{"sh"} based on the assumption
remote machines typically launch commands via SSH in a POSIX shell.
If the remote machines run MS Windows, use \code{rscript_sh = "cmd"}.}
\item{default_packages}{A character vector or NULL that controls which R
packages are attached on each cluster node during startup. An asterisk
(\code{"*"}) resolves to \code{getOption("defaultPackages")} \emph{on the current machine}.
If NULL, then R's default set of packages is attached.}
\item{methods}{If TRUE (default), then the \pkg{methods} package is also
loaded. This argument exists for legacy reasons due to how
\command{Rscript} worked in R (< 3.5.0).}
\item{socketOptions}{A character string that sets \R option
\code{socketOptions} on the worker.}
\item{useXDR}{If FALSE (default), the communication between master and workers, which is binary, will use little-endian (faster), otherwise big-endian ("XDR"; slower).}
\item{outfile}{Where to direct the \link[base:showConnections]{stdout} and
\link[base:showConnections]{stderr} connection output from the workers.
If NULL, then no redirection of output is done, which means that the
output is relayed in the terminal on the local computer. On Windows, the
output is only relayed when running \R from a terminal but not from a GUI.}
\item{renice}{A numerical 'niceness' (priority) to set for the worker
processes.}
\item{rshcmd, rshopts}{The command (character vector) to be run on the master
to launch a process on another host and any additional arguments (character
vector). These arguments are only applied if \code{machine} is not
\emph{localhost}. For more details, see below.}
\item{user}{(optional) The user name to be used when communicating with
another host.}
\item{revtunnel}{If TRUE, a reverse SSH tunnel is set up for each worker such
that the worker \R process sets up a socket connection to its local port
\code{(port - rank + 1)} which then reaches the master on port \code{port}.
If FALSE, then the worker will try to connect directly to port \code{port} on
\code{master}.
If NA, then TRUE or FALSE is inferred from inspection of \code{rshcmd[1]}.
For more details, see below.}
\item{rshlogfile}{(optional) If a filename, the output produced by the
\code{rshcmd} call is logged to this file, or if TRUE, then it is logged
to a temporary file. The log file name is available as an attribute
as part of the return node object.
\emph{Warning: This only works with SSH clients that support command-line
option \option{-E out.log}}. For example, PuTTY's \command{plink} does
\emph{not} support this option, and any attempts to specify \code{rshlogfile} will
cause the SSH connection to fail.}
\item{rank}{A unique one-based index for each worker (automatically set).}
\item{manual}{If TRUE the workers will need to be run manually. The command
to run will be displayed.}
\item{dryrun}{If TRUE, nothing is set up, but a message suggesting how to
launch the worker from the terminal is outputted. This is useful for
troubleshooting.}
\item{quiet}{If TRUE, then no output will be produced other than that from
using \code{verbose = TRUE}.}
\item{setup_strategy}{If \code{"parallel"} (default), the workers are set up
concurrently. If \code{"sequential"}, they are set up sequentially, one after
the other.}
\item{action}{This is an internal argument.}
}
\value{
An object of class \code{c("RichSOCKcluster", "SOCKcluster", "cluster")}
consisting of a list of \code{"SOCKnode"} or \code{"SOCK0node"} workers (that also
inherit from \code{RichSOCKnode}).
\code{makeNodePSOCK()} returns a \code{"SOCKnode"} or
\code{"SOCK0node"} object representing an established connection to a worker.
}
\description{
The \code{makeClusterPSOCK()} function creates a cluster of \R workers
for parallel processing. These \R workers may be background \R sessions
on the current machine, \R sessions on external machines (local or remote),
or a mix of such. For external workers, the default is to use SSH to
connect to those external machines. This function works similarly to
\code{\link[parallel:makeCluster]{makePSOCKcluster}()} of the
\pkg{parallel} package, but provides additional and more flexible
options for controlling the setup of the system calls that launch the
background \R workers, and how to connect to external machines.
}
\section{Protection against CPU overuse}{
Using too many parallel workers on the same machine may result in
overusing the CPU. For example, if an R script hard codes the
number of parallel workers to 32, as in
\preformatted{cl <- makeClusterPSOCK(32)
}
it will use more than 100\% of the CPU cores when running on a machine with
fewer than 32 CPU cores. For example, on an eight-core machine, this
may run the CPU at 400\% of its capacity, which has a significant
negative effect on the current R process, but also on all other processes
running on the same machine. This is also a problem on systems where R
gets allotted a specific number of CPU cores, which is the case on
high-performance compute (HPC) clusters, but also on other shared systems
that limit user processes via Linux Control Groups (CGroups).
For example, a free account on Posit Cloud is limited to a single
CPU core. Parallelizing with 32 workers when only having access to
a single core, will result in 3200\% overuse and 32 concurrent R
processes competing for this single CPU core.
To protect against CPU overuse by mistake, \code{makeClusterPSOCK()} will
warn when parallelizing above 100\%;
\if{html}{\out{
}}\preformatted{cl <- parallelly:::makeClusterPSOCK(12, dryrun = TRUE)
Warning message:
In checkNumberOfLocalWorkers(workers) :
Careful, you are setting up 12 localhost parallel workers with
only 8 CPU cores available for this R process, which could result
in a 150\% load. The maximum is set to 100\%. Overusing the CPUs has
negative impact on the current R process, but also on all other
processes of yours and others running on the same machine. See
help("parallelly.options", package = "parallelly") for how to
override this threshold
}\if{html}{\out{
}}
Any attempts resulting in more than 300\% overuse will be refused;
\if{html}{\out{
}}\preformatted{> cl <- parallelly:::makeClusterPSOCK(25, dryrun = TRUE)
Error in checkNumberOfLocalWorkers(workers) :
Attempting to set up 25 localhost parallel workers with only
8 CPU cores available for this R process, which could result in
a 312\% load. The maximum is set to 300\%. Overusing the CPUs has
negative impact on the current R process, but also on all other
processes of yours and others running on the same machine. See
help("parallelly.options", package = "parallelly") for how to
override this threshold
}\if{html}{\out{
}}
See \link{parallelly.options} for how to change the default thresholds.
}
\section{Definition of \emph{localhost}}{
A hostname is considered to be \emph{localhost} if it equals:
\itemize{
\item \code{"localhost"},
\item \code{"127.0.0.1"}, or
\item \code{Sys.info()[["nodename"]]}.
}
It is also considered \emph{localhost} if it appears on the same line
as the value of \code{Sys.info()[["nodename"]]} in file \file{/etc/hosts}.
}
\section{Default SSH client and options (arguments \code{rshcmd} and \code{rshopts})}{
Arguments \code{rshcmd} and \code{rshopts} are only used when connecting
to an external host.
The default method for connecting to an external host is via SSH and the
system executable for this is given by argument \code{rshcmd}. The default
is given by option
\code{\link[=parallelly.options]{parallelly.makeNodePSOCK.rshcmd}}.
If that is not
set, then the default is to use \command{ssh} on Unix-like systems,
including macOS as well as Windows 10. On older MS Windows versions, which
do not have a built-in \command{ssh} client, the default is to use
(i) \command{plink} from the \href{https://www.putty.org/}{\command{PuTTY}}
project, and then (ii) the \command{ssh} client that is distributed with
RStudio.
PuTTY puts itself on Windows' system \env{PATH} when installed, meaning this
function will find PuTTY automatically if installed. If not, to manually
specify PuTTY as the SSH client, give the absolute pathname of
\file{plink.exe} in the first element and option \command{-ssh} in the
second as in \code{rshcmd = c("C:/Path/PuTTY/plink.exe", "-ssh")}.
This is because all elements of \code{rshcmd} are individually "shell"
quoted and element \code{rshcmd[1]} must be on the system \env{PATH}.
Furthermore, when running \R from RStudio on Windows, the \command{ssh}
client that is distributed with RStudio will also be considered.
This client, which is from \href{https://en.wikipedia.org/wiki/MinGW}{MinGW}
MSYS, is searched for in the folder given by the \env{RSTUDIO_MSYS_SSH}
environment variable---a variable that is (only) set when running RStudio.
To use this SSH client outside of RStudio, set \env{RSTUDIO_MSYS_SSH}
accordingly.
You can override the default set of SSH clients that are searched for
by specifying them in argument \code{rshcmd} or via option
\code{\link[=parallelly.options]{parallelly.makeNodePSOCK.rshcmd}}
using the format \verb{<...>}, e.g.
\code{rshcmd = c("", "", "")}. See
below for examples.
If no SSH-client is found, an informative error message is produced.
Additional SSH command-line options may be specified via argument \code{rshopts},
which defaults to option \code{parallelly.makeNodePSOCK.rshopts}. For
instance, a private SSH key can be provided as
\code{rshopts = c("-i", "~/.ssh/my_private_key")}. PuTTY users should
specify a PuTTY PPK file, e.g.
\code{rshopts = c("-i", "C:/Users/joe/.ssh/my_keys.ppk")}.
Contrary to \code{rshcmd}, elements of \code{rshopts} are not quoted.
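
For example, a sketch of a full call that uses a private SSH key (the
hostname, username, and key path below are placeholders):

\preformatted{
cl <- makeClusterPSOCK(
  "remote.server.org", user = "alice",
  rshopts = c("-i", "~/.ssh/my_private_key"),
  dryrun = TRUE
)
}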
}
\section{Accessing external machines that prompt for a password}{
\emph{IMPORTANT: With one exception, it is not possible for these
functions to log in and launch \R workers on external machines that require
a password to be entered manually for authentication.}
The only known exception is the PuTTY client on Windows for which one can
pass the password via command-line option \option{-pw}, e.g.
\code{rshopts = c("-pw", "MySecretPassword")}.
Note, depending on whether you run \R in a terminal or via a GUI, you might
not even see the password prompt. It is also likely that you cannot enter
a password, because the connection is set up via a background system call.
The poor man's workaround for setups that require a password is to manually
log into each of the external machines and launch the \R workers by hand.
For this approach, use \code{manual = TRUE} and follow the instructions
which include cut'n'pasteable commands on how to launch the worker from the
external machine.
However, a much more convenient and less tedious method is to set up
key-based SSH authentication between your local machine and the external
machine(s), as explained below.
}
\section{Accessing external machines via key-based SSH authentication}{
The best approach to automatically launch \R workers on external machines
over SSH is to set up key-based SSH authentication. This will allow you
to log into the external machine without having to enter a password.
Key-based SSH authentication is taken care of by the SSH client and not \R.
To configure this, see the manuals of your SSH client or search the web
for "ssh key authentication".
}
\section{Reverse SSH tunneling}{
If SSH is used, which is inferred from \code{rshcmd[1]}, then the default is
to use reverse SSH tunneling (\code{revtunnel = TRUE}), otherwise not
(\code{revtunnel = FALSE}). Using reverse SSH tunneling avoids complications
from otherwise having to configure port forwarding in firewalls, which
often requires a static IP address as well as privileges to edit the
firewall on your outgoing router, something most users don't have.
It also has the advantage of not having to know the internal and/or the
public IP address/hostname of the master.
Yet another advantage is that there is no need for the worker machines to
perform a DNS lookup of the master, something that may not be configured or may be disabled
on some systems, e.g. compute clusters.
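
For example, for a worker launched over SSH, the following two calls are
equivalent, because reverse tunneling is then the default (the hostname is
a placeholder):

\preformatted{
cl <- makeClusterPSOCK("remote.server.org", dryrun = TRUE)
cl <- makeClusterPSOCK("remote.server.org", revtunnel = TRUE, dryrun = TRUE)
}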
}
\section{Argument \code{rscript}}{
If \code{homogeneous} is FALSE, then \code{rscript} defaults to \code{"Rscript"}, i.e. it
is assumed that the \command{Rscript} executable is available on the
\env{PATH} of the worker.
If \code{homogeneous} is TRUE, then \code{rscript} defaults to
\code{file.path(R.home("bin"), "Rscript")}, i.e. it is basically assumed that
the worker and the caller share the same file system and \R installation.
When specified, argument \code{rscript} should be a character vector with one or
more elements. Any asterisk (\code{"*"}) will be resolved to the above default
\code{homogeneous}-dependent \code{Rscript} path.
All elements are automatically shell quoted using \code{\link[base:shQuote]{base::shQuote()}}, except
those that are of the format \verb{<ENVVAR>=<VALUE>} (an environment variable
assignment), that is, the ones matching the
regular expression '\samp{^[[:alpha:]_][[:alnum:]_]*=.*}'.
Another exception is when \code{rscript} inherits from 'AsIs'.
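
For example, the following sketch launches two workers where each
\command{Rscript} call (the asterisk) is prefixed by \command{nice} and an
environment variable assignment (the values are placeholders):

\preformatted{
cl <- makeClusterPSOCK(2L,
  rscript = c("TMPDIR=/path/to/tmp", "nice", "*"),
  dryrun = TRUE
)
}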
}
\section{Default value of argument \code{homogeneous}}{
The default value of \code{homogeneous} is TRUE if and only if either
of the following is fulfilled:
\itemize{
\item \code{worker} is \emph{localhost}
\item \code{revtunnel} is FALSE and \code{master} is \emph{localhost}
\item \code{worker} is neither an IP number nor a fully qualified domain
name (FQDN). A hostname is considered to be a FQDN if it contains
one or more periods
}
In all other cases, \code{homogeneous} defaults to FALSE.
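
For example, a worker given by a plain hostname on the local network falls
under the third rule above, so \code{homogeneous} defaults to TRUE; if that
worker has a different \R setup, override the default explicitly (the
hostname is a placeholder):

\preformatted{
cl <- makeClusterPSOCK("n1", homogeneous = FALSE, dryrun = TRUE)
}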
}
\section{Connection timeout}{
Argument \code{connectTimeout} does \emph{not} work properly on Unix and
macOS due to a limitation in \R itself. For more details on this, please see
R-devel thread 'BUG?: On Linux setTimeLimit() fails to propagate timeout
error when it occurs (works on Windows)' on 2016-10-26
(\url{https://stat.ethz.ch/pipermail/r-devel/2016-October/073309.html}).
When used, the timeout will eventually trigger an error, but not until the
socket-connection timeout, controlled by argument \code{timeout}, itself occurs.
}
\section{Communication timeout}{
If there is no communication between the master and a worker within the
\code{timeout} limit, then the corresponding socket connection will be
closed automatically. This will eventually result in an error in code
trying to access the connection.
This timeout is also what terminates a stray-running parallel cluster-node
process.
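
For example, to lower this timeout from the default of 30 days to 24 hours
for all workers of a cluster (a sketch; the number of workers is arbitrary):

\preformatted{
cl <- makeClusterPSOCK(2L, timeout = 24 * 60 * 60, dryrun = TRUE)
}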
}
\section{Failing to set up local workers}{
When setting up a cluster of localhost workers, that is, workers running
on the same machine as the master \R process, occasionally a connection
to a worker ("cluster node") may fail to be set up.
When this occurs, an informative error message with troubleshooting
suggestions will be produced.
The most common reason for such localhost failures is port
clashes. Retrying will often resolve the problem.
If R stalls when setting up a cluster of local workers, then it might
be that you have a virtual private network (VPN) enabled that is
configured to prevent you from connecting to \code{localhost}. If this is
the case, using the IP address \verb{127.0.0.1} instead of the hostname
\code{"localhost"} for the local workers, e.g. by setting R option
\code{parallelly.localhost.hostname} to \code{"127.0.0.1"},
should solve it (the default is \code{"localhost"}). You can set this option
automatically when R starts by adding it to your \verb{~/.Rprofile} startup
file. Alternatively, set environment variable
\verb{R_PARALLELLY_LOCALHOST_HOSTNAME=127.0.0.1} in your \verb{~/.Renviron} file.
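
For example, a minimal \verb{~/.Rprofile} entry could look like the following
(here assuming option \code{parallelly.localhost.hostname}, i.e. the R option
corresponding to the environment variable above):

\preformatted{
## Use the 127.0.0.1 IP address instead of the "localhost" hostname
## when setting up local cluster workers
options(parallelly.localhost.hostname = "127.0.0.1")
}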
If using \verb{127.0.0.1} did not work around the problem, check your VPN
settings and make sure it allows connections to \code{localhost} or \verb{127.0.0.1}.
}
\section{Failing to set up remote workers}{
A cluster of remote workers runs \R processes on external machines. These
external \R processes are typically launched over SSH to the remote
machine. For this to work, each of the remote machines needs to have
\R installed, preferably of the same version as on the main machine.
It is also required that one can SSH to the
remote machines. Ideally, the SSH connections use authentication based
on public-private SSH keys such that the setup of the remote workers can
be fully automated (see above). If \code{makeClusterPSOCK()} fails to set
up one or more remote \R workers, then an informative error message is
produced.
There are a few reasons for failing to set up remote workers. If this
happens, start by asserting that you can SSH to the remote machine and
launch \file{Rscript} by calling something like:
\preformatted{
{local}$ ssh -l alice remote.server.org
{remote}$ Rscript --version
R scripting front-end version 4.2.2 (2022-10-31)
{remote}$ logout
{local}$
}
When you have confirmed the above to work, then confirm that you can achieve
the same in a single command-line call;
\preformatted{
{local}$ ssh -l alice remote.server.org Rscript --version
R scripting front-end version 4.2.2 (2022-10-31)
{local}$
}
The latter will assert that you have proper startup configuration also for
\emph{non-interactive} shell sessions on the remote machine.
If the remote machines are running on MS Windows, make sure to add argument
\code{rscript_sh = "cmd"} when calling \code{makeClusterPSOCK()}, because the default
is \code{rscript_sh = "sh"}, which assumes that that the remote machines are
Unix-like machines.
Another reason for failing to set up remote workers could be that they are
running an \R version that is not compatible with the version that your main
\R session is running. For instance, if we run R (>= 3.6.0) locally and the
workers run R (< 3.5.0), we will get:
\verb{Error in unserialize(node$con) : error reading from connection}.
This is because R (>= 3.6.0) uses serialization format version 3 by default
whereas R (< 3.5.0) only supports version 2. We can see the version of the
\R workers by adding \code{rscript_args = c("-e", shQuote("getRversion()"))} when
calling \code{makeClusterPSOCK()}.
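
For example, a sketch of such a call, using a placeholder hostname:

\preformatted{
cl <- makeClusterPSOCK(
  "n1.remote.org",
  rscript_args = c("-e", shQuote("getRversion()"))
)
}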
}
\section{For package developers}{
When creating a \code{cluster} object, for instance via \code{parallel::makeCluster()}
or \code{parallelly::makeClusterPSOCK()}, in a package help example, in a package
vignette, or in a package test, we must \emph{remember to stop the cluster at
the end of all examples(*), vignettes, and unit tests}. This is required in
order to not leave behind stray parallel \code{cluster} workers after our main R
session terminates. On Linux and macOS, the operating system often takes
care of terminating the worker processes if we forget, but on MS Windows
such processes will keep running in the background until they time out
themselves, which takes 30 days (sic!).
\verb{R CMD check --as-cran} will indirectly detect these stray worker processes
on MS Windows when running R (>= 4.3.0). They are detected because they
result in placeholder \verb{Rscript<hexcode>} files being left behind in
the temporary directory. The check NOTE to look out for
(only in R (>= 4.3.0)) is:
\if{html}{\out{<div class="sourceCode">}}\preformatted{* checking for detritus in the temp directory ... NOTE
Found the following files/directories:
'Rscript1058267d0c10' 'Rscriptbd4267d0c10'
}\if{html}{\out{</div>}}
Those \verb{Rscript<hexcode>} files are from background R worker processes,
which almost always come from parallel \code{cluster} objects that we forgot to stop
at the end. To stop all \code{cluster} workers, use \code{\link[parallel:makeCluster]{parallel::stopCluster()}}
at the end of your examples(*), vignettes, and package tests for every
\code{cluster} object that is created.
(*) Currently, examples are excluded from the detritus checks.
This was validated with R-devel revision 82991 (2022-10-02).
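
One minimal pattern for examples, vignettes, and package tests is:

\preformatted{
cl <- parallelly::makeClusterPSOCK(2)
y <- parallel::parLapply(cl, 1:10, sqrt)
## Always shut down the cluster workers at the end
parallel::stopCluster(cl)
}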
}
\examples{
## NOTE: Drop 'dryrun = TRUE' below in order to actually connect. Add
## 'verbose = TRUE' if you run into problems and need to troubleshoot.
## ---------------------------------------------------------------
## Section 1. Setting up parallel workers on the local machine
## ---------------------------------------------------------------
## EXAMPLE: Two workers on the local machine
workers <- c("localhost", "localhost")
cl <- makeClusterPSOCK(workers, dryrun = TRUE, quiet = TRUE)
## EXAMPLE: Launch 124 workers on MS Windows 10, where half are
## running on CPU Group #0 and half on CPU Group #1.
## (https://lovickconsulting.com/2021/11/18/
## running-r-clusters-on-an-amd-threadripper-3990x-in-windows-10-2/)
## Temporarily disable CPU load protection for this example
oopts <- options(parallelly.maxWorkers.localhost = Inf)
ncores <- 124
cpu_groups <- c(0, 1)
cl <- lapply(cpu_groups, FUN = function(cpu_group) {
parallelly::makeClusterPSOCK(ncores \%/\% length(cpu_groups),
rscript = I(c(
Sys.getenv("COMSPEC"), "/c", "start", "/B",
"/NODE", cpu_group, "/AFFINITY", "0xFFFFFFFFFFFFFFFE",
"*"
)),
dryrun = TRUE, quiet = TRUE
)
})
## merge the two 62-node clusters into one with 124 nodes
cl <- do.call(c, cl)
## Re-enable CPU load protection
options(oopts)
## ---------------------------------------------------------------
## Section 2. Setting up parallel workers on remote machines
## ---------------------------------------------------------------
## EXAMPLE: Three remote workers
## Three R workers are set up on two remote machines
workers <- c("n1.remote.org", "n2.remote.org", "n1.remote.org")
cl <- makeClusterPSOCK(workers, dryrun = TRUE, quiet = TRUE)
## EXAMPLE: Two remote workers running on MS Windows. Because the
## remote workers are MS Windows machines, we need to use
## rscript_sh = "cmd".
workers <- c("mswin1.remote.org", "mswin2.remote.org")
cl <- makeClusterPSOCK(workers, rscript_sh = "cmd", dryrun = TRUE, quiet = TRUE)
## EXAMPLE: Local and remote workers
## Same setup when the two machines are on the local network and
## have identical software setups
cl <- makeClusterPSOCK(
workers,
revtunnel = FALSE, homogeneous = TRUE,
dryrun = TRUE, quiet = TRUE
)
## EXAMPLE: Three remote workers 'n1', 'n2', and 'n3' that can only be
## accessed via jumphost 'login.remote.org'
workers <- c("n1", "n2", "n1")
cl <- makeClusterPSOCK(
workers,
rshopts = c("-J", "login.remote.org"),
homogeneous = FALSE,
dryrun = TRUE, quiet = TRUE
)
## EXAMPLE: Remote worker running on Linux from MS Windows machine
## Connect to remote Unix machine 'remote.server.org' on port 2200
## as user 'bob' from a MS Windows machine with PuTTY installed.
## Using the explicit special rshcmd = "<putty-plink>" will force
## makeClusterPSOCK() to search for and use the PuTTY plink software,
## preventing it from using other SSH clients on the system search PATH.
cl <- makeClusterPSOCK(
"remote.server.org", user = "bob",
rshcmd = "",
rshopts = c("-P", 2200, "-i", "C:/Users/bobby/.ssh/putty.ppk"),
dryrun = TRUE, quiet = TRUE
)
## EXAMPLE: Remote workers with specific setup
## Setup of remote worker with more detailed control on
## authentication and reverse SSH tunneling
cl <- makeClusterPSOCK(
"remote.server.org", user = "johnny",
## Manual configuration of reverse SSH tunneling
revtunnel = FALSE,
rshopts = c("-v", "-R 11000:gateway:11942"),
master = "gateway", port = 11942,
## Run Rscript nicely and skip any startup scripts
rscript = c("nice", "/path/to/Rscript"),
rscript_args = c("--no-init-file"),
dryrun = TRUE, quiet = TRUE
)
## EXAMPLE: Remote worker running on Linux from RStudio on MS Windows
## Connect to remote Unix machine 'remote.server.org' on port 2200
## as user 'bob' from a MS Windows machine via RStudio's SSH client.
## Using the explicit special rshcmd = "<rstudio-ssh>" will force
## makeClusterPSOCK() to use the SSH client that comes with RStudio,
## preventing it from using other SSH clients on the system search PATH.
cl <- makeClusterPSOCK(
"remote.server.org:2200", user = "bob", rshcmd = "",
dryrun = TRUE, quiet = TRUE
)
## ---------------------------------------------------------------
## Section 3. Setting up parallel workers on HPC cluster
## ---------------------------------------------------------------
## EXAMPLE: 'Grid Engine' is a high-performance compute (HPC) job
## scheduler where one can request compute resources on multiple nodes,
## each running multiple cores. Examples of Grid Engine schedulers are
## Oracle Grid Engine (formerly Sun Grid Engine), Univa Grid Engine,
## and Son of Grid Engine - all commonly referred to as SGE schedulers.
## Each SGE cluster may have its own configuration with their own way
## of requesting parallel slots. Here are a few examples:
##
## ## Request 18 slots on a single host
## qsub -pe smp 18 script.sh
##
## ## Request 18 slots on one or more hosts
## qsub -pe mpi 18 script.sh
##
## This will launch the job script 'script.sh' on one host, while having
## reserved in total 18 slots (CPU cores) on this host and possibly
## other hosts.
##
## This example shows how to use the SGE command 'qrsh' to launch
## 18 parallel workers from R, which is assumed to have been launched
## by 'script.sh'.
cl <- makeClusterPSOCK(
availableWorkers(),
rshcmd = "qrsh", rshopts = c("-inherit", "-nostdin", "-V"),
dryrun = TRUE, quiet = TRUE
)
## EXAMPLE: The 'Fujitsu Technical Computing Suite' is a high-performance
## compute (HPC) job scheduler where one can request compute resources on
## multiple nodes, each running multiple cores. For example,
##
## pjsub -L vnode=3 -L vnode-core=18 script.sh
##
## reserves 18 cores on each of three nodes. The job script runs on the
## first node, with environment variables set such that the other nodes can
## be inferred, resulting in availableWorkers() returning 3 * 18 workers.
## When the HPC environment
## does not support SSH between compute nodes, one can use the 'pjrsh'
## command to launch the parallel workers.
cl <- makeClusterPSOCK(
availableWorkers(),
rshcmd = "pjrsh",
dryrun = TRUE, quiet = TRUE
)
## ---------------------------------------------------------------
## Section 4. Setting up remote parallel workers in the cloud
## ---------------------------------------------------------------
## EXAMPLE: Remote worker running on AWS
## Launching worker on Amazon AWS EC2 running one of the
## Amazon Machine Images (AMI) provided by RStudio
## (https://www.louisaslett.com/RStudio_AMI/)
public_ip <- "1.2.3.4"
ssh_private_key_file <- "~/.ssh/my-private-aws-key.pem"
cl <- makeClusterPSOCK(
## Public IP number of EC2 instance
public_ip,
## User name (always 'ubuntu')
user = "ubuntu",
## Use private SSH key registered with AWS
rshopts = c(
"-o", "StrictHostKeyChecking=no",
"-o", "IdentitiesOnly=yes",
"-i", ssh_private_key_file
),
## Set up .libPaths() for the 'ubuntu' user
## and then install the future package
rscript_startup = quote(local({
p <- Sys.getenv("R_LIBS_USER")
dir.create(p, recursive = TRUE, showWarnings = FALSE)
.libPaths(p)
install.packages("future")
})),
dryrun = TRUE, quiet = TRUE
)
## EXAMPLE: Remote worker running on GCE
## Launching worker on Google Cloud Engine (GCE) running a
## container based VM (with a #cloud-config specification)
public_ip <- "1.2.3.4"
user <- "johnny"
ssh_private_key_file <- "~/.ssh/google_compute_engine"
cl <- makeClusterPSOCK(
## Public IP number of GCE instance
public_ip,
## User name (== SSH key label (sic!))
user = user,
## Use private SSH key registered with GCE
rshopts = c(
"-o", "StrictHostKeyChecking=no",
"-o", "IdentitiesOnly=yes",
"-i", ssh_private_key_file
),
## Launch Rscript inside Docker container
rscript = c(
"docker", "run", "--net=host", "rocker/r-parallel",
"Rscript"
),
dryrun = TRUE, quiet = TRUE
)
## ---------------------------------------------------------------
## Section 5. Parallel workers running locally inside virtual
## machines, Linux containers, etc.
## ---------------------------------------------------------------
## EXAMPLE: Two workers running in Docker on the local machine
## Setup of 2 Docker workers running rocker/r-parallel
cl <- makeClusterPSOCK(
rep("localhost", times = 2L),
## Launch Rscript inside Docker container
rscript = c(
"docker", "run", "--net=host", "rocker/r-parallel",
"Rscript"
),
## IMPORTANT: Because Docker runs inside a virtual machine (VM) on macOS
## and MS Windows (not Linux), when the R worker tries to connect back to
## the default 'localhost' it will fail, because the main R session is
## not running in the VM, but outside on the host. To reach the host on
## macOS and MS Windows, make sure to use master = "host.docker.internal"
master = if (.Platform$OS.type == "unix") NULL else "host.docker.internal",
dryrun = TRUE, quiet = TRUE
)
## EXAMPLE: Two workers running via Linux container 'rocker/r-parallel' from
## DockerHub on the local machine using Apptainer (formerly Singularity)
cl <- makeClusterPSOCK(
rep("localhost", times = 2L),
## Launch Rscript inside Linux container
rscript = c(
"apptainer", "exec", "docker://rocker/r-parallel",
"Rscript"
),
dryrun = TRUE, quiet = TRUE
)
## EXAMPLE: One worker running in udocker on the local machine
## Setup of a single udocker.py worker running rocker/r-parallel
cl <- makeClusterPSOCK(
"localhost",
## Launch Rscript inside Docker container (using udocker)
rscript = c(
"udocker.py", "run", "rocker/r-parallel",
"Rscript"
),
## Manually launch parallel workers
## (need double shQuote():s because udocker.py drops one level)
rscript_args = c(
"-e", shQuote(shQuote("parallel:::.workRSOCK()"))
),
dryrun = TRUE, quiet = TRUE
)
## EXAMPLE: One worker running in Wine for Linux on the local machine
## To install R for MS Windows in Wine, do something like:
## winecfg # In GUI, set 'Windows version' to 'Windows 10'
## wget https://cran.r-project.org/bin/windows/base/R-4.2.3-win.exe
## wine R-4.2.3-win.exe /SILENT
## Prevent packages from being installed to R's system library:
## chmod ugo-w "$HOME/.wine/drive_c/Program Files/R/R-4.2.3/library/"
## Verify it works:
## wine "C:/Program Files/R/R-4.2.3/bin/x64/Rscript.exe" --version
cl <- makeClusterPSOCK(1L,
rscript = c(
## Silence Wine warnings
"WINEDEBUG=fixme-all",
## Don't pass LC_* and R_LIBS* environments from host to Wine
sprintf("\%s=", grep("^(LC_|R_LIBS)", names(Sys.getenv()), value = TRUE)),
"wine",
"C:/Program Files/R/R-4.2.3/bin/x64/Rscript.exe"
),
dryrun = TRUE, quiet = TRUE
)
}
% File: parallelly/man/cpuLoad.Rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/cpuLoad.R
\name{cpuLoad}
\alias{cpuLoad}
\title{Get the Recent CPU Load}
\usage{
cpuLoad()
}
\value{
A named numeric vector with three elements \verb{1min}, \verb{5min}, and
\verb{15min} with non-negative values.
These values represent estimates of the CPU load during the last minute,
the last five minutes, and the last fifteen minutes [1].
An idle system has values close to zero, and a heavily loaded system
has values near \code{parallel::detectCores()}.
If they are unknown, missing values are returned.
}
\description{
Get the Recent CPU Load
}
\details{
This function works only on Unix-like systems with \file{/proc/loadavg}.
}
\examples{
loadavg <- cpuLoad()
print(loadavg)
}
\references{
\enumerate{
\item Linux Load Averages: Solving the Mystery,
Brendan Gregg's Blog, 2017-08-08,
\url{http://www.brendangregg.com/blog/2017-08-08/linux-load-averages.html}
}
}
\keyword{internal}
% File: parallelly/man/isForkedNode.Rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/isForkedNode.R
\name{isForkedNode}
\alias{isForkedNode}
\title{Checks whether or not a Cluster Node Runs in a Forked Process}
\usage{
isForkedNode(node, ...)
}
\arguments{
\item{node}{A cluster node of class \code{SOCKnode} or \code{SOCK0node}.}
\item{\ldots}{Not used.}
}
\value{
(logical) Returns TRUE if the cluster node is running in a
forked child process and FALSE if it is not.
If it cannot be inferred, NA is returned.
}
\description{
Checks whether or not a Cluster Node Runs in a Forked Process
}
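\examples{
\dontshow{if (.Platform$OS.type != "windows") \{}
## A minimal sketch (assumes a Unix-like OS, where forked "FORK"
## clusters are supported): check whether the single node of a
## forked cluster runs in a forked child process
cl <- parallel::makeCluster(1L, type = "FORK")
print(isForkedNode(cl[[1]]))  ## expected: TRUE
parallel::stopCluster(cl)
\dontshow{\}}
}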
% File: parallelly/man/supportsMulticore.Rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/supportsMulticore.R
\name{supportsMulticore}
\alias{supportsMulticore}
\title{Check If Forked Processing ("multicore") is Supported}
\usage{
supportsMulticore(...)
}
\arguments{
\item{\dots}{Internal usage only.}
}
\value{
TRUE if forked processing is supported and not disabled,
otherwise FALSE.
}
\description{
Certain parallelization methods in R rely on \emph{forked} processing, e.g.
\code{parallel::mclapply()}, \code{parallel::makeCluster(n, type = "FORK")},
\code{doMC::registerDoMC()}, and \code{future::plan("multicore")}.
Process forking is done by the operating system and support for it in
\R is restricted to Unix-like operating systems such as Linux, Solaris,
and macOS. R running on Microsoft Windows does not support forked
processing.
In R, forked processing is often referred to as "multicore" processing,
which stems from the 'mc' of the \code{mclapply()} family of functions, which
originally was in a package named \pkg{multicore} which later was
incorporated into the \pkg{parallel} package.
This function checks whether or not forked (aka "multicore") processing
is supported in the current \R session.
}
\section{Support for process forking}{
While R supports forked processing on Unix-like operating systems such as
Linux and macOS, it does not on the Microsoft Windows operating system.
For some R environments it is considered unstable to perform parallel
processing based on \emph{forking}.
This is for example the case when using RStudio, cf.
\href{https://github.com/rstudio/rstudio/issues/2597#issuecomment-482187011}{RStudio Inc. recommends against using forked processing when running R from within the RStudio software}.
This function detects when running in such an environment and returns
\code{FALSE}, even though the underlying operating system supports forked
processing.
A warning will also be produced informing the user about this the first
time this function is called in an \R session.
This warning can be disabled by setting R option
\code{parallelly.supportsMulticore.unstable} or environment variable
\env{R_PARALLELLY_SUPPORTSMULTICORE_UNSTABLE} to \code{"quiet"}.
}
\section{Enable or disable forked processing}{
It is possible to disable forked processing for futures by setting \R
option \code{parallelly.fork.enable} to \code{FALSE}. Alternatively, one can
set environment variable \env{R_PARALLELLY_FORK_ENABLE} to \code{false}.
Analogously, it is possible to override disabled forking by setting one
of these to \code{TRUE}.
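
For example, a minimal sketch of disabling forked processing for the current
\R session:

\preformatted{
options(parallelly.fork.enable = FALSE)
supportsMulticore()  ## FALSE
}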
}
\examples{
## Check whether or not forked processing is supported
supportsMulticore()
}
% File: parallelly/man/killNode.Rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/killNode.R
\name{killNode}
\alias{killNode}
\title{Terminate one or more cluster nodes using process signaling}
\usage{
killNode(x, signal = tools::SIGTERM, ...)
}
\arguments{
\item{x}{cluster or cluster node to terminate.}
\item{signal}{An integer that specifies the signal level to be sent
to the parallel R process.
It's only \code{tools::SIGINT} (2) and \code{tools::SIGTERM} (15) that are
supported on all operating systems (i.e. Unix, macOS, and MS Windows).
All other signals are platform specific, cf. \code{\link[tools:pskill]{tools::pskill()}}.}
\item{\ldots}{Not used.}
}
\value{
TRUE if the signal was successfully applied, FALSE if not, and NA if
signaling is not supported on the specific cluster or node.
\emph{Warning}: With R (< 3.5.0), NA is always returned. This is due to a
bug in R (< 3.5.0), where the signaling result cannot be trusted.
}
\description{
Terminate one or more cluster nodes using process signaling
}
\details{
Note that the preferred way to terminate a cluster is via
\code{\link[parallel:makeCluster]{parallel::stopCluster()}}, because it terminates the cluster nodes
by kindly asking each of them to nicely shut themselves down.
Using \code{killNode()} is a much more severe approach. It abruptly
terminates the underlying R process, possibly without giving the
parallel worker a chance to terminate gracefully. For example,
it might get terminated in the middle of writing to file.
\code{\link[tools:pskill]{tools::pskill()}} is used to send the signal to the R process hosting
the parallel worker.
}
\section{Known limitations}{
This function works only with cluster nodes of class \code{RichSOCKnode},
which were created by \code{\link[=makeClusterPSOCK]{makeClusterPSOCK()}}. It does not work when
using \code{\link[parallel:makeCluster]{parallel::makeCluster()}} and friends.
Currently, it's only possible to send signals to parallel workers, that
is, cluster nodes, that run on the local machine.
If \code{killNode()} is attempted on a remote parallel worker, \code{NA}
is returned and an informative warning is produced.
}
\examples{
\dontshow{if (.Platform$OS.type != "windows" || interactive()) \{}
cl <- makeClusterPSOCK(2)
print(isNodeAlive(cl)) ## [1] TRUE TRUE
res <- killNode(cl)
print(res)
## It might take a moment before the background
## workers are shutdown after having been signaled
Sys.sleep(1.0)
print(isNodeAlive(cl)) ## [1] FALSE FALSE
\dontshow{\}}
}
\seealso{
Use \code{\link[=isNodeAlive]{isNodeAlive()}} to check whether one or more cluster nodes are alive.
}
% File: parallelly/man/parallelly.options.Rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/options.R
\name{parallelly.options}
\alias{parallelly.options}
\alias{parallelly.debug}
\alias{parallelly.availableCores.custom}
\alias{parallelly.availableCores.methods}
\alias{parallelly.availableCores.min}
\alias{parallelly.availableCores.fallback}
\alias{parallelly.availableCores.omit}
\alias{parallelly.availableCores.system}
\alias{parallelly.availableWorkers.methods}
\alias{parallelly.availableWorkers.custom}
\alias{parallelly.fork.enable}
\alias{parallelly.supportsMulticore.disableOn}
\alias{parallelly.supportsMulticore.unstable}
\alias{R_PARALLELLY_AVAILABLECORES_FALLBACK}
\alias{R_PARALLELLY_AVAILABLECORES_OMIT}
\alias{R_PARALLELLY_AVAILABLECORES_SYSTEM}
\alias{R_PARALLELLY_AVAILABLECORES_MIN}
\alias{R_PARALLELLY_FORK_ENABLE}
\alias{R_PARALLELLY_SUPPORTSMULTICORE_DISABLEON}
\alias{R_PARALLELLY_SUPPORTSMULTICORE_UNSTABLE}
\alias{future.availableCores.custom}
\alias{future.availableCores.methods}
\alias{future.availableCores.fallback}
\alias{future.availableCores.system}
\alias{future.availableWorkers.methods}
\alias{future.availableWorkers.custom}
\alias{future.fork.enable}
\alias{future.supportsMulticore.unstable}
\alias{R_FUTURE_AVAILABLECORES_FALLBACK}
\alias{R_FUTURE_AVAILABLECORES_SYSTEM}
\alias{R_FUTURE_FORK_ENABLE}
\alias{R_FUTURE_SUPPORTSMULTICORE_UNSTABLE}
\alias{parallelly.makeNodePSOCK.setup_strategy}
\alias{parallelly.makeNodePSOCK.validate}
\alias{parallelly.makeNodePSOCK.connectTimeout}
\alias{parallelly.makeNodePSOCK.timeout}
\alias{parallelly.makeNodePSOCK.useXDR}
\alias{parallelly.makeNodePSOCK.socketOptions}
\alias{parallelly.makeNodePSOCK.rshcmd}
\alias{parallelly.makeNodePSOCK.rshopts}
\alias{parallelly.makeNodePSOCK.tries}
\alias{parallelly.makeNodePSOCK.tries.delay}
\alias{R_PARALLELLY_MAKENODEPSOCK_SETUP_STRATEGY}
\alias{R_PARALLELLY_MAKENODEPSOCK_VALIDATE}
\alias{R_PARALLELLY_MAKENODEPSOCK_CONNECTTIMEOUT}
\alias{R_PARALLELLY_MAKENODEPSOCK_TIMEOUT}
\alias{R_PARALLELLY_MAKENODEPSOCK_USEXDR}
\alias{R_PARALLELLY_MAKENODEPSOCK_SOCKETOPTIONS}
\alias{R_PARALLELLY_MAKENODEPSOCK_RSHCMD}
\alias{R_PARALLELLY_MAKENODEPSOCK_RSHOPTS}
\alias{R_PARALLELLY_MAKENODEPSOCK_TRIES}
\alias{R_PARALLELLY_MAKENODEPSOCK_TRIES_DELAY}
\title{Options Used by the 'parallelly' Package}
\description{
Below are the \R options and environment variables that are used by the
\pkg{parallelly} package and packages enhancing it.\cr
\cr
\emph{WARNING: Note that the names and the default values of these options may
change in future versions of the package. Please use with care until
further notice.}
}
\section{Backward compatibility with the \pkg{future} package}{
The functions in the \pkg{parallelly} package originate from the
\pkg{future} package. Because they are widely used within the future
ecosystem, we need to keep them backward compatible for quite a long time,
in order for all existing packages and R scripts to have time to adjust.
This also goes for the \R options and the environment variables used to
configure these functions.
All options and environment variables used here have prefixes \code{parallelly.}
and \code{R_PARALLELLY_}, respectively. Because of the backward compatibility
with the \pkg{future} package, the same settings can also be controlled
by options and environment variables with prefixes \code{future.} and
\code{R_FUTURE_} until further notice, e.g. setting option
\code{future.availableCores.fallback=1} is the same as setting option
\code{parallelly.availableCores.fallback=1}, and setting environment
variable \env{R_FUTURE_AVAILABLECORES_FALLBACK=1} is the same as setting
\env{R_PARALLELLY_AVAILABLECORES_FALLBACK=1}.
}
\section{Configuring number of parallel workers}{
The below \R options and environment variables control the default results of \code{\link[=availableCores]{availableCores()}} and \code{\link[=availableWorkers]{availableWorkers()}}.
\describe{
\item{\code{parallelly.availableCores.logical}:}{(logical) The default value of argument \code{logical} as used by \code{availableCores()} and \code{availableWorkers()} for querying \code{parallel::detectCores(logical = logical)}. The default is \code{TRUE} just like it is for \code{\link[parallel:detectCores]{parallel::detectCores()}}.}
\item{\code{parallelly.availableCores.methods}:}{(character vector) Default lookup methods for \code{\link[=availableCores]{availableCores()}}. (Default: \code{c("system", "cgroups.cpuset", "cgroups.cpuquota", "cgroups2.cpu.max", "nproc", "mc.cores", "BiocParallel", "_R_CHECK_LIMIT_CORES_", "Bioconductor", "LSF", "PJM", "PBS", "SGE", "Slurm", "fallback", "custom")})}
\item{\code{parallelly.availableCores.custom}:}{(function) If set and a function, then this function will be called (without arguments) by \code{\link[=availableCores]{availableCores()}} where its value, coerced to an integer, is interpreted as a number of cores.}
\item{\code{parallelly.availableCores.fallback}:}{(integer) Number of cores to use when no core-specifying settings are detected other than \code{"system"} and \code{"nproc"}. This option makes it possible to set the default number of cores returned by \code{availableCores()} / \code{availableWorkers()} yet allow users and schedulers to override it. In multi-tenant environments, such as HPC clusters, it is useful to set environment variable \env{R_PARALLELLY_AVAILABLECORES_FALLBACK} to \code{1}, which will set this option when the package is loaded.}
\item{\code{parallelly.availableCores.system}:}{(integer) Number of "system" cores used instead of what is reported by \code{\link{availableCores}(which = "system")}. This option allows you to effectively override what \code{parallel::detectCores()} reports the system has.}
\item{\code{parallelly.availableCores.min}:}{(integer) The minimum number of cores \code{\link[=availableCores]{availableCores()}} is allowed to return. This can be used to force multiple cores on a single-core environment. If this limit is applied, the names of the returned value are appended with an asterisk (\code{*}). (Default: \code{1L})}
\item{\code{parallelly.availableCores.omit}:}{(integer) Number of cores to set aside, i.e. not to include.}
\item{\code{parallelly.availableWorkers.methods}:}{(character vector) Default lookup methods for \code{\link[=availableWorkers]{availableWorkers()}}. (Default: \code{c("mc.cores", "BiocParallel", "_R_CHECK_LIMIT_CORES_", "Bioconductor", "LSF", "PJM", "PBS", "SGE", "Slurm", "custom", "cgroups.cpuset", "cgroups.cpuquota", "cgroups2.cpu.max", "nproc", "system", "fallback")})}
\item{\code{parallelly.availableWorkers.custom}:}{(function) If set and a function, then this function will be called (without arguments) by \code{\link[=availableWorkers]{availableWorkers()}} where its value, coerced to a character vector, is interpreted as hostnames of available workers.}
}
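
For example, the \code{parallelly.availableCores.custom} option above can be
used as follows (a sketch; the value 2 is arbitrary):

\preformatted{
options(parallelly.availableCores.custom = function() 2L)
parallelly::availableCores(which = "all")
options(parallelly.availableCores.custom = NULL)
}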
}
\section{Configuring forked parallel processing}{
The below \R options and environment variables control the default result of \code{\link[=supportsMulticore]{supportsMulticore()}}.
\describe{
\item{\code{parallelly.fork.enable}:}{(logical) Enable or disable \emph{forked} processing. If \code{FALSE}, multicore futures become sequential futures. If \code{NA}, or not set (the default), then a set of best-practices rules decide whether forked processing should be supported or not.}
\item{\code{parallelly.supportsMulticore.disableOn}:}{(character vector)
Controls in which environments forked processing should be disabled,
because the environment in which R runs is considered unstable for
forked processing.
If this vector contains \code{"rstudio_console"}, it is disabled when
running R in the RStudio Console.
If this vector contains \code{"rstudio_terminal"}, it is disabled when
running R in the RStudio Terminal.
(Default: \code{c("rstudio_console", "rstudio_terminal")})
}
\item{\code{parallelly.supportsMulticore.unstable}:}{(character) Controls whether a warning should be produced or not whenever multicore processing is automatically disabled per settings in option \code{parallelly.supportsMulticore.disableOn}. If \code{"warn"} (default), then an informative warning is produced the first time 'multicore' futures are used. If \code{"quiet"}, no warning is produced.}
}
}
\section{Configuring setup of parallel PSOCK clusters}{
The below \R options and environment variables control the default results of \code{\link[=makeClusterPSOCK]{makeClusterPSOCK()}} and its helper function \code{\link[=makeNodePSOCK]{makeNodePSOCK()}} that creates the individual cluster nodes.
\describe{
\item{\code{parallelly.maxWorkers.localhost}:}{(two numerics) Maximum number of localhost workers, relative to \code{availableCores()}, accepted and allowed. The first element corresponds to the threshold where a warning is produced, the second where an error is produced. Thresholds may be \code{+Inf}. If only the first exists, no error is produced (defaults to \code{c(1.0, 3.0)} corresponding to a maximum of 100\% and 300\% use).}
\item{\code{parallelly.makeNodePSOCK.setup_strategy}:}{(character) If \code{"parallel"} (default), the PSOCK cluster nodes are set up concurrently. If \code{"sequential"}, they are set up sequentially.}
\item{\code{parallelly.makeNodePSOCK.validate}:}{(logical) If TRUE (default), after the nodes have been created, they are all validated that they work by inquiring about their session information, which is saved in attribute \code{session_info} of each node.}
\item{\code{parallelly.makeNodePSOCK.connectTimeout}:}{(numeric) The maximum time (in seconds) allowed for each socket connection between the master and a worker to be established (defaults to 2*60 seconds = 2 minutes).}
\item{\code{parallelly.makeNodePSOCK.timeout}:}{(numeric) The maximum time (in seconds) allowed to pass without the master and a worker communicating with each other (defaults to 30 * 24 * 60 * 60 seconds = 30 days).}
\item{\code{parallelly.makeNodePSOCK.useXDR}:}{(logical) If FALSE (default), the communication between master and workers, which is binary, will use little-endian (faster), otherwise big-endian ("XDR"; slower).}
\item{\code{parallelly.makeNodePSOCK.socketOptions}:}{(character string) If set to another value than \code{"NULL"}, then option \code{socketOptions} is set to this value on the workers during startup. See \code{\link[base:connections]{base::socketConnection()}} for details. (defaults to \code{"no-delay"})}
\item{\code{parallelly.makeNodePSOCK.rshcmd}:}{(character vector) The command to be run on the master to launch a process on another host.}
\item{\code{parallelly.makeNodePSOCK.rshopts}:}{(character vector) Additional command-line options appended to \code{rshcmd}. These arguments are only applied when connecting to non-localhost machines.}
\item{\code{parallelly.makeNodePSOCK.tries}:}{(integer) The maximum number of attempts done to launch each node. Only used when setting up cluster nodes using the sequential strategy.}
\item{\code{parallelly.makeNodePSOCK.tries.delay}:}{(numeric) The number of seconds to wait before trying to launch a cluster node that failed to launch previously. Only used when setting up cluster nodes using the sequential strategy.}
}
}
\section{Options for debugging}{
\describe{
\item{\code{parallelly.debug}:}{(logical) If \code{TRUE}, extensive debug messages are generated. (Default: \code{FALSE})}
}
}
\section{Environment variables that set R options}{
All of the above \R \verb{parallelly.*} options can be set by
corresponding environment variables \env{R_PARALLELLY_*} \emph{when the
\pkg{parallelly} package is loaded}.
For example, if \code{R_PARALLELLY_MAKENODEPSOCK_SETUP_STRATEGY = "sequential"},
then option \code{parallelly.makeNodePSOCK.setup_strategy} is set to
\code{"sequential"} (character).
Similarly, if \code{R_PARALLELLY_AVAILABLECORES_FALLBACK = "1"}, then option
\code{parallelly.availableCores.fallback} is set to \code{1} (integer).
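
For example, a minimal sketch (assuming the \pkg{parallelly} package has not
yet been loaded in the current \R session):

\preformatted{
Sys.setenv(R_PARALLELLY_AVAILABLECORES_FALLBACK = "1")
loadNamespace("parallelly")
getOption("parallelly.availableCores.fallback")  ## 1
}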
}
\examples{
# Set an R option:
options(parallelly.availableCores.fallback = 1L)
}
\seealso{
To set \R options when \R starts (even before the \pkg{parallelly} package is loaded), see the \link[base]{Startup} help page. The \href{https://cran.r-project.org/package=startup}{\pkg{startup}} package provides a friendly mechanism for configuring \R's startup process.
}
% File: parallelly/man/find_rshcmd.Rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utils,cluster.R
\name{find_rshcmd}
\alias{find_rshcmd}
\title{Search for SSH clients on the current system}
\usage{
find_rshcmd(which = NULL, first = FALSE, must_work = TRUE)
}
\arguments{
\item{which}{A character vector specifying which types of SSH clients
to search for. If NULL, a default set of clients supported by the
current platform is searched for.}
\item{first}{If TRUE, the first client found is returned, otherwise
all located clients are returned.}
\item{must_work}{If TRUE and no clients were found, then an error
is produced, otherwise only a warning.}
}
\value{
A named list of pathnames to all located SSH clients.
The pathnames may be followed by zero or more command-line options,
i.e. the elements of the returned list are character vectors of length
one or more.
If \code{first = TRUE}, only the first one is returned.
Attribute \code{version} contains the output from querying the
executable for its version (via command-line option \code{-V}).
}
\description{
Search for SSH clients on the current system
}
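\examples{
## A minimal sketch (find_rshcmd() is an internal function, hence the
## ':::' access): list the SSH clients found on the current system
str(parallelly:::find_rshcmd(must_work = FALSE))
}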
\keyword{internal}