benchmarkme/0000755000176200001440000000000014025731726012535 5ustar liggesusersbenchmarkme/NAMESPACE0000644000176200001440000000410414025711746013753 0ustar liggesusers# Generated by roxygen2: do not edit by hand S3method(plot,ben_results) S3method(print,ram) export(benchmark_io) export(benchmark_matrix_cal) export(benchmark_matrix_fun) export(benchmark_prog) export(benchmark_std) export(bm_matrix_cal_cross_product) export(bm_matrix_cal_lm) export(bm_matrix_cal_manip) export(bm_matrix_cal_power) export(bm_matrix_cal_sort) export(bm_matrix_fun_cholesky) export(bm_matrix_fun_determinant) export(bm_matrix_fun_eigen) export(bm_matrix_fun_fft) export(bm_matrix_fun_inverse) export(bm_parallel) export(bm_prog_escoufier) export(bm_prog_fib) export(bm_prog_gcd) export(bm_prog_hilbert) export(bm_prog_toeplitz) export(bm_read) export(bm_write) export(create_bundle) export(get_available_benchmarks) export(get_byte_compiler) export(get_cpu) export(get_linear_algebra) export(get_platform_info) export(get_r_version) export(get_ram) export(get_sys_details) export(is_blas_optimize) export(plot_past) export(rank_results) export(upload_results) import(Matrix) import(doParallel) import(dplyr) import(foreach) import(parallel) importFrom(benchmarkmeData,is_blas_optimize) importFrom(benchmarkmeData,plot_past) importFrom(benchmarkmeData,select_results) importFrom(compiler,compilePKGS) importFrom(compiler,enableJIT) importFrom(compiler,getCompilerOption) importFrom(foreach,"%dopar%") importFrom(foreach,foreach) importFrom(grDevices,palette) importFrom(grDevices,rgb) importFrom(graphics,abline) importFrom(graphics,grid) importFrom(graphics,legend) importFrom(graphics,par) importFrom(graphics,plot) importFrom(graphics,points) importFrom(graphics,text) importFrom(graphics,title) importFrom(httr,POST) importFrom(httr,upload_file) importFrom(methods,new) importFrom(parallel,detectCores) importFrom(stats,cor) importFrom(stats,fft) importFrom(stats,na.omit) importFrom(stats,rnorm) importFrom(stats,runif) importFrom(tibble,tibble) importFrom(utils,capture.output) importFrom(utils,data) importFrom(utils,globalVariables) importFrom(utils,installed.packages) importFrom(utils,packageDescription) importFrom(utils,read.csv) importFrom(utils,sessionInfo) importFrom(utils,write.csv)
benchmarkme/README.md0000644000176200001440000001357414015757162014027 0ustar liggesusers

# System benchmarking

[![R-CMD-check](https://github.com/csgillespie/benchmarkme/workflows/R-CMD-check/badge.svg)](https://github.com/csgillespie/benchmarkme/actions) [![codecov.io](https://codecov.io/github/csgillespie/benchmarkme/coverage.svg?branch=master)](https://codecov.io/github/csgillespie/benchmarkme?branch=master) [![Downloads](http://cranlogs.r-pkg.org/badges/benchmarkme?color=brightgreen)](https://cran.r-project.org/package=benchmarkme) [![CRAN\_Status\_Badge](http://www.r-pkg.org/badges/version/benchmarkme)](https://cran.r-project.org/package=benchmarkme)

R benchmarking made easy. The package contains a number of benchmarks, heavily based on the benchmarks at <https://mac.R-project.org/benchmarks/R-benchmark-25.R>, for assessing the speed of your system. The package is for R 3.5 and above. In previous versions of R, detecting the effect of the byte compiler was tricky and produced unrealistic comparisons.

## Overview

A straightforward way of speeding up your analysis is to buy a better computer. Modern desktops are relatively cheap, especially compared to user time. However, it isn’t clear if upgrading your computer is worth the cost.
The **benchmarkme** package provides a set of benchmarks to help quantify your system. More importantly, it allows you to compare your timings with *other* systems.

## Getting started

The package is on [CRAN](https://cran.r-project.org/package=benchmarkme) and can be installed in the usual way

``` r
install.packages("benchmarkme")
```

There are two groups of benchmarks:

  - `benchmark_std()`: this benchmarks numerical operations such as loops and matrix operations. The benchmark comprises three separate benchmarks: `prog`, `matrix_fun`, and `matrix_cal`.
  - `benchmark_io()`: this benchmarks reading and writing a 5 or 50 MB csv file.

### The benchmark\_std() function

This benchmarks numerical operations such as loops and matrix operations. The benchmark comprises three separate benchmarks: `prog`, `matrix_fun`, and `matrix_cal`. If you have less than 3GB of RAM (run `get_ram()` to find out how much is available on your system), then you should kill any memory-hungry applications, e.g. Firefox, and set `runs = 1` as an argument.

To benchmark your system, use

``` r
library("benchmarkme")
## Increase runs if you have a higher spec machine
res = benchmark_std(runs = 3)
```

and upload your results

``` r
## You can control exactly what is uploaded. See details below.
upload_results(res)
```

You can compare your results to other users via

``` r
plot(res)
```

### The benchmark\_io() function

This function benchmarks reading and writing a 5 MB or 50 MB csv file (if you have less than 4GB of RAM, reduce the number of `runs` to 1). Run the benchmark using

``` r
res_io = benchmark_io(runs = 3)
upload_results(res_io)
plot(res_io)
```

By default the files are written to a temporary directory, generated by

``` r
tempdir()
```

which depends on the value of

``` r
Sys.getenv("TMPDIR")
```

You can alter this via the `tmpdir` argument. This is useful for comparing hard drive access to a network drive.

``` r
res_io = benchmark_io(tmpdir = "some_other_directory")
```

### Parallel benchmarks

The benchmark functions above have a parallel option: simply specify the number of cores you want to test. For example, to test using four cores

``` r
res_io = benchmark_std(runs = 3, cores = 4)
plot(res_io)
```

## Previous versions of the package

This package was started around 2015. However, multiple changes to the byte compiler over the last few years have made it very difficult to use previous results. So we have to start from scratch. The previous data can be obtained via

``` r
data(past_results, package = "benchmarkmeData")
```

## Machine specs

The package has a few useful functions for extracting system specs:

  - RAM: `get_ram()`
  - CPUs: `get_cpu()`
  - BLAS library: `get_linear_algebra()`
  - Is byte compiling enabled: `get_byte_compiler()`
  - General platform info: `get_platform_info()`
  - R version: `get_r_version()`

The above functions have been tested on a number of systems. If they don’t work on your system, please raise a [GitHub](https://github.com/csgillespie/benchmarkme/issues) issue.

## Uploaded data sets

A summary of the uploaded data sets is available in the [benchmarkmeData](https://github.com/csgillespie/benchmarkme-data) package

``` r
data(past_results_v2, package = "benchmarkmeData")
```

A column of this data set contains the unique identifier returned by the `upload_results()` function.

## What’s uploaded

Two objects are uploaded:

1.  Your benchmarks from `benchmark_std()` or `benchmark_io()`;
2.  A summary of your system information (`get_sys_details()`).
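You can inspect exactly what would be sent before uploading anything: the exported `create_bundle()` function builds the upload bundle locally (a minimal sketch; `res` is the benchmark result from the examples above):

``` r
## Build the bundle locally, without uploading it
bundle = create_bundle(res, filename = NULL)
## Peek at the top-level structure before calling upload_results()
str(bundle, max.level = 1)
```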
The `get_sys_details()` function returns:

  - `Sys.info()`;
  - `get_platform_info()`;
  - `get_r_version()`;
  - `get_ram()`;
  - `get_cpu()`;
  - `get_byte_compiler()`;
  - `get_linear_algebra()`;
  - `installed.packages()`;
  - `Sys.getlocale()`;
  - The `benchmarkme` version number;
  - Unique ID - used to extract results;
  - The current date.

The function `Sys.info()` does include the user and nodename. In the public release of the data, this information will be removed. If you don’t wish to upload certain information, just set the corresponding argument, i.e.

``` r
upload_results(res, args = list(sys_info = FALSE))
```

------------------------------------------------------------------------

Development of this package was supported by [Jumping Rivers](https://www.jumpingrivers.com)

benchmarkme/data/0000755000176200001440000000000013421063453013440 5ustar liggesusers
benchmarkme/data/sample_results.RData0000644000176200001440000000175413421065157017431 0ustar liggesusers [binary RData payload: mangled in this text rendering and omitted]
benchmarkme/man/0000755000176200001440000000000013556526370013315 5ustar liggesusers
benchmarkme/man/get_available_benchmarks.Rd0000644000176200001440000000050413421063453020544 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/benchmarks.R \name{get_available_benchmarks} \alias{get_available_benchmarks} \title{Available benchmarks} \usage{ get_available_benchmarks() } \description{ The function returns the available benchmarks } \examples{ get_available_benchmarks() }
benchmarkme/man/get_linear_algebra.Rd0000644000176200001440000000056713421063453017367 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/get_linear_algebra.R \name{get_linear_algebra} \alias{get_linear_algebra} \title{Get BLAS and LAPACK libraries} \usage{ get_linear_algebra() } \description{ Get BLAS and LAPACK libraries. Extract the blas/lapack from \code{sessionInfo()} }
benchmarkme/man/get_byte_compiler.Rd0000644000176200001440000000140113650323611017260 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/get_byte_compiler.R \name{get_byte_compiler} \alias{get_byte_compiler} \title{Byte compiler status} \usage{ get_byte_compiler() } \value{ An integer indicating if byte compiling has been turned on. See \code{?compiler} for details. } \description{ Attempts to detect if byte compiling or JIT has been used on the package. } \details{ From R 3.5.0, all packages are byte compiled. Before 3.5.0 it was messy. Sometimes the user would turn it on via JIT, or by byte compiling the package. On top of that, R 3.4.X(?) was byte compiled, but R 3.4.Y(?) was not fully optimised! What this means is: don't trust historical results!
} \examples{ ## Detect if you use byte optimization get_byte_compiler() }
benchmarkme/man/benchmarkme-package.Rd0000644000176200001440000000116113650323611017444 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/benchmarkme-package.R \docType{package} \name{benchmarkme-package} \alias{benchmarkme-package} \alias{benchmarkme} \title{The benchmarkme package} \description{ Benchmark your CPU and compare against other CPUs. Also provides functions for obtaining system specifications, such as RAM, CPU type, and R version. } \examples{ ## Benchmark your system and compare \dontrun{ res = benchmark_std() upload_results(res) plot(res) } } \seealso{ \url{https://github.com/csgillespie/benchmarkme} } \author{ \email{csgillespie@gmail.com} } \keyword{package}
benchmarkme/man/bm_parallel.Rd0000644000176200001440000000200313650323611016045 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/benchmark_parallel.R \name{bm_parallel} \alias{bm_parallel} \title{Benchmark in parallel} \usage{ bm_parallel(bm, runs, verbose, cores, ...) } \arguments{ \item{bm}{character name of benchmark function to run from \code{\link{get_available_benchmarks}}} \item{runs}{number of runs of the benchmark to make} \item{verbose}{display messages during benchmarking} \item{cores}{number of cores to benchmark. If cores is specified, the benchmark is also run for cores = 1 to allow for normalisation.} \item{...}{additional arguments to pass to \code{bm}} } \description{ This function runs benchmarks in parallel to test multithreading } \examples{ \dontrun{ bm_parallel("bm_matrix_cal_manip", runs = 3, verbose = TRUE, cores = 2) bm = c("bm_matrix_cal_manip","bm_matrix_cal_power", "bm_matrix_cal_sort", "bm_matrix_cal_cross_product", "bm_matrix_cal_lm") results = lapply(bm, bm_parallel, runs = 5, verbose = TRUE, cores = 2L) } }
benchmarkme/man/benchmark_io.Rd0000644000176200001440000000201613650323611016210 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/benchmark_io.R \name{benchmark_io} \alias{benchmark_io} \alias{bm_read} \alias{bm_write} \title{IO benchmarks} \usage{ benchmark_io( runs = 3, size = c(5, 50), tmpdir = tempdir(), verbose = TRUE, cores = 0L ) bm_read(runs = 3, size = c(5, 50), tmpdir = tempdir(), verbose = TRUE) bm_write(runs = 3, size = c(5, 50), tmpdir = tempdir(), verbose = TRUE) } \arguments{ \item{runs}{Number of times to run the test. Default 3.} \item{size}{a number specifying the approximate size of the generated csv. Must be one of 5 or 50} \item{tmpdir}{a non-empty character vector giving the directory name. Default \code{tempdir()}} \item{verbose}{Default TRUE.} \item{cores}{Default 0 (serial). When cores > 0, the benchmark is run in parallel.} } \description{ Benchmarking reading and writing a csv file (containing random numbers). The tests are essentially \code{write.csv(x)} and \code{read.csv(...)} where \code{x} is a data frame of \code{size} MB.
}
benchmarkme/man/bm_matrix_cal_manip.Rd0000644000176200001440000000274313650323611017563 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/benchmark_matrix_calculations.R, R/benchmarks.R \name{bm_matrix_cal_manip} \alias{bm_matrix_cal_manip} \alias{bm_matrix_cal_power} \alias{bm_matrix_cal_sort} \alias{bm_matrix_cal_cross_product} \alias{bm_matrix_cal_lm} \alias{benchmark_matrix_cal} \title{Matrix calculation benchmarks} \usage{ bm_matrix_cal_manip(runs = 3, verbose = TRUE) bm_matrix_cal_power(runs = 3, verbose = TRUE) bm_matrix_cal_sort(runs = 3, verbose = TRUE) bm_matrix_cal_cross_product(runs = 3, verbose = TRUE) bm_matrix_cal_lm(runs = 3, verbose = TRUE) benchmark_matrix_cal(runs = 3, verbose = TRUE, cores = 0L) } \arguments{ \item{runs}{Number of times to run the test. Default 3.} \item{verbose}{Default TRUE.} \item{cores}{Default 0 (serial). When cores > 0, the benchmark is run in parallel.} } \description{ A collection of matrix benchmark functions aimed at assessing the calculation speed. \itemize{ \item Creation, transp., deformation of a 2500x2500 matrix. \item 2500x2500 normally distributed random matrix ^1000. \item Sorting of 7,000,000 random values. \item 2500x2500 cross-product matrix (b = a' * a). \item Linear regr. over a 3000x3000 matrix. } These benchmarks have been developed by many authors. See http://r.research.att.com/benchmarks/R-benchmark-25.R for a complete history. The function \code{benchmark_matrix_cal()} runs the five \code{bm} functions. } \references{ http://r.research.att.com/benchmarks/R-benchmark-25.R }
benchmarkme/man/get_platform_info.Rd0000644000176200001440000000043113421063453017265 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/get_platform_info.R \name{get_platform_info} \alias{get_platform_info} \title{Platform information} \usage{ get_platform_info() } \description{ This function just returns the output of \code{.Platform} }
benchmarkme/man/benchmark_std.Rd0000644000176200001440000000171213650323611016375 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/benchmark_std.R \name{benchmark_std} \alias{benchmark_std} \title{Run standard benchmarks} \usage{ benchmark_std(runs = 3, verbose = TRUE, cores = 0L) } \arguments{ \item{runs}{Number of times to run the test. Default 3.} \item{verbose}{Default TRUE.} \item{cores}{Default 0 (serial). When cores > 0, the benchmark is run in parallel.} } \description{ This function runs a set of standard benchmarks, which should be suitable for most machines. It runs a collection of matrix benchmark functions \itemize{ \item \code{benchmark_prog} \item \code{benchmark_matrix_cal} \item \code{benchmark_matrix_fun} } To view the list of benchmarks, see \code{get_available_benchmarks}. } \details{ Setting \code{cores} equal to 1 is useful for assessing the impact of the parallel computing overhead. } \examples{ ## Benchmark your system \dontrun{ res = benchmark_std(3) ## Plot results plot(res) } }
benchmarkme/man/plot.ben_results.Rd0000644000176200001440000000176213650323611017100 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/plot_results.R \name{plot.ben_results} \alias{plot.ben_results} \title{Compare results to past tests} \usage{ \method{plot}{ben_results}( x, test_group = unique(x$test_group), blas_optimize = is_blas_optimize(x), log = "y", ...
) } \arguments{ \item{x}{The output from a \code{benchmark_*} call.} \item{test_group}{Default \code{unique(x$test_group)}. The default behaviour is to select the groups from your benchmark results.} \item{blas_optimize}{Logical. The default behaviour is to compare your results with results that use the same \code{blas_optimize} setting. To use all results, set to \code{NULL}.} \item{log}{By default the y axis is plotted on the log scale. To change, set the argument equal to the empty string, \code{""}.} \item{...}{Arguments to be passed to other downstream methods.} } \description{ Plot benchmark results against past results. } \examples{ data(sample_results) plot(sample_results, blas_optimize = NULL) }
benchmarkme/man/bm_matrix_fun_fft.Rd0000644000176200001440000000264513650323611017270 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/benchmark_matrix_functions.R, R/benchmarks.R \name{bm_matrix_fun_fft} \alias{bm_matrix_fun_fft} \alias{bm_matrix_fun_eigen} \alias{bm_matrix_fun_determinant} \alias{bm_matrix_fun_cholesky} \alias{bm_matrix_fun_inverse} \alias{benchmark_matrix_fun} \title{Matrix function benchmarks} \usage{ bm_matrix_fun_fft(runs = 3, verbose = TRUE) bm_matrix_fun_eigen(runs = 3, verbose = TRUE) bm_matrix_fun_determinant(runs = 3, verbose = TRUE) bm_matrix_fun_cholesky(runs = 3, verbose = TRUE) bm_matrix_fun_inverse(runs = 3, verbose = TRUE) benchmark_matrix_fun(runs = 3, verbose = TRUE, cores = 0L) } \arguments{ \item{runs}{Number of times to run the test. Default 3.} \item{verbose}{Default TRUE.} \item{cores}{Default 0 (serial). When cores > 0, the benchmark is run in parallel.} } \description{ A collection of matrix benchmark functions \itemize{ \item FFT over 2,500,000 random values. \item Eigenvalues of a 640x640 random matrix. \item Determinant of a 2500x2500 random matrix. \item Cholesky decomposition of a 3000x3000 matrix. \item Inverse of a 1600x1600 random matrix. } These benchmarks have been developed by many authors. See http://r.research.att.com/benchmarks/R-benchmark-25.R for a complete history. The function \code{benchmark_matrix_fun()} runs the five \code{bm} functions. } \references{ http://r.research.att.com/benchmarks/R-benchmark-25.R }
benchmarkme/man/sample_results.Rd0000644000176200001440000000043013650323611016631 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/data_help_files.R \docType{data} \name{sample_results} \alias{sample_results} \title{Sample benchmarking results} \format{ A data frame } \description{ Sample benchmark results. Used in the vignette. }
benchmarkme/man/reexports.Rd0000644000176200001440000000100313621070511015613 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/plot_results.R, R/rank_results.R \docType{import} \name{reexports} \alias{reexports} \alias{plot_past} \alias{is_blas_optimize} \title{Objects exported from other packages} \keyword{internal} \description{ These objects are imported from other packages. Follow the links below to see their documentation.
\describe{ \item{benchmarkmeData}{\code{\link[benchmarkmeData]{is_blas_optimize}}, \code{\link[benchmarkmeData]{plot_past}}} }}
benchmarkme/man/get_ram.Rd0000644000176200001440000000146213650542210015207 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/get_ram.R \name{get_ram} \alias{get_ram} \title{Get the amount of RAM} \usage{ get_ram() } \description{ Attempt to extract the amount of RAM on the current machine. This is OS specific: \itemize{ \item Linux: \code{/proc/meminfo} \item Apple: \code{system_profiler -detailLevel mini} \item Windows: First tries \code{grep MemTotal /proc/meminfo} then falls back to \code{wmic MemoryChip get Capacity} \item Solaris: \code{prtconf} } A value of \code{NA} is returned if it isn't possible to determine the amount of RAM. } \examples{ ## Return (and pretty print) the amount of RAM get_ram() ## Display using iec units print(get_ram(), unit_system = "iec") } \references{ The \code{print.bytes} function was taken from the \pkg{pryr} package. }
benchmarkme/man/bm_prog_fib.Rd0000644000176200001440000000250113650323611016033 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/benchmark_programming.R, R/benchmarks.R \name{bm_prog_fib} \alias{bm_prog_fib} \alias{bm_prog_hilbert} \alias{bm_prog_gcd} \alias{bm_prog_toeplitz} \alias{bm_prog_escoufier} \alias{benchmark_prog} \title{Programming benchmarks} \usage{ bm_prog_fib(runs = 3, verbose = TRUE) bm_prog_hilbert(runs = 3, verbose = TRUE) bm_prog_gcd(runs = 3, verbose = TRUE) bm_prog_toeplitz(runs = 3, verbose = TRUE) bm_prog_escoufier(runs = 3, verbose = TRUE) benchmark_prog(runs = 3, verbose = TRUE, cores = 0L) } \arguments{ \item{runs}{Number of times to run the test. Default 3.} \item{verbose}{Default TRUE.} \item{cores}{Default 0 (serial). When cores > 0, the benchmark is run in parallel.} } \description{ A collection of matrix programming benchmark functions \itemize{ \item 3,500,000 Fibonacci numbers calculation (vector calc). \item Creation of a 3500x3500 Hilbert matrix (matrix calc). \item Greatest common divisors of 1,000,000 pairs (recursion). \item Creation of a 3000x3000 Toeplitz matrix (loops). \item Escoufier's method on a 60x60 matrix (mixed). } These benchmarks have been developed by many authors. See http://r.research.att.com/benchmarks/R-benchmark-25.R for a complete history. The function \code{benchmark_prog()} runs the five \code{bm} functions. }
benchmarkme/man/rank_results.Rd0000644000176200001440000000120113650323611016300 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/rank_results.R \name{rank_results} \alias{rank_results} \title{Benchmark rankings} \usage{ rank_results( results, blas_optimize = is_blas_optimize(results), verbose = TRUE ) } \arguments{ \item{results}{Benchmark results. Probably obtained from \code{benchmark_std()} or \code{benchmark_io()}.} \item{blas_optimize}{Logical. The default behaviour is to compare your results with results that use the same \code{blas_optimize} setting. To use all results, set to \code{NULL}.} \item{verbose}{Default TRUE.} } \description{ Comparison with past results. }
benchmarkme/man/get_cpu.Rd0000644000176200001440000000100113650323611015210 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/get_cpu.R \name{get_cpu} \alias{get_cpu} \title{CPU Description} \usage{ get_cpu() } \description{ Attempt to extract the CPU model on the current host.
This is OS specific: \itemize{ \item Linux: \code{/proc/cpuinfo} \item Apple: \code{sysctl -n} \item Solaris: Not implemented. \item Windows: \code{wmic cpu} } A value of \code{NA} is returned if it isn't possible to obtain the CPU. } \examples{ ## Return the machine CPU get_cpu() }
benchmarkme/man/get_sys_details.Rd0000644000176200001440000000307613650323611016760 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/get_sys_details.R \name{get_sys_details} \alias{get_sys_details} \title{General system information} \usage{ get_sys_details( sys_info = TRUE, platform_info = TRUE, r_version = TRUE, ram = TRUE, cpu = TRUE, byte_compiler = TRUE, linear_algebra = TRUE, locale = TRUE, installed_packages = TRUE, machine = TRUE ) } \arguments{ \item{sys_info}{Default \code{TRUE}.} \item{platform_info}{Default \code{TRUE}.} \item{r_version}{Default \code{TRUE}.} \item{ram}{Default \code{TRUE}.} \item{cpu}{Default \code{TRUE}.} \item{byte_compiler}{Default \code{TRUE}.} \item{linear_algebra}{Default \code{TRUE}.} \item{locale}{Default \code{TRUE}} \item{installed_packages}{Default \code{TRUE}.} \item{machine}{Default \code{TRUE}} } \value{ A list } \description{ The \code{get_sys_details()} function returns general system level information as a list. The function parameters control the information to upload. If a parameter is set to \code{FALSE}, an \code{NA} is uploaded instead. Each element of the list contains the output from: \itemize{ \item \code{Sys.info()}; \item \code{get_platform_info()}; \item \code{get_r_version()}; \item \code{get_ram()}; \item \code{get_cpu()}; \item \code{get_byte_compiler()}; \item \code{get_linear_algebra()}; \item \code{Sys.getlocale()} \item \code{installed.packages()}; \item \code{.Machine} \item The package version number; \item Unique ID - used to extract results; \item The current date. } } \examples{ ## Returns all details about your machine get_sys_details() }
benchmarkme/man/upload_results.Rd0000644000176200001440000000216313650323611016637 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/upload_results.R \name{create_bundle} \alias{create_bundle} \alias{upload_results} \title{Upload benchmark results} \usage{ create_bundle(results, filename = NULL, args = NULL, id_prefix = "") upload_results( results, url = "http://www.mas.ncl.ac.uk/~ncsg3/form.php", args = NULL, id_prefix = "" ) } \arguments{ \item{results}{Benchmark results. Probably obtained from \code{benchmark_std()} or \code{benchmark_io()}.} \item{filename}{default \code{NULL}. A character vector giving where to store the results (in an .rds file). If \code{NULL}, results are not saved.} \item{args}{Default \code{NULL}. A list of arguments to be passed to \code{get_sys_details()}.} \item{id_prefix}{Character string to prefix the benchmark id. Makes it easier to retrieve past results.} \item{url}{The location where the results are uploaded.} } \description{ This function uploads the benchmarking results. These results will then be incorporated in future versions of the package.
} \examples{ ## Run benchmarks \dontrun{ res = benchmark_std() upload_results(res) } } benchmarkme/man/get_r_version.Rd0000644000176200001440000000034713421063453016442 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/get_r_version.R \name{get_r_version} \alias{get_r_version} \title{R version} \usage{ get_r_version() } \description{ Returns \code{unclass(R.version)} } benchmarkme/DESCRIPTION0000644000176200001440000000221314025731726014241 0ustar liggesusersType: Package Package: benchmarkme Title: Crowd Sourced System Benchmarks Version: 1.0.7 Authors@R: person(given = "Colin", family = "Gillespie", role = c("aut", "cre"), email = "csgillespie@gmail.com", comment = c(ORCID = "0000-0003-1787-0275")) Maintainer: Colin Gillespie Description: Benchmark your CPU and compare against other CPUs. Also provides functions for obtaining system specifications, such as RAM, CPU type, and R version. License: GPL-2 | GPL-3 URL: https://github.com/csgillespie/benchmarkme BugReports: https://github.com/csgillespie/benchmarkme/issues Depends: R (>= 3.5.0) Imports: benchmarkmeData (>= 1.0.4), compiler, doParallel, dplyr, foreach, graphics, httr, Matrix, methods, parallel, tibble, utils Suggests: covr, DT, ggplot2, knitr, RcppZiggurat, rmarkdown, testthat VignetteBuilder: knitr Encoding: UTF-8 LazyData: TRUE RoxygenNote: 7.1.1 NeedsCompilation: no Packaged: 2021-03-21 18:47:15 UTC; ncsg3 Author: Colin Gillespie [aut, cre] () Repository: CRAN Date/Publication: 2021-03-21 21:00:06 UTC benchmarkme/build/0000755000176200001440000000000014025712263013627 5ustar liggesusersbenchmarkme/build/vignette.rds0000644000176200001440000000033514025712263016167 0ustar liggesusersmQ0 ?x x /xf&#ěO.#&گ#&1-ˇe@ޅ}EJ&ӅQZ1XolYXFe'.R#\**@_l ? 0) expect_output(benchmarkme:::print.ram(1.63e+10), regexp = "GB") expect_output(benchmarkme:::print.ram(10), regexp = "B") expect_equal(benchmarkme:::to_bytes(c(16.4, "GB")), 1.64e+10) } ) benchmarkme/tests/testthat/test-benchmark_std.R0000644000176200001440000000060713421063453021440 0ustar liggesuserstest_that("Test benchmark_std", { skip_on_cran() skip_on_travis() expect_error(benchmark_std(runs = 0)) res = benchmark_std(runs = 1) #res2 = benchmark_std(runs = 1, cores=2) # res3 = benchmark_std(runs = 5) # res4 = benchmark_std(runs = 5, cores = 2) expect_equal(nrow(res), 15) #expect_equal(nrow(res2), 30) expect_equal(ncol(res), 6) #expect_equal(ncol(res2), 6) }) benchmarkme/tests/testthat/test-upload_results.R0000644000176200001440000000067413556526370021720 0ustar liggesuserstest_that("Test upload_results", { skip_on_cran() ## Upload empty results that are removed on the server. x = upload_results(NULL) ## Results ID should have date. 
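## (get_sys_details() builds the id as paste0(Sys.Date(), "-", sample(1e8, 1)),
## so today's date should appear exactly once in the returned id.)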
expect_equal(grep(Sys.Date(), x), 1) fname = tempfile(fileext = ".rds") res = create_bundle(NULL, fname) expect_equal(res, readRDS(fname)) res = create_bundle(NULL, fname, args = list(sys_info = FALSE)) expect_true(is.na(res$sys_info)) unlink(fname) } ) benchmarkme/tests/testthat/test-timings.R0000644000176200001440000000025713556526370020322 0ustar liggesuserstest_that("Test Timing mean", { skip_on_cran() data("sample_results", package = "benchmarkme") expect_true(is.character(benchmarkme:::timings_mean(sample_results))) } ) benchmarkme/tests/testthat/test-plot_results.R0000644000176200001440000000030213556526370021376 0ustar liggesuserstest_that("Test plot_past", { skip_on_cran() tmp_env = new.env() data(sample_results, envir = tmp_env, package = "benchmarkme") res = tmp_env$sample_results expect_null(plot(res)) } ) benchmarkme/tests/testthat/test-sys_details.R0000644000176200001440000000037313556526370021172 0ustar liggesuserstest_that("Test Sys Details", { skip_on_cran() sys = get_sys_details(sys_info = FALSE, installed_packages = FALSE) expect_equal(length(sys), 13) expect_equal(is.na(sys$sys_info), TRUE) expect_equal(is.na(sys$installed_packages), TRUE) } ) benchmarkme/tests/testthat/test-rnorm.R0000644000176200001440000000014113421063453017762 0ustar liggesuserstest_that("Test Rnorm", { skip_on_cran() expect_true(is.numeric(benchmarkme:::Rnorm(1))) } ) benchmarkme/tests/testthat/test-ranking.R0000644000176200001440000000036113556526370020275 0ustar liggesuserstest_that("Test ranking", { skip_on_cran() tmp_env = new.env() data(sample_results, envir = tmp_env, package = "benchmarkme") res = tmp_env$sample_results res = res[res$test_group == "prog", ] expect_gt(rank_results(res), 0) } ) benchmarkme/tests/testthat/test-benchmark_io.R0000644000176200001440000000051114015757162021256 0ustar liggesuserstest_that("Test benchmark_io", { skip_on_cran() library("benchmarkme") expect_error(benchmark_io(size = 1)) res = benchmark_io(runs = 1, size = 5) res2 = benchmark_io(runs = 1, size = 5, cores = 2) expect_equal(nrow(res), 2) expect_equal(ncol(res), 6) expect_equal(nrow(res2), 2) expect_equal(ncol(res2), 6) }) benchmarkme/tests/testthat/test-cpu.R0000644000176200001440000000020013421063453017410 0ustar liggesuserstest_that("Test CPU", { skip_on_cran() cpu = get_cpu() expect_equal(length(cpu), 3) expect_equal(anyNA(cpu), FALSE) } ) benchmarkme/tests/testthat/test-bm_parallel.R0000644000176200001440000000066013650265403021110 0ustar liggesuserstest_that("Test bm_parallel", { skip_on_cran() skip_on_travis() res = bm_parallel("bm_matrix_cal_power", runs = 2, verbose = TRUE, cores = 1) expect_equal(nrow(res), 2) expect_equal(ncol(res), 6) expect_true(all(res$cores == 1)) res = bm_parallel(bm = "bm_matrix_cal_power", runs = 2, verbose = TRUE, cores = 1:2) expect_equal(nrow(res), 4) expect_equal(ncol(res), 6) expect_equal(c(1, 1, 2, 2), res$cores) }) benchmarkme/tests/testthat/test-platform_info.R0000644000176200001440000000015013421063453021464 0ustar liggesuserstest_that("Test Platform Info", { skip_on_cran() expect_equal(get_platform_info(), .Platform) } ) benchmarkme/tests/testthat/test-datatable.R0000644000176200001440000000046313421074062020553 0ustar liggesusers# test_that("Test datatable", { # skip_on_cran() # tmp_env = new.env() # data(sample_results, envir = tmp_env, package="benchmarkme") # res = tmp_env$sample_results # expect_warning({ # data_table = get_datatable(res, test_group = "prog") # }) # expect_true(is.list(data_table)) # } # ) 
benchmarkme/tests/testthat.R0000644000176200001440000000005613421072245015654 0ustar liggesuserslibrary("testthat") test_check("benchmarkme")
benchmarkme/vignettes/0000755000176200001440000000000014025712263014540 5ustar liggesusers
benchmarkme/vignettes/a_introduction.Rmd0000644000176200001440000001343114015757162020235 0ustar liggesusers---
title: "Crowd sourced benchmarks"
author: "Colin Gillespie"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Crowd sourced benchmarks}
  %\VignetteEngine{knitr::rmarkdown}
  \usepackage[utf8]{inputenc}
---

```{r echo=FALSE, purl=FALSE}
library("benchmarkme")
data(sample_results, package = "benchmarkme")
res = sample_results
```

# System benchmarking

R benchmarking made easy. The package contains a number of benchmarks, heavily based on the benchmarks at https://mac.R-project.org/benchmarks/R-benchmark-25.R, for assessing the speed of your system.

## Overview

A straightforward way of speeding up your analysis is to buy a better computer. Modern desktops are relatively cheap, especially compared to user time. However, it isn't clear if upgrading your computer is worth the cost. The **benchmarkme** package provides a set of benchmarks to help quantify your system. More importantly, it allows you to compare your timings with _other_ systems.

## Getting started

The package is on [CRAN](https://cran.r-project.org/package=benchmarkme) and can be installed in the usual way

```{r, eval=FALSE}
install.packages("benchmarkme")
```

There are two groups of benchmarks:

* `benchmark_std()`: this benchmarks numerical operations such as loops and matrix operations. The benchmark comprises three separate benchmarks: `prog`, `matrix_fun`, and `matrix_cal`.
* `benchmark_io()`: this benchmarks reading and writing a 5 or 50 MB csv file.

### The benchmark_std() function

This benchmarks numerical operations such as loops and matrix operations. The benchmark comprises three separate benchmarks: `prog`, `matrix_fun`, and `matrix_cal`. If you have less than 3GB of RAM (run `get_ram()` to find out how much is available on your system), then you should kill any memory-hungry applications, e.g. Firefox, and set `runs = 1` as an argument.

To benchmark your system, use

```{r eval=FALSE}
library("benchmarkme")
## Increase runs if you have a higher spec machine
res = benchmark_std(runs = 3)
```

and upload your results

```{r, eval=FALSE}
## You can control exactly what is uploaded. See details below.
upload_results(res)
```

You can compare your results to other users via

```{r eval=FALSE}
plot(res)
```

### The benchmark_io() function

This function benchmarks reading and writing a 5 MB or 50 MB csv file (if you have less than 4GB of RAM, reduce the number of `runs` to 1). Run the benchmark using

```{r eval=FALSE}
res_io = benchmark_io(runs = 3)
upload_results(res_io)
plot(res_io)
```

By default the files are written to a temporary directory, generated by

```{r eval=FALSE}
tempdir()
```

which depends on the value of

```{r eval=FALSE}
Sys.getenv("TMPDIR")
```

You can alter this via the `tmpdir` argument. This is useful for comparing hard drive access to a network drive.

```{r eval=FALSE}
res_io = benchmark_io(tmpdir = "some_other_directory")
```

### Parallel benchmarks

The benchmark functions above have a parallel option: simply specify the number of cores you want to test.
For example, to test using four cores

```{r eval=FALSE}
res_io = benchmark_std(runs = 3, cores = 4)
```

The process for the parallel benchmarks of the pseudo function `benchmark_x(cores = n)` is:

- initialise the parallel environment
- start the timer
- run job x on cores 1, 2, ..., n simultaneously
- when __all__ jobs finish, stop the timer
- stop the parallel environment

This procedure is repeated `runs` times.

## Previous versions of this package

This package was started around 2015. However, multiple changes to the byte compiler over the last few years have made it very difficult to use previous results. So we have to start from scratch. The previous data can be obtained via

```{r}
data(past_results, package = "benchmarkmeData")
```

## Machine specs

The package has a few useful functions for extracting system specs:

* RAM: `get_ram()`
* CPUs: `get_cpu()`
* BLAS library: `get_linear_algebra()`
* Is byte compiling enabled: `get_byte_compiler()`
* General platform info: `get_platform_info()`
* R version: `get_r_version()`

The above functions have been tested on a number of systems. If they don't work on your system, please raise a [GitHub](https://github.com/csgillespie/benchmarkme/issues) issue.

## Uploaded data sets

A summary of the uploaded data sets is available in the [benchmarkmeData](https://github.com/csgillespie/benchmarkme-data) package

```{r}
data(past_results_v2, package = "benchmarkmeData")
```

A column of this data set contains the unique identifier returned by the `upload_results()` function.

## What's uploaded

Two objects are uploaded:

1. Your benchmarks from `benchmark_std()` or `benchmark_io()`;
1. A summary of your system information (`get_sys_details()`).

The `get_sys_details()` function returns:

* `Sys.info()`;
* `get_platform_info()`;
* `get_r_version()`;
* `get_ram()`;
* `get_cpu()`;
* `get_byte_compiler()`;
* `get_linear_algebra()`;
* `installed.packages()`;
* `Sys.getlocale()`;
* The `benchmarkme` version number;
* Unique ID - used to extract results;
* The current date.

The function `Sys.info()` does include the user and nodename. If you don't wish to upload certain information, just set the corresponding argument, i.e.
```{r eval=FALSE} upload_results(res, args = list(sys_info = FALSE)) ``` --- Development of this package was supported by [Jumping Rivers](https://www.jumpingrivers.com) benchmarkme/R/0000755000176200001440000000000014025711520012724 5ustar liggesusersbenchmarkme/R/get_linear_algebra.R0000644000176200001440000000043213650265433016646 0ustar liggesusers#' Get BLAS and LAPACK libraries #' Extract the the blas/lapack from \code{sessionInfo()} #' #' @importFrom utils sessionInfo #' @export get_linear_algebra = function() { s = sessionInfo() blas = s$BLAS lapack = s$LAPACK return(list(blas = blas, lapack = lapack)) } benchmarkme/R/clean_ram_output.R0000644000176200001440000000205313650265431016420 0ustar liggesusersto_bytes = function(value) { num = as.numeric(value[1]) units = value[2] power = match(units, c("kB", "MB", "GB", "TB")) if (!is.na(power)) return(num * 1000 ^ power) power = match(units, c("Kilobytes", "Megabytes", "Gigabytes", "Terabytes")) if (!is.na(power)) return(num * 1000 ^ power) num } clean_ram = function(ram, os) { if (length(ram) > 1 || is.na(ram)) return(NA) if (length(grep("^linux", os))) { clean_ram = clean_linux_ram(ram) } else if (length(grep("^darwin", os))) { clean_ram = clean_darwin_ram(ram) # nocov } else if (length(grep("^solaris", os))) { clean_ram = clean_solaris_ram(ram) # nocov } else { clean_ram = clean_win_ram(ram) # nocov } unname(clean_ram) } clean_linux_ram = function(ram) { as.numeric(ram) * 1024 } clean_darwin_ram = function(ram) { as.numeric(ram) } clean_solaris_ram = function(ram) { ram = remove_white(ram) to_bytes(unlist(strsplit(ram, " "))[3:4]) } clean_win_ram = function(ram) { ram = remove_white(ram) ram = ram[nchar(ram) > 0] sum(as.numeric(ram)) } benchmarkme/R/get_cpu.R0000644000176200001440000000375214025711557014516 0ustar liggesusers#' CPU Description #' #' Attempt to extract the CPU model on the current host. This is OS #' specific: #' \itemize{ #' \item Linux: \code{/proc/cpuinfo} #' \item Apple: \code{sysctl -n} #' \item Solaris: Not implemented. #' \item Windows: \code{wmic cpu} #' } #' A value of \code{NA} is return if it isn't possible to obtain the CPU. #' @importFrom parallel detectCores #' @export #' @examples #' ## Return the machine CPU #' get_cpu() get_cpu = function() { cpu = try(get_cpu_internal(), silent = TRUE) if (class(cpu) == "try-error") { message("\t Unable to detect your CPU. 
Please raise an issue at https://github.com/csgillespie/benchmarkme") # nocov cpu = list(vendor_id = NA_character_, model_name = NA_character_) # nocov } cpu$no_of_cores = parallel::detectCores() cpu } get_cpu_internal = function() { os = R.version$os if (length(grep("^linux", os))) { cmd = "awk '/vendor_id/' /proc/cpuinfo" vendor_id = gsub("vendor_id\t: ", "", unique(system(cmd, intern = TRUE))) cmd = "awk '/model name/' /proc/cpuinfo" model_name = gsub("model name\t: ", "", unique(system(cmd, intern = TRUE))) } else if (length(grep("^darwin", os))) { sysctl = get_sysctl() if (is.na(sysctl)) { vendor_id = model_name = NA } else { vendor_id = suppressWarnings(system2(sysctl, "-n machdep.cpu.vendor", stdout = TRUE, stderr = NULL)) model_name = suppressWarnings(system2(sysctl, "-n machdep.cpu.brand_string", stdout = TRUE, stderr = NULL)) # nocov } } else if (length(grep("^solaris", os))) { vendor_id = NA # nocov model_name = NA # nocov } else { ## CPU model_name = system("wmic cpu get name", intern = TRUE)[2] # nocov vendor_id = system("wmic cpu get manufacturer", intern = TRUE)[2] # nocov } list(vendor_id = remove_white(vendor_id), model_name = remove_white(model_name), no_of_cores = parallel::detectCores()) } benchmarkme/R/zzz.R0000644000176200001440000000045113650265401013711 0ustar liggesusers#' @importFrom stats na.omit .onAttach = function(...) { #nolint # if (!interactive()) return() # # msg = "See https://jumpingrivers.shinyapps.io/benchmarkme/ for a Shiny # interface to the benchmark data." # nocov # packageStartupMessage(paste(strwrap(msg), collapse = "\n")) # nocov } benchmarkme/R/get_r_version.R0000644000176200001440000000015613650265354015732 0ustar liggesusers#' R version #' #' Returns \code{unclass(R.version)} #' @export get_r_version = function() unclass(R.version) benchmarkme/R/rnorm.R0000644000176200001440000000024313556526370014222 0ustar liggesusers#' @importFrom stats rnorm Rnorm = function(n) { #nolint if (requireNamespace("RcppZiggurat", quietly = TRUE)) RcppZiggurat::zrnorm(n) else rnorm(n) } benchmarkme/R/rank_results.R0000644000176200001440000000271313650265373015603 0ustar liggesusers#' @importFrom benchmarkmeData is_blas_optimize #' @export benchmarkmeData::is_blas_optimize #' Benchmark rankings #' #' Comparison with past results. #' @inheritParams upload_results #' @inheritParams benchmark_std #' @inheritParams plot.ben_results #' @importFrom tibble tibble #' @import dplyr #' @export rank_results = function(results, blas_optimize = is_blas_optimize(results), verbose = TRUE) { no_of_test_groups = length(unique(results$test_group)) if (no_of_test_groups != 1) stop("Can only rank a single group at a time", call. 
= FALSE) no_of_reps = length(results$test) / length(unique(results$test)) results_tib = tibble(time = sum(results$elapsed) / no_of_reps, is_past = FALSE) if (is.null(blas_optimize)) blas_optimize = c(FALSE, TRUE) tmp_env = new.env() data(past_results_v2, package = "benchmarkmeData", envir = tmp_env) pst = tmp_env$past_results_v2 pst$test_group = as.character(pst$test_group) rankings = pst %>% filter(test_group == unique(results$test_group)) %>% filter(blas_optimize %in% !!blas_optimize) %>% filter(cores %in% results$cores) %>% filter(!is.na(time)) %>% mutate(is_past = TRUE) %>% select(time, is_past) %>% bind_rows(results_tib) %>% arrange(time) ben_rank = which(!rankings$is_past) if (verbose) message("You are ranked ", ben_rank, " out of ", nrow(rankings), " machines.") ben_rank } benchmarkme/R/get_sys_details.R0000644000176200001440000000503713650265362016251 0ustar liggesusers#' General system information #' #' The \code{get_sys_info} returns general system level information as a list. The #' function parameters control the information to upload. If a parameter is set to #' \code{FALSE}, an \code{NA} is uploaded instead. Each element of the list #' is contains the output from: #' \itemize{ #' \item \code{Sys.info()}; #' \item \code{get_platform_info()}; #' \item \code{get_r_version()}; #' \item \code{get_ram()}; #' \item \code{get_cpu()}; #' \item \code{get_byte_compiler()}; #' \item \code{get_linear_algebra()}; #' \item \code{Sys.getlocale()} #' \item \code{installed.packages()}; #' \item \code{.Machine} #' \item The package version number; #' \item Unique ID - used to extract results; #' \item The current date. #' } #' @param sys_info Default \code{TRUE}. #' @param platform_info Default \code{TRUE}. #' @param r_version Default \code{TRUE}. #' @param ram Default \code{TRUE}. #' @param cpu Default \code{TRUE}. #' @param byte_compiler Default \code{TRUE}. #' @param linear_algebra Default \code{TRUE}. #' @param locale Default \code{TRUE} #' @param installed_packages Default \code{TRUE}. #' @param machine Default \code{TRUE} #' @return A list #' @importFrom utils installed.packages packageDescription #' @export #' @examples #' ## Returns all details about your machine #' get_sys_details() get_sys_details = function(sys_info = TRUE, platform_info = TRUE, r_version = TRUE, ram=TRUE, cpu=TRUE, byte_compiler=TRUE, linear_algebra=TRUE, locale = TRUE, installed_packages=TRUE, machine=TRUE) { l = list() if (sys_info) l$sys_info = as.list(Sys.info()) else l$sys_info = NA if (platform_info) l$platform_info = get_platform_info() else l$platform_info = NA if (r_version) l$r_version = get_r_version() else l$r_version = NA if (ram) l$ram = get_ram() else l$ram = NA if (cpu) l$cpu = get_cpu() else l$cpu = NA if (byte_compiler) l$byte_compiler = get_byte_compiler() else l$byte_compiler = NA if (linear_algebra) l$linear_algebra = get_linear_algebra() else l$linear_algebra = NA if (locale) l$locale = Sys.getlocale() else l$locale = NA if (installed_packages) l$installed_packages = installed.packages() else l$installed_packages = NA if (machine) l$machine = .Machine else l$machine = NA l$package_version = packageDescription("benchmarkme")$Version l$id = paste0(Sys.Date(), "-", sample(1e8, 1)) l$date = structure(Sys.Date(), class = "Date") l } benchmarkme/R/benchmark_std.R0000644000176200001440000000200713650265334015664 0ustar liggesusers#' Run standard benchmarks #' #' @description This function runs a set of standard benchmarks, which should be suitable for most #' machines. 
It runs a collection of matrix benchmark functions #' \itemize{ #' \item \code{benchmark_prog} #' \item \code{benchmark_matrix_cal} #' \item \code{benchmark_matrix_fun} #' } #' To view the list of benchmarks, see \code{get_available_benchmarks}. #' @param runs Number of times to run the test. Default 3. #' @param cores Default 0 (serial). When cores > 0, the benchmark is run in parallel. #' @param verbose Default TRUE. #' @details Setting \code{cores} equal to 1 is useful for assessing the impact of the #' parallel computing overhead. #' @export #' @examples #' ## Benchmark your system #' \dontrun{ #' res = benchmark_std(3) #' #' ## Plot results #' plot(res) #' } benchmark_std = function(runs = 3, verbose = TRUE, cores = 0L) { rbind(benchmark_prog(runs, verbose, cores), benchmark_matrix_cal(runs, verbose, cores), benchmark_matrix_fun(runs, verbose, cores)) } benchmarkme/R/plot_results.R0000644000176200001440000001002513650265370015616 0ustar liggesusersnice_palette = function() { alpha = 150 palette(c(rgb(85, 130, 169, alpha = alpha, maxColorValue = 255), rgb(200, 79, 178, alpha = alpha, maxColorValue = 255), rgb(105, 147, 45, alpha = alpha, maxColorValue = 255), rgb(204, 74, 83, alpha = alpha, maxColorValue = 255), rgb(183, 110, 39, alpha = alpha, maxColorValue = 255), rgb(131, 108, 192, alpha = alpha, maxColorValue = 255))) } #' Compare results to past tests #' #' Plotting #' @param x The output from a \code{benchmark_*} call. #' @param test_group Default \code{unique(x$test_group)}. #' The default behaviour is select the groups from your benchmark results. #' @param blas_optimize Logical. Default The default behaviour #' is to compare your results with results that use the same #' blas_optimize setting. To use all results, set to \code{NULL}. #' @param log By default the y axis is plotted on the log scale. To change, set the #' the argument equal to the empty parameter string, \code{""}. #' @param ... Arguments to be passed to other downstream methods. #' @importFrom graphics abline grid par plot points text legend title #' @importFrom grDevices palette rgb #' @importFrom utils data #' @importFrom benchmarkmeData select_results is_blas_optimize #' @export #' @examples #' data(sample_results) #' plot(sample_results, blas_optimize = NULL) plot.ben_results = function(x, test_group = unique(x$test_group), blas_optimize = is_blas_optimize(x), log = "y", ...) { for (i in seq_along(test_group)) { group = x[x$test_group == test_group[i], ] for (core in unique(group$cores)) { make_plot(x = group[group$cores == core, ], blas_optimize = blas_optimize, log = log, ...) } if (length(test_group) != i) readline("Press return to get next plot ") } } #' @import dplyr make_plot = function(x, blas_optimize, log, ...) 
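# Internal helper called by plot.ben_results(): draws two panels for one test
# group - absolute timings (left) and timings relative to the fastest machine
# (right) - marking your own result with "You".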
{ test_group = unique(x$test_group) results = benchmarkmeData::select_results(test_group = test_group, blas_optimize = blas_optimize, cores = unique(x$cores)) ben_rank = rank_results(x, blas_optimize = blas_optimize, verbose = TRUE) no_of_reps = length(x$test) / length(unique(x$test)) ben_sum = sum(x[, 3]) / no_of_reps ## Arrange plot colours and layout op = par(mar = c(3, 3, 2, 1), mgp = c(2, 0.4, 0), tck = -.01, cex.axis = 0.8, las = 1, mfrow = c(1, 2)) old_pal = palette() on.exit({palette(old_pal); par(op)}) nice_palette() ## Calculate adjustment for sensible "You" placement adj = ifelse(ben_rank < nrow(results) / 2, -1.5, 1.5) ## Plot limits ymin = min(results$time, ben_sum) ymax = max(results$time, ben_sum) ## Standard timings plot(results$time, xlab = "Rank", ylab = "Total timing (secs)", ylim = c(ymin, ymax), xlim = c(0.5, nrow(results) + 1), panel.first = grid(), cex = 0.7, log = log, ...) points(ben_rank - 1 / 2, ben_sum, bg = 4, pch = 21) abline(v = ben_rank - 1 / 2, col = 4, lty = 3) text(ben_rank - 1 / 2, ymin, "You", col = 4, adj = adj) if (unique(x$cores) == 0) title(paste0("Benchmark: ", test_group), cex = 0.9) else title(paste0("Benchmark: ", test_group, "(", unique(x$cores), " cores)"), cex = 0.9) ## Relative timings fastest = min(ben_sum, results$time) ymax = ymax / fastest plot(results$time / fastest, xlab = "Rank", ylab = "Relative timing", ylim = c(1, ymax), xlim = c(0.5, nrow(results) + 1), panel.first = grid(), cex = 0.7, log = log, ...) abline(h = 1, lty = 3) abline(v = ben_rank - 1 / 2, col = 4, lty = 3) points(ben_rank - 1 / 2, ben_sum / fastest, bg = 4, pch = 21) text(ben_rank - 1 / 2, 1.2, "You", col = 4, adj = adj) title(paste("Benchmark:", test_group), cex = 0.9) } #' @importFrom benchmarkmeData plot_past #' @export benchmarkmeData::plot_past benchmarkme/R/benchmark_io.R0000644000176200001440000001103713650265311015477 0ustar liggesusers#' IO benchmarks #' #' @description Benchmarking reading and writing a csv file (containing random numbers). #' The tests are essentially \code{write.csv(x)} and \code{read.csv(...)} where \code{x} #' is a data frame. #' Of \code{size}MB. #' @inheritParams benchmark_std #' @param tmpdir a non-empty character vector giving the directory name. Default \code{tempdir()} #' @param size a number specifying the approximate size of the generated csv. #' Must be one of 5 or 50 #' @importFrom utils read.csv write.csv #' @rdname benchmark_io #' @export benchmark_io = function(runs = 3, size = c(5, 50), tmpdir = tempdir(), verbose = TRUE, cores = 0L) { # Order size largest to smallest for trial run. # Trial on largest if (!all(size %in% c(5, 50))) { stop("Size must be one of 5, 50", call. = FALSE) } size = sort(size, decreasing = TRUE) if (cores > 0) { results = benchmark_io_parallel(runs = runs, size = size, tmpdir = tmpdir, verbose = verbose, cores = cores) } else { results = benchmark_io_serial(runs = runs, size = size, tmpdir = tmpdir, verbose = verbose) } class(results) = c("ben_results", class(results)) results } ## Two helper functions ---- benchmark_io_serial = function(runs, size, tmpdir, verbose) { ## Avoid spurious first times. ## Perform a dummy run message("Preparing read/write io") bm_write(runs, size = size[1], tmpdir, verbose = FALSE) results = NULL # I know I'm growing a data frame. 
But nrow < 10 for (s in size) { if (verbose) message("# IO benchmarks (2 tests) for size ", s, " MB:") res = bm_write(runs, size = s, tmpdir, verbose) results = rbind(results, res) res = bm_read(runs, size = s, tmpdir, verbose) results = rbind(results, res) } results$cores = 0 results } benchmark_io_parallel = function(runs, size, tmpdir, verbose, cores) { message("Preparing read/write io") bm_parallel("bm_write", runs = 1, size = size[1], tmpdir = tmpdir, verbose = verbose, cores = max(cores)) results = NULL for (s in size) { if (verbose) message("# IO benchmarks (2 tests) for size ", s, " MB (parallel)") results = rbind(results, bm_parallel("bm_write", runs = runs, size = s, tmpdir = tmpdir, verbose = verbose, cores = cores)) results = rbind(results, bm_parallel("bm_read", runs = runs, size = s, tmpdir = tmpdir, verbose = verbose, cores = cores)) } results } #bm_io(runs = runs, size = s, tmpdir = tmpdir, verbose = verbose) #' @rdname benchmark_io #' @export bm_read = function(runs = 3, size = c(5, 50), tmpdir = tempdir(), verbose = TRUE) { n = 12.5e4 * size set.seed(1); on.exit(set.seed(NULL)) x = Rnorm(n) m = data.frame(matrix(x, ncol = 10)) test = rep(paste0("read", size), runs) timings = data.frame(user = numeric(runs), system = 0, elapsed = 0, test = test, test_group = test, stringsAsFactors = FALSE) fname = tempfile(fileext = ".csv", tmpdir = tmpdir) write.csv(m, fname, row.names = FALSE) for (i in 1:runs) { invisible(gc()) timings[i, 1:3] = system.time({ read.csv(fname, colClasses = rep("numeric", 10)) })[1:3] if (verbose) { message(c("\t Reading a csv with ", n, " values", timings_mean(timings[timings$test_group == paste0("read", size), ]))) } } unlink(fname) invisible(gc()) timings } #' @rdname benchmark_io #' @export bm_write = function(runs = 3, size = c(5, 50), tmpdir = tempdir(), verbose = TRUE) { n = 12.5e4 * size set.seed(1); on.exit(set.seed(NULL)) x = Rnorm(n) m = data.frame(matrix(x, ncol = 10)) test = rep(paste0("write", size), runs) timings = data.frame(user = numeric(runs), system = 0, elapsed = 0, test = test, test_group = test, stringsAsFactors = FALSE) for (i in 1:runs) { fname = tempfile(fileext = ".csv", tmpdir = tmpdir) invisible(gc()) timings[i, 1:3] = system.time({ write.csv(m, fname, row.names = FALSE) })[1:3] unlink(fname) invisible(gc()) if (verbose) { message(c("\t Writing a csv with ", n, " values", timings_mean(timings[timings$test_group == paste0("write", size), ]))) } } timings } benchmarkme/R/timing_mean.R0000644000176200001440000000031313556526370015352 0ustar liggesuserstimings_mean = function(timings) { ti = timings[, 3] ti = ti[ti > 0] m = mean(ti) paste0(": ", signif(m, 3), " (sec).") } remove_white = function(x) gsub("(^[[:space:]]+|[[:space:]]+$)", "", x) benchmarkme/R/data_help_files.R0000644000176200001440000000030613421063453016155 0ustar liggesusers#' @rdname sample_results #' @name sample_results #' @title Sample benchmarking results #' @description Sample benchmark results. Used in the vignette. 
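#' @examples
#' data(sample_results, package = "benchmarkme")
#' plot(sample_results, blas_optimize = NULL)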
#' @docType data #' @format A data frame NULL benchmarkme/R/benchmarks.R0000644000176200001440000000355213650265337015206 0ustar liggesusersrun_benchmarks = function(bm, runs, verbose, cores) { if (cores > 0) { results = lapply(bm, bm_parallel, runs = runs, verbose = verbose, cores = cores) } else { results = lapply(bm, do.call, list(runs = runs, verbose = verbose), envir = environment(run_benchmarks)) } results = Reduce("rbind", results) results$cores = cores class(results) = c("ben_results", class(results)) results } #' Available benchmarks #' #' The function returns the available benchmarks #' @export #' @examples #' get_available_benchmarks() get_available_benchmarks = function() { c("benchmark_std", "benchmark_prog", "benchmark_matrix_cal", "benchmark_matrix_fun", "benchmark_io") } #' @inheritParams benchmark_std #' @rdname bm_prog_fib #' @export benchmark_prog = function(runs = 3, verbose = TRUE, cores = 0L) { bm = c("bm_prog_fib", "bm_prog_gcd", "bm_prog_hilbert", "bm_prog_toeplitz", "bm_prog_escoufier") if (verbose) message("# Programming benchmarks (5 tests):") run_benchmarks(bm, runs, verbose, cores) } #' @inheritParams benchmark_std #' @rdname bm_matrix_cal_manip #' @export benchmark_matrix_cal = function(runs = 3, verbose = TRUE, cores = 0L) { bm = c("bm_matrix_cal_manip", "bm_matrix_cal_power", "bm_matrix_cal_sort", "bm_matrix_cal_cross_product", "bm_matrix_cal_lm") if (verbose) message("# Matrix calculation benchmarks (5 tests):") run_benchmarks(bm, runs, verbose, cores) } #' @inheritParams benchmark_std #' @rdname bm_matrix_fun_fft #' @export benchmark_matrix_fun = function(runs = 3, verbose = TRUE, cores = 0L) { bm = c("bm_matrix_fun_cholesky", "bm_matrix_fun_determinant", "bm_matrix_fun_eigen", "bm_matrix_fun_fft", "bm_matrix_fun_inverse") if (verbose) message("# Matrix function benchmarks (5 tests):") run_benchmarks(bm, runs, verbose, cores) } benchmarkme/R/benchmark_parallel.R0000644000176200001440000000473714015757162016703 0ustar liggesuserscheck_export = function(export, cl) { if (class(export) %in% "try-error") { parallel::stopCluster(cl) stop("You need to call library(benchmarkme) before running parallel tests.\\ If you think you can avoid this, see github.com/csgillespie/benchmarkme/issues/33", call. = FALSE) } return(invisible(NULL)) } #' Benchmark in parallel #' #' This function runs benchmarks in parallel to test multithreading #' @param bm character name of benchmark function to run from \code{\link{get_available_benchmarks}} #' @param runs number of runs of benchmark to make #' @param verbose display messages during benchmarking #' @param cores number of cores to benchmark. If cores is specified, the benchmark is also #' run for cores = 1 to allow for normalisation. #' @param ... additional arguments to pass to \code{bm} #' @import parallel #' @import foreach #' @import doParallel #' @export #' @examples #' \dontrun{ #' bm_parallel("bm_matrix_cal_manip", runs = 3, verbose = TRUE, cores = 2) #' bm = c("bm_matrix_cal_manip","bm_matrix_cal_power", "bm_matrix_cal_sort", #' "bm_matrix_cal_cross_product", "bm_matrix_cal_lm") #' results = lapply(bm, bm_parallel, #' runs = 5, verbose = TRUE, cores = 2L) #' } #' @importFrom foreach foreach %dopar% bm_parallel = function(bm, runs, verbose, cores, ...) { args = list(...) 
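# Each timed iteration launches one copy of the benchmark on every requested
# core via foreach(); runs is therefore forced to 1 per worker, and the outer
# loop below repeats the whole timing 'runs' times.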
args[["runs"]] = 1 #TODO consider dropping first results from parallel results due to overhead results = data.frame(user = NA, system = NA, elapsed = NA, test = NA, test_group = NA, cores = NA) for (core in cores) { cl = parallel::makeCluster(core, outfile = "") export = try(parallel::clusterExport(cl, bm), silent = TRUE) # Export check_export(export, cl) parallel::clusterEvalQ(cl, "library('benchmarkme')") doParallel::registerDoParallel(cl) tmp = data.frame(user = numeric(length(runs)), system = 0, elapsed = 0, test = NA, test_group = NA, cores = NA, stringsAsFactors = FALSE) args$runs = 1 for (j in 1:runs) { tmp[j, 1:3] = system.time({ out = foreach(k = 1:(core)) %dopar% do.call(bm, args, quote = TRUE) #, envir = environment(bm_parallel)) })[1:3] } tmp$cores = core tmp$test = as.character(out[[1]]$test)[1] tmp$test_group = as.character(out[[1]]$test_group)[1] results = rbind(results, tmp) parallel::stopCluster(cl)# Would be nice to have on.exit here, but we run out of memory } return(na.omit(results)) } benchmarkme/R/benchmark_programming.R0000644000176200001440000001256113650265331017417 0ustar liggesusers#' @title Programming benchmarks #' @description A collection of matrix programming benchmark functions #' \itemize{ #' \item 3,500,000 Fibonacci numbers calculation (vector calc). #' \item Creation of a 3500x3500 Hilbert matrix (matrix calc). #' \item Grand common divisors of 1,000,000 pairs (recursion). #' \item Creation of a 1600x1600 Toeplitz matrix (loops). #' \item Escoufier's method on a 60x60 matrix (mixed). #' } #' These benchmarks have been developed by many authors. #' See http://r.research.att.com/benchmarks/R-benchmark-25.R #' for a complete history. The function \code{benchmark_prog()} runs the five \code{bm} functions. #' @inheritParams benchmark_std #' @importFrom stats runif #' @export bm_prog_fib = function(runs=3, verbose=TRUE) { a = 0; b = 0; phi = 1.6180339887498949 timings = data.frame(user = numeric(runs), system = 0, elapsed = 0, test = "fib", test_group = "prog", stringsAsFactors = FALSE) for (i in 1:runs) { a = floor(runif(3500000) * 1000) invisible(gc()) start = proc.time() b = (phi^a - (-phi) ^ (-a)) / sqrt(5) stop = proc.time() timings[i, 1:3] = (stop - start)[1:3] } if (verbose) message(c("\t3,500,000 Fibonacci numbers calculation (vector calc)", timings_mean(timings))) timings } #' @rdname bm_prog_fib #' @export bm_prog_hilbert = function(runs=3, verbose=TRUE) { a = 3500; b = 0 timings = data.frame(user = numeric(runs), system = 0, elapsed = 0, test = "hilbert", test_group = "prog", stringsAsFactors = FALSE) for (i in 1:runs) { invisible(gc()) start = proc.time() b <- rep(1:a, a); dim(b) <- c(a, a); b <- 1 / (t(b) + 0:(a - 1)) stop = proc.time() timings[i, 1:3] = (stop - start)[1:3] } if (verbose) message(c("\tCreation of a 3,500 x 3,500 Hilbert matrix (matrix calc)", timings_mean(timings))) timings } #' @rdname bm_prog_fib #' @export bm_prog_gcd = function(runs = 3, verbose = TRUE) { ans = 0 timings = data.frame(user = numeric(runs), system = 0, elapsed = 0, test = "gcd", test_group = "prog", stringsAsFactors = FALSE) gcd2 = function(x, y) { if (sum(y > 1.0E-4) == 0) { x } else { y[y == 0] <- x[y == 0]; Recall(y, x %% y) } } for (i in 1:runs) { a = ceiling(runif(1000000) * 1000) b = ceiling(runif(1000000) * 1000) invisible(gc()) start = proc.time() ans <- gcd2(a, b)# gcd2 is a recursive function stop = proc.time() timings[i, 1:3] <- (stop - start)[1:3] } if (verbose) message(c("\tGrand common divisors of 1,000,000 pairs (recursion)", 
timings_mean(timings))) timings } #' @rdname bm_prog_fib #' @export bm_prog_toeplitz = function(runs = 3, verbose = TRUE) { timings = data.frame(user = numeric(runs), system = 0, elapsed = 0, test = "toeplitz", test_group = "prog", stringsAsFactors = FALSE) N = 3000 #nolint ans = rep(0, N * N) dim(ans) = c(N, N) for (i in 1:runs) { invisible(gc()) start = proc.time() for (j in 1:N) { for (k in 1:N) { ans[k, j] = abs(j - k) + 1 } } stop = proc.time() timings[i, 1:3] = (stop - start)[1:3] } if (verbose) message(c("\tCreation of a 3,000 x 3,000 Toeplitz matrix (loops)", timings_mean(timings))) timings } #' @importFrom stats cor #' @rdname bm_prog_fib #' @export bm_prog_escoufier = function(runs = 3, verbose = TRUE) { timings = data.frame(user = numeric(runs), system = 0, elapsed = 0, test = "escoufier", test_group = "prog", stringsAsFactors = FALSE) p <- 0; vt <- 0; vr <- 0; vrt <- 0; rvt <- 0; RV <- 0; j <- 0; k <- 0; #nolint x2 <- 0; R <- 0; r_xx <- 0; r_yy <- 0; r_xy <- 0; r_yx <- 0; r_vmax <- 0 #nolint # Calculate the trace of a matrix (sum of its diagonal elements) tr = function(y) { sum(c(y)[1 + 0:(min(dim(y)) - 1) * (dim(y)[1] + 1)], na.rm = FALSE) } for (i in 1:runs) { x = abs(Rnorm(60 * 60)) dim(x) = c(60, 60) invisible(gc()) start = proc.time() # Calculation of Escoufier's equivalent vectors p <- ncol(x) vt <- 1:p # Variables to test vr <- NULL # Result: ordered variables RV <- 1:p # Result: correlations #nolint vrt <- NULL # loop on the variable number for (j in 1:p) { r_vmax <- 0 # loop on the variables for (k in 1:(p - j + 1)) { x2 <- cbind(x, x[, vr], x[, vt[k]]) R <- cor(x2) # Correlations table #nolint r_yy <- R[1:p, 1:p] r_xx <- R[(p + 1):(p + j), (p + 1):(p + j)] r_xy <- R[(p + 1):(p + j), 1:p] r_yx <- t(r_xy) rvt <- tr(r_yx %*% r_xy) / sqrt(tr(r_yy %*% r_yy) * tr(r_xx %*% r_xx)) # RV calculation if (rvt > r_vmax) { r_vmax <- rvt # test of RV vrt <- vt[k] # temporary held variable } } vr[j] <- vrt # Result: variable RV[j] <- r_vmax # Result: correlation vt <- vt[vt != vr[j]] # reidentify variables to test } stop = proc.time() timings[i, 1:3] = (stop - start)[1:3] } if (verbose) message(c("\tEscoufier's method on a 60 x 60 matrix (mixed)", timings_mean(timings))) timings } benchmarkme/R/get_ram.R0000644000176200001440000000664314025711520014476 0ustar liggesusersget_windows_ram = function() { ram = try(system("grep MemTotal /proc/meminfo", intern = TRUE), silent = TRUE) if (class(ram) != "try-error" && length(ram) != 0) { ram = strsplit(ram, " ")[[1]] mult = switch(ram[length(ram)], "B" = 1L, "kB" = 1024L, "MB" = 1048576L) ram = as.numeric(ram[length(ram) - 1]) ram_size = ram * mult } else { # Fallback: This was the old method I used # It worked for Windows 7 and below. ram_size = system("wmic MemoryChip get Capacity", intern = TRUE)[-1] } return(ram_size) } system_ram = function(os) { if (length(grep("^linux", os))) { cmd = "awk '/MemTotal/ {print $2}' /proc/meminfo" ram = system(cmd, intern = TRUE) } else if (length(grep("^darwin", os))) { sysctl = get_sysctl() if (is.na(sysctl)) { ram = NA } else { ram = system(paste(sysctl, "hw.memsize"), intern = TRUE) #nocov ram = substring(ram, 13) } } else if (length(grep("^solaris", os))) { cmd = "prtconf | grep Memory" # nocov ram = system(cmd, intern = TRUE) ## Memory size: XXX Megabytes # nocov } else { ram = get_windows_ram() # nocov } ram } #' Get the amount of RAM #' #' Attempt to extract the amount of RAM on the current machine. 
This is OS #' specific: #' \itemize{ #' \item Linux: \code{proc/meminfo} #' \item Apple: \code{system_profiler -detailLevel mini} #' \item Windows: First tries \code{grep MemTotal /proc/meminfo} then falls back to #' \code{wmic MemoryChip get Capacity} #' \item Solaris: \code{prtconf} #' } #' A value of \code{NA} is return if it isn't possible to determine the amount of RAM. #' @export #' @references The \code{print.bytes} function was taken from the \pkg{pryr} package. #' @examples #' ## Return (and pretty print) the amount of RAM #' get_ram() #' ## Display using iec units #' print(get_ram(), unit_system = "iec") get_ram = function() { os = R.version$os ram = suppressWarnings(try(system_ram(os), silent = TRUE)) if (class(ram) == "try-error" || length(ram) == 0 || is.na(ram)) { message("\t Unable to detect your RAM. # nocov Please raise an issue at https://github.com/csgillespie/benchmarkme") # nocov ram = structure(NA, class = "ram") # nocov } else { cleaned_ram = suppressWarnings(try(clean_ram(ram, os), silent = TRUE)) if (class(cleaned_ram) == "try-error" || length(ram) == 0) { message("\t Unable to detect your RAM. # nocov Please raise an issue at https://github.com/csgillespie/benchmarkme") # nocov ram = structure(NA, class = "ram") #nocov } else { ram = structure(cleaned_ram, class = "ram") } } return(ram) } #' @rawNamespace S3method(print,ram) print.ram = function(x, digits = 3, unit_system = c("metric", "iec"), ...) { unit_system = match.arg(unit_system) #unit_system = "metric" base = switch(unit_system, metric = 1000, iec = 1024) power = min(floor(log(abs(x), base)), 8) if (is.na(x) || power < 1) { unit = "B" } else { unit_labels = switch( unit_system, metric = c("kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"), iec = c("KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB") ) unit = unit_labels[[power]] x = x / (base^power) } formatted = format(signif(x, digits = digits), big.mark = ",", scientific = FALSE, ...) cat(unclass(formatted), " ", unit, "\n", sep = "") invisible(paste(unclass(formatted), unit)) } benchmarkme/R/benchmark_matrix_calculations.R0000644000176200001440000000731413650265316021145 0ustar liggesusers#' @importFrom utils globalVariables globalVariables(c("a", "b", "ans")) #' Matrix calculation benchmarks #' #' @description A collection of matrix benchmark functions aimed at #' assessing the calculation speed. #' \itemize{ #' \item Creation, transp., deformation of a 2500x2500 matrix. #' \item 2500x2500 normal distributed random matrix ^1000. #' \item Sorting of 7,000,000 random values. #' \item 2500x2500 cross-product matrix (b = a' * a) #' \item Linear regr. over a 3000x3000 matrix. #' } #' These benchmarks have been developed by many authors. #' See http://r.research.att.com/benchmarks/R-benchmark-25.R #' for a complete history. The function \code{benchmark_matrix_cal()} runs #' the five \code{bm} functions. 
#' @inheritParams benchmark_std #' @references http://r.research.att.com/benchmarks/R-benchmark-25.R #' @export bm_matrix_cal_manip = function(runs = 3, verbose = TRUE) { a = 0 b = 0 timings = data.frame(user = numeric(runs), system = 0, elapsed = 0, test = "manip", test_group = "matrix_cal", stringsAsFactors = FALSE) for (i in 1:runs) { invisible(gc()) timing = system.time({ a = matrix(rnorm(2500 * 2500) / 10, ncol = 2500, nrow = 2500) b = t(a) dim(b) = c(1250, 5000) a = t(b) }) timings[i, 1:3] = timing[1:3] } if (verbose) message(c("\tCreation, transp., deformation of a 5,000 x 5,000 matrix", timings_mean(timings))) timings } #' @rdname bm_matrix_cal_manip #' @export bm_matrix_cal_power = function(runs = 3, verbose = TRUE) { timings = data.frame(user = numeric(runs), system = 0, elapsed = 0, test = "power", test_group = "matrix_cal", stringsAsFactors = FALSE) for (i in 1:runs) { a = abs(matrix(Rnorm(2500 * 2500) / 2, ncol = 2500, nrow = 2500)) invisible(gc()) timings[i, 1:3] = system.time({b <- a^1000})[1:3] } if (verbose) message(c("\t2,500 x 2,500 normal distributed random matrix^1,000", timings_mean(timings))) timings } #' @rdname bm_matrix_cal_manip #' @export bm_matrix_cal_sort = function(runs = 3, verbose = TRUE) { b = 0 timings = data.frame(user = numeric(runs), system = 0, elapsed = 0, test = "sort", test_group = "matrix_cal", stringsAsFactors = FALSE) for (i in 1:runs) { a = Rnorm(7000000) invisible(gc()) timings[i, 1:3] = system.time({b <- sort(a, method = "quick")})[1:3] } if (verbose) message(c("\tSorting of 7,000,000 random values", timings_mean(timings))) timings } #' @rdname bm_matrix_cal_manip #' @export bm_matrix_cal_cross_product = function(runs = 3, verbose = TRUE) { b = 0 timings = data.frame(user = numeric(runs), system = 0, elapsed = 0, test = "cross_product", test_group = "matrix_cal", stringsAsFactors = FALSE) for (i in 1:runs) { a = Rnorm(2500 * 2500) dim(a) = c(2500, 2500) invisible(gc()) timings[i, 1:3] = system.time({b <- crossprod(a)})[1:3] } if (verbose) message(c("\t2,500 x 2,500 cross-product matrix (b = a' * a)", timings_mean(timings))) timings } #' @rdname bm_matrix_cal_manip #' @export bm_matrix_cal_lm = function(runs = 3, verbose = TRUE) { ans = 0 b = as.double(1:5000) timings = data.frame(user = numeric(runs), system = 0, elapsed = 0, test = "lm", test_group = "matrix_cal", stringsAsFactors = FALSE) for (i in 1:runs) { a = new("dgeMatrix", x = Rnorm(5000 * 500), Dim = as.integer(c(5000, 500))) invisible(gc()) timings[i, 1:3] = system.time({ans = solve(crossprod(a), crossprod(a, b))})[1:3] } if (verbose) message(c("\tLinear regr. 
over a 5,000 x 500 matrix (c = a \\ b')", timings_mean(timings))) timings } benchmarkme/R/get_platform_info.R0000644000176200001440000000021313650265356016557 0ustar liggesusers#' Platform information #' #' This function just returns the outpu of \code{.Platform} #' @export get_platform_info = function() .Platform benchmarkme/R/datatable.R0000644000176200001440000000266013650265341015004 0ustar liggesusers#' #' make_DT = function(results, test_group, byte_optimize, blas_optimize) { #' #' pas_res = select_results(test_group, byte_optimize = byte_optimize, #' blas_optimize = blas_optimize) #' ## New result #' results = results[results$test_group %in% test_group,] #' no_of_reps = length(results$test)/length(unique(results$test)) #' ben_sum = sum(results[,3])/no_of_reps #' #' pas_res$new = FALSE #' pas_res = pas_res[,c("cpu", "time", "sysname", "new")] #' results = rbind(pas_res, #' data.frame(cpu = get_cpu()$model_name, #' time=ben_sum, #' sysname = as.character(Sys.info()["sysname"]), #' new=TRUE, stringsAsFactors = FALSE)) #' #' results$time = signif(results$time, 4) #' results = results[order(results$time), ] #' results$rank = 1:nrow(results) #' current_rank = results$rank[results$new] #' message("You are ranked ", current_rank, " out of ", nrow(results), " machines.") #' results = results[,c("rank", "cpu", "time", "sysname")] #' colnames(results) = c("Rank", "CPU", "Time (sec)", "OS") #' #' data_table = DT::datatable(results, rownames=FALSE) #' DT::formatStyle(data_table, "Rank", #' backgroundColor = DT::styleEqual(current_rank, "orange")) #' } #' #' #' @importFrom benchmarkmeData get_datatable_past #' #' @export #' benchmarkmeData::get_datatable_past benchmarkme/R/benchmarkme-package.R0000644000176200001440000000101213650265544016723 0ustar liggesusers#' The benchmarkme package #' #' Benchmark your CPU and compare against other CPUs. Also provides #' functions for obtaining system specifications, such as #' RAM, CPU type, and R version. #' @name benchmarkme-package #' @aliases benchmarkme #' @docType package #' @author \email{csgillespie@gmail.com} #' @keywords package #' @seealso \url{https://github.com/csgillespie/benchmarkme} #' @examples #' ## Benchmark your system and compare #' \dontrun{ #' res = benchmark_std() #' upload_results(res) #' plot(res) #' } NULL benchmarkme/R/benchmark_matrix_functions.R0000644000176200001440000000705113650265323020470 0ustar liggesusers#' Matrix function benchmarks #' #' @description A collection of matrix benchmark functions #' \itemize{ #' \item FFT over 2,500,000 random values. #' \item Eigenvalues of a 640x640 random matrix. #' \item Determinant of a 2500x2500 random matrix. #' \item Cholesky decomposition of a 3000x3000 matrix. #' \item Inverse of a 1600x1600 random matrix. #' } #' These benchmarks have been developed by many authors. #' See http://r.research.att.com/benchmarks/R-benchmark-25.R #' for a complete history. The function \code{benchmark_matrix_fun()} #' runs the five \code{bm} functions. 
#' @inheritParams benchmark_std #' @references http://r.research.att.com/benchmarks/R-benchmark-25.R #' @importFrom stats fft #' @export bm_matrix_fun_fft = function(runs=3, verbose=TRUE) { b = 0 timings = data.frame(user = numeric(runs), system = 0, elapsed = 0, test = "fft", test_group = "matrix_fun", stringsAsFactors = FALSE) for (i in 1:runs) { a = Rnorm(2500000) invisible(gc()) timings[i, 1:3] = system.time({b <- fft(a)})[1:3] } if (verbose) message(c("\tFFT over 2,500,000 random values", timings_mean(timings))) timings } #' @rdname bm_matrix_fun_fft #' @export bm_matrix_fun_eigen = function(runs=3, verbose=TRUE) { b = 0 timings = data.frame(user = numeric(runs), system = 0, elapsed = 0, test = "eigen", test_group = "matrix_fun", stringsAsFactors = FALSE) for (i in 1:runs) { a = array(Rnorm(600 * 600), dim = c(600, 600)) invisible(gc()) timings[i, 1:3] = system.time({ b <- eigen(a, symmetric = FALSE, only.values = TRUE)$Value})[1:3] } if (verbose) message(c("\tEigenvalues of a 640 x 640 random matrix", timings_mean(timings))) timings } #' @rdname bm_matrix_fun_fft #' @export bm_matrix_fun_determinant = function(runs = 3, verbose = TRUE) { b = 0 timings = data.frame(user = numeric(runs), system = 0, elapsed = 0, test = "determinant", test_group = "matrix_fun", stringsAsFactors = FALSE) for (i in 1:runs) { a = Rnorm(2500 * 2500); dim(a) = c(2500, 2500) invisible(gc()) timings[i, 1:3] = system.time({b <- det(a)})[1:3] } if (verbose) message(c("\tDeterminant of a 2,500 x 2,500 random matrix", timings_mean(timings))) timings } #' @importFrom methods new #' @rdname bm_matrix_fun_fft #' @import Matrix #' @export bm_matrix_fun_cholesky = function(runs = 3, verbose = TRUE) { timings = data.frame(user = numeric(runs), system = 0, elapsed = 0, test = "cholesky", test_group = "matrix_fun", stringsAsFactors = FALSE) for (i in 1:runs) { a = crossprod(new("dgeMatrix", x = Rnorm(3000 * 3000), Dim = as.integer(c(3000, 3000)))) invisible(gc()) timings[i, 1:3] = system.time({b <- chol(a)})[1:3] } if (verbose) message(c("\tCholesky decomposition of a 3,000 x 3,000 matrix", timings_mean(timings))) timings } #' @rdname bm_matrix_fun_fft #' @export bm_matrix_fun_inverse = function(runs=3, verbose=TRUE) { b = 0 timings = data.frame(user = numeric(runs), system = 0, elapsed = 0, test = "inverse", test_group = "matrix_fun", stringsAsFactors = FALSE) for (i in 1:runs) { a = new("dgeMatrix", x = Rnorm(1600 * 1600), Dim = as.integer(c(1600, 1600))) invisible(gc()) timings[i, 1:3] = system.time({b <- solve(a)})[1:3] } if (verbose) message(c("\tInverse of a 1,600 x 1,600 random matrix", timings_mean(timings))) timings } benchmarkme/R/get_byte_compiler.R0000644000176200001440000000332613650265344016562 0ustar liggesusers#' Byte compiler status #' #' Attempts to detect if byte compiling or JIT has been used on the package. #' @details For R 3.5.0 all packages are byte compiled. Before 3.5.0 it was messy. #' Sometimes the user would turn it on via JIT, or ByteCompiling the package. On top of that #' R 3.4.X(?) was byte compiled, but R 3.4.Y(?) was, not fully optimised!!! What this means is #' don't trust historical results! #' @return An integer indicating if byte compiling has been turn on. See \code{?compiler} for #' details. 
#' @importFrom compiler getCompilerOption #' @importFrom compiler compilePKGS enableJIT #' @importFrom utils capture.output #' @export #' @examples #' ## Detect if you use byte optimization #' get_byte_compiler() get_byte_compiler = function() { comp = Sys.getenv("R_COMPILE_PKGS") if (nchar(comp) > 0L) comp = as.numeric(comp) else comp = 0L ## Try to detect compilePKGS - long shot ## Return to same state as we found it if (comp == 0L) { comp = compiler::compilePKGS(1) compiler::compilePKGS(comp) if (comp) { comp = compiler::getCompilerOption("optimize") } else { comp = 0L } } ## Try to detect enableJIT ## Return to same state as we found it ## This shouldn't affect benchmark tests. So remove. #if(comp == 0L) { # comp = compiler::enableJIT(3) # compiler::enableJIT(comp) #} if (comp == 0L && require("benchmarkme")) { # Get function definition # Check if cmpfun has been used out = capture.output(get("benchmark_std", envir = globalenv())) is_byte = out[length(out) - 1] if (length(grep("bytecode: ", is_byte)) > 0) { comp = compiler::getCompilerOption("optimize") } } structure(as.integer(comp), names = "byte_optimize") } benchmarkme/R/upload_results.R0000644000176200001440000000346613650265376016145 0ustar liggesusers#' @param filename default \code{NULL}. A character vector of where to #' store the results (in an .rds file). If \code{NULL}, results are not saved. #' @rdname upload_results #' @export create_bundle = function(results, filename = NULL, args = NULL, id_prefix = "") { if (is.null(args)) args = list() message("Getting system specs. This can take a while on Macs") type = do.call(get_sys_details, args) type$id = paste0(id_prefix, type$id) type$results = results if (!is.null(filename)) { saveRDS(type, file = filename) } type } #' @title Upload benchmark results #' #' @description This function uploads the benchmarking results. #' These results will then be incorparated #' in future versions of the package. #' @param results Benchmark results. Probably obtained from #' \code{benchmark_std()} or \code{benchmark_io()}. #' @param url The location of where to upload the results. #' @param args Default \code{NULL}. A list of arguments to #' be passed to \code{get_sys_details()}. #' @param id_prefix Character string to prefix the benchmark id. Makes it #' easier to retrieve past results. 
#' @export #' @importFrom httr POST upload_file #' @examples #' ## Run benchmarks #' \dontrun{ #' res = benchmark_std() #' upload_results(res) #' } upload_results = function(results, url = "http://www.mas.ncl.ac.uk/~ncsg3/form.php", args = NULL, id_prefix = "") { message("Creating temporary file") fname = tempfile(fileext = ".rds") on.exit(unlink(fname)) type = create_bundle(results, fname, id_prefix = id_prefix) message("Uploading results") r = httr::POST(url, body = list(userFile = httr::upload_file(fname)), encode = "multipart") message("Upload complete") message("Tracking id: ", type$id) type$id } benchmarkme/R/utils-sysctl.R0000644000176200001440000000034614025711520015531 0ustar liggesusers# Try to find sysctl in Macs get_sysctl = function() { cmd = Sys.which("sysctl") if (nchar(cmd) == 0) cmd = "/usr/sbin/sysctl" if (!file.exists(cmd)) cmd = "/sbin/sysctl" if (!file.exists(cmd)) cmd = NA return(cmd) } benchmarkme/R/get_hard_disk.R0000644000176200001440000000110513650265351015644 0ustar liggesusers## A work in progress # get_hard_drive = function() { # if(Sys.info()["sysname"]=="Windows") { # cmd = # "C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe get-wmiobject win32_diskdrive" # hard_disk = system(cmd, intern=TRUE) # } # } # cmd = # "C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe get-wmiobject win32_diskdrive" # system(cmd) # # # Partitions : 2 # DeviceID : \\.\PHYSICALDRIVE0 # Model : SAMSUNG MZ7PD128HCFV-000 SCSI Disk Device # Size : 128034708480 # Caption : SAMSUNG MZ7PD128HCFV-000 SCSI Disk Device benchmarkme/R/global_variables.R0000644000176200001440000000014213650265365016352 0ustar liggesusersglobalVariables(c("test_group", "cores", "test", "elapsed", "is_past", "time")) benchmarkme/NEWS.md0000644000176200001440000000532714025712236013635 0ustar liggesusers# benchmarkme Version 1.0.7 _2021-03-21_ * Internal: Suppress warnings on `sysctl` calls # benchmarkme Version 1.0.6 _2021-02-25_ * Internal: Better detection of `sysctl` # benchmarkme Version 1.0.5 _2021-02-08_ * Internal: Move to GitHub Actions * Internal: Detect `sysctl` on Macs * Bug: To run parallel checks, the package needs to be attached (thanks to @davidhen #33) # Version 1.0.4 * Improve RAM detection in Windows (thanks to @xiaodaigh #25) * Example on using IEC units (#22) ## Version 1.0.2 * Minor Bug fix for get_sys_details (thanks to @dipterix) ## Version 1.0.1 * Typo in vignette (thanks to @tmartensecon) ## Version 1.0.0 * Update version focused on R 3.5 & above. Start anew. Sorry everyone ## Version 0.6.1 * Improved BLAS detection (suggested by @ck37 #15) ## Version 0.6.0 * Adding parallel benchmarks (thanks to @jknowles) * Since JIT has been introduced, just byte compile the package for ease of comparison. ## Version 0.5.1 * Add id_prefix to the upload function * Can now run `benchmark_std` if the package is not attached (thanks to @YvesCR) * Nicer version of `print.bytes` (thanks to @richierocks) * Adding parallel benchmarks (thanks to @jknowles) ## Version 0.5.0 * Bug fix in get_byte_compiler when `cmpfun` was used. ## Version 0.4.0 * Update to shinyapps.io example * Moved benchmark description to shinyapps.io * Additional checks on `get_ram()` ## Version 0.3.0 * New vignette describing benchmarks. * Used `Sys.getpid()` to try and determine the BLAS/LAPACK library (suggested by Ashley Ford). ## Version 0.2.3 * Return `NA` for `get_cpu()`/`get_ram()` when it isn't possible to determine CPU/RAM. 
## Version 0.2.2 * First CRAN version ## Version 0.2.0 * More flexibility in plot and datatable functions - you can now specify the test you want to compare. * The number of cores returned by `get_cpu()`. * Adding io benchmarks. * New shiny interface. ## Version 0.1.9 * Default log scale on y-axis (suggested by @eddelbuettel). Fixes #5. * Moved data sets to `benchmarkmeData` package. * New ranking function to compare results with past. ## Version 0.1.8 * Added introduction to `benchmarkme` vignette. * Adjust placement of "You" in the S3 plot. * Add `.Machine` to `get_sys_details`. ## Version 0.1.7 * Add locale to `get_sys_details`. ## Version 0.1.6 * Further RAM and Mac issues. ## Version 0.1.4 * Bug fix: Remove white space from apple RAM output (thanks to @vzemlys). Fixes #2. ## Version 0.1.3 * Add a fall-back when getting RAM - grab everything. * Minor: Added a horizontal line automatically generated plots. * Deprecated `benchmark_all` (use `benchmark_std`). ## Version 0.1.2 * First public release. benchmarkme/MD50000644000176200001440000000775514025731726013063 0ustar liggesusers79e33f0bd8f0391b800b5d289916489f *DESCRIPTION d0bdd39a5515b92fab23f351140a1ca3 *NAMESPACE 3aa6ba5e5f0ee826167dd08cb398760f *NEWS.md 0bf5958c081bbc3208b1e61394241a50 *R/benchmark_io.R eb6433b38253e6f12cf4efa4d162f92b *R/benchmark_matrix_calculations.R 7dd439db2431f7a273e256db144986a5 *R/benchmark_matrix_functions.R 6df88640c79c88fd65bbc06445510438 *R/benchmark_parallel.R 14e4402854a47b980c53fb51bbccfe20 *R/benchmark_programming.R f044b92926d74c73a630d695664167af *R/benchmark_std.R e0ccf5936f7c39e7a638e42aa793ae00 *R/benchmarkme-package.R 0bf4da7435dd3516d414dcfce96dc26d *R/benchmarks.R f7c1b846e3f23dac29203e297c280d5e *R/clean_ram_output.R 0c4546be542af58b995b5c627e062578 *R/data_help_files.R ca6c137996782b702a6d1eb91c27b3c0 *R/datatable.R 3acdbb3dbc0fe542c7f7a38b47578cc4 *R/get_byte_compiler.R e882885f01f3a357e5165affcebfd7e6 *R/get_cpu.R e0c10b94c1764d75b01edd188002f778 *R/get_hard_disk.R c3716523a18f6cd756e6fdd6317aa436 *R/get_linear_algebra.R d4a29d25c6a5f1c601e2b7aed6b315d2 *R/get_platform_info.R 7ef2423ddfb7ca1f13e8cfee79990df2 *R/get_r_version.R f06cc187414033efbf1b7c754cfe9077 *R/get_ram.R 4affaaddcab401445cf9fc4ef900c44f *R/get_sys_details.R 74bf634f7143ef2969a482f6cbf0275c *R/global_variables.R 6e3a99f8e3cd26a18bd44fb88fbae44b *R/plot_results.R 5d94a2315757f65f7a511e463d16ce34 *R/rank_results.R b0f1b2960ef578437e0c07f341947fdb *R/rnorm.R fce9bbab01d1e0ae973ac2500f5c01ff *R/timing_mean.R 5110dfb802d911579607664374397cf7 *R/upload_results.R 8028fd3660c92d08c7473f0cf74f49a0 *R/utils-sysctl.R c3a1d3286b31380aeb09b77ac9209fa1 *R/zzz.R ce6457c240de6e4bf23efa8d7153e255 *README.md 98d685d19dde575ab9fb316400524f89 *build/vignette.rds 915beab838a2334b12e3fd312c928208 *data/sample_results.RData 0039a6f9086fdb26ed697cd2d5062338 *inst/doc/a_introduction.R 2f5cdb601dd998a707b9c83f123f8a0d *inst/doc/a_introduction.Rmd 51860f1d5eb0c5b43138da08e50c10ca *inst/doc/a_introduction.html 3e4fc3dfa38315b0ba190c3c2fe787f1 *man/benchmark_io.Rd 9ae8d491b723ad101b64b718f8e773c8 *man/benchmark_std.Rd 27e810f4795aeea508f007e265097f9c *man/benchmarkme-package.Rd 201c188d2360a49ee7dd739defb801f3 *man/bm_matrix_cal_manip.Rd e7530034e01fab232278a6aff5270cca *man/bm_matrix_fun_fft.Rd ad6e9f300835fe739309915a53d81ffb *man/bm_parallel.Rd 55827817054a5838c63240790970bcf2 *man/bm_prog_fib.Rd 4844340c1a2791fe9849c81e51f5e630 *man/get_available_benchmarks.Rd 0c09c018e7abe6fc14d48ebaed7210f6 *man/get_byte_compiler.Rd 
01a6c3f547567bb3e141af3f81357b5d *man/get_cpu.Rd 0e335db338ebda1b4f605b19dcbcf6de *man/get_linear_algebra.Rd c444611ce560b171e33cf31c66c47472 *man/get_platform_info.Rd e4a70223e70b03612377670147dfa705 *man/get_r_version.Rd 0b6df2dbe10ed15cb0b740137d9dce92 *man/get_ram.Rd 4b2c804dad51ed1c021ddbaa6c6673b5 *man/get_sys_details.Rd 6a7b2dbe221a88d7060feaece4b14853 *man/plot.ben_results.Rd 11c72ced329fc88543e0979e1c9a252b *man/rank_results.Rd a46927f55aa182edc2527219c944f4ee *man/reexports.Rd 51f680844970d2c6e9d361c01841a670 *man/sample_results.Rd 3c576dc58f7ce46d1b87bdfbd5d2901b *man/upload_results.Rd 6a6f6c2d85d59d4995b2b7e37ec3518c *tests/testthat.R 2bbf232ec01ee0c4a65631ab5a4dab37 *tests/testthat/test-benchmark_io.R 7060a90f8aae2de26f88cd06dded58f0 *tests/testthat/test-benchmark_std.R a991ea8cc9c8dd68474ddcc865fae385 *tests/testthat/test-bm_parallel.R 63cb288e53285a66d9a6db0102e1fb1c *tests/testthat/test-byte_compiler.R 57693fa3be2a9be7bc621e36c96c0fef *tests/testthat/test-cpu.R cd12258df89ebbfa75a78ebb41caa815 *tests/testthat/test-datatable.R 2236a4307260e21095def376c0ea9d21 *tests/testthat/test-platform_info.R 692a9d0a53f018237f8ecbfa9093a844 *tests/testthat/test-plot_results.R 364add53bddc3d5a6106a34aaa4cea99 *tests/testthat/test-ram.R 3a5c903f5fdfa107d5dfb998b849732d *tests/testthat/test-ranking.R 0c48764a0f4af91a4e7cd1a34ba69115 *tests/testthat/test-rnorm.R 4ffc7b52c8e5aa1805db76389a5e0c81 *tests/testthat/test-sys_details.R 13e4c42c7c46e3710b282e0550a44184 *tests/testthat/test-timings.R eb9a5521b3ed67d43c1d51d0752f90ea *tests/testthat/test-upload_results.R 2f5cdb601dd998a707b9c83f123f8a0d *vignettes/a_introduction.Rmd benchmarkme/inst/0000755000176200001440000000000014025712263013505 5ustar liggesusersbenchmarkme/inst/doc/0000755000176200001440000000000014025712263014252 5ustar liggesusersbenchmarkme/inst/doc/a_introduction.R0000644000176200001440000000310714025712263017417 0ustar liggesusers## ---- eval=FALSE-------------------------------------------------------------- # install.packages("benchmarkme") ## ----eval=FALSE--------------------------------------------------------------- # library("benchmarkme") # ## Increase runs if you have a higher spec machine # res = benchmark_std(runs = 3) ## ---- eval=FALSE-------------------------------------------------------------- # ## You can control exactly what is uploaded. See details below. 
# upload_results(res) ## ----eval=FALSE--------------------------------------------------------------- # plot(res) ## ----eval=FALSE--------------------------------------------------------------- # res_io = benchmark_io(runs = 3) # upload_results(res_io) # plot(res_io) ## ----eval=FALSE--------------------------------------------------------------- # tempdir() ## ----eval=FALSE--------------------------------------------------------------- # Sys.getenv("TMPDIR") ## ----eval=FALSE--------------------------------------------------------------- # res_io = benchmark_io(tmpdir = "some_other_directory") ## ----eval=FALSE--------------------------------------------------------------- # res_io = benchmark_std(runs = 3, cores = 4) ## ----------------------------------------------------------------------------- data(past_results, package = "benchmarkmeData") ## ----------------------------------------------------------------------------- data(past_results_v2, package = "benchmarkmeData") ## ----eval=FALSE--------------------------------------------------------------- # upload_results(res, args = list(sys_info = FALSE)) benchmarkme/inst/doc/a_introduction.Rmd0000644000176200001440000001343114015757162017747 0ustar liggesusers--- title: "Crowd sourced benchmarks" author: "Colin Gillespie" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Crowd sourced benchmarks} %\VignetteEngine{knitr::rmarkdown} \usepackage[utf8]{inputenc} --- ```{r echo=FALSE, purl=FALSE} library("benchmarkme") data(sample_results, package = "benchmarkme") res = sample_results ``` # System benchmarking R benchmarking made easy. The package contains a number of benchmarks, heavily based on the benchmarks at https://mac.R-project.org/benchmarks/R-benchmark-25.R, for assessing the speed of your system. ## Overview A straightforward way of speeding up your analysis is to buy a better computer. Modern desktops are relatively cheap, especially compared to user time. However, it isn't clear if upgrading your computing is worth the cost. The **benchmarkme** package provides a set of benchmarks to help quantify your system. More importantly, it allows you to compare your timings with _other_ systems. ## Overview The package is on [CRAN](https://cran.r-project.org/package=benchmarkme) and can be installed in the usual way ```{r, eval=FALSE} install.packages("benchmarkme") ``` There are two groups of benchmarks: * `benchmark_std()`: this benchmarks numerical operations such as loops and matrix operations. The benchmark comprises of three separate benchmarks: `prog`, `matrix_fun`, and `matrix_cal`. * `benchmark_io()`: this benchmarks reading and writing a 5 / 50, MB csv file. ### The benchmark_std() function This benchmarks numerical operations such as loops and matrix operations. This benchmark comprises of three separate benchmarks: `prog`, `matrix_fun`, and `matrix_cal`. If you have less than 3GB of RAM (run `get_ram()` to find out how much is available on your system), then you should kill any memory hungry applications, e.g. firefox, and set `runs = 1` as an argument. To benchmark your system, use ```{r eval=FALSE} library("benchmarkme") ## Increase runs if you have a higher spec machine res = benchmark_std(runs = 3) ``` and upload your results ```{r, eval=FALSE} ## You can control exactly what is uploaded. See details below. 
upload_results(res) ``` You can compare your results to other users via ```{r eval=FALSE} plot(res) ``` ### The benchmark_io() function This function benchmarks reading and writing a 5MB or 50MB (if you have less than 4GB of RAM, reduce the number of `runs` to 1). Run the benchmark using ```{r eval=FALSE} res_io = benchmark_io(runs = 3) upload_results(res_io) plot(res_io) ``` By default the files are written to a temporary directory generated ```{r eval=FALSE} tempdir() ``` which depends on the value of ```{r eval=FALSE} Sys.getenv("TMPDIR") ``` You can alter this to via the `tmpdir` argument. This is useful for comparing hard drive access to a network drive. ```{r eval=FALSE} res_io = benchmark_io(tmpdir = "some_other_directory") ``` ### Parallel benchmarks The benchmark functions above have a parallel option - just simply specify the number of cores you want to test. For example to test using four cores ```{r eval=FALSE} res_io = benchmark_std(runs = 3, cores = 4) ``` The process for the parallel benchmarks of the pseudo function `benchmark_x(cores = n)` is: - initialise the parallel environment - Start timer - Run job x in core 1, 2, ..., n simultaneously - when __all__ jobs finish stop timer - stop parallel environment This procedure is repeat `runs` times. ## Previous versions of this This package was started around 2015. However, multiple changes in the byte compiler over the last few years, has made it very difficult to use previous results. So we have to start from scratch. The previous data can be obtained via ```{r} data(past_results, package = "benchmarkmeData") ``` ## Machine specs The package has a few useful functions for extracting system specs: * RAM: `get_ram()` * CPUs: `get_cpu()` * BLAS library: `get_linear_algebra()` * Is byte compiling enabled: `get_byte_compiler()` * General platform info: `get_platform_info()` * R version: `get_r_version()` The above functions have been tested on a number of systems. If they don't work on your system, please raise [GitHub](https://github.com/csgillespie/benchmarkme/issues) issue. ## Uploaded data sets A summary of the uploaded data sets is available in the [benchmarkmeData](https://github.com/csgillespie/benchmarkme-data) package ```{r} data(past_results_v2, package = "benchmarkmeData") ``` A column of this data set, contains the unique identifier returned by the `upload_results()` function. ## What's uploaded Two objects are uploaded: 1. Your benchmarks from `benchmark_std()` or `benchmark_io()`; 1. A summary of your system information (`get_sys_details()`). The `get_sys_details()` returns: * `Sys.info()`; * `get_platform_info()`; * `get_r_version()`; * `get_ram()`; * `get_cpu()`; * `get_byte_compiler()`; * `get_linear_algebra()`; * `installed.packages()`; * `Sys.getlocale()`; * The `benchmarkme` version number; * Unique ID - used to extract results; * The current date. The function `Sys.info()` does include the user and nodenames. In the public release of the data, this information will be removed. If you don't wish to upload certain information, just set the corresponding argument, i.e. ```{r eval=FALSE} upload_results(res, args = list(sys_info = FALSE)) ``` --- Development of this package was supported by [Jumping Rivers](https://www.jumpingrivers.com) benchmarkme/inst/doc/a_introduction.html0000644000176200001440000005203114025712263020162 0ustar liggesusers Crowd sourced benchmarks

Crowd sourced benchmarks

Colin Gillespie

System benchmarking

R benchmarking made easy. The package contains a number of benchmarks, heavily based on the benchmarks at https://mac.R-project.org/benchmarks/R-benchmark-25.R, for assessing the speed of your system.

Overview

A straightforward way of speeding up your analysis is to buy a better computer. Modern desktops are relatively cheap, especially compared to user time. However, it isn’t clear whether upgrading your computer is worth the cost. The benchmarkme package provides a set of benchmarks to help quantify your system. More importantly, it allows you to compare your timings with other systems.

Installation

The package is on CRAN and can be installed in the usual way

install.packages("benchmarkme")

There are two groups of benchmarks:

  • benchmark_std(): this benchmarks numerical operations such as loops and matrix operations. The benchmark comprises three separate benchmarks: prog, matrix_fun, and matrix_cal.
  • benchmark_io(): this benchmarks reading and writing 5 MB and 50 MB csv files.

The benchmark_std() function

This benchmarks numerical operations such as loops and matrix operations. The benchmark comprises three separate benchmarks: prog, matrix_fun, and matrix_cal. If you have less than 3 GB of RAM (run get_ram() to find out how much is available on your system), then you should kill any memory-hungry applications, e.g. Firefox, and set runs = 1 as an argument.
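For example, you could choose the number of runs from the detected RAM; a minimal sketch (get_ram() returns the size in bytes, or NA if detection fails):

ram = get_ram()
runs = if (!is.na(ram) && ram < 3e9) 1 else 3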

To benchmark your system, use

library("benchmarkme")
## Increase runs if you have a higher spec machine
res = benchmark_std(runs = 3)

and upload your results

## You can control exactly what is uploaded. See details below.
upload_results(res)

You can compare your results to other users via

plot(res)
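The package also exports rank_results() for a text-based comparison against past uploads; a sketch, assuming the default arguments (see ?rank_results for the full signature):

rank_results(res)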

The benchmark_io() function

This function benchmarks reading and writing a 5 MB and a 50 MB csv file (if you have less than 4 GB of RAM, reduce the number of runs to 1). Run the benchmark using

res_io = benchmark_io(runs = 3)
upload_results(res_io)
plot(res_io)
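The individual bm_read() and bm_write() functions are also exported, and accept a size argument in MB, so you can time a single operation and file size directly:

res_w5 = bm_write(runs = 3, size = 5)   # write a 5 MB csv only
res_r5 = bm_read(runs = 3, size = 5)    # read a 5 MB csv only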

By default the files are written to a temporary directory, given by

tempdir()

which depends on the value of

Sys.getenv("TMPDIR")

You can alter this via the tmpdir argument. This is useful for comparing hard drive access to a network drive.

res_io = benchmark_io(tmpdir = "some_other_directory")
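For instance, to compare a local disk against a mounted network share (the path below is purely illustrative):

res_local = benchmark_io(runs = 1)
res_network = benchmark_io(runs = 1, tmpdir = "/mnt/network_share")  # illustrative path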

Parallel benchmarks

The benchmark functions above have a parallel option: simply specify the number of cores you want to test. For example, to test using four cores

res_io = benchmark_std(runs = 3, cores = 4)

The process for the parallel benchmarks of the pseudo function benchmark_x(cores = n) is:

  • initialise the parallel environment
  • start the timer
  • run job x on cores 1, 2, …, n simultaneously
  • when all jobs finish, stop the timer
  • shut down the parallel environment

This procedure is repeated runs times.
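A simplified sketch of this pattern, written with base R’s parallel package rather than the foreach/doParallel machinery the package actually uses internally:

library(parallel)
library(benchmarkme)
cl = makeCluster(4)                        # initialise the parallel environment
clusterEvalQ(cl, library(benchmarkme))
timing = system.time(                      # start the timer
  parLapply(cl, 1:4, function(i)           # run the job on each core simultaneously
    bm_matrix_cal_manip(runs = 1, verbose = FALSE))
)                                          # the timer stops once all jobs finish
stopCluster(cl)                            # stop the parallel environment
timing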

Previous versions of this package

This package was started around 2015. However, multiple changes to the byte compiler over the last few years have made it very difficult to compare against previous results, so we have had to start from scratch.

The previous data can be obtained via

data(past_results, package = "benchmarkmeData")

Machine specs

The package has a few useful functions for extracting system specs:

  • RAM: get_ram()
  • CPUs: get_cpu()
  • BLAS library: get_linear_algebra()
  • Is byte compiling enabled: get_byte_compiler()
  • General platform info: get_platform_info()
  • R version: get_r_version()

The above functions have been tested on a number of systems. If they don’t work on your system, please raise a GitHub issue.
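To gather the main specs in one go, a minimal sketch:

specs = list(
  ram  = get_ram(),
  cpu  = get_cpu(),
  blas = get_linear_algebra(),
  r    = get_r_version()
)
specs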

Uploaded data sets

A summary of the uploaded data sets is available in the benchmarkmeData package

data(past_results_v2, package = "benchmarkmeData")

A column of this data set contains the unique identifier returned by the upload_results() function.
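So, after uploading, you can locate your own entry; a sketch that assumes the identifier column is called id (check the data set’s documentation for the actual column name):

data(past_results_v2, package = "benchmarkmeData")
my_id = upload_results(res)           # returns the tracking id
subset(past_results_v2, id == my_id)  # "id" column name is an assumption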

What’s uploaded

Two objects are uploaded:

  1. Your benchmarks from benchmark_std() or benchmark_io();
  2. A summary of your system information (get_sys_details()).

The get_sys_details() returns:

  • Sys.info();
  • get_platform_info();
  • get_r_version();
  • get_ram();
  • get_cpu();
  • get_byte_compiler();
  • get_linear_algebra();
  • installed.packages();
  • Sys.getlocale();
  • The benchmarkme version number;
  • Unique ID - used to extract results;
  • The current date.
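You can inspect all of these fields locally before deciding what to share:

sys = get_sys_details()
names(sys)  # the fields listed above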

The function Sys.info() does include the user and nodename fields. In the public release of the data, this information will be removed. If you don’t wish to upload certain information, just set the corresponding argument, e.g.

upload_results(res, args = list(sys_info = FALSE))
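If you would rather inspect the full bundle before anything is sent, create_bundle() (also exported by the package) builds the same object locally:

bundle = create_bundle(res, filename = "my_results.rds")  # also saves a copy to disk
names(bundle)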

Development of this package was supported by Jumping Rivers