rtdists/0000755000175000017500000000000014166044002012075 5ustar nileshnileshrtdists/MD50000644000175000017500000000646014166044002012413 0ustar nileshnilesh7dc96f5beb6f57b2557d64777c399c39 *DESCRIPTION d2631a2ea24973ace16c2eca8b771e86 *NAMESPACE 857bbfacf71b281207022cfc968acdbf *NEWS 8d9ad04632dd453298936dc621cfb3a4 *R/RcppExports.R fb63a8ade7e8ad802a8e257ba1c125fe *R/deprecated.R 51ed40735acc3bec0ff831195ff655a5 *R/diffusion.R 2889715db017616e6aaca020c032bfd7 *R/lba.r 044bce802600a68a2b829a5658ee9bb8 *R/lba_race.R 0df89b63896000781197d10c72a1b7cb *R/rr98-data.R 07c76d8b5537a727b7e2fc84ceded2c6 *R/rtdists-package.R c2678cbe0d48edcc99317ab6ab87094f *R/single-lba.r 8039601cd23817fea84aae79e7d1fe38 *R/speed_acc-data.R 14d8392c1065b881963b721360c444c0 *build/vignette.rds c0eb131a61162e9697fe8b648889ae06 *data/rr98.rda d3e62a3016446466a0173ab5f4dc7306 *data/speed_acc.RData b14bfca3bf29d4640fcaff99df4c2820 *inst/doc/reanalysis_rr98.R 2c58dc26bfc04360a52343f187150330 *inst/doc/reanalysis_rr98.Rmd bbf3ddd59dbe5fa1325448807ae52af7 *inst/doc/reanalysis_rr98.html 7956fc005d1909a824d1d3ae9c433f20 *inst/extdata/AUTHORS acaa3ce07090051e8abc8d7d76ce2445 *inst/extdata/lba-math.R ab8f7a2e2289365ed46d55ecd4f84f9e *inst/extdata/n1CDF_diff_example.RData d57a0a8f54104188a3cdb10729d5158b *man/Diffusion.Rd b794670fd6c5ea88c6d4c50dac004b45 *man/LBA-race.Rd 3c5f70b993b505a7ba9730b4dbbf8ea5 *man/LBA.Rd 19326eacf0cd9f5dc1d40493f3adb2a0 *man/deprecated.Rd 8453a26e04182e4fd0ea2252f80a1c32 *man/rr98.Rd a21fe04971d97634c8b919634ffc44fc *man/rtdists-package.Rd 05b40cd2a515895e01786467452561e4 *man/single-LBA.Rd a31a353b44c561602789a3dbde039385 *man/speed_acc.Rd a27916870a1d2e7c59e7bd81808fa05c *src/CDF_no_variability.h c85c93f03c11db102268b10c41e20677 *src/CDF_st0_variability.h 859be7cde534e8c3e746dce4ec1ba5ad *src/CDF_sv_variability.h 13acf107aa585cc54de1bff204d4b778 *src/CDF_sz_variability.h 331f68c310109f85458ffd6fecb6eeb7 *src/Density.h 116b6835c22cd6e880d4755367186ca8 *src/Distribution.h 06692212588a05b77539b461567f3606 *src/FCalculator.h 773fcce38eacd409f9e033ace598661c *src/FController.h e3ac0a934d6079c194792a1ce83a40da *src/Parameters.h 78f189914e2a5a75b1590efb479302db *src/RFastDM.cpp 1b184814be685a2d40282728fd9494e7 *src/RFastDM.h e4bf354a6e140bbe19b264a9205c4b1a *src/RcppExports.cpp 11dcb0ab77f60f5172f24b2354cf606f *src/Sampling.h 1adaa6db43403efb8797894b9ad0182c *src/init.c 2d257a06624025d64e8241e1e0c5b1e3 *tests/testthat.R 311d561421a8298f9c6d284c9de4073d *tests/testthat/test-diffusion-bugs.R d07325fb9dbcd0ad79174ebc600ae75b *tests/testthat/test-diffusion-math.R e5bbf5cb66312b1c558519211cab76ba *tests/testthat/test-diffusion-rcpp.R 0cb5dedd3bd98dc9f2515de29aa8fef4 *tests/testthat/test-diffusion.R 7b19e310b76c939a388044b77b03d97d *tests/testthat/test-lba-bugs.R 8c8df03414b20d4bac89ccd1b824d80f *tests/testthat/test-lba-math.R 22a9ee05c8683ac9d3853bead32f53d4 *tests/testthat/test-lba_basics.R d47b0108a8f4ad4d07934f57f75561b6 *tests/testthat/test-lba_input.R 50fc9a2cb1c56fe4ac27b723619acaa6 *tests/testthat/test-lba_race-basic.R df3a1e591fcc2eed704b8e1cbfa0bfe2 *tests/testthat/test-lba_race.R 2f53a3e250bd4c9d013e612d5b064482 *tests/testthat/test-lba_race_input.R 0f3b701a4ddc13d00e3d85ae4618d019 *tests/testthat/test-pdiffusion_rng.R e1ff01181f1d96d8d12b6c20931f572c *tests/testthat/test-rrd.R 2c58dc26bfc04360a52343f187150330 *vignettes/reanalysis_rr98.Rmd 9370c52d870907bff88a12f6d06f25b1 *vignettes/rr98_full-diffusion_fits.rda 336eedacbe9fdad8e72f44fdd6a40b16 *vignettes/rr98_full-lba_fits.rda 
rtdists/DESCRIPTION0000644000175000017500000000346114166044002013607 0ustar nileshnileshPackage: rtdists
Type: Package
Title: Response Time Distributions
Depends: R (>= 3.0.0)
Suggests: testthat, glba, knitr, rmarkdown, dplyr, tidyr, purrr, lattice, latticeExtra, binom, RWiener
Imports: evd, msm, gsl, stats, Rcpp
LinkingTo: Rcpp
Description: Provides response time distributions (density/PDF, distribution function/CDF, quantile function, and random generation): (a) Ratcliff diffusion model (Ratcliff & McKoon, 2008, <doi:10.1162/neco.2008.12-06-420>) based on C code by Andreas and Jochen Voss and (b) linear ballistic accumulator (LBA; Brown & Heathcote, 2008, <doi:10.1016/j.cogpsych.2007.12.002>) with different distributions underlying the drift rate.
URL: https://github.com/rtdists/rtdists/
License: GPL (>= 3)
BugReports: https://github.com/rtdists/rtdists/issues
VignetteBuilder: knitr
Authors@R: c( person(given="Henrik", family="Singmann", email="singmann@gmail.com", role=c("aut", "cre"), comment=c(ORCID="0000-0002-4842-3657")), person(given="Scott", family="Brown", role=c("aut")), person(given="Matthew", family="Gretton", role=c("aut")), person(given="Andrew", family="Heathcote", role=c("aut")), person(given="Andreas", family="Voss", role=c("ctb")), person(given="Jochen", family="Voss", role=c("ctb")), person(given="Andrew", family="Terry", role=c("ctb")) )
LazyData: true
Version: 0.11-5
RoxygenNote: 6.1.1
NeedsCompilation: yes
Packaged: 2022-01-03 19:06:08 UTC; singm
Author: Henrik Singmann [aut, cre] (<https://orcid.org/0000-0002-4842-3657>), Scott Brown [aut], Matthew Gretton [aut], Andrew Heathcote [aut], Andreas Voss [ctb], Jochen Voss [ctb], Andrew Terry [ctb]
Maintainer: Henrik Singmann <singmann@gmail.com>
Repository: CRAN
Date/Publication: 2022-01-07 14:02:42 UTC
rtdists/man/0000755000175000017500000000000013667512040012657 5ustar nileshnileshrtdists/man/Diffusion.Rd0000644000175000017500000004205113667512040015076 0ustar nileshnilesh% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/diffusion.R
\name{Diffusion}
\alias{Diffusion}
\alias{recalc_t0}
\alias{diffusion}
\alias{ddiffusion}
\alias{pdiffusion}
\alias{qdiffusion}
\alias{rdiffusion}
\title{The Ratcliff Diffusion Model}
\usage{
recalc_t0(t0, st0)

ddiffusion(rt, response = "upper", a, v, t0, z = 0.5 * a, d = 0, sz = 0,
  sv = 0, st0 = 0, s = 1, precision = 3, stop_on_error = FALSE)

pdiffusion(rt, response = "upper", a, v, t0, z = 0.5 * a, d = 0, sz = 0,
  sv = 0, st0 = 0, s = 1, precision = 3, maxt = 20, stop_on_error = FALSE,
  use_precise = TRUE)

qdiffusion(p, response = "upper", a, v, t0, z = 0.5 * a, d = 0, sz = 0,
  sv = 0, st0 = 0, s = 1, precision = 3, maxt = 20, interval = c(0, 10),
  scale_p = FALSE, scale_max = Inf, stop_on_error = FALSE, max_diff = 1e-04)

rdiffusion(n, a, v, t0, z = 0.5 * a, d = 0, sz = 0, sv = 0, st0 = 0,
  s = 1, precision = 3, stop_on_error = TRUE, maxt = 20,
  interval = c(0, 10), method = c("fastdm", "qdiffusion"))
}
\arguments{
\item{t0}{non-decision time or response time constant (in seconds). Lower bound for the duration of all non-decisional processes (encoding and response execution). Typical range: 0.1 < \code{t0} < 0.5}

\item{st0}{inter-trial-variability of non-decisional components. Range of a uniform distribution with mean \code{t0 + st0/2} describing the distribution of actual \code{t0} values across trials. Accounts for response times below \code{t0}. Reduces skew of predicted RT distributions. Values different from 0 can slow computation considerably. Typical range: 0 < \code{st0} < 0.2. Default is 0.}

\item{rt}{a vector of RTs.
Or for convenience also a \code{data.frame} with columns \code{rt} and \code{response} (such as returned from \code{rdiffusion} or \code{\link{rLBA}}). See examples.}

\item{response}{character vector. Which response boundary should be tested? Possible values are \code{c("upper", "lower")}, possibly abbreviated, with \code{"upper"} being the default. Alternatively, a numeric vector with values 1=lower and 2=upper. For convenience, \code{response} is converted via \code{as.numeric}, also allowing factors (see examples). Ignored if the first argument is a \code{data.frame}.}

\item{a}{threshold separation. Amount of information that is considered for a decision. Large values indicate a conservative decisional style. Typical range: 0.5 < \code{a} < 2}

\item{v}{drift rate. Average slope of the information accumulation process. The drift gives information about the speed and direction of the accumulation of information. Large (absolute) values of drift indicate a good performance. If received information supports the response linked to the upper threshold the sign will be positive and vice versa. Typical range: -5 < \code{v} < 5}

\item{z}{starting point. Indicator of an a priori bias in decision making. When the relative starting point \code{z} deviates from \code{0.5*a}, the amount of information necessary for a decision differs between response alternatives. Default is \code{0.5*a} (i.e., no bias).}

\item{d}{differences in speed of response execution (in seconds). Positive values indicate that response execution is faster for responses linked to the upper threshold than for responses linked to the lower threshold. Typical range: -0.1 < \code{d} < 0.1. Default is 0.}

\item{sz}{inter-trial-variability of starting point. Range of a uniform distribution with mean \code{z} describing the distribution of actual starting points from specific trials. Values different from 0 can predict fast errors (but can slow computation considerably). Typical range: 0 < \code{sz} < 0.5. Default is 0.}

\item{sv}{inter-trial-variability of drift rate. Standard deviation of a normal distribution with mean \code{v} describing the distribution of actual drift rates from specific trials. Values different from 0 can predict slow errors. Typical range: 0 < \code{sv} < 2. Default is 0.}

\item{s}{diffusion constant; standard deviation of the random noise of the diffusion process (i.e., within-trial variability), scales \code{a}, \code{v}, and \code{sv}. Needs to be fixed to a constant in most applications. Default is 1. Note that the default used by Ratcliff and in other applications is often 0.1.}

\item{precision}{\code{numerical} scalar value. Precision of calculation. Corresponds roughly to the number of decimals of the predicted CDFs that are calculated accurately. Default is 3.}

\item{stop_on_error}{Should the diffusion functions return 0 if the parameter values are outside the allowed range (= \code{FALSE}) or produce an error in this case (= \code{TRUE})?}

\item{maxt}{maximum \code{rt} allowed, used to stop integration problems. Larger values lead to considerably longer calculation times.}

\item{use_precise}{boolean. Should \code{pdiffusion} use the precise version for calculating the CDF? The default is \code{TRUE}, which is highly recommended. Using \code{FALSE} (i.e., the imprecise version) is hardly any faster and produces clearly wrong results for most parameter settings.}

\item{p}{vector of probabilities. Or for convenience also a \code{data.frame} with columns \code{p} and \code{response}.
See examples.}

\item{interval}{a vector containing the end-points of the interval to be searched for the desired quantiles (i.e., RTs) in \code{qdiffusion}. Default is \code{c(0, 10)}.}

\item{scale_p}{logical. Should entered probabilities automatically be scaled by the maximally predicted probability? Default is \code{FALSE}. Convenience argument for obtaining predicted quantiles. Can be slow as the maximally predicted probability is calculated individually for each \code{p}.}

\item{scale_max}{numerical scalar. Value at which the maximally predicted RT should be calculated if \code{scale_p} is \code{TRUE}.}

\item{max_diff}{numeric. Maximum acceptable difference between desired and observed probability of the quantile function (\code{qdiffusion}).}

\item{n}{desired number of observations.}

\item{method}{character. Experimental implementation of an alternative way of generating random variates via the quantile function (\code{qdiffusion}) and a random uniform value. For simple calls, the default method \code{"fastdm"} is dramatically faster.}
}
\value{
\code{ddiffusion} gives the density, \code{pdiffusion} gives the distribution function, \code{qdiffusion} gives the quantile function (i.e., predicted RTs), and \code{rdiffusion} generates random response times and decisions (returning a \code{data.frame} with columns \code{rt} (numeric) and \code{response} (factor)).

The length of the result is determined by \code{n} for \code{rdiffusion}, equal to the length of \code{rt} for \code{ddiffusion} and \code{pdiffusion}, and equal to the length of \code{p} for \code{qdiffusion}.

The distribution parameters (as well as \code{response}) are recycled to the length of the result. In other words, the functions are completely vectorized for all parameters and even the response boundary.
}
\description{
Density, distribution function, quantile function, and random generation for the Ratcliff diffusion model with the following parameters: \code{a} (threshold separation), \code{z} (starting point), \code{v} (drift rate), \code{t0} (non-decision time/response time constant), \code{d} (differences in speed of response execution), \code{sv} (inter-trial-variability of drift), \code{st0} (inter-trial-variability of non-decisional components), \code{sz} (inter-trial-variability of relative starting point), and \code{s} (diffusion constant). \strong{Note that the parameterization or defaults of non-decision time variability \code{st0} and diffusion constant \code{s} differ from what is often found in the literature and that the parameterization of \code{z} and \code{sz} have changed compared to previous versions (now absolute and not relative).}
}
\details{
The Ratcliff diffusion model (Ratcliff, 1978) is a mathematical model for two-choice discrimination tasks. It is based on the assumption that information is accumulated continuously until one of two decision thresholds is hit. For introductions see Ratcliff and McKoon (2008), Voss, Rothermund, and Voss (2004), Voss, Nagler, and Lerche (2013), or Wagenmakers (2009).

All functions are fully vectorized across all parameters as well as the response to match the length of \code{rt} (i.e., the output is always of length equal to \code{rt}). This allows for trialwise parameters for each model parameter.
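For illustration, a minimal sketch of such trialwise parameters (all values chosen arbitrarily; every argument, including \code{response}, may vary per RT):

\preformatted{ddiffusion(c(0.8, 1.0), response = c("upper", "lower"),
           a = c(1, 1.2), v = c(2, 1.5), t0 = c(0.3, 0.4))
}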
For convenience, all functions (with the exception of \code{rdiffusion}) allow that the first argument is a \code{data.frame} containing the information of the first and second argument in two columns (i.e., \code{rt}/\code{p} and \code{response}). Other columns (as well as a separately passed \code{response} argument) will be ignored. This allows, for example, to pass the \code{data.frame} generated by \code{rdiffusion} directly to \code{pdiffusion}. See examples.

\subsection{Quantile Function}{
Due to the bivariate nature of the diffusion model, the diffusion processes reaching each response boundary only return the defective CDF that does not reach 1. Only the sum of the CDF for both boundaries reaches 1. Therefore, \code{qdiffusion} can only return quantiles/RTs for any accumulator up to the maximal probability of that accumulator's CDF. This can be obtained by evaluating the CDF at \code{Inf}.

As a convenience for the user, if \code{scale_p = TRUE} in the call to \code{qdiffusion} the desired probabilities are automatically scaled by the maximal probability for the corresponding response. Note that this can be slow as the maximal probability is calculated separately for each desired probability. See examples.

Also note that quantiles (i.e., predicted RTs) are obtained by numerically minimizing the absolute difference between the desired probability and the value returned from \code{pdiffusion} using \code{\link{optimize}}. If the difference between the desired probability and the probability corresponding to the returned quantile is above a certain threshold (currently 0.0001) no quantile is returned but \code{NA}. This can be either because the desired quantile is above the maximal probability for this accumulator or because the limits for the numerical integration are too small (default is \code{c(0, 10)}).
}
}
\note{
The parameterization of the non-decisional components, \code{t0} and \code{st0}, differs from the parameterization used by, for example, Andreas Voss or Roger Ratcliff. In the present case \code{t0} is the lower bound of the uniform distribution of length \code{st0}, but \emph{not} its midpoint. The parameterization employed here is in line with the parameterization for the \link{LBA} code (where \code{t0} is also the lower bound).

The default diffusion constant \code{s} is 1 and not 0.1 as in most applications of Roger Ratcliff and others.

We have changed the parameterization of the start point \code{z}, which is now the absolute start point in line with most published literature (it was the relative start point in previous versions of \pkg{rtdists}).
}
\examples{
## identical calls (but different random values)
rt1 <- rdiffusion(500, a=1, v=2, t0=0.5)
head(rt1)
rt2 <- rdiffusion(500, a=1, v=2, t0=0.5, d=0, sz=0, sv=0, st0=0)
head(rt2)

# get density for random RTs (possible to specify arguments for pdiffusion in same way):
sum(log(ddiffusion(rt1$rt, rt1$response, a=1, v=2, t0=0.5))) # response is factor
sum(log(ddiffusion(rt1$rt, as.numeric(rt1$response), a=1, v=2, t0=0.5))) # response is numeric
sum(log(ddiffusion(rt1$rt, as.character(rt1$response), a=1, v=2, t0=0.5))) # response is character
sum(log(ddiffusion(rt1, a=1, v=2, t0=0.5))) # response is data.frame

sum(log(ddiffusion(rt2$rt, rt2$response, a=1, v=2, t0=0.5)))
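# as noted in the Details, each boundary alone only yields a defective CDF;
# a quick check (sketch, same parameters as above) that both boundaries
# together sum to 1 at Inf:
pdiffusion(Inf, a=1, v=2, t0=0.5, response="upper") +
  pdiffusion(Inf, a=1, v=2, t0=0.5, response="lower") # ~1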
# can we recover the parameters?
ll_diffusion <- function(pars, rt, response)
{
  densities <- ddiffusion(rt, response=response,
                          a=pars[1], v=pars[2], t0=pars[3],
                          sz=pars[4], st0=pars[5], sv=pars[6])
  if (any(densities == 0)) return(1e6)
  return(-sum(log(densities)))
}

\dontrun{
start <- c(runif(2, 0.5, 3), 0.1, runif(3, 0, 0.5))
names(start) <- c("a", "v", "t0", "sz", "st0", "sv")
recov <- nlminb(start, ll_diffusion, lower = 0, rt=rt1$rt, response=rt1$response)
round(recov$par, 3)
#     a     v    t0    sz   st0    sv
# 1.019 1.879 0.496 0.000 0.000 0.389
## results of course depend on random seed for rdiffusion and runif
}

\dontrun{
## replicate Table 1 from Wagenmakers et al. (2007) using rdiffusion:
n <- 1e5 # number of samples
# take parameter values from Table 2 and set s to 0.1
george <- rdiffusion(n, a = 0.12, v = 0.25, t0 = 0.3, s = 0.1)
rich <- rdiffusion(n, a = 0.12, v = 0.25, t0 = 0.25, s = 0.1)
amy <- rdiffusion(n, a = 0.08, v = 0.25, t0 = 0.3, s = 0.1)
mark <- rdiffusion(n, a = 0.08, v = 0.25, t0 = 0.25, s = 0.1)
george$id <- "george"
rich$id <- "rich"
amy$id <- "amy"
mark$id <- "mark"
wag <- rbind(george, rich, amy, mark)
wag$id <- factor(wag$id, levels = c("george", "rich", "amy", "mark"))
opt <- options()
options(digits = 3)
aggregate(cbind(rt, as.numeric(response)-1) ~ id, wag, mean)
#       id    rt    V2
# 1 george 0.517 0.952
# 2   rich 0.467 0.953
# 3    amy 0.422 0.881
# 4   mark 0.372 0.882
options(digits = 1)
aggregate(rt ~ id, wag, var)
#       id    rt
# 1 george 0.024
# 2   rich 0.024
# 3    amy 0.009
# 4   mark 0.009
options(opt)
}

## plot density:
curve(ddiffusion(x, a=1, v=2, t0=0.5, response = "upper"),
      xlim=c(0,3), main="Density of upper responses",
      ylab="density", xlab="response time")
curve(ddiffusion(x, a=1, v=2, t0=0.5, st0=0.2, response = "upper"),
      add=TRUE, lty = 2)
legend("topright", legend=c("no", "yes"),
       title = "Non-decision Time Variability?", lty = 1:2)

# plot cdf:
curve(pdiffusion(x, a=1, v=2, t0=0.5, st0=0.2, response="u"),
      xlim = c(0, 3), ylim = c(0,1),
      ylab = "cumulative probability", xlab = "response time",
      main = "CDF of diffusion model with non-decision time variability")
curve(pdiffusion(x, a=1, v=2, t0=0.5, st0=0.2, response="l"),
      add=TRUE, lty = 2)
legend("topleft", legend=c("upper", "lower"),
       title="response boundary", lty=1:2)

\dontrun{
### qdiffusion can only return values up to maximal predicted probability:
(max_p <- pdiffusion(Inf, a=1, v=2, t0=0.5, st0=0.2, sz = 0.1, sv = 0.5, response="u"))
# [1] 0.87
# (Note: with the current integration routine for pdiffusion, use Inf and not smaller values.)
qdiffusion(0.87, a=1, v=2, t0=0.5, st0=0.2, sz = 0.1, sv = 0.5, response="u")
# [1] 1.945802
qdiffusion(0.88, a=1, v=2, t0=0.5, st0=0.2, sz = 0.1, sv = 0.5, response="u")
# NA with warning.
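# round-trip sketch (same parameters): pdiffusion evaluated at a quantile
# returned by qdiffusion should recover the probability up to the max_diff
# tolerance of 1e-4:
q87 <- qdiffusion(0.87, a=1, v=2, t0=0.5, st0=0.2, sz = 0.1, sv = 0.5, response="u")
pdiffusion(q87, a=1, v=2, t0=0.5, st0=0.2, sz = 0.1, sv = 0.5, response="u") # ~0.87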
# to get predicted quantiles, scale required quantiles by maximally predicted response rate:
qs <- c(.1, .3, .5, .7, .9)
qdiffusion(qs*max_p, a=1, v=2, t0=0.5, st0=0.2, sz = 0.1, sv = 0.5, response="u")

# or set scale_p to TRUE which scales automatically by maximum p
# (but can be slow as it calculates max_p for each probability separately)
qdiffusion(qs, a=1, v=2, t0=0.5, st0=0.2, sz = 0.1, sv = 0.5, response="u", scale_p = TRUE)

# qdiffusion also accepts a data.frame as first argument:
t3 <- data.frame(p = rep(c(0.05, 0.1, 0.87), 2),
                 response = rep(c("upper", "lower"), each = 3))
#      p response
# 1 0.05    upper
# 2 0.10    upper
# 3 0.87    upper
# 4 0.05    lower
# 5 0.10    lower
# 6 0.87    lower
qdiffusion(t3, a=1, v=2, t0=0.5, st0=0.2, sz = 0.1, sv = 0.5, scale_p = TRUE)
}

## LBA and diffusion can be used interchangeably:
rt1 <- rLBA(500, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2))
rt2 <- rdiffusion(500, a=1, v=2, t0=0.5)

# data can also be passed as data.frame (same is true for pLBA):
sum(log(dLBA(rt1, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2))))
sum(log(dLBA(rt2, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2))))
sum(log(ddiffusion(rt1, a=1, v=2, t0=0.5)))
sum(log(ddiffusion(rt2, a=1, v=2, t0=0.5)))
}
\references{
Ratcliff, R. (1978). A theory of memory retrieval. \emph{Psychological Review}, 85(2), 59-108.

Ratcliff, R., & McKoon, G. (2008). The diffusion decision model: Theory and data for two-choice decision tasks. \emph{Neural Computation}, 20(4), 873-922.

Voss, A., Rothermund, K., & Voss, J. (2004). Interpreting the parameters of the diffusion model: An empirical validation. \emph{Memory & Cognition}, 32(7), 1206-1220.

Voss, A., Nagler, M., & Lerche, V. (2013). Diffusion Models in Experimental Psychology: A Practical Introduction. \emph{Experimental Psychology}, 60(6), 385-402. doi:10.1027/1618-3169/a000218

Wagenmakers, E.-J., van der Maas, H. L. J., & Grasman, R. P. P. P. (2007). An EZ-diffusion model for response time and accuracy. \emph{Psychonomic Bulletin & Review}, 14(1), 3-22.

Wagenmakers, E.-J. (2009). Methodological and empirical developments for the Ratcliff diffusion model of response times and accuracy. \emph{European Journal of Cognitive Psychology}, 21(5), 641-671.
}
\author{
Underlying C code by Jochen Voss and Andreas Voss. Porting and R wrapping by Matthew Gretton, Andrew Heathcote, Scott Brown, and Henrik Singmann. \code{qdiffusion} by Henrik Singmann.
}
rtdists/man/LBA-race.Rd0000644000175000017500000001516613667512040014465 0ustar nileshnilesh% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lba_race.R
\name{LBA-race}
\alias{LBA-race}
\alias{n1PDF}
\alias{n1CDF}
\title{LBA race functions: Likelihood for first accumulator to win.}
\usage{
n1PDF(rt, A, b, t0, ..., st0 = 0, distribution = c("norm", "gamma",
  "frechet", "lnorm"), args.dist = list(), silent = FALSE)

n1CDF(rt, A, b, t0, ..., st0 = 0, distribution = c("norm", "gamma",
  "frechet", "lnorm"), args.dist = list(), silent = FALSE)
}
\arguments{
\item{rt}{a vector of RTs.}

\item{A, b, t0}{LBA parameters, see \code{\link{LBA}}. Can either be a single numeric vector (which will be recycled to reach \code{length(rt)} for trialwise parameters) \emph{or} a \code{list} of such vectors in which each list element corresponds to the parameters for this accumulator (i.e., the list needs to be of the same length as there are accumulators).
Each list will also be recycled to reach \code{length(rt)} for trialwise parameters per accumulator.}

\item{...}{two \emph{named} drift rate parameters depending on \code{distribution} (e.g., \code{mean_v} and \code{sd_v} for \code{distribution=="norm"}). The parameters can either be given as a numeric vector or a list. If a numeric vector is passed each element of the vector corresponds to one accumulator. If a list is passed each list element corresponds to one accumulator, allowing again trialwise driftrates. The shorter parameter will be recycled as necessary (and also the elements of the list to match the length of \code{rt}). See examples.}

\item{st0}{parameter specifying the variability of \code{t0} (which varies uniformly from \code{t0} to \code{t0} + \code{st0}). Can be trialwise, and will be recycled to length of \code{rt}.}

\item{distribution}{character specifying the distribution of the drift rate. Possible values are \code{c("norm", "gamma", "frechet", "lnorm")}, default is \code{"norm"}.}

\item{args.dist}{list of optional further arguments to the distribution functions (i.e., \code{posdrift} or \code{robust} for \code{distribution=="norm"}).}

\item{silent}{logical. Should the number of accumulators used be suppressed? Default is \code{FALSE} which prints the number of accumulators.}
}
\description{
n1PDF and n1CDF take RTs, the distribution functions of the \link{LBA}, and corresponding parameter values, put them through the race equations, and return the likelihood for the first accumulator winning (hence n1) in a set of accumulators.
}
\details{
For a set of \eqn{N} independent accumulators \eqn{i = 1...N}, the race likelihood for a given accumulator \eqn{i} is given by
\deqn{L(\mbox{unit }i \mbox{ wins}) = f_i(t) \times \prod_{j \neq i} [ S_j(t) ]}{L(unit i wins) = f_i(t) * prod_{j != i} [ S_j(t) ]}
where \eqn{f(t)} is the PDF (\code{dlba_...}) and \eqn{S_j(t) = 1 - F_j(t)} is the survivor function, that is the complement of the CDF \eqn{F(t)} (\code{plba_...}) at time \eqn{t}.

In other words, this is just the PDF/CDF for the winning accumulator at time \eqn{t} times the probability that no other accumulators have finished at time \eqn{t}.
}
\examples{
## check random generated values against race functions:

## 1. Without st0:
r_lba <- rLBA(1e4, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=0.2)
x <- seq(0.5, 4, length.out = 100) # for plotting
# PDF
y <- n1PDF(x, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0), sd_v=0.2) # PDF
hist(r_lba$rt[r_lba$response==1],probability = TRUE, breaks = "FD")
lines(x=x,y=y/mean(r_lba$response == 1))
# CDF
plot(ecdf(r_lba$rt[r_lba$response==1]))
y <- n1CDF(x, A=0.5, b=1, t0 = 0.5, st0 = 0, mean_v=c(1.2, 1.0), sd_v=0.2)
lines(x=x,y=y/mean(r_lba$response == 1), col = "red", lwd = 4.5, lty = 2)
# KS test
\dontrun{
normalised_n1CDF = function(rt,...) n1CDF(rt,...)/n1CDF(rt=Inf,...)
ks.test(r_lba$rt[r_lba$response==1], normalised_n1CDF, A=0.5, b=1, t0 = 0.5,
        mean_v=c(1.2, 1.0), sd_v=0.2)
}
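# response probabilities (a sketch using the same parameters as above):
# n1CDF evaluated at Inf gives the probability that accumulator 1 wins;
# compare it with the simulated proportion:
n1CDF(Inf, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0), sd_v=0.2)
mean(r_lba$response == 1) # should be close to the value above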
\dontrun{
## Other examples (don't run to save time):

## 2. With st0 = 0.2:
r_lba <- rLBA(1e4, A=0.5, b=1, t0 = 0.5, st0 = 0.2, mean_v=c(1.2, 1), sd_v=0.2)
x <- seq(0.5, 4, length.out = 100) # for plotting
# PDF
y <- n1PDF(x, A=0.5, b=1, t0 = 0.5, st0 = 0.2, mean_v=c(1.2, 1.0), sd_v=0.2) # PDF
hist(r_lba$rt[r_lba$response==1],probability = TRUE, breaks = "FD")
lines(x=x,y=y/mean(r_lba$response == 1))
# CDF
plot(ecdf(r_lba$rt[r_lba$response==1]))
y <- n1CDF(x, A=0.5, b=1, t0 = 0.5, st0 = 0.2, mean_v=c(1.2, 1.0), sd_v=0.2)
lines(x=x,y=y/mean(r_lba$response == 1), col = "red", lwd = 4.5, lty = 2)
# KS test
normalised_n1CDF = function(rt,...) n1CDF(rt,...)/n1CDF(rt=Inf,...)
ks.test(r_lba$rt[r_lba$response==1], normalised_n1CDF, A=0.5, b=1, t0 = 0.5,
        st0 = 0.2, mean_v=c(1.2, 1.0), sd_v=0.2)

xx <- rLBA(10, A=0.5, b=1, t0 = 0.5, mean_v=1.2, sd_v=0.2)

# default uses normal distribution for drift rate:
n1PDF(xx$rt, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0), sd_v=0.2)

# other distributions:
n1PDF(xx$rt, A=0.5, b=1, t0 = 0.5, shape_v=c(1.2, 1), scale_v=c(0.2,0.3), distribution = "gamma")
n1PDF(xx$rt, A=0.5, b=1, t0 = 0.5, shape_v=c(1.2, 1), scale_v=c(0.2,0.3), distribution = "frechet")
n1PDF(xx$rt, A=0.5, b=1, t0 = 0.5, meanlog_v = c(0.5, 0.8), sdlog_v = 0.5, distribution = "lnorm")

# add st0:
n1PDF(xx$rt, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0), sd_v=0.2, st0 = 0.4)

# use different A parameters for each RT:
n1PDF(xx$rt, A=runif(10, 0.4, 0.6), b=1, t0 = 0.5, mean_v=c(1.2, 1.0), sd_v=0.2)

# use different A parameters for each RT and each accumulator:
n1PDF(xx$rt, A=list(runif(10, 0.4, 0.6), runif(10, 0.2, 0.4)), b=1, t0 = 0.5,
      mean_v=c(1.2, 1.0), sd_v=0.2)

### vectorize drift rates:

# vector versus list:
v1 <- n1PDF(xx$rt, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0), sd_v=0.2)
v2 <- n1PDF(xx$rt, A=0.5, b=1, t0 = 0.5, mean_v=list(1.2, 1.0), sd_v=0.2)
identical(v1, v2) # TRUE

# drift rate per trial:
n1PDF(xx$rt, A=0.5, b=1, t0 = 0.5, mean_v=list(rnorm(10, 1.2), rnorm(10, 1)), sd_v=0.2)

# combine list with vector:
n1PDF(xx$rt, A=0.5, b=1, t0 = 0.5, mean_v=list(rnorm(10, 1.2), rnorm(10, 1)), sd_v=c(0.2, 0.1))

# t0 per trial and accumulator:
n1PDF(xx$rt, A=0.5, b=1, t0 = c(0.5), mean_v=c(1.2, 1.0), sd_v=0.2)
n1PDF(xx$rt, A=0.5, b=1, t0 = c(0.5, 0.6), mean_v=c(1.2, 1.0), sd_v=0.2) # per trial only
n1PDF(xx$rt, A=0.5, b=1, t0 = list(0.5, 0.6), mean_v=c(1.2, 1.0), sd_v=0.2) # per drift rate only
n1PDF(xx$rt, A=0.5, b=1, t0 = list(c(0.4, 0.5), c(0.5, 0.6)), mean_v=c(1.2, 1.0), sd_v=0.2)
}
}
\seealso{
For more user-friendly functions that return the PDF or CDF for the corresponding (and not first) accumulator winning see \code{\link{LBA}}.
}
rtdists/man/single-LBA.Rd0000644000175000017500000001473113667512040015031 0ustar nileshnilesh% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/single-lba.r
\name{single-LBA}
\alias{single-LBA}
\alias{dlba_norm}
\alias{plba_norm}
\alias{rlba_norm}
\alias{dlba_gamma}
\alias{plba_gamma}
\alias{rlba_gamma}
\alias{dlba_frechet}
\alias{plba_frechet}
\alias{rlba_frechet}
\alias{dlba_lnorm}
\alias{plba_lnorm}
\alias{rlba_lnorm}
\title{Single accumulator of linear ballistic accumulator (LBA)}
\usage{
dlba_norm(rt, A, b, t0, mean_v, sd_v, posdrift = TRUE, robust = FALSE)

plba_norm(rt, A, b, t0, mean_v, sd_v, posdrift = TRUE, robust = FALSE)

rlba_norm(n, A, b, t0, mean_v, sd_v, st0 = 0, posdrift = TRUE)

dlba_gamma(rt, A, b, t0, shape_v, rate_v, scale_v)

plba_gamma(rt, A, b, t0, shape_v, rate_v, scale_v)

rlba_gamma(n, A, b, t0, shape_v, rate_v, scale_v, st0 = 0)

dlba_frechet(rt, A, b, t0, shape_v, scale_v)

plba_frechet(rt, A, b, t0, shape_v, scale_v)

rlba_frechet(n, A, b, t0, shape_v, scale_v, st0 = 0)

dlba_lnorm(rt, A, b, t0, meanlog_v, sdlog_v, robust = FALSE)

plba_lnorm(rt, A, b, t0, meanlog_v, sdlog_v, robust = FALSE)

rlba_lnorm(n, A, b, t0, meanlog_v, sdlog_v, st0 = 0)
}
\arguments{
\item{rt}{a vector of RTs.}

\item{A}{start point interval or evidence in accumulator before beginning of decision process. Start point varies from trial to trial in the interval [0, \code{A}] (uniform distribution). Average amount of evidence before evidence accumulation across trials is \code{A}/2.}

\item{b}{response threshold. (\code{b} - \code{A}/2) is a measure of "response caution".}

\item{t0}{non-decision time or response time constant (in seconds). Lower bound for the duration of all non-decisional processes (encoding and response execution).}

\item{mean_v, sd_v}{mean and standard deviation of normal distribution for drift rate (\code{norm}). See \code{\link{Normal}}}

\item{posdrift}{logical. Should driftrates be forced to be positive? Default is \code{TRUE}. (Uses truncated normal for random generation).}

\item{robust}{logical. Should robust normal distributions be used for \code{norm} and \code{lnorm}? Can be helpful in rare cases but is approximately three times slower than the non-robust versions. Default is \code{FALSE}.}

\item{n}{desired number of observations (scalar integer).}

\item{st0}{variability of non-decision time, such that \code{t0} is uniformly distributed between \code{t0} and \code{t0} + \code{st0}. Only available in random number generation functions \code{rlba_}.}

\item{shape_v, rate_v, scale_v}{shape, rate, and scale of gamma (\code{gamma}) and scale and shape of Frechet (\code{frechet}) distributions for drift rate. See \code{\link{GammaDist}} or \code{\link[evd]{frechet}}. For Gamma, scale = 1/rate and rate = 1/scale.}

\item{meanlog_v, sdlog_v}{mean and standard deviation of lognormal distribution on the log scale for drift rate (\code{lnorm}). See \code{\link{Lognormal}}.}
}
\value{
All functions starting with a \code{d} return the density (PDF), all functions starting with \code{p} return the distribution function (CDF), and all functions starting with \code{r} return random response times and responses (in a \code{matrix}).
}
\description{
Density, distribution function, and random generation for a single accumulator of the LBA model with the following parameters: \code{A} (upper value of starting point), \code{b} (response threshold), \code{t0} (non-decision time), and driftrate (\code{v}).
All functions are available with different distributions underlying the drift rate: Normal (\code{norm}), Gamma (\code{gamma}), Frechet (\code{frechet}), and log normal (\code{lnorm}).
}
\details{
These functions are mainly for internal purposes. We do not recommend using them. Use the high-level functions described in \code{\link{LBA}} instead.
}
\note{
Density (i.e., \code{dlba_}), distribution (i.e., \code{plba_}), and random generation (i.e., \code{rlba_}) functions are vectorized for all parameters (i.e., in case parameters are not of the same length as \code{rt}, parameters are recycled). Furthermore, the random generation functions also accept a matrix of length \code{n} in which each column corresponds to an accumulator-specific value (see \code{\link{rLBA}} for a more user-friendly way).
}
\examples{
## random number generation using different distributions for v:
rlba_norm(10, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3))
rlba_gamma(10, A=0.5, b=1, t0 = 0.5, shape_v=c(1.2, 1), scale_v=c(0.2,0.3))
rlba_frechet(10, A=0.5, b=1, t0 = 0.5, shape_v=c(1.2, 1), scale_v=c(0.2,0.3))
rlba_lnorm(10, A=0.5, b=1, t0 = 0.5, meanlog_v=c(1.2, 1), sdlog_v=c(0.2, 0.3))

# use somewhat plausible values for plotting:
A <- 0.2
b <- 0.5
t0 <- 0.3

# plot density:
curve(dlba_norm(x, A=A, b=b, t0=t0, mean_v = 1.0, sd_v = 0.5),
      ylim = c(0, 4), xlim=c(0,3),
      main="Density/PDF of LBA versions", ylab="density", xlab="response time")
curve(dlba_gamma(x, A=A, b=b, t0=t0, shape_v=1, scale_v=1), add=TRUE, lty = 2)
curve(dlba_frechet(x, A=A, b=b, t0=t0, shape_v=1, scale_v=1.0), add=TRUE, lty = 3)
curve(dlba_lnorm(x, A=A, b=b, t0=t0, meanlog_v = 0.5, sdlog_v = 0.5), add=TRUE, lty = 4)
legend("topright", legend=c("Normal", "Gamma", "Frechet", "Log-Normal"),
       title = expression("Distribution of"~~italic(v)), lty = 1:4)

# plot cdf:
curve(plba_norm(x, A=A, b=b, t0=t0, mean_v=1.0, sd_v=1.0),
      xlim = c(0, 3), ylim = c(0,1),
      ylab = "cumulative probability", xlab = "response time",
      main = "Distribution/CDF of LBA versions")
curve(plba_gamma(x, A=A, b=b, t0=t0, shape_v=1, scale_v=1), add=TRUE, lty = 2)
curve(plba_frechet(x, A=A, b=b, t0=t0, shape_v=1, scale_v=1), add=TRUE, lty = 3)
curve(plba_lnorm(x, A=A, b=b, t0=t0, meanlog_v=0.5, sdlog_v = 0.5), add=TRUE, lty = 4)
legend("bottomright", legend=c("Normal", "Gamma", "Frechet", "Log-Normal"),
       title = expression("Distribution of"~~italic(v)), lty = 1:4)
}
\references{
Brown, S. D., & Heathcote, A. (2008). The simplest complete model of choice response time: Linear ballistic accumulation. \emph{Cognitive Psychology}, 57(3), 153-178. doi:10.1016/j.cogpsych.2007.12.002

Donkin, C., Averell, L., Brown, S., & Heathcote, A. (2009). Getting more from accuracy and response time data: Methods for fitting the linear ballistic accumulator. \emph{Behavior Research Methods}, 41(4), 1095-1110. doi:10.3758/BRM.41.4.1095

Heathcote, A., & Love, J. (2012). Linear deterministic accumulator models of simple choice. \emph{Frontiers in Psychology}, 3, 292.
doi:10.3389/fpsyg.2012.00292
}
rtdists/man/LBA.Rd0000644000175000017500000003671613667512040013551 0ustar nileshnilesh% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lba.r
\name{LBA}
\alias{LBA}
\alias{dLBA}
\alias{pLBA}
\alias{qLBA}
\alias{rLBA}
\title{The Linear Ballistic Accumulator (LBA)}
\usage{
dLBA(rt, response, A, b, t0, ..., st0 = 0, distribution = c("norm",
  "gamma", "frechet", "lnorm"), args.dist = list(), silent = FALSE)

pLBA(rt, response, A, b, t0, ..., st0 = 0, distribution = c("norm",
  "gamma", "frechet", "lnorm"), args.dist = list(), silent = FALSE)

qLBA(p, response, A, b, t0, ..., st0 = 0, distribution = c("norm",
  "gamma", "frechet", "lnorm"), args.dist = list(), silent = FALSE,
  interval = c(0, 10), scale_p = FALSE, scale_max = Inf)

rLBA(n, A, b, t0, ..., st0 = 0, distribution = c("norm", "gamma",
  "frechet", "lnorm"), args.dist = list(), silent = FALSE)
}
\arguments{
\item{rt}{vector of RTs. Or for convenience also a \code{data.frame} with columns \code{rt} and \code{response} (such as returned from \code{rLBA} or \code{\link{rdiffusion}}). See examples.}

\item{response}{integer vector of winning accumulators/responses corresponding to the vector of RTs/p (i.e., used for specifying the response for a given RT/probability). Will be recycled if necessary. Cannot contain values larger than the number of accumulators. First response/accumulator must receive value 1, second 2, and so forth. For convenience, \code{response} is converted via \code{as.numeric}, thereby allowing factors to be passed as well (such as returned from \code{\link{rdiffusion}}). Ignored if \code{rt} or \code{p} is a \code{data.frame}.}

\item{A}{start point interval or evidence in accumulator before beginning of decision process. Start point varies from trial to trial in the interval [0, \code{A}] (uniform distribution). Average amount of evidence before evidence accumulation across trials is \code{A}/2.}

\item{b}{response threshold. (\code{b} - \code{A}/2) is a measure of "response caution".}

\item{t0}{non-decision time or response time constant (in seconds). Lower bound for the duration of all non-decisional processes (encoding and response execution).}

\item{...}{two \emph{named} drift rate parameters depending on \code{distribution} (e.g., \code{mean_v} and \code{sd_v} for \code{distribution=="norm"}). The parameters can either be given as a numeric vector or a list. If a numeric vector is passed each element of the vector corresponds to one accumulator. If a list is passed each list element corresponds to one accumulator, allowing again trialwise driftrates. The shorter parameter will be recycled as necessary (and also the elements of the list to match the length of \code{rt}). See details.}

\item{st0}{variability of non-decision time, such that \code{t0} is uniformly distributed between \code{t0} and \code{t0} + \code{st0}. Default is 0. Can be trialwise, and will be recycled to length of \code{rt}.}

\item{distribution}{character specifying the distribution of the drift rate. Possible values are \code{c("norm", "gamma", "frechet", "lnorm")}, default is \code{"norm"}.}

\item{args.dist}{list of optional further arguments to the distribution functions (i.e., \code{posdrift} or \code{robust} for \code{distribution=="norm"}, see \code{\link{single-LBA}}).}

\item{silent}{logical. Should the number of accumulators used be suppressed? Default is \code{FALSE} which prints the number of accumulators.}

\item{p}{vector of probabilities.
Or for convenience also a \code{data.frame} with columns \code{p} and \code{response}. See examples.}

\item{interval}{a vector containing the end-points of the interval to be searched for the desired quantiles (i.e., RTs) in \code{qLBA}. Default is \code{c(0, 10)}.}

\item{scale_p}{logical. Should entered probabilities automatically be scaled by the maximally predicted probability? Default is \code{FALSE}. Convenience argument for obtaining predicted quantiles. Can be slow as the maximally predicted probability is calculated individually for each \code{p}.}

\item{scale_max}{numerical scalar. Value at which the maximally predicted RT should be calculated if \code{scale_p} is \code{TRUE}.}

\item{n}{desired number of observations (scalar integer).}
}
\value{
\code{dLBA} returns the density (PDF), \code{pLBA} returns the distribution function (CDF), \code{qLBA} returns the quantile/RT, and \code{rLBA} returns random response times and responses (in a \code{data.frame}).

The length of the result is determined by \code{n} for \code{rLBA}, equal to the length of \code{rt} for \code{dLBA} and \code{pLBA}, and equal to the length of \code{p} for \code{qLBA}.

The distribution parameters (as well as \code{response}) are recycled to the length of the result. In other words, the functions are completely vectorized for all parameters and even the response.
}
\description{
Density, distribution function, quantile function, and random generation for the LBA model with the following parameters: \code{A} (upper value of starting point), \code{b} (response threshold), \code{t0} (non-decision time), and driftrate (\code{v}). All functions are available with different distributions underlying the drift rate: Normal (\code{norm}), Gamma (\code{gamma}), Frechet (\code{frechet}), and log normal (\code{lnorm}). The functions return their values conditional on the accumulator given in the response argument winning.
}
\details{
For convenience, all functions (with the exception of \code{rLBA}) allow that the first argument is a \code{data.frame} containing the information of the first and second argument in two columns (i.e., \code{rt}/\code{p} and \code{response}). Other columns will be ignored. This allows, for example, to pass the \code{data.frame} generated by \code{rLBA} directly to \code{pLBA}. See examples.

\subsection{Parameters}{
The following arguments are allowed as \code{...} drift rate parameters:
\itemize{
\item \code{mean_v,sd_v} mean and standard deviation of normal distribution for drift rate (\code{norm}). See \code{\link{Normal}}
\item \code{shape_v,rate_v,scale_v} shape, rate, and scale of gamma (\code{gamma}) and scale and shape of Frechet (\code{frechet}) distributions for drift rate. See \code{\link{GammaDist}} or \code{\link[evd]{frechet}}. For Gamma, scale = 1/rate and rate = 1/scale.
\item \code{meanlog_v,sdlog_v} mean and standard deviation of lognormal distribution on the log scale for drift rate (\code{lnorm}). See \code{\link{Lognormal}}.
}

As described above, the accumulator parameters can either be given as a numeric vector or a list. If a numeric vector is passed each element of the vector corresponds to one accumulator. If a list is passed each list element corresponds to one accumulator, allowing trialwise driftrates. The shorter parameter will be recycled as necessary (and also the elements of the list to match the length of \code{rt}).
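For illustration, a minimal sketch (two accumulators; all values chosen arbitrarily):

\preformatted{## vector: one drift rate parameter per accumulator
dLBA(c(0.8, 1.0), response = c(1, 2), A = 0.5, b = 1, t0 = 0.3,
     mean_v = c(2.4, 1.6), sd_v = c(1, 1.2))
## list: trialwise drift rate parameters per accumulator
dLBA(c(0.8, 1.0), response = c(1, 2), A = 0.5, b = 1, t0 = 0.3,
     mean_v = list(c(2.4, 2.2), c(1.6, 1.4)), sd_v = c(1, 1.2))
}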
The other LBA parameters (i.e., \code{A}, \code{b}, and \code{t0}, with the exception of \code{st0}) can either be a single numeric vector (which will be recycled to reach \code{length(rt)} or \code{length(n)} for trialwise parameters) \emph{or} a \code{list} of such vectors in which each list element corresponds to the parameters for this accumulator (i.e., the list needs to be of the same length as there are accumulators). Each list will also be recycled to reach \code{length(rt)} for trialwise parameters per accumulator.

To make the difference between both paragraphs clear: Whereas for the drift rate parameters both a single vector and a list correspond to different accumulators, only the latter is true for the other parameters. For those (i.e., \code{A}, \code{b}, and \code{t0}) a single vector always corresponds to trialwise values and a list must be used for accumulator-wise values.

\code{st0} can only vary trialwise (via a vector). And it should be noted that \code{st0} not equal to zero will considerably slow down everything.
}

\subsection{Quantile Function}{
Due to the bivariate nature of the LBA, single accumulators only return defective CDFs that do not reach 1. Only the sum of all accumulators reaches 1. Therefore, \code{qLBA} can only return quantiles/RTs for any accumulator up to the maximal probability of that accumulator's CDF. This can be obtained by evaluating the CDF at \code{Inf}.

As a convenience for the user, if \code{scale_p = TRUE} in the call to \code{qLBA} the desired probabilities are automatically scaled by the maximal probability for the corresponding response. Note that this can be slow as the maximal probability is calculated separately for each desired probability. See examples.

Also note that quantiles (i.e., predicted RTs) are obtained by numerically minimizing the absolute difference between the desired probability and the value returned from \code{pLBA} using \code{\link{optimize}}. If the difference between the desired probability and the probability corresponding to the returned quantile is above a certain threshold (currently 0.0001) no quantile is returned but \code{NA}. This can be either because the desired quantile is above the maximal probability for this accumulator or because the limits for the numerical integration are too small (default is \code{c(0, 10)}).
}

\subsection{RNG}{
For random number generation at least one of the distribution parameters (i.e., \code{mean_v}, \code{sd_v}, \code{shape_v}, \code{scale_v}, \code{rate_v}, \code{meanlog_v}, and \code{sdlog_v}) should be of length > 1 to receive RTs from multiple responses. Shorter vectors are recycled as necessary.\cr
Note that for random number generation from a normal distribution for the driftrate the number of returned samples may be less than the number of requested samples if \code{posdrift == FALSE}.
}
}
\note{
These are the top-level functions intended for end-users. To obtain the density and cumulative density the race functions are called for each response time with the corresponding winning accumulator as first accumulator (see \code{\link{LBA-race}}).
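For example (a sketch with arbitrary parameter values), \code{dLBA} with \code{response = 1} gives the same value as \code{n1PDF} with the accumulators in the same order:

\preformatted{dLBA(0.8, response = 1, A = 0.5, b = 1, t0 = 0.3,
     mean_v = c(2.4, 1.6), sd_v = c(1, 1.2))
n1PDF(0.8, A = 0.5, b = 1, t0 = 0.3, mean_v = c(2.4, 1.6), sd_v = c(1, 1.2))
}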
}
\examples{
## generate random LBA data:
rt1 <- rLBA(500, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2))
head(rt1)
prop.table(table(rt1$response))

# original parameters have 'high' log-likelihood:
sum(log(dLBA(rt1$rt, rt1$response, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2))))

# data can also be passed as data.frame (same is true for pLBA):
sum(log(dLBA(rt1, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2))))

objective_fun <- function(par, rt, response, distribution = "norm") {
  # simple parameters
  spar <- par[!grepl("[12]$", names(par))]

  # distribution parameters:
  dist_par_names <- unique(sub("[12]$", "", grep("[12]$" ,names(par), value = TRUE)))
  dist_par <- vector("list", length = length(dist_par_names))
  names(dist_par) <- dist_par_names
  for (i in dist_par_names) dist_par[[i]] <- as.list(unname(par[grep(i, names(par))]))
  dist_par$sd_v <- c(1, dist_par$sd_v) # fix first sd to 1

  # get summed log-likelihood:
  d <- do.call(dLBA, args = c(rt=list(rt), response=list(response), spar, dist_par,
                              distribution=distribution, silent=TRUE))
  if (any(d < 1e-10)) return(1e6)
  else return(-sum(log(d)))
}

# gives same value as manual calculation above:
objective_fun(c(A=0.5, b=1, t0=0.5, mean_v1=2.4, mean_v2=1.6, sd_v2=1.2),
              rt=rt1$rt, response=rt1$response)

\dontrun{
# can we recover the parameters?
# should be run several times with different random values of init_par
init_par <- runif(6)
init_par[2] <- sum(init_par[1:2]) # ensures b is larger than A
init_par[3] <- runif(1, 0, min(rt1$rt)) # ensures t0 is not too large
names(init_par) <- c("A", "b", "t0", "mean_v1", "mean_v2", "sd_v2")
nlminb(objective_fun, start = init_par, rt=rt1$rt, response=rt1$response, lower = 0)
}

# plot cdf (2 accumulators):
curve(pLBA(x, response = 1, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2)),
      xlim = c(0, 2), ylim = c(0,1),
      ylab = "cumulative probability", xlab = "response time",
      main = "Defective CDFs of LBA")
curve(pLBA(x, response = 2, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2)),
      add=TRUE, lty = 2)
legend("topleft", legend=c("1", "2"), title="Response", lty=1:2)

# plot cdf (3 accumulators):
curve(pLBA(x, response = 1, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6, 1.0), sd_v=c(1,1.2, 2.0)),
      xlim = c(0, 2), ylim = c(0,1),
      ylab = "cumulative probability", xlab = "response time",
      main = "Defective CDFs of LBA")
curve(pLBA(x, response = 2, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6, 1.0), sd_v=c(1,1.2, 2.0)),
      add=TRUE, lty = 2)
curve(pLBA(x, response = 3, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6, 1.0), sd_v=c(1,1.2, 2.0)),
      add=TRUE, lty = 3)
legend("topleft", legend=c("1", "2", "3"), title="Response", lty=1:3)

## qLBA can only return values up to maximal predicted probability:
(max_p <- pLBA(Inf, response = 1, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2)))
# [1] 0.6604696
qLBA(0.66, response = 1, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2)) # 2.559532
qLBA(0.67, response = 1, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2)) # NA

# to get predicted quantiles, scale required quantiles by maximally predicted response rate:
qs <- c(.1, .3, .5, .7, .9)
qLBA(qs*max_p, response = 1, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2))

# or set scale_p to TRUE which scales automatically by maximum p
# (but can be slow as it calculates max_p for each probability separately)
qLBA(qs, response = 1, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2), scale_p=TRUE)

# qLBA also accepts a data.frame as first argument:
t <- data.frame(p = rep(c(0.05, 0.1, 0.66), 2), response = rep(1:2, each = 3))
#      p response
# 1 0.05        1
# 2 0.10        1
# 3 0.66        1
# 4 0.05        2
# 5 0.10        2
# 6 0.66        2
qLBA(t, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2))

## LBA and diffusion can be used interchangeably:
rt1 <- rLBA(500, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2))
rt2 <- rdiffusion(500, a=1, v=2, t0=0.5)

# data can also be passed as data.frame (same is true for pLBA):
sum(log(dLBA(rt1, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2))))
sum(log(dLBA(rt2, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2))))
sum(log(ddiffusion(rt1, a=1, v=2, t0=0.5)))
sum(log(ddiffusion(rt2, a=1, v=2, t0=0.5)))

### trialwise parameters work as expected (only since package version 0.9):
x1 <- dLBA(rt=c(1,1), response=c(1,2), A=1, b=list(c(1,3),c(2,4)),
           t0=0.1, mean_v=c(3,3), sd_v=c(1,1), distribution="norm")
x2a <- dLBA(rt=c(1), response=c(1), A=1, b=list(c(1),c(2)),
            t0=0.1, mean_v=c(3,3), sd_v=c(1,1), distribution="norm")
x2b <- dLBA(rt=c(1), response=c(2), A=1, b=list(c(3),c(4)),
            t0=0.1, mean_v=c(3,3), sd_v=c(1,1), distribution="norm")
all(x1 == c(x2a, x2b)) ## should be TRUE
}
\references{
Brown, S. D., & Heathcote, A. (2008). The simplest complete model of choice response time: Linear ballistic accumulation. \emph{Cognitive Psychology}, 57(3), 153-178. doi:10.1016/j.cogpsych.2007.12.002

Donkin, C., Averell, L., Brown, S., & Heathcote, A. (2009). Getting more from accuracy and response time data: Methods for fitting the linear ballistic accumulator. \emph{Behavior Research Methods}, 41(4), 1095-1110. doi:10.3758/BRM.41.4.1095

Heathcote, A., & Love, J. (2012). Linear deterministic accumulator models of simple choice. \emph{Frontiers in Psychology}, 3, 292. doi:10.3389/fpsyg.2012.00292
}
rtdists/man/deprecated.Rd0000644000175000017500000000111513667512040015244 0ustar nileshnilesh% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/deprecated.R
\name{drd}
\alias{drd}
\alias{rtdists-deprecated}
\alias{prd}
\alias{rrd}
\title{Deprecated functions}
\usage{
drd(...)

prd(...)

rrd(...)
}
\arguments{
\item{...}{arguments passed from the old functions to the new functions}
}
\description{
These functions have been renamed and deprecated in \pkg{rtdists}: \code{drd()} (use \code{\link{ddiffusion}()}), \code{prd()} (use \code{\link{pdiffusion}()}), \code{rrd()} (use \code{\link{rdiffusion}()}).
}
\keyword{internal}
rtdists/man/rtdists-package.Rd0000644000175000017500000000205413667512040016234 0ustar nileshnilesh% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/rtdists-package.R
\docType{package}
\name{rtdists-package}
\alias{rtdists-package}
\title{The rtdists Package}
\description{
Response Time Distributions.
}
\details{
\tabular{ll}{
Package: \tab rtdists\cr
Type: \tab Package\cr
Version: \tab 0.8-3\cr
Date: \tab 2018-06-23\cr
Depends: \tab R (>= 3.0.0)\cr
License: \tab GPL (>=3)\cr
URL: \tab https://github.com/rtdists/rtdists/\cr
}

Provides response time distributions (density/PDF, distribution function/CDF, quantile function, and random generation): (a) Ratcliff diffusion model (Ratcliff & McKoon, 2008, <doi:10.1162/neco.2008.12-06-420>) based on C code by Andreas and Jochen Voss and (b) linear ballistic accumulator (LBA; Brown & Heathcote, 2008, <doi:10.1016/j.cogpsych.2007.12.002>) with different distributions underlying the drift rate.
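A minimal usage sketch (arbitrary parameter values; see the individual help pages for details):

\preformatted{rts <- rdiffusion(100, a = 1, v = 2, t0 = 0.3)
sum(log(ddiffusion(rts, a = 1, v = 2, t0 = 0.3)))
}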
}
\author{
Henrik Singmann, Scott Brown, Matthew Gretton, Andrew Heathcote, with contributions from Andreas Voss, Jochen Voss, Andrew Terry
}
\keyword{package}
rtdists/man/speed_acc.Rd0000644000175000017500000000757113667512040015066 0ustar nileshnilesh% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/speed_acc-data.R
\docType{data}
\name{speed_acc}
\alias{speed_acc}
\title{Speed-Accuracy Data from Wagenmakers, Ratcliff, Gomez, & McKoon (2008, Experiment 1)}
\format{A \code{data.frame} with 31,522 obs. and 9 variables:
\describe{
\item{id}{participant id}
\item{block}{block number}
\item{condition}{\code{accuracy} for blocks with accuracy instructions; \code{speed} for blocks with speed instruction}
\item{stim}{unique identifier of stimulus, stimuli are nested in frequency conditions}
\item{stim_cat}{category of stimulus, either word or non-word}
\item{frequency}{"high frequency word", "low frequency word", "very low frequency word", or non-words derived from the first three categories}
\item{response}{\code{word}, \code{nonword}, or not interpretable response (\code{error}, i.e., pushed a button, but not the right one and also not the one next to the right button)}
\item{rt}{response time in seconds}
\item{censor}{boolean indicating whether or not a response should be eliminated prior to analysis; uninterpretable response, too fast response (<180 ms), too slow response (>3 sec)}
}}
\source{
Wagenmakers, E.-J., Ratcliff, R., Gomez, P., & McKoon, G. (2008). A diffusion model account of criterion shifts in the lexical decision task. \emph{Journal of Memory and Language}, 58(1), 140-159.
}
\usage{
speed_acc
}
\description{
Responses and response times from an experiment in which instruction manipulated speed and accuracy between blocks. This data was also analyzed by Heathcote and Love (2012), who were the first to use the 17 participants also included here.
}
\details{
The data excludes the practice blocks but includes all trials. Variable \code{censor} can be used for excluding all trials also excluded from the papers using it, namely: uninterpretable response, too fast response (<180 ms), too slow response (>3 sec).

Heathcote and Love (2012, p. 7) describe the data as follows:

We fit the LBA and LNR models to data from Wagenmakers et al.'s (2008) experiment one, where participants made decisions about whether a string of letters constituted a word. These lexical decisions were made about four types of stimuli, non-words (nw) and high-frequency (hf), low-frequency (lf), and very low-frequency (vlf) words. Participants made decisions either under speed or accuracy emphasis instructions in different experimental blocks. Accuracy blocks were preceded by the message "Try to respond accurately" and "ERROR" was displayed after each wrong response. Speed blocks were preceded by the message "Try to respond really fast" and "TOO SLOW" was displayed after each response slower than 0.75 s. We report analyses of data from 17 participants (31,412 data points) in their Experiment 1, including the 15 participants analyzed in Wagenmakers et al. (2008) and two extras (we thank Eric-Jan Wagenmakers for supplying this data).
}
\examples{
data(speed_acc)
str(speed_acc)

# remove excluded trials:
speed_acc <- droplevels(speed_acc[!speed_acc$censor,])
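# quick look (a sketch): median RT per instruction condition
tapply(speed_acc$rt, speed_acc$condition, median)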
# new factors for obtaining values as in Table 1, Wagenmakers et al. (2008, p. 152)
speed_acc$freq <- with(speed_acc,
                       factor(ifelse(stim_cat == "nonword", "nonword", as.character(frequency)),
                              levels = c("high", "low", "very_low", "nonword")))
# corr = correct (0 = correct, 1 = error)
speed_acc$corr <- with(speed_acc, 1-as.numeric(stim_cat == response))
str(speed_acc)

## aggregated RTs:
aggregate(rt ~ condition + freq + corr, speed_acc, mean)
## Error Rate:
aggregate(corr ~ condition + freq, speed_acc, mean)
}
\references{
Heathcote, A., & Love, J. (2012). Linear deterministic accumulator models of simple choice. \emph{Frontiers in Psychology: Cognitive Science}, 3, 292. doi:10.3389/fpsyg.2012.00292
}
\keyword{dataset}
rtdists/man/rr98.Rd0000644000175000017500000001530613667512040013757 0ustar nileshnilesh% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/rr98-data.R
\docType{data}
\name{rr98}
\alias{rr98}
\title{Ratcliff and Rouder (1998, Exp. 1) Luminance Discrimination Data}
\format{A \code{data.frame} with 24,358 obs. and 12 variables:
\describe{
\item{id}{participant id, factor with three levels}
\item{session}{session number, integer}
\item{block}{block number, integer}
\item{trial}{trial number within a block, integer}
\item{instruction}{factor with two levels: \code{"accuracy"} for blocks with accuracy instructions; \code{"speed"} for blocks with speed instruction}
\item{source}{distribution the strength was drawn from; factor with two levels: \code{"dark"} and \code{"light"}}
\item{strength}{proportion of white to black pixels, varied over 33 equally spaced proportions from zero (all 1,024 pixels were black) to 1 (all 1,024 pixels were white), with 0 darkest and 32 lightest. Integer.}
\item{response}{factor with two levels: \code{"dark"} and \code{"light"}}
\item{response_num}{numeric response variable such that \code{1="dark"} and \code{2="light"}}
\item{correct}{boolean indicating whether or not \code{source==response}. (Does not seem to be used in the original analysis.)}
\item{rt}{response time in seconds}
\item{outlier}{boolean indicating whether or not the response was considered an outlier by Ratcliff and Rouder (1998), i.e., RTs outside of (200ms, 2500ms)}
}}
\source{
Ratcliff, R., & Rouder, J. N. (1998). Modeling Response Times for Two-Choice Decisions. \emph{Psychological Science}, 9(5), 347-356. http://doi.org/10.1111/1467-9280.00067
}
\usage{
rr98
}
\description{
Responses and response times from an experiment in which three participants were asked to decide whether the overall brightness of pixel arrays displayed on a computer monitor was "high" or "low". In addition, instruction manipulated speed and accuracy between blocks.
}
\details{
The experiment is described as follows by Ratcliff and Rouder (1998, p. 349):

In Experiment 1, subjects were asked to decide whether the overall brightness of pixel arrays displayed on a computer monitor was "high" or "low". The brightness of a display was controlled by the proportion of the pixels that were white. For each trial, the proportion of white pixels was chosen from one of two distributions, a high distribution [i.e., light] or a low [i.e., dark] distribution, each with fixed mean and standard deviation. Feedback was given after each trial to tell the subject whether his or her decision had correctly indicated the distribution from which the stimulus had been chosen. Other than this feedback, a subject had no information about the distributions. Because the distributions overlapped substantially, a subject could not be highly accurate.
A display with 50% white pixels, for example, might have come from the high distribution on one trial and the low distribution on another. \subsection{Stimuli}{ The stimulus display for Experiment 1 was a square that was 64 pixels on each side and subtended 3.8 degrees of visual angle on a PC-VGA monitor. [...] In each square, 3,072 randomly chosen pixels were neutral gray, like the background, and the remaining 1,024 pixels were either black or white; the proportion of white to black pixels provided the brightness manipulation. There were 33 equally spaced proportions from zero (all 1,024 pixels were black) to 1 (all 1,024 pixels were white). The two distributions from which the bright and dark stimuli were chosen were centered at .375 (low brightness) and .625 (high brightness), and they each had a standard deviation of .1875. } \subsection{Procedure}{ A subject's task was to decide, on each trial, from which distribution, high or low brightness in Experiment 1, the observed stimulus (stimuli) had been sampled. Subjects made their decision by pressing one of two response keys. On each trial, a 500-ms foreperiod, during which the display consisted solely of neutral gray, was followed by presentation of the stimulus; presentation was terminated by the subject's response. In Experiment 1, speed-versus-accuracy instructions were manipulated. For some blocks of trials, subjects were instructed to respond as quickly as possible, and a "too slow" message followed every response longer than 550 ms. For other blocks of trials, subjects were instructed to be as accurate as possible, and a "bad error" message followed incorrect responses to stimuli from the extreme ends of the distributions. Experiment 1 had ten 35-min sessions, and Experiments 2 and 3 had four sessions. In Experiment 1, subjects switched from emphasis on speed to emphasis on accuracy every 204 trials. Each session consisted of eight blocks of 102 trials per block, for a total of 8,160 trials per subject. [In Experiments 2 and 3, each] session consisted of eight blocks of 102 trials, for a total of 3,264 trials per subject in each experiment. For all trials in each experiment, subjects were instructed to maintain a high level of accuracy while responding quickly, and an "error" message indicated incorrect responses. Responses were followed by a 300-ms blank interval, and the error message was displayed for 300 ms after the blank interval. } } \note{ The data is already prepared following Ratcliff and Rouder (1998) by removing the following trials: \itemize{ \item{the first session for each participant} \item{the first 20 trials of each session} \item{the first trial of each block (each change in speed/accuracy instruction starts a new block)} } To fully replicate the data used by Ratcliff and Rouder (1998) one only needs to remove the trials that are \code{TRUE} in column \code{outlier} (i.e., RTs outside of (200ms, 2500ms)).
The full raw data is also available as part of this package, see:\cr \code{system.file("extdata", "rr98-data", package = "rtdists")} and \code{system.file("extdata", "rr98-data.codes", package = "rtdists")} } \examples{ data(rr98) rr98 <- rr98[!rr98$outlier,] #remove outliers head(rr98) # id session block trial instruction source strength response response_num correct rt outlier # 1 jf 2 1 21 accuracy dark 8 dark 1 TRUE 0.801 FALSE # 2 jf 2 1 22 accuracy dark 7 dark 1 TRUE 0.680 FALSE # 3 jf 2 1 23 accuracy light 19 light 2 TRUE 0.694 FALSE # 4 jf 2 1 24 accuracy dark 21 light 2 FALSE 0.582 FALSE # 5 jf 2 1 25 accuracy light 19 dark 1 FALSE 0.925 FALSE # 6 jf 2 1 26 accuracy dark 10 dark 1 TRUE 0.605 FALSE ## See vignette for more examples. } \keyword{dataset} rtdists/src/0000755000175000017500000000000014164644437012704 5ustar nileshnileshrtdists/src/Density.h0000644000175000017500000001567714164636111014502 0ustar nileshnilesh/* Density.hpp - Functions for PDF calculation (originally in density.c) * * Copyright (C) 2006 Jochen Voss, Andreas Voss. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License as * published by the Free Software Foundation; either version 2 of the * License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301 USA. */ #ifndef DENSITY_H #define DENSITY_H using namespace Rcpp; #define EPSILON 1e-6 // Forward declarations double g_minus (double t); double g_plus (double t); static double integral_t0_g_minus (double t, Parameters *params); static double integral_z_g_minus (double t, Parameters *params); static double integral_v_g_minus (double t, double zr, Parameters *params); static double g_minus_no_var (double t, double a, double zr, double v); static double g_minus_small_time (double t, double zr, int N); static double g_minus_large_time (double t, double zr, int N); // TODO: Make sure these function names are accurate static double integrate_z_over_t (Parameters *params, double a, double b, double step_width); static double integrate_v_over_zr (Parameters *params, double a, double b, double t, double step_width); // Main calls NumericVector density (NumericVector rts, int boundary) { int length = rts.length(); NumericVector out(length); if (boundary == 1) { for (int i = 0; i < length; i++) { out[i] = g_plus(rts[i]); } } // Calc upper else { for (int i = 0; i < length; i++) { out[i] = -g_minus(rts[i]); } } // Calc lower return out; } double g_minus(double t) { return integral_t0_g_minus (t - g_Params->t0 - 0.5*g_Params->d, g_Params); } double g_plus(double t) { // Make a copy so we don't disturb our params // (?TODO: we could optimise the object creation out and just set them back after the call) Parameters new_params(*g_Params); new_params.zr = 1 - g_Params->zr; new_params.v = -g_Params->v; return integral_t0_g_minus (t - new_params.t0 + 0.5*new_params.d, &new_params); } static double integral_t0_g_minus (double t, Parameters *params) { double res; if (params->st0 < params->TUNE_ST0_EPSILON) // 170501 was == 0) { res = integral_z_g_minus (t, params); } else { 
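// With non-zero st0 the observed density is the average of the no-variability density over the
// uniform non-decision-time window [t0 - st0/2, t0 + st0/2]: the call below integrates numerically
// over t and divides by the window width st0 to turn the integral into an average.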
res = integrate_z_over_t(params, t - .5*params->st0, t + .5*params->st0, params->TUNE_INT_T0) / params->st0; } return res; } static double integral_z_g_minus (double t, Parameters *params) { double res; if (t <= 0) return 0; if (params->szr < params->TUNE_SZ_EPSILON) // 170501 was == 0) { res = integral_v_g_minus (t, params->zr, params); } else { res = integrate_v_over_zr(params, params->zr - .5*params->szr, params->zr + .5*params->szr, t, params->TUNE_INT_Z) / params->szr; } return res; } static double integral_v_g_minus (double t, double zr, Parameters *params) { double a = params->a; double v = params->v; double sv = params->sv; int N_small, N_large; double simple, factor, eps; double ta = t/(a*a); factor = 1 / (a*a * sqrt(t * sv*sv + 1)) * exp(-0.5 * (v*v*t + 2*v*a*zr - a*a * zr*zr * sv*sv) / (t*sv*sv+1)); if (std::isinf(factor)) { return 0; } eps = EPSILON / factor; if (params->sv == 0) { return g_minus_no_var(t, a, zr, v); } N_large = (int)ceil(1 / (M_PI*sqrt(t))); if (M_PI*ta*eps < 1) { N_large = std::max(N_large, (int)ceil(sqrt(-2*log(M_PI*ta*eps) / (M_PI*M_PI*ta)))); } if (2*sqrt(2*M_PI*ta)*eps < 1) { N_small = (int)ceil(fmax(sqrt(ta) + 1, 2 + sqrt(-2*ta*log(2*eps*sqrt(2*M_PI*ta))))); } else { N_small = 2; } if (N_small < N_large) { simple = g_minus_small_time(t/(a*a), zr, N_small); } else { simple = g_minus_large_time(t/(a*a), zr, N_large); } return factor * simple; } static double g_minus_no_var(double t, double a, double zr, double v) { int N_small, N_large; double simple, factor, eps; double ta = t/(a*a); factor = exp(-a*zr*v - 0.5*v*v*t) / (a*a); if (std::isinf(factor)) { return 0; } eps = EPSILON / factor; N_large = (int)ceil (1/ (M_PI*sqrt(t))); if (M_PI*ta*eps < 1) { N_large = std::max(N_large, (int)ceil(sqrt(-2*log(M_PI*ta*eps) / (M_PI*M_PI*ta)))); } if (2*sqrt(2*M_PI*ta)*eps < 1) { N_small = (int)ceil(fmax(sqrt(ta) + 1, 2 + sqrt(-2*ta*log(2*eps*sqrt(2*M_PI*ta))))); } else { N_small = 2; } if (N_small < N_large) { simple = g_minus_small_time(t/(a*a), zr, N_small); } else { simple = g_minus_large_time(t/(a*a), zr, N_large); } return factor * simple; } static double g_minus_small_time(double t, double zr, int N) { int i; double sum = 0; double d; for(i = -N/2; i <= N/2; i++) { d = 2*i + zr; sum += exp(-d*d / (2*t)) * d; } return sum / sqrt(2*M_PI*t*t*t); } static double g_minus_large_time(double t, double zr, int N) { int i; double sum = 0; double d; for(i = 1; i <= N; i++) { d = i * M_PI; sum += exp(-0.5 * d*d * t) * sin(d*zr) * i; } return sum * M_PI; } // CONVERSION NOTE: Simplest way to deal with the integrate function is to remove // the clever recursiveness and instead (ugh) duplicate code static double integrate_z_over_t (Parameters *params, double a, double b, double step_width) { double width = b-a; double tmp_N = width / step_width; if (std::isnan(tmp_N)) tmp_N = 20; int N = std::max(4, static_cast<int>(tmp_N)); double step = std::max(width / N, EPSILON); double x; double result = 0; for(x = a+0.5*step; x < b; x += step) { result += step * integral_z_g_minus(x, params); } return result; } static double integrate_v_over_zr (Parameters *params, double a, double b, double t, double step_width) { double width = b-a; double tmp_N = width / step_width; if (std::isnan(tmp_N)) tmp_N = 20; int N = std::max(4, static_cast<int>(tmp_N)); double step = std::max(width / N, EPSILON); double x; double result = 0; for(x = a+0.5*step; x < b; x += step) { result += step * integral_v_g_minus (t, x, params); } return result; } #endif // DENSITY_H
rtdists/src/CDF_st0_variability.h0000644000175000017500000001717513667512040016640 0ustar nileshnilesh/* CDF_st0_variability.hpp - Functions to calculate CDF when there is variance * in the non-decision-time parameter (st0 != 0) (originally in cdf.c) * * Copyright (C) 2006 Jochen Voss, Andreas Voss. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License as * published by the Free Software Foundation; either version 2 of the * License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301 USA. */ #ifndef CDF_ST0_VARIABILITY_H #define CDF_ST0_VARIABILITY_H // // st0: variability in t0 // // This implements numerical integration in t-direction, using the trapezoidal rule. Since computing the CDF is slow and since we can // solve the PDE only forward in time, we cache old values of the CDF. // // The cached values form a grid of M different t-values such that the smallest cached t-value is smaller or equal than t-0.5*st0, the // biggest cached t-value is bigger or equal than t+0.5*st0. The total length of the cached time interval is (M-1)*dt where M and dt // are chosen such that st0 = (M-2)*dt. #include "CDF_sv_variability.h" // Forward declarations static void F_st0_start (F_calculator *fc, int plus); static const double *F_st0_get_F (F_calculator *fc, double t); static double F_st0_get_z (const F_calculator *fc, int i); static void F_st0_delete (F_calculator *fc); struct F_st0_data { struct F_calculator *base_fc; double st0; // variability of t0 int M; // number of stored grid lines double start; // t-value of first stored grid line double dt; // t-spacing of stored grid lines double *values; // array: stored grid lines (length M*(N+1)) char *valid; // which lines in 'values' are valid int base; // first grid line starts at pos. base*(N+1) double *avg; // the computed average (size N+1) }; static struct F_calculator *F_st0_new (Parameters *params) // Allocate a new 'struct F_calculator' (with sv == 0). // // This function can deal with variabilities in z and t. // If 'st0 == 0', it just returns the result of 'F_sz_new'. 
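// A worked example (assuming the default precision of 3, for which TUNE_DT0 is roughly 0.015):
// for st0 = 0.1 the code below chooses M = (int)(0.1/0.015 + 1.5) = 8 grid lines spaced
// dt = st0/(M-2) = 0.1/6 apart, so the cached interval of length (M-1)*dt (about 0.117)
// is guaranteed to cover the averaging window [t - 0.5*st0, t + 0.5*st0].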
// // { F_calculator *base_fc; F_calculator *fc; F_st0_data *data; double dt; int M, N; double st0 = params->st0; // convenience only: cache st0 locally base_fc = F_sv_new (params); if (st0 <= params->TUNE_DT0*1e-6) return base_fc; M = (int)(st0/params->TUNE_DT0 + 1.5); if (M < 3) M = 3; dt = st0/(M-2); N = base_fc->N; fc = xnew (F_calculator, 1); // NOTE: MEMORY ALLOCATION: struct F_calculator fc->N = N; fc->plus = -1; data = xnew (F_st0_data, 1); // NOTE: MEMORY ALLOCATION: struct F_st0_data data->st0 = st0; data->base_fc = base_fc; data->M = M; /* data->start is set in F_st0_start */ data->dt = dt; data->values = xnew (double, M*(N+1)); // NOTE: MEMORY ALLOCATION: M*(N+1) doubles data->valid = xnew (char, M); // NOTE: MEMORY ALLOCATION: M chars data->base = 0; data->avg = xnew (double, N+1); // NOTE: MEMORY ALLOCATION: N+1 doubles fc->data = data; fc->start = F_st0_start; fc->free = F_st0_delete; fc->get_F = F_st0_get_F; fc->get_z = F_st0_get_z; return fc; } static void F_st0_delete (F_calculator *fc) { F_st0_data *data = (F_st0_data *)fc->data; F_delete (data->base_fc); xfree (data->valid); xfree (data->values); xfree (data->avg); xfree (data); xfree (fc); } static void F_st0_start (F_calculator *fc, int plus) { F_st0_data *data = (F_st0_data *)fc->data; int j; fc->plus = plus; F_start (data->base_fc, plus); data->start = -DBL_MAX; // initially mark all of the cache as invalid for (j = 0; j < data->M; ++j) data->valid[j] = 0; } static const double *F_st0_get_row(const F_calculator *fc, int j) // Get a pointer to one of the stored grid lines. // The value j is the grid line index (range 0, ..., M-1). // The returned value is a pointer to an array of length N+1. { const F_st0_data *data = (F_st0_data *)fc->data; int M, N, idx; double *row; M = data->M; N = fc->N; assert(0 <= j && j < M); idx = (data->base + j)%M; row = data->values + idx*(N+1); if (! data->valid[idx]) { double t; const double *F; t = data->start + j*data->dt; F = F_get_F(data->base_fc, t); memcpy(row, F, (N+1)*sizeof(double)); data->valid[idx] = 1; } return row; } static void add_vec(long n, double a, const double *x, double *y) // the vector operation y += a*x { #ifdef HAVE_LIBBLAS extern void daxpy_(long *Np, double *DAp, const double *X, long *INCXp, double *Y, long *INCYp); long inc = 1; daxpy_(&n, &a, x, &inc, y, &inc); #else /* ! HAVE_LIBBLAS */ int i; if (a == 1) { for (i=0; i<n; ++i) y[i] += x[i]; } else { for (i=0; i<n; ++i) y[i] += a*x[i]; } #endif /* ! HAVE_LIBBLAS */ } static const double *F_st0_get_F (F_calculator *fc, double t) { F_st0_data *data = (F_st0_data *)fc->data; double a, b, dt; const double *row; double q, r, *avg; int M, N, shift; int i, j, m; a = t - 0.5*data->st0; b = t + 0.5*data->st0; dt = data->dt; M = data->M; N = fc->N; // how many of the precalculated rows can we keep?
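// Because F_get_F is only ever called with increasing t, rows whose t-value lies below the new
// lower limit 'a' can never be needed again; the cache window is therefore slid forward by
// 'shift' grid lines and only the newly exposed rows are recomputed on demand (in F_st0_get_row).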
if (a - data->start >= M*dt) { // beware of integer overflows for small dt shift = M; } else { shift = (int)((a - data->start)/dt); assert (shift >= 0); } for (j=0; j<shift; ++j) data->valid[(data->base+j)%M] = 0; if (shift < M) { data->start += shift*dt; data->base = (data->base+shift)%M; } else { data->start = a; } /* compute the average over the rows from a to b */ avg = data->avg; for (i=0; i<=N; ++i) avg[i] = 0; { double tmp = (b - data->start)/dt; m = (int)(ceil (tmp) + 0.5); if (m >= M) m = M-1; /* protect against rounding errors */ q = (a - data->start)/dt; r = m - tmp; } if (m >= 3) { row = F_st0_get_row(fc, 0); add_vec(N+1, 0.5*(1-q)*(1-q), row, avg); row = F_st0_get_row(fc, 1); add_vec(N+1, 1-0.5*q*q, row, avg); for (j=2; j<m-1; ++j) { row = F_st0_get_row(fc, j); add_vec(N+1, 1, row, avg); } row = F_st0_get_row(fc, m-1); add_vec(N+1, 1-0.5*r*r, row, avg); row = F_st0_get_row(fc, m); add_vec(N+1, 0.5*(1-r)*(1-r), row, avg); } else { /* m == 2 */ row = F_st0_get_row(fc, 0); add_vec(N+1, 0.5*(1-q)*(1-q), row, avg); row = F_st0_get_row(fc, 1); add_vec(N+1, 1-0.5*q*q-0.5*r*r, row, avg); row = F_st0_get_row(fc, 2); add_vec(N+1, 0.5*(1-r)*(1-r), row, avg); } /* normalise the trapezoidal weights to an average over [a, b] */ for (i=0; i<=N; ++i) avg[i] *= dt/(b-a); return avg; } static double F_st0_get_z (const F_calculator *fc, int i) { const F_st0_data *data = (F_st0_data *)fc->data; return F_get_z (data->base_fc, i); } #endif // CDF_ST0_VARIABILITY rtdists/src/Parameters.h0000644000175000017500000001147613667512040015156 0ustar nileshnilesh/* Parameters.hpp - A class to contain the model parameters and precision tuning * (originally in parameters.c and precision.c) * * Copyright (C) 2006 Jochen Voss, Andreas Voss. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License as * published by the Free Software Foundation; either version 2 of the * License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301 USA. */ #ifndef PARAMETERS_H #define PARAMETERS_H // Note: Parameters class now includes precision constants // Indices for packed parameters array #define PARAM_a 0 #define PARAM_v 1 #define PARAM_t0 2 #define PARAM_d 3 #define PARAM_szr 4 #define PARAM_sv 5 #define PARAM_st0 6 #define PARAM_zr 7 class Parameters { public: double a; // Boundary separation double v; // Mean of the drift double t0; // Non-decision time double d; // Difference between boundaries of non-decision time double szr; // width of zr distribution double sv; // standard deviation of v distribution double st0; // width of t0 distribution double zr; // Mean of diffusion starting point relative to boundaries // Precision constants set by SetPrecision() double TUNE_DZ; double TUNE_DV; double TUNE_DT0; double TUNE_PDE_DT_MIN; // If std=c++11 we can use C++ defaults to set as = 1e-6; double TUNE_PDE_DT_MAX; // ... we can default to = 1e-6; double TUNE_PDE_DT_SCALE; // ...
we can default to = 0.0; double TUNE_INT_T0; double TUNE_INT_Z; double TUNE_SV_EPSILON; // CONVERSION NOTE: See below in SetPrecision() double TUNE_SZ_EPSILON; // CONVERSION NOTE: See below in SetPrecision() double TUNE_ST0_EPSILON; // CONVERSION NOTE: See below in SetPrecision() public: // Construct the object from the passed in params Parameters (NumericVector params, double precision) { a = params[PARAM_a]; v = params[PARAM_v]; t0 = params[PARAM_t0]; d = params[PARAM_d]; szr = params[PARAM_szr]; sv = params[PARAM_sv]; st0 = params[PARAM_st0]; zr = params[PARAM_zr]; SetPrecision (precision); } bool ValidateParams (bool print) { bool valid = true; if (a <= 0) { valid = false; if (print) Rcpp::Rcout << "error: invalid parameter a = " << a << std::endl; } if (szr < 0 || szr > 1) { valid = false; if (print) Rcpp::Rcout << "error: invalid parameter szr = " << szr << std::endl; } if (st0 < 0) { valid = false; if (print) Rcpp::Rcout << "error: invalid parameter st0 = " << st0 << std::endl; } if (sv < 0) { valid = false; if (print) Rcpp::Rcout << "error: invalid parameter sv = " << sv << std::endl; } if (t0 - fabs(0.5*d) - 0.5*st0 < 0) { valid = false; if (print) Rcpp::Rcout << "error: invalid parameter combination t0 = " << t0 << ", d = " << d << ", st0 =" << st0 << std::endl; } if (zr - 0.5*szr <= 0) { valid = false; if (print) Rcpp::Rcout << "error: invalid parameter combination zr = " << zr << ", szr = " << szr << std::endl;} if (zr + 0.5*szr >= 1) { valid = false; if (print) Rcpp::Rcout << "error: invalid parameter combination zr = " << zr << ", szr = " << szr << std::endl;} return valid; } private: void SetPrecision (double p) { // Try to achieve an accuracy of approximately 10^{-p} for the CDF. TUNE_PDE_DT_MIN = pow(10, -0.400825*p-1.422813); TUNE_PDE_DT_MAX = pow(10, -0.627224*p+0.492689); TUNE_PDE_DT_SCALE = pow(10, -1.012677*p+2.261668); TUNE_DZ = pow(10, -0.5*p-0.033403); TUNE_DV = pow(10, -1.0*p+1.4); TUNE_DT0 = pow(10, -0.5*p-0.323859); TUNE_INT_T0 = 0.089045 * exp(-1.037580*p); TUNE_INT_Z = 0.508061 * exp(-1.022373*p); // CONVERSION NOTE: // These have been added to optimise code paths by treating very small variances as 0 // e.g. with precision = 3, sv or sz values < 10^-5 are considered 0 TUNE_SV_EPSILON = pow (10, -(p+2.0)); // Used by pdiffusion TUNE_SZ_EPSILON = pow (10, -(p+2.0)); // Used by ddiffusion and pdiffusion TUNE_ST0_EPSILON = pow (10, -(p+2.0)); // Used by ddiffusion } }; #endif // PARAMETERS_H rtdists/src/CDF_sz_variability.h0000644000175000017500000001034613667512040016557 0ustar nileshnilesh/* CDF_sz_variability.hpp - Functions to calculate CDF when there is variance * in the start point parameter (szr != 0) (originally in cdf.c and phi.c) * * Copyright (C) 2006 Jochen Voss, Andreas Voss. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License as * published by the Free Software Foundation; either version 2 of the * License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301 USA. 
*/ #ifndef CDF_SZ_VARIABILITY_H #define CDF_SZ_VARIABILITY_H // // sz: variability in z // #include "CDF_no_variability.h" static void F_sz_delete (F_calculator *fc); static void F_sz_start (F_calculator *fc, int plus); static const double *F_sz_get_F (F_calculator *fc, double t); static double F_sz_get_z (const F_calculator *fc, int i); struct F_sz_data { F_calculator *base_fc; // gives the values we average over double *avg; // the computed averages int k; // the average involves 2*k+1 cells double q; // unused part of the outermost cells double f; // scale factor for the integration }; static F_calculator *F_sz_new (Parameters *params) // Allocate a new 'struct F_calculator' (with sv == 0 and st0 == 0). // // This function can deal with variabilities in z. // If 'sz == 0', it just returns the result of 'F_plain_new'. // // { struct F_calculator *base_fc; struct F_calculator *fc; struct F_sz_data *data; double sz, tmp, dz; int N, k; base_fc = F_plain_new (params); sz = params->szr*params->a; if (sz < params->TUNE_SZ_EPSILON) return base_fc; N = base_fc->N; dz = F_get_z (base_fc, 1) - F_get_z (base_fc, 0); tmp = sz/(2*dz); k = (int)(ceil(tmp) + 0.5); assert (2*k <= N); fc = xnew (struct F_calculator, 1); // NOTE: MEMORY ALLOCATION fc->N = N-2*k; fc->plus = -1; data = xnew (struct F_sz_data, 1); // NOTE: MEMORY ALLOCATION data->base_fc = base_fc; data->avg = xnew (double, fc->N+1); // NOTE: MEMORY ALLOCATION data->k = k; data->q = k - tmp; data->f = dz/sz; fc->data = data; fc->start = F_sz_start; fc->free = F_sz_delete; fc->get_F = F_sz_get_F; fc->get_z = F_sz_get_z; return fc; } static void F_sz_delete (F_calculator *fc) { F_sz_data *data = (F_sz_data *)fc->data; F_delete (data->base_fc); xfree (data->avg); xfree (data); xfree (fc); } static void F_sz_start (F_calculator *fc, int plus) { F_sz_data *data = (F_sz_data *)fc->data; fc->plus = plus; F_start (data->base_fc, plus); } static const double *F_sz_get_F (F_calculator *fc, double t) { F_sz_data *data = (F_sz_data *)fc->data; const double *F; double tmp, q, f; int i, j, m; F = F_get_F (data->base_fc, t); m = 2*data->k; q = data->q; f = data->f; if (m >= 3) { for (i=0; i<=fc->N; ++i) { tmp = F[i] * 0.5*(1-q)*(1-q); tmp += F[i+1] * (1-0.5*q*q); for (j=i+2; j<i+m-1; ++j) tmp += F[j]; tmp += F[i+m-1] * (1-0.5*q*q); tmp += F[i+m] * 0.5*(1-q)*(1-q); data->avg[i] = tmp * f; } } else { /* m == 2 */ for (i=0; i<=fc->N; ++i) { tmp = F[i] * 0.5*(1-q)*(1-q); tmp += F[i+1] * (1-q*q); tmp += F[i+2] * 0.5*(1-q)*(1-q); data->avg[i] = tmp * f; } } /* m == 1 is impossible here */ return data->avg; } static double F_sz_get_z (const F_calculator *fc, int i) { F_sz_data *data = (F_sz_data *)fc->data; return F_get_z (data->base_fc, i+data->k); } #endif // CDF_SZ_VARIABILITY_H rtdists/src/Sampling.h0000644000175000017500000001774313667512040014622 0ustar nileshnilesh/* Sampling.hpp - Contains main call for random sampling * (originally in construct-samples.c) * * Copyright (C) 2006 Jochen Voss, Andreas Voss. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License as * published by the Free Software Foundation; either version 2 of the * License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details.
* * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301 USA. */ #ifndef SAMPLING_H #define SAMPLING_H #include <Rcpp.h> using namespace Rcpp; // Forward declarations for support functions static int compare_doubles (const void *a, const void *b); static int find_slot(double target, const double *value, int l, int r); // RCpp Conversion Note: we've removed the idea of multiple samples (s_count) List sampling (int s_size) { F_calculator *fc; double *F; double *Fs, // TODO: Drop this down to a 1D array Fs_min, Fs_max; double t_min, t_max, dt; int i, N; // get the F-values for the samples Fs = xnew(double, s_size); // Conversion Note: not ready to rewrite entirely using Rcpp NumericVector etc. Fs_min = 1; Fs_max = 0; // Create Fs[], an s_size array of random or sequential numbers (depending on random_flag), // and set Fs_min and Fs_max, the min and max produced bool random_flag = true; // false = allow non-random sampling for testing if (random_flag) { // ? TODO: vectorise this (though a first attempt went quite badly performance-wise...)
for (i=0; i<s_size; i++) { Fs[i] = unif_rand(); if (Fs[i] > Fs_max) Fs_max = Fs[i]; if (Fs[i] < Fs_min) Fs_min = Fs[i]; } } else { // Generate equally-spaced samples
for (i=0; i<s_size; i++) { Fs[i] = (i+0.5)/s_size; } Fs_min = Fs[0]; Fs_max = Fs[s_size-1]; }
// find the range of t-values over which we need the CDF
fc = F_new (); F_start (fc, BOUNDARY_UPPER); t_max = 0.5; double scaled_z = g_Params->zr * g_Params->a; while (F_get_val (fc, t_max, scaled_z) < Fs_max) t_max += 0.1; t_min = -0.5; F_start (fc, BOUNDARY_LOWER); while (F_get_val (fc, -t_min, scaled_z) > Fs_min) t_min -= 0.1; // get a table of F-values N = (int)((t_max-t_min)/0.001 + 0.5); dt = (t_max-t_min)/N; F = xnew(double, N+1); F_start (fc, BOUNDARY_UPPER); for (i=0; i<=N; ++i) { double t = t_min+i*dt; if (t < 0) continue; F[i] = F_get_val (fc, t, scaled_z); } F_start (fc, BOUNDARY_LOWER); for (i=N; i>=0; --i) { double t = -(t_min+i*dt); if (t < 0) continue; F[i] = F_get_val (fc, t, scaled_z); } F_delete (fc); // protect against rounding errors: make F increasing and restrict to valid range for (i=0; i<=N; ++i) { if (F[i] < 0) F[i] = 0; if (F[i] > 1) F[i] = 1; } qsort(F, N+1, sizeof(double), compare_doubles); if (F[0] > Fs_min) F[0] = Fs_min; if (F[N] < Fs_max) F[N] = Fs_max; NumericVector out_RTs(s_size); NumericVector out_bounds(s_size);
for (i=0; i<s_size; i++) { double t; int k = find_slot(Fs[i], F, 0, N); t = t_min + dt * (k + (Fs[i]-F[k]) / (F[k+1]-F[k])); out_bounds[i] = t >= 0; out_RTs[i] = fabs(t); }
// Free up allocated objects xfree(F); xfree(Fs); return List::create(Named("rt") = out_RTs, Named("boundary") = out_bounds); } // Support functions for rfastdm () static int compare_doubles (const void *a, const void *b) { const double *da = (double *)a; const double *db = (double *)b; if (*da < *db) return -1; if (*da > *db) return 1; return 0; } static int find_slot(double target, const double *value, int l, int r) { int m = (l+r)/2; if (m == l) { return l; } else if ( value[m] > target) { return find_slot(target, value, l, m); } else { return find_slot(target, value, m, r); } } #endif // SAMPLING_H
rtdists/src/Distribution.h0000644000175000017500000001274013667512040015527 0ustar nileshnilesh/* Distribution.hpp - Contains main CDF calls for fast and precise versions * (precise version originally in plot-cdf.c) * * Copyright (C) 2006 Jochen Voss, Andreas Voss. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License as * published by the Free Software Foundation; either version 2 of the * License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301 USA. */ #ifndef DISTRIBUTION_H #define DISTRIBUTION_H #include <Rcpp.h> using namespace Rcpp; //#define _CDF_DEBUG_ // An R-like version which finds the left-hand area (from 0 to RT) - uses Scott Brown's code, pass in boundary to retrieve NumericVector distribution (NumericVector rts, int boundary) { struct F_calculator *fc; double mint; const double *F; int i,j,nz; double p[1]; // This is CDF(t=Inf)? #ifdef _CDF_DEBUG_ Rcpp::Rcout << "distribution (): Entered distribution ()\n"; Rcpp::Rcout << "Using Parameters:\n"; Rcpp::Rcout << "rts[0] = " << rts[0] << "\n"; Rcpp::Rcout << "boundary = " << boundary << "\n"; #endif fc = F_new (); double scaled = g_Params->zr;// * g_params[p_a]; // SCALING zr BY a (in line with pfastdm and rfastdm) // Get the value of CDF(t=Inf) (for the upper boundary??)
(store in p[0]) F_start(fc, BOUNDARY_UPPER); mint = g_Params->t0 - 0.5*g_Params->st0 ; // Min RT F = F_get_F(fc,mint); nz = F_get_N(fc); j = (int) nz*scaled; p[0] = F[j]; int length = rts.length(); NumericVector out (length); if (boundary == BOUNDARY_UPPER) // Calc upper boundary { #ifdef _CDF_DEBUG_ Rcpp::Rcout << "pfastdm_b: Calculating upper boundary\n"; Rcpp::Rcout << "Using length = "<< length <<"\n"; Rcpp::Rcout << "Using mint = "<< mint <<"\n"; // Rcpp::Rcout << "Using F = "<< XXXXX <<"\n"; Rcpp::Rcout << "Using nz = "<< nz <<"\n"; Rcpp::Rcout << "Using j = "<< j <<"\n"; Rcpp::Rcout << "Using p[0] = "<< p[0] <<"\n"; #endif // ASSUMING F_start (,upper) has already been called (to get CDF(t=Inf)) for (i=0; i < length; i++) { if (rts[i] <= mint) { out[i] = 0.0; } else { // if (i > 0) // { // if (rts[i] <= rts[i-1]) Rcpp::stop("Must call Rfastdm.c with increasing t values (in_RTs[%d]=%g, in_RTs[i-1]=%g.", i, rts[i], rts[i-1]); // } F = F_get_F(fc, rts[i]); nz = F_get_N(fc); j = (int) nz*scaled; out[i] = F[j]-p[0]; } } } else // Calc lower boundary { #ifdef _CDF_DEBUG_ Rcpp::Rcout << "pfastdm_b: Calculating lower boundary\n"; #endif F_start(fc, BOUNDARY_LOWER); for (i=0; i < length; i++) { if (rts[i] <= mint) { out[i]=0.0; } else { // if (i > 0) // { // if (rts[i] <= rts[i-1]) Rcpp::stop("Must call Rfastdm.c with increasing t values (in_RTs[%d]=%g, in_RTs[i-1]=%g.", i, rts[i], rts[i-1]); // } F = F_get_F(fc, rts[i]); nz = F_get_N(fc); j = (int) nz*scaled; out[i] = p[0]-F[j]; } } } #ifdef _CDF_DEBUG_ Rcpp::Rcout << "distribution(): About to delete FC and return.\n"; #endif F_delete(fc); return out; } // A version of distribution using (slower) F_get_val instead NumericVector precise_distribution (NumericVector rts, int boundary) { struct F_calculator *fc; int i; #ifdef _CDF_DEBUG_ Rcpp::Rcout << "precise_distribution (): Entered precise_distribution ()\n"; Rcpp::Rcout << "Using Parameters:\n"; Rcpp::Rcout << "rts[0] = " << rts[0] << "\n"; Rcpp::Rcout << "length of rts = " << rts.length() << "\n"; Rcpp::Rcout << "boundary = " << boundary << "\n"; #endif double scaled = g_Params->zr * g_Params->a; // SCALING zr BY a (in line with pfastdm and rfastdm) fc = F_new (); int length = rts.length(); NumericVector out (length); F_start(fc, BOUNDARY_UPPER); double offset = F_get_val (fc, 0, scaled); // Subtract the value for t=0 if (boundary == BOUNDARY_UPPER) // Calc upper boundary { #ifdef _CDF_DEBUG_ Rcpp::Rcout << "Upper bound offset = " << offset << "\n"; #endif for (i=0; i < length; i++) { out[i] = F_get_val (fc, rts[i], scaled) - offset; } } else { F_start(fc, BOUNDARY_LOWER); #ifdef _CDF_DEBUG_ Rcpp::Rcout << "Lower bound offset = " << offset << "\n"; #endif for (i=0; i < length; i++) { out[i] = offset - F_get_val (fc, rts[i], scaled); } } #ifdef _CDF_DEBUG_ Rcpp::Rcout << "precise_distribution(): About to delete FC and return.\n"; #endif F_delete(fc); return out; } #endif // DISTRIBUTION_H rtdists/src/RcppExports.cpp0000644000175000017500000000711714164636116015702 0ustar nileshnilesh// Generated by using Rcpp::compileAttributes() -> do not edit by hand // Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393 #include <Rcpp.h> using namespace Rcpp; #ifdef RCPP_USE_GLOBAL_ROSTREAM Rcpp::Rostream<true>& Rcpp::Rcout = Rcpp::Rcpp_cout_get(); Rcpp::Rostream<false>& Rcpp::Rcerr = Rcpp::Rcpp_cerr_get(); #endif // d_fastdm NumericVector d_fastdm(NumericVector rts, NumericVector params, double precision, int boundary, bool stop_on_error); RcppExport SEXP _rtdists_d_fastdm(SEXP rtsSEXP, SEXP paramsSEXP, SEXP precisionSEXP,
SEXP boundarySEXP, SEXP stop_on_errorSEXP) { BEGIN_RCPP Rcpp::RObject rcpp_result_gen; Rcpp::RNGScope rcpp_rngScope_gen; Rcpp::traits::input_parameter< NumericVector >::type rts(rtsSEXP); Rcpp::traits::input_parameter< NumericVector >::type params(paramsSEXP); Rcpp::traits::input_parameter< double >::type precision(precisionSEXP); Rcpp::traits::input_parameter< int >::type boundary(boundarySEXP); Rcpp::traits::input_parameter< bool >::type stop_on_error(stop_on_errorSEXP); rcpp_result_gen = Rcpp::wrap(d_fastdm(rts, params, precision, boundary, stop_on_error)); return rcpp_result_gen; END_RCPP } // p_fastdm NumericVector p_fastdm(NumericVector rts, NumericVector params, double precision, int boundary, bool stop_on_error); RcppExport SEXP _rtdists_p_fastdm(SEXP rtsSEXP, SEXP paramsSEXP, SEXP precisionSEXP, SEXP boundarySEXP, SEXP stop_on_errorSEXP) { BEGIN_RCPP Rcpp::RObject rcpp_result_gen; Rcpp::RNGScope rcpp_rngScope_gen; Rcpp::traits::input_parameter< NumericVector >::type rts(rtsSEXP); Rcpp::traits::input_parameter< NumericVector >::type params(paramsSEXP); Rcpp::traits::input_parameter< double >::type precision(precisionSEXP); Rcpp::traits::input_parameter< int >::type boundary(boundarySEXP); Rcpp::traits::input_parameter< bool >::type stop_on_error(stop_on_errorSEXP); rcpp_result_gen = Rcpp::wrap(p_fastdm(rts, params, precision, boundary, stop_on_error)); return rcpp_result_gen; END_RCPP } // p_precise_fastdm NumericVector p_precise_fastdm(NumericVector rts, NumericVector params, double precision, int boundary, bool stop_on_error); RcppExport SEXP _rtdists_p_precise_fastdm(SEXP rtsSEXP, SEXP paramsSEXP, SEXP precisionSEXP, SEXP boundarySEXP, SEXP stop_on_errorSEXP) { BEGIN_RCPP Rcpp::RObject rcpp_result_gen; Rcpp::RNGScope rcpp_rngScope_gen; Rcpp::traits::input_parameter< NumericVector >::type rts(rtsSEXP); Rcpp::traits::input_parameter< NumericVector >::type params(paramsSEXP); Rcpp::traits::input_parameter< double >::type precision(precisionSEXP); Rcpp::traits::input_parameter< int >::type boundary(boundarySEXP); Rcpp::traits::input_parameter< bool >::type stop_on_error(stop_on_errorSEXP); rcpp_result_gen = Rcpp::wrap(p_precise_fastdm(rts, params, precision, boundary, stop_on_error)); return rcpp_result_gen; END_RCPP } // r_fastdm List r_fastdm(int num_values, NumericVector params, double precision, bool stop_on_error); RcppExport SEXP _rtdists_r_fastdm(SEXP num_valuesSEXP, SEXP paramsSEXP, SEXP precisionSEXP, SEXP stop_on_errorSEXP) { BEGIN_RCPP Rcpp::RObject rcpp_result_gen; Rcpp::RNGScope rcpp_rngScope_gen; Rcpp::traits::input_parameter< int >::type num_values(num_valuesSEXP); Rcpp::traits::input_parameter< NumericVector >::type params(paramsSEXP); Rcpp::traits::input_parameter< double >::type precision(precisionSEXP); Rcpp::traits::input_parameter< bool >::type stop_on_error(stop_on_errorSEXP); rcpp_result_gen = Rcpp::wrap(r_fastdm(num_values, params, precision, stop_on_error)); return rcpp_result_gen; END_RCPP } rtdists/src/CDF_sv_variability.h0000644000175000017500000001157113667512040016554 0ustar nileshnilesh/* CDF_sv_variability.hpp - Functions to calculate CDF when there is variance * in the drift rate parameter (sv != 0) (originally in cdf.c and phi.c) * * Copyright (C) 2006 Jochen Voss, Andreas Voss. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License as * published by the Free Software Foundation; either version 2 of the * License, or (at your option) any later version. 
* * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301 USA. */ #ifndef CDF_SV_VARIABILITY_H #define CDF_SV_VARIABILITY_H // // sv: variability in v // #include "CDF_sz_variability.h" // Includes normal and inverse normal CDF code required only here (originally in phi.c) double Phi (double x); double Phi_inverse (double y); static void F_sv_delete (F_calculator *fc); static void F_sv_start (F_calculator *fc, int plus); static const double *F_sv_get_F (F_calculator *fc, double t); static double F_sv_get_z (const F_calculator *fc, int i); struct F_sv_data { int nv; // number of points in integration F_calculator **base_fc; // F_calculators for different v double *avg; }; static F_calculator *F_sv_new (Parameters *params) // Allocate a new 'struct F_calculator'. // // This initialises the PDE and prepares most things for the calculation. The initial condition for the returned // 'struct F_calculator' has to be set using 'F_start'. // // This function can deal with variabilities in all parameters. If 'sv == 0', it just returns the result of 'F_sz_new'. { struct F_calculator **base_fc; struct F_calculator *fc; struct F_sv_data *data; int nv, j; double sv = params->sv; // convenience only: cache sv locally if (sv < params->TUNE_SV_EPSILON) return F_sz_new (params); // No need to integrate nv = (int)(sv/params->TUNE_DV + 0.5); if (nv < 3) nv = 3; // Create a temp copy of the parameters Parameters temp_params = *params; // SHOULD WORK, BUT CHECK THIS // Integrate across svs temp_params.sv = 0; base_fc = xnew (struct F_calculator *, nv); // NOTE: MEMORY ALLOCATION for (j=0; j<nv; ++j) { temp_params.v = Phi_inverse ((0.5+j)/nv) * sv + params->v; base_fc[j] = F_sz_new (&temp_params); } fc = xnew (struct F_calculator, 1); // NOTE: MEMORY ALLOCATION fc->N = base_fc[0]->N; fc->plus = -1; data = xnew (struct F_sv_data, 1); // NOTE: MEMORY ALLOCATION data->nv = nv; data->base_fc = base_fc; data->avg = xnew (double, fc->N+1); // NOTE: MEMORY ALLOCATION fc->data = data; fc->start = F_sv_start; fc->free = F_sv_delete; fc->get_F = F_sv_get_F; fc->get_z = F_sv_get_z; return fc; } static void F_sv_delete (F_calculator *fc) { F_sv_data *data = (F_sv_data *)fc->data; int j; for (j=0; j<data->nv; ++j) F_delete (data->base_fc[j]); xfree (data->base_fc); xfree (data->avg); xfree (data); xfree (fc); } static void F_sv_start (F_calculator *fc, int plus) { F_sv_data *data = (F_sv_data *)fc->data; int j; fc->plus = plus; for (j=0; j<data->nv; ++j) F_start (data->base_fc[j], plus); } static const double *F_sv_get_F (F_calculator *fc, double t) { F_sv_data *data = (F_sv_data *)fc->data; const double *F; double *avg = data->avg; int i, j; F = F_get_F(data->base_fc[0], t); for (i=0; i<=fc->N; ++i) avg[i] = F[i]; for (j=1; j<data->nv; ++j) { F = F_get_F(data->base_fc[j], t); for (i=0; i<=fc->N; ++i) avg[i] += F[i]; } for (i=0; i<=fc->N; ++i) avg[i] /= data->nv; return avg; } static double F_sv_get_z (const F_calculator *fc, int i) { F_sv_data *data = (F_sv_data *)fc->data; return F_get_z (data->base_fc[0], i); } double Phi (double x) /* The distribution function of the standard normal distribution.
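   Phi(x) = 0.5 * (1 + erf(x / sqrt(2))); the erf() from the C math library is used below.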
*/ { return 0.5*(1+erf (x/M_SQRT2)); } double Phi_inverse (double y) /* The inverse of Phi, calculated using the bisection method */ { double l, r; if (y<=0.5) { l = -1; while (Phi(l)>=y) l -= 1; r = l+1; } else { r = 0; while (Phi(r)<y) r += 1; l = r-1; } do { double m = 0.5*(l+r); if (Phi(m) < y) l = m; else r = m; } while (r-l > 1e-8); return 0.5*(l+r); } #endif // CDF_SV_VARIABILITY_H rtdists/src/RFastDM.h0000644000175000017500000000560413667512040014311 0ustar nileshnilesh/* RFastDM.Hpp - Main header file for the RCpp implementation of fast-dm * * * Copyright (C) 2006 Jochen Voss, Andreas Voss. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License as * published by the Free Software Foundation; either version 2 of the * License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301 USA. */ #ifndef RFASTDM_H #define RFASTDM_H #include <Rcpp.h> using namespace Rcpp; #include #define MAX_INPUT_VALUES 1e+6 // Include all .hpp files // Note: This is bad organisation, but Rcpp (and especially RStudio's "sourceCpp" don't seem to handle // projects with multiple .cpp files) // Used by both PDF and CDF (also includes tuning params) #include "Parameters.h" // While not enforced, this is the global parameters Singleton // To be created and freed in the d_, p_ and r_ calls in RFastDM.cpp Parameters *g_Params; #define BOUNDARY_LOWER 0 #define BOUNDARY_UPPER 1 // PDF #include "Density.h" // Placing memory routines used by CDF code here // TODO: (relying on C-style realloc until we have a better C++ solution - just vector?) void *xmalloc (size_t size) { void *ptr; if (size == 0) return NULL; ptr = malloc (size); if (ptr == NULL) Rcpp::stop("memory exhausted"); return ptr; } // Originally this was used once as a mem optimization in solve_tridiag () (pde.c, line 43) // (relevant code is now in CDF_no_variability.hpp) void *xrealloc (void *ptr, size_t newsize) { if (newsize == 0) { if (ptr) free (ptr); return NULL; } ptr = ptr ? realloc (ptr, newsize) : malloc(newsize); if (ptr == NULL) Rcpp::stop("memory exhausted"); return ptr; } #define xnew(T,N) ((T *)xmalloc((N)*sizeof(T))) #define xrenew(T,OLD,N) ((T *)xrealloc(OLD,(N)*sizeof(T))) void xfree (void *ptr) { if (ptr) free (ptr); } // CDF code #include "FCalculator.h" // Forward declare all FController functions for access from all CDF_* F_calculator *F_new (); void F_delete (F_calculator *fc); void F_start (F_calculator *fc, int boundary); int F_get_N (const F_calculator *fc); double F_get_z (const F_calculator *fc, int i); const double *F_get_F (F_calculator *fc, double t); double F_get_val (F_calculator *fc, double t, double z); #include "FController.h" #include "Distribution.h" #include "Sampling.h" #endif // RFASTDM_H rtdists/src/CDF_no_variability.h0000644000175000017500000001527013667512040016540 0ustar nileshnilesh/* CDF_no_variability.hpp - Functions to calculate CDF when all variance * parameters are 0 (originally in cdf.c and pde.c) * * Copyright (C) 2006 Jochen Voss, Andreas Voss.
* * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License as * published by the Free Software Foundation; either version 2 of the * License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301 USA. */ #ifndef CDF_NO_VARIABILITY_H #define CDF_NO_VARIABILITY_H #include "Parameters.h" #include "FController.h" // Forward declarations static void F_plain_delete (F_calculator *fc); static void F_plain_start (F_calculator *fc, int plus); static const double *F_plain_get_F (F_calculator *fc, double t); static double F_plain_get_z (const F_calculator *fc, int i); void advance_to (int N, double *vector, double t0, double t1, double dz, double v); struct F_plain_data { double a, v, t0, d; // parameters (except z) double dz; // z step-size double t_offset; // time adjustment, resulting from t0 and d double t; // adjusted time, corresponding to the vector F double *F; // state at time t + t_offset }; static F_calculator *F_plain_new (Parameters *params) // Allocate a new 'struct F_calculator' (without variabilities). { F_calculator *fc; F_plain_data *data; int N; // N must be even, otherwise the case szr == 1 fails N = 2*(int)(params->a*0.5/params->TUNE_DZ+0.5); if (N<4) N = 4; fc = xnew (struct F_calculator, 1); // NOTE: MEMORY ALLOCATION: struct F_calculator fc->N = N; fc->plus = -1; data = xnew (struct F_plain_data, 1); // NOTE: MEMORY ALLOCATION: struct F_plain_data data->a = params->a; data->v = params->v; data->t0 = params->t0; data->d = params->d; data->dz = params->a/N; data->F = xnew (double, N+1); // NOTE: MEMORY ALLOCATION: (N+1) doubles fc->data = data; fc->start = F_plain_start; fc->free = F_plain_delete; fc->get_F = F_plain_get_F; fc->get_z = F_plain_get_z; return fc; } static void F_plain_delete (F_calculator *fc) { F_plain_data *data = (F_plain_data*)fc->data; xfree (data->F); xfree (data); xfree (fc); } static void F_plain_start (F_calculator *fc, int plus) { F_plain_data *data = (F_plain_data*)fc->data; double a = data->a; double v = data->v; int N = fc->N; int i; fc->plus = plus; data->t_offset = data->t0 - data->d * (plus == 1? 0.5 : -0.5); data->t = 0; data->F[0] = (plus == 1) ? 1 : 0; for (i=1; i<N; i++) { double z = i*data->dz; data->F[i] = F_limit(a, z, v); } data->F[N] = (plus == 1) ? 1 : 0; } static const double *F_plain_get_F (F_calculator *fc, double t) { F_plain_data *data = (F_plain_data*)fc->data; t -= data->t_offset; if (t > data->t) { advance_to (fc->N, data->F, data->t, t, data->dz, data->v); data->t = t; } return data->F; } static double F_plain_get_z (const F_calculator *fc, int i) { F_plain_data *data = (F_plain_data*)fc->data; return i * data->dz; } // NOTE: Functions from pde.c // (note the use of a static array of doubles and a realloc for optimisation) // TODO: Investigate if there's a better/faster/more robust/more C++ way of doing this // NOTE: *res is in/out // static void solve_tridiag(int n, const double *rhs, double *res, double left, double mid, double right) // Solve an n by n tridiagonal system of linear equations.
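// (This is the Thomas algorithm: a forward-elimination sweep followed by back-substitution, i.e., O(n) per solve.)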
// The matrix has 'mid' on the diagonal, 'left' on the subdiagonal and 'right' on the superdiagonal. // { static double *tmp = NULL; static int tmp_len = 0; double p, old_res, old_tmp; int i; if (n-1 > tmp_len) { // Reallocating/freeing the scratch buffer for every call to 'solve_tridiag' caused about 10% of the total CPU load during // some fast-dm runs. To avoid this problem, re-use the same buffer between runs if possible. tmp = xrenew(double, tmp, n-1); // NOTE: MEMORY ALLOCATION tmp_len = n-1; } /* step 1: solving forward */ tmp[0] = old_tmp = right / mid; res[0] = old_res = rhs[0] / mid; for (i=1; i<n-1; ++i) { p = 1 / (mid - left*old_tmp); tmp[i] = old_tmp = right * p; res[i] = old_res = (rhs[i] - left*old_res) * p; } res[n-1] = (rhs[n-1] - left*old_res) / (mid - left*old_tmp); /* step 2: solving backward */ for (i=n-1; i>0; --i) res[i-1] -= tmp[i-1]*res[i]; } // NOTE: *vector is in/out static void make_step (int N, double *vector, double dt, double dz, double v) // Advance the numerical solution of the PDE by one step in time, using the Crank-Nicolson scheme. // The time step size is 'dt', the space grid size is 'dz'. */ { double *tmp_vector; double left, mid, right; int i; tmp_vector = xnew (double, N+1); // NOTE: MEMORY ALLOCATION left = (1-dz*v) / (2*dz*dz); mid = -1 / (dz*dz); right = (1+dz*v) / (2*dz*dz); tmp_vector[1] = (dt*left * vector[0] + (1+0.5*dt*mid) * vector[1] + 0.5*dt*right * vector[2]); for (i=2; i<N-1; i++) { tmp_vector[i] = (0.5*dt*left * vector[i-1] + (1+0.5*dt*mid) * vector[i] + 0.5*dt*right * vector[i+1]); } tmp_vector[N-1] = (0.5*dt*left * vector[N-2] + (1+0.5*dt*mid) * vector[N-1] + dt*right * vector[N]); solve_tridiag(N-1, tmp_vector+1, vector+1, -0.5*dt*left, 1-0.5*dt*mid, -0.5*dt*right); xfree (tmp_vector); } // NOTE: *vector is in/out; it is data->F from F_plain_get_F void advance_to (int N, double *vector, double t0, double t1, double dz, double v) // Advance the state 'vector' of the PDE from time 't0' to time 't1' { int done = 0; do { double dt = g_Params->TUNE_PDE_DT_MIN + g_Params->TUNE_PDE_DT_SCALE*t0; if (dt > g_Params->TUNE_PDE_DT_MAX) dt = g_Params->TUNE_PDE_DT_MAX; if (t0 + dt >= t1) { dt = t1 - t0; done = 1; } make_step (N, vector, dt, dz, v); t0 += dt; } while (!done); } #endif // CDF_NO_VARIABILITY_H rtdists/src/RFastDM.cpp0000644000175000017500000001130714070616276014646 0ustar nileshnilesh/* RFastDM.cpp - Main source file for the RCpp implementation of fast-dm * * Copyright (C) 2006 Jochen Voss, Andreas Voss. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License as * published by the Free Software Foundation; either version 2 of the * License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301 USA. */ #define STRICT_R_HEADERS #include #include #include #include #include "RFastDM.h" using namespace Rcpp; // R-callable PDF for fastdm - pass boundary to retrieve (1 = lower, 2 = upper) // [[Rcpp::export]] NumericVector d_fastdm (NumericVector rts, NumericVector params, double precision=3, int boundary=2, bool stop_on_error=true) { int length = rts.length(); if (length > MAX_INPUT_VALUES) { Rcpp::stop("Number of RT values passed in exceeds maximum of %d.\n", MAX_INPUT_VALUES); } if ((boundary < 1) || (boundary > 2)) { Rcpp::stop ("Boundary must be either 2 (upper) or 1 (lower)\n"); } g_Params = new Parameters (params, precision); NumericVector out(length, 0.0); // Should default to 0s when creating NumericVector, but just in case..
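// Note that with stop_on_error = false a vector of zero densities is returned for invalid
// parameters instead of raising an R-level error; optimisers can thus treat out-of-range
// proposals as having zero likelihood rather than aborting the whole fit.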
if (!g_Params->ValidateParams(stop_on_error)) { if (stop_on_error) { Rcpp::stop("Error validating parameters.\n"); } else { return out; } } out = density (rts, boundary-1); delete g_Params; return out; } // R-callable CDF which finds the left-hand area (from 0 to RT) - pass boundary to retrieve (1 = lower, 2 = upper) // [[Rcpp::export]] NumericVector p_fastdm (NumericVector rts, NumericVector params, double precision=3, int boundary=2, bool stop_on_error=true) { int length = rts.length(); if (length > MAX_INPUT_VALUES) { Rcpp::stop("Number of RT values passed in exceeds maximum of %d.\n", MAX_INPUT_VALUES); } if ((boundary < 1) || (boundary > 2)) { Rcpp::stop ("Boundary must be either 2 (upper) or 1 (lower)\n"); } g_Params = new Parameters (params, precision); NumericVector out(length, 0.0); if (!g_Params->ValidateParams(stop_on_error)) { if (stop_on_error) { Rcpp::stop("Error validating parameters.\n"); } else return out; } // Pass through to distribution in Distribution.hpp out = distribution (rts, boundary-1); delete g_Params; return out; } // More precise R-callable CDF which finds the left-hand area (from 0 to RT) - pass boundary to retrieve (1 = lower, 2 = upper) // [[Rcpp::export]] NumericVector p_precise_fastdm (NumericVector rts, NumericVector params, double precision=3, int boundary=2, bool stop_on_error=true) { int length = rts.length(); if (length > MAX_INPUT_VALUES) { Rcpp::stop("Number of RT values passed in exceeds maximum of %d.\n", MAX_INPUT_VALUES); } if ((boundary < 1) || (boundary > 2)) { Rcpp::stop ("Boundary must be either 2 (upper) or 1 (lower)\n"); } g_Params = new Parameters (params, precision); NumericVector out(length, 0.0); if (!g_Params->ValidateParams(stop_on_error)) { if (stop_on_error) { Rcpp::stop("Error validating parameters.\n"); } else return out; } // Pass through to precise_distribution.hpp out = precise_distribution (rts, boundary-1); delete g_Params; return out; } // R-style sampling from the DM - returns a List consisting of RTs and boundaries // [[Rcpp::export]] List r_fastdm (int num_values, NumericVector params, double precision=3, bool stop_on_error=true) { if ((num_values < 1) || (num_values > MAX_INPUT_VALUES)) { Rcpp::stop("Number of samples requested exceeds maximum of %d.\n", MAX_INPUT_VALUES); } g_Params = new Parameters (params, precision); if (!g_Params->ValidateParams(stop_on_error)) { if (stop_on_error) { Rcpp::stop("Error validating parameters.\n"); } else { NumericVector out_RTs(num_values, 0.0); NumericVector out_bounds(num_values, 0.0); return List::create(Named("rt") = out_RTs, Named("boundary") = out_bounds); } } // Pass through to Sampling.hpp List out = sampling (num_values); delete g_Params; return out; } rtdists/src/FController.h0000644000175000017500000000713113667512040015277 0ustar nileshnilesh/* FController.hpp - Main controller class to manage an FCalculator structure * and call the function pointers stored within (originally in cdf.c) * * Copyright (C) 2006 Jochen Voss, Andreas Voss. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License as * published by the Free Software Foundation; either version 2 of the * License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. 
* * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301 USA. */ #ifndef FCONTROLLER_H #define FCONTROLLER_H #include "CDF_st0_variability.h" // // All functions are externally visible // F_calculator *F_new () // Allocate data required to compute a CDF. // // CONVERSION NOTE: while only F_sz_new and F_plain_new rely on copying the global parameters to integrate over // we'll pass them for all *_new calls for orthogonality // // The returned structure must be initialised with 'F_start' and must be freed with 'F_delete' after use. { return F_st0_new (g_Params); } void F_delete (F_calculator *fc) // Free a 'struct F_calculator' and all associated resources. 'fc' must have been allocated by 'F_new'. After 'F_delete' is // called, 'fc' cannot be used any longer. { fc->free (fc); } void F_start (F_calculator *fc, int boundary) // Set the initial condition for the PDE. // If upper boundary prepare to calculate the CDF for hitting a before 0, // otherwise prepare to calculate the CDF for hitting 0 before a. // // CONVERSION NOTE: Changed from enum boundary b to int boundary, where 0 = lower and 1 = upper { fc->start (fc, boundary); } int F_get_N (const F_calculator *fc) { return fc->N; } double F_get_z (const F_calculator *fc, int i) // Get the z-value corresponding to index i. { return fc->get_z (fc, i); } const double *F_get_F (F_calculator *fc, double t) // Get the array of CDF values at time t for all grid points z. // // 'F_start' must be used for initialisation before calling 'F_get_F'. Between calls of 'F_start' the calls to 'F_get_F' must have // increasing values of 't'. // The returned array is owned by the 'struct F_calculator' and must not be changed or freed. // { return fc->get_F (fc, t); } double F_get_val (F_calculator *fc, double t, double z) // Get the value of the CDF for the parameters given when creating fc. // The function uses linear interpolation for z-values between the grid points. // Don't use this function for parameter fitting, since it is not very fast (use 'F_get_F' instead). // // ? CONVERSION NOTE: Speed difference seems to be within +/- 10% for rt/parameter combinations so far tested // { const double *F; double z0, z1; double p, x; int N = fc->N; int i; F = F_get_F (fc, t); if (N == 0) { x = F[0]; } else { z0 = F_get_z (fc, 0); z1 = F_get_z (fc, N); i = (int)(N*(z-z0)/(z1-z0)); if (i < N) { z0 = F_get_z (fc, i); z1 = F_get_z (fc, i+1); p = (z1-z) / (z1-z0); x = p*F[i] + (1-p)*F[i+1]; } else { x = F[N]; } } return x; } #endif // FCONTROLLER_H rtdists/src/init.c0000644000175000017500000000167213667512040014010 0ustar nileshnilesh#include <R.h> #include <Rinternals.h> #include <stdlib.h> // for NULL #include <R_ext/Rdynload.h> /* FIXME: Check these declarations against the C/Fortran source code.
*/ /* .Call calls */ extern SEXP _rtdists_d_fastdm(SEXP, SEXP, SEXP, SEXP, SEXP); extern SEXP _rtdists_p_fastdm(SEXP, SEXP, SEXP, SEXP, SEXP); extern SEXP _rtdists_p_precise_fastdm(SEXP, SEXP, SEXP, SEXP, SEXP); extern SEXP _rtdists_r_fastdm(SEXP, SEXP, SEXP, SEXP); static const R_CallMethodDef CallEntries[] = { {"_rtdists_d_fastdm", (DL_FUNC) &_rtdists_d_fastdm, 5}, {"_rtdists_p_fastdm", (DL_FUNC) &_rtdists_p_fastdm, 5}, {"_rtdists_p_precise_fastdm", (DL_FUNC) &_rtdists_p_precise_fastdm, 5}, {"_rtdists_r_fastdm", (DL_FUNC) &_rtdists_r_fastdm, 4}, {NULL, NULL, 0} }; void R_init_rtdists(DllInfo *dll) { R_registerRoutines(dll, NULL, CallEntries, NULL, NULL); R_useDynamicSymbols(dll, FALSE); } rtdists/src/FCalculator.h0000644000175000017500000000531613667512040015250 0ustar nileshnilesh/* FCalculator.hpp - A structure containing the data and function pointers * required to calculate a 'dimension' of the CDF integration. * (originally in cdf.c) * * Copyright (C) 2006 Jochen Voss, Andreas Voss. * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License as * published by the Free Software Foundation; either version 2 of the * License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301 USA. */ #ifndef FCALCULATOR_H #define FCALCULATOR_H /********************************************************************** * struct F_calculator: * Store information to calculate the cumulative distribution function F. * * Usage: * 1) Allocate a F_calculator structure with the 'F_new' function below. * This initialises the appropriate method for the given variabilities. * 2) Set the initial condition for the PDE with 'F_start' * 3) Get an array of computed values at time t with 'F_get_F'. * The field 'F_calculator.N' gives the length of the array. * 4) Get the z-value associated with array element 'i' using * the function 'F_get_z'. * * The values returned by the functions F_get_F and F_get_val are not * directly the values of the CDF, but they are transformed to ease * the use of the results by the higher levels of fast-dm. To get the * actual values of the CDF, the following transform has to be applied * to the function results: * * for b_upper: CDF = return value - F^-(\infty) * for b_lower: CDF = F^-(\infty) - return value * * When all variabilities are zero, F^-(\infty) can be computed using * the function 'F_limit'. 
 */

static double F_limit(double a, double z, double v)
{
    if (fabs(v) < 1e-8) {
        return 1 - z/a;
    } else {
        return (exp(-2*v*z)-exp(-2*v*a)) / (1-exp(-2*v*a));
    }
}

struct F_calculator {
    int N;
    int plus;    // boundary
    void *data;  // To be filled in

    void (*start) (F_calculator *, int plus);  // CONVERSION NOTE: 'plus' was enum boundary (0=lower,1=upper)
    void (*free) (F_calculator *);             // CONVERSION NOTE: was "delete" in C version
    const double *(*get_F) (F_calculator *, double t);
    double (*get_z) (const F_calculator *, int i);
};

#endif // FCALCULATOR_H
rtdists/vignettes/0000755000175000017500000000000014164644437014125 5ustar nileshnilesh
rtdists/vignettes/rr98_full-lba_fits.rda0000644000175000017500000000257514012213045020210 0ustar nileshnilesh[binary .rda contents omitted]
rtdists/vignettes/rr98_full-diffusion_fits.rda0000644000175000017500000000257414012213045021437 0ustar nileshnilesh[binary .rda contents omitted]
rtdists/vignettes/reanalysis_rr98.Rmd0000644000175000017500000014321214155710063017617 0ustar nileshnilesh---
title: "Reanalysis of Ratcliff and Rouder (1998) with Diffusion Model and LBA"
author: "Henrik Singmann"
date: "`r Sys.Date()`"
output:
  rmarkdown::html_vignette:
    toc: true
vignette: >
  %\VignetteIndexEntry{Reanalysis of Ratcliff and Rouder (1998) with Diffusion Model and LBA}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

This vignette provides the `R` scripts for a reanalysis of Experiment 1 of Ratcliff and Rouder (1998). In contrast to the original analysis, which used RT bins, we will employ trial-wise maximum likelihood estimation. The code heavily uses [`dplyr`](https://cran.r-project.org/package=dplyr) ([vignette](https://CRAN.R-project.org/package=dplyr/vignettes/dplyr.html)), [`tidyr`](https://cran.r-project.org/package=tidyr) ([vignette](https://CRAN.R-project.org/package=tidyr/vignettes/tidy-data.html)), and [`purrr`](https://cran.r-project.org/package=purrr) for data handling and `lattice` (see `?Lattice`) and `latticeExtra` (specifically `as.layer`) for plotting. A thorough introduction to the former three packages is provided in [R for Data Science](https://r4ds.had.co.nz/) by Wickham and Grolemund, see especially Chapter 25.

# Description of the Experiment

In the experiment, three participants were asked to decide whether the overall brightness of pixel arrays displayed on a computer monitor was "high" or "low". To this end, the number of white versus black pixels (i.e., the brightness `strength`) was manipulated in 33 levels from 0% white pixels (level 0) to 100% white pixels (level 32). In addition, instruction (speed versus accuracy emphasis) was manipulated between blocks. In total, each participant contributed around 4000 trials per instruction condition.

The experiment contained another manipulation, the distribution (or brightness `source`) from which the pixel array was drawn. One distribution mean was on the "high" brightness side and one distribution mean was on the "low" brightness side. However, as the distributions were unbounded and overlapping, the same strength level could come from either distribution.
Participants also received feedback on whether or not they had picked the correct distribution (e.g., for the middle strength level 16 the probability of belonging to either source was 50%). We do not further consider this manipulation in the following, which seems to be in line with the analysis of Ratcliff and Rouder (1998).

# Descriptive data

As a first step, we load the data and then plot the probability with which each response (i.e., "dark" or "light") is given as a function of strength and instruction condition. This clearly shows that there is a massive effect of strength on which response is given, while the instruction only seems to have a minor effect that is larger at the extremes than in the middle.

```{r, fig.height=4, fig.width=7, message=FALSE, warning=FALSE}
require(rtdists)
require(dplyr)   # for data manipulations and looping
require(tidyr)   # for data manipulations
require(purrr)   # for data manipulations
require(lattice) # for plotting and corresponding themes
require(latticeExtra)
lattice.options(default.theme = standard.theme(color = FALSE))
lattice.options(default.args = list(as.table = TRUE))
options(digits = 3) # only three decimal digits
require(binom)  # for binomial confidence intervals

data(rr98)
rr98 <- rr98[!rr98$outlier,]  # remove outliers

# aggregate data for first plot:
agg_rr98 <- rr98 %>% group_by(id, instruction, strength) %>%
  summarise(prop = mean(response == "dark"),
            mean_rt = mean(rt),
            median_rt = median(rt)) %>%  # was mean(rt), an apparent copy-paste slip
  ungroup()

xyplot(prop ~ strength|id, agg_rr98, group = instruction, type = "b",
       auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses")
```

Next, we want to get an overview of the response time distributions. For this we look at the response times of five quantiles (i.e., 0.1, 0.3, 0.5/median, 0.7, 0.9) across the strength manipulation. This time, we also separate the plots by condition as the speed condition resulted in, as expected, vastly shorter response times. These two plots reveal considerable differences between the two instruction conditions.

```{r, fig.height=6, fig.width=7}
quantiles <- c(0.1, 0.3, 0.5, 0.7, 0.9)

## aggregate data for quantile plot
quantiles_rr98 <- rr98 %>%
  group_by(id, instruction, strength) %>%
  nest() %>%
  mutate(quantiles = map(data, ~ as.data.frame(t(quantile(.x$rt, probs = quantiles))))) %>%
  unnest(quantiles) %>%
  gather("quantile", "rt", `10%`:`90%`) %>%
  arrange(id, instruction, strength)

xyplot(rt ~ strength|id + instruction, quantiles_rr98, group = quantile, type = "b",
       auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
       subset = instruction == "speed")
xyplot(rt ~ strength|id + instruction, quantiles_rr98, group = quantile, type = "b",
       auto.key = FALSE, ylab = "RT (in seconds)",
       subset = instruction == "accuracy")
```

In the speed condition, response times were, as expected, generally fast and there seemed to be hardly any effect of strength. Only for one participant, `nh`, can we see a small increase in RTs for the higher quantiles at strength values near the middle. In contrast, in the accuracy condition strength has a considerable effect on response times for all participants. Again, this increase was especially strong for the slower responses (i.e., the higher quantiles). For those we see a strong inverse u-shaped effect, symmetrical around the middle -- where the probability for each response is 50% -- with very high response times for strength values near the middle.
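As a quick cross-check of the `dplyr`/`tidyr` pipeline above (this check is ours and not part of the original analysis), the same per-cell quantiles can also be computed with base R's `aggregate()`, which forwards `probs` to `quantile()`:

```{r, eval = FALSE}
## Equivalent base-R computation of the RT quantiles per id x instruction x
## strength cell; the resulting rt entry is a matrix with one column per quantile.
quant_check <- aggregate(rt ~ id + instruction + strength, data = rr98,
                         FUN = quantile, probs = quantiles)
head(quant_check$rt)
```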
However, as this plot is getting a little bit messy, we now bin the strength levels to remove noise, which should provide a clearer overview. For this, we construct five separate strength bins with approximately equal response behavior and comparable numbers of trials. This is similar to what was done originally by Ratcliff and Rouder (1998). The next table shows the number of trials per participant, bin, and response.

```{r, fig.height=4, fig.width=7}
#bins <- c(-0.5, 5.5, 10.5, 13.5, 16.5, 19.5, 25.5, 32.5) # seven bins like RR98
bins <- c(-0.5, 10.5, 13.5, 16.5, 19.5, 32.5)
rr98$strength_bin <- cut(rr98$strength, breaks = bins, include.lowest = TRUE)
levels(rr98$strength_bin) <- as.character(seq_len(length(bins) - 1)) # i.e., 1:5 (1:7 would add two empty levels)

# aggregate data for response probability plot:
agg_rr98_bin <- rr98 %>%
  group_by(id, instruction, strength_bin) %>%
  summarise(n1 = n(),
            dark = sum(response == "dark"),
            light = sum(response == "light")) %>%
  ungroup() %>%
  mutate(prop = map2(dark, n1, ~ binom.confint(.x, .y, methods = "agresti-coull"))) %>%
  unnest(prop)

knitr::kable(
  rr98 %>% group_by(id, instruction, strength_bin, response) %>%
    summarise(n = n()) %>%
    spread(strength_bin, n)
)
```

We first look again at the response proportions and now see more clearly the difference between the instruction conditions at the outer bins.

```{r, fig.height=4, fig.width=7}
xyplot(mean ~ strength_bin|id, agg_rr98_bin, group = instruction, type = "b",
       auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses")
```

Now we also look again at the RT quantiles and see more clearly the symmetrical inverse u-shaped increase in RTs for the middle bins described above.

```{r, fig.height=6, fig.width=7}
## aggregate data for quantile plot
quantiles_rr98_bin <- rr98 %>%
  group_by(id, instruction, strength_bin) %>%
  nest() %>%
  mutate(quantiles = map(data, ~ as.data.frame(t(quantile(.x$rt, probs = quantiles))))) %>%
  unnest(quantiles) %>%
  gather("quantile", "rt", `10%`:`90%`) %>%
  arrange(id, instruction, strength_bin)

xyplot(rt ~ strength_bin|id + instruction, quantiles_rr98_bin, group = quantile, type = "b",
       auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
       subset = instruction == "speed")
xyplot(rt ~ strength_bin|id + instruction, quantiles_rr98_bin, group = quantile, type = "b",
       auto.key = FALSE, ylab = "RT (in seconds)",
       subset = instruction == "accuracy")
```

With this clear pattern we now take a look at the RT distributions separately for both responses to see if they are simply mirror images of each other or not. For this, we overlay the RT quantile plot for all trials in which the response was "dark", in black (there are more "dark" pixels for the bins on the left side of the plot), with the same plot for all trials in which the response was "light", in grey (there are more "light" pixels for the bins on the right side of the plot).
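Before overlaying the two plots, it helps to know which response dominates each bin, because "errors" in a bin are then simply the other response. The following one-off summary (ours, purely descriptive) makes this explicit:

```{r, eval = FALSE}
## Proportion of "dark" responses per strength bin (collapsed over everything
## else): values above 0.5 mean "dark" is dominant and "light" counts as "error".
rr98 %>%
  group_by(strength_bin) %>%
  summarise(prop_dark = mean(response == "dark"))
```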
```{r, fig.height=6, fig.width=7}
agg2_rr98_response <- rr98 %>%
  group_by(id, instruction, strength_bin, response) %>%
  nest() %>%
  mutate(quantiles = map(data, ~ as.data.frame(t(quantile(.x$rt, probs = quantiles))))) %>%
  unnest(quantiles) %>%
  gather("quantile", "rt", `10%`:`90%`) %>%
  arrange(id, instruction, response, strength_bin)

p1 <- xyplot(rt ~ strength_bin|id, agg2_rr98_response, group = quantile, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "speed" & response == "dark", layout = c(3,1))
p2 <- xyplot(rt ~ strength_bin|id, agg2_rr98_response, group = quantile, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "speed" & response == "light", col = "grey")
p1 + as.layer(p2)

p1 <- xyplot(rt ~ strength_bin|id, agg2_rr98_response, group = quantile, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "accuracy" & response == "dark", layout = c(3,1))
p2 <- xyplot(rt ~ strength_bin|id, agg2_rr98_response, group = quantile, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "accuracy" & response == "light", col = "grey")
p1 + as.layer(p2)
```

These two plots reveal an interesting pattern. In the speed condition (upper plot), we particularly see fast "errors" (i.e., "dark" responses when there are more light pixels, or the other way round). When "dark" is the more likely response (i.e., on the left side) the "light" responses in grey are faster, and this is especially true for the lower quantiles. The opposite pattern holds on the other side, where "dark" responses in black are faster than "light" responses in grey. At intermediate bins the difference seems to lie rather in the higher quantiles. This is particularly noticeable for participant `kr`, for whom there seem to be slow "light" errors just to the left of the middle bin and slow "dark" errors just to the right of the middle bin.

For the accuracy condition in the lower plot the pattern is noticeably different. First of all, there are only very few or no "error" responses in the extreme bins. Consequently, there does not seem to be any evidence for fast errors at the extremes (and also not at intermediate strength levels). However, here we see the slow errors at the intermediate bins more clearly. When "dark" is somewhat more probable (i.e., to the left of the middle) "light" responses are noticeably slower than "dark" responses. The same holds for "dark" responses if "light" is more probable. Importantly, this shows that the symmetrical inverse u-shaped increase for the middle bins described above is actually a consequence of a mixture of slow "errors": two asymmetric increases, one for each of the two responses.

# Diffusion Model Analysis

We will follow Ratcliff and Rouder (1998) and analyze the data with the diffusion model. For this, we will fit a separate model to each participant and instruction condition. To do so, we will first create a new data set we will use for fitting. This data set will be `nested`, with one row for each combination of the variables:

```{r}
d_nested <- rr98 %>%
  group_by(id, instruction) %>% # we loop across both, id and instruction
  nest()
d_nested
```

Like Ratcliff and Rouder we will fit the data to the strength bins instead of the full strength manipulation.
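The central building block of the following analysis is the diffusion model density, `ddiffusion()`. As a reminder of its interface, here is a minimal call; all parameter values are made up purely for illustration (following the mapping used throughout this vignette, the lower boundary corresponds to "dark" and the upper boundary to "light"):

```{r, eval = FALSE}
## Diffusion density at two RTs for each boundary (illustrative values only):
ddiffusion(c(0.5, 0.7), response = "upper", a = 1, v = 2, t0 = 0.3, sv = 0.5)
ddiffusion(c(0.5, 0.7), response = "lower", a = 1, v = 2, t0 = 0.3, sv = 0.5)
```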
We fit essentially the full diffusion model (with the exception of $s_{t0}$) to each instruction condition, which results in 10 parameters per participant and instruction condition:

- 5 drift rates $v$ (i.e., one per strength bin)
- 1 boundary separation $a$
- 1 non-decision time $t_0$
- 1 drift rate variability $s_v$
- 1 start point $z$ (for ease of interpretation we parameterize this as the *relative* start point so that values different from 0.5 indicate a bias towards one of the responses)
- 1 start point variability $s_z$

Like in Ratcliff and Rouder (1998), the two response boundaries are the two response options "dark" and "light". To estimate the model we diverge from Ratcliff and Rouder and employ trial-wise maximum likelihood estimation (i.e., no binning of responses).

For this, we simply need a wrapper function which returns the negative summed log-likelihood of the data (i.e., RTs and corresponding responses) given a set of parameters. We need the negative sum because most optimization functions minimize, whereas we want to obtain the maximum likelihood value. The following function, in which we simply loop across drift rates, does exactly this:

```{r}
# objective function for diffusion with 1 a. loops over drift to assign drift rates to strength
objective_diffusion_separate <- function(pars, rt, response, drift, ...) {
  non_v_pars <- grep("^v", names(pars), invert = TRUE, value = TRUE)
  base_par <- length(non_v_pars)  # number of non-drift parameters
  densities <- vector("numeric", length(rt))
  for (i in seq_along(levels(drift))) {
    densities[drift == levels(drift)[i]] <- ddiffusion(
      rt[drift == levels(drift)[i]],
      response = response[drift == levels(drift)[i]],
      a = pars["a"],
      t0 = pars["t0"],
      sv = pars["sv"],
      sz = if ("sz" %in% non_v_pars) pars["sz"] else 0.1,
      z = if ("z" %in% non_v_pars) pars["z"]*pars["a"] else 0.5*pars["a"],
      st0 = if ("st0" %in% non_v_pars) pars["st0"] else 0,
      v = pars[base_par + i])
  }
  if (any(densities == 0)) return(1e6)
  return(-sum(log(densities)))
}
```

Note that the function is written in such a way that we could easily fix certain parameters without the necessity to change it (using `if`-`then` on the parameter names passed via `pars`). Additionally, we also need a function that generates a set of random starting values. And, as any random set of starting values may be invalid, we need another wrapper function that generates starting values until a set of valid starting values is found and then passes those to the optimization routine. As optimization routine we will be using `nlminb`. These functions are given next and are specified in a way that they will be usable for other model variants for this data (e.g., fixing parameters).
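Before turning to those functions (defined in the next chunk), it may help to see the objective function in action. The following direct call on the first cell of `d_nested` uses an arbitrary, purely illustrative parameter vector; `test_pars` and `one_cell` are our names:

```{r, eval = FALSE}
## Direct call of the objective function; returns the negative summed
## log-likelihood (or 1e6 whenever a density of 0 is encountered).
test_pars <- c(a = 1, t0 = 0.2, sv = 0.5, sz = 0.1, z = 0.5,
               v_1 = -2, v_2 = -1, v_3 = 0, v_4 = 1, v_5 = 2)
one_cell <- d_nested$data[[1]]
objective_diffusion_separate(test_pars, rt = one_cell$rt,
                             response = one_cell$response_num,
                             drift = factor(one_cell$strength_bin, seq_len(5)))
```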
```{r}
# function that creates random start values
get_start <- function(base_par, n_drift = 5) {
  start1 <- c(
    a = runif(1, 0.5, 3),
    a_1 = runif(1, 0.5, 3),
    a_2 = runif(1, 0.5, 3),
    t0 = runif(1, 0, 0.5),
    z = runif(1, 0.4, 0.6),
    sz = runif(1, 0, 0.5),
    sv = runif(1, 0, 0.5),
    st0 = runif(1, 0, 0.5),
    d = rnorm(1, 0, 0.05)
  )
  start2 <- sort(rnorm(n_drift), decreasing = FALSE)
  names(start2) <- paste0("v_", seq_len(n_drift))
  c(start1[base_par], start2)
}

# function that tries different random start values until it works:
ensure_fit <- function(data, start_function, objective_function, base_pars,
                       n_drift = 5, n_fits = 1,
                       lower = c(rep(0, length(base_pars)), -Inf,
                                 rep(-Inf, length(start_function(base_pars))-length(base_pars)))) {
  best_fit <- list(objective = 1e+06)
  for (i in seq_len(n_fits)) {
    start_ll <- 1e+06
    #browser()
    while (start_ll == 1e+06) {
      start <- start_function(base_pars, n_drift = n_drift)
      start_ll <- objective_function(start, rt = data$rt,
                                     response = data$response_num,
                                     drift = factor(data$strength_bin, seq_len(n_drift)),
                                     instruction = data$instruction)
    }
    cat("\nstart fitting.\n")  # just for information to see if it is stuck

    fit <- nlminb(start, objective_function, rt = data$rt,
                  response = data$response_num,
                  drift = factor(data$strength_bin, seq_len(n_drift)),
                  instruction = data$instruction,
                  lower = lower)
    if (fit$objective < best_fit$objective) best_fit <- fit
  }
  out <- as.data.frame(t(unlist(best_fit[1:3])))
  colnames(out) <- sub("par.", "", colnames(out))
  out
}
```

```{r, echo=FALSE}
load("rr98_full-diffusion_fits.rda")
load("rr98_full-lba_fits.rda")
```

With these functions in place, we now simply need to loop over participants and instruction conditions to obtain the fit. The simplest way is perhaps to use the combination of `purrr::map` and `dplyr::mutate` as shown here:

```{r, eval = FALSE}
fit_diffusion <- d_nested %>%
  mutate(fit = map(data, ~ensure_fit(data = ., start_function = get_start,
                                     objective_function = objective_diffusion_separate,
                                     base_pars = c("a", "t0", "sv", "sz", "z")))) %>%
  unnest(fit)
```

On Unix-like systems (i.e., Linux and Mac) we can easily use the built-in multicore functionality via `parallel::mclapply` to distribute fitting of the different parts across different cores:

```{r, eval = FALSE}
require(parallel)
fit_diffusion <- d_nested
fit_diffusion$fit <-
  mclapply(d_nested$data, function(x)
    ensure_fit(data = x, start_function = get_start,
               objective_function = objective_diffusion_separate,
               base_pars = c("a", "t0", "sv", "sz", "z")),
    mc.cores = 2)
fit_diffusion <- unnest(fit_diffusion, fit)
```

The following table gives the parameter values, the negative summed log-likelihoods (i.e., `objective`, where smaller is better), and the convergence code of the optimization algorithm (where 0 indicates no problem) obtained from this fit:

```{r}
fit_diffusion$data <- NULL
if (!("st0" %in% colnames(fit_diffusion))) fit_diffusion$st0 <- 0
if (!("z" %in% colnames(fit_diffusion))) fit_diffusion$z <- 0.5
if (!("sz" %in% colnames(fit_diffusion))) fit_diffusion$sz <- 0.1
knitr::kable(fit_diffusion)
```

We can see from these values that there is a large effect of instruction on $a$. However, instruction also has effects on other parameters:

- $t_0$ is consistently larger in the accuracy compared to the speed condition, although this effect is small.
- $s_v$ is estimated at 0 or very low in the speed condition, but at 0.5 or 1 in the accuracy condition. This is consistent with the absence of slow "errors" in the speed condition.
- $s_z$ is consistently larger in the speed condition, consistent with the presence of more fast "errors" in the speed than in the accuracy condition.
- $v$ appears to be more extreme (i.e., smaller for $v_1$ and $v_2$ and larger for $v_4$ and $v_5$) in the speed compared to the accuracy condition.

```{r obtain_fits_not_run, eval = FALSE, include = FALSE}
require(parallel)
fit_diffusion <- d_nested %>%
  mutate(fit = map(data, ~ensure_fit(data = ., start_function = get_start,
                                     objective_function = objective_diffusion_separate,
                                     base_pars = c("a", "t0", "sv", "sz", "z")))) %>%
  unnest(fit)
fit_diffusion$data <- NULL

fit_diffusion2 <- d_nested
fit_diffusion2$fit <-
  mclapply(d_nested$data, function(x)
    ensure_fit(data = x, start_function = get_start,
               objective_function = objective_diffusion_separate,
               base_pars = c("a", "t0", "sv", "sz", "z")),
    mc.cores = 3)
fit_diffusion2 <- unnest(fit_diffusion2, fit)
fit_diffusion2$data <- NULL
all.equal(as.data.frame(fit_diffusion), as.data.frame(fit_diffusion2), tolerance = 0.01)

save(fit_diffusion, fit_diffusion2, file = "rr98_full-diffusion_fits.rda")
```

## Graphical Model Fit

### Predicted Response Rates

To evaluate the fits graphically we first compare the actual response rates for the two responses with the predicted response rates. The grey lines and points show the observed data and the error bars are binomial confidence intervals. The black lines and points show the predicted response rates.

```{r, fig.height=5, fig.width=7, message=FALSE}
# get predicted response proportions
pars_separate_l <- fit_diffusion %>% gather("strength_bin", "v", starts_with("v"))
pars_separate_l$strength_bin <- factor(substr(pars_separate_l$strength_bin, 3, 3),
                                       levels = as.character(seq_len(length(bins)-1)))
#pars_separate_l <- inner_join(pars_separate_l, agg_rr98_bin)
pars_separate_l <- pars_separate_l %>%
  group_by(id, instruction, strength_bin) %>%
  mutate(resp_prop = pdiffusion(rt = Inf, response = "lower", a = a, v = v, t0 = t0,
                                sz = sz, z = a*z, sv = sv, st0 = st0))

p1 <- xyplot(mean ~ strength_bin|id + instruction, agg_rr98_bin, type = "b",
             auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
             col = "grey")
p2 <- segplot(strength_bin ~ upper + lower|id + instruction, agg_rr98_bin,
              auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
              col = "grey", horizontal = FALSE, segments.fun = panel.arrows,
              draw.bands = FALSE, angle = 90, length = 0.05, ends = "both")
p3 <- xyplot(resp_prop ~ strength_bin|id + instruction, pars_separate_l, type = "b",
             auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
             col = "black")
p2 + as.layer(p1) + as.layer(p3)
```

This figure shows that overall the model can predict the actual response rates very accurately. There are only a few minor deviations.

### Predicted Median RTs

Next we compare the central tendency of the RTs with the predictions. For this we evaluate the quantile function (i.e., the inverse CDF) at the quantiles scaled by the predicted response proportions. Again, the data is shown in grey (and error bars show the standard errors of the median) and the predicted RTs in black. We first show the predictions for the speed condition, separated by response.
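Because the diffusion model CDF is defective (each boundary only receives part of the total probability mass), the predicted proportions of the two responses must sum to one. A quick sanity check for the first row of `pars_separate_l` (this check is ours, not part of the original analysis):

```{r, eval = FALSE}
## The defective CDFs of both boundaries evaluated at Inf should sum to 1:
p <- pars_separate_l[1, ]
pdiffusion(Inf, response = "lower", a = p$a, v = p$v, t0 = p$t0,
           sz = p$sz, z = p$z * p$a, sv = p$sv, st0 = p$st0) +
  pdiffusion(Inf, response = "upper", a = p$a, v = p$v, t0 = p$t0,
             sz = p$sz, z = p$z * p$a, sv = p$sv, st0 = p$st0)
```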
```{r, fig.height=6, fig.width=7, message=FALSE}
# get predicted quantiles (uses predicted response proportions)
separate_pred_dark <- pars_separate_l %>%
  do(as.data.frame(t(
    qdiffusion(quantiles*.$resp_prop, response = "lower",
               a = .$a, v = .$v, t0 = .$t0, sz = .$sz, z = .$z*.$a,
               sv = .$sv, st0 = .$st0)))) %>%
  ungroup() %>% gather("quantiles", "dark", V1:V5)
separate_pred_light <- pars_separate_l %>%
  do(as.data.frame(t(
    qdiffusion(quantiles*(1-.$resp_prop), response = "upper",
               a = .$a, v = .$v, t0 = .$t0, sz = .$sz, z = .$z*.$a,
               sv = .$sv, st0 = .$st0)))) %>%
  ungroup() %>% gather("quantiles", "light", V1:V5)
#separate_pred_light %>% filter(is.na(light))

separate_pred <- inner_join(separate_pred_dark, separate_pred_light)
separate_pred$quantiles <- factor(separate_pred$quantiles,
                                  levels = c("V5", "V4", "V3", "V2", "V1"),
                                  labels = c("90%", "70%", "50%", "30%", "10%"))
separate_pred <- separate_pred %>% gather("response", "rt", dark, light)

# get SE for observed quantiles
agg2_rr98_response_se <- rr98 %>%
  group_by(id, instruction, strength_bin, response) %>%
  summarise(se_median = sqrt(pi/2)*(sd(rt)/sqrt(n()))) %>%
  ungroup()

# calculate error bars for quantiles.
agg2_rr98_response <- left_join(agg2_rr98_response, agg2_rr98_response_se)
agg2_rr98_response <- agg2_rr98_response %>%
  mutate(lower = rt - se_median, upper = rt + se_median)

p1 <- xyplot(rt ~ strength_bin|id + response, agg2_rr98_response, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "speed" & quantile == "50%",
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper + lower|id + response, agg2_rr98_response,
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both",
               subset = instruction == "speed" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, separate_pred, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "speed" & quantiles == "50%",
             scales = list(y = list(limits = c(0.25, 0.5))))
p2 + as.layer(p1) + as.layer(p1e)
```

Again, the model seems to be overall able to describe the general pattern quite well. However, there are some visible misfits for participants `jf` and `nh`.

The next plot shows the same comparison for the accuracy condition. Here we again see that the model is able to predict the pattern in the data quite well. While there also seems to be quite some misfit for participant `nh`, this only appears in conditions with very few trials, as indicated by the large error bars.
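(A brief technical note before that plot: `qdiffusion()` above receives `quantiles * resp_prop` because the quantile function, like the CDF, operates on the defective scale of the respective response. The following roundtrip check, ours and purely illustrative, makes this explicit for the first parameter row:)

```{r, eval = FALSE}
## pdiffusion() evaluated at the predicted median of the "dark" (lower)
## response should return half the predicted "dark" response proportion.
p <- pars_separate_l[1, ]
q50 <- qdiffusion(0.5 * p$resp_prop, response = "lower", a = p$a, v = p$v,
                  t0 = p$t0, sz = p$sz, z = p$z * p$a, sv = p$sv, st0 = p$st0)
pdiffusion(q50, response = "lower", a = p$a, v = p$v, t0 = p$t0,
           sz = p$sz, z = p$z * p$a, sv = p$sv, st0 = p$st0) # ~ 0.5 * p$resp_prop
```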
```{r, fig.height=6, fig.width=7}
p1 <- xyplot(rt ~ strength_bin|id + response, agg2_rr98_response, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "accuracy" & quantile == "50%",
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper + lower|id + response, agg2_rr98_response,
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both",
               subset = instruction == "accuracy" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, separate_pred, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "accuracy" & quantiles == "50%",
             scales = list(y = list(limits = c(0.2, 1.5))))
p2 + as.layer(p1) + as.layer(p1e)
```

### All quantiles

Next, we investigate the full RT distribution by comparing observed and predicted quantiles. The observed quantiles are again displayed in grey and the predictions in black. The first plot shows the speed condition separated by response.

```{r, fig.height=7, fig.width=7}
p1 <- xyplot(rt ~ strength_bin|id + response, agg2_rr98_response, group = quantile,
             type = "b", auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "speed", layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper + lower|id + response, agg2_rr98_response,
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both",
               subset = instruction == "speed")
p2 <- xyplot(rt ~ strength_bin|id + response, separate_pred, group = quantiles,
             type = "b", auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "speed",
             scales = list(y = list(limits = c(0.2, 0.9))))
p2 + as.layer(p1) + as.layer(p1e)
```

This plot shows some clear misfits for the diffusion model, particularly in the upper quantiles. But there are also misfits in the lower quantiles.

The next plot shows the accuracy condition separated by response. Here it appears that the diffusion model provides an overall better account. However, it is important to consider the different y-axis scaling of both plots.

```{r, fig.height=7, fig.width=7}
p1 <- xyplot(rt ~ strength_bin|id + response, agg2_rr98_response, group = quantile,
             type = "b", auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "accuracy", layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper + lower|id + response, agg2_rr98_response,
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both",
               subset = instruction == "accuracy")
p2 <- xyplot(rt ~ strength_bin|id + response, separate_pred, group = quantiles,
             type = "b", auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "accuracy",
             scales = list(y = list(limits = c(0.1, 3.0))))
p2 + as.layer(p1) + as.layer(p1e)
```

Overall the diffusion model provides a good account of the data. Only when considering all quantiles do we see some clear misfits. Nevertheless, the general trends in the data are well recovered; the only exceptions here are conditions with very low numbers of trials.

# LBA analysis

Next, we fit the LBA model to the data. For this, we use an LBA model with the same number of parameters.
To make the model identifiable, we fix the sum of the drift rates to 1. Specifically, the model has the following 10 parameters per participant and instruction condition:

- 5 drift rates $v$ (i.e., one per strength bin)
- 2 maximum start points $A$ (i.e., one for each accumulator)
- 1 non-decision time $t_0$
- 1 drift rate variability $s_v$
- 1 response threshold $b$ (parameterized as an increment to the max value of the two $A$)

To fit the model we need an objective function wrapping the LBA PDF and a new function for generating the correct starting values.

```{r}
# objective function for the LBA with 1 b. loops over drift to assign drift rates to strength
objective_lba_separate <- function(pars, rt, response, drift, ...) {
  non_v_pars <- grep("^v", names(pars), invert = TRUE, value = TRUE)
  base_par <- length(non_v_pars)  # number of non-drift parameters
  densities <- vector("numeric", length(rt))
  for (i in seq_along(levels(drift))) {
    if (sum(drift == levels(drift)[i]) == 0) next
    densities[drift == levels(drift)[i]] <- dLBA(
      rt[drift == levels(drift)[i]],
      response = response[drift == levels(drift)[i]],
      A = list(pars["a_1"], pars["a_2"]),
      b = max(pars["a_1"], pars["a_2"]) + pars["b"],
      t0 = pars["t0"],
      mean_v = c(pars[i], 1 - pars[i]),
      sd_v = pars["sv"], silent = TRUE)
  }
  if (any(densities == 0)) return(1e6)
  return(-sum(log(densities)))
}

# function that creates random start values
get_start_lba <- function(base_par, n_drift = 10) {
  start1 <- c(
    a = runif(1, 0.5, 3),
    a_1 = runif(1, 0.5, 3),
    a_2 = runif(1, 0.5, 3),
    t0 = runif(1, 0, 0.5),
    b = runif(1, 0, 0.5),
    sv = runif(1, 0.5, 1.5),
    st0 = runif(1, 0, 0.5)
  )
  start2 <- sort(rnorm(n_drift), decreasing = FALSE)
  names(start2) <- paste0("v_", seq_len(n_drift))
  c(start2, start1[base_par])
}
```

With this, we simply need to loop across participants and instructions to estimate the LBA. Again, we need to run several fitting runs to reach the maximum likelihood estimate (i.e., the global optimum).

```{r, eval=FALSE}
fit_lba <- d_nested %>%
  mutate(fit = map(data, ~ensure_fit(data = ., start_function = get_start_lba,
                                     objective_function = objective_lba_separate,
                                     base_pars = c("a_1", "a_2", "t0", "b", "sv"),
                                     lower = c(rep(-Inf, 5), rep(0, 5)),
                                     n_drift = 5, n_fits = 10))) %>%
  unnest(fit)
```

Again, on Unix-like systems (i.e., Linux and Mac) we can use multiple cores via `parallel::mclapply`:

```{r, eval = FALSE}
require(parallel)
fit_lba <- d_nested
fit_lba$fit <-
  mclapply(d_nested$data, function(x)
    ensure_fit(data = x, start_function = get_start_lba,
               objective_function = objective_lba_separate,
               base_pars = c("a_1", "a_2", "t0", "b", "sv"),
               lower = c(rep(-Inf, 5), rep(0, 5)),
               n_drift = 5, n_fits = 10),
    mc.cores = 2)
fit_lba <- unnest(fit_lba, fit)
```

The following table gives the parameters and the negative summed log-likelihoods obtained from the LBA fit (with $b$ already correctly transformed by the maximum $A$). Note that some of the parameters might differ slightly for id = `kr` and instruction = `accuracy`, although the value of the objective function is identical to the reported one. This suggests that the likelihood surface is either quite shallow near the MLE or that there are at least two peaks in the likelihood surface with a similar maximum. The fact that the convergence code for this data set is 1 instead of 0 also suggests some problems in finding the global optimum. In any case, running the optimization multiple times and comparing the results should reveal such problems.
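Before looking at that table, the threshold parameterization used in the objective function can be illustrated with a single `dLBA()` call; all values below are made up:

```{r, eval = FALSE}
## dLBA() with accumulator-specific maximum start points (A as a list) and a
## common threshold b = max(A) + increment, as in objective_lba_separate():
dLBA(rt = c(0.5, 0.7), response = c(1, 2),
     A = list(0.6, 0.8), b = max(0.6, 0.8) + 0.3, t0 = 0.2,
     mean_v = c(0.8, 0.2), sd_v = 0.4, silent = TRUE)
```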
```{r}
knitr::kable(fit_lba)
```

The negative log-likelihood (column `objective`) shows that the LBA provides a better account for four of the six data sets (because it is the negative log-likelihood, smaller is better). The diffusion model only provides a better account for the `kr` and `nh` accuracy conditions.

In terms of the parameter estimates we see a pattern similar to the one for the diffusion model:

- Instruction shows the expected effect on $b$.
- Instruction also shows an effect on $A$, which is also larger in the accuracy compared to the speed condition.
- Instruction seems to have a small effect on $t_0$ in the same direction.
- Instruction also affected $s_v$ in the same direction.
- Instruction also affected $v$, which seemed to be more extreme in the accuracy compared to the speed condition.

```{r obtain_fits_lba, eval = FALSE, include = FALSE}
fit_lba <- d_nested %>%
  mutate(fit = map(data, ~ensure_fit(data = ., start_function = get_start_lba,
                                     objective_function = objective_lba_separate,
                                     base_pars = c("a_1", "a_2", "t0", "b", "sv"),
                                     lower = c(rep(-Inf, 5), rep(0, 5)),
                                     n_drift = 5, n_fits = 10))) %>%
  unnest(fit)
fit_lba$data <- NULL

fit_lba2 <- d_nested
fit_lba2$fit <-
  mclapply(d_nested$data, function(x)
    ensure_fit(data = x, start_function = get_start_lba,
               objective_function = objective_lba_separate,
               base_pars = c("a_1", "a_2", "t0", "b", "sv"),
               lower = c(rep(-Inf, 5), rep(0, 5)),
               n_drift = 5, n_fits = 10),
    mc.cores = 2)
fit_lba2 <- unnest(fit_lba2, fit)
fit_lba2$data <- NULL
all.equal(as.data.frame(fit_lba), as.data.frame(fit_lba2), tolerance = 0.03)

save(fit_lba, fit_lba2, file = "rr98_full-lba_fits.rda")

# alternative objective function for an LBA with 1 A and accumulator-specific sd_v.
# loops over drift to assign drift rates to strength
objective_lba_separate <- function(pars, rt, response, drift, ...) {
  non_v_pars <- grep("^v", names(pars), invert = TRUE, value = TRUE)
  base_par <- length(non_v_pars)  # number of non-drift parameters
  densities <- vector("numeric", length(rt))
  for (i in seq_along(levels(drift))) {
    if (sum(drift == levels(drift)[i]) == 0) next
    densities[drift == levels(drift)[i]] <- dLBA(
      rt[drift == levels(drift)[i]],
      response = response[drift == levels(drift)[i]],
      A = pars["a"],
      b = pars["a"] + pars["b"],
      t0 = pars["t0"],
      mean_v = pars[((i-1)*2+1):((i-1)*2+2)],
      sd_v = c(1, pars["sv"]), silent = TRUE)
  }
  if (any(densities == 0)) return(1e6)
  return(-sum(log(densities)))
}

# alternative objective function for an LBA with accumulator-specific A and b.
# loops over drift to assign drift rates to strength
objective_lba_separate <- function(pars, rt, response, drift, ...) {
  non_v_pars <- grep("^v", names(pars), invert = TRUE, value = TRUE)
  base_par <- length(non_v_pars)  # number of non-drift parameters
  densities <- vector("numeric", length(rt))
  for (i in seq_along(levels(drift))) {
    if (sum(drift == levels(drift)[i]) == 0) next
    densities[drift == levels(drift)[i]] <- dLBA(
      rt[drift == levels(drift)[i]],
      response = response[drift == levels(drift)[i]],
      A = list(pars["a_1"], pars["a_2"]),
      b = list(pars["a_1"] + pars["b"], pars["a_2"] + pars["b"]),
      t0 = pars["t0"],
      mean_v = c(pars[i], 1 - pars[i]),
      sd_v = pars["sv"], silent = TRUE)
  }
  if (any(densities == 0)) return(1e6)
  return(-sum(log(densities)))
}
```

## Graphical Model Fit

The fact that the LBA provides a slightly better fit is also visible in the graphical fit assessment.

### Predicted Response Rates

We again first consider the predicted response rates (in black), and they are also highly accurate.
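(Before that plot, the likelihood-based comparison discussed above can also be tabulated directly. The rows of `fit_lba` and `fit_diffusion` align because both were created from `d_nested`; the name `comp` is ours:)

```{r, eval = FALSE}
## Side-by-side negative log-likelihoods of both models (smaller is better):
comp <- data.frame(id = fit_lba$id, instruction = fit_lba$instruction,
                   lba = fit_lba$objective, diffusion = fit_diffusion$objective)
comp$better <- ifelse(comp$lba < comp$diffusion, "LBA", "diffusion")
comp
```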
```{r, fig.height=5, fig.width=7, message=FALSE}
# get predicted response proportions
lba_pars_separate_l <- fit_lba %>% gather("strength_bin", "v", starts_with("v"))
lba_pars_separate_l$strength_bin <- factor(substr(lba_pars_separate_l$strength_bin, 3, 3),
                                           levels = as.character(seq_len(length(bins)-1)))
#pars_separate_l <- inner_join(pars_separate_l, agg_rr98_bin)
lba_pars_separate_l <- lba_pars_separate_l %>%
  group_by(id, instruction, strength_bin) %>%
  mutate(resp_prop = pLBA(rt = Inf, response = 1, A = list(a_1, a_2), sd_v = sv,
                          mean_v = c(v, 1-v), t0 = t0, b = max(a_1, a_2) + b,
                          silent = TRUE))

p1 <- xyplot(mean ~ strength_bin|id + instruction, agg_rr98_bin, type = "b",
             auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
             col = "grey")
p2 <- segplot(strength_bin ~ upper + lower|id + instruction, agg_rr98_bin,
              auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
              col = "grey", horizontal = FALSE, segments.fun = panel.arrows,
              draw.bands = FALSE, angle = 90, length = 0.05, ends = "both")
p3 <- xyplot(resp_prop ~ strength_bin|id + instruction, lba_pars_separate_l, type = "b",
             auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
             col = "black")
p2 + as.layer(p1) + as.layer(p3)
```

### Predicted Median RTs

Again, the data is shown in grey (and error bars show the standard errors of the median) and the predicted RTs in black. We first show the predictions for the speed condition, separated by response.

```{r, fig.height=6, fig.width=7, message=FALSE}
# get predicted quantiles (uses predicted response proportions)
lba_separate_pred_dark <- lba_pars_separate_l %>%
  do(as.data.frame(t(
    qLBA(quantiles*.$resp_prop, response = 1, A = list(.$a_1, .$a_2), sd_v = .$sv,
         mean_v = c(.$v, 1-.$v), t0 = .$t0, b = max(.$a_1, .$a_2) + .$b,
         silent = TRUE)))) %>%
  ungroup() %>% gather("quantiles", "dark", V1:V5)
lba_separate_pred_light <- lba_pars_separate_l %>%
  do(as.data.frame(t(
    qLBA(quantiles*(1-.$resp_prop), response = 2, A = list(.$a_1, .$a_2), sd_v = .$sv,
         mean_v = c(.$v, 1-.$v), t0 = .$t0, b = max(.$a_1, .$a_2) + .$b,
         silent = TRUE)))) %>%
  ungroup() %>% gather("quantiles", "light", V1:V5)
#separate_pred_light %>% filter(is.na(light))

lba_separate_pred <- inner_join(lba_separate_pred_dark, lba_separate_pred_light)
lba_separate_pred$quantiles <- factor(lba_separate_pred$quantiles,
                                      levels = c("V5", "V4", "V3", "V2", "V1"),
                                      labels = c("90%", "70%", "50%", "30%", "10%"))
lba_separate_pred <- lba_separate_pred %>% gather("response", "rt", dark, light)

p1 <- xyplot(rt ~ strength_bin|id + response, agg2_rr98_response, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "speed" & quantile == "50%",
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper + lower|id + response, agg2_rr98_response,
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both",
               subset = instruction == "speed" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "speed" & quantiles == "50%",
             scales = list(y = list(limits = c(0.25, 0.5))))
p2 + as.layer(p1) + as.layer(p1e)
```

The next plot shows the same comparison for the accuracy condition.
```{r, fig.height=6, fig.width=7}
p1 <- xyplot(rt ~ strength_bin|id + response, agg2_rr98_response, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "accuracy" & quantile == "50%",
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper + lower|id + response, agg2_rr98_response,
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both",
               subset = instruction == "accuracy" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "accuracy" & quantiles == "50%",
             scales = list(y = list(limits = c(0.2, 1.5))))
p2 + as.layer(p1) + as.layer(p1e)
```

Overall the LBA is able to describe the central tendency relatively well. The only considerable misfit is evident at the extreme bins with few responses (i.e., large error bars).

### All quantiles

Next, we investigate the full RT distribution by comparing observed and predicted quantiles. The observed quantiles are again displayed in grey and the predictions in black. The first plot shows the speed condition separated by response.

```{r, fig.height=7, fig.width=7}
p1 <- xyplot(rt ~ strength_bin|id + response, agg2_rr98_response, group = quantile,
             type = "b", auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "speed", layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper + lower|id + response, agg2_rr98_response,
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both",
               subset = instruction == "speed")
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, group = quantiles,
             type = "b", auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "speed",
             scales = list(y = list(limits = c(0.2, 0.6))))
p2 + as.layer(p1) + as.layer(p1e)
```

The next plot shows the accuracy condition separated by response.

```{r, fig.height=7, fig.width=7}
p1 <- xyplot(rt ~ strength_bin|id + response, agg2_rr98_response, group = quantile,
             type = "b", auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "accuracy", layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper + lower|id + response, agg2_rr98_response,
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both",
               subset = instruction == "accuracy")
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, group = quantiles,
             type = "b", auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "accuracy",
             scales = list(y = list(limits = c(0.1, 3.3))))
p2 + as.layer(p1) + as.layer(p1e)
```

These two plots show that not only the central tendency but also the other quantiles are described quite well.

# Comparing Model Fit

Finally, we graphically compare the fit of the two models. Here we can see that while the LBA seems to provide a slightly better fit (as indicated by the lower negative log-likelihoods), the diffusion model seems to recover some of the trends in the data somewhat better.

## Predicted Response Rates

We again first consider the predicted response rates (in black) for the two models.
```{r, fig.height=6.5, fig.width=7, message=FALSE}
key <- simpleKey(text = c("data", "LBA", "Diffusion"), lines = TRUE)
key$lines$col <- c("grey", "black", "black")
key$lines$lty <- c(1, 1, 2)
key$points$col <- c("grey", "black", "black")
key$points$pch <- c(1, 0, 4)

p1 <- xyplot(mean ~ strength_bin|id + instruction, agg_rr98_bin, type = "b",
             auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
             col = "grey")
p2 <- segplot(strength_bin ~ upper + lower|id + instruction, agg_rr98_bin,
              auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
              col = "grey", horizontal = FALSE, segments.fun = panel.arrows,
              draw.bands = FALSE, angle = 90, length = 0.05, ends = "both")
p3 <- xyplot(resp_prop ~ strength_bin|id + instruction, lba_pars_separate_l, type = "b",
             auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
             col = "black", pch = 0)
p4 <- xyplot(resp_prop ~ strength_bin|id + instruction, pars_separate_l, type = "b",
             auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
             col = "black", lty = 2, key = key, pch = 4)
p4 + as.layer(p2) + as.layer(p1) + as.layer(p3)
```

When comparing the fit of the two models like this, it becomes apparent that both models make very similar predictions for the response rates. It is difficult to see large differences between the models.

## Predicted Median RTs

Next, we also compare the two accounts for the median RTs. As before, data is shown in grey and we begin with a plot for the speed condition, separated by response.

```{r, fig.height=6.5, fig.width=7, message=FALSE}
p1 <- xyplot(rt ~ strength_bin|id + response, agg2_rr98_response, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "speed" & quantile == "50%",
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper + lower|id + response, agg2_rr98_response,
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both",
               subset = instruction == "speed" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "speed" & quantiles == "50%",
             scales = list(y = list(limits = c(0.25, 0.5))), pch = 0)
p3 <- xyplot(rt ~ strength_bin|id + response, separate_pred, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "speed" & quantiles == "50%",
             scales = list(y = list(limits = c(0.25, 0.5))),
             col = "black", lty = 2, key = key, pch = 4)
p3 + as.layer(p2) + as.layer(p1) + as.layer(p1e)
```

This plot suggests that the diffusion model is better able than the LBA to predict changes in the median RTs across strength bins in the speed condition. While this leads to obvious misfit in certain cases (`dark` responses for `nh`), it appears to provide a generally better recovery of the patterns in the data. The next plot shows the same comparison for the accuracy condition.
```{r, fig.height=6.5, fig.width=7, message=FALSE}
p1 <- xyplot(rt ~ strength_bin|id + response, agg2_rr98_response, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "accuracy" & quantile == "50%",
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper + lower|id + response, agg2_rr98_response,
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses",
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both",
               subset = instruction == "accuracy" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "accuracy" & quantiles == "50%", pch = 0)
p3 <- xyplot(rt ~ strength_bin|id + response, separate_pred, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "accuracy" & quantiles == "50%",
             scales = list(y = list(limits = c(0.2, 1.5))),
             col = "black", lty = 2, key = key, pch = 4)
p3 + as.layer(p2) + as.layer(p1) + as.layer(p1e)
```

Here we make the same observation as for the speed condition. Considerable changes in median RT across strength bins are better captured by the diffusion model than by the LBA. This leads to the situation that in most cases the diffusion model makes more accurate predictions for conditions with low numbers of trials.

# References

- Ratcliff, R., & Rouder, J. N. (1998). Modeling Response Times for Two-Choice Decisions. _Psychological Science_, 9(5), 347--356. https://doi.org/10.1111/1467-9280.00067
rtdists/build/0000755000175000017500000000000014164644437013204 5ustar nileshnilesh
rtdists/build/vignette.rds0000644000175000017500000000040114164644437015546 0ustar nileshnilesh[binary .rds contents omitted]
rtdists/tests/0000755000175000017500000000000013667512040013246 5ustar nileshnilesh
rtdists/tests/testthat/0000755000175000017500000000000014166044002015077 5ustar nileshnilesh
rtdists/tests/testthat/test-lba-math.R0000644000175000017500000001061713667512040017700 0ustar nileshnileshcontext("LBA-math agrees with current implementation")

runif(1)
x <- .Random.seed
set.seed(2)

test_that("PDF and CDF", {
  n <- 10
  samples_per_run <- 100
  source(system.file("extdata", "lba-math.R", package = "rtdists"))
  #source("inst/extdata//lba-math.r")
  for (i in seq_len(n)) {
    A <- runif(1, 0.3, 0.9)
    b <- A+runif(1, 0, 0.5)
    t0 <- runif(1, 0.1, 0.7)
    v1 <- runif(2, 0.5, 1.5)
    v2 <- runif(2, 0.1, 0.5)
    r_lba1 <- rLBA(samples_per_run, A=A, b=b, t0 = t0, mean_v=v1[1:2], sd_v=v2[1:2])
    expect_equal(
      dlba_norm(r_lba1$rt[r_lba1$response==1], A=A, b=b, t0 = t0, mean_v=v1[1], sd_v=v2[1]),
      fptpdf(pmax(r_lba1$rt[r_lba1$response==1]-t0[1], 0), x0max=A, chi=b, driftrate=v1[1], sddrift=v2[1])
    )
    expect_equal(
      plba_norm(r_lba1$rt[r_lba1$response==1], A=A, b=b, t0 = t0, mean_v=v1[1], sd_v=v2[1]),
      fptcdf(pmax(r_lba1$rt[r_lba1$response==1]-t0[1], 0), x0max=A, chi=b, driftrate=v1[1], sddrift=v2[1])
    )
  }
})

test_that("small A values for 'norm'", {
  n <- 10
  samples_per_run <- 100
  source(system.file("extdata", "lba-math.R", package = "rtdists"))
  #source("inst/extdata//lba-math.r")
  for (i in seq_len(n)) {
    A <- runif(1, 0, 1e-10)
    b <- A+runif(1, 0, 0.5)
    t0 <- runif(1, 0.1, 0.7)
    v1 <- runif(2, 0.5, 1.5)
    v2 <- runif(2, 0.1, 0.5)
    r_lba1 <- rlba_norm(samples_per_run, A=A, b=b, t0 = t0, mean_v=v1[1:2], sd_v=v2[1:2])
    expect_equal(
dlba_norm(r_lba1[,"rt"][r_lba1[,"response"]==1], A=A, b=b, t0 = t0, mean_v=v1[1], sd_v=v2[1]), fptpdf(pmax(r_lba1[,"rt"][r_lba1[,"response"]==1]-t0[1], 0), x0max=A, chi=b, driftrate=v1[1], sddrift=v2[1]) ) expect_equal( plba_norm(r_lba1[,"rt"][r_lba1[,"response"]==1], A=A, b=b, t0 = t0, mean_v=v1[1], sd_v=v2[1]), fptcdf(pmax(r_lba1[,"rt"][r_lba1[,"response"]==1]-t0[1], 0), x0max=A, chi=b, driftrate=v1[1], sddrift=v2[1]) ) } }) test_that("Random generation", { n <- 10 samples_per_run <- 100 source(system.file("extdata", "lba-math.R", package = "rtdists")) #source("inst/extdata//lba-math.r") for (i in seq_len(n)) { A <- runif(1, 0.3, 0.9) b <- A+runif(1, 0, 0.5) t0 <- runif(1, 0.1, 0.7) v1 <- runif(2, 0.5, 1.5) v2 <- runif(2, 0.1, 0.5) x <- .Random.seed r_lba1 <- rlba_norm(samples_per_run, A=A, b=b, t0 = t0, mean_v=v1[1:2], sd_v=v2[1:2]) .Random.seed <<- x #r_lba2 <- rlba_norm(samples_per_run, A=A, b=b, t0 = t0, mean_v=v1[1:2], sd_v=v2[1:2]) r_lba2 <- rlba(samples_per_run, A=A, b=b, t0 = t0, vs=v1[1:2], s=v2[1:2]) expect_equal(r_lba1[,"rt"], r_lba2$rt) expect_equal(r_lba1[,"response"], r_lba2$resp) } }) test_that("n1CDF", { n <- 10 samples_per_run <- 100 source(system.file("extdata", "lba-math.R", package = "rtdists")) #source("inst/extdata//lba-math.r") for (i in seq_len(n)) { A <- runif(1, 0.3, 0.9) b <- A+runif(1, 0, 0.5) t0 <- runif(1, 0.1, 0.7) v1 <- runif(2, 0.5, 1.5) v2 <- runif(2, 0.1, 0.5) r_lba1 <- rlba_norm(samples_per_run, A=A, b=b, t0 = t0, mean_v=v1[1:2], sd_v=v2[1:2], posdrift = TRUE) #head(r_lba1) #if(!isTRUE(all.equal(n1CDF(r_lba1$rt[r_lba1$response==1], A=A, b=b, t0 = t0, mean_v=v1[1:2], sd_v=v2[1]),.n1CDF(pmax(r_lba1$rt[r_lba1$response==1]-t0[1], 0), x0max=A, chi=b, drift=v1[1:2], sdI=v2[1]) ))) browser() #n1CDF(r_lba1$rt[r_lba1$response==1], A=A, b=b, t0 = t0, mean_v=v1[1:2], sd_v=v2[1], browser = TRUE) #n1CDF(pmax(r_lba1$rt[r_lba1$response==1]-t0[1], 0), A=A, b=b, t0 = 0, mean_v=v1[1:2], sd_v=v2[1]) #.n1CDF(pmax(r_lba1$rt[r_lba1$response==1]-t0[1], 0), x0max=A, chi=b, drift=v1[1:2], sdI=v2[1], browser=TRUE) #save(r_lba1, A, b, t0, v1, v2, file = "n1CDF_no_diff_example_5.RData") expect_equal( n1CDF(sort(r_lba1[,"rt"][r_lba1[,"response"]==1]), A=A, b=b, t0 = t0, mean_v=v1[1:2], sd_v=v2[1]), .n1CDF(sort(pmax(r_lba1[,"rt"][r_lba1[,"response"]==1]-t0[1], 0)), x0max=A, chi=b, drift=v1[1:2], sdI=v2[1]), tolerance = 0.0001 ) } }) .Random.seed <<- x rtdists/tests/testthat/test-diffusion-rcpp.R0000644000175000017500000003500613667512040021142 0ustar nileshnilesh#require(testthat) context("Diffusion pdiffusion and ddiffusion functions (new rcpp versions).") is_testing <- TRUE # Set to TRUE for release package ## FALSE only works with extra non-supplied R and C files and after uncommenting # the two devtools lines (67 & 68) pset1 <- list(a = 0.8, zr = 0.3, v = 3.69, t0 = 0.3, d = 0, szr = 0.2, sv = 0.9, st0 = 0.1, bound = "upper", precision = 3) pset2 <- list(a = seq(0.8, 0.9, by=0.1), zr = 0.3, v = 3.69, t0 = seq(0.3,0.5,length=3), d = 0, szr = 0.2, sv = 0.9, st0 = 0.1, bound = c("upper","lower"), precision = 3) pset3 <- list(a = 1, zr = 0.3, v = 3.69, t0 = seq(0.3,0.5,length=3), d = 0.1, szr = 0.2, sv = 0.9, st0 = 0.1, bound = c("upper","lower"), precision = 3) pset4 <- list(a = 1, zr = 0.7, v = 1.69, t0 = seq(0.3,0.5,length=3), d = 0, szr = 0.2, sv = 0.9, st0 = 0.1, bound = c("upper","lower"), precision = 3) pset5 <- list(a = 2, zr = 0.7, v = 1.3, t0 = 0.2, d = 0, szr = 0.3, sv = 0.4, st0 = 0.5, bound = "upper", precision = 3) # Values to calculate d/p for x <- seq (0,4,by=0.1) 
if (!is_testing) { #### This generates datasets from the old (pre-RCpp) version of rtdists (specifically, from 0.6-6) #### That C code is not included in the release package. # UNLOAD EXISTING RTDISTS, LOAD IN OLD VERSION FOR COMPARISON TESTING ## UNCOMMENT NEXT TWO LINES # require (devtools) # devtools::unload(rtdists) #install.packages ("rtdists", lib="tests\\temp_testing\\old_rtdists_0.6-6\\") require (rtdists, lib.loc="tests\\temp_testing\\old_rtdists_0.6-6") orig_d_set1 <- ddiffusion (x, response=pset1$bound, a=pset1$a, v=pset1$v, t0=pset1$t0, z=pset1$zr, d=pset1$d, sz=pset1$szr, sv=pset1$sv, st0=pset1$st0, precision=pset1$precision) orig_d_set2 <- ddiffusion (x, response=pset2$bound, a=pset2$a, v=pset2$v, t0=pset2$t0, z=pset2$zr, d=pset2$d, sz=pset2$szr, sv=pset2$sv, st0=pset2$st0, precision=pset2$precision) orig_d_set3 <- ddiffusion (x, response=pset3$bound, a=pset3$a, v=pset3$v, t0=pset3$t0, z=pset3$zr, d=pset3$d, sz=pset3$szr, sv=pset3$sv, st0=pset3$st0, precision=pset3$precision) orig_d_set4 <- ddiffusion (x, response=pset4$bound, a=pset4$a, v=pset4$v, t0=pset4$t0, z=pset4$zr, d=pset4$d, sz=pset4$szr, sv=pset4$sv, st0=pset4$st0, precision=pset4$precision) orig_d_set5 <- ddiffusion (x, response=pset5$bound, a=pset5$a, v=pset5$v, t0=pset5$t0, z=pset5$zr, d=pset5$d, sz=pset5$szr, sv=pset5$sv, st0=pset5$st0, precision=pset5$precision) orig_p_set1 <- pdiffusion (x, response=pset1$bound, a=pset1$a, v=pset1$v, t0=pset1$t0, z=pset1$zr, d=pset1$d, sz=pset1$szr, sv=pset1$sv, st0=pset1$st0, precision=pset1$precision) orig_p_set2 <- pdiffusion (x, response=pset2$bound, a=pset2$a, v=pset2$v, t0=pset2$t0, z=pset2$zr, d=pset2$d, sz=pset2$szr, sv=pset2$sv, st0=pset2$st0, precision=pset2$precision) orig_p_set3 <- pdiffusion (x, response=pset3$bound, a=pset3$a, v=pset3$v, t0=pset3$t0, z=pset3$zr, d=pset3$d, sz=pset3$szr, sv=pset3$sv, st0=pset3$st0, precision=pset3$precision) orig_p_set4 <- pdiffusion (x, response=pset4$bound, a=pset4$a, v=pset4$v, t0=pset4$t0, z=pset4$zr, d=pset4$d, sz=pset4$szr, sv=pset4$sv, st0=pset4$st0, precision=pset4$precision) orig_p_set5 <- pdiffusion (x, response=pset5$bound, a=pset5$a, v=pset5$v, t0=pset5$t0, z=pset5$zr, d=pset5$d, sz=pset5$szr, sv=pset5$sv, st0=pset5$st0, precision=pset5$precision) # Print out in a format that makes it easy to just copy/paste below for when is_testing = TRUE tmp <- paste(orig_d_set1, collapse=","); paste0("orig_d_set1 <- c(", tmp, ")") tmp <- paste(orig_d_set2, collapse=","); paste0("orig_d_set2 <- c(", tmp, ")") tmp <- paste(orig_d_set3, collapse=","); paste0("orig_d_set3 <- c(", tmp, ")") tmp <- paste(orig_d_set4, collapse=","); paste0("orig_d_set4 <- c(", tmp, ")") tmp <- paste(orig_d_set5, collapse=","); paste0("orig_d_set5 <- c(", tmp, ")") tmp <- paste(orig_p_set1, collapse=","); paste0("orig_p_set1 <- c(", tmp, ")") tmp <- paste(orig_p_set2, collapse=","); paste0("orig_p_set2 <- c(", tmp, ")") tmp <- paste(orig_p_set3, collapse=","); paste0("orig_p_set3 <- c(", tmp, ")") tmp <- paste(orig_p_set4, collapse=","); paste0("orig_p_set4 <- c(", tmp, ")") tmp <- paste(orig_p_set5, collapse=","); paste0("orig_p_set5 <- c(", tmp, ")") } else { orig_d_set1 <- 
c(0,0,0,0,4.60161372697272,3.041737237641,0.775314844644052,0.20100806320729,0.0554709338440229,0.0161957486935331,0.00496015213247891,0.0015820477670175,0.000522441147958111,0.000177777939582215,6.20919658078304e-05,2.2186660934609e-05,8.08832554025722e-06,3.00144977750857e-06,1.13150079545548e-06,4.32611779274754e-07,1.67506365119956e-07,6.56006229469518e-08,2.59569175886733e-08,1.03669533415991e-08,4.1757727970268e-09,1.69507173542302e-09,6.92977992501777e-10,2.85152550466124e-10,1.18041641819417e-10,4.91349890568927e-11,2.05571224094785e-11,8.64143629954106e-12,3.64849639698318e-12,1.54672293834599e-12,6.58203772075445e-13,2.81091063487899e-13,1.20440317808058e-13,5.17658055663906e-14,2.23139322405947e-14,9.64481885706687e-15,4.17951521546082e-15) orig_d_set2 <- c(0,0,0,0,0,0,0.775314844644052,0.0700116011901688,0.775314844644053,0.00353928546940399,0.0161957486935331,0.00353928546940399,0.000522441147958111,0.000251976277630378,0.000522441147958112,2.18490202747983e-05,2.21866609346089e-05,2.18490202747983e-05,1.13150079545548e-06,2.16099430669139e-06,1.13150079545548e-06,2.34429558069607e-07,6.56006229469517e-08,2.34429558069607e-07,4.1757727970268e-09,2.72074791976855e-08,4.17577279702683e-09,3.32247839607158e-09,2.85152550466124e-10,3.32247839607157e-09,2.05571224094785e-11,4.22014357427282e-10,2.05571224094784e-11,5.52966324877327e-11,1.54672293834599e-12,5.52966324877327e-11,1.20440317808058e-13,7.42916764373375e-12,1.20440317808058e-13,1.01875094681317e-12,9.6448188570669e-15) orig_d_set3 <- c(0,0,0,0,0.224316377535072,0,0.937541143121596,0.126753167747569,0.937541143121597,0.00757988933744593,0.0455187591671571,0.00757988933744593,0.00330381878930778,0.000731748626620572,0.00330381878930778,8.74395992954433e-05,0.000315516310623682,8.74395992954431e-05,3.62428978864632e-05,1.20187376012792e-05,3.62428978864632e-05,1.82135427331724e-06,4.74235061754887e-06,1.82135427331724e-06,6.82667279031284e-07,2.96284872535752e-07,6.82667279031286e-07,5.08294823925357e-08,1.05615500286041e-07,5.08294823925355e-08,1.72778333632897e-08,9.08472806108125e-09,1.72778333632897e-08,1.67695307663344e-09,2.95409251671028e-09,1.67695307663344e-09,5.23353474367034e-10,3.17669847527765e-10,5.23353474367034e-10,6.14614534542645e-11,9.5452153579115e-11) orig_d_set4 <- c(0,0,0,0,0,0,0.855232589686379,0.208607865977777,0.85523258968638,0.0413181064028648,0.120197599664321,0.0413181064028648,0.0200197053245129,0.00804095783640437,0.0200197053245129,0.00161216584359968,0.00357699899429369,0.00161216584359968,0.000669964362901077,0.000330179193913326,0.000669964362901078,6.86827544594374e-05,0.000129768799535354,6.86827544594373e-05,2.57686397845211e-05,1.44568191389061e-05,2.57686397845212e-05,3.07116552815478e-06,5.2151229197864e-06,3.07116552815477e-06,1.07127886505753e-06,6.57262114462798e-07,1.07127886505753e-06,1.41510848321378e-07,2.22696573793223e-07,1.41510848321378e-07,4.67453504638768e-08,3.06204660554486e-08,4.67453504638768e-08,6.65367389909361e-09,9.89119776040409e-09) orig_d_set5 <- 
c(0,0,0,0.000684863555753859,0.0409663461601378,0.171559555851273,0.354416541503711,0.546415314740592,0.724462358219232,0.84105593055722,0.843845451375756,0.772355383476894,0.672361523379911,0.569176374664438,0.474128641968136,0.391267238673863,0.32116664067043,0.262880021576662,0.214906784039583,0.175653423294687,0.143637296268304,0.117562376720732,0.0963335696115949,0.079043613603511,0.064949625185905,0.0534475151592299,0.0440479731300484,0.0363554332579497,0.0300503288312008,0.0248744446422993,0.020618983886394,0.017114922807483,0.0142252500919618,0.0118387378519219,0.00986494693318661,0.0082302221334705,0.00687447921678601,0.00574862452487928,0.0048124799156655,0.00403311157398015,0.00338348192505018) orig_p_set1 <- c(0,0,0,0,0.169654748430154,0.644852220438606,0.811721201050489,0.854057849069507,0.865306050934252,0.868482806734998,0.869428910313148,0.869723626727494,0.869819010517484,0.86985091068276,0.869861886388069,0.869865757198307,0.86986715220185,0.869867664636671,0.869867856083721,0.869867928694559,0.869867956607604,0.869867967468629,0.869867971741068,0.869867973438421,0.869867974118822,0.869867974393807,0.869867974505775,0.869867974551679,0.869867974570617,0.869867974578475,0.869867974581754,0.869867974583128,0.869867974583707,0.869867974583952,0.869867974584056,0.8698679745841,0.869867974584119,0.869867974584127,0.869867974584131,0.869867974584132,0.869867974584133) orig_p_set2 <- c(0,0,0,0,0,0,0.811721196913999,0.125574562330438,0.811721196913999,0.131997518069757,0.868482802598511,0.131997518069757,0.869819006380996,0.132363879963647,0.869819006380996,0.132391772625441,0.86986575306182,0.132391772625441,0.869867851947233,0.132394304104058,0.869867851947233,0.132394562734449,0.869867963332141,0.132394562734449,0.869867969982335,0.132394591469189,0.869867969982335,0.132394594864978,0.869867970415191,0.132394594864978,0.869867970445266,0.132394595285525,0.869867970445266,0.13239459533954,0.869867970447464,0.13239459533954,0.869867970447631,0.132394595346681,0.869867970447631,0.132394595347648,0.869867970447645) orig_p_set3 <- c(0,0,0,0,0.00178292322825537,0,0.775411087834309,0.120922452250843,0.775411087834309,0.132716135127724,0.861195435552079,0.132716135127724,0.865901441795384,0.133576741913059,0.865901441795384,0.133666526168523,0.866276666320826,0.133666526168523,0.866314922623755,0.133677824411988,0.866314922623755,0.133679435165113,0.866319529210655,0.133679435165113,0.866320153470468,0.133679685884491,0.866320153470468,0.13367972750184,0.866320245781243,0.13367972750184,0.866320260352645,0.133679734753962,0.866320260352645,0.133679736066237,0.866320262787705,0.133679736066237,0.866320263198217,0.133679736310875,0.866320263198217,0.133679736357606,0.866320263283258) orig_p_set4 <- c(0,0,0,0,0,0,0.781191530696519,0.0504836200207758,0.781191530696518,0.0819600110121512,0.890312005832964,0.0819600110121512,0.906961293667404,0.0880426402075123,0.906961293667404,0.0892403909461952,0.909812660187627,0.0892403909461952,0.910331667455244,0.0894825802843886,0.910331667455244,0.0895324893578018,0.910430168562784,0.0895324893578018,0.910449434981355,0.0895429198770103,0.910449434981355,0.0895451233632549,0.910453289384567,0.0895451233632549,0.910454074007284,0.0895455928255754,0.910454074007284,0.0895456935333022,0.910454235934917,0.0895456935333022,0.910454269724044,0.0895457152583196,0.910454269724044,0.0895457199671402,0.910454276838814) orig_p_set5 <- 
c(0,0,0,7.08446361178149e-06,0.00138436468812972,0.0113507675678673,0.0374267297023063,0.0825133638296158,0.146245531551868,0.225422581216873,0.310520009922219,0.391726173987651,0.464066183870186,0.526107264997353,0.57818038301509,0.62134283021036,0.656860758949882,0.685970510299194,0.70978061414215,0.729242390730573,0.745152325059471,0.758167663671159,0.768826127576991,0.777565489792073,0.784741228585,0.790641683658525,0.795500713500313,0.799508094326108,0.802817980191424,0.805555746159603,0.807823505956937,0.809704554224829,0.811266941644485,0.812566353290477,0.813648428047359,0.814550629885263,0.815303759724723,0.815933178816323,0.816459800292277,0.816900894157634,0.81727074191634) test_that("ensure new RCpp ddiffusion function produces the same result as previous C-only versions:", { tolerance <- 1e-5 expect_equal(orig_d_set1, ddiffusion (x, response=pset1$bound, a=pset1$a, v=pset1$v, t0=pset1$t0, z=pset1$zr, d=pset1$d, sz=pset1$szr, sv=pset1$sv, st0=pset1$st0, precision=pset1$precision), tolerance=tolerance) expect_equal(orig_d_set2, ddiffusion (x, response=pset2$bound, a=pset2$a, v=pset2$v, t0=pset2$t0, z=pset2$zr, d=pset2$d, sz=pset2$szr, sv=pset2$sv, st0=pset2$st0, precision=pset2$precision), tolerance=tolerance) expect_equal(orig_d_set3, ddiffusion (x, response=pset3$bound, a=pset3$a, v=pset3$v, t0=pset3$t0, z=pset3$zr, d=pset3$d, sz=pset3$szr, sv=pset3$sv, st0=pset3$st0, precision=pset3$precision), tolerance=tolerance) expect_equal(orig_d_set4, ddiffusion (x, response=pset4$bound, a=pset4$a, v=pset4$v, t0=pset4$t0, z=pset4$zr, d=pset4$d, sz=pset4$szr, sv=pset4$sv, st0=pset4$st0, precision=pset4$precision), tolerance=tolerance) expect_equal(orig_d_set5, ddiffusion (x, response=pset5$bound, a=pset5$a, v=pset5$v, t0=pset5$t0, z=pset5$zr, d=pset5$d, sz=pset5$szr, sv=pset5$sv, st0=pset5$st0, precision=pset5$precision), tolerance=tolerance) }) test_that("ensure new RCpp pdiffusion function produces the same result as previous C-only versions:", { tolerance <- 1e-3 expect_equal(orig_p_set1, pdiffusion (x, response=pset1$bound, a=pset1$a, v=pset1$v, t0=pset1$t0, z=pset1$zr, d=pset1$d, sz=pset1$szr, sv=pset1$sv, st0=pset1$st0, precision=pset1$precision), tolerance=tolerance) expect_equal(orig_p_set2, pdiffusion (x, response=pset2$bound, a=pset2$a, v=pset2$v, t0=pset2$t0, z=pset2$zr, d=pset2$d, sz=pset2$szr, sv=pset2$sv, st0=pset2$st0, precision=pset2$precision), tolerance=tolerance) expect_equal(orig_p_set3, pdiffusion (x, response=pset3$bound, a=pset3$a, v=pset3$v, t0=pset3$t0, z=pset3$zr, d=pset3$d, sz=pset3$szr, sv=pset3$sv, st0=pset3$st0, precision=pset3$precision), tolerance=tolerance) expect_equal(orig_p_set4, pdiffusion (x, response=pset4$bound, a=pset4$a, v=pset4$v, t0=pset4$t0, z=pset4$zr, d=pset4$d, sz=pset4$szr, sv=pset4$sv, st0=pset4$st0, precision=pset4$precision), tolerance=tolerance) tolerance <- 1e-2 # !! This still produces 1e-2 errors with the 5th parameter set! 
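## Note: parameter set 5 uses a much larger non-decision-time variability
## (st0 = 0.5 vs. 0.1 in the other sets), which presumably makes the
## numerical evaluation of the CDF less precise; hence the looser
## tolerance for this comparison only.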
expect_equal(orig_p_set5, pdiffusion (x, response=pset5$bound, a=pset5$a, v=pset5$v, t0=pset5$t0, z=pset5$zr, d=pset5$d, sz=pset5$szr, sv=pset5$sv, st0=pset5$st0, precision=pset5$precision), tolerance=tolerance) }) } rtdists/tests/testthat/test-lba_race-basic.R0000644000175000017500000000335713667512040021025 0ustar nileshnilesh context("LBA race functions= f*(1-F)") test_that("dLBA with norm and posdrift=FALSE works as expected", { rt <- 2 vs <- c(0.8, 0.4) sd <- 0.2 f <- dlba_norm(rt, A=0.5, b=1, t0 = 0.5, mean_v=vs[1], sd_v=sd, posdrift = FALSE) F <- plba_norm(rt, A=0.5, b=1, t0 = 0.5, mean_v=vs[2], sd_v=sd, posdrift = FALSE) expect_equivalent(n1PDF(rt, A=0.5, b=1, t0 = 0.5, mean_v=vs, sd_v=sd, args.dist = list(posdrift = FALSE)), f*(1-F)) }) test_that("dLBA with lnorm works as expected", { rt <- 2 vs <- c(0.8, 0.4) sd <- 1 f <- dlba_lnorm(rt, A=0.5, b=1, t0 = 0.5, meanlog_v = vs[1], sdlog_v = sd) F <- plba_lnorm(rt, A=0.5, b=1, t0 = 0.5, meanlog_v = vs[2], sdlog_v = sd) expect_equivalent(f*(1-F), n1PDF(rt, A=0.5, b=1, t0 = 0.5,distribution = "lnorm", meanlog_v = vs, sdlog_v = sd)) }) test_that("dLBA with gamma works as expected", { rt <- 2 vs <- c(1, 1.2) sd <- 1 f <- dlba_gamma(rt, A=0.5, b=1, t0 = 0.5, shape_v = vs[1], scale_v = sd) F <- plba_gamma(rt, A=0.5, b=1, t0 = 0.5, shape_v = vs[2], scale_v = sd) expect_equivalent(f*(1-F), n1PDF(rt, A=0.5, b=1, t0 = 0.5,distribution = "gamma", shape_v = vs, scale_v = sd)) }) test_that("dLBA with frechet works as expected", { rt <- 2 vs <- c(1, 1.2) sd <- 1 f <- dlba_frechet(rt, A=0.5, b=1, t0 = 0.5, shape_v = vs[1], scale_v = sd) F <- plba_frechet(rt, A=0.5, b=1, t0 = 0.5, shape_v = vs[2], scale_v = sd) expect_equivalent(f*(1-F), n1PDF(rt, A=0.5, b=1, t0 = 0.5,distribution = "frechet", shape_v = vs, scale_v = sd)) })rtdists/tests/testthat/test-lba_race_input.R0000644000175000017500000003265413667512040021167 0ustar nileshnilesh context("LBA race functions: Input") test_that("n1PDF: List input for A and b", { samples <- 100 A <- runif(2, 0.3, 0.9) b <- A+runif(2, 0, 0.5) t0 <- runif(2, 0.1, 0.7) v1 <- runif(4, 0.5, 1.5) v2 <- runif(4, 0.1, 0.5) st0 <- runif(1, 0.1, 0.5) r_lba <- rLBA(samples, A=A[1], b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2]) p1 <- n1PDF(r_lba$rt, A=A[1], b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) # PDF p2 <- n1PDF(r_lba$rt, A=list(A[1], A[1]), b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) # PDF expect_equal(p2, p1) p3 <- n1PDF(r_lba$rt, A=A[1], b=list(b[1], b[1]), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) # PDF expect_equal(p3, p1) p4 <- n1PDF(r_lba$rt, A=list(A[1], A[1]), b=list(b[1], b[1]), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) # PDF expect_equal(p4, p1) p5 <- n1PDF(r_lba$rt, A=rep(A[1], 13), b=list(rep(b[1], 13), rep(b[1], 5)) , t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) # PDF expect_equal(p5, p1) p6_a <- n1PDF(r_lba$rt[seq_len(samples/2)], A=A[1], b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) p6_b <- n1PDF(r_lba$rt[seq_len(samples/2)+samples/2], A=A[2], b=b[2], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) p7 <- n1PDF(r_lba$rt, A=rep(A, each = samples/2), b=rep(b, each = samples/2), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) expect_identical(c(p6_a, p6_b), p7) p6x <- n1PDF(r_lba$rt, A=A[1], b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], st0 = 0.2, silent = TRUE) # PDF p7x <- n1PDF(r_lba$rt, A=list(A[1], A[1]), b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], st0 = 0.2, silent = 
TRUE) # PDF expect_equal(p7x, p6x) p8x <- n1PDF(r_lba$rt, A=A[1], b=list(b[1], b[1]), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], st0 = 0.2, silent = TRUE) # PDF expect_equal(p8x, p6x) p9x <- n1PDF(r_lba$rt, A=list(A[1], A[1]), b=list(b[1], b[1]), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], st0 = 0.2, silent = TRUE) # PDF expect_equal(p9x, p6x) p10x <- n1PDF(r_lba$rt, A=rep(A[1], 13), b=list(rep(b[1], 13), rep(b[1], 5)), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], st0 = 0.2, silent = TRUE) # PDF expect_equal(p10x, p9x) }) test_that("n1PDF: List and trialwise input for A and b", { samples <- 2 A <- runif(4, 0.3, 0.9) b <- A+runif(4, 0, 0.5) t0 <- runif(2, 0.1, 0.7) v1 <- runif(4, 0.5, 1.5) v2 <- runif(4, 0.1, 0.5) st0 <- runif(1, 0.1, 0.5) r_lba <- rLBA(samples, A=A[1], b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2]) p1 <- n1PDF(r_lba$rt, A=list(A[1:2],A[3:4]), b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) p2 <- n1PDF(r_lba$rt[1], A=list(A[1],A[3]), b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) p3 <- n1PDF(r_lba$rt[2], A=list(A[2],A[4]), b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) expect_equal(p1, c(p2, p3)) pb1 <- n1PDF(r_lba$rt, A=A[1:2], b=list(b[1:2],b[3:4]), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) pb2 <- n1PDF(r_lba$rt[1], A=A[1], b=list(b[1],b[3]), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) pb3 <- n1PDF(r_lba$rt[2], A=A[2], b = list(b[2],b[4]), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) expect_equal(pb1, c(pb2, pb3)) }) test_that("n1CDF: List input for A and b", { samples <- 100 A <- runif(2, 0.3, 0.9) b <- A+runif(2, 0, 0.5) t0 <- runif(2, 0.1, 0.7) v1 <- runif(4, 0.5, 1.5) v2 <- runif(4, 0.1, 0.5) st0 <- runif(1, 0.1, 0.5) r_lba <- rLBA(samples, A=A[1], b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2]) p1 <- n1CDF(r_lba$rt, A=A[1], b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) # PDF p2 <- n1CDF(r_lba$rt, A=list(A[1], A[1]), b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) # PDF expect_equal(p2, p1) p3 <- n1CDF(r_lba$rt, A=A[1], b=list(b[1], b[1]), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) # PDF expect_equal(p3, p1) p4 <- n1CDF(r_lba$rt, A=list(A[1], A[1]), b=list(b[1], b[1]), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) # PDF expect_equal(p4, p1) p5 <- n1CDF(r_lba$rt, A=rep(A[1], 13), b=list(rep(b[1], 13), rep(b[1], 5)), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) # PDF expect_equal(p5, p1) p6 <- n1CDF(r_lba$rt, A=A[1], b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], st0 = 0.2, silent = TRUE) # PDF p7 <- n1CDF(r_lba$rt, A=list(A[1], A[1]), b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], st0 = 0.2, silent = TRUE) # PDF expect_equal(p7, p6) p8 <- n1CDF(r_lba$rt, A=A[1], b=list(b[1], b[1]), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], st0 = 0.2, silent = TRUE) # PDF expect_equal(p8, p6) p9 <- n1CDF(r_lba$rt, A=list(A[1], A[1]), b=list(b[1], b[1]), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], st0 = 0.2, silent = TRUE) # PDF expect_equal(p9, p6) p10 <- n1CDF(r_lba$rt, A=rep(A[1], 13), b=list(rep(b[1], 13), rep(b[1], 5)), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], st0 = 0.2, silent = TRUE) # PDF expect_equal(p10, p9) }) test_that("n1CDF: List input for drift rate", { samples <- 100 A <- runif(2, 0.3, 0.9) b <- A+runif(2, 0, 0.5) t0 <- runif(2, 0.1, 0.7) v1 <- runif(4, 0.5, 1.5) v2 <- runif(4, 0.1, 0.5) st0 <- runif(1, 0.1, 0.5) r_lba <- rLBA(samples, A=A[1], b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2]) v1 <- 
n1PDF(r_lba$rt, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0), sd_v=0.2) v2 <- n1PDF(r_lba$rt, A=0.5, b=1, t0 = 0.5, mean_v=list(1.2, 1.0), sd_v=0.2) expect_identical(v1, v2) }) test_that("n1CDF: Trialwise input for drift rate", { samples <- 100 A <- runif(2, 0.3, 0.9) b <- A+runif(2, 0, 0.5) t0 <- runif(2, 0.1, 0.7) v1 <- runif(4, 0.5, 1.5) v2 <- runif(4, 0.1, 0.5) st0 <- runif(1, 0.1, 0.5) rts <- sample(seq(0.5, 2, length.out = samples)) v1_a <- n1CDF(rts[seq_len(samples/2)], A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0), sd_v=0.2) v1_b <- n1CDF(rts[seq_len(samples/2)+samples/2], A=0.5, b=1, t0 = 0.5, mean_v=c(2.2, 0.5), sd_v=0.5) v2 <- n1CDF(rts, A=0.5, b=1, t0 = 0.5, mean_v=list(rep(c(1.2, 2.2), each = 50), rep(c(1.0, 0.5), each = 50)), sd_v=list(rep(c(0.2, 0.5), each = 50))) expect_identical(c(v1_a, v1_b), v2) v3_a <- n1CDF(rts[seq_len(samples/2)], A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0, 0.6), sd_v=0.2) v3_b <- n1CDF(rts[seq_len(samples/2)+samples/2], A=0.5, b=1, t0 = 0.5, mean_v=c(2.2, 0.5, 1.2), sd_v=0.5) v4 <- n1CDF(rts, A=0.5, b=1, t0 = 0.5, mean_v=list(rep(c(1.2, 2.2), each = 50), rep(c(1.0, 0.5), each = 50), rep(c(0.6, 1.2), each = 50)), sd_v=list(rep(c(0.2, 0.5), each = 50))) expect_identical(c(v3_a, v3_b), v4) v5_a <- n1CDF(rts[seq_len(samples/2)], A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0), sd_v=0.2, st0 = 0.2) v5_b <- n1CDF(rts[seq_len(samples/2)+samples/2], A=0.5, b=1, t0 = 0.5, mean_v=c(2.2, 0.5), sd_v=0.5, st0 = 0.2) v6 <- n1CDF(rts, A=0.5, b=1, t0 = 0.5, mean_v=list(rep(c(1.2, 2.2), each = 50), rep(c(1.0, 0.5), each = 50)), sd_v=list(rep(c(0.2, 0.5), each = 50)), st0 = 0.2) expect_identical(c(v5_a, v5_b), v6) v7_a <- n1CDF(rts[seq_len(samples/2)], A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0, 0.6), sd_v=0.2, st0 = 0.2) v7_b <- n1CDF(rts[seq_len(samples/2)+samples/2], A=0.5, b=1, t0 = 0.5, mean_v=c(2.2, 0.5, 1.2), sd_v=0.5, st0 = 0.2) v8 <- n1CDF(rts, A=0.5, b=1, t0 = 0.5, mean_v=list(rep(c(1.2, 2.2), each = 50), rep(c(1.0, 0.5), each = 50), rep(c(0.6, 1.2), each = 50)), sd_v=list(rep(c(0.2, 0.5), each = 50)), st0 = 0.2) expect_identical(c(v7_a, v7_b), v8) }) test_that("n1PDF: Trialwise input for t0", { samples <- 100 A <- runif(2, 0.3, 0.9) b <- A+runif(2, 0, 0.5) t0 <- runif(2, 0.1, 0.7) v1 <- runif(4, 0.5, 1.5) v2 <- runif(4, 0.1, 0.5) st0 <- runif(1, 0.1, 0.5) r_lba <- rLBA(samples, A=A[1], b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2]) v1_a <- n1PDF(r_lba$rt[seq_len(samples/2)], A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0), sd_v=0.2) v1_b <- n1PDF(r_lba$rt[seq_len(samples/2)+samples/2], A=0.5, b=1, t0 = 0.2, mean_v=c(1.2, 1.0), sd_v=0.2) v2 <- n1PDF(r_lba$rt, A=0.5, b=1, t0 = rep(c(0.5, 0.2), each = 50), mean_v=c(1.2, 1.0), sd_v=0.2) expect_identical(c(v1_a, v1_b), v2) v1 <- runif(4, 0.5, 1.5) v2 <- runif(4, 0.5, 1.5) st0 <- runif(1, 0.1, 0.5) r_lba <- rLBA(samples, A=A[1], b=b[1], t0 = t0[1], shape_v =v1[1:2], scale_v = v2[1:2], distribution = "gamma") v3_a <- n1PDF(r_lba$rt[seq_len(samples/2)], A=0.5, b=1, t0 = 0.5, shape_v=v1[3:4], scale_v = v2[3:4], distribution = "gamma") v3_b <- n1PDF(r_lba$rt[seq_len(samples/2)+samples/2], A=0.5, b=1, t0 = 0.2, shape_v=v1[3:4], scale_v = v2[3:4], distribution = "gamma") v3 <- n1PDF(r_lba$rt, A=0.5, b=1, t0 = rep(c(0.5, 0.2), each = 50), shape_v=v1[3:4], scale_v = v2[3:4], distribution = "gamma") expect_identical(c(v3_a, v3_b), v3) }) test_that("n1PDF: Trialwise input for st0", { samples <- 100 A <- runif(2, 0.3, 0.9) b <- A+runif(2, 0, 0.5) t0 <- runif(2, 0.1, 0.7) v1 <- runif(4, 0.5, 1.5) v2 <- runif(4, 0.1, 0.5) st0 <- runif(2, 0.1, 0.2) 
r_lba <- rLBA(samples, A=A[1], b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], st0 = st0[1]) v1_a <- n1PDF(r_lba$rt[seq_len(samples/2)], A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0), sd_v=0.2, st0 = st0[1]) v1_b <- n1PDF(r_lba$rt[seq_len(samples/2)+samples/2], A=0.5, b=1, t0 = 0.2, mean_v=c(1.2, 1.0), sd_v=0.2, st0 = st0[2]) v2 <- n1PDF(r_lba$rt, A=0.5, b=1, t0 = rep(c(0.5, 0.2), each = 50), mean_v=c(1.2, 1.0), sd_v=0.2, st0 = rep(st0, each = 50)) expect_identical(c(v1_a, v1_b), v2) }) test_that("n1PDF: Trialwise input for drift rate", { samples <- 100 A <- runif(2, 0.3, 0.9) b <- A+runif(2, 0, 0.5) t0 <- runif(2, 0.1, 0.7) v1 <- runif(4, 0.5, 1.5) v2 <- runif(4, 0.1, 0.5) st0 <- runif(1, 0.1, 0.5) r_lba <- rLBA(samples, A=A[1], b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2]) v1_a <- n1PDF(r_lba$rt[seq_len(samples/2)], A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0), sd_v=0.2) v1_b <- n1PDF(r_lba$rt[seq_len(samples/2)+samples/2], A=0.5, b=1, t0 = 0.5, mean_v=c(2.2, 0.5), sd_v=0.5) v2 <- n1PDF(r_lba$rt, A=0.5, b=1, t0 = 0.5, mean_v=list(rep(c(1.2, 2.2), each = 50), rep(c(1.0, 0.5), each = 50)), sd_v=list(rep(c(0.2, 0.5), each = 50))) expect_identical(c(v1_a, v1_b), v2) v3_a <- n1PDF(r_lba$rt[seq_len(samples/2)], A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0, 0.6), sd_v=0.2) v3_b <- n1PDF(r_lba$rt[seq_len(samples/2)+samples/2], A=0.5, b=1, t0 = 0.5, mean_v=c(2.2, 0.5, 1.2), sd_v=0.5) v4 <- n1PDF(r_lba$rt, A=0.5, b=1, t0 = 0.5, mean_v=list(rep(c(1.2, 2.2), each = 50), rep(c(1.0, 0.5), each = 50), rep(c(0.6, 1.2), each = 50)), sd_v=list(rep(c(0.2, 0.5), each = 50))) expect_identical(c(v3_a, v3_b), v4) v5_a <- n1PDF(r_lba$rt[seq_len(samples/2)], A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0), sd_v=0.2, st0 = 0.2) v5_b <- n1PDF(r_lba$rt[seq_len(samples/2)+samples/2], A=0.5, b=1, t0 = 0.5, mean_v=c(2.2, 0.5), sd_v=0.5, st0 = 0.2) v6 <- n1PDF(r_lba$rt, A=0.5, b=1, t0 = 0.5, mean_v=list(rep(c(1.2, 2.2), each = 50), rep(c(1.0, 0.5), each = 50)), sd_v=list(rep(c(0.2, 0.5), each = 50)), st0 = 0.2) expect_identical(c(v5_a, v5_b), v6) v7_a <- n1PDF(r_lba$rt[seq_len(samples/2)], A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0, 0.6), sd_v=0.2, st0 = 0.2) v7_b <- n1PDF(r_lba$rt[seq_len(samples/2)+samples/2], A=0.5, b=1, t0 = 0.5, mean_v=c(2.2, 0.5, 1.2), sd_v=0.5, st0 = 0.2) v8 <- n1PDF(r_lba$rt, A=0.5, b=1, t0 = 0.5, mean_v=list(rep(c(1.2, 2.2), each = 50), rep(c(1.0, 0.5), each = 50), rep(c(0.6, 1.2), each = 50)), sd_v=list(rep(c(0.2, 0.5), each = 50)), st0 = 0.2) expect_identical(c(v7_a, v7_b), v8) }) rtdists/tests/testthat/test-lba_race.R0000644000175000017500000003445013667512040017744 0ustar nileshnilesh context("LBA race functions: RNG is equivalent to n1") x <- .Random.seed set.seed(5) tryCatch.W.E <- function(expr) { mc <- match.call() mc2 <- match.call(definition = ks.test, call = as.call(mc[[2]])) mc2[[1]] <- list W <- NULL w.handler <- function(w){ # warning handler W <<- w invokeRestart("muffleWarning") } list(value = withCallingHandlers(tryCatch(expr, error = function(e) e), warning = w.handler), warning = W, data = eval(mc2, envir = parent.frame())) } conditional_save_t <- function(t, distribution) { mc <- match.call() ex_data <- t$data #if (!is.null(t$warning)) save(ex_data, file = paste0(mc[[2]], "_", distribution, "_problem.Rdata")) #browser() #str(t) } test_that("Norm: n1CDF corresponds to random deviates", { testthat::skip_on_cran() testthat::skip_on_travis() normalised_n1CDF = function(rt,...) n1CDF(rt,...)/n1CDF(rt=Inf,...)
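## n1CDF() is a defective CDF: its asymptote n1CDF(Inf, ...) is the
## probability that accumulator 1 wins the race, which is < 1. Dividing by
## this asymptote gives a proper CDF conditional on accumulator 1 winning,
## as required by ks.test() for the reference distribution.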
samples <- 1e3 p_min <- 0.001 p_max <- 0.05 A <- runif(2, 0.3, 0.9) b <- A+runif(2, 0, 0.5) t0 <- runif(2, 0.1, 0.7) v1 <- runif(4, 0.5, 1.5) v2 <- runif(4, 0.1, 0.5) st0 <- runif(1, 0.1, 0.5) r_lba1 <- rLBA(samples, A=A[1], b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2]) r_lba2 <- rLBA(samples, A=A[2], b=b[2], t0 = t0[2], mean_v=v1[3:4], sd_v=v2[3:4], st0 = st0[1]) t1 <- tryCatch.W.E(ks.test(r_lba1$rt[r_lba1$response==1], normalised_n1CDF, A=A[1], b=b[1]+0.1, t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2])) expect_lt(t1$value$p.value, p_min) conditional_save_t(t1, "norm") t2 <- tryCatch.W.E(ks.test(r_lba2$rt[r_lba2$response==1], normalised_n1CDF, A=A[2], b=b[2], t0 = t0[2], mean_v=v1[3:4], sd_v=v2[3:4], st0 = 0)) expect_lt(t2$value$p.value, p_min) conditional_save_t(t2, "norm") t3 <- tryCatch.W.E(ks.test(r_lba2$rt[r_lba2$response==1], normalised_n1CDF, A=A[2], b=b[2], t0 = t0[2], mean_v=v1[3:4], sd_v=v2[3:4], st0 = st0[1]+0.1)) expect_lt(t3$value$p.value, p_min) conditional_save_t(t3, "norm") t4 <- tryCatch.W.E(ks.test(r_lba1$rt[r_lba1$response==1], normalised_n1CDF, A=A[1], b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2])) expect_gt(t4$value$p.value, p_max) conditional_save_t(t4, "norm") t5 <- tryCatch.W.E(ks.test(r_lba2$rt[r_lba2$response==1], normalised_n1CDF, A=A[2], b=b[2], t0 = t0[2], mean_v=v1[3:4], sd_v=v2[3:4], st0 = st0[1])) conditional_save_t(t5, "norm") expect_gt(t5$value$p.value, p_max) #if (any(sapply(list(t1, t2, t3, t4, t5), function(x) !is.null(x$warning)))) browser() }) test_that("Gamma: n1CDF corresponds to random deviates", { testthat::skip_on_cran() normalised_n1CDF = function(rt,...) n1CDF(rt,...)/n1CDF(rt=Inf,...) samples <- 1e3 p_min <- 0.01 p_max <- 0.05 A <- runif(2, 0.3, 0.9) b <- A+runif(2, 0, 0.5) t0 <- runif(2, 0.1, 0.7) v1 <- runif(4, 0.5, 1.5) v2 <- runif(4, 0.1, 0.5) st0 <- runif(1, 0.25, 0.75) r_lba1 <- rLBA(samples, A=A[1], b=b[1], t0 = t0[1], shape_v=v1[1:2], scale_v=v2[1:2], distribution = "gamma") r_lba2 <- rLBA(samples, A=A[2], b=b[2], t0 = t0[2], shape_v=v1[3:4], scale_v=v2[3:4], st0 = st0[1], distribution = "gamma") t1 <- tryCatch.W.E(ks.test(r_lba1$rt[r_lba1$response==1], normalised_n1CDF, A=A[1]+0.1, b=b[1]+0.2, t0 = t0[1], shape_v=v1[1:2], scale_v=v2[1:2], distribution = "gamma")) expect_lt(t1$value$p.value, p_min) conditional_save_t(t1, "gamma") t2 <- tryCatch.W.E(ks.test(r_lba2$rt[r_lba2$response==1], normalised_n1CDF, A=A[2], b=b[2], t0 = t0[2], shape_v=v1[3:4], scale_v=v2[3:4], st0 = 0, distribution = "gamma")) expect_lt(t2$value$p.value, p_min) conditional_save_t(t2, "gamma") t3 <- tryCatch.W.E(ks.test(r_lba2$rt[r_lba2$response==1], normalised_n1CDF, A=A[2], b=b[2], t0 = t0[2], shape_v=v1[3:4], scale_v=v2[3:4], st0 = st0[1]+0.5, distribution = "gamma")) expect_lt(t3$value$p.value, p_min) conditional_save_t(t3, "gamma") t4 <- tryCatch.W.E(ks.test(r_lba1$rt[r_lba1$response==1], normalised_n1CDF, A=A[1], b=b[1], t0 = t0[1], shape_v=v1[1:2], scale_v=v2[1:2], distribution = "gamma")) expect_gt(t4$value$p.value, p_max) conditional_save_t(t4, "gamma") t5 <- tryCatch.W.E(ks.test(r_lba2$rt[r_lba2$response==1], normalised_n1CDF, A=A[2], b=b[2], t0 = t0[2], shape_v=v1[3:4], scale_v=v2[3:4], st0 = st0[1], distribution = "gamma")) expect_gt(t5$value$p.value, p_max) conditional_save_t(t5, "gamma") #if (any(sapply(list(t1, t2, t3, t4, t5), function(x) !is.null(x$warning)))) browser() }) test_that("Frechet: n1CDF corresponds to random deviates", { testthat::skip_on_cran() testthat::skip_on_travis() normalised_n1CDF = function(rt,...)
n1CDF(rt,...)/n1CDF(rt=Inf,...) samples <- 2e2 p_min <- 0.001 p_max <- 0.05 A <- runif(2, 0.3, 0.9) b <- A+runif(2, 0, 0.5) t0 <- runif(2, 0.1, 0.7) v1 <- runif(4, 0.5, 1.5) v2 <- runif(4, 0.5, 1.5) st0 <- runif(1, 0.25, 0.5) r_lba1 <- rLBA(samples, A=A[1], b=b[1], t0 = t0[1], shape_v=v1[1:2], scale_v=v2[1:2], distribution = "frechet") r_lba2 <- rLBA(samples, A=A[2], b=b[2], t0 = t0[2], shape_v=v1[3:4], scale_v=v2[3:4], st0 = st0[1], distribution = "frechet") t1 <- tryCatch.W.E(ks.test(r_lba1$rt[r_lba1$response==1], normalised_n1CDF, A=A[1]+0.4, b=b[1]+0.8, t0 = t0[1], shape_v=v1[1:2], scale_v=v2[1:2], distribution = "frechet")) expect_lt(t1$value$p.value, p_min) conditional_save_t(t1, "frechet") t2 <- tryCatch.W.E(ks.test(r_lba2$rt[r_lba2$response==1], normalised_n1CDF, A=A[2], b=b[2], t0 = t0[2], shape_v=v1[3:4], scale_v=v2[3:4], st0 = 0, distribution = "frechet")) expect_lt(t2$value$p.value, p_min) conditional_save_t(t2, "frechet") #browser() t3 <- tryCatch.W.E(ks.test(r_lba2$rt[r_lba2$response==1], normalised_n1CDF, A=A[2], b=b[2], t0 = t0[2], shape_v=v1[3:4], scale_v=v2[3:4], st0 = st0[1]+0.2, distribution = "frechet")) expect_lt(t3$value$p.value, p_min+0.01) conditional_save_t(t3, "frechet") t4 <- tryCatch.W.E(ks.test(r_lba1$rt[r_lba1$response==1], normalised_n1CDF, A=A[1], b=b[1], t0 = t0[1], shape_v=v1[1:2], scale_v=v2[1:2], distribution = "frechet")) expect_gt(t4$value$p.value, p_max) conditional_save_t(t4, "frechet") t5 <- tryCatch.W.E(ks.test(r_lba2$rt[r_lba2$response==1], normalised_n1CDF, A=A[2], b=b[2], t0 = t0[2], shape_v=v1[3:4], scale_v=v2[3:4], st0 = st0[1], distribution = "frechet")) #t5 <- tryCatch.W.E(ks.test(r_lba2$rt[r_lba2$response==1]-t0[2], normalised_n1CDF, A=A[2], b=b[2], t0 = 0, shape_v=v1[3:4], scale_v=v2[3:4], st0 = st0[1], distribution = "frechet")) expect_gt(t5$value$p.value, p_max) conditional_save_t(t5, "frechet") #if (any(sapply(list(t1, t2, t3, t4, t5), function(x) !is.null(x$warning)))) browser() }) test_that("lnorm: n1CDF corresponds to random deviates", { testthat::skip_on_cran() testthat::skip_on_travis() normalised_n1CDF = function(rt,...) n1CDF(rt,...)/n1CDF(rt=Inf,...)
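## A minimal stand-alone sketch of the comparison pattern used in these
## tests (wrapped in if (FALSE) so it is never run; all parameter values
## are illustrative only, not taken from the tests below): simulate RTs,
## keep the trials won by accumulator 1, and test them against the
## normalised winning-accumulator CDF.
if (FALSE) {
  sim <- rLBA(1e3, A = 0.4, b = 0.9, t0 = 0.3,
              meanlog_v = c(1, 0.8), sdlog_v = c(0.3, 0.3),
              distribution = "lnorm")
  # ks.test() accepts a CDF as a function; further arguments are passed on.
  ks.test(sim$rt[sim$response == 1], normalised_n1CDF,
          A = 0.4, b = 0.9, t0 = 0.3,
          meanlog_v = c(1, 0.8), sdlog_v = c(0.3, 0.3),
          distribution = "lnorm")
}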
samples <- 1e3 p_min <- 0.0001 p_max <- 0.05 A <- runif(2, 0.3, 0.9) b <- A+runif(2, 0, 0.5) t0 <- runif(2, 0.1, 0.7) v1 <- runif(4, 0.5, 1.5) v2 <- runif(4, 0.1, 0.5) st0 <- runif(1, 0.1, 0.5) r_lba1 <- rLBA(samples, A=A[1], b=b[1], t0 = t0[1], meanlog_v=v1[1:2], sdlog_v=v2[1:2], distribution = "lnorm") r_lba2 <- rLBA(samples, A=A[2], b=b[2], t0 = t0[2], meanlog_v=v1[3:4], sdlog_v=v2[3:4], st0 = st0[1], distribution = "lnorm") t1 <- tryCatch.W.E(ks.test(r_lba1$rt[r_lba1$response==1], normalised_n1CDF, A=A[1], b=b[1]+0.1, t0 = t0[1], meanlog_v=v1[1:2], sdlog_v=v2[1:2], distribution = "lnorm")) expect_lt(t1$value$p.value, p_min) conditional_save_t(t1, "lnorm") t2 <- tryCatch.W.E(ks.test(r_lba2$rt[r_lba2$response==1], normalised_n1CDF, A=A[2], b=b[2], t0 = t0[2], meanlog_v=v1[3:4], sdlog_v=v2[3:4], st0 = 0, distribution = "lnorm")) expect_lt(t2$value$p.value, p_min) conditional_save_t(t2, "lnorm") t3 <- tryCatch.W.E(ks.test(r_lba2$rt[r_lba2$response==1], normalised_n1CDF, A=A[2], b=b[2], t0 = t0[2], meanlog_v=v1[3:4], sdlog_v=v2[3:4], st0 = st0[1]+0.2, distribution = "lnorm")) expect_lt(t3$value$p.value, p_min) conditional_save_t(t3, "lnorm") #t4 <- tryCatch.W.E(ks.test(r_lba1$rt[r_lba1$response==1], normalised_n1CDF, A=A[1], b=b[1], t0 = t0[1], meanlog_v=v1[1:2], sdlog_v=v2[1:2], distribution = "lnorm")) t4 <- tryCatch.W.E(ks.test(r_lba1$rt[r_lba1$response==1]-t0[1], normalised_n1CDF, A=A[1], b=b[1], t0 = 0, meanlog_v=v1[1:2], sdlog_v=v2[1:2], distribution = "lnorm")) expect_gt(t4$value$p.value, p_max) conditional_save_t(t4, "lnorm") #t5 <- tryCatch.W.E(ks.test(r_lba2$rt[r_lba2$response==1], normalised_n1CDF, A=A[2], b=b[2], t0 = t0[2], meanlog_v=v1[3:4], sdlog_v=v2[3:4], st0 = st0[1], distribution = "lnorm")) t5 <- tryCatch.W.E(ks.test(pmax(r_lba2$rt[r_lba2$response==1]-t0[2],0), normalised_n1CDF, A=A[2], b=b[2], t0 = 0, meanlog_v=v1[3:4], sdlog_v=v2[3:4], st0 = st0[1], distribution = "lnorm")) expect_gt(t5$value$p.value, p_max) conditional_save_t(t5, "lnorm") #if (any(sapply(list(t1, t2, t3, t4, t5), function(x) !is.null(x$warning)))) browser() }) test_that("Norm: n1CDF corresponds to random deviates with accumulatorwise parameters", { testthat::skip_on_cran() testthat::skip_on_travis() normalised_n1CDF = function(rt,...) n1CDF(rt,...)/n1CDF(rt=Inf,...)
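## Same logic as the tests above, but every parameter is supplied as a
## list with one element per accumulator, so this test additionally
## exercises the accumulator-wise parameter interface of rLBA()/n1CDF().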
samples <- 1e3 p_min <- 0.001 p_max <- 0.05 A <- runif(4, 0.3, 0.9) b <- A+runif(4, 0, 0.5) t0 <- runif(4, 0.1, 0.7) v1 <- runif(4, 0.5, 1.5) v2 <- runif(4, 0.1, 0.5) st0 <- runif(1, 0.1, 0.5) r_lba1 <- rLBA(samples, A=list(A[1], A[2]), b=list(b[1], b[2]), t0 = list(t0[1], t0[2]), mean_v=v1[1:2], sd_v=v2[1:2]) r_lba2 <- rLBA(samples, A=list(A[3], A[4]), b=list(b[3], b[4]), t0 = list(t0[3], t0[4]), mean_v=v1[3:4], sd_v=v2[3:4], st0 = st0[1]) t1 <- tryCatch.W.E(ks.test(r_lba1$rt[r_lba1$response==1], normalised_n1CDF, A=list(A[1], A[2]), b=list(b[1], b[2]), t0 = list(t0[1]+0.1, t0[2]+0.1), mean_v=v1[1:2], sd_v=v2[1:2])) expect_lt(t1$value$p.value, p_min) conditional_save_t(t1, "norm") t2 <- tryCatch.W.E(ks.test(r_lba2$rt[r_lba2$response==1], normalised_n1CDF, A=list(A[3], A[4]), b=list(b[3], b[4]), t0 = list(t0[3], t0[4]), mean_v=v1[3:4], sd_v=v2[3:4], st0 = 0)) expect_lt(t2$value$p.value, p_min) conditional_save_t(t2, "norm") t3 <- tryCatch.W.E(ks.test(r_lba2$rt[r_lba2$response==1], normalised_n1CDF, A=list(A[3], A[4]), b=list(b[3], b[4]), t0 = list(t0[3], t0[4]), mean_v=v1[3:4], sd_v=v2[3:4], st0 = st0[1]+0.1)) expect_lt(t3$value$p.value, p_min) #+0.003 conditional_save_t(t3, "norm") t4 <- tryCatch.W.E(ks.test(r_lba1$rt[r_lba1$response==1], normalised_n1CDF, A=list(A[1], A[2]), b=list(b[1], b[2]), t0 = list(t0[1], t0[2]), mean_v=v1[1:2], sd_v=v2[1:2])) expect_gt(t4$value$p.value, p_max) conditional_save_t(t4, "norm") t5 <- tryCatch.W.E(ks.test(r_lba2$rt[r_lba2$response==1], normalised_n1CDF, A=list(A[3], A[4]), b=list(b[3], b[4]), t0 = list(t0[3], t0[4]), mean_v=v1[3:4], sd_v=v2[3:4], st0 = st0[1])) conditional_save_t(t5, "norm") expect_gt(t5$value$p.value, p_max) #if (any(sapply(list(t1, t2, t3, t4, t5), function(x) !is.null(x$warning)))) browser() }) test_that("rLBA works with trial wise input", { set.seed(1) rt1 <- rLBA(100, A=rep(c(0.5, 0.4), each = 50), b=rep(c(1, 1.4), each = 50), t0 = rep(c(0.2, 0.4), each = 50), mean_v=list(rep(c(1.5, 0.9), each = 50), rep(c(0.9, 1.5), each = 50)), sd_v=list(rep(c(0.2, 0.4), each = 50), rep(c(0.3, 0.5), each = 50)), st0 = rep(c(0.1, 0.2), each = 50)) set.seed(1) rt2a <- rLBA(50, A=0.5, b=1, t0 = 0.2, mean_v=c(1.5, 0.9), sd_v=list(0.2, 0.3), st0 = 0.1) rt2b <- rLBA(50, A=0.4, b=1.4, t0 = 0.4, mean_v=c(0.9, 1.5), sd_v=c(0.4, 0.5), st0 = 0.2) expect_identical(rt1, rbind(rt2a, rt2b)) }) .Random.seed <<- xrtdists/tests/testthat/test-diffusion-math.R0000644000175000017500000001026713667512040021131 0ustar nileshnilesh context("Diffusion Model: Compare with RWiener") test_that("ddiffusion is equal to dwiener", { if (require(RWiener)) { for (a in seq(0.5, 2.0, length.out = 6)) { for (v in seq(0.5, 2.0, length.out = 6)) { for (t0 in seq(0.05, 0.5, length.out = 6)) { for (z in seq(0.4, 0.6, length.out = 6)) { expect_equivalent( ddiffusion(seq(0, 3, length.out = 15), a=a, v=v, t0=t0, z = z*a) , dwiener(seq(0, 3, length.out = 15), resp = rep("upper", 15), alpha=a, delta=v, tau = t0, beta = z) ) expect_equivalent( ddiffusion(seq(0, 3, length.out = 16), c("upper", "lower"), a=a, v=v, t0=t0, z = z*a) , dwiener(seq(0, 3, length.out = 16), resp = rep(c("upper", "lower"), 8), alpha=a, delta=v, tau = t0, beta = z) ) } } } } } }) test_that("pdiffusion is equal to pwiener", { testthat::skip_on_cran() if (require(RWiener)) { for (a in seq(0.5, 2.0, length.out = 6)) { for (v in seq(0.5, 2.0, length.out = 6)) { for (t0 in seq(0.05, 0.5, length.out = 6)) { for (z in seq(0.4, 0.6, length.out = 6)) { expect_equal( pdiffusion(seq(0, 3, length.out = 15), a=a, v=v, t0=t0, z = 
z*a) , pwiener(seq(0, 3, length.out = 15), resp = rep("upper", 15), alpha=a, delta=v, tau = t0, beta = z) , tolerance = 0.01) expect_equivalent( pdiffusion(seq(0, 3, length.out = 16), c("upper", "lower"), a=a, v=v, t0=t0, z = z*a) , pwiener(seq(0, 3, length.out = 16), resp = rep(c("upper", "lower"), 8), alpha=a, delta=v, tau = t0, beta = z) , tolerance = 0.01) } } } } } }) tryCatch.W.E <- function(expr) { mc <- match.call() mc2 <- match.call(definition = ks.test, call = as.call(mc[[2]])) mc2[[1]] <- list W <- NULL w.handler <- function(w){ # warning handler W <<- w invokeRestart("muffleWarning") } list(value = withCallingHandlers(tryCatch(expr, error = function(e) e), warning = w.handler), warning = W, data = eval(mc2, envir = parent.frame())) } test_that("Norm: pdiffusion corresponds to random deviates", { testthat::skip_on_cran() #testthat::skip_on_travis() normalised_pdiffusion <- function(rt,...) pdiffusion(rt,...)/pdiffusion(rt=10, ...) normalised_pwiener <- function(q,...) pwiener(q, resp = rep("upper", length(q)), ...)/pwiener(q=10, resp = "upper", ...) samples <- 1e4 p_min <- 0.001 p_max <- 0.01 a <- runif(1, 0.3, 0.9) t0 <- runif(1, 0.1, 0.5) v <- runif(1, 0.5, 2.5) z <- runif(1, 0.5, 0.6) r_diffusion <- rdiffusion(samples, a=a, t0=t0, v=v, z=z*a) t1 <- tryCatch.W.E(ks.test(r_diffusion$rt[r_diffusion$response=="upper"], normalised_pdiffusion, a=a*2, t0=t0, v=v*2, z=z*a)) expect_lt(t1$value$p.value, p_min) t2 <- tryCatch.W.E(ks.test(r_diffusion$rt[r_diffusion$response=="upper"], normalised_pdiffusion, a=a, t0=t0, v=v, z=z*a)) expect_gt(t2$value$p.value, p_max) t3 <- tryCatch.W.E(ks.test(r_diffusion$rt[r_diffusion$response=="upper"], normalised_pwiener, alpha=a, delta=v, tau = t0, beta = z)) expect_gt(t3$value$p.value, p_max) }) test_that("Norm: pdiffusion corresponds to random deviates (with variabilities)", { testthat::skip_on_cran() #testthat::skip_on_travis() normalised_pdiffusion <- function(rt,...) pdiffusion(rt,...)/pdiffusion(rt=10, ...)
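## pdiffusion() is likewise defective (its asymptote is the probability of
## an upper-boundary response), so it is normalised by its value at
## rt = 10, which for these parameter ranges is effectively the asymptotic
## response probability; the rescaled function is then a proper CDF for
## ks.test().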
samples <- 1e4 p_min <- 0.001 p_max <- 0.01 a <- runif(1, 0.3, 0.9) t0 <- runif(1, 0.1, 0.5) v <- runif(1, 0.5, 2.5) sv <- runif(1, 0.1, 0.5) sz <- runif(1, 0.05, 0.2) z <- runif(1, 0.5, 0.6) r_diffusion <- rdiffusion(samples, a=a, t0=t0, v=v, z=z*a, sz=sz, sv = sv) t1 <- tryCatch.W.E(ks.test(r_diffusion$rt[r_diffusion$response=="upper"], normalised_pdiffusion, a=a, t0=t0, v=v, z=z*a, sv=1, sz = 0.6*a)) expect_lt(t1$value$p.value, p_min) t2 <- tryCatch.W.E(ks.test(r_diffusion$rt[r_diffusion$response=="upper"], normalised_pdiffusion, a=a, t0=t0, v=v, z=z*a,sv=sv, sz=sz)) expect_gt(t2$value$p.value, p_max) })rtdists/tests/testthat/test-diffusion.R0000644000175000017500000002036013667512040020175 0ustar nileshnileshcontext("Diffusion pdiffusion and ddiffusion functions.") test_that("diffusion functions work with numeric and factor boundaries", { n_test <- 20 rts <- rdiffusion(n_test, a=1, z=0.5, v=2, t0=0.5, d=0, sz = 0, sv = 0, st0 = 0) expect_is(ddiffusion(rts$rt, response = rts$response, a=1, z=0.5, v=2, t0=0.5, d=0, sz = 0, sv = 0, st0 = 0), "numeric") expect_is(pdiffusion(sort(rts$rt), response = rts$response, a=1, z=0.5, v=2, t0=0.5, d=0, sz = 0, sv = 0, st0 = 0), "numeric") expect_is(ddiffusion(rts$rt, response = sample(1:2, 20, replace = TRUE), a=1, z=0.5, v=2, t0=0.5, d=0, sz = 0, sv = 0, st0 = 0), "numeric") expect_is(pdiffusion(sort(rts$rt), response = sample(1:2, 20, replace = TRUE), a=1, z=0.5, v=2, t0=0.5, d=0, sz = 0, sv = 0, st0 = 0), "numeric") expect_error(ddiffusion(rts$rt, rep_len(1:3, length.out=20), a=1, z=0.5, v=2, t0=0.5, d=0, sz = 0, sv = 0, st0 = 0), "response") expect_error(pdiffusion(sort(rts$rt), rep_len(1:3, length.out=20), a=1, z=0.5, v=2, t0=0.5, d=0, sz = 0, sv = 0, st0 = 0), "response") }) test_that("diffusion functions are identical with all input options", { rt1 <- rdiffusion(500, a=1, v=2, t0=0.5) # get density for random RTs: ref <- sum(log(ddiffusion(rt1$rt, rt1$response, a=1, v=2, t0=0.5))) # response is factor expect_identical(sum(log(ddiffusion(rt1$rt, as.numeric(rt1$response), a=1, v=2, t0=0.5))), ref) expect_identical(sum(log(ddiffusion(rt1$rt, as.character(rt1$response), a=1, v=2, t0=0.5))), ref) expect_identical(sum(log(ddiffusion(rt1, a=1, v=2, t0=0.5))), ref) rt2 <- rt1[order(rt1$rt),] ref2 <- pdiffusion(rt2$rt, rt2$response, a=1, v=2, t0=0.5) expect_identical(pdiffusion(rt2$rt, as.numeric(rt2$response), a=1, v=2, t0=0.5), ref2) expect_identical(pdiffusion(rt2$rt, as.character(rt2$response), a=1, v=2, t0=0.5), ref2) expect_identical(pdiffusion(rt2, a=1, v=2, t0=0.5), ref2) # rt3 <- data.frame(p = rep(seq(0.1, 0.9, 0.2), 2), # response = rep(c("upper", "lower"), each = 5)) rt3 <- data.frame(p = rep(c(0.05, 0.1), 2), response = factor(rep(c("upper", "lower"), each = 2))) ref3 <- qdiffusion(rt3$p, rt3$response, a=1, v=2, t0=0.5) expect_identical(qdiffusion(rt3$p, as.numeric(rt3$response), a=1, v=2, t0=0.5), ref3) expect_identical(qdiffusion(rt3$p, as.character(rt3$response), a=1, v=2, t0=0.5), ref3) expect_identical(qdiffusion(rt3, a=1, v=2, t0=0.5), ref3) }) test_that("qdiffusion is equivalent to manual calculation",{ p11_fit <- structure(list(par = structure(c(1.32060063610882, 3.27271614698074, 0.338560144920614, 0.34996447540773, 0.201794924457386, 1.05516829794661), .Names = c("a", "v", "t0", "sz", "st0", "sv")))) q <- c(0.1, 0.3, 0.5, 0.7, 0.9) i_pdiffusion <- function(x, args, value, response) { abs(value - do.call(pdiffusion, args = c(rt = x, args, response = response))) } prop_correct <- pdiffusion(Inf, response = "upper", 
a=p11_fit$par["a"], v=p11_fit$par["v"], t0=p11_fit$par["t0"], sz=p11_fit$par["sz"], st0=p11_fit$par["st0"], sv=p11_fit$par["sv"]) pred_dir <- sapply(q*prop_correct, function(y) optimize(i_pdiffusion, c(0, 5), args = as.list(p11_fit$par), value = y, response = "upper")[[1]]) expect_equal(qdiffusion(q, response = "upper", a=p11_fit$par["a"], v=p11_fit$par["v"], t0=p11_fit$par["t0"], sz=p11_fit$par["sz"], st0=p11_fit$par["st0"], sv=p11_fit$par["sv"], scale_p = TRUE),pred_dir, tolerance=0.001) expect_equal(suppressWarnings(qdiffusion(q, response = "lower", a=p11_fit$par["a"], v=p11_fit$par["v"], t0=p11_fit$par["t0"], sz=p11_fit$par["sz"]*p11_fit$par["a"], st0=p11_fit$par["st0"], sv=p11_fit$par["sv"])),as.numeric(rep(NA, 5))) }) test_that("s works as expected", { set.seed(1) x <- rdiffusion(n = 100, a = 1, v = 2, t0 = 0.3, z = 0.5, s = 1) set.seed(1) y <- rdiffusion(n = 100, a = 0.1, v = 0.2, t0 = 0.3, z = 0.05, s = 0.1) expect_identical(x, y) set.seed(1) z <- rdiffusion(n = 100, a = 0.1, v = 0.2, t0 = 0.3, s = 0.1) expect_identical(x, z) expect_identical( ddiffusion(x[x$response == "upper", "rt"], a = 1, v = 2, t0 = 0.3, z = 0.5, s=1), ddiffusion(x[x$response == "upper", "rt"], a = 0.1, v = 0.2, t0 = 0.3, z = 0.05, s=0.1) ) expect_identical( ddiffusion(x[x$response == "upper", "rt"], a = 1, v = 2, t0 = 0.3, z = 0.5, s=1), ddiffusion(x[x$response == "upper", "rt"], a = 0.1, v = 0.2, t0 = 0.3, s=0.1) ) expect_identical( pdiffusion(sort(x[x$response == "upper", "rt"]), a = 1, v = 2, t0 = 0.3, z = 0.5, s=1), pdiffusion(sort(x[x$response == "upper", "rt"]), a = 0.1, v = 0.2, t0 = 0.3, z = 0.05, s=0.1) ) expect_identical( pdiffusion(sort(x[x$response == "upper", "rt"]), a = 1, v = 2, t0 = 0.3, z = 0.5, s=1), pdiffusion(sort(x[x$response == "upper", "rt"]), a = 0.1, v = 0.2, t0 = 0.3, s=0.1) ) expect_identical( qdiffusion(0.6, a = 1, v = 2, t0 = 0.3, z = 0.5, s=1), qdiffusion(0.6, a = 0.1, v = 0.2, t0 = 0.3, z = 0.05, s=0.1) ) expect_identical( qdiffusion(0.6, a = 1, v = 2, t0 = 0.3, z = 0.5, s=1), qdiffusion(0.6, a = 0.1, v = 0.2, t0 = 0.3, s=0.1) ) }) test_that("scale_p works as expected", { (max_p <- pdiffusion(20, a=1, v=2, t0=0.5, st0=0.2, sz = 0.1, sv = 0.5, response="u")) # [1] 0.8705141 # to get predicted quantiles, scale required quantiles by maximally predicted response rate: qs <- c(.1, .3, .5, .7, .9) expect_equal(qdiffusion(qs*max_p, a=1, v=2, t0=0.5, st0=0.2, sz = 0.1, sv = 0.5, response="u"), qdiffusion(qs, a=1, v=2, t0=0.5, st0=0.2, sz = 0.1, sv = 0.5, response="u", scale_p = TRUE)) }) test_that("rdiffusion recovers Table 1 from Wagenmakers et al. (2007)", { set.seed(4) n <- 1e4 # number of samples # take parameter valeus from Table 2 and set s to 0.1 george <- rdiffusion(n, a = 0.12, v = 0.25, t0 = 0.3, s = 0.1) rich <- rdiffusion(n, a = 0.12, v = 0.25, t0 = 0.25, s = 0.1) amy <- rdiffusion(n, a = 0.08, v = 0.25, t0 = 0.3, s = 0.1) mark <- rdiffusion(n, a = 0.08, v = 0.25, t0 = 0.25, s = 0.1) george$id <- "george" rich$id <- "rich" amy$id <- "amy" mark$id <- "mark" wag <- rbind(george, rich, amy, mark) wag$id <- factor(wag$id, levels = c("george", "rich", "amy", "mark")) expect_equal(aggregate(rt ~ id, wag, mean)$rt, c(0.517, 0.467, 0.422, 0.372), tolerance = 0.003) expect_equal(aggregate(as.numeric(response)-1 ~ id, wag, mean)[,2], c(0.953, 0.953, 0.881, 0.881), tolerance = 0.01) expect_equal(aggregate(rt ~ id, wag, var)$rt, c(0.024, 0.024, 0.009, 0.009), tolerance = 0.01) }) test_that("pdiffusion recovers proportions of Table 1 from Wagenmakers et al. 
(2007)", { props <- pdiffusion(rep(Inf, 4), a = rep(c(0.12, 0.08), each = 2), v = 0.25, t0 = c(0.3, 0.25), s = 0.1) expect_equal(props, c(0.953, 0.953, 0.881, 0.881), tolerance = 0.001) props <- pdiffusion(rep(Inf, 4), a = rep(c(0.12, 0.08), each = 2), v = 0.25, t0 = c(0.3, 0.25), z = rep(c(0.06, 0.04), each = 2), s = 0.1) expect_equal(props, c(0.953, 0.953, 0.881, 0.881), tolerance = 0.001) }) test_that("ddiffusion and dwiener give same log-likelihoods when fitted to RNG data", { skip_if_not_installed("RWiener") library("RWiener") set.seed(1) ## identical calls (but different random values) rt1 <- rdiffusion(500, a=1, v=2, t0=0.5) ll_diffusion <- function(pars, rt, response) { densities <- ddiffusion(rt, response=response, a=pars[1], v=pars[2], t0=pars[3]) if (any(densities == 0)) return(1e6) return(-sum(log(densities))) } ll_wiener <- function(pars, rt, response) { densities <- dwiener(q = rt, resp=response, alpha = pars[1], delta = pars[2], tau = pars[3], beta = 0.5) if (any(densities == 0)) return(1e6) return(-sum(log(densities))) } start <- c(runif(2, 0.5, 3), 0.1, runif(3, 0, 0.5)) names(start) <- c("a", "v", "t0") recov_rtdists <- nlminb(start, ll_diffusion, lower = 0.1, rt=rt1$rt, response=rt1$response) recov_rwiener <- nlminb(start, ll_wiener, lower = 0.1, rt=rt1$rt, response=rt1$response) expect_equivalent(recov_rtdists, recov_rwiener) }) rtdists/tests/testthat/test-rrd.R0000644000175000017500000000370713667512040017004 0ustar nileshnilesh context("diffusion parameter input (via rdiffusion)") test_that("check individual parameters:", { expect_that(rdiffusion(10, a=1, z=0.5, v=2, t0=0.5, d=0, sz = 0, sv = 0, st0 = 0), is_a("data.frame")) expect_that(suppressWarnings(rdiffusion(10, a=1, z=0.5, v=2, t0=0.5, d=0, sz = 0, sv = 0, st0 = NULL)), throws_error("Not enough parameters")) expect_that(rdiffusion(10, a=1, z=0.5, v=2, t0=0.5, d=0, sz = 0, sv = Inf, st0 = 0), throws_error()) expect_that(suppressWarnings(rdiffusion(10, a=1, z=NA, v=2, t0=0.5, d=0, sz = 0, sv = 0, st0 = 0)), throws_error()) }) # test_that("check parameters:", { # p1 <- c(1, 0.5, 2, 0.5, 0, 0, 0, 0) # expect_that(rdiffusion(10, parameters = p1), is_a("data.frame")) # expect_that(rdiffusion(10, parameters = p1[1:7]), throws_error()) # names(p1) <- c("a", "z", "v","t0","d", "sz","sv","st0") # expect_that(rdiffusion(10, parameters = p1), is_a("data.frame")) # names(p1) <- c(c("a","v","t0","z"), sample(c("sz","sv","st0", "d"))) # expect_that(rdiffusion(10, parameters = p1), is_a("data.frame")) # names(p1)[3] <- "xx" # expect_that(rdiffusion(10, parameters = p1), throws_error()) # names(p1) <- NULL # p1[1] <- NA # expect_that(rdiffusion(10, parameters = p1), throws_error()) # p1[1] <- Inf # expect_that(rdiffusion(10, parameters = p1), throws_error()) # }) context("rdiffusion: random number generation for the diffusion model") test_that("rdiffusion works", { rrds1 <- rdiffusion(10, a=1, z=0.5, v=2, t0=0.5, d=0, sz = 0, sv = 0, st0 = 0) rrds2 <- rdiffusion(10, a=1, z=0.5, v=2, t0=0.5, d=0, sz = 0, sv = 0, st0 = 0) expect_that(rrds1, is_a("data.frame")) expect_false(isTRUE(all.equal(rrds1, rrds2))) set.seed(1) rrds1 <- rdiffusion(10, a=1, z=0.5, v=2, t0=0.5, d=0, sz = 0, sv = 0, st0 = 0) set.seed(1) rrds2 <- rdiffusion(10, a=1, z=0.5, v=2, t0=0.5, d=0, sz = 0, sv = 0, st0 = 0) expect_that(rrds1, equals(rrds2)) set.seed(NULL) }) rtdists/tests/testthat/test-diffusion-bugs.R0000644000175000017500000001365413667512040021143 0ustar nileshnilesh context("Diffusion Model: bugs") test_that("ddiffusion passes NAs if all are NA", { 
expect_true( isTRUE(is.na(ddiffusion(rt = 1, response = 1, a = NA, v = NA, t0 = NA))) ) # expect_true( # isTRUE(is.na(ddiffusion(rt = 1, response = 1, a = 0, v = NA, t0 = NA))) # ) expect_true( isTRUE(is.na(ddiffusion(rt = 1, response = 1, a = NA, v = 0, t0 = NA))) ) expect_true( isTRUE(is.na(ddiffusion(rt = 1, response = 1, a = NA, v = NA, t0 = 0))) ) }) test_that("qdiffusion does not fail for certain values", { qex <- c(a = 1.97092532512193, t0 = 0.22083875893466, sv = 0.462630056494015, v = -2.59881372383245, resp_prop = 0.99877722499984, z = 0.5, sz= 0.1) #pdiffusion(0.28165, response = "lower", a=qex["a"], v=qex["v"], t0 = qex["t0"], sv = qex["sv"], sz = 0.1, z = 0.5) expect_true(!is.na( qdiffusion(p = 0.0998, response = "lower", a=qex["a"], v=qex["v"], t0 = qex["t0"], sv = qex["sv"], sz = 0.1, z = 0.5) )) }) test_that("qdiffusion does not fail for named probability", { qex2 <- c( t0 = 0.194096266864241, sv = 0.867039443746426, v = -3.8704467331985, a = 0.819960332004152, resp_prop = 0.923810293538986, z = 0.5, sz= 0.1) #pdiffusion(20, response = "upper", a=qex2["a"], v=qex2["v"], t0 = qex2["t0"], sv = qex2["sv"], sz = 0.1, z = 0.5) expect_true(!is.na( qdiffusion(p = qex2["resp_prop"], response = "lower", a=qex2["a"], v=qex2["v"], t0 = qex2["t0"], sv = qex2["sv"], sz = 0.1, z = 0.5*qex2["a"]) )) }) test_that("pdiffusion is equal to pwiener", { if (require(RWiener)) { expect_equal(pdiffusion(2, response = "upper", a = 1.63, v = 1.17, t0 = 0.22, z = 0.517*1.63), pwiener(2, resp = "upper", alpha = 1.63, delta = 1.17, tau = 0.22, beta = 0.517), tolerance = 0.0001) } }) test_that("numerical integration does not fail in certain cases", { # fails expect_equal(pdiffusion(0.640321329425053, response = "upper", a = 1.1, v=1.33, t0 = 0.3, z = 0.55*1.1, st0 = 0.2, sz = 0.7*1.1, precision = 3), 0.5345, tolerance = 0.001) expect_equal(pdiffusion(0.640321329425053, response = "upper", a = 1.1, v=1.33, t0 = 0.3, z = 0.55*1.1, st0 = 0.2, sz = 0.7*1.1, precision = 2.9), 0.5345517, tolerance = 0.001) }) test_that("pdiffusion does not add to 1 for both responses in case sz goes towards max.", { expect_equal(sum(pdiffusion(rep(Inf, 2), a=1, v=2, t0=0.5, st0=0.2, sz = 0.1, sv = 0.5, response=c("l", "u"))), 1, tolerance = 0.001) expect_equal(sum(pdiffusion(rep(Inf, 2), a=1, v=2, z = 0.4, t0=0.5, st0=0.2, sz = 0.1, sv = 0.5, response=c("l", "u"))), 1, tolerance = 0.001) expect_equal(sum(pdiffusion(rep(Inf, 2), response=c("l", "u"), a=1, v = 3.69, t0 = 0.3, sz = 0.1, sv = 1.2, st0 = 0)), 1, tolerance = 0.001) expect_equal(sum(pdiffusion(rep(Inf, 2), response=c("l", "u"), a=1, v = 3.69, t0 = 0.3, sz = 0.5, sv = 1.2, st0 = 0)), 1, tolerance = 0.001) #testthat::skip("currently pdiffusion does not add up to 1 fo rthe following tests.") expect_equal(sum(pdiffusion(rep(Inf, 2), response=c("l", "u"), a=1, v = 3.69, t0 = 0.3, sz = 0.9, sv = 1.2, st0 = 0)), 1, tolerance = 0.001) expect_equal(sum(pdiffusion(rep(Inf, 2), response=c("l", "u"), a=0.08, v = 0.369, t0 = 0.3, sz = 0.07, sv = 0.12, st0 = 0, s=0.1, precision = 2)), 1, tolerance = 0.001) expect_equal(sum(pdiffusion(rep(Inf, 2), response=c("l", "u"), a=0.08, v = 0.369, t0 = 0.3, sz = 0.07, sv = 0.12, st0 = 0, s=0.1, precision = 3)), 1, tolerance = 0.001) }) test_that("ddiffusion does not go crazy if sz, sv, and st0 goes to 0", { y1 <- ddiffusion(0.53, a = 1.84, t0 = 0.14, sv =0 , sz =0, z = 0.879, v = -2.4, st0 = 0, response = 1) x1 <- ddiffusion(0.53, a = 1.84, t0 = 0.14, sv =0 , sz =0.0000001, z = 0.879, v = -2.4, response = 1) x2 <- ddiffusion(0.53, a = 1.84, 
t0 = 0.14, sv =0 , sz =0.000001, z = 0.879, v = -2.4, response = 1) x3 <- ddiffusion(0.53, a = 1.84, t0 = 0.14, sv =0 , sz =0.00001, z = 0.879, v = -2.4, response = 1) st00 <- ddiffusion(0.53, a = 1.84, t0 = 0.14, sv =0 , st0 =0.0000007, z = 0.879, v = -2.4, response = 1) st01 <- ddiffusion(0.53, a = 1.84, t0 = 0.14, sv =0 , st0 =0.0000001, z = 0.879, v = -2.4, response = 1) st02 <- ddiffusion(0.53, a = 1.84, t0 = 0.14, sv =0 , st0 =0.000001, z = 0.879, v = -2.4, response = 1) st03 <- ddiffusion(0.53, a = 1.84, t0 = 0.14, sv =0 , st0 =0.00001, z = 0.879, v = -2.4, response = 1) sv0 <- ddiffusion(0.53, a = 1.84, t0 = 0.14, sv =0.0000007, z = 0.879, v = -2.4, response = 1) sv1 <- ddiffusion(0.53, a = 1.84, t0 = 0.14, sv =0.0000001, z = 0.879, v = -2.4, response = 1) sv2 <- ddiffusion(0.53, a = 1.84, t0 = 0.14, sv =0.000001, z = 0.879, v = -2.4, response = 1) sv3 <- ddiffusion(0.53, a = 1.84, t0 = 0.14, sv =0.00001, z = 0.879, v = -2.4, response = 1) tolerance <- 0.0001 expect_equal(y1, x1, tolerance=tolerance) expect_equal(y1, x2, tolerance=tolerance) expect_equal(y1, x3, tolerance=tolerance) expect_equal(y1, st00, tolerance=tolerance) expect_equal(y1, st01, tolerance=tolerance) expect_equal(y1, st02, tolerance=tolerance) expect_equal(y1, st03, tolerance=tolerance) expect_equal(y1, sv0, tolerance=tolerance) expect_equal(y1, sv1, tolerance=tolerance) expect_equal(y1, sv2, tolerance=tolerance) expect_equal(y1, sv3, tolerance=tolerance) ### shows the bug: # curve(ddiffusion(rep(0.53, 101), a = 1.84, t0 = 0.14, sv =0 , sz =x, z = 0.879, v = -2.4, response = 1), from= 0, to = 0.00001, xlab = "sz") # curve(ddiffusion(rep(0.53, 101), a = 1.84, t0 = 0.14, sv =x , sz =0, z = 0.879, v = -2.4, response = 1), from= 0, to = 0.00001, xlab = "sv") # curve(ddiffusion(rep(0.53, 101), a = 1.84, t0 = 0.14, sv =0 , sz =0, z = 0.879, v = -2.4, st0 = x, response = 1), from= 0, to = 0.00001, xlab = "st0") # curve(ddiffusion(rep(0.53, 101), a = 2, t0 = 0.3, sv =0 , sz =0, z = x, v = 1, response = 1), from= 0, to = 0.1, xlab = "sz") }) rtdists/tests/testthat/test-lba_basics.R0000644000175000017500000006655213667512040020306 0ustar nileshnilesh context("LBA works correctly") test_that("dLBA norm is identical to n1PDF", { n <- 100 x <- rLBA(n, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)) ex1_n1pdf <- vector("numeric", n) ex1_n1pdf[x$response == 1] <- n1PDF(x$rt[x$response == 1], A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = TRUE) ex1_n1pdf[x$response == 2] <- n1PDF(x$rt[x$response == 2], A=0.5, b=1, t0 = 0.5, mean_v=c(1, 1.2), sd_v=c(0.3,0.2), distribution = "norm", silent = TRUE) ex1_dLBA <- dLBA(x$rt, x$response, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm") expect_identical(ex1_n1pdf, ex1_dLBA) ex2_n1pdf <- vector("numeric", n) ex2_n1pdf[x$response == 1] <- n1PDF(x$rt[x$response == 1], A=list(0.5, 0.6), b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = TRUE) ex2_n1pdf[x$response == 2] <- n1PDF(x$rt[x$response == 2], A=list(0.6, 0.5), b=1, t0 = 0.5, mean_v=c(1, 1.2), sd_v=c(0.3,0.2), distribution = "norm", silent = TRUE) ex2_dLBA <- dLBA(x$rt, x$response, A=list(0.5, 0.6), b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm") expect_identical(ex2_n1pdf, ex2_dLBA) ex3_n1pdf <- vector("numeric", n) ex3_n1pdf[x$response == 1] <- n1PDF(x$rt[x$response == 1], A=rep(c(0.5, 0.6), n/2)[x$response == 1], b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = 
TRUE) ex3_n1pdf[x$response == 2] <- n1PDF(x$rt[x$response == 2], A=rep(c(0.5, 0.6), n/2)[x$response == 2], b=1, t0 = 0.5, mean_v=c(1, 1.2), sd_v=c(0.3,0.2), distribution = "norm", silent = TRUE) ex3_dLBA <- dLBA(x$rt, x$response, A=c(0.5, 0.6), b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm") expect_identical(ex3_n1pdf, ex3_dLBA) ex4_n1pdf <- vector("numeric", n) ex4_n1pdf[x$response == 1] <- n1PDF(x$rt[x$response == 1], A=list( rep(c(0.5, 0.6), n/2)[x$response == 1], rep(c(0.6, 0.5), n/2)[x$response == 1] ), b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = TRUE) ex4_n1pdf[x$response == 2] <- n1PDF(x$rt[x$response == 2], A=list( rep(c(0.6, 0.5), n/2)[x$response == 2], rep(c(0.5, 0.6), n/2)[x$response == 2] ), b=1, t0 = 0.5, mean_v=c(1, 1.2), sd_v=c(0.3,0.2), distribution = "norm", silent = TRUE) ex4_dLBA <- dLBA(x$rt, x$response, A=list(c(0.5, 0.6), c(0.6, 0.5)), b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm") expect_identical(ex4_n1pdf, ex4_dLBA) ex5_n1pdf <- vector("numeric", n) ex5_n1pdf[x$response == 1] <- n1PDF(x$rt[x$response == 1], A=list( rep(c(0.5, 0.6), n/2)[x$response == 1], rep(c(0.6, 0.5), n/2)[x$response == 1] ), b=1, t0 = list(0.5, 0.3), mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = TRUE) ex5_n1pdf[x$response == 2] <- n1PDF(x$rt[x$response == 2], A=list( rep(c(0.6, 0.5), n/2)[x$response == 2], rep(c(0.5, 0.6), n/2)[x$response == 2] ), b=1, t0 = list(0.3, 0.5), mean_v=c(1, 1.2), sd_v=c(0.3,0.2), distribution = "norm", silent = TRUE) ex5_dLBA <- dLBA(x$rt, x$response, A=list(c(0.5, 0.6), c(0.6, 0.5)), b=1, t0 = list(0.5, 0.3), mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm") expect_identical(ex5_n1pdf, ex5_dLBA) ex6_n1pdf <- vector("numeric", n) ex6_n1pdf[x$response == 1] <- n1PDF(x$rt[x$response == 1], A=list( rep(c(0.5, 0.6), n/2)[x$response == 1], rep(c(0.6, 0.5), n/2)[x$response == 1] ), b=1, t0 = list(0.5, 0.3), mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = TRUE, st0 = 0.2) ex6_n1pdf[x$response == 2] <- n1PDF(x$rt[x$response == 2], A=list( rep(c(0.6, 0.5), n/2)[x$response == 2], rep(c(0.5, 0.6), n/2)[x$response == 2] ), b=1, t0 = list(0.3, 0.5), mean_v=c(1, 1.2), sd_v=c(0.3,0.2), distribution = "norm", silent = TRUE, st0 = 0.2) ex6_dLBA <- dLBA(x$rt, x$response, A=list(c(0.5, 0.6), c(0.6, 0.5)), b=1, t0 = list(0.5, 0.3), mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", st0 = 0.2) expect_identical(ex6_n1pdf, ex6_dLBA) drifts1 <- rnorm(n/2, 1, 0.1) drifts2 <- rnorm(n/2, 1.5, 0.2) sddrift1 <- runif(n/5, 0.1, 0.3) sddrift2 <- runif(n/5, 0.2, 0.4) ex7_n1pdf <- vector("numeric", n) ex7_n1pdf[x$response == 1] <- n1PDF(x$rt[x$response == 1], A=list( rep(c(0.5, 0.6), n/2)[x$response == 1], rep(c(0.6, 0.5), n/2)[x$response == 1] ), b = list( rep(c(1, 1.1, 1.2, 1.3, 1.4), n/5)[x$response == 1], rep(c(1, 1.1, 1.2, 1.3, 1.4) - 0.2, n/5)[x$response == 1] ), t0 = list( seq(0.1, 0.5, length.out = n)[x$response == 1], seq(0.3, 0.8, length.out = n)[x$response == 1]), mean_v = list( rep(drifts1, 2)[x$response == 1], rep(drifts2, 2)[x$response == 1] ), sd_v = list( rep(sddrift1, 5)[x$response == 1], rep(sddrift2, 5)[x$response == 1] ), distribution = "norm", silent = TRUE, st0 = 0.2) ex7_n1pdf[x$response == 2] <- n1PDF(x$rt[x$response == 2], A=list( rep(c(0.6, 0.5), n/2)[x$response == 2], rep(c(0.5, 0.6), n/2)[x$response == 2] ), b = list( rep(c(1, 1.1, 1.2, 1.3, 1.4) - 0.2, n/5)[x$response == 2], rep(c(1, 1.1, 1.2, 1.3, 1.4), n/5)[x$response == 2] ), 
t0 = list( seq(0.3, 0.8, length.out = n)[x$response == 2], seq(0.1, 0.5, length.out = n)[x$response == 2] ), mean_v = list( rep(drifts2, 2)[x$response == 2], rep(drifts1, 2)[x$response == 2] ), sd_v = list( rep(sddrift2, 5)[x$response == 2], rep(sddrift1, 5)[x$response == 2] ), distribution = "norm", silent = TRUE, st0 = 0.2) ex7_dLBA <- dLBA(x$rt, x$response, A = list( c(0.5, 0.6), c(0.6, 0.5) ), b = list( c(1, 1.1, 1.2, 1.3, 1.4), c(1, 1.1, 1.2, 1.3, 1.4) - 0.2 ), t0 = list( seq(0.1, 0.5, length.out = n), seq(0.3, 0.8, length.out = n)), mean_v = list( drifts1, drifts2 ), sd_v = list( sddrift1, sddrift2 ), distribution = "norm", st0 = 0.2) expect_identical(ex7_n1pdf, ex7_dLBA) }) test_that("pLBA norm is identical to n1CDF", { n <- 100 x <- rLBA(n, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)) ex1_n1cdf <- vector("numeric", n) ex1_n1cdf[x$response == 1] <- n1CDF(x$rt[x$response == 1], A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = TRUE) ex1_n1cdf[x$response == 2] <- n1CDF(x$rt[x$response == 2], A=0.5, b=1, t0 = 0.5, mean_v=c(1, 1.2), sd_v=c(0.3,0.2), distribution = "norm", silent = TRUE) ex1_pLBA <- pLBA(x$rt, x$response, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm") expect_identical(ex1_n1cdf, ex1_pLBA) ex2_n1cdf <- vector("numeric", n) ex2_n1cdf[x$response == 1] <- n1CDF(x$rt[x$response == 1], A=list(0.5, 0.6), b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = TRUE) ex2_n1cdf[x$response == 2] <- n1CDF(x$rt[x$response == 2], A=list(0.6, 0.5), b=1, t0 = 0.5, mean_v=c(1, 1.2), sd_v=c(0.3,0.2), distribution = "norm", silent = TRUE) ex2_pLBA <- pLBA(x$rt, x$response, A=list(0.5, 0.6), b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm") expect_identical(ex2_n1cdf, ex2_pLBA) ex3_n1cdf <- vector("numeric", n) ex3_n1cdf[x$response == 1] <- n1CDF(x$rt[x$response == 1], A=rep(c(0.5, 0.6), n/2)[x$response == 1], b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = TRUE) ex3_n1cdf[x$response == 2] <- n1CDF(x$rt[x$response == 2], A=rep(c(0.5, 0.6), n/2)[x$response == 2], b=1, t0 = 0.5, mean_v=c(1, 1.2), sd_v=c(0.3,0.2), distribution = "norm", silent = TRUE) ex3_pLBA <- pLBA(x$rt, x$response, A=c(0.5, 0.6), b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm") expect_identical(ex3_n1cdf, ex3_pLBA) ex4_n1cdf <- vector("numeric", n) ex4_n1cdf[x$response == 1] <- n1CDF(x$rt[x$response == 1], A=list( rep(c(0.5, 0.6), n/2)[x$response == 1], rep(c(0.6, 0.5), n/2)[x$response == 1] ), b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = TRUE) ex4_n1cdf[x$response == 2] <- n1CDF(x$rt[x$response == 2], A=list( rep(c(0.6, 0.5), n/2)[x$response == 2], rep(c(0.5, 0.6), n/2)[x$response == 2] ), b=1, t0 = 0.5, mean_v=c(1, 1.2), sd_v=c(0.3,0.2), distribution = "norm", silent = TRUE) ex4_pLBA <- pLBA(x$rt, x$response, A=list(c(0.5, 0.6), c(0.6, 0.5)), b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm") expect_identical(ex4_n1cdf, ex4_pLBA) ex5_n1cdf <- vector("numeric", n) ex5_n1cdf[x$response == 1] <- n1CDF(x$rt[x$response == 1], A=list( rep(c(0.5, 0.6), n/2)[x$response == 1], rep(c(0.6, 0.5), n/2)[x$response == 1] ), b=1, t0 = list(0.5, 0.3), mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = TRUE) ex5_n1cdf[x$response == 2] <- n1CDF(x$rt[x$response == 2], A=list( rep(c(0.6, 0.5), n/2)[x$response == 2], rep(c(0.5, 0.6), n/2)[x$response == 2] ), b=1, t0 = list(0.3, 
0.5), mean_v=c(1, 1.2), sd_v=c(0.3,0.2), distribution = "norm", silent = TRUE) ex5_pLBA <- pLBA(x$rt, x$response, A=list(c(0.5, 0.6), c(0.6, 0.5)), b=1, t0 = list(0.5, 0.3), mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm") expect_identical(ex5_n1cdf, ex5_pLBA) ex6_n1cdf <- vector("numeric", n) ex6_n1cdf[x$response == 1] <- n1CDF(x$rt[x$response == 1], A=list( rep(c(0.5, 0.6), n/2)[x$response == 1], rep(c(0.6, 0.5), n/2)[x$response == 1] ), b=1, t0 = list(0.5, 0.3), mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = TRUE, st0 = 0.2) ex6_n1cdf[x$response == 2] <- n1CDF(x$rt[x$response == 2], A=list( rep(c(0.6, 0.5), n/2)[x$response == 2], rep(c(0.5, 0.6), n/2)[x$response == 2] ), b=1, t0 = list(0.3, 0.5), mean_v=c(1, 1.2), sd_v=c(0.3,0.2), distribution = "norm", silent = TRUE, st0 = 0.2) ex6_pLBA <- pLBA(x$rt, x$response, A=list(c(0.5, 0.6), c(0.6, 0.5)), b=1, t0 = list(0.5, 0.3), mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", st0 = 0.2) expect_identical(ex6_n1cdf, ex6_pLBA) drifts1 <- rnorm(n/2, 1, 0.1) drifts2 <- rnorm(n/2, 1.5, 0.2) sddrift1 <- runif(n/5, 0.1, 0.3) sddrift2 <- runif(n/5, 0.2, 0.4) ex7_n1cdf <- vector("numeric", n) ex7_n1cdf[x$response == 1] <- n1CDF(x$rt[x$response == 1], A=list( rep(c(0.5, 0.6), n/2)[x$response == 1], rep(c(0.6, 0.5), n/2)[x$response == 1] ), b = list( rep(c(1, 1.1, 1.2, 1.3, 1.4), n/5)[x$response == 1], rep(c(1, 1.1, 1.2, 1.3, 1.4) - 0.2, n/5)[x$response == 1] ), t0 = list( seq(0.1, 0.5, length.out = n)[x$response == 1], seq(0.3, 0.8, length.out = n)[x$response == 1]), mean_v = list( rep(drifts1, 2)[x$response == 1], rep(drifts2, 2)[x$response == 1] ), sd_v = list( rep(sddrift1, 5)[x$response == 1], rep(sddrift2, 5)[x$response == 1] ), distribution = "norm", silent = TRUE, st0 = 0.2) ex7_n1cdf[x$response == 2] <- n1CDF(x$rt[x$response == 2], A=list( rep(c(0.6, 0.5), n/2)[x$response == 2], rep(c(0.5, 0.6), n/2)[x$response == 2] ), b = list( rep(c(1, 1.1, 1.2, 1.3, 1.4) - 0.2, n/5)[x$response == 2], rep(c(1, 1.1, 1.2, 1.3, 1.4), n/5)[x$response == 2] ), t0 = list( seq(0.3, 0.8, length.out = n)[x$response == 2], seq(0.1, 0.5, length.out = n)[x$response == 2] ), mean_v = list( rep(drifts2, 2)[x$response == 2], rep(drifts1, 2)[x$response == 2] ), sd_v = list( rep(sddrift2, 5)[x$response == 2], rep(sddrift1, 5)[x$response == 2] ), distribution = "norm", silent = TRUE, st0 = 0.2) ex7_pLBA <- pLBA(x$rt, x$response, A = list( c(0.5, 0.6), c(0.6, 0.5) ), b = list( c(1, 1.1, 1.2, 1.3, 1.4), c(1, 1.1, 1.2, 1.3, 1.4) - 0.2 ), t0 = list( seq(0.1, 0.5, length.out = n), seq(0.3, 0.8, length.out = n)), mean_v = list( drifts1, drifts2 ), sd_v = list( sddrift1, sddrift2 ), distribution = "norm", st0 = 0.2) expect_identical(ex7_n1cdf, ex7_pLBA) }) test_that("pLBA norm is identical to n1CDF", { x <- seq(0, 3, by =0.1) o1a <- n1CDF(x, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = TRUE) o1b <- n1CDF(x, A=0.5, b=1, t0 = 0.5, mean_v=c(1, 1.2), sd_v=c(0.3,0.2), distribution = "norm", silent = TRUE) o1c <- pLBA(c(x, x), rep(1:2, each = length(x)), A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = TRUE) expect_identical(c(o1a, o1b), o1c) o2a <- n1CDF(x, A=0.5, b=1, t0 = list(0.5, 0.2), mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = TRUE) o2b <- n1CDF(x, A=0.5, b=1, t0 = list(0.2, 0.5), mean_v=c(1, 1.2), sd_v=c(0.3,0.2), distribution = "norm", silent = TRUE) o2c <- pLBA(c(x, x), rep(1:2, each = length(x)), A=0.5, b=1, t0 = list(0.5, 0.2), 
mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = TRUE) expect_identical(c(o2a, o2b), o2c) o3a <- n1CDF(x, A=list(seq(0.5, 0.6, length.out = length(x)), 0.2), b=1, t0 = list(0.5, 0.2), mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = TRUE) o3b <- n1CDF(x, A=list(0.2, seq(0.5, 0.6, length.out = length(x))), b=1, t0 = list(0.2, 0.5), mean_v=c(1, 1.2), sd_v=c(0.3,0.2), distribution = "norm", silent = TRUE) o3c <- pLBA(c(x, x), rep(1:2, each = length(x)), A=list(seq(0.5, 0.6, length.out = length(x)), 0.2), b=1, t0 = list(0.5, 0.2), mean_v=c(1.2, 1), sd_v=c(0.2,0.3), distribution = "norm", silent = TRUE) expect_identical(c(o3a, o3b), o3c) }) test_that("qLBA is equivalent to manual calculation",{ p11_norm <- structure(list(par = structure( c( 0.11829564514883, -2.74097628458775, 1.04498350371894, 0.451351158199909, 0.124346087574482, 0.260994169758609 ), .Names = c("A", "v1", "v2", "b", "t0", "sv") ))) q <- c(0.1, 0.3, 0.5, 0.7, 0.9) pred_prop_correct <- pLBA(Inf, 2, A = p11_norm$par["A"], b = p11_norm$par["A"]+p11_norm$par["b"], t0 = p11_norm$par["t0"], mean_v = c(p11_norm$par["v1"], p11_norm$par["v2"]), sd_v = c(1, p11_norm$par["sv"]), silent = TRUE) expect_equal(qLBA(pred_prop_correct*q, 2, A = p11_norm$par["A"], b = p11_norm$par["A"]+p11_norm$par["b"], t0 = p11_norm$par["t0"], mean_v = c(p11_norm$par["v1"], p11_norm$par["v2"]), sd_v = c(1, p11_norm$par["sv"]), silent = TRUE), c(0.487170939752454, 0.551026400837336, 0.608185370581083, 0.680979476696082, 0.830128589908231)) expect_equal(suppressWarnings(qLBA(pred_prop_correct*q, 1, A = p11_norm$par["A"], b = p11_norm$par["A"]+p11_norm$par["b"], t0 = p11_norm$par["t0"], mean_v = c(p11_norm$par["v1"], p11_norm$par["v2"]), sd_v = c(1, p11_norm$par["sv"]), silent = TRUE)), as.numeric(rep(NA, 5))) }) test_that("rLBA works with all distributions", { expect_is(rLBA(100, A=c(0.5, 0.6), b=1, t0 = 0.5, meanlog_v=c(1.2, 1), sdlog_v=c(0.2,0.3), distribution = "lnorm"), "data.frame") expect_is(rLBA(100, A=c(0.5, 0.6), b=1, t0 = 0.5, shape_v=c(1.2, 1), scale_v=c(0.2,0.3), distribution = "frechet"), "data.frame") expect_is(rLBA(100, A=c(0.5, 0.6), b=1, t0 = 0.5, shape_v=c(1.2, 1), scale_v=c(0.2,0.3), distribution = "gamma"), "data.frame") }) rtdists/tests/testthat/test-pdiffusion_rng.R0000644000175000017500000001764413667512040021226 0ustar nileshnileshcontext("pdiffusion functions: RNG is equivalent to pdiffusion") #x <- .Random.seed set.seed(3) tryCatch.W.E <- function(expr) { mc <- match.call() mc2 <- match.call(definition = ks.test, call = as.call(mc[[2]])) mc2[[1]] <- list W <- NULL w.handler <- function(w){ # warning handler W <<- w invokeRestart("muffleWarning") } list(value = withCallingHandlers(tryCatch(expr, error = function(e) e), warning = w.handler),warning = W, data = eval(mc2, envir = parent.frame())) } conditional_save_t <- function(t, distribution) { mc <- match.call() ex_data <- t$data #if (!is.null(t$warning)) save(ex_data, file = paste0(mc[[2]], "_", distribution, "_problem.Rdata")) #browser() #str(t) } test_that("Norm: pdiffusion corresponds to random derivates with specific values", { testthat::skip_on_cran() #testthat::skip_on_travis() normalised_pdiffusion = function(rt,...) pdiffusion(rt,...)/pdiffusion(rt=Inf,...)
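# Why the normalisation: pdiffusion() returns a *defective* CDF whose
# asymptote is the probability of the corresponding response, not 1.
# Dividing by pdiffusion(rt = Inf, ...) yields the CDF conditional on the
# response, which is what ks.test() needs when comparing the simulated
# upper-boundary RTs against the model prediction. tryCatch.W.E() above
# additionally captures any warning raised by ks.test() so that
# conditional_save_t() could write out the offending data (the save()
# call is currently commented out).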
samples <- 1e3 p_min <- 0.01 p_max <- 0.01 diffusion_pars <- structure(list(par = structure(c(1.32060063610882, 3.27271614698074, 0.338560144920614, 0.34996447540773, 0.201794924457386, 1.05516829794661), .Names = c("a", "v", "t0", "sz", "st0", "sv")))) r_diff1 <- rdiffusion(samples, a=diffusion_pars$par["a"], v=diffusion_pars$par["v"], t0=diffusion_pars$par["t0"], sz=diffusion_pars$par["sz"], st0=diffusion_pars$par["st0"], sv=diffusion_pars$par["sv"]) t1 <- tryCatch.W.E(ks.test(r_diff1$rt[r_diff1$response=="upper"], normalised_pdiffusion, a=1.1, v=diffusion_pars$par["v"], t0=diffusion_pars$par["t0"], sz=diffusion_pars$par["sz"], st0=diffusion_pars$par["st0"], sv=diffusion_pars$par["sv"])) expect_lt(t1$value$p.value, p_min) conditional_save_t(t1, "norm") t2 <- tryCatch.W.E(ks.test(r_diff1$rt[r_diff1$response=="upper"], normalised_pdiffusion, a=diffusion_pars$par["a"], v=diffusion_pars$par["v"]-0.5, t0=diffusion_pars$par["t0"], sz=diffusion_pars$par["sz"], st0=diffusion_pars$par["st0"], sv=diffusion_pars$par["sv"])) expect_lt(t2$value$p.value, p_min) conditional_save_t(t2, "norm") t3 <- tryCatch.W.E(ks.test(r_diff1$rt[r_diff1$response=="upper"], normalised_pdiffusion, a=diffusion_pars$par["a"], v=diffusion_pars$par["v"]-0.5, t0=diffusion_pars$par["t0"], sz=diffusion_pars$par["sz"], st0=diffusion_pars$par["st0"], sv=diffusion_pars$par["sv"])) expect_lt(t3$value$p.value, p_min) conditional_save_t(t3, "norm") t4 <- tryCatch.W.E(ks.test(r_diff1$rt[r_diff1$response=="upper"], normalised_pdiffusion, a=diffusion_pars$par["a"], v=diffusion_pars$par["v"], t0=diffusion_pars$par["t0"]-0.1, sz=diffusion_pars$par["sz"], st0=diffusion_pars$par["st0"], sv=diffusion_pars$par["sv"])) expect_lt(t4$value$p.value, p_min) conditional_save_t(t4, "norm") t5 <- tryCatch.W.E(ks.test(r_diff1$rt[r_diff1$response=="upper"], normalised_pdiffusion, a=diffusion_pars$par["a"], v=diffusion_pars$par["v"], t0=diffusion_pars$par["t0"], sz=diffusion_pars$par["sz"], st0=diffusion_pars$par["st0"], sv=diffusion_pars$par["sv"])) conditional_save_t(t5, "norm") expect_gt(t5$value$p.value, p_max) #if (any(sapply(list(t1, t2, t3, t4, t5), function(x) !is.null(x$warning)))) browser() }) test_that("Norm: pdiffusion corresponds to random derivates with random values", { testthat::skip_on_cran() #testthat::skip_on_travis() normalised_pdiffusion = function(rt,...) pdiffusion(rt,...)/pdiffusion(rt=Inf,...) 
samples <- 5e3 p_min <- 0.01 p_max <- 0.01 diffusion_pars <- list( a = runif(1, 0.5, 1.5), v = runif(1, 2, 3.5), t0 = runif(1, 0.2, 0.4), sz = runif(1, 0.1, 0.2), st0 = runif(1, 0.1, 0.2), sv = runif(1, 0.5, 1.5) ) r_diff1 <- rdiffusion(samples, a=diffusion_pars[["a"]], v=diffusion_pars[["v"]], t0=diffusion_pars[["t0"]], sz=diffusion_pars[["sz"]], st0=diffusion_pars[["st0"]], sv=diffusion_pars[["sv"]]) t1 <- tryCatch.W.E(ks.test(r_diff1$rt[r_diff1$response=="upper"], normalised_pdiffusion, a=diffusion_pars[["a"]]-0.3, v=diffusion_pars[["v"]], t0=diffusion_pars[["t0"]], sz=diffusion_pars[["sz"]], st0=diffusion_pars[["st0"]], sv=diffusion_pars[["sv"]])) expect_lt(t1$value$p.value, p_min) conditional_save_t(t1, "norm") t2 <- tryCatch.W.E(ks.test(r_diff1$rt[r_diff1$response=="upper"], normalised_pdiffusion, a=diffusion_pars[["a"]], v=pmin(0, diffusion_pars[["v"]]-1), t0=diffusion_pars[["t0"]], sz=diffusion_pars[["sz"]], st0=diffusion_pars[["st0"]], sv=diffusion_pars[["sv"]])) expect_lt(t2$value$p.value, p_min) conditional_save_t(t2, "norm") t3 <- tryCatch.W.E(ks.test(r_diff1$rt[r_diff1$response=="upper"], normalised_pdiffusion, a=diffusion_pars[["a"]], v=diffusion_pars[["v"]], t0=diffusion_pars[["t0"]], sz=diffusion_pars[["sz"]]+0.25, st0=0, sv=diffusion_pars[["sv"]]-0.5)) expect_lt(t3$value$p.value, p_min) conditional_save_t(t3, "norm") t4 <- tryCatch.W.E(ks.test(r_diff1$rt[r_diff1$response=="upper"], normalised_pdiffusion, a=diffusion_pars[["a"]], v=diffusion_pars[["v"]], t0=diffusion_pars[["t0"]]-0.08, sz=diffusion_pars[["sz"]], st0=diffusion_pars[["st0"]], sv=diffusion_pars[["sv"]])) expect_lt(t4$value$p.value, p_max) conditional_save_t(t4, "norm") t5 <- tryCatch.W.E(ks.test(r_diff1$rt[r_diff1$response=="upper"], normalised_pdiffusion, a=diffusion_pars[["a"]], v=diffusion_pars[["v"]], t0=diffusion_pars[["t0"]], sz=diffusion_pars[["sz"]], st0=diffusion_pars[["st0"]], sv=diffusion_pars[["sv"]])) conditional_save_t(t5, "norm") expect_gt(t5$value$p.value, p_max) #if (any(sapply(list(t1, t2, t3, t4, t5), function(x) !is.null(x$warning)))) browser() }) test_that("Norm: pdiffusion corresponds to random derivates with random values 2", { testthat::skip_on_cran() #testthat::skip_on_travis() normalised_pdiffusion = function(rt,...) pdiffusion(rt,...)/pdiffusion(rt=Inf,...) 
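# This variant draws the variability parameters from ranges that start at
# 0 (sz in [0, 0.4], st0 in [0, 0.2], sv in [0, 1]), unlike the previous
# test, so parameter sets with (near-)zero inter-trial variability are
# exercised as well.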
samples <- 5e3 p_min <- 0.01 p_max <- 0.01 diffusion_pars <- list( a = runif(1, 0.5, 1.5), v = runif(1, 2, 3.5), t0 = runif(1, 0.2, 0.4), sz = runif(1, 0, 0.4), st0 = runif(1, 0, 0.2), sv = runif(1, 0, 1) ) r_diff1 <- rdiffusion(samples, a=diffusion_pars[["a"]], v=diffusion_pars[["v"]], t0=diffusion_pars[["t0"]], sz=diffusion_pars[["sz"]], st0=diffusion_pars[["st0"]], sv=diffusion_pars[["sv"]]) t1 <- tryCatch.W.E(ks.test(r_diff1$rt[r_diff1$response=="upper"], normalised_pdiffusion, a=diffusion_pars[["a"]]-0.3, v=diffusion_pars[["v"]], t0=diffusion_pars[["t0"]], sz=diffusion_pars[["sz"]], st0=diffusion_pars[["st0"]], sv=diffusion_pars[["sv"]])) expect_lt(t1$value$p.value, p_min) conditional_save_t(t1, "norm") t2 <- tryCatch.W.E(ks.test(r_diff1$rt[r_diff1$response=="upper"], normalised_pdiffusion, a=diffusion_pars[["a"]], v=diffusion_pars[["v"]]+4, t0=diffusion_pars[["t0"]], sz=diffusion_pars[["sz"]], st0=diffusion_pars[["st0"]], sv=diffusion_pars[["sv"]])) expect_lt(t2$value$p.value, p_min) conditional_save_t(t2, "norm") t3 <- tryCatch.W.E(ks.test(r_diff1$rt[r_diff1$response=="upper"], normalised_pdiffusion, a=diffusion_pars[["a"]], v=diffusion_pars[["v"]], t0=diffusion_pars[["t0"]], sz=diffusion_pars[["sz"]]+0.25, st0=0, sv=0)) expect_lt(t3$value$p.value, p_min) conditional_save_t(t3, "norm") t4 <- tryCatch.W.E(ks.test(r_diff1$rt[r_diff1$response=="upper"], normalised_pdiffusion, a=diffusion_pars[["a"]], v=diffusion_pars[["v"]], t0=diffusion_pars[["t0"]]-0.08, sz=diffusion_pars[["sz"]], st0=diffusion_pars[["st0"]], sv=diffusion_pars[["sv"]])) expect_lt(t4$value$p.value, p_max) conditional_save_t(t4, "norm") t5 <- tryCatch.W.E(ks.test(r_diff1$rt[r_diff1$response=="upper"], normalised_pdiffusion, a=diffusion_pars[["a"]], v=diffusion_pars[["v"]], t0=diffusion_pars[["t0"]], sz=diffusion_pars[["sz"]], st0=diffusion_pars[["st0"]], sv=diffusion_pars[["sv"]])) conditional_save_t(t5, "norm") expect_gt(t5$value$p.value, p_max) #if (any(sapply(list(t1, t2, t3, t4, t5), function(x) !is.null(x$warning)))) browser() }) rtdists/tests/testthat/test-lba_input.R0000644000175000017500000002513313667512040020167 0ustar nileshnilesh context("LBA vectorization") test_that("_norm vectorizes", { n <- 10 x <- rlba_norm(n, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)) as <- seq(0.2, 0.5, length.out = n) bs <- seq(0.5, 1.5, length.out = n) t0s <- seq(0, 0.5, length.out = n/2) vs <- seq(0.8, 1.2, length.out = 10) o1 <- plba_norm(x[,"response"], A=as, b=bs, t0 = t0s, mean_v=vs, sd_v=0.2) o2 <- mapply(plba_norm, rt = x[,"response"], A = as, b = bs, t0 = t0s, mean_v=vs, MoreArgs = list(sd_v=0.2)) expect_identical(o1, o2) p1 <- dlba_norm(x[,"response"], A=as, b=bs, t0 = t0s, mean_v=1.2, sd_v=vs) p2 <- mapply(dlba_norm, rt = x[,"response"], A = as, b = bs, t0 = t0s, sd_v=vs, MoreArgs = list(mean_v=1.2)) expect_identical(p1, p2) }) test_that("_norm with small A", { n <- 10 x <- rlba_norm(n, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)) as <- seq(0.2, 0.5, length.out = n) bs <- seq(0.5, 1.5, length.out = n) t0s <- seq(0, 0.5, length.out = n/2) vs <- seq(0.8, 1.2, length.out = 10) o1 <- plba_norm(x[,"response"], A=c(as[1:4], 0.1e-10, 0.5e-10, as[7:10]), b=bs, t0 = t0s, mean_v=vs, sd_v=0.2) o2_a <- plba_norm(x[,"response"][1:4], A=as[1:4], b=bs, t0 = t0s, mean_v=vs, sd_v=0.2) o2_b <- plba_norm(x[,"response"][5:6], A=c(0.1e-10, 0.5e-10), b=bs[5:6], t0 = c(t0s[5], t0s[1]), mean_v=vs[5:6], sd_v=0.2) o2_c <- plba_norm(x[,"response"][7:10], A=as[7:10], b=bs[7:10], t0 = t0s[2:5], mean_v=vs[7:10], sd_v=0.2) 
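# The expectation below compares a single vectorized call (o1), whose A
# vector mixes ordinary values with near-zero ones (0.1e-10, 0.5e-10),
# against the same computation split into three chunks (o2_a, o2_b, o2_c).
# Results for tiny A (where the uniform start-point distribution collapses
# to a point) must not depend on which other trials happen to be part of
# the same call.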
expect_identical(o1, c(o2_a, o2_b, o2_c)) }) test_that("_gamma vectorizes", { n <- 10 x <- rlba_norm(n, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)) as <- seq(0.2, 0.5, length.out = n) bs <- seq(0.5, 1.5, length.out = n) t0s <- seq(0, 0.5, length.out = n/2) vs <- seq(0.8, 1.2, length.out = 10) o1 <- dlba_gamma(x[,"response"], A=as, b=bs, t0 = t0s, shape_v=vs, scale_v=0.2) o2 <- mapply(dlba_gamma, rt = x[,"response"], A = as, b = bs, t0 = t0s, shape_v=vs, MoreArgs = list(scale_v=0.2)) expect_identical(o1, o2) p1 <- plba_gamma(x[,"response"], A=as, b=bs, t0 = t0s, shape_v=1.2, scale_v=vs) p2 <- mapply(plba_gamma, rt = x[,"response"], A = as, b = bs, t0 = t0s, scale_v=vs, MoreArgs = list(shape_v=1.2)) expect_identical(p1, p2) }) test_that("_frechet vectorizes", { n <- 10 x <- rlba_norm(n, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)) as <- seq(0.2, 0.5, length.out = n) bs <- seq(0.5, 1.5, length.out = n) t0s <- seq(0, 0.5, length.out = n/2) vs <- seq(0.8, 1.2, length.out = 10) o1 <- dlba_frechet(x[,"response"], A=as, b=bs, t0 = t0s, shape_v=vs, scale_v=0.2) o2 <- mapply(dlba_frechet, rt = x[,"response"], A = as, b = bs, t0 = t0s, shape_v=vs, MoreArgs = list(scale_v=0.2)) expect_identical(o1, o2) p1 <- plba_frechet(x[,"response"], A=as, b=bs, t0 = t0s, shape_v=1.2, scale_v=vs) p2 <- mapply(plba_frechet, rt = x[,"response"], A = as, b = bs, t0 = t0s, scale_v=vs, MoreArgs = list(shape_v=1.2)) expect_identical(p1, p2) }) test_that("_lnorm vectorizes", { n <- 10 x <- rlba_norm(n, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)) as <- seq(0.2, 0.5, length.out = n) bs <- seq(0.5, 1.5, length.out = n) t0s <- seq(0, 0.5, length.out = n/2) vs <- seq(0.8, 1.2, length.out = 10) o1 <- dlba_lnorm(x[,"response"], A=as, b=bs, t0 = t0s, meanlog_v=vs, sdlog_v=0.2) o2 <- mapply(dlba_lnorm, rt = x[,"response"], A = as, b = bs, t0 = t0s, meanlog_v=vs, MoreArgs = list(sdlog_v=0.2)) expect_identical(o1, o2) p1 <- plba_lnorm(x[,"response"], A=as, b=bs, t0 = t0s, meanlog_v=1.2, sdlog_v=vs) p2 <- mapply(plba_lnorm, rt = x[,"response"], A = as, b = bs, t0 = t0s, sdlog_v=vs, MoreArgs = list(meanlog_v=1.2)) expect_identical(p1, p2) }) context("LBA input") test_that("_norm input works as they should", { expect_error(rlba_norm("10", A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3))) expect_is(rlba_norm(10, A=c(0.5, 0.6), b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)), "matrix") x <- rlba_norm(10, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)) expect_error(plba_norm(x[,"response"], A=as.character(seq(0.2, 0.5, length.out = 10)), b=0.5, t0 = 0.5, mean_v=1.2, sd_v=0.2)) expect_is(plba_norm(x[,"response"], A=seq(0.2, 0.5, length.out = 10), b=0.5, t0 = seq(0.5, 0.7, 0.1), mean_v=1.2, sd_v=0.2), "numeric") }) test_that("_gamma input works as they should", { expect_error(rlba_gamma("10", A=0.5, b=1, t0 = 0.5, shape_v =c(1.2, 1), scale_v =c(0.2,0.3)), regexp = "numeric and finite") expect_is(rlba_gamma(10, A=c(0.5, 0.6), b=1, t0 = 0.5, shape_v=c(1.2, 1), scale_v=c(0.2,0.3)), "matrix") x <- rlba_gamma(10, A=0.5, b=1, t0 = 0.5, shape_v=c(1.2, 1), scale_v=c(0.2,0.3)) expect_error(plba_gamma(x[,"response"], A=as.character(seq(0.2, 0.5, length.out = 10)), b=0.5, t0 = 0.5, shape_v=1.2, scale_v=0.2), "numeric") expect_is(plba_gamma(x[,"response"], A=seq(0.2, 0.5, length.out = 10), b=0.5, t0 = seq(0.5, 0.7, 0.1), shape_v=1.2, scale_v=0.2), "numeric") }) test_that("_frechet input works as they should", { expect_error(rlba_frechet("10", A=0.5, b=1, t0 = 0.5, shape_v =c(1.2, 1), scale_v 
=c(0.2,0.3)), regexp = "numeric and finite") expect_is(rlba_frechet(10, A=c(0.5, 0.6), b=1, t0 = 0.5, shape_v=c(1.2, 1), scale_v=c(0.2,0.3)), "matrix") x <- rlba_frechet(10, A=0.5, b=1, t0 = 0.5, shape_v=c(1.2, 1), scale_v=c(0.2,0.3)) expect_error(plba_frechet(x[,"response"], A=as.character(seq(0.2, 0.5, length.out = 10)), b=0.5, t0 = 0.5, shape_v=1.2, scale_v=0.2), "numeric") expect_is(plba_frechet(x[,"response"], A=seq(0.2, 0.5, length.out = 10), b=0.5, t0 = seq(0.5, 0.7, 0.1), shape_v=1.2, scale_v=0.2), "numeric") }) test_that("_lnorm input works as they should", { expect_error(rlba_lnorm("10", A=0.5, b=1, t0 = 0.5, meanlog_v =c(1.2, 1), sdlog_v =c(0.2,0.3)), regexp = "numeric and finite") expect_is(rlba_lnorm(10, A=c(0.5, 0.6), b=1, t0 = 0.5, meanlog_v=c(1.2, 1), sdlog_v=c(0.2,0.3)), "matrix") x <- rlba_lnorm(10, A=0.5, b=1, t0 = 0.5, meanlog_v=c(1.2, 1), sdlog_v=c(0.2,0.3)) expect_error(plba_lnorm(x[,"response"], A=as.character(seq(0.2, 0.5, length.out = 10)), b=0.5, t0 = 0.5, meanlog_v=1.2, sdlog_v=0.2), "numeric") expect_is(plba_lnorm(x[,"response"], A=seq(0.2, 0.5, length.out = 10), b=0.5, t0 = seq(0.5, 0.7, 0.1), meanlog_v=1.2, sdlog_v=0.2), "numeric") }) test_that("LBA functions are identical with all input options", { rt1 <- rLBA(500, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)) # get density for random RTs: ref <- sum(log(dLBA(rt1$rt, rt1$response, A=0.5, b = 1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)))) # response is factor #expect_identical(sum(log(dLBA(rt1$rt, as.numeric(rt1$response), A=0.5, b = 1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)))), ref) expect_identical(sum(log(dLBA(rt1, A=0.5, b = 1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)))), ref) rt2 <- rt1[order(rt1$rt),] ref2 <- pLBA(rt2$rt, rt2$response, A=0.5, b = 1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)) #expect_identical(pLBA(rt2$rt, as.numeric(rt2$response), A=0.5, b = 1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)), ref2) expect_identical(pLBA(rt2, A=0.5, b = 1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)), ref2) rt3 <- data.frame(p = rep(c(0.05, 0.1), 2), response = rep(1:2, each = 2)) ref3 <- qLBA(rt3$p, rt3$response, A=0.5, b = 1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)) #expect_identical(qLBA(rt3$p, as.character(rt3$response), A=0.5, b = 1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)), ref3) expect_identical(qLBA(rt3, A=0.5, b = 1, t0 = 0.5, mean_v=c(1.2, 1), sd_v=c(0.2,0.3)), ref3) }) test_that("scale_p works as expected", { (max_p <- pLBA(Inf, response = 1, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2))) # [1] 0.6604696 # to get predicted quantiles, scale required quantiles by maximally predicted response rate: qs <- c(.1, .3, .5, .7, .9) expect_identical(qLBA(qs*max_p, response = 1, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2)), qLBA(qs, response = 1, A=0.5, b=1, t0 = 0.5, mean_v=c(2.4, 1.6), sd_v=c(1,1.2), scale_p=TRUE)) }) test_that("pLBA and dLBA work with vectorized accumulators", { v1 <- seq(2, 4, length.out = 6) v2 <- seq(0, 2, length.out = 6) res1p <- vector("numeric", 6) res1d <- vector("numeric", 6) for (i in 1:6) { res1p[i] <- pLBA(1, response = 1, A=0.5, b=1, t0 = 0.5, mean_v=c(v1[i], v2[i]), sd_v=1, silent=TRUE) res1d[i] <- dLBA(1, response = 1, A=0.5, b=1, t0 = 0.5, mean_v=c(v1[i], v2[i]), sd_v=1, silent=TRUE) } res2p <- pLBA(rep(1, 6), response = 1, A=0.5, b=1, t0 = 0.5, mean_v=list(v1, v2), sd_v=1, silent=TRUE) res2d <- dLBA(rep(1, 6), response = 1, A=0.5, b=1, t0 = 0.5, mean_v=list(v1, v2), sd_v=1, silent=TRUE) expect_identical(res1p, res2p) 
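# In the calls above, a list passed to mean_v supplies one entry per
# accumulator, and a vector inside the list is matched to the rt vector
# trialwise. The single vectorized call (res2p/res2d) therefore has to
# reproduce the elementwise loop (res1p/res1d) exactly.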
expect_identical(res1d, res2d) }) rtdists/tests/testthat/test-lba-bugs.R0000644000175000017500000005540513667512040017713 0ustar nileshnilesh context("dLBA: Known Bugs") test_that("dLBA: List and trialwise input for A and b", { samples <- 2 A <- runif(4, 0.3, 0.9) b <- A+runif(4, 0, 0.5) t0 <- runif(2, 0.1, 0.7) v1 <- runif(4, 0.5, 1.5) v2 <- runif(4, 0.1, 0.5) st0 <- runif(1, 0.1, 0.5) r_lba <- rLBA(samples, A=A[1], b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2]) p1 <- dLBA(rt = r_lba$rt, response = c(1, 2), A=list(A[1:2],A[3:4]), b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) p2 <- dLBA(rt = r_lba$rt[1], response = 1, A=list(A[1],A[3]), b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) p3 <- dLBA(rt = r_lba$rt[2], response = 2, A=list(A[2],A[4]), b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) expect_identical(p1, c(p2, p3)) p2n1 <- n1PDF(rt = r_lba$rt[1], A=list(A[1],A[3]), b=b[1], t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) expect_identical(p2, p2n1) p3n1 <- n1PDF(rt = r_lba$rt[2], A=list(A[4],A[2]), b=b[1], t0 = t0[1], mean_v=v1[2:1], sd_v=v2[2:1], silent = TRUE) expect_identical(p3, p3n1) pb1 <- dLBA(r_lba$rt, c(1, 2), A=A[1:2], b=list(b[1:2],b[3:4]), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) pb2 <- dLBA(r_lba$rt[1], 1, A=A[1], b=list(b[1],b[3]), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) pb3 <- dLBA(r_lba$rt[2], 2, A=A[2], b = list(b[2],b[4]), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) expect_identical(pb1, c(pb2, pb3)) pb2n1 <- n1PDF(rt = r_lba$rt[1], A=A[1], b=list(b[1],b[3]), t0 = t0[1], mean_v=v1[1:2], sd_v=v2[1:2], silent = TRUE) expect_identical(pb2, pb2n1) pb3n1 <- n1PDF(rt = r_lba$rt[2], A=A[2], b=list(b[4],b[2]), t0 = t0[1], mean_v=v1[2:1], sd_v=v2[2:1], silent = TRUE) expect_identical(pb3, pb3n1) }) context("n1PDF: Known Bugs") test_that("n1PDF and n1CDF pass arguments correctly", { skip_if_not_installed("glba") data(bh08, package = "glba") bh08 <- bh08[bh08$rt>.180&bh08$rt<2,] ny <- dim(bh08)[1] set.seed(3) sddr <- rep(0.2,ny) sp <- rep(rnorm(1,.3,.02),ny) bound <- rep(rnorm(1,.1,.02),ny) nond <- rep(rnorm(1,.2,.02),ny) drift1 <- rep(rnorm(1,.75,.05),ny) drift2 <- 1-drift1 parsMat <- matrix(c(sddr,sp,bound,nond,drift1,drift2),ncol=6,nrow=ny) o1 <- n1PDF(bh08$rt,A=sp[1],b=bound[1]+sp[1], t0=nond[1], mean_v=c(drift1[1],drift2[1]), sd_v=sddr[1],dist="norm") o2 <- n1PDF(bh08$rt,A=sp[1],b=bound[1]+sp[1], t0=nond[1], mean_v=c(drift1[1],drift2[1]), sd_v=sddr[1],dist="norm", args.dist = list(posdrift = TRUE)) expect_identical(o1, o2) o3 <- n1PDF(bh08$rt,A=sp[1],b=bound[1]+sp[1], t0=nond[1], mean_v=c(drift1[1],drift2[1]), sd_v=sddr[1],dist="norm", args.dist = list(posdrift = FALSE)) expect_false(all(o1 == o3)) o4 <- n1PDF(bh08$rt,A=sp[1],b=bound[1]+sp[1], t0=nond[1], mean_v=c(drift1[1],drift2[1]), sd_v=sddr[1],dist="norm", args.dist = list(posdrift = FALSE, robust = TRUE)) expect_false(all(o1 == o4)) c1 <- n1CDF(bh08$rt,A=sp[1],b=bound[1]+sp[1], t0=nond[1], mean_v=c(drift1[1],drift2[1]), sd_v=sddr[1],dist="norm") c2 <- n1CDF(bh08$rt,A=sp[1],b=bound[1]+sp[1], t0=nond[1], mean_v=c(drift1[1],drift2[1]), sd_v=sddr[1],dist="norm", args.dist = list(posdrift = TRUE)) expect_identical(c1, c2) c3 <- n1CDF(bh08$rt,A=sp[1],b=bound[1]+sp[1], t0=nond[1], mean_v=c(drift1[1],drift2[1]), sd_v=sddr[1],dist="norm", args.dist = list(posdrift = FALSE)) expect_false(all(c1 == c3)) c4 <- n1CDF(bh08$rt,A=sp[1],b=bound[1]+sp[1], t0=nond[1], mean_v=c(drift1[1],drift2[1]),
sd_v=sddr[1],dist="norm", args.dist = list(posdrift = FALSE, robust = TRUE)) expect_false(all(c1 == c4)) set.seed(NULL) }) test_that("named parameter vectors do not cause havoc", { xx <- rLBA(10, A=0.5, b=1, t0 = 0.5, mean_v=1.2, sd_v=0.2) expect_is(n1PDF(xx$rt, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0), sd_v=0.2, st0 = c(xx = 0.1), silent =TRUE), "numeric") expect_is(n1PDF(xx$rt, A=0.5, b=1, t0 = c(aa=0.5), mean_v=c(1.2, 1.0), sd_v=c(xx=0.2), silent =TRUE), "numeric") expect_is(n1PDF(xx$rt, A=c(xx=0.5), b=c(A = 1), t0 = 0.5, mean_v=c(1.2, 1.0), sd_v=c(xx=0.2), silent =TRUE), "numeric") expect_is(n1PDF(xx$rt, A=0.5, b=1, t0 = c(aa=0.5), mean_v=c(1.2, 1.0), sd_v=0.2, st0 = 0.1, silent =TRUE), "numeric") expect_is(n1CDF(xx$rt, A=0.5, b=1, t0 = 0.5, mean_v=c(1.2, 1.0), sd_v=0.2, st0 = c(xx = 0.1), silent =TRUE), "numeric") expect_is(n1CDF(xx$rt, A=0.5, b=1, t0 = c(aa=0.5), mean_v=c(1.2, 1.0), sd_v=c(xx=0.2), silent =TRUE), "numeric") expect_is(n1CDF(xx$rt, A=c(xx=0.5), b=c(A = 1), t0 = 0.5, mean_v=c(1.2, 1.0), sd_v=c(xx=0.2), silent =TRUE), "numeric") expect_is(n1CDF(xx$rt, A=0.5, b=1, t0 = c(aa=0.5), mean_v=c(1.2, 1.0), sd_v=0.2, st0 = 0.1, silent =TRUE), "numeric") }) test_that("PDFs and CDFs do not return NaN for A = 0", { expect_true(all(is.finite(dlba_norm(rt = c(0, 0.0000001, 0.5), A=0, b=1, t0 = 0, mean_v=1.2, sd_v=0.2)))) expect_true(all(is.finite(dlba_gamma(rt = c(0, 0.0000001, 0.5), A=0, b=1, t0 = 0, shape_v=1.2, rate_v=0.2)))) expect_true(all(is.finite(dlba_frechet(rt = c(0, 0.0000001, 0.5), A=0, b=1, t0 = 0, shape_v=1.2, scale_v=0.2)))) expect_true(all(is.finite(dlba_lnorm(rt = c(0, 0.0000001, 0.5), A=0, b=1, t0 = 0, meanlog_v = 1.2, sdlog_v = 0.2)))) expect_true(all(is.finite(plba_norm(rt = c(0, 0.0000001, 0.5), A=0, b=1, t0 = 0, mean_v=1.2, sd_v=0.2)))) expect_true(all(is.finite(plba_gamma(rt = c(0, 0.0000001, 0.5), A=0, b=1, t0 = 0, shape_v=1.2, rate_v=0.2)))) expect_true(all(is.finite(plba_frechet(rt = c(0, 0.0000001, 0.5), A=0, b=1, t0 = 0, shape_v=1.2, scale_v=0.2)))) expect_true(all(is.finite(plba_lnorm(rt = c(0, 0.0000001, 0.5), A=0, b=1, t0 = 0, meanlog_v = 1.2, sdlog_v = 0.2)))) }) test_that("LBA-norm: PDF and CDF work with various parameter values", { testthat::skip_on_cran() rts <- c(0, 0.0000001, 0.5, 1.5, 2) seq_parameters <- seq(0, 1, length.out = 5) for (A in seq_parameters) { for (b in seq_parameters) { for (t0 in seq_parameters) { for (d1 in seq_parameters) { for (d2 in c(0.0001, seq_parameters[-1])) { expect_true(all(is.finite( dlba_norm( rt = rts, A = A, b = (A + b), t0 = t0, mean_v = d1, sd_v = d2 ) )), info = paste0("A=", A, ", b=", b, ", t0=", t0, ", mean_v=", d1, ", sd_v=", d2)) expect_true(all(is.finite( dlba_norm( rt = rts, A = A, b = (A + b), t0 = t0, mean_v = d1, sd_v = d2, posdrift = FALSE ) )), info = paste0("A=", A, ", b=", b, ", t0=", t0, ", mean_v=", d1, ", sd_v=", d2)) expect_true(all(is.finite( dlba_norm( rt = rts, A = A, b = (A + b), t0 = t0, mean_v = d1, sd_v = d2, robust = TRUE ) )), info = paste0("A=", A, ", b=", b, ", t0=", t0, ", mean_v=", d1, ", sd_v=", d2)) expect_true(all(is.finite( dlba_norm( rt = rts, A = A, b = (A + b), t0 = t0, mean_v = d1, sd_v = d2, robust = TRUE, posdrift = FALSE ) )), info = paste0("A=", A, ", b=", b, ", t0=", t0, ", mean_v=", d1, ", sd_v=", d2)) expect_true(all(is.finite( plba_norm( rt = rts, A = A, b = (A + b), t0 = t0, mean_v = d1, sd_v = d2 ) )), info = paste0("A=", A, ", b=", b, ", t0=", t0, ", mean_v=", d1, ", sd_v=", d2)) expect_true(all(is.finite( plba_norm( rt = rts, A = A, b = (A + b), t0 = t0, mean_v = 
d1, sd_v = d2, posdrift = FALSE ) )), info = paste0("A=", A, ", b=", b, ", t0=", t0, ", mean_v=", d1, ", sd_v=", d2)) expect_true(all(is.finite( plba_norm( rt = rts, A = A, b = (A + b), t0 = t0, mean_v = d1, sd_v = d2, robust = TRUE ) )), info = paste0("A=", A, ", b=", b, ", t0=", t0, ", mean_v=", d1, ", sd_v=", d2)) expect_true(all(is.finite( plba_norm( rt = rts, A = A, b = (A + b), t0 = t0, mean_v = d1, sd_v = d2, robust = TRUE, posdrift = FALSE ) )), info = paste0("A=", A, ", b=", b, ", t0=", t0, ", mean_v=", d1, ", sd_v=", d2)) } } } } } }) test_that("LBA-gamma: PDF and CDF work with various parameter values", { testthat::skip_on_cran() rts <- c(0, 0.0000001, 0.5, 1.5, 2) seq_parameters <- seq(0, 1, length.out = 5) for (A in seq_parameters) { for (b in seq_parameters) { for (t0 in seq_parameters) { for (d1 in seq_parameters) { for (d2 in c(0.0001, seq_parameters[-1])) { suppressWarnings(expect_true( all(is.finite( dlba_gamma( rt = rts, A = A, b = (A + b), t0 = t0, shape_v = d1, scale_v = d2 ) )), info = paste0("A=", A, ", b=", b, ", t0=", t0, ", shape_v=", d1, ", scale_v=", d2) )) suppressWarnings(expect_true( all(is.finite( plba_gamma( rt = rts, A = A, b = (A + b), t0 = t0, shape_v = d1, scale_v = d2 ) )), info = paste0("A=", A, ", b=", b, ", t0=", t0, ", shape_v=", d1, ", scale_v=", d2) )) } } } } } }) test_that("LBA-frechet: PDF and CDF work with various parameter values", { testthat::skip_on_cran() rts <- c(0, 0.0000001, 0.5, 1.5, 2) seq_parameters <- seq(0, 1, length.out = 5) for (A in seq_parameters) { for (b in seq_parameters) { for (t0 in seq_parameters) { for (d1 in c(0.0001, seq_parameters[-1])) { for (d2 in c(0.0001, seq_parameters[-1])) { expect_true(all(is.finite( dlba_frechet( rt = rts, A = A, b = (A + b), t0 = t0, shape_v = d1, scale_v = d2 ) )), info = paste0("A=", A, ", b=", b, ", t0=", t0, ", shape_v=", d1, ", scale_v=", d2)) expect_true(all(is.finite( plba_frechet( rt = rts, A = A, b = (A + b), t0 = t0, shape_v = d1, scale_v = d2 ) )), info = paste0("A=", A, ", b=", b, ", t0=", t0, ", shape_v=", d1, ", scale_v=", d2)) } } } } } }) test_that("LBA-lnorm: PDF and CDF work with various parameter values", { testthat::skip_on_cran() rts <- c(0, 0.0000001, 0.5, 1.5, 2) seq_parameters <- seq(0, 1, length.out = 5) for (A in seq_parameters) { for (b in seq_parameters) { for (t0 in seq_parameters) { for (d1 in c(0.0001, seq_parameters[-1])) { for (d2 in c(0.0001, seq_parameters[-1])) { expect_true( all(is.finite( dlba_lnorm( rt = rts, A = A, b = (A + b), t0 = t0, meanlog_v = d1, sdlog_v = d2 ) )), info = paste0("A=", A, ", b=", b, ", t0=", t0, ", meanlog_v=", d1, ", sdlog_v=", d2) ) expect_true( all(is.finite( dlba_lnorm( rt = rts, A = A, b = (A + b), t0 = t0, meanlog_v = d1, sdlog_v = d2, robust = TRUE ) )), info = paste0("A=", A, ", b=", b, ", t0=", t0, ", meanlog_v=", d1, ", sdlog_v=", d2) ) expect_true( all(is.finite( plba_lnorm( rt = rts, A = A, b = (A + b), t0 = t0, meanlog_v = d1, sdlog_v = d2 ) )), info = paste0("A=", A, ", b=", b, ", t0=", t0, ", meanlog_v=", d1, ", sdlog_v=", d2) ) expect_true( all(is.finite( plba_lnorm( rt = rts, A = A, b = (A + b), t0 = t0, meanlog_v = d1, sdlog_v = d2, robust = TRUE ) )), info = paste0("A=", A, ", b=", b, ", t0=", t0, ", meanlog_v=", d1, ", sdlog_v=", d2) ) } } } } } }) context("glba and rtdists are in agreement") test_that("glba and rtdists agree", { obj <- function(rt,pars,loglink,weights) { # vectorized loglike function # rt: a vector with response times # pars: matrix with 4+nrcat parameters on each row to model each rt # the
drift pars are ordered: the drift for the given response first, the others # after that (order in the remaining drifts does not make a difference) for(i in 1:4) if(loglink[i]) pars[,i]=exp(pars[,i]) ndrift <- dim(pars)[2]-4 if(ndrift<2) stop("nr of drift pars should at least be two") ll <- numeric(length(rt)) ll <- glba:::n1PDF(t=rt-pars[,4], x0max=pars[,2], chi=pars[,2]+pars[,3], sdI=pars[,1], # sdI=0.15, # Scott: I fit chi-x0max. drift=pars[,5:(4+ndrift)]) # return(logl=-sum(log(pmax(weights*ll,1e-10)))) # this has weird effects due to the contaminant model ... return(logl=log(weights*ll)) } skip_if_not_installed("glba") data(bh08, package = "glba") # remove extreme RTs bh08 <- bh08[bh08$rt>.180&bh08$rt<2,] ny <- dim(bh08)[1] set.seed(3) sddr <- rep(0.2,ny) sp <- rep(rnorm(1,.3,.02),ny) bound <- rep(rnorm(1,.1,.02),ny) nond <- rep(rnorm(1,.2,.02),ny) drift1 <- rep(rnorm(1,.75,.05),ny) drift2 <- 1-drift1 parsMat <- matrix(c(sddr,sp,bound,nond,drift1,drift2),ncol=6,nrow=ny) head(parsMat) ll1 <- obj(bh08$rt,parsMat,loglink = c(FALSE,FALSE,FALSE,FALSE),rep(1,ny)) ll2 <- log(n1PDF(bh08$rt,A=sp[1],b=bound[1]+sp[1], t0=nond[1], mean_v=c(drift1[1],drift2[1]), sd_v=sddr[1], dist="norm", args.dist = list(posdrift = FALSE))) expect_identical(ll1, ll2) }) test_that("n1PDF works with named lists", { rt1 <- rLBA(500, A=0.5, b=1, t0 = 0.5, mean_v=list(a=2.4, b=1.6), sd_v=list(v=1,A=1.2)) expect_is(sum(log(n1PDF(rt1$rt, A=list(r1=0.5,r2=.5), b=1, t0 = 0.5, mean_v=list(b=seq(2.0, 2.4, length.out = 500), c=1.6), sd_v=c(xx=1,hans=1.2)))), "numeric") expect_is(sum(log(n1PDF(rt1$rt, A=.5, b=list(r1=0.5,r2=.5), t0 = 0.5, mean_v=list(b=seq(2.0, 2.4, length.out = 500), c=1.6), sd_v=c(xx=1,hans=1.2)))), "numeric") }) test_that("lba_lnorm work with A = 0", { A <- 0 b <- 1 #Can compare to log-normal if b=1 t0 <- 0 meanlog_v=0 sdlog_v=.5 ########## set.seed(1) check<-rlba_lnorm(1000, A=0, b=1, t0 = 0, meanlog_v=0, sdlog_v=.5) rt<-check[,"rt"] expect_equal( sum(log(dlba_lnorm(rt=rt,A=A,b=b,t0=0,meanlog_v = 0,sdlog_v=.5))), sum(log(dlnorm(rt,0,.5))) ) set.seed(2) check<-rlba_lnorm(1000, A=0, b=1, t0 = 0, meanlog_v=.5, sdlog_v=.5) rt<-check[,"rt"] expect_equal( sum(log(dlba_lnorm(rt=rt,A=A,b=b,t0=0,meanlog_v = .5,sdlog_v=.5))), sum(log(dlnorm(rt,-.5,.5))) ) #x<- plba_lnorm(rt=rt,A=A,b=b,t0=0,meanlog_v = .5,sdlog_v=.5)- plnorm(rt,-.5,.5) expect_equal( plba_lnorm(rt=rt,A=A,b=b,t0=0,meanlog_v = .5,sdlog_v=.5), plnorm(rt,-.5,.5) ) #CDF is working too #what about b=.5 set.seed(3) check2<-rlba_lnorm(1000, A=0, b=.5, t0 = 0, meanlog_v=0, sdlog_v=.5) rt<-check2[,"rt"] expect_equal( sum(log(dlba_lnorm(rt=rt,A=A,b=.5,t0=0,meanlog_v = 0,sdlog_v=.5))), # [1] -Inf sum(log(dlnorm(rt/.5,0,.5)/.5)) # [1] -261.9191 ) expect_equal( plba_lnorm(rt=rt,A=A,b=.5,t0=0,meanlog_v = 0,sdlog_v=.5), plnorm(rt/.5,0,.5) ) }) test_that("lba_gamma works with A=0", { check_gamma <- rlba_gamma(10, A=0.5, b=1, t0 = 0.5, shape_v=c(1.2, 1), scale_v=c(0.2,0.3)) rt<-check_gamma[,"rt"] expect_equal( sum(log(dlba_gamma(rt=rt,A=0.00001, b=1, t0 = 0.5, shape_v=1.2, scale_v=0.2))), sum(log(dlba_gamma(rt=rt,A=0, b=1, t0 = 0.5, shape_v=1.2, scale_v=0.2))) , tolerance = 0.00001) expect_equal( plba_gamma(rt=rt,A=0.00001, b=1, t0 = 0.5, shape_v=1.2, scale_v=0.2), plba_gamma(rt=rt,A=0, b=1, t0 = 0.5, shape_v=1.2, scale_v=0.2) , tolerance = 0.00001) A <- runif(1, 0.3, 0.9) b <- A+runif(1, 0, 0.5) t0 <- runif(1, 0.1, 0.7) v1 <- runif(2, 0.5, 1.5) v2 <- runif(2, 0.1, 0.5) expect_equal( sum(log(dlba_gamma(rt=rt,A=0.00001, b=b, t0 = t0, shape_v=v1, scale_v=v2))),
sum(log(dlba_gamma(rt=rt,A=0, b=b, t0 = t0, shape_v=v1, scale_v=v2))) , tolerance = 0.00001) expect_equal( sum(log(dlba_gamma(rt=rt,A=0.00001, b=b, t0 = t0, shape_v=v1, scale_v=v2))), sum(log(dlba_gamma(rt=rt,A=0, b=b, t0 = t0, shape_v=v1, scale_v=v2))) , tolerance = 0.00001) }) test_that("args.dist is passed through correctly for dLBA, pLBA, qLBA", { # see: https://github.com/rtdists/rtdists/issues/7 d1 <- dLBA(100,1, 10, 100, 0, mean_v=c(3,1), sd_v=c(1,1), args.dist = list(posdrift = FALSE)) d2 <- dlba_norm(100, 10, 100, 0, 3, 1, posdrift = F, robust = FALSE) * (1-plba_norm(100, 10, 100, 0, 1, 1, posdrift = F, robust = FALSE)) d3 <- n1PDF(100, 10, 100, 0, mean_v=c(3,1), sd_v=c(1,1), args.dist = list(posdrift = FALSE)) expect_identical(d1, d2) expect_identical(d1, d3) p1 <- pLBA(100,1, 10, 100, 0, mean_v=c(3,1), sd_v=c(1,1), args.dist = list(posdrift = FALSE)) p2 <- n1CDF(100, 10, 100, 0, mean_v=c(3,1), sd_v=c(1,1), args.dist = list(posdrift = FALSE)) expect_identical(p1, p2) q1 <- qLBA(0.5, 1, 10, 100, 0, mean_v=c(3,1), sd_v=c(1,1), scale_p = TRUE, interval = c(0, 100)) q2 <- qLBA(0.5, 1, 10, 100, 0, mean_v=c(3,1), sd_v=c(1,1), scale_p = TRUE, interval = c(0, 100), args.dist = list(posdrift = FALSE)) expect_true(q1 != q2) }) test_that("another args.dist bug (Glen Livingston Jr, 19/11/2018)", { N_choices = 3 N_data = 1000 # Simulated Data-------------------------------------------------------------- A_actual = 1 b_actual = 1.4 t0_actual = 0.3 v_actual = c(3, 1, 1) s_actual = c(1, 0.7, 0.65) rt1 <- rLBA(N_data, A=A_actual, b=b_actual, t0 = t0_actual, mean_v=v_actual, sd_v=s_actual, posdrift=FALSE) # Density function------------------------------------------------------------ expect_is(dLBA(rt1$rt,rt1$response, A=A_actual, b=b_actual, t0 = t0_actual, mean_v=v_actual, sd_v=s_actual, args.dist = list(posdrift=FALSE), silent = TRUE), class = "numeric") }) rtdists/tests/testthat.R0000644000175000017500000000005413667512040015230 0ustar nileshnileshlibrary(testthat) test_check("rtdists") rtdists/R/0000755000175000017500000000000014164636116012311 5ustar nileshnileshrtdists/R/diffusion.R0000644000175000017500000007061313667512040014425 0ustar nileshnilesh#' The Ratcliff Diffusion Model #' #' Density, distribution function, quantile function, and random generation for the Ratcliff diffusion model with following parameters: \code{a} (threshold separation), \code{z} (starting point), \code{v} (drift rate), \code{t0} (non-decision time/response time constant), \code{d} (differences in speed of response execution), \code{sv} (inter-trial-variability of drift), \code{st0} (inter-trial-variability of non-decisional components), \code{sz} (inter-trial-variability of relative starting point), and \code{s} (diffusion constant). \strong{Note that the parameterization or defaults of non-decision time variability \code{st0} and diffusion constant \code{s} differ from what is often found in the literature and that the parameterization of \code{z} and \code{sz} have changed compared to previous versions (now absolute and not relative).} #' #' @param rt a vector of RTs. Or for convenience also a \code{data.frame} with columns \code{rt} and \code{response} (such as returned from \code{rdiffusion} or \code{\link{rLBA}}). See examples. #' @param n is a desired number of observations. #' @param response character vector. Which response boundary should be tested? Possible values are \code{c("upper", "lower")}, possibly abbreviated and \code{"upper"} being the default. 
Alternatively, a numeric vector with values 1=lower and 2=upper. For convenience, \code{response} is converted via \code{as.numeric} also allowing factors (see examples). Ignored if the first argument is a \code{data.frame}. #' @param p vector of probabilities. Or for convenience also a \code{data.frame} with columns \code{p} and \code{response}. See examples. #' #' @param a threshold separation. Amount of information that is considered for a decision. Large values indicate a conservative decisional style. Typical range: 0.5 < \code{a} < 2 #' @param v drift rate. Average slope of the information accumulation process. The drift gives information about the speed and direction of the accumulation of information. Large (absolute) values of drift indicate a good performance. If received information supports the response linked to the upper threshold the sign will be positive and vice versa. Typical range: -5 < \code{v} < 5 #' @param t0 non-decision time or response time constant (in seconds). Lower bound for the duration of all non-decisional processes (encoding and response execution). Typical range: 0.1 < \code{t0} < 0.5 #' @param z starting point. Indicator of an a priori bias in decision making. When the relative starting point \code{z} deviates from \code{0.5*a}, the amount of information necessary for a decision differs between response alternatives. Default is \code{0.5*a} (i.e., no bias). #' @param d differences in speed of response execution (in seconds). Positive values indicate that response execution is faster for responses linked to the upper threshold than for responses linked to the lower threshold. Typical range: -0.1 < \code{d} < 0.1. Default is 0. #' @param sz inter-trial-variability of starting point. Range of a uniform distribution with mean \code{z} describing the distribution of actual starting points from specific trials. Values different from 0 can predict fast errors (but can slow computation considerably). Typical range: 0 < \code{sz} < 0.5. Default is 0. #' @param sv inter-trial-variability of drift rate. Standard deviation of a normal distribution with mean \code{v} describing the distribution of actual drift rates from specific trials. Values different from 0 can predict slow errors. Typical range: 0 < \code{sv} < 2. Default is 0. #' @param st0 inter-trial-variability of non-decisional components. Range of a uniform distribution with mean \code{t0 + st0/2} describing the distribution of actual \code{t0} values across trials. Accounts for response times below \code{t0}. Reduces skew of predicted RT distributions. Values different from 0 can slow computation considerably. Typical range: 0 < \code{st0} < 0.2. Default is 0. #' @param s diffusion constant; standard deviation of the random noise of the diffusion process (i.e., within-trial variability), scales \code{a}, \code{v}, and \code{sv}. Needs to be fixed to a constant in most applications. Default is 1. Note that the default used by Ratcliff and in other applications is often 0.1. #' #' @param precision \code{numerical} scalar value. Precision of calculation. Corresponds roughly to the number of decimals of the predicted CDFs that are calculated accurately. Default is 3. #' @param maxt maximum \code{rt} allowed, used to stop integration problems. Larger values lead to considerably longer calculation times. #' @param interval a vector containing the end-points of the interval to be searched for the desired quantiles (i.e., RTs) in \code{qdiffusion}. Default is \code{c(0, 10)}. #' @param scale_p logical. 
Should entered probabilities automatically be scaled by maximally predicted probability? Default is \code{FALSE}. Convenience argument for obtaining predicted quantiles. Can be slow as the maximally predicted probability is calculated individually for each \code{p}. #' @param scale_max numerical scalar. Value at which maximally predicted RT should be calculated if \code{scale_p} is \code{TRUE}. #' @param stop_on_error Should the diffusion functions return 0 if the parameter values are outside the allowed range (= \code{FALSE}) or produce an error in this case (= \code{TRUE}). #' @param use_precise boolean. Should \code{pdiffusion} use the precise version for calculating the CDF? The default is \code{TRUE} which is highly recommended. Using \code{FALSE} (i.e., the imprecise version) is hardly any faster and produces clearly wrong results for most parameter settings. #' @param max_diff numeric. Maximum acceptable difference between desired and observed probability of the quantile function (\code{qdiffusion}). #' @param method character. Experimental implementation of an alternative way of generating random variates via the quantile function (\code{qdiffusion}) and a random uniform value. For simple calls, the default method \code{"fastdm"} is dramatically faster. #' #' @return \code{ddiffusion} gives the density, \code{pdiffusion} gives the distribution function, \code{qdiffusion} gives the quantile function (i.e., predicted RTs), and \code{rdiffusion} generates random response times and decisions (returning a \code{data.frame} with columns \code{rt} (numeric) and \code{response} (factor)). #' #' The length of the result is determined by \code{n} for \code{rdiffusion}, equal to the length of \code{rt} for \code{ddiffusion} and \code{pdiffusion}, and equal to the length of \code{p} for \code{qdiffusion}. #' #' The distribution parameters (as well as \code{response}) are recycled to the length of the result. In other words, the functions are completely vectorized for all parameters and even the response boundary. #' #' @details The Ratcliff diffusion model (Ratcliff, 1978) is a mathematical model for two-choice discrimination tasks. It is based on the assumption that information is accumulated continuously until one of two decision thresholds is hit. For introductions see Ratcliff and McKoon (2008), Voss, Rothermund, and Voss (2004), Voss, Nagler, and Lerche (2013), or Wagenmakers (2009). #' #' All functions are fully vectorized across all parameters as well as the response to match the length of \code{rt} (i.e., the output is always of length equal to \code{rt}). This allows for trialwise parameters for each model parameter. #' #' For convenience, all functions (with the exception of \code{rdiffusion}) allow the first argument to be a \code{data.frame} containing the information of the first and second argument in two columns (i.e., \code{rt}/\code{p} and \code{response}). Other columns (as well as a \code{response} argument passed separately) will be ignored. This allows, for example, passing the \code{data.frame} generated by \code{rdiffusion} directly to \code{pdiffusion}. See examples. #' #' \subsection{Quantile Function}{ #' Due to the bivariate nature of the diffusion model, the diffusion processes reaching each response boundary only return a defective CDF that does not reach 1. Only the sum of the CDFs for both boundaries reaches 1. Therefore, \code{qdiffusion} can only return quantiles/RTs for any accumulator up to the maximal probability of that accumulator's CDF.
This can be obtained by evaluating the CDF at \code{Inf}. #' #' As a convenience for the user, if \code{scale_p = TRUE} in the call to \code{qdiffusion} the desired probabilities are automatically scaled by the maximal probability for the corresponding response. Note that this can be slow as the maximal probability is calculated separately for each desired probability. See examples. #' #' Also note that quantiles (i.e., predicted RTs) are obtained by numerically minimizing the absolute difference between desired probability and the value returned from \code{pdiffusion} using \code{\link{optimize}}. If the difference between the desired probability and probability corresponding to the returned quantile is above a certain threshold (currently 0.0001) no quantile is returned but \code{NA}. This can be either because the desired quantile is above the maximal probability for this accumulator or because the limits for the numerical integration are too small (default is \code{c(0, 10)}). #' } #' #' @note The parameterization of the non-decisional components, \code{t0} and \code{st0}, differs from the parameterization used by, for example, Andreas Voss or Roger Ratcliff. In the present case \code{t0} is the lower bound of the uniform distribution of length \code{st0}, but \emph{not} its midpoint. The parameterization employed here is in line with the parameterization for the \link{LBA} code (where \code{t0} is also the lower bound). #' #' The default diffusion constant \code{s} is 1 and not 0.1 as in most applications of Roger Ratcliff and others. #' #' We have changed the parameterization of the start point \code{z} which is now the absolute start point in line with most published literature (it was the relative start point in previous versions of \pkg{rtdists}). #' #' @references Ratcliff, R. (1978). A theory of memory retrieval. \emph{Psychological Review}, 85(2), 59-108. #' #' Ratcliff, R., & McKoon, G. (2008). The diffusion decision model: Theory and data for two-choice decision tasks. \emph{Neural Computation}, 20(4), 873-922. #' #' Voss, A., Rothermund, K., & Voss, J. (2004). Interpreting the parameters of the diffusion model: An empirical validation. \emph{Memory & Cognition}, 32(7), 1206-1220. #' #' Voss, A., Nagler, M., & Lerche, V. (2013). Diffusion Models in Experimental Psychology: A Practical Introduction. \emph{Experimental Psychology}, 60(6), 385-402. doi:10.1027/1618-3169/a000218 #' #' Wagenmakers, E.-J., van der Maas, H. L. J., & Grasman, R. P. P. P. (2007). An EZ-diffusion model for response time and accuracy. \emph{Psychonomic Bulletin & Review}, 14(1), 3-22. #' #' Wagenmakers, E.-J. (2009). Methodological and empirical developments for the Ratcliff diffusion model of response times and accuracy. \emph{European Journal of Cognitive Psychology}, 21(5), 641-671. #' #' #' @author Underlying C code by Jochen Voss and Andreas Voss. Porting and R wrapping by Matthew Gretton, Andrew Heathcote, Scott Brown, and Henrik Singmann. \code{qdiffusion} by Henrik Singmann.
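#'
#' @examples
#' ## Illustrative sketch of the points above (parameter values are arbitrary
#' ## and not taken from the package's example file): the two defective CDFs
#' ## sum to 1 when evaluated at rt = Inf.
#' pdiffusion(Inf, response = "upper", a = 1, v = 0.5, t0 = 0.3) +
#'   pdiffusion(Inf, response = "lower", a = 1, v = 0.5, t0 = 0.3)
#'
#' ## With scale_p = TRUE, qdiffusion() rescales the requested probability by
#' ## that maximal probability, e.g., the median of upper-boundary responses:
#' qdiffusion(0.5, response = "upper", a = 1, v = 0.5, t0 = 0.3, scale_p = TRUE)
#'
#' ## The data.frame returned by rdiffusion() can be passed on directly:
#' rts <- rdiffusion(10, a = 1, v = 0.5, t0 = 0.3)
#' pdiffusion(rts, a = 1, v = 0.5, t0 = 0.3)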
#' #' @useDynLib rtdists, .registration = TRUE #' #' @name Diffusion # @importFrom utils head #' @importFrom stats optimize uniroot runif # @importFrom pracma integral #' @aliases diffusion #' @importFrom Rcpp evalCpp #' #' @example examples/examples.diffusion.R #' # [MG 20150616] # In line with LBA, adjust t0 to be the lower bound of the non-decision time distribution rather than the average # Called from prd, drd, rrd recalc_t0 <- function (t0, st0) { t0 <- t0 + st0/2 } prepare_diffusion_parameter <- function(response, a, v, t0, z, d, sz, sv, st0, s, nn, z_absolute = TRUE, stop_on_error) { if(any(missing(a), missing(v), missing(t0))) stop("a, v, and/or t0 must be supplied") if ( (length(s) == 1) & (length(a) == 1) & (length(v) == 1) & (length(t0) == 1) & (length(z) == 1) & (length(d) == 1) & (length(sz) == 1) & (length(sv) == 1) & (length(st0) == 1)) { skip_checks <- TRUE } else { skip_checks <- FALSE } # Build parameter matrix # Convert boundaries to numeric if necessary if (is.character(response)) { response <- match.arg(response, choices=c("upper", "lower"),several.ok = TRUE) numeric_bounds <- ifelse(response == "upper", 2L, 1L) } else { response <- as.numeric(response) if (any(!(response %in% 1:2))) stop("response needs to be either 'upper', 'lower', or as.numeric(response) %in% 1:2!") numeric_bounds <- as.integer(response) } numeric_bounds <- rep(numeric_bounds, length.out = nn) if (!skip_checks) { # all parameters brought to length of rt s <- rep(s, length.out = nn) a <- rep(a, length.out = nn) v <- rep(v, length.out = nn) t0 <- rep(t0, length.out = nn) z <- rep(z, length.out = nn) d <- rep(d, length.out = nn) sz <- rep(sz, length.out = nn) sv <- rep(sv, length.out = nn) st0 <- rep(st0, length.out = nn) } if (z_absolute) { z <- z/a # transform z from absolute to relative scale (which is currently required by the C code) sz <- sz/a # transform sz from absolute to relative scale (which is currently required by the C code) } t0 <- recalc_t0 (t0, st0) # Build parameter matrix (and divide a, v, and sv, by s) params <- cbind (a/s, v/s, t0, d, sz, sv/s, st0, z, numeric_bounds) # Check for illegal parameter values if(ncol(params)<9) stop("Not enough parameters supplied: probable attempt to pass NULL values?") if(!is.numeric(params)) stop("Parameters need to be numeric.") if (any(is.na(params)) || !all(is.finite(params))) { if (stop_on_error) stop("Parameters need to be numeric and finite.") } if (!skip_checks) { parameter_char <- apply(params, 1, paste0, collapse = "\t") parameter_factor <- factor(parameter_char, levels = unique(parameter_char)) parameter_indices <- split(seq_len(nn), f = parameter_factor) } else { if (all(numeric_bounds == 2L) | all(numeric_bounds == 1L)) { parameter_indices <- list( seq_len(nn) ) } else { parameter_indices <- list( seq_len(nn)[numeric_bounds == 2L], seq_len(nn)[numeric_bounds == 1L] ) } } list( params = params , parameter_indices = parameter_indices ) } #' @rdname Diffusion #' @export ddiffusion <- function (rt, response = "upper", a, v, t0, z = 0.5*a, d = 0, sz = 0, sv = 0, st0 = 0, s = 1, precision = 3, stop_on_error = FALSE) { # for convenience accept data.frame as first argument. 
if (is.data.frame(rt)) { response <- rt$response rt <- rt$rt } nn <- length(rt) pars <- prepare_diffusion_parameter(response = response, a = a, v = v, t0 = t0, z = z, d = d, sz = sz, sv = sv, st0 = st0, s = s, nn = nn, stop_on_error = stop_on_error) densities <- vector("numeric",length=nn) for (i in seq_len(length(pars$parameter_indices))) { ok_rows <- pars$parameter_indices[[i]] densities[ok_rows] <- d_fastdm (rt[ok_rows], pars$params[ok_rows[1],1:8], precision, pars$params[ok_rows[1],9], stop_on_error) } abs(densities) } ## @param stop.on.error logical. If true (the default) an error stops the \code{integration} of \code{pdiffusion}. If false some errors will give a result with a warning in the message component. #' @rdname Diffusion #' @export pdiffusion <- function (rt, response = "upper", a, v, t0, z = 0.5*a, d = 0, sz = 0, sv = 0, st0 = 0, s = 1, precision = 3, maxt = 20, stop_on_error = FALSE, use_precise = TRUE) { if(any(missing(a), missing(v), missing(t0))) stop("a, v, and/or t0 must be supplied") # for convenience accept data.frame as first argument. if (is.data.frame(rt)) { response <- rt$response rt <- rt$rt } rt[rt>maxt] <- maxt # if(!all(rt == sort(rt))) stop("rt needs to be sorted") # Convert boundaries to numeric nn <- length(rt) pars <- prepare_diffusion_parameter(response = response, a = a, v = v, t0 = t0, z = z, d = d, sz = sz, sv = sv, st0 = st0, s = s, nn = nn, stop_on_error = stop_on_error) pvalues <- vector("numeric",length=nn) if (use_precise) { for (i in seq_len(length(pars$parameter_indices))) { ok_rows <- pars$parameter_indices[[i]] pvalues[ok_rows] <- p_precise_fastdm (rt[ok_rows], pars$params[ok_rows[1],1:8], precision, pars$params[ok_rows[1],9], stop_on_error) } } else { for (i in seq_len(length(pars$parameter_indices))) { ok_rows <- pars$parameter_indices[[i]] pvalues[ok_rows] <- p_fastdm (rt[ok_rows], pars$params[ok_rows[1],1:8], precision, pars$params[ok_rows[1],9], stop_on_error) } } #pvalues <- unsplit(densities, f = parameter_factor) pvalues } inv_cdf_diffusion <- function(x, response, a, v, t0, z, d, sz, sv, st0, s, precision, maxt, value, abs = TRUE, stop_on_error = TRUE) { if (abs) abs(value - pdiffusion(rt=x, response=response, a=a, v=v, t0=t0, z=z, d=d, sz=sz, sv=sv, s=s, st0=st0, precision=precision, maxt=maxt, stop_on_error)) else (value - pdiffusion(rt=x, response=response, a=a, v=v, t0=t0, z=z, d=d, sz=sz, sv=sv, st0=st0, s=s, precision=precision, maxt=maxt, stop_on_error)) } #' @rdname Diffusion #' @export qdiffusion <- function (p, response = "upper", a, v, t0, z = 0.5*a, d = 0, sz = 0, sv = 0, st0 = 0, s = 1, precision = 3, maxt = 20, interval = c(0, 10), scale_p = FALSE, scale_max = Inf, stop_on_error = FALSE, max_diff = 0.0001) { if(any(missing(a), missing(v), missing(t0))) stop("a, v, and t0 must be supplied") # for convenience accept data.frame as first argument. if (is.data.frame(p)) { response <- p$response p <- p$p } nn <- length(p) response <- rep(unname(response), length.out = nn) s <- rep(s, length.out = nn) # pass s to other functions for correct handling! No division here. 
a <- rep(unname(a), length.out = nn) v <- rep(unname(v), length.out = nn) t0 <- rep(unname(t0), length.out = nn) z <- rep(unname(z), length.out = nn) d <- rep(unname(d), length.out = nn) sz <- rep(unname(sz), length.out = nn) sv <- rep(unname(sv), length.out = nn) st0 <- rep(unname(st0), length.out = nn) p <- unname(p) op <- order(p) max_interval <- interval[2] - interval[1] steps <- max_interval/nn * 4 out <- vector("numeric", nn) for (i in seq_len(nn)) { if (scale_p) max_p <- pdiffusion(scale_max, response=response[op[i]], a=a[op[i]], v=v[op[i]], t0=t0[op[i]], z=z[op[i]], d=d[op[i]], sz=sz[op[i]], sv=sv[op[i]], st0=st0[op[i]], s=s[op[i]], precision=precision, maxt=maxt, stop_on_error=stop_on_error) else max_p <- 1 tmp <- list(objective = 1) if (i > 1 && !is.na(out[op[i-1]])) { tmp <- do.call(optimize, args = c(f = inv_cdf_diffusion, interval = list(c(out[op[i-1]], out[op[i-1]]+steps)), response=response[op[i]], a=a[op[i]], v=v[op[i]], t0=t0[op[i]], z=z[op[i]], d=d[op[i]], sz=sz[op[i]], sv=sv[op[i]], st0=st0[op[i]], s=s[op[i]], precision=precision, maxt=maxt, stop_on_error=stop_on_error, value =p[op[i]]*max_p, tol = .Machine$double.eps^0.5)) } if (tmp$objective > max_diff) { tmp <- do.call(optimize, args = c(f = inv_cdf_diffusion, interval = list(c(max(interval[1], t0), interval[2])), response=response[op[i]], a=a[op[i]], v=v[op[i]], t0=t0[op[i]], z=z[op[i]], d=d[op[i]], sz=sz[op[i]], sv=sv[op[i]], st0=st0[op[i]], s=s[op[i]], precision=precision, maxt=maxt, stop_on_error=stop_on_error, value =p[op[i]]*max_p, tol = .Machine$double.eps^0.5)) } if (tmp$objective > max_diff) { tmp <- do.call(optimize, args = c(f=inv_cdf_diffusion, interval = list(c(max(interval[1], t0),max(interval)/2)), response=response[op[i]], a=a[op[i]], v=v[op[i]], t0=t0[op[i]], z=z[op[i]], d=d[op[i]], sz=sz[op[i]], sv=sv[op[i]], st0=st0[op[i]], s=s[op[i]], precision=precision, maxt=maxt, stop_on_error=stop_on_error, value =p[op[i]]*max_p, tol = .Machine$double.eps^0.5)) } if (tmp$objective > max_diff) { try({ uni_tmp <- do.call(uniroot, args = c(f=inv_cdf_diffusion, interval = list(c(max(interval[1], t0), interval[2])), response=response[op[i]], a=a[op[i]], v=v[op[i]], t0=t0[op[i]], z=z[op[i]], d=d[op[i]], sz=sz[op[i]], sv=sv[op[i]], st0=st0[op[i]], precision=precision, s=s[op[i]], maxt=maxt, stop_on_error=stop_on_error, value =p[op[i]]*max_p, tol = .Machine$double.eps^0.5, abs = FALSE)) tmp$objective <- uni_tmp$f.root tmp$minimum <- uni_tmp$root }, silent = TRUE) } if (tmp$objective > max_diff) { tmp[["minimum"]] <- NA warning("Cannot obtain RT that is less than ", max_diff, " away from desired p = ", p[op[i]], ".\nIncrease/decrease interval or obtain for different response.", call. 
= FALSE) } out[op[i]] <- tmp[["minimum"]] } return(out) } ## When given vectorised parameters, n is the number of replicates for each parameter set #' @rdname Diffusion #' @export rdiffusion <- function (n, a, v, t0, z = 0.5*a, d = 0, sz = 0, sv = 0, st0 = 0, s = 1, precision = 3, stop_on_error = TRUE, maxt = 20, interval = c(0, 10), method = c("fastdm", "qdiffusion")) { if(any(missing(a), missing(v), missing(t0))) stop("a, v, and/or t0 must be supplied") method <- match.arg(method) if (method == "fastdm") { pars <- prepare_diffusion_parameter(response = 1L, a = a, v = v, t0 = t0, z = z, d = d, sz = sz, sv = sv, st0 = st0, s = s, nn = n, stop_on_error = stop_on_error) randRTs <- vector("numeric",length=n) randBounds <- vector("numeric",length=n) for (i in seq_len(length(pars$parameter_indices))) { ok_rows <- pars$parameter_indices[[i]] # Calculate n for this row current_n <- length(ok_rows) out <- r_fastdm (current_n, pars$params[ok_rows[1],1:8], precision, stop_on_error=stop_on_error) #current_n, uniques[i,1:8], precision, stop_on_error=stop_on_error) randRTs[ok_rows] <- out$rt randBounds[ok_rows] <- out$boundary } response <- factor(randBounds, levels = 0:1, labels = c("lower", "upper")) return(data.frame(rt = randRTs, response)) } else if (method == "qdiffusion") { s <- rep(s, length.out = n) a <- rep(a, length.out = n) v <- rep(v, length.out = n) t0 <- rep(t0, length.out = n) z <- rep(z, length.out = n) #z <- z/a # transform z from absolute to relative scale (which is currently required by the C code) d <- rep(d, length.out = n) sz <- rep(sz, length.out = n) #sz <- sz/a # transform sz from absolute to relative scale (which is currently required by the C code) sv <- rep(sv, length.out = n) st0 <- rep(st0, length.out = n) t0 <- recalc_t0 (t0, st0) # Build parameter matrix (unlike the fastdm branch, a, v, and sv are not divided by s here; s is passed on to pdiffusion/qdiffusion) params <- cbind (a, v, t0, d, sz, sv, st0, z) # Check for illegal parameter values if(ncol(params)<8) stop("Not enough parameters supplied: probable attempt to pass NULL values?") if(!is.numeric(params)) stop("Parameters need to be numeric.") if (any(is.na(params)) || !all(is.finite(params))) stop("Parameters need to be numeric and finite.") randRTs <- vector("numeric",length=n) randBounds <- vector("numeric",length=n) #uniques <- unique(params) parameter_char <- apply(params, 1, paste0, collapse = "\t") parameter_factor <- factor(parameter_char, levels = unique(parameter_char)) parameter_indices <- split(seq_len(n), f = parameter_factor) for (i in seq_len(length(parameter_indices))) { ok_rows <- parameter_indices[[i]] # Calculate n for this row current_n <- length(ok_rows) mu <- pdiffusion(rt = Inf, response = "upper", a = a[ok_rows[1]], v = v[ok_rows[1]], t0 = t0[ok_rows[1]], z = z[ok_rows[1]], d = d[ok_rows[1]], sz = sz[ok_rows[1]], sv = sv[ok_rows[1]], st0 = st0[ok_rows[1]], s = s[ok_rows[1]], precision = precision, maxt = maxt) unif_variates <- runif(current_n) sel_u <- unif_variates < mu sel_l <- !sel_u unif_variates_u <- unif_variates[sel_u] unif_variates_l <- 1-unif_variates[sel_l] randRTs[ok_rows[sel_u]] <- qdiffusion(p = unif_variates_u, response = "upper", a = a[ok_rows[1]], v = v[ok_rows[1]], t0 = t0[ok_rows[1]], z = z[ok_rows[1]], d = d[ok_rows[1]], sz = sz[ok_rows[1]], sv =
sv[ok_rows[1]], st0 = st0[ok_rows[1]], s = s[ok_rows[1]], precision = precision, maxt = maxt, interval = interval, scale_p = FALSE) randRTs[ok_rows[sel_l]] <- qdiffusion(p = unif_variates_l, response = "lower", a = a[ok_rows[1]], v = v[ok_rows[1]], t0 = t0[ok_rows[1]], z = z[ok_rows[1]], d = d[ok_rows[1]], sz = sz[ok_rows[1]], sv = sv[ok_rows[1]], st0 = st0[ok_rows[1]], s = s[ok_rows[1]], precision = precision, maxt = maxt, interval = interval, scale_p = FALSE) randBounds[ok_rows] <- ifelse(sel_u, 1, 0) } response <- factor(randBounds, levels = 0:1, labels = c("lower", "upper")) return(data.frame(rt = randRTs, response)) } } rtdists/R/rr98-data.R0000644000175000017500000001406513667512040014151 0ustar nileshnilesh#' Ratcliff and Rouder (1998, Exp. 1) Luminance Discrimination Data #' #' Responses and response times from an experiment in which three participants were asked to decide whether the overall brightness of pixel arrays displayed on a computer monitor was "high" or "low". In addition, instruction manipulated speed and accuracy between blocks. #' #' @details The Experiment is described in the following by Ratcliff and Rouder (1998, pp. 349): #' #' In Experiment 1, subjects were asked to decide whether the overall #' brightness of pixel arrays displayed on a computer monitor was "high" #' or "low". The brightness of a display was controlled by the #' proportion of the pixels that were white. For each trial, the proportion #' of white pixels was chosen from one of two distributions, a high #' distribution [i.e., light] or a low [i.e., dark] distribution, each with fixed mean and standard #' deviation. Feedback was given after each trial to tell the subject whether his or her decision had correctly indicated the distribution #' from which the stimulus had been chosen. Other than this feedback, a #' subject had no information about the distributions. Because the distributions overlapped substantially, a subject could not be highly accurate. A display with 50% white pixels, for example, might have come #' from the high distribution on one trial and the low distribution on #' another. #' #' \subsection{Stimuli}{ #' The stimulus display for Experiment 1 was a square that was 64 #' pixels on each side and subtended 3.8 degree of visual angle on a PC-VGA #' monitor. [...] In each square, 3,072 randomly #' chosen pixels were neutral gray, like the background, and the remaining #' 1,024 pixels were either black or white; the proportion of white to #' black pixels provided the brightness manipulation. There were 33 #' equally spaced proportions from zero (all 1,024 pixels were black) to #' 1 (all 1,024 pixels were white). The two distributions from which the #' bright and dark stimuli were chosen were centered at .375 (low brightness) #' and .625 (high brightness), and they each had a standard deviation of .1875. #' } #' #' \subsection{Procedure}{ #' A subject's task was to decide, on each trial, from which distribution, #' high or low brightness in Experiment 1, the observed #' stimulus (stimuli) had been sampled. Subjects made their decision by #' pressing one of two response keys. On each trial, a 500-ms foreperiod, #' during which the display consisted solely of neutral gray, was followed #' by presentation of the stimulus; presentation was terminated by #' the subject's response. In Experiment 1, speed-versus-accuracy #' instructions were manipulated. 
For some blocks of trials, subjects #' were instructed to respond as quickly as possible, and a "too slow" #' message followed every response longer than 550 ms. For other #' blocks of trials, subjects were instructed to be as accurate as possible, #' and a "bad error" message followed incorrect responses to stimuli #' from the extreme ends of the distributions. Experiment 1 had ten 35-min #' sessions, and Experiments 2 and 3 had four sessions. In Experiment 1, #' subjects switched from emphasis on speed to emphasis on #' accuracy every 204 trials. Each session consisted of eight blocks of #' 102 trials per block, for a total of 8,160 trials per subject in Experiment 1. In #' Experiments 2 and 3, each session consisted of eight blocks of 102 trials, for a total of 3,264 trials #' per subject in each experiment. For all trials in Experiments 2 and 3, subjects #' were instructed to maintain a high level of accuracy while #' responding quickly, and an "error" message indicated incorrect #' responses. Responses were followed by a 300-ms blank interval, and #' the error message was displayed for 300 ms after the blank interval. #' } #' #' @note The data is already prepared following Ratcliff and Rouder (1998) by removing the following trials: #' \itemize{ #' \item{the first session for each participant} #' \item{the first 20 trials of each session} #' \item{the first trial of each block (each change in speed-accuracy instruction starts a new block)} #' } #' To fully replicate the data used by Ratcliff and Rouder (1998) one only needs to remove the trials that are \code{TRUE} in column \code{outlier} (i.e., RTs outside of (200ms, 2500ms)). The full raw data is also available as part of this package, see:\cr #' \code{system.file("extdata", "rr98-data", package = "rtdists")} and \code{system.file("extdata", "rr98-data.codes", package = "rtdists")} #' #' @docType data #' @keywords dataset #' @name rr98 #' @usage rr98 #' @format A \code{data.frame} with 24,358 obs. and 12 variables: #' \describe{ #' \item{id}{participant id, factor with three levels} #' \item{session}{session number, integer} #' \item{block}{block number, integer} #' \item{trial}{trial number within a block, integer} #' \item{instruction}{factor with two levels: \code{"accuracy"} for blocks with accuracy instructions; \code{"speed"} for blocks with speed instruction} #' \item{source}{factor with two levels giving the distribution the stimulus strength was drawn from, \code{"dark"} and \code{"light"}} #' \item{strength}{proportion of white to black pixels, varied over 33 equally spaced proportions from zero (all 1,024 pixels were black) to 1 (all 1,024 pixels were white), with 0 darkest and 32 lightest. Integer.} #' \item{response}{factor with two levels: \code{"dark"} and \code{"light"}} #' \item{response_num}{numeric response variable such that \code{1="dark"} and \code{2="light"}} #' \item{correct}{boolean indicating whether or not \code{source==response}. (Does not seem to be used in the original analysis.)} #' \item{rt}{response time in seconds} #' \item{outlier}{boolean indicating whether or not the response was considered an outlier by Ratcliff and Rouder (1998), i.e., RTs outside of (200ms, 2500ms)} #' } #' #' @source Ratcliff, R., & Rouder, J. N. (1998). Modeling Response Times for Two-Choice Decisions. \emph{Psychological Science}, 9(5), 347-356.
http://doi.org/10.1111/1467-9280.00067 #' #' @example examples/examples.rr98.R #' NULL rtdists/R/speed_acc-data.R0000644000175000017500000000615413667512040015253 0ustar nileshnilesh#' Speed-Accuracy Data from Wagenmakers, Ratcliff, Gomez, & McKoon (2008, Experiment 1) #' #' Responses and response times from an experiment in which instructions manipulated the emphasis on speed versus accuracy between blocks. This data was also analyzed by Heathcote and Love (2012) who were the first to use the 17 participants also included here. #' #' @details The data excludes the practice blocks but includes all trials. Variable \code{censor} can be used for excluding all trials also excluded from the papers using it, namely: uninterpretable responses, too fast responses (<180 ms), and too slow responses (>3 sec). Heathcote and Love (2012, p. 7) describe the data as follows: #' #' We fit the LBA and LNR models to data from Wagenmakers et al.'s (2008) experiment one, where participants made decisions about whether a string of letters constituted a word. These lexical decisions were made about four types of stimuli, non-words (nw) and high-frequency (hf), low-frequency (lf), and very low-frequency (vlf) words. Participants made decisions either under speed or accuracy emphasis instructions in different experimental blocks. Accuracy blocks were preceded by the message "Try to respond accurately" and "ERROR" was displayed after each wrong response. Speed blocks were preceded by the message "Try to respond quickly" and "TOO SLOW" was displayed after each response slower than 0.75 s. We report analyses of data from 17 participants (31,412 data points) in their Experiment 1, including the 15 participants analyzed in Wagenmakers et al. (2008) and two extras (we thank Eric-Jan Wagenmakers for supplying this data). #' #' @docType data #' @keywords dataset #' @name speed_acc #' @usage speed_acc #' @format A \code{data.frame} with 31,522 obs. and 9 variables: #' \describe{ #' \item{id}{participant id} #' \item{block}{block number} #' \item{condition}{\code{accuracy} for blocks with accuracy instructions; \code{speed} for blocks with speed instruction} #' \item{stim}{unique identifier of stimulus, stimuli are nested in frequency conditions} #' \item{stim_cat}{category of stimulus, either word or non-word} #' \item{frequency}{"high frequency word", "low frequency word", "very low frequency word", or non-words derived from the first three categories} #' \item{response}{\code{word}, \code{nonword}, or not interpretable response (\code{error}, i.e., pushed a button, but not the right one and also not the one next to the right button)} #' \item{rt}{response time in seconds} #' \item{censor}{boolean indicating whether or not a response should be eliminated prior to analysis; uninterpretable response, too fast response (<180 ms), too slow response (>3 sec)} #' } #' #' @source Wagenmakers, E.-J., Ratcliff, R., Gomez, P., & McKoon, G. (2008). A diffusion model account of criterion shifts in the lexical decision task. \emph{Journal of Memory and Language}, 58(1), 140-159. #' #' @references Heathcote, A., & Love, J. (2012). Linear deterministic accumulator models of simple choice. \emph{Frontiers in Psychology: Cognitive Science}, 3, 292.
doi:10.3389/fpsyg.2012.00292 #' #' @example examples/examples.speed_acc.R #' NULL rtdists/R/deprecated.R0000644000175000017500000000162413667512040014533 0ustar nileshnilesh#' Deprecated functions #' #' These functions have been renamed and deprecated in \pkg{rtdists}: #' \code{drd()} (use \code{\link{ddiffusion}()}), #' \code{prd()} (use \code{\link{pdiffusion}()}), #' \code{rrd()} (use \code{\link{rdiffusion}()}). #' @rdname deprecated #' @keywords internal #' @aliases rtdists-deprecated #' @param ... arguments passed from the old functions to the new functions #' @export drd <- function(...) { .Deprecated("ddiffusion", "rtdists", "drd was renamed to ddiffusion and is now deprecated.") ddiffusion(...) } #' @rdname deprecated #' @export prd <- function(...) { .Deprecated("pdiffusion", "rtdists", "prd was renamed to pdiffusion and is now deprecated.") pdiffusion(...) } #' @rdname deprecated #' @export rrd <- function(...) { .Deprecated("rdiffusion", "rtdists", "rrd was renamed to rdiffusion and is now deprecated.") rdiffusion(...) } rtdists/R/lba.r0000644000175000017500000007265413667512040013234 0ustar nileshnilesh#' The Linear Ballistic Accumulator (LBA) #' #' Density, distribution function, quantile function, and random generation for #' the LBA model with the following parameters: \code{A} (upper value of #' starting point), \code{b} (response threshold), \code{t0} (non-decision #' time), and driftrate (\code{v}). All functions are available with different #' distributions underlying the drift rate: Normal (\code{norm}), Gamma #' (\code{gamma}), Frechet (\code{frechet}), and log normal (\code{lnorm}). The #' functions return their values conditional on the accumulator given in the #' response argument winning. #' #' @param rt vector of RTs. Or for convenience also a \code{data.frame} with #' columns \code{rt} and \code{response} (such as returned from \code{rLBA} or #' \code{\link{rdiffusion}}). See examples. #' @param response integer vector of winning accumulators/responses #' corresponding to the vector of RTs/p (i.e., used for specifying the #' response for a given RT/probability). Will be recycled if necessary. Cannot #' contain values larger than the number of accumulators. First #' response/accumulator must receive value 1, second 2, and so forth. For #' convenience, \code{response} is converted via \code{as.numeric} thereby #' allowing factors to be passed as well (such as returned from #' \code{\link{rdiffusion}}). Ignored if \code{rt} or \code{p} is a #' \code{data.frame}. #' @param p vector of probabilities. Or for convenience also a \code{data.frame} #' with columns \code{p} and \code{response}. See examples. #' @param n desired number of observations (scalar integer). #' @param A start point interval or evidence in accumulator before beginning of #' decision process. Start point varies from trial to trial in the interval #' [0, \code{A}] (uniform distribution). Average amount of evidence before #' evidence accumulation across trials is \code{A}/2. #' @param b response threshold. (\code{b} - \code{A}/2) is a measure of #' "response caution". #' @param t0 non-decision time or response time constant (in seconds). Lower #' bound for the duration of all non-decisional processes (encoding and #' response execution). #' @param st0 variability of non-decision time, such that \code{t0} is uniformly #' distributed between \code{t0} and \code{t0} + \code{st0}. Default is 0. Can #' be trialwise, and will be recycled to length of \code{rt}. #' @param ...
two \emph{named} drift rate parameters depending on #' \code{distribution} (e.g., \code{mean_v} and \code{sd_v} for #' \code{distribution=="norm"}). The parameters can either be given as a #' numeric vector or a list. If a numeric vector is passed each element of the #' vector corresponds to one accumulator. If a list is passed each list #' element corresponds to one accumulator allowing again trialwise driftrates. #' The shorter parameter will be recycled as necessary (and also the elements #' of the list to match the length of \code{rt}). See details. #' @param distribution character specifying the distribution of the drift rate. #' Possible values are \code{c("norm", "gamma", "frechet", "lnorm")}, default #' is \code{"norm"}. #' @param args.dist list of optional further arguments to the distribution #' functions (i.e., \code{posdrift} or \code{robust} for #' \code{distribution=="norm"}, see \code{\link{single-LBA}}). #' @param silent logical. Should the number of accumulators used be suppressed? #' Default is \code{FALSE} which prints the number of accumulators. #' @param interval a vector containing the end-points of the interval to be #' searched for the desired quantiles (i.e., RTs) in \code{qLBA}. Default is #' \code{c(0, 10)}. #' @param scale_p logical. Should entered probabilities automatically be scaled #' by maximally predicted probability? Default is \code{FALSE}. Convenience #' argument for obtaining predicted quantiles. Can be slow as the maximally #' predicted probability is calculated individually for each \code{p}. #' @param scale_max numerical scalar. Value at which maximally predicted RT #' should be calculated if \code{scale_p} is \code{TRUE}. #' #' @details #' #' For convenience, all functions (with the exception of \code{rLBA}) #' allow that the first argument is a \code{data.frame} containing the #' information of the first and second argument in two columns (i.e., #' \code{rt}/\code{p} and \code{response}). Other columns will be ignored. This #' allows, for example, passing the \code{data.frame} generated by \code{rLBA} #' directly to \code{pLBA}. See examples. #' #' \subsection{Parameters}{ The following arguments are allowed as \code{...} #' drift rate parameters: \itemize{ \item \code{mean_v,sd_v} mean and standard #' deviation of normal distribution for drift rate (\code{norm}). See #' \code{\link{Normal}} \item \code{shape_v,rate_v,scale_v} shape, rate, and #' scale of gamma (\code{gamma}) and scale and shape of Frechet (\code{frechet}) #' distributions for drift rate. See \code{\link{GammaDist}} or #' \code{\link[evd]{frechet}}. For Gamma, scale = 1/rate and rate = 1/scale. #' \item \code{meanlog_v,sdlog_v} mean and standard deviation of lognormal #' distribution on the log scale for drift rate (\code{lnorm}). See #' \code{\link{Lognormal}}. } #' #' As described above, the accumulator parameters can either be given as a #' numeric vector or a list. If a numeric vector is passed each element of the #' vector corresponds to one accumulator. If a list is passed each list element #' corresponds to one accumulator allowing trialwise driftrates. The shorter #' parameter will be recycled as necessary (and also the elements of the list to #' match the length of \code{rt}).
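#'
#' For illustration (hypothetical values): \code{mean_v = c(1.2, 1.0)} gives two
#' accumulators whose drift rate means are constant across trials, whereas
#' \code{mean_v = list(c(1.2, 1.1), 1.0)} lets the mean of the first accumulator
#' vary trialwise while the second accumulator keeps a constant mean.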
#' #' The other LBA parameters (i.e., \code{A}, \code{b}, and \code{t0}, with the #' exception of \code{st0}) can either be a single numeric vector (which will be #' recycled to reach \code{length(rt)} or \code{length(n)} for trialwise #' parameters) \emph{or} a \code{list} of such vectors in which each list #' element corresponds to the parameters for this accumulator (i.e., the list #' needs to be of the same length as there are accumulators). Each list will #' also be recycled to reach \code{length(rt)} for trialwise parameters per #' accumulator. #' #' To make the difference between both paragraphs clear: Whereas for the #' drift rate parameters both a single vector and a list correspond to different #' accumulators, only the latter is true for the other parameters. For those #' (i.e., \code{A}, \code{b}, and \code{t0}) a single vector always corresponds #' to trialwise values and a list must be used for accumulator-wise values. #' #' \code{st0} can only vary trialwise (via a vector). And it should be noted #' that \code{st0} not equal to zero will considerably slow down everything. } #' #' \subsection{Quantile Function}{ Due to the bivariate nature of the LBA, #' single accumulators only return defective CDFs that do not reach 1. Only the #' sum of all accumulators reaches 1. Therefore, \code{qLBA} can only return #' quantiles/RTs for any accumulator up to the maximal probability of that #' accumulator's CDF. This can be obtained by evaluating the CDF at \code{Inf}. #' #' As a convenience for the user, if \code{scale_p = TRUE} in the call to #' \code{qLBA} the desired probabilities are automatically scaled by the maximal #' probability for the corresponding response. Note that this can be slow as the #' maximal probability is calculated separately for each desired probability. #' See examples. #' #' Also note that quantiles (i.e., predicted RTs) are obtained by numerically #' minimizing the absolute difference between desired probability and the value #' returned from \code{pLBA} using \code{\link{optimize}}. If the difference #' between the desired probability and probability corresponding to the returned #' quantile is above a certain threshold (currently 0.0001) no quantile is #' returned but \code{NA}. This can be either because the desired quantile is #' above the maximal probability for this accumulator or because the limits for #' the numerical integration are too small (default is \code{c(0, 10)}). } #' #' \subsection{RNG}{ For random number generation at least one of the #' distribution parameters (i.e., \code{mean_v}, \code{sd_v}, \code{shape_v}, #' \code{scale_v}, \code{rate_v}, \code{meanlog_v}, and \code{sdlog_v}) should #' be of length > 1 to receive RTs from multiple responses. Shorter vectors are #' recycled as necessary.\cr Note that for random number generation from a #' normal distribution for the driftrate the number of returned samples may be #' less than the number of requested samples if \code{posdrift==FALSE}. } #' #' #' @return \code{dLBA} returns the density (PDF), \code{pLBA} returns the #' distribution function (CDF), \code{qLBA} returns the quantile/RT, #' \code{rLBA} returns random response times and responses (in a #' \code{data.frame}). #' #' The length of the result is determined by \code{n} for \code{rLBA}, equal #' to the length of \code{rt} for \code{dLBA} and \code{pLBA}, and equal to #' the length of \code{p} for \code{qLBA}. #' #' The distribution parameters (as well as \code{response}) are recycled to #' the length of the result.
In other words, the functions are completely #' vectorized for all parameters and even the response. #' #' @note These are the top-level functions intended for end-users. To obtain the #' density and cumulative density the race functions are called for each #' response time with the corresponding winning accumulator as first #' accumulator (see \code{\link{LBA-race}}). #' #' @references #' #' Brown, S. D., & Heathcote, A. (2008). The simplest complete model of choice #' response time: Linear ballistic accumulation. \emph{Cognitive Psychology}, #' 57(3), 153-178. doi:10.1016/j.cogpsych.2007.12.002 #' #' Donkin, C., Averell, L., Brown, S., & Heathcote, A. (2009). Getting more from #' accuracy and response time data: Methods for fitting the linear ballistic #' accumulator. \emph{Behavior Research Methods}, 41(4), 1095-1110. #' doi:10.3758/BRM.41.4.1095 #' #' Heathcote, A., & Love, J. (2012). Linear deterministic accumulator models of #' simple choice. \emph{Frontiers in Psychology}, 3, 292. #' doi:10.3389/fpsyg.2012.00292 #' #' @name LBA #' @importFrom stats optimize uniroot #' #' @example examples/examples.lba.R #' NULL # this function takes the argument as entered by the user and always returns a # list of length n_v in which each element is of length nn. This ensures that # for each parameter we have a list of length equal to the number of # accumulators with each element of length equal to the number of trials. This # consistency will be exploited when passing to the n1 functions. check_i_arguments <- function(arg, nn, n_v, dots = FALSE) { mc <- match.call() varname <- sub("dots$", "", deparse(mc[["arg"]]), fixed = TRUE) if (!is.list(arg)) { if ((!is.vector(arg, "numeric")) || (length(arg) < 1)) stop(paste(varname, "needs to be a numeric vector of length >= 1!")) if (dots) { arg <- as.list(arg) arg <- lapply(arg, rep, length.out=nn) } else arg <- lapply(seq_len(n_v), function(x) rep(arg, length.out=nn)) } else { if (!dots && (length(arg) != n_v)) stop(paste("if", varname, "is a list, its length needs to correspond to the number of accumulators.")) for (i in seq_along(arg)) { if ((!is.vector(arg[[i]], "numeric")) || (length(arg[[i]]) < 1)) stop(paste0(varname, "[[", i, "]] needs to be a numeric vector of length >= 1!")) arg[[i]] <- rep(arg[[i]], length.out=nn) } } #if (length(arg) != n_v) stop(paste("size of", varname, "does not correspond to number of accumulators.")) return(arg) } #' @rdname LBA #' @export dLBA <- function(rt, response, A, b, t0, ..., st0=0, distribution = c("norm", "gamma", "frechet", "lnorm"), args.dist = list(), silent = FALSE) { dots <- list(...) if (is.null(names(dots))) stop("... arguments need to be named.") # for convenience accept data.frame as first argument.
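  # Illustrative sketch (hedged; arbitrary parameter values, not run):
  #   x <- rLBA(10, A = 0.5, b = 1, t0 = 0.3, mean_v = c(1.2, 1.0), sd_v = 0.2)
  #   dLBA(x, A = 0.5, b = 1, t0 = 0.3, mean_v = c(1.2, 1.0), sd_v = 0.2)
  # is equivalent to passing x$rt and x$response as separate arguments.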
if (is.data.frame(rt)) { response <- rt$response rt <- rt$rt } response <- as.numeric(response) nn <- length(rt) n_v <- max(vapply(dots, length, 0)) # Number of responses if(!silent) message(paste("Results based on", n_v, "accumulators/drift rates.")) if (!is.numeric(response) || max(response) > n_v) stop("response needs to be a numeric vector of integers up to number of accumulators.") if (any(response < 1)) stop("the first response/accumulator must have value 1.") if (n_v < 2) stop("There need to be at least two accumulators/drift rates.") distribution <- match.arg(distribution) response <- rep(response, length.out = nn) A <- check_i_arguments(A, nn=nn, n_v=n_v) b <- check_i_arguments(b, nn=nn, n_v=n_v) t0 <- check_i_arguments(t0, nn=nn, n_v=n_v) switch(distribution, norm = { if (any(!(c("mean_v","sd_v") %in% names(dots)))) stop("mean_v and sd_v need to be passed for distribution = \"norm\"") dots$mean_v <- check_i_arguments(dots$mean_v, nn=nn, n_v=n_v, dots = TRUE) dots$sd_v <- check_i_arguments(dots$sd_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("mean_v","sd_v")] }, gamma = { if (!("shape_v" %in% names(dots))) stop("shape_v needs to be passed for distribution = \"gamma\"") if ((!("rate_v" %in% names(dots))) & (!("scale_v" %in% names(dots)))) stop("rate_v or scale_v need to be passed for distribution = \"gamma\"") dots$shape_v <- check_i_arguments(dots$shape_v, nn=nn, n_v=n_v, dots = TRUE) if ("scale_v" %in% names(dots)) { dots$scale_v <- check_i_arguments(dots$scale_v, nn=nn, n_v=n_v, dots = TRUE) if (is.list(dots$scale_v)) { dots$rate_v <- lapply(dots$scale_v, function(x) 1/x) } else dots$rate_v <- 1/dots$scale_v } else dots$rate_v <- check_i_arguments(dots$rate_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("shape_v","rate_v")] }, frechet = { if (any(!(c("shape_v","scale_v") %in% names(dots)))) stop("shape_v and scale_v need to be passed for distribution = \"frechet\"") dots$shape_v <- check_i_arguments(dots$shape_v, nn=nn, n_v=n_v, dots = TRUE) dots$scale_v <- check_i_arguments(dots$scale_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("shape_v","scale_v")] }, lnorm = { if (any(!(c("meanlog_v","sdlog_v") %in% names(dots)))) stop("meanlog_v and sdlog_v need to be passed for distribution = \"lnorm\"") dots$meanlog_v <- check_i_arguments(dots$meanlog_v, nn=nn, n_v=n_v, dots = TRUE) dots$sdlog_v <- check_i_arguments(dots$sdlog_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("meanlog_v","sdlog_v")] } ) #browser() for (i in seq_len(length(dots))) { if (length(dots[[i]]) < n_v) dots[[i]] <- rep(dots[[i]],length.out=n_v) } out <- vector("numeric", nn) for (i in unique(response)) { sel <- response == i out[sel] <- do.call(n1PDF, args = c(rt=list(rt[sel]), A = list(lapply(A, "[", i = sel)[c(i, seq_len(n_v)[-i])]), b = list(lapply(b, "[", i = sel)[c(i, seq_len(n_v)[-i])]), t0 = list(lapply(t0, "[", i = sel)[c(i, seq_len(n_v)[-i])]), lapply(dots, function(x) lapply(x, "[", i = sel)[c(i, seq_len(n_v)[-i])]), distribution=distribution, args.dist=list(args.dist), silent=TRUE, st0 = list(st0))) } return(out) } #' @rdname LBA #' @export pLBA <- function(rt, response, A, b, t0, ..., st0=0, distribution = c("norm", "gamma", "frechet", "lnorm"), args.dist = list(), silent = FALSE) { dots <- list(...) if (is.null(names(dots))) stop("... arguments need to be named.") # for convenience accept data.frame as first argument. 
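  # Descriptive note: pLBA mirrors dLBA below. For each observed response i the
  # accumulator-wise parameters are reordered so that accumulator i comes
  # first, which allows the winner-first race function (n1CDF) to be reused.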
if (is.data.frame(rt)) { response <- rt$response rt <- rt$rt } response <- as.numeric(response) nn <- length(rt) n_v <- max(vapply(dots, length, 0)) # Number of responses if(!silent) message(paste("Results based on", n_v, "accumulators/drift rates.")) if (!is.numeric(response) || max(response) > n_v) stop("response needs to be a numeric vector of integers up to number of accumulators.") if (n_v < 2) stop("There need to be at least two accumulators/drift rates.") distribution <- match.arg(distribution) response <- rep(response, length.out = nn) A <- check_i_arguments(A, nn=nn, n_v=n_v) b <- check_i_arguments(b, nn=nn, n_v=n_v) t0 <- check_i_arguments(t0, nn=nn, n_v=n_v) switch(distribution, norm = { if (any(!(c("mean_v","sd_v") %in% names(dots)))) stop("mean_v and sd_v need to be passed for distribution = \"norm\"") dots$mean_v <- check_i_arguments(dots$mean_v, nn=nn, n_v=n_v, dots = TRUE) dots$sd_v <- check_i_arguments(dots$sd_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("mean_v","sd_v")] }, gamma = { if (!("shape_v" %in% names(dots))) stop("shape_v needs to be passed for distribution = \"gamma\"") if ((!("rate_v" %in% names(dots))) & (!("scale_v" %in% names(dots)))) stop("rate_v or scale_v need to be passed for distribution = \"gamma\"") dots$shape_v <- check_i_arguments(dots$shape_v, nn=nn, n_v=n_v, dots = TRUE) if ("scale_v" %in% names(dots)) { dots$scale_v <- check_i_arguments(dots$scale_v, nn=nn, n_v=n_v, dots = TRUE) if (is.list(dots$scale_v)) { dots$rate_v <- lapply(dots$scale_v, function(x) 1/x) } else dots$rate_v <- 1/dots$scale_v } else dots$rate_v <- check_i_arguments(dots$rate_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("shape_v","rate_v")] }, frechet = { if (any(!(c("shape_v","scale_v") %in% names(dots)))) stop("shape_v and scale_v need to be passed for distribution = \"frechet\"") dots$shape_v <- check_i_arguments(dots$shape_v, nn=nn, n_v=n_v, dots = TRUE) dots$scale_v <- check_i_arguments(dots$scale_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("shape_v","scale_v")] }, lnorm = { if (any(!(c("meanlog_v","sdlog_v") %in% names(dots)))) stop("meanlog_v and sdlog_v need to be passed for distribution = \"lnorm\"") dots$meanlog_v <- check_i_arguments(dots$meanlog_v, nn=nn, n_v=n_v, dots = TRUE) dots$sdlog_v <- check_i_arguments(dots$sdlog_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("meanlog_v","sdlog_v")] } ) #browser() for (i in seq_len(length(dots))) { if (length(dots[[i]]) < n_v) dots[[i]] <- rep(dots[[i]],length.out=n_v) } out <- vector("numeric", nn) for (i in unique(response)) { sel <- response == i #if(!all(rt[sel] == sort(rt[sel]))) stop("rt needs to be sorted (per response)") out[sel] <- do.call(n1CDF, args = c(rt=list(rt[sel]), A = list(lapply(A, "[", i = sel)[c(i, seq_len(n_v)[-i])]), b = list(lapply(b, "[", i = sel)[c(i, seq_len(n_v)[-i])]), t0 = list(lapply(t0, "[", i = sel)[c(i, seq_len(n_v)[-i])]), lapply(dots, function(x) lapply(x, "[", i = sel)[c(i, seq_len(n_v)[-i])]), distribution=distribution, args.dist=list(args.dist), silent=TRUE, st0 = list(st0))) } return(out) } # rt, response, A, b, t0, ..., st0=0, distribution = c("norm", "gamma", "frechet", "lnorm"), args.dist = list(), silent = FALSE inv_cdf_lba <- function(x, response, A, b, t0, ..., st0, distribution, args.dist, value, abs = TRUE) { if (abs) abs(value - pLBA(rt=x, response=response, A=A, b = b, t0 = t0, ..., st0=st0, distribution=distribution, args.dist=args.dist, silent=TRUE)) else (value - pLBA(rt=x, response=response, A=A, b = b, t0 = t0, ..., st0=st0, distribution=distribution, 
args.dist=args.dist, silent=TRUE)) } #' @rdname LBA #' @export qLBA <- function(p, response, A, b, t0, ..., st0=0, distribution = c("norm", "gamma", "frechet", "lnorm"), args.dist = list(), silent = FALSE, interval = c(0, 10), scale_p = FALSE, scale_max = Inf) { dots <- list(...) if (is.null(names(dots))) stop("... arguments need to be named.") # for convenience accept data.frame as first argument. if (is.data.frame(p)) { response <- p$response p <- p$p } response <- as.numeric(response) nn <- length(p) n_v <- max(vapply(dots, length, 0)) # Number of responses if(!silent) message(paste("Results based on", n_v, "accumulators/drift rates.")) if (!is.numeric(response) || max(response) > n_v) stop("response needs to be a numeric vector of integers up to number of accumulators.") if (any(response < 1)) stop("the first response/accumulator must have value 1.") if (n_v < 2) stop("There need to be at least two accumulators/drift rates.") distribution <- match.arg(distribution) response <- rep(response, length.out = nn) A <- check_i_arguments(A, nn=nn, n_v=n_v) b <- check_i_arguments(b, nn=nn, n_v=n_v) t0 <- check_i_arguments(t0, nn=nn, n_v=n_v) st0 <- rep(st0, length.out = nn) out <- vector("numeric", nn) p <- unname(p) for (i in seq_len(nn)) { if (scale_p) max_p <- do.call( pLBA, args = c( rt = list(scale_max), response = list(response[i]), A = ret_arg(A, i), b = ret_arg(b, i), t0 = ret_arg(t0, i), sapply(dots, function(z, i) sapply(z, ret_arg2, which = i, simplify = FALSE), i = i, simplify = FALSE), args.dist = list(args.dist), distribution = distribution, st0 = list(st0[i]), silent = TRUE ) ) else max_p <- 1 tmp <- do.call( optimize, args = c( f = inv_cdf_lba, interval = list(interval), response = list(response[i]), A = ret_arg(A, i), b = ret_arg(b, i), t0 = ret_arg(t0, i), sapply(dots, function(z, i) sapply(z, ret_arg2, which = i, simplify = FALSE), i = i, simplify = FALSE), args.dist = list(args.dist), distribution = distribution, st0 = list(st0[i]), value = p[i] * max_p, tol = .Machine$double.eps ^ 0.5 ) ) if (tmp$objective > 0.0001) { tmp <- do.call( optimize, args = c( f = inv_cdf_lba, interval = list(c(min(interval), max(interval) / 2)), response = list(response[i]), A = ret_arg(A, i), b = ret_arg(b, i), t0 = ret_arg(t0, i), sapply(dots, function(z, i) sapply(z, ret_arg2, which = i, simplify = FALSE), i = i, simplify = FALSE), args.dist = list(args.dist), distribution = distribution, st0 = list(st0[i]), value = p[i] * max_p, tol = .Machine$double.eps ^ 0.5 ) ) } if (tmp$objective > 0.0001) { try({ uni_tmp <- do.call( uniroot, args = c( f = inv_cdf_lba, interval = list(c(min(interval), max(interval) / 2)), response = list(response[i]), A = ret_arg(A, i), b = ret_arg(b, i), t0 = ret_arg(t0, i), sapply(dots, function(z, i) sapply(z, ret_arg2, which = i, simplify = FALSE), i = i, simplify = FALSE), args.dist = list(args.dist), distribution = distribution, st0 = list(st0[i]), value = p[i] * max_p, tol = .Machine$double.eps ^ 0.5, abs = FALSE ) ) tmp$objective <- uni_tmp$f.root tmp$minimum <- uni_tmp$root }, silent = TRUE) } if (tmp$objective > 0.0001) { tmp[["minimum"]] <- NA warning( "Cannot obtain RT that is less than 0.0001 away from desired p = ", p[i], ".\nIncrease/decrease interval or obtain for different boundary.", call. = FALSE ) } out[i] <- tmp[["minimum"]] } return(out ) } #' @rdname LBA #' @export rLBA <- function(n,A,b,t0, ..., st0=0, distribution = c("norm", "gamma", "frechet", "lnorm"), args.dist = list(), silent = FALSE) { dots <- list(...) if (is.null(names(dots))) stop("... 
arguments need to be named.") if (any(names(dots) == "")) stop("all ... arguments need to be named.") n_v <- max(vapply(dots, length, 0)) # Number of responses if(!silent) message(paste("Results based on", n_v, "accumulators/drift rates.")) #if (n_v < 2) stop("There need to be at least two accumulators/drift rates.") nn <- n distribution <- match.arg(distribution) A <- check_i_arguments(A, nn=nn, n_v=n_v) b <- check_i_arguments(b, nn=nn, n_v=n_v) t0 <- check_i_arguments(t0, nn=nn, n_v=n_v) st0 <- rep(unname(st0), length.out = nn) switch(distribution, norm = { rng <- rlba_norm if (any(!(c("mean_v","sd_v") %in% names(dots)))) stop("mean_v and sd_v need to be passed for distribution = \"norm\"") dots$mean_v <- check_i_arguments(dots$mean_v, nn=nn, n_v=n_v, dots = TRUE) dots$sd_v <- check_i_arguments(dots$sd_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("mean_v","sd_v")] }, gamma = { rng <- rlba_gamma if (!("shape_v" %in% names(dots))) stop("shape_v needs to be passed for distribution = \"gamma\"") if ((!("rate_v" %in% names(dots))) & (!("scale_v" %in% names(dots)))) stop("rate_v or scale_v need to be passed for distribution = \"gamma\"") dots$shape_v <- check_i_arguments(dots$shape_v, nn=nn, n_v=n_v, dots = TRUE) if ("scale_v" %in% names(dots)) { dots$scale_v <- check_i_arguments(dots$scale_v, nn=nn, n_v=n_v, dots = TRUE) if (is.list(dots$scale_v)) { dots$rate_v <- lapply(dots$scale_v, function(x) 1/x) } else dots$rate_v <- 1/dots$scale_v } else dots$rate_v <- check_i_arguments(dots$rate_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("shape_v","rate_v")] }, frechet = { rng <- rlba_frechet if (any(!(c("shape_v","scale_v") %in% names(dots)))) stop("shape_v and scale_v need to be passed for distribution = \"frechet\"") dots$shape_v <- check_i_arguments(dots$shape_v, nn=nn, n_v=n_v, dots = TRUE) dots$scale_v <- check_i_arguments(dots$scale_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("shape_v","scale_v")] }, lnorm = { rng <- rlba_lnorm if (any(!(c("meanlog_v","sdlog_v") %in% names(dots)))) stop("meanlog_v and sdlog_v need to be passed for distribution = \"lnorm\"") dots$meanlog_v <- check_i_arguments(dots$meanlog_v, nn=nn, n_v=n_v, dots = TRUE) dots$sdlog_v <- check_i_arguments(dots$sdlog_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("meanlog_v","sdlog_v")] } ) for (i in seq_len(length(dots))) { if (length(dots[[i]]) < n_v) dots[[i]] <- rep(dots[[i]],length.out=n_v) } tmp_acc <- as.data.frame(dots, optional = TRUE) colnames(tmp_acc) <- sub("\\.c\\(.+", "", colnames(tmp_acc)) parameter_char <- apply(tmp_acc, 1, paste0, collapse = "\t") parameter_factor <- factor(parameter_char, levels = unique(parameter_char)) parameter_indices <- split(seq_len(nn), f = parameter_factor) out <- vector("list", length(parameter_indices)) for (i in seq_len(length(parameter_indices))) { ok_rows <- parameter_indices[[i]] tmp_dots <- lapply(dots, function(x) sapply(x, "[[", i = ok_rows[1])) out[[i]] <- do.call(rng, args = c(n=list(length(ok_rows)), A = list(sapply(A, "[", i = ok_rows)), b = list(sapply(b, "[", i = ok_rows)), t0 = list(sapply(t0, "[", i = ok_rows)), st0 = list(st0[ok_rows]), tmp_dots, args.dist)) } out <- do.call("rbind", out) as.data.frame(out) } rtdists/R/lba_race.R0000644000175000017500000004526213667512040014171 0ustar nileshnilesh#' LBA race functions: Likelihood for first accumulator to win. 
#' #' n1PDF and n1CDF take RTs, the distribution functions of the \link{LBA}, and #' corresponding parameter values, run them through the race equations, and #' return the likelihood for the first accumulator winning (hence n1) in a set #' of accumulators. #' #' @param rt a vector of RTs. #' @param A,b,t0 LBA parameters, see \code{\link{LBA}}. Can either be a single #' numeric vector (which will be recycled to reach \code{length(rt)} for #' trialwise parameters) \emph{or} a \code{list} of such vectors in which each #' list element corresponds to the parameters for this accumulator (i.e., the #' list needs to be of the same length as there are accumulators). Each list #' will also be recycled to reach \code{length(rt)} for trialwise parameters #' per accumulator. #' @param st0 parameter specifying the variability of \code{t0} (which varies #' uniformly from \code{t0} to \code{t0} + \code{st0}). Can be trialwise, and #' will be recycled to length of \code{rt}. #' @param ... two \emph{named} drift rate parameters depending on #' \code{distribution} (e.g., \code{mean_v} and \code{sd_v} for #' \code{distribution=="norm"}). The parameters can either be given as a #' numeric vector or a list. If a numeric vector is passed each element of the #' vector corresponds to one accumulator. If a list is passed each list #' element corresponds to one accumulator allowing again trialwise driftrates. #' The shorter parameter will be recycled as necessary (and also the elements #' of the list to match the length of \code{rt}). See examples. #' @param distribution character specifying the distribution of the drift rate. #' Possible values are \code{c("norm", "gamma", "frechet", "lnorm")}, default #' is \code{"norm"}. #' @param args.dist list of optional further arguments to the distribution #' functions (i.e., \code{posdrift} or \code{robust} for #' \code{distribution=="norm"}). #' @param silent logical. Should the number of accumulators used be suppressed? #' Default is \code{FALSE} which prints the number of accumulators. #' #' #' @details For a set of \eqn{N} independent accumulators \eqn{i = 1...N}, the #' race likelihood for a given accumulator \eqn{i} is given by #' \deqn{L(\mbox{unit }i \mbox{ wins}) = f_i(t) \times \prod_{j \neq i} [ S_j(t) #' ]}{L(unit i wins) = f_i(t) * prod_{j != i} [ S_j(t) ]} where \eqn{f_i(t)} is the #' PDF (\code{dlba_...}) and \eqn{S_j(t) = 1 - F_j(t)} is the survivor #' function, that is the complement of the CDF \eqn{F_j(t)} (\code{plba_...}) at #' time \eqn{t}. #' #' In other words, this is just the PDF/CDF for the winning accumulator at #' time \eqn{t} times the probability that no other accumulators have finished #' at time \eqn{t}. #' #' @seealso For more user-friendly functions that return the PDF or CDF for the #' corresponding (and not first) accumulator winning see \code{\link{LBA}}. #' #' @name LBA-race #' @importFrom stats integrate #' #' @example examples/examples.lba-race.R #' NULL ## note, this function does not check parameters; it is only called internally (i.e., parameters are passed correctly). n1PDFfixedt0 <- function(rt,A,b, t0, ..., pdf, cdf, args.dist = list()) { # Generates defective PDF for responses on node #1. dots <- list(...)
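  # The code below implements the race equation documented above: the defective
  # density for accumulator 1 winning at time t is its PDF times the product of
  # the survivor functions (1 - CDF) of all other accumulators.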
nn <- length(rt) #if (length(A) != nn) browser() #browser() n_v <- max(vapply(dots, length, 0)) # Number of responses if (n_v>2) { tmp=array(dim=c(length(rt),n_v-1)) for (i in 2:n_v) tmp[,i-1] <- do.call(cdf, args = c( rt=list(rt), A=if(is.list(A)) A[i] else list(A), b=if(is.list(b)) b[i] else list(b), t0 = if(is.list(t0)) t0[i] else list(t0), sapply(dots, "[[", i = i, simplify = FALSE), args.dist, nn=nn)) G <- apply(1-tmp,1,prod) } else { G <- 1-do.call(cdf, args = c( rt=list(rt), A=if(is.list(A)) A[2] else list(A), b=if(is.list(b)) b[2] else list(b), t0 = if(is.list(t0)) t0[2] else list(t0), sapply(dots, "[[", i = 2, simplify = FALSE), args.dist, nn=nn)) } G*do.call(pdf, args = c( rt=list(rt), A=if(is.list(A)) A[1] else list(A), b=if(is.list(b)) b[1] else list(b), t0 = if(is.list(t0)) t0[1] else list(t0), sapply(dots, "[[", i = 1, simplify = FALSE), args.dist, nn=nn)) } #sapply(dots, rep_dots, which = 1, nn = nn, simplify = FALSE) rep_dots <- function(arg, which, nn) { rep(arg[[which]], length.out=nn) } ## functions which checks if argument is numeric and check_n1_arguments <- function(arg, nn, n_v, dots = FALSE) { mc <- match.call() varname <- sub("dots$", "", deparse(mc[["arg"]]), fixed = TRUE) if (!is.list(arg)) { if ((!is.vector(arg, "numeric")) || (length(arg) < 1)) stop(paste(varname, "needs to be a numeric vector of length >= 1!")) if (dots) { arg <- as.list(arg) arg <- lapply(arg, rep, length.out=nn) } else arg <- rep(arg, length.out=nn) } else { if (!dots && (length(arg) != n_v)) stop(paste("if", varname, "is a list, its length needs to correspond to the number of accumulators.")) for (i in seq_along(arg)) { if ((!is.vector(arg[[i]], "numeric")) || (length(arg[[i]]) < 1)) stop(paste0(varname, "[[", i, "]] needs to be a numeric vector of length >= 1!")) arg[[i]] <- rep(arg[[i]], length.out=nn) } } return(unname(arg)) } #' @rdname LBA-race #' @export n1PDF <- function(rt, A, b, t0, ..., st0=0, distribution = c("norm", "gamma", "frechet", "lnorm"), args.dist = list(), silent = FALSE) { dots <- list(...) #browser() if (is.null(names(dots))) stop("... arguments need to be named.") if (any(names(dots) == "")) stop("all ... 
arguments need to be named.") n_v <- max(vapply(dots, length, 0)) # Number of responses if(!silent) message(paste("Results based on", n_v, "accumulators/drift rates.")) if (n_v < 2) stop("There need to be at least two accumulators/drift rates.") distribution <- match.arg(distribution) #check_single_arg(t0 = t0) nn <- length(rt) #browser() A <- check_n1_arguments(A, nn=nn, n_v=n_v) b <- check_n1_arguments(b, nn=nn, n_v=n_v) t0 <- check_n1_arguments(t0, nn=nn, n_v=n_v) st0 <- rep(unname(st0), length.out = nn) switch(distribution, norm = { pdf <- dlba_norm_core cdf <- plba_norm_core if (any(!(c("mean_v","sd_v") %in% names(dots)))) stop("mean_v and sd_v need to be passed for distribution = \"norm\"") dots$mean_v <- check_n1_arguments(dots$mean_v, nn=nn, n_v=n_v, dots = TRUE) dots$sd_v <- check_n1_arguments(dots$sd_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("mean_v","sd_v")] }, gamma = { pdf <- dlba_gamma_core cdf <- plba_gamma_core if (!("shape_v" %in% names(dots))) stop("shape_v needs to be passed for distribution = \"gamma\"") if ((!("rate_v" %in% names(dots))) & (!("scale_v" %in% names(dots)))) stop("rate_v or scale_v need to be passed for distribution = \"gamma\"") dots$shape_v <- check_n1_arguments(dots$shape_v, nn=nn, n_v=n_v, dots = TRUE) if ("scale_v" %in% names(dots)) { dots$scale_v <- check_n1_arguments(dots$scale_v, nn=nn, n_v=n_v, dots = TRUE) if (is.list(dots$scale_v)) { dots$rate_v <- lapply(dots$scale_v, function(x) 1/x) } else dots$rate_v <- 1/dots$scale_v } else dots$rate_v <- check_n1_arguments(dots$rate_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("shape_v","rate_v")] }, frechet = { pdf <- dlba_frechet_core cdf <- plba_frechet_core if (any(!(c("shape_v","scale_v") %in% names(dots)))) stop("shape_v and scale_v need to be passed for distribution = \"frechet\"") dots$shape_v <- check_n1_arguments(dots$shape_v, nn=nn, n_v=n_v, dots = TRUE) dots$scale_v <- check_n1_arguments(dots$scale_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("shape_v","scale_v")] }, lnorm = { pdf <- dlba_lnorm_core cdf <- plba_lnorm_core if (any(!(c("meanlog_v","sdlog_v") %in% names(dots)))) stop("meanlog_v and sdlog_v need to be passed for distribution = \"lnorm\"") dots$meanlog_v <- check_n1_arguments(dots$meanlog_v, nn=nn, n_v=n_v, dots = TRUE) dots$sdlog_v <- check_n1_arguments(dots$sdlog_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("meanlog_v","sdlog_v")] } ) #browser() for (i in seq_len(length(dots))) { if (length(dots[[i]]) < n_v) dots[[i]] <- rep(dots[[i]],length.out=n_v) } # if (length(st0)>1) { # warning("st0 set to st0[1]. Only one non-decision time variability permitted.") # st0 <- st0[1] # Only ONE non-decision time. # } #browser() do.call(n1PDF_core, args = c( rt = list(rt), A = list(A), b = list(b), t0 = list(t0), st0 = list(st0), dots, pdf = pdf, cdf = cdf, args.dist = list(args.dist) )) } n1PDF_core <- function(rt, A, b, t0, ..., st0, pdf, cdf, args.dist = list()) { dots <- list(...) #browser() if (all(st0==0)) return(do.call(n1PDFfixedt0, args = c( rt = list(rt), A = list(A), b = list(b), t0 = list(t0), dots, pdf = pdf, cdf = cdf, args.dist = list(args.dist) ))) else { tmpf <- function(rt, A, b, t0, st0, ..., pdf, cdf, args.dist = list()) { #browser() dots2 <- list(...) 
do.call(n1PDFfixedt0, args = c( rt = list(rt), A = list(A), b = list(b), t0 = list(t0), dots2, pdf = pdf, cdf = cdf, args.dist = list(args.dist) )) / st0 #rt=list(pmax(rt-t0, 0)) } outs <- vector("numeric", length = length(rt)) if (length(st0) == 1) st0 <- rep(st0, length.out = length(rt)) for (i in 1:length(rt)) { if (st0[i] != 0) { tmp <- do.call(integrate, args = c( f = tmpf, lower = unname(rt[i] - st0[i]), upper = unname(rt[i]), A = ret_arg(A, i), b = ret_arg(b, i), t0 = ret_arg(t0, i), sapply(dots, function(z, i) sapply(z, ret_arg2, which = i, simplify = FALSE), i = i, simplify = FALSE), pdf = pdf, cdf = cdf, args.dist = list(args.dist), stop.on.error = FALSE, st0 = list(st0[i]) )) if (tmp$message != "OK") warning(paste("n1PDF:", tmp$message)) outs[i] <- tmp$value } else outs[i] <- do.call(n1PDFfixedt0, args = c( rt = list(rt[i]), A = ret_arg(A, i), b = ret_arg(b, i), t0 = ret_arg(t0, i), sapply(dots, function(z, i) sapply(z, ret_arg2, which = i, simplify = FALSE), i = i, simplify = FALSE), pdf = pdf, cdf = cdf, args.dist = list(args.dist) )) } return(outs) } } # n1PDF_single <- function(rt, A, b, t0, ..., st0, pdf, cdf, args.dist = list()) { # dots <- list(...) # #browser() # if (st0==0) return(do.call(n1PDFfixedt0, args = c(rt=list(rt), A=list(A), b=list(b), t0 = list(t0), dots, pdf=pdf, cdf=cdf, args.dist = args.dist))) # else { # tmpf <- function(rt, A, b, t0, ..., pdf, cdf, args.dist = list()) { # #browser() # do.call(n1PDFfixedt0, args = c(rt=list(pmax(rt-t0, 0)), A=list(A), b=list(b), t0 = list(0), dots, pdf=pdf, cdf=cdf, args.dist = args.dist))/st0 # } # outs=numeric(length(rt)) # #browser() # for (i in 1:length(outs)) { # tmp <- do.call(integrate, args=c(f=tmpf, lower=rt[i]-t0[1]-st0, upper=rt[i]-t0[1], A=list(A), b=list(b), t0=list(0), dots, pdf=pdf, cdf=cdf, args.dist = args.dist, stop.on.error = FALSE)) # if (tmp$message != "OK") warning(paste("n1PDF:", tmp$message)) # outs[i] <- tmp$value # } # return(outs) # } # } ret_arg <- function(arg, which) { list(if(is.list(arg)) { if (which <= min(sapply(arg, length))) sapply(arg, "[[", i = which, simplify = FALSE) else arg } else { if (which <= length(arg)) arg[which] else arg }) } ret_arg2 <- function(arg, which) { if(is.list(arg)) { if (which <= min(sapply(arg, length))) sapply(arg, "[[", i = which, simplify = FALSE) else arg } else { if (which <= length(arg)) arg[which] else arg } } # rt = time, A=x0max, b=chi, v=drift, sv=sdI #' @rdname LBA-race #' @export n1CDF <- function(rt,A,b, t0, ..., st0=0, distribution = c("norm", "gamma", "frechet", "lnorm"), args.dist = list(), silent = FALSE) { #, browser=FALSE # Generates defective CDF for responses on node #1. dots <- list(...) if (is.null(names(dots))) stop("... 
arguments need to be named.") n_v <- max(vapply(dots, length, 0)) # Number of responses if(!silent) message(paste("Results based on", n_v, "accumulators/drift rates.")) if (n_v < 2) stop("There need to be at least two accumulators/drift rates.") distribution <- match.arg(distribution) #check_single_arg(t0 = t0) nn <- length(rt) #browser() A <- check_n1_arguments(A, nn=nn, n_v=n_v) b <- check_n1_arguments(b, nn=nn, n_v=n_v) t0 <- check_n1_arguments(t0, nn=nn, n_v=n_v) st0 <- rep(unname(st0), length.out = nn) switch(distribution, norm = { pdf <- dlba_norm_core cdf <- plba_norm_core if (any(!(c("mean_v","sd_v") %in% names(dots)))) stop("mean_v and sd_v need to be passed for distribution = \"norm\"") dots$mean_v <- check_n1_arguments(dots$mean_v, nn=nn, n_v=n_v, dots = TRUE) dots$sd_v <- check_n1_arguments(dots$sd_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("mean_v","sd_v")] }, gamma = { pdf <- dlba_gamma_core cdf <- plba_gamma_core if (!("shape_v" %in% names(dots))) stop("shape_v needs to be passed for distribution = \"gamma\"") if ((!("rate_v" %in% names(dots))) & (!("scale_v" %in% names(dots)))) stop("rate_v or scale_v need to be passed for distribution = \"gamma\"") dots$shape_v <- check_n1_arguments(dots$shape_v, nn=nn, n_v=n_v, dots = TRUE) if ("scale_v" %in% names(dots)) { dots$scale_v <- check_n1_arguments(dots$scale_v, nn=nn, n_v=n_v, dots = TRUE) if (is.list(dots$scale_v)) { dots$rate_v <- lapply(dots$scale_v, function(x) 1/x) } else dots$rate_v <- 1/dots$scale_v } else dots$rate_v <- check_n1_arguments(dots$rate_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("shape_v","rate_v")] }, frechet = { pdf <- dlba_frechet_core cdf <- plba_frechet_core if (any(!(c("shape_v","scale_v") %in% names(dots)))) stop("shape_v and scale_v need to be passed for distribution = \"frechet\"") dots$shape_v <- check_n1_arguments(dots$shape_v, nn=nn, n_v=n_v, dots = TRUE) dots$scale_v <- check_n1_arguments(dots$scale_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("shape_v","scale_v")] }, lnorm = { pdf <- dlba_lnorm_core cdf <- plba_lnorm_core if (any(!(c("meanlog_v","sdlog_v") %in% names(dots)))) stop("meanlog_v and sdlog_v need to be passed for distribution = \"lnorm\"") dots$meanlog_v <- check_n1_arguments(dots$meanlog_v, nn=nn, n_v=n_v, dots = TRUE) dots$sdlog_v <- check_n1_arguments(dots$sdlog_v, nn=nn, n_v=n_v, dots = TRUE) dots <- dots[c("meanlog_v","sdlog_v")] } ) for (i in seq_len(length(dots))) { if (length(dots[[i]]) < n_v) dots[[i]] <- rep(dots[[i]],length.out=n_v) } # if (length(st0)>1) { # warning("st0 set to st0[1]. Only one non-decision time variability permitted.") # st0 <- st0[1] # Only ONE non-decision time. # } if (any(st0<1e-6)) { if (any(sapply(st0[st0<1e-6], function(x) !isTRUE(all.equal(x, 0))))) warning("st0 set to 0 for values < 1e-6. Integral can fail for small st0.") st0[st0<1e-6] <- 0 } # outs <- numeric(length(rt)) #bounds <- c(0,rt) #browser() for (i in 1:length(rt)) { tmp_obj <- do.call(integrate, args = c( f = n1PDF_core, lower = 0, upper = rt[i], subdivisions = 1000, A = ret_arg(A, i), b = ret_arg(b, i), t0 = ret_arg(t0, i), st0 = list(st0[i]), sapply(dots, function(z, i) sapply(z, "[[", i = i, simplify = FALSE), i = i, simplify = FALSE), pdf = pdf, cdf = cdf, stop.on.error = FALSE, args.dist = list(args.dist) )) if (tmp_obj$message != "OK") { warning(tmp_obj$message) } outs[i] <- tmp_obj$value } outs } rtdists/R/rtdists-package.R0000644000175000017500000000176213667512040015523 0ustar nileshnilesh#' Response Time Distributions. 
#'
#' \tabular{ll}{
#' Package: \tab rtdists\cr
#' Type: \tab Package\cr
#' Version: \tab 0.8-3\cr
#' Date: \tab 2018-06-23\cr
#' Depends: \tab R (>= 3.0.0)\cr
#' License: \tab GPL (>=3)\cr
#' URL: \tab https://github.com/rtdists/rtdists/\cr
#' }
#'
#' Provides response time distributions (density/PDF, distribution function/CDF, quantile function, and random generation): (a) Ratcliff diffusion model (Ratcliff & McKoon, 2008, ) based on C code by Andreas and Jochen Voss and (b) linear ballistic accumulator (LBA; Brown & Heathcote, 2008, ) with different distributions underlying the drift rate.
#'
#' @aliases rtdists-package
#' @name rtdists-package
#' @docType package
#' @title The rtdists Package
#' @author Henrik Singmann, Scott Brown, Matthew Gretton, Andrew Heathcote, with contributions from Andreas Voss, Jochen Voss, Andrew Terry
#' @keywords package
NULL
rtdists/R/single-lba.r0000644000175000017500000007404713667512040014511 0ustar nileshnilesh#' Single accumulator of linear ballistic accumulator (LBA)
#'
#' Density, distribution function, and random generation for a single accumulator of the LBA model with the following parameters: \code{A} (upper value of starting point), \code{b} (response threshold), \code{t0} (non-decision time), and drift rate (\code{v}). All functions are available with different distributions underlying the drift rate: Normal (\code{norm}), Gamma (\code{gamma}), Frechet (\code{frechet}), and log normal (\code{lnorm}).
#'
#' @param rt a vector of RTs.
#' @param n desired number of observations (scalar integer).
#' @param A start point interval or evidence in accumulator before beginning of decision process. Start point varies from trial to trial in the interval [0, \code{A}] (uniform distribution). Average amount of evidence before evidence accumulation across trials is \code{A}/2.
#' @param b response threshold. (\code{b} - \code{A}/2) is a measure of "response caution".
#' @param t0 non-decision time or response time constant (in seconds). Lower bound for the duration of all non-decisional processes (encoding and response execution).
#' @param st0 variability of non-decision time, such that \code{t0} is uniformly distributed between \code{t0} and \code{t0} + \code{st0}. Only available in random number generation functions \code{rlba_}.
#'
#' @param mean_v,sd_v mean and standard deviation of normal distribution for drift rate (\code{norm}). See \code{\link{Normal}}
#' @param shape_v,rate_v,scale_v shape, rate, and scale of gamma (\code{gamma}) and scale and shape of Frechet (\code{frechet}) distributions for drift rate. See \code{\link{GammaDist}} or \code{\link[evd]{frechet}}. For Gamma, rate = 1/scale and scale = 1/rate.
#' @param meanlog_v,sdlog_v mean and standard deviation of lognormal distribution on the log scale for drift rate (\code{lnorm}). See \code{\link{Lognormal}}.
#'
#' @param posdrift logical. Should drift rates be forced to be positive? Default is \code{TRUE}. (Uses truncated normal for random generation).
#' @param robust logical. Should robust normal distributions be used for \code{norm} and \code{lnorm}? Can be helpful in rare cases but is approximately three times slower than the non-robust versions. Default is \code{FALSE}.
#'
#'
#' @details These functions are mainly for internal purposes. We do not recommend using them. Use the high-level functions described in \code{\link{LBA}} instead.
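#'
#' For example (illustrative parameter values only), the density of a single \code{norm}-LBA accumulator could be obtained via \code{dlba_norm(rt = 0.8, A = 0.5, b = 1, t0 = 0.2, mean_v = 1.2, sd_v = 0.3)}; see the package examples for complete usage.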
#'
#' @return All functions starting with a \code{d} return the density (PDF), all functions starting with \code{p} return the distribution function (CDF), and all functions starting with \code{r} return random response times and responses (in a \code{matrix}).
#'
#' @note Density (i.e., \code{dlba_}), distribution (i.e., \code{plba_}), and random generation (i.e., \code{rlba_}) functions are vectorized for all parameters (i.e., in case parameters are not of the same length as \code{rt}, parameters are recycled). Furthermore, the random generation functions also accept a matrix with \code{n} rows in which each column corresponds to an accumulator-specific value (see \code{\link{rLBA}} for a more user-friendly way).
#'
#' @references
#'
#' Brown, S. D., & Heathcote, A. (2008). The simplest complete model of choice response time: Linear ballistic accumulation. \emph{Cognitive Psychology}, 57(3), 153-178. doi:10.1016/j.cogpsych.2007.12.002
#'
#' Donkin, C., Averell, L., Brown, S., & Heathcote, A. (2009). Getting more from accuracy and response time data: Methods for fitting the linear ballistic accumulator. \emph{Behavior Research Methods}, 41(4), 1095-1110. doi:10.3758/BRM.41.4.1095
#'
#' Heathcote, A., & Love, J. (2012). Linear deterministic accumulator models of simple choice. \emph{Frontiers in Psychology}, 3, 292. doi:10.3389/fpsyg.2012.00292
#'
#' @importFrom evd rfrechet dfrechet pfrechet
#' @importFrom msm rtnorm
#' @importFrom gsl gamma_inc
#' @importFrom stats dgamma dlnorm dnorm pgamma plnorm pnorm rgamma rlnorm rnorm runif
#'
#'
#' @name single-LBA
#'
#' @example examples/examples.slba.R
#'
NULL

# protected normal density and cdf
pnormP <- function(x,mean=0,sd=1,lower.tail=TRUE) ifelse(abs(x)<7,pnorm(x, mean=mean, sd=sd,lower.tail=lower.tail),ifelse(x<0,0,1))
dnormP <- function(x,mean=0,sd=1) ifelse(abs(x)<7,dnorm(x,mean=mean,sd=sd),0)

make_r <- function(drifts, n,b,A,n_v,t0,st0=0) {
  drifts <- drifts[1:n,]
  drifts[drifts<0] <- 0
  if (is.null(dim(A))) starts <- matrix(runif(min=0,max=A,n=n*n_v),ncol=n_v,byrow=TRUE)
  else starts <- apply(A, c(1,2), function(x) runif(min=0, max = x, 1))
  if (is.null(dim(b))) ttf <- t((b-t(starts)))/drifts
  else ttf <- (b-starts)/drifts
  rt <- apply(ttf+t0+runif(min=0,max=st0,n=n),1,min)
  resp <- apply(ttf+t0,1,which.min)
  bad <- !is.finite(rt)
  if (any(bad)) {
    warning(paste(sum(bad),"infinite RTs removed and less than", n, "rts returned"))
    resp <- resp[!bad]
    rt <- rt[!bad]
  }
  cbind(rt=rt,response=resp)
}

rem_t0 <- function(rt, t0) pmax(rt - t0, 0)

check_single_arg <- function(...) {
  mc <- match.call()
  vars <- all.vars(mc)
  arguments <- list(...)
  for(i in seq_along(arguments)) {
    if (length(arguments[[i]]) != 1) stop(paste(vars[i], "needs to be of length 1!"))
    if (!is.numeric(arguments[[i]]) | !is.finite(arguments[[i]])) stop(paste(vars[i], "needs to be numeric and finite!"))
  }
}

check_vector <- function(...) {
  mc <- match.call()
  vars <- all.vars(mc)
  dots <- list(...)
  for (i in seq_along(dots)) {
    if ((vars[i] == "rt") && (any(dots[[i]] < 0))) stop("rt needs to contain only positive values.")
    if (!is.vector(dots[[i]], "numeric")) stop(paste(vars[[i]], "needs to be a numeric vector!"))
    if (length(dots[[i]]) < 1) stop(paste(vars[[i]], "needs to have a length >= 1."))
  }
}

error_message_b_smaller_A <- "b cannot be smaller than A!"
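## Quick illustration of the protected functions above (a sketch, not run):
## outside |x| < 7 they clamp to the limiting values instead of returning
## numerically degenerate results, e.g.
##   pnormP(c(-10, 0, 10))  ## 0.0 0.5 1.0
##   dnormP(c(-10, 0, 10))  ## 0.0000000 0.3989423 0.0000000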
####### Normal:

#' @rdname single-LBA
#' @export dlba_norm
dlba_norm <- function(rt,A,b, t0, mean_v, sd_v, posdrift=TRUE, robust = FALSE) {
  #check_single_arg(A=A, b=b, t0=t0, mean_v=mean_v, sd_v=sd_v)
  check_vector(rt, A, b, t0, mean_v, sd_v)
  # bring all arguments to length of rt
  nn <- length(rt)
  A <- rep(A, length.out = nn)
  b <- rep(b, length.out = nn)
  t0 <- rep(t0, length.out = nn)
  mean_v <- rep(mean_v, length.out = nn)
  sd_v <- rep(sd_v, length.out = nn)
  if (any(b < A)) stop(error_message_b_smaller_A)
  dlba_norm_core(rt = rt, A = A, b = b, t0 = t0, mean_v = mean_v, sd_v = sd_v, posdrift = posdrift, robust = robust, nn = nn)
}

## this function expects all arguments to have the same length (which is nn)
dlba_norm_core <- function(rt,A,b, t0, mean_v, sd_v, posdrift=TRUE, robust = FALSE, nn) {
  if (robust) { # robust == TRUE uses robust versions of the normal distributions
    pnorm1 <- pnormP
    dnorm1 <- dnormP
  } else {
    pnorm1 <- pnorm
    dnorm1 <- dnorm
  }
  rt <- rem_t0(rt, t0) # remove t0 from rt
  if (posdrift) denom <- pmax(pnorm1(mean_v/sd_v),1e-10)
  else denom <- rep(1, nn)
  if (any(A<1e-10, na.rm = TRUE)) { # for A<1e-10 save results in out_A
    A_small <- A<1e-10
    out_A <- pmax(0, ((b[A_small]/rt[A_small]^2)*dnorm1(b[A_small]/rt[A_small],mean_v[A_small],sd=sd_v[A_small]))/denom[A_small], na.rm = TRUE)
    # calculate other results into out_o
    zs <- rt[!A_small]*sd_v[!A_small]
    zu <- rt[!A_small]*mean_v[!A_small]
    chiminuszu <- b[!A_small]-zu
    chizu <- chiminuszu/zs
    chizumax <- (chiminuszu-A[!A_small])/zs
    out_o <- pmax(0,(mean_v[!A_small]*(pnorm1(chizu)-pnorm1(chizumax)) + sd_v[!A_small]*(dnorm1(chizumax)-dnorm1(chizu)))/(A[!A_small]*denom[!A_small]))
    # combine out_A and out_o
    out <- numeric(nn)
    out[!A_small] <- out_o
    out[A_small] <- out_A
    return(out)
  } else {
    zs <- rt*sd_v
    zu <- rt*mean_v
    chiminuszu <- b-zu
    chizu <- chiminuszu/zs
    chizumax <- (chiminuszu-A)/zs
    return(pmax(0,(mean_v*(pnorm1(chizu)-pnorm1(chizumax)) + sd_v*(dnorm1(chizumax)-dnorm1(chizu)))/(A*denom), na.rm=TRUE))
  }
}

#' @rdname single-LBA
#' @export plba_norm
plba_norm <- function(rt,A,b,t0,mean_v, sd_v,posdrift=TRUE, robust = FALSE) {
  check_vector(rt, A, b, t0, mean_v, sd_v)
  nn <- length(rt)
  A <- rep(A, length.out = nn)
  b <- rep(b, length.out = nn)
  t0 <- rep(t0, length.out = nn)
  mean_v <- rep(mean_v, length.out = nn)
  sd_v <- rep(sd_v, length.out = nn)
  if (any(b < A)) stop(error_message_b_smaller_A)
  plba_norm_core(rt = rt, A = A, b = b, t0 = t0, mean_v = mean_v, sd_v = sd_v, posdrift = posdrift, robust = robust, nn = nn)
}

plba_norm_core <- function(rt,A,b,t0,mean_v, sd_v,posdrift=TRUE, robust = FALSE, nn) {
  if (robust) { # robust == TRUE uses robust versions of the normal distributions
    pnorm1 <- pnormP
    dnorm1 <- dnormP
  } else {
    pnorm1 <- pnorm
    dnorm1 <- dnorm
  }
  rt <- rem_t0(rt, t0)
  if (posdrift) denom <- pmax(pnorm1(mean_v/sd_v),1e-10)
  else denom <- 1
  if (any(A<1e-10, na.rm = TRUE)) { # for A<1e-10 save results in out_A
    A_small <- A<1e-10
    out_A <- pmin(1, pmax(0, (pnorm1(b[A_small]/rt[A_small],mean=mean_v[A_small],sd=sd_v[A_small],lower.tail=FALSE))/denom[A_small], na.rm=TRUE))
    # calculate other results into out_o
    zs <- rt[!A_small]*sd_v[!A_small]
    zu <- rt[!A_small]*mean_v[!A_small]
    chiminuszu <- b[!A_small]-zu
    xx <- chiminuszu-A[!A_small]
    chizu <- chiminuszu/zs
    chizumax <- xx/zs
    tmp1 <- zs*(dnorm1(chizumax)-dnorm1(chizu))
    tmp2 <- xx*pnorm1(chizumax)-chiminuszu*pnorm1(chizu)
    out_o <- pmin(pmax(0,(1+(tmp1+tmp2)/A[!A_small])/denom[!A_small]), 1)
    # combine out_A and out_o
    out <- numeric(nn)
    out[!A_small] <- out_o
    out[A_small] <- out_A
    return(out)
  }
  else {
    zs <- rt*sd_v
    zu <- rt*mean_v
    chiminuszu <- b-zu
    xx <- chiminuszu-A
    chizu <- chiminuszu/zs
    chizumax <- xx/zs
    tmp1 <- zs*(dnorm1(chizumax)-dnorm1(chizu))
    tmp2 <- xx*pnorm1(chizumax)-chiminuszu*pnorm1(chizu)
    return(pmin(pmax(0,(1+(tmp1+tmp2)/A)/denom, na.rm=TRUE), 1))
  }
}

#' @rdname single-LBA
#' @export rlba_norm
rlba_norm <- function(n,A,b,t0,mean_v, sd_v, st0=0,posdrift=TRUE) {
  #check_single_arg(n, A, b, t0, st0)
  check_single_arg(n)
  if (any(b < A)) stop(error_message_b_smaller_A)
  n_v <- max(length(mean_v), length(sd_v))
  if (posdrift) drifts <- matrix(rtnorm(n=n*n_v, mean=mean_v, sd=sd_v, lower=0),ncol=n_v,byrow=TRUE)
  else drifts <- matrix(rnorm(n=n*n_v, mean=mean_v, sd=sd_v),ncol=n_v,byrow=TRUE)
  make_r(drifts=drifts, n=n, b=b,A=A, n_v=n_v, t0=t0, st0=st0)
}

####### Gamma:

#' @rdname single-LBA
#' @export dlba_gamma
dlba_gamma <- function(rt,A,b,t0,shape_v,rate_v, scale_v) {
  if (!missing(rate_v) && !missing(scale_v)) stop("specify 'rate_v' or 'scale_v', but not both")
  if (missing(rate_v)) rate_v <- 1/scale_v
  check_vector(rt, A, b=b, t0, shape_v, rate_v)
  nn <- length(rt)
  A <- rep(A, length.out = nn)
  b <- rep(b, length.out = nn)
  t0 <- rep(t0, length.out = nn)
  shape_v <- rep(shape_v, length.out = nn)
  rate_v <- rep(rate_v, length.out = nn)
  if (any(b < A)) stop(error_message_b_smaller_A)
  dlba_gamma_core(rt=rt,A=A,b=b,t0=t0, shape_v=shape_v, rate_v=rate_v, nn=nn)
}

dlba_gamma_core <- function(rt,A,b,t0,shape_v, rate_v, nn) {
  rt <- rem_t0(rt, t0)
  min <- (b-A)/rt
  max <- b/rt
  if (any(A<1e-10, na.rm = TRUE)) { # for A<1e-10 save results in out_A
    A_small <- A<1e-10
    out_A <- pmax(0, (b[A_small]/rt[A_small]^2)*(dgamma(b[A_small]/rt[A_small],shape=shape_v[A_small],rate=rate_v[A_small])), na.rm = TRUE)
    min <- (b[!A_small]-A[!A_small])/rt[!A_small]
    max <- b[!A_small]/rt[!A_small]
    Gmax <- pgamma(max, shape_v[!A_small], rate=rate_v[!A_small])
    Gmin <- pgamma(min, shape_v[!A_small], rate=rate_v[!A_small])
    Gmax2 <- pgamma(max, (shape_v[!A_small]+1), rate=rate_v[!A_small])
    Gmin2 <- pgamma(min, (shape_v[!A_small]+1), rate=rate_v[!A_small])
    zgamma <- ( ((Gmax2-Gmin2)*gamma(shape_v[!A_small]+1))/((Gmax-Gmin)*rate_v[!A_small]*gamma(shape_v[!A_small])) )
    # all arguments passed to diffG in this branch are already restricted to
    # !A_small, so the function body must not subset them again
    diffG <- function(rt,point,shape_v, rate_v) {
      (-point/(rt^2))*dgamma(point/rt,shape_v,rate = rate_v)
    } #NB: point refers to the constants b OR b-A.
    u <- (Gmax2-Gmin2)
    v <- (Gmax-Gmin)
    udash <- (diffG(rt[!A_small], b[!A_small], shape_v[!A_small]+1, rate_v[!A_small])- diffG(rt[!A_small], (b[!A_small]-A[!A_small]), shape_v[!A_small]+1, rate_v[!A_small]))
    vdash <- (diffG(rt[!A_small], b[!A_small], shape_v[!A_small], rate_v[!A_small])- diffG(rt[!A_small], (b[!A_small]-A[!A_small]), shape_v[!A_small], rate_v[!A_small]))
    const <- gamma(shape_v[!A_small]+1)/(rate_v[!A_small]*gamma(shape_v[!A_small]))
    diffzgamma <- ((udash*v - vdash*u)/(v^2))*const #quotient rule
    # rt, b, and A also need to be restricted to !A_small in the terms below
    term1 <- (Gmax - Gmin)*(zgamma + (rt[!A_small]*diffzgamma))
    term2 <- diffG(rt[!A_small],b[!A_small],shape_v[!A_small],rate_v[!A_small])*((zgamma*rt[!A_small])-b[!A_small])
    term3 <- diffG(rt[!A_small],(b[!A_small]-A[!A_small]),shape_v[!A_small],rate_v[!A_small])*(b[!A_small]-A[!A_small]-(zgamma*rt[!A_small]))
    out_o <- ((term1+term2+term3)/A[!A_small])
    out_o[!is.finite(out_o)] <- 0  # Set NaN or -Inf or Inf to pdf=0
    out <- numeric(nn)
    out[!A_small] <- out_o
    out[A_small] <- out_A
    return(pmax(0, out))
  } else {
    Gmax <- pgamma(max, shape_v, rate=rate_v)
    Gmin <- pgamma(min, shape_v, rate=rate_v)
    Gmax2 <- pgamma(max, (shape_v+1), rate=rate_v)
    Gmin2 <- pgamma(min, (shape_v+1), rate=rate_v)
    zgamma <- ( ((Gmax2-Gmin2)*gamma(shape_v+1))/((Gmax-Gmin)*rate_v*gamma(shape_v)) )
    diffG <- function(rt,point,shape_v, rate_v) {
      (-point/(rt^2))*dgamma(point/rt,shape_v,rate = rate_v)
    } #NB: point refers to the constants b OR b-A.
    u <- (Gmax2-Gmin2)
    v <- (Gmax-Gmin)
    udash <- (diffG(rt, b, shape_v+1, rate_v)- diffG(rt, (b-A), shape_v+1, rate_v))
    vdash <- (diffG(rt, b, shape_v, rate_v)- diffG(rt, (b-A), shape_v, rate_v))
    const <- gamma(shape_v+1)/(rate_v*gamma(shape_v))
    diffzgamma <- ((udash*v - vdash*u)/(v^2))*const #quotient rule
    term1 <- (Gmax - Gmin)*(zgamma + (rt*diffzgamma))
    term2 <- diffG(rt,b,shape_v,rate_v)*((zgamma*rt)-b)
    term3 <- diffG(rt,(b-A),shape_v,rate_v)*(b-A-(zgamma*rt))
    out.value <- ((term1+term2+term3)/A)
    out.value[!is.finite(out.value)] <- 0  # Set NaN or -Inf or Inf to pdf=0
    return(pmax(0, out.value))
  }
}

#' @rdname single-LBA
#' @export plba_gamma
plba_gamma <- function(rt,A,b,t0,shape_v, rate_v, scale_v) {
  if (!missing(rate_v) && !missing(scale_v)) stop("specify 'rate_v' or 'scale_v', but not both")
  if (missing(rate_v)) rate_v <- 1/scale_v
  check_vector(rt, A, b=b, t0, shape_v, rate_v)
  nn <- length(rt)
  A <- rep(A, length.out = nn)
  b <- rep(b, length.out = nn)
  t0 <- rep(t0, length.out = nn)
  shape_v <- rep(shape_v, length.out = nn)
  rate_v <- rep(rate_v, length.out = nn)
  if (any(b < A)) stop(error_message_b_smaller_A)
  plba_gamma_core(rt=rt,A=A,b=b,t0=t0,shape_v=shape_v, rate_v=rate_v, nn=nn)
}

plba_gamma_core <- function(rt,A,b,t0,shape_v, rate_v, nn) {
  rt <- rem_t0(rt, t0)
  if (any(A<1e-10, na.rm = TRUE)) { # for A<1e-10 save results in out_A
    A_small <- A<1e-10
    out_A <- pmin(1, pmax(0, (pgamma(b[A_small]/rt[A_small],shape=shape_v[A_small],rate=rate_v[A_small],lower.tail=FALSE)), na.rm=TRUE))
    min <- (b[!A_small]-A[!A_small])/rt[!A_small]
    max <- b[!A_small]/rt[!A_small]
    Gmax <- pgamma(max, shape_v[!A_small], rate=rate_v[!A_small])
    Gmin <- pgamma(min, shape_v[!A_small], rate=rate_v[!A_small])
    Gmax2 <- pgamma(max, (shape_v[!A_small]+1), rate=rate_v[!A_small])
    Gmin2 <- pgamma(min, (shape_v[!A_small]+1), rate=rate_v[!A_small])
    zgamma <- ((Gmax2-Gmin2)*gamma(shape_v[!A_small]+1))/((Gmax-Gmin)*rate_v[!A_small]*gamma(shape_v[!A_small]))
    # rt, b, and A need to be restricted to !A_small here as well
    term1 <- ((rt[!A_small]*zgamma) - b[!A_small])/A[!A_small]
    term2 <- (b[!A_small]-A[!A_small]-(rt[!A_small]*zgamma))/A[!A_small]
    pmax <- pgamma(max, shape_v[!A_small], rate = rate_v[!A_small])
    pmin <- pgamma(min, shape_v[!A_small], rate = rate_v[!A_small])
    out_o <- (1 + pmax*term1 + pmin*term2)
    out_o[rt[!A_small]==Inf] <- 1  # term1=Inf and term2=-Inf cancel in this case
out_o[!is.finite(out_o)] <- 0 # Set NaN or -Inf to CDF=0 out<- numeric(nn) out[!A_small] <- out_o out[A_small] <- out_A return(pmin(pmax(0, out), 1)) } else { min <- (b-A)/rt max <- b/rt Gmax <- pgamma(max, shape_v, rate=rate_v) Gmin <- pgamma(min, shape_v, rate=rate_v) Gmax2 <- pgamma(max, (shape_v+1), rate=rate_v) Gmin2 <- pgamma(min, (shape_v+1), rate=rate_v) zgamma <- ((Gmax2-Gmin2)*gamma(shape_v+1))/((Gmax-Gmin)*rate_v*gamma(shape_v)) term1 <- ((rt*zgamma) - b)/A term2 <- (b-A-(rt*zgamma))/A pmax <- pgamma(max, shape_v, rate = rate_v) pmin <- pgamma(min, shape_v, rate = rate_v) out.value <- (1 + pmax*term1 + pmin*term2) out.value[rt==Inf] <- 1 # term1=Inf and term2=-Inf cancel in this case out.value[!is.finite(out.value)] <- 0 # Set NaN or -Inf to CDF=0 return(pmin(pmax(0, out.value), 1)) } } #' @rdname single-LBA #' @export rlba_gamma rlba_gamma <- function(n,A,b,t0,shape_v, rate_v, scale_v, st0=0) { check_single_arg(n) if (any(b < A)) stop(error_message_b_smaller_A) if (!missing(rate_v) && !missing(scale_v)) stop("specify 'rate_v' or 'scale_v', but not both") if (missing(rate_v)) rate_v <- 1/scale_v n_v <- max(length(shape_v), length(rate_v)) drifts <- matrix(rgamma(n=n*n_v,shape = shape_v,rate = rate_v),ncol=n_v,byrow=TRUE) make_r(drifts=drifts, n=n, b=b,A=A, n_v = n_v, t0=t0, st0=st0) } ####### Frechet: #' @rdname single-LBA #' @export dlba_frechet dlba_frechet <- function(rt,A,b,t0,shape_v, scale_v) { check_vector(rt, A, b, t0, shape_v, scale_v) nn <- length(rt) A <- rep(A, length.out = nn) b <- rep(b, length.out = nn) t0 <- rep(t0, length.out = nn) shape_v <- rep(shape_v, length.out = nn) scale_v <- rep(scale_v, length.out = nn) if (any(b < A)) stop(error_message_b_smaller_A) dlba_frechet_core(rt=rt,A=A,b=b,t0=t0,shape_v=shape_v, scale_v=scale_v, nn=nn) } dlba_frechet_core <- function(rt,A,b,t0,shape_v, scale_v, nn) { rt <- rem_t0(rt, t0) ps <- cbind(b, b-A, scale_v,shape_v) ps_below_zero <- apply(ps, 1, function(x) any(x <= 0)) # rt <- pmax(rt,0) #not needed, see rem_t0 t_old <- rt out <- numeric(nn) if (sum(!ps_below_zero) > 0) { rt <- rt[!ps_below_zero] A <- A[!ps_below_zero] b <- b[!ps_below_zero] t0 <- t0[!ps_below_zero] shape_v <- shape_v[!ps_below_zero] scale_v <- scale_v[!ps_below_zero] min <- (b-A)/rt max <- b/rt Gmax <- pfrechet(max, loc=0, scale=scale_v, shape=shape_v) Gmin <- pfrechet(min, loc=0, scale=scale_v, shape=shape_v) D <- Gmax - Gmin gam <- gamma_inc(1-(1/shape_v), (1/scale_v*max)^(-shape_v))-gamma_inc(1-(1/shape_v), (1/scale_v*min)^(-shape_v)) zfrechet <- gam/(1/scale_v*D) diffG1 <- ((-b/(rt^2))*dfrechet(b/rt, loc=0, scale=scale_v, shape=shape_v)) diffG2 <- ((-(b-A)/(rt^2))*dfrechet((b-A)/rt, loc=0, scale=scale_v, shape=shape_v)) diffD <- diffG1 - diffG2 diffgam <- (-shape_v*(((1/scale_v*b)^(-shape_v+1))/(rt^(-shape_v+2)))*exp(-(1/scale_v*b/rt)^(-shape_v))) - (-shape_v*(((1/scale_v*(b-A))^(-shape_v+1))/(rt^(-shape_v+2)))*exp(-(1/scale_v*(b-A)/rt)^(-shape_v))) diffzfrechet <- ((1/scale_v)^(-1))*(((-D^(-2))*diffD)*gam + (diffgam*(D^(-1)))) term1 <- (Gmax - Gmin)*(zfrechet + (rt*diffzfrechet)) term2 <- diffG1*((zfrechet*rt)-b) term3 <- diffG2*(b-A-(zfrechet*rt)) out.value <- ((term1+term2+term3)/A) out.value[!is.finite(out.value)] <- 0 # Set NaN or -Inf or Inf to pdf=0 out[!ps_below_zero] <- out.value } return(pmax(0, out)) } #' @rdname single-LBA #' @export plba_frechet plba_frechet <- function(rt,A,b,t0,shape_v, scale_v) { check_vector(rt, A, b, t0, shape_v, scale_v) nn <- length(rt) A <- rep(A, length.out = nn) b <- rep(b, length.out = nn) t0 <- rep(t0, 
length.out = nn) shape_v <- rep(shape_v, length.out = nn) scale_v <- rep(scale_v, length.out = nn) if (any(b < A)) stop(error_message_b_smaller_A) plba_frechet_core(rt=rt,A=A,b=b,t0=t0,shape_v=shape_v, scale_v=scale_v, nn=nn) } plba_frechet_core <- function(rt,A,b,t0,shape_v, scale_v, nn) { rt <- rem_t0(rt, t0) ps <- cbind(b, b-A, scale_v,shape_v) ps_below_zero <- apply(ps, 1, function(x) any(x <= 0)) # rt <- pmax(rt,0) #not needed, see rem_t0 t_old <- rt out <- numeric(nn) if (sum(!ps_below_zero) > 0) { rt <- rt[!ps_below_zero] A <- A[!ps_below_zero] b <- b[!ps_below_zero] t0 <- t0[!ps_below_zero] shape_v <- shape_v[!ps_below_zero] scale_v <- scale_v[!ps_below_zero] # rt <- pmax(rt,0) #not needed, see rem_t0 min <- (b-A)/rt max <- b/rt pmax <- pfrechet(max, loc=0, scale=scale_v, shape=shape_v) pmin <- pfrechet(min, loc=0, scale=scale_v, shape=shape_v) zfrechet <- (gamma_inc(1-(1/shape_v),(1/scale_v*max)^(-shape_v))-gamma_inc(1-(1/shape_v),(1/scale_v*min)^(-shape_v)))/(1/scale_v*(pmax-pmin)) term1 <- ((rt*zfrechet) - b)/A term2 <- (b-A-(rt*zfrechet))/A out.value <- (1 + pmax*term1 + pmin*term2) out.value[rt==Inf] <- 1 # term1=Inf and term2=-Inf cancel in this case out.value[!is.finite(out.value)] <- 0 # Set NaN or -Inf to CDF=0 out[!ps_below_zero] <- out.value } return(pmin(pmax(0, out), 1)) } #' @rdname single-LBA #' @export rlba_frechet rlba_frechet <- function(n,A,b,t0,shape_v, scale_v,st0=0){ check_single_arg(n) if (any(b < A)) stop(error_message_b_smaller_A) n_v <- max(length(shape_v), length(scale_v)) drifts <- matrix(rfrechet(n=n*n_v, loc=0, scale=scale_v, shape=shape_v),ncol=n_v,byrow=TRUE) make_r(drifts=drifts, n=n, b=b,A=A, n_v=n_v, t0=t0, st0=st0) } ####### Log-Normal: #' @rdname single-LBA #' @export dlba_lnorm dlba_lnorm <- function(rt,A,b,t0,meanlog_v, sdlog_v, robust = FALSE) { check_vector(rt, A, b, t0, meanlog_v, sdlog_v) nn <- length(rt) A <- rep(A, length.out = nn) b <- rep(b, length.out = nn) t0 <- rep(t0, length.out = nn) meanlog_v <- rep(meanlog_v, length.out = nn) sdlog_v <- rep(sdlog_v, length.out = nn) if (any(b < A)) stop(error_message_b_smaller_A) dlba_lnorm_core(rt=rt,A=A,b=b,t0=t0,meanlog_v=meanlog_v, sdlog_v=sdlog_v, robust = robust, nn=nn) } dlba_lnorm_core <- function(rt,A,b,t0,meanlog_v, sdlog_v, robust=FALSE, nn) { if (robust) { # robust == TRUE uses robust versions of the normal distributions pnorm1 <- pnormP dnorm1 <- dnormP } else { pnorm1 <- pnorm dnorm1 <- dnorm } rt <- rem_t0(rt, t0) if (any(A<1e-10, na.rm = TRUE)) { # for A<1e-10 save results in out_A A_small<- A<1e-10 out_A <- pmax(0, (b[A_small]/rt[A_small]^2)*(dlnorm(b[A_small]/rt[A_small],meanlog_v[A_small],sdlog=sdlog_v[A_small])), na.rm = TRUE) # calculate other results into out_o #Should there also be a check that A_small has at least one FALSE to bother running this code? -Angus; Probably not necessary, Henrik. 
    min <- (b[!A_small]-A[!A_small])/rt[!A_small]
    max <- b[!A_small]/rt[!A_small]
    zlognorm <- (exp(meanlog_v[!A_small]+(sdlog_v[!A_small]^2)/2)*(pnorm1((log(max)-meanlog_v[!A_small]-(sdlog_v[!A_small]^2))/sdlog_v[!A_small])-pnorm1((log(min)-meanlog_v[!A_small]-(sdlog_v[!A_small]^2))/sdlog_v[!A_small]))) / (pnorm1((log(max)-meanlog_v[!A_small])/sdlog_v[!A_small])-pnorm1((log(min)-meanlog_v[!A_small])/sdlog_v[!A_small]))
    Gmax <- plnorm(max,meanlog=meanlog_v[!A_small],sdlog=sdlog_v[!A_small])
    Gmin <- plnorm(min,meanlog=meanlog_v[!A_small],sdlog=sdlog_v[!A_small])
    u <- (pnorm1((log(max)-meanlog_v[!A_small]-(sdlog_v[!A_small])^2)/sdlog_v[!A_small])-pnorm1((log(min)-meanlog_v[!A_small]-(sdlog_v[!A_small])^2)/sdlog_v[!A_small]))
    v <- (pnorm1((log(max)-meanlog_v[!A_small])/sdlog_v[!A_small])-pnorm1((log(min)-meanlog_v[!A_small])/sdlog_v[!A_small]))
    udash <- (((-1/(sdlog_v[!A_small]*rt[!A_small]))*dnorm1((log(b[!A_small]/rt[!A_small])-meanlog_v[!A_small]-(sdlog_v[!A_small])^2)/sdlog_v[!A_small])) - ((-1/(sdlog_v[!A_small]*rt[!A_small]))*dnorm1((log((b[!A_small]-A[!A_small])/rt[!A_small])-meanlog_v[!A_small]-(sdlog_v[!A_small])^2)/sdlog_v[!A_small])))
    vdash <- (((-1/(sdlog_v[!A_small]*rt[!A_small]))*dnorm1((log(b[!A_small]/rt[!A_small])-meanlog_v[!A_small])/sdlog_v[!A_small])) - ((-1/(sdlog_v[!A_small]*rt[!A_small]))*dnorm1((log((b[!A_small]-A[!A_small])/rt[!A_small])-meanlog_v[!A_small])/sdlog_v[!A_small])))
    const <- exp(meanlog_v[!A_small]+((sdlog_v[!A_small])^2)/2)
    diffzlognorm <- ((udash*v - vdash*u)/(v^2))*const #quotient rule
    term1 <- (Gmax - Gmin)*(zlognorm + (rt[!A_small]*diffzlognorm))
    term2 <- ((-b[!A_small]/(rt[!A_small]^2))*dlnorm(b[!A_small]/rt[!A_small],meanlog=meanlog_v[!A_small],sdlog=sdlog_v[!A_small]))*((zlognorm*rt[!A_small])-b[!A_small])
    term3 <- (b[!A_small]-A[!A_small]-(zlognorm*rt[!A_small]))*((-(b[!A_small]-A[!A_small])/(rt[!A_small]^2))*dlnorm((b[!A_small]-A[!A_small])/rt[!A_small],meanlog=meanlog_v[!A_small],sdlog=sdlog_v[!A_small]))
    out_o <- ((term1+term2+term3)/A[!A_small])
    out_o[!is.finite(out_o)] <- 0  # Set NaN or -Inf or Inf to pdf=0
    out <- numeric(nn)
    out[!A_small] <- out_o
    out[A_small] <- out_A
    return(pmax(0, out))
  } else {
    min <- (b-A)/rt
    max <- b/rt
    zlognorm <- (exp(meanlog_v+(sdlog_v^2)/2)*(pnorm1((log(max)-meanlog_v-(sdlog_v^2))/sdlog_v)-pnorm1((log(min)-meanlog_v-(sdlog_v^2))/sdlog_v))) / (pnorm1((log(max)-meanlog_v)/sdlog_v)-pnorm1((log(min)-meanlog_v)/sdlog_v))
    Gmax <- plnorm(max,meanlog=meanlog_v,sdlog=sdlog_v)
    Gmin <- plnorm(min,meanlog=meanlog_v,sdlog=sdlog_v)
    u <- (pnorm1((log(max)-meanlog_v-(sdlog_v)^2)/sdlog_v)-pnorm1((log(min)-meanlog_v-(sdlog_v)^2)/sdlog_v))
    v <- (pnorm1((log(max)-meanlog_v)/sdlog_v)-pnorm1((log(min)-meanlog_v)/sdlog_v))
    udash <- (((-1/(sdlog_v*rt))*dnorm1((log(b/rt)-meanlog_v-(sdlog_v)^2)/sdlog_v)) - ((-1/(sdlog_v*rt))*dnorm1((log((b-A)/rt)-meanlog_v-(sdlog_v)^2)/sdlog_v)))
    vdash <- (((-1/(sdlog_v*rt))*dnorm1((log(b/rt)-meanlog_v)/sdlog_v)) - ((-1/(sdlog_v*rt))*dnorm1((log((b-A)/rt)-meanlog_v)/sdlog_v)))
    const <- exp(meanlog_v+((sdlog_v)^2)/2)
    diffzlognorm <- ((udash*v - vdash*u)/(v^2))*const #quotient rule
    term1 <- (Gmax - Gmin)*(zlognorm + (rt*diffzlognorm))
    term2 <- ((-b/(rt^2))*dlnorm(b/rt,meanlog=meanlog_v,sdlog=sdlog_v))*((zlognorm*rt)-b)
    term3 <- (b-A-(zlognorm*rt))*((-(b-A)/(rt^2))*dlnorm((b-A)/rt,meanlog=meanlog_v,sdlog=sdlog_v))
    out.value <- ((term1+term2+term3)/A)
    out.value[!is.finite(out.value)] <- 0  # Set NaN or -Inf or Inf to pdf=0
    return(pmax(0, out.value))
  }
}

#' @rdname single-LBA
#' @export plba_lnorm
plba_lnorm <- function(rt,A,b,t0,meanlog_v, sdlog_v, robust = FALSE) {
  check_vector(rt, A, b, t0, meanlog_v, sdlog_v)
  nn <- length(rt)
  A <- rep(A, length.out = nn)
  b <- rep(b, length.out = nn)
  t0 <- rep(t0, length.out = nn)
  meanlog_v <- rep(meanlog_v, length.out = nn)
  sdlog_v <- rep(sdlog_v, length.out = nn)
  if (any(b < A)) stop(error_message_b_smaller_A)
  plba_lnorm_core(rt=rt,A=A,b=b,t0=t0,meanlog_v=meanlog_v, sdlog_v=sdlog_v, robust=robust, nn=nn)
}

plba_lnorm_core <- function(rt,A,b,t0,meanlog_v, sdlog_v, robust = FALSE, nn) {
  if (robust) { # robust == TRUE uses robust versions of the normal distributions
    pnorm1 <- pnormP
  } else {
    pnorm1 <- pnorm
  }
  rt <- rem_t0(rt, t0)  # needs to happen before both branches below
  if (any(A<1e-10, na.rm = TRUE)) { # for A<1e-10 save results in out_A
    A_small <- A<1e-10
    #Pretty sure it is log(b[A_small]/rt[A_small]), not b[A_small]/log(rt[A_small])- Angus
    #Should this be lower.tail=TRUE?
    out_A <- pmin(1, pmax(0, (plnorm(b[A_small]/rt[A_small],meanlog=meanlog_v[A_small],sdlog=sdlog_v[A_small],lower.tail=FALSE)), na.rm=TRUE))
    min <- (b[!A_small]-A[!A_small])/rt[!A_small]
    max <- b[!A_small]/rt[!A_small]
    zlognorm <- (exp(meanlog_v[!A_small]+(sdlog_v[!A_small]^2)/2)*(pnorm1((log(max)-meanlog_v[!A_small]-(sdlog_v[!A_small]^2))/sdlog_v[!A_small])-pnorm1((log(min)-meanlog_v[!A_small]-(sdlog_v[!A_small]^2))/sdlog_v[!A_small]))) / (pnorm1((log(max)-meanlog_v[!A_small])/sdlog_v[!A_small])-pnorm1((log(min)-meanlog_v[!A_small])/sdlog_v[!A_small]))
    term1 <- ((rt[!A_small]*zlognorm) - b[!A_small])/A[!A_small]
    term2 <- (b[!A_small]-A[!A_small]-(rt[!A_small]*zlognorm))/A[!A_small]
    pmax <- plnorm(max, meanlog=meanlog_v[!A_small], sdlog=sdlog_v[!A_small])
    pmin <- plnorm(min, meanlog=meanlog_v[!A_small], sdlog=sdlog_v[!A_small])
    out_o <- (1 + pmax*term1 + pmin*term2)
    #not sure about this next line-Angus
    out_o[rt[!A_small]==Inf] <- 1  # term1=Inf and term2=-Inf cancel in this case
    out_o[!is.finite(out_o)] <- 0  # Set NaN or -Inf to CDF=0
    out <- numeric(nn)
    out[!A_small] <- out_o
    out[A_small] <- out_A
    return(pmin(pmax(0, out), 1))
  } else {
    min <- (b-A)/rt
    max <- b/rt
    zlognorm <- (exp(meanlog_v+(sdlog_v^2)/2)*(pnorm1((log(max)-meanlog_v-(sdlog_v^2))/sdlog_v)-pnorm1((log(min)-meanlog_v-(sdlog_v^2))/sdlog_v))) / (pnorm1((log(max)-meanlog_v)/sdlog_v)-pnorm1((log(min)-meanlog_v)/sdlog_v))
    term1 <- ((rt*zlognorm) - b)/A
    term2 <- (b-A-(rt*zlognorm))/A
    pmax <- plnorm(max, meanlog=meanlog_v, sdlog=sdlog_v)
    pmin <- plnorm(min, meanlog=meanlog_v, sdlog=sdlog_v)
    out.value <- (1 + pmax*term1 + pmin*term2)
    out.value[rt==Inf] <- 1  # term1=Inf and term2=-Inf cancel in this case
    out.value[!is.finite(out.value)] <- 0  # Set NaN or -Inf to CDF=0
    return(pmin(pmax(0, out.value), 1))
  }
}

#' @rdname single-LBA
#' @export rlba_lnorm
rlba_lnorm <- function(n,A,b,t0,meanlog_v, sdlog_v, st0=0){
  check_single_arg(n)
  if (any(b < A)) stop(error_message_b_smaller_A)
  n_v <- max(length(meanlog_v), length(sdlog_v))
  drifts=matrix(rlnorm(n=n*n_v,meanlog = meanlog_v,sdlog=sdlog_v),ncol=n_v,byrow=TRUE)
  make_r(drifts=drifts, n=n, b=b, A=A, n_v=n_v, t0=t0, st0=st0)
}
rtdists/R/RcppExports.R0000644000175000017500000000150314164636116014724 0ustar nileshnilesh# Generated by using Rcpp::compileAttributes() -> do not edit by hand
# Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393

d_fastdm <- function(rts, params, precision = 3, boundary = 2L, stop_on_error = TRUE) {
    .Call(`_rtdists_d_fastdm`, rts, params, precision, boundary, stop_on_error)
}

p_fastdm <- function(rts, params, precision = 3, boundary = 2L,
 stop_on_error = TRUE) {
    .Call(`_rtdists_p_fastdm`, rts, params, precision, boundary, stop_on_error)
}

p_precise_fastdm <- function(rts, params, precision = 3, boundary = 2L, stop_on_error = TRUE) {
    .Call(`_rtdists_p_precise_fastdm`, rts, params, precision, boundary, stop_on_error)
}

r_fastdm <- function(num_values, params, precision = 3, stop_on_error = TRUE) {
    .Call(`_rtdists_r_fastdm`, num_values, params, precision, stop_on_error)
}
rtdists/NEWS0000644000175000017500000001744214155621322012610 0ustar nileshnileshChanges to Version 0.11-x (released March 2020)
 o Faster calculation of diffusion density (ddiffusion) and CDF (pdiffusion) in case exactly one set of parameters is passed. In this case, some checks are skipped now leading to a 40% to 50% speed increase.
 o Diffusion quantile function qdiffusion should be faster now when it is called with many probabilities for the same response.
 o Diffusion quantile function qdiffusion has a new argument max_diff which allows one to control the minimally acceptable difference between desired and obtained probabilities.
 o Added experimental method = "qdiffusion" to rdiffusion() which obtains random deviates via the quantile function and runif(). This method is for the time being a lot slower than the native one (i.e., method = "fastdm").
 o Tests should work even if options("stringsAsFactors" = FALSE) (in preparation for R 4.0).
 o 0.11-3: removed bug that diffusion functions failed if there were NAs in the parameters.
 o 0.11-3: uses RCPP STRICT_R_HEADERS: https://github.com/rtdists/rtdists/pull/14

Changes to Version 0.10-x (released October 2019)
 o Removed another argument-passing bug that occurred when using args.dist and more than two drift rates. Reported by Glen Livingston Jr.
 o Removed issue in vignette due to breaking change in a tidyverse package.

Changes to Version 0.9-x (released August 2018)
 o Removed bug in dLBA and pLBA that prevented correct usage of trial-wise parameters. This bug always appeared when data with more than one response was present together with trial-wise parameters. In the following example the first call should be identical to the second and third call:
   x1 <- dLBA(rt=c(1,1), response=c(1,2), A=1,b=list(c(1,3),c(2,4)), t0=0.1, mean_v=c(3,3), sd_v=c(1,1),distribution="norm")
   x2a <- dLBA(rt=c(1), response=c(1), A=1,b=list(c(1),c(2)), t0=0.1,mean_v=c(3,3),sd_v=c(1,1),distribution="norm")
   x2b <- dLBA(rt=c(1), response=c(2), A=1,b=list(c(3),c(4)), t0=0.1,mean_v=c(3,3),sd_v=c(1,1),distribution="norm")
   all(x1 == c(x2a, x2b)) ## should be TRUE

Changes to Version 0.8-x (released December 2017 & updated June 2018)
 o Removed bug preventing args.list (e.g., posdrift) from being passed correctly in dLBA, pLBA, qLBA, and rLBA. Thanks to Bruno Nicenboim for reporting this. See: https://github.com/rtdists/rtdists/issues/7
 o Removed an unused "devtools::" call in tests that caused a false positive CRAN warning (June 2018).
 o Deactivated a few more tests in CRAN for faster checking (June 2018).

Changes to Version 0.7-x (released May 2017)
 o Performance of diffusion functions and rLBA increased, especially for calls with parameters that differ trialwise. As a consequence single rlba_... functions now return a matrix and no data.frame.
 o All C functions are now accessed via Rcpp.
 o pdiffusion uses the C++ CDF (no more numerical integration in R).
 o sv can produce slow errors, and sz fast errors (this was the wrong way around in the documentation). Thanks to Gabriel Tillman for noticing that.
 o removed a bug in the pdiffusion C code letting rtdists hang indefinitely (see https://github.com/rtdists/rtdists/pull/3). Thanks to Tomas Kalibera for the fix.
 o removed bug: meanlog_v and sdlog_v were not recycled for the lnorm LBA.
 o Ratcliff and Rouder (1998) vignette now uses nested data.frames and purrr::map (i.e., more proper use of the tidyverse).
 o removed bug when non-accumulator LBA parameters were passed as a named list.

Changes to Version 0.6-6 (bug-fix version, released July 2016)
 o Bug when passing start point with s != 1 removed.

Changes to Version 0.6-x (released July 2016)
 o Start point z in diffusion model is now on absolute scale and not relative, to be in line with A (start point of LBA) which is also on absolute scale. (Thanks to Steve Lewandowsky for noticing this.)
 o PDFs, CDFs, and quantile functions of both models now accept a data.frame as first argument containing both RTs/probabilities and responses. Allows a more convenient way to pass data.
 o renamed boundary (argument in diffusion functions) to response to be in line with LBA functions. (Thanks to Steve Lewandowsky for suggesting this.)
 o added diffusion constant s as argument to all diffusion functions.
 o added scale_p and scale_max arguments to quantile functions which automatically scale the entered probability to more conveniently obtain predicted quantiles.
 o LBA functions now accept a factor as response (which is converted via as.numeric). This allows one to pass results from rdiffusion directly to the LBA functions.
 o changed integration routine in pdiffusion to pracma::integral() which seems to be more robust. (Thanks to Anna-Lena Schubert and Steve Lewandowsky for reporting problems with the previous version.)
 o removed bug preventing lnorm as distribution in rLBA. (Thanks to Steve Lewandowsky for reporting this bug.)

Changes to Version 0.5-x (released May 2016)
 o Calculation of the CDF for the diffusion model was incorrect (this bug was present in all prior versions of rtdists). pdiffusion now simply integrates the PDF to obtain the CDF using R's integrate which provides the correct result (albeit slower).
 o Added rr98 data set: Experiment 1 from Ratcliff and Rouder (1998, Psych. Science). We thank Roger Ratcliff and Jeff Rouder for providing the data.
 o Added vignette showing how to analyze the data from Ratcliff and Rouder (1998) with both diffusion and LBA model.
 o Quantile functions work more robustly and try uniroot if optimize does not converge.

Changes to Version 0.4-x (released April 2016)
 o Added dLBA(), pLBA(), qLBA(), and rLBA(). dLBA() is a fully vectorized version of n1PDF which has response as second argument, allowing one to get the density for each response and corresponding response time in one step. As for the diffusion model (see below), this allows a likelihood function which only includes one call to the density function. pLBA() and qLBA() are the corresponding CDF and quantile functions, respectively. rLBA() is a fully vectorized version of the RNG functions and should be used from now on as the top-level function.
 o t0 in the LBA now accepts accumulator- and trialwise parameters just as A and b. st0 now accepts trialwise parameters (not accumulator-wise).
 o Diffusion model functions have been renamed to ddiffusion, pdiffusion, and rdiffusion. Added quantile function for diffusion model, qdiffusion.
 o Diffusion model functions are now completely vectorized and accept vectors as parameters (including for boundary).
   As for the LBA, this allows a likelihood function which only includes one call to the density function (see examples).
 o Boundary parameter for diffusion functions accepts numeric/factor vectors.
 o t0 in the diffusion model now corresponds to the lower bound of the uniform distribution from which t0 is drawn (it was the mean before). The specifications of t0 now agree between LBA and diffusion model.
 o Density for diffusion model is now always positive.
 o First argument in most functions (vector of response times) renamed to rt (was t before).
 o PDF and CDF LBA functions more robust (now mostly return 0 instead of NaN) for problematic parameter values (such as A = 0) [2015-09-17].
rtdists/inst/0000755000175000017500000000000014164644437013072 5ustar nileshnileshrtdists/inst/extdata/0000755000175000017500000000000014012213045014502 5ustar nileshnileshrtdists/inst/extdata/n1CDF_diff_example.RData0000644000175000017500000000210414012213045020757 0ustar nileshnilesh[binary RData payload; not reproducible as text]
[start of inst/extdata/AUTHORS; tar header garbled into the preceding binary payload] Andreas Voss
Which is available from: http://www.psychologie.uni-heidelberg.de/ae/meth/fast-dm/
And is described in the following papers:
 Voss, A., Voss, J., 2008. A fast numerical algorithm for the estimation of diffusion-model parameters. Journal of Mathematical Psychology.
 Voss, A., Voss, J., 2007. Fast-dm: A free program for efficient diffusion model analysis. Behavioral Research Methods, 39, 767-775.
RCppFastDM, by Matthew Gretton, encapsulates the core PDF, CDF and random sampling implementations of fast-dm with minor modifications to produce an RCpp library capable of being compiled via a C++ compiler and invoked via the R programming language.
rtdists/inst/extdata/lba-math.R0000644000175000017500000001536313667512040016330 0ustar nileshnileshrequire(msm)

# z = time, A=x0max, b=chi, v=driftrate, sv=sddrift
fptcdf0=function(z,x0max,chi,driftrate,sddrift) {
  if (x0max<1e-10) # LATER solution
    return( pnorm(chi/z,mean=driftrate,sd=sddrift,lower.tail=FALSE) )
  zs=z*sddrift ; zu=z*driftrate ; chiminuszu=chi-zu ; xx=chiminuszu-x0max
  chizu=chiminuszu/zs ; chizumax=xx/zs
  tmp1=zs*(dnorm(chizumax)-dnorm(chizu))
  tmp2=xx*pnorm(chizumax)-chiminuszu*pnorm(chizu)
  1+(tmp1+tmp2)/x0max
}

# z = time, A=x0max, b=chi, v=driftrate, sv=sddrift
fptpdf0=function(z,x0max,chi,driftrate,sddrift) {
  if (x0max<1e-10) # LATER solution
    return( (chi/z^2)*dnorm(chi/z,mean=driftrate,sd=sddrift) )
  zs=z*sddrift ; zu=z*driftrate ; chiminuszu=chi-zu
  chizu=chiminuszu/zs ; chizumax=(chiminuszu-x0max)/zs
  (driftrate*(pnorm(chizu)-pnorm(chizumax)) + sddrift*(dnorm(chizumax)-dnorm(chizu)))/x0max
}

# protected normal density and cdf
pnormP <- function(x,mean=0,sd=1,lower.tail=TRUE){
  ifelse(abs(x)<7,pnorm(x,mean=mean,sd=sd,lower.tail=lower.tail),ifelse(x<0,0,1))}
dnormP <- function(x,mean=0,sd=1){
  ifelse(abs(x)<7,dnorm(x,mean=mean,sd=sd),0)}

# robust version, 3 times slower!
fptcdfR=function(z,x0max,chi,driftrate,sddrift) {
  if (x0max<1e-10) # LATER solution
    return( pmin(1,pmax(0,pnormP(chi/z,mean=driftrate,sd=sddrift,lower.tail=FALSE))) )
  zs=z*sddrift ; zu=z*driftrate ; chiminuszu=chi-zu ; xx=chiminuszu-x0max
  chizu=chiminuszu/zs ; chizumax=xx/zs
  tmp1=zs*(dnormP(chizumax)-dnormP(chizu))
  tmp2=xx*pnormP(chizumax)-chiminuszu*pnormP(chizu)
  pmin(pmax(0,1+(tmp1+tmp2)/x0max),1)
}

# robust version, 3 times slower!
fptpdfR=function(z,x0max,chi,driftrate,sddrift) {
  if (x0max<1e-10) # LATER solution
    return( pmax(0,(chi/z^2)*dnormP(chi/z,mean=driftrate,sd=sddrift)) )
  zs=z*sddrift ; zu=z*driftrate ; chiminuszu=chi-zu
  chizu=chiminuszu/zs ; chizumax=(chiminuszu-x0max)/zs
  pmax(0,(driftrate*(pnormP(chizu)-pnormP(chizumax)) + sddrift*(dnormP(chizumax)-dnormP(chizu)))/x0max)
}

fptcdf=function(z,x0max,chi,driftrate,sddrift,posdrift=TRUE,robust=FALSE) {
  if ( robust ) {
    if (!posdrift) fptcdfR(z,x0max,chi,driftrate,sddrift) else
      fptcdfR(z,x0max,chi,driftrate,sddrift)/pmax(pnormP(driftrate/sddrift),1e-10)
  } else {
    if (!posdrift) fptcdf0(z,x0max,chi,driftrate,sddrift) else
      fptcdf0(z,x0max,chi,driftrate,sddrift)/pmax(pnormP(driftrate/sddrift),1e-10)
  }
}

fptpdf=function(z,x0max,chi,driftrate,sddrift,posdrift=TRUE,robust=FALSE) {
  if ( robust ) {
    if (!posdrift) fptpdfR(z,x0max,chi,driftrate,sddrift) else
      fptpdfR(z,x0max,chi,driftrate,sddrift)/pmax(pnormP(driftrate/sddrift),1e-10)
  } else {
    if (!posdrift) fptpdf0(z,x0max,chi,driftrate,sddrift) else
      fptpdf0(z,x0max,chi,driftrate,sddrift)/pmax(pnormP(driftrate/sddrift),1e-10)
  }
}

# n = number of samples, v = vs, t0 = minimum non-decision time
# st = width of uniform non-decision time
# If all rates are negative returns Inf and resp=1
rlba=function(n,b,A,vs,s,t0,st0=0,posdrift=TRUE){
  if (posdrift) {
    drifts=matrix(rtnorm(mean=vs,sd=s,n=n*length(vs),lower=0),ncol=length(vs),byrow=TRUE)
  } else {
    drifts=matrix(rnorm(mean=vs,sd=s,n=n*length(vs)),ncol=length(vs),byrow=TRUE)
    drifts[drifts<0]=0
  }
  starts=matrix(runif(min=0,max=A,n=n*length(vs)),ncol=length(vs),byrow=TRUE)
  ttf=t((b-t(starts)))/drifts
  rt=apply(ttf,1,min)+t0+runif(min=0,max=st0,n=n)
  resp=apply(ttf,1,which.min)
  bad <- !is.finite(rt)
  if (any(bad)) {
    warning(paste(sum(bad),"infinite RTs removed"))
    resp <- resp[!bad]
    rt <- rt[!bad]
  }
  list(rt=rt,resp=resp)
}

# t = time, A=x0max, b=chi, v=drift, sv=sdI
.n1PDFfixedt0=function(t,x0max,chi,drift,sdI,posdrift=TRUE,robust=FALSE) {
  # Generates defective PDF for responses on node #1.
  N=length(drift) # Number of responses.
  if (N>2) {
    tmp=array(dim=c(length(t),N-1))
    for (i in 2:N) tmp[,i-1]=fptcdf(z=t,x0max=x0max[i],chi=chi[i],
                                    driftrate=drift[i],sddrift=sdI[i],posdrift=posdrift,robust=robust)
    G=apply(1-tmp,1,prod)
  } else {
    G=1-fptcdf(z=t,x0max=x0max[2],chi=chi[2],driftrate=drift[2],
               sddrift=sdI[2],posdrift=posdrift,robust=robust)
  }
  G*fptpdf(z=t,x0max=x0max[1],chi=chi[1],driftrate=drift[1],
           sddrift=sdI[1],posdrift=posdrift,robust=robust)
}

.n1PDF=function(t,x0max,chi,drift,sdI,st0=0,posdrift=TRUE,robust=FALSE) {
  N=length(drift) # Number of responses
  if (length(x0max)<N) x0max=rep(x0max,length.out=N)
  if (length(chi)<N) chi=rep(chi,length.out=N)
  if (length(sdI)<N) sdI=rep(sdI,length.out=N)
  if (length(st0)>1) st0=st0[1] # Only ONE non-decision time.
  if (st0==0) return(.n1PDFfixedt0(t,x0max,chi,drift,sdI,posdrift,robust))
  tmpf=function(t,x0max,chi,drift,sdI,st0,posdrift,robust)
    .n1PDFfixedt0(t,x0max,chi,drift,sdI,posdrift,robust)/st0
  outs=numeric(length(t))
  for (i in 1:length(outs))
    outs[i]=integrate(f=tmpf,lower=t[i]-st0,upper=t[i],
                      x0max=x0max,chi=chi,drift=drift,sdI=sdI,st0=st0,posdrift=posdrift,robust=robust)$value
  outs
}

.n1CDF=function(t,x0max,chi,drift,sdI,st0=0,posdrift=TRUE,robust=FALSE) { #, browser=FALSE
  # Generates defective CDF for responses on node #1.
  N=length(drift) # Number of responses
  if (length(x0max)<N) x0max=rep(x0max,length.out=N)
  if (length(chi)<N) chi=rep(chi,length.out=N)
  if (length(sdI)<N) sdI=rep(sdI,length.out=N)
  if (length(st0)>1) stop("Only one value of st0 allowed.")
  if (st0<1e-6) st0=0 # Integral can fail for small st0.
  outs=numeric(length(t)) ; bounds=c(-st0/2,t)
  for (i in 1:length(t)) {
    tmp="error"
    repeat {
      if (bounds[i]>=bounds[i+1]) {outs[i]=0;break}
      #if(i==1 && browser) browser()
      tmp=try(integrate(f=.n1PDF,lower=bounds[i],upper=bounds[i+1],subdivisions=1000,
                        x0max=x0max,chi=chi,drift=drift,sdI=sdI,st0=st0,posdrift=posdrift,robust=robust)$value,silent=T)
      if (is.numeric(tmp)) {outs[i]=tmp;break}
      #browser()
      # Try smart lower bound.
      if (bounds[i]<=0) {
        bounds[i]=max(c((chi-0.98*x0max)/(max(mean(drift),drift[1])+2*sdI)[1],0))
        next
      }
      # Try smart upper bound.
      if (bounds[i+1]==Inf) {
        bounds[i+1]=0.02*max(chi)/(mean(drift)-2*mean(sdI))
        next
      }
      stop("Error in n1CDF that I could not catch.")
    }
  }
  cumsum(outs)
}

n1mean=function(x0max,chi,drift,sdI,st0=0,posdrift=TRUE,robust=FALSE) {
  # Generates mean RT for responses on node #1.
  pc=.n1CDF(Inf,x0max,chi,drift,sdI,st0=st0,posdrift=posdrift,robust=robust)
  fn=function(t,x0max,chi,drift,sdI,st0=0,pc,posdrift,robust)
    t*.n1PDF(t,x0max,chi,drift,sdI,st0,posdrift,robust)/pc
  tmp=integrate(f=fn,lower=0,upper=100*chi,x0max=x0max,chi=chi,pc=pc,
                drift=drift,sdI=sdI,st0=st0,posdrift=posdrift,robust=robust)$value
  list(mean=tmp,p=pc)
}
rtdists/inst/doc/0000755000175000017500000000000014164644437013637 5ustar nileshnileshrtdists/inst/doc/reanalysis_rr98.Rmd0000644000175000017500000014321214155710063017331 0ustar nileshnilesh---
title: "Reanalysis of Ratcliff and Rouder (1998) with Diffusion Model and LBA"
author: "Henrik Singmann"
date: "`r Sys.Date()`"
output:
  rmarkdown::html_vignette:
    toc: true
vignette: >
  %\VignetteIndexEntry{Reanalysis of Ratcliff and Rouder (1998) with Diffusion Model and LBA}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

This vignette provides the `R` scripts for a reanalysis of Experiment 1 of Ratcliff and Rouder (1998). In contrast to the original analysis, which used RT bins, we will employ trial-wise maximum likelihood estimation. The code heavily uses [`dplyr`](https://cran.r-project.org/package=dplyr) ([vignette](https://CRAN.R-project.org/package=dplyr/vignettes/dplyr.html)), [`tidyr`](https://cran.r-project.org/package=tidyr) ([vignette](https://CRAN.R-project.org/package=tidyr/vignettes/tidy-data.html)), and [`purrr`](https://cran.r-project.org/package=purrr) for data handling and `lattice` (see `?Lattice`) and `latticeExtra` (specifically `as.layer`) for plotting. A thorough introduction to the former three packages is provided in [R for Data Science](https://r4ds.had.co.nz/) by Wickham and Grolemund, see especially Chapter 25.

# Description of the Experiment

In the experiment, three participants were asked to decide whether the overall brightness of pixel arrays displayed on a computer monitor was "high" or "low". To this end, the number of white versus black pixels (i.e., the brightness `strength`) was manipulated in 33 levels from 0% white pixels (level 0) to 100% white pixels (level 32).
In addition, the instruction (emphasizing either speed or accuracy) was manipulated between blocks. In total, each participant contributed around 4000 trials per instruction condition.

The experiment contained another manipulation, the distribution (or brightness `source`) from which the pixel array was drawn. One distribution mean was on the "high" brightness side and one distribution mean was on the "low" brightness side. However, as the distributions were unbounded and overlapping, the same strength level could come from either distribution. Participants also received feedback on whether or not they had picked the correct distribution (e.g., for the middle strength level 16 the probability of belonging to either source was 50%). We do not further consider this manipulation in the following, which seems to be in line with the analysis of Ratcliff and Rouder (1998).

# Descriptive data

As a first step, we load the data and then plot the probability with which each response (i.e., "dark" or "light") is given as a function of strength and instruction condition. This clearly shows a massive effect of strength on which response is given, while the instruction seems to have only a minor effect that is concentrated at the extremes rather than the middle.

```{r, fig.height=4, fig.width=7, message=FALSE, warning=FALSE}
require(rtdists)
require(dplyr)   # for data manipulations and looping
require(tidyr)   # for data manipulations
require(purrr)   # for data manipulations
require(lattice) # for plotting and corresponding themes
require(latticeExtra)
lattice.options(default.theme = standard.theme(color = FALSE))
lattice.options(default.args = list(as.table = TRUE))
options(digits = 3) # only three decimal digits
require(binom)  # for binomial confidence intervals

data(rr98)
rr98 <- rr98[!rr98$outlier,]  #remove outliers

# aggregate data for first plot:
agg_rr98 <- rr98 %>% group_by(id, instruction, strength) %>%
  summarise(prop = mean(response == "dark"), mean_rt = mean(rt), median_rt = median(rt)) %>%
  ungroup()

xyplot(prop ~ strength|id, agg_rr98, group = instruction, type = "b",
       auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses")
```

Next, we want to get an overview of the response time distributions. For this we look at the response times of the five quantiles (i.e., 0.1, 0.3, 0.5/median, 0.7, 0.9) across the strength manipulations. This time, we also separate the plots by condition as the speed condition resulted in, as expected, vastly shorter response times. These two plots reveal considerable differences between the two instruction conditions.

```{r, fig.height=6, fig.width=7}
quantiles <- c(0.1, 0.3, 0.5, 0.7, 0.9)

## aggregate data for quantile plot
quantiles_rr98 <- rr98 %>%
  group_by(id, instruction, strength) %>%
  nest() %>%
  mutate(quantiles = map(data, ~ as.data.frame(t(quantile(.x$rt, probs = quantiles))))) %>%
  unnest(quantiles) %>%
  gather("quantile", "rt",`10%`:`90%`) %>%
  arrange(id, instruction, strength)

xyplot(rt ~ strength|id + instruction, quantiles_rr98, group = quantile, type = "b",
       auto.key = list(lines = TRUE), ylab = "RT (in seconds)", subset = instruction == "speed")
xyplot(rt ~ strength|id + instruction, quantiles_rr98, group = quantile, type = "b",
       auto.key = FALSE, ylab = "RT (in seconds)", subset = instruction == "accuracy")
```

In the speed conditions, response times were, as expected, generally fast and there seemed to be hardly any effect of strength. Only for one participant, `nh`, can we see a small increase in RTs for the higher quantiles for strength values near the middle.
In contrast, in the accuracy condition strength has a considerable effect on response times for all participants. Again, this increase was especially strong for the slower responses (i.e., the higher quantiles). For those we see a strong inverse u-shaped effect, symmetrically around the middle -- where the probability for each response is 50% -- with very high response times for strength values near the middle.

However, as this plot is getting somewhat messy, we now bin the strength levels to reduce noise, which should provide a clearer overview. For this, we will construct five separate strength bins with approximately equal response behavior and comparable numbers of trials. This is similar to what was done originally by Ratcliff and Rouder (1998). The next table shows the number of trials per participant, bin, and response.

```{r, fig.height=4, fig.width=7}
#bins <- c(-0.5, 5.5, 10.5, 13.5, 16.5, 19.5, 25.5, 32.5) # seven bins like RR98
bins <- c(-0.5, 10.5, 13.5, 16.5, 19.5, 32.5)
rr98$strength_bin <- cut(rr98$strength, breaks = bins, include.lowest = TRUE)
levels(rr98$strength_bin) <- as.character(1:5)

# aggregate data for response probability plot:
agg_rr98_bin <- rr98 %>%
  group_by(id, instruction, strength_bin) %>%
  summarise(n1 = n(), dark = sum(response == "dark"), light = sum(response == "light")) %>%
  ungroup() %>%
  mutate(prop = map2(dark, n1, ~ binom.confint(.x, .y, methods = "agresti-coull"))) %>%
  unnest(prop)

knitr::kable(
  rr98 %>% group_by(id, instruction, strength_bin, response) %>%
    summarise(n = n()) %>%
    spread(strength_bin, n)
)
```

We first look again at the response proportions and now see more clearly the difference between the instruction conditions at the outer bins.

```{r, fig.height=4, fig.width=7}
xyplot(mean ~ strength_bin|id, agg_rr98_bin, group = instruction, type = "b",
       auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses")
```

Now we also look again at the RT quantiles and see more clearly the symmetrical inverse u-shaped increase in RTs for the middle bins described above.

```{r, fig.height=6, fig.width=7}
## aggregate data for quantile plot
quantiles_rr98_bin <- rr98 %>%
  group_by(id, instruction, strength_bin) %>%
  nest() %>%
  mutate(quantiles = map(data, ~ as.data.frame(t(quantile(.x$rt, probs = quantiles))))) %>%
  unnest(quantiles) %>%
  gather("quantile", "rt",`10%`:`90%`) %>%
  arrange(id, instruction, strength_bin)

xyplot(rt ~ strength_bin|id + instruction, quantiles_rr98_bin, group = quantile, type = "b",
       auto.key = list(lines = TRUE), ylab = "RT (in seconds)", subset = instruction == "speed")
xyplot(rt ~ strength_bin|id + instruction, quantiles_rr98_bin, group = quantile, type = "b",
       auto.key = FALSE, ylab = "RT (in seconds)", subset = instruction == "accuracy")
```

With this clear pattern we now take a look at the RT distributions separately for both responses to see if they are simply mirror images of each other or not. For this, we overlay the two RT quantile plots for all trials in which the response was "dark" in black (there are more "dark" pixels for the bins on the left side of the plot) with the same plot in which the response was "light" in grey (there are more "light" pixels for the bins on the right side of the plot).
```{r, fig.height=6, fig.width=7}
agg2_rr98_response <- rr98 %>%
  group_by(id, instruction, strength_bin, response) %>%
  nest() %>%
  mutate(quantiles = map(data, ~ as.data.frame(t(quantile(.x$rt, probs = quantiles))))) %>%
  unnest(quantiles) %>%
  gather("quantile", "rt",`10%`:`90%`) %>%
  arrange(id, instruction, response, strength_bin)

p1 <- xyplot(rt ~ strength_bin|id, agg2_rr98_response, group = quantile, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "speed" & response == "dark", layout = c(3,1))
p2 <- xyplot(rt ~ strength_bin|id, agg2_rr98_response, group = quantile, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "speed" & response == "light", col = "grey")
p1 + as.layer(p2)

p1 <- xyplot(rt ~ strength_bin|id, agg2_rr98_response, group = quantile, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "accuracy" & response == "dark", layout = c(3,1))
p2 <- xyplot(rt ~ strength_bin|id, agg2_rr98_response, group = quantile, type = "b",
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)",
             subset = instruction == "accuracy" & response == "light", col = "grey")
p1 + as.layer(p2)
```

These two plots reveal an interesting pattern. In the speed condition (upper plot), we particularly see fast "errors" (i.e., responses to "dark" when there are more light pixels or the other way round). When "dark" is the more likely response (i.e., on the left side) the "light" responses in grey are faster, and this is especially true for the lower quantiles. The reverse pattern seems to hold on the other side, where "dark" responses in black are faster than "light" responses in grey. At intermediate bins the difference seems to lie rather in the higher quantiles. This is particularly noticeable for participant `kr`, for whom there seem to be slow "light" errors just to the left of the middle bin and slow "dark" errors just to the right of the middle bin.

For the accuracy condition in the lower plot the pattern is noticeably different. First of all, there are only very few or no "error" responses in the extreme bins. Consequently, there does not seem to be any evidence for fast errors at the extremes (and also not at intermediate strength levels). However, here we see more clearly the slow errors at the intermediate bins. When "dark" is somewhat more probable (i.e., to the left of the middle) "light" responses are noticeably slower than "dark" responses. The same holds for "dark" responses if "light" is more probable. Importantly, this shows that the symmetrical inverse u-shaped increase for the middle bins described above is actually a consequence of a mixture of slow "errors": two asymmetric increases, one for each of the two responses.

# Diffusion Model Analysis

We will follow Ratcliff and Rouder (1998) and analyze the data with the diffusion model. For this, we will fit a separate model to each participant and instruction condition. To do so, we will first create a new data set we will use for fitting. This data set will be `nested` with one row for each combination of the variables:

```{r}
d_nested <- rr98 %>%
  group_by(id, instruction) %>% # we loop across both, id and instruction
  nest()
d_nested
```
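Each element of the list-column `data` now holds the raw trials of one participant-instruction cell, which is what the fitting functions below will receive. As a quick optional check (a sketch, assuming the row order shown above), the number of trials in the first cell can be inspected as follows:

```{r, eval = FALSE}
## trials of the first id x instruction combination
nrow(d_nested$data[[1]])
```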
We fit basically the full diffusion model (with the exception of $s_{t0}$) to each instruction condition which results in 10 parameters per participant and instruction condition: - 5 drift rates $v$ (i.e., one per strength bin) - 1 boundary separation $a$ - 1 non-decision time $t_0$ - 1 drift rate variability $s_v$ - 1 start point $z$ (for ease in interpretation we parameterize this as the *relative* start point so that values different from 0.5 indicate a bias towards one of the responses) - 1 start point variability $s_z$ Like in Ratcliff and Rouder (1998), the two response boundaries are the two response options "dark" and "light". To estimate the model we diverge from Ratcliff and Rouder and employ trial wise maximum likelihood estimation (i.e., no binning of responses). For this, we simply need to have a wrapper function which returns us the negative summed log-likelihood of the data (i.e., RTs and corresponding responses) given a set of parameters. We need the negativ sum because most optimization function minimize whereas we want to obtain the maximum likelihood value. The following function for which we simply loop across drift rates will do so: ```{r} # objective function for diffusion with 1 a. loops over drift to assign drift rates to strength objective_diffusion_separate <- function(pars, rt, response, drift, ...) { non_v_pars <- grep("^v", names(pars), invert = TRUE, value = TRUE) base_par <- length(non_v_pars) # number of non-drift parameters densities <- vector("numeric", length(rt)) for (i in seq_along(levels(drift))) { densities[drift == levels(drift)[i]] <- ddiffusion(rt[drift == levels(drift)[i]], response=response[drift == levels(drift)[i]], a=pars["a"], t0=pars["t0"], sv=pars["sv"], sz=if ("sz" %in% non_v_pars) pars["sz"] else 0.1, z=if ("z" %in% non_v_pars) pars["z"]*pars["a"] else 0.5*pars["a"], st0=if ("st0" %in% non_v_pars) pars["st0"] else 0, v=pars[base_par+i]) } if (any(densities == 0)) return(1e6) return(-sum(log(densities))) } ``` Note that the function is written in such a way that we could easily fix certain parameters without the necessity to change it (using `if`-`then` on the parameters names passed via `pars`). Additionally, we also need a function that generates a set of random starting values. And, as any random set of starting values may be impossible, another wrapper function that generates starting values until a set of valid starting values is found and then passes those to the optimization routine. As optimization routine we will be using `nlminb`. These functions are given next and are specified in a way that they will be usable for other model variants for this data (e.g., fixing parameters). 
```{r}
# function that creates random start values
get_start <- function(base_par, n_drift = 5) {
  start1 <- c(
    a = runif(1, 0.5, 3),
    a_1 = runif(1, 0.5, 3), 
    a_2 = runif(1, 0.5, 3),
    t0 = runif(1, 0, 0.5), 
    z = runif(1, 0.4, 0.6), 
    sz = runif(1, 0, 0.5),
    sv = runif(1, 0, 0.5),
    st0 = runif(1, 0, 0.5),
    d = rnorm(1, 0, 0.05)
  )
  start2 <- sort(rnorm(n_drift), decreasing = FALSE)
  names(start2) <- paste0("v_", seq_len(n_drift))
  c(start1[base_par], start2)
}

# function that tries different random start values until it works:
ensure_fit <- 
  function(data, start_function, objective_function, 
           base_pars, n_drift = 5, n_fits = 1, 
           lower = c(rep(0, length(base_pars)), -Inf,
                     rep(-Inf, length(start_function(base_pars)) - length(base_pars)))) {
    best_fit <- list(objective = 1e+06)
    for (i in seq_len(n_fits)) {
      start_ll <- 1e+06
      #browser()
      while (start_ll == 1e+06) {
        start <- start_function(base_pars, n_drift = n_drift)
        start_ll <- objective_function(start, 
                                       rt = data$rt, response = data$response_num, 
                                       drift = factor(data$strength_bin, seq_len(n_drift)), 
                                       instruction = data$instruction)
      }
      cat("\nstart fitting.\n") # just for information to see if it is stuck
      
      fit <- nlminb(start, objective_function, 
                    rt = data$rt, response = data$response_num, 
                    drift = factor(data$strength_bin, seq_len(n_drift)), 
                    instruction = data$instruction,
                    lower = lower)
      
      if (fit$objective < best_fit$objective) best_fit <- fit
    }
    out <- as.data.frame(t(unlist(best_fit[1:3])))
    colnames(out) <- sub("par.", "", colnames(out))
    out
  }
```

```{r, echo=FALSE}
load("rr98_full-diffusion_fits.rda")
load("rr98_full-lba_fits.rda")
```

With these functions in place, we now simply need to loop over participants and instruction conditions to obtain the fits. The simplest way is perhaps the combination of `purrr::map` and `dplyr::mutate` as shown here:

```{r, eval = FALSE}
fit_diffusion <- d_nested %>% 
  mutate(fit = 
           map(data, 
               ~ ensure_fit(data = ., start_function = get_start, 
                            objective_function = objective_diffusion_separate, 
                            base_pars = c("a", "t0", "sv", "sz", "z")))) %>% 
  unnest(fit)
```

On Unix-like systems (i.e., Linux and Mac) we can easily use the built-in multicore functionality via `parallel::mclapply` to distribute fitting of the different parts across different cores:

```{r, eval = FALSE}
require(parallel)

fit_diffusion <- d_nested
fit_diffusion$fit <- 
  mclapply(d_nested$data, function(x) 
    ensure_fit(data = x, start_function = get_start,
               objective_function = objective_diffusion_separate, 
               base_pars = c("a", "t0", "sv", "sz", "z")),  
    mc.cores = 2)
fit_diffusion <- unnest(fit_diffusion, fit)
```

The following table gives the parameter values, the negative summed log-likelihoods (i.e., `objective`, where smaller is better), and the convergence code of the optimization algorithm (where 0 indicates no problem) obtained from this fit:

```{r}
fit_diffusion$data <- NULL
if (!("st0" %in% colnames(fit_diffusion))) fit_diffusion$st0 <- 0
if (!("z" %in% colnames(fit_diffusion))) fit_diffusion$z <- 0.5
if (!("sz" %in% colnames(fit_diffusion))) fit_diffusion$sz <- 0.1
knitr::kable(fit_diffusion)
```

We can see from these values that there is a large effect of instruction on $a$. However, instruction also has effects on other parameters:

- $t_0$ is consistently larger in the accuracy compared to the speed condition, although this effect is small.
- $s_v$ is estimated at 0 or very low in the speed condition, but at 0.5 or 1 in the accuracy condition. This is consistent with the absence of slow "errors" in the speed condition.
- $s_z$ is consistently larger in the speed condition, consistent with the presence of more fast "errors" in the speed than in the accuracy condition.
- $v$ appears to be more extreme (i.e., smaller for $v_1$ and $v_2$ and larger for $v_4$ and $v_5$) in the speed compared to the accuracy condition.

```{r obtain_fits_not_run, eval = FALSE, include = FALSE}
require(parallel)
fit_diffusion <- d_nested %>% 
  mutate(fit = 
           map(data, 
               ~ ensure_fit(data = ., start_function = get_start, 
                            objective_function = objective_diffusion_separate, 
                            base_pars = c("a", "t0", "sv", "sz", "z")))) %>% 
  unnest(fit)
fit_diffusion$data <- NULL

fit_diffusion2 <- d_nested
fit_diffusion2$fit <- 
  mclapply(d_nested$data, function(x) 
    ensure_fit(data = x, start_function = get_start,
               objective_function = objective_diffusion_separate, 
               base_pars = c("a", "t0", "sv", "sz", "z")),  
    mc.cores = 3)
fit_diffusion2 <- unnest(fit_diffusion2, fit)
fit_diffusion2$data <- NULL

all.equal(as.data.frame(fit_diffusion), as.data.frame(fit_diffusion2), tolerance = 0.01)

save(fit_diffusion, fit_diffusion2, file = "rr98_full-diffusion_fits.rda")
```

## Graphical Model Fit

### Predicted Response Rates

To evaluate the fits graphically, we first compare the actual response rates for the two responses with the predicted response rates. The grey lines and points show the observed data, with binomial confidence intervals as error bars. The black lines and points show the predicted response rates.

```{r, fig.height=5, fig.width=7, message=FALSE}
# get predicted response proportions
pars_separate_l <- fit_diffusion %>% gather("strength_bin", "v", starts_with("v"))
pars_separate_l$strength_bin <- factor(substr(pars_separate_l$strength_bin, 3, 3), 
                                       levels = as.character(seq_len(length(bins)-1)))
#pars_separate_l <- inner_join(pars_separate_l, agg_rr98_bin)
pars_separate_l <- pars_separate_l %>% group_by(id, instruction, strength_bin) %>%
  mutate(resp_prop = pdiffusion(rt=Inf, response="lower", 
                                a=a, v=v, t0=t0, sz = sz, z=a*z, sv=sv, st0=st0))

p1 <- xyplot(mean ~ strength_bin|id + instruction, agg_rr98_bin, type = "b", auto.key = 
               list(lines = TRUE), ylab = "Proportion of 'dark' responses", col = "grey")
p2 <- segplot(strength_bin ~ upper+lower|id + instruction, agg_rr98_bin, 
              auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
              col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
              draw.bands = FALSE, angle = 90, length = 0.05, ends = "both")
p3 <- xyplot(resp_prop ~ strength_bin|id + instruction, pars_separate_l, type = "b", 
             auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
             col = "black")
p2 + as.layer(p1) + as.layer(p3)
```

This figure shows that overall the model can predict the actual response rates very accurately. There are only a few minor deviations.

### Predicted Median RTs

Next we compare the central tendency of the RTs with the predictions. For this, we evaluate the quantile function of each response's defective RT distribution at the quantiles multiplied by the predicted response proportions. Again, the data is shown in grey (with the standard errors of the median as error bars) and the predicted RTs in black. We first show the predictions for the speed condition, separated by response.
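To make the logic of the following chunk explicit: `qdiffusion` inverts the defective CDF of one response, so the $q$-th RT quantile among, say, the "dark" (lower boundary) responses is obtained by requesting the quantile at $q$ times the predicted "dark" proportion. A minimal sketch with made-up parameter values:

```{r, eval = FALSE}
# predicted proportion of 'dark' (lower boundary) responses
p_dark <- pdiffusion(Inf, response = "lower", a = 1, v = -0.5, t0 = 0.3)
# predicted median RT (50% quantile) of the 'dark' responses
qdiffusion(0.5 * p_dark, response = "lower", a = 1, v = -0.5, t0 = 0.3)
```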
```{r, fig.height=6, fig.width=7, message=FALSE}
# get predicted quantiles (uses predicted response proportions)
separate_pred_dark <- pars_separate_l %>% do(as.data.frame(t(
  qdiffusion(quantiles*.$resp_prop, response="lower", 
             a=.$a, v=.$v, t0=.$t0, sz = .$sz, z = .$z*.$a, sv=.$sv, st0=.$st0)))) %>% 
  ungroup() %>% gather("quantiles", "dark", V1:V5)
separate_pred_light <- pars_separate_l %>% do(as.data.frame(t(
  qdiffusion(quantiles*(1-.$resp_prop), response="upper", 
             a=.$a, v=.$v, t0=.$t0, sz = .$sz, z = .$z*.$a, sv=.$sv, st0=.$st0)))) %>% 
  ungroup() %>% gather("quantiles", "light", V1:V5)

#separate_pred_light %>% filter(is.na(light))
separate_pred <- inner_join(separate_pred_dark, separate_pred_light)
separate_pred$quantiles <- factor(separate_pred$quantiles, 
                                  levels = c("V5", "V4", "V3", "V2", "V1"), 
                                  labels = c("90%", "70%", "50%", "30%", "10%"))
separate_pred <- separate_pred %>% gather("response", "rt", dark, light)

# get SE for observed quantiles
agg2_rr98_response_se <- rr98 %>% group_by(id, instruction, strength_bin, response) %>% 
  summarise(se_median = sqrt(pi/2)*(sd(rt)/sqrt(n()))) %>%
  ungroup()

# calculate error bars for quantiles.
agg2_rr98_response <- left_join(agg2_rr98_response, agg2_rr98_response_se)
agg2_rr98_response <- agg2_rr98_response %>%
  mutate(lower = rt-se_median, upper = rt+se_median)


p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed" & quantile == "50%", 
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "speed" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, separate_pred, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed" & quantiles == "50%", 
             scales = list(y = list(limits = c(0.25, 0.5))))
p2 + as.layer(p1) + as.layer(p1e)
```

Again, the model seems overall able to describe the general pattern quite well. However, there are some visible misfits for participants `jf` and `nh`.

The next plot shows the same data for the accuracy condition. Here we again see that the model is able to predict the pattern in the data quite well. While there also seems to be quite some misfit for participant `nh`, it only appears in conditions with very few trials, as indicated by the large error bars.
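A brief note on the error bars used here and in the following plots: the factor $\sqrt{\pi/2}$ in the code above is the usual large-sample approximation for the standard error of a sample median,

$$\mathrm{SE}(\hat{m}) \approx \frac{1}{2 f(m) \sqrt{n}},$$

where $f$ is the density at the true median $m$. Plugging in the normal density at its median, $f(m) = 1/(\sigma \sqrt{2\pi})$, yields $\mathrm{SE}(\hat{m}) \approx \sqrt{\pi/2}\, \sigma/\sqrt{n}$, roughly 1.25 times the standard error of the mean.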
```{r, fig.height=6, fig.width=7}
p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy" & quantile == "50%", 
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "accuracy" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, separate_pred, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy" & quantiles == "50%", 
             scales = list(y = list(limits = c(0.2, 1.5))))
p2 + as.layer(p1) + as.layer(p1e)
```

### All quantiles

Next, we investigate the full RT distributions by comparing observed and predicted quantiles. The observed quantiles are again displayed in grey and the predictions in black. The first plot shows the speed condition separated by response.

```{r, fig.height=7, fig.width=7}
p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, group = quantile, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed", layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "speed")
p2 <- xyplot(rt ~ strength_bin|id + response, separate_pred, group = quantiles, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed", scales = list(y = list(limits = c(0.2, 0.9))))
p2 + as.layer(p1) + as.layer(p1e)
```

This plot shows some clear misfits for the diffusion model, particularly in the upper quantiles. But there are also misfits in the lower quantiles.

The next plot shows the accuracy condition separated by response. Here it appears that the diffusion model provides an overall better account. However, it is important to consider the different y-axis scaling of the two plots.

```{r, fig.height=7, fig.width=7}
p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, group = quantile, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy", layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "accuracy")
p2 <- xyplot(rt ~ strength_bin|id + response, separate_pred, group = quantiles, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy", scales = list(y = list(limits = c(0.1, 3.0))))
p2 + as.layer(p1) + as.layer(p1e)
```

Overall the diffusion model provides a good account of the data. Only when considering all quantiles do we see some clear misfits. Nevertheless, the general trends in the data are well recovered; the only exceptions are conditions with very low numbers of trials.

# LBA analysis

Next, we fit the LBA model to the data. For this, we use an LBA model with the same number of parameters.
To make the model identifiable, we fix the sum of the two mean drift rates per strength bin to 1. Specifically, the model has the following 10 parameters per participant and instruction condition:

- 5 drift rates $v$ (i.e., one per strength bin)
- 2 maximal starting points $A$ (i.e., one for each accumulator)
- 1 non-decision time $t_0$
- 1 drift rate variability $s_v$
- 1 response threshold $b$ (parameterized as an increment to the maximal value of the two $A$)

To fit the model we need an objective function wrapping the LBA PDF and a new function for generating the correct starting values.

```{r}
# objective function for the LBA. loops over drift to assign drift rates to strength
objective_lba_separate <- function(pars, rt, response, drift, ...) {
  non_v_pars <- grep("^v", names(pars), invert = TRUE, value = TRUE)
  base_par <- length(non_v_pars)  # number of non-drift parameters
  densities <- vector("numeric", length(rt))
  for (i in seq_along(levels(drift))) {
    if (sum(drift == levels(drift)[i]) == 0) next
    densities[drift == levels(drift)[i]] <- dLBA(
      rt[drift == levels(drift)[i]], 
      response = response[drift == levels(drift)[i]],
      A = list(pars["a_1"], pars["a_2"]), 
      b = max(pars["a_1"], pars["a_2"]) + pars["b"], 
      t0 = pars["t0"], 
      mean_v = c(pars[i], 1 - pars[i]), 
      sd_v = pars["sv"], silent = TRUE)
  }
  if (any(densities == 0)) return(1e6)
  return(-sum(log(densities)))
}

# function that creates random start values
get_start_lba <- function(base_par, n_drift = 10) {
  start1 <- c(
    a = runif(1, 0.5, 3),
    a_1 = runif(1, 0.5, 3), 
    a_2 = runif(1, 0.5, 3),
    t0 = runif(1, 0, 0.5), 
    b = runif(1, 0, 0.5), 
    sv = runif(1, 0.5, 1.5),
    st0 = runif(1, 0, 0.5)
  )
  start2 <- sort(rnorm(n_drift), decreasing = FALSE)
  names(start2) <- paste0("v_", seq_len(n_drift))
  c(start2, start1[base_par])
}
```

With this, we simply need to loop across participants and instruction conditions to estimate the LBA. Again, we run several fitting runs per data set to increase the chance of reaching the maximum likelihood estimate (i.e., the global optimum).

```{r, eval=FALSE}
fit_lba <- d_nested %>% 
  mutate(fit = 
           map(data, 
               ~ ensure_fit(data = ., start_function = get_start_lba, 
                            objective_function = objective_lba_separate, 
                            base_pars = c("a_1", "a_2", "t0", "b", "sv"),
                            lower = c(rep(-Inf, 5), rep(0, 5)),
                            n_drift = 5, n_fits = 10))) %>% 
  unnest(fit)
```

Again, on Unix-like systems (i.e., Linux and Mac) we can distribute the fitting across cores via `parallel::mclapply`:

```{r, eval = FALSE}
require(parallel)
fit_lba <- d_nested
fit_lba$fit <- 
  mclapply(d_nested$data, function(x) 
    ensure_fit(data = x, start_function = get_start_lba, 
               objective_function = objective_lba_separate, 
               base_pars = c("a_1", "a_2", "t0", "b", "sv"),
               lower = c(rep(-Inf, 5), rep(0, 5)),
               n_drift = 5, n_fits = 10),  
    mc.cores = 2)
fit_lba <- unnest(fit_lba, fit)
```

The following table gives the parameters and the negative summed log-likelihoods obtained from the LBA fit (with $b$ already transformed by adding the maximal $A$). Note that some of the parameters might differ slightly across runs for id = `kr` and instruction = `accuracy`, although the value of the objective function is identical to the reported one. This suggests that the likelihood surface is either quite shallow near the MLE or that there are at least two peaks in the likelihood surface with a similar maximum. The fact that the convergence code for this data set is 1 instead of 0 also suggests problems in finding the global optimum. In any case, running the optimization multiple times and comparing the results should reveal such problems.
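A minimal way to perform such a comparison, assuming a second table of fits `fit_lba2` obtained from an independent call to `ensure_fit` (as is done in the hidden chunk below):

```{r, eval = FALSE}
# TRUE (or only small reported differences) indicates that both runs
# converged on essentially the same solution
all.equal(as.data.frame(fit_lba), as.data.frame(fit_lba2), tolerance = 0.03)
```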
```{r}
knitr::kable(fit_lba)
```

The negative log-likelihood (column `objective`) shows that the LBA provides a better account for four of the six data sets (because it is the negative log-likelihood, smaller is better). The diffusion model only provides a better account for the `kr` and `nh` accuracy conditions.

In terms of the parameter estimates we see a pattern similar to the one for the diffusion model:

- Instruction shows the expected effect on $b$.
- Instruction also shows an effect on $A$, which is likewise larger in the accuracy compared to the speed condition.
- Instruction seems to have a small effect on $t_0$ in the same direction.
- Instruction also affected $s_v$ in the same direction.
- Instruction also affected $v$, which seemed to be more extreme in the accuracy compared to the speed condition.

```{r obtain_fits_lba, eval = FALSE, include = FALSE}
fit_lba <- d_nested %>% 
  mutate(fit = 
           map(data, 
               ~ ensure_fit(data = ., start_function = get_start_lba, 
                            objective_function = objective_lba_separate, 
                            base_pars = c("a_1", "a_2", "t0", "b", "sv"),
                            lower = c(rep(-Inf, 5), rep(0, 5)),
                            n_drift = 5, n_fits = 10))) %>% 
  unnest(fit)
fit_lba$data <- NULL

fit_lba2 <- d_nested
fit_lba2$fit <- 
  mclapply(d_nested$data, function(x) 
    ensure_fit(data = x, start_function = get_start_lba, 
               objective_function = objective_lba_separate, 
               base_pars = c("a_1", "a_2", "t0", "b", "sv"),
               lower = c(rep(-Inf, 5), rep(0, 5)),
               n_drift = 5, n_fits = 10),  
    mc.cores = 2)
fit_lba2 <- unnest(fit_lba2, fit)
fit_lba2$data <- NULL

all.equal(as.data.frame(fit_lba), as.data.frame(fit_lba2), tolerance = 0.03)

save(fit_lba, fit_lba2, file = "rr98_full-lba_fits.rda")

# alternative objective function for the LBA: one common A, drift rates per
# accumulator, sd_v of the first accumulator fixed to 1
objective_lba_separate <- function(pars, rt, response, drift, ...) {
  non_v_pars <- grep("^v", names(pars), invert = TRUE, value = TRUE)
  base_par <- length(non_v_pars)  # number of non-drift parameters
  densities <- vector("numeric", length(rt))
  for (i in seq_along(levels(drift))) {
    if (sum(drift == levels(drift)[i]) == 0) next
    densities[drift == levels(drift)[i]] <- dLBA(
      rt[drift == levels(drift)[i]], 
      response = response[drift == levels(drift)[i]],
      A = pars["a"], 
      b = pars["a"] + pars["b"], 
      t0 = pars["t0"], 
      mean_v = pars[((i-1)*2 + 1):((i-1)*2 + 2)], 
      sd_v = c(1, pars["sv"]), silent = TRUE)
  }
  if (any(densities == 0)) return(1e6)
  return(-sum(log(densities)))
}

# alternative objective function for the LBA: two As with accumulator-specific
# thresholds (each b is an increment on its own A)
objective_lba_separate <- function(pars, rt, response, drift, ...) {
  non_v_pars <- grep("^v", names(pars), invert = TRUE, value = TRUE)
  base_par <- length(non_v_pars)  # number of non-drift parameters
  densities <- vector("numeric", length(rt))
  for (i in seq_along(levels(drift))) {
    if (sum(drift == levels(drift)[i]) == 0) next
    densities[drift == levels(drift)[i]] <- dLBA(
      rt[drift == levels(drift)[i]], 
      response = response[drift == levels(drift)[i]],
      A = list(pars["a_1"], pars["a_2"]), 
      b = list(pars["a_1"] + pars["b"], pars["a_2"] + pars["b"]), 
      t0 = pars["t0"], 
      mean_v = c(pars[i], 1 - pars[i]), 
      sd_v = pars["sv"], silent = TRUE)
  }
  if (any(densities == 0)) return(1e6)
  return(-sum(log(densities)))
}
```

## Graphical Model Fit

The fact that the LBA provides a slightly better fit is also visible in the graphical fit assessment.

### Predicted Response Rates

We again first consider the predicted response rates (in black); they are also highly accurate.
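As before, the predicted response proportions in the following chunk are obtained by evaluating the defective CDF at `Inf`. A minimal standalone call with made-up parameter values (note that the two mean drift rates sum to 1, as per our identifiability constraint):

```{r, eval = FALSE}
# probability that accumulator 1 ('dark') wins the race, i.e., the
# predicted proportion of 'dark' responses
pLBA(Inf, response = 1, A = 0.5, b = 1, t0 = 0.2, 
     mean_v = c(1.2, -0.2), sd_v = 1, silent = TRUE)
```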
```{r, fig.height=5, fig.width=7, message=FALSE}
# get predicted response proportions
lba_pars_separate_l <- fit_lba %>% gather("strength_bin", "v", starts_with("v"))
lba_pars_separate_l$strength_bin <- factor(substr(lba_pars_separate_l$strength_bin, 3, 3), 
                                           levels = as.character(seq_len(length(bins)-1)))
#pars_separate_l <- inner_join(pars_separate_l, agg_rr98_bin)
lba_pars_separate_l <- lba_pars_separate_l %>% group_by(id, instruction, strength_bin) %>%
  mutate(resp_prop = pLBA(rt=Inf, response=1, A=list(a_1, a_2), sd_v=sv, 
                          mean_v=c(v, 1-v), t0=t0, b=max(a_1, a_2)+b, silent=TRUE))

p1 <- xyplot(mean ~ strength_bin|id + instruction, agg_rr98_bin, type = "b", 
             auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
             col = "grey")
p2 <- segplot(strength_bin ~ upper+lower|id + instruction, agg_rr98_bin, 
              auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
              col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
              draw.bands = FALSE, angle = 90, length = 0.05, ends = "both")
p3 <- xyplot(resp_prop ~ strength_bin|id + instruction, lba_pars_separate_l, type = "b", 
             auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
             col = "black")
p2 + as.layer(p1) + as.layer(p3)
```

### Predicted Median RTs

Again, the data is shown in grey (with the standard errors of the median as error bars) and the predicted RTs in black. We first show the predictions for the speed condition, separated by response.

```{r, fig.height=6, fig.width=7, message=FALSE}
# get predicted quantiles (uses predicted response proportions)
lba_separate_pred_dark <- lba_pars_separate_l %>% do(as.data.frame(t(
  qLBA(quantiles*.$resp_prop, response=1, A=list(.$a_1, .$a_2), sd_v=.$sv, 
       mean_v=c(.$v, 1-.$v), t0=.$t0, b=max(.$a_1, .$a_2)+.$b, silent=TRUE)))) %>% 
  ungroup() %>% gather("quantiles", "dark", V1:V5)
lba_separate_pred_light <- lba_pars_separate_l %>% do(as.data.frame(t(
  qLBA(quantiles*(1-.$resp_prop), response=2, A=list(.$a_1, .$a_2), sd_v=.$sv, 
       mean_v=c(.$v, 1-.$v), t0=.$t0, b=max(.$a_1, .$a_2)+.$b, silent=TRUE)))) %>% 
  ungroup() %>% gather("quantiles", "light", V1:V5)

#separate_pred_light %>% filter(is.na(light))
lba_separate_pred <- inner_join(lba_separate_pred_dark, lba_separate_pred_light)
lba_separate_pred$quantiles <- factor(lba_separate_pred$quantiles, 
                                      levels = c("V5", "V4", "V3", "V2", "V1"), 
                                      labels = c("90%", "70%", "50%", "30%", "10%"))
lba_separate_pred <- lba_separate_pred %>% gather("response", "rt", dark, light)

p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed" & quantile == "50%", 
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "speed" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed" & quantiles == "50%", 
             scales = list(y = list(limits = c(0.25, 0.5))))
p2 + as.layer(p1) + as.layer(p1e)
```

The next plot shows the same data for the accuracy condition.
```{r, fig.height=6, fig.width=7}
p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy" & quantile == "50%", 
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "accuracy" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy" & quantiles == "50%", 
             scales = list(y = list(limits = c(0.2, 1.5))))
p2 + as.layer(p1) + as.layer(p1e)
```

Overall the LBA is able to describe the central tendency relatively well. The only considerable misfit is evident at the extreme bins with few responses (i.e., large error bars).

### All quantiles

Next, we investigate the full RT distributions by comparing observed and predicted quantiles. The observed quantiles are again displayed in grey and the predictions in black. The first plot shows the speed condition separated by response.

```{r, fig.height=7, fig.width=7}
p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, group = quantile, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed", layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "speed")
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, group = quantiles, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed", scales = list(y = list(limits = c(0.2, 0.6))))
p2 + as.layer(p1) + as.layer(p1e)
```

The next plot shows the accuracy condition separated by response.

```{r, fig.height=7, fig.width=7}
p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, group = quantile, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy", layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "accuracy")
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, group = quantiles, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy", scales = list(y = list(limits = c(0.1, 3.3))))
p2 + as.layer(p1) + as.layer(p1e)
```

These two plots show that not only the central tendency but also the other quantiles are described quite well.

# Comparing Model Fit

Finally, we graphically compare the fit of the two models. Here we can see that, while the LBA seems to provide a slightly better fit (as indicated by the lower negative log-likelihoods), the diffusion model seems to recover some of the trends in the data somewhat better.

## Predicted Response Rates

We again first consider the predicted response rates (in black) for the two models.
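As a numerical complement to the graphical comparison that follows, the `objective` values of the two fit tables can also be juxtaposed directly (a small sketch; the names `diffusion`, `lba`, and `ll_diff` are introduced here only for illustration):

```{r, eval = FALSE}
left_join(fit_diffusion %>% select(id, instruction, diffusion = objective),
          fit_lba %>% select(id, instruction, lba = objective)) %>% 
  mutate(ll_diff = diffusion - lba) # positive values favor the LBA
```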
```{r, fig.height=6.5, fig.width=7, message=FALSE}
key <- simpleKey(text = c("data", "LBA", "Diffusion"), lines = TRUE)
key$lines$col <- c("grey", "black", "black")
key$lines$lty <- c(1, 1, 2)
key$points$col <- c("grey", "black", "black")
key$points$pch <- c(1, 0, 4)

p1 <- xyplot(mean ~ strength_bin|id + instruction, agg_rr98_bin, type = "b", 
             auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
             col = "grey")
p2 <- segplot(strength_bin ~ upper+lower|id + instruction, agg_rr98_bin, 
              auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
              col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
              draw.bands = FALSE, angle = 90, length = 0.05, ends = "both")
p3 <- xyplot(resp_prop ~ strength_bin|id + instruction, lba_pars_separate_l, type = "b", 
             auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
             col = "black", pch = 0)
p4 <- xyplot(resp_prop ~ strength_bin|id + instruction, pars_separate_l, type = "b", 
             auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
             col = "black", lty = 2, key = key, pch = 4)
p4 + as.layer(p2) + as.layer(p1) + as.layer(p3)
```

This comparison shows that both models make very similar predictions for the response rates; it is difficult to see substantial differences between them.

## Predicted Median RTs

Next, we also compare the two accounts for the median RTs. As before, data is shown in grey, and we begin with a plot for the speed condition, separated by response.

```{r, fig.height=6.5, fig.width=7, message=FALSE}
p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed" & quantile == "50%", 
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "speed" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed" & quantiles == "50%", 
             scales = list(y = list(limits = c(0.25, 0.5))), pch = 0)
p3 <- xyplot(rt ~ strength_bin|id + response, separate_pred, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed" & quantiles == "50%", 
             scales = list(y = list(limits = c(0.25, 0.5))), 
             col = "black", lty = 2, key = key, pch = 4)
p3 + as.layer(p2) + as.layer(p1) + as.layer(p1e)
```

This plot suggests that the diffusion model is better able than the LBA to predict changes in the median RTs across strength bins in the speed condition. While this leads to obvious misfit in certain cases ("dark" responses for `nh`), it appears to provide a generally better recovery of the patterns in the data.

The next plot shows the same data for the accuracy condition.
```{r, fig.height=6.5, fig.width=7, message=FALSE}
p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy" & quantile == "50%", 
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "accuracy" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy" & quantiles == "50%", pch = 0)
p3 <- xyplot(rt ~ strength_bin|id + response, separate_pred, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy" & quantiles == "50%", 
             scales = list(y = list(limits = c(0.2, 1.5))), 
             col = "black", lty = 2, key = key, pch = 4)
p3 + as.layer(p2) + as.layer(p1) + as.layer(p1e)
```

Here we make the same observation as for the speed condition: considerable changes in median RTs across strength bins are better captured by the diffusion model than by the LBA. As a consequence, in most cases the diffusion model makes more accurate predictions for conditions with low numbers of trials.

# References

- Ratcliff, R., & Rouder, J. N. (1998). Modeling Response Times for Two-Choice Decisions. _Psychological Science_, 9(5), 347--356. https://doi.org/10.1111/1467-9280.00067
rtdists/inst/doc/reanalysis_rr98.html0000644000175000017500000173272114164644437017570 0ustar nileshnilesh

Reanalysis of Ratcliff and Rouder (1998) with Diffusion Model and LBA

Henrik Singmann

2022-01-03

This vignette provides the R scripts for a reanalysis of Experiment 1 of Ratcliff and Rouder (1998). In contrast to the original analysis, which used RT bins, we will employ trial-wise maximum likelihood estimation.

The code heavily uses dplyr (vignette), tidyr (vignette), and purrr for data handling and lattice (see ?Lattice) and latticeExtra (specifically as.layer) for plotting. A throrough introduction to the former three packages is provided in R for Data Science by Wickham and Gorlemund, see especially Chapter 25.

Description of the Experiment

In the experiment, three participants were asked to decide whether the overall brightness of pixel arrays displayed on a computer monitor was “high” or “low”. To this end, the number of white versus black pixels (i.e., the brightness strength) was manipulated in 33 levels from 0% white pixels (level 0) to 100% white pixels (level 32). In addition, instruction manipulated speed and accuracy between blocks. In total, each participant contributed around 4000 trials per instruction condition.

The experiment contained another manipulation, the distribution (or brightness source) from which the pixel array was drawn. One distribution mean was on the “high” brightness side and one distribution mean was on the “low” brightness side. However, as the distributions were unbounded and overlapping, the same strength level could come from either distribution. Participant also received feedback whether or not they had picked the correct distribution (e.g., for the middle strength level 16 probability of belonging to either source was 50%). We do not further consider this manipulation in the following, which seems to be in line with the analysis of Ratcliff and Rouder (1998).

Descriptive data

As a first step, we load the data and then plot the probability with which each response (i.e., “dark” or “light”) is given as a function of strength and instruction condition. This clearly shows that there is a massive effect of strength on which response is given while at the same time the instruction only seems to have a minor effect and more on the extremes than in the middle.

require(rtdists)
require(dplyr)   # for data manipulations and looping
require(tidyr)   # for data manipulations
require(purrr)   # for data manipulations
require(lattice) # for plotting and corresponding themes
require(latticeExtra)
lattice.options(default.theme = standard.theme(color = FALSE))
lattice.options(default.args = list(as.table = TRUE))
options(digits = 3) # only three decimal digits
require(binom)  # for binomial confidence intervals

data(rr98)
rr98 <- rr98[!rr98$outlier,]  #remove outliers

# aggregate data for first plot:
agg_rr98 <- rr98  %>% group_by(id, instruction, strength) %>% 
  summarise(prop = mean(response == "dark"), mean_rt = mean(rt), median_rt = mean(rt)) %>% 
  ungroup()

xyplot(prop ~ strength|id, agg_rr98, group = instruction, type = "b", 
       auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses")

Next, we want to get an overview of the response time distributions. For this we look at the response times of the five quantiles (i.e., 0.1, 0.3, 0.5/median, 0.7, 0.9) across the strength manipulations. This time, we also separate the plots by condition as the speed condition resulted in, as expected, vastly shorter response times. These two plots reveal considerable differences between the two instruction conditions.

quantiles <- c(0.1, 0.3, 0.5, 0.7, 0.9)
## aggregate data for quantile plot
quantiles_rr98 <- rr98  %>% 
  group_by(id, instruction, strength) %>% 
  nest() %>% 
  mutate(quantiles = map(data, ~ as.data.frame(t(quantile(.x$rt, probs = quantiles))))) %>% 
  unnest(quantiles) %>% 
  gather("quantile", "rt",`10%`:`90%`) %>% 
  arrange(id, instruction, strength)

xyplot(rt ~ strength|id + instruction, quantiles_rr98, group = quantile, type = "b", 
       auto.key = list(lines = TRUE), ylab = "RT (in seconds)", subset = instruction == "speed")

xyplot(rt ~ strength|id + instruction, quantiles_rr98, group = quantile, type = "b", 
       auto.key = FALSE, ylab = "RT (in seconds)", subset = instruction == "accuracy")

In the speed conditions, response times were, as expected, generally fast and there seemed to be hardly any effect of strength. Only for one participant, nh, we can see a small increase in RTs for the higher quantiles for strength values near the middle. In contrast, in the accuracy condition strength has a considerable effect on response times for all participants. Again, this increase was especially strong for the slower responses (i.e., the higher quantiles). For those we see a strong inverse u-shaped effect, symmetrically around the middle – where the probability for each response is 50% – with very high response times for strength values near the middle.

However, as this plot is getting a little bit messy, we now bin the strength levels to remove noise which should provide a clearer overview. For this, we will construct five separate strength bins with approximately equal response behavior and comparable numbers of trials. This is similar to what was done originally by Ratcliff and Rouder (1998). The next table shows the number of trials per participant, bin, and response.

#bins <- c(-0.5, 5.5, 10.5, 13.5, 16.5, 19.5, 25.5, 32.5) # seven bins like RR98
bins <- c(-0.5, 10.5, 13.5, 16.5, 19.5, 32.5)
rr98$strength_bin <- cut(rr98$strength, breaks = bins, include.lowest = TRUE)
levels(rr98$strength_bin) <- as.character(1:7)

# aggregate data for response probability plot:
agg_rr98_bin <- rr98 %>% 
  group_by(id, instruction, strength_bin) %>%
  summarise(n1 = n(), 
            dark = sum(response == "dark"),
            light = sum(response == "light")) %>%
  ungroup() %>%
  mutate(prop = map2(dark, n1, ~ binom.confint(.x, .y, methods = "agresti-coull"))) %>% 
  unnest(prop)
## `summarise()` has grouped output by 'id', 'instruction'. You can override using the `.groups` argument.
knitr::kable(
  rr98 %>% group_by(id, instruction, strength_bin, response) %>%
    summarise(n = n()) %>%
    spread(strength_bin, n)
)
## `summarise()` has grouped output by 'id', 'instruction', 'strength_bin'. You can override using the `.groups` argument.
id instruction response 1 2 3 4 5
jf speed dark 840 467 338 168 146
jf speed light 59 66 284 481 1060
jf accuracy dark 893 496 310 94 30
jf accuracy light 3 44 294 486 1176
kr speed dark 800 379 277 129 102
kr speed light 69 121 350 505 1064
kr accuracy dark 884 469 276 80 9
kr accuracy light 9 71 296 489 1202
nh speed dark 973 552 422 110 81
nh speed light 21 55 276 565 1290
nh accuracy dark 971 535 431 81 23
nh accuracy light 4 32 250 542 1318

We first look again and the response proportions and see more clearly the difference between the strength conditions at the outer bins.

xyplot(mean ~ strength_bin|id, agg_rr98_bin, group = instruction, type = "b", 
       auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses")

Now we also look again at the RT quantiles and see more clearly the symmetrical inverse u-shaped increase in RTs for the middle bins described above.

## aggregate data for quantile plot
quantiles_rr98_bin <- rr98  %>% 
  group_by(id, instruction, strength_bin) %>% 
  nest() %>% 
  mutate(quantiles = map(data, ~ as.data.frame(t(quantile(.x$rt, probs = quantiles))))) %>% 
  unnest(quantiles) %>% 
  gather("quantile", "rt",`10%`:`90%`) %>% 
  arrange(id, instruction, strength_bin)

xyplot(rt ~ strength_bin|id + instruction, quantiles_rr98_bin, group = quantile, type = "b", 
       auto.key = list(lines = TRUE), ylab = "RT (in seconds)", subset = instruction == "speed")

xyplot(rt ~ strength_bin|id + instruction, quantiles_rr98_bin, group = quantile, type = "b", 
       auto.key = FALSE, ylab = "RT (in seconds)", subset = instruction == "accuracy")

With this clear pattern we now take a look at the RT distributions separately for both responses to see if they are simply mirror images of each other or not. For this, we overlay the two RT quantile plots for all trials in which the responses was “dark” in black (there are more “dark” pixels for the bins on the left side of the plot) with the same plot in which the responses was “light” in grey (there are more “light” pixels for the bins on the right side of the plot).

agg2_rr98_response <- rr98  %>% 
  group_by(id, instruction, strength_bin, response) %>% 
  nest() %>% 
  mutate(quantiles = map(data, ~ as.data.frame(t(quantile(.x$rt, probs = quantiles))))) %>% 
  unnest(quantiles) %>% 
  gather("quantile", "rt",`10%`:`90%`) %>% 
  arrange(id, instruction, response, strength_bin)

p1 <- xyplot(rt ~ strength_bin|id, agg2_rr98_response, group = quantile, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed" & response == "dark", layout = c(3,1))
p2 <- xyplot(rt ~ strength_bin|id, agg2_rr98_response, group = quantile, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed" & response == "light", col = "grey")
p1 + as.layer(p2)

p1 <- xyplot(rt ~ strength_bin|id, agg2_rr98_response, group = quantile, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy" & response == "dark", layout = c(3,1))
p2 <- xyplot(rt ~ strength_bin|id, agg2_rr98_response, group = quantile, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy" & response == "light", col = "grey")
p1 + as.layer(p2)

These two plots reveal an interesting pattern. In the speed condition (upper plot), we particularly see fast “errors” (i.e., responses to “dark” when there are more light pixels or the other way round). When “dark” is the more likely response (i.e. on the left side) the “light” responses in grey are faster and this is especially true for the lower quantiles. The opposite pattern seems to hold on the opposite side where “dark” responses in black are faster than “light” responses in grey. At intermediate bins the difference seems to be rather at the higher quantiles. This is particularly noticeable for participant kr for which for there seem to be slow “light”-“errors” just to the left to the middle bin and slow “right”-“errors” just to the right of the middle bin.

For the accuracy condition in the lower plot the pattern is noticeably different. First of all, there are only very few or no “error” responses in the extreme bins. Consequently, there does not seem to be any evidence for fast errors at the extremes (and also not at intermediate strength levels). However, we here more clearly see the slow errors at the intermediate bins. When “dark” is somewhat more probably (i.e., to the left of the middle) “light” responses are noticeably slower than “dark” responses. The same holds for “dark” responses if “light” is more probable. Importantly, this shows that the symmetrical inverse u-shaped increase for the middle bins described above is actually a consequence of a mixture of slow “errors”, two asymmetric increases for the two different responses.

Diffusion Model Analysis

We will follow Ratcliff and Rouder (1998) and analyze the data with the diffusion model. For this, we will fit a separate model to each participant and instruction condition. To do so, we will first create a new data set we will use for fitting. This data set will be nested with one row for each combinations of the variables:

d_nested <- rr98 %>% 
  group_by(id, instruction) %>% # we loop across both, id and instruction
  nest()
d_nested
## # A tibble: 6 x 3
## # Groups:   id, instruction [6]
##   id    instruction data                 
##   <fct> <fct>       <list>               
## 1 jf    accuracy    <tibble [3,826 x 11]>
## 2 jf    speed       <tibble [3,909 x 11]>
## 3 kr    accuracy    <tibble [3,785 x 11]>
## 4 kr    speed       <tibble [3,796 x 11]>
## 5 nh    speed       <tibble [4,345 x 11]>
## 6 nh    accuracy    <tibble [4,187 x 11]>

Like Ratcliff and Rouder we will fit the data to the strength bins instead of the full strength manipulation. We fit basically the full diffusion model (with the exception of \(s_{t0}\)) to each instruction condition which results in 10 parameters per participant and instruction condition:

  • 5 drift rates \(v\) (i.e., one per strength bin)
  • 1 boundary separation \(a\)
  • 1 non-decision time \(t_0\)
  • 1 drift rate variability \(s_v\)
  • 1 start point \(z\) (for ease in interpretation we parameterize this as the relative start point so that values different from 0.5 indicate a bias towards one of the responses)
  • 1 start point variability \(s_z\)

Like in Ratcliff and Rouder (1998), the two response boundaries are the two response options “dark” and “light”. To estimate the model we diverge from Ratcliff and Rouder and employ trial wise maximum likelihood estimation (i.e., no binning of responses).

For this, we simply need to have a wrapper function which returns us the negative summed log-likelihood of the data (i.e., RTs and corresponding responses) given a set of parameters. We need the negativ sum because most optimization function minimize whereas we want to obtain the maximum likelihood value. The following function for which we simply loop across drift rates will do so:

# objective function for diffusion with 1 a. loops over drift to assign drift rates to strength
objective_diffusion_separate <- function(pars, rt, response, drift, ...) {
  non_v_pars <- grep("^v", names(pars), invert = TRUE, value = TRUE)
  base_par <- length(non_v_pars)  # number of non-drift parameters
  densities <- vector("numeric", length(rt))
  for (i in seq_along(levels(drift))) {
    densities[drift == levels(drift)[i]] <- 
      ddiffusion(rt[drift == levels(drift)[i]], response=response[drift == levels(drift)[i]], 
                 a=pars["a"], t0=pars["t0"],  
                 sv=pars["sv"],
                 sz=if ("sz" %in% non_v_pars) pars["sz"] else 0.1,
                 z=if ("z" %in% non_v_pars) pars["z"]*pars["a"] else 0.5*pars["a"],
                 st0=if ("st0" %in% non_v_pars) pars["st0"] else 0, 
                 v=pars[base_par+i])
  }
  if (any(densities == 0)) return(1e6)
  return(-sum(log(densities)))
}

Note that the function is written in such a way that we could easily fix certain parameters without the necessity to change it (using if-then on the parameters names passed via pars).

Additionally, we also need a function that generates a set of random starting values. And, as any random set of starting values may be impossible, another wrapper function that generates starting values until a set of valid starting values is found and then passes those to the optimization routine. As optimization routine we will be using nlminb. These functions are given next and are specified in a way that they will be usable for other model variants for this data (e.g., fixing parameters).

# function that creates random start values, also 
get_start <- function(base_par, n_drift = 5) {
  start1 <- c(
    a = runif(1, 0.5, 3),
    a_1 = runif(1, 0.5, 3), 
    a_2 = runif(1, 0.5, 3),
    t0 = runif(1, 0, 0.5), 
    z = runif(1, 0.4, 0.6), 
    sz = runif(1, 0, 0.5),
    sv = runif(1, 0, 0.5),
    st0 = runif(1, 0, 0.5),
    d = rnorm(1, 0, 0.05)
  )
  start2 <- sort(rnorm(n_drift), decreasing = FALSE)
  names(start2) <- paste0("v_", seq_len(n_drift))
  c(start1[base_par], start2)
}

# function that tries different random start values until it works:
ensure_fit <- 
  function(data, start_function, objective_function, 
           base_pars, n_drift = 5, n_fits = 1, 
           lower = c(rep(0, length(base_pars)), -Inf,
                     rep(-Inf,length(start_function(base_pars))-length(base_pars)))) {
    best_fit <- list(objective = 1e+06)
  for (i in seq_len(n_fits)) {
    start_ll <- 1e+06
    #browser()
    while(start_ll == 1e+06) {
      start <- start_function(base_pars, n_drift=n_drift)
      start_ll <- objective_function(start, 
                                     rt = data$rt, response = data$response_num, 
                                     drift = factor(data$strength_bin, seq_len(n_drift)), 
                                     instruction = data$instruction)
    }
    cat("\nstart fitting.\n") # just for information to see if it is stuck
    
    fit <- nlminb(start, objective_function, 
                  rt = data$rt, response = data$response_num, 
                  drift = factor(data$strength_bin, seq_len(n_drift)), 
                  instruction = data$instruction,
                  lower = lower)
    
    if (fit$objective < best_fit$objective) best_fit <- fit
  }
  out <- as.data.frame(t(unlist(best_fit[1:3])))
  colnames(out) <- sub("par.", "", colnames(out))
  out
}

With these functions in place, we now simply need to loop over participants and items to obtain the fit. The simplest way is perhaps to use the combination of purrr:map and dplyr::mutate as shown here:

fit_diffusion <- d_nested %>% 
  mutate(fit = 
           map(data, 
               ~ensure_fit(data = ., start_function = get_start, 
                            objective_function = objective_diffusion_separate, 
                            base_pars = c("a", "t0", "sv", "sz", "z")))) %>% 
  unnest(fit)

On Unix-like systems (i.e., Linux and Mac) we can easily use the inbuild multicore functionality using parallel::mclapply to distribute fitting of the different parts across different cores:

require(parallel)

fit_diffusion <- d_nested
fit_diffusion$fit <- 
  mclapply(d_nested$data, function(x) 
    ensure_fit(data = x, start_function = get_start,
               objective_function = objective_diffusion_separate, 
               base_pars = c("a", "t0", "sv", "sz", "z")),  
    mc.cores = 2)
fit_diffusion <- unnest(fit_diffusion, fit)

The following table gives the parameter values, the negative summed log-likelihoods (i.e., objective, where smaller is better), and the convergence code of the optimization algorithm (where 0 indicates no problem) obtained from this fit:

fit_diffusion$data <- NULL
if (!("st0" %in% colnames(fit_diffusion))) fit_diffusion$st0 <- 0
if (!("z" %in% colnames(fit_diffusion))) fit_diffusion$z <- 0.5
if (!("sz" %in% colnames(fit_diffusion))) fit_diffusion$sz <- 0.1
knitr::kable(fit_diffusion)
id instruction a t0 sv sz z v_1 v_2 v_3 v_4 v_5 objective convergence st0
jf accuracy 1.979 0.221 0.496 0.208 0.488 -2.39 -1.52 -0.018 1.22 2.07 1301 0 0
jf speed 0.877 0.195 0.000 0.222 0.468 -2.92 -2.28 -0.075 1.83 2.94 -3200 0 0
kr accuracy 2.196 0.212 1.077 0.250 0.549 -3.55 -1.60 -0.070 1.23 3.12 1228 0 0
kr speed 0.819 0.196 0.196 0.216 0.475 -2.99 -1.66 0.550 2.18 3.25 -3489 0 0
nh speed 1.130 0.196 0.000 0.188 0.524 -4.03 -2.83 -0.746 1.87 3.02 -3318 0 0
nh accuracy 1.843 0.227 1.001 0.547 0.519 -3.80 -2.35 -0.620 1.66 3.38 -245 0 0

We can see from these values that there is a large effect of instruction on \(a\). However, instruction also has effects on other parameters:

  • \(t_0\) is consistently larger in the accuracy compared to the speed condition, although this effect is small.
  • \(s_v\) is estimated at 0 or very low in the speed condition, but 0.5 or 1 in the accuracy condition. This is consistent with the absence of slow “errors” in the speed condition.
  • \(s_z\) is consistently larger in the speed conditions consistent with the presence or more fast “errors” in the speed than in the accuracy condition.
  • \(v\) appears to be more extreme (i.e., smaller for \(v_1\) and \(v_2\) and larger for \(v_4\) and \(v_5\)) in the speed compared to the accuracy condition.

Graphical Model Fit

Predicted Response Rates

To evaluate the fits graphically we first compare the actual response rates for the two responses with the predicted responses rates. The grey lines and points show the observed data and the error bars are binomial confidence intervals. The black lines and points show the predicted response rates.

# get predicted response proportions
pars_separate_l <- fit_diffusion %>% gather("strength_bin", "v", starts_with("v"))
pars_separate_l$strength_bin <- factor(substr(pars_separate_l$strength_bin, 3,3), 
                                       levels = as.character(seq_len(length(bins)-1)))
#pars_separate_l <- inner_join(pars_separate_l, agg_rr98_bin)
pars_separate_l <- pars_separate_l  %>% group_by(id, instruction, strength_bin) %>%
  mutate(resp_prop = pdiffusion(rt=Inf, response="lower", 
                                a=a, v=v, t0=t0, sz = sz, z=a*z, sv=sv, st0=st0))

p1 <- xyplot(mean ~ strength_bin|id + instruction, agg_rr98_bin, type = "b", auto.key = 
               list(lines = TRUE), ylab = "Proportion of 'dark' responses", col = "grey")
p2 <- segplot(strength_bin ~ upper+lower|id + instruction, agg_rr98_bin, 
              auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
              col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
              draw.bands = FALSE, angle = 90, length = 0.05, ends = "both")
p3 <- xyplot(resp_prop ~ strength_bin|id + instruction, pars_separate_l, type = "b", 
             auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
             col = "black")
p2 + as.layer(p1) + as.layer(p3)

This figure show that overall the model can predict the actual response rates very accurately. There are only a few minor deviations.

Predicted Median RTs

Next we compare the central tendency of the RTs with the prediction. For this we evaluate the CDF at the quantiles of the predicted response proportions. Again, the data is shown in grey (and error bars show the standard errors of the median) and the predicted RTs in black. We first show the predictions for the speed condition, separated by response.

# get predicted quantiles (uses predicted response proportions)
separate_pred_dark <- pars_separate_l %>% do(as.data.frame(t(
  qdiffusion(quantiles*.$resp_prop, response="lower", 
             a=.$a, v=.$v, t0=.$t0, sz = .$sz, z = .$z*.$a, sv=.$sv, st0=.$st0)))) %>% 
  ungroup() %>% gather("quantiles", "dark", V1:V5)
separate_pred_light <- pars_separate_l %>% do(as.data.frame(t(
  qdiffusion(quantiles*(1-.$resp_prop), response="upper", 
             a=.$a, v=.$v, t0=.$t0, sz = .$sz, z = .$z*.$a, sv=.$sv, st0=.$st0)))) %>% 
  ungroup() %>% gather("quantiles", "light", V1:V5)

#separate_pred_light %>% filter(is.na(light))
separate_pred <- inner_join(separate_pred_dark, separate_pred_light)
separate_pred$quantiles <- factor(separate_pred$quantiles, 
                                  levels = c("V5", "V4", "V3", "V2", "V1"), 
                                  labels = c("90%", "70%", "50%", "30%", "10%"))
separate_pred <- separate_pred %>% gather("response", "rt", dark, light)

# get SE for observed quantiles
agg2_rr98_response_se <- rr98  %>% group_by(id, instruction, strength_bin, response) %>% 
  summarise(se_median = sqrt(pi/2)*(sd(rt)/sqrt(n()))) %>%
  ungroup()

# calculate error bars for quantiles.
agg2_rr98_response <- left_join(agg2_rr98_response, agg2_rr98_response_se)
agg2_rr98_response <- agg2_rr98_response %>%
  mutate(lower = rt-se_median, upper = rt+se_median)


p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed" & quantile == "50%", 
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "speed" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, separate_pred, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed" & quantiles == "50%", 
             scales = list(y = list(limits = c(0.25, 0.5))))
p2 + as.layer(p1) + as.layer(p1e)

Again, the model seems to be overall able to describe the general pattern quite well. However, there are some visible misfits for participants jf and nh.

Next shows the same plot for the accuracy condition. Here we again see that the model is able to predict the pattern in the data quite well. While there also seems to be quite some misfit for participant nh this only appears in conditions with very little trials as indicated by the large error bars.

p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy" & quantile == "50%", 
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "accuracy" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, separate_pred, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy" & quantiles == "50%", 
             scales = list(y = list(limits = c(0.2, 1.5))))
p2 + as.layer(p1) + as.layer(p1e)

All quantiles

Next, we investigate the full RT distribution by comparing observed and predicted quantiles. The observed quantiles are again displayed in grey and the predictions in black. The first plot shows the speed condition separated by response.

p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, group = quantile, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed", layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "speed")
p2 <- xyplot(rt ~ strength_bin|id + response, separate_pred, group = quantiles, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed", scales = list(y = list(limits = c(0.2, 0.9))))
p2 + as.layer(p1) + as.layer(p1e)

This plot shows some clear misfits for the diffusion model, particularly in the upper quantiles, but there are also misfits in the lower quantiles.

The next plot shows the accuracy condition separated by response. Here it appears that the diffusion model provides an overall better account. However, it is important to consider the different y-axis scaling of both plots.

p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, group = quantile, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy", layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "accuracy")
p2 <- xyplot(rt ~ strength_bin|id + response, separate_pred, group = quantiles, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy", scales = list(y = list(limits = c(0.1, 3.0))))
p2 + as.layer(p1) + as.layer(p1e)

Overall the diffusion model provides a good account of the data. Only when considering all quantiles do we see some clear misfits. Nevertheless, the general trends in the data are well recovered; the only exceptions are conditions with very low numbers of trials.

LBA analysis

Next, we fit the LBA model to the data. For this, we use an LBA model with the same number of parameters. To make the model identifiable, we fix the sum of the two accumulators' drift rates to 1 (i.e., for strength bin \(i\) the drift rate means are \(v_i\) and \(1 - v_i\)). Specifically, the model has the following 10 parameters per participant and instruction condition:

  • 5 drift rates \(v\) (i.e., one per strength bin)
  • 2 maximum starting points \(A\) (i.e., one for each accumulator)
  • 1 non-decision time \(t_0\) (i.e., one per participant)
  • 1 drift rate variability \(s_v\)
  • 1 response threshold \(b\) (parameterized as an increment to the larger of the two \(A\))

To fit the model we need an objective function wrapping the LBA PDF and a new function for generating the correct starting values.

# objective function for LBA with 2 A. loops over drift to assign drift rates to strength
objective_lba_separate <- function(pars, rt, response, drift, ...) {
  non_v_pars <- grep("^v", names(pars), invert = TRUE, value = TRUE)
  base_par <- length(non_v_pars)  # number of non-drift parameters
  densities <- vector("numeric", length(rt))
  for (i in seq_along(levels(drift))) {
    if (sum(drift == levels(drift)[i]) == 0) next
    densities[drift == levels(drift)[i]] <- dLBA(
      rt[drift == levels(drift)[i]], 
      response=response[drift == levels(drift)[i]],
      A = list(pars["a_1"], pars["a_2"]), 
      b = max(pars["a_1"], pars["a_2"])+pars["b"], 
      t0 = pars["t0"], 
      mean_v = c(pars[i], 1-pars[i]), 
      sd_v = pars["sv"], silent=TRUE)
  }
  if (any(densities == 0)) return(1e6)  # penalize zero densities (log would be -Inf)
  return(-sum(log(densities)))
}

# function that creates random start values
get_start_lba <- function(base_par, n_drift = 10) {
  start1 <- c(
    a = runif(1, 0.5, 3),
    a_1 = runif(1, 0.5, 3), 
    a_2 = runif(1, 0.5, 3),
    t0 = runif(1, 0, 0.5), 
    b = runif(1, 0, 0.5), 
    sv = runif(1, 0.5, 1.5),
    st0 = runif(1, 0, 0.5)
  )
  start2 <- sort(rnorm(n_drift), decreasing = FALSE)
  names(start2) <- paste0("v_", seq_len(n_drift))
  c(start2, start1[base_par])
}
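
As a quick sanity check, we can evaluate the objective function once at a set of random start values before starting the full optimization. This is a minimal sketch; it assumes the d_nested data created above, whose data list-column contains the rt, response_num, and strength_bin variables:

start <- get_start_lba(c("a_1", "a_2", "t0", "b", "sv"), n_drift = 5)
objective_lba_separate(start, rt = d_nested$data[[1]]$rt, 
                       response = d_nested$data[[1]]$response_num, 
                       drift = factor(d_nested$data[[1]]$strength_bin, levels = seq_len(5)))
# returns a single finite value (the negative summed log-likelihood) or the penalty 1e6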

With this, we simply need to loop across participants and instructions to estimate the LBA. Again, we need several fitting runs with different random start values to reach the maximum likelihood estimate (i.e., the global optimum).

fit_lba <- d_nested %>% 
  mutate(fit = 
           map(data, 
               ~ensure_fit(data = ., start_function = get_start_lba, 
                      objective_function = objective_lba_separate, 
                      base_pars = c("a_1", "a_2", "t0", "b", "sv"),
                      lower = c(rep(-Inf, 5), rep(0, 5)),
                      n_drift = 5, n_fits = 10))) %>% 
  unnest(fit)

Again, on Unix-like systems (i.e., Linux and Mac) we can use multiple cores via parallel::mclapply:

require(parallel)

fit_lba <- d_nested
fit_lba$fit <- 
  mclapply(d_nested$data, function(x) 
    ensure_fit(data = x, start_function = get_start_lba, 
                      objective_function = objective_lba_separate, 
                      base_pars = c("a_1", "a_2", "t0", "b", "sv"),
                      lower = c(rep(-Inf, 5), rep(0, 5)),
                      n_drift = 5, n_fits = 10),  
    mc.cores = 2)
fit_lba <- unnest(fit_lba, fit)

The following table gives the parameters and the negative summed log-likelihoods obtained from the LBA fit (with \(b\) already correctly transformed by adding the maximum \(A\)). Note that some of the parameters may differ slightly for id = kr and instruction = accuracy, although the value of the objective function is identical to the one reported here. This suggests that the likelihood surface is either quite shallow near the MLE or that it has at least two peaks of similar height. The fact that the convergence code for this data set is 1 instead of 0 also suggests problems in finding the global optimum. In any case, running the optimization multiple times and comparing the results should reveal such problems.
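
For example, two independent sets of fitting runs can be compared directly. The following is a minimal sketch assuming a second object fit_lba2 obtained by rerunning the fitting code above:

all.equal(as.data.frame(fit_lba), as.data.frame(fit_lba2), tolerance = 0.03)
# TRUE (up to the tolerance) indicates both runs recovered essentially the same estimates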

knitr::kable(fit_lba)
id instruction v_1 v_2 v_3 v_4 v_5 a_1 a_2 t0 b sv objective convergence
jf accuracy 0.964 0.793 0.508 0.259 0.096 0.359 0.331 0.049 0.327 0.293 1171 0
jf speed 0.586 0.565 0.494 0.439 0.406 0.084 0.067 0.014 0.133 0.100 -3855 0
kr accuracy 3.580 2.101 0.526 -0.812 -2.413 0.596 0.885 0.107 0.863 1.383 1358 1
kr speed 0.619 0.563 0.471 0.411 0.368 0.075 0.066 0.083 0.096 0.136 -4033 0
nh speed 0.678 0.613 0.529 0.425 0.378 0.065 0.076 0.038 0.155 0.123 -4235 0
nh accuracy 1.448 1.100 0.644 0.022 -0.381 0.387 0.405 0.144 0.238 0.456 -227 0

The negative log-likelihood (column objective) shows that the LBA provides a better account for four of the six data sets (because it is the negative log-likelihood, smaller values are better). The diffusion model only provides a better account for the kr and nh accuracy conditions. In terms of the parameter estimates we see a pattern similar to the one for the diffusion model:

  • Instruction shows the expected effect on \(b\).
  • Instruction also shows an effect on \(A\), which is larger in the accuracy than in the speed condition.
  • Instruction seems to have a small effect on \(t_0\) in the same direction.
  • Instruction also affects \(s_v\) in the same direction.
  • Instruction also affects \(v\): the drift rates are more extreme in the accuracy than in the speed condition.

Graphical Model Fit

The fact that the LBA provides a slightly better fit is also visible in the graphical fit assessment.

Predicted Response Rates

We again first consider the predicted response rates (in black); these, too, are highly accurate.

# get predicted response proportions
lba_pars_separate_l <- fit_lba %>% gather("strength_bin", "v", starts_with("v"))
lba_pars_separate_l$strength_bin <- factor(substr(lba_pars_separate_l$strength_bin, 3,3), 
                                       levels = as.character(seq_len(length(bins)-1)))
#pars_separate_l <- inner_join(pars_separate_l, agg_rr98_bin)
lba_pars_separate_l <- lba_pars_separate_l %>% group_by(id, instruction, strength_bin) %>%
  # the defective CDF evaluated at rt = Inf gives the predicted response proportion
  mutate(resp_prop = pLBA(rt=Inf, response=1, A=list(a_1, a_2), sd_v=sv,
                          mean_v=c(v, 1-v), t0=t0, b=max(a_1, a_2)+b, silent=TRUE))

p1 <- xyplot(mean ~ strength_bin|id + instruction, agg_rr98_bin, type = "b", auto.key = 
               list(lines = TRUE), ylab = "Proportion of 'dark' responses", col = "grey")
p2 <- segplot(strength_bin ~ upper+lower|id + instruction, agg_rr98_bin, 
              auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
              col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
              draw.bands = FALSE, angle = 90, length = 0.05, ends = "both")
p3 <- xyplot(resp_prop ~ strength_bin|id + instruction, lba_pars_separate_l, type = "b", 
             auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
             col = "black")
p2 + as.layer(p1) + as.layer(p3)

Predicted Median RTs

Again, the data is shown in grey (and error bars show the standard errors of the median) and the predicted RTs in black. We first show the predictions for the speed condition, separated by response.

# get predicted quantiles (uses predicted response proportions)
lba_separate_pred_dark <- lba_pars_separate_l %>% do(as.data.frame(t(
  qLBA(quantiles*.$resp_prop, response=1, A=list(.$a_1, .$a_2), sd_v=.$sv,
       mean_v=c(.$v, 1-.$v), t0=.$t0, b=max(.$a_1, .$a_2)+.$b, silent=TRUE)))) %>% 
  ungroup() %>% gather("quantiles", "dark", V1:V5)
lba_separate_pred_light <- lba_pars_separate_l %>% do(as.data.frame(t(
  qLBA(quantiles*(1-.$resp_prop), response=2, A=list(.$a_1, .$a_2), sd_v=.$sv,
       mean_v=c(.$v, 1-.$v), t0=.$t0, b=max(.$a_1, .$a_2)+.$b, silent=TRUE)))) %>% 
  ungroup() %>% gather("quantiles", "light", V1:V5)

#separate_pred_light %>% filter(is.na(light))
lba_separate_pred <- inner_join(lba_separate_pred_dark, lba_separate_pred_light)
lba_separate_pred$quantiles <- factor(lba_separate_pred$quantiles, 
                                  levels = c("V5", "V4", "V3", "V2", "V1"), 
                                  labels = c("90%", "70%", "50%", "30%", "10%"))
lba_separate_pred <- lba_separate_pred %>% gather("response", "rt", dark, light)

p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed" & quantile == "50%", 
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "speed" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed" & quantiles == "50%", 
             scales = list(y = list(limits = c(0.25, 0.5))))
p2 + as.layer(p1) + as.layer(p1e)

The next plot shows the same comparison for the accuracy condition.

p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy" & quantile == "50%", 
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "accuracy" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy" & quantiles == "50%", 
             scales = list(y = list(limits = c(0.2, 1.5))))
p2 + as.layer(p1) + as.layer(p1e)

Overall the LBA is able to describe the central tendency relatively well. The only considerable misfit is evident at the extreme bins with few responses (i.e., large error bars).

All quantiles

Next, we investigate the full RT distribution by comparing observed and predicted quantiles. The observed quantiles are again displayed in grey and the predictions in black. The first plot shows the speed condition separated by response.

p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, group = quantile, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed", layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "speed")
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, group = quantiles, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed", scales = list(y = list(limits = c(0.2, 0.6))))
p2 + as.layer(p1) + as.layer(p1e)

The next plot shows the accuracy condition separated by response.

p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, group = quantile, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy", layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "accuracy")
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, group = quantiles, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy", scales = list(y = list(limits = c(0.1, 3.3))))
p2 + as.layer(p1) + as.layer(p1e)

These two plots show that not only the central tendency but also the other quantiles are described quite well.

Comparing Model Fit

Finally, we graphically compare the fit of the two models. Here we can see that while the LBA seems to provide a slightly better fit (as indicated by the lower negative log-likelihoods), the diffusion model seems to recover some of the trends in the data somewhat better.

Predicted Response Rates

We again first consider the predicted response rates (in black) for the two models.

key <- simpleKey(text = c("data", "LBA", "Diffusion"), lines = TRUE)
key$lines$col <- c("grey", "black", "black")
key$lines$lty <- c(1, 1, 2)
key$points$col <- c("grey", "black", "black")
key$points$pch <- c(1, 0, 4)

p1 <- xyplot(mean ~ strength_bin|id + instruction, agg_rr98_bin, type = "b", auto.key = 
               list(lines = TRUE), ylab = "Proportion of 'dark' responses", col = "grey")
p2 <- segplot(strength_bin ~ upper+lower|id + instruction, agg_rr98_bin, 
              auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
              col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
              draw.bands = FALSE, angle = 90, length = 0.05, ends = "both")
p3 <- xyplot(resp_prop ~ strength_bin|id + instruction, lba_pars_separate_l, type = "b", 
             auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
             col = "black", pch = 0)
p4 <- xyplot(resp_prop ~ strength_bin|id + instruction, pars_separate_l, type = "b", 
             auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
             col = "black", lty = 2, key = key, pch=4)
p4 + as.layer(p2) + as.layer(p1) + as.layer(p3)

Comparing the fit of the two models in this way shows that both make very similar predictions for the response rates; it is difficult to see major differences between them.

Predicted Median RTs

Next, we also compare the two accounts of the median RTs. As before, the data is shown in grey, and we begin with a plot for the speed condition, separated by response.

p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed" & quantile == "50%", 
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "speed" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed" & quantiles == "50%", 
             scales = list(y = list(limits = c(0.25, 0.5))), pch = 0)
p3 <- xyplot(rt ~ strength_bin|id + response, separate_pred, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "speed" & quantiles == "50%", 
             scales = list(y = list(limits = c(0.25, 0.5))),
             col = "black", lty = 2, key = key, pch=4)

p3 + as.layer(p2) + as.layer(p1) + as.layer(p1e)

This plot suggests that the diffusion model is better able than the LBA to predict changes in the median RTs across strength bins in the speed condition. While this leads to obvious misfit in certain cases (e.g., dark responses for nh), it appears to recover the patterns in the data better overall.

The next plot shows the same comparison for the accuracy condition.

p1 <- xyplot(rt ~ strength_bin|id+response, agg2_rr98_response, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy" & quantile == "50%", 
             layout = c(3,2), col = "grey")
p1e <- segplot(strength_bin ~ upper+lower|id+response, agg2_rr98_response, 
               auto.key = list(lines = TRUE), ylab = "Proportion of 'dark' responses", 
               col = "grey", horizontal = FALSE, segments.fun = panel.arrows,  
               draw.bands = FALSE, angle = 90, length = 0.05, ends = "both", 
               subset = instruction == "accuracy" & quantile == "50%", layout = c(3,2))
p2 <- xyplot(rt ~ strength_bin|id + response, lba_separate_pred, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy" & quantiles == "50%", 
             pch = 0)
p3 <- xyplot(rt ~ strength_bin|id + response, separate_pred, type = "b", 
             auto.key = list(lines = TRUE), ylab = "RT (in seconds)", 
             subset = instruction == "accuracy" & quantiles == "50%", 
             scales = list(y = list(limits = c(0.2, 1.5))),
             col = "black", lty = 2, key = key, pch=4)

p3 + as.layer(p2) + as.layer(p1) + as.layer(p1e)

Here we make the same observation as for the speed condition: considerable changes in median RT across strength bins are better captured by the diffusion model than by the LBA. As a consequence, in most cases the diffusion model makes more accurate predictions for conditions with low numbers of trials.

References

GÆNC^Hc :itH"ӯݚqd̅ CX;1'-ziwc#N( {3<6*\72~J>ZW,~: gg02/U4Hܑ}:䘧`hw(1V$,w"yc=C?\>M; >.S47Qvy0IT!E~z rPcwwNk2ҍـ'=f0}UA4g1,rSSڈ8ظ!ΞC)o7@l`bųk ISJRet)٘AUl^ԅkH Arg.V32$B dJrjORPbGN6\ 8<KTWHUuq{~'`gzxdȧ IfM^xbсZ0 ==8Dя{ibXBe3s庽~'`tb\' {DtK~ ?G52ObH ;^/U8>7k4xzXu:uG3"T5o;ʋU_!Rl+rIa$}"nZ'o%0[h/G쇣ǹ|bXÞ̶]mщB\Î}Q_etyUgsxGU׋NEN~&dLć@  ջ`'(:d @Iq7{U|s ֓ל?9nb5xTہx@ˀ̾MjeDWI)|K\?{%eW7SN淽ЫВH÷cQ1v/!& ЂZ.t7.54lhOI @RrЪw=FN.mt- 54d㘠)餷T=m( [ה 66Q[D.,?)`OˢpVкPVں ereGu 4^<3rK߹|us_Os|]_L3#vT Xi7+FVKahZB# p$g)&E_]mUv.R#4~Za7^tʣ[[cPo#]MZ2P"̣Fa0ԙnH =px†‡# {!iyVZ9 ;tY-~P2% N)cL@JAm|qh2\uD |FHxn()7ohc' X(l{/h8O!0J5i n/i|Z2qSȜ>̳P556d7~/Zf3S.jRk!cJIvG{~͒_T96C 4!Otl{ qEr$'_K,FwdJ2]/jU|a[9BϮ9 C#cB IvdNB6.e@eYۯ:ф^͛tg- ijX27&U?xdz=[߫;㜅' f^ ]YBehN|M;O#4Xi\lR3hOx߾Cha~ E^~\d^wdղspۑ\ZR~$Z ;|jZ0FO2 Z".yhi%+ (@ bF YZ%höjY0sjmѩh,/{8ܣm/Xv=‚Phw`ȝ; Rι\n9KX*>-ufF0BaCkgvvklxmJ('.qg9r+<^FUW?k&.S`j٠}Qܔ "mBќ+pi‘h%e%΂R?mQLӶʕh!g u777Z˯!Tu!OP,֋,|SwBꪗI;k Qw0e٨&P@d}A&x=Ҡ$afXtυ~Yg2\8k8'g^fuk6͒O*p񐎚D7+V.C:Fz ͌BafjбxɶBP)ϟ0!D VH4o>d+B95MNu),!K,y&Aw;u$.ċG促]0ANKDyL1f?aٱ׍(W9WZn O7ca+sz^h#V2-zfJ#~= k'KW6?η5G7C"WDB?%ޏIH)9lpU~tAtzd[W*H,ÌK4<aEܠI)w) kG , h2ߝG`w譎lAc`$-Qp^!IٰpMꄽwyW: ߍh_O%#bШVyЪtL?ŇUTfpi [&GmmZ9=_i5Uf7TLMP3p.w 鲥QrdƟ?+Je Tw%KQf&Id$NuMḵ h#{0ݙZt|zg';$p :$A0%#+ڻ/⌔G8U޵ajU=jhUoHrkq4o!us/@0•}pDs'7]j>3?Q$ylC,+H#jPeY9\k[4+5^l Ml9U ^m'񑒺HJ,|&u3&~M@D, 2~Z/ZƜo6W{lH)"Lm b@%O;IH^ORٽף`Qa?䘍j` h_6:D#z}.G̮){#l>P~k-T_F#ɕ3v;ňeQk颒Qs=wl |Lg_ ~qo 6 :b5D]Plcz~HML gaG:,QaXǍ!ژR:[|Qf{ox(B4FetF1׽Wu},lrGJw'v`T!K11ۑ1JȽDmݞ>Im dűaeSӮ`3iQ@чQNΖY?dGfAuFY0ZBڝ'dkVWqA[pJ^wůMikȆ["L0Q74]A1FTy8kxtA\+&B9}P!+|o#e @Ц9 ~4g0e)#xPAS dğ"Iy^5@ق -,>}O&zۇ7٬^+  ` X4[b(hzy C Kԫ j5kRӪIi)*Fɏ7r| 1=&"/TxG:-jg]suflTubr8Ndnw~!tp5&؈QK]V._c̄~&9J]-(3C8 B=\7b.\QO*( rOtM\?h>bc6+j}ҟv:wFz7KOpd] twhLىka}aո PfL/anE2#Qvgkw%>+W\D,)+e^|8!Vul/%@;f6} :J\j$݂Je@;jI6o 'X|m;O;.F,h-, & K?N\4k–0gKw?$H4=C8ytxMp v Gu-$s,bBcM#TBaN/ljXbj ӯ6WSX^ۣ^T1s{A4-nfnCԻk +p,߱jJ`|hKTJJЋy.ӏMZ2s )3!37+ cTԞwm~n3@6r˯v0WnOUS/nFE.n 9bt0b@l~O]t>R`ݬtfZ`8M؛TW uA$plڡq~:W2(قT}WH%?Ū̧ #cN:Zl^V%F[ %n62n#8lpP|E8IB8&o,FZ$!,V,q;ԟ{ C]<|YШL{ID+<xAFa|Ls˿Ӏ9([EaAuI`*Bd**n M0ye'.?NU hOOMR"uR\xWSR?ot -_#IRIɍ6 eh6馁1L3֟e4Pjϻa2}DKɂ%ˋ'1@S?5=YUu.NBi/6dvQ _5ÊV6Q p Ʋ]Yf$4W8IP|QL{{?z% ZYwC=tVl:3 X{b(~ƴa@˧\(ķ[1uc IٻV9b Q%mk*>ΰP',pU)jD,ZVG9k^\_Mcl_E3 պC8 Em` ,{ >3_TyƱEj?FM{S)@_uȗa]VZ[YLAn-^٧,JTI|Yb~i{kiDIa^eQvbpFrӵ+~(Ұ>me#59} fX1 4 E]{KV"|A}5v^٪Lp0Q/V}PMup U//=>q]] >uWUh׀Ӿ IJǛAw~`"c5l5{q&S|QPo8e [.jQh ;fkkBZE5M4*}1!`"~=E2 "iG,`or:8O \&ȉ/e>D-8%=a1Ã2l#CcߟT#fצ6v@|Jhۡ| =#^7YF@Gۀ*p ٣SEn!dN(lmϸ׽faSb’h?s>Բ#mG SOGN%%p& Yo]U Ai6KLcx kw[Sn}93tI&>ٿ0// wtĈ{S>}7+9 . *Av.:qpw9Uz" ̓,j*UgԈ2>sjV3. bOAVoI02VᵻKX,F5B;ĥ"vÉsGv ͰC^hGIyP(c[ǰQΊ@ף|"󋲸#yB\վ>muבčZH k `>oB% ZX%"L.RQ}*Rx9Dt^0|,v<.,𒱲yλAݚ*V m32E KYHn0q݇da(l{bUjJXSg @CQ8atT>v[B"qǤSj9=:Rm7工d&gАǭ3{.S+qo0FbLEJ71'- ONhSTGUzYQ&( Ti!#7\$UA9i&wVf/z ޑ16n~mĬ[l_^LYddh׀BH4<+[TX5, Yp,`|=XL*ǥE8QGflb>YS$3#*fo (3de זڹf"Iwa$7XhQLYI.+zLp!ti~f߉+~ҌT9 b>B+C YEᕤa/ OOKϙ[M`lq.*>lhHǛ/hI52Cv6K{|nfFi'ЏdFDFJ0 2$5x#Zv9۝w?(/h^m/dd/`A jOMPR b21N'M`jyͤnG ه:N`nB /VIc8WwBpHo1w 0< /p clղAUh67>l V* Rؙtd'DIyz}b=z̡ ѣqݘLaU0TxiƧ"#hPTqW4 AJ%h[Å)) َ9xt72MâP^_C9P:,I0:6F٬b*pŰdlT?\=$Q'A8[֪™`Qۘi(q_LY1T^!?+3xfwz5.>Ge$E=˥ȇѻ;4e <=ksWށJWJ'XTz; :_ER^*~U#5-JceK͐AiEa4T௉Į3{P.8H5*F7M-PپBPMN m|Sod0x^"AŘ$Jh^&hbM3xC8Ӻ'r*fJhFһ,yzm_uVpG>u;rZ-M ZlhM nZŜIzC9,%h{'n&rm'Sh_? 
/[2¡}8h]U%#A:P oENkV:OGE]Hт%&qlp j>%>YӂK#*Z`|T]:|δԜ:.ޖer\u^t.`_j$o%| 8ɜi_\pgdI|VV)^rc0 #O !|'bEweMf3VYeS+!SjӬ-W h/͢^nDYySѦaHW/q s-V%!5mO"TThA 0n+ǮLl-mRIyq3u %ظR# 4Vra%p>|l(&"ft9l7Յf|}̶ Wt!9òt_Oxտ3*nprxeZ.$ȉA.*F̂$ħ5*21(w?So}͡fz8bdF9}p㿥i:ȣZd~Fn?Aa?WncXʁk0LT/h0]muDKԕkf}*<L&46KD֠ .oMʀ]gMi1 s[kQ'FiW/ ب1?-wBe (0mx i3Rׁ5J34!ĕG+Fl : &vKhӯ;LwHD'VL9 2n*9Q7KxS}}K t@icsZ u%C++T;Aϖ49$i˙&x}AYM,m*7e AG KlM [ltq~pmn'/*|1AmjsjePA$SnhM`I;T;ssF9 ;Gtj MѥӑNn<(lee >6cQ,RCN3Gv=PC@.O`+`} ɔF&ղčo }phLh8~$XgIu{!bm8*lpZ֬*4ʔoXR,)u2ks2,6&/YRn WYc51&QYrR#%c8h@D$Gc@f5Bgik}qqj% ,!^!2Dewv壍17COב;&H t \سdyaىF+o$q?{q=}E"c,9"#F U7U/X"'N S~7 v/&˥SG/&mtp;JuxSIPeG#dT3j^ _jS!)ᑈkT!@g7ϬbTpߵʝ5/Hgp$AX4H|'pq]#'D,wOdN^⼨VV ^?dj-0^,?%Bas1LkPdiSkQ= г"9 >j8rM6yqvHmJ7W^׿BA =x+l|9tM&3B+Yq?\r~_N pGsO- 9G$^Awyo}rB/97f;;]Og(X]2c{7f퐄Ҋ%ɸ{nCl;kLs?M ɾĪ|c0u7czV$ J$+8ϔ?>;iz<.?mzVV/ev҂Y[$`.s !2 xjvD]4L1:@?j/]c[@y85\-pLIXɧ"҉ RjȢ K ϝj-8`zg&(4 !ߋFH$1oXgV&K. OM{l>0 YZrtdists/data/speed_acc.RData0000644000175000017500000026574414012213045015646 0ustar nileshnilesh7zXZi"6!XЛ])TW"nRʟ[^ nV:o^lr_SzazƆK:԰d8~ j qφ:yԮ)+;զTkJDI :3D*At^Zz>;tqmHhXs'27 +Q(^)cyRM.K E6ʃ3+j Vo;W,85fޥtAwxP65޸%sI7Դ=;E4(GD4RR&kXygMkӾ _d[ 5%DJ^[8>$[tcnтL]o7eHLP?L2l-.6Ek\~W"1PZ4 ~?2!9_ԿW9;U(BTF?z7$bTnDk ;PDZ&F2򶡌Qw4y'&9816k!QzYg_L29*Axb(:n-P`qSio\y8_6_WE7[0g:–r3$-휻cg.'RZTp[r8~~x91<]0΂^SȺ^mS;U#&I*#p޸gCP9^G< 'DuƋut d!5c&OϮ/9wy9 >]y$T΄ߕjI; 8',݉[RvŦ aK&{4PoT@AA#IZoY5%.-H n.nqTSb2N%Y^5HӉb|_;7W8>e=&*m)7aߘj̠=xS 7B$[,M7SkRD`G IetaϯBy(uJ;0d*&usLTm! L^4uTv+DaVI ij't[,n(l:PWH1 W5wq\8KN Qim=f1H${4fNAs ain:h(Zsw F1¢Hkz,s7Xwg4O_=˦δzci/1d*|4LRimM(h,񕏪 ;;2l@]E}p&*œ9 >* =أ30dAPu78|;f¸=wEƼM쐣`6b^%Ӑ:]6o!@%&")"L9 =>B/2+va[O61(Lsr`\UQEs=n9zъIxcTB㊊+\TSݽ2W.Hu`n+Y$V ̀[Nb7q%m%*we=C0@Ѐ.`3[`z`ଉ{EtsJd\c( ?+Fy\0oy, 䬤PNvvNy@.398]F!N=֫E%3dchU_6N gh-}"_9J$mK>nگI*~s^2ɼyf3G>u8a ,A 9׉lnGM=®l򛉧^ot+l.#A6qY_[!p\:q211 uC*nK4El`wizo&83#):z"r v%q+`hAw.ؾu10~;Ņ`#ڂm۔涼F Cugoܐ1,S'Jw(ˋT5%vpY(B/jx><~+@pA2uыЙ>2ݓ#E׽]-ՌBVCiö}"9Ϻ)J3.w>lSf$aҮJgF٪֮5GO[53TZ~bÝULqdK :p( &{)dAϜ3Zڄ #vc!CD7(b{_jϣ;@J&UX0|P*N3aBunVj+<-d=nPWf:*:zN™iQ.j @tO֗DA 8` 8E sJHYy،[%mv3d>U] q=.Gb&ԇdHd /u"lFDrtZ Av&$xV:l54WE!d|OUFȣQRUE"uU];~2x( ;F3u7t فai}ed*s "pa2Py?(gD +:=7t*s>/K &`)Nsv $nnNnI X(+)qĩN#{۲x-X{RM64;ܡ6U7w T` XZBP< \_ lkؿxeSs7庲+kƤ/ LF|+52=]^ xV]cn$`#ȊkzNBfkq] Z.brD "31x jJcGY8;?STS?̗{Tt㌣D»IS N졉R@AdUK_ : 7`EQ҉!-@n= nZרl25gdM È%SR0["lEVvs j|zEJ,Ө >r2% s趖tNMGYt3櫌t?`B&_^a%SG=wT^>Mol[W3Ee{&Sy|2B`3-Ȳ#]C6b4qz2ld.u 7G'֡;D-_J _d1,hyB@T{B\M "(rkM')F9@u3'C TJv:nqq-QۣD5N\S.z8`S>Ŕ(p'm?J-wo~S4qf Z6 jAgToRZ(T^uh <Acϼ5ng:-犃̩~߀sQHJUȻDͱ7,Y-V:J^<]9NC_椛-V F4A4?k9)!JZ)4rF0o;1&.εcE, պRl,, (M2-LBP DYa9#qÿ+ kCUJH|GiЕb q;HO{ӸQ(mkx'ޝjmp0J7DDb `Y r ˵̾8C j]°=,i*f# ;}$I1.yYP%] ;qjYioҊ8mV_o{ oHeH 9Nb@)k\n,ǎ;6G`0FIk–!ӷ(k4%1d7dK0!% K d{=+ ŀ2_kstܸG]fy.V_ DS ~5~/9HKCxO"S;tǎ Y$ǃ#C%D{LsCwTpCciI 54WՄxEcUd9 ԰Sr'Hbc_zl -bn%m-i;Jؿdx;E\1oΜ;'@> Ok곦^kȏQФM-nfdi?ҭs$ZL,Ɠ =k./B0S(ǀ)nD3?AvF'ja]+Y8:_@Ff9 *ފZ,E~0ׁ}Ǡ~璻 i!Ayy7m8@+Lc [ƐdYߦJۻv8' ʬ3.iG)9)N}5hjٖ*g6fym8MmK =Vy8K0ޥ[ %H8gNi3` =6 G.Z@8(5VVOPUb8WÛ.ts!RC*$M |'o1Sthxe3|2Û&~sDӶ?ò,Z*[`^B(PUBzizHw=3@[ZHYxJ)qW$VKH3j"bPGҩ"qɋqzd)"q92F=wqk =#9qN˸0rg_׶ h.=c AzRJ3PX!G?%I];E]B?Rj]kۮgӼz!x?t_F^~oD, As1f0/Ae16 Є;jY۷ǐ5DE ׷,#-Ģ۲UJǂG3o!V7͢nw̺O{7jwT !DՑ*l>S>0{,f*-H#ʄx$mFUA8g#uqI6Mz"ʘS&E~5n(DD^JEw,eGFV G#x^vy^5u]Fva[Gʚ,~E;*iTZS-hdƻ92DgOŅÕZo8MDtfs t6 _%ev!`_&d0npE*ggDuTG}ݍP-"b{AUv1(}w4m )5ڂ *Ppwi@:qrzg5jTuIJl@Ƙp1[`X+h!Crv^Y*^N)Neg#ס _vQa.G ZnKLZ#r(A=M`jY=q4|ҳüT=,F=)>E37iX$JB^Ҏ_OA8-주N/;4d숰)as]6!I KDiL5g$=kh W3^c8 ti@)U~S\O;F@bT<]*_w[!x0tyt?}ɦklCiDnwoș>p`h}vɱ۳aq0CJ_.Pԗ8= 0+ooH:$ءb ho64`-^vOB͋£Nv-3LHb&F+m&VE00| R%8 dԲ<L;B&=PtoLs(TŤٍǠ_xPf.ci:ß&ś;G c9l#%iYp=tJ`2 1KU4RjQ;`nX 0 
t9'\*(D0O{0P􋝣ΛDh;[ ąU]f Ř..|ݟ5I@UW|5 AXk>/9vz=!jbM]$.Y1V9ɩDw](u?":kYu 7'f$s\E*B.W/ ejL[kV4^z3h $_o:΁HB@e=Ϝf5P4y NL:U;%U~OM@u}h'XSve7KE`])5[jʇ653"\}Hy,@^x'˙R!<E4v~b? hnR,}jU` >yV JYyKDh;dhNtb{'tcD]k1 !dFqCG*K19E3{zqriKj:(YLݷAύ)nI\D`JqpPƽ.3X ~!˗De-]$oq)Me-)=\HA ?HTbbJi6 [)զoKa||ӆO47RA xD뿃D6,J~CڼG ( YW%,i%DfN Lf|_\Bp3HS.>1yfkﻺTjϒΨ Ø&!7{??T/w|lۖ,h"$Tvʭ~c7 [T'G0 @> T{+AǾD]Z\5t- /GcfV5#1yIMW#uޤ @A h9g4Q y%~evjV+RMblFnx_Ä 2b%IX)N$zQ&;vb ,(*fl UJ!< {$/s/g+QDV3֎B/&fz,#JH:Jvl*L &\=I Q'O*:'N<r(èqg!`q%+O@TΫ-)RNNTN5yT.c[̘P.ځFw`|gί'F.sYHI$!wJO e t Ԧ9,XIA/rH!ÆD3ojLx+w :@,HGDiπBFHo>^^3Hv7D/W03%Hho`hZq/ݠu~.Qb9Vqx5jR %w{ b0%x3Tٚ3óS{p}9uvQMę/hGc3@9 Z O~28o݅/ʽ!ۃ_ޯL*} u#(_T?9XLD5 m8B&cx0{8;eﴼSSJ' dC ,!C=4W|44v&?C6D=Ij@%<"G*~c*TSۓ((/2aPkN9хaDʎDm˥v"?vj\# :zQ΢&L>C[$42w;_кZa{bg[<2W3X_SŖ=Im6x[4rC#XWf,VWru"Ʒt 40N Ng3nj!'6 g4[zA aFӑs تV]Bk|8E-BGF$$Q(LMbX'F )YE)wzs,yQjچElC;Y%8&Os?rUb(|>4&kJH,@J%T ΠS`z!rlkΗuɣz] dL$8bT6R j@!3&c}R2J5*Dz_Bi娏#Öo:tcxSқW MP_7$u4V+ٮ`]CԭbsÈbma쾲TZX,<*_)vuّ$o$z_8W EH^x ߎ xۣPc"Ln.=ycbys$Uަz.K܉eN/q ۲ػ{ &`§~¾e} D "RPvF#N}9g.݄ѽ2qGԱc`^]Eћ`j>a>o 3p.CO#YkMzLZXu"L`;Dsu1خo;ʀC1khFVGGvz="Qmd|&D3S[ X#6}ɘ ]Y?v9] F^BW`N{5 )цvW/ݞ߽h =I\nL|RUX4UFL3l[ dagcZ&*~ F4 Ξh <)R6Vf2d@3igM@u(3Wמ% %+LexJPŀ(? CrK &1 o=8N7AN1VЗ1xI;yihv,/mC-$K*&Ynu+MqWVHb+ZYgJ|֥%֏´][ejִ/#nFߠ⻧r&62c%iz$DZ_|9#ծ]JmwNZ{8u Uk:3{?<a.*7R֙+ZڑTYszOuo! !8PG6P]6Dtzd x#DISgLZ!Bcƍ34a|~@V y[MAPO`jljbt `uE@{Ž~{e[D`c$YTdA GOY{P#v8K*!?zDB-b-b' " IB6irmu+@]R8y)k JSM.j&zp5?,0+ݥaGnC+NJ˳ 9SXKe7ɣd[<OjEi=wF# #692lgd)q/t7;xVԷp@iT@ZFtn>ޝ1e%;pY`dzM iLbKkg8aAQ :-8Y-cjփ#,ww ScL%gSY d%4~HzRVPs)0!~f䁉QFFKR-G:Kp=sR~rϱJ-5kJK0 Hi3=pgVG.u j_\2Qx6k>IU#,rc"k;>8An06f=nI:5EduP(wr¼ {KIFS;]T(ߡs;KIdN:Cq Q/9'[T8w3IHÒ{Հ5ob0@)fH.ι8o\(=k?ޟ#Jw*@->r1y~lLjddnuo_*0}hCf*g g:d Xs1^mNhp"QϼqĨE|mR% 1$"W϶o?-Dr!$M21CNS:LНS8۹5P)0aƓ KUZ@A=>UJ͂GubuŎƅtኮ ͙f$jl|(zEjbM%nAjQgV-'G1 ŜVg=aH<+6T$SRO akߍ!5*euT+x-MV27$]A^gk8[u֛jTd͌l?x:3w.ȗgU`,!9^b~I ZXt)o4uGe}<EOOk6E*>Ǣ5$-(_σ -|Upqz 0K"#UY=+ FYI190vr.X[g^昖F+[,$NP9R=?*6"?]ϔfk)2-KMoLK_+6ڦ99¿ޖ]_!e~ΰ 9:$ :ޱXߴju _ Y֕9d2ZT=-З>NO]JÌ+hQƭbp²SN'wl=1p갇~R:m']u7CYN"ܹs{Q۟z0s& e>^BozJJ%B j{Q}L -.c.J?Us]Pm9vy1eb֦!T)0wd>3+6k1[;r8EIJ烬6Kk%>R? 7]JwdgHD5=Cc%/B"\\`p7~=ɹeq.8+R"q插8CR-'܉wሸ4+0V>|L@P;|#&(Vc- ?S[䄶2~Xue(%"1Hdo8$,ܦK ‘bJ(?CO&zfI7JIR! ҹ楝R5/oTPRڗ 5x,*<`h$3e*msoȱa&xMBa?WG5cD nH|zTJ3/nԿ6G`A7uގuċ"Iy>]c &x0`3P٬g 6!f)"X^pi6ݐU;_IpvBZ>cm1o?Ql%6 N!n,ÂHcs.h-ah!T :h;"nޡ2A=hI8LLKw@đD 3:I[?WC/b(pn) ܞ=0mЇ)D3+Oߐ7~DYR ^@(T0vT/ |j|ܘ xqZCZ+-yg JY?K=܀v+C8 W XФIЎxBAP?@z_C|me8btD\֚zV(*ӏV8mpZgǏkB, F6 =I 6rf0%߲q#/uCqg7>*\'hz]\a.\Y1Lt,j%_ĸr4Qme PDZQ6?f {DŧH$XF)ћ(/: X[XMM_ߌ0%/n[v5޴Dq *VFZ.tIDnv"~$WMWhRetr}Bm[Nʨu㚶;U`Q Gdm 4DZ +?쑹`1.MW5#ݨɡlV}8Z90,?w! ׆&鑻);?Us(HQb0bz٧ןwb_RD)׈OS42Y54ӫ̋jrXЁen(Wz"x:@uEQaNJe/l`s2B/!iOheSR#+w>5}K/$0hm`uʚtm%KDoIЎ^? 
10]/3&BN &< "m2rCT),S"}VW\A/#ىgSB;njբ#fXU·9ZeUTG rd!TK8Ý` %lnΣp kOڧ%"('اS+d.GJH!\[Udm*z\;d1uiέl-ׁ͙*kE 3h׾yfW̶7eTG(1zE`o"|RD5}fU1|p ɧKX2D%y~f&L}ZYq ⎱ZoW%[p!yd/v=>0`[眴OEA p) MM !~q++?LAl.X/k udf$w"f[@ K57փyV+\UI7,;{+i`="BHhaK I6%{[ZyI?r $1I[xwkfoܑ+O$[xwGfT{ZWpmN{H\%@v\_q**qHGעx# Wj)j#T64ܥq.:`$zbQ(n3i@!":^Nܤbqe$먝UN@l8HŠ/Tde addN3-xQG"{""O|ru;R|y\ ug.f?˴{}>CRh儇c/N68O(ŻL]`**䬟ChN(b?5ɇӾKqcl0>> ]6fS%A.8H>މsT=S0Yn%:;K{};y\Es73P?=Au,QW"I21Tg2~BYgD}E.e@MRDdu0jPMy!>YZE#4"% ɫ:] Xq2(sR/{LNknJ'QF(& wyZp9n`7rs#VxxC3j-$ЮjsQc^@nGvj%}25 `M-1 Q(3#ql``fOǐta| 4# f՝Χ"t I!~9]AEXv] ^dp >p۬eu30Fy;=p./Yff~7W._,%cAuaTpc62_+ ;X7[Pd(>x^oK|5-x>6e7`7c*9sd/HDh V_Rrw@1| oFU5>O& 5WS1z^^zR|Qs44hmv*Ѷ_A{tEnԼ>K`]oʑStL#'ʏ|t,2qG;}'3 eKIYZ"MQ/7{@q*JHQf7S`6ڱGh1eѮ؛'U1f1^Z,_7&Sw~Itv?^<qÓvG8غlfɘkB$ $p`( GiU#.ԉ%.)7~{I&}0Ug]x,K?yLa!d!?W*W%"ɋ&|6> Z Ƣ ìTm꜡ŭ?R?+"N ԵrbFat~o 7ewJeQt]c̐jr ?2nmoR}UO?wB}̙hc ua脌 G%+E=!Xc){!w_%=>CۃFt]}x3s~o֝kj 2S$oXN+nYA%6ÏF2ˆf-$}gL2-5uz7hyCq~{Jb7g}z=/)Tlv9V2K v#:S; l?b]`uSԐ~%V^N&6Yp>ACOazὟ _Ιo]#hҰ\{ NFhwAӃaxWYN} &:[>}/|>FzG,o_op+zleCu!5{FSmc_̀gVTVlExl$2Ren!ڳ*QU&T-BH׆--L#;"x-3;Tp1YTk >i*_eA8]M=' w`{∗5 K|ݹ7HNV#O;ܴ/,Dy5 F9p5Ainǽ.'sg_9bb<6Ҽ57d9g2%z-i o~ݰ>ڬ"T^V7lU9Lǿoh`_<~9?qRzܲEnOĚ)^1"zd`g[f*M5!%URzzIݳyUe+(kH[5J2ӸlJdմ3VMJP^C"۟z-!vœ?(8HI:Z'VœL:׺΢6d-%Ҙ1sy %J+{SvІa=5õlQmSr=39{@t-jĊ\we)&+evM3Bdw+CmU[f")+N/p_ }E{2RI̞X ? >2wQ-Hg!<*Gi(wiP4ZbK(ݟijQapƀٶUFbA!eXet{ 'ZҜkBݳ8Ejc|-XRjg!z<kA2$tκö b Rz/CPKNJB,:00?ؗ~ ̐ڏQTwF /X0OJ5T˞~h= J➭שXP}ATkp*ډxМR֦݇kݓ.'c6 G5!KFh2_efF u/6˗5^$0JSn(Q HOM@N)Dd=/ U͐ x uETi$8#FL~(\&NsHs6iI?J4xōjzwIZ,H}x`h-' 5йi\'qh ӫlc.!Js:%v/˕DjXsCr#NbXM n]CVCg~:ĹF'4. XOҭ#ъ/t.<=F%GQqVUur1:fx>~MU_3ݾ&6؀>&)O=q3"D!Q2:ns^!evtKh Q5dY^0>xROA2ѩ 0Mg)p]u޻9ÛCХtm.vߊ(ہSuFN9CϷ.\W >X㗉9g[ 7xsF.b"BG& 8HC)Q a*ci'S* OW#k$\0C=\*yOsiu"!~o6,K;Z0cRT~h*)DIdwPm4}@?$$!ű6S\WNrȆ;t:c/ܺ,%i(* Dɢ.-&r_#643TTa 昗/G û ˄HnZ<a']7WZ`lF. >aCF8%ADjt멪 OvBR:s>Uqr?2o.2O[2.T1FC+DG~ R/>tPQ>gwCE d;{j!W3j@K趒G#9OM۲٠x*I-rOb)Y9›z}5ʼWm#MkIlNE|>d|H6^.Q`eﳆ]7h@LWf'++ p` $mM*!;|bڊ50izeE,z]^#C,]$DL#evR3{ &ǷdVWW Ǵ4JvehWJ=2Ŝ1;Y]VbS|㵏UaW%O-tƐZך4}4"z;[vMFX/ fAb`y}7[>hl-LhMLZ˗; bLZI,-bߕL"'Vg޻1ܒ"U^h sg,~ )x(7]V`3wi#WXd'k|k;\cd:e#).۪NÎ$c)m)?2YGҊLEƳm-`bͥ#QKjOTTڪ]G8ĬB%Q_Сw[i,TZS;̿T*+-ׇӁ `yecjiڏ4ŗtЏ|1fzR2B{PPPuZF(*-'يT5Ul^ CYu^W_\SịT0UqEmW!3"YwBuva.hYvLڂȬ2 ib7Q$tXaQ]lDd;©$1Tפ݀NjޙrGwu.Y܊U7Vŏdϸ\Tj[\҇fSU֕5{@ rOi;Mu#)txx|:QPнE}lk{y(G&:%Nd/@212ҫuIB~}g&( @id@m\m6Ԏ.WU@$a+LjMI!%4gZj0n2c͓=yJ&WXW,UegWڔ/0^VW֓~N$'_bӔ1)G!&0Խ,fl!rk0?>L[~O6ICZo +f9p_ֹ)R0)/'eTSҾIuwZt$\}xgd8+w,r_Qix0"c;_)OfpVjxt8xrd,/J(U;D*zCa_QZfj{ƂʅU|3sJ2y߉s7 JJZx{+(ڜBG BQRmgJ-@&w@Rr-JxI5=^vڰoWH$ a#8Q|(Zn/xz5#Lʫ=׋>˛펢w#wKKs^ /"HUPݘiǢ I a(DߥDIm;n]6BÌij#\W>fid1JxQ{ HHAoPJ4,K>:VB "#o`#̭g?2!0ſ +R\;4;V2%e4{rf~]g$_ÐbSW€*W)ÖyQ e7b Tu11 IqĝaF^#E#7p5# =9p8W?$R="۝-'@r1ߥ⑵dXmg\e$ ~+C)n6xO.cbb c!|~JD-;TWe3aM2Z~zAsDvPy \jMɺ)|3/r`Xh:qkOnZtR@8&y#A|62ao {*킉tk{d(L-prVegî!]kpxc1T*LS%JS]Z-s30c9@݁hh co[rOEh !ك2ftjvPOKGvŁk4]WöNbZ`wa4*swAsVxʣlK-Y4k~5p:ǥ0}7ayGݮ}9]1N,s_WwY=|Z5Opx`b:Jqx[J44F7bVd#(VL>lld_&6;PIa;(y-p.M-z#~ X#?OS4Z9ccJW0UԟBw5!Mx3Tqtd,T!A)5qrN~8~)%Cʳ%Fj VQy.sYQ q|OtR"jQS$V뫢OC+?VAт.y ṷ= !(τ}V1UNW8Er0P̖mU[p . ^{"g$='bTlaݔs\"SҟqafpL҃1yӠv4s8 P-*>N# 7U*NTM;oJ: 2Zn"yנ+W+BܴœDڄE3/ZOPl~x'KTle%td8,vW!ŻT%{k09}tݣҩ  X'D<8weR &ZиL_VgH0:ZF^r0WcCf]r!UG&>, ZW%0LH'^1E[6jHF?<R|Is|;[g&ɟ&"D=㚋qC0,w! 
@%Tn=ZEf%ZZYl@Ptb-#د`*^_K]1umB%>s߾P% SNdDe.xk%XLD5<\N ztsG#l,(\gcAE7]hదHC,ޖL2Dn7:K/ 6`mw9IDa@OSUmdw4y4  :p Ƴd5hv܈5lkD~jrĠz1Ŗz}~@4 pW_/|+xRr4_(r8Sz̫7c5֢p Mt{/lJG"VfWR$f|Ɍwŗ*]'묉uW?C$nW㱙 7`0hU[0[fn" yXFH0 p~wzC;0ީ[Q$HYM:ʛ\~-|HZkEL--y.k)IXCQxezÆy%' yUP>~[BΥȕS8.T"U7|t\ cO CבES1ݗN# s9pDJ a%(Vy$,[jrd}k pWuAe rKy؜j:3?\CϦ[xUX-G2Ű~f>uj8Ho-*j ̾#^@v'd Q V!grw'累SXE}=EҮg "y.*n%vc^+Q1=sa[ .uI(=DW8D(z+b*yP,ɡpKj7EbQo_&v|-q`ŋwDT='AQKI5R (rǖޣ`\%"/rSr)n30;bqnc7"܄9R9 (lgˏA'1ɟZ_)pY :nf/zL-/JnH|̡44l?s/6huUϹ;v]SAJri z!ϸ93@l%:80ȿU)ܪF".xײX絒Ip[gKMCvOfdo5Lt ӹ`ͩ9fzVؚ)?+ܙdC~<ֻ<=^XqbϖJ S  5ln2q:I9?ZIr;87m9m`x8qNIa,&M=QA0y1ybTBks,j '4VҐl[*MjBTIW_7=|GODǐ@5rQ="F=X/ <\V!Q`s*p,FlIj(ǫO@5yH!'~޷O8DrQ󲷡 -E@", u{WWfB3vlPU&1pj&_;^M],E0JTY[C>" \qC=ӢM3CU8A [=*?ʧAUNeP6}J'SBl7^*EXpͧ{١;a7:v[+nRIo6x~IP`j)$|FGF a&Ք {MWSe|~=)o?e-"cn*Xmvʋih:/emcԢ$7B;,=NTϟ}W8G]aGǷ\H:Ze)%oM ;CôsxWQt!-EO(Ad!K LgK 9m`oˆ氬^!Fx7=!`! ގ=4CClbd9xqͲ[@:ǾL-7(k]"dWcx5Lu Dz@n7߉;E:Sle.G ~6ӓ0tP8AU|i7>#Hm9.dˆY۵{8/$gkmWC^X#Xе>P\ϸN9 暒1%.0ˀwF=PrWBMbT8پNGSw'JTKR=j9nWRXIp+i?#gG;&,8('ȼ튥933kQ=ST&^ 'J4̈6KqJ rPVX(p˺`3st EE4Lj ̇T n30{ % ]l}8^D9 dfa 9 z^rcE/KqK&L[Y`rpaևGS`yeyS+i1H{M|_&#P[*'yocy6SDԟn@)cTs% spV_\ W/~rcF<4H |W+)$I+I nt&?ٰ^Ҳ5{ДM þS:/%=6+~y^V{%nJnb6vB뭶Gϔ?r(n ԔT֛ԛfm#z3:Tf$;X<x񮷴-9ư4$Y׽:RC}S@J3?iSm{6@~PsM44\ې)ri3^Ő{H&# 3?#r$[g,n[K~$ܐ&#joC@N/G3l;OAS|L3x57[ѭ./=FPΰtfm;n::K ڣf90ˏAuB 0]1_ rwH~pNeFۅz@hr>nM@%~0% *_?ÑoMxXKY{Ϊi00Uk83 Qtb~Rɒ+'NŐ2͆'(R)yeji&fAȲdH:GfE$8gO iջ~E>IPdCż0vQkA/*vdNO5a6*&CyϽi8tSh+غc؟S Z*lxn!Hx㔕; ,1Or5)*7;9DoMiUHGU˾w:sg;ucy6lsIfJNP :;(v)5Ḑko>b=G<d6rx5B*h]'Aa ]QQ!h]8QDgm%/z\܌ 4fn@L&I܀F^7hH:,G !ш= TG/ᛏ6yMPpgX]5jt^i)aQi{`@I' OܹP93R&%7AܫItoV8=p:=g\h8ԏv٫yE;Ј)r~_INENj]yNZ.3K,XĽzyb/N7G3 oAվ>/k"($/|X"|@],oX0(bF=UwN2n|w,eз^bS5A5:' /e Lp{":|PbOWftXr9n  %_r[Ws@>zEBsO][ιx՝4WDpS'1q!Q_M- Xls;Zx(Q)'x)?-:Gs#aH?@Cq!l> ԭx(+KZ#X+W9VNގ!,:X0?L 49njAl -x+cjl=7; Y PǙۛxUIksmoy\2h:XL>dr>;N/Ƶc?ˡ{A5A5?µERa/.6R';x{THDDM2").!w::_@yɁ&7p9L'}1N཯PGU4b( (8࣒փ8#" X EB?wWmCcg|JyƼ$ El6(Ȉ;jL!Uvbj5D67g- |HV&i=J%C yoQ1Y5_)v<7dzpjc"۳ܓogj\&[p̻Npl9<T0TmBB3K@J `Lqvi(*#UrdO+vc yr=LP)P&'s ؏p9W*<1PbLZc&Xs7#,adH8[)V.HXUSO9 Mv[ )6,MжPXdŨ\CF'F@ pT*sL8͎Ka I!ʿk3=f:7{]PgĨpA H&#Q4Krlv)6 ]H{nQ+{^?C>mqr% !"w[FO$~ _kc[q8{`8i{ha $}Mw< ǫQ&P[{Դc VVt>៽+CR1n}P;xOHNEzJxPN e J qtn+bM,|C1o(Sj;eƄl$1d\ůGJB50N0߻ew,TG6-bINJC$Nfg4w 0UB̋\9ZkocPPiA&8*{e! d?FsKv%̗ aY#1ysbN 907-!5 im g1#_-6vhCۨh#Ԭg2&y.ǩ B6~fՑUA˶sDMLxWMw̹a!!q ?24e*$?Ƭl 4 ruzeV3'_DpQp8r96óߒwK .v݃)v(Ʊm%V8‹Pwbx:K;-Ig*q~8w"-&)"Kޖս\s-g"f<}@__"dLz:d;T |?*"zeU,{8H c'.ZYʝ/BZ58$)^1H0Q>uHJaPkJ́!"!E8x`.B?2]MtaN ;]оɜ=ӔX;c͉t5) }^1QOw-n_th5^QeqcC$⯗Bp ;;73vԎ7! 
NQ\D?DەcaYjTC/0OR4)` PJś&Lŭ6-VZaY6 +NxY>.\"V9X/!P,ɛ5{:~&TܣKyww4 ;vhf%vsaLM R#wȥa}+^'!O{mp}-ux$P^Ae?-©eyc ;ErT仿1l)Zt89ϔ֝ld{@ǟ+h$&qlSD4G8I_hRKYW>Djc];uq)!Wb3|NxRxe1 utuT ((4X#vlרߛCH kXG*[鈻@tuvH 9x-8S?䃚Y$7qoD˪ɓ&%-_p}K\GoHטr 3]~ݛz~]wR!㸍G&v#:& Z%ANHɋ?OτKiixm@{&\E`,htڤMj!MF~g`q VGJD"tE l[i"ha(]Rv6d>~G nX?Åk ]Bb@A֋}/`m߅4%,fSUS |f(Fa2P`[aaR>au01 r`/B6ޭGM$_ٯ2 V7pg Kz.0)ҋy?jC~7]}n˿rH J\\;_Qc9D SWM5)\3MH-ѽPXrQg=cT8=3zOGB߃$Fk/t`-6} J&$g6kIP$a^4)-<6$mD O$“gw(vC)~ET ma%@-xZSct/ k @|Xm8I5n`_Q@ »W3SV:!cB"xp肋.di 7 5g)Ď5yp5H+/Xi͹ϊ-aj'z`I&1 v:#Bд[ q֝l Eƭn"hAip,ef ɈUU&<:m\E[`f G!f`ެ61X8ۥ2HPZVlq{A\t3YLhQuDB¶Z\lLaqE9wG.pzmE)N߾ٔ &9Xj(8ZUR h@yaLwټ=V+^ŠOa$Q$u m!o !]ܒ;vMGϢ.܊e\ clwᓊ(YBO[bKORWѴ <@Kr~u)DiꡊC}0}MyPo >W1lK)M4 YP3K (0sW"p|Ll$qn鼙U(LIV*еDe,ŏ@bOWZHb2ŝ)VP%yI{X1)7ל^2Qи)!6˰Psm-FP._vU8q4STw孺o1P/R wuy/=JepgH0hLTTz[g'e$$D9^H?Ļ`'b=?PI s73 My֕4Ȃ0xcсamLDWl&ˑtƤL# Z_}+Ap o/ZN' |*Php#JJyj诰][iCɺ~8bʩ B ,-vI~R5',UE?DŽXIot6$wBqَ٢!z2;_dN7` ~%5-$.GVKN;~}N^A[ҏ2dυFC9*mJCz2ȁd\9PO%\1($_-R{ c#_ NeVޮG?>-j#gBZ)}qt?Ki#Pu6wlشZ_ʷЎxϰ~^z݌ܐjRk)>${=?[QcԧAYN׋)?jWEN1of`6O#yq.U¼8x6rsϯ#|k|>cbkB5gck8?+-Ta{{wnoݏEZK[&W:3}d\[Tߥ 8[q~"aLM/t.FYw:|2Պ/N[#Bj.}[6qw<~/tSRKd%M. U)֜7N*#(SWj-Hۘ2 5gqc,nҥ`Q>qGGzeR&J.ͺnĚlG8o+>ۼ$x;Vϙ]'DaT; tD4 wfCD! ayt.G.'MFAk`c aYB֭;StsJ3)8HڙW1n::8.|oc:+ "P +9%ٓX CC {o5{SyM 9uX51*[tG.a0g|= Ȁsk0 .?m 4%?q2$;j7E,=,$I|Me:S,u&75 }$|"6 ;aC ?:V85%<rׅI23a'.mʛ$ћ%T>>&Oy~]%,Fx A^3f/3a0}ĬD~Ұ۟M/'_(Dγ:v ζl+Q­m)? "Bc~Is3[jȆ/Gb2 |`7gµvB>zr;*}ӎ0[n` :,?O]I/%';Ha)!8эvx :0'l3#U@ ^` \ [ߵ}.hAw+VlqѩTVƀ\:9r9}cIO3j#YԎHȂken1Ѽ^2REWf u-2qsOP$:p"Y:$*5*ﰢ(jmwNe*#*P0`7gJbSv Ղ웄]eK Ӊ\fEP[AAJm}rv!fOw{5:wB0>.~pp͍fY:J|ѻr{l]ӥP\t W g1kܶŚjH}ٺQauz=Pd'z|i>^qJJx"SBFِV,cCD,"c-k6 zi:+fA潔dTk=/M B{ LנߡL!l;p?mV;w SA{AG8dmL9}ʣͨXLx:Lȱy}:O}MAiYPvȦL "qS]=d΢Q=M nKEg,r2k8];;Ȣ'$i5i-%<21wLHcۯ}ŶZ#PGɎ^~(;h#P2r4M&$g"eACZ( M͛l*xgg<@6I? &&鶯RzB)%= kW߂ek ,=~<جpU#6!@wyƭdv, 8Ɲbvе$0T9tz$H\8IZeJ@!>!Ұ?!!($ʪKLesK&$2(X^T"瑢ގt-F ֝&%FLfmj(p7p^~iҪb],Lx[Ia~UJM18J jYCq(`YaOsAcRRXɮ nnu;wn7g ^vpBOȎ [Ny[K' Ԇ,s>+fCbP2p ,~!3'ʊe]v(RT4y yjʕksMoS-CS(dܴ30F^Z~iwSJFuwNŇ֪ޒs,HEg0OI/o 8֦k0v[=S1 "v=g)jמ87Eo>)ߴ s< n"KǩD$!CE-p3 "#04V(ѿR/6-Gq,qZ[>{Cbq {k賀X{e\K0ˋՍjX!,wpwPf h %f}Mf+uڿIlKylH{T,/yR:gWc 1TԪ>v K~f|oXӹU-Nrˁ_}PEaz"Z+@a[dqRʏ:4m^޴p[oWe,iF+- v2~׀KV@r TЊ! J!ґ"*/>`*.-c'E8 ƀԨkWyIO[/gpʆ={Ӧ,z0DҌu졄1W˃ ܐwujrUFج Nww[gN$RUb˗RH$D{;oq],Ϧ? 
:/`.-4u{5MO.6t6:Mۢ`_ i{ \ inh7i ݓyS1p(̓ t3ӪiaO)'wiHbGDyD==f k ٗ8wr,f!X!ȷgV%d8t̫Mv,EňE.yz1xFjw;[ٌjj7^f?܆~I=b27DڑwCxb窃1\(~w>OԬKRY=#O.Ԩ'"I0ȶ8\එ˫-]s6Q*\~|g_ E4S#hlj{)ϯ= d1bPJ1F ڊj,KV?i҈"W> H<]&Ak0U4\î Lf+XłM/3^F§d~<ұ|*{x/F$˱YbJWyH~̆n8E;;T @符C8`6# S"pA5ҽ$ bHR3!GP1YF/hqb $ GM4-(ss|YxW?E&>F"ś r\]]"wa^h5Te<d':dbe2Bpg i`YzE25ߕ2IHPD}ߢ.{wfR}s,Aԁx\a)4,m@^ӬsFۼAl(uA~M>j*e=5J>iqYN]=  ĬF!@MI$z,Bld I]0nM%plm/Q X45׷|cxeQwk }/ ͪb]KS 7s"fW$ 0i6Kr8;pHx.L xǕKYU۸ҁ[P]nY^J2s) ƥv@$`'I*w|y#7yrhX,GѠg}R"R8vor/йk6\>#)X@Lzy͉ ^xzOG:?:ˎ!""4@>&E6*3?mĬHChGԯ7!ͻF&_jf!` 4>,C"XfA3򆻠)W]1 2r ME:Lԩ?nmsek*}~"q)%GJeJh|+FE o3fyZT:;S.W20{@zŶȩ'#5̫xe#$Ǚ°PFfGeT0t&g #zP'|S3wHNw%D=Zk/I3kw_o}hnEpѓ7=s@姮p4IKէ v,ώ\΃ս ByЋ Ery.̧b3\M55TP<@ks/Grllk{ůax"#]쪺33z8h38'oZ5@ܥ_I4Y94N~'`M~kULT#A_43;5k-l[x,qktr7'RLV>^c@jWR Xq gќ\og{pîUkS?gbu3BXT Lb1Ws+Dx{75Ljv)?vݳe9 _Q||GFg5\T  )G%Mu!7 ,?O֯Д9pW_Ff-- $n(ԈKRc*/DtuU .۲6MK4 IFޛD˧ؼsN .~eR([w=gM.1si8[GE4P.80ꈸQV>#K$[r-ʩ&֨L&C}} nδ- ,1W7}3MM_%GM>oAH#{~XYb'%Jqp-0@4:\X+ R%?wM0RO->hL{UH[A=De:4{TeZFDMPTm8cҀ xfچ'-\tTn-ɅS$%`Cpl1]E^mw"NYޔh { %E@HDIwwXKN-}Cϫw>sAVj=cT]*P~//wGR$JfmlO|i2iJ% ZG,S.4t&"CuC`8Bdܔn.5!ǂ o^HW|8PnTnT\?&hoK:U-&6T\WEsI -CԱFU nrV(C`5q Gow̨ .pq/ErFr:I=Wc )${?CSJZ]~2ЖՕd9?yH5֣pti<\6@8gmD$@H[Œ+[LҶ4TM9ljv6!(o"-%TÂr/2rqFcy3 >@pС›PxV\yJQp7p߃|܅ۺe_ݧs34GRUng|˚0>a6hX{!cIu"1OTԝJ6;0GKU $4p"w:9ցzݷ>[5udz[$H4QkSJoP8iˣi[3.h\V'O*4NxBÀDPV7JBKAs3þ~ P Sx$^]agz/[0Fs'Vn/Icֈ;6Sg)AAJbyr7e܂2˲npqⵐ$0rux:0gPG $R S7ьqH;WE?U s[skОȦgx X귛:8^^Zh}@?{TJ+H!W, j n^SڶLB~##Z(b>aw{,ab^5xK:<%VN]v#0k fwMOi/}3 ?tvԺ,:#G-wsNlCݿ؂梽9Uh+LR{ɳ5, [b4N`H\N"G|Mtg%HI+ASD.5C[kO>VjEY4q_36rXhz޹~!R[`Z9@Yhd(OP*effVŐ:Z]6TcVߍ xkFz6Cd#b|۴XȬ 'iՑF"6MvyL+f1 --D#\g NkIDp)-?68Tv`HEV1Pv&G3[<ʆCVe-[dia Mn+Y i4L.зA +͏2ջ~^2nCb8joF^t%С#Б hRVy1UI{WNdG|"KnH\'>20^9ICWLzubtT{N6qN1U&CGoXD0hjtp/?;ϣK[ CEp//R8:3ls煠 ħm\x)`% ḔN54l y9MߖE4Q!(Ut;70vSbw u?z8= |σN f1U`CQ&J(=j(bnLB,%!KK[FhrWmm}! !19;Gj0_!vFBd]7oѾR Ы޺IvqlU>>1q):eOGTӄ T~>V%.vP Q凁+ƒAO\.&&o!l8U~;2  ҙM5!/s{>"Rl1 {l:b{ lj 0HMgW72gnһ4|}kl҅1Ϳř_xl$ &K!,u~%yT2a ۱`MĉXqtmc &DOF7PkyW3-""VLf5U*`q DB|vW~g;3 }Hŷi+P艏Dqz f+-]t#Oҧ`gI7%^U) WmDgݭ{xti&ɰ,ѷgi4^&ՋQEQ~Ο&g>.[;-S aP>Ǘ3 lN⚾avf>`D4VSG8]o`1ebVx|i  &icx L2kb^тOc]҃j1BQ$1 祥8Hxh+S)ܻba'6[wfn.1}MI3T^dX+fai|Uksvh< Aij|6'D#o˛MdErA(U߃b3Q05P>[ 4"o0wlii9 (J601m| *:;l-*%fN2^E&(>ilb:DžȥCE3I'X!=fQkc5sudPȚ$r?L:] Gc{EʜN+VeICd;I}R"pӊNr Q좐M.j}2CRSoRMdCvr #&MÕ C@zݷ8#BR܀ɺ|^(QMx { e2W!?%aQ(}x Th$(K8#Zt{x<Мq)mZ/mx!Ѹ$A%94M 6A4)a;u֘A 3~ԎV&ݐQXnc 1}G;9:$GfŊX*$dk۴5.M1"3 5'ρ|Ӏ>фR;$2 36[]t\%{jų4I AXa.mi]TDό@IuJҸX{Ki^ H6TXof}tL0jmUDŽ#hIưNZ&x}.c} ޷9ƺ~ԯ.%5vn i0-Naȍ nH: 0Bg4y̳SlEDspkqu[7@ 4G 1Qiu,ow/I.,2'ڀ`$̵e⒬ͥVyM1:@:q 4ΌU C˝[f\}E |,&r߲VEJ]z-0_MO9e:<|,f!߿@gi!/]Z7WΔٴ{]Yr:ʊ2=gg72ސila?(n"M_"A5(\ fϠi S_b"Q_Rj"X- \E(/o;w,_m@+2) %&!l'`D \%Jb-yXuq6Ԏ 6bW ~ 6,MiЩe4Y0H@( BjaMKH\~%LHpA jE_iu?~CV" c^KJ~Jׯs6>+ 'd*6ӿo 9ͭf@{YW}Jhͦn%7͸\!i0E0^IGy\dȕ $Tؠ1~ҊpƲ@Jƀ#3hu#M-d Xc[KվHb2mp4 Z%} {1bnXjfu_[D忇6NКre9I41QY&V#vQ^ec!\NSi*^EHXYKY@0O+{[eyc~yޠt$7!暴+I!$˫%Et- .hIby&cec^_ѧB'MgYv\5؁SG J8( bS_BN^{y5-t<C5%,2 B @9g^hi t Gw[I:&KC8Ю^?XLb߬_5K? lg<<'РuS<]"Z91,~&Ɔ e8cbb r+1I12CߞCe6S]Q~G r"n8G4l4/oNBD/ id8uc@^5Vr#YsEϺS*UqRuQV2ds~_0l7$v(]&ԩll69Flj .S]|}VRO^a"RQu/KyxO2GzkJy;V2LSt:\l>AG.|V|E2&ki$0t#F#= |'fh83$2cx2` esW[}V‚xUȊ&}_pE\yt@ Z7=G#g:]4&%)%-oK<wkDbN.֝1Ad &%J[tZh{58V!Aa3}$Ny'!38O umλ+}%(k#O軖*OTRVIkHU*m-?F`32w) \v Z/8]3 3G"Bpk>^ū$WH2L/22KAg˄Iqm78f]i*-ZKXk ,t#IVsfL>s[;Eu}uyWlXу,,[]J mbC({g0y1I 2|/;[? 
K ,y`8HSb}f{h%>xHN=0eJ6GxqdlpY0- 5< p-2Í8c~fIpy ύyaZ;3CB`eɮP+ a jx' Y#@No:m+QN5ilEUwH3jaz{u:O K Ս?T%ϫ@s!^eI7a:49;oode(k:GS zįGo-I^UCb@$oh R)9^+,{Ϳ9 i(܅ LMOBV[w^@gRbZLCC)|O14Â=Q:f3Ey]6|.j\L՛`d7 72Z%W`| fV_ ;Yҕrrt> VG_XS*|'FbĚ74B8Ab 6f;yf4{I*Y{W쁄3 w]l l9²IlV/]~F<t-pJ7q¼rkxLvYNiglB<2>_QjrJ)N zrVR`(o~Ia/ژ^oܷ1{IH\VPw$4 Ӵ9zsQ&ߞBUhBiŔyW9 }fʼ-Wk?]μK4r! y*#bR< x0\Yg~C-<)M97Ʒnn[:YqHKA-z;ţrB-q-|`2UͰ-f9,IFiZ|Sskf.< ڻLgPRhTMD7ז-ƍ}?{_!fl(4;E: G`-i叫D"i~ Yiiܱ$sn\W򿎽ZPN9&1fq! ;z!>.r91iۜ - !ǔ`&/Jhx_{K^[7N [Po̝FۉJ?,xDhRD2tV^[O`DcO6gS@M7R[ ^C9͋jK?wo)jT8Rg@&s_},5m*+Q) xg y=XŦ]v5ayjKfCbWs' .n(k;6YƩlNtg{bnc}x)LVr HC ec6i9іcZ|hAn1VLTIdpR #|ͷ# Nl"c(Syaw&%D_)I^A e҆)>lE[淦ga/>Hf--\bwj,Nlsj0uZ==wT7XƔ*Rנyš?~C/x,={0_:Xs8oF2Ҥ:؏(Ri<ǤCzTAs. |HTl$/v"n&Ba;2"BLwZݣ© OR.tD`<a0e?˺{m6qG~1^eCg14vUu41sHZ3;Yu9RUJ dZWd]~1-X5x3˪+F!XH\Ed_)D4˷IuZto@U)N&dy,u8Ivc҈8xEY.Ɵ5w\eŌ4fD. lρAZ A:q䷴=۟ ˟9XL 5 6ly|t]~(~.`0չӒ.}! z(#pTj=g"wIH#ѕגoeq.Jv9% @|h ⍂"T`it@HGGFeRn1eՎvc/-Dm3.`ڼ }ћe:#B!=՗eGY:/SxYn1zu^$y;=(T]E DoA^AwjCU^.׈CQSᆶXgeSAHl+GH"--B.'D]˲,$ 3X#Jn gU?3DH3>X5g5){H]W&d>:hks}OkO'|,n-5A|2(_Sd.!nqM7˜/rd-0 :=;hɘ}BďQEԧ=EXlE2ac‘\|K$hCQ7T!aio"_zX;d'|l6J4濈(*%Ttr3*#ч "(W|̓P>>l2M?oN3Ѓ&52i3q<2*-Fa u]#LSo̔l͐BuPX QyXm" ~Dzr@$nۄ~!tQLR3 kS'H%:~*'BB}uAk'"`^ά` M=OűOkqFZ:?=odzG GHr^Ю&eE |qRg}l `+^js?G*+,o3Z7?JA22w۱[΁uo'wy8 *My 'Z|i>ve;𾖒~VLnV֢(5} m)Vqr5$03HPtnpVgpT(Ta>WJ8Ve[ӄ W^Iq}\rXD2^*_n3Q6[,=)_(&J3F G{Rv_!TjbrNJwC5:~Uz4ϥY;FnV5l>ZB $!^}M?YpGֵ|\HglB.ρ6/r=ʓ h7RVfYƑviXYDzTeU Z,#J% AB~j)^8) lX/Bxz:Cٜxdbǘ19]I7Ģ`{P;sưt\ 'K@TuX0C! :Cxk]Sit<WI1=YYy{!<'寪v[% ZF;ӍTu"4NKxrQ}6 j%=pc#6ӏ X ǚOV.)Lmxޮ/2޿tQۉ}ͻ0GLaȴ9kƼ!mlڰ]ZMkC0FoٜQWJQ[+XU+6RqL"HMl 5nf&[Q"H ]=M5Lx5< 9m{?$ r̸:.W H(Ȇ oYa=p`v5eXԓSBJ&4Czgj|+KBآ=Ro?,I'KցRA~ T*f&3Qbu&nZAw,ț_ -%WmXeiT `ONMG xmtcz}fN[gp%uJ)FrJq g!h`gjfx$-X x$sNT`L?e |%ވ^8mT]C5$2,t(Q7 5N/x獷N8 3>8rR C*1g;u Q@ѭgoe£Kio\h@{o aK(N75d ym)K5}I][Q!g֡13>i=Mٝf3 W2fU˻|֔S k91ƍ[pW̼̉GISdeaj!dȤZ \n9xaa4GVkJW7QA6=0)Qhzۉ/Y=!,~bsAٺLƵC gd%#!o=+f*LMEVwv tt"8B{sH^LϛŴH n> HnreiU}Yv>Ơz 䌈I=tɜb"+qتTȅf B:']^DГYJ*1hud>WeT\9BR;wFHpnB%ѹp3;Wf:SMZ/ַ;u!hdh̩:onvFs z{b4/@_T<0GSn0b,έoɞ|F)Q(!܃@mI;jnmYIYC)ϝK*lG؞4V٘ޭdv $KxqlmsDHXR]1F;iI.cO4 '4ȻmsР&bFNDrpHmݒO*x h-)Q 藮@ P{ݭBծ x0b >K/J\g~]rl18W{2xm(>q_[&z:N40R`xwʌ"5pmT;/Dq{ѓ@ f[y)k :2`O.M.{'3TJ3`p)TJJxp΁v6i ^%Y)w$7a{c.Mx&fFG ,[U{j/JSӹ7֥V"]\l f-rKi12%aaΐJFry\,rh d}mqT u^( y`^ނ"Wȴ!P% jZ4W۵QD_~o罉%8`v:= ~Ba/S띇sB{LmL٘5|ia^ _=~@~*Kh<މIc4^2vvn=_=غ[F1ءȯ`򍴅XC*}=~`kH+f+$Հ/b$Dv"^S׊횛hhᖏiEtw˦&TQQ ,'`~A39bh}u[(NZ >B@ Pdi+>;KpqH/b#`Kn!?e4 ',-k9]gn ea^*9_%9PP"&DaLRxFTq*|K֮(dUHƀΧ/ ~ cF1'P_:Ȟ۹yf "{SI{0'{r#%Lĭ"ʣIJDf >[$* }~gLgHP g'f?*p_03Hn@atup߽f\K)H&+])̚xdCݛԿ"[xOlH: Q3jrIXr8^?Ubᨌ52A~Q}D}Tri{z Ky ;Zx#l!C-m$\ԱDDf,}Sp`Z4Ѥ*f!F;ڶgӝp|3'H+<7x]PlIܶLX 7lfLֺyM5&5QmW,hReٳ4e rb0`w=./?F]1VdiA!;ZaT ~_ ̧:f4Dte k%ͱ&yTj RCŎo54Y7d,SÅlߕL+pwp!NЃh|%~Ƶ*V廞Mk#䟊lV?~0em5zbțp+?򭻮o*]~~*t}q%"8fOZjPϏnzp_'|F?K%!BFbrp~DB7":齴Ą؜~+-Kir- t&q.Uk!;|D]ViO6dsPZ#P5A*+ X EF9d%;k=Q;)5e(Nfu1;4~O#4C"]-w}=Ծ?bG6(9_< ~#ld8jZWIv'Th`(,84[:L\xI) x ^G?M_RPQ, ŞI0wdC G +<eì pr8Pp ]7׆gզɓuh>?rصa 瓳YGq8]ۇ~ԲV<bUeRtޞrDJVQٍkou⅀ =f]R}#FEőeTqSV@ AhKR|%xLdIO?[D(sC")ͅpdYP9ޓ)V6f^K崯&aQNd*k4eg'vS5C7,h>ZS>U"# :$ XA`Ŗppӄ>}p: hf ˧ 9M_GQMV><{( N$҂NfQ txlGAd'u=U״gvrtu. EL[߻B8u4W9SzV!g_ťK(fkϘt05I*N2ˢX1pLqbL5.dҲt4eY#,VD8qo ';ېA]՛u5uqn-h&s_ls|Љq5PxDy ެ*qxfB$3Bu)c W pĊ8TuCqlݞ;T[|h) cW&`y+i7#S uܠ#t(+g; bPX8BZT#d$^eίȫXYv eIvDlC`ӀO<1[PЦk,Ac;E Pʖ.CVlnZ9]ngF;JDUYuo38)c^y͈7Ksֲc"vy/ufczoX+bh2ɬ0 H!{*^;dž်|jy6\y rVA+ Hog߈_@ϓN(x:S wVE"uQ`gq8Z1չ]Vn*5#;]5 e>K\0NC cJ;V-6 gv/=.M\LEG;c >6qE;Ҫ0ds&/NТ9pܜ,Rd$1J#Fb k<ݮY}!E9DUO^[uٻD ?S`X>R4HTpKxԊyĵC. 
& k7g4*q; !j΄Wf3}9Nd?/c]0\Pqw9 /'ltq bHu^YAj LLUdM@}¥ c;&p۸乄ZQEM K$Wy$Q>FTdTpRAm(d$5й=rsƨ㭗QbS_^thW.[$˂!^Zm,]8+,qe6)odyS4A(|Av4oIcȅ#'14 R_c M=zU,;"1Wڀ%/N$ԺvRbE-y՟;-/5H38d8w&V䜉8!|/(1zk7Ǽb9BUTXd#&)פQWeAkzA RըrBopshsV MR9&F3Io;}@܁7eIFz=ttet:u@5Is@a@PWNә^@$$"Kab:r17a"%sHIB6Pp% r+GrUKh zdUuL&2ύiiS;lmj`5k!pLɐvC2CYhHa ,֥)`6@2a4'l"eJLMl/ko=k12F5BjWa5/| g"pUaj )<#Qy(p!6PN) V)CzXT>3td\,l>2yd^1txn&lf<62`\"[=nxb>.9wS0Cn U -ԜDHKHy&(8|/9/ Vem Scm0TFnqjP~j$-D!p.tMs)705hр.Pр2"} cAҚGg-zf͝_#Rf?L` p|4J|,"uaϵ#<'wh AL$GŠ Y05OM@w?ѐ~(\e$DܱK0#zNy vž6#X(zwGv۟DtQܶW4b> S@t9v xd*+\3C$?`\09I->4ہp[ q; G)x; YICc8&v;#B1 C9K# bv :wCmHwE( [̭B0L=\5*j܏\U2~و̝)} l$ ^tX{2ȱ>;@ %|zt; Q;`<2氏>K(Yzݱ5)J^>Q,0Og'm-f>k[ׂTpD&h]߳aq[D r j[*$(-=8KgrmzQM^([7 b,0*\7'8-r(.kNT V-Z2cW\i#n>yL'u@9|BD :+ jX+~Dv؁"E.5t4Ix:L(l&0;# 1d²VHBp)Gx;&l/H8j4jyW¬"l/!m6WCWP##LY2 ]VIEdjͰSF0 ʝ<.gCJF Ua*~cd/??h$2Gs ( ٥_37}; ]pN#<-9c;Q%_]2V_>3](tC+ .-ѿKfс=!E𑂩 dA/u1I Sw.5':, ]Ct `D+bRY:?Wϥ[bH7)Ԡi3 N5A0a˘7ˍBvtVW9-/$ G25|\*Ń4D_zE߇~H>tb먟6{XWaQ֢=Ei_^E-qu$` ppY9>%•0 w)06up#hث[F b(Xߥ1UMtGs0Z{**Fb5-+?p&eXkk}&wJ@s|+US噗6{B Mo@#.˴n&ɋ zFO]K۫AhUp&XrN7 ^ܩ@*0Ow+X6M[H* C'CM@nJ09-zU]«V7uN!W|e/S@.АoES#Dj>fam!IJESN /\ ށ*ej{=!*-@Rq~M6zɿw\ӣ"U /r~ۈ1y7Z|suX482Kی^9U{!{2{[*ݰ+벷ypxJ!b Z|Ο`GKPIx6aӒq`k8u筄lYb&_! @Q ,a x@) 5WfG"[T~q~8(Qޞ KO2N '_:92KVtH/`v3?.o&s ak~\laۡ_'svC%n_D0TPwQ?Te o+}teۖ)H{2\<ūMcќ\wx]#r)خ_l=)wБ'&){|+RnVs[":A\dIFW#> RiZL6wPIPָ!r=lKμ" *YE|g)B q7͎#{t%4qVN|zV#=N"eCR2^$/ƪ|qGt )62dSQ#,[Rp\I986UBrπt̑#WO@ bUĎ`CIJ?˂XFw G@qi)Q>bhUҳhMԻZ7\gVpeŚj;7#o5.j37aLlwݍ$h%nN>(hGSOVxؤuEoZpȺƚWJ,&{0g R7&r|]*-tq%,#t%[nr&o.Z䁹G~դ$|Wjv >>4@zi DdB`7~q x!n 05,;uZF[3;@QS\#y=l"cpwsee*t(zE RpaoEKuJqĝp }$?  b TI xgLORVfljb.A6NͬWEJAT;iȸ`&2:yYu2ܬ}U<˓\1K.Qdqr,GWQt$'6æ_ JnR"q8QU&mu=ׇH=2‌zkc>W!:y>-P3_YPG*"W;8`f&rh݇)T;AXjǃCaUe{vvpWX:wPћaU̾è갏d% ]R AuKp1Y-PJR)D}`Ä[PUi,[$],<lKz~E[*$-[9<f[-G3݃LJ2qn{0U ,PU8$f[<-AﶸKhoQ,]kKOk]*)e49M>QCVtNqZ#( v󋪨~ -m!kG(;roRP4̫SSFݞm'-7|URhup΢)h.@f[ X &&t]âhPb҂x?IN1apD/ \Hϯ}:rI2HgRl.-{ߛ ^;ʪ@А#+"; X_B_OAr&l ̩̏1rޕ?5 vsW{ <,m;T0bX.uU5+>+|4KiQ: ܱxw0WͥљkD=wmL{-E@Á!iؾ/U+s6Kӆ}3}cL UyyVD]Ȥ|H9 k':5dڂ-A@'a ,VV\eˑUR XtӸtq Q6hFm;]J&/K[P/{u;D [o.J;3itK lAA#޽=%d.w8G7CII,j/Fi6Vm(b%J=N,-P+"Tt( ;s k GF-il\~O=GMe|ぉ`ǯtyq)~lFD-cL^ʄ\zߴx*$A) p! *#pa7a@ 4+\KLpy ]WK ;Wa,Pն7q5&bYұ. oHAfj)7<Ր|FsnQa Cgk5/ U uyo3 yS58OA6wR?5csM|.Nɷ`Mv$<[LG ,wۮt IUZVco>:PakL(H߉ ^ HW"KU/e-*-UM>UmYMXw,"}5j=BJs$?S7V7VZ`,'JN6S- aG2uT d$bIpolyPTNɁB45}ڹQ.ݑ®+*&FӇaAW)xokxGmǯ~[3suf+k|׫lvaۍG!<'xl[ pGφRayZJ3;P_1[)!8Z˳H#؟glxjU ⦜"f鮍FѪXbt{v5bأh<^Q|>y zȼȱUkA۬>X߇[꛱\D~|2Uy$î,yy{(6jNj}mu/"熚gp) P^\u42{ AO[DU`M;MT 1Hdlr\UD\m(1G.pdɱ,VQCjVPm .Qn nC_N(}{1+9[g˿e&z1i|D~R[埉8OԎ17<3VbRYjQ*ݜA h/z& -?a/VJsUUc0 35l0% :tJoio[ݿ 1Ct.O'`hV_J#|y+`X=="JHX 4&Sw0I9|CÑ7::'.]*v\=~xり A>RB&K\#mMnnk#Yf61cl䕬T]GnyPǝ(Zo qG@FbY,"fkFs-όOӵd}$SAys0^(e W.`3\f{mٷ{~eT9B11hbB:1NE v|I`AcH|5sׄÝ#xCEh?@8B؆#@ hK/Ƶ:Q˷ {oUkQT9q\ⴍ[[v/jzjܻ!6878hYg$2 ^ޔzQX5@J;(kR:0>oFg4FhR( tGG/N\%1ԬwXDo3}zŜ{-T&mĴH,ؠlʘR >JHp2SPɂ#Jh>H-lq^%"t. ]P0Ւ#gQ i0ꓗZ Glٞ0\P2x=cxJB'hE_d8!( " "Tǚru<@4Lr9_6 :]sw/3!nRKтΧ0,o8TyOIQ)˦B97*G51i->~_,mkЩ+c efEL 99c{a}I@?O7FV35~,@qQeY0$ `B@H$+r8ÎӡG`MD-5|VS6L&o>ZfBXAK[bYz. 
Li]E$<!HcKֹHBuѰVV<yL|e͎<0EĶc|-m t@>G_򦍻Yqd_3R:gO:/c)1YrXSDHdVFkuYp%*e~[-(-AͮGaJ~wpjSqӿA%{̃!E54n8AucqH$4txqhR?cS8۷}) N0fTo?^(U-RH܌؄N7f 8'T3\P0EP\8p);ȁҽdBm]#EH&\F:bﻈ*G'*_ vknM[{mLƙX4vrSP$Ec8Ǣ|4ohrܵ:%/z[V@ImܦrtUrm6DXRcG쨌C]C=I V'`,L-,jKG B~x\W9k3(]<VC}t.эhtf̈́ X>=WP|%^Ow`Ԉ[֥&VWDo<> 5n0u%W) ҢS *%],n1X.A3${\.x_y#:j%%%C՞([<熕"(!jÒg9~B58ONd]d#;A(%Q"hF-ăt.W x'uLKIL py1E^t~s#)w~ВzW# /VO}$30@v'=DPZqݸBDֳ ]cv;ʸm8Xə1KN~z-u3mW& X]Ew1i뉣[҉{ŏQjÙ_vsۀݰ<,Vb ,uu_$ oߔtɑo$z(\8 ӋW:j<;]2_IHp{voЈ\5xs魵ڤNֲulWT' `j3@"3,JCPCߴOR@vm%VjAê ӠYs2Yοhũ\ ם3PV/Sw%z"9]&)打\Xmkjw#PvgQ#X))2Oe)sb/7r 1莁64ZT?0au9־ )Q/j=V=8\HSԐa26w)Jwӳ@o PuRϡwv[ ׽ӓJs=d8C'GR S $ߌW鯚#uQ.-NuKzΩh/_ඌI+p%[;*I!Z2B'Na 0^EБ' YX[WHj K] Ur*>F:o͒G]|Sw [lEE,18Ԑ ݗiq%X4#C$g&@7WV Pd)(>*W $Rp쫻L'GK-^G6lb7^ > g6CA+_3iA-V1/9ϑqͭp&'Kgd(iq*\~ V#rb3:/u tioB4atMIuO[tI9z!r}uU2FQA&}t3n~lO5}1zS[H4q:_~%r'(GR,2*Y5Au 2B&"Z0JR5ɅnVbPvZlaG-/%eNYEIywn,)zmD0<][zU,8 H!\qZJ ɴљ _!rW:zաn%K.IE<#T#=X]̦ҬXqE#=94?27̓6lӶ@CnգJQ[{]XI{O@Bۼ NčD굩%Φ G?\GE Dp(u=%6 C 0hVvvQ}9l,{tma󟂙|3{'s{WY򂔨IwVxB'G5LooSlwORwϏ{ OI]) ͞MZdz4=ΎtƋisЉ,tY;;VxM7S6Z( !-Mp@za 0;fX]EWjd2V/syi憛A ;5WMοzu+0V44Ix5Rꞧupt~DԄ&ث!ҵЋEDžW[q=D9LX2$czىa7^$,*gjM|J+EUP]P{/HCtl~ 9;,f|am`WԂ?CS1CݠhИ{$, \5Ռ81 N=ד2B>.865'-o * {N=}2q o?rNZ |-AZdՌ iI Q[]8^+mɮ%7Z@Ј2Ӽy퐺(|S8 {v:ݱYY~\P/R!ߜ|nO"; lC 1[KVҰ}fWxKύFj7@' XuZXb˓G/K=r>ϪBr |mWFӕZk:L8(ˀrMu4-ʨL#7OzC:$f CCtZ|+*:6piF`[XtaX'ˑq _TߵвҊ?ޱ>0]E)y>m^elj/JK$rGW>t#]3߳yd-rܤ*v$Pv1J8`UH&;$',;Ry(LsJcǹP Z ,;,y?{X]|GEޢf!l!堎ߑGV<{)`q7##wz#h= "m iFMCFp/ 틄P`WxN fS\@g,-d)j9 ֟sǍvH,W:suQSwdcA*CtYr3,b32e̙qp|ޥK[vЗnL< m켳A[^WZ+aBʾ艣}kJ7H[X Ʀܑݍ-oV1*8$6 N$wIaPlSk$#@F/OF2ˣ>A'-^^X"b/MH u%Zѕ^97~hA-|ʳV$u@#HG=a ,f2I1rc|9x-< 7cj3ck ru#dͪAfVW \X[(F+"벦̋3qB\.衴 Œ |CZyk7f A/yVZP,0\:)5{ QA=JXAwI[%/g Xĩh ȭ;Ρgua׋ڰ/s*,bia85(P:E7ECo*rudvD=ళI7ݎG7A@IV,$(#~WND@ȌBemw;67KTY;~0L巳NE~*] *QOP93Jbtzz^x{$%քa=cc{QaH?Sm @jJ3YǪr'ω4~ {:*GwL (=4c"q"&lF܁f1rǡ''dRX Hn aח6V~>ʜškUmW!]r CJ^:'o;+E7Y%x [a ;p2ޒwm5vJYB ՙWQYF$vT#\lrN^lW]Ev*d9F,<;fw1ձk,jsU}:po0ꃆ3&$fv1Tٌm\)Jzw>)l<3'_"t\s.-ݷ]㮆Oe^thq,s(gO@pGn~۴ ~Xtt5 0_u GA(K}-DiG;눚nc0dc\| daA9fׇ 3mS6;t;<HDwB}+bUWhG=,3ܴbV.~{ !'\FCŰI fѯn^JI&ENXLF{,*GBԍ[ b|lyAG2 @զf +DFoKBE0S ooPVVT똂*)׫<l݅3LosiUJ25DShlӈC᭲8eP0>h@v$Fx0XЦi5=#]v}ɆxEǙR+= i^>+l'Vi?woL2dhwRL7:jj_ ޫ?'mZ?I;6#7 2=la:{ 7È'ļfk0 ;Kbŏb7 [Rx%1+Tn?)A^%h5yL}[`$Aob?2 %`\ސ56z9y,13AËdsq[℮BtH *_CiaVJ{57NK5fjVhČiT!y1y@c30@{$\ArrՀNώ5u@OjO2P 8%w'4+_9xƥ]~r/s7+$3ZvɐSALmV쨠 Sږax}>SUQ;!>c2ƞ5 iO 7}jdE3#a;=1y6!}e:W%|F%7s:ܸ|jeF+p[B|;a$WsX /Tב~RA:]H0du'fjy{+&יM}ϗ./fm[_bU3 :(A[ x&` l}ŽͿ/-OiIG؝1U =h,/ȢZ_5 bVش\#ͻ1~VHq"މLmq:ez RMRnL,~VP"2$Ջ4ζs˓{S”4~S$l/Ũ82 |m-!ʡ&s!":9 b򅳕0N-~ů8柪Y8,F6z"#o/GHBf^*Q~o_5B`Ɉ ,D4˸W7EosCv{c{Zk dil$yJJ?d^>:{ (c)o⯺݌|TNbRou\4ȹ'tZc Єg>$ݬ1YH`Ȟb0_fمfQJlyiaԎz $9}㓆:p߽3?\y:]z%O\oX1x\N8_xI䷈_Zpd@m{.e]r@-*mLjxC7J`PL3&L|m> 9WLI_^wX٘<4y?1e|s_Q\Ϛs8)4., © ȴu(B mb+uZ,>N!5)B k"`& 'Հķ lj@YBCH`AbzXmzm.[3(U-,s"H ;dgFx]m. )eMq,+ -LT[0Wo<ops?,C(/Y uwtaKj>*9K[2@x#Ζd`[2aUxݐ M.,vϩ(4wnHs%F +=@Mv dx𰭖{my 掄¬u9j0D{qq>3h#bPO7Zc>fDx`ZU| _Jp7W7܆^$x<l-Q[>G /m0 Ɵʜy2֪Uz> "2PFs8IM樨19G@Ϣ-%V F`6I!bzq!e .2,1buϵޠ,wr9;JH2Q<ݥ]g&5JO<~c8tmwu,8}bW5݈Mq)߮, H"ԫ)󯸨SvAG࿲0V& MFj|\phBd!}MUڬ#KvOT\g~IJWPzT s)BhT2G^N^Qꗃ-bIZEf:>po0!Ԓ;=hPO]Kr& Lk@{',c.ĕ붎cyܱW iGʤ;jwj^gX)?U1],;G8}Ib Lɦ󭵏$L 8('kMd+H󫇊6Ew APϗ9sDc5VkEΧY2q} L@a]&jS=I Gua@VJjF=>>dOWvlb_|Huږp&o$&">n"mE(\׌Y= 30|؀ (ba)/.N'2l "Zg־m 6q?0x͒e4olv`/i7xi\M {P23Gջ%N3;Q.Of!iTSoJ#O6AjOj sxjZbxH |0rN9tۂi?cFL 9c}1uYHu' 069[ |@>J3#{P:GRO{ "r3R s3 (~LLU DjetY)#t ,6g IIՠkSdWq.bN^T8`8? 
rtdists/NAMESPACE0000644000175000017500000000201113667512040013315 0ustar nileshnilesh
# Generated by roxygen2: do not edit by hand

export(dLBA)
export(ddiffusion)
export(dlba_frechet)
export(dlba_gamma)
export(dlba_lnorm)
export(dlba_norm)
export(drd)
export(n1CDF)
export(n1PDF)
export(pLBA)
export(pdiffusion)
export(plba_frechet)
export(plba_gamma)
export(plba_lnorm)
export(plba_norm)
export(prd)
export(qLBA)
export(qdiffusion)
export(rLBA)
export(rdiffusion)
export(rlba_frechet)
export(rlba_gamma)
export(rlba_lnorm)
export(rlba_norm)
export(rrd)
importFrom(Rcpp,evalCpp)
importFrom(evd,dfrechet)
importFrom(evd,pfrechet)
importFrom(evd,rfrechet)
importFrom(gsl,gamma_inc)
importFrom(msm,rtnorm)
importFrom(stats,dgamma)
importFrom(stats,dlnorm)
importFrom(stats,dnorm)
importFrom(stats,integrate)
importFrom(stats,optimize)
importFrom(stats,pgamma)
importFrom(stats,plnorm)
importFrom(stats,pnorm)
importFrom(stats,rgamma)
importFrom(stats,rlnorm)
importFrom(stats,rnorm)
importFrom(stats,runif)
importFrom(stats,uniroot)
useDynLib(rtdists, .registration = TRUE)
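For orientation, the NAMESPACE above defines the package's public API. Below is a minimal usage sketch of the core diffusion exports; the parameter values (a = 1, v = 2, t0 = 0.3) are illustrative assumptions, not recommendations.

library(rtdists)

# Simulate 500 trials from the Ratcliff diffusion model:
# a = boundary separation, v = drift rate, t0 = non-decision time.
sim <- rdiffusion(500, a = 1, v = 2, t0 = 0.3)
head(sim)  # data frame with columns 'rt' and 'response'

# Joint density of each simulated (rt, response) pair
# under the generating parameters.
dens <- ddiffusion(sim$rt, response = sim$response,
                   a = 1, v = 2, t0 = 0.3)

# Defective CDF and quantile function for upper-boundary responses.
pdiffusion(1, response = "upper", a = 1, v = 2, t0 = 0.3)
qdiffusion(0.5, response = "upper", a = 1, v = 2, t0 = 0.3)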