stabledist/0000755000175100001440000000000012521701706012427 5ustar hornikusersstabledist/TODO0000644000175100001440000000363612521673501013130 0ustar hornikusers* Compare with Jim Lindsey's "stable" package --> http://www.commanster.eu/rcode.html Maybe even ask him to make it CRANny (maybe after transfering maintainership ?) At SfS ETH: source in /usr/local/app/R/R_local/JimLindsey/stable/ installed in /usr/local/app/R/R_local/JimLindsey/library * stableMode(): [MM] I'm pretty sure that the 'beta.max' argument is unneeded and could be eliminated. * parametrizations -- pm=0, 1, 2 --- provide functions for the parameter transformation that go from other to pm=0 -- and back and then show how to, or use these to to provide transformations from all pm's to any other. * [qp] check the lower.tail & log.p argument settings NB: This check *should* happen in fBasics 'checkdist' ... * curve(dstable(exp(x), alpha= 0.1, beta=1, pm=1, log=TRUE), -15, 0) very much depends on zeta.tol ... that should somehow depend on alpha or zeta; --> tests/dstab-ex.R ("beta = 1") * As easy alternative to fBasics::stableFit() define one based on stats4::mle (and possibly also a quantile-matching alternative). ------------------------------------ Done : -------------------------------------- ==== * Nolan ("tail.pdf") and other places mention *asymptotic* formulae --> use these for large |x|, notably for dstable(*, log = TRUE) * pstable() should also get lower.tail = TRUE, log.p = FALSE ~~~~~~~ arguments * qstable() ditto * using doExtras to *not* run some tests when --as-cran , as From: Prof Brian Ripley Subject: Re: New version of copula uploaded 0.99-2 Date: Tue, 29 May 2012 17:17:05 +0100 .. but the test times * checking tests ... [173s/174s] OK Running doRUnit.R [82s/83s] Running dstab-ex.R [34s/34s] Running pstab-ex.R [19s/19s] Running tails.R [39s/39s] are more than we would like, so please consider reducing substantially next time around ... stabledist/inst/0000755000175100001440000000000012521673675013421 5ustar hornikusersstabledist/inst/xtraR/0000755000175100001440000000000012521673675014521 5ustar hornikusersstabledist/inst/xtraR/Levy.R0000644000175100001440000000351612156046365015562 0ustar hornikusers### Lévy : ### ==== ## Stable(alpha = 1/2, beta = 1, gamma, delta, pm = 1) <===> Levy(delta, gamma) ## ~~~~~~~~~~~ ~~~~~~~~ ## http://en.wikipedia.org/wiki/L%C3%A9vy_distribution ## The probability density function of the Lévy distribution over the domain x >= \mu is ## ## f(x;\mu,c)=\sqrt{\frac{c}{2\pi}}~~\frac{e^{ -\frac{c}{2(x-\mu)}}} {(x-\mu)^{3/2}} ## ## NOTA BENE: You can use 'mpfr numbers for x' -- ! --> ../../tests/pstab-ex.R ## ~~~~~~~~~~~~~~~~~~~~~~ dLevy <- function(x, mu=0, c=1, log=FALSE) { r <- x <- x-mu ## ensure f(0) = 0 {not NaN}: pos <- x > 0 ; x <- x[pos]; if(log) r[!pos] <- -Inf r[pos] <- if(log) (log(c/(2*pi)) + -c/x - 3*log(x))/2 else sqrt(c/(2*pi)) * exp(-c/(2*x)) / (x^(3/2)) r } ## where \mu is the location parameter and c is the scale parameter. 
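## A quick consistency sketch for dLevy() -- by the equivalence noted above,
## Stable(alpha = 1/2, beta = 1, gamma = c, delta = mu, pm = 1) has the Levy density,
## which ../../tests/pstab-ex.R checks systematically.  Kept as a comment so this
## file stays sourceable without 'stabledist' attached:
##   all.equal(dLevy(1:5, mu = 0, c = 2),
##             dstable(1:5, alpha = 1/2, beta = 1, gamma = 2, delta = 0, pm = 1))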
## The cumulative distribution function is ## ## F(x;\mu,c)=\textrm{erfc}\left(\sqrt{c/(2(x-\mu))}\right) ## ## {MM: fixed Wikipedia entry: (x-mu) is in the denominator!} pLevy <- function(x, mu=0, c=1, log.p=FALSE, lower.tail=TRUE) { ## erfc <- function(x) 2 * pnorm(x * sqrt(2), lower = FALSE) ## erfc(sqrt(c/(2*(x-mu)))) x <- (x-mu)/c # re-scale to (0,1) u <- 1/sqrt(x) if(log.p) { if(lower.tail) log(2) + pnorm(u, lower.tail = FALSE, log.p=TRUE) else log(2 * pnorm(u) - 1) } else { if(lower.tail) 2* pnorm(u, lower.tail = FALSE) else 2*pnorm(u) - 1 } } ## where \textrm{erfc}(z) is the complementary error function. The shift ## parameter \mu has the effect of shifting the curve to the right by an ## amount \mu, and changing the support to the interval [\mu, \infty). Like ## all stable distributions, the Levy distribution has a standard form ## f(x;0,1) which has the following property: ## ## f(x;\mu,c) dx = f(y;0,1) dy ## ## where y is defined as ## ## y = \frac{x-\mu}{c} stabledist/inst/unitTests/0000755000175100001440000000000012521673675015423 5ustar hornikusersstabledist/inst/unitTests/Makefile0000644000175100001440000000042312521673676017063 0ustar hornikusersPKG=stabledist TOP=../.. SUITE=doRUnit.R R=R all: inst test inst: # Install package -- but where ?? -- will that be in R_LIBS ? cd ${TOP}/..;\ ${R} CMD INSTALL ${PKG} test: # Run unit tests export RCMDCHECK=FALSE;\ cd ${TOP}/tests;\ ${R} --vanilla --slave < ${SUITE} stabledist/inst/unitTests/runit.StableDistribution.R0000644000175100001440000001155512027023255022507 0ustar hornikusers# This R package is free software; you can redistribute it and/or # modify it under the terms of the GNU Library General Public # License as published by the Free Software Foundation; either # version 2 of the License, or (at your option) any later version. # # This R package is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Library General Public License for more details. # # You should have received a copy of the GNU Library General # Public License along with this R package; if not, write to the # Free Foundation, Inc., 59 Temple Place, Suite 330, Boston, # MA 02111-1307 USA # Copyrights (C) for this R-port: # 1999 - 2007, Diethelm Wuertz, GPL # Diethelm Wuertz # info@rmetrics.org # www.rmetrics.org # and # Copyright (C) 2010--2012 Martin Maechler, ETH Zurich # # for the code accessed (or partly included) from other R-ports: # see R's copyright and license files # for the code accessed (or partly included) from contributed R-ports # and other sources # see Rmetrics's copyright file ################################################################################ # FUNCTIONS: STABLE DISTRIBUTION: # stableMode Computes stable mode # dstable Returns density for stable DF # pstable Returns probabilities for stable DF # qstable Returns quantiles for stable DF # rstable Returns random variates for stable DF ################################################################################ ## FIXME / TODO: need unit tests for stableMode() !! 
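## A minimal sketch of such a test -- reference values are those used in
## ../../tests/dstab-ex.R plus the exact alpha*beta == 0 shortcut of stableMode();
## the 'Tst.' prefix (as for Tst.stableS2 below) keeps it out of the automatic
## suite; rename to 'test.stableMode' to activate it:
Tst.stableMode <- function() {
    checkEquals(stableMode(1.5, 0), 0)  # beta == 0  ==>  mode is exactly 0
    checkEquals(stableMode(0.75, -0.5, tol = 1e-14),  0.35810298366, tolerance = 1e-7)
    checkEquals(stableMode(0.10,  0.5, tol = 1e-15), -0.079193,      tolerance = 1e-5)
    # Return Value:
    return()
}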
if(do.stable.rUnitTest <- require("fBasics") && ## need the newer distCheck(): packageDescription("fBasics")$Version >= package_version("2110.79")) { ## fBasics: for .distCheck() distCheck <- fBasics:::.distCheck environment(distCheck) <- asNamespace("stabledist") ## and re-attach "stabledist" as its contents is now masked by fBasics::dstable: if((P <- "package:stabledist") %in% search()) detach(P, character.only=TRUE) stopifnot(require("stabledist"), ## check that indeed we get stabledist's functions, not fBasics: identical(dstable, stabledist::dstable)) } source(system.file("test-tools-1.R", package = "Matrix")) #-> identical3(), showProc.time(),... (doExtras <- stabledist:::doExtras()) n.check <- if(doExtras) 1000 else 64 test.stableS0 <- function(n = n.check) { if (do.stable.rUnitTest) { ## "TODO" in distCheck() -- use 'tol = .005' in newer versions # stable - Parameterization S0: test <- distCheck("stable", n=n, alpha = 1.8, beta = 0.3) print(test) ## the 3rd test -- matching (mean, var) typically fails for stable -- as Var(.) == Inf ! checkTrue(all(test[1:2])) # stable - Parameterization S0: test <- distCheck("stable", n=n, alpha = 1.2, beta = -0.3) print(test) checkTrue(all(test[1:2])) if(doExtras) { # stable - Parameterization S0: test <- distCheck("stable", n=n, alpha = 0.6, beta = 0) print(test) checkTrue(all(test[1:2])) } } # Return Value: return() } # ------------------------------------------------------------------------------ test.stableS1 <- function(n = n.check) { if (do.stable.rUnitTest) { if(doExtras) { # stable - Parameterization S1: test <- distCheck("stable", n=n, alpha = 1.8, beta = 0.3, pm = 1) print(test) checkTrue(all(test[1:2])) } # stable - Parameterization S1: test <- distCheck("stable", n=n, alpha = 1.2, beta = -0.3, pm = 1) print(test) checkTrue(all(test[1:2])) if(doExtras) { # stable - Parameterization S1: test <- distCheck("stable", n=n, alpha = 0.6, beta = 0, pm = 1) print(test) checkTrue(all(test[1:2])) } } # Return Value: return() } ##if(doExtras) test.stableS1 <- Tst.stableS1 # ------------------------------------------------------------------------------ Tst.stableS2 <- function(n = n.check) { if (do.stable.rUnitTest) { if(doExtras) { # stable - Parameterization S2: test <- distCheck("stable", n=n, alpha = 1.8, beta = 0.3, pm = 2) print(test) checkTrue(all(test[1:2])) # stable - Parameterization S2: test <- distCheck("stable", n=n, alpha = 1.2, beta = -0.3, pm = 2) print(test) checkTrue(all(test[1:2])) } # stable - Parameterization S2: test <- distCheck("stable", n=n, alpha = 0.6, beta = 0, pm = 2) print(test) checkTrue(all(test[1:2])) } # Return Value: return() } if(doExtras) test.stableS2 <- Tst.stableS2 ################################################################################ stabledist/inst/unitTests/runTests.R0000644000175100001440000000453211522536021017360 0ustar hornikuserspkg <- "stabledist" if(require("RUnit", quietly = TRUE)) { library(package=pkg, character.only = TRUE) if(!(exists("path") && file.exists(path))) path <- system.file("unitTests", package = pkg) ## --- Testing --- ## Define tests testSuite <- defineTestSuite(name = paste(pkg, "unit testing"), dirs = path) if(interactive()) { cat("Now have RUnit Test Suite 'testSuite' for package '", pkg, "' :\n", sep='') str(testSuite) cat('', "Consider doing", "\t tests <- runTestSuite(testSuite)", "\nand later", "\t printTextProtocol(tests)", '', sep = "\n") } else { ## run from shell / Rscript / R CMD Batch / ... 
## Run tests <- runTestSuite(testSuite) if(file.access(path, 02) != 0) { ## cannot write to path -> use writable one tdir <- tempfile(paste(pkg, "unitTests", sep="_")) dir.create(tdir) pathReport <- file.path(tdir, "report") cat("RUnit reports are written into ", tdir, "/report.(txt|html)", sep = "") } else { pathReport <- file.path(path, "report") } ## Print Results: printTextProtocol(tests, showDetails = FALSE) printTextProtocol(tests, showDetails = FALSE, fileName = paste(pathReport, "Summary.txt", sep = "")) printTextProtocol(tests, showDetails = TRUE, fileName = paste(pathReport, ".txt", sep = "")) ## Print HTML Version to a File: ## printHTMLProtocol has problems on Mac OS X if (Sys.info()["sysname"] != "Darwin") printHTMLProtocol(tests, fileName = paste(pathReport, ".html", sep = "")) ## stop() if there are any failures i.e. FALSE to unit test. ## This will cause R CMD check to return error and stop tmp <- getErrors(tests) if(tmp$nFail > 0 | tmp$nErr > 0) { stop(paste("\n\nunit testing failed (#test failures: ", tmp$nFail, ", R errors: ", tmp$nErr, ")\n\n", sep="")) } } } else { cat("R package 'RUnit' cannot be loaded -- no unit tests run\n", "for package", pkg,"\n") } ################################################################################ stabledist/tests/0000755000175100001440000000000012521673675013606 5ustar hornikusersstabledist/tests/dstab-ex.R0000644000175100001440000003050612156043102015420 0ustar hornikusersrequire("stabledist") dPareto <- stabledist:::dPareto source(system.file("test-tools-1.R", package = "Matrix"), keep.source=interactive()) #-> identical3(), showProc.time(),... (doExtras <- stabledist:::doExtras()) if(!require("sfsmisc")) eaxis <- axis # use sfsmisc::eaxis if available stopifnot(0 < print(dstable(4000., alpha=1.00001, beta=0.6))) ## gave error in fBasics::dstable() ## now 18 warnings from uniroot() pdf("dstab-ex.pdf") x <- 2^seq(0, 20, length= if(doExtras) 200 else 64) ## Regression check for alpha=2: <==> *norm() : x. <- x/1024 fx <- dstable(x., alpha = 2, beta = 0, gamma = 1.1, delta=2, pm=2) lf <- dstable(x., alpha = 2, beta = 0, gamma = 1.1, delta=2, pm=2, log=TRUE) stopifnot( local({i <- is.finite(log(fx)); all.equal(log(fx[i]), lf[i])}), all.equal(fx, dnorm(x., m=2, s=1.1)), all.equal(lf, dnorm(x., m=2, s=1.1, log=TRUE))) fx <- dstable(x, alpha = 1.0001, beta = 0.6) plot(x,fx, log="x", type="l")# looks good plot(x,fx, log="xy", type="l")# --- perfect now stopifnot((dlx <- diff(log(fx))) < 0, # decreasing { dl <- dlx[x[-1] > 1000] # the negative slope: relErr(if(doExtras) -0.13934 else -0.44016, dl) < 4e-4}, diff(dl) > -1e-6)# > 0 "in theory"; > -6e-7, ok on 64-bit Linux nc <- if(doExtras) 512 else 101 # number of points for curve() zeta <- function(alpha,beta) if(alpha==1) 0 else -beta*tan(pi/2*alpha) ## negative beta: cx <- curve(dstable(x, 0.75, -.5), -.5, 1.5, n=nc)# ok, now m <- stableMode(0.75, -.5, tol=1e-14) stopifnot(all.equal(m, 0.35810298366, tol = 1e-7), all.equal(dstable(m, 0.75, -.5), 0.3554664043, tol=1e-6)) showProc.time() ###-------- "small" alpha ----------------- ## alpha --> 0 --- very heavy tailed -- and numerically challenging. 
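## How heavy?  By the Pareto tail approximation used in the package,
##     f(x) ~ alpha*(1+beta) * C(alpha) * x^-(1+alpha)     as  x -> Inf,
## so for alpha = 0.1 the density decays only like x^-1.1, and absolute moments
## of order >= alpha do not exist.  A small visual sketch of that tail
## (ad hoc x-range; dPareto was pulled from the namespace at the top of this file):
curve(dstable(x, alpha = 0.1, beta = 0), 1, 1e3, log = "xy")
curve(dPareto(x, alpha = 0.1, beta = 0), add = TRUE, col = 2, lty = 2)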
## symmetric (beta = 0) (x0 <- (-16:16)/256) fx0 <- dstable(x0, alpha = 0.1, beta=0, gamma = 1e6) plot(x0, fx0, type = "o", main = expression(f(x, alpha== 0.1, beta == 0, gamma == 10^6))) stopifnot(all.equal(fx0[17],1.15508291498374), all.equal(fx0[ 1],0.02910420736536), all.equal(range(diff(fx0[1:8])), c(0.0011871409, 0.0025179435), tol=1e-6) ) ## beta > 0 r3 <- curve(dstable(x, alpha = 0.3, beta = 0.5, tol=1e-7), -1, 1) m3 <- stableMode(0.3, 0.5, tol=1e-14)# still with 3 warnings stopifnot(all.equal(m3, -0.2505743952946, tol = 1e-10)) ## zoom into the above r3. <- curve(dstable(x, alpha = 0.3, beta = 0.5, tol=1e-7), -.27, -.22) abline(v = m3, col="gray40", lty=2) r1 <- curve(dstable(x, alpha = 0.1, beta = 0.5, tol=1e-7), -.4, .2, ylim = c(0, 20), n = nc) m1 <- stableMode(0.1, 0.5, tol=1e-15) abline(v=m1, h=0, col="gray40", lty=2) stopifnot(all.equal(m1, -0.079193, tol=1e-5)) # -0.079193188, was -0.079192219, and -0.0791921758 title(main = expression(f(x, alpha== 0.1, beta == 0.5))) ## check mode *and* unimodality i. <- r1$x > m1 stopifnot(## decreasing to the right: diff(r1$y[ i.]) < 0, ## increasing on the left: diff(r1$y[!i.]) > 0) ## Now *really* small alpha --> 0: ## -------------------------- ## Note that if(require("FMStable")) { try( FMStable::setParam(alpha = 1e-18, location=0, logscale=0, pm = 0) ) ## gives ## Error in .... setParam: S0=M parametrization not suitable for small alpha } ## now if this is true (and I think we can trust Geoff Robinson on this), ## we currently have a "design bug - problem": ## as internally, we always translate to 'pm=0' parametrization __FIXME_(how ??)__ ## --> solution: see below: there's a simple (alpha -> 0) asymptotic ## These now "work": .... well with integration warnings !! pdstab.alpha <- function(x, beta, alphas = 2^-(40:2), type = "o", add=FALSE, log = "xy", ...) { stopifnot(is.numeric(x), length(x) == 1, length(beta) == 1, is.numeric(beta), -1 <= beta, beta <= 1, is.numeric(alphas), 0 <= alphas, alphas <= 2, is.logical(add)) if(is.unsorted(alphas)) alphas <- sort(alphas) dst.alp <- vapply(alphas, function(aaa) dstable(x, aaa, beta = beta, pm = 0), 1.) ## with warnings if(add) lines(alphas, dst.alp, type=type, ...) else { plot(alphas, dst.alp, type=type, log=log, axes=FALSE, xlab = quote(alpha), ylab = bquote(f(x == .(x), alpha)), main = bquote(dstable(x == .(x), alpha, beta == .(beta), pm == 0) ~~~"for (very) small" ~ alpha)) if(!require("sfsmisc")) eaxis <- axis # use sfsmisc::eaxis if available eaxis(1); eaxis(2) } invisible(cbind(alphas, dstable = dst.alp)) } ## vary beta, keep x : pdstab.alpha(x = -1, beta = 0.1) if(doExtras) { pdstab.alpha(x = -1, beta = 0.3, add=TRUE, col=2, type="l") pdstab.alpha(x = -1, beta = 0.5, add=TRUE, col=3, type="l") pdstab.alpha(x = -1, beta = 0.7, add=TRUE, col=4, type="l") pdstab.alpha(x = -1, beta = 0.9, add=TRUE, col=5, type="l") ## vary x, keep beta pdstab.alpha(x = -1, beta = 0.3) pdstab.alpha(x = +1, beta = 0.3, add=TRUE, col=2, type="l") pdstab.alpha(x = +5, beta = 0.3, add=TRUE, col=3, type="l") pdstab.alpha(x = +50, beta = 0.3, add=TRUE, col=4, type="l") pdstab.alpha(x = -10, beta = 0.3, add=TRUE, col=5, type="l") } ## Plots suggest a simple formula ## log(f(x, alpha, beta))= c(x,beta) + log(alpha) ## f(x, alpha, beta) = C(x,beta) * (alpha) -- ??? 
## for normal alpha, it looks different {which is reassuring!} pdstab.alpha(x = -1, beta = 0.3, alphas = seq(1/128, 2, length=100)) ## Show the formula, we derived: ## f(x, \alpha, \beta) ~ \alpha * \frac{1 + \beta}{2e \abs{x + \pi/2 \alpha\beta}} ## ONLY ok, when x > zeta := - beta * tan(pi/2 *alpha) ## otherwise, "swap sign" of (x, beta) dst.smlA <- function(x, alpha, beta, log = FALSE) { pa <- pi/2*alpha i. <- x < -pa*beta if(any(i.)) { beta <- rep(beta, length.out=length(x)) beta[i.] <- -beta[i.] x [i.] <- -x [i.] } ## alpha*(1+beta) / (2*exp(1)*(x+ pa*beta)) r <- log(alpha) + log1p(beta) - (1 + log(2*(x+ pa*beta))) if(log) r else exp(r) } set.seed(17) alpha <- 1e-15 ## must be larger than cutoff in .fct1() in ../R/dpqr-stable.R : stopifnot(alpha > stabledist:::.alpha.small.dstable) for(beta in runif(if(doExtras) 20 else 2, -1, 1)) { cat(sprintf("beta =%10g: ", beta)) for(N in 1:(if(doExtras) 10 else 1)) { x <- 10*rnorm(100); cat(".") stopifnot(all.equal(dstable (x, alpha, beta), dst.smlA(x, alpha, beta), tol = 1e-13)) }; cat("\n") } curve( dstable (x, alpha=1e-10, beta=.8, log=TRUE) , -10, 10) curve( dst.smlA(x, alpha=1e-10, beta=.8, log=TRUE), add=TRUE, col=2) ## "same" picture (self-similar !) curve( dstable (x, alpha=1e-10, beta=.8, log=TRUE), -1, 1) curve( dst.smlA(x, alpha=1e-10, beta=.8, log=TRUE), add=TRUE, col=2) ## Testing stableMode here: ### beta = 1 (extremely skewed) and small alpha: --------- ## -------- ## Problem at *left* ("less problematic") tail, namely very close to where the ## support of the density becomes mathematically exactly zero : ## ## clear inaccuracy / bug --- maybe *seems* curable ## curve(dstable(exp(x), alpha= 0.1, beta=1, pm=1, log=TRUE), -40, 10) ## ------ ## --> warnings both from uniroot ("-Inf") and .integrate2() ## about equivalent to curve(dstable(x, alpha= 0.1, beta=1, pm=1, log=TRUE), 1e-15, 4e4, log="x", xaxt="n"); eaxis(1) ## If we decrease zeta.tol "to 0", we get better here: curve(dstable(exp(x), alpha= 0.1, beta=1, pm=1, log=TRUE, zeta.tol=1e-100), -40, 20) ## or here, ... 
but still not good enough curve(dstable(exp(x), alpha= 0.1, beta=1, pm=1, log=TRUE, zeta.tol=1e-200), -45, 30) showProc.time() ##------ NB: Pareto tail behavior -- see more in ./tails.R ## ======= ## alpha ~= 1 ---- and x ~ zeta(a,b) ----------- ## ========== f1 <- dstable(6366.197, alpha= 1.00001, beta= .1) f2 <- dstable(-50929.58, alpha= 1.00001, beta= -.8) stopifnot(f1 > 0, f2 > 0) ## these all work (luck): zet <- zeta(alpha= 1.00001, beta= -.8)# -50929.58 ## here, we must have larger zeta.tol: = 5e-5 is fine; now using "automatic" default r4 <- curve(dstable(zet+x, alpha= 1.00001, beta= -.8), -3, 7, xlab = expression(zeta(alpha,beta) - x), ylim=c(2.1, 2.3)*1e-10, n = nc) cc <- "pink3" abline(v=0, col=cc) z.txt <- bquote(paste(x == zeta(.), phantom() == .(signif(zet,6)))) mtext(z.txt, at=0, line = -1, adj = -.1, padj=.2, col=cc) ## no longer much noise (thanks to zeta.tol = 1e-5): curve(dPareto(zet+x, alpha= 1.00001, beta= -.8), add=TRUE, col=2) stopifnot({ rr <- range(r4$y) 2.1e-10 < rr & rr < 2.3e-10 }) showProc.time() ### ---- alpha == 1 --------- curve(dstable(x, alpha = 1, beta = 0.3), -20, 20, log="y", n=nc) curve(dstable(x, alpha = 1, beta = 0.3, log=TRUE), -200, 160, n=nc) curve(dPareto(x, alpha = 1, beta = 0.3, log=TRUE), add=TRUE, col=4) ## "works", but discontinuous --- FIXME ## ditto: curve(dstable(x, alpha=1, beta= 0.1, log=TRUE), -70,80, col=2) curve(dPareto(x, alpha=1, beta= 0.1, log=TRUE), add=TRUE) showProc.time() dstable(-44, alpha=1, beta= .1)# failed ## large x gave problems at times: dstable(-1e20, alpha = 0.9, beta = 0.8) chkUnimodal <- function(x) { ## x = c(x1, x2) and x1 is *increasing* and x2 is *decreasing* stopifnot((n <- length(x)) %% 2 == 0, (n2 <- n %/% 2) >= 2) if(is.unsorted(x[seq_len(n2)])) stop("first part is *not* increasing") if(is.unsorted(x[n:(n2+1)])) stop("seconds part is *not* decreasing") invisible(x) } showProc.time() xLrg <- c(10^c(10:100,120, 150, 200, 300), Inf) xLrg <- sort(c(-xLrg, xLrg)) d <- dstable(xLrg, alpha = 1.8, beta = 0.3 ); chkUnimodal(d) d <- dstable(xLrg, alpha = 1.01, beta = 0.3 ); chkUnimodal(d) # (was slow!) ## look at the problem: dstCurve <- function(alpha, beta, log=TRUE, NEG=FALSE, from, to, n=nc, cLog=NULL, ...) { if(NEG) { r <- curve(dstable(-x, alpha=alpha, beta=beta, log=log), from=from, to=to, n=n, log = cLog, ...) curve(dPareto(-x, alpha=alpha, beta=beta, log=log), add=TRUE, col=2, lwd=2, lty=2) } else { r <- curve(dstable(x, alpha=alpha, beta=beta, log=log), from=from, to=to, n=n, log = cLog, ...) curve(dPareto(x, alpha=alpha, beta=beta, log=log), add=TRUE, col=2, lwd=2, lty=2) } leg.ab <- paste0("(", if(NEG) "-x" else "x", ", a=",formatC(alpha, digits=3), ", b=",formatC(beta, digits=3),")") legend("topright", paste0(c("dstable ", "dPareto"), leg.ab), col=1:2, lty=1:2, lwd=1:2, bty="n") invisible(r) } ## (was *S.L.O.W.* on [2010-03-28] !) r <- dstCurve(alpha = 1.01, beta = 0.3, NEG=TRUE, from=1e10, to=1e20, cLog="x", ylim = c(-100, -45)) ## zoom in: r <- dstCurve(alpha = 1.01, beta = 0.3, , , .1e13, 9e13, ylim = c(-80, -55)) showProc.time() d <- dstable(xLrg, alpha = 1.001, beta = -0.9) # >= 50 warnings try( chkUnimodal(d) ) # FIXME ## look at the problem: dstCurve(alpha = 1.001, beta = -0.9, log=FALSE, NEG=TRUE, 1e10, 1e20, cLog="xy") ## and at the right tail, too: dstCurve(alpha = 1.001, beta = -0.9, log=FALSE, NEG=FALSE, 1000, 1e17, cLog="xy") d <- dstable(xLrg, alpha = 1. 
, beta = 0.3 ); chkUnimodal(d) # "ok" now d <- dstable(xLrg, alpha = 0.9, beta = 0.3 ) # 10 warnings (had 11) try( chkUnimodal(d) ) # FIXME d <- dstable(xLrg, alpha = 0.5, beta = 0.3 ) # 19 warnings (had 22) chkUnimodal(d) d <- dstable(xLrg, alpha = 0.1, beta = 0.3 ) # 26 warnings (had 21) chkUnimodal(d) showProc.time() ##------------- beta = 1 --------------------- options(dstable.debug = TRUE) dstable(1, alpha=1.2, beta= 1 - 1e-7)#ok dstable(1, alpha=1.2, beta= 1)# gave error, because g(pi/2) < 0 dstable(0, alpha=13/16, beta= 1 -2^-52)# had error as g(-theta0) |-> NaN dstable(0, alpha=19/16, beta= 1) # had error as g(pi/2) |-> NaN options(dstable.debug = FALSE) ## NB: (beta=1, alpha = 1/2) is 'Levy' ---> dLevy() and some checks ## -- in ./pstab-ex.R ## ~~~~~~~~~~ if(doExtras) { ## actually "very-Extras" (checkLevel == "FULL") ## This needs 65 seconds (nb-mm3: 54*32*11 dstable() values) ep <- 2^-(1:54)## beta := 1 - ep ---> 1 {NB: 1 - 2^-54 == 1 numerically} alph.s <- (1:32)/16 # in (0, 2] f.b1 <- sapply(alph.s, function(alf) sapply(1 - ep, function(bet) dstable(0:10, alpha = alf, beta = bet)), simplify = if(getRversion() >= "2.13") "array" else TRUE) print(summary(f.b1)) r.b1 <- range(f.b1) stopifnot(0 < r.b1[1], r.b1[2] < 0.35) ## "FIXME" test more: monotonicity in x {mode is < 0 = min{x_i}}, beta, alpha, ... showProc.time() } else message("expensive dstable() checks have been skipped") cat('Time elapsed: ', proc.time(),'\n') # "stats" stabledist/tests/tails.R0000644000175100001440000001664512026640212015036 0ustar hornikusersrequire("stabledist") ###--- Tail approximations etc -- both for pstable() and dstable() dPareto <- stabledist:::dPareto source(system.file("test-tools-1.R", package = "Matrix"), keep.source=interactive()) #-> identical3(), showProc.time(),... (doExtras <- stabledist:::doExtras()) nc <- if(doExtras) 512 else 64 # number of points for curve() pdf("stable-tails.pdf") pstab.tailratio <- function(alpha, beta, n = nc, prob = 1/4096, xmin = qstable(0.95, alpha,beta, tol = 0.01), xmax = qstable(1 - prob, alpha,beta)) { ## Purpose: Explore eps(x) where (1 - pstable(x))/(1- pPareto()) = 1 + eps(x) ## <==> ## ---------------------------------------------------------------------- ## Arguments: ## ---------------------------------------------------------------------- ## Author: Martin Maechler, Date: 21 Mar 2011, 10:09 cl <- match.call() stopifnot(0 < prob, prob < .5, 0 < xmin, xmin < xmax) x <- exp(seq(log(xmin), log(xmax), length.out = n)) iF <- pstable(x, alpha,beta, lower.tail=FALSE) ok <- iF > 0 iF <- iF[ok] x <- x[ok] iFp <- stabledist:::pPareto(x, alpha,beta, lower.tail=FALSE) eps <- (iF - iFp)/iFp structure(list(x=x, eps=eps, call = cl, alpha=alpha, beta=beta), class = "pstableTailratio") } plot.pstableTailratio <- function(x, type="l", col="blue3", lin.col = adjustcolor("red2",.6), ...) { ## Purpose: ## ---------------------------------------------------------------------- ## Arguments: ## ---------------------------------------------------------------------- stopifnot(is.list(x), is.numeric(x$eps)) tit <- substitute("tail ratio approx. for"~~ pstable(alpha==A, beta==B), list(A=x[["alpha"]], B=x[["beta"]])) dat <- as.data.frame(x[c("x","eps")]) ## NOTA BENE: Empirically, I see that eps > 0 <==> alpha > 1 ## eps < 0 <==> alpha < 1 dat <- dat[dat[,"eps"] > 0, ] ## drop invalid eps[] plot(eps ~ x, log = "xy", data = dat, col=col, ylab = expression(epsilon ~~~ "('eps')"), main = tit, type=type, ...) mtext( expression(epsilon(x) == (bar(F)(x,.) 
- bar(F)[P](x,.)) / bar(F)[P](x,.)) ) fm <- lm(log(eps) ~ log(x), weights = x^2, data = dat) lines(dat[["x"]], exp(predict(fm)), col=lin.col) Form <- function(x) formatC(x, digits=4, wid=1) leg.line <- substitute(log(epsilon) == A + B * log(x), list(A = Form(coef(fm)[["(Intercept)"]]), B = Form(coef(fm)[["log(x)"]]))) legend("topright", legend=leg.line, bty = "n", lty=1, col=lin.col) } plot(tr0 <- pstab.tailratio(1, 0.5)) plot(tr1 <- pstab.tailratio(1.1, 0.25)) plot(tr2 <- pstab.tailratio(0.99, +0.992)) showProc.time() plot(tr <- pstab.tailratio(1.2, 0.5)) plot(tr3 <- pstab.tailratio(0.7, +0.9)) plot(tr4 <- pstab.tailratio(1.7, +0.6))# not really useful: pstable(.) = 1 too early showProc.time() ##---------------- Now the density ##' @title Explore eps(x) where dstable(x)/dPareto(x) = 1 + eps(x) ##' @param alpha ##' @param beta ##' @param n ##' @param prob ##' @param xmin ##' @param xmax ##' @return an object of \code{\link{class} "dstableTailratio"}, ... ##' @author Martin Maechler, 21 Mar 2011 dstab.tailratio <- function(alpha, beta, n = nc, prob = 1/4096, xmin = qstable(0.95, alpha,beta, tol = 0.01), xmax = qstable(1 - prob, alpha,beta)) { cl <- match.call() stopifnot(0 < prob, prob < .5, 0 < xmin, xmin < xmax) x <- exp(seq(log(xmin), log(xmax), length.out = n)) f <- dstable(x, alpha,beta) ok <- f > 0 f <- f[ok] x <- x[ok] fp <- stabledist:::dPareto(x, alpha,beta) eps <- (f - fp)/fp structure(list(x=x, eps=eps, call = cl, alpha=alpha, beta=beta), class = "dstableTailratio") } ##' @title plot() method for dstableTailratio() results ##' @param x object of \code{\link{class} "dstableTailratio"}. ##' @param type plot type; default simple lines ##' @param col ##' @param lin.col ##' @param ... optional further arguments passed to \code{\link{plot.formula}()}. ##' @return ##' @author Martin Maechler plot.dstableTailratio <- function(x, type="l", col="blue3", lin.col = adjustcolor("red2",.6), ...) { ## Purpose: ## ---------------------------------------------------------------------- ## Arguments: ## ---------------------------------------------------------------------- stopifnot(is.list(x), is.numeric(x$eps)) tit <- substitute("tail ratio approx. for"~~ dstable(alpha==A, beta==B), list(A=x[["alpha"]], B=x[["beta"]])) dat <- as.data.frame(x[c("x","eps")]) ## NOTA BENE: Empirically, I see that eps > 0 <==> alpha > 1 ## eps < 0 <==> alpha < 1 dat <- dat[dat[,"eps"] > 0, ] ## drop invalid eps[] plot(eps ~ x, log = "xy", data = dat, col=col, ylab = expression(epsilon ~~~ "('eps')"), main = tit, type=type, ...) mtext( expression(epsilon(x) == (f(x,.) 
- f[P](x,.)) / f[P](x,.)) ) fm <- lm(log(eps) ~ log(x), weights = x^2, data = dat) lines(dat[["x"]], exp(predict(fm)), col=lin.col) Form <- function(x) formatC(x, digits=4, wid=1) leg.line <- substitute(log(epsilon) == A + B * log(x), list(A = Form(coef(fm)[["(Intercept)"]]), B = Form(coef(fm)[["log(x)"]]))) legend("topright", legend=leg.line, bty = "n", lty=1, col=lin.col) } plot(fr <- dstab.tailratio(1.01, 0.8)) plot(fr <- dstab.tailratio(1.05, 0.4)) plot(fr <- dstab.tailratio(1.1, 0.4)) plot(fr <- dstab.tailratio(1.2, 0.5)) plot(fr <- dstab.tailratio(1.3, 0.6)) showProc.time() plot(fr <- dstab.tailratio(1.4, 0.7)) plot(fr <- dstab.tailratio(1.5, 0.8)) plot(fr <- dstab.tailratio(1.5, 0.8, xmax= 1000)) plot(fr <- dstab.tailratio(1.5, 0.8, xmax= 1e4));abline(v=1000, lty=2) plot(fr <- dstab.tailratio(1.5, 0.8, xmax= 1e5));abline(v=1e4, lty=2) showProc.time() plot(fr <- dstab.tailratio(1.6, 0.9)) plot(fr <- dstab.tailratio(1.7, 0.1)) plot(fr <- dstab.tailratio(1.8, 0.2)) showProc.time() ##------ Some explicit tail problems visualized: I <- integrate(dstable, 0,Inf, alpha=0.998, beta=0, subdivisions=1000) str(I) stopifnot(abs(I$value - 0.5) < 1e-4) curve(dstable(x, alpha=.998, beta=0, log=TRUE), 10, 1e17, log="x") curve(dstable(x, alpha=.999, beta=0.1, log=TRUE), 10, 1e17, log="x") curve(dstable(x, alpha=.999, beta=0.9, log=TRUE), 10, 1e17, log="x") curve(dstable(x, alpha=.999, beta=0.99, log=TRUE), 10, 1e17, log="x") curve(dstable(x, alpha=.999, beta=0.99, log=TRUE), 10, 1e170, log="x") showProc.time() ## less problems when alpha > ~= 1 (but it's __S..L..O..W__ !) curve(dstable(x, alpha=1.001,beta=0.99, log=TRUE), 10, 1e7, log="x") curve(dstable(x, alpha=1.001,beta=0.99, log=TRUE), 10, 1e17, log="x") ## -> problem --> zoom in: curve(dstable(x, alpha=1.001,beta=0.99, log=TRUE), 1e12, 160e12) curve(dPareto(x, alpha=1.001,beta=0.99, log=TRUE), add=TRUE, lty=3, col=4) curve(dstable(x, alpha=1.001,beta=0.99, log=TRUE), 10, 1e40, log="x") showProc.time() ## NB: alpha == 1 also has problems in tail --- only as long as "old R"s wrong uniroot is used: curve(dstable(x, alpha=1. ,beta=0.99, log=TRUE), 1, 20) curve(dstable(x, alpha=1. ,beta=0.99, log=TRUE), 1,100) showProc.time() stabledist/tests/doRUnit.R0000644000175100001440000000222211536247376015313 0ustar hornikusers#### doRUnit.R --- Run RUnit tests ####------------------------------------------------------------------------ ### Origianlly follows Gregor Gojanc's example in CRAN package 'gdata' ### and the corresponding section in the R Wiki: ### http://wiki.r-project.org/rwiki/doku.php?id=developers:runit ### MM: Vastly changed: This should also be "runnable" for *installed* ## package which has no ./tests/ ## ----> put the bulk of the code e.g. 
in ../inst/unitTests/runTests.R : if(require("RUnit", quietly=TRUE)) { ## --- Setup --- wd <- getwd() pkg <- sub("\\.Rcheck$", '', basename(dirname(wd))) library(package=pkg, character.only=TRUE) path <- system.file("unitTests", package = pkg) stopifnot(file.exists(path), file.info(path.expand(path))$isdir) ## MM: Setting RNG states *outside* testing functions, ## so we can run them, testing new situations each time oRK <- RNGkind(kind = "Marsaglia-Multicarry", normal.kind = "Inversion") set.seed(4711) source(file.path(path, "runTests.R"), echo = TRUE) ## reset RNG: RNGkind(oRK[1]) } warnings()## as we see them cat('Time elapsed: ', proc.time(),'\n') # "stats" stabledist/tests/pstab-ex.R0000644000175100001440000001755112521632275015454 0ustar hornikusersrequire("stabledist") pPareto <- stabledist:::pPareto source(system.file("test-tools-1.R", package = "Matrix"), keep.source=interactive()) #-> identical3(), showProc.time(),... (doExtras <- stabledist:::doExtras()) options(pstable.debug = FALSE) options(pstable.debug = TRUE)# want to see when uniroot() is called stopifnot(all.equal(pstable(0.3, 0.75, -.5, tol= 1e-14), 0.6688726496, tol = 1e-8)) ## was 0.66887227658457, tol = 1e-10)) pstable(-4.5, alpha = 1, beta = 0.01)## gave integration error (now uniroot..) ## a "outer vectorized" version: pstabALL <- function(x, alpha, beta, ...) sapply(alpha, function(alph) sapply(beta, function(bet) pstable(x, alph, bet, ...))) alph.s <- (1:32)/16 # in (0, 2] beta.s <- (-16:16)/16 # in [-1, 1] stopifnot(pstabALL( Inf, alph.s, beta.s) == 1, pstabALL(-Inf, alph.s, beta.s, log.p=TRUE) == -Inf, pstabALL( 0, alph.s, beta = 0) == 0.5, TRUE) pdf("pstab-ex.pdf") ##---- log-scale ------------- r <- curve(pstable(x, alpha=1.8, beta=.9, lower.tail=FALSE, log.p=TRUE), 5, 150, n=500, log="x",type="b", cex=.5) curve(pPareto(x, alpha=1.8, beta=.9, lower.tail=FALSE, log.p=TRUE), add=TRUE, col=2) ##--> clearly potential for improvement! ## the less extreme part - of that: r <- curve(pstable(x, alpha=1.8, beta=.9, lower.tail=FALSE, log.p=TRUE), 1, 50, n=500, log="x") curve(pPareto(x, alpha=1.8, beta=.9, lower.tail=FALSE, log.p=TRUE), add=TRUE, col=2) ## Check that pstable() is the integral of dstable() --- using simple Simpson's rule ## in it's composite form: ## \int_a^b f(x) dx\approx \frac{h}{3} ## \bigg[ f(x_0) + 2 \sum_{j=1}^{n/2-1}f(x_{2j}) + ## + 4 \sum_{j=1}^{n/2} f(x_{2j-1}) + ## + f(x_n) \bigg], intSimps <- function(fx, h) { stopifnot((n <- length(fx)) %% 2 == 0, n >= 4, length(h) == 1, h > 0) n2 <- n %/% 2 j2 <- 2L * seq_len(n2-1) j4 <- 2L * seq_len(n2) - 1L h/3 * sum(fx[1], 2* fx[j2], 4* fx[j4], fx[n]) } chk.pd.stable <- function(alpha, beta, xmin=NA, xmax=NA, n = 256, do.plot=TRUE, comp.tol = 1e-13, eq.tol = 1e-3) { stopifnot(n >= 20) if(is.na(xmin)) xmin <- qstable(0.01, alpha, beta) if(is.na(xmax)) xmax <- qstable(0.99, alpha, beta) dx <- ceiling(1024*grDevices::extendrange(r = c(xmin, xmax), f = 0.01))/1024 h <- diff(dx)/n x <- seq(dx[1], dx[2], by = h) fx <- dstable(x, alpha=alpha, beta=beta, tol= comp.tol) Fx <- pstable(x, alpha=alpha, beta=beta, tol=2*comp.tol) i.ev <- (i <- seq_along(x))[i %% 2 == 0 & i >= max(n/10, 16)] ## integrate from x[1] up to x[i] (where i is even); ## the exact value will be F(x[i]) - F(x[1]) == Fx[i] - Fx[1] Fx. 
<- vapply(lapply(i.ev, seq_len), function(ii) intSimps(fx[ii], h), 0) a.eq <- all.equal(Fx., Fx[i.ev] - Fx[1], tol = eq.tol) if(do.plot) { ## Show the fit plot(x, Fx - Fx[1], type = "l") lines(x[i.ev], Fx., col=adjustcolor("red", 0.5), lwd=3) op <- par(ask=TRUE) ; on.exit(par(op)) ## show the "residual", i.e., the relative error plot(x[i.ev], 1- Fx./(Fx[i.ev] - Fx[1]), type = "l", xlim = range(x)) abline(h=0, lty=3, lwd = .6) } if(!isTRUE(a.eq)) stop(a.eq) invisible(list(x=x, f=fx, F=Fx, i. = i.ev, F.appr. = Fx.)) } op <- par(mfrow=2:1, mar = .1+c(3,3,1,1), mgp=c(1.5, 0.6,0)) c1 <- chk.pd.stable(.75, -.5, -1, 1.5, eq.tol = .006) c2 <- chk.pd.stable(.95, +0.6, -1, 1.5, eq.tol = .006)# with >= 50 warnings ## here are the "values" with(c1, all.equal(F.appr., F[i.] - F[1], tol = 0)) # (.0041290 on 64-Lnx) with(c2, all.equal(F.appr., F[i.] - F[1], tol = 0)) # (.0049307 on 64-Lnx) showProc.time() # c3 <- chk.pd.stable(.95, +0.9, -3, 15) # >= 50 warnings curve(dstable(x, .999, -0.9), -20, 5, log="y") curve(pstable(x, .999, -0.9), -20, 5, log="y")#-> using uniroot c4 <- chk.pd.stable(.999, -0.9, -20, 5) showProc.time() # ## alpha == 1 , small beta ---- now perfect curve(pstable(x, alpha=1, beta= .01), -6, 8, ylim=0:1) abline(h=0:1, v=0, lty=3, col="gray30") n <- length(x <- seq(-6,8, by = 1/16)) px <- pstable(x, alpha=1, beta= .01) ## now take approximation derivative by difference ratio: x. <- (x[-n]+x[-1])/2 plot (x., diff(px)*16, type="l") ## now check convexity/concavity : f2 <- diff(diff(px)) stopifnot(f2[x[-c(1,n)] < 0] > 0, f2[x[-c(1,n)] > 0] < 0) ## and compare with dstable() ... which actually shows dstable() problem: fx. <- dstable(x., alpha=1, beta=.01) lines(x., fx., col = 2, lwd=3, lty="5111") if(dev.interactive(orNone=TRUE)) { curve(dstable(x, 1., 0.99), -6, 50, log="y")# "uneven" (x < 0); 50 warnings curve(dstable(x, 1.001, 0.95), -10, 30, log="y")# much better } showProc.time() # if(doExtras) { c5 <- chk.pd.stable(1., 0.99, -6, 50)# -> uniroot c6 <- chk.pd.stable(1.001, 0.95, -10, 30)# -> uniroot; 2nd plot *clearly* shows problem with(c5, all.equal(F.appr., F[i.] - F[1], tol = 0)) # .00058 on 64-Lnx with(c6, all.equal(F.appr., F[i.] - F[1], tol = 0)) # 2.611e-5 on 64-Lnx ## right tail: try(## FIXME: c1.0 <- chk.pd.stable(1., 0.8, -6, 500)# uniroot; rel.difference = .030 ) ## show it more clearly curve(pstable(x, alpha=1, beta=0.5), 20, 800, log="x", ylim=c(.97, 1)) curve(pPareto(x, alpha=1, beta=0.5), add=TRUE, col=2, lty=2) abline(h=1, lty=3,col="gray") # and similarly (alpha ~= 1, instead of == 1): curve(pstable(x, alpha=1.001, beta=0.5), 20, 800, log="x", ylim=c(.97, 1)) curve(pPareto(x, alpha=1.001, beta=0.5), add=TRUE, col=2, lty=2) abline(h=1, lty=3,col="gray") ## zoom curve(pstable(x, alpha=1.001, beta=0.5), 100, 200, log="x") curve(pPareto(x, alpha=1.001, beta=0.5), add=TRUE, col=2, lty=2) ## but alpha = 1 is only somewhat better as approximation: curve(pstable(x, alpha=1 , beta=0.5), add=TRUE, col=3, lwd=3, lty="5131") showProc.time() # } c7 <- chk.pd.stable(1.2, -0.2, -40, 30) c8 <- chk.pd.stable(1.5, -0.999, -40, 30)# two warnings showProc.time() # ### Newly found -- Marius Hofert, 18.Sept. 
(via qstable): stopifnot(all.equal(qstable(0.6, alpha = 0.5, beta = 1, tol=1e-15, integ.tol=1e-15), 2.636426573120147)) ##--> which can be traced to the first of stopifnot(pstable(q= -1.1, alpha=0.5, beta=1) == 0, pstable(q= -2.1, alpha=0.6, beta=1) == 0) ## Found by Tobias Hudec, 2 May 2015: stopifnot( all.equal(1.5, qstable(p=0.5, alpha=1.5, beta=0, gamma=2, delta = 1.5)), all.equal(1.5, qstable(p=0.5, alpha=0.6, beta=0, gamma=0.2, delta = 1.5)) ) ## Stable(alpha = 1/2, beta = 1, gamma, delta, pm = 1) <===> Levy(delta, gamma) source(system.file("xtraR", "Levy.R", package = "stabledist"), keep.source=interactive()) ##-> dLevy(x, mu, c, log) and ##-> pLevy(x, mu, c, log.p, lower.tail) set.seed(101) show.Acc <- (interactive() && require("Rmpfr")) if(show.Acc) { ## want to see accuracies, do not stop "quickly" format.relErr <- function(tt, cc) format(as.numeric(relErr(tt, cc)), digits = 4, width = 8) } ## FIXME: Look why pstable() is so much less accurate than dstable() ## even though the integration in dstable() is more delicate in general pTOL <- 1e-6 # typically see relErr of 5e-7 dTOL <- 1e-14 # typically see relErr of 1.3...3.9 e-15 showProc.time() ## Note that dstable() is more costly than pstable() for(ii in 1:(if(doExtras) 32 else 8)) { Z <- rnorm(2) mu <- sign(Z[1])*exp(Z[1]) sc <- exp(Z[2]) x <- seq(mu, mu+ sc* 100*rchisq(1, df=3), length.out= if(doExtras) 512 else 32) ## dLevy() and pLevy() using only pnorm() are "mpfr-aware": x. <- if(show.Acc) mpfr(x, 256) else x pL <- pLevy(x., mu, sc) pS <- pstable(x, alpha=1/2, beta=1, gamma=sc, delta=mu, pm = 1) dL <- dLevy(x., mu, sc) dS <- dstable(x, alpha=1/2, beta=1, gamma=sc, delta=mu, pm = 1) if(show.Acc) { cat("p: ", format.relErr(pL, pS), "\t") cat("d: ", format.relErr(dL, dS), "\n") } else { cat(ii %% 10) } stopifnot(all.equal(pL, pS, tol = pTOL), all.equal(dL, dS, tol = dTOL)) }; cat("\n") showProc.time()## ~ 75 sec (doExtras=TRUE) on lynne (2012-09) stabledist/NAMESPACE0000644000175100001440000000044711761160453013656 0ustar hornikusersimportFrom(stats, dnorm, pnorm, qnorm, rnorm, dcauchy, pcauchy, qcauchy, rcauchy, runif, integrate, uniroot, optimize) export( "dstable", "pstable", "qstable", "rstable", "stableMode" ) ## and later maybe also the "fitter" functions? stabledist/R/0000755000175100001440000000000012521673675012645 5ustar hornikusersstabledist/R/utils.R0000644000175100001440000000264112521632275014122 0ustar hornikusers## This is from 'fBasics', but so small we will not import ## nor export, but just use it ... ## Now needs R >= 3.1.0 with its new argument(s) 'extendInt' etc .unirootNA <- function(f, interval, ..., lower = min(interval), upper = max(interval), f.lower = f(lower, ...), f.upper = f(upper, ...), extendInt = c("no", "yes", "downX", "upX"), check.conv = FALSE, tol = .Machine$double.eps^0.25, maxiter = 1000, trace = 0) { # Arguments: # see 'uniroot' # Value: # Returns the x value of f where the root is located. If # no root exists, NA will be returned instead. In that case, # the function doesn't terminate with an error as # the standard function uniroot(). 
# Example: # .unirootNA(sin, c(1, 2)); .unirootNA(sin, c(-1, 1)) # If there is no Root: if(is.na(f.lower) || is.na(f.upper) || f.lower * f.upper > 0) return(NA) ## else there is one : uniroot(f, interval = interval, ..., lower=lower, upper=upper, f.lower=f.lower, f.upper=f.upper, extendInt=extendInt, check.conv=check.conv, tol=tol, maxiter=maxiter, trace=trace)$root } ## Not exported, and only used because CRAN checks must be faster doExtras <- function() { interactive() || nzchar(Sys.getenv("R_STABLEDIST_CHECK_EXTRA")) || identical("true", unname(Sys.getenv("R_PKG_CHECKING_doExtras"))) } stabledist/R/dist-stableMode.R0000644000175100001440000000600511540713055015774 0ustar hornikusers## Part of R package 'stabledist' (part of the Rmetrics project). ## The stabledist R package is free software; you can redistribute it and/or ## modify it under the terms of the GNU Library General Public ## License as published by the Free Software Foundation; either ## version 2 of the License, or (at your option) any later version. ## ## This R package is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU Library General Public License for more details. ## ## A copy of the GNU General Public License is available at ## http://www.r-project.org/Licenses/ ################################################################################ # FUNCTIONS: DESCRIPTION: # stableMode Computes the mode of the stable DF ################################################################################ ##' Computes the mode of the alpha stable distribution ##' @title Mode of the stable distribution ##' @param alpha ##' @param beta ##' @param beta.max for numerical purposes, values of beta too close to 1, ##' are set to beta.max ##' @param tol numerical tolerance used in optimize() ##' @return a number, the stable mode ##' @author Diethelm Wuertz and Martin Maechler stableMode <- function(alpha, beta, beta.max = 1 - 1e-11, tol = .Machine$double.eps^0.25) { stopifnot(0 < alpha, alpha <= 2, length(alpha) == 1, -1 <= beta, beta <= 1, length(beta) == 1, length(beta.max) == 1) # Notes: # # Test for values close to beta = 1 # alpha <- seq(0, 2, by = 0.1) # ans <- matrix(NA, nrow=length(alpha), ncol = 4) # for (i in 1:seq_along(alpha)) { # ans[i,] <- c( # stableMode(alpha = alpha[i], beta = 0.99 ), # stableMode(alpha = alpha[i], beta = 0.99999 ), # stableMode(alpha = alpha[i], beta = 0.99999999 ), # stableMode(alpha = alpha[i], beta = 0.99999999999)) # } # cbind(alpha, ans), # # alpha 0.99 0.99999 0.99999999 0.99999999999 # 0.0 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 # 0.2 -3.214142e-01 -3.246759e-01 -3.246787e-01 -3.246788e-01 # 0.4 -6.105318e-01 -6.158562e-01 -6.158616e-01 -6.158616e-01 # 0.6 -6.550106e-01 -6.594746e-01 -6.594790e-01 -6.594790e-01 # 0.8 -5.558811e-01 -5.590032e-01 -5.590063e-01 -5.590063e-01 # 1.0 -4.271033e-01 -4.293078e-01 -4.293099e-01 -4.293099e-01 # 1.2 -3.074015e-01 -3.090820e-01 -3.090804e-01 -3.090804e-01 # 1.4 -2.050956e-01 -2.063979e-01 -2.063951e-01 -2.063951e-01 # 1.6 -1.199623e-01 -1.208875e-01 -1.208853e-01 -1.208853e-01 # 1.8 -5.098617e-02 -5.145758e-02 -5.145639e-02 -5.145639e-02 # 2.0 -7.487432e-05 -7.487432e-05 -7.487432e-05 -7.487432e-05 if(alpha * beta == 0) return(0) ## else if(beta > beta.max) beta <- beta.max optimize(dstable, interval = c(-0.7, 0)*sign(beta), alpha = alpha, beta = beta, pm = 0, maximum = TRUE, tol = tol)$maximum } 
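## A small usage sketch (kept as comments, so nothing is executed when the package
## is built); values from the beta -> 1 table above and from ../tests/dstab-ex.R :
##   stableMode(1.5, 0)                   #  0 exactly (alpha*beta == 0 shortcut)
##   stableMode(0.75, -0.5, tol = 1e-14)  # ~  0.3581030
##   stableMode(1.0, 0.99)                # ~ -0.4271   (alpha = 1 row, beta = 0.99 column)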
stabledist/R/dpqr-stable.R0000644000175100001440000010102712521632275015176 0ustar hornikusers# Part of R package 'stabledist' (part of the Rmetrics project). ## The stabledist R package is free software; you can redistribute it and/or ## modify it under the terms of the GNU Library General Public ## License as published by the Free Software Foundation; either ## version 2 of the License, or (at your option) any later version. ## ## This R package is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU Library General Public License for more details. ## ## A copy of the GNU General Public License is available at ## http://www.r-project.org/Licenses/ ################################################################################ ## FUNCTIONS: DESCRIPTION: ## dstable Returns density for stable DF ## pstable Returns probabilities for stable DF ## qstable Returns quantiles for stable DF ## rstable Returns random variates for stable DF ## UTILITY FUNCTION DESCRIPTION: ## .integrate2 Integrates internal functions for *stable ################################################################################ ##============================================================================== ### MM TODO: ## 0) All d, p, q, q -- have identical parts ## a) 'parametrization' (pm) check ## b) checking, alpha, beta, ## c) subdivisions etc {all but rstable} ## --- to do: "Fix" these in dstable(), then copy/paste to others ##============================================================================== pi2 <- pi/2 # - we use it so often ##' @title omega() according to Lambert & Lindsey (1999), p.412 ##' @param gamma [dpqr]stable()'s scale parameter, > 0 -- of length 1 ##' @param alpha [dpqr]stable()'s "main" parameter, in [0, 2] -- of length 1 ##' @return omega(.) = tan(pi/2 alpha) if alpha != 1 ... .om <- function(gamma,alpha) { if(alpha != round(alpha)) # non-integer usual case tan(pi2*alpha)# not tanpi2() ! else if(alpha == 1) (2/pi)*log(gamma) else 0 # for alpha = 0 or = 2 } ##' @title C_alpha - the tail constant ##' @param alpha numeric vector of stable tail parameters, in [0,2] ##' @return ##' @author Martin Maechler C.stable.tail <- function(alpha, log = FALSE) { stopifnot(0 <= alpha, alpha <= 2) r <- alpha i0 <- alpha == 0 r[i0] <- if(log) -log(2) else 0.5 al <- alpha[!i0] r[!i0] <- if(log) lgamma(al)-log(pi)+ log(sin(al*pi2)) else gamma(al)/pi * sin(al*pi2) if(any(a2 <- alpha == 2)) r[a2] <- if(log) -Inf else 0 r } ##' @title tan(pi/2*x), for x in [-1,1] with correct limits ##' i.e. tanpi2(-/+ 1) == -/+ Inf ##' @param x numeric vector ##' @return numeric vector of values tan(pi/2*x) ##' @author Martin Maechler tanpi2 <- function(x) { r <- x if(any(i <- x & x == round(x)))# excluding 0 r[i] <- (2 - (x[i] %% 4))*Inf io <- which(!i) r[io] <- tan(pi2* x[io]) r } ##' @title cos(pi/2*x), for x in [-1,1] with correct limits ##' i.e. cospi2(+- 1) == 0 ##' @param x numeric vector ##' @return numeric vector of values cos(pi/2*x) ##' @author Martin Maechler cospi2 <- function(x) { r <- x if(any(i <- x == round(x))) r[i] <- as.numeric(x[i] == 0)# 1 or 0 - iff x \in [-1,1] ! io <- which(!i) r[io] <- cos(pi2* x[io]) r } ##' According to Nolan's "tail.pdf" paper, where he takes *derivatives* ##' of the tail approximation 1-F(x) ~ (1+b) C_a x^{-a} to prove ##' that f(x) ~ a(1+b) C_a x^{-(1+a)} ... 
##' ##' @title tail approximation density for dstable() ##' @param x ##' @param alpha ##' @param beta ##' @param log if true, return log(f(.)) ##' @return ##' @author Martin Maechler dPareto <- function(x, alpha, beta, log = FALSE) { if(any(neg <- x < 0)) { ## left tail x [neg] <- -x [neg] beta <- rep(beta, length.out=length(x)) beta[neg] <- -beta[neg] } if(log) log(alpha)+ log1p(beta)+ C.stable.tail(alpha, log=TRUE) -(1+alpha)*log(x) else alpha*(1+beta)* C.stable.tail(alpha)* x^(-(1+alpha)) } pPareto <- function(x, alpha, beta, lower.tail = TRUE, log.p = FALSE) { if(any(neg <- x < 0)) { ## left tail x [neg] <- -x [neg] beta <- rep(beta, length.out=length(x)) beta[neg] <- -beta[neg] stop("FIXME --- pPareto() is not correct for negative x")## switch 1-iF / iF } if(log.p) { if(lower.tail) ## log(1 - iF) log1p(-(1+beta)* C.stable.tail(alpha)* x^(-alpha)) else ## log(iF) log1p(beta)+ C.stable.tail(alpha, log=TRUE) - alpha*log(x) } else { iF <- (1+beta)* C.stable.tail(alpha)* x^(-alpha) if(lower.tail) 1-iF else iF } } dstable <- function(x, alpha, beta, gamma = 1, delta = 0, pm = 0, log = FALSE, tol = 64*.Machine$double.eps, zeta.tol= NULL, subdivisions = 1000) { ## Original implemented by Diethelm Wuertz; ## Changes for efficiency and accuracy by Martin Maechler ## Description: ## Returns density for stable DF ## Details: ## The function uses the approach of J.P. Nolan for general ## stable distributions. Nolan derived expressions in form ## of integrals based on the charcteristic function for ## standardized stable random variables. These integrals ## can be numerically evaluated. ## Arguments: ## alpha = index of stability, in the range (0,2] ## beta = skewness, in the range [-1, 1] ## gamma = scale, in the range (0, infinity) ## delta = location, in the range (-infinity, +infinity) ## param = type of parmeterization ## Note: S+ compatibility no longer considered (explicitly) ## Parameter Check: ## NB: (gamma, delta) can be *vector*s (vectorized along x) stopifnot( 0 < alpha, alpha <= 2, length(alpha) == 1, -1 <= beta, beta <= 1, length(beta) == 1, 0 <= gamma, length(pm) == 1, pm %in% 0:2, tol > 0, subdivisions > 0) ## not an official argument {no doc!}: verbose <- getOption("dstable.debug", default=FALSE) ## Parameterizations: if (pm == 1) { delta <- delta + beta*gamma * .om(gamma,alpha) } else if (pm == 2) { delta <- delta - alpha^(-1/alpha)*gamma*stableMode(alpha, beta) gamma <- alpha^(-1/alpha) * gamma } ## else pm == 0 ## Shift and Scale: x <- (x - delta) / gamma ans <- ## Special Cases: if (alpha == 2) { dnorm(x, mean = 0, sd = sqrt(2), log=log) } else if (alpha == 1 && beta == 0) { dcauchy(x, log=log) } else { ## General Case if (alpha != 1) { ## 0 < alpha < 2 & |beta| <= 1 from above tanpa2 <- tan(pi2*alpha) betan <- beta * tanpa2 zeta <- -betan theta0 <- min(max(-pi2, atan(betan) / alpha), pi2) if(verbose) cat(sprintf( "dstable(., alpha=%g, beta=%g,..): --> theta0=%g, zeta=%g,", alpha, beta, theta0, zeta)) if(is.null(zeta.tol)) { zeta.tol <- if(betan == 0) .4e-15 else if(1-abs(beta) < .01 || alpha < .01) 2e-15 else 5e-5 if(verbose) cat(sprintf(" --> zeta.tol= %g", zeta.tol)) } else stopifnot(is.numeric(zeta.tol), zeta.tol >= 0) if(verbose) cat("\n") ## Loop over all x values ( < , = , or > zeta): vapply(x, .fct1, 0., zeta=zeta, alpha=alpha, beta=beta, theta0=theta0, log=log, verbose=verbose, tol=tol, zeta.tol=zeta.tol, subdivisions=subdivisions) } ## Special Case alpha == 1 and -1 <= beta <= 1 (but not = 0) : else { ## (alpha == 1) and 0 < |beta| <= 1 from above ## Loop over all x 
values: vapply(x, function(z) { if (z >= 0) { .fct2( z , beta, log=log, tol=tol, subdivisions=subdivisions) } else { .fct2(-z, -beta, log=log, tol=tol, subdivisions=subdivisions) } }, 0.) } } i0 <- ans == (if(log)-Inf else 0) # --> we can do better using asymptotic: if(any(i0)) { d <- dPareto(x[i0], alpha, beta, log=log) ## do recycle correctly: if(length(gamma) > 1) gamma <- rep(gamma, length.out=length(x))[i0] ans[i0] <- if(log) d - log(gamma) else d/gamma } if(any(io <- !i0)) { d <- ans[io] if(length(gamma) > 1) gamma <- rep(gamma, length.out=length(x))[io] ans[io] <- if (log) d - log(gamma) else d/gamma } ans }## {dstable} ## ------------------------------------------------------------------------------ .large.exp.arg <- -(.Machine$double.min.exp * log(2)) ## == 708.396... ##' @title x*exp(-x) numerically stably, with correct limit 0 for x --> Inf ##' @param x numeric ##' @return x*exp(x) ##' @author Martin Maechler x.exp.m.x <- function(x) { r <- x*exp(-x) if(any(nax <- is.na(x))) r[nax] <- NA_real_ if(any(lrg <- !nax & x > .large.exp.arg))# e.g. x == Inf r[lrg] <- 0 r } .e.plus <- function(x, eps) x + eps* abs(x) .e.minus<- function(x, eps) x - eps* abs(x) pi2.. <- function(eps) pi2 * (1 - eps) ## == .e.minus(pi/2, eps), slight more efficiently ##' dstable() for very small alpha > 0 ##' ok only for x > zeta := - beta * tan(pi/2 *alpha) dstable.smallA <- function(x, alpha, beta, log=FALSE) { r <- log(alpha) + log1p(beta) - (1 + log(2*x + pi*alpha*beta)) if(log) r else exp(r) } ## 1e-17: seems "good", but not "optimized" at all -- hidden for now .alpha.small.dstable <- 1e-17 .fct1 <- function(x, zeta, alpha, beta, theta0, log, tol, subdivisions, zeta.tol, verbose = getOption("dstable.debug", default=FALSE)) { ## --- dstable(x, alpha, beta, ..) for alpha < 2 --- ## For x = zeta, have special case formula [Nolan(1997)]; ## need to use it also for x ~= zeta, i.e., x.m.zet := |x - zeta| < delta : stopifnot(is.finite(zeta)) x.m.zet <- abs(x - zeta) f.zeta <- function(log) if(log) lgamma(1+1/alpha)+ log(cos(theta0)) - (log(pi)+ log1p(zeta^2)/(2*alpha)) else gamma(1+1/alpha)*cos(theta0) / (pi*(1+zeta^2)^(1/(2*alpha))) ## Modified: originally was if (z == zeta), ## then (D.W.) if (x.m.zet < 2 * .Machine$double.eps) ## then (M.M.) if (x.m.zet <= 1e-5 * abs(x)) if(is.finite(x) && x.m.zet <= zeta.tol * (zeta.tol+ max(abs(x),abs(zeta)))) { if(verbose) cat(sprintf(".fct1(%.11g, %.10g,..): x ~= zeta => using f.zeta()\n", x, zeta)) return(f.zeta(log)) } ## the real check should be about the feasibility of g() below, or its integration smallAlpha <- (alpha < .alpha.small.dstable) if(x < zeta) { theta0 <- -theta0 # see Nolan(1997), Thm.1 (c) if(smallAlpha) { beta <- -beta x <- -x } } if(smallAlpha) { ## here, *MUST* have __ x > zeta __ if(verbose) cat(sprintf(".fct1(%.11g, %.10g,..): small alpha=%g\n", x, zeta, alpha)) return(dstable.smallA(x, alpha, beta, log=log)) } ## constants ( independent of integrand g1(th) = g*exp(-g) ): ## zeta <- -beta * tan(pi*alpha/2) ## theta0 <- (1/alpha) * atan( beta * tan(pi*alpha/2)) ## x.m.zet <- abs(x - zeta) ##-------->>> identically as in .FCT1() for pstable() below: <<<----------- a_1 <- alpha - 1 cat0 <- cos(at0 <- alpha*theta0) ##' g() is strictly monotone -- Nolan(1997) ["3. Numerical Considerations"] ##' alpha >= 1 <==> g() is falling, ie. 
from Inf --> 0; otherwise growing from 0 to +Inf g <- function(th) { r <- th ## g(-pi/2) or g(pi/2) could become NaN --> work around i.bnd <- abs(pi2 -sign(a_1)*th) < 64*.Machine$double.eps r[i.bnd] <- 0 th <- th[io <- !i.bnd] att <- at0 + alpha*th ## = alpha*(theta0 + theta) r[io] <- (cat0 * cos(th) * (x.m.zet/sin(att))^alpha)^(1/a_1) * cos(att-th) r } ## Function to integrate: dstable(..)= f(..) = c2 * \int_{-\theta_0}^{\pi/2} g1(u) du g1 <- function(th) { ## g1 := g(.) exp(-g(.)) x.exp.m.x( g(th) ) } c2 <- ( alpha / (pi*abs(a_1)*x.m.zet) ) ## Now, result = c2 * \int_{-t0}^{pi/2} g1(u) du , we "only" need the integral ## where however, g1(.) may look to be (almost) zero almost everywhere and just have a small peak ## ==> Find the peak, split the integral into two parts of for intervals (t0, t_max) + (t_max, pi/2) ## However, this may still be bad, e.g., for dstable(71.61531, alpha=1.001, beta=0.6), ## or dstable(1.205, 0.75, -0.5) ## the 2nd integral was "completely wrong" (basically zero, instead of ..e-5) ## NB: g() is monotone, see above if((alpha >= 1 && ((!is.na(g. <- g( pi2 )) && g. > .large.exp.arg) || identical(g(-theta0), 0))) || (alpha < 1 && ((!is.na(g. <- g(-theta0)) && g. > .large.exp.arg) || identical(g(pi2), 0)))) { ## g() is numerically too large *or* 0 even where it should be inf ## ===> g() * exp(-g()) is 0 everywhere if(verbose) cat(sprintf(".fct1(%.11g, %.10g,..): g() is 'Inf' (or 0) ==> result 0", x,zeta)) return(if(log)-Inf else 0) } g. <- if(alpha >= 1) g(.e.plus(-theta0, 1e-6)) else g(pi2..(1e-6)) if(is.na(g.))# g() is not usable --- FIXME rather use *asymptotic dPareto()? if(max(x.m.zet, x.m.zet / abs(x)) < .01) return(f.zeta(log)) if(verbose) cat(sprintf(".fct1(%.11g, %.10g,..): c2*sum(r[1:4])= %.11g*", x,zeta, c2)) Int <- function(a,b) .integrate2(g1, lower = a, upper = b, subdivisions=subdivisions, rel.tol= tol, abs.tol= tol) ## We know that the maximum of g1(.) is = exp(-1) = 0.3679 "at" g(.) == 1 ## find that by uniroot : ## g(.) == 1 <==> log(g(.)) == 0 --- the latter is better conditioned, ## e.g., for (x = -1, alpha = 0.95, beta = 0.6) ## the former is better for dstable(-122717558, alpha = 1.8, beta = 0.3, pm = 1) ## However, it can be that the maximum is at the boundary, and ## g(.) > 1 everywhere or g(.) < 1 everywhere {in that case we could revert to optimize..} if((alpha >= 1 && !is.na(g. <- g(pi2)) && g. > 1) || (alpha < 1 && !is.na(g. <- g(pi2)) && g. < 1)) g1.th2 <- g1( theta2 <- pi2..(1e-6) ) else if((alpha < 1 && g(-theta0) > 1) || (alpha >= 1 && g(-theta0) < 1)) g1.th2 <- g1( theta2 <- .e.plus(-theta0, 1e-6) ) else { ## when alpha ~=< 1 (0.998 e.g.), g(x) is == 0 (numerically) on a wide range; ## uniroot is not good enough, and we should *increase* -theta0 ## or decrease pi2 such that it can find the root: l.th <- -theta0 u.th <- pi2 if(alpha < 1) { ## g() is *in*creasing from 0 .. while ((g.t <- g(.th <- (l.th + pi2)/2)) == 0) l.th <- .th if(g.t == 1)# decrease upper limit {needed, e.g. for alpha = 1e-20} while ((g.t <- g(.th <- (l.th + u.th)/2)) == 1) u.th <- .th if(abs(u.th - l.th) < 1e-13)# do not trust g() return(if(log)-Inf else 0) if(verbose >= 2) cat(sprintf("\n -theta0=%g %s l.th=%g .. u.th=%g <= pi/2\n", -theta0, if(-theta0 == l.th) "=" else "<", l.th, u.th)) } ur1 <- uniroot(function(th) g(th) - 1, lower = l.th, upper = u.th, tol = .Machine$double.eps) ## consider using safeUroot() [ ~/R/Pkgs/copula/R/safeUroot.R ] !! 
ur2 <- tryCatch(uniroot(function(th) log(g(th)), lower = l.th, upper = u.th, tol = .Machine$double.eps), error=function(e)e) g.1 <- x.exp.m.x(ur1$f.root+1) g.2 <- if(inherits(ur2, "error")) -Inf else x.exp.m.x(exp(ur2$f.root)) if(g.1 >= g.2) { theta2 <- ur1$root g1.th2 <- g.1 ## == g1(theta2) } else { theta2 <- ur2$root g1.th2 <- g.2 } } ## now, because g1()'s peak (at th = theta2) may be extreme, we find two more intermediate values ## NB: Theoretically: Max = 0.3679 = g1(theta2) ==> 1e-4 is a very small fraction of that ## to the left: eps <- 1e-4 if((do1 <- g1.th2 > eps && g1(-theta0) < eps)) th1 <- uniroot(function(th) g1(th) - eps, lower = -theta0, upper = theta2, tol = tol)$root if((do4 <- g1.th2 > eps && g1(pi2) < eps)) ## to the right: th3 <- uniroot(function(th) g1(th) - eps, lower = theta2, upper = pi2, tol = tol)$root if(do1) { r1 <- Int(-theta0, th1) r2 <- Int( th1, theta2) } else { r1 <- 0 r2 <- Int(-theta0, theta2) } if(do4) { r3 <- Int( theta2, th3) r4 <- Int( th3, pi2) } else { r3 <- Int( theta2, pi2) r4 <- 0 } if(verbose) cat(sprintf("(%6.4g + %6.4g + %6.4g + %6.4g)= %g\n", r1,r2,r3,r4, c2*(r1+r2+r3+r4))) if(log) log(c2)+ log(r1+r2+r3+r4) else c2*(r1+r2+r3+r4) } ## {.fct1} ## ------------------------------------------------------------------------------ ##' Auxiliary for dstable() only used when alpha == 1 : ##' @param x numeric *scalar*, >= 0 ##' @param beta 0 < |beta| <= 1 ##' @param tol ##' @param subdivisions .fct2 <- function(x, beta, log, tol, subdivisions, verbose = getOption("dstable.debug", default=FALSE)) { i2b <- 1/(2*beta) p2b <- pi*i2b # = pi/(2 beta) ea <- -p2b*x if(is.infinite(ea)) return(if(log)-Inf else 0) ##' g() is strictly monotone; ##' g(u) := original_g(u*pi/2) ##' for beta > 0: increasing from g(-1) = 0 to g(+1) = Inf ##' for beta < 0: decreasing from g(-1) = Inf to g(+1) = 0 ##t0 <- -sign(beta)*pi2# g(t0) == 0 mathematically, but not always numerically u0 <- -sign(beta)# g(u0) == 0 mathematically, but not always numerically g <- function(u) { r <- u r[i <- abs(u-u0) < 1e-10] <- 0 u <- u[!i] th <- u*pi2 h <- p2b+ th # == g'/beta where g' := pi/2 + beta*th = pi/2* (1 + beta*u) r[!i] <- (h/p2b) * exp(ea + h*tanpi2(u)) / cospi2(u) r } ## Function to Integrate; u is a non-sorted vector! g2 <- function(u) { ## g2 = g(.) exp(-g(.)) x.exp.m.x( g(u) ) } ## We know that the maximum of g2(.) is = exp(-1) = 0.3679 "at" g(.) 
== 1 ## find that by uniroot : ur <- uniroot(function(u) g(u) - 1, lower = -1, upper = 1, tol = tol) u2 <- ur$root r1 <- .integrate2(g2, lower = -1, upper = u2, subdivisions = subdivisions, rel.tol = tol, abs.tol = tol) r2 <- .integrate2(g2, lower = u2, upper = 1, subdivisions = subdivisions, rel.tol = tol, abs.tol = tol) if(verbose) { cc <- pi2*abs(i2b) cat(sprintf(".fct2(%.11g, %.6g,..): c*sum(r1+r2)= %.11g*(%6.4g + %6.4g)= %g\n", x,beta, cc, r1, r2, cc*(r1+r2))) } if(log) log(pi2) + log(abs(i2b)) + log(r1 + r2) else pi2*abs(i2b)*(r1 + r2) }## {.fct2} ### ------------------------------------------------------------------------------ pstable <- function(q, alpha, beta, gamma = 1, delta = 0, pm = 0, lower.tail = TRUE, log.p = FALSE, tol = 64*.Machine$double.eps, subdivisions = 1000) { ## A function implemented by Diethelm Wuertz ## Description: ## Returns probability for stable DF x <- q ## Parameter Check: ## NB: (gamma, delta) can be *vector*s (vectorized along x) stopifnot( 0 < alpha, alpha <= 2, length(alpha) == 1, -1 <= beta, beta <= 1, length(beta) == 1, 0 <= gamma, length(pm) == 1, pm %in% 0:2, tol > 0, subdivisions > 0) ## Parameterizations: if (pm == 1) { delta <- delta + beta*gamma * .om(gamma,alpha) } else if (pm == 2) { delta <- delta - alpha^(-1/alpha)*gamma*stableMode(alpha, beta) gamma <- alpha^(-1/alpha) * gamma } ## else pm == 0 ## Shift and Scale: x <- (x - delta) / gamma ## Return directly ## ------ first, special cases: if (alpha == 2) { pnorm(x, mean = 0, sd = sqrt(2), lower.tail=lower.tail, log.p=log.p) } else if (alpha == 1 && beta == 0) { pcauchy(x, lower.tail=lower.tail, log.p=log.p) } else { retValue <- function(F, useLower) { ## (vectorized in F) if(useLower) { if(log.p) log(F) else F } else { ## upper: 1 - F if(log.p) log1p(-F) else 1 - F } } ## General Case if (alpha != 1) { ## 0 < alpha < 2 & |beta| <= 1 from above tanpa2 <- tan(pi2*alpha) zeta <- -beta * tanpa2 theta0 <- min(max(-pi2, atan(-zeta) / alpha), pi2) if(finSupp <- (abs(beta) == 1 && alpha < 1)) { ## has *finite* support [zeta, Inf) if beta == 1 ## (-Inf, zeta] if beta == -1 } ## Loop over all x values: vapply(x, function(z) { if(finSupp) { if(beta == 1 && z <= zeta) return(retValue(0., useLower=lower.tail)) else if(beta == -1 && z >= zeta) return(retValue(1., useLower=lower.tail)) ## else .. one of the cases below } if(abs(z - zeta) < 2 * .Machine$double.eps) { ## FIXME? same problem as dstable r <- if(lower.tail) (1/2- theta0/pi) else 1/2+ theta0/pi if(log.p) log(r) else r } else { useLower <- ((z > zeta && lower.tail) || (z < zeta && !lower.tail)) ## FIXME: for alpha > 1 -- the following computes F1 = 1 -c3*r(x) ## and suffers from cancellation when 1-F1 is used below: giveI <- !useLower && alpha > 1 # if TRUE, .FCT1() return 1-F .F1 <- .FCT1(z, zeta, alpha=alpha, theta0=theta0, giveI = giveI, tol = tol, subdivisions = subdivisions) if(giveI) if(log.p) log(.F1) else .F1 else retValue(.F1, useLower=useLower) } }, 0.) 
} ## Special Case alpha == 1 and -1 <= beta <= 1 (but not = 0) : else { ## (alpha == 1) and 0 < |beta| <= 1 from above useL <- if(beta >= 0) lower.tail else { beta <- -beta x <- -x !lower.tail } if(giveI <- !useL && !log.p) useL <- TRUE ## Loop over all x values: retValue(vapply(x, function(z) .FCT2(z, beta = beta, tol=tol, subdivisions=subdivisions, giveI = giveI), 0.), useLower = useL) } } }## {pstable} ## ------------------------------------------------------------------------------ ##' Auxiliary for pstable() (for alpha != 1) .FCT1 <- function(x, zeta, alpha, theta0, giveI, tol, subdivisions, verbose = getOption("pstable.debug", default=FALSE)) { if(is.infinite(x)) return(if(giveI) 0 else 1) stopifnot(is.finite(zeta)) x.m.zet <- abs(x - zeta) ##-------->>> identically as in .fct1() for dstable() above: <<<----------- ## FIXME: also provide "very small alpha" case, as in .fct1() if(x < zeta) theta0 <- -theta0 a_1 <- alpha - 1 cat0 <- cos(at0 <- alpha*theta0) g <- function(th) { r <- th ## g(-pi/2) or g(pi/2) could become NaN --> work around i.bnd <- abs(pi2 -sign(a_1)*th) < 64*.Machine$double.eps r[i.bnd] <- 0 th <- th[io <- !i.bnd] att <- at0 + alpha*th ## = alpha*(theta0 + theta) r[io] <- (cat0 * cos(th) * (x.m.zet/sin(att))^alpha)^(1/a_1) * cos(att-th) r } if(verbose) cat(sprintf(".FCT1(%9g, %10g, th0=%.10g, %s..): ", x,zeta, theta0, if(giveI)"giveI=TRUE," else "")) ## as g() is monotone, the integrand exp(-g(.)) is too ==> maximum is at the boundary ## however, integration can be inaccurate when g(.) quickly jumps from Inf to 0 ## _BUT_ empirically I find that good values l.th / u.th below are *INDEPENDENT* of x, l.th <- .e.plus(-theta0, 1e-6) if(alpha > 1 && g(l.th) == Inf) { ur <- uniroot(function(t) 1-2*(g(t)==Inf), lower=l.th, upper=pi2, f.lower= -1, f.upper= 1, tol = 1e-8) l.th <- ur$root if(verbose) cat(sprintf(" g(-th0 +1e-6)=Inf: unirt(%d it) -> l.th=%.10g ", ur$iter, l.th)) } u.th <- .e.minus(pi2, 1e-6) if(alpha < 1 && g(u.th) == Inf) { ur <- uniroot(function(t) 1-2*(g(t)==Inf), lower=l.th, upper=u.th, f.upper= -1, tol = 1e-8) u.th <- ur$root if(verbose) cat(sprintf(" g(pi/2 -1e-6)=Inf: unirt(%d it) -> u.th=%.10g ", ur$iter, u.th)) } r <- .integrate2(function(th) exp(-g(th)), lower = l.th, upper = u.th, subdivisions = subdivisions, rel.tol = tol, abs.tol = tol) if(verbose) cat(sprintf("--> Int r= %.11g\n", r)) if(giveI) { ## { ==> alpha > 1 ==> c1 = 1; c3 = -1/pi} ## return (1 - F) = 1 - (1 -1/pi * r) = r/pi : r/pi } else { c1 <- if(alpha < 1) 1/2 - theta0/pi else 1 c3 <- sign(1-alpha)/pi ## FIXME: for alpha > 1, F = 1 - |.|*r(x) ## <==> cancellation iff we eventually want 1 - F() [-> 'lower.tail'] c1 + c3* r } } ## {.FCT1} ## ------------------------------------------------------------------------------ ##' Auxiliary for pstable() only used when alpha == 1 : ##' @param x numeric *scalar* ##' @param beta >= 0 here ##' @param tol ##' @param subdivisions .FCT2 <- function(x, beta, tol, subdivisions, giveI = FALSE, verbose = getOption("pstable.debug", default=FALSE)) { i2b <- 1/(2*beta) p2b <- pi*i2b # = pi/(2 beta) ea <- -p2b*x if(is.infinite(ea)) return(R.D.Lval(if(ea < 0) ## == -Inf ==> g(.) == 0 ==> G2(.) == 1 1 else 0, ## == +Inf ==> g(.) == Inf ==> G2(.)
== 0 lower.tail= !giveI)) ##' g() is strictly monotone; ##' g(u) := original_g(u*pi/2) ##' for beta > 0: increasing from g(-1) = 0 to g(+1) = Inf ##' for beta < 0: decreasing from g(-1) = Inf to g(+1) = 0 ## original_g : ## g <- function(th) { ## h <- p2b+ th # == g'/beta where g' := pi/2 + beta*th ## (h/p2b) * exp(ea + h*tan(th)) / cos(th) ## } ##t0 <- -pi2# g(t0) == 0 mathematically, but not always numerically u0 <- -1 # g(u0) == 0 mathematically, but not always numerically g <- function(u) { r <- u r[i <- abs(u-u0) < 1e-10] <- 0 u <- u[!i] th <- u*pi2 h <- p2b+ th # == g'/beta where g' := pi/2 + beta*th = pi/2* (1 + beta*u) r[!i] <- (h/p2b) * exp(ea + h*tanpi2(u)) / cospi2(u) r } if(verbose) cat(sprintf(".FCT2(%.11g, %.6g, %s..): ", x,beta, if(giveI) "giveI=TRUE," else "")) ## g(-u0) == +Inf {at other end}, mathematically ==> exp(-g(.)) == 0 ## in the outer tails, the numerical integration can be inaccurate, ## because g(.) jumps from 0 to Inf, but is 0 almost always ## <==> g1(.) = exp(-g(.)) jumps from 1 to 0 and is 1 almost everywhere ## ---> the integration "does not see the 0" and returns too large.. u. <- 1 if(g(uu <- .e.minus(u., 1e-6)) == Inf) { ur <- uniroot(function(t) 1-2*(g(t)==Inf), lower=-1, upper= uu, f.lower= +1, f.upper= -1, tol = 1e-8) u. <- ur$root if(verbose) cat(sprintf(" g(%g)=Inf: unirt(%d it) -> u.=%.10g", uu, ur$iter, u.)) } ##' G2(.) = exp(-g(.)) is strictly monotone .. no need for 'theta2' ! G2 <- if(giveI) function(u) expm1(-g(u)) else function(u) exp(-g(u)) r <- .integrate2(G2, lower = -1, upper = u., subdivisions = subdivisions, rel.tol = tol, abs.tol = tol) / 2 if(verbose) cat(sprintf("--> Int r= %.11g\n", r)) if(giveI) -r else r }## {.FCT2} ### ------------------------------------------------------------------------------ ## -- utilities (==^== Macros in R's src/nmath/dpq.h ) : R.D.Lval <- function(p, lower.tail) if(lower.tail) p else (1 - p) # p R.D.Cval <- function(p, lower.tail) if(lower.tail) (1 - p) else p # 1 - p ## R.D.qIv <- function(p, log.p) if(log.p) exp(p) else p # p in qF(p,..) ##' == R.D.Lval(R.D.qIv(p)) "===" p in qF ! 
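##' (illustration:) R.DT.qIv(log(0.25), lower.tail=FALSE, log.p=TRUE) gives 1 - 0.25 = 0.75 ;
##' R.DT.qIv(0.25, lower.tail=TRUE, log.p=FALSE) gives just 0.25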
R.DT.qIv <- function(p, lower.tail, log.p) { if(log.p) if(lower.tail) exp(p) else - expm1(p) else R.D.Lval(p, lower.tail) } ##' == R.D.Cval(R.D.qIv(p)) "===" (1 - p) in qF R.DT.CIv <- function(p, lower.tail, log.p) { if(log.p) if(lower.tail) -expm1(p) else exp(p) else R.D.Cval(p, lower.tail) } qstable <- function(p, alpha, beta, gamma = 1, delta = 0, pm = 0, lower.tail = TRUE, log.p = FALSE, tol = .Machine$double.eps^0.25, maxiter = 1000, trace = 0, integ.tol = 1e-7, subdivisions = 200) { ## A function implemented by Diethelm Wuertz ## Description: ## Returns quantiles for stable DF ## Parameter Check: ## NB: (gamma, delta) can be *vector*s (vectorized along x) stopifnot( 0 < alpha, alpha <= 2, length(alpha) == 1, -1 <= beta, beta <= 1, length(beta) == 1, 0 <= gamma, length(pm) == 1, pm %in% 0:2, tol > 0, subdivisions > 0) ## Parameterizations: if (pm == 1) { delta <- delta + beta*gamma * .om(gamma,alpha) } else if (pm == 2) { delta <- delta - alpha^(-1/alpha)*gamma*stableMode(alpha, beta) gamma <- alpha^(-1/alpha) * gamma } ## else pm == 0 result <- ## Special Cases: if (alpha == 2) qnorm(p, mean = 0, sd = sqrt(2), lower.tail=lower.tail, log.p=log.p) else if (alpha == 1 && beta == 0) qcauchy(p, lower.tail=lower.tail, log.p=log.p) else { ## -------------- 0 < alpha < 2 --------------- .froot <- function(x, p) { pstable(q = x, alpha=alpha, beta=beta, pm = 0, lower.tail=lower.tail, log.p=log.p, tol=integ.tol, subdivisions=subdivisions) - p } ## for approximate interval: .qN <- function(p) qnorm (p, mean = 0, sd = sqrt(2), lower.tail=lower.tail, log.p=log.p) .qC <- function(p) qcauchy(p, lower.tail=lower.tail, log.p=log.p) ## Calculate: qst1 <- function(pp) { ## 1) Find narrow interval [xmin, xmax] ----------------------- ## NB: will deal with a too narrow interval later p0 <- R.DT.qIv(pp, lower.tail=lower.tail, log.p=log.p) left <- p0 < 0.5 if (beta < 0) { xmin <- -R.DT.CIv(pp, lower.tail=lower.tail, log.p=log.p)/p0 xmax <- if (left) .qN(pp) else .qC(pp) } else if (beta > 0 ) { xmin <- if (left) .qC(pp) else .qN(pp) xmax <- p0/R.DT.CIv(pp, lower.tail=lower.tail, log.p=log.p) } else { ## (beta == 0) xmin <- if (left) .qC(pp) else .qN(pp) xmax <- if (left) .qN(pp) else .qC(pp) } if(xmin >= xmax) { # fixup interval such that xmin < xmax fdx <- if(xmin == xmax) .01*max(1e-7, abs(xmin)) else 1.01*(xmin-xmax) xmin <- xmin - fdx xmax <- xmax + fdx stopifnot(xmin < xmax) } ## 2) root-finding pstable(..) = p inside the interval: ------- dx <- 1 repeat { root <- .unirootNA(.froot, interval = c(xmin, xmax), p = pp, extendInt = if(lower.tail) "upX" else "downX", tol=tol, maxiter=maxiter, trace=trace) if(!is.na(root)) break xmin <- xmin- dx xmax <- xmax+ dx if(xmin == -Inf && xmax == +Inf) stop("could not find an interval for x where pstable(x,*) - p changes sign") dx <- dx * 2 } root } vapply(p, qst1, 0.) 
} ## Result: result * gamma + delta } ## ------------------------------------------------------------------------------ rstable <- function(n, alpha, beta, gamma = 1, delta = 0, pm = 0) { ## Description: ## Returns random variates for stable DF ## slightly amended along copula::rstable1 ## Parameter Check: ## NB: (gamma, delta) can be *vector*s (vectorized along x) stopifnot( 0 < alpha, alpha <= 2, length(alpha) == 1, -1 <= beta, beta <= 1, length(beta) == 1, 0 <= gamma, length(pm) == 1, pm %in% 0:2) ## Parameterizations: if (pm == 1) { delta <- delta + beta*gamma * .om(gamma,alpha) } else if (pm == 2) { delta <- delta - alpha^(-1/alpha)*gamma*stableMode(alpha, beta) gamma <- alpha^(-1/alpha) * gamma } ## else pm == 0 ## Calculate uniform and exponential distributed random numbers: theta <- pi * (runif(n)-1/2) w <- -log(runif(n)) result <- ## If alpha is equal 1 then: if (alpha == 1 & beta == 0) { rcauchy(n) ## Otherwise, if alpha is different from 1: } else { ## FIXME: learn from nacopula::rstable1R() b.tan.pa <- beta*tan(pi2*alpha) theta0 <- min(max(-pi2, atan(b.tan.pa) / alpha), pi2) c <- (1+b.tan.pa^2)^(1/(2*alpha)) a.tht <- alpha*(theta+theta0) r <- ( c*sin(a.tht)/ (cos(theta))^(1/alpha) ) * (cos(theta-a.tht)/w)^((1-alpha)/alpha) ## Use Parametrization 0: r - b.tan.pa } ## Result: result * gamma + delta } ## ------------------------------------------------------------------------------ ##' Numerically Integrate -- basically the same as R's integrate() ##' --------------------- main difference: no errors, but warnings .integrate2 <- function(f, lower, upper, ..., subdivisions, rel.tol, abs.tol, stop.on.error = FALSE) { ri <- integrate(f, lower, upper, ..., subdivisions=subdivisions, rel.tol=rel.tol, abs.tol=abs.tol, stop.on.error=stop.on.error) if((msg <- ri[["message"]]) != "OK") warning(msg) ## NB: "roundoff error ..." happens many times ri[["value"]] } stabledist/MD50000644000175100001440000000155012521701706012740 0ustar hornikusers2e440abde01fbee99886ba3db5ab9c54 *ChangeLog 18c196555ca918533593742080d1876f *DESCRIPTION 1f724acd67d568813a7aa03de6320455 *NAMESPACE a2e64d28aa452251d3fa42b5f704624f *R/dist-stableMode.R ac50cfd69551f2ed724badf550660f25 *R/dpqr-stable.R c73a695cf772f99f972d5df8a4d6010d *R/utils.R 3a9c0170e6b9b8e8d6c3f8ed55762070 *TODO 28247fb9d484e6a8c8f38a424ff54c45 *inst/unitTests/Makefile ff3104d4f61550a859a2a275a8d256db *inst/unitTests/runTests.R a200a347cdfb3b877002827c19a99b0f *inst/unitTests/runit.StableDistribution.R 34b68c0cace4a8530b8549128c714ec5 *inst/xtraR/Levy.R 68b4798a427f5b5f2e0a1a5da0974881 *man/dist-stable.Rd 29fad7aa4bd95d674f1e0e166afadbb2 *man/stableMode.Rd fcc086ea9ed8248a4943f4f0a94e5c47 *tests/doRUnit.R b336a1519f0ec57ef56cc78e90b859ab *tests/dstab-ex.R 3a11a0a20e84fb3d03c0d177dae9a353 *tests/pstab-ex.R c9bbe9fe49096caa2e1f2b3c734494d6 *tests/tails.R stabledist/DESCRIPTION0000644000175100001440000000122212521701706014132 0ustar hornikusersPackage: stabledist Version: 0.7-0 Date: 2015-05-04 Title: Stable Distribution Functions Author: Diethelm Wuertz, Martin Maechler and Rmetrics core team members. Maintainer: Martin Maechler Depends: R (>= 3.1.0), stats, utils Suggests: Matrix, fBasics, FMStable, RUnit, Rmpfr, sfsmisc Description: Density, Probability and Quantile functions, and random number generation for (skew) stable distributions, using the parametrizations of Nolan. 
LazyData: yes License: GPL (>= 2) URL: http://www.rmetrics.org NeedsCompilation: no Packaged: 2015-05-04 13:44:30 UTC; maechler Repository: CRAN Date/Publication: 2015-05-04 16:35:50 stabledist/ChangeLog0000644000175100001440000000714412521632275014213 0ustar hornikusers2015-05-04 Martin Maechler * R/dpqr-stable.R (qstable): check that xmin < xmax before using uniroot. Fixes qstable(0, .. beta=0) bug found by Tobias Hudec. * R/utils.R (.unirootNA): build on newer uniroot() --> need R >= 3.1.0 2012-09-30 Martin Maechler * R/dpqr-stable.R (dstable, .fct1): direct 'log' argument; advantageous in few (only) cases. New default 'zeta.tol=NULL' now determines zeta.tol depending on parameters. 2012-09-29 Martin Maechler * R/dpqr-stable.R (dstable.smallA): new asymptotic (alpha -> 0) formula for small alpha. * DESCRIPTION (Version): 0.6-5, released to CRAN on 2012-10-01 2012-05-29 Martin Maechler * DESCRIPTION (Version): 0.6-4, released to CRAN on 2012-05-29 2012-03-19 Martin Maechler * DESCRIPTION (Version): 0.6-3, released to CRAN on 2012-03-19 2011-03-29 Martin Maechler * R/dpqr-stable.R (dstable): new 'zeta.tol' argument; and checking for "absolut+relative" closeness to 'zeta'. * R/dpqr-stable.R (dstable, pstable): lower the default 'tol' to from 16 * epsC to 64* epsC. 2011-03-29 Martin Maechler * DESCRIPTION (Version): 0.6-2.1, released to CRAN on 2011-03-30 * R/dpqr-stable.R (dstable(*, alpha=1) -> .fct2): "robustified" g() * R/dpqr-stable.R (pstable(*, alpha=1) -> .FCT2): ditto + simplified (1 integ.) 2011-03-28 Martin Maechler * DESCRIPTION (Version): 0.6-2 * R/dpqr-stable.R (dstable -> .fct1): another round of improvements, thanks to Dieter Schell's bug reports. 2011-03-24 Martin Maechler * R/dpqr-stable.R (qstable): also add arguments lower.tail, log.p 2011-03-23 Martin Maechler * DESCRIPTION (Version): 0.6-1.1 * R/dpqr-stable.R (dstable -> .fct1 -> g): more careful computation of g(), notably at +- pi/2 where it could become NaN instead of 0. Numerically better but considerably (~ 30-40% ?) slower! * tests/dstab-ex.R: testing that 2011-03-21 Martin Maechler * R/dpqr-stable.R (pstable): new arguments (lower.tail=TRUE, log.p=FALSE) not used "properly" yet (dPareto, pPareto): new auxiliary functions for tail behavior; not used yet - apart from tests: * tests/tails.R: new: testing tail behavior. * R/dpqr-stable.R (qstable): infinite while(.) now caught 2011-03-12 Martin Maechler * DESCRIPTION (Version): 0.6-0, released to CRAN, 2011-03-14 * R/dpqr-stable.R (dstable, .fct1): detect when the integration seems completely unreliable and then also use the x=zeta value. This cures all remaining big problems I've found. 2011-03-10 Martin Maechler * R/dpqr-stable.R (dstable, .fct1): use uniroot() to find theta2; then possibly split into four (instead just two) intervals for integration. 2011-03-08 Martin Maechler * R/dist-stableMode.R (stableMode): add 'tol' (and 'beta.max') arguments which were implicit before. * R/utils.R (.unirootNA): add 'tol' and 'maxiter' arguments. * R/dpqr-stable.R (pstable): get rid of "Sversion" and add 'tol, subdivisions' as arguments. Slightly *increase* default tol(erance) - so .integrate2() no longer warns, at least not for the examples. 2011-02-02 Martin Maechler * DESCRIPTION (Package): renamed to 'stabledist' as Jim Lindsey has maintained a non-CRAN package 'stable' forever (and it's used). 
* TODO: added stabledist/man/0000755000175100001440000000000012521673675013217 5ustar hornikusersstabledist/man/dist-stable.Rd0000644000175100001440000002204312521632275015711 0ustar hornikusers\name{StableDistribution} \title{Stable Distribution Function} \alias{StableDistribution} \alias{dstable} \alias{pstable} \alias{qstable} \alias{rstable} \concept{Stable Distribution} \description{ A collection and description of functions to compute density, distribution and quantile function and to generate random variates of the stable distribution. The four functions are: \tabular{ll}{ \code{[dpqr]stable} \tab the (skewed) stable distribution. } } \usage{ dstable(x, alpha, beta, gamma = 1, delta = 0, pm = 0, log = FALSE, tol = 64*.Machine$double.eps, zeta.tol = NULL, subdivisions = 1000) pstable(q, alpha, beta, gamma = 1, delta = 0, pm = 0, lower.tail = TRUE, log.p = FALSE, tol = 64*.Machine$double.eps, subdivisions = 1000) qstable(p, alpha, beta, gamma = 1, delta = 0, pm = 0, lower.tail = TRUE, log.p = FALSE, tol = .Machine$double.eps^0.25, maxiter = 1000, trace = 0, integ.tol = 1e-7, subdivisions = 200) rstable(n, alpha, beta, gamma = 1, delta = 0, pm = 0) } \arguments{ \item{alpha, beta, gamma, delta}{ value of the index parameter \code{alpha} with \code{alpha = (0,2]}; skewness parameter \code{beta}, in the range [-1, 1]; scale parameter \code{gamma}; and location (or \sQuote{shift}) parameter \code{delta}. } \item{n}{sample size (integer).} \item{p}{numeric vector of probabilities.} \item{pm}{ parameterization, an integer in \code{0, 1, 2}; by default \code{pm=0}, the \sQuote{S0} parameterization.} \item{x, q}{numeric vector of quantiles.} \item{log, log.p}{logical; if TRUE, probabilities p are given as log(p).} \item{lower.tail}{logical; if TRUE (default), probabilities are \eqn{P[X \le x]} otherwise, \eqn{P[X > x]}.} \item{integ.tol}{positive number, the tolerance used for numerical integration, see \code{\link{integrate}}.} \item{tol}{numerical tolerance, \describe{ \item{dstable(), pstable():}{used for numerical integration, see \code{integ.tol} above. Note that earlier versions had tighter tolerances -- which seem too tight as default values.} \item{qstable():}{used for rootfinding, see \code{\link{uniroot}}.} } } \item{zeta.tol}{(\code{dstable}) numerical tolerance for checking if \code{x} is close to \eqn{\zeta(\alpha,\beta)}. The default, \code{NULL} depends itself on \eqn{(\alpha,\beta)}. \cr As it is \emph{experimental} and not guaranteed to remain in the future, its use is not recommended in production code. Rather e-mail the package maintainer about it.} \item{subdivisions}{maximal number of intervals for integration, see \code{\link{integrate}}.} \item{maxiter, trace}{maximal number of iterations and verboseness in \code{\link{uniroot}}, see there.} } \value{ All values for the \code{*stable} functions are numeric vectors: \code{d*} returns the density, \code{p*} returns the distribution function, \code{q*} returns the quantile function, and \code{r*} generates random deviates. } \details{ \bold{Skew Stable Distribution:} \cr\cr The function uses the approach of J.P. Nolan for general stable distributions. Nolan (1997) derived expressions in form of integrals based on the characteristic function for standardized stable random variables. For \code{dstable} and \code{pstable}, these integrals are numerically evaluated using \R's \code{\link{integrate}()} function. 
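\cr
The limiting cases \eqn{\alpha = 2} (Gaussian) and \eqn{\alpha = 1},
\eqn{\beta = 0} (Cauchy) are available in closed form and are handled
directly; a small illustration (not part of the formal examples below,
using the default \code{gamma = 1}, \code{delta = 0}, \code{pm = 0}):
\preformatted{  x <- c(-3, -1, 0, 2.5)
  stopifnot(all.equal(pstable(x, alpha = 2, beta = 0), pnorm(x, sd = sqrt(2))),
            all.equal(pstable(x, alpha = 1, beta = 0), pcauchy(x)))
}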
\cr \dQuote{S0} parameterization [pm=0]: based on the (M) representation of Zolotarev for an alpha stable distribution with skewness beta. Unlike the Zolotarev (M) parameterization, gamma and delta are straightforward scale and shift parameters. This representation is continuous in all 4 parameters, and gives an intuitive meaning to gamma and delta that is lacking in other parameterizations. \cr Switching the sign of \code{beta} \emph{mirrors} the distribution at the vertical axis \eqn{x = \delta}{x = delta}, i.e., \deqn{f(x, \alpha, -\beta, \gamma, \delta, 0) = f(2\delta-x, \alpha, +\beta, \gamma, \delta, 0),} see the graphical example below. \dQuote{S} or \dQuote{S1} parameterization [pm=1]: the parameterization used by Samorodnitsky and Taqqu in the book Stable Non-Gaussian Random Processes. It is a slight modification of Zolotarev's (A) parameterization. \cr \dQuote{S*} or \dQuote{S2} parameterization [pm=2]: a modification of the S0 parameterization which is defined so that (i) the scale gamma agrees with the Gaussian scale (standard dev.) when alpha=2 and the Cauchy scale when alpha=1, (ii) the mode is exactly at delta. For this parameterization, \code{\link{stableMode}(alpha,beta)} is needed. \cr \dQuote{S3} parameterization [pm=3]: an internal parameterization, currently not available for these functions. The scale is the same as the \dQuote{S2} parameterization, the shift is \eqn{-\beta*g(\alpha)}, where \eqn{g(\alpha)} is defined in Nolan(1999). } \section{Tail Behavior}{ The asymptotic behavior for large \eqn{x}, aka the \dQuote{tail behavior}, of the cumulative \eqn{F(x) = P(X \le x)}{F(x) = P(X <= x)} is (for \eqn{x\to\infty}{x -> Inf}) \deqn{1 - F(x) \sim (1+\beta) C_\alpha x^{-\alpha},}{1 - F(x) ~ (1+b) C_a x^-a,} where \eqn{C_\alpha = \Gamma(\alpha)/\pi \sin(\alpha\pi/2)}{a=alpha, b=beta, C_a = Gamma(a)/pi * sin(a*pi/2)}; hence also \deqn{F(-x) \sim (1-\beta) C_\alpha x^{-\alpha}.}{F(-x) ~ (1-b) C_a x^-a.} Differentiating \eqn{F()} above gives \deqn{f(x) \sim \alpha(1+\beta) C_\alpha x^{-(1+\alpha)}.}{f(x) ~ a(1+b) C_a x^-(1+a).} } \note{ In the case \eqn{\beta = 1}, the distributions are \dQuote{maximally skewed to the right} or simply \dQuote{\emph{extremal} stable} (Zolotarev). In that case, the package \pkg{FMStable} provides \code{dpq*} functions which are faster and more accurate than ours (if accuracy higher than about 6 digits is needed), see \code{\link[FMStable]{pEstable}}. When \code{alpha} is close to 1 or close to 0 (\dQuote{close}, e.g., meaning distance \eqn{d < 0.01}), the computations typically become considerably more challenging numerically, and the results may not be accurate. \cr As we plan to improve on this, \emph{and} as it is unknown when exactly the numerical difficulties arise, we currently only warn here (in the documentation), but \emph{not} by giving explicit \code{\link{warning}()}s. } \seealso{ the \code{\link[fBasics]{stableSlider}()} function from package \pkg{fBasics} for displaying densities and probabilities of these distributions, for educational purposes. } \author{ Diethelm Wuertz for the original Rmetrics \R-port. Many numerical improvements by Martin Maechler. } \references{ Chambers J.M., Mallows, C.L. and Stuck, B.W. (1976) \emph{A Method for Simulating Stable Random Variables}, J. Amer. Statist. Assoc. \bold{71}, 340--344. John P.
Nolan (2012) \emph{Stable Distributions - Models for Heavy Tailed Data} Birkhauser, Boston; in progress, chapter 1 online at \url{http://academic2.american.edu/~jpnolan/stable/chap1.pdf} Nolan J.P. (1997) Numerical calculation of stable densities and distribution functions. \emph{Stochastic Models} \bold{13}(4), 759--774. \cr Also available as \file{density.ps} from Nolan's web page. Samoridnitsky G., Taqqu M.S. (1994); \emph{Stable Non-Gaussian Random Processes, Stochastic Models with Infinite Variance}, Chapman and Hall, New York, 632 pages. Weron, A., Weron R. (1999); \emph{Computer Simulation of Levy alpha-Stable Variables and Processes}, Preprint Technical University of Wroclaw, 13 pages. } \examples{ ## stable - ## Plot stable random number series set.seed(1953) r <- rstable(n = 1000, alpha = 1.9, beta = 0.3) plot(r, type = "l", main = "stable: alpha=1.9 beta=0.3", col = "steelblue") grid() ## Plot empirical density and compare with true density: hist(r, n = 25, probability = TRUE, border = "white", col = "steelblue") x <- seq(-5, 5, 0.25) lines(x, dstable(x, alpha = 1.9, beta = 0.3, tol= 1e-3), lwd = 2) ## Plot df and compare with true df: plot(ecdf(r), do.points=TRUE, col = "steelblue", main = "Probabilities: ecdf(rstable(1000,..)) and true cdf F()") rug(r) lines(x, pstable(q = x, alpha = 1.9, beta = 0.3), col="#0000FF88", lwd= 2.5) ## Switching sign(beta) <==> Mirror the distribution around x == delta: curve(dstable(x, alpha=1.2, beta = .8, gamma = 3, delta = 2), -10, 10) curve(dstable(x, alpha=1.2, beta = -.8, gamma = 3, delta = 2), add=TRUE, col=2) ## or the same curve(dstable(2*2-x, alpha=1.2, beta = +.8, gamma = 3, delta = 2), add=TRUE, col=adjustcolor("gray",0.2), lwd=5) abline(v = 2, col = "gray", lty=2, lwd=2) axis(1, at = 2, label = expression(delta == 2)) ## Compute quantiles: x. <- -4:4 px <- pstable(x., alpha = 1.9, beta = 0.3) (qs <- qstable(px, alpha = 1.9, beta = 0.3)) stopifnot(all.equal(as.vector(qs), x., tol = 1e-5)) } \keyword{distribution} stabledist/man/stableMode.Rd0000644000175100001440000000351312031612662015550 0ustar hornikusers\name{StableMode} \alias{stableMode} \title{Mode of the Stable Distribution Function} \description{ Computes the mode of the stable distribution, i.e., the maximum of its density function in the "0" parametrization, i.e., the maximum \eqn{x_0} of \code{\link{dstable}(x, alpha, beta, gamma = 1, delta = 0, pm = 0)}. Finds the maximum of \code{\link{dstable}} numerically, using \code{\link{optimize}}. } \usage{ stableMode(alpha, beta, beta.max = 1 - 1e-11, tol = .Machine$double.eps^0.25) } \arguments{ \item{alpha, beta}{numeric parameters: value of the index parameter \code{alpha} in the range \eqn{(0,2]}, and the skewness parameter \code{beta}, in the range \eqn{[-1, 1]}.} \item{beta.max}{for numerical purposes, values of beta too close to 1, are set to \code{beta.max}. Do not modify unless you know what you're doing.} \item{tol}{numerical tolerance for \code{\link{optimize}()}.} } \value{ returns a numeric value, the location of the stable mode. } \author{ Diethelm Wuertz for the Rmetrics \R-port; minor cleanup by Martin Maechler. } \seealso{ For definition and the \dQuote{dpqr}-functions, \code{\link{StableDistribution}}, also for the references. } \examples{ ## beta = 0 <==> symmetric <==> mode = 0 all.equal(stableMode(alpha=1, beta=0), 0) al.s <- c(1e-100, seq(0,2, by = 1/32)[-1]) stopifnot(vapply(al.s, function(alp) stableMode(alpha=alp, beta=0), 1.) 
== 0) ## more interesting: asymmetric (beta != 0): stableMode(alpha=1.2, beta=0.1) if(stabledist:::doExtras()) { # takes 2.5 seconds sm0.5 <- vapply(al.s, function(AA) stableMode(alpha=AA, beta= 0.5), 1.) plot(al.s, sm0.5, type = "o", col=2, xlab = quote(alpha), ylab="mode", main = quote("Mode of stable"*{}(alpha, beta == 0.5, pm==0))) }%doExtras } \keyword{distribution}
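% A cross-check sketch (illustrative only; the 'interval' and 'tolerance' below are ad hoc choices):
% stableMode() maximizes dstable(*, pm = 0) via optimize(), so e.g.
%   om <- optimize(function(x) dstable(x, alpha = 1.2, beta = 0.1),
%                  interval = c(-2, 2), maximum = TRUE)
%   all.equal(om$maximum, stableMode(alpha = 1.2, beta = 0.1), tolerance = 0.01)
% should be TRUE.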