VGAM/BUGS

Here is a list of known bugs.

2016-06
lrtest(zipoissonff.object, poissonff.object) fails.

2016-05
rcim() with alaplace2() may fail.

2014-02
The subset argument of vgam() may not work, especially with multiple
responses. To get around this, use subset() to create a smaller data
frame and then feed that into vgam(); see the example at the end of
this file.

2013-11
vgam() can only handle constraint matrices cmat, say, such that
t(cmat) %*% cmat is diagonal.

2013-07
quasipoisson()'s scale parameter estimate does not handle prior
weights correctly.

2012-09
loge('a', short = FALSE, inverse = FALSE) and
loge('a', short = FALSE, inverse = TRUE)
give the same answer.

Coef(vglm.dirmultinomial.fit) fails. Evidently, multiple "mlogit"s
saved on vglm.dirmultinomial.fit@misc do not suffice.

2011-12
VGAM version 0.8-4 said it needed R version 2.11.1 or later. But
really, R version 2.13.0 or later is needed. This is because the
generic nobs() was not defined properly. Another fix is to install
the (latest) prerelease version at
http://www.stat.auckland.ac.nz/~yee/VGAM/prerelease

2010-04-12
cqo() should be working now. It uses new C code. Also, vgam() and
vsmooth.spline() should not be noticeably different from before.
But cao() is still not working; hopefully it will be going soon.

2009/07/13
cqo() fails... I think it is due to initial values being faulty.
Hope to look into it soon.

2009/06/18
For a given VGAM family function, arguments such as parallel,
exchangeable, etc., will not work if the RHS of the formula is an
intercept only. For example,
    parallel = FALSE ~ 1
and
    exchangeable = TRUE ~ 1
will fail. Instead, try something like
    parallel = TRUE ~ x2 + x3 + x4 - 1
and
    exchangeable = FALSE ~ x2 + x3 + x4 + x5 - 1
respectively.

2009/01/01
Prediction with vgam( ~ offset(myoffsetmatrix) + ... ) fails inside a
function because myoffsetmatrix cannot be found.

2008/08/12
Under Windows, the vgam() example involving the Hunua data seems to
fail. It is under investigation.

2008/08/04
VGAM interferes with other packages, e.g., predict() and summary().
This is due to S3 and S4 interference, and currently I haven't sussed
out the full details (e.g., NAMESPACES). For now it is best to attach
VGAM only when needed and detach it when other packages are to be
used. This can be done with
    library(VGAM)
and
    detach("package:VGAM")

2008/05/16
zipf() did not handle 0 < s < 1. The prerelease version fixes this.

2008/03/12
A call such as
    mydof = 4
    Fit = vgam(y ~ s(x, df = mydof), fam = poissonff)
will result in failure when plot(Fit) is called. Instead, one needs
    Fit = vgam(y ~ s(x, df = 4), fam = poissonff)

2008/02/16
The VGAM package interferes with other functions. For example, if
VGAM is loaded and lmobject is an "lm" object, then
    fitted(lmobject)
    predict(lmobject)
    resid(lmobject)
    residuals(lmobject)
will fail.

2006/05/18
dirmul() is not working yet.

2005/11/16
cao() now works in Windows. The argument xij does not work properly.

2005/8/31
The Windows version of cao() seems to hang. It does not hang in
Linux.

2005/6/10
cao() works in Linux but seems to hang in Windows. The latter
(distributed in a .zip file format) is made from an R cross-build
process, which may be a reason for the bug. I'm slowly looking into
the bug.

2005/5/6
The VGAM package interferes with other code, including glm(). This
may be due to the smart prediction code, or to the NAMESPACE
facility. In order to use other functions outside the VGAM package
you may need to type "detach()".

2003/7/14
vgam(y ~ s(x, df = 2), subset = x > 2) will fail in R because the
subset argument has the effect that the "df" and "spar" attributes
are removed from the data frame containing the smoothing variables.
Current fix: create a separate data frame satisfying the subset=
condition, and then run vgam() on this smaller data frame.
Thanks to Eugene Zwane for finding this bug.
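Example of the subset workaround mentioned in the 2014-02 and
2003/7/14 entries (added; a minimal sketch in which the data frame
mydata and the variable x are hypothetical):

    mydata2 = subset(mydata, x > 2)   # apply the condition first
    Fit = vgam(y ~ s(x, df = 2), fam = poissonff, data = mydata2)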
VGAM/inst/CITATION

citHeader("To cite VGAM in publications please use:")

## R >= 2.8.0 passes package metadata to citation().
if (!exists("meta") || is.null(meta)) meta <- packageDescription("VGAM")
year <- sub("-.*", "", meta$Date)
note <- sprintf("R package version %s", meta$Version)

citEntry(entry = "Book",
  title = "Vector Generalized Linear and Additive Models: With an Implementation in R",
  author = personList(as.person("Thomas W. Yee")),
  year = "2015",
  publisher = "Springer",
  address = "New York, USA",
  textVersion = paste("Thomas W. Yee (2015).",
    "Vector Generalized Linear and Additive Models:",
    "With an Implementation in R. New York, USA: Springer.")
)

citEntry(entry = "Article",
  title = "Vector Generalized Additive Models",
  author = personList(as.person("Thomas W. Yee"), as.person("C. J. Wild")),
  journal = "Journal of the Royal Statistical Society, Series B",
  year = "1996",
  volume = "58",
  number = "3",
  pages = "481--493",
  textVersion = paste("Thomas W. Yee and C. J. Wild (1996).",
    "Vector Generalized Additive Models.",
    "Journal of the Royal Statistical Society, Series B, 58(3), 481-493.")
)

citEntry(entry = "Article",
  title = "The {VGAM} Package for Categorical Data Analysis",
  author = personList(as.person("Thomas W. Yee")),
  journal = "Journal of Statistical Software",
  year = "2010",
  volume = "32",
  number = "10",
  pages = "1--34",
  url = "http://www.jstatsoft.org/v32/i10/",
  textVersion = paste("Thomas W. Yee (2010).",
    "The VGAM Package for Categorical Data Analysis.",
    "Journal of Statistical Software, 32(10), 1-34.",
    "URL http://www.jstatsoft.org/v32/i10/."),
  header = "and/or"
)

citEntry(entry = "Article",
  title = "Row-column interaction models, with an {R} implementation",
  author = personList(as.person("Thomas W. Yee"), as.person("Alfian F. Hadi")),
  journal = "Computational Statistics",
  year = "2014",
  volume = "29",
  number = "6",
  pages = "1427--1445",
  textVersion = paste("Thomas W. Yee, Alfian F. Hadi (2014).",
    "Row-column interaction models, with an R implementation.",
    "Computational Statistics, 29(6), 1427--1445."),
  header = "and/or"
)

citEntry(entry = "Manual",
  title = "{VGAM}: Vector Generalized Linear and Additive Models",
  author = personList(as.person("Thomas W. Yee")),
  year = year,
  note = note,
  url = "https://CRAN.R-project.org/package=VGAM",
  textVersion = paste("Thomas W. Yee", sprintf("(%s).", year),
    "VGAM: Vector Generalized Linear and Additive Models.",
    paste(note, ".", sep = ""),
    "URL https://CRAN.R-project.org/package=VGAM"),
  header = "and/or"
)

citEntry(entry = "Article",
  title = "Two-parameter reduced-rank vector generalized linear models",
  author = personList(as.person("Thomas W. Yee")),
  journal = "Computational Statistics and Data Analysis",
  year = "2013",
  url = "http://ees.elsevier.com/csda",
  textVersion = paste("Thomas W. Yee (2013).",
    "Two-parameter reduced-rank vector generalized linear models.",
    "Computational Statistics and Data Analysis.",
    "URL http://ees.elsevier.com/csda."),
  header = "and/or"
)

citEntry(entry = "Article",
  title = "The {VGAM} Package for Capture-Recapture Data Using the Conditional Likelihood",
  author = personList(as.person("Thomas W. Yee"), as.person("Jakub Stoklosa"),
                      as.person("Richard M. Huggins")),
  journal = "Journal of Statistical Software",
  year = "2015",
  volume = "65",
  number = "5",
  pages = "1--33",
  url = "http://www.jstatsoft.org/v65/i05/",
  textVersion = paste("Thomas W. Yee, Jakub Stoklosa, Richard M. Huggins (2015).",
    "The VGAM Package for Capture-Recapture Data Using the Conditional Likelihood.",
    "Journal of Statistical Software, 65(5), 1-33.",
    "URL http://www.jstatsoft.org/v65/i05/.")
)
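## Added note: the entries above are what R renders when a user types
##   citation("VGAM")
## after the package is installed.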
Yee (2013).", "Two-parameter reduced-rank vector generalized linear models.", "Computational Statistics and Data Analysis.", "URL http://ees.elsevier.com/csda."), header = "and/or" ) citEntry(entry = "Article", title = "The {VGAM} Package for Capture-Recapture Data Using the Conditional Likelihood", author = personList(as.person("Thomas W. Yee"), as.person("Jakub Stoklosa"), as.person("Richard M. Huggins")), journal = "Journal of Statistical Software", year = "2015", volume = "65", number = "5", pages = "1--33", url = "http://www.jstatsoft.org/v65/i05/", textVersion = paste("Thomas W. Yee, Jakub Stoklosa, Richard M. Huggins (2015).", "The VGAM Package for Capture-Recapture Data Using the Conditional Likelihood.", "Journal of Statistical Software, 65(5), 1-33.", "URL http://www.jstatsoft.org/v65/i05/.") ) VGAM/src/0000755000176200001440000000000013135276753011607 5ustar liggesusersVGAM/src/vmux3.c0000644000176200001440000006253213135276761013044 0ustar liggesusers #include #include #include #include #include void fvlmz9iyC_qpsedg8x(int tgiyxdw1[], int dufozmt7[], int *wy1vqfzu); int fvlmz9iyC_VIAM(int *cz8qdfyj, int *rvy1fpli, int *wy1vqfzu); void fvlmz9iyC_vm2a(double mtlgduey8[], double bzmd6ftvmat[], int *dim1m, int *f8yswcat, int *wy1vqfzu, int *irb1onzwu, int tgiyxdw1[], int dufozmt7[], int *oey3ckps); void fvlmz9iyC_mux22(double wpuarq2m[], double tlgduey8[], double bzmd6ftvmat[], int *npjlv3mr, int *f8yswcat, int *wy1vqfzu); void fvlmz9iyC_vbks(double wpuarq2m[], double unvxka0m[], int *wy1vqfzu, int *f8yswcat, int *dimu); void fvlmz9iyjdbomp0g(double rbne6ouj[], double unvxka0m[], int *wy1vqfzu, int *dvhw1ulq, int *i_solve); void fvlmz9iyC_mux17(double wpuarq2m[], double he7mqnvy[], int *wy1vqfzu, int *xjc4ywlh, int *f8yswcat, int *dimu, int *rutyk8mg); void fvlmz9iyC_lkhnw9yq(double wpuarq2m[], double ks3wejcv[], int *npjlv3mr, int *wy1vqfzu, int *dvhw1ulq); double fvlmz9iyC_tldz5ion(double xx); void fvlmz9iyC_enbin9(double bzmd6ftv[], double hdqsx7bk[], double nm0eljqk[], double *n2kersmx, int *f8yswcat, int *dvhw1ulq, int *zy1mchbf, double *ux3nadiw, double *rsynp1go, int *sguwj9ty); void fvlmz9iyC_enbin8(double bzmd6ftv[], double hdqsx7bk[], double hsj9bzaq[], double *n2kersmx, int *f8yswcat, int *dvhw1ulq, int *zy1mchbf, double *ux3nadiw, double *rsynp1go); void fvlmz9iyC_mbessI0(double unvxka0m[], int *f8yswcat, int *kpzavbj3, double dvector0[], double dvector1[], double dvector2[], int *zjkrtol8, double *qaltf0nz); void VGAM_C_mux34(double he7mqnvy[], double Dmat[], int *vnc1izfy, int *e0nmabdk, int *ui4ntmvd, double bqelz3cy[]); void fvlmz9iyC_qpsedg8x(int tgiyxdw1[], int dufozmt7[], int *wy1vqfzu) { int urohxe6t, bpvaqm5z, *ptri; ptri = tgiyxdw1; for (urohxe6t = *wy1vqfzu; urohxe6t >= 1; urohxe6t--) { for (bpvaqm5z = 1; bpvaqm5z <= urohxe6t; bpvaqm5z++) { *ptri++ = bpvaqm5z; } } ptri = dufozmt7; for (urohxe6t = 1; urohxe6t <= *wy1vqfzu; urohxe6t++) { for (bpvaqm5z = urohxe6t; bpvaqm5z <= *wy1vqfzu; bpvaqm5z++) { *ptri++ = bpvaqm5z; } } } int fvlmz9iyC_VIAM(int *cz8qdfyj, int *rvy1fpli, int *wy1vqfzu) { int urohxe6t; int *wkumc9idtgiyxdw1, *wkumc9iddufozmt7; int imk5wjxg = *wy1vqfzu * (*wy1vqfzu + 1) / 2; wkumc9idtgiyxdw1 = Calloc(imk5wjxg, int); wkumc9iddufozmt7 = Calloc(imk5wjxg, int); fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw1, wkumc9iddufozmt7, wy1vqfzu); for (urohxe6t = 1; urohxe6t <= imk5wjxg; urohxe6t++) { if ((wkumc9idtgiyxdw1[urohxe6t-1]== *cz8qdfyj && wkumc9iddufozmt7[urohxe6t-1] == *rvy1fpli) || (wkumc9idtgiyxdw1[urohxe6t-1]== *rvy1fpli && wkumc9iddufozmt7[urohxe6t-1] == 
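/* Example (added, derived by tracing the loops above): for
   *wy1vqfzu == 3 this fills tgiyxdw1 = {1,2,3,1,2,1} and
   dufozmt7 = {1,2,3,2,3,3}, i.e. the index pairs
   (1,1),(2,2),(3,3),(1,2),(2,3),(1,3): diagonal elements first, then
   successive super-diagonals.  This is the order in which VGAM packs
   symmetric working-weight matrices into matrix-band format. */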
int fvlmz9iyC_VIAM(int *cz8qdfyj, int *rvy1fpli, int *wy1vqfzu) {
  int urohxe6t;
  int *wkumc9idtgiyxdw1, *wkumc9iddufozmt7;
  int imk5wjxg = *wy1vqfzu * (*wy1vqfzu + 1) / 2;

  wkumc9idtgiyxdw1 = Calloc(imk5wjxg, int);
  wkumc9iddufozmt7 = Calloc(imk5wjxg, int);
  fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw1, wkumc9iddufozmt7, wy1vqfzu);
  for (urohxe6t = 1; urohxe6t <= imk5wjxg; urohxe6t++) {
    if ((wkumc9idtgiyxdw1[urohxe6t-1] == *cz8qdfyj &&
         wkumc9iddufozmt7[urohxe6t-1] == *rvy1fpli) ||
        (wkumc9idtgiyxdw1[urohxe6t-1] == *rvy1fpli &&
         wkumc9iddufozmt7[urohxe6t-1] == *cz8qdfyj)) {
      Free(wkumc9idtgiyxdw1); Free(wkumc9iddufozmt7);
      return urohxe6t;
    }
  }
  Free(wkumc9idtgiyxdw1); Free(wkumc9iddufozmt7);
  return 0;
}

void fvlmz9iyC_vm2a(double mtlgduey8[], double bzmd6ftvmat[], int *dim1m,
    int *f8yswcat, int *wy1vqfzu, int *irb1onzwu,
    int tgiyxdw1[], int dufozmt7[], int *oey3ckps) {
  int ayfnwr1v, yq6lorbx, gp1jxzuh, urohxe6t;
  int bpvaqm5z, usvdbx3tk, i_size_bzmd6ftvmat,
      imk5wjxg = *wy1vqfzu * (*wy1vqfzu + 1) / 2,
      zyojx5hw = *wy1vqfzu * *wy1vqfzu;
  double *qnwamo0e;

  if (*oey3ckps == 1) {
    if (*irb1onzwu == 1 || *dim1m != imk5wjxg) {
      i_size_bzmd6ftvmat = zyojx5hw * *f8yswcat;
      qnwamo0e = bzmd6ftvmat;
      for (ayfnwr1v = 0; ayfnwr1v < i_size_bzmd6ftvmat; ayfnwr1v++) {
        *qnwamo0e++ = 0.0e0;
      }
    }
  }

  /* Fixed here: the original read "if (irb1onzwu == 0)", comparing the
     pointer itself to 0; the value test below matches the dereference
     used above and the Fortran twin vm2af in vmux.f. */
  if (*irb1onzwu == 0) {
    for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) {
      urohxe6t = (ayfnwr1v-1) * zyojx5hw;
      for (yq6lorbx = 1; yq6lorbx <= *dim1m; yq6lorbx++) {
        bpvaqm5z  = tgiyxdw1[yq6lorbx-1] - 1 +
                   (dufozmt7[yq6lorbx-1] - 1) * *wy1vqfzu + urohxe6t;
        usvdbx3tk = dufozmt7[yq6lorbx-1] - 1 +
                   (tgiyxdw1[yq6lorbx-1] - 1) * *wy1vqfzu + urohxe6t;
        gp1jxzuh = (yq6lorbx-1) + (ayfnwr1v-1) * *dim1m;
        bzmd6ftvmat[usvdbx3tk] = bzmd6ftvmat[bpvaqm5z] = mtlgduey8[gp1jxzuh];
      }
    }
  } else {
    for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) {
      urohxe6t = (ayfnwr1v-1) * zyojx5hw;
      for (yq6lorbx = 1; yq6lorbx <= *dim1m; yq6lorbx++) {
        bpvaqm5z = tgiyxdw1[yq6lorbx-1] - 1 +
                  (dufozmt7[yq6lorbx-1] - 1) * *wy1vqfzu + urohxe6t;
        gp1jxzuh = (ayfnwr1v-1) * *dim1m + (yq6lorbx-1);
        bzmd6ftvmat[bpvaqm5z] = mtlgduey8[gp1jxzuh];
      }
    }
  }
}

void fvlmz9iyC_mux22(double wpuarq2m[], double tlgduey8[],
    double bzmd6ftvmat[], int *npjlv3mr, int *f8yswcat, int *wy1vqfzu) {
  int ayfnwr1v, yq6lorbx, bpvaqm5z,
      pqneb2ra = 1, djaq7ckz = 1, oey3ckps = 0;
  int zyojx5hw = *wy1vqfzu * *wy1vqfzu,
      imk5wjxg = *wy1vqfzu * (*wy1vqfzu + 1) / 2;
  int *wkumc9idtgiyxdw1, *wkumc9iddufozmt7;
  double q6zdcwxk;
  double *wkumc9idwk12;

  wkumc9idwk12 = Calloc(zyojx5hw, double);
  wkumc9idtgiyxdw1 = Calloc(imk5wjxg, int);
  wkumc9iddufozmt7 = Calloc(imk5wjxg, int);
  fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw1, wkumc9iddufozmt7, wy1vqfzu);

  for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) {
    fvlmz9iyC_vm2a(wpuarq2m + (ayfnwr1v - 1) * *npjlv3mr, wkumc9idwk12,
                   npjlv3mr, &pqneb2ra, wy1vqfzu, &djaq7ckz,
                   wkumc9idtgiyxdw1, wkumc9iddufozmt7, &oey3ckps);
    for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) {
      q6zdcwxk = 0.0e0;
      for (bpvaqm5z = yq6lorbx; bpvaqm5z <= *wy1vqfzu; bpvaqm5z++) {
        q6zdcwxk += wkumc9idwk12[yq6lorbx-1 + (bpvaqm5z-1) * *wy1vqfzu] *
                    tlgduey8[ayfnwr1v-1 + (bpvaqm5z-1) * *f8yswcat];
      }
      bzmd6ftvmat[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] = q6zdcwxk;
    }
  }
  Free(wkumc9idwk12);
  Free(wkumc9idtgiyxdw1); Free(wkumc9iddufozmt7);
}
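/* Commentary (added): fvlmz9iyC_mux22 appears to premultiply each
   observation's response vector by the upper-triangular matrix packed
   in wpuarq2m (expanded via fvlmz9iyC_vm2a with the upper-only flag),
   i.e. it forms U %*% y for a factor U stored in VGAM's matrix-band
   format, one observation at a time. */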
void fvlmz9iyC_vbks(double wpuarq2m[], double unvxka0m[], int *wy1vqfzu,
    int *f8yswcat, int *npjlv3mr) {
  int ayfnwr1v, yq6lorbx, gp1jxzuh,
      pqneb2ra = 1, djaq7ckz = 1, oey3ckps = 0,
      zyojx5hw = *wy1vqfzu * *wy1vqfzu,
      imk5wjxg = *wy1vqfzu * (*wy1vqfzu + 1) / 2;
  int *wkumc9idtgiyxdw1, *wkumc9iddufozmt7;
  double q6zdcwxk;
  double *wkumc9idwk12;

  wkumc9idwk12 = Calloc(zyojx5hw , double);
  wkumc9idtgiyxdw1 = Calloc(imk5wjxg, int);
  wkumc9iddufozmt7 = Calloc(imk5wjxg, int);
  fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw1, wkumc9iddufozmt7, wy1vqfzu);

  for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) {
    fvlmz9iyC_vm2a(wpuarq2m + (ayfnwr1v - 1) * *npjlv3mr, wkumc9idwk12,
                   npjlv3mr, &pqneb2ra, wy1vqfzu, &djaq7ckz,
                   wkumc9idtgiyxdw1, wkumc9iddufozmt7, &oey3ckps);
    for (yq6lorbx = *wy1vqfzu; yq6lorbx >= 1; yq6lorbx--) {
      q6zdcwxk = unvxka0m[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu];
      for (gp1jxzuh = yq6lorbx+1; gp1jxzuh <= *wy1vqfzu; gp1jxzuh++) {
        q6zdcwxk -= wkumc9idwk12[yq6lorbx-1 + (gp1jxzuh-1) * *wy1vqfzu] *
                    unvxka0m[gp1jxzuh-1 + (ayfnwr1v-1) * *wy1vqfzu];
      }
      unvxka0m[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] = q6zdcwxk /
        wkumc9idwk12[yq6lorbx-1 + (yq6lorbx-1) * *wy1vqfzu];
    }
  }
  Free(wkumc9idwk12);
  Free(wkumc9idtgiyxdw1); Free(wkumc9iddufozmt7);
}

void fvlmz9iyjdbomp0g(double rbne6ouj[], double unvxka0m[],
    int *wy1vqfzu, int *dvhw1ulq, int *i_solve) {
  double q6zdcwxk;
  int ayfnwr1v, yq6lorbx, gp1jxzuh;

  *dvhw1ulq = 1;
  for (ayfnwr1v = 1; ayfnwr1v <= *wy1vqfzu; ayfnwr1v++) {
    q6zdcwxk = 0.0e0;
    for (gp1jxzuh = 1; gp1jxzuh <= ayfnwr1v-1; gp1jxzuh++) {
      q6zdcwxk += pow(rbne6ouj[gp1jxzuh-1 + (ayfnwr1v-1) * *wy1vqfzu],
                      (double) 2.0);
    }
    rbne6ouj[ayfnwr1v-1 + (ayfnwr1v-1) * *wy1vqfzu] -= q6zdcwxk;
    if (rbne6ouj[ayfnwr1v-1 + (ayfnwr1v-1) * *wy1vqfzu] <= 0.0e0) {
      Rprintf("Error in fvlmz9iyjdbomp0g: not pos-def.\n");
      *dvhw1ulq = 0;
      return;
    }
    rbne6ouj[ayfnwr1v-1 + (ayfnwr1v-1) * *wy1vqfzu] =
      sqrt(rbne6ouj[ayfnwr1v-1 + (ayfnwr1v-1) * *wy1vqfzu]);
    for (yq6lorbx = ayfnwr1v+1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) {
      q6zdcwxk = 0.0e0;
      for (gp1jxzuh = 1; gp1jxzuh <= ayfnwr1v-1; gp1jxzuh++) {
        q6zdcwxk += rbne6ouj[gp1jxzuh-1 + (ayfnwr1v-1) * *wy1vqfzu] *
                    rbne6ouj[gp1jxzuh-1 + (yq6lorbx-1) * *wy1vqfzu];
      }
      rbne6ouj[ayfnwr1v-1 + (yq6lorbx-1) * *wy1vqfzu] =
        (rbne6ouj[ayfnwr1v-1 + (yq6lorbx-1) * *wy1vqfzu] - q6zdcwxk) /
         rbne6ouj[ayfnwr1v-1 + (ayfnwr1v-1) * *wy1vqfzu];
    }
  }
  if (*i_solve == 0) {
    for (ayfnwr1v = 2; ayfnwr1v <= *wy1vqfzu; ayfnwr1v++) {
      for (yq6lorbx = 1; yq6lorbx <= ayfnwr1v-1; yq6lorbx++) {
        rbne6ouj[ayfnwr1v-1 + (yq6lorbx-1) * *wy1vqfzu] = 0.0e0;
      }
      return;  /* sic: returns after the first pass of the outer loop,
                  exactly as in the original (and its Fortran twin
                  vcholf in vmux.f) */
    }
  }
  for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) {
    q6zdcwxk = unvxka0m[yq6lorbx-1];
    for (gp1jxzuh = 1; gp1jxzuh <= yq6lorbx-1; gp1jxzuh++) {
      q6zdcwxk -= rbne6ouj[gp1jxzuh-1 + (yq6lorbx-1) * *wy1vqfzu] *
                  unvxka0m[gp1jxzuh-1];
    }
    unvxka0m[yq6lorbx-1] = q6zdcwxk /
      rbne6ouj[yq6lorbx-1 + (yq6lorbx-1) * *wy1vqfzu];
  }
  for (yq6lorbx = *wy1vqfzu; yq6lorbx >= 1; yq6lorbx--) {
    q6zdcwxk = unvxka0m[yq6lorbx-1];
    for (gp1jxzuh = yq6lorbx+1; gp1jxzuh <= *wy1vqfzu; gp1jxzuh++) {
      q6zdcwxk -= rbne6ouj[yq6lorbx-1 + (gp1jxzuh-1) * *wy1vqfzu] *
                  unvxka0m[gp1jxzuh-1];
    }
    unvxka0m[yq6lorbx-1] = q6zdcwxk /
      rbne6ouj[yq6lorbx-1 + (yq6lorbx-1) * *wy1vqfzu];
  }
}
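/* Commentary (added): fvlmz9iyjdbomp0g is an in-place Cholesky routine.
   It overwrites the upper triangle of rbne6ouj with the factor, sets
   *dvhw1ulq = 0 and bails out if the matrix is not positive-definite,
   and (unless *i_solve == 0) then solves the two triangular systems so
   that unvxka0m holds the solution on exit. */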
void fvlmz9iyC_mux17(double wpuarq2m[], double he7mqnvy[], int *wy1vqfzu,
    int *xjc4ywlh, int *f8yswcat, int *npjlv3mr, int *rutyk8mg) {
  double q6zdcwxk;
  int ayfnwr1v, yq6lorbx, gp1jxzuh, bpvaqm5z;
  double *wkumc9idwk12, *wkumc9idwk34;
  int *wkumc9idtgiyxdw1, *wkumc9iddufozmt7,
      imk5wjxg = *wy1vqfzu * (*wy1vqfzu + 1) / 2,
      zyojx5hw = *wy1vqfzu * *wy1vqfzu,
      dz1lbtph = *wy1vqfzu * *xjc4ywlh;

  wkumc9idtgiyxdw1 = Calloc(imk5wjxg, int);
  wkumc9iddufozmt7 = Calloc(imk5wjxg, int);
  fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw1, wkumc9iddufozmt7, wy1vqfzu);
  wkumc9idwk12 = Calloc(zyojx5hw, double);
  wkumc9idwk34 = Calloc(dz1lbtph, double);

  for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) {
    for (bpvaqm5z = 1; bpvaqm5z <= *npjlv3mr; bpvaqm5z++) {
      yq6lorbx = wkumc9idtgiyxdw1[bpvaqm5z-1] - 1 +
                (wkumc9iddufozmt7[bpvaqm5z-1] - 1) * *wy1vqfzu;
      wkumc9idwk12[yq6lorbx] = wpuarq2m[bpvaqm5z-1 +
                                        (ayfnwr1v-1) * *npjlv3mr];
    }
    for (gp1jxzuh = 1; gp1jxzuh <= *xjc4ywlh; gp1jxzuh++) {
      for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) {
        wkumc9idwk34[yq6lorbx-1 + (gp1jxzuh-1) * *wy1vqfzu] =
          he7mqnvy[(ayfnwr1v-1) * *wy1vqfzu + yq6lorbx-1 +
                   (gp1jxzuh-1) * *rutyk8mg];
      }
    }
    for (gp1jxzuh = 1; gp1jxzuh <= *xjc4ywlh; gp1jxzuh++) {
      for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) {
        q6zdcwxk = 0.0e0;
        for (bpvaqm5z = yq6lorbx; bpvaqm5z <= *wy1vqfzu; bpvaqm5z++) {
          q6zdcwxk += wkumc9idwk12[yq6lorbx-1 + (bpvaqm5z-1) * *wy1vqfzu] *
                      wkumc9idwk34[bpvaqm5z-1 + (gp1jxzuh-1) * *wy1vqfzu];
        }
        he7mqnvy[(ayfnwr1v-1) * *wy1vqfzu + yq6lorbx-1 +
                 (gp1jxzuh-1) * *rutyk8mg] = q6zdcwxk;
      }
    }
  }
  Free(wkumc9idwk12); Free(wkumc9idwk34);
  Free(wkumc9idtgiyxdw1); Free(wkumc9iddufozmt7);
}

void fvlmz9iyC_lkhnw9yq(double wpuarq2m[], double ks3wejcv[],
    int *npjlv3mr, int *wy1vqfzu, int *dvhw1ulq) {
  int ayfnwr1v, yq6lorbx, gp1jxzuh, uaoynef0,
      zyojx5hw = *wy1vqfzu * *wy1vqfzu;
  double q6zdcwxk, vn3iasxugno = 1.0e-14;
  double *wkumc9idwrk;

  wkumc9idwrk = Calloc(zyojx5hw, double);
  *dvhw1ulq = 1;
  for (ayfnwr1v = 1; ayfnwr1v <= *wy1vqfzu; ayfnwr1v++) {
    for (yq6lorbx = ayfnwr1v; yq6lorbx >= 1; yq6lorbx--) {
      q6zdcwxk = (yq6lorbx == ayfnwr1v) ? 1.0e0 : 0.0e0;
      for (gp1jxzuh = yq6lorbx+1; gp1jxzuh <= ayfnwr1v; gp1jxzuh++) {
        q6zdcwxk -= wpuarq2m[yq6lorbx-1 + (gp1jxzuh-1) * *npjlv3mr] *
                    wkumc9idwrk[gp1jxzuh-1 + (ayfnwr1v-1) * *wy1vqfzu];
      }
      if (fabs(wpuarq2m[yq6lorbx-1 + (yq6lorbx-1) * *npjlv3mr]) <
          vn3iasxugno) {
        Rprintf("Error in fvlmz9iyC_lkhnw9yq: U(cz8qdfyj,cz8qdfyj) is zero.\n");
        *dvhw1ulq = 0;
      } else {
        wkumc9idwrk[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] = q6zdcwxk /
          wpuarq2m[yq6lorbx-1 + (yq6lorbx-1) * *npjlv3mr];
      }
    }
  }
  for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) {
    for (ayfnwr1v = yq6lorbx; ayfnwr1v <= *wy1vqfzu; ayfnwr1v++) {
      uaoynef0 = (yq6lorbx < ayfnwr1v) ? ayfnwr1v : yq6lorbx;
      q6zdcwxk = 0.0e0;
      for (gp1jxzuh = uaoynef0; gp1jxzuh <= *wy1vqfzu; gp1jxzuh++) {
        q6zdcwxk += wkumc9idwrk[yq6lorbx-1 + (gp1jxzuh-1) * *wy1vqfzu] *
                    wkumc9idwrk[ayfnwr1v-1 + (gp1jxzuh-1) * *wy1vqfzu];
      }
      ks3wejcv[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] =
      ks3wejcv[ayfnwr1v-1 + (yq6lorbx-1) * *wy1vqfzu] = q6zdcwxk;
    }
  }
  Free(wkumc9idwrk);
}

double fvlmz9iyC_tldz5ion(double xval) {
  double hofjnx2e, xd4mybgj[6], q6zdcwxk = 1.000000000190015,
         tmp_y = xval;
  int yq6lorbx;

  xd4mybgj[0] =  76.18009172947146e0;
  xd4mybgj[1] = -86.50532032941677e0;
  xd4mybgj[2] =  24.01409824083091e0;
  xd4mybgj[3] =  -1.231739572450155e0;
  xd4mybgj[4] =   0.1208650973866179e-2;
  xd4mybgj[5] =  -0.5395239384953e-5;
  hofjnx2e = xval + 5.50;
  hofjnx2e -= (xval + 0.50) * log(hofjnx2e);
  for (yq6lorbx = 0; yq6lorbx < 6; yq6lorbx++) {
    tmp_y += 1.0e0;
    q6zdcwxk += xd4mybgj[yq6lorbx] / tmp_y;
  }
  return -hofjnx2e + log(2.5066282746310005e0 * q6zdcwxk / xval);
}
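/* Commentary (added): the six coefficients above are those of the
   standard Lanczos approximation (cf. "gammln" in Numerical Recipes),
   so fvlmz9iyC_tldz5ion(x) returns log(Gamma(x)) for x > 0. */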
void fvlmz9iyC_enbin9(double bzmd6ftvmat[], double hdqsx7bk[],
    double nm0eljqk[], double *n2kersmx, int *f8yswcat, int *dvhw1ulq,
    int *zy1mchbf, double *ux3nadiw, double *rsynp1go, int *sguwj9ty) {
  int ayfnwr1v, kij0gwer, esql7umk;
  double vjz5sxty, pvcjl2na, mwuvskg1, btiehdm2 = 100.0e0 * *rsynp1go,
         ydb, ft3ijqmy, q6zdcwxk, plo6hkdr, csi9ydge,
         oxjgzv0e = 0.001e0;
  double bk3ymcih = -1.0;

  csi9ydge = bk3ymcih;
  bk3ymcih += bk3ymcih;
  bk3ymcih += csi9ydge;
  if (*n2kersmx <= 0.80e0 || *n2kersmx >= 1.0e0) {
    Rprintf("Error in fvlmz9iyC_enbin9: bad n2kersmx value.\n");
    *dvhw1ulq = 0;
    return;
  }
  *dvhw1ulq = 1;

  for (kij0gwer = 1; kij0gwer <= *zy1mchbf; kij0gwer++) {
    for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) {
      vjz5sxty = nm0eljqk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] /
                 hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat];
      if ((vjz5sxty < oxjgzv0e) ||
          (nm0eljqk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] > 1.0e5)) {
        bzmd6ftvmat[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] =
          -nm0eljqk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] *
          (1.0e0 + hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] /
                  (hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] +
                   nm0eljqk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat])) /
          pow(hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat],
              (double) 2.0);
        if (bzmd6ftvmat[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] >
            -btiehdm2)
            bzmd6ftvmat[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] =
              -btiehdm2;
        goto ceqzd1hi20;
      }

      q6zdcwxk = 0.0e0;
      pvcjl2na = hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] /
                (hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] +
                 nm0eljqk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat]);
      mwuvskg1 = 1.0e0 - pvcjl2na;
      csi9ydge = hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat];
      if (pvcjl2na < btiehdm2) pvcjl2na = btiehdm2;
      if (mwuvskg1 < btiehdm2) mwuvskg1 = btiehdm2;
      esql7umk = 100 + 15 *
        floor(nm0eljqk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat]);
      if (esql7umk < *sguwj9ty) {
        esql7umk = *sguwj9ty;
      }
      ft3ijqmy = pow(pvcjl2na, csi9ydge);
      *ux3nadiw = ft3ijqmy;
      plo6hkdr = (1.0e0 - *ux3nadiw) /
        pow(hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat],
            (double) 2.0);
      q6zdcwxk += plo6hkdr;
      ydb = 1.0e0;
      ft3ijqmy = hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] *
                 mwuvskg1 * ft3ijqmy;
      *ux3nadiw += ft3ijqmy;
      plo6hkdr = (1.0e0 - *ux3nadiw) /
        pow((hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] + ydb),
            (double) 2.0);
      q6zdcwxk += plo6hkdr;
      ydb = 2.0e0;
      while (((*ux3nadiw <= *n2kersmx) || (plo6hkdr > 1.0e-4)) &&
             (ydb < esql7umk)) {
        ft3ijqmy = (hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] -
                    1.0 + ydb) * mwuvskg1 * ft3ijqmy / ydb;
        *ux3nadiw += ft3ijqmy;
        plo6hkdr = (1.0e0 - *ux3nadiw) /
          pow((hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] + ydb),
              (double) 2.0);
        q6zdcwxk += plo6hkdr;
        ydb += 1.0e0;
      }
      bzmd6ftvmat[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] = -q6zdcwxk;
 ceqzd1hi20:
      bk3ymcih = 0.0e0;
    }
  }
}
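/* Commentary (added): the series loop above tracks the negative
   binomial pmf recursively (size k = hdqsx7bk, mean = nm0eljqk),
   accumulates the cdf in *ux3nadiw, and sums terms of the form
   (1 - cdf(y)) / (k + y)^2 until the cdf exceeds *n2kersmx; minus
   this sum is stored in bzmd6ftvmat, apparently as the
   expected-information element for the size parameter k (with a
   closed-form approximation used in the large-mean / small-ratio
   branch). */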
void fvlmz9iyC_enbin8(double bzmd6ftvmat[], double hdqsx7bk[],
    double hsj9bzaq[], double *n2kersmx, int *f8yswcat, int *dvhw1ulq,
    int *zy1mchbf, double *ux3nadiw, double *rsynp1go) {
  int ayfnwr1v, kij0gwer;
  double ft3ijqmy, tad5vhsu, o3jyipdf, pq0hfucn, q6zdcwxk, plo6hkdr,
         qtce8hzo1 = 0.0e0, qtce8hzo2 = 0.0e0;
  int fw2rodat, rx8qfndg, mqudbv4y;
  double onemse, nm0eljqk, ydb, btiehdm2 = -100.0 * *rsynp1go,
         kbig = 1.0e4, oxjgzv0e = 0.0010;

  if (*n2kersmx <= 0.80e0 || *n2kersmx >= 1.0e0) {
    Rprintf("returning since n2kersmx <= 0.8 or >= 1\n");
    *dvhw1ulq = 0;
    return;
  }
  onemse = 1.0e0 / (1.0e0 + oxjgzv0e);
  *dvhw1ulq = 1;

  for (kij0gwer = 1; kij0gwer <= *zy1mchbf; kij0gwer++) {
    for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) {
      if ( hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] > kbig)
           hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] = kbig;
      if (hsj9bzaq[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] < oxjgzv0e)
          hsj9bzaq[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] = oxjgzv0e;
      if (hsj9bzaq[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] > onemse) {
        nm0eljqk = hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] *
          (1.0e0 / hsj9bzaq[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] -
           1.0e0);
        bzmd6ftvmat[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] =
          -nm0eljqk *
          (1.0e0 + hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] /
                  (hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] +
                   nm0eljqk)) /
          pow(hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat],
              (double) 2.0);
        if (bzmd6ftvmat[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] >
            btiehdm2)
            bzmd6ftvmat[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] =
              btiehdm2;
        goto ceqzd1hi20;
      }

      q6zdcwxk = 0.0e0;
      fw2rodat = 1;
      rx8qfndg = hsj9bzaq[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] <
                 (1.0 - *rsynp1go) ? 1 : 0;
      mqudbv4y = fw2rodat && rx8qfndg ? 1 : 0;
      if (mqudbv4y) {
        qtce8hzo2 = hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] *
          log(hsj9bzaq[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat]);
        *ux3nadiw = exp(qtce8hzo2);
      } else {
        *ux3nadiw = 0.0e0;
      }
      plo6hkdr = (1.0e0 - *ux3nadiw) /
        pow(hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat],
            (double) 2.0);
      q6zdcwxk += plo6hkdr;
      o3jyipdf = fvlmz9iyC_tldz5ion(hdqsx7bk[ayfnwr1v-1 +
                   (kij0gwer-1) * *f8yswcat]);
      ydb = 1.0e0;
      tad5vhsu = fvlmz9iyC_tldz5ion(ydb + hdqsx7bk[ayfnwr1v-1 +
                   (kij0gwer-1) * *f8yswcat]);
      pq0hfucn = 0.0e0;
      if (mqudbv4y) {
        qtce8hzo1 = log(1.0e0 - hsj9bzaq[ayfnwr1v-1 +
                      (kij0gwer-1) * *f8yswcat]);
        ft3ijqmy = exp(ydb * qtce8hzo1 + qtce8hzo2 + tad5vhsu -
                       o3jyipdf - pq0hfucn);
      } else {
        ft3ijqmy = 0.0e0;
      }
      *ux3nadiw += ft3ijqmy;
      plo6hkdr = (1.0e0 - *ux3nadiw) /
        pow(hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] + ydb,
            (double) 2.0);
      q6zdcwxk += plo6hkdr;
      ydb = 2.0e0;
      while ((*ux3nadiw <= *n2kersmx) || (plo6hkdr > 1.0e-4)) {
        tad5vhsu += log(ydb + hdqsx7bk[ayfnwr1v-1 +
                      (kij0gwer-1) * *f8yswcat] - 1.0);
        pq0hfucn += log(ydb);
        if (mqudbv4y) {
          ft3ijqmy = exp(ydb * qtce8hzo1 + qtce8hzo2 + tad5vhsu -
                         o3jyipdf - pq0hfucn);
        } else {
          ft3ijqmy = 0.0e0;
        }
        *ux3nadiw += ft3ijqmy;
        plo6hkdr = (1.0e0 - *ux3nadiw) /
          pow(hdqsx7bk[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] + ydb,
              (double) 2.0);
        q6zdcwxk += plo6hkdr;
        ydb += 1.0e0;
        if (ydb > 1.0e3) goto ceqzd1hi21;
      }
 ceqzd1hi21:
      bzmd6ftvmat[ayfnwr1v-1 + (kij0gwer-1) * *f8yswcat] = -q6zdcwxk;
 ceqzd1hi20:
      tad5vhsu = 0.0e0;
    }
  }
}
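/* Commentary (added): fvlmz9iyC_mbessI0 (below) sums the power series
   of the modified Bessel function I0(x) into dvector0 and, when
   *kpzavbj3 is 1 or 2, its first and second derivatives into
   dvector1/dvector2 (t1's starting value x/2 is the first term of
   I0'(x) = I1(x)); inputs with |x| > 20 are rejected via *zjkrtol8. */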
void fvlmz9iyC_mbessI0(double unvxka0m[], int *f8yswcat, int *kpzavbj3,
    double dvector0[], double dvector1[], double dvector2[],
    int *zjkrtol8, double *qaltf0nz) {
  int ayfnwr1v, gp1jxzuh, c5aesxkus;
  double f0, t0, m0, f1, t1, m1, f2, t2, m2, Toobig = 20.0e0;

  *zjkrtol8 = 0;
  if (!(*kpzavbj3 == 0 || *kpzavbj3 == 1 || *kpzavbj3 == 2)) {
    Rprintf("Error in fvlmz9iyC_mbessI0: kpzavbj3 not in 0:2. Returning.\n");
    *zjkrtol8 = 1;
    return;
  }

  for (gp1jxzuh = 1; gp1jxzuh <= *f8yswcat; gp1jxzuh++) {
    if (fabs(unvxka0m[gp1jxzuh-1]) > Toobig) {
      Rprintf("Error in fvlmz9iyC_mbessI0: unvxka0m[] value > too big.\n");
      *zjkrtol8 = 1;
      return;
    }
    t1 = unvxka0m[gp1jxzuh-1] / 2.0e0;
    f1 = t1;
    t0 = t1 * t1;
    f0 = 1.0e0 + t0;
    t2 = 0.50e0;
    f2 = t2;
    c5aesxkus = 15;
    if (fabs(unvxka0m[gp1jxzuh-1]) > 10.0) c5aesxkus = 25;
    if (fabs(unvxka0m[gp1jxzuh-1]) > 15.0) c5aesxkus = 35;
    if (fabs(unvxka0m[gp1jxzuh-1]) > 20.0) c5aesxkus = 40;
    if (fabs(unvxka0m[gp1jxzuh-1]) > 30.0) c5aesxkus = 55;
    for (ayfnwr1v = 1; ayfnwr1v <= c5aesxkus; ayfnwr1v++) {
      m0 = pow(unvxka0m[gp1jxzuh-1] / (2.0 * (ayfnwr1v + 1.0)),
               (double) 2);
      m1 = m0 * (1.0e0 + 1.0e0 / ayfnwr1v);
      m2 = m1 * (2.0e0 * ayfnwr1v + 1.0e0) / (2.0e0 * ayfnwr1v - 1.0e0);
      t0 = t0 * m0;
      t1 = t1 * m1;
      t2 = t2 * m2;
      f0 = f0 + t0;
      f1 = f1 + t1;
      f2 = f2 + t2;
      if ((fabs(t0) < *qaltf0nz) && (fabs(t1) < *qaltf0nz) &&
          (fabs(t2) < *qaltf0nz)) break;
    }
    if (0 <= *kpzavbj3) dvector0[gp1jxzuh-1] = f0;
    if (1 <= *kpzavbj3) dvector1[gp1jxzuh-1] = f1;
    if (2 <= *kpzavbj3) dvector2[gp1jxzuh-1] = f2;
  }
}

void VGAM_C_mux34(double he7mqnvy[], double Dmat[], int *vnc1izfy,
    int *e0nmabdk, int *ui4ntmvd, double bqelz3cy[]) {
  int ayfnwr1v, yq6lorbx, gp1jxzuh;
  double *qnwamo0e1, *qnwamo0e2;

  if (*e0nmabdk == 1) {
    qnwamo0e1 = bqelz3cy;
    qnwamo0e2 = he7mqnvy;
    for (ayfnwr1v = 0; ayfnwr1v < *vnc1izfy; ayfnwr1v++) {
      *qnwamo0e1++ = *Dmat * pow(*qnwamo0e2++, (double) 2.0);
    }
    return;
  }
  if (*ui4ntmvd == 1) {
    for (ayfnwr1v = 1; ayfnwr1v <= *vnc1izfy; ayfnwr1v++) {
      bqelz3cy[ayfnwr1v-1] = 0.0e0;
      for (yq6lorbx = 1; yq6lorbx <= *e0nmabdk; yq6lorbx++) {
        bqelz3cy[ayfnwr1v-1] +=
          Dmat[yq6lorbx-1 + (yq6lorbx-1) * *e0nmabdk] *
          pow(he7mqnvy[ayfnwr1v-1 + (yq6lorbx-1) * *vnc1izfy],
              (double) 2.0);
      }
      if (*e0nmabdk > 1) {
        for (yq6lorbx = 1; yq6lorbx <= *e0nmabdk; yq6lorbx++) {
          for (gp1jxzuh = yq6lorbx+1; gp1jxzuh <= *e0nmabdk;
               gp1jxzuh++) {
            bqelz3cy[ayfnwr1v-1] +=
              Dmat[yq6lorbx-1 + (gp1jxzuh-1) * *e0nmabdk] *
              he7mqnvy[ayfnwr1v-1 + (yq6lorbx-1) * *vnc1izfy] *
              he7mqnvy[ayfnwr1v-1 + (gp1jxzuh-1) * *vnc1izfy] * 2.0;
          }
        }
      }
    }
  } else {
    for (ayfnwr1v = 1; ayfnwr1v <= *vnc1izfy; ayfnwr1v++) {
      bqelz3cy[ayfnwr1v-1] = 0.0e0;
      for (yq6lorbx = 1; yq6lorbx <= *e0nmabdk; yq6lorbx++) {
        for (gp1jxzuh = 1; gp1jxzuh <= *e0nmabdk; gp1jxzuh++) {
          bqelz3cy[ayfnwr1v-1] +=
            Dmat[yq6lorbx-1 + (gp1jxzuh-1) * *e0nmabdk] *
            he7mqnvy[ayfnwr1v-1 + (yq6lorbx-1) * *vnc1izfy] *
            he7mqnvy[ayfnwr1v-1 + (gp1jxzuh-1) * *vnc1izfy];
        }
      }
    }
  }
}

VGAM/src/vmux.f

C Output from Public domain Ratfor, version 1.01
      subroutine qpsedg8xf(tgiyxdw1, dufozmt7, wy1vqfzu)
      implicit logical (a-z)
      integer wy1vqfzu, tgiyxdw1(*), dufozmt7(*)
      integer urohxe6t, bpvaqm5z, ayfnwr1v
      ayfnwr1v = 1
      urohxe6t = wy1vqfzu
23000 if(.not.(urohxe6t .ge. 1))goto 23002
      do23003 bpvaqm5z=1,urohxe6t
      tgiyxdw1(ayfnwr1v) = bpvaqm5z
      ayfnwr1v = ayfnwr1v+1
23003 continue
23004 continue
23001 urohxe6t=urohxe6t-1
      goto 23000
23002 continue
      ayfnwr1v = 1
      do23005 urohxe6t=1,wy1vqfzu
      do23007 bpvaqm5z=urohxe6t,wy1vqfzu
      dufozmt7(ayfnwr1v) = bpvaqm5z
      ayfnwr1v = ayfnwr1v+1
23007 continue
23008 continue
23005 continue
23006 continue
      return
      end
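C Added note: the routines in this file mirror the C functions in
C vmux3.c (qpsedg8xf ~ fvlmz9iyC_qpsedg8x, viamf ~ fvlmz9iyC_VIAM,
C vm2af ~ fvlmz9iyC_vm2a, and so on); the file is machine-generated
C Ratfor output, hence the numeric labels and goto-based loops.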
      integer function viamf(cz8qdfyj, rvy1fpli, wy1vqfzu, tgiyxdw1,
     *dufozmt7)
      integer cz8qdfyj, rvy1fpli, wy1vqfzu, tgiyxdw1(*), dufozmt7(*)
      integer urohxe6t, imk5wjxg
      imk5wjxg = wy1vqfzu*(wy1vqfzu+1)/2
      do23009 urohxe6t=1,imk5wjxg
      if((tgiyxdw1(urohxe6t).eq.cz8qdfyj .and.
     *dufozmt7(urohxe6t).eq.rvy1fpli) .or.
     *(tgiyxdw1(urohxe6t).eq.rvy1fpli .and.
     *dufozmt7(urohxe6t).eq.cz8qdfyj))then
      viamf = urohxe6t
      return
      endif
23009 continue
23010 continue
      viamf = 0
      return
      end
      subroutine vm2af(mat, a, dimm, tgiyxdw1, dufozmt7, kuzxj1lo,
     *wy1vqfzu, rb1onzwu)
      implicit logical (a-z)
      integer dimm, tgiyxdw1(dimm), dufozmt7(dimm), kuzxj1lo,
     *wy1vqfzu, rb1onzwu
      double precision mat(dimm,kuzxj1lo), a(wy1vqfzu,wy1vqfzu,
     *kuzxj1lo)
      integer ayfnwr1v, yq6lorbx, gp1jxzuh, imk5wjxg
      imk5wjxg = wy1vqfzu * (wy1vqfzu + 1) / 2
      if(rb1onzwu .eq. 1 .or. dimm .ne. imk5wjxg)then
      ayfnwr1v = 1
23015 if(.not.(ayfnwr1v .le. kuzxj1lo))goto 23017
      yq6lorbx = 1
23018 if(.not.(yq6lorbx .le. wy1vqfzu))goto 23020
      gp1jxzuh = 1
23021 if(.not.(gp1jxzuh .le. wy1vqfzu))goto 23023
      a(gp1jxzuh,yq6lorbx,ayfnwr1v) = 0.0d0
23022 gp1jxzuh=gp1jxzuh+1
      goto 23021
23023 continue
23019 yq6lorbx=yq6lorbx+1
      goto 23018
23020 continue
23016 ayfnwr1v=ayfnwr1v+1
      goto 23015
23017 continue
      endif
      do23024 ayfnwr1v=1,kuzxj1lo
      do23026 yq6lorbx=1,dimm
      a(tgiyxdw1(yq6lorbx),dufozmt7(yq6lorbx),ayfnwr1v) = mat(yq6lorbx,
     *ayfnwr1v)
      if(rb1onzwu .eq. 0)then
      a(dufozmt7(yq6lorbx),tgiyxdw1(yq6lorbx),ayfnwr1v) = mat(yq6lorbx,
     *ayfnwr1v)
      endif
23026 continue
23027 continue
23024 continue
23025 continue
      return
      end
      subroutine mux22f(wpuarq2m, tlgduey8, lfu2qhid, dimu, tgiyxdw1,
     *dufozmt7, kuzxj1lo, wy1vqfzu, wk1200)
      implicit logical (a-z)
      integer dimu, tgiyxdw1(*), dufozmt7(*), kuzxj1lo, wy1vqfzu
      double precision wpuarq2m(dimu,kuzxj1lo), tlgduey8(kuzxj1lo,
     *wy1vqfzu), lfu2qhid(wy1vqfzu,kuzxj1lo), wk1200(wy1vqfzu,wy1vqfzu)
      double precision q6zdcwxk
      integer ayfnwr1v, yq6lorbx, bpvaqm5z, one, rb1onzwu
      one = 1
      rb1onzwu = 1
      ayfnwr1v = 1
23030 if(.not.(ayfnwr1v .le. kuzxj1lo))goto 23032
      call vm2af(wpuarq2m(1,ayfnwr1v), wk1200, dimu, tgiyxdw1,
     *dufozmt7, one, wy1vqfzu, rb1onzwu)
      yq6lorbx = 1
23033 if(.not.(yq6lorbx .le. wy1vqfzu))goto 23035
      q6zdcwxk = 0.0d0
      bpvaqm5z = yq6lorbx
23036 if(.not.(bpvaqm5z .le. wy1vqfzu))goto 23038
      q6zdcwxk = q6zdcwxk + wk1200(yq6lorbx,bpvaqm5z) * tlgduey8(
     *ayfnwr1v,bpvaqm5z)
23037 bpvaqm5z=bpvaqm5z+1
      goto 23036
23038 continue
      lfu2qhid(yq6lorbx,ayfnwr1v) = q6zdcwxk
23034 yq6lorbx=yq6lorbx+1
      goto 23033
23035 continue
23031 ayfnwr1v=ayfnwr1v+1
      goto 23030
23032 continue
      return
      end
      subroutine vbksf(wpuarq2m, bvecto, wy1vqfzu, kuzxj1lo, wk1200,
     *tgiyxdw1, dufozmt7, dimu)
      implicit logical (a-z)
      integer wy1vqfzu, kuzxj1lo, tgiyxdw1(*), dufozmt7(*), dimu
      double precision wpuarq2m(dimu,kuzxj1lo), bvecto(wy1vqfzu,
     *kuzxj1lo), wk1200(wy1vqfzu,wy1vqfzu)
      double precision q6zdcwxk
      integer ayfnwr1v, yq6lorbx, gp1jxzuh, rb1onzwu, one
      rb1onzwu = 1
      one = 1
      ayfnwr1v = 1
23039 if(.not.(ayfnwr1v .le. kuzxj1lo))goto 23041
      call vm2af(wpuarq2m(1,ayfnwr1v), wk1200, dimu, tgiyxdw1,
     *dufozmt7, one, wy1vqfzu, rb1onzwu)
      yq6lorbx = wy1vqfzu
23042 if(.not.(yq6lorbx .ge. 1))goto 23044
      q6zdcwxk = bvecto(yq6lorbx,ayfnwr1v)
      gp1jxzuh = yq6lorbx+1
23045 if(.not.(gp1jxzuh .le. wy1vqfzu))goto 23047
      q6zdcwxk = q6zdcwxk - wk1200(yq6lorbx,gp1jxzuh) * bvecto(
     *gp1jxzuh,ayfnwr1v)
23046 gp1jxzuh=gp1jxzuh+1
      goto 23045
23047 continue
      bvecto(yq6lorbx,ayfnwr1v) = q6zdcwxk / wk1200(yq6lorbx,yq6lorbx)
23043 yq6lorbx=yq6lorbx-1
      goto 23042
23044 continue
23040 ayfnwr1v=ayfnwr1v+1
      goto 23039
23041 continue
      return
      end
      subroutine vcholf(wmat, bvecto, wy1vqfzu, dvhw1ulq, isolve)
      implicit logical (a-z)
      integer isolve
      integer wy1vqfzu, dvhw1ulq
      double precision wmat(wy1vqfzu,wy1vqfzu), bvecto(wy1vqfzu)
      double precision q6zdcwxk, dsqrt
      integer ayfnwr1v, yq6lorbx, gp1jxzuh
      dvhw1ulq=1
      do23048 ayfnwr1v=1,wy1vqfzu
      q6zdcwxk = 0d0
      do23050 gp1jxzuh=1,ayfnwr1v-1
      q6zdcwxk = q6zdcwxk + wmat(gp1jxzuh,ayfnwr1v) * wmat(gp1jxzuh,
     *ayfnwr1v)
23050 continue
23051 continue
      wmat(ayfnwr1v,ayfnwr1v) = wmat(ayfnwr1v,ayfnwr1v) - q6zdcwxk
      if(wmat(ayfnwr1v,ayfnwr1v) .le. 0d0)then
      dvhw1ulq = 0
      return
      endif
      wmat(ayfnwr1v,ayfnwr1v) = dsqrt(wmat(ayfnwr1v,ayfnwr1v))
      do23054 yq6lorbx=ayfnwr1v+1,wy1vqfzu
      q6zdcwxk = 0d0
      do23056 gp1jxzuh=1,ayfnwr1v-1
      q6zdcwxk = q6zdcwxk + wmat(gp1jxzuh,ayfnwr1v) * wmat(gp1jxzuh,
     *yq6lorbx)
23056 continue
23057 continue
      wmat(ayfnwr1v,yq6lorbx) = (wmat(ayfnwr1v,yq6lorbx) - q6zdcwxk) /
     *wmat(ayfnwr1v,ayfnwr1v)
23054 continue
23055 continue
23048 continue
23049 continue
      if(isolve .eq. 0)then
      do23060 ayfnwr1v=2,wy1vqfzu
      do23062 yq6lorbx=1,ayfnwr1v-1
      wmat(ayfnwr1v,yq6lorbx) = 0.0d0
23062 continue
23063 continue
      return
23060 continue
23061 continue
      endif
      do23064 yq6lorbx=1,wy1vqfzu
      q6zdcwxk = bvecto(yq6lorbx)
      do23066 gp1jxzuh=1,yq6lorbx-1
      q6zdcwxk = q6zdcwxk - wmat(gp1jxzuh,yq6lorbx) * bvecto(gp1jxzuh)
23066 continue
23067 continue
      bvecto(yq6lorbx) = q6zdcwxk / wmat(yq6lorbx,yq6lorbx)
23064 continue
23065 continue
      yq6lorbx = wy1vqfzu
23068 if(.not.(yq6lorbx .ge. 1))goto 23070
      q6zdcwxk = bvecto(yq6lorbx)
      gp1jxzuh = yq6lorbx+1
23071 if(.not.(gp1jxzuh .le. wy1vqfzu))goto 23073
      q6zdcwxk = q6zdcwxk - wmat(yq6lorbx,gp1jxzuh) * bvecto(gp1jxzuh)
23072 gp1jxzuh=gp1jxzuh+1
      goto 23071
23073 continue
      bvecto(yq6lorbx) = q6zdcwxk / wmat(yq6lorbx,yq6lorbx)
23069 yq6lorbx=yq6lorbx-1
      goto 23068
23070 continue
      return
      end
      subroutine mux17f(wpuarq2m, he7mqnvy, wy1vqfzu, xjc4ywlh,
     *kuzxj1lo, wk1200, wk3400, tgiyxdw1, dufozmt7, dimu, rutyk8mg)
      implicit logical (a-z)
      integer dimu, wy1vqfzu, xjc4ywlh, kuzxj1lo, tgiyxdw1(*),
     *dufozmt7(*), rutyk8mg
      double precision wpuarq2m(dimu,kuzxj1lo), he7mqnvy(rutyk8mg,
     *xjc4ywlh), wk1200(wy1vqfzu,wy1vqfzu), wk3400(wy1vqfzu,xjc4ywlh)
      double precision q6zdcwxk
      integer ayfnwr1v, yq6lorbx, gp1jxzuh, bpvaqm5z
      do23074 yq6lorbx=1,wy1vqfzu
      do23076 ayfnwr1v=1,wy1vqfzu
      wk1200(ayfnwr1v,yq6lorbx) = 0.0d0
23076 continue
23077 continue
23074 continue
23075 continue
      do23078 ayfnwr1v=1,kuzxj1lo
      do23080 bpvaqm5z=1,dimu
      wk1200(tgiyxdw1(bpvaqm5z), dufozmt7(bpvaqm5z)) = wpuarq2m(
     *bpvaqm5z,ayfnwr1v)
23080 continue
23081 continue
      do23082 gp1jxzuh=1,xjc4ywlh
      do23084 yq6lorbx=1,wy1vqfzu
      wk3400(yq6lorbx,gp1jxzuh) = he7mqnvy((ayfnwr1v-1)*wy1vqfzu+
     *yq6lorbx,gp1jxzuh)
23084 continue
23085 continue
23082 continue
23083 continue
      do23086 gp1jxzuh=1,xjc4ywlh
      do23088 yq6lorbx=1,wy1vqfzu
      q6zdcwxk = 0d0
      do23090 bpvaqm5z=yq6lorbx,wy1vqfzu
      q6zdcwxk = q6zdcwxk + wk1200(yq6lorbx,bpvaqm5z) * wk3400(
     *bpvaqm5z,gp1jxzuh)
23090 continue
23091 continue
      he7mqnvy((ayfnwr1v-1)*wy1vqfzu+yq6lorbx,gp1jxzuh) = q6zdcwxk
23088 continue
23089 continue
23086 continue
23087 continue
23078 continue
23079 continue
      return
      end
      subroutine vrinvf9(wpuarq2m, ldr, wy1vqfzu, dvhw1ulq, ks3wejcv,
     *work)
      implicit logical (a-z)
      integer ldr, wy1vqfzu, dvhw1ulq
      double precision wpuarq2m(ldr,wy1vqfzu), ks3wejcv(wy1vqfzu,
     *wy1vqfzu), work(wy1vqfzu,wy1vqfzu)
      double precision q6zdcwxk
      integer yq6lorbx, gp1jxzuh, col, uaoynef0
      dvhw1ulq = 1
      yq6lorbx = 1
23092 if(.not.(yq6lorbx .le. wy1vqfzu))goto 23094
      col = 1
23095 if(.not.(col .le. wy1vqfzu))goto 23097
      work(yq6lorbx,col) = 0.0d0
23096 col=col+1
      goto 23095
23097 continue
23093 yq6lorbx=yq6lorbx+1
      goto 23092
23094 continue
      col = 1
23098 if(.not.(col .le. wy1vqfzu))goto 23100
      yq6lorbx = col
23101 if(.not.(yq6lorbx .ge. 1))goto 23103
      if(yq6lorbx .eq. col)then
      q6zdcwxk = 1.0d0
      else
      q6zdcwxk = 0.0d0
      endif
      gp1jxzuh = yq6lorbx+1
23106 if(.not.(gp1jxzuh .le. col))goto 23108
      q6zdcwxk = q6zdcwxk - wpuarq2m(yq6lorbx,gp1jxzuh) * work(
     *gp1jxzuh,col)
23107 gp1jxzuh=gp1jxzuh+1
      goto 23106
23108 continue
      if(wpuarq2m(yq6lorbx,yq6lorbx) .eq. 0.0d0)then
      dvhw1ulq = 0
      else
      work(yq6lorbx,col) = q6zdcwxk / wpuarq2m(yq6lorbx,yq6lorbx)
      endif
23102 yq6lorbx=yq6lorbx-1
      goto 23101
23103 continue
23099 col=col+1
      goto 23098
23100 continue
      yq6lorbx = 1
23111 if(.not.(yq6lorbx .le. wy1vqfzu))goto 23113
      col = yq6lorbx
23114 if(.not.(col .le. wy1vqfzu))goto 23116
      if(yq6lorbx .lt. col)then
      uaoynef0 = col
      else
      uaoynef0 = yq6lorbx
      endif
      q6zdcwxk = 0.0d0
      gp1jxzuh = uaoynef0
23119 if(.not.(gp1jxzuh .le. wy1vqfzu))goto 23121
      q6zdcwxk = q6zdcwxk + work(yq6lorbx,gp1jxzuh) * work(col,
     *gp1jxzuh)
23120 gp1jxzuh=gp1jxzuh+1
      goto 23119
23121 continue
      ks3wejcv(yq6lorbx,col) = q6zdcwxk
      ks3wejcv(col,yq6lorbx) = q6zdcwxk
23115 col=col+1
      goto 23114
23116 continue
23112 yq6lorbx=yq6lorbx+1
      goto 23111
23113 continue
      return
      end
      subroutine tldz5ion(xx, lfu2qhid)
      implicit logical (a-z)
      double precision xx, lfu2qhid
      double precision x, y, hofjnx2e, q6zdcwxk, xd4mybgj(6)
      integer yq6lorbx
      xd4mybgj(1)= 76.18009172947146d0
      xd4mybgj(2)= -86.50532032941677d0
      xd4mybgj(3)= 24.01409824083091d0
      xd4mybgj(4)= -1.231739572450155d0
      xd4mybgj(5)= 0.1208650973866179d-2
      xd4mybgj(6)= -0.5395239384953d-5
      x = xx
      y = xx
      hofjnx2e = x+5.50d0
      hofjnx2e = hofjnx2e - (x+0.50d0) * dlog(hofjnx2e)
      q6zdcwxk=1.000000000190015d0
      yq6lorbx=1
23122 if(.not.(yq6lorbx .le. 6))goto 23124
      y = y + 1.0d0
      q6zdcwxk = q6zdcwxk + xd4mybgj(yq6lorbx)/y
23123 yq6lorbx=yq6lorbx+1
      goto 23122
23124 continue
      lfu2qhid = -hofjnx2e + dlog(2.5066282746310005d0 * q6zdcwxk / x)
      return
      end
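C Added note: tldz5ion above uses the same six Lanczos coefficients as
C fvlmz9iyC_tldz5ion in vmux3.c, i.e. it returns log(Gamma(xx)) in
C lfu2qhid.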
      subroutine enbin9(bzmd6ftv, hdqsx7bk, nm0eljqk, n2kersmx, n,
     *dvhw1ulq, zy1mchbf, ux3nadiw, rsynp1go, sguwj9ty)
      implicit logical (a-z)
      integer n, dvhw1ulq, zy1mchbf, sguwj9ty
      double precision bzmd6ftv(n, zy1mchbf), hdqsx7bk(n, zy1mchbf),
     *nm0eljqk(n, zy1mchbf), n2kersmx, ux3nadiw, rsynp1go
      integer ayfnwr1v, kij0gwer
      double precision oxjgzv0e, btiehdm2, ydb, vjz5sxty, esql7umk,
     *pvcjl2na, mwuvskg1, ft3ijqmy, hmayv1xt, q6zdcwxk, plo6hkdr
      real csi9ydge
      if(n2kersmx .le. 0.80d0 .or. n2kersmx .ge. 1.0d0)then
      dvhw1ulq = 0
      return
      endif
      btiehdm2 = 100.0d0 * rsynp1go
      oxjgzv0e = 0.001d0
      dvhw1ulq = 1
      kij0gwer=1
23127 if(.not.(kij0gwer.le.zy1mchbf))goto 23129
      ayfnwr1v=1
23130 if(.not.(ayfnwr1v.le.n))goto 23132
      vjz5sxty = nm0eljqk(ayfnwr1v,kij0gwer) / hdqsx7bk(ayfnwr1v,
     *kij0gwer)
      if((vjz5sxty .lt. oxjgzv0e) .or. (nm0eljqk(ayfnwr1v,kij0gwer)
     *.gt. 1.0d5))then
      bzmd6ftv(ayfnwr1v,kij0gwer) = -nm0eljqk(ayfnwr1v,kij0gwer) *
     *(1.0d0 + hdqsx7bk(ayfnwr1v,kij0gwer)/(hdqsx7bk(ayfnwr1v,
     *kij0gwer) + nm0eljqk(ayfnwr1v,kij0gwer))) /
     *hdqsx7bk(ayfnwr1v,kij0gwer)**2
      if(bzmd6ftv(ayfnwr1v,kij0gwer) .gt. -btiehdm2)then
      bzmd6ftv(ayfnwr1v,kij0gwer) = -btiehdm2
      endif
      goto 20
      endif
      q6zdcwxk = 0.0d0
      pvcjl2na = hdqsx7bk(ayfnwr1v,kij0gwer) / (hdqsx7bk(ayfnwr1v,
     *kij0gwer) + nm0eljqk(ayfnwr1v,kij0gwer))
      mwuvskg1 = 1.0d0 - pvcjl2na
      csi9ydge = hdqsx7bk(ayfnwr1v,kij0gwer)
      if(pvcjl2na .lt. btiehdm2)then
      pvcjl2na = btiehdm2
      endif
      if(mwuvskg1 .lt. btiehdm2)then
      mwuvskg1 = btiehdm2
      endif
      esql7umk = 100.0d0 + 15.0d0 * nm0eljqk(ayfnwr1v,kij0gwer)
      if(esql7umk .lt. sguwj9ty)then
      esql7umk = sguwj9ty
      endif
      ft3ijqmy = pvcjl2na ** csi9ydge
      ux3nadiw = ft3ijqmy
      plo6hkdr = (1.0d0 - ux3nadiw) / hdqsx7bk(ayfnwr1v,kij0gwer)**2
      q6zdcwxk = q6zdcwxk + plo6hkdr
      ydb = 1.0d0
      ft3ijqmy = hdqsx7bk(ayfnwr1v,kij0gwer) * mwuvskg1 * ft3ijqmy
      ux3nadiw = ux3nadiw + ft3ijqmy
      plo6hkdr = (1.0d0 - ux3nadiw) / (hdqsx7bk(ayfnwr1v,kij0gwer) +
     *ydb)**2
      q6zdcwxk = q6zdcwxk + plo6hkdr
      ydb = 2.0d0
23143 if(((ux3nadiw .le. n2kersmx) .or. (plo6hkdr .gt. 1.0d-4)) .and.
     *(ydb .lt. esql7umk))then
      ft3ijqmy = (hdqsx7bk(ayfnwr1v,kij0gwer) - 1.0d0 + ydb) *
     *mwuvskg1 * ft3ijqmy / ydb
      ux3nadiw = ux3nadiw + ft3ijqmy
      plo6hkdr = (1.0d0 - ux3nadiw) / (hdqsx7bk(ayfnwr1v,kij0gwer) +
     *ydb)**2
      q6zdcwxk = q6zdcwxk + plo6hkdr
      ydb = ydb + 1.0d0
      goto 23143
      endif
23144 continue
      bzmd6ftv(ayfnwr1v,kij0gwer) = -q6zdcwxk
20    hmayv1xt = 0.0d0
23131 ayfnwr1v=ayfnwr1v+1
      goto 23130
23132 continue
23128 kij0gwer=kij0gwer+1
      goto 23127
23129 continue
      return
      end
      subroutine enbin8(bzmd6ftv, hdqsx7bk, hsj9bzaq, n2kersmx,
     *kuzxj1lo, dvhw1ulq, zy1mchbf, ux3nadiw, rsynp1go)
      implicit logical (a-z)
      integer kuzxj1lo, dvhw1ulq, zy1mchbf
      double precision bzmd6ftv(kuzxj1lo, zy1mchbf), hdqsx7bk(
     *kuzxj1lo, zy1mchbf), hsj9bzaq(kuzxj1lo, zy1mchbf), n2kersmx,
     *ux3nadiw, rsynp1go
      integer ayfnwr1v, kij0gwer, esql7umk
      double precision ft3ijqmy, tad5vhsu, o3jyipdf, pq0hfucn,
     *q6zdcwxk, d1, d2, plo6hkdr, hnu1vjyw
      logical pok1, pok2, pok12
      double precision oxjgzv0e, onemse, nm0eljqk, btiehdm2, ydb, kbig
      d1 = 0.0d0
      d2 = 0.0d0
      btiehdm2 = -100.0d0 * rsynp1go
      esql7umk = 3000
      if(n2kersmx .le. 0.80d0 .or. n2kersmx .ge. 1.0d0)then
      dvhw1ulq = 0
      return
      endif
      kbig = 1.0d4
      oxjgzv0e = 0.001d0
      hnu1vjyw = 1.0d0 - rsynp1go
      onemse = 1.0d0 / (1.0d0 + oxjgzv0e)
      dvhw1ulq = 1
      kij0gwer=1
23147 if(.not.(kij0gwer.le.zy1mchbf))goto 23149
      ayfnwr1v=1
23150 if(.not.(ayfnwr1v.le.kuzxj1lo))goto 23152
      if(hdqsx7bk(ayfnwr1v,kij0gwer) .gt. kbig)then
      hdqsx7bk(ayfnwr1v,kij0gwer) = kbig
      endif
      if(hsj9bzaq(ayfnwr1v,kij0gwer) .lt. oxjgzv0e)then
      hsj9bzaq(ayfnwr1v,kij0gwer) = oxjgzv0e
      endif
      if((hsj9bzaq(ayfnwr1v,kij0gwer) .gt. onemse))then
      nm0eljqk = hdqsx7bk(ayfnwr1v,kij0gwer) * (1.0d0/hsj9bzaq(
     *ayfnwr1v,kij0gwer) - 1.0d0)
      bzmd6ftv(ayfnwr1v,kij0gwer) = -nm0eljqk * (1.0d0 + hdqsx7bk(
     *ayfnwr1v,kij0gwer)/(hdqsx7bk(ayfnwr1v,kij0gwer) + nm0eljqk)) /
     *hdqsx7bk(ayfnwr1v,kij0gwer)**2
      if(bzmd6ftv(ayfnwr1v,kij0gwer) .gt. btiehdm2)then
      bzmd6ftv(ayfnwr1v,kij0gwer) = btiehdm2
      endif
      goto 20
      endif
      q6zdcwxk = 0.0d0
      pok1 = .true.
      pok2 = hsj9bzaq(ayfnwr1v,kij0gwer) .lt. (1.0d0-rsynp1go)
      pok12 = pok1 .and. pok2
      if(pok12)then
      d2 = hdqsx7bk(ayfnwr1v,kij0gwer) * dlog(hsj9bzaq(ayfnwr1v,
     *kij0gwer))
      ux3nadiw = dexp(d2)
      else
      ux3nadiw = 0.0d0
      endif
      plo6hkdr = (1.0d0 - ux3nadiw) / hdqsx7bk(ayfnwr1v,kij0gwer)**2
      q6zdcwxk = q6zdcwxk + plo6hkdr
      call tldz5ion(hdqsx7bk(ayfnwr1v,kij0gwer), o3jyipdf)
      ydb = 1.0d0
      call tldz5ion(ydb + hdqsx7bk(ayfnwr1v,kij0gwer), tad5vhsu)
      pq0hfucn = 0.0d0
      if(pok12)then
      d1 = dlog(1.0d0 - hsj9bzaq(ayfnwr1v,kij0gwer))
      ft3ijqmy = dexp(ydb * d1 + d2 + tad5vhsu - o3jyipdf - pq0hfucn)
      else
      ft3ijqmy = 0.0d0
      endif
      ux3nadiw = ux3nadiw + ft3ijqmy
      plo6hkdr = (1.0d0 - ux3nadiw) / (hdqsx7bk(ayfnwr1v,kij0gwer) +
     *ydb)**2
      q6zdcwxk = q6zdcwxk + plo6hkdr
      ydb = 2.0d0
23165 if((ux3nadiw .le. n2kersmx) .or. (plo6hkdr .gt. 1.0d-4))then
      tad5vhsu = tad5vhsu + dlog(ydb + hdqsx7bk(ayfnwr1v,kij0gwer) -
     *1.0d0)
      pq0hfucn = pq0hfucn + dlog(ydb)
      if(pok12)then
      ft3ijqmy = dexp(ydb * d1 + d2 + tad5vhsu - o3jyipdf - pq0hfucn)
      else
      ft3ijqmy = 0.0d0
      endif
      ux3nadiw = ux3nadiw + ft3ijqmy
      plo6hkdr = (1.0d0 - ux3nadiw) / (hdqsx7bk(ayfnwr1v,kij0gwer) +
     *ydb)**2
      q6zdcwxk = q6zdcwxk + plo6hkdr
      ydb = ydb + 1.0d0
      if(ydb .gt. 1.0d3)then
      goto 21
      endif
      goto 23165
      endif
23166 continue
21    bzmd6ftv(ayfnwr1v,kij0gwer) = -q6zdcwxk
20    tad5vhsu = 0.0d0
23151 ayfnwr1v=ayfnwr1v+1
      goto 23150
23152 continue
23148 kij0gwer=kij0gwer+1
      goto 23147
23149 continue
      return
      end
      subroutine mbessi0(bvecto, kuzxj1lo, kpzavbj3, d0, d1, d2,
     *zjkrtol8, qaltf0nz)
      implicit logical (a-z)
      integer kuzxj1lo, kpzavbj3, zjkrtol8, c5aesxkus
      double precision bvecto(kuzxj1lo), d0(kuzxj1lo), d1(kuzxj1lo),
     *d2(kuzxj1lo), qaltf0nz
      integer ayfnwr1v, gp1jxzuh
      double precision f0, t0, m0, f1, t1, m1, f2, t2, m2
      double precision toobig
      toobig = 20.0d0
      zjkrtol8 = 0
      if(.not.(kpzavbj3 .eq. 0 .or. kpzavbj3 .eq. 1 .or. kpzavbj3
     *.eq. 2))then
      zjkrtol8 = 1
      return
      endif
      do23173 gp1jxzuh=1,kuzxj1lo
      if(dabs(bvecto(gp1jxzuh)) .gt. toobig)then
      zjkrtol8 = 1
      return
      endif
      t1 = bvecto(gp1jxzuh) / 2.0d0
      f1 = t1
      t0 = t1 * t1
      f0 = 1.0d0 + t0
      t2 = 0.50d0
      f2 = t2
      c5aesxkus = 15
      if(dabs(bvecto(gp1jxzuh)) .gt. 10)then
      c5aesxkus = 25
      endif
      if(dabs(bvecto(gp1jxzuh)) .gt. 15)then
      c5aesxkus = 35
      endif
      if(dabs(bvecto(gp1jxzuh)) .gt. 20)then
      c5aesxkus = 40
      endif
      if(dabs(bvecto(gp1jxzuh)) .gt. 30)then
      c5aesxkus = 55
      endif
      do23185 ayfnwr1v=1,c5aesxkus
      m0 = (bvecto(gp1jxzuh) / (2.0d0*(ayfnwr1v+1.0d0))) ** 2.0
      m1 = m0 * (1.0d0 + 1.0d0/ayfnwr1v)
      m2 = m1 * (2.0d0*ayfnwr1v + 1.0d0) / (2.0d0*ayfnwr1v - 1.0d0)
      t0 = t0 * m0
      t1 = t1 * m1
      t2 = t2 * m2
      f0 = f0 + t0
      f1 = f1 + t1
      f2 = f2 + t2
      if((dabs(t0) .lt. qaltf0nz) .and. (dabs(t1) .lt. qaltf0nz)
     *.and. (dabs(t2) .lt. qaltf0nz))then
      goto 23186
      endif
23185 continue
23186 continue
      if(0 .le. kpzavbj3)then
      d0(gp1jxzuh) = f0
      endif
      if(1 .le. kpzavbj3)then
      d1(gp1jxzuh) = f1
      endif
      if(2 .le. kpzavbj3)then
      d2(gp1jxzuh) = f2
      endif
23173 continue
23174 continue
      return
      end

VGAM/src/caqo3.c
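/* Added note (an inference from the entry points below, not from any
   documentation in this file): caqo3.c appears to hold the C-level
   workhorses for VGAM's constrained ordination fitters, cf. cqo_1(),
   cqo_2(), vcao6(), dcqo1() and vdcao6(). */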
yiumjq3ndlgpwe0c(double tlgduey8[], double ufgqj9ck[], double m0ibglfx[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double *rsynp1go, double *dn3iasxug, double *uaf2xgqy, int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *hj3ftvzu, int *qfx3vhct, int *zjkrtol8, int *unhycz0e, double vm4xjosb[]); void cqo_2(double lncwkfq7[], double tlgduey8[], double kifxa0he[], double ufgqj9ck[], double m0ibglfx[], double vm4xjosb[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double w8znmyce[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double *tlq9wpes, double zshtfg8c[], double y7sdgtqi[]); void cqo_1(double lncwkfq7[], double tlgduey8[], double kifxa0he[], double ufgqj9ck[], double m0ibglfx[], double vm4xjosb[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double w8znmyce[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double *tlq9wpes, double zshtfg8c[], double y7sdgtqi[]); void vcao6(double lncwkfq7[], double tlgduey8[], double ufgqj9ck[], double m0ibglfx[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double *tlq9wpes, double zshtfg8c[], double y7sdgtqi[], int psdvgce3[], int *qfozcl5b, double hdnw2fts[], double lamvec[], double wbkq9zyi[], int ezlgm2up[], int lqsahu0r[], int which[], double kispwgx3[], double mbvnaor6[], double hjm2ktyr[], int jnxpuym2[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], int nbzjkpi3[], int lindex[], int acpios9q[], int jwbkl9fp[]); void dcqo1(double lncwkfq7[], double tlgduey8[], double kifxa0he[], double ufgqj9ck[], double m0ibglfx[], double vm4xjosb[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double w8znmyce[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double *tlq9wpes, double zshtfg8c[], double y7sdgtqi[], double atujnxb8[], double k7hulceq[], int *eoviz2fb, double kpzavbj3mat[], double *ydcnh9xl); void vdcao6(double lncwkfq7[], double tlgduey8[], double ufgqj9ck[], double m0ibglfx[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double *tlq9wpes, double zshtfg8c[], double y7sdgtqi[], double atujnxb8[], double k7hulceq[], int *eoviz2fb, double kpzavbj3mat[], double ajul8wkv[], int psdvgce3[], int *qfozcl5b, double hdnw2fts[], double lamvec[], double wbkq9zyi[], int ezlgm2up[], int lqsahu0r[], int which[], double kispwgx3[], double mbvnaor6[], double hjm2ktyr[], int jnxpuym2[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], int nbzjkpi3[], int lindex[], int acpios9q[], int jwbkl9fp[]); double fvlmz9iyC_tldz5ion(double xx); void fvlmz9iyC_qpsedg8x(int tgiyxdw1[], int dufozmt7[], int *wy1vqfzu); void fvlmz9iyC_enbin9(double lfu2qhid[], double hdqsx7bk[], double nm0eljqk[], double *n2kersmx, int *f8yswcat, 
int *dvhw1ulq, int *zy1mchbf, double *ux3nadiw, double *rsynp1go, int *sguwj9ty); void Yee_vbfa(int psdvgce3[], double *fjcasv7g, double he7mqnvy[], double tlgduey8[], double rbne6ouj[], double hdnw2fts[], double lamvec[], double wbkq9zyi[], int ezlgm2up[], int lqsahu0r[], int which[], double kispwgx3[], double m0ibglfx[], double zshtfg8c[], double ui8ysltq[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], double wpuarq2m[], double hjm2ktyr[], int ulm3dvzg[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], int nbzjkpi3[], int lindex[], // 20130525; lindex added int acpios9q[], int jwbkl9fp[]); void F77_NAME(vqrdca)(double*, int*, int*, int*, double*, int*, double*, int*, double*); void F77_NAME(vdqrsl)(double*, int*, int*, int*, double*, double*, double*, double*, double*, double*, double*, int*, int*); void tyee_C_vdgam1(double*, double*, int*); void tyee_C_vtgam1(double*, double*, int*); void yiumjq3nn2howibc2a(double *objzgdk0, double *i9mwnvqt, double *lfu2qhid) { double pq0hfucn, xd4mybgj; if (1.0e0 - *objzgdk0 >= 1.0e0) { *lfu2qhid = -8.12589e0 / (3.0 * sqrt(*i9mwnvqt)); } else if (1.0e0 - *objzgdk0 <= 0.0e0) { *lfu2qhid = 8.12589e0 / (3.0 * sqrt(*i9mwnvqt)); } else { pq0hfucn = 1.0e0 - *objzgdk0; yiumjq3npnm1or(&pq0hfucn, &xd4mybgj); xd4mybgj /= 3.0e0 * sqrt(*i9mwnvqt); *lfu2qhid = -3.0e0 * log(1.0e0 + xd4mybgj); } } void yiumjq3nbewf1pzv9(double *objzgdk0, double *lfu2qhid) { if (*objzgdk0 <= 2.0e-200) { *lfu2qhid = -460.0e0; } else if (*objzgdk0 <= 1.0e-14) { *lfu2qhid = log( *objzgdk0 ); } else if (1.0e0 - *objzgdk0 <= 0.0e0) { *lfu2qhid = 3.542106e0; } else { *lfu2qhid = log(-log(1.0e0 - *objzgdk0)); } } void yiumjq3ng2vwexyk9(double *objzgdk0, double *lfu2qhid) { if (*objzgdk0 <= 2.0e-200) { *lfu2qhid = -460.0e0; } else if (*objzgdk0 <= 1.0e-14) { *lfu2qhid = log( *objzgdk0 ); } else if (1.0e0 - *objzgdk0 <= 0.0e0) { *lfu2qhid = 34.53958e0; } else { *lfu2qhid = log(*objzgdk0 / (1.0e0 - *objzgdk0)); } } void yiumjq3npkc4ejib(double w8znmyce[], double zshtfg8c[], double m0ibglfx[], int *ftnjamu2, int *wy1vqfzu, int *br5ovgcj, int *xlpjcg3s, int *vtsou9pz, int *hj3ftvzu, int *qfx3vhct, int *unhycz0e, double vm4xjosb[]) { int ayfnwr1v, yq6lorbx, gp1jxzuh, sedf7mxb; double *fpdlcqk9zshtfg8c, *fpdlcqk9w8znmyce, *fpdlcqk9f9piukdx, *fpdlcqk9m0ibglfx, *fpdlcqk9vm4xjosb; if (*vtsou9pz == 1) { if (*qfx3vhct == 3 || *qfx3vhct == 5) { sedf7mxb = 2 * *hj3ftvzu - 1; if (*br5ovgcj != 2 * *ftnjamu2) //Rprinf Rprintf("Error: *br5ovgcj != 2 * *ftnjamu2 in C_pkc4ejib\n"); fpdlcqk9m0ibglfx = m0ibglfx + sedf7mxb-1; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx = 0.0; fpdlcqk9m0ibglfx += *wy1vqfzu; } fpdlcqk9zshtfg8c = zshtfg8c; for (gp1jxzuh = 1; gp1jxzuh <= *xlpjcg3s; gp1jxzuh++) { fpdlcqk9w8znmyce = w8znmyce + 0 + (gp1jxzuh-1) * *br5ovgcj; fpdlcqk9m0ibglfx = m0ibglfx + sedf7mxb-1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx += *fpdlcqk9w8znmyce++ * *fpdlcqk9zshtfg8c; fpdlcqk9w8znmyce++; fpdlcqk9m0ibglfx += *wy1vqfzu; } fpdlcqk9zshtfg8c++; } sedf7mxb = 2 * *hj3ftvzu; fpdlcqk9m0ibglfx = m0ibglfx + sedf7mxb-1; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx = 0.0; fpdlcqk9m0ibglfx += *wy1vqfzu; } fpdlcqk9zshtfg8c = zshtfg8c; for (gp1jxzuh = 1; gp1jxzuh <= *xlpjcg3s; gp1jxzuh++) { fpdlcqk9w8znmyce = w8znmyce + 1 + (gp1jxzuh-1) * *br5ovgcj; fpdlcqk9m0ibglfx = m0ibglfx + sedf7mxb-1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx += 
*fpdlcqk9w8znmyce++ * *fpdlcqk9zshtfg8c; fpdlcqk9w8znmyce++; fpdlcqk9m0ibglfx += *wy1vqfzu; } fpdlcqk9zshtfg8c++; } } else { fpdlcqk9m0ibglfx = m0ibglfx + *hj3ftvzu-1; for (ayfnwr1v = 0; ayfnwr1v < *br5ovgcj; ayfnwr1v++) { *fpdlcqk9m0ibglfx = 0.0; fpdlcqk9m0ibglfx += *wy1vqfzu; } fpdlcqk9zshtfg8c = zshtfg8c; fpdlcqk9w8znmyce = w8znmyce; // + (gp1jxzuh-1) * *br5ovgcj; for (gp1jxzuh = 1; gp1jxzuh <= *xlpjcg3s; gp1jxzuh++) { fpdlcqk9m0ibglfx = m0ibglfx + *hj3ftvzu-1; for (ayfnwr1v = 1; ayfnwr1v <= *br5ovgcj; ayfnwr1v++) { *fpdlcqk9m0ibglfx += *fpdlcqk9w8znmyce++ * *fpdlcqk9zshtfg8c; fpdlcqk9m0ibglfx += *wy1vqfzu; } fpdlcqk9zshtfg8c++; } } } else { if (*br5ovgcj != *wy1vqfzu * *ftnjamu2) //Rprinf Rprintf("Error: *br5ovgcj != *wy1vqfzu * *ftnjamu2 in C_pkc4ejib\n"); fpdlcqk9m0ibglfx = m0ibglfx; fpdlcqk9f9piukdx = w8znmyce; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9m0ibglfx = 0.0e0; fpdlcqk9zshtfg8c = zshtfg8c; fpdlcqk9w8znmyce = fpdlcqk9f9piukdx++; for (gp1jxzuh = 1; gp1jxzuh <= *xlpjcg3s; gp1jxzuh++) { *fpdlcqk9m0ibglfx += *fpdlcqk9w8znmyce * *fpdlcqk9zshtfg8c++; fpdlcqk9w8znmyce += *br5ovgcj; } fpdlcqk9m0ibglfx++; } } } fpdlcqk9vm4xjosb = vm4xjosb; if (*unhycz0e == 1) { if (*qfx3vhct == 3 || *qfx3vhct == 5) { fpdlcqk9m0ibglfx = m0ibglfx + 2 * *hj3ftvzu - 2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx += *fpdlcqk9vm4xjosb++; fpdlcqk9m0ibglfx += *wy1vqfzu; } } else { fpdlcqk9m0ibglfx = m0ibglfx + *hj3ftvzu - 1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx += *fpdlcqk9vm4xjosb++; fpdlcqk9m0ibglfx += *wy1vqfzu; } } } } void yiumjq3nnipyajc1(double m0ibglfx[], double t8hwvalr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *qfx3vhct, int *hj3ftvzu) { int ayfnwr1v, yq6lorbx; double tmpwk, *fpdlcqk9t8hwvalr, *fpdlcqk9m0ibglfx; if (*hj3ftvzu == 0) { fpdlcqk9t8hwvalr = t8hwvalr; fpdlcqk9m0ibglfx = m0ibglfx; if (*qfx3vhct == 1) { if (*afpc0kns != *wy1vqfzu) Rprintf("Error: *afpc0kns != *wy1vqfzu in C_nipyajc1\n"); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { tmpwk = exp(*fpdlcqk9m0ibglfx++); *fpdlcqk9t8hwvalr++ = tmpwk / (1.0 + tmpwk); } } if (*qfx3vhct == 2) { if (*afpc0kns != *wy1vqfzu) Rprintf("Error: *afpc0kns != *wy1vqfzu in C_nipyajc1\n"); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) *fpdlcqk9t8hwvalr++ = exp(*fpdlcqk9m0ibglfx++); } if (*qfx3vhct == 4) { if (*afpc0kns != *wy1vqfzu) Rprintf("Error: *afpc0kns != *wy1vqfzu in C_nipyajc1\n"); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) *fpdlcqk9t8hwvalr++ = 1.0e0 - exp(-exp(*fpdlcqk9m0ibglfx++)); } if (*qfx3vhct == 3 || *qfx3vhct == 5) { if (2 * *afpc0kns != *wy1vqfzu) { //Rprintf Rprintf("Error: 2 * *afpc0kns != *wy1vqfzu in C_nipyajc1\n"); } //Rprintf for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { *fpdlcqk9t8hwvalr++ = exp(*fpdlcqk9m0ibglfx++); fpdlcqk9m0ibglfx++; } } if (*qfx3vhct == 8) { if (*afpc0kns != *wy1vqfzu) Rprintf("Error: *afpc0kns != *wy1vqfzu in C_nipyajc1\n"); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) *fpdlcqk9t8hwvalr++ = *fpdlcqk9m0ibglfx++; } } else { fpdlcqk9t8hwvalr = t8hwvalr + *hj3ftvzu-1; fpdlcqk9m0ibglfx = m0ibglfx + *hj3ftvzu-1; if (*qfx3vhct == 1) { if (*afpc0kns != 
*wy1vqfzu) Rprintf("Error: *afpc0kns != *wy1vqfzu in C_nipyajc1\n"); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { tmpwk = exp(*fpdlcqk9m0ibglfx); *fpdlcqk9t8hwvalr = tmpwk / (1.0 + tmpwk); fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9m0ibglfx += *wy1vqfzu; } } if (*qfx3vhct == 2) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9t8hwvalr = exp(*fpdlcqk9m0ibglfx); fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9m0ibglfx += *wy1vqfzu; } } if (*qfx3vhct == 4) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9t8hwvalr = 1.0e0 - exp(-exp(*fpdlcqk9m0ibglfx)); fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9m0ibglfx += *wy1vqfzu; } } if (*qfx3vhct == 3 || *qfx3vhct == 5) { fpdlcqk9t8hwvalr = t8hwvalr + *hj3ftvzu-1; fpdlcqk9m0ibglfx = m0ibglfx + 2 * *hj3ftvzu-2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9t8hwvalr = exp(*fpdlcqk9m0ibglfx); fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9m0ibglfx += *wy1vqfzu; } } if (*qfx3vhct == 8) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9t8hwvalr = *fpdlcqk9m0ibglfx; fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9m0ibglfx += *wy1vqfzu; } } } } void yiumjq3nshjlwft5(int *qfx3vhct, double tlgduey8[], double ufgqj9ck[], double t8hwvalr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *kvowz9ht, double m0ibglfx[], double *jxacz5qu, int *hj3ftvzu, double *dn3iasxug, double *vsoihn1r, int *dqk5muto) { int ayfnwr1v, yq6lorbx, lbgwvp3q; double txlvcey5, xd4mybgj, uqnkc6zg, hofjnx2e, smmu, afwp5imx, ivqk2ywz, qvd7yktm, hdqsx7bk, anopu9vi, jtnbu2hz, prev_lfu2qhid = 0.0e0, lfu2qhid = 0.0e0, *fpdlcqk9m0ibglfx, *fpdlcqk9t8hwvalr, *fpdlcqk9ufgqj9ck, *fpdlcqk9tlgduey8; if (*hj3ftvzu == 0) { fpdlcqk9tlgduey8 = tlgduey8; if (*qfx3vhct == 1 || *qfx3vhct == 4) { if (*afpc0kns != *wy1vqfzu) Rprintf("Error: *afpc0kns != *wy1vqfzu in C_shjlwft5\n"); fpdlcqk9tlgduey8 = tlgduey8; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { // yyy fpdlcqk9t8hwvalr = t8hwvalr + yq6lorbx-1; fpdlcqk9ufgqj9ck = ufgqj9ck; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { // bbb ivqk2ywz = *fpdlcqk9tlgduey8 > 0.0 ? *fpdlcqk9tlgduey8*log(*fpdlcqk9tlgduey8) :0.0; if (*fpdlcqk9tlgduey8 < 1.0e0) ivqk2ywz += (1.0e0 - *fpdlcqk9tlgduey8) * log(1.0e0 - *fpdlcqk9tlgduey8); xd4mybgj = *fpdlcqk9t8hwvalr * (1.0e0 - *fpdlcqk9t8hwvalr); if (xd4mybgj < *dn3iasxug) { smmu = *fpdlcqk9t8hwvalr; qvd7yktm = *fpdlcqk9tlgduey8 * ((smmu < *dn3iasxug) ? *vsoihn1r : log(smmu)); afwp5imx = 1.0e0 - smmu; qvd7yktm += (afwp5imx < *dn3iasxug ? *vsoihn1r : log(afwp5imx))* (1.0 - *fpdlcqk9tlgduey8); } else { qvd7yktm = *fpdlcqk9tlgduey8 * log( *fpdlcqk9t8hwvalr) + (1.0 - *fpdlcqk9tlgduey8) * log(1.0 - *fpdlcqk9t8hwvalr); } lfu2qhid += *fpdlcqk9ufgqj9ck++ * (ivqk2ywz - qvd7yktm); fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9tlgduey8++; } // bbb jxacz5qu[yq6lorbx] = 2.0e0 * (lfu2qhid - prev_lfu2qhid); prev_lfu2qhid = lfu2qhid; } // yyy } if (*qfx3vhct == 2) { if (*afpc0kns != *wy1vqfzu) Rprintf("Error: *afpc0kns != *wy1vqfzu in C_shjlwft5\n"); fpdlcqk9tlgduey8 = tlgduey8; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { fpdlcqk9t8hwvalr = t8hwvalr + yq6lorbx-1; fpdlcqk9ufgqj9ck = ufgqj9ck; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { xd4mybgj = *fpdlcqk9tlgduey8 > 0.0 ? 
*fpdlcqk9t8hwvalr - *fpdlcqk9tlgduey8 + *fpdlcqk9tlgduey8 * log(*fpdlcqk9tlgduey8 / *fpdlcqk9t8hwvalr) : *fpdlcqk9t8hwvalr - *fpdlcqk9tlgduey8; lfu2qhid += *fpdlcqk9ufgqj9ck++ * xd4mybgj; fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9tlgduey8++; } jxacz5qu[yq6lorbx] = 2.0e0 * (lfu2qhid - prev_lfu2qhid); prev_lfu2qhid = lfu2qhid; } } if (*qfx3vhct == 5) { fpdlcqk9tlgduey8 = tlgduey8; if (2 * *afpc0kns != *wy1vqfzu) { //Rprintf Rprintf("Error: 2 * *afpc0kns != *wy1vqfzu in C_nipyajc1\n"); } //Rprintf for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { fpdlcqk9m0ibglfx = m0ibglfx + 2*yq6lorbx-1; fpdlcqk9t8hwvalr = t8hwvalr + yq6lorbx-1; fpdlcqk9ufgqj9ck = ufgqj9ck; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { jtnbu2hz = exp(*fpdlcqk9m0ibglfx); uqnkc6zg = fvlmz9iyC_tldz5ion(jtnbu2hz); xd4mybgj = *fpdlcqk9tlgduey8 > 0.0 ? (jtnbu2hz - 1.0e0) * log(*fpdlcqk9tlgduey8) + (log(jtnbu2hz) - *fpdlcqk9tlgduey8 / *fpdlcqk9t8hwvalr - log(*fpdlcqk9t8hwvalr)) * jtnbu2hz - uqnkc6zg : -1000.0e0; xd4mybgj = -xd4mybgj; lfu2qhid += *fpdlcqk9ufgqj9ck++ * xd4mybgj; fpdlcqk9m0ibglfx += *wy1vqfzu; fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9tlgduey8++; } jxacz5qu[yq6lorbx] = 2.0e0 * (lfu2qhid - prev_lfu2qhid); prev_lfu2qhid = lfu2qhid; } } if (*qfx3vhct == 3) { if (*dqk5muto == 0) { anopu9vi = 34.0e0; for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { if (m0ibglfx[2*yq6lorbx-1 + (ayfnwr1v-1)* *wy1vqfzu] > anopu9vi) { hdqsx7bk = exp(anopu9vi); lbgwvp3q = 1; } else if (m0ibglfx[2*yq6lorbx-1 + (ayfnwr1v-1)* *wy1vqfzu] < -anopu9vi) { hdqsx7bk = exp(-anopu9vi); lbgwvp3q = 1; } else { hdqsx7bk = exp(m0ibglfx[2*yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu]); lbgwvp3q = 0; } xd4mybgj = (tlgduey8[ayfnwr1v-1+ (yq6lorbx-1)* *ftnjamu2] < 1.0e0) ? 
1.0e0 : tlgduey8[ayfnwr1v-1+ (yq6lorbx-1)* *ftnjamu2]; lfu2qhid += ufgqj9ck[ayfnwr1v-1] * (tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] * log(xd4mybgj/t8hwvalr[yq6lorbx-1 + (ayfnwr1v-1) * *afpc0kns]) + (tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] + hdqsx7bk) * log((t8hwvalr[yq6lorbx-1 + (ayfnwr1v-1) * *afpc0kns ] + hdqsx7bk) / (hdqsx7bk + tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2]))); } jxacz5qu[yq6lorbx] = 2.0e0 * (lfu2qhid - prev_lfu2qhid); prev_lfu2qhid = lfu2qhid; } } else { anopu9vi = 34.0e0; for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { if (m0ibglfx[2*yq6lorbx-1 + (ayfnwr1v-1)* *wy1vqfzu] > anopu9vi) { hdqsx7bk = exp(anopu9vi); lbgwvp3q = 1; } else if (m0ibglfx[2*yq6lorbx-1 + (ayfnwr1v-1)* *wy1vqfzu] < -anopu9vi) { hdqsx7bk = exp(-anopu9vi); lbgwvp3q = 1; } else { hdqsx7bk = exp(m0ibglfx[2*yq6lorbx-1 + (ayfnwr1v-1)* *wy1vqfzu]); lbgwvp3q = 0; } if (lbgwvp3q) { uqnkc6zg = hofjnx2e = 0.0e0; } else { uqnkc6zg = fvlmz9iyC_tldz5ion(hdqsx7bk + tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2]); hofjnx2e = fvlmz9iyC_tldz5ion(hdqsx7bk); } txlvcey5 = fvlmz9iyC_tldz5ion(1.0e0 + tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2]); xd4mybgj = hdqsx7bk * log(hdqsx7bk / (hdqsx7bk + t8hwvalr[yq6lorbx-1 + (ayfnwr1v-1) * *afpc0kns])) + uqnkc6zg - hofjnx2e - txlvcey5; if (tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] > 0.0e0) { xd4mybgj += tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] * log(t8hwvalr[yq6lorbx-1 + (ayfnwr1v-1) * *afpc0kns] / (hdqsx7bk + t8hwvalr[yq6lorbx-1 + (ayfnwr1v-1) * *afpc0kns])); } lfu2qhid += ufgqj9ck[ayfnwr1v-1] * xd4mybgj; } jxacz5qu[yq6lorbx] = 2.0 * (-0.5 * lfu2qhid + 0.5 * prev_lfu2qhid); prev_lfu2qhid = lfu2qhid; } lfu2qhid *= (-0.5); } } if (*qfx3vhct == 8) { fpdlcqk9tlgduey8 = tlgduey8; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { fpdlcqk9t8hwvalr = t8hwvalr + yq6lorbx-1; fpdlcqk9ufgqj9ck = ufgqj9ck; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { xd4mybgj = *fpdlcqk9tlgduey8++ - *fpdlcqk9t8hwvalr; lfu2qhid += *fpdlcqk9ufgqj9ck++ * pow(xd4mybgj, (double) 2.0); fpdlcqk9t8hwvalr += *afpc0kns; } jxacz5qu[yq6lorbx] = 2.0e0 * (lfu2qhid - prev_lfu2qhid); prev_lfu2qhid = lfu2qhid; } } } else { fpdlcqk9tlgduey8 = tlgduey8 + (*hj3ftvzu-1) * *ftnjamu2; fpdlcqk9t8hwvalr = t8hwvalr + *hj3ftvzu-1; fpdlcqk9ufgqj9ck = ufgqj9ck; if (*qfx3vhct == 1 || *qfx3vhct == 4) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { ivqk2ywz = *fpdlcqk9tlgduey8 > 0.0 ? *fpdlcqk9tlgduey8 * log(*fpdlcqk9tlgduey8) : 0.0; if (*fpdlcqk9tlgduey8 < 1.0e0) ivqk2ywz += (1.0e0 - *fpdlcqk9tlgduey8) * log(1.0e0 - *fpdlcqk9tlgduey8); xd4mybgj = *fpdlcqk9t8hwvalr * (1.0e0 - *fpdlcqk9t8hwvalr); if (xd4mybgj < *dn3iasxug) { smmu = *fpdlcqk9t8hwvalr; qvd7yktm = *fpdlcqk9tlgduey8 * ((smmu < *dn3iasxug) ? *vsoihn1r : log(smmu)); afwp5imx = 1.0e0 - smmu; qvd7yktm += (afwp5imx < *dn3iasxug ? *vsoihn1r : log(afwp5imx)) * (1.0 - *fpdlcqk9tlgduey8); } else { qvd7yktm = *fpdlcqk9tlgduey8 * log( *fpdlcqk9t8hwvalr) + (1.0 - *fpdlcqk9tlgduey8) * log(1.0e0 - *fpdlcqk9t8hwvalr); } lfu2qhid += *fpdlcqk9ufgqj9ck++ * (ivqk2ywz - qvd7yktm); fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9tlgduey8++; } } if (*qfx3vhct == 2) { if (*afpc0kns != *wy1vqfzu) Rprintf("Error: *afpc0kns != *wy1vqfzu in C_shjlwft5\n"); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { xd4mybgj = *fpdlcqk9tlgduey8 > 0.0e0 ? 
*fpdlcqk9t8hwvalr - *fpdlcqk9tlgduey8 + *fpdlcqk9tlgduey8 * log(*fpdlcqk9tlgduey8 / *fpdlcqk9t8hwvalr) : *fpdlcqk9t8hwvalr - *fpdlcqk9tlgduey8; lfu2qhid += *fpdlcqk9ufgqj9ck++ * xd4mybgj; fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9tlgduey8++; } } if (*qfx3vhct == 5) { fpdlcqk9tlgduey8 = tlgduey8 + (*hj3ftvzu-1) * *ftnjamu2; fpdlcqk9t8hwvalr = t8hwvalr + *hj3ftvzu-1; fpdlcqk9ufgqj9ck = ufgqj9ck; fpdlcqk9m0ibglfx = m0ibglfx + 2 * *hj3ftvzu-1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { jtnbu2hz = exp(*fpdlcqk9m0ibglfx); uqnkc6zg = fvlmz9iyC_tldz5ion(jtnbu2hz); xd4mybgj = *fpdlcqk9tlgduey8 > 0.0 ? (jtnbu2hz - 1.0e0) * log(*fpdlcqk9tlgduey8) + jtnbu2hz * (log(jtnbu2hz) - *fpdlcqk9tlgduey8 / *fpdlcqk9t8hwvalr - log(*fpdlcqk9t8hwvalr)) - uqnkc6zg : -1000.0e0; xd4mybgj = -xd4mybgj; lfu2qhid += *fpdlcqk9ufgqj9ck++ * xd4mybgj; fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9m0ibglfx += *wy1vqfzu; fpdlcqk9tlgduey8++; } } if (*qfx3vhct == 3) { if (*dqk5muto == 0) { anopu9vi = 34.0e0; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { if (m0ibglfx[2 * *hj3ftvzu -1 + (ayfnwr1v-1) * *wy1vqfzu] > anopu9vi) { hdqsx7bk = exp(anopu9vi); lbgwvp3q = 1; } else if (m0ibglfx[2 * *hj3ftvzu -1 + (ayfnwr1v-1) * *wy1vqfzu] < -anopu9vi) { hdqsx7bk = exp(-anopu9vi); lbgwvp3q = 1; } else { hdqsx7bk = exp(m0ibglfx[2* *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu]); lbgwvp3q = 0; } xd4mybgj = (tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] < 1.0e0) ? 1.0e0 : tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2]; lfu2qhid += ufgqj9ck[ayfnwr1v-1] * (tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] * log(xd4mybgj/t8hwvalr[*hj3ftvzu-1 + (ayfnwr1v-1) * *afpc0kns]) + (tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] + hdqsx7bk) * log((t8hwvalr[*hj3ftvzu-1 + (ayfnwr1v-1) * *afpc0kns] + hdqsx7bk) / (hdqsx7bk+tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2]))); } } else { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { hdqsx7bk = exp(m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu]); uqnkc6zg = fvlmz9iyC_tldz5ion(hdqsx7bk + tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2]); hofjnx2e = fvlmz9iyC_tldz5ion(hdqsx7bk); txlvcey5 = fvlmz9iyC_tldz5ion(1.0e0 + tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2]); xd4mybgj = hdqsx7bk * log(hdqsx7bk / (hdqsx7bk + t8hwvalr[*hj3ftvzu-1 + (ayfnwr1v-1) * *afpc0kns])) + uqnkc6zg - hofjnx2e - txlvcey5; if (tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] > 0.0e0) { xd4mybgj += tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] * log(t8hwvalr[*hj3ftvzu-1 + (ayfnwr1v-1) * *afpc0kns] / (hdqsx7bk + t8hwvalr[*hj3ftvzu-1 + (ayfnwr1v-1) * *afpc0kns])); } lfu2qhid += ufgqj9ck[ayfnwr1v-1] * xd4mybgj; } lfu2qhid *= (-0.5e0); } } if (*qfx3vhct == 8) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { lfu2qhid += *fpdlcqk9ufgqj9ck++ * pow(*fpdlcqk9tlgduey8++ - *fpdlcqk9t8hwvalr, (double) 2.0); fpdlcqk9t8hwvalr += *afpc0kns; } } } *jxacz5qu = 2.0e0 * lfu2qhid; } void yiumjq3nflncwkfq76(double lncwkfq7[], double w8znmyce[], int *ftnjamu2, int *br5ovgcj, int *xwdf5ltg, int *qfx3vhct) { int ayfnwr1v, hpmwnav2; // sedf7mxb = 1; double *fpdlcqk9w8znmyce, *fpdlcqk9lncwkfq7; fpdlcqk9w8znmyce = w8znmyce; fpdlcqk9lncwkfq7 = lncwkfq7; if (*qfx3vhct == 3 || *qfx3vhct == 5) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = 1.0e0; *fpdlcqk9w8znmyce++ = 0.0e0; } for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = 0.0e0; *fpdlcqk9w8znmyce++ = 1.0e0; } for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { 
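// Annotation (editor's note; not in the original source): this helper
// evidently builds the working model matrix w8znmyce for the CQO
// regression.  For two-linear-predictor families (*qfx3vhct == 3 or 5)
// two interleaved intercept columns, one per eta, were filled just
// above; the statements below copy each latent-variable column of
// lncwkfq7 into the eta1 rows, with zeros in the eta2 rows.  In the
// one-eta branch further down, a single intercept column of ones is
// written, followed by the plain latent-variable columns.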
*fpdlcqk9w8znmyce++ = *fpdlcqk9lncwkfq7++; *fpdlcqk9w8znmyce++ = 0.0e0; } } } else { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = 1.0e0; } if (*br5ovgcj != *ftnjamu2) Rprintf("Error: *br5ovgcj != *ftnjamu2 in C_flncwkfq76\n"); for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = *fpdlcqk9lncwkfq7++; } } } } void yiumjq3nflncwkfq71(double lncwkfq7[], double w8znmyce[], int *ftnjamu2, int *xwdf5ltg, int *qfx3vhct, double vm4xjosb[], int *br5ovgcj, int *xlpjcg3s, double kifxa0he[], int *yru9olks, int *unhycz0e) { int i0spbklx, ayfnwr1v, hpmwnav2, // sedf7mxb = *xwdf5ltg + 1, hyqwtp6i = *xwdf5ltg * (*xwdf5ltg + 1) / 2; double *fpdlcqk9lncwkfq7, *fpdlcqk9lncwkfq71, *fpdlcqk9lncwkfq72, *fpdlcqk9w8znmyce, *fpdlcqk9vm4xjosb, *fpdlcqk9kifxa0he; int *wkumc9idtgiyxdw1, *wkumc9iddufozmt7; wkumc9idtgiyxdw1 = Calloc(hyqwtp6i, int); wkumc9iddufozmt7 = Calloc(hyqwtp6i, int); fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw1, wkumc9iddufozmt7, xwdf5ltg); fpdlcqk9w8znmyce = w8znmyce; fpdlcqk9lncwkfq7 = fpdlcqk9lncwkfq71 = fpdlcqk9lncwkfq72 = lncwkfq7; if (*qfx3vhct == 3 || *qfx3vhct == 5) { // ggg if (*br5ovgcj != 2 * *ftnjamu2) //Rprinf Rprintf("Error: *br5ovgcj != 2 * *ftnjamu2 in C_flncwkfq71\n"); for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = *fpdlcqk9lncwkfq7++; *fpdlcqk9w8znmyce++ = 0.0e0; } } if (*unhycz0e == 0) { for (i0spbklx = 1; i0spbklx <= hyqwtp6i; i0spbklx++) { fpdlcqk9lncwkfq71 = lncwkfq7 + (wkumc9idtgiyxdw1[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9lncwkfq72 = lncwkfq7 + (wkumc9iddufozmt7[i0spbklx-1]-1) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = *fpdlcqk9lncwkfq71++ * *fpdlcqk9lncwkfq72++; *fpdlcqk9w8znmyce++ = 0.0e0; } } } else { fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) *fpdlcqk9vm4xjosb++ = 0.0; fpdlcqk9lncwkfq7 = lncwkfq7; for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { *fpdlcqk9vm4xjosb += pow(*fpdlcqk9lncwkfq7++, (double) 2.0); fpdlcqk9vm4xjosb++; } } fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { *fpdlcqk9vm4xjosb *= (-0.50e0); fpdlcqk9vm4xjosb++; } } } else { // ggg and hhh for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = *fpdlcqk9lncwkfq7++; } } if (*unhycz0e == 0) { for (i0spbklx = 1; i0spbklx <= hyqwtp6i; i0spbklx++) { fpdlcqk9lncwkfq71 = lncwkfq7 + (wkumc9idtgiyxdw1[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9lncwkfq72 = lncwkfq7 + (wkumc9iddufozmt7[i0spbklx-1]-1) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = *fpdlcqk9lncwkfq71++ * *fpdlcqk9lncwkfq72++; } } } else { fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) *fpdlcqk9vm4xjosb++ = 0.0; fpdlcqk9lncwkfq7 = lncwkfq7; for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { *fpdlcqk9vm4xjosb += pow(*fpdlcqk9lncwkfq7++, (double) 2.0); fpdlcqk9vm4xjosb++; } } fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { *fpdlcqk9vm4xjosb *= (-0.50e0); fpdlcqk9vm4xjosb++; } } } // hhh if (*yru9olks > 0) { if (*qfx3vhct == 3 || *qfx3vhct == 5) { // kkk for (ayfnwr1v = 1; ayfnwr1v <= 
*ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = 1.0e0; *fpdlcqk9w8znmyce++ = 0.0e0; } for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = 0.0e0; *fpdlcqk9w8znmyce++ = 1.0e0; } if (*yru9olks > 1) { fpdlcqk9kifxa0he = kifxa0he; // + (i0spbklx-1) * *ftnjamu2; for (i0spbklx = 2; i0spbklx <= *yru9olks; i0spbklx++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = *fpdlcqk9kifxa0he++; *fpdlcqk9w8znmyce++ = 0.0e0; } } } } else { // kkk and iii fpdlcqk9kifxa0he = kifxa0he; // + (i0spbklx-1) * *ftnjamu2; for (i0spbklx = 1; i0spbklx <= *yru9olks; i0spbklx++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9w8znmyce++ = *fpdlcqk9kifxa0he++; } } } // iii } // if (*yru9olks > 0) Free(wkumc9idtgiyxdw1); Free(wkumc9iddufozmt7); } void yiumjq3nflncwkfq72(double lncwkfq7[], double w8znmyce[], int *ftnjamu2, int *wy1vqfzu, int *br5ovgcj, int *xwdf5ltg, int *qfx3vhct, int *afpc0kns, int *fmzq7aob, int *eu3oxvyb, int *unhycz0e, double vm4xjosb[]) { int i0spbklx, ayfnwr1v, yq6lorbx, gp1jxzuh, hpmwnav2, sedf7mxb = 0, hyqwtp6i = *xwdf5ltg * (*xwdf5ltg + 1) / 2; double uqnkc6zg, *fpdlcqk9lncwkfq7, *fpdlcqk9lncwkfq71, *fpdlcqk9lncwkfq72, *fpdlcqk9w8znmyce, *fpdlcqk9vm4xjosb; int *wkumc9idtgiyxdw1, *wkumc9iddufozmt7; wkumc9idtgiyxdw1 = Calloc(hyqwtp6i, int); wkumc9iddufozmt7 = Calloc(hyqwtp6i, int); fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw1, wkumc9iddufozmt7, xwdf5ltg); fpdlcqk9w8znmyce = w8znmyce; fpdlcqk9lncwkfq7 = lncwkfq7; for (gp1jxzuh = 1; gp1jxzuh <= *eu3oxvyb; gp1jxzuh++) { for (ayfnwr1v = 1; ayfnwr1v <= *br5ovgcj; ayfnwr1v++) *fpdlcqk9w8znmyce++ = 0.0e0; } fpdlcqk9w8znmyce = w8znmyce; if (*qfx3vhct == 3 || *qfx3vhct == 5) { if (*br5ovgcj != 2 * *ftnjamu2) //Rprinf Rprintf("Error: *br5ovgcj != 2 * *ftnjamu2 in C_flncwkfq72\n"); for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { fpdlcqk9w8znmyce = w8znmyce + sedf7mxb * *br5ovgcj; fpdlcqk9lncwkfq7 = lncwkfq7 + (hpmwnav2-1) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { *fpdlcqk9w8znmyce = *fpdlcqk9lncwkfq7; fpdlcqk9w8znmyce += 2 + *br5ovgcj; } fpdlcqk9lncwkfq7++; fpdlcqk9w8znmyce -= *afpc0kns * *br5ovgcj; // fixed@20100406 } sedf7mxb += *afpc0kns; } } else { for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { fpdlcqk9w8znmyce = w8znmyce + sedf7mxb * *br5ovgcj; fpdlcqk9lncwkfq7 = lncwkfq7 + (hpmwnav2-1) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9w8znmyce++ = *fpdlcqk9lncwkfq7; fpdlcqk9w8znmyce += *br5ovgcj; } fpdlcqk9lncwkfq7++; fpdlcqk9w8znmyce -= *wy1vqfzu * *br5ovgcj; // fixed@20100406 } sedf7mxb += *wy1vqfzu; } } if (*fmzq7aob == 0) { if (*qfx3vhct == 3 || *qfx3vhct == 5) { for (i0spbklx = 1; i0spbklx <= hyqwtp6i; i0spbklx++) { fpdlcqk9lncwkfq71 = lncwkfq7 + (wkumc9idtgiyxdw1[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9lncwkfq72 = lncwkfq7 + (wkumc9iddufozmt7[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9w8znmyce = w8znmyce + sedf7mxb * *br5ovgcj; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { uqnkc6zg = *fpdlcqk9lncwkfq71++ * *fpdlcqk9lncwkfq72++; for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { *fpdlcqk9w8znmyce = uqnkc6zg; fpdlcqk9w8znmyce += 2 + *br5ovgcj; } fpdlcqk9w8znmyce -= *afpc0kns * *br5ovgcj; // fixed@20100406 } sedf7mxb += *afpc0kns; } } else { for (i0spbklx = 1; i0spbklx <= hyqwtp6i; i0spbklx++) { fpdlcqk9lncwkfq71 = lncwkfq7 + (wkumc9idtgiyxdw1[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9lncwkfq72 = lncwkfq7 + 
(wkumc9iddufozmt7[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9w8znmyce = w8znmyce + sedf7mxb * *br5ovgcj; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { uqnkc6zg = *fpdlcqk9lncwkfq71++ * *fpdlcqk9lncwkfq72++; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9w8znmyce++ = uqnkc6zg; fpdlcqk9w8znmyce += *br5ovgcj; } fpdlcqk9w8znmyce -= *wy1vqfzu * *br5ovgcj; // fixed@20100406 } sedf7mxb += *wy1vqfzu; } } } else { if (*unhycz0e == 1) { fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) *fpdlcqk9vm4xjosb++ = 0.0; fpdlcqk9lncwkfq7 = lncwkfq7; for (hpmwnav2 = 1; hpmwnav2 <= *xwdf5ltg; hpmwnav2++) { fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { *fpdlcqk9vm4xjosb += pow(*fpdlcqk9lncwkfq7++, (double) 2.0); fpdlcqk9vm4xjosb++; } } fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { *fpdlcqk9vm4xjosb *= (-0.50e0); fpdlcqk9vm4xjosb++; } } else { if (*qfx3vhct == 3 || *qfx3vhct == 5) { for (i0spbklx = 1; i0spbklx <= hyqwtp6i; i0spbklx++) { fpdlcqk9lncwkfq71 = lncwkfq7 + (wkumc9idtgiyxdw1[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9lncwkfq72 = lncwkfq7 + (wkumc9iddufozmt7[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9w8znmyce = w8znmyce + (sedf7mxb+i0spbklx-1) * *br5ovgcj; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { uqnkc6zg = *fpdlcqk9lncwkfq71++ * *fpdlcqk9lncwkfq72++; for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { *fpdlcqk9w8znmyce++ = uqnkc6zg; fpdlcqk9w8znmyce++; } } } sedf7mxb += hyqwtp6i; } else { for (i0spbklx = 1; i0spbklx <= hyqwtp6i; i0spbklx++) { fpdlcqk9lncwkfq71 = lncwkfq7 + (wkumc9idtgiyxdw1[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9lncwkfq72 = lncwkfq7 + (wkumc9iddufozmt7[i0spbklx-1]-1) * *ftnjamu2; fpdlcqk9w8znmyce = w8znmyce + (sedf7mxb+i0spbklx-1) * *br5ovgcj; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { uqnkc6zg = *fpdlcqk9lncwkfq71++ * *fpdlcqk9lncwkfq72++; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) *fpdlcqk9w8znmyce++ = uqnkc6zg; } } sedf7mxb += hyqwtp6i; } } } Free(wkumc9idtgiyxdw1); Free(wkumc9iddufozmt7); } void yiumjq3nietam6(double tlgduey8[], double m0ibglfx[], double y7sdgtqi[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *qfx3vhct, int *hj3ftvzu, double ufgqj9ck[], int *wr0lbopv) { int ayfnwr1v; double gyuq8dex, g2vwexykp, qa8ltuhj, vogkfwt8 = 0.0e0, msrdjh5f = 0.0e0, kwvo4ury, cpz4fgkx, tad5vhsu, khl0iysgk, myoffset = 1.0 / 32.0; double *fpdlcqk9tlgduey8, *fpdlcqk9m0ibglfx, *fpdlcqk9m0ibglfx1, *fpdlcqk9m0ibglfx2, *fpdlcqk9ufgqj9ck; fpdlcqk9m0ibglfx = fpdlcqk9m0ibglfx1 = fpdlcqk9m0ibglfx2 = &tad5vhsu; gyuq8dex = 1.0; fpdlcqk9tlgduey8 = tlgduey8 + (*hj3ftvzu-1) * *ftnjamu2; fpdlcqk9ufgqj9ck = ufgqj9ck; if (*qfx3vhct == 3 || *qfx3vhct == 5) { fpdlcqk9m0ibglfx1 = m0ibglfx + 2 * *hj3ftvzu-1; fpdlcqk9m0ibglfx2 = m0ibglfx + 2 * *hj3ftvzu-2; } else { fpdlcqk9m0ibglfx = m0ibglfx + *hj3ftvzu-1; } if (*qfx3vhct == 1 || *qfx3vhct == 4 || *qfx3vhct == 3 || *qfx3vhct == 5) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { msrdjh5f += *fpdlcqk9ufgqj9ck; vogkfwt8 += *fpdlcqk9tlgduey8++ * *fpdlcqk9ufgqj9ck++; } gyuq8dex = vogkfwt8 / msrdjh5f; fpdlcqk9tlgduey8 = tlgduey8 + (*hj3ftvzu-1) * *ftnjamu2; } if (*qfx3vhct == 1) { yiumjq3ng2vwexyk9(&gyuq8dex, &g2vwexykp); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx = g2vwexykp; fpdlcqk9m0ibglfx += *wy1vqfzu; } } if (*qfx3vhct == 2) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx = log(*fpdlcqk9tlgduey8++ + myoffset); fpdlcqk9m0ibglfx += 
*wy1vqfzu; } } if (*qfx3vhct == 4) { yiumjq3nbewf1pzv9(&gyuq8dex, &qa8ltuhj); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx = qa8ltuhj; fpdlcqk9m0ibglfx += *wy1vqfzu; } } if (*qfx3vhct == 5) { if (*wr0lbopv == 1 || *wr0lbopv == 2) { kwvo4ury = *wr0lbopv == 1 ? log(gyuq8dex + myoffset) : log((6.0 / 8.0) * gyuq8dex); cpz4fgkx = log(y7sdgtqi[3 + *afpc0kns + *hj3ftvzu -1] + myoffset); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx2 = kwvo4ury; *fpdlcqk9m0ibglfx1 = cpz4fgkx; fpdlcqk9m0ibglfx1 += *wy1vqfzu; fpdlcqk9m0ibglfx2 += *wy1vqfzu; } } else { cpz4fgkx = log(y7sdgtqi[3 + *afpc0kns + *hj3ftvzu -1] + myoffset); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx2 = log(*fpdlcqk9tlgduey8++ + myoffset); *fpdlcqk9m0ibglfx1 = cpz4fgkx; fpdlcqk9m0ibglfx1 += *wy1vqfzu; fpdlcqk9m0ibglfx2 += *wy1vqfzu; } } } if (*qfx3vhct == 3) { if (*wr0lbopv == 1) { kwvo4ury = log(gyuq8dex + myoffset); cpz4fgkx = log(y7sdgtqi[3 + *hj3ftvzu -1] + myoffset); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx2 = kwvo4ury; *fpdlcqk9m0ibglfx1 = cpz4fgkx; fpdlcqk9m0ibglfx1 += *wy1vqfzu; fpdlcqk9m0ibglfx2 += *wy1vqfzu; } } else if (*wr0lbopv == 2) { kwvo4ury = log(gyuq8dex + myoffset); khl0iysgk = y7sdgtqi[3 + *hj3ftvzu -1]; cpz4fgkx = log(khl0iysgk); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { tad5vhsu = *fpdlcqk9tlgduey8 - gyuq8dex; *fpdlcqk9m0ibglfx2 = (tad5vhsu < 3.0 * gyuq8dex) ? kwvo4ury : log(sqrt(*fpdlcqk9tlgduey8)); *fpdlcqk9m0ibglfx1 = cpz4fgkx; fpdlcqk9m0ibglfx1 += *wy1vqfzu; fpdlcqk9m0ibglfx2 += *wy1vqfzu; fpdlcqk9tlgduey8++; } } else if (*wr0lbopv == 3) { kwvo4ury = log(gyuq8dex + myoffset); khl0iysgk = y7sdgtqi[3 + *hj3ftvzu -1]; cpz4fgkx = log(khl0iysgk); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { tad5vhsu = *fpdlcqk9tlgduey8 - gyuq8dex; if (tad5vhsu > gyuq8dex) { *fpdlcqk9m0ibglfx2 = log(0.5 * (*fpdlcqk9tlgduey8 + gyuq8dex)); *fpdlcqk9m0ibglfx1 = log(khl0iysgk / (tad5vhsu / gyuq8dex)); } else if (*fpdlcqk9tlgduey8 < (gyuq8dex / 4.0)) { *fpdlcqk9m0ibglfx2 = log(gyuq8dex / 4.0); *fpdlcqk9m0ibglfx1 = cpz4fgkx; } else { *fpdlcqk9m0ibglfx2 = kwvo4ury; *fpdlcqk9m0ibglfx1 = cpz4fgkx; } fpdlcqk9m0ibglfx1 += *wy1vqfzu; fpdlcqk9m0ibglfx2 += *wy1vqfzu; fpdlcqk9tlgduey8++; } } else { cpz4fgkx = log(y7sdgtqi[3 + *hj3ftvzu - 1]); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx2 = log(*fpdlcqk9tlgduey8++ + myoffset); *fpdlcqk9m0ibglfx1 = cpz4fgkx; fpdlcqk9m0ibglfx1 += *wy1vqfzu; fpdlcqk9m0ibglfx2 += *wy1vqfzu; } } } if (*qfx3vhct == 8) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx = *fpdlcqk9tlgduey8++; fpdlcqk9m0ibglfx += *wy1vqfzu; } } } void yiumjq3ndlgpwe0c(double tlgduey8[], double ufgqj9ck[], double m0ibglfx[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double *rsynp1go, double *dn3iasxug, double *uaf2xgqy, int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *hj3ftvzu, int *qfx3vhct, int *zjkrtol8, int *unhycz0e, double vm4xjosb[]) { int ayfnwr1v, lbgwvp3q = -7; //qfx3vhct # kvowz9ht double xd4mybgja, xd4mybgjb, xd4mybgjc, anopu9vi; double *fpdlcqk9m0ibglfx, *fpdlcqk9m0ibglfx1, *fpdlcqk9m0ibglfx2, *fpdlcqk9t8hwvalr, *fpdlcqk9vm4xjosb, *fpdlcqk9wpuarq2m, *fpdlcqk9ufgqj9ck, *fpdlcqk9rbne6ouj, *fpdlcqk9tlgduey8, *fpdlcqk9ghz9vuba; double hdqsx7bk, dkdeta, dldk, ux3nadiw, ed2ldk2, n2kersmx; double bzmd6ftvmat[1], kkmat[1], nm0eljqk[1]; int dvhw1ulq, sguwj9ty, pqneb2ra = 1; double 
jtnbu2hz, uqnkc6zgd, uqnkc6zgt, dldshape, fvn3iasxug, xk7dnvei; int okobr6tcex; double tmp1; fpdlcqk9m0ibglfx = fpdlcqk9m0ibglfx1 = fpdlcqk9m0ibglfx2 = &xd4mybgja; lbgwvp3q += 7; lbgwvp3q *= lbgwvp3q; n2kersmx = 0.990e0; n2kersmx = 0.995e0; fpdlcqk9m0ibglfx = m0ibglfx + *hj3ftvzu-1; if (*qfx3vhct == 3 || *qfx3vhct == 5) { fpdlcqk9m0ibglfx1 = m0ibglfx + 2 * *hj3ftvzu-1; fpdlcqk9m0ibglfx2 = m0ibglfx + 2 * *hj3ftvzu-2; } fpdlcqk9t8hwvalr = t8hwvalr + *hj3ftvzu-1; fpdlcqk9vm4xjosb = vm4xjosb; fpdlcqk9wpuarq2m = wpuarq2m + *hj3ftvzu-1; fpdlcqk9ufgqj9ck = ufgqj9ck; fpdlcqk9rbne6ouj = rbne6ouj + (*hj3ftvzu-1) * *ftnjamu2; fpdlcqk9tlgduey8 = tlgduey8 + (*hj3ftvzu-1) * *ftnjamu2; fpdlcqk9ghz9vuba = ghz9vuba + (*hj3ftvzu-1) * *ftnjamu2; if (*qfx3vhct == 1) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { xd4mybgja = *fpdlcqk9t8hwvalr * (1.0e0 - *fpdlcqk9t8hwvalr); xd4mybgjb = xd4mybgja * *fpdlcqk9ufgqj9ck++; if (xd4mybgja < *dn3iasxug) xd4mybgja = *dn3iasxug; if (xd4mybgjb < *dn3iasxug) { xd4mybgjb = *dn3iasxug; *fpdlcqk9wpuarq2m = *uaf2xgqy; } else { *fpdlcqk9wpuarq2m = sqrt(xd4mybgjb); } *fpdlcqk9rbne6ouj++ = xd4mybgjb; *fpdlcqk9ghz9vuba++ = *fpdlcqk9m0ibglfx + (*fpdlcqk9tlgduey8++ - *fpdlcqk9t8hwvalr) / xd4mybgja; fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9wpuarq2m += *npjlv3mr; fpdlcqk9m0ibglfx += *wy1vqfzu; } } if (*qfx3vhct == 2) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { xd4mybgja = *fpdlcqk9t8hwvalr; xd4mybgjb = xd4mybgja * *fpdlcqk9ufgqj9ck++; if (xd4mybgjb < *dn3iasxug) { xd4mybgjb = *dn3iasxug; *fpdlcqk9wpuarq2m = *uaf2xgqy; } else { *fpdlcqk9wpuarq2m = sqrt(xd4mybgjb); } *fpdlcqk9rbne6ouj = xd4mybgjb; if (*fpdlcqk9tlgduey8 > 0.0e0) { xd4mybgjc = xd4mybgja; if (xd4mybgjc < *dn3iasxug) xd4mybgjc = *dn3iasxug; *fpdlcqk9ghz9vuba = *fpdlcqk9m0ibglfx + (*fpdlcqk9tlgduey8 - xd4mybgjc) / xd4mybgjc; } else { *fpdlcqk9ghz9vuba = *fpdlcqk9m0ibglfx - 1.0e0; } fpdlcqk9m0ibglfx += *wy1vqfzu; fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9wpuarq2m += *npjlv3mr; fpdlcqk9rbne6ouj++; fpdlcqk9tlgduey8++; fpdlcqk9ghz9vuba++; } } if (*qfx3vhct == 4) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { if (*fpdlcqk9t8hwvalr < *dn3iasxug || *fpdlcqk9t8hwvalr > 1.0e0 - *dn3iasxug) { xd4mybgja = *dn3iasxug; xd4mybgjb = xd4mybgja * *fpdlcqk9ufgqj9ck; if (xd4mybgjb < *dn3iasxug) { xd4mybgjb = *dn3iasxug; *fpdlcqk9wpuarq2m = *uaf2xgqy; } else { *fpdlcqk9wpuarq2m = sqrt(xd4mybgjb); } *fpdlcqk9rbne6ouj = xd4mybgjb; *fpdlcqk9ghz9vuba = *fpdlcqk9m0ibglfx + (*fpdlcqk9tlgduey8 - *fpdlcqk9t8hwvalr) / xd4mybgja; } else { xd4mybgja = -(1.0e0 - *fpdlcqk9t8hwvalr) * log(1.0e0 - *fpdlcqk9t8hwvalr); if (xd4mybgja < *dn3iasxug) { xd4mybgja = *dn3iasxug; } xd4mybgjb = -xd4mybgja * *fpdlcqk9ufgqj9ck * log(1.0e0 - *fpdlcqk9t8hwvalr) / *fpdlcqk9t8hwvalr; if (xd4mybgjb < *dn3iasxug) { xd4mybgjb = *dn3iasxug; } *fpdlcqk9rbne6ouj = xd4mybgjb; *fpdlcqk9wpuarq2m = sqrt(xd4mybgjb); *fpdlcqk9ghz9vuba = *fpdlcqk9m0ibglfx + (*fpdlcqk9tlgduey8 - *fpdlcqk9t8hwvalr) / xd4mybgja; } fpdlcqk9m0ibglfx += *wy1vqfzu; fpdlcqk9t8hwvalr += *afpc0kns; fpdlcqk9wpuarq2m += *npjlv3mr; fpdlcqk9ufgqj9ck++; fpdlcqk9rbne6ouj++; fpdlcqk9tlgduey8++; fpdlcqk9ghz9vuba++; } } if (*qfx3vhct == 5) { fvn3iasxug = 1.0e-20; anopu9vi = 34.0e0; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { if (m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu] > anopu9vi) { jtnbu2hz = exp(anopu9vi); lbgwvp3q = 1; } else if (m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu] < -anopu9vi) { jtnbu2hz = exp(-anopu9vi); lbgwvp3q = 1; } else { jtnbu2hz = 
exp(m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu]); lbgwvp3q = 0; } tyee_C_vdgam1(&jtnbu2hz, &uqnkc6zgd, &okobr6tcex); if (okobr6tcex != 1) { Rprintf("Error 1 in dlgpwe0c okobr6tcex=%d. Ploughing on.\n", okobr6tcex); } xk7dnvei = t8hwvalr[*hj3ftvzu-1 + (ayfnwr1v-1) * *afpc0kns]; if (xk7dnvei < fvn3iasxug) { xk7dnvei = fvn3iasxug; } dldshape = log(tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2]) + log(jtnbu2hz) - log(xk7dnvei) + 1.0e0 - uqnkc6zgd - tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] / xk7dnvei; tyee_C_vtgam1(&jtnbu2hz, &uqnkc6zgt, &okobr6tcex); if (okobr6tcex != 1) { Rprintf("Error 2 in dlgpwe0c okobr6tcex=%d. Ploughing on.\n", okobr6tcex); } rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2] = ufgqj9ck[ayfnwr1v-1] * jtnbu2hz; xd4mybgja = jtnbu2hz * uqnkc6zgt - 1.0e0; rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2] = ufgqj9ck[ayfnwr1v-1] * jtnbu2hz * xd4mybgja; if (rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2] < *dn3iasxug) { rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2] = *dn3iasxug; wpuarq2m[2 * *hj3ftvzu-2 + (ayfnwr1v-1) * *npjlv3mr] = *uaf2xgqy; } else { wpuarq2m[2 * *hj3ftvzu-2 + (ayfnwr1v-1) * *npjlv3mr] = sqrt(rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2]); } if (rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2] < *dn3iasxug) { rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2] = *dn3iasxug; wpuarq2m[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *npjlv3mr] = *uaf2xgqy; } else { wpuarq2m[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *npjlv3mr] = sqrt(rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2]); } if (xd4mybgja < fvn3iasxug) { xd4mybgja = fvn3iasxug; } ghz9vuba[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2] = m0ibglfx[2 * *hj3ftvzu-2 + (ayfnwr1v-1) * *wy1vqfzu] + tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] / xk7dnvei - 1.0e0; ghz9vuba[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2] = m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu] + dldshape / xd4mybgja; } } if (*qfx3vhct == 3) { anopu9vi = 34.0e0; fvn3iasxug = 1.0e-20; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { if (m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu] > anopu9vi) { hdqsx7bk = exp(anopu9vi); lbgwvp3q = 1; } else if (m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu] < -anopu9vi) { hdqsx7bk = exp(-anopu9vi); lbgwvp3q = 1; } else { hdqsx7bk = exp(m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu]); lbgwvp3q = 0; } xk7dnvei = t8hwvalr[*hj3ftvzu-1 + (ayfnwr1v-1) * *afpc0kns]; if (xk7dnvei < fvn3iasxug) { xk7dnvei = fvn3iasxug; } tmp1 = tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] + hdqsx7bk; tyee_C_vdgam1(&tmp1, &xd4mybgja, &okobr6tcex); if (okobr6tcex != 1) { Rprintf("error in dlgpwe0c okobr6tcex 3: %3d \n", okobr6tcex); } tyee_C_vdgam1(&hdqsx7bk, &xd4mybgjb, &okobr6tcex); if (okobr6tcex != 1) { Rprintf("error in dlgpwe0c okobr6tcex 4: %3d \n", okobr6tcex); } dldk = xd4mybgja - xd4mybgjb - (tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] + hdqsx7bk) / (xk7dnvei + hdqsx7bk) + 1.0 + log(hdqsx7bk / (xk7dnvei + hdqsx7bk)); dkdeta = hdqsx7bk; kkmat[0] = hdqsx7bk; nm0eljqk[0] = xk7dnvei; sguwj9ty = 5000; fvlmz9iyC_enbin9(bzmd6ftvmat, kkmat, nm0eljqk, &n2kersmx, &pqneb2ra, &dvhw1ulq, &pqneb2ra, &ux3nadiw, rsynp1go, &sguwj9ty); if (dvhw1ulq != 1) { *zjkrtol8 = 5; Rprintf("Error. 
Exiting enbin9; dvhw1ulq is %d\n", dvhw1ulq); return; } ed2ldk2 = -bzmd6ftvmat[0] - 1.0e0 / hdqsx7bk + 1.0e0 / (hdqsx7bk + xk7dnvei); rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2] = ufgqj9ck[ayfnwr1v-1] * xk7dnvei * hdqsx7bk / (xk7dnvei + hdqsx7bk); rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2] = ufgqj9ck[ayfnwr1v-1] * hdqsx7bk * (-bzmd6ftvmat[0] * hdqsx7bk - 1.0e0 + hdqsx7bk / (hdqsx7bk + xk7dnvei)); if (rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2] < *dn3iasxug) { rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2] = *dn3iasxug; wpuarq2m[2 * *hj3ftvzu-2 + (ayfnwr1v-1) * *npjlv3mr] = *uaf2xgqy; } else wpuarq2m[2 * *hj3ftvzu-2 + (ayfnwr1v-1) * *npjlv3mr] = sqrt(rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2]); if (rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2] < *dn3iasxug) { rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2] = *dn3iasxug; wpuarq2m[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *npjlv3mr] = *uaf2xgqy; } else { wpuarq2m[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *npjlv3mr] = sqrt(rbne6ouj[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2]); } ghz9vuba[ayfnwr1v-1 + (2 * *hj3ftvzu-2) * *ftnjamu2] = m0ibglfx[2 * *hj3ftvzu-2 + (ayfnwr1v-1) * *wy1vqfzu] + tlgduey8[ayfnwr1v-1 + (*hj3ftvzu-1) * *ftnjamu2] / xk7dnvei - 1.0e0; ghz9vuba[ayfnwr1v-1 + (2 * *hj3ftvzu-1) * *ftnjamu2] = m0ibglfx[2 * *hj3ftvzu-1 + (ayfnwr1v-1) * *wy1vqfzu] + dldk / (dkdeta * ed2ldk2); } } if (*qfx3vhct == 8) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9rbne6ouj = *fpdlcqk9ufgqj9ck++; *fpdlcqk9wpuarq2m = sqrt(*fpdlcqk9rbne6ouj); *fpdlcqk9ghz9vuba++ = *fpdlcqk9tlgduey8++; fpdlcqk9wpuarq2m += *npjlv3mr; fpdlcqk9rbne6ouj++; } } if (*unhycz0e == 1) { fpdlcqk9ghz9vuba = ghz9vuba + ((*qfx3vhct == 3 || *qfx3vhct == 5) ? (2 * *hj3ftvzu-2) : (*hj3ftvzu-1)) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9ghz9vuba -= *fpdlcqk9vm4xjosb++; fpdlcqk9ghz9vuba++; } } } void cqo_2(double lncwkfq7[], double tlgduey8[], double kifxa0he[], double ufgqj9ck[], double m0ibglfx[], double vm4xjosb[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double w8znmyce[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double *tlq9wpes, double zshtfg8c[], double y7sdgtqi[]) { int ayfnwr1v, yq6lorbx, gp1jxzuh, bpvaqm5z, yu6izdrc = 0, kcm6jfob, fmzq7aob, xwdf5ltg, kvowz9ht, f7svlajr, qfx3vhct, c5aesxkul, pqneb2ra = 1; int ybnsqgo9, algpft4y, qemj9asg, xlpjcg3s, eu3oxvyb, vtsou9pz, unhycz0e, wr0lbopv; double dn3iasxug, wiptsjx8, bh2vgiay, pvofyg8z = 1.0e-7, uylxqtc7 = 0.0, uaf2xgqy, vsoihn1r, rsynp1go; // rpto5qwb, double *qnwamo0e1, *fpdlcqk9w8znmyce, *fpdlcqk9m0ibglfx, *fpdlcqk9vm4xjosb, *fpdlcqk9vc6hatuj, *fpdlcqk9wpuarq2m, *fpdlcqk9ghz9vuba; double hmayv1xt1 = 10.0, hmayv1xt2 = 0.0; int x1jrewny = 0; double *wkumc9idrpto5qwb, *wkumc9idtwk; wkumc9idrpto5qwb = Calloc(1 + *afpc0kns , double); wkumc9idtwk = Calloc(*wy1vqfzu * *ftnjamu2 * 2, double); xwdf5ltg = xui7hqwl[0]; fmzq7aob = xui7hqwl[1]; xlpjcg3s = xui7hqwl[2]; kvowz9ht = xui7hqwl[3]; f7svlajr = xui7hqwl[4]; qfx3vhct = xui7hqwl[5]; c5aesxkul = xui7hqwl[6]; xui7hqwl[8] = 0; eu3oxvyb = xui7hqwl[10]; vtsou9pz = xui7hqwl[11]; unhycz0e = xui7hqwl[13]; wr0lbopv = xui7hqwl[17]; dn3iasxug = y7sdgtqi[0]; uaf2xgqy = sqrt(dn3iasxug); if (qfx3vhct == 1 || qfx3vhct == 4) vsoihn1r = log(dn3iasxug); bh2vgiay = y7sdgtqi[1]; rsynp1go = y7sdgtqi[2]; hmayv1xt1 -= bh2vgiay; hmayv1xt2 -= rsynp1go; hmayv1xt1 += 
hmayv1xt2; *zjkrtol8 = 1; yiumjq3nflncwkfq72(lncwkfq7, w8znmyce, ftnjamu2, wy1vqfzu, br5ovgcj, &xwdf5ltg, &qfx3vhct, afpc0kns, &fmzq7aob, &eu3oxvyb, &unhycz0e, vm4xjosb); ceqzd1hi653: hmayv1xt2 = 1.0e0; if (f7svlajr == 0) { for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { yiumjq3nietam6(tlgduey8, m0ibglfx, y7sdgtqi, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &yq6lorbx, ufgqj9ck, &wr0lbopv); } } else if (f7svlajr == 2) { yiumjq3npkc4ejib(w8znmyce, zshtfg8c, m0ibglfx, ftnjamu2, wy1vqfzu, br5ovgcj, &xlpjcg3s, &vtsou9pz, &yu6izdrc, &qfx3vhct, &unhycz0e, vm4xjosb); } yiumjq3nnipyajc1(m0ibglfx, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &yu6izdrc); if (f7svlajr == 2) { yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, wkumc9idrpto5qwb, &yu6izdrc, &dn3iasxug, &vsoihn1r, &pqneb2ra); } else { wkumc9idrpto5qwb[0] = -1.0e0; } for (kcm6jfob = 1; kcm6jfob <= c5aesxkul; kcm6jfob++) { for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { yiumjq3ndlgpwe0c(tlgduey8, ufgqj9ck, m0ibglfx, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, &rsynp1go, &dn3iasxug, &uaf2xgqy, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, &yq6lorbx, &qfx3vhct, zjkrtol8, &unhycz0e, vm4xjosb); } fpdlcqk9vc6hatuj = vc6hatuj; fpdlcqk9w8znmyce = w8znmyce; for (yq6lorbx = 1; yq6lorbx <= xlpjcg3s; yq6lorbx++) for (ayfnwr1v = 1; ayfnwr1v <= *br5ovgcj; ayfnwr1v++) *fpdlcqk9vc6hatuj++ = *fpdlcqk9w8znmyce++; if (qfx3vhct == 3 || qfx3vhct == 5) { Rprintf("20100410; Error: this definitely does not work\n"); if (2 * *wy1vqfzu * *ftnjamu2 != *br5ovgcj) //Rprintf Rprintf("Error: 2 * *wy1vqfzu * *ftnjamu2 != *br5ovgcj in C_cqo_2\n"); fpdlcqk9vc6hatuj = vc6hatuj; for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) { fpdlcqk9wpuarq2m = wpuarq2m; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (bpvaqm5z = 1; bpvaqm5z <= *wy1vqfzu; bpvaqm5z++) { *fpdlcqk9vc6hatuj *= *fpdlcqk9wpuarq2m++; fpdlcqk9vc6hatuj++; } } } } else { if (*wy1vqfzu * *ftnjamu2 != *br5ovgcj) //Rprintf Rprintf("Error: *wy1vqfzu * *ftnjamu2 != *br5ovgcj in C_cqo_2\n"); fpdlcqk9vc6hatuj = vc6hatuj; for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) { fpdlcqk9wpuarq2m = wpuarq2m; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (bpvaqm5z = 1; bpvaqm5z <= *wy1vqfzu; bpvaqm5z++) { *fpdlcqk9vc6hatuj *= *fpdlcqk9wpuarq2m++; fpdlcqk9vc6hatuj++; } } } } for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) ges1xpkr[gp1jxzuh-1] = gp1jxzuh; F77_CALL(vqrdca)(vc6hatuj, br5ovgcj, br5ovgcj, &xlpjcg3s, fasrkub3, ges1xpkr, wkumc9idtwk, &qemj9asg, &pvofyg8z); if (qemj9asg != xlpjcg3s) { *zjkrtol8 = 2; Rprintf("Failure or Error in cqo_2: vc6hatuj is not of full xwdf5ltg.\n"); Free(wkumc9idrpto5qwb); Free(wkumc9idtwk); return; } if (*npjlv3mr != *wy1vqfzu) //Rprintf Rprintf("Error: *wy1vqfzu != *npjlv3mr in C_cqo_2\n"); qnwamo0e1 = wkumc9idtwk; fpdlcqk9wpuarq2m = wpuarq2m; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { fpdlcqk9ghz9vuba = ghz9vuba + ayfnwr1v-1; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *qnwamo0e1++ = *fpdlcqk9wpuarq2m++ * *fpdlcqk9ghz9vuba; fpdlcqk9ghz9vuba += *ftnjamu2; } } ybnsqgo9 = 101; F77_CALL(vdqrsl)(vc6hatuj, br5ovgcj, br5ovgcj, &qemj9asg, fasrkub3, wkumc9idtwk, &uylxqtc7, wkumc9idtwk + *wy1vqfzu * *ftnjamu2, zshtfg8c, &uylxqtc7, m0ibglfx, &ybnsqgo9, &algpft4y); if (*npjlv3mr != *wy1vqfzu) //Rprintf Rprintf("Error: *wy1vqfzu != *npjlv3mr in C_cqo_2\n"); fpdlcqk9m0ibglfx = m0ibglfx; fpdlcqk9wpuarq2m = wpuarq2m; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { 
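// Annotation (editor's note; not in the original source): vdqrsl has
// just solved the weighted least-squares problem
//     min_beta || W^(1/2) z - W^(1/2) X beta ||^2,
// where the square roots of the working weights are held in wpuarq2m
// (W^(1/2) z was formed in wkumc9idtwk, and vc6hatuj holds the
// weight-scaled model matrix).  vdqrsl returned the fitted values
// W^(1/2) X beta-hat in m0ibglfx, so the nested loop below divides
// element-wise by wpuarq2m to recover the linear predictors eta on
// their original scale.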
for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9m0ibglfx /= *fpdlcqk9wpuarq2m++; fpdlcqk9m0ibglfx++; } } if (unhycz0e == 1) { if (qfx3vhct == 3 || qfx3vhct == 5) { if (2 * *afpc0kns != *wy1vqfzu) //Rprintf Rprintf("Error: 2 * *afpc0kns != *wy1vqfzu in C_cqo_2\n"); fpdlcqk9m0ibglfx = m0ibglfx; fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *afpc0kns; yq6lorbx++) { *fpdlcqk9m0ibglfx += *fpdlcqk9vm4xjosb; fpdlcqk9m0ibglfx += 2; } fpdlcqk9vm4xjosb++; } } else { fpdlcqk9m0ibglfx = m0ibglfx; fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9m0ibglfx += *fpdlcqk9vm4xjosb; fpdlcqk9m0ibglfx++; } fpdlcqk9vm4xjosb++; } } } yiumjq3nnipyajc1(m0ibglfx, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &yu6izdrc); yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, tlq9wpes, &yu6izdrc, &dn3iasxug, &vsoihn1r, &pqneb2ra); wiptsjx8 = fabs(*tlq9wpes - *wkumc9idrpto5qwb) / (1.0e0 + fabs(*tlq9wpes)); if (wiptsjx8 < bh2vgiay) { // xxx *zjkrtol8 = 0; xui7hqwl[7] = kcm6jfob; if (qfx3vhct == 3 || qfx3vhct == 5) { yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, tlq9wpes, &yu6izdrc, &dn3iasxug, &vsoihn1r, &yu6izdrc); } x1jrewny = 1; goto ceqzd1hi20097; } else { // xxx and *wkumc9idrpto5qwb = *tlq9wpes; x1jrewny = 0; } } ceqzd1hi20097: hmayv1xt1 = 0.0e0; if (x1jrewny == 1) { Free(wkumc9idrpto5qwb); Free(wkumc9idtwk); return; } if (f7svlajr == 1 || f7svlajr == 2) { f7svlajr = 0; xui7hqwl[8] = 1; goto ceqzd1hi653; } *zjkrtol8 = 3; Free(wkumc9idrpto5qwb); Free(wkumc9idtwk); } void cqo_1(double lncwkfq7[], double tlgduey8[], double kifxa0he[], double ufgqj9ck[], double m0ibglfx[], double vm4xjosb[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double w8znmyce[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double *tlq9wpes, double zshtfg8c[], double y7sdgtqi[]) { int ayfnwr1v, hj3ftvzu, yu6izdrc = 0, pqneb2ra = 1, wr0lbopv, kcm6jfob, unhycz0e, xwdf5ltg, kvowz9ht, f7svlajr, qfx3vhct, c5aesxkul, ybnsqgo9, algpft4y, qemj9asg, xlpjcg3s, vtsou9pz, yru9olks; double dn3iasxug, wiptsjx8, pvofyg8z = 1.0e-7, uylxqtc7 = 0.0, bh2vgiay, uaf2xgqy, vsoihn1r, rsynp1go, rpto5qwb; double *fpdlcqk9zshtfg8c, *fpdlcqk9w8znmyce, *fpdlcqk9m0ibglfx, *fpdlcqk9m0ibglfx1, *fpdlcqk9m0ibglfx2, *fpdlcqk9vm4xjosb, *fpdlcqk9vc6hatuj, *fpdlcqk9twk, *fpdlcqk9wpuarq2m, *fpdlcqk9wpuarq2m1, *fpdlcqk9wpuarq2m2, *fpdlcqk9ghz9vuba1, *fpdlcqk9ghz9vuba2; int gp1jxzuh; double hmayv1xt = 2.0, Totdev = 0.0e0; double *wkumc9idtwk; wkumc9idtwk = Calloc(*br5ovgcj * 3 , double); xwdf5ltg = xui7hqwl[0]; xlpjcg3s = xui7hqwl[2]; kvowz9ht = xui7hqwl[3]; f7svlajr = xui7hqwl[4]; qfx3vhct = xui7hqwl[5]; c5aesxkul = xui7hqwl[6]; xui7hqwl[8] = 0; // twice vtsou9pz = xui7hqwl[11]; zjkrtol8[0] = -1; for (ayfnwr1v = 1; ayfnwr1v <= *afpc0kns; ayfnwr1v++) zjkrtol8[ayfnwr1v] = 1; if (vtsou9pz != 1) { Rprintf("Error: vtsou9pz is not unity in cqo_1!\n"); *zjkrtol8 = 4; Free(wkumc9idtwk); return; } unhycz0e = xui7hqwl[13]; yru9olks = xui7hqwl[15]; wr0lbopv = xui7hqwl[17]; //20120222; correct but unused. 
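// Annotation (editor's note; not in the original source): cqo_1 appears
// to fit the constrained quadratic ordination model one response
// ("species") at a time.  For each hj3ftvzu the IRLS loop below is:
//   1. initial linear predictors via yiumjq3nietam6 (f7svlajr == 0), or
//      from supplied coefficients via yiumjq3npkc4ejib (f7svlajr == 2);
//   2. fitted means via the inverse links in yiumjq3nnipyajc1
//      (qfx3vhct codes evidently: 1 logit, 2 log, 3 negative binomial,
//      4 cloglog, 5 gamma, 8 identity);
//   3. working weights and adjusted dependent variable via
//      yiumjq3ndlgpwe0c;
//   4. scale the model matrix and response by the square-root weights,
//      then QR-decompose and solve (LINPACK-style vqrdca/vdqrsl);
//   5. declare convergence when
//        fabs(dev - dev_old) / (1.0 + fabs(dev)) < bh2vgiay,
//      otherwise retry once from internal starting values
//      (f7svlajr reset to 0) before flagging non-convergence.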
dn3iasxug = y7sdgtqi[0]; uaf2xgqy = sqrt(dn3iasxug); if (qfx3vhct == 1 || qfx3vhct == 4) vsoihn1r = log(dn3iasxug); bh2vgiay = y7sdgtqi[1]; rsynp1go = y7sdgtqi[2]; hmayv1xt -= rsynp1go; hmayv1xt += hmayv1xt; yiumjq3nflncwkfq71(lncwkfq7, w8znmyce, ftnjamu2, &xwdf5ltg, &qfx3vhct, vm4xjosb, br5ovgcj, &xlpjcg3s, kifxa0he, &yru9olks, &unhycz0e); for (hj3ftvzu = 1; hj3ftvzu <= *afpc0kns; hj3ftvzu++) { ceqzd1hi653: hmayv1xt = 1.0e0; if (f7svlajr == 0) { yiumjq3nietam6(tlgduey8, m0ibglfx, y7sdgtqi, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &hj3ftvzu, ufgqj9ck, &wr0lbopv); } else if (f7svlajr == 2) { yiumjq3npkc4ejib(w8znmyce, zshtfg8c + (hj3ftvzu-1) * xlpjcg3s, m0ibglfx, ftnjamu2, wy1vqfzu, br5ovgcj, &xlpjcg3s, &vtsou9pz, &hj3ftvzu, &qfx3vhct, &unhycz0e, vm4xjosb); } yiumjq3nnipyajc1(m0ibglfx, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &hj3ftvzu); if (f7svlajr == 2) { yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, &rpto5qwb, &hj3ftvzu, &dn3iasxug, &vsoihn1r, &pqneb2ra); } else { rpto5qwb = -1.0e0; } for (kcm6jfob = 1; kcm6jfob <= c5aesxkul; kcm6jfob++) { yiumjq3ndlgpwe0c(tlgduey8, ufgqj9ck, m0ibglfx, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, &rsynp1go, &dn3iasxug, &uaf2xgqy, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, &hj3ftvzu, &qfx3vhct, zjkrtol8 + hj3ftvzu, &unhycz0e, vm4xjosb); fpdlcqk9vc6hatuj = vc6hatuj; fpdlcqk9w8znmyce = w8znmyce; for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) for (ayfnwr1v = 1; ayfnwr1v <= *br5ovgcj; ayfnwr1v++) *fpdlcqk9vc6hatuj++ = *fpdlcqk9w8znmyce++; if (qfx3vhct == 3 || qfx3vhct == 5) { if (2 * *ftnjamu2 != *br5ovgcj) //Rprintf Rprintf("Error: 2 * *ftnjamu2 != *br5ovgcj in C_cqo_1\n"); fpdlcqk9vc6hatuj = vc6hatuj; for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) { fpdlcqk9wpuarq2m2 = wpuarq2m + 2*hj3ftvzu -2; fpdlcqk9wpuarq2m1 = wpuarq2m + 2*hj3ftvzu -1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9vc6hatuj *= *fpdlcqk9wpuarq2m2; fpdlcqk9vc6hatuj++; *fpdlcqk9vc6hatuj *= *fpdlcqk9wpuarq2m1; fpdlcqk9vc6hatuj++; fpdlcqk9wpuarq2m1 += *npjlv3mr; fpdlcqk9wpuarq2m2 += *npjlv3mr; } } } else { if (1 * *ftnjamu2 != *br5ovgcj) //Rprintf Rprintf("Error: 1 * *ftnjamu2 != *br5ovgcj in C_cqo_1\n"); fpdlcqk9vc6hatuj = vc6hatuj; for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) { fpdlcqk9wpuarq2m = wpuarq2m + hj3ftvzu -1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9vc6hatuj *= *fpdlcqk9wpuarq2m; fpdlcqk9vc6hatuj++; fpdlcqk9wpuarq2m += *npjlv3mr; } } } for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) ges1xpkr[gp1jxzuh-1] = gp1jxzuh; F77_CALL(vqrdca)(vc6hatuj, br5ovgcj, br5ovgcj, &xlpjcg3s, fasrkub3, ges1xpkr, wkumc9idtwk, &qemj9asg, &pvofyg8z); if (qemj9asg != xlpjcg3s) { Rprintf("Error in cqo_1: vc6hatuj is not of full xwdf5ltg.\n"); *zjkrtol8 = 2; Free(wkumc9idtwk); return; } if (qfx3vhct == 3 || qfx3vhct == 5) { fpdlcqk9ghz9vuba1 = ghz9vuba + (2*hj3ftvzu-1) * *ftnjamu2; fpdlcqk9ghz9vuba2 = ghz9vuba + (2*hj3ftvzu-2) * *ftnjamu2; fpdlcqk9wpuarq2m1 = wpuarq2m + 2*hj3ftvzu-1; fpdlcqk9wpuarq2m2 = wpuarq2m + 2*hj3ftvzu-2; fpdlcqk9twk = wkumc9idtwk; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9twk++ = *fpdlcqk9wpuarq2m2 * *fpdlcqk9ghz9vuba2++; *fpdlcqk9twk++ = *fpdlcqk9wpuarq2m1 * *fpdlcqk9ghz9vuba1++; fpdlcqk9wpuarq2m1 += *npjlv3mr; fpdlcqk9wpuarq2m2 += *npjlv3mr; } } else { fpdlcqk9ghz9vuba1 = ghz9vuba + (hj3ftvzu-1) * *ftnjamu2; fpdlcqk9twk = wkumc9idtwk; fpdlcqk9wpuarq2m = wpuarq2m + hj3ftvzu-1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; 
ayfnwr1v++) { *fpdlcqk9twk++ = *fpdlcqk9wpuarq2m * *fpdlcqk9ghz9vuba1++; fpdlcqk9wpuarq2m += *npjlv3mr; } } ybnsqgo9 = 101; F77_CALL(vdqrsl)(vc6hatuj, br5ovgcj, br5ovgcj, &qemj9asg, fasrkub3, wkumc9idtwk, &uylxqtc7, wkumc9idtwk + *br5ovgcj, zshtfg8c + (hj3ftvzu-1) * xlpjcg3s, &uylxqtc7, wkumc9idtwk + 2 * *br5ovgcj, &ybnsqgo9, &algpft4y); fpdlcqk9twk = wkumc9idtwk; fpdlcqk9zshtfg8c = zshtfg8c + (hj3ftvzu-1) * xlpjcg3s; for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) { *fpdlcqk9twk++ = *fpdlcqk9zshtfg8c++; } fpdlcqk9twk = wkumc9idtwk; fpdlcqk9zshtfg8c = zshtfg8c + (hj3ftvzu-1) * xlpjcg3s; for (gp1jxzuh = 1; gp1jxzuh <= xlpjcg3s; gp1jxzuh++) { *(fpdlcqk9zshtfg8c + ges1xpkr[gp1jxzuh-1] - 1) = *fpdlcqk9twk++; } if (qfx3vhct == 3 || qfx3vhct == 5) { fpdlcqk9m0ibglfx2 = m0ibglfx + 2 * hj3ftvzu -2; fpdlcqk9m0ibglfx1 = m0ibglfx + 2 * hj3ftvzu -1; fpdlcqk9twk = wkumc9idtwk + 2 * *br5ovgcj; fpdlcqk9wpuarq2m2 = wpuarq2m + 2 * hj3ftvzu -2; fpdlcqk9wpuarq2m1 = wpuarq2m + 2 * hj3ftvzu -1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx2 = *fpdlcqk9twk++ / *fpdlcqk9wpuarq2m2; *fpdlcqk9m0ibglfx1 = *fpdlcqk9twk++ / *fpdlcqk9wpuarq2m1; fpdlcqk9m0ibglfx1 += *wy1vqfzu; fpdlcqk9m0ibglfx2 += *wy1vqfzu; fpdlcqk9wpuarq2m1 += *npjlv3mr; fpdlcqk9wpuarq2m2 += *npjlv3mr; } if (unhycz0e == 1) { fpdlcqk9m0ibglfx = m0ibglfx + 2*hj3ftvzu-2; fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx += *fpdlcqk9vm4xjosb++; fpdlcqk9m0ibglfx += *wy1vqfzu; } } } else { fpdlcqk9m0ibglfx = m0ibglfx + hj3ftvzu -1; fpdlcqk9twk = wkumc9idtwk + 2 * *br5ovgcj; fpdlcqk9wpuarq2m = wpuarq2m + hj3ftvzu -1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx = *fpdlcqk9twk++ / *fpdlcqk9wpuarq2m; fpdlcqk9m0ibglfx += *wy1vqfzu; fpdlcqk9wpuarq2m += *npjlv3mr; } if (unhycz0e == 1) { fpdlcqk9m0ibglfx = m0ibglfx + hj3ftvzu-1; fpdlcqk9vm4xjosb = vm4xjosb; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9m0ibglfx += *fpdlcqk9vm4xjosb++; fpdlcqk9m0ibglfx += *wy1vqfzu; } } } yiumjq3nnipyajc1(m0ibglfx, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &hj3ftvzu); yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, tlq9wpes + hj3ftvzu, &hj3ftvzu, &dn3iasxug, &vsoihn1r, &pqneb2ra); wiptsjx8 = fabs(tlq9wpes[hj3ftvzu] - rpto5qwb) / (1.0e0 + fabs(tlq9wpes[hj3ftvzu])); if (wiptsjx8 < bh2vgiay) { zjkrtol8[hj3ftvzu] = 0; xui7hqwl[7] = kcm6jfob; if (qfx3vhct == 3 || qfx3vhct == 5) { yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, tlq9wpes + hj3ftvzu, &hj3ftvzu, &dn3iasxug, &vsoihn1r, &yu6izdrc); } Totdev += tlq9wpes[hj3ftvzu]; goto ceqzd1hi1011; } else { rpto5qwb = tlq9wpes[hj3ftvzu]; } } Rprintf("cqo_1; no convergence for Species "); Rprintf("number %3d. Trying internal starting values.\n", hj3ftvzu); if (f7svlajr == 1) { f7svlajr = 0; xui7hqwl[8] = 1; goto ceqzd1hi653; } *zjkrtol8 = 3; zjkrtol8[hj3ftvzu] = 2; Rprintf("cqo_1; no convergence for Species "); Rprintf("number %3d. 
Continuing on with other species.\n", hj3ftvzu); Totdev += tlq9wpes[hj3ftvzu]; ceqzd1hi1011: hmayv1xt = 3.0e0; } if (zjkrtol8[0] == -1) for (ayfnwr1v = 1; ayfnwr1v <= *afpc0kns; ayfnwr1v++) if (zjkrtol8[ayfnwr1v] != 0) zjkrtol8[0] = 1; if (zjkrtol8[0] == -1) zjkrtol8[0] = 0; *tlq9wpes = Totdev; Free(wkumc9idtwk); } void dcqo1(double lncwkfq7[], double tlgduey8[], double kifxa0he[], double ufgqj9ck[], double m0ibglfx[], double vm4xjosb[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double w8znmyce[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double *tlq9wpes, double zshtfg8c[], double y7sdgtqi[], double atujnxb8[], double k7hulceq[], int *eoviz2fb, double kpzavbj3mat[], double *ydcnh9xl) { int ayfnwr1v, gp1jxzuh, xvr7bonh, hpmwnav2, idlosrw8, xwdf5ltg = xui7hqwl[ 0], vtsou9pz; int exrkcn5d = xui7hqwl[12]; double fxnhilr3, *fpdlcqk9k7hulceq, *fpdlcqk9kpzavbj3mat, *fpdlcqk9lncwkfq7, *fpdlcqk9yxiwebc5, *fpdlcqk9atujnxb8; double *wkumc9idajul8wkv, *wkumc9iddev0, *wkumc9idyxiwebc5; wkumc9idajul8wkv = Calloc(exrkcn5d , double); wkumc9iddev0 = Calloc(1 + *afpc0kns , double); wkumc9idyxiwebc5 = Calloc(*ftnjamu2 * xwdf5ltg , double); fpdlcqk9kpzavbj3mat = kpzavbj3mat; idlosrw8 = xui7hqwl[ 4]; vtsou9pz = xui7hqwl[11]; fpdlcqk9lncwkfq7 = lncwkfq7; fpdlcqk9yxiwebc5 = wkumc9idyxiwebc5; for (hpmwnav2 = 1; hpmwnav2 <= xwdf5ltg; hpmwnav2++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { fxnhilr3 = 0.0e0; fpdlcqk9k7hulceq = k7hulceq + (hpmwnav2-1) * *eoviz2fb; fpdlcqk9atujnxb8 = atujnxb8 + ayfnwr1v-1; for (xvr7bonh = 1; xvr7bonh <= *eoviz2fb; xvr7bonh++) { fxnhilr3 += *fpdlcqk9atujnxb8 * *fpdlcqk9k7hulceq++; fpdlcqk9atujnxb8 += *ftnjamu2; } *fpdlcqk9yxiwebc5++ = *fpdlcqk9lncwkfq7++ = fxnhilr3; } } if (vtsou9pz == 1) { cqo_1(lncwkfq7, tlgduey8, kifxa0he, ufgqj9ck, m0ibglfx, vm4xjosb, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, fasrkub3, ges1xpkr, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, zjkrtol8, xui7hqwl, wkumc9iddev0, wkumc9idajul8wkv, y7sdgtqi); } else { cqo_2(lncwkfq7, tlgduey8, kifxa0he, ufgqj9ck, m0ibglfx, vm4xjosb, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, fasrkub3, ges1xpkr, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, zjkrtol8, xui7hqwl, wkumc9iddev0, wkumc9idajul8wkv, y7sdgtqi); } fpdlcqk9atujnxb8 = atujnxb8; for (xvr7bonh = 1; xvr7bonh <= *eoviz2fb; xvr7bonh++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9atujnxb8 *= *ydcnh9xl; fpdlcqk9atujnxb8++; } } for (hpmwnav2 = 1; hpmwnav2 <= xwdf5ltg; hpmwnav2++) { for (xvr7bonh = 1; xvr7bonh <= *eoviz2fb; xvr7bonh++) { fpdlcqk9lncwkfq7 = lncwkfq7 + (hpmwnav2-1) * *ftnjamu2; fpdlcqk9yxiwebc5 = wkumc9idyxiwebc5 + (hpmwnav2-1) * *ftnjamu2; fpdlcqk9atujnxb8 = atujnxb8 + (xvr7bonh-1) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9lncwkfq7++ = *fpdlcqk9yxiwebc5++ + *fpdlcqk9atujnxb8++; } xui7hqwl[4] = 2; for (gp1jxzuh = 1; gp1jxzuh <= exrkcn5d; gp1jxzuh++) zshtfg8c[gp1jxzuh-1] = wkumc9idajul8wkv[gp1jxzuh-1]; if (vtsou9pz == 1) { cqo_1(lncwkfq7, tlgduey8, kifxa0he, ufgqj9ck, m0ibglfx, vm4xjosb, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, fasrkub3, ges1xpkr, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, zjkrtol8, xui7hqwl, tlq9wpes, zshtfg8c, y7sdgtqi); } else { cqo_2(lncwkfq7, tlgduey8, kifxa0he, ufgqj9ck, m0ibglfx, vm4xjosb, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, 
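// Annotation (editor's note; not in the original source): dcqo1
// evidently computes forward finite differences of the deviance with
// respect to the constrained coefficients.  The latent variables
// lncwkfq7 = atujnxb8 %*% k7hulceq (in R notation) were formed in the
// loop above and saved in wkumc9idyxiwebc5; a baseline deviance
// wkumc9iddev0 is computed here, after which each latent variable is
// perturbed by ydcnh9xl times a column of atujnxb8, the model is
// refitted from the saved coefficients (xui7hqwl[4] = 2 requests a
// warm start), and the derivative estimate
//     (dev_new - dev0) / ydcnh9xl
// is stored column-by-column in kpzavbj3mat.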
fasrkub3, ges1xpkr, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, zjkrtol8, xui7hqwl, tlq9wpes, zshtfg8c, y7sdgtqi); } if (*zjkrtol8 != 0) { Rprintf("Error in dcqo1: zjkrtol8 = %d\n", *zjkrtol8); Rprintf("Continuing.\n"); } *fpdlcqk9kpzavbj3mat++ = (*tlq9wpes - *wkumc9iddev0) / *ydcnh9xl; } if (xwdf5ltg > 1) { fpdlcqk9lncwkfq7 = lncwkfq7 + (hpmwnav2-1) * *ftnjamu2; fpdlcqk9yxiwebc5 = wkumc9idyxiwebc5 + (hpmwnav2-1) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) *fpdlcqk9lncwkfq7++ = *fpdlcqk9yxiwebc5++; } } Free(wkumc9idajul8wkv); Free(wkumc9iddev0); Free(wkumc9idyxiwebc5); xui7hqwl[4] = idlosrw8; } void vcao6(double lncwkfq7[], double tlgduey8[], double ufgqj9ck[], double m0ibglfx[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double tlq9wpes[], double zshtfg8c[], double y7sdgtqi[], int psdvgce3[], int *qfozcl5b, double hdnw2fts[], double lamvec[], double wbkq9zyi[], int ezlgm2up[], int lqsahu0r[], int which[], double kispwgx3[], double mbvnaor6[], double hjm2ktyr[], int jnxpuym2[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], int nbzjkpi3[], int lindex[], int acpios9q[], int jwbkl9fp[]) { int hj3ftvzu, ehtjigf4, kvowz9ht, yu6izdrc = 0, pqneb2ra = 1, xwdf5ltg = xui7hqwl[0], f7svlajr, qfx3vhct, c5aesxkul, wr0lbopv, vtsou9pz, xlpjcg3s, sedf7mxb, kcm6jfob, lensmo = (xwdf5ltg == 1 ? 2 : 4) * *afpc0kns; double rpto5qwb, dn3iasxug, wiptsjx8, bh2vgiay, uaf2xgqy, vsoihn1r, rsynp1go, fjcasv7g[6], ghdetj8v = 0.0; double *fpdlcqk9kispwgx3; int len_1spp_ifys6woa; double hmayv1xt = 0.0, Totdev = 0.0e0; int qes4mujl, ayfnwr1v, kij0gwer, xumj5dnk, lyma1kwc; // = xui7hqwl[10]; double hmayv1xtvm4xjosb[2]; double *fpdlcqk9lxyst1eb, *fpdlcqk9zyodca3j, *fpdlcqk9m0ibglfx1, *fpdlcqk9m0ibglfx2, *fpdlcqk9wpuarq2m1, *fpdlcqk9wpuarq2m2; double *wkumc9idui8ysltq, *wkumc9idlxyst1eb, *wkumc9idzyodca3j; double *wkumc9idhdnw2fts, *wkumc9idwbkq9zyi; fjcasv7g[0] = 0.001; fjcasv7g[1] = 0.0; fjcasv7g[2] = -1.5; fjcasv7g[3] = 1.5; fjcasv7g[4] = 1.0e-4; fjcasv7g[5] = 2.0e-8; wkumc9idui8ysltq = Calloc((*ftnjamu2 * *wy1vqfzu) * (*afpc0kns * *wy1vqfzu), double); wkumc9idlxyst1eb = Calloc( *qfozcl5b * *ftnjamu2 , double); wkumc9idzyodca3j = Calloc( *qfozcl5b * *ftnjamu2 , double); wkumc9idhdnw2fts = Calloc(lensmo , double); wkumc9idwbkq9zyi = Calloc(lensmo , double); for (ayfnwr1v = 0; ayfnwr1v < lensmo; ayfnwr1v++) { wkumc9idhdnw2fts[ayfnwr1v] = hdnw2fts[ayfnwr1v]; wkumc9idwbkq9zyi[ayfnwr1v] = wbkq9zyi[ayfnwr1v]; } xlpjcg3s = xui7hqwl[2]; kvowz9ht = xui7hqwl[3]; // # = 1 f7svlajr = xui7hqwl[4]; qfx3vhct = xui7hqwl[5]; c5aesxkul = xui7hqwl[6]; xui7hqwl[8] = 0; lyma1kwc = psdvgce3[10]; // vtsou9pz = xui7hqwl[11]; if (vtsou9pz != 1 || lyma1kwc != xwdf5ltg) { Rprintf("Error: 'vtsou9pz' != 1, or 'lyma1kwc' != 'xwdf5ltg', in vcao6!\n"); *zjkrtol8 = 4; Free(wkumc9idui8ysltq); Free(wkumc9idlxyst1eb); Free(wkumc9idzyodca3j); Free(wkumc9idhdnw2fts); Free(wkumc9idwbkq9zyi); return; } wr0lbopv = xui7hqwl[17]; dn3iasxug = y7sdgtqi[0]; uaf2xgqy = sqrt(dn3iasxug); vsoihn1r = log(dn3iasxug); bh2vgiay = y7sdgtqi[1]; rsynp1go = y7sdgtqi[2]; hmayv1xt += hmayv1xt; hmayv1xt *= hmayv1xt; len_1spp_ifys6woa = lindex[lyma1kwc] - 1; *zjkrtol8 = 1; for (hj3ftvzu = 1; hj3ftvzu <= *afpc0kns; hj3ftvzu++) { ceqzd1hi653: hmayv1xt = 1.0; qes4mujl = (qfx3vhct == 3 || qfx3vhct == 5) ? 
2 * hj3ftvzu - 1 : hj3ftvzu; if (f7svlajr == 0) { yiumjq3nietam6(tlgduey8, m0ibglfx, y7sdgtqi, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &hj3ftvzu, ufgqj9ck, &wr0lbopv); } else if (f7svlajr != 1) { Rprintf("Failure due to bad input of 'f7svlajr' variable\n"); *zjkrtol8 = 6; Free(wkumc9idui8ysltq); Free(wkumc9idlxyst1eb); Free(wkumc9idzyodca3j); Free(wkumc9idhdnw2fts); Free(wkumc9idwbkq9zyi); return; } yiumjq3nnipyajc1(m0ibglfx, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &hj3ftvzu); if (f7svlajr == 2) { yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, &rpto5qwb, &hj3ftvzu, &dn3iasxug, &vsoihn1r, &pqneb2ra); } else { rpto5qwb = -1.0e0; } for (kcm6jfob = 1; kcm6jfob <= c5aesxkul; kcm6jfob++) { yiumjq3nflncwkfq76(lncwkfq7, vc6hatuj, ftnjamu2, br5ovgcj, &xwdf5ltg, &qfx3vhct); psdvgce3[6] = 0; yiumjq3ndlgpwe0c(tlgduey8, ufgqj9ck, m0ibglfx, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, &rsynp1go, &dn3iasxug, &uaf2xgqy, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, &hj3ftvzu, &qfx3vhct, zjkrtol8, &yu6izdrc, hmayv1xtvm4xjosb); fpdlcqk9lxyst1eb = wkumc9idlxyst1eb; fpdlcqk9zyodca3j = wkumc9idzyodca3j; fpdlcqk9m0ibglfx1 = m0ibglfx + qes4mujl-1; fpdlcqk9wpuarq2m1 = wpuarq2m + qes4mujl-1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { fpdlcqk9m0ibglfx2 = fpdlcqk9m0ibglfx1; fpdlcqk9wpuarq2m2 = fpdlcqk9wpuarq2m1; for (kij0gwer = 1; kij0gwer <= *qfozcl5b; kij0gwer++) { *fpdlcqk9lxyst1eb++ = *fpdlcqk9m0ibglfx2++; *fpdlcqk9zyodca3j++ = *fpdlcqk9wpuarq2m2++; } fpdlcqk9m0ibglfx1 += *wy1vqfzu; fpdlcqk9wpuarq2m1 += *npjlv3mr; } sedf7mxb = 0; // 20100416 a stop gap. Used for xwdf5ltg==2 only i think. ehtjigf4 = xwdf5ltg * (hj3ftvzu-1); if (kcm6jfob == 1) { for (kij0gwer = 1; kij0gwer <= lyma1kwc; kij0gwer++) { fpdlcqk9kispwgx3 = kispwgx3 + (ehtjigf4 + hnpt1zym[kij0gwer-1]-1) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) *fpdlcqk9kispwgx3++ = 0.0e0; } } else { wbkq9zyi[ ehtjigf4 + hnpt1zym[0]-1] = wkumc9idwbkq9zyi[ ehtjigf4 + hnpt1zym[0]-1]; hdnw2fts[ ehtjigf4 + hnpt1zym[0]-1] = wkumc9idhdnw2fts[ ehtjigf4 + hnpt1zym[0]-1]; if (xwdf5ltg == 2) { wbkq9zyi[ ehtjigf4 + hnpt1zym[1]-1] = wkumc9idwbkq9zyi[ ehtjigf4 + hnpt1zym[1]-1]; // wkumc9idr3eoxkzp; hdnw2fts[sedf7mxb + ehtjigf4 + hnpt1zym[1]-1] = wkumc9idhdnw2fts[sedf7mxb + ehtjigf4 + hnpt1zym[1]-1]; // wkumc9idwld4qctn; } } Yee_vbfa(psdvgce3, fjcasv7g, mbvnaor6, ghz9vuba + (qes4mujl-1) * *ftnjamu2, rbne6ouj + (qes4mujl-1) * *ftnjamu2, hdnw2fts + sedf7mxb + ehtjigf4 + hnpt1zym[0] - 1, lamvec + ehtjigf4 + hnpt1zym[0] - 1, wbkq9zyi + ehtjigf4 + hnpt1zym[0] - 1, ezlgm2up, lqsahu0r, which, kispwgx3 + (ehtjigf4 + *hnpt1zym - 1) * *ftnjamu2, wkumc9idlxyst1eb, zshtfg8c + (hj3ftvzu - 1) * xlpjcg3s, wkumc9idui8ysltq, vc6hatuj, fasrkub3, ges1xpkr, wkumc9idzyodca3j, hjm2ktyr, jnxpuym2, hnpt1zym, iz2nbfjc, ifys6woa + ehtjigf4 * len_1spp_ifys6woa, rpyis2kc + (hj3ftvzu-1) * (nbzjkpi3[xwdf5ltg] - 1), gkdx5jals, nbzjkpi3, lindex, // 20130525; lindex added acpios9q, jwbkl9fp); y7sdgtqi[3 + *afpc0kns + *afpc0kns] = ghdetj8v; xumj5dnk = psdvgce3[13]; if (xumj5dnk != 0) { Rprintf("vcao6: Error... 
exiting; error code = %d\n", xumj5dnk); *zjkrtol8 = 8; Free(wkumc9idui8ysltq); Free(wkumc9idlxyst1eb); Free(wkumc9idzyodca3j); Free(wkumc9idhdnw2fts); Free(wkumc9idwbkq9zyi); return; } fpdlcqk9lxyst1eb = wkumc9idlxyst1eb; fpdlcqk9m0ibglfx1 = m0ibglfx + qes4mujl-1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { fpdlcqk9m0ibglfx2 = fpdlcqk9m0ibglfx1; for (kij0gwer = 1; kij0gwer <= *qfozcl5b; kij0gwer++) { *fpdlcqk9m0ibglfx2++ = *fpdlcqk9lxyst1eb++; } fpdlcqk9m0ibglfx1 += *wy1vqfzu; } yiumjq3nnipyajc1(m0ibglfx, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &qfx3vhct, &hj3ftvzu); yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, tlq9wpes + hj3ftvzu, &hj3ftvzu, &dn3iasxug, &vsoihn1r, &pqneb2ra); wiptsjx8 = fabs(tlq9wpes[hj3ftvzu] - rpto5qwb) / (1.0e0 + fabs(tlq9wpes[hj3ftvzu])); if (wiptsjx8 < bh2vgiay) { *zjkrtol8 = 0; xui7hqwl[7] = kcm6jfob; if (qfx3vhct == 3 || qfx3vhct == 5) { yiumjq3nshjlwft5(&qfx3vhct, tlgduey8, ufgqj9ck, t8hwvalr, ftnjamu2, wy1vqfzu, afpc0kns, &kvowz9ht, m0ibglfx, tlq9wpes + hj3ftvzu, &hj3ftvzu, &dn3iasxug, &vsoihn1r, &yu6izdrc); } Totdev += tlq9wpes[hj3ftvzu]; goto ceqzd1hi1011; } else { rpto5qwb = tlq9wpes[hj3ftvzu]; } } if (f7svlajr == 1) { f7svlajr = 0; xui7hqwl[8] = 1; goto ceqzd1hi653; } *zjkrtol8 = 3; Totdev += tlq9wpes[hj3ftvzu]; ceqzd1hi1011: hmayv1xt = 2.0e0; } *tlq9wpes = Totdev; Free(wkumc9idui8ysltq); Free(wkumc9idlxyst1eb); Free(wkumc9idzyodca3j); Free(wkumc9idhdnw2fts); Free(wkumc9idwbkq9zyi); } void vdcao6(double lncwkfq7[], double tlgduey8[], double ufgqj9ck[], double m0ibglfx[], double t8hwvalr[], double ghz9vuba[], double rbne6ouj[], double wpuarq2m[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], int *ftnjamu2, int *wy1vqfzu, int *afpc0kns, int *br5ovgcj, int *npjlv3mr, int *zjkrtol8, int xui7hqwl[], double tlq9wpes[], double zshtfg8c[], double y7sdgtqi[], double atujnxb8[], double k7hulceq[], int *eoviz2fb, double kpzavbj3mat[], double ajul8wkv[], int psdvgce3[], int *qfozcl5b, double hdnw2fts[], double lamvec[], double wbkq9zyi[], int ezlgm2up[], int lqsahu0r[], int which[], double kispwgx3[], double mbvnaor6[], double hjm2ktyr[], int jnxpuym2[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], int nbzjkpi3[], int lindex[], int acpios9q[], int jwbkl9fp[]) { int ayfnwr1v, xvr7bonh, hpmwnav2, idlosrw8, xwdf5ltg = xui7hqwl[ 0], vtsou9pz; double fxnhilr3; double ghdetj8v = 0.0e0, ydcnh9xl = y7sdgtqi[3 + *afpc0kns + *afpc0kns + 3 -1]; double *fpdlcqk9k7hulceq, *fpdlcqk9kpzavbj3mat, *fpdlcqk9lncwkfq7, *fpdlcqk9yxiwebc5, *fpdlcqk9atujnxb8; double *wkumc9idyxiwebc5; double *wkumc9idlxyst1eb, *wkumc9idzyodca3j; double *wkumc9iddev0; wkumc9idyxiwebc5 = Calloc(*ftnjamu2 * xwdf5ltg , double); fpdlcqk9kpzavbj3mat = kpzavbj3mat; wkumc9iddev0 = Calloc(1 + *afpc0kns , double); wkumc9idlxyst1eb = Calloc( *qfozcl5b * *ftnjamu2 , double); wkumc9idzyodca3j = Calloc( *qfozcl5b * *ftnjamu2 , double); idlosrw8 = xui7hqwl[ 4]; vtsou9pz = xui7hqwl[11]; fpdlcqk9lncwkfq7 = lncwkfq7; fpdlcqk9yxiwebc5 = wkumc9idyxiwebc5; for (hpmwnav2 = 1; hpmwnav2 <= xwdf5ltg; hpmwnav2++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { fxnhilr3 = 0.0e0; fpdlcqk9k7hulceq = k7hulceq + (hpmwnav2-1) * *eoviz2fb; fpdlcqk9atujnxb8 = atujnxb8 + ayfnwr1v-1; for (xvr7bonh = 1; xvr7bonh <= *eoviz2fb; xvr7bonh++) { fxnhilr3 += *fpdlcqk9atujnxb8 * *fpdlcqk9k7hulceq++; fpdlcqk9atujnxb8 += *ftnjamu2; } *fpdlcqk9yxiwebc5++ = *fpdlcqk9lncwkfq7++ = fxnhilr3; } } if (vtsou9pz == 1) { 
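/* Editorial note, not part of the original source: vdcao6() appears
   to form a forward finite-difference gradient of the CAO deviance
   with respect to the coefficients of the latent variables.  The
   call just below evaluates the baseline deviance (wkumc9iddev0);
   the loops further down then add ydcnh9xl times one column of the
   model matrix to one latent variable at a time, refit, and store
   (dev_perturbed - dev_baseline) / ydcnh9xl in kpzavbj3mat.
   Schematically, with step h = ydcnh9xl and latent variables
   nu = X C:

       dev0 = dev(nu)                          (this first call)
       for each latent dimension d and column j of X:
           grad[j, d] = (dev(nu + h * X[, j] along d) - dev0) / h
*/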
vcao6(lncwkfq7, tlgduey8, ufgqj9ck, m0ibglfx, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, vc6hatuj, fasrkub3, ges1xpkr, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, zjkrtol8, xui7hqwl, wkumc9iddev0, ajul8wkv, y7sdgtqi, psdvgce3, qfozcl5b, hdnw2fts, lamvec, wbkq9zyi, ezlgm2up, lqsahu0r, which, kispwgx3, mbvnaor6, hjm2ktyr, jnxpuym2, hnpt1zym, iz2nbfjc, ifys6woa, rpyis2kc, gkdx5jals, nbzjkpi3, lindex, acpios9q, jwbkl9fp); y7sdgtqi[3 + *afpc0kns + *afpc0kns] = ghdetj8v; } fpdlcqk9atujnxb8 = atujnxb8; for (xvr7bonh = 1; xvr7bonh <= *eoviz2fb; xvr7bonh++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9atujnxb8 *= ydcnh9xl; fpdlcqk9atujnxb8++; } } for (hpmwnav2 = 1; hpmwnav2 <= xwdf5ltg; hpmwnav2++) { fpdlcqk9atujnxb8 = atujnxb8; // + (xvr7bonh-1) * *ftnjamu2; for (xvr7bonh = 1; xvr7bonh <= *eoviz2fb; xvr7bonh++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9lncwkfq7++ = *fpdlcqk9yxiwebc5++ + *fpdlcqk9atujnxb8++; } xui7hqwl[4] = 0; if (vtsou9pz == 1) { vcao6(lncwkfq7, tlgduey8, ufgqj9ck, m0ibglfx, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, vc6hatuj, fasrkub3, ges1xpkr, ftnjamu2, wy1vqfzu, afpc0kns, br5ovgcj, npjlv3mr, zjkrtol8, xui7hqwl, tlq9wpes, zshtfg8c, y7sdgtqi, psdvgce3, qfozcl5b, hdnw2fts, lamvec, wbkq9zyi, ezlgm2up, lqsahu0r, which, kispwgx3, mbvnaor6, hjm2ktyr, jnxpuym2, hnpt1zym, iz2nbfjc, ifys6woa, rpyis2kc, gkdx5jals, nbzjkpi3, lindex, acpios9q, jwbkl9fp); y7sdgtqi[3 + *afpc0kns + *afpc0kns] = ghdetj8v; } if (*zjkrtol8 != 0) { Rprintf("Warning: failured to converge in vdcao6. \n"); Rprintf("Continuing.\n"); } *fpdlcqk9kpzavbj3mat++ = (*tlq9wpes - *wkumc9iddev0) / ydcnh9xl; } if (xwdf5ltg > 1) { fpdlcqk9lncwkfq7 = lncwkfq7 + (hpmwnav2-1) * *ftnjamu2; fpdlcqk9yxiwebc5 = wkumc9idyxiwebc5 + (hpmwnav2-1) * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) *fpdlcqk9lncwkfq7++ = *fpdlcqk9yxiwebc5++; } } Free(wkumc9idyxiwebc5); Free(wkumc9iddev0 ); Free(wkumc9idlxyst1eb); Free(wkumc9idzyodca3j); xui7hqwl[4] = idlosrw8; } void yiumjq3npnm1or(double *objzgdk0, double *lfu2qhid) { int sn; double R1, R2, y, y2, y3, y4, y5, y6, y7; double erf, erfc, z, z2, z3, z4; double SQRT2 = 1.414213562373095049e0, SQRTPI = 1.772453850905516027e0, ULIMIT = 20.0e0, P10 = 242.66795523053175e0, P11 = 21.979261618294152e0, P12 = 6.9963834886191355e0, P13 = -.035609843701815385e0, Q10 = 215.05887586986120e0, Q11 = 91.164905404514901e0, Q12 = 15.082797630407787e0, Q13 = 1.0e0, P20 = 300.4592610201616005e0, P21 = 451.9189537118729422e0, P22 = 339.3208167343436870e0, P23 = 152.9892850469404039e0, P24 = 43.16222722205673530e0, P25 = 7.211758250883093659e0, P26 = .5641955174789739711e0, P27 = -.0000001368648573827167067e0, Q20 = 300.4592609569832933e0, Q21 = 790.9509253278980272e0, Q22 = 931.3540948506096211e0, Q23 = 638.9802644656311665e0, Q24 = 277.5854447439876434e0, Q25 = 77.00015293522947295e0, Q26 = 12.78272731962942351e0, Q27 = 1.0e0, P30 = -.00299610707703542174e0, P31 = -.0494730910623250734e0, P32 = -.226956593539686930e0, P33 = -.278661308609647788e0, P34 = -.0223192459734184686e0, Q30 = .0106209230528467918e0, Q31 = .191308926107829841e0, Q32 = 1.05167510706793207e0, Q33 = 1.98733201817135256e0, Q34 = 1.0e0; if (*objzgdk0 < -ULIMIT) { *lfu2qhid = 2.753624e-89; return; } if (*objzgdk0 > ULIMIT) { *lfu2qhid = 1.0e0; return; } y = *objzgdk0 / SQRT2; if (y < 0.0e0) { y = -y; sn = -1; } else { sn = 1; } y2 = y * y; y4 = y2 * y2; y6 = y4 * y2; if (y < 0.46875e0) { R1 = P10 + P11 * y2 + P12 * y4 + P13 * y6; R2 = Q10 + Q11 * y2 + Q12 * y4 + Q13 * y6; erf = y * 
R1 / R2; *lfu2qhid = (sn == 1) ? 0.5e0 + 0.5 * erf : 0.5e0 - 0.5 * erf; } else if (y < 4.0e0) { y3 = y2 * y; y5 = y4 * y; y7 = y6 * y; R1 = P20 + P21 * y + P22 * y2 + P23 * y3 + P24 * y4 + P25 * y5 + P26 * y6 + P27 * y7; R2 = Q20 + Q21 * y + Q22 * y2 + Q23 * y3 + Q24 * y4 + Q25 * y5 + Q26 * y6 + Q27 * y7; erfc = exp(-y2) * R1 / R2; *lfu2qhid = (sn == 1) ? 1.0 - 0.5 * erfc : 0.5 * erfc; } else { z = y4; z2 = z * z; z3 = z2 * z; z4 = z2 * z2; R1 = P30 + P31 * z + P32 * z2 + P33 * z3 + P34 * z4; R2 = Q30 + Q31 * z + Q32 * z2 + Q33 * z3 + Q34 * z4; erfc = (exp(-y2)/y) * (1.0 / SQRTPI + R1 / (R2 * y2)); *lfu2qhid = (sn == 1) ? 1.0 - 0.5 * erfc : 0.5 * erfc; } } void yiumjq3npnm1ow(double objzgdk0[], double lfu2qhid[], int *f8yswcat) { int ayfnwr1v; for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) { yiumjq3npnm1or(objzgdk0++, lfu2qhid++); } } VGAM/src/rgam3.c0000644000176200001440000006644513135276761013002 0ustar liggesusers #include #include #include #include #include void n5aioudkdnaoqj0l(double *qgnl3toc, double sjwyig9t[], double bhcji9gl[], double po8rwsmy[], int *kuzxj1lo, int *acpios9q, double gkdx5jal[], double rpyis2kc[], double imdvf4hx[], double ifys6woa[], double *wbkq9zyi, double jstx4uwe[4], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double *tt2, int *cvnjhg2u, int l3zpbstu[3], int *xtov9rbf, int *wep0oibc, int *fbd5yktj); void n5aioudkhbzuprs6(double *qgnl3toc, double sjwyig9t[], double bhcji9gl[], double po8rwsmy[], int *kuzxj1lo, int *acpios9q, double gkdx5jal[], double *rpyis2kc, double *imdvf4hx, double *ifys6woa, double *i9mwnvqt, int *pn9eowxc, int *ic5aesxku, double *mynl7uaq, double *zustx4fw, double *nbe4gvpq, double *qaltf0nz, int *cvnjhg2u, double xwy[], double zvau2lct[], double f6lsuzax[], double fvh2rwtc[], double dcfir2no[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double *tt2, double buhyalv4[], double fulcp8wa[], double plj0trqx[], int *xtov9rbf, int *wep0oibc, int *fbd5yktj); void n5aioudkzosq7hub(double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double gkdx5jal[], int *acpios9q); void n5aioudkvmnweiy2(double buhyalv4[], double fulcp8wa[], double plj0trqx[], int *xtov9rbf, int *acpios9q, int *wep0oibc, int *iflag); void n5aioudkwmhctl9x( double *qgnl3toc, double sjwyig9t[], double po8rwsmy[], int *kuzxj1lo, int *acpios9q, int *pn9eowxc, // int *icrit, double gkdx5jal[], double rpyis2kc[], double imdvf4hx[], double ifys6woa[], double *i9mwnvqt, double xwy[], double *qcpiaj7f, double zvau2lct[], double f6lsuzax[], double fvh2rwtc[], double dcfir2no[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double buhyalv4[], double fulcp8wa[], double plj0trqx[], int *xtov9rbf, int *wep0oibc, int *algpft4y); void n5aioudkgt9iulbf(double sjwyig9t[], double ghz9vuba[], double po8rwsmy[], double gkdx5jal[], int *rvy1fpli, int *kuzxj1lo, double zyupcmk6[], double zvau2lct[], double f6lsuzax[], double fvh2rwtc[], double dcfir2no[]); void F77_NAME(vinterv)(double*, int*, double*, int*, int*); void F77_NAME(vbsplvd)(double*, int*, double*, int*, double*, double*, int*); void F77_NAME(dpbfa8)(double*, int*, int*, int*, int*); void F77_NAME(dpbsl8)(double*, int*, int*, int*, double*); void F77_NAME(wbvalue)(double*, double*, int*, int*, double*, int*, double*); void n5aioudkdnaoqj0l(double *qgnl3toc, double sjwyig9t[], double bhcji9gl[], double po8rwsmy[], int *kuzxj1lo, int *acpios9q, double gkdx5jal[], double rpyis2kc[], double imdvf4hx[], double 
ifys6woa[], double *wbkq9zyi, double jstx4uwe[4], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double *tt2, int *cvnjhg2u, int l3zpbstu[3], int *xtov9rbf, int *wep0oibc, int *fbd5yktj) { double *wkumc9idxwy, *wkumc9idbuhyalv4, *wkumc9idzvau2lct, *wkumc9idf6lsuzax, *wkumc9idfvh2rwtc, *wkumc9iddcfir2no, *wkumc9idfulcp8wa, *wkumc9idplj0trqx; wkumc9idxwy = Calloc(*acpios9q, double); wkumc9idzvau2lct = Calloc(*acpios9q, double); wkumc9idf6lsuzax = Calloc(*acpios9q, double); wkumc9idfvh2rwtc = Calloc(*acpios9q, double); wkumc9iddcfir2no = Calloc(*acpios9q, double); wkumc9idbuhyalv4 = Calloc(*xtov9rbf * *acpios9q, double); wkumc9idfulcp8wa = Calloc(*xtov9rbf * *acpios9q, double); wkumc9idplj0trqx = Calloc( (int) 1 , double); n5aioudkhbzuprs6(qgnl3toc, sjwyig9t, bhcji9gl, po8rwsmy, kuzxj1lo, acpios9q, gkdx5jal, rpyis2kc, imdvf4hx, ifys6woa, wbkq9zyi, l3zpbstu + 1, l3zpbstu + 2, jstx4uwe, jstx4uwe + 1, jstx4uwe + 2, jstx4uwe + 3, cvnjhg2u, wkumc9idxwy, wkumc9idzvau2lct, wkumc9idf6lsuzax, wkumc9idfvh2rwtc, wkumc9iddcfir2no, xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, tt2, wkumc9idbuhyalv4, wkumc9idfulcp8wa, wkumc9idplj0trqx, xtov9rbf, wep0oibc, fbd5yktj); Free(wkumc9idxwy); Free(wkumc9idbuhyalv4); Free(wkumc9idzvau2lct); Free(wkumc9idf6lsuzax); Free(wkumc9idfvh2rwtc); Free(wkumc9iddcfir2no); Free(wkumc9idfulcp8wa); Free(wkumc9idplj0trqx); } void n5aioudkhbzuprs6(double *qgnl3toc, double sjwyig9t[], double bhcji9gl[], double po8rwsmy[], int *kuzxj1lo, int *acpios9q, double gkdx5jal[], double *rpyis2kc, double *imdvf4hx, double *ifys6woa, double *wbkq9zyi, int *pn9eowxc, int *ic5aesxku, double *mynl7uaq, double *zustx4fw, double *nbe4gvpq, double *qaltf0nz, int *cvnjhg2u, double xwy[], double zvau2lct[], double f6lsuzax[], double fvh2rwtc[], double dcfir2no[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double *tt2, double buhyalv4[], double fulcp8wa[], double plj0trqx[], int *xtov9rbf, int *wep0oibc, int *fbd5yktj) { static const double c_Gold = 0.381966011250105151795413165634; double tt1 = 0.0, g2dnwteb, wkumc9ida, wkumc9idb, wkumc9idd, wkumc9ide, wkumc9idxm, wkumc9idp, wkumc9idq, wkumc9idr, // qaltf0nz, Tol1, Tol2, wkumc9idu, wkumc9idv, wkumc9idw, wkumc9idfu, wkumc9idfv, wkumc9idfw, wkumc9idfx, wkumc9idx, wkumc9idax, wkumc9idbx; int ayfnwr1v, viter = 0; double yjpnro8d = 8.0e88, bk3ymcih = 0.0e0, *qcpiaj7f, qcpiaj7f0 = 0.0; qcpiaj7f = &qcpiaj7f0; g2dnwteb = bk3ymcih; bk3ymcih += bk3ymcih; bk3ymcih *= bk3ymcih; bk3ymcih += g2dnwteb; wkumc9idd = 0.0; wkumc9idfu = 0.0e0; wkumc9idu = 0.0e0; if (*cvnjhg2u == 0) { n5aioudkzosq7hub(xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, gkdx5jal, acpios9q); *tt2 = 0.0; for (ayfnwr1v = 3; ayfnwr1v <= (*acpios9q - 3); ayfnwr1v++) { *tt2 += xecbg0pf[ayfnwr1v-1]; } *cvnjhg2u = 1; } else { } n5aioudkgt9iulbf(sjwyig9t, bhcji9gl, po8rwsmy, gkdx5jal, kuzxj1lo, acpios9q, xwy, zvau2lct, f6lsuzax, fvh2rwtc, dcfir2no); for (ayfnwr1v = 3; ayfnwr1v <= (*acpios9q - 3); ayfnwr1v++) { tt1 += zvau2lct[ayfnwr1v-1]; } g2dnwteb = tt1 / *tt2; if (*pn9eowxc == 1) { *mynl7uaq = g2dnwteb * pow(16.0, *wbkq9zyi * 6.0 - 2.0); n5aioudkwmhctl9x(qgnl3toc, sjwyig9t, po8rwsmy, kuzxj1lo, acpios9q, pn9eowxc, // icrit, (icrit used to be used solely) gkdx5jal, rpyis2kc, imdvf4hx, ifys6woa, mynl7uaq, xwy, qcpiaj7f, // Not used here zvau2lct, f6lsuzax, fvh2rwtc, dcfir2no, xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, buhyalv4, fulcp8wa, plj0trqx, xtov9rbf, wep0oibc, fbd5yktj); return; } wkumc9idax = *mynl7uaq; wkumc9idbx = *zustx4fw; /* Initialization. 
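   Editorial note, not part of the original source: the loop below is
   a golden-section search with parabolic interpolation (Brent's
   method) for the log-smoothing parameter over [ax, bx]; the constant
   c_Gold = 0.38196... is (3 - sqrt(5))/2.  At each trial value x the
   actual penalty multiplier is recovered as

       mynl7uaq = g2dnwteb * 16^(6*x - 2)

   where g2dnwteb is the ratio of the (interior) traces of X'WX and of
   the penalty matrix computed above, and n5aioudkwmhctl9x() returns
   the fitting criterion in qcpiaj7f.  A sketch of one iteration,
   assuming f() denotes that criterion:

       xm = 0.5*(a + b)
       if |x - xm| <= Tol2 - 0.5*(b - a), or viter > ic5aesxku: stop
       try a parabolic step through (v, f(v)), (w, f(w)), (x, f(x));
       if the step is unusable, fall back to d = c_Gold * e
       evaluate u = x + d and update the bracket {a, b, v, w, x}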
*/ wkumc9ida = wkumc9idax; wkumc9idb = wkumc9idbx; wkumc9idv = wkumc9ida + c_Gold * (wkumc9idb - wkumc9ida); wkumc9idw = wkumc9idx = wkumc9idv; wkumc9ide = 0.0e0; *wbkq9zyi = wkumc9idx; *mynl7uaq = g2dnwteb * pow((double) 16.0, (double) *wbkq9zyi * 6.0 - 2.0); n5aioudkwmhctl9x(qgnl3toc, sjwyig9t, po8rwsmy, kuzxj1lo, acpios9q, pn9eowxc, // icrit, gkdx5jal, rpyis2kc, imdvf4hx, ifys6woa, mynl7uaq, xwy, qcpiaj7f, zvau2lct, f6lsuzax, fvh2rwtc, dcfir2no, xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, buhyalv4, fulcp8wa, plj0trqx, xtov9rbf, wep0oibc, fbd5yktj); wkumc9idfx = *qcpiaj7f; wkumc9idfv = wkumc9idfw = wkumc9idfx; while (*fbd5yktj == 0) { viter++; wkumc9idxm = 0.5e0 * (wkumc9ida + wkumc9idb); Tol1 = *qaltf0nz * fabs(wkumc9idx) + *nbe4gvpq / 3.0e0; Tol2 = 2.0e0 * Tol1; if ((fabs(wkumc9idx - wkumc9idxm) <= (Tol2 - 0.5 * (wkumc9idb - wkumc9ida))) || (viter > *ic5aesxku)) goto L_End; if ((fabs(wkumc9ide) <= Tol1) || (wkumc9idfx >= yjpnro8d) || (wkumc9idfv >= yjpnro8d) || (wkumc9idfw >= yjpnro8d)) goto a3bdsirf; wkumc9idr = (wkumc9idx - wkumc9idw) * (wkumc9idfx - wkumc9idfv); wkumc9idq = (wkumc9idx - wkumc9idv) * (wkumc9idfx - wkumc9idfw); wkumc9idp = (wkumc9idx - wkumc9idv) * wkumc9idq - (wkumc9idx - wkumc9idw) * wkumc9idr; wkumc9idq = 2.0e0 * (wkumc9idq - wkumc9idr); if (wkumc9idq > 0.0e0) wkumc9idp = -wkumc9idp; wkumc9idq = fabs(wkumc9idq); wkumc9idr = wkumc9ide; wkumc9ide = wkumc9idd; if (fabs(wkumc9idp) >= fabs(0.5 * wkumc9idq * wkumc9idr) || wkumc9idq == 0.0e0) { goto a3bdsirf; } if (wkumc9idp <= wkumc9idq * (wkumc9ida - wkumc9idx) || wkumc9idp >= wkumc9idq * (wkumc9idb - wkumc9idx)) goto a3bdsirf; wkumc9idd = wkumc9idp / wkumc9idq; wkumc9idu = wkumc9idx + wkumc9idd; if (wkumc9idu - wkumc9ida < Tol2 || wkumc9idb - wkumc9idu < Tol2) wkumc9idd = fsign(Tol1, wkumc9idxm - wkumc9idx); goto ceqzd1hi50; a3bdsirf: wkumc9ide = (wkumc9idx >= wkumc9idxm) ? wkumc9ida - wkumc9idx : wkumc9idb - wkumc9idx; wkumc9idd = c_Gold * wkumc9ide; ceqzd1hi50: wkumc9idu = wkumc9idx + ((fabs(wkumc9idd) >= Tol1) ? 
wkumc9idd : fsign(Tol1, wkumc9idd)); *wbkq9zyi = wkumc9idu; *mynl7uaq = g2dnwteb * pow((double) 16.0, (double) *wbkq9zyi * 6.0 - 2.0); n5aioudkwmhctl9x(qgnl3toc, sjwyig9t, po8rwsmy, kuzxj1lo, acpios9q, pn9eowxc, // icrit, gkdx5jal, rpyis2kc, imdvf4hx, ifys6woa, mynl7uaq, xwy, qcpiaj7f, zvau2lct, f6lsuzax, fvh2rwtc, dcfir2no, xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, buhyalv4, fulcp8wa, plj0trqx, xtov9rbf, wep0oibc, fbd5yktj); wkumc9idfu = *qcpiaj7f; if (wkumc9idfu > yjpnro8d) wkumc9idfu = 2.0e0 * yjpnro8d; if (wkumc9idfu <= wkumc9idfx) { if (wkumc9idu >= wkumc9idx) wkumc9ida = wkumc9idx; else wkumc9idb = wkumc9idx; wkumc9idv = wkumc9idw; wkumc9idfv = wkumc9idfw; wkumc9idw = wkumc9idx; wkumc9idfw = wkumc9idfx; wkumc9idx = wkumc9idu; wkumc9idfx = wkumc9idfu; } else { if (wkumc9idu < wkumc9idx) wkumc9ida = wkumc9idu; else wkumc9idb = wkumc9idu; if (wkumc9idfu <= wkumc9idfw || wkumc9idw == wkumc9idx) { wkumc9idv = wkumc9idw; wkumc9idfv = wkumc9idfw; wkumc9idw = wkumc9idu; wkumc9idfw = wkumc9idfu; } else if (wkumc9idfu <= wkumc9idfv || wkumc9idv == wkumc9idx || wkumc9idv == wkumc9idw) { wkumc9idv = wkumc9idu; wkumc9idfv = wkumc9idfu; } } } L_End: bk3ymcih = 0.0e0; *wbkq9zyi = wkumc9idx; *qcpiaj7f = wkumc9idfx; return; } void n5aioudkzosq7hub(double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double gkdx5jal[], int *acpios9q) { int dqlr5bse, pqzfxw4i, bvsquk3z = 3, h2dpsbkr = 4, nkplus1 = *acpios9q + 1; int ayfnwr1v, gp1jxzuh, yq6lorbx; int urohxe6t; double g9fvdrbw[12], ms0qypiw[16], yw1[4], yw2[4], wrk1, othird = 1.0 / 3.0, *qnwamo0e0, *qnwamo0e1, *qnwamo0e2, *qnwamo0e3; qnwamo0e0 = xecbg0pf; qnwamo0e1 = z4grbpiq; qnwamo0e2 = d7glzhbj; qnwamo0e3 = v2eydbxs; for (ayfnwr1v = 0; ayfnwr1v < *acpios9q; ayfnwr1v++) { *qnwamo0e0++ = *qnwamo0e1++ = *qnwamo0e2++ = *qnwamo0e3++ = 0.0e0; } for (ayfnwr1v = 1; ayfnwr1v <= *acpios9q; ayfnwr1v++) { F77_CALL(vinterv)(gkdx5jal, &nkplus1, gkdx5jal + ayfnwr1v-1, &dqlr5bse, &pqzfxw4i); F77_CALL(vbsplvd)(gkdx5jal, &h2dpsbkr, gkdx5jal + ayfnwr1v - 1, &dqlr5bse, ms0qypiw, g9fvdrbw, &bvsquk3z); for (gp1jxzuh = 1; gp1jxzuh <= 4; gp1jxzuh++) { yw1[gp1jxzuh-1] = g9fvdrbw[gp1jxzuh-1 + 2*4]; } F77_CALL(vbsplvd)(gkdx5jal, &h2dpsbkr, gkdx5jal + ayfnwr1v, &dqlr5bse, ms0qypiw, g9fvdrbw, &bvsquk3z); for (gp1jxzuh = 1; gp1jxzuh <= 4; gp1jxzuh++) { yw2[gp1jxzuh-1] = g9fvdrbw[gp1jxzuh-1 + 2*4] - yw1[gp1jxzuh-1]; } wrk1 = gkdx5jal[ayfnwr1v] - gkdx5jal[ayfnwr1v-1]; if (dqlr5bse >= 4) { for (gp1jxzuh = 1; gp1jxzuh <= 4; gp1jxzuh++) { yq6lorbx = gp1jxzuh; urohxe6t = dqlr5bse - 4 + gp1jxzuh; xecbg0pf[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); yq6lorbx = gp1jxzuh + 1; if (yq6lorbx <= 4) { z4grbpiq[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); } yq6lorbx = gp1jxzuh + 2; if (yq6lorbx <= 4) { d7glzhbj[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); } yq6lorbx = gp1jxzuh + 3; if (yq6lorbx <= 4) { v2eydbxs[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); } } } else if (dqlr5bse == 3) { for (gp1jxzuh = 1; gp1jxzuh <= 3; gp1jxzuh++) { yq6lorbx = gp1jxzuh; 
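/* Editorial note, not part of the original source: every
   accumulation in this function is an instance of one exact
   integral.  On a knot interval of width wrk1 the second derivative
   of a cubic B-spline is linear, so with yw1 = B'' at the left knot
   and yw2 = its change across the interval,

       integral of B_i'' B_j'' over the interval
         = wrk1 * ( yw1_i*yw1_j
                  + (yw1_i*yw2_j + yw2_i*yw1_j)/2
                  + yw2_i*yw2_j/3 )

   which follows from integrating (a1 + b1*t)(a2 + b2*t) on [0, 1].
   The arrays xecbg0pf, z4grbpiq, d7glzhbj and v2eydbxs collect the
   main and first three off-diagonal bands of the resulting
   symmetric penalty matrix Sigma. */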
urohxe6t = dqlr5bse - 3 + gp1jxzuh; xecbg0pf[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); yq6lorbx = gp1jxzuh + 1; if (yq6lorbx <= 3) { z4grbpiq[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); } yq6lorbx = gp1jxzuh + 2; if (yq6lorbx <= 3) { d7glzhbj[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); } } } else if (dqlr5bse == 2) { for (gp1jxzuh = 1; gp1jxzuh <= 2; gp1jxzuh++) { yq6lorbx = gp1jxzuh; urohxe6t = dqlr5bse - 2 + gp1jxzuh; xecbg0pf[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); yq6lorbx = gp1jxzuh + 1; if (yq6lorbx <= 2) { z4grbpiq[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); } } } else if (dqlr5bse == 1) { for (gp1jxzuh = 1; gp1jxzuh <= 1; gp1jxzuh++) { yq6lorbx = gp1jxzuh; urohxe6t = dqlr5bse - 1 + gp1jxzuh; xecbg0pf[urohxe6t-1] += wrk1 * (yw1[gp1jxzuh-1]*yw1[yq6lorbx-1] + (yw2[gp1jxzuh-1]*yw1[yq6lorbx-1] + yw2[yq6lorbx-1]*yw1[gp1jxzuh-1]) * 0.50 + yw2[gp1jxzuh-1]*yw2[yq6lorbx-1] * othird); } } } } void n5aioudkvmnweiy2(double buhyalv4[], double fulcp8wa[], double plj0trqx[], int *xtov9rbf, int *acpios9q, int *wep0oibc, int *iflag) { int ayfnwr1v, yq6lorbx, gp1jxzuh; double wjm3[3], wjm2[2], wjm1[1], c0, c1, c2, c3; double pcsuow9k, qdbgu6oi, upwkh5xz, rul5fnyd, ueydbrg6, plce2srm, k3yvomnh, bfdjhu7l, ctfvwdu0; c1 = c2 = c3 = 0.0e0; wjm3[0] = wjm3[1] = wjm3[2] = wjm2[0] = wjm2[1] = wjm1[0] = 0.0e0; for (ayfnwr1v = 1; ayfnwr1v <= *acpios9q; ayfnwr1v++) { yq6lorbx = *acpios9q - ayfnwr1v + 1; c0 = 1.0e0 / buhyalv4[3 + (yq6lorbx-1) * *xtov9rbf]; if (yq6lorbx <= (*acpios9q-3)) { c1 = buhyalv4[0 + (yq6lorbx+2) * *xtov9rbf] * c0; c2 = buhyalv4[1 + (yq6lorbx+1) * *xtov9rbf] * c0; c3 = buhyalv4[2 + (yq6lorbx+0) * *xtov9rbf] * c0; } else if (yq6lorbx == (*acpios9q - 2)) { c1 = 0.0e0; c2 = buhyalv4[1 + (yq6lorbx+1) * *xtov9rbf] * c0; c3 = buhyalv4[2 + yq6lorbx * *xtov9rbf] * c0; } else if (yq6lorbx == (*acpios9q - 1)) { c1 = c2 = 0.0e0; c3 = buhyalv4[2 + yq6lorbx * *xtov9rbf] * c0; } else if (yq6lorbx == *acpios9q) { c1 = c2 = c3 = 0.0e0; } pcsuow9k = c1 * wjm3[0]; qdbgu6oi = c2 * wjm3[1]; upwkh5xz = c3 * wjm3[2]; rul5fnyd = c1 * wjm3[1]; ueydbrg6 = c2 * wjm2[0]; plce2srm = c3 * wjm2[1]; k3yvomnh = c1 * wjm3[2]; bfdjhu7l = c2 * wjm2[1]; ctfvwdu0 = c3 * wjm1[0]; fulcp8wa[0 + (yq6lorbx-1) * *xtov9rbf] = 0.0 - (pcsuow9k+qdbgu6oi+upwkh5xz); fulcp8wa[1 + (yq6lorbx-1) * *xtov9rbf] = 0.0 - (rul5fnyd+ueydbrg6+plce2srm); fulcp8wa[2 + (yq6lorbx-1) * *xtov9rbf] = 0.0 - (k3yvomnh+bfdjhu7l+ctfvwdu0); fulcp8wa[3 + (yq6lorbx-1) * *xtov9rbf] = pow(c0, (double) 2.0) + c1 * (pcsuow9k + 2.0e0 * (qdbgu6oi + upwkh5xz)) + c2 * (ueydbrg6 + 2.0e0 * plce2srm) + c3 * ctfvwdu0; wjm3[0] = wjm2[0]; wjm3[1] = wjm2[1]; wjm3[2] = fulcp8wa[1 + (yq6lorbx-1) * *xtov9rbf]; wjm2[0] = wjm1[0]; wjm2[1] = fulcp8wa[2 + (yq6lorbx-1) * *xtov9rbf]; wjm1[0] = fulcp8wa[3 + (yq6lorbx-1) * *xtov9rbf]; } if (*iflag == 0) { return; } Rprintf("plj0trqx must not be a double of length one!\n"); for 
(ayfnwr1v = 1; ayfnwr1v <= *acpios9q; ayfnwr1v++) { yq6lorbx = *acpios9q - ayfnwr1v + 1; for (gp1jxzuh = 1; gp1jxzuh <= 4 && yq6lorbx + gp1jxzuh-1 <= *acpios9q; gp1jxzuh++) { plj0trqx[yq6lorbx-1 + (yq6lorbx+gp1jxzuh-2) * *wep0oibc] = fulcp8wa[4-gp1jxzuh + (yq6lorbx-1) * *xtov9rbf]; } } for (ayfnwr1v = 1; ayfnwr1v <= *acpios9q; ayfnwr1v++) { yq6lorbx = *acpios9q - ayfnwr1v + 1; for (gp1jxzuh = yq6lorbx-4; gp1jxzuh >= 1; gp1jxzuh--) { c0 = 1.0 / buhyalv4[3 + (gp1jxzuh-1) * *xtov9rbf]; c1 = buhyalv4[0 + (gp1jxzuh+2) * *xtov9rbf] * c0; c2 = buhyalv4[1 + (gp1jxzuh+1) * *xtov9rbf] * c0; c3 = buhyalv4[2 + gp1jxzuh * *xtov9rbf] * c0; plj0trqx[gp1jxzuh-1 + (yq6lorbx-1) * *wep0oibc] = 0.0e0 - ( c1 * plj0trqx[gp1jxzuh+2 + (yq6lorbx-1) * *wep0oibc] + c2 * plj0trqx[gp1jxzuh+1 + (yq6lorbx-1) * *wep0oibc] + c3 * plj0trqx[gp1jxzuh + (yq6lorbx-1) * *wep0oibc] ); } } } void n5aioudkwmhctl9x(double *qgnl3toc, double sjwyig9t[], double po8rwsmy[], int *kuzxj1lo, int *acpios9q, int *pn9eowxc, // int *icrit, double gkdx5jal[], double rpyis2kc[], double imdvf4hx[], double ifys6woa[], double *i9mwnvqt, double xwy[], double *qcpiaj7f, double zvau2lct[], double f6lsuzax[], double fvh2rwtc[], double dcfir2no[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double buhyalv4[], double fulcp8wa[], double plj0trqx[], int *xtov9rbf, int *wep0oibc, int *algpft4y) { double ms0qypiw[16], b0, b1, b2, b3, qaltf0nz = 0.1e-10, g9fvdrbw[4], qtce8hzo, *chw8lzty, egwbdua212 = 0.0e0; int yu6izdrc = 0, pqneb2ra = 1, bvsquk3z = 3, h2dpsbkr = 4, pqzfxw4i, ayfnwr1v, yq6lorbx, dqlr5bse, nkp1 = *acpios9q + 1; double *qnwamo0e1, *qnwamo0e2; qnwamo0e1 = rpyis2kc; qnwamo0e2 = xwy; for (ayfnwr1v = 0; ayfnwr1v < *acpios9q; ayfnwr1v++) { *qnwamo0e1++ = *qnwamo0e2++; } qnwamo0e1 = zvau2lct; qnwamo0e2 = xecbg0pf; for (ayfnwr1v = 0; ayfnwr1v < *acpios9q; ayfnwr1v++) { buhyalv4[3 + ayfnwr1v * *xtov9rbf] = *qnwamo0e1++ + *i9mwnvqt * *qnwamo0e2++; } qnwamo0e1 = f6lsuzax; qnwamo0e2 = z4grbpiq; for (ayfnwr1v = 1; ayfnwr1v <= (*acpios9q-1); ayfnwr1v++) { buhyalv4[2 + ayfnwr1v * *xtov9rbf] = *qnwamo0e1++ + *i9mwnvqt * *qnwamo0e2++; } qnwamo0e1 = fvh2rwtc; qnwamo0e2 = d7glzhbj; for (ayfnwr1v = 1; ayfnwr1v <= (*acpios9q-2); ayfnwr1v++) { buhyalv4[1 + (ayfnwr1v+1) * *xtov9rbf] = *qnwamo0e1++ + *i9mwnvqt * *qnwamo0e2++; } qnwamo0e1 = dcfir2no; qnwamo0e2 = v2eydbxs; for (ayfnwr1v = 1; ayfnwr1v <= (*acpios9q-3); ayfnwr1v++) { buhyalv4[ (ayfnwr1v+2) * *xtov9rbf] = *qnwamo0e1++ + *i9mwnvqt * *qnwamo0e2++; } F77_CALL(dpbfa8)(buhyalv4, xtov9rbf, acpios9q, &bvsquk3z, algpft4y); if (*algpft4y != 0) { Rprintf("In C function wmhctl9x; Error:\n"); Rprintf("Leading minor of order %d is not pos-def\n", *algpft4y); return; } F77_CALL(dpbsl8)(buhyalv4, xtov9rbf, acpios9q, &bvsquk3z, rpyis2kc); chw8lzty = sjwyig9t; qnwamo0e1 = imdvf4hx; for (ayfnwr1v = 1; ayfnwr1v <= *kuzxj1lo; ayfnwr1v++) { F77_CALL(wbvalue)(gkdx5jal, rpyis2kc, acpios9q, &h2dpsbkr, chw8lzty++, &yu6izdrc, qnwamo0e1++); } n5aioudkvmnweiy2(buhyalv4, fulcp8wa, plj0trqx, xtov9rbf, acpios9q, wep0oibc, &yu6izdrc); //Rprintf("first one n5aioudkwmhctl9x pow(po8rwsmy[0], (double) 1.0) = "); //Rprintf("%9.5e\n", pow(po8rwsmy[0], (double) 1.0)); chw8lzty = sjwyig9t; for (ayfnwr1v = 1; ayfnwr1v <= *kuzxj1lo; ayfnwr1v++) { F77_CALL(vinterv)(gkdx5jal, &nkp1, chw8lzty, &dqlr5bse, &pqzfxw4i); if (pqzfxw4i == -1) { dqlr5bse = 4; *chw8lzty = gkdx5jal[3] + qaltf0nz; } else if (pqzfxw4i == 1) { dqlr5bse = *acpios9q; *chw8lzty = gkdx5jal[*acpios9q] - qaltf0nz; } yq6lorbx = dqlr5bse-3; 
F77_CALL(vbsplvd)(gkdx5jal, &h2dpsbkr, chw8lzty++, &dqlr5bse, ms0qypiw, g9fvdrbw, &pqneb2ra); b0 = g9fvdrbw[0]; b1 = g9fvdrbw[1]; b2 = g9fvdrbw[2]; b3 = g9fvdrbw[3]; qtce8hzo = (b0 * (fulcp8wa[3 + (yq6lorbx-1) * *xtov9rbf] * b0 + 2.0e0* (fulcp8wa[2 + (yq6lorbx-1) * *xtov9rbf] * b1 + fulcp8wa[1 + (yq6lorbx-1) * *xtov9rbf] * b2 + fulcp8wa[0 + (yq6lorbx-1) * *xtov9rbf] * b3)) + b1 * (fulcp8wa[3 + yq6lorbx * *xtov9rbf] * b1 + 2.0e0* (fulcp8wa[2 + yq6lorbx * *xtov9rbf] * b2 + fulcp8wa[1 + yq6lorbx * *xtov9rbf] * b3)) + b2 * (fulcp8wa[3 + (yq6lorbx+1) * *xtov9rbf] * b2 + 2.0e0* fulcp8wa[2 + (yq6lorbx+1) * *xtov9rbf] * b3) + fulcp8wa[3 + (yq6lorbx+2) * *xtov9rbf] * pow(b3, (double) 2.0)) * po8rwsmy[ayfnwr1v-1]; ifys6woa[ayfnwr1v-1] = qtce8hzo; } if (*pn9eowxc == 1) { return; } for (ayfnwr1v = 1; ayfnwr1v <= *kuzxj1lo; ayfnwr1v++) { egwbdua212 += ifys6woa[ayfnwr1v-1]; } *qcpiaj7f = pow(*qgnl3toc - egwbdua212, (double) 2.0); } void n5aioudkgt9iulbf(double sjwyig9t[], double ghz9vuba[], double po8rwsmy[], double gkdx5jal[], int *rvy1fpli, int *kuzxj1lo, double zyupcmk6[], double zvau2lct[], double f6lsuzax[], double fvh2rwtc[], double dcfir2no[]) { double g9fvdrbw[12]; /* 20140522 Effectively g9fvdrbw(4,3), just in case */ double ms0qypiw[16], wsvdbx3tk, wv2svdbx3tk, qaltf0nz = 0.1e-9; int ayfnwr1v, yq6lorbx, dqlr5bse, pqzfxw4i, nhnpt1zym1 = *kuzxj1lo + 1, pqneb2ra = 1, h2dpsbkr = 4; double *qnwamo0e0, *qnwamo0e1, *qnwamo0e2, *qnwamo0e3, *qnwamo0e4; qnwamo0e0 = zvau2lct; qnwamo0e1 = f6lsuzax; qnwamo0e2 = fvh2rwtc; qnwamo0e3 = dcfir2no; qnwamo0e4 = zyupcmk6; for (ayfnwr1v = 0; ayfnwr1v < *kuzxj1lo; ayfnwr1v++) { *qnwamo0e0++ = *qnwamo0e1++ = *qnwamo0e2++ = *qnwamo0e3++ = *qnwamo0e4++ = 0.0e0; } //Rprintf("first one n5aioudkgt9iulbf pow(po8rwsmy[0], (double) 1.0) = "); //Rprintf("%9.5e\n", pow(po8rwsmy[0], (double) 1.0)); for (ayfnwr1v = 1; ayfnwr1v <= *rvy1fpli; ayfnwr1v++) { F77_CALL(vinterv)(gkdx5jal, &nhnpt1zym1, sjwyig9t + ayfnwr1v - 1, &dqlr5bse, &pqzfxw4i); if (pqzfxw4i == 1) { if (sjwyig9t[ayfnwr1v-1] <= (gkdx5jal[dqlr5bse-1] + qaltf0nz)) { dqlr5bse--; } else { return; } } F77_CALL(vbsplvd)(gkdx5jal, &h2dpsbkr, sjwyig9t + ayfnwr1v - 1, &dqlr5bse, ms0qypiw, g9fvdrbw, &pqneb2ra); yq6lorbx = dqlr5bse - 4 + 1; wsvdbx3tk = po8rwsmy[ayfnwr1v-1]; wv2svdbx3tk = wsvdbx3tk * g9fvdrbw[0]; zyupcmk6[yq6lorbx-1] += wv2svdbx3tk * ghz9vuba[ayfnwr1v-1]; zvau2lct[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[0]; f6lsuzax[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[1]; fvh2rwtc[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[2]; dcfir2no[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[3]; yq6lorbx = dqlr5bse - 4 + 2; wv2svdbx3tk = wsvdbx3tk * g9fvdrbw[1]; zyupcmk6[yq6lorbx-1] += wv2svdbx3tk * ghz9vuba[ayfnwr1v-1]; zvau2lct[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[1]; f6lsuzax[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[2]; fvh2rwtc[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[3]; yq6lorbx = dqlr5bse - 4 + 3; wv2svdbx3tk = wsvdbx3tk * g9fvdrbw[2]; zyupcmk6[yq6lorbx-1] += wv2svdbx3tk * ghz9vuba[ayfnwr1v-1]; zvau2lct[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[2]; f6lsuzax[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[3]; yq6lorbx = dqlr5bse; wv2svdbx3tk = wsvdbx3tk * g9fvdrbw[3]; zyupcmk6[yq6lorbx-1] += wv2svdbx3tk * ghz9vuba[ayfnwr1v-1]; zvau2lct[yq6lorbx-1] += wv2svdbx3tk * g9fvdrbw[3]; } } VGAM/src/vlinpack3.f0000644000176200001440000004505013135276761013653 0ustar liggesusersc 1/4/00 c The following code is linpack.f from GAMFIT c For R.1.0-0, subroutine dshift is needed c 12/7/02; T.Yee c I've modifed the routines in this file so that reals become double c precisions. 
The subroutine and functions may have a "8" put after it c to (hopefully) make it unique. c All this for the VGAM package. c For example, "real function ddot" to "double precision function ddot8". c I might add a "implicit logical (a-z)" line to pick up errors. subroutine daxpy8(n,da,dx,incx,dy,incy) implicit logical (a-z) c c constant times a vector plus a vector. c uses unrolled loops for increments equal to one. c jack dongarra, linpack, 3/11/78. c c c c 20130419: orig.: c double precision dx(1),dy(1),da c c c c double precision dx(*),dy(*),da integer i,incx,incy,m,mp1,n c Undeclared, so added by T.Yee integer ix, iy c if(n.le.0)return if (da .eq. 0.0d0) return if(incx.eq.1.and.incy.eq.1)go to 20 c c code for unequal increments or equal increments c not equal to 1 c ix = 1 iy = 1 if(incx.lt.0)ix = (-n+1)*incx + 1 if(incy.lt.0)iy = (-n+1)*incy + 1 do 10 i = 1,n dy(iy) = dy(iy) + da*dx(ix) ix = ix + incx iy = iy + incy 10 continue return c c code for both increments equal to 1 c c c clean-up loop c 20 m = mod(n,4) if( m .eq. 0 ) go to 40 do 30 i = 1,m dy(i) = dy(i) + da*dx(i) 30 continue if( n .lt. 4 ) return 40 mp1 = m + 1 do 50 i = mp1,n,4 dy(i) = dy(i) + da*dx(i) dy(i + 1) = dy(i + 1) + da*dx(i + 1) dy(i + 2) = dy(i + 2) + da*dx(i + 2) dy(i + 3) = dy(i + 3) + da*dx(i + 3) 50 continue return end subroutine dcopy8(n,dx,incx,dy,incy) implicit logical (a-z) c c copies a vector, x, to a vector, y. c uses unrolled loops for increments equal to one. c jack dongarra, linpack, 3/11/78. c double precision dx(*),dy(*) integer i,incx,incy,ix,iy,m,mp1,n c if(n.le.0)return if(incx.eq.1.and.incy.eq.1)go to 20 c c code for unequal increments or equal increments c not equal to 1 c ix = 1 iy = 1 if(incx.lt.0)ix = (-n+1)*incx + 1 if(incy.lt.0)iy = (-n+1)*incy + 1 do 10 i = 1,n dy(iy) = dx(ix) ix = ix + incx iy = iy + incy 10 continue return c c code for both increments equal to 1 c c c clean-up loop c 20 m = mod(n,7) if( m .eq. 0 ) go to 40 do 30 i = 1,m dy(i) = dx(i) 30 continue if( n .lt. 7 ) return 40 mp1 = m + 1 do 50 i = mp1,n,7 dy(i) = dx(i) dy(i + 1) = dx(i + 1) dy(i + 2) = dx(i + 2) dy(i + 3) = dx(i + 3) dy(i + 4) = dx(i + 4) dy(i + 5) = dx(i + 5) dy(i + 6) = dx(i + 6) 50 continue return end double precision function ddot8(n,dx,incx,dy,incy) c c 12/7/02; T.Yee c I've modifed "real function ddot" to c "double precision function ddot8" for the VGAM package c I've added the "implicit logical (a-z)" line implicit logical (a-z) c c forms the dot product of two vectors. c uses unrolled loops for increments equal to one. c jack dongarra, linpack, 3/11/78. c double precision dx(*),dy(*),dtemp integer i,incx,incy,ix,iy,m,mp1,n c ddot8 = 0.0d0 dtemp = 0.0d0 if(n.le.0)return if(incx.eq.1.and.incy.eq.1)go to 20 c c code for unequal increments or equal increments c not equal to 1 c ix = 1 iy = 1 if(incx.lt.0)ix = (-n+1)*incx + 1 if(incy.lt.0)iy = (-n+1)*incy + 1 do 10 i = 1,n dtemp = dtemp + dx(ix)*dy(iy) ix = ix + incx iy = iy + incy 10 continue ddot8 = dtemp return c c code for both increments equal to 1 c c c clean-up loop c 20 m = mod(n,5) if( m .eq. 0 ) go to 40 do 30 i = 1,m dtemp = dtemp + dx(i)*dy(i) 30 continue if( n .lt. 
5 ) go to 60 40 mp1 = m + 1 do 50 i = mp1,n,5 dtemp = dtemp + dx(i)*dy(i) + dx(i + 1)*dy(i + 1) + * dx(i + 2)*dy(i + 2) + dx(i + 3)*dy(i + 3)+dx(i + 4)*dy(i + 4) 50 continue 60 ddot8 = dtemp return end double precision function dnrm28 ( n, dx,ldx, incx) implicit logical (a-z) c Undeclared, so added by T.Yee integer n, ldx, incx, i, j, nn integer next double precision dx(ldx), cutlo, cuthi, hitest, sum, * xmax,zero,one data zero, one /0.0d0, 1.0d0/ c c euclidean norm of the n-vector stored in dx() with storage c increment incx . c if n .le. 0 return with result = 0. c if n .ge. 1 then incx must be .ge. 1 c c c.l.lawson, 1978 jan 08 c c four phase method using two built-in constants that are c hopefully applicable to all machines. c cutlo = maximum of sqrt(u/eps) over all known machines. c cuthi = minimum of sqrt(v) over all known machines. c where c eps = smallest no. such that eps + 1. .gt. 1. c u = smallest positive no. (underflow limit) c v = largest no. (overflow limit) c c brief outline of algorithm.. c c phase 1 scans zero components. c move to phase 2 when a component is nonzero and .le. cutlo c move to phase 3 when a component is .gt. cutlo c move to phase 4 when a component is .ge. cuthi/m c where m = n for x() double precision and m = 2*n for complex. c c values for cutlo and cuthi.. c from the environmental parameters listed in the imsl converter c document the limiting values are as follows.. c cutlo, s.p. u/eps = 2**(-102) for honeywell. close seconds are c univac and dec at 2**(-103) c thus cutlo = 2**(-51) = 4.44089d-16 c cuthi, s.p. v = 2**127 for univac, honeywell, and dec. c thus cuthi = 2**(63.5) = 1.30438d19 c cutlo, d.p. u/eps = 2**(-67) for honeywell and dec. c thus cutlo = 2**(-33.5) = 8.23181d-11 c cuthi, d.p. same as s.p. cuthi = 1.30438d19 c data cutlo, cuthi / 8.232d-11, 1.304d19 / c data cutlo, cuthi / 4.441d-16, 1.304d19 / data cutlo, cuthi / 8.232d-11, 1.304d19 / c if(n .gt. 0) go to 10 dnrm28 = zero go to 300 c 10 next = 30 sum = zero nn = n * incx c begin main loop i = 1 c 20 go to next,(30, 50, 70, 110) 20 if(next .eq. 30) go to 30 if(next .eq. 50) go to 50 if(next .eq. 70) go to 70 if(next .eq. 110) go to 110 c An error!!! dnrm28 = 0.0d0 return c 30 if( dabs(dx(i)) .gt. cutlo) go to 85 next = 50 xmax = zero c c phase 1. sum is zero c 50 if( dx(i) .eq. zero) go to 200 if( dabs(dx(i)) .gt. cutlo) go to 85 c c prepare for phase 2. next = 70 go to 105 c c prepare for phase 4. c 100 i = j next = 110 sum = (sum / dx(i)) / dx(i) 105 xmax = dabs(dx(i)) go to 115 c c phase 2. sum is small. c scale to avoid destructive underflow. c 70 if( dabs(dx(i)) .gt. cutlo ) go to 75 c c common code for phases 2 and 4. c in phase 4 sum is large. scale to avoid overflow. c 110 if( dabs(dx(i)) .le. xmax ) go to 115 sum = one + sum * (xmax / dx(i))**2 xmax = dabs(dx(i)) go to 200 c 115 sum = sum + (dx(i)/xmax)**2 go to 200 c c c prepare for phase 3. c 75 sum = (sum * xmax) * xmax c c c for real or d.p. set hitest = cuthi/n c for complex set hitest = cuthi/(2*n) c c "float" changed to "dfloat" by T.Yee 85 hitest = cuthi/dfloat( n ) c c phase 3. sum is mid-range. no scaling. c do 95 j =i,nn,incx if(dabs(dx(j)) .ge. hitest) go to 100 sum = sum + dx(j)**2 95 continue dnrm28 = dsqrt( sum ) go to 300 c 200 continue i = i + incx if ( i .le. nn ) go to 20 c c end of main loop. c c compute square root and adjust for scaling. c dnrm28 = xmax * dsqrt(sum) 300 continue return end subroutine dscal8(n,da,dx,incx) implicit logical (a-z) c c scales a vector by a constant. 
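c editorial note, not part of the original source: dscal8 overwrites
c x with da*x over n entries spaced incx apart.  a hedged reference
c form of the general-increment branch below is simply
c
c     do i = 1, 1 + (n-1)*incx, incx
c        dx(i) = da*dx(i)
c     end do
c
c the unrolled code that follows is an equivalent, faster version
c for incx = 1.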
c uses unrolled loops for increment equal to one. c jack dongarra, linpack, 3/11/78. c double precision da,dx(*) integer i,incx,m,mp1,n,nincx c if(n.le.0)return if(incx.eq.1)go to 20 c c code for increment not equal to 1 c nincx = n*incx do 10 i = 1,nincx,incx dx(i) = da*dx(i) 10 continue return c c code for increment equal to 1 c c c clean-up loop c 20 m = mod(n,5) if( m .eq. 0 ) go to 40 do 30 i = 1,m dx(i) = da*dx(i) 30 continue if( n .lt. 5 ) return 40 mp1 = m + 1 do 50 i = mp1,n,5 dx(i) = da*dx(i) dx(i + 1) = da*dx(i + 1) dx(i + 2) = da*dx(i + 2) dx(i + 3) = da*dx(i + 3) dx(i + 4) = da*dx(i + 4) 50 continue return end subroutine dshift8(x,ldx,n,j,k) implicit logical (a-z) integer ldx,n,j,k double precision x(ldx,k), tt integer i,jj if(k.le.j)return do 100 i=1,n tt=x(i,j) do 50 jj=j+1,k x(i,jj-1)=x(i,jj) 50 continue x(i,k)=tt 100 continue return end subroutine vdqrsl(x,ldx,n,k,qraux,y,qy,qty,b,rsd,xb,job,info) implicit logical (a-z) integer ldx,n,k,job,info double precision x(ldx,*),qraux(*),y(*),qy(*),qty(*),b(*),rsd(*), * xb(*) c c c c 20130419: orig.: c double precision x(ldx,1),qraux(1),y(1),qy(1),qty(1),b(1),rsd(1), c * xb(1) c c c c dqrsl applies the output of dqrdc to compute coordinate c transformations, projections, and least squares solutions. c for k .le. min(n,p), let xk be the matrix c c xk = (x(jpvt(1)),x(jpvt(2)), ... ,x(jpvt(k))) c c formed from columnns jpvt(1), ... ,jpvt(k) of the original c n x p matrix x that was input to dqrdc (if no pivoting was c done, xk consists of the first k columns of x in their c original order). dqrdc produces a factored orthogonal matrix q c and an upper triangular matrix r such that c c xk = q * (r) c (0) c c this information is contained in coded form in the arrays c x and qraux. c c on entry c c x double precision(ldx,p). c x contains the output of dqrdc. c c ldx integer. c ldx is the leading dimension of the array x. c c n integer. c n is the number of rows of the matrix xk. it must c have the same value as n in dqrdc. c c k integer. c k is the number of columns of the matrix xk. k c must nnot be greater than min(n,p), where p is the c same as in the calling sequence to dqrdc. c c qraux double precision(p). c qraux contains the auxiliary output from dqrdc. c c y double precision(n) c y contains an n-vector that is to be manipulated c by dqrsl. c c job integer. c job specifies what is to be computed. job has c the decimal expansion abcde, with the following c meaning. c c if a.ne.0, compute qy. c if b,c,d, or e .ne. 0, compute qty. c if c.ne.0, compute b. c if d.ne.0, compute rsd. c if e.ne.0, compute xb. c c note that a request to compute b, rsd, or xb c automatically triggers the computation of qty, for c which an array must be provided in the calling c sequence. c c on return c c qy double precision(n). c qy conntains q*y, if its computation has been c requested. c c qty double precision(n). c qty contains trans(q)*y, if its computation has c been requested. here trans(q) is the c transpose of the matrix q. c c b double precision(k) c b contains the solution of the least squares problem c c minimize norm2(y - xk*b), c c if its computation has been requested. (note that c if pivoting was requested in dqrdc, the j-th c component of b will be associated with column jpvt(j) c of the original matrix x that was input into dqrdc.) c c rsd double precision(n). c rsd contains the least squares residual y - xk*b, c if its computation has been requested. 
rsd is c also the orthogonal projection of y onto the c orthogonal complement of the column space of xk. c c xb double precision(n). c xb contains the least squares approximation xk*b, c if its computation has been requested. xb is also c the orthogonal projection of y onto the column space c of x. c c info integer. c info is zero unless the computation of b has c been requested and r is exactly singular. in c this case, info is the index of the first zero c diagonal element of r and b is left unaltered. c c the parameters qy, qty, b, rsd, and xb are not referenced c if their computation is not requested and in this case c can be replaced by dummy variables in the calling program. c to save storage, the user may in some cases use the same c array for different parameters in the calling sequence. a c frequently occuring example is when one wishes to compute c any of b, rsd, or xb and does not need y or qty. in this c case one may identify y, qty, and one of b, rsd, or xb, while c providing separate arrays for anything else that is to be c computed. thus the calling sequence c c call dqrsl(x,ldx,n,k,qraux,y,dum,y,b,y,dum,110,info) c c will result in the computation of b and rsd, with rsd c overwriting y. more generally, each item in the following c list contains groups of permissible identifications for c a single callinng sequence. c c 1. (y,qty,b) (rsd) (xb) (qy) c c 2. (y,qty,rsd) (b) (xb) (qy) c c 3. (y,qty,xb) (b) (rsd) (qy) c c 4. (y,qy) (qty,b) (rsd) (xb) c c 5. (y,qy) (qty,rsd) (b) (xb) c c 6. (y,qy) (qty,xb) (b) (rsd) c c in any group the value returned in the array allocated to c the group corresponds to the last member of the group. c c linpack. this version dated 08/14/78 . c g.w. stewart, university of maryland, argonne national lab. c c dqrsl uses the following functions and subprograms. c c blas daxpy8,dcopy8,ddot8 c fortran dabs,min0,mod c c internal variables c integer i,j,jj,ju,kp1 double precision ddot8,t,temp logical cb,cqy,cqty,cr,cxb c c c set info flag. c info = 0 c c determine what is to be computed. c cqy = job/10000 .ne. 0 cqty = mod(job,10000) .ne. 0 cb = mod(job,1000)/100 .ne. 0 cr = mod(job,100)/10 .ne. 0 cxb = mod(job,10) .ne. 0 ju = min0(k,n-1) c c special action when n=1. c if (ju .ne. 0) go to 40 if (cqy) qy(1) = y(1) if (cqty) qty(1) = y(1) if (cxb) xb(1) = y(1) if (.not.cb) go to 30 if (x(1,1) .ne. 0.0d0) go to 10 info = 1 go to 20 10 continue b(1) = y(1)/x(1,1) 20 continue 30 continue if (cr) rsd(1) = 0.0d0 go to 250 40 continue c c set up to compute qy or qty. c if (cqy) call dcopy8(n,y,1,qy,1) if (cqty) call dcopy8(n,y,1,qty,1) if (.not.cqy) go to 70 c c compute qy. c do 60 jj = 1, ju j = ju - jj + 1 if (qraux(j) .eq. 0.0d0) go to 50 temp = x(j,j) x(j,j) = qraux(j) t = -ddot8(n-j+1,x(j,j),1,qy(j),1)/x(j,j) call daxpy8(n-j+1,t,x(j,j),1,qy(j),1) x(j,j) = temp 50 continue 60 continue 70 continue if (.not.cqty) go to 100 c c compute trans(q)*y. c do 90 j = 1, ju if (qraux(j) .eq. 0.0d0) go to 80 temp = x(j,j) x(j,j) = qraux(j) t = -ddot8(n-j+1,x(j,j),1,qty(j),1)/x(j,j) call daxpy8(n-j+1,t,x(j,j),1,qty(j),1) x(j,j) = temp 80 continue 90 continue 100 continue c c set up to compute b, rsd, or xb. c if (cb) call dcopy8(k,qty,1,b,1) kp1 = k + 1 if (cxb) call dcopy8(k,qty,1,xb,1) if(cr .and. k .lt. n) call dcopy8(n-k,qty(kp1),1,rsd(kp1),1) if (.not.cxb .or. kp1 .gt. n) go to 120 do 110 i = kp1, n xb(i) = 0.0d0 110 continue 120 continue if (.not.cr) go to 140 do 130 i = 1, k rsd(i) = 0.0d0 130 continue 140 continue if (.not.cb) go to 190 c c compute b. 
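c editorial note, not part of the original source: at this point b
c holds the first k elements of trans(q)*y, and the loop below solves
c the triangular system
c
c     r * b = trans(q)*y
c
c by back substitution on the upper triangle of x (which stores r):
c working from column k down to column 1, it scales b(j) by 1/x(j,j)
c and then subtracts b(j)*x(1:j-1,j) from b(1:j-1) via daxpy8.  a
c zero diagonal element x(j,j) sets info = j and exits the loop.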
c do 170 jj = 1, k j = k - jj + 1 if (x(j,j) .ne. 0.0d0) go to 150 info = j c ......exit go to 180 150 continue b(j) = b(j)/x(j,j) if (j .eq. 1) go to 160 t = -b(j) call daxpy8(j-1,t,x(1,j),1,b,1) 160 continue 170 continue 180 continue 190 continue if (.not.cr .and. .not.cxb) go to 240 c c compute rsd or xb as required. c do 230 jj = 1, ju j = ju - jj + 1 if (qraux(j) .eq. 0.0d0) go to 220 temp = x(j,j) x(j,j) = qraux(j) if (.not.cr) go to 200 t = -ddot8(n-j+1,x(j,j),1,rsd(j),1)/x(j,j) call daxpy8(n-j+1,t,x(j,j),1,rsd(j),1) 200 continue if (.not.cxb) go to 210 t = -ddot8(n-j+1,x(j,j),1,xb(j),1)/x(j,j) call daxpy8(n-j+1,t,x(j,j),1,xb(j),1) 210 continue x(j,j) = temp 220 continue 230 continue 240 continue 250 continue return end VGAM/src/zeta3.c0000644000176200001440000001437613135276761013013 0ustar liggesusers #include #include #include void vzetawr(double sjwyig9t[], double *bqelz3cy, int *kpzavbj3, int *f8yswcat); double fvlmz9iyzeta8(double , double kxae8glp[]); double fvlmz9iydzeta8(double , double kxae8glp[]); double fvlmz9iyddzeta8(double , double kxae8glp[]); void vbecoef(double kxae8glp[]); void vzetawr(double sjwyig9t[], double *bqelz3cy, int *kpzavbj3, int *f8yswcat) { int ayfnwr1v; double *qnwamo0e1, *qnwamo0e2; double kxae8glp[12]; vbecoef(kxae8glp); qnwamo0e1 = bqelz3cy; qnwamo0e2 = sjwyig9t; if (*kpzavbj3 == 0) { for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) { *qnwamo0e1++ = fvlmz9iyzeta8(*qnwamo0e2++, kxae8glp); } } else if (*kpzavbj3 == 1) { for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) { *qnwamo0e1++ = fvlmz9iydzeta8(*qnwamo0e2++, kxae8glp); } } else if (*kpzavbj3 == 2) { for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) { *qnwamo0e1++ = fvlmz9iyddzeta8(*qnwamo0e2++, kxae8glp); } } else { Rprintf("Error: *kpzavbj3 must equal 0, 1 or 2 in C function vzetawr\n"); } } double fvlmz9iyzeta8(double ghz9vuba, double kxae8glp[]) { int ayfnwr1v, gp1jxzuh, uw3favmo, nsvdbx3tk, m2svdbx3tk; double q6zdcwxk, xvr7bonh, a2svdbx3tk, fred; ayfnwr1v = 12; gp1jxzuh = 8; a2svdbx3tk = pow((double) ayfnwr1v, (double) 2.0); xvr7bonh = ghz9vuba / 2.000 / a2svdbx3tk; q6zdcwxk = 1.000 / (ghz9vuba - 1.000) + 0.500 / ayfnwr1v + kxae8glp[0] * xvr7bonh; for (uw3favmo = 2; uw3favmo <= gp1jxzuh; uw3favmo++) { m2svdbx3tk = uw3favmo + uw3favmo; xvr7bonh *= (ghz9vuba + m2svdbx3tk - 3.000) * (ghz9vuba + m2svdbx3tk - 2.000) / (m2svdbx3tk - 1.000) / m2svdbx3tk / a2svdbx3tk; q6zdcwxk += xvr7bonh * kxae8glp[uw3favmo-1]; } fred = pow((double) ayfnwr1v, (double) 1.0 - ghz9vuba); q6zdcwxk = 1.000 + q6zdcwxk * fred; for (nsvdbx3tk = 2; nsvdbx3tk < ayfnwr1v; nsvdbx3tk++) { q6zdcwxk += pow((double) nsvdbx3tk, (double) -ghz9vuba); } return q6zdcwxk; } double fvlmz9iydzeta8(double ghz9vuba, double kxae8glp[]) { int ayfnwr1v, gp1jxzuh, uw3favmo, nsvdbx3tk, m2svdbx3tk; double q6zdcwxk, xvr7bonh, dh9mgvze, a2svdbx3tk, ugqvjoe5a, ugqvjoe5n, fred; ayfnwr1v = 12; gp1jxzuh = 8; ugqvjoe5a = log( (double) ayfnwr1v ); a2svdbx3tk = ayfnwr1v * ayfnwr1v; xvr7bonh = ghz9vuba / 2.000 / a2svdbx3tk; dh9mgvze = 1.000 / ghz9vuba - ugqvjoe5a; q6zdcwxk = kxae8glp[0] * xvr7bonh * dh9mgvze; for (uw3favmo = 2; uw3favmo <= gp1jxzuh; uw3favmo++) { m2svdbx3tk = uw3favmo + uw3favmo; xvr7bonh *= (ghz9vuba + m2svdbx3tk - 3.0) * (ghz9vuba + m2svdbx3tk - 2.0) / (m2svdbx3tk - 1.0) / m2svdbx3tk / a2svdbx3tk; dh9mgvze += 1.0 / (ghz9vuba + m2svdbx3tk - 3.0) + 1.0 / (ghz9vuba + m2svdbx3tk - 2.0); q6zdcwxk += kxae8glp[uw3favmo-1] * xvr7bonh * dh9mgvze; } fred = pow((double) ayfnwr1v, (double) 1.0 - ghz9vuba); q6zdcwxk = (q6zdcwxk - 1.000 / 
pow(ghz9vuba - 1.000, (double) 2.0) - ugqvjoe5a * (1.000 / (ghz9vuba - 1.000) + 0.5000 / ayfnwr1v)) * fred; for (nsvdbx3tk = 2; nsvdbx3tk < ayfnwr1v; nsvdbx3tk++) { ugqvjoe5n = log( (double) nsvdbx3tk ); q6zdcwxk -= ugqvjoe5n / exp(ugqvjoe5n * ghz9vuba); } return q6zdcwxk; } double fvlmz9iyddzeta8(double ghz9vuba, double kxae8glp[]) { int ayfnwr1v, gp1jxzuh, uw3favmo, nsvdbx3tk, m2svdbx3tk; double q6zdcwxk, xvr7bonh, dh9mgvze, hpmwnav2, a2svdbx3tk, ugqvjoe5a, ugqvjoe5n, fred1, fred2; ayfnwr1v = 12; gp1jxzuh = 8; ugqvjoe5a = log( (double) ayfnwr1v ); a2svdbx3tk = ayfnwr1v * ayfnwr1v; xvr7bonh = ghz9vuba / 2.000 / a2svdbx3tk; dh9mgvze = 1.000 / ghz9vuba - ugqvjoe5a; hpmwnav2 = 1.000 / ghz9vuba / ghz9vuba; q6zdcwxk = kxae8glp[0] * xvr7bonh * (pow(dh9mgvze, (double) 2.0) - hpmwnav2); for (uw3favmo = 2; uw3favmo < gp1jxzuh; uw3favmo++) { m2svdbx3tk = uw3favmo + uw3favmo; xvr7bonh *= (ghz9vuba + m2svdbx3tk - 3.000) * (ghz9vuba + m2svdbx3tk - 2.000) / (m2svdbx3tk - 1.0) / m2svdbx3tk / a2svdbx3tk; dh9mgvze += 1.000 / (ghz9vuba + m2svdbx3tk - 3.000) + 1.000 / (ghz9vuba + m2svdbx3tk - 2.000); hpmwnav2 += 1.000 / pow(ghz9vuba + m2svdbx3tk - 3.000, (double) 2.0) + 1.000 / pow(ghz9vuba + m2svdbx3tk - 2.000, (double) 2.0); q6zdcwxk += kxae8glp[uw3favmo-1] * xvr7bonh * (dh9mgvze * dh9mgvze - hpmwnav2); } fred1 = pow((double) ayfnwr1v, (double) 1.0 - ghz9vuba); fred2 = pow(ugqvjoe5a, (double) 2.0) * (1.0 / (ghz9vuba - 1.0) + 0.50 / ayfnwr1v); q6zdcwxk = (q6zdcwxk + 2.0 / pow(ghz9vuba - 1.0, (double) 3.0) + 2.0 * ugqvjoe5a / pow(ghz9vuba - 1.0, (double) 2.0) + fred2) * fred1; for (nsvdbx3tk = 2; nsvdbx3tk < ayfnwr1v; nsvdbx3tk++) { ugqvjoe5n = log( (double) nsvdbx3tk ); q6zdcwxk += pow(ugqvjoe5n, (double) 2.0) / exp(ugqvjoe5n * ghz9vuba); } return q6zdcwxk; } void vbecoef(double kxae8glp[]) { kxae8glp[0] = 1.000 / 6.000; kxae8glp[1] = -1.000 / 30.000; kxae8glp[2] = 1.000 / 42.000; kxae8glp[3] = -1.000 / 30.000; kxae8glp[4] = 5.000 / 66.000; kxae8glp[5] = -691.000 / 2730.000; kxae8glp[6] = 7.000 / 6.000; kxae8glp[7] = -3617.000 / 510.000; kxae8glp[8] = 4386.700 / 79.800; kxae8glp[9] = -1746.1100 / 3.3000; kxae8glp[10] = 8545.1300 / 1.3800; kxae8glp[11] = -2363.6409100 / 0.0273000; } void conmax_Z(double *lamvec, double *nuvec, double *bqelz3cy, int *nlength, int *kpzavbj3, double *qaltf0nz) { double *pq6zdcwxk, denom = 0.0, yq6lorbx, prevterm; int ayfnwr1v; *qaltf0nz = 1.0e-6; if (*kpzavbj3 == 0) { pq6zdcwxk = bqelz3cy; for (ayfnwr1v = 0; ayfnwr1v < *nlength; ayfnwr1v++) { prevterm = 1.0 + *lamvec; denom = 1.0; *pq6zdcwxk = prevterm; yq6lorbx = 2.0; if (*nuvec == 0.0 && *lamvec >= 1.0) { Rprintf("Error: series will not converge. 
Returning 0.0\n"); *pq6zdcwxk = 0.0; } else { while (prevterm > *qaltf0nz) { denom = denom * pow(yq6lorbx, *lamvec); prevterm = prevterm * *lamvec / denom; *pq6zdcwxk += prevterm; yq6lorbx += 1.0; } } lamvec++; nuvec++; pq6zdcwxk++; } } else if (*kpzavbj3 == 1) { } else if (*kpzavbj3 == 2) { } } VGAM/src/rgam.f0000644000176200001440000004770113135276761012714 0ustar liggesusersC Output from Public domain Ratfor, version 1.01 subroutine dnaoqj0l(penalt,pjb6wfoq,xs,ys,ws, kuzxj1lo,nk, ankcghz *2,coef,sz,ifys6woa, qcpiaj7f,wbkq9zyi,parms, scrtch, gp0xjetb,l3zp *bstu,e5knafcg,wep0oibc,fbd5yktj) implicit logical (a-z) integer kuzxj1lo, nk, gp0xjetb, l3zpbstu(3), e5knafcg, wep0oibc, f *bd5yktj double precision penalt, pjb6wfoq, xs(kuzxj1lo), ys(kuzxj1lo), ws( *kuzxj1lo), ankcghz2(nk+4), coef(nk), sz(kuzxj1lo), ifys6woa(kuzxj1 *lo), qcpiaj7f, wbkq9zyi, parms(3), scrtch(*) call hbzuprs6(penalt,pjb6wfoq,xs,ys,ws, kuzxj1lo,nk, ankcghz2,coef *,sz,ifys6woa, qcpiaj7f,l3zpbstu(1),wbkq9zyi,l3zpbstu(2), l3zpbstu( *3), parms(1),parms(2),parms(3), gp0xjetb, scrtch(1), scrtch(nk+1), *scrtch(2*nk+1),scrtch(3*nk+1),scrtch(4*nk+1), scrtch(5*nk+1),scrtc *h(6*nk+1),scrtch(7*nk+1),scrtch(8*nk+1), scrtch(9*nk+1),scrtch(9*n *k+e5knafcg*nk+1),scrtch(9*nk+2*e5knafcg*nk+1), e5knafcg,wep0oibc,f *bd5yktj) return end subroutine hbzuprs6(penalt,pjb6wfoq,xs,ys,ws, kuzxj1lo,nk, ankcghz *2,coef,sz,ifys6woa, qcpiaj7f,icrit,i9mwnvqt,ispar, c5aesxku, mynl7 *uaq,zustx4fw,tol, gp0xjetb, xwy, zvau2lct,f6lsuzax,fvh2rwtc,dcfir2 *no, xecbg0pf,z4grbpiq,d7glzhbj,v2eydbxs, buhyalv4,fulcp8wa,plj0trq *x, e5knafcg,wep0oibc,fbd5yktj) implicit logical (a-z) integer kuzxj1lo,nk, icrit,ispar, gp0xjetb, e5knafcg,wep0oibc,fbd5 *yktj integer c5aesxku double precision penalt,pjb6wfoq,xs(kuzxj1lo),ys(kuzxj1lo),ws(kuzx *j1lo), ankcghz2(nk+4), coef(nk),sz(kuzxj1lo),ifys6woa(kuzxj1lo), q *cpiaj7f,i9mwnvqt,mynl7uaq,zustx4fw,tol, xwy(nk), zvau2lct(nk),f6ls *uzax(nk),fvh2rwtc(nk),dcfir2no(nk), xecbg0pf(nk),z4grbpiq(nk),d7gl *zhbj(nk),v2eydbxs(nk), buhyalv4(e5knafcg,nk),fulcp8wa(e5knafcg,nk) *,plj0trqx(wep0oibc,nk) double precision t1,t2,ratio, a,b,c,d,e,qaltf0nz,xm,p,q,r,tol1,tol *2,u,v,w, fu,fv,fw,fx,x, ax,bx integer ayfnwr1v, viter double precision yjpnro8d, hmayv1xt yjpnro8d = 8.0d88 hmayv1xt = 0.0d0 d = 0.5d0 u = 0.5d0 ratio = 0.5d0 ayfnwr1v = 1 23000 if(.not.(ayfnwr1v .le. kuzxj1lo))goto 23002 if(ws(ayfnwr1v).gt.0.0d0)then ws(ayfnwr1v) = dsqrt(ws(ayfnwr1v)) endif 23001 ayfnwr1v = ayfnwr1v+1 goto 23000 23002 continue if(gp0xjetb .eq. 0)then call zosq7hub(xecbg0pf,z4grbpiq,d7glzhbj,v2eydbxs,ankcghz2,nk) call gt9iulbf(xs,ys,ws,ankcghz2, kuzxj1lo,nk, xwy,zvau2lct,f6lsuza *x,fvh2rwtc,dcfir2no) t1 = 0.0d0 t2 = 0.0d0 do23007 ayfnwr1v = 3,nk-3 t1 = t1 + zvau2lct(ayfnwr1v) 23007 continue 23008 continue do23009 ayfnwr1v = 3,nk-3 t2 = t2 + xecbg0pf(ayfnwr1v) 23009 continue 23010 continue ratio = t1/t2 gp0xjetb = 1 endif if(ispar .eq. 
1)then call wmhctl9x(penalt,pjb6wfoq,xs,ys,ws, kuzxj1lo,nk,icrit, ankcghz *2,coef,sz,ifys6woa,qcpiaj7f, i9mwnvqt, xwy, zvau2lct,f6lsuzax,fvh2 *rwtc,dcfir2no, xecbg0pf,z4grbpiq,d7glzhbj,v2eydbxs, buhyalv4,fulcp *8wa,plj0trqx,e5knafcg,wep0oibc,fbd5yktj) return endif ax = mynl7uaq bx = zustx4fw c = 0.381966011250105097d0 qaltf0nz = 2.0d-5 viter = 0 a = ax b = bx v = a + c*(b - a) w = v x = v e = 0.0d0 i9mwnvqt = ratio * dexp((-2.0d0 + x*6.0d0) * dlog(16.0d0)) call wmhctl9x(penalt,pjb6wfoq,xs,ys,ws, kuzxj1lo,nk,icrit, ankcghz *2,coef,sz,ifys6woa,qcpiaj7f, i9mwnvqt, xwy, zvau2lct,f6lsuzax,fvh2 *rwtc,dcfir2no, xecbg0pf,z4grbpiq,d7glzhbj,v2eydbxs, buhyalv4,fulcp *8wa,plj0trqx,e5knafcg,wep0oibc,fbd5yktj) fx = qcpiaj7f fv = fx fw = fx 23013 if(fbd5yktj .eq. 0)then viter = viter + 1 xm = 0.5d0*(a + b) tol1 = qaltf0nz*dabs(x) + tol/3.0d0 tol2 = 2.0d0*tol1 if((dabs(x - xm) .le. (tol2 - 0.5d0*(b - a))) .or. (viter .gt. c5a *esxku))then go to 90 endif if((dabs(e) .le. tol1) .or. (fx .ge. yjpnro8d) .or. (fv .ge. yjpnr *o8d) .or. (fw .ge. yjpnro8d))then go to 40 endif r = (x - w)*(fx - fv) q = (x - v)*(fx - fw) p = (x - v)*q - (x - w)*r q = 2.0d0 * (q - r) if(q .gt. 0.0d0)then p = -p endif q = dabs(q) r = e e = d if((dabs(p) .ge. dabs(0.5d0*q*r)) .or. (q .eq. 0.0d0))then go to 40 endif if((p .le. q*(a - x)) .or. (p .ge. q*(b - x)))then go to 40 endif d = p/q u = x + d if((u - a) .lt. tol2)then d = dsign(tol1, xm - x) endif if((b - u) .lt. tol2)then d = dsign(tol1, xm - x) endif go to 50 40 if(x .ge. xm)then e = a - x else e = b - x endif d = c*e 50 if(dabs(d) .ge. tol1)then u = x + d else u = x + dsign(tol1, d) endif i9mwnvqt = ratio * dexp((-2.0d0 + u*6.0) * dlog(16.0d0)) call wmhctl9x(penalt,pjb6wfoq,xs,ys,ws, kuzxj1lo,nk,icrit, ankcghz *2,coef,sz,ifys6woa,qcpiaj7f, i9mwnvqt, xwy, zvau2lct,f6lsuzax,fvh2 *rwtc,dcfir2no, xecbg0pf,z4grbpiq,d7glzhbj,v2eydbxs, buhyalv4,fulcp *8wa,plj0trqx,e5knafcg,wep0oibc,fbd5yktj) fu = qcpiaj7f if(fu .gt. yjpnro8d)then fu = 2.0d0 * yjpnro8d endif if(fu .le. fx)then if(u .ge. x)then a = x else b = x endif v = w fv = fw w = x fw = fx x = u fx = fu else if(u .lt. x)then a = u else b = u endif if((fu .le. fw) .or. (w .eq. x))then v = w fv = fw w = u fw = fu else if((fu .le. fv) .or. (v .eq. x) .or. (v .eq. w))then v = u fv = fu endif endif endif goto 23013 endif 23014 continue 90 hmayv1xt = 0.0d0 i9mwnvqt = ratio * dexp((-2.0d0 + x*6.0d0) * dlog(16.0d0)) qcpiaj7f = fx return return end subroutine zosq7hub(xecbg0pf,z4grbpiq,d7glzhbj,v2eydbxs,tb,nb) implicit logical (a-z) integer nb double precision xecbg0pf(nb),z4grbpiq(nb),d7glzhbj(nb),v2eydbxs(n *b),tb(nb+4) integer dqlr5bse,ilo,pqzfxw4i, three3, ifour4, nbp1 integer ayfnwr1v,iii,yq6lorbx integer i2svdbx3tk double precision g9fvdrbw(4,3),work(16),yw1(4),yw2(4), wpt double precision othird othird = 1.0d0 / 3.0d0 three3 = 3 ifour4 = 4 nbp1 = nb + 1 do23045 ayfnwr1v = 1,nb xecbg0pf(ayfnwr1v) = 0.0d0 z4grbpiq(ayfnwr1v) = 0.0d0 d7glzhbj(ayfnwr1v) = 0.0d0 v2eydbxs(ayfnwr1v) = 0.0d0 23045 continue 23046 continue ilo = 1 do23047 ayfnwr1v = 1,nb call vinterv(tb(1), nbp1 ,tb(ayfnwr1v),dqlr5bse,pqzfxw4i) call vbsplvd(tb,ifour4,tb(ayfnwr1v),dqlr5bse,work,g9fvdrbw,three3) do23049 iii = 1,4 yw1(iii) = g9fvdrbw(iii,3) 23049 continue 23050 continue call vbsplvd(tb,ifour4,tb(ayfnwr1v+1),dqlr5bse,work,g9fvdrbw,three *3) do23051 iii = 1,4 yw2(iii) = g9fvdrbw(iii,3) - yw1(iii) 23051 continue 23052 continue wpt = tb(ayfnwr1v+1) - tb(ayfnwr1v) if(dqlr5bse .ge. 
4)then do23055 iii = 1,4 yq6lorbx = iii i2svdbx3tk = dqlr5bse-4+iii xecbg0pf(i2svdbx3tk) = xecbg0pf(i2svdbx3tk) + wpt * (yw1(iii)*yw1( *yq6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0.50 * + yw2(iii)*yw2(yq6lorbx)*othird) yq6lorbx = iii+1 if(yq6lorbx .le. 4)then z4grbpiq(i2svdbx3tk) = z4grbpiq(i2svdbx3tk) + wpt* (yw1(iii)*yw1(y *q6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0.50 *+ yw2(iii)*yw2(yq6lorbx)*othird) endif yq6lorbx = iii+2 if(yq6lorbx .le. 4)then d7glzhbj(i2svdbx3tk) = d7glzhbj(i2svdbx3tk) + wpt* (yw1(iii)*yw1(y *q6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0.50 *+ yw2(iii)*yw2(yq6lorbx)*othird) endif yq6lorbx = iii+3 if(yq6lorbx .le. 4)then v2eydbxs(i2svdbx3tk) = v2eydbxs(i2svdbx3tk) + wpt* (yw1(iii)*yw1(y *q6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0.50 *+ yw2(iii)*yw2(yq6lorbx)*othird) endif 23055 continue 23056 continue else if(dqlr5bse .eq. 3)then do23065 iii = 1,3 yq6lorbx = iii i2svdbx3tk = dqlr5bse-3+iii xecbg0pf(i2svdbx3tk) = xecbg0pf(i2svdbx3tk) + wpt* (yw1(iii)*yw1(y *q6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0.50 *+ yw2(iii)*yw2(yq6lorbx)*othird) yq6lorbx = iii+1 if(yq6lorbx .le. 3)then z4grbpiq(i2svdbx3tk) = z4grbpiq(i2svdbx3tk) + wpt* (yw1(iii)*yw1(y *q6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0.50 *+ yw2(iii)*yw2(yq6lorbx)*othird) endif yq6lorbx = iii+2 if(yq6lorbx .le. 3)then d7glzhbj(i2svdbx3tk) = d7glzhbj(i2svdbx3tk) + wpt* (yw1(iii)*yw1(y *q6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0.50 *+ yw2(iii)*yw2(yq6lorbx)*othird) endif 23065 continue 23066 continue else if(dqlr5bse .eq. 2)then do23073 iii = 1,2 yq6lorbx = iii i2svdbx3tk = dqlr5bse-2+iii xecbg0pf(i2svdbx3tk) = xecbg0pf(i2svdbx3tk) + wpt* (yw1(iii)*yw1(y *q6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0.50 *+ yw2(iii)*yw2(yq6lorbx)*othird) yq6lorbx = iii+1 if(yq6lorbx .le. 2)then z4grbpiq(i2svdbx3tk) = z4grbpiq(i2svdbx3tk) + wpt* (yw1(iii)*yw1(y *q6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0.50 *+ yw2(iii)*yw2(yq6lorbx)*othird) endif 23073 continue 23074 continue else if(dqlr5bse .eq. 1)then do23079 iii = 1,1 yq6lorbx = iii i2svdbx3tk = dqlr5bse-1+iii xecbg0pf(i2svdbx3tk) = xecbg0pf(i2svdbx3tk) + wpt* (yw1(iii)*yw1(y *q6lorbx) + (yw2(iii)*yw1(yq6lorbx) + yw2(yq6lorbx)*yw1(iii))*0.50 *+ yw2(iii)*yw2(yq6lorbx)*othird) 23079 continue 23080 continue endif endif endif endif 23047 continue 23048 continue return end subroutine vmnweiy2(buhyalv4,fulcp8wa,plj0trqx, e5knafcg,nk,wep0oi *bc,iflag) implicit logical (a-z) integer e5knafcg,nk,wep0oibc,iflag double precision buhyalv4(e5knafcg,nk), fulcp8wa(e5knafcg,nk), plj *0trqx(wep0oibc,nk) integer ayfnwr1v, yq6lorbx, gp1jxzuh double precision wjm3(3),wjm2(2),wjm1(1),c0,c1,c2,c3 double precision pcsuow9k, qdbgu6oi, upwkh5xz, rul5fnyd, ueydbrg6, * plce2srm, k3yvomnh, bfdjhu7l, ctfvwdu0 c1 = 0.0d0 c2 = 0.0d0 c3 = 0.0d0 wjm3(1) = 0.0d0 wjm3(2) = 0.0d0 wjm3(3) = 0.0d0 wjm2(1) = 0.0d0 wjm2(2) = 0.0d0 wjm1(1) = 0.0d0 do23081 ayfnwr1v = 1,nk yq6lorbx = nk-ayfnwr1v+1 c0 = 1.0d0 / buhyalv4(4,yq6lorbx) if(yq6lorbx .le. (nk-3))then c1 = buhyalv4(1,yq6lorbx+3)*c0 c2 = buhyalv4(2,yq6lorbx+2)*c0 c3 = buhyalv4(3,yq6lorbx+1)*c0 else if(yq6lorbx .eq. (nk-2))then c1 = 0.0d0 c2 = buhyalv4(2,yq6lorbx+2)*c0 c3 = buhyalv4(3,yq6lorbx+1)*c0 else if(yq6lorbx .eq. (nk-1))then c1 = 0.0d0 c2 = 0.0d0 c3 = buhyalv4(3,yq6lorbx+1)*c0 else if(yq6lorbx .eq. 
nk)then c1 = 0.0d0 c2 = 0.0d0 c3 = 0.0d0 endif endif endif endif pcsuow9k = c1*wjm3(1) qdbgu6oi = c2*wjm3(2) upwkh5xz = c3*wjm3(3) rul5fnyd = c1*wjm3(2) ueydbrg6 = c2*wjm2(1) plce2srm = c3*wjm2(2) k3yvomnh = c1*wjm3(3) bfdjhu7l = c2*wjm2(2) ctfvwdu0 = c3*wjm1(1) fulcp8wa(1,yq6lorbx) = 0.0d0 - (pcsuow9k+qdbgu6oi+upwkh5xz) fulcp8wa(2,yq6lorbx) = 0.0d0 - (rul5fnyd+ueydbrg6+plce2srm) fulcp8wa(3,yq6lorbx) = 0.0d0 - (k3yvomnh+bfdjhu7l+ctfvwdu0) fulcp8wa(4,yq6lorbx) = c0**2 + c1*(pcsuow9k + 2.0d0*(qdbgu6oi + up *wkh5xz)) + c2*(ueydbrg6 + 2.0d0* plce2srm) + c3*ctfvwdu0 wjm3(1) = wjm2(1) wjm3(2) = wjm2(2) wjm3(3) = fulcp8wa(2,yq6lorbx) wjm2(1) = wjm1(1) wjm2(2) = fulcp8wa(3,yq6lorbx) wjm1(1) = fulcp8wa(4,yq6lorbx) 23081 continue 23082 continue if(iflag .eq. 0)then return endif do23093 ayfnwr1v = 1,nk yq6lorbx = nk-ayfnwr1v+1 gp1jxzuh = 1 23095 if(.not.(gp1jxzuh .le. 4 .and. yq6lorbx+gp1jxzuh-1 .le. nk))goto 2 *3097 plj0trqx(yq6lorbx,yq6lorbx+gp1jxzuh-1) = fulcp8wa(5-gp1jxzuh,yq6lo *rbx) 23096 gp1jxzuh = gp1jxzuh+1 goto 23095 23097 continue 23093 continue 23094 continue do23098 ayfnwr1v = 1,nk yq6lorbx = nk-ayfnwr1v+1 gp1jxzuh = yq6lorbx-4 23100 if(.not.(gp1jxzuh .ge. 1))goto 23102 c0 = 1.0 / buhyalv4(4,gp1jxzuh) c1 = buhyalv4(1,gp1jxzuh+3)*c0 c2 = buhyalv4(2,gp1jxzuh+2)*c0 c3 = buhyalv4(3,gp1jxzuh+1)*c0 plj0trqx(gp1jxzuh,yq6lorbx) = 0.0d0- ( c1*plj0trqx(gp1jxzuh+3,yq6l *orbx) + c2*plj0trqx(gp1jxzuh+2,yq6lorbx) + c3*plj0trqx(gp1jxzuh+1, *yq6lorbx) ) 23101 gp1jxzuh = gp1jxzuh-1 goto 23100 23102 continue 23098 continue 23099 continue return end subroutine wmhctl9x(penalt,pjb6wfoq,x,y,w, kuzxj1lo,nk,icrit, ankc *ghz2,coef,sz,ifys6woa, qcpiaj7f, i9mwnvqt, xwy, zvau2lct,f6lsuzax, *fvh2rwtc,dcfir2no, xecbg0pf,z4grbpiq,d7glzhbj,v2eydbxs, buhyalv4,f *ulcp8wa,plj0trqx, e5knafcg,wep0oibc,info) implicit logical (a-z) integer kuzxj1lo,nk,icrit, e5knafcg,wep0oibc,info double precision penalt,pjb6wfoq,x(kuzxj1lo),y(kuzxj1lo),w(kuzxj1l *o) double precision ankcghz2(nk+4), coef(nk),sz(kuzxj1lo),ifys6woa(ku *zxj1lo), qcpiaj7f, i9mwnvqt, xwy(nk) double precision zvau2lct(nk),f6lsuzax(nk),fvh2rwtc(nk),dcfir2no(n *k) double precision xecbg0pf(nk),z4grbpiq(nk),d7glzhbj(nk),v2eydbxs(n *k), buhyalv4(e5knafcg,nk),fulcp8wa(e5knafcg,nk),plj0trqx(wep0oibc, *nk) double precision resss, work(16), b0,b1,b2,b3,qaltf0nz, g9fvdrbw(4 *,1), xv,eqdf double precision qtce8hzo double precision rxeqjn0y integer izero0, three3, ilo, pqzfxw4i, yq6lorbx, ayfnwr1v integer icoef, dqlr5bse, ifour4, hbsl0gto, nkp1 ilo = 1 qaltf0nz = 0.1d-10 izero0 = 0 three3 = 3 ifour4 = 4 hbsl0gto = 1 nkp1 = nk + 1 do23103 ayfnwr1v = 1,nk coef(ayfnwr1v) = xwy(ayfnwr1v) 23103 continue 23104 continue do23105 ayfnwr1v = 1,nk buhyalv4(4,ayfnwr1v) = zvau2lct(ayfnwr1v)+i9mwnvqt*xecbg0pf(ayfnwr *1v) 23105 continue 23106 continue do23107 ayfnwr1v = 1,(nk-1) buhyalv4(3,ayfnwr1v+1) = f6lsuzax(ayfnwr1v)+i9mwnvqt*z4grbpiq(ayfn *wr1v) 23107 continue 23108 continue do23109 ayfnwr1v = 1,(nk-2) buhyalv4(2,ayfnwr1v+2) = fvh2rwtc(ayfnwr1v)+i9mwnvqt*d7glzhbj(ayfn *wr1v) 23109 continue 23110 continue do23111 ayfnwr1v = 1,(nk-3) buhyalv4(1,ayfnwr1v+3) = dcfir2no(ayfnwr1v)+i9mwnvqt*v2eydbxs(ayfn *wr1v) 23111 continue 23112 continue call dpbfa8(buhyalv4,e5knafcg,nk,three3,info) if(info .ne. 0)then return endif call dpbsl8(buhyalv4,e5knafcg,nk,three3,coef) icoef = 1 do23115 ayfnwr1v = 1,kuzxj1lo xv = x(ayfnwr1v) call wbvalue(ankcghz2,coef, nk,ifour4,xv,izero0, sz(ayfnwr1v)) 23115 continue 23116 continue if(icrit .eq. 
0)then return endif call vmnweiy2(buhyalv4,fulcp8wa,plj0trqx, e5knafcg,nk,wep0oibc,ize *ro0) do23119 ayfnwr1v = 1,kuzxj1lo xv = x(ayfnwr1v) call vinterv(ankcghz2(1), nkp1 ,xv,dqlr5bse,pqzfxw4i) if(pqzfxw4i .eq. -1)then dqlr5bse = 4 xv = ankcghz2(4) + qaltf0nz endif if(pqzfxw4i .eq. 1)then dqlr5bse = nk xv = ankcghz2(nk+1) - qaltf0nz endif yq6lorbx = dqlr5bse-3 call vbsplvd(ankcghz2,ifour4,xv,dqlr5bse,work,g9fvdrbw,hbsl0gto) b0 = g9fvdrbw(1,1) b1 = g9fvdrbw(2,1) b2 = g9fvdrbw(3,1) b3 = g9fvdrbw(4,1) qtce8hzo = (b0 *(fulcp8wa(4,yq6lorbx)*b0 + 2.0d0*(fulcp8wa(3,yq6lo *rbx)*b1 + fulcp8wa(2,yq6lorbx)*b2 + fulcp8wa(1,yq6lorbx)*b3)) + b1 * *(fulcp8wa(4,yq6lorbx+1)*b1 + 2.0d0*(fulcp8wa(3,yq6lorbx+1)*b2 + *fulcp8wa(2,yq6lorbx+1)*b3)) + b2 *(fulcp8wa(4,yq6lorbx+2)*b2 + 2.0 *d0* fulcp8wa(3,yq6lorbx+2)*b3 )+ b3**2* fulcp8wa(4,yq6lorbx+3)) * *w(ayfnwr1v)**2 ifys6woa(ayfnwr1v) = qtce8hzo 23119 continue 23120 continue if(icrit .eq. 1)then resss = 0.0d0 eqdf = 0.0d0 rxeqjn0y = 0.0d0 do23127 ayfnwr1v = 1,kuzxj1lo resss = resss + ((y(ayfnwr1v)-sz(ayfnwr1v))*w(ayfnwr1v))**2 eqdf = eqdf + ifys6woa(ayfnwr1v) rxeqjn0y = rxeqjn0y + w(ayfnwr1v)*w(ayfnwr1v) 23127 continue 23128 continue qcpiaj7f = (resss/rxeqjn0y)/((1.0d0-(pjb6wfoq+penalt*eqdf)/rxeqjn0 *y)**2) else if(icrit .eq. 2)then qcpiaj7f = 0.0d0 rxeqjn0y = 0.0d0 do23131 ayfnwr1v = 1,kuzxj1lo qcpiaj7f = qcpiaj7f + (((y(ayfnwr1v)-sz(ayfnwr1v))*w(ayfnwr1v))/(1 *.0d0-ifys6woa(ayfnwr1v)))**2 rxeqjn0y = rxeqjn0y + w(ayfnwr1v)*w(ayfnwr1v) 23131 continue 23132 continue qcpiaj7f = qcpiaj7f / rxeqjn0y else qcpiaj7f = 0.0d0 do23133 ayfnwr1v = 1,kuzxj1lo qcpiaj7f = qcpiaj7f+ifys6woa(ayfnwr1v) 23133 continue 23134 continue qcpiaj7f = 3.0d0 + (pjb6wfoq-qcpiaj7f)**2 endif endif return end subroutine gt9iulbf(he7mqnvy,ghz9vuba,w,gkdx5jal, rvy1fpli,kuzxj1l *o, bhcji9glto,zvau2lct,f6lsuzax,fvh2rwtc,dcfir2no) implicit logical (a-z) integer rvy1fpli,kuzxj1lo double precision he7mqnvy(rvy1fpli),ghz9vuba(rvy1fpli),w(rvy1fpli) *,gkdx5jal(kuzxj1lo+4), bhcji9glto(kuzxj1lo), zvau2lct(kuzxj1lo),f6 *lsuzax(kuzxj1lo),fvh2rwtc(kuzxj1lo),dcfir2no(kuzxj1lo) double precision qaltf0nz,g9fvdrbw(4,1),work(16) double precision w2svdbx3tk, wv2svdbx3tk integer yq6lorbx,ayfnwr1v,ilo,dqlr5bse,pqzfxw4i, nhnpt1zym1 integer ifour4, hbsl0gto hbsl0gto = 1 ifour4 = 4 nhnpt1zym1 = kuzxj1lo + 1 do23135 ayfnwr1v = 1,kuzxj1lo bhcji9glto(ayfnwr1v) = 0.0d0 zvau2lct(ayfnwr1v) = 0.0d0 f6lsuzax(ayfnwr1v) = 0.0d0 fvh2rwtc(ayfnwr1v) = 0.0d0 dcfir2no(ayfnwr1v) = 0.0d0 23135 continue 23136 continue ilo = 1 qaltf0nz = 0.1d-9 do23137 ayfnwr1v = 1,rvy1fpli call vinterv(gkdx5jal(1), nhnpt1zym1 ,he7mqnvy(ayfnwr1v),dqlr5bse, *pqzfxw4i) if(pqzfxw4i .eq. 1)then if(he7mqnvy(ayfnwr1v) .le. 
(gkdx5jal(dqlr5bse)+qaltf0nz))then dqlr5bse = dqlr5bse-1 else return endif endif call vbsplvd(gkdx5jal,ifour4,he7mqnvy(ayfnwr1v),dqlr5bse,work,g9fv *drbw,hbsl0gto) yq6lorbx = dqlr5bse-4+1 w2svdbx3tk = w(ayfnwr1v)**2 wv2svdbx3tk = w2svdbx3tk * g9fvdrbw(1,1) bhcji9glto(yq6lorbx) = bhcji9glto(yq6lorbx) + wv2svdbx3tk*ghz9vuba *(ayfnwr1v) zvau2lct(yq6lorbx) = zvau2lct(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(1,1 *) f6lsuzax(yq6lorbx) = f6lsuzax(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(2,1 *) fvh2rwtc(yq6lorbx) = fvh2rwtc(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(3,1 *) dcfir2no(yq6lorbx) = dcfir2no(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(4,1 *) yq6lorbx = dqlr5bse-4+2 wv2svdbx3tk = w2svdbx3tk * g9fvdrbw(2,1) bhcji9glto(yq6lorbx) = bhcji9glto(yq6lorbx) + wv2svdbx3tk*ghz9vuba *(ayfnwr1v) zvau2lct(yq6lorbx) = zvau2lct(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(2,1 *) f6lsuzax(yq6lorbx) = f6lsuzax(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(3,1 *) fvh2rwtc(yq6lorbx) = fvh2rwtc(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(4,1 *) yq6lorbx = dqlr5bse-4+3 wv2svdbx3tk = w2svdbx3tk * g9fvdrbw(3,1) bhcji9glto(yq6lorbx) = bhcji9glto(yq6lorbx) + wv2svdbx3tk*ghz9vuba *(ayfnwr1v) zvau2lct(yq6lorbx) = zvau2lct(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(3,1 *) f6lsuzax(yq6lorbx) = f6lsuzax(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(4,1 *) yq6lorbx = dqlr5bse wv2svdbx3tk = w2svdbx3tk * g9fvdrbw(4,1) bhcji9glto(yq6lorbx) = bhcji9glto(yq6lorbx) + wv2svdbx3tk*ghz9vuba *(ayfnwr1v) zvau2lct(yq6lorbx) = zvau2lct(yq6lorbx) + wv2svdbx3tk*g9fvdrbw(4,1 *) 23137 continue 23138 continue return end VGAM/src/lerchphi.c0000644000176200001440000002320713135276761013554 0ustar liggesusers/* ------------------------------- Lerch's transcendent Phi(z,s,v) ------------------------------- This program is copyright by Sergej V. Aksenov (http://www.geocities.com/saksenov) and Ulrich D. Jentschura (jentschura@physik.tu-dresden.de), 2002. Version 1.00 (May 1, 2002) Calling sequence: int lerchphi(double *z, double *s, double *v, double *acc, double *result, int *iter) calculates Lerch's Phi transcendent Phi(z,s,v) with *result to a specified accuracy *acc after *iter iterations. Double precision is used throughout the calculation. The program uses direct summation of the defining series for |z| <= 0.5 and CNCT for 0.5 < |z| < 1.0. The integer return code has to be interpreted as follows. ------------- Return codes: ------------- 0 - Normal termination. 1 - Lerch Phi diverges for 1 <= |z|. 2 - Lerch Phi is not defined for integer v <= 0. 3 - pow() is not defined for v < 0 and s not integer. 4 - Long integer overflow in aj1234(). 5 - Underflow in remainder estimate omega in lerchphi(). 6 - No convergence within the maximum number of iterations. Implementation note: In subroutine aj1234(), defining variables ind and two2k as type double instead of long int might eliminate overflow error which occurs for high indices (error code 4). */ #include <math.h> #include <float.h> #include <stdlib.h> #define macheps DBL_EPSILON #define machmin DBL_MIN /* If preprocessor macro ADD_UNDERSCORE was defined, add underscore to the function name --- needed for linking to Fortran programs on a Sun. */ #if (ADD_UNDERSCORE) #define lerchphi lerchphi_ #endif /* Function that computes van Wijngaarden's A_j for a given j. */ static int aj1234(double *z, double *s, double *v, int j, double *acc, double *res) { double sum, bjk, z2ind; int k, flag; unsigned long int ind, two2k; sum = bjk = 0.0; k = -1; two2k = 1; flag = 0; /* Sum b^j_k's over k. */ for (;;) { k++; /* Index for the term of the original series.
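In van Wijngaarden's condensation this index is ind = 2^k (j+1) - 1, so the quantity accumulated below is A_j = sum over k >= 0 of 2^k z^ind / (v + ind)^s; each bjk is one such condensed term. (Reading inferred from the code itself, using its own variable names.)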
*/ if (k > 0) two2k *= 2; ind = two2k * (j + 1) - 1; /* If long integer overflow occurs, variables become zero. Not relevant in v1.0 because two2k and ind are double type. */ if (k > 0 && (two2k == 0 || ind == 0)) { flag = 4; break; } /* Increment the sum. */ z2ind = pow(*z, ind); bjk = two2k * z2ind / pow(*v + ind, *s); sum += bjk; /* Stop summation if either sum is zero or |term/sum| is below requested accuracy. */ if (fabs(sum) <= machmin || fabs(bjk/sum) < 1.0e-2 * (*acc)) break; } *res = sum; return flag; } /* Function that computes approximation to Lerch Phi as a converging sequence of CNC transforms S^n_k. */ int lerchphi(double *z, double *s, double *v, double *acc, double *result, int *iter) { const unsigned short int beta = 1, n = 0, imax = 100; unsigned short int j, m; int i, sign, flag; double v1, sn, eps0, eps, skn, skn0, omega, *num, *den, *StoreAj, factor, factor1, x, est, iom, sum1, cacc; /* Added 20090205 by T.Yee to suppress 4 warnings */ sum1 = est = 0.0; StoreAj = &v1; m = 0; /* Local copy of v. */ v1 = *v; /* Special cases. */ /* 1 <= |z|. (Return error, Lerch Phi diverges.) */ if (1.0 <= fabs(*z)) { *result = 1.0; *iter = 0; return 1; } /* v <= 0 is integer. (Return error, Lerch Phi is not defined.) */ if (fabs(floor(*v) - *v) <= macheps*fabs(*v) && *v <= 0.0) { *result = 1.0; *iter = 0; return 2; } /* v < 0 is not integer or zero and z != 0 (z == 0 considered below) ... */ if (*v < 0.0 && fabs(*z) > machmin) { /* s is not an integer. (Return error because pow() is not defined.) */ if (fabs(floor(*s) - *s) > macheps*fabs(*s)) { *result = 1.0; *iter = 0; return 3; } /* s is an integer. (Transform v to positive). */ else { m = - (int) floor(*v); v1 += m; sum1 = 0.0; if ((int) *s % 2 == 0) sign = 1; else sign = -1; for (i = 0; i <= m-1; i++) { if ((i > 0) && (*z < 0)) sign = -sign; sum1 += sign*pow(fabs(*z),i)/pow(fabs(*v+i),*s); } } } /* z = 0 and ... */ if (fabs(*z) <= machmin) { /* ... v < 0 is not integer or zero and ... */ if (*v < 0) { /* s is not an integer. (Return error because pow() is not defined.) */ if (fabs(floor(*s) - *s) > macheps*fabs(*s)) { *result = 1.0; *iter = 0; return 3; } /* s is an integer. (Return first term of series.)*/ else { if ((int) *s % 2 == 0) sign = 1; else sign = -1; *result = sign * 1.0 / pow(fabs(*v), *s); } } /* ... v > 0. (Return first term of series.) */ else { *result = 1.0 / pow(*v, *s); *iter = 1; return 0; } } /* General case. */ /* Some initializations. */ /* sn denotes current partial sum of defining series: z > 0.5: sn is partial sum S_n of the van Wijngaarden transformed series. z <= 0.5: sn is the partial sum of the power series defining LerchPhi. skn0 and skn denote successive partial sums S^k_n that are same as sn in case of direct summation and delta-transformed in case of CNCT. eps0 and eps denote successive differences between partial sums S^k_n. */ eps0 = skn = skn0 = sn = 0.0; /* omega is next term of a partial sum (of defining power series for direct summation, of van Wijngaarden transformed series for CNCT) and also becomes a remainder estimate in the delta transformation in CNCT). */ /* For z <= 0.5 van Wijngaarden transformation is not used [hence no calls to aj1234()]. */ /* Direct summation and CNCT (z < -0.5) case. */ if (*z <= 0.5) omega = 1.0 / pow(v1, *s); /* CNCT (z > 0.5) case. */ else { flag = aj1234(z, s, &v1, 0, acc, &omega); if (flag) { *result = 1.0; *iter = 0; return flag; } } /* Allocate memory for working arrays. 
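num and den will hold the running numerator and denominator tables of the delta (CNC) transformation, and StoreAj caches the van Wijngaarden terms A_j used when z > 0.5; each array is bounded by imax = 100 iterates. (Note inferred from the code below.)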
*/ num = (double *) malloc(imax * sizeof(double)); den = (double *) malloc(imax * sizeof(double)); /* StoreAj is used only in CNCT */ if (*z > 0.5) StoreAj = (double *) malloc(imax * sizeof(double)); flag = 0; i = -1; sign = -1; /* Main loop: iterations for S^k_n. */ for (;;) { /* i points to current iterate. */ i++; /* Increment the sum. */ sign = -sign; sn += omega; /* Next term: omega. */ if (*z < 0.0) /* Direct summation and CNCT (z < -0.5) case. */ /* Recurrence for power series. */ omega = (*z) * pow((v1+i)/(v1+i+1), *s) * omega; else /* z > 0 */ { if (*z <= 0.5) /* "Direct summation". */ omega = (*z) * pow((v1+i)/(v1+i+1), *s) * omega; else /* CNCT (z > 0.5) case. */ { *(StoreAj+i) = sign * omega; if (i % 2 == 0) /* Recurrence for odd pointer i. */ {omega = -sign * 0.5 * (*(StoreAj+i/2) - pow(*z, i/2) / pow(v1+i/2, *s));} else { flag = aj1234(z, s, &v1, i+1, acc, &omega); if (flag) break; else omega = -sign * omega; } } } /* Direct summation case: store current sum and remainder estimate. */ if (fabs(*z) <= 0.5) { skn = sn; est = 2.0 * pow(fabs(*z), (i+1)) / pow(v1+i+1, *s); } /* CNCT case. */ else { /* Make sure omega is representable machine number. */ if (fabs(omega) <= machmin) { flag = 5; break; } else iom = 1.0 / omega; /* Last terms in sums of numerator and denominator of i-th partial sum. */ *(num+i) = sn * iom; *(den+i) = iom; /* Recurrence computation of numerator and denominator of a S_k^n. */ if (i > 0) { factor = 1.0; *(num+i-1) = *(num+i) - factor * (*(num+i-1)); *(den+i-1) = *(den+i) - factor * (*(den+i-1)); } factor1 = (double) (beta+n+i-1) * (beta+n+i-2); for(j = 2; j <= i; j++) { factor = factor1 / (beta+n+i+j-2) / (beta+n+i+j-3); *(num+i-j) = *(num+i-j+1) - factor * (*(num+i-j)); *(den+i-j) = *(den+i-j+1) - factor * (*(den+i-j)); } /* Current approximation of the sum S_k^n. */ skn = *num / *den; } /* else CNCT case. */ eps = fabs(skn - skn0); /* Check the three termination criteria. */ /* |est/skn| is less than the requested accuracy (est is a remainder estimate). */ if (i > 0 && eps < eps0) { if (fabs(*z) > 0.5) { x = eps/eps0; est = 2.0/x/(1.0-x)*eps; } cacc = fabs(est/skn); if (cacc < (*acc)) break; } /* Successive iterates skn are the same. */ if (eps <= 0.0) break; /* Maximum number of iterations is exceeded. */ if (i > imax-2) { flag = 6; break; } /* Go on to the next iteration. */ skn0 = skn; eps0 = eps; } /* for */ /* Store the resulting sum. */ if (*v < 0) { sign = 1; if ((*z < 0) && (m % 2 != 0)) sign = -1; *result = sum1 + skn * sign * pow(fabs(*z),m); } else *result = skn; /* Store the number of iterations. */ *iter = i + 1; /* Clean up. */ free(num); free(den); if (*z > 0.5) free(StoreAj); return flag; } #undef macheps #undef machmin /* Code below written by T. Yee 14/6/06; is a wrapper function */ void lerchphi123(int *err, int *L, double *z, double *s, double *v, double *acc, double *result, int *iter) { int ell; for(ell = 0; ell < *L; ell++) { err[ell] = lerchphi(z+ell, s+ell, v+ell, acc, result+ell, iter); } } VGAM/src/fgam.f0000644000176200001440000005740213135276761012677 0ustar liggesusersc 24/8/99 c This is the original fgam.f file c It needs to be compiled and loaded into R in order to smooth. 
c All of this is automatically in Splus subroutine vbsplvd ( t, k, x, left, a, dbiatx, nderiv ) implicit double precision(a-h,o-z) calls bsplvb calculates value and deriv.s of all b-splines which do not vanish at x c c****** i n p u t ****** c t the knot array, of length left+k (at least) c k the order of the b-splines to be evaluated c x the point at which these values are sought c left an integer indicating the left endpoint of the interval of c interest. the k b-splines whose support contains the interval c (t(left), t(left+1)) c are to be considered. c a s s u m p t i o n - - - it is assumed that c t(left) .lt. t(left+1) c division by zero will result otherwise (in b s p l v b ). c also, the output is as advertised only if c t(left) .le. x .le. t(left+1) . c nderiv an integer indicating that values of b-splines and their c derivatives up to but not including the nderiv-th are asked c for. ( nderiv is replaced internally by the integer in (1,k) c closest to it.) c c****** w o r k a r e a ****** c a an array of order (k,k), to contain b-coeff.s of the derivat- c ives of a certain order of the k b-splines of interest. c c****** o u t p u t ****** c dbiatx an array of order (k,nderiv). its entry (i,m) contains c value of (m-1)st derivative of (left-k+i)-th b-spline of c order k for knot sequence t , i=m,...,k; m=1,...,nderiv. c c****** m e t h o d ****** c values at x of all the relevant b-splines of order k,k-1,..., c k+1-nderiv are generated via bsplvb and stored temporarily c in dbiatx . then, the b-coeffs of the required derivatives of the c b-splines of interest are generated by differencing, each from the c preceding one of lower order, and combined with the values of b- c splines of corresponding order in dbiatx to produce the desired c values. c integer k,left,nderiv, i,ideriv,il,j,jlow,jp1mid,kp1,kp1mm, * ldummy,m,mhigh double precision a(k,k),dbiatx(k,nderiv),t(*),x double precision factor,fkp1mm,sum mhigh = max0(min0(nderiv,k),1) c mhigh is usually equal to nderiv. kp1 = k+1 call bsplvb(t,kp1-mhigh,1,x,left,dbiatx) if (mhigh .eq. 1) go to 99 c the first column of dbiatx always contains the b-spline values c for the current order. these are stored in column k+1-current c order before bsplvb is called to put values for the next c higher order on top of it. ideriv = mhigh do 15 m=2,mhigh jp1mid = 1 do 11 j=ideriv,k dbiatx(j,ideriv) = dbiatx(jp1mid,1) jp1mid = jp1mid + 1 11 continue ideriv = ideriv - 1 call bsplvb(t,kp1-ideriv,2,x,left,dbiatx) 15 continue c c at this point, b(left-k+i, k+1-j)(x) is in dbiatx(i,j) for c i=j,...,k and j=1,...,mhigh ('=' nderiv). in particular, the c first column of dbiatx is already in final form. to obtain cor- c responding derivatives of b-splines in subsequent columns, gene- c rate their b-repr. by differencing, then evaluate at x. c jlow = 1 do 20 i=1,k do 19 j=jlow,k a(j,i) = 0d0 19 continue jlow = i a(i,i) = 1d0 20 continue c at this point, a(.,j) contains the b-coeffs for the j-th of the c k b-splines of interest here. c c c c 20161111: was originally c do 40 m=2,mhigh do 400 m=2,mhigh kp1mm = kp1 - m fkp1mm = dble(kp1mm) il = left i = k c c for j=1,...,k, construct b-coeffs of (m-1)st derivative of c b-splines from those for preceding derivative by differencing c and store again in a(.,j) . the fact that a(i,j) = 0 for c i .lt. j is used. do 25 ldummy=1,kp1mm factor = fkp1mm/(t(il+kp1mm) - t(il)) c the assumption that t(left).lt.t(left+1) makes denominator c in factor nonzero.
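c note (a sketch of the step below, in de boor's notation): each pass
c of loop 25 differences one row of the coefficient array,
c a(i,j) <- (a(i,j) - a(i-1,j)) * fkp1mm/(t(il+kp1mm) - t(il)),
c the standard recurrence taking b-coeffs of one derivative order to
c the next; after kp1mm passes a(.,j) holds the b-repr. of the
c (m-1)st derivative of the j-th b-spline of interest.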
do 24 j=1,i a(i,j) = (a(i,j) - a(i-1,j))*factor 24 continue il = il - 1 i = i - 1 25 continue c c for i=1,...,k, combine b-coeffs a(.,i) with b-spline values c stored in dbiatx(.,m) to get value of (m-1)st derivative of c i-th b-spline (of interest here) at x , and store in c dbiatx(i,m). storage of this value over the value of a b-spline c of order m there is safe since the remaining b-spline derivat- c ive of the same order do not use this value due to the fact c that a(j,i) = 0 for j .lt. i . c Originally: c 30 do 40 i=1,k do 40 i=1,k sum = 0. jlow = max0(i,m) do 35 j=jlow,k sum = a(j,i)*dbiatx(j,m) + sum 35 continue dbiatx(i,m) = sum 40 continue c 20161111: twyee added this line (expanded 40 to two lines). 400 continue 99 return end subroutine bsplvb ( t, jhigh, index, x, left, biatx ) implicit double precision(a-h,o-z) calculates the value of all possibly nonzero b-splines at x of order c c jout = dmax( jhigh , (j+1)*(index-1) ) c c with knot sequence t . c c****** i n p u t ****** c t.....knot sequence, of length left + jout , assumed to be nonde- c creasing. a s s u m p t i o n . . . . c t(left) .lt. t(left + 1) . c d i v i s i o n b y z e r o will result if t(left) = t(left+1) c jhigh, c index.....integers which determine the order jout = max(jhigh, c (j+1)*(index-1)) of the b-splines whose values at x are to c be returned. index is used to avoid recalculations when seve- c ral columns of the triangular array of b-spline values are nee- c ded (e.g., in bvalue or in vbsplvd ). precisely, c if index = 1 , c the calculation starts from scratch and the entire triangular c array of b-spline values of orders 1,2,...,jhigh is generated c order by order , i.e., column by column . c if index = 2 , c only the b-spline values of order j+1, j+2, ..., jout are ge- c nerated, the assumption being that biatx , j , deltal , deltar c are, on entry, as they were on exit at the previous call. c in particular, if jhigh = 0, then jout = j+1, i.e., just c the next column of b-spline values is generated. c c w a r n i n g . . . the restriction jout .le. jmax (= 20) is im- c posed arbitrarily by the dimension statement for deltal and c deltar below, but is n o w h e r e c h e c k e d for . c c x.....the point at which the b-splines are to be evaluated. c left.....an integer chosen (usually) so that c t(left) .le. x .le. t(left+1) . c c****** o u t p u t ****** c biatx.....array of length jout , with biatx(i) containing the val- c ue at x of the polynomial of order jout which agrees with c the b-spline b(left-jout+i,jout,t) on the interval (t(left), c t(left+1)) . c c****** m e t h o d ****** c the recurrence relation c c x - t(i) t(i+j+1) - x c b(i,j+1)(x) = -----------b(i,j)(x) + ---------------b(i+1,j)(x) c t(i+j)-t(i) t(i+j+1)-t(i+1) c c is used (repeatedly) to generate the (j+1)-vector b(left-j,j+1)(x), c ...,b(left,j+1)(x) from the j-vector b(left-j+1,j)(x),..., c b(left,j)(x), storing the new values in biatx over the old. the c facts that c b(i,1) = 1 if t(i) .le. x .lt. t(i+1) c and that c b(i,j)(x) = 0 unless t(i) .le. x .lt. t(i+j) c are used. the particular organization of the calculations follows al- c gorithm (8) in chapter x of the text. c parameter(jmax = 20) integer index,jhigh,left, i,j,jp1 double precision biatx(jhigh),t(*),x, deltal(jmax) double precision deltar(jmax),saved,term c dimension biatx(jout), t(left+jout) current fortran standard makes it impossible to specify the length of c t and of biatx precisely without the introduction of otherwise c superfluous additional arguments. 
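c worked instance (jout = 2, for orientation): the recurrence below
c gives biatx(1) = (t(left+1)-x)/h and biatx(2) = (x-t(left))/h with
c h = t(left+1)-t(left), the two linear hat functions, which sum to
c one; each higher order repeats the same convex-combination step.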
data j/1/ c save j,deltal,deltar (valid in fortran 77) c c c c 20161111; originally: c go to (10,20), index c See https://www.obliquity.com/computer/fortran/control.html if (index .eq. 1) then go to 10 else if (index .eq. 2) then go to 20 end if c c c c 10 j = 1 biatx(1) = 1d0 if (j .ge. jhigh) go to 99 c 20 jp1 = j + 1 deltar(j) = t(left+j) - x deltal(j) = x - t(left+1-j) saved = 0d0 do 26 i=1,j term = biatx(i)/(deltar(i) + deltal(jp1-i)) biatx(i) = saved + deltar(i)*term saved = deltal(jp1-i)*term 26 continue biatx(jp1) = saved j = jp1 if (j .lt. jhigh) go to 20 c 99 return end c 20090105; converted bvalue into a subroutine. subroutine wbvalue ( t, bcoef, n, k, x, jderiv, bvalue) implicit double precision(a-h,o-z) double precision bvalue calls vinterv c calculates value at x of jderiv-th derivative of spline from b-repr. c the spline is taken to be continuous from the right. c c****** i n p u t ****** c t, bcoef, n, k......forms the b-representation of the spline f to c be evaluated. specifically, c t.....knot sequence, of length n+k, assumed nondecreasing. c bcoef.....b-coefficient sequence, of length n . c n.....length of bcoef and dimension of s(k,t), c a s s u m e d positive . c k.....order of the spline . c c w a r n i n g . . . the restriction k .le. kmax (=20) is imposed c arbitrarily by the dimension statement for aj, dm, dp below, c but is n o w h e r e c h e c k e d for. c c x.....the point at which to evaluate . c jderiv.....integer giving the order of the derivative to be evaluated c a s s u m e d to be zero or positive. c c****** o u t p u t ****** c bvalue.....the value of the (jderiv)-th derivative of f at x . c c****** m e t h o d ****** c the nontrivial knot interval (t(i),t(i+1)) containing x is lo- c cated with the aid of vinterv . the k b-coeffs of f relevant for c this interval are then obtained from bcoef (or taken to be zero if c not explicitly available) and are then differenced jderiv times to c obtain the b-coeffs of (d**jderiv)f relevant for that interval. c precisely, with j = jderiv, we have from x.(12) of the text that c c (d**j)f = sum ( bcoef(.,j)*b(.,k-j,t) ) c c where c / bcoef(.), , j .eq. 0 c / c bcoef(.,j) = / bcoef(.,j-1) - bcoef(.-1,j-1) c / ----------------------------- , j .gt. 0 c / (t(.+k-j) - t(.))/(k-j) c c then, we use repeatedly the fact that c c sum ( a(.)*b(.,m,t)(x) ) = sum ( a(.,x)*b(.,m-1,t)(x) ) c with c (x - t(.))*a(.) + (t(.+m-1) - x)*a(.-1) c a(.,x) = --------------------------------------- c (x - t(.)) + (t(.+m-1) - x) c c to write (d**j)f(x) eventually as a linear combination of b-splines c of order 1 , and the coefficient for b(i,1,t)(x) must then c be the desired number (d**j)f(x). (see x.(17)-(19) of text). c parameter(kmax = 20) integer jderiv,k,n, i,ilo,imk,j,jc,jcmin,jcmax,jj,km1,mflag,nmi double precision bcoef(n),t(*),x double precision aj(kmax),dm(kmax),dp(kmax),fkmj c dimension t(n+k) current fortran standard makes it impossible to specify the length of c t precisely without the introduction of otherwise superfluous c additional arguments. bvalue = 0.0d0 if (jderiv .ge. k) go to 99 c c *** find i s.t. 1 .le. i .lt. n+k and t(i) .lt. t(i+1) and c t(i) .le. x .lt. t(i+1) . if no such i can be found, x lies c outside the support of the spline f and bvalue = 0. c (the asymmetry in this choice of i makes f rightcontinuous) if( (x.ne.t(n+1)) .or. (t(n+1).ne.t(n+k)) ) go to 700 i = n go to 701 700 call vinterv ( t, n+k, x, i, mflag ) if (mflag .ne. 0) go to 99 701 continue c *** if k = 1 (and jderiv = 0), bvalue = bcoef(i).
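c sketch of the usual cubic case (k = 4, jderiv = 0): loop 21 copies
c the four b-coeffs relevant to (t(i),t(i+1)) into aj(1),...,aj(4),
c the differencing at 233 is skipped, and loop 34 then runs de boor's
c convex-combination recursion three times, leaving the value in aj(1).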
km1 = k - 1 if (km1 .gt. 0) go to 1 bvalue = bcoef(i) go to 99 c c *** store the k b-spline coefficients relevant for the knot interval c (t(i),t(i+1)) in aj(1),...,aj(k) and compute dm(j) = x - t(i+1-j), c dp(j) = t(i+j) - x, j=1,...,k-1 . set any of the aj not obtainable c from input to zero. set any t.s not obtainable equal to t(1) or c to t(n+k) appropriately. 1 jcmin = 1 imk = i - k if (imk .ge. 0) go to 8 jcmin = 1 - imk do 5 j=1,i dm(j) = x - t(i+1-j) 5 continue do 6 j=i,km1 aj(k-j) = 0. dm(j) = dm(i) 6 continue go to 10 8 do 9 j=1,km1 dm(j) = x - t(i+1-j) 9 continue c 10 jcmax = k nmi = n - i if (nmi .ge. 0) go to 18 jcmax = k + nmi do 15 j=1,jcmax dp(j) = t(i+j) - x 15 continue do 16 j=jcmax,km1 aj(j+1) = 0. dp(j) = dp(jcmax) 16 continue go to 20 18 do 19 j=1,km1 dp(j) = t(i+j) - x 19 continue c 20 do 21 jc=jcmin,jcmax aj(jc) = bcoef(imk + jc) 21 continue c c *** difference the coefficients jderiv times. if (jderiv .eq. 0) go to 30 c 20161111; was: c do 23 j=1,jderiv do 233 j=1,jderiv kmj = k-j fkmj = dble(kmj) ilo = kmj do 23 jj=1,kmj aj(jj) = ((aj(jj+1) - aj(jj))/(dm(ilo) + dp(jj)))*fkmj ilo = ilo - 1 23 continue 233 continue c c *** compute value at x in (t(i),t(i+1)) of jderiv-th derivative, c given its relevant b-spline coeffs in aj(1),...,aj(k-jderiv). 30 if (jderiv .eq. km1) go to 39 jdrvp1 = jderiv + 1 c 20161111: was: c do 33 j=jdrvp1,km1 do 34 j=jdrvp1,km1 kmj = k-j ilo = kmj do 33 jj=1,kmj aj(jj) = (aj(jj+1)*dm(ilo) + aj(jj)*dp(jj))/(dm(ilo)+dp(jj)) ilo = ilo - 1 33 continue 34 continue 39 bvalue = aj(1) c 99 return end subroutine vinterv ( xt, lxt, x, left, mflag ) implicit double precision(a-h,o-z) computes left = max( i ; 1 .le. i .le. lxt .and. xt(i) .le. x ) . c c****** i n p u t ****** c xt.....a double precision sequence, of length lxt , assumed to be nondecreasing c lxt.....number of terms in the sequence xt . c x.....the point whose location with respect to the sequence xt is c to be determined. c c****** o u t p u t ****** c left, mflag.....both integers, whose value is c c 1 -1 if x .lt. xt(1) c i 0 if xt(i) .le. x .lt. xt(i+1) c lxt 1 if xt(lxt) .le. x c c in particular, mflag = 0 is the 'usual' case. mflag .ne. 0 c indicates that x lies outside the halfopen interval c xt(1) .le. y .lt. xt(lxt) . the asymmetric treatment of the c interval is due to the decision to make all pp functions cont- c inuous from the right. c c****** m e t h o d ****** c the program is designed to be efficient in the common situation that c it is called repeatedly, with x taken from an increasing or decrea- c sing sequence. this will happen, e.g., when a pp function is to be c graphed. the first guess for left is therefore taken to be the val- c ue returned at the previous call and stored in the l o c a l varia- c ble ilo . a first check ascertains that ilo .lt. lxt (this is nec- c essary since the present call may have nothing to do with the previ- c ous call). then, if xt(ilo) .le. x .lt. xt(ilo+1), we set left = c ilo and are done after just three comparisons. c otherwise, we repeatedly double the difference istep = ihi - ilo c while also moving ilo and ihi in the direction of x , until c xt(ilo) .le. x .lt. xt(ihi) , c after which we use bisection to get, in addition, ilo+1 = ihi . c left = ilo is then returned. c integer left,lxt,mflag, ihi,ilo,istep,middle double precision x,xt(lxt) data ilo /1/ c save ilo (a valid fortran statement in the new 1977 standard) ihi = ilo + 1 if (ihi .lt. lxt) go to 20 if (x .ge. xt(lxt)) go to 110 if (lxt .le. 
1) go to 90 ilo = lxt - 1 ihi = lxt c 20 if (x .ge. xt(ihi)) go to 40 if (x .ge. xt(ilo)) go to 100 c c **** now x .lt. xt(ilo) . decrease ilo to capture x . c c c Originally: c 30 istep = 1 istep = 1 c c 31 ihi = ilo ilo = ihi - istep if (ilo .le. 1) go to 35 if (x .ge. xt(ilo)) go to 50 istep = istep*2 go to 31 35 ilo = 1 if (x .lt. xt(1)) go to 90 go to 50 c **** now x .ge. xt(ihi) . increase ihi to capture x . 40 istep = 1 41 ilo = ihi ihi = ilo + istep if (ihi .ge. lxt) go to 45 if (x .lt. xt(ihi)) go to 50 istep = istep*2 go to 41 45 if (x .ge. xt(lxt)) go to 110 ihi = lxt c c **** now xt(ilo) .le. x .lt. xt(ihi) . narrow the interval. 50 middle = (ilo + ihi)/2 if (middle .eq. ilo) go to 100 c note. it is assumed that middle = ilo in case ihi = ilo+1 . if (x .lt. xt(middle)) go to 53 ilo = middle go to 50 53 ihi = middle go to 50 c**** set output and return. 90 mflag = -1 left = 1 return 100 mflag = 0 left = ilo return 110 mflag = 1 left = lxt return end c ===================================================================== c These two subroutines, dpbfa8 and dpbsl8, are called by sslvrg. c Note: a rational cholesky version of these functions are available, c called vdpbfa7 and vdpbsl7 c T.Yee 7/10/99 c 1/7/02 c T.Yee has renamed dbpbfa to dbpbfa8 and dpbsl to dpbsl8, to ensure uniqueness subroutine dpbfa8(abd,lda,n,m,info) integer lda,n,m,info double precision abd(lda,*) c c c 20130419; Originally: c double precision abd(lda,1) c c c c c dpbfa8 factors a double precision symmetric positive definite c matrix stored in band form. c c dpbfa8 is usually called by dpbco, but it can be called c directly with a saving in time if rcond is not needed. c c on entry c c abd double precision(lda, n) c the matrix to be factored. the columns of the upper c triangle are stored in the columns of abd and the c diagonals of the upper triangle are stored in the c rows of abd . see the comments below for details. c c lda integer c the leading dimension of the array abd . c lda must be .ge. m + 1 . c c n integer c the order of the matrix a . c c m integer c the number of diagonals above the main diagonal. c 0 .le. m .lt. n . c c on return c c abd an upper triangular matrix r , stored in band c form, so that a = trans(r)*r . c c info integer c = 0 for normal return. c = k if the leading minor of order k is not c positive definite. c c band storage c c if a is a symmetric positive definite band matrix, c the following program segment will set up the input. c c m = (band width above diagonal) c do 20 j = 1, n c i1 = max0(1, j-m) c do 10 i = i1, j c k = i-j+m+1 c abd(k,j) = a(i,j) c 10 continue c 20 continue c c linpack. this version dated 08/14/78 . c cleve moler, university of new mexico, argonne national lab. c c subroutines and functions c c blas ddot c fortran max0,dsqrt c c internal variables c double precision ddot8,t double precision s integer ik,j,jk,k,mu c begin block with ...exits to 40 c c do 30 j = 1, n info = j s = 0.0d0 ik = m + 1 jk = max0(j-m,1) mu = max0(m+2-j,1) if (m .lt. mu) go to 20 do 10 k = mu, m t = abd(k,j) - ddot8(k-mu,abd(ik,jk),1,abd(mu,j),1) t = t/abd(m+1,jk) abd(k,j) = t s = s + t*t ik = ik - 1 jk = jk + 1 10 continue 20 continue s = abd(m+1,j) - s c ......exit if (s .le. 
0.0d0) go to 40 abd(m+1,j) = dsqrt(s) 30 continue info = 0 40 continue return end subroutine dpbsl8(abd,lda,n,m,b) integer lda,n,m double precision abd(lda,*),b(*) c c c 20130419; originally: c double precision abd(lda,1),b(1) c c c dpbsl8 solves the double precision symmetric positive definite c band system a*x = b c using the factors computed by dpbco or dpbfa8. c c on entry c c abd double precision(lda, n) c the output from dpbco or dpbfa8. c c lda integer c the leading dimension of the array abd . c c n integer c the order of the matrix a . c c m integer c the number of diagonals above the main diagonal. c c b double precision(n) c the right hand side vector. c c on return c c b the solution vector x . c c error condition c c a division by zero will occur if the input factor contains c a zero on the diagonal. technically this indicates c singularity but it is usually caused by improper subroutine c arguments. it will not occur if the subroutines are called c correctly and info .eq. 0 . c c to compute inverse(a) * c where c is a matrix c with p columns c call dpbco(abd,lda,n,rcond,z,info) c if (rcond is too small .or. info .ne. 0) go to ... c do 10 j = 1, p c call dpbsl8(abd,lda,n,m,c(1,j)) c 10 continue c c linpack. this version dated 08/14/78 . c cleve moler, university of new mexico, argonne national lab. c c subroutines and functions c c blas daxpy,ddot c fortran min0 c c internal variables c double precision ddot8,t integer k,kb,la,lb,lm c c solve trans(r)*y = b c do 10 k = 1, n lm = min0(k-1,m) la = m + 1 - lm lb = k - lm t = ddot8(lm,abd(la,k),1,b(lb),1) b(k) = (b(k) - t)/abd(m+1,k) 10 continue c c solve r*x = y c do 20 kb = 1, n k = n + 1 - kb lm = min0(k-1,m) la = m + 1 - lm lb = k - lm b(k) = b(k)/abd(m+1,k) t = -b(k) call daxpy8(lm,t,abd(la,k),1,b(lb),1) 20 continue return end VGAM/src/muxr3.c0000644000176200001440000003140713135276761013035 0ustar liggesusers #include <math.h> #include <stdio.h> #include <stdlib.h> #include <R.h> #include <Rmath.h> void vdecccc(int *hqipzx3n, int *exz2jrhq, int *dimm); void m2accc(double *m, double *a, int *dimm, int *hqipzx3n, int *exz2jrhq, int *n, int *M, int *rb1onzwu); void a2mccc(double *a, double *m, int *dimm, int *hqipzx3n, int *exz2jrhq, int *n, int *M); void mux2ccc(double *cc, double *tlgduey8, double *bzmd6ftv, int *p, int *n, int *M); void mux22ccc(double *cc, double *tlgduey8, double *bzmd6ftv, int *dimm, int *hqipzx3n, int *exz2jrhq, int *n, int *M, double *wk, int *rb1onzwu); void mux5ccc(double *cc, double *x, double *bzmd6ftv, int *M, int *n, int *r, int *dimm, int *dimr, int *matrix, double *wk, double *wk2, int *hqipzx3n_M, int *exz2jrhq_M, int *hqipzx3n_r, int *exz2jrhq_r); void mux55ccc(double *evects, double *evals, double *bzmd6ftv, double *wk, double *wk2, int *hqipzx3n, int *exz2jrhq, int *M, int *n); void mux7ccc(double *cc, double *x, double *bzmd6ftv, int *M, int *q, int *n, int *r); void mux111ccc(double *cc, double *the7mqnvy, int *M, int *R, int *n, double *wkcc, double *wk2, int *hqipzx3n, int *exz2jrhq, int *dimm, int *rb1onzwu); void mux15ccc(double *cc, double *x, double *bzmd6ftv, int *M, int *n); void vcholccc(double *cc, int *M, int *n, int *ok, double *wk, int *hqipzx3n, int *exz2jrhq, int *dimm); void vforsubccc(double *cc, double *b, int *M, int *n, double *wk, int *hqipzx3n, int *exz2jrhq, int *dimm); void vbacksubccc(double *cc, double *b, int *M, int *n, double *wk, int *hqipzx3n, int *exz2jrhq, int *dimm); void tapply_mat1(double *mat, int *nr, int *nc, int *type); void vdecccc(int *hqipzx3n, int *exz2jrhq, int *dimm) { int ayfnwr1v;
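/* Note (inferred from usage, not from any original documentation):
   hqipzx3n and exz2jrhq arrive from R as 1-based (row, col) index
   pairs locating the dimm stored elements of each matrix held in
   compact form; decrementing them once lets m2accc()/a2mccc() use
   them directly as 0-based C offsets of the form row + col * M. */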
for(ayfnwr1v = 0; ayfnwr1v < *dimm; ayfnwr1v++) { hqipzx3n[ayfnwr1v] -= 1; exz2jrhq[ayfnwr1v] -= 1; } } void m2accc(double *m, double *a, int *dimm, int *hqipzx3n, int *exz2jrhq, int *n, int *M, int *rb1onzwu) { int ayfnwr1v, gp1jxzuh, MM = *M * *M, MMn = *M * *M * *n; if(*rb1onzwu == 1 || *dimm != *M * (*M + 1) / 2) for(gp1jxzuh = 0; gp1jxzuh < MMn; gp1jxzuh++) a[gp1jxzuh] = 0.0; for(gp1jxzuh = 0; gp1jxzuh < *n; gp1jxzuh++) { for(ayfnwr1v = 0; ayfnwr1v < *dimm; ayfnwr1v++) { a[hqipzx3n[ayfnwr1v] + exz2jrhq[ayfnwr1v] * *M] = m[ayfnwr1v]; if(*rb1onzwu == 0) a[exz2jrhq[ayfnwr1v] + hqipzx3n[ayfnwr1v] * *M] = m[ayfnwr1v]; } a += MM; m += *dimm; } } void a2mccc(double *a, double *m, int *dimm, int *hqipzx3n, int *exz2jrhq, int *n, int *M) { int ayfnwr1v, gp1jxzuh, MM= *M * *M; for(gp1jxzuh = 0; gp1jxzuh < *n; gp1jxzuh++) { for(ayfnwr1v = 0; ayfnwr1v < *dimm; ayfnwr1v++) m[ayfnwr1v] = a[hqipzx3n[ayfnwr1v] + exz2jrhq[ayfnwr1v] * *M]; a += MM; m += *dimm; } } void mux2ccc(double *cc, double *tlgduey8, double *bzmd6ftv, int *p, int *n, int *M) { double urohxe6t; int ayfnwr1v, yq6lorbx, bpvaqm5z, Mp = *M * *p; for(ayfnwr1v = 0; ayfnwr1v < *n; ayfnwr1v++) { for(yq6lorbx = 0; yq6lorbx < *M; yq6lorbx++) { urohxe6t = 0.0; for(bpvaqm5z = 0; bpvaqm5z < *p; bpvaqm5z++) urohxe6t += cc[yq6lorbx + bpvaqm5z * *M] * tlgduey8[bpvaqm5z]; *bzmd6ftv++ = urohxe6t; } tlgduey8 += *p; cc += Mp; } } void mux22ccc(double *cc, double *tlgduey8, double *bzmd6ftv, int *dimm, int *hqipzx3n, int *exz2jrhq, int *n, int *M, double *wk, int *rb1onzwu) { double urohxe6t; int yq6lorbx, bpvaqm5z, gp1jxzuh, one = 1, nzqklc9x; vdecccc(hqipzx3n, exz2jrhq, dimm); for(gp1jxzuh = 0; gp1jxzuh < *n; gp1jxzuh++) { m2accc(cc, wk, dimm, hqipzx3n, exz2jrhq, &one, M, rb1onzwu); for(yq6lorbx = 0; yq6lorbx < *M; yq6lorbx++) { urohxe6t = 0.0; nzqklc9x = *rb1onzwu == 0 ? 0 : yq6lorbx; for(bpvaqm5z = nzqklc9x; bpvaqm5z < *M; bpvaqm5z++) urohxe6t += wk[yq6lorbx + bpvaqm5z * *M] * tlgduey8[bpvaqm5z]; *bzmd6ftv++ = urohxe6t; } tlgduey8 += *M; cc += *dimm; } } void mux5ccc(double *cc, double *x, double *bzmd6ftv, int *M, int *n, int *r, int *dimm, int *dimr, int *matrix, double *wk, double *wk2, int *hqipzx3n_M, int *exz2jrhq_M, int *hqipzx3n_r, int *exz2jrhq_r) { double urohxe6t, *pd, *pd2; int ayfnwr1v, yq6lorbx, gp1jxzuh, bpvaqm5z, Mr = *M * *r, rr = *r * *r, MM = *M * *M, usvdbx3tk, jM, jr, kM, kr, one=1, rb1onzwu=0; if(*matrix == 1) { vdecccc(hqipzx3n_M, exz2jrhq_M, dimm); vdecccc(hqipzx3n_r, exz2jrhq_r, dimr); pd = wk; pd2 = wk2; } else { pd = pd2 = wk; } for(ayfnwr1v = 0; ayfnwr1v < *n; ayfnwr1v++) { if(*matrix == 1) m2accc(cc, pd, dimm, hqipzx3n_M, exz2jrhq_M, &one, M, &rb1onzwu); else { pd = cc; pd2 = bzmd6ftv; } for(yq6lorbx = 0; yq6lorbx < *r; yq6lorbx++) { jM = yq6lorbx * *M; jr = yq6lorbx * *r; for(gp1jxzuh = yq6lorbx; gp1jxzuh < *r; gp1jxzuh++) { kM = gp1jxzuh * *M; kr = gp1jxzuh * *r; urohxe6t = 0.0; for(bpvaqm5z = 0; bpvaqm5z < *M; bpvaqm5z++) for(usvdbx3tk = 0; usvdbx3tk < *M; usvdbx3tk++) urohxe6t += x[bpvaqm5z + jM] * pd[bpvaqm5z + usvdbx3tk * *M] * x[usvdbx3tk + kM]; pd2[yq6lorbx + kr] = pd2[gp1jxzuh + jr] = urohxe6t; } } if(*matrix == 1) a2mccc(pd2, bzmd6ftv, dimr, hqipzx3n_r, exz2jrhq_r, &one, r); cc += (*matrix == 1 ? *dimm : MM); x += Mr; bzmd6ftv += (*matrix == 1 ? 
*dimr : rr); } } void mux55ccc(double *evects, double *evals, double *bzmd6ftv, double *wk, double *wk2, int *hqipzx3n, int *exz2jrhq, int *M, int *n) { double *pd, *pd2, bpvaqm5z; int ayfnwr1v, yq6lorbx, gp1jxzuh, urohxe6t, MM = *M * *M, one = 1, imk5wjxg = *M * (*M + 1)/2; vdecccc(hqipzx3n, exz2jrhq, &imk5wjxg); for(ayfnwr1v = 0; ayfnwr1v < *n; ayfnwr1v++) { pd = evects; pd2 = wk2; for(yq6lorbx = 0; yq6lorbx < *M; yq6lorbx++) for(gp1jxzuh = 0; gp1jxzuh < *M; gp1jxzuh++) *pd2++ = *pd++ * evals[yq6lorbx]; for(yq6lorbx = 0; yq6lorbx < *M; yq6lorbx++) for(gp1jxzuh = yq6lorbx; gp1jxzuh < *M; gp1jxzuh++) { bpvaqm5z = 0.0; for(urohxe6t = 0; urohxe6t < *M; urohxe6t++) bpvaqm5z += wk2[yq6lorbx + urohxe6t * *M] * evects[gp1jxzuh + urohxe6t * *M]; wk[yq6lorbx + gp1jxzuh * *M] = wk[gp1jxzuh + yq6lorbx * *M] = bpvaqm5z; } a2mccc(wk, bzmd6ftv, &imk5wjxg, hqipzx3n, exz2jrhq, &one, M); bzmd6ftv += imk5wjxg; evals += *M; evects += MM; } } void mux7ccc(double *cc, double *x, double *bzmd6ftv, int *M, int *q, int *n, int *r) { double urohxe6t; int ayfnwr1v, yq6lorbx, gp1jxzuh, bpvaqm5z, Mq = *M * *q, qr = *q * *r, Mr = *M * *r, kq, kM; for(ayfnwr1v = 0; ayfnwr1v < *n; ayfnwr1v++) { for(yq6lorbx = 0; yq6lorbx < *M; yq6lorbx++) { for(gp1jxzuh = 0; gp1jxzuh < *r; gp1jxzuh++) { kq = gp1jxzuh * *q; kM = gp1jxzuh * *M; urohxe6t = 0.0; for(bpvaqm5z = 0; bpvaqm5z < *q; bpvaqm5z++) urohxe6t += cc[yq6lorbx + bpvaqm5z * *M] * x[bpvaqm5z + kq]; bzmd6ftv[yq6lorbx + kM] = urohxe6t; } } cc += Mq; bzmd6ftv += Mr; x += qr; } } void mux111ccc(double *cc, double *the7mqnvy, int *M, int *R, int *n, double *wkcc, double *wk2, int *hqipzx3n, int *exz2jrhq, int *dimm, int *rb1onzwu) { double urohxe6t, *pd2, obr6tcexdouble; int ayfnwr1v, yq6lorbx, gp1jxzuh, bpvaqm5z, MM = *M * *M, MR = *M * *R, lowlim; vdecccc(hqipzx3n, exz2jrhq, dimm); for(ayfnwr1v = 0; ayfnwr1v < MM; ayfnwr1v++) wkcc[ayfnwr1v] = 0.0; for(bpvaqm5z = 0; bpvaqm5z < *n; bpvaqm5z++) { for(ayfnwr1v = 0; ayfnwr1v < *dimm; ayfnwr1v++) { if(*rb1onzwu == 0) { obr6tcexdouble = *cc++; wkcc[hqipzx3n[ayfnwr1v] + exz2jrhq[ayfnwr1v] * *M] = wkcc[exz2jrhq[ayfnwr1v] + hqipzx3n[ayfnwr1v] * *M] = obr6tcexdouble; } else { wkcc[hqipzx3n[ayfnwr1v] + exz2jrhq[ayfnwr1v] * *M] = *cc++; } } /* ayfnwr1v */ pd2 = the7mqnvy; for(ayfnwr1v = 0; ayfnwr1v < *M; ayfnwr1v++) for(yq6lorbx = 0; yq6lorbx < *R; yq6lorbx++) wk2[ayfnwr1v + yq6lorbx * *M] = *pd2++; for(ayfnwr1v = 0; ayfnwr1v < *M; ayfnwr1v++) { lowlim = *rb1onzwu == 0 ? 
0 : ayfnwr1v; for(yq6lorbx = 0; yq6lorbx < *R; yq6lorbx++) { urohxe6t = 0.0; for(gp1jxzuh = lowlim; gp1jxzuh < *M; gp1jxzuh++) urohxe6t += wk2[gp1jxzuh + yq6lorbx * *M] * wkcc[ayfnwr1v + gp1jxzuh * *M]; the7mqnvy[yq6lorbx + ayfnwr1v * *R] = urohxe6t; } /* yq6lorbx */ } /* ayfnwr1v */ the7mqnvy += MR; } /* bpvaqm5z */ } void mux15ccc(double *cc, double *x, double *bzmd6ftv, int *M, int *n) { double *pd, *pd2; int ayfnwr1v, yq6lorbx, gp1jxzuh, MM = *M * *M; for(ayfnwr1v = 0; ayfnwr1v < *n; ayfnwr1v++) { pd = cc; pd2 = bzmd6ftv; for(yq6lorbx = 0; yq6lorbx < *M; yq6lorbx++) for(gp1jxzuh = 0; gp1jxzuh < *M; gp1jxzuh++) *pd2++ = *pd++ * x[yq6lorbx]; pd2 = bzmd6ftv; for(yq6lorbx = 0; yq6lorbx < *M; yq6lorbx++) for(gp1jxzuh = 0; gp1jxzuh < *M; gp1jxzuh++) { *pd2 *= x[gp1jxzuh]; pd2++; } bzmd6ftv += MM; x += *M; } } void vcholccc(double *cc, int *M, int *n, int *ok, double *wk, int *hqipzx3n, int *exz2jrhq, int *dimm) { double urohxe6t, *pd; int bpvaqm5z, ayfnwr1v, yq6lorbx, gp1jxzuh, iM, iiM, rb1onzwu = 0, one = 1; vdecccc(hqipzx3n, exz2jrhq, dimm); pd = wk; for(bpvaqm5z = 0; bpvaqm5z < *n; bpvaqm5z++) { *ok = 1; m2accc(cc, wk, dimm, hqipzx3n, exz2jrhq, &one, M, &rb1onzwu); for(ayfnwr1v = 0; ayfnwr1v < *M; ayfnwr1v++) { urohxe6t = 0.0; iM = ayfnwr1v * *M; iiM = ayfnwr1v + iM; for(gp1jxzuh = 0; gp1jxzuh < ayfnwr1v; gp1jxzuh++) urohxe6t += pd[gp1jxzuh + iM] * pd[gp1jxzuh + iM]; pd[iiM] -= urohxe6t; if(pd[iiM] < 0.0) { *ok = 0; break; } pd[iiM] = sqrt(pd[iiM]); for(yq6lorbx = ayfnwr1v+1; yq6lorbx < *M; yq6lorbx++) { urohxe6t = 0.0; for(gp1jxzuh = 0; gp1jxzuh < ayfnwr1v; gp1jxzuh++) urohxe6t += pd[gp1jxzuh + iM] * pd[gp1jxzuh + yq6lorbx * *M]; pd[ayfnwr1v + yq6lorbx * *M] = (pd[ayfnwr1v + yq6lorbx * *M] - urohxe6t) / pd[iiM]; } } a2mccc(wk, cc, dimm, hqipzx3n, exz2jrhq, &one, M); cc += *dimm; ok++; } } void vforsubccc(double *cc, double *b, int *M, int *n, double *wk, int *hqipzx3n, int *exz2jrhq, int *dimm) { double urohxe6t, *pd; int yq6lorbx, gp1jxzuh, bpvaqm5z, rb1onzwu = 1, one = 1; pd = wk; vdecccc(hqipzx3n, exz2jrhq, dimm); for(bpvaqm5z = 0; bpvaqm5z < *n; bpvaqm5z++) { m2accc(cc, wk, dimm, hqipzx3n, exz2jrhq, &one, M, &rb1onzwu); for(yq6lorbx = 0; yq6lorbx < *M; yq6lorbx++) { urohxe6t = b[yq6lorbx]; for(gp1jxzuh = 0; gp1jxzuh < yq6lorbx; gp1jxzuh++) urohxe6t -= pd[gp1jxzuh + yq6lorbx * *M] * b[gp1jxzuh]; b[yq6lorbx] = urohxe6t / pd[yq6lorbx + yq6lorbx * *M]; } cc += *dimm; b += *M; } } void vbacksubccc(double *cc, double *b, int *M, int *n, double *wk, int *hqipzx3n, int *exz2jrhq, int *dimm) { double urohxe6t, *pd; int yq6lorbx, gp1jxzuh, bpvaqm5z, rb1onzwu = 1, one = 1; pd = wk; vdecccc(hqipzx3n, exz2jrhq, dimm); for(bpvaqm5z = 0; bpvaqm5z < *n; bpvaqm5z++) { m2accc(cc, wk, dimm, hqipzx3n, exz2jrhq, &one, M, &rb1onzwu); for(yq6lorbx = *M - 1; yq6lorbx >= 0; yq6lorbx--) { urohxe6t = b[yq6lorbx]; for(gp1jxzuh = yq6lorbx + 1; gp1jxzuh < *M; gp1jxzuh++) urohxe6t -= pd[yq6lorbx + gp1jxzuh * *M] * b[gp1jxzuh]; b[yq6lorbx] = urohxe6t / pd[yq6lorbx + yq6lorbx * *M]; } cc += *dimm; b += *M; } } void tapply_mat1(double *mat, int *nr, int *nc, int *type) { double *pd = mat, *pd2 = mat + *nr; int ayfnwr1v, yq6lorbx; if(*type == 1) for(yq6lorbx = 2; yq6lorbx <= *nc; yq6lorbx++) for(ayfnwr1v = 0; ayfnwr1v < *nr; ayfnwr1v++, pd2++) *pd2 += *pd++; if(*type == 2) { pd2 = mat + *nr * *nc - 1; pd = pd2 - *nr; for(yq6lorbx = *nc; yq6lorbx >= 2; yq6lorbx--) for(ayfnwr1v = 0; ayfnwr1v < *nr; ayfnwr1v++, pd2--) *pd2 -= *pd--; } if(*type == 3) for(yq6lorbx = 2; yq6lorbx <= *nc; yq6lorbx++) for(ayfnwr1v = 0; 
ayfnwr1v < *nr; ayfnwr1v++, pd2++) *pd2 *= *pd++; if(*type < 1 || *type > 3) Rprintf("Error: *type not ezlgm2uped\n"); } VGAM/src/tyeepolygamma3.c0000644000176200001440000001202613135276761014713 0ustar liggesusers #include <math.h> #include <stdio.h> #include <stdlib.h> #include <R.h> #include <Rmath.h> void tyee_C_vdgam1(double *xval, double *lfu2qhid, int *dvhw1ulq); void tyee_C_vtgam1(double *xval, double *lfu2qhid, int *dvhw1ulq); void tyee_C_dgam1w(double sjwyig9t[], double lfu2qhid[], int *f8yswcat, int *dvhw1ulq); void tyee_C_tgam1w(double sjwyig9t[], double lfu2qhid[], int *f8yswcat, int *dvhw1ulq); void tyee_C_cum8sum(double ci1oyxas[], double lfu2qhid[], int *nlfu2qhid, double valong[], int *ntot, int *notdvhw1ulq); void eimpnbinomspecialp(int *interceptonly, double *nrows, double *ncols, double *sizevec, double *pnbinommat, double *rowsums); void tyee_C_vdgam1(double *xval, double *lfu2qhid, int *dvhw1ulq) { double wval, series, obr6tcex = 0.0, tmp1; *dvhw1ulq = 1; if (*xval <= 0.0e0) { *dvhw1ulq = 0; return; } if (*xval < 6.0e0) { tmp1 = *xval + 6.0e0; tyee_C_vdgam1(&tmp1, &obr6tcex, dvhw1ulq); *lfu2qhid = obr6tcex - 1.0e0 / *xval - 1.0e0 / (*xval + 1.0e0) - 1.0e0 / (*xval + 2.0e0) - 1.0e0 / (*xval + 3.0e0) - 1.0e0 / (*xval + 4.0e0) - 1.0e0 / (*xval + 5.0e0); return; } wval = 1.0e0 / (*xval * *xval); series = ((wval * ( -1.0e0 / 12.0e0 + ((wval * ( 1.0e0 / 120.0e0 + ((wval * ( -1.0e0 / 252.0e0 + ((wval * ( 1.0e0 / 240.0e0 + ((wval * ( -1.0e0 / 132.0e0 + ((wval * (691.0e0 /32760.0e0 + ((wval * ( -1.0e0 / 12.0e0 + (wval * 3617.0e0)/ 8160.0e0))))))))))))))))))))); *lfu2qhid = log(*xval) - 0.5e0 / *xval + series; } void tyee_C_vtgam1(double *xval, double *lfu2qhid, int *dvhw1ulq) { double wval, series, obr6tcex = 0.0, tmp1; *dvhw1ulq = 1; if (*xval <= 0.0e0) { *dvhw1ulq = 0; return; } if (*xval < 6.0e0) { tmp1 = *xval + 6.0e0; tyee_C_vtgam1(&tmp1, &obr6tcex, dvhw1ulq); *lfu2qhid = obr6tcex + 1.0e0 / pow( (double) *xval, (double) 2.0) + 1.0e0 / pow( (double) (*xval + 1.0e0), (double) 2.0) + 1.0e0 / pow( (double) (*xval + 2.0e0), (double) 2.0) + 1.0e0 / pow( (double) (*xval + 3.0e0), (double) 2.0) + 1.0e0 / pow( (double) (*xval + 4.0e0), (double) 2.0) + 1.0e0 / pow( (double) (*xval + 5.0e0), (double) 2.0); return; } wval = 1.0e0 / (*xval * *xval); series = 1.0e0 + (wval * ( 1.0e0 / 6.0e0 + (wval * ( -1.0e0 / 30.0e0 + (wval * ( 1.0e0 / 42.0e0 + (wval * ( -1.0e0 / 30.0e0 + (wval * ( 5.0e0 / 66.0e0 + (wval * (-691.0e0 /2730.0e0 + (wval * ( 7.0e0 / 6.0e0 - (wval * 3617.0e0)/ 510.0e0)))))))))))))); *lfu2qhid = 0.5e0 * wval + series / *xval; } void tyee_C_dgam1w(double sjwyig9t[], double lfu2qhid[], int *f8yswcat, int *dvhw1ulq) { int ayfnwr1v, okobr6tcex; double *qnwamo0e1, *qnwamo0e2; *dvhw1ulq = 1; qnwamo0e1 = sjwyig9t; qnwamo0e2 = lfu2qhid; for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) { tyee_C_vdgam1(qnwamo0e1++, qnwamo0e2++, &okobr6tcex); if (okobr6tcex != 1) *dvhw1ulq = okobr6tcex; } } void tyee_C_tgam1w(double sjwyig9t[], double lfu2qhid[], int *f8yswcat, int *dvhw1ulq) { int ayfnwr1v, okobr6tcex; double *qnwamo0e1, *qnwamo0e2; *dvhw1ulq = 1; qnwamo0e1 = sjwyig9t; qnwamo0e2 = lfu2qhid; for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) { tyee_C_vtgam1(qnwamo0e1++, qnwamo0e2++, &okobr6tcex); if (okobr6tcex != 1) *dvhw1ulq = okobr6tcex; } } void tyee_C_cum8sum(double ci1oyxas[], double lfu2qhid[], int *nlfu2qhid, double valong[], int *ntot, int *notdvhw1ulq) { int ayfnwr1v, iii = 1; lfu2qhid[iii-1] = ci1oyxas[iii-1]; for (ayfnwr1v = 2; ayfnwr1v <= *ntot; ayfnwr1v++) { if (valong[ayfnwr1v-1] > valong[ayfnwr1v-2]) {
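/* valong is still ascending, so this observation belongs to the
   same group: extend the running cumulative sum (reading inferred
   from the surrounding code). */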
lfu2qhid[iii-1] += ci1oyxas[ayfnwr1v-1]; } else { iii++; lfu2qhid[iii-1] = ci1oyxas[ayfnwr1v-1]; } } *notdvhw1ulq = (iii == *nlfu2qhid) ? 0 : 1; } void eimpnbinomspecialp(int *interceptonly, double *nrows, double *ncols, double *sizevec, /* length is nrows */ double *pnbinommat, double *rowsums) { double ayfnwr1v, yq6lorbx, tmp1 = 0.0, tmp2; double *fpdlcqk9rowsums, *fpdlcqk9sizevec; if (*interceptonly == 1) { for (yq6lorbx = 0; yq6lorbx < *ncols; yq6lorbx++) { tmp2 = (*sizevec + yq6lorbx); tmp1 += *pnbinommat++ / (tmp2 * tmp2); } *rowsums = tmp1; return; } fpdlcqk9rowsums = rowsums; for (ayfnwr1v = 0; ayfnwr1v < *nrows; ayfnwr1v++) *fpdlcqk9rowsums++ = 0.0; for (yq6lorbx = 0; yq6lorbx < *ncols; yq6lorbx++) { fpdlcqk9rowsums = rowsums; fpdlcqk9sizevec = sizevec; for (ayfnwr1v = 0; ayfnwr1v < *nrows; ayfnwr1v++) { tmp2 = (yq6lorbx + *fpdlcqk9sizevec++); tmp1 = *pnbinommat++ / (tmp2 * tmp2); *fpdlcqk9rowsums++ += tmp1; } } } VGAM/src/vlinpack2.f0000644000176200001440000001717513135276761013661 0ustar liggesusersc This file contains modified code from Hastie and Tibshirani's c GAMFIT code, as well as a rational cholesky function or two. c All code here derives from linpack c T.Yee 7/10/99 c This function was formerly real function dnrm2, but now converted c to double precision c Nb. changed "float(n)" to "dfloat(n)" double precision function vdnrm2 ( n, dx,ldx, incx) c c added by tyee 23/9/00: implicit double precision (a-h,o-z) implicit integer (i-n) c integer next double precision dx(ldx), cutlo, cuthi, hitest, sum double precision xmax,zero,one data zero, one /0.0d0, 1.0d0/ c c euclidean norm of the n-vector stored in dx() with storage c increment incx . c if n .le. 0 return with result = 0. c if n .ge. 1 then incx must be .ge. 1 c c c.l.lawson, 1978 jan 08 c c four phase method using two built-in constants that are c hopefully applicable to all machines. c cutlo = maximum of dsqrt(u/eps) over all known machines. c cuthi = minimum of dsqrt(v) over all known machines. c where c eps = smallest no. such that eps + 1. .gt. 1. c u = smallest positive no. (underflow limit) c v = largest no. (overflow limit) c c brief outline of algorithm.. c c phase 1 scans zero components. c move to phase 2 when a component is nonzero and .le. cutlo c move to phase 3 when a component is .gt. cutlo c move to phase 4 when a component is .ge. cuthi/m c where m = n for x() double precision and m = 2*n for complex. c c values for cutlo and cuthi.. c from the environmental parameters listed in the imsl converter c document the limiting values are as follows.. c cutlo, s.p. u/eps = 2**(-102) for honeywell. close seconds are c univac and dec at 2**(-103) c thus cutlo = 2**(-51) = 4.44089e-16 c cuthi, s.p. v = 2**127 for univac, honeywell, and dec. c thus cuthi = 2**(63.5) = 1.30438e19 c cutlo, d.p. u/eps = 2**(-67) for honeywell and dec. c thus cutlo = 2**(-33.5) = 8.23181e-11 c cuthi, d.p. same as s.p. cuthi = 1.30438e19 c data cutlo, cuthi / 8.232e-11, 1.304e19 / c data cutlo, cuthi / 4.441e-16, 1.304e19 / data cutlo, cuthi / 8.232e-11, 1.304e19 / c if(n .gt. 0) go to 10 vdnrm2 = zero go to 300 c 10 next = 30 sum = zero nn = n * incx c begin main loop i = 1 c 20 go to next,(30, 50, 70, 110) 20 if(next .eq. 30) go to 30 if(next .eq. 50) go to 50 if(next .eq. 70) go to 70 if(next .eq. 110) go to 110 c An error!!! vdnrm2 = 0.0d0 return 30 if( dabs(dx(i)) .gt. cutlo) go to 85 next = 50 xmax = zero c c phase 1. sum is zero c 50 if( dx(i) .eq. zero) go to 200 if( dabs(dx(i)) .gt. cutlo) go to 85 c c prepare for phase 2. 
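c note (a reading of lawson's four-phase scheme as implemented here):
c in phase 2 the sum is accumulated as a sum of (dx(i)/xmax)**2 terms,
c with xmax the largest magnitude seen so far, so that tiny components
c cannot underflow; the true norm is then recovered on exit as
c xmax*dsqrt(sum).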
next = 70 go to 105 c c prepare for phase 4. c 100 i = j next = 110 sum = (sum / dx(i)) / dx(i) 105 xmax = dabs(dx(i)) go to 115 c c phase 2. sum is small. c scale to avoid destructive underflow. c 70 if( dabs(dx(i)) .gt. cutlo ) go to 75 c c common code for phases 2 and 4. c in phase 4 sum is large. scale to avoid overflow. c 110 if( dabs(dx(i)) .le. xmax ) go to 115 c 11/4/01: replacing "**2.0d0" by "**2" (three times in this file) sum = one + sum * (xmax / dx(i))**2 xmax = dabs(dx(i)) go to 200 c 115 sum = sum + (dx(i)/xmax)**2 go to 200 c c c prepare for phase 3. c 75 sum = (sum * xmax) * xmax c c c for real or d.p. set hitest = cuthi/n c for complex set hitest = cuthi/(2*n) c 85 hitest = cuthi / dfloat( n ) c c phase 3. sum is mid-range. no scaling. c do 95 j =i,nn,incx if(dabs(dx(j)) .ge. hitest) go to 100 sum = sum + dx(j)**2 95 continue vdnrm2 = dsqrt( sum ) go to 300 c 200 continue i = i + incx if ( i .le. nn ) go to 20 c c end of main loop. c c compute square root and adjust for scaling. c vdnrm2 = xmax * dsqrt(sum) 300 continue return end c ============================================================== c This is modified linpack Fortran code c Changes marked with yyy c 23/9/99 c Works subroutine vdpbfa7(abd,lda,n,m,info,d) integer lda,n,m,info double precision abd(lda,*), d(n) c c c c 20130419: orig.: c double precision abd(lda,1), d(n) c c c c vdpbfa7 is dpbfa8 but uses Rational Cholesky instead of ordinary c Cholesky c c abd = t(u) d u where u is unit upper triangular and d is diagonal c the diagonal of d is stored where the 1's of the u would be stored c c See dpbfa8 for more information c d(1:n) is assigned the values of diag(d), and abd(m+1,) <- 1 c c Improvement yet to do: c delete d and put its contents into abd(m+1,) (intrinsic 1's) c c internal variables c c double precision ddot8 double precision s,t integer ik,j,jk,k,mu, i,row c begin block with ...exits to 40 c c c yyy d(1) = abd(m+1,1) c do 30 j = 1, n c print *, "j = ", j info = j s = 0.0d0 ik = m + 1 jk = max0(j-m,1) mu = max0(m+2-j,1) if (m .lt. mu) go to 20 do 10 k = mu, m c print *, " k = ", k c t = abd(k,j) - ddot8(k-mu,abd(ik,jk),1,abd(mu,j),1) c t = abd(k,j) do 1 i = 1,k-mu row = mu-2+i+j-m t = t - d(row)*abd(ik-1+i,jk)*abd(mu-1+i,j) c print *, " row = ", row 1 continue c c yyy c t = t/abd(m+1,jk) row = mu-2+(k-mu+1)+j-m c print *, " row = ", row t = t/d(row) c abd(k,j) = t c c yyy c print *, " index = ", mu-1+i+j-m s = s + t*t*d(row) c ik = ik - 1 jk = jk + 1 10 continue 20 continue s = abd(m+1,j) - s c c ......exit if (s .le. 
0.0d0) go to 40 c c yyy c abd(m+1,j) = dsqrt(s) abd(m+1,j) = 1d0 d(j) = s c 30 continue info = 0 40 continue return end subroutine vdpbsl7(abd,lda,n,m,b,d) integer lda,n,m double precision abd(lda,*),b(*),d(*) c c c c 20130419: orig: c double precision abd(lda,1),b(1),d(1) c c c c vdpbsl7 is dpbsl8 but uses Rational Cholesky instead of ordinary c Cholesky c c See dpbsl8 for more information c c Improvement yet to do: c delete d and put its contents into abd(m+1,) (intrinsic 1's) c c internal variables c double precision ddot8,t integer k,kb,la,lb,lm c c solve trans(r)*y = b c do 10 k = 1, n lm = min0(k-1,m) la = m + 1 - lm lb = k - lm t = ddot8(lm,abd(la,k),1,b(lb),1) c c yyy c b(k) = (b(k) - t)/abd(m+1,k) b(k) = b(k) - t c 10 continue c c c yyy do 15 k = 1, n b(k) = b(k)/d(k) 15 continue c c c solve r*x = y c do 20 kb = 1, n k = n + 1 - kb lm = min0(k-1,m) la = m + 1 - lm lb = k - lm c c yyy c b(k) = b(k)/abd(m+1,k) c t = -b(k) call daxpy8(lm,t,abd(la,k),1,b(lb),1) 20 continue return end VGAM/src/lms.f0000644000176200001440000001477713135276761012570 0ustar liggesusersC Output from Public domain Ratfor, version 1.01 subroutine dpdlyjn(psi, i9mwnvqt, mymu, sigma, kpzavbj3ative, lfu2 *qhid) implicit logical (a-z) integer kpzavbj3ative double precision psi, i9mwnvqt, mymu, sigma, lfu2qhid(3) integer hbsl0gto, izero0 double precision aa, bb, uqnkc6zg, n3iasxug logical cc, pos hbsl0gto = 1 izero0 = 0 n3iasxug = 1.0d-04 mymu = 0.0d0 sigma = 1.0d0 cc = (psi .ge. 0.0d0) if(cc)then bb = i9mwnvqt pos = (dabs(i9mwnvqt) .le. n3iasxug) else bb = -2.0d0 + i9mwnvqt pos = (dabs(i9mwnvqt-2.0d0) .le. n3iasxug) endif aa = 1.0d0 + psi * bb if(kpzavbj3ative .ge. 0)then if(pos)then lfu2qhid(1) = psi else lfu2qhid(1) = aa / bb endif endif if(kpzavbj3ative .ge. 1)then if(pos)then lfu2qhid(2) = (lfu2qhid(1)**2) / 2 else uqnkc6zg = lfu2qhid(1) lfu2qhid(2) = (aa * (dlog(aa)/bb) - uqnkc6zg) / bb endif endif if(kpzavbj3ative .ge. 2)then if(pos)then lfu2qhid(3) = (lfu2qhid(1)**3) / 3 else uqnkc6zg = lfu2qhid(2) * 2.0d0 lfu2qhid(3) = (aa * (dlog(aa)/bb) ** 2 - uqnkc6zg) / bb endif endif return end subroutine gleg11(ghz9vuba, i9mwnvqt, mymu, sigma, kpzavbj3mat, le *nkpzavbj3mat, lfu2qhid) implicit logical (a-z) integer lenkpzavbj3mat double precision ghz9vuba, i9mwnvqt, mymu, sigma, kpzavbj3mat(4), *lfu2qhid integer hbsl0gto, itwo2, three3 double precision psi, pim12, o3jyipdf(3), two12 three3 = 3 itwo2 = 2 hbsl0gto = 1 two12 = 1.41421356237309515d0 if(lenkpzavbj3mat .gt. 0)then lfu2qhid = kpzavbj3mat(4) * (kpzavbj3mat(2)**2 + two12 * sigma * g *hz9vuba * kpzavbj3mat(3)) else pim12 = 0.564189583547756279d0 psi = mymu + two12 * sigma * ghz9vuba call dpdlyjn(psi, i9mwnvqt, mymu, sigma, itwo2, o3jyipdf) lfu2qhid = (dexp(-ghz9vuba*ghz9vuba) * pim12) * (o3jyipdf(2)**2 + *(psi - mymu) * o3jyipdf(3)) / sigma**2 endif return end subroutine gleg12(ghz9vuba, i9mwnvqt, mymu, sigma, kpzavbj3mat, le *nkpzavbj3mat, lfu2qhid) implicit logical (a-z) integer lenkpzavbj3mat double precision ghz9vuba, i9mwnvqt, mymu, sigma, kpzavbj3mat(4), *lfu2qhid integer hbsl0gto, itwo2 double precision psi, pim12, two12 double precision tad5vhsu(3) itwo2 = 2 hbsl0gto = 1 if(lenkpzavbj3mat .gt. 
0)then lfu2qhid = kpzavbj3mat(4) * (-kpzavbj3mat(2)) else pim12 = 0.564189583547756279d0 two12 = 1.41421356237309515d0 psi = mymu + two12 * sigma * ghz9vuba call dpdlyjn(psi, i9mwnvqt, mymu, sigma, hbsl0gto, tad5vhsu) lfu2qhid = (dexp(-ghz9vuba*ghz9vuba) * pim12) * (-tad5vhsu(2)) / s *igma**2 endif return end subroutine gleg13(ghz9vuba, i9mwnvqt, mymu, sigma, kpzavbj3mat, le *nkpzavbj3mat, lfu2qhid) implicit logical (a-z) integer lenkpzavbj3mat double precision ghz9vuba, i9mwnvqt, mymu, sigma, kpzavbj3mat(4), *lfu2qhid integer hbsl0gto, itwo2 double precision psi, mtpim12, two12 double precision tad5vhsu(3) itwo2 = 2 hbsl0gto = 1 if(lenkpzavbj3mat .gt. 0)then lfu2qhid = kpzavbj3mat(4) * (-kpzavbj3mat(2)) * dsqrt(8.0d0) * ghz *9vuba else mtpim12 = -1.12837916709551256d0 two12 = 1.41421356237309515d0 psi = mymu + two12 * sigma * ghz9vuba call dpdlyjn(psi, i9mwnvqt, mymu, sigma, hbsl0gto, tad5vhsu) lfu2qhid = dexp(-ghz9vuba*ghz9vuba) * mtpim12 * tad5vhsu(2) * (psi * - mymu) / sigma**3 endif return end subroutine gint3(minx, maxx, wts, ahl0onwx, i9mwnvqt, mymu, sigma, * kk, lfu2qhid, elemnt) implicit logical (a-z) integer kk, elemnt double precision minx, maxx, wts(kk), ahl0onwx(kk), lfu2qhid, i9mw *nvqt, mymu, sigma integer gp1jxzuh, lenkpzavbj3mat double precision atx, dint, tint, kpzavbj3mat(4), midpt, range12 lenkpzavbj3mat = 0 midpt = 0.50d0 * (minx + maxx) range12 = 0.50d0 * (maxx - minx) dint = 0.0d0 if(elemnt .eq. 1)then do23022 gp1jxzuh=1,kk atx = midpt + range12 * ahl0onwx(gp1jxzuh) call gleg11(atx, i9mwnvqt, mymu, sigma, kpzavbj3mat, lenkpzavbj3ma *t, tint) dint = dint + tint * wts(gp1jxzuh) 23022 continue 23023 continue else if(elemnt .eq. 2)then do23026 gp1jxzuh=1,kk atx = midpt + range12 * ahl0onwx(gp1jxzuh) call gleg12(atx, i9mwnvqt, mymu, sigma, kpzavbj3mat, lenkpzavbj3ma *t, tint) dint = dint + tint * wts(gp1jxzuh) 23026 continue 23027 continue else if(elemnt .eq. 3)then do23030 gp1jxzuh=1,kk atx = midpt + range12 * ahl0onwx(gp1jxzuh) call gleg13(atx, i9mwnvqt, mymu, sigma, kpzavbj3mat, lenkpzavbj3ma *t, tint) dint = dint + tint * wts(gp1jxzuh) 23030 continue 23031 continue endif endif endif lfu2qhid = lfu2qhid + range12 * dint return end subroutine yjngintf(minx, maxx, ahl0onwx, wts, kuzxj1lo, kk, i9mwn *vqt, mymu, sigma, lfu2qhid, qaltf0nz) implicit logical (a-z) integer kuzxj1lo, kk double precision minx(kuzxj1lo), maxx(kuzxj1lo), wts(kk), ahl0onwx *(kk), i9mwnvqt(kuzxj1lo), mymu(kuzxj1lo), sigma(kuzxj1lo), lfu2qhi *d(3,kuzxj1lo), qaltf0nz integer ayfnwr1v, iii, gp1jxzuh, lencomp, ipzbcvw3, hmayv1xt, elem *nt, hbsl0gto, itwo2 double precision xd4mybgj, j4qgxvlk, wiptsjx8 hbsl0gto = 1 itwo2 = 2 lencomp = 12 do23032 ayfnwr1v = 1,kuzxj1lo do23034 elemnt=1,3 j4qgxvlk = -10.0d0 do23036 iii=2,lencomp ipzbcvw3 = 2 ** iii xd4mybgj = (maxx(ayfnwr1v) - minx(ayfnwr1v)) / ipzbcvw3 lfu2qhid(elemnt,ayfnwr1v) = 0.0d0 do23038 gp1jxzuh=1,ipzbcvw3 call gint3(minx(ayfnwr1v)+(gp1jxzuh-1)*xd4mybgj, minx(ayfnwr1v)+gp *1jxzuh*xd4mybgj, wts, ahl0onwx, i9mwnvqt(ayfnwr1v), mymu(ayfnwr1v) *, sigma(ayfnwr1v), kk, lfu2qhid(elemnt,ayfnwr1v), elemnt) 23038 continue 23039 continue wiptsjx8 = dabs(lfu2qhid(elemnt,ayfnwr1v) - j4qgxvlk) / (1.0d0 + d *abs(lfu2qhid(elemnt,ayfnwr1v))) if(wiptsjx8 .lt. 
qaltf0nz)then goto 234 else j4qgxvlk = lfu2qhid(elemnt,ayfnwr1v) endif 23036 continue 23037 continue 234 hmayv1xt = 0 23034 continue 23035 continue 23032 continue 23033 continue return end VGAM/src/vcall2.f0000644000176200001440000000117313135276761013142 0ustar liggesusersC Output from Public domain Ratfor, version 1.01 subroutine vcall2(onemor,w,y,eta,beta,u) logical onemor double precision w(*), y(*), eta(*), beta(*), u(*) onemor = .true. w(1) = 1.0d0 y(1) = 1.0d0 eta(1) = 1.0d0 beta(1) = 1.0d0 u(1) = 1.0d0 return end subroutine vcall1(onemor,y,eta,beta,u,xbig,cpxbig) logical onemor, cpxbig double precision y(*), eta(*), beta(*), u(*), xbig(*) onemor = .true. y(1) = 1.0d0 eta(1) = 1.0d0 beta(1) = 1.0d0 u(1) = 1.0d0 xbig(1) = 1.0d0 cpxbig = .true. return end VGAM/src/vlinpack1.f0000644000176200001440000000406413135276761013651 0ustar liggesusersC Output from Public domain Ratfor, version 1.01 subroutine vqrdca(x,ldx,n,p,fasrkub3,jpvt,work,xwdf5ltg,eps) implicit double precision (a-h,o-z) implicit integer (i-n) double precision dsign, dabs, dmax1, dsqrt integer min0 integer ldx,n,p,xwdf5ltg integer jpvt(*) integer j,jj,jp,l,lup,curpvt double precision x(ldx,p),fasrkub3(p),work(*),eps double precision vdnrm2,tt double precision ddot8,nrmxl,t do23000 j=1,p fasrkub3(j) = vdnrm2(n,x(1,j),ldx,1) work(j) = fasrkub3(j) 23000 continue 23001 continue l=1 lup = min0(n,p) curpvt = p 23002 if(l.le.lup)then fasrkub3(l) = 0.0d0 nrmxl = vdnrm2(n-l+1, x(l,l), ldx, 1) if(nrmxl .lt. eps)then call dshift8(x,ldx,n,l,curpvt) jp = jpvt(l) t=fasrkub3(l) tt=work(l) j=l+1 23006 if(.not.(j.le.curpvt))goto 23008 jj=j-1 jpvt(jj)=jpvt(j) fasrkub3(jj)=fasrkub3(j) work(jj)=work(j) 23007 j=j+1 goto 23006 23008 continue jpvt(curpvt)=jp fasrkub3(curpvt)=t work(curpvt)=tt curpvt=curpvt-1 if(lup.gt.curpvt)then lup=curpvt endif else if(l.eq.n)then goto 23003 endif if(x(l,l).ne.0.0d0)then nrmxl = dsign(nrmxl,x(l,l)) endif call dscal8(n-l+1,1.0d0/nrmxl,x(l,l),1) x(l,l) = 1.0d0+x(l,l) j=l+1 23015 if(.not.(j.le.curpvt))goto 23017 t = -ddot8(n-l+1,x(l,l),1,x(l,j),1)/x(l,l) call daxpy8(n-l+1,t,x(l,l),1,x(l,j),1) if(fasrkub3(j).ne.0.0d0)then tt = 1.0d0-(dabs(x(l,j))/fasrkub3(j))**2 tt = dmax1(tt,0.0d0) t = tt tt = 1.0d0+0.05d0*tt*(fasrkub3(j)/work(j))**2 if(tt.ne.1.0d0)then fasrkub3(j) = fasrkub3(j)*dsqrt(t) else fasrkub3(j) = vdnrm2(n-l,x(l+1,j),ldx,1) work(j) = fasrkub3(j) endif endif 23016 j=j+1 goto 23015 23017 continue fasrkub3(l) = x(l,l) x(l,l) = -nrmxl l=l+1 endif goto 23002 endif 23003 continue xwdf5ltg = lup return end VGAM/src/vgam3.c0000644000176200001440000024023413135276761012774 0ustar liggesusers #include <stdio.h> #include <stdlib.h> #include <math.h> #include <R.h> #include <Rmath.h> void Yee_vbvs(int *f8yswcat, double gkdx5jal[], double rpyis2kc[], double sjwyig9t[], double kispwgx3[], int *acpios9q, int *order, int *wy1vqfzu); void fapc0tnbtfeswo7c(double osiz4fxy[], int *acpios9q, int *wy1vqfzu, int *ldk, double wbkq9zyi[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[]); void fapc0tnbybnagt8k(int *iii, int *cz8qdfyj, int *tesdm5kv, double g9fvdrbw[], double osiz4fxy[], double rbne6ouj[], int *kxvq6sfw, int *nyfu9rod, int *wy1vqfzu, int *ldk, int *kvowz9ht, int *kuzxj1lo, int tgiyxdw1[], int dufozmt7[]); void Yee_spline(double *sjwyig9t, double *tlgduey8, double *rbne6ouj, double *gkdx5jal, int *lqsahu0r, int *acpios9q, int *ldk, int *wy1vqfzu, int *kvowz9ht, double wbkq9zyi[], double lamvec[], int *aalgpft4y, double t8hwvalr[], double rpyis2kc[], double ui8ysltq[], double ifys6woa[], double hdnw2fts[], int *yzoe1rsp, int
*fbd5yktj, int *ftnjamu2, double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double tt2[], int *cvnjhg2u, int itdcb8ilk[], // Added 20100313 double tdcb8ilk[] // Added 20100313 ); void fapc0tnbcn8kzpab(double gkdx5jal[], double sjwyig9t[], double rpyis2kc[], int *lqsahu0r, int *acpios9q, int *wy1vqfzu, double t8hwvalr[]); void vsuff9(int *ftnjamu2, int *lqsahu0r, int ezlgm2up[], double sjwyig9t[], double tlgduey8[], double rbne6ouj[], double pygsw6ko[], double pasjmo8g[], double eshvo2ic[], double ueshvo2ic[], double onxjvw8u[], int *dvhw1ulq, int *wy1vqfzu, int *kvowz9ht, int *npjlv3mr, double conmat[], int *kgwmz4ip, int *iz2nbfjc, int *wueshvo2ic, int *npjlv3mreshvo2ic, int *dim2eshvo2ic); void fapc0tnbicpd0omv(double enaqpzk9[], double sjwyig9t[], double gkdx5jal[], double grmuyvx9[], int *ldk, int *lqsahu0r, int *acpios9q, int *wy1vqfzu, int *jzwsy6tp, double rbne6ouj[], double ifys6woa[], int *kvowz9ht, int *ftnjamu2); void fapc0tnbo0xlszqr(int *wy1vqfzu, double *g9fvdrbw, double *quc6khaf, double *bmb); void fapc0tnbvsel(int *nurohxe6t, int *nbpvaqm5z, int *wy1vqfzu, int *ldk, double minv[], double quc6khaf[]); void fapc0tnbovjnsmt2(double bmb[], double rbne6ouj[], double ifys6woa[], int *wy1vqfzu, int *kuzxj1lo, int *dimw, int *iii, int tgiyxdw1_[], int dufozmt7_[]); void fapc0tnbvicb2(double enaqpzk9[], double wpuarq2m[], double Dvector[], int *wy1vqfzu, int *f8yswcat); void fapc0tnbewg7qruh(double ci1oyxas[], double tlgduey8[], double rbne6ouj[], int *ftnjamu2, int *wy1vqfzu, int ezlgm2up[], int *lqsahu0r, double wbkq9zyi[], double lamvec[], double hdnw2fts[], double kispwgx3[], double ui8ysltq[], int *kvowz9ht, int *fbd5yktj, int *ldk, int *aalgpft4y, int *yzoe1rsp, double rpyis2kc[], double gkdx5jals[], double ifys6woa[], double conmat[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double *tt2, int *cvnjhg2u, int *acpios9q, int *iz2nbfjc, int *kgwmz4ip, int *npjlv3mr, int itdcb8ilk[], // Added 20100313 double tdcb8ilk[] // Added 20100313 ); void Yee_vbfa( int psdvgce3[], double *fjcasv7g, double he7mqnvy[], double tlgduey8[], double rbne6ouj[], double hdnw2fts[], double lamvec[], double wbkq9zyi[], int ezlgm2up[], int lqsahu0r[], int which[], double kispwgx3[], double m0ibglfx[], double zshtfg8c[], double ui8ysltq[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], double wpuarq2m[], double hjm2ktyr[], int ulm3dvzg[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], int nbzjkpi3[], int lindex[], int acpios9q[], int jwbkl9fp[]); void fapc0tnbvbfa1(int *ftnjamu2, int *wy1vqfzu, int ezlgm2up[], int lqsahu0r[], int which[], double he7mqnvy[], double tlgduey8[], double rbne6ouj[], double wbkq9zyi[], double lamvec[], double hdnw2fts[], double kispwgx3[], double m0ibglfx[], double zshtfg8c[], double ui8ysltq[], double *zpcqv3uj, double vc6hatuj[], double fasrkub3[], int *qemj9asg, int ges1xpkr[], double wpuarq2m[], double hjm2ktyr[], int ulm3dvzg[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], double *ghdetj8v, int nbzjkpi3[], int lindex[], int acpios9q[], int jwbkl9fp[], int *nhja0izq, int *yzoe1rsp, int *ueb8hndv, int *gtrlbz3e, int *rutyk8mg, int *xjc4ywlh, int *kvowz9ht, int *npjlv3mr, int *fbd5yktj, int *ldk, int *algpft4y, int itdcb8ilk[], double tdcb8ilk[]); void fapc0tnbx6kanjdh(double sjwyig9t[], double xout[], int *f8yswcat, int *wy1vqfzu); double fapc0tnbrd9beyfk(int *f8yswcat, double bhcji9gl[], double po8rwsmy[], double 
m0ibglfx[]); void fapc0tnbpitmeh0q(int *f8yswcat, double bhcji9gl[], double po8rwsmy[], double *lfu2qhid, double *lm9vcjob); void fapc0tnbdsrt0gem(int *f8yswcat, double sjwyig9t[], double po8rwsmy[], double bhcji9gl[], double ub4xioar[], double ui8ysltq[], int *yzoe1rsp); void fapc0tnbshm8ynte(int *ftnjamu2, int ezlgm2up[], double pygsw6ko[], double sjwyig9t[]); void vknootl2(double x[], int *f8yswcat, double gkdx5jal[], int *rvy1fpli, int *ukgwt7na); void Yee_pknootl2(double *gkdx5jal, int *f8yswcat, int *zo8wpibx, double *Toler_ankcghz2); void F77_NAME(wbvalue)(double*, double*, int*, int*, double*, int*, double*); void F77_NAME(vinterv)(double*, int*, double*, int*, int*); void F77_NAME(vbsplvd)(double*, int*, double*, int*, double*, double*,int*); void F77_NAME(vdpbfa7)(double*, int*, int*, int*, int*, double*); void F77_NAME(vdpbsl7)(double*, int*, int*, int*, double*, double*); void F77_NAME(vdqrsl)(double*, int*, int*, int*, double*, double*, double*, double*, double*, double*, double*, int*, int*); void F77_NAME(vqrdca)(double*, int*, int*, int*, double*, int*, double*, int*, double*); void Free_fapc0tnbyee_spline(double *wkumc9idosiz4fxy, double *wkumc9idenaqpzk9, double *wkumc9idbtwy, double *wkumc9idwk0, double *wkumc9idbk3ymcih, int *wkumc9idtgiyxdw1, int *wkumc9iddufozmt7); void Free_fapc0tnbewg7qruh(double *wkumc9idWrk1, int *wkumc9idges1xpkr, double *wkumc9idbeta, double *wkumc9idfasrkub3, double *wkumc9idsout, double *wkumc9idr0oydcxb, double *wkumc9idub4xioar, double *wkumc9ideffect, double *wkumc9idueshvo2ic, double *wkumc9ids0, double *wkumc9idpygsw6ko, double *wkumc9idpasjmo8g, double *wkumc9ideshvo2ic, double *wkumc9idonxjvw8u, double *wkumc9idwk4); void F77_NAME(vdigami)(double*, double*, double*, double*, double*, double*, double*, double*, double*, int*, double*); void VGAM_C_vdigami(double d[], double x[], double p[], double gplog[], double gp1log[], double psip[], double psip1[], double psidp[], double psidp1[], int *ifault, double *tmax, int *f8yswcat); void n5aioudkgt9iulbf(double sjwyig9t[], double ghz9vuba[], double po8rwsmy[], double gkdx5jal[], int *rvy1fpli, int *kuzxj1lo, double zyupcmk6[], double zvau2lct[], double f6lsuzax[], double fvh2rwtc[], double dcfir2no[]); extern void n5aioudkdnaoqj0l(double *pjb6wfoq, double *xs, double *ys, double ws[], int *kuzxj1lo, int *nk, double gkdx5jal[], double coef[], double sz[], double ifys6woa[], double *wbkq9zyi, double parms[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double *tt2, int *cvnjhg2u, int l3zpbstu[], int *xtov9rbf, int *wep0oibc, int *fbd5yktj); extern void n5aioudkzosq7hub(double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double tb[], int *nb); extern void fvlmz9iyC_qpsedg8x(int tgiyxdw1[], int dufozmt7[], int *wy1vqfzu); extern void fvlmz9iyjdbomp0g(double rbne6ouj[], double unvxka0m[], int *wy1vqfzu, int *dvhw1ulq, int *isolve); extern void fvlmz9iyjdbomp0g(double rbne6ouj[], double unvxka0m[], int *wy1vqfzu, int *dvhw1ulq, int *isolve); extern void fvlmz9iyC_mux22(double wpuarq2m[], double tlgduey8[], double lfu2qhid[], int *dimu, int *f8yswcat, int *wy1vqfzu); extern void fvlmz9iyC_vbks(double wpuarq2m[], double unvxka0m[], int *wy1vqfzu, int *f8yswcat, int *dimu); extern void fvlmz9iyC_lkhnw9yq(double wpuarq2m[], double ks3wejcv[], int *npjlv3mr, int *wy1vqfzu, int *dvhw1ulq); extern void fvlmz9iyC_mux17(double wpuarq2m[], double he7mqnvy[], int *wy1vqfzu, int *xjc4ywlh, int *f8yswcat, int *dimu, int *rutyk8mg); void 
VGAM_C_vdigami(double d[], double x[], double p[], double gplog[], double gp1log[], double psip[], double psip1[], double psidp[], double psidp1[], int *ifault, double *tmax, int *f8yswcat) { int ayfnwr1v; for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) { F77_CALL(vdigami)(d, x, p, gplog, gp1log, psip, psip1, psidp, psidp1, ifault, tmax); d += 6; x++; p++; gplog++; gp1log++; psip++; psip1++; psidp++; psidp1++; ifault++; } } void Yee_vbvs(int *f8yswcat, double gkdx5jal[], double rpyis2kc[], double sjwyig9t[], double kispwgx3[], int *acpios9q, int *order, int *wy1vqfzu) { double *chw8lzty; int ayfnwr1v, yq6lorbx, h2dpsbkr = 4; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { chw8lzty = sjwyig9t; for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) { F77_CALL(wbvalue)(gkdx5jal, rpyis2kc, acpios9q, &h2dpsbkr, chw8lzty++, order, kispwgx3++); } rpyis2kc += *acpios9q; } } void fapc0tnbtfeswo7c(double osiz4fxy[], int *acpios9q, int *wy1vqfzu, int *ldk, double wbkq9zyi[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[]) { int ayfnwr1v, yq6lorbx, ayfnwr1vupp; double *fpdlcqk9wbkq9zyi, *fpdlcqk9xecbg0pf, *fpdlcqk9z4grbpiq, *fpdlcqk9d7glzhbj, *fpdlcqk9v2eydbxs, *fpdlcqk9osiz4fxy; fpdlcqk9osiz4fxy = osiz4fxy + *ldk - 1; fpdlcqk9xecbg0pf = xecbg0pf; for (ayfnwr1v = 1; ayfnwr1v <= *acpios9q; ayfnwr1v++) { fpdlcqk9wbkq9zyi = wbkq9zyi; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9osiz4fxy += *fpdlcqk9wbkq9zyi++ * *fpdlcqk9xecbg0pf; fpdlcqk9osiz4fxy += *ldk; } fpdlcqk9xecbg0pf++; } fpdlcqk9osiz4fxy = osiz4fxy + *wy1vqfzu * *ldk; fpdlcqk9osiz4fxy = fpdlcqk9osiz4fxy + *ldk - *wy1vqfzu - 1; fpdlcqk9z4grbpiq = z4grbpiq; ayfnwr1vupp = *acpios9q - 1; // 20140523; I changed the following line plus 2 other lines: for (ayfnwr1v = 1; ayfnwr1v <= ayfnwr1vupp; ayfnwr1v++) { fpdlcqk9wbkq9zyi = wbkq9zyi; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9osiz4fxy += *fpdlcqk9wbkq9zyi++ * *fpdlcqk9z4grbpiq; fpdlcqk9osiz4fxy += *ldk; } fpdlcqk9z4grbpiq++; } fpdlcqk9osiz4fxy = osiz4fxy + *ldk + 2 * *wy1vqfzu * *ldk; fpdlcqk9osiz4fxy = fpdlcqk9osiz4fxy - 2 * *wy1vqfzu - 1; fpdlcqk9d7glzhbj = d7glzhbj; ayfnwr1vupp = *acpios9q - 2; for (ayfnwr1v = 1; ayfnwr1v <= ayfnwr1vupp; ayfnwr1v++) { fpdlcqk9wbkq9zyi = wbkq9zyi; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9osiz4fxy += *fpdlcqk9wbkq9zyi++ * *fpdlcqk9d7glzhbj; fpdlcqk9osiz4fxy += *ldk; } fpdlcqk9d7glzhbj++; } fpdlcqk9osiz4fxy = osiz4fxy + *ldk + 3 * *wy1vqfzu * *ldk; fpdlcqk9osiz4fxy = fpdlcqk9osiz4fxy - 3 * *wy1vqfzu - 1; fpdlcqk9v2eydbxs = v2eydbxs; ayfnwr1vupp = *acpios9q - 3; for (ayfnwr1v = 1; ayfnwr1v <= ayfnwr1vupp; ayfnwr1v++) { fpdlcqk9wbkq9zyi = wbkq9zyi; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9osiz4fxy += *fpdlcqk9wbkq9zyi++ * *fpdlcqk9v2eydbxs; fpdlcqk9osiz4fxy += *ldk; } fpdlcqk9v2eydbxs++; } } void fapc0tnbybnagt8k(int *iii, int *cz8qdfyj, int *tesdm5kv, double g9fvdrbw[], double osiz4fxy[], double rbne6ouj[], int *kxvq6sfw, int *nyfu9rod, int *wy1vqfzu, int *ldk, int *kvowz9ht, int *kuzxj1lo, int tgiyxdw1[], int dufozmt7[]) { double tmp_wrk; int urohxe6t, nead, bcol, brow, biuvowq2, nbj8tdsk; bcol = *cz8qdfyj + *tesdm5kv; brow = *cz8qdfyj; for (urohxe6t = 1; urohxe6t <= *kvowz9ht; urohxe6t++) { tmp_wrk = rbne6ouj[*iii -1 + (urohxe6t-1) * *kuzxj1lo] * g9fvdrbw[*kxvq6sfw-1] * g9fvdrbw[*nyfu9rod-1]; biuvowq2 = (brow-1) * *wy1vqfzu + tgiyxdw1[urohxe6t-1]; nbj8tdsk = (bcol-1) * *wy1vqfzu + dufozmt7[urohxe6t-1]; nead = nbj8tdsk - biuvowq2; 
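/* Band storage note (editorial gloss on the indexing just below): the
   (biuvowq2, nbj8tdsk) entry of the symmetric system is held
   column-wise with leading dimension *ldk, so an entry nead
   superdiagonals above the main diagonal lands in row
   (*ldk - nead - 1) of column nbj8tdsk -- the usual packed band
   layout; tmp_wrk is also added at the mirrored position when the
   block offset *tesdm5kv is nonzero. */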
osiz4fxy[*ldk - nead - 1 + (nbj8tdsk-1) * *ldk] += tmp_wrk; if (*tesdm5kv > 0 && dufozmt7[urohxe6t-1] != tgiyxdw1[urohxe6t-1]) { biuvowq2 = (brow-1) * *wy1vqfzu + dufozmt7[urohxe6t-1]; nbj8tdsk = (bcol-1) * *wy1vqfzu + tgiyxdw1[urohxe6t-1]; nead = nbj8tdsk - biuvowq2; osiz4fxy[*ldk - nead - 1 + (nbj8tdsk-1) * *ldk] += tmp_wrk; } } } void Free_fapc0tnbyee_spline(double *wkumc9idosiz4fxy, double *wkumc9idenaqpzk9, double *wkumc9idbtwy, double *wkumc9idwk0, double *wkumc9idbk3ymcih, int *wkumc9idtgiyxdw1, int *wkumc9iddufozmt7) { Free(wkumc9idosiz4fxy); Free(wkumc9idenaqpzk9); Free(wkumc9idbtwy); Free(wkumc9idwk0); Free(wkumc9idbk3ymcih); Free(wkumc9idtgiyxdw1); Free(wkumc9iddufozmt7); } void Yee_spline(double *sjwyig9t, double *tlgduey8, double *rbne6ouj, double *gkdx5jal, int *lqsahu0r, int *acpios9q, int *ldk, int *wy1vqfzu, int *kvowz9ht, double wbkq9zyi[], double lamvec[], int *aalgpft4y, double t8hwvalr[], double rpyis2kc[], double ui8ysltq[], double ifys6woa[], double hdnw2fts[], int *yzoe1rsp, int *fbd5yktj, int *ftnjamu2, double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double *tt2, int *cvnjhg2u, int itdcb8ilk[], // Added 20100313 double tdcb8ilk[] // Added 20100313 ) { int ayfnwr1v, yq6lorbx, gp1jxzuh, urohxe6t, bpvaqm5z, dqlr5bse, pqzfxw4i, wep0oibc; int have_setup_sg = 0; /* == 1 if sg[0123] have been initialized */ int junkicrit = -1, xtov9rbf = 4, l3zpbstu[3], pn9eowxc; double jstx4uwe[4], g9fvdrbw[4], qaltf0nz = 0.1e-9, ms0qypiw[16], *fpdlcqk9btwy; int yu6izdrc = 0, pqneb2ra = 1, qhzja4ny = 2, bvsquk3z = 3, h2dpsbkr = 4; int arm0lkbg1, arm0lkbg2; double *wkumc9idosiz4fxy, *wkumc9idenaqpzk9, *wkumc9idbtwy, *wkumc9idwk0, *wkumc9idbk3ymcih; int *wkumc9idtgiyxdw1, *wkumc9iddufozmt7; int imk5wjxg = *wy1vqfzu * (*wy1vqfzu + 1) / 2; double kpftdm0jmynl7uaq = tdcb8ilk[0], kpftdm0jzustx4fw = tdcb8ilk[1], kpftdm0jtol = tdcb8ilk[2], kpftdm0jeps = tdcb8ilk[3]; double svdbx3tk_tt1, svdbx3tk_tt2 = 0.0, svdbx3tk_g2dnwteb = -1.0; double *wkumc9idzvau2lct, *wkumc9idf6lsuzax, *wkumc9idfvh2rwtc, *wkumc9iddcfir2no; double *wkumc9idxwy; double *fpdlcqk9ifys6woa; wkumc9idtgiyxdw1 = Calloc(imk5wjxg, int); wkumc9iddufozmt7 = Calloc(imk5wjxg, int); fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw1, wkumc9iddufozmt7, wy1vqfzu); wkumc9idosiz4fxy = Calloc(*ldk * (*wy1vqfzu * *acpios9q), double); wkumc9idenaqpzk9 = Calloc(*ldk * (*acpios9q * *wy1vqfzu), double); wkumc9idbtwy = Calloc(*wy1vqfzu * *acpios9q , double); wkumc9idbk3ymcih = Calloc( *lqsahu0r , double); wkumc9idwk0 = Calloc(*acpios9q * *wy1vqfzu , double); for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { if (wbkq9zyi[yq6lorbx-1] == 0.0) { pn9eowxc = 0; } else { /// vvv pn9eowxc = 1; if (have_setup_sg == 0) { have_setup_sg = 1; // Need only be done once n5aioudkzosq7hub(xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, gkdx5jal, acpios9q); for (ayfnwr1v = 3; ayfnwr1v <= (*acpios9q - 3); ayfnwr1v++) { svdbx3tk_tt2 += xecbg0pf[ayfnwr1v-1]; } } wkumc9idxwy = Calloc(*acpios9q, double); wkumc9idzvau2lct = Calloc(*acpios9q, double); wkumc9idf6lsuzax = Calloc(*acpios9q, double); wkumc9idfvh2rwtc = Calloc(*acpios9q, double); wkumc9iddcfir2no = Calloc(*acpios9q, double); n5aioudkgt9iulbf(sjwyig9t, tlgduey8 + (yq6lorbx-1) * *lqsahu0r, // bhcji9gl rbne6ouj + (yq6lorbx-1) * *lqsahu0r, // po8rwsmy, gkdx5jal, lqsahu0r, acpios9q, wkumc9idxwy, // lqsahu0r === kuzxj1lo wkumc9idzvau2lct, wkumc9idf6lsuzax, wkumc9idfvh2rwtc, wkumc9iddcfir2no); svdbx3tk_tt1 = 0.0; for (ayfnwr1v = 3; ayfnwr1v <= (*acpios9q - 3); ayfnwr1v++) { svdbx3tk_tt1 += 
wkumc9idzvau2lct[ayfnwr1v-1]; } Free(wkumc9idxwy); Free(wkumc9idzvau2lct); Free(wkumc9idf6lsuzax); Free(wkumc9idfvh2rwtc); Free(wkumc9iddcfir2no); svdbx3tk_g2dnwteb = svdbx3tk_tt1 / svdbx3tk_tt2; lamvec[yq6lorbx-1] = svdbx3tk_g2dnwteb * pow(16.0, wbkq9zyi[yq6lorbx-1] * 6.0 - 2.0); } /// vvv if (*wy1vqfzu == 1 || *kvowz9ht == *wy1vqfzu || pn9eowxc == 0) { // ggg wep0oibc = 1; l3zpbstu[0] = junkicrit; l3zpbstu[1] = pn9eowxc; l3zpbstu[2] = itdcb8ilk[0]; jstx4uwe[0] = kpftdm0jmynl7uaq; // Prior to 20100313: was waiez6nt; jstx4uwe[1] = kpftdm0jzustx4fw; // Prior to 20100313: was fp6nozvx; jstx4uwe[2] = kpftdm0jtol; // Prior to 20100313: was Toler_df; jstx4uwe[3] = kpftdm0jeps; // Introduced as an arg, 20100313 if (*wy1vqfzu == 1 || *kvowz9ht == *wy1vqfzu) { // hhh for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r] /= rbne6ouj[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r]; } have_setup_sg = 1; n5aioudkdnaoqj0l(hdnw2fts + yq6lorbx-1, sjwyig9t, tlgduey8 + (yq6lorbx-1) * *lqsahu0r, rbne6ouj + (yq6lorbx-1) * *lqsahu0r, lqsahu0r, acpios9q, gkdx5jal, rpyis2kc + (yq6lorbx-1) * *acpios9q, t8hwvalr + (yq6lorbx-1) * *lqsahu0r, ifys6woa + (yq6lorbx-1) * *lqsahu0r, // *ftnjamu2, wbkq9zyi + yq6lorbx-1, jstx4uwe, xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, tt2, cvnjhg2u, l3zpbstu, &xtov9rbf, &wep0oibc, fbd5yktj); lamvec[yq6lorbx-1] = jstx4uwe[0]; if (*fbd5yktj) { Rprintf("Error in n5aioudkdnaoqj0l; inside Yee_spline\n"); Free_fapc0tnbyee_spline(wkumc9idosiz4fxy, wkumc9idenaqpzk9, wkumc9idbtwy, wkumc9idwk0, wkumc9idbk3ymcih, wkumc9idtgiyxdw1, wkumc9iddufozmt7); return; } if (*yzoe1rsp) { for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { gp1jxzuh = ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2; bpvaqm5z = ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r; ui8ysltq[gp1jxzuh] = ifys6woa[bpvaqm5z] / rbne6ouj[bpvaqm5z]; } } } else { // hhh and uuu have_setup_sg = 1; n5aioudkdnaoqj0l(hdnw2fts + yq6lorbx-1, sjwyig9t, wkumc9idbk3ymcih, rbne6ouj + (yq6lorbx-1) * *lqsahu0r, lqsahu0r, acpios9q, gkdx5jal, rpyis2kc + (yq6lorbx-1) * *acpios9q, t8hwvalr + (yq6lorbx-1) * *lqsahu0r, ifys6woa + (yq6lorbx-1) * *lqsahu0r, // 20130427 wbkq9zyi + yq6lorbx-1, jstx4uwe, xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, tt2, cvnjhg2u, l3zpbstu, &xtov9rbf, &wep0oibc, fbd5yktj); lamvec[yq6lorbx-1] = jstx4uwe[0]; if (*fbd5yktj) { Rprintf("Error in Rgam_dnaoqj0l; inside Yee_spline\n"); Free_fapc0tnbyee_spline(wkumc9idosiz4fxy, wkumc9idenaqpzk9, wkumc9idbtwy, wkumc9idwk0, wkumc9idbk3ymcih, wkumc9idtgiyxdw1, wkumc9iddufozmt7); return; } } // uuu if (*fbd5yktj) { Rprintf("Error in n5aioudkdnaoqj0l: fbd5yktj = %3d.\n", *fbd5yktj); Rprintf("Called within Yee_spline.\n"); Free_fapc0tnbyee_spline(wkumc9idosiz4fxy, wkumc9idenaqpzk9, wkumc9idbtwy, wkumc9idwk0, wkumc9idbk3ymcih, wkumc9idtgiyxdw1, wkumc9iddufozmt7); return; } } // ggg } if (*wy1vqfzu == 1 || *kvowz9ht == *wy1vqfzu) { Free_fapc0tnbyee_spline(wkumc9idosiz4fxy, wkumc9idenaqpzk9, wkumc9idbtwy, wkumc9idwk0, wkumc9idbk3ymcih, wkumc9idtgiyxdw1, wkumc9iddufozmt7); for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { hdnw2fts[yq6lorbx-1] -= 1.0; // Decrement it. 
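/* hdnw2fts[] carries the effective degrees of freedom returned by the
   univariate smoother for each component; the decrement above seems
   intended to drop the one degree of freedom absorbed by the linear
   part, mirroring the explicit "subtract the linear part"
   initialization used in the multivariate branch further below. */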
} return; } for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { arm0lkbg1 = *acpios9q + 1; F77_CALL(vinterv)(gkdx5jal, &arm0lkbg1, sjwyig9t + ayfnwr1v-1, &dqlr5bse, &pqzfxw4i); if (pqzfxw4i == 1) { if (sjwyig9t[ayfnwr1v-1] <= (gkdx5jal[dqlr5bse-1] + qaltf0nz)) { dqlr5bse--; } else { Rprintf("Freeing memory in Yee_spline and returning.\n"); Free_fapc0tnbyee_spline(wkumc9idosiz4fxy, wkumc9idenaqpzk9, wkumc9idbtwy, wkumc9idwk0, wkumc9idbk3ymcih, wkumc9idtgiyxdw1, wkumc9iddufozmt7); return; } } F77_CALL(vbsplvd)(gkdx5jal, &h2dpsbkr, sjwyig9t + ayfnwr1v-1, &dqlr5bse, ms0qypiw, g9fvdrbw, &pqneb2ra); yq6lorbx = dqlr5bse - 4 + 1; fpdlcqk9btwy = wkumc9idbtwy + (yq6lorbx-1) * *wy1vqfzu; for (urohxe6t = 1; urohxe6t <= *wy1vqfzu; urohxe6t++) { *fpdlcqk9btwy += tlgduey8[ayfnwr1v-1 + (urohxe6t-1) * *lqsahu0r] * g9fvdrbw[0]; fpdlcqk9btwy++; } fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &yu6izdrc, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &pqneb2ra, &pqneb2ra, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &pqneb2ra, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &pqneb2ra, &qhzja4ny, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &qhzja4ny, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &pqneb2ra, &bvsquk3z, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &bvsquk3z, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &pqneb2ra, &h2dpsbkr, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); yq6lorbx = dqlr5bse - 4 + 2; fpdlcqk9btwy = wkumc9idbtwy + (yq6lorbx-1) * *wy1vqfzu; for (urohxe6t = 1; urohxe6t <= *wy1vqfzu; urohxe6t++) { *fpdlcqk9btwy += tlgduey8[ayfnwr1v-1 + (urohxe6t-1) * *lqsahu0r] * g9fvdrbw[1]; fpdlcqk9btwy++; } fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &yu6izdrc, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &qhzja4ny, &qhzja4ny, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &pqneb2ra, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &qhzja4ny, &bvsquk3z, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &qhzja4ny, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &qhzja4ny, &h2dpsbkr, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); yq6lorbx = dqlr5bse - 4 + 3; fpdlcqk9btwy = wkumc9idbtwy + (yq6lorbx-1) * *wy1vqfzu; for (urohxe6t = 1; urohxe6t <= *wy1vqfzu; urohxe6t++) { *fpdlcqk9btwy += tlgduey8[ayfnwr1v-1 + (urohxe6t-1) * *lqsahu0r] * g9fvdrbw[2]; fpdlcqk9btwy++; } fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &yu6izdrc, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &bvsquk3z, &bvsquk3z, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &pqneb2ra, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &bvsquk3z, &h2dpsbkr, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); yq6lorbx = dqlr5bse - 4 + 4; fpdlcqk9btwy = wkumc9idbtwy + (yq6lorbx-1) * *wy1vqfzu; for (urohxe6t = 1; urohxe6t <= *wy1vqfzu; urohxe6t++) { *fpdlcqk9btwy += tlgduey8[ayfnwr1v-1 + (urohxe6t-1) * *lqsahu0r] * g9fvdrbw[3]; fpdlcqk9btwy++; } fapc0tnbybnagt8k(&ayfnwr1v, &yq6lorbx, &yu6izdrc, g9fvdrbw, wkumc9idosiz4fxy, rbne6ouj, &h2dpsbkr, &h2dpsbkr, wy1vqfzu, ldk, kvowz9ht, lqsahu0r, wkumc9idtgiyxdw1, wkumc9iddufozmt7); } fapc0tnbtfeswo7c(wkumc9idosiz4fxy, acpios9q, wy1vqfzu, ldk, lamvec, xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs); arm0lkbg1 = *acpios9q * *wy1vqfzu; arm0lkbg2 = *ldk - 1; 
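/* The packed band system assembled above is now factorized in place by
   vdpbfa7 -- the rational Cholesky (t(U) D U) variant of LINPACK's
   dpbfa kept in vlinpack2.f -- and then back-solved by vdpbsl7 for the
   B-spline coefficients; arm0lkbg1 = *acpios9q * *wy1vqfzu is the
   order of the system and arm0lkbg2 = *ldk - 1 its bandwidth. */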
F77_CALL(vdpbfa7)(wkumc9idosiz4fxy, ldk, &arm0lkbg1, &arm0lkbg2, aalgpft4y, wkumc9idwk0); if (*aalgpft4y) { Rprintf("Error in subroutine vdpbfa7; inside Yee_spline.\n"); Rprintf("*aalgpft4y = %3d\n", *aalgpft4y); Free_fapc0tnbyee_spline(wkumc9idosiz4fxy, wkumc9idenaqpzk9, wkumc9idbtwy, wkumc9idwk0, wkumc9idbk3ymcih, wkumc9idtgiyxdw1, wkumc9iddufozmt7); return; } arm0lkbg1 = *acpios9q * *wy1vqfzu; arm0lkbg2 = *ldk - 1; F77_CALL(vdpbsl7)(wkumc9idosiz4fxy, ldk, &arm0lkbg1, &arm0lkbg2, wkumc9idbtwy, wkumc9idwk0); fpdlcqk9btwy = wkumc9idbtwy; for (ayfnwr1v = 1; ayfnwr1v <= *acpios9q; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { rpyis2kc[ ayfnwr1v-1 + (yq6lorbx-1) * *acpios9q] = *fpdlcqk9btwy++; } } fapc0tnbcn8kzpab(gkdx5jal, sjwyig9t, rpyis2kc, lqsahu0r, acpios9q, wy1vqfzu, t8hwvalr); arm0lkbg1 = *acpios9q * *wy1vqfzu; arm0lkbg2 = *ldk - 1; fapc0tnbvicb2(wkumc9idenaqpzk9, wkumc9idosiz4fxy, wkumc9idwk0, &arm0lkbg2, &arm0lkbg1); fapc0tnbicpd0omv(wkumc9idenaqpzk9, sjwyig9t, gkdx5jal, ui8ysltq, ldk, lqsahu0r, acpios9q, wy1vqfzu, yzoe1rsp, rbne6ouj, ifys6woa, kvowz9ht, ftnjamu2); for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { hdnw2fts[yq6lorbx-1] = -1.0; // Initialize; subtract the linear part } fpdlcqk9ifys6woa = ifys6woa; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { hdnw2fts[yq6lorbx-1] += *fpdlcqk9ifys6woa++; } } Free_fapc0tnbyee_spline(wkumc9idosiz4fxy, wkumc9idenaqpzk9, wkumc9idbtwy, wkumc9idwk0, wkumc9idbk3ymcih, wkumc9idtgiyxdw1, wkumc9iddufozmt7); } void fapc0tnbcn8kzpab(double gkdx5jals[], double sjwyig9t[], double rpyis2kc[], int *lqsahu0r, int *acpios9q, int *wy1vqfzu, double t8hwvalr[]) { int ayfnwr1v, yq6lorbx, yu6izdrc = 0, h2dpsbkr = 4; double *chw8lzty; for (yq6lorbx = 0; yq6lorbx < *wy1vqfzu; yq6lorbx++) { chw8lzty = sjwyig9t; for (ayfnwr1v = 0; ayfnwr1v < *lqsahu0r; ayfnwr1v++) { F77_CALL(wbvalue)(gkdx5jals, rpyis2kc, acpios9q, &h2dpsbkr, chw8lzty++, &yu6izdrc, t8hwvalr++); } rpyis2kc += *acpios9q; } } void Free_fapc0tnbvsuff9(double *wkumc9idwk1a, double *wkumc9idwk1b, double *wkumc9idwk2a, double *wkumc9idwk2b, double *wkumc9ideshvo2ic, double *wkumc9idonxjvw8u, int *wkumc9idtgiyxdw11, int *wkumc9iddufozmt71, int *wkumc9idtgiyxdw12, int *wkumc9iddufozmt72, int *iz2nbfjc) { Free(wkumc9idwk1a); Free(wkumc9idwk1b); Free(wkumc9idwk2a); Free(wkumc9idwk2b); if (! 
*iz2nbfjc) { Free(wkumc9ideshvo2ic); Free(wkumc9idonxjvw8u); } Free(wkumc9idtgiyxdw11); Free(wkumc9iddufozmt71); Free(wkumc9idtgiyxdw12); Free(wkumc9iddufozmt72); } void vsuff9(int *ftnjamu2, int *lqsahu0r, int ezlgm2up[], double sjwyig9t[], double tlgduey8[], double rbne6ouj[], double pygsw6ko[], double pasjmo8g[], double eshvo2ic[], double ueshvo2ic[], double onxjvw8u[], int *dvhw1ulq, int *wy1vqfzu, int *kvowz9ht, int *npjlv3mr, double conmat[], int *kgwmz4ip, int *iz2nbfjc, int *wueshvo2ic, int *npjlv3mreshvo2ic, int *dim2eshvo2ic) { double *qnwamo0e, *qnwamo0e1, *qnwamo0e2; int ayfnwr1v, yq6lorbx, gp1jxzuh, urohxe6t, bpvaqm5z, *ptri; int pqneb2ra = 1; double *wkumc9idwk1a, *wkumc9idwk1b, *wkumc9idwk2a, *wkumc9idwk2b, *wkumc9ideshvo2ic, *wkumc9idonxjvw8u; int *wkumc9idtgiyxdw11, *wkumc9iddufozmt71, *wkumc9idtgiyxdw12, *wkumc9iddufozmt72; int zyojx5hw = *wy1vqfzu * *wy1vqfzu, imk5wjxg = *wy1vqfzu * (*wy1vqfzu + 1) / 2, n2colb = *kgwmz4ip * *kgwmz4ip, n3colb = *kgwmz4ip * (*kgwmz4ip + 1) / 2; double hmayv1xt1 = 1.0, hmayv1xt2; hmayv1xt2 = hmayv1xt1 + 1.0; wkumc9ideshvo2ic = &hmayv1xt2; wkumc9idonxjvw8u = &hmayv1xt2; wkumc9idwk1a = Calloc(zyojx5hw , double); wkumc9idwk1b = Calloc(*wy1vqfzu , double); wkumc9idwk2a = Calloc(n2colb , double); wkumc9idwk2b = Calloc(*kgwmz4ip , double); wkumc9idtgiyxdw11 = Calloc(imk5wjxg , int); wkumc9iddufozmt71 = Calloc(imk5wjxg , int); wkumc9idtgiyxdw12 = Calloc(n3colb , int); wkumc9iddufozmt72 = Calloc(n3colb , int); if (*iz2nbfjc) { if (*npjlv3mr < *kvowz9ht || *kgwmz4ip != *wy1vqfzu) { Rprintf("Error in fapc0tnbvsuff9: "); Rprintf("must have npjlv3mr >= kvowz9ht & kgwmz4ip = M\n"); Free_fapc0tnbvsuff9(wkumc9idwk1a, wkumc9idwk1b, wkumc9idwk2a, wkumc9idwk2b, wkumc9ideshvo2ic, wkumc9idonxjvw8u, wkumc9idtgiyxdw11, wkumc9iddufozmt71, wkumc9idtgiyxdw12, wkumc9iddufozmt72, iz2nbfjc); *dvhw1ulq = 0; return; } } else { if (*npjlv3mreshvo2ic < n3colb || *dim2eshvo2ic < n3colb) { Rprintf("Error in fapc0tnbvsuff9 with nontrivial constraints:\n"); Rprintf("must have npjlv3mreshvo2ic and dim2eshvo2ic both >= n3colb\n"); Free_fapc0tnbvsuff9(wkumc9idwk1a, wkumc9idwk1b, wkumc9idwk2a, wkumc9idwk2b, wkumc9ideshvo2ic, wkumc9idonxjvw8u, wkumc9idtgiyxdw11, wkumc9iddufozmt71, wkumc9idtgiyxdw12, wkumc9iddufozmt72, iz2nbfjc); *dvhw1ulq = 0; return; } wkumc9ideshvo2ic = Calloc(*lqsahu0r * zyojx5hw , double); wkumc9idonxjvw8u = Calloc(*lqsahu0r * *wy1vqfzu , double); } fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw11, wkumc9iddufozmt71, wy1vqfzu); fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw12, wkumc9iddufozmt72, kgwmz4ip); ptri = ezlgm2up; qnwamo0e = sjwyig9t; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { pygsw6ko[(*ptri++) - 1] = *qnwamo0e++; } if (*iz2nbfjc) { qnwamo0e = onxjvw8u; for (yq6lorbx = 0; yq6lorbx < *wy1vqfzu; yq6lorbx++) { for (ayfnwr1v = 0; ayfnwr1v < *lqsahu0r; ayfnwr1v++) { *qnwamo0e++ = 0.0e0; } } } if (*iz2nbfjc) { qnwamo0e = eshvo2ic; for (yq6lorbx = 1; yq6lorbx <= *dim2eshvo2ic; yq6lorbx++) { for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { *qnwamo0e++ = 0.0e0; } } } for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *kvowz9ht; yq6lorbx++) { wkumc9idwk1a[wkumc9idtgiyxdw11[yq6lorbx-1]-1 + (wkumc9iddufozmt71[yq6lorbx-1]-1) * *wy1vqfzu] = wkumc9idwk1a[wkumc9iddufozmt71[yq6lorbx-1]-1 + (wkumc9idtgiyxdw11[yq6lorbx-1]-1) * *wy1vqfzu] = rbne6ouj[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2]; } qnwamo0e1 = (*iz2nbfjc) ? eshvo2ic : wkumc9ideshvo2ic; qnwamo0e2 = (*iz2nbfjc) ? 
onxjvw8u : wkumc9idonxjvw8u; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { for (gp1jxzuh = 1; gp1jxzuh <= *wy1vqfzu; gp1jxzuh++) { qnwamo0e2[ezlgm2up[ayfnwr1v-1]-1 + (yq6lorbx-1) * *lqsahu0r] += wkumc9idwk1a[yq6lorbx -1 + (gp1jxzuh-1) * *wy1vqfzu] * tlgduey8[ayfnwr1v -1 + (gp1jxzuh-1) * *ftnjamu2]; } } for (yq6lorbx = 1; yq6lorbx <= *kvowz9ht; yq6lorbx++) { qnwamo0e1[ezlgm2up[ayfnwr1v-1]-1 + (yq6lorbx-1) * *lqsahu0r] += rbne6ouj[ayfnwr1v -1 + (yq6lorbx-1) * *ftnjamu2]; } } *dvhw1ulq = 1; if (*iz2nbfjc) { for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *kvowz9ht; yq6lorbx++) { wkumc9idwk1a[wkumc9idtgiyxdw11[yq6lorbx-1]-1 + (wkumc9iddufozmt71[yq6lorbx-1]-1) * *wy1vqfzu] = wkumc9idwk1a[wkumc9iddufozmt71[yq6lorbx-1]-1 + (wkumc9idtgiyxdw11[yq6lorbx-1]-1) * *wy1vqfzu] = eshvo2ic[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r]; } for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { wkumc9idwk1b[yq6lorbx-1] = onxjvw8u[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r]; } fvlmz9iyjdbomp0g(wkumc9idwk1a, wkumc9idwk1b, wy1vqfzu, dvhw1ulq, &pqneb2ra); if (*dvhw1ulq != 1) { Rprintf("*dvhw1ulq != 1 after fvlmz9iyjdbomp0g in vsuff9.\n"); Free_fapc0tnbvsuff9(wkumc9idwk1a, wkumc9idwk1b, wkumc9idwk2a, wkumc9idwk2b, wkumc9ideshvo2ic, wkumc9idonxjvw8u, wkumc9idtgiyxdw11, wkumc9iddufozmt71, wkumc9idtgiyxdw12, wkumc9iddufozmt72, iz2nbfjc); return; } if (*wueshvo2ic) { for (yq6lorbx = 1; yq6lorbx <= *npjlv3mreshvo2ic; yq6lorbx++) { ueshvo2ic[yq6lorbx-1 + (ayfnwr1v-1) * *npjlv3mreshvo2ic] = wkumc9idwk1a[wkumc9idtgiyxdw11[yq6lorbx-1]-1 + (wkumc9iddufozmt71[yq6lorbx-1]-1) * *wy1vqfzu]; } } for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { pasjmo8g[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r] = wkumc9idwk1b[yq6lorbx-1]; } } } else { qnwamo0e = wkumc9idwk1a; for (yq6lorbx = 1; yq6lorbx <= zyojx5hw; yq6lorbx++) { *qnwamo0e++ = 0.0e0; } for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *kvowz9ht; yq6lorbx++) { wkumc9idwk1a[wkumc9idtgiyxdw11[yq6lorbx-1]-1 + (wkumc9iddufozmt71[yq6lorbx-1]-1) * *wy1vqfzu] = wkumc9idwk1a[wkumc9iddufozmt71[yq6lorbx-1]-1 + (wkumc9idtgiyxdw11[yq6lorbx-1]-1) * *wy1vqfzu] = wkumc9ideshvo2ic[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r]; } for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { wkumc9idwk1b[yq6lorbx-1] = wkumc9idonxjvw8u[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r]; } for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) { for (gp1jxzuh = yq6lorbx; gp1jxzuh <= *kgwmz4ip; gp1jxzuh++) { wkumc9idwk2a[yq6lorbx-1 + (gp1jxzuh-1) * *kgwmz4ip] = 0.0e0; for (urohxe6t = 1; urohxe6t <= *wy1vqfzu; urohxe6t++) { for (bpvaqm5z = 1; bpvaqm5z <= *wy1vqfzu; bpvaqm5z++) { wkumc9idwk2a[yq6lorbx-1 + (gp1jxzuh-1) * *kgwmz4ip] += conmat[urohxe6t-1 + (yq6lorbx-1) * *wy1vqfzu] * wkumc9idwk1a[urohxe6t-1 + (bpvaqm5z-1) * *wy1vqfzu] * conmat[bpvaqm5z-1 + (gp1jxzuh-1) * *wy1vqfzu]; } } } } for (yq6lorbx = 1; yq6lorbx <= *dim2eshvo2ic; yq6lorbx++) { eshvo2ic[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r] = wkumc9idwk2a[wkumc9idtgiyxdw12[yq6lorbx-1]-1 + (wkumc9iddufozmt72[yq6lorbx-1]-1) * *kgwmz4ip]; } for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) { wkumc9idwk2b[yq6lorbx-1] = 0.0e0; for (urohxe6t = 1; urohxe6t <= *wy1vqfzu; urohxe6t++) { wkumc9idwk2b[yq6lorbx-1] += conmat[urohxe6t-1 + (yq6lorbx-1) * *wy1vqfzu] * wkumc9idwk1b[urohxe6t-1]; } } for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) { onxjvw8u[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r] = wkumc9idwk2b[yq6lorbx-1]; } fvlmz9iyjdbomp0g(wkumc9idwk2a, wkumc9idwk2b, kgwmz4ip, dvhw1ulq, &pqneb2ra); if 
(*dvhw1ulq != 1) { Rprintf("*dvhw1ulq!=1 in vchol-vsuff9. Something has gone wrong\n"); Free_fapc0tnbvsuff9(wkumc9idwk1a, wkumc9idwk1b, wkumc9idwk2a, wkumc9idwk2b, wkumc9ideshvo2ic, wkumc9idonxjvw8u, wkumc9idtgiyxdw11, wkumc9iddufozmt71, wkumc9idtgiyxdw12, wkumc9iddufozmt72, iz2nbfjc); return; } if (*wueshvo2ic) { for (yq6lorbx = 1; yq6lorbx <= *npjlv3mreshvo2ic; yq6lorbx++) { ueshvo2ic[yq6lorbx-1 + (ayfnwr1v-1) * *npjlv3mreshvo2ic] = wkumc9idwk2a[wkumc9idtgiyxdw12[yq6lorbx-1]-1 + (wkumc9iddufozmt72[yq6lorbx-1]-1) * *kgwmz4ip]; } } for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) { pasjmo8g[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r] = wkumc9idwk2b[yq6lorbx-1]; } } } Free_fapc0tnbvsuff9(wkumc9idwk1a, wkumc9idwk1b, wkumc9idwk2a, wkumc9idwk2b, wkumc9ideshvo2ic, wkumc9idonxjvw8u, wkumc9idtgiyxdw11, wkumc9iddufozmt71, wkumc9idtgiyxdw12, wkumc9iddufozmt72, iz2nbfjc); } void fapc0tnbicpd0omv(double enaqpzk9[], double sjwyig9t[], double gkdx5jals[], double grmuyvx9[], int *ldk, int *lqsahu0r, int *acpios9q, int *wy1vqfzu, int *jzwsy6tp, double rbne6ouj[], double ifys6woa[], int *kvowz9ht, int *ftnjamu2) { int ayfnwr1v, yq6lorbx, gp1jxzuh, urohxe6t, bpvaqm5z, dqlr5bse, pqzfxw4i; double ms0qypiw[16], g9fvdrbw[4], qaltf0nz = 0.10e-9; int arm0lkbg1, arm0lkbg4, *ptri1, *ptri2; double tmp_var4, tmp_var5, *qnwamo0e; double *wkumc9idwrk, *wkumc9idbmb; int *wkumc9idtgiyxdw1_, *wkumc9iddufozmt7_, imk5wjxg = *wy1vqfzu * (*wy1vqfzu + 1) / 2, zyojx5hw = *wy1vqfzu * *wy1vqfzu; wkumc9idtgiyxdw1_ = Calloc(imk5wjxg, int); wkumc9iddufozmt7_ = Calloc(imk5wjxg, int); fvlmz9iyC_qpsedg8x(wkumc9idtgiyxdw1_, wkumc9iddufozmt7_, wy1vqfzu); ptri1 = wkumc9idtgiyxdw1_; ptri2 = wkumc9iddufozmt7_; for (ayfnwr1v = 0; ayfnwr1v < imk5wjxg; ayfnwr1v++) { (*ptri1++)--; (*ptri2++)--; } wkumc9idwrk = Calloc(zyojx5hw, double); wkumc9idbmb = Calloc(zyojx5hw, double); if (*jzwsy6tp) { qnwamo0e = grmuyvx9; for (gp1jxzuh = 1; gp1jxzuh <= *wy1vqfzu; gp1jxzuh++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *qnwamo0e++ = 0.0e0; } } } for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { qnwamo0e = wkumc9idbmb; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { for (gp1jxzuh = 1; gp1jxzuh <= *wy1vqfzu; gp1jxzuh++) { *qnwamo0e++ = 0.0e0; } } arm0lkbg1 = *acpios9q + 1; F77_CALL(vinterv)(gkdx5jals, &arm0lkbg1, sjwyig9t + ayfnwr1v-1, &dqlr5bse, &pqzfxw4i); if (pqzfxw4i == 1) { if (sjwyig9t[ayfnwr1v-1] <= (gkdx5jals[dqlr5bse-1] + qaltf0nz)) { dqlr5bse--; } else { Rprintf("pqzfxw4i!=1 after vinterv called in fapc0tnbicpd0omv\n"); Free(wkumc9idtgiyxdw1_); Free(wkumc9iddufozmt7_); Free(wkumc9idwrk); Free(wkumc9idbmb); /* bmb must also be released on this early return */ return; } } arm0lkbg1 = 1; arm0lkbg4 = 4; F77_CALL(vbsplvd)(gkdx5jals, &arm0lkbg4, sjwyig9t + ayfnwr1v-1, &dqlr5bse, ms0qypiw, g9fvdrbw, &arm0lkbg1); yq6lorbx = dqlr5bse - 4 + 1; for (urohxe6t = yq6lorbx; urohxe6t <= (yq6lorbx + 3); urohxe6t++) { fapc0tnbvsel(&urohxe6t, &urohxe6t, wy1vqfzu, ldk, enaqpzk9, wkumc9idwrk); tmp_var4 = pow(g9fvdrbw[urohxe6t-yq6lorbx], (double) 2.0); fapc0tnbo0xlszqr(wy1vqfzu, &tmp_var4, wkumc9idwrk, wkumc9idbmb); } for (urohxe6t = yq6lorbx; urohxe6t <= (yq6lorbx+3); urohxe6t++) { for (bpvaqm5z = urohxe6t+1; bpvaqm5z <= (yq6lorbx+3); bpvaqm5z++) { fapc0tnbvsel(&urohxe6t, &bpvaqm5z, wy1vqfzu, ldk, enaqpzk9, wkumc9idwrk); tmp_var5 = 2.0 * g9fvdrbw[urohxe6t-yq6lorbx] * g9fvdrbw[bpvaqm5z-yq6lorbx]; fapc0tnbo0xlszqr(wy1vqfzu, &tmp_var5, wkumc9idwrk, wkumc9idbmb); } } if (*jzwsy6tp) { for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { grmuyvx9[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] = wkumc9idbmb[yq6lorbx-1 +
(yq6lorbx-1) * *wy1vqfzu]; } } fapc0tnbovjnsmt2(wkumc9idbmb, rbne6ouj, ifys6woa, wy1vqfzu, lqsahu0r, kvowz9ht, &ayfnwr1v, wkumc9idtgiyxdw1_, wkumc9iddufozmt7_); } Free(wkumc9idtgiyxdw1_); Free(wkumc9iddufozmt7_); Free(wkumc9idwrk); Free(wkumc9idbmb); } void fapc0tnbo0xlszqr(int *wy1vqfzu, double *g9fvdrbw, double *quc6khaf, double *bmb) { int yq6lorbx, gp1jxzuh; double *qnwamo0e; qnwamo0e = quc6khaf; for (yq6lorbx = 0; yq6lorbx < *wy1vqfzu; yq6lorbx++) { for (gp1jxzuh = 0; gp1jxzuh < *wy1vqfzu; gp1jxzuh++) { *quc6khaf *= *g9fvdrbw; quc6khaf++; } } quc6khaf = qnwamo0e; for (yq6lorbx = 0; yq6lorbx < *wy1vqfzu; yq6lorbx++) { for (gp1jxzuh = 0; gp1jxzuh < *wy1vqfzu; gp1jxzuh++) { *bmb += *quc6khaf++; bmb++; } } } void fapc0tnbvsel(int *nurohxe6t, int *nbpvaqm5z, int *wy1vqfzu, int *ldk, double minv[], double quc6khaf[]) { int ayfnwr1v, yq6lorbx, biuvowq2, nbj8tdsk; double *qnwamo0e; qnwamo0e = quc6khaf; for (ayfnwr1v = 1; ayfnwr1v <= *wy1vqfzu; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *qnwamo0e++ = 0.0; } } if (*nurohxe6t != *nbpvaqm5z) { for (ayfnwr1v = 1; ayfnwr1v <= *wy1vqfzu; ayfnwr1v++) { biuvowq2 = (*nurohxe6t - 1) * *wy1vqfzu + ayfnwr1v; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { nbj8tdsk = (*nbpvaqm5z - 1) * *wy1vqfzu + yq6lorbx; quc6khaf[ayfnwr1v-1 + (yq6lorbx-1) * *wy1vqfzu] = minv[*ldk - (nbj8tdsk-biuvowq2)-1 + (nbj8tdsk-1) * *ldk]; } } } else { for (ayfnwr1v = 1; ayfnwr1v <= *wy1vqfzu; ayfnwr1v++) { biuvowq2 = (*nurohxe6t - 1) * *wy1vqfzu + ayfnwr1v; for (yq6lorbx = ayfnwr1v; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { nbj8tdsk = (*nbpvaqm5z - 1) * *wy1vqfzu + yq6lorbx; quc6khaf[ayfnwr1v-1 + (yq6lorbx-1) * *wy1vqfzu] = minv[*ldk - (nbj8tdsk-biuvowq2)-1 + (nbj8tdsk-1) * *ldk]; } } for (ayfnwr1v = 1; ayfnwr1v <= *wy1vqfzu; ayfnwr1v++) { for (yq6lorbx = ayfnwr1v+1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { quc6khaf[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] = quc6khaf[ayfnwr1v-1 + (yq6lorbx-1) * *wy1vqfzu]; } } } } void fapc0tnbovjnsmt2(double bmb[], double rbne6ouj[], double ifys6woa[], int *wy1vqfzu, int *lqsahu0r, int *kvowz9ht, int *iii, int tgiyxdw1_[], int dufozmt7_[]) { int yq6lorbx, gp1jxzuh, urohxe6t, bpvaqm5z; double q6zdcwxk; int zyojx5hw = *wy1vqfzu * *wy1vqfzu; double *wkumc9idwrk; wkumc9idwrk = Calloc(zyojx5hw, double); for (bpvaqm5z = 1; bpvaqm5z <= *wy1vqfzu; bpvaqm5z++) { for (urohxe6t = 1; urohxe6t <= *kvowz9ht; urohxe6t++) { yq6lorbx = tgiyxdw1_[urohxe6t-1] + (dufozmt7_[urohxe6t-1] ) * *wy1vqfzu; gp1jxzuh = dufozmt7_[urohxe6t-1] + (tgiyxdw1_[urohxe6t-1] ) * *wy1vqfzu; wkumc9idwrk[yq6lorbx] = wkumc9idwrk[gp1jxzuh] = rbne6ouj[*iii-1 + (urohxe6t-1) * *lqsahu0r]; } q6zdcwxk = 0.0e0; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { q6zdcwxk += bmb[bpvaqm5z-1 + (yq6lorbx-1) * *wy1vqfzu] * wkumc9idwrk[yq6lorbx-1 + (bpvaqm5z-1) * *wy1vqfzu]; } ifys6woa[*iii-1 + (bpvaqm5z-1) * *lqsahu0r] = q6zdcwxk; } Free(wkumc9idwrk); } void fapc0tnbvicb2(double enaqpzk9[], double wpuarq2m[], double Dvector[], int *wy1vqfzu, int *f8yswcat) { int ayfnwr1v, gp1jxzuh, urohxe6t, uplim, sedf7mxb, hofjnx2e, kij0gwer; int Mplus1 = *wy1vqfzu + 1; int Mp1Mp1 = Mplus1 * Mplus1; double *wkumc9iduu; wkumc9iduu = Calloc(Mp1Mp1, double); enaqpzk9[*wy1vqfzu + (*f8yswcat-1) * Mplus1] = 1.0e0 / Dvector[*f8yswcat-1]; hofjnx2e = *wy1vqfzu + 1; sedf7mxb = *f8yswcat + 1 - hofjnx2e; for (kij0gwer = sedf7mxb; kij0gwer <= *f8yswcat; kij0gwer++) { for (ayfnwr1v = 1; ayfnwr1v <= hofjnx2e; ayfnwr1v++) { wkumc9iduu[ayfnwr1v-1 + (kij0gwer-sedf7mxb) * Mplus1] = wpuarq2m[ayfnwr1v-1 + 
(kij0gwer-1 ) * Mplus1]; } } for (ayfnwr1v = *f8yswcat-1; ayfnwr1v >= 1; ayfnwr1v--) { uplim = *wy1vqfzu < (*f8yswcat - ayfnwr1v) ? *wy1vqfzu : *f8yswcat - ayfnwr1v; for (urohxe6t = 1; urohxe6t <= uplim; urohxe6t++) { enaqpzk9[-urohxe6t+*wy1vqfzu + (ayfnwr1v+urohxe6t-1) * Mplus1] = 0.0e0; for (gp1jxzuh = 1; gp1jxzuh <= urohxe6t; gp1jxzuh++) { enaqpzk9[-urohxe6t + *wy1vqfzu + (ayfnwr1v+urohxe6t-1 ) * Mplus1] -= wkumc9iduu[-gp1jxzuh + *wy1vqfzu + (ayfnwr1v+gp1jxzuh - sedf7mxb) * Mplus1] * enaqpzk9[gp1jxzuh-urohxe6t + *wy1vqfzu + (ayfnwr1v+urohxe6t-1 ) * Mplus1]; } for ( ; gp1jxzuh <= uplim; gp1jxzuh++) { enaqpzk9[-urohxe6t + *wy1vqfzu + (ayfnwr1v+urohxe6t-1 ) * Mplus1] -= wkumc9iduu[-gp1jxzuh + *wy1vqfzu + (ayfnwr1v+gp1jxzuh - sedf7mxb) * Mplus1] * enaqpzk9[urohxe6t-gp1jxzuh + *wy1vqfzu + (ayfnwr1v+gp1jxzuh-1 ) * Mplus1]; } } enaqpzk9[*wy1vqfzu + (ayfnwr1v-1) * Mplus1] = 1.0e0 / Dvector[ayfnwr1v-1]; for (urohxe6t = 1; urohxe6t <= uplim; urohxe6t++) { enaqpzk9[ *wy1vqfzu + (ayfnwr1v - 1 ) * Mplus1] -= wkumc9iduu[-urohxe6t + *wy1vqfzu + (ayfnwr1v+urohxe6t - sedf7mxb) * Mplus1] * enaqpzk9[-urohxe6t + *wy1vqfzu + (ayfnwr1v+urohxe6t - 1 ) * Mplus1]; } if (ayfnwr1v == sedf7mxb) { if (--sedf7mxb < 1) { sedf7mxb = 1; } else { for (kij0gwer = hofjnx2e - 1; kij0gwer >= 1; kij0gwer--) { for (gp1jxzuh = 1; gp1jxzuh <= hofjnx2e; gp1jxzuh++) { wkumc9iduu[gp1jxzuh-1 + kij0gwer * Mplus1] = wkumc9iduu[gp1jxzuh-1 + (kij0gwer-1) * Mplus1]; } } for (gp1jxzuh = 1; gp1jxzuh <= hofjnx2e; gp1jxzuh++) { wkumc9iduu[gp1jxzuh-1] = wpuarq2m[gp1jxzuh-1 + (sedf7mxb-1) * Mplus1]; } } } } Free(wkumc9iduu); } void Free_fapc0tnbewg7qruh(double *wkumc9idWrk1, int *wkumc9idges1xpkr, double *wkumc9idbeta, double *wkumc9idfasrkub3, double *wkumc9idsout, double *wkumc9idr0oydcxb, double *wkumc9idub4xioar, double *wkumc9ideffect, double *wkumc9idueshvo2ic, double *wkumc9ids0, double *wkumc9idpygsw6ko, double *wkumc9idpasjmo8g, double *wkumc9ideshvo2ic, double *wkumc9idonxjvw8u, double *wkumc9idwk4) { Free(wkumc9idWrk1); Free(wkumc9idges1xpkr); Free(wkumc9idbeta); Free(wkumc9idfasrkub3); Free(wkumc9idsout); Free(wkumc9idr0oydcxb); Free(wkumc9idub4xioar); Free(wkumc9ideffect); Free(wkumc9idueshvo2ic); Free(wkumc9ids0); Free(wkumc9idpygsw6ko); Free(wkumc9idpasjmo8g); Free(wkumc9ideshvo2ic); Free(wkumc9idonxjvw8u); Free(wkumc9idwk4); } void fapc0tnbewg7qruh(double ci1oyxas[], double tlgduey8[], double rbne6ouj[], int *ftnjamu2, int *wy1vqfzu, int ezlgm2up[], int *lqsahu0r, double wbkq9zyi[], double lamvec[], double hdnw2fts[], double kispwgx3[], double ui8ysltq[], int *kvowz9ht, int *fbd5yktj, int *ldk, int *aalgpft4y, int *yzoe1rsp, double rpyis2kc[], double gkdx5jals[], double ifys6woa[], double conmat[], double xecbg0pf[], double z4grbpiq[], double d7glzhbj[], double v2eydbxs[], double *tt2, int *cvnjhg2u, int *acpios9q, int *iz2nbfjc, int *kgwmz4ip, int *npjlv3mr, int itdcb8ilk[], double tdcb8ilk[]) { int ayfnwr1v, yq6lorbx, gp1jxzuh, qemj9asg, dvhw1ulq, infoqr_svdbx3tk, rutyk8mg = *lqsahu0r * *kgwmz4ip; int pqneb2ra = 1, ybnsqgo9 = 101; int xjc4ywlh = 2 * *kgwmz4ip, kgwmz4ip2 = 2 * *kgwmz4ip; int npjlv3mreshvo2ic = (*iz2nbfjc == 1) ? *npjlv3mr : *kgwmz4ip * (*kgwmz4ip + 1) / 2, dim2eshvo2ic = (*iz2nbfjc == 1) ? 
*kvowz9ht : *kgwmz4ip * (*kgwmz4ip + 1) / 2; double xmin, xrange, *fpdlcqk9ui8ysltq, *fpdlcqk9hdnw2fts, *fpdlcqk9ub4xioar, *fpdlcqk9ifys6woa, *fpdlcqk9pygsw6ko, dtad5vhsu, do3jyipdf, dpq0hfucn, pvofyg8z = 1.0e-7; int *wkumc9idges1xpkr, maxrutyk8mgxjc4ywlh; double *wkumc9idWrk1, *wkumc9idwk4; double *wkumc9idbeta, *wkumc9idfasrkub3, *wkumc9idsout, *wkumc9idr0oydcxb, *wkumc9idub4xioar, *wkumc9ideffect, *wkumc9idueshvo2ic, *wkumc9ids0; double *wkumc9idpygsw6ko, *wkumc9idpasjmo8g, *wkumc9ideshvo2ic, *wkumc9idonxjvw8u; maxrutyk8mgxjc4ywlh = (rutyk8mg > xjc4ywlh) ? rutyk8mg : xjc4ywlh; wkumc9idWrk1 = Calloc(maxrutyk8mgxjc4ywlh , double); wkumc9idwk4 = Calloc(rutyk8mg * xjc4ywlh , double); wkumc9idges1xpkr = Calloc(kgwmz4ip2 , int); wkumc9idbeta = Calloc(kgwmz4ip2 , double); wkumc9idfasrkub3 = Calloc(kgwmz4ip2 , double); wkumc9idsout = Calloc(*lqsahu0r * *kgwmz4ip , double); wkumc9idr0oydcxb = Calloc(*kgwmz4ip * *lqsahu0r , double); wkumc9idub4xioar = Calloc(*kgwmz4ip * *lqsahu0r , double); wkumc9ideffect = Calloc(*lqsahu0r * *kgwmz4ip , double); wkumc9idueshvo2ic = Calloc(npjlv3mreshvo2ic * *lqsahu0r , double); wkumc9ids0 = Calloc(kgwmz4ip2 * kgwmz4ip2 * 2 , double); wkumc9idpygsw6ko = Calloc(*lqsahu0r , double); wkumc9idpasjmo8g = Calloc(*lqsahu0r * *kgwmz4ip , double); wkumc9idonxjvw8u = Calloc(*lqsahu0r * *kgwmz4ip , double); wkumc9ideshvo2ic = Calloc(*lqsahu0r * dim2eshvo2ic , double); vsuff9(ftnjamu2, lqsahu0r, ezlgm2up, ci1oyxas, tlgduey8, rbne6ouj, wkumc9idpygsw6ko, wkumc9idpasjmo8g, wkumc9ideshvo2ic, wkumc9idueshvo2ic, wkumc9idonxjvw8u, &dvhw1ulq, wy1vqfzu, kvowz9ht, npjlv3mr, conmat, kgwmz4ip, iz2nbfjc, &pqneb2ra, &npjlv3mreshvo2ic, &dim2eshvo2ic); if (dvhw1ulq != 1) { Rprintf("Error in fapc0tnbewg7qruh after calling vsuff9.\n"); Free_fapc0tnbewg7qruh(wkumc9idWrk1, wkumc9idges1xpkr, wkumc9idbeta, wkumc9idfasrkub3, wkumc9idsout, wkumc9idr0oydcxb, wkumc9idub4xioar, wkumc9ideffect, wkumc9idueshvo2ic, wkumc9ids0, wkumc9idpygsw6ko, wkumc9idpasjmo8g, wkumc9ideshvo2ic, wkumc9idonxjvw8u, wkumc9idwk4); return; } xmin = wkumc9idpygsw6ko[0]; xrange = wkumc9idpygsw6ko[*lqsahu0r-1] - wkumc9idpygsw6ko[0]; for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { wkumc9idpygsw6ko[ayfnwr1v-1] = (wkumc9idpygsw6ko[ayfnwr1v-1] - xmin) / xrange; } *ldk = 4 * *kgwmz4ip; *ldk = 3 * *kgwmz4ip + 1; *fbd5yktj = 0; Yee_spline(wkumc9idpygsw6ko, wkumc9idonxjvw8u, wkumc9ideshvo2ic, gkdx5jals, lqsahu0r, acpios9q, ldk, kgwmz4ip, &dim2eshvo2ic, wbkq9zyi, lamvec, aalgpft4y, wkumc9idsout, rpyis2kc, ui8ysltq, ifys6woa, hdnw2fts, yzoe1rsp, fbd5yktj, ftnjamu2, xecbg0pf, z4grbpiq, d7glzhbj, v2eydbxs, tt2, cvnjhg2u, itdcb8ilk, tdcb8ilk); for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) { } if (1) { // Do not execute this code block fpdlcqk9hdnw2fts = hdnw2fts; fpdlcqk9ifys6woa = ifys6woa; for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) { *fpdlcqk9hdnw2fts = 0.0e0; *fpdlcqk9hdnw2fts = -1.0e0; for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { *fpdlcqk9hdnw2fts += *fpdlcqk9ifys6woa++; } fpdlcqk9hdnw2fts++; } } if (*kgwmz4ip >= 1) { fapc0tnbx6kanjdh(wkumc9idpygsw6ko, wkumc9idwk4, lqsahu0r, kgwmz4ip); rutyk8mg = *lqsahu0r * *kgwmz4ip; fvlmz9iyC_mux17(wkumc9idueshvo2ic, wkumc9idwk4, kgwmz4ip, &xjc4ywlh, lqsahu0r, &npjlv3mreshvo2ic, &rutyk8mg); for (gp1jxzuh = 1; gp1jxzuh <= xjc4ywlh; gp1jxzuh++) { wkumc9idges1xpkr[gp1jxzuh-1] = gp1jxzuh; } F77_CALL(vqrdca)(wkumc9idwk4, &rutyk8mg, &rutyk8mg, &xjc4ywlh, wkumc9idfasrkub3, wkumc9idges1xpkr, wkumc9idWrk1, &qemj9asg, &pvofyg8z); fvlmz9iyC_mux22(wkumc9idueshvo2ic, 
wkumc9idsout, wkumc9idr0oydcxb, &npjlv3mreshvo2ic, lqsahu0r, kgwmz4ip); F77_CALL(vdqrsl)(wkumc9idwk4, &rutyk8mg, &rutyk8mg, &qemj9asg, wkumc9idfasrkub3, wkumc9idr0oydcxb, wkumc9idWrk1, wkumc9ideffect, wkumc9idbeta, wkumc9idWrk1, wkumc9idub4xioar, &ybnsqgo9, &infoqr_svdbx3tk); fvlmz9iyC_vbks(wkumc9idueshvo2ic, wkumc9idub4xioar, kgwmz4ip, lqsahu0r, &npjlv3mreshvo2ic); if (*yzoe1rsp) { fvlmz9iyC_lkhnw9yq(wkumc9idwk4, wkumc9ids0, &rutyk8mg, &xjc4ywlh, &dvhw1ulq); if (dvhw1ulq != 1) { Rprintf("Error in fapc0tnbewg7qruh calling fvlmz9iyC_lkhnw9yq.\n"); Free_fapc0tnbewg7qruh(wkumc9idWrk1, wkumc9idges1xpkr, wkumc9idbeta, wkumc9idfasrkub3, wkumc9idsout, wkumc9idr0oydcxb, wkumc9idub4xioar, wkumc9ideffect, wkumc9idueshvo2ic, wkumc9ids0, wkumc9idpygsw6ko, wkumc9idpasjmo8g, wkumc9ideshvo2ic, wkumc9idonxjvw8u, wkumc9idwk4); return; } for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) { dtad5vhsu = wkumc9ids0[yq6lorbx-1 + (yq6lorbx-1 ) * kgwmz4ip2]; do3jyipdf = wkumc9ids0[yq6lorbx-1 + (yq6lorbx-1 + *kgwmz4ip) * kgwmz4ip2]; dpq0hfucn = wkumc9ids0[yq6lorbx-1 + *kgwmz4ip + (yq6lorbx-1 + *kgwmz4ip) * kgwmz4ip2]; fpdlcqk9ui8ysltq = ui8ysltq + (yq6lorbx-1) * *ftnjamu2; fpdlcqk9pygsw6ko = wkumc9idpygsw6ko; for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { *fpdlcqk9ui8ysltq -= dtad5vhsu + *fpdlcqk9pygsw6ko * (2.0 * do3jyipdf + *fpdlcqk9pygsw6ko * dpq0hfucn); fpdlcqk9ui8ysltq++; fpdlcqk9pygsw6ko++; } } } } else { fapc0tnbdsrt0gem(lqsahu0r, wkumc9idpygsw6ko, wkumc9ideshvo2ic, wkumc9idsout, wkumc9idub4xioar, ui8ysltq, yzoe1rsp); } fpdlcqk9ub4xioar = wkumc9idub4xioar; for (ayfnwr1v = 1; ayfnwr1v <= *lqsahu0r; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) { wkumc9idsout[ayfnwr1v-1 + (yq6lorbx-1) * *lqsahu0r] -= *fpdlcqk9ub4xioar++; } } for (yq6lorbx = 1; yq6lorbx <= *kgwmz4ip; yq6lorbx++) { fapc0tnbshm8ynte(ftnjamu2, /* lqsahu0r, */ ezlgm2up, wkumc9idsout + (yq6lorbx-1) * *lqsahu0r, kispwgx3 + (yq6lorbx-1) * *ftnjamu2); } Free_fapc0tnbewg7qruh(wkumc9idWrk1, wkumc9idges1xpkr, wkumc9idbeta, wkumc9idfasrkub3, wkumc9idsout, wkumc9idr0oydcxb, wkumc9idub4xioar, wkumc9ideffect, wkumc9idueshvo2ic, wkumc9ids0, wkumc9idpygsw6ko, wkumc9idpasjmo8g, wkumc9ideshvo2ic, wkumc9idonxjvw8u, wkumc9idwk4); } void Yee_vbfa(int psdvgce3[], double *fjcasv7g, double he7mqnvy[], double tlgduey8[], double rbne6ouj[], double hdnw2fts[], double lamvec[], double wbkq9zyi[], int ezlgm2up[], int lqsahu0r[], int which[], double kispwgx3[], double m0ibglfx[], double zshtfg8c[], double ui8ysltq[], double vc6hatuj[], double fasrkub3[], int ges1xpkr[], double wpuarq2m[], double hjm2ktyr[], int ulm3dvzg[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], int nbzjkpi3[], int lindex[], int acpios9q[], int jwbkl9fp[]) { double *ghdetj8v, *zpcqv3uj; int nhja0izq, rutyk8mg, xjc4ywlh, lyzoe1rsp, ueb8hndv, gtrlbz3e, algpft4y = 0, qemj9asg, npjlv3mr, kvowz9ht, ldk, fbd5yktj = 0; int *ftnjamu2, *wy1vqfzu; int itdcb8ilk[1]; double tdcb8ilk[4]; itdcb8ilk[0] = psdvgce3[15]; /* contr.sp$c5aesxku in s.vam() */ tdcb8ilk[0] = fjcasv7g[2]; /* contr.sp$low in s.vam() */ tdcb8ilk[1] = fjcasv7g[3]; /* contr.sp$high in s.vam() */ tdcb8ilk[2] = fjcasv7g[4]; /* contr.sp$tol in s.vam() */ tdcb8ilk[3] = fjcasv7g[5]; /* contr.sp$eps in s.vam() */ wy1vqfzu = psdvgce3 + 7; ftnjamu2 = psdvgce3; nhja0izq = psdvgce3[2]; lyzoe1rsp = psdvgce3[3]; gtrlbz3e = psdvgce3[5]; qemj9asg = psdvgce3[6]; rutyk8mg = psdvgce3[8]; xjc4ywlh = psdvgce3[9]; kvowz9ht = psdvgce3[11]; npjlv3mr = psdvgce3[12]; ldk = psdvgce3[14]; 
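/* psdvgce3[] packs the integer controls handed down from s.vam():
   [0] = ftnjamu2 (n), [2] = nhja0izq, [3] = lyzoe1rsp, [5] = gtrlbz3e,
   [6] = qemj9asg, [7] = wy1vqfzu (M), [8] = rutyk8mg, [9] = xjc4ywlh,
   [11] = kvowz9ht, [12] = npjlv3mr, [14] = ldk, [15] = itdcb8ilk[0];
   slots [4], [6], [13] and [16] are written back below with the
   iteration count, the QR rank (qemj9asg), the error flag, and the
   factorization info. */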
zpcqv3uj = fjcasv7g + 0; /* bf.qaltf0nz */ ghdetj8v = fjcasv7g + 1; /* ghdetj8v */ fapc0tnbvbfa1(ftnjamu2, wy1vqfzu, ezlgm2up, lqsahu0r, which, he7mqnvy, tlgduey8, rbne6ouj, wbkq9zyi, lamvec, hdnw2fts, kispwgx3, m0ibglfx, zshtfg8c, ui8ysltq, zpcqv3uj, vc6hatuj, fasrkub3, &qemj9asg, ges1xpkr, wpuarq2m, hjm2ktyr, ulm3dvzg, hnpt1zym, iz2nbfjc, ifys6woa, rpyis2kc, gkdx5jals, ghdetj8v, nbzjkpi3, lindex, acpios9q, jwbkl9fp, &nhja0izq, &lyzoe1rsp, &ueb8hndv, >rlbz3e, &rutyk8mg, &xjc4ywlh, &kvowz9ht, &npjlv3mr, &fbd5yktj, &ldk, &algpft4y, itdcb8ilk, tdcb8ilk); psdvgce3[6] = qemj9asg; psdvgce3[4] = ueb8hndv; psdvgce3[13] = fbd5yktj; psdvgce3[16] = algpft4y; } void fapc0tnbvbfa1(int *ftnjamu2, int *wy1vqfzu, int ezlgm2up[], int lqsahu0r[], int which[], double he7mqnvy[], double tlgduey8[], double rbne6ouj[], double wbkq9zyi[], double lamvec[], double hdnw2fts[], double kispwgx3[], double m0ibglfx[], double zshtfg8c[], double ui8ysltq[], double *zpcqv3uj, double vc6hatuj[], double fasrkub3[], int *qemj9asg, int ges1xpkr[], double wpuarq2m[], double hjm2ktyr[], int ulm3dvzg[], int hnpt1zym[], int iz2nbfjc[], double ifys6woa[], double rpyis2kc[], double gkdx5jals[], double *ghdetj8v, int nbzjkpi3[], int lindex[], int acpios9q[], int jwbkl9fp[], int *nhja0izq, int *yzoe1rsp, int *ueb8hndv, int *gtrlbz3e, int *rutyk8mg, int *xjc4ywlh, int *kvowz9ht, int *npjlv3mr, int *fbd5yktj, int *ldk, int *algpft4y, int itdcb8ilk[], double tdcb8ilk[]) { int ayfnwr1v, yq6lorbx, gp1jxzuh, urohxe6t, bpvaqm5z, wg1xifdy, ybnsqgo9 = 101, maxrutyk8mgxjc4ywlh, infoqr_svdbx3tk, sumzv2xfhei = 0; double qtce8hzo, deltaf, z4vrscot, pvofyg8z = 1.0e-7, g2dnwteb = 1.0, *fpdlcqk9m0ibglfx, *fpdlcqk9ub4xioar, *fpdlcqk9tlgduey8, *fpdlcqk9ghz9vuba, *fpdlcqk9hjm2ktyr, *fpdlcqk9kispwgx3, *qnwamo0e1; double *wkumc9idTwk, *wkumc9idwkbzmd6ftv, *wkumc9idwk9; double *wkumc9idghz9vuba, *wkumc9idoldmat, *wkumc9idub4xioar, *wkumc9idwk2; double *wkumc9idall_xecbg0pf, *wkumc9idall_z4grbpiq, *wkumc9idall_d7glzhbj, *wkumc9idall_v2eydbxs, *wkumc9idall_tt2; int cvnjhg2u; maxrutyk8mgxjc4ywlh = (*ftnjamu2 * *wy1vqfzu > *xjc4ywlh) ? 
(*ftnjamu2 * *wy1vqfzu) : *xjc4ywlh; wkumc9idTwk = Calloc(maxrutyk8mgxjc4ywlh , double); wkumc9idwkbzmd6ftv = Calloc(*xjc4ywlh * *rutyk8mg, double); wkumc9idwk9 = Calloc(*xjc4ywlh , double); wkumc9idghz9vuba = Calloc(*ftnjamu2 * *wy1vqfzu, double); wkumc9idoldmat = Calloc(*ftnjamu2 * *wy1vqfzu, double); wkumc9idub4xioar = Calloc(*wy1vqfzu * *ftnjamu2, double); wkumc9idwk2 = Calloc(*ftnjamu2 * *wy1vqfzu, double); if ( *nhja0izq == 0 || *nhja0izq == 1 ) { *gtrlbz3e = 1; } if (*qemj9asg == 0) { fvlmz9iyC_mux17(wpuarq2m, vc6hatuj, wy1vqfzu, xjc4ywlh, ftnjamu2, npjlv3mr, rutyk8mg); for (gp1jxzuh = 1; gp1jxzuh <= *xjc4ywlh; gp1jxzuh++) { ges1xpkr[gp1jxzuh-1] = gp1jxzuh; } F77_CALL(vqrdca)(vc6hatuj, rutyk8mg, rutyk8mg, xjc4ywlh, fasrkub3, ges1xpkr, wkumc9idTwk, qemj9asg, &pvofyg8z); } fpdlcqk9m0ibglfx = m0ibglfx; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 0; yq6lorbx < *wy1vqfzu; yq6lorbx++) { *fpdlcqk9m0ibglfx++ = 0.0e0; } } for (gp1jxzuh = 1; gp1jxzuh <= *nhja0izq; gp1jxzuh++) { if (iz2nbfjc[gp1jxzuh-1] == 1) { fpdlcqk9m0ibglfx = m0ibglfx; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { bpvaqm5z = hnpt1zym[gp1jxzuh-1] - 1; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9m0ibglfx += kispwgx3[ayfnwr1v-1 + bpvaqm5z * *ftnjamu2]; fpdlcqk9m0ibglfx++; bpvaqm5z++; } } } else { for (wg1xifdy = 1; wg1xifdy <= ulm3dvzg[gp1jxzuh-1]; wg1xifdy++) { urohxe6t = hnpt1zym[gp1jxzuh-1] + wg1xifdy - 2; fpdlcqk9m0ibglfx = m0ibglfx; fpdlcqk9kispwgx3 = kispwgx3 + urohxe6t * *ftnjamu2; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { fpdlcqk9hjm2ktyr = hjm2ktyr + urohxe6t * *wy1vqfzu; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9m0ibglfx += *fpdlcqk9hjm2ktyr++ * *fpdlcqk9kispwgx3; fpdlcqk9m0ibglfx++; } fpdlcqk9kispwgx3++; } } } } sumzv2xfhei = jwbkl9fp[(1 + *nhja0izq) - 1]; wkumc9idall_xecbg0pf = Calloc(sumzv2xfhei, double); wkumc9idall_z4grbpiq = Calloc(sumzv2xfhei, double); wkumc9idall_d7glzhbj = Calloc(sumzv2xfhei, double); wkumc9idall_v2eydbxs = Calloc(sumzv2xfhei, double); wkumc9idall_tt2 = Calloc(*nhja0izq , double); *ueb8hndv = 0; while ((g2dnwteb > *zpcqv3uj ) && (*ueb8hndv < *gtrlbz3e)) { (*ueb8hndv)++; deltaf = 0.0e0; fpdlcqk9ghz9vuba = wkumc9idghz9vuba; fpdlcqk9tlgduey8 = tlgduey8; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { fpdlcqk9m0ibglfx = m0ibglfx + yq6lorbx-1; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9ghz9vuba++ = *fpdlcqk9tlgduey8++ - *fpdlcqk9m0ibglfx; fpdlcqk9m0ibglfx += *wy1vqfzu; } } fvlmz9iyC_mux22(wpuarq2m, wkumc9idghz9vuba, wkumc9idTwk, npjlv3mr, ftnjamu2, wy1vqfzu); F77_CALL(vdqrsl)(vc6hatuj, rutyk8mg, rutyk8mg, qemj9asg, fasrkub3, wkumc9idTwk, wkumc9idwk2, wkumc9idwk2, zshtfg8c, wkumc9idwk2, wkumc9idub4xioar, &ybnsqgo9, &infoqr_svdbx3tk); *ghdetj8v = 0.0e0; qnwamo0e1 = wkumc9idTwk; fpdlcqk9ub4xioar = wkumc9idub4xioar; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { qtce8hzo = *qnwamo0e1++ - *fpdlcqk9ub4xioar++; *ghdetj8v += pow(qtce8hzo, (double) 2.0); } } fvlmz9iyC_vbks(wpuarq2m, wkumc9idub4xioar, wy1vqfzu, ftnjamu2, npjlv3mr); for (gp1jxzuh = 1; gp1jxzuh <= *nhja0izq; gp1jxzuh++) { for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { urohxe6t = hnpt1zym[gp1jxzuh-1] + yq6lorbx -2; if (iz2nbfjc[gp1jxzuh-1] == 1) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { wkumc9idoldmat[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] = kispwgx3[ayfnwr1v-1 + urohxe6t * *ftnjamu2]; wkumc9idghz9vuba[ayfnwr1v-1 + (yq6lorbx-1) * 
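/*
 * Backfitting update for smooth term gp1jxzuh: the working partial
 * residual being assembled in this statement is
 *     z = y - (linear part) - (current additive predictor)
 *           + (this term's own old contribution, saved in oldmat),
 * i.e. everything except the current term is swept out; the term is then
 * refitted to z by fapc0tnbewg7qruh() below.
 */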
*ftnjamu2] = tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] - wkumc9idub4xioar[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] - m0ibglfx[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] + wkumc9idoldmat[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2]; } } else { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { wkumc9idoldmat[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] = 0.0e0; for (wg1xifdy = 1; wg1xifdy <= ulm3dvzg[gp1jxzuh-1]; wg1xifdy++) { bpvaqm5z = hnpt1zym[gp1jxzuh-1] + wg1xifdy -2; wkumc9idoldmat[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] += hjm2ktyr[yq6lorbx-1 + bpvaqm5z * *wy1vqfzu] * kispwgx3[ayfnwr1v-1 + bpvaqm5z * *ftnjamu2]; } wkumc9idghz9vuba[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] = tlgduey8[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] - wkumc9idub4xioar[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] - m0ibglfx[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] + wkumc9idoldmat[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2]; } } } cvnjhg2u = (*ueb8hndv == 1) ? 0 : 1; fapc0tnbewg7qruh(he7mqnvy+(which[gp1jxzuh-1]-1) * *ftnjamu2, wkumc9idghz9vuba, rbne6ouj, ftnjamu2, wy1vqfzu, ezlgm2up + (gp1jxzuh-1) * *ftnjamu2, lqsahu0r + gp1jxzuh-1, wbkq9zyi + hnpt1zym[gp1jxzuh-1]-1, lamvec + hnpt1zym[gp1jxzuh-1]-1, hdnw2fts + hnpt1zym[gp1jxzuh-1]-1, kispwgx3 + (hnpt1zym[gp1jxzuh-1]-1) * *ftnjamu2, ui8ysltq + (hnpt1zym[gp1jxzuh-1]-1) * *ftnjamu2, kvowz9ht, fbd5yktj, ldk, algpft4y, yzoe1rsp, rpyis2kc + nbzjkpi3[gp1jxzuh-1]-1, gkdx5jals + jwbkl9fp[gp1jxzuh-1]-1, ifys6woa + lindex[gp1jxzuh-1]-1, hjm2ktyr + (hnpt1zym[gp1jxzuh-1]-1) * *wy1vqfzu, wkumc9idall_xecbg0pf + jwbkl9fp[gp1jxzuh-1]-1, wkumc9idall_z4grbpiq + jwbkl9fp[gp1jxzuh-1]-1, wkumc9idall_d7glzhbj + jwbkl9fp[gp1jxzuh-1]-1, wkumc9idall_v2eydbxs + jwbkl9fp[gp1jxzuh-1]-1, wkumc9idall_tt2 + gp1jxzuh-1 , // If 0 then compute wkumc9idall_sg[0:3] else already done: &cvnjhg2u, acpios9q + gp1jxzuh-1, iz2nbfjc + gp1jxzuh-1, ulm3dvzg + gp1jxzuh-1, npjlv3mr, itdcb8ilk, tdcb8ilk); for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { if (iz2nbfjc[gp1jxzuh-1] == 1) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { m0ibglfx[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] += kispwgx3[ayfnwr1v-1 + (hnpt1zym[gp1jxzuh-1]+yq6lorbx-2) * *ftnjamu2]; } } else { for (wg1xifdy = 1; wg1xifdy <= ulm3dvzg[gp1jxzuh-1]; wg1xifdy++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { m0ibglfx[yq6lorbx-1+ (ayfnwr1v-1) * *wy1vqfzu] += hjm2ktyr[yq6lorbx-1+ (hnpt1zym[gp1jxzuh-1]+wg1xifdy-2) * *wy1vqfzu] * kispwgx3[ayfnwr1v-1+ (hnpt1zym[gp1jxzuh-1]+wg1xifdy-2) * *ftnjamu2]; } } } for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { m0ibglfx[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu] -= wkumc9idoldmat[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2]; } } for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { if (iz2nbfjc[gp1jxzuh-1] == 1) { deltaf += fapc0tnbrd9beyfk(ftnjamu2, wkumc9idoldmat + (yq6lorbx-1) * *ftnjamu2, rbne6ouj + (yq6lorbx-1) * *ftnjamu2, kispwgx3 + (hnpt1zym[gp1jxzuh-1]+yq6lorbx-2) * *ftnjamu2); } else { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { wkumc9idTwk[ayfnwr1v-1] = 0.0e0; for (wg1xifdy=1; wg1xifdy<=ulm3dvzg[gp1jxzuh-1]; wg1xifdy++) { wkumc9idTwk[ayfnwr1v-1] += hjm2ktyr[yq6lorbx-1 + (hnpt1zym[gp1jxzuh-1]+wg1xifdy-2) * *wy1vqfzu] * kispwgx3[ayfnwr1v-1 + (hnpt1zym[gp1jxzuh-1]+wg1xifdy-2) * *ftnjamu2]; } } deltaf += fapc0tnbrd9beyfk(ftnjamu2, wkumc9idoldmat + (yq6lorbx-1) * *ftnjamu2, rbne6ouj + (yq6lorbx-1) * *ftnjamu2, wkumc9idTwk); } } fpdlcqk9ghz9vuba = wkumc9idghz9vuba; fpdlcqk9tlgduey8 = tlgduey8; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { fpdlcqk9m0ibglfx = m0ibglfx + yq6lorbx-1; for 
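/*
 * After the refit, the additive predictor m0ibglfx gains the new
 * contribution and loses the old one, and deltaf accumulates the weighted
 * squared change in the fitted functions through fapc0tnbrd9beyfk().  The
 * loop interrupted here then rebuilds the working residuals so the
 * parametric part can be re-solved by vdqrsl() before the next term is
 * visited.
 */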
(ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { *fpdlcqk9ghz9vuba++ = *fpdlcqk9tlgduey8++ - *fpdlcqk9m0ibglfx; fpdlcqk9m0ibglfx += *wy1vqfzu; } } fvlmz9iyC_mux22(wpuarq2m, wkumc9idghz9vuba, wkumc9idTwk, npjlv3mr, ftnjamu2, wy1vqfzu); F77_CALL(vdqrsl)(vc6hatuj, rutyk8mg, rutyk8mg, qemj9asg, fasrkub3, wkumc9idTwk, wkumc9idwk2, wkumc9idwk2, zshtfg8c, wkumc9idwk2, wkumc9idub4xioar, &ybnsqgo9, &infoqr_svdbx3tk); fvlmz9iyC_vbks(wpuarq2m, wkumc9idub4xioar, wy1vqfzu, ftnjamu2, npjlv3mr); } if (*nhja0izq > 0) { z4vrscot = 0.0e0; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { z4vrscot += rbne6ouj[ayfnwr1v-1 + (yq6lorbx-1) * *ftnjamu2] * pow(m0ibglfx[yq6lorbx-1 + (ayfnwr1v-1) * *wy1vqfzu], (double) 2.0); } } g2dnwteb = (z4vrscot > 0.0e0) ? sqrt(deltaf / z4vrscot) : 0.0; } if (*ueb8hndv == 1) { g2dnwteb = 1.0e0; } } for (yq6lorbx = 1; yq6lorbx <= *xjc4ywlh; yq6lorbx++) { wkumc9idwk9[yq6lorbx-1] = zshtfg8c[yq6lorbx-1]; } for (yq6lorbx = 1; yq6lorbx <= *xjc4ywlh; yq6lorbx++) { zshtfg8c[ges1xpkr[yq6lorbx-1]-1] = wkumc9idwk9[yq6lorbx-1]; } fpdlcqk9m0ibglfx = m0ibglfx; fpdlcqk9ub4xioar = wkumc9idub4xioar; for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { *fpdlcqk9m0ibglfx += *fpdlcqk9ub4xioar++; fpdlcqk9m0ibglfx++; } } if (*yzoe1rsp && (*nhja0izq > 0)) { for (gp1jxzuh = 1; gp1jxzuh <= *nhja0izq; gp1jxzuh++) { for (wg1xifdy = 1; wg1xifdy <= ulm3dvzg[gp1jxzuh-1]; wg1xifdy++) { fapc0tnbshm8ynte(ftnjamu2, /* lqsahu0r + gp1jxzuh-1, */ ezlgm2up + (gp1jxzuh-1) * *ftnjamu2, ui8ysltq + (hnpt1zym[ gp1jxzuh-1] + wg1xifdy-2) * *ftnjamu2, wkumc9idoldmat); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { ui8ysltq[ayfnwr1v-1 + (hnpt1zym[gp1jxzuh-1]+wg1xifdy-2) * *ftnjamu2] = wkumc9idoldmat[ayfnwr1v-1]; } } } if (0) { for (gp1jxzuh = 1; gp1jxzuh <= *nhja0izq; gp1jxzuh++) { for (wg1xifdy = 1; wg1xifdy <= ulm3dvzg[gp1jxzuh-1]; wg1xifdy++) { fapc0tnbshm8ynte(ftnjamu2, /* lqsahu0r + gp1jxzuh-1, */ ezlgm2up + (gp1jxzuh-1) * *ftnjamu2, ifys6woa + (hnpt1zym[ gp1jxzuh-1] + wg1xifdy-2) * *ftnjamu2, wkumc9idoldmat); for (ayfnwr1v = 1; ayfnwr1v <= *ftnjamu2; ayfnwr1v++) { ifys6woa[ayfnwr1v-1 + (hnpt1zym[gp1jxzuh-1]+wg1xifdy-2) * *ftnjamu2] = wkumc9idoldmat[ayfnwr1v-1]; } } } } } Free(wkumc9idwkbzmd6ftv); Free(wkumc9idwk9); Free(wkumc9idTwk); Free(wkumc9idghz9vuba); Free(wkumc9idoldmat); Free(wkumc9idub4xioar); Free(wkumc9idwk2); Free(wkumc9idall_xecbg0pf); Free(wkumc9idall_z4grbpiq); Free(wkumc9idall_d7glzhbj); Free(wkumc9idall_v2eydbxs); Free(wkumc9idall_tt2); } void fapc0tnbx6kanjdh(double sjwyig9t[], double xout[], int *f8yswcat, int *wy1vqfzu) { int ayfnwr1v, yq6lorbx, gp1jxzuh, iptr = 0; for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) { for (gp1jxzuh = 1; gp1jxzuh <= *wy1vqfzu; gp1jxzuh++) { xout[iptr++] = (yq6lorbx == gp1jxzuh) ? 1.0e0 : 0.0e0; } } } for (yq6lorbx = 1; yq6lorbx <= *wy1vqfzu; yq6lorbx++) { for (ayfnwr1v = 1; ayfnwr1v <= *f8yswcat; ayfnwr1v++) { for (gp1jxzuh = 1; gp1jxzuh <= *wy1vqfzu; gp1jxzuh++) { xout[iptr++] = (yq6lorbx == gp1jxzuh) ? sjwyig9t[ayfnwr1v-1] : 0.0e0; } } } } double fapc0tnbrd9beyfk(int *f8yswcat, double bhcji9gl[], double po8rwsmy[], double m0ibglfx[]) { int ayfnwr1v; double rd9beyfk, rxeqjn0y = 0.0, lm9vcjob = 0.0; for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) { lm9vcjob += *po8rwsmy; rxeqjn0y += *po8rwsmy++ * pow(*bhcji9gl++ - *m0ibglfx++, (double) 2.0); } rd9beyfk = (lm9vcjob > 0.0e0) ? 
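/*
 * fapc0tnbrd9beyfk() returns the weighted mean squared difference
 * sum(w * (y - f)^2) / sum(w), or 0 when the weights sum to zero;
 * fapc0tnbvbfa1() feeds these into the convergence ratio
 * sqrt(deltaf / z4vrscot).  A minimal stand-alone sketch of the same
 * computation (the name wmsd is invented for illustration):
 *
 *     double wmsd(int n, const double *y, const double *f,
 *                 const double *w) {
 *         double num = 0.0, den = 0.0;
 *         for (int i = 0; i < n; i++) {
 *             double d = y[i] - f[i];
 *             num += w[i] * d * d;
 *             den += w[i];
 *         }
 *         return (den > 0.0) ? num / den : 0.0;
 *     }
 */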
(rxeqjn0y / lm9vcjob) : 0.0e0; return rd9beyfk; } void fapc0tnbpitmeh0q(int *f8yswcat, double bhcji9gl[], double po8rwsmy[], double *lfu2qhid, double *lm9vcjob) { double rxeqjn0yy = 0.0; int ayfnwr1v; *lm9vcjob = 0.0e0; for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) { *lm9vcjob += *po8rwsmy; rxeqjn0yy += *po8rwsmy++ * *bhcji9gl++; } *lfu2qhid = (*lm9vcjob > 0.0e0) ? (rxeqjn0yy / *lm9vcjob) : 0.0e0; } void fapc0tnbdsrt0gem(int *f8yswcat, double sjwyig9t[], double po8rwsmy[], double bhcji9gl[], double ub4xioar[], double ui8ysltq[], int *yzoe1rsp) { int ayfnwr1v; double pygsw6ko, pasjmo8g, intercept, eck8vubt, qtce8hzo, lm9vcjob = 0.0, q6zdcwxk = 0.0, nsum = 0.0, *fpdlcqk9po8rwsmy, *fpdlcqk9sjwyig9t, *fpdlcqk9bhcji9gl; fapc0tnbpitmeh0q(f8yswcat, sjwyig9t, po8rwsmy, &pygsw6ko, &lm9vcjob); fapc0tnbpitmeh0q(f8yswcat, bhcji9gl, po8rwsmy, &pasjmo8g, &lm9vcjob); fpdlcqk9sjwyig9t = sjwyig9t; fpdlcqk9bhcji9gl = bhcji9gl; fpdlcqk9po8rwsmy = po8rwsmy; for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) { qtce8hzo = *fpdlcqk9sjwyig9t++ - pygsw6ko; nsum += qtce8hzo * (*fpdlcqk9bhcji9gl++ - pasjmo8g) * *fpdlcqk9po8rwsmy; qtce8hzo = pow(qtce8hzo, (double) 2.0); q6zdcwxk += qtce8hzo * *fpdlcqk9po8rwsmy++; } eck8vubt = nsum / q6zdcwxk; intercept = pasjmo8g - eck8vubt * pygsw6ko; fpdlcqk9sjwyig9t = sjwyig9t; for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) { *ub4xioar++ = intercept + eck8vubt * *fpdlcqk9sjwyig9t++; } if (*yzoe1rsp) { fpdlcqk9sjwyig9t = sjwyig9t; fpdlcqk9po8rwsmy = po8rwsmy; for (ayfnwr1v = 0; ayfnwr1v < *f8yswcat; ayfnwr1v++) { qtce8hzo = *fpdlcqk9sjwyig9t++ - pygsw6ko; if (*fpdlcqk9po8rwsmy++ > 0.0e0) { *ui8ysltq -= (1.0e0 / lm9vcjob + pow(qtce8hzo, (double) 2.0) / q6zdcwxk); ui8ysltq++; } else { *ui8ysltq++ = 0.0e0; } } } } void fapc0tnbshm8ynte(int *ftnjamu2, int ezlgm2up[], double pygsw6ko[], double sjwyig9t[]) { int ayfnwr1v; for (ayfnwr1v = 0; ayfnwr1v < *ftnjamu2; ayfnwr1v++) { *sjwyig9t++ = pygsw6ko[*ezlgm2up++ -1]; } } void vknootl2(double sjwyig9t[], int *f8yswcat, double gkdx5jal[], int *rvy1fpli, int *ukgwt7na) { int ayfnwr1v, yq6lorbx, ndzv2xfhei; if (*ukgwt7na) { ndzv2xfhei = *rvy1fpli - 6; } else { ndzv2xfhei = (*f8yswcat <= 40) ? 
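/*
 * vknootl2() places the B-spline knots: with at most 40 distinct x values
 * every one becomes a knot; beyond that the count grows only like the
 * fourth root of the excess.  Three extra copies of each boundary value
 * are prepended/appended, so *rvy1fpli comes back as ndzv2xfhei + 6
 * (cubic B-splines need the repeated end knots).  A stand-alone sketch of
 * the count rule, with the helper name nknots() invented for
 * illustration:
 *
 *     #include <math.h>
 *     static int nknots(int n) {
 *         return (n <= 40) ? n
 *                          : (int) floor(40.0 + pow(n - 40.0, 0.25));
 *     }
 *     // nknots(40) == 40, nknots(100) == 42, nknots(1000) == 45
 */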
*f8yswcat : floor((double) 40.0 + pow((double) *f8yswcat - 40.0, (double) 0.25)); } *rvy1fpli = ndzv2xfhei + 6; for (yq6lorbx = 1; yq6lorbx <= 3; yq6lorbx++) { *gkdx5jal++ = sjwyig9t[0]; } for (yq6lorbx = 1; yq6lorbx <= ndzv2xfhei; yq6lorbx++) { ayfnwr1v = (yq6lorbx - 1) * (*f8yswcat - 1) / (ndzv2xfhei - 1); *gkdx5jal++ = sjwyig9t[ayfnwr1v]; } for (yq6lorbx = 1; yq6lorbx <= 3; yq6lorbx++) { *gkdx5jal++ = sjwyig9t[*f8yswcat -1]; } } void Yee_pknootl2(double *gkdx5jal, int *f8yswcat, int *zo8wpibx, double *Toler_ankcghz2) { int ayfnwr1v, yq6lorbx = *f8yswcat - 4, cjop5bwm = 4; for (ayfnwr1v = 1; ayfnwr1v <= 4; ayfnwr1v++) { *zo8wpibx++ = 1; } for (ayfnwr1v = 5; ayfnwr1v <= yq6lorbx; ayfnwr1v++) { if ((gkdx5jal[ayfnwr1v -1] - gkdx5jal[cjop5bwm -1] >= *Toler_ankcghz2) && (gkdx5jal[ *f8yswcat -1] - gkdx5jal[ayfnwr1v -1] >= *Toler_ankcghz2)) { *zo8wpibx++ = 1; cjop5bwm = ayfnwr1v; } else { *zo8wpibx++ = 0; } } for (ayfnwr1v = *f8yswcat - 3; ayfnwr1v <= *f8yswcat; ayfnwr1v++) { *zo8wpibx++ = 1; } } VGAM/src/vgam.f0000644000176200001440000014005713135276761012716 0ustar liggesusersC Output from Public domain Ratfor, version 1.01 subroutine vbvs(kuzxj1lo,ankcghz2,rpyis2kc,nk,he7mqnvy,smat,order, *wy1vqfzu) integer kuzxj1lo, nk, order, wy1vqfzu double precision ankcghz2(nk+4), rpyis2kc(nk,wy1vqfzu), he7mqnvy(k *uzxj1lo), smat(kuzxj1lo,wy1vqfzu) double precision chw8lzty integer ayfnwr1v, yq6lorbx, ifour4 ifour4 = 4 do23000 yq6lorbx=1,wy1vqfzu do23002 ayfnwr1v=1,kuzxj1lo chw8lzty = he7mqnvy(ayfnwr1v) call wbvalue(ankcghz2, rpyis2kc(1,yq6lorbx), nk, ifour4, chw8lzty, * order, smat(ayfnwr1v,yq6lorbx)) 23002 continue 23003 continue 23000 continue 23001 continue return end subroutine tfeswo7c(osiz4fxy, nk, wy1vqfzu, ldk, wbkq9zyi, sgmat) implicit logical (a-z) integer nk, wy1vqfzu, ldk double precision osiz4fxy(ldk,nk*wy1vqfzu), wbkq9zyi(wy1vqfzu), sg *mat(nk,4) integer ayfnwr1v, yq6lorbx do23004 ayfnwr1v=1,nk do23006 yq6lorbx=1,wy1vqfzu osiz4fxy(ldk,(ayfnwr1v-1)*wy1vqfzu+yq6lorbx) = osiz4fxy(ldk,(ayfnw *r1v-1)*wy1vqfzu+yq6lorbx) + wbkq9zyi(yq6lorbx) * sgmat(ayfnwr1v,1) 23006 continue 23007 continue 23004 continue 23005 continue do23008 ayfnwr1v=1,(nk-1) do23010 yq6lorbx=1,wy1vqfzu osiz4fxy(ldk-wy1vqfzu,(ayfnwr1v-0)*wy1vqfzu+yq6lorbx) = osiz4fxy(l *dk-wy1vqfzu,(ayfnwr1v-0)*wy1vqfzu+yq6lorbx) + wbkq9zyi(yq6lorbx) * * sgmat(ayfnwr1v,2) 23010 continue 23011 continue 23008 continue 23009 continue do23012 ayfnwr1v=1,(nk-2) do23014 yq6lorbx=1,wy1vqfzu osiz4fxy(ldk-2*wy1vqfzu,(ayfnwr1v+1)*wy1vqfzu+yq6lorbx) = osiz4fxy *(ldk-2*wy1vqfzu,(ayfnwr1v+1)*wy1vqfzu+yq6lorbx) + wbkq9zyi(yq6lorb *x) * sgmat(ayfnwr1v,3) 23014 continue 23015 continue 23012 continue 23013 continue do23016 ayfnwr1v=1,(nk-3) do23018 yq6lorbx=1,wy1vqfzu osiz4fxy(ldk-3*wy1vqfzu,(ayfnwr1v+2)*wy1vqfzu+yq6lorbx) = osiz4fxy *(ldk-3*wy1vqfzu,(ayfnwr1v+2)*wy1vqfzu+yq6lorbx) + wbkq9zyi(yq6lorb *x) * sgmat(ayfnwr1v,4) 23018 continue 23019 continue 23016 continue 23017 continue return end subroutine ybnagt8k(iii, cz8qdfyj, tesdm5kv, g9fvdrbw, osiz4fxy, w *mat, kxvq6sfw, nyfu9rod, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxd *w1, dufozmt7) implicit logical (a-z) integer iii, cz8qdfyj, tesdm5kv, kxvq6sfw, nyfu9rod, wy1vqfzu, ldk *, dimw, kuzxj1lo, nk, tgiyxdw1(*), dufozmt7(*) double precision g9fvdrbw(4,*), osiz4fxy(ldk, nk*wy1vqfzu), wmat(k *uzxj1lo,dimw) double precision obr6tcex integer urohxe6t, nead, bcol, brow, biuvowq2, nbj8tdsk bcol = cz8qdfyj + tesdm5kv brow = cz8qdfyj do23020 urohxe6t=1,dimw obr6tcex = wmat(iii,urohxe6t) * g9fvdrbw(kxvq6sfw,1) * 
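C ybnagt8k scatters one observation into the banded least-squares
C matrix: the product obr6tcex = w * B_j(x) * B_k(x) being formed in
C the interrupted statement is added at band position
C (ldk - (col - row), col) for every stored element of the M x M
C weight matrix, and again with the indices swapped for off-diagonal
C basis pairs (tesdm5kv > 0), keeping the matrix symmetric.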
g9fvdrbw(ny *fu9rod,1) biuvowq2 = (brow-1)*wy1vqfzu + tgiyxdw1(urohxe6t) nbj8tdsk = (bcol-1)*wy1vqfzu + dufozmt7(urohxe6t) nead = nbj8tdsk - biuvowq2 osiz4fxy(ldk-nead, nbj8tdsk) = osiz4fxy(ldk-nead, nbj8tdsk) + obr6 *tcex if(tesdm5kv .gt. 0 .and. dufozmt7(urohxe6t) .ne. tgiyxdw1(urohxe6t *))then biuvowq2 = (brow-1)*wy1vqfzu + dufozmt7(urohxe6t) nbj8tdsk = (bcol-1)*wy1vqfzu + tgiyxdw1(urohxe6t) nead = nbj8tdsk - biuvowq2 osiz4fxy(ldk-nead, nbj8tdsk) = osiz4fxy(ldk-nead, nbj8tdsk) + obr6 *tcex endif 23020 continue 23021 continue return end subroutine vsplin(he7mqnvy,rbne6ouj,wmat,kuzxj1lo,gkdx5jal, nk,ldk *,wy1vqfzu,dimw, tgiyxdw1,dufozmt7, wkmm, wbkq9zyi, info, t8hwvalr, * rpyis2kc, osiz4fxy, btwy, sgdub, ui8ysltq, yzoe1rsp, bmb, ifys6wo *a, dof, scrtch, fbd5yktj, truen) implicit logical (a-z) integer kuzxj1lo, nk, ldk, wy1vqfzu, dimw, tgiyxdw1(*), dufozmt7(* *), info, fbd5yktj, truen integer yzoe1rsp double precision he7mqnvy(kuzxj1lo), rbne6ouj(kuzxj1lo,wy1vqfzu), *wmat(kuzxj1lo,dimw), gkdx5jal(nk+4), wkmm(wy1vqfzu,wy1vqfzu,16), w *bkq9zyi(wy1vqfzu), t8hwvalr(kuzxj1lo,wy1vqfzu), rpyis2kc(nk,wy1vqf *zu), osiz4fxy(ldk,nk*wy1vqfzu), btwy(wy1vqfzu,nk) double precision sgdub(nk,wy1vqfzu), ui8ysltq(truen,wy1vqfzu), bmb *(wy1vqfzu,wy1vqfzu), ifys6woa(kuzxj1lo,wy1vqfzu), dof(wy1vqfzu), s *crtch(*) integer yq6lorbx, ayfnwr1v, dqlr5bse, pqzfxw4i, urohxe6t, icrit integer gp0xjetb, e5knafcg, wep0oibc, l3zpbstu(3), ispar, i1loc double precision qaltf0nz, g9fvdrbw(4,1), ms0qypiw(16), penalt, qc *piaj7f, fp6nozvx, waiez6nt, toldf, parms(3) do23024 yq6lorbx=1,wy1vqfzu if(wbkq9zyi(yq6lorbx) .eq. 0.0d0)then ispar=0 icrit=3 else ispar=1 icrit=1 endif if((wy1vqfzu .eq. 1) .or. (dimw.eq.wy1vqfzu) .or. (ispar .eq. 0))t *hen e5knafcg = 4 fp6nozvx = 1.50d0 waiez6nt = 0.00d0 wep0oibc = 1 toldf=0.001d0 if(wy1vqfzu.eq.1)then toldf=0.005d0 else if(wy1vqfzu.eq.2)then toldf=0.015d0 else if(wy1vqfzu.eq.3)then toldf=0.025d0 else toldf=0.045d0 endif endif endif l3zpbstu(1) = icrit l3zpbstu(2) = ispar l3zpbstu(3) = 300 parms(1) = waiez6nt parms(2) = fp6nozvx parms(3) = toldf gp0xjetb=0 if((wy1vqfzu .eq. 1) .or. (dimw.eq.wy1vqfzu))then do23038 ayfnwr1v=1,kuzxj1lo rbne6ouj(ayfnwr1v,yq6lorbx) = rbne6ouj(ayfnwr1v,yq6lorbx) / wmat(a *yfnwr1v,yq6lorbx) 23038 continue 23039 continue call dnaoqj0l(penalt, dof(yq6lorbx), he7mqnvy, rbne6ouj(1,yq6lorbx *), wmat(1,yq6lorbx), kuzxj1lo,nk, gkdx5jal,rpyis2kc(1,yq6lorbx), t *8hwvalr(1,yq6lorbx), ifys6woa(1,yq6lorbx), qcpiaj7f,wbkq9zyi(yq6lo *rbx),parms, scrtch, gp0xjetb,l3zpbstu, e5knafcg,wep0oibc,fbd5yktj) if(fbd5yktj .ne. 0)then return endif do23042 ayfnwr1v=1,kuzxj1lo wmat(ayfnwr1v,yq6lorbx) = wmat(ayfnwr1v,yq6lorbx) * wmat(ayfnwr1v, *yq6lorbx) 23042 continue 23043 continue if(yzoe1rsp .ne. 0)then do23046 ayfnwr1v=1,kuzxj1lo ui8ysltq(ayfnwr1v,yq6lorbx) = ifys6woa(ayfnwr1v,yq6lorbx) / wmat(a *yfnwr1v,yq6lorbx) 23046 continue 23047 continue endif else call dnaoqj0l(penalt, dof(yq6lorbx), he7mqnvy, btwy(1,yq6lorbx), w *mat(1,yq6lorbx), kuzxj1lo,nk, gkdx5jal,rpyis2kc(1,yq6lorbx),t8hwva *lr(1,yq6lorbx), ifys6woa(1,yq6lorbx), qcpiaj7f,wbkq9zyi(yq6lorbx), *parms, scrtch, gp0xjetb,l3zpbstu, e5knafcg,wep0oibc,fbd5yktj) if(fbd5yktj .ne. 0)then return endif do23050 ayfnwr1v=1,kuzxj1lo wmat(ayfnwr1v,yq6lorbx) = wmat(ayfnwr1v,yq6lorbx) * wmat(ayfnwr1v, *yq6lorbx) 23050 continue 23051 continue endif if(fbd5yktj .ne. 0)then return endif endif 23024 continue 23025 continue if((wy1vqfzu .eq. 1) .or. (dimw .eq. 
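C vsplin fits each component function with dnaoqj0l, a univariate
C cubic smoothing spline: a spar value (wbkq9zyi) of exactly zero
C requests automatic selection (ispar = 0, icrit = 3, evidently
C matching the requested degrees of freedom to within toldf), while
C a nonzero spar is used as supplied (ispar = 1, icrit = 1).  The
C early return in the interrupted if-statement fires when M = 1 or
C the weights are diagonal, since then the componentwise fits are
C the whole job and the coupled banded system is never assembled.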
wy1vqfzu))then return endif do23056 ayfnwr1v=1,nk do23058 yq6lorbx=1,wy1vqfzu btwy(yq6lorbx,ayfnwr1v)=0.0d0 23058 continue 23059 continue 23056 continue 23057 continue do23060 ayfnwr1v=1,(nk*wy1vqfzu) do23062 yq6lorbx=1,ldk osiz4fxy(yq6lorbx,ayfnwr1v) = 0.0d0 23062 continue 23063 continue 23060 continue 23061 continue qaltf0nz = 0.1d-9 do23064 ayfnwr1v=1,kuzxj1lo call vinterv(gkdx5jal(1),(nk+1),he7mqnvy(ayfnwr1v),dqlr5bse,pqzfxw *4i) if(pqzfxw4i .eq. 1)then if(he7mqnvy(ayfnwr1v) .le. (gkdx5jal(dqlr5bse)+qaltf0nz))then dqlr5bse=dqlr5bse-1 else return endif endif call vbsplvd(gkdx5jal,4,he7mqnvy(ayfnwr1v),dqlr5bse,ms0qypiw,g9fvd *rbw,1) yq6lorbx= dqlr5bse-4+1 do23070 urohxe6t=1,wy1vqfzu btwy(urohxe6t,yq6lorbx)=btwy(urohxe6t,yq6lorbx) + rbne6ouj(ayfnwr1 *v,urohxe6t) * g9fvdrbw(1,1) 23070 continue 23071 continue call ybnagt8k(ayfnwr1v, yq6lorbx, 0, g9fvdrbw, osiz4fxy, wmat, 1, *1, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) call ybnagt8k(ayfnwr1v, yq6lorbx, 1, g9fvdrbw, osiz4fxy, wmat, 1, *2, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) call ybnagt8k(ayfnwr1v, yq6lorbx, 2, g9fvdrbw, osiz4fxy, wmat, 1, *3, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) call ybnagt8k(ayfnwr1v, yq6lorbx, 3, g9fvdrbw, osiz4fxy, wmat, 1, *4, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) yq6lorbx= dqlr5bse-4+2 do23072 urohxe6t=1,wy1vqfzu btwy(urohxe6t,yq6lorbx)=btwy(urohxe6t,yq6lorbx) + rbne6ouj(ayfnwr1 *v,urohxe6t) * g9fvdrbw(2,1) 23072 continue 23073 continue call ybnagt8k(ayfnwr1v, yq6lorbx, 0, g9fvdrbw, osiz4fxy, wmat, 2, *2, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) call ybnagt8k(ayfnwr1v, yq6lorbx, 1, g9fvdrbw, osiz4fxy, wmat, 2, *3, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) call ybnagt8k(ayfnwr1v, yq6lorbx, 2, g9fvdrbw, osiz4fxy, wmat, 2, *4, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) yq6lorbx= dqlr5bse-4+3 do23074 urohxe6t=1,wy1vqfzu btwy(urohxe6t,yq6lorbx)=btwy(urohxe6t,yq6lorbx) + rbne6ouj(ayfnwr1 *v,urohxe6t) * g9fvdrbw(3,1) 23074 continue 23075 continue call ybnagt8k(ayfnwr1v, yq6lorbx, 0, g9fvdrbw, osiz4fxy, wmat, 3, *3, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) call ybnagt8k(ayfnwr1v, yq6lorbx, 1, g9fvdrbw, osiz4fxy, wmat, 3, *4, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) yq6lorbx= dqlr5bse-4+4 do23076 urohxe6t=1,wy1vqfzu btwy(urohxe6t,yq6lorbx)=btwy(urohxe6t,yq6lorbx) + rbne6ouj(ayfnwr1 *v,urohxe6t) * g9fvdrbw(4,1) 23076 continue 23077 continue call ybnagt8k(ayfnwr1v, yq6lorbx, 0, g9fvdrbw, osiz4fxy, wmat, 4, *4, wy1vqfzu, ldk, dimw, kuzxj1lo, nk, tgiyxdw1, dufozmt7) 23064 continue 23065 continue call zosq7hub(sgdub(1,1), sgdub(1,2), sgdub(1,3), sgdub(1,4), gkdx *5jal, nk) call tfeswo7c(osiz4fxy, nk, wy1vqfzu, ldk, wbkq9zyi, sgdub) call vdpbfa7(osiz4fxy, ldk, nk*wy1vqfzu, ldk-1, info, sgdub) if(info .ne. 
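C Assembly of the penalized banded normal equations for the coupled
C (M > 1) fit: for each observation vinterv locates the knot
C interval, vbsplvd returns the four nonzero cubic B-spline basis
C values there, and ybnagt8k accumulates their weighted products
C into osiz4fxy.  zosq7hub builds the penalty bands, tfeswo7c adds
C spar times the penalty, and vdpbfa7/vdpbsl7 (banded Cholesky
C factor and solve, evidently in the style of LINPACK dpbfa/dpbsl)
C produce the B-spline coefficients rpyis2kc; vicb2 and icpd0omv
C then give the pointwise variances and leverages.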
0)then return endif call vdpbsl7(osiz4fxy, ldk, nk*wy1vqfzu, ldk-1, btwy, sgdub) i1loc = 0 do23080 ayfnwr1v=1,nk do23082 yq6lorbx=1,wy1vqfzu i1loc = i1loc + 1 rpyis2kc(ayfnwr1v,yq6lorbx) = btwy(yq6lorbx,ayfnwr1v) 23082 continue 23083 continue 23080 continue 23081 continue call cn8kzpab(gkdx5jal, he7mqnvy, rpyis2kc, kuzxj1lo, nk, wy1vqfzu *, t8hwvalr) call vicb2(osiz4fxy, osiz4fxy, sgdub, wkmm, ldk-1, nk*wy1vqfzu) call icpd0omv(osiz4fxy, he7mqnvy, gkdx5jal, ui8ysltq, ldk, kuzxj1l *o, nk, wy1vqfzu, yzoe1rsp, bmb, wkmm, wmat, ifys6woa, dimw, tgiyxd *w1, dufozmt7, truen) return end subroutine cn8kzpab(ankcghz2, he7mqnvy, rpyis2kc, kuzxj1lo, nk, wy *1vqfzu, t8hwvalr) implicit logical (a-z) integer kuzxj1lo, nk, wy1vqfzu double precision ankcghz2(nk+4), he7mqnvy(kuzxj1lo), rpyis2kc(nk,w *y1vqfzu), t8hwvalr(kuzxj1lo,wy1vqfzu) double precision chw8lzty integer ayfnwr1v, yq6lorbx, izero0, ifour4 izero0 = 0 ifour4 = 4 do23084 ayfnwr1v=1,kuzxj1lo chw8lzty = he7mqnvy(ayfnwr1v) do23086 yq6lorbx=1,wy1vqfzu call wbvalue(ankcghz2, rpyis2kc(1,yq6lorbx), nk, ifour4, chw8lzty, * izero0, t8hwvalr(ayfnwr1v,yq6lorbx)) 23086 continue 23087 continue 23084 continue 23085 continue return end subroutine vsuff9(kuzxj1lo,nef,ezlgm2up, he7mqnvy,tlgduey8,wmat, p *ygsw6ko,pasjmo8g,wbar,uwbar,wpasjmo8g, wy1vqfzu, dimw, dimu, tgiyx *dw1, dufozmt7, work, work2, hjm2ktyr, kgwmz4ip, iz2nbfjc, wuwbar, *dvhw1ulq) implicit logical (a-z) integer kuzxj1lo, nef, ezlgm2up(kuzxj1lo), wy1vqfzu, dimw, dimu, k *gwmz4ip, iz2nbfjc, wuwbar, dvhw1ulq, tgiyxdw1(*),dufozmt7(*) double precision he7mqnvy(kuzxj1lo), tlgduey8(kuzxj1lo,wy1vqfzu), *wmat(kuzxj1lo,dimw), pygsw6ko(nef), pasjmo8g(nef,wy1vqfzu), wbar(n *ef,*), uwbar(dimu,nef), wpasjmo8g(nef,wy1vqfzu), work(wy1vqfzu,wy1 *vqfzu+1), work2(kgwmz4ip,kgwmz4ip+1), hjm2ktyr(wy1vqfzu,kgwmz4ip) integer ayfnwr1v, yq6lorbx, gp1jxzuh, urohxe6t, bpvaqm5z, imk5wjxg integer oneint oneint = 1 if(iz2nbfjc .eq. 1)then if((dimu .ne. dimw) .or. (kgwmz4ip .ne. wy1vqfzu))then dvhw1ulq = 0 return endif endif imk5wjxg = wy1vqfzu * (wy1vqfzu+1) / 2 if(dimw .gt. imk5wjxg)then endif call qpsedg8xf(tgiyxdw1, dufozmt7, wy1vqfzu) do23094 ayfnwr1v=1,kuzxj1lo pygsw6ko(ezlgm2up(ayfnwr1v))=he7mqnvy(ayfnwr1v) 23094 continue 23095 continue do23096 yq6lorbx=1,wy1vqfzu do23098 ayfnwr1v=1,nef wpasjmo8g(ayfnwr1v,yq6lorbx) = 0.0d0 23098 continue 23099 continue 23096 continue 23097 continue do23100 yq6lorbx=1,dimw do23102 ayfnwr1v=1,nef wbar(ayfnwr1v,yq6lorbx) = 0.0d0 23102 continue 23103 continue 23100 continue 23101 continue if(dimw .ne. imk5wjxg)then do23106 gp1jxzuh=1,wy1vqfzu do23108 yq6lorbx=1,wy1vqfzu work(yq6lorbx,gp1jxzuh) = 0.0d0 23108 continue 23109 continue 23106 continue 23107 continue endif do23110 ayfnwr1v=1,kuzxj1lo do23112 yq6lorbx=1,dimw work(tgiyxdw1(yq6lorbx),dufozmt7(yq6lorbx)) = wmat(ayfnwr1v,yq6lor *bx) work(dufozmt7(yq6lorbx),tgiyxdw1(yq6lorbx)) = work(tgiyxdw1(yq6lor *bx),dufozmt7(yq6lorbx)) 23112 continue 23113 continue do23114 yq6lorbx=1,wy1vqfzu do23116 gp1jxzuh=1,wy1vqfzu wpasjmo8g(ezlgm2up(ayfnwr1v),yq6lorbx) = wpasjmo8g(ezlgm2up(ayfnwr *1v),yq6lorbx) + work(yq6lorbx,gp1jxzuh)*tlgduey8(ayfnwr1v,gp1jxzuh *) 23116 continue 23117 continue 23114 continue 23115 continue do23118 yq6lorbx=1,dimw wbar(ezlgm2up(ayfnwr1v),yq6lorbx) = wbar(ezlgm2up(ayfnwr1v),yq6lor *bx) + wmat(ayfnwr1v,yq6lorbx) 23118 continue 23119 continue 23110 continue 23111 continue dvhw1ulq = 1 if(iz2nbfjc .eq. 
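C vsuff9 collapses the data to sufficient statistics at the nef
C distinct x values: for each group index ezlgm2up(i) it sums the
C M x M weight matrices into wbar and the weighted responses into
C wpasjmo8g; the branch being tested here then Cholesky-factors
C each accumulated weight matrix (vcholf) to form the working
C response pasjmo8g (and optionally the factors uwbar), while the
C else-branch first contracts everything through the constraint
C matrix hjm2ktyr when iz2nbfjc is not 1.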
1)then do23122 ayfnwr1v=1,nef do23124 yq6lorbx=1,dimw work(tgiyxdw1(yq6lorbx),dufozmt7(yq6lorbx)) = wbar(ayfnwr1v,yq6lor *bx) work(dufozmt7(yq6lorbx),tgiyxdw1(yq6lorbx)) = work(tgiyxdw1(yq6lor *bx),dufozmt7(yq6lorbx)) 23124 continue 23125 continue do23126 yq6lorbx=1,wy1vqfzu work(yq6lorbx,wy1vqfzu+1)=wpasjmo8g(ayfnwr1v,yq6lorbx) 23126 continue 23127 continue call vcholf(work, work(1,wy1vqfzu+1), wy1vqfzu, dvhw1ulq, oneint) if(dvhw1ulq .ne. 1)then return endif if(wuwbar .ne. 0)then do23132 yq6lorbx=1,dimw uwbar(yq6lorbx,ayfnwr1v) = work(tgiyxdw1(yq6lorbx),dufozmt7(yq6lor *bx)) 23132 continue 23133 continue endif do23134 yq6lorbx=1,wy1vqfzu pasjmo8g(ayfnwr1v,yq6lorbx)=work(yq6lorbx,wy1vqfzu+1) 23134 continue 23135 continue 23122 continue 23123 continue else if(dimw .ne. imk5wjxg)then do23138 yq6lorbx=1,wy1vqfzu do23140 gp1jxzuh=1,wy1vqfzu work(yq6lorbx,gp1jxzuh) = 0.0d0 23140 continue 23141 continue 23138 continue 23139 continue endif do23142 ayfnwr1v=1,nef call qpsedg8xf(tgiyxdw1, dufozmt7, wy1vqfzu) do23144 yq6lorbx=1,dimw work(tgiyxdw1(yq6lorbx),dufozmt7(yq6lorbx)) = wbar(ayfnwr1v,yq6lor *bx) work(dufozmt7(yq6lorbx),tgiyxdw1(yq6lorbx)) = work(tgiyxdw1(yq6lor *bx),dufozmt7(yq6lorbx)) 23144 continue 23145 continue do23146 yq6lorbx=1,wy1vqfzu work(yq6lorbx,wy1vqfzu+1)=wpasjmo8g(ayfnwr1v,yq6lorbx) 23146 continue 23147 continue do23148 yq6lorbx=1,kgwmz4ip do23150 gp1jxzuh=yq6lorbx,kgwmz4ip work2(yq6lorbx,gp1jxzuh) = 0.0d0 do23152 urohxe6t=1,wy1vqfzu do23154 bpvaqm5z=1,wy1vqfzu work2(yq6lorbx,gp1jxzuh) = work2(yq6lorbx,gp1jxzuh) + hjm2ktyr(uro *hxe6t,yq6lorbx) * work(urohxe6t,bpvaqm5z) * hjm2ktyr(bpvaqm5z,gp1j *xzuh) 23154 continue 23155 continue 23152 continue 23153 continue 23150 continue 23151 continue 23148 continue 23149 continue call qpsedg8xf(tgiyxdw1, dufozmt7, kgwmz4ip) do23156 yq6lorbx=1,dimu wbar(ayfnwr1v,yq6lorbx) = work2(tgiyxdw1(yq6lorbx),dufozmt7(yq6lor *bx)) 23156 continue 23157 continue do23158 yq6lorbx=1,kgwmz4ip work2(yq6lorbx,kgwmz4ip+1) = 0.0d0 do23160 urohxe6t=1,wy1vqfzu work2(yq6lorbx,kgwmz4ip+1) = work2(yq6lorbx,kgwmz4ip+1) + hjm2ktyr *(urohxe6t,yq6lorbx) * work(urohxe6t,wy1vqfzu+1) 23160 continue 23161 continue 23158 continue 23159 continue do23162 yq6lorbx=1,kgwmz4ip wpasjmo8g(ayfnwr1v,yq6lorbx) = work2(yq6lorbx,kgwmz4ip+1) 23162 continue 23163 continue call vcholf(work2, work2(1,kgwmz4ip+1), kgwmz4ip, dvhw1ulq, oneint *) if(dvhw1ulq .ne. 1)then return endif if(wuwbar .ne. 0)then do23168 yq6lorbx=1,dimu uwbar(yq6lorbx,ayfnwr1v) = work2(tgiyxdw1(yq6lorbx),dufozmt7(yq6lo *rbx)) 23168 continue 23169 continue endif do23170 yq6lorbx=1,kgwmz4ip pasjmo8g(ayfnwr1v,yq6lorbx) = work2(yq6lorbx,kgwmz4ip+1) 23170 continue 23171 continue 23142 continue 23143 continue endif return end subroutine icpd0omv(enaqpzk9, he7mqnvy, gkdx5jal, grmuyvx9, ldk, k *uzxj1lo, nk, wy1vqfzu, jzwsy6tp, bmb, work, wmat, ifys6woa, dimw, *tgiyxdw1, dufozmt7, truen) implicit logical (a-z) integer ldk, kuzxj1lo, nk, wy1vqfzu, jzwsy6tp, dimw, tgiyxdw1(*), *dufozmt7(*), truen double precision enaqpzk9(ldk,nk*wy1vqfzu), he7mqnvy(kuzxj1lo), gk *dx5jal(nk+4), grmuyvx9(truen,wy1vqfzu), bmb(wy1vqfzu,wy1vqfzu), wo *rk(wy1vqfzu,wy1vqfzu), wmat(kuzxj1lo,dimw), ifys6woa(kuzxj1lo,wy1v *qfzu) integer ayfnwr1v, yq6lorbx, gp1jxzuh, dqlr5bse, pqzfxw4i, urohxe6t *, bpvaqm5z double precision qaltf0nz, ms0qypiw(16), g9fvdrbw(4,1) if(jzwsy6tp .ne. 
0)then do23174 gp1jxzuh=1,wy1vqfzu do23176 ayfnwr1v=1,kuzxj1lo grmuyvx9(ayfnwr1v,gp1jxzuh) = 0.0d0 23176 continue 23177 continue 23174 continue 23175 continue endif qaltf0nz = 0.10d-9 call qpsedg8xf(tgiyxdw1, dufozmt7, wy1vqfzu) do23178 ayfnwr1v=1,kuzxj1lo do23180 yq6lorbx=1,wy1vqfzu do23182 gp1jxzuh=1,wy1vqfzu bmb(yq6lorbx,gp1jxzuh)=0.0d0 23182 continue 23183 continue 23180 continue 23181 continue call vinterv(gkdx5jal(1), (nk+1), he7mqnvy(ayfnwr1v), dqlr5bse, pq *zfxw4i) if(pqzfxw4i.eq. 1)then if(he7mqnvy(ayfnwr1v) .le. (gkdx5jal(dqlr5bse)+qaltf0nz))then dqlr5bse=dqlr5bse-1 else return endif endif call vbsplvd(gkdx5jal, 4, he7mqnvy(ayfnwr1v), dqlr5bse, ms0qypiw, *g9fvdrbw, 1) yq6lorbx= dqlr5bse-4+1 do23188 urohxe6t=yq6lorbx,yq6lorbx+3 call vsel(urohxe6t, urohxe6t, wy1vqfzu, nk, ldk, enaqpzk9, work) call o0xlszqr(wy1vqfzu, g9fvdrbw(urohxe6t-yq6lorbx+1,1) * g9fvdrbw *(urohxe6t-yq6lorbx+1,1), work, bmb) 23188 continue 23189 continue do23190 urohxe6t=yq6lorbx,yq6lorbx+3 do23192 bpvaqm5z=urohxe6t+1,yq6lorbx+3 call vsel(urohxe6t, bpvaqm5z, wy1vqfzu, nk, ldk, enaqpzk9, work) call o0xlszqr(wy1vqfzu, 2.0d0 * g9fvdrbw(urohxe6t-yq6lorbx+1,1) * *g9fvdrbw(bpvaqm5z-yq6lorbx+1,1), work, bmb) 23192 continue 23193 continue 23190 continue 23191 continue if(jzwsy6tp .ne. 0)then do23196 yq6lorbx=1,wy1vqfzu grmuyvx9(ayfnwr1v,yq6lorbx) = bmb(yq6lorbx,yq6lorbx) 23196 continue 23197 continue endif call ovjnsmt2(bmb, wmat, work, ifys6woa, wy1vqfzu, kuzxj1lo, dimw, * tgiyxdw1, dufozmt7, ayfnwr1v) 23178 continue 23179 continue return end subroutine o0xlszqr(wy1vqfzu, g9fvdrbw, work, bmb) implicit logical (a-z) integer wy1vqfzu double precision g9fvdrbw, work(wy1vqfzu,wy1vqfzu), bmb(wy1vqfzu,w *y1vqfzu) integer yq6lorbx, gp1jxzuh do23198 yq6lorbx=1,wy1vqfzu do23200 gp1jxzuh=1,wy1vqfzu work(yq6lorbx,gp1jxzuh) = work(yq6lorbx,gp1jxzuh) * g9fvdrbw 23200 continue 23201 continue 23198 continue 23199 continue do23202 yq6lorbx=1,wy1vqfzu do23204 gp1jxzuh=1,wy1vqfzu bmb(gp1jxzuh,yq6lorbx) = bmb(gp1jxzuh,yq6lorbx) + work(gp1jxzuh,yq *6lorbx) 23204 continue 23205 continue 23202 continue 23203 continue return end subroutine vsel(s, t, wy1vqfzu, nk, ldk, minv, work) implicit logical (a-z) integer s, t, wy1vqfzu, nk, ldk double precision minv(ldk,nk*wy1vqfzu), work(wy1vqfzu,wy1vqfzu) integer ayfnwr1v, yq6lorbx, biuvowq2, nbj8tdsk do23206 ayfnwr1v=1,wy1vqfzu do23208 yq6lorbx=1,wy1vqfzu work(ayfnwr1v,yq6lorbx) = 0.0d0 23208 continue 23209 continue 23206 continue 23207 continue if(s .ne. 
t)then do23212 ayfnwr1v=1,wy1vqfzu biuvowq2 = (s-1)*wy1vqfzu + ayfnwr1v do23214 yq6lorbx=1,wy1vqfzu nbj8tdsk = (t-1)*wy1vqfzu + yq6lorbx work(ayfnwr1v,yq6lorbx) = minv(ldk-(nbj8tdsk-biuvowq2), nbj8tdsk) 23214 continue 23215 continue 23212 continue 23213 continue else do23216 ayfnwr1v=1,wy1vqfzu biuvowq2 = (s-1)*wy1vqfzu + ayfnwr1v do23218 yq6lorbx=ayfnwr1v,wy1vqfzu nbj8tdsk = (t-1)*wy1vqfzu + yq6lorbx work(ayfnwr1v,yq6lorbx) = minv(ldk-(nbj8tdsk-biuvowq2), nbj8tdsk) 23218 continue 23219 continue 23216 continue 23217 continue do23220 ayfnwr1v=1,wy1vqfzu do23222 yq6lorbx=ayfnwr1v+1,wy1vqfzu work(yq6lorbx,ayfnwr1v) = work(ayfnwr1v,yq6lorbx) 23222 continue 23223 continue 23220 continue 23221 continue endif return end subroutine ovjnsmt2(bmb, wmat, work, ifys6woa, wy1vqfzu, kuzxj1lo, * dimw, tgiyxdw1, dufozmt7, iii) implicit logical (a-z) integer wy1vqfzu, kuzxj1lo, dimw, tgiyxdw1(*), dufozmt7(*), iii double precision bmb(wy1vqfzu,wy1vqfzu), wmat(kuzxj1lo,dimw), work *(wy1vqfzu,wy1vqfzu), ifys6woa(kuzxj1lo,wy1vqfzu) double precision q6zdcwxk, obr6tcex integer yq6lorbx, gp1jxzuh, urohxe6t, bpvaqm5z do23224 bpvaqm5z=1,wy1vqfzu do23226 yq6lorbx=1,wy1vqfzu do23228 gp1jxzuh=1,wy1vqfzu work(gp1jxzuh,yq6lorbx) = 0.0d0 23228 continue 23229 continue 23226 continue 23227 continue do23230 urohxe6t=1,dimw obr6tcex = wmat(iii,urohxe6t) work(tgiyxdw1(urohxe6t),dufozmt7(urohxe6t)) = obr6tcex work(dufozmt7(urohxe6t),tgiyxdw1(urohxe6t)) = obr6tcex 23230 continue 23231 continue q6zdcwxk = 0.0d0 do23232 yq6lorbx=1,wy1vqfzu q6zdcwxk = q6zdcwxk + bmb(bpvaqm5z,yq6lorbx) * work(yq6lorbx,bpvaq *m5z) 23232 continue 23233 continue ifys6woa(iii,bpvaqm5z) = q6zdcwxk 23224 continue 23225 continue return end subroutine vicb2(enaqpzk9, wpuarq2m, d, uu, wy1vqfzu, kuzxj1lo) implicit logical (a-z) integer wy1vqfzu, kuzxj1lo double precision enaqpzk9(wy1vqfzu+1,kuzxj1lo), wpuarq2m(wy1vqfzu+ *1,kuzxj1lo), d(kuzxj1lo), uu(wy1vqfzu+1,wy1vqfzu+1) integer ayfnwr1v, gp1jxzuh, lsvdbx3tk, uplim, sedf7mxb, hofjnx2e, *kij0gwer enaqpzk9(wy1vqfzu+1,kuzxj1lo) = 1.0d0 / d(kuzxj1lo) hofjnx2e = wy1vqfzu+1 sedf7mxb = kuzxj1lo+1 - hofjnx2e do23234 kij0gwer=sedf7mxb,kuzxj1lo do23236 ayfnwr1v=1,hofjnx2e uu(ayfnwr1v, kij0gwer-sedf7mxb+1) = wpuarq2m(ayfnwr1v, kij0gwer) 23236 continue 23237 continue 23234 continue 23235 continue ayfnwr1v = kuzxj1lo-1 23238 if(.not.(ayfnwr1v .ge. 1))goto 23240 if(wy1vqfzu .lt. kuzxj1lo-ayfnwr1v)then uplim = wy1vqfzu else uplim = kuzxj1lo-ayfnwr1v endif lsvdbx3tk=1 23243 if(.not.(lsvdbx3tk .le. uplim))goto 23245 enaqpzk9(-lsvdbx3tk+wy1vqfzu+1,ayfnwr1v+lsvdbx3tk) = 0.0d0 gp1jxzuh=1 23246 if(.not.(gp1jxzuh .le. lsvdbx3tk))goto 23248 enaqpzk9(-lsvdbx3tk+wy1vqfzu+1,ayfnwr1v+lsvdbx3tk) = enaqpzk9(-lsv *dbx3tk+wy1vqfzu+1,ayfnwr1v+lsvdbx3tk) - uu(-gp1jxzuh+wy1vqfzu+1,ay *fnwr1v+gp1jxzuh -sedf7mxb+1) * enaqpzk9(gp1jxzuh-lsvdbx3tk+wy1vqfz *u+1,ayfnwr1v+lsvdbx3tk) 23247 gp1jxzuh=gp1jxzuh+1 goto 23246 23248 continue 23249 if(.not.(gp1jxzuh .le. uplim))goto 23251 enaqpzk9(-lsvdbx3tk+wy1vqfzu+1,ayfnwr1v+lsvdbx3tk) = enaqpzk9(-lsv *dbx3tk+wy1vqfzu+1,ayfnwr1v+lsvdbx3tk) - uu(-gp1jxzuh+wy1vqfzu+1,ay *fnwr1v+gp1jxzuh -sedf7mxb+1) * enaqpzk9(lsvdbx3tk-gp1jxzuh+wy1vqfz *u+1,ayfnwr1v+gp1jxzuh) 23250 gp1jxzuh=gp1jxzuh+1 goto 23249 23251 continue 23244 lsvdbx3tk=lsvdbx3tk+1 goto 23243 23245 continue enaqpzk9(wy1vqfzu+1,ayfnwr1v) = 1.0d0 / d(ayfnwr1v) lsvdbx3tk = 1 23252 if(.not.(lsvdbx3tk .le. 
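C vicb2 evidently computes the central band of the inverse of the
C banded system from its Cholesky factors by backward recursion:
C starting from 1/d at the last pivot, each stored band element of
C the inverse is assembled from already-computed elements below and
C to the right (the loop interrupted here), which is exactly the
C part of the inverse needed for the pointwise variances.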
uplim))goto 23254 enaqpzk9(wy1vqfzu+1,ayfnwr1v) = enaqpzk9(wy1vqfzu+1,ayfnwr1v) - uu *(-lsvdbx3tk+wy1vqfzu+1,ayfnwr1v+lsvdbx3tk -sedf7mxb+1) * enaqpzk9( *-lsvdbx3tk+wy1vqfzu+1,ayfnwr1v+lsvdbx3tk) 23253 lsvdbx3tk=lsvdbx3tk+1 goto 23252 23254 continue if(ayfnwr1v .eq. sedf7mxb)then sedf7mxb = sedf7mxb-1 if(sedf7mxb .lt. 1)then sedf7mxb = 1 else kij0gwer=hofjnx2e-1 23259 if(.not.(kij0gwer .ge. 1))goto 23261 gp1jxzuh=1 23262 if(.not.(gp1jxzuh .le. hofjnx2e))goto 23264 uu(gp1jxzuh,kij0gwer+1) = uu(gp1jxzuh,kij0gwer) 23263 gp1jxzuh=gp1jxzuh+1 goto 23262 23264 continue 23260 kij0gwer=kij0gwer-1 goto 23259 23261 continue gp1jxzuh=1 23265 if(.not.(gp1jxzuh .le. hofjnx2e))goto 23267 uu(gp1jxzuh,1) = wpuarq2m(gp1jxzuh,sedf7mxb) 23266 gp1jxzuh=gp1jxzuh+1 goto 23265 23267 continue endif endif 23239 ayfnwr1v = ayfnwr1v-1 goto 23238 23240 continue return end subroutine ewg7qruh(sjwyig9tto,tlgduey8,wmat, kuzxj1lo,wy1vqfzu,ez *lgm2up,nef, wbkq9zyi,dof,smo,cov, s0, xin,yin,rbne6ouj,win, work1, *work3, dimw, fbd5yktj, ldk, info, yzoe1rsp, sgdub, rpyis2kc, zv2xf *hei, acpios9q,tgiyxdw1,dufozmt7, bmb, ifys6woa, wkmm, iz2nbfjc,kgw *mz4ip,ges1xpkr, hjm2ktyr, beta, fasrkub3, sout, r0oydcxb, ub4xioar *, effect, uwin) implicit logical (a-z) integer kuzxj1lo,wy1vqfzu,ezlgm2up(kuzxj1lo),nef, dimw, fbd5yktj, *ldk, info, yzoe1rsp, acpios9q,tgiyxdw1(*),dufozmt7(*), iz2nbfjc, k *gwmz4ip, ges1xpkr(kgwmz4ip*2) double precision sjwyig9tto(kuzxj1lo), tlgduey8(kuzxj1lo,wy1vqfzu) *, wmat(kuzxj1lo,dimw), wbkq9zyi(kgwmz4ip), dof(kgwmz4ip), smo(kuzx *j1lo,kgwmz4ip), cov(kuzxj1lo,kgwmz4ip) double precision s0(2*kgwmz4ip, 2*kgwmz4ip,2) double precision work1(*), work3(*), sgdub(*), rpyis2kc(*), zv2xfh *ei(acpios9q+4) double precision xin(nef), yin(nef,wy1vqfzu), rbne6ouj(nef,wy1vqfz *u), win(nef,*), bmb(*), ifys6woa(nef,kgwmz4ip), wkmm(wy1vqfzu,wy1v *qfzu,16), hjm2ktyr(wy1vqfzu,kgwmz4ip) double precision beta(2*kgwmz4ip), fasrkub3(2*kgwmz4ip), sout(nef, *kgwmz4ip), r0oydcxb(kgwmz4ip,nef), ub4xioar(kgwmz4ip,nef), effect( *nef*kgwmz4ip), uwin(*) integer dimwin integer ayfnwr1v, yq6lorbx, gp1jxzuh, rutyk8mg, xjc4ywlh, job, qem *j9asg, dvhw1ulq integer oneint double precision xmin, xrange, pvofyg8z oneint = 1 if(iz2nbfjc .eq. 1)then dimwin = dimw else dimwin = kgwmz4ip*(kgwmz4ip+1)/2 endif call qpsedg8xf(tgiyxdw1, dufozmt7, wy1vqfzu) call vsuff9(kuzxj1lo,nef,ezlgm2up, sjwyig9tto,tlgduey8,wmat, xin,y *in,win,uwin,rbne6ouj, wy1vqfzu, dimw, dimwin, tgiyxdw1, dufozmt7, *wkmm, wkmm(1,1,3), hjm2ktyr, kgwmz4ip, iz2nbfjc, oneint, dvhw1ulq) if(dvhw1ulq .ne. 1)then return endif xmin = xin(1) xrange = xin(nef)-xin(1) do23272 ayfnwr1v=1,nef xin(ayfnwr1v) = (xin(ayfnwr1v)-xmin)/xrange 23272 continue 23273 continue ldk = 4*kgwmz4ip fbd5yktj = 0 do23274 yq6lorbx=1,kgwmz4ip if(wbkq9zyi(yq6lorbx) .eq. 0.0d0)then dof(yq6lorbx) = dof(yq6lorbx) + 1.0d0 endif 23274 continue 23275 continue call qpsedg8xf(tgiyxdw1, dufozmt7, kgwmz4ip) call vsplin(xin,rbne6ouj,win,nef,zv2xfhei, acpios9q,ldk,kgwmz4ip,d *imwin, tgiyxdw1,dufozmt7, wkmm, wbkq9zyi, info, sout, rpyis2kc, wo *rk3(1), work3(1+acpios9q*kgwmz4ip*ldk), sgdub, cov, yzoe1rsp, bmb, * ifys6woa, dof, work1, fbd5yktj, kuzxj1lo) do23278 yq6lorbx=1,kgwmz4ip dof(yq6lorbx) = -1.0d0 do23280 ayfnwr1v=1,nef dof(yq6lorbx)=dof(yq6lorbx)+ifys6woa(ayfnwr1v,yq6lorbx) 23280 continue 23281 continue 23278 continue 23279 continue if(kgwmz4ip .ge. 
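C ewg7qruh is the per-term workhorse (mirrored in C by
C fapc0tnbewg7qruh): collapse to distinct x values with vsuff9,
C rescale x to [0,1], fit the vector spline with vsplin, recompute
C each effective degrees of freedom as the leverage sum minus one,
C and then - in the branch being tested - strip the weighted linear
C component out of the smooth with a QR step so the spline keeps
C only its nonlinear part (dsrt0gem handles the degenerate case).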
1)then pvofyg8z = 1.0d-7 rutyk8mg = nef*kgwmz4ip xjc4ywlh = 2*kgwmz4ip job = 101 info = 1 call x6kanjdh(xin, work3, nef, kgwmz4ip) call qpsedg8xf(tgiyxdw1, dufozmt7, kgwmz4ip) call mux17f(uwin, work3, kgwmz4ip, xjc4ywlh, nef, wkmm(1,1,1), wkm *m(1,1,2), tgiyxdw1, dufozmt7, dimwin, rutyk8mg) do23284 gp1jxzuh=1,xjc4ywlh ges1xpkr(gp1jxzuh) = gp1jxzuh 23284 continue 23285 continue call vqrdca(work3,rutyk8mg,rutyk8mg,xjc4ywlh,fasrkub3,ges1xpkr,wor *k1,qemj9asg,pvofyg8z) call qpsedg8xf(tgiyxdw1, dufozmt7, kgwmz4ip) call mux22f(uwin,sout,r0oydcxb,dimwin,tgiyxdw1,dufozmt7,nef,kgwmz4 *ip,wkmm) call vdqrsl(work3,rutyk8mg,rutyk8mg,qemj9asg,fasrkub3,r0oydcxb,wor *k1(1),effect,beta, work1(1),ub4xioar,job,info) call vbksf(uwin,ub4xioar,kgwmz4ip,nef,wkmm,tgiyxdw1,dufozmt7,dimwi *n) if(yzoe1rsp .ne. 0)then call vrinvf9(work3, rutyk8mg, xjc4ywlh, dvhw1ulq, s0(1,1,1), s0(1, *1,2)) if(dvhw1ulq .ne. 1)then return endif do23290 yq6lorbx=1,kgwmz4ip do23292 ayfnwr1v=1,nef cov(ayfnwr1v,yq6lorbx) = cov(ayfnwr1v,yq6lorbx) - s0(yq6lorbx,yq6l *orbx,1) - xin(ayfnwr1v) * (2.0d0 * s0(yq6lorbx,yq6lorbx+kgwmz4ip,1 *) + xin(ayfnwr1v) * s0(yq6lorbx+kgwmz4ip,yq6lorbx+kgwmz4ip,1)) 23292 continue 23293 continue 23290 continue 23291 continue endif else call dsrt0gem(nef, xin, win, sout, ub4xioar, cov, yzoe1rsp) endif do23294 ayfnwr1v=1,nef do23296 yq6lorbx=1,kgwmz4ip sout(ayfnwr1v,yq6lorbx) = sout(ayfnwr1v,yq6lorbx) - ub4xioar(yq6lo *rbx,ayfnwr1v) 23296 continue 23297 continue 23294 continue 23295 continue do23298 yq6lorbx=1,kgwmz4ip call shm8ynte(kuzxj1lo, nef, ezlgm2up, sout(1,yq6lorbx), smo(1,yq6 *lorbx)) 23298 continue 23299 continue return end subroutine vbfa( n,wy1vqfzu,psdvgce3, he7mqnvy,tlgduey8,wmat,wbkq9 *zyi,dof, ezlgm2up,nef,which, ub4xioar,kispwgx3,m0ibglfx,s0, beta,c *ov,zpcqv3uj, vc6hatuj,fasrkub3, ges1xpkr, xbig, wpuarq2m, hjm2ktyr *, jnxpuym2, hnpt1zym, fzm1ihwj, iz2nbfjc, work1, wk2, wkmm, work3, * sgdub, bmb, ifys6woa, mwk, twk, rpyis2kc, zv2xfhei, resss, nbzjkp *i3, acpios9q, itwk, jwbkl9fp) implicit logical (a-z) integer irhm4cfa, n, wy1vqfzu, psdvgce3(15), ezlgm2up(*),nef(*),wh *ich(*), ges1xpkr(*) integer jnxpuym2(*), hnpt1zym(*), fzm1ihwj(*), iz2nbfjc(*), nbzjkp *i3(*), acpios9q(*), itwk(*), jwbkl9fp(*) double precision he7mqnvy(*),tlgduey8(*),wmat(*),wbkq9zyi(*),dof(* *), ub4xioar(*),kispwgx3(*), m0ibglfx(*), s0(wy1vqfzu), beta(*),cov *(*),zpcqv3uj, vc6hatuj(*),fasrkub3(*) double precision xbig(*), wpuarq2m(*), hjm2ktyr(*), work1(*), wk2( *n,wy1vqfzu,3), wkmm(wy1vqfzu,wy1vqfzu,16), work3(*), sgdub(*), bmb *(*), ifys6woa(*), mwk(*), twk(*), rpyis2kc(*), zv2xfhei(*), resss integer p,q,yzoe1rsp,niter,gtrlbz3e, rutyk8mg, xjc4ywlh, lyma1kwc, * dimw, dimu, fbd5yktj,ldk integer iter integer xs4wtvlg integer ayfnwr1v, imk5wjxg, qemj9asg irhm4cfa = 0 imk5wjxg = wy1vqfzu*(wy1vqfzu+1)/2 p=psdvgce3(2) q=psdvgce3(3) yzoe1rsp= 0 if(psdvgce3(4) .eq. 1)then yzoe1rsp = 1 endif gtrlbz3e=psdvgce3(6) qemj9asg=psdvgce3(7) rutyk8mg=psdvgce3(9) xjc4ywlh=psdvgce3(10) lyma1kwc=psdvgce3(11) dimw=psdvgce3(12) dimu=psdvgce3(13) fbd5yktj = 0 ldk=psdvgce3(15) xs4wtvlg = 1 if(lyma1kwc .gt. 0)then do23304 ayfnwr1v=1,lyma1kwc work1(ayfnwr1v) = dof(ayfnwr1v) work1(ayfnwr1v+lyma1kwc) = wbkq9zyi(ayfnwr1v) work1(ayfnwr1v+2*lyma1kwc) = dof(ayfnwr1v) 23304 continue 23305 continue endif iter = 0 23306 if(xs4wtvlg .ne. 0)then iter = iter+1 if(iter .gt. 1)then if(lyma1kwc .gt. 0)then do23312 ayfnwr1v=1,lyma1kwc if(work1(ayfnwr1v+lyma1kwc).eq.0.0d0 .and. 
(dabs(work1(ayfnwr1v+2* *lyma1kwc)-dof(ayfnwr1v))/dof(ayfnwr1v).gt.0.05d0))then work1(ayfnwr1v+2*lyma1kwc) = dof(ayfnwr1v) dof(ayfnwr1v)=work1(ayfnwr1v) wbkq9zyi(ayfnwr1v)=0.0d0 else work1(ayfnwr1v+2*lyma1kwc) = dof(ayfnwr1v) endif 23312 continue 23313 continue endif endif call vbfa1(irhm4cfa,n,wy1vqfzu, he7mqnvy,tlgduey8,wmat,wbkq9zyi,do *f, ezlgm2up,nef,which, ub4xioar,kispwgx3,m0ibglfx,s0, beta,cov,zpc *qv3uj, vc6hatuj,fasrkub3, qemj9asg,ges1xpkr, xbig, wpuarq2m, hjm2k *tyr, jnxpuym2, hnpt1zym, fzm1ihwj(1), fzm1ihwj(1 + imk5wjxg), iz2n *bfjc, work1(1+3*lyma1kwc), wkmm, work3, sgdub, bmb, ifys6woa, mwk, * twk, rpyis2kc, zv2xfhei, resss, nbzjkpi3, acpios9q, itwk, jwbkl9f *p, p,q,yzoe1rsp,niter,gtrlbz3e, wk2(1,1,1), wk2(1,1,2), wk2(1,1,3) *, rutyk8mg, xjc4ywlh, lyma1kwc, dimw, dimu, fbd5yktj, ldk) if(irhm4cfa .ne. 0)then call vcall2(xs4wtvlg,w,y,m0ibglfx,beta,wpuarq2m) else xs4wtvlg = 0 endif if(xs4wtvlg .ne. 0)then qemj9asg=0 endif goto 23306 endif 23307 continue psdvgce3(7) = qemj9asg psdvgce3(5) = niter psdvgce3(14) = fbd5yktj return end subroutine vbfa1(irhm4cfa,kuzxj1lo,wy1vqfzu, he7mqnvy,tlgduey8,wma *t,wbkq9zyi,dof, ezlgm2up,nef,which, ub4xioar,kispwgx3,m0ibglfx,s0, * beta,cov,zpcqv3uj, vc6hatuj,fasrkub3, qemj9asg,ges1xpkr, xbig, wp *uarq2m, hjm2ktyr, jnxpuym2, hnpt1zym, tgiyxdw1, dufozmt7, iz2nbfjc *, work1, wkmm, work3, sgdub, bmb, ifys6woa, mwk, twk, rpyis2kc, zv *2xfhei, resss, nbzjkpi3, acpios9q, itwk, jwbkl9fp, p, q, yzoe1rsp, * niter, gtrlbz3e, ghz9vuba, oldmat, wk2, rutyk8mg, xjc4ywlh, lyma1 *kwc, dimw, dimu, fbd5yktj, ldk) implicit logical (a-z) integer qemj9asg integer dufozmt7(*), tgiyxdw1(*) integer p, q, yzoe1rsp, niter, gtrlbz3e, rutyk8mg, xjc4ywlh, lyma1 *kwc, dimw, dimu, fbd5yktj, ldk integer irhm4cfa, kuzxj1lo, wy1vqfzu, ezlgm2up(kuzxj1lo,q),nef(q), *which(q), ges1xpkr(xjc4ywlh) integer jnxpuym2(q), hnpt1zym(q), iz2nbfjc(q), nbzjkpi3(q+1), acpi *os9q(q), itwk(*), jwbkl9fp(q+1) double precision he7mqnvy(kuzxj1lo,p), tlgduey8(kuzxj1lo,wy1vqfzu) *, wmat(kuzxj1lo,dimw), wbkq9zyi(lyma1kwc), dof(lyma1kwc) double precision ub4xioar(wy1vqfzu,kuzxj1lo), kispwgx3(kuzxj1lo,ly *ma1kwc), m0ibglfx(wy1vqfzu,kuzxj1lo), s0(wy1vqfzu), beta(xjc4ywlh) *, cov(kuzxj1lo,lyma1kwc), zpcqv3uj, vc6hatuj(rutyk8mg,xjc4ywlh), f *asrkub3(xjc4ywlh) double precision xbig(rutyk8mg,xjc4ywlh), wpuarq2m(dimu,kuzxj1lo), * hjm2ktyr(wy1vqfzu,lyma1kwc), work1(*), wk2(kuzxj1lo,wy1vqfzu), wk *mm(wy1vqfzu,wy1vqfzu,16), work3(*), sgdub(*), bmb(*), ifys6woa(*), * mwk(*), twk(*), rpyis2kc(*), zv2xfhei(*), resss double precision ghz9vuba(kuzxj1lo,wy1vqfzu), oldmat(kuzxj1lo,wy1v *qfzu) integer job,info,nefk integer ayfnwr1v, yq6lorbx, gp1jxzuh, wg1xifdy double precision vo4mtexk, rd9beyfk,ratio, deltaf, z4vrscot,pvofyg *8z pvofyg8z = 1.0d-7 job = 101 info = 1 if(q .eq. 0)then gtrlbz3e = 1 endif if(irhm4cfa .ne. 0)then do23324 yq6lorbx=1,xjc4ywlh do23326 ayfnwr1v=1,rutyk8mg vc6hatuj(ayfnwr1v,yq6lorbx)=xbig(ayfnwr1v,yq6lorbx) 23326 continue 23327 continue 23324 continue 23325 continue endif if(qemj9asg.eq.0)then call qpsedg8xf(tgiyxdw1,dufozmt7,wy1vqfzu) call mux17f(wpuarq2m, vc6hatuj, wy1vqfzu, xjc4ywlh, kuzxj1lo, wkmm *(1,1,1), wkmm(1,1,2), tgiyxdw1, dufozmt7, dimu, rutyk8mg) do23330 gp1jxzuh=1,xjc4ywlh ges1xpkr(gp1jxzuh) = gp1jxzuh 23330 continue 23331 continue call vqrdca(vc6hatuj,rutyk8mg,rutyk8mg,xjc4ywlh,fasrkub3,ges1xpkr, *twk,qemj9asg,pvofyg8z) endif do23332 yq6lorbx=1,wy1vqfzu do23334 ayfnwr1v=1,kuzxj1lo m0ibglfx(yq6lorbx,ayfnwr1v)=0.0d0 23334 continue 23335 continue if(q .gt. 
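C vbfa1 (the Fortran twin of fapc0tnbvbfa1 in the C sources) first
C rebuilds the additive predictor m0ibglfx from the incoming smooth
C contributions in the branch opened here, and then iterates
C modified backfitting until ratio = dsqrt(deltaf/z4vrscot) drops
C below zpcqv3uj or gtrlbz3e passes are used; ratio is forced to 1
C after the first pass so at least two sweeps are made.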
0)then do23338 gp1jxzuh=1,q if(iz2nbfjc(gp1jxzuh).eq.1)then do23342 ayfnwr1v=1,kuzxj1lo m0ibglfx(yq6lorbx,ayfnwr1v) = m0ibglfx(yq6lorbx,ayfnwr1v) + kispwg *x3(ayfnwr1v,hnpt1zym(gp1jxzuh)+yq6lorbx-1) 23342 continue 23343 continue else do23344 wg1xifdy=1,jnxpuym2(gp1jxzuh) do23346 ayfnwr1v=1,kuzxj1lo m0ibglfx(yq6lorbx,ayfnwr1v) = m0ibglfx(yq6lorbx,ayfnwr1v) + hjm2kt *yr(yq6lorbx,hnpt1zym(gp1jxzuh)+wg1xifdy-1) * kispwgx3(ayfnwr1v,hnp *t1zym(gp1jxzuh)+wg1xifdy-1) 23346 continue 23347 continue 23344 continue 23345 continue endif 23338 continue 23339 continue endif 23332 continue 23333 continue niter = 0 ratio = 1.0d0 23348 if((ratio .gt. zpcqv3uj ) .and. (niter .lt. gtrlbz3e))then niter = niter + 1 deltaf = 0.0d0 do23350 yq6lorbx=1,wy1vqfzu do23352 ayfnwr1v=1,kuzxj1lo ghz9vuba(ayfnwr1v,yq6lorbx)=tlgduey8(ayfnwr1v,yq6lorbx)-m0ibglfx(y *q6lorbx,ayfnwr1v) 23352 continue 23353 continue 23350 continue 23351 continue call qpsedg8xf(tgiyxdw1,dufozmt7,wy1vqfzu) call mux22f(wpuarq2m,ghz9vuba, twk, dimu,tgiyxdw1,dufozmt7,kuzxj1l *o,wy1vqfzu,wkmm) call vdqrsl(vc6hatuj,rutyk8mg,rutyk8mg,qemj9asg,fasrkub3, twk, wk2 *,wk2, beta, wk2,ub4xioar,job,info) resss=0.0d0 do23354 ayfnwr1v=1,kuzxj1lo do23356 yq6lorbx=1,wy1vqfzu vo4mtexk = twk((ayfnwr1v-1)*wy1vqfzu+yq6lorbx) - ub4xioar(yq6lorbx *,ayfnwr1v) resss = resss + vo4mtexk * vo4mtexk 23356 continue 23357 continue 23354 continue 23355 continue call vbksf(wpuarq2m,ub4xioar,wy1vqfzu,kuzxj1lo,wkmm,tgiyxdw1,dufoz *mt7,dimu) if(q .gt. 0)then do23360 gp1jxzuh=1,q do23362 yq6lorbx=1,wy1vqfzu if(iz2nbfjc(gp1jxzuh).eq.1)then do23366 ayfnwr1v=1,kuzxj1lo oldmat(ayfnwr1v,yq6lorbx)=kispwgx3(ayfnwr1v,hnpt1zym(gp1jxzuh)+yq6 *lorbx-1) ghz9vuba(ayfnwr1v,yq6lorbx) = tlgduey8(ayfnwr1v,yq6lorbx) - ub4xio *ar(yq6lorbx,ayfnwr1v) - m0ibglfx(yq6lorbx,ayfnwr1v) + oldmat(ayfnw *r1v,yq6lorbx) 23366 continue 23367 continue else do23368 ayfnwr1v=1,kuzxj1lo oldmat(ayfnwr1v,yq6lorbx)=0.0d0 do23370 wg1xifdy=1,jnxpuym2(gp1jxzuh) oldmat(ayfnwr1v,yq6lorbx)=oldmat(ayfnwr1v,yq6lorbx) + hjm2ktyr(yq6 *lorbx,hnpt1zym(gp1jxzuh)+wg1xifdy-1) * kispwgx3(ayfnwr1v,hnpt1zym( *gp1jxzuh)+wg1xifdy-1) 23370 continue 23371 continue ghz9vuba(ayfnwr1v,yq6lorbx) = tlgduey8(ayfnwr1v,yq6lorbx) - ub4xio *ar(yq6lorbx,ayfnwr1v) - m0ibglfx(yq6lorbx,ayfnwr1v) + oldmat(ayfnw *r1v,yq6lorbx) 23368 continue 23369 continue endif 23362 continue 23363 continue nefk = nef(gp1jxzuh) call ewg7qruh(he7mqnvy(1,which(gp1jxzuh)),ghz9vuba,wmat, kuzxj1lo, *wy1vqfzu,ezlgm2up(1,gp1jxzuh),nefk, wbkq9zyi(hnpt1zym(gp1jxzuh)), *dof(hnpt1zym(gp1jxzuh)), kispwgx3(1,hnpt1zym(gp1jxzuh)), cov(1,hnp *t1zym(gp1jxzuh)), s0, mwk(1), mwk(1+nefk), mwk(1+nefk*(wy1vqfzu+1) *), mwk(1+nefk*(2*wy1vqfzu+1)), work1, work3, dimw, fbd5yktj, ldk, *info, yzoe1rsp, sgdub, rpyis2kc(nbzjkpi3(gp1jxzuh)), zv2xfhei(jwbk *l9fp(gp1jxzuh)), acpios9q(gp1jxzuh),tgiyxdw1, dufozmt7, bmb, ifys6 *woa, wkmm, iz2nbfjc(gp1jxzuh),jnxpuym2(gp1jxzuh),itwk, hjm2ktyr(1, *hnpt1zym(gp1jxzuh)), twk(1), twk(1+2*jnxpuym2(gp1jxzuh)), twk(1+4* *jnxpuym2(gp1jxzuh)), twk(1+(4+nefk)*jnxpuym2(gp1jxzuh)), twk(1+(4+ *2*nefk)*jnxpuym2(gp1jxzuh)), twk(1+(4+3*nefk)*jnxpuym2(gp1jxzuh)), * twk(1+(4+4*nefk)*jnxpuym2(gp1jxzuh))) do23372 yq6lorbx=1,wy1vqfzu if(iz2nbfjc(gp1jxzuh).eq.1)then do23376 ayfnwr1v=1,kuzxj1lo m0ibglfx(yq6lorbx,ayfnwr1v) = m0ibglfx(yq6lorbx,ayfnwr1v) + kispwg *x3(ayfnwr1v,hnpt1zym(gp1jxzuh)+yq6lorbx-1) 23376 continue 23377 continue else do23378 wg1xifdy=1,jnxpuym2(gp1jxzuh) do23380 ayfnwr1v=1,kuzxj1lo m0ibglfx(yq6lorbx,ayfnwr1v)=m0ibglfx(yq6lorbx,ayfnwr1v) + hjm2ktyr 
*(yq6lorbx,hnpt1zym(gp1jxzuh)+wg1xifdy-1) * kispwgx3(ayfnwr1v,hnpt1 *zym(gp1jxzuh)+wg1xifdy-1) 23380 continue 23381 continue 23378 continue 23379 continue endif do23382 ayfnwr1v=1,kuzxj1lo m0ibglfx(yq6lorbx,ayfnwr1v) = m0ibglfx(yq6lorbx,ayfnwr1v) - oldmat *(ayfnwr1v,yq6lorbx) 23382 continue 23383 continue 23372 continue 23373 continue do23384 yq6lorbx=1,wy1vqfzu if(iz2nbfjc(gp1jxzuh) .eq. 1)then deltaf = deltaf + rd9beyfk(kuzxj1lo,oldmat(1,yq6lorbx),kispwgx3(1, *hnpt1zym(gp1jxzuh)+yq6lorbx-1), wmat(1,yq6lorbx)) else do23388 ayfnwr1v=1,kuzxj1lo twk(ayfnwr1v) = 0.0d0 do23390 wg1xifdy=1,jnxpuym2(gp1jxzuh) twk(ayfnwr1v) = twk(ayfnwr1v) + hjm2ktyr(yq6lorbx,hnpt1zym(gp1jxzu *h)+wg1xifdy-1) * kispwgx3(ayfnwr1v,hnpt1zym(gp1jxzuh)+wg1xifdy-1) 23390 continue 23391 continue 23388 continue 23389 continue deltaf = deltaf + rd9beyfk(kuzxj1lo, oldmat(1,yq6lorbx), twk, wmat *(1,yq6lorbx)) endif 23384 continue 23385 continue do23392 yq6lorbx=1,wy1vqfzu do23394 ayfnwr1v=1,kuzxj1lo ghz9vuba(ayfnwr1v,yq6lorbx)=tlgduey8(ayfnwr1v,yq6lorbx)-m0ibglfx(y *q6lorbx,ayfnwr1v) 23394 continue 23395 continue 23392 continue 23393 continue call qpsedg8xf(tgiyxdw1,dufozmt7,wy1vqfzu) call mux22f(wpuarq2m,ghz9vuba, twk, dimu,tgiyxdw1,dufozmt7,kuzxj1l *o,wy1vqfzu,wkmm) call vdqrsl(vc6hatuj,rutyk8mg,rutyk8mg,qemj9asg,fasrkub3, twk, wk2 *,wk2, beta, wk2,ub4xioar,job,info) call vbksf(wpuarq2m,ub4xioar,wy1vqfzu,kuzxj1lo,wkmm,tgiyxdw1,dufoz *mt7,dimu) 23360 continue 23361 continue endif if(q .gt. 0)then z4vrscot=0.0d0 do23398 yq6lorbx=1,wy1vqfzu do23400 ayfnwr1v=1,kuzxj1lo z4vrscot = z4vrscot + wmat(ayfnwr1v,yq6lorbx) * m0ibglfx(yq6lorbx, *ayfnwr1v)**2 23400 continue 23401 continue 23398 continue 23399 continue if(z4vrscot .gt. 0.0d0)then ratio = dsqrt(deltaf/z4vrscot) else ratio = 0.0d0 endif endif if(niter .eq. 1)then ratio = 1.0d0 endif goto 23348 endif 23349 continue do23406 yq6lorbx=1,xjc4ywlh twk(yq6lorbx)=beta(yq6lorbx) 23406 continue 23407 continue do23408 yq6lorbx=1,xjc4ywlh beta(ges1xpkr(yq6lorbx))=twk(yq6lorbx) 23408 continue 23409 continue do23410 ayfnwr1v=1,kuzxj1lo do23412 yq6lorbx=1,wy1vqfzu m0ibglfx(yq6lorbx,ayfnwr1v) = m0ibglfx(yq6lorbx,ayfnwr1v) + ub4xio *ar(yq6lorbx,ayfnwr1v) 23412 continue 23413 continue 23410 continue 23411 continue if((yzoe1rsp .ne. 0) .and. (q .gt. 0))then do23416 gp1jxzuh=1,q do23418 wg1xifdy=1,jnxpuym2(gp1jxzuh) call shm8ynte(kuzxj1lo,nef(gp1jxzuh),ezlgm2up(1,gp1jxzuh), cov(1,h *npt1zym(gp1jxzuh)+wg1xifdy-1),oldmat) do23420 ayfnwr1v=1,kuzxj1lo cov(ayfnwr1v,hnpt1zym(gp1jxzuh)+wg1xifdy-1) = oldmat(ayfnwr1v,1) 23420 continue 23421 continue 23418 continue 23419 continue 23416 continue 23417 continue endif return end subroutine x6kanjdh(he7mqnvy, xout, kuzxj1lo, wy1vqfzu) implicit logical (a-z) integer kuzxj1lo, wy1vqfzu double precision he7mqnvy(kuzxj1lo), xout(*) integer ayfnwr1v, yq6lorbx, gp1jxzuh, iptr iptr=1 do23422 yq6lorbx=1,wy1vqfzu do23424 ayfnwr1v=1,kuzxj1lo do23426 gp1jxzuh=1,wy1vqfzu if(yq6lorbx .eq. gp1jxzuh)then xout(iptr) = 1.0d0 else xout(iptr) = 0.0d0 endif iptr=iptr+1 23426 continue 23427 continue 23424 continue 23425 continue 23422 continue 23423 continue do23430 yq6lorbx=1,wy1vqfzu do23432 ayfnwr1v=1,kuzxj1lo do23434 gp1jxzuh=1,wy1vqfzu if(yq6lorbx .eq. 
gp1jxzuh)then xout(iptr) = he7mqnvy(ayfnwr1v) else xout(iptr) = 0.0d0 endif iptr=iptr+1 23434 continue 23435 continue 23432 continue 23433 continue 23430 continue 23431 continue return end double precision function rd9beyfk(kuzxj1lo, bhcji9gl, m0ibglfx, p *o8rwsmy) integer kuzxj1lo double precision bhcji9gl(kuzxj1lo), m0ibglfx(kuzxj1lo), po8rwsmy( *kuzxj1lo) integer ayfnwr1v double precision lm9vcjob, rxeqjn0y, work rxeqjn0y = 0.0d0 lm9vcjob = 0.0d0 do23438 ayfnwr1v=1,kuzxj1lo work = bhcji9gl(ayfnwr1v) - m0ibglfx(ayfnwr1v) rxeqjn0y = rxeqjn0y + po8rwsmy(ayfnwr1v)*work*work lm9vcjob = lm9vcjob + po8rwsmy(ayfnwr1v) 23438 continue 23439 continue if(lm9vcjob .gt. 0.0d0)then rd9beyfk=rxeqjn0y/lm9vcjob else rd9beyfk=0.0d0 endif return end subroutine pitmeh0q(kuzxj1lo, bhcji9gl, po8rwsmy, lfu2qhid, lm9vcj *ob) implicit logical (a-z) integer kuzxj1lo double precision bhcji9gl(kuzxj1lo), po8rwsmy(kuzxj1lo), lfu2qhid, * lm9vcjob double precision rxeqjn0y integer ayfnwr1v lm9vcjob = 0.0d0 rxeqjn0y = 0.0d0 do23442 ayfnwr1v=1,kuzxj1lo rxeqjn0y = rxeqjn0y + bhcji9gl(ayfnwr1v) * po8rwsmy(ayfnwr1v) lm9vcjob = lm9vcjob + po8rwsmy(ayfnwr1v) 23442 continue 23443 continue if(lm9vcjob .gt. 0.0d0)then lfu2qhid = rxeqjn0y / lm9vcjob else lfu2qhid = 0.0d0 endif return end subroutine dsrt0gem(kuzxj1lo, x, w, bhcji9gl, ub4xioar, cov, yzoe1 *rsp) implicit logical (a-z) integer kuzxj1lo integer yzoe1rsp double precision x(kuzxj1lo), w(kuzxj1lo), bhcji9gl(kuzxj1lo), ub4 *xioar(kuzxj1lo) double precision cov(kuzxj1lo,*) integer ayfnwr1v double precision pasjmo8g, pygsw6ko, q6zdcwxk, nsum, eck8vubt, int *erc, bzmd6ftv, hofjnx2e, lm9vcjob call pitmeh0q(kuzxj1lo,bhcji9gl,w,pasjmo8g, lm9vcjob) call pitmeh0q(kuzxj1lo,x,w,pygsw6ko, lm9vcjob) nsum = 0.0d0 q6zdcwxk = 0.0d0 do23446 ayfnwr1v=1,kuzxj1lo hofjnx2e = x(ayfnwr1v)-pygsw6ko nsum = nsum + hofjnx2e * (bhcji9gl(ayfnwr1v)-pasjmo8g) * w(ayfnwr1 *v) hofjnx2e = hofjnx2e * hofjnx2e q6zdcwxk = q6zdcwxk + hofjnx2e * w(ayfnwr1v) 23446 continue 23447 continue eck8vubt = nsum/q6zdcwxk interc = pasjmo8g - eck8vubt * pygsw6ko do23448 ayfnwr1v=1,kuzxj1lo ub4xioar(ayfnwr1v) = interc + eck8vubt * x(ayfnwr1v) 23448 continue 23449 continue bzmd6ftv = interc + eck8vubt * x(1) if(yzoe1rsp .ne. 0)then do23452 ayfnwr1v=1,kuzxj1lo hofjnx2e = x(ayfnwr1v)-pygsw6ko if(w(ayfnwr1v) .gt. 0.0d0)then cov(ayfnwr1v,1) = cov(ayfnwr1v,1) - 1.0d0/lm9vcjob - hofjnx2e * ho *fjnx2e / q6zdcwxk else cov(ayfnwr1v,1) = 0.0d0 endif 23452 continue 23453 continue endif return end subroutine shm8ynte(kuzxj1lo, p, ezlgm2up, pygsw6ko, x) implicit logical (a-z) integer kuzxj1lo, p, ezlgm2up(kuzxj1lo) double precision pygsw6ko(p), x(kuzxj1lo) integer ayfnwr1v do23456 ayfnwr1v=1,kuzxj1lo x(ayfnwr1v) = pygsw6ko(ezlgm2up(ayfnwr1v)) 23456 continue 23457 continue return end subroutine vankcghz2l2(x, kuzxj1lo, ankcghz2, rvy1fpli, ukgwt7na) implicit logical (a-z) integer kuzxj1lo, rvy1fpli, ukgwt7na double precision x(kuzxj1lo), ankcghz2(kuzxj1lo) integer ndk, yq6lorbx if(ukgwt7na .eq. 0)then if(kuzxj1lo .le. 
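C dsrt0gem above is the straight-line fallback: an ordinary weighted
C least-squares fit with slope
C sum(w (x-xbar)(y-ybar)) / sum(w (x-xbar)^2), whose leverages
C 1/sum(w) + (x-xbar)^2/Sxx are subtracted from the variance
C estimates when standard errors are wanted.  vankcghz2l2,
C interrupted here, repeats the knot-count rule of the C routine
C vknootl2: all distinct x values up to 40, then growth like the
C fourth root of the excess, plus three boundary copies at each end.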
40)then
      ndk = kuzxj1lo
      else
      ndk = 40 + dexp(0.25d0 * dlog(kuzxj1lo-40.0d0))
      endif
      else
      ndk = rvy1fpli - 6
      endif
      rvy1fpli = ndk + 6
      do23462 yq6lorbx = 1,3
      ankcghz2(yq6lorbx) = x(1)
23462 continue
23463 continue
      do23464 yq6lorbx = 1,ndk
      ankcghz2(yq6lorbx+3) = x( 1 + (yq6lorbx-1)*(kuzxj1lo-1)/(ndk-1) )
23464 continue
23465 continue
      do23466 yq6lorbx = 1,3
      ankcghz2(ndk+3+yq6lorbx) = x(kuzxj1lo)
23466 continue
23467 continue
      return
      end
      subroutine pankcghz2l2(ankcghz2, kuzxj1lo, zo8wpibx, tol)
      implicit logical (a-z)
      integer kuzxj1lo, zo8wpibx(kuzxj1lo)
      double precision ankcghz2(kuzxj1lo), tol
      integer ayfnwr1v, cjop5bwm
      do23468 ayfnwr1v=1,4
      zo8wpibx(ayfnwr1v) = 1
23468 continue
23469 continue
      cjop5bwm = 4
      do23470 ayfnwr1v=5,(kuzxj1lo-4)
      if((ankcghz2(ayfnwr1v) - ankcghz2(cjop5bwm) .ge. tol) .and.
     *(ankcghz2(kuzxj1lo) - ankcghz2(ayfnwr1v) .ge. tol))then
      zo8wpibx(ayfnwr1v) = 1
      cjop5bwm = ayfnwr1v
      else
      zo8wpibx(ayfnwr1v) = 0
      endif
23470 continue
23471 continue
      do23474 ayfnwr1v=(kuzxj1lo-3),kuzxj1lo
      zo8wpibx(ayfnwr1v) = 1
23474 continue
23475 continue
      return
      end
VGAM/src/gautr.c0000644000176200001440000001502713135276761013101 0ustar liggesusers#include <math.h>

/* Frequently used numerical constants: */
#define OneUponSqrt2Pi .39894228040143267794
#define twopi 6.283185307179586
#define LnSqrt2Pi -0.9189385332046727417803296
#define SQRT2 1.414213562373095049
#define SQRTPI 1.772453850905516027

/* ---------------------------------------------------------------------------
   UNIVARIATE NORMAL PROBABILITY
   --------------------------------------------------------------------------- */

#define UPPERLIMIT 20.0
/* Neither the univariate normal density nor the probability is computed
   when x < -UPPERLIMIT or x > UPPERLIMIT; the limiting tail value (0, or
   1 for the upper tail of the cdf) is returned instead. */

#define P10 242.66795523053175
#define P11 21.979261618294152
#define P12 6.9963834886191355
#define P13 -.035609843701815385
#define Q10 215.05887586986120
#define Q11 91.164905404514901
#define Q12 15.082797630407787
#define Q13 1.0

#define P20 300.4592610201616005
#define P21 451.9189537118729422
#define P22 339.3208167343436870
#define P23 152.9892850469404039
#define P24 43.16222722205673530
#define P25 7.211758250883093659
#define P26 .5641955174789739711
#define P27 -.0000001368648573827167067
#define Q20 300.4592609569832933
#define Q21 790.9509253278980272
#define Q22 931.3540948506096211
#define Q23 638.9802644656311665
#define Q24 277.5854447439876434
#define Q25 77.00015293522947295
#define Q26 12.78272731962942351
#define Q27 1.0

#define P30 -.00299610707703542174
#define P31 -.0494730910623250734
#define P32 -.226956593539686930
#define P33 -.278661308609647788
#define P34 -.0223192459734184686
#define Q30 .0106209230528467918
#define Q31 .191308926107829841
#define Q32 1.05167510706793207
#define Q33 1.98733201817135256
#define Q34 1.0

double pnorm1(double x)
{
  int sn;
  double R1, R2, y, y2, y3, y4, y5, y6, y7;
  double erf, erfc, z, z2, z3, z4;
  double phi;

  if (x < -UPPERLIMIT) return 0.0;
  if (x >  UPPERLIMIT) return 1.0;

  y = x / SQRT2;
  if (y < 0) {
    y = -y;
    sn = -1;
  } else
    sn = 1;

  y2 = y * y;
  y4 = y2 * y2;
  y6 = y4 * y2;

  if (y < 0.46875) {
    R1 = P10 + P11 * y2 + P12 * y4 + P13 * y6;
    R2 = Q10 + Q11 * y2 + Q12 * y4 + Q13 * y6;
    erf = y * R1 / R2;
    if (sn == 1)
      phi = 0.5 + 0.5*erf;
    else
      phi = 0.5 - 0.5*erf;
  } else if (y < 4.0) {
    y3 = y2 * y;
    y5 = y4 * y;
    y7 = y6 * y;
    R1 = P20 + P21 * y + P22 * y2 + P23 * y3 + P24 * y4 +
         P25 * y5 + P26 * y6 + P27 * y7;
    R2 = Q20 + Q21 * y + Q22 * y2 + Q23 * y3 + Q24 * y4 +
         Q25 * y5 + Q26 * y6 + Q27 * y7;
    erfc = exp(-y2) * R1 / R2;
    if (sn == 1)
      phi = 1.0 - 0.5*erfc;
    else
      phi = 0.5*erfc;
  } else {
    z = y4;
    z2 = z * z;
    z3 = z2 * z;
    z4 = z2 * z2;
    R1 = P30 + P31 * z + P32 * z2 + P33 * z3 + P34 * z4;
    R2 = Q30 + Q31 * z + Q32 * z2 + Q33 * z3 + Q34 * z4;
    erfc = (exp(-y2)/y) * (1.0 / SQRTPI + R1 / (R2 * y2));
    if (sn == 1)
      phi = 1.0 - 0.5*erfc;
    else
      phi = 0.5*erfc;
  }

  return phi;
}
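/*
   Editorial sketch, not part of the original source: pnorm1() above
   evaluates the standard normal cdf Phi(x) from three rational
   erf/erfc approximations, switched on y = |x|/sqrt(2).  Assuming a
   C99 <math.h> (for erfc()) and that the lines below are uncommented
   and appended to this file before compiling, a minimal cross-check
   against the identity Phi(x) = 0.5 * erfc(-x/sqrt(2)) is:

   #include <stdio.h>
   int main(void) {
       double x;
       for (x = -6.0; x <= 6.0; x += 0.5)     -- a modest range where both forms agree
           printf("%6.2f  %.15f  %.15f\n",
                  x, pnorm1(x), 0.5 * erfc(-x / SQRT2));
       return 0;
   }
*/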
/* ---------------------------------------------------------------------------
   UNIVARIATE NORMAL DENSITY
   --------------------------------------------------------------------------- */

double dnorm1(double x)
{
  if (x < -UPPERLIMIT) return 0.0;
  if (x >  UPPERLIMIT) return 0.0;
  return OneUponSqrt2Pi * exp(-0.5 * x * x);
}

/* ---------------------------------------------------------------------------
   LN OF UNIVARIATE NORMAL DENSITY
   --------------------------------------------------------------------------- */

double lndnorm1(double x)
{
  return LnSqrt2Pi - (0.5*x*x);
}

/* ---------------------------------------------------------------------------
   BIVARIATE NORMAL PROBABILITY
   --------------------------------------------------------------------------- */

#define con ((twopi / 2.0) * 10.0e-10)

double bivnor(double ah, double ak, double r)
{
  /*
     Based on alg 462 comm. acm oct 73.
     Gives the probability that a bivariate normal exceeds (ah,ak).
     gh and gk are .5 times the right tail areas of ah, ak under a n(0,1).

     Translated from FORTRAN to ratfor using struct; from ratfor to C by hand.
  */
  double a2, ap, b, cn, conex, ex, g2, gh, gk, gw, h2, h4, rr, s1, s2,
         sgn, sn, sp, sqr, t, temp, w2, wh, wk;
  int is;

  temp = -ah;
  gh = pnorm1(temp);
  gh = gh / 2.0;
  temp = -ak;
  gk = pnorm1(temp);
  gk = gk / 2.0;

  b = 0;
  if (r == 0)
    b = 4*gh*gk;
  else {
    rr = 1 - r*r;
    if (rr < 0)
      return 0; /* zz; 29/6/02; was originally return; not sure */
    if (rr != 0) {
      sqr = sqrt(rr);
      if (ah != 0) {
        b = gh;
        if (ah*ak < 0)
          b = b - .5;
        else if (ah*ak == 0)
          goto label10;
      } else if (ak == 0) {
        b = atan(r/sqr)/twopi + .25;
        goto label50;
      }
      b = b + gk;
      if (ah == 0)
        goto label20;

label10:
      wh = -ah;
      wk = (ak/ah - r)/sqr;
      gw = 2*gh;
      is = -1;
      goto label30;

label20:
      do {
        wh = -ak;
        wk = (ah/ak - r)/sqr;
        gw = 2*gk;
        is = 1;

label30:
        sgn = -1;
        t = 0;
        if (wk != 0) {
          if (fabs(wk) >= 1) { /* this brace added 28/6/02 by tyee */
            if (fabs(wk) == 1) {
              t = wk*gw*(1 - gw)/2;
              goto label40;
            } else {
              sgn = -sgn;
              wh = wh*wk;
              g2 = pnorm1(wh);
              wk = 1/wk;
              if (wk < 0)
                b = b + .5;
              b = b - (gw + g2)/2 + gw*g2;
            }
          }
          h2 = wh*wh;
          a2 = wk*wk;
          h4 = h2*.5;
          ex = 0;
          if (h4 < 150.0)
            ex = exp(-h4);
          w2 = h4*ex;
          ap = 1;
          s2 = ap - ex;
          sp = ap;
          s1 = 0;
          sn = s1;
          conex = fabs(con/wk);
          do {
            cn = ap*s2/(sn + sp);
            s1 = s1 + cn;
            if (fabs(cn) <= conex)
              break;
            sn = sp;
            sp = sp + 1;
            s2 = s2 - w2;
            w2 = w2*h4/sp;
            ap = -ap*a2;
          } while (1);
          t = (atan(wk) - wk*s1)/twopi;
label40:
          b = b + sgn*t;
        }
        if (is >= 0)
          break;
      } while (ak != 0);
    } else if (r >= 0) {
      if (ah >= ak)
        b = 2*gh;
      else
        b = 2*gk;
    } else if (ah + ak < 0)
      b = 2*(gh + gk) - 1;
  }

label50:
  if (b < 0) b = 0;
  if (b > 1) b = 1;
  return(b);
}

/* In the following function, size measures the dimension of x;
   singler == 1 if r is a scalar, otherwise r is the same size as x & y. */

/* This is called by S */
void pnorm2ccc(double *x, double *y, double *r,
               int *size, int *singler, double *ans)
{
  int i;

  if (*singler == 1) {
    for (i = 0; i < *size; i++)
      ans[i] = bivnor(x[i], y[i], *r);
  } else {
    for (i = 0; i < *size; i++)
      ans[i] = bivnor(x[i], y[i], r[i]);
  }
}

/*
main()
{
  int i;
  double x, y, r;
  x = 0.0;
  y = 0.0;
  for (i = -9; i <= 9; i++) {
    r = i / 10.0;
    Rprintf("%10.2f %10.6f \n", r, bivnor(x, y, r));
  }
}
*/
VGAM/src/cqof.f0000644000176200001440000023373213135276761012717 0ustar liggesusersC Output from Public domain Ratfor, version 1.01
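C  Editorial note, not in the original file: cqof.f is machine
C  generated (hence the mangled identifiers) and holds the fitting
C  code behind cqo() and cao(); see cqo1f, cqo2f, vcao6f and dcqof
C  below.  The first routine, pnm1or, is a double precision Fortran
C  twin of pnorm1() in gautr.c above; unlike the C version, which
C  returns 0 below -UPPERLIMIT, it returns 2.753624d-89, roughly
C  Phi(-20), in the far lower tail.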
      subroutine pnm1or(objzgdk0, lfu2qhid)
      implicit logical (a-z)
      double precision objzgdk0, lfu2qhid
      integer sn
      double precision r1, r2, y, y2, y3, y4, y5, y6, y7
      double precision erf, erfc, z, z2, z3, z4
      double precision sqrt2, sqrtpi, ulimit, p10,p11,p12,p13,
     *q10,q11,q12,q13
      double precision p20,p21,p22,p23,p24,p25,p26,p27
      double precision q20,q21,q22,q23,q24,q25,q26,q27
      double precision p30,p31,p32,p33,p34
      double precision q30,q31,q32,q33,q34
      sqrt2 = 1.414213562373095049d0
      sqrtpi = 1.772453850905516027d0
      ulimit = 20.0d0
      p10 = 242.66795523053175d0
      p11 = 21.979261618294152d0
      p12 = 6.9963834886191355d0
      p13 = -.035609843701815385d0
      q10 = 215.05887586986120d0
      q11 = 91.164905404514901d0
      q12 = 15.082797630407787d0
      q13 = 1.0d0
      p20 = 300.4592610201616005d0
      p21 = 451.9189537118729422d0
      p22 = 339.3208167343436870d0
      p23 = 152.9892850469404039d0
      p24 = 43.16222722205673530d0
      p25 = 7.211758250883093659d0
      p26 = .5641955174789739711d0
      p27 = -.0000001368648573827167067d0
      q20 = 300.4592609569832933d0
      q21 = 790.9509253278980272d0
      q22 = 931.3540948506096211d0
      q23 = 638.9802644656311665d0
      q24 = 277.5854447439876434d0
      q25 = 77.00015293522947295d0
      q26 = 12.78272731962942351d0
      q27 = 1.0d0
      p30 = -.00299610707703542174d0
      p31 = -.0494730910623250734d0
      p32 = -.226956593539686930d0
      p33 = -.278661308609647788d0
      p34 = -.0223192459734184686d0
      q30 = .0106209230528467918d0
      q31 = .191308926107829841d0
      q32 = 1.05167510706793207d0
      q33 = 1.98733201817135256d0
      q34 = 1.0d0
      if(objzgdk0 .lt. -ulimit)then
      lfu2qhid = 2.753624d-89
      return
      endif
      if(objzgdk0 .gt. ulimit)then
      lfu2qhid = 1.0d0
      return
      endif
      y = objzgdk0 / sqrt2
      if(y .lt. 0.0d0)then
      y = -y
      sn = -1
      else
      sn = 1
      endif
      y2 = y * y
      y4 = y2 * y2
      y6 = y4 * y2
      if(y .lt. 0.46875d0)then
      r1 = p10 + p11 * y2 + p12 * y4 + p13 * y6
      r2 = q10 + q11 * y2 + q12 * y4 + q13 * y6
      erf = y * r1 / r2
      if(sn .eq. 1)then
      lfu2qhid = 0.5d0 + 0.5d0*erf
      else
      lfu2qhid = 0.5d0 - 0.5d0*erf
      endif
      else
      if(y .lt. 4.0d0)then
      y3 = y2 * y
      y5 = y4 * y
      y7 = y6 * y
      r1 = p20 + p21 * y + p22 * y2 + p23 * y3 + p24 * y4 + p25 * y5 +
     *p26 * y6 + p27 * y7
      r2 = q20 + q21 * y + q22 * y2 + q23 * y3 + q24 * y4 + q25 * y5 +
     *q26 * y6 + q27 * y7
      erfc = dexp(-y2) * r1 / r2
      if(sn .eq. 1)then
      lfu2qhid = 1.0d0 - 0.5d0*erfc
      else
      lfu2qhid = 0.5d0*erfc
      endif
      else
      z = y4
      z2 = z * z
      z3 = z2 * z
      z4 = z2 * z2
      r1 = p30 + p31 * z + p32 * z2 + p33 * z3 + p34 * z4
      r2 = q30 + q31 * z + q32 * z2 + q33 * z3 + q34 * z4
      erfc = (dexp(-y2)/y) * (1.0d0 / sqrtpi + r1 / (r2 * y2))
      if(sn .eq. 1)then
      lfu2qhid = 1.0d0 - 0.5d0*erfc
      else
      lfu2qhid = 0.5d0*erfc
      endif
      endif
      endif
      return
      end
      subroutine pnm1ow(objzgdk0, lfu2qhid, kuzxj1lo)
      implicit logical (a-z)
      integer kuzxj1lo, ayfnwr1v
      double precision objzgdk0(kuzxj1lo), lfu2qhid(kuzxj1lo)
      do23016 ayfnwr1v=1,kuzxj1lo
      call pnm1or(objzgdk0(ayfnwr1v), lfu2qhid(ayfnwr1v))
23016 continue
23017 continue
      return
      end
      subroutine n2howibc2a(objzgdk0, i9mwnvqt, lfu2qhid)
      implicit logical (a-z)
      double precision objzgdk0, i9mwnvqt, lfu2qhid
      double precision xd4mybgja
      if(1.0d0 - objzgdk0 .ge. 1.0d0)then
      lfu2qhid = -8.12589d0 / (3.0d0*dsqrt(i9mwnvqt))
      else
      if(1.0d0 - objzgdk0 .le. 0.0d0)then
      lfu2qhid = 8.12589d0 / (3.0d0*dsqrt(i9mwnvqt))
      else
      call pnm1or(1.0d0-objzgdk0, xd4mybgja)
      xd4mybgja = xd4mybgja / (3.0d0*dsqrt(i9mwnvqt))
      lfu2qhid = -3.0d0 * dlog(1.0d0 + xd4mybgja)
      endif
      endif
      return
      end
      subroutine zi8qrpsb(objzgdk0, lfu2qhid)
      implicit logical (a-z)
      double precision objzgdk0, lfu2qhid
      if(1.0d0 - objzgdk0 .ge. 1.0d0)then
      lfu2qhid = -35.0d0
      else
      if(1.0d0 - objzgdk0 .le.
0.0d0)then lfu2qhid = 3.542106d0 else lfu2qhid = dlog(-dlog(1.0d0 - objzgdk0)) endif endif return end subroutine g2vwexyk9(objzgdk0, lfu2qhid) implicit logical (a-z) double precision objzgdk0, lfu2qhid if(1.0d0 - objzgdk0 .ge. 1.0d0)then lfu2qhid = -34.53958d0 else if(1.0d0 - objzgdk0 .le. 0.0d0)then lfu2qhid = 34.53958d0 else lfu2qhid = dlog(objzgdk0 / (1.0d0 - objzgdk0)) endif endif return end subroutine pkc4ejib(w8znmyce, beta, m0ibglfx, kuzxj1lo, wy1vqfzu, *br5ovgcj, xlpjcg3s, vtsou9pz, hj3ftvzu, qfx3vhct, unhycz0e, vm4xjo *sb) implicit logical (a-z) integer kuzxj1lo, wy1vqfzu, br5ovgcj, xlpjcg3s, vtsou9pz, hj3ftvzu *, qfx3vhct, unhycz0e double precision w8znmyce(br5ovgcj,xlpjcg3s), beta(xlpjcg3s), m0ib *glfx(wy1vqfzu,kuzxj1lo), vm4xjosb(kuzxj1lo) integer ayfnwr1v, yq6lorbx, gp1jxzuh, i1loc, sedf7mxb double precision vogkfwt8 if(vtsou9pz .eq. 1)then if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq. 5))then sedf7mxb = 2*hj3ftvzu-1 do23034 ayfnwr1v=1,kuzxj1lo vogkfwt8 = 0.0d0 do23036 gp1jxzuh=1,xlpjcg3s vogkfwt8 = vogkfwt8 + w8znmyce(2*ayfnwr1v-1,gp1jxzuh) * beta(gp1jx *zuh) 23036 continue 23037 continue m0ibglfx(sedf7mxb,ayfnwr1v) = vogkfwt8 23034 continue 23035 continue sedf7mxb = 2*hj3ftvzu do23038 ayfnwr1v=1,kuzxj1lo vogkfwt8 = 0.0d0 do23040 gp1jxzuh=1,xlpjcg3s vogkfwt8 = vogkfwt8 + w8znmyce(2*ayfnwr1v ,gp1jxzuh) * beta(gp1jxz *uh) 23040 continue 23041 continue m0ibglfx(sedf7mxb,ayfnwr1v) = vogkfwt8 23038 continue 23039 continue else do23042 ayfnwr1v=1,br5ovgcj vogkfwt8 = 0.0d0 do23044 gp1jxzuh=1,xlpjcg3s vogkfwt8 = vogkfwt8 + w8znmyce(ayfnwr1v,gp1jxzuh) * beta(gp1jxzuh) 23044 continue 23045 continue m0ibglfx(hj3ftvzu,ayfnwr1v) = vogkfwt8 23042 continue 23043 continue endif else i1loc = 1 do23046 ayfnwr1v=1,kuzxj1lo do23048 yq6lorbx=1,wy1vqfzu vogkfwt8 = 0.0d0 do23050 gp1jxzuh=1,xlpjcg3s vogkfwt8 = vogkfwt8 + w8znmyce(i1loc,gp1jxzuh) * beta(gp1jxzuh) 23050 continue 23051 continue i1loc = i1loc + 1 m0ibglfx(yq6lorbx,ayfnwr1v) = vogkfwt8 23048 continue 23049 continue 23046 continue 23047 continue endif if(unhycz0e .eq. 1)then if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq. 5))then do23056 ayfnwr1v=1,kuzxj1lo m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) = m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) *+ vm4xjosb(ayfnwr1v) 23056 continue 23057 continue else do23058 ayfnwr1v=1,kuzxj1lo m0ibglfx(hj3ftvzu,ayfnwr1v) = m0ibglfx(hj3ftvzu,ayfnwr1v) + vm4xjo *sb(ayfnwr1v) 23058 continue 23059 continue endif endif return end subroutine nipyajc1(m0ibglfx, t8hwvalr, kuzxj1lo, wy1vqfzu, afpc0k *ns, qfx3vhct, hj3ftvzu) implicit logical (a-z) integer kuzxj1lo, wy1vqfzu, afpc0kns, qfx3vhct, hj3ftvzu double precision m0ibglfx(wy1vqfzu,kuzxj1lo), t8hwvalr(afpc0kns,ku *zxj1lo) integer ayfnwr1v, yq6lorbx double precision o3jyipdf0 if(hj3ftvzu .eq. 0)then if(qfx3vhct .eq. 1)then do23064 ayfnwr1v=1,kuzxj1lo do23066 yq6lorbx=1,wy1vqfzu o3jyipdf0 = dexp(m0ibglfx(yq6lorbx,ayfnwr1v)) t8hwvalr(yq6lorbx,ayfnwr1v) = o3jyipdf0 / (1.0d0 + o3jyipdf0) 23066 continue 23067 continue 23064 continue 23065 continue endif if(qfx3vhct .eq. 2)then do23070 ayfnwr1v=1,kuzxj1lo do23072 yq6lorbx=1,wy1vqfzu t8hwvalr(yq6lorbx,ayfnwr1v) = dexp(m0ibglfx(yq6lorbx,ayfnwr1v)) 23072 continue 23073 continue 23070 continue 23071 continue endif if(qfx3vhct .eq. 4)then do23076 ayfnwr1v=1,kuzxj1lo do23078 yq6lorbx=1,wy1vqfzu t8hwvalr(yq6lorbx,ayfnwr1v) = 1.0d0-dexp(-dexp(m0ibglfx(yq6lorbx,a *yfnwr1v))) 23078 continue 23079 continue 23076 continue 23077 continue endif if(qfx3vhct .eq. 
5)then do23082 ayfnwr1v=1,kuzxj1lo do23084 yq6lorbx=1,afpc0kns t8hwvalr(yq6lorbx,ayfnwr1v) = dexp(m0ibglfx(2*yq6lorbx-1,ayfnwr1v) *) 23084 continue 23085 continue 23082 continue 23083 continue endif if(qfx3vhct .eq. 3)then do23088 ayfnwr1v=1,kuzxj1lo do23090 yq6lorbx=1,afpc0kns t8hwvalr(yq6lorbx,ayfnwr1v) = dexp(m0ibglfx(2*yq6lorbx-1,ayfnwr1v) *) 23090 continue 23091 continue 23088 continue 23089 continue endif if(qfx3vhct .eq. 8)then do23094 ayfnwr1v=1,kuzxj1lo do23096 yq6lorbx=1,wy1vqfzu t8hwvalr(yq6lorbx,ayfnwr1v) = m0ibglfx(yq6lorbx,ayfnwr1v) 23096 continue 23097 continue 23094 continue 23095 continue endif else if(qfx3vhct .eq. 1)then do23100 ayfnwr1v=1,kuzxj1lo o3jyipdf0 = dexp(m0ibglfx(hj3ftvzu,ayfnwr1v)) t8hwvalr(hj3ftvzu,ayfnwr1v) = o3jyipdf0 / (1.0d0 + o3jyipdf0) 23100 continue 23101 continue endif if(qfx3vhct .eq. 2)then do23104 ayfnwr1v=1,kuzxj1lo t8hwvalr(hj3ftvzu,ayfnwr1v) = dexp(m0ibglfx(hj3ftvzu,ayfnwr1v)) 23104 continue 23105 continue endif if(qfx3vhct .eq. 4)then do23108 ayfnwr1v=1,kuzxj1lo t8hwvalr(hj3ftvzu,ayfnwr1v) = 1.0d0 - dexp(-dexp(m0ibglfx(hj3ftvzu *,ayfnwr1v))) 23108 continue 23109 continue endif if(qfx3vhct .eq. 5)then do23112 ayfnwr1v=1,kuzxj1lo t8hwvalr(hj3ftvzu,ayfnwr1v) = dexp(m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) *) 23112 continue 23113 continue endif if(qfx3vhct .eq. 3)then do23116 ayfnwr1v=1,kuzxj1lo t8hwvalr(hj3ftvzu,ayfnwr1v) = dexp(m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) *) 23116 continue 23117 continue endif if(qfx3vhct .eq. 8)then do23120 ayfnwr1v=1,kuzxj1lo t8hwvalr(hj3ftvzu,ayfnwr1v) = m0ibglfx(hj3ftvzu,ayfnwr1v) 23120 continue 23121 continue endif endif return end subroutine shjlwft5(qfx3vhct, tlgduey8, wmat, t8hwvalr, kuzxj1lo, *wy1vqfzu, afpc0kns, dimw, m0ibglfx, dev, hj3ftvzu, n3iasxug, vsoih *n1r, cll) implicit logical (a-z) integer qfx3vhct, kuzxj1lo, wy1vqfzu, afpc0kns, dimw, hj3ftvzu, cl *l double precision tlgduey8(kuzxj1lo, afpc0kns), wmat(kuzxj1lo, dimw *), t8hwvalr(afpc0kns, kuzxj1lo), m0ibglfx(wy1vqfzu,kuzxj1lo), dev, * n3iasxug, vsoihn1r integer ayfnwr1v, yq6lorbx double precision bzmd6ftv, txlvcey5, xd4mybgj, uqnkc6zg, hofjnx2e, * smu, afwp5imx, ivqk2ywz, qvd7yktm double precision hdqsx7bk, anopu9vi, jtnbu2hz logical lbgwvp3q bzmd6ftv = 0.0d0 if(hj3ftvzu .eq. 0)then if((qfx3vhct .eq. 1) .or. (qfx3vhct .eq. 4))then do23126 yq6lorbx=1,wy1vqfzu do23128 ayfnwr1v=1,kuzxj1lo if(tlgduey8(ayfnwr1v,yq6lorbx) .gt. 0.0d0)then ivqk2ywz = tlgduey8(ayfnwr1v,yq6lorbx) * dlog(tlgduey8(ayfnwr1v,yq *6lorbx)) else ivqk2ywz = 0.0d0 endif if(tlgduey8(ayfnwr1v,yq6lorbx) .lt. 1.0d0)then ivqk2ywz = ivqk2ywz + (1.0d0 - tlgduey8(ayfnwr1v,yq6lorbx)) * dlog *(1.0d0 - tlgduey8(ayfnwr1v,yq6lorbx)) endif xd4mybgj = t8hwvalr(yq6lorbx,ayfnwr1v) * (1.0d0 - t8hwvalr(yq6lorb *x,ayfnwr1v)) if(xd4mybgj .lt. n3iasxug)then smu = t8hwvalr(yq6lorbx,ayfnwr1v) if(smu .lt. n3iasxug)then qvd7yktm = tlgduey8(ayfnwr1v,yq6lorbx) * vsoihn1r else qvd7yktm = tlgduey8(ayfnwr1v,yq6lorbx) * dlog(smu) endif afwp5imx = 1.0d0 - smu if(afwp5imx .lt. n3iasxug)then qvd7yktm = qvd7yktm + (1.0d0 - tlgduey8(ayfnwr1v,yq6lorbx)) * vsoi *hn1r else qvd7yktm = qvd7yktm + (1.0d0 - tlgduey8(ayfnwr1v,yq6lorbx)) * dlog *(afwp5imx) endif else qvd7yktm = (tlgduey8(ayfnwr1v,yq6lorbx) * dlog(t8hwvalr(yq6lorbx,a *yfnwr1v)) + (1.0d0 - tlgduey8(ayfnwr1v,yq6lorbx)) * dlog(1.0d0 - t *8hwvalr(yq6lorbx,ayfnwr1v))) endif bzmd6ftv = bzmd6ftv + wmat(ayfnwr1v,1) * (ivqk2ywz - qvd7yktm) 23128 continue 23129 continue 23126 continue 23127 continue endif if(qfx3vhct .eq. 
2)then do23142 yq6lorbx=1,wy1vqfzu do23144 ayfnwr1v=1,kuzxj1lo if(tlgduey8(ayfnwr1v,yq6lorbx) .gt. 0.0d0)then xd4mybgj = t8hwvalr(yq6lorbx,ayfnwr1v) - tlgduey8(ayfnwr1v,yq6lorb *x) + tlgduey8(ayfnwr1v,yq6lorbx) * dlog(tlgduey8(ayfnwr1v,yq6lorbx *) / t8hwvalr(yq6lorbx,ayfnwr1v)) else xd4mybgj = t8hwvalr(yq6lorbx,ayfnwr1v) - tlgduey8(ayfnwr1v,yq6lorb *x) endif bzmd6ftv = bzmd6ftv + wmat(ayfnwr1v,1) * xd4mybgj 23144 continue 23145 continue 23142 continue 23143 continue endif if(qfx3vhct .eq. 5)then do23150 yq6lorbx=1,afpc0kns do23152 ayfnwr1v=1,kuzxj1lo jtnbu2hz = dexp(m0ibglfx(2*yq6lorbx,ayfnwr1v)) call tldz5ion(jtnbu2hz, uqnkc6zg) if(tlgduey8(ayfnwr1v,yq6lorbx) .gt. 0.0d0)then xd4mybgj = (jtnbu2hz - 1.0d0) * dlog(tlgduey8(ayfnwr1v,yq6lorbx)) *+ (dlog(jtnbu2hz)-tlgduey8(ayfnwr1v,yq6lorbx) / t8hwvalr(yq6lorbx, *ayfnwr1v) - dlog(t8hwvalr(yq6lorbx,ayfnwr1v)) ) * jtnbu2hz - uqnkc *6zg else xd4mybgj = -1000.0d0 endif xd4mybgj = -xd4mybgj bzmd6ftv = bzmd6ftv + wmat(ayfnwr1v,1) * xd4mybgj 23152 continue 23153 continue 23150 continue 23151 continue endif if(qfx3vhct .eq. 3)then if(cll .eq. 0)then anopu9vi = 34.0d0 do23160 yq6lorbx=1,afpc0kns do23162 ayfnwr1v=1,kuzxj1lo if(m0ibglfx(2*yq6lorbx,ayfnwr1v) .gt. anopu9vi)then hdqsx7bk = dexp(anopu9vi) lbgwvp3q = .true. else if(m0ibglfx(2*yq6lorbx,ayfnwr1v) .lt. -anopu9vi)then hdqsx7bk = dexp(-anopu9vi) lbgwvp3q = .true. else hdqsx7bk = dexp(m0ibglfx(2*yq6lorbx,ayfnwr1v)) lbgwvp3q = .false. endif endif if(tlgduey8(ayfnwr1v,yq6lorbx) .lt. 1.0d0)then xd4mybgj = 1.0d0 else xd4mybgj = tlgduey8(ayfnwr1v,yq6lorbx) endif bzmd6ftv = bzmd6ftv + wmat(ayfnwr1v,1) * (tlgduey8(ayfnwr1v,yq6lor *bx) * dlog(xd4mybgj/t8hwvalr(yq6lorbx,ayfnwr1v)) + (tlgduey8(ayfnw *r1v,yq6lorbx) + hdqsx7bk) * dlog((t8hwvalr(yq6lorbx,ayfnwr1v)+hdqs *x7bk) / (hdqsx7bk+ tlgduey8(ayfnwr1v,yq6lorbx)))) 23162 continue 23163 continue 23160 continue 23161 continue else anopu9vi = 34.0d0 do23170 yq6lorbx=1,afpc0kns do23172 ayfnwr1v=1,kuzxj1lo if(m0ibglfx(2*yq6lorbx,ayfnwr1v) .gt. anopu9vi)then hdqsx7bk = dexp(anopu9vi) lbgwvp3q = .true. else if(m0ibglfx(2*yq6lorbx,ayfnwr1v) .lt. -anopu9vi)then hdqsx7bk = dexp(-anopu9vi) lbgwvp3q = .true. else hdqsx7bk = dexp(m0ibglfx(2*yq6lorbx,ayfnwr1v)) lbgwvp3q = .false. endif endif if( lbgwvp3q )then uqnkc6zg = 0.0d0 hofjnx2e = 0.0d0 else call tldz5ion(hdqsx7bk + tlgduey8(ayfnwr1v,yq6lorbx), uqnkc6zg) call tldz5ion(hdqsx7bk, hofjnx2e) endif call tldz5ion(1.0d0 + tlgduey8(ayfnwr1v,yq6lorbx), txlvcey5) xd4mybgj = hdqsx7bk * dlog(hdqsx7bk / (hdqsx7bk + t8hwvalr(yq6lorb *x,ayfnwr1v))) + uqnkc6zg - hofjnx2e - txlvcey5 if(tlgduey8(ayfnwr1v,yq6lorbx) .gt. 0.0d0)then xd4mybgj = xd4mybgj + tlgduey8(ayfnwr1v,yq6lorbx) * dlog(t8hwvalr( *yq6lorbx,ayfnwr1v) / (hdqsx7bk + t8hwvalr(yq6lorbx,ayfnwr1v))) endif bzmd6ftv = bzmd6ftv + wmat(ayfnwr1v,1) * xd4mybgj 23172 continue 23173 continue 23170 continue 23171 continue bzmd6ftv = -bzmd6ftv / 2.0d0 endif endif if(qfx3vhct .eq. 8)then do23184 yq6lorbx=1,wy1vqfzu do23186 ayfnwr1v=1,kuzxj1lo xd4mybgj = tlgduey8(ayfnwr1v,yq6lorbx) - t8hwvalr(yq6lorbx,ayfnwr1 *v) bzmd6ftv = bzmd6ftv + wmat(ayfnwr1v,1) * xd4mybgj**2 23186 continue 23187 continue 23184 continue 23185 continue endif else if((qfx3vhct .eq. 1) .or. (qfx3vhct .eq. 4))then do23190 ayfnwr1v=1,kuzxj1lo if(tlgduey8(ayfnwr1v,hj3ftvzu) .gt. 0.0d0)then ivqk2ywz = tlgduey8(ayfnwr1v,hj3ftvzu) * dlog(tlgduey8(ayfnwr1v,hj *3ftvzu)) else ivqk2ywz = 0.0d0 endif if(tlgduey8(ayfnwr1v,hj3ftvzu) .lt. 
1.0d0)then ivqk2ywz = ivqk2ywz + (1.0d0 - tlgduey8(ayfnwr1v,hj3ftvzu)) * dlog *(1.0d0 - tlgduey8(ayfnwr1v,hj3ftvzu)) endif xd4mybgj = t8hwvalr(hj3ftvzu,ayfnwr1v) * (1.0d0 - t8hwvalr(hj3ftvz *u,ayfnwr1v)) if(xd4mybgj .lt. n3iasxug)then smu = t8hwvalr(hj3ftvzu,ayfnwr1v) if(smu .lt. n3iasxug)then qvd7yktm = tlgduey8(ayfnwr1v,hj3ftvzu) * vsoihn1r else qvd7yktm = tlgduey8(ayfnwr1v,hj3ftvzu) * dlog(smu) endif afwp5imx = 1.0d0 - smu if(afwp5imx .lt. n3iasxug)then qvd7yktm = qvd7yktm + (1.0d0-tlgduey8(ayfnwr1v,hj3ftvzu))*vsoihn1r else qvd7yktm = qvd7yktm + (1.0d0-tlgduey8(ayfnwr1v,hj3ftvzu))*dlog(afw *p5imx) endif else qvd7yktm = (tlgduey8(ayfnwr1v,hj3ftvzu) * dlog(t8hwvalr(hj3ftvzu,a *yfnwr1v)) + (1.0d0 - tlgduey8(ayfnwr1v,hj3ftvzu)) * dlog(1.0d0 - t *8hwvalr(hj3ftvzu,ayfnwr1v))) endif bzmd6ftv = bzmd6ftv + wmat(ayfnwr1v,1) * (ivqk2ywz - qvd7yktm) 23190 continue 23191 continue endif if(qfx3vhct .eq. 2)then do23204 ayfnwr1v=1,kuzxj1lo if(tlgduey8(ayfnwr1v,hj3ftvzu) .gt. 0.0d0)then xd4mybgj = t8hwvalr(hj3ftvzu,ayfnwr1v) - tlgduey8(ayfnwr1v,hj3ftvz *u) + tlgduey8(ayfnwr1v,hj3ftvzu) * dlog(tlgduey8(ayfnwr1v,hj3ftvzu *) / t8hwvalr(hj3ftvzu,ayfnwr1v)) else xd4mybgj = t8hwvalr(hj3ftvzu,ayfnwr1v) - tlgduey8(ayfnwr1v,hj3ftvz *u) endif bzmd6ftv = bzmd6ftv + wmat(ayfnwr1v,1) * xd4mybgj 23204 continue 23205 continue endif if(qfx3vhct .eq. 5)then do23210 ayfnwr1v=1,kuzxj1lo jtnbu2hz = dexp(m0ibglfx(2*hj3ftvzu,ayfnwr1v)) call tldz5ion(jtnbu2hz, uqnkc6zg) if(tlgduey8(ayfnwr1v,hj3ftvzu) .gt. 0.0d0)then xd4mybgj = (jtnbu2hz - 1.0d0) * dlog(tlgduey8(ayfnwr1v,hj3ftvzu)) *+ jtnbu2hz * (dlog(jtnbu2hz) - tlgduey8(ayfnwr1v,hj3ftvzu) / t8hwv *alr(hj3ftvzu,ayfnwr1v) - dlog(t8hwvalr(hj3ftvzu,ayfnwr1v))) - uqnk *c6zg else xd4mybgj = -1000.0d0 endif xd4mybgj = -xd4mybgj bzmd6ftv = bzmd6ftv + wmat(ayfnwr1v,1) * xd4mybgj 23210 continue 23211 continue endif if(qfx3vhct .eq. 3)then if(cll .eq. 0)then anopu9vi = 34.0d0 do23218 ayfnwr1v=1,kuzxj1lo if(m0ibglfx(2*hj3ftvzu,ayfnwr1v) .gt. anopu9vi)then hdqsx7bk = dexp(anopu9vi) lbgwvp3q = .true. else if(m0ibglfx(2*hj3ftvzu,ayfnwr1v) .lt. -anopu9vi)then hdqsx7bk = dexp(-anopu9vi) lbgwvp3q = .true. else hdqsx7bk = dexp(m0ibglfx(2*hj3ftvzu,ayfnwr1v)) lbgwvp3q = .false. endif endif if(tlgduey8(ayfnwr1v,hj3ftvzu) .lt. 1.0d0)then xd4mybgj = 1.0d0 else xd4mybgj = tlgduey8(ayfnwr1v,hj3ftvzu) endif bzmd6ftv = bzmd6ftv + wmat(ayfnwr1v,1) * (tlgduey8(ayfnwr1v,hj3ftv *zu) * dlog(xd4mybgj/t8hwvalr(hj3ftvzu,ayfnwr1v)) + (tlgduey8(ayfnw *r1v,hj3ftvzu)+hdqsx7bk) * dlog((t8hwvalr(hj3ftvzu,ayfnwr1v) + hdqs *x7bk) / ( hdqsx7bk+tlgduey8(ayfnwr1v,hj3ftvzu)))) 23218 continue 23219 continue else do23226 ayfnwr1v=1,kuzxj1lo hdqsx7bk = dexp(m0ibglfx(2*hj3ftvzu,ayfnwr1v)) call tldz5ion(hdqsx7bk + tlgduey8(ayfnwr1v,hj3ftvzu), uqnkc6zg) call tldz5ion(hdqsx7bk, hofjnx2e) call tldz5ion(1.0d0 + tlgduey8(ayfnwr1v,hj3ftvzu), txlvcey5) xd4mybgj = hdqsx7bk * dlog(hdqsx7bk / (hdqsx7bk + t8hwvalr(hj3ftvz *u,ayfnwr1v))) + uqnkc6zg - hofjnx2e - txlvcey5 if(tlgduey8(ayfnwr1v,hj3ftvzu) .gt. 0.0d0)then xd4mybgj = xd4mybgj + tlgduey8(ayfnwr1v,hj3ftvzu) * dlog(t8hwvalr( *hj3ftvzu,ayfnwr1v) / (hdqsx7bk + t8hwvalr(hj3ftvzu,ayfnwr1v))) endif bzmd6ftv = bzmd6ftv + wmat(ayfnwr1v,1) * xd4mybgj 23226 continue 23227 continue bzmd6ftv = -bzmd6ftv / 2.0d0 endif endif if(qfx3vhct .eq. 
8)then do23232 ayfnwr1v=1,kuzxj1lo xd4mybgj = tlgduey8(ayfnwr1v,hj3ftvzu) - t8hwvalr(hj3ftvzu,ayfnwr1 *v) bzmd6ftv = bzmd6ftv + wmat(ayfnwr1v,1) * xd4mybgj**2 23232 continue 23233 continue endif endif dev = 2.0d0 * bzmd6ftv return end subroutine flncwkfq76(lncwkfq7, w8znmyce, kuzxj1lo, br5ovgcj, xwdf *5ltg, qfx3vhct) implicit logical (a-z) integer kuzxj1lo, br5ovgcj, xwdf5ltg, qfx3vhct double precision lncwkfq7(kuzxj1lo,xwdf5ltg), w8znmyce(br5ovgcj,*) integer ayfnwr1v, sedf7mxb, hpmwnav2 if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq.5 ))then sedf7mxb = 1 do23236 ayfnwr1v=1,kuzxj1lo w8znmyce(2*ayfnwr1v-1,sedf7mxb) = 1.0d0 w8znmyce(2*ayfnwr1v, sedf7mxb) = 0.0d0 23236 continue 23237 continue sedf7mxb = sedf7mxb + 1 do23238 ayfnwr1v=1,kuzxj1lo w8znmyce(2*ayfnwr1v-1,sedf7mxb) = 0.0d0 w8znmyce(2*ayfnwr1v, sedf7mxb) = 1.0d0 23238 continue 23239 continue sedf7mxb = sedf7mxb + 1 do23240 hpmwnav2=1,xwdf5ltg do23242 ayfnwr1v=1,kuzxj1lo w8znmyce(2*ayfnwr1v-1,sedf7mxb) = lncwkfq7(ayfnwr1v,hpmwnav2) w8znmyce(2*ayfnwr1v, sedf7mxb) = 0.0d0 23242 continue 23243 continue sedf7mxb = sedf7mxb + 1 23240 continue 23241 continue else sedf7mxb = 1 do23244 ayfnwr1v=1,kuzxj1lo w8znmyce(ayfnwr1v,sedf7mxb) = 1.0d0 23244 continue 23245 continue sedf7mxb = sedf7mxb + 1 do23246 hpmwnav2=1,xwdf5ltg do23248 ayfnwr1v=1,kuzxj1lo w8znmyce(ayfnwr1v,sedf7mxb)=lncwkfq7(ayfnwr1v,hpmwnav2) 23248 continue 23249 continue sedf7mxb = sedf7mxb + 1 23246 continue 23247 continue endif return end subroutine flncwkfq71(lncwkfq7, w8znmyce, kuzxj1lo, xwdf5ltg, qfx3 *vhct, vm4xjosb, br5ovgcj, xlpjcg3s, hyqwtp6i, tgiyxdw1, dufozmt7, *kifxa0he, p1, unhycz0e) implicit logical (a-z) integer kuzxj1lo, xwdf5ltg, qfx3vhct, br5ovgcj, xlpjcg3s, hyqwtp6i *, tgiyxdw1(hyqwtp6i), dufozmt7(hyqwtp6i), p1, unhycz0e double precision lncwkfq7(kuzxj1lo,xwdf5ltg), w8znmyce(br5ovgcj,xl *pjcg3s), kifxa0he(kuzxj1lo,p1) double precision vm4xjosb(kuzxj1lo) integer i0spbklx, ayfnwr1v, sedf7mxb, hpmwnav2 double precision tad5vhsu, uqnkc6zg if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq. 5))then do23252 hpmwnav2=1,xwdf5ltg do23254 ayfnwr1v=1,kuzxj1lo w8znmyce(2*ayfnwr1v-1,hpmwnav2) = lncwkfq7(ayfnwr1v,hpmwnav2) w8znmyce(2*ayfnwr1v ,hpmwnav2) = 0.0d0 23254 continue 23255 continue 23252 continue 23253 continue sedf7mxb = xwdf5ltg + 1 if(unhycz0e .eq. 0)then do23258 i0spbklx=1,hyqwtp6i do23260 ayfnwr1v=1,kuzxj1lo w8znmyce(2*ayfnwr1v-1,sedf7mxb) = lncwkfq7(ayfnwr1v,tgiyxdw1(i0spb *klx)) * lncwkfq7(ayfnwr1v,dufozmt7(i0spbklx)) w8znmyce(2*ayfnwr1v ,sedf7mxb) = 0.0d0 23260 continue 23261 continue sedf7mxb = sedf7mxb + 1 23258 continue 23259 continue else do23262 ayfnwr1v=1,kuzxj1lo tad5vhsu = 0.0d0 do23264 hpmwnav2=1,xwdf5ltg uqnkc6zg = lncwkfq7(ayfnwr1v,hpmwnav2) tad5vhsu = tad5vhsu + uqnkc6zg * uqnkc6zg 23264 continue 23265 continue vm4xjosb(ayfnwr1v) = -0.50d0 * tad5vhsu 23262 continue 23263 continue endif else do23266 hpmwnav2=1,xwdf5ltg do23268 ayfnwr1v=1,kuzxj1lo w8znmyce(ayfnwr1v,hpmwnav2) = lncwkfq7(ayfnwr1v,hpmwnav2) 23268 continue 23269 continue 23266 continue 23267 continue sedf7mxb = xwdf5ltg + 1 if(unhycz0e .eq. 
0)then do23272 i0spbklx=1,hyqwtp6i do23274 ayfnwr1v=1,kuzxj1lo w8znmyce(ayfnwr1v,sedf7mxb) = lncwkfq7(ayfnwr1v,tgiyxdw1(i0spbklx) *) * lncwkfq7(ayfnwr1v,dufozmt7(i0spbklx)) 23274 continue 23275 continue sedf7mxb = sedf7mxb + 1 23272 continue 23273 continue else do23276 ayfnwr1v=1,kuzxj1lo tad5vhsu = 0.0d0 do23278 hpmwnav2=1,xwdf5ltg uqnkc6zg = lncwkfq7(ayfnwr1v,hpmwnav2) tad5vhsu = tad5vhsu + uqnkc6zg * uqnkc6zg 23278 continue 23279 continue vm4xjosb(ayfnwr1v) = -0.50d0 * tad5vhsu 23276 continue 23277 continue endif endif if(p1 .gt. 0)then if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq. 5))then do23284 ayfnwr1v=1,kuzxj1lo w8znmyce(2*ayfnwr1v-1,sedf7mxb) = 1.0d0 w8znmyce(2*ayfnwr1v, sedf7mxb) = 0.0d0 23284 continue 23285 continue sedf7mxb = sedf7mxb + 1 do23286 ayfnwr1v=1,kuzxj1lo w8znmyce(2*ayfnwr1v-1,sedf7mxb) = 0.0d0 w8znmyce(2*ayfnwr1v, sedf7mxb) = 1.0d0 23286 continue 23287 continue sedf7mxb = sedf7mxb + 1 if(p1 .gt. 1)then do23290 i0spbklx=2,p1 do23292 ayfnwr1v=1,kuzxj1lo w8znmyce(2*ayfnwr1v-1,sedf7mxb) = kifxa0he(ayfnwr1v,i0spbklx) w8znmyce(2*ayfnwr1v, sedf7mxb) = 0.0d0 23292 continue 23293 continue sedf7mxb = sedf7mxb + 1 23290 continue 23291 continue endif else do23294 i0spbklx=1,p1 do23296 ayfnwr1v=1,kuzxj1lo w8znmyce(ayfnwr1v,sedf7mxb) = kifxa0he(ayfnwr1v,i0spbklx) 23296 continue 23297 continue sedf7mxb = sedf7mxb + 1 23294 continue 23295 continue endif endif return end subroutine flncwkfq72(lncwkfq7, w8znmyce, kuzxj1lo, wy1vqfzu, br5o *vgcj, xwdf5ltg, qfx3vhct, afpc0kns, fmzq7aob, eu3oxvyb, hyqwtp6i, *tgiyxdw1, dufozmt7, unhycz0e, vm4xjosb) implicit logical (a-z) integer kuzxj1lo, wy1vqfzu, br5ovgcj, xwdf5ltg, qfx3vhct, afpc0kns *, fmzq7aob, eu3oxvyb, hyqwtp6i, tgiyxdw1(hyqwtp6i), dufozmt7(hyqwt *p6i), unhycz0e double precision lncwkfq7(kuzxj1lo,xwdf5ltg), w8znmyce(br5ovgcj,*) *, vm4xjosb(kuzxj1lo) integer i0spbklx, ayfnwr1v, yq6lorbx, gp1jxzuh, ptr, sedf7mxb, hpm *wnav2 double precision uqnkc6zg, tad5vhsu do23298 gp1jxzuh=1,eu3oxvyb do23300 ayfnwr1v=1,br5ovgcj w8znmyce(ayfnwr1v,gp1jxzuh) = 0.0d0 23300 continue 23301 continue 23298 continue 23299 continue sedf7mxb = 0 if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq. 5))then do23304 hpmwnav2=1,xwdf5ltg ptr = 1 do23306 ayfnwr1v=1,kuzxj1lo do23308 yq6lorbx=1,afpc0kns w8znmyce(ptr,sedf7mxb+yq6lorbx) = lncwkfq7(ayfnwr1v,hpmwnav2) ptr = ptr + 2 23308 continue 23309 continue 23306 continue 23307 continue sedf7mxb = sedf7mxb + afpc0kns 23304 continue 23305 continue else do23310 hpmwnav2=1,xwdf5ltg ptr = 0 do23312 ayfnwr1v=1,kuzxj1lo do23314 yq6lorbx=1,wy1vqfzu ptr = ptr + 1 w8znmyce(ptr,sedf7mxb+yq6lorbx) = lncwkfq7(ayfnwr1v,hpmwnav2) 23314 continue 23315 continue 23312 continue 23313 continue sedf7mxb = sedf7mxb + wy1vqfzu 23310 continue 23311 continue endif if(fmzq7aob .eq. 0)then if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq. 
5))then do23320 i0spbklx=1,hyqwtp6i ptr = 1 do23322 ayfnwr1v=1,kuzxj1lo uqnkc6zg = lncwkfq7(ayfnwr1v,tgiyxdw1(i0spbklx)) * lncwkfq7(ayfnwr *1v,dufozmt7(i0spbklx)) do23324 yq6lorbx=1,afpc0kns w8znmyce(ptr,sedf7mxb+yq6lorbx) = uqnkc6zg ptr = ptr + 2 23324 continue 23325 continue 23322 continue 23323 continue sedf7mxb = sedf7mxb + afpc0kns 23320 continue 23321 continue else do23326 i0spbklx=1,hyqwtp6i ptr = 0 do23328 ayfnwr1v=1,kuzxj1lo uqnkc6zg = lncwkfq7(ayfnwr1v,tgiyxdw1(i0spbklx)) * lncwkfq7(ayfnwr *1v,dufozmt7(i0spbklx)) do23330 yq6lorbx=1,wy1vqfzu ptr = ptr + 1 w8znmyce(ptr,sedf7mxb+yq6lorbx) = uqnkc6zg 23330 continue 23331 continue 23328 continue 23329 continue sedf7mxb = sedf7mxb + wy1vqfzu 23326 continue 23327 continue endif else if(unhycz0e .eq. 1)then if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq. 5))then do23336 ayfnwr1v=1,kuzxj1lo tad5vhsu = 0.0d0 do23338 hpmwnav2=1,xwdf5ltg uqnkc6zg = lncwkfq7(ayfnwr1v,hpmwnav2) tad5vhsu = tad5vhsu + uqnkc6zg * uqnkc6zg 23338 continue 23339 continue vm4xjosb(ayfnwr1v) = -0.50d0 * tad5vhsu 23336 continue 23337 continue else do23340 ayfnwr1v=1,kuzxj1lo tad5vhsu = 0.0d0 do23342 hpmwnav2=1,xwdf5ltg uqnkc6zg = lncwkfq7(ayfnwr1v,hpmwnav2) tad5vhsu = tad5vhsu + uqnkc6zg * uqnkc6zg 23342 continue 23343 continue vm4xjosb(ayfnwr1v) = -0.50d0 * tad5vhsu 23340 continue 23341 continue endif else if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq. 5))then do23346 i0spbklx=1,hyqwtp6i ptr = 1 do23348 ayfnwr1v=1,kuzxj1lo uqnkc6zg = lncwkfq7(ayfnwr1v,tgiyxdw1(i0spbklx)) * lncwkfq7(ayfnwr *1v,dufozmt7(i0spbklx)) do23350 yq6lorbx=1,afpc0kns w8znmyce(ptr,sedf7mxb+i0spbklx) = uqnkc6zg ptr = ptr + 2 23350 continue 23351 continue 23348 continue 23349 continue 23346 continue 23347 continue sedf7mxb = sedf7mxb + hyqwtp6i else do23352 i0spbklx=1,hyqwtp6i ptr = 0 do23354 ayfnwr1v=1,kuzxj1lo uqnkc6zg = lncwkfq7(ayfnwr1v,tgiyxdw1(i0spbklx)) * lncwkfq7(ayfnwr *1v,dufozmt7(i0spbklx)) do23356 yq6lorbx=1,wy1vqfzu ptr = ptr + 1 w8znmyce(ptr,sedf7mxb+i0spbklx) = uqnkc6zg 23356 continue 23357 continue 23354 continue 23355 continue 23352 continue 23353 continue sedf7mxb = sedf7mxb + hyqwtp6i endif endif endif return end subroutine ietam6(tlgduey8, m0ibglfx, y7sdgtqi, kuzxj1lo, wy1vqfzu *, afpc0kns, qfx3vhct, hj3ftvzu, wmat, wr0lbopv) implicit logical (a-z) integer kuzxj1lo, wy1vqfzu, afpc0kns, qfx3vhct, hj3ftvzu, wr0lbopv double precision tlgduey8(kuzxj1lo,afpc0kns), m0ibglfx(wy1vqfzu,ku *zxj1lo), y7sdgtqi(15) double precision wmat(kuzxj1lo,*) double precision vogkfwt8, cumw, gyuq8dex, g2vwexykp, qa8ltuhj, kw *vo4ury, cpz4fgkx, fguvm9tyi, kinit integer ayfnwr1v if((qfx3vhct .eq. 1) .or. (qfx3vhct .eq. 4) .or. (qfx3vhct .eq. 3) * .or. (qfx3vhct .eq. 5))then vogkfwt8 = 0.0d0 cumw = 0.0d0 do23360 ayfnwr1v=1,kuzxj1lo vogkfwt8 = vogkfwt8 + tlgduey8(ayfnwr1v,hj3ftvzu) * wmat(ayfnwr1v, *1) cumw = cumw + wmat(ayfnwr1v,1) 23360 continue 23361 continue gyuq8dex = vogkfwt8 / cumw endif if(qfx3vhct .eq. 1)then call g2vwexyk9(gyuq8dex, g2vwexykp) do23364 ayfnwr1v=1,kuzxj1lo m0ibglfx(hj3ftvzu,ayfnwr1v) = g2vwexykp 23364 continue 23365 continue endif if(qfx3vhct .eq. 2)then do23368 ayfnwr1v=1,kuzxj1lo m0ibglfx(hj3ftvzu,ayfnwr1v) = dlog(tlgduey8(ayfnwr1v,hj3ftvzu) + 0 *.125d0) 23368 continue 23369 continue endif if(qfx3vhct .eq. 4)then call zi8qrpsb(gyuq8dex, qa8ltuhj) do23372 ayfnwr1v=1,kuzxj1lo m0ibglfx(hj3ftvzu,ayfnwr1v) = qa8ltuhj 23372 continue 23373 continue endif if(qfx3vhct .eq. 5)then if(wr0lbopv .eq. 
1)then kwvo4ury = dlog(gyuq8dex + 0.03125d0) cpz4fgkx = dlog(y7sdgtqi(3+afpc0kns+hj3ftvzu)+0.01d0) do23378 ayfnwr1v=1,kuzxj1lo m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) = kwvo4ury m0ibglfx(2*hj3ftvzu, ayfnwr1v) = cpz4fgkx 23378 continue 23379 continue else if(wr0lbopv .eq. 2)then kwvo4ury = dlog((6.0/8.0)*gyuq8dex+0.000d0) cpz4fgkx = dlog(y7sdgtqi(3+afpc0kns+hj3ftvzu)+0.01d0) do23382 ayfnwr1v=1,kuzxj1lo m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) = kwvo4ury m0ibglfx(2*hj3ftvzu ,ayfnwr1v) = cpz4fgkx 23382 continue 23383 continue else cpz4fgkx = dlog(y7sdgtqi(3+afpc0kns+hj3ftvzu)+0.01d0) do23384 ayfnwr1v=1,kuzxj1lo m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) = dlog(tlgduey8(ayfnwr1v,hj3ftvzu) * + 0.03125d0) m0ibglfx(2*hj3ftvzu, ayfnwr1v) = cpz4fgkx 23384 continue 23385 continue endif endif endif if(qfx3vhct .eq. 3)then if(wr0lbopv .eq. 1)then kwvo4ury = dlog(gyuq8dex + 0.03125d0) cpz4fgkx = dlog(y7sdgtqi(3+hj3ftvzu)+0.03125d0) do23390 ayfnwr1v=1,kuzxj1lo m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) = kwvo4ury m0ibglfx(2*hj3ftvzu,ayfnwr1v) = cpz4fgkx 23390 continue 23391 continue else if(wr0lbopv .eq. 2)then kwvo4ury = dlog(gyuq8dex + 0.03125d0) kinit = y7sdgtqi(3+hj3ftvzu) cpz4fgkx = dlog(kinit) do23394 ayfnwr1v=1,kuzxj1lo fguvm9tyi = tlgduey8(ayfnwr1v,hj3ftvzu) - gyuq8dex if(fguvm9tyi .gt. 3.0 * gyuq8dex)then m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) = dlog(dsqrt(tlgduey8(ayfnwr1v,hj3 *ftvzu))) m0ibglfx(2*hj3ftvzu ,ayfnwr1v) = cpz4fgkx else m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) = kwvo4ury m0ibglfx(2*hj3ftvzu ,ayfnwr1v) = cpz4fgkx endif 23394 continue 23395 continue else if(wr0lbopv .eq. 3)then kwvo4ury = dlog(gyuq8dex + 0.03125d0) kinit = y7sdgtqi(3+hj3ftvzu) cpz4fgkx = dlog(kinit) do23400 ayfnwr1v=1,kuzxj1lo fguvm9tyi = tlgduey8(ayfnwr1v,hj3ftvzu) - gyuq8dex if(fguvm9tyi .gt. gyuq8dex)then m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) = dlog(0.5*(tlgduey8(ayfnwr1v,hj3f *tvzu)+gyuq8dex)) m0ibglfx(2*hj3ftvzu ,ayfnwr1v) = dlog(kinit / (fguvm9tyi / gyuq8de *x)) else if(tlgduey8(ayfnwr1v,hj3ftvzu) .lt. (gyuq8dex / 4.0))then m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) = dlog(gyuq8dex / 4.0) m0ibglfx(2*hj3ftvzu ,ayfnwr1v) = cpz4fgkx else m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) = kwvo4ury m0ibglfx(2*hj3ftvzu ,ayfnwr1v) = cpz4fgkx endif endif 23400 continue 23401 continue else cpz4fgkx = dlog(y7sdgtqi(3+hj3ftvzu)) do23406 ayfnwr1v=1,kuzxj1lo m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) = dlog(tlgduey8(ayfnwr1v,hj3ftvzu) * + 0.03125d0) m0ibglfx(2*hj3ftvzu, ayfnwr1v) = cpz4fgkx 23406 continue 23407 continue endif endif endif endif if(qfx3vhct .eq. 
8)then do23410 ayfnwr1v=1,kuzxj1lo m0ibglfx(hj3ftvzu,ayfnwr1v) = tlgduey8(ayfnwr1v,hj3ftvzu) 23410 continue 23411 continue endif return end subroutine dlgpwe0c(tlgduey8, wmat, m0ibglfx, t8hwvalr, ghz9vuba, *rbne6ouj, wpuarq2m, rsynp1go, n3iasxug, uaf2xgqy, kuzxj1lo, wy1vqf *zu, afpc0kns, br5ovgcj, dimu, hj3ftvzu, qfx3vhct, zjkrtol8, unhycz *0e, vm4xjosb) implicit logical (a-z) integer kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, hj3ftvzu, zj *krtol8, unhycz0e double precision tlgduey8(kuzxj1lo,afpc0kns), wmat(kuzxj1lo,*), m0 *ibglfx(wy1vqfzu,kuzxj1lo), t8hwvalr(afpc0kns,kuzxj1lo), vm4xjosb(k *uzxj1lo), ghz9vuba(kuzxj1lo,wy1vqfzu), rbne6ouj(kuzxj1lo,wy1vqfzu) *, wpuarq2m(dimu,kuzxj1lo), rsynp1go, n3iasxug, uaf2xgqy integer ayfnwr1v, qfx3vhct double precision xd4mybgja, xd4mybgjb, xd4mybgjc, anopu9vi logical lbgwvp3q double precision hdqsx7bk, dkdeta, dldk, ux3nadiw, ed2ldk2, n2kers *mx, bzmd6ftvmat(1,1), kkmat(1,1), nm0eljqk(1,1) integer hbsl0gto, dvhw1ulq, sguwj9ty double precision jtnbu2hz, uqnkc6zgd, uqnkc6zgt, dldshape double precision fvn3iasxug, xk7dnvei integer okobr6tcex br5ovgcj = 1 hbsl0gto = 1 n2kersmx = 0.990d0 n2kersmx = 0.995d0 if(qfx3vhct .eq. 1)then do23414 ayfnwr1v=1,kuzxj1lo xd4mybgja = t8hwvalr(hj3ftvzu,ayfnwr1v) * (1.0d0 - t8hwvalr(hj3ftv *zu,ayfnwr1v)) xd4mybgjb = xd4mybgja * wmat(ayfnwr1v,1) if(xd4mybgja .lt. n3iasxug)then xd4mybgja = n3iasxug endif if(xd4mybgjb .lt. n3iasxug)then xd4mybgjb = n3iasxug wpuarq2m(hj3ftvzu,ayfnwr1v) = uaf2xgqy else wpuarq2m(hj3ftvzu,ayfnwr1v) = dsqrt(xd4mybgjb) endif rbne6ouj(ayfnwr1v,hj3ftvzu) = xd4mybgjb ghz9vuba(ayfnwr1v,hj3ftvzu) = m0ibglfx(hj3ftvzu,ayfnwr1v) + (tlgdu *ey8(ayfnwr1v,hj3ftvzu)-t8hwvalr(hj3ftvzu,ayfnwr1v)) / xd4mybgja 23414 continue 23415 continue endif if(qfx3vhct .eq. 2)then do23422 ayfnwr1v=1,kuzxj1lo xd4mybgja = t8hwvalr(hj3ftvzu,ayfnwr1v) xd4mybgjb = xd4mybgja * wmat(ayfnwr1v,1) if(xd4mybgjb .lt. n3iasxug)then xd4mybgjb = n3iasxug wpuarq2m(hj3ftvzu,ayfnwr1v) = uaf2xgqy else wpuarq2m(hj3ftvzu,ayfnwr1v) = dsqrt(xd4mybgjb) endif rbne6ouj(ayfnwr1v,hj3ftvzu) = xd4mybgjb if(tlgduey8(ayfnwr1v,hj3ftvzu) .gt. 0.0d0)then xd4mybgjc = xd4mybgja if(xd4mybgjc .lt. n3iasxug)then xd4mybgjc = n3iasxug endif ghz9vuba(ayfnwr1v,hj3ftvzu) = m0ibglfx(hj3ftvzu,ayfnwr1v) + (tlgdu *ey8(ayfnwr1v,hj3ftvzu)-xd4mybgjc)/xd4mybgjc else ghz9vuba(ayfnwr1v,hj3ftvzu) = m0ibglfx(hj3ftvzu,ayfnwr1v) - 1.0d0 endif 23422 continue 23423 continue endif if(qfx3vhct .eq. 4)then do23432 ayfnwr1v=1,kuzxj1lo if((t8hwvalr(hj3ftvzu,ayfnwr1v) .lt. n3iasxug) .or. (t8hwvalr(hj3f *tvzu,ayfnwr1v) .gt. 1.0d0 - n3iasxug))then xd4mybgja = n3iasxug xd4mybgjb = xd4mybgja * wmat(ayfnwr1v,1) if(xd4mybgjb .lt. n3iasxug)then xd4mybgjb = n3iasxug wpuarq2m(hj3ftvzu,ayfnwr1v) = uaf2xgqy else wpuarq2m(hj3ftvzu,ayfnwr1v) = dsqrt(xd4mybgjb) endif rbne6ouj(ayfnwr1v,hj3ftvzu) = xd4mybgjb ghz9vuba(ayfnwr1v,hj3ftvzu) = m0ibglfx(hj3ftvzu,ayfnwr1v) + (tlgdu *ey8(ayfnwr1v,hj3ftvzu)-t8hwvalr(hj3ftvzu,ayfnwr1v)) / xd4mybgja else xd4mybgja = -(1.0d0 - t8hwvalr(hj3ftvzu,ayfnwr1v)) * dlog(1.0d0 - *t8hwvalr(hj3ftvzu,ayfnwr1v)) if(xd4mybgja .lt. n3iasxug)then xd4mybgja = n3iasxug endif xd4mybgjb = -xd4mybgja * wmat(ayfnwr1v,1) * dlog(1.0d0 - t8hwvalr( *hj3ftvzu,ayfnwr1v)) / t8hwvalr(hj3ftvzu,ayfnwr1v) if(xd4mybgjb .lt. 
n3iasxug)then xd4mybgjb = n3iasxug endif rbne6ouj(ayfnwr1v,hj3ftvzu) = xd4mybgjb wpuarq2m(hj3ftvzu,ayfnwr1v) = dsqrt(xd4mybgjb) ghz9vuba(ayfnwr1v,hj3ftvzu) = m0ibglfx(hj3ftvzu,ayfnwr1v) + (tlgdu *ey8(ayfnwr1v,hj3ftvzu)-t8hwvalr(hj3ftvzu,ayfnwr1v)) / xd4mybgja endif 23432 continue 23433 continue endif if(qfx3vhct .eq. 5)then fvn3iasxug = 1.0d-20 anopu9vi = 34.0d0 do23444 ayfnwr1v=1,kuzxj1lo if(m0ibglfx(2*hj3ftvzu,ayfnwr1v) .gt. anopu9vi)then jtnbu2hz = dexp(anopu9vi) lbgwvp3q = .true. else if(m0ibglfx(2*hj3ftvzu,ayfnwr1v) .lt. -anopu9vi)then jtnbu2hz = dexp(-anopu9vi) lbgwvp3q = .true. else jtnbu2hz = dexp(m0ibglfx(2*hj3ftvzu,ayfnwr1v)) lbgwvp3q = .false. endif endif call vdgam1(jtnbu2hz, uqnkc6zgd, okobr6tcex) if(okobr6tcex .ne. 1)then call intpr("error in dlgpwe0c okobr6tcex 1: ",-1,okobr6tcex,1) endif xk7dnvei = t8hwvalr(hj3ftvzu,ayfnwr1v) if(xk7dnvei .lt. fvn3iasxug)then xk7dnvei = fvn3iasxug endif dldshape = dlog(tlgduey8(ayfnwr1v,hj3ftvzu)) + dlog(jtnbu2hz) - dl *og(xk7dnvei) + 1.0d0 - uqnkc6zgd - tlgduey8(ayfnwr1v,hj3ftvzu) / x *k7dnvei call vtgam1(jtnbu2hz, uqnkc6zgt, okobr6tcex) if(okobr6tcex .ne. 1)then call intpr("error in dlgpwe0c okobr6tcex 2: ",-1,okobr6tcex,1) endif rbne6ouj(ayfnwr1v,2*hj3ftvzu-1) = wmat(ayfnwr1v,1) * jtnbu2hz xd4mybgja = jtnbu2hz * uqnkc6zgt - 1.0d0 rbne6ouj(ayfnwr1v,2*hj3ftvzu ) = wmat(ayfnwr1v,1) * jtnbu2hz * xd4 *mybgja if(rbne6ouj(ayfnwr1v,2*hj3ftvzu-1) .lt. n3iasxug)then rbne6ouj(ayfnwr1v,2*hj3ftvzu-1) = n3iasxug wpuarq2m(2*hj3ftvzu-1,ayfnwr1v) = uaf2xgqy else wpuarq2m(2*hj3ftvzu-1,ayfnwr1v) = dsqrt(rbne6ouj(ayfnwr1v,2*hj3ftv *zu-1)) endif if(rbne6ouj(ayfnwr1v,2*hj3ftvzu) .lt. n3iasxug)then rbne6ouj(ayfnwr1v,2*hj3ftvzu) = n3iasxug wpuarq2m(2*hj3ftvzu,ayfnwr1v) = uaf2xgqy else wpuarq2m(2*hj3ftvzu,ayfnwr1v) = dsqrt(rbne6ouj(ayfnwr1v,2*hj3ftvzu *)) endif if(xd4mybgja .lt. fvn3iasxug)then xd4mybgja = fvn3iasxug endif ghz9vuba(ayfnwr1v,2*hj3ftvzu-1) = m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) *+ tlgduey8(ayfnwr1v,hj3ftvzu) / xk7dnvei - 1.0d0 ghz9vuba(ayfnwr1v,2*hj3ftvzu ) = m0ibglfx(2*hj3ftvzu ,ayfnwr1v) + *dldshape / xd4mybgja 23444 continue 23445 continue endif if(qfx3vhct .eq. 3)then anopu9vi = 34.0d0 fvn3iasxug = 1.0d-20 do23464 ayfnwr1v=1,kuzxj1lo if(m0ibglfx(2*hj3ftvzu,ayfnwr1v) .gt. anopu9vi)then hdqsx7bk = dexp(anopu9vi) lbgwvp3q = .true. else if(m0ibglfx(2*hj3ftvzu,ayfnwr1v) .lt. -anopu9vi)then hdqsx7bk = dexp(-anopu9vi) lbgwvp3q = .true. else hdqsx7bk = dexp(m0ibglfx(2*hj3ftvzu,ayfnwr1v)) lbgwvp3q = .false. endif endif xk7dnvei = t8hwvalr(hj3ftvzu,ayfnwr1v) if(xk7dnvei .lt. fvn3iasxug)then xk7dnvei = fvn3iasxug endif call vdgam1(tlgduey8(ayfnwr1v,hj3ftvzu) + hdqsx7bk, xd4mybgja, oko *br6tcex) if(okobr6tcex .ne. 1)then endif call vdgam1(hdqsx7bk, xd4mybgjb, okobr6tcex) if(okobr6tcex .ne. 1)then endif dldk = xd4mybgja - xd4mybgjb - (tlgduey8(ayfnwr1v,hj3ftvzu) + hdqs *x7bk) / (xk7dnvei + hdqsx7bk) + 1.0d0 + dlog(hdqsx7bk / (xk7dnvei *+ hdqsx7bk)) dkdeta = hdqsx7bk kkmat(1,1) = hdqsx7bk nm0eljqk(1,1) = xk7dnvei sguwj9ty = 5000 call enbin9(bzmd6ftvmat, kkmat, nm0eljqk, n2kersmx, hbsl0gto, dvhw *1ulq, hbsl0gto, ux3nadiw, rsynp1go, sguwj9ty) if(dvhw1ulq .ne. 1)then zjkrtol8 = 5 return endif ed2ldk2 = -bzmd6ftvmat(1,1) - 1.0d0 / hdqsx7bk + 1.0d0 / (hdqsx7bk * + xk7dnvei) rbne6ouj(ayfnwr1v,2*hj3ftvzu-1) = wmat(ayfnwr1v,1) * xk7dnvei * hd *qsx7bk / (xk7dnvei + hdqsx7bk) rbne6ouj(ayfnwr1v,2*hj3ftvzu ) = wmat(ayfnwr1v,1) * hdqsx7bk * (-b *zmd6ftvmat(1,1)*hdqsx7bk - 1.0d0 + hdqsx7bk / (hdqsx7bk + xk7dnvei *)) if(rbne6ouj(ayfnwr1v,2*hj3ftvzu-1) .lt. 
n3iasxug)then rbne6ouj(ayfnwr1v,2*hj3ftvzu-1) = n3iasxug wpuarq2m(2*hj3ftvzu-1,ayfnwr1v) = uaf2xgqy else wpuarq2m(2*hj3ftvzu-1,ayfnwr1v) = dsqrt(rbne6ouj(ayfnwr1v,2*hj3ftv *zu-1)) endif if(rbne6ouj(ayfnwr1v,2*hj3ftvzu) .lt. n3iasxug)then rbne6ouj(ayfnwr1v,2*hj3ftvzu) = n3iasxug wpuarq2m(2*hj3ftvzu,ayfnwr1v) = uaf2xgqy else wpuarq2m(2*hj3ftvzu,ayfnwr1v) = dsqrt(rbne6ouj(ayfnwr1v,2*hj3ftvzu *)) endif ghz9vuba(ayfnwr1v,2*hj3ftvzu-1) = m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) *+ tlgduey8(ayfnwr1v,hj3ftvzu) / xk7dnvei - 1.0d0 ghz9vuba(ayfnwr1v,2*hj3ftvzu ) = m0ibglfx(2*hj3ftvzu ,ayfnwr1v) + *dldk / (dkdeta * ed2ldk2) 23464 continue 23465 continue endif if(qfx3vhct .eq. 8)then do23484 ayfnwr1v=1,kuzxj1lo rbne6ouj(ayfnwr1v,hj3ftvzu) = wmat(ayfnwr1v,1) wpuarq2m(hj3ftvzu,ayfnwr1v) = dsqrt(rbne6ouj(ayfnwr1v,hj3ftvzu)) ghz9vuba(ayfnwr1v,hj3ftvzu) = tlgduey8(ayfnwr1v,hj3ftvzu) 23484 continue 23485 continue endif if(unhycz0e .eq. 1)then if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq. 5))then do23490 ayfnwr1v=1,kuzxj1lo ghz9vuba(ayfnwr1v,2*hj3ftvzu-1) = ghz9vuba(ayfnwr1v,2*hj3ftvzu-1) *- vm4xjosb(ayfnwr1v) 23490 continue 23491 continue else do23492 ayfnwr1v=1,kuzxj1lo ghz9vuba(ayfnwr1v,hj3ftvzu) = ghz9vuba(ayfnwr1v,hj3ftvzu) - vm4xjo *sb(ayfnwr1v) 23492 continue 23493 continue endif endif return end subroutine cqo2f(lncwkfq7, tlgduey8, kifxa0he, wmat, m0ibglfx, vm4 *xjosb, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, * fasrkub3, ges1xpkr, kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, * zjkrtol8, xui7hqwl, tgiyxdw1, dufozmt7, tlq9wpes, beta, twk, wkmm *, y7sdgtqi) implicit logical (a-z) integer xui7hqwl(18), tgiyxdw1(*), dufozmt7(*) integer kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, zjkrtol8, ge *s1xpkr(*) double precision lncwkfq7(kuzxj1lo,*), tlgduey8(kuzxj1lo,afpc0kns) *, kifxa0he(kuzxj1lo,*), wmat(kuzxj1lo,*), m0ibglfx(wy1vqfzu,kuzxj1 *lo), vm4xjosb(kuzxj1lo), t8hwvalr(afpc0kns,kuzxj1lo) double precision ghz9vuba(kuzxj1lo,wy1vqfzu), rbne6ouj(kuzxj1lo,wy *1vqfzu), wpuarq2m(dimu,kuzxj1lo), w8znmyce(br5ovgcj,*) double precision vc6hatuj(br5ovgcj,*), fasrkub3(*), tlq9wpes, beta *(*), y7sdgtqi(*) double precision twk(wy1vqfzu,kuzxj1lo,2), wkmm(wy1vqfzu * (wy1vqf *zu + 1)) integer ayfnwr1v, yq6lorbx, gp1jxzuh, hyqwtp6i, ptr, i1loc, i2, iz *ero0, iter, fmzq7aob, xwdf5ltg, dimw, f7svlajr, qfx3vhct, c5aesxku *l integer job, info, qemj9asg, xlpjcg3s, eu3oxvyb, vtsou9pz, unhycz0 *e, zaupqv9b integer hbsl0gto, wr0lbopv double precision rpto5qwb, n3iasxug, pvofyg8z, wiptsjx8, uylxqtc7, * bh2vgiay, uaf2xgqy, vsoihn1r, rsynp1go double precision hmayv1xt1, hmayv1xt2 integer x1jrewny hbsl0gto = 1 x1jrewny = 0 kifxa0he(1,1) = 1 wkmm(1) = 0.0d0 xwdf5ltg = xui7hqwl(1) fmzq7aob = xui7hqwl(2) xlpjcg3s = xui7hqwl(3) dimw = xui7hqwl(4) f7svlajr = xui7hqwl(5) qfx3vhct = xui7hqwl(6) c5aesxkul = xui7hqwl(7) xui7hqwl(9) = 0 eu3oxvyb = xui7hqwl(11) vtsou9pz = xui7hqwl(12) unhycz0e = xui7hqwl(14) zaupqv9b = xui7hqwl(15) wr0lbopv = xui7hqwl(18) n3iasxug = y7sdgtqi(1) uaf2xgqy = dsqrt(n3iasxug) if((qfx3vhct .eq. 1) .or. (qfx3vhct .eq. 4))then vsoihn1r = dlog(n3iasxug) endif bh2vgiay = y7sdgtqi(2) rsynp1go = y7sdgtqi(3) uylxqtc7 = 0.0d0 izero0 = 0 zjkrtol8 = 1 call qpsedg8xf(tgiyxdw1, dufozmt7, xwdf5ltg) hyqwtp6i = xwdf5ltg * (xwdf5ltg+1) / 2 call flncwkfq72(lncwkfq7, w8znmyce, kuzxj1lo, wy1vqfzu, br5ovgcj, *xwdf5ltg, qfx3vhct, afpc0kns, fmzq7aob, eu3oxvyb, hyqwtp6i, tgiyxd *w1, dufozmt7, unhycz0e, vm4xjosb) 653 hmayv1xt2 = 1.0d0 if(f7svlajr .eq. 
0)then do23498 yq6lorbx=1,afpc0kns call ietam6(tlgduey8, m0ibglfx, y7sdgtqi, kuzxj1lo, wy1vqfzu, afpc *0kns, qfx3vhct, yq6lorbx, wmat, wr0lbopv) 23498 continue 23499 continue else if(f7svlajr .eq. 2)then call pkc4ejib(w8znmyce, beta, m0ibglfx, kuzxj1lo, wy1vqfzu, br5ovg *cj, xlpjcg3s, vtsou9pz, izero0, qfx3vhct, unhycz0e, vm4xjosb) endif endif call nipyajc1(m0ibglfx, t8hwvalr, kuzxj1lo, wy1vqfzu, afpc0kns, qf *x3vhct, izero0) if(f7svlajr .eq. 2)then call shjlwft5(qfx3vhct, tlgduey8, wmat, t8hwvalr, kuzxj1lo, wy1vqf *zu, afpc0kns, dimw, m0ibglfx, rpto5qwb, izero0, n3iasxug, vsoihn1r *, hbsl0gto) else rpto5qwb = -1.0d0 endif do23504 iter=1,c5aesxkul do23506 yq6lorbx=1,afpc0kns call dlgpwe0c(tlgduey8, wmat, m0ibglfx, t8hwvalr, ghz9vuba, rbne6o *uj, wpuarq2m, rsynp1go, n3iasxug, uaf2xgqy, kuzxj1lo, wy1vqfzu, af *pc0kns, br5ovgcj, dimu, yq6lorbx, qfx3vhct, zjkrtol8, unhycz0e, vm *4xjosb) 23506 continue 23507 continue do23508 yq6lorbx=1,xlpjcg3s do23510 ayfnwr1v=1,br5ovgcj vc6hatuj(ayfnwr1v,yq6lorbx) = w8znmyce(ayfnwr1v,yq6lorbx) 23510 continue 23511 continue 23508 continue 23509 continue do23512 yq6lorbx=1,xlpjcg3s ptr = 1 do23514 i1loc=1,kuzxj1lo do23516 i2=1,wy1vqfzu vc6hatuj(ptr,yq6lorbx) = wpuarq2m(i2,i1loc) * vc6hatuj(ptr,yq6lorb *x) ptr = ptr + 1 23516 continue 23517 continue 23514 continue 23515 continue 23512 continue 23513 continue do23518 gp1jxzuh=1,xlpjcg3s ges1xpkr(gp1jxzuh) = gp1jxzuh 23518 continue 23519 continue pvofyg8z = 1.0d-7 call vqrdca(vc6hatuj,br5ovgcj,br5ovgcj,xlpjcg3s,fasrkub3,ges1xpkr, *twk,qemj9asg,pvofyg8z) if(qemj9asg .ne. xlpjcg3s)then zjkrtol8 = 2 return endif do23522 ayfnwr1v=1,kuzxj1lo do23524 yq6lorbx=1,wy1vqfzu twk(yq6lorbx,ayfnwr1v,1) = wpuarq2m(yq6lorbx,ayfnwr1v) * ghz9vuba( *ayfnwr1v,yq6lorbx) 23524 continue 23525 continue 23522 continue 23523 continue job = 101 call vdqrsl(vc6hatuj,br5ovgcj,br5ovgcj,qemj9asg,fasrkub3, twk, uyl *xqtc7, twk(1,1,2), beta, uylxqtc7,m0ibglfx,job,info) do23526 ayfnwr1v=1,kuzxj1lo do23528 yq6lorbx=1,wy1vqfzu m0ibglfx(yq6lorbx,ayfnwr1v) = m0ibglfx(yq6lorbx,ayfnwr1v) / wpuarq *2m(yq6lorbx,ayfnwr1v) 23528 continue 23529 continue 23526 continue 23527 continue if(unhycz0e .eq. 1)then if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq. 5))then do23534 ayfnwr1v=1,kuzxj1lo do23536 yq6lorbx=1,afpc0kns m0ibglfx(2*yq6lorbx-1,ayfnwr1v) = m0ibglfx(2*yq6lorbx-1,ayfnwr1v) *+ vm4xjosb(ayfnwr1v) 23536 continue 23537 continue 23534 continue 23535 continue else do23538 ayfnwr1v=1,kuzxj1lo do23540 yq6lorbx=1,wy1vqfzu m0ibglfx(yq6lorbx,ayfnwr1v) = m0ibglfx(yq6lorbx,ayfnwr1v) + vm4xjo *sb(ayfnwr1v) 23540 continue 23541 continue 23538 continue 23539 continue endif endif call nipyajc1(m0ibglfx, t8hwvalr, kuzxj1lo, wy1vqfzu, afpc0kns, qf *x3vhct, izero0) call shjlwft5(qfx3vhct, tlgduey8, wmat, t8hwvalr, kuzxj1lo, wy1vqf *zu, afpc0kns, dimw, m0ibglfx, tlq9wpes,izero0,n3iasxug,vsoihn1r, h *bsl0gto) wiptsjx8 = dabs(tlq9wpes - rpto5qwb) / (1.0d0 + dabs(tlq9wpes)) if(wiptsjx8 .lt. bh2vgiay)then zjkrtol8 = 0 xui7hqwl(8) = iter if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq. 5))then call shjlwft5(qfx3vhct, tlgduey8, wmat, t8hwvalr, kuzxj1lo, wy1vqf *zu, afpc0kns, dimw, m0ibglfx, tlq9wpes,izero0,n3iasxug,vsoihn1r, i *zero0) endif x1jrewny = 1 goto 20097 else rpto5qwb = tlq9wpes x1jrewny = 0 endif 23504 continue 23505 continue 20097 hmayv1xt1 = 0.0d0 if(x1jrewny .eq. 1)then return endif if(f7svlajr .eq. 1 .or. f7svlajr .eq. 
2)then f7svlajr = 0 xui7hqwl(9) = 1 goto 653 endif zjkrtol8 = 3 return end subroutine cqo1f(lncwkfq7, tlgduey8, kifxa0he, wmat, m0ibglfx, vm4 *xjosb, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, * fasrkub3, ges1xpkr, kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, * zjkrtol8, xui7hqwl, tgiyxdw1, dufozmt7, tlq9wpes, beta, twk, wkmm *, y7sdgtqi) implicit logical (a-z) integer xui7hqwl(18), tgiyxdw1(*), dufozmt7(*) integer kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, zjkrtol8, ge *s1xpkr(*) double precision lncwkfq7(kuzxj1lo,*), tlgduey8(kuzxj1lo,afpc0kns) *, wmat(kuzxj1lo,*), m0ibglfx(wy1vqfzu,kuzxj1lo), vm4xjosb(kuzxj1lo *), t8hwvalr(afpc0kns,kuzxj1lo), kifxa0he(kuzxj1lo,*), ghz9vuba(kuz *xj1lo,wy1vqfzu), rbne6ouj(kuzxj1lo,wy1vqfzu), wpuarq2m(dimu,kuzxj1 *lo), w8znmyce(br5ovgcj,*) double precision vc6hatuj(br5ovgcj,*), fasrkub3(*), tlq9wpes, beta *(*), y7sdgtqi(*) double precision twk(br5ovgcj,3), wkmm(wy1vqfzu*(wy1vqfzu+1)) integer ayfnwr1v, hj3ftvzu, hyqwtp6i, izero0, iter, fmzq7aob, unhy *cz0e, xwdf5ltg, dimw, f7svlajr, qfx3vhct, c5aesxkul integer job, info, qemj9asg, xlpjcg3s, vtsou9pz, zaupqv9b integer hbsl0gto, p1, wr0lbopv double precision rpto5qwb, n3iasxug, pvofyg8z, wiptsjx8, uylxqtc7, * bh2vgiay, uaf2xgqy, vsoihn1r, rsynp1go integer gp1jxzuh double precision aqg1vdmo, hmayv1xt aqg1vdmo = 0.0d0 hbsl0gto = 1 wkmm(1) = 1.0d0 call intpr("entering cqo1f hbsl0gto ------------------------------ *-: ",-1,hbsl0gto,1) call intpr("in cqo1f afpc0kns: ",-1,afpc0kns,1) xwdf5ltg = xui7hqwl(1) fmzq7aob = xui7hqwl(2) xlpjcg3s = xui7hqwl(3) dimw = xui7hqwl(4) f7svlajr = xui7hqwl(5) qfx3vhct = xui7hqwl(6) c5aesxkul = xui7hqwl(7) xui7hqwl(9) = 0 vtsou9pz = xui7hqwl(12) if(vtsou9pz .ne. 1)then zjkrtol8 = 4 return endif unhycz0e = xui7hqwl(14) zaupqv9b = xui7hqwl(15) p1 = xui7hqwl(16) wr0lbopv = xui7hqwl(18) call intpr("Entry to cqo1f: f7svlajr ",-1,f7svlajr,1) n3iasxug = y7sdgtqi(1) uaf2xgqy = dsqrt(n3iasxug) if((qfx3vhct .eq. 1) .or. (qfx3vhct .eq. 4))then vsoihn1r = dlog(n3iasxug) endif bh2vgiay = y7sdgtqi(2) rsynp1go = y7sdgtqi(3) uylxqtc7 = 0.0d0 izero0 = 0 zjkrtol8 = 1 call qpsedg8xf(tgiyxdw1, dufozmt7, xwdf5ltg) hyqwtp6i = xwdf5ltg * (xwdf5ltg+1) / 2 call flncwkfq71(lncwkfq7, w8znmyce, kuzxj1lo, xwdf5ltg, qfx3vhct, *vm4xjosb, br5ovgcj, xlpjcg3s, hyqwtp6i, tgiyxdw1, dufozmt7, kifxa0 *he, p1, unhycz0e) call dblepr("cqo1f: vm4xjosb()",-1,vm4xjosb,kuzxj1lo) call dblepr("cqo1f: w8znmyce(,)",-1,w8znmyce,br5ovgcj*xlpjcg3s) call dblepr("cqo1f: wmat(,1)",-1,wmat(1,1),kuzxj1lo) do23554 hj3ftvzu=1,afpc0kns call intpr("cqo1f: hj3ftvzu======================: ",-1,hj3ftvzu,1 *) 653 hmayv1xt = 1.0d0 if(f7svlajr .eq. 0)then call intpr("cqo1f: calling ietam6 ",-1,hj3ftvzu,1) call ietam6(tlgduey8, m0ibglfx, y7sdgtqi, kuzxj1lo, wy1vqfzu, afpc *0kns, qfx3vhct, hj3ftvzu, wmat, wr0lbopv) else if(f7svlajr .eq. 2)then call intpr("cqo1f: calling pkc4ejib; vtsou9pz== ",-1,vtsou9pz,1) call pkc4ejib(w8znmyce, beta(1+(hj3ftvzu-1)*xlpjcg3s), m0ibglfx, k *uzxj1lo, wy1vqfzu, br5ovgcj, xlpjcg3s, vtsou9pz, hj3ftvzu, qfx3vhc *t, unhycz0e, vm4xjosb) endif endif call nipyajc1(m0ibglfx, t8hwvalr, kuzxj1lo, wy1vqfzu, afpc0kns, qf *x3vhct, hj3ftvzu) if(f7svlajr .eq. 
2)then call shjlwft5(qfx3vhct, tlgduey8, wmat, t8hwvalr, kuzxj1lo, wy1vqf *zu, afpc0kns, dimw, m0ibglfx, rpto5qwb, hj3ftvzu, n3iasxug, vsoihn *1r, hbsl0gto) else rpto5qwb = -1.0d0 endif do23562 iter=1,c5aesxkul call intpr("iter: ",-1,iter,1) call intpr("posn 7: ",-1,hbsl0gto,1) call intpr("qfx3vhct: ",-1,qfx3vhct,1) call dblepr("rpto5qwb",-1,rpto5qwb,1) call dlgpwe0c(tlgduey8, wmat, m0ibglfx, t8hwvalr, ghz9vuba, rbne6o *uj, wpuarq2m, rsynp1go, n3iasxug, uaf2xgqy, kuzxj1lo, wy1vqfzu, af *pc0kns, br5ovgcj, dimu, hj3ftvzu, qfx3vhct, zjkrtol8, unhycz0e, vm *4xjosb) call dblepr("cqo1f: m0ibglfx",-1,m0ibglfx,wy1vqfzu*kuzxj1lo) call dblepr("cqo1f: wpuarq2m",-1,wpuarq2m,dimu*kuzxj1lo) call dblepr("cqo1f: ghz9vuba",-1,ghz9vuba,kuzxj1lo*wy1vqfzu) call dblepr("cqo1f: rbne6ouj",-1,rbne6ouj,kuzxj1lo*wy1vqfzu) do23564 gp1jxzuh=1,xlpjcg3s do23566 ayfnwr1v=1,br5ovgcj vc6hatuj(ayfnwr1v,gp1jxzuh) = w8znmyce(ayfnwr1v,gp1jxzuh) 23566 continue 23567 continue 23564 continue 23565 continue call intpr("posn 3: ",-1,hbsl0gto,1) if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq. 5))then do23570 gp1jxzuh=1,xlpjcg3s do23572 ayfnwr1v=1,kuzxj1lo vc6hatuj(2*ayfnwr1v-1,gp1jxzuh) = wpuarq2m(2*hj3ftvzu-1,ayfnwr1v) ** vc6hatuj(2*ayfnwr1v-1,gp1jxzuh) vc6hatuj(2*ayfnwr1v ,gp1jxzuh) = wpuarq2m(2*hj3ftvzu ,ayfnwr1v) * *vc6hatuj(2*ayfnwr1v ,gp1jxzuh) 23572 continue 23573 continue 23570 continue 23571 continue else do23574 gp1jxzuh=1,xlpjcg3s do23576 ayfnwr1v=1,kuzxj1lo vc6hatuj(ayfnwr1v,gp1jxzuh) = wpuarq2m(hj3ftvzu,ayfnwr1v) * vc6hat *uj(ayfnwr1v,gp1jxzuh) 23576 continue 23577 continue 23574 continue 23575 continue endif call intpr("posn 4: ",-1,hbsl0gto,1) do23578 gp1jxzuh=1,xlpjcg3s ges1xpkr(gp1jxzuh) = gp1jxzuh 23578 continue 23579 continue call dblepr("cqo1f: vc6hatuj",-1,vc6hatuj,br5ovgcj*xlpjcg3s) call intpr("iter: ",-1,iter,1) pvofyg8z = 1.0d-7 call vqrdca(vc6hatuj,br5ovgcj,br5ovgcj,xlpjcg3s,fasrkub3,ges1xpkr, *twk,qemj9asg,pvofyg8z) call intpr("ges1xpkr: ",-1,ges1xpkr,xlpjcg3s) if(qemj9asg .ne. xlpjcg3s)then zjkrtol8 = 2 return endif if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq. 5))then do23584 ayfnwr1v=1,kuzxj1lo twk(2*ayfnwr1v-1,1) = wpuarq2m(2*hj3ftvzu-1,ayfnwr1v) * ghz9vuba(a *yfnwr1v,2*hj3ftvzu-1) twk(2*ayfnwr1v ,1) = wpuarq2m(2*hj3ftvzu ,ayfnwr1v) * ghz9vuba(ayf *nwr1v,2*hj3ftvzu ) 23584 continue 23585 continue else do23586 ayfnwr1v=1,kuzxj1lo twk(ayfnwr1v,1) = wpuarq2m(hj3ftvzu,ayfnwr1v) * ghz9vuba(ayfnwr1v, *hj3ftvzu) 23586 continue 23587 continue endif call intpr("posn 5: ",-1,hbsl0gto,1) job = 101 call intpr("posn 6: ",-1,hbsl0gto,1) call vdqrsl(vc6hatuj,br5ovgcj,br5ovgcj,qemj9asg,fasrkub3, twk(1,1) *, uylxqtc7, twk(1,2), beta(1+(hj3ftvzu-1)*xlpjcg3s), uylxqtc7,twk( *1,3),job,info) call dblepr("beta(1+(hj3ftvzu-1)*xlpjcg3s)",-1,beta(1+(hj3ftvzu-1) **xlpjcg3s),xlpjcg3s) if(zaupqv9b .gt. 1)then endif do23590 gp1jxzuh=1,xlpjcg3s twk(gp1jxzuh,1) = beta((hj3ftvzu-1)*xlpjcg3s + gp1jxzuh) 23590 continue 23591 continue do23592 gp1jxzuh=1,xlpjcg3s beta((hj3ftvzu-1)*xlpjcg3s + ges1xpkr(gp1jxzuh)) = twk(gp1jxzuh,1) 23592 continue 23593 continue call intpr("posn 7: ",-1,hbsl0gto,1) if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq. 5))then do23596 ayfnwr1v=1,kuzxj1lo m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) = twk(2*ayfnwr1v-1,3) / wpuarq2m(2 **hj3ftvzu-1,ayfnwr1v) m0ibglfx(2*hj3ftvzu ,ayfnwr1v) = twk(2*ayfnwr1v ,3) / wpuarq2m(2*h *j3ftvzu ,ayfnwr1v) 23596 continue 23597 continue if(unhycz0e .eq. 
1)then do23600 ayfnwr1v=1,kuzxj1lo m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) = m0ibglfx(2*hj3ftvzu-1,ayfnwr1v) *+ vm4xjosb(ayfnwr1v) 23600 continue 23601 continue endif else do23602 ayfnwr1v=1,kuzxj1lo m0ibglfx(hj3ftvzu,ayfnwr1v) = twk(ayfnwr1v,3) / wpuarq2m(hj3ftvzu, *ayfnwr1v) 23602 continue 23603 continue if(unhycz0e .eq. 1)then do23606 ayfnwr1v=1,kuzxj1lo m0ibglfx(hj3ftvzu,ayfnwr1v) = m0ibglfx(hj3ftvzu,ayfnwr1v) + vm4xjo *sb(ayfnwr1v) 23606 continue 23607 continue endif endif call intpr("posn 8: ",-1,hbsl0gto,1) call nipyajc1(m0ibglfx, t8hwvalr, kuzxj1lo, wy1vqfzu, afpc0kns, qf *x3vhct, hj3ftvzu) call intpr("posn 8b: ",-1,hbsl0gto,1) call shjlwft5(qfx3vhct, tlgduey8, wmat, t8hwvalr, kuzxj1lo, wy1vqf *zu, afpc0kns, dimw, m0ibglfx, tlq9wpes,hj3ftvzu,n3iasxug,vsoihn1r, *hbsl0gto) call intpr("posn 8c: ",-1,hbsl0gto,1) wiptsjx8 = dabs(tlq9wpes - rpto5qwb) / (1.0d0 + dabs(tlq9wpes)) call intpr("cqo1f: iter -------------",-1,iter,1) call dblepr("cqo1f: wiptsjx8",-1,wiptsjx8,1) if(wiptsjx8 .lt. bh2vgiay)then zjkrtol8 = 0 xui7hqwl(8)=iter call intpr("cqo1f xui7hqwl(8): ",-1,xui7hqwl(8),1) if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq. 5))then call shjlwft5(qfx3vhct, tlgduey8, wmat, t8hwvalr, kuzxj1lo, wy1vqf *zu, afpc0kns, dimw, m0ibglfx, tlq9wpes,hj3ftvzu,n3iasxug,vsoihn1r, * izero0) endif aqg1vdmo = aqg1vdmo + tlq9wpes goto 1011 else rpto5qwb = tlq9wpes endif call intpr("posn 9: ",-1,hbsl0gto,1) 23562 continue 23563 continue call intpr("cqo1f; unsuccessful convergence: ",-1,hbsl0gto,1) if(f7svlajr .eq. 1)then f7svlajr = 0 xui7hqwl(9) = 1 goto 653 endif zjkrtol8 = 3 1011 hmayv1xt = 1.0d0 23554 continue 23555 continue call intpr("exiting cqo1f hbsl0gto ============================ : *",-1,hbsl0gto,1) tlq9wpes = aqg1vdmo return end subroutine vcao6f(lncwkfq7, tlgduey8, wmat, m0ibglfx, t8hwvalr, gh *z9vuba, rbne6ouj, wpuarq2m, vc6hatuj, fasrkub3, ges1xpkr, kuzxj1lo *, wy1vqfzu, afpc0kns, br5ovgcj, dimu, zjkrtol8, xui7hqwl, tlq9wpes *, beta, twk, wkmm, y7sdgtqi, psdvgce3,qfozcl5b, kiye1wjz, ezlgm2up *, nef, which, ub4xioar,kispwgx3,s0, zyodca3j, lxyst1eb, mbvnaor6, *hjm2ktyr, jnxpuym2, hnpt1zym, fzm1ihwj, iz2nbfjc, work1, wk2, wwkm *m, work3, sgdub, bmb, ifys6woa, mwk, ttwk, rpyis2kc, zv2xfhei, nbz *jkpi3, acpios9q, itwk, jwbkl9fp) implicit logical (a-z) integer xui7hqwl(19) integer kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, zjkrtol8, ge *s1xpkr(*) double precision lncwkfq7(kuzxj1lo,*), tlgduey8(kuzxj1lo,afpc0kns) *, wmat(kuzxj1lo,*), m0ibglfx(wy1vqfzu,kuzxj1lo), t8hwvalr(afpc0kns *,kuzxj1lo) double precision ghz9vuba(kuzxj1lo,wy1vqfzu), rbne6ouj(kuzxj1lo,wy *1vqfzu), wpuarq2m(dimu,kuzxj1lo) double precision vc6hatuj(br5ovgcj,2), fasrkub3(*), tlq9wpes, beta *(*), y7sdgtqi(*) double precision twk(br5ovgcj,3), wkmm(wy1vqfzu*(wy1vqfzu+1)) integer hj3ftvzu, ehtjigf4, izero0, iter, xwdf5ltg, dimw, f7svlajr *, qfx3vhct, c5aesxkul integer vtsou9pz, zaupqv9b, xlpjcg3s integer hbsl0gto, sedf7mxb double precision rpto5qwb, n3iasxug, wiptsjx8, uylxqtc7, bh2vgiay, * uaf2xgqy, vsoihn1r, rsynp1go double precision aqg1vdmo, hmayv1xt integer psdvgce3(15), qfozcl5b, ezlgm2up(*),nef(*),which(*), jnxpu *ym2(*), hnpt1zym(*), fzm1ihwj(*), iz2nbfjc(*) integer wr0lbopv, acpios9q(*), itwk(*), jwbkl9fp(*) integer nbzjkpi3(*) double precision kiye1wjz(*) double precision ub4xioar(qfozcl5b,kuzxj1lo), kispwgx3(kuzxj1lo,*) *,s0(wy1vqfzu), zyodca3j(qfozcl5b,kuzxj1lo), lxyst1eb(qfozcl5b,kuzx *j1lo), mbvnaor6(kuzxj1lo,*), hjm2ktyr(qfozcl5b,*), work1(*), wk2(k *uzxj1lo,qfozcl5b), work3(*), sgdub(*), bmb(*), ifys6woa(*), mwk(*) *, rpyis2kc(*), 
zv2xfhei(*) integer qes4mujl integer ayfnwr1v, kij0gwer, xumj5dnk integer irhm4cfa, lyma1kwc double precision xbignn(2), lncrw8mg, ufkq9rpg, r3eoxkzp, wld4qctn double precision zpcqv3uj, resss double precision vm4xjosb(2) lncrw8mg=0.0d0 ufkq9rpg=0.0d0 r3eoxkzp=0.0d0 wld4qctn=0.0d0 irhm4cfa = xui7hqwl(19) aqg1vdmo = 0.0d0 hbsl0gto = 1 wkmm(1) = 1.0d0 twk(1,1) = 1.0d0 xwdf5ltg = xui7hqwl(1) xlpjcg3s = xui7hqwl(3) dimw = xui7hqwl(4) f7svlajr = xui7hqwl(5) qfx3vhct = xui7hqwl(6) c5aesxkul = xui7hqwl(7) xui7hqwl(9) = 0 lyma1kwc = xui7hqwl(11) vtsou9pz = xui7hqwl(12) if((vtsou9pz .ne. 1) .or. (lyma1kwc .ne. xwdf5ltg))then zjkrtol8 = 4 return endif zaupqv9b = xui7hqwl(15) wr0lbopv = xui7hqwl(18) zpcqv3uj = y7sdgtqi(3+afpc0kns+afpc0kns+2) n3iasxug = y7sdgtqi(1) uaf2xgqy = dsqrt(n3iasxug) if((qfx3vhct .eq. 1) .or. (qfx3vhct .eq. 4))then vsoihn1r = dlog(n3iasxug) endif bh2vgiay = y7sdgtqi(2) rsynp1go = y7sdgtqi(3) uylxqtc7 = 0.0d0 izero0 = 0 zjkrtol8 = 1 do23618 hj3ftvzu=1,afpc0kns 653 hmayv1xt = 1.0d0 if(f7svlajr .eq. 0)then call ietam6(tlgduey8, m0ibglfx, y7sdgtqi, kuzxj1lo, wy1vqfzu, afpc *0kns, qfx3vhct, hj3ftvzu, wmat, wr0lbopv) else if(f7svlajr .ne. 1)then zjkrtol8 = 6 return endif endif call nipyajc1(m0ibglfx, t8hwvalr, kuzxj1lo, wy1vqfzu, afpc0kns, qf *x3vhct, hj3ftvzu) if(f7svlajr .eq. 2)then call shjlwft5(qfx3vhct, tlgduey8, wmat, t8hwvalr, kuzxj1lo, wy1vqf *zu, afpc0kns, dimw, m0ibglfx, rpto5qwb, hj3ftvzu, n3iasxug, vsoihn *1r, hbsl0gto) else rpto5qwb = -1.0d0 endif do23626 iter=1,c5aesxkul call flncwkfq76(lncwkfq7, vc6hatuj, kuzxj1lo, br5ovgcj, xwdf5ltg, *qfx3vhct) psdvgce3(7) = 0 call dlgpwe0c(tlgduey8, wmat, m0ibglfx, t8hwvalr, ghz9vuba, rbne6o *uj, wpuarq2m, rsynp1go, n3iasxug, uaf2xgqy, kuzxj1lo, wy1vqfzu, af *pc0kns, br5ovgcj, dimu, hj3ftvzu, qfx3vhct, zjkrtol8, izero0, vm4x *josb) if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq. 5))then qes4mujl = 2*hj3ftvzu-1 else qes4mujl = hj3ftvzu endif do23630 kij0gwer=1,qfozcl5b do23632 ayfnwr1v=1,kuzxj1lo zyodca3j(kij0gwer,ayfnwr1v) = wpuarq2m(qes4mujl-1+kij0gwer,ayfnwr1 *v) lxyst1eb(kij0gwer,ayfnwr1v) = m0ibglfx(qes4mujl-1+kij0gwer,ayfnwr1 *v) 23632 continue 23633 continue 23630 continue 23631 continue sedf7mxb = lyma1kwc * afpc0kns ehtjigf4 = xwdf5ltg * (hj3ftvzu-1) if(iter .eq. 1)then lncrw8mg = kiye1wjz( ehtjigf4 + hnpt1zym(1)) ufkq9rpg = kiye1wjz(sedf7mxb + ehtjigf4 + hnpt1zym(1)) if(xwdf5ltg .eq. 2)then r3eoxkzp = kiye1wjz( ehtjigf4 + hnpt1zym(2)) wld4qctn = kiye1wjz(sedf7mxb + ehtjigf4 + hnpt1zym(2)) endif do23638 kij0gwer=1,lyma1kwc do23640 ayfnwr1v=1,kuzxj1lo kispwgx3(ayfnwr1v,ehtjigf4 + hnpt1zym(kij0gwer)) = 0.0d0 23640 continue 23641 continue 23638 continue 23639 continue else kiye1wjz( ehtjigf4 + hnpt1zym(1)) = lncrw8mg kiye1wjz(sedf7mxb + ehtjigf4 + hnpt1zym(1)) = ufkq9rpg if(xwdf5ltg .eq. 
2)then kiye1wjz( ehtjigf4 + hnpt1zym(2)) = r3eoxkzp kiye1wjz(sedf7mxb + ehtjigf4 + hnpt1zym(2)) = wld4qctn endif endif call vbfa(irhm4cfa,kuzxj1lo,qfozcl5b,psdvgce3, mbvnaor6, ghz9vuba( *1,qes4mujl), rbne6ouj(1,qes4mujl), kiye1wjz( ehtjigf4 + hnpt1zym(1 *)), kiye1wjz(sedf7mxb + ehtjigf4 + hnpt1zym(1)), ezlgm2up,nef,whic *h, ub4xioar,kispwgx3(1,ehtjigf4 + hnpt1zym(1)), lxyst1eb,s0, beta( *1+(hj3ftvzu-1)*xlpjcg3s), cov,zpcqv3uj, vc6hatuj,fasrkub3, ges1xpk *r, xbignn, zyodca3j, hjm2ktyr, jnxpuym2, hnpt1zym, fzm1ihwj, iz2nb *fjc, work1, wk2, wwkmm, work3, sgdub, bmb, ifys6woa, mwk, ttwk, rp *yis2kc(1+(hj3ftvzu-1)*(nbzjkpi3(1+xwdf5ltg)-1)), zv2xfhei, resss, *nbzjkpi3, acpios9q, itwk, jwbkl9fp) y7sdgtqi(3+afpc0kns+afpc0kns+1) = resss xumj5dnk = psdvgce3(14) if(xumj5dnk .ne. 0)then call intpr("vcao6f: exiting because of an error",-1,xumj5dnk,1) zjkrtol8 = 8 return endif do23646 kij0gwer=1,qfozcl5b do23648 ayfnwr1v=1,kuzxj1lo m0ibglfx(qes4mujl-1+kij0gwer,ayfnwr1v) = lxyst1eb(kij0gwer,ayfnwr1 *v) 23648 continue 23649 continue 23646 continue 23647 continue call nipyajc1(m0ibglfx, t8hwvalr, kuzxj1lo, wy1vqfzu, afpc0kns, qf *x3vhct, hj3ftvzu) call shjlwft5(qfx3vhct, tlgduey8, wmat, t8hwvalr, kuzxj1lo, wy1vqf *zu, afpc0kns, dimw, m0ibglfx, tlq9wpes, hj3ftvzu, n3iasxug, vsoihn *1r, hbsl0gto) wiptsjx8 = dabs(tlq9wpes - rpto5qwb) / (1.0d0 + dabs(tlq9wpes)) if(wiptsjx8 .lt. bh2vgiay)then zjkrtol8 = 0 xui7hqwl(8) = iter if((qfx3vhct .eq. 3) .or. (qfx3vhct .eq. 5))then call shjlwft5(qfx3vhct, tlgduey8, wmat, t8hwvalr, kuzxj1lo, wy1vqf *zu, afpc0kns, dimw, m0ibglfx, tlq9wpes,hj3ftvzu,n3iasxug,vsoihn1r, * izero0) endif aqg1vdmo = aqg1vdmo + tlq9wpes goto 1011 else rpto5qwb = tlq9wpes endif 23626 continue 23627 continue if(f7svlajr .eq. 1)then f7svlajr = 0 xui7hqwl(9) = 1 goto 653 endif zjkrtol8 = 3 1011 hmayv1xt = 1.0d0 23618 continue 23619 continue tlq9wpes = aqg1vdmo return end subroutine dcqof(lncwkfq7, tlgduey8, kifxa0he, wmat, m0ibglfx, vm4 *xjosb, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, * fasrkub3, ges1xpkr, kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, * zjkrtol8, xui7hqwl, tgiyxdw1, dufozmt7, tlq9wpes, beta, twk, wkmm *, y7sdgtqi, atujnxb8, yxiwebc5, k7hulceq, p2, kpzavbj3, ydcnh9xl, *ajul8wkv) implicit logical (a-z) integer xui7hqwl(19), tgiyxdw1(*), dufozmt7(*) integer kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, zjkrtol8, ge *s1xpkr(*) integer vtsou9pz double precision lncwkfq7(kuzxj1lo,*), tlgduey8(kuzxj1lo,afpc0kns) *, kifxa0he(kuzxj1lo,*), wmat(kuzxj1lo,*), m0ibglfx(wy1vqfzu,kuzxj1 *lo), vm4xjosb(kuzxj1lo), t8hwvalr(afpc0kns,kuzxj1lo), ghz9vuba(kuz *xj1lo,wy1vqfzu), rbne6ouj(kuzxj1lo,wy1vqfzu), wpuarq2m(dimu,kuzxj1 *lo), w8znmyce(br5ovgcj,*) double precision vc6hatuj(br5ovgcj,*), fasrkub3(*), tlq9wpes, beta *(*), y7sdgtqi(*) double precision twk(wy1vqfzu,kuzxj1lo,*), wkmm(wy1vqfzu*(wy1vqfzu *+1)) integer p2 double precision atujnxb8(kuzxj1lo,p2), yxiwebc5(kuzxj1lo,*), k7hu *lceq(p2,*), kpzavbj3(p2,*), ydcnh9xl, ajul8wkv(*) integer ayfnwr1v, xvr7bonh, hpmwnav2, xwdf5ltg, idlosrw8, gp1jxzuh *, exrkcn5d, wr0lbopv double precision summ, dev0 xwdf5ltg = xui7hqwl(1) idlosrw8 = xui7hqwl(5) vtsou9pz = xui7hqwl(12) exrkcn5d = xui7hqwl(13) wr0lbopv = xui7hqwl(18) do23656 hpmwnav2=1,xwdf5ltg do23658 ayfnwr1v=1,kuzxj1lo summ = 0.0d0 do23660 xvr7bonh=1,p2 summ = summ + atujnxb8(ayfnwr1v,xvr7bonh) * k7hulceq(xvr7bonh,hpmw *nav2) 23660 continue 23661 continue yxiwebc5(ayfnwr1v,hpmwnav2) = summ lncwkfq7(ayfnwr1v,hpmwnav2) = summ 23658 continue 23659 continue 23656 continue 23657 continue 
if(vtsou9pz.eq.1)then call cqo1f(lncwkfq7, tlgduey8, kifxa0he, wmat, m0ibglfx, vm4xjosb, * t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, fasrk *ub3, ges1xpkr, kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, zjkrt *ol8, xui7hqwl, tgiyxdw1, dufozmt7, dev0, ajul8wkv, twk, wkmm, y7sd *gtqi) else call cqo2f(lncwkfq7, tlgduey8, kifxa0he, wmat, m0ibglfx, vm4xjosb, * t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, fasrk *ub3, ges1xpkr, kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, zjkrt *ol8, xui7hqwl, tgiyxdw1, dufozmt7, dev0, ajul8wkv, twk, wkmm, y7sd *gtqi) endif do23664 xvr7bonh=1,p2 do23666 ayfnwr1v=1,kuzxj1lo atujnxb8(ayfnwr1v,xvr7bonh) = ydcnh9xl * atujnxb8(ayfnwr1v,xvr7bon *h) 23666 continue 23667 continue 23664 continue 23665 continue do23668 hpmwnav2=1,xwdf5ltg do23670 xvr7bonh=1,p2 do23672 ayfnwr1v=1,kuzxj1lo lncwkfq7(ayfnwr1v,hpmwnav2)=yxiwebc5(ayfnwr1v,hpmwnav2)+atujnxb8(a *yfnwr1v,xvr7bonh) 23672 continue 23673 continue xui7hqwl(5) = 2 do23674 gp1jxzuh=1,exrkcn5d beta(gp1jxzuh) = ajul8wkv(gp1jxzuh) 23674 continue 23675 continue if(vtsou9pz.eq.1)then call cqo1f(lncwkfq7, tlgduey8, kifxa0he, wmat, m0ibglfx, vm4xjosb, * t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, fasrk *ub3, ges1xpkr, kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, zjkrt *ol8, xui7hqwl, tgiyxdw1, dufozmt7, tlq9wpes, beta, twk, wkmm, y7sd *gtqi) else call cqo2f(lncwkfq7, tlgduey8, kifxa0he, wmat, m0ibglfx, vm4xjosb, * t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, fasrk *ub3, ges1xpkr, kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, zjkrt *ol8, xui7hqwl, tgiyxdw1, dufozmt7, tlq9wpes, beta, twk, wkmm, y7sd *gtqi) endif if(zjkrtol8 .ne. 0)then return endif kpzavbj3(xvr7bonh,hpmwnav2) = (tlq9wpes - dev0) / ydcnh9xl 23670 continue 23671 continue if(xwdf5ltg .gt. 
1)then do23682 ayfnwr1v=1,kuzxj1lo lncwkfq7(ayfnwr1v,hpmwnav2) = yxiwebc5(ayfnwr1v,hpmwnav2) 23682 continue 23683 continue endif 23668 continue 23669 continue xui7hqwl(5) = idlosrw8 return end subroutine vdcaof(lncwkfq7, tlgduey8, wmat, m0ibglfx, t8hwvalr, gh *z9vuba, rbne6ouj, wpuarq2m, vc6hatuj, fasrkub3, ges1xpkr, kuzxj1lo *, wy1vqfzu, afpc0kns, br5ovgcj, dimu, zjkrtol8, xui7hqwl, tlq9wpes *, beta, twk, wkmm, y7sdgtqi, atujnxb8, yxiwebc5, k7hulceq, p2, kpz *avbj3, ajul8wkv, psdvgce3,qfozcl5b, kiye1wjz, ezlgm2up, nef, which *, ub4xioar,kispwgx3,s0, zyodca3j, lxyst1eb, mbvnaor6, hjm2ktyr, jn *xpuym2, hnpt1zym, fzm1ihwj, iz2nbfjc, work1, wk2, wwkmm, work3, sg *dub, bmb, ifys6woa, mwk, ttwk, rpyis2kc, zv2xfhei, nbzjkpi3, acpio *s9q, itwk, jwbkl9fp) implicit logical (a-z) integer xui7hqwl(19) integer kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, zjkrtol8, ge *s1xpkr(*) integer vtsou9pz double precision lncwkfq7(kuzxj1lo,*), tlgduey8(kuzxj1lo,afpc0kns) *, wmat(kuzxj1lo,*), m0ibglfx(wy1vqfzu,kuzxj1lo), t8hwvalr(afpc0kns *,kuzxj1lo), ghz9vuba(kuzxj1lo,wy1vqfzu), rbne6ouj(kuzxj1lo,wy1vqfz *u), wpuarq2m(dimu,kuzxj1lo) double precision vc6hatuj(br5ovgcj,*), fasrkub3(*), tlq9wpes, beta *(*), y7sdgtqi(*) double precision twk(wy1vqfzu,kuzxj1lo,*) double precision wkmm(wy1vqfzu*(wy1vqfzu+1)) integer p2 double precision atujnxb8(kuzxj1lo,p2), yxiwebc5(kuzxj1lo,*), k7hu *lceq(p2,*), kpzavbj3(p2,*), ydcnh9xl, ajul8wkv(*) integer ayfnwr1v, pp, hpmwnav2, xwdf5ltg, idlosrw8, exrkcn5d, wr0l *bopv double precision summ, dev0 integer psdvgce3(15), qfozcl5b, ezlgm2up(*),nef(*),which(*), jnxpu *ym2(*), hnpt1zym(*), fzm1ihwj(*), iz2nbfjc(*), nbzjkpi3(2), acpios *9q(*), itwk(*), jwbkl9fp(2) double precision kiye1wjz(*) double precision ub4xioar(qfozcl5b,kuzxj1lo), kispwgx3(kuzxj1lo,*) *,s0(wy1vqfzu), zyodca3j(qfozcl5b,kuzxj1lo) double precision lxyst1eb(qfozcl5b,kuzxj1lo), mbvnaor6(kuzxj1lo,*) *, hjm2ktyr(qfozcl5b,*), work1(*), wk2(kuzxj1lo,qfozcl5b), work3(*) *, sgdub(*), bmb(*), ifys6woa(*), mwk(*), rpyis2kc(*), zv2xfhei(*), * resss integer irhm4cfa double precision zpcqv3uj resss = 0.0d0 irhm4cfa = 0 xwdf5ltg = xui7hqwl(1) idlosrw8 = xui7hqwl(5) vtsou9pz = xui7hqwl(12) exrkcn5d = xui7hqwl(13) wr0lbopv = xui7hqwl(18) zpcqv3uj = y7sdgtqi(3+afpc0kns+afpc0kns+2) ydcnh9xl = y7sdgtqi(3+afpc0kns+afpc0kns+3) do23684 hpmwnav2=1,xwdf5ltg do23686 ayfnwr1v=1,kuzxj1lo summ = 0.0d0 do23688 pp=1,p2 summ = summ + atujnxb8(ayfnwr1v,pp) * k7hulceq(pp,hpmwnav2) 23688 continue 23689 continue yxiwebc5(ayfnwr1v,hpmwnav2) = summ lncwkfq7(ayfnwr1v,hpmwnav2) = summ 23686 continue 23687 continue 23684 continue 23685 continue if(vtsou9pz.eq.1)then call vcao6f(lncwkfq7, tlgduey8, wmat, m0ibglfx, t8hwvalr, ghz9vuba *, rbne6ouj, wpuarq2m, vc6hatuj, fasrkub3, ges1xpkr, kuzxj1lo, wy1v *qfzu, afpc0kns, br5ovgcj, dimu, zjkrtol8, xui7hqwl, dev0, ajul8wkv *, twk, wkmm, y7sdgtqi, psdvgce3,qfozcl5b, kiye1wjz, ezlgm2up, nef, * which, ub4xioar,kispwgx3,s0, zyodca3j, lxyst1eb, mbvnaor6, hjm2kt *yr, jnxpuym2, hnpt1zym, fzm1ihwj, iz2nbfjc, work1, wk2, wwkmm, wor *k3, sgdub, bmb, ifys6woa, mwk, ttwk, rpyis2kc, zv2xfhei, nbzjkpi3, * acpios9q, itwk, jwbkl9fp) y7sdgtqi(3+afpc0kns+afpc0kns+1) = resss else endif do23692 pp=1,p2 do23694 ayfnwr1v=1,kuzxj1lo atujnxb8(ayfnwr1v,pp) = ydcnh9xl * atujnxb8(ayfnwr1v,pp) 23694 continue 23695 continue 23692 continue 23693 continue do23696 hpmwnav2=1,xwdf5ltg do23698 pp=1,p2 do23700 ayfnwr1v=1,kuzxj1lo lncwkfq7(ayfnwr1v,hpmwnav2) = yxiwebc5(ayfnwr1v,hpmwnav2) + atujnx *b8(ayfnwr1v,pp) 23700 continue 23701 continue 
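c
c The triple loop above has added the finite-difference step to the
c site scores: atujnxb8 was pre-multiplied by the step size ydcnh9xl,
c so lncwkfq7 now holds yxiwebc5 plus the perturbation for column pp
c of latent axis hpmwnav2.  The refit below evidently supplies the
c one-sided difference approximation
c     kpzavbj3(pp,hpmwnav2) = (tlq9wpes - dev0) / ydcnh9xl,
c i.e. d(deviance)/d(coefficient) ~= (f(b + h) - f(b)) / h with
c h = ydcnh9xl, dev0 the deviance of the unperturbed fit, and
c tlq9wpes that of the perturbed fit.
c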
xui7hqwl(5) = 0 if(vtsou9pz.eq.1)then call vcao6f(lncwkfq7, tlgduey8, wmat, m0ibglfx, t8hwvalr, ghz9vuba *, rbne6ouj, wpuarq2m, vc6hatuj, fasrkub3, ges1xpkr, kuzxj1lo, wy1v *qfzu, afpc0kns, br5ovgcj, dimu, zjkrtol8, xui7hqwl, tlq9wpes, beta *, twk, wkmm, y7sdgtqi, psdvgce3,qfozcl5b, kiye1wjz, ezlgm2up, nef, * which, ub4xioar,kispwgx3,s0, zyodca3j, lxyst1eb, mbvnaor6, hjm2kt *yr, jnxpuym2, hnpt1zym, fzm1ihwj, iz2nbfjc, work1, wk2, wwkmm, wor *k3, sgdub, bmb, ifys6woa, mwk, ttwk, rpyis2kc, zv2xfhei, nbzjkpi3, * acpios9q, itwk, jwbkl9fp) y7sdgtqi(3+afpc0kns+afpc0kns+1) = resss else endif if(zjkrtol8 .ne. 0)then return endif kpzavbj3(pp,hpmwnav2) = (tlq9wpes - dev0) / ydcnh9xl 23698 continue 23699 continue if(xwdf5ltg .gt. 1)then do23708 ayfnwr1v=1,kuzxj1lo lncwkfq7(ayfnwr1v,hpmwnav2) = yxiwebc5(ayfnwr1v,hpmwnav2) 23708 continue 23709 continue endif 23696 continue 23697 continue xui7hqwl(5) = idlosrw8 return end subroutine duqof(lncwkfq7, tlgduey8, kifxa0he, wmat, m0ibglfx, vm4 *xjosb, t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, * fasrkub3, ges1xpkr, kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, * zjkrtol8, xui7hqwl, tgiyxdw1, dufozmt7, tlq9wpes, beta, twk, wkmm *, y7sdgtqi, yxiwebc5, kpzavbj3, ydcnh9xl, ajul8wkv) implicit logical (a-z) integer xui7hqwl(19), tgiyxdw1(*), dufozmt7(*) integer kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, zjkrtol8, ge *s1xpkr(*) integer vtsou9pz double precision lncwkfq7(kuzxj1lo,*), tlgduey8(kuzxj1lo,afpc0kns) *, kifxa0he(kuzxj1lo,*), wmat(kuzxj1lo,*), m0ibglfx(wy1vqfzu,kuzxj1 *lo), vm4xjosb(kuzxj1lo), t8hwvalr(afpc0kns,kuzxj1lo), ghz9vuba(kuz *xj1lo,wy1vqfzu), rbne6ouj(kuzxj1lo,wy1vqfzu), wpuarq2m(dimu,kuzxj1 *lo), w8znmyce(br5ovgcj,*) double precision vc6hatuj(br5ovgcj,*), fasrkub3(*), tlq9wpes, beta *(*), y7sdgtqi(*) double precision twk(wy1vqfzu,kuzxj1lo,*), wkmm(wy1vqfzu*(wy1vqfzu *+1)) double precision yxiwebc5(kuzxj1lo,*), kpzavbj3(kuzxj1lo,*), ydcnh *9xl, ajul8wkv(*) integer ayfnwr1v, hpmwnav2, xwdf5ltg, idlosrw8, gp1jxzuh, exrkcn5d double precision dev0 xwdf5ltg = xui7hqwl(1) idlosrw8 = xui7hqwl(5) vtsou9pz = xui7hqwl(12) exrkcn5d = xui7hqwl(13) if(vtsou9pz.eq.1)then call cqo1f(lncwkfq7, tlgduey8, kifxa0he, wmat, m0ibglfx, vm4xjosb, * t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, fasrk *ub3, ges1xpkr, kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, zjkrt *ol8, xui7hqwl, tgiyxdw1, dufozmt7, dev0, ajul8wkv, twk, wkmm, y7sd *gtqi) else call cqo2f(lncwkfq7, tlgduey8, kifxa0he, wmat, m0ibglfx, vm4xjosb, * t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, fasrk *ub3, ges1xpkr, kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, zjkrt *ol8, xui7hqwl, tgiyxdw1, dufozmt7, dev0, ajul8wkv, twk, wkmm, y7sd *gtqi) endif do23712 hpmwnav2=1,xwdf5ltg do23714 ayfnwr1v=1,kuzxj1lo lncwkfq7(ayfnwr1v,hpmwnav2) = yxiwebc5(ayfnwr1v,hpmwnav2) + ydcnh9 *xl xui7hqwl(5) = 2 do23716 gp1jxzuh=1,exrkcn5d beta(gp1jxzuh) = ajul8wkv(gp1jxzuh) 23716 continue 23717 continue if(vtsou9pz.eq.1)then call cqo1f(lncwkfq7, tlgduey8, kifxa0he, wmat, m0ibglfx, vm4xjosb, * t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, fasrk *ub3, ges1xpkr, kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, zjkrt *ol8, xui7hqwl, tgiyxdw1, dufozmt7, tlq9wpes, beta, twk, wkmm, y7sd *gtqi) else call cqo2f(lncwkfq7, tlgduey8, kifxa0he, wmat, m0ibglfx, vm4xjosb, * t8hwvalr, ghz9vuba, rbne6ouj, wpuarq2m, w8znmyce, vc6hatuj, fasrk *ub3, ges1xpkr, kuzxj1lo, wy1vqfzu, afpc0kns, br5ovgcj, dimu, zjkrt *ol8, xui7hqwl, tgiyxdw1, dufozmt7, tlq9wpes, beta, twk, wkmm, y7sd *gtqi) endif 
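c
c At this point each site score lncwkfq7(ayfnwr1v,hpmwnav2) has been
c bumped by the step ydcnh9xl and the model refitted from the saved
c coefficients ajul8wkv (setting xui7hqwl(5) = 2 evidently requests a
c warm start from the supplied fit).  After the error check below, a
c one-sided numerical derivative of the deviance is stored for that
c single observation,
c     kpzavbj3(ayfnwr1v,hpmwnav2) = (tlq9wpes - dev0) / ydcnh9xl,
c and the unperturbed value of lncwkfq7 is then restored.
c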
if(zjkrtol8 .ne. 0)then return endif kpzavbj3(ayfnwr1v,hpmwnav2) = (tlq9wpes - dev0) / ydcnh9xl lncwkfq7(ayfnwr1v,hpmwnav2) = yxiwebc5(ayfnwr1v,hpmwnav2) 23714 continue 23715 continue 23712 continue 23713 continue xui7hqwl(5) = idlosrw8 return end VGAM/src/ei.f0000644000176200001440000006042513135276761012361 0ustar liggesusers SUBROUTINE calcei(ARG,RESULT,INT) C---------------------------------------------------------------------- C C This Fortran 77 packet computes the exponential integrals Ei(x), C E1(x), and exp(-x)*Ei(x) for real arguments x where C C integral (from t=-infinity to t=x) (exp(t)/t), x > 0, C Ei(x) = C -integral (from t=-x to t=infinity) (exp(t)/t), x < 0, C C and where the first integral is a principal value integral. C The packet contains three function type subprograms: EI, EONE, C and EXPEI; and one subroutine type subprogram: CALCEI. The C calling statements for the primary entries are C C Y = EI(X), where X .NE. 0, C C Y = EONE(X), where X .GT. 0, C and C Y = EXPEI(X), where X .NE. 0, C C and where the entry points correspond to the functions Ei(x), C E1(x), and exp(-x)*Ei(x), respectively. The routine CALCEI C is intended for internal packet use only, all computations within C the packet being concentrated in this routine. The function C subprograms invoke CALCEI with the Fortran statement C CALL CALCEI(ARG,RESULT,INT) C where the parameter usage is as follows C C Function Parameters for CALCEI C Call ARG RESULT INT C C EI(X) X .NE. 0 Ei(X) 1 C EONE(X) X .GT. 0 -Ei(-X) 2 C EXPEI(X) X .NE. 0 exp(-X)*Ei(X) 3 C C The main computation involves evaluation of rational Chebyshev C approximations published in Math. Comp. 22, 641-649 (1968), and C Math. Comp. 23, 289-303 (1969) by Cody and Thacher. This C transportable program is patterned after the machine-dependent C FUNPACK packet NATSEI, but cannot match that version for C efficiency or accuracy. This version uses rational functions C that theoretically approximate the exponential integrals to C at least 18 significant decimal digits. The accuracy achieved C depends on the arithmetic system, the compiler, the intrinsic C functions, and proper selection of the machine-dependent C constants. C C C******************************************************************* C******************************************************************* C C Explanation of machine-dependent constants C C beta = radix for the floating-point system. C minexp = smallest representable power of beta. C maxexp = smallest power of beta that overflows. C XBIG = largest argument acceptable to EONE; solution to C equation: C exp(-x)/x * (1 + 1/x) = beta ** minexp. C XINF = largest positive machine number; approximately C beta ** maxexp C XMAX = largest argument acceptable to EI; solution to C equation: exp(x)/x * (1 + 1/x) = beta ** maxexp. C C Approximate values for some important machines are: C C beta minexp maxexp C C CRAY-1 (S.P.) 2 -8193 8191 C Cyber 180/185 C under NOS (S.P.) 2 -975 1070 C IEEE (IBM/XT, C SUN, etc.) (S.P.) 2 -126 128 C IEEE (IBM/XT, C SUN, etc.) (D.P.) 2 -1022 1024 C IBM 3033 (D.P.) 16 -65 63 C VAX D-Format (D.P.) 2 -128 127 C VAX G-Format (D.P.) 2 -1024 1023 C C XBIG XINF XMAX C C CRAY-1 (S.P.) 5670.31 5.45E+2465 5686.21 C Cyber 180/185 C under NOS (S.P.) 669.31 1.26E+322 748.28 C IEEE (IBM/XT, C SUN, etc.) (S.P.) 82.93 3.40E+38 93.24 C IEEE (IBM/XT, C SUN, etc.) (D.P.) 701.84 1.79D+308 716.35 C IBM 3033 (D.P.) 175.05 7.23D+75 179.85 C VAX D-Format (D.P.) 84.30 1.70D+38 92.54 C VAX G-Format (D.P.) 
703.22 8.98D+307 715.66 C C******************************************************************* C******************************************************************* C C Error returns C C The following table shows the types of error that may be C encountered in this routine and the function value supplied C in each case. C C Error Argument Function values for C Range EI EXPEI EONE C C UNDERFLOW (-)X .GT. XBIG 0 - 0 C OVERFLOW X .GE. XMAX XINF - - C ILLEGAL X X = 0 -XINF -XINF XINF C ILLEGAL X X .LT. 0 - - USE ABS(X) C C Intrinsic functions required are: C C ABS, SQRT, EXP C C C Author: W. J. Cody C Mathematics abd Computer Science Division C Argonne National Laboratory C Argonne, IL 60439 C C Latest modification: September 9, 1988 C C---------------------------------------------------------------------- INTEGER I,INT CS REAL DOUBLE PRECISION 1 A,ARG,B,C,D,EXP40,E,EI,F,FOUR,FOURTY,FRAC,HALF,ONE,P, 2 PLG,PX,P037,P1,P2,Q,QLG,QX,Q1,Q2,R,RESULT,S,SIX,SUMP, 3 SUMQ,T,THREE,TWELVE,TWO,TWO4,W,X,XBIG,XINF,XMAX,XMX0, 4 X0,X01,X02,X11,Y,YSQ,ZERO DIMENSION A(7),B(6),C(9),D(9),E(10),F(10),P(10),Q(10),R(10), 1 S(9),P1(10),Q1(9),P2(10),Q2(9),PLG(4),QLG(4),PX(10),QX(10) C---------------------------------------------------------------------- C Mathematical constants C EXP40 = exp(40) C X0 = zero of Ei C X01/X11 + X02 = zero of Ei to extra precision C---------------------------------------------------------------------- CS DATA ZERO,P037,HALF,ONE,TWO/0.0E0,0.037E0,0.5E0,1.0E0,2.0E0/, CS 1 THREE,FOUR,SIX,TWELVE,TWO4/3.0E0,4.0E0,6.0E0,12.E0,24.0E0/, CS 2 FOURTY,EXP40/40.0E0,2.3538526683701998541E17/, CS 3 X01,X11,X02/381.5E0,1024.0E0,-5.1182968633365538008E-5/, CS 4 X0/3.7250741078136663466E-1/ DATA ZERO,P037,HALF,ONE,TWO/0.0D0,0.037D0,0.5D0,1.0D0,2.0D0/, 1 THREE,FOUR,SIX,TWELVE,TWO4/3.0D0,4.0D0,6.0D0,12.D0,24.0D0/, 2 FOURTY,EXP40/40.0D0,2.3538526683701998541D17/, 3 X01,X11,X02/381.5D0,1024.0D0,-5.1182968633365538008D-5/, 4 X0/3.7250741078136663466D-1/ C---------------------------------------------------------------------- C Machine-dependent constants C---------------------------------------------------------------------- CS DATA XINF/3.40E+38/,XMAX/93.246E0/,XBIG/82.93E0/ DATA XINF/1.79D+308/,XMAX/716.351D0/,XBIG/701.84D0/ C---------------------------------------------------------------------- C Coefficients for -1.0 <= X < 0.0 C---------------------------------------------------------------------- CS DATA A/1.1669552669734461083368E2, 2.1500672908092918123209E3, CS 1 1.5924175980637303639884E4, 8.9904972007457256553251E4, CS 2 1.5026059476436982420737E5,-1.4815102102575750838086E5, CS 3 5.0196785185439843791020E0/ CS DATA B/4.0205465640027706061433E1, 7.5043163907103936624165E2, CS 1 8.1258035174768735759855E3, 5.2440529172056355429883E4, CS 2 1.8434070063353677359298E5, 2.5666493484897117319268E5/ DATA A/1.1669552669734461083368D2, 2.1500672908092918123209D3, 1 1.5924175980637303639884D4, 8.9904972007457256553251D4, 2 1.5026059476436982420737D5,-1.4815102102575750838086D5, 3 5.0196785185439843791020D0/ DATA B/4.0205465640027706061433D1, 7.5043163907103936624165D2, 1 8.1258035174768735759855D3, 5.2440529172056355429883D4, 2 1.8434070063353677359298D5, 2.5666493484897117319268D5/ C---------------------------------------------------------------------- C Coefficients for -4.0 <= X < -1.0 C---------------------------------------------------------------------- CS DATA C/3.828573121022477169108E-1, 1.107326627786831743809E+1, CS 1 7.246689782858597021199E+1, 1.700632978311516129328E+2, CS 2 
1.698106763764238382705E+2, 7.633628843705946890896E+1, CS 3 1.487967702840464066613E+1, 9.999989642347613068437E-1, CS 4 1.737331760720576030932E-8/ CS DATA D/8.258160008564488034698E-2, 4.344836335509282083360E+0, CS 1 4.662179610356861756812E+1, 1.775728186717289799677E+2, CS 2 2.953136335677908517423E+2, 2.342573504717625153053E+2, CS 3 9.021658450529372642314E+1, 1.587964570758947927903E+1, CS 4 1.000000000000000000000E+0/ DATA C/3.828573121022477169108D-1, 1.107326627786831743809D+1, 1 7.246689782858597021199D+1, 1.700632978311516129328D+2, 2 1.698106763764238382705D+2, 7.633628843705946890896D+1, 3 1.487967702840464066613D+1, 9.999989642347613068437D-1, 4 1.737331760720576030932D-8/ DATA D/8.258160008564488034698D-2, 4.344836335509282083360D+0, 1 4.662179610356861756812D+1, 1.775728186717289799677D+2, 2 2.953136335677908517423D+2, 2.342573504717625153053D+2, 3 9.021658450529372642314D+1, 1.587964570758947927903D+1, 4 1.000000000000000000000D+0/ C---------------------------------------------------------------------- C Coefficients for X < -4.0 C---------------------------------------------------------------------- CS DATA E/1.3276881505637444622987E+2,3.5846198743996904308695E+4, CS 1 1.7283375773777593926828E+5,2.6181454937205639647381E+5, CS 2 1.7503273087497081314708E+5,5.9346841538837119172356E+4, CS 3 1.0816852399095915622498E+4,1.0611777263550331766871E03, CS 4 5.2199632588522572481039E+1,9.9999999999999999087819E-1/ CS DATA F/3.9147856245556345627078E+4,2.5989762083608489777411E+5, CS 1 5.5903756210022864003380E+5,5.4616842050691155735758E+5, CS 2 2.7858134710520842139357E+5,7.9231787945279043698718E+4, CS 3 1.2842808586627297365998E+4,1.1635769915320848035459E+3, CS 4 5.4199632588522559414924E+1,1.0E0/ DATA E/1.3276881505637444622987D+2,3.5846198743996904308695D+4, 1 1.7283375773777593926828D+5,2.6181454937205639647381D+5, 2 1.7503273087497081314708D+5,5.9346841538837119172356D+4, 3 1.0816852399095915622498D+4,1.0611777263550331766871D03, 4 5.2199632588522572481039D+1,9.9999999999999999087819D-1/ DATA F/3.9147856245556345627078D+4,2.5989762083608489777411D+5, 1 5.5903756210022864003380D+5,5.4616842050691155735758D+5, 2 2.7858134710520842139357D+5,7.9231787945279043698718D+4, 3 1.2842808586627297365998D+4,1.1635769915320848035459D+3, 4 5.4199632588522559414924D+1,1.0D0/ C---------------------------------------------------------------------- C Coefficients for rational approximation to ln(x/a), |1-x/a| < .1 C---------------------------------------------------------------------- CS DATA PLG/-2.4562334077563243311E+01,2.3642701335621505212E+02, CS 1 -5.4989956895857911039E+02,3.5687548468071500413E+02/ CS DATA QLG/-3.5553900764052419184E+01,1.9400230218539473193E+02, CS 1 -3.3442903192607538956E+02,1.7843774234035750207E+02/ DATA PLG/-2.4562334077563243311D+01,2.3642701335621505212D+02, 1 -5.4989956895857911039D+02,3.5687548468071500413D+02/ DATA QLG/-3.5553900764052419184D+01,1.9400230218539473193D+02, 1 -3.3442903192607538956D+02,1.7843774234035750207D+02/ C---------------------------------------------------------------------- C Coefficients for 0.0 < X < 6.0, C ratio of Chebyshev polynomials C---------------------------------------------------------------------- CS DATA P/-1.2963702602474830028590E01,-1.2831220659262000678155E03, CS 1 -1.4287072500197005777376E04,-1.4299841572091610380064E06, CS 2 -3.1398660864247265862050E05,-3.5377809694431133484800E08, CS 3 3.1984354235237738511048E08,-2.5301823984599019348858E10, CS 4 
1.2177698136199594677580E10,-2.0829040666802497120940E11/ CS DATA Q/ 7.6886718750000000000000E01,-5.5648470543369082846819E03, CS 1 1.9418469440759880361415E05,-4.2648434812177161405483E06, CS 2 6.4698830956576428587653E07,-7.0108568774215954065376E08, CS 3 5.4229617984472955011862E09,-2.8986272696554495342658E10, CS 4 9.8900934262481749439886E10,-8.9673749185755048616855E10/ DATA P/-1.2963702602474830028590D01,-1.2831220659262000678155D03, 1 -1.4287072500197005777376D04,-1.4299841572091610380064D06, 2 -3.1398660864247265862050D05,-3.5377809694431133484800D08, 3 3.1984354235237738511048D08,-2.5301823984599019348858D10, 4 1.2177698136199594677580D10,-2.0829040666802497120940D11/ DATA Q/ 7.6886718750000000000000D01,-5.5648470543369082846819D03, 1 1.9418469440759880361415D05,-4.2648434812177161405483D06, 2 6.4698830956576428587653D07,-7.0108568774215954065376D08, 3 5.4229617984472955011862D09,-2.8986272696554495342658D10, 4 9.8900934262481749439886D10,-8.9673749185755048616855D10/ C---------------------------------------------------------------------- C J-fraction coefficients for 6.0 <= X < 12.0 C---------------------------------------------------------------------- CS DATA R/-2.645677793077147237806E00,-2.378372882815725244124E00, CS 1 -2.421106956980653511550E01, 1.052976392459015155422E01, CS 2 1.945603779539281810439E01,-3.015761863840593359165E01, CS 3 1.120011024227297451523E01,-3.988850730390541057912E00, CS 4 9.565134591978630774217E00, 9.981193787537396413219E-1/ CS DATA S/ 1.598517957704779356479E-4, 4.644185932583286942650E00, CS 1 3.697412299772985940785E02,-8.791401054875438925029E00, CS 2 7.608194509086645763123E02, 2.852397548119248700147E01, CS 3 4.731097187816050252967E02,-2.369210235636181001661E02, CS 4 1.249884822712447891440E00/ DATA R/-2.645677793077147237806D00,-2.378372882815725244124D00, 1 -2.421106956980653511550D01, 1.052976392459015155422D01, 2 1.945603779539281810439D01,-3.015761863840593359165D01, 3 1.120011024227297451523D01,-3.988850730390541057912D00, 4 9.565134591978630774217D00, 9.981193787537396413219D-1/ DATA S/ 1.598517957704779356479D-4, 4.644185932583286942650D00, 1 3.697412299772985940785D02,-8.791401054875438925029D00, 2 7.608194509086645763123D02, 2.852397548119248700147D01, 3 4.731097187816050252967D02,-2.369210235636181001661D02, 4 1.249884822712447891440D00/ C---------------------------------------------------------------------- C J-fraction coefficients for 12.0 <= X < 24.0 C---------------------------------------------------------------------- CS DATA P1/-1.647721172463463140042E00,-1.860092121726437582253E01, CS 1 -1.000641913989284829961E01,-2.105740799548040450394E01, CS 2 -9.134835699998742552432E-1,-3.323612579343962284333E01, CS 3 2.495487730402059440626E01, 2.652575818452799819855E01, CS 4 -1.845086232391278674524E00, 9.999933106160568739091E-1/ CS DATA Q1/ 9.792403599217290296840E01, 6.403800405352415551324E01, CS 1 5.994932325667407355255E01, 2.538819315630708031713E02, CS 2 4.429413178337928401161E01, 1.192832423968601006985E03, CS 3 1.991004470817742470726E02,-1.093556195391091143924E01, CS 4 1.001533852045342697818E00/ DATA P1/-1.647721172463463140042D00,-1.860092121726437582253D01, 1 -1.000641913989284829961D01,-2.105740799548040450394D01, 2 -9.134835699998742552432D-1,-3.323612579343962284333D01, 3 2.495487730402059440626D01, 2.652575818452799819855D01, 4 -1.845086232391278674524D00, 9.999933106160568739091D-1/ DATA Q1/ 9.792403599217290296840D01, 6.403800405352415551324D01, 1 5.994932325667407355255D01, 
2.538819315630708031713D02, 2 4.429413178337928401161D01, 1.192832423968601006985D03, 3 1.991004470817742470726D02,-1.093556195391091143924D01, 4 1.001533852045342697818D00/ C---------------------------------------------------------------------- C J-fraction coefficients for X .GE. 24.0 C---------------------------------------------------------------------- CS DATA P2/ 1.75338801265465972390E02,-2.23127670777632409550E02, CS 1 -1.81949664929868906455E01,-2.79798528624305389340E01, CS 2 -7.63147701620253630855E00,-1.52856623636929636839E01, CS 3 -7.06810977895029358836E00,-5.00006640413131002475E00, CS 4 -3.00000000320981265753E00, 1.00000000000000485503E00/ CS DATA Q2/ 3.97845977167414720840E04, 3.97277109100414518365E00, CS 1 1.37790390235747998793E02, 1.17179220502086455287E02, CS 2 7.04831847180424675988E01,-1.20187763547154743238E01, CS 3 -7.99243595776339741065E00,-2.99999894040324959612E00, CS 4 1.99999999999048104167E00/ DATA P2/ 1.75338801265465972390D02,-2.23127670777632409550D02, 1 -1.81949664929868906455D01,-2.79798528624305389340D01, 2 -7.63147701620253630855D00,-1.52856623636929636839D01, 3 -7.06810977895029358836D00,-5.00006640413131002475D00, 4 -3.00000000320981265753D00, 1.00000000000000485503D00/ DATA Q2/ 3.97845977167414720840D04, 3.97277109100414518365D00, 1 1.37790390235747998793D02, 1.17179220502086455287D02, 2 7.04831847180424675988D01,-1.20187763547154743238D01, 3 -7.99243595776339741065D00,-2.99999894040324959612D00, 4 1.99999999999048104167D00/ C---------------------------------------------------------------------- X = ARG IF (X .EQ. ZERO) THEN EI = -XINF IF (INT .EQ. 2) EI = -EI ELSE IF ((X .LT. ZERO) .OR. (INT .EQ. 2)) THEN C---------------------------------------------------------------------- C Calculate EI for negative argument or for E1. C---------------------------------------------------------------------- Y = ABS(X) IF (Y .LE. ONE) THEN SUMP = A(7) * Y + A(1) SUMQ = Y + B(1) DO 110 I = 2, 6 SUMP = SUMP * Y + A(I) SUMQ = SUMQ * Y + B(I) 110 CONTINUE EI = LOG(Y) - SUMP / SUMQ IF (INT .EQ. 3) EI = EI * EXP(Y) ELSE IF (Y .LE. FOUR) THEN W = ONE / Y SUMP = C(1) SUMQ = D(1) DO 130 I = 2, 9 SUMP = SUMP * W + C(I) SUMQ = SUMQ * W + D(I) 130 CONTINUE EI = - SUMP / SUMQ IF (INT .NE. 3) EI = EI * EXP(-Y) ELSE IF ((Y .GT. XBIG) .AND. (INT .LT. 3)) THEN EI = ZERO ELSE W = ONE / Y SUMP = E(1) SUMQ = F(1) DO 150 I = 2, 10 SUMP = SUMP * W + E(I) SUMQ = SUMQ * W + F(I) 150 CONTINUE EI = -W * (ONE - W * SUMP / SUMQ ) IF (INT .NE. 3) EI = EI * EXP(-Y) END IF END IF IF (INT .EQ. 2) EI = -EI ELSE IF (X .LT. SIX) THEN C---------------------------------------------------------------------- C To improve conditioning, rational approximations are expressed C in terms of Chebyshev polynomials for 0 <= X < 6, and in C continued fraction form for larger X. C---------------------------------------------------------------------- T = X + X T = T / THREE - TWO PX(1) = ZERO QX(1) = ZERO PX(2) = P(1) QX(2) = Q(1) DO 210 I = 2, 9 PX(I+1) = T * PX(I) - PX(I-1) + P(I) QX(I+1) = T * QX(I) - QX(I-1) + Q(I) 210 CONTINUE SUMP = HALF * T * PX(10) - PX(9) + P(10) SUMQ = HALF * T * QX(10) - QX(9) + Q(10) FRAC = SUMP / SUMQ XMX0 = (X - X01/X11) - X02 IF (ABS(XMX0) .GE. P037) THEN EI = LOG(X/X0) + XMX0 * FRAC IF (INT .EQ. 
3) EI = EXP(-X) * EI ELSE C---------------------------------------------------------------------- C Special approximation to ln(X/X0) for X close to X0 C---------------------------------------------------------------------- Y = XMX0 / (X + X0) YSQ = Y*Y SUMP = PLG(1) SUMQ = YSQ + QLG(1) DO 220 I = 2, 4 SUMP = SUMP*YSQ + PLG(I) SUMQ = SUMQ*YSQ + QLG(I) 220 CONTINUE EI = (SUMP / (SUMQ*(X+X0)) + FRAC) * XMX0 IF (INT .EQ. 3) EI = EXP(-X) * EI END IF ELSE IF (X .LT. TWELVE) THEN FRAC = ZERO DO 230 I = 1, 9 FRAC = S(I) / (R(I) + X + FRAC) 230 CONTINUE EI = (R(10) + FRAC) / X IF (INT .NE. 3) EI = EI * EXP(X) ELSE IF (X .LE. TWO4) THEN FRAC = ZERO DO 240 I = 1, 9 FRAC = Q1(I) / (P1(I) + X + FRAC) 240 CONTINUE EI = (P1(10) + FRAC) / X IF (INT .NE. 3) EI = EI * EXP(X) ELSE IF ((X .GE. XMAX) .AND. (INT .LT. 3)) THEN EI = XINF ELSE Y = ONE / X FRAC = ZERO DO 250 I = 1, 9 FRAC = Q2(I) / (P2(I) + X + FRAC) 250 CONTINUE FRAC = P2(10) + FRAC EI = Y + Y * Y * FRAC IF (INT .NE. 3) THEN IF (X .LE. XMAX-TWO4) THEN EI = EI * EXP(X) ELSE C---------------------------------------------------------------------- C Calculation reformulated to avoid premature overflow C---------------------------------------------------------------------- EI = (EI * EXP(X-FOURTY)) * EXP40 END IF END IF END IF END IF RESULT = EI RETURN C---------- Last line of CALCEI ---------- END SUBROUTINE einlib(X, RESULT) C FUNCTION EINLIB(X) C-------------------------------------------------------------------- C C This function program computes approximate values for the C exponential integral Ei(x), where x is real. C C Author: W. J. Cody C C Latest modification: January 12, 1988 C Latest modification: 20130629 by TWY C C-------------------------------------------------------------------- INTEGER INT CS REAL EI CS REAL X CS REAL RESULT DOUBLE PRECISION X CD DOUBLE PRECISION EI DOUBLE PRECISION RESULT C-------------------------------------------------------------------- INT = 1 CALL calcei(X,RESULT,INT) CD EI = RESULT RETURN C---------- Last line of EI ---------- END SUBROUTINE expeinl(X, RESULT) C FUNCTION EXPEINL(X) C-------------------------------------------------------------------- C C This function program computes approximate values for the C function exp(-x) * Ei(x), where Ei(x) is the exponential C integral, and x is real. C C Author: W. J. Cody C C Latest modification: January 12, 1988 C Latest modification: 20130629 by TWY C C-------------------------------------------------------------------- INTEGER INT CS REAL EXPEI CS REAL X CS REAL RESULT CD DOUBLE PRECISION EXPEI DOUBLE PRECISION X DOUBLE PRECISION RESULT C-------------------------------------------------------------------- INT = 3 CALL calcei(X,RESULT,INT) CD EXPEI = RESULT RETURN C---------- Last line of EXPEI ---------- END SUBROUTINE eonenl(X, RESULT) C FUNCTION EONENL(X) C-------------------------------------------------------------------- C C This function program computes approximate values for the C exponential integral E1(x), where x is real. C C Author: W. J. 
Cody C C Latest modification: January 12, 1988 C Latest modification: 20130629 by TWY C C-------------------------------------------------------------------- INTEGER INT CS REAL EONE CS REAL X CS REAL RESULT CD DOUBLE PRECISION EONE DOUBLE PRECISION X DOUBLE PRECISION RESULT C-------------------------------------------------------------------- INT = 2 CALL calcei(X,RESULT,INT) CD EONE = RESULT RETURN C---------- Last line of EONE ---------- END VGAM/src/tyeepolygamma.f0000644000176200001440000001152213135276761014633 0ustar liggesusersC Output from Public domain Ratfor, version 1.01 subroutine vdgam1(x, lfu2qhid, dvhw1ulq) implicit logical (a-z) double precision x, lfu2qhid integer dvhw1ulq double precision w, series, obr6tcex dvhw1ulq = 1 if(x .le. 0.0d0)then dvhw1ulq = 0 return endif if(x .lt. 6.0d0)then call vdgam2(x + 6.0d0, obr6tcex, dvhw1ulq) lfu2qhid = obr6tcex - 1.0d0/x - 1.0d0/(x + 1.0d0) - 1.0d0/(x + 2.0 *d0) - 1.0d0/(x + 3.0d0) - 1.0d0/(x + 4.0d0) - 1.0d0/(x + 5.0d0) return endif w = 1.0d0 / (x * x) series = ((w * (-1.0d0/12.0d0 + ((w * (1.0d0/120.0d0 + ((w * (-1.0 *d0/252.0d0 + ((w * (1.0d0/240.0d0 + ((w * (-1.0d0/132.0d0 + ((w * *(691.0d0/32760.0d0 + ((w * (-1.0d0/12.0d0 + (3617.0d0 * w)/8160.0d *0))))))))))))))))))))) lfu2qhid = ( dlog(x) - 0.5d0/x + series ) return end subroutine vdgam2(x, lfu2qhid, dvhw1ulq) implicit logical (a-z) double precision x, lfu2qhid integer dvhw1ulq double precision w, series, obr6tcex dvhw1ulq = 1 if(x .le. 0.0d0)then dvhw1ulq = 0 return endif if(x .lt. 6.0d0)then call vdgam1(x + 6.0d0, obr6tcex, dvhw1ulq) lfu2qhid = obr6tcex - 1.0d0/x - 1.0d0/(x + 1.0d0) - 1.0d0/(x + 2.0 *d0) - 1.0d0/(x + 3.0d0) - 1.0d0/(x + 4.0d0) - 1.0d0/(x + 5.0d0) return endif w = 1.0d0 / (x * x) series = ((w * (-1.0d0/12.0d0 + ((w * (1.0d0/120.0d0 + ((w * (-1.0 *d0/252.0d0 + ((w * (1.0d0/240.0d0 + ((w * (-1.0d0/132.0d0 + ((w * *(691.0d0/32760.0d0 + ((w * (-1.0d0/12.0d0 + (3617.0d0 * w)/8160.0d *0))))))))))))))))))))) lfu2qhid = ( dlog(x) - 0.5d0/x + series ) return end subroutine vtgam1(x, lfu2qhid, dvhw1ulq) implicit logical (a-z) double precision x, lfu2qhid integer dvhw1ulq double precision w, series, obr6tcex dvhw1ulq = 1 if(x .le. 0.0d0)then dvhw1ulq = 0 return endif if(x .lt. 6.0d0)then call vtgam2(x + 6.0d0, obr6tcex, dvhw1ulq) lfu2qhid = obr6tcex + 1.0d0/x**2 + 1.0d0/(x + 1.0d0)**2 + 1.0d0/(x * + 2.0d0)**2 + 1.0d0/(x + 3.0d0)**2 + 1.0d0/(x + 4.0d0)**2 + 1.0d0 */(x + 5.0d0)**2 return endif w = 1.0d0 / (x * x) series = 1.0d0 + (w * (1.0d0/6.0d0 + (w * (-1.0d0/30.0d0 + (w * (1 *.0d0/42.0d0 + (w * (-1.0d0/30.0d0 + (w * (5.0d0/66.0d0 + (w * (-69 *1.0d0/2730.0d0 + (w * (7.0d0/6.0d0 - (3617.0d0 * w)/510.0d0))))))) *))))))) lfu2qhid = 0.5d0 * w + series / x return end subroutine vtgam2(x, lfu2qhid, dvhw1ulq) implicit logical (a-z) double precision x, lfu2qhid integer dvhw1ulq double precision w, series, obr6tcex dvhw1ulq = 1 if(x .le. 0.0d0)then dvhw1ulq = 0 return endif if(x .lt. 6.0d0)then call vtgam1(x + 6.0d0, obr6tcex, dvhw1ulq) lfu2qhid = obr6tcex + 1.0d0/x**2 + 1.0d0/(x + 1.0d0)**2 + 1.0d0/(x * + 2.0d0)**2 + 1.0d0/(x + 3.0d0)**2 + 1.0d0/(x + 4.0d0)**2 + 1.0d0 */(x + 5.0d0)**2 return endif w = 1.0d0 / (x * x) series = 1.0d0 + (w * (1.0d0/6.0d0 + (w * (-1.0d0/30.0d0 + (w * (1 *.0d0/42.0d0 + (w * (-1.0d0/30.0d0 + (w * (5.0d0/66.0d0 + (w * (-69 *1.0d0/2730.0d0 + (w * (7.0d0/6.0d0 - (3617.0d0 * w)/510.0d0))))))) *))))))) lfu2qhid = 0.5d0 * w + series / x return end subroutine dgam1w(x, lfu2qhid, n, dvhw1ulq) implicit logical (a-z) integer n, dvhw1ulq double precision x(n), lfu2qhid(n) integer i, okobr6tcex dvhw1ulq = 1 do23016 i=1,n call vdgam1(x(i), lfu2qhid(i), okobr6tcex) if(okobr6tcex .ne. 1)then dvhw1ulq = okobr6tcex endif 23016 continue 23017 continue return end subroutine tgam1w(x, lfu2qhid, n, dvhw1ulq) implicit logical (a-z) integer n, dvhw1ulq double precision x(n), lfu2qhid(n) integer i, okobr6tcex dvhw1ulq = 1 do23020 i=1,n call vtgam1(x(i), lfu2qhid(i), okobr6tcex) if(okobr6tcex .ne. 1)then dvhw1ulq = okobr6tcex endif 23020 continue 23021 continue return end subroutine cum8sum(ci1oyxas, lfu2qhid, nlfu2qhid, valong, ntot, no *tdvhw1ulq) implicit logical (a-z) integer nlfu2qhid, ntot, notdvhw1ulq double precision ci1oyxas(ntot), lfu2qhid(nlfu2qhid), valong(ntot) integer ayfnwr1v, iii iii = 1 lfu2qhid(iii) = ci1oyxas(iii) do23024 ayfnwr1v=2,ntot if(valong(ayfnwr1v) .gt. valong(ayfnwr1v-1))then lfu2qhid(iii) = lfu2qhid(iii) + ci1oyxas(ayfnwr1v) else iii = iii + 1 lfu2qhid(iii) = ci1oyxas(ayfnwr1v) endif 23024 continue 23025 continue if(iii .eq. nlfu2qhid)then notdvhw1ulq = 0 else notdvhw1ulq = 1 endif return end VGAM/src/vdigami.f0000644000176200001440000000763013135276761013401 0ustar liggesusers SUBROUTINE vdigami(D, X, P, GPLOG, GP1LOG, PSIP, PSIP1, PSIDP, * PSIDP1, IFAULT, TMAX) C C ALGORITHM AS 187 APPL. STATIST. (1982) VOL.31, NO.3 C C Computes derivatives of the incomplete gamma integral for positive C parameters, X, P, using a series expansion if P > X or X <= 1, and C a continued fraction expansion otherwise. C C Calculation of D(4) in line 60 corrected 5 October 1993. C C N.B. The user must input values of the incomplete gamma, digamma C and trigamma functions. These can be obtained using AS 239 C (or 32), AS 103 and AS 121 respectively. C C C C C 20130214; adapted by T. W. Yee to handle DOUBLE PRECISION arguments. C And declarations of *all* variables. C And a wrapper function written to call this subroutine. C TMAX is now input. C Seems to work but more testing is required. C C 20141108; A, C, CP, CPP, DSP, DSPP, DFP, DFPP, F, S, TMAXP etc. now C declared, by T. W. Yee. C ABS() changed to DABS() too. C C DOUBLE PRECISION X, P, GPLOG, GP1LOG, PSIP, PSIP1, PSIDP, PSIDP1 DOUBLE PRECISION TMAX INTEGER IFAULT C DOUBLE PRECISION A, AN, B, C, CP, CPC, CPP, DSP, DSPP, DFP, DFPP DOUBLE PRECISION F, PM1, S, S0, XLOG, TERM, TMAXP DOUBLE PRECISION E, OFLO, VSMALL C C C C C INTEGER I, I2 DOUBLE PRECISION PN(6), D(6), DP(6), DPP(6), ZERO, ONE, TWO C DATA TMAX/100.0/ DATA E, OFLO, VSMALL/1.D-6, 1.D30, 1.D-30/ DATA ZERO/0.0/, ONE/1.0/, TWO/2.0/ C IFAULT = 0 C C Derivatives with respect to X C PM1 = P - ONE XLOG = DLOG(X) D(1) = DEXP(-GPLOG + PM1*XLOG - X) D(2) = D(1) * (PM1/X - ONE) D(5) = D(1) * (XLOG - PSIP) C C Derivatives with respect to P C IF (X .GT. ONE .AND. X .GE.
P) GO TO 30 C C Series expansion C F = DEXP(P*XLOG - GP1LOG - X) DFP = F * (XLOG - PSIP1) DFPP = DFP*DFP/F - F*PSIDP1 C TMAXP = TMAX + P C = ONE S = ONE CP = ZERO CPP = ZERO DSP = ZERO DSPP = ZERO A = P 1 A = A + ONE CPC = CP / C CP = CPC - ONE/A CPP = CPP/C - CPC*CPC + ONE/A**2 C = C*X/A CP = CP*C CPP = CPP*C + CP*CP/C S = S + C DSP = DSP + CP DSPP = DSPP + CPP IF (A .GT. TMAXP) GO TO 1001 IF (C .GT. E*S) GO TO 1 D(6) = S*F D(3) = S*DFP + F*DSP D(4) = S*DFPP + TWO*DFP*DSP + F*DSPP RETURN C C Continued fraction expansion C 30 F = DEXP(P*XLOG - GPLOG - X) DFP = F * (XLOG - PSIP) DFPP = DFP*DFP/F - F*PSIDP C A = PM1 B = X + ONE - A TERM = ZERO PN(1) = ONE PN(2) = X PN(3) = X + ONE PN(4) = X * B S0 = PN(3) / PN(4) DO 31 I = 1, 4 DP(I) = ZERO DPP(I) = ZERO 31 CONTINUE DP(4) = -X C 32 A = A - ONE B = B + TWO TERM = TERM + ONE AN = A*TERM PN(5) = B*PN(3) + AN*PN(1) PN(6) = B*PN(4) + AN*PN(2) DP(5) = B*DP(3) - PN(3) + AN*DP(1) + PN(1)*TERM DP(6) = B*DP(4) - PN(4) + AN*DP(2) + PN(2)*TERM DPP(5) = B*DPP(3) + AN*DPP(1) + TWO*(TERM*DP(1) - DP(3)) DPP(6) = B*DPP(4) + AN*DPP(2) + TWO*(TERM*DP(2) - DP(4)) C IF (DABS(PN(6)) .LT. VSMALL) GO TO 35 S = PN(5) / PN(6) C = DABS(S - S0) IF (C*P .GT. E) GO TO 34 IF (C .LE. E*S) GO TO 42 C 34 S0 = S 35 DO 36 I = 1, 4 I2 = I + 2 DP(I) = DP(I2) DPP(I) = DPP(I2) PN(I) = PN(I2) 36 CONTINUE C IF (TERM .GT. TMAX) GO TO 1001 IF (DABS(PN(5)) .LT. OFLO) GO TO 32 DO 41 I = 1, 4 DP(I) = DP(I) / OFLO DPP(I) = DPP(I) / OFLO PN(I) = PN(I) / OFLO 41 CONTINUE GO TO 32 C 42 D(6) = ONE - F*S DSP = (DP(5) - S*DP(6)) / PN(6) DSPP = (DPP(5) - S*DPP(6) - TWO*DSP*DP(6)) / PN(6) D(3) = -F*DSP - S*DFP D(4) = -F*DSPP - TWO*DSP*DFP - S*DFPP RETURN C C Set fault indicator C 1001 IFAULT = 1 RETURN END VGAM/src/VGAM_init.c0000644000176200001440000001635713135276761013537 0ustar liggesusers#include <R.h> #include <stdlib.h> // for NULL #include <R_ext/Rdynload.h> /* FIXME: Check these declarations against the C/Fortran source code.
*/ /* .C calls */ extern void a2mccc(void *, void *, void *, void *, void *, void *, void *); extern void cqo_1(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void cqo_2(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void dcqo1(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void eimpnbinomspecialp(void *, void *, void *, void *, void *, void *); extern void lerchphi123(void *, void *, void *, void *, void *, void *, void *, void *); extern void m2accc(void *, void *, void *, void *, void *, void *, void *, void *); extern void mux111ccc(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void mux15ccc(void *, void *, void *, void *, void *); extern void mux2ccc(void *, void *, void *, void *, void *, void *); extern void mux22ccc(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void mux5ccc(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void mux55ccc(void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void mux7ccc(void *, void *, void *, void *, void *, void *, void *); extern void pnorm2ccc(void *, void *, void *, void *, void *, void *); extern void sf_C_expexpint(void *, void *, void *); extern void sf_C_expint(void *, void *, void *); extern void sf_C_expint_e1(void *, void *, void *); extern void tapply_mat1(void *, void *, void *, void *); extern void tyee_C_cum8sum(void *, void *, void *, void *, void *, void *); extern void vbacksubccc(void *, void *, void *, void *, void *, void *, void *, void *); extern void vcao6(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void vcholccc(void *, void *, void *, void *, void *, void *, void *, void *); extern void vdcao6(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void vforsubccc(void *, void *, void *, void *, void *, void *, void *, void *); extern void VGAM_C_kend_tau(void *, void *, void *, void *); extern void VGAM_C_mux34(void *, void *, void *, void *, void *, void *); extern void VGAM_C_vdigami(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void vknootl2(void *, void *, void *, void *, void *); extern void vsuff9(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, 
void *); extern void vzetawr(void *, void *, void *, void *); extern void Yee_pknootl2(void *, void *, void *, void *); extern void Yee_spline(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void Yee_vbfa(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void Yee_vbvs(void *, void *, void *, void *, void *, void *, void *, void *); /* .Fortran calls */ extern void F77_NAME(veigenf)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); extern void F77_NAME(yjngintf)(void *, void *, void *, void *, void *, void *, void *, void *, void *, void *, void *); static const R_CMethodDef CEntries[] = { {"a2mccc", (DL_FUNC) &a2mccc, 7}, {"cqo_1", (DL_FUNC) &cqo_1, 24}, {"cqo_2", (DL_FUNC) &cqo_2, 24}, {"dcqo1", (DL_FUNC) &dcqo1, 29}, {"eimpnbinomspecialp", (DL_FUNC) &eimpnbinomspecialp, 6}, {"lerchphi123", (DL_FUNC) &lerchphi123, 8}, {"m2accc", (DL_FUNC) &m2accc, 8}, {"mux111ccc", (DL_FUNC) &mux111ccc, 11}, {"mux15ccc", (DL_FUNC) &mux15ccc, 5}, {"mux2ccc", (DL_FUNC) &mux2ccc, 6}, {"mux22ccc", (DL_FUNC) &mux22ccc, 10}, {"mux5ccc", (DL_FUNC) &mux5ccc, 16}, {"mux55ccc", (DL_FUNC) &mux55ccc, 9}, {"mux7ccc", (DL_FUNC) &mux7ccc, 7}, {"pnorm2ccc", (DL_FUNC) &pnorm2ccc, 6}, {"sf_C_expexpint", (DL_FUNC) &sf_C_expexpint, 3}, {"sf_C_expint", (DL_FUNC) &sf_C_expint, 3}, {"sf_C_expint_e1", (DL_FUNC) &sf_C_expint_e1, 3}, {"tapply_mat1", (DL_FUNC) &tapply_mat1, 4}, {"tyee_C_cum8sum", (DL_FUNC) &tyee_C_cum8sum, 6}, {"vbacksubccc", (DL_FUNC) &vbacksubccc, 8}, {"vcao6", (DL_FUNC) &vcao6, 42}, {"vcholccc", (DL_FUNC) &vcholccc, 8}, {"vdcao6", (DL_FUNC) &vdcao6, 47}, {"vforsubccc", (DL_FUNC) &vforsubccc, 8}, {"VGAM_C_kend_tau", (DL_FUNC) &VGAM_C_kend_tau, 4}, {"VGAM_C_mux34", (DL_FUNC) &VGAM_C_mux34, 6}, {"VGAM_C_vdigami", (DL_FUNC) &VGAM_C_vdigami, 12}, {"vknootl2", (DL_FUNC) &vknootl2, 5}, {"vsuff9", (DL_FUNC) &vsuff9, 21}, {"vzetawr", (DL_FUNC) &vzetawr, 4}, {"Yee_pknootl2", (DL_FUNC) &Yee_pknootl2, 4}, {"Yee_spline", (DL_FUNC) &Yee_spline, 28}, {"Yee_vbfa", (DL_FUNC) &Yee_vbfa, 30}, {"Yee_vbvs", (DL_FUNC) &Yee_vbvs, 8}, {NULL, NULL, 0} }; static const R_FortranMethodDef FortranEntries[] = { {"veigenf", (DL_FUNC) &F77_NAME(veigenf), 13}, {"yjngintf", (DL_FUNC) &F77_NAME(yjngintf), 11}, {NULL, NULL, 0} }; void R_init_VGAM(DllInfo *dll) { R_registerRoutines(dll, CEntries, NULL, FortranEntries, NULL); R_useDynamicSymbols(dll, FALSE); } VGAM/src/specfun3.c0000644000176200001440000000360013135276761013477 0ustar liggesusers #include #include #include #include #include void sf_C_expint(double *x, int *size, double *bzmd6ftv); void sf_C_expexpint(double *x, int *size, double *bzmd6ftv); void sf_C_expint_e1(double *x, int *size, double *bzmd6ftv); void VGAM_C_kend_tau(double *x, double *y, int *f8yswcat, double *bqelz3cy); void F77_NAME(einlib)(double*, double*); void F77_NAME(expeinl)(double*, double*); void F77_NAME(eonenl)(double*, double*); void sf_C_expint(double *x, int *size, double *bzmd6ftv) { int ayfnwr1v; for (ayfnwr1v = 0; ayfnwr1v < *size; ayfnwr1v++) F77_NAME(einlib)(x + ayfnwr1v, bzmd6ftv + ayfnwr1v); } void sf_C_expexpint(double *x, int *size, double *bzmd6ftv) { int ayfnwr1v; for 
(ayfnwr1v = 0; ayfnwr1v < *size; ayfnwr1v++) F77_NAME(expeinl)(x + ayfnwr1v, bzmd6ftv + ayfnwr1v); } void sf_C_expint_e1(double *x, int *size, double *bzmd6ftv) { int ayfnwr1v; for (ayfnwr1v = 0; ayfnwr1v < *size; ayfnwr1v++) F77_NAME(eonenl)(x + ayfnwr1v, bzmd6ftv + ayfnwr1v); } void VGAM_C_kend_tau(double *x, double *y, int *f8yswcat, double *bqelz3cy) { int ayfnwr1v, yq6lorbx, gp1jxzuh = *f8yswcat ; double q6zdcwxk1, q6zdcwxk2; for (ayfnwr1v = 0; ayfnwr1v < 3; ayfnwr1v++) bqelz3cy[ayfnwr1v] = 0.0; for (ayfnwr1v = 0; ayfnwr1v < gp1jxzuh; ayfnwr1v++) { for (yq6lorbx = ayfnwr1v + 1; yq6lorbx < *f8yswcat; yq6lorbx++) { q6zdcwxk1 = x[ayfnwr1v] - x[yq6lorbx]; q6zdcwxk2 = y[ayfnwr1v] - y[yq6lorbx]; if (q6zdcwxk1 == 0.0 || q6zdcwxk2 == 0.0) { bqelz3cy[1] += 1.0; } else if ((q6zdcwxk1 < 0.0 && q6zdcwxk2 < 0.0) || (q6zdcwxk1 > 0.0 && q6zdcwxk2 > 0.0)) { bqelz3cy[0] += 1.0; } else { bqelz3cy[2] += 1.0; } } } } VGAM/src/veigen.f0000644000176200001440000005255413135276761013245 0ustar liggesusers subroutine veigenf(M, n, x, vals, ov, vec, junk1, junk2, * wk, rowi, coli, dimmv, ec) implicit logical (a-z) integer M, n, ov, ec, i, k, dimmv, MM2, * rowi(M*(M+1)/2), coli(M*(M+1)/2), full double precision x(dimmv, n), vals(M, n), vec(M,M,n), junk1(M), * junk2(M), wk(M,M) MM2 = M*(M+1)/2 if(dimmv.eq.MM2) then full = 1 else full = 0 end if do 300 i=1,n do 600 k=1,dimmv wk(rowi(k), coli(k)) = x(k,i) wk(coli(k), rowi(k)) = wk(rowi(k), coli(k)) 600 continue if(full.eq.0) then do 500 k=dimmv+1,MM2 wk(rowi(k), coli(k)) = 0.0d0 wk(coli(k), rowi(k)) = 0.0d0 500 continue end if c call vrs818(M, M, wk, vals(1,i), ov, vec(1,1,i), junk1, * junk2, ec) if(ec.ne.0) goto 200 300 continue c 200 return end SUBROUTINE VRS818(NM,N,A,W,MATZ,Z,FV1,FV2,IERR) C INTEGER N,NM,IERR,MATZ DOUBLE PRECISION A(NM,N),W(N),Z(NM,N),FV1(N),FV2(N) C C THIS SUBROUTINE CALLS THE RECOMMENDED SEQUENCE OF C SUBROUTINES FROM THE EIGENSYSTEM SUBROUTINE PACKAGE (EISPACK) C TO FIND THE EIGENVALUES AND EIGENVECTORS (IF DESIRED) C OF A REAL SYMMETRIC MATRIX. C C ON INPUT C C NM MUST BE SET TO THE ROW DIMENSION OF THE TWO-DIMENSIONAL C ARRAY PARAMETERS AS DECLARED IN THE CALLING PROGRAM C DIMENSION STATEMENT. C C N IS THE ORDER OF THE MATRIX A. C C A CONTAINS THE REAL SYMMETRIC MATRIX. C C MATZ IS AN INTEGER VARIABLE SET EQUAL TO ZERO IF C ONLY EIGENVALUES ARE DESIRED. OTHERWISE IT IS SET TO C ANY NON-ZERO INTEGER FOR BOTH EIGENVALUES AND EIGENVECTORS. C C ON OUTPUT C C W CONTAINS THE EIGENVALUES IN ASCENDING ORDER. C C Z CONTAINS THE EIGENVECTORS IF MATZ IS NOT ZERO. C C IERR IS AN INTEGER OUTPUT VARIABLE SET EQUAL TO AN ERROR C COMPLETION CODE DESCRIBED IN THE DOCUMENTATION FOR TQLRAT C AND TQL2. THE NORMAL COMPLETION CODE IS ZERO. C C FV1 AND FV2 ARE TEMPORARY STORAGE ARRAYS. C C QUESTIONS AND COMMENTS SHOULD BE DIRECTED TO BURTON S. GARBOW, C MATHEMATICS AND COMPUTER SCIENCE DIV, ARGONNE NATIONAL LABORATORY C C THIS VERSION DATED AUGUST 1983. C C ------------------------------------------------------------------ C IF (N .LE. NM) GO TO 10 IERR = 10 * N GO TO 50 C 10 IF (MATZ .NE. 0) GO TO 20 C .......... FIND EIGENVALUES ONLY .......... CALL VTRED1(NM,N,A,W,FV1,FV2) CALL TQLRA9(N,W,FV2,IERR) GO TO 50 C .......... FIND BOTH EIGENVALUES AND EIGENVECTORS .......... 
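C
C     A MINIMAL USAGE SKETCH OF VRS818, FOLLOWING THE DOCUMENTATION
C     ABOVE.  THE DRIVER VARIABLES ARE ILLUSTRATIVE ONLY AND ARE NOT
C     PART OF EISPACK OR VGAM.
C
C         DOUBLE PRECISION A(3,3), W(3), Z(3,3), FV1(3), FV2(3)
C         INTEGER IERR
C         ... FILL (AT LEAST) THE LOWER TRIANGLE OF THE SYMMETRIC
C         ... MATRIX A, THEN
C         CALL VRS818(3, 3, A, W, 1, Z, FV1, FV2, IERR)
C
C     ON RETURN W HOLDS THE EIGENVALUES IN ASCENDING ORDER, THE
C     COLUMNS OF Z THE CORRESPONDING EIGENVECTORS (SINCE MATZ = 1),
C     AND IERR = 0 SIGNALS NORMAL COMPLETION.
C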
20 CALL VTRED2(NM,N,A,W,FV1,Z) CALL VTQL21(NM,N,W,FV1,Z,IERR) 50 RETURN END SUBROUTINE VTQL21(NM,N,D,E,Z,IERR) C INTEGER I,J,K,L,M,N,II,L1,L2,NM,MML,IERR DOUBLE PRECISION D(N),E(N),Z(NM,N) DOUBLE PRECISION C,C2,C3,DL1,EL1,F,G,H,P,R,S,S2,TST1,TST2,PYTHA9 C C THIS SUBROUTINE IS A TRANSLATION OF THE ALGOL PROCEDURE TQL2, C NUM. MATH. 11, 293-306(1968) BY BOWDLER, MARTIN, REINSCH, AND C WILKINSON. C HANDBOOK FOR AUTO. COMP., VOL.II-LINEAR ALGEBRA, 227-240(1971). C C THIS SUBROUTINE FINDS THE EIGENVALUES AND EIGENVECTORS C OF A SYMMETRIC TRIDIAGONAL MATRIX BY THE QL METHOD. C THE EIGENVECTORS OF A FULL SYMMETRIC MATRIX CAN ALSO C BE FOUND IF TRED2 HAS BEEN USED TO REDUCE THIS C FULL MATRIX TO TRIDIAGONAL FORM. C C ON INPUT C C NM MUST BE SET TO THE ROW DIMENSION OF TWO-DIMENSIONAL C ARRAY PARAMETERS AS DECLARED IN THE CALLING PROGRAM C DIMENSION STATEMENT. C C N IS THE ORDER OF THE MATRIX. C C D CONTAINS THE DIAGONAL ELEMENTS OF THE INPUT MATRIX. C C E CONTAINS THE SUBDIAGONAL ELEMENTS OF THE INPUT MATRIX C IN ITS LAST N-1 POSITIONS. E(1) IS ARBITRARY. C C Z CONTAINS THE TRANSFORMATION MATRIX PRODUCED IN THE C REDUCTION BY TRED2, IF PERFORMED. IF THE EIGENVECTORS C OF THE TRIDIAGONAL MATRIX ARE DESIRED, Z MUST CONTAIN C THE IDENTITY MATRIX. C C ON OUTPUT C C D CONTAINS THE EIGENVALUES IN ASCENDING ORDER. IF AN C ERROR EXIT IS MADE, THE EIGENVALUES ARE CORRECT BUT C UNORDERED FOR INDICES 1,2,...,IERR-1. C C E HAS BEEN DESTROYED. C C Z CONTAINS ORTHONORMAL EIGENVECTORS OF THE SYMMETRIC C TRIDIAGONAL (OR FULL) MATRIX. IF AN ERROR EXIT IS MADE, C Z CONTAINS THE EIGENVECTORS ASSOCIATED WITH THE STORED C EIGENVALUES. C C IERR IS SET TO C ZERO FOR NORMAL RETURN, C J IF THE J-TH EIGENVALUE HAS NOT BEEN C DETERMINED AFTER 30 ITERATIONS. C C CALLS PYTHA9 FOR DSQRT(A*A + B*B) . C C QUESTIONS AND COMMENTS SHOULD BE DIRECTED TO BURTON S. GARBOW, C MATHEMATICS AND COMPUTER SCIENCE DIV, ARGONNE NATIONAL LABORATORY C C THIS VERSION DATED AUGUST 1983. C C ------------------------------------------------------------------ c c unnecessary initialization of C3 and S2 to keep g77 -Wall happy c C3 = 0.0D0 S2 = 0.0D0 C IERR = 0 IF (N .EQ. 1) GO TO 1001 C DO 100 I = 2, N E(I-1) = E(I) 100 CONTINUE C F = 0.0D0 TST1 = 0.0D0 E(N) = 0.0D0 C DO 240 L = 1, N J = 0 H = DABS(D(L)) + DABS(E(L)) IF (TST1 .LT. H) TST1 = H C .......... LOOK FOR SMALL SUB-DIAGONAL ELEMENT .......... DO 110 M = L, N TST2 = TST1 + DABS(E(M)) IF (TST2 .EQ. TST1) GO TO 120 C .......... E(N) IS ALWAYS ZERO, SO THERE IS NO EXIT C THROUGH THE BOTTOM OF THE LOOP .......... 110 CONTINUE C 120 IF (M .EQ. L) GO TO 220 130 IF (J .EQ. 30) GO TO 1000 J = J + 1 C .......... FORM SHIFT .......... L1 = L + 1 L2 = L1 + 1 G = D(L) P = (D(L1) - G) / (2.0D0 * E(L)) R = PYTHA9(P,1.0D0) D(L) = E(L) / (P + DSIGN(R,P)) D(L1) = E(L) * (P + DSIGN(R,P)) DL1 = D(L1) H = G - D(L) IF (L2 .GT. N) GO TO 145 C DO 140 I = L2, N D(I) = D(I) - H 140 CONTINUE C 145 F = F + H C .......... QL TRANSFORMATION .......... P = D(M) C = 1.0D0 C2 = C EL1 = E(L1) S = 0.0D0 MML = M - L C .......... FOR I=M-1 STEP -1 UNTIL L DO -- .......... DO 200 II = 1, MML C3 = C2 C2 = C S2 = S I = M - II G = C * E(I) H = C * P R = PYTHA9(P,E(I)) E(I+1) = S * R S = E(I) / R C = P / R P = C * D(I) - S * G D(I+1) = H + S * (C * G + S * D(I)) C .......... FORM VECTOR .......... DO 180 K = 1, N H = Z(K,I+1) Z(K,I+1) = S * Z(K,I) + C * H Z(K,I) = C * Z(K,I) - S * H 180 CONTINUE C 200 CONTINUE C P = -S * S2 * C3 * EL1 * E(L) / DL1 E(L) = S * P D(L) = C * P TST2 = TST1 + DABS(E(L)) IF (TST2 .GT. 
TST1) GO TO 130 220 D(L) = D(L) + F 240 CONTINUE C .......... ORDER EIGENVALUES AND EIGENVECTORS .......... DO 300 II = 2, N I = II - 1 K = I P = D(I) C DO 260 J = II, N IF (D(J) .GE. P) GO TO 260 K = J P = D(J) 260 CONTINUE C IF (K .EQ. I) GO TO 300 D(K) = D(I) D(I) = P C DO 280 J = 1, N P = Z(J,I) Z(J,I) = Z(J,K) Z(J,K) = P 280 CONTINUE C 300 CONTINUE C GO TO 1001 C .......... SET ERROR -- NO CONVERGENCE TO AN C EIGENVALUE AFTER 30 ITERATIONS .......... 1000 IERR = L 1001 RETURN END SUBROUTINE TQLRA9(N,D,E2,IERR) C INTEGER I,J,L,M,N,II,L1,MML,IERR DOUBLE PRECISION D(N),E2(N) DOUBLE PRECISION B,C,F,G,H,P,R,S,T,EPSLO9,PYTHA9 C C THIS SUBROUTINE IS A TRANSLATION OF THE ALGOL PROCEDURE TQLRAT, C ALGORITHM 464, COMM. ACM 16, 689(1973) BY REINSCH. C C THIS SUBROUTINE FINDS THE EIGENVALUES OF A SYMMETRIC C TRIDIAGONAL MATRIX BY THE RATIONAL QL METHOD. C C ON INPUT C C N IS THE ORDER OF THE MATRIX. C C D CONTAINS THE DIAGONAL ELEMENTS OF THE INPUT MATRIX. C C E2 CONTAINS THE SQUARES OF THE SUBDIAGONAL ELEMENTS OF THE C INPUT MATRIX IN ITS LAST N-1 POSITIONS. E2(1) IS ARBITRARY. C C ON OUTPUT C C D CONTAINS THE EIGENVALUES IN ASCENDING ORDER. IF AN C ERROR EXIT IS MADE, THE EIGENVALUES ARE CORRECT AND C ORDERED FOR INDICES 1,2,...IERR-1, BUT MAY NOT BE C THE SMALLEST EIGENVALUES. C C E2 HAS BEEN DESTROYED. C C IERR IS SET TO C ZERO FOR NORMAL RETURN, C J IF THE J-TH EIGENVALUE HAS NOT BEEN C DETERMINED AFTER 30 ITERATIONS. C C CALLS PYTHA9 FOR DSQRT(A*A + B*B) . C C QUESTIONS AND COMMENTS SHOULD BE DIRECTED TO BURTON S. GARBOW, C MATHEMATICS AND COMPUTER SCIENCE DIV, ARGONNE NATIONAL LABORATORY C C THIS VERSION DATED AUGUST 1983. C C ------------------------------------------------------------------ c c unnecessary initialization of B and C to keep g77 -Wall happy c B = 0.0D0 C = 0.0D0 C IERR = 0 IF (N .EQ. 1) GO TO 1001 C DO 100 I = 2, N E2(I-1) = E2(I) 100 CONTINUE C F = 0.0D0 T = 0.0D0 E2(N) = 0.0D0 C DO 290 L = 1, N J = 0 H = DABS(D(L)) + DSQRT(E2(L)) IF (T .GT. H) GO TO 105 T = H B = EPSLO9(T) C = B * B C .......... LOOK FOR SMALL SQUARED SUB-DIAGONAL ELEMENT .......... 105 DO 110 M = L, N IF (E2(M) .LE. C) GO TO 120 C .......... E2(N) IS ALWAYS ZERO, SO THERE IS NO EXIT C THROUGH THE BOTTOM OF THE LOOP .......... 110 CONTINUE C 120 IF (M .EQ. L) GO TO 210 130 IF (J .EQ. 30) GO TO 1000 J = J + 1 C .......... FORM SHIFT .......... L1 = L + 1 S = DSQRT(E2(L)) G = D(L) P = (D(L1) - G) / (2.0D0 * S) R = PYTHA9(P,1.0D0) D(L) = S / (P + DSIGN(R,P)) H = G - D(L) C DO 140 I = L1, N D(I) = D(I) - H 140 CONTINUE C F = F + H C .......... RATIONAL QL TRANSFORMATION .......... G = D(M) IF (G .EQ. 0.0D0) G = B H = G S = 0.0D0 MML = M - L C .......... FOR I=M-1 STEP -1 UNTIL L DO -- .......... DO 200 II = 1, MML I = M - II P = G * H R = P + E2(I) E2(I+1) = S * R S = E2(I) / R D(I+1) = H + S * (H + D(I)) G = D(I) - E2(I) / G IF (G .EQ. 0.0D0) G = B H = G * P / R 200 CONTINUE C E2(L) = S * G D(L) = H C .......... GUARD AGAINST UNDERFLOW IN CONVERGENCE TEST .......... IF (H .EQ. 0.0D0) GO TO 210 IF (DABS(E2(L)) .LE. DABS(C/H)) GO TO 210 E2(L) = H * E2(L) IF (E2(L) .NE. 0.0D0) GO TO 130 210 P = D(L) + F C .......... ORDER EIGENVALUES .......... IF (L .EQ. 1) GO TO 250 C .......... FOR I=L STEP -1 UNTIL 2 DO -- .......... DO 230 II = 2, L I = L + 2 - II IF (P .GE. D(I-1)) GO TO 270 D(I) = D(I-1) 230 CONTINUE C 250 I = 1 270 D(I) = P 290 CONTINUE C GO TO 1001 C .......... SET ERROR -- NO CONVERGENCE TO AN C EIGENVALUE AFTER 30 ITERATIONS .......... 
1000 IERR = L 1001 RETURN END SUBROUTINE VTRED1(NM,N,A,D,E,E2) C INTEGER I,J,K,L,N,II,NM,JP1 DOUBLE PRECISION A(NM,N),D(N),E(N),E2(N) DOUBLE PRECISION F,G,H,SCALE C C THIS SUBROUTINE IS A TRANSLATION OF THE ALGOL PROCEDURE TRED1, C NUM. MATH. 11, 181-195(1968) BY MARTIN, REINSCH, AND WILKINSON. C HANDBOOK FOR AUTO. COMP., VOL.II-LINEAR ALGEBRA, 212-226(1971). C C THIS SUBROUTINE REDUCES A REAL SYMMETRIC MATRIX C TO A SYMMETRIC TRIDIAGONAL MATRIX USING C ORTHOGONAL SIMILARITY TRANSFORMATIONS. C C ON INPUT C C NM MUST BE SET TO THE ROW DIMENSION OF TWO-DIMENSIONAL C ARRAY PARAMETERS AS DECLARED IN THE CALLING PROGRAM C DIMENSION STATEMENT. C C N IS THE ORDER OF THE MATRIX. C C A CONTAINS THE REAL SYMMETRIC INPUT MATRIX. ONLY THE C LOWER TRIANGLE OF THE MATRIX NEED BE SUPPLIED. C C ON OUTPUT C C A CONTAINS INFORMATION ABOUT THE ORTHOGONAL TRANS- C FORMATIONS USED IN THE REDUCTION IN ITS STRICT LOWER C TRIANGLE. THE FULL UPPER TRIANGLE OF A IS UNALTERED. C C D CONTAINS THE DIAGONAL ELEMENTS OF THE TRIDIAGONAL MATRIX. C C E CONTAINS THE SUBDIAGONAL ELEMENTS OF THE TRIDIAGONAL C MATRIX IN ITS LAST N-1 POSITIONS. E(1) IS SET TO ZERO. C C E2 CONTAINS THE SQUARES OF THE CORRESPONDING ELEMENTS OF E. C E2 MAY COINCIDE WITH E IF THE SQUARES ARE NOT NEEDED. C C QUESTIONS AND COMMENTS SHOULD BE DIRECTED TO BURTON S. GARBOW, C MATHEMATICS AND COMPUTER SCIENCE DIV, ARGONNE NATIONAL LABORATORY C C THIS VERSION DATED AUGUST 1983. C C ------------------------------------------------------------------ C DO 100 I = 1, N D(I) = A(N,I) A(N,I) = A(I,I) 100 CONTINUE C .......... FOR I=N STEP -1 UNTIL 1 DO -- .......... DO 300 II = 1, N I = N + 1 - II L = I - 1 H = 0.0D0 SCALE = 0.0D0 IF (L .LT. 1) GO TO 130 C .......... SCALE ROW (ALGOL TOL THEN NOT NEEDED) .......... DO 120 K = 1, L SCALE = SCALE + DABS(D(K)) 120 CONTINUE C IF (SCALE .NE. 0.0D0) GO TO 140 C DO 125 J = 1, L D(J) = A(L,J) A(L,J) = A(I,J) A(I,J) = 0.0D0 125 CONTINUE C 130 E(I) = 0.0D0 E2(I) = 0.0D0 GO TO 300 C 140 DO 150 K = 1, L D(K) = D(K) / SCALE H = H + D(K) * D(K) 150 CONTINUE C E2(I) = SCALE * SCALE * H F = D(L) G = -DSIGN(DSQRT(H),F) E(I) = SCALE * G H = H - F * G D(L) = F - G IF (L .EQ. 1) GO TO 285 C .......... FORM A*U .......... DO 170 J = 1, L E(J) = 0.0D0 170 CONTINUE C DO 240 J = 1, L F = D(J) G = E(J) + A(J,J) * F JP1 = J + 1 IF (L .LT. JP1) GO TO 220 C DO 200 K = JP1, L G = G + A(K,J) * D(K) E(K) = E(K) + A(K,J) * F 200 CONTINUE C 220 E(J) = G 240 CONTINUE C .......... FORM P .......... F = 0.0D0 C DO 245 J = 1, L E(J) = E(J) / H F = F + E(J) * D(J) 245 CONTINUE C H = F / (H + H) C .......... FORM Q .......... DO 250 J = 1, L E(J) = E(J) - H * D(J) 250 CONTINUE C .......... FORM REDUCED A .......... DO 280 J = 1, L F = D(J) G = E(J) C DO 260 K = J, L A(K,J) = A(K,J) - F * E(K) - G * D(K) 260 CONTINUE C 280 CONTINUE C 285 DO 290 J = 1, L F = D(J) D(J) = A(L,J) A(L,J) = A(I,J) A(I,J) = F * SCALE 290 CONTINUE C 300 CONTINUE C RETURN END SUBROUTINE VTRED2(NM,N,A,D,E,Z) C INTEGER I,J,K,L,N,II,NM,JP1 DOUBLE PRECISION A(NM,N),D(N),E(N),Z(NM,N) DOUBLE PRECISION F,G,H,HH,SCALE C C THIS SUBROUTINE IS A TRANSLATION OF THE ALGOL PROCEDURE TRED2, C NUM. MATH. 11, 181-195(1968) BY MARTIN, REINSCH, AND WILKINSON. C HANDBOOK FOR AUTO. COMP., VOL.II-LINEAR ALGEBRA, 212-226(1971). C C THIS SUBROUTINE REDUCES A REAL SYMMETRIC MATRIX TO A C SYMMETRIC TRIDIAGONAL MATRIX USING AND ACCUMULATING C ORTHOGONAL SIMILARITY TRANSFORMATIONS. 
C C ON INPUT C C NM MUST BE SET TO THE ROW DIMENSION OF TWO-DIMENSIONAL C ARRAY PARAMETERS AS DECLARED IN THE CALLING PROGRAM C DIMENSION STATEMENT. C C N IS THE ORDER OF THE MATRIX. C C A CONTAINS THE REAL SYMMETRIC INPUT MATRIX. ONLY THE C LOWER TRIANGLE OF THE MATRIX NEED BE SUPPLIED. C C ON OUTPUT C C D CONTAINS THE DIAGONAL ELEMENTS OF THE TRIDIAGONAL MATRIX. C C E CONTAINS THE SUBDIAGONAL ELEMENTS OF THE TRIDIAGONAL C MATRIX IN ITS LAST N-1 POSITIONS. E(1) IS SET TO ZERO. C C Z CONTAINS THE ORTHOGONAL TRANSFORMATION MATRIX C PRODUCED IN THE REDUCTION. C C A AND Z MAY COINCIDE. IF DISTINCT, A IS UNALTERED. C C QUESTIONS AND COMMENTS SHOULD BE DIRECTED TO BURTON S. GARBOW, C MATHEMATICS AND COMPUTER SCIENCE DIV, ARGONNE NATIONAL LABORATORY C C THIS VERSION DATED AUGUST 1983. C C ------------------------------------------------------------------ C DO 100 I = 1, N C DO 80 J = I, N Z(J,I) = A(J,I) 80 CONTINUE C D(I) = A(N,I) 100 CONTINUE C IF (N .EQ. 1) GO TO 510 C .......... FOR I=N STEP -1 UNTIL 2 DO -- .......... DO 300 II = 2, N I = N + 2 - II L = I - 1 H = 0.0D0 SCALE = 0.0D0 IF (L .LT. 2) GO TO 130 C .......... SCALE ROW (ALGOL TOL THEN NOT NEEDED) .......... DO 120 K = 1, L SCALE = SCALE + DABS(D(K)) 120 CONTINUE C IF (SCALE .NE. 0.0D0) GO TO 140 130 E(I) = D(L) C DO 135 J = 1, L D(J) = Z(L,J) Z(I,J) = 0.0D0 Z(J,I) = 0.0D0 135 CONTINUE C GO TO 290 C 140 DO 150 K = 1, L D(K) = D(K) / SCALE H = H + D(K) * D(K) 150 CONTINUE C F = D(L) G = -DSIGN(DSQRT(H),F) E(I) = SCALE * G H = H - F * G D(L) = F - G C .......... FORM A*U .......... DO 170 J = 1, L E(J) = 0.0D0 170 CONTINUE C DO 240 J = 1, L F = D(J) Z(J,I) = F G = E(J) + Z(J,J) * F JP1 = J + 1 IF (L .LT. JP1) GO TO 220 C DO 200 K = JP1, L G = G + Z(K,J) * D(K) E(K) = E(K) + Z(K,J) * F 200 CONTINUE C 220 E(J) = G 240 CONTINUE C .......... FORM P .......... F = 0.0D0 C DO 245 J = 1, L E(J) = E(J) / H F = F + E(J) * D(J) 245 CONTINUE C HH = F / (H + H) C .......... FORM Q .......... DO 250 J = 1, L E(J) = E(J) - HH * D(J) 250 CONTINUE C .......... FORM REDUCED A .......... DO 280 J = 1, L F = D(J) G = E(J) C DO 260 K = J, L Z(K,J) = Z(K,J) - F * E(K) - G * D(K) 260 CONTINUE C D(J) = Z(L,J) Z(I,J) = 0.0D0 280 CONTINUE C 290 D(I) = H 300 CONTINUE C .......... ACCUMULATION OF TRANSFORMATION MATRICES .......... DO 500 I = 2, N L = I - 1 Z(N,L) = Z(L,L) Z(L,L) = 1.0D0 H = D(I) IF (H .EQ. 0.0D0) GO TO 380 C DO 330 K = 1, L D(K) = Z(K,I) / H 330 CONTINUE C DO 3600 J = 1, L c 20161111; originally was: c DO 360 J = 1, L G = 0.0D0 C DO 340 K = 1, L G = G + Z(K,I) * Z(K,J) 340 CONTINUE C DO 360 K = 1, L Z(K,J) = Z(K,J) - G * D(K) 360 CONTINUE 3600 CONTINUE C 380 DO 400 K = 1, L Z(K,I) = 0.0D0 400 CONTINUE C 500 CONTINUE C 510 DO 520 I = 1, N D(I) = Z(N,I) Z(N,I) = 0.0D0 520 CONTINUE C Z(N,N) = 1.0D0 E(1) = 0.0D0 RETURN END DOUBLE PRECISION FUNCTION EPSLO9(X) DOUBLE PRECISION X C C ESTIMATE UNIT ROUNDOFF IN QUANTITIES OF SIZE X. C DOUBLE PRECISION A,B,C,EPS C C THIS PROGRAM SHOULD FUNCTION PROPERLY ON ALL SYSTEMS C SATISFYING THE FOLLOWING TWO ASSUMPTIONS, C 1. THE BASE USED IN REPRESENTING FLOATING POINT C NUMBERS IS NOT A POWER OF THREE. C 2. THE QUANTITY A IN STATEMENT 10 IS REPRESENTED TO C THE ACCURACY USED IN FLOATING POINT VARIABLES C THAT ARE STORED IN MEMORY. C THE STATEMENT NUMBER 10 AND THE GO TO 10 ARE INTENDED TO C FORCE OPTIMIZING COMPILERS TO GENERATE CODE SATISFYING C ASSUMPTION 2. 
C UNDER THESE ASSUMPTIONS, IT SHOULD BE TRUE THAT, C A IS NOT EXACTLY EQUAL TO FOUR-THIRDS, C B HAS A ZERO FOR ITS LAST BIT OR DIGIT, C C IS NOT EXACTLY EQUAL TO ONE, C EPS MEASURES THE SEPARATION OF 1.0 FROM C THE NEXT LARGER FLOATING POINT NUMBER. C THE DEVELOPERS OF EISPACK WOULD APPRECIATE BEING INFORMED C ABOUT ANY SYSTEMS WHERE THESE ASSUMPTIONS DO NOT HOLD. C C THIS VERSION DATED 4/6/83. C A = 4.0D0/3.0D0 10 B = A - 1.0D0 C = B + B + B EPS = DABS(C-1.0D0) IF (EPS .EQ. 0.0D0) GO TO 10 EPSLO9 = EPS*DABS(X) RETURN END DOUBLE PRECISION FUNCTION PYTHA9(A,B) DOUBLE PRECISION A,B C C FINDS DSQRT(A**2+B**2) WITHOUT OVERFLOW OR DESTRUCTIVE UNDERFLOW C DOUBLE PRECISION P,R,S,T,U P = DMAX1(DABS(A),DABS(B)) IF (P .EQ. 0.0D0) GO TO 20 R = (DMIN1(DABS(A),DABS(B))/P)**2 10 CONTINUE T = 4.0D0 + R IF (T .EQ. 4.0D0) GO TO 20 S = R/T U = 1.0D0 + 2.0D0*S P = U*P R = (S/U)**2 * R GO TO 10 20 PYTHA9 = P RETURN END VGAM/NAMESPACE0000644000176200001440000004674413135276760012254 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. useDynLib(VGAM, .registration = TRUE) export(lrp, lrp.vglm) export(retain.col, d3theta.deta3) export(ghn100, ghw100) export(hdeff, hdeff.vglm) export(calibrate.rrvglm.control, calibrate.rrvglm) importFrom("utils", "tail") importFrom("stats", ".nknots.smspl") export(sm.os) export(label.cols.y) export(prob.munb.size.VGAM) export(negbinomial.initialize.yj) export(mroot2) export(psint) export(psintpvgam) export(startstoppvgam) export(summarypvgam, show.summary.pvgam) S3method(df.residual, pvgam, df.residual_pvgam) export(df.residual_pvgam) exportMethods(endf) export(endfpvgam) export(vcov.pvgam) S3method(vcov, pvgam, vcovpvgam) export(show.pvgam) importFrom("graphics", "polygon") export(model.matrixpvgam) S3method(model.matrix, pvgam, model.matrixpvgam) importFrom("stats", "ppoints") export(doazeta, poazeta, qoazeta, roazeta, oazeta) export(doapospois, poapospois, qoapospois, roapospois, oapospoisson) export(doalog, poalog, qoalog, roalog, oalog) export(ddiffzeta, pdiffzeta, qdiffzeta, rdiffzeta, diffzeta) export(dotzeta, potzeta, qotzeta, rotzeta, otzeta) export(dotpospois, potpospois, qotpospois, rotpospois, otpospoisson) export(dotlog, potlog, qotlog, rotlog, otlog) export(doilog, poilog, qoilog, roilog, oilog) export(doizipf, poizipf, qoizipf, roizipf, oizipf) export(gharmonic, gharmonic2) export(pzeta, qzeta, rzeta) export(qzipf) export(doizeta, poizeta, qoizeta, roizeta, oizeta) export(bisection.basic) export(Zeta.aux, deflat.limit.oizeta) export(topple, dtopple, ptopple, qtopple, rtopple) export(oiposbinomial, doiposbinom, poiposbinom, qoiposbinom, roiposbinom) export(doipospois, poipospois, qoipospois, roipospois, oipospoisson) export(deflat.limit.oipospois) export(zoabetaR) export(sm.ps, get.X.VLM.aug, psv2magic) export(checkwz) export(process.constraints) export(mux22, mux111) importFrom("splines", "splineDesign") export(AR1EIM) export(AR1.gammas) importFrom("stats", "cov") export(as.char.expression) export(predictvglmS4VGAM) export(EIM.posNB.speciald, EIM.NB.speciald, EIM.posNB.specialp, EIM.NB.specialp) export(.min.criterion.VGAM) export(pzoibetabinom, pzoibetabinom.ab, rzoibetabinom, rzoibetabinom.ab, dzoibetabinom, dzoibetabinom.ab, Init.mu) export(log1mexp) export(dzoabeta, pzoabeta, qzoabeta, rzoabeta) export(logitoffsetlink) export(showvglmS4VGAM) export(showvgamS4VGAM) export(subsetarray3) export(tapplymat1) export(findFirstMethod) export(summaryvglmS4VGAM) export(showsummaryvglmS4VGAM) 
S3method(vcov, vlm, vcovvlm) S3method(coef, vlm, coefvlm) S3method(df.residual, vlm, df.residual_vlm) S3method(model.matrix, vlm, model.matrixvlm) S3method(formula, vlm, formulavlm) export(vcov.vlm, coef.vlm, formula.vlm, model.matrix.vlm) export(has.interceptvlm) exportMethods(has.intercept) export(term.namesvlm) exportMethods(term.names) export(responseNamevlm) exportMethods(responseName) importFrom("grDevices", "chull") importFrom("graphics", "abline", "arrows", "axis", "lines", "matlines", "matplot", "matpoints", "mtext", "par", "points", "rug", "segments", "text") importFrom("methods", "as", "is", "new", "slot", "slot<-", "slotNames", "callNextMethod", "existsMethod", "signature", "show") importFrom("stats", ".getXlevels", "as.formula", "contrasts<-", "dbeta", "dbinom", "delete.response", "deriv3", "dgamma", "dgeom", "dnbinom", "dt", "dweibull", "getCall", "integrate", "is.empty.model", "lm.fit", "median", "model.offset", "model.response", "model.weights", "na.fail", "napredict", "optim", "pbeta", "pbinom", "pgamma", "pgeom", "pnbinom", "polym", "printCoefmat", "plogis", "qlogis", "pweibull", "qbeta", "qbinom", "qchisq", "qf", "qgamma", "qgeom", "qnbinom", "qt", "quantile", "qweibull", "rbeta", "rbinom", "rgamma", "rgeom", "rlnorm", "rlogis", "rnbinom", "runif", "rweibull", "sd", "spline", "terms.formula", "time", "uniroot", "update.formula", "var", "weighted.mean") importFrom("utils", "flush.console", "getS3method", "head") importFrom("stats4", profile) # For S4, not S3 export(profilevglm) # For S4, not S3 importFrom("stats", "approx") export(vplot.profile) export(vpairs.profile) importFrom("grDevices", "dev.flush", "dev.hold") importFrom("graphics", "frame") importFrom("stats4", confint) # For S4, not S3 export(confintvglm) # For S4, not S3 export(confintrrvglm) # For S4, not S3 export(confintvgam) # For S4, not S3 exportMethods(confint) # For S4, not S3 export(dgenpois) export(AR1) export(dAR1) export(param.names) export(is.buggy.vlm) exportMethods(is.buggy) importFrom("splines", splineDesign, bs, ns) export(nparam, nparam.vlm, nparam.vgam, nparam.rrvglm, nparam.qrrvglm, nparam.rrvgam) export(linkfun, linkfun.vglm) export(sm.bs, sm.ns, sm.scale.default, sm.poly, sm.scale) exportMethods(coefficients, coef) importFrom("stats", coefficients, coef) export(case.names, coef, coefficients, df.residual, fitted, fitted.values, formula, residuals, variable.names, weights) export(expected.betabin.ab, grid.search, grid.search2, grid.search3, grid.search4) exportMethods(QR.Q, QR.R) export(QR.Q, QR.R) export(Select, subsetcol) export(simulate.vlm) importFrom("stats", simulate) export(familyname.vlm) export(familyname.vglmff) exportMethods(familyname) export(logLik.qrrvglm) importFrom("stats4", BIC) exportMethods(BIC) export(BICvlm) export(check.omit.constant) export(I.col) export(dbiclaytoncop, rbiclaytoncop, biclaytoncop) export(bistudentt, dbistudentt) export(dbinormcop, pbinormcop, rbinormcop, binormalcop) export(kendall.tau) export(expint, expexpint, expint.E1) export(pgamma.deriv, pgamma.deriv.unscaled, truncweibull) export(binom2.rho.ss) export(arwz2wz) export(link2list) export(multilogit) export(perks, dperks, pperks, qperks, rperks) export(gumbelII, dgumbelII, pgumbelII, qgumbelII, rgumbelII) export(makeham, dmakeham, pmakeham, qmakeham, rmakeham) export(gompertz, dgompertz, pgompertz, qgompertz, rgompertz) export(lindley, dlind, plind, rlind) export(w.wz.merge, w.y.check, vweighted.mean.default) export(is.parallel.matrix, is.parallel.vglm, is.zero.matrix, is.zero.vglm) 
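# Note (comment only, since NAMESPACE files may contain just directives and
# '#' comments): the S3method() registrations above mean that, for a fitted
# "vlm"/"vglm" object, e.g.
#   fit <- vglm(y1 ~ x2, poissonff, data = pdata)   # 'pdata' is hypothetical
# the calls coef(fit), vcov(fit), model.matrix(fit) and formula(fit)
# dispatch to coefvlm(), vcovvlm(), model.matrixvlm() and formulavlm().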
exportMethods(is.parallel, is.zero) export(nvar_vlm) importFrom("stats4", nobs) exportMethods(nobs) importFrom("stats4", AIC, coef, summary, plot, logLik, vcov) exportMethods(AIC, AICc, coef, summary, plot, logLik, vcov) export(npred, npred.vlm) exportMethods(npred) export(hatvalues, hatvaluesvlm) exportMethods(hatvalues) importFrom("stats", hatvalues) importFrom("stats", dfbeta) # Added 20140509 export(dfbeta, dfbetavlm) exportMethods(dfbeta) export(hatplot, hatplot.vlm) exportMethods(hatplot) export(VGAMenv) export(lrtest, lrtest_vglm) export(update_default, update_formula) export(nvar, nvar.vlm, nvar.vgam, nvar.rrvglm, nvar.qrrvglm, nvar.rrvgam, nvar.rcim) export( nobs.vlm) export(plota21) export(Confint.rrnb, Confint.nb1) export(vcovrrvglm) export(posbernoulli.b, posbernoulli.t, posbernoulli.tb, aux.posbernoulli.t) export(N.hat.posbernoulli) export(dposbern, rposbern) export(is.empty.list) export( Build.terms.vlm, interleave.VGAM, interleave.cmat, procVec, eijfun, ResSS.vgam, valt.control, trivial.constraints, vcontrol.expression, vplot, vplot.default, vplot.factor, vplot.list, vplot.matrix, vplot.numeric, vvplot.factor) export( m2a,a2m,vforsub, vbacksub, vchol) export( case.namesvlm, variable.namesvlm ) export(expgeometric, dexpgeom, pexpgeom, qexpgeom, rexpgeom, genrayleigh, dgenray, pgenray, qgenray, rgenray, exppoisson, dexppois, pexppois, qexppois, rexppois, explogff, dexplog, pexplog, qexplog, rexplog) export(Rcim, plotrcim0, rcim, summaryrcim) export(moffset) export(plotqvar, qvplot, Qvar, qvar) export(depvar, depvar.vlm) export(put.caption) export( cm.VGAM, cm.nointercept.VGAM, cm.zero.VGAM, Deviance.categorical.data.vgam, lm2qrrvlm.model.matrix, vlabel, dimm) export(is.smart, smart.mode.is, wrapup.smart, setup.smart, sm.min1, sm.min2) export( smart.expression, get.smart, get.smart.prediction, put.smart) export(dbinorm, pbinorm, rbinorm, binormal) export(pnorm2, dnorm2) export(iam, fill, fill1, fill2, fill3, biamhcop, dbiamhcop, pbiamhcop, rbiamhcop, bigamma.mckay, freund61, frechet, dfrechet, pfrechet, qfrechet, rfrechet, bifrankcop, dbifrankcop, pbifrankcop, rbifrankcop, biplackettcop, dbiplackcop, pbiplackcop, rbiplackcop, benini1, dbenini, pbenini, qbenini, rbenini, maxwell, dmaxwell, pmaxwell, qmaxwell, rmaxwell, bifgmexp, bifgmcop, dbifgmcop, pbifgmcop, rbifgmcop, bigumbelIexp, erf, erfc, lerch, lambertW, log1pexp, truncpareto, dtruncpareto, qtruncpareto, rtruncpareto, ptruncpareto, paretoff, dpareto, qpareto, rpareto, ppareto, paretoIV, dparetoIV, qparetoIV, rparetoIV, pparetoIV, paretoIII, dparetoIII, qparetoIII, rparetoIII, pparetoIII, paretoII, dparetoII, qparetoII, rparetoII, pparetoII, dparetoI, qparetoI, rparetoI, pparetoI, cens.gumbel, gumbelff, gumbel, dgumbel, pgumbel, qgumbel, rgumbel, foldnormal, dfoldnorm, pfoldnorm, qfoldnorm, rfoldnorm, cennormal, cens.normal, double.cens.normal, rec.normal, rec.normal.control, rec.exp1, rec.exp1.control, cens.rayleigh, rayleigh, drayleigh, prayleigh, qrayleigh, rrayleigh, drice, price, qrice, rrice, riceff, marcumQ, dskellam, rskellam, skellam, inv.gaussianff, dinv.gaussian, pinv.gaussian, rinv.gaussian, waldff, expexpff1, expexpff) export( AICvlm, AICvgam, AICrrvglm, AICqrrvglm, # AICvglm, anova.vgam, anova.vglm, bisa, dbisa, pbisa, qbisa, rbisa, betabinomialff, betabinomial, double.expbinomial, dbetabinom, pbetabinom, rbetabinom, dbetabinom.ab, pbetabinom.ab, rbetabinom.ab, biplot.qrrvglm, dbort, rbort, borel.tanner, care.exp, cauchy, cauchy1, concoef.rrvgam, concoef.Coef.rrvgam, concoef.Coef.qrrvglm, concoef.qrrvglm, 
cdf, cdf.lms.bcg, cdf.lms.bcn, cdf.lms.yjn, cdf.vglm, Coef.rrvgam, Coefficients, coefqrrvglm, coefvlm, coefvgam, coefvsmooth.spline, coefvsmooth.spline.fit, constraints, constraints.vlm, deplot, deplot.default, deplot.lms.bcg, deplot.lms.bcn, deplot.lms.yjn, deplot.lms.yjn2, deplot.vglm, deviance.vlm, deviance.qrrvglm, df.residual_vlm, dirmultinomial, dirmul.old, dtheta.deta, d2theta.deta2) S3method(anova, vgam) S3method(anova, vglm) S3method(as.character, SurvS4) S3method(biplot, qrrvglm) S3method(biplot, rrvglm) S3method(deviance, qrrvglm) S3method(deviance, vlm) S3method(logLik, qrrvglm) S3method(logLik, vlm) S3method(model.matrix, qrrvglm) S3method(nobs, vlm) S3method(persp, rrvgam) S3method(plot, rrvgam) S3method(plot, vgam) S3method(predict, rrvgam) S3method(predict, rrvglm) S3method(predict, vgam) S3method(predict, vlm) S3method(simulate, vlm) S3method(sm.scale, default) S3method(summary, grc) S3method(summary, qrrvglm) S3method(summary, rrvgam) S3method(summary, rrvglm) S3method(terms, vlm) export(cloglog,cauchit,extlogit,explink,fisherz,logc,loge,logneg,logit, logoff,negreciprocal, probit,reciprocal,rhobit, golf,polf,nbolf,nbolf2,Cut) export(ordpoisson) export(poisson.points, dpois.points) export( erlang, dfelix, felix, fittedvlm, fittedvsmooth.spline, foldsqrt, formulavlm, formulaNA.VGAM, garma, gaussianff, hypersecant, hypersecant01, hyperg, inv.binomial, InverseBrat, inverse.gaussianff, is.Numeric, mccullagh89, leipnik, dlevy, plevy, qlevy, rlevy, levy, lms.bcg.control, lms.bcn.control, lmscreg.control, lms.yjn.control, lms.bcg, lms.bcn, lms.yjn, lms.yjn2, dlms.bcn, qlms.bcn, lqnorm, dbilogis, pbilogis, rbilogis, bilogistic, logistic1, logistic, logLik.vlm, latvar.rrvgam, latvar.Coef.qrrvglm, latvar.rrvglm, latvar.qrrvglm, lvplot.rrvgam, Rank, Rank.rrvglm, Rank.qrrvglm, Rank.rrvgam, Max.Coef.qrrvglm, Max.qrrvglm, is.bell.vlm, is.bell.rrvglm, is.bell.qrrvglm, is.bell.rrvgam, is.bell, model.matrix.qrrvglm, model.matrixvlm, model.framevlm, nakagami, dnaka, pnaka, qnaka, rnaka, namesof, nlminbcontrol, negloge, Opt.Coef.qrrvglm, Opt.qrrvglm, persp.rrvgam) export( micmen ) export( plot.rrvgam, plotpreplotvgam, plotvglm, plotvlm, plotvsmooth.spline, powerlink, predict.rrvgam, predictrrvgam, predictors, predictors.vglm, predictqrrvglm, predict.rrvglm, predict.vgam, predictvglm, predict.vlm, predictvsmooth.spline, predictvsmooth.spline.fit, show.Coef.rrvgam, show.Coef.qrrvglm, show.Coef.rrvglm, show.rrvglm, show.summary.rrvgam, show.summary.qrrvglm, show.summary.rrvglm, show.summary.vgam, show.summary.vglm, show.summary.vlm, show.vanova, show.vgam, show.vglm, show.vlm, show.vglmff, show.vsmooth.spline, process.binomial2.data.VGAM, process.categorical.data.VGAM, negzero.expression.VGAM, qtplot, qtplot.default, qtplot.gumbel, qtplot.gumbelff, qtplot.lms.bcg, qtplot.lms.bcn, qtplot.lms.yjn, qtplot.lms.yjn2, qtplot.vextremes, qtplot.vglm, explot.lms.bcn, rlplot, rlplot.gevff, rlplot.gev, rlplot.vextremes, rlplot.vglm, rlplot, rlplot.vglm, rrar.control) export( SurvS4, is.SurvS4, as.character.SurvS4, show.SurvS4, simple.exponential, better.exponential, simple.poisson, seq2binomial, size.binomial, sm.scale1, sm.scale2, summary.rrvgam, summary.grc, summary.qrrvglm, summary.rrvglm, summaryvgam, summaryvglm, summaryvlm, s.vam, terms.vlm, termsvlm, Tol.Coef.qrrvglm, Tol.qrrvglm, triangle, dtriangle, ptriangle, qtriangle, rtriangle, valid.vknotl2, vcovvlm, vglm.fit, vgam.fit, vglm.garma.control, vglm.multinomial.control, vglm.multinomial.deviance.control, vglm.VGAMcategorical.control, vlm, 
vlm.control, vnonlinear.control, wweights, yeo.johnson, dzipf, pzipf, rzipf, zipf, zeta, zetaff, dzeta) export(lm2vlm.model.matrix) export(vlm2lm.model.matrix) importFrom("stats", model.matrix) importFrom("stats", model.frame) importFrom("stats", terms) importFrom("stats", resid) importFrom("stats", residuals) importFrom("stats", fitted) importFrom("stats", predict) importFrom("stats", df.residual) importFrom("stats", deviance) importFrom("stats", fitted.values) importFrom("stats", effects) importFrom("stats", weights) importFrom("stats", formula) importFrom("stats", case.names) importFrom("stats", variable.names) importFrom("stats", dchisq, pchisq, pf, dexp, rexp, dpois, ppois, qpois, rpois, dnorm, pnorm, qnorm, rnorm) importFrom("graphics", persp) export(ddagum, rdagum, qdagum, pdagum, dagum) export(dfisk, pfisk, qfisk, rfisk, fisk) export(dlomax, plomax, qlomax, rlomax, lomax) export(dinv.lomax, pinv.lomax, qinv.lomax, rinv.lomax, inv.lomax) export(dparalogistic, pparalogistic, qparalogistic, rparalogistic, paralogistic) export(dinv.paralogistic, pinv.paralogistic, qinv.paralogistic, rinv.paralogistic, inv.paralogistic) export(dsinmad, psinmad, qsinmad, rsinmad, sinmad) export(lognormal) export(dpolono, ppolono, rpolono) export(dgpd, pgpd, qgpd, rgpd, gpd) export(dgev, pgev, qgev, rgev, gev, gevff) export(dlaplace, plaplace, qlaplace, rlaplace, laplace) export(dalap, palap, qalap, ralap, alaplace1.control, alaplace2.control, alaplace3.control, alaplace1, alaplace2, alaplace3) export(dloglap, ploglap, qloglap, rloglap) export(loglaplace1.control, loglaplace1) export(dlogitlap, plogitlap, qlogitlap, rlogitlap, logitlaplace1.control, logitlaplace1) export(dprobitlap, pprobitlap, qprobitlap, rprobitlap) export(dclogloglap, pclogloglap, qclogloglap, rclogloglap) export(dcard, pcard, qcard, rcard, cardioid) export(fff, fff.control, mbesselI0, vonmises) export( AA.Aa.aa, AB.Ab.aB.ab, ABO, acat, betaR, betaff, dbetageom, pbetageom, rbetageom, betageometric, dbetanorm, pbetanorm, qbetanorm, rbetanorm, # betanorm, betaprime, betaII, zipebcom, binom2.or, dbinom2.or, rbinom2.or, binom2.rho, dbinom2.rho, rbinom2.rho, binom2.Rho, binomialff, biplot.rrvglm, brat, bratt, Brat, calibrate.qrrvglm.control, calibrate.qrrvglm, calibrate, cao.control, cao, cdf.lmscreg, cgo, chisq, clo, concoef, Coef, Coef.qrrvglm, Coef.rrvglm, Coef.vlm, predictqrrvglm, cratio, cumulative, propodds, prplot, prplot.control) export( deplot.lmscreg, dirichlet, exponential, A1A2A3) export( lgamma1, lgamma3) export( gammahyperbola, gengamma.stacy, gamma1, gamma2, gammaR, gammaff) export(dlgamma, plgamma, qlgamma, rlgamma) export(dgengamma.stacy, pgengamma.stacy, qgengamma.stacy, rgengamma.stacy) export( dbenf, pbenf, qbenf, rbenf, genbetaII.Loglikfun4, genbetaII, dgenbetaII, genpoisson, geometric, truncgeometric, dlino, plino, qlino, rlino, lino, grc, dhzeta, phzeta, qhzeta, rhzeta, hzeta, negidentity, identitylink, dprentice74, prentice74, amlnormal, amlbinomial, amlexponential, amlpoisson, Wr1, Wr2, dkumar, pkumar, qkumar, rkumar, kumar, dyules, pyules, qyules, ryules, yulesimon, logff, dlog, plog, qlog, rlog, logF, dlogF, loglinb2, loglinb3, loglog, lvplot.qrrvglm, lvplot.rrvglm, Max, MNSs, dmultinomial, multinomial, margeffS4VGAM, cratio.derivs, margeff) export( huber2, huber1, dhuber, edhuber, phuber, qhuber, rhuber) export( slash, dslash, pslash, rslash) export( deunif, peunif, qeunif, reunif, denorm, penorm, qenorm, renorm, sc.studentt2, dsc.t2, psc.t2, qsc.t2, rsc.t2, deexp, peexp, qeexp, reexp) export( meplot, 
meplot.default, meplot.vlm, guplot, guplot.default, guplot.vlm, posNBD.Loglikfun2, NBD.Loglikfun2, negbinomial, negbinomial.size, polya, polyaR, uninormal, SURff, normal.vcm, nbcanlink, tobit, dtobit, ptobit, qtobit, rtobit, Opt, perspqrrvglm, plotdeplot.lmscreg, plotqrrvglm, plotqtplot.lmscreg, plotvgam.control, plotvgam, plot.vgam, cens.poisson, poissonff, dposbinom, pposbinom, qposbinom, rposbinom, posbinomial, dposgeom, pposgeom, qposgeom, rposgeom, # posgeometric, dposnegbin, pposnegbin, qposnegbin, rposnegbin, posnegbinomial, dposnorm, pposnorm, qposnorm, rposnorm, posnormal, dpospois, ppospois, qpospois, rpospois, pospoisson, qtplot.lmscreg, quasibinomialff, quasipoissonff, rdiric, rigff, rrar, rrvglm.control, rrvglm.optim.control) export(eta2theta, theta2eta, rrvglm, simplex, dsimplex, rsimplex, sratio, s, studentt, studentt2, studentt3, Kayfun.studentt, Tol, trplot.qrrvglm, trplot, rcqo, cqo, qrrvglm.control, vgam.control, vgam, vglm.control, vglm, vsmooth.spline, weibull.mean, weibullR, yip88, dzabinom, pzabinom, qzabinom, rzabinom, zabinomial, zabinomialff, dzageom, pzageom, qzageom, rzageom, zageometric, zageometricff, dzanegbin, pzanegbin, qzanegbin, rzanegbin, zanegbinomial, zanegbinomialff, dzapois, pzapois, qzapois, rzapois, zapoisson, zapoissonff, dzibinom, pzibinom, qzibinom, rzibinom, zibinomial, zibinomialff, dzigeom, pzigeom, qzigeom, rzigeom, zigeometric, zigeometricff, dzinegbin, pzinegbin, qzinegbin, rzinegbin, zinegbinomial, zinegbinomialff, dzipois, pzipois, qzipois, rzipois, zipoisson, zipoissonff, mix2exp, mix2normal, mix2poisson, mix2exp.control, mix2normal.control, mix2poisson.control, skewnormal, dskewnorm, rskewnorm, tikuv, dtikuv, ptikuv, qtikuv, rtikuv) exportClasses(vglmff, vlm, vglm, vgam, rrvglm, qrrvglm, grc, rcim, vlmsmall, rrvgam, summary.vgam, summary.vglm, summary.vlm, summary.qrrvglm, summary.rrvgam, summary.rrvglm, Coef.rrvglm, Coef.qrrvglm, Coef.rrvgam, vcov.qrrvglm, vsmooth.spline.fit, vsmooth.spline) exportClasses(SurvS4) exportMethods( Coef, coefficients, constraints, effects, predict, fitted, fitted.values, resid, residuals, show, terms, model.frame, model.matrix, summary, coef, AIC, AICc, plot, logLik, vcov, deviance, calibrate, cdf, df.residual, lv, latvar, Max, Opt, Tol, biplot, deplot, lvplot, qtplot, rlplot, meplot, trplot, vplot, formula, case.names, variable.names, weights, persp) exportMethods(AIC, AICc, coef, summary, plot, logLik, vcov) VGAM/demo/0000755000176200001440000000000013135276753011744 5ustar liggesusersVGAM/demo/distributions.R0000755000176200001440000000257213135276753015002 0ustar liggesusers# Demo for the maximum likelihood estimation of parameters from # some selected distributions # At the moment this is copied from some .Rd file ## Negative binomial distribution ## Data from Bliss and Fisher (1953). 
appletree <- data.frame(y = 0:7, w = c(70, 38, 17, 10, 9, 3, 2, 1))
fit <- vglm(y ~ 1, negbinomial(deviance = TRUE), data = appletree,
            weights = w, crit = "coef", half.step = FALSE)
summary(fit)
coef(fit, matrix = TRUE)
Coef(fit)
deviance(fit)  # NB2 only; needs 'crit = "coef"' & 'deviance = TRUE' above

## Beta distribution

set.seed(123)
bdata <- data.frame(y = rbeta(nn <- 1000, shape1 = exp(0), shape2 = exp(1)))
fit1 <- vglm(y ~ 1, betaff, data = bdata, trace = TRUE)
coef(fit1, matrix = TRUE)
Coef(fit1)  # Useful for intercept-only models

# General A and B, and with a covariate
bdata <- transform(bdata, x2 = runif(nn))
bdata <- transform(bdata, mu = logit(0.5 - x2, inverse = TRUE),
                   prec = exp(3.0 + x2))  # prec == phi
bdata <- transform(bdata, shape2 = prec * (1 - mu), shape1 = mu * prec)
bdata <- transform(bdata, y = rbeta(nn, shape1 = shape1, shape2 = shape2))
bdata <- transform(bdata, Y = 5 + 8 * y)  # From 5 to 13, not 0 to 1
fit2 <- vglm(Y ~ x2, data = bdata, trace = TRUE,
             betaff(A = 5, B = 13, lmu = extlogit(min = 5, max = 13)))
coef(fit2, matrix = TRUE)
VGAM/demo/vgam.R0000755000176200001440000000070513135276753013026 0ustar liggesusers# Demo for vgam

data(hunua, package = "VGAM")
fit.h <- vgam(agaaus ~ s(altitude), binomialff, data = hunua)
plot(fit.h, se = TRUE, lcol = "blue", scol = "orange",
     llwd = 2, slwd = 2, las = 1)

nn <- nrow(hunua)
ooo <- with(hunua, order(altitude))
with(hunua, plot(altitude[ooo], fitted(fit.h)[ooo], type = "l",
                 ylim = 0:1, lwd = 2, col = "blue", las = 1))
points(agaaus + (runif(nn)-0.5)/30 ~ altitude, hunua, col = "orange")
VGAM/demo/cqo.R0000755000176200001440000000720113135276753012654 0ustar liggesusers# Demo for constrained quadratic ordination (CQO; aka
# canonical Gaussian ordination)

data(hspider, package = "VGAM")
hspider[, 1:6] <- scale(hspider[, 1:6])  # standardize environmental vars

## Rank-1 model (unequal tolerances, deviance = 1176.0)
set.seed(123)
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi,
                Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          quasipoissonff, data = hspider, Bestof = 10,
          Crow1positive = FALSE, eq.tolerances = FALSE, I.tolerances = FALSE)
par(mfrow = c(3, 3))
lvplot(p1, lcol = 1:12, llwd = 2, llty = 1:12, y = TRUE, pch = 1:12,
       pcol = 1:12, las = 1, main = "Hunting spider data")
print(concoef(p1), digits = 3)
print(Coef(p1), digits = 3)

# trajectory plot
trplot(p1, which = 1:3, log = "xy", type = "b", lty = 1,
       col = c("blue", "orange", "green"), lwd = 2, label = TRUE) -> ii
legend(0.00005, 0.3, paste(ii$species[, 1], ii$species[, 2], sep = " and "),
       lwd = 2, lty = 1, col = c("blue", "orange", "green"))
abline(a = 0, b = 1, lty = "dashed")

## Rank-2 model (equal tolerances, deviance = 856.5)
set.seed(111)
r2 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi,
                Pardmont, Pardnigr, Pardpull, Trocterr) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          quasipoissonff, data = hspider, Rank = 2, Bestof = 10,
          I.tolerances = TRUE, eq.tolerances = TRUE,
          Crow1positive = c(FALSE, FALSE))
print(concoef(r2), digits = 3)
print(Coef(r2), digits = 3)
clr <- (1:(10+1))[-7]  # Omit yellow colour
adj <- c(-0.1, -0.1, -0.1, 1.1, 1.1, 1.1, -0.1, -0.1, -0.1, 1.1)
# With C arrows
lvplot(r2, label = TRUE, xlim = c(-2.8, 5.0), ellipse = FALSE, C = TRUE,
       Cadj = c(1.1, -0.1, 1.2, 1.1, 1.1, -0.1), adj = adj, las = 1,
       chull = TRUE, pch = "+", pcol = clr, sites = TRUE)
# With circular contours
lvplot(r2, label = TRUE, xlim = c
c(-2.8, 5.0), ellipse = TRUE, C = FALSE, Cadj = c(1.1, -0.1, 1.2, 1.1, 1.1, -0.1), adj = adj, las = 1, chull = TRUE, pch = "+", pcol = clr, sites = TRUE) # With neither C arrows or circular contours lvplot(r2, label = TRUE, xlim = c(-2.8, 5.0), ellipse = FALSE, C = FALSE, Cadj = c(1.1, -0.1, 1.2, 1.1, 1.1, -0.1), adj = adj, las = 1, chull = TRUE, pch = "+", pcol = clr, sites = TRUE) # Perspective plot persp(r2, xlim = c(-5, 5), ylim = c(-3, 6), theta = 50, phi = 20) ## Gaussian logit regression ## Not recommended actually because the number of sites is far too low. ## Deviance = 154.6, equal tolerances. ybin <- with(hspider, 0 + (cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) > 0)) # Matrix of 0s and 1s colnames(ybin) <- paste0(colnames(ybin), ".01") hspider <- data.frame(hspider, ybin) set.seed(1312) b1 <- cqo(ybin[, -c(1, 5)] ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, quasibinomialff(mv = TRUE), Bestof = 4, I.tolerances = TRUE, data = hspider, eq.tolerances = TRUE, Crow1positive = FALSE) lvplot(b1, type = "predictors", llwd = 2, las = 1, ylab = "logit mu", ylim = c(-20, 11), lcol = 1:10) c1 <- Coef(b1) cts <- c("Trocterr", "Pardmont", "Alopfabr", "Arctlute") text(c1@Optimum[1, cts], logit(c1@Maximum[cts])+1.0, cts) round(t(Coef(b1, I.tolerances = FALSE)@C), dig = 3) # On the probability scale lvplot(b1, type = "fitted", llwd = 2, las = 1, llty = 1, ylab = "Probability of presence", ylim = c(0, 1), lcol = 1:10) VGAM/demo/00Index0000755000176200001440000000041413135276753013100 0ustar liggesusersbinom2.or Bivariate logistic model cqo Constrained auadratic ordination distributions Maximum likelihood estimation of some distributions lmsqreg LMS quantile regression vgam Vector generalized additive models zipoisson Zero inflated Poisson VGAM/demo/zipoisson.R0000755000176200001440000000217413135276753014133 0ustar liggesusers# Demo for Zero-Inflated Poisson set.seed(111) zdata <- data.frame(x2 = runif(nn <- 1000)) zdata <- transform(zdata, pstr01 = logit(-0.5 + 1*x2, inverse = TRUE), pstr02 = logit( 0.5 - 1*x2, inverse = TRUE), Ps01 = logit(-0.5 , inverse = TRUE), Ps02 = logit( 0.5 , inverse = TRUE), lambda1 = loge(-0.5 + 2*x2, inverse = TRUE), lambda2 = loge( 0.5 + 2*x2, inverse = TRUE)) zdata <- transform(zdata, y1 = rzipois(nn, lambda = lambda1, pstr0 = Ps01), y2 = rzipois(nn, lambda = lambda2, pstr0 = Ps02)) with(zdata, table(y1)) # Eyeball the data with(zdata, table(y2)) with(zdata, stem(y2)) fit1 <- vglm(y1 ~ x2, zipoisson(zero = 1), data = zdata, crit = "coef") fit2 <- vglm(y2 ~ x2, zipoisson(zero = 1), data = zdata, crit = "coef") coef(fit1, matrix = TRUE) # These should agree with the above values coef(fit2, matrix = TRUE) # These should agree with the above values head(fit1@misc$pobs0) # The estimate of P(Y=0) coef(fit1) coef(fit1, matrix=TRUE) Coef(fit1) VGAM/demo/binom2.or.R0000755000176200001440000000207013135276753013676 0ustar liggesusers# Demo for binom2.or data(hunua, package = "VGAM") Hunua <- hunua Hunua <- transform(Hunua, y00 = (1-agaaus) * (1-kniexc), y01 = (1-agaaus) * kniexc, y10 = agaaus * (1-kniexc), y11 = agaaus * kniexc) fit <- vgam(cbind(y00, y01, y10, y11) ~ s(altitude, df = c(4, 4, 2.5)), binom2.or(zero = NULL), data = Hunua) par(mfrow = c(2, 3)) plot(fit, se = TRUE, scol = "darkgreen", lcol = "blue") summary(fit) # Plot the marginal functions together mycols <- c("blue", "orange") plot(fit, which.cf = 1:2, lcol = mycols, scol = mycols, overlay = TRUE, se = TRUE, 
llwd = 2, slwd = 2) legend(x = 100, y = -4, leg = c("Agathis australis", "Knightia excelsa"), col = mycols, lty = 1) # Plot the odds ratio ooo <- order(fit@x[, 2]) plot(fit@x[ooo, 2], exp(predict(fit)[ooo, "log(oratio)"]), log = "y", xlab = "Altitude (m)", ylab = "Odds ratio (log scale)", col = "blue", type = "b", las = 1) abline(h = 1, lty = 2) # Denotes independence between species VGAM/demo/lmsqreg.R0000755000176200001440000000170613135276753013550 0ustar liggesusers# Demo for lmsqreg # At the moment this is copied from lms.bcn.Rd data(bmi.nz, package = "VGAM") fit <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), data = bmi.nz, trace = TRUE) head(predict(fit), 3) head(fitted(fit), 3) head(bmi.nz, 3) # Person 1 is near the lower quartile of BMI amongst people his age head(cdf(fit), 3) # Quantile plot par(bty = "l", mar = c(5, 4, 4, 3) + 0.1, xpd = TRUE, mfrow = c(1, 2)) qtplot(fit, percentiles = c(5, 50, 90, 99), main = "Quantiles", xlim = c(15, 90), las = 1, ylab = "BMI", lwd = 2, lcol = 4) # Density plot ygrid <- seq(15, 43, len = 100) # BMI ranges par(lwd = 2) aa <- deplot(fit, x0 = 20, y = ygrid, main = "Density functions at Age = 20, 42 and 55", xlab = "BMI") aa aa <- deplot(fit, x0 = 42, y = ygrid, add = TRUE, lty = 2, col = "orange") aa <- deplot(fit, x0 = 55, y = ygrid, add = TRUE, lty = 4, col = 4, Attach = TRUE) aa@post$deplot # Contains density function values VGAM/NEWS0000755000176200001440000027750013135276753011535 0ustar liggesusers ************************************************* * * * 1.0 SERIES NEWS * * * ************************************************** CHANGES IN VGAM VERSION 1.0-4 NEW FEATURES o This package is now Renjin compatible. o gengamma.stacy() handles multiple responses, and some of its arguments have had their default values changed. o calibrate() has a methods function for "rrvglm" objects. o "vglm" objects have a new "hadof" slot. o "vglm" objects will soon have a new "charfun" slot. o constraints(vglmfit, matrix = TRUE) is now embellished by rownames, by default. o New generic and methods function: hdeff(), lrp(). o Many link functions accommodate 'deriv = 3' and sometimes a bit higher. More safety when 'deriv' is assigned out-of-range values. o fitted() and predict() with lms.???() fits, such as lms.bcn(), now handles inputting new 'percentiles' values. Thanks to Sven Garbade for picking up this obvious deficiency. o New function(s): d3theta.deta3(). o Tested okay on R 3.4.1. This package now requires R 3.4.0 or higher (not R 3.1.0 as before). BUG FIXES and CHANGES o [dqpr]gengamma.stacy() have no default value for 'd' and 'k' args. o negbinomial(lmu = "nbcanlink") now works properly; thanks to Victor Miranda for much help and picking up 3 bugs. o perspqrrvglm() called @linkinv() without passing in @extra. o fisherz(theta, inverse = TRUE, deriv = 1) was out by a factor of 2. o multilogit() did not handle 1-row arguments correctly when 'inverse = TRUE' and 'refLevel' was neither 1 or M+1. o Several C and Fortran functions renamed, for R 3.4.0. CHANGES IN VGAM VERSION 1.0-3 NEW FEATURES o vgam() with sm.os() and sm.ps() terms allows G2-VGAMs to be fitted. o plotvgam() has a "shade" argument. o Almost all family functions have been "validparams"-enabled, for greater reliability. o confint() implements the profile likelihood method (in addition to the Wald method). 
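   For the confint() item above, a minimal sketch (not from the package
   documentation; the 'method' argument and its value "profile", as used by
   confintvglm(), are assumed here):

     set.seed(123)
     pdata <- data.frame(x2 = runif(50))
     pdata <- transform(pdata, y1 = rpois(50, lambda = exp(1 + x2)))
     fit <- vglm(y1 ~ x2, poissonff, data = pdata)
     confint(fit)                      # Wald intervals
     confint(fit, method = "profile")  # profile likelihood intervals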
o New family functions: diffzeta(dpqr), oilog(dpqr), oiposbinomial(dpqr), oizeta(dpqr), oizipf(dpqr), otlog(dpqr), otpospoisson(dpqr), otzeta(dpqr), oalog(dpqr), oapospoisson(dpqr), oazeta(dpqr), topple(dpqr).

o New functions: [pqr]zeta(), [qr]zipf().

o Argument 'zero' now accepts "" or NA and interprets these as NULL, i.e., no linear or additive predictors are intercept-only.

o Significance stars added to summary(rrvglm.object), for very crude inference.

o zeta() can return the Hurwitz zeta function, via the 'shift' argument.

o show.summary.vglm() will only print out any dispersion parameter that is not equal to 1.

o type.fitted = "quantiles" is available for gevff(), negbinomial(), poissonff().

o Tested okay on R 3.3.2.

BUG FIXES and CHANGES

o mills.ratio1() in tobit() did not handle very negative 'x' values correctly. Thanks to Christoph Nolte for detecting this.

o Renamed arguments: zetaff(d) uses 'shape', not 'p'.

o betabinomialff()@infos was buggy wrt 'lshape1' and 'lshape2'. Thanks to Xiaodong for detecting this.

o leipnik() uses logoff(offset = -1) as the default link for lambda now, not "loge".

o logff(dpqr) uses 'shape' instead of 'c' as the parameter name.

o yules(dpqr) uses 'shape' instead of 'rho' as the parameter name.

o hzeta(dpqr) uses 'shape' instead of 'alpha' as the parameter name.

o felix(dpqr) uses 'rate' instead of 'a' as the parameter name.

o dbetabinom.ab() handles large values of shape1 and shape2 better, via the dbinom() limit. Thanks to Micha Schneider for picking up the bug.

o [dpqr]posnegbin() have been improved a little.

o logLik(summation = TRUE): prior weights 'w' have been converted to a vector when passed in; this is likely to make it less likely to give an error.

o Labelling of the colnames of the fitted values has changed for many family functions, including those for multiple responses, e.g., gevff(). Also "mean" had a bug or two in gevff()@linkinv.

CHANGES IN VGAM VERSION 1.0-2

NEW FEATURES

o vglm.fit() has been simplified and handles half-stepping better.

o AR1() implements the EIM of Porat and Friedlander (1986); this is the work of Victor Miranda. It is specified by type.EIM = "exact" (the default).

o Function gevff() replaces egev(). It handles multiple responses like any other ordinary VGAM family function.

o A rudimentary plotvglm() plots the Pearson residuals, firstly versus the predicted values, and secondly, against the hat values.

o The 'refLevel' argument of multinomial() accepts a character string, e.g., multinomial(refLevel = "European") for xs.nz$ethnicity as a response. (See the sketch below.)

o New family functions: oipospoisson(dpqr), zoabetaR().

o New functions: grid.search[23](), [dpqr]oiposbinom().

o is.buggy() is called by vgam() immediately after estimation; it gives a warning if any constraint matrix corresponding to an s() term is not orthogonal.

BUG FIXES and CHANGES

o vglm.fit() did not handle half-stepping very well.

o Some families for counts (i.e., [pos,z[ai]]negbinomial[ff]()) have been "validparams"-enabled in order to make estimation near the boundary of the parameter space more stable, especially when a Poisson approximation is suitable.

o Other families that have been "validparams"-enabled: gev(), gpd().

o Actuarial or statistical size distributions families have been modified with respect to initial values, e.g., sinmad, dagum, [inv.]lomax, [inv.]paralogistic, [gen]betaII().

o rep_len() replaces rep() where possible.
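   For the 'refLevel' item under version 1.0-2 above, a minimal sketch
   (iris is a base-R data frame, used purely for illustration):

     fit <- vglm(Species ~ Sepal.Length,
                 multinomial(refLevel = "versicolor"), data = iris)
     head(fitted(fit), 3)  # the three class probabilities sum to 1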
o Function rzipois() may not have handled 0-deflation properly but it does so now. o Function plotvgam() had a bug testing for variable names when the xij facility was used. o multinomial() and multilogit() use "(Last)" to signify the last level of a factor; it used to be "last". o qposbinom() returned 0 (incorrect), and not 1 (correct), for p = 0. o zipoisson() and zipoissonff() no longer store fitted values such as pstr0 in the misc slot. They can be obtained by, e.g., fitted(fit, type.fitted = "pstr0"). o Renamed functions: egumbel() is now called gumbelff(). [dqpr]ozibeta() is now called [dqpr]zoabeta(). o Renamed parameter names: zetaff() uses 'shape', not 'p'. o qzibinom() did not handle arguments lower.tail and log.p correctly. o Tested okay on R 3.3.0. This package now requires R 3.1.0 or higher (not R 3.0.0 as before). CHANGES IN VGAM VERSION 1.0-1 NEW FEATURES o Argument 'zero' has been programmed to handle (a more intuitive) a character vector. Each value of this vector is fed into grep() with fixed = TRUE. Many VGAM family functions have an equivalent default character value of 'zero'. o New slots: "validparams" and "validfitted" for providing more opportunities for half-stepping. o The "infos" slot of most family functions have a component called "parameters.names", and also "Q1" and "M1". o margeff() works for cratio(), sratio() and acat() models, and is generic (with S4 dispatch). For this, "vcategorical" replaced by "VGAMcategorical", and "VGAMordinal" is also a virtual class. And margeffS4VGAM() is generic. o summaryvglm() calls the generic summaryvglmS4VGAM() in order to compute useful quantities, and it is printed by showsummaryvglmS4VGAM(). Specific examples include the binom2.or() and cumulative() families. o Similarly, show.vglm() calls the generic showvglmS4VGAM() in order to print extra potentially useful output. Ditto for , show.vgam() which calls showvgamS4VGAM(). o Similarly, predictvglm() calls the generic predictvglmS4VGAM() in order to allow for family-function-specific prediction. o logitoffsetlink() is new. o [dpqr]ozibeta() and [dpr]ozibetabinom() and [dpr]ozibetabinom.ab() are new; by Xiangjie Xue and Thomas Yee. o coef(..., type = c("linear", "nonlinear")) is available for "vgam" objects. o The following have new 'weights' slots (based on negbinomial()@weight): posnegbinomial(), zanegbinomial[ff](), zinegbinomial[ff](). It is based on the expectation of a difference between 2 trigamma function evaluations being computed using pnbinom(lower.tail = FALSE) and variants. Both functions have some argument defaults tweaked. o log1mexp() and log1pexp(), based on Martin Maechler's 2012 paper, is 'new'. o Many zero-altered and zero-inflated families have additional 'type.fitted' choices. Initial values for such families hav been improved (via Init.mu()). o expint(), expexpint(), expint.E1() allow the computation of the first few derivatives. o Tested okay on R 3.2.4. BUG FIXES and CHANGES o Order of arguments changed: binom2.rho(lmu, lrho), negbinomial(), posnegbinomial(), zanegbinomial(), zinegbinomial(). o pzanegbin() could return -.Machine$double.eps. Thanks to Ryan Thompson for notifying me about this. o pbinorm() used to have a bug wrt Inf and -Inf values in its arguments. Thanks to Xiangjie Xue for picking this up. o plota21() used qchisq(0.95, df = 1) instead of qchisq(0.95, df = 1) / 2 for LRT confidence intervals. Thanks to Russell Millar for picking this up. 
o A new function Init.mu() is used to initialize several family functions, especially those based on the negative binomial and Poisson distributions. The default for Init.mu() is suitable for 0-inflated data.

o The fitted value of polya() was wrong (wasn't the mean).

o Default value of argument 'zero' has changed for: bisa(), gumbelII().

o zibinomialff()@weight had a bug when calling iam().

o [dpqr]nbinom(..., size = Inf) was buggy; it produced many NaNs. Thanks to Martin Maechler for promptly fixing this, for R 3.2.4.

o The arguments of interleave.VGAM() have changed: from interleave.VGAM(L, M) to interleave.VGAM(.M, M1, inverse = FALSE). This is a compromise solution with respect to my book. The 'inverse' argument is due to Victor Miranda.

o summaryvglm() evidently evaluated the weights slot of an object twice. Now it is only done once.

CHANGES IN VGAM VERSION 1.0-0

NEW FEATURES

o Official version that goes with the just-released book "Vector Generalized Linear and Additive Models: With an Implementation in R" by T. W. Yee (2015), Springer: New York, USA.

o gengamma.stacy() implements a grid search wrt all its parameters.

o New functions: [dp]lms.bcn().

o New family function: weibull.mean().

o triangle.control() slows down the speed of the iterations towards the MLE, because the regularity conditions do not hold.

o New arguments: AR1(nodrift = FALSE).

o binormal() has arguments eq.mean and eq.sd which now operate independently.

o confint() should work for objects that are "vglm"s. Thanks to Tingting Zhan for suggesting this.

o Tested okay on R 3.2.2.

o Methods functions for responseName() and term.names() and has.intercept().

BUG FIXES and CHANGES

o Link functions have changed a lot!!! They return different values when deriv = 1 and deriv = 2, coupled with inverse = TRUE and inverse = FALSE. Type ?Links to see examples. The first derivatives become reciprocals of each other when inverse = TRUE and inverse = FALSE, however the 2nd derivatives are no longer reciprocals of each other. Also affected are dtheta.deta() and d2theta.deta2(), etc. (A small numerical check is given at the end of this version's entries.)

o 'show()' added to importMethods('methods') in NAMESPACE.

o The following currently do not work: golf(), nbolf(), polf().

o AA.Aa.aa() used the OIM and worked for intercept-only models, but now it uses the EIM.

o logneg("a", short = TRUE) has been modified.

o posnormal(): the first and second derivatives have been modified for both SFS and ordinary FS, and the default is zero = -2 now ('sd' is intercept-only). Several other improvements have been done.

o binomialff()@deviance is assigned all the time now.

o dbetabin.ab() better handles extremes in the shape parameters (very close to 0 and larger than 1e6, say). Thanks to Juraj Medzihorsky for picking this up.

o Family functions: zigeometric()@weight and zigeometricff()@weight had one element incorrect.

o logit("a+b", short = FALSE) was labelled incorrectly, etc.

o Family function tobit()@weights implements Fisher scoring entirely. And it handles observations whose fitted values are (relatively) large and positive; thanks to Victor Champonnois for picking up this bug.

o S3 methods function df.residual_vlm() also called by df.residual.vlm(). This is to avoid a bug picked up by car::linearHypothesis(). Ditto for vcovvlm() by vcov.vlm(). Also model.matrix() and formula(). Thanks to Michael Friendly and John Fox for help here.
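   A quick numerical check of the reciprocal property of the first
   derivatives described above (a sketch; any link, e.g., logit(), will do):

     theta <- seq(0.1, 0.9, by = 0.2)
     logit(theta, deriv = 1) * logit(theta, deriv = 1, inverse = TRUE)  # all 1s
     logit(theta, deriv = 2) * logit(theta, deriv = 2, inverse = TRUE)  # not 1s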
************************************************* * * * 0.9 SERIES NEWS * * * ************************************************** CHANGES IN VGAM VERSION 0.9-8 NEW FEATURES o Tested okay on R 3.2.0. o is.buggy() tests to see if a fitted VGAM object suffers from known bugs, e.g., a vgam() object with at least one s() term whose constraint matrix does not have orthogonal columns. o New family function: AR1(d). o New function: dgenpois(). o The package has been updated to reflect the new J. Stat. Soft. paper by Yee, Stoklosa and Huggins. A vignette based on this paper is now included. o dgenbetaII() has now been written; and genbetaII() improved, and about 8 special cases of genbetaII() have all been modernized to handle multiple responses and a default grid search over all the parameters (arguments 'gscale' and 'gshape1.a', etc.). These families are based on Kleiber and Kotz (2003). BUG FIXES and CHANGES o Family function genpoisson() has been modernized, and should give correct results wrt AIC() etc. o Argument 'init.alpha' renamed to 'ialpha', for the brat(), bratt(), and dirmul.old() families. o Calls to N.hat.posbernoulli() used Hlist = constraints rather than Hlist = Hlist; this failed for RR-VGLMs. o Family function tobit() obtains initial values even in the case when it would otherwise fit an underdetermined system of equations. Thanks to McClelland Kemp for picking this up. CHANGES IN VGAM VERSION 0.9-7 NEW FEATURES o Tested okay on R 3.1.2. o linkfun() and nparam() are new generic functions. o betabinomialff() replaces 'lshape12' with 'lshape1' and 'lshape2'. Arguments 'i1' and 'i2' are now 'ishape1' and 'ishape2'. o ABO() has more arguments. o Arguments lower.tail and log.p have been added to quite a few pq-type functions (work done by Kai Huang). BUG FIXES and CHANGES o Argument 'mv' has been renamed to 'multiple.responses'. This applies to about 10 family functions such as binomialff(). o Argument 'lss' added to betaII(), dagum(), fisk(), genbetaII(), inv.paralogistic(), paralogistic(), sinmad(). Note that the order of the arguments of these functions will change in the near future, and consequently the order of the parameters. The [dpqr]-type functions of all these distributions have arguments that have been rearranged. o All d-type functions handle 'x = Inf' and 'x = -Inf'. Much help from Kai Huang here. Thanks to Ott Toomet for alerting me to this type of bug. o vsmooth.spline() has 2 argument name changes, and a little reordering of its arguments. o More p-type functions handle 'q = Inf' and 'q = -Inf'. More q-type functions handle 'p = 0' and 'p = 1'. Much help from Kai Huang here. o AA.Aa.aa() and A1A2A3() handled 'inbreeding' the wrong way round. o pposnorm() returned wrong answers. o ptobit(log.p = TRUE) was incorrect, as well as some other bugs in [dpqr]tobit(). The dtobit(Lower) and dtobit(Upper) have changed. o negbinomial() now computes the EIM wrt the 'size' parameter based on a finite approximation to an infinite series (provide the mu and size parameter has values lying in a certain range). This may be time- and/or memory-hungry, but the user has control over this via some arguments such as max.mu, min.size and chunk.max.MB. o Renamed functions: elogit() is now called extlogit(), fsqrt() is now called foldsqrt(). CHANGES IN VGAM VERSION 0.9-6 NEW FEATURES o All r-type functions handle the 'n' argument the same way as runif(). This was done with the help of Kai Huang. BUG FIXES and CHANGES o Slot "res.ss" changed to "ResSS". 
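   For the runif()-like handling of the 'n' argument noted under version
   0.9-6 above, a minimal sketch (rpospois() is one of the affected r-type
   functions):

     rpospois(3, lambda = 2)           # three variates
     rpospois(c(8, 8, 8), lambda = 2)  # also three variates, since length(n) > 1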
o Some argument reference errors found by valgrind have been fixed. CHANGES IN VGAM VERSION 0.9-5 NEW FEATURES o Tested okay on R 3.1.2. o New argument 'lss' appears on some family functions. This is important because it changes the order of the parameters. o New functions: QR.Q(), QR.R(), [pq]rice() (thanks to Benjamin Hall for pointing that these are based on the Marcum-Q function). o exponential() has a new loglikelihood slot. Thanks to Neyko Neykov for picking this up this omission. o Constraint matrices in process.constraints() are checked that they are of full column-rank. o New family functions: better.exponential(), polyaR(). o New functions: qvplot() is preferred over plotqvar(), [dpqr]levy(). o summary() applied to a "vglm" object now prints out the table of estimates, SEs, test statistics and p-values very similarly to glm() objects. In particular, two-tailed p-values in the 4th column are new; these correspond to the z ratio based on a normal reference distribution. o gev(), egev(), and gpd() have a 'type.fitted' argument, which should be set to "mean" if the mean is desired as the fitted values. gpd() has a stop() if the data is negative. o AA.Aa.aa() and A1A2A3() have a 'inbreeding = TRUE' argument. If 'inbreeding = TRUE' then an extra parameter is estimated. If 'inbreeding = FALSE' then the inbreeding coefficient is 0 by definition, and not estimated. G1G2G3() is now renamed to A1A2A3(). o Decommissioned VGAM family functions: AAaa.nohw(), matched.binomial(). o deviance() applied to a "qrrvglm" or "rrvgam" object now has a 'history' argument. o binomialff(mv = TRUE) is no longer restricted to responses having 0 and 1 values. o New data sets: flourbeetle. o The 'constraints' argument accepts a list with functions as components, that compute the constraint matrices. BUG FIXES and CHANGES o Renamed the order of arguments and linear predictors (now, 'location'-type precedes 'scale'-type, and 'scale'-type precedes 'shape'-type parameters): benini1(dpqr) bisa(dpqr) gumbelII(dpqr) makeham(dpqr) nakagami(dpqr) perks(dpqr) riceff(dpqr) genrayleigh(dpqr) expexpff1(), expexpff() exppoisson(dpqr) gammaR() o Renamed parameter names: poissonff() has "lambda", not "mu", binomialff() has "prob", not "mu". o Renamed functions: plot.vgam() plots "vgam" objects, not plotvgam(). Use plot(as(vglmObject, "vgam")) to plot vglm() objects as if they were vgam() objects. plot.vgam(): the user has total control over 'xlim' and 'ylim' if specified. o Renamed functions: cm.zero.vgam() has become cm.zero.VGAM(), cm.nointercept.vgam() has become cm.nointercept.VGAM(), cm.vgam() has become cm.VGAM(), process.categorical.data.vgam to process.categorical.data.VGAM, process.binomial2.data.vgam to process.binomial2.data.VGAM. o Link loge() returns "loge" as its tag, not "log" anymore. o Class "cao" changed to "rrvgam". o dbilogis4() was faulty. o Renamed arguments: 'location' is now 'scale' in [dpqr]pareto(), and paretoff(). o gev() and egev() handle working weights better when sigma is close o gev(zero = 3) has changed to gev(zero = 2:3), by default, and egev(zero = 3) has changed to egev(zero = 2:3), by default. That is, only the location parameter is now modelled as functions of covariates, by default; the scale and shape parameters are intercept-only. o bigamma.mckay(zero = 1) has changed to bigamma.mckay(zero = 2:3), by default. o rlplot() works for gev() model fits now. 
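   For the rlplot() item above, a minimal sketch (it uses gevff(), the
   current name for the old egev(), and an intercept-only fit):

     set.seed(123)
     gdata <- data.frame(y1 = rgev(100, location = 0, scale = 2, shape = -0.1))
     fit <- vglm(y1 ~ 1, gevff, data = gdata)
     rlplot(fit)  # return level plot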
o Renamed functions: subsetc() has become subsetcol(), my1() has become sc.min1(), my2() has become sc.min2(), stdze1() has become sc.scale1(), stdze2() has become sc.scale2(), mlogit() has become multilogit().

o Decommissioned VGAM family functions: AB.Ab.aB.ab2().

o Renamed VGAM family functions:
      OLD NAME:            NEW NAME:
      amh()                biamhcop()
      bigumbelI()          bigumbelIexp()
      fgm()                bifgmcop()
      gammahyp()           gammahyperbola()
      morgenstern()        bifgmexp()
      plackett()           biplackettcop()
      benini()             benini1()
      cgumbel()            cens.gumbel()
      cenpoisson()         cens.poisson()
      cennormal()          cens.normal()
      double.cennormal()   double.cens.normal()
      recnormal()          rec.normal()
      recexp1()            rec.exp1()
      invbinomial()        inv.binomial()
      invlomax()           inv.lomax()
      invparalogistic()    inv.paralogistic()
      koenker()            sc.studentt2()
      frechet2()           frechet()
      hypersecant.1()      hypersecant01()
      gengamma()           gengamma.stacy()
      beta.ab()            betaR()
      betabinom.ab()       betabinomialR()
      gamma2.ab()          gammaR()   [see note about reordered arguments]
      logistic2()          logistic()
      lgammaff()           lgamma1()
      lgamma3ff()          lgamma3()
      SUR()                SURff()
      expexp()             expexpff()
      expexp1()            expexpff1()

o Functionality has changed: weibull() has become weibullR(lss = FALSE). Also, 'zero' has changed.

o Data sets renamed: mmt renamed to melbmaxtemp.

o lms.bcn(): changes in the arguments.

o [log]alaplace[123](): changes in the arguments, e.g., 'parallelLocation' changed to 'parallel.locat'.

o Argument 'reference' has been changed to 'refResponse' for CQO objects.

o Argument 'shrinkage.init' has been changed to 'ishrinkage'.

o Argument 'matrix.arg = TRUE' has been changed to 'drop = FALSE' in fittedvlm().

o Bug in dbort(). Thanks to Benjamin Kjellson for picking this up.

o vglm.control()$save.weight changed to vglm.control()$save.weights. vgam.control()$save.weight changed to vgam.control()$save.weights.

o "ccoef" has been replaced by "concoef".

o Some documentation regarding qvar(se = TRUE) was wrong.

o Argument "alpha" in several bivariate distributions has been replaced by "apar", for association parameter.

o Arguments "optima" replaced by "optimums", "maxima" replaced by "maximums", "logmaxima" replaced by "log.maximums".

o Function getMaxMin() renamed to grid.search().

o lognormal3() withdrawn.

o dfbeta() returns the difference between the coefficients.

o negbinomial(deviance = TRUE) works when fitting the NB-2, provided criterion = "coef" or half.step = FALSE.

o Argument "a" replaced by "rate" in maxwell(dpqr).

o Arguments "x1" and "x2" replaced by "q1" and "q2" in pbinorm().


CHANGES IN VGAM VERSION 0.9-4

NEW FEATURES

o New data sets: cfibrosis, lakeO, wine.

o New functions: Select().

o negbinomial(deviance = TRUE) works, provided criterion = "coef" is used too.

o simulate() works with binomialff(), poissonff(), rayleigh() and several other families. See help(simulate.vlm) for a current listing.

o coef(colon = FALSE) works for VLM objects.

o pslash() has a 'very.negative = -4' argument. Thanks to Tiago Pereira for picking this up.

o Some family functions have a 'summation = TRUE' argument in the loglikelihood slot. It can be accessed using, e.g., logLik(fit, summation = FALSE). See ?logLik.vlm. Similarly for deviance(fit, summation = FALSE).

o Tested okay on R 3.1.0.

BUG FIXES and CHANGES

o bs(), ns(), scale() and poly() are no longer smart, but they will handle simple terms such as bs(x) and scale(x). The smart versions of those functions have been renamed to sm.bs(), sm.ns(), sm.scale(), sm.poly(); these will handle complicated terms such as sm.bs(sm.scale(x)) (see the sketch below).
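  A hedged sketch of the sm.*() functions just described (simulated data;
  all names here are illustrative):

      library(VGAM)
      set.seed(1)
      udata  <- data.frame(x2 = runif(100))
      udata  <- transform(udata, y = rnorm(100, mean = sin(2 * x2)))
      fit    <- vglm(y ~ sm.bs(x2, df = 4), uninormal, data = udata)
      udata2 <- data.frame(x2 = seq(0.1, 0.9, by = 0.1))
      predict(fit, newdata = udata2)  # smart prediction rebuilds the basis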
o Renamed functions: identity() has become identitylink().

o Argument names changed: 'ITolerances' renamed to 'I.tolerances' throughout, 'EqualTolerances' renamed to 'eq.tolerances' throughout.

o Bug in mix2normal() fixed in @initialize. Thanks to Troels Ring for finding the bug.

o Upon loading the package, no warnings (such as masking) are given.

o multinomial(parallel = TRUE) now applies the parallelism constraint to the intercept.

o If a factor response is ordered then a warning is issued for multinomial().

o predict(fit, newdata = zdata2, type = "response") used to fail for z[ai][poisson][,ff]() and z[ai][negbinomial][,ff]() families. Thanks to Diego Nieto Lugilde for picking this up.

o A bug with offsets and coefstart has been fixed. Thanks to Giuseppe Casalicchio for picking this up.

o Variables "Blist" replaced by "Hlist".

o Expression new.s.call no longer used in vglm.fit() and vgam.fit(). Musual has been replaced by M1.

o Variable names changed in: prinia, Huggins89table1, Huggins89.t1.

o Memory leaks found by valgrind have been patched.


CHANGES IN VGAM VERSION 0.9-3

NEW FEATURES

o New argument: posbinomial(omit.constant = FALSE), set to TRUE if comparing M_0/M_h models with M_b/M_t/M_tb/M_bh/M_th/M_tbh.

o rcim() works with family = multinomial, in conjunction with arguments M and cindex being specified. rcim() also has additional arguments and new defaults.

o New arguments: positive Bernoulli functions have 'p.small' and 'no.warning' arguments.

o AICc() is new (see the sketch below).

o family.name() generic is new.

o New data sets: prinia.

o logLik() methods function for "qrrvglm" objects. AIC() methods function for "qrrvglm" objects is corrected. AIC() methods function for "cao" objects is new.

BUG FIXES and CHANGES

o vgam() with nontrivial constraints gives incorrect predict(vgam.object) and fitted(vgam.object). Not yet fixed up but will try soon! Thanks to Zachary Kurtz for picking this up.

o Argument 'which.lp' changed to 'which.linpred'. Argument 'which.eta' changed to 'which.linpred'. Argument 'lapred.index' changed to 'linpred.index'. Argument 'whichSpecies' changed to 'which.species'. Argument 'plot.it' changed to 'show.plot'. Argument 'intervalWidth' in plotqvar() changed to 'interval.width'.

o Decommissioned VGAM family functions: cennormal1().

o posbinomial() returns @extra$N.hat and @extra$SE.N.hat if the number of trials is constant across observations.

o calibrate() restored to working order.

o Argument names changed: 'szero' renamed to 'str0' throughout, 'allowable.length' renamed to 'length.arg' in is.Numeric().

o Function uqo() has been withdrawn. Reasons: (i) it needs to be rewritten in C but unfortunately I am too busy... (ii) it is a very difficult optimization problem, probably too difficult to solve in general efficiently.

o Arguments in rcqo() have changed.

o Data set Perom withdrawn, but deermice remains.

o Argument 'zero' in binom2.or() had a bug.


CHANGES IN VGAM VERSION 0.9-2

NEW FEATURES

o New family functions: logF(d), biclaytoncop(dr), binormalcop(dp), bistudentt(d), and a basic normal.vcm(), zabinomialff(), zageometricff(), zanegbinomialff(), zapoissonff(), zibinomialff(), zigeometricff(), zinegbinomialff().

o cao.control(suppress.warnings = TRUE) is new, and it suppresses warnings (esp. lack of convergence) by default.

o The convergence criterion now takes into consideration the sample size, somewhat. It should stop premature convergence for very large data sets.
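  A hedged sketch of AICc() (flagged above); the pneumo data frame ships
  with VGAM:

      library(VGAM)
      pneumo <- transform(pneumo, let = log(exposure.time))
      fit <- vglm(cbind(normal, mild, severe) ~ let, acat, data = pneumo)
      c(AIC = AIC(fit), AICc = AICc(fit))  # small-sample corrected AIC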
o New functions: dpois.points(), log1pexp(), expint(), expexpint(), expint.E1(), dbinorm(), rbinorm(), kendall.tau(), qvar(). Also, depvar(type = c("lm", "lm2")) has a 'type' argument. Also, aux.posbernoulli.t() is new.

o New link functions: logneg().

o New data sets: beggs, corbet, deermice, machinists, prats, V1.

o Argument 'form2' added to vgam(), so vgam.fit() has been modified too.

o posbernoulli.tb() seems correct, and works for any number of sampling occasions. And posbernoulli.[b,t,tb]() have more argument choices.

o BIC() is now available; it is based on AIC(..., k = log(nobs(object))). But users need to use it with care. Also, AICvlm() has a 'corrected = FALSE' argument.

o fittedvlm() now has a 'type.fitted' argument that allows different fitted values to be computed from a vglm()/vgam() object. Several family functions such as zi*() [e.g., zipoisson()] and za*() [e.g., zapoisson()] have a 'type.fitted' argument that matches it (see the sketch below).

BUG FIXES and CHANGES

o Default arguments have changed, esp. wrt 'zero', for: zibinomial(), zinegbinomial().

o cao() used to crash due to memory problems and segment faults.

o Syntax such as parallel = TRUE ~ 1 is now supported. Hence argument 'apply.parint' has been removed.

o posbernoulli.b() has a new and superior parameterization, and is faster.

o Printed output when trace = TRUE has been improved, especially for large data sets.

o For ordination methods "lv" has been generally replaced by "latvar". "latvar()" is supported, "lv()" will become fully deprecated soon. But "lvplot()" is retained. Also, this applies to most argument names and list component names returned, e.g.,
      OLD NAME:       NEW NAME:
      isdlv           isd.latvar
      varlvI          varI.latvar
      lvOrder         latvar.order
      OptimumOrder    Optimum.order
      maxfitted       max.fitted
      SD.Ainit        sd.Ainit
      SD.Cinit        sd.Cinit
      SD.sitescores   sd.sitescores

o For ordination methods "ccoef" has been generally replaced by "concoef". This applies to most methods functions. Attributes have changed too, from "ccoefficients" to "con.coefficients".

o VGAM now suggests \pkg{VGAMdata}.

o Renamed VGAM family functions:
      OLD NAME:             NEW NAME:
      normal1()             uninormal()
      bivgamma.mckay()      bigamma.mckay()
      cennormal1()          cennormal()
      dcennormal1()         double.cennormal()
      dexpbinomial()        double.expbinomial()
      explogarithmic()      explogff()
      frank()               bifrankcop(dpr)
      [dpr]frank()          [dpr]bifrankcop()
      fnormal1()            foldnormal()
      [dpqr]fnorm()         [dpqr]foldnorm()
      gumbelIbiv()          bigumbelI()
      mbinomial()           matched.binomial()
      mix2normal1()         mix2normal()
      mix2normal1.control() mix2normal.control()
      nidentity()           negidentity()
      nloge()               negloge()
      pnorm2()              pbinorm(dpr)
      pareto1()             paretoff()
      poissonp()            poisson.points()
      powl()                powerlink()
      recnormal1(d)         recnormal()
      rig()                 rigff()
      skewnormal1()         skewnormal()
      [dr]snorm()           [dr]skewnorm()
      tpareto1()            truncpareto()
      wald()                waldff()

o Decommissioned functions:
      OLD        NEW
      dnorm2()   dbinorm()
      pnorm2()   pbinorm()

o Renamed internal functions:
      OLD        NEW
      lv.cao()   latvar.cao()

o Renamed arguments:
      OLD        NEW
      equalsd    eq.sd

o Internally, variable identifiers with "_" have been replaced by a ".", e.g., X_vlm becomes X.vlm. Saved component names follow this change too, e.g., @extra$ncols_X_lm becomes @extra$ncols.X.lm.

o Improved: fgm() has its explicit EIM programmed in.

o summary() applied to a "rcim0" or "rcim" object now works.

o Family functions which have changed: zigeometric().

o Slot name "rss" changed to "res.ss".

o zinegbinomial()@weight continues to use Fisher scoring until the random variates are not all zeros or all nonzeros.
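  A hedged sketch of the 'type.fitted' mechanism flagged above (simulated
  data; names illustrative):

      library(VGAM)
      set.seed(1)
      zdata <- data.frame(x2 = runif(200))
      zdata <- transform(zdata, y = rzipois(200, lambda = exp(1 + x2), pstr0 = 0.3))
      fit <- vglm(y ~ x2, zipoisson(type.fitted = "pstr0"), data = zdata)
      head(fitted(fit))                           # P(structural zero)
      head(fittedvlm(fit, type.fitted = "mean"))  # the usual mean instead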
o loglinb2(zero = 3) and loglinb3(zero = 4:6) are defaults now (used to be zero = NULL).

o Data sets moved: wffc, wffc.nc, etc. moved to \pkg{VGAMdata}.

o stats::print.anova() no longer called directly by lrtest().


CHANGES IN VGAM VERSION 0.9-1

NEW FEATURES

o A companion package, called \pkg{VGAMdata}, is new. Some large data sets previously in \pkg{VGAM} have been shifted there, e.g., xs.nz and ugss. In \pkg{VGAMdata} there are the (new) oly12 and students.tw.

o pnorm2() argument names have changed from 'sd1' to 'var1', etc., and 'rho' to 'cov12'. See documentation. A warning is given if it returns any negative value.

o Introduction of g-type arguments for grid search.

o Improved initial values for: lomax().

o Argument 'bred' works for poissonff().

o latvar() generic available, identical to lv(). But the latter will be withdrawn soon.

o Rank() generic available for RR-VGLMs, QRR-VGLMs, CAO models.

o New functions: pgamma.deriv(), pgamma.deriv.unscaled(), vlm2lm.model.matrix().

o New VGAM family functions: posbernoulli.b(), posbernoulli.t(), posbernoulli.tb(tau = 2 or 3). These provide estimates of N as well as its standard error (see the sketch below). Also, truncgeometric() and truncweibull() are new. Also, SUR() is new. Also, binom2.rho.ss() does not work yet.

o New argument 'matrix.out = FALSE' for constraints.vlm().

o cm.vgam() has a few more arguments to provide more flexibility. But there should be no changes for VGAM users at this stage.

o Renamed functions: confint_rrnb() is now renamed to Confint.rrnb() and confint_nb1() is now renamed to Confint.nb1().

o Some changes to component names returned by Confint.rrnb() and Confint.nb1(): $CI. and $SE. are uppercase.

o Some zero-inflated VGAM family functions return a "vglm" object with @misc$pstr0 for the estimated probability of a structural zero.

o New data set: olym12. Note that Students.tw is earmarked for \pkg{VGAMdata}.

o Data sets renamed: olympic renamed to olym08.

o Qvar() has a 'which.eta = 1' argument specifying which linear predictor to use. So quasi-variances are now available for models with M > 1 linear predictors.

o Tested okay on R 3.0.0.

BUG FIXES and CHANGES

o VGAM now depends on R >= 2.15.1.

o Fortran array bounds problems (picked up by AddressSanitizer) have been fixed.

o All "no visible binding for global variables" warnings have been suppressed.

o vgam() with a s(spar = myspar) term should run, and if myspar is extracted from a previous vgam() model then the two models should effectively be the same.

o summaryvgam() did not calculate or print out all the p-values for testing linearity.

o fnormal1()@initialize was faulty wrt lm.wfit().

o zageometric() and zigeometric() handle multiple responses.

o mlogit(inverse = TRUE) and mlogit(inverse = FALSE) were switched. Now multinomial() makes use of mlogit(). mlogit() now calls care.exp() to avoid overflow and underflow; this stops multinomial() from returning an NA as a fitted value if abs(eta) is very large.

o arwz2wz() introduced to simplify the construction of working weight matrices for multiple responses.

o Renamed functions: dhuggins91() is now dposbern(), huber() is now huber2(), ei() is now eifun(), eij() is now eijfun(), rss.vgam() is now ResSS.vgam().

o fisherz(theta) was wrong. Corrected, then replaced by atanh(theta).

o [dpq]dagum(x), [dpq]lomax(x), [dpq]sinmad(x), etc. handled correctly for x = 0, Inf, -Inf, NaN, NA.

o qdagum(x) failed due to 'Scale' [thanks to Alena Tartalova].
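  A hedged sketch of the capture-recapture abundance estimates mentioned
  for posbernoulli.t() above; the deermice data (see 0.9-2) has capture
  histories y1-y6:

      library(VGAM)
      fit <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ 1, posbernoulli.t,
                  data = deermice)
      fit@extra$N.hat      # estimated population size
      fit@extra$SE.N.hat   # and its standard error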
o Arguments renamed: 'intercept.apply' renamed to 'apply.parint', 'Norrr' renamed to 'noRRR' (warning/error message issued), 'nowarning' renamed to 'noWarning' in vglm.control().

o seq2binomial()@loglikelihood includes the binomial lchoose() constants.

o qgev() had a bug [thanks to Alex Cannon], and so did qgpd().

o cao() usually produces fewer error/warning messages.

o Data sets corrected for errors: chinese.nz.

o Data set changes: gew had incorrect y1 and y2 values, and variables x1 to x4 have been renamed to value.g, capital.w, etc. The year variable has been added.


CHANGES IN VGAM VERSION 0.9-0

NEW FEATURES

o Major change: VGAM family functions no longer have arguments such as earg, escale, eshape, etc. Arguments such as offset that used to be passed in via those arguments can be given directly through the link function. For example, gev(lshape = "logoff", eshape = list(offset = 0.5)) is replaced by gev(lshape = logoff(offset = 0.5)). The @misc slot retains the $link and $earg components, however, the latter is in a different format. Functions such as dtheta.deta(), d2theta.deta2(), eta2theta(), theta2eta() have been modified. Link functions have been simplified somewhat. The casual user will probably not be affected, but programmers will. Sorry about this!

o New VGAM family functions: [dpqr]gompertz(), [dpqr]gumbelII(), [dpr]lindley(), [dpqr]makeham(), [dpqr]perks().

o df.residual() supports a new formula/equation for 'type = "lm"'.

o garma("reciprocal") supported.

o is.parallel() for constraint matrices summary.

o Improved family functions: these can handle multiple responses: benini(), chisq(), erlang(), exponential(), gamma1(), geometric(), gpd(), inv.gaussianff(), logff(), maxwell(), rayleigh(), yulesimon(), zetaff().

o New data set: hormone [http://www.stat.tamu.edu/~carroll/data/hormone_data.txt].

o If a factor response is not ordered then a warning is issued for acat(), cratio(), cumulative() and sratio().

o New dpqr-type functions: [dpqr]perks(), [dpqr]mperks(), [dpqr]mbeard().

o Argument 'parallel' added to gamma2().

o New link functions: mlogit().

BUG FIXES and CHANGES

o zibinomial() had 1 wrong element in the EIM; one of the corrections of VGAM 0.8-4 was actually incorrect.

o zibinomial() blurb was wrong: previously it was "(1 - pstr0) * prob / (1 - (1 - prob)^w)" where prob is the mean of the ordinary binomial distribution. Now it is "(1 - pstr0) * prob".

o betaff() no longer has "A" and "B" arguments; they are extracted from "lmu = elogit(min = A, max = B)".

o binom2.rho() has "lmu" as a new second argument.

o logistic2() has zero = -2 as the default, and can handle multiple responses.

o gengamma() returned the wrong mean (picked up by Andrea Venturini): not b * k but b * gamma(k + 1 / d) / gamma(k).

o tobit.Rd now states vector values for 'Lower' and 'Upper' are permitted. Also, @misc$Lower and @misc$Upper are matrices of the same dimension as the response.

o constraints.vlm(type = c("vlm", "lm")) has been changed to constraints.vlm(type = c("lm", "term")) [respectively].

o Rcam() renamed to Rcim(), and rcam() renamed to rcim(). Class "rcam" changed to "rcim".

o Days changed from "Monday" to "Mon" in all crash data frames, etc.

o w.wz.merge() written to handle the working weights for multiple responses. w.y.check() written to check the integrity of the prior weights and response.

o Argument 'sameScale' changed to 'eq.scale'; 'quantile.probs' in negbinomial-type families changed to 'probs.y'.

o dirmultinomial() no longer gives warnings.
o Renamed arguments: benini(earg <- eshape), binormal(equalmean <- eq.mean), binormal(equalsd <- eq.sd).

o dirmultinomial() can handle a 1-row response [thanks to Peng Yu].

o weibull() gives improved warnings re. the shape parameter wrt regularity conditions.

o The 12 most time-consuming examples have been placed in a \dontrun{} to save time.

o Argument "prob.x" renamed to "probs.x".

o Argument "hbw" removed from iam().

o Argument 'name' passed into .C() and .Fortran() [in dotC() and dotFortran()] is now okay because the first argument is unnamed.


**************************************************
*                                                *
*                0.8 SERIES NEWS                 *
*                                                *
**************************************************


CHANGES IN VGAM VERSION 0.8-7

NEW FEATURES

o Modified VGAM family functions: genbetaII()@initialize has been improved, as well as those special cases of that distribution (such as sinmad, lomax, paralogistic, dagum, etc.).

o Argument 'lapred.index' added to model.matrix().

o npred() is now defined as a generic function (returns M).

o hatvalues() and hatplot() written for vglm() objects.

o The argument 'qr.arg' is now set TRUE by default in vglm().

o df.residual() supports the argument 'type = c("vlm", "lm")'.

o Argument 'nowarning' added to vglm.control().

o New data set: ucberk.

o Improved functions: rposbinom(), rposgeom(), rposnegbin(), rpospois().

o Tested okay on R 2.15.0.

BUG FIXES and CHANGES

o Labelling of the linear predictors for sratio(), cratio() etc. was faulty.

o pbetabinom.ab() did not recycle shape1 correctly [found by David Venet].

o Arguments lower.tail and log.p are not supported (temporarily) in pposbinom() and qposbinom().


CHANGES IN VGAM VERSION 0.8-6

NEW FEATURES

o Modified VGAM family functions: sinmad()@initialize has been improved.

BUG FIXES and CHANGES

o VGAM now depends on R >= 2.14.0.

o Trying to eliminate some residual errors with the NAMESPACE.


CHANGES IN VGAM VERSION 0.8-5

NEW FEATURES

o New VGAM family functions: negbinomial.size(), zabinomial(dpqr), zageometric(dpqr), [dpqr]posgeom().

o New link functions: nbcanlink().

o Modified VGAM family functions: posnegbinomial(), zanegbinomial() and zinegbinomial() use the nsimEIM argument; zipoisson() handles a matrix response; all [dpqr]zi-type functions handle zero-deflation; normal1() can model the variance too as the 2nd parameter.

o Rudimentary methods functions for lrtest() and update(), based on packages lmtest and base.

o The VGAM family functions for genetic models have been improved wrt initial values.

o New data sets: xs.nz.

BUG FIXES and CHANGES

o In anticipation of R version 2.15.0, VGAM imports 'coef', 'plot', 'summary', 'vcov' from stats4. Calls to 'print' have been replaced by 'show' since VGAM uses S4 methods. Numerous NAMESPACE changes have been made. No more warnings during checking and installation!

o Labelling in summary() of vglm() objects changed. It now closely follows glm(). In particular, it has changed from c("Value", "Std. Error", "t value") to c("Estimate", "Std. Error", "z value"). Note that "z value" might change later to, e.g., "Wald".

o Zero-inflated and zero-altered functions have renamed and reordered arguments. Ouch! These include 'pstr0' for the probability of a structural 0 [zero-inflated], and 'pobs0' for the probability of an observed 0 [zero-altered]. For example, argument lpstr0 replaces lphi in zipoisson(). The order of these arguments, including the respective dpqr-type functions, may have changed too (see the check below).

o zapoisson() now implements Fisher scoring.
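  A quick numerical check (sketch) of the 'pstr0' parameterization just
  described:

      library(VGAM)
      dzipois(0, lambda = 2, pstr0 = 0.5)  # P(Y = 0)
      0.5 + 0.5 * dpois(0, 2)              # same value, about 0.5677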
o zipoissonff() had the wrong sign for the non-diagonal EIM element.

o nobs() is now defined as a generic function (needed for older versions of R---versions 2.12.2 or earlier, actually).

o Data sets renamed: uscrime and usgrain renamed to crime.us and grain.us; bminz renamed to bmi.nz; nzc renamed to chinese.nz; nzmarital renamed to marital.nz.

o Improved family functions: genbetaII(), betaII(), sinmad(), dagum(), lomax(), invlomax(), fisk(), invparalogistic(), paralogistic(); wrt fitted values (range checks are in place now). These functions have many argument names changed, e.g., link.a is now lshape1.a, init.a is now ishape1.a. Also, some default initial values have changed from 1 to 2.

o Argument names changed (Ouch!): q.lag.ma changed to q.ma.lag in garma().


CHANGES IN VGAM VERSION 0.8-4

NEW FEATURES

o VGAM family functions renamed (Ouch!): 1. 'betabinom.ab' renamed to 'betabinomial.ab'.

o Other functions renamed (Ouch!): 1. '[dpr]betabin' renamed to '[dpr]betabinom' etc.; 2. '[dpr]betabin.ab' renamed to '[dpr]betabinom.ab' etc.

o Slot names changed (Ouch!): 1. 'link' renamed to 'linkfun' for class "vglmff"; 2. 'inverse' renamed to 'linkinv' for class "vglmff".

o Extra arguments added to freund61().

o New VGAM family functions: abbott(), zigeometric(dpqr), huber1().

o New functions: [p]polono(), depvar() generic for the dependent (response) variable, Qvar() and explink() for quasi-variances.

o Improved functions: [d]polono(), [dr]betabin() handles rho = 0 (suggested by Peng Yu).

o Improved family functions: normal1() handles matrix 'weights'.

o Defaults changed: [dr]betabin(rho = 0).

o New methods functions: nobs(), nvar(), depvar() (see the sketch below).

o Renaming: fitted.vlm() is now fittedvlm(), persp.qrrvglm() is now perspqrrvglm(), predict.qrrvglm() is now predictqrrvglm(), predict.vglm() is now predictvglm().

o New data sets: finney44.

o VGAM now depends on R >= 2.11.1.

o Tested okay on R 2.14.0.

BUG FIXES

o zibinomial() had 2 wrong elements in the EIM; thanks to Alan Welsh for picking this up.

o margeff() for cumulative() was faulty.

o The blurb slot of binormal() was faulty.

o betabinomial() did not return the estimated rho in @misc$rho as did betabinomial.ab().

o kumar() did not initialize well with non-integer prior weights.

o rdagum() did not handle the 'scale' argument correctly.

o codes() in s() is defunct so it has been replaced.


CHANGES IN VGAM VERSION 0.8-3

NEW FEATURES

o Argument names changed (Ouch!): 1. 'method.init' renamed to 'imethod'; 2. 'k' renamed to 'size' in negbinomial(), zanegbinomial(), posnegbinomial(), and zinegbinomial(): e.g., 'lk' renamed to 'lsize', 'ik' renamed to 'isize', etc.

o New data sets: hued, huie, huse, ugss, uscrime; usagrain renamed to usgrain.

o The "prior.weights" slot is now a "matrix", not "numeric".

o [dpr]betabin.ab() now handles size = 0. Thanks to Robert Wolpert for picking up this bug.

o New VGAM family functions: [d]binormal(), [dr]huggins91() (but this is not working properly), [dpqr]explogarithmic(), polya().

o New functions: [dpqr]tobit().

o Improved family functions: tobit() implements the proper EIM for the standard model; dcennormal1() has some new arguments [and has been renamed]; cennormal1() and cenrayleigh() renamed.

o VGAM now depends on R >= 2.10.0.

o Rcam(), moffset(), etc. for RCAMs have been improved and modified.

o VGAM family functions currently withdrawn: frechet3().

o Tested ok on R 2.13.0.

BUG FIXES

o tobit()@loglikelihood had omitted the constant in dnorm(). Also, tobit() uses simulated Fisher scoring (for the nonstandard model).
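  A hedged sketch of the accessor generics added in 0.8-4 above:

      library(VGAM)
      pneumo <- transform(pneumo, let = log(exposure.time))
      fit <- vglm(cbind(normal, mild, severe) ~ let, multinomial, data = pneumo)
      nobs(fit)     # number of observations
      nvar(fit)     # number of variables
      depvar(fit)   # the response matrix actually used in the fit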
o moffset() was buggy.

o ABO() is a little more robust.

o dirichlet()@loglikelihood miscalculated. Ditto for gaussian()@loglikelihood (constants were omitted). Thanks to Arne Henningsen for picking up these bugs.

o zipoissonff() did not initialize correctly and labelling was wrong with matrix responses.


CHANGES IN VGAM VERSION 0.8-2

NEW FEATURES

o Objects of class "vglmff" have an "infos" slot to give information about the family.

o New functions: lambertW() (see the check below), rcam(), wffc.P3(), wffc.P3star(), confint_rrnb(), confint_nb1().

o New VGAM family functions: binom2.Rho(), [dpqr]expgeometric(), [dpqr]genrayleigh(), [dpqr]huber(), [dpqr]koenker(), studentt[23](), zipoissonff().

o Argument 'imethod' changed to 'method.init' for some families, e.g., cnormal1(), tobit(), weibull().

o Improvements have been made to binom2.rho().

o Improved family functions: negbinomial() has a new argument 'parallel'; micmen() has more initial value choices and fitting algorithms; kumar(), studentt() and studentt2() now implement the EIM; normal1() can handle multiple responses.

o Argument names changed: 'init.rho' renamed to 'irho' in binom2.rho(), 'a' renamed to 'scale' in rayleigh(), 'Structural.zero' renamed to 'szero' throughout.

o zapoisson() permutes the linear/additive predictors.

o Several families such as negbinomial(), z[ai]negbinomial(), zapoisson(), gamma2() handle zero = c(-2, 3), say, i.e., negative and positive values.

o New data sets: about half a dozen road crash data frames of 2009 NZ data.

o constraints(vglmfit) now has a 'type' argument that can be fed into the original fit (type = "lm") as the constraints argument.

o vchol() takes drastic action to avoid infinite looping: it sets the working weights to be proportional to the order-M diagonal matrix.

o lognormal() and lognormal3() now have zero = 2 as the default (was zero = NULL).

o Some variable names within grc() changed, e.g., Row. rather than Row.

o The smartpred package within VGAM has updated poly(), ns(), bs() and scale() for R version 2.12.0 (2010-10-15). Calls to smartpredenv are now VGAM:::smartpredenv.

BUG FIXES

o VGAM:::VGAMenv is used now to avoid problems locating this environment.

o Input of the mustart, etastart and coefstart arguments at the solution should result in only one iteration being needed.

o binomialff() and categorical families (e.g., multinomial) only accept a factor or non-negative counts as the response. This allows the 'weights' vector to have any value really. In the past the denominator of a sample proportion was allowed via the 'weights' argument.

o wffc.P1() had some rounding problems, e.g., with 0.280 m.


CHANGES IN VGAM VERSION 0.8-1

NEW FEATURES

o Most of the Fortran 77 code has been converted to C. This change will be largely hidden from most users but there may be the occasional bug not detected. Much of the heavy work was done by Alvin Sou.

o lms.bcn()@loglikelihood incorporates the constants in the log-likelihood.

o Also, no more F90 code! This means fewer portability/platform problems.

o bivgamma.mckay(), formerly mckaygamma2(), has been modified substantially.

o Improvements have been made to simplex(), and [dr]simplex() have been written.

o Expectile functions for the uniform, normal and exponential distributions: [dpqr]-type functions.

o cqo() has EqualTolerances = TRUE and ITolerances = FALSE as the default now. The result is that cqo() should work without the environmental variables being scaled. If they are scaled then setting ITolerances = TRUE will result in greater speed and require less memory.
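  A quick check (sketch) of lambertW() from 0.8-2 above; W(x) solves
  W * exp(W) = x:

      library(VGAM)
      w <- lambertW(1)   # the omega constant, about 0.5671
      w * exp(w)         # recovers 1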
o Families that deal with proportions, such as binomialff() and betabinomial(), incorporate weights separately from the weights generated by the response/counts. So the weights argument can now have any positive values.

o rrvglm(..., Norrr = NULL) can be used so that the reduced-rank regression is applied to every variable including the intercept.

o Renaming: ggamma() is now gengamma(), etc.

o Improved functions: negbinomial() has a few new arguments.

BUG FIXES

o Deviance.categorical.data.vgam did not handle small fitted probabilities.

o binom2.rho() could produce small negative fitted probabilities.

o seq2binomial() did not initialize 'mvector'.

o zeta() crashed on some platforms.

o cqo() appears to be working again with the new C code.

o cao() is still not working with the new C code.

o zapoisson() did not implement the elambda argument correctly.

o Tested ok on R 2.11.1.


**************************************************
*                                                *
*                0.7 SERIES NEWS                 *
*                                                *
**************************************************


CHANGES IN VGAM VERSION 0.7-10

NEW FEATURES

o Surv() renamed to Surv4(), class "SurvS4" renamed to "Surv4".

o coef(summary(vglmObject)) returns a 3-column matrix of estimates, standard errors and Wald statistics, rather than coef(vglmObject) as in the past.

o Improved VGAM family functions: fff() uses simulated Fisher scoring now and has slightly better initial values.

o New VGAM family functions: propodds(reverse) is equivalent to cumulative(parallel=TRUE, reverse=reverse) (for convenience only).

o Compatible with R 2.10.1 and the article "The VGAM package for categorical data analysis," Journal of Statistical Software, 2010. A vignette based on this paper is included.

o Argument w.als renamed to w.aml in amlnormal().

BUG FIXES

o VGAM family functions: fff() had poor initial values.

o betabinomial()@loglikelihood required 'ycounts' to be integer.

o [dpqr]betanorm() were written but were not in the NAMESPACE.


CHANGES IN VGAM VERSION 0.7-9

NEW FEATURES

o New functions: margeff() for marginal effects of a vglm() "multinomial" or "cumulative" model (see the sketch below).

o Almost all VGAM family functions now have a "loglikelihood" slot that incorporates any constants in the density function. Hence the fitted likelihood may differ by a constant from previous results. In particular, models such as multinomial(), cumulative() and binom2.or() have this new feature.

o vglm() now has a modified 'xij' argument which implements eta-specific covariates. Usage now involves the 'form2' argument, and the 'xij' argument does not interfere with constraint matrices. Documentation is supplied on the VGAM website, in particular, http://www.stat.auckland.ac.nz/~yee/VGAM/doc/xij.pdf

o case.names() and variable.names() methods functions written for vglm()/vgam() objects.

o cumulative() has improved initial values, especially for the long data format, i.e., when each row of the data frame is an observation rather than inputting a matrix of counts.

o rrvglm() handles a factor response without giving a warning.

o New data: olympic.

o testf90.f90 has been renamed to testf90.f95. This may decrease the incidence of compiler problems on some platforms (f95 seems more popular than f90).

o For cqo() objects: AIC(), resid() have been written.

o Improved functions: negbinomial() default initial values are more robust to outliers in the response; betabinomial() and betabin.ab() have better initialization and a "loglikelihood" slot that matches dbetabin.ab(log=TRUE).

o Renamed VGAM family functions: alsqreg() becomes amlnormal().
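  A hedged sketch of margeff() (flagged above):

      library(VGAM)
      pneumo <- transform(pneumo, let = log(exposure.time))
      fit <- vglm(cbind(normal, mild, severe) ~ let, multinomial, data = pneumo)
      me <- margeff(fit)   # marginal effects d P(Y = j) / d x, per observation
      me[, , 1]            # evaluated at the first observation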
o Renamed arguments: lmu replaces link.mu in zibinomial().

o dzeta(p) has changed wrt 'p'.

o The functions summary.lms() and summary.rc.exponential() are no longer distributed, to avoid a warning wrt S3 vs S4 methods dispatch.

o The VGAM family functions for genetic models have been improved, e.g., some basic error checking. Also some changes in the names of the parameters, e.g., "q" to "pB" for ABO(), plus some switching of the order of the arguments.

BUG FIXES

o VGAM interferes much less with generic functions such as predict(), fitted(), resid(), wrt other packages and also including base's lm(), glm(), etc.

o The AIC() method for rrvglm() objects was wrong (it did not account for argument 'Structural.zero').

o dzibinom(log=TRUE) was wrong.


CHANGES IN VGAM VERSION 0.7-8

NEW FEATURES

o [dpqr]benf() written for Benford's distribution.

o plog() and dlog() improved.

o multinomial() now has a refLevel argument to specify the reference or baseline level of the factor.

o binom2.or() has a new argument 'morerobust'.

o Renamed arguments in mix2normal1().

o DeLury() written.

o [dpqr]-type functions related to the negative binomial distribution have changed wrt argument names and order.

o [pq]posnegbin(), [dpqr]zapois(), [dr]zanegbin() written.

o [dpqr]zinb() renamed to [dpqr]zinegbin().

o lms.bcn(), lms.bcg(), lms.yjn() have zero=c(1,3) as the new default. This will increase the chances of successful convergence.

o Renamed arguments in lms.bcn(), lms.bcg(), lms.yjn(), e.g., link.sigma is now lsigma. Some new arguments added too, e.g., llambda.

o Works for R 2.5.0 and later (not 2.4.0 and later). Compatible with R 2.7.2.

o Contains Fortran 90 code (since 0.7-7, actually). This will cause problems on older Linux machines without such a compiler. Actually, removing the .f90 file(s) will not be too much of a problem as there is very little F90 code in use by the package at the moment.

o New functions: dbinom2.rho(), rbinom2.rho(), dposnegbin().

o New data: wffc, wffc.indiv, wffc.teams, wffc.nc.

o Improved functions: binom2.rho(), micmen(), negbinomial(), poissonff(), posnegbinomial(), zanegbinomial().

o A new form2 argument has been added to vglm(). VGAM family functions such as micmen() have the regressor inputted using form2 now, rather than the regressor argument. The resulting usage is more elegant. Fitted objects have a few more slots, and formulas are put in set places on the object.

o AIC() methods functions have been modified.

BUG FIXES

o The big bug whereby vgam(... ~ s(x), ... ) did not work under Windows was due to a single array element that was not initialized. Evidently, Linux compilers seemed to have set it to zero. Funny, the code has worked for a decade or so...

o dposbinom() was buggy at x=0. Also it now handles size=0 and prob=0 or prob=1.

o pzipois() was buggy at x<0.

o dbetabin.ab(log=T) was incorrect outside its support.

o zipf() did not handle 0 < s < 1.

o data(ruge) was faulty.

o summary(rrvglmObject) failed.

o persp.cao(ylim=) did not work.

o plotvgam() failed when se=TRUE and which.cf was specified.


CHANGES IN VGAM VERSION 0.7-7

NEW FEATURES

o Labelling changes: binom2.or() uses "oratio" instead of "OR" (stands for the odds ratio).

o New VGAM family functions: zipebcom().

o New functions: dbinom2.or(), rbinom2.or() (see the sketch below).

o binom2.or() has new arguments 'imu1', 'imu2' and 'ioratio' for inputting optional marginal probabilities and the odds ratio. The third element of the score vector uses a new formula.

o [dpqr]zinb() has arguments prob and munb set to NULL by default.
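  A hedged sketch of the new [dr]binom2.or() functions flagged above:

      library(VGAM)
      set.seed(1)
      ymat <- rbinom2.or(1000, mu1 = 0.6, mu2 = 0.4, oratio = 3)
      colMeans(ymat)   # roughly the marginal probabilities 0.6 and 0.4
      dbinom2.or(mu1 = 0.6, mu2 = 0.4, oratio = 3)  # the four joint probabilities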
o Compatible with R 2.7.0.

BUG FIXES

o gaussianff()@loglikelihood was buggy.

o all(trivial.constraints(Blist)) changed to all(trivial.constraints(Blist) == 1) to avoid a warning in R 2.7.0. Ditto for 'all(findex)' and 'any(diff(Alphavec))'.

o qtriangle(0.3, theta=0.3) used to fail.

o gharmonic() handles a negative argument s.


CHANGES IN VGAM VERSION 0.7-6

NEW FEATURES

o dpolono() has a new argument 'bigx' which implements an approximation. It is for handling large values of x.

o vglm() and vgam() now create the response and model matrices etc. in the same way as glm(). A consequence is that the response does not have to be "numeric" as in lm(); e.g., a factor response is now permitted (see the sketch below).

o New VGAM family functions: alaplace1(), alaplace2(), alaplace3(dpqr), amlbinomial(), amlexponential(), amlpoisson(), amh(), lqnorm(), mbinomial(), scumulative().

o Other VGAM family functions with argument names changed or added: lms.yjn2().

o These VGAM family functions have been improved: alsqreg() [parallel option, the w argument can be a vector, link function for the expectiles].

o The data set "aml" has been renamed "leukemia".

o Previously laplace(zero=NULL), now laplace(zero=2).

BUG FIXES

o deplot() applied to a "lms.yjn2" object gave an unnecessary warning.

o In the leukemia and toxop data sets 1L is replaced by 1 and 2L by 2, etc.


CHANGES IN VGAM VERSION 0.7-5

NEW FEATURES

o New VGAM family functions: betaff(), cardioid(dpqr), cauchy(), felix(d), fnormal1(dpqr), invbinomial(), kumar(dpqr), lms.yjn2(), mix2exp(), plackett(dpr), riceff(dr), skellam(dr), zinegbinomial(dpqr).

o These VGAM family functions have been improved: frank(), genpoisson(), hzeta(), mix2normal1(), mix2poisson(), pospoisson(), studentt().

o These VGAM family functions have had their default arguments changed: genpoisson(), mix2normal1().

o New documentation: borel.tanner(dr).

o expm1() used whenever possible.

o Renamed VGAM family functions: betaff() changed to beta.ab().

o cauchy1() now returns the location estimates as the fitted values instead of NA (for the mean).

BUG FIXES

o cumulative(), sratio(), cratio(), acat() had response-matrix column names which got lost.

o lms.yjn() failed if there was not enough data.


CHANGES IN VGAM VERSION 0.7-4

NEW FEATURES

o weibull() does not handle any censored observations at all. The function cenweibull(), which will handle censored observations, is currently being written and will use Surv() as input; it should be distributed with version 0.7-5 of VGAM.

o bisa() now implements full Fisher scoring. No numerical integration is needed.

o Certain functions from the smartpred package are no longer distributed with the VGAM package. These are lm, glm, predict.lm, predict.mlm, predict.glm. This is done because many users have found they interfere with the VGAM package in unpredictable ways.

o The following VGAM family functions have improved initial values: betabinomial(), cauchy1(), mccullagh89(), negbinomial(), tpareto1(), zipoisson().

o New family functions: alsqreg(), dexpbinomial(), laplace(), poissonp(), seq2binomial(), triangle(dpqr).

o VGAM family functions currently withdrawn: cexpon().

o A new class called "SurvS4" has been prepared. It will be used later to handle VGAM family functions beginning with "cen" that use Surv() as input.

o log1p() used whenever possible.

BUG FIXES

o bisa() did not make use of ishape.

o cao(..., family=gaussianff) failed. It now works, although the dispersion parameter is computed using a slightly different formula.
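  A hedged sketch of the glm()-style response handling noted under 0.7-6
  above, with a factor response (iris is a base R data set):

      library(VGAM)
      fit <- vglm(Species ~ Sepal.Length + Sepal.Width, multinomial, data = iris)
      head(fitted(fit))   # three columns of class probabilities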
CHANGES IN VGAM VERSION 0.7-3

NEW FEATURES

o gpd() now does not delete any data internally. The user should use the subset argument of vglm() and vgam() in order to select any subset of a data frame.

o zapoisson() has a zero argument, and this can be assigned a negative value.

o "partial for" is added to the ylabel of linear terms of a vgam() object when it is plotted.

o When a vgam() object is plotted with se=TRUE and there are linear terms, then the mean of x is added to the plot (this makes the standard error curves meet there).

o This package has been tested (somewhat) under R 2.5.0.

BUG FIXES

o plotvgam() did not work for vgam() objects using the subset argument.

o cao() objects would not show() or print(), at least under R 2.4.1.

o summary(vgam.object) failed if vgam.object was a totally linear model (i.e., no s() term in the formula). Now the "R" slot is assigned for all vgam() objects.

o preplotvgam() had a bug regarding $se.fit of an atomic pred$se.fit.


CHANGES IN VGAM VERSION 0.7-2

NEW FEATURES

o Almost all VGAM family functions now have an earg-type argument to support each link function. This allows parameters specific to each link to be passed in, e.g., VGAMfamilyfunction(link="logoff", earg=list(offset=1)).

o rinv.gaussian() is new.

o New VGAM family functions: morgenstern(), fgm(), gumbelIbiv(), ordpoisson().

o New documentation: powl(), fsqrt().

BUG FIXES

o zanegbinomial()@last had wrong names in misc$link.

o summary(vgam.object) failed to print the anova table.

o summary(cao.object) failed.

CHANGES

o binom2.or() has argument names changed from "lp" to "lmu" etc. This is partly to make it in keeping with other VGAM family functions for binary responses.

o Other VGAM family functions with argument names changed: frank().

o lms.bcn(), lms.bcg(), lms.yjn() arguments have changed order.

o hyper() renamed to hyperg().

o plotvgam() uses ylim if it is inputted.


CHANGES IN VGAM VERSION 0.7-1

NEW FEATURES

o VGAM family functions now require full (name) specification of parameter link functions. For example, binomialff(link=probit) is ok, as is binomialff(link="probit"), but binomialff(link="pr") isn't. VGAM family functions no longer offer a fixed set of link functions, but the user can invoke any, as well as write their own link function.

o Working residuals for vglm() objects are now the default. They used to be deviance residuals, but those are not defined for most VGAM family functions. In the future the default may become "pearson" residuals. For safety, use the type argument, e.g., resid(vglmobject, type="response").

o ITolerances=TRUE is now the default for qrrvglm.control(); consequently, equal tolerances CQO models are fitted. The rationale for this change is that setting ITolerances=TRUE provides fast computational speed as well as the easiest interpretation of the results. Also, rcqo() matches this by having EqualTolerances=TRUE as its default. However, the equal tolerances assumption should be checked.

o New VGAM family functions: tikuv(dpqr), [dpqr]naka(), [dpr]log(), [dpqr]tpareto1(), betabinomial().

o VGAM family functions which have been renamed (and often improved):
      New name           Old name
      --------           --------
      dirmultinomial()   dirmul()
      negbinomial()      negbin.mu()
      negbinomial.ab()   negbin.mn()
      posnegbinomial()   posnegbin.mu()
      zanegbinomial()    zanegbin.mu()
      rposnegbin()       rposnegbin.mu()
      gamma2()           gamma2.ab()
      gamma2mu()         gamma2()

o New functions: lerch(), rcqo() (see the sketch below).
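  A hedged sketch of rcqo() (new above), which simulates data suitable for
  cqo():

      library(VGAM)
      set.seed(1)
      mydata <- rcqo(n = 100, p = 5, S = 6, family = "poisson")
      names(mydata)   # x2,...,x5 and y1,...,y6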
o In the smartpred package, smart.mode.is(mode.arg) now requires mode.arg, if given, to be exactly one of 3 character strings. Also, is.smart(object) handles object being a smart function or a fitted object.

o The VGAM package comes with modified lm, predict.lm, predict.glm, predict.mlm, glm functions---these implement smart prediction, and are current to R version 2.3.1 (2006-06-01).

o The order of the linear/additive predictors for expexp() has been switched.

o weibull(zero=2) is the default now.

o negbin.mu(), posnegbin.mu(), zanegbin.mu(): these have a few added arguments for further flexibility, and some arguments have changed names, e.g., 'k.init' has been changed to 'ik' and 'link.mu' to 'lmu'.

o Negative binomial random variates are now generated using rnbinom() in the stats package rather than rnegbin() in the MASS package.

o binom2.or() and binom2.rho() have more choices for some arguments such as lor and lrho.

o Initial values have been improved for logff(), zipf() and zetaff().

o This package should work for R 2.4.0 after additional tweaks to handle changes in show().

BUG FIXES

o pbetabin() had a bug.

o studentt() has a mean of 0 only if df > 1.

o garma() failed for link="loge". It now works for binary data with the "logit" link.

o Internally, wz <- matrix(NA, ...) changed to wz <- matrix(as.numeric(NA), ...). Ditto for rep(NA, ...) to rep(as.numeric(NA), ...).

o tobit() had a bug in the initialize slot.

o rposnegbin.mu() now calls the MASS library function rnegbin() explicitly.

o gammaff() now works.

o Working residuals for cao() objects were wrong.

o lvplot() for cao() objects has a whichSpecies argument which allows selective plotting of the species' curves.

o gaussianff() did not work with rrvglm(). It now has a loglikelihood slot, and returns deviance residuals for M>1.

CHANGES

o gaussianff() and studentt() have the order of their arguments changed.

o eta2theta(), theta2eta(): if these have a matrix "theta" then they no longer call the VGAM link function one column at a time. Hence VGAM link functions must handle a matrix "theta" using one value of the "earg" argument.

o The earg argument has changed for many VGAM link functions. It is now a list, with component names that are specific to each link function. See the online help files for the list component names. Soon, every VGAM family function that allows a link function will have an earg argument to match it, thus giving maximum flexibility.


**************************************************
*                                                *
*                0.6 SERIES NEWS                 *
*                                                *
**************************************************


CHANGES IN VGAM VERSION 0.6-9

NEW FEATURES

o New VGAM family functions: lino(dpqr), recexp1(), posnormal1(dpqr), betageometric(dpr), [dr]polono(), [dpr]betabin(), gamma2mu(), bisa(dpqr), zipf(dp). There is a new dirmul() (the old one is renamed to dirmul.old()) but it hasn't yet been completed.

o Renamed VGAM family functions: beta2() changed to betaff().

o Renamed VGAM functions: is.a.number() changed to is.Numeric().

o The Windows crossbuild was done under R 2.3.0.

BUG FIXES

o Family functions lognormal() and lognormal3() now include the 1/sqrt(2*pi) constant in @loglikelihood because of their use of dnorm(..., log=TRUE) and dlnorm(..., log=TRUE) (see the check below).

o [dpqr]lognormal() withdrawn as they exist in R already.

o Documentation for betaff() contained mistakes.

o summary() of a betabin.ab() object used to fail.

o The assign statement has been removed from some FORTRAN code.
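  A quick check (sketch) of the dlnorm() constant behind the lognormal()
  loglikelihood change flagged above:

      dlnorm(1, meanlog = 0, sdlog = 1, log = TRUE)  # about -0.9189
      -log(sqrt(2 * pi))                             # the same constant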
CHANGES IN VGAM VERSION 0.6-8

NEW FEATURES

o New VGAM family functions: recnormal1(), recexp1(), paretoIV(dpqr), paretoIII(dpqr), paretoII(dpqr), gammahyp(), benini(dpqr). However, the fitted value (mean) for benini() may be faulty.

o Decommissioned VGAM family functions: gpdold(), ogev(), zipoissonX().

o gumbel.block() renamed to gumbel(), and gumbel() renamed to egumbel().

o Argument names and defaults have changed for: gpd(), egev(), gev(), ogev(), cgumbel(), egumbel(), gumbel(), and weibull(). Also, gpd(), gev() and egev() have had some improvements done internally. Also, rlplot() is new.

o Several family functions have been converted to a new convention whereby ilocation, iscale, ishape arguments are used, and also llocation, lscale, lshape arguments for the link functions etc.

o New link function: nidentity(theta) for the negative-identity: -theta.

o New argument "untransform" in predict() and vcov() for VGLMs.

o For intercept-only models, Coef(fit) returns more user-friendly labelled output.

BUG FIXES

o ppareto() had a bug.

o gpd() had an incorrect second derivative.


CHANGES IN VGAM VERSION 0.6-7

NEW FEATURES

o New VGAM family functions: bilogistic4(dpr), frechet2(), frechet3(), freund61(), frank(dpr), mccullagh89().

o For cao(), df1.nl has a default of 2.5, changed from 2.0 before.

o For vglm(), vgam() etc., diagonal elements of the working weight matrices that are less than .Machine$double.eps^0.75 are replaced by this value. The arguments checkwz and wzepsilon support this feature.

o More documentation on: fill() [re. the xij argument], cauchy1().

o logistic2() now uses Fisher scoring.

o Argument init.method changed to method.init in several family functions.

o Any non-smart-prediction use of smartpredenv has been changed to VGAMenv.

BUG FIXES

o rayleigh() was not in the NAMESPACE.

o logistic1() and logistic2() had wrong first derivatives and loglikelihood function. logistic1() offers some choice of link function for the location parameter.


CHANGES IN VGAM VERSION 0.6-6

NEW FEATURES

o New functions: zibinomial(), zibinom(dpqr), posbinom(dpqr), mix2normal1(), mix2poisson(), dsnorm(), rsnorm(), cexpon(), cgumbel(), cnormal1(), hyper().

o New generic functions: is.bell() works for RR-VGLMs, QRR-VGLMs and RR-VGAMs (CLO, CQO and CAO, respectively).

o normal1() has a new (first) argument: lmean for the mean.

o Documentation for skewnormal1() and snorm(dr).

BUG FIXES

o tobit() now implements Fisher scoring properly.

o Coef.vlm() needed to test for trivial constraints.

o skewnorm1() had a bug in it. It has been fixed and renamed to skewnormal1().

o cao() had a problem with the variable "usethiseta" when it had possible NAs.

o An appropriate error message is given if residuals=TRUE in the call to @loglikelihood, for all VGAM family functions.

o Two unneeded lines in rgam.f have been removed.


CHANGES IN VGAM VERSION 0.6-5

NEW FEATURES

o New functions: guplot() (see the sketch below), meplot(), ggamma(dpqr), fff(), vonmises(), lgamma3ff(), lgamma(dpqr), prentice74(), tobit(), zipoisson(dpqr), [dpqr]pospois(), [dpqr]laplace but there is no laplace().

o cqo() has been largely rewritten. It now sports a new algorithm for ITolerances=TRUE. It can handle large data sets (e.g., 1000 sites with 100 species). Compared to other cqo() options, it is the fastest. There are a few things to learn though, to take full advantage of the new algorithm, e.g., centering the variables.

o The Windows version is cross-built with R 2.2.0. The Linux version has been tested with R 2.2.0.
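  A hedged sketch of guplot() (flagged under 0.6-5 above), using the
  venice sea-level data shipped with VGAM:

      library(VGAM)
      guplot(venice$r1, main = "Gumbel plot of Venice sea levels")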
o cao() has been largely rewritten. It now should not hang in the Windows cross-build version.

o .Init.Poisson.CQO() has been renamed .Init.Poisson.QO(), and also improved (however, it uses more memory by default).

o Modelling functions such as vglm(), vgam() and cao() have qr.arg=FALSE now. This means object sizes can be a lot smaller.

o The functions positive.poisson(), positive.binomial() etc. have been renamed pospoisson(), posbinomial() etc.

o The functions [dpqr]gpd now have a location=0 argument.

o Some VGAM family functions will be adapted later to use the BFGS quasi-Newton update for their working weight matrices.

o The link function logoff() now works, for constant offsets. Link functions had the argument "extra"; this is now called "earg" to avoid confusion with the argument "extra" used in vglm() etc. Also, elogit() is new, which allows a parameter to lie between two values (A,B), say.

BUG FIXES

o plotvgam() was incorrect if one of the terms (but not the first) was "x" or a function of "x" such as bs(x).

o smart.expression now handles multiple 'arguments' by choosing the first, which is the smart function name.

o lv(rrvglm.object) failed.


CHANGES IN VGAM VERSION 0.6-4

NEW FEATURES

o New family functions: betabin.ab(), betaprime(), dcnormal1(), erlang(), expexp(), inv.gaussianff(), maxwell(), mckaygamma2(), nakagami(), pareto1(), rayleigh(), wald(). Of these, Pareto, Rayleigh and Maxwell have random number generation etc.

o If criter="coef" and trace=TRUE, then the number of decimal places used to print the estimated coefficients at each iteration is proportional to the control constant epsilon.

o tanl() has been renamed to cauchit(), and appropriate family functions reflect this change, i.e., link="cauchit" instead of link="tanl".

o size.binomial() has been improved.

o Documentation for gamma1(), gamma2().

BUG FIXES

o The documentation for the reverse argument in cumulative(), cratio(), etc. was incorrect.

o vcov() didn't work on the Windows version.

o cao() still hangs under the Windows version, so hopefully this bug will be fixed soon!


CHANGES IN VGAM VERSION 0.6-3

NEW FEATURES

o Built with R 2.1.0 for the .zip file (Windows version) and deposited in the right directory at www.stat.auckland.ac.nz.

o More documentation, e.g., fitted(), yeo.johnson(), dirmul().

o zeta() and zetaff() have been improved and/or corrected.

o The family functions binomial, poisson, quasibinomial, quasipoisson, gaussian, inverse.gaussian, Gamma have been withdrawn because of interference with glm().


CHANGES IN VGAM VERSION 0.6-2

NEW FEATURES

o model.frame() and model.matrix() are roughly working for objects that inherit from "vlm"s, e.g., "vglm" objects. Both of these methods functions accept a "data" argument etc. Also, for these, smart prediction works.

o A methods function for the generic function weights() has been written for VGLM objects. It returns either the prior or working weights (see the sketch below).

BUG FIXES

o The Crow1positive argument in cao() did not function correctly.

o The family functions dagum, fisk, lomax, invlomax, paralogistic, invparalogistic, lognormal were not exported in the NAMESPACE file.

o Functions in gaut.c and mux.c used "long" to represent integers. In R, these should be "int". Although these are equivalent on 32-bit machines, they differ on 64-bit machines and crash. The files are now renamed to gautr.c and muxr.c in R.

o summary(cao.object) failed.
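  A hedged sketch of the weights() methods flagged above:

      library(VGAM)
      pneumo <- transform(pneumo, let = log(exposure.time))
      fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
      head(weights(fit, type = "prior"))    # prior weights
      head(weights(fit, type = "working"))  # working weights (matrix-band format)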
CHANGES IN VGAM VERSION 0.6-1

NEW FEATURES

o New functions: cao() for "constrained additive ordination", and uqo() for "unconstrained quadratic ordination". Both of these are unfinished but will hopefully be completed in the foreseeable future.

o The function cgo() has been renamed to cqo(). Ouch! CQO stands for "constrained quadratic ordination", and is better than the old name cgo(), for canonical Gaussian ordination.

o The inverse() link function has been renamed to reciprocal().

o More documentation: loglinb2() and loglinb3().

o zipbipp() renamed to zapoisson(), where "za" stands for "zero-altered". This is more in line with the literature. New families: zanegbin.mu, positive.negbin.mu. New random variates: rposnegbin.mu, rpospois.

o negbin.mu() works now for cgo(). The subsequent methods functions have been adapted to work on it too. However, negbin.mu() is not recommended because maximum likelihood estimation of the index parameter is fraught numerically. It is better to use quasipoissonff().

o cgo() now uses the function .Init.Poisson.CGO() to obtain initial values for the canonical coefficients, C. The argument Use.Init.Poisson.CGO in qrrvglm.control() now controls this feature.

o Lazy loading has been enabled for the VGAM package.

o Name spaces have been introduced into the VGAM package. The consequences of this might be far reaching for code heavily based on the internals of the VGAM package.

o The application of name spaces means "ff" can be dropped from certain family functions. In particular, poisson() can be used instead of poissonff(), and binomial() instead of binomialff(). Ditto for quasipoissonff() and quasibinomialff().

o names.of() changed to namesof(). Many other function names have been changed, particularly those of the S3 classes such as coef.something, e.g., coef.vlm to coefvlm. In general, S3 methods functions such as print.summary.vlm have the first "." deleted, but classes such as "summary.vlm" retain the ".", and the function is printsummary.vlm.

BUG FIXES

o Some documentation regarding the negative binomial distribution was wrong.

o The digamma function in FORTRAN was buggy.

o gumbel.block() now handles a vector response (equivalently, a one-column matrix) and the deviance has been decommissioned. Instead, the log-likelihood is computed.


**************************************************
*                                                *
*                0.5 SERIES NEWS                 *
*                                                *
**************************************************


CHANGES IN VGAM VERSION 0.5-24

NEW FEATURES

o zipbipp() and zipoissonX() are new alternatives to yip88(). They fit a zero-inflated Poisson distribution. Both can handle covariates for both parameters (p0 or phi, and lambda). zipbipp() is recommended over the others. zipoissonX() is experimental at this stage and should be used with caution. rpospois() is new.

o More documentation: rhobit and binom2.rho.

o binom2.or() now has lp1 and lp2 arguments, which allow a different link function for each of the two marginal probabilities.

o bratt() is a new family function. It fits the Bradley-Terry model with ties.

o flush.console() is used if it exists. This makes the Windows version nicer for large data sets and when trace=TRUE is used.

o wweights() extracts the working weights of an object. It used to be called vweights().


CHANGES IN VGAM VERSION 0.5-23

NEW FEATURES

o The package works under the latest version, viz. 2.0.0. There are fewer warning messages when checking :)

o persp() for CGO objects now handles Rank=1 models.


CHANGES IN VGAM VERSION 0.5-22

BUG FIXES

o plot.vgam(..., overlay=TRUE, which.cf=1:2) was incorrect.

NEW FEATURES

o demo files are now available for VGAM. These include lmsqreg, distributions, and cgo (see below). More will be added later.
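  The demos mentioned above can be run in the usual way (sketch; demo
  names may have changed in later versions):

      demo(package = "VGAM")                   # list the available demos
      demo("distributions", package = "VGAM")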
CHANGES IN VGAM VERSION 0.5-21

BUG FIXES

o .Rd files adapted to reflect new changes in the library names.

o persp.qrrvglm() argument whichSpecies was faulty.

o gpd()@inverse returned erroneous centiles.

o Coef(cgo(..., FastAlgorithm=TRUE)) produced wrong results.

NEW FEATURES

o cgo(..., FastAlgorithm=TRUE) has been fine-tuned to give greater speed and accuracy.

o lms.yjn() uses FORTRAN code to implement the Gauss-Legendre algorithm. This results in greater accuracy.

o More documentation, especially for family functions for extreme value modelling.


CHANGES IN VGAM VERSION 0.5-20

BUG FIXES

o vglm(y ~ x, binomialff(link=tanl)) used to fail.

o The CHECK command failed previously, but now it only gives 5 warnings.

NEW FEATURES

o persp.qrrvglm() has been written to apply persp() to a rank-2 CGO model.

o cgo(..., FastAlgorithm=TRUE) now has a logical argument GradientFunction, which if TRUE (the default) computes the derivatives using finite-difference approximations. The default generally increases the speed.


CHANGES IN VGAM VERSION 0.5-19

BUG FIXES

o garma() did not coerce the model matrix into the correct class.

o fisherz() could not work out the inverse.

NEW FEATURES

o trplot() is a new generic function, and for objects of class "qrrvglm" (a cgo() object) it produces a trajectory plot for species (see the sketch below).

o vcov.qrrvglm() now computes standard errors and returns the variance-covariance matrix for rank-1 QRR-VGLMs.

o A new fast algorithm is implemented for cgo(..., FastAlgorithm=TRUE), which only works under Windows. It is a new undocumented algorithm.

o New family functions: lognormal(), lognormal3(), weibull().

o New family functions: genbetaII(), betaII(), sinmad(), dagum(), lomax(), invlomax(), fisk(), invparalogistic(), paralogistic(). Additionally, d*, r*, p* and q* forms of the density/random-generation etc. functions for all of these except for betaII and genbetaII.

o New link function for (0,1) parameters: tanl() for the tan link. It has heavier tails and corresponds to a Cauchy distribution (cf. probit for the normal).

o New family function: brat() for the Bradley-Terry model (intercept model only).


CHANGES IN VGAM VERSION 0.5-18

NEW FEATURES

o I've changed deplot.lmscreg() so that the "at" argument is now "y.arg", and the density is returned with name "density" instead of "y". That is, "at" is now "y", and "y" is now "density".

o lvplot.rrvglm() and biplot.rrvglm() have been merged and are now equivalent.


CHANGES IN VGAM VERSION 0.5-17

BUG FIXES

o The Bestof argument in cgo() and rrvglm() was faulty.

o Bug in plot.vgam(type.resid) fixed.

NEW FEATURES

o Updated to work under R 1.8.1.

o logLik() and AIC() methods functions supported for many VGAM objects.

o lms.bcn.control(), lms.bcg.control(), lms.yjn.control() now have trace=TRUE because monitoring LMS quantile regression models is a good idea.

o lms.bcn(), lms.bcg(), lms.yjn() now improved.


CHANGES IN VGAM VERSION 0.5-16

BUG FIXES

o biplot.rrvglm() had an internal bug with @C.

o Runs under R 1.8.0 now, having a fix with the "qr" slot.

o The etastart, coefstart, mustart arguments were not functional in vgam().

o vchol() did not replace the correct elements; sometimes the index was out of subscript range.

o residuals.vlm() tried to evaluate a deviance slot in a "vglmff" object even when it was empty.

o Documentation links to functions in other packages now work.
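  A hedged sketch of trplot() (0.5-19 above) with modern argument names,
  using the hspider data shipped with VGAM; cqo() fits are stochastic, so
  a seed is set:

      library(VGAM)
      set.seed(111)
      hspider[, 1:6] <- scale(hspider[, 1:6])  # standardize the environment
      fit <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
                       Auloalbi) ~ WaterCon + BareSand + FallTwig +
                       CoveMoss + CoveHerb + ReflLux,
                 poissonff, data = hspider, Crow1positive = FALSE)
      trplot(fit, which.species = 1:3, log = "xy")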
  rrvglm.control() has now been split into qrrvglm.control() and
  itself.

o cgo() now performs canonical Gaussian ordination.

CHANGES IN VGAM VERSION 0.5-15

BUG FIXES

o Coef.qrrvglm() failed wrt Equal.tolerances and Circular when
  Rank>2.

NEW FEATURES

o gco() is now an easier interface for fitting Gaussian canonical
  ordination models. gco(...) is essentially
  rrvglm(..., Quadratic=TRUE).

o Documentation for deplot.lmscreg(), qtplot.lmscreg(), cdf.lmscreg()
  and related functions. Also for positive.poisson(),
  positive.binomial() and yip88().

o lvplot.qrrvglm() improved to handle non-diagonal tolerance matrices,
  and a new Rotate option is available for QRR-VGLMs.

o By default, QRR-VGLMs now have the constraint that the latent
  variables are uncorrelated and have unit variances, i.e., their
  variance-covariance matrix is diag(Rank). Also, the Crow1positive
  argument allows ordinations to be reflected across axes.

CHANGES IN VGAM VERSION 0.5-14

BUG FIXES

o vgam() with s() terms and subset= used to trigger a bug because the
  attributes of s() were lost.

o summary() of a gaussianff object was faulty because
  control.gaussianff() was called gaussianff.control().

NEW FEATURES

o install.packages("VGAM", CRAN="http://www.stat.auckland.ac.nz/~yee")
  now works for PC and Linux/Unix, i.e., the distribution of the VGAM
  package allows for this type of download.

o poissonff(), quasipoissonff(), binomialff() and quasibinomialff()
  now handle multiple dispersion parameters when mv=TRUE and
  onedpar=FALSE.

o Generic function predictx(), with a methods function for "qrrvglm"
  objects. This solves (with limited functionality) the calibration
  problem.

o predict.qrrvglm() and predict.rrvglm() have been written (but they
  do not work 100%).

o Coef.rrvglm() now returns an S4 object, which can be printed nicely.

o summary.qrrvglm() has been improved.

o Documentation for poissonff(), quasipoissonff(), binomialff() and
  quasibinomialff().

CHANGES IN VGAM VERSION 0.5-13

BUG FIXES

o Code with T and F now uses TRUE and FALSE.

NEW FEATURES

o Documentation for lms.bcn(), lms.bcg(), lms.yjn(), and bmi.
  Additionally, the overall documentation has been improved
  throughout.

o print.Coef.qrrvglm prints the contents of Coef(qrrvglm.object) in a
  nicer format. It uses S4 features.

CHANGES IN VGAM VERSION 0.5-12

BUG FIXES

o The package now works under R 1.7.1. This includes the smart
  prediction library.

o dirichlet(), skewnorm1(), geometric(), gamma2() and erlang() had a
  bug that has been fixed.

NEW FEATURES

o Documentation for beta2() and dirichlet().

o Easier installation; use something like
  "R CMD INSTALL -l ./myRlibs VGAM_0.5-12.tar.gz" for a local library.

CHANGES IN VGAM VERSION 0.5-11

BUG FIXES

o The code has been upgraded to work under R 1.7.0 because of the
  calls to LAPACK and object-oriented features.

NEW FEATURES

o levy() added, plus grc() documentation.

o constraints added to binomialff() and poissonff() since they both
  handle multivariate responses.

CHANGES IN VGAM VERSION 0.5-10

BUG FIXES

o Many univariate family functions had a faulty loglikelihood slot.

o negbin.mu() was faulty, causing very slow convergence.

o Coef.vglm() had a bug due to "fit" rather than "object".

NEW FEATURES

o logff() added.

o The undocumented backchat facility now works for Splus 6.x. This
  should increase the efficiency of vglm() in particular. Thanks to
  Insightful and Dr J. Chambers for helping to get it going under the
  S4 engine.

CHANGES IN VGAM VERSION 0.5-9

BUG FIXES

o binomialff() had a bug in @weight.

o binomialff(mv=T) used to fail.
o gev(), ogev() and egev() had faulty @loglikelihood slots.

NEW FEATURES

o .Rd documentation included for vglm(), vgam(), rrvglm(), and
  associated control and class functions, plus smart prediction.

CHANGES IN VGAM VERSION 0.5-8

NEW FEATURES

o rrvglm() now has a Quadratic argument to implement the class of
  Quadratic Reduced-rank VGLMs, which gives maximum likelihood
  solutions to Gaussian canonical ordination problems. Documentation
  is in rrvglm.pdf.

CHANGES IN VGAM VERSION 0.5-7

NEW FEATURES

o vglm() now has an xij argument which implements eta-specific
  covariates. Documentation is supplied on the VGAM website.

o grc() has been written for Goodman's RC association model for a
  contingency table. Documentation is in rrvglm.pdf.

VGAM/data/

[The data/ directory contains the following compressed data files;
their binary contents are not reproducible as text and are omitted
here: gew.txt.gz, oxtemp.txt.gz, bmi.nz.txt.xz, ucberk.txt.gz,
beggs.rda, finney44.rda, crashmc.rda, crashf.rda, ducklings.rda,
olym08.txt.gz, backPain.txt.gz, melbmaxtemp.rda, hunua.txt.bz2,
crashtr.rda, V1.txt.gz, crashp.rda, alclevels.rda, crashbc.rda,
auuc.rda, leukemia.rda, crashi.rda, lakeO.rda, coalminers.txt.gz,
toxop.rda, venice.rda, enzyme.txt.gz, venice90.rda, prinia.rda,
car.all.rda, Huggins89.t1.rda, machinists.txt.gz, wine.rda,
chinese.nz.txt.gz, flourbeetle.rda, hspider.rda, pneumo.rda,
hormone.txt.bz2, alcoff.rda, deermice.rda, ruge.rda, marital.nz.rda,
chest.nz.txt.bz2.]
warning("Some weight matrices have negative ", "eigenvalues. They will be assigned NAs") temp[,!index] <- 1 } WW <- mux55(evects, temp, M = M) WW[,!index] <- NA WW } ResSS.vgam <- function(z, wz, M) { if (M == 1) return(sum(c(wz) * c(z^2))) wz.z <- mux22(t(wz), z, M = M, as.matrix = TRUE) sum(wz.z * z) } wweighted.mean <- function(y, w = NULL, matrix.arg = TRUE) { if (!matrix.arg) stop("currently, argument 'matrix.arg' must be TRUE") y <- as.matrix(y) M <- ncol(y) n <- nrow(y) if (M == 1) { if (missing(w)) mean(y) else sum(w * y) / sum(w) } else { if (missing(w)) y %*% rep(1, n) else { numer <- mux22(t(w), y, M, as.matrix = TRUE) numer <- t(numer) %*% rep(1, n) denom <- t(w) %*% rep(1, n) denom <- matrix(denom, 1, length(denom)) if (matrix.arg) denom <- m2a(denom, M = M)[, , 1] c(solve(denom, numer)) } } } veigen <- function(x, M) { n <- ncol(x) index <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) dimm.value <- nrow(x) # usually M or M(M+1)/2 z <- .Fortran("veigenf", as.integer(M), as.integer(n), as.double(x), values = double(M * n), as.integer(1), vectors = double(M*M*n), double(M), double(M), wk = double(M*M), as.integer(index$row), as.integer(index$col), as.integer(dimm.value), error.code = integer(1)) if (z$error.code) stop("eigen algorithm (rs) returned error code ", z$error.code) ord <- M:1 dim(z$values) <- c(M, n) z$values <- z$values[ord, , drop = FALSE] dim(z$vectors) <- c(M, M, n) z$vectors <- z$vectors[, ord, , drop = FALSE] return(list(values = z$values, vectors = z$vectors)) } ima <- function(j, k, M) { if (length(M) > 1 || M <= 0 || j <= 0 || k <= 0 || j > M || k > M) stop("input wrong in ima()") m <- diag(M) m[col(m) <= row(m)] <- 1:(M*(M+1)/2) if (j >= k) m[j, k] else m[k, j] } checkwz <- function(wz, M, trace = FALSE, wzepsilon = .Machine$double.eps^0.75) { if (wzepsilon > 0.5) warning("argument 'wzepsilon' is probably too large") if (!is.matrix(wz)) wz <- as.matrix(wz) wzsubset <- wz[, 1:M, drop = FALSE] if (any(is.na(wzsubset))) stop("NAs found in the working weights variable 'wz'") if (any(!is.finite(wzsubset))) stop("Some elements in the working weights variable 'wz' are ", "not finite") if ((temp <- sum(wzsubset < wzepsilon))) warning(temp, " diagonal elements of the working weights variable ", "'wz' have been replaced by ", signif(wzepsilon, 5)) wz[, 1:M] <- pmax(wzepsilon, wzsubset) wz } label.cols.y <- function(answer, colnames.y = NULL, NOS = 1, percentiles = c(25, 50, 75), one.on.one = TRUE, byy = TRUE) { if (!is.matrix(answer)) answer <- as.matrix(answer) if (one.on.one) { colnames(answer) <- if (length(colnames.y) == ncol(answer)) colnames.y else NULL return(answer) } if (is.null(percentiles)) percentiles <- c(25, 50, 75) # Restore to the default if (!is.Numeric(percentiles) || min(percentiles) <= 0 || max(percentiles) >= 100) stop("values of 'percentiles' should be in [0, 100]") percentiles <- signif(percentiles, digits = 5) ab1 <- rep(as.character(percentiles), length = ncol(answer)) ab1 <- paste(ab1, "%", sep = "") if (NOS > 1) { suffix.char <- if (length(colnames.y) == NOS) colnames.y else as.character(1:NOS) ab1 <- paste(ab1, rep(suffix.char, each = length(percentiles)), sep = "") } colnames(answer) <- ab1 if (byy) { answer <- answer[, interleave.VGAM(.M = NCOL(answer), M1 = NOS), # length(percentiles)), drop = FALSE] } answer } VGAM/R/vlm.wfit.q0000644000176200001440000001357313135276760013160 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
vlm.wfit <-
  function(xmat, zmat, Hlist, wz = NULL, U = NULL,
           matrix.out = FALSE, is.vlmX = FALSE, ResSS = TRUE,
           qr = FALSE, x.ret = FALSE,
           offset = NULL,
           omit.these = NULL, only.ResSS = FALSE,
           ncolx = if (matrix.out && is.vlmX) {
                     stop("need argument 'ncolx'")
                   } else {
                     ncol(xmat)
                   },
           xij = NULL,
           lp.names = NULL, Eta.range = NULL, Xm2 = NULL,
           Xvlm.aug = NULL,
           sm.osps.list = NULL,
           constraints = NULL,
           first.sm.osps = FALSE,
           control = list(),  # This is vgam.control()
           trace = FALSE,
           ...) {

  mgcvvgam <- length(sm.osps.list)
  fixspar <- unlist(sm.osps.list$fixspar)

  missing.Hlist <- missing(Hlist)
  zmat <- as.matrix(zmat)
  n <- nrow(zmat)
  M <- ncol(zmat)
  if (!only.ResSS) {
    contrast.save <- attr(xmat, "contrasts")
    znames <- dimnames(zmat)[[2]]
  }

  if (length(offset)) {
    zmat <- zmat - offset
  }
  if (missing(U) || !length(U)) {
    U <- vchol(wz, M = M, n = n, silent = FALSE)
  }
  dU <- dim(U)
  if (dU[2] != n) {
    stop("input unconformable")
  }

  X.vlm.save <- if (is.vlmX) {
    xmat
  } else {
    if (missing.Hlist || !length(Hlist)) {
      Hlist <- replace.constraints(vector("list", ncol(xmat)),
                                   diag(M), 1:ncol(xmat))  # NULL
    }
    lm2vlm.model.matrix(x = xmat, Hlist = Hlist, M = M,
                        assign.attributes = FALSE,
                        xij = xij, Xm2 = Xm2)
  }

  X.vlm <- mux111(U, X.vlm.save, M = M)
  z.vlm <- mux22(U, zmat, M = M, upper = TRUE, as.matrix = FALSE)

  if (length(omit.these)) {
    X.vlm <- X.vlm[!omit.these, , drop = FALSE]
    z.vlm <- z.vlm[!omit.these]
  }

  if (mgcvvgam) {
    # The matrix components of m.objects have colns reordered, for magic().
    m.objects <- psv2magic(x.VLM = X.vlm,
                           constraints = constraints,
                           spar.vlm = attr(Xvlm.aug, "spar.vlm"),
                           sm.osps.list = sm.osps.list)
    fixspar <- rep_len(fixspar, length(m.objects$sp))
    if (FALSE && trace) {
      cat("m.objects$sp \n")
      print( m.objects$sp )
      cat("m.objects$OFF \n")
      print( m.objects$OFF )
      flush.console()
    }

    if (first.sm.osps) {
    }
    magicfit <- mgcv::magic(y = z.vlm,
                            X = m.objects$x.VLM.new,  # Cols reordered if necessary
                            sp = m.objects$sp,
                            S = m.objects$S.arg,
                            off = m.objects$OFF,
                            gamma = control$gamma.arg,
                            gcv = FALSE)
    SP <- ifelse(fixspar, m.objects$sp, magicfit$sp)
    if (FALSE && trace) {
      cat("SP \n")
      print( SP )
      flush.console()
    }
    magicfit$sp <- SP  # Make sure; 20160809

    length.spar.vlm <- sapply(attr(Xvlm.aug, "spar.vlm"), length)  # spar.new
    sp.opt <- vector("list", length(length.spar.vlm))  # list()
    iioffset <- 0
    for (ii in seq_along(length.spar.vlm)) {
      sp.opt[[ii]] <- SP[iioffset + 1:length.spar.vlm[ii]]
      iioffset <- iioffset + length.spar.vlm[ii]
    }
    names(sp.opt) <- names(sm.osps.list$which.X.sm.osps)
    if (FALSE && trace) {
      cat("sp.opt \n")
      print( sp.opt )
      flush.console()
    }
    sm.osps.list$sparlist <- sp.opt

    Xvlm.aug <- get.X.VLM.aug(constraints  = constraints,
                              sm.osps.list = sm.osps.list)
    first.sm.osps <- FALSE
    X.vlm <- rbind(X.vlm, Xvlm.aug)
    z.vlm <- c(z.vlm, rep(0, nrow(Xvlm.aug)))
  }

  ans <- lm.fit(X.vlm, y = z.vlm, ...)
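  # The fit above solves the penalized least-squares problem directly:
  # the rows of Xvlm.aug (square-root contributions of the O'Sullivan
  # spline penalties) were appended to X.vlm with matching zero
  # pseudo-responses in z.vlm, so one ordinary lm.fit() call suffices;
  # the head() calls below then drop those augmentation rows from the
  # results.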
if (mgcvvgam) { ans$residuals <- head(ans$residuals, n*M) ans$effects <- head(ans$effects, n*M) ans$fitted.values <- head(ans$fitted.values, n*M) ans$qr$qr <- head(ans$qr$qr, n*M) } if (ResSS) { ans$ResSS <- sum(ans$resid^2) if (only.ResSS) return(list(ResSS = ans$ResSS)) } if (length(omit.these) && any(omit.these)) { stop("code beyond here cannot handle omitted observations") } fv <- ans$fitted.values dim(fv) <- c(M, n) fv <- vbacksub(U, fv, M = M, n = n) # Have to premultiply fv by U if (length(Eta.range)) { if (length(Eta.range) != 2) { stop("length(Eta.range) must equal 2") } fv <- ifelse(fv < Eta.range[1], Eta.range[1], fv) fv <- ifelse(fv > Eta.range[2], Eta.range[2], fv) } ans$fitted.values <- if (M == 1) c(fv) else fv if (M > 1) { dimnames(ans$fitted.values) <- list(dimnames(zmat)[[1]], znames) } ans$residuals <- if (M == 1) c(zmat-fv) else zmat-fv if (M > 1) { dimnames(ans$residuals) <- list(dimnames(ans$residuals)[[1]], znames) } ans$misc <- list(M = M, n = n) ans$call <- match.call() ans$constraints <- Hlist ans$contrasts <- contrast.save if (mgcvvgam) { ans$first.sm.osps <- first.sm.osps # Updated. ans$sm.osps.list <- sm.osps.list # Updated wrt "sparlist" component ans$Xvlm.aug <- Xvlm.aug # Updated matrix. ans$magicfit <- magicfit # Updated. } if (x.ret) { ans$X.vlm <- X.vlm.save } if (!is.null(offset)) { ans$fitted.values <- ans$fitted.values + offset } if (!matrix.out) { return(ans) } dx2 <- if (is.vlmX) NULL else dimnames(xmat)[[2]] B <- matrix(NA_real_, nrow = M, ncol = ncolx, dimnames = list(lp.names, dx2)) if (is.null(Hlist)) { Hlist <- replace.constraints(vector("list", ncolx), diag(M), 1:ncolx) } ncolHlist <- unlist(lapply(Hlist, ncol)) temp <- c(0, cumsum(ncolHlist)) for (ii in 1:ncolx) { index <- (temp[ii]+1):(temp[ii+1]) cm <- Hlist[[ii]] B[, ii] <- cm %*% ans$coef[index] } ans$mat.coefficients <- t(B) ans } # vlm.wfit VGAM/R/sm.os.R0000644000176200001440000002005413135276760012402 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. sm.os <- function(x, ..., niknots = 6, # NULL if 'alg.niknots' is to be used. 
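                  # (editor's note: 'niknots' counts *interior* knots;
                  #  boundary knots are added automatically further below)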
spar = -1, # was 0 prior to 20160810 o.order = 2, alg.niknots = c("s", ".nknots.smspl")[1], all.knots = FALSE, # 20161013 ridge.adj = 1e-5, spillover = 0.01, maxspar = 1e12, outer.ok = FALSE, fixspar = FALSE) { niknots.orig <- niknots if (all.knots && length(niknots)) warning("ignoring 'niknots' because 'all.knots = TRUE'") Penalty.os <- function(a, b, intKnots, o.order = 2) { if (any(diff(intKnots) <= 0)) stop("argument 'intKnots' must be sorted in increasing order") if (length(unique(intKnots)) != (KK <- length(intKnots))) stop("argument 'intKnots' must have unique values") if (KK == 0) stop("no interior knots (intKnots == 0)") allKnots <- c(rep(a, 2 * o.order), intKnots, rep(b, 2 * o.order)) mkmat <- matrix(c(1, rep(NA, 6), 1/3, 4/3, 1/3, rep(NA, 4), 14/45, 64/45, 8/15, 64/45, 14/45, NA, NA, 41/140, 54/35, 27/140, 68/35, 27/140, 54/35, 41/140), 4, 7, byrow = TRUE) vec.ell <- 1:(KK + 4 * o.order - 1) # length(allKnots) - 1 vec.ellp <- 0:(2 * o.order - 2) hmell <- if (o.order == 1) diff(allKnots) else diff(allKnots) / (2 * o.order - 2) xtilde <- wts <- numeric((2*o.order - 1) * (KK * 4*o.order - 1)) index1 <- (2*o.order - 1) * (vec.ell - 1) + 1 for (ellp in vec.ellp) { xtilde[index1 + ellp] <- hmell * ellp + allKnots[vec.ell] wts[index1 + ellp] <- hmell * mkmat[o.order, ellp + 1] } Bdd <- splineDesign(allKnots, xtilde, ord = 2 * o.order, derivs = rep(o.order, length(xtilde)), outer.ok = TRUE) Omega <- crossprod(Bdd * wts, Bdd) attr(Omega, "allKnots") <- allKnots Omega } xs <- substitute(x) ans <- as.character(xs) x.index <- as.vector(x) alg.niknots <- match.arg(alg.niknots, c("s", ".nknots.smspl"))[1] if (!is.Numeric(o.order, length.arg = 1, integer.valued = TRUE, positive = TRUE) || o.order > 4) stop("argument 'o.order' must be one value from the set 1:4") x.orig <- x.index xdots <- list(...) 
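  # Editor's note: any extra arguments in '...' arise from the xij
  # facility (multiple versions of the same covariate); they are pooled
  # into x.index below so that the knot range covers every version.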
uses.xij <- length(xdots) > 0 if (uses.xij) x.index <- as.vector(c(x.index, unlist(xdots))) xl <- min(x.index) xr <- max(x.index) if (smart.mode.is("read")) { smartlist <- get.smart() xl <- smartlist$xl # Overwrite its value xr <- smartlist$xr # Overwrite its value alg.niknots <- smartlist$alg.niknots # Ditto spar <- smartlist$spar o.order <- smartlist$o.order all.knots <- smartlist$all.knots ridge.adj <- smartlist$ridge.adj spillover <- smartlist$spillover maxspar <- smartlist$maxspar maXX <- smartlist$maXX Cmat <- smartlist$Cmat intKnots <- smartlist$intKnots outer.ok <- smartlist$outer.ok fixspar <- smartlist$fixspar } else { intKnots <- NULL maXX <- NULL Cmat <- NULL } xmax <- xr + spillover * (xr - xl) xmin <- xl - spillover * (xr - xl) nx <- names(x.index) nax <- is.na(x.index) if (nas <- any(nax)) x.index <- x[!nax] usortx <- unique(sort(as.vector(x.index))) neff <- length(usortx) if (neff < 2) { stop("not enough unique 'x' values (need 2 or more)") } noround <- TRUE # Improvement 20020803 if (all.knots) { xbar <- (usortx - usortx[1]) / (usortx[neff] - usortx[1]) knot <- if (noround) { valid.vknotl2(c(rep_len(xbar[ 1], 2 * o.order - 1), # 3 xbar, rep_len(xbar[neff], 2 * o.order - 1))) # 3 } else { c(rep_len(xbar[ 1], 2 * o.order - 1), xbar, rep_len(xbar[neff], 2 * o.order - 1)) } if (length(niknots.orig)) { warning("overriding 'niknots' by 'all.knots = TRUE'") } niknots <- length(knot) - 2 * o.order # TWYee } else if (is.null(niknots.orig)) { xbar <- (usortx - usortx[1]) / (usortx[neff] - usortx[1]) if (alg.niknots == "s") { chosen <- length(niknots) if (chosen && (niknots > neff + 2 || niknots <= 5)) { stop("bad value for 'niknots'") } if (!chosen) { niknots <- 0 } knot.list <- .C("vknootl2", as.double(xbar), as.integer(neff), knot = double(neff + 4 * o.order - 2), # (neff+6), zz unsure k = as.integer(niknots + 2 * o.order), # (niknots+4), zz unsure chosen = as.integer(chosen)) if (noround) { knot <- valid.vknotl2(knot.list$knot[1:(knot.list$k)]) knot.list$k <- length(knot) } else { knot <- knot.list$knot[1:(knot.list$k)] } niknots <- knot.list$k - 2 * o.order # TWYee } else { niknots <- .nknots.smspl(neff) } } # !all.knots if (!is.Numeric(niknots, positive = TRUE, integer.valued = TRUE, length.arg = 1)) { stop("bad value of 'niknots'") } numIntKnots <- niknots if (is.null(intKnots)) intKnots <- quantile(usortx, # unique(x.index), probs = seq(0, 1, length = numIntKnots + 2)[ -c(1, numIntKnots + 2)]) Basis <- bs(x, knots = intKnots, degree = 2 * o.order - 1, # 3 by default Boundary.knots = c(a = xmin, b = xmax), # zz not sure intercept = TRUE) n.col <- ncol(Basis) if (nas) { nmat <- matrix(NA_real_, length(nax), n.col) nmat[!nax, ] <- Basis Basis <- nmat } dimnames(Basis) <- list(1:nrow(Basis), 1:n.col) fixspar <- rep_len(fixspar, max(length(fixspar), length(spar))) spar <- rep_len( spar, max(length(fixspar), length(spar))) if (any(spar < 0 & fixspar)) { spar[spar < 0 & fixspar] <- 0 warning("some 'spar' values are negative : have used 'spar' = ", paste(spar, collapse = ", ")) } if (any(maxspar < spar)) { spar[maxspar < spar] <- maxspar warning("some 'spar' values are > ", maxspar, ": ", "for stability have used 'spar' = ", paste(spar, collapse = ", ")) } pen.aug <- Penalty.os(a = xmin, b = xmax, intKnots, o.order = o.order) allKnots <- attr(pen.aug, "allKnots") # Retrieved if (is.null(maXX)) maXX <- mean(abs(crossprod(Basis))) maS <- mean(abs(pen.aug)) / maXX pen.aug <- pen.aug / maS kk <- ncol(Basis) if (is.null(Cmat)) Cmat <- matrix(colSums(Basis), 1, kk) qrCt <- qr(t(Cmat)) jay 
<- nrow(Cmat) # 1 XZ <- t(qr.qty(qrCt, t(Basis))[(jay+1):kk, ]) Basis <- XZ ZtSZ <- qr.qty(qrCt, t(qr.qty(qrCt, t(pen.aug))))[(jay+1):kk, (jay+1):kk] if (smart.mode.is("write")) put.smart(list(xl = xl, xr = xr, alg.niknots = alg.niknots, spar = spar, o.order = o.order, all.knots = all.knots, ridge.adj = ridge.adj, spillover = spillover, maxspar = maxspar, maXX = maXX, Cmat = Cmat, intKnots = intKnots, outer.ok = outer.ok, fixspar = fixspar)) Basis <- Basis[seq_along(x.orig), , drop = FALSE] attr(Basis, "S.arg") <- ZtSZ attr(Basis, "knots") <- allKnots # zz might be intKnots attr(Basis, "intKnots") <- intKnots attr(Basis, "spar") <- spar # Vector attr(Basis, "o.order") <- o.order # Save argument attr(Basis, "ps.int") <- NA_real_ # For the psint() methods function attr(Basis, "all.knots") <- all.knots # Save logical argument attr(Basis, "alg.niknots") <- alg.niknots # Save argument attr(Basis, "ridge.adj") <- ridge.adj # Save argument attr(Basis, "outer.ok") <- outer.ok # Save argument attr(Basis, "fixspar") <- fixspar # Save argument Basis } VGAM/R/rrvglm.fit.q0000644000176200001440000004572113135276757013512 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. rrvglm.fit <- function(x, y, w = rep_len(1, nrow(x)), etastart = NULL, mustart = NULL, coefstart = NULL, offset = 0, family, control = rrvglm.control(...), criterion = "coefficients", qr.arg = FALSE, constraints = NULL, extra = NULL, Terms = Terms, function.name = "rrvglm", ...) { eff.n <- nrow(x) # + sum(abs(w[1:nrow(x)])) specialCM <- NULL post <- list() check.rank <- TRUE # !control$Quadratic check.rank <- control$Check.rank nonparametric <- FALSE epsilon <- control$epsilon maxit <- control$maxit save.weights <- control$save.weights trace <- control$trace orig.stepsize <- control$stepsize minimize.criterion <- control$min.criterion fv <- one.more <- rrr.expression <- modelno <- NULL RRR.expression <- paste("rrr", control$Algorithm, "expression", sep = ".") n <- dim(x)[1] copy.X.vlm <- FALSE # May be overwritten in @initialize stepsize <- orig.stepsize old.coeffs <- coefstart intercept.only <- ncol(x) == 1 && dimnames(x)[[2]] == "(Intercept)" y.names <- predictors.names <- NULL # May be overwritten in @initialize n.save <- n Rank <- control$Rank rrcontrol <- control # if (length(slot(family, "initialize"))) eval(slot(family, "initialize")) # Initlz mu & M (and optionally w) eval(rrr.init.expression) if (length(etastart)) { eta <- etastart mu <- if (length(mustart)) mustart else if (length(body(slot(family, "linkinv")))) slot(family, "linkinv")(eta, extra) else warning("argument 'etastart' assigned a value ", "but there is no 'linkinv' slot to use it") } if (length(mustart)) { mu <- mustart if (length(body(slot(family, "linkfun")))) { eta <- slot(family, "linkfun")(mu, extra) } else { warning("argument 'mustart' assigned a value ", "but there is no 'link' slot to use it") } } M <- if (is.matrix(eta)) ncol(eta) else 1 if (is.character(rrcontrol$Dzero)) { index <- match(rrcontrol$Dzero, dimnames(as.matrix(y))[[2]]) if (anyNA(index)) stop("Dzero argument didn't fully match y-names") if (length(index) == M) stop("all linear predictors are linear in the ", "latent variable(s); so set 'Quadratic = FALSE'") rrcontrol$Dzero <- control$Dzero <- index } if (length(family@constraints)) eval(family@constraints) special.matrix <- matrix(-34956.125, M, M) # An unlikely used matrix just.testing <- cm.VGAM(special.matrix, x, rrcontrol$noRRR, constraints) findex <- 
trivial.constraints(just.testing, special.matrix) if (is.null(just.testing)) findex <- NULL # 20100617 tc1 <- trivial.constraints(constraints) if (!is.null(findex) && !control$Quadratic && sum(!tc1)) { for (ii in names(tc1)) if (!tc1[ii] && !any(ii == names(findex)[findex == 1])) warning("'", ii, "' is a non-trivial constraint that ", "will be overwritten by reduced-rank regression") } if (!is.null(findex) && all(findex == 1)) stop("use vglm(), not rrvglm()!") colx1.index <- names.colx1.index <- NULL dx2 <- dimnames(x)[[2]] if (sum(findex)) { asx <- attr(x, "assign") for (ii in names(findex)) if (findex[ii]) { names.colx1.index <- c(names.colx1.index, dx2[asx[[ii]]]) colx1.index <- c(colx1.index, asx[[ii]]) } names(colx1.index) <- names.colx1.index } rrcontrol$colx1.index <- control$colx1.index <- colx1.index # Save it on the object colx2.index <- 1:ncol(x) names(colx2.index) <- dx2 if (length(colx1.index)) colx2.index <- colx2.index[-colx1.index] p1 <- length(colx1.index) p2 <- length(colx2.index) rrcontrol$colx2.index <- control$colx2.index <- colx2.index # Save it on the object Index.corner <- control$Index.corner Amat <- if (length(rrcontrol$Ainit)) rrcontrol$Ainit else matrix(rnorm(M * Rank, sd = rrcontrol$sd.Cinit), M, Rank) Cmat <- if (length(rrcontrol$Cinit)) rrcontrol$Cinit else { if (!rrcontrol$Use.Init.Poisson.QO) { matrix(rnorm(p2 * Rank, sd = rrcontrol$sd.Cinit), p2, Rank) } else { .Init.Poisson.QO(ymat = as.matrix(y), X1 = if (length(colx1.index)) x[, colx1.index, drop = FALSE] else NULL, X2 = x[, colx2.index, drop = FALSE], Rank = rrcontrol$Rank, trace = rrcontrol$trace, max.ncol.etamat = rrcontrol$Etamat.colmax, Crow1positive = rrcontrol$Crow1positive, isd.latvar = rrcontrol$isd.latvar) } } if (control$Corner) Amat[control$Index.corner,] <- diag(Rank) if (length(control$str0)) Amat[control$str0, ] <- 0 rrcontrol$Ainit <- control$Ainit <- Amat # Good for valt() rrcontrol$Cinit <- control$Cinit <- Cmat # Good for valt() Hlist <- process.constraints(constraints, x, M, specialCM = specialCM) nice31 <- control$Quadratic && (!control$eq.tol || control$I.tolerances) && all(trivial.constraints(Hlist) == 1) Hlist <- Hlist.save <- replace.constraints(Hlist, Amat, colx2.index) ncolHlist <- unlist(lapply(Hlist, ncol)) X.vlm.save <- if (control$Quadratic) { tmp500 <- lm2qrrvlm.model.matrix(x = x, Hlist = Hlist, C = Cmat, control = control) xsmall.qrr <- tmp500$new.latvar.model.matrix H.list <- tmp500$constraints if (FALSE && modelno == 3) { H.list[[1]] <- (H.list[[1]])[, c(TRUE, FALSE), drop = FALSE] # Amat H.list[[2]] <- (H.list[[2]])[, c(TRUE, FALSE), drop = FALSE] # D } latvar.mat <- tmp500$latvar.mat if (length(tmp500$offset)) { offset <- tmp500$offset } lm2vlm.model.matrix(xsmall.qrr, H.list, xij = control$xij) } else { latvar.mat <- x[, colx2.index, drop = FALSE] %*% Cmat lm2vlm.model.matrix(x, Hlist, xij = control$xij) } if (length(coefstart)) { eta <- if (ncol(X.vlm.save) > 1) X.vlm.save %*% coefstart + offset else X.vlm.save * coefstart + offset eta <- if (M > 1) matrix(eta, ncol = M, byrow = TRUE) else c(eta) mu <- family@linkinv(eta, extra) } if (criterion != "coefficients") { tfun <- slot(family, criterion) # family[[criterion]] } iter <- 1 new.crit <- switch(criterion, coefficients = 1, tfun(mu = mu, y = y, w = w, res = FALSE, eta = eta, extra)) old.crit <- ifelse(minimize.criterion, 10 * new.crit + 10, -10 * new.crit - 10) deriv.mu <- eval(family@deriv) wz <- eval(family@weight) if (control$checkwz) wz <- checkwz(wz, M = M, trace = trace, wzepsilon = control$wzepsilon) U 
<- vchol(wz, M = M, n = n, silent = !trace) tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = n) z <- eta + vbacksub(U, tvfor, M = M, n = n) - offset c.list <- list(z = as.double(z), fit = as.double(t(eta)), one.more = TRUE, coeff = as.double(rep_len(1, ncol(X.vlm.save))), U = as.double(U), copy.X.vlm = copy.X.vlm, X.vlm = if (copy.X.vlm) as.double(X.vlm.save) else double(3)) dX.vlm <- as.integer(dim(X.vlm.save)) nrow.X.vlm <- dX.vlm[[1]] ncol.X.vlm <- dX.vlm[[2]] if (nrow.X.vlm < ncol.X.vlm) stop(ncol.X.vlm, " parameters but only ", nrow.X.vlm, " observations") bf.call <- expression(vlm.wfit(xmat = X.vlm.save, zedd, Hlist = if (control$Quadratic) H.list else Hlist, ncolx = ncol(x), U = U, Eta.range = control$Eta.range, matrix.out = if (control$Quadratic) FALSE else TRUE, is.vlmX = TRUE, qr = qr.arg, xij = control$xij)) while (c.list$one.more) { if (control$Quadratic) { zedd <- as.matrix(z) if (control$Corner) zedd[, Index.corner] <- zedd[, Index.corner] - latvar.mat } else { zedd <- z } if (!nice31) tfit <- eval(bf.call) # tfit$fitted.values is n x M if (!control$Quadratic) { Cmat <- tfit$mat.coef[colx2.index,,drop = FALSE] %*% Amat %*% solve(t(Amat) %*% Amat) rrcontrol$Ainit <- control$Ainit <- Amat # Good for valt() rrcontrol$Cinit <- control$Cinit <- Cmat # Good for valt() } if (!nice31) c.list$coeff <- tfit$coefficients if (control$Quadratic) { if (control$Corner) tfit$fitted.values[, Index.corner] <- tfit$fitted.values[, Index.corner] + latvar.mat } if (!nice31) tfit$predictors <- tfit$fitted.values # Does not contain the offset if (!nice31) c.list$fit <- tfit$fitted.values if (!c.list$one.more) { break } fv <- c.list$fit new.coeffs <- c.list$coeff if (length(family@middle)) eval(family@middle) eta <- fv + offset mu <- family@linkinv(eta, extra) if (length(family@middle2)) eval(family@middle2) old.crit <- new.crit new.crit <- switch(criterion, coefficients = new.coeffs, tfun(mu = mu, y = y, w = w, res = FALSE, eta = eta, extra)) if (trace && orig.stepsize == 1) { cat(if (control$Quadratic) "QRR-VGLM" else "RR-VGLM", " linear loop ", iter, ": ", criterion, "= ") UUUU <- switch(criterion, coefficients = format(new.crit, dig = round(1 - log10(epsilon))), format(new.crit, dig = max(4, round(-0 - log10(epsilon) + log10(sqrt(eff.n)))))) switch(criterion, coefficients = {if (length(new.crit) > 2) cat("\n"); cat(UUUU, fill = TRUE, sep = ", ")}, cat(UUUU, fill = TRUE, sep = ", ")) } take.half.step <- (control$half.stepsizing && length(old.coeffs)) && !control$Quadratic && ((orig.stepsize != 1) || (criterion != "coefficients" && (if (minimize.criterion) new.crit > old.crit else new.crit < old.crit))) if (!is.logical(take.half.step)) take.half.step <- TRUE if (take.half.step) { stepsize <- 2 * min(orig.stepsize, 2*stepsize) new.coeffs.save <- new.coeffs if (trace) cat("Taking a modified step") repeat { if (trace) { cat(".") flush.console() } stepsize <- stepsize / 2 if (too.small <- stepsize < 0.001) break new.coeffs <- (1 - stepsize) * old.coeffs + stepsize * new.coeffs.save if (length(family@middle)) eval(family@middle) fv <- X.vlm.save %*% new.coeffs if (M > 1) fv <- matrix(fv, n, M, byrow = TRUE) eta <- fv + offset mu <- family@linkinv(eta, extra) if (length(family@middle2)) eval(family@middle2) new.crit <- switch(criterion, coefficients = new.coeffs, tfun(mu = mu,y = y,w = w,res = FALSE, eta = eta,extra)) if ((criterion == "coefficients") || ( minimize.criterion && new.crit < old.crit) || (!minimize.criterion && new.crit > old.crit)) break } if (trace) cat("\n") if (too.small) { 
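          # (editor's note: the repeat loop above halves 'stepsize' until
          #  the criterion improves; once it falls below 0.001 the
          #  half-stepping is abandoned and the warning below is issued)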
warning("iterations terminated because ", "half-step sizes are very small") one.more <- FALSE } else { if (trace) { cat(if (control$Quadratic) "QRR-VGLM" else "RR-VGLM", " linear loop ", iter, ": ", criterion, "= ") UUUU <- switch(criterion, coefficients = format(new.crit, dig = round(1 - log10(epsilon))), format(new.crit, dig = max(4, round(-0 - log10(epsilon) + log10(sqrt(eff.n)))))) switch(criterion, coefficients = {if (length(new.crit) > 2) cat("\n"); cat(UUUU, fill = TRUE, sep = ", ")}, cat(UUUU, fill = TRUE, sep = ", ")) } one.more <- eval(control$convergence) } } else { one.more <- eval(control$convergence) } flush.console() if (one.more) { iter <- iter + 1 deriv.mu <- eval(family@deriv) wz <- eval(family@weight) if (control$checkwz) wz <- checkwz(wz, M = M, trace = trace, wzepsilon = control$wzepsilon) wz <- matrix(wz, nrow = n) U <- vchol(wz, M = M, n = n, silent=!trace) tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = n) z <- eta + vbacksub(U, tvfor, M, n) - offset # Contains \bI \bnu rrr.expression <- get(RRR.expression) eval(rrr.expression) c.list$z <- z # contains \bI_{Rank} \bnu c.list$U <- U if (copy.X.vlm) c.list$X.vlm <- X.vlm.save } c.list$one.more <- one.more c.list$coeff <- runif(length(new.coeffs)) # 20030312; twist needed! old.coeffs <- new.coeffs } # End of while() if (maxit > 1 && iter >= maxit && !control$noWarning) warning("convergence not obtained in ", maxit, " iterations") dnrow.X.vlm <- labels(X.vlm.save) xnrow.X.vlm <- dnrow.X.vlm[[2]] ynrow.X.vlm <- dnrow.X.vlm[[1]] if (length(family@fini)) eval(family@fini) if (M > 1 && !nice31) tfit$predictors <- matrix(tfit$predictors, n, M) asgn <- attr(X.vlm.save, "assign") if (nice31) { coefs <- rep_len(0, length(xnrow.X.vlm)) rank <- ncol.X.vlm } else { coefs <- tfit$coefficients names(coefs) <- xnrow.X.vlm rank <- tfit$rank } cnames <- xnrow.X.vlm if (check.rank && rank < ncol.X.vlm) stop("rrvglm only handles full-rank models (currently)") if (nice31) { R <- matrix(NA_real_, 5, 5) } else { R <- tfit$qr$qr[1:ncol.X.vlm, 1:ncol.X.vlm, drop = FALSE] R[lower.tri(R)] <- 0 attributes(R) <- list(dim = c(ncol.X.vlm, ncol.X.vlm), dimnames = list(cnames, cnames), rank = rank) } if (nice31) { effects <- rep_len(0, 77) } else { effects <- tfit$effects neff <- rep_len("", nrow.X.vlm) neff[seq(ncol.X.vlm)] <- cnames names(effects) <- neff dim(tfit$predictors) <- c(n, M) } dn <- labels(x) yn <- dn[[1]] xn <- dn[[2]] if (nice31) { residuals <- z - fv if (M == 1) { residuals <- as.vector(residuals) names(residuals) <- yn } else { dimnames(residuals) <- list(yn, predictors.names) } } else { residuals <- z - tfit$predictors if (M == 1) { tfit$predictors <- as.vector(tfit$predictors) residuals <- as.vector(residuals) names(residuals) <- names(tfit$predictors) <- yn } else { dimnames(residuals) <- dimnames(tfit$predictors) <- list(yn, predictors.names) } } if (is.matrix(mu)) { if (length(dimnames(y)[[2]])) { y.names <- dimnames(y)[[2]] } if (length(dimnames(mu)[[2]])) { y.names <- dimnames(mu)[[2]] } dimnames(mu) <- list(yn, y.names) } else { names(mu) <- names(fv) } elts.tildeA <- (M - Rank - length(control$str0)) * Rank no.dpar <- 0 df.residual <- nrow.X.vlm - rank - ifelse(control$Quadratic, Rank*p2, 0) - no.dpar - elts.tildeA fit <- list(assign = asgn, coefficients = coefs, constraints = if (control$Quadratic) H.list else Hlist, df.residual = df.residual, df.total = n*M, effects = effects, fitted.values = mu, offset = offset, rank = rank, residuals = residuals, R = R, terms = Terms) # terms: This used to be done in vglm() if 
(qr.arg && !nice31) { fit$qr <- tfit$qr dimnames(fit$qr$qr) <- dnrow.X.vlm } if (M == 1) { wz <- as.vector(wz) # Convert wz into a vector } # else fit$weights <- if (save.weights) wz else NULL misc <- list( colnames.x = xn, colnames.X.vlm = xnrow.X.vlm, criterion = criterion, function.name = function.name, intercept.only = intercept.only, predictors.names = predictors.names, M = M, n = n, nonparametric = nonparametric, nrow.X.vlm = nrow.X.vlm, orig.assign = attr(x, "assign"), p = ncol(x), ncol.X.vlm = ncol.X.vlm, ynames = dimnames(y)[[2]]) if (one.more) misc$rrr.expression <- rrr.expression # crit.list <- list() if (criterion != "coefficients") crit.list[[criterion]] <- fit[[criterion]] <- new.crit for (ii in names( .min.criterion.VGAM )) { if (ii != criterion && any(slotNames(family) == ii) && length(body(slot(family, ii)))) { fit[[ii]] <- crit.list[[ii]] <- (slot(family, ii))(mu = mu, y = y, w = w, res = FALSE, eta = eta, extra) } } if (w[1] != 1 || any(w != w[1])) fit$prior.weights <- w if (length(family@last)) eval(family@last) structure(c(fit, list(predictors = if (nice31) matrix(eta, n, M) else tfit$predictors, contrasts = attr(x, "contrasts"), control = control, crit.list = crit.list, extra = extra, family = family, iter = iter, misc = misc, post = post, ResSS = if (nice31) 000 else tfit$ResSS, x = x, y = y)), vclass = family@vfamily) } VGAM/R/family.actuary.R0000644000176200001440000064653013135276757014316 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. dgumbelII <- function(x, scale = 1, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape), length(scale)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) ans <- x index0 <- (x < 0) & is.finite(x) & !is.na(x) ans[!index0] <- log(shape[!index0] / scale[!index0]) + (shape[!index0] + 1) * log(scale[!index0] / x[!index0]) - (x[!index0] / scale[!index0])^(-shape[!index0]) ans[index0] <- log(0) ans[x == Inf] <- log(0) if (log.arg) { } else { ans <- exp(ans) ans[index0] <- 0 ans[x == Inf] <- 0 } ans[shape <= 0 | scale <= 0] <- NaN ans } pgumbelII <- function(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) { # 20150121 KaiH if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") # 20150121 KaiH if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") LLL <- max(length(q), length(shape), length(scale)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) # 20150121 KaiH if (lower.tail) { if (log.p) { ans <- -(q / scale)^(-shape) ans[q <= 0 ] <- -Inf ans[q == Inf] <- 0 } else { ans <- exp(-(q / scale)^(-shape)) ans[q <= 0] <- 0 ans[q == Inf] <- 1 } } else { if (log.p) { ans <- log(-expm1(-(q / scale)^(-shape))) ans[q <= 0] <- 0 ans[q == Inf] <- -Inf } else { ans <- -expm1(-(q / scale)^(-shape)) ans[q <= 0] <- 1 ans[q == Inf] <- 0 } } ans[shape <= 0 | scale <= 0] <- NaN ans } qgumbelII <- function(p, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") LLL <- max(length(p), length(shape), length(scale)) if 
(length(p) != LLL) p <- rep_len(p, LLL)
  if (length(shape) != LLL) shape <- rep_len(shape, LLL)
  if (length(scale) != LLL) scale <- rep_len(scale, LLL)

  if (lower.tail) {
    if (log.p) {
      ln.p <- p
      ans <- scale * (-ln.p)^(-1 / shape)
      ans[ln.p > 0] <- NaN
    } else {  # Default
      ans <- scale * (-log(p))^(-1 / shape)
      ans[p <  0] <- NaN
      ans[p == 0] <- 0
      ans[p == 1] <- Inf
      ans[p >  1] <- NaN
    }
  } else {
    if (log.p) {
      ln.p <- p
      ans <- scale * (-log(-expm1(ln.p)))^(-1 / shape)
      ans[ln.p > 0] <- NaN
    } else {
      ans <- scale * (-log1p(-p))^(-1 / shape)
      ans[p <  0] <- NaN
      ans[p == 0] <- Inf
      ans[p == 1] <- 0
      ans[p >  1] <- NaN
    }
  }
  ans[shape <= 0 | scale <= 0] <- NaN
  ans
}


rgumbelII <- function(n, scale = 1, shape) {
  qgumbelII(runif(n), shape = shape, scale = scale)
}


gumbelII <-
  function(lscale = "loge", lshape = "loge",
           iscale = NULL, ishape = NULL,
           probs.y = c(0.2, 0.5, 0.8),
           perc.out = NULL,  # 50,
           imethod = 1, zero = "shape", nowarning = FALSE) {

  lshape <- as.list(substitute(lshape))
  eshape <- link2list(lshape)
  lshape <- attr(eshape, "function.name")

  lscale <- as.list(substitute(lscale))
  escale <- link2list(lscale)
  lscale <- attr(escale, "function.name")

  if (!is.Numeric(imethod, length.arg = 1,
                  integer.valued = TRUE, positive = TRUE) || imethod > 2)
    stop("argument 'imethod' must be 1 or 2")

  if (!is.Numeric(probs.y, positive = TRUE) ||
      length(probs.y) < 2 || max(probs.y) >= 1)
    stop("bad input for argument 'probs.y'")

  if (length(perc.out))
    if (!is.Numeric(perc.out, positive = TRUE) || max(perc.out) >= 100)
      stop("bad input for argument 'perc.out'")

  if (length(ishape))
    if (!is.Numeric(ishape, positive = TRUE))
      stop("argument 'ishape' values must be positive")
  if (length(iscale))
    if (!is.Numeric(iscale, positive = TRUE))
      stop("argument 'iscale' values must be positive")

  new("vglmff",
  blurb = c("Gumbel Type II distribution\n\n",
            "Links: ",
            namesof("scale", lscale, escale), ", ",
            namesof("shape", lshape, eshape), "\n",
            "Mean: scale^(1/shape) * gamma(1 - 1 / shape)\n",
            "Variance: scale^(2/shape) * (gamma(1 - 2/shape) - ",
            "gamma(1 + 1/shape)^2)"),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M,
                                predictors.names = predictors.names,
                                M1 = 2)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...)
{ list(M1 = 2, Q1 = 1, parameters.names = c("scale", "shape"), perc.out = .perc.out , zero = .zero ) }, list( .zero = zero, .perc.out = perc.out ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 2 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly mynames1 <- param.names("scale", ncoly) mynames2 <- param.names("shape", ncoly) predictors.names <- c(namesof(mynames1, .lscale , .escale , tag = FALSE), namesof(mynames2, .lshape , .eshape , tag = FALSE))[ interleave.VGAM(M, M1 = M1)] Shape.init <- matrix(if (length( .ishape )) .ishape else 0 + NA, n, ncoly, byrow = TRUE) Scale.init <- matrix(if (length( .iscale )) .iscale else 0 + NA, n, ncoly, byrow = TRUE) if (!length(etastart)) { if (!length( .ishape ) || !length( .iscale )) { for (ilocal in 1:ncoly) { anyc <- FALSE # extra$leftcensored | extra$rightcensored i11 <- if ( .imethod == 1) anyc else FALSE # can be all data probs.y <- .probs.y xvec <- log(-log(probs.y)) fit0 <- lsfit(y = xvec, x = log(quantile(y[!i11, ilocal], probs = probs.y ))) if (!is.Numeric(Shape.init[, ilocal])) Shape.init[, ilocal] <- -fit0$coef["X"] if (!is.Numeric(Scale.init[, ilocal])) Scale.init[, ilocal] <- exp(fit0$coef["Intercept"] / Shape.init[, ilocal]) } # ilocal etastart <- cbind(theta2eta(Scale.init, .lscale , .escale ), theta2eta(Shape.init, .lshape , .eshape ))[, interleave.VGAM(M, M1 = M1)] } } }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .iscale = iscale, .ishape = ishape, .probs.y = probs.y, .imethod = imethod ) )), linkinv = eval(substitute(function(eta, extra = NULL) { Scale <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , .escale ) Shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , .eshape ) Shape <- as.matrix(Shape) if (length( .perc.out ) > 1 && ncol(Shape) > 1) stop("argument 'perc.out' should be of length one since ", "there are multiple responses") if (!length( .perc.out )) { return(Scale * gamma(1 - 1 / Shape)) } ans <- if (length( .perc.out ) > 1) { qgumbelII(p = matrix( .perc.out / 100, length(Shape), length( .perc.out ), byrow = TRUE), shape = Shape, scale = Scale) } else { qgumbelII(p = .perc.out / 100, shape = Shape, scale = Scale) } colnames(ans) <- paste(as.character( .perc.out ), "%", sep = "") ans }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .perc.out = perc.out ) )), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .lscale , ncoly), rep_len( .lshape , ncoly))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .escale misc$earg[[M1*ii ]] <- .eshape } misc$M1 <- M1 misc$imethod <- .imethod misc$expected <- TRUE misc$multipleResponses <- TRUE misc$perc.out <- .perc.out misc$true.mu <- FALSE # @fitted is not a true mu }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .perc.out = perc.out, .imethod = imethod ) )), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { Scale <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , .escale ) Shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dgumbelII(x = y, 
shape = Shape, scale = Scale, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ) )), vfamily = c("gumbelII"), validparams = eval(substitute(function(eta, y, extra = NULL) { Scale <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , .escale ) Shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , .eshape ) okay1 <- all(is.finite(Scale)) && all(0 < Scale) && all(is.finite(Shape)) && all(0 < Shape) okay1 }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape) )), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) Scale <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , .escale ) Shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , .eshape ) rgumbelII(nsim * length(Scale), shape = Shape, scale = Scale) }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ) )), deriv = eval(substitute(expression({ M1 <- 2 Scale <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , .escale ) Shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , .eshape ) dl.dshape <- 1 / Shape + log(Scale / y) - log(Scale / y) * (Scale / y)^Shape dl.dscale <- Shape / Scale - (Shape / y) * (Scale / y)^(Shape - 1) dscale.deta <- dtheta.deta(Scale, .lscale , .escale ) dshape.deta <- dtheta.deta(Shape, .lshape , .eshape ) myderiv <- c(w) * cbind(dl.dscale, dl.dshape) * cbind(dscale.deta, dshape.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ) )), weight = eval(substitute(expression({ EulerM <- -digamma(1.0) ned2l.dshape2 <- (1 + trigamma(2) + digamma(2)^2) / Shape^2 ned2l.dscale2 <- (Shape / Scale)^2 ned2l.dshapescale <- digamma(2) / Scale wz <- array(c(c(w) * ned2l.dscale2 * dscale.deta^2, c(w) * ned2l.dshape2 * dshape.deta^2, c(w) * ned2l.dshapescale * dscale.deta * dshape.deta), dim = c(n, M / M1, 3)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lscale = lscale, .lshape = lshape )))) } dmbeard <- function(x, shape, scale = 1, rho, epsilon, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape), length(scale), length(rho), length(epsilon)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) if (length(rho) != LLL) rho <- rep_len(rho, LLL) if (length(epsilon) != LLL) epsilon <- rep_len(epsilon, LLL) index0 <- (x < 0) ans <- log(epsilon * exp(-x * scale) + shape) + (-epsilon * x - ((rho * epsilon - 1) / (rho * scale)) * (log1p(rho * shape) - log(exp(-x * scale) + rho * shape) - scale * x)) - log(exp(-x * scale) + shape * rho) ans[index0] <- log(0) ans[x == Inf] <- log(0) if (log.arg) { } else { ans <- exp(ans) ans[index0] <- 0 ans[x == Inf] <- 0 } ans[shape <= 0 | scale <= 0 | rho <= 0 | epsilon <= 0] <- NaN ans } pmbeard <- function(q, shape, scale = 1, rho, epsilon) { LLL <- max(length(q), length(shape), length(scale), length(rho), length(epsilon)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) if (length(rho) != LLL) rho <- rep_len(rho, LLL) if (length(epsilon) != LLL) epsilon <- rep_len(epsilon, LLL) ans <- -expm1(-epsilon * q - ((rho * epsilon - 1) / (rho * 
scale)) * (log1p(rho * shape) - log(exp(-scale * q) + rho * shape) - scale * q)) ans[(q <= 0)] <- 0 ans[shape <= 0 | scale <= 0 | rho <= 0 | epsilon <= 0] <- NaN ans[q == Inf] <- 1 ans } dmperks <- function(x, scale = 1, shape, epsilon, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape), length(scale), length(epsilon)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) if (length(epsilon) != LLL) epsilon <- rep_len(epsilon, LLL) index0 <- (x < 0) ans <- log(epsilon * exp(-x * scale) + shape) + (-epsilon * x - ((epsilon - 1) / scale) * (log1p(shape) - log(shape + exp(-x * scale)) -x * scale)) - log(exp(-x * scale) + shape) ans[index0] <- log(0) ans[x == Inf] <- log(0) if (log.arg) { } else { ans <- exp(ans) ans[index0] <- 0 ans[x == Inf] <- 0 } ans[shape <= 0 | scale <= 0 | epsilon <= 0] <- NaN ans } pmperks <- function(q, scale = 1, shape, epsilon) { LLL <- max(length(q), length(shape), length(scale)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) ans <- -expm1(-epsilon * q - ((epsilon - 1) / scale) * (log1p(shape) - log(shape + exp(-q * scale)) - q * scale)) ans[(q <= 0)] <- 0 ans[shape <= 0 | scale <= 0] <- NaN ans[q == Inf] <- 1 ans } dbeard <- function(x, shape, scale = 1, rho, log = FALSE) { warning("does not integrate to unity") if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape), length(scale), length(rho)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) if (length(rho) != LLL) rho <- rep_len(rho, LLL) index0 <- (x < 0) ans <- log(shape) - x * scale * (rho^(-1 / scale)) + log(rho) + log(scale) + (rho^(-1 / scale)) * log1p(shape * rho) - (1 + rho^(-1 / scale)) * log(shape * rho + exp(-x * scale)) ans[index0] <- log(0) ans[x == Inf] <- log(0) if (log.arg) { } else { ans <- exp(ans) ans[index0] <- 0 ans[x == Inf] <- 0 } ans[shape <= 0 | scale <= 0 | rho <= 0] <- NaN ans } dbeard <- function(x, shape, scale = 1, rho, log = FALSE) { alpha <- shape beta <- scale warning("does not integrate to unity") ret <- ifelse(x <= 0 | beta <= 0, NaN, exp(alpha+beta*x)*(1+exp(alpha+rho))**(exp(-rho/beta))/ (1+exp(alpha+rho+beta*x))**(1+exp(-rho/beta))) ret } qbeard <- function(x, u = 0.5, alpha = 1, beta = 1,rho = 1) { ret <- ifelse(x <= 0 | u <= 0 | u >= 1 | length(x) != length(u) | beta <= 0, NaN, (1/beta) * (log((u**(-beta*exp(rho))) * (1+exp(alpha+rho+beta*x))-1)-alpha-rho)-x) return(ret) } dperks <- function(x, scale = 1, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape), length(scale)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) index0 <- (x < 0) ans <- log(shape) - x + log1p(shape) / scale - (1 + 1 / scale) * log(shape + exp(-x * scale)) ans[index0] <- log(0) ans[x == Inf] <- log(0) if (log.arg) { } else { ans <- exp(ans) ans[index0] <- 0 ans[x == Inf] <- 0 } ans[shape <= 0 | scale <= 0] <- NaN ans } pperks <- function(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) { if 
(!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") LLL <- max(length(q), length(shape), length(scale)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) logS <- -q + (log1p(shape) - log(shape + exp(-q * scale))) / scale if (lower.tail) { if (log.p) { ans <- log(-expm1(logS)) ans[q <= 0 ] <- -Inf ans[q == Inf] <- 0 } else { ans <- -expm1(logS) ans[q <= 0] <- 0 ans[q == Inf] <- 1 } } else { if (log.p) { ans <- logS ans[q <= 0] <- 0 ans[q == Inf] <- -Inf } else { ans <- exp(logS) ans[q <= 0] <- 1 ans[q == Inf] <- 0 } } ans[shape <= 0 | scale <= 0] <- NaN ans } qperks <- function(p, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") LLL <- max(length(p), length(shape), length(scale)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) if (lower.tail) { if (log.p) { ln.p <- p tmp <- scale * log(-expm1(ln.p)) onemFb <- exp(tmp) ans <- (log1p(shape - onemFb) - log(shape) - tmp) / scale ans[ln.p > 0] <- NaN } else { tmp <- scale * log1p(-p) onemFb <- exp(tmp) ans <- (log1p(shape - onemFb) - log(shape) - tmp) / scale ans[p < 0] <- NaN ans[p == 0] <- 0 ans[p == 1] <- Inf ans[p > 1] <- NaN } } else { if (log.p) { ln.p <- p tmp <- scale * ln.p onemFb <- exp(tmp) ans <- (log1p(shape - onemFb) - log(shape) - tmp) / scale ans[ln.p > 0] <- NaN } else { tmp <- scale * log(p) onemFb <- exp(tmp) ans <- (log1p(shape - onemFb) - log(shape) - tmp) / scale ans[p < 0] <- NaN ans[p == 0] <- Inf ans[p == 1] <- 0 ans[p > 1] <- NaN } } ans[shape <= 0 | scale <= 0] <- NaN ans } rperks <- function(n, scale = 1, shape) { qperks(runif(n), scale = scale, shape = shape) } perks.control <- function(save.weights = TRUE, ...) { list(save.weights = save.weights) } perks <- function(lscale = "loge", lshape = "loge", iscale = NULL, ishape = NULL, gscale = exp(-5:5), gshape = exp(-5:5), nsimEIM = 500, oim.mean = FALSE, zero = NULL, nowarning = FALSE) { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 50) warning("argument 'nsimEIM' should be an integer ", "greater than 50, say") if (length(ishape)) if (!is.Numeric(ishape, positive = TRUE)) stop("argument 'ishape' values must be positive") if (length(iscale)) if (!is.Numeric(iscale, positive = TRUE)) stop("argument 'iscale' values must be positive") if (!is.logical(oim.mean) || length(oim.mean) != 1) stop("bad input for argument 'oim.mean'") new("vglmff", blurb = c("Perks' distribution\n\n", "Links: ", namesof("scale", lscale, escale), ", ", namesof("shape", lshape, eshape), "\n", "Median: qperks(p = 0.5, scale = scale, shape = shape)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(M1 = 2, Q1 = 1, nsimEIM = .nsimEIM , parameters.names = c("scale", "shape"), zero = .zero ) }, list( .zero = zero, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 2 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly mynames1 <- param.names("scale", ncoly) mynames2 <- param.names("shape", ncoly) predictors.names <- c(namesof(mynames1, .lscale , .escale , tag = FALSE), namesof(mynames2, .lshape , .eshape , tag = FALSE))[ interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { matH <- matrix(if (length( .ishape )) .ishape else 0 + NA, n, ncoly, byrow = TRUE) matC <- matrix(if (length( .iscale )) .iscale else 0 + NA, n, ncoly, byrow = TRUE) shape.grid <- .gshape scale.grid <- .gscale for (spp. in 1:ncoly) { yvec <- y[, spp.] wvec <- w[, spp.] perks.Loglikfun2 <- function(scaleval, shapeval, y, x, w, extraargs) { sum(c(w) * dperks(x = y, shape = shapeval, scale = scaleval, log = TRUE)) } try.this <- grid.search2(scale.grid, shape.grid, objfun = perks.Loglikfun2, y = yvec, w = wvec, ret.objfun = TRUE) # Last value is the loglik if (!length( .iscale )) matC[, spp.] <- try.this["Value1"] if (!length( .ishape )) matH[, spp.] <- try.this["Value2"] } # spp. etastart <- cbind(theta2eta(matC, .lscale , .escale ), theta2eta(matH, .lshape , .eshape ))[, interleave.VGAM(M, M1 = M1)] } # End of !length(etastart) }), list( .lscale = lscale, .lshape = lshape, .eshape = eshape, .escale = escale, .gshape = gshape, .gscale = gscale, .ishape = ishape, .iscale = iscale ))), linkinv = eval(substitute(function(eta, extra = NULL) { Scale <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , .escale ) Shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , .eshape ) qperks(p = 0.5, shape = Shape, scale = Scale) }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ))), last = eval(substitute(expression({ misc$link <- c(rep_len( .lscale , ncoly), rep_len( .lshape , ncoly))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .escale misc$earg[[M1*ii ]] <- .eshape } misc$M1 <- M1 misc$expected <- TRUE misc$multipleResponses <- TRUE misc$nsimEIM <- .nsimEIM }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { Scale <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , .escale ) Shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dperks(x = y, shape = Shape, scale = Scale, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ))), vfamily = c("perks"), validparams = eval(substitute(function(eta, y, extra = NULL) { Scale <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lscale , .escale ) Shape <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lshape , .eshape ) okay1 <- all(is.finite(Scale)) && all(0 < Scale) && all(is.finite(Shape)) && all(0 < Shape) okay1 }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape) )), simslot = eval(substitute( 
function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) Scale <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , .escale ) Shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , .eshape ) rperks(nsim * length(Scale), shape = Shape, scale = Scale) }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ))), deriv = eval(substitute(expression({ M1 <- 2 scale <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lscale , .escale ) shape <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lshape , .eshape ) temp2 <- exp(y * scale) temp3 <- 1 + shape * temp2 dl.dshape <- 1 / shape + 1 / (scale * (1 + shape)) - (1 + 1 / scale) * temp2 / temp3 dl.dscale <- y - log1p(shape) / scale^2 + log1p(shape * temp2) / scale^2 - (1 + 1 / scale) * shape * y * temp2 / temp3 dshape.deta <- dtheta.deta(shape, .lshape , .eshape ) dscale.deta <- dtheta.deta(scale, .lscale , .escale ) dthetas.detas <- cbind(dscale.deta, dshape.deta) myderiv <- c(w) * cbind(dl.dscale, dl.dshape) * dthetas.detas myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ))), weight = eval(substitute(expression({ NOS <- M / M1 dThetas.detas <- dthetas.detas[, interleave.VGAM(M, M1 = M1)] wz <- matrix(0.0, n, M + M - 1) # wz is 'tridiagonal' ind1 <- iam(NA, NA, M = M1, both = TRUE, diag = TRUE) for (spp. in 1:NOS) { run.varcov <- 0 Scale <- scale[, spp.] Shape <- shape[, spp.] if (FALSE && intercept.only && .oim.mean ) { stop("this is wrong") temp8 <- (1 + Shape * exp(Scale * y[, spp.]))^2 nd2l.dadb <- 2 * y[, spp.] * exp(Scale * y[, spp.]) / temp8 nd2l.dada <- 1 / Shape^2 + 1 / (1 + Shape)^2 - 2 * exp(2 * Scale * y[, spp.]) / temp8 nd2l.dbdb <- 2 * Shape * y[, spp.]^2 * exp(Scale * y[, spp.]) / temp8 ave.oim11 <- weighted.mean(nd2l.dada, w[, spp.]) ave.oim12 <- weighted.mean(nd2l.dadb, w[, spp.]) ave.oim22 <- weighted.mean(nd2l.dbdb, w[, spp.]) run.varcov <- cbind(ave.oim11, ave.oim22, ave.oim12) } else { for (ii in 1:( .nsimEIM )) { ysim <- rperks(n = n, shape = Shape, scale = Scale) if (ii < 3) { } temp2 <- exp(ysim * Scale) temp3 <- 1 + Shape * temp2 dl.dshape <- 1 / Shape + 1 / (Scale * (1 + Shape)) - (1 + 1 / Scale) * temp2 / temp3 dl.dscale <- ysim - log1p(Shape) / Scale^2 + log1p(Shape * temp2) / Scale^2 - (1 + 1 / Scale) * Shape * ysim * temp2 / temp3 temp7 <- cbind(dl.dscale, dl.dshape) if (ii < 3) { } run.varcov <- run.varcov + temp7[, ind1$row.index] * temp7[, ind1$col.index] } run.varcov <- cbind(run.varcov / .nsimEIM ) } wz1 <- if (intercept.only) matrix(colMeans(run.varcov), nrow = n, ncol = ncol(run.varcov), byrow = TRUE) else run.varcov wz1 <- wz1 * dThetas.detas[, M1 * (spp. - 1) + ind1$row] * dThetas.detas[, M1 * (spp. - 1) + ind1$col] for (jay in 1:M1) for (kay in jay:M1) { cptr <- iam((spp. - 1) * M1 + jay, (spp. - 1) * M1 + kay, M = M) wz[, cptr] <- wz1[, iam(jay, kay, M = M1)] } } # End of for (spp.) 
loop w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = M / M1) }), list( .lscale = lscale, .escale = escale, .nsimEIM = nsimEIM, .oim.mean = oim.mean )))) } # perks() dmakeham <- function(x, scale = 1, shape, epsilon = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape), length(scale), length(epsilon)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) if (length(epsilon) != LLL) epsilon <- rep_len(epsilon, LLL) index0 <- (x < 0) ans <- log(epsilon * exp(-x * scale) + shape) + x * (scale - epsilon) - (shape / scale) * expm1(x * scale) ans[index0] <- log(0) ans[x == Inf] <- log(0) if (log.arg) { } else { ans <- exp(ans) ans[index0] <- 0 ans[x == Inf] <- 0 } ans[shape <= 0 | scale <= 0 | epsilon < 0] <- NaN ans } pmakeham <- function(q, scale = 1, shape, epsilon = 0, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") LLL <- max(length(q), length(shape), length(scale), length(epsilon)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) if (length(epsilon) != LLL) epsilon <- rep_len(epsilon, LLL) if (lower.tail) { if (log.p) { ans <- log(-expm1(-q * epsilon - (shape / scale) * expm1(scale * q))) ans[q <= 0 ] <- -Inf ans[q == Inf] <- 0 } else { ans <- -expm1(-q * epsilon - (shape / scale) * expm1(scale * q)) ans[q <= 0] <- 0 ans[q == Inf] <- 1 } } else { if (log.p) { ans <- -q * epsilon - (shape / scale) * expm1(scale * q) ans[q <= 0] <- 0 ans[q == Inf] <- -Inf } else { ans <- exp(-q * epsilon - (shape / scale) * expm1(scale * q)) ans[q <= 0] <- 1 ans[q == Inf] <- 0 } } ans[shape <= 0 | scale <= 0 | epsilon < 0] <- NaN ans } qmakeham <- function(p, scale = 1, shape, epsilon = 0, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") LLL <- max(length(p), length(shape), length(scale), length(epsilon)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) if (length(epsilon) != LLL) epsilon <- rep_len(epsilon, LLL) if (lower.tail) { if (log.p) { ln.p <- p ans <- shape / (scale * epsilon) - log(-expm1(ln.p)) / epsilon - lambertW((shape / epsilon) * exp(shape / epsilon) * exp(log(-expm1(ln.p)) * (-scale / epsilon))) / scale ans[ln.p == 0] <- Inf ans[ln.p > 0] <- NaN } else { ans <- shape / (scale * epsilon) - log1p(-p) / epsilon - lambertW((shape / epsilon) * exp(shape / epsilon) * exp( (-scale / epsilon) * log1p(-p) )) / scale ans[p < 0] <- NaN ans[p == 0] <- 0 ans[p == 1] <- Inf ans[p > 1] <- NaN } } else { if (log.p) { ln.p <- p ans <- shape / (scale * epsilon) - ln.p / epsilon - lambertW((shape / epsilon) * exp(shape / epsilon) * exp(ln.p * (-scale / epsilon))) / scale ans[ln.p == -Inf] <- Inf ans[ln.p > 0] <- NaN } else { ans <- shape / (scale * epsilon) - log(p) / epsilon - lambertW((shape / epsilon) * exp(shape / epsilon) * p^(-scale / epsilon)) / scale ans[p < 0] <- NaN ans[p == 0] <- Inf ans[p == 1] <- 0 ans[p > 1] <- NaN } } ans[epsilon == 0] <- qgompertz(p = 
p[epsilon == 0],
                               shape = shape[epsilon == 0],
                               scale = scale[epsilon == 0],
                               lower.tail = lower.tail, log.p = log.p)
  ans[shape <= 0 | scale <= 0 | epsilon < 0] <- NaN
  ans
}


rmakeham <- function(n, scale = 1, shape, epsilon = 0) {
  qmakeham(runif(n), scale = scale, shape = shape, epsilon = epsilon)
}


makeham.control <- function(save.weights = TRUE, ...) {
  list(save.weights = save.weights)
}


makeham <-
  function(lscale = "loge", lshape = "loge", lepsilon = "loge",
           iscale = NULL, ishape = NULL, iepsilon = NULL,  # 0.3,
           gscale = exp(-5:5), gshape = exp(-5:5),
           gepsilon = exp(-4:1),
           nsimEIM = 500, oim.mean = TRUE, zero = NULL,
           nowarning = FALSE) {

  lepsil <- lepsilon
  iepsil <- iepsilon

  lshape <- as.list(substitute(lshape))
  eshape <- link2list(lshape)
  lshape <- attr(eshape, "function.name")

  lscale <- as.list(substitute(lscale))
  escale <- link2list(lscale)
  lscale <- attr(escale, "function.name")

  lepsil <- as.list(substitute(lepsil))
  eepsil <- link2list(lepsil)
  lepsil <- attr(eepsil, "function.name")

  if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE))
    stop("bad input for argument 'nsimEIM'")
  if (nsimEIM <= 50)
    warning("argument 'nsimEIM' should be an integer ",
            "greater than 50, say")

  if (length(ishape))
    if (!is.Numeric(ishape, positive = TRUE))
      stop("argument 'ishape' values must be positive")
  if (length(iscale))
    if (!is.Numeric(iscale, positive = TRUE))
      stop("argument 'iscale' values must be positive")
  if (length(iepsil))
    if (!is.Numeric(iepsil, positive = TRUE))
      stop("argument 'iepsil' values must be positive")

  if (!is.logical(oim.mean) || length(oim.mean) != 1)
    stop("bad input for argument 'oim.mean'")

  new("vglmff",
  blurb = c("Makeham distribution\n\n",
            "Links: ",
            namesof("scale",   lscale, escale), ", ",
            namesof("shape",   lshape, eshape), ", ",
            namesof("epsilon", lepsil, eepsil), "\n",
            "Median: qmakeham(p = 0.5, scale, shape, epsilon)"),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M,
                                predictors.names = predictors.names,
                                M1 = 3)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...) {
    list(M1 = 3,
         Q1 = 1,
         nsimEIM = .nsimEIM,
         parameters.names = c("scale", "shape", "epsilon"),
         zero = .zero )
  }, list( .zero = zero, .nsimEIM = nsimEIM ))),

  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       Is.positive.y = TRUE,
                       ncol.w.max = Inf, ncol.y.max = Inf,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    ncoly <- ncol(y)
    M1 <- 3
    extra$ncoly <- ncoly
    extra$M1 <- M1
    M <- M1 * ncoly

    mynames1 <- param.names("scale",   ncoly)
    mynames2 <- param.names("shape",   ncoly)
    mynames3 <- param.names("epsilon", ncoly)
    predictors.names <-
      c(namesof(mynames1, .lscale , .escale , tag = FALSE),
        namesof(mynames2, .lshape , .eshape , tag = FALSE),
        namesof(mynames3, .lepsil , .eepsil , tag = FALSE))[
        interleave.VGAM(M, M1 = M1)]

    if (!length(etastart)) {
      matC <- matrix(if (length( .iscale )) .iscale else 0 + NA,
                     n, ncoly, byrow = TRUE)
      matH <- matrix(if (length( .ishape )) .ishape else 0 + NA,
                     n, ncoly, byrow = TRUE)
      matE <- matrix(if (length( .iepsil )) .iepsil else 0.3,
                     n, ncoly, byrow = TRUE)

      shape.grid <- unique(sort(c( .gshape )))
      scale.grid <- unique(sort(c( .gscale )))

      for (spp. in 1:ncoly) {
        yvec <- y[, spp.]
        wvec <- w[, spp.]
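        # Editor's note: initial values are computed in two passes below:
        # first a joint grid search over (scale, shape) via grid.search2()
        # with epsilon held at its starting value, then a one-dimensional
        # grid search over epsilon with (scale, shape) held fixed.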
makeham.Loglikfun2 <- function(scaleval, shapeval, y, x, w, extraargs) { sum(c(w) * dmakeham(x = y, shape = shapeval, epsilon = extraargs$Epsil, scale = scaleval, log = TRUE)) } try.this <- grid.search2(scale.grid, shape.grid, objfun = makeham.Loglikfun2, y = yvec, w = wvec, extraargs = list(Epsilon = matE[1, spp.]), ret.objfun = TRUE) # Last value is the loglik if (!length( .iscale )) matC[, spp.] <- try.this["Value1"] if (!length( .ishape )) matH[, spp.] <- try.this["Value2"] } # spp. epsil.grid <- c( .gepsil ) for (spp. in 1:ncoly) { yvec <- y[, spp.] wvec <- w[, spp.] makeham.Loglikfun2 <- function(epsilval, y, x, w, extraargs) { ans <- sum(c(w) * dmakeham(x = y, shape = extraargs$Shape, epsilon = epsilval, scale = extraargs$Scale, log = TRUE)) ans } Init.epsil <- grid.search(epsil.grid, objfun = makeham.Loglikfun2, y = yvec, x = x, w = wvec, extraargs = list(Shape = matH[1, spp.], Scale = matC[1, spp.])) matE[, spp.] <- Init.epsil } # spp. etastart <- cbind(theta2eta(matC, .lscale , .escale ), theta2eta(matH, .lshape , .eshape ), theta2eta(matE, .lepsil , .eepsil ))[, interleave.VGAM(M, M1 = M1)] } # End of !length(etastart) }), list( .lshape = lshape, .lscale = lscale, .lepsil = lepsil, .eshape = eshape, .escale = escale, .eepsil = eepsil, .gshape = gshape, .gscale = gscale, .gepsil = gepsilon, .ishape = ishape, .iscale = iscale, .iepsil = iepsil ))), linkinv = eval(substitute(function(eta, extra = NULL) { scale <- eta2theta(eta[, c(TRUE, FALSE, FALSE)], .lscale , .escale ) shape <- eta2theta(eta[, c(FALSE, TRUE, FALSE)], .lshape , .eshape ) epsil <- eta2theta(eta[, c(FALSE, FALSE, TRUE)], .lepsil , .eepsil ) qmakeham(p = 0.5, scale = scale, shape = shape, epsil = epsil) }, list( .lshape = lshape, .lscale = lscale, .lepsil = lepsil, .eshape = eshape, .escale = escale, .eepsil = eepsil ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .lscale , ncoly), rep_len( .lshape , ncoly), rep_len( .lepsil , ncoly))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2, mynames3)[ interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-2]] <- .escale misc$earg[[M1*ii-1]] <- .eshape misc$earg[[M1*ii ]] <- .eepsil } misc$M1 <- M1 misc$expected <- TRUE misc$multipleResponses <- TRUE misc$nsimEIM <- .nsimEIM }), list( .lshape = lshape, .lscale = lscale, .lepsil = lepsil, .eshape = eshape, .escale = escale, .eepsil = eepsil, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { scale <- eta2theta(eta[, c(TRUE, FALSE, FALSE)], .lscale , .escale ) shape <- eta2theta(eta[, c(FALSE, TRUE, FALSE)], .lshape , .eshape ) epsil <- eta2theta(eta[, c(FALSE, FALSE, TRUE)], .lepsil , .eepsil ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dmakeham(x = y, scale = scale, shape = shape, epsil = epsil, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape = lshape, .lscale = lscale, .lepsil = lepsil, .eshape = eshape, .escale = escale, .eepsil = eepsil ))), vfamily = c("makeham"), # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, validparams = eval(substitute(function(eta, y, extra = NULL) { Scale <- eta2theta(eta[, c(TRUE, FALSE, FALSE), drop = FALSE], .lscale , .escale ) shape <- eta2theta(eta[, c(FALSE, TRUE, FALSE), drop = FALSE], .lshape , .eshape ) epsil <- eta2theta(eta[, c(FALSE, FALSE, TRUE), drop = 
FALSE], .lepsil , .eepsil ) okay1 <- all(is.finite(Scale)) && all(0 < Scale) && all(is.finite(shape)) && all(0 < shape) && all(is.finite(epsil)) && all(0 < epsil) okay1 }, list( .lshape = lshape, .lscale = lscale, .lepsil = lepsil, .eshape = eshape, .escale = escale, .eepsil = eepsil ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) Scale <- eta2theta(eta[, c(TRUE, FALSE, FALSE), drop = FALSE], .lscale , .escale ) shape <- eta2theta(eta[, c(FALSE, TRUE, FALSE), drop = FALSE], .lshape , .eshape ) epsil <- eta2theta(eta[, c(FALSE, FALSE, TRUE), drop = FALSE], .lepsil , .eepsil ) rmakeham(nsim * length(Scale), scale = c(Scale), shape = c(shape), epsilon = c(epsil)) }, list( .lshape = lshape, .lscale = lscale, .lepsil = lepsil, .eshape = eshape, .escale = escale, .eepsil = eepsil ))), deriv = eval(substitute(expression({ scale <- eta2theta(eta[, c(TRUE, FALSE, FALSE), drop = FALSE], .lscale , .escale ) shape <- eta2theta(eta[, c(FALSE, TRUE, FALSE), drop = FALSE], .lshape , .eshape ) epsil <- eta2theta(eta[, c(FALSE, FALSE, TRUE), drop = FALSE], .lepsil , .eepsil ) temp2 <- exp(y * scale) temp3 <- epsil + shape * temp2 dl.dshape <- temp2 / temp3 - expm1(y * scale) / scale dl.dscale <- shape * y * temp2 / temp3 + shape * expm1(y * scale) / scale^2 - shape * y * temp2 / scale dl.depsil <- 1 / temp3 - y dshape.deta <- dtheta.deta(shape, .lshape , .eshape ) dscale.deta <- dtheta.deta(scale, .lscale , .escale ) depsil.deta <- dtheta.deta(epsil, .lepsil , .eepsil ) dthetas.detas <- cbind(dscale.deta, dshape.deta, depsil.deta) myderiv <- c(w) * cbind(dl.dscale, dl.dshape, dl.depsil) * dthetas.detas myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lshape = lshape, .lscale = lscale, .lepsil = lepsil, .eshape = eshape, .escale = escale, .eepsil = eepsil ))), weight = eval(substitute(expression({ NOS <- M / M1 dThetas.detas <- dthetas.detas[, interleave.VGAM(M, M1 = M1)] wz <- matrix(0.0, n, M + M - 1 + M - 2) # wz has half-bandwidth 3 ind1 <- iam(NA, NA, M = M1, both = TRUE, diag = TRUE) # Use simulated EIM for (spp. in 1:NOS) { run.varcov <- 0 Shape <- shape[, spp.] Scale <- scale[, spp.] Epsil <- epsil[, spp.] for (ii in 1:( .nsimEIM )) { ysim <- rmakeham(n = n, scale = Scale, shape = Shape, epsil = Epsil) temp2 <- exp(ysim * Scale) temp3 <- Epsil + Shape * temp2 dl.dshape <- temp2 / temp3 - expm1(ysim * Scale) / Scale dl.dscale <- Shape * ysim * temp2 / temp3 + Shape * expm1(ysim * Scale) / Scale^2 - Shape * ysim * temp2 / Scale dl.depsil <- 1 / temp3 - ysim temp7 <- cbind(dl.dscale, dl.dshape, dl.depsil) run.varcov <- run.varcov + temp7[, ind1$row.index] * temp7[, ind1$col.index] } run.varcov <- cbind(run.varcov / .nsimEIM ) wz1 <- if (intercept.only) matrix(colMeans(run.varcov, na.rm = TRUE), nrow = n, ncol = ncol(run.varcov), byrow = TRUE) else run.varcov wz1 <- wz1 * dThetas.detas[, M1 * (spp. - 1) + ind1$row] * dThetas.detas[, M1 * (spp. - 1) + ind1$col] for (jay in 1:M1) for (kay in jay:M1) { # Now copy wz1 into wz cptr <- iam((spp. - 1) * M1 + jay, (spp. - 1) * M1 + kay, M = M) wz[, cptr] <- wz1[, iam(jay, kay, M = M1)] } } # End of for (spp.) 
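## The weight slot above uses a simulated EIM: the expected
## information is approximated by averaging the outer product of the
## score vector over nsimEIM simulated responses.  A standalone
## sketch of the idea (hypothetical one-parameter exponential
## example, not part of this family function):
##   score <- function(theta, y) 1 / theta - y   # d log f / d theta
##   theta <- 2; nsim <- 500; set.seed(1)
##   ysim <- rexp(nsim, rate = theta)
##   mean(score(theta, ysim)^2)                  # approx 1 / theta^2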
loop w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = M / M1) }), list( .lshape = lshape, .lscale = lscale, .lepsil = lepsil, .eshape = eshape, .escale = escale, .eepsil = eepsil, .nsimEIM = nsimEIM, .oim.mean = oim.mean )))) } # makeham() dgompertz <- function(x, scale = 1, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape), length(scale)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) index0 <- (x < 0) index1 <- abs(x * scale) < 0.1 & is.finite(x * scale) ans <- log(shape) + x * scale - (shape / scale) * (exp(x * scale) - 1) ans[index1] <- log(shape[index1]) + x[index1] * scale[index1] - (shape[index1] / scale[index1]) * expm1(x[index1] * scale[index1]) ans[index0] <- log(0) ans[x == Inf] <- log(0) if (log.arg) { } else { ans <- exp(ans) ans[index0] <- 0 ans[x == Inf] <- 0 } ans[shape <= 0 | scale <= 0] <- NaN ans } pgompertz <- function(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") LLL <- max(length(q), length(shape), length(scale)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) if (lower.tail) { if (log.p) { ans <- log1p(-exp((-shape / scale) * expm1(scale * q))) ans[q <= 0 ] <- -Inf ans[q == Inf] <- 0 } else { ans <- -expm1((-shape / scale) * expm1(scale * q)) ans[q <= 0] <- 0 ans[q == Inf] <- 1 } } else { if (log.p) { ans <- (-shape / scale) * expm1(scale * q) ans[q <= 0] <- 0 ans[q == Inf] <- -Inf } else { ans <- exp((-shape / scale) * expm1(scale * q)) ans[q <= 0] <- 1 ans[q == Inf] <- 0 } } ans[shape <= 0 | scale <= 0] <- NaN ans } qgompertz <- function(p, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") LLL <- max(length(p), length(shape), length(scale)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) if (lower.tail) { if (log.p) { ln.p <- p ans <- log1p((-scale / shape) * log(-expm1(ln.p))) / scale ans[ln.p > 0] <- NaN } else { ans <- log1p((-scale / shape) * log1p(-p)) / scale ans[p < 0] <- NaN ans[p == 0] <- 0 ans[p == 1] <- Inf ans[p > 1] <- NaN } } else { if (log.p) { ln.p <- p ans <- log1p((-scale / shape) * ln.p) / scale ans[ln.p > 0] <- NaN } else { ans <- log1p((-scale / shape) * log(p)) / scale ans[p < 0] <- NaN ans[p == 0] <- Inf ans[p == 1] <- 0 ans[p > 1] <- NaN } } ans[shape <= 0 | scale <= 0] <- NaN ans } rgompertz <- function(n, scale = 1, shape) { qgompertz(runif(n), scale = scale, shape = shape) } gompertz.control <- function(save.weights = TRUE, ...) 
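## Consistency checks for the Gompertz d/p/q/r functions just defined
## (illustrative only; parameter values are arbitrary):
##   pp <- seq(0.05, 0.95, by = 0.05)
##   max(abs(pgompertz(qgompertz(pp, scale = 0.2, shape = 2),
##                     scale = 0.2, shape = 2) - pp))       # ~ 0
##   max(abs(pp - pgompertz(qgompertz(pp, scale = 0.2, shape = 2,
##                                    lower.tail = FALSE),
##                          scale = 0.2, shape = 2,
##                          lower.tail = FALSE)))           # ~ 0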
{
  list(save.weights = save.weights)
}


gompertz <-
  function(lscale = "loge", lshape = "loge",
           iscale = NULL, ishape = NULL,
           nsimEIM = 500, zero = NULL, nowarning = FALSE) {

  lshape <- as.list(substitute(lshape))
  eshape <- link2list(lshape)
  lshape <- attr(eshape, "function.name")

  lscale <- as.list(substitute(lscale))
  escale <- link2list(lscale)
  lscale <- attr(escale, "function.name")

  if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE))
    stop("bad input for argument 'nsimEIM'")
  if (nsimEIM <= 50)
    warning("argument 'nsimEIM' should be an integer ",
            "greater than 50, say")

  if (length(ishape))
    if (!is.Numeric(ishape, positive = TRUE))
      stop("argument 'ishape' values must be positive")
  if (length(iscale))
    if (!is.Numeric(iscale, positive = TRUE))
      stop("argument 'iscale' values must be positive")


  new("vglmff",
  blurb = c("Gompertz distribution\n\n",
            "Links: ",
            namesof("scale", lscale, escale ), ", ",
            namesof("shape", lshape, eshape ), "\n",
            "Median: log1p((scale / shape) * log(2)) / scale"),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M,
                                predictors.names = predictors.names,
                                M1 = 2)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...) {
    list(M1 = 2,
         Q1 = 1,
         nsimEIM = .nsimEIM ,
         parameters.names = c("scale", "shape"),
         zero = .zero )
  }, list( .zero = zero, .nsimEIM = nsimEIM ))),

  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       Is.positive.y = TRUE,
                       ncol.w.max = Inf, ncol.y.max = Inf,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    ncoly <- ncol(y)
    M1 <- 2
    extra$ncoly <- ncoly
    extra$M1 <- M1
    M <- M1 * ncoly

    mynames1 <- param.names("scale", ncoly)
    mynames2 <- param.names("shape", ncoly)
    predictors.names <-
      c(namesof(mynames1, .lscale , .escale , tag = FALSE),
        namesof(mynames2, .lshape , .eshape , tag = FALSE))[
        interleave.VGAM(M, M1 = M1)]

    if (!length(etastart)) {
      matH <- matrix(if (length( .ishape )) .ishape else 0 + NA,
                     n, ncoly, byrow = TRUE)
      matC <- matrix(if (length( .iscale )) .iscale else 0 + NA,
                     n, ncoly, byrow = TRUE)

      shape.grid <- c(exp(-seq(4, 0.1, len = 7)), 1,
                      exp( seq(0.1, 4, len = 7)))
      scale.grid <- c(exp(-seq(4, 0.1, len = 7)), 1,
                      exp( seq(0.1, 4, len = 7)))

      for (spp. in 1:ncoly) {
        yvec <- y[, spp.]
        wvec <- w[, spp.]

        gompertz.Loglikfun <- function(scaleval, y, x, w, extraargs) {
          ans <- sum(c(w) * dgompertz(x = y,
                                      shape = extraargs$Shape,
                                      scale = scaleval, log = TRUE))
          ans
        }

        mymat <- matrix(-1, length(shape.grid), 2)
        for (jlocal in seq_along(shape.grid)) {
          mymat[jlocal, ] <-
            grid.search(scale.grid,
                        objfun = gompertz.Loglikfun,
                        y = yvec, x = x, w = wvec,
                        ret.objfun = TRUE,
                        extraargs = list(Shape = shape.grid[jlocal]))
        }
        index.shape <- which(mymat[, 2] == max(mymat[, 2]))[1]

        if (!length( .ishape ))
          matH[, spp.] <- shape.grid[index.shape]
        if (!length( .iscale ))
          matC[, spp.] <- mymat[index.shape, 1]
      } # spp.
etastart <- cbind(theta2eta(matC, .lscale , .escale ), theta2eta(matH, .lshape , .eshape ))[, interleave.VGAM(M, M1 = M1)] } # End of !length(etastart) }), list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale, .ishape = ishape, .iscale = iscale ))), linkinv = eval(substitute(function(eta, extra = NULL) { scale <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , .escale ) shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , .eshape ) log1p((scale / shape) * log(2)) / scale }, list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .lscale , ncoly), rep_len( .lshape , ncoly))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .escale misc$earg[[M1*ii ]] <- .eshape } misc$M1 <- M1 misc$expected <- TRUE misc$multipleResponses <- TRUE misc$nsimEIM <- .nsimEIM }), list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { scale <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , .escale ) shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dgompertz(x = y, scale = scale, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale ))), vfamily = c("gompertz"), # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, validparams = eval(substitute(function(eta, y, extra = NULL) { Scale <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lscale , .escale ) shape <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lshape , .eshape ) okay1 <- all(is.finite(Scale)) && all(0 < Scale) && all(is.finite(shape)) && all(0 < shape) okay1 }, list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) Scale <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , .escale ) Shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , .eshape ) rgompertz(nsim * length(Scale), shape = c(Shape), scale = c(Scale)) }, list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale ))), deriv = eval(substitute(expression({ M1 <- 2 scale <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lscale , .escale ) shape <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lshape , .eshape ) temp2 <- exp(y * scale) temp4 <- -expm1(y * scale) dl.dshape <- 1 / shape + temp4 / scale dl.dscale <- y * (1 - shape * temp2 / scale) - shape * temp4 / scale^2 dscale.deta <- dtheta.deta(scale, .lscale , .escale ) dshape.deta <- dtheta.deta(shape, .lshape , .eshape ) dthetas.detas <- cbind(dscale.deta, dshape.deta) myderiv <- c(w) * cbind(dl.dscale, dl.dshape) * dthetas.detas myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale ))), weight = eval(substitute(expression({ NOS <- M / M1 dThetas.detas <- dthetas.detas[, interleave.VGAM(M, M1 = M1)] wz <- matrix(0.0, n, M + M - 1) # wz is 'tridiagonal' 
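## Minimal fitting sketch for gompertz() (the simulated data frame
## below is hypothetical; it mirrors the usual VGAM workflow):
##   set.seed(1)
##   gdata <- data.frame(y1 = rgompertz(1000, scale = exp(1),
##                                      shape = exp(-1)))
##   fit1 <- vglm(y1 ~ 1, gompertz, data = gdata, trace = TRUE)
##   coef(fit1, matrix = TRUE)
##   Coef(fit1)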
    ind1 <- iam(NA, NA, M = M1, both = TRUE, diag = TRUE)

    for (spp. in 1:NOS) {
      run.varcov <- 0
      Shape <- shape[, spp.]
      Scale <- scale[, spp.]

      for (ii in 1:( .nsimEIM )) {
        ysim <- rgompertz(n = n, shape = Shape, scale = Scale)

        temp2 <- exp(ysim * Scale)
        temp4 <- -expm1(ysim * Scale)
        dl.dshape <- 1 / Shape + temp4 / Scale
        dl.dscale <- ysim * (1 - Shape * temp2 / Scale) -
                     Shape * temp4 / Scale^2

        temp7 <- cbind(dl.dscale, dl.dshape)
        run.varcov <- run.varcov +
                      temp7[, ind1$row.index] * temp7[, ind1$col.index]
      }
      run.varcov <- cbind(run.varcov / .nsimEIM )

      wz1 <- if (intercept.only)
        matrix(colMeans(run.varcov),
               nrow = n, ncol = ncol(run.varcov), byrow = TRUE) else
        run.varcov

      wz1 <- wz1 * dThetas.detas[, M1 * (spp. - 1) + ind1$row] *
                   dThetas.detas[, M1 * (spp. - 1) + ind1$col]

      for (jay in 1:M1)
        for (kay in jay:M1) {
          cptr <- iam((spp. - 1) * M1 + jay,
                      (spp. - 1) * M1 + kay, M = M)
          wz[, cptr] <- wz1[, iam(jay, kay, M = M1)]
        }
    } # End of for (spp.) loop

    w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = M / M1)
  }), list( .lscale = lscale, .escale = escale,
            .nsimEIM = nsimEIM ))))
} # gompertz()


dmoe <- function(x, alpha = 1, lambda = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  LLL <- max(length(x), length(alpha), length(lambda))
  if (length(x)      != LLL) x      <- rep_len(x,      LLL)
  if (length(alpha)  != LLL) alpha  <- rep_len(alpha,  LLL)
  if (length(lambda) != LLL) lambda <- rep_len(lambda, LLL)

  index0 <- (x < 0)
  if (log.arg) {
    ans <- log(lambda) + (lambda * x) -
           2 * log(expm1(lambda * x) + alpha)
    ans[index0] <- log(0)
  } else {
    ans <- lambda * exp(lambda * x) / (expm1(lambda * x) + alpha)^2
    ans[index0] <- 0
  }
  ans[alpha <= 0 | lambda <= 0] <- NaN
  ans
}


pmoe <- function(q, alpha = 1, lambda = 1) {
  ret <- ifelse(alpha <= 0 | lambda <= 0, NaN,
                1 - 1 / (expm1(lambda * q) + alpha))
  ret[q < log(2 - alpha) / lambda] <- 0
  ret
}


qmoe <- function(p, alpha = 1, lambda = 1) {
  ifelse(p < 0 | p > 1 | alpha <= 0 | lambda <= 0, NaN,
         log1p(-alpha + 1 / (1 - p)) / lambda)
}


rmoe <- function(n, alpha = 1, lambda = 1) {
  qmoe(p = runif(n), alpha = alpha, lambda = lambda)
}


exponential.mo.control <- function(save.weights = TRUE, ...)
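## Round-trip check for the Marshall-Olkin exponential p/q pair
## (a sketch; the alpha and lambda values are arbitrary):
##   pp <- c(0.1, 0.5, 0.9)
##   pmoe(qmoe(pp, alpha = 0.5, lambda = 2),
##        alpha = 0.5, lambda = 2)   # should return ~ pp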
{
  list(save.weights = save.weights)
}


exponential.mo <-
  function(lalpha = "loge", llambda = "loge",
           ealpha = list(), elambda = list(),
           ialpha = 1, ilambda = NULL,
           imethod = 1, nsimEIM = 200, zero = NULL) {

  stop("fundamentally unable to estimate the parameters as ",
       "the support of the density depends on the parameters")

  lalpha <- as.list(substitute(lalpha))
  ealpha <- link2list(lalpha)
  lalpha <- attr(ealpha, "function.name")

  llambda <- as.list(substitute(llambda))
  elambda <- link2list(llambda)
  llambda <- attr(elambda, "function.name")

  lalpha0 <- lalpha
  ealpha0 <- ealpha
  ialpha0 <- ialpha

  if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE))
    stop("bad input for argument 'nsimEIM'")
  if (nsimEIM <= 50)
    warning("argument 'nsimEIM' should be an integer ",
            "greater than 50, say")

  if (length(ialpha0))
    if (!is.Numeric(ialpha0, positive = TRUE))
      stop("argument 'ialpha' values must be positive")
  if (length(ilambda))
    if (!is.Numeric(ilambda, positive = TRUE))
      stop("argument 'ilambda' values must be positive")

  if (!is.Numeric(imethod, length.arg = 1,
                  integer.valued = TRUE, positive = TRUE) ||
      imethod > 2)
    stop("argument 'imethod' must be 1 or 2")


  new("vglmff",
  blurb = c("Marshall-Olkin exponential distribution\n\n",
            "Links: ",
            namesof("alpha",  lalpha0, ealpha0 ), ", ",
            namesof("lambda", llambda, elambda ), "\n",
            "Median: log(3 - alpha) / lambda"),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M,
                                predictors.names = predictors.names,
                                M1 = 2)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...) {
    list(M1 = 2,
         Q1 = 1,
         nsimEIM = .nsimEIM ,
         parameters.names = c("alpha", "lambda"),
         zero = .zero )
  }, list( .zero = zero, .nsimEIM = nsimEIM ))),

  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       Is.positive.y = TRUE,
                       ncol.w.max = Inf, ncol.y.max = Inf,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    ncoly <- ncol(y)
    M1 <- 2
    extra$ncoly <- ncoly
    extra$M1 <- M1
    M <- M1 * ncoly

    mynames1 <- param.names("alpha",  ncoly)
    mynames2 <- param.names("lambda", ncoly)
    predictors.names <-
      c(namesof(mynames1, .lalpha0 , .ealpha0 , tag = FALSE),
        namesof(mynames2, .llambda , .elambda , tag = FALSE))[
        interleave.VGAM(M, M1 = M1)]

    if (!length(etastart)) {
      matL <- matrix(if (length( .ilambda )) .ilambda else 0,
                     n, ncoly, byrow = TRUE)
      matA <- matrix(if (length( .ialpha0 )) .ialpha0 else 0,
                     n, ncoly, byrow = TRUE)

      for (spp. in 1:ncoly) {
        yvec <- y[, spp.]
        wvec <- w[, spp.]

        moexpon.Loglikfun <- function(lambdaval, y, x, w, extraargs) {
          ans <- sum(c(w) * dmoe(x = y, alpha = extraargs$alpha,
                                 lambda = lambdaval, log = TRUE))
          ans
        }
        Alpha.init <- .ialpha0
        lambda.grid <- seq(0.1, 10.0, len = 21)
        Lambda.init <- grid.search(lambda.grid,
                                   objfun = moexpon.Loglikfun,
                                   y = yvec, x = x, w = wvec,
                                   extraargs = list(alpha = Alpha.init))

        if (!length( .ialpha0 )) matA[, spp.] <- Alpha.init
        if (!length( .ilambda )) matL[, spp.] <- Lambda.init
      } # spp.

      etastart <-
        cbind(theta2eta(matA, .lalpha0 , .ealpha0 ),
              theta2eta(matL, .llambda , .elambda ))[,
              interleave.VGAM(M, M1 = M1)]
      mustart <- NULL # Since etastart has been computed.
} # End of !length(etastart) }), list( .lalpha0 = lalpha0, .llambda = llambda, .ealpha0 = ealpha0, .elambda = elambda, .ialpha0 = ialpha0, .ilambda = ilambda, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { alpha0 <- eta2theta(eta[, c(TRUE, FALSE)], .lalpha0 , .ealpha0 ) lambda <- eta2theta(eta[, c(FALSE, TRUE)], .llambda , .elambda ) log(3 - alpha0) / lambda }, list( .lalpha0 = lalpha0, .llambda = llambda, .ealpha0 = ealpha0, .elambda = elambda ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .lalpha0 , ncoly), rep_len( .llambda , ncoly))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .ealpha0 misc$earg[[M1*ii ]] <- .elambda } misc$M1 <- M1 misc$imethod <- .imethod misc$expected <- TRUE misc$multipleResponses <- TRUE misc$nsimEIM <- .nsimEIM }), list( .lalpha0 = lalpha0, .llambda = llambda, .ealpha0 = ealpha0, .elambda = elambda, .nsimEIM = nsimEIM, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { alpha0 <- eta2theta(eta[, c(TRUE, FALSE)], .lalpha0 , .ealpha0 ) lambda <- eta2theta(eta[, c(FALSE, TRUE)], .llambda , .elambda ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * log(dmoe(x = y, alpha = alpha0, lambda = lambda)) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lalpha0 = lalpha0, .llambda = llambda, .ealpha0 = ealpha0, .elambda = elambda ))), vfamily = c("exponential.mo"), validparams = eval(substitute(function(eta, y, extra = NULL) { alpha0 <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lalpha0 , .ealpha0 ) lambda <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .llambda , .elambda ) okay1 <- all(is.finite(alpha0)) && all(0 < alpha0) && all(is.finite(lambda)) && all(0 < lambda) okay1 }, list( .lalpha0 = lalpha0, .llambda = llambda, .ealpha0 = ealpha0, .elambda = elambda ))), deriv = eval(substitute(expression({ M1 <- 2 alpha0 <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lalpha0 , .ealpha0 ) lambda <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .llambda , .elambda ) temp2 <- (expm1(lambda * y) + alpha0) dl.dalpha0 <- -2 / temp2 dl.dlambda <- 1 / lambda + y - 2 * y * exp(lambda * y) / temp2 dalpha0.deta <- dtheta.deta(alpha0, .lalpha0 , .ealpha0 ) dlambda.deta <- dtheta.deta(lambda, .llambda , .elambda ) dthetas.detas <- cbind(dalpha0.deta, dlambda.deta) myderiv <- c(w) * cbind(dl.dalpha0, dl.dlambda) * dthetas.detas myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lalpha0 = lalpha0, .llambda = llambda, .ealpha0 = ealpha0, .elambda = elambda ))), weight = eval(substitute(expression({ NOS <- M / M1 dThetas.detas <- dthetas.detas[, interleave.VGAM(M, M1 = M1)] wz <- matrix(0.0, n, M + M - 1) # wz is 'tridiagonal' ind1 <- iam(NA, NA, M = M1, both = TRUE, diag = TRUE) for (spp. in 1:NOS) { run.varcov <- 0 Alph <- alpha0[, spp.] Lamb <- lambda[, spp.] 
      for (ii in 1:( .nsimEIM )) {
        ysim <- rmoe(n = n, alpha = Alph, lambda = Lamb)

        temp2 <- (expm1(Lamb * ysim) + Alph)
        dl.dalpha0 <- -2 / temp2
        dl.dlambda <- 1 / Lamb + ysim -
                      2 * ysim * exp(Lamb * ysim) / temp2

        temp3 <- cbind(dl.dalpha0, dl.dlambda)
        run.varcov <- run.varcov +
                      temp3[, ind1$row.index] * temp3[, ind1$col.index]
      }
      run.varcov <- cbind(run.varcov / .nsimEIM )

      wz1 <- if (intercept.only)
        matrix(colMeans(run.varcov),
               nrow = n, ncol = ncol(run.varcov), byrow = TRUE) else
        run.varcov

      wz1 <- wz1 * dThetas.detas[, M1 * (spp. - 1) + ind1$row] *
                   dThetas.detas[, M1 * (spp. - 1) + ind1$col]

      for (jay in 1:M1)
        for (kay in jay:M1) {
          cptr <- iam((spp. - 1) * M1 + jay,
                      (spp. - 1) * M1 + kay, M = M)
          wz[, cptr] <- wz1[, iam(jay, kay, M = M1)]
        }
    } # End of for (spp.) loop

    w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = M / M1)
  }), list( .llambda = llambda, .elambda = elambda,
            .nsimEIM = nsimEIM ))))
} # exponential.mo()


genbetaII.Loglikfun4 <- function(scaleval, shape1.a, shape2.p, shape3.q,
                                 y, x, w, extraargs) {
  sum(c(w) * dgenbetaII(x = y, scale = scaleval,
                        shape1.a = shape1.a,
                        shape2.p = shape2.p,
                        shape3.q = shape3.q, log = TRUE))
}


genbetaII <-
  function(lscale = "loge", lshape1.a = "loge",
           lshape2.p = "loge", lshape3.q = "loge",
           iscale = NULL, ishape1.a = NULL,
           ishape2.p = NULL, ishape3.q = NULL, lss = TRUE,
           gscale = exp(-5:5), gshape1.a = exp(-5:5),
           gshape2.p = exp(-5:5), gshape3.q = exp(-5:5),
           zero = "shape") {

  if (length(lss) != 1 || !is.logical(lss))
    stop("Argument 'lss' not specified correctly")

  if (length(iscale) && !is.Numeric(iscale, positive = TRUE))
    stop("Bad input for argument 'iscale'")
  if (length(ishape1.a) && !is.Numeric(ishape1.a, positive = TRUE))
    stop("Bad input for argument 'ishape1.a'")
  if (length(ishape2.p) && !is.Numeric(ishape2.p, positive = TRUE))
    stop("Bad input for argument 'ishape2.p'")
  if (length(ishape3.q) && !is.Numeric(ishape3.q, positive = TRUE))
    stop("Bad input for argument 'ishape3.q'")

  lscale <- as.list(substitute(lscale))
  escale <- link2list(lscale)
  lscale <- attr(escale, "function.name")

  lshape1.a <- as.list(substitute(lshape1.a))
  eshape1.a <- link2list(lshape1.a)
  lshape1.a <- attr(eshape1.a, "function.name")

  lshape2.p <- as.list(substitute(lshape2.p))
  eshape2.p <- link2list(lshape2.p)
  lshape2.p <- attr(eshape2.p, "function.name")

  lshape3.q <- as.list(substitute(lshape3.q))
  eshape3.q <- link2list(lshape3.q)
  lshape3.q <- attr(eshape3.q, "function.name")


  new("vglmff",
  blurb = c("Generalized Beta II distribution \n\n",
            "Links: ",
            ifelse(lss,
                   namesof("scale"   , lscale   , earg = escale),
                   namesof("shape1.a", lshape1.a, earg = eshape1.a)), ", ",
            ifelse(lss,
                   namesof("shape1.a", lshape1.a, earg = eshape1.a),
                   namesof("scale"   , lscale   , earg = escale)), ", ",
            namesof("shape2.p", lshape2.p, earg = eshape2.p), ", ",
            namesof("shape3.q", lshape3.q, earg = eshape3.q), "\n",
            "Mean: scale * gamma(shape2.p + 1/shape1.a) * ",
            "gamma(shape3.q - 1/shape1.a) / ",
            "(gamma(shape2.p) * gamma(shape3.q))"),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M,
                                predictors.names = predictors.names,
                                M1 = 4)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...)
{ list(M1 = 4, Q1 = 1, expected = TRUE, zero = .zero , multipleResponses = TRUE, parameters.names = if ( .lss ) c("scale", "shape1.a", "shape2.p", "shape3.q") else c("shape1.a", "scale", "shape2.p", "shape3.q"), lscale = .lscale , lshape1.a = .lshape1.a , escale = .escale , eshape1.a = .eshape1.a , lshape2.p = .lshape2.p , lshape3.q = .lshape3.q , eshape2.p = .eshape2.p , eshape3.q = .eshape3.q ) }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .lss = lss , .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) y <- temp5$y w <- temp5$w M1 <- 4 # Number of parameters for one response NOS <- ncoly <- ncol(y) M <- M1*ncol(y) scaL.names <- param.names("scale", NOS) sha1.names <- param.names("shape1.a", NOS) sha2.names <- param.names("shape2.p", NOS) sha3.names <- param.names("shape3.q", NOS) predictors.names <- c( if ( .lss ) { c(namesof(scaL.names , .lscale , earg = .escale , tag = FALSE), namesof(sha1.names , .lshape1.a , earg = .eshape1.a , tag = FALSE)) } else { c(namesof(sha1.names , .lshape1.a , earg = .eshape1.a , tag = FALSE), namesof(scaL.names , .lscale , earg = .escale , tag = FALSE)) }, namesof(sha2.names , .lshape2.p , earg = .eshape2.p , tag = FALSE), namesof(sha3.names , .lshape3.q , earg = .eshape3.q , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { sc.init <- aa.init <- pp.init <- qq.init <- matrix(NA_real_, n, NOS) for (spp. in 1:NOS) { # For each response 'y_spp.'... do: yvec <- y[, spp.] wvec <- w[, spp.] gscale <- .gscale gshape1.a <- .gshape1.a gshape2.p <- .gshape2.p gshape3.q <- .gshape3.q if (length( .iscale )) gscale <- rep_len( .iscale , NOS) if (length( .ishape1.a )) gshape1.a <- rep_len( .ishape1.a , NOS) if (length( .ishape2.p )) gshape2.p <- rep_len( .ishape2.p , NOS) if (length( .ishape3.q )) gshape3.q <- rep_len( .ishape3.q , NOS) try.this <- grid.search4(gscale, gshape1.a, gshape2.p, gshape3.q, objfun = genbetaII.Loglikfun4, # x = x, y = yvec, w = wvec, # extraargs = NULL, ret.objfun = TRUE) # Last value is the loglik sc.init[, spp.] <- try.this["Value1"] aa.init[, spp.] <- try.this["Value2"] pp.init[, spp.] <- try.this["Value3"] qq.init[, spp.] <- try.this["Value4"] } # End of for (spp. ...) finite.mean <- 1 < aa.init * qq.init COP.use <- 1.15 while (FALSE && any(!finite.mean)) { qq.init[!finite.mean] <- 0.1 + qq.init[!finite.mean] * COP.use aa.init[!finite.mean] <- 0.1 + aa.init[!finite.mean] * COP.use finite.mean <- 1 < aa.init * qq.init } etastart <- cbind(if ( .lss ) cbind(theta2eta(sc.init, .lscale , earg = .escale ), theta2eta(aa.init, .lshape1.a , earg = .eshape1.a )) else cbind(theta2eta(aa.init, .lshape1.a , earg = .eshape1.a ), theta2eta(sc.init, .lscale , earg = .escale )), theta2eta(pp.init , .lshape2.p , earg = .eshape2.p ), theta2eta(qq.init , .lshape3.q , earg = .eshape3.q )) etastart <- etastart[, interleave.VGAM(M, M1 = M1)] } # End of etastart. 
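## Note on interleave.VGAM() used just above: the etastart columns are
## assembled blockwise (all scales, then all shape1.a, and so on), and
## the index returned by interleave.VGAM(M, M1 = M1) reorders them so
## that the M1 = 4 linear predictors of each response sit side by
## side, e.g. scale1, shape1.a1, shape2.p1, shape3.q1, scale2, ...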
}), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .iscale = iscale , .ishape1.a = ishape1.a, .gscale = gscale , .gshape1.a = gshape1.a, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .ishape2.p = ishape2.p, .ishape3.q = ishape3.q, .gshape2.p = gshape2.p, .gshape3.q = gshape3.q, .lss = lss ))), linkinv = eval(substitute(function(eta, extra = NULL) { M1 <- 4 NOS <- ncol(eta) / M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 3, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lshape1.a , earg = .eshape1.a ) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 3, drop = FALSE], .lshape1.a , earg = .eshape1.a ) Scale <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lscale , earg = .escale ) } parg <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape2.p , earg = .eshape2.p ) qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape3.q , earg = .eshape3.q ) ans <- cbind(Scale * exp(lgamma(parg + 1/aa) + lgamma(qq - 1/aa) - lgamma(parg) - lgamma(qq))) ans }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .lss = lss ))), last = eval(substitute(expression({ M1 <- 4 misc$link <- c(rep_len( if ( .lss ) .lscale else .lshape1.a , ncoly), rep_len( if ( .lss ) .lshape1.a else .lscale , ncoly), rep_len( .lshape2.p , ncoly), rep_len( .lshape3.q , ncoly))[ interleave.VGAM(M, M1 = M1)] temp.names <- if ( .lss ) { c(scaL.names, sha1.names, sha2.names, sha3.names) } else { c(sha1.names, scaL.names, sha2.names, sha3.names) } names(misc$link) <- temp.names[interleave.VGAM(M, M1 = M1)] misc$earg <- vector("list", M) names(misc$earg) <- temp.names[interleave.VGAM(M, M1 = M1)] for (ii in 1:ncoly) { if ( .lss ) { misc$earg[[M1*ii-3]] <- .escale misc$earg[[M1*ii-2]] <- .eshape1.a } else { misc$earg[[M1*ii-3]] <- .eshape1.a misc$earg[[M1*ii-2]] <- .escale } misc$earg[[M1*ii-1]] <- .eshape2.p misc$earg[[M1*ii ]] <- .eshape3.q } misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .lss = lss ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { M1 <- 4 NOS <- ncol(eta)/M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 3, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lshape1.a , earg = .eshape1.a ) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 3, drop = FALSE], .lshape1.a , earg = .eshape1.a ) Scale <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lscale , earg = .escale ) } parg <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape2.p , earg = .eshape2.p ) qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape3.q , earg = .eshape3.q ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dgenbetaII(x = y, scale = Scale, shape1.a = aa, shape2.p = parg, shape3.q = qq, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .lss = lss ))), vfamily = c("genbetaII"), validparams = 
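## The mean in the blurb/linkinv exists only when
## -shape1.a * shape2.p < 1 < shape1.a * shape3.q (checked in
## validparams just below).  Numerical sketch with arbitrary values:
##   b <- 2; a <- 3; p <- 1.5; q <- 2
##   b * gamma(p + 1/a) * gamma(q - 1/a) / (gamma(p) * gamma(q))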
eval(substitute(function(eta, y, extra = NULL) { M1 <- 4 NOS <- ncol(eta) / M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 3, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lshape1.a , earg = .eshape1.a) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 3, drop = FALSE], .lshape1.a , earg = .eshape1.a) Scale <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lscale , earg = .escale ) } parg <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape2.p , earg = .eshape2.p) qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape3.q , earg = .eshape3.q) okay1 <- all(is.finite(Scale)) && all(Scale > 0) && all(is.finite(aa )) && all(aa > 0) && all(is.finite(parg )) && all(parg > 0) && all(is.finite(qq )) && all(qq > 0) okay.support <- if (okay1) all(-aa * parg < 1 & 1 < aa * qq) else TRUE if (!okay.support) warning("parameter constraint -a*p < 1 < a*q has been violated; ", "solution may be at the boundary of the ", "parameter space.") okay1 && okay.support }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .lss = lss ))), deriv = eval(substitute(expression({ M1 <- 4 NOS <- ncol(eta)/M1 # Needed for summary() if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 3, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lshape1.a , earg = .eshape1.a) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 3, drop = FALSE], .lshape1.a , earg = .eshape1.a) Scale <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lscale , earg = .escale ) } parg <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape2.p , earg = .eshape2.p) qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape3.q , earg = .eshape3.q) temp1 <- log(y/Scale) temp2 <- (y/Scale)^aa temp3 <- digamma(parg + qq) temp3a <- digamma(parg) temp3b <- digamma(qq) temp4 <- log1p(temp2) dl.dscale <- (aa/Scale) * (-parg + (parg+qq) / (1+1/temp2)) dl.da <- 1/aa + parg * temp1 - (parg+qq) * temp1 / (1+1/temp2) dl.dp <- aa * temp1 + temp3 - temp3a - temp4 dl.dq <- temp3 - temp3b - temp4 dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) da.deta <- dtheta.deta(aa, .lshape1.a , earg = .eshape1.a ) dp.deta <- dtheta.deta(parg, .lshape2.p , earg = .eshape2.p ) dq.deta <- dtheta.deta(qq, .lshape3.q , earg = .eshape3.q ) myderiv <- if ( .lss ) { c(w) * cbind(dl.dscale * dscale.deta, dl.da * da.deta, dl.dp * dp.deta, dl.dq * dq.deta) } else { c(w) * cbind(dl.da * da.deta, dl.dscale * dscale.deta, dl.dp * dp.deta, dl.dq * dq.deta) } myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .lss = lss ))), weight = eval(substitute(expression({ temp5 <- trigamma(parg + qq) temp5a <- trigamma(parg) temp5b <- trigamma(qq) ned2l.da <- (1 + parg + qq + parg * qq * (temp5a + temp5b + (temp3b - temp3a + (parg-qq)/(parg*qq))^2 - (parg^2 + qq^2) / (parg*qq)^2)) / (aa^2 * (1+parg+qq)) ned2l.dscale <- (aa^2) * parg * qq / ((1+parg+qq) * Scale^2) ned2l.dp <- temp5a - temp5 ned2l.dq <- temp5b - temp5 ned2l.dascale <- (parg - qq - parg * qq * (temp3a -temp3b)) / (Scale*(1 + parg+qq)) ned2l.dap <- -(qq * (temp3a -temp3b) -1) / (aa*(parg+qq)) ned2l.daq <- -(parg * (temp3b -temp3a) -1) / (aa*(parg+qq)) ned2l.dscalep <- aa * qq / (Scale*(parg+qq)) ned2l.dscaleq <- -aa * parg / 
(Scale*(parg+qq)) ned2l.dpq <- -temp5 wz <- if ( .lss ) { array(c(c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.da * da.deta^2, c(w) * ned2l.dp * dp.deta^2, c(w) * ned2l.dq * dq.deta^2, c(w) * ned2l.dascale * da.deta * dscale.deta, c(w) * ned2l.dap * da.deta * dp.deta, c(w) * ned2l.dpq * dp.deta * dq.deta, c(w) * ned2l.dscalep * dscale.deta * dp.deta, c(w) * ned2l.daq * da.deta * dq.deta, c(w) * ned2l.dscaleq * dscale.deta * dq.deta), dim = c(n, M/M1, M1*(M1+1)/2)) } else { array(c(c(w) * ned2l.da * da.deta^2, c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.dp * dp.deta^2, c(w) * ned2l.dq * dq.deta^2, c(w) * ned2l.dascale * da.deta * dscale.deta, c(w) * ned2l.dscalep * dscale.deta * dp.deta, c(w) * ned2l.dpq * dp.deta * dq.deta, c(w) * ned2l.dap * da.deta * dp.deta, c(w) * ned2l.dscaleq * dscale.deta * dq.deta, c(w) * ned2l.daq * da.deta * dq.deta), dim = c(n, M/M1, M1*(M1+1)/2)) } wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .lss = lss )))) } dgenbetaII <- function(x, scale = 1, shape1.a, shape2.p, shape3.q, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("Bad input for argument 'log'") rm(log) logden <- log(shape1.a) + (shape1.a * shape2.p - 1) * log(abs(x)) - shape1.a * shape2.p * log(scale) - lbeta(shape2.p, shape3.q) - (shape2.p + shape3.q) * log1p((abs(x)/scale)^shape1.a) if (any(x <= 0) || any(is.infinite(x))) { LLL <- max(length(x), length(scale), length(shape1.a), length(shape2.p), length(shape3.q)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) if (length(shape1.a) != LLL) shape1.a <- rep_len(shape1.a, LLL) if (length(shape2.p) != LLL) shape2.p <- rep_len(shape2.p, LLL) if (length(shape3.q) != LLL) shape3.q <- rep_len(shape3.q, LLL) logden[is.infinite(x)] <- log(0) logden[x < 0] <- log(0) x.eq.0 <- !is.na(x) & (x == 0) if (any(x.eq.0)) { axp <- shape1.a[x.eq.0] * shape2.p[x.eq.0] logden[x.eq.0 & axp < 1] <- log(Inf) ind5 <- x.eq.0 & axp == 1 logden[ind5] <- log(shape1.a[ind5]) - shape1.a[ind5] * shape2.p[ind5] * log(scale[ind5]) - lbeta(shape2.p[ind5], shape3.q[ind5]) - (shape2.p[ind5] + shape3.q[ind5]) * log1p((0/scale[ind5])^shape1.a[ind5]) logden[x.eq.0 & axp > 1] <- log(0) } } if (log.arg) logden else exp(logden) } rsinmad <- function(n, scale = 1, shape1.a, shape3.q) qsinmad(runif(n), shape1.a = shape1.a, scale = scale, shape3.q = shape3.q) rlomax <- function(n, scale = 1, shape3.q) rsinmad(n, scale = scale, shape1.a = 1, shape3.q = shape3.q) rfisk <- function(n, scale = 1, shape1.a) rsinmad(n, scale = scale, shape1.a = shape1.a, shape3.q = 1) rparalogistic <- function(n, scale = 1, shape1.a) rsinmad(n, scale = scale, shape1.a = shape1.a, shape3.q = shape1.a) rdagum <- function(n, scale = 1, shape1.a, shape2.p) qdagum(runif(n), scale = scale, shape1.a = shape1.a, shape2.p = shape2.p) rinv.lomax <- function(n, scale = 1, shape2.p) rdagum(n, scale = scale, shape1.a = 1, shape2.p = shape2.p) rinv.paralogistic <- function(n, scale = 1, shape1.a) rdagum(n, scale = scale, shape1.a = shape1.a, shape2.p = shape1.a) qsinmad <- function(p, scale = 1, shape1.a, shape3.q, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") LLL <- max(length(p), 
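## The Singh-Maddala, Dagum, Fisk, Lomax, etc. d/p/q/r functions in
## this file are special cases of the GB2 family, e.g. (illustrative
## checks with arbitrary parameter values):
##   xx <- c(0.5, 1, 2)
##   dsinmad(xx, scale = 2, shape1.a = 3, shape3.q = 4) -
##     dgenbetaII(xx, scale = 2, shape1.a = 3,
##                shape2.p = 1, shape3.q = 4)               # ~ 0
##   dfisk(xx, scale = 2, shape1.a = 3) -
##     dsinmad(xx, scale = 2, shape1.a = 3, shape3.q = 1)   # ~ 0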
length(shape1.a), length(scale), length(shape3.q)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(shape1.a) != LLL) shape1.a <- rep_len(shape1.a, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) if (length(shape3.q) != LLL) shape3.q <- rep_len(shape3.q, LLL) if (lower.tail) { if (log.p) { ln.p <- p ans <- scale * expm1((-1/shape3.q) * log(-expm1(ln.p)))^(1/shape1.a) } else { ans <- scale * expm1((-1/shape3.q) * log1p(-p))^(1/shape1.a) ans[p == 0] <- 0 ans[p == 1] <- Inf } } else { if (log.p) { ln.p <- p ans <- scale * expm1(-ln.p / shape3.q)^(1/shape1.a) } else { ans <- scale * expm1(-log(p) / shape3.q)^(1/shape1.a) ans[p == 0] <- Inf ans[p == 1] <- 0 } } ans[scale <= 0 | shape1.a <= 0 | shape3.q <= 0] <- NaN ans } qlomax <- function(p, scale = 1, shape3.q, lower.tail = TRUE, log.p = FALSE) qsinmad(p, shape1.a = 1, scale = scale, shape3.q = shape3.q, lower.tail = lower.tail, log.p = log.p) qfisk <- function(p, scale = 1, shape1.a, lower.tail = TRUE, log.p = FALSE) qsinmad(p, shape1.a = shape1.a, scale = scale, shape3.q = 1, lower.tail = lower.tail, log.p = log.p) qparalogistic <- function(p, scale = 1, shape1.a, lower.tail = TRUE, log.p = FALSE) qsinmad(p, shape1.a = shape1.a, scale = scale, shape3.q = shape1.a, ## 20150121 KaiH; add shape3.q = shape1.a lower.tail = lower.tail, log.p = log.p) qdagum <- function(p, scale = 1, shape1.a, shape2.p, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") LLL <- max(length(p), length(shape1.a), length(scale), length(shape2.p)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(shape1.a) != LLL) shape1.a <- rep_len(shape1.a, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) if (length(shape2.p) != LLL) shape2.p <- rep_len(shape2.p, LLL) if (lower.tail) { if (log.p) { ln.p <- p ans <- scale * (expm1(-ln.p/shape2.p))^(-1/shape1.a) ans[ln.p > 0] <- NaN } else { ans <- scale * (expm1(-log(p)/shape2.p))^(-1/shape1.a) ans[p < 0] <- NaN ans[p == 0] <- 0 ans[p == 1] <- Inf ans[p > 1] <- NaN } } else { if (log.p) { ln.p <- p ans <- scale * (expm1(-log(-expm1(ln.p))/shape2.p))^(-1/shape1.a) ans[ln.p > 0] <- NaN } else { ans <- scale * (expm1(-log1p(-p)/shape2.p))^(-1/shape1.a) ans[p < 0] <- NaN ans[p == 0] <- Inf ans[p == 1] <- 0 ans[p > 1] <- NaN } } ans[scale <= 0 | shape1.a <= 0 | shape2.p <= 0] <- NaN ans } qinv.lomax <- function(p, scale = 1, shape2.p, lower.tail = TRUE, log.p = FALSE) qdagum(p, scale = scale, shape1.a = 1, shape2.p = shape2.p, lower.tail = lower.tail, log.p = log.p) qinv.paralogistic <- function(p, scale = 1, shape1.a, lower.tail = TRUE, log.p = FALSE) qdagum(p, scale = scale, shape1.a = shape1.a, shape2.p = shape1.a, ## 20150121 Kai; add shape2.p = shape1.a lower.tail = lower.tail, log.p = log.p) psinmad <- function(q, scale = 1, shape1.a, shape3.q, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") LLL <- max(length(q), length(shape1.a), length(scale), length(shape3.q)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(shape1.a) != LLL) shape1.a <- rep_len(shape1.a, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) if (length(shape3.q) != LLL) shape3.q <- rep_len(shape3.q, LLL) # 20150121 KaiH if (lower.tail) { if (log.p) { ans <- log1p(-(1 + (q / 
scale)^shape1.a)^(-shape3.q))
      ans[q <= 0 ]  <- -Inf
      ans[q == Inf] <- 0
    } else {
      ans <- exp(log1p(-(1 + (q / scale)^shape1.a)^(-shape3.q)))
      ans[q <= 0]   <- 0
      ans[q == Inf] <- 1
    }
  } else {
    if (log.p) {
      ans <- (-shape3.q) * log1p((q / scale)^shape1.a)
      ans[q <= 0]   <- 0
      ans[q == Inf] <- -Inf
    } else {
      ans <- (1 + (q / scale)^shape1.a)^(-shape3.q)
      ans[q <= 0]   <- 1
      ans[q == Inf] <- 0
    }
  }
  ans[scale <= 0 | shape1.a <= 0 | shape3.q <= 0] <- NaN
  ans
}


plomax <- function(q, scale = 1, shape3.q,  # Change the order
                   lower.tail = TRUE, log.p = FALSE)
  psinmad(q, shape1.a = 1, scale = scale, shape3.q = shape3.q,
          lower.tail = lower.tail, log.p = log.p)


pfisk <- function(q, scale = 1, shape1.a,
                  lower.tail = TRUE, log.p = FALSE)
  psinmad(q, shape1.a = shape1.a, scale = scale, shape3.q = 1,
          lower.tail = lower.tail, log.p = log.p)


pparalogistic <- function(q, scale = 1, shape1.a,  # Change the order
                          lower.tail = TRUE, log.p = FALSE)
  psinmad(q, shape1.a = shape1.a, scale = scale,
          shape3.q = shape1.a,  # Add shape3.q = shape1.a
          lower.tail = lower.tail, log.p = log.p)


pdagum <- function(q, scale = 1, shape1.a, shape2.p,
                   lower.tail = TRUE, log.p = FALSE) {

  if (!is.logical(lower.tail) || length(lower.tail) != 1)
    stop("bad input for argument 'lower.tail'")
  if (!is.logical(log.p) || length(log.p) != 1)
    stop("bad input for argument 'log.p'")

  LLL <- max(length(q), length(shape1.a), length(scale),
             length(shape2.p))
  if (length(q)        != LLL) q        <- rep_len(q,        LLL)
  if (length(shape1.a) != LLL) shape1.a <- rep_len(shape1.a, LLL)
  if (length(scale)    != LLL) scale    <- rep_len(scale,    LLL)
  if (length(shape2.p) != LLL) shape2.p <- rep_len(shape2.p, LLL)

  if (lower.tail) {
    if (log.p) {
      ans <- (-shape2.p) * log1p((q/scale)^(-shape1.a))
      ans[q <= 0 ]  <- -Inf
      ans[q == Inf] <- 0
    } else {
      ans <- exp( (-shape2.p) * log1p((q/scale)^(-shape1.a)) )
      ans[q <= 0]   <- 0
      ans[q == Inf] <- 1
    }
  } else {
    if (log.p) {
      ans <- log1p(-(1 + (q/scale)^(-shape1.a))^(-shape2.p))
      ans[q <= 0]   <- 0
      ans[q == Inf] <- -Inf
    } else {
      ans <- -expm1((-shape2.p) * log1p((q/scale)^(-shape1.a)))
      ans[q <= 0]   <- 1
      ans[q == Inf] <- 0
    }
  }
  ans[shape1.a <= 0 | scale <= 0 | shape2.p <= 0] <- NaN
  ans
}


pinv.lomax <- function(q, scale = 1, shape2.p,
                       lower.tail = TRUE, log.p = FALSE)
  pdagum(q, scale = scale, shape1.a = 1, shape2.p = shape2.p,
         lower.tail = lower.tail, log.p = log.p)


pinv.paralogistic <- function(q, scale = 1, shape1.a,
                              lower.tail = TRUE, log.p = FALSE)
  pdagum(q, scale = scale, shape1.a = shape1.a,
         shape2.p = shape1.a,
         lower.tail = lower.tail, log.p = log.p)


dbetaII <- function(x, scale = 1, shape2.p, shape3.q, log = FALSE)
  dgenbetaII(x = x, scale = scale, shape1.a = 1,
             shape2.p = shape2.p, shape3.q = shape3.q, log = log)


dsinmad <- function(x, scale = 1, shape1.a, shape3.q, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  LLL <- max(length(x), length(shape1.a), length(scale),
             length(shape3.q))
  if (length(x)        != LLL) x        <- rep_len(x,        LLL)
  if (length(shape1.a) != LLL) shape1.a <- rep_len(shape1.a, LLL)
  if (length(scale)    != LLL) scale    <- rep_len(scale,    LLL)
  if (length(shape3.q) != LLL) shape3.q <- rep_len(shape3.q, LLL)

  Loglik <- rep_len(log(0), LLL)
  xok <- (x > 0) & !is.na(x)  # Avoids log(x) if x<0, and handles NAs
  Loglik[xok] <- log(shape1.a[xok]) + log(shape3.q[xok]) +
                 (shape1.a[xok]-1) * log(x[xok]) -
                 shape1.a[xok] * log(scale[xok]) -
                 (1 + shape3.q[xok]) *
                 log1p((x[xok]/scale[xok])^shape1.a[xok])
  x.eq.0 <- (x == 0) & !is.na(x)
  Loglik[x.eq.0] <- log(shape1.a[x.eq.0]) + log(shape3.q[x.eq.0]) -
                    shape1.a[x.eq.0] * log(scale[x.eq.0])
  Loglik[is.na(x)] <- NA
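## Round-trip checks for the Singh-Maddala and Dagum p/q pairs
## (illustrative; arbitrary parameter values):
##   pp <- seq(0.1, 0.9, by = 0.2)
##   psinmad(qsinmad(pp, scale = 2, shape1.a = 3, shape3.q = 4),
##           scale = 2, shape1.a = 3, shape3.q = 4)   # ~ pp
##   pdagum(qdagum(pp, scale = 2, shape1.a = 3, shape2.p = 4),
##          scale = 2, shape1.a = 3, shape2.p = 4)    # ~ pp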
Loglik[is.nan(x)] <- NaN Loglik[x == Inf] <- log(0) if (log.arg) Loglik else exp(Loglik) } dlomax <- function(x, scale = 1, shape3.q, log = FALSE) dsinmad(x, scale = scale, shape1.a = 1, shape3.q = shape3.q, log = log) dfisk <- function(x, scale = 1, shape1.a, log = FALSE) dsinmad(x, scale = scale, shape1.a = shape1.a, shape3.q = 1, log = log) dparalogistic <- function(x, scale = 1, shape1.a, log = FALSE) dsinmad(x, scale = scale, shape1.a = shape1.a, shape3.q = shape1.a, log = log) ddagum <- function(x, scale = 1, shape1.a, shape2.p, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape1.a), length(scale), length(shape2.p)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(shape1.a) != LLL) shape1.a <- rep_len(shape1.a, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) if (length(shape2.p) != LLL) shape2.p <- rep_len(shape2.p, LLL) Loglik <- rep_len(log(0), LLL) xok <- (x > 0) & !is.na(x) # Avoids log(x) if x<0, and handles NAs Loglik[xok] <- log(shape1.a[xok]) + log(shape2.p[xok]) + (shape1.a[xok] * shape2.p[xok]-1) * log( x[xok]) - shape1.a[xok] * shape2.p[xok] * log(scale[xok]) - (1 + shape2.p[xok]) * log1p((x[xok]/scale[xok])^shape1.a[xok]) Loglik[shape2.p <= 0] <- NaN x.eq.0 <- (x == 0) & !is.na(x) Loglik[x.eq.0] <- log(shape1.a[x.eq.0]) + log(shape2.p[x.eq.0]) - shape1.a[x.eq.0] * shape2.p[x.eq.0] * log(scale[x.eq.0]) Loglik[is.na(x)] <- NA Loglik[is.nan(x)] <- NaN Loglik[x == Inf] <- log(0) if (log.arg) Loglik else exp(Loglik) } dinv.lomax <- function(x, scale = 1, shape2.p, log = FALSE) ddagum(x, scale = scale, shape1.a = 1, shape2.p = shape2.p, log = log) dinv.paralogistic <- function(x, scale = 1, shape1.a, log = FALSE) ddagum(x, scale = scale, shape1.a = shape1.a, shape2.p = shape1.a, log = log) sinmad <- function(lscale = "loge", lshape1.a = "loge", lshape3.q = "loge", iscale = NULL, ishape1.a = NULL, ishape3.q = NULL, imethod = 1, lss = TRUE, gscale = exp(-5:5), gshape1.a = exp(-5:5), gshape3.q = exp(-5:5), probs.y = c(0.25, 0.50, 0.75), zero = "shape") { if (length(lss) != 1 && !is.logical(lss)) stop("Argument 'lss' not specified correctly") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("Bad input for argument 'imethod'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("Bad input for argument 'iscale'") if (length(ishape1.a) && !is.Numeric(ishape1.a, positive = TRUE)) stop("Bad input for argument 'ishape1.a'") if (length(ishape3.q) && !is.Numeric(ishape3.q, positive = TRUE)) stop("Bad input for argument 'ishape3.q'") if (length(probs.y) < 2 || max(probs.y) > 1 || !is.Numeric(probs.y, positive = TRUE)) stop("Bad input for argument 'probs.y'") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape1.a <- as.list(substitute(lshape1.a)) eshape1.a <- link2list(lshape1.a) lshape1.a <- attr(eshape1.a, "function.name") lshape3.q <- as.list(substitute(lshape3.q)) eshape3.q <- link2list(lshape3.q) lshape3.q <- attr(eshape3.q, "function.name") new("vglmff", blurb = c("Singh-Maddala distribution \n\n", "Links: ", ifelse (lss, namesof("scale" , lscale , earg = escale), namesof("shape1.a", lshape1.a, earg = eshape1.a)), ", ", ifelse (lss, namesof("shape1.a", lshape1.a, earg = eshape1.a), namesof("scale" , lscale , earg = escale)), ", ", namesof("shape3.q" , lshape3.q, earg = eshape3.q), "\n", "Mean: scale * gamma(shape2.p + 1/shape1.a) * ", 
"gamma(shape3.q - 1/shape1.a) / ", "gamma(shape3.q)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 3, Q1 = 1, expected = TRUE, zero = .zero , multipleResponses = TRUE, parameters.names = if ( .lss ) c("scale", "shape1.a", "shape3.q") else c("shape1.a", "scale", "shape3.q"), lscale = .lscale , lshape1.a = .lshape1.a , escale = .escale , eshape1.a = .eshape1.a , lshape3.q = .lshape3.q , eshape3.q = .eshape3.q ) }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape3.q = lshape3.q, .eshape3.q = eshape3.q, .lss = lss , .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) y <- temp5$y w <- temp5$w M1 <- 3 # Number of parameters for one response NOS <- ncoly <- ncol(y) M <- M1*ncol(y) scaL.names <- param.names("scale", NOS) sha1.names <- param.names("shape1.a", NOS) sha3.names <- param.names("shape3.q", NOS) predictors.names <- c( if ( .lss ) { c(namesof(scaL.names , .lscale , earg = .escale , tag = FALSE), namesof(sha1.names , .lshape1.a , earg = .eshape1.a , tag = FALSE)) } else { c(namesof(sha1.names , .lshape1.a , earg = .eshape1.a , tag = FALSE), namesof(scaL.names , .lscale , earg = .escale , tag = FALSE)) }, namesof(sha3.names , .lshape3.q , earg = .eshape3.q , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { sc.init <- aa.init <- qq.init <- matrix(NA_real_, n, NOS) for (spp. in 1:NOS) { # For each response 'y_spp.'... do: yvec <- y[, spp.] wvec <- w[, spp.] if ( .imethod == 1 ) { gscale <- .gscale gshape1.a <- .gshape1.a gshape3.q <- .gshape3.q if (length( .iscale )) gscale <- rep_len( .iscale , NOS) if (length( .ishape1.a )) gshape1.a <- rep_len( .ishape1.a , NOS) if (length( .ishape3.q )) gshape3.q <- rep_len( .ishape3.q , NOS) try.this <- grid.search4(gscale, gshape1.a, vov3 = 1, gshape3.q, objfun = genbetaII.Loglikfun4, # x = x, y = yvec, w = wvec, # extraargs = NULL, ret.objfun = TRUE) # Last value is the loglik sc.init[, spp.] <- try.this["Value1"] aa.init[, spp.] <- try.this["Value2"] qq.init[, spp.] <- try.this["Value4"] } else { # .imethod == 2 qvec <- .probs.y ishape3.q <- if (length( .ishape3.q )) .ishape3.q else 1 xvec <- log( (1-qvec)^(-1/ ishape3.q ) - 1 ) fit0 <- lsfit(x = xvec, y = log(quantile(yvec, qvec))) sc.init[, spp.] <- if (length( .iscale )) .iscale else exp(fit0$coef[1]) aa.init[, spp.] <- if (length( .ishape1.a )) .ishape1.a else 1/fit0$coef[2] qq.init[, spp.] <- ishape3.q } } # End of for (spp. ...) finite.mean <- 1 < aa.init * qq.init COP.use <- 1.15 while (FALSE && any(!finite.mean)) { qq.init[!finite.mean] <- 0.1 + qq.init[!finite.mean] * COP.use aa.init[!finite.mean] <- 0.1 + aa.init[!finite.mean] * COP.use finite.mean <- 1 < aa.init * qq.init } etastart <- cbind(if ( .lss ) cbind(theta2eta(sc.init, .lscale , earg = .escale ), theta2eta(aa.init, .lshape1.a , earg = .eshape1.a )) else cbind(theta2eta(aa.init, .lshape1.a , earg = .eshape1.a ), theta2eta(sc.init, .lscale , earg = .escale )), theta2eta(qq.init , .lshape3.q , earg = .eshape3.q )) etastart <- etastart[, interleave.VGAM(M, M1 = M1)] } # End of etastart. 
}), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .iscale = iscale , .ishape1.a = ishape1.a, .gscale = gscale , .gshape1.a = gshape1.a, .lshape3.q = lshape3.q, .eshape3.q = eshape3.q, .ishape3.q = ishape3.q, .gshape3.q = gshape3.q, .imethod = imethod , .probs.y = probs.y, .lss = lss ))), linkinv = eval(substitute(function(eta, extra = NULL) { M1 <- 3 NOS <- ncol(eta)/M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a ) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lshape1.a , earg = .eshape1.a ) Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) } parg <- 1 qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape3.q , earg = .eshape3.q ) ans <- Scale * exp(lgamma(parg + 1/aa) + lgamma(qq - 1/aa) - lgamma(parg) - lgamma(qq)) ans[parg + 1/aa <= 0] <- NA ans[qq - 1/aa <= 0] <- NA ans[aa <= 0] <- NA ans[Scale <= 0] <- NA ans[qq <= 0] <- NA ans }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape3.q = lshape3.q, .eshape3.q = eshape3.q, .lss = lss ))), last = eval(substitute(expression({ M1 <- 3 misc$link <- c(rep_len( if ( .lss ) .lscale else .lshape1.a , ncoly), rep_len( if ( .lss ) .lshape1.a else .lscale , ncoly), rep_len( .lshape3.q , ncoly))[ interleave.VGAM(M, M1 = M1)] temp.names <- if ( .lss ) { c(scaL.names, sha1.names, sha3.names) } else { c(sha1.names, scaL.names, sha3.names) } names(misc$link) <- temp.names[interleave.VGAM(M, M1 = M1)] misc$earg <- vector("list", M) names(misc$earg) <- temp.names[interleave.VGAM(M, M1 = M1)] for (ii in 1:ncoly) { if ( .lss ) { misc$earg[[M1*ii-2]] <- .escale misc$earg[[M1*ii-1]] <- .eshape1.a } else { misc$earg[[M1*ii-2]] <- .eshape1.a misc$earg[[M1*ii-1]] <- .escale } misc$earg[[M1*ii ]] <- .eshape3.q } misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape3.q = lshape3.q, .eshape3.q = eshape3.q, .lss = lss ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { M1 <- 3 NOS <- ncol(eta)/M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a ) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lshape1.a , earg = .eshape1.a ) Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) } parg <- 1 qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape3.q , earg = .eshape3.q ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dgenbetaII(x = y, scale = Scale, shape1.a = aa, shape2.p = parg, shape3.q = qq, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape3.q = lshape3.q, .eshape3.q = eshape3.q, .lss = lss ))), vfamily = c("sinmad"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) M1 <- 3 NOS <- ncol(eta)/M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, 
M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a ) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lshape1.a , earg = .eshape1.a ) Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) } qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape3.q , earg = .eshape3.q ) rsinmad(nsim * length(qq), shape1.a = aa, scale = Scale, shape3.q = qq) }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape3.q = lshape3.q, .eshape3.q = eshape3.q ))), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- 3 NOS <- ncol(eta) / M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lshape1.a , earg = .eshape1.a) Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) } parg <- 1 qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape3.q , earg = .eshape3.q) okay1 <- all(is.finite(Scale)) && all(Scale > 0) && all(is.finite(aa )) && all(aa > 0) && all(is.finite(parg )) && all(parg > 0) && all(is.finite(qq )) && all(qq > 0) okay.support <- if (okay1) all(-aa * parg < 1 & 1 < aa * qq) else TRUE if (!okay.support) warning("parameter constraint -a < 1 < a*q has been violated; ", "solution may be at the boundary of the ", "parameter space.") okay1 && okay.support }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape3.q = lshape3.q, .eshape3.q = eshape3.q, .lss = lss ))), deriv = eval(substitute(expression({ M1 <- 3 NOS <- ncol(eta)/M1 # Needed for summary() if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lshape1.a , earg = .eshape1.a) Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) } parg <- 1 qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape3.q , earg = .eshape3.q) temp1 <- log(y/Scale) temp2 <- (y/Scale)^aa temp3 <- digamma(parg + qq) temp3a <- digamma(parg) temp3b <- digamma(qq) temp4 <- log1p(temp2) dl.dscale <- (aa/Scale) * (-parg + (parg+qq) / (1+1/temp2)) dl.da <- 1/aa + parg * temp1 - (parg+qq) * temp1 / (1+1/temp2) dl.dq <- temp3 - temp3b - temp4 dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) da.deta <- dtheta.deta(aa, .lshape1.a , earg = .eshape1.a ) dq.deta <- dtheta.deta(qq, .lshape3.q , earg = .eshape3.q ) myderiv <- if ( .lss ) { c(w) * cbind(dl.dscale * dscale.deta, dl.da * da.deta, dl.dq * dq.deta) } else { c(w) * cbind(dl.da * da.deta, dl.dscale * dscale.deta, dl.dq * dq.deta) } myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape3.q = lshape3.q, .eshape3.q = eshape3.q, .lss = lss ))), weight = eval(substitute(expression({ temp5 <- trigamma(parg + qq) temp5a <- trigamma(parg) temp5b <- trigamma(qq) ned2l.da <- (1 + parg + qq + parg * qq * (temp5a + temp5b + (temp3b - temp3a + (parg-qq)/(parg*qq))^2 - (parg^2 + qq^2) / (parg*qq)^2)) / (aa^2 * (1+parg+qq)) ned2l.dscale <- (aa^2) * parg * qq / ((1+parg+qq) * Scale^2) ned2l.dq <- temp5b - temp5 ned2l.dascale <- (parg - qq - parg * qq * (temp3a -temp3b)) / (Scale*(1 + parg+qq)) ned2l.daq <- -(parg * (temp3b -temp3a) -1) / 
(aa*(parg+qq)) ned2l.dscaleq <- -aa * parg / (Scale*(parg+qq)) wz <- if ( .lss ) { array(c(c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.da * da.deta^2, c(w) * ned2l.dq * dq.deta^2, c(w) * ned2l.dascale * da.deta * dscale.deta, c(w) * ned2l.daq * da.deta * dq.deta, c(w) * ned2l.dscaleq * dscale.deta * dq.deta), dim = c(n, M/M1, M1*(M1+1)/2)) } else { array(c(c(w) * ned2l.da * da.deta^2, c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.dq * dq.deta^2, c(w) * ned2l.dascale * da.deta * dscale.deta, c(w) * ned2l.dscaleq * dscale.deta * dq.deta, c(w) * ned2l.daq * da.deta * dq.deta), dim = c(n, M/M1, M1*(M1+1)/2)) } wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape3.q = lshape3.q, .eshape3.q = eshape3.q, .lss = lss )))) } dagum <- function(lscale = "loge", lshape1.a = "loge", lshape2.p = "loge", iscale = NULL, ishape1.a = NULL, ishape2.p = NULL, imethod = 1, lss = TRUE, gscale = exp(-5:5), gshape1.a = seq(0.75, 4, by = 0.25), # exp(-5:5), gshape2.p = exp(-5:5), probs.y = c(0.25, 0.50, 0.75), zero = "shape") { if (length(lss) != 1 && !is.logical(lss)) stop("Argument 'lss' not specified correctly") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("Bad input for argument 'imethod'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("Bad input for argument 'iscale'") if (length(ishape1.a) && !is.Numeric(ishape1.a, positive = TRUE)) stop("Bad input for argument 'ishape1.a'") if (length(ishape2.p) && !is.Numeric(ishape2.p, positive = TRUE)) stop("Bad input for argument 'ishape2.p'") if (length(probs.y) < 2 || max(probs.y) > 1 || !is.Numeric(probs.y, positive = TRUE)) stop("Bad input for argument 'probs.y'") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape1.a <- as.list(substitute(lshape1.a)) eshape1.a <- link2list(lshape1.a) lshape1.a <- attr(eshape1.a, "function.name") lshape2.p <- as.list(substitute(lshape2.p)) eshape2.p <- link2list(lshape2.p) lshape2.p <- attr(eshape2.p, "function.name") new("vglmff", blurb = c("Dagum distribution \n\n", "Links: ", ifelse (lss, namesof("scale" , lscale , earg = escale), namesof("shape1.a", lshape1.a, earg = eshape1.a)), ", ", ifelse (lss, namesof("shape1.a", lshape1.a, earg = eshape1.a), namesof("scale" , lscale , earg = escale)), ", ", namesof("shape2.p" , lshape2.p, earg = eshape2.p), "\n", "Mean: scale * gamma(shape2.p + 1/shape1.a) * ", "gamma(1 - 1/shape1.a) / ", "gamma(shape2.p)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
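  # The infos() slot below (and its analogues in the other families in
  # this file) returns static metadata only: M1 linear predictors per
  # response, Q1 response columns per response, the expected-information
  # flag, the default 'zero' specification, parameter names (ordered
  # according to 'lss' where that argument exists), and the link/earg
  # pairs.  It performs no computation on the data.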
{ list(M1 = 3, Q1 = 1, expected = TRUE, zero = .zero , multipleResponses = TRUE, parameters.names = if ( .lss ) c("scale", "shape1.a", "shape2.p") else c("shape1.a", "scale", "shape2.p"), lscale = .lscale , lshape1.a = .lshape1.a , escale = .escale , eshape1.a = .eshape1.a , lshape2.p = .lshape2.p , eshape2.p = .eshape2.p ) }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape2.p = lshape2.p, .eshape2.p = eshape2.p, .lss = lss , .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) y <- temp5$y w <- temp5$w M1 <- 3 # Number of parameters for one response NOS <- ncoly <- ncol(y) M <- M1*ncol(y) scaL.names <- param.names("scale", NOS) sha1.names <- param.names("shape1.a", NOS) sha2.names <- param.names("shape2.p", NOS) predictors.names <- c( if ( .lss ) { c(namesof(scaL.names , .lscale , earg = .escale , tag = FALSE), namesof(sha1.names , .lshape1.a , earg = .eshape1.a , tag = FALSE)) } else { c(namesof(sha1.names , .lshape1.a , earg = .eshape1.a , tag = FALSE), namesof(scaL.names , .lscale , earg = .escale , tag = FALSE)) }, namesof(sha2.names , .lshape2.p , earg = .eshape2.p , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { sc.init <- aa.init <- pp.init <- matrix(NA_real_, n, NOS) for (spp. in 1:NOS) { # For each response 'y_spp.'... do: yvec <- y[, spp.] wvec <- w[, spp.] if ( .imethod == 1 ) { gscale <- .gscale gshape1.a <- .gshape1.a gshape2.p <- .gshape2.p if (length( .iscale )) gscale <- rep_len( .iscale , NOS) if (length( .ishape1.a )) gshape1.a <- rep_len( .ishape1.a , NOS) if (length( .ishape2.p )) gshape2.p <- rep_len( .ishape2.p , NOS) try.this <- grid.search4(gscale, gshape1.a, gshape2.p, vov4 = 1, objfun = genbetaII.Loglikfun4, # x = x, y = yvec, w = wvec, # extraargs = NULL, ret.objfun = TRUE) # Last value is the loglik sc.init[, spp.] <- try.this["Value1"] aa.init[, spp.] <- try.this["Value2"] pp.init[, spp.] <- try.this["Value3"] } else { # .imethod == 2 qvec <- .probs.y ishape2.p <- if (length( .ishape2.p )) .ishape2.p else 1 xvec <- log( qvec^(-1/ ishape2.p) - 1 ) fit0 <- lsfit(x = xvec, y = log(quantile(yvec, qvec))) sc.init[, spp.] <- if (length( .iscale )) .iscale else exp(fit0$coef[1]) aa.init[, spp.] <- if (length( .ishape1.a )) .ishape1.a else -1/fit0$coef[2] pp.init[, spp.] <- ishape2.p } } # End of for (spp. ...) finite.mean <- 1 < aa.init COP.use <- 1.15 while (FALSE && any(!finite.mean)) { aa.init[!finite.mean] <- 0.1 + aa.init[!finite.mean] * COP.use finite.mean <- 1 < aa.init } etastart <- cbind(if ( .lss ) cbind(theta2eta(sc.init, .lscale , earg = .escale ), theta2eta(aa.init, .lshape1.a , earg = .eshape1.a )) else cbind(theta2eta(aa.init, .lshape1.a , earg = .eshape1.a ), theta2eta(sc.init, .lscale , earg = .escale )), theta2eta(pp.init , .lshape2.p , earg = .eshape2.p )) etastart <- etastart[, interleave.VGAM(M, M1 = M1)] } # End of etastart. 
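  # Initialization notes (dagum): imethod = 1 grid-searches (scale,
  # shape1.a, shape2.p) with shape3.q pinned at 1 (vov4 = 1), maximizing
  # the genbetaII log-likelihood.  imethod = 2 inverts the Dagum cdf
  #   F(y) = (1 + (y/b)^(-a))^(-p),
  # i.e. log(tau^(-1/p) - 1) = -a*log(y) + a*log(b), so regressing log
  # sample quantiles on xvec = log(tau^(-1/p) - 1) gives an intercept of
  # log(b) and a slope of -1/a; hence sc.init = exp(fit0$coef[1]) and
  # aa.init = -1/fit0$coef[2] above.  The finite-mean repair loop is
  # left disabled here (while (FALSE && ...)).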
  }), list( .lscale = lscale , .lshape1.a = lshape1.a,
            .escale = escale , .eshape1.a = eshape1.a,
            .iscale = iscale , .ishape1.a = ishape1.a,
            .gscale = gscale , .gshape1.a = gshape1.a,
            .lshape2.p = lshape2.p, .eshape2.p = eshape2.p,
            .ishape2.p = ishape2.p, .gshape2.p = gshape2.p,
            .imethod = imethod , .probs.y = probs.y,
            .lss = lss ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    M1 <- 3
    NOS <- ncol(eta)/M1
    if ( .lss ) {
      Scale <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE],
                         .lscale , earg = .escale )
      aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE],
                      .lshape1.a , earg = .eshape1.a )
    } else {
      aa <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE],
                      .lshape1.a , earg = .eshape1.a )
      Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE],
                         .lscale , earg = .escale )
    }
    parg <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE],
                      .lshape2.p , earg = .eshape2.p )
    qq <- 1
    ans <- Scale * exp(lgamma(parg + 1/aa) +
                       lgamma(qq - 1/aa) - lgamma(parg) - lgamma(qq))
    ans[parg + 1/aa <= 0] <- NA
    ans[qq - 1/aa <= 0] <- NA
    ans[aa <= 0] <- NA
    ans[Scale <= 0] <- NA
    ans[parg <= 0] <- NA
    ans
  }, list( .lscale = lscale , .lshape1.a = lshape1.a,
           .escale = escale , .eshape1.a = eshape1.a,
           .lshape2.p = lshape2.p, .eshape2.p = eshape2.p,
           .lss = lss ))),
  last = eval(substitute(expression({
    M1 <- 3
    misc$link <- c(rep_len( if ( .lss ) .lscale else .lshape1.a , ncoly),
                   rep_len( if ( .lss ) .lshape1.a else .lscale , ncoly),
                   rep_len( .lshape2.p , ncoly))[
                   interleave.VGAM(M, M1 = M1)]
    temp.names <- if ( .lss ) {
      c(scaL.names, sha1.names, sha2.names)
    } else {
      c(sha1.names, scaL.names, sha2.names)
    }
    names(misc$link) <- temp.names[interleave.VGAM(M, M1 = M1)]
    misc$earg <- vector("list", M)
    names(misc$earg) <- temp.names[interleave.VGAM(M, M1 = M1)]
    for (ii in 1:ncoly) {
      if ( .lss ) {
        misc$earg[[M1*ii-2]] <- .escale
        misc$earg[[M1*ii-1]] <- .eshape1.a
      } else {
        misc$earg[[M1*ii-2]] <- .eshape1.a
        misc$earg[[M1*ii-1]] <- .escale
      }
      misc$earg[[M1*ii  ]] <- .eshape2.p
    }
    misc$expected <- TRUE
    misc$multipleResponses <- TRUE
  }), list( .lscale = lscale , .lshape1.a = lshape1.a,
            .escale = escale , .eshape1.a = eshape1.a,
            .lshape2.p = lshape2.p, .eshape2.p = eshape2.p,
            .lss = lss ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL,
             summation = TRUE) {
    M1 <- 3
    NOS <- ncol(eta)/M1
    if ( .lss ) {
      Scale <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE],
                         .lscale , earg = .escale )
      aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE],
                      .lshape1.a , earg = .eshape1.a )
    } else {
      aa <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE],
                      .lshape1.a , earg = .eshape1.a )
      Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE],
                         .lscale , earg = .escale )
    }
    parg <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE],
                      .lshape2.p , earg = .eshape2.p )
    if (residuals) {
      stop("loglikelihood residuals not implemented yet")
    } else {
      ll.elts <- c(w) * dgenbetaII(x = y, scale = Scale, shape1.a = aa,
                                   shape2.p = parg, shape3.q = 1,
                                   log = TRUE)
      if (summation) {
        sum(ll.elts)
      } else {
        ll.elts
      }
    }
  }, list( .lscale = lscale , .lshape1.a = lshape1.a,
           .escale = escale , .eshape1.a = eshape1.a,
           .lshape2.p = lshape2.p, .eshape2.p = eshape2.p,
           .lss = lss ))),
  vfamily = c("dagum"),
  simslot = eval(substitute(
    function(object, nsim) {
    pwts <- if (length(pwts <- object@prior.weights) > 0)
              pwts else weights(object, type = "prior")
    if (any(pwts != 1))
      warning("ignoring prior weights")
    eta <- predict(object)
    M1 <- 3  # Three linear predictors per response (cf. sinmad()'s simslot).
    NOS <- ncol(eta)/M1  # Number of responses.
    if ( .lss ) {
      Scale <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE],
                         .lscale , earg = .escale )
      aa <- eta2theta(eta[,
.lshape1.a , earg = .eshape1.a) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lshape1.a , earg = .eshape1.a) Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) } parg <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape2.p , earg = .eshape2.p) qq <- 1 rdagum(nsim * length(parg), shape1.a = aa, scale = Scale, shape2.p = parg) }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape2.p = lshape2.p, .eshape2.p = eshape2.p, .lss = lss ))), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- 3 NOS <- ncol(eta)/M1 # Needed for summary() if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lshape1.a , earg = .eshape1.a) Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) } parg <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape2.p , earg = .eshape2.p) qq <- 1 okay1 <- all(is.finite(Scale)) && all(Scale > 0) && all(is.finite(aa )) && all(aa > 0) && all(is.finite(parg )) && all(parg > 0) && all(is.finite(qq )) && all(qq > 0) okay.support <- if (okay1) all(-aa * parg < 1 & 1 < aa * qq) else TRUE if (!okay.support) warning("parameter constraint -a*p < 1 < a has been violated; ", "solution may be at the boundary of the ", "parameter space.") okay1 && okay.support }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape2.p = lshape2.p, .eshape2.p = eshape2.p, .lss = lss ))), deriv = eval(substitute(expression({ M1 <- 3 NOS <- ncol(eta)/M1 # Needed for summary() if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lshape1.a , earg = .eshape1.a) Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) } parg <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape2.p , earg = .eshape2.p) qq <- 1 temp1 <- log(y/Scale) temp2 <- (y/Scale)^aa temp3 <- digamma(parg + qq) temp3a <- digamma(parg) temp3b <- digamma(qq) temp4 <- log1p(temp2) dl.dscale <- (aa/Scale) * (-parg + (parg+qq) / (1+1/temp2)) dl.da <- 1/aa + parg * temp1 - (parg+qq) * temp1 / (1+1/temp2) dl.dp <- aa * temp1 + temp3 - temp3a - temp4 dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) da.deta <- dtheta.deta(aa, .lshape1.a , earg = .eshape1.a ) dp.deta <- dtheta.deta(parg, .lshape2.p , earg = .eshape2.p ) myderiv <- if ( .lss ) { c(w) * cbind(dl.dscale * dscale.deta, dl.da * da.deta, dl.dp * dp.deta) } else { c(w) * cbind(dl.da * da.deta, dl.dscale * dscale.deta, dl.dp * dp.deta) } myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape2.p = lshape2.p, .eshape2.p = eshape2.p, .lss = lss ))), weight = eval(substitute(expression({ temp5 <- trigamma(parg + qq) temp5a <- trigamma(parg) temp5b <- trigamma(qq) ned2l.da <- (1 + parg + qq + parg * qq * (temp5a + temp5b + (temp3b - temp3a + (parg-qq)/(parg*qq))^2 - (parg^2 + qq^2) / (parg*qq)^2)) / (aa^2 * (1+parg+qq)) ned2l.dscale <- (aa^2) * parg * qq / ((1+parg+qq) * Scale^2) ned2l.dp <- temp5a - temp5 ned2l.dascale <- (parg - qq - parg * qq * (temp3a -temp3b)) / (Scale*(1 + parg+qq)) ned2l.dap <- -(qq * (temp3a 
-temp3b) -1) / (aa*(parg+qq)) ned2l.dscalep <- aa * qq / (Scale*(parg+qq)) wz <- if ( .lss ) { array(c(c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.da * da.deta^2, c(w) * ned2l.dp * dp.deta^2, c(w) * ned2l.dascale * da.deta * dscale.deta, c(w) * ned2l.dap * da.deta * dp.deta, c(w) * ned2l.dscalep * dscale.deta * dp.deta), dim = c(n, M/M1, M1*(M1+1)/2)) } else { array(c(c(w) * ned2l.da * da.deta^2, c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.dp * dp.deta^2, c(w) * ned2l.dascale * da.deta * dscale.deta, c(w) * ned2l.dscalep * dscale.deta * dp.deta, c(w) * ned2l.dap * da.deta * dp.deta), dim = c(n, M/M1, M1*(M1+1)/2)) } wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lshape2.p = lshape2.p, .eshape2.p = eshape2.p, .lss = lss )))) } betaII <- function(lscale = "loge", lshape2.p = "loge", lshape3.q = "loge", iscale = NULL, ishape2.p = NULL, ishape3.q = NULL, imethod = 1, gscale = exp(-5:5), gshape2.p = exp(-5:5), gshape3.q = seq(0.75, 4, by = 0.25), # exp(-5:5), probs.y = c(0.25, 0.50, 0.75), zero = "shape") { if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("Bad input for argument 'imethod'") if (length(iscale ) && !is.Numeric(iscale , positive = TRUE)) stop("Bad input for argument 'iscale'") if (length(ishape2.p) && !is.Numeric(ishape2.p, positive = TRUE)) stop("Bad input for argument 'ishape2.p'") if (length(ishape3.q) && !is.Numeric(ishape3.q, positive = TRUE)) stop("Bad input for argument 'ishape3.q'") if (length(probs.y) < 2 || max(probs.y) > 1 || !is.Numeric(probs.y, positive = TRUE)) stop("Bad input for argument 'probs.y'") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape2.p <- as.list(substitute(lshape2.p)) eshape2.p <- link2list(lshape2.p) lshape2.p <- attr(eshape2.p, "function.name") lshape3.q <- as.list(substitute(lshape3.q)) eshape3.q <- link2list(lshape3.q) lshape3.q <- attr(eshape3.q, "function.name") new("vglmff", blurb = c("Beta II distribution \n\n", "Links: ", namesof("scale" , lscale , earg = escale ), ", ", namesof("shape2.p" , lshape2.p, earg = eshape2.p), ", ", namesof("shape3.q" , lshape3.q, earg = eshape3.q), "\n", "Mean: scale * gamma(shape2.p + 1) * ", "gamma(shape3.q - 1) / ", "(gamma(shape2.p) * gamma(shape3.q))"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
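  # infos() for betaII: M1 = 3 with the fixed parameter order (scale,
  # shape2.p, shape3.q); there is no 'lss' switch because shape1.a is
  # pinned at 1 for this subfamily of genbetaII.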
{ list(M1 = 3, Q1 = 1, expected = TRUE, zero = .zero , multipleResponses = TRUE, parameters.names = c("scale", "shape2.p", "shape3.q"), lscale = .lscale , escale = .escale , lshape2.p = .lshape2.p , lshape3.q = .lshape3.q , eshape2.p = .eshape2.p , eshape3.q = .eshape3.q ) }, list( .lscale = lscale , .escale = escale , .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) y <- temp5$y w <- temp5$w M1 <- 3 # Number of parameters for one response NOS <- ncoly <- ncol(y) M <- M1*ncol(y) scaL.names <- param.names("scale", NOS) sha2.names <- param.names("shape2.p", NOS) sha3.names <- param.names("shape3.q", NOS) predictors.names <- c(namesof(scaL.names , .lscale , earg = .escale , tag = FALSE), namesof(sha2.names , .lshape2.p , earg = .eshape2.p , tag = FALSE), namesof(sha3.names , .lshape3.q , earg = .eshape3.q , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { sc.init <- pp.init <- qq.init <- matrix(NA_real_, n, NOS) for (spp. in 1:NOS) { # For each response 'y_spp.'... do: yvec <- y[, spp.] wvec <- w[, spp.] if ( .imethod == 1 ) { gscale <- .gscale gshape2.p <- .gshape2.p gshape3.q <- .gshape3.q if (length( .iscale )) gscale <- rep_len( .iscale , NOS) if (length( .ishape2.p )) gshape2.p <- rep_len( .ishape2.p , NOS) if (length( .ishape3.q )) gshape3.q <- rep_len( .ishape3.q , NOS) try.this <- grid.search4(gscale, vov2 = 1, gshape2.p, gshape3.q, objfun = genbetaII.Loglikfun4, # x = x, y = yvec, w = wvec, # extraargs = NULL, ret.objfun = TRUE) # Last value is the loglik sc.init[, spp.] <- try.this["Value1"] pp.init[, spp.] <- try.this["Value3"] qq.init[, spp.] <- try.this["Value4"] } else { # .imethod == 2 sc.init[, spp.] <- if (length( .iscale )) .iscale else { qvec <- .probs.y ishape3.q <- if (length( .ishape3.q )) .ishape3.q else 1 xvec <- log( (1-qvec)^(-1/ ishape3.q ) - 1 ) fit0 <- lsfit(x = xvec, y = log(quantile(yvec, qvec ))) exp(fit0$coef[1]) } pp.init[, spp.] <- if (length( .ishape2.p )) .ishape2.p else 1.0 qq.init[, spp.] <- if (length( .ishape3.q )) .ishape3.q else 1.0 } } # End of for (spp. ...) finite.mean <- 1 < qq.init COP.use <- 1.15 while (any(!finite.mean)) { qq.init[!finite.mean] <- 0.1 + qq.init[!finite.mean] * COP.use finite.mean <- 1 < qq.init } etastart <- cbind(theta2eta(sc.init , .lscale , earg = .escale ), theta2eta(pp.init , .lshape2.p , earg = .eshape2.p ), theta2eta(qq.init , .lshape3.q , earg = .eshape3.q )) etastart <- etastart[, interleave.VGAM(M, M1 = M1)] } # End of etastart. 
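  # Initialization notes (betaII): the grid search pins shape1.a at 1
  # (vov2 = 1) and searches over (scale, shape2.p, shape3.q).  Unlike
  # dagum/lomax/fisk, the finite-mean repair loop here is active: it
  # inflates qq.init until 1 < qq.init, presumably so that the initial
  # fitted mean, which requires shape3.q > 1, is finite.  Note also that
  # betaII, unlike its siblings in this file, supplies no simslot.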
}), list( .lscale = lscale , .escale = escale , .iscale = iscale , .gscale = gscale , .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q, .ishape2.p = ishape2.p, .ishape3.q = ishape3.q, .gshape2.p = gshape2.p, .gshape3.q = gshape3.q, .imethod = imethod , .probs.y = probs.y ))), linkinv = eval(substitute(function(eta, extra = NULL) { M1 <- 3 NOS <- ncol(eta)/M1 Scale <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lscale , earg = .escale ) aa <- 1 parg <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape2.p , earg = .eshape2.p ) qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape3.q , earg = .eshape3.q ) ans <- Scale * exp(lgamma(parg + 1/aa) + lgamma(qq - 1/aa) - lgamma(parg) - lgamma(qq)) ans[parg + 1/aa <= 0] <- NA ans[qq - 1/aa <= 0] <- NA ans[Scale <= 0] <- NA ans[parg <= 0] <- NA ans[qq <= 0] <- NA ans }, list( .lscale = lscale , .escale = escale , .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q ))), last = eval(substitute(expression({ M1 <- 3 misc$link <- c(rep_len( .lscale , ncoly), rep_len( .lshape2.p , ncoly), rep_len( .lshape3.q , ncoly))[interleave.VGAM(M, M1 = M1)] temp.names <- c(scaL.names, sha2.names, sha3.names) names(misc$link) <- temp.names[interleave.VGAM(M, M1 = M1)] misc$earg <- vector("list", M) names(misc$earg) <- temp.names[interleave.VGAM(M, M1 = M1)] for (ii in 1:ncoly) { misc$earg[[M1*ii-2]] <- .escale misc$earg[[M1*ii-1]] <- .eshape2.p misc$earg[[M1*ii ]] <- .eshape3.q } misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .lscale = lscale , .escale = escale , .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { M1 <- 3 NOS <- ncol(eta)/M1 Scale <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lscale , earg = .escale ) aa <- 1 parg <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape2.p , earg = .eshape2.p ) qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape3.q , earg = .eshape3.q ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dgenbetaII(x = y, scale = Scale, shape1.a = aa, shape2.p = parg, shape3.q = qq, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale , .escale = escale , .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q ))), vfamily = c("betaII"), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- 3 NOS <- ncol(eta)/M1 Scale <- eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lscale , earg = .escale ) aa <- 1 parg <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape2.p , earg = .eshape2.p ) qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape3.q , earg = .eshape3.q ) okay1 <- all(is.finite(Scale)) && all(Scale > 0) && all(is.finite(aa )) && all(aa > 0) && all(is.finite(parg )) && all(parg > 0) && all(is.finite(qq )) && all(qq > 0) okay.support <- if (okay1) all(-aa * parg < 1 & 1 < aa * qq) else TRUE if (!okay.support) warning("parameter constraint -p < 1 < q has been violated; ", "solution may be at the boundary of the ", "parameter space.") okay1 && okay.support }, list( .lscale = lscale , .escale = escale , .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q ))), deriv = eval(substitute(expression({ M1 <- 3 NOS <- ncol(eta)/M1 # Needed for summary() Scale <- eta2theta(eta[, 
M1*(1:NOS) - 2, drop = FALSE], .lscale , earg = .escale ) aa <- 1 parg <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape2.p , earg = .eshape2.p ) qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape3.q , earg = .eshape3.q ) temp1 <- log(y/Scale) temp2 <- (y/Scale)^aa temp3 <- digamma(parg + qq) temp3a <- digamma(parg) temp3b <- digamma(qq) temp4 <- log1p(temp2) dl.dscale <- (aa/Scale) * (-parg + (parg+qq) / (1+1/temp2)) dl.dp <- aa * temp1 + temp3 - temp3a - temp4 dl.dq <- temp3 - temp3b - temp4 dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dp.deta <- dtheta.deta(parg, .lshape2.p , earg = .eshape2.p ) dq.deta <- dtheta.deta(qq, .lshape3.q , earg = .eshape3.q ) myderiv <- c(w) * cbind(dl.dscale * dscale.deta, dl.dp * dp.deta, dl.dq * dq.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lscale = lscale , .escale = escale , .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q ))), weight = eval(substitute(expression({ temp5 <- trigamma(parg + qq) temp5a <- trigamma(parg) temp5b <- trigamma(qq) ned2l.dscale <- (aa^2) * parg * qq / ((1+parg+qq) * Scale^2) ned2l.dp <- temp5a - temp5 ned2l.dq <- temp5b - temp5 ned2l.dscalep <- aa * qq / (Scale*(parg+qq)) ned2l.dscaleq <- -aa * parg / (Scale*(parg+qq)) ned2l.dpq <- -temp5 wz <- array(c(c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.dp * dp.deta^2, c(w) * ned2l.dq * dq.deta^2, c(w) * ned2l.dscalep * dscale.deta * dp.deta, c(w) * ned2l.dpq * dp.deta * dq.deta, # Switched!! c(w) * ned2l.dscaleq * dscale.deta * dq.deta), dim = c(n, M/M1, M1*(M1+1)/2)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lscale = lscale , .escale = escale , .lshape2.p = lshape2.p, .lshape3.q = lshape3.q, .eshape2.p = eshape2.p, .eshape3.q = eshape3.q )))) } lomax <- function(lscale = "loge", lshape3.q = "loge", iscale = NULL, ishape3.q = NULL, imethod = 1, gscale = exp(-5:5), gshape3.q = seq(0.75, 4, by = 0.25), # exp(-5:5), probs.y = c(0.25, 0.50, 0.75), zero = "shape") { if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("Bad input for argument 'imethod'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("Bad input for argument 'iscale'") if (length(ishape3.q) && !is.Numeric(ishape3.q, positive = TRUE)) stop("Bad input for argument 'ishape3.q'") if (length(probs.y) < 2 || max(probs.y) > 1 || !is.Numeric(probs.y, positive = TRUE)) stop("Bad input for argument 'probs.y'") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape3.q <- as.list(substitute(lshape3.q)) eshape3.q <- link2list(lshape3.q) lshape3.q <- attr(eshape3.q, "function.name") new("vglmff", blurb = c("Lomax distribution \n\n", "Links: ", namesof("scale" , lscale , earg = escale ), ", ", namesof("shape3.q" , lshape3.q, earg = eshape3.q), "\n", "Mean: scale / (shape3.q - 1)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
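  # infos() for lomax: M1 = 2, parameters (scale, shape3.q).  The default
  # zero = "shape" is a character specification; cm.zero.VGAM() appears
  # to match it against the predictor names so that the shape predictor
  # is intercept-only by default.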
{ list(M1 = 2, Q1 = 1, expected = TRUE, zero = .zero , multipleResponses = TRUE, parameters.names = c("scale", "shape3.q"), lscale = .lscale , escale = .escale , lshape3.q = .lshape3.q , eshape3.q = .eshape3.q ) }, list( .lscale = lscale , .escale = escale , .lshape3.q = lshape3.q, .eshape3.q = eshape3.q, .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) y <- temp5$y w <- temp5$w M1 <- 2 # Number of parameters for one response NOS <- ncoly <- ncol(y) M <- M1*ncol(y) scaL.names <- param.names("scale", NOS) sha3.names <- param.names("shape3.q", NOS) predictors.names <- c(namesof(scaL.names , .lscale , earg = .escale , tag = FALSE), namesof(sha3.names , .lshape3.q , earg = .eshape3.q , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { sc.init <- qq.init <- matrix(NA_real_, n, NOS) for (spp. in 1:NOS) { # For each response 'y_spp.'... do: yvec <- y[, spp.] wvec <- w[, spp.] if ( .imethod == 1) { gscale <- .gscale gshape3.q <- .gshape3.q if (length( .iscale )) gscale <- rep_len( .iscale , NOS) if (length( .ishape3.q )) gshape3.q <- rep_len( .ishape3.q , NOS) try.this <- grid.search4(gscale, vov2 = 1, vov3 = 1, gshape3.q, objfun = genbetaII.Loglikfun4, # x = x, y = yvec, w = wvec, # extraargs = NULL, ret.objfun = TRUE) # Last value is the loglik sc.init[, spp.] <- try.this["Value1"] qq.init[, spp.] <- try.this["Value4"] } else { # .imethod == 2 qvec <- .probs.y iscale <- if (length( .iscale )) .iscale else 1 xvec <- log1p( quantile(yvec / iscale, probs = qvec) ) fit0 <- lsfit(x = xvec, y = -log1p(-qvec), intercept = FALSE) sc.init[, spp.] <- iscale qq.init[, spp.] <- if (length( .ishape3.q )) .ishape3.q else fit0$coef } } # End of for (spp. ...) finite.mean <- 1 < qq.init COP.use <- 1.15 while (FALSE && any(!finite.mean)) { qq.init[!finite.mean] <- 0.1 + qq.init[!finite.mean] * COP.use finite.mean <- 1 < qq.init } etastart <- cbind(theta2eta(sc.init, .lscale , earg = .escale ), theta2eta(qq.init, .lshape3.q , earg = .eshape3.q )) etastart <- etastart[, interleave.VGAM(M, M1 = M1)] } # End of etastart. 
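  # Initialization notes (lomax): imethod = 1 searches (scale, shape3.q)
  # with shape1.a = shape2.p = 1 (vov2 = vov3 = 1).  imethod = 2 inverts
  # the Lomax cdf F(y) = 1 - (1 + y/b)^(-q), i.e.
  #   -log(1 - tau) = q * log1p(y/b),
  # so an origin-constrained least-squares fit of -log1p(-qvec) on
  # log1p(yvec/iscale) returns qq.init as its slope, with the scale held
  # provisionally at iscale.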
}), list( .lscale = lscale , .escale = escale , .iscale = iscale , .gscale = gscale , .lshape3.q = lshape3.q, .eshape3.q = eshape3.q, .ishape3.q = ishape3.q, .gshape3.q = gshape3.q, .imethod = imethod , .probs.y = probs.y ))), linkinv = eval(substitute(function(eta, extra = NULL) { M1 <- 2 NOS <- ncol(eta)/M1 Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- 1 parg <- 1 qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape3.q , earg = .eshape3.q ) ans <- Scale * exp(lgamma(parg + 1/aa) + lgamma(qq - 1/aa) - lgamma(parg) - lgamma(qq)) ans[parg + 1/aa <= 0] <- NA ans[qq - 1/aa <= 0] <- NA ans[Scale <= 0] <- NA ans[qq <= 0] <- NA ans }, list( .lscale = lscale , .escale = escale , .lshape3.q = lshape3.q, .eshape3.q = eshape3.q ))), last = eval(substitute(expression({ M1 <- 2 misc$link <- c(rep_len( .lscale , ncoly), rep_len( .lshape3.q , ncoly))[interleave.VGAM(M, M1 = M1)] temp.names <- c(scaL.names, sha3.names) names(misc$link) <- temp.names[interleave.VGAM(M, M1 = M1)] misc$earg <- vector("list", M) names(misc$earg) <- temp.names[interleave.VGAM(M, M1 = M1)] for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .escale misc$earg[[M1*ii ]] <- .eshape3.q } misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .lscale = lscale , .escale = escale , .lshape3.q = lshape3.q, .eshape3.q = eshape3.q ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { M1 <- 2 NOS <- ncol(eta)/M1 Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- 1 parg <- 1 qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape3.q , earg = .eshape3.q ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dgenbetaII(x = y, scale = Scale, shape1.a = aa, shape2.p = parg, shape3.q = qq, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale , .escale = escale , .lshape3.q = lshape3.q, .eshape3.q = eshape3.q ))), vfamily = c("lomax"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) M1 <- 2 NOS <- ncol(eta)/M1 Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape3.q , earg = .eshape3.q ) rlomax(nsim * length(qq), scale = Scale, shape3.q = qq) }, list( .lscale = lscale , .escale = escale , .lshape3.q = lshape3.q, .eshape3.q = eshape3.q ))), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- 2 NOS <- ncol(eta) / M1 Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- 1 parg <- 1 qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape3.q , earg = .eshape3.q) okay1 <- all(is.finite(Scale)) && all(Scale > 0) && all(is.finite(qq )) && all(qq > 0) okay.support <- if (okay1) all(-aa * parg < 1 & 1 < aa * qq) else TRUE if (!okay.support) warning("parameter constraint 1 < q has been violated; ", "solution may be at the boundary of the ", "parameter space.") okay1 && okay.support }, list( .lscale = lscale , .escale = escale , .lshape3.q = lshape3.q, .eshape3.q = eshape3.q ))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- ncol(eta)/M1 # Needed for summary() Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- 1 parg <- 1 qq <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], 
.lshape3.q , earg = .eshape3.q) temp1 <- log(y/Scale) temp2 <- (y/Scale)^aa temp3 <- digamma(parg + qq) temp3a <- digamma(parg) temp3b <- digamma(qq) temp4 <- log1p(temp2) dl.dscale <- (aa/Scale) * (-parg + (parg+qq) / (1+1/temp2)) dl.dq <- temp3 - temp3b - temp4 dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dq.deta <- dtheta.deta(qq, .lshape3.q , earg = .eshape3.q ) myderiv <- c(w) * cbind(dl.dscale * dscale.deta, dl.dq * dq.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lscale = lscale , .escale = escale , .lshape3.q = lshape3.q, .eshape3.q = eshape3.q ))), weight = eval(substitute(expression({ temp5 <- trigamma(parg + qq) temp5a <- trigamma(parg) temp5b <- trigamma(qq) ned2l.dscale <- (aa^2) * parg * qq / ((1+parg+qq) * Scale^2) ned2l.dq <- temp5b - temp5 ned2l.dscaleq <- -aa * parg / (Scale*(parg+qq)) wz <- array(c(c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.dq * dq.deta^2, c(w) * ned2l.dscaleq * dscale.deta * dq.deta), dim = c(n, M/M1, M1*(M1+1)/2)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lscale = lscale , .escale = escale , .lshape3.q = lshape3.q, .eshape3.q = eshape3.q )))) } fisk <- function(lscale = "loge", lshape1.a = "loge", iscale = NULL, ishape1.a = NULL, imethod = 1, lss = TRUE, gscale = exp(-5:5), gshape1.a = seq(0.75, 4, by = 0.25), # exp(-5:5), probs.y = c(0.25, 0.50, 0.75), zero = "shape") { if (length(lss) != 1 && !is.logical(lss)) stop("Argument 'lss' not specified correctly") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("Bad input for argument 'imethod'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("Bad input for argument 'iscale'") if (length(ishape1.a) && !is.Numeric(ishape1.a, positive = TRUE)) stop("Bad input for argument 'ishape1.a'") if (length(probs.y) < 2 || max(probs.y) > 1 || !is.Numeric(probs.y, positive = TRUE)) stop("Bad input for argument 'probs.y'") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape1.a <- as.list(substitute(lshape1.a)) eshape1.a <- link2list(lshape1.a) lshape1.a <- attr(eshape1.a, "function.name") new("vglmff", blurb = c("Fisk distribution \n\n", "Links: ", ifelse (lss, namesof("scale" , lscale , earg = escale), namesof("shape1.a", lshape1.a, earg = eshape1.a)), ", ", ifelse (lss, namesof("shape1.a", lshape1.a, earg = eshape1.a), namesof("scale" , lscale , earg = escale)), "\n", "Mean: scale * gamma(1 + 1/shape1.a) * ", "gamma(1 - 1/shape1.a)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
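  # infos() for fisk: M1 = 2.  The Fisk (log-logistic) distribution is
  # genbetaII with shape2.p = shape3.q = 1, so the mean
  # scale * gamma(1 + 1/a) * gamma(1 - 1/a) is finite only when a > 1.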
{ list(M1 = 2, Q1 = 1, expected = TRUE, zero = .zero , multipleResponses = TRUE, parameters.names = if ( .lss ) c("scale", "shape1.a") else c("shape1.a", "scale"), lscale = .lscale , lshape1.a = .lshape1.a , escale = .escale , eshape1.a = .eshape1.a ) }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss , .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) y <- temp5$y w <- temp5$w M1 <- 2 # Number of parameters for one response NOS <- ncoly <- ncol(y) M <- M1*ncol(y) scaL.names <- param.names("scale", NOS) sha1.names <- param.names("shape1.a", NOS) predictors.names <- if ( .lss ) { c(namesof(scaL.names , .lscale , earg = .escale , tag = FALSE), namesof(sha1.names , .lshape1.a , earg = .eshape1.a , tag = FALSE)) } else { c(namesof(sha1.names , .lshape1.a , earg = .eshape1.a , tag = FALSE), namesof(scaL.names , .lscale , earg = .escale , tag = FALSE)) } predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { sc.init <- aa.init <- matrix(NA_real_, n, NOS) for (spp. in 1:NOS) { # For each response 'y_spp.'... do: yvec <- y[, spp.] wvec <- w[, spp.] if ( .imethod == 1 ) { gscale <- .gscale gshape1.a <- .gshape1.a if (length( .iscale )) gscale <- rep_len( .iscale , NOS) if (length( .ishape1.a )) gshape1.a <- rep_len( .ishape1.a , NOS) try.this <- grid.search4(gscale, gshape1.a, vov3 = 1, vov4 = 1, objfun = genbetaII.Loglikfun4, # x = x, y = yvec, w = wvec, # extraargs = NULL, ret.objfun = TRUE) # Last value is the loglik sc.init[, spp.] <- try.this["Value1"] aa.init[, spp.] <- try.this["Value2"] } else { # .imethod == 2 qvec <- .probs.y iscale <- if (length( .iscale )) .iscale else 1 xvec <- log( quantile(yvec / iscale, probs = qvec) ) fit0 <- lsfit(x = xvec, y = logit(qvec), intercept = FALSE) sc.init[, spp.] <- iscale aa.init[, spp.] <- if (length( .ishape1.a )) .ishape1.a else fit0$coef } } # End of for (spp. ...) finite.mean <- 1 < aa.init COP.use <- 1.15 while (FALSE && any(!finite.mean)) { aa.init[!finite.mean] <- 0.1 + aa.init[!finite.mean] * COP.use finite.mean <- 1 < aa.init } etastart <- if ( .lss ) cbind(theta2eta(sc.init, .lscale , earg = .escale ), theta2eta(aa.init, .lshape1.a , earg = .eshape1.a )) else cbind(theta2eta(aa.init, .lshape1.a , earg = .eshape1.a ), theta2eta(sc.init, .lscale , earg = .escale )) etastart <- etastart[, interleave.VGAM(M, M1 = M1)] } # End of etastart. 
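  # Initialization notes (fisk): the log-logistic cdf satisfies
  #   logit(F(y)) = a * (log(y) - log(b)),
  # so with b provisionally fixed at iscale, the origin-constrained fit
  # of logit(qvec) on log(yvec/iscale) returns aa.init as its slope.
  #
  # A minimal usage sketch (commented out; the data frame 'fdata' is
  # hypothetical, and this assumes the usual vglm() workflow):
  #   fdata <- data.frame(y = rfisk(n = 200, scale = exp(2),
  #                                 shape1.a = exp(1)))
  #   fit <- vglm(y ~ 1, fisk(lss = TRUE), data = fdata, trace = TRUE)
  #   Coef(fit)  # Back-transformed (scale, shape1.a) estimates.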
}), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .iscale = iscale , .ishape1.a = ishape1.a, .gscale = gscale , .gshape1.a = gshape1.a, .imethod = imethod , .probs.y = probs.y, .lss = lss ))), linkinv = eval(substitute(function(eta, extra = NULL) { M1 <- 2 NOS <- ncol(eta)/M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape1.a , earg = .eshape1.a ) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a ) Scale <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lscale , earg = .escale ) } parg <- 1 qq <- 1 ans <- Scale * exp(lgamma(parg + 1/aa) + lgamma(qq - 1/aa) - lgamma(parg) - lgamma(qq)) ans[parg + 1/aa <= 0] <- NA ans[qq - 1/aa <= 0] <- NA ans[Scale <= 0] <- NA ans[aa <= 0] <- NA ans }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), last = eval(substitute(expression({ M1 <- 2 misc$link <- c(rep_len( if ( .lss ) .lscale else .lshape1.a , ncoly), rep_len( if ( .lss ) .lshape1.a else .lscale , ncoly))[ interleave.VGAM(M, M1 = M1)] temp.names <- if ( .lss ) { c(scaL.names, sha1.names) } else { c(sha1.names, scaL.names) } names(misc$link) <- temp.names[interleave.VGAM(M, M1 = M1)] misc$earg <- vector("list", M) names(misc$earg) <- temp.names[interleave.VGAM(M, M1 = M1)] for (ii in 1:ncoly) if ( .lss ) { misc$earg[[M1*ii-1]] <- .escale misc$earg[[M1*ii ]] <- .eshape1.a } else { misc$earg[[M1*ii-1]] <- .eshape1.a misc$earg[[M1*ii ]] <- .escale } misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { M1 <- 2 NOS <- ncol(eta)/M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape1.a , earg = .eshape1.a ) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a ) Scale <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lscale , earg = .escale ) } if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dfisk(x = y, scale = Scale, shape1.a = aa, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), vfamily = c("fisk"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) M1 <- 2 NOS <- ncol(eta)/M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape1.a , earg = .eshape1.a ) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a ) Scale <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lscale , earg = .escale ) } rfisk(nsim * length(aa), shape1.a = aa, scale = Scale) }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- 2 NOS <- ncol(eta) / M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale 
, earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape1.a , earg = .eshape1.a) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a) Scale <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lscale , earg = .escale ) } parg <- 1 qq <- 1 okay1 <- all(is.finite(Scale)) && all(Scale > 0) && all(is.finite(aa )) && all(aa > 0) && all(is.finite(parg )) && all(parg > 0) && all(is.finite(qq )) && all(qq > 0) okay.support <- if (okay1) all(-aa * parg < 1 & 1 < aa * qq) else TRUE if (!okay.support) warning("parameter constraint -a < 1 < a has been violated; ", "solution may be at the boundary of the ", "parameter space.") okay1 && okay.support }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- ncol(eta)/M1 # Needed for summary() if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape1.a , earg = .eshape1.a) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a) Scale <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lscale , earg = .escale ) } parg <- 1 qq <- 1 temp1 <- log(y/Scale) temp2 <- (y/Scale)^aa temp3 <- digamma(parg + qq) temp3a <- digamma(parg) temp3b <- digamma(qq) temp4 <- log1p(temp2) dl.dscale <- (aa/Scale) * (-parg + (parg+qq) / (1+1/temp2)) dl.da <- 1/aa + parg * temp1 - (parg+qq) * temp1 / (1+1/temp2) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) da.deta <- dtheta.deta(aa, .lshape1.a , earg = .eshape1.a ) myderiv <- if ( .lss ) { c(w) * cbind(dl.dscale * dscale.deta, dl.da * da.deta) } else { c(w) * cbind(dl.da * da.deta, dl.dscale * dscale.deta) } myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), weight = eval(substitute(expression({ temp5 <- trigamma(parg + qq) temp5a <- trigamma(parg) temp5b <- trigamma(qq) ned2l.da <- (1 + parg + qq + parg * qq * (temp5a + temp5b + (temp3b - temp3a + (parg-qq)/(parg*qq))^2 - (parg^2 + qq^2) / (parg*qq)^2)) / (aa^2 * (1+parg+qq)) ned2l.dscale <- (aa^2) * parg * qq / ((1+parg+qq) * Scale^2) ned2l.dascale <- (parg - qq - parg * qq * (temp3a -temp3b)) / (Scale*(1 + parg+qq)) wz <- if ( .lss ) { array(c(c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.da * da.deta^2, c(w) * ned2l.dascale * da.deta * dscale.deta), dim = c(n, M/M1, M1*(M1+1)/2)) } else { array(c(c(w) * ned2l.da * da.deta^2, c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.dascale * da.deta * dscale.deta), dim = c(n, M/M1, M1*(M1+1)/2)) } wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss )))) } inv.lomax <- function(lscale = "loge", lshape2.p = "loge", iscale = NULL, ishape2.p = NULL, imethod = 1, gscale = exp(-5:5), gshape2.p = exp(-5:5), probs.y = c(0.25, 0.50, 0.75), zero = "shape2.p") { if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("Bad input for argument 'imethod'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("Bad input for argument 'iscale'") if (length(ishape2.p) && !is.Numeric(ishape2.p, positive = TRUE)) stop("Bad input for argument 'ishape2.p'") if (length(probs.y) < 2 || max(probs.y) > 1 || !is.Numeric(probs.y, positive = TRUE)) stop("Bad input for argument 'probs.y'") lscale <- 
as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape2.p <- as.list(substitute(lshape2.p)) eshape2.p <- link2list(lshape2.p) lshape2.p <- attr(eshape2.p, "function.name") new("vglmff", blurb = c("Inverse Lomax distribution \n\n", "Links: ", namesof("scale" , lscale , earg = escale), ", ", namesof("shape2.p" , lshape2.p, earg = eshape2.p), "\n", "Mean: does not exist"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, zero = .zero , multipleResponses = TRUE, parameters.names = c("scale", "shape2.p"), lscale = .lscale , escale = .escale , lshape2.p = .lshape2.p , eshape2.p = .eshape2.p ) }, list( .lscale = lscale , .escale = escale , .lshape2.p = lshape2.p, .eshape2.p = eshape2.p, .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) y <- temp5$y w <- temp5$w M1 <- 2 # Number of parameters for one response NOS <- ncoly <- ncol(y) M <- M1*ncol(y) scaL.names <- param.names("scale", NOS) sha2.names <- param.names("shape2.p", NOS) predictors.names <- c(namesof(scaL.names , .lscale , earg = .escale , tag = FALSE), namesof(sha2.names , .lshape2.p , earg = .eshape2.p , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { sc.init <- pp.init <- matrix(NA_real_, n, NOS) for (spp. in 1:NOS) { # For each response 'y_spp.'... do: yvec <- y[, spp.] wvec <- w[, spp.] if ( .imethod == 1 ) { gscale <- .gscale gshape2.p <- .gshape2.p if (length( .iscale )) gscale <- rep_len( .iscale , NOS) if (length( .ishape2.p )) gshape2.p <- rep_len( .ishape2.p , NOS) try.this <- grid.search4(gscale, vov2 = 1, gshape2.p, vov4 = 1, objfun = genbetaII.Loglikfun4, # x = x, y = yvec, w = wvec, # extraargs = NULL, ret.objfun = TRUE) # Last value is the loglik sc.init[, spp.] <- try.this["Value1"] pp.init[, spp.] <- try.this["Value3"] } else { # .imethod == 2 qvec <- .probs.y ishape2.p <- if (length( .ishape2.p )) .ishape2.p else 1 xvec <- log( qvec^(-1/ ishape2.p) - 1 ) fit0 <- lsfit(x = xvec, y = log(quantile(yvec, qvec))) sc.init[, spp.] <- if (length( .iscale )) .iscale else exp(fit0$coef[1]) pp.init[, spp.] <- ishape2.p } } # End of for (spp. ...) etastart <- cbind(theta2eta(sc.init, .lscale , earg = .escale ), theta2eta(pp.init, .lshape2.p , earg = .eshape2.p )) etastart <- etastart[, interleave.VGAM(M, M1 = M1)] } # End of etastart. 
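  # Initialization notes (inv.lomax): the inverse Lomax is dagum with
  # shape1.a fixed at 1, so imethod = 2 reuses the dagum quantile
  # inversion log(tau^(-1/p) - 1) = -log(y) + log(b), whose slope is -1;
  # only the intercept, log(b), is used for sc.init.  No finite-mean
  # repair loop is needed because the mean does not exist (see blurb);
  # accordingly, linkinv() below returns the median,
  # qinv.lomax(p = 0.5, ...), as the fitted value.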
  }), list( .lscale = lscale , .escale = escale ,
            .iscale = iscale , .gscale = gscale ,
            .lshape2.p = lshape2.p, .eshape2.p = eshape2.p,
            .ishape2.p = ishape2.p, .gshape2.p = gshape2.p,
            .imethod = imethod , .probs.y = probs.y ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    M1 <- 2
    NOS <- ncol(eta)/M1
    Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE],
                       .lscale , earg = .escale )
    parg <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE],
                      .lshape2.p , earg = .eshape2.p )
    qinv.lomax(p = 0.5, scale = Scale, shape2.p = parg)
  }, list( .lscale = lscale , .escale = escale ,
           .lshape2.p = lshape2.p, .eshape2.p = eshape2.p ))),
  last = eval(substitute(expression({
    M1 <- 2
    misc$link <- c(rep_len( .lscale , ncoly),
                   rep_len( .lshape2.p , ncoly))[interleave.VGAM(M, M1 = M1)]
    temp.names <- c(scaL.names, sha2.names)
    names(misc$link) <- temp.names[interleave.VGAM(M, M1 = M1)]
    misc$earg <- vector("list", M)
    names(misc$earg) <- temp.names[interleave.VGAM(M, M1 = M1)]
    for (ii in 1:ncoly) {
      misc$earg[[M1*ii-1]] <- .escale
      misc$earg[[M1*ii  ]] <- .eshape2.p
    }
    misc$expected <- TRUE
    misc$multipleResponses <- TRUE
  }), list( .lscale = lscale , .escale = escale ,
            .lshape2.p = lshape2.p, .eshape2.p = eshape2.p ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL,
             summation = TRUE) {
    M1 <- 2
    NOS <- ncol(eta)/M1
    Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE],
                       .lscale , earg = .escale )
    parg <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE],
                      .lshape2.p , earg = .eshape2.p )
    if (residuals) {
      stop("loglikelihood residuals not implemented yet")
    } else {
      ll.elts <- c(w) * dgenbetaII(x = y, scale = Scale, shape1.a = 1,
                                   shape2.p = parg, shape3.q = 1,
                                   log = TRUE)
      if (summation) {
        sum(ll.elts)
      } else {
        ll.elts
      }
    }
  }, list( .lscale = lscale , .escale = escale ,
           .lshape2.p = lshape2.p, .eshape2.p = eshape2.p ))),
  vfamily = c("inv.lomax"),
  simslot = eval(substitute(
    function(object, nsim) {
    pwts <- if (length(pwts <- object@prior.weights) > 0)
              pwts else weights(object, type = "prior")
    if (any(pwts != 1))
      warning("ignoring prior weights")
    eta <- predict(object)
    M1 <- 2  # Two linear predictors per response (cf. lomax()'s simslot).
    NOS <- ncol(eta)/M1  # Number of responses.
    Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE],
                       .lscale , earg = .escale )
    parg <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE],
                      .lshape2.p , earg = .eshape2.p )
    aa <- 1
    qq <- 1
    rinv.lomax(nsim * length(Scale), scale = Scale, shape2.p = parg)
  }, list( .lscale = lscale , .escale = escale ,
           .lshape2.p = lshape2.p, .eshape2.p = eshape2.p ))),
  validparams = eval(substitute(function(eta, y, extra = NULL) {
    M1 <- 2
    NOS <- ncol(eta) / M1
    Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE],
                       .lscale , earg = .escale )
    parg <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE],
                      .lshape2.p , earg = .eshape2.p )
    qq <- 1
    aa <- 1
    okay1 <- all(is.finite(Scale)) && all(Scale > 0) &&
             all(is.finite(aa   )) && all(aa    > 0) &&
             all(is.finite(parg )) && all(parg  > 0) &&
             all(is.finite(qq   )) && all(qq    > 0)
    okay.support <- if (okay1) all(-aa * parg < 1 ) else TRUE
    if (!okay.support)
      warning("parameter constraint -a*p < 1 has been violated; ",
              "solution may be at the boundary of the ",
              "parameter space.")
    okay1 && okay.support
  }, list( .lscale = lscale , .escale = escale ,
           .lshape2.p = lshape2.p, .eshape2.p = eshape2.p ))),
  deriv = eval(substitute(expression({
    M1 <- 2
    NOS <- ncol(eta)/M1  # Needed for summary()
    Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE],
                       .lscale , earg = .escale )
    parg <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE],
                      .lshape2.p , earg = .eshape2.p )
    qq <- 1
    aa <- 1
    temp1 <- log(y/Scale)
    temp2 <- (y/Scale)^aa
    temp3 <- digamma(parg
+ qq) temp3a <- digamma(parg) temp3b <- digamma(qq) temp4 <- log1p(temp2) dl.dscale <- (aa/Scale) * (-parg + (parg+qq) / (1+1/temp2)) dl.dp <- aa * temp1 + temp3 - temp3a - temp4 dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dp.deta <- dtheta.deta(parg, .lshape2.p , earg = .eshape2.p ) myderiv <- c(w) * cbind(dl.dscale * dscale.deta, dl.dp * dp.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lscale = lscale , .escale = escale , .lshape2.p = lshape2.p, .eshape2.p = eshape2.p ))), weight = eval(substitute(expression({ temp5 <- trigamma(parg + qq) temp5a <- trigamma(parg) temp5b <- trigamma(qq) ned2l.dscale <- (aa^2) * parg * qq / ((1+parg+qq) * Scale^2) ned2l.dp <- temp5a - temp5 ned2l.dscalep <- aa * qq / (Scale*(parg+qq)) wz <- array(c(c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.dp * dp.deta^2, c(w) * ned2l.dscalep * dscale.deta * dp.deta), dim = c(n, M/M1, M1*(M1+1)/2)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lscale = lscale , .escale = escale , .lshape2.p = lshape2.p, .eshape2.p = eshape2.p )))) } paralogistic <- function(lscale = "loge", lshape1.a = "loge", iscale = NULL, ishape1.a = NULL, imethod = 1, lss = TRUE, gscale = exp(-5:5), gshape1.a = seq(0.75, 4, by = 0.25), # exp(-5:5), probs.y = c(0.25, 0.50, 0.75), zero = "shape") { if (length(lss) != 1 && !is.logical(lss)) stop("Argument 'lss' not specified correctly") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("Bad input for argument 'imethod'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("Bad input for argument 'iscale'") if (length(ishape1.a) && !is.Numeric(ishape1.a, positive = TRUE)) stop("Bad input for argument 'ishape1.a'") if (length(probs.y) < 2 || max(probs.y) > 1 || !is.Numeric(probs.y, positive = TRUE)) stop("Bad input for argument 'probs.y'") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape1.a <- as.list(substitute(lshape1.a)) eshape1.a <- link2list(lshape1.a) lshape1.a <- attr(eshape1.a, "function.name") new("vglmff", blurb = c("Paralogistic distribution \n\n", "Links: ", ifelse (lss, namesof("scale" , lscale , earg = escale), namesof("shape1.a", lshape1.a, earg = eshape1.a)), ", ", ifelse (lss, namesof("shape1.a", lshape1.a, earg = eshape1.a), namesof("scale" , lscale , earg = escale)), "\n", "Mean: scale * gamma(1 + 1/shape1.a) * ", "gamma(shape1.a - 1/shape1.a) / ", "gamma(shape1.a)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
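  # infos() for paralogistic: M1 = 2.  The paralogistic distribution is
  # genbetaII with shape2.p = 1 and shape3.q = shape1.a, which is why
  # loglikelihood() below calls dgenbetaII() with shape3.q = aa.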
{ list(M1 = 2, Q1 = 1, expected = TRUE, zero = .zero , multipleResponses = TRUE, parameters.names = if ( .lss ) c("scale", "shape1.a") else c("shape1.a", "scale"), lscale = .lscale , lshape1.a = .lshape1.a , escale = .escale , eshape1.a = .eshape1.a ) }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss , .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) y <- temp5$y w <- temp5$w M1 <- 2 # Number of parameters for one response NOS <- ncoly <- ncol(y) M <- M1*ncol(y) scaL.names <- param.names("scale", NOS) sha1.names <- param.names("shape1.a", NOS) predictors.names <- if ( .lss ) { c(namesof(scaL.names , .lscale , earg = .escale , tag = FALSE), namesof(sha1.names , .lshape1.a , earg = .eshape1.a , tag = FALSE)) } else { c(namesof(sha1.names , .lshape1.a , earg = .eshape1.a , tag = FALSE), namesof(scaL.names , .lscale , earg = .escale , tag = FALSE)) } predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { sc.init <- aa.init <- matrix(NA_real_, n, NOS) for (spp. in 1:NOS) { # For each response 'y_spp.'... do: yvec <- y[, spp.] wvec <- w[, spp.] if ( .imethod == 1 ) { gscale <- .gscale gshape1.a <- .gshape1.a if (length( .iscale )) gscale <- rep_len( .iscale , NOS) if (length( .ishape1.a )) gshape1.a <- rep_len( .ishape1.a , NOS) paralogistic.Loglikfun2 <- function(scaleval, shape1.a, y, x, w, extraargs) { sum(c(w) * dgenbetaII(x = y, scale = scaleval, shape1.a = shape1.a, shape2.p = 1, shape3.q = shape1.a, log = TRUE)) } try.this <- grid.search2(gscale, gshape1.a, # vov3 = 1, vov4 = gshape1.a, objfun = paralogistic.Loglikfun2, # x = x, y = yvec, w = wvec, # extraargs = NULL, ret.objfun = TRUE) # Last value is the loglik sc.init[, spp.] <- try.this["Value1"] aa.init[, spp.] <- try.this["Value2"] } else { # .imethod == 2 qvec <- .probs.y ishape3.q <- if (length( .ishape1.a )) .ishape1.a else 1 xvec <- log( (1-qvec)^(-1/ ishape3.q) - 1 ) fit0 <- lsfit(x = xvec, y = log(quantile(yvec, qvec))) sc.init[, spp.] <- if (length( .iscale )) .iscale else exp(fit0$coef[1]) aa.init[, spp.] <- if (length( .ishape1.a )) .ishape1.a else 1/fit0$coef[2] } } # End of for (spp. ...) finite.mean <- 1 < aa.init * aa.init COP.use <- 1.15 while (FALSE && any(!finite.mean)) { aa.init[!finite.mean] <- 0.1 + aa.init[!finite.mean] * COP.use finite.mean <- 1 < aa.init * aa.init } etastart <- if ( .lss ) cbind(theta2eta(sc.init, .lscale , earg = .escale ), theta2eta(aa.init, .lshape1.a , earg = .eshape1.a )) else cbind(theta2eta(aa.init, .lshape1.a , earg = .eshape1.a ), theta2eta(sc.init, .lscale , earg = .escale )) etastart <- etastart[, interleave.VGAM(M, M1 = M1)] } # End of etastart. 
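  # Initialization notes (paralogistic): imethod = 1 builds a bespoke
  # two-parameter objective (paralogistic.Loglikfun2) so the grid search
  # ties shape3.q to shape1.a, as the distribution requires.  imethod = 2
  # inverts F(y) = 1 - (1 + (y/b)^a)^(-a), treating the outer exponent
  # as provisionally known (ishape3.q, taken from ishape1.a or 1):
  #   log((1 - tau)^(-1/a) - 1) = a*log(y) - a*log(b),
  # so the intercept gives log(b) and the reciprocal slope gives a.  The
  # mean needs a*q = a^2 > 1, hence the finite.mean check on
  # aa.init * aa.init; the repair loop is left disabled
  # (while (FALSE && ...)).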
}), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .iscale = iscale , .ishape1.a = ishape1.a, .gscale = gscale , .gshape1.a = gshape1.a, .imethod = imethod , .probs.y = probs.y, .lss = lss ))), linkinv = eval(substitute(function(eta, extra = NULL) { M1 <- 2 NOS <- ncol(eta)/M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape1.a , earg = .eshape1.a ) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a ) Scale <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lscale , earg = .escale ) } parg <- 1 qq <- aa ans <- Scale * exp(lgamma(parg + 1/aa) + lgamma(qq - 1/aa) - lgamma(parg) - lgamma(qq)) ans[parg + 1/aa <= 0] <- NA ans[qq - 1/aa <= 0] <- NA ans[aa <= 0] <- NA ans[Scale <= 0] <- NA ans }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), last = eval(substitute(expression({ M1 <- 2 misc$link <- c(rep_len(if ( .lss ) .lscale else .lshape1.a , ncoly), rep_len(if ( .lss ) .lshape1.a else .lscale , ncoly))[ interleave.VGAM(M, M1 = M1)] temp.names <- if ( .lss ) { c(scaL.names, sha1.names) } else { c(sha1.names, scaL.names) } names(misc$link) <- temp.names[interleave.VGAM(M, M1 = M1)] misc$earg <- vector("list", M) names(misc$earg) <- temp.names[interleave.VGAM(M, M1 = M1)] for (ii in 1:ncoly) if ( .lss ) { misc$earg[[M1*ii-1]] <- .escale misc$earg[[M1*ii ]] <- .eshape1.a } else { misc$earg[[M1*ii-1]] <- .eshape1.a misc$earg[[M1*ii ]] <- .escale } misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { M1 <- 2 NOS <- ncol(eta)/M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape1.a , earg = .eshape1.a ) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a ) Scale <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lscale , earg = .escale ) } parg <- 1 qq <- aa if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dgenbetaII(x = y, scale = Scale, shape1.a = aa, shape2.p = parg, shape3.q = aa, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), vfamily = c("paralogistic"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) M1 <- 2 NOS <- ncol(eta)/M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape1.a , earg = .eshape1.a ) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a ) Scale <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lscale , earg = .escale ) } rparalogistic(nsim * length(Scale), shape1.a = aa, scale = Scale) }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- 2 NOS <- ncol(eta) / M1 if 
( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape1.a , earg = .eshape1.a) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a) Scale <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lscale , earg = .escale ) } parg <- 1 qq <- aa okay1 <- all(is.finite(Scale)) && all(Scale > 0) && all(is.finite(aa )) && all(aa > 0) && all(is.finite(parg )) && all(parg > 0) && all(is.finite(qq )) && all(qq > 0) okay.support <- if (okay1) all(-aa * parg < 1 & 1 < aa * qq) else TRUE if (!okay.support) warning("parameter constraint -a < 1 < a*a has been violated; ", "solution may be at the boundary of the ", "parameter space.") okay1 && okay.support }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- ncol(eta)/M1 # Needed for summary() if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape1.a , earg = .eshape1.a) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a) Scale <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lscale , earg = .escale ) } parg <- 1 qq <- aa temp1 <- log(y/Scale) temp2 <- (y/Scale)^aa temp3 <- digamma(parg + qq) temp3a <- digamma(parg) temp3b <- digamma(qq) temp4 <- log1p(temp2) dl.dscale <- (aa/Scale) * (-parg + (parg+qq) / (1+1/temp2)) dl.da <- 1/aa + parg * temp1 - (parg+qq) * temp1 / (1+1/temp2) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) da.deta <- dtheta.deta(aa, .lshape1.a , earg = .eshape1.a ) myderiv <- if ( .lss ) { c(w) * cbind(dl.dscale * dscale.deta, dl.da * da.deta) } else { c(w) * cbind(dl.da * da.deta, dl.dscale * dscale.deta) } myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), weight = eval(substitute(expression({ temp5 <- trigamma(parg + qq) temp5a <- trigamma(parg) temp5b <- trigamma(qq) ned2l.da <- (1 + parg + qq + parg * qq * (temp5a + temp5b + (temp3b - temp3a + (parg-qq)/(parg*qq))^2 - (parg^2 + qq^2) / (parg*qq)^2)) / (aa^2 * (1+parg+qq)) ned2l.dscale <- (aa^2) * parg * qq / ((1+parg+qq) * Scale^2) ned2l.dascale <- (parg - qq - parg * qq * (temp3a -temp3b)) / (Scale*(1 + parg+qq)) wz <- if ( .lss ) { array(c(c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.da * da.deta^2, c(w) * ned2l.dascale * da.deta * dscale.deta), dim = c(n, M/M1, M1*(M1+1)/2)) } else { array(c(c(w) * ned2l.da * da.deta^2, c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.dascale * da.deta * dscale.deta), dim = c(n, M/M1, M1*(M1+1)/2)) } wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss )))) } inv.paralogistic <- function(lscale = "loge", lshape1.a = "loge", iscale = NULL, ishape1.a = NULL, imethod = 1, lss = TRUE, gscale = exp(-5:5), gshape1.a = seq(0.75, 4, by = 0.25), # exp(-5:5), probs.y = c(0.25, 0.50, 0.75), zero = "shape") { if (length(lss) != 1 && !is.logical(lss)) stop("Argument 'lss' not specified correctly") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("Bad input for argument 'imethod'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("Bad input for argument 'iscale'") if (length(ishape1.a) && !is.Numeric(ishape1.a, 
positive = TRUE)) stop("Bad input for argument 'ishape1.a'") if (length(probs.y) < 2 || max(probs.y) > 1 || !is.Numeric(probs.y, positive = TRUE)) stop("Bad input for argument 'probs.y'") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape1.a <- as.list(substitute(lshape1.a)) eshape1.a <- link2list(lshape1.a) lshape1.a <- attr(eshape1.a, "function.name") new("vglmff", blurb = c("Inverse paralogistic distribution \n\n", "Links: ", ifelse (lss, namesof("scale" , lscale , earg = escale), namesof("shape1.a", lshape1.a, earg = eshape1.a)), ", ", ifelse (lss, namesof("shape1.a", lshape1.a, earg = eshape1.a), namesof("scale" , lscale , earg = escale)), "\n", "Mean: scale * gamma(shape1.a + 1/shape1.a) * ", "gamma(1 - 1/shape1.a) / ", "gamma(shape1.a)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, zero = .zero , multipleResponses = TRUE, parameters.names = if ( .lss ) c("scale", "shape1.a") else c("shape1.a", "scale"), lscale = .lscale , lshape1.a = .lshape1.a , escale = .escale , eshape1.a = .eshape1.a ) }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss , .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) y <- temp5$y w <- temp5$w M1 <- 2 # Number of parameters for one response NOS <- ncoly <- ncol(y) M <- M1*ncol(y) scaL.names <- param.names("scale", NOS) sha1.names <- param.names("shape1.a", NOS) predictors.names <- if ( .lss ) { c(namesof(scaL.names , .lscale , earg = .escale , tag = FALSE), namesof(sha1.names , .lshape1.a , earg = .eshape1.a , tag = FALSE)) } else { c(namesof(sha1.names , .lshape1.a , earg = .eshape1.a , tag = FALSE), namesof(scaL.names , .lscale , earg = .escale , tag = FALSE)) } predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { sc.init <- aa.init <- matrix(NA_real_, n, NOS) for (spp. in 1:NOS) { # For each response 'y_spp.'... do: yvec <- y[, spp.] wvec <- w[, spp.] if ( .imethod == 1 ) { gscale <- .gscale gshape1.a <- .gshape1.a if (length( .iscale )) gscale <- rep_len( .iscale , NOS) if (length( .ishape1.a )) gshape1.a <- rep_len( .ishape1.a , NOS) inv.paralogistic.Loglikfun2 <- function(scaleval, shape1.a, y, x, w, extraargs) { sum(c(w) * dgenbetaII(x = y, scale = scaleval, shape1.a = shape1.a, shape2.p = shape1.a, shape3.q = 1, log = TRUE)) } try.this <- grid.search2(gscale, gshape1.a, # vov3 = 1, vov4 = gshape1.a, objfun = inv.paralogistic.Loglikfun2, # x = x, y = yvec, w = wvec, # extraargs = NULL, ret.objfun = TRUE) # Last value is the loglik sc.init[, spp.] <- try.this["Value1"] aa.init[, spp.] <- try.this["Value2"] } else { # .imethod == 2 qvec <- .probs.y ishape2.p <- if (length( .ishape1.a )) .ishape1.a else 1 xvec <- log( qvec^(-1/ ishape2.p) - 1 ) fit0 <- lsfit(x = xvec, y = log(quantile(yvec, qvec))) sc.init[, spp.] <- if (length( .iscale )) .iscale else exp(fit0$coef[1]) aa.init[, spp.] <- if (length( .ishape1.a )) .ishape1.a else -1/fit0$coef[2] } } # End of for (spp. ...) 
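# A minimal sketch (not run) of the 'imethod = 2' initialization used
# just above. The inverse paralogistic cdf is F(y) = (1 + (y/b)^(-a))^(-p)
# with p = a, so at the quantile q, log(y) = log(b) - (1/a)*log(q^(-1/p) - 1).
# Regressing log sample quantiles on log(q^(-1/p) - 1) therefore gives
# exp(intercept) as a scale start and -1/slope as a shape start; the code
# uses a provisional p ('ishape2.p', defaulting to 1) since this is only an
# initial-value heuristic. 'yvec0' below is hypothetical, for illustration.
if (FALSE) {
  qvec0 <- c(0.25, 0.50, 0.75)    # the default 'probs.y'
  yvec0 <- exp(rnorm(100))        # hypothetical positive responses
  xvec0 <- log(qvec0^(-1/1) - 1)  # provisional p = 1
  fit00 <- lsfit(x = xvec0, y = log(quantile(yvec0, qvec0)))
  c(scale.start = exp(fit00$coef[1]), shape1.a.start = -1/fit00$coef[2])
}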
finite.mean <- 1 < aa.init COP.use <- 1.15 while (FALSE && any(!finite.mean)) { aa.init[!finite.mean] <- 0.1 + aa.init[!finite.mean] * COP.use finite.mean <- 1 < aa.init } etastart <- if ( .lss ) cbind(theta2eta(sc.init, .lscale , earg = .escale ), theta2eta(aa.init, .lshape1.a , earg = .eshape1.a )) else cbind(theta2eta(aa.init, .lshape1.a , earg = .eshape1.a ), theta2eta(sc.init, .lscale , earg = .escale )) etastart <- etastart[, interleave.VGAM(M, M1 = M1)] } # End of etastart.
}), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .iscale = iscale , .ishape1.a = ishape1.a, .gscale = gscale , .gshape1.a = gshape1.a, .imethod = imethod , .probs.y = probs.y, .lss = lss ))), linkinv = eval(substitute(function(eta, extra = NULL) { M1 <- 2 NOS <- ncol(eta)/M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape1.a , earg = .eshape1.a ) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a ) Scale <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lscale , earg = .escale ) } parg <- aa qq <- 1 ans <- Scale * exp(lgamma(parg + 1/aa) + lgamma(qq - 1/aa) - lgamma(parg) - lgamma(qq)) ans[parg + 1/aa <= 0] <- NA ans[qq - 1/aa <= 0] <- NA ans[aa <= 0] <- NA ans[Scale <= 0] <- NA ans }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), last = eval(substitute(expression({ M1 <- 2 misc$link <- c(rep_len(if ( .lss ) .lscale else .lshape1.a , ncoly), rep_len(if ( .lss ) .lshape1.a else .lscale , ncoly))[ interleave.VGAM(M, M1 = M1)] temp.names <- if ( .lss ) { c(scaL.names, sha1.names) } else { c(sha1.names, scaL.names) } names(misc$link) <- temp.names[interleave.VGAM(M, M1 = M1)] misc$earg <- vector("list", M) names(misc$earg) <- temp.names[interleave.VGAM(M, M1 = M1)] for (ii in 1:ncoly) if ( .lss ) { misc$earg[[M1*ii-1]] <- .escale misc$earg[[M1*ii ]] <- .eshape1.a } else { misc$earg[[M1*ii-1]] <- .eshape1.a misc$earg[[M1*ii ]] <- .escale } misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { M1 <- 2 NOS <- ncol(eta)/M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape1.a , earg = .eshape1.a ) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a ) Scale <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lscale , earg = .escale ) } parg <- aa if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dgenbetaII(x = y, scale = Scale, shape1.a = aa, shape2.p = aa, shape3.q = 1, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), vfamily = c("inv.paralogistic"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) M1 <- 2 NOS <- ncol(eta)/M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape1.a , earg = .eshape1.a) 
} else { aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a) Scale <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lscale , earg = .escale ) } parg <- aa qq <- 1 rinv.paralogistic(nsim * length(Scale), shape1.a = aa, scale = Scale) }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- 2 NOS <- ncol(eta) / M1 if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape1.a , earg = .eshape1.a) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a) Scale <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lscale , earg = .escale ) } parg <- aa qq <- 1 okay1 <- all(is.finite(Scale)) && all(Scale > 0) && all(is.finite(aa )) && all(aa > 0) && all(is.finite(parg )) && all(parg > 0) && all(is.finite(qq )) && all(qq > 0) okay.support <- if (okay1) all(-aa * parg < 1 & 1 < aa * qq) else TRUE if (!okay.support) warning("parameter constraint -a*a < 1 < a has been violated; ", "solution may be at the boundary of the ", "parameter space.") okay1 && okay.support }, list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- ncol(eta)/M1 # Needed for summary() if ( .lss ) { Scale <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lscale , earg = .escale ) aa <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lshape1.a , earg = .eshape1.a) } else { aa <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lshape1.a , earg = .eshape1.a) Scale <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lscale , earg = .escale ) } parg <- aa qq <- 1 temp1 <- log(y/Scale) temp2 <- (y/Scale)^aa temp3 <- digamma(parg + qq) temp3a <- digamma(parg) temp3b <- digamma(qq) temp4 <- log1p(temp2) dl.dscale <- (aa/Scale) * (-parg + (parg+qq) / (1+1/temp2)) dl.da <- 1/aa + parg * temp1 - (parg+qq) * temp1 / (1+1/temp2) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) da.deta <- dtheta.deta(aa, .lshape1.a , earg = .eshape1.a ) myderiv <- if ( .lss ) { c(w) * cbind(dl.dscale * dscale.deta, dl.da * da.deta) } else { c(w) * cbind(dl.da * da.deta, dl.dscale * dscale.deta) } myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss ))), weight = eval(substitute(expression({ temp5 <- trigamma(parg + qq) temp5a <- trigamma(parg) temp5b <- trigamma(qq) ned2l.da <- (1 + parg + qq + parg * qq * (temp5a + temp5b + (temp3b - temp3a + (parg-qq)/(parg*qq))^2 - (parg^2 + qq^2) / (parg*qq)^2)) / (aa^2 * (1+parg+qq)) ned2l.dscale <- (aa^2) * parg * qq / ((1+parg+qq) * Scale^2) ned2l.dascale <- (parg - qq - parg * qq * (temp3a -temp3b)) / (Scale*(1 + parg+qq)) wz <- if ( .lss ) { array(c(c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.da * da.deta^2, c(w) * ned2l.dascale * da.deta * dscale.deta), dim = c(n, M/M1, M1*(M1+1)/2)) } else { array(c(c(w) * ned2l.da * da.deta^2, c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.dascale * da.deta * dscale.deta), dim = c(n, M/M1, M1*(M1+1)/2)) } wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lscale = lscale , .lshape1.a = lshape1.a, .escale = escale , .eshape1.a = eshape1.a, .lss = lss )))) } VGAM/R/family.censored.R0000644000176200001440000016034613135276757014444 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. 
Yee, University of Auckland. # All rights reserved. cens.poisson <- function(link = "loge", imu = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Censored Poisson distribution\n\n", "Link: ", namesof("mu", link, earg = earg), "\n", "Variance: mu"), initialize = eval(substitute(expression({ if (anyNA(y)) stop("NAs are not allowed in the response") w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 3, Is.integer.y = TRUE) centype <- attr(y, "type") if (centype == "right") { temp <- y[, 2] extra$uncensored <- ifelse(temp == 1, TRUE, FALSE) extra$rightcensored <- ifelse(temp == 0, TRUE, FALSE) extra$leftcensored <- rep_len(FALSE, n) extra$interval <- rep_len(FALSE, n) init.mu <- pmax(y[, 1], 1/8) } else if (centype == "left") { temp <- y[, 2] extra$uncensored <- ifelse(temp == 1, TRUE, FALSE) extra$rightcensored <- rep_len(FALSE, n) extra$leftcensored <- ifelse(temp == 0, TRUE, FALSE) extra$interval <- rep_len(FALSE, n) init.mu <- pmax(y[, 1], 1/8) } else if (centype == "interval" || centype == "interval2") { temp <- y[, 3] extra$uncensored <- ifelse(temp == 1, TRUE, FALSE) extra$rightcensored <- ifelse(temp == 0, TRUE, FALSE) extra$leftcensored <- ifelse(temp == 2, TRUE, FALSE) extra$intervalcensored <- ifelse(temp == 3, TRUE, FALSE) init.mu <- pmax((y[, 1] + y[, 2])/2, 1/8) # for intervalcensored if (any(extra$uncensored)) init.mu[extra$uncensored] <- pmax(y[extra$uncensored, 1], 1/8) if (any(extra$rightcensored)) init.mu[extra$rightcensored] <- pmax(y[extra$rightcensored, 1], 1/8) if (any(extra$leftcensored)) init.mu[extra$leftcensored] <- pmax(y[extra$leftcensored, 1], 1/8) } else if (centype == "counting") { stop("type == 'counting' not compatible with cens.poisson()") init.mu <- pmax(y[, 1], 1/8) stop("currently not working") } else stop("response have to be in a class of SurvS4") if (length( .imu )) init.mu <- 0 * y[, 1] + .imu predictors.names <- namesof("mu", .link, earg = .earg, short = TRUE) if (!length(etastart)) etastart <- theta2eta(init.mu, link = .link, earg = .earg) }), list( .link = link, .earg = earg, .imu = imu))), linkinv = eval(substitute(function(eta, extra = NULL) { mu <- eta2theta(eta, link = .link, earg = .earg) mu }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$expected <- FALSE misc$link <- c("mu" = .link) misc$earg <- list("mu" = .earg) misc$multipleResponses <- FALSE }), list( .link = link, .earg = earg ))), linkfun = eval(substitute(function(mu, extra = NULL) { theta2eta(mu, link = .link, earg = .earg) }, list( .link = link, .earg = earg ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { cen0 <- extra$uncensored cenL <- extra$leftcensored cenU <- extra$rightcensored cenI <- extra$intervalcensored if (residuals){ stop("loglikelihood residuals not implemented yet") } else { sum(w[cen0] * dpois(y[cen0, 1], mu[cen0], log = TRUE)) + sum(w[cenU] * log1p(-ppois(y[cenU, 1] - 1, mu[cenU]))) + sum(w[cenL] * ppois(y[cenL, 1] - 1, mu[cenL], log.p = TRUE)) + sum(w[cenI] * log(ppois(y[cenI, 2], mu[cenI]) - ppois(y[cenI, 1], mu[cenI]))) } }, vfamily = "cens.poisson", validparams = eval(substitute(function(eta, y, extra = NULL) { lambda <- eta2theta(eta, link = .link , earg = .earg ) okay1 <- all(is.finite(lambda)) && all(0 < lambda) okay1 }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ cen0 <- extra$uncensored cenL <- extra$leftcensored cenU <- extra$rightcensored cenI <- extra$intervalcensored lambda <- 
eta2theta(eta, link = .link , earg = .earg ) dl.dlambda <- (y[, 1] - lambda)/lambda # uncensored yllim <- yulim <- y[, 1] # uncensored if (any(cenU)) { yllim[cenU] <- y[cenU, 1] densm1 <- dpois(yllim-1, lambda) queue <- ppois(yllim-1, lambda, lower.tail = FALSE) dl.dlambda[cenU] <- densm1[cenU] / queue[cenU] } if (any(cenL)) { yulim[cenL] <- y[cenL, 1] - 1 densm0 <- dpois(yulim, lambda) Queue <- ppois(yulim, lambda) # Left tail probability dl.dlambda[cenL] <- -densm0[cenL] / Queue[cenL] } if (any(cenI)) { yllim[cenI] <- y[cenI, 1] + 1 yulim[cenI] <- y[cenI, 2] Queue1 <- ppois(yllim-1, lambda) Queue2 <- ppois(yulim, lambda) densm02 <- dpois(yulim, lambda) densm12 <- dpois(yllim-1, lambda) dl.dlambda[cenI] <- (-densm02[cenI]+densm12[cenI]) / (Queue2[cenI]-Queue1[cenI]) } dlambda.deta <- dtheta.deta(theta=lambda, link = .link, earg = .earg) c(w) * dl.dlambda * dlambda.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ d2lambda.deta2 <- d2theta.deta2(theta = lambda, link = .link, earg = .earg ) d2l.dlambda2 <- 1 / lambda # uncensored; Fisher scoring if (any(cenU)) { densm2 <- dpois(yllim-2, lambda) d2l.dlambda2[cenU] <- (dl.dlambda[cenU])^2 - (densm2[cenU]-densm1[cenU])/queue[cenU] } if (any(cenL)) { densm1 <- dpois(yulim-1, lambda) d2l.dlambda2[cenL] <- (dl.dlambda[cenL])^2 - (densm0[cenL]-densm1[cenL])/Queue[cenL] } if (any(cenI)) { densm03 <- dpois(yulim-1, lambda) densm13 <- dpois(yllim-2, lambda) d2l.dlambda2[cenI] <- (dl.dlambda[cenI])^2 - (densm13[cenI]-densm12[cenI]-densm03[cenI] + densm02[cenI]) / (Queue2[cenI]-Queue1[cenI]) } wz <- c(w) * ((dlambda.deta^2) * d2l.dlambda2) wz }), list( .link = link, .earg = earg )))) } if (FALSE) cens.exponential <- ecens.exponential <- function(link = "loge", location = 0) { if (!is.Numeric(location, length.arg = 1)) stop("bad input for 'location'") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Censored exponential distribution\n\n", "Link: ", namesof("rate", link, tag = TRUE), "\n", "Mean: ", "mu = ", location, " + 1 / ", namesof("rate", link, tag = FALSE), "\n", "Variance: ", if (location == 0) "Exponential: mu^2" else paste("(mu-", location, ")^2", sep = "")), initialize = eval(substitute(expression({ extra$location <- .location if (any(y[, 1] <= extra$location)) stop("all responses must be greater than ", extra$location) predictors.names <- namesof("rate", .link , .earg , tag = FALSE) type <- attr(y, "type") if (type == "right" || type == "left"){ mu <- y[, 1] + (abs(y[, 1] - extra$location) < 0.001) / 8 } else if (type == "interval") { temp <- y[, 3] mu <- ifelse(temp == 3, y[, 2] + (abs(y[, 2] - extra$location) < 0.001) / 8, y[, 1] + (abs(y[, 1] - extra$location) < 0.001) / 8) } if (!length(etastart)) etastart <- theta2eta(1/(mu-extra$location), .link , .earg ) if (type == "right") { temp <- y[, 2] extra$uncensored <- ifelse(temp == 1, TRUE, FALSE) extra$rightcensored <- ifelse(temp == 0, TRUE, FALSE) extra$leftcensored <- rep_len(FALSE, n) extra$interval <- rep_len(FALSE, n) } else if (type == "left") { temp <- y[, 2] extra$uncensored <- ifelse(temp == 1, TRUE, FALSE) extra$rightcensored <- rep_len(FALSE, n) extra$leftcensored <- ifelse(temp == 0, TRUE, FALSE) extra$interval <- rep_len(FALSE, n) } else if (type == "counting") { stop("type == 'counting' not recognized") extra$uncensored <- rep(temp == 1, TRUE, FALSE) extra$interval <- rep_len(FALSE, n) extra$leftcensored <- rep_len(FALSE, n) extra$rightcensored <- rep_len(FALSE, n) 
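# NB: nothing in this 'counting' branch after the stop() call above can
# execute, so these placeholder assignments (and the extra$counting line
# below) are dead code; the enclosing ecens.exponential() definition is
# itself disabled by its if (FALSE) wrapper.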
extra$counting <- ifelse(temp == 0, TRUE, FALSE) } else if (type == "interval") { temp <- y[, 3] extra$uncensored <- ifelse(temp == 1, TRUE, FALSE) extra$rightcensored <- ifelse(temp == 0, TRUE, FALSE) extra$leftcensored <- ifelse(temp == 2, TRUE, FALSE) extra$interval <- ifelse(temp == 3, TRUE, FALSE) } else stop("'type' not recognized") }), list( .location = location, .link = link ))), linkinv = eval(substitute(function(eta, extra = NULL) extra$location + 1 / eta2theta(eta, .link , .earg ), list( .link = link ) )), last = eval(substitute(expression({ misc$location <- extra$location misc$link <- c("rate" = .link) misc$multipleResponses <- FALSE }), list( .link = link ))), link = eval(substitute(function(mu, extra = NULL) theta2eta(1 / (mu - extra$location), .link , .earg ), list( .link = link ) )), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { rate <- 1 / (mu - extra$location) cen0 <- extra$uncensored cenL <- extra$leftcensored cenU <- extra$rightcensored cenI <- extra$interval if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(w[cenL] * log1p(-exp(-rate[cenL] * (y[cenL, 1] - extra$location)))) + sum(w[cenU] * (-rate[cenU]*(y[cenU, 1]-extra$location))) + sum(w[cen0] * (log(rate[cen0]) - rate[cen0]*(y[cen0, 1]-extra$location))) + sum(w[cenI] * log(-exp(-rate[cenI]*(y[cenI, 2]-extra$location))+ exp(-rate[cenI]*(y[cenI, 1]-extra$location)))) }, list( .link = link ))), vfamily = c("ecens.exponential"), validparams = eval(substitute(function(eta, y, extra = NULL) { rate <- 1 / (mu - extra$location) okay1 <- all(is.finite(rate)) && all(0 < rate) okay1 }, list( .link = link ))), deriv = eval(substitute(expression({ rate <- 1 / (mu - extra$location) cen0 <- extra$uncensored cenL <- extra$leftcensored cenU <- extra$rightcensored cenI <- extra$interval dl.drate <- 1/rate - (y[, 1]-extra$location) # uncensored tmp200 <- exp(-rate*(y[, 1]-extra$location)) tmp200b <- exp(-rate*(y[, 2]-extra$location)) # for interval censored if (any(cenL)) dl.drate[cenL] <- (y[cenL, 1]-extra$location) * tmp200[cenL] / (1 - tmp200[cenL]) if (any(cenU)) dl.drate[cenU] <- -(y[cenU, 1]-extra$location) if (any(cenI)) dl.drate[cenI] <- ((y[cenI, 2] - extra$location) * tmp200b[cenI] - (y[cenI, 1] - extra$location) * tmp200[cenI]) / (-tmp200b[cenI] + tmp200[cenI]) drate.deta <- dtheta.deta(rate, .link , .earg ) c(w) * dl.drate * drate.deta }), list( .link = link ) )), weight = eval(substitute(expression({ A123 <- ((mu-extra$location)^2) # uncensored d2l.drate2 Lowpt <- ifelse(cenL, y[, 1], extra$location) Lowpt <- ifelse(cenI, y[, 1], Lowpt) #interval censored Upppt <- ifelse(cenU, y[, 1], Inf) Upppt <- ifelse(cenI, y[, 2], Upppt) #interval censored tmp300 <- exp(-rate*(Lowpt - extra$location)) d2l.drate2 <- 0 * y[, 1] ind50 <- Lowpt > extra$location d2l.drate2[ind50] <- (Lowpt[ind50]-extra$location)^2 * tmp300[ind50] / (1-tmp300[ind50]) d2l.drate2 <- d2l.drate2 + (exp(-rate*(Lowpt-extra$location)) - exp(-rate*(Upppt-extra$location))) * A123 wz <- c(w) * (drate.deta^2) * d2l.drate2 wz }), list( .link = link )))) } cennormal <- cens.normal <- function(lmu = "identitylink", lsd = "loge", imethod = 1, zero = "sd") { lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lsd <- as.list(substitute(lsd)) esd <- link2list(lsd) lsd <- attr(esd, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = 
c("Censored univariate normal\n\n", "Links: ", namesof("mu", lmu, tag = TRUE), "; ", namesof("sd", lsd, tag = TRUE), "\n", "Conditional variance: sd^2"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, zero = .zero , multiple.responses = FALSE, parameters.names = c("mu", "sd"), expected = TRUE ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y if (!length(extra$leftcensored)) extra$leftcensored <- rep_len(FALSE, n) if (!length(extra$rightcensored)) extra$rightcensored <- rep_len(FALSE, n) if (any(extra$rightcensored & extra$leftcensored)) stop("some observations are both right and left censored!") predictors.names <- c(namesof("mu", .lmu , earg = .emu , tag = FALSE), namesof("sd", .lsd , earg = .esd , tag = FALSE)) if (!length(etastart)) { anyc <- extra$leftcensored | extra$rightcensored i11 <- if ( .imethod == 1) anyc else FALSE # can be all data junk <- lm.wfit(x = cbind(x[!i11, ]), y = y[!i11], w = w[!i11]) sd.y.est <- sqrt(sum(w[!i11] * junk$resid^2) / junk$df.residual) etastart <- cbind(mu = y, rep_len(theta2eta(sd.y.est, .lsd), n)) if (any(anyc)) etastart[anyc, 1] <- x[anyc, , drop = FALSE] %*% junk$coeff } }), list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .imethod = imethod ))), linkinv = eval(substitute( function(eta, extra = NULL) { eta2theta(eta[, 1], .lmu , earg = .emu ) }, list( .lmu = lmu, .emu = emu ))), last = eval(substitute(expression({ misc$link <- c("mu" = .lmu , "sd" = .lsd ) misc$earg <- list("mu" = .emu , "sd" = .esd ) misc$expected <- TRUE misc$multipleResponses <- FALSE }), list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { cenL <- extra$leftcensored cenU <- extra$rightcensored cen0 <- !cenL & !cenU # uncensored obsns mum <- eta2theta(eta[, 1], .lmu , earg = .emu ) sdv <- eta2theta(eta[, 2], .lsd , earg = .esd ) Lower <- ifelse(cenL, y, -Inf) Upper <- ifelse(cenU, y, Inf) ell1 <- -log(sdv[cen0]) - 0.5 * ((y[cen0] - mum[cen0])/sdv[cen0])^2 ell2 <- log1p(-pnorm((mum[cenL] - Lower[cenL]) / sdv[cenL])) ell3 <- log1p(-pnorm(( Upper[cenU] - mum[cenU]) / sdv[cenU])) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(w[cen0] * ell1) + sum(w[cenL] * ell2) + sum(w[cenU] * ell3) }, list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd ))), vfamily = c("cens.normal"), validparams = eval(substitute(function(eta, y, extra = NULL) { mum <- eta2theta(eta[, 1], .lmu ) sdv <- eta2theta(eta[, 2], .lsd ) okay1 <- all(is.finite(mum)) && all(is.finite(sdv)) && all(0 < sdv) okay1 }, list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd ))), deriv = eval(substitute(expression({ cenL <- extra$leftcensored cenU <- extra$rightcensored cen0 <- !cenL & !cenU # uncensored obsns Lower <- ifelse(cenL, y, -Inf) Upper <- ifelse(cenU, y, Inf) mum <- eta2theta(eta[, 1], .lmu ) sdv <- eta2theta(eta[, 2], .lsd ) dl.dmu <- (y-mum) / sdv^2 dl.dsd <- (((y-mum)/sdv)^2 - 1) / sdv dmu.deta <- dtheta.deta(mum, .lmu , earg = .emu ) dsd.deta <- dtheta.deta(sdv, .lsd , earg = .esd ) if (any(cenL)) { mumL <- mum - Lower temp21L <- mumL[cenL] / sdv[cenL] PhiL <- pnorm(temp21L) phiL <- dnorm(temp21L) fred21 <- phiL / (1 - PhiL) dl.dmu[cenL] <- -fred21 / sdv[cenL] dl.dsd[cenL] <- 
mumL[cenL] * fred21 / sdv[cenL]^2 rm(fred21) } if (any(cenU)) { mumU <- Upper - mum temp21U <- mumU[cenU] / sdv[cenU] PhiU <- pnorm(temp21U) phiU <- dnorm(temp21U) fred21 <- phiU / (1 - PhiU) dl.dmu[cenU] <- fred21 / sdv[cenU] # Negated dl.dsd[cenU] <- mumU[cenU] * fred21 / sdv[cenU]^2 rm(fred21) } c(w) * cbind(dl.dmu * dmu.deta, dl.dsd * dsd.deta) }), list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd ))), weight = eval(substitute(expression({ A1 <- 1 - pnorm((mum - Lower) / sdv) # Lower A3 <- 1 - pnorm((Upper - mum) / sdv) # Upper A2 <- 1 - A1 - A3 # Middle; uncensored wz <- matrix(0, n, 3) wz[, iam(1, 1,M)] <- A2 * 1 / sdv^2 # ed2l.dmu2 wz[, iam(2, 2,M)] <- A2 * 2 / sdv^2 # ed2l.dsd2 mumL <- mum - Lower temp21L <- mumL / sdv PhiL <- pnorm(temp21L) phiL <- dnorm(temp21L) temp31L <- ((1-PhiL) * sdv)^2 wz.cenL11 <- phiL * (phiL - (1-PhiL)*temp21L) / temp31L wz.cenL22 <- mumL * phiL * ((1-PhiL) * (2 - temp21L^2) + mumL * phiL / sdv) / (sdv * temp31L) wz.cenL12 <- phiL * ((1-PhiL)*(temp21L^2 - 1) - temp21L*phiL) / temp31L wz.cenL11[!is.finite(wz.cenL11)] <- 0 wz.cenL22[!is.finite(wz.cenL22)] <- 0 wz.cenL12[!is.finite(wz.cenL12)] <- 0 wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] + A1 * wz.cenL11 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] + A1 * wz.cenL22 wz[, iam(1, 2, M)] <- A1 * wz.cenL12 mumU <- Upper - mum # often Inf temp21U <- mumU / sdv # often Inf PhiU <- pnorm(temp21U) # often 1 phiU <- dnorm(temp21U) # often 0 temp31U <- ((1-PhiU) * sdv)^2 # often 0 tmp8 <- (1-PhiU)*temp21U wzcenU11 <- phiU * (phiU - tmp8) / temp31U tmp9 <- (1-PhiU) * (2 - temp21U^2) wzcenU22 <- mumU * phiU * (tmp9 + mumU * phiU / sdv) / (sdv * temp31U) wzcenU12 <- -phiU * ((1-PhiU)*(temp21U^2 - 1) - temp21U*phiU) / temp31U wzcenU11[!is.finite(wzcenU11)] <- 0 # Needed when Upper==Inf wzcenU22[!is.finite(wzcenU22)] <- 0 # Needed when Upper==Inf wzcenU12[!is.finite(wzcenU12)] <- 0 # Needed when Upper==Inf wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] + A3 * wzcenU11 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] + A3 * wzcenU22 wz[, iam(1, 2, M)] <- wz[, iam(1, 2, M)] + A3 * wzcenU12 wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] * dmu.deta^2 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] * dsd.deta^2 wz[, iam(1, 2, M)] <- wz[, iam(1, 2, M)] * dmu.deta * dsd.deta c(w) * wz }), list( .lmu = lmu, .lsd = lsd )))) } cens.rayleigh <- function(lscale = "loge", oim = TRUE) { lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.logical(oim) || length(oim) != 1) stop("bad input for argument 'oim'") new("vglmff", blurb = c("Censored Rayleigh distribution\n\n", "f(y) = y*exp(-0.5*(y/scale)^2)/scale^2, y>0, scale>0\n", "Link: ", namesof("scale", lscale, earg = escale ), "\n", "\n", "Mean: scale * sqrt(pi / 2)"), initialize = eval(substitute(expression({ if (NCOL(y) != 1) stop("response must be a vector or a one-column matrix") if (length(extra$leftcensored)) stop("cannot handle left-censored data") if (!length(extra$rightcensored)) extra$rightcensored <- rep_len(FALSE, n) predictors.names <- namesof("scale", .lscale , earg = .escale , tag = FALSE) if (!length(etastart)) { a.init <- (y+1/8) / sqrt(pi/2) etastart <- theta2eta(a.init, .lscale , earg = .escale ) } }), list( .lscale = lscale, .escale = escale ))), linkinv = eval(substitute(function(eta, extra = NULL) { Scale <- eta2theta(eta, .lscale , earg = .escale ) Scale * sqrt(pi/2) }, list( .lscale = lscale, .escale = escale ))), last = eval(substitute(expression({ misc$link <- c("scale" = .lscale ) misc$earg <- list("scale" = .escale ) misc$oim 
<- .oim }), list( .lscale = lscale, .escale = escale, .oim = oim ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Scale <- eta2theta(eta, .lscale , earg = .escale ) cen0 <- !extra$rightcensored # uncensored obsns cenU <- extra$rightcensored if (residuals) stop("loglikelihood residuals not ", "implemented yet") else sum(w[cen0] * (log(y[cen0]) - 2*log(Scale[cen0]) - 0.5*(y[cen0]/Scale[cen0])^2)) - sum(w[cenU] * (y[cenU]/Scale[cenU])^2) * 0.5 }, list( .lscale = lscale, .escale = escale ))), vfamily = c("cens.rayleigh"), validparams = eval(substitute(function(eta, y, extra = NULL) { Scale <- eta2theta(eta, .lscale , earg = .escale ) okay1 <- all(is.finite(Scale)) && all(0 < Scale) okay1 }, list( .lscale = lscale, .escale = escale ))), deriv = eval(substitute(expression({ cen0 <- !extra$rightcensored # uncensored obsns cenU <- extra$rightcensored Scale <- eta2theta(eta, .lscale , earg = .escale ) dl.dScale <- ((y/Scale)^2 - 2) / Scale dScale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dl.dScale[cenU] <- y[cenU]^2 / Scale[cenU]^3 c(w) * dl.dScale * dScale.deta }), list( .lscale = lscale, .escale = escale ))), weight = eval(substitute(expression({ ned2l.dScale2 <- 4 / Scale^2 wz <- dScale.deta^2 * ned2l.dScale2 if ( .oim ) { d2l.dScale2 <- 3 * (y[cenU])^2 / (Scale[cenU])^4 d2Scale.deta2 <- d2theta.deta2(Scale[cenU], .lscale , earg = .escale ) wz[cenU] <- (dScale.deta[cenU])^2 * d2l.dScale2 - dl.dScale[cenU] * d2Scale.deta2 } else { ned2l.dScale2[cenU] <- 6 / (Scale[cenU])^2 wz[cenU] <- (dScale.deta[cenU])^2 * ned2l.dScale2[cenU] } c(w) * wz }), list( .lscale = lscale, .escale = escale, .oim = oim )))) } weibull.mean <- function(lmean = "loge", lshape = "loge", imean = NULL, ishape = NULL, probs.y = c(0.2, 0.5, 0.8), imethod = 1, zero = "shape") { imeann <- imean lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") lmeann <- as.list(substitute(lmean)) emeann <- link2list(lmeann) lmeann <- attr(emeann, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (!is.Numeric(probs.y, positive = TRUE) || length(probs.y) < 2 || max(probs.y) >= 1) stop("bad input for argument 'probs.y'") if (length(ishape)) if (!is.Numeric(ishape, positive = TRUE)) stop("argument 'ishape' values must be positive") if (length(imeann)) if (!is.Numeric(imeann, positive = TRUE)) stop("argument 'imean' values must be positive") blurb.vec <- c(namesof("mean", lmeann, earg = emeann), namesof("shape", lshape, earg = eshape)) new("vglmff", blurb = c("Weibull distribution (parameterized by the mean)\n\n", "Links: ", blurb.vec[1], ", ", blurb.vec[2], "\n", "Mean: mean\n", "Variance: mean^2 * (gamma(1 + 2/shape) / ", "gamma(1 + 1/shape)^2 - 1)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero, .lmeann = lmeann ))), infos = eval(substitute(function(...) 
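# infos() returns static metadata about the family -- M1 (the number of
# linear predictors per response), Q1, the parameter names and the 'zero'
# specification -- which vglm()/vgam() and the summary methods can query
# without touching any data.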
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("mean", "shape"), lmean = .lmeann , lshape = .lshape , zero = .zero ) }, list( .zero = zero, .lmeann = lmeann, .lshape = lshape ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 2 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly if (is.SurvS4(y)) stop("only uncensored observations are allowed; ", "don't use SurvS4()") mynames1 <- param.names("mean" , ncoly) mynames2 <- param.names("shape", ncoly) predictors.names <- c(namesof(mynames1, .lmeann , earg = .emeann , tag = FALSE), namesof(mynames2, .lshape , earg = .eshape , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] Meann.init <- matrix(if (length( .imeann )) .imeann else 0.5 * colMeans(y), n, ncoly, byrow = TRUE) + 0.5 * y Shape.init <- matrix(if (length( .ishape )) .ishape else 0 + NA, n, ncoly, byrow = TRUE) if (!length(etastart)) { if (!length( .ishape ) || !length( .imeann )) { for (ilocal in 1:ncoly) { anyc <- FALSE # extra$leftcensored | extra$rightcensored i11 <- if ( .imethod == 1) anyc else FALSE # Can be all data probs.y <- .probs.y xvec <- log(-log1p(-probs.y)) fit0 <- lsfit(x = xvec, y = log(quantile(y[!i11, ilocal], probs = probs.y ))) if (!is.Numeric(Shape.init[, ilocal])) Shape.init[, ilocal] <- 1 / fit0$coef["X"] } # ilocal etastart <- cbind(theta2eta(Meann.init, .lmeann , earg = .emeann ), theta2eta(Shape.init, .lshape , earg = .eshape ))[, interleave.VGAM(M, M1 = M1)] } } }), list( .lmeann = lmeann, .lshape = lshape, .emeann = emeann, .eshape = eshape, .imeann = imeann, .ishape = ishape, .probs.y = probs.y, .imethod = imethod ) )), linkinv = eval(substitute(function(eta, extra = NULL) { Meann <- eta2theta(eta[, c(TRUE, FALSE)], .lmeann , earg = .emeann ) Meann }, list( .lmeann = lmeann, .lshape = lshape, .emeann = emeann, .eshape = eshape ) )), last = eval(substitute(expression({ regnotok <- any(Shape <= 2) if (any(Shape <= 1)) { warning("MLE regularity conditions are violated", "(shape <= 1) at the final iteration: ", "MLEs are not consistent") } else if (any(1 < Shape & Shape < 2)) { warning("MLE regularity conditions are violated", "(1 < shape < 2) at the final iteration: ", "MLEs exist but are not asymptotically normal") } else if (any(2 == Shape)) { warning("MLE regularity conditions are violated", "(shape == 2) at the final iteration: ", "MLEs exist and are normal and asymptotically ", "efficient but with a slower convergence rate than when ", "shape > 2") } M1 <- extra$M1 avector <- c(rep_len( .lmeann , ncoly), rep_len( .lshape , ncoly)) misc$link <- avector[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .emeann misc$earg[[M1*ii ]] <- .eshape } misc$M1 <- M1 misc$imethod <- .imethod misc$expected <- TRUE misc$multipleResponses <- TRUE misc$RegCondOK <- !regnotok # Save this for later misc$expected <- TRUE # all(cen0) }), list( .lmeann = lmeann, .lshape = lshape, .emeann = emeann, .eshape = eshape, .imethod = imethod ) )), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Meann <- eta2theta(eta[, c(TRUE, FALSE)], .lmeann , earg = .emeann ) Shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , 
earg = .eshape ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dweibull(x = y, shape = Shape, scale = Meann / gamma(1 + 1/Shape), log = TRUE)) } }, list( .lmeann = lmeann, .lshape = lshape, .emeann = emeann, .eshape = eshape ) )), vfamily = c("weibull.mean"), validparams = eval(substitute(function(eta, y, extra = NULL) { Meann <- eta2theta(eta[, c(TRUE, FALSE)], .lmeann , earg = .emeann ) Shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) okay1 <- all(is.finite(Meann)) && all(0 < Meann) && all(is.finite(Shape)) && all(0 < Shape) okay1 }, list( .lmeann = lmeann, .lshape = lshape, .emeann = emeann, .eshape = eshape ) )), deriv = eval(substitute(expression({ M1 <- 2 Meann <- eta2theta(eta[, c(TRUE, FALSE)], .lmeann , earg = .emeann ) Shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) if (FALSE) { } else { EulerM <- -digamma(1.0) AA <- (EulerM - 1)^2 + (pi^2) / 6 BB <- digamma(1 + 1/Shape) CC <- y * gamma(1 + 1/Shape) / Meann dl.dmeann <- (CC^Shape - 1) * Shape / Meann # Agrees dl.dshape <- 1/Shape - (log(y/Meann) + lgamma(1 + 1/Shape)) * (CC^Shape - 1) + (BB / Shape) * (CC^Shape - 1) } dmeann.deta <- dtheta.deta(Meann, .lmeann , earg = .emeann ) dshape.deta <- dtheta.deta(Shape, .lshape , earg = .eshape ) myderiv <- c(w) * cbind(dl.dmeann * dmeann.deta, dl.dshape * dshape.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lmeann = lmeann, .lshape = lshape, .emeann = emeann, .eshape = eshape ) )), weight = eval(substitute(expression({ if (FALSE) { } else { ned2l.dmeann <- (Shape / Meann)^2 # ned2l.dshape <- AA / Shape^2 # Unchanged ned2l.dshapemeann <- (EulerM - 1 + BB) / Meann } wz <- array(c(c(w) * ned2l.dmeann * dmeann.deta^2, c(w) * ned2l.dshape * dshape.deta^2, c(w) * ned2l.dshapemeann * dmeann.deta * dshape.deta), dim = c(n, M / M1, 3)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .eshape = eshape )))) } weibullR <- function(lscale = "loge", lshape = "loge", iscale = NULL, ishape = NULL, lss = TRUE, nrfs = 1, probs.y = c(0.2, 0.5, 0.8), imethod = 1, zero = "shape") { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (!is.Numeric(probs.y, positive = TRUE) || length(probs.y) < 2 || max(probs.y) >= 1) stop("bad input for argument 'probs.y'") if (!is.Numeric(nrfs, length.arg = 1) || nrfs < 0 || nrfs > 1) stop("bad input for argument 'nrfs'") if (length(ishape)) if (!is.Numeric(ishape, positive = TRUE)) stop("argument 'ishape' values must be positive") if (length(iscale)) if (!is.Numeric(iscale, positive = TRUE)) stop("argument 'iscale' values must be positive") scale.TF <- if (lss) c(TRUE, FALSE) else c(FALSE, TRUE) scale.12 <- if (lss) 1:2 else 2:1 blurb.vec <- c(namesof("scale", lscale, earg = escale), namesof("shape", lshape, earg = eshape)) blurb.vec <- blurb.vec[scale.12] new("vglmff", blurb = c("Weibull distribution\n\n", "Links: ", blurb.vec[1], ", ", blurb.vec[2], "\n", "Mean: scale * gamma(1 + 1/shape)\n", "Variance: scale^2 * (gamma(1 + 2/shape) - ", "gamma(1 + 1/shape)^2)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero, .scale.12 = scale.12, .scale.TF = scale.TF, .lss = 
lss ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = if ( .lss ) c("scale", "shape") else c("shape", "scale"), lss = .lss , lscale = .lscale , lshape = .lshape , zero = .zero ) }, list( .zero = zero, .scale.12 = scale.12, .scale.TF = scale.TF, .lscale = lscale , .lshape = lshape , .lss = lss ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 2 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly if (is.SurvS4(y)) stop("only uncensored observations are allowed; ", "don't use SurvS4()") if ( .lss ) { mynames1 <- param.names("scale", ncoly) mynames2 <- param.names("shape", ncoly) predictors.names <- c(namesof(mynames1, .lscale , earg = .escale , tag = FALSE), namesof(mynames2, .lshape , earg = .eshape , tag = FALSE)) } else { mynames1 <- param.names("shape", ncoly) mynames2 <- param.names("scale", ncoly) predictors.names <- c(namesof(mynames1, .lshape , earg = .eshape , tag = FALSE), namesof(mynames2, .lscale , earg = .escale , tag = FALSE)) } predictors.names <- predictors.names[ interleave.VGAM(M, M1 = M1)] Shape.init <- matrix(if (length( .ishape )) .ishape else 0 + NA, n, ncoly, byrow = TRUE) Scale.init <- matrix(if (length( .iscale )) .iscale else 0 + NA, n, ncoly, byrow = TRUE) if (!length(etastart)) { if (!length( .ishape ) || !length( .iscale )) { for (ilocal in 1:ncoly) { anyc <- FALSE # extra$leftcensored | extra$rightcensored i11 <- if ( .imethod == 1) anyc else FALSE # Can be all data probs.y <- .probs.y xvec <- log(-log1p(-probs.y)) fit0 <- lsfit(x = xvec, y = log(quantile(y[!i11, ilocal], probs = probs.y ))) if (!is.Numeric(Shape.init[, ilocal])) Shape.init[, ilocal] <- 1 / fit0$coef["X"] if (!is.Numeric(Scale.init[, ilocal])) Scale.init[, ilocal] <- exp(fit0$coef["Intercept"]) } # ilocal etastart <- if ( .lss ) cbind(theta2eta(Scale.init, .lscale , earg = .escale ), theta2eta(Shape.init, .lshape , earg = .eshape ))[, interleave.VGAM(M, M1 = M1)] else cbind(theta2eta(Shape.init, .lshape , earg = .eshape ), theta2eta(Scale.init, .lscale , earg = .escale ))[, interleave.VGAM(M, M1 = M1)] } } }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .iscale = iscale, .ishape = ishape, .probs.y = probs.y, .scale.12 = scale.12, .scale.TF = scale.TF, .lss = lss, .imethod = imethod ) )), linkinv = eval(substitute(function(eta, extra = NULL) { Scale <- eta2theta(eta[, .scale.TF ], .lscale , earg = .escale ) Shape <- eta2theta(eta[, !( .scale.TF )], .lshape , earg = .eshape ) Scale * gamma(1 + 1 / Shape) }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .scale.12 = scale.12, .scale.TF = scale.TF, .lss = lss ) )), last = eval(substitute(expression({ regnotok <- any(Shape <= 2) if (any(Shape <= 1)) { warning("MLE regularity conditions are violated", "(shape <= 1) at the final iteration: ", "MLEs are not consistent") } else if (any(1 < Shape & Shape < 2)) { warning("MLE regularity conditions are violated", "(1 < shape < 2) at the final iteration: ", "MLEs exist but are not asymptotically normal") } else if (any(2 == Shape)) { warning("MLE regularity conditions are violated", "(shape == 2) at the final iteration: ", "MLEs exist and are normal and asymptotically ", "efficient but with a slower convergence rate than when ", "shape > 2") } M1 <- extra$M1 avector <- if ( .lss ) 
c(rep_len( .lscale , ncoly), rep_len( .lshape , ncoly)) else c(rep_len( .lshape , ncoly), rep_len( .lscale , ncoly)) misc$link <- avector[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- if ( .lss ) .escale else .eshape misc$earg[[M1*ii ]] <- if ( .lss ) .eshape else .escale } misc$M1 <- M1 misc$imethod <- .imethod misc$expected <- TRUE misc$multipleResponses <- TRUE misc$nrfs <- .nrfs misc$RegCondOK <- !regnotok # Save this for later }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .imethod = imethod, .scale.12 = scale.12, .scale.TF = scale.TF, .lss = lss, .nrfs = nrfs ) )), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Scale <- eta2theta(eta[, .scale.TF ], .lscale , earg = .escale ) Shape <- eta2theta(eta[, !( .scale.TF )], .lshape , earg = .eshape ) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * dweibull(y, shape = Shape, scale = Scale, log = TRUE)) } }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .scale.12 = scale.12, .scale.TF = scale.TF, .lss = lss ) )), vfamily = c("weibullR"), validparams = eval(substitute(function(eta, y, extra = NULL) { Scale <- eta2theta(eta[, .scale.TF ], .lscale , earg = .escale ) Shape <- eta2theta(eta[, !( .scale.TF )], .lshape , earg = .eshape ) okay1 <- all(is.finite(Scale)) && all(0 < Scale) && all(is.finite(Shape)) && all(0 < Shape) okay1 }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .scale.12 = scale.12, .scale.TF = scale.TF, .lss = lss ) )), deriv = eval(substitute(expression({ M1 <- 2 Scale <- eta2theta(eta[, .scale.TF ], .lscale , earg = .escale ) Shape <- eta2theta(eta[, !( .scale.TF )], .lshape , earg = .eshape ) dl.dshape <- 1 / Shape + log(y / Scale) - log(y / Scale) * (y / Scale)^Shape dl.dscale <- (Shape / Scale) * (-1.0 + (y / Scale)^Shape) dshape.deta <- dtheta.deta(Shape, .lshape, earg = .eshape ) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) myderiv <- if ( .lss ) c(w) * cbind(dl.dscale * dscale.deta, dl.dshape * dshape.deta) else c(w) * cbind(dl.dshape * dshape.deta, dl.dscale * dscale.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .scale.12 = scale.12, .scale.TF = scale.TF, .lss = lss ) )), weight = eval(substitute(expression({ EulerM <- -digamma(1.0) ned2l.dscale <- (Shape / Scale)^2 ned2l.dshape <- (6*(EulerM - 1)^2 + pi^2)/(6*Shape^2) # KK (2003) ned2l.dshapescale <- (EulerM-1) / Scale wz <- if ( .lss ) array(c(c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.dshape * dshape.deta^2, c(w) * ned2l.dshapescale * dscale.deta * dshape.deta), dim = c(n, M / M1, 3)) else array(c(c(w) * ned2l.dshape * dshape.deta^2, c(w) * ned2l.dscale * dscale.deta^2, c(w) * ned2l.dshapescale * dscale.deta * dshape.deta), dim = c(n, M / M1, 3)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .eshape = eshape, .nrfs = nrfs, .scale.12 = scale.12, .scale.TF = scale.TF, .lss = lss )))) } setOldClass(c("SurvS4", "Surv")) SurvS4 <- function (time, time2, event, type = c("right", "left", "interval", "counting", "interval2"), origin = 0) { nn <- length(time) ng <- nargs() if (missing(type)) { if (ng == 1 || ng == 2) type <- "right" else if (ng == 3) type <- "counting" else stop("Invalid number of 
arguments") } else { type <- match.arg(type) ng <- ng - 1 if (ng != 3 && (type == "interval" || type == "counting")) stop("Wrong number of args for this type of survival data") if (ng != 2 && (type == "right" || type == "left" || type == "interval2")) stop("Wrong number of args for this type of survival data") } who <- !is.na(time) if (ng == 1) { if (!is.numeric(time)) stop("Time variable is not numeric") ss <- cbind(time, 1) dimnames(ss) <- list(NULL, c("time", "status")) } else if (type == "right" || type == "left") { if (!is.numeric(time)) stop("Time variable is not numeric") if (length(time2) != nn) stop("Time and status are different lengths") if (is.logical(time2)) status <- 1 * time2 else if (is.numeric(time2)) { who2 <- !is.na(time2) if (max(time2[who2]) == 2) status <- time2 - 1 else status <- time2 if (any(status[who2] != 0 & status[who2] != 1)) stop("Invalid status value") } else stop("Invalid status value") ss <- cbind(time, status) dimnames(ss) <- list(NULL, c("time", "status")) } else if (type == "counting") { if (length(time2) != nn) stop("Start and stop are different lengths") if (length(event) != nn) stop("Start and event are different lengths") if (!is.numeric(time)) stop("Start time is not numeric") if (!is.numeric(time2)) stop("Stop time is not numeric") who3 <- who & !is.na(time2) if (any(time[who3] >= time2[who3])) stop("Stop time must be > start time") if (is.logical(event)) status <- 1 * event else if (is.numeric(event)) { who2 <- !is.na(event) if (max(event[who2]) == 2) status <- event - 1 else status <- event if (any(status[who2] != 0 & status[who2] != 1)) stop("Invalid status value") } else stop("Invalid status value") ss <- cbind(time - origin, time2 - origin, status) } else { if (type == "interval2") { event <- ifelse(is.na(time), 2, ifelse(is.na(time2), 0, ifelse(time == time2, 1, 3))) if (any(time[event == 3] > time2[event == 3])) stop("Invalid interval: start > stop") time <- ifelse(event != 2, time, time2) type <- "interval" } else { temp <- event[!is.na(event)] if (!is.numeric(temp)) stop("Status indicator must be numeric") if (length(temp) > 0 && any(temp != floor(temp) | temp < 0 | temp > 3)) stop("Status indicator must be 0, 1, 2 or 3") } status <- event ss <- cbind(time, ifelse(!is.na(event) & event == 3, time2, 1), status) } attr(ss, "type") <- type class(ss) <- "SurvS4" ss } is.SurvS4 <- function(x) inherits(x, "SurvS4") setIs(class1 = "SurvS4", class2 = "matrix") # Forces vglm()@y to be a matrix as.character.SurvS4 <- function (x, ...) 
{ class(x) <- NULL type <- attr(x, "type") if (type == "right") { temp <- x[, 2] temp <- ifelse(is.na(temp), "?", ifelse(temp == 0, "+", " ")) paste(format(x[, 1]), temp, sep = "") } else if (type == "counting") { temp <- x[, 3] temp <- ifelse(is.na(temp), "?", ifelse(temp == 0, "+", " ")) paste("(", format(x[, 1]), ",", format(x[, 2]), temp, "]", sep = "") } else if (type == "left") { temp <- x[, 2] temp <- ifelse(is.na(temp), "?", ifelse(temp == 0, "<", " ")) paste(temp, format(x[, 1]), sep = "") } else { stat <- x[, 3] temp <- c("+", "", "-", "]")[stat + 1] temp2 <- ifelse(stat == 3, paste("(", format(x[, 1]), ", ", format(x[, 2]), sep = ""), format(x[, 1])) ifelse(is.na(stat), as.character(NA), paste(temp2, temp, sep = "")) } } "[.SurvS4" <- function(x, i, j, drop = FALSE) { if (missing(j)) { temp <- class(x) type <- attr(x, "type") class(x) <- NULL x <- x[i, , drop = FALSE] class(x) <- temp attr(x, "type") <- type x } else { class(x) <- NULL NextMethod("[") } } is.na.SurvS4 <- function(x) { as.vector( (1* is.na(unclass(x)))%*% rep(1, ncol(x)) >0) } show.SurvS4 <- function (object) print.default(as.character.SurvS4(object), quote = FALSE) setMethod("show", "SurvS4", function(object) show.SurvS4(object)) pgamma.deriv.unscaled <- function(q, shape) { gam0 <- exp(lgamma(shape) + pgamma(q = q, shape = shape, log.p = TRUE)) I.sq <- pgamma(q = q, shape = shape) alld <- pgamma.deriv(q = q, shape = shape) # 6-coln matrix tmp3 <- alld[, 3] / I.sq # RHS of eqn (4.5) of \cite{wing:1989} G1s <- digamma(shape) + tmp3 # eqn (4.9) gam1 <- gam0 * G1s dG1s <- trigamma(shape) + alld[, 4] / I.sq - tmp3^2 # eqn (4.13) G2s <- dG1s + G1s^2 # eqn (4.12) gam2 <- gam0 * G2s cbind("0" = gam0, "1" = gam1, "2" = gam2) } truncweibull <- function(lower.limit = 1e-5, lAlpha = "loge", lBetaa = "loge", iAlpha = NULL, iBetaa = NULL, nrfs = 1, probs.y = c(0.2, 0.5, 0.8), imethod = 1, zero = "Betaa") { lAlpha <- as.list(substitute(lAlpha)) eAlpha <- link2list(lAlpha) lAlpha <- attr(eAlpha, "function.name") lBetaa <- as.list(substitute(lBetaa)) eBetaa <- link2list(lBetaa) lBetaa <- attr(eBetaa, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (!is.Numeric(probs.y, positive = TRUE) || length(probs.y) < 2 || max(probs.y) >= 1) stop("bad input for argument 'probs.y'") if (!is.Numeric(nrfs, length.arg = 1) || nrfs < 0 || nrfs > 1) stop("bad input for argument 'nrfs'") if (length(iAlpha)) if (!is.Numeric(iAlpha, positive = TRUE)) stop("argument 'iAlpha' values must be positive") if (length(iBetaa)) if (!is.Numeric(iBetaa, positive = TRUE)) stop("argument 'iBetaa' values must be positive") new("vglmff", blurb = c("Truncated weibull distribution\n\n", "Links: ", namesof("Alpha", lAlpha, earg = eAlpha), ", ", namesof("Betaa", lBetaa, earg = eBetaa), "\n", if (length( lower.limit ) < 5) paste("Truncation point(s): ", lower.limit, sep = ", ") else ""), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("Alpha", "Betaa"), lower.limit = .lower.limit , lAlpha = .lAlpha , lBetaa = .lBetaa , zero = .zero ) }, list( .zero = zero, .lAlpha = lAlpha , .lBetaa = lBetaa , .lower.limit = lower.limit ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 2 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly extra$lower.limit <- matrix( .lower.limit , n, ncoly, byrow = TRUE) if (any(y < extra$lower.limit)) { stop("some response values less than argument 'lower.limit'") } if (is.SurvS4(y)) stop("only uncensored observations are allowed; ", "don't use SurvS4()") mynames1 <- param.names("Alpha", ncoly) mynames2 <- param.names("Betaa", ncoly) predictors.names <- c(namesof(mynames1, .lAlpha , earg = .eAlpha , tag = FALSE), namesof(mynames2, .lBetaa , earg = .eBetaa , tag = FALSE))[ interleave.VGAM(M, M1 = M1)] Alpha.init <- matrix(if (length( .iAlpha )) .iAlpha else 0 + NA, n, ncoly, byrow = TRUE) Betaa.init <- matrix(if (length( .iBetaa )) .iBetaa else 0 + NA, n, ncoly, byrow = TRUE) if (!length(etastart)) { if (!length( .iAlpha ) || !length( .iBetaa )) { for (ilocal in 1:ncoly) { anyc <- FALSE # extra$leftcensored | extra$rightcensored i11 <- if ( .imethod == 1) anyc else FALSE # Can be all data probs.y <- .probs.y xvec <- log(-log1p(-probs.y)) fit0 <- lsfit(x = xvec, y = log(quantile(y[!i11, ilocal], probs = probs.y ))) aaa.init <- 1 / fit0$coef["X"] bbb.init <- exp(fit0$coef["Intercept"]) if (!is.Numeric(Betaa.init[, ilocal])) Betaa.init[, ilocal] <- aaa.init if (!is.Numeric(Alpha.init[, ilocal])) Alpha.init[, ilocal] <- (1 / bbb.init)^aaa.init } # ilocal } else { Alpha.init <- rep_len( .iAlpha , n) Betaa.init <- rep_len( .iBetaa , n) } etastart <- cbind(theta2eta(Alpha.init, .lAlpha , earg = .eAlpha ), theta2eta(Betaa.init, .lBetaa , earg = .eBetaa ))[, interleave.VGAM(M, M1 = M1)] } }), list( .lBetaa = lBetaa, .lAlpha = lAlpha, .eBetaa = eBetaa, .eAlpha = eAlpha, .iBetaa = iBetaa, .iAlpha = iAlpha, .lower.limit = lower.limit, .probs.y = probs.y, .imethod = imethod ) )), linkinv = eval(substitute(function(eta, extra = NULL) { Alpha <- eta2theta(eta[, c(TRUE, FALSE)], .lAlpha , earg = .eAlpha ) Betaa <- eta2theta(eta[, c(FALSE, TRUE)], .lBetaa , earg = .eBetaa ) aTb <- Alpha * extra$lower.limit^Betaa wingo3 <- pgamma.deriv.unscaled(q = aTb, shape = 1 + 1 / Betaa) exp.aTb <- exp(aTb) (gamma(1 + 1 / Betaa) - wingo3[, 1]) * exp.aTb / Alpha^(1 / Betaa) }, list( .lBetaa = lBetaa, .lAlpha = lAlpha, .eBetaa = eBetaa, .eAlpha = eAlpha, .lower.limit = lower.limit) )), last = eval(substitute(expression({ aaa.hat <- Betaa regnotok <- any(aaa.hat <= 2) if (any(aaa.hat <= 1)) { warning("MLE regularity conditions are violated", "(Betaa <= 1) at the final iteration: ", "MLEs are not consistent") } else if (any(1 < aaa.hat & aaa.hat < 2)) { warning("MLE regularity conditions are violated", "(1 < Betaa < 2) at the final iteration: ", "MLEs exist but are not asymptotically normal") } else if (any(2 == aaa.hat)) { warning("MLE regularity conditions are violated", "(Betaa == 2) at the final iteration: ", "MLEs exist and are normal and asymptotically ", "efficient but with a slower convergence rate than when ", "Betaa > 2") } M1 <- extra$M1 misc$link <- c(rep_len( .lAlpha , ncoly), rep_len( .lBetaa , ncoly))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, 
mynames2)[interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .eAlpha misc$earg[[M1*ii ]] <- .eBetaa } misc$M1 <- M1 misc$imethod <- .imethod misc$expected <- TRUE misc$multipleResponses <- TRUE misc$nrfs <- .nrfs misc$RegCondOK <- !regnotok # Save this for later }), list( .lBetaa = lBetaa, .lAlpha = lAlpha, .eBetaa = eBetaa, .eAlpha = eAlpha, .imethod = imethod, .lower.limit = lower.limit, .nrfs = nrfs ) )), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Alpha <- eta2theta(eta[, c(TRUE, FALSE)], .lAlpha , earg = .eAlpha ) Betaa <- eta2theta(eta[, c(FALSE, TRUE)], .lBetaa , earg = .eBetaa ) Shape <- Betaa Scale <- 1 / Alpha^(1/Betaa) if (residuals) stop("loglikelihood residuals not ", "implemented yet") else { sum(c(w) * (dweibull(x = y, shape = Shape, scale = Scale, log = TRUE) - pweibull(q = extra$lower.limit, shape = Shape, scale = Scale, log.p = TRUE, lower.tail = FALSE))) } }, list( .lBetaa = lBetaa, .lAlpha = lAlpha, .eBetaa = eBetaa, .eAlpha = eAlpha, .lower.limit = lower.limit ) )), vfamily = c("truncweibull"), validparams = eval(substitute(function(eta, y, extra = NULL) { Alpha <- eta2theta(eta[, c(TRUE, FALSE)], .lAlpha , earg = .eAlpha ) Betaa <- eta2theta(eta[, c(FALSE, TRUE)], .lBetaa , earg = .eBetaa ) okay1 <- all(is.finite(Alpha)) && all(0 < Alpha) && all(is.finite(Betaa)) && all(0 < Betaa) okay1 }, list( .lBetaa = lBetaa, .lAlpha = lAlpha, .eBetaa = eBetaa, .eAlpha = eAlpha, .lower.limit = lower.limit ) )), deriv = eval(substitute(expression({ M1 <- 2 Alpha <- eta2theta(eta[, c(TRUE, FALSE)], .lAlpha , earg = .eAlpha ) Betaa <- eta2theta(eta[, c(FALSE, TRUE)], .lBetaa , earg = .eBetaa ) Shape <- Betaa Scale <- 1 / Alpha^(1/Betaa) TTT <- extra$lower.limit dl.dAlpha <- 1 / Alpha - y^Betaa + TTT^Betaa dl.dBetaa <- (1 / Betaa) + log(y) - Alpha * (y^Betaa * log(y) - TTT^Betaa * log(TTT)) dAlpha.deta <- dtheta.deta(Alpha, .lAlpha, earg = .eAlpha ) dBetaa.deta <- dtheta.deta(Betaa, .lBetaa, earg = .eBetaa ) myderiv <- c(w) * cbind(dl.dAlpha * dAlpha.deta, dl.dBetaa * dBetaa.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lBetaa = lBetaa, .lAlpha = lAlpha, .eBetaa = eBetaa, .eAlpha = eAlpha, .lower.limit = lower.limit ) )), weight = eval(substitute(expression({ aTb <- Alpha * TTT^Betaa exp.aTb <- exp(aTb) TblogT <- (TTT^Betaa) * log(TTT) wingo3 <- pgamma.deriv.unscaled(q = aTb, shape = 2) # 3-cols Eyblogy <- (exp.aTb * (digamma(2) - wingo3[, 2]) - (aTb + 1) * log(Alpha)) / (Alpha * Betaa) Eyblog2y <- (exp.aTb * (digamma(2)^2 + trigamma(2) - wingo3[, 3]) - 2 * log(Alpha) * (digamma(2) - wingo3[, 2])) / (Alpha * Betaa^2) + (log(Alpha)^2) * (aTb + 1) / (Alpha * Betaa^2) ned2l.daa <- 1 / Alpha^2 ned2l.dab <- Eyblogy - TblogT ned2l.dbb <- (1 / Betaa)^2 + Alpha * Eyblog2y - aTb * (log(TTT))^2 wz <- array(c(c(w) * ned2l.daa * dAlpha.deta^2, c(w) * ned2l.dbb * dBetaa.deta^2, c(w) * ned2l.dab * dBetaa.deta * dAlpha.deta), dim = c(n, M / M1, 3)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .nrfs = nrfs )))) } VGAM/R/plot.vgam.R0000644000176200001440000006343513135276757013272 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
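# A minimal usage sketch (illustrative only; the 'hunua' data and the model
# below are assumptions, not taken from this file):
#   fit <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = hunua)
#   plot(fit, se = TRUE, shade = TRUE, shcol = "gray80")
# plotvgam() draws each fitted component function, optionally with
# pointwise +/-2 SE bands, partial residuals, and a rug.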
plotvgam <- plot.vgam <- function(x, newdata = NULL, y = NULL, residuals = NULL, rugplot = TRUE, se = FALSE, scale = 0, raw = TRUE, offset.arg = 0, deriv.arg = 0, overlay = FALSE, type.residuals = c("deviance", "working", "pearson", "response"), plot.arg = TRUE, which.term = NULL, which.cf = NULL, control = plotvgam.control(...), varxij = 1, ...) { missing.control <- missing(control) na.act <- x@na.action x@na.action <- list() if (!is.Numeric(varxij, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for the 'varxij' argument") if (any(slotNames(x) == "control")) { x@control$varxij <- varxij } missing.type.residuals <- missing(type.residuals) if (mode(type.residuals) != "character" && mode(type.residuals) != "name") type.residuals <- as.character(substitute(type.residuals)) if (!missing.type.residuals) type.residuals <- match.arg(type.residuals, c("deviance", "working", "pearson", "response"))[1] if (!is.Numeric(deriv.arg, integer.valued = TRUE, length.arg = 1) || deriv.arg < 0) stop("bad input for the 'deriv' argument") if (se && deriv.arg > 0) { warning("standard errors not available with derivatives. ", "Setting 'se = FALSE'") se <- FALSE } preplot.object <- x@preplot if (!length(preplot.object)) { preplot.object <- preplotvgam(x, newdata = newdata, raw = raw, deriv.arg = deriv.arg, se = se, varxij = varxij) } x@preplot <- preplot.object if (!is.null(residuals) && length(residuals) == 1) { if (residuals) { if (missing.type.residuals) { for (rtype in type.residuals) if (!is.null(residuals <- resid(x, type = rtype))) break } else { residuals = resid(x, type = type.residuals) if (!length(residuals)) warning("residuals are NULL. Ignoring 'residuals = TRUE'") } } else { residuals <- NULL } } if (!missing.control) { control <- c(plotvgam.control( .include.dots = FALSE, ...), control, plotvgam.control(...)) } x@post$plotvgam.control <- control # Add it to the object if (plot.arg) plotpreplotvgam(preplot.object, residuals = residuals, rugplot = rugplot, scale = scale, se = se, offset.arg = offset.arg, deriv.arg = deriv.arg, overlay = overlay, which.term = which.term, which.cf = which.cf, control = control) x@na.action <- na.act # Restore its original value invisible(x) } ylim.scale <- function(ylim, scale = 0) { if (length(ylim) != 2 || ylim[2] < ylim[1]) stop("error in 'ylim'") try <- ylim[2] - ylim[1] if (try > scale) ylim else c(ylim[1] + ylim[2] - scale, ylim[1] + ylim[2] + scale) / 2 } getallresponses <- function(xij) { if (!is.list(xij)) return("") allterms <- lapply(xij, terms) allres <- NULL for (ii in seq_along(xij)) allres <- c(allres, as.character(attr(allterms[[ii]], "variables"))[2]) allres } headpreplotvgam <- function(object, newdata = NULL, terms = attr((object@terms)$terms, "term.labels"), raw = TRUE, deriv.arg = deriv.arg, se = FALSE, varxij = 1) { Terms <- terms(object) # 20030811; object@terms$terms aa <- attributes(Terms) all.terms <- labels(Terms) xvars <- parse(text = all.terms) names(xvars) <- all.terms terms <- sapply(terms, match.arg, all.terms) Interactions <- aa$order > 1 if (any(Interactions)) { stop("cannot handle interactions") } xvars <- xvars[terms] xnames <- as.list(terms) names(xnames) <- terms modes <- sapply(xvars, mode) for (term in terms[modes != "name"]) { evars <- all.names(xvars[term], functions = FALSE, unique = TRUE) if (!length(evars)) next xnames[[term]] <- evars evars <- parse(text=evars) if (length(evars) == 1) { evars <- evars[[1]] } else if (length(evars) > 1 && length(intersect(getallresponses(object@control$xij), 
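        # The 'xij' facility supplies several versions of a covariate; when
        # one of its responses appears among the terms, only the slice
        # selected by 'varxij' is kept (see evars[[varxij]] below).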
names(xnames))) ) { evars <- evars[[varxij]] } else { evars <- c(as.name("list"), evars) mode(evars) <- "call" } xvars[[term]] <- evars } xvars <- c(as.name("list"), xvars) mode(xvars) <- "call" if (length(newdata)) { xvars <- eval(xvars, newdata) } else { Call <- object@call if (!is.null(Call$subset) | !is.null(Call$na.action) | !is.null(options("na.action")[[1]])) { Rownames <- names(fitted(object)) if (!(Rl <- length(Rownames))) Rownames <- dimnames(fitted(object))[[1]] if (length(object@x) && !(Rl <- length(Rownames))) Rownames <- (dimnames(object@x))[[1]] if (length(object@y) && !(Rl <- length(Rownames))) Rownames <- (dimnames(object@y))[[1]] if (!(Rl <- length(Rownames))) stop("need to have names for fitted.values ", "when call has a 'subset' or 'na.action' argument") form <- paste("~", unlist(xnames), collapse = "+") Mcall <- c(as.name("model.frame"), list(formula = terms(as.formula(form)), subset = Rownames, na.action = function(x) x)) mode(Mcall) <- "call" Mcall$data <- Call$data xvars <- eval(xvars, eval(Mcall)) } else { ecall <- substitute(eval(expression(xvars))) ecall$local <- Call$data xvars <- eval(ecall) } } list(xnames = xnames, xvars = xvars) } preplotvgam <- function(object, newdata = NULL, terms = attr((object@terms)$terms, "term.labels"), raw = TRUE, deriv.arg = deriv.arg, se = FALSE, varxij = 1) { result1 <- headpreplotvgam(object, newdata = newdata, terms = terms, raw = raw, deriv.arg = deriv.arg, se = se, varxij = varxij) xvars <- result1$xvars xnames <- result1$xnames if (FALSE && !is.null(object@control$jix)) { myxij <- object@control$xij if (length(myxij)) { } } pred <- if (length(newdata)) { predict(object, newdata, type = "terms", raw = raw, se.fit = se, deriv.arg = deriv.arg) } else { predict(object, type = "terms", raw = raw, se.fit = se, deriv.arg = deriv.arg) } fits <- if (is.atomic(pred)) NULL else pred$fit se.fit <- if (is.atomic(pred)) NULL else pred$se.fit if (is.null(fits)) fits <- pred fred <- attr(fits, "vterm.assign") # NULL for M==1 Constant <- attr(fits, "constant") # NULL if se = TRUE gamplot <- xnames loop.var <- names(fred) for (term in loop.var) { .VGAM.x <- xvars[[term]] myylab <- if (all(substring(term, 1:nchar(term), 1:nchar(term)) != "(")) paste("partial for", term) else term TT <- list(x = .VGAM.x, y = fits[, (if (is.null(fred)) term else fred[[term]])], se.y = if (is.null(se.fit)) NULL else se.fit[, (if (is.null(fred)) term else fred[[term]])], xlab = xnames[[term]], ylab = myylab) class(TT) <- "preplotvgam" gamplot[[term]] <- TT } attr(gamplot, "Constant") <- Constant invisible(gamplot) } plotpreplotvgam <- function(x, y = NULL, residuals = NULL, rugplot = TRUE, se = FALSE, scale = 0, offset.arg = 0, deriv.arg = 0, overlay = FALSE, which.term = NULL, which.cf = NULL, control = NULL) { listof <- inherits(x[[1]], "preplotvgam") if (listof) { TT <- names(x) if (is.null(which.term)) which.term <- TT # Plot them all plot.no <- 0 for (ii in TT) { plot.no <- plot.no + 1 if ((is.character(which.term) && any(which.term == ii)) || (is.numeric(which.term) && any(which.term == plot.no))) plotpreplotvgam(x[[ii]], y = NULL, residuals, rugplot = rugplot, se = se, scale = scale, offset.arg = offset.arg, deriv.arg = deriv.arg, overlay = overlay, which.cf = which.cf, control = control) } } else { dummy <- function(residuals = NULL, rugplot = TRUE, se = FALSE, scale = 0, offset.arg = 0, deriv.arg = 0, overlay = FALSE, which.cf = NULL, control = plotvgam.control()) c(list(residuals = residuals, rugplot = rugplot, se = se, scale = scale, offset.arg = 
offset.arg, deriv.arg = deriv.arg, overlay = overlay, which.cf = which.cf), control) dd <- dummy(residuals = residuals, rugplot = rugplot, se = se, scale = scale, offset.arg = offset.arg, deriv.arg = deriv.arg, overlay = overlay, which.cf = which.cf, control = control) uniq.comps <- unique(c(names(x), names(dd))) Call <- c(as.name("vplot"), c(dd, x)[uniq.comps]) mode(Call) <- "call" invisible(eval(Call)) } } vplot.default <- function(x, y, se.y = NULL, xlab = "", ylab = "", residuals = NULL, rugplot = FALSE, scale = 0, se = FALSE, offset.arg = 0, deriv.arg = 0, overlay = FALSE, which.cf = NULL, ...) { switch(data.class(x)[1], logical = vplot.factor(factor(x), y, se.y, xlab, ylab, residuals, rugplot, scale, se, offset.arg = offset.arg, overlay = overlay, ...), if (is.numeric(x)) { vplot.numeric(as.vector(x), y, se.y, xlab, ylab, residuals, rugplot, scale, se, offset.arg = offset.arg, overlay = overlay, ...) } else { warning("The 'x' component of '", ylab, "' has class '", class(x), "'; no vplot() methods available") } ) # End of switch } vplot.list <- function(x, y, se.y = NULL, xlab, ylab, residuals = NULL, rugplot = FALSE, scale = 0, se = FALSE, offset.arg = 0, deriv.arg = 0, overlay = FALSE, which.cf = NULL, ...) { if (is.numeric(x[[1]])) { vplot.numeric(x[[1]], y, se.y, xlab, ylab, residuals, rugplot, scale, se, offset.arg = offset.arg, deriv.arg = deriv.arg, overlay = overlay, ...) } else { stop("this function has not been written yet") } } plotvgam.control <- function(which.cf = NULL, xlim = NULL, ylim = NULL, llty = par()$lty, slty = "dashed", pcex = par()$cex, pch = par()$pch, pcol = par()$col, lcol = par()$col, rcol = par()$col, scol = par()$col, llwd = par()$lwd, slwd = par()$lwd, add.arg = FALSE, one.at.a.time = FALSE, .include.dots = TRUE, noxmean = FALSE, shade = FALSE, shcol = "gray80", ...) { ans <- list(which.cf = which.cf, xlim = xlim, ylim = ylim, llty = llty, slty = slty, pcex = pcex, pch = pch, pcol = pcol, lcol = lcol, rcol = rcol, scol = scol, llwd = llwd, slwd = slwd, add.arg = add.arg, noxmean = noxmean, one.at.a.time = one.at.a.time, shade = shade, shcol = shcol) if (.include.dots) { c(list(...), ans) } else { default.vals <- plotvgam.control() return.list <- list() for (ii in names(default.vals)) { replace.val <- !((length(ans[[ii]]) == length(default.vals[[ii]])) && (length(default.vals[[ii]]) > 0) && identical(ans[[ii]], default.vals[[ii]])) if (replace.val) return.list[[ii]] <- ans[[ii]] } if (length(return.list)) { names(return.list) <- names(return.list) return.list } else { NULL } } } vplot.numeric <- function(x, y, se.y = NULL, xlab, ylab, residuals = NULL, rugplot = FALSE, se = FALSE, scale = 0, offset.arg = 0, deriv.arg = 0, overlay = FALSE, which.cf = NULL, xlim = NULL, ylim = NULL, llty = par()$lty, slty = "dashed", pcex = par()$cex, pch = par()$pch, pcol = par()$col, lcol = par()$col, rcol = par()$col, scol = par()$col, llwd = par()$lwd, slwd = par()$lwd, add.arg = FALSE, one.at.a.time = FALSE, noxmean = FALSE, separator = ":", shade = FALSE, shcol = "gray80", ...) 
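# vplot.numeric() plots the component function(s) of a numeric covariate:
# the fitted curve(s), optional partial residuals, a rug of jittered x
# values, and +/-2 SE curves, either overlaid or one panel per component.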
{ ylim0 <- ylim if (length(y)/length(x) != round(length(y)/length(x))) stop("length of 'x' and 'y' do not seem to match") y <- as.matrix(y) if (!length(which.cf)) which.cf <- 1:ncol(y) # Added 20040807 if (!is.null(se.y)) se.y <- as.matrix(se.y) if (!is.null(se.y) && anyNA(se.y)) se.y <- NULL if (!is.null(residuals)) { residuals <- as.matrix(residuals) if (ncol(residuals) != ncol(y)) { warning("ncol(residuals) != ncol(y) so residuals are not plotted") residuals <- NULL } } offset.arg <- matrix(offset.arg, nrow(y), ncol(y), byrow = TRUE) y <- y + offset.arg ylab <- add.hookey(ylab, deriv.arg) if (xmeanAdded <- (se && !is.null(se.y) && !noxmean && all(substring(ylab, 1:nchar(ylab), 1:nchar(ylab)) != "("))) { x <- c(x, mean(x)) y <- rbind(y, 0 * y[1, ]) se.y <- rbind(se.y, 0 * se.y[1, ]) if (!is.null(residuals)) residuals <- rbind(residuals, NA*residuals[1, ]) # NAs not plotted } ux <- unique(sort(x)) ooo <- match(ux, x) uy <- y[ooo, , drop = FALSE] xlim.orig <- xlim ylim.orig <- ylim xlim <- range(if (length(xlim)) NULL else ux, xlim, na.rm = TRUE) ylim <- range(if (length(ylim)) NULL else uy[, which.cf], ylim, na.rm = TRUE) if (rugplot) { usex <- if (xmeanAdded) x[-length(x)] else x jx <- jitter(usex[!is.na(usex)]) xlim <- range(if (length(xlim.orig)) NULL else jx, xlim.orig, na.rm = TRUE) } if (se && !is.null(se.y)) { se.upper <- uy + 2 * se.y[ooo, , drop = FALSE] se.lower <- uy - 2 * se.y[ooo, , drop = FALSE] ylim <- if (length(ylim.orig)) range(ylim.orig) else range(c(ylim, se.upper[, which.cf], se.lower[, which.cf])) } if (!is.null(residuals)) { if (length(residuals) == length(y)) { residuals <- as.matrix(y + residuals) ylim <- if (length(ylim.orig)) range(ylim.orig) else range(c(ylim, residuals[, which.cf]), na.rm = TRUE) } else { residuals <- NULL warning("Residuals do not match 'x' in \"", ylab, "\" preplot object") } } all.missingy <- all(is.na(y)) if (all.missingy) return() if (!length(ylim.orig)) ylim <- ylim.scale(ylim, scale) if (overlay) { if (!length(which.cf)) which.cf <- 1:ncol(uy) # Added 20040807 if (!add.arg) { matplot(ux, uy[, which.cf], type = "n", xlim = xlim, ylim = ylim, xlab = xlab, ylab = ylab, ...) 
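      # type = "n" sets up empty axes first; the curves (matlines), the
      # residuals, the rug and the SE lines are then layered on top.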
} matlines(ux, uy[, which.cf], lwd = llwd, col = lcol, lty = llty) if (!is.null(residuals)) { if (ncol(y) == 1) { points(x, residuals, pch = pch, col = pcol, cex = pcex) } else { matpoints(x, residuals[, which.cf], pch = pch, col = pcol, cex = pcex) # add.arg = TRUE, } } if (rugplot) rug(jx, col = rcol) if (se && !is.null(se.y)) { matlines(ux, se.upper[, which.cf], lty = slty, lwd = slwd, col = scol) matlines(ux, se.lower[, which.cf], lty = slty, lwd = slwd, col = scol) } } else { YLAB <- ylab pcex <- rep_len(pcex, ncol(uy)) pch <- rep_len(pch , ncol(uy)) pcol <- rep_len(pcol, ncol(uy)) lcol <- rep_len(lcol, ncol(uy)) llty <- rep_len(llty, ncol(uy)) llwd <- rep_len(llwd, ncol(uy)) slty <- rep_len(slty, ncol(uy)) rcol <- rep_len(rcol, ncol(uy)) scol <- rep_len(scol, ncol(uy)) slwd <- rep_len(slwd, ncol(uy)) for (ii in 1:ncol(uy)) { if (!length(which.cf) || ( length(which.cf) && any(which.cf == ii))) { if (is.Numeric(ylim0, length.arg = 2)) { ylim <- ylim0 } else { ylim <- range(ylim0, uy[, ii], na.rm = TRUE) if (se && !is.null(se.y)) ylim <- range(ylim0, se.lower[, ii], se.upper[, ii], na.rm = TRUE) if (!is.null(residuals)) ylim <- range(c(ylim, residuals[, ii]), na.rm = TRUE) ylim <- ylim.scale(ylim, scale) } if (ncol(uy) > 1 && length(separator)) YLAB <- paste(ylab, separator, ii, sep = "") if (!add.arg) { if (one.at.a.time) { readline("Hit return for the next plot ") } plot(ux, uy[, ii], type = "n", xlim = xlim, ylim = ylim, xlab = xlab, ylab = YLAB, ...) } lines(ux, uy[, ii], lwd = llwd[ii], col = lcol[ii], lty = llty[ii]) if (!is.null(residuals)) points(x, residuals[, ii], pch = pch[ii], col = pcol[ii], cex = pcex[ii]) if (rugplot) rug(jx, col = rcol[ii]) if (se && !is.null(se.y)) { if (shade) { polygon(c(ux, rev(ux), ux[1]), c(se.upper[, ii], rev(se.lower[, ii]), se.upper[1, ii]), col = shcol, border = NA) lines(ux, uy[, ii], lwd = llwd[ii], col = lcol[ii], lty = llty[ii]) } else { lines(ux, se.upper[, ii], lty = slty[ii], lwd = slwd[ii], col = scol[ii]) lines(ux, se.lower[, ii], lty = slty[ii], lwd = slwd[ii], col = scol[ii]) } # !shade } # se && !is.null(se.y)) } } # for() } # overlay } # vplot.numeric() vplot.matrix <- function(x, y, se.y = NULL, xlab, ylab, residuals = NULL, rugplot = FALSE, scale = 0, se = FALSE, offset.arg = 0, deriv.arg = 0, overlay = FALSE, which.cf = NULL, ...) { stop("You shouldn't ever call this function!") } add.hookey <- function(ch, deriv.arg = 0) { if (!is.Numeric(deriv.arg, integer.valued = TRUE, length.arg = 1) || deriv.arg < 0) stop("bad input for the 'deriv' argument") if (deriv.arg == 0) return(ch) hookey <- switch(deriv.arg, "'", "''", "'''", "''''", "'''''", stop("too high a derivative")) nc <- nchar(ch) sub <- substring(ch, 1:nc, 1:nc) if (nc >= 2 && sub[1] == "s" && sub[2] == "(") { paste("s", hookey, substring(ch, 2, nc), sep = "", coll = "") } else { paste(ch, hookey, sep = "", collapse = "") } } vplot.factor <- function(x, y, se.y = NULL, xlab, ylab, residuals = NULL, rugplot = FALSE, scale = 0, se = FALSE, xlim = NULL, ylim = NULL, offset.arg = 0, deriv.arg = 0, overlay = FALSE, which.cf = NULL, ...) 
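# vplot.factor() is the factor-covariate analogue: fitted values at each
# level are drawn as horizontal segments by vvplot.factor(), with optional
# jittered residuals, rug and SE whiskers; overlay = FALSE loops over the
# response columns one panel at a time.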
{ if (deriv.arg > 0) return(NULL) if (length(y)/length(x) != round(length(y)/length(x))) stop("length of 'x' and 'y' do not seem to match") y <- as.matrix(y) if (!is.null(se.y)) se.y <- as.matrix(se.y) if (!is.null(se.y) && anyNA(se.y)) se.y <- NULL if (!is.null(residuals)) { residuals <- as.matrix(residuals) if (ncol(residuals) != ncol(y)) { warning("ncol(residuals) != ncol(y) so residuals are not plotted") residuals <- NULL } } if (overlay) { vvplot.factor(x, y, se.y = if (is.null(se.y)) NULL else se.y, xlab = xlab, ylab = ylab, residuals = residuals, rugplot = rugplot, scale = scale, se = se, xlim = xlim, ylim = ylim, ...) } else { for (ii in 1:ncol(y)) { ylab <- rep_len(ylab, ncol(y)) if (ncol(y) > 1) ylab <- dimnames(y)[[2]] vvplot.factor(x, y[, ii,drop = FALSE], se.y = if (is.null(se.y)) NULL else se.y[, ii,drop = FALSE], xlab = xlab, ylab = ylab[ii], residuals = if (is.null(residuals)) NULL else residuals[, ii,drop = FALSE], rugplot = rugplot, scale = scale, se = se, xlim = xlim, ylim = ylim, ...) } } invisible(NULL) } vvplot.factor <- function(x, y, se.y = NULL, xlab, ylab, residuals = NULL, rugplot = FALSE, scale = 0, se = FALSE, xlim = NULL, ylim = NULL, ...) { M <- ncol(y) nn <- as.numeric(table(x)) codex <- as.numeric(x) ucodex <- seq(nn)[nn > 0] ooo <- match(ucodex, codex, 0) uy <- y[ooo, , drop = FALSE] ylim <- range(ylim, uy) xlim <- range(c(0, sum(nn), xlim)) rightx <- cumsum(nn) leftx <- c(0, rightx[ -length(nn)]) ux <- (leftx + rightx)/2 delta <- (rightx - leftx)/8 jx <- runif(length(codex), (ux - delta)[codex], (ux + delta)[codex]) nnajx <- jx[!is.na(jx)] if (rugplot) xlim <- range(c(xlim, nnajx)) if (se && !is.null(se.y)) { se.upper <- uy + 2 * se.y[ooo, , drop = FALSE] se.lower <- uy - 2 * se.y[ooo, , drop = FALSE] ylim <- range(c(ylim, se.upper, se.lower)) } if (!is.null(residuals)) { if (length(residuals) == length(y)) { residuals <- y + residuals ylim <- range(c(ylim, residuals)) } else { residuals <- NULL warning("Residuals do not match 'x' in \"", ylab, "\" preplot object") } } ylim <- ylim.scale(ylim, scale) Levels <- levels(x) if (!all(nn)) { keep <- nn > 0 nn <- nn[keep] ux <- ux[keep] delta <- delta[keep] leftx <- leftx[keep] rightx <- rightx[keep] Levels <- Levels[keep] } about <- function(ux, M, Delta = 1 / M) { if (M == 1) return(cbind(ux)) ans <- matrix(NA_real_, length(ux), M) grid <- seq(-Delta, Delta, len = M) for (ii in 1:M) { ans[, ii] <- ux + grid[ii] } ans } uxx <- about(ux, M, Delta = min(delta)) xlim <- range(c(xlim, uxx)) matplot(ux, uy, ylim = ylim, xlim = xlim, xlab = "", type = "n", ylab = ylab, axes = FALSE, frame.plot = TRUE, ...) mtext(xlab, 1, 2, adj = 0.5) axis(side = 2) lpos <- par("mar")[3] mtext(Levels, side = 3, line = lpos/2, at = ux, adj = 0.5, srt = 45) for (ii in 1:M) segments(uxx[, ii] - 1.0 * delta, uy[, ii], uxx[, ii] + 1.0 * delta, uy[, ii]) if (!is.null(residuals)) { for (ii in 1:M) { jux <- uxx[, ii] jux <- jux[codex] jux <- jux + runif(length(jux), -0.7*min(delta), 0.7*min(delta)) if (M == 1) points(jux, residuals[, ii]) else points(jux, residuals[, ii], pch = as.character(ii)) } } if (rugplot) rug(nnajx) if (se) { for (ii in 1:M) { segments(uxx[, ii] + 0.5*delta, se.upper[, ii], uxx[, ii] - 0.5*delta, se.upper[, ii]) segments(uxx[, ii] + 0.5*delta, se.lower[, ii], uxx[, ii] - 0.5*delta, se.lower[, ii]) segments(uxx[, ii], se.lower[, ii], uxx[, ii], se.upper[, ii], lty = 2) } } invisible(diff(ylim)) } if (!isGeneric("vplot")) setGeneric("vplot", function(x, ...) 
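# vplot() is made an S4 generic so that the setMethod() calls below can
# dispatch on the class of x (factor, list, matrix, numeric).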
standardGeneric("vplot")) setMethod("vplot", "factor", function(x, ...) vplot.factor(x, ...)) setMethod("vplot", "list", function(x, ...) vplot.list(x, ...)) setMethod("vplot", "matrix", function(x, ...) vplot.matrix(x, ...)) setMethod("vplot", "numeric", function(x, ...) vplot.numeric(x, ...)) setMethod("plot", "vgam", function(x, y, ...) { if (!missing(y)) stop("cannot process the 'y' argument") invisible(plot.vgam(x = x, y = y, ...))}) plotqrrvglm <- function(object, rtype = c("response", "pearson", "deviance", "working"), ask = FALSE, main = paste(Rtype, "residuals vs latent variable(s)"), xlab = "Latent Variable", I.tolerances = object@control$eq.tolerances, ...) { M <- object@misc$M n <- object@misc$n Rank <- object@control$Rank Coef.object <- Coef(object, I.tolerances = I.tolerances) rtype <- match.arg(rtype, c("response", "pearson", "deviance", "working"))[1] res <- resid(object, type = rtype) my.ylab <- if (length(object@misc$ynames)) object@misc$ynames else rep_len(" ", M) Rtype <- switch(rtype, pearson = "Pearson", response = "Response", deviance = "Deviance", working = "Working") done <- 0 for (rr in 1:Rank) for (ii in 1:M) { plot(Coef.object@latvar[, rr], res[, ii], xlab = paste(xlab, if (Rank == 1) "" else rr, sep = ""), ylab = my.ylab[ii], main = main, ...) done <- done + 1 if (done >= prod(par()$mfrow) && ask && done != Rank*M) { done <- 0 readline("Hit return for the next plot: ") } } object } setMethod("plot", "qrrvglm", function(x, y, ...) invisible(plotqrrvglm(object = x, ...))) put.caption <- function(text.arg = "(a)", w.x = c(0.50, 0.50), w.y = c(0.07, 0.93), ...) { text(text.arg, x = weighted.mean(par()$usr[1:2], w = w.x), y = weighted.mean(par()$usr[3:4], w = w.y), ...) } setMethod("plot", "pvgam", function(x, y, ...) { if (!missing(y)) stop("cannot process the 'y' argument") invisible(plot.vgam(x = x, y = y, ...))}) VGAM/R/family.qreg.R0000644000176200001440000060002413135276757013570 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. dlms.bcn <- function(x, lambda = 1, mu = 0, sigma = 1, tol0 = 0.001, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) zedd <- ((x/mu)^lambda - 1) / (lambda * sigma) log.dz.dy <- (lambda - 1) * log(x/mu) - log(mu * sigma) is.eff.0 <- abs(lambda) < tol0 if (any(is.eff.0)) { zedd[is.eff.0] <- log(x[is.eff.0] / mu[is.eff.0]) / sigma[is.eff.0] log.dz.dy[is.eff.0] <- -log(x[is.eff.0] * sigma[is.eff.0]) } logden <- dnorm(zedd, log = TRUE) + log.dz.dy if (log.arg) logden else exp(logden) } qlms.bcn <- function(p, lambda = 1, mu = 0, sigma = 1) { answer <- mu * (1 + lambda * sigma * qnorm(p))^(1/lambda) answer } lms.bcn.control <- lms.bcg.control <- lms.yjn.control <- function(trace = TRUE, ...) 
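# All three LMS control functions share this one-line body. For the
# Box-Cox-normal family defined just above, qlms.bcn() inverts
#   z = ((y/mu)^lambda - 1) / (lambda * sigma),
# so the fitted median equals mu; an illustrative check (values assumed):
#   qlms.bcn(0.5, lambda = 0.5, mu = 10, sigma = 0.1)  # 10, as qnorm(0.5) = 0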
list(trace = trace) lms.bcn <- function(percentiles = c(25, 50, 75), zero = c("lambda", "sigma"), llambda = "identitylink", lmu = "identitylink", lsigma = "loge", idf.mu = 4, idf.sigma = 2, ilambda = 1, isigma = NULL, tol0 = 0.001) { llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lsigma <- as.list(substitute(lsigma)) esigma <- link2list(lsigma) lsigma <- attr(esigma, "function.name") if (!is.Numeric(tol0, positive = TRUE, length.arg = 1)) stop("bad input for argument 'tol0'") if (!is.Numeric(ilambda)) stop("bad input for argument 'ilambda'") if (length(isigma) && !is.Numeric(isigma, positive = TRUE)) stop("bad input for argument 'isigma'") new("vglmff", blurb = c("LMS ", "quantile", " regression (Box-Cox transformation to normality)\n", "Links: ", namesof("lambda", link = llambda, earg = elambda), ", ", namesof("mu", link = lmu, earg = emu), ", ", namesof("sigma", link = lsigma, earg = esigma)), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero))), infos = eval(substitute(function(...) { list(M1 = 3, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("lambda", "mu", "sigma"), llambda = .llambda , lmu = .lmu , lsigma = .lsigma , percentiles = .percentiles , # For the original fit only true.mu = FALSE, # quantiles zero = .zero ) }, list( .zero = zero, .percentiles = percentiles, .llambda = llambda, .lmu = lmu, .lsigma = lsigma ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("lambda", .llambda, earg = .elambda, short= TRUE), namesof("mu", .lmu, earg = .emu, short= TRUE), namesof("sigma", .lsigma, earg = .esigma, short= TRUE)) extra$percentiles <- .percentiles if (!length(etastart)) { Fit5 <- vsmooth.spline(x = x[, min(ncol(x), 2)], y = y, w = w, df = .idf.mu ) fv.init <- c(predict(Fit5, x = x[, min(ncol(x), 2)])$y) lambda.init <- if (is.Numeric( .ilambda )) .ilambda else 1.0 sigma.init <- if (is.null(.isigma)) { myratio <- ((y/fv.init)^lambda.init - 1) / lambda.init if (is.Numeric( .idf.sigma )) { fit600 <- vsmooth.spline(x = x[, min(ncol(x), 2)], y = myratio^2, w = w, df = .idf.sigma) sqrt(c(abs(predict(fit600, x = x[, min(ncol(x), 2)])$y))) } else { sqrt(var(myratio)) } } else { .isigma } etastart <- cbind(theta2eta(lambda.init, .llambda , earg = .elambda ), theta2eta(fv.init, .lmu , earg = .emu ), theta2eta(sigma.init, .lsigma , earg = .esigma )) } }), list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma, .idf.mu = idf.mu, .idf.sigma = idf.sigma, .ilambda = ilambda, .isigma = isigma, .percentiles = percentiles ))), linkinv = eval(substitute(function(eta, extra = NULL) { pcent <- extra$percentiles eta[, 1] <- eta2theta(eta[, 1], .llambda , earg = .elambda ) eta[, 2] <- eta2theta(eta[, 2], .lmu , earg = .emu ) eta[, 3] <- eta2theta(eta[, 3], .lsigma , earg = .esigma ) qtplot.lms.bcn(percentiles = pcent, eta = eta) }, list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma ))), last = eval(substitute(expression({ misc$links <- c(lambda = .llambda , mu = .lmu , sigma = .lsigma ) misc$earg <- list(lambda = .elambda , mu = .emu , sigma = .esigma ) misc$tol0 <- .tol0 misc$percentiles <- .percentiles 
# These are argument values if (control$cdf) { post$cdf <- cdf.lms.bcn(y, eta0 = matrix(c(lambda, mymu, sigma), ncol = 3, dimnames = list(dimnames(x)[[1]], NULL))) } }), list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma, .percentiles = percentiles, .tol0 = tol0 ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { lambda <- eta2theta(eta[, 1], .llambda , earg = .elambda ) muvec <- eta2theta(eta[, 2], .lmu , earg = .emu ) sigma <- eta2theta(eta[, 3], .lsigma , earg = .esigma ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- dlms.bcn(x = y, lambda = lambda, mu = mu, sigma = sigma, tol0 = .tol0 , log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma, .tol0 = tol0 ))), vfamily = c("lms.bcn", "lmscreg"), validparams = eval(substitute(function(eta, y, extra = NULL) { lambda <- eta2theta(eta[, 1], .llambda , earg = .elambda ) mymu <- eta2theta(eta[, 2], .lmu , earg = .emu ) sigma <- eta2theta(eta[, 3], .lsigma , earg = .esigma ) okay1 <- all(is.finite(mymu )) && all(is.finite(sigma )) && all(0 < sigma) && all(is.finite(lambda)) okay1 }, list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma, .tol0 = tol0 ))), deriv = eval(substitute(expression({ lambda <- eta2theta(eta[, 1], .llambda , earg = .elambda ) mymu <- eta2theta(eta[, 2], .lmu , earg = .emu ) sigma <- eta2theta(eta[, 3], .lsigma , earg = .esigma ) zedd <- ((y / mymu)^lambda - 1) / (lambda * sigma) z2m1 <- zedd * zedd - 1 dl.dlambda <- zedd * (zedd - log(y/mymu) / sigma) / lambda - z2m1 * log(y/mymu) dl.dmu <- zedd / (mymu * sigma) + z2m1 * lambda / mymu dl.dsigma <- z2m1 / sigma dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) dmu.deta <- dtheta.deta(mymu, .lmu , earg = .emu ) dsigma.deta <- dtheta.deta(sigma, .lsigma , earg = .esigma ) c(w) * cbind(dl.dlambda * dlambda.deta, dl.dmu * dmu.deta, dl.dsigma * dsigma.deta) }), list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma ))), weight = eval(substitute(expression({ wz <- matrix(NA_real_, n, 6) wz[,iam(1, 1, M)] <- (7 * sigma^2 / 4) * dlambda.deta^2 wz[,iam(2, 2, M)] <- (1 + 2*(lambda*sigma)^2)/(mymu*sigma)^2 * dmu.deta^2 wz[,iam(3, 3, M)] <- (2 / sigma^2) * dsigma.deta^2 wz[,iam(1, 2, M)] <- (-1 / (2 * mymu)) * dlambda.deta * dmu.deta wz[,iam(1, 3, M)] <- (lambda * sigma) * dlambda.deta * dsigma.deta wz[,iam(2, 3, M)] <- (2*lambda/(mymu * sigma)) * dmu.deta * dsigma.deta c(w) * wz }), list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma )))) } # End of lms.bcn lms.bcg <- function(percentiles = c(25, 50, 75), zero = c("lambda", "sigma"), llambda = "identitylink", lmu = "identitylink", lsigma = "loge", idf.mu = 4, idf.sigma = 2, ilambda = 1, isigma = NULL) { llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lsigma <- as.list(substitute(lsigma)) esigma <- link2list(lsigma) lsigma <- attr(esigma, "function.name") if (!is.Numeric(ilambda)) stop("bad input for argument 'ilambda'") if (length(isigma) && !is.Numeric(isigma, positive = TRUE)) stop("bad input for argument 'isigma'") new("vglmff", blurb = c("LMS 
Quantile Regression ", "(Box-Cox transformation to a Gamma distribution)\n", "Links: ", namesof("lambda", link = llambda, earg = elambda), ", ", namesof("mu", link = lmu, earg = emu), ", ", namesof("sigma", link = lsigma, earg = esigma)), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list(.zero = zero))), infos = eval(substitute(function(...) { list(M1 = 3, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("lambda", "mu", "sigma"), llambda = .llambda , lmu = .lmu , lsigma = .lsigma , percentiles = .percentiles , # For the original fit only true.mu = FALSE, # quantiles zero = .zero ) }, list( .zero = zero, .percentiles = percentiles, .llambda = llambda, .lmu = lmu, .lsigma = lsigma ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c( namesof("lambda", .llambda, earg = .elambda, short = TRUE), namesof("mu", .lmu, earg = .emu, short = TRUE), namesof("sigma", .lsigma, earg = .esigma, short = TRUE)) extra$percentiles <- .percentiles if (!length(etastart)) { Fit5 <- vsmooth.spline(x = x[, min(ncol(x), 2)], y = y, w = w, df = .idf.mu ) fv.init <- c(predict(Fit5, x = x[, min(ncol(x), 2)])$y) lambda.init <- if (is.Numeric( .ilambda )) .ilambda else 1.0 sigma.init <- if (is.null( .isigma )) { myratio <- ((y/fv.init)^lambda.init-1) / lambda.init if (is.numeric( .idf.sigma ) && is.finite( .idf.sigma )) { fit600 <- vsmooth.spline(x = x[, min(ncol(x), 2)], y = (myratio)^2, w = w, df = .idf.sigma ) sqrt(c(abs(predict(fit600, x = x[, min(ncol(x), 2)])$y))) } else { sqrt(var(myratio)) } } else .isigma etastart <- cbind(theta2eta(lambda.init, .llambda , earg = .elambda ), theta2eta(fv.init, .lmu , earg = .emu ), theta2eta(sigma.init, .lsigma , earg = .esigma )) } }), list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma, .idf.mu = idf.mu, .idf.sigma = idf.sigma, .ilambda = ilambda, .isigma = isigma, .percentiles = percentiles ))), linkinv = eval(substitute(function(eta, extra = NULL) { pcent <- extra$percentiles eta[, 1] <- eta2theta(eta[, 1], .llambda , earg = .elambda ) eta[, 2] <- eta2theta(eta[, 2], .lmu , earg = .emu ) eta[, 3] <- eta2theta(eta[, 3], .lsigma , earg = .esigma ) qtplot.lms.bcg(percentiles = pcent, eta = eta) }, list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma ))), last = eval(substitute(expression({ misc$link <- c(lambda = .llambda , mu = .lmu , sigma = .lsigma ) misc$earg <- list(lambda = .elambda , mu = .emu , sigma = .esigma ) misc$percentiles <- .percentiles # These are argument values if (control$cdf) { post$cdf <- cdf.lms.bcg(y, eta0 = matrix(c(lambda, mymu, sigma), ncol = 3, dimnames = list(dimnames(x)[[1]], NULL))) } }), list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma, .percentiles = percentiles ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { lambda <- eta2theta(eta[, 1], .llambda , earg = .elambda ) mu <- eta2theta(eta[, 2], .lmu , earg = .emu ) sigma <- eta2theta(eta[, 3], .lsigma , earg = .esigma ) Gee <- (y / mu)^lambda theta <- 1 / (sigma * lambda)^2 if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (log(abs(lambda)) + theta * (log(theta) + log(Gee)-Gee) - lgamma(theta) - log(y)) 
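        # Box-Cox-gamma log-density: with Gee = (y/mu)^lambda and shape
        # theta = 1/(sigma*lambda)^2, theta*Gee is gamma(shape = theta)
        # distributed, whence the lgamma(theta) normalising constant.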
        if (summation) {
          sum(ll.elts)
        } else {
          ll.elts
        }
      }
    }, list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma,
             .elambda = elambda, .emu = emu, .esigma = esigma ))),
    vfamily = c("lms.bcg", "lmscreg"),
    validparams = eval(substitute(function(eta, y, extra = NULL) {
      lambda <- eta2theta(eta[, 1], .llambda , earg = .elambda )
      mymu   <- eta2theta(eta[, 2], .lmu     , earg = .emu )
      sigma  <- eta2theta(eta[, 3], .lsigma  , earg = .esigma )
      okay1 <- all(is.finite(mymu)) &&
               all(is.finite(sigma)) && all(0 < sigma) &&
               all(is.finite(lambda))
      okay1
    }, list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma,
             .elambda = elambda, .emu = emu, .esigma = esigma ))),
    deriv = eval(substitute(expression({
      lambda <- eta2theta(eta[, 1], .llambda, earg = .elambda )
      mymu   <- eta2theta(eta[, 2], .lmu,     earg = .emu )
      sigma  <- eta2theta(eta[, 3], .lsigma,  earg = .esigma )
      Gee <- (y / mymu)^lambda
      theta <- 1 / (sigma * lambda)^2
      dd <- digamma(theta)
      dl.dlambda <- (1 + 2 * theta * (dd + Gee - 1 - log(theta) -
                    0.5 * (Gee + 1) * log(Gee))) / lambda
      dl.dmu <- lambda * theta * (Gee - 1) / mymu
      dl.dsigma <- 2 * theta * (dd + Gee - log(theta * Gee) - 1) / sigma
      dlambda.deta <- dtheta.deta(lambda, link = .llambda , earg = .elambda )
      dmu.deta     <- dtheta.deta(mymu,   link = .lmu     , earg = .emu )
      dsigma.deta  <- dtheta.deta(sigma,  link = .lsigma  , earg = .esigma )
      cbind(dl.dlambda * dlambda.deta,
            dl.dmu     * dmu.deta,
            dl.dsigma  * dsigma.deta) * w
    }), list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma,
              .elambda = elambda, .emu = emu, .esigma = esigma ))),
    weight = eval(substitute(expression({
      tritheta <- trigamma(theta)
      wz <- matrix(0, n, 6)
      if (TRUE) {
        part2 <- dd + 2/theta - 2*log(theta)
        wz[, iam(1, 1, M)] <- ((1 + theta*(tritheta*(1+4*theta) -
            4*(1+1/theta) - log(theta)*(2/theta - log(theta)) +
            dd*part2)) / lambda^2) * dlambda.deta^2
      } else {
        temp <- mean( Gee*(log(Gee))^2 )
        wz[, iam(1, 1, M)] <- ((4 * theta * (theta * tritheta-1) - 1 +
            theta*temp) / lambda^2) * dlambda.deta^2
      }
      wz[, iam(2, 2, M)] <- dmu.deta^2 / (mymu * sigma)^2
      wz[, iam(3, 3, M)] <- (4 * theta * (theta * tritheta - 1) / sigma^2) *
                            dsigma.deta^2
      wz[, iam(1, 2, M)] <- (-theta * (dd + 1 / theta - log(theta)) / mymu) *
                            dlambda.deta * dmu.deta
      wz[, iam(1, 3, M)] <- 2 * theta^1.5 * (2 * theta * tritheta - 2 -
                            1 / theta) * dlambda.deta * dsigma.deta
      c(w) * wz
    }), list( .llambda = llambda, .lmu = lmu, .lsigma = lsigma,
              .elambda = elambda, .emu = emu, .esigma = esigma ))))
}


dy.dpsi.yeojohnson <- function(psi, lambda) {
  L <- max(length(psi), length(lambda))
  if (length(psi)    != L) psi    <- rep_len(psi,    L)
  if (length(lambda) != L) lambda <- rep_len(lambda, L)
  ifelse(psi > 0, (1 + psi * lambda)^(1/lambda - 1),
                  (1 - (2-lambda) * psi)^((lambda - 1) / (2-lambda)))
}


dyj.dy.yeojohnson <- function(y, lambda) {
  L <- max(length(y), length(lambda))
  if (length(y)      != L) y      <- rep_len(y,      L)
  if (length(lambda) != L) lambda <- rep_len(lambda, L)
  ifelse(y > 0, (1 + y)^(lambda - 1), (1 - y)^(1 - lambda))
}


yeo.johnson <- function(y, lambda, derivative = 0,
                        epsilon = sqrt(.Machine$double.eps),
                        inverse = FALSE) {
  if (!is.Numeric(derivative, length.arg = 1,
                  integer.valued = TRUE) || derivative < 0)
    stop("argument 'derivative' must be a non-negative integer")
  ans <- y
  if (!is.Numeric(epsilon, length.arg = 1, positive = TRUE))
    stop("argument 'epsilon' must be a single positive number")
  L <- max(length(lambda), length(y))
  if (length(y)      != L) y      <- rep_len(y,      L)
  if (length(lambda) != L) lambda <- rep_len(lambda, L)
  if (inverse) {
    if (derivative != 0)
      stop("argument 'derivative' must be 0 when inverse = TRUE")
    if (any(index <- y >= 0
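    # Inverse transform: here 'y' holds psi. For psi >= 0 the forward map
    # psi = ((y+1)^lambda - 1)/lambda is inverted; for psi < 0 the
    # (2 - lambda) branch applies, with expm1()/log1p() fallbacks when
    # |lambda| (or |2 - lambda|) drops below epsilon.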
& abs(lambda ) > epsilon)) ans[index] <- (y[index]*lambda[index] + 1)^(1/lambda[index]) - 1 if (any(index <- y >= 0 & abs(lambda ) <= epsilon)) ans[index] <- expm1(y[index]) if (any(index <- y < 0 & abs(lambda-2) > epsilon)) ans[index] <- 1 - (-(2-lambda[index]) * y[index]+1)^(1/(2-lambda[index])) if (any(index <- y < 0 & abs(lambda-2) <= epsilon)) ans[index] <- -expm1(-y[index]) return(ans) } if (derivative == 0) { if (any(index <- y >= 0 & abs(lambda ) > epsilon)) ans[index] <- ((y[index]+1)^(lambda[index]) - 1) / lambda[index] if (any(index <- y >= 0 & abs(lambda ) <= epsilon)) ans[index] <- log1p(y[index]) if (any(index <- y < 0 & abs(lambda-2) > epsilon)) ans[index] <- -((-y[index]+1)^(2-lambda[index]) - 1)/(2 - lambda[index]) if (any(index <- y < 0 & abs(lambda-2) <= epsilon)) ans[index] <- -log1p(-y[index]) } else { psi <- Recall(y = y, lambda = lambda, derivative = derivative - 1, epsilon = epsilon, inverse = inverse) if (any(index <- y >= 0 & abs(lambda ) > epsilon)) ans[index] <- ( (y[index]+1)^(lambda[index]) * (log1p(y[index]))^(derivative) - derivative * psi[index] ) / lambda[index] if (any(index <- y >= 0 & abs(lambda ) <= epsilon)) ans[index] <- (log1p(y[index]))^(derivative + 1) / (derivative + 1) if (any(index <- y < 0 & abs(lambda-2) > epsilon)) ans[index] <- -( (-y[index]+1)^(2-lambda[index]) * (-log1p(-y[index]))^(derivative) - derivative * psi[index] ) / (2-lambda[index]) if (any(index <- y < 0 & abs(lambda-2) <= epsilon)) ans[index] <- (-log1p(-y[index]))^(derivative + 1) / (derivative + 1) } ans } dpsi.dlambda.yjn <- function(psi, lambda, mymu, sigma, derivative = 0, smallno = 1.0e-8) { if (!is.Numeric(derivative, length.arg = 1, integer.valued = TRUE) || derivative < 0) stop("argument 'derivative' must be a non-negative integer") if (!is.Numeric(smallno, length.arg = 1, positive = TRUE)) stop("argument 'smallno' must be a single positive number") L <- max(length(psi), length(lambda), length(mymu), length(sigma)) if (length(psi) != L) psi <- rep_len(psi, L) if (length(lambda) != L) lambda <- rep_len(lambda, L) if (length(mymu) != L) mymu <- rep_len(mymu, L) if (length(sigma) != L) sigma <- rep_len(sigma, L) answer <- matrix(NA_real_, L, derivative+1) CC <- psi >= 0 BB <- ifelse(CC, lambda, -2+lambda) AA <- psi * BB temp8 <- if (derivative > 0) { answer[,1:derivative] <- Recall(psi = psi, lambda = lambda, mymu = mymu, sigma = sigma, derivative = derivative-1, smallno = smallno) answer[,derivative] * derivative } else { 0 } answer[, 1+derivative] <- ((AA+1) * (log1p(AA)/BB)^derivative - temp8) / BB pos <- (CC & abs(lambda) <= smallno) | (!CC & abs(lambda-2) <= smallno) if (any(pos)) answer[pos,1+derivative] = (answer[pos, 1]^(1+derivative))/(derivative+1) answer } gh.weight.yjn.11 <- function(z, lambda, mymu, sigma, derivmat = NULL) { if (length(derivmat)) { ((derivmat[, 2]/sigma)^2 + sqrt(2) * z * derivmat[, 3] / sigma) / sqrt(pi) } else { # Long-winded way psi <- mymu + sqrt(2) * sigma * z (1 / sqrt(pi)) * (dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]^2 + (psi - mymu) * dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 2)[, 3]) / sigma^2 } } gh.weight.yjn.12 <- function(z, lambda, mymu, sigma, derivmat = NULL) { if (length(derivmat)) { (-derivmat[, 2]) / (sqrt(pi) * sigma^2) } else { psi <- mymu + sqrt(2) * sigma * z (1 / sqrt(pi)) * (-dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]) / sigma^2 } } gh.weight.yjn.13 <- function(z, lambda, mymu, sigma, derivmat = NULL) { if (length(derivmat)) { sqrt(8 / pi) * (-derivmat[, 2]) * z 
/ sigma^2 } else { psi <- mymu + sqrt(2) * sigma * z (1 / sqrt(pi)) * (-2 * dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]) * (psi - mymu) / sigma^3 } } glag.weight.yjn.11 <- function(z, lambda, mymu, sigma, derivmat = NULL) { if (length(derivmat)) { derivmat[, 4] * (derivmat[, 2]^2 + sqrt(2) * sigma * z * derivmat[, 3]) } else { psi <- mymu + sqrt(2) * sigma * z discontinuity <- -mymu / (sqrt(2) * sigma) (1 / (2 * sqrt((z-discontinuity^2)^2 + discontinuity^2))) * (1 / sqrt(pi)) * (dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]^2 + (psi - mymu) * dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 2)[, 3]) / sigma^2 } } glag.weight.yjn.12 <- function(z, lambda, mymu, sigma, derivmat = NULL) { discontinuity <- -mymu / (sqrt(2) * sigma) if (length(derivmat)) { derivmat[, 4] * (-derivmat[, 2]) } else { psi <- mymu + sqrt(2) * sigma * z (1 / (2 * sqrt((z-discontinuity^2)^2 + discontinuity^2))) * (1 / sqrt(pi)) * (- dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]) / sigma^2 } } glag.weight.yjn.13 <- function(z, lambda, mymu, sigma, derivmat = NULL) { if (length(derivmat)) { derivmat[, 4] * (-derivmat[, 2]) * sqrt(8) * z } else { psi <- mymu + sqrt(2) * sigma * z discontinuity <- -mymu / (sqrt(2) * sigma) (1 / (2 * sqrt((z-discontinuity^2)^2 + discontinuity^2))) * (1 / sqrt(pi)) * (-2 * dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]) * (psi - mymu) / sigma^3 } } gleg.weight.yjn.11 <- function(z, lambda, mymu, sigma, derivmat = NULL) { if (length(derivmat)) { derivmat[, 4] * (derivmat[, 2]^2 + sqrt(2) * sigma*z* derivmat[, 3]) } else { psi <- mymu + sqrt(2) * sigma * z (exp(-z^2) / sqrt(pi)) * (dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]^2 + (psi - mymu) * dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 2)[, 3]) / sigma^2 } } gleg.weight.yjn.12 <- function(z, lambda, mymu, sigma, derivmat = NULL) { if (length(derivmat)) { derivmat[, 4] * (- derivmat[, 2]) } else { psi <- mymu + sqrt(2) * sigma * z (exp(-z^2) / sqrt(pi)) * (- dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]) / sigma^2 } } gleg.weight.yjn.13 <- function(z, lambda, mymu, sigma, derivmat = NULL) { if (length(derivmat)) { derivmat[, 4] * (-derivmat[, 2]) * sqrt(8) * z } else { psi <- mymu + sqrt(2) * sigma * z (exp(-z^2) / sqrt(pi)) * (-2 * dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 1)[, 2]) * (psi - mymu) / sigma^3 } } lms.yjn2.control <- function(save.weights = TRUE, ...) 
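# lms.yjn2() estimates the expected information by simulation (see the
# 'nsimEIM' argument and its weight slot), so the working weights are
# saved by default.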
{ list(save.weights = save.weights) } lms.yjn2 <- function(percentiles = c(25, 50, 75), zero = c("lambda", "sigma"), llambda = "identitylink", lmu = "identitylink", lsigma = "loge", idf.mu = 4, idf.sigma = 2, ilambda = 1.0, isigma = NULL, yoffset = NULL, nsimEIM = 250) { llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lsigma <- as.list(substitute(lsigma)) esigma <- link2list(lsigma) lsigma <- attr(esigma, "function.name") if (!is.Numeric(ilambda)) stop("bad input for argument 'ilambda'") if (length(isigma) && !is.Numeric(isigma, positive = TRUE)) stop("bad input for argument 'isigma'") new("vglmff", blurb = c("LMS Quantile Regression (Yeo-Johnson transformation", " to normality)\n", "Links: ", namesof("lambda", link = llambda, earg = elambda), ", ", namesof("mu", link = lmu, earg = emu ), ", ", namesof("sigma", link = lsigma, earg = esigma )), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 3, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("lambda", "mu", "sigma"), llambda = .llambda , lmu = .lmu , lsigma = .lsigma , percentiles = .percentiles , # For the original fit only true.mu = FALSE, # quantiles zero = .zero ) }, list( .zero = zero, .percentiles = percentiles, .llambda = llambda, .lmu = lmu, .lsigma = lsigma ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) extra$percentiles <- .percentiles predictors.names <- c(namesof("lambda", .llambda, earg = .elambda, short= TRUE), namesof("mu", .lmu, earg = .emu, short= TRUE), namesof("sigma", .lsigma, earg = .esigma, short= TRUE)) y.save <- y yoff <- if (is.Numeric( .yoffset)) .yoffset else -median(y) extra$yoffset <- yoff y <- y + yoff if (!length(etastart)) { lambda.init <- if (is.Numeric( .ilambda )) .ilambda else 1. 
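      # Initial values: smooth the Yeo-Johnson-transformed response against
      # x (df = .idf.mu) for mu; sigma comes from a smooth of the squared
      # residuals, or their overall (weighted) variance as a fallback.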
y.tx <- yeo.johnson(y, lambda.init) fv.init = if (smoothok <- (length(unique(sort(x[, min(ncol(x), 2)]))) > 7)) { fit700 <- vsmooth.spline(x = x[, min(ncol(x), 2)], y = y.tx, w = w, df = .idf.mu ) c(predict(fit700, x = x[, min(ncol(x), 2)])$y) } else { rep_len(weighted.mean(y, w), n) } sigma.init <- if (!is.Numeric(.isigma)) { if (is.Numeric( .idf.sigma) && smoothok) { fit710 = vsmooth.spline(x = x[, min(ncol(x), 2)], y = (y.tx - fv.init)^2, w = w, df = .idf.sigma) sqrt(c(abs(predict(fit710, x = x[, min(ncol(x), 2)])$y))) } else { sqrt( sum( w * (y.tx - fv.init)^2 ) / sum(w) ) } } else .isigma etastart <- matrix(0, n, 3) etastart[, 1] <- theta2eta(lambda.init, .llambda, earg = .elambda) etastart[, 2] <- theta2eta(fv.init, .lmu, earg = .emu) etastart[, 3] <- theta2eta(sigma.init, .lsigma, earg = .esigma) } }), list(.llambda = llambda, .lmu = lmu, .lsigma = lsigma, .elambda = elambda, .emu = emu, .esigma = esigma, .ilambda = ilambda, .isigma = isigma, .idf.mu = idf.mu, .idf.sigma = idf.sigma, .yoffset = yoffset, .percentiles = percentiles ))), linkinv = eval(substitute(function(eta, extra = NULL) { pcent <- extra$percentiles eta[, 1] <- eta2theta(eta[, 1], .llambda, earg = .elambda) eta[, 3] <- eta2theta(eta[, 3], .lsigma, earg = .esigma) qtplot.lms.yjn(percentiles = pcent, eta = eta, yoffset = extra$yoff) }, list( .esigma = esigma, .elambda = elambda, .llambda = llambda, .lsigma = lsigma ))), last = eval(substitute(expression({ misc$link <- c(lambda = .llambda, mu = .lmu, sigma = .lsigma) misc$earg <- list(lambda = .elambda, mu = .emu, sigma = .esigma) misc$nsimEIM <- .nsimEIM misc$percentiles <- .percentiles # These are argument values misc[["yoffset"]] <- extra$yoffset y <- y.save # Restore back the value; to be attached to object if (control$cdf) { post$cdf <- cdf.lms.yjn(y + misc$yoffset, eta0=matrix(c(lambda,mymu,sigma), ncol=3, dimnames = list(dimnames(x)[[1]], NULL))) } }), list(.percentiles = percentiles, .elambda = elambda, .emu = emu, .esigma = esigma, .nsimEIM=nsimEIM, .llambda = llambda, .lmu = lmu, .lsigma = lsigma ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { lambda <- eta2theta(eta[, 1], .llambda , earg = .elambda ) mu <- eta2theta(eta[, 2], .lmu , earg = .emu ) sigma <- eta2theta(eta[, 3], .lsigma , earg = .esigma ) psi <- yeo.johnson(y, lambda) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (-log(sigma) - 0.5 * ((psi-mu)/sigma)^2 + (lambda-1) * sign(y) * log1p(abs(y))) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .elambda = elambda, .emu = emu, .esigma = esigma, .llambda = llambda, .lmu = lmu, .lsigma = lsigma ))), vfamily = c("lms.yjn2", "lmscreg"), validparams = eval(substitute(function(eta, y, extra = NULL) { lambda <- eta2theta(eta[, 1], .llambda , earg = .elambda ) mymu <- eta2theta(eta[, 2], .lmu , earg = .emu ) sigma <- eta2theta(eta[, 3], .lsigma , earg = .esigma ) okay1 <- all(is.finite(mymu )) && all(is.finite(sigma )) && all(0 < sigma) && all(is.finite(lambda)) okay1 }, list( .elambda = elambda, .emu = emu, .esigma = esigma, .llambda = llambda, .lmu = lmu, .lsigma = lsigma ))), deriv = eval(substitute(expression({ lambda <- eta2theta(eta[, 1], .llambda , earg = .elambda ) mymu <- eta2theta(eta[, 2], .lmu , earg = .emu ) sigma <- eta2theta(eta[, 3], .lsigma , earg = .esigma ) dlambda.deta <- dtheta.deta(lambda, link = .llambda, earg = .elambda) dmu.deta <- dtheta.deta(mymu, link = .lmu, earg = .emu) dsigma.deta <- dtheta.deta(sigma, 
link = .lsigma, earg = .esigma) psi <- yeo.johnson(y, lambda) d1 <- yeo.johnson(y, lambda, deriv = 1) AA <- (psi - mymu) / sigma dl.dlambda <- -AA * d1 /sigma + sign(y) * log1p(abs(y)) dl.dmu <- AA / sigma dl.dsigma <- (AA^2 -1) / sigma dthetas.detas <- cbind(dlambda.deta, dmu.deta, dsigma.deta) c(w) * cbind(dl.dlambda, dl.dmu, dl.dsigma) * dthetas.detas }), list( .elambda = elambda, .emu = emu, .esigma = esigma, .llambda = llambda, .lmu = lmu, .lsigma = lsigma ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) for (ii in 1:( .nsimEIM )) { psi <- rnorm(n, mymu, sigma) ysim <- yeo.johnson(y = psi, lam = lambda, inverse = TRUE) d1 <- yeo.johnson(ysim, lambda, deriv = 1) AA <- (psi - mymu) / sigma dl.dlambda <- -AA * d1 /sigma + sign(ysim) * log1p(abs(ysim)) dl.dmu <- AA / sigma dl.dsigma <- (AA^2 -1) / sigma rm(ysim) temp3 <- cbind(dl.dlambda, dl.dmu, dl.dsigma) run.varcov <- ((ii-1) * run.varcov + temp3[,ind1$row.index]*temp3[,ind1$col.index]) / ii } if (intercept.only) run.varcov <- matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) wz <- run.varcov * dthetas.detas[,ind1$row] * dthetas.detas[,ind1$col] dimnames(wz) <- list(rownames(wz), NULL) # Remove the colnames c(w) * wz }), list(.lsigma = lsigma, .esigma = esigma, .elambda = elambda, .nsimEIM=nsimEIM, .llambda = llambda)))) } lms.yjn <- function(percentiles = c(25, 50, 75), zero = c("lambda", "sigma"), llambda = "identitylink", lsigma = "loge", idf.mu = 4, idf.sigma = 2, ilambda = 1.0, isigma = NULL, rule = c(10, 5), yoffset = NULL, diagW = FALSE, iters.diagW = 6) { llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") lsigma <- as.list(substitute(lsigma)) esigma <- link2list(lsigma) lsigma <- attr(esigma, "function.name") rule <- rule[1] # Nbr of points (common) for all the quadrature schemes if (rule != 5 && rule != 10) stop("only rule=5 or 10 is supported") new("vglmff", blurb = c("LMS Quantile Regression ", "(Yeo-Johnson transformation to normality)\n", "Links: ", namesof("lambda", link = llambda, earg = elambda), ", ", namesof("mu", link = "identitylink", earg = list()), ", ", namesof("sigma", link = lsigma, earg = esigma)), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list(.zero = zero))), infos = eval(substitute(function(...) 
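  # Unlike lms.yjn2(), lms.yjn() assembles the expected information by
  # numerical quadrature (Gauss-Legendre, Gauss-Hermite and Gauss-Laguerre
  # rules with 5 or 10 points; see the 'rule' argument and the weight slot).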
{ list(M1 = 3, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("lambda", "mu", "sigma"), llambda = .llambda , lmu = "identitylink", lsigma = .lsigma , percentiles = .percentiles , # For the original fit only true.mu = FALSE, # quantiles zero = .zero ) }, list( .zero = zero, .percentiles = percentiles, .llambda = llambda, .lsigma = lsigma ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("lambda", .llambda, earg = .elambda , short = TRUE), "mu", namesof("sigma", .lsigma, earg = .esigma , short = TRUE)) extra$percentiles <- .percentiles y.save <- y yoff <- if (is.Numeric( .yoffset )) .yoffset else -median(y) extra$yoffset <- yoff y <- y + yoff if (!length(etastart)) { lambda.init <- if (is.Numeric( .ilambda )) .ilambda else 1.0 y.tx <- yeo.johnson(y, lambda.init) if (smoothok <- (length(unique(sort(x[, min(ncol(x), 2)]))) > 7)) { fit700 <- vsmooth.spline(x = x[, min(ncol(x), 2)], y = y.tx, w = w, df = .idf.mu ) fv.init <- c(predict(fit700, x = x[, min(ncol(x), 2)])$y) } else { fv.init <- rep_len(weighted.mean(y, w), n) } sigma.init <- if (!is.Numeric( .isigma )) { if (is.Numeric( .idf.sigma) && smoothok) { fit710 = vsmooth.spline(x = x[, min(ncol(x), 2)], y = (y.tx - fv.init)^2, w = w, df = .idf.sigma) sqrt(c(abs(predict(fit710, x = x[, min(ncol(x), 2)])$y))) } else { sqrt( sum( w * (y.tx - fv.init)^2 ) / sum(w) ) } } else .isigma etastart <- cbind(theta2eta(lambda.init, .llambda , earg = .elambda ), fv.init, theta2eta(sigma.init, .lsigma , earg = .esigma )) } }), list( .llambda = llambda, .lsigma = lsigma, .elambda = elambda, .esigma = esigma, .ilambda = ilambda, .isigma = isigma, .idf.mu = idf.mu, .idf.sigma = idf.sigma, .yoffset = yoffset, .percentiles = percentiles ))), linkinv = eval(substitute(function(eta, extra = NULL) { pcent <- extra$percentiles eta[, 1] <- eta2theta(eta[, 1], .llambda , earg = .elambda ) eta[, 3] <- eta2theta(eta[, 3], .lsigma , earg = .esigma ) qtplot.lms.yjn(percentiles = pcent, eta = eta, yoffset = extra$yoff) }, list(.percentiles = percentiles, .esigma = esigma, .elambda = elambda, .llambda = llambda, .lsigma = lsigma))), last = eval(substitute(expression({ misc$link <- c(lambda = .llambda , mu = "identitylink", sigma = .lsigma ) misc$earg <- list(lambda = .elambda , mu = list(theta = NULL), sigma = .esigma ) misc$percentiles <- .percentiles # These are argument values misc$true.mu <- FALSE # $fitted is not a true mu misc[["yoffset"]] <- extra$yoff y <- y.save # Restore back the value; to be attached to object if (control$cdf) { post$cdf = cdf.lms.yjn(y + misc$yoffset, eta0 = matrix(c(lambda,mymu,sigma), ncol = 3, dimnames = list(dimnames(x)[[1]], NULL))) } }), list( .percentiles = percentiles, .elambda = elambda, .esigma = esigma, .llambda = llambda, .lsigma = lsigma))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { lambda <- eta2theta(eta[, 1], .llambda , earg = .elambda ) mu <- eta[, 2] sigma <- eta2theta(eta[, 3], .lsigma , earg = .esigma ) psi <- yeo.johnson(y, lambda) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (-log(sigma) - 0.5 * ((psi-mu)/sigma)^2 + (lambda-1) * sign(y) * log1p(abs(y))) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .esigma = esigma, .elambda = elambda, .lsigma = lsigma, .llambda = llambda))), vfamily = c("lms.yjn", "lmscreg"), validparams = eval(substitute(function(eta, y, extra = NULL) { lambda <- 
eta2theta(eta[, 1], .llambda , earg = .elambda ) mymu <- eta[, 2] sigma <- eta2theta(eta[, 3], .lsigma , earg = .esigma ) okay1 <- all(is.finite(mymu )) && all(is.finite(sigma )) && all(0 < sigma) && all(is.finite(lambda)) okay1 }, list( .esigma = esigma, .elambda = elambda, .lsigma = lsigma, .llambda = llambda))), deriv = eval(substitute(expression({ lambda <- eta2theta(eta[, 1], .llambda , earg = .elambda ) mymu <- eta[, 2] sigma <- eta2theta(eta[, 3], .lsigma , earg = .esigma ) psi <- yeo.johnson(y, lambda) d1 <- yeo.johnson(y, lambda, deriv = 1) AA <- (psi - mymu) / sigma dl.dlambda <- -AA * d1 /sigma + sign(y) * log1p(abs(y)) dl.dmu <- AA / sigma dl.dsigma <- (AA^2 -1) / sigma dlambda.deta <- dtheta.deta(lambda, link = .llambda, earg = .elambda ) dsigma.deta <- dtheta.deta(sigma, link = .lsigma, earg = .esigma ) cbind(dl.dlambda * dlambda.deta, dl.dmu, dl.dsigma * dsigma.deta) * c(w) }), list( .esigma = esigma, .elambda = elambda, .lsigma = lsigma, .llambda = llambda ))), weight = eval(substitute(expression({ wz <- matrix(0, n, 6) wz[,iam(2, 2, M)] <- 1 / sigma^2 wz[,iam(3, 3, M)] <- 2 * wz[,iam(2, 2, M)] # 2 / sigma^2 if (.rule == 10) { glag.abs = c(0.13779347054,0.729454549503, 1.80834290174,3.40143369785, 5.55249614006,8.33015274676, 11.8437858379,16.2792578314, 21.996585812, 29.9206970123) glag.wts = c(0.308441115765, 0.401119929155, 0.218068287612, 0.0620874560987, 0.00950151697517, 0.000753008388588, 2.82592334963e-5, 4.24931398502e-7, 1.83956482398e-9, 9.91182721958e-13) } else { glag.abs = c(0.2635603197180449, 1.4134030591060496, 3.5964257710396850, 7.0858100058570503, 12.6408008442729685) glag.wts = c(5.217556105826727e-01, 3.986668110832433e-01, 7.594244968176882e-02, 3.611758679927785e-03, 2.336997238583738e-05) } if (.rule == 10) { sgh.abs = c(0.03873852801690856, 0.19823332465268367, 0.46520116404433082, 0.81686197962535023, 1.23454146277833154, 1.70679833036403172, 2.22994030591819214, 2.80910399394755972, 3.46387269067033854, 4.25536209637269280) sgh.wts = c(9.855210713854302e-02, 2.086780884700499e-01, 2.520517066468666e-01, 1.986843323208932e-01,9.719839905023238e-02, 2.702440190640464e-02, 3.804646170194185e-03, 2.288859354675587e-04, 4.345336765471935e-06, 1.247734096219375e-08) } else { sgh.abs = c(0.1002421519682381, 0.4828139660462573, 1.0609498215257607, 1.7797294185202606, 2.6697603560875995) sgh.wts = c(0.2484061520284881475,0.3923310666523834311, 0.2114181930760276606, 0.0332466603513424663, 0.0008248533445158026) } if (.rule == 10) { gleg.abs = c(-0.973906528517, -0.865063366689, -0.679409568299, -0.433395394129, -0.148874338982) gleg.abs = c(gleg.abs, rev(-gleg.abs)) gleg.wts = c(0.0666713443087, 0.149451349151, 0.219086362516, 0.26926671931, 0.295524224715) gleg.wts = c(gleg.wts, rev(gleg.wts)) } else { gleg.abs = c(-0.9061798459386643,-0.5384693101056820, 0, 0.5384693101056828, 0.9061798459386635) gleg.wts = c(0.2369268850561853,0.4786286704993680, 0.5688888888888889, 0.4786286704993661, 0.2369268850561916) } discontinuity = -mymu/(sqrt(2)*sigma) LL <- pmin(discontinuity, 0) UU <- pmax(discontinuity, 0) if (FALSE) { AA <- (UU-LL)/2 for (kk in seq_along(gleg.wts)) { temp1 <- AA * gleg.wts[kk] abscissae <- (UU+LL)/2 + AA * gleg.abs[kk] psi <- mymu + sqrt(2) * sigma * abscissae temp9 <- dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 2) temp9 <- cbind(temp9, exp(-abscissae^2) / (sqrt(pi) * sigma^2)) wz[,iam(1, 1, M)] <- wz[,iam(1, 1, M)] + temp1 * gleg.weight.yjn.11(abscissae, lambda, mymu, sigma, temp9) wz[,iam(1, 2, M)] <- wz[,iam(1, 2, M)] + 
temp1 * gleg.weight.yjn.12(abscissae, lambda, mymu, sigma, temp9) wz[,iam(1, 3, M)] <- wz[,iam(1, 3, M)] + temp1 * gleg.weight.yjn.13(abscissae, lambda, mymu, sigma, temp9) } } else { temp9 <- .Fortran("yjngintf", as.double(LL), as.double(UU), as.double(gleg.abs), as.double(gleg.wts), as.integer(n), as.integer(length(gleg.abs)), as.double(lambda), as.double(mymu), as.double(sigma), answer = double(3*n), eps=as.double(1.0e-5))$ans dim(temp9) <- c(3,n) wz[,iam(1, 1, M)] <- temp9[1,] wz[,iam(1, 2, M)] <- temp9[2,] wz[,iam(1, 3, M)] <- temp9[3,] } for (kk in seq_along(sgh.wts)) { abscissae <- sign(-discontinuity) * sgh.abs[kk] psi <- mymu + sqrt(2) * sigma * abscissae # abscissae = z temp9 <- dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 2) wz[,iam(1, 1, M)] <- wz[,iam(1, 1, M)] + sgh.wts[kk] * gh.weight.yjn.11(abscissae, lambda, mymu, sigma, temp9) wz[,iam(1, 2, M)] <- wz[,iam(1, 2, M)] + sgh.wts[kk] * gh.weight.yjn.12(abscissae, lambda, mymu, sigma, temp9) wz[,iam(1, 3, M)] <- wz[,iam(1, 3, M)] + sgh.wts[kk] * gh.weight.yjn.13(abscissae, lambda, mymu, sigma, temp9) } temp1 <- exp(-discontinuity^2) for (kk in seq_along(glag.wts)) { abscissae <- sign(discontinuity) * sqrt(glag.abs[kk]) + discontinuity^2 psi <- mymu + sqrt(2) * sigma * abscissae temp9 <- dpsi.dlambda.yjn(psi, lambda, mymu, sigma, derivative = 2) temp9 <- cbind(temp9, 1 / (2 * sqrt((abscissae-discontinuity^2)^2 + discontinuity^2) * sqrt(pi) * sigma^2)) temp7 <- temp1 * glag.wts[kk] wz[,iam(1, 1, M)] <- wz[,iam(1, 1, M)] + temp7 * glag.weight.yjn.11(abscissae, lambda, mymu, sigma, temp9) wz[,iam(1, 2, M)] <- wz[,iam(1, 2, M)] + temp7 * glag.weight.yjn.12(abscissae, lambda, mymu, sigma, temp9) wz[,iam(1, 3, M)] <- wz[,iam(1, 3, M)] + temp7 * glag.weight.yjn.13(abscissae, lambda, mymu, sigma, temp9) } wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] * dlambda.deta^2 wz[, iam(1, 2, M)] <- wz[, iam(1, 2, M)] * dlambda.deta wz[, iam(1, 3, M)] <- wz[, iam(1, 3, M)] * dsigma.deta * dlambda.deta if ( .diagW && iter <= .iters.diagW) { wz[,iam(1, 2, M)] <- wz[, iam(1, 3, M)] <- 0 } wz[, iam(2, 3, M)] <- wz[, iam(2, 3, M)] * dsigma.deta wz[, iam(3, 3, M)] <- wz[, iam(3, 3, M)] * dsigma.deta^2 c(w) * wz }), list(.lsigma = lsigma, .esigma = esigma, .elambda = elambda, .rule = rule, .diagW = diagW, .iters.diagW = iters.diagW, .llambda = llambda)))) } lmscreg.control <- function(cdf = TRUE, at.arg = NULL, x0 = NULL, ...) 
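# A minimal usage sketch for lms.yjn() above (not run; assumes the
# bmi.nz data frame supplied with VGAM, with columns 'age' and 'BMI'):
#   fit <- vgam(BMI ~ s(age, df = 4), lms.yjn(zero = c("lambda", "sigma")),
#               data = bmi.nz, trace = TRUE)
#   head(fitted(fit))               # fitted 25, 50 and 75 percentile curves
#   qtplot(fit, percentiles = c(5, 50, 95))   # plot estimated quantiles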
{ if (!is.logical(cdf)) { warning("'cdf' is not logical; using TRUE instead") cdf <- TRUE } list(cdf = cdf, at.arg = at.arg, x0 = x0) } Wr1 <- function(r, w) ifelse(r <= 0, 1, w) Wr2 <- function(r, w) (r <= 0) * 1 + (r > 0) * w amlnormal.deviance <- function(mu, y, w, residuals = FALSE, eta, extra = NULL) { M <- length(extra$w.aml) if (M > 1) y <- matrix(y, extra$n, extra$M) devi <- cbind((y - mu)^2) if (residuals) { stop("not sure here") wz <- VGAM.weights.function(w = w, M = extra$M, n = extra$n) return((y - mu) * sqrt(wz) * matrix(extra$w.aml,extra$n,extra$M)) } else { all.deviances <- numeric(M) myresid <- matrix(y,extra$n,extra$M) - cbind(mu) for (ii in 1:M) all.deviances[ii] <- sum(c(w) * devi[, ii] * Wr1(myresid[, ii], w = extra$w.aml[ii])) } if (is.logical(extra$individual) && extra$individual) all.deviances else sum(all.deviances) } amlnormal <- function(w.aml = 1, parallel = FALSE, lexpectile = "identitylink", iexpectile = NULL, imethod = 1, digw = 4) { if (!is.Numeric(w.aml, positive = TRUE)) stop("argument 'w.aml' must be a vector of positive values") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1, 2 or 3") lexpectile <- as.list(substitute(lexpectile)) eexpectile <- link2list(lexpectile) lexpectile <- attr(eexpectile, "function.name") if (length(iexpectile) && !is.Numeric(iexpectile)) stop("bad input for argument 'iexpectile'") new("vglmff", blurb = c("Asymmetric least squares quantile regression\n\n", "Links: ", namesof("expectile", link = lexpectile, earg = eexpectile)), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints) }), list( .parallel = parallel ))), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { amlnormal.deviance(mu = mu, y = y, w = w, residuals = residuals, eta = eta, extra = extra) }, initialize = eval(substitute(expression({ extra$w.aml <- .w.aml temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$M <- M <- length(extra$w.aml) # Recycle if necessary extra$n <- n extra$y.names <- y.names <- paste("w.aml = ", round(extra$w.aml, digits = .digw ), sep = "") predictors.names <- c(namesof( paste("expectile(",y.names,")", sep = ""), .lexpectile , earg = .eexpectile, tag = FALSE)) if (!length(etastart)) { mean.init <- if ( .imethod == 1) rep_len(median(y), n) else if ( .imethod == 2 || .imethod == 3) rep_len(weighted.mean(y, w), n) else { junk <- lm.wfit(x = x, y = c(y), w = c(w)) junk$fitted } if ( .imethod == 3) mean.init <- abs(mean.init) + 0.01 if (length( .iexpectile)) mean.init <- matrix( .iexpectile, n, M, byrow = TRUE) etastart <- matrix(theta2eta(mean.init, .lexpectile, earg = .eexpectile), n, M) } }), list( .lexpectile = lexpectile, .eexpectile = eexpectile, .iexpectile = iexpectile, .imethod = imethod, .digw = digw, .w.aml = w.aml ))), linkinv = eval(substitute(function(eta, extra = NULL) { ans <- eta <- as.matrix(eta) for (ii in 1:ncol(eta)) ans[, ii] <- eta2theta(eta[, ii], .lexpectile , earg = .eexpectile ) dimnames(ans) <- list(dimnames(eta)[[1]], extra$y.names) ans }, list( .lexpectile = lexpectile, .eexpectile = eexpectile ))), last = eval(substitute(expression({ misc$link <- rep_len(.lexpectile , M) names(misc$link) <- extra$y.names misc$earg <- vector("list", M) for (ilocal in 1:M) misc$earg[[ilocal]] <- list(theta = NULL) names(misc$earg) <- names(misc$link) misc$parallel <- .parallel 
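# The block below records, for each asymmetry weight w.aml, the
# empirical percentile attained by the fitted expectile curve.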
misc$expected <- TRUE extra$percentile <- numeric(M) # These are estimates (empirical) misc$multipleResponses <- TRUE for (ii in 1:M) { use.w <- if (M > 1 && NCOL(w) == M) w[, ii] else w extra$percentile[ii] <- 100 * weighted.mean(myresid[, ii] <= 0, use.w) } names(extra$percentile) <- names(misc$link) extra$individual <- TRUE if (!(M > 1 && NCOL(w) == M)) { extra$deviance <- amlnormal.deviance(mu = mu, y = y, w = w, residuals = FALSE, eta = eta, extra = extra) names(extra$deviance) <- extra$y.names } }), list( .lexpectile = lexpectile, .eexpectile = eexpectile, .parallel = parallel ))), vfamily = c("amlnormal"), validparams = eval(substitute(function(eta, y, extra = NULL) { mymu <- eta2theta(eta, .lexpectile , earg = .eexpectile ) okay1 <- all(is.finite(mymu)) okay1 }, list( .lexpectile = lexpectile, .eexpectile = eexpectile ))), deriv = eval(substitute(expression({ mymu <- eta2theta(eta, .lexpectile , earg = .eexpectile ) dexpectile.deta <- dtheta.deta(mymu, .lexpectile , earg = .eexpectile ) myresid <- matrix(y,extra$n,extra$M) - cbind(mu) wor1 <- Wr2(myresid, w = matrix(extra$w.aml, extra$n, extra$M, byrow = TRUE)) c(w) * myresid * wor1 * dexpectile.deta }), list( .lexpectile = lexpectile, .eexpectile = eexpectile ))), weight = eval(substitute(expression({ wz <- c(w) * wor1 * dexpectile.deta^2 wz }), list( .lexpectile = lexpectile, .eexpectile = eexpectile )))) } amlpoisson.deviance <- function(mu, y, w, residuals = FALSE, eta, extra = NULL) { M <- length(extra$w.aml) if (M > 1) y <- matrix(y,extra$n,extra$M) nz <- y > 0 devi <- cbind(-(y - mu)) devi[nz] <- devi[nz] + y[nz] * log(y[nz]/mu[nz]) if (residuals) { stop("not sure here") return(sign(y - mu) * sqrt(2 * abs(devi) * w) * matrix(extra$w,extra$n,extra$M)) } else { all.deviances <- numeric(M) myresid <- matrix(y,extra$n,extra$M) - cbind(mu) for (ii in 1:M) all.deviances[ii] <- 2 * sum(c(w) * devi[, ii] * Wr1(myresid[, ii], w=extra$w.aml[ii])) } if (is.logical(extra$individual) && extra$individual) all.deviances else sum(all.deviances) } amlpoisson <- function(w.aml = 1, parallel = FALSE, imethod = 1, digw = 4, link = "loge") { if (!is.Numeric(w.aml, positive = TRUE)) stop("'w.aml' must be a vector of positive values") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Poisson expectile regression by", " asymmetric maximum likelihood estimation\n\n", "Link: ", namesof("expectile", link, earg = earg)), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints) }), list( .parallel = parallel ))), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { amlpoisson.deviance(mu = mu, y = y, w = w, residuals = residuals, eta = eta, extra = extra) }, initialize = eval(substitute(expression({ extra$w.aml <- .w.aml temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$M <- M <- length(extra$w.aml) # Recycle if necessary extra$n <- n extra$y.names <- y.names <- paste("w.aml = ", round(extra$w.aml, digits = .digw ), sep = "") extra$individual <- FALSE predictors.names <- c(namesof(paste("expectile(",y.names,")", sep = ""), .link , earg = .earg , tag = FALSE)) if (!length(etastart)) { mean.init <- if ( .imethod == 2) rep_len(median(y), n) else if ( .imethod == 1) rep_len(weighted.mean(y, w), n) else { junk = lm.wfit(x = x, y = c(y), w = c(w)) abs(junk$fitted) } etastart <- matrix(theta2eta(mean.init, .link , earg = 
.earg ), n, M) } }), list( .link = link, .earg = earg, .imethod = imethod, .digw = digw, .w.aml = w.aml ))), linkinv = eval(substitute(function(eta, extra = NULL) { mu.ans <- eta <- as.matrix(eta) for (ii in 1:ncol(eta)) mu.ans[, ii] <- eta2theta(eta[, ii], .link , earg = .earg ) dimnames(mu.ans) <- list(dimnames(eta)[[1]], extra$y.names) mu.ans }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$multipleResponses <- TRUE misc$expected <- TRUE misc$parallel <- .parallel misc$link <- rep_len( .link , M) names(misc$link) <- extra$y.names misc$earg <- vector("list", M) for (ilocal in 1:M) misc$earg[[ilocal]] <- list(theta = NULL) names(misc$earg) <- names(misc$link) extra$percentile <- numeric(M) # These are estimates (empirical) for (ii in 1:M) extra$percentile[ii] <- 100 * weighted.mean(myresid[, ii] <= 0, w) names(extra$percentile) <- names(misc$link) extra$individual <- TRUE extra$deviance <- amlpoisson.deviance(mu = mu, y = y, w = w, residuals = FALSE, eta = eta, extra = extra) names(extra$deviance) <- extra$y.names }), list( .link = link, .earg = earg, .parallel = parallel ))), linkfun = eval(substitute(function(mu, extra = NULL) { theta2eta(mu, link = .link , earg = .earg ) }, list( .link = link, .earg = earg ))), vfamily = c("amlpoisson"), validparams = eval(substitute(function(eta, y, extra = NULL) { mymu <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(mymu)) && all(0 < mymu) okay1 }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ mymu <- eta2theta(eta, .link , earg = .earg ) dexpectile.deta <- dtheta.deta(mymu, .link , earg = .earg ) myresid <- matrix(y,extra$n,extra$M) - cbind(mu) wor1 <- Wr2(myresid, w = matrix(extra$w.aml, extra$n, extra$M, byrow = TRUE)) c(w) * myresid * wor1 * (dexpectile.deta / mymu) }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ use.mu <- mymu use.mu[use.mu < .Machine$double.eps^(3/4)] <- .Machine$double.eps^(3/4) wz <- c(w) * wor1 * use.mu * (dexpectile.deta / mymu)^2 wz }), list( .link = link, .earg = earg )))) } amlbinomial.deviance <- function(mu, y, w, residuals = FALSE, eta, extra = NULL) { M <- length(extra$w.aml) if (M > 1) y <- matrix(y,extra$n,extra$M) devy <- y nz <- y != 0 devy[nz] <- y[nz] * log(y[nz]) nz <- (1 - y) != 0 devy[nz] <- devy[nz] + (1 - y[nz]) * log1p(-y[nz]) devmu <- y * log(mu) + (1 - y) * log1p(-mu) if (any(small <- mu * (1 - mu) < .Machine$double.eps)) { warning("fitted values close to 0 or 1") smu <- mu[small] sy <- y[small] smu <- ifelse(smu < .Machine$double.eps, .Machine$double.eps, smu) onemsmu <- ifelse((1 - smu) < .Machine$double.eps, .Machine$double.eps, 1 - smu) devmu[small] <- sy * log(smu) + (1 - sy) * log(onemsmu) } devi <- 2 * (devy - devmu) if (residuals) { stop("not sure here") return(sign(y - mu) * sqrt(abs(devi) * w)) } else { all.deviances <- numeric(M) myresid <- matrix(y,extra$n,extra$M) - matrix(mu,extra$n,extra$M) for (ii in 1:M) all.deviances[ii] <- sum(c(w) * devi[, ii] * Wr1(myresid[, ii], w=extra$w.aml[ii])) } if (is.logical(extra$individual) && extra$individual) all.deviances else sum(all.deviances) } amlbinomial <- function(w.aml = 1, parallel = FALSE, digw = 4, link = "logit") { if (!is.Numeric(w.aml, positive = TRUE)) stop("'w.aml' must be a vector of positive values") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Logistic expectile regression by ", "asymmetric maximum likelihood estimation\n\n", "Link: ", 
namesof("expectile", link, earg = earg)), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints) }), list( .parallel = parallel ))), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { amlbinomial.deviance(mu = mu, y = y, w = w, residuals = residuals, eta = eta, extra = extra) }, initialize = eval(substitute(expression({ { NCOL <- function (x) if (is.array(x) && length(dim(x)) > 1 || is.data.frame(x)) ncol(x) else as.integer(1) if (NCOL(y) == 1) { if (is.factor(y)) y <- y != levels(y)[1] nn <- rep_len(1, n) if (!all(y >= 0 & y <= 1)) stop("response values must be in [0, 1]") if (!length(mustart) && !length(etastart)) mustart <- (0.5 + w * y) / (1 + w) no.successes <- w * y if (any(abs(no.successes - round(no.successes)) > 0.001)) stop("Number of successes must be integer-valued") } else if (NCOL(y) == 2) { if (any(abs(y - round(y)) > 0.001)) stop("Count data must be integer-valued") nn <- y[, 1] + y[, 2] y <- ifelse(nn > 0, y[, 1]/nn, 0) w <- w * nn if (!length(mustart) && !length(etastart)) mustart <- (0.5 + nn * y) / (1 + nn) } else stop("Response not of the right form") } extra$w.aml <- .w.aml if (ncol(y <- cbind(y)) != 1) stop("response must be a vector or a one-column matrix") extra$M <- M <- length(extra$w.aml) # Recycle if necessary extra$n <- n extra$y.names <- y.names <- paste("w.aml = ", round(extra$w.aml, digits = .digw ), sep = "") extra$individual <- FALSE predictors.names <- c(namesof(paste("expectile(", y.names, ")", sep = ""), .link , earg = .earg , tag = FALSE)) if (!length(etastart)) { etastart <- matrix(theta2eta(mustart, .link , earg = .earg ), n, M) mustart <- NULL } }), list( .link = link, .earg = earg, .digw = digw, .w.aml = w.aml ))), linkinv = eval(substitute(function(eta, extra = NULL) { mu.ans <- eta <- as.matrix(eta) for (ii in 1:ncol(eta)) mu.ans[, ii] <- eta2theta(eta[, ii], .link , earg = .earg ) dimnames(mu.ans) <- list(dimnames(eta)[[1]], extra$y.names) mu.ans }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- rep_len(.link , M) names(misc$link) <- extra$y.names misc$earg <- vector("list", M) for (ilocal in 1:M) misc$earg[[ilocal]] <- list(theta = NULL) names(misc$earg) <- names(misc$link) misc$parallel <- .parallel misc$expected <- TRUE extra$percentile <- numeric(M) # These are estimates (empirical) for (ii in 1:M) extra$percentile[ii] <- 100 * weighted.mean(myresid[, ii] <= 0, w) names(extra$percentile) <- names(misc$link) extra$individual <- TRUE extra$deviance <- amlbinomial.deviance(mu = mu, y = y, w = w, residuals = FALSE, eta = eta, extra = extra) names(extra$deviance) <- extra$y.names }), list( .link = link, .earg = earg, .parallel = parallel ))), linkfun = eval(substitute(function(mu, extra = NULL) { theta2eta(mu, link = .link , earg = .earg ) }, list( .link = link, .earg = earg ))), vfamily = c("amlbinomial"), validparams = eval(substitute(function(eta, y, extra = NULL) { mymu <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(mymu)) && all(0 < mymu & mymu < 1) okay1 }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ mymu <- eta2theta(eta, .link , earg = .earg ) use.mu <- mymu use.mu[use.mu < .Machine$double.eps^(3/4)] = .Machine$double.eps^(3/4) dexpectile.deta <- dtheta.deta(use.mu, .link , earg = .earg ) myresid <- matrix(y,extra$n,extra$M) - cbind(mu) wor1 <- Wr2(myresid, w = matrix(extra$w.aml, extra$n, extra$M, byrow = TRUE)) c(w) * myresid * wor1 * (dexpectile.deta 
/ (use.mu * (1-use.mu))) }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ wz <- c(w) * wor1 * (dexpectile.deta^2 / (use.mu * (1 - use.mu))) wz }), list( .link = link, .earg = earg)))) } amlexponential.deviance <- function(mu, y, w, residuals = FALSE, eta, extra = NULL) { M <- length(extra$w.aml) if (M > 1) y <- matrix(y,extra$n,extra$M) devy <- cbind(-log(y) - 1) devi <- cbind(-log(mu) - y / mu) if (residuals) { stop("not sure here") return(sign(y - mu) * sqrt(2 * abs(devi) * w) * matrix(extra$w,extra$n,extra$M)) } else { all.deviances <- numeric(M) myresid <- matrix(y,extra$n,extra$M) - cbind(mu) for (ii in 1:M) all.deviances[ii] = 2 * sum(c(w) * (devy[, ii] - devi[, ii]) * Wr1(myresid[, ii], w=extra$w.aml[ii])) } if (is.logical(extra$individual) && extra$individual) all.deviances else sum(all.deviances) } amlexponential <- function(w.aml = 1, parallel = FALSE, imethod = 1, digw = 4, link = "loge") { if (!is.Numeric(w.aml, positive = TRUE)) stop("'w.aml' must be a vector of positive values") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1, 2 or 3") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") y.names <- paste("w.aml = ", round(w.aml, digits = digw), sep = "") predictors.names <- c(namesof( paste("expectile(", y.names,")", sep = ""), link, earg = earg)) predictors.names <- paste(predictors.names, collapse = ", ") new("vglmff", blurb = c("Exponential expectile regression by", " asymmetric maximum likelihood estimation\n\n", "Link: ", predictors.names), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints) }), list( .parallel = parallel ))), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { amlexponential.deviance(mu = mu, y = y, w = w, residuals = residuals, eta = eta, extra = extra) }, initialize = eval(substitute(expression({ extra$w.aml <- .w.aml temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$M <- M <- length(extra$w.aml) # Recycle if necessary extra$n <- n extra$y.names <- y.names <- paste("w.aml = ", round(extra$w.aml, digits = .digw ), sep = "") extra$individual = FALSE predictors.names <- c(namesof( paste("expectile(", y.names, ")", sep = ""), .link , earg = .earg , tag = FALSE)) if (!length(etastart)) { mean.init <- if ( .imethod == 1) rep_len(median(y), n) else if ( .imethod == 2) rep_len(weighted.mean(y, w), n) else { 1 / (y + 1) } etastart <- matrix(theta2eta(mean.init, .link , earg = .earg ), n, M) } }), list( .link = link, .earg = earg, .imethod = imethod, .digw = digw, .w.aml = w.aml ))), linkinv = eval(substitute(function(eta, extra = NULL) { mu.ans <- eta <- as.matrix(eta) for (ii in 1:ncol(eta)) mu.ans[, ii] <- eta2theta(eta[, ii], .link , earg = .earg ) dimnames(mu.ans) <- list(dimnames(eta)[[1]], extra$y.names) mu.ans }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$multipleResponses <- TRUE misc$expected <- TRUE misc$parallel <- .parallel misc$link <- rep_len( .link , M) names(misc$link) <- extra$y.names misc$earg <- vector("list", M) for (ilocal in 1:M) misc$earg[[ilocal]] <- list(theta = NULL) names(misc$earg) <- names(misc$link) extra$percentile <- numeric(M) # These are estimates (empirical) for (ii in 1:M) extra$percentile[ii] <- 100 * weighted.mean(myresid[, ii] <= 0, 
w) names(extra$percentile) <- names(misc$link) extra$individual <- TRUE extra$deviance = amlexponential.deviance(mu = mu, y = y, w = w, residuals = FALSE, eta = eta, extra = extra) names(extra$deviance) <- extra$y.names }), list( .link = link, .earg = earg, .parallel = parallel ))), linkfun = eval(substitute(function(mu, extra = NULL) { theta2eta(mu, link = .link , earg = .earg ) }, list( .link = link, .earg = earg ))), vfamily = c("amlexponential"), validparams = eval(substitute(function(eta, y, extra = NULL) { mymu <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(mymu)) && all(0 < mymu) okay1 }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ mymu <- eta2theta(eta, .link , earg = .earg ) bigy <- matrix(y,extra$n,extra$M) dl.dmu <- (bigy - mymu) / mymu^2 dmu.deta <- dtheta.deta(mymu, .link , earg = .earg ) myresid <- bigy - cbind(mymu) wor1 <- Wr2(myresid, w = matrix(extra$w.aml, extra$n, extra$M, byrow = TRUE)) c(w) * wor1 * dl.dmu * dmu.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ ned2l.dmu2 <- 1 / mymu^2 wz <- c(w) * wor1 * ned2l.dmu2 * dmu.deta^2 wz }), list( .link = link, .earg = earg )))) } rho1check <- function(u, tau = 0.5) u * (tau - (u <= 0)) dalap <- function(x, location = 0, scale = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)), log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) NN <- max(length(x), length(location), length(scale), length(kappa), length(tau)) if (length(x) != NN) x <- rep_len(x, NN) if (length(location) != NN) location <- rep_len(location, NN) if (length(scale) != NN) scale <- rep_len(scale, NN) if (length(kappa) != NN) kappa <- rep_len(kappa, NN) if (length(tau) != NN) tau <- rep_len(tau, NN) logconst <- 0.5 * log(2) - log(scale) + log(kappa) - log1p(kappa^2) exponent <- -(sqrt(2) / scale) * abs(x - location) * ifelse(x >= location, kappa, 1/kappa) indexTF <- (scale > 0) & (tau > 0) & (tau < 1) & (kappa > 0) # & logconst[!indexTF] <- NaN if (log.arg) logconst + exponent else exp(logconst + exponent) } ralap <- function(n, location = 0, scale = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n location <- rep_len(location, use.n) scale <- rep_len(scale, use.n) tau <- rep_len(tau, use.n) kappa <- rep_len(kappa, use.n) ans <- location + scale * log(runif(use.n)^kappa / runif(use.n)^(1/kappa)) / sqrt(2) indexTF <- (scale > 0) & (tau > 0) & (tau < 1) & (kappa > 0) # & ans[!indexTF] <- NaN ans } palap <- function(q, location = 0, scale = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)), lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") NN <- max(length(q), length(location), length(scale), length(kappa), length(tau)) if (length(q) != NN) q <- rep_len(q, NN) if (length(location) != NN) location <- rep_len(location, NN) if (length(scale) != NN) scale <- rep_len(scale, NN) if (length(kappa) != NN) kappa <- rep_len(kappa, NN) if (length(tau) != NN) tau <- rep_len(tau, NN) exponent <- -(sqrt(2) / scale) * abs(q - location) * ifelse(q >= location, kappa, 1/kappa) temp5 <- exp(exponent) / (1 + kappa^2) index1 <- (q < location) if (lower.tail) { if (log.p) { ans <- log1p(-exp(exponent) / (1 + kappa^2)) logtemp5 <- 
exponent - log1p(kappa^2) ans[index1] <- 2 * log(kappa[index1]) + logtemp5[index1] } else { ans <- (kappa^2 - expm1(exponent)) / (1 + kappa^2) ans[index1] <- (kappa[index1])^2 * temp5[index1] } } else { if (log.p) { ans <- exponent - log1p(kappa^2) # logtemp5 ans[index1] <- log1p(-(kappa[index1])^2 * temp5[index1]) } else { ans <- temp5 ans[index1] <- (1 + (kappa[index1])^2 * (-expm1(exponent[index1]))) / (1+(kappa[index1])^2) } } indexTF <- (scale > 0) & (tau > 0) & (tau < 1) & (kappa > 0) # & ans[!indexTF] <- NaN ans } qalap <- function(p, location = 0, scale = 1, tau = 0.5, kappa = sqrt(tau / (1 - tau)), lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") NN <- max(length(p), length(location), length(scale), length(kappa), length(tau)) if (length(p) != NN) p <- rep_len(p, NN) if (length(location) != NN) location <- rep_len(location, NN) if (length(scale) != NN) scale <- rep_len(scale, NN) if (length(kappa) != NN) kappa <- rep_len(kappa, NN) if (length(tau) != NN) tau <- rep_len(tau, NN) temp5 <- kappa^2 / (1 + kappa^2) if (lower.tail) { if (log.p) { ans <- exp(p) index1 <- (exp(p) <= temp5) exponent <- exp(p[index1]) / temp5[index1] ans[index1] <- location[index1] + (scale[index1] * kappa[index1]) * log(exponent) / sqrt(2) ans[!index1] <- location[!index1] - (scale[!index1] / kappa[!index1]) * (log1p((kappa[!index1])^2) + log(-expm1(p[!index1]))) / sqrt(2) } else { ans <- p index1 <- (p <= temp5) exponent <- p[index1] / temp5[index1] ans[index1] <- location[index1] + (scale[index1] * kappa[index1]) * log(exponent) / sqrt(2) ans[!index1] <- location[!index1] - (scale[!index1] / kappa[!index1]) * (log1p((kappa[!index1])^2) + log1p(-p[!index1])) / sqrt(2) } } else { if (log.p) { ans <- -expm1(p) index1 <- (-expm1(p) <= temp5) exponent <- -expm1(p[index1]) / temp5[index1] ans[index1] <- location[index1] + (scale[index1] * kappa[index1]) * log(exponent) / sqrt(2) ans[!index1] <- location[!index1] - (scale[!index1] / kappa[!index1]) * (log1p((kappa[!index1])^2) + p[!index1]) / sqrt(2) } else { ans <- exp(log1p(-p)) index1 <- (p >= (1 / (1+kappa^2))) exponent <- exp(log1p(-p[index1])) / temp5[index1] ans[index1] <- location[index1] + (scale[index1] * kappa[index1]) * log(exponent) / sqrt(2) ans[!index1] <- location[!index1] - (scale[!index1] / kappa[!index1]) * (log1p((kappa[!index1])^2) + log(p[!index1])) / sqrt(2) } } indexTF <- (scale > 0) & (tau > 0) & (tau < 1) & (kappa > 0) # & ans[!indexTF] <- NaN ans } rloglap <- function(n, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n location.ald <- rep_len(location.ald, use.n) scale.ald <- rep_len(scale.ald, use.n) tau <- rep_len(tau, use.n) kappa <- rep_len(kappa, use.n) ans <- exp(location.ald) * (runif(use.n)^kappa / runif(use.n)^(1/kappa))^(scale.ald / sqrt(2)) indexTF <- (scale.ald > 0) & (tau > 0) & (tau < 1) & (kappa > 0) # & ans[!indexTF] <- NaN ans } dloglap <- function(x, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)), log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) scale <- scale.ald location <- location.ald NN <- max(length(x), length(location), length(scale), length(kappa), 
length(tau)) if (length(x) != NN) x <- rep_len(x, NN) if (length(location) != NN) location <- rep_len(location, NN) if (length(scale) != NN) scale <- rep_len(scale, NN) if (length(kappa) != NN) kappa <- rep_len(kappa, NN) if (length(tau) != NN) tau <- rep_len(tau, NN) Alpha <- sqrt(2) * kappa / scale.ald Beta <- sqrt(2) / (scale.ald * kappa) Delta <- exp(location.ald) exponent <- ifelse(x >= Delta, -(Alpha+1), (Beta-1)) * (log(x) - location.ald) logdensity <- -location.ald + log(Alpha) + log(Beta) - log(Alpha + Beta) + exponent indexTF <- (scale.ald > 0) & (tau > 0) & (tau < 1) & (kappa > 0) # & logdensity[!indexTF] <- NaN logdensity[x < 0 & indexTF] <- -Inf if (log.arg) logdensity else exp(logdensity) } qloglap <- function(p, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)), lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") NN <- max(length(p), length(location.ald), length(scale.ald), length(kappa)) p <- rep_len(p, NN) location <- rep_len(location.ald, NN) scale <- rep_len(scale.ald, NN) kappa <- rep_len(kappa, NN) tau <- rep_len(tau, NN) Alpha <- sqrt(2) * kappa / scale.ald Beta <- sqrt(2) / (scale.ald * kappa) Delta <- exp(location.ald) temp9 <- Alpha + Beta if (lower.tail) { if (log.p) { ln.p <- p ans <- ifelse((exp(ln.p) > Alpha / temp9), Delta * (-expm1(ln.p) * temp9 / Beta)^(-1/Alpha), Delta * (exp(ln.p) * temp9 / Alpha)^(1/Beta)) ans[ln.p > 0] <- NaN } else { ans <- ifelse((p > Alpha / temp9), Delta * exp((-1/Alpha) * (log1p(-p) + log(temp9/Beta))), Delta * (p * temp9 / Alpha)^(1/Beta)) ans[p < 0] <- NaN ans[p == 0] <- 0 ans[p == 1] <- Inf ans[p > 1] <- NaN } } else { if (log.p) { ln.p <- p ans <- ifelse((-expm1(ln.p) > Alpha / temp9), Delta * (exp(ln.p) * temp9 / Beta)^(-1/Alpha), Delta * (-expm1(ln.p) * temp9 / Alpha)^(1/Beta)) ans[ln.p > 0] <- NaN } else { ans <- ifelse((p < (temp9 - Alpha) / temp9), Delta * (p * temp9 / Beta)^(-1/Alpha), Delta * exp((1/Beta)*(log1p(-p) + log(temp9/Alpha)))) ans[p < 0] <- NaN ans[p == 0] <- Inf ans[p == 1] <- 0 ans[p > 1] <- NaN } } indexTF <- (scale.ald > 0) & (tau > 0) & (tau < 1) & (kappa > 0) ans[!indexTF] <- NaN ans } ploglap <- function(q, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)), lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") NN <- max(length(q), length(location.ald), length(scale.ald), length(kappa)) location <- rep_len(location.ald, NN) scale <- rep_len(scale.ald, NN) kappa <- rep_len(kappa, NN) q <- rep_len(q, NN) tau <- rep_len(tau, NN) Alpha <- sqrt(2) * kappa / scale.ald Beta <- sqrt(2) / (scale.ald * kappa) Delta <- exp(location.ald) temp9 <- Alpha + Beta index1 <- (Delta <= q) if (lower.tail) { if (log.p) { ans <- log((Alpha / temp9) * (q / Delta)^(Beta)) ans[index1] <- log1p((-(Beta/temp9) * (Delta/q)^(Alpha))[index1]) ans[q <= 0 ] <- -Inf ans[q == Inf] <- 0 } else { ans <- (Alpha / temp9) * (q / Delta)^(Beta) ans[index1] <- -expm1((log(Beta/temp9) + Alpha * log(Delta/q)))[index1] ans[q <= 0] <- 0 ans[q == Inf] <- 1 } } else { if (log.p) { ans <- log1p(-(Alpha / temp9) * (q / Delta)^(Beta)) ans[index1] <- log(((Beta/temp9) * (Delta/q)^(Alpha))[index1]) ans[q <= 0] <- 0 ans[q == Inf] <- -Inf } else { ans <- -expm1(log(Alpha/temp9) + Beta * 
log(q/Delta)) ans[index1] <- ((Beta/temp9) * (Delta/q)^(Alpha))[index1] ans[q <= 0] <- 1 ans[q == Inf] <- 0 } } indexTF <- (scale.ald > 0) & (tau > 0) & (tau < 1) & (kappa > 0) # & ans[!indexTF] <- NaN ans } rlogitlap <- function(n, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) { logit(ralap(n = n, location = location.ald, scale = scale.ald, tau = tau, kappa = kappa), inverse = TRUE) # earg = earg } dlogitlap <- function(x, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)), log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) NN <- max(length(x), length(location.ald), length(scale.ald), length(kappa)) location <- rep_len(location.ald, NN) scale <- rep_len(scale.ald, NN) kappa <- rep_len(kappa, NN) x <- rep_len(x, NN) tau <- rep_len(tau, NN) Alpha <- sqrt(2) * kappa / scale.ald Beta <- sqrt(2) / (scale.ald * kappa) Delta <- logit(location.ald, inverse = TRUE) # earg = earg exponent <- ifelse(x >= Delta, -Alpha, Beta) * (logit(x) - # earg = earg location.ald) logdensity <- log(Alpha) + log(Beta) - log(Alpha + Beta) - log(x) - log1p(-x) + exponent indexTF <- (scale.ald > 0) & (tau > 0) & (tau < 1) & (kappa > 0) # & logdensity[!indexTF] <- NaN logdensity[x < 0 & indexTF] <- -Inf logdensity[x > 1 & indexTF] <- -Inf if (log.arg) logdensity else exp(logdensity) } qlogitlap <- function(p, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) { qqq <- qalap(p = p, location = location.ald, scale = scale.ald, tau = tau, kappa = kappa) ans <- logit(qqq, inverse = TRUE) # earg = earg ans[(p < 0) | (p > 1)] <- NaN ans[p == 0] <- 0 ans[p == 1] <- 1 ans } plogitlap <- function(q, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) { NN <- max(length(q), length(location.ald), length(scale.ald), length(kappa)) location.ald <- rep_len(location.ald, NN) scale.ald <- rep_len(scale.ald, NN) kappa <- rep_len(kappa, NN) q <- rep_len(q, NN) tau <- rep_len(tau, NN) indexTF <- (q > 0) & (q < 1) qqq <- logit(q[indexTF]) # earg = earg ans <- q ans[indexTF] <- palap(q = qqq, location = location.ald[indexTF], scale = scale.ald[indexTF], tau = tau[indexTF], kappa = kappa[indexTF]) ans[q >= 1] <- 1 ans[q <= 0] <- 0 ans } rprobitlap <- function(n, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) { probit(ralap(n = n, location = location.ald, scale = scale.ald, tau = tau, kappa = kappa), inverse = TRUE) } dprobitlap <- function(x, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)), log = FALSE, meth2 = TRUE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) NN <- max(length(x), length(location.ald), length(scale.ald), length(kappa)) location.ald <- rep_len(location.ald, NN) scale.ald <- rep_len(scale.ald, NN) kappa <- rep_len(kappa, NN) x <- rep_len(x, NN) tau <- rep_len(tau, NN) logdensity <- x * NaN index1 <- (x > 0) & (x < 1) indexTF <- (scale.ald > 0) & (tau > 0) & (tau < 1) & (kappa > 0) # & if (meth2) { dx.dy <- x use.x <- probit(x[index1]) # earg = earg logdensity[index1] <- dalap(x = use.x, location = location.ald[index1], scale = scale.ald[index1], tau = tau[index1], kappa = kappa[index1], log = TRUE) } else { Alpha <- sqrt(2) * kappa / scale.ald Beta <- sqrt(2) / (scale.ald * kappa) Delta <- pnorm(location.ald) use.x <- qnorm(x) # qnorm(x[index1]) log.dy.dw <- dnorm(use.x, log = TRUE) exponent <- ifelse(x >= Delta, -Alpha, Beta) * (use.x - location.ald) - log.dy.dw logdensity[index1] <- 
(log(Alpha) + log(Beta) - log(Alpha + Beta) + exponent)[index1] } logdensity[!indexTF] <- NaN logdensity[x < 0 & indexTF] <- -Inf logdensity[x > 1 & indexTF] <- -Inf if (meth2) { dx.dy[index1] <- probit(x[index1], # earg = earg, inverse = TRUE, deriv = 1) dx.dy[!index1] <- 0 dx.dy[!indexTF] <- NaN if (log.arg) logdensity - log(abs(dx.dy)) else exp(logdensity) / abs(dx.dy) } else { if (log.arg) logdensity else exp(logdensity) } } qprobitlap <- function(p, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) { qqq <- qalap(p = p, location = location.ald, scale = scale.ald, tau = tau, kappa = kappa) ans <- probit(qqq, inverse = TRUE) # , earg = earg ans[(p < 0) | (p > 1)] = NaN ans[p == 0] <- 0 ans[p == 1] <- 1 ans } pprobitlap <- function(q, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) { NN <- max(length(q), length(location.ald), length(scale.ald), length(kappa)) location.ald <- rep_len(location.ald, NN) scale.ald <- rep_len(scale.ald, NN) kappa <- rep_len(kappa, NN) q <- rep_len(q, NN) tau <- rep_len(tau, NN) indexTF <- (q > 0) & (q < 1) qqq <- probit(q[indexTF]) # earg = earg ans <- q ans[indexTF] <- palap(q = qqq, location = location.ald[indexTF], scale = scale.ald[indexTF], tau = tau[indexTF], kappa = kappa[indexTF]) ans[q >= 1] <- 1 ans[q <= 0] <- 0 ans } rclogloglap <- function(n, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) { cloglog(ralap(n = n, location = location.ald, scale = scale.ald, tau = tau, kappa = kappa), # earg = earg, inverse = TRUE) } dclogloglap <- function(x, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)), log = FALSE, meth2 = TRUE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) NN <- max(length(x), length(location.ald), length(scale.ald), length(kappa)) location.ald <- rep_len(location.ald, NN) scale.ald <- rep_len(scale.ald, NN) kappa <- rep_len(kappa, NN) x <- rep_len(x, NN) tau <- rep_len(tau, NN) logdensity <- x * NaN index1 <- (x > 0) & (x < 1) indexTF <- (scale.ald > 0) & (tau > 0) & (tau < 1) & (kappa > 0) # & if (meth2) { dx.dy <- x use.w <- cloglog(x[index1]) # earg = earg logdensity[index1] <- dalap(x = use.w, location = location.ald[index1], scale = scale.ald[index1], tau = tau[index1], kappa = kappa[index1], log = TRUE) } else { Alpha <- sqrt(2) * kappa / scale.ald Beta <- sqrt(2) / (scale.ald * kappa) Delta <- cloglog(location.ald, inverse = TRUE) exponent <- ifelse(x >= Delta, -(Alpha+1), Beta-1) * log(-log1p(-x)) + ifelse(x >= Delta, Alpha, -Beta) * location.ald logdensity[index1] <- (log(Alpha) + log(Beta) - log(Alpha + Beta) - log1p(-x) + exponent)[index1] } logdensity[!indexTF] <- NaN logdensity[x < 0 & indexTF] <- -Inf logdensity[x > 1 & indexTF] <- -Inf if (meth2) { dx.dy[index1] <- cloglog(x[index1], # earg = earg, inverse = TRUE, deriv = 1) dx.dy[!index1] <- 0 dx.dy[!indexTF] <- NaN if (log.arg) logdensity - log(abs(dx.dy)) else exp(logdensity) / abs(dx.dy) } else { if (log.arg) logdensity else exp(logdensity) } } qclogloglap <- function(p, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) { qqq <- qalap(p = p, location = location.ald, scale = scale.ald, tau = tau, kappa = kappa) ans <- cloglog(qqq, inverse = TRUE) # , earg = earg ans[(p < 0) | (p > 1)] <- NaN ans[p == 0] <- 0 ans[p == 1] <- 1 ans } pclogloglap <- function(q, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) { NN <- max(length(q), length(location.ald), length(scale.ald), length(kappa)) 
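# Recycle all arguments to the common length NN, as in the other
# d/p/q/r functions above, before mapping q through cloglog().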
location.ald <- rep_len(location.ald, NN) scale.ald <- rep_len(scale.ald, NN) kappa <- rep_len(kappa, NN) q <- rep_len(q, NN) tau <- rep_len(tau, NN) indexTF <- (q > 0) & (q < 1) qqq <- cloglog(q[indexTF]) # earg = earg ans <- q ans[indexTF] <- palap(q = qqq, location = location.ald[indexTF], scale = scale.ald[indexTF], tau = tau[indexTF], kappa = kappa[indexTF]) ans[q >= 1] <- 1 ans[q <= 0] <- 0 ans } alaplace2.control <- function(maxit = 100, ...) { list(maxit = maxit) } alaplace2 <- function(tau = NULL, llocation = "identitylink", lscale = "loge", ilocation = NULL, iscale = NULL, kappa = sqrt(tau / (1-tau)), ishrinkage = 0.95, parallel.locat = TRUE ~ 0, parallel.scale = FALSE ~ 0, digt = 4, idf.mu = 3, imethod = 1, zero = "scale") { apply.parint.locat <- FALSE apply.parint.scale <- TRUE llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") ilocat <- ilocation if (!is.Numeric(kappa, positive = TRUE)) stop("bad input for argument 'kappa'") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1, 2 or ... 4") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") if (!is.Numeric(ishrinkage, length.arg = 1) || ishrinkage < 0 || ishrinkage > 1) stop("bad input for argument 'ishrinkage'") if (length(tau) && max(abs(kappa - sqrt(tau / (1 - tau)))) > 1.0e-6) stop("arguments 'kappa' and 'tau' do not match") fittedMean <- FALSE if (!is.logical(fittedMean) || length(fittedMean) != 1) stop("bad input for argument 'fittedMean'") new("vglmff", blurb = c("Two-parameter asymmetric Laplace distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), # ", ", "\n\n", "Mean: ", "location + scale * (1/kappa - kappa) / sqrt(2)", "\n", "Quantiles: location", "\n", "Variance: scale^2 * (1 + kappa^4) / (2 * kappa^2)"), constraints = eval(substitute(expression({ onemat <- matrix(1, Mdiv2, 1) constraints.orig <- constraints cm1.locat <- kronecker(diag(Mdiv2), rbind(1, 0)) cmk.locat <- kronecker(onemat, rbind(1, 0)) con.locat <- cm.VGAM(cmk.locat, x = x, bool = .parallel.locat , constraints = constraints.orig, apply.int = .apply.parint.locat , cm.default = cm1.locat, cm.intercept.default = cm1.locat) cm1.scale <- kronecker(diag(Mdiv2), rbind(0, 1)) cmk.scale <- kronecker(onemat, rbind(0, 1)) con.scale <- cm.VGAM(cmk.scale, x = x, bool = .parallel.scale , constraints = constraints.orig, apply.int = .apply.parint.scale , cm.default = cm1.scale, cm.intercept.default = cm1.scale) con.use <- con.scale for (klocal in seq_along(con.scale)) { con.use[[klocal]] <- cbind(con.locat[[klocal]], con.scale[[klocal]]) } constraints <- con.use constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = M1) }), list( .parallel.locat = parallel.locat, .parallel.scale = parallel.scale, .zero = zero, .apply.parint.scale = apply.parint.scale, .apply.parint.locat = apply.parint.locat ))), infos = eval(substitute(function(...) 
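# The constraints slot above builds the location and scale columns via
# Kronecker products, so that parallelism can be requested separately
# for each parameter across the tau values.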
{ list(M1 = 2, Q1 = 1, summary.pvalues = FALSE, expected = TRUE, # 20161117 multipleResponses = TRUE, # FALSE, parameters.names = c("location", "scale"), true.mu = .fittedMean , zero = .zero , tau = .tau , kappa = .kappa ) }, list( .tau = tau, .kappa = kappa, .fittedMean = fittedMean, .zero = zero ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, ncol.w.max = if (length( .kappa ) > 1) 1 else Inf, ncol.y.max = if (length( .kappa ) > 1) 1 else Inf, out.wy = TRUE, colsyperw = 1, # Uncommented out 20140621 maximize = TRUE) w <- temp5$w y <- temp5$y extra$ncoly <- ncoly <- ncol(y) if ((ncoly > 1) && (length( .kappa ) > 1)) stop("response must be a vector if 'kappa' or 'tau' ", "has a length greater than one") extra$kappa <- .kappa extra$tau <- extra$kappa^2 / (1 + extra$kappa^2) extra$Mdiv2 <- Mdiv2 <- max(ncoly, length( .kappa )) extra$M <- M <- M1 * Mdiv2 extra$n <- n extra$tau.names <- tau.names <- paste("(tau = ", round(extra$tau, digits = .digt), ")", sep = "") extra$Y.names <- Y.names <- if (ncoly > 1) dimnames(y)[[2]] else "y" if (is.null(Y.names) || any(Y.names == "")) extra$Y.names <- Y.names <- paste("y", 1:ncoly, sep = "") extra$y.names <- y.names <- if (ncoly > 1) paste(Y.names, tau.names, sep = "") else tau.names extra$individual <- FALSE mynames1 <- param.names("location", Mdiv2) mynames2 <- param.names("scale", Mdiv2) predictors.names <- c(namesof(mynames1, .llocat , earg = .elocat , tag = FALSE), namesof(mynames2, .lscale , earg = .escale , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] locat.init <- scale.init <- matrix(0, n, Mdiv2) if (!length(etastart)) { for (jay in 1:Mdiv2) { y.use <- if (ncoly > 1) y[, jay] else y Jay <- if (ncoly > 1) jay else 1 if ( .imethod == 1) { locat.init[, jay] <- weighted.mean(y.use, w[, Jay]) scale.init[, jay] <- sqrt(var(y.use) / 2) } else if ( .imethod == 2) { locat.init[, jay] <- median(y.use) scale.init[, jay] <- sqrt(sum(c(w[, Jay]) * abs(y - median(y.use))) / (sum(w[, Jay]) * 2)) } else if ( .imethod == 3) { Fit5 <- vsmooth.spline(x = x[, min(ncol(x), 2)], y = y.use, w = w[, Jay], df = .idf.mu ) locat.init[, jay] <- predict(Fit5, x = x[, min(ncol(x), 2)])$y scale.init[, jay] <- sqrt(sum(c(w[, Jay]) * abs(y.use - median(y.use))) / ( sum(w[, Jay]) * 2)) } else { use.this <- weighted.mean(y.use, w[, Jay]) locat.init[, jay] <- (1 - .ishrinkage ) * y.use + .ishrinkage * use.this scale.init[, jay] <- sqrt(sum(c(w[, Jay]) * abs(y.use - median(y.use ))) / (sum(w[, Jay]) * 2)) } } if (length( .ilocat )) { locat.init <- matrix( .ilocat , n, Mdiv2, byrow = TRUE) } if (length( .iscale )) { scale.init <- matrix( .iscale , n, Mdiv2, byrow = TRUE) } etastart <- cbind(theta2eta(locat.init, .llocat , earg = .elocat ), theta2eta(scale.init, .lscale , earg = .escale )) etastart <- etastart[, interleave.VGAM(M, M1 = M1), drop = FALSE] } }), list( .imethod = imethod, .idf.mu = idf.mu, .ishrinkage = ishrinkage, .digt = digt, .elocat = elocat, .escale = escale, .llocat = llocat, .lscale = lscale, .kappa = kappa, .ilocat = ilocat, .iscale = iscale ))), linkinv = eval(substitute(function(eta, extra = NULL) { M1 <- 2 Mdiv2 <- ncol(eta) / M1 # extra$Mdiv2 vTF <- c(TRUE, FALSE) locat <- eta2theta(eta[, vTF, drop = FALSE], .llocat , earg = .elocat ) dimnames(locat) <- list(dimnames(eta)[[1]], extra$y.names) myans <- if ( .fittedMean ) { kappamat <- matrix(extra$kappa, extra$n, extra$Mdiv2, byrow = TRUE) Scale <- eta2theta(eta[, !vTF, drop = FALSE], .lscale , earg = .escale ) locat + Scale * 
(1/kappamat - kappamat) } else { locat } dimnames(myans) <- list(dimnames(myans)[[1]], extra$y.names) myans }, list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .fittedMean = fittedMean, .kappa = kappa ))), last = eval(substitute(expression({ M1 <- 2 # extra$M1 Mdiv2 <- ncol(eta) / M1 # extra$Mdiv2 tmp34 <- c(rep_len( .llocat , Mdiv2), rep_len( .lscale , Mdiv2)) names(tmp34) <- c(mynames1, mynames2) tmp34 <- tmp34[interleave.VGAM(M, M1 = M1)] misc$link <- tmp34 # Already named misc$earg <- vector("list", M) for (ii in 1:Mdiv2) { misc$earg[[M1 * ii - 1]] <- .elocat misc$earg[[M1 * ii ]] <- .escale } names(misc$earg) <- names(misc$link) extra$kappa <- misc$kappa <- .kappa extra$tau <- misc$tau <- misc$kappa^2 / (1 + misc$kappa^2) extra$percentile <- numeric(Mdiv2) # length(misc$kappa) locat <- as.matrix(locat) for (ii in 1:Mdiv2) { y.use <- if (ncoly > 1) y[, ii] else y Jay <- if (ncoly > 1) ii else 1 extra$percentile[ii] <- 100 * weighted.mean(y.use <= locat[, ii], w[, Jay]) } names(extra$percentile) <- y.names }), list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .fittedMean = fittedMean, .kappa = kappa ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { M1 <- 2 Mdiv2 <- ncol(eta) / M1 # extra$Mdiv2 ymat <- matrix(y, extra$n, extra$Mdiv2) kappamat <- matrix(extra$kappa, extra$n, extra$Mdiv2, byrow = TRUE) vTF <- c(TRUE, FALSE) locat <- eta2theta(eta[, vTF, drop = FALSE], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, !vTF, drop = FALSE], .lscale , earg = .escale ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dalap(x = c(ymat), location = c(locat), scale = c(Scale), kappa = c(kappamat), log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .kappa = kappa ))), vfamily = c("alaplace2"), validparams = eval(substitute(function(eta, y, extra = NULL) { vTF <- c(TRUE, FALSE) locat <- eta2theta(eta[, vTF, drop = FALSE], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, !vTF, drop = FALSE], .lscale , earg = .escale ) okay1 <- all(is.finite(locat)) && all(is.finite(Scale)) && all(0 < Scale) okay1 }, list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .kappa = kappa ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) extra <- object@extra vTF <- c(TRUE, FALSE) locat <- eta2theta(eta[, vTF, drop = FALSE], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, !vTF, drop = FALSE], .lscale , earg = .escale ) kappamat <- matrix(extra$kappa, extra$n, extra$Mdiv2, byrow = TRUE) ralap(nsim * length(Scale), location = c(locat), scale = c(Scale), kappa = c(kappamat)) }, list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .kappa = kappa ))), deriv = eval(substitute(expression({ M1 <- 2 Mdiv2 <- ncol(eta) / M1 # extra$Mdiv2 ymat <- matrix(y, n, Mdiv2) vTF <- c(TRUE, FALSE) locat <- eta2theta(eta[, vTF, drop = FALSE], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, !vTF, drop = FALSE], .lscale , earg = .escale ) kappamat <- matrix(extra$kappa, n, Mdiv2, byrow = TRUE) zedd <- abs(ymat - locat) / Scale dl.dlocat <- sqrt(2) * ifelse(ymat >= locat, kappamat, 1/kappamat) * sign(ymat - locat) / Scale dl.dscale <- sqrt(2) * 
ifelse(ymat >= locat, kappamat, 1/kappamat) * zedd / Scale - 1 / Scale dlocat.deta <- dtheta.deta(locat, .llocat , earg = .elocat ) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) ans <- c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta) ans[, interleave.VGAM(ncol(ans), M1 = M1)] }), list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat, .kappa = kappa ))), weight = eval(substitute(expression({ wz <- matrix(NA_real_, n, M) d2l.dlocat2 <- 2 / Scale^2 d2l.dscale2 <- 1 / Scale^2 wz[, vTF] <- d2l.dlocat2 * dlocat.deta^2 wz[, !vTF] <- d2l.dscale2 * dscale.deta^2 c(w) * wz }), list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat )))) } # End of alaplace2(). alaplace1.control <- function(maxit = 100, ...) { list(maxit = maxit) } alaplace1 <- function(tau = NULL, llocation = "identitylink", ilocation = NULL, kappa = sqrt(tau/(1-tau)), Scale.arg = 1, ishrinkage = 0.95, parallel.locat = TRUE ~ 0, # FALSE, digt = 4, idf.mu = 3, zero = NULL, imethod = 1) { apply.parint.locat <- FALSE if (!is.Numeric(kappa, positive = TRUE)) stop("bad input for argument 'kappa'") if (length(tau) && max(abs(kappa - sqrt(tau/(1-tau)))) > 1.0e-6) stop("arguments 'kappa' and 'tau' do not match") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1, 2 or ... 4") llocation <- llocation llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation if (!is.Numeric(ishrinkage, length.arg = 1) || ishrinkage < 0 || ishrinkage > 1) stop("bad input for argument 'ishrinkage'") if (!is.Numeric(Scale.arg, positive = TRUE)) stop("bad input for argument 'Scale.arg'") fittedMean <- FALSE if (!is.logical(fittedMean) || length(fittedMean) != 1) stop("bad input for argument 'fittedMean'") new("vglmff", blurb = c("One-parameter asymmetric Laplace distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat), "\n", "\n", "Mean: location + scale * (1/kappa - kappa) / ", "sqrt(2)", "\n", "Quantiles: location", "\n", "Variance: scale^2 * (1 + kappa^4) / (2 * kappa^2)"), constraints = eval(substitute(expression({ onemat <- matrix(1, M, 1) constraints.orig <- constraints cm1.locat <- diag(M) cmk.locat <- onemat con.locat <- cm.VGAM(cmk.locat, x = x, bool = .parallel.locat , constraints = constraints.orig, apply.int = .apply.parint.locat , cm.default = cm1.locat, cm.intercept.default = cm1.locat) constraints <- con.locat constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .parallel.locat = parallel.locat, .zero = zero, .apply.parint.locat = apply.parint.locat ))), infos = eval(substitute(function(...) 
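# infos slot: alaplace1 models the location (i.e. the quantile) only;
# the scale is held fixed at Scale.arg, hence M1 = 1.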
{ list(M1 = 1, Q1 = 1, summary.pvalues = FALSE, tau = .tau , multipleResponses = FALSE, parameters.names = c("location"), kappa = .kappa) }, list( .kappa = kappa, .tau = tau ))), initialize = eval(substitute(expression({ extra$M1 <- M1 <- 1 temp5 <- w.y.check(w = w, y = y, ncol.w.max = if (length( .kappa ) > 1) 1 else Inf, ncol.y.max = if (length( .kappa ) > 1) 1 else Inf, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$ncoly <- ncoly <- ncol(y) if ((ncoly > 1) && (length( .kappa ) > 1 || length( .Scale.arg ) > 1)) stop("response must be a vector if 'kappa' or 'Scale.arg' ", "has a length greater than one") extra$kappa <- .kappa extra$tau <- extra$kappa^2 / (1 + extra$kappa^2) extra$M <- M <- max(length( .Scale.arg ), ncoly, length( .kappa )) # Recycle extra$Scale <- rep_len( .Scale.arg , M) extra$kappa <- rep_len( .kappa , M) extra$tau <- extra$kappa^2 / (1 + extra$kappa^2) extra$n <- n extra$tau.names <- tau.names <- paste("(tau = ", round(extra$tau, digits = .digt), ")", sep = "") extra$Y.names <- Y.names <- if (ncoly > 1) dimnames(y)[[2]] else "y" if (is.null(Y.names) || any(Y.names == "")) extra$Y.names <- Y.names <- paste("y", 1:ncoly, sep = "") extra$y.names <- y.names <- if (ncoly > 1) paste(Y.names, tau.names, sep = "") else tau.names extra$individual <- FALSE mynames1 <- param.names("location", M) predictors.names <- c(namesof(mynames1, .llocat , earg = .elocat , tag = FALSE)) locat.init <- matrix(0, n, M) if (!length(etastart)) { for (jay in 1:M) { y.use <- if (ncoly > 1) y[, jay] else y if ( .imethod == 1) { locat.init[, jay] <- weighted.mean(y.use, w[, min(jay, ncol(w))]) } else if ( .imethod == 2) { locat.init[, jay] <- median(y.use) } else if ( .imethod == 3) { Fit5 <- vsmooth.spline(x = x[, min(ncol(x), 2)], y = y.use, w = w, df = .idf.mu ) locat.init[, jay] <- c(predict(Fit5, x = x[, min(ncol(x), 2)])$y) } else { use.this <- weighted.mean(y.use, w[, min(jay, ncol(w))]) locat.init[, jay] <- (1- .ishrinkage ) * y.use + .ishrinkage * use.this } if (length( .ilocat )) { locat.init <- matrix( .ilocat , n, M, byrow = TRUE) } if ( .llocat == "loge") locat.init <- abs(locat.init) etastart <- cbind(theta2eta(locat.init, .llocat , earg = .elocat )) } } }), list( .imethod = imethod, .idf.mu = idf.mu, .ishrinkage = ishrinkage, .digt = digt, .elocat = elocat, .Scale.arg = Scale.arg, .llocat = llocat, .kappa = kappa, .ilocat = ilocat ))), linkinv = eval(substitute(function(eta, extra = NULL) { if ( .fittedMean ) { kappamat <- matrix(extra$kappa, extra$n, extra$M, byrow = TRUE) locat <- eta2theta(eta, .llocat , earg = .elocat ) Scale <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) locat + Scale * (1/kappamat - kappamat) } else { locat <- eta2theta(eta, .llocat , earg = .elocat ) if (length(locat) > extra$n) dimnames(locat) <- list(dimnames(eta)[[1]], extra$y.names) locat } }, list( .elocat = elocat, .llocat = llocat, .fittedMean = fittedMean, .Scale.arg = Scale.arg, .kappa = kappa ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$M1 <- M1 misc$multipleResponses <- TRUE tmp34 <- c(rep_len( .llocat , M)) names(tmp34) <- mynames1 misc$link <- tmp34 # Already named misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) { misc$earg[[ii]] <- .elocat } misc$expected <- TRUE extra$kappa <- misc$kappa <- .kappa extra$tau <- misc$tau <- misc$kappa^2 / (1 + misc$kappa^2) misc$true.mu <- .fittedMean # @fitted is not a true mu? 
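# Below, the sample proportion of observations at or below each fitted
# quantile is stored, so the attained tau can be checked afterwards.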
extra$percentile <- numeric(M) locat <- as.matrix(locat) for (ii in 1:M) { y.use <- if (ncoly > 1) y[, ii] else y extra$percentile[ii] <- 100 * weighted.mean(y.use <= locat[, ii], w[, min(ii, ncol(w))]) } names(extra$percentile) <- y.names extra$Scale.arg <- .Scale.arg }), list( .elocat = elocat, .llocat = llocat, .Scale.arg = Scale.arg, .fittedMean = fittedMean, .kappa = kappa ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { ymat <- matrix(y, extra$n, extra$M) locat <- eta2theta(eta, .llocat , earg = .elocat ) kappamat <- matrix(extra$kappa, extra$n, extra$M, byrow = TRUE) Scale <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dalap(x = c(ymat), locat = c(locat), scale = c(Scale), kappa = c(kappamat), log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .elocat = elocat, .llocat = llocat, .Scale.arg = Scale.arg, .kappa = kappa ))), vfamily = c("alaplace1"), validparams = eval(substitute(function(eta, y, extra = NULL) { locat <- eta2theta(eta, .llocat , earg = .elocat ) okay1 <- all(is.finite(locat)) okay1 }, list( .elocat = elocat, .llocat = llocat, .Scale.arg = Scale.arg, .kappa = kappa ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) extra <- object@extra locat <- eta2theta(eta, .llocat , .elocat ) Scale <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) kappamat <- matrix(extra$kappa, extra$n, extra$M, byrow = TRUE) ralap(nsim * length(Scale), location = c(locat), scale = c(Scale), kappa = c(kappamat)) }, list( .elocat = elocat, .llocat = llocat, .Scale.arg = Scale.arg, .kappa = kappa ))), deriv = eval(substitute(expression({ ymat <- matrix(y, n, M) Scale <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) locat <- eta2theta(eta, .llocat , earg = .elocat ) kappamat <- matrix(extra$kappa, n, M, byrow = TRUE) zedd <- abs(ymat-locat) / Scale dl.dlocat <- ifelse(ymat >= locat, kappamat, 1/kappamat) * sqrt(2) * sign(ymat - locat) / Scale dlocat.deta <- dtheta.deta(locat, .llocat , earg = .elocat ) c(w) * cbind(dl.dlocat * dlocat.deta) }), list( .Scale.arg = Scale.arg, .elocat = elocat, .llocat = llocat, .kappa = kappa ))), weight = eval(substitute(expression({ d2l.dlocat2 <- 2 / Scale^2 wz <- cbind(d2l.dlocat2 * dlocat.deta^2) c(w) * wz }), list( .Scale.arg = Scale.arg, .elocat = elocat, .llocat = llocat )))) } alaplace3.control <- function(maxit = 100, ...) 
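# Illustrative (untested) sketch of fitting alaplace1() with vglm() for
# quantile regression; the data frame 'adata' and covariate 'x2' are
# hypothetical:
#   set.seed(1)
#   adata <- data.frame(x2 = sort(runif(nn <- 500)))
#   adata <- transform(adata, y = 1 + 2 * x2 + rnorm(nn, sd = 0.5))
#   fit <- vglm(y ~ x2, alaplace1(tau = c(0.25, 0.50, 0.75)),
#               data = adata, trace = TRUE)
#   head(fitted(fit))     # one column of fitted quantiles per tau
#   fit@extra$percentile  # sample % of y below each fitted quantile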
{ list(maxit = maxit) } alaplace3 <- function(llocation = "identitylink", lscale = "loge", lkappa = "loge", ilocation = NULL, iscale = NULL, ikappa = 1.0, imethod = 1, zero = c("scale", "kappa")) { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lkappa <- as.list(substitute(lkappa)) ekappa <- link2list(lkappa) lkappa <- attr(ekappa, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") new("vglmff", blurb = c("Three-parameter asymmetric Laplace distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), ", ", namesof("kappa", lkappa, earg = ekappa), "\n", "\n", "Mean: location + scale * (1/kappa - kappa) / sqrt(2)", "\n", "Variance: Scale^2 * (1 + kappa^4) / (2 * kappa^2)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 3, Q1 = 1, multipleResponses = FALSE, parameters.names = c("location", "scale", "kappa"), summary.pvalues = FALSE, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("location", .llocat , earg = .elocat, tag = FALSE), namesof("scale", .lscale , earg = .escale, tag = FALSE), namesof("kappa", .lkappa , earg = .ekappa, tag = FALSE)) if (!length(etastart)) { kappa.init <- if (length( .ikappa )) rep_len( .ikappa , n) else rep_len( 1.0 , n) if ( .imethod == 1) { locat.init <- median(y) scale.init <- sqrt(var(y) / 2) } else { locat.init <- y scale.init <- sqrt(sum(c(w)*abs(y-median(y ))) / (sum(w) *2)) } locat.init <- if (length( .ilocat )) rep_len( .ilocat , n) else rep_len(locat.init, n) scale.init <- if (length( .iscale )) rep_len( .iscale , n) else rep_len(scale.init, n) etastart <- cbind(theta2eta(locat.init, .llocat , earg = .elocat ), theta2eta(scale.init, .lscale , earg = .escale ), theta2eta(kappa.init, .lkappa, earg = .ekappa)) } }), list( .imethod = imethod, .elocat = elocat, .escale = escale, .ekappa = ekappa, .llocat = llocat, .lscale = lscale, .lkappa = lkappa, .ilocat = ilocat, .iscale = iscale, .ikappa = ikappa ))), linkinv = eval(substitute(function(eta, extra = NULL) { locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) kappa <- eta2theta(eta[, 3], .lkappa, earg = .ekappa) locat + Scale * (1/kappa - kappa) / sqrt(2) }, list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .ekappa = ekappa, .lkappa = lkappa ))), last = eval(substitute(expression({ misc$link <- c(location = .llocat , scale = .lscale , kappa = .lkappa ) misc$earg <- list(location = .elocat, scale = .escale, kappa = .ekappa ) misc$expected = TRUE }), list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .ekappa = ekappa, .lkappa = lkappa ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2], .lscale , earg = 
.escale ) kappa <- eta2theta(eta[, 3], .lkappa , earg = .ekappa ) # a matrix if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dalap(x = y, locat = locat, scale = Scale, kappa = kappa, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .ekappa = ekappa, .lkappa = lkappa ))), vfamily = c("alaplace3"), validparams = eval(substitute(function(eta, y, extra = NULL) { locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) kappa <- eta2theta(eta[, 3], .lkappa , earg = .ekappa ) okay1 <- all(is.finite(locat)) && all(is.finite(Scale)) && all(0 < Scale) && all(is.finite(kappa)) && all(0 < kappa) okay1 }, list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .ekappa = ekappa, .lkappa = lkappa ))), deriv = eval(substitute(expression({ locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) kappa <- eta2theta(eta[, 3], .lkappa , earg = .ekappa ) zedd <- abs(y - locat) / Scale dl.dlocat <- sqrt(2) * ifelse(y >= locat, kappa, 1/kappa) * sign(y-locat) / Scale dl.dscale <- sqrt(2) * ifelse(y >= locat, kappa, 1/kappa) * zedd / Scale - 1 / Scale dl.dkappa <- 1 / kappa - 2 * kappa / (1+kappa^2) - (sqrt(2) / Scale) * ifelse(y > locat, 1, -1/kappa^2) * abs(y-locat) dlocat.deta <- dtheta.deta(locat, .llocat , earg = .elocat ) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dkappa.deta <- dtheta.deta(kappa, .lkappa, earg = .ekappa) c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta, dl.dkappa * dkappa.deta) }), list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat, .ekappa = ekappa, .lkappa = lkappa ))), weight = eval(substitute(expression({ d2l.dlocat2 <- 2 / Scale^2 d2l.dscale2 <- 1 / Scale^2 d2l.dkappa2 <- 1 / kappa^2 + 4 / (1+kappa^2)^2 d2l.dkappadloc <- -sqrt(8) / ((1+kappa^2) * Scale) d2l.dkappadscale <- -(1-kappa^2) / ((1+kappa^2) * kappa * Scale) wz <- matrix(0, nrow = n, dimm(M)) wz[,iam(1, 1, M)] <- d2l.dlocat2 * dlocat.deta^2 wz[,iam(2, 2, M)] <- d2l.dscale2 * dscale.deta^2 wz[,iam(3, 3, M)] <- d2l.dkappa2 * dkappa.deta^2 wz[,iam(1, 3, M)] <- d2l.dkappadloc * dkappa.deta * dlocat.deta wz[,iam(2, 3, M)] <- d2l.dkappadscale * dkappa.deta * dscale.deta c(w) * wz }), list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat )))) } dlaplace <- function(x, location = 0, scale = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) logdensity <- (-abs(x-location)/scale) - log(2*scale) if (log.arg) logdensity else exp(logdensity) } plaplace <- function(q, location = 0, scale = 1, lower.tail = TRUE, log.p =FALSE) { zedd <- (q - location) / scale if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") L <- max(length(q), length(location), length(scale)) if (length(q) != L) q <- rep_len(q, L) if (length(location) != L) location <- rep_len(location, L) if (length(scale) != L) scale <- rep_len(scale, L) if (lower.tail) { if (log.p) { ans <- ifelse(q < location, log(0.5) + zedd, log1p(- 0.5 * exp(-zedd))) } else { ans <- ifelse(q < location, 0.5 * exp(zedd), 1 - 0.5 * exp(-zedd)) } } else { if (log.p) { ans <- ifelse(q < location, log1p(- 0.5 * exp(zedd)), log(0.5) - zedd) } else { ans <- 
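# Upper tail, non-log case next: by the reflection symmetry of the
# Laplace cdf, S(q) = 1 - 0.5 * exp(zedd) for q < location, and
# S(q) = 0.5 * exp(-zedd) otherwise, with zedd = (q - location)/scale.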
ifelse(q < location, 1 - 0.5 * exp(zedd), 0.5 * exp(-zedd)) } } ans[scale <= 0] <- NaN ans } qlaplace <- function(p, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") L <- max(length(p), length(location), length(scale)) if (length(p) != L) p <- rep_len(p, L) if (length(location) != L) location <- rep_len(location, L) if (length(scale) != L) scale <- rep_len(scale, L) if (lower.tail) { if (log.p) { ln.p <- p ans <- location - sign(exp(ln.p)-0.5) * scale * log(2 * ifelse(exp(ln.p) < 0.5, exp(ln.p), -expm1(ln.p))) } else { ans <- location - sign(p-0.5) * scale * log(2 * ifelse(p < 0.5, p, 1-p)) } } else { if (log.p) { ln.p <- p ans <- location - sign(0.5 - exp(ln.p)) * scale * log(2 * ifelse(-expm1(ln.p) < 0.5, -expm1(ln.p), exp(ln.p))) # ans[ln.p > 0] <- NaN } else { ans <- location - sign(0.5 - p) * scale * log(2 * ifelse(p > 0.5, 1 - p, p)) } } ans[scale <= 0] <- NaN ans } rlaplace <- function(n, location = 0, scale = 1) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (!is.Numeric(scale, positive = TRUE)) stop("'scale' must be positive") location <- rep_len(location, use.n) scale <- rep_len(scale, use.n) rrrr <- runif(use.n) location - sign(rrrr - 0.5) * scale * (log(2) + ifelse(rrrr < 0.5, log(rrrr), log1p(-rrrr))) } laplace <- function(llocation = "identitylink", lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = "scale") { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") new("vglmff", blurb = c("Two-parameter Laplace distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), "\n", "\n", "Mean: location", "\n", "Variance: 2*scale^2"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
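# rlaplace() above samples by cdf inversion: for u ~ U(0, 1) it returns
# location - sign(u - 0.5) * scale * log(2 * min(u, 1 - u)).
# Illustrative check (hypothetical values):
#   set.seed(2); ys <- rlaplace(1e5, location = 2, scale = 3)
#   mean(ys)  # approx 2
#   var(ys)   # approx 2 * 3^2 = 18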
{ list(M1 = 2, Q1 = 1, multipleResponses = FALSE, parameters.names = c("location", "scale"), summary.pvalues = FALSE, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("location", .llocat , earg = .elocat, tag = FALSE), namesof("scale", .lscale , earg = .escale, tag = FALSE)) if (!length(etastart)) { if ( .imethod == 1) { locat.init <- median(y) scale.init <- sqrt(var(y) / 2) } else if ( .imethod == 2) { locat.init <- weighted.mean(y, w) scale.init <- sqrt(var(y) / 2) } else { locat.init <- median(y) scale.init <- sqrt(sum(c(w)*abs(y-median(y ))) / (sum(w) *2)) } locat.init <- if (length( .ilocat )) rep_len( .ilocat , n) else rep_len(locat.init, n) scale.init <- if (length( .iscale )) rep_len( .iscale , n) else rep_len(scale.init, n) etastart <- cbind(theta2eta(locat.init, .llocat , earg = .elocat ), theta2eta(scale.init, .lscale , earg = .escale )) } }), list( .imethod = imethod, .elocat = elocat, .escale = escale, .llocat = llocat, .lscale = lscale, .ilocat = ilocat, .iscale = iscale ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .llocat , earg = .elocat ) }, list( .elocat = elocat, .llocat = llocat ))), last = eval(substitute(expression({ misc$link <- c(location = .llocat , scale = .lscale ) misc$earg <- list(location = .elocat , scale = .escale ) misc$expected <- TRUE misc$RegCondOK <- FALSE # Save this for later }), list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dlaplace(x = y, locat = locat, scale = Scale, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat ))), vfamily = c("laplace"), validparams = eval(substitute(function(eta, y, extra = NULL) { Locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) okay1 <- all(is.finite(Locat)) && all(is.finite(Scale)) && all(0 < Scale) okay1 }, list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat ))), deriv = eval(substitute(expression({ Locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) zedd <- abs(y-Locat) / Scale dl.dLocat <- sign(y - Locat) / Scale dl.dscale <- zedd / Scale - 1 / Scale dLocat.deta <- dtheta.deta(Locat, .llocat , earg = .elocat ) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) c(w) * cbind(dl.dLocat * dLocat.deta, dl.dscale * dscale.deta) }), list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat ))), weight = eval(substitute(expression({ d2l.dLocat2 <- d2l.dscale2 <- 1 / Scale^2 wz <- matrix(0, nrow = n, ncol = M) # diagonal wz[,iam(1, 1, M)] <- d2l.dLocat2 * dLocat.deta^2 wz[,iam(2, 2, M)] <- d2l.dscale2 * dscale.deta^2 c(w) * wz }), list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat )))) } fff.control <- function(save.weights = TRUE, ...) 
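# Illustrative (untested) sketch of the two-parameter laplace() family;
# 'ldata' is hypothetical:
#   set.seed(3)
#   ldata <- data.frame(y = rlaplace(100, location = 1, scale = 2))
#   fit <- vglm(y ~ 1, laplace, data = ldata, trace = TRUE)
#   Coef(fit)  # estimates of location and scale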
{ list(save.weights = save.weights) } fff <- function(link = "loge", idf1 = NULL, idf2 = NULL, nsimEIM = 100, # ncp = 0, imethod = 1, zero = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE) || nsimEIM <= 10) stop("argument 'nsimEIM' should be an integer greater than 10") ncp <- 0 if (any(ncp != 0)) warning("not sure about ncp != 0 wrt dl/dtheta") new("vglmff", blurb = c("F-distribution\n\n", "Links: ", namesof("df1", link, earg = earg), ", ", namesof("df2", link, earg = earg), "\n", "\n", "Mean: df2/(df2-2) provided df2>2 and ncp = 0", "\n", "Variance: ", "2*df2^2*(df1+df2-2)/(df1*(df2-2)^2*(df2-4)) ", "provided df2>4 and ncp = 0"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, multipleResponses = FALSE, parameters.names = c("df1", "df2"), zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("df1", .link , earg = .earg , tag = FALSE), namesof("df2", .link , earg = .earg , tag = FALSE)) if (!length(etastart)) { if ( .imethod == 1) { df2.init <- b <- 2*mean(y) / (mean(y)-1) df1.init <- 2*b^2*(b-2)/(var(y)*(b-2)^2 * (b-4) - 2*b^2) if (df2.init < 4) df2.init <- 5 if (df1.init < 2) df1.init <- 3 } else { df2.init <- b <- 2*median(y) / (median(y)-1) summy <- summary(y) var.est <- summy[5] - summy[2] df1.init <- 2*b^2*(b-2)/(var.est*(b-2)^2 * (b-4) - 2*b^2) } df1.init <- if (length( .idf1 )) rep_len( .idf1 , n) else rep_len(df1.init, n) df2.init <- if (length( .idf2 )) rep_len( .idf2 , n) else rep_len(1, n) etastart <- cbind(theta2eta(df1.init, .link , earg = .earg ), theta2eta(df2.init, .link , earg = .earg )) } }), list( .imethod = imethod, .idf1 = idf1, .earg = earg, .idf2 = idf2, .link = link ))), linkinv = eval(substitute(function(eta, extra = NULL) { df2 <- eta2theta(eta[, 2], .link , earg = .earg ) ans <- df2 * NA ans[df2 > 2] <- df2[df2 > 2] / (df2[df2 > 2] - 2) ans }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(df1 = .link , df2 = .link ) misc$earg <- list(df1 = .earg , df2 = .earg ) misc$nsimEIM <- .nsimEIM misc$ncp <- .ncp }), list( .link = link, .earg = earg, .ncp = ncp, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { df1 <- eta2theta(eta[, 1], .link , earg = .earg ) df2 <- eta2theta(eta[, 2], .link , earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * df(x = y, df1 = df1, df2 = df2, ncp = .ncp , log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg, .ncp = ncp ))), vfamily = c("fff"), validparams = eval(substitute(function(eta, y, extra = NULL) { df1 <- eta2theta(eta[, 1], .link , earg = .earg ) df2 <- eta2theta(eta[, 2], .link , earg = .earg ) okay1 <- all(is.finite(df1)) && all(0 < df1) && all(is.finite(df2)) && all(0 < df2) okay1 }, list( .link = link, .earg = earg, .ncp = ncp ))), deriv = eval(substitute(expression({ df1 <- eta2theta(eta[, 1], .link , earg = .earg ) df2 <- eta2theta(eta[, 2], 
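# The imethod = 1 initialization above inverts the F moments: the mean
# m = df2/(df2 - 2) gives df2 = 2*m/(m - 1), and df1 then solves
# var(y) = 2*df2^2*(df1 + df2 - 2) / (df1*(df2 - 2)^2*(df2 - 4)).
# Illustrative arithmetic: m = 1.5 gives df2 = 6; with var(y) = 2 the
# df1 formula yields 288 / (64 - 72) < 0, which is why the guards
# above then push df1.init and df2.init back into the valid region.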
.link , earg = .earg ) dl.ddf1 <- 0.5*digamma(0.5*(df1+df2)) + 0.5 + 0.5*log(df1/df2) + 0.5*log(y) - 0.5*digamma(0.5*df1) - 0.5*(df1+df2)*(y/df2) / (1 + df1*y/df2) - 0.5*log1p(df1*y/df2) dl.ddf2 <- 0.5*digamma(0.5*(df1+df2)) - 0.5*df1/df2 - 0.5*digamma(0.5*df2) - 0.5*(df1+df2) * (-df1*y/df2^2) / (1 + df1*y/df2) - 0.5*log1p(df1*y/df2) ddf1.deta <- dtheta.deta(df1, .link , earg = .earg ) ddf2.deta <- dtheta.deta(df2, .link , earg = .earg ) dthetas.detas <- cbind(ddf1.deta, ddf2.deta) c(w) * dthetas.detas * cbind(dl.ddf1, dl.ddf2) }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) for (ii in 1:( .nsimEIM )) { ysim <- rf(n = n, df1=df1, df2=df2) dl.ddf1 <- 0.5*digamma(0.5*(df1+df2)) + 0.5 + 0.5*log(df1/df2) + 0.5*log(ysim) - 0.5*digamma(0.5*df1) - 0.5*(df1+df2)*(ysim/df2) / (1 + df1*ysim/df2) - 0.5*log1p(df1*ysim/df2) dl.ddf2 <- 0.5*digamma(0.5*(df1+df2)) - 0.5*df1/df2 - 0.5*digamma(0.5*df2) - 0.5*(df1+df2) * (-df1*ysim/df2^2)/(1 + df1*ysim/df2) - 0.5*log1p(df1*ysim/df2) rm(ysim) temp3 <- cbind(dl.ddf1, dl.ddf2) run.varcov <- ((ii-1) * run.varcov + temp3[,ind1$row.index]*temp3[,ind1$col.index]) / ii } wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- c(w) * wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] wz }), list( .link = link, .earg = earg, .nsimEIM = nsimEIM, .ncp = ncp )))) } hyperg <- function(N = NULL, D = NULL, lprob = "logit", iprob = NULL) { inputN <- is.Numeric(N, positive = TRUE) inputD <- is.Numeric(D, positive = TRUE) if (inputD && inputN) stop("only one of 'N' and 'D' is to be inputted") if (!inputD && !inputN) stop("one of 'N' and 'D' needs to be inputted") lprob <- as.list(substitute(lprob)) earg <- link2list(lprob) lprob <- attr(earg, "function.name") new("vglmff", blurb = c("Hypergeometric distribution\n\n", "Link: ", namesof("prob", lprob, earg = earg), "\n", "Mean: D/N\n"), initialize = eval(substitute(expression({ NCOL <- function (x) if (is.array(x) && length(dim(x)) > 1 || is.data.frame(x)) ncol(x) else as.integer(1) if (NCOL(y) == 1) { if (is.factor(y)) y <- y != levels(y)[1] nn <- rep_len(1, n) if (!all(y >= 0 & y <= 1)) stop("response values must be in [0, 1]") mustart <- (0.5 + w * y) / (1 + w) no.successes <- w * y if (any(abs(no.successes - round(no.successes)) > 0.001)) stop("Number of successes must be integer-valued") } else if (NCOL(y) == 2) { if (any(abs(y - round(y)) > 0.001)) stop("Count data must be integer-valued") nn <- y[, 1] + y[, 2] y <- ifelse(nn > 0, y[, 1]/nn, 0) w <- w * nn mustart <- (0.5 + nn * y) / (1 + nn) mustart[mustart >= 1] <- 0.95 } else stop("Response not of the right form") predictors.names <- namesof("prob", .lprob , earg = .earg , tag = FALSE) extra$Nvector <- .N extra$Dvector <- .D extra$Nunknown <- length(extra$Nvector) == 0 if (!length(etastart)) { init.prob <- if (length( .iprob)) rep_len( .iprob, n) else mustart etastart <- matrix(init.prob, n, NCOL(y)) } }), list( .lprob = lprob, .earg = earg, .N = N, .D = D, .iprob = iprob ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta, .lprob , earg = .earg ) }, list( .lprob = lprob, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c("prob" = .lprob) misc$earg <- list("prob" = .earg ) misc$Dvector <- .D misc$Nvector <- .N }), list( .N = N, .D = D, .lprob = lprob, .earg = earg ))), linkfun = eval(substitute(function(mu, extra = NULL) { theta2eta(mu, .lprob, earg = .earg ) }, list( 
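# Illustrative (untested) sketch of hyperg(); exactly one of 'N' (the
# population size) or 'D' (the number of successes in the population)
# must be supplied, and the other is estimated through the 'prob' = D/N
# parameter.  Hypothetical two-column (successes, failures) response:
#   hdata <- data.frame(white = c(5, 6, 7), black = c(3, 4, 5))
#   fit <- vglm(cbind(white, black) ~ 1, hyperg(N = 20), data = hdata)
#   Coef(fit)  # estimated prob, so D-hat = 20 * Coef(fit)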
.lprob = lprob, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { N <- extra$Nvector Dvec <- extra$Dvector prob <- mu yvec <- w * y if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- if (extra$Nunknown) { tmp12 <- Dvec * (1-prob) / prob (lgamma(1+tmp12) + lgamma(1+Dvec/prob-w) - lgamma(1+tmp12-w+yvec) - lgamma(1+Dvec/prob)) } else { (lgamma(1+N*prob) + lgamma(1+N*(1-prob)) - lgamma(1+N*prob-yvec) - lgamma(1+N*(1-prob) -w + yvec)) } if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lprob = lprob, .earg = earg ))), vfamily = c("hyperg"), validparams = eval(substitute(function(eta, y, extra = NULL) { prob <- eta2theta(eta, .lprob , earg = .earg ) okay1 <- all(is.finite(prob)) && all(0 < prob & prob < 1) okay1 }, list( .lprob = lprob, .earg = earg ))), deriv = eval(substitute(expression({ prob <- mu # equivalently, eta2theta(eta, .lprob, earg = .earg ) dprob.deta <- dtheta.deta(prob, .lprob, earg = .earg ) Dvec <- extra$Dvector Nvec <- extra$Nvector yvec <- w * y if (extra$Nunknown) { tmp72 <- -Dvec / prob^2 tmp12 <- Dvec * (1-prob) / prob dl.dprob <- tmp72 * (digamma(1 + tmp12) + digamma(1 + Dvec/prob -w) - digamma(1 + tmp12-w+yvec) - digamma(1 + Dvec/prob)) } else { dl.dprob <- Nvec * (digamma(1+Nvec*prob) - digamma(1+Nvec*(1-prob)) - digamma(1+Nvec*prob-yvec) + digamma(1+Nvec*(1-prob)-w+yvec)) } c(w) * dl.dprob * dprob.deta }), list( .lprob = lprob, .earg = earg ))), weight = eval(substitute(expression({ if (extra$Nunknown) { tmp722 <- tmp72^2 tmp13 <- 2*Dvec / prob^3 d2l.dprob2 <- tmp722 * (trigamma(1 + tmp12) + trigamma(1 + Dvec/prob - w) - trigamma(1 + tmp12 - w + yvec) - trigamma(1 + Dvec/prob)) + tmp13 * (digamma(1 + tmp12) + digamma(1 + Dvec/prob - w) - digamma(1 + tmp12 - w + yvec) - digamma(1 + Dvec/prob)) } else { d2l.dprob2 <- Nvec^2 * (trigamma(1+Nvec*prob) + trigamma(1+Nvec*(1-prob)) - trigamma(1+Nvec*prob-yvec) - trigamma(1+Nvec*(1-prob)-w+yvec)) } d2prob.deta2 <- d2theta.deta2(prob, .lprob , earg = .earg ) wz <- -(dprob.deta^2) * d2l.dprob2 wz <- c(w) * wz wz[wz < .Machine$double.eps] <- .Machine$double.eps wz }), list( .lprob = lprob, .earg = earg )))) } dbenini <- function(x, y0, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(shape), length(y0)) if (length(x) != N) x <- rep_len(x, N) if (length(shape) != N) shape <- rep_len(shape, N) if (length(y0) != N) y0 <- rep_len(y0, N) logdensity <- rep_len(log(0), N) xok <- (x > y0) tempxok <- log(x[xok]/y0[xok]) logdensity[xok] <- log(2*shape[xok]) - shape[xok] * tempxok^2 + log(tempxok) - log(x[xok]) logdensity[is.infinite(x)] <- log(0) # 20141209 KaiH if (log.arg) logdensity else exp(logdensity) } pbenini <- function(q, y0, shape, lower.tail = TRUE, log.p = FALSE) { if (!is.Numeric(q)) stop("bad input for argument 'q'") if (!is.Numeric(shape, positive = TRUE)) stop("bad input for argument 'shape'") if (!is.Numeric(y0, positive = TRUE)) stop("bad input for argument 'y0'") if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") N <- max(length(q), length(shape), length(y0)) if (length(q) != N) q <- rep_len(q, N) if (length(shape) != N) shape <- rep_len(shape, N) if (length(y0) != N) y0 <- rep_len(y0, N) ans <- y0 * 0 ok <- q > y0 if (lower.tail) { if (log.p) { ans[ok] <- 
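# Benini cdf on its support (q > y0): F(q) = 1 - exp(-shape*(log(q/y0))^2).
# On the log scale this is computed stably via log(-expm1(.)) next: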
log(-expm1(-shape[ok] * (log(q[ok]/y0[ok]))^2)) ans[q <= y0 ] <- -Inf } else { ans[ok] <- -expm1(-shape[ok] * (log(q[ok]/y0[ok]))^2) } } else { if (log.p) { ans[ok] <- -shape[ok] * (log(q[ok]/y0[ok]))^2 ans[q <= y0] <- 0 } else { ans[ok] <- exp(-shape[ok] * (log(q[ok]/y0[ok]))^2) ans[q <= y0] <- 1 } } ans } qbenini <- function(p, y0, shape, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ln.p <- p ans <- y0 * exp(sqrt(-log(-expm1(ln.p)) / shape)) } else { ans <- y0 * exp(sqrt(-log1p(-p) / shape)) } } else { if (log.p) { ln.p <- p ans <- y0 * exp(sqrt(-ln.p / shape)) } else { ans <- y0 * exp(sqrt(-log(p) / shape)) } } ans[y0 <= 0] <- NaN ans } rbenini <- function(n, y0, shape) { y0 * exp(sqrt(-log(runif(n)) / shape)) } benini1 <- function(y0 = stop("argument 'y0' must be specified"), lshape = "loge", ishape = NULL, imethod = 1, zero = NULL) { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (!is.Numeric(y0, positive = TRUE)) stop("bad input for argument 'y0'") new("vglmff", blurb = c("1-parameter Benini distribution\n\n", "Link: ", namesof("shape", lshape, earg = eshape), "\n", "\n", "Median: qbenini(p = 0.5, y0, shape)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, parameters.names = c("shape"), lshape = .lshape , eshape = .eshape ) }, list( .eshape = eshape, .lshape = lshape))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 1 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly mynames1 <- paste("shape", if (ncoly > 1) 1:ncoly else "", sep = "") predictors.names <- namesof(mynames1, .lshape , earg = .eshape , tag = FALSE) extra$y0 <- matrix( .y0 , n, ncoly, byrow = TRUE) if (any(y <= extra$y0)) stop("some values of the response are > argument 'y0' values") if (!length(etastart)) { probs.y <- (1:3) / 4 qofy <- quantile(rep(y, times = w), probs = probs.y) if ( .imethod == 1) { shape.init <- mean(-log1p(-probs.y) / (log(qofy))^2) } else { shape.init <- median(-log1p(-probs.y) / (log(qofy))^2) } shape.init <- matrix(if (length( .ishape )) .ishape else shape.init, n, ncoly, byrow = TRUE) etastart <- cbind(theta2eta(shape.init, .lshape , earg = .eshape )) } }), list( .imethod = imethod, .ishape = ishape, .lshape = lshape, .eshape = eshape, .y0 = y0 ))), linkinv = eval(substitute(function(eta, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) qbenini(p = 0.5, y0 = extra$y0, shape) }, list( .lshape = lshape, .eshape = eshape ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .lshape , ncoly)) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for (ii in 1:ncoly) { misc$earg[[ii]] <- .eshape } misc$M1 <- M1 misc$expected <- TRUE misc$multipleResponses <- TRUE extra$y0 <- .y0 }), list( .lshape = lshape, .eshape = eshape, .y0 = y0 ))), loglikelihood = eval(substitute( 
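# The shape initialization above comes from inverting the Benini cdf at
# the sample quartiles: p = 1 - exp(-shape*(log(q/y0))^2) rearranges to
# shape = -log1p(-p) / (log(q/y0))^2; the code uses log(qofy) without
# the '/y0', a cruder value that only needs to be a rough start.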
function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shape <- eta2theta(eta, .lshape , earg = .eshape ) y0 <- extra$y0 if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dbenini(x = y, y0 = y0, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape = lshape, .eshape = eshape ))), vfamily = c("benini1"), validparams = eval(substitute(function(eta, y, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) okay1 <- all(is.finite(shape)) && all(0 < shape) okay1 }, list( .lshape = lshape, .eshape = eshape ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) extra <- object@extra shape <- eta2theta(eta, .lshape , earg = .eshape ) y0 <- extra$y0 rbenini(nsim * length(shape), y0 = y0, shape = shape) }, list( .lshape = lshape, .eshape = eshape ))), deriv = eval(substitute(expression({ shape <- eta2theta(eta, .lshape , earg = .eshape ) y0 <- extra$y0 dl.dshape <- 1/shape - (log(y/y0))^2 dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) c(w) * dl.dshape * dshape.deta }), list( .lshape = lshape, .eshape = eshape ))), weight = eval(substitute(expression({ ned2l.dshape2 <- 1 / shape^2 wz <- ned2l.dshape2 * dshape.deta^2 c(w) * wz }), list( .lshape = lshape, .eshape = eshape )))) } dpolono <- function (x, meanlog = 0, sdlog = 1, bigx = 170, ...) { mapply(function(x, meanlog, sdlog, ...) { if (abs(x) > floor(x)) { # zero prob for -ve or non-integer 0 } else if (x == Inf) { # 20141215 KaiH 0 } else if (x > bigx) { z <- (log(x) - meanlog) / sdlog (1 + (z^2 + log(x) - meanlog - 1) / (2 * x * sdlog^2)) * exp(-0.5 * z^2) / (sqrt(2 * pi) * sdlog * x) } else integrate( function(t) exp(t * x - exp(t) - 0.5 * ((t - meanlog) / sdlog)^2), lower = -Inf, upper = Inf, ...)$value / (sqrt(2 * pi) * sdlog * exp(lgamma(x + 1.0))) }, x, meanlog, sdlog, ...) } ppolono <- function(q, meanlog = 0, sdlog = 1, isOne = 1 - sqrt( .Machine$double.eps ), ...) { .cumprob <- rep_len(0, length(q)) .cumprob[q == Inf] <- 1 # special case q <- floor(q) ii <- -1 while (any(xActive <- ((.cumprob < isOne) & (q > ii)))) .cumprob[xActive] <- .cumprob[xActive] + dpolono(ii <- (ii+1), meanlog, sdlog, ...) 
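# The while() loop above accumulates P(Y = 0), P(Y = 1), ... termwise,
# stopping once every requested q has been passed or the running total
# reaches 'isOne' numerically; the accumulated probabilities are then
# returned: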
.cumprob
}


rpolono <- function(n, meanlog = 0, sdlog = 1) {
  lambda <- rlnorm(n = n, meanlog = meanlog, sdlog = sdlog)
  rpois(n = n, lambda = lambda)
}


dtriangle <- function(x, theta, lower = 0, upper = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  N <- max(length(x), length(theta), length(lower), length(upper))
  if (length(x)     != N) x     <- rep_len(x,     N)
  if (length(theta) != N) theta <- rep_len(theta, N)
  if (length(lower) != N) lower <- rep_len(lower, N)
  if (length(upper) != N) upper <- rep_len(upper, N)

  denom1 <- ((upper - lower) * (theta - lower))
  denom2 <- ((upper - lower) * (upper - theta))
  logdensity <- rep_len(log(0), N)
  xok.neg <- (lower <  x) & (x <= theta)
  xok.pos <- (theta <= x) & (x <  upper)
  logdensity[xok.neg] <-
    log(2 * (x[xok.neg] - lower[xok.neg]) / denom1[xok.neg])
  logdensity[xok.pos] <-
    log(2 * (upper[xok.pos] - x[xok.pos]) / denom2[xok.pos])
  logdensity[lower >= upper] <- NaN
  logdensity[lower >  theta] <- NaN
  logdensity[upper <  theta] <- NaN
  if (log.arg) logdensity else exp(logdensity)
}


rtriangle <- function(n, theta, lower = 0, upper = 1) {
  use.n <- if ((length.n <- length(n)) > 1) length.n else
           if (!is.Numeric(n, integer.valued = TRUE,
                           length.arg = 1, positive = TRUE))
             stop("bad input for argument 'n'") else n

  if (!is.Numeric(theta)) stop("bad input for argument 'theta'")
  if (!is.Numeric(lower)) stop("bad input for argument 'lower'")
  if (!is.Numeric(upper)) stop("bad input for argument 'upper'")
  if (!all(lower < theta & theta < upper))
    stop("lower < theta < upper values are required")

  N <- use.n
  lower <- rep_len(lower, N)
  upper <- rep_len(upper, N)
  theta <- rep_len(theta, N)
  t1 <- sqrt(runif(n))
  t2 <- sqrt(runif(n))
  ifelse(runif(n) < (theta - lower) / (upper - lower),
         lower + (theta - lower) * t1,
         upper - (upper - theta) * t2)
}


qtriangle <- function(p, theta, lower = 0, upper = 1,
                      lower.tail = TRUE, log.p = FALSE) {
  if (!is.logical(lower.tail) || length(lower.tail) != 1)
    stop("bad input for argument 'lower.tail'")
  if (!is.logical(log.p) || length(log.p) != 1)
    stop("bad input for argument 'log.p'")

  N <- max(length(p), length(theta), length(lower), length(upper))
  if (length(p)     != N) p     <- rep_len(p,     N)
  if (length(theta) != N) theta <- rep_len(theta, N)
  if (length(lower) != N) lower <- rep_len(lower, N)
  if (length(upper) != N) upper <- rep_len(upper, N)

  ans <- NA_real_ * p
  if (lower.tail) {
    if (log.p) {
      ln.p <- p
      Neg <- (exp(ln.p) <= (theta - lower) / (upper - lower))
      temp1 <- exp(ln.p) * (upper - lower) * (theta - lower)
      Pos <- (exp(ln.p) >= (theta - lower) / (upper - lower))
      pstar <- (exp(ln.p) - (theta - lower) / (upper - lower)) /
               ((upper - theta) / (upper - lower))
    } else {
      Neg <- (p <= (theta - lower) / (upper - lower))
      temp1 <- p * (upper - lower) * (theta - lower)
      Pos <- (p >= (theta - lower) / (upper - lower))
      pstar <- (p - (theta - lower) / (upper - lower)) /
               ((upper - theta) / (upper - lower))
    }
  } else {
    if (log.p) {
      ln.p <- p
      Neg <- (exp(ln.p) >= (upper - theta) / (upper - lower))
      temp1 <- -expm1(ln.p) * (upper - lower) * (theta - lower)
      Pos <- (exp(ln.p) <= (upper - theta) / (upper - lower))
      pstar <- (-expm1(ln.p) - (theta - lower) / (upper - lower)) /
               ((upper - theta) / (upper - lower))
    } else {
      Neg <- (p >= (upper - theta) / (upper - lower))
      temp1 <- (1 - p) * (upper - lower) * (theta - lower)
      Pos <- (p <= (upper - theta) / (upper - lower))
      pstar <- ((upper - theta) / (upper - lower) - p) /
               ((upper - theta) / (upper - lower))
    }
  }
  ans[ Neg] <- lower[ Neg] + sqrt(temp1[ Neg])
  if (any(Pos)) {
    qstar <- cbind(1 - sqrt(1-pstar), 1
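# Upper segment of the triangle: p* = q*(2 - q*) is solved for q*,
# giving roots 1 -/+ sqrt(1 - p*); both are formed and the root lying
# in [0, 1] is selected just below: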
+ sqrt(1-pstar)) qstar <- qstar[Pos,, drop = FALSE] qstar <- ifelse(qstar[, 1] >= 0 & qstar[, 1] <= 1, qstar[, 1], qstar[, 2]) ans[Pos] <- theta[Pos] + qstar * (upper - theta)[Pos] } ans[theta < lower | theta > upper] <- NaN ans } ptriangle <- function(q, theta, lower = 0, upper = 1, lower.tail = TRUE, log.p = FALSE) { N <- max(length(q), length(theta), length(lower), length(upper)) if (length(q) != N) q <- rep_len(q, N) if (length(theta) != N) theta <- rep_len(theta, N) if (length(lower) != N) lower <- rep_len(lower, N) if (length(upper) != N) upper <- rep_len(upper, N) if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") ans <- q * 0 qstar <- (q - lower)^2 / ((upper - lower) * (theta - lower)) Neg <- (lower <= q & q <= theta) ans[Neg] <- if (lower.tail) { if (log.p) { (log(qstar))[Neg] } else { qstar[Neg] } } else { if (log.p) { (log1p(-qstar))[Neg] } else { 1 - qstar[Neg] } } Pos <- (theta <= q & q <= upper) qstar <- (q - theta) / (upper-theta) if (lower.tail) { if (log.p) { ans[Pos] <- log(((theta-lower)/(upper-lower))[Pos] + (qstar * (2-qstar) * (upper-theta) / (upper - lower))[Pos]) ans[q <= lower] <- -Inf ans[q >= upper] <- 0 } else { ans[Pos] <- ((theta-lower)/(upper-lower))[Pos] + (qstar * (2-qstar) * (upper-theta) / (upper - lower))[Pos] ans[q <= lower] <- 0 ans[q >= upper] <- 1 } } else { if (log.p) { ans[Pos] <- log(((upper - theta)/(upper-lower))[Pos] + (qstar * (2-qstar) * (upper-theta) / (upper - lower))[Pos]) ans[q <= lower] <- 0 ans[q >= upper] <- -Inf } else { ans[Pos] <- ((upper - theta)/(upper-lower))[Pos] + (qstar * (2-qstar) * (upper-theta) / (upper - lower))[Pos] ans[q <= lower] <- 1 ans[q >= upper] <- 0 } } ans[theta < lower | theta > upper] <- NaN ans } triangle.control <- function(stepsize = 0.33, maxit = 100, ...) { list(stepsize = stepsize, maxit = maxit) } triangle <- function(lower = 0, upper = 1, link = extlogit(min = 0, max = 1), itheta = NULL) { if (!is.Numeric(lower)) stop("bad input for argument 'lower'") if (!is.Numeric(upper)) stop("bad input for argument 'upper'") if (!all(lower < upper)) stop("lower < upper values are required") if (length(itheta) && !is.Numeric(itheta)) stop("bad input for 'itheta'") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (length(earg$min) && any(earg$min != lower)) stop("argument 'lower' does not match the 'link'") if (length(earg$max) && any(earg$max != upper)) stop("argument 'upper' does not match the 'link'") new("vglmff", blurb = c("Triangle distribution\n\n", "Link: ", namesof("theta", link, earg = earg)), infos = eval(substitute(function(...) 
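# Illustrative (untested) sketch of triangle(); 'tdata' is hypothetical:
#   set.seed(4)
#   tdata <- data.frame(y = rtriangle(200, theta = 0.4))
#   fit <- vglm(y ~ 1, triangle(lower = 0, upper = 1), data = tdata)
#   Coef(fit)  # estimate of theta, the mode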
{ list(M1 = 1, Q1 = 1, parameters.names = c("theta"), link = .link ) }, list( .link = link ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) extra$lower <- rep_len( .lower , n) extra$upper <- rep_len( .upper , n) if (any(y <= extra$lower | y >= extra$upper)) stop("some y values in [lower,upper] detected") predictors.names <- namesof("theta", .link , earg = .earg , tag = FALSE) if (!length(etastart)) { Theta.init <- if (length( .itheta )) .itheta else { weighted.mean(y, w) } Theta.init <- rep_len(Theta.init, n) etastart <- theta2eta(Theta.init, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .itheta=itheta, .upper = upper, .lower = lower ))), linkinv = eval(substitute(function(eta, extra = NULL) { Theta <- eta2theta(eta, .link , earg = .earg ) lower <- extra$lower upper <- extra$upper mu1 <- (lower + upper + Theta) / 3 mu1 }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(theta = .link ) misc$earg <- list(theta = .earg ) misc$expected <- TRUE }), list( .link = link, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { Theta <- eta2theta(eta, .link , earg = .earg ) lower <- extra$lower upper <- extra$upper if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dtriangle(x = y, theta = Theta, lower = lower, upper = upper, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg ))), vfamily = c("triangle"), validparams = eval(substitute(function(eta, y, extra = NULL) { Theta <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(Theta)) && all(extra$lower < Theta & Theta < extra$upper) okay1 }, list( .link = link, .earg = earg ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) extra <- object@extra Theta <- eta2theta(eta, .link , earg = .earg ) lower <- extra$lower upper <- extra$upper rtriangle(nsim * length(Theta), theta = Theta, lower = lower, upper = upper) }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ Theta <- eta2theta(eta, .link , earg = .earg ) dTheta.deta <- dtheta.deta(Theta, .link , earg = .earg ) pos <- y > Theta neg <- y < Theta lower <- extra$lower upper <- extra$upper dl.dTheta <- 0 * y dl.dTheta[neg] <- -1 / (Theta[neg]-lower[neg]) dl.dTheta[pos] <- 1 / (upper[pos]-Theta[pos]) c(w) * dl.dTheta * dTheta.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ var.dl.dTheta <- 1 / ((Theta - lower) * (upper - Theta)) wz <- var.dl.dTheta * dTheta.deta^2 c(w) * wz }), list( .link = link, .earg = earg )))) } adjust0.loglaplace1 <- function(ymat, y, w, rep0) { rangey0 <- range(y[y > 0]) ymat[ymat <= 0] <- min(rangey0[1] / 2, rep0) ymat } loglaplace1.control <- function(maxit = 300, ...) 
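# adjust0.loglaplace1() above protects the log link: nonpositive
# responses are nudged up to min(rep0, half the smallest positive y)
# so that logs stay finite.  Illustrative check:
#   adjust0.loglaplace1(ymat = cbind(c(0, 3, 8)), y = c(0, 3, 8),
#                       w = rep(1, 3), rep0 = 0.5)  # the 0 becomes 0.5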
{ list(maxit = maxit) } loglaplace1 <- function(tau = NULL, llocation = "loge", ilocation = NULL, kappa = sqrt(tau/(1-tau)), Scale.arg = 1, ishrinkage = 0.95, parallel.locat = FALSE, digt = 4, idf.mu = 3, rep0 = 0.5, # 0.0001, minquantile = 0, maxquantile = Inf, imethod = 1, zero = NULL) { if (length(minquantile) != 1) stop("bad input for argument 'minquantile'") if (length(maxquantile) != 1) stop("bad input for argument 'maxquantile'") if (!is.Numeric(rep0, positive = TRUE, length.arg = 1) || rep0 > 1) stop("bad input for argument 'rep0'") if (!is.Numeric(kappa, positive = TRUE)) stop("bad input for argument 'kappa'") if (length(tau) && max(abs(kappa - sqrt(tau/(1-tau)))) > 1.0e-6) stop("arguments 'kappa' and 'tau' do not match") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation llocat.identity <- as.list(substitute("identitylink")) elocat.identity <- link2list(llocat.identity) llocat.identity <- attr(elocat.identity, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1, 2 or ... 4") if (!is.Numeric(ishrinkage, length.arg = 1) || ishrinkage < 0 || ishrinkage > 1) stop("bad input for argument 'ishrinkage'") if (!is.Numeric(Scale.arg, positive = TRUE)) stop("bad input for argument 'Scale.arg'") if (!is.logical(parallel.locat) || length(parallel.locat) != 1) stop("bad input for argument 'parallel.locat'") fittedMean <- FALSE if (!is.logical(fittedMean) || length(fittedMean) != 1) stop("bad input for argument 'fittedMean'") mystring0 <- namesof("location", llocat, earg = elocat) mychars <- substring(mystring0, first = 1:nchar(mystring0), last = 1:nchar(mystring0)) mychars[nchar(mystring0)] <- ", inverse = TRUE)" mystring1 <- paste(mychars, collapse = "") new("vglmff", blurb = c("One-parameter ", if (llocat == "loge") "log-Laplace" else c(llocat, "-Laplace"), " distribution\n\n", "Links: ", mystring0, "\n", "\n", "Quantiles: ", mystring1), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel.locat , constraints = constraints, apply.int = FALSE) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .parallel.locat = parallel.locat, .Scale.arg = Scale.arg, .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, parameters.names = c("location"), llocation = .llocat ) }, list( .llocat = llocat, .zero = zero ))), initialize = eval(substitute(expression({ extra$M <- M <- max(length( .Scale.arg ), length( .kappa )) extra$Scale <- rep_len( .Scale.arg , M) extra$kappa <- rep_len( .kappa , M) extra$tau <- extra$kappa^2 / (1 + extra$kappa^2) temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$n <- n extra$y.names <- y.names <- paste("tau = ", round(extra$tau, digits = .digt), sep = "") extra$individual <- FALSE predictors.names <- namesof(paste("quantile(", y.names, ")", sep = ""), .llocat , earg = .elocat , tag = FALSE) if (FALSE) { if (min(y) < 0) stop("negative response values detected") if ((prop.0. <- weighted.mean(1*(y == 0), w)) >= min(extra$tau)) stop("sample proportion of 0s == ", round(prop.0., digits = 4), " > minimum 'tau' value. 
Choose larger values for 'tau'.") if ( .rep0 == 0.5 && (ave.tau <- (weighted.mean(1*(y <= 0), w) + weighted.mean(1*(y <= 1), w))/2) >= min(extra$tau)) warning("the minimum 'tau' value should be greater than ", round(ave.tau, digits = 4)) } if (!length(etastart)) { if ( .imethod == 1) { locat.init <- quantile(rep(y, w), probs= extra$tau) + 1/16 } else if ( .imethod == 2) { locat.init <- weighted.mean(y, w) } else if ( .imethod == 3) { locat.init <- median(y) } else if ( .imethod == 4) { Fit5 <- vsmooth.spline(x = x[, min(ncol(x), 2)], y = y, w = w, df = .idf.mu ) locat.init <- c(predict(Fit5, x = x[, min(ncol(x), 2)])$y) } else { use.this <- weighted.mean(y, w) locat.init <- (1- .ishrinkage )*y + .ishrinkage * use.this } locat.init <- if (length( .ilocat )) rep_len( .ilocat , M) else rep_len(locat.init, M) locat.init <- matrix(locat.init, n, M, byrow = TRUE) if ( .llocat == "loge") locat.init <- abs(locat.init) etastart <- cbind(theta2eta(locat.init, .llocat , earg = .elocat )) } }), list( .imethod = imethod, .idf.mu = idf.mu, .rep0 = rep0, .ishrinkage = ishrinkage, .digt = digt, .elocat = elocat, .Scale.arg = Scale.arg, .llocat = llocat, .kappa = kappa, .ilocat = ilocat ))), linkinv = eval(substitute(function(eta, extra = NULL) { locat.y = eta2theta(eta, .llocat , earg = .elocat ) if ( .fittedMean ) { stop("Yet to do: handle 'fittedMean = TRUE'") kappamat <- matrix(extra$kappa, extra$n, extra$M, byrow = TRUE) Scale <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) locat.y + Scale * (1/kappamat - kappamat) } else { if (length(locat.y) > extra$n) dimnames(locat.y) <- list(dimnames(eta)[[1]], extra$y.names) locat.y } locat.y[locat.y < .minquantile] = .minquantile locat.y[locat.y > .maxquantile] = .maxquantile locat.y }, list( .elocat = elocat, .llocat = llocat, .minquantile = minquantile, .maxquantile = maxquantile, .fittedMean = fittedMean, .Scale.arg = Scale.arg, .kappa = kappa ))), last = eval(substitute(expression({ misc$link <- c(location = .llocat) misc$earg <- list(location = .elocat ) misc$expected <- TRUE extra$kappa <- misc$kappa <- .kappa extra$tau <- misc$tau <- misc$kappa^2 / (1 + misc$kappa^2) extra$Scale.arg <- .Scale.arg misc$true.mu <- .fittedMean # @fitted is not a true mu? 
misc$rep0 <- .rep0 misc$minquantile <- .minquantile misc$maxquantile <- .maxquantile extra$percentile <- numeric(length(misc$kappa)) locat.y <- as.matrix(locat.y) for (ii in seq_along(misc$kappa)) extra$percentile[ii] <- 100 * weighted.mean(y <= locat.y[, ii], w) }), list( .elocat = elocat, .llocat = llocat, .Scale.arg = Scale.arg, .fittedMean = fittedMean, .minquantile = minquantile, .maxquantile = maxquantile, .rep0 = rep0, .kappa = kappa ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { kappamat <- matrix(extra$kappa, extra$n, extra$M, byrow = TRUE) Scale.w <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) ymat <- matrix(y, extra$n, extra$M) if ( .llocat == "loge") ymat <- adjust0.loglaplace1(ymat = ymat, y = y, w = w, rep0 = .rep0) w.mat <- theta2eta(ymat, .llocat , earg = .elocat ) # e.g., logoff() if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dalap(x = c(w.mat), locat = c(eta), scale = c(Scale.w), kappa = c(kappamat), log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .elocat = elocat, .llocat = llocat, .rep0 = rep0, .Scale.arg = Scale.arg, .kappa = kappa ))), vfamily = c("loglaplace1"), validparams = eval(substitute(function(eta, y, extra = NULL) { locat.w <- eta locat.y <- eta2theta(locat.w, .llocat , earg = .elocat ) okay1 <- all(is.finite(locat.y)) okay1 }, list( .elocat = elocat, .llocat = llocat, .rep0 = rep0, .Scale.arg = Scale.arg, .kappa = kappa ))), deriv = eval(substitute(expression({ ymat <- matrix(y, n, M) Scale.w <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) locat.w <- eta locat.y <- eta2theta(locat.w, .llocat , earg = .elocat ) kappamat <- matrix(extra$kappa, n, M, byrow = TRUE) ymat <- adjust0.loglaplace1(ymat = ymat, y = y, w = w, rep0= .rep0) w.mat <- theta2eta(ymat, .llocat , earg = .elocat ) # e.g., logit() zedd <- abs(w.mat-locat.w) / Scale.w dl.dlocat <- ifelse(w.mat >= locat.w, kappamat, 1/kappamat) * sqrt(2) * sign(w.mat-locat.w) / Scale.w dlocat.deta <- dtheta.deta(locat.w, .llocat.identity , earg = .elocat.identity ) c(w) * cbind(dl.dlocat * dlocat.deta) }), list( .Scale.arg = Scale.arg, .rep0 = rep0, .llocat = llocat, .elocat = elocat, .elocat.identity = elocat.identity, .llocat.identity = llocat.identity, .kappa = kappa ))), weight = eval(substitute(expression({ ned2l.dlocat2 <- 2 / Scale.w^2 wz <- cbind(ned2l.dlocat2 * dlocat.deta^2) c(w) * wz }), list( .Scale.arg = Scale.arg, .elocat = elocat, .llocat = llocat, .elocat.identity = elocat.identity, .llocat.identity = llocat.identity )))) } loglaplace2.control <- function(save.weights = TRUE, ...) 
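# Illustrative (untested) sketch of loglaplace1() for quantile
# regression of counts on the log scale; 'cdata' is hypothetical:
#   set.seed(5)
#   cdata <- data.frame(x2 = sort(runif(nn <- 500)))
#   cdata <- transform(cdata, y = rpois(nn, lambda = exp(1 + x2)))
#   fit <- vglm(y ~ x2, loglaplace1(tau = c(0.25, 0.50, 0.75)),
#               data = cdata, trace = TRUE)
#   head(fitted(fit))  # fitted tau-quantiles, back on the y scale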
{ list(save.weights = save.weights) } loglaplace2 <- function(tau = NULL, llocation = "loge", lscale = "loge", ilocation = NULL, iscale = NULL, kappa = sqrt(tau/(1-tau)), ishrinkage = 0.95, parallel.locat = FALSE, digt = 4, eq.scale = TRUE, idf.mu = 3, rep0 = 0.5, nsimEIM = NULL, imethod = 1, zero = "(1 + M/2):M") { warning("it is best to use loglaplace1()") if (length(nsimEIM) && (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE) || nsimEIM <= 10)) stop("argument 'nsimEIM' should be an integer greater than 10") if (!is.Numeric(rep0, positive = TRUE, length.arg = 1) || rep0 > 1) stop("bad input for argument 'rep0'") if (!is.Numeric(kappa, positive = TRUE)) stop("bad input for argument 'kappa'") if (length(tau) && max(abs(kappa - sqrt(tau/(1-tau)))) > 1.0e-6) stop("arguments 'kappa' and 'tau' do not match") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1, 2 or ... 4") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") if (!is.Numeric(ishrinkage, length.arg = 1) || ishrinkage < 0 || ishrinkage > 1) stop("bad input for argument 'ishrinkage'") if (!is.logical(eq.scale) || length(eq.scale) != 1) stop("bad input for argument 'eq.scale'") if (!is.logical(parallel.locat) || length(parallel.locat) != 1) stop("bad input for argument 'parallel.locat'") fittedMean <- FALSE if (!is.logical(fittedMean) || length(fittedMean) != 1) stop("bad input for argument 'fittedMean'") if (llocat != "loge") stop("argument 'llocat' must be \"loge\"") new("vglmff", blurb = c("Two-parameter log-Laplace distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), "\n", "\n", "Mean: zz location + scale * ", "(1/kappa - kappa) / sqrt(2)", "\n", "Quantiles: location", "\n", "Variance: zz scale^2 * (1 + kappa^4) / (2 * kappa^2)"), constraints = eval(substitute(expression({ .ZERO <- .zero if (is.character( .ZERO )) .ZERO <- eval(parse(text = .ZERO )) .PARALLEL <- .parallel.locat parelHmat <- if (is.logical( .PARALLEL ) && .PARALLEL ) matrix(1, M/2, 1) else diag(M/2) scaleHmat <- if (is.logical( .eq.scale ) && .eq.scale ) matrix(1, M/2, 1) else diag(M/2) mycmatrix <- cbind(rbind( parelHmat, 0*parelHmat), rbind(0*scaleHmat, scaleHmat)) constraints <- cm.VGAM(mycmatrix, x = x, bool = .PARALLEL , constraints = constraints, apply.int = FALSE) constraints <- cm.zero.VGAM(constraints, x = x, .ZERO , M = M, predictors.names = predictors.names, M1 = 2) if ( .PARALLEL && names(constraints)[1] == "(Intercept)") { parelHmat <- diag(M/2) mycmatrix <- cbind(rbind( parelHmat, 0*parelHmat), rbind(0*scaleHmat, scaleHmat)) constraints[["(Intercept)"]] <- mycmatrix } if (is.logical( .eq.scale) && .eq.scale && names(constraints)[1] == "(Intercept)") { temp3 <- constraints[["(Intercept)"]] temp3 <- cbind(temp3[,1:(M/2)], rbind(0*scaleHmat, scaleHmat)) constraints[["(Intercept)"]] = temp3 } }), list( .eq.scale = eq.scale, .parallel.locat = parallel.locat, .zero = zero ))), initialize = eval(substitute(expression({ extra$kappa <- .kappa extra$tau <- extra$kappa^2 / (1 + extra$kappa^2) temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$M <- M 
<- 2 * length(extra$kappa) extra$n <- n extra$y.names <- y.names <- paste("tau = ", round(extra$tau, digits = .digt), sep = "") extra$individual = FALSE predictors.names <- c(namesof(paste("quantile(", y.names, ")", sep = ""), .llocat , earg = .elocat, tag = FALSE), namesof(if (M == 2) "scale" else paste("scale", 1:(M/2), sep = ""), .lscale , earg = .escale, tag = FALSE)) if (weighted.mean(1 * (y < 0.001), w) >= min(extra$tau)) stop("sample proportion of 0s > minimum 'tau' value. ", "Choose larger values for 'tau'.") if (!length(etastart)) { if ( .imethod == 1) { locat.init.y <- weighted.mean(y, w) scale.init <- sqrt(var(y) / 2) } else if ( .imethod == 2) { locat.init.y <- median(y) scale.init <- sqrt(sum(c(w)*abs(y-median(y))) / (sum(w) *2)) } else if ( .imethod == 3) { Fit5 <- vsmooth.spline(x = x[, min(ncol(x), 2)], y = y, w = w, df = .idf.mu ) locat.init.y <- c(predict(Fit5, x = x[, min(ncol(x), 2)])$y) scale.init <- sqrt(sum(c(w)*abs(y-median(y))) / (sum(w) *2)) } else { use.this <- weighted.mean(y, w) locat.init.y <- (1- .ishrinkage )*y + .ishrinkage * use.this scale.init <- sqrt(sum(c(w)*abs(y-median(y ))) / (sum(w) *2)) } locat.init.y <- if (length( .ilocat )) rep_len( .ilocat , n) else rep_len(locat.init.y, n) locat.init.y <- matrix(locat.init.y, n, M/2) scale.init <- if (length( .iscale )) rep_len( .iscale , n) else rep_len(scale.init, n) scale.init <- matrix(scale.init, n, M/2) etastart <- cbind(theta2eta(locat.init.y, .llocat , earg = .elocat ), theta2eta(scale.init, .lscale , earg = .escale )) } }), list( .imethod = imethod, .idf.mu = idf.mu, .kappa = kappa, .ishrinkage = ishrinkage, .digt = digt, .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .ilocat = ilocat, .iscale = iscale ))), linkinv = eval(substitute(function(eta, extra = NULL) { locat.y <- eta2theta(eta[, 1:(extra$M/2), drop = FALSE], .llocat , earg = .elocat ) if ( .fittedMean ) { kappamat <- matrix(extra$kappa, extra$n, extra$M/2, byrow = TRUE) Scale.y <- eta2theta(eta[,(1+extra$M/2):extra$M], .lscale , earg = .escale ) locat.y + Scale.y * (1/kappamat - kappamat) } else { dimnames(locat.y) = list(dimnames(eta)[[1]], extra$y.names) locat.y } }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .fittedMean = fittedMean, .kappa = kappa ))), last = eval(substitute(expression({ misc$link <- c(location = .llocat , scale = .lscale ) misc$earg <- list(location = .elocat , scale = .escale ) misc$expected <- TRUE extra$kappa <- misc$kappa <- .kappa extra$tau <- misc$tau <- misc$kappa^2 / (1 + misc$kappa^2) misc$true.mu <- .fittedMean # @fitted is not a true mu? 
misc$nsimEIM <- .nsimEIM misc$rep0 <- .rep0 extra$percentile <- numeric(length(misc$kappa)) locat <- as.matrix(locat.y) for (ii in seq_along(misc$kappa)) extra$percentile[ii] <- 100 * weighted.mean(y <= locat.y[, ii], w) }), list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .fittedMean = fittedMean, .nsimEIM = nsimEIM, .rep0 = rep0, .kappa = kappa ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { kappamat <- matrix(extra$kappa, extra$n, extra$M/2, byrow = TRUE) Scale.w <- eta2theta(eta[, (1+extra$M/2):extra$M], .lscale , earg = .escale ) ymat <- matrix(y, extra$n, extra$M/2) ymat[ymat <= 0] <- min(min(y[y > 0]), .rep0 ) # Adjust for 0s ell.mat <- matrix(c(dloglaplace(x = c(ymat), locat.ald = c(eta[, 1:(extra$M/2)]), scale.ald = c(Scale.w), kappa = c(kappamat), log = TRUE)), extra$n, extra$M/2) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * ell.mat if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .rep0 = rep0, .kappa = kappa ))), vfamily = c("loglaplace2"), validparams = eval(substitute(function(eta, y, extra = NULL) { Scale.w <- eta2theta(eta[, (1+extra$M/2):extra$M], .lscale , earg = .escale ) locat.w <- eta[, 1:(extra$M/2), drop = FALSE] locat.y <- eta2theta(locat.w, .llocat , earg = .elocat ) okay1 <- all(is.finite(locat.y)) && all(is.finite(Scale.w)) && all(0 < Scale.w) okay1 }, list( .elocat = elocat, .llocat = llocat, .escale = escale, .lscale = lscale, .rep0 = rep0, .kappa = kappa ))), deriv = eval(substitute(expression({ ymat <- matrix(y, n, M/2) Scale.w <- eta2theta(eta[, (1+extra$M/2):extra$M], .lscale , earg = .escale ) locat.w <- eta[, 1:(extra$M/2), drop = FALSE] locat.y <- eta2theta(locat.w, .llocat , earg = .elocat ) kappamat <- matrix(extra$kappa, n, M/2, byrow = TRUE) w.mat <- ymat w.mat[w.mat <= 0] <- min(min(w.mat[w.mat > 0]), .rep0) w.mat <- theta2eta(w.mat, .llocat , earg = .elocat ) zedd <- abs(w.mat-locat.w) / Scale.w dl.dlocat <- sqrt(2) * ifelse(w.mat >= locat.w, kappamat, 1/kappamat) * sign(w.mat-locat.w) / Scale.w dl.dscale <- sqrt(2) * ifelse(w.mat >= locat.w, kappamat, 1/kappamat) * zedd / Scale.w - 1 / Scale.w dlocat.deta <- dtheta.deta(locat.w, .llocat , earg = .elocat ) dscale.deta <- dtheta.deta(Scale.w, .lscale , earg = .escale ) c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta) }), list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat, .rep0 = rep0, .kappa = kappa ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) dthetas.detas <- cbind(dlocat.deta, dscale.deta) if (length( .nsimEIM )) { for (ii in 1:( .nsimEIM )) { wsim <- matrix(rloglap(n*M/2, loc = c(locat.w), sca = c(Scale.w), kappa = c(kappamat)), n, M/2) zedd <- abs(wsim-locat.w) / Scale.w dl.dlocat <- sqrt(2) * ifelse(wsim >= locat.w, kappamat, 1/kappamat) * sign(wsim-locat.w) / Scale.w dl.dscale <- sqrt(2) * ifelse(wsim >= locat.w, kappamat, 1/kappamat) * zedd / Scale.w - 1 / Scale.w rm(wsim) temp3 <- cbind(dl.dlocat, dl.dscale) # n x M matrix run.varcov <- ((ii-1) * run.varcov + temp3[,ind1$row.index]*temp3[,ind1$col.index]) / ii } wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[,ind1$row] * dthetas.detas[,ind1$col] wz <- c(w) * matrix(wz, n, dimm(M)) wz } else { d2l.dlocat2 <- 2 / (Scale.w * 
locat.w)^2 d2l.dscale2 <- 1 / Scale.w^2 wz <- cbind(d2l.dlocat2 * dlocat.deta^2, d2l.dscale2 * dscale.deta^2) c(w) * wz } }), list( .elocat = elocat, .escale = escale, .llocat = llocat, .lscale = lscale, .nsimEIM = nsimEIM) ))) } logitlaplace1.control <- function(maxit = 300, ...) { list(maxit = maxit) } adjust01.logitlaplace1 <- function(ymat, y, w, rep01) { rangey01 <- range(y[(y > 0) & (y < 1)]) ymat[ymat <= 0] <- min(rangey01[1] / 2, rep01 / w[y <= 0]) ymat[ymat >= 1] <- max((1 + rangey01[2]) / 2, 1 - rep01 / w[y >= 1]) ymat } logitlaplace1 <- function(tau = NULL, llocation = "logit", ilocation = NULL, kappa = sqrt(tau/(1-tau)), Scale.arg = 1, ishrinkage = 0.95, parallel.locat = FALSE, digt = 4, idf.mu = 3, rep01 = 0.5, imethod = 1, zero = NULL) { if (!is.Numeric(rep01, positive = TRUE, length.arg = 1) || rep01 > 0.5) stop("bad input for argument 'rep01'") if (!is.Numeric(kappa, positive = TRUE)) stop("bad input for argument 'kappa'") if (length(tau) && max(abs(kappa - sqrt(tau/(1-tau)))) > 1.0e-6) stop("arguments 'kappa' and 'tau' do not match") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation llocat.identity <- as.list(substitute("identitylink")) elocat.identity <- link2list(llocat.identity) llocat.identity <- attr(elocat.identity, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1, 2 or ... 4") if (!is.Numeric(ishrinkage, length.arg = 1) || ishrinkage < 0 || ishrinkage > 1) stop("bad input for argument 'ishrinkage'") if (!is.Numeric(Scale.arg, positive = TRUE)) stop("bad input for argument 'Scale.arg'") if (!is.logical(parallel.locat) || length(parallel.locat) != 1) stop("bad input for argument 'parallel.locat'") fittedMean <- FALSE if (!is.logical(fittedMean) || length(fittedMean) != 1) stop("bad input for argument 'fittedMean'") mystring0 <- namesof("location", llocat, earg = elocat) mychars <- substring(mystring0, first = 1:nchar(mystring0), last = 1:nchar(mystring0)) mychars[nchar(mystring0)] = ", inverse = TRUE)" mystring1 <- paste(mychars, collapse = "") new("vglmff", blurb = c("One-parameter ", llocat, "-Laplace distribution\n\n", "Links: ", mystring0, "\n", "\n", "Quantiles: ", mystring1), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel.locat , constraints = constraints, apply.int = FALSE) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .parallel.locat = parallel.locat, .Scale.arg = Scale.arg, .zero = zero ))), infos = eval(substitute(function(...) 
{ list(M1 = 1, Q1 = 1, multipleResponses = FALSE, parameters.names = c("location"), llocation = .llocat , zero = .zero ) }, list( .zero = zero, .llocat = llocat ))), initialize = eval(substitute(expression({ extra$M <- M <- max(length( .Scale.arg ), length( .kappa )) extra$Scale <- rep_len( .Scale.arg , M) extra$kappa <- rep_len( .kappa , M) extra$tau <- extra$kappa^2 / (1 + extra$kappa^2) temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$n <- n extra$y.names <- y.names <- paste("tau = ", round(extra$tau, digits = .digt), sep = "") extra$individual <- FALSE predictors.names <- namesof(paste("quantile(", y.names, ")", sep = ""), .llocat , earg = .elocat, tag = FALSE) if (all(y == 0 | y == 1)) stop("response cannot be all 0s or 1s") if (min(y) < 0) stop("negative response values detected") if (max(y) > 1) stop("response values greater than 1 detected") if ((prop.0. <- weighted.mean(1*(y == 0), w)) >= min(extra$tau)) stop("sample proportion of 0s == ", round(prop.0., digits = 4), " > minimum 'tau' value. Choose larger values for 'tau'.") if ((prop.1. <- weighted.mean(1*(y == 1), w)) >= max(extra$tau)) stop("sample proportion of 1s == ", round(prop.1., digits = 4), " < maximum 'tau' value. Choose smaller values for 'tau'.") if (!length(etastart)) { if ( .imethod == 1) { locat.init <- quantile(rep(y, w), probs= extra$tau) } else if ( .imethod == 2) { locat.init <- weighted.mean(y, w) locat.init <- median(rep(y, w)) } else if ( .imethod == 3) { use.this <- weighted.mean(y, w) locat.init <- (1- .ishrinkage )*y + use.this * .ishrinkage } else { stop("this option not implemented") } locat.init <- if (length( .ilocat )) rep_len( .ilocat , M) else rep_len(locat.init, M) locat.init <- matrix(locat.init, n, M, byrow = TRUE) locat.init <- abs(locat.init) etastart <- cbind(theta2eta(locat.init, .llocat , earg = .elocat )) } }), list( .imethod = imethod, .idf.mu = idf.mu, .ishrinkage = ishrinkage, .digt = digt, .elocat = elocat, .Scale.arg = Scale.arg, .llocat = llocat, .kappa = kappa, .ilocat = ilocat ))), linkinv = eval(substitute(function(eta, extra = NULL) { locat.y <- eta2theta(eta, .llocat , earg = .elocat ) if ( .fittedMean ) { stop("Yet to do: handle 'fittedMean = TRUE'") kappamat <- matrix(extra$kappa, extra$n, extra$M, byrow = TRUE) Scale <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) locat.y + Scale * (1/kappamat - kappamat) } else { if (length(locat.y) > extra$n) dimnames(locat.y) <- list(dimnames(eta)[[1]], extra$y.names) locat.y } }, list( .elocat = elocat, .llocat = llocat, .fittedMean = fittedMean, .Scale.arg = Scale.arg, .kappa = kappa ))), last = eval(substitute(expression({ misc$link <- c(location = .llocat ) misc$earg <- list(location = .elocat ) misc$expected <- TRUE extra$kappa <- misc$kappa <- .kappa extra$tau <- misc$tau <- misc$kappa^2 / (1 + misc$kappa^2) extra$Scale.arg <- .Scale.arg misc$true.mu <- .fittedMean # @fitted is not a true mu? 
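# 'rep01' is recorded next because adjust01.logitlaplace1() (defined
# earlier) uses it to nudge boundary responses (y == 0 or y == 1)
# strictly inside (0, 1) before applying the logit transform.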
misc$rep01 <- .rep01 extra$percentile <- numeric(length(misc$kappa)) locat.y <- eta2theta(eta, .llocat , earg = .elocat ) locat.y <- as.matrix(locat.y) for (ii in seq_along(misc$kappa)) extra$percentile[ii] <- 100 * weighted.mean(y <= locat.y[, ii], w) }), list( .elocat = elocat, .llocat = llocat, .Scale.arg = Scale.arg, .fittedMean = fittedMean, .rep01 = rep01, .kappa = kappa ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { kappamat <- matrix(extra$kappa, extra$n, extra$M, byrow = TRUE) Scale.w <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) ymat <- matrix(y, extra$n, extra$M) ymat <- adjust01.logitlaplace1(ymat = ymat, y = y, w = w, rep01 = .rep01) w.mat <- theta2eta(ymat, .llocat , earg = .elocat ) # e.g., logit() if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dalap(x = c(w.mat), location = c(eta), scale = c(Scale.w), kappa = c(kappamat), log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .elocat = elocat, .llocat = llocat, .rep01 = rep01, .Scale.arg = Scale.arg, .kappa = kappa ))), vfamily = c("logitlaplace1"), validparams = eval(substitute(function(eta, y, extra = NULL) { locat.w <- eta okay1 <- all(is.finite(locat.w)) okay1 }, list( .Scale.arg = Scale.arg, .rep01 = rep01, .elocat = elocat, .llocat = llocat, .elocat.identity = elocat.identity, .llocat.identity = llocat.identity, .kappa = kappa ))), deriv = eval(substitute(expression({ ymat <- matrix(y, n, M) Scale.w <- matrix(extra$Scale, extra$n, extra$M, byrow = TRUE) locat.w <- eta kappamat <- matrix(extra$kappa, n, M, byrow = TRUE) ymat <- adjust01.logitlaplace1(ymat = ymat, y = y, w = w, rep01 = .rep01 ) w.mat <- theta2eta(ymat, .llocat , earg = .elocat ) # e.g., logit() zedd <- abs(w.mat - locat.w) / Scale.w dl.dlocat <- ifelse(w.mat >= locat.w, kappamat, 1/kappamat) * sqrt(2) * sign(w.mat-locat.w) / Scale.w dlocat.deta <- dtheta.deta(locat.w, "identitylink", earg = .elocat.identity ) c(w) * cbind(dl.dlocat * dlocat.deta) }), list( .Scale.arg = Scale.arg, .rep01 = rep01, .elocat = elocat, .llocat = llocat, .elocat.identity = elocat.identity, .llocat.identity = llocat.identity, .kappa = kappa ))), weight = eval(substitute(expression({ d2l.dlocat2 <- 2 / Scale.w^2 wz <- cbind(d2l.dlocat2 * dlocat.deta^2) c(w) * wz }), list( .Scale.arg = Scale.arg, .elocat = elocat, .llocat = llocat )))) } VGAM/R/vsmooth.spline.q0000644000176200001440000004636013135276760014402 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. setClass("vsmooth.spline.fit", representation( "Bcoefficients" = "matrix", "knots" = "numeric", "xmin" = "numeric", "xmax" = "numeric")) setClass("vsmooth.spline", representation( "call" = "call", "constraints" = "list", "df" = "numeric", "nlfit" = "vsmooth.spline.fit", # is the nonlinear component "lev" = "matrix", "lfit" = "vlm", # 20020606 was "vlm.wfit"; is the linear component "spar" = "numeric", "lambda" = "numeric", "var" = "matrix", "w" = "matrix", "x" = "numeric", "y" = "matrix", "yin" = "matrix")) setMethod("coefficients", signature(object = "vsmooth.spline"), function(object, ...) coefvsmooth.spline(object, ...)) setMethod("coef", signature(object = "vsmooth.spline"), function(object, ...) coefvsmooth.spline(object, ...)) setMethod("coefficients", signature(object = "vsmooth.spline.fit"), function(object, ...) 
coefvsmooth.spline.fit(object, ...)) setMethod("coef", signature(object = "vsmooth.spline.fit"), function(object, ...) coefvsmooth.spline.fit(object, ...)) setMethod("fitted.values", signature(object = "vsmooth.spline"), function(object, ...) fittedvsmooth.spline(object, ...)) setMethod("fitted", signature(object = "vsmooth.spline"), function(object, ...) fittedvsmooth.spline(object, ...)) setMethod("residuals", signature(object = "vsmooth.spline"), function(object, ...) residvsmooth.spline(object, ...)) setMethod("resid", signature(object = "vsmooth.spline"), function(object, ...) residvsmooth.spline(object, ...)) setMethod("predict", signature(object="vsmooth.spline"), function(object, ...) predictvsmooth.spline(object, ...)) setMethod("show", "vsmooth.spline", function(object) show.vsmooth.spline(object)) setMethod("plot", "vsmooth.spline", function(x, y, ...) { if (!missing(y)) stop("cannot process the 'y' argument") invisible(plotvsmooth.spline(x, ...))}) setMethod("predict", "vsmooth.spline.fit", function(object, ...) predictvsmooth.spline.fit(object, ...)) setMethod("model.matrix", "vsmooth.spline", function(object, ...) model.matrixvlm(object, ...)) depvar.vsmooth.spline <- function(object, ...) { object@y } if (!isGeneric("depvar")) setGeneric("depvar", function(object, ...) standardGeneric("depvar"), package = "VGAM") setMethod("depvar", "vsmooth.spline", function(object, ...) depvar.vsmooth.spline(object, ...)) vsmooth.spline <- function(x, y, w = NULL, df = rep(5, M), spar = NULL, #rep(0,M), i.constraint = diag(M), x.constraint = diag(M), constraints = list("(Intercepts)" = i.constraint, x = x.constraint), all.knots = FALSE, var.arg = FALSE, scale.w = TRUE, nk = NULL, control.spar = list()) { if (var.arg) { warning("@var will be returned, but no use will be made of it") } missing.constraints <- missing(constraints) if (!(missing.spar <- missing(spar)) && !missing(df)) { stop("cannot specify both 'spar' and 'df'") } contr.sp <- list(low = -1.5,## low = 0. 
was default till R 1.3.x high = 1.5, tol = 1e-4,## tol = 0.001 was default till R 1.3.x eps = 2e-8,## eps = 0.00244 was default till R 1.3.x maxit = 500 ) contr.sp[(names(control.spar))] <- control.spar if (!all(sapply(contr.sp[1:4], is.numeric)) || contr.sp$tol < 0 || contr.sp$eps <= 0 || contr.sp$maxit <= 0) stop("invalid 'control.spar'") my.call <- match.call() if (missing(y)) { if (is.list(x)) { if (anyNA(match(c("x", "y"), names(x)))) stop("cannot find 'x' and 'y' in list") y <- x$y x <- x$x } else if (is.complex(x)) { y <- Im(x) x <- Re(x) } else if (is.matrix(x)) { y <- x[,-1] x <- x[,1] } else { y <- x x <- time(x) } } xvector <- x n_lm <- length(xvector) ymat <- as.matrix(y) ny2 <- dimnames(ymat)[[2]] # NULL if vector M <- ncol(ymat) if (n_lm != nrow(ymat)) { stop("lengths of arguments 'x' and 'y' must match") } if (anyNA(xvector) || anyNA(ymat)) { stop("NAs not allowed in arguments 'x' or 'y'") } if (is.null(w)) { wzmat <- matrix(1, n_lm, M) } else { if (anyNA(w)) { stop("NAs not allowed in argument 'w'") } wzmat <- as.matrix(w) if (nrow(ymat) != nrow(wzmat) || ncol(wzmat) > M * (M+1) / 2) { stop("arguments 'w' and 'y' don't match") } if (scale.w) { wzmat <- wzmat / mean(wzmat[,1:M]) # 'Average' value is 1 } } dim2wz <- ncol(wzmat) if (missing.constraints) { constraints <- list("(Intercepts)" = eval(i.constraint), "x" = eval(x.constraint)) } constraints <- eval(constraints) if (is.matrix(constraints)) { constraints <- list("(Intercepts)" = constraints, "x" = constraints) } if (!is.list(constraints) || length(constraints) != 2) { stop("'constraints' must equal a list (of length 2) or a matrix") } for (ii in 1:2) if (!is.numeric(constraints[[ii]]) || !is.matrix (constraints[[ii]]) || nrow(constraints[[ii]]) != M || ncol(constraints[[ii]]) > M) stop("something wrong with argument 'constraints'") names(constraints) <- c("(Intercepts)", "x") usortx <- unique(sort(as.vector(xvector))) ooo <- match(xvector, usortx) # usortx[ooo] == x neff <- length(usortx) if (neff < 7) { stop("not enough unique 'x' values (need 7 or more)") } dim1U <- dim2wz # 20000110; was M * (M+1) / 2 collaps <- .C("vsuff9", as.integer(n_lm), as.integer(neff), as.integer(ooo), as.double(xvector), as.double(ymat), as.double(wzmat), xbar = double(neff), ybar = double(neff * M), wzbar = double(neff * dim2wz), uwzbar = double(1), wzybar = double(neff * M), okint = as.integer(0), as.integer(M), dim2wz = as.integer(dim2wz), dim1U = as.integer(dim1U), Hlist1 = as.double(diag(M)), ncolb = as.integer(M), trivc = as.integer(1), wuwzbar = as.integer(0), dim1Uwzbar = as.integer(dim1U), dim2wzbar = as.integer(dim2wz)) if (collaps$okint != 1) { stop("some non-positive-definite weight matrices ", "detected in 'vsuff9'") } dim(collaps$ybar) <- c(neff, M) if (FALSE) { } else { yinyin <- collaps$ybar # Includes both linear and nonlinear parts x <- collaps$xbar # Could call this xxx for location finder lfit <- vlm(yinyin ~ 1 + x, # xxx constraints = constraints, save.weights = FALSE, qr.arg = FALSE, x.arg = FALSE, y.arg = FALSE, smart = FALSE, weights = matrix(collaps$wzbar, neff, dim2wz)) } ncb0 <- ncol(constraints[[2]]) # Of xxx and not of the intercept spar <- rep_len(if (length(spar)) spar else 0, ncb0) dfvec <- rep_len(df, ncb0) if (!missing.spar) { ispar <- 1 if (any(spar <= 0) || !is.numeric(spar)) { stop("not allowed non-positive or non-numeric ", "smoothing parameters") } nonlin <- (spar != Inf) } else { ispar <- 0 if (!is.numeric(dfvec) || any(dfvec < 2 | dfvec > neff)) { stop("you must supply '2 <= df <= ", neff, "'") } 
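# df == 2 corresponds to a straight line (equivalently spar = Inf), so
# any component whose df is within 'tol' of 2 is flagged as purely
# linear and excluded from the nonlinear spline fit further down.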
nonlin <- (abs(dfvec - 2) > contr.sp$tol) } if (all(!nonlin)) { junk.fill <- new("vsmooth.spline.fit", "Bcoefficients" = matrix(NA_real_, 1, 1), "knots" = numeric(0), "xmin" = numeric(0), "xmax" = numeric(0)) # 20031108 dratio <- NA_real_ object <- new("vsmooth.spline", "call" = my.call, "constraints" = constraints, "df" = if (ispar == 0) dfvec else rep_len(2, length(spar)), "lfit" = lfit, "nlfit" = junk.fill, "spar" = if (ispar == 1) spar else rep_len(Inf, length(dfvec)), "lambda" = if (ispar == 1) dratio * 16.0^(spar * 6.0 - 2.0) else rep_len(Inf, length(dfvec)), "w" = matrix(collaps$wzbar, neff, dim2wz), "x" = usortx, "y" = lfit@fitted.values, "yin" = yinyin) return(object) } xbar <- (usortx - usortx[1]) / (usortx[neff] - usortx[1]) noround <- TRUE # Improvement 20020803 nknots <- nk if (all.knots) { knot <- if (noround) { valid.vknotl2(c(rep_len(xbar[1], 3), xbar, rep_len(xbar[neff], 3))) } else { c(rep_len(xbar[1], 3), xbar, rep_len(xbar[neff], 3)) } if (length(nknots)) { warning("overriding 'nk' by 'all.knots = TRUE'") } nknots <- length(knot) - 4 # No longer neff + 2 } else { chosen <- length(nknots) if (chosen && (nknots > neff+2 || nknots <= 5)) { stop("bad value for 'nk'") } if (!chosen) { nknots <- 0 } knot.list <- .C("vknootl2", as.double(xbar), as.integer(neff), knot = double(neff+6), k = as.integer(nknots+4), chosen = as.integer(chosen)) if (noround) { knot <- valid.vknotl2(knot.list$knot[1:(knot.list$k)]) knot.list$k <- length(knot) } else { knot <- knot.list$knot[1:(knot.list$k)] } nknots <- knot.list$k - 4 } if (nknots <= 5) { stop("not enough distinct knots found") } conmat <- (constraints[[2]])[, nonlin, drop = FALSE] ncb <- sum(nonlin) trivc <- trivial.constraints(conmat) resmat <- collaps$ybar - lfit@fitted.values # neff by M spar.nl <- spar[nonlin] dofr.nl <- dfvec[nonlin] dim1Uwzbar <- if (trivc) dim1U else ncb * (ncb+1) / 2 dim2wzbar <- if (trivc) dim2wz else ncb * (ncb+1) / 2 ooo <- 1:neff # Already sorted collaps <- .C("vsuff9", as.integer(neff), as.integer(neff), as.integer(ooo), as.double(collaps$xbar), as.double(resmat), as.double(collaps$wzbar), xbar = double(neff), ybar = double(neff * ncb), wzbar = double(neff * dim2wzbar), uwzbar = double(1), wzybar = double(neff * ncb), okint = as.integer(0), as.integer(M), as.integer(dim2wz), as.integer(dim1U), Hlist1 = as.double(conmat), ncolb = as.integer(ncb), as.integer(trivc), wuwzbar = as.integer(0), as.integer(dim1Uwzbar), as.integer(dim2wzbar)) if (collaps$okint != 1) { stop("some non-positive-definite weight matrices ", "detected in 'vsuff9' during the second call.") } dim(collaps$ybar) <- dim(collaps$wzybar) <- c(neff, ncb) dim(collaps$wzbar) <- c(neff, dim2wzbar) wzyb.c <- zedd.c <- matrix(0, neff, ncb) Wmat.c <- array(0, c(ncb, ncb, neff)) if (FALSE) for (ii in 1:neff) { Wi.indiv <- m2a(wzmat[ii, , drop = FALSE], M = ncb) Wi.indiv <- Wi.indiv[,, 1] # Drop the 3rd dimension Wmat.c[,, ii] <- t(conmat) %*% Wi.indiv %*% conmat one.Wmat.c <- matrix(Wmat.c[,, ii], ncb, ncb) zedd.c[ii, ] <- solve(Wmat.c[,, ii], t(conmat) %*% Wi.indiv %*% cbind(resmat[ii, ])) wzyb.c[ii, ] <- one.Wmat.c %*% zedd.c[ii, ] } ldk <- 3 * ncb + 1 # 20020710; Previously 4 * ncb varmat <- if (var.arg) matrix(0, neff, ncb) else double(1) vsplin <- .C("Yee_spline", xs = as.double(xbar), yyy = as.double(collaps$wzybar), # zz as.double(collaps$wzbar), xknot = as.double(knot), n = as.integer(neff), nknots = as.integer(nknots), as.integer(ldk), M = as.integer(ncb), dim2wz = as.integer(dim2wzbar), spar.nl = as.double(spar.nl), lamvec = 
as.double(spar.nl), iinfo = integer(1), fv = double(neff * ncb), Bcoef = double(nknots * ncb), varmat = as.double(varmat), levmat = double(neff * ncb), as.double(dofr.nl), ifvar = as.integer(var.arg), ierror = as.integer(0), n_lm = as.integer(neff), double(nknots), double(nknots), double(nknots), double(nknots), double(1), as.integer(0), icontrsp = as.integer(contr.sp$maxit), contrsp = as.double(unlist(contr.sp[1:4]))) if (vsplin$ierror != 0) { stop("vsplin$ierror == ", vsplin$ierror, ". Something gone wrong in 'vsplin'") } if (vsplin$iinfo != 0) { stop("leading minor of order ", vsplin$iinfo, " is not positive-definite") } dim(vsplin$levmat) <- c(neff, ncb) # A matrix even when ncb == 1 if (ncb > 1) { dim(vsplin$fv) <- c(neff, ncb) if (var.arg) dim(vsplin$varmat) <- c(neff, ncb) } dofr.nl <- colSums(vsplin$levmat) # Actual EDF used fv <- lfit@fitted.values + vsplin$fv %*% t(conmat) if (M > 1) { dimnames(fv) <- list(NULL, ny2) } dfvec[!nonlin] <- 2.0 dfvec[ nonlin] <- dofr.nl if (ispar == 0) { spar[!nonlin] <- Inf spar[ nonlin] <- vsplin$spar.nl # Actually used } fit.object <- new("vsmooth.spline.fit", "Bcoefficients" = matrix(vsplin$Bcoef, nknots, ncb), "knots" = knot, "xmax" = usortx[neff], "xmin" = usortx[1]) object <- new("vsmooth.spline", "call" = my.call, "constraints" = constraints, "df" = dfvec, "nlfit" = fit.object, "lev" = vsplin$levmat, "lfit" = lfit, "spar" = spar, # if (ispar == 1) spar else vsplin$spar, "lambda" = vsplin$lamvec, # "w" = collaps$wzbar, "x" = usortx, "y" = fv, "yin" = yinyin) if (var.arg) object@var <- vsplin$varmat object } show.vsmooth.spline <- function(x, ...) { if (!is.null(cl <- x@call)) { cat("Call:\n") dput(cl) } ncb <- if (length(x@nlfit)) ncol(x@nlfit@Bcoefficients) else NULL cat("\nSmoothing Parameter (Spar):", if (length(ncb) && ncb == 1) format(x@spar) else paste(format(x@spar), collapse = ", "), "\n") cat("\nEquivalent Degrees of Freedom (Df):", if (length(ncb) && ncb == 1) format(x@df) else paste(format(x@df), collapse = ", "), "\n") if (!all(trivial.constraints(x@constraints) == 1)) { cat("\nConstraint matrices:\n") print(x@constraints) } invisible(x) } coefvsmooth.spline.fit <- function(object, ...) { object@Bcoefficients } coefvsmooth.spline <- function(object, matrix = FALSE, ...) { list(lfit = coefvlm(object@lfit, matrix.out = matrix), nlfit = coefvsmooth.spline.fit(object@nlfit)) } fittedvsmooth.spline <- function(object, ...) { object@y } residvsmooth.spline <- function(object, ...) { as.matrix(object@yin - object@y) } plotvsmooth.spline <- function(x, xlab = "x", ylab = "", points = TRUE, pcol = par()$col, pcex = par()$cex, pch = par()$pch, lcol = par()$col, lwd = par()$lwd, lty = par()$lty, add = FALSE, ...) { points.arg <- points; rm(points) M <- ncol(x@y) pcol <- rep_len(pcol, M) pcex <- rep_len(pcex, M) pch <- rep_len(pch, M) lcol <- rep_len(lcol, M) lwd <- rep_len(lwd, M) lty <- rep_len(lty, M) if (!add) matplot(x@x, x@yin, type = "n", xlab = xlab, ylab = ylab, ...) for (ii in 1:ncol(x@y)) { if (points.arg) points(x@x, x@yin[,ii], col = pcol[ii], pch = pch[ii], cex = pcex[ii]) lines(x@x, x@y[,ii], col = lcol[ii], lwd = lwd[ii], lty = lty[ii]) } invisible(x) } predictvsmooth.spline <- function(object, x, deriv = 0, se.fit = FALSE) { if (se.fit) warning("'se.fit = TRUE' is not currently implemented. 
", "Using 'se.fit = FALSE'") lfit <- object@lfit # Linear part of the vector spline nlfit <- object@nlfit # Nonlinear part of the vector spline if (missing(x)) { if (deriv == 0) { return(list(x = object@x, y = object@y)) } else { x <- object@x return(Recall(object, x, deriv)) } } mat.coef <- coefvlm(lfit, matrix.out = TRUE) coeflfit <- t(mat.coef) # M x p now M <- nrow(coeflfit) # if (is.matrix(object@y)) ncol(object@y) else 1 pred <- if (deriv == 0) predict(lfit, data.frame(x = x)) else if (deriv == 1) matrix(coeflfit[,2], length(x), M, byrow = TRUE) else matrix(0, length(x), M) if (!length(nlfit@knots)) { return(list(x = x, y = pred)) } nonlin <- (object@spar != Inf) conmat <- if (!length(lfit@constraints)) diag(M) else lfit@constraints[[2]] conmat <- conmat[, nonlin, drop = FALSE] # Of nonlinear functions list(x = x, y = pred + predict(nlfit, x, deriv)$y %*% t(conmat)) } predictvsmooth.spline.fit <- function(object, x, deriv = 0) { nknots <- nrow(object@Bcoefficients) drangex <- object@xmax - object@xmin if (missing(x)) x <- seq(from = object@xmin, to = object@xmax, length.out = nknots-4) xs <- as.double((x - object@xmin) / drangex) bad.left <- (xs < 0) bad.right <- (xs > 1) good <- !(bad.left | bad.right) ncb <- ncol(object@Bcoefficients) y <- matrix(NA_real_, length(xs), ncb) if (ngood <- sum(good)) { junk <- .C("Yee_vbvs", as.integer(ngood), as.double(object@knots), as.double(object@Bcoefficients), as.double(xs[good]), smomat = double(ngood * ncb), as.integer(nknots), as.integer(deriv), as.integer(ncb)) y[good,] <- junk$smomat if (TRUE && deriv > 1) { edges <- xs <= 0 | xs >= 1 # Zero the edges & beyond explicitly y[edges,] <- 0 } } if (any(!good)) { xrange <- c(object@xmin, object@xmax) if (deriv == 0) { end.object <- Recall(object, xrange)$y end.slopes <- Recall(object, xrange, 1)$y * drangex if (any(bad.left)) { y[bad.left,] <- rep(end.object[1,], rep(sum(bad.left), ncb)) + rep(end.slopes[1,], rep(sum(bad.left), ncb)) * xs[bad.left] } if (any(bad.right)) { y[bad.right,] <- rep(end.object[2,], rep(sum(bad.right), ncb)) + rep(end.slopes[2,], rep(sum(bad.right), ncb)) * (xs[bad.right] - 1) } } else if (deriv == 1) { end.slopes <- Recall(object, xrange, 1)$y * drangex y[bad.left,] <- rep(end.slopes[1,], rep(sum(bad.left), ncb)) y[bad.right,] <- rep(end.slopes[2,], rep(sum(bad.right), ncb)) } else y[!good,] <- 0 } if (deriv > 0) y <- y / (drangex^deriv) list(x = x, y = y) } valid.vknotl2 <- function(knot, tol = 1/1024) { junk <- .C("Yee_pknootl2", knot = as.double(knot), as.integer(length(knot)), keep = integer(length(knot)), as.double(tol)) keep <- as.logical(junk$keep) knot <- junk$knot[keep] if (length(knot) <= 11) { stop("too few (distinct) knots") } knot } VGAM/R/cqo.fit.q0000644000176200001440000007370713135276757012770 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
callcqoc <- function(cmatrix, etamat, xmat, ymat, wvec, X.vlm.1save, modelno, Control, n, M, p1star, p2star, nice31, allofit = FALSE) { ocmatrix <- cmatrix control <- Control Rank <- control$Rank p1 <- length(control$colx1.index) p2 <- length(control$colx2.index) dim(cmatrix) <- c(p2, Rank) # for crow1C pstar <- p1star + p2star maxMr <- max(M, Rank) nstar <- if (nice31) ifelse(modelno %in% c(3, 5), n*2, n) else n*M NOS <- ifelse(modelno %in% c(3, 5), M/2, M) lenbeta <- pstar * ifelse(nice31, NOS, 1) if (I.tol <- control$I.tolerances) { if (Rank > 1) { numat <- xmat[, control$colx2.index, drop = FALSE] %*% cmatrix evnu <- eigen(var(numat), symmetric = TRUE) cmatrix <- cmatrix %*% evnu$vector } cmatrix <- crow1C(cmatrix, control$Crow1positive) numat <- xmat[, control$colx2.index, drop = FALSE] %*% cmatrix sdnumat <- apply(numat, 2, sd) for (lookat in 1:Rank) if (sdnumat[lookat] > control$MUXfactor[lookat] * control$isd.latvar[lookat]) { muxer <- control$isd.latvar[lookat] * control$MUXfactor[lookat] / sdnumat[lookat] numat[, lookat] <- numat[, lookat] * muxer cmatrix[,lookat] <- cmatrix[,lookat] * muxer if (control$trace) { cat(paste("Taking evasive action for latent variable ", lookat, ".\n", sep = "")) flush.console() } rmfromVGAMenv(c("etamat", "z", "U", "beta", "deviance", "cmatrix", "ocmatrix"), prefix = ".VGAM.CQO.") } } else { numat <- xmat[, control$colx2.index, drop = FALSE] %*% cmatrix evnu <- eigen(var(numat), symmetric = TRUE) temp7 <- if (Rank > 1) evnu$vector %*% diag(evnu$value^(-0.5)) else evnu$vector %*% evnu$value^(-0.5) cmatrix <- cmatrix %*% temp7 cmatrix <- crow1C(cmatrix, control$Crow1positive) numat <- xmat[, control$colx2.index, drop = FALSE] %*% cmatrix } inited <- ifelse(exists(".VGAM.CQO.etamat", envir = VGAMenv), 1, 0) usethiseta <- if (inited == 1) getfromVGAMenv("etamat", prefix = ".VGAM.CQO.") else t(etamat) usethisbeta <- if (inited == 2) getfromVGAMenv("beta", prefix = ".VGAM.CQO.") else double(lenbeta) othint <- c(Rank = Rank, control$eq.tol, pstar = pstar, dimw = 1, inited = inited, modelno = modelno, maxitl = control$maxitl, actnits = 0, twice = 0, p1star = p1star, p2star = p2star, nice31 = nice31, lenbeta = lenbeta, I.tol = I.tol, control$trace, p1 = p1, p2 = p2, control$imethod) bnumat <- if (nice31) matrix(0,nstar,pstar) else cbind(matrix(0, nstar, p2star), X.vlm.1save) ans1 <- if (nice31) .C("cqo_1", numat = as.double(numat), as.double(ymat), as.double(if (p1) xmat[, control$colx1.index] else 999), as.double(wvec), etamat = as.double(usethiseta), moff = double(if (I.tol) n else 1), fv = double(NOS*n), z = double(n*M), wz = double(n*M), U = double(M*n), bnumat = as.double(bnumat), qr = double(nstar*pstar), qraux = double(pstar), qpivot = integer(pstar), as.integer(n), as.integer(M), NOS = as.integer(NOS), as.integer(nstar), dim1U = as.integer(M), errcode = integer(1 + NOS), othint = as.integer(othint), deviance = double(1+NOS), beta = as.double(usethisbeta), othdbl = as.double(c(small = control$SmallNo, epsilon = control$epsilon, .Machine$double.eps, iKvector = rep_len(control$iKvector, NOS), iShape = rep_len(control$iShape, NOS)))) else .C("cqo_2", numat = as.double(numat), as.double(ymat), as.double(if (p1) xmat[, control$colx1.index] else 999), as.double(wvec), etamat = as.double(usethiseta), moff = double(if (I.tol) n else 1), fv = double(NOS*n), z = double(n*M), wz = double(n*M), U = double(M*n), bnumat = as.double(bnumat), qr = double(nstar*pstar), qraux = double(pstar), qpivot = integer(pstar), as.integer(n), as.integer(M), NOS = as.integer(NOS), 
as.integer(nstar), dim1U = as.integer(M), errcode = integer(1 + NOS), othint = as.integer(othint), deviance = double(1+NOS), beta = as.double(usethisbeta), othdbl = as.double(c(small = control$SmallNo, epsilon = control$epsilon, .Machine$double.eps, iKvector = rep_len(control$iKvector, NOS), iShape = rep_len(control$iShape, NOS)))) if (ans1$errcode[1] == 0) { assign2VGAMenv(c("etamat", "z", "U", "beta", "deviance"), ans1, prefix = ".VGAM.CQO.") assign(".VGAM.CQO.cmatrix", cmatrix, envir = VGAMenv) assign(".VGAM.CQO.ocmatrix", ocmatrix, envir = VGAMenv) } else { warning("error code in callcqoc = ", ans1$errcode[1]) if (nice31) { } rmfromVGAMenv(c("etamat", "z", "U", "beta", "deviance", "cmatrix", "ocmatrix"), prefix = ".VGAM.CQO.") } if (control$trace) flush.console() if (allofit) list(deviance = ans1$deviance[1], alldeviance = ans1$deviance[-1], coefficients = ans1$beta) else ans1$deviance[1] } calldcqo <- function(cmatrix, etamat, xmat, ymat, wvec, X.vlm.1save, modelno, Control, n, M, p1star, p2star, nice31, allofit = FALSE) { control <- Control Rank <- control$Rank p1 <- length(control$colx1.index); p2 <- length(control$colx2.index) dim(cmatrix) <- c(p2, Rank) # for crow1C xmat2 <- xmat[, control$colx2.index, drop = FALSE] #ccc numat <- double(n*Rank) #ccc pstar <- p1star + p2star maxMr <- max(M, Rank) nstar <- if (nice31) ifelse(modelno == 3 || modelno == 5,n*2,n) else n*M NOS <- ifelse(modelno == 3 || modelno == 5, M/2, M) lenbeta <- pstar * ifelse(nice31, NOS, 1) if (I.tol <- control$I.tolerances) { if (Rank > 1) { numat <- xmat[, control$colx2.index, drop=FALSE] %*% cmatrix evnu <- eigen(var(numat), symmetric = TRUE) cmatrix <- cmatrix %*% evnu$vector } cmatrix <- crow1C(cmatrix, control$Crow1positive) numat <- xmat[,control$colx2.index,drop=FALSE] %*% cmatrix sdnumat <- apply(numat, 2, sd) for (lookat in 1:Rank) if (sdnumat[lookat] > control$MUXfactor[lookat] * control$isd.latvar[lookat]) { muxer <- control$isd.latvar[lookat] * control$MUXfactor[lookat] / sdnumat[lookat] cmatrix[, lookat] <- cmatrix[, lookat] * muxer if (control$trace) { cat(paste("Taking evasive action for latent variable ", lookat, ".\n", sep = "")) flush.console() } rmfromVGAMenv(c("etamat", "z", "U", "beta", "deviance", "cmatrix", "ocmatrix"), prefix = ".VGAM.CQO.") } } else { numat <- xmat[,control$colx2.index,drop=FALSE] %*% cmatrix evnu <- eigen(var(numat), symmetric = TRUE) temp7 <- if (Rank > 1) evnu$vector %*% diag(evnu$value^(-0.5)) else evnu$vector %*% evnu$value^(-0.5) cmatrix <- cmatrix %*% temp7 cmatrix <- crow1C(cmatrix, control$Crow1positive) numat <- xmat[, control$colx2.index, drop = FALSE] %*% cmatrix } inited <- ifelse(exists(".VGAM.CQO.etamat", envir = VGAMenv), 1, 0) usethiseta <- if (inited == 1) getfromVGAMenv("etamat", prefix = ".VGAM.CQO.") else t(etamat) usethisbeta <- if (inited == 2) getfromVGAMenv("beta", prefix = ".VGAM.CQO.") else double(lenbeta) othint <- c(Rank, control$eq.tol, pstar, dimw = 1, inited = inited, modelno, maxitl = control$maxitl, actnits = 0, twice = 0, p1star = p1star, p2star = p2star, nice31 = nice31, lenbeta, I.tol = I.tol, control$trace, p1, p2, control$imethod) # other ints bnumat <- if (nice31) matrix(0,nstar,pstar) else cbind(matrix(0,nstar,p2star), X.vlm.1save) flush.console() ans1 <- .C("dcqo1", numat = as.double(numat), as.double(ymat), as.double(if (p1) xmat[,control$colx1.index] else 999), as.double(wvec), etamat = as.double(usethiseta), moff = double(if (I.tol) n else 1), fv = double(NOS*n), z = double(n*M), wz = double(n*M), U = double(M*n), bnumat = 
as.double(bnumat), qr = double(nstar * pstar), qraux = double(pstar), qpivot = integer(pstar), as.integer(n), as.integer(M), NOS = as.integer(NOS), as.integer(nstar), dim1U = as.integer(M), errcode = integer(1 + NOS), othint = as.integer(othint), deviance = double(1 + NOS), beta = as.double(usethisbeta), othdbl = as.double(c(small = control$SmallNo, epsilon = control$epsilon, .Machine$double.eps, iKvector = rep_len(control$iKvector, NOS), iShape = rep_len(control$iShape, NOS))), xmat2 = as.double(xmat2), cmat = as.double(cmatrix), p2 = as.integer(p2), deriv = double(p2*Rank), hstep = as.double(control$Hstep)) if (ans1$errcode[1] != 0) { warning("error code in calldcqo = ", ans1$errcode[1]) } flush.console() ans1$deriv } checkCMCO <- function(Hlist, control, modelno) { p1 <- length(colx1.index <- control$colx1.index) p2 <- length(colx2.index <- control$colx2.index) if (p1 + p2 != length(Hlist)) stop("'Hlist' is the wrong length") if (p1 == 0 || p2 == 0) stop("Some variables are needed in noRRR and non-noRRR arguments") if (all(names(colx1.index) != "(Intercept)")) stop("an intercept term must be in the argument 'noRRR' formula") Hlist1 <- vector("list", p1) Hlist2 <- vector("list", p2) for (kk in 1:p1) Hlist1[[kk]] <- Hlist[[(colx1.index[kk])]] for (kk in 1:p2) Hlist2[[kk]] <- Hlist[[(colx2.index[kk])]] if (modelno == 3 || modelno == 5) { if (p1 > 1) for (kk in 2:p1) Hlist1[[kk]] <- (Hlist1[[kk]])[c(TRUE,FALSE),,drop = FALSE] for (kk in 1:p2) Hlist2[[kk]] <- (Hlist2[[kk]])[c(TRUE,FALSE),,drop = FALSE] } if (!all(trivial.constraints(Hlist2) == 1)) stop("the constraint matrices for the non-noRRR terms ", "are not trivial") if (!trivial.constraints(Hlist1[[1]])) stop("the constraint matrices for intercept term is ", "not trivial") if (p1 > 1) for (kk in 2:p1) if (!trivial.constraints(list(Hlist1[[kk]]))) stop("the constraint matrices for some 'noRRR' ", "terms is not trivial") nice31 <- if (control$Quadratic) (!control$eq.tol || control$I.tolerances) else TRUE as.numeric(nice31) } cqo.fit <- function(x, y, w = rep_len(1, length(x[, 1])), etastart = NULL, mustart = NULL, coefstart = NULL, offset = 0, family, control = qrrvglm.control(...), constraints = NULL, extra = NULL, Terms = Terms, function.name = "cqo", ...) 
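# cqo.fit() is the fitting workhorse behind cqo(): it initialises the A
# and C matrices (C optionally via .Init.Poisson.QO() below) and then,
# within the optim.maxit loop, alternates optim() over C with updates
# of the remaining regression coefficients.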
{ modelno <- quasi.newton <- NOS <- z <- fv <- NULL if (!all(offset == 0)) stop("cqo.fit() cannot handle offsets") eff.n <- nrow(x) # + sum(abs(w[1:nrow(x)])) specialCM <- NULL post <- list() nonparametric <- FALSE epsilon <- control$epsilon maxitl <- control$maxitl save.weights <- control$save.weights trace <- control$trace orig.stepsize <- control$stepsize ny <- names(y) n <- dim(x)[1] intercept.only <- ncol(x) == 1 && dimnames(x)[[2]] == "(Intercept)" y.names <- predictors.names <- NULL # May be overwritten in @initialize n.save <- n Rank <- control$Rank rrcontrol <- control # if (length(family@initialize)) eval(family@initialize) # Initialize mu and M (and optionally w) n <- n.save eval(rrr.init.expression) if (length(etastart)) { eta <- etastart mu <- if (length(mustart)) mustart else if (length(body(slot(family, "linkinv")))) slot(family, "linkinv")(eta, extra) else warning("argument 'etastart' assigned a value ", "but there is no 'linkinv' slot to use it") } if (length(mustart)) { mu <- mustart if (length(body(slot(family, "linkfun")))) { eta <- slot(family, "linkfun")(mu, extra) } else { warning("argument 'mustart' assigned a value ", "but there is no 'link' slot to use it") } } M <- if (is.matrix(eta)) ncol(eta) else 1 if (is.character(rrcontrol$Dzero)) { index <- match(rrcontrol$Dzero, dimnames(as.matrix(y))[[2]]) if (anyNA(index)) stop("Dzero argument didn't fully match y-names") if (length(index) == M) stop("all linear predictors are linear in the", " latent variable(s); so set 'Quadratic=FALSE'") rrcontrol$Dzero <- control$Dzero <- index } if (length(family@constraints)) eval(family@constraints) special.matrix <- matrix(-34956.125, M, M) # An unlikely used matrix just.testing <- cm.VGAM(special.matrix, x, rrcontrol$noRRR, constraints) findex <- trivial.constraints(just.testing, special.matrix) tc1 <- trivial.constraints(constraints) if (!control$Quadratic && sum(!tc1)) { for (ii in names(tc1)) if (!tc1[ii] && !any(ii == names(findex)[findex == 1])) warning("'", ii, "' is a non-trivial constraint that will ", "be overwritten by reduced-rank regression") } if (all(findex == 1)) stop("use vglm(), not rrvglm()!") colx1.index <- names.colx1.index <- NULL dx2 <- dimnames(x)[[2]] if (sum(findex)) { asx <- attr(x, "assign") for (ii in names(findex)) if (findex[ii]) { names.colx1.index <- c(names.colx1.index, dx2[asx[[ii]]]) colx1.index <- c(colx1.index, asx[[ii]]) } names(colx1.index) <- names.colx1.index } rrcontrol$colx1.index <- control$colx1.index <- colx1.index colx2.index <- 1:ncol(x) names(colx2.index) <- dx2 colx2.index <- colx2.index[-colx1.index] p1 <- length(colx1.index); p2 <- length(colx2.index) rrcontrol$colx2.index <- control$colx2.index <- colx2.index Amat <- if (length(rrcontrol$Ainit)) rrcontrol$Ainit else matrix(rnorm(M * Rank, sd = rrcontrol$sd.Cinit), M, Rank) Cmat <- if (length(rrcontrol$Cinit)) { matrix(rrcontrol$Cinit, p2, Rank) } else { if (!rrcontrol$Use.Init.Poisson.QO) { matrix(rnorm(p2 * Rank, sd = rrcontrol$sd.Cinit), p2, Rank) } else { .Init.Poisson.QO(ymat = as.matrix(y), X1 = x[, colx1.index, drop = FALSE], X2 = x[, colx2.index, drop = FALSE], Rank = rrcontrol$Rank, trace = rrcontrol$trace, max.ncol.etamat = rrcontrol$Etamat.colmax, Crow1positive = rrcontrol$Crow1positive, isd.latvar = rrcontrol$isd.latvar, constwt = family@vfamily[1] %in% c("negbinomial", "gamma2", "gaussianff"), takelog = any(family@vfamily[1] != c("gaussianff"))) } } if (rrcontrol$I.tolerances) { latvarmat <- x[, rrcontrol$colx2.index, drop = FALSE] %*% Cmat latvarmatmeans <- 
t(latvarmat) %*% matrix(1/n, n, 1) if (!all(abs(latvarmatmeans) < 4)) warning("I.tolerances = TRUE but the variables making up the ", "latent variable(s) do not appear to be centered.") } if (modelno %in% c(3, 5)) Amat[c(FALSE, TRUE), ] <- 0 # Intercept only for log(k) if (length(control$str0)) Amat[control$str0, ] <- 0 rrcontrol$Ainit <- control$Ainit <- Amat # Good for valt() rrcontrol$Cinit <- control$Cinit <- Cmat # Good for valt() Hlist <- process.constraints(constraints, x, M, specialCM = specialCM) nice31 <- checkCMCO(Hlist, control = control, modelno = modelno) ncolHlist <- unlist(lapply(Hlist, ncol)) X.vlm.save <- if (nice31) { NULL } else { tmp500 <- lm2qrrvlm.model.matrix(x = x, Hlist = Hlist, C = Cmat, control = control) xsmall.qrr <- tmp500$new.latvar.model.matrix H.list <- tmp500$constraints latvar.mat <- tmp500$latvar.mat if (length(tmp500$offset)) { offset <- tmp500$offset } lm2vlm.model.matrix(xsmall.qrr, H.list, xij = control$xij) } if (length(coefstart) && length(X.vlm.save)) { eta <- if (ncol(X.vlm.save) > 1) X.vlm.save %*% coefstart + offset else X.vlm.save * coefstart + offset eta <- if (M > 1) matrix(eta, ncol = M, byrow = TRUE) else c(eta) mu <- family@linkinv(eta, extra) } rmfromVGAMenv(c("etamat", "z", "U", "beta", "deviance", "cmatrix", "ocmatrix"), prefix = ".VGAM.CQO.") eval(cqo.init.derivative.expression) for (iter in 1:control$optim.maxit) { eval(cqo.derivative.expression) if (!quasi.newton$convergence) break } if (maxitl > 1 && iter >= maxitl && quasi.newton$convergence) warning("convergence not obtained in", maxitl, "iterations.") if (length(family@fini)) eval(family@fini) asgn <- attr(x, "assign") coefs <- getfromVGAMenv("beta", prefix = ".VGAM.CQO.") if (control$I.tolerances) { if (NOS == M) { coefs <- c(t(matrix(coefs, ncol = M))) # Get into right order } else { coefs <- coefs } } dn <- labels(x) yn <- dn[[1]] xn <- dn[[2]] residuals <- z - fv if (M == 1) { residuals <- as.vector(residuals) names(residuals) <- yn } else { dimnames(residuals) <- list(yn, predictors.names) } if (is.matrix(mu)) { if (length(dimnames(y)[[2]])) { y.names <- dimnames(y)[[2]] } if (length(dimnames(mu)[[2]])) { y.names <- dimnames(mu)[[2]] } dimnames(mu) <- list(yn, y.names) } else { names(mu) <- names(fv) y.names <- NULL } df.residual <- 55 - 8 - Rank*p2 fit <- list(assign = asgn, coefficients = coefs, constraints = Hlist, df.residual = df.residual, df.total = n*M, fitted.values = mu, offset = offset, residuals = residuals, terms = Terms) # terms: This used to be done in vglm() if (M == 1) { wz <- as.vector(wz) # Convert wz into a vector } fit$weights <- if (save.weights) wz else NULL misc <- list( colnames.x = xn, criterion = "deviance", function.name = function.name, intercept.only=intercept.only, predictors.names = predictors.names, M = M, n = n, nonparametric = nonparametric, orig.assign = attr(x, "assign"), p = ncol(x), ynames = dimnames(y)[[2]]) if (w[1] != 1 || any(w != w[1])) fit$prior.weights <- w if (length(family@last)) eval(family@last) edeviance <- getfromVGAMenv("deviance", prefix = ".VGAM.CQO.") crit.list <- list( deviance = edeviance[ 1], alldeviance = edeviance[-1]) if (is.character(y.names) && length(y.names) == length(crit.list$alldeviance)) names(crit.list$alldeviance) <- y.names structure(c(fit, list(predictors = matrix(eta, n, M), contrasts = attr(x, "contrasts"), control = control, crit.list = crit.list, extra = extra, family = family, iter = iter, misc = misc, post = post, ResSS = 000, x = x, y = y)), vclass = family@vfamily) } .Init.Poisson.QO <- 
function(ymat, X1, X2, Rank = 1, epsilon = 1/32, max.ncol.etamat = 10, trace = FALSE, Crow1positive = rep_len(TRUE, Rank), isd.latvar = rep_len(1, Rank), constwt = FALSE, takelog = TRUE) { print.CQO.expression <- expression({ if (trace && length(X2)) { cat("\nUsing initial values\n") dimnames(ans) <- list(dimnames(X2)[[2]], if (Rank == 1) "latvar" else paste("latvar", 1:Rank, sep = "")) print(if (p2 > 5) ans else t(ans), dig = 3) } flush.console() }) sd.scale.X2.expression <- expression({ if (length(isd.latvar)) { actualSD <- c( sqrt(diag(var(X2 %*% ans))) ) for (ii in 1:Rank) ans[,ii] <- ans[,ii] * isd.latvar[ii] / actualSD[ii] } }) Crow1positive <- if (length(Crow1positive)) rep_len(Crow1positive, Rank) else rep_len(TRUE, Rank) if (epsilon <= 0) stop("epsilon > 0 is required") ymat <- cbind(ymat) + epsilon # ymat == 0 cause problems NOS <- ncol(ymat) p2 <- ncol(X2) if (NOS < 2*Rank) { ans <- crow1C(matrix(rnorm(p2 * Rank, sd = 0.02), p2, Rank), Crow1positive) eval(sd.scale.X2.expression) if (NOS == 1) { eval(print.CQO.expression) return(ans) } else { ans.save <- ans; # ans.save contains scaled guesses } } calS <- 1:NOS # Set of all species available for the approximation effrank <- min(Rank, floor(NOS/2)) # Effective rank ncol.etamat <- min(if (length(X2)) floor(NOS/2) else effrank, max.ncol.etamat) etamat <- wts <- matrix(0, nrow = nrow(ymat), ncol = ncol.etamat) # has >=1 coln rr <- 1 for (ii in 1:floor(NOS/2)) { if (length(calS) < 2) break index <- sample(calS, size = 2) # Randomness here etamat[, rr] <- etamat[, rr] + (if (takelog) log(ymat[, index[1]] / ymat[, index[2]]) else ymat[, index[1]] - ymat[, index[2]]) wts[, rr] <- wts[, rr] + (if (constwt) 1 else ymat[, index[1]] + ymat[, index[2]]) calS <- setdiff(calS, index) rr <- (rr %% ncol.etamat) + 1 } if (trace) cat("\nObtaining initial values\n") if (length(X2)) { alt <- valt(x = cbind(X1, X2), z = etamat, U = sqrt(t(wts)), Rank = effrank, Hlist = NULL, Cinit = NULL, trace = FALSE, colx1.index = 1:ncol(X1), Criterion = "ResSS") temp.control <- list(Rank = effrank, colx1.index = 1:ncol(X1), Alpha = 0.5, colx2.index = (ncol(X1)+1):(ncol(X1) + ncol(X2)), Corner = FALSE, Svd.arg = TRUE, Uncorrelated.latvar = TRUE, Quadratic = FALSE) ans2 <- if (Rank > 1) rrr.normalize(rrcontrol = temp.control, A = alt$A, C = alt$C, x = cbind(X1, X2)) else alt ans <- crow1C(ans2$C, rep_len(Crow1positive, effrank)) Rank.save <- Rank Rank <- effrank eval(sd.scale.X2.expression) Rank <- Rank.save if (effrank < Rank) { ans <- cbind(ans, ans.save[,-(1:effrank)]) # ans is better } eval(print.CQO.expression) } else { xij <- NULL # temporary measure U <- t(sqrt(wts)) tmp <- vlm.wfit(xmat = X1, zmat = etamat, Hlist = NULL, U = U, matrix.out = TRUE, is.vlmX = FALSE, ResSS = TRUE, qr = FALSE, xij = xij) ans <- crow1C(as.matrix(tmp$resid), rep_len(Crow1positive, effrank)) if (effrank < Rank) { ans <- cbind(ans, ans.save[,-(1:effrank)]) # ans is better } if (Rank > 1) { evnu <- eigen(var(ans), symmetric = TRUE) ans <- ans %*% evnu$vector } if (length(isd.latvar)) { actualSD <- apply(cbind(ans), 2, sd) for (ii in 1:Rank) ans[,ii] <- ans[,ii] * isd.latvar[ii] / actualSD[ii] } ans <- crow1C(ans, rep_len(Crow1positive, Rank)) dimnames(ans) <- list(dimnames(X1)[[1]], if (Rank == 1) "latvar" else paste("latvar", 1:Rank, sep = "")) if (trace) { print(if (nrow(ans) > 10) t(ans) else ans, dig = 3) } } ans } cqo.init.derivative.expression <- expression({ which.optimizer <- if (control$Quadratic && control$FastAlgorithm) { "BFGS" } else { ifelse(iter <= 
rrcontrol$Switch.optimizer, "Nelder-Mead", "BFGS") } if (trace && control$OptimizeWrtC) { cat("\nUsing", which.optimizer, "algorithm\n") flush.console() } if (FALSE) { constraints <- replace.constraints(constraints, diag(M), rrcontrol$colx2.index) nice31 <- (!control$eq.tol || control$I.tolerances) && all(trivial.constraints(constraints) == 1) } NOS <- ifelse(modelno == 3 || modelno == 5, M/2, M) canfitok <- (exists("CQO.FastAlgorithm", envir = VGAMenv) && get("CQO.FastAlgorithm", envir = VGAMenv)) if (!canfitok) stop("cannot fit this model using fast algorithm") p2star <- if (nice31) ifelse(control$I.toleran, Rank, Rank + Rank*(Rank+1)/2) else (NOS*Rank + Rank*(Rank+1)/2 * ifelse(control$eq.tol, 1, NOS)) p1star <- if (nice31) ifelse(modelno %in% c(3, 5), 1+p1, p1) else (ncol(X.vlm.save) - p2star) X.vlm.1save <- if (p1star > 0) X.vlm.save[, -(1:p2star)] else NULL }) cqo.derivative.expression <- expression({ if (iter == 1 || quasi.newton$convergence) { quasi.newton <- optim(par = Cmat, fn = callcqoc, gr = if (control$GradientFunction) calldcqo else NULL, method = which.optimizer, control = list(fnscale = 1, trace = as.integer(control$trace), parscale = rep_len(control$Parscale, length(Cmat)), maxit = control$Maxit.optim), etamat = eta, xmat = x, ymat = y, wvec = w, X.vlm.1save = X.vlm.1save, modelno = modelno, Control = control, n = n, M = M, p1star = p1star, p2star = p2star, nice31 = nice31) z <- matrix(getfromVGAMenv("z", prefix = ".VGAM.CQO."), n, M) U <- matrix(getfromVGAMenv("U", prefix = ".VGAM.CQO."), M, n) } ocmatrix <- getfromVGAMenv("ocmatrix", prefix = ".VGAM.CQO.") maxdiff <- max(abs(c(ocmatrix) - c(quasi.newton$par)) / (1 + abs(c(ocmatrix)))) if (maxdiff < 1.0e-4) { Cmat <- getfromVGAMenv("cmatrix", prefix = ".VGAM.CQO.") } else { warning("solution does not correspond to .VGAM.CQO.cmatrix") } alt <- valt.1iter(x = x, z = z, U = U, Hlist = Hlist, C = Cmat, nice31 = nice31, control = rrcontrol, lp.names = predictors.names, MSratio = M / NOS) if (length(alt$offset)) offset <- alt$offset B1.save <- alt$B1 # Put later into extra tmp.fitted <- alt$fitted # contains \bI_{Rank} \bnu if Corner if (trace && control$OptimizeWrtC) { cat("\n") cat(which.optimizer, "using optim():", "\n") cat("Objective =", quasi.newton$value, "\n") cat("Parameters (= c(C)) = ", if (length(quasi.newton$par) < 5) "" else "\n") cat(alt$Cmat, fill = TRUE) cat("\n") cat("Number of function evaluations =", quasi.newton$count[1], "\n") if (length(quasi.newton$message)) cat("Message =", quasi.newton$message, "\n") cat("\n") flush.console() } Amat <- alt$Amat # Cmat <- alt$Cmat # Dmat <- alt$Dmat # eval(cqo.end.expression) # }) cqo.end.expression <- expression({ rmfromVGAMenv(c("etamat"), prefix = ".VGAM.CQO.") if (control$Quadratic) { if (!length(extra)) extra <- list() extra$Amat <- Amat # Not the latest iteration ?? 
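# The current A, C and D estimates are stashed in 'extra' so that
# post-fitting methods on "qrrvglm" objects (see the setMethod() calls
# later in this file) can retrieve them without refitting.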
extra$Cmat <- Cmat # Saves the latest iteration extra$Dmat <- Dmat # Not the latest iteration extra$B1 <- B1.save # Not the latest iteration (not good) } else { Hlist <- replace.constraints(Hlist.save, Amat, colx2.index) } fv <- tmp.fitted # Contains \bI \bnu eta <- fv + offset mu <- family@linkinv(eta, extra) if (anyNA(mu)) warning("there are NAs in mu") deriv.mu <- eval(family@deriv) wz <- eval(family@weight) if (control$checkwz) wz <- checkwz(wz, M = M, trace = trace, wzeps = control$wzepsilon) U <- vchol(wz, M = M, n = n, silent = !trace) tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = n) z <- eta + vbacksub(U, tvfor, M = M, n = n) - offset # Contains \bI \bnu }) crow1C <- function(cmat, crow1positive = rep_len(TRUE, ncol(cmat)), amat = NULL) { if (!is.logical(crow1positive) || length(crow1positive) != ncol(cmat)) stop("bad input in crow1C") for (LV in 1:ncol(cmat)) if (( crow1positive[LV] && cmat[1, LV] < 0) || (!crow1positive[LV] && cmat[1, LV] > 0)) { cmat[, LV] <- -cmat[, LV] if (length(amat)) amat[, LV] <- -amat[, LV] } if (length(amat)) list(cmat = cmat, amat = amat) else cmat } printqrrvglm <- function(x, ...) { if (!is.null(cl <- x@call)) { cat("Call:\n") dput(cl) } if (FALSE) { } if (FALSE) { nobs <- if (length(x@df.total)) x@df.total else length(x@residuals) rdf <- x@df.residual if (!length(rdf)) rdf <- nobs - Rank } cat("\n") if (length(deviance(x))) cat("Residual deviance:", format(deviance(x)), "\n") if (FALSE && length(x@criterion)) { ncrit <- names(x@criterion) for (ii in ncrit) if (ii != "loglikelihood" && ii != "deviance") cat(paste(ii, ":", sep = ""), format(x@criterion[[ii]]), "\n") } invisible(x) } setMethod("Coef", "qrrvglm", function(object, ...) Coef.qrrvglm(object, ...)) setMethod("coef", "qrrvglm", function(object, ...) Coef.qrrvglm(object, ...)) setMethod("coefficients", "qrrvglm", function(object, ...) Coef.qrrvglm(object, ...)) if (!isGeneric("deviance")) setGeneric("deviance", function(object, ...) standardGeneric("deviance")) setMethod("deviance", "qrrvglm", function(object,...) object@criterion$deviance) setMethod("fitted", "qrrvglm", function(object, ...) fittedvlm(object)) setMethod("fitted.values", "qrrvglm", function(object, ...) fittedvlm(object)) setMethod("show", "qrrvglm", function(object) printqrrvglm(object)) VGAM/R/qtplot.q0000644000176200001440000005642313135276757012744 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
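# Quantile and density plotting support for LMS-type quantile regression.
# For lms.bcn() the fitted p-percentile follows from inverting the
# Box-Cox transform used in deplot.lms.bcn() and cdf.lms.bcn() below:
#   q(p) = mu * (1 + lambda * sigma * qnorm(p))^(1/lambda).
# An illustrative (not canonical) call might be:
#   fit <- vgam(BMI ~ s(age, df = 4), lms.bcn(zero = 1), data = bmi.nz)
#   qtplot(fit, percentiles = c(5, 50, 95))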
qtplot.lms.bcn <- function(percentiles = c(25, 50, 75), eta = NULL, yoffset = 0) { lp <- length(percentiles) answer <- matrix(NA_real_, nrow(eta), lp, dimnames = list(dimnames(eta)[[1]], paste(as.character(percentiles), "%", sep = ""))) for (ii in 1:lp) { answer[, ii] <- qlms.bcn(p = percentiles[ii]/100, lambda = eta[, 1], mu = eta[, 2], sigma = eta[, 3]) } answer } qtplot.lms.bcg <- function(percentiles = c(25,50,75), eta = NULL, yoffset = 0) { cc <- percentiles lp <- length(percentiles) answer <- matrix(NA_real_, nrow(eta), lp, dimnames = list(dimnames(eta)[[1]], paste(as.character(percentiles), "%", sep = ""))) lambda <- eta[, 1] sigma <- eta[, 3] shape <- 1 / (lambda * sigma)^2 for (ii in 1:lp) { ccc <- rep_len(cc[ii]/100, nrow(eta)) ccc <- ifelse(lambda > 0, ccc, 1-ccc) answer[, ii] <- eta[, 2] * (qgamma(ccc, shape = shape)/shape)^(1/lambda) } answer } qtplot.lms.yjn2 <- qtplot.lms.yjn <- function(percentiles = c(25,50,75), eta = NULL, yoffset = 0) { cc <- percentiles lp <- length(percentiles) answer <- matrix(NA_real_, nrow(eta), lp, dimnames = list(dimnames(eta)[[1]], paste(as.character(percentiles), "%", sep = ""))) lambda <- eta[, 1] mu <- eta[, 2] sigma <- eta[, 3] # Link function already taken care of above for (ii in 1:lp) { ccc <- mu + sigma * qnorm(cc[ii]/100) answer[, ii] <- yeo.johnson(ccc, lambda, inverse= TRUE) - yoffset } answer } qtplot.default <- function(object, ...) { warning("no methods function. Returning the object") invisible(object) } "qtplot.vglm" <- function(object, Attach= TRUE, ...) { LL <- length(object@family@vfamily) newcall <- paste("qtplot.", object@family@vfamily[LL], "(object, ...)", sep = "") newcall <- parse(text = newcall)[[1]] if (Attach) { object@post$qtplot <- eval(newcall) invisible(object) } else eval(newcall) } qtplot.lmscreg <- function(object, newdata = NULL, percentiles = object@misc$percentiles, show.plot = TRUE, ...) { same <- length(percentiles) == length(object@misc$percentiles) && all(percentiles == object@misc$percentiles) lp <- length(percentiles) if (same) { fitted.values <- if (!length(newdata)) object@fitted.values else { predict(object, newdata = newdata, type = "response") } fitted.values <- as.matrix(fitted.values) } else { if (!is.numeric(percentiles)) stop("'percentiles' must be specified") eta <- if (length(newdata)) predict(object, newdata = newdata, type = "link") else object@predictors if (!length(double.check.earg <- object@misc$earg)) double.check.earg <- list(theta = NULL) eta <- eta2theta(eta, link = object@misc$link, earg = double.check.earg) # lambda, mu, sigma if (!is.logical(expectiles <- object@misc$expectiles)) { expectiles <- FALSE } newcall <- paste(if (expectiles) "explot." else "qtplot.", object@family@vfamily[1], "(percentiles = percentiles", ", eta = eta, yoffset=object@misc$yoffset)", sep = "") newcall <- parse(text = newcall)[[1]] fitted.values <- as.matrix( eval(newcall) ) dimnames(fitted.values) <- list(dimnames(eta)[[1]], paste(as.character(percentiles), "%", sep = "")) } if (show.plot) { plotqtplot.lmscreg(fitted.values = fitted.values, object = object, newdata = newdata, lp = lp, percentiles = percentiles, ...) 
} list(fitted.values = fitted.values, percentiles = percentiles) } plotqtplot.lmscreg <- function(fitted.values, object, newdata = NULL, percentiles = object@misc$percentiles, lp = NULL, add.arg = FALSE, y = if (length(newdata)) FALSE else TRUE, spline.fit = FALSE, label = TRUE, size.label = 0.06, xlab = NULL, ylab = "", pch = par()$pch, pcex = par()$cex, pcol.arg = par()$col, xlim = NULL, ylim = NULL, llty.arg = par()$lty, lcol.arg = par()$col, llwd.arg = par()$lwd, tcol.arg = par()$col, tadj = 1, ...) { if (!length(newdata)) { X <- model.matrixvlm(object, type = "lm") if (is.matrix(X) && length(object@y) && ncol(X)==2 && dimnames(X)[[2]][1] == "(Intercept)") { xx <- X[, 2] if (is.null(xlab)) { xlab <- if (object@misc$nonparametric) as.vector(slot(object, "s.xargument")) else names(object@assign)[2] } if (!add.arg) { if (!is.numeric(xlim)) xlim <- if (label) c(min(xx), max(xx) + size.label*diff(range(xx))) else c(min(xx), max(xx)) fred <- cbind(object@y, fitted.values) if (!is.numeric(ylim)) ylim <- c(min(fred), max(fred)) matplot(x = xx, y = fred, xlab = xlab, ylab = ylab, type = "n", xlim = xlim, ylim = ylim, ...) } if (y && length(object@y)) matpoints(x = xx, y = object@y, pch = pch, cex = pcex, col = pcol.arg) } else { warning("there is not a single covariate. ", "Returning the object.") return(fitted.values) } } else { firstterm <- attr(terms(object), "term.labels")[1] if (object@misc$nonparametric && length(object@s.xargument[firstterm])) firstterm <- object@s.xargument[firstterm] xx <- newdata[[firstterm]] if (!is.numeric(xx)) stop("couldn't extract the 'primary' variable from newdata") if (!add.arg) { if (is.null(xlab)) xlab <- firstterm if (!is.numeric(xlim)) xlim <- if (label) c(min(xx), max(xx)+size.label*diff(range(xx))) else c(min(xx), max(xx)) if (!is.numeric(ylim)) ylim <- c(min(fitted.values), max(fitted.values)) matplot(x = xx, y = fitted.values, xlab = xlab, ylab = ylab, type = "n", xlim = xlim, ylim = ylim, col = pcol.arg) } if (y && length(object@y)) matpoints(x = xx, y = object@y, pch = pch, cex = pcex, col = pcol.arg) } tcol.arg <- rep_len(tcol.arg, lp) lcol.arg <- rep_len(lcol.arg, lp) llwd.arg <- rep_len(llwd.arg, lp) llty.arg <- rep_len(llty.arg, lp) for (ii in 1:lp) { temp <- cbind(xx, fitted.values[, ii]) temp <- temp[sort.list(temp[, 1]), ] index <- !duplicated(temp[, 1]) if (spline.fit) { lines(spline(temp[index, 1], temp[index, 2]), lty = llty.arg[ii], col = lcol.arg[ii], err = -1, lwd = llwd.arg[ii]) } else { lines(temp[index, 1], temp[index, 2], lty = llty.arg[ii], col = lcol.arg[ii], err = -1, lwd = llwd.arg[ii]) } if (label) text(par()$usr[2], temp[nrow(temp), 2], paste( percentiles[ii], "%", sep = ""), adj = tadj, col = tcol.arg[ii], err = -1) } invisible(fitted.values) } if (TRUE) { if (!isGeneric("qtplot")) setGeneric("qtplot", function(object, ...) standardGeneric("qtplot")) setMethod("qtplot", signature(object = "vglm"), function(object, ...) invisible(qtplot.vglm(object, ...))) setMethod("qtplot", signature(object = "vgam"), function(object, ...) invisible(qtplot.vglm(object, ...))) } "qtplot.vextremes" <- function(object, ...) { newcall <- paste("qtplot.", object@family@vfamily[1], "(object = object, ... 
)", sep = "") newcall <- parse(text = newcall)[[1]] eval(newcall) } qtplot.gumbelff <- qtplot.gumbel <- function(object, show.plot = TRUE, y.arg = TRUE, spline.fit = FALSE, label = TRUE, R = object@misc$R, percentiles = object@misc$percentiles, add.arg = FALSE, mpv = object@misc$mpv, xlab = NULL, ylab = "", main = "", pch = par()$pch, pcol.arg = par()$col, llty.arg = par()$lty, lcol.arg = par()$col, llwd.arg = par()$lwd, tcol.arg = par()$col, tadj = 1, ...) { if (!is.logical(mpv) || length(mpv) != 1) stop("bad input for 'mpv'") if (!length(percentiles) || (!is.Numeric(percentiles, positive = TRUE) || max(percentiles) >= 100)) stop("bad input for 'percentiles'") eta <- predict(object) if (is.Numeric(R)) R <- rep_len(R, nrow(eta)) if (!is.Numeric(percentiles)) stop("the 'percentiles' argument needs to be assigned a value") extra <- object@extra extra$mpv <- mpv # Overwrite if necessary extra$R <- R extra$percentiles <- percentiles fitted.values <- object@family@linkinv(eta = eta, extra = extra) answer <- list(fitted.values = fitted.values, percentiles = percentiles) if (!show.plot) return(answer) lp <- length(percentiles) # Does not include mpv tcol.arg <- rep_len(tcol.arg, lp+mpv) lcol.arg <- rep_len(lcol.arg, lp+mpv) llwd.arg <- rep_len(llwd.arg, lp+mpv) llty.arg <- rep_len(llty.arg, lp+mpv) X <- model.matrixvlm(object, type = "lm") if (is.matrix(X) && length(object@y) && ncol(X)==2 && dimnames(X)[[2]][1] == "(Intercept)") { xx <- X[, 2] if (!length(xlab)) xlab <- if (object@misc$nonparametric && length(object@s.xargument)) object@s.xargument else names(object@assign)[2] if (!add.arg) matplot(x = xx, y = cbind(object@y, fitted.values), main = main, xlab = xlab, ylab = ylab, type = "n", ...) if (y.arg) { matpoints(x = xx, y = object@y, pch = pch, col = pcol.arg) } } else { warning("there is not a single covariate.") return(answer) } for (ii in 1:(lp+mpv)) { temp <- cbind(xx, fitted.values[, ii]) temp <- temp[sort.list(temp[, 1]), ] index <- !duplicated(temp[, 1]) if (spline.fit) { lines(spline(temp[index, 1], temp[index, 2]), lty = llty.arg[ii], col = lcol.arg[ii], lwd = llwd.arg[ii]) } else { lines(temp[index, 1], temp[index, 2], lty = llty.arg[ii], col = lcol.arg[ii], lwd = llwd.arg[ii]) } if (label) { mylabel <- (dimnames(answer$fitted)[[2]])[ii] text(par()$usr[2], temp[nrow(temp), 2], mylabel, adj = tadj, col = tcol.arg[ii], err = -1, cex = par()$cex.axis, xpd = par()$xpd) } } invisible(answer) } deplot.lms.bcn <- function(object, newdata, y.arg, eta0) { if (!any(object@family@vfamily == "lms.bcn")) warning("I think you've called the wrong function") Zvec <- ((y.arg/eta0[, 2])^(eta0[, 1]) -1) / (eta0[, 1] * eta0[, 3]) dZ.dy <- ((y.arg/eta0[, 2])^(eta0[, 1]-1)) / (eta0[, 2] * eta0[, 3]) yvec <- dnorm(Zvec) * abs(dZ.dy) list(newdata = newdata, y = y.arg, density = yvec) } deplot.lms.bcg <- function(object, newdata, y.arg, eta0) { if (!any(object@family@vfamily == "lms.bcg")) warning("I think you've called the wrong function") Zvec <- (y.arg/eta0[, 2])^(eta0[, 1]) # different from lms.bcn dZ.dy <- ((y.arg/eta0[, 2])^(eta0[, 1]-1)) * eta0[, 1] / eta0[, 2] lambda <- eta0[, 1] sigma <- eta0[, 3] shape <- 1 / (lambda * sigma)^2 yvec <- dgamma(Zvec, shape = shape, rate = shape) * abs(dZ.dy) list(newdata = newdata, y = y.arg, density = yvec) } deplot.lms.yjn2 <- deplot.lms.yjn <- function(object, newdata, y.arg, eta0) { if (!length(intersect(object@family@vfamily, c("lms.yjn","lms.yjn2")))) warning("I think you've called the wrong function") lambda <- eta0[, 1] Zvec <- 
(yeo.johnson(y.arg+object@misc$yoffset, lambda = eta0[, 1]) - eta0[, 2]) / eta0[, 3] dZ.dy <- dyj.dy.yeojohnson(y.arg+object@misc$yoffset, lambda = eta0[, 1]) / eta0[, 3] yvec <- dnorm(Zvec) * abs(dZ.dy) list(newdata = newdata, y = y.arg, density = yvec) } deplot.default <- function(object, ...) { warning("no methods function. Returning the object") invisible(object) } "deplot.vglm" <- function(object, Attach= TRUE, ...) { LL <- length(object@family@vfamily) newcall <- paste("deplot.", object@family@vfamily[LL], "(object, ...)", sep = "") newcall <- parse(text = newcall)[[1]] if (Attach) { object@post$deplot <- eval(newcall) invisible(object) } else { eval(newcall) } } "deplot.lmscreg" <- function(object, newdata = NULL, x0, y.arg, show.plot = TRUE, ...) { if (!length(newdata)) { newdata <- data.frame(x0=x0) var1name <- attr(terms(object), "term.labels")[1] names(newdata) <- var1name ii <- if (object@misc$nonparametric) slot(object, "s.xargument") else NULL if (length(ii) && any(logic.vec <- names(slot(object, "s.xargument")) == var1name)) names(newdata) <- ii[logic.vec] # should be the first one } eta0 <- if (length(newdata)) predict(object, newdata) else predict(object) if (!length(double.check.earg <- object@misc$earg)) double.check.earg <- list(theta = NULL) eta0 <- eta2theta(eta0, link = object@misc$link, earg = double.check.earg) # lambda, mu, sigma newcall <- paste("deplot.", object@family@vfamily[1], "(object, newdata, y.arg = y.arg, eta0 = eta0)", sep = "") newcall <- parse(text = newcall)[[1]] answer <- eval(newcall) if (show.plot) plotdeplot.lmscreg(answer, y.arg=y.arg, ...) invisible(answer) } plotdeplot.lmscreg <- function(answer, y.arg, add.arg= FALSE, xlab = "", ylab = "density", xlim = NULL, ylim = NULL, llty.arg = par()$lty, col.arg = par()$col, llwd.arg = par()$lwd, ...) { yvec <- answer$density xx <- y.arg if (!add.arg) { if (!is.numeric(xlim)) xlim <- c(min(xx), max(xx)) if (!is.numeric(ylim)) ylim <- c(min(yvec), max(yvec)) matplot(x = xx, y = yvec, xlab = xlab, ylab = ylab, type = "n", xlim = xlim, ylim = ylim, ...) } temp <- cbind(xx, yvec) temp <- temp[sort.list(temp[, 1]), ] index <- !duplicated(temp[, 1]) lines(temp[index, 1], temp[index, 2], lty = llty.arg, col = col.arg, err = -1, lwd = llwd.arg) invisible(answer) } if (TRUE) { if (!isGeneric("deplot")) setGeneric("deplot", function(object, ...) standardGeneric("deplot")) setMethod("deplot", signature(object = "vglm"), function(object, ...) invisible(deplot.vglm(object, ...))) setMethod("deplot", signature(object = "vgam"), function(object, ...) invisible(deplot.vglm(object, ...))) } if (TRUE) { if (!isGeneric("cdf")) setGeneric("cdf", function(object, ...) standardGeneric("cdf")) setMethod("cdf", signature(object = "vglm"), function(object, ...) cdf.vglm(object, ...)) setMethod("cdf", signature(object = "vgam"), function(object, ...) cdf.vglm(object, ...)) } "cdf.vglm" <- function(object, newdata = NULL, Attach = FALSE, ...) { LL <- length(object@family@vfamily) newcall <- paste("cdf.", object@family@vfamily[LL], "(object, newdata, ...)", sep = "") newcall <- parse(text = newcall)[[1]] if (Attach) { object@post$cdf <- eval(newcall) object } else { eval(newcall) } } "cdf.lmscreg" <- function(object, newdata = NULL, ...) 
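# Evaluates the fitted conditional CDF F(y | x) for an LMS-type
# quantile regression: the linear predictors are back-transformed to
# (lambda, mu, sigma) and the call is dispatched to the matching
# family-specific cdf.* function.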
{ if (!length(newdata)) return(object@post$cdf) eta0 <- if (length(newdata)) predict(object, newdata) else predict(object) if (!length(double.check.earg <- object@misc$earg)) double.check.earg <- list(theta = NULL) eta0 <- eta2theta(eta0, link = object@misc$link, earg = double.check.earg) # lambda, mu, sigma y <- vgety(object, newdata) # Includes yoffset newcall <- paste("cdf.", object@family@vfamily[1], "(y, eta0, ... )", sep = "") newcall <- parse(text = newcall)[[1]] eval(newcall) } cdf.lms.bcn <- function(y, eta0) { Zvec <- ((y/eta0[, 2])^(eta0[, 1]) -1) / (eta0[, 1] * eta0[, 3]) Zvec[abs(eta0[, 3]) < 1e-5] <- log(y/eta0[, 2]) / eta0[, 3] ans <- c(pnorm(Zvec)) names(ans) <- dimnames(eta0)[[1]] ans } cdf.lms.bcg <- function(y, eta0) { shape <- 1 / (eta0[, 1] * eta0[, 3])^2 Gvec <- shape * (y/eta0[, 2])^(eta0[, 1]) ans <- c(pgamma(Gvec, shape = shape)) ans[eta0[, 1] < 0] <- 1-ans names(ans) <- dimnames(eta0)[[1]] ans } cdf.lms.yjn <- function(y, eta0) { Zvec <- (yeo.johnson(y, eta0[, 1]) - eta0[, 2])/eta0[, 3] ans <- c(pnorm(Zvec)) names(ans) <- dimnames(eta0)[[1]] ans } vgety <- function(object, newdata = NULL) { y <- if (length(newdata)) { yname <- dimnames(attr(terms(object@terms),"factors"))[[1]][1] newdata[[yname]] } else { object@y } if (length(object@misc$yoffset)) y <- y + object@misc$yoffset y } "rlplot.vglm" <- function(object, Attach = TRUE, ...) { LL <- length(object@family@vfamily) newcall <- paste("rlplot.", object@family@vfamily[LL], "(object, ...)", sep = "") newcall <- parse(text = newcall)[[1]] if (Attach) { object@post$rlplot <- eval(newcall) invisible(object) } else { eval(newcall) } } "rlplot.vextremes" <- function(object, ...) { newcall <- paste("rlplot.", object@family@vfamily[1], "(object = object, ... )", sep = "") newcall <- parse(text = newcall)[[1]] eval(newcall) } rlplot.gevff <- rlplot.gev <- function(object, show.plot = TRUE, probability = c((1:9)/100, (1:9)/10, 0.95, 0.99, 0.995, 0.999), add.arg = FALSE, xlab = if(log.arg) "Return Period (log-scale)" else "Return Period", ylab = "Return Level", main = "Return Level Plot", pch = par()$pch, pcol.arg = par()$col, pcex = par()$cex, llty.arg = par()$lty, lcol.arg = par()$col, llwd.arg = par()$lwd, slty.arg = par()$lty, scol.arg = par()$col, slwd.arg = par()$lwd, ylim = NULL, log.arg = TRUE, CI = TRUE, epsilon = 1.0e-05, ...) { if (!is.Numeric(epsilon, length.arg = 1) || abs(epsilon) > 0.10) stop("bad input for 'epsilon'") if (!is.Numeric(probability, positive = TRUE) || max(probability) >= 1 || length(probability) < 5) stop("bad input for 'probability'") if (!is.logical(log.arg) || length(log.arg) != 1) stop("bad input for argument 'log'") if (!is.logical(CI) || length(CI) != 1) stop("bad input for argument 'CI'") if (!object@misc$intercept.only) stop("object must be an intercept-only fit, ", "i.e., y ~ 1 is the response") extra2 <- object@extra extra2$percentiles <- 100 * probability # Overwrite zp <- object@family@linkinv(eta = predict(object)[1:2, ], extra = extra2)[1, ] yp <- -log(probability) ydata <- sort(object@y[, 1]) n <- object@misc$n if (log.arg) { if (!add.arg) plot(log(1/yp), zp, log = "", type = "n", ylim = if (length(ylim)) ylim else c(min(c(ydata, zp)), max(c(ydata, zp))), xlab = xlab, ylab = ylab, main = main, cex.axis = par()$cex.axis, cex.main = par()$cex.main, cex.lab = par()$cex.lab, ...) 
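# Empirical counterpart: the order statistics are plotted at the
# return periods implied by the plotting positions i/(n+1),
# i = 1, ..., n, on the log scale.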
points(log(-1/log((1:n)/(n+1))), ydata, col = pcol.arg, pch = pch, cex = pcex) lines(log(1/yp), zp, lwd = llwd.arg, col = lcol.arg, lty = llty.arg) } else { if (!add.arg) plot(1/yp, zp, log = "x", type = "n", ylim = if (length(ylim)) ylim else c(min(c(ydata, zp)), max(c(ydata, zp))), xlab = xlab, ylab = ylab, main = main, cex.axis = par()$cex.axis, cex.main = par()$cex.main, cex.lab = par()$cex.lab, ...) points(-1/log((1:n)/(n+1)), ydata, col = pcol.arg, pch = pch, cex = pcex) lines(1/yp, zp, lwd = llwd.arg, col = lcol.arg, lty = llty.arg) } if (CI) { zpp <- cbind(zp, zp, zp) # lp x 3 eta <- predict(object) Links <- object@misc$link earg <- object@misc$earg M <- object@misc$M for (ii in 1:M) { TTheta <- eta[, ii] use.earg <- earg[[ii]] newcall <- paste(Links[ii], "(theta = TTheta, ", " inverse = TRUE)", sep = "") newcall <- parse(text = newcall)[[1]] uteta <- eval(newcall) # Theta, the untransformed parameter uteta <- uteta + epsilon # Perturb it newcall <- paste(Links[ii], "(theta = uteta", ")", sep = "") newcall <- parse(text = newcall)[[1]] teta <- eval(newcall) # The transformed parameter peta <- eta peta[, ii] <- teta zpp[, ii] <- object@family@linkinv(eta = peta, extra = extra2)[1, ] zpp[, ii] <- (zpp[, ii] - zp) / epsilon # On the transformed scale } VCOV <- vcov(object, untransform = TRUE) vv <- numeric(nrow(zpp)) for (ii in 1:nrow(zpp)) vv[ii] <- t(as.matrix(zpp[ii, ])) %*% VCOV %*% as.matrix(zpp[ii, ]) if (log.arg) { lines(log(1/yp), zp - 1.96 * sqrt(vv), lwd = slwd.arg, col = scol.arg, lty = slty.arg) lines(log(1/yp), zp + 1.96 * sqrt(vv), lwd = slwd.arg, col = scol.arg, lty = slty.arg) } else { lines(1/yp, zp - 1.96 * sqrt(vv), lwd = slwd.arg, col = scol.arg, lty = slty.arg) lines(1/yp, zp + 1.96 * sqrt(vv), lwd = slwd.arg, col = scol.arg, lty = slty.arg) } } answer <- list(yp = yp, zp = zp) if (CI) { answer$lower <- zp - 1.96 * sqrt(vv) answer$upper <- zp + 1.96 * sqrt(vv) } invisible(answer) } if (!isGeneric("rlplot")) setGeneric("rlplot", function(object, ...) standardGeneric("rlplot")) setMethod("rlplot", "vglm", function(object, ...) rlplot.vglm(object, ...)) explot.lms.bcn <- function(percentiles = c(25, 50, 75), eta = NULL, yoffset = 0) { lp <- length(percentiles) answer <- matrix(NA_real_, nrow(eta), lp, dimnames = list(dimnames(eta)[[1]], paste(as.character(percentiles), "%", sep = ""))) for (ii in 1:lp) { answer[, ii] <- eta[, 2] * (1 + eta[, 1] * eta[, 3] * qenorm(percentiles[ii]/100))^(1/eta[, 1]) } answer } VGAM/R/family.oneinf.R0000644000176200001440000020023613135276757014111 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
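# A minimal usage sketch of the d/p/q/r functions defined below, for
# the logarithmic (log-series) distribution with 0 < shape < 1 and
# support 1, 2, 3, ....  This block is illustrative only and is
# guarded so that it is never evaluated when the file is sourced.
if (FALSE) {
  dlog(1:5, shape = 0.5)          # pmf at y = 1, ..., 5
  sum(dlog(1:100, shape = 0.5))   # should be very close to 1
  plog(5, shape = 0.5)            # P(Y <= 5)
  qlog(0.9, shape = 0.5)          # smallest y with P(Y <= y) >= 0.9
  set.seed(1)
  mean(rlog(1000, shape = 0.5))   # approximates 1/log(2) = 1.4427
}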
dlog <- function(x, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(shape)) if (length(x) != N) x <- rep_len(x, N) if (length(shape) != N) shape <- rep_len(shape, N) ox <- !is.finite(x) zero <- ox | round(x) != x | x < 1 ans <- rep_len(0.0, length(x)) if (log.arg) { ans[ zero] <- log(0.0) ans[!zero] <- x[!zero] * log(shape[!zero]) - log(x[!zero]) - log(-log1p(-shape[!zero])) ans[ox] <- log(0) # 20141212 KaiH } else { ans[!zero] <- -(shape[!zero]^(x[!zero])) / (x[!zero] * log1p(-shape[!zero])) ans[ox] <- 0.0 # 20141212 KaiH } ans[shape < 0 | 1 < shape] <- NaN ans } plog <- function(q, shape, log.p = FALSE) { if (any(is.na(q))) stop("NAs not allowed for argument 'q'") if (any(is.na(shape))) stop("NAs not allowed for argument 'shape'") N <- max(length(q), length(shape)) if (length(q) != N) q <- rep_len(q, N) if (length(shape) != N) shape <- rep_len(shape, N) bigno <- 10 owen1965 <- (q * (1 - shape) > bigno) if (specialCase <- any(owen1965)) { qqq <- q[owen1965] ppp <- shape[owen1965] pqp <- qqq * (1 - ppp) bigans <- (ppp^(1+qqq) / (1-ppp)) * (1/qqq - 1 / ( pqp * (qqq-1)) + 2 / ((1-ppp) * pqp * (qqq-1) * (qqq-2)) - 6 / ((1-ppp)^2 * pqp * (qqq-1) * (qqq-2) * (qqq-3)) + 24 / ((1-ppp)^3 * pqp * (qqq-1) * (qqq-2) * (qqq-3) * (qqq-4))) bigans <- 1 + bigans / log1p(-ppp) } floorq <- pmax(1, floor(q)) # Ensures at least one element per q value floorq[owen1965] <- 1 seqq <- sequence(floorq) seqp <- rep(shape, floorq) onevector <- (seqp^seqq / seqq) / (-log1p(-seqp)) rlist <- .C("tyee_C_cum8sum", as.double(onevector), answer = double(N), as.integer(N), as.double(seqq), as.integer(length(onevector)), notok=integer(1)) if (rlist$notok != 0) stop("error in C function 'cum8sum'") ans <- if (log.p) log(rlist$answer) else rlist$answer if (specialCase) ans[owen1965] <- if (log.p) log(bigans) else bigans ans[q < 1] <- if (log.p) log(0.0) else 0.0 ans[shape < 0 | 1 < shape] <- NaN ans } qlog <- function(p, shape) { LLL <- max(length(p), length(shape)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) ans <- rep_len(0, LLL) lo <- rep_len(1, LLL) approx.ans <- lo # True at lhs hi <- 2 * lo + 10 dont.iterate <- p == 1 | shape <= 0 done <- p <= plog(hi, shape) | dont.iterate while (!all(done)) { hi.save <- hi[!done] hi[!done] <- 2 * lo[!done] + 10 lo[!done] <- hi.save done[!done] <- (p[!done] <= plog(hi[!done], shape[!done])) } foo <- function(q, shape, p) plog(q, shape) - p lhs <- (p <= dlog(1, shape)) | dont.iterate approx.ans[!lhs] <- bisection.basic(foo, lo[!lhs], hi[!lhs], tol = 1/16, shape = shape[!lhs], p = p[!lhs]) faa <- floor(approx.ans) ans <- ifelse(plog(faa, shape) < p & p <= plog(faa+1, shape), faa+1, faa) ans[p == 1] <- Inf ans[shape <= 0] <- NaN ans } # qlog rlog <- function(n, shape) { qlog(runif(n), shape) } logff <- function(lshape = "logit", gshape = ppoints(8), zero = NULL) { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") new("vglmff", blurb = c("Logarithmic distribution f(y) = a * shape^y / y, ", "y = 1, 2, 3,...,\n", " 0 < shape < 1, a = -1 / log(1-shape) \n\n", "Link: ", namesof("shape", lshape, earg = eshape), "\n", "\n", "Mean: a * shape / (1 - shape)", "\n"), constraints = eval(substitute(expression({ dotzero <- .zero M1 <- 1 eval(negzero.expression.VGAM) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
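# Static metadata consumed by the VGLM fitting machinery: M1 = 1
# linear predictor per response, a univariate (Q1 = 1) response,
# Fisher scoring with the expected information, and support for
# multiple responses.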
{ list(M1 = 1, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = "shape", zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 1 extra$ncoly <- ncoly M <- M1 * ncoly mynames1 <- param.names("shape", ncoly) predictors.names <- namesof(mynames1, .lshape , earg = .eshape , tag = FALSE) if (!length(etastart)) { logff.Loglikfun <- function(shapeval, y, x, w, extraargs) { sum(c(w) * dlog(x = y, shape = shapeval, log = TRUE)) } Init.shape <- matrix(0, n, M) shape.grid <- .gshape for (ilocal in 1:ncoly) { Init.shape[, ilocal] <- grid.search(shape.grid, objfun = logff.Loglikfun, y = y[, ilocal], # x = x, w = w[, ilocal]) } # for etastart <- theta2eta(Init.shape, .lshape , earg = .eshape ) } }), list( .lshape = lshape, .eshape = eshape, .gshape = gshape ))), linkinv = eval(substitute(function(eta, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) aa <- -1 / log1p(-shape) aa * shape / (1 - shape) }, list( .lshape = lshape, .eshape = eshape ))), last = eval(substitute(expression({ misc$link <- c(rep_len( .lshape , ncoly)) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for (ii in 1:ncoly) { misc$earg[[ii]] <- .eshape } }), list( .lshape = lshape, .eshape = eshape ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shape <- eta2theta(eta, .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dlog(x = y, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape = lshape, .eshape = eshape ))), vfamily = c("logff"), validparams = eval(substitute(function(eta, y, extra = NULL) { okay0 <- if ( .lshape == "logfflink") all(0 < eta) else TRUE okay1 <- if (okay0) { shape <- eta2theta(eta, .lshape , earg = .eshape ) all(is.finite(shape)) && all(0 < shape & shape < 1) } else { FALSE } okay0 && okay1 }, list( .lshape = lshape, .eshape = eshape ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) shape <- eta2theta(eta, .lshape , earg = .eshape ) rlog(nsim * length(shape), shape = shape) }, list( .lshape = lshape, .eshape = eshape ))), deriv = eval(substitute(expression({ M1 <- 1 shape <- eta2theta(eta, .lshape , earg = .eshape ) aa <- -1 / log1p(-shape) dl.dshape <- -aa / (1 - shape) + y / shape dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) c(w) * dl.dshape * dshape.deta }), list( .lshape = lshape, .eshape = eshape ))), weight = eval(substitute(expression({ ned2l.dshape2 <- aa * (1 - aa * shape) / (shape * (1-shape)^2) wz <- c(w) * ned2l.dshape2 * dshape.deta^2 wz }), list( .lshape = lshape, .eshape = eshape )))) } deflat.limit.oilog <- function(shape) { if (any(shape <= 0 | 1 <= shape )) stop("argument 'shape' must be in (0, 1)") ans <- 1 / (1 - 1 / dlog(1, shape)) ans } doilog <- function(x, shape, pstr1 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape), length(pstr1)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(shape) != LLL) shape <- 
rep_len(shape, LLL) if (length(pstr1) != LLL) pstr1 <- rep_len(pstr1, LLL) ans <- rep(NA_real_, LLL) index1 <- (x == 1) if (log.arg) { ans[ index1] <- log(pstr1[ index1] + (1 - pstr1[ index1]) * dlog(x[ index1], shape[ index1])) ans[!index1] <- log1p(-pstr1[!index1]) + dlog(x[!index1], shape[!index1], log = TRUE) } else { ans[ index1] <- pstr1[ index1] + (1 - pstr1[ index1]) * dlog(x[ index1], shape[ index1]) ans[!index1] <- (1 - pstr1[!index1]) * dlog(x[!index1], shape[!index1]) } ans[pstr1 < deflat.limit.oilog(shape) | 1 < pstr1] <- NaN ans[shape <= 0 | 1 <= shape] <- NaN ans } # doilog poilog <- function(q, shape, pstr1 = 0) { LLL <- max(length(q), length(shape), length(pstr1)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(pstr1) != LLL) pstr1 <- rep_len(pstr1, LLL) ans <- rep_len(NA_real_, LLL) deflat.limit <- deflat.limit.oilog(shape) ans <- plog(q, shape) #, lower.tail = lower.tail, log.p = log.p ans <- ifelse(q < 1, 0, pstr1 + (1 - pstr1) * ans) ans[pstr1 < deflat.limit] <- NaN ans[1 < pstr1] <- NaN ans[shape <= 0] <- NaN ans[1 <= shape] <- NaN ans } # poilog qoilog <- function(p, shape, pstr1 = 0) { LLL <- max(length(p), length(shape), length(pstr1)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(pstr1) != LLL) pstr1 <- rep_len(pstr1, LLL) ans <- rep_len(NA_real_, LLL) deflat.limit <- deflat.limit.oilog(shape) ans[p <= pstr1] <- 1 pindex <- (deflat.limit <= pstr1) & (pstr1 < p) ans[pindex] <- qlog((p[pindex] - pstr1[pindex]) / (1 - pstr1[pindex]), shape = shape[pindex]) ans[pstr1 < deflat.limit] <- NaN ans[1 < pstr1] <- NaN ans[p < 0] <- NaN ans[1 < p] <- NaN ans[shape <= 0] <- NaN ans[1 <= shape] <- NaN ans } # qoilog roilog <- function(n, shape, pstr1 = 0) { qoilog(runif(n), shape, pstr1 = pstr1) } oilog <- function(lpstr1 = "logit", lshape = "logit", type.fitted = c("mean", "shape", "pobs1", "pstr1", "onempstr1"), ishape = NULL, gpstr1 = ppoints(8), gshape = ppoints(8), zero = NULL) { lpstr1 <- as.list(substitute(lpstr1)) epstr1 <- link2list(lpstr1) lpstr1 <- attr(epstr1, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "shape", "pobs1", "pstr1", "onempstr1"))[1] if (length(ishape)) if (!is.Numeric(ishape, positive = TRUE)) stop("argument 'ishape' values must be positive") new("vglmff", blurb = c("One-inflated logarithmic distribution\n\n", "Links: ", namesof("pstr1", lpstr1, earg = epstr1 ), ", ", namesof("shape", lshape, earg = eshape ), "\n", "Mean: pstr1 + (1 - pstr1) * a * shape / (1 - shape), ", "a = -1 / log(1-shape), 0 < shape < 1"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
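# As for logff() above, but with M1 = 2: each response carries two
# linear predictors, eta1 for pstr1 (the probability of inflation
# at 1) and eta2 for the logarithmic shape parameter.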
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("pstr1", "shape"), type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y NOS <- ncoly <- ncol(y) extra$ncoly <- ncoly M <- M1 * ncoly extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("pstr1", ncoly) mynames2 <- param.names("shape", ncoly) predictors.names <- c(namesof(mynames1, .lpstr1 , earg = .epstr1 , tag = FALSE), namesof(mynames2, .lshape , earg = .eshape , tag = FALSE))[ interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { shape.init <- pstr1.init <- matrix(NA_real_, n, NOS) gpstr1 <- .gpstr1 gshape <- .gshape oilog.Loglikfun <- function(pstr1, shape, y, x, w, extraargs) { sum(c(w) * doilog(x = y, pstr1 = pstr1, shape = shape, log = TRUE)) } for (jay in 1:NOS) { # For each response 'y_jay'... do: try.this <- grid.search2(gpstr1, gshape, objfun = oilog.Loglikfun, y = y[, jay], # x = x[TFvec, , drop = FALSE], w = w[, jay], ret.objfun = TRUE) # Last value is the loglik pstr1.init[, jay] <- try.this["Value1"] shape.init[, jay] <- try.this["Value2"] } # for (jay ...) etastart <- cbind(theta2eta(pstr1.init, .lpstr1 , earg = .epstr1 ), theta2eta(shape.init, .lshape , earg = .eshape ))[, interleave.VGAM(M, M1 = M1)] mustart <- NULL # Since etastart has been computed. } # End of !length(etastart) }), list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape, .ishape = ishape, .gpstr1 = gpstr1, .gshape = gshape, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 2) type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "shape", "pobs1", "pstr1", "onempstr1"))[1] pstr1 <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr1 , earg = .epstr1 ) shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) Meanfun <- function(shape) { aa <- -1 / log1p(-shape) Mean <- aa * shape / (1 - shape) Mean[shape <= 0 | 1 <= shape] <- NaN Mean } ans <- switch(type.fitted, "mean" = pstr1 + (1 - pstr1) * Meanfun(shape), "shape" = shape, "pobs1" = doizeta(1, shape = shape, pstr1 = pstr1), # P(Y=1) "pstr1" = pstr1, "onempstr1" = 1 - pstr1) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape ))), last = eval(substitute(expression({ misc$link <- c(rep_len( .lpstr1 , NOS), rep_len( .lshape , NOS))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .epstr1 misc$earg[[M1*ii ]] <- .eshape } }), list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { pstr1 <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr1 , earg = .epstr1 ) shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * doilog(x = y, pstr1 = pstr1, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape ))), vfamily = c("oilog"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) pstr1 <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr1 , earg = .epstr1 ) shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) roilog(nsim * length(shape), shape = shape, pstr1 = pstr1) }, list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape ))), validparams = eval(substitute(function(eta, y, extra = NULL) { pstr1 <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lpstr1 , earg = .epstr1 ) shape <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lshape , earg = .eshape ) okay1 <- all(is.finite(shape )) && all(0 < shape & shape < 1) && all(is.finite(pstr1)) && all(pstr1 < 1) deflat.limit <- deflat.limit.oizeta(shape) okay2.deflat <- TRUE if (okay1 && !(okay2.deflat <- all(deflat.limit < pstr1))) warning("parameter 'pstr1' is too negative even allowing for ", "1-deflation.") okay1 && okay2.deflat }, list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape ))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- M / M1 pstr1 <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lpstr1 , earg = .epstr1 ) shape <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lshape , earg = .eshape ) pmf1 <- dlog(1, shape) onempmf1 <- 1 - pmf1 # dozeta(1, shape = shape, pstr1 = pstr1) pobs1 <- pstr1 + (1 - pstr1) * pmf1 index1 <- as.matrix(y == 1) mraa <- log1p(-shape) aaaa <- -1 / mraa dl.dpstr1 <- onempmf1 / pobs1 dl.dpstr1[!index1] <- -1 / (1 - pstr1[!index1]) dpmf1.dshape <- -1 / mraa - shape / ((1 - shape) * mraa^2) d2pmf1.dshape2 <- -2 / ((1 - shape) * mraa^2) - shape * (2 + mraa) / ((1 - shape)^2 * mraa^3) 
dl.dshape <- (1 - pstr1) * dpmf1.dshape / pobs1 # dl.dshape[!index1] <- y[!index1] / shape[!index1] + 1 / ((1 - shape[!index1]) * mraa[!index1]) dpstr1.deta <- dtheta.deta(pstr1, .lpstr1 , earg = .epstr1 ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) myderiv <- c(w) * cbind(dl.dpstr1 * dpstr1.deta, dl.dshape * dshape.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape ))), weight = eval(substitute(expression({ EY.y.gt.1 <- aaaa * shape^2 / ((1 - shape) * (1 - aaaa * shape)) LHS <- ((1 - pstr1) / pobs1) * dpmf1.dshape^2 - d2pmf1.dshape2 RHS <- EY.y.gt.1 / shape^2 - (1 + mraa) / ((1 - shape) * mraa)^2 ned2l.dpstr12 <- onempmf1 / ((1 - pstr1) * pobs1) # ned2l.dpstr1shape <- dpmf1.dshape / pobs1 # ned2l.dshape2 <- (1 - pstr1) * (LHS + (1 - pmf1) * RHS) wz <- array(c(c(w) * ned2l.dpstr12 * dpstr1.deta^2, c(w) * ned2l.dshape2 * dshape.deta^2, c(w) * ned2l.dpstr1shape * dpstr1.deta * dshape.deta), dim = c(n, M / M1, 3)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lshape = lshape, .eshape = eshape )))) } # oilog dotlog <- function(x, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (log.arg) { ans <- dlog(x, shape, log = log.arg) - log1p(-dlog(1, shape)) ans[x == 1] <- log(0) } else { ans <- dlog(x, shape) / (1 - dlog(1, shape)) ans[x == 1] <- 0 } ans } # dotlog potlog <- function(q, shape, log.p = FALSE) { if (log.p) log(plog(q, shape) - dlog(1, shape)) - log1p(-dlog(1, shape)) else (plog(q, shape) - dlog(1, shape)) / (1 - dlog(1, shape)) } qotlog <- function(p, shape) { ans <- qlog((1 - dlog(1, shape)) * p + dlog(1, shape), shape = shape) ans[p == 1] <- Inf ans[p < 0] <- NaN ans[1 < p] <- NaN ans[shape < 0 | 1 < shape] <- NaN ans } # qotlog rotlog <- function(n, shape) { qotlog(runif(n), shape) } otlog <- function(lshape = "logit", gshape = ppoints(8), zero = NULL) { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") new("vglmff", blurb = c("One-truncated logarithmic distribution ", "f(y) = shape^y / ((-shape - log1p(-shape)) * y), ", "y = 2, 3,...,\n", " 0 < shape < 1,\n\n", "Link: ", namesof("shape", lshape, earg = eshape)), constraints = eval(substitute(expression({ dotzero <- .zero M1 <- 1 eval(negzero.expression.VGAM) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
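# Metadata for the 1-truncated family: a single shape predictor per
# response (M1 = 1); the support excludes y == 1 entirely.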
{ list(M1 = 1, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = "shape", zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y if (any(y <= 1)) stop("cannot have any 1s in the response") ncoly <- ncol(y) M1 <- 1 extra$ncoly <- ncoly M <- M1 * ncoly mynames1 <- param.names("shape", ncoly) predictors.names <- namesof(mynames1, .lshape , earg = .eshape , tag = FALSE) if (!length(etastart)) { dotlog.Loglikfun <- function(shapeval, y, x, w, extraargs) { sum(c(w) * dotlog(x = y, shape = shapeval, log = TRUE)) } Init.shape <- matrix(0, n, M) shape.grid <- .gshape for (ilocal in 1:ncoly) { Init.shape[, ilocal] <- grid.search(shape.grid, objfun = dotlog.Loglikfun, y = y[, ilocal], # x = x, w = w[, ilocal]) } # for etastart <- theta2eta(Init.shape, .lshape , earg = .eshape ) } }), list( .lshape = lshape, .eshape = eshape, .gshape = gshape ))), linkinv = eval(substitute(function(eta, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) aa <- -1 / log1p(-shape) ((aa * shape / (1 - shape)) - dlog(1, shape)) / (1 - dlog(1, shape)) }, list( .lshape = lshape, .eshape = eshape ))), last = eval(substitute(expression({ misc$link <- c(rep_len( .lshape , ncoly)) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for (ii in 1:ncoly) { misc$earg[[ii]] <- .eshape } }), list( .lshape = lshape, .eshape = eshape ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shape <- eta2theta(eta, .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dotlog(x = y, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape = lshape, .eshape = eshape ))), vfamily = c("otlog"), validparams = eval(substitute(function(eta, y, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) okay1 <- all(is.finite(shape)) && all(0 < shape & shape < 1) okay1 }, list( .lshape = lshape, .eshape = eshape ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) shape <- eta2theta(eta, .lshape , earg = .eshape ) rotlog(nsim * length(shape), shape = shape) }, list( .lshape = lshape, .eshape = eshape ))), deriv = eval(substitute(expression({ M1 <- 1 shape <- eta2theta(eta, .lshape , earg = .eshape ) aa <- -1 / log1p(-shape) dl.dshape <- y / shape + shape / ((1 - shape) * (shape + log1p(-shape))) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) c(w) * dl.dshape * dshape.deta }), list( .lshape = lshape, .eshape = eshape ))), weight = eval(substitute(expression({ EY.logff <- aa * shape / (1 - shape) d3 <- deriv3( ~ shape / ((1 - shape) * (shape + log(1 - shape))), c("shape"), hessian = FALSE) eval.d3 <- eval(d3) d2pmf1.dshape2 <- c(attr(eval.d3, "gradient")) ned2l.dshape2 <- (EY.logff - dlog(1, shape)) / ((1 - dlog(1, shape)) * shape^2) - d2pmf1.dshape2 wz <- c(w) * ned2l.dshape2 * dshape.deta^2 wz }), list( .lshape = lshape, .eshape = eshape )))) } # otlog dotpospois <- function(x, lambda, log = FALSE) { if (!is.logical(larg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (larg) { ans 
<- dpospois(x, lambda, log = larg) - log1p(-dpospois(1, lambda)) ans[x == 1] <- log(0) } else { ans <- dpospois(x, lambda) / (1 - dpospois(1, lambda)) ans[x == 1] <- 0 } ans } # dotpospois potpospois <- function(q, lambda, log.p = FALSE) { if (log.p) log(ppospois(q, lambda) - dpospois(1, lambda)) - log1p(-dpospois(1, lambda)) else (ppospois(q, lambda) - dpospois(1, lambda)) / (1-dpospois(1, lambda)) } qotpospois <- function(p, lambda) { ans <- qpospois((1 - dpospois(1, lambda)) * p + dpospois(1, lambda), lambda = lambda) ans[p == 1 & 0 < lambda] <- Inf ans[p < 0] <- NaN ans[1 < p] <- NaN ans[lambda < 0] <- NaN ans } # qotpospois rotpospois <- function(n, lambda) { qotpospois(runif(n), lambda) } otpospoisson <- function(llambda = "loge", type.fitted = c("mean", "lambda", "prob0", "prob1"), ilambda = NULL, imethod = 1, zero = NULL) { llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") if (length( ilambda) && !is.Numeric(ilambda, positive = TRUE)) stop("bad input for argument 'ilambda'") type.fitted <- match.arg(type.fitted, c("mean", "lambda", "prob0", "prob1"))[1] new("vglmff", blurb = c("One-truncated Positive-Poisson distribution\n\n", "Links: ", namesof("lambda", llambda, earg = elambda, tag = FALSE)), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("lambda"), type.fitted = .type.fitted , llambda = .llambda , elambda = .elambda ) }, list( .llambda = llambda, .elambda = elambda, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y if (any(y < 2)) stop("response values must be 2 or more") ncoly <- ncol(y) M1 <- 1 extra$ncoly <- ncoly M <- M1 * ncoly extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("lambda", ncoly) predictors.names <- namesof(mynames1, .llambda , earg = .elambda , tag = FALSE) if (!length(etastart)) { lambda.init <- Init.mu(y = y, w = w, imethod = .imethod , imu = .ilambda ) etastart <- theta2eta(lambda.init, .llambda , earg = .elambda) } }), list( .llambda = llambda, .elambda = elambda, .ilambda = ilambda, .imethod = imethod, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- NCOL(eta) / c(M1 = 1) type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "lambda", "prob0", "prob1"))[1] lambda <- eta2theta(eta, .llambda , earg = .elambda ) ans <- switch(type.fitted, "mean" = lambda / ppois(1, lambda, lower = FALSE), "lambda" = lambda, "prob0" = ppois(0, lambda), # P(Y=0) as it were "prob1" = ppois(1, lambda)) # P(Y=1) as it were label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .llambda = llambda, .elambda = elambda ))), last = eval(substitute(expression({ misc$link <- rep_len( .llambda , M) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for (ii in 1:M) misc$earg[[ii]] <- .elambda }), list( .llambda = llambda, .elambda = elambda ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { lambda <- eta2theta(eta, .llambda , earg = .elambda ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dotpospois(x = y, lambda = lambda, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llambda = llambda, .elambda = elambda ))), vfamily = c("otpospoisson"), validparams = eval(substitute(function(eta, y, extra = NULL) { lambda <- eta2theta(eta, .llambda , earg = .elambda ) okay1 <- all(is.finite(lambda)) && all(0 < lambda) okay1 }, list( .llambda = llambda, .elambda = elambda ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) lambda <- eta2theta(eta, .llambda , earg = .elambda ) rotpospois(nsim * length(lambda), lambda) }, list( .llambda = llambda, .elambda = elambda ))), deriv = eval(substitute(expression({ M1 <- 1 lambda <- eta2theta(eta, .llambda , earg = .elambda ) EY.cond <- 1 / ppois(1, lambda, lower.tail = FALSE) temp1 <- expm1(lambda) temp0 <- lambda * exp(-lambda) prob.geq.2 <- -expm1(-lambda) - temp0 dl.dlambda <- y / lambda - 1 - temp0 / prob.geq.2 dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) c(w) * dl.dlambda * dlambda.deta }), list( .llambda = llambda, .elambda = elambda ))), weight = eval(substitute(expression({ ned2l.dlambda2 <- EY.cond / lambda + ((1 - lambda) * exp(-lambda) - temp0^2 / prob.geq.2) / prob.geq.2 wz <- ned2l.dlambda2 * dlambda.deta^2 c(w) * wz }), list( .llambda = llambda, .elambda = elambda )))) } # otpospoisson doalog <- function(x, shape, pobs1 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape), length(pobs1)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(pobs1) != LLL) pobs1 <- rep_len(pobs1, LLL) ans <- rep_len(0.0, LLL) index1 <- (x == 1) if (log.arg) { ans[ index1] <- log(pobs1[index1]) ans[!index1] <- log1p(-pobs1[!index1]) + dotlog(x[!index1], shape[!index1], log = TRUE) } else { ans[ index1] <- pobs1[index1] ans[!index1] <- (1 - pobs1[!index1]) * dotlog(x[!index1], shape[!index1]) } ans[pobs1 < 0 | 1 < pobs1] <- NaN ans } poalog <- function(q, shape, pobs1 = 0) { LLL <- max(length(q), length(shape), length(pobs1)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(pobs1) != LLL) pobs1 <- rep_len(pobs1, LLL) ans <- rep_len(0.0, LLL) ans[q > 1] <- pobs1[q > 1] + (1-pobs1[q > 1]) * potlog(q[q > 1], shape[q > 1]) ans[q < 1] <- 0 ans[q == 
1] <- pobs1[q == 1] ans <- pmax(0, ans) ans <- pmin(1, ans) ans[pobs1 < 0 | 1 < pobs1] <- NaN ans } qoalog <- function(p, shape, pobs1 = 0) { LLL <- max(length(p), length(shape), length(pobs1)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(pobs1) != LLL) pobs1 <- rep_len(pobs1, LLL) ans <- rep_len(NaN, LLL) ind4 <- pobs1 < p ans[!ind4] <- 1 ans[ ind4] <- qotlog((p[ind4] - pobs1[ind4]) / (1 - pobs1[ind4]), shape = shape[ind4]) ans[pobs1 < 0 | 1 < pobs1] <- NaN ans[p < 0 | 1 < p] <- NaN ans } roalog <- function(n, shape, pobs1 = 0) { qoalog(runif(n), shape = shape, pobs1 = pobs1) } oalog <- function(lpobs1 = "logit", lshape = "logit", type.fitted = c("mean", "shape", "pobs1", "onempobs1"), ipobs1 = NULL, gshape = ppoints(8), zero = NULL) { lpobs1 <- as.list(substitute(lpobs1)) epobs1 <- link2list(lpobs1) lpobs1 <- attr(epobs1, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "shape", "pobs1", "onempobs1"))[1] new("vglmff", blurb = c("One-altered logarithmic distribution \n", "(Bernoulli and 1-truncated logarithmic distribution model)", "\n\n", "Links: ", namesof("pobs1", lpobs1, earg = epobs1, tag = FALSE), ", ", namesof("shape", lshape, earg = eshape, tag = FALSE)), constraints = eval(substitute(expression({ dotzero <- .zero M1 <- 2 eval(negzero.expression.VGAM) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("pobs1", "shape"), zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y extra$y1 <- y1 <- ifelse(y == 1, 1, 0) extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$skip.these <- skip.these <- matrix(as.logical(y1), n, NOS) extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("pobs1", ncoly) mynames2 <- param.names("shape", ncoly) predictors.names <- c(namesof(mynames1, .lpobs1 , earg = .epobs1 , tag = FALSE), namesof(mynames2, .lshape , earg = .eshape , tag = FALSE))[ interleave.VGAM(M1*NOS, M1 = M1)] ncoly <- ncol(y) extra$ncoly <- ncoly M <- M1 * ncoly if (!length(etastart)) { dotlog.Loglikfun <- function(shapeval, y, x, w, extraargs) { sum(c(w) * dotlog(x = y, shape = shapeval, log = TRUE)) } Init.shape <- matrix(0, n, ncoly) shape.grid <- .gshape for (jlocal in 1:ncoly) { index1 <- y[, jlocal] > 1 Init.shape[, jlocal] <- grid.search(shape.grid, objfun = dotlog.Loglikfun, y = y[index1, jlocal], # x = x, w = w[index1, jlocal]) } # for etastart <- cbind(theta2eta(if (length( .ipobs1 )) .ipobs1 else (0.5 + w * y1) / (1 + w), .lpobs1 , earg = .epobs1 ), theta2eta(Init.shape, .lshape , earg = .eshape )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M1 = M1)] } }), list( .lshape = lshape, .eshape = eshape, .gshape = gshape, .lpobs1 = lpobs1, .epobs1 = epobs1, .ipobs1 = ipobs1, # .ishape = ishape, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "shape", "pobs1", "onempobs1"))[1] M1 <- 2 NOS <- ncol(eta) / M1 pobs1 <- cbind(eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lpobs1 , earg = .epobs1 )) shape <- cbind(eta2theta(eta[, M1*(1:NOS)-0, drop = FALSE], .lshape , earg = .eshape )) aa <- -1 / log1p(-shape) otlog.mean <- ((aa * shape / (1 - shape)) - dlog(1, shape)) / (1 - dlog(1, shape)) ans <- switch(type.fitted, "mean" = pobs1 + (1 - pobs1) * otlog.mean, "shape" = shape, "pobs1" = pobs1, # P(Y=1) "onempobs1" = 1 - pobs1) # P(Y>1) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lpobs1 = lpobs1, .lshape = lshape, .epobs1 = epobs1, .eshape = eshape ))), last = eval(substitute(expression({ temp.names <- c(rep_len( .lpobs1 , NOS), rep_len( .lshape , NOS)) temp.names <- temp.names[interleave.VGAM(M1*NOS, M1 = M1)] misc$link <- temp.names names(misc$link) <- c(mynames1, mynames2)[interleave.VGAM(M1*NOS, M1 = M1)] misc$earg <- vector("list", M1 * NOS) names(misc$earg) <- names(misc$link) for (ii in 1:NOS) { misc$earg[[M1*ii-1]] <- .epobs1 misc$earg[[M1*ii ]] <- .eshape } }), list( .lpobs1 = lpobs1, .lshape = lshape, .epobs1 = epobs1, .eshape = eshape ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { pobs1 <- cbind(eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lpobs1, earg = .epobs1 )) shape <- cbind(eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lshape, earg = .eshape )) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * doalog(x = y, pobs1 = pobs1, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lpobs1 = lpobs1, .lshape = lshape, .epobs1 = epobs1, .eshape = eshape ))), vfamily = c("oalog"), validparams = eval(substitute(function(eta, y, extra = NULL) { TFvec <- c(TRUE, FALSE) pobs1 <- eta2theta(eta[, TFvec, drop = FALSE], .lpobs1 , earg = .epobs1 ) shape <- eta2theta(eta[, !TFvec, drop = FALSE], .lshape , earg = .eshape ) okay1 <- all(is.finite(shape)) && all(0 < shape & shape < 1) && all(is.finite(pobs1)) && all(0 < pobs1 & pobs1 < 1) okay1 }, list( .lpobs1 = lpobs1, .lshape = lshape, .epobs1 = epobs1, .eshape = eshape ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) pobs1 <- eta2theta(eta[, c(TRUE, FALSE)], .lpobs1 , earg = .epobs1 ) shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) roalog(nsim * length(shape), shape = shape, pobs1 = pobs1) }, list( .lpobs1 = lpobs1, .lshape = lshape, .epobs1 = epobs1, .eshape = eshape ))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- ncol(eta) / M1 # extra$NOS y1 <- extra$y1 skip <- extra$skip.these TFvec <- c(TRUE, FALSE) pobs1 <- eta2theta(eta[, TFvec, drop = FALSE], .lpobs1 , earg = .epobs1 ) shape <- eta2theta(eta[, !TFvec, drop = FALSE], .lshape , earg = .eshape ) aa <- -1 / log1p(-shape) dl.dshape <- y / shape + shape / ((1 - shape) * (shape + log1p(-shape))) dl.dpobs1 <- -1 / (1 - pobs1) # For y > 1 obsns for (spp. in 1:NOS) { dl.dpobs1[skip[, spp.], spp.] <- 1 / pobs1[skip[, spp.], spp.] dl.dshape[skip[, spp.], spp.] 
<- 0 } dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) mu.phi1 <- pobs1 temp3 <- if ( .lpobs1 == "logit") { c(w) * (y1 - mu.phi1) } else { c(w) * dtheta.deta(mu.phi1, link = .lpobs1 , earg = .epobs1 ) * dl.dpobs1 } ans <- cbind(temp3, c(w) * dl.dshape * dshape.deta) ans <- ans[, interleave.VGAM(ncol(ans), M1 = M1)] ans }), list( .lpobs1 = lpobs1, .lshape = lshape, .epobs1 = epobs1, .eshape = eshape ))), weight = eval(substitute(expression({ wz <- matrix(0, n, M1 * NOS) # EIM is diagonal EY.logff <- aa * shape / (1 - shape) d3 <- deriv3( ~ shape / ((1 - shape) * (shape + log(1 - shape))), c("shape"), hessian = FALSE) eval.d3 <- eval(d3) d2pmf1.dshape2 <- c(attr(eval.d3, "gradient")) ned2l.dshape2 <- (EY.logff - dlog(1, shape)) / ((1 - dlog(1, shape)) * shape^2) - d2pmf1.dshape2 ned2l.dshape2 <- (1-pobs1) * ned2l.dshape2 #+stop("another quantity") wz[, NOS+(1:NOS)] <- c(w) * ned2l.dshape2 * dshape.deta^2 tmp100 <- mu.phi1 * (1 - mu.phi1) tmp200 <- if ( .lpobs1 == "logit" && is.empty.list( .epobs1 )) { cbind(c(w) * tmp100) } else { cbind(c(w) * (1 / tmp100) * dtheta.deta(mu.phi1, link = .lpobs1 , earg = .epobs1 )^2) } wz[, 1:NOS] <- tmp200 wz <- wz[, interleave.VGAM(ncol(wz), M1 = M1)] wz }), list( .lpobs1 = lpobs1, .epobs1 = epobs1 )))) } # End of oalog doapospois <- function(x, lambda, pobs1 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(lambda), length(pobs1)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(lambda) != LLL) lambda <- rep_len(lambda, LLL) if (length(pobs1) != LLL) pobs1 <- rep_len(pobs1, LLL) ans <- rep_len(0.0, LLL) index1 <- (x == 1) if (log.arg) { ans[ index1] <- log(pobs1[index1]) ans[!index1] <- log1p(-pobs1[!index1]) + dotpospois(x[!index1], lambda[!index1], log = TRUE) } else { ans[ index1] <- pobs1[index1] ans[!index1] <- (1 - pobs1[!index1]) * dotpospois(x[!index1], lambda[!index1]) } ans[pobs1 < 0 | 1 < pobs1] <- NaN ans[lambda < 0] <- NaN ans } poapospois <- function(q, lambda, pobs1 = 0) { LLL <- max(length(q), length(lambda), length(pobs1)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(lambda) != LLL) lambda <- rep_len(lambda, LLL) if (length(pobs1) != LLL) pobs1 <- rep_len(pobs1, LLL) ans <- rep_len(0.0, LLL) ans[q > 1] <- pobs1[q > 1] + (1-pobs1[q > 1]) * potpospois(q[q > 1], lambda[q > 1]) ans[q < 1] <- 0 ans[q == 1] <- pobs1[q == 1] ans <- pmax(0, ans) ans <- pmin(1, ans) ans[pobs1 < 0 | 1 < pobs1] <- NaN ans[lambda < 0] <- NaN ans } qoapospois <- function(p, lambda, pobs1 = 0) { LLL <- max(length(p), length(lambda), length(pobs1)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(lambda) != LLL) lambda <- rep_len(lambda, LLL) if (length(pobs1) != LLL) pobs1 <- rep_len(pobs1, LLL) ans <- rep_len(NaN, LLL) ind4 <- pobs1 < p ans[!ind4] <- 1 ans[ ind4] <- qotpospois((p[ind4] - pobs1[ind4]) / (1 - pobs1[ind4]), lambda = lambda[ind4]) ans[pobs1 < 0 | 1 < pobs1] <- NaN ans[p < 0 | 1 < p] <- NaN ans[lambda < 0] <- NaN ans } roapospois <- function(n, lambda, pobs1 = 0) { qoapospois(runif(n), lambda = lambda, pobs1 = pobs1) } oapospoisson <- function(lpobs1 = "logit", llambda = "loge", type.fitted = c("mean", "lambda", "pobs1", "onempobs1"), ipobs1 = NULL, zero = NULL) { lpobs1 <- as.list(substitute(lpobs1)) epobs1 <- link2list(lpobs1) lpobs1 <- attr(epobs1, "function.name") llambd <- as.list(substitute(llambda)) elambd <- link2list(llambd) llambd <- attr(elambd, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "lambda", 
"pobs1", "onempobs1"))[1] new("vglmff", blurb = c("One-altered positive-Poisson distribution \n", "(Bernoulli and 1-truncated positive-Poisson ", "distribution model)\n\n", "Links: ", namesof("pobs1", lpobs1, earg = epobs1, tag = FALSE), ", ", namesof("lambda", llambd, earg = elambd, tag = FALSE)), constraints = eval(substitute(expression({ dotzero <- .zero M1 <- 2 eval(negzero.expression.VGAM) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("pobs1", "lambda"), zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y extra$y1 <- y1 <- ifelse(y == 1, 1, 0) extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$skip.these <- skip.these <- matrix(as.logical(y1), n, NOS) extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("pobs1", ncoly) mynames2 <- param.names("lambda", ncoly) predictors.names <- c(namesof(mynames1, .lpobs1 , earg = .epobs1 , tag = FALSE), namesof(mynames2, .llambd , earg = .elambd , tag = FALSE))[ interleave.VGAM(M1*NOS, M1 = M1)] ncoly <- ncol(y) extra$ncoly <- ncoly M <- M1 * ncoly if (!length(etastart)) { Init.lambda <- y - 0.25 etastart <- cbind(theta2eta(if (length( .ipobs1 )) .ipobs1 else (0.5 + w * y1) / (1 + w), .lpobs1 , earg = .epobs1 ), theta2eta(Init.lambda, .llambd , earg = .elambd )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M1 = M1)] } }), list( .llambd = llambd, .elambd = elambd, .lpobs1 = lpobs1, .epobs1 = epobs1, .ipobs1 = ipobs1, # .ilambd = ilambd, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "lambda", "pobs1", "onempobs1"))[1] M1 <- 2 NOS <- ncol(eta) / M1 pobs1 <- cbind(eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lpobs1 , earg = .epobs1 )) lambd <- cbind(eta2theta(eta[, M1*(1:NOS)-0, drop = FALSE], .llambd , earg = .elambd )) ans <- switch(type.fitted, "mean" = pobs1 + (1 - pobs1) * lambd / ppois(1, lambd, lower = FALSE), "lambda" = lambd, "pobs1" = pobs1, # P(Y=1) "onempobs1" = 1 - pobs1) # P(Y>1) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lpobs1 = lpobs1, .llambd = llambd, .epobs1 = epobs1, .elambd = elambd ))), last = eval(substitute(expression({ temp.names <- c(rep_len( .lpobs1 , NOS), rep_len( .llambd , NOS)) temp.names <- temp.names[interleave.VGAM(M1*NOS, M1 = M1)] misc$link <- temp.names names(misc$link) <- c(mynames1, mynames2)[interleave.VGAM(M1*NOS, M1 = M1)] misc$earg <- vector("list", M1 * NOS) names(misc$earg) <- names(misc$link) for (ii in 1:NOS) { misc$earg[[M1*ii-1]] <- .epobs1 misc$earg[[M1*ii ]] <- .elambd } }), list( .lpobs1 = lpobs1, .llambd = llambd, .epobs1 = epobs1, .elambd = elambd ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { pobs1 <- cbind(eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lpobs1, earg = .epobs1)) lambd <- cbind(eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .llambd, earg = .elambd )) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * doapospois(x = y, pobs1 = pobs1, lambda = lambd, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lpobs1 = lpobs1, .llambd = llambd, .epobs1 = epobs1, .elambd = elambd ))), vfamily = c("oapospoisson"), validparams = eval(substitute(function(eta, y, extra = NULL) { TFvec <- c(TRUE, FALSE) pobs1 <- eta2theta(eta[, TFvec, drop = FALSE], .lpobs1 , earg = .epobs1 ) lambd <- eta2theta(eta[, !TFvec, drop = FALSE], .llambd , earg = .elambd ) okay1 <- all(is.finite(lambd)) && all(0 < lambd) && all(is.finite(pobs1)) && all(0 < pobs1 & pobs1 < 1) okay1 }, list( .lpobs1 = lpobs1, .llambd = llambd, .epobs1 = epobs1, .elambd = elambd ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) pobs1 <- eta2theta(eta[, c(TRUE, FALSE)], .lpobs1 , earg = .epobs1 ) lambd <- eta2theta(eta[, c(FALSE, TRUE)], .llambd , earg = .elambd ) roapospois(nsim * length(lambd), lambd = lambd, pobs1 = pobs1) }, list( .lpobs1 = lpobs1, .llambd = llambd, .epobs1 = epobs1, .elambd = elambd ))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- ncol(eta) / M1 # extra$NOS y1 <- extra$y1 skip <- extra$skip.these TFvec <- c(TRUE, FALSE) pobs1 <- eta2theta(eta[, TFvec, drop = FALSE], .lpobs1 , earg = .epobs1 ) lambda <- eta2theta(eta[, !TFvec, drop = FALSE], .llambd , earg = .elambd ) EY.cond <- 1 / ppois(1, lambda, lower.tail = FALSE) temp1 <- expm1(lambda) temp0 <- lambda * exp(-lambda) shape.geq.2 <- -expm1(-lambda) - temp0 dl.dlambd <- y / lambda - 1 - temp0 / shape.geq.2 dl.dpobs1 <- -1 / (1 - pobs1) # For y > 1 obsns for (spp. in 1:NOS) { dl.dpobs1[skip[, spp.], spp.] <- 1 / pobs1[skip[, spp.], spp.] dl.dlambd[skip[, spp.], spp.] 
<- 0 } dlambd.deta <- dtheta.deta(lambda, .llambd , earg = .elambd ) mu.phi1 <- pobs1 temp3 <- if ( .lpobs1 == "logit") { c(w) * (y1 - mu.phi1) } else { c(w) * dtheta.deta(mu.phi1, link = .lpobs1 , earg = .epobs1 ) * dl.dpobs1 } ans <- cbind(temp3, c(w) * dl.dlambd * dlambd.deta) ans <- ans[, interleave.VGAM(ncol(ans), M1 = M1)] ans }), list( .lpobs1 = lpobs1, .llambd = llambd, .epobs1 = epobs1, .elambd = elambd ))), weight = eval(substitute(expression({ wz <- matrix(0, n, M1 * NOS) # EIM is diagonal ned2l.dlambd2 <- EY.cond / lambda + ((1 - lambda) * exp(-lambda) - temp0^2 / shape.geq.2) / shape.geq.2 ned2l.dlambd2 <- (1 - pobs1) * ned2l.dlambd2 wz[, NOS+(1:NOS)] <- c(w) * ned2l.dlambd2 * dlambd.deta^2 tmp100 <- mu.phi1 * (1 - mu.phi1) tmp200 <- if ( .lpobs1 == "logit" && is.empty.list( .epobs1 )) { cbind(c(w) * tmp100) } else { cbind(c(w) * (1 / tmp100) * dtheta.deta(mu.phi1, link = .lpobs1 , earg = .epobs1 )^2) } wz[, 1:NOS] <- tmp200 wz <- wz[, interleave.VGAM(ncol(wz), M1 = M1)] wz }), list( .lpobs1 = lpobs1, .epobs1 = epobs1 )))) } # End of oapospoisson doazeta <- function(x, shape, pobs1 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape), length(pobs1)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(pobs1) != LLL) pobs1 <- rep_len(pobs1, LLL) ans <- rep_len(0.0, LLL) index1 <- (x == 1) if (log.arg) { ans[ index1] <- log(pobs1[index1]) ans[!index1] <- log1p(-pobs1[!index1]) + dotzeta(x[!index1], shape[!index1], log = TRUE) } else { ans[ index1] <- pobs1[index1] ans[!index1] <- (1 - pobs1[!index1]) * dotzeta(x[!index1], shape[!index1]) } ans[pobs1 < 0 | 1 < pobs1] <- NaN ans[shape <= 0] <- NaN ans } poazeta <- function(q, shape, pobs1 = 0) { LLL <- max(length(q), length(shape), length(pobs1)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(pobs1) != LLL) pobs1 <- rep_len(pobs1, LLL) ans <- rep_len(0.0, LLL) ans[q > 1] <- pobs1[q > 1] + (1-pobs1[q > 1]) * potzeta(q[q > 1], shape[q > 1]) ans[q < 1] <- 0 ans[q == 1] <- pobs1[q == 1] ans <- pmax(0, ans) ans <- pmin(1, ans) ans[pobs1 < 0 | 1 < pobs1] <- NaN ans[shape <= 0] <- NaN ans } qoazeta <- function(p, shape, pobs1 = 0) { LLL <- max(length(p), length(shape), length(pobs1)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(pobs1) != LLL) pobs1 <- rep_len(pobs1, LLL) ans <- rep_len(NaN, LLL) ind4 <- pobs1 < p ans[!ind4] <- 1 ans[ ind4] <- qotzeta((p[ind4] - pobs1[ind4]) / (1 - pobs1[ind4]), shape = shape[ind4]) ans[pobs1 < 0 | 1 < pobs1] <- NaN ans[p < 0 | 1 < p] <- NaN ans[shape <= 0] <- NaN ans } roazeta <- function(n, shape, pobs1 = 0) { qoazeta(runif(n), shape = shape, pobs1 = pobs1) } oazeta <- function(lpobs1 = "logit", lshape = "loge", type.fitted = c("mean", "shape", "pobs1", "onempobs1"), gshape = exp((-4:3)/4), ishape = NULL, ipobs1 = NULL, zero = NULL) { lpobs1 <- as.list(substitute(lpobs1)) epobs1 <- link2list(lpobs1) lpobs1 <- attr(epobs1, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "shape", "pobs1", "onempobs1"))[1] new("vglmff", blurb = c("One-altered zeta distribution \n", "(Bernoulli and 1-truncated zeta distribution model)\n\n", "Links: ", namesof("pobs1", lpobs1, earg = epobs1, tag = FALSE), ", ", 
namesof("shape", lshape, earg = eshape, tag = FALSE)), constraints = eval(substitute(expression({ dotzero <- .zero M1 <- 2 eval(negzero.expression.VGAM) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("pobs1", "shape"), zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y extra$y1 <- y1 <- ifelse(y == 1, 1, 0) extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$skip.these <- skip.these <- matrix(as.logical(y1), n, NOS) extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("pobs1", ncoly) mynames2 <- param.names("shape", ncoly) predictors.names <- c(namesof(mynames1, .lpobs1 , earg = .epobs1 , tag = FALSE), namesof(mynames2, .lshape , earg = .eshape , tag = FALSE))[ interleave.VGAM(M1*NOS, M1 = M1)] ncoly <- ncol(y) extra$ncoly <- ncoly M <- M1 * ncoly if (!length(etastart)) { otzetaff.Loglikfun <- function(shape, y, x, w, extraargs) { sum(c(w) * dotzeta(x = y, shape, log = TRUE)) } gshape <- .gshape if (!length( .ishape )) { shape.init <- matrix(NA_real_, n, M/M1, byrow = TRUE) for (jay in 1:ncoly) { index1 <- y[, jay] > 1 shape.init[, jay] <- grid.search(gshape, objfun = otzetaff.Loglikfun, # x = x, y = y[index1, jay], w = w[index1, jay]) } } else { shape.init <- matrix( .ishape , n, M, byrow = TRUE) } etastart <- cbind(theta2eta(if (length( .ipobs1 )) .ipobs1 else (0.5 + w * y1) / (1 + w), .lpobs1 , earg = .epobs1 ), theta2eta(shape.init, .lshape , earg = .eshape )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M1 = M1)] } }), list( .lshape = lshape, .eshape = eshape, .lpobs1 = lpobs1, .epobs1 = epobs1, .ipobs1 = ipobs1, .ishape = ishape, .gshape = gshape, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "shape", "pobs1", "onempobs1"))[1] M1 <- 2 NOS <- ncol(eta) / M1 pobs1 <- cbind(eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lpobs1 , earg = .epobs1 )) shape <- cbind(eta2theta(eta[, M1*(1:NOS)-0, drop = FALSE], .lshape , earg = .eshape )) if (type.fitted == "mean") { ans <- shape ans[shape > 1] <- zeta(shape[shape > 1])/zeta(shape[shape > 1] + 1) ans[shape <= 1] <- NA pmf.1 <- dzeta(1, shape) mean.otzeta <- (ans - pmf.1) / (1 - pmf.1) } ans <- switch(type.fitted, "mean" = pobs1 + (1 - pobs1) * mean.otzeta, "shape" = shape, "pobs1" = pobs1, # P(Y=1) "onempobs1" = 1 - pobs1) # P(Y>1) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lpobs1 = lpobs1, .lshape = lshape, .epobs1 = epobs1, .eshape = eshape ))), last = eval(substitute(expression({ temp.names <- c(rep_len( .lpobs1 , NOS), rep_len( .lshape , NOS)) temp.names <- temp.names[interleave.VGAM(M1*NOS, M1 = M1)] misc$link <- temp.names names(misc$link) <- c(mynames1, mynames2)[interleave.VGAM(M1*NOS, M1 = M1)] misc$earg <- vector("list", M1 * NOS) names(misc$earg) <- names(misc$link) for (ii in 1:NOS) { misc$earg[[M1*ii-1]] <- .epobs1 misc$earg[[M1*ii ]] <- .eshape } }), list( .lpobs1 = lpobs1, .lshape = lshape, .epobs1 = epobs1, .eshape = eshape ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { pobs1 <- cbind(eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lpobs1, earg = .epobs1 )) shape <- cbind(eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lshape, earg = .eshape )) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * doazeta(x = y, pobs1 = pobs1, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lpobs1 = lpobs1, .lshape = lshape, .epobs1 = epobs1, .eshape = eshape ))), vfamily = c("oazeta"), validparams = eval(substitute(function(eta, y, extra = NULL) { TFvec <- c(TRUE, FALSE) pobs1 <- eta2theta(eta[, TFvec, drop = FALSE], .lpobs1 , earg = .epobs1 ) shape <- eta2theta(eta[, !TFvec, drop = FALSE], .lshape , earg = .eshape ) okay1 <- all(is.finite(shape)) && all(0 < shape) && all(is.finite(pobs1)) && all(0 < pobs1 & pobs1 < 1) okay1 }, list( .lpobs1 = lpobs1, .lshape = lshape, .epobs1 = epobs1, .eshape = eshape ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) pobs1 <- eta2theta(eta[, c(TRUE, FALSE)], .lpobs1 , earg = .epobs1 ) shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) roazeta(nsim * length(shape), shape = shape, pobs1 = pobs1) }, list( .lpobs1 = lpobs1, .lshape = lshape, .epobs1 = epobs1, .eshape = eshape ))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- ncol(eta) / M1 # extra$NOS y1 <- extra$y1 skip <- extra$skip.these TFvec <- c(TRUE, FALSE) pobs1 <- eta2theta(eta[, TFvec, drop = FALSE], .lpobs1 , earg = .epobs1 ) shape <- eta2theta(eta[, !TFvec, drop = FALSE], .lshape , earg = .eshape ) BBBB <- zeta(shape + 1) - 1 fred1 <- zeta(shape + 1, deriv = 1) dl.dshape <- -log(y) - fred1 / BBBB dl.dpobs1 <- -1 / (1 - pobs1) # For y > 1 obsns for (spp. in 1:NOS) { dl.dpobs1[skip[, spp.], spp.] <- 1 / pobs1[skip[, spp.], spp.] dl.dshape[skip[, spp.], spp.] 
<- 0 } dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) mu.phi1 <- pobs1 temp3 <- if ( .lpobs1 == "logit") { c(w) * (y1 - mu.phi1) } else { c(w) * dtheta.deta(mu.phi1, link = .lpobs1 , earg = .epobs1 ) * dl.dpobs1 } ans <- cbind(temp3, c(w) * dl.dshape * dshape.deta) ans <- ans[, interleave.VGAM(ncol(ans), M1 = M1)] ans }), list( .lpobs1 = lpobs1, .lshape = lshape, .epobs1 = epobs1, .eshape = eshape ))), weight = eval(substitute(expression({ wz <- matrix(0, n, M1 * NOS) # EIM is diagonal ned2l.dshape2 <- (zeta(shape + 1, deriv = 2) - fred1^2 / BBBB) / BBBB ned2l.dshape2 <- (1 - pobs1) * ned2l.dshape2 wz[, NOS+(1:NOS)] <- c(w) * ned2l.dshape2 * dshape.deta^2 tmp100 <- mu.phi1 * (1 - mu.phi1) tmp200 <- if ( .lpobs1 == "logit" && is.empty.list( .epobs1 )) { cbind(c(w) * tmp100) } else { cbind(c(w) * (1 / tmp100) * dtheta.deta(mu.phi1, link = .lpobs1 , earg = .epobs1 )^2) } wz[, 1:NOS] <- tmp200 wz <- wz[, interleave.VGAM(ncol(wz), M1 = M1)] wz }), list( .lpobs1 = lpobs1, .epobs1 = epobs1 )))) } # End of oazeta VGAM/R/vglm.fit.q0000644000176200001440000003250013135276760013127 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. vglm.fit <- function(x, y, w = rep_len(1, nrow(x)), X.vlm.arg = NULL, Xm2 = NULL, Ym2 = NULL, etastart = NULL, mustart = NULL, coefstart = NULL, offset = 0, family, control = vglm.control(), qr.arg = FALSE, constraints = NULL, extra = NULL, Terms = Terms, function.name = "vglm", ...) { if (is.null(criterion <- control$criterion)) criterion <- "coefficients" eff.n <- nrow(x) # + sum(abs(w[1:nrow(x)])) specialCM <- NULL post <- list() check.rank <- control$Check.rank nonparametric <- FALSE epsilon <- control$epsilon maxit <- control$maxit save.weights <- control$save.weights trace <- control$trace orig.stepsize <- control$stepsize minimize.criterion <- control$min.criterion fv <- NULL n <- nrow(x) stepsize <- orig.stepsize old.coeffs <- coefstart # May be a NULL intercept.only <- ncol(x) == 1 && colnames(x) == "(Intercept)" y.names <- predictors.names <- NULL # May be overwritten in @initialize n.save <- n if (length(slot(family, "initialize"))) eval(slot(family, "initialize")) # Initialize mu & M (& optionally w) if (length(etastart)) { eta <- etastart mu <- if (length(mustart)) mustart else slot(family, "linkinv")(eta, extra = extra) } if (length(mustart)) { mu <- mustart if (length(body(slot(family, "linkfun")))) { eta <- slot(family, "linkfun")(mu, extra = extra) } else { warning("argument 'mustart' assigned a value ", "but there is no 'linkfun' slot to use it") } } validparams <- validfitted <- TRUE if (length(body(slot(family, "validparams")))) validparams <- slot(family, "validparams")(eta, y = y, extra = extra) if (length(body(slot(family, "validfitted")))) validfitted <- slot(family, "validfitted")(mu, y = y, extra = extra) if (!(validparams && validfitted)) stop("could not obtain valid initial values. 
", "Try using 'etastart', 'coefstart' or 'mustart', else ", "family-specific arguments such as 'imethod'.") M <- NCOL(eta) if (length(slot(family, "constraints"))) eval(slot(family, "constraints")) Hlist <- process.constraints(constraints, x = x, M = M, specialCM = specialCM, Check.cm.rank = control$Check.cm.rank) ncolHlist <- unlist(lapply(Hlist, ncol)) X.vlm.save <- if (length(X.vlm.arg)) { X.vlm.arg } else { lm2vlm.model.matrix(x, Hlist, xij = control$xij, Xm2 = Xm2) } if (length(coefstart)) { eta <- if (ncol(X.vlm.save) > 1) { matrix(X.vlm.save %*% coefstart, n, M, byrow = TRUE) + offset } else { matrix(X.vlm.save * coefstart, n, M, byrow = TRUE) + offset } if (M == 1) eta <- c(eta) mu <- slot(family, "linkinv")(eta, extra = extra) } if (criterion != "coefficients") { tfun <- slot(family, criterion) # family[[criterion]] } iter <- 1 new.crit <- switch(criterion, coefficients = 1, tfun(mu = mu, y = y, w = w, res = FALSE, eta = eta, extra = extra)) deriv.mu <- eval(slot(family, "deriv")) wz <- eval(slot(family, "weight")) if (control$checkwz) wz <- checkwz(wz, M = M, trace = trace, wzepsilon = control$wzepsilon) U <- vchol(wz, M = M, n = n, silent = !trace) tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = n) z <- eta + vbacksub(U, tvfor, M = M, n = n) - offset one.more <- TRUE nrow.X.vlm <- nrow(X.vlm.save) ncol.X.vlm <- ncol(X.vlm.save) if (nrow.X.vlm < ncol.X.vlm) stop("There are ", ncol.X.vlm, " parameters but only ", nrow.X.vlm, " observations") while (one.more) { tfit <- vlm.wfit(xmat = X.vlm.save, zmat = z, Hlist = NULL, U = U, matrix.out = FALSE, is.vlmX = TRUE, qr = qr.arg, xij = NULL) fv <- tfit$fitted.values new.coeffs <- tfit$coefficients # c.list$coeff if (length(slot(family, "middle"))) eval(slot(family, "middle")) eta <- fv + offset mu <- slot(family, "linkinv")(eta, extra = extra) if (length(slot(family, "middle2"))) eval(slot(family, "middle2")) old.crit <- new.crit new.crit <- switch(criterion, coefficients = new.coeffs, tfun(mu = mu, y = y, w = w, res = FALSE, eta = eta, extra = extra)) if (trace && orig.stepsize == 1) { cat("VGLM linear loop ", iter, ": ", criterion, "= ") UUUU <- switch(criterion, coefficients = format(new.crit, digits = round(1 - log10(epsilon))), format(new.crit, digits = max(4, round(-0 - log10(epsilon) + log10(sqrt(eff.n)))))) switch(criterion, coefficients = {if (length(new.crit) > 2) cat("\n"); cat(UUUU, fill = TRUE, sep = ", ")}, cat(UUUU, fill = TRUE, sep = ", ")) } # if (trace && orig.stepsize == 1) take.half.step <- (control$half.stepsizing && length(old.coeffs)) && ((orig.stepsize != 1) || (!is.finite(new.crit)) || # 20160321 (criterion != "coefficients" && (if (minimize.criterion) new.crit > old.crit else new.crit < old.crit))) if (!is.logical(take.half.step)) take.half.step <- TRUE if (!take.half.step && length(old.coeffs)) { validparams <- validfitted <- TRUE if (length(body(slot(family, "validparams")))) validparams <- slot(family, "validparams")(eta, y = y, extra = extra) if (length(body(slot(family, "validfitted")))) validfitted <- slot(family, "validfitted")(mu, y = y, extra = extra) take.half.step <- !(validparams && validfitted) if (FALSE && take.half.step) { stepsize <- orig.stepsize / 4 } } if (take.half.step) { stepsize <- (1 + (orig.stepsize != 1)) * orig.stepsize new.coeffs.save <- new.coeffs if (trace) cat("Taking a modified step") repeat { if (trace) { cat(".") flush.console() } stepsize <- stepsize / 2 if (too.small <- stepsize < 1e-6) break new.coeffs <- (1-stepsize) * old.coeffs + stepsize * new.coeffs.save if 
(length(slot(family, "middle"))) eval(slot(family, "middle")) fv <- X.vlm.save %*% new.coeffs if (M > 1) fv <- matrix(fv, n, M, byrow = TRUE) eta <- fv + offset mu <- slot(family, "linkinv")(eta, extra = extra) if (length(slot(family, "middle2"))) eval(slot(family, "middle2")) new.crit <- switch(criterion, coefficients = new.coeffs, tfun(mu = mu, y = y, w = w, res = FALSE, eta = eta, extra = extra)) validparams <- validfitted <- TRUE if (length(body(slot(family, "validparams")))) validparams <- slot(family, "validparams")(eta, y, extra = extra) if (length(body(slot(family, "validfitted")))) validfitted <- slot(family, "validfitted")(mu, y, extra = extra) if (validparams && validfitted && (is.finite(new.crit)) && # 20160321 (criterion == "coefficients" || (( minimize.criterion && new.crit < old.crit) || (!minimize.criterion && new.crit > old.crit)))) break } # of repeat if (trace) cat("\n") if (too.small) { warning("iterations terminated because ", "half-step sizes are very small") one.more <- FALSE } else { if (trace) { cat("VGLM linear loop ", iter, ": ", criterion, "= ") UUUU <- switch(criterion, coefficients = format(new.crit, digits = round(1 - log10(epsilon))), format(new.crit, digits = max(4, round(-0 - log10(epsilon) + log10(sqrt(eff.n)))))) switch(criterion, coefficients = { if (length(new.crit) > 2) cat("\n"); cat(UUUU, fill = TRUE, sep = ", ")}, cat(UUUU, fill = TRUE, sep = ", ")) } # if (trace) one.more <- eval(control$convergence) } # Not too.small } else { one.more <- eval(control$convergence) } flush.console() if (!is.logical(one.more)) one.more <- FALSE if (one.more) { iter <- iter + 1 deriv.mu <- eval(slot(family, "deriv")) wz <- eval(slot(family, "weight")) if (control$checkwz) wz <- checkwz(wz, M = M, trace = trace, wzepsilon = control$wzepsilon) U <- vchol(wz, M = M, n = n, silent = !trace) tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = n) z <- eta + vbacksub(U, tvfor, M = M, n = n) - offset } # if (one.more) if (!one.more && take.half.step && orig.stepsize == 1) warning("some quantities such as z, residuals, SEs may ", "be inaccurate due to convergence at a half-step") old.coeffs <- new.coeffs } # End of while() if (maxit > 1 && iter >= maxit && !control$noWarning) warning("convergence not obtained in ", maxit, " IRLS iterations") dnrow.X.vlm <- labels(X.vlm.save) xnrow.X.vlm <- dnrow.X.vlm[[2]] ynrow.X.vlm <- dnrow.X.vlm[[1]] if (length(slot(family, "fini"))) eval(slot(family, "fini")) if (M > 1) fv <- matrix(fv, n, M) final.coefs <- new.coeffs # Was tfit$coefficients prior to 20160317 asgn <- attr(X.vlm.save, "assign") names(final.coefs) <- xnrow.X.vlm rank <- tfit$rank cnames <- xnrow.X.vlm if (check.rank && rank < ncol.X.vlm) stop("vglm() only handles full-rank models (currently)") R <- tfit$qr$qr[1:ncol.X.vlm, 1:ncol.X.vlm, drop = FALSE] R[lower.tri(R)] <- 0 attributes(R) <- list(dim = c(ncol.X.vlm, ncol.X.vlm), dimnames = list(cnames, cnames), rank = rank) effects <- tfit$effects neff <- rep_len("", nrow.X.vlm) neff[seq(ncol.X.vlm)] <- cnames names(effects) <- neff dim(fv) <- c(n, M) dn <- labels(x) yn <- dn[[1]] xn <- dn[[2]] wresiduals <- z - fv # Replaced by fv 20160408 if (M == 1) { fv <- as.vector(fv) wresiduals <- as.vector(wresiduals) names(wresiduals) <- names(fv) <- yn } else { dimnames(wresiduals) <- dimnames(fv) <- list(yn, predictors.names) } if (is.matrix(mu)) { if (length(dimnames(y)[[2]])) { y.names <- dimnames(y)[[2]] } if (length(dimnames(mu)[[2]])) { y.names <- dimnames(mu)[[2]] } dimnames(mu) <- list(yn, y.names) } else { names(mu) <- 
names(fv) } fit <- list(assign = asgn, coefficients = final.coefs, constraints = Hlist, df.residual = nrow.X.vlm - rank, df.total = n * M, effects = effects, # this is good fitted.values = mu, # this is good offset = offset, rank = rank, # this is good residuals = wresiduals, R = R, terms = Terms) # terms: This used to be done in vglm() if (qr.arg) { fit$qr <- tfit$qr dimnames(fit$qr$qr) <- dnrow.X.vlm } if (M == 1) { wz <- as.vector(wz) # Convert wz into a vector } # else fit$weights <- if (save.weights) wz else NULL misc <- list( colnames.x = xn, colnames.X.vlm = xnrow.X.vlm, criterion = criterion, function.name = function.name, intercept.only = intercept.only, predictors.names = predictors.names, M = M, n = n, nonparametric = nonparametric, nrow.X.vlm = nrow.X.vlm, orig.assign = attr(x, "assign"), p = ncol(x), ncol.X.vlm = ncol.X.vlm, ynames = colnames(y)) crit.list <- list() if (criterion != "coefficients") crit.list[[criterion]] <- fit[[criterion]] <- new.crit for (ii in names(.min.criterion.VGAM)) { if (ii != criterion && any(slotNames(family) == ii) && length(body(slot(family, ii)))) { fit[[ii]] <- crit.list[[ii]] <- (slot(family, ii))(mu = mu, y = y, w = w, res = FALSE, eta = eta, extra = extra) } } if (w[1] != 1 || any(w != w[1])) fit$prior.weights <- w if (length(slot(family, "last"))) eval(slot(family, "last")) structure(c(fit, list(predictors = fv, # tfit$predictors, contrasts = attr(x, "contrasts"), control = control, crit.list = crit.list, extra = extra, family = family, iter = iter, misc = misc, post = post, ResSS = tfit$ResSS, x = x, y = y)), vclass = slot(family, "vfamily")) } # vglm.fit() VGAM/R/predict.vglm.q0000644000176200001440000002406213135276757014011 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. predictvglm <- function(object, newdata = NULL, type = c("link", "response", "terms"), # "parameters", se.fit = FALSE, deriv = 0, dispersion = NULL, untransform = FALSE, type.fitted = NULL, percentiles = NULL, ...) { na.act <- object@na.action object@na.action <- list() new.extra <- object@extra if (length(percentiles)) { new.extra$percentiles <- percentiles } if (length(type.fitted)) { new.extra$type.fitted <- type.fitted } if (deriv != 0) stop("'deriv' must be 0 for predictvglm()") if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("link", "response", "terms"))[1] if (untransform && (type == "response" || type == "terms" || se.fit || deriv != 0)) stop("argument 'untransform=TRUE' only if 'type=\"link\", ", "se.fit = FALSE, deriv=0'") predn <- if (se.fit) { switch(type, response = { warning("'type='response' and 'se.fit=TRUE' are not valid ", "together; setting 'se.fit = FALSE'") se.fit <- FALSE predictor <- predict.vlm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) fv <- object@family@linkinv(predictor, extra = new.extra) fv <- as.matrix(fv) dn1 <- dimnames(fv)[[1]] dn2 <- dimnames(object@fitted.values)[[2]] if (nrow(fv) == length(dn1) && ncol(fv) == length(dn2)) dimnames(fv) <- list(dn1, dn2) fv }, link = { predict.vlm(object, newdata = newdata, type = "response", se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) }, terms = { predict.vlm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) 
}) # End of switch } else { if (is.null(newdata)) { switch(type, link = object@predictors, response = { object@family@linkinv(eta = object@predictors, extra = new.extra) }, terms = { predict.vlm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) }) } else { if (!(length(object@offset) == 1 && object@offset == 0)) warning("zero offset used") switch(type, response = { predictor <- predict.vlm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) M <- object@misc$M fv <- object@family@linkinv(predictor, extra = new.extra) double.check <- is.null(new.extra$type.fitted) if (M > 1 && is.matrix(fv) && double.check) { fv <- as.matrix(fv) dn1 <- dimnames(fv)[[1]] dn2 <- dimnames(object@fitted.values)[[2]] if (nrow(fv) == length(dn1) && ncol(fv) == length(dn2)) dimnames(fv) <- list(dn1, dn2) } else { } fv }, link = { predict.vlm(object, newdata = newdata, type = "response", se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) }, terms = { predict.vlm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) }) # End of switch } } # End of se.fit == FALSE try.this <- findFirstMethod("predictvglmS4VGAM", object@family@vfamily) if (length(try.this)) { predn <- predictvglmS4VGAM(object = object, VGAMff = new(try.this), predn = predn, # This is 'new' newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, untransform = untransform, ...) } else { } if (!length(newdata) && length(na.act)) { if (se.fit) { predn$fitted.values <- napredict(na.act[[1]], predn$fitted.values) predn$se.fit <- napredict(na.act[[1]], predn$se.fit) } else { predn <- napredict(na.act[[1]], predn) } } if (untransform) untransformVGAM(object, predn) else predn } # predictvglm setMethod("predict", "vglm", function(object, ...) predictvglm(object, ...)) predict.rrvglm <- function(object, newdata = NULL, type = c("link", "response", "terms"), se.fit = FALSE, deriv = 0, dispersion = NULL, extra = object@extra, ...) { if (se.fit) { stop("20030811; predict.rrvglm(..., se.fit=TRUE) not complete yet") pred <- switch(type, response = { warning("'type=\"response\"' and 'se.fit=TRUE' not valid ", "together; setting 'se.fit = FALSE'") se.fit <- FALSE predictor <- predict.vlm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) fv <- object@family@linkinv(predictor, extra = extra) fv <- as.matrix(fv) dn1 <- dimnames(fv)[[1]] dn2 <- dimnames(object@fitted.values)[[2]] if (nrow(fv) == length(dn1) && ncol(fv) == length(dn2)) dimnames(fv) <- list(dn1, dn2) fv }, link = { type <- "response" predict.vlm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) }, terms = { predict.vlm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...) } ) } else { return(predictvglm(object, newdata = newdata, type = type, se.fit = se.fit, deriv = deriv, dispersion = dispersion, ...)) } na.act <- object@na.action if (!length(newdata) && length(na.act)) { if (se.fit) { pred$fitted.values <- napredict(na.act[[1]], pred$fitted.values) pred$se.fit <- napredict(na.act[[1]], pred$se.fit) } else { pred <- napredict(na.act[[1]], pred) } } pred } setMethod("predict", "rrvglm", function(object, ...) 
predict.rrvglm(object, ...)) untransformVGAM <- function(object, pred) { M <- object@misc$M Links <- object@misc$link if (length(Links) != M && length(Links) != 1) stop("cannot obtain the link functions to untransform the object") upred <- pred earg <- object@misc$earg LINK <- object@misc$link # link.names # This should be a character vector. EARG <- object@misc$earg # This could be a NULL if (is.null(EARG)) EARG <- list(theta = NULL) if (!is.list(EARG)) stop("the 'earg' component of 'object@misc' must be a list") if (length(LINK) != M && length(LINK) != 1) stop("cannot obtain the link functions to untransform 'object'") if (!is.character(LINK)) stop("the 'link' component of 'object@misc' should ", "be a character vector") learg <- length(EARG) llink <- length(LINK) if (llink != learg) stop("the 'earg' component of 'object@misc' should ", "be a list of length ", learg) level1 <- length(EARG) > 3 && length(intersect(names(EARG), c("theta", "inverse", "deriv", "short", "tag"))) > 3 if (level1) EARG <- list(oneOnly = EARG) learg <- length(EARG) for (ii in 1:M) { TTheta <- pred[, ii] # Transformed theta use.earg <- if (llink == 1) EARG[[1]] else EARG[[ii]] function.name <- if (llink == 1) LINK else LINK[ii] use.earg[["inverse"]] <- TRUE # New use.earg[["theta"]] <- TTheta # New Theta <- do.call(function.name, use.earg) upred[, ii] <- Theta } dmn2 <- if (length(names(object@misc$link))) { names(object@misc$link) } else { if (length(object@misc$parameters)) object@misc$parameters else NULL } dimnames(upred) <- list(dimnames(upred)[[1]], dmn2) upred } setMethod("predictvglmS4VGAM", signature(VGAMff = "binom2.or"), function(object, VGAMff, predn, newdata = NULL, type = c("link", "response", "terms"), # "parameters", se.fit = FALSE, deriv = 0, dispersion = NULL, untransform = FALSE, extra = object@extra, n.ahead = 1, ...) { # object@post <- # callNextMethod(VGAMff = VGAMff, # object = object, # ...) #object@post$reverse <- object@misc$reverse if (se.fit) { predn$junk.component <- rep_len(coef(object), n.ahead) predn$se.fit.junk.component <- rep_len(diag(vcov(object)), n.ahead) } else { could.return.this.instead.of.predn <- predn2 <- rep_len(coef(object), n.ahead) } predn }) VGAM/R/family.glmgam.R0000644000176200001440000016622013135276757014103 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
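# Editor's illustrative sketch (commented out; not part of the package).
# A guess at how the binomialff() family defined below and the
# predictvglm() method from predict.vglm.q above fit together.  The data
# frame 'bdat' is hypothetical; vglm(), logit() and predict() are real
# VGAM/S4 entry points.
# if (FALSE) {
#   set.seed(1)
#   bdat <- data.frame(x2 = runif(100))
#   bdat$y <- rbinom(100, size = 1,
#                    prob = logit(-1 + 2 * bdat$x2, inverse = TRUE))
#   fit <- vglm(y ~ x2, binomialff, data = bdat)
#   head(predict(fit, type = "link"))      # eta, the linear predictor
#   head(predict(fit, type = "response"))  # fitted probabilities
#   head(predict(fit, untransform = TRUE)) # probabilities again, via
#                                          # untransformVGAM() above
# }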
binomialff <- function(link = "logit", dispersion = 1, multiple.responses = FALSE, onedpar = !multiple.responses, parallel = FALSE, # apply.parint = FALSE, zero = NULL, bred = FALSE, earg.link = FALSE) { if (!is.logical(bred) || length(bred) > 1) stop("argument 'bred' must be a single logical") apply.parint <- FALSE estimated.dispersion <- dispersion == 0 if (earg.link) { earg <- link } else { link <- as.list(substitute(link)) earg <- link2list(link) } link <- attr(earg, "function.name") ans <- new("vglmff", blurb = if (multiple.responses) c("Multiple binomial model\n\n", "Link: ", namesof("mu[,j]", link, earg = earg), "\n", "Variance: mu[,j]*(1-mu[,j])") else c("Binomial model\n\n", "Link: ", namesof("prob", link, earg = earg), "\n", "Variance: mu * (1 - mu)"), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints, apply.int = .apply.parint ) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .zero = zero, .parallel = parallel, .apply.parint = apply.parint ))), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, bred = .bred , expected = TRUE, hadof = TRUE, parameters.names = c("prob"), # new.name zero = .zero ) }, list( .zero = zero, .bred = bred ))), initialize = eval(substitute(expression({ assign("CQO.FastAlgorithm", ( .link == "logit" || .link == "cloglog"), envir = VGAMenv) assign("modelno", if ( .link == "logit") 1 else if ( .link == "cloglog") 4 else NULL, envir = VGAMenv) old.name <- "mu" new.name <- "prob" if ( .multiple.responses ) { temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y y.counts <- y y <- y / w M <- ncol(y) if (FALSE) if (!all(y == 0 | y == 1)) stop("response must contain 0s and 1s only") dn2 <- if (is.matrix(y)) dimnames(y)[[2]] else NULL dn2 <- if (length(dn2)) { paste("E[", dn2, "]", sep = "") } else { paste(new.name, 1:M, sep = "") } predictors.names <- namesof(if (M > 1) dn2 else new.name, .link , earg = .earg , short = TRUE) if (!length(mustart) && !length(etastart)) mustart <- matrix(colMeans(y.counts), nrow(y), ncol = ncol(y), byrow = TRUE) / matrix(colMeans(w), nrow = nrow(w), ncol = ncol(w), byrow = TRUE) extra$multiple.responses <- TRUE } else { if (!all(w == 1)) extra$orig.w <- w NCOL <- function (x) if (is.array(x) && length(dim(x)) > 1 || is.data.frame(x)) ncol(x) else as.integer(1) if (NCOL(y) == 1) { if (is.factor(y)) y <- (y != levels(y)[1]) nvec <- rep_len(1, n) y[w == 0] <- 0 if (!all(y == 0 | y == 1)) stop("response values 'y' must be 0 or 1") if (!length(mustart) && !length(etastart)) mustart <- (0.5 + w * y) / (1 + w) no.successes <- y if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(no.successes - round(no.successes)) > 1.0e-8)) stop("Number of successes must be integer-valued") } else if (NCOL(y) == 2) { if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(y - round(y)) > 1.0e-8)) stop("Count data must be integer-valued") y <- round(y) nvec <- y[, 1] + y[, 2] y <- ifelse(nvec > 0, y[, 1] / nvec, 0) w <- w * nvec if (!length(mustart) && !length(etastart)) mustart <- (0.5 + nvec * y) / (1 + nvec) } else { stop("for the binomialff family, response 'y' must be a ", "vector of 0 and 1's\n", "or a factor (first level = fail, other levels = success)", ",\n", "or a 2-column matrix where col 1 is the no. of ", "successes and col 2 is the no. 
of failures") } predictors.names <- namesof(new.name, .link , earg = .earg , short = TRUE) } if ( .bred ) { if ( !control$save.weights ) { save.weights <- control$save.weights <- TRUE } } }), list( .link = link, .multiple.responses = multiple.responses, .earg = earg, .bred = bred ))), linkinv = eval(substitute(function(eta, extra = NULL) { mu <- eta2theta(eta, link = .link , earg = .earg ) colnames(mu) <- NULL mu }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ if (exists("CQO.FastAlgorithm", envir = VGAMenv)) rm("CQO.FastAlgorithm", envir = VGAMenv) if (exists("modelno", envir = VGAMenv)) rm("modelno", envir = VGAMenv) dpar <- .dispersion if (!dpar) { temp87 <- (y-mu)^2 * wz / ( dtheta.deta(mu, link = .link , earg = .earg )^2) # w cancel if (.multiple.responses && ! .onedpar ) { dpar <- rep_len(NA_real_, M) temp87 <- cbind(temp87) nrow.mu <- if (is.matrix(mu)) nrow(mu) else length(mu) for (ii in 1:M) dpar[ii] <- sum(temp87[, ii]) / (nrow.mu - ncol(x)) if (is.matrix(y) && length(dimnames(y)[[2]]) == length(dpar)) names(dpar) <- dimnames(y)[[2]] } else { dpar <- sum(temp87) / (length(mu) - ncol(x)) } } misc$multiple.responses <- .multiple.responses misc$dispersion <- dpar misc$default.dispersion <- 1 misc$estimated.dispersion <- .estimated.dispersion misc$bred <- .bred misc$expected <- TRUE misc$link <- rep_len( .link , M) names(misc$link) <- if (M > 1) dn2 else new.name # Was old.name=="mu" misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg }), list( .dispersion = dispersion, .estimated.dispersion = estimated.dispersion, .onedpar = onedpar, .multiple.responses = multiple.responses, .bred = bred, .link = link, .earg = earg))), linkfun = eval(substitute(function(mu, extra = NULL) { theta2eta(mu, .link , earg = .earg ) }, list( .link = link, .earg = earg))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (residuals) { c(w) * (y / mu - (1-y) / (1-mu)) } else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e6 * .Machine$double.eps smallno <- sqrt(.Machine$double.eps) if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) ll.elts <- if ( .multiple.responses ) { c(w) * ( ycounts * log( mu) + (1 - ycounts) * log1p(-mu)) } else { (if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dbinom(x = ycounts, size = nvec, prob = mu, log = TRUE) } if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .multiple.responses = multiple.responses ))), validparams = eval(substitute(function(eta, y, extra = NULL) { mymu <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(mymu)) && all(0 < mymu & mymu < 1) okay1 }, list( .link = link, .earg = earg, .bred = bred))), vfamily = c("binomialff", "VGAMcategorical"), hadof = eval(substitute( function(eta, extra = list(), deriv = 1, linpred.index = 1, w = 1, dim.wz = c(NROW(eta), NCOL(eta) * (NCOL(eta)+1)/2), ...) 
{ fvs <- eta2theta(eta, link = .link , earg = .earg ) if ( .bred ) { fvs <- fvs + NA_real_ # Correct dimension for below too } ans <- c(w) * switch(as.character(deriv), "0" = 1 / (fvs * (1 - fvs)), "1" = -(1 - 2*fvs) / (fvs * (1 - fvs))^2, "2" = 2 * (1 - 3*fvs*(1-fvs)) / (fvs * (1 - fvs))^3, stop("argument 'deriv' must be 0 or 1 or 2")) if (deriv == 0) ans else retain.col(ans, linpred.index) # Coz M1 = 1 }, list( .link = link, .earg = earg, .bred = bred) )), simslot = function (object, nsim) { ftd <- fitted(object) if (ncol(ftd) > 1) stop("simulate() does not work with more than one response") n <- length(ftd) ntot <- n * nsim pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts %% 1 != 0)) stop("cannot simulate from non-integer prior.weights") if (length(m <- object@model) > 0) { y <- model.response(m) if (is.factor(y)) { yy <- factor(1 + rbinom(ntot, size = 1, prob = ftd), labels = levels(y)) split(yy, rep(seq_len(nsim), each = n)) } else if (is.matrix(y) && ncol(y) == 2) { yy <- vector("list", nsim) for (i in seq_len(nsim)) { Y <- rbinom(n, size = pwts, prob = ftd) YY <- cbind(Y, pwts - Y) colnames(YY) <- colnames(y) yy[[i]] <- YY } yy } else { rbinom(ntot, size = pwts, prob = ftd)/pwts } } else { rbinom(ntot, size = c(pwts), prob = c(ftd))/c(pwts) } }, deriv = eval(substitute(expression({ yBRED <- if ( .bred ) { Hvector <- hatvaluesbasic(X.vlm = X.vlm.save, diagWm = c(t(w * mu))) # Handles M>1 varY <- mu * (1 - mu) / w # A matrix if M>1. Seems the most correct. d1.ADJ <- dtheta.deta(mu, .link , earg = .earg ) temp.earg <- .earg temp.earg$inverse <- FALSE temp.earg$inverse <- TRUE d2.ADJ <- d2theta.deta2(mu, .link , earg = temp.earg ) yBRED <- y + matrix(Hvector, n, M, byrow = TRUE) * varY * d2.ADJ / (2 * d1.ADJ^2) yBRED } else { y } answer <- if ( .link == "logit") { c(w) * (yBRED - mu) } else if ( .link == "cloglog") { mu.use <- mu smallno <- 100 * .Machine$double.eps mu.use[mu.use < smallno] <- smallno mu.use[mu.use > 1.0 - smallno] <- 1.0 - smallno -c(w) * (yBRED - mu) * log1p(-mu.use) / mu.use } else { c(w) * dtheta.deta(mu, link = .link , earg = .earg ) * (yBRED / mu - 1.0) / (1.0 - mu) } answer }), list( .link = link, .earg = earg, .bred = bred))), weight = eval(substitute(expression({ tmp100 <- mu * (1.0 - mu) ned2ldprob2 <- if ( .link == "logit") { cbind(c(w) * tmp100) } else if ( .link == "cloglog") { cbind(c(w) * (1.0 - mu.use) * (log1p(-mu.use))^2 / mu.use) } else { cbind(c(w) * dtheta.deta(mu, link = .link , earg = .earg )^2 / tmp100) } for (ii in 1:M) { index500 <- !is.finite(ned2ldprob2[, ii]) | (abs(ned2ldprob2[, ii]) < .Machine$double.eps) if (any(index500)) { # Diagonal 0s are bad ned2ldprob2[index500, ii] <- .Machine$double.eps } } ned2ldprob2 }), list( .link = link, .earg = earg)))) ans@deviance <- if (multiple.responses) function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { Deviance.categorical.data.vgam(mu = mu, y = y, w = w, residuals = residuals, eta = eta, extra = extra, summation = summation) } else function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { Deviance.categorical.data.vgam(mu = cbind(mu, 1-mu), y = cbind(y , 1-y), w = w, residuals = residuals, eta = eta, extra = extra, summation = summation) } ans } gammaff <- function(link = "nreciprocal", dispersion = 0) { estimated.dispersion <- dispersion == 0 link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Gamma distribution\n\n", "Link: ", 
namesof("mu", link, earg = earg), "\n", "Variance: mu^2 / k"), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { devi <- -2 * c(w) * (log(ifelse(y == 0, 1, y/mu)) - (y - mu)/mu) if (residuals) { sign(y - mu) * sqrt(abs(devi) * w) } else { dev.elts <- c(w) * devi if (summation) { sum(dev.elts) } else { dev.elts } } }, infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, parameters.names = c("mu"), dispersion = .dispersion ) }, list( .dispersion = dispersion ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y mustart <- y + 0.167 * (y == 0) M <- if (is.matrix(y)) ncol(y) else 1 dn2 <- if (is.matrix(y)) dimnames(y)[[2]] else NULL dn2 <- if (length(dn2)) { paste("E[", dn2, "]", sep = "") } else { paste("mu", 1:M, sep = "") } predictors.names <- namesof(if (M > 1) dn2 else "mu", .link , earg = .earg , short = TRUE) if (!length(etastart)) etastart <- theta2eta(mustart, link = .link , earg = .earg ) }), list( .link = link, .earg = earg))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta, link = .link , earg = .earg ) }, list( .link = link, .earg = earg))), last = eval(substitute(expression({ dpar <- .dispersion if (!dpar) { if (M == 1) { temp <- c(w) * dmu.deta^2 dpar <- sum(c(w) * (y-mu)^2 * wz / temp) / (length(mu) - ncol(x)) } else { dpar <- rep_len(0, M) for (spp in 1:M) { temp <- c(w) * dmu.deta[, spp]^2 dpar[spp] <- sum(c(w) * (y[,spp]-mu[, spp])^2 * wz[, spp]/temp) / ( length(mu[,spp]) - ncol(x)) } } } misc$dispersion <- dpar misc$default.dispersion <- 0 misc$estimated.dispersion <- .estimated.dispersion misc$link <- rep_len( .link , M) names(misc$link) <- param.names("mu", M) misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .dispersion = dispersion, .earg = earg, .estimated.dispersion = estimated.dispersion, .link = link ))), linkfun = eval(substitute(function(mu, extra = NULL) { theta2eta(mu, link = .link , earg = .earg ) }, list( .link = link, .earg = earg))), vfamily = "gammaff", validparams = eval(substitute(function(eta, y, extra = NULL) { mymu <- theta2eta(mu, .link , earg = .earg ) okay1 <- all(is.finite(mymu)) && all(0 < mymu) okay1 }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ M1 <- 1 ncoly <- NCOL(y) dl.dmu <- (y-mu) / mu^2 dmu.deta <- dtheta.deta(theta = mu, link = .link , earg = .earg ) c(w) * dl.dmu * dmu.deta }), list( .link = link, .earg = earg))), weight = eval(substitute(expression({ d2l.dmu2 <- 1 / mu^2 wz <- dmu.deta^2 * d2l.dmu2 w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = ncoly) }), list( .link = link, .earg = earg)))) } inverse.gaussianff <- function(link = "natural.ig", dispersion = 0) { estimated.dispersion <- dispersion == 0 warning("@deviance() not finished") warning("needs checking, but I'm sure it works") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Inverse Gaussian distribution\n\n", "Link: ", namesof("mu", link, earg = earg), "\n", "Variance: mu^3 / k"), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { pow <- 3 # Use Quasi()$deviance with pow==3 devy <- y^(2-pow) / (1-pow) - y^(2-pow) / (2-pow) devmu <- y * mu^(1-pow) / (1-pow) - mu^(2-pow) / (2-pow) devi <- 2 * (devy - devmu) if (residuals) { sign(y - 
mu) * sqrt(abs(devi) * w) } else { dev.elts <- c(w) * devi if (summation) { sum(dev.elts) } else { dev.elts } } }, infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, parameters.names = c("mu"), quasi.type = TRUE, dispersion = .dispersion ) }, list( .earg = earg , .dispersion = dispersion ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y mu <- y + 0.167 * (y == 0) M <- if (is.matrix(y)) ncol(y) else 1 dn2 <- if (is.matrix(y)) dimnames(y)[[2]] else NULL dn2 <- if (length(dn2)) { paste("E[", dn2, "]", sep = "") } else { paste("mu", 1:M, sep = "") } predictors.names <- namesof(if (M > 1) dn2 else "mu", .link , .earg , short = TRUE) if (!length(etastart)) etastart <- theta2eta(mu, link = .link , .earg ) }), list( .link = link, .earg = earg))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta, link = .link , earg = .earg ) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ dpar <- .dispersion if (!dpar) { temp <- c(w) * dmu.deta^2 dpar <- sum( c(w) * (y-mu)^2 * wz / temp ) / (length(mu) - ncol(x)) } misc$dispersion <- dpar misc$default.dispersion <- 0 misc$estimated.dispersion <- .estimated.dispersion misc$link <- rep_len( .link , M) names(misc$link) <- param.names("mu", M) misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .dispersion = dispersion, .estimated.dispersion = estimated.dispersion, .link = link, .earg = earg ))), linkfun = eval(substitute(function(mu, extra = NULL) { theta2eta(mu, link = .link , earg = .earg ) }, list( .link = link, .earg = earg ))), vfamily = "inverse.gaussianff", validparams = eval(substitute(function(eta, y, extra = NULL) { mymu <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(mymu)) && all(0 < mymu) okay1 }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ M1 <- 1 ncoly <- NCOL(y) dl.dmu <- (y - mu) / mu^3 dmu.deta <- dtheta.deta(theta = mu, link = .link , earg = .earg ) c(w) * dl.dmu * dmu.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ d2l.dmu2 <- 1 / mu^3 wz <- dmu.deta^2 * d2l.dmu2 w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = ncoly) }), list( .link = link, .earg = earg )))) } dinv.gaussian <- function(x, mu, lambda, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(mu), length(lambda)) if (length(x) != L) x <- rep_len(x, L) if (length(mu) != L) mu <- rep_len(mu, L) if (length(lambda) != L) lambda <- rep_len(lambda, L) logdensity <- rep_len(log(0), L) xok <- (x > 0) logdensity[xok] <- 0.5 * log(lambda[xok] / (2 * pi * x[xok]^3)) - lambda[xok] * (x[xok]-mu[xok])^2 / (2*mu[xok]^2 * x[xok]) logdensity[mu <= 0] <- NaN logdensity[lambda <= 0] <- NaN if (log.arg) logdensity else exp(logdensity) } pinv.gaussian <- function(q, mu, lambda) { L <- max(length(q), length(mu), length(lambda)) if (length(q) != L) q <- rep_len(q, L) if (length(mu) != L) mu <- rep_len(mu, L) if (length(lambda) != L) lambda <- rep_len(lambda, L) ans <- q ans[q <= 0] <- 0 bb <- q > 0 ans[bb] <- pnorm( sqrt(lambda[bb]/q[bb]) * (q[bb]/mu[bb] - 1)) + exp(2*lambda[bb]/mu[bb]) * pnorm(-sqrt(lambda[bb]/q[bb]) * (q[bb]/mu[bb] + 1)) ans[mu <= 0] <- NaN ans[lambda <= 0] <- NaN ans } rinv.gaussian <- function(n, mu, lambda) { use.n <- if 
((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n mu <- rep_len(mu, use.n) lambda <- rep_len(lambda, use.n) u <- runif(use.n) Z <- rnorm(use.n)^2 # rchisq(use.n, df = 1) phi <- lambda / mu y1 <- 1 - 0.5 * (sqrt(Z^2 + 4*phi*Z) - Z) / phi ans <- mu * ifelse((1+y1)*u > 1, 1/y1, y1) ans[mu <= 0] <- NaN ans[lambda <= 0] <- NaN ans } inv.gaussianff <- function(lmu = "loge", llambda = "loge", imethod = 1, ilambda = NULL, parallel = FALSE, ishrinkage = 0.99, zero = NULL) { apply.parint <- FALSE lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (!is.Numeric(ishrinkage, length.arg = 1) || ishrinkage < 0 || ishrinkage > 1) stop("bad input for argument 'ishrinkage'") if (is.logical(parallel) && parallel && length(zero)) stop("set 'zero = NULL' if 'parallel = TRUE'") new("vglmff", blurb = c("Inverse Gaussian distribution\n\n", "f(y) = sqrt(lambda/(2*pi*y^3)) * ", "exp(-lambda * (y - mu)^2 / (2 * mu^2 * y)); ", "y, mu & lambda > 0", "Link: ", namesof("mu", lmu, earg = emu), ", ", namesof("lambda", llambda, earg = elambda),"\n", "Mean: ", "mu\n", "Variance: mu^3 / lambda"), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints, apply.int = .apply.parint ) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero, .parallel = parallel, .apply.parint = apply.parint ))), infos = eval(substitute(function(...) 
{ list(M1 = 2, Q1 = 1, parameters.names = c("mu", "lambda"), expected = TRUE, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 2 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly mynames1 <- param.names("mu", ncoly) mynames2 <- param.names("lambda", ncoly) predictors.names <- c(namesof(mynames1, .lmu , earg = .emu , short = TRUE), namesof(mynames2, .llambda , earg = .elambda , short = TRUE))[ interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { init.mu <- if ( .imethod == 2) { mediany <- apply(y, 2, median) matrix(1.1 * mediany + 1/8, n, ncoly, byrow = TRUE) } else if ( .imethod == 3) { use.this <- colSums(y * w) / colSums(w) # weighted.mean(y, w) (1 - .ishrinkage ) * y + .ishrinkage * use.this } else { matrix(colSums(y * w) / colSums(w) + 1/8, n, ncoly, byrow = TRUE) } variancey <- apply(y, 2, var) init.la <- matrix(if (length( .ilambda )) .ilambda else (init.mu^3) / (0.10 + variancey), n, ncoly, byrow = TRUE) etastart <- cbind( theta2eta(init.mu, link = .lmu , earg = .emu ), theta2eta(init.la, link = .llambda , earg = .elambda ))[, interleave.VGAM(M, M1 = M1)] } }), list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda, .ishrinkage = ishrinkage, .imethod = imethod, .ilambda = ilambda ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, c(TRUE, FALSE)], link = .lmu , earg = .emu ) }, list( .lmu = lmu, .emu = emu, .elambda = elambda ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .lmu , ncoly), rep_len( .llambda , ncoly))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .emu misc$earg[[M1*ii ]] <- .elambda } misc$M1 <- M1 misc$imethod <- .imethod misc$ishrinkage <- .ishrinkage misc$expected <- TRUE misc$multipleResponses <- FALSE misc$parallel <- .parallel misc$apply.parint <- .apply.parint }), list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda, .parallel = parallel, .apply.parint = apply.parint, .ishrinkage = ishrinkage, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { mymu <- eta2theta(eta[, c(TRUE, FALSE)], link = .lmu , earg = .emu ) lambda <- eta2theta(eta[, c(FALSE, TRUE)], link = .llambda , earg = .elambda ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dinv.gaussian(x = y, mu = mymu, lambda = lambda, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda ))), vfamily = "inv.gaussianff", validparams = eval(substitute(function(eta, y, extra = NULL) { mymu <- eta2theta(eta[, c(TRUE, FALSE)], link = .lmu , earg = .emu ) lambda <- eta2theta(eta[, c(FALSE, TRUE)], link = .llambda , earg = .elambda ) okay1 <- all(is.finite(mymu )) && all(0 < mymu ) && all(is.finite(lambda)) && all(0 < lambda) okay1 }, list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda ))), deriv = eval(substitute(expression({ M1 <- 2 mymu <- eta2theta(eta[, c(TRUE, FALSE)], link = .lmu , earg = .emu ) lambda <- eta2theta(eta[, c(FALSE, TRUE)], link = .llambda , earg = .elambda ) dmu.deta <- 
dtheta.deta(theta = mymu , link = .lmu , earg = .emu ) dlambda.deta <- dtheta.deta(theta = lambda, link = .llambda , earg = .elambda ) dl.dmu <- lambda * (y - mymu) / mymu^3 dl.dlambda <- 0.5 / lambda - (y - mymu)^2 / (2 * mymu^2 * y) myderiv <- c(w) * cbind(dl.dmu * dmu.deta, dl.dlambda * dlambda.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda ))), weight = eval(substitute(expression({ ned2l.dmu2 <- lambda / mymu^3 ned2l.dlambda2 <- 0.5 / (lambda^2) wz <- cbind(dmu.deta^2 * ned2l.dmu2, dlambda.deta^2 * ned2l.dlambda2)[, interleave.VGAM(M, M1 = M1)] w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = M / M1) }), list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda )))) } poissonff <- function(link = "loge", dispersion = 1, onedpar = FALSE, imu = NULL, imethod = 1, parallel = FALSE, zero = NULL, bred = FALSE, earg.link = FALSE, type.fitted = c("mean", "quantiles"), percentiles = c(25, 50, 75)) { type.fitted <- match.arg(type.fitted, c("mean", "quantiles"))[1] if (!is.logical(bred) || length(bred) > 1) stop("argument 'bred' must be a single logical") estimated.dispersion <- (dispersion == 0) if (earg.link) { earg <- link } else { link <- as.list(substitute(link)) earg <- link2list(link) } link <- attr(earg, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (length(imu) && !is.Numeric(imu, positive = TRUE)) stop("bad input for argument 'imu'") new("vglmff", blurb = c("Poisson distribution\n\n", "Link: ", namesof("lambda", link, earg = earg), "\n", "Variance: lambda"), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .parallel = parallel, .zero = zero ))), infos = eval(substitute(function(...) 
{ list(M1 = 1, Q1 = 1, expected = TRUE, hadof = TRUE, multipleResponses = TRUE, parameters.names = c("lambda"), type.fitted = .type.fitted , percentiles = .percentiles , bred = .bred , zero = .zero ) }, list( .zero = zero, .type.fitted = type.fitted, .percentiles = percentiles, .bred = bred ))), deviance = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { mupo <- eta2theta(eta, link = .link , earg = .earg ) nz <- (y > 0) devi <- -(y - mupo) devi[nz] <- devi[nz] + y[nz] * log(y[nz]/mupo[nz]) if (residuals) { sign(y - mupo) * sqrt(2 * abs(devi) * c(w)) } else { dev.elts <- 2 * c(w) * devi if (summation) { sum(dev.elts) } else { dev.elts } } }, list( .link = link, .earg = earg ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y M <- ncoly <- ncol(y) assign("CQO.FastAlgorithm", ( .link == "loge"), envir = VGAMenv) old.name <- "mu" new.name <- "lambda" dn2 <- if (is.matrix(y)) dimnames(y)[[2]] else NULL dn2 <- if (length(dn2)) { paste("E[", dn2, "]", sep = "") } else { paste(new.name, 1:M, sep = "") } predictors.names <- namesof(if (M > 1) dn2 else new.name, # was "mu" == old.name .link , earg = .earg , short = TRUE) if ( .bred ) { if ( !control$save.weights ) { save.weights <- control$save.weights <- TRUE } } extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) extra$percentiles <- .percentiles if (!length(etastart)) { mu.init <- pmax(y, 1/8) for (iii in 1:ncol(y)) { if ( .imethod == 2) { mu.init[, iii] <- weighted.mean(y[, iii], w[, iii]) + 1/8 } else if ( .imethod == 3) { mu.init[, iii] <- median(y[, iii]) + 1/8 } } if (length( .imu )) mu.init <- matrix( .imu , n, ncoly, byrow = TRUE) etastart <- theta2eta(mu.init, link = .link , earg = .earg ) } }), list( .link = link, .estimated.dispersion = estimated.dispersion, .type.fitted = type.fitted, .percentiles = percentiles, .bred = bred, .imethod = imethod, .imu = imu, .earg = earg))), linkinv = eval(substitute(function(eta, extra = NULL) { mupo <- eta2theta(eta, link = .link , earg = .earg ) type.fitted <- if (length(extra$type.fitted)) { extra$type.fitted } else { warning("cannot find 'type.fitted'. Returning 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "quantiles"))[1] if (type.fitted == "mean") { return(label.cols.y(mupo, colnames.y = extra$colnames.y, NOS = NOS)) } percvec <- extra$percentiles lenperc <- length(percvec) NOS <- NCOL(eta) / c(M1 = 1) jvec <- lenperc * (0:(NOS - 1)) ans <- matrix(0, NROW(eta), lenperc * NOS) for (kay in 1:lenperc) ans[, jvec + kay] <- qpois(0.01 * percvec[kay], lambda = mupo) rownames(ans) <- rownames(eta) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS, percentiles = percvec, one.on.one = FALSE) }, list( .link = link, .earg = earg))), last = eval(substitute(expression({ if (exists("CQO.FastAlgorithm", envir = VGAMenv)) rm("CQO.FastAlgorithm", envir = VGAMenv) dpar <- .dispersion if (!dpar) { temp87 <- (y-mu)^2 * wz / (dtheta.deta(mu, link = .link , earg = .earg )^2) # w cancel if (M > 1 && ! 
.onedpar ) { dpar <- rep_len(NA_real_, M) temp87 <- cbind(temp87) nrow.mu <- if (is.matrix(mu)) nrow(mu) else length(mu) for (ii in 1:M) dpar[ii] <- sum(temp87[, ii]) / (nrow.mu - ncol(x)) if (is.matrix(y) && length(dimnames(y)[[2]]) == length(dpar)) names(dpar) <- dimnames(y)[[2]] } else { dpar <- sum(temp87) / (length(mu) - ncol(x)) } } misc$dispersion <- dpar misc$default.dispersion <- 1 misc$estimated.dispersion <- .estimated.dispersion misc$imethod <- .imethod misc$link <- rep_len( .link , M) names(misc$link) <- if (M > 1) dn2 else new.name # Was old.name=="mu" misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg }), list( .dispersion = dispersion, .imethod = imethod, .estimated.dispersion = estimated.dispersion, .bred = bred, .onedpar = onedpar, .link = link, .earg = earg))), linkfun = eval(substitute( function(mu, extra = NULL) { theta2eta(mu, link = .link , earg = .earg ) }, list( .link = link, .earg = earg))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { mupo <- eta2theta(eta, link = .link , earg = .earg ) if (residuals) { c(w) * (y / mupo - 1) } else { ll.elts <- c(w) * dpois(x = y, lambda = mupo, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg ))), vfamily = "poissonff", validparams = eval(substitute(function(eta, y, extra = NULL) { mupo <- eta2theta(eta, link = .link , earg = .earg ) okay1 <- all(is.finite(mupo)) && all(0 < mupo) okay1 }, list( .link = link, .earg = earg ))), hadof = eval(substitute( function(eta, extra = list(), deriv = 1, linpred.index = 1, w = 1, dim.wz = c(NROW(eta), NCOL(eta) * (NCOL(eta)+1)/2), ...) { mupo <- eta2theta(eta, link = .link , earg = .earg ) ans <- c(w) * switch(as.character(deriv), "0" = 1 / mupo, "1" = -1 / mupo^2, "2" = 2 / mupo^3, "3" = -6 / mupo^4, stop("argument 'deriv' must be 0, 1, 2 or 3")) if (deriv == 0) ans else retain.col(ans, linpred.index) # Coz M1 = 1 }, list( .link = link, .earg = earg ))), simslot = function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") ftd <- fitted(object) rpois(nsim * length(ftd), ftd) }, deriv = eval(substitute(expression({ mupo <- eta2theta(eta, link = .link , earg = .earg ) yBRED <- if ( .bred ) { Hvector <- hatvaluesbasic(X.vlm = X.vlm.save, diagWm = c(t(c(w) * mupo))) # Handles M>1 varY <- mupo # Is a matrix if M>1. 
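    # Editor's note on the 'bred' branch below: hatvaluesbasic() returns
    # the hat values h_i, and the working response is shifted to
    #   yBRED = y + h_i * Var(Y) * (d^2 theta / d eta^2) /
    #               (2 * (d theta / d eta)^2)
    # before the usual Poisson score c(w) * (yBRED - mupo) is formed;
    # this appears to be a Firth-type bias-reduction adjustment.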
d1.BRED <- dtheta.deta(mupo, .link , earg = .earg ) d2.BRED <- d2theta.deta2(mupo, .link , earg = .earg ) y + matrix(Hvector, n, M, byrow = TRUE) * varY * d2.BRED / (2 * d1.BRED^2) } else { y } answer <- if ( .link == "loge" && (any(mupo < .Machine$double.eps))) { c(w) * (yBRED - mupo) } else { lambda <- mupo dl.dlambda <- (yBRED - lambda) / lambda dlambda.deta <- dtheta.deta(theta = lambda, link = .link , earg = .earg ) c(w) * dl.dlambda * dlambda.deta } answer }), list( .link = link, .earg = earg, .bred = bred))), weight = eval(substitute(expression({ if ( .link == "loge" && (any(mupo < .Machine$double.eps))) { tmp600 <- mupo tmp600[tmp600 < .Machine$double.eps] <- .Machine$double.eps c(w) * tmp600 } else { ned2l.dlambda2 <- 1 / lambda ned2lambda.deta2 <- d2theta.deta2(theta = lambda, link = .link , earg = .earg ) c(w) * dlambda.deta^2 * ned2l.dlambda2 } }), list( .link = link, .earg = earg)))) } # poissonff() quasibinomialff <- function( link = "logit", multiple.responses = FALSE, onedpar = !multiple.responses, parallel = FALSE, zero = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") dispersion <- 0 # Estimated; this is the only difference w. binomialff() ans <- binomialff(link = earg, earg.link = TRUE, dispersion = dispersion, multiple.responses = multiple.responses, onedpar = onedpar, parallel = parallel, zero = zero) ans@vfamily <- "quasibinomialff" ans@infos <- eval(substitute(function(...) { list(M1 = 1, Q1 = 1, multipleResponses = .multiple.responses , parameters.names = c("prob"), quasi.type = TRUE, zero = .zero ) }, list( .zero = zero, .multiple.responses = multiple.responses ))) ans } quasipoissonff <- function(link = "loge", onedpar = FALSE, parallel = FALSE, zero = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") dispersion <- 0 # Estimated; this is the only difference with poissonff() ans <- poissonff(link = earg, earg.link = TRUE, dispersion = dispersion, onedpar = onedpar, parallel = parallel, zero = zero) ans@vfamily <- "quasipoissonff" ans@infos <- eval(substitute(function(...) { list(M1 = 1, Q1 = 1, multipleResponses = TRUE, parameters.names = c("lambda"), quasi.type = TRUE, zero = .zero ) }, list( .zero = zero ))) ans } double.exppoisson <- function(lmean = "loge", ldispersion = "logit", idispersion = 0.8, zero = NULL) { if (!is.Numeric(idispersion, positive = TRUE)) stop("bad input for 'idispersion'") lmean <- as.list(substitute(lmean)) emean <- link2list(lmean) lmean <- attr(emean, "function.name") ldisp <- as.list(substitute(ldispersion)) edisp <- link2list(ldisp) ldisp <- attr(edisp, "function.name") idisp <- idispersion new("vglmff", blurb = c("Double exponential Poisson distribution\n\n", "Link: ", namesof("mean", lmean, earg = emean), ", ", namesof("dispersion", ldisp, earg = edisp), "\n", "Mean: ", "mean\n", "Variance: mean / dispersion"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(M1 = 2, parameters.names = c("mean", "dispersion"), lmean = .lmean , ldispersion = .ldispersion , zero = .zero ) }, list( .lmean = lmean, .ldispersion = ldispersion, .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.max = 1) M <- if (is.matrix(y)) ncol(y) else 1 dn2 <- if (is.matrix(y)) dimnames(y)[[2]] else NULL dn2 <- if (length(dn2)) { paste("E[", dn2, "]", sep = "") } else { "mu" } predictors.names <- c(namesof(dn2, link = .lmean, earg = .emean, short = TRUE), namesof("dispersion", link = .ldisp, earg = .edisp, short = TRUE)) init.mu <- pmax(y, 1/8) tmp2 <- rep_len( .idisp , n) if (!length(etastart)) etastart <- cbind(theta2eta(init.mu, link = .lmean , earg = .emean ), theta2eta(tmp2, link = .ldisp , earg = .edisp )) }), list( .lmean = lmean, .emean = emean, .ldisp = ldisp, .edisp = edisp, .idisp = idisp ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], link = .lmean, earg = .emean) }, list( .lmean = lmean, .emean = emean, .ldisp = ldisp, .edisp = edisp ))), last = eval(substitute(expression({ misc$link <- c(mean = .lmean , dispersion = .ldisp ) misc$earg <- list(mean = .emean , dispersion = .edisp ) misc$expected <- TRUE }), list( .lmean = lmean, .emean = emean, .ldisp = ldisp, .edisp = edisp ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { lambda <- eta2theta(eta[, 1], link = .lmean, earg = .emean ) Disper <- eta2theta(eta[, 2], link = .ldisp, earg = .edisp ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (0.5 * log(Disper) + Disper*(y-lambda) + Disper*y*log(lambda)) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lmean = lmean, .emean = emean, .ldisp = ldisp, .edisp = edisp ))), vfamily = "double.exppoisson", validparams = eval(substitute(function(eta, y, extra = NULL) { lambda <- eta2theta(eta[, 1], link = .lmean , earg = .emean ) Disper <- eta2theta(eta[, 2], link = .ldisp , earg = .edisp ) okay1 <- all(is.finite(lambda)) && all(0 < lambda) && all(is.finite(Disper)) && all(0 < Disper & Disper < 1) okay1 }, list( .lmean = lmean, .emean = emean, .ldisp = ldisp, .edisp = edisp ))), deriv = eval(substitute(expression({ lambda <- eta2theta(eta[, 1], link = .lmean , earg = .emean ) Disper <- eta2theta(eta[, 2], link = .ldisp , earg = .edisp ) dl.dlambda <- Disper * (y / lambda - 1) dl.dDisper <- y * log(lambda) + y - lambda + 0.5 / Disper dlambda.deta <- dtheta.deta(theta = lambda, link = .lmean, earg = .emean) dDisper.deta <- dtheta.deta(theta = Disper, link = .ldisp, earg = .edisp) c(w) * cbind(dl.dlambda * dlambda.deta, dl.dDisper * dDisper.deta) }), list( .lmean = lmean, .emean = emean, .ldisp = ldisp, .edisp = edisp ))), weight = eval(substitute(expression({ wz <- matrix(NA_real_, nrow = n, ncol = 2) # diagonal usethis.lambda <- pmax(lambda, .Machine$double.eps / 10000) wz[, iam(1, 1, M)] <- (Disper / usethis.lambda) * dlambda.deta^2 wz[, iam(2, 2, M)] <- (0.5 / Disper^2) * dDisper.deta^2 c(w) * wz }), list( .lmean = lmean, .emean = emean, .ldisp = ldisp, .edisp = edisp )))) } double.expbinomial <- function(lmean = "logit", ldispersion = "logit", idispersion = 0.25, zero = "dispersion") { lmean <- as.list(substitute(lmean)) emean <- link2list(lmean) lmean <- attr(emean, "function.name") ldisp <- as.list(substitute(ldispersion)) edisp <- link2list(ldisp) ldisp <- attr(edisp, "function.name") idisp <- idispersion if 
(!is.Numeric(idispersion, positive = TRUE)) stop("bad input for 'idispersion'") new("vglmff", blurb = c("Double Exponential Binomial distribution\n\n", "Link: ", namesof("mean", lmean, earg = emean), ", ", namesof("dispersion", ldisp, earg = edisp), "\n", "Mean: ", "mean\n"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = NA, parameters.names = c("mean", "dispersion"), lmean = .lmean , ldisp = .ldisp , multipleResponses = FALSE, zero = .zero ) }, list( .lmean = lmean, .zero = zero, .ldisp = ldisp ))), initialize = eval(substitute(expression({ if (!all(w == 1)) extra$orig.w <- w if (NCOL(w) != 1) stop("'weights' must be a vector or a one-column matrix") NCOL <- function (x) if (is.array(x) && length(dim(x)) > 1 || is.data.frame(x)) ncol(x) else as.integer(1) if (NCOL(y) == 1) { if (is.factor(y)) y <- (y != levels(y)[1]) nvec <- rep_len(1, n) y[w == 0] <- 0 if (!all(y == 0 | y == 1)) stop("response values 'y' must be 0 or 1") init.mu <- (0.5 + w * y) / (1 + w) no.successes <- y if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(no.successes - round(no.successes)) > 1.0e-8)) stop("Number of successes must be integer-valued") } else if (NCOL(y) == 2) { if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(y - round(y)) > 1.0e-8)) stop("Count data must be integer-valued") y <- round(y) nvec <- y[, 1] + y[, 2] y <- ifelse(nvec > 0, y[, 1] / nvec, 0) w <- w * nvec init.mu <- (0.5 + nvec * y) / (1 + nvec) } else stop("for the double.expbinomial family, response 'y' must be", " a vector of 0 and 1's\n", "or a factor (first level = fail, ", "other levels = success),\n", "or a 2-column matrix where col 1 is the no. of ", "successes and col 2 is the no. 
of failures") dn2 <- if (is.matrix(y)) dimnames(y)[[2]] else NULL dn2 <- if (length(dn2)) paste("E[", dn2, "]", sep = "") else "mu" predictors.names <- c(namesof(dn2, .lmean , earg = .emean , short = TRUE), namesof("dispersion", .ldisp , earg = .edisp , short = TRUE)) tmp2 <- rep_len( .idisp , n) if (!length(etastart)) etastart <- cbind(theta2eta(init.mu, .lmean, earg = .emean), theta2eta(tmp2, .ldisp, earg = .edisp)) }), list( .lmean = lmean, .emean = emean, .ldisp = ldisp, .edisp = edisp, .idisp = idisp ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], link = .lmean , earg = .emean ) }, list( .lmean = lmean, .emean = emean, .ldisp = ldisp, .edisp = edisp ))), last = eval(substitute(expression({ misc$link <- c("mean" = .lmean, "dispersion" = .ldisp) misc$earg <- list( mean = .emean, dispersion = .edisp) misc$expected <- TRUE }), list( .lmean = lmean, .emean = emean, .ldisp = ldisp, .edisp = edisp ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { prob <- eta2theta(eta[, 1], link = .lmean, earg = .emean) Disper <- eta2theta(eta[, 2], link = .ldisp, earg = .edisp) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { temp1 <- y * log(ifelse(y > 0, y, 1)) # y*log(y) temp2 <- (1.0-y)* log1p(ifelse(y < 1, -y, 0)) # (1-y)*log(1-y) ll.elts <- (0.5 * log(Disper) + w * (y * Disper * log(prob) + (1-y) * Disper * log1p(-prob) + temp1 * (1-Disper) + temp2 * (1 - Disper))) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lmean = lmean, .emean = emean, .ldisp = ldisp, .edisp = edisp ))), vfamily = "double.expbinomial", validparams = eval(substitute(function(eta, y, extra = NULL) { prob <- eta2theta(eta[, 1], link = .lmean , earg = .emean ) Disper <- eta2theta(eta[, 2], link = .ldisp , earg = .edisp ) okay1 <- all(is.finite(prob )) && all(0 < prob & prob < 1) && all(is.finite(Disper)) && all(0 < Disper & Disper < 1) okay1 }, list( .lmean = lmean, .emean = emean, .ldisp = ldisp, .edisp = edisp ))), deriv = eval(substitute(expression({ prob <- eta2theta(eta[, 1], link = .lmean, earg = .emean) Disper <- eta2theta(eta[, 2], link = .ldisp, earg = .edisp) temp1 <- y * log(ifelse(y > 0, y, 1)) # y*log(y) temp2 <- (1.0-y) * log1p(ifelse(y < 1, -y, 0)) # (1-y)*log(1-y) temp3 <- prob * (1.0-prob) temp3 <- pmax(temp3, .Machine$double.eps * 10000) dl.dprob <- w * Disper * (y - prob) / temp3 dl.dDisper <- 0.5 / Disper + w * (y * log(prob) + (1-y)*log1p(-prob) - temp1 - temp2) dprob.deta <- dtheta.deta(theta = prob, .lmean, earg = .emean) dDisper.deta <- dtheta.deta(theta = Disper, .ldisp, earg = .edisp) cbind(dl.dprob * dprob.deta, dl.dDisper * dDisper.deta) }), list( .lmean = lmean, .emean = emean, .ldisp = ldisp, .edisp = edisp ))), weight = eval(substitute(expression({ wz <- matrix(NA_real_, nrow = n, ncol = 2) # diagonal wz[, iam(1, 1, M)] <- w * (Disper / temp3) * dprob.deta^2 wz[, iam(2, 2, M)] <- (0.5 / Disper^2) * dDisper.deta^2 wz }), list( .lmean = lmean, .emean = emean, .ldisp = ldisp, .edisp = edisp )))) } augbinomial <- function(link = "logit", multiple.responses = FALSE, parallel = TRUE) { if (!is.logical(parallel) || length(parallel) != 1 || !parallel) warning("Argument 'parallel' should be assigned 'TRUE' only") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = if (multiple.responses) c("Augmented multivariate binomial model\n\n", "Link: ", namesof("mu.1[,j]", link, earg = earg), ", ", namesof("mu.2[,j]", 
link, earg = earg), "\n", "Variance: mu[,j]*(1-mu[,j])") else c("Augmented binomial model\n\n", "Link: ", namesof("mu.1[,j]", link, earg = earg), ", ", namesof("mu.2[,j]", link, earg = earg), "\n", "Variance: mu*(1-mu)"), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { Deviance.categorical.data.vgam(mu = cbind(mu, 1-mu), y=cbind(y, 1-y), w = w, residuals = residuals, eta = eta, extra = extra, summation = summation) }, infos = eval(substitute(function(...) { list(M1 = 2, parameters.names = c("mu.1[,j]", "mu.2[,j]"), parallel = .parallel) }, list( .parallel = parallel ))), initialize = eval(substitute(expression({ M1 = 2 if ( .multiple.responses ) { y = as.matrix(y) M = M1 * ncol(y) if (!all(y == 0 | y == 1)) stop("response must contain 0's and 1's only") dn2 = if (is.matrix(y)) dimnames(y)[[2]] else NULL dn2 = if (length(dn2)) { paste("E[", dn2, "]", sep = "") } else { paste("mu", 1:M, sep = "") } predictors.names <- c(namesof(if (M > 1) dn2 else "mu.1", .link , earg = .earg , short = TRUE), namesof(if (M > 1) dn2 else "mu.2", .link , earg = .earg , short = TRUE)) NOS = M / M1 predictors.names <- predictors.names[interleave.VGAM(M1 * NOS, M1 = M1)] if (!length(mustart) && !length(etastart)) mustart = (0.5 + w * y) / (1 + w) } else { dn2 = c("mu1.", "mu2.") M = M1 if (!all(w == 1)) extra$orig.w = w NCOL = function (x) if (is.array(x) && length(dim(x)) > 1 || is.data.frame(x)) ncol(x) else as.integer(1) if (NCOL(y) == 1) { if (is.factor(y)) y = (y != levels(y)[1]) nvec = rep_len(1, n) y[w == 0] <- 0 if (!all(y == 0 | y == 1)) stop("response values 'y' must be 0 or 1") if (!length(mustart) && !length(etastart)) mustart = (0.5 + w * y) / (1 + w) no.successes = y if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(no.successes - round(no.successes)) > 1.0e-8)) stop("Number of successes must be integer-valued") } else if (NCOL(y) == 2) { if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(y - round(y)) > 1.0e-8)) stop("Count data must be integer-valued") y = round(y) nvec = y[, 1] + y[, 2] y = ifelse(nvec > 0, y[, 1] / nvec, 0) w = w * nvec if (!length(mustart) && !length(etastart)) mustart = (0.5 + nvec * y) / (1 + nvec) } else { stop("for the binomialff family, response 'y' must be a ", "vector of 0 and 1's\n", "or a factor (first level = fail, ", "other levels = success),\n", "or a 2-column matrix where col 1 is the no. of ", "successes and col 2 is the no. 
of failures") } predictors.names <- c(namesof("mu.1", .link , earg = .earg , short = TRUE), namesof("mu.2", .link , earg = .earg , short = TRUE)) } }), list( .link = link, .multiple.responses = multiple.responses, .earg = earg))), linkinv = eval(substitute(function(eta, extra = NULL) { Mdiv2 = ncol(eta) / 2 index1 = 2*(1:Mdiv2) - 1 mu <- eta2theta(eta[, index1], link = .link , earg = .earg ) mu }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- rep_len( .link , M) names(misc$link) <- if (M > 1) dn2 else "mu" misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg misc$parallel <- .parallel misc$expected <- TRUE misc$multiple.responses <- .multiple.responses }), list( .link = link, .multiple.responses = multiple.responses, .earg = earg, .parallel = parallel ))), linkfun = eval(substitute(function(mu, extra = NULL) { usualanswer = theta2eta(mu, .link , earg = .earg ) kronecker(usualanswer, matrix(1, 1, 2)) }, list( .link = link, .earg = earg))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (residuals) { c(w) * (y / mu - (1-y) / (1-mu)) } else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * c(w) # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e6 * .Machine$double.eps smallno <- sqrt(.Machine$double.eps) if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) ll.elts <- (if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dbinom(x = ycounts, size = nvec, prob = mu, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, vfamily = c("augbinomial", "VGAMcategorical"), validparams = eval(substitute(function(eta, y, extra = NULL) { Mdiv2 = ncol(eta) / 2 index1 = 2*(1:Mdiv2) - 1 mu <- eta2theta(eta[, index1], link = .link , earg = .earg ) okay1 <- all(is.finite(mu)) && all(0 < mu & mu < 1) okay1 }, list( .link = link, .earg = earg))), deriv = eval(substitute(expression({ M1 <- 2 Mdiv2 <- M / 2 NOS <- M / M1 Konst1 <- 1 # Works with this deriv1 <- Konst1 * w * if ( .link == "logit") { y * (1 - mu) } else { stop("this is not programmed in yet") dtheta.deta(mu, link = .link , earg = .earg ) * (y / mu - 1.0) / (1.0 - mu) } deriv2 = Konst1 * w * if ( .link == "logit") { -(1 - y) * mu } else { stop("this is not programmed in yet") dtheta.deta(mu, link = .link , earg = .earg ) * (y / mu - 1.0) / (1.0 - mu) } myderiv = (cbind(deriv1, deriv2))[, interleave.VGAM(M1 * NOS, M1 = M1)] myderiv }), list( .link = link, .earg = earg))), weight = eval(substitute(expression({ tmp100 <- mu * (1.0 - mu) tmp200 <- if ( .link == "logit") { cbind(w * tmp100) } else { cbind(w * dtheta.deta(mu, link = .link , earg = .earg )^2 / tmp100) } wk.wt1 <- (Konst1^2) * tmp200 * (1 - mu) wk.wt2 <- (Konst1^2) * tmp200 * mu my.wk.wt <- cbind(wk.wt1, wk.wt2) my.wk.wt <- my.wk.wt[, interleave.VGAM(M1 * NOS, M1 = M1)] my.wk.wt }), list( .link = link, .earg = earg)))) } VGAM/R/links.q0000644000176200001440000015355613135276757012546 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
ToString <- function(x) paste(x, collapse = ", ") as.char.expression <- function(x) { answer <- x for (i in seq_along(x)) { charvec <- substring(x[i], 1:nchar(x[i]), 1:nchar(x[i])) if (!all(is.element(charvec, c(letters, LETTERS, as.character(0:9), ".", "_")))) answer[i] <- paste("(", x[i], ")", sep = "") } answer } if (FALSE) { as.char.expression("a") as.char.expression("a+b") as.char.expression(c("a", "a+b")) } TypicalVGAMfamilyFunction <- function(lsigma = "loge", isigma = NULL, link.list = list("(Default)" = "identitylink", x2 = "loge", x3 = "logoff", x4 = "multilogit", x5 = "multilogit"), earg.list = list("(Default)" = list(), x2 = list(), x3 = list(offset = -1), x4 = list(), x5 = list()), gsigma = exp(-5:5), parallel = TRUE, ishrinkage = 0.95, nointercept = NULL, imethod = 1, type.fitted = c("mean", "quantiles", "pobs0", "pstr0", "onempstr0"), percentiles = c(25, 50, 75), probs.x = c(0.15, 0.85), probs.y = c(0.25, 0.50, 0.75), multiple.responses = FALSE, earg.link = FALSE, whitespace = FALSE, bred = FALSE, lss = TRUE, oim = FALSE, nsimEIM = 100, byrow.arg = FALSE, zero = NULL) { NULL } TypicalVGAMlink <- function(theta, someParameter = 0, bvalue = NULL, # .Machine$double.xmin is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { NULL } care.exp <- function(x, thresh = -log( sqrt( .Machine$double.xmin ) ) ) { x[x > thresh] <- thresh x[x < (-thresh)] <- -thresh exp(x) } loge <- function(theta, bvalue = NULL, # .Machine$double.xmin is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("loge(", theta, ")", sep = "") else paste("log(", theta, ")", sep = "") if (tag) string <- paste("Log:", string) return(string) } if (!inverse && length(bvalue)) theta[theta <= 0.0] <- bvalue if (inverse) { switch(as.character(deriv), "0" = exp(theta), "1" = theta, "2" = theta, "3" = theta, "4" = theta, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = log(theta), "1" = 1 / theta, "2" = -1 / theta^2, "3" = 2 / theta^3, "4" = -6 / theta^4, stop("argument 'deriv' unmatched")) } } logneg <- function(theta, bvalue = NULL, # .Machine$double.xmin is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("logneg(", theta, ")", sep = "") else paste( "log(-(", theta, "))", sep = "") if (tag) string <- paste("Log negative:", string) return(string) } if (!inverse && length(bvalue)) theta[theta <= 0.0] <- bvalue if (inverse) { switch(as.character(deriv), "0" = -exp(theta), "1" = theta, "2" = theta, "3" = theta, "4" = theta, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = log(-theta), "1" = 1 / theta, "2" = -1 / theta^2, "3" = 2 / theta^3, "4" = -6 / theta^4, stop("argument 'deriv' unmatched")) } } logoff <- function(theta, offset = 0, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (!is.Numeric(offset)) stop("bad input for argument 'offset'") if (is.character(theta)) { string <- if (short) paste("logoff(", theta, ", offset = ", as.character(offset), ")", sep = "") else paste("log(", as.character(offset), "+", as.char.expression(theta), ")", sep = "") if (tag) string <- paste("Log with offset:", string) return(string) } if (inverse) { switch(as.character(deriv), "0" = exp(theta) - offset, "1" = theta + offset, "2" = theta + offset, "3" = theta + offset, "4" = theta + offset, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = log(theta + offset), "1" = 1 /
(theta + offset), "2" = -1 / (theta + offset)^2, "3" = 2 / (theta + offset)^3, "4" = -6 / (theta + offset)^4, stop("argument 'deriv' unmatched")) } } identitylink <- function(theta, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- theta if (tag) string <- paste("Identity:", string) return(string) } switch(as.character(deriv), "0" = theta, "1" = theta * 0 + 1, "2" = theta * 0, # zz Does not handle Inf and -Inf "3" = theta * 0, # zz Does not handle Inf and -Inf "4" = theta * 0, # zz Does not handle Inf and -Inf "5" = theta * 0, # zz Does not handle Inf and -Inf stop("argument 'deriv' unmatched")) } negidentity <- function(theta, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { theta <- as.char.expression(theta) string <- paste("-", theta, sep = "") if (tag) string <- paste("Negative-identity:", string) return(string) } switch(as.character(deriv), "0" = -theta, "1" = theta * 0 - 1, "2" = theta * 0, # zz Does not handle Inf and -Inf "3" = theta * 0, # zz Does not handle Inf and -Inf "4" = theta * 0, # zz Does not handle Inf and -Inf "5" = theta * 0, # zz Does not handle Inf and -Inf stop("argument 'deriv' unmatched")) } logit <- function(theta, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("logit(", theta, ")", sep = "") else paste("log(", as.char.expression(theta), "/(1-", as.char.expression(theta), "))", sep = "") if (tag) string <- paste("Logit:", string) return(string) } if (!inverse && length(bvalue)) { theta[theta <= 0.0] <- bvalue theta[theta >= 1.0] <- 1.0 - bvalue } if (inverse) { switch(as.character(deriv), "0" = plogis(theta), "1" = 1 / Recall(theta = theta, bvalue = bvalue, inverse = FALSE, deriv = deriv), "2" = theta * (1 - theta) * (1 - 2 * theta), "3" = (1 - 6 * theta * (1 - theta)) * theta * (1 - theta), stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = qlogis(theta), "1" = 1 / (theta * (1 - theta)), "2" = (2 * theta - 1) / (theta * (1 - theta))^2, "3" = 2 * (1 - 3 * theta * (1 - theta)) / (theta * (1 - theta))^3, stop("argument 'deriv' unmatched")) } } loglog <- function(theta, bvalue = NULL, # .Machine$double.eps is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("loglog(", theta, ")", sep = "") else paste("log(log(", theta, "))", sep = "") if (tag) string <- paste("Log-Log:", string) return(string) } if (!inverse && length(bvalue)) theta[theta <= 1.0] <- bvalue if (inverse) { switch(as.character(deriv), "0" = exp(exp(theta)), "1" = (theta * log(theta)), "2" = { junk <- log(theta) theta * junk * (1 + junk) }, "3" = { Junk <- theta * log(theta) Junk * ((1 + log(theta))^2 + Junk / theta) }, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = log(log(theta)), "1" = 1 / (theta * log(theta)), "2" = { junk <- log(theta) -(1 + junk) / (theta * junk)^2 }, "3" = { Junk <- theta * log(theta) (2 * (1 + log(theta))^2 / Junk - 1 / theta) / Junk^2 }, stop("argument 'deriv' unmatched")) } } cloglog <- function(theta, bvalue = NULL, # .Machine$double.eps is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("cloglog(", theta, ")", sep = "") else paste("log(-log(1-", as.char.expression(theta), "))", sep = "") if (tag) string <- paste("Complementary log-log:", string) return(string) } if (!inverse && length(bvalue)) { theta[theta 
<= 0.0] <- bvalue theta[theta >= 1.0] <- 1.0 - bvalue } if (inverse) { switch(as.character(deriv), "0" = { -expm1(-exp(theta)) }, "1" = -((1 - theta) * log1p(-theta)), "2" = { junk <- log1p(-theta) -(1 - theta) * (1 + junk) * junk }, "3" = { junk <- log1p(-theta) Junk <- (1 - theta) * junk -Junk * (Junk / (1 - theta) + (1 + junk)^2) }, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = log(-log1p(-theta)), "1" = -1 / ((1 - theta) * log1p(-theta)), "2" = { junk <- log1p(-theta) -(1 + junk) / ((1 - theta) * junk)^2 }, "3" = { junk <- log1p(-theta) Junk <- (1 - theta) * junk (1 / (1 - theta) - 2 * (1 + junk)^2 / Junk) / Junk^2 }, stop("argument 'deriv' unmatched")) } } probit <- function(theta, bvalue = NULL, # .Machine$double.eps is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("probit(", theta, ")", sep = "") else paste("qnorm(", theta, ")", sep = "") if (tag) string <- paste("Probit:", string) return(string) } if (!inverse && length(bvalue)) { theta[theta <= 0.0] <- bvalue theta[theta >= 1.0] <- 1 - bvalue } if (inverse) { switch(as.character(deriv), "0" = { ans <- pnorm(theta) if (is.matrix(theta)) dim(ans) <- dim(theta) ans }, "1" = { # 1st deriv 1 / Recall(theta = theta, bvalue = bvalue, inverse = FALSE, deriv = deriv) }, "2" = { # 2nd deriv Junk <- qnorm(theta) ans <- -Junk * dnorm(Junk) if (is.vector(theta)) ans else if (is.matrix(theta)) { dim(ans) <- dim(theta) ans } else { warning("can only handle vectors and matrices;", " converting to vector") ans } }, "3" = { Junk <- qnorm(theta) junk <- dnorm(Junk) junk * (Junk^2 - 1) }, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = { ans <- qnorm(theta) if (is.matrix(theta)) dim(ans) <- dim(theta) ans }, "1" = { # 1st deriv if (is.matrix(theta)) { ans <- 1 / dnorm(qnorm(theta)) dim(ans) <- dim(theta) ans } else { 1 / dnorm(qnorm(as.vector(theta))) } }, "2" = { # 2nd deriv Junk <- qnorm(theta) ans <- Junk / (dnorm(Junk))^2 if (is.vector(theta)) ans else if (is.matrix(theta)) { dim(ans) <- dim(theta) ans } else { warning("can only handle vectors and matrices;", " converting to vector") ans } }, "3" = { Junk <- qnorm(theta) junk <- dnorm(Junk) (1 + 2 * Junk^2) / junk^3 }, stop("argument 'deriv' unmatched")) } } explink <- function(theta, bvalue = NULL, # .Machine$double.eps is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("explink(", theta, ")", sep = "") else paste("exp(", theta, ")", sep = "") if (tag) string <- paste("Exp:", string) return(string) } if (!inverse && length(bvalue)) theta[theta <= 0.0] <- bvalue if (inverse) { switch(as.character(deriv), "0" = log(theta), "1" = exp( -theta), "2" = - exp(-2 * theta), # 20170610 Fixes up a bug "3" = 2 * exp(-3 * theta), "4" = -6 * exp(-4 * theta), stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = exp(theta), "1" = exp(theta), "2" = exp(theta), "3" = exp(theta), "4" = exp(theta), stop("argument 'deriv' unmatched")) } } reciprocal <- function(theta, bvalue = NULL, # .Machine$double.eps is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { theta <- as.char.expression(theta) string <- paste("1/", theta, sep = "") if (tag) string <- paste("Reciprocal:", string) return(string) } if (!inverse && length(bvalue)) theta[theta == 0.0] <- bvalue if (inverse) { switch(as.character(deriv), "0" = 1 / theta, "1" = - 
theta^2, "2" = 2 * theta^3, "3" = -6 * theta^4, "4" = 24 * theta^5, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = 1 / theta, "1" = -1 / theta^2, "2" = 2 / theta^3, "3" = -6 / theta^4, "4" = 24 / theta^5, stop("argument 'deriv' unmatched")) } } negloge <- function(theta, bvalue = NULL, # .Machine$double.eps is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("negloge(", theta, ")", sep = "") else paste("-log(", theta, ")", sep = "") if (tag) string <- paste("Negative log:", string) return(string) } if (!inverse && length(bvalue)) theta[theta <= 0.0] <- bvalue if (inverse) { switch(as.character(deriv), "0" = exp(-theta), "1" = -theta, "2" = theta, "3" = -theta, "4" = theta, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = -log(theta), "1" = -1/theta, "2" = 1/theta^2, "3" = -2/theta^3, "4" = 6/theta^4, stop("argument 'deriv' unmatched")) } } negreciprocal <- function(theta, bvalue = NULL, # .Machine$double.eps is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { theta <- as.char.expression(theta) string <- paste("-1/", theta, sep = "") if (tag) string <- paste("Negative reciprocal:", string) return(string) } if (!inverse && length(bvalue)) theta[theta == 0.0] <- bvalue if (inverse) { switch(as.character(deriv), "0" = -1 / theta, "1" = theta^2, "2" = 2 * theta^3, "3" = 6 * theta^4, "4" = 24 * theta^5, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = -1 / theta, "1" = 1 / theta^2, "2" = -2 / theta^3, "3" = 6 / theta^4, "4" = -24 / theta^5, stop("argument 'deriv' unmatched")) } } igcanlink <- function(theta, bvalue = NULL, # .Machine$double.eps is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { theta <- as.char.expression(theta) string <- paste("-1/", theta, sep = "") if (tag) string <- paste("Negative inverse:", string) return(string) } if (inverse) { switch(as.character(deriv), "0" = 1 / sqrt(-2*theta), "1" = theta^3, "2" = 3 * theta^5, "3" = 15 * theta^7, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = -1 / (2 * theta^2), "1" = 1 / theta^3, "2" = -3 / theta^4, "3" = 12 / theta^5, "4" = -60 / theta^6, stop("argument 'deriv' unmatched")) } } rhobit <- function(theta, bminvalue = NULL, bmaxvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("rhobit(", theta, ")", sep = "") else paste("log((1+", as.char.expression(theta), ")/(1-", as.char.expression(theta), "))", sep = "") if (tag) string <- paste("Rhobit:", string) return(string) } if (!inverse) { if (length(bminvalue)) theta[theta <= -1.0] <- bminvalue if (length(bmaxvalue)) theta[theta >= 1.0] <- bmaxvalue } if (inverse) { switch(as.character(deriv), "0" = { junk <- exp(theta) expm1(theta) / (junk + 1.0) }, "1" = (1 - theta^2) / 2, "2" = (-theta / 2) * (1 - theta^2), "3" = (3 * theta^2 - 1) * (1 - theta^2) / 4, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = { log1p(theta) - log1p(-theta) }, "1" = 2 / (1 - theta^2), "2" = (4*theta) / (1 - theta^2)^2, "3" = 4 * (1 + 3 * theta^2) / (1 - theta^2)^3, stop("argument 'deriv' unmatched")) } } fisherz <- function(theta, bminvalue = NULL, bmaxvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("fisherz(", theta, ")", sep = "") else 
paste("(1/2) * log((1+", as.char.expression(theta), ")/(1-", as.char.expression(theta), "))", sep = "") if (tag) string <- paste("Fisher's Z transformation:", string) return(string) } if (!inverse) { if (length(bminvalue)) theta[theta <= -1.0] <- bminvalue if (length(bmaxvalue)) theta[theta >= 1.0] <- bmaxvalue } if (inverse) { switch(as.character(deriv), "0" = tanh(theta), "1" = 1 - theta^2, "2" = 2 * (-theta) * (1 - theta^2), "3" = (3 * theta^2 - 1) * (1 - theta^2) * 2, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = atanh(theta), "1" = 1 / (1 - theta^2), "2" = (2*theta) / (1 - theta^2)^2, "3" = 2 * (1 + 3 * theta^2) / (1 - theta^2)^3, stop("argument 'deriv' unmatched")) } } multilogit <- function(theta, refLevel = "(Last)", M = NULL, # stop("argument 'M' not specified"), whitespace = FALSE, bvalue = NULL, inverse = FALSE, deriv = 0, all.derivs = FALSE, short = TRUE, tag = FALSE) { fillerChar <- ifelse(whitespace, " ", "") if (length(refLevel) != 1) stop("the length of argument 'refLevel' must be one") if (is.character(refLevel)) { if (refLevel != "(Last)") stop('if a character, refLevel must be "(Last)"') refLevel <- -1 } else if (is.factor(refLevel)) { if (is.ordered(refLevel)) warning("argument 'refLevel' is from an ordered factor") refLevel <- as.character(refLevel) == levels(refLevel) refLevel <- (seq_along(refLevel))[refLevel] if (!is.Numeric(refLevel, length.arg = 1, integer.valued = TRUE, positive = TRUE)) stop("could not coerce 'refLevel' into a single positive integer") } else if (!is.Numeric(refLevel, length.arg = 1, integer.valued = TRUE)) stop("'refLevel' must be a single (positive?) integer") if (is.character(theta)) { is.M <- is.finite(M) && is.numeric(M) string <- if (short) { paste("multilogit(", theta, ")", sep = "") } else { theta <- as.char.expression(theta) if (refLevel < 0) { ifelse(whitespace, paste("log(", theta, "[,j] / ", theta, "[,", ifelse(is.M, M+1, "M+1"), "]), j = 1:", ifelse(is.M, M, "M"), sep = ""), paste("log(", theta, "[,j]/", theta, "[,", ifelse(is.M, M+1, "M+1"), "]), j=1:", ifelse(is.M, M, "M"), sep = "")) } else { if (refLevel == 1) { paste("log(", theta, "[,", "j]", fillerChar, "/", fillerChar, "", theta, "[,", refLevel, "]), j", fillerChar, "=", fillerChar, "2:", ifelse(is.M, (M+1), "(M+1)"), sep = "") } else { paste("log(", theta, "[,", "j]", fillerChar, "/", "", theta, "[,", refLevel, "]), j", fillerChar, "=", fillerChar, "c(1:", refLevel-1, ",", fillerChar, refLevel+1, ":", ifelse(is.M, (M+1), "(M+1)"), ")", sep = "") } } } if (tag) string <- paste("Multinomial logit link:", string) return(string) } M.orig <- M M <- NCOL(theta) - !(inverse && deriv == 0) if (M < 1) ifelse(inverse, stop("argument 'eta' should have at least one column"), stop("argument 'theta' should have at least two columns")) if (is.numeric(M.orig) && M != M.orig) { warning("argument 'M' does not seem right but using it") M <- M.orig } if (!inverse && length(bvalue)) theta[theta <= 0.0] <- bvalue if (!inverse && length(bvalue)) theta[theta >= 1.0] <- 1 - bvalue foo <- function(eta, refLevel = -1, M) { phat <- if ((refLevel < 0) || (refLevel == M+1)) { cbind(care.exp(eta), 1.0) } else if ( refLevel == 1) { cbind(1.0, care.exp(eta)) } else { use.refLevel <- if ( refLevel < 0) M+1 else refLevel etamat <- cbind(eta[, 1:( refLevel - 1), drop = FALSE], 0.0, eta[, ( refLevel ):M, drop = FALSE]) care.exp(etamat) } ans <- phat / rowSums(phat) colnames(ans) <- NULL ans } # foo if (inverse) { use.refLevel <- if (refLevel < 0) ncol(theta) else refLevel 
switch(as.character(deriv), "0" = { foo(theta, refLevel, M = M) # log(theta[, -jay] / theta[, jay]) }, "1" = if (all.derivs) { index <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) theta <- theta[, -use.refLevel, drop = FALSE] wz <- -theta[, index$row, drop = FALSE] * theta[, index$col, drop = FALSE] wz[, 1:M] <- wz[, 1:M] + theta wz } else { theta[, -use.refLevel, drop = FALSE] * theta[, use.refLevel] / ( theta[, -use.refLevel, drop = FALSE] + theta[, use.refLevel]) }, "2" = (theta*(1-theta)*(1-2*theta))[, -use.refLevel, drop = FALSE], "3" = { temp1 <- theta * (1 - theta) (temp1 * (1 - 6 * temp1))[, -use.refLevel, drop = FALSE] }, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = { ans <- if (refLevel < 0) { log(theta[, -ncol(theta)] / theta[, ncol(theta)]) } else { use.refLevel <- if (refLevel < 0) ncol(theta) else refLevel log(theta[, -( use.refLevel )] / theta[, use.refLevel ]) } colnames(ans) <- NULL ans }, "1" = care.exp(-log(theta) - log1p(-theta)), "2" = (2 * theta - 1) / care.exp(2*log(theta) + 2*log1p(-theta)), "3" = { temp1 <- care.exp(log(theta) + log1p(-theta)) 2 * (1 - 3 * temp1) / temp1^3 }, stop("argument 'deriv' unmatched")) } } # end of multilogit foldsqrt <- function(theta, # = NA , = NULL, min = 0, max = 1, mux = sqrt(2), inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (!is.Numeric(min, length.arg = 1)) stop("bad input for 'min' component") if (!is.Numeric(max, length.arg = 1)) stop("bad input for 'max' component") if (!is.Numeric(mux, length.arg = 1, positive = TRUE)) stop("bad input for 'mux' component") if (min >= max) stop("'min' >= 'max' is not allowed") if (is.character(theta)) { string <- if (short) paste("foldsqrt(", theta, ")", sep = "") else { theta <- as.char.expression(theta) if (abs(mux-sqrt(2)) < 1.0e-10) paste("sqrt(2*", theta, ") - sqrt(2*(1-", theta, "))", sep = "") else paste(as.character(mux), " * (sqrt(", theta, "-", min, ") - sqrt(", max, "-", theta, "))", sep = "") } if (tag) string <- paste("Folded square root:", string) return(string) } if (inverse) { switch(as.character(deriv), "0" = { mid <- (min + max) / 2 boundary <- mux * sqrt(max - min) temp <- pmax(0, (theta/mux)^2 * (2*(max-min) - (theta/mux)^2)) ans <- theta if (any(ind5 <- theta < 0)) ans[ind5] <- mid - 0.5 * sqrt(temp[ind5]) if (any(ind5 <- theta >= 0)) ans[ind5] <- mid + 0.5 * sqrt(temp[ind5]) ans[theta < -boundary] <- NA ans[theta > boundary] <- NA ans }, "1" = (2 / mux ) / (1/sqrt(theta-min) + 1/sqrt(max-theta)), "2" = stop("use the chain rule formula to obtain this"), "3" = { #3rd deriv stop("3rd deriv not yet implemented") }, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = mux * (sqrt(theta-min) - sqrt(max-theta)), "1" = (1/sqrt(theta-min) + 1/sqrt(max-theta)) * mux / 2, "2" = -(mux / 4) * ((theta-min)^(-3/2) - (max-theta)^(-3/2)), "3" = { #3rd deriv stop("3rd deriv not yet implemented") }, stop("argument 'deriv' unmatched")) } } powerlink <- function(theta, power = 1, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { exponent <- power if (exponent == 0) stop("use the 'loge' link") if (is.character(theta)) { string <- if (short) paste("powerlink(", theta, ", power = ", as.character(exponent), ")", sep = "") else paste(as.char.expression(theta), "^(", as.character(exponent), ")", sep = "") if (tag) string <- paste("Power link:", string) return(string) } if (inverse) { switch(as.character(deriv), "0" = theta^(1/exponent), "1" = (theta^(1-exponent)) / exponent, "2" = ((1-exponent) / exponent^2) * (theta^(1 
- 2*exponent)), "3" = { #3rd deriv stop("3rd deriv not yet implemented") }, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = theta^exponent, "1" = exponent / (theta^(1-exponent)), "2" = exponent * (exponent-1) * (theta^(exponent-2)), "3" = { #3rd deriv stop("3rd deriv not yet implemented") }, stop("argument 'deriv' unmatched")) } } extlogit <- function(theta, min = 0, max = 1, bminvalue = NULL, bmaxvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { A <- min B <- max if (!inverse && length(bminvalue)) theta[theta <= A] <- bminvalue if (!inverse && length(bmaxvalue)) theta[theta >= B] <- bmaxvalue if (is.character(theta)) { string <- if (short) { if (A != 0 || B != 1) paste("extlogit(", theta, ", min = ", A, ", max = ", B, ")", sep = "") else paste("extlogit(", theta, ")", sep = "") } else { paste("log((", as.char.expression(theta), "-min)/(max-", as.char.expression(theta), "))", sep = "") } if (tag) string <- paste("Extended logit:", string) return(string) } if (inverse) { switch(as.character(deriv), "0" = { junk <- care.exp(theta) (A + B * junk) / (1.0 + junk) }, "1" = ((theta - A) * (B - theta)) / (B-A), "2" = (A + B - 2 * theta) * (theta - A) * (B - theta) / (B-A)^2, "3" = { #3rd deriv stop("3rd deriv not yet implemented") }, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = { log((theta - A)/(B - theta))}, "1" = (B-A) / ((theta - A) * (B - theta)), "2" = ((2 * theta - A - B) * (B-A)) / ((theta - A) * (B - theta))^2, "3" = { #3rd deriv stop("3rd deriv not yet implemented") }, stop("argument 'deriv' unmatched")) } } logc <- function(theta, bvalue = NULL, # .Machine$double.xmin is an alternative inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("logc(", theta, ")", sep = "") else { theta <- as.char.expression(theta) paste("log(1-", theta, ")", sep = "") } if (tag) string <- paste("Log Complementary:", string) return(string) } if (!inverse && length(bvalue)) { theta[theta >= 1.0] <- bvalue; } if (inverse) { switch(as.character(deriv), "0" = -expm1(theta), "1" = theta - 1, "2" = theta - 1, "3" = theta - 1, "4" = theta - 1, "5" = theta - 1, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = log1p(-theta), "1" = -1 / (1 - theta), "2" = -1 / (1 - theta)^2, "3" = -2 / (1 - theta)^3, "4" = -6 / (1 - theta)^4, "5" = -24 / (1 - theta)^5, stop("argument 'deriv' unmatched")) } } cauchit <- function(theta, bvalue = .Machine$double.eps, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("cauchit(", theta, ")", sep = "") else { theta <- as.char.expression(theta) paste("tan(pi*(", theta, "-0.5))", sep = "") } if (tag) string <- paste("Cauchit:", string) return(string) } if (!inverse && length(bvalue)) { theta[theta <= 0.0] <- bvalue theta[theta >= 1.0] <- 1.0 - bvalue } if (inverse) { switch(as.character(deriv), "0" = 0.5 + atan(theta) / pi, "1" = (cos(pi * (theta-0.5)))^2 / pi, "2" = { temp2 <- cos(pi * (theta-0.5)) temp4 <- sin(pi * (theta-0.5)) -2 * temp4 * temp2^3 / pi }, "3" = { temp2 <- cos(pi * (theta-0.5)) temp5 <- tan(pi * (theta-0.5)) 2 * temp2^6 * (3 * temp5^2 - 1) / pi }, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = tan(pi * (theta-0.5)), "1" = pi / (cos(pi * (theta-0.5)))^2, "2" = { temp2 <- cos(pi * (theta-0.5)) temp3 <- tan(pi * (theta-0.5)) (temp3 * 2 * pi^2) / temp2^2 }, "3" = { temp2 <- cos(pi * (theta-0.5)) temp3 <- tan(pi * 
(theta-0.5)) 2 * pi^3 * (1 + 3 * temp3^2) / temp2^2 }, stop("argument 'deriv' unmatched")) } } golf <- function(theta, lambda = 1, cutpoint = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (!is.Numeric(lambda, positive = TRUE)) stop("could not determine lambda or lambda has negative values") if (is.Numeric(cutpoint)) if (any(cutpoint < 0) || !is.Numeric(cutpoint, integer.valued = TRUE)) warning("argument 'cutpoint' should contain ", "non-negative integer values") if (is.character(theta)) { string <- if (short) { lenl <- length(lambda) > 1 lenc <- length(cutpoint) > 1 paste("golf(", theta, ", lambda = ", if (lenl) "c(" else "", ToString(lambda), if (lenl) ")" else "", if (is.Numeric(cutpoint)) paste(", cutpoint = ", if (lenc) "c(" else "", ToString(cutpoint), if (lenc) ")" else "", sep = "") else "", ")", sep = "") } else { theta <- as.char.expression(theta) if (is.Numeric(cutpoint)) { paste("-3*log(1-qnorm(", theta, ")/(3*sqrt(lambda)))", " + log(cutpoint)", sep = "") } else { paste("-3*log(1-qnorm(", theta, ")/(3*sqrt(lambda)))", sep = "") } } if (tag) string <- paste("Gamma-ordinal link function:", string) return(string) } thmat <- cbind(theta) lambda <- rep_len(lambda, ncol(thmat)) # Allow recycling for lambda if (is.Numeric(cutpoint)) cutpoint <- rep_len(cutpoint, ncol(thmat)) if (ncol(thmat) > 1) { answer <- thmat for (ii in 1:ncol(thmat)) answer[, ii] <- Recall(theta = thmat[, ii], lambda = lambda[ii], cutpoint = if (is.Numeric(cutpoint)) cutpoint[ii] else NULL, inverse = inverse, deriv = deriv) return(answer) } answer <- if (inverse) { switch(as.character(deriv), "0" = { if (is.Numeric(cutpoint)) { pnorm((1-care.exp(-(theta-log(cutpoint))/3)) * 3 * sqrt(lambda)) } else { pnorm((1-care.exp(-theta/3)) * 3 * sqrt(lambda)) } }, "1" = 1 / Recall(theta = theta, lambda = lambda, cutpoint = cutpoint, inverse = FALSE, deriv = deriv), "2" = stop("cannot currently handle deriv = 2"), "3" = { #3rd deriv stop("3rd deriv not yet implemented") }, stop("argument 'deriv' unmatched")) } else { smallno <- 1 * .Machine$double.eps Theta <- theta Theta <- pmin(Theta, 1 - smallno) # Since theta==1 is a possibility Theta <- pmax(Theta, smallno) # Since theta == 0 is a possibility Ql <- qnorm(Theta) switch(as.character(deriv), "0" = { temp <- Ql / (3*sqrt(lambda)) temp <- pmin(temp, 1.0 - smallno) # 100 / .Machine$double.eps origans <- -3*log1p(-temp) + if (is.Numeric(cutpoint)) log(cutpoint) else 0 origans }, "1" = { origans <- (1 - Ql / (3*sqrt(lambda))) * sqrt(lambda) * dnorm(Ql) 1 / origans }, "2" = { stop("cannot currently handle deriv = 2") }, "3" = { #3rd deriv stop("3rd deriv not yet implemented") }, stop("argument 'deriv' unmatched")) } if (!is.Numeric(answer)) warning("the answer contains some NAs") answer } polf <- function(theta, # = 1, cutpoint = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (!is.Numeric(cutpoint)) stop("could not determine the cutpoint") if (any(cutpoint < 0) || !is.Numeric(cutpoint, integer.valued = TRUE)) warning("argument 'cutpoint' should", " contain non-negative integer values") if (is.character(theta)) { string <- if (short) { lenc <- length(cutpoint) > 1 paste("polf(", theta, ", cutpoint = ", if (lenc) "c(" else "", ToString(cutpoint), if (lenc) ")" else "", ")", sep = "") } else { theta <- as.char.expression(theta) paste("2*log(0.5*qnorm(", theta, ") + sqrt(cutpoint+7/8))", sep = "") } if (tag) string <- paste("Poisson-ordinal link function:", string) return(string) } thmat <- cbind(theta) if (ncol(thmat) > 1) { answer <-
thmat cutpoint <- rep_len(cutpoint, ncol(thmat)) for (ii in 1:ncol(thmat)) answer[, ii] <- Recall(theta = thmat[, ii], cutpoint = cutpoint[ii], inverse = inverse, deriv = deriv) return(answer) } answer <- if (inverse) { switch(as.character(deriv), "0" = { # deriv == 0 origans <- if (any(cp.index <- cutpoint == 0)) { tmp <- theta tmp[cp.index] <- cloglog(theta = theta[cp.index], inverse = inverse, deriv = deriv) tmp[!cp.index] <- pnorm(2 * exp(theta[!cp.index]/2) - 2 * sqrt(cutpoint[!cp.index] + 7/8)) tmp } else { pnorm(2 * exp(theta/2) - 2 * sqrt(cutpoint + 7/8)) } origans }, "1" = 1 / Recall(theta = theta, cutpoint = cutpoint, inverse = FALSE, deriv = deriv), "2" = stop("cannot currently handle deriv = 2"), "3" = { #3rd deriv stop("3rd deriv not yet implemented") }, stop("argument 'deriv' unmatched")) } else { if (any(cp.index <- cutpoint == 0)) { cloglog(theta = theta, inverse = inverse, deriv = deriv) } else { smallno <- 1 * .Machine$double.eps SMALLNO <- 1 * .Machine$double.xmin Theta <- theta Theta <- pmin(Theta, 1 - smallno) # Since theta == 1 is possible Theta <- pmax(Theta, smallno) # Since theta == 0 is a possibility Ql <- qnorm(Theta) switch(as.character(deriv), "0" = { temp <- 0.5 * Ql + sqrt(cutpoint + 7/8) temp <- pmax(temp, SMALLNO) origans <- 2 * log(temp) origans }, "1" = { origans <- (Ql/2 + sqrt(cutpoint + 7/8)) * dnorm(Ql) 1 / origans }, "2" = { stop("cannot currently handle deriv = 2") }, "3" = { #3rd deriv stop("3rd deriv not yet implemented") }, stop("argument 'deriv' unmatched")) } } if (!is.Numeric(answer)) warning("the answer contains some NAs") answer } nbolf <- function(theta, cutpoint = NULL, k = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { kay <- k if (!is.Numeric(kay, positive = TRUE)) stop("could not determine 'k' or it is not positive-valued") if (!is.Numeric(cutpoint)) stop("could not determine the cutpoint") if (any(cutpoint < 0) || !is.Numeric(cutpoint, integer.valued = TRUE)) warning("argument 'cutpoint' should", " contain non-negative integer values") if (is.character(theta)) { string <- if (short) { lenc <- length(cutpoint) > 1 lenk <- length(kay) > 1 paste("nbolf(", theta, ", cutpoint = ", if (lenc) "c(" else "", ToString(cutpoint), if (lenc) ")" else "", ", k = ", if (lenk) "c(" else "", ToString(kay), if (lenk) ")" else "", ")", sep = "") } else { theta <- as.char.expression(theta) paste("2*log(sqrt(k) * sinh(qnorm(", theta, ")/(2*sqrt(k)) + ", "asinh(sqrt(cutpoint/k))))", sep = "") } if (tag) string <- paste("Negative binomial-ordinal link function:", string) return(string) } thmat <- cbind(theta) kay <- rep_len(kay, ncol(thmat)) # Allow recycling for kay cutpoint <- rep_len(cutpoint, ncol(thmat)) # Allow recycling for cutpoint if (ncol(thmat) > 1) { answer <- thmat for (ii in 1:ncol(thmat)) answer[, ii] <- Recall(theta = thmat[, ii], cutpoint = cutpoint[ii], k = kay[ii], inverse = inverse, deriv = deriv) return(answer) } answer <- if (inverse) { switch(as.character(deriv), "0" = { if (cutpoint == 0) { 1.0 - (kay / (kay + care.exp(theta)))^kay } else { pnorm((asinh(exp(theta/2)/sqrt(kay)) - asinh(sqrt(cutpoint/kay))) * 2 * sqrt(kay)) } }, "1" = { 1 / Recall(theta = theta, cutpoint = cutpoint, k = kay, inverse = FALSE, deriv = deriv) }, "2" = { stop("cannot currently handle deriv = 2") }, "3" = { #3rd deriv stop("3rd deriv not yet implemented") }, stop("argument 'deriv' unmatched")) } else { smallno <- 1 * .Machine$double.eps SMALLNO <- 1 * .Machine$double.xmin Theta <- theta Theta <- pmin(Theta, 1 - smallno) # Since theta == 1
is possible Theta <- pmax(Theta, smallno) # Since theta == 0 is a possibility if (cutpoint == 0) { switch(as.character(deriv), "0" = { temp <- (1 - Theta)^(-1/kay) - 1 temp <- pmax(temp, SMALLNO) origans <- log(kay) + log(temp) origans }, "1" = { origans <- (kay / (1 - Theta)^(1/kay) - kay) * (1 - Theta)^(kay+1/kay) 1 / origans }, "2" = { stop("cannot currently handle deriv = 2") }, "3" = { #3rd deriv stop("3rd deriv not yet implemented") }, stop("argument 'deriv' unmatched")) } else { Ql <- qnorm(Theta) switch(as.character(deriv), "0" = { temp <- sqrt(kay) * sinh(Ql/(2*sqrt(kay)) + asinh(sqrt(cutpoint/kay))) temp <- pmax(temp, SMALLNO) origans <- 2 * log(temp) origans }, "1" = { arg1 <- (Ql/(2*sqrt(kay)) + asinh(sqrt(cutpoint/kay))) origans <- sqrt(kay) * tanh(arg1) * dnorm(Ql) 1 / origans }, "2" = { stop("cannot currently handle deriv = 2") }, "3" = { #3rd deriv stop("3rd deriv not yet implemented") }, stop("argument 'deriv' unmatched")) } } if (!is.Numeric(answer)) warning("the answer contains some NAs") answer } nbolf2 <- function(theta, cutpoint = NULL, k = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { warning("20150711; this function has not been updated") kay <- k if (!is.Numeric(kay, positive = TRUE)) stop("could not determine argument 'k' or ", "it is not positive-valued") if (!is.Numeric(cutpoint)) stop("could not determine the cutpoint") if (any(cutpoint < 0) || !is.Numeric(cutpoint, integer.valued = TRUE)) warning("argument 'cutpoint' should ", "contain non-negative integer values") if (is.character(theta)) { string <- if (short) { lenc <- length(cutpoint) > 1 lenk <- length(kay) > 1 paste("nbolf2(", theta, ", earg = list(cutpoint = ", if (lenc) "c(" else "", ToString(cutpoint), if (lenc) ")" else "", ", k = ", if (lenk) "c(" else "", ToString(kay), if (lenk) ")" else "", "))", sep = "") } else { theta <- as.char.expression(theta) paste("3*log()", sep = "") } if (tag) string <- paste("Negative binomial-ordinal link function 2:", string) return(string) } thmat <- cbind(theta) kay <- rep_len(kay, ncol(thmat)) # Allow recycling for kay if (ncol(thmat) > 1) { answer <- thmat for (ii in 1:ncol(thmat)) answer[, ii] <- Recall(theta = thmat[, ii], cutpoint = cutpoint[ii], k = kay[ii], inverse = inverse, deriv = deriv) return(answer) } answer <- if (inverse) { if (deriv > 0) { 1 / Recall(theta = theta, cutpoint = cutpoint, k = kay, inverse = FALSE, deriv = deriv) } else { if (cutpoint == 0) { 1.0 - (kay / (kay + care.exp(theta)))^kay } else { a1 <- -(9*cutpoint+8) / (cutpoint+1) a2 <- (9*kay-1) / (kay * (cutpoint+1)^(1/3)) a3 <- 9 / (kay * (cutpoint+1)^(2/3)) a4 <- 9 / (cutpoint+1) B <- exp(theta/3) mymat <- rbind(a1^2*a2^2 + 2*a1*a2^3*B + B^2*a2^4, 0, -2*a1*a2*a3*B - 2*a2^2*a3*B^2 - a1^2*a3 - a2^2*a4, 0, B^2 * a3^2 + a3 * a4) ans <- Re(t(apply(mymat, 2, polyroot))) theta2 <- invfun <- pnorm(-ans) # pnorm(-x) = 1-pnorm(x) for (ii in 1:4) { theta2[, ii] <- Recall(theta = theta2[, ii], cutpoint = cutpoint, k = kay, inverse = FALSE, deriv = deriv) } rankmat <- t(apply(abs(theta2 - theta), 1, rank)) for (ii in 2:4) { if (any(index4 <- (rankmat[, ii] == 1))) { invfun[index4, 1] <- invfun[index4, ii] } } invfun[, 1] } } } else { smallno <- 1 * .Machine$double.eps SMALLNO <- 1 * .Machine$double.xmin Theta <- theta Theta <- pmin(Theta, 1 - smallno) # Since theta == 1 is possible Theta <- pmax(Theta, smallno) # Since theta == 0 is possible if (cutpoint == 0) { switch(as.character(deriv), "0" = { temp <- (1 - Theta)^(-1/kay) - 1 temp <- pmax(temp, SMALLNO) log(kay) + log(temp)}, "1" = (kay / (1 -
Theta)^(1/kay) - kay) * (1 - Theta)^(kay+1/kay), "2" = { stop("cannot currently handle deriv = 2") }, "3" = { #3rd deriv stop("3rd deriv not yet implemented") }, stop("argument 'deriv' unmatched")) } else { Ql <- qnorm(Theta) a1 <- -(9*cutpoint+8) / (cutpoint+1) a2 <- (9*kay-1) / (kay * (cutpoint+1)^(1/3)) a3 <- 9 / (kay * (cutpoint+1)^(2/3)) a4 <- 9 / (cutpoint+1) discrim <- a1^2 * a3 + a2^2 * a4 - Ql^2 * a3 * a4 denomin <- Ql^2 * a3 - a2^2 numerat <- (a1*a2 - Ql * sqrt(discrim)) argmax1 <- numerat / denomin switch(as.character(deriv), "0" = { argmax2 <- (a1*a2 + Ql * sqrt(discrim)) / denomin temp <- ifelse(argmax1 > 0, argmax1, argmax2) temp <- pmax(temp, SMALLNO) 3 * log(temp)}, "1" = { BB <- (sqrt(discrim) - Ql^2 * a3 * a4 / sqrt(discrim)) / dnorm(Ql) CC <- 2 * Ql * a3 / dnorm(Ql) dA.dtheta <- (-denomin * BB - numerat * CC) / denomin^2 argmax1 / (3 * dA.dtheta) }, "2" = { stop("cannot currently handle deriv = 2") }, "3" = { #3rd deriv stop("3rd deriv not yet implemented") }, stop("argument 'deriv' unmatched")) } } if (!is.Numeric(answer)) warning("the answer contains some NAs") answer } Cut <- function(y, breaks = c(-Inf, quantile(c(y), prob = (1:4)/4))) { y <- as.matrix(y) temp <- cut(y, breaks = breaks, labels = FALSE) temp <- c(temp) # flatten into an integer vector if (anyNA(temp)) warning("there are NAs") answer <- if (ncol(y) > 1) matrix(temp, nrow(y), ncol(y)) else temp if (ncol(y) > 1) { ynames <- dimnames(y)[[2]] if (!length(ynames)) ynames <- paste("Y", 1:ncol(y), sep = "") xnames <- dimnames(y)[[1]] if (!length(xnames)) xnames <- as.character(1:nrow(y)) dimnames(answer) <- list(xnames, ynames) } attr(answer, "breaks") <- breaks answer } checkCut <- function(y) { if (!is.Numeric(y, positive = TRUE, integer.valued = TRUE)) stop("argument 'y' must contain positive integers only") uy <- unique(y) L <- max(uy) oklevels <- 1:L if (L == 1) stop("only one unique value") for (ii in oklevels) { if (all(ii != uy)) stop("there is no ", ii, " value") } TRUE } nbcanlink <- function(theta, size = NULL, wrt.param = NULL, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("nbcanlink(", theta, ")", sep = "") else { theta <- as.char.expression(theta) paste("log(", theta, " / (", theta, " + size))", sep = "") } if (tag) string <- paste("Nbcanlink:", string) return(string) } kmatrix <- size theta <- cbind(theta) kmatrix <- cbind(kmatrix) if (ncol(kmatrix) != ncol(theta)) stop("arguments 'theta' and 'size' do not have ", "an equal number of cols") if (nrow(kmatrix) != nrow(theta)) stop("arguments 'theta' and 'size' do not have ", "an equal number of rows") if (deriv > 0) { if (!(wrt.param %in% 1:2)) stop("argument 'wrt.param' should be 1 or 2") } if (!inverse && length(bvalue)) theta[theta <= 0.0] <- bvalue if (inverse) { switch(as.character(deriv), "0" = { ans <- (kmatrix / expm1(-theta)) if (is.matrix(ans)) dimnames(ans) <- NULL else names(ans) <- NULL ans }, "1" = if (wrt.param == 1) (theta * (theta + kmatrix)) / kmatrix else -(theta + kmatrix), "2" = if (wrt.param == 1) (2 * theta + kmatrix) * theta * (theta + kmatrix) / kmatrix^2 else theta + kmatrix, "3" = { #3rd deriv stop("3rd deriv not yet implemented") }) } else { ans <- switch(as.character(deriv), "0" = log(theta / (theta + kmatrix)), "1" = if (wrt.param == 1) kmatrix / (theta * (theta + kmatrix)) else -1 / (theta + kmatrix), "2" = if (wrt.param == 1) (2 * theta + kmatrix) * (-kmatrix) / (theta * (theta + kmatrix))^2 else 1 / (theta + kmatrix)^2, "3" = { #3rd deriv stop("3rd
deriv not yet implemented") }) if (is.matrix(ans)) dimnames(ans) <- NULL else names(ans) <- NULL ans } } linkfun.vglm <- function(object, earg = FALSE, ...) { if (!any(slotNames(object) == "extra")) stop("cannot access the 'extra' slot of the object") if (!any(slotNames(object) == "misc")) stop("cannot access the 'misc' slot of the object") M <- npred(object) misc <- object@misc LINKS1 <- misc$link EARGS1 <- misc$earg extra <- object@extra LINKS2 <- extra$link EARGS2 <- extra$earg if (length(LINKS1) != M && length(LINKS2) != M) { if (LINKS1 != "multilogit" && LINKS2 != "multilogit") warning("the length of the 'links' component is not ", M) } if (length(LINKS1)) { if (earg) list(link = LINKS1, earg = EARGS1) else LINKS1 } else { if (earg) list(link = LINKS2, earg = EARGS2) else LINKS2 } } if (!isGeneric("linkfun")) setGeneric("linkfun", function(object, ...) standardGeneric("linkfun")) setMethod("linkfun", "vglm", function(object, ...) linkfun.vglm(object, ...)) logitoffsetlink <- function(theta, offset = 0, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) { if (is.character(theta)) { string <- if (short) paste("logitoffsetlink(", theta, ", ", offset[1], ")", sep = "") else paste("log(", as.char.expression(theta), "/(1-", as.char.expression(theta), ")", " - ", offset[1], ")", sep = "") if (tag) string <- paste("Logit-with-offset:", string) return(string) } if (inverse) { switch(as.character(deriv), "0" = { exp.eta <- exp(theta) (exp.eta + offset) / (1 + exp.eta + offset) }, "1" = 1 / Recall(theta = theta, offset = offset, inverse = FALSE, deriv = deriv), "2" = theta * (1 - theta) * (1 - 2 * theta), "3" = { #3rd deriv stop("3rd deriv not yet implemented") }, stop("argument 'deriv' unmatched")) } else { switch(as.character(deriv), "0" = { temp2 <- log(theta / (1 - theta) - offset) temp2 }, "1" = 1 / ((1 - theta) * (theta - (1-theta) * offset)), "2" = (2 * (theta - offset * (1-theta)) - 1) / ( (theta - (1-theta)*offset) * (1-theta))^2, "3" = { #3rd deriv stop("3rd deriv not yet implemented") }, stop("argument 'deriv' unmatched")) } } VGAM/R/cao.R0000644000176200001440000001234113135276757012113 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. cao <- function(formula, family = stop("argument 'family' needs to be assigned"), data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = cao.control(...), offset = NULL, method = "cao.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = NULL, qr.arg = FALSE, smart = TRUE, ...) { dataname <- as.character(substitute(data)) # "list" if no data= function.name <- "cao" ocall <- match.call() if (smart) setup.smart("write") mt <- terms(formula, data = data) if (missing(data)) data <- environment(formula) mf <- match.call(expand.dots = FALSE) mf$family <- mf$method <- mf$model <- mf$x.arg <- mf$y.arg <- mf$control <- mf$contrasts <- mf$constraints <- mf$extra <- mf$qr.arg <- NULL mf$coefstart <- mf$etastart <- mf$... 
<- NULL mf$smart <- NULL mf$drop.unused.levels <- TRUE mf[[1]] <- as.name("model.frame") mf <- eval(mf, parent.frame()) if (method == "model.frame") return(mf) na.act <- attr(mf, "na.action") xvars <- as.character(attr(mt, "variables"))[-1] if ((yvar <- attr(mt, "response")) > 0) xvars <- xvars[-yvar] xlev <- if (length(xvars) > 0) { xlev <- lapply(mf[xvars], levels) xlev[!sapply(xlev, is.null)] } y <- model.response(mf, "numeric") # model.extract(mf, "response") x <- model.matrix(mt, mf, contrasts) attr(x, "assign") <- attrassigndefault(x, mt) offset <- model.offset(mf) if (is.null(offset)) offset <- 0 # yyy ??? w <- model.weights(mf) if (!length(w)) { w <- rep_len(1, nrow(mf)) } else if (NCOL(w) == 1 && any(w < 0)) stop("negative weights not allowed") if (is.character(family)) family <- get(family) if (is.function(family)) family <- family() if (!inherits(family, "vglmff")) { stop("'family = ", family, "' is not a VGAM family function") } eval(vcontrol.expression) if (!is.null(family@first)) eval(family@first) cao.fitter <- get(method) deviance.Bestof <- rep_len(NA_real_, control$Bestof) for (tries in 1:control$Bestof) { if (control$trace && (control$Bestof > 1)) { cat(paste("\n========================= Fitting model", tries, "=========================\n")) if (exists("flush.console")) flush.console() } onefit <- cao.fitter(x = x, y = y, w = w, offset = offset, etastart = etastart, mustart = mustart, coefstart = coefstart, family = family, control = control, constraints = constraints, criterion = control$criterion, extra = extra, qr.arg = qr.arg, Terms = mt, function.name = function.name, ...) deviance.Bestof[tries] <- onefit$crit.list$deviance if (tries == 1 || min(deviance.Bestof[1:(tries-1)]) > deviance.Bestof[tries]) fit <- onefit } fit$misc$deviance.Bestof <- deviance.Bestof fit$misc$dataname <- dataname if (smart) { fit$smart.prediction <- get.smart.prediction() wrapup.smart() } answer <- new("rrvgam", "assign" = attr(x, "assign"), "Bspline" = fit$Bspline, "call" = ocall, "coefficients" = fit$coefficients, "criterion" = fit$crit.list, "family" = fit$family, "misc" = fit$misc, "model" = if (model) mf else data.frame(), "residuals" = as.matrix(fit$wresiduals), "smart.prediction" = as.list(fit$smart.prediction), "terms" = list(terms = mt)) if (!smart) answer@smart.prediction <- list(smart.arg = FALSE) if (qr.arg) { class(fit$qr) <- "list" slot(answer, "qr") <- fit$qr } if (length(attr(x, "contrasts"))) slot(answer, "contrasts") <- attr(x, "contrasts") if (length(fit$fitted.values)) slot(answer, "fitted.values") <- as.matrix(fit$fitted.values) slot(answer, "na.action") <- if (length(na.act)) list(na.act) else list() if (length(offset)) slot(answer, "offset") <- as.matrix(offset) if (length(fit$weights)) slot(answer, "weights") <- as.matrix(fit$weights) if (x.arg) slot(answer, "x") <- fit$x # The 'small' design matrix if (length(xlev)) slot(answer, "xlevels") <- xlev if (y.arg) slot(answer, "y") <- as.matrix(fit$y) slot(answer, "control") <- fit$control slot(answer, "extra") <- if (length(fit$extra)) { if (is.list(fit$extra)) fit$extra else { warning("'extra' is not a list, therefore ", "placing 'extra' into a list") list(fit$extra) } } else list() # R-1.5.0 slot(answer, "iter") <- fit$iter fit$predictors <- as.matrix(fit$predictors) # Must be a matrix dimnames(fit$predictors) <- list(dimnames(fit$predictors)[[1]], fit$misc$predictors.names) slot(answer, "predictors") <- fit$predictors if (length(fit$prior.weights)) slot(answer, "prior.weights") <- as.matrix(fit$prior.weights) answer } 
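# A minimal usage sketch for cao(); the data frame 'mydata' (count
# responses y1 and y2, predictors x2 and x3) is hypothetical.  Extra
# arguments such as Bestof are passed through to cao.control() via '...',
# and the deviances of the refitted models are kept in the 'misc' slot.
if (FALSE) {
  fit <- cao(cbind(y1, y2) ~ x2 + x3, family = poissonff,
             data = mydata, Bestof = 3)
  fit@misc$deviance.Bestof  # one deviance per refit
}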
attr(cao, "smart") <- TRUE
VGAM/R/family.math.R0000644000176200001440000003155713135276757013574 0ustar liggesusers# These functions are
# Copyright (C) 1998-2017 T.W. Yee, University of Auckland.
# All rights reserved.



if (FALSE)
log1pexp <- function(x) {
  ans <- log1p(exp(x))
  big <- (x > 10)
  ans[big] <- x[big] + log1p(exp(-x[big]))
  ans
}



erf <- function(x, inverse = FALSE) {
  if (inverse) {
    ans <- qnorm((x+1)/2) / sqrt(2)
    ans[x <  -1] <- NA
    ans[x >  +1] <- NA
    ans[x == -1] <- -Inf
    ans[x == +1] <-  Inf
    ans
  } else {
    2 * pnorm(x * sqrt(2)) - 1
  }
}


erfc <- function(x, inverse = FALSE) {
  if (inverse) {
    ans <- qnorm(x/2, lower.tail = FALSE) / sqrt(2)
    ans[x <  0] <- NA
    ans[x >  2] <- NA
    ans[x == 0] <-  Inf
    ans[x == 2] <- -Inf
    ans
  } else {
    2 * pnorm(x * sqrt(2), lower.tail = FALSE)
  }
}



lambertW <- function(x, tolerance = 1.0e-10, maxit = 50) {
  if (any(Im(x) != 0.0))
    stop("argument 'x' must be real, not complex!")

  ans <- x
  ans[!is.na(x) & x <  -exp(-1)] <- NA
  ans[!is.na(x) & x >= -exp(-1)] <- log1p(x[!is.na(x) & x >= -exp(-1)])
  ans[!is.na(x) & x >= 0       ] <- sqrt(x[!is.na(x) & x >= 0]) / 2

  cutpt <- 3.0
  if (any(myTF <- !is.na(x) & x > cutpt)) {
    L1 <- log(x[!is.na(x) & x > cutpt])  # log(as.complex(x))
    L2 <- log(L1)  # log(as.complex(L1))
    wzinit <- L1 - L2 +
          (L2 +
          (L2*( -2 + L2)/(2) +
          (L2*(  6 + L2*(-9 + L2*   2)) / (6) +
           L2*(-12 + L2*(36 + L2*(-22 + L2*3))) / (12*L1)) / L1) / L1) / L1
    ans[myTF] <- wzinit
  }

  for (ii in 1:maxit) {  # Halley's method iteration
    exp1 <- exp(ans)
    exp2 <- ans * exp1
    delta <- (exp2 - x) / (exp2 + exp1 -
             ((ans + 2) * (exp2 - x) / (2 * (ans + 1.0))))
    ans <- ans - delta
    if (all(is.na(delta)) ||
        max(abs(delta), na.rm = TRUE) < tolerance) break
    if (ii == maxit)
      warning("did not converge")
  }
  ans[x == Inf] <- Inf
  ans
}




pgamma.deriv <- function(q, shape, tmax = 100) {
  nnn <- max(length(q), length(shape))
  if (length(q)     != nnn) q     <- rep_len(q,     nnn)
  if (length(shape) != nnn) shape <- rep_len(shape, nnn)

  if (!is.Numeric(q, positive = TRUE))
    stop("bad input for argument 'q'")
  if (!is.Numeric(shape, positive = TRUE))
    stop("bad input for argument 'shape'")
  if (!is.Numeric(tmax, length.arg = 1, positive = TRUE))
    stop("bad input for argument 'tmax'")
  if (tmax < 10)
    warning("probably argument 'tmax' is too small")

  gplog  <- lgamma(shape)
  gp1log <- gplog + log(shape)
  psip   <- digamma(shape)
  psip1  <- psip + 1 / shape
  psidp  <- trigamma(shape)
  psidp1 <- psidp - 1 / shape^2

  fred <- .C("VGAM_C_vdigami",
             d = as.double(matrix(0, 6, nnn)),
             x = as.double(q), p = as.double(shape),
             as.double(gplog), as.double(gp1log), as.double(psip),
             as.double(psip1), as.double(psidp), as.double(psidp1),
             ifault = integer(nnn),
             tmax = as.double(tmax),
             as.integer(nnn))
  answer <- matrix(fred$d, nnn, 6, byrow = TRUE)
  dimnames(answer) <- list(names(q),
                           c("q", "q^2", "shape", "shape^2",
                             "q.shape", "pgamma(q, shape)"))

  if (any(fred$ifault != 0)) {
    indices <- which(fred$ifault != 0)
    warning("convergence problems with elements ", indices)
  }

  answer
}



expint <- function (x, deriv = 0) {
  if (deriv == 0) {
    LLL <- length(x)
    answer <- .C("sf_C_expint", x = as.double(x), size = as.integer(LLL),
                 ans = double(LLL))$ans
    answer[x  < 0] <- NA
    answer[x == 0] <- NA
    answer
  } else {
    if (!is.Numeric(deriv, integer.valued = TRUE, positive = TRUE) ||
        deriv > 3)
      stop("Bad input for argument 'deriv'")
    answer <- rep_len(0, length(x))
    if (deriv == 1) {
      answer <- exp(x) / x
    }
    if (deriv == 2) {
      answer <- exp(x) / x - exp(x) / x^2
    }
    if (deriv == 3) {
      answer <- exp(x) / x - 2 * exp(x) / x^2 + 2 * exp(x) / x^3
    }
    answer
  }
}


expexpint <- function (x, deriv = 0) {
  LLL <- length(x)
  answer <-
.C("sf_C_expexpint", x = as.double(x), size = as.integer(LLL), ans = double(LLL))$ans answer[x < 0] <- NA answer[x == 0] <- NA if (deriv > 0) { if (!is.Numeric(deriv, integer.valued = TRUE, positive = TRUE) || deriv > 3) stop("Bad input for argument 'deriv'") if (deriv >= 1) { answer <- -answer + 1 / x } if (deriv >= 2) { answer <- -answer - 1 / x^2 } if (deriv == 3) { answer <- -answer + 2 / x^3 } } answer } expint.E1 <- function (x, deriv = 0) { if (deriv == 0) { LLL <- length(x) answer <- .C("sf_C_expint_e1", x = as.double(x), size = as.integer(LLL), ans = double(LLL))$ans answer[x < 0] <- NA answer[x == 0] <- NA } else { if (!is.Numeric(deriv, integer.valued = TRUE, positive = TRUE) || deriv > 3) stop("Bad input for argument 'deriv'") answer <- rep_len(0, length(x)) if (deriv == 1) { answer <- exp(-x) / x } if (deriv == 2) { answer <- exp(-x) / x + exp(-x) / x^2 } if (deriv == 3) { answer <- exp(-x) / x + 2 * exp(-x) / x^2 + 2 * exp(-x) / x^3 } answer <- (-1)^deriv * answer } answer } if (FALSE) expint <- function(x) { LLL <- length(x) answer <- .C("sf_C_expint", x = as.double(x), size = as.integer(LLL), ans = double(LLL))$ans answer[x < 0] <- NA answer[x == 0] <- NA answer } if (FALSE) expexpint <- function(x) { LLL <- length(x) answer <- .C("sf_C_expexpint", x = as.double(x), size = as.integer(LLL), ans = double(LLL))$ans answer[x < 0] <- NA answer[x == 0] <- NA answer } if (FALSE) pochhammer <- function (x, n) { exp(lgamma(x+n) - lgamma(x)) } if (FALSE) expint.E1 <- function(x) { LLL <- length(x) answer <- .C("sf_C_expint_e1", x = as.double(x), size = as.integer(LLL), ans = double(LLL))$ans answer[x < 0] <- NA answer[x == 0] <- NA answer } Zeta.aux <- function(shape, qq, shift = 1) { LLL <- max(length(shape), length(qq)) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(qq ) != LLL) qq <- rep_len(qq, LLL) if (any(qq < 12-1)) warning("all values of argument 'q' should be 12 or more") aa <- qq B2 <- c(1/6, -1/30, 1/42, -1/30, 5/66, -691/2730, 7/6, -3617/510) kk <- length(B2) # 8 ans <- 1 / ((shape-1) * (shift + aa)^(shape-1)) + 0.5 / (shift + aa)^shape term <- (shape/2) / (shift + aa)^(shape+1) ans <- ans + term * B2[1] for (mm in 2:kk) { term <- term * (shape+2*mm-3) * (shape+2*mm-2) / ((2*mm-1) * 2 * mm * (shift + aa)^2) ans <- ans + term * B2[mm] } ifelse(aa - 1 <= qq, ans, rep(0, length(ans))) # Handled above } zeta <- function(x, deriv = 0, shift = 1) { deriv.arg <- deriv rm(deriv) if (!is.Numeric(deriv.arg, length.arg = 1, integer.valued = TRUE)) stop("'deriv' must be a single non-negative integer") if (deriv.arg < 0 || deriv.arg > 2) stop("'deriv' must be 0, 1, or 2") if (deriv.arg > 0) return(Zeta.derivative(x, deriv.arg = deriv.arg, shift = shift)) if (any(special <- Re(x) <= 1)) { ans <- x ans[special] <- Inf # For Re(x) == 1 special3 <- Re(x) < 1 ans[special3] <- NA # For 0 < Re(x) < 1 special4 <- (0 < Re(x)) & (Re(x) < 1) & (Im(x) == 0) ans[special4] <- Zeta.derivative(x[special4], deriv.arg = deriv.arg, shift = shift) special2 <- Re(x) < 0 if (any(special2)) { x2 <- x[special2] cx <- 1 - x2 ans[special2] <- 2^(x2) * pi^(x2-1) * sin(pi*x2/2) * gamma(cx) * Recall(cx) } # special2 if (any(!special)) { ans[!special] <- Recall(x[!special]) } return(ans) } # special aa <- 12 ans <- 0 for (ii in 0:(aa-1)) ans <- ans + 1 / (shift + ii)^x ans <- ans + Zeta.aux(shape = x, aa, shift = shift) ans[shift <= 0] <- NaN ans } # zeta Zeta.derivative <- function(x, deriv.arg = 0, shift = 1) { if (!all(shift == 1)) stop("currently 'shift' must all be 1") if 
(!is.Numeric(deriv.arg, length.arg = 1, integer.valued = TRUE)) stop("'deriv.arg' must be a single non-negative integer") if (deriv.arg < 0 || deriv.arg > 2) stop("'deriv.arg' must be 0, 1, or 2") if (any(Im(x) != 0)) stop("Sorry, currently can only handle x real, not complex") if (any(x < 0)) stop("Sorry, currently cannot handle x < 0") ok <- is.finite(x) & x > 0 & x != 1 # Handles NAs ans <- rep_len(NA_real_, length(x)) nn <- sum(ok) # Effective length (excludes x < 0 and x = 1 values) if (nn) ans[ok] <- .C("vzetawr", as.double(x[ok]), ans = double(nn), as.integer(deriv.arg), as.integer(nn))$ans if (deriv.arg == 0) ans[is.finite(x) & abs(x) < 1.0e-12] <- -0.5 ans } ghn100 <- c(-13.4064873381449, -12.8237997494878, -12.3429642228597, -11.9150619431142, -11.521415400787, -11.1524043855851, -10.8022607536847, -10.4671854213428, -10.1445099412928, -9.83226980777795, -9.5289658233901, -9.23342089021916, -8.94468921732547, -8.66199616813451, -8.38469694041627, -8.11224731116279, -7.84418238446082, -7.58010080785749, -7.31965282230454, -7.06253106024886, -6.80846335285879, -6.55720703192154, -6.30854436111214, -6.0622788326143, -5.81823213520352, -5.57624164932992, -5.33615836013836, -5.09784510508914, -4.86117509179121, -4.62603063578716, -4.39230207868269, -4.15988685513103, -3.92868868342767, -3.69861685931849, -3.46958563641859, -3.24151367963101, -3.01432358033115, -2.78794142398199, -2.56229640237261, -2.33732046390688, -2.11294799637119, -1.88911553742701, -1.66576150874151, -1.44282597021593, -1.22025039121895, -0.997977436098106, -0.775950761540146, -0.554114823591618, -0.332414692342232, -0.11079587242244, 0.110795872422439, 0.332414692342232, 0.554114823591617, 0.775950761540145, 0.997977436098105, 1.22025039121895, 1.44282597021593, 1.66576150874151, 1.88911553742701, 2.11294799637119, 2.33732046390688, 2.5622964023726, 2.78794142398199, 3.01432358033115, 3.24151367963101, 3.46958563641859, 3.69861685931849, 3.92868868342767, 4.15988685513103, 4.39230207868269, 4.62603063578716, 4.86117509179121, 5.09784510508914, 5.33615836013836, 5.57624164932993, 5.81823213520351, 6.0622788326143, 6.30854436111214, 6.55720703192153, 6.80846335285879, 7.06253106024886, 7.31965282230453, 7.58010080785749, 7.84418238446082, 8.11224731116279, 8.38469694041626, 8.66199616813451, 8.94468921732548, 9.23342089021915, 9.52896582339012, 9.83226980777795, 10.1445099412928, 10.4671854213428, 10.8022607536847, 11.1524043855851, 11.521415400787, 11.9150619431142, 12.3429642228597, 12.8237997494878, 13.4064873381449 ) ghw100 <- c(5.90806786503149e-79, 1.97286057487953e-72, 3.08302899000321e-67, 9.01922230369242e-63, 8.51888308176111e-59, 3.45947793647603e-55, 7.19152946346349e-52, 8.59756395482676e-49, 6.42072520534849e-46, 3.18521787783596e-43, 1.10047068271428e-40, 2.74878488435709e-38, 5.11623260438594e-36, 7.27457259688812e-34, 8.06743427870884e-32, 7.10181222638517e-30, 5.03779116621273e-28, 2.91735007262926e-26, 1.39484152606877e-24, 5.56102696165936e-23, 1.86499767513029e-21, 5.30231618313167e-20, 1.28683292112113e-18, 2.68249216476057e-17, 4.82983532170314e-16, 7.5488968779154e-15, 1.02887493735098e-13, 1.22787851441009e-12, 1.28790382573158e-11, 1.19130063492903e-10, 9.74792125387112e-10, 7.07585728388942e-09, 4.568127508485e-08, 2.62909748375372e-07, 1.35179715911036e-06, 6.22152481777778e-06, 2.56761593845487e-05, 9.51716277855096e-05, 0.000317291971043304, 0.000952692188548621, 0.00257927326005907, 0.00630300028560806, 0.0139156652202317, 0.0277791273859335, 0.0501758126774289, 0.0820518273912242, 
0.121537986844105, 0.163130030502782, 0.198462850254188, 0.218892629587438, 0.21889262958744, 0.198462850254186, 0.163130030502783, 0.121537986844104, 0.082051827391225, 0.0501758126774289, 0.0277791273859336, 0.0139156652202318, 0.00630300028560809, 0.00257927326005912, 0.000952692188548612, 0.000317291971043303, 9.51716277855086e-05, 2.5676159384549e-05, 6.22152481777782e-06, 1.35179715911039e-06, 2.62909748375376e-07, 4.56812750848495e-08, 7.07585728388942e-09, 9.74792125387167e-10, 1.19130063492907e-10, 1.28790382573154e-11, 1.22787851441012e-12, 1.02887493735101e-13, 7.5488968779154e-15, 4.82983532170362e-16, 2.68249216476036e-17, 1.28683292112121e-18, 5.30231618313197e-20, 1.86499767513026e-21, 5.56102696165912e-23, 1.39484152606877e-24, 2.91735007262916e-26, 5.03779116621305e-28, 7.10181222638506e-30, 8.06743427870919e-32, 7.2745725968875e-34, 5.1162326043855e-36, 2.74878488435732e-38, 1.10047068271418e-40, 3.18521787783605e-43, 6.42072520534922e-46, 8.59756395482676e-49, 7.1915294634638e-52, 3.45947793647628e-55, 8.51888308176039e-59, 9.01922230369063e-63, 3.08302899000303e-67, 1.97286057487992e-72, 5.90806786503182e-79 ) VGAM/R/vgam.fit.q0000644000176200001440000003455413135276760013127 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. vgam.fit <- function(x, y, w = rep_len(1, nrow(x)), mf, # No X.vlm.arg, but mf happens to be in its position Xm2 = NULL, Ym2 = NULL, # Added 20130730 etastart = NULL, mustart = NULL, coefstart = NULL, offset = 0, family, control = vgam.control(), qr.arg = FALSE, constraints = NULL, extra = NULL, Terms, nonparametric, smooth.labels, function.name = "vgam", sm.osps.list = NULL, # mf, ...) { mgcvvgam <- length(sm.osps.list) > 0 if (is.null(criterion <- control$criterion)) criterion <- "coefficients" eff.n <- nrow(x) # + sum(abs(w[1:nrow(x)])) specialCM <- NULL post <- list() check.rank <- control$Check.rank epsilon <- control$epsilon maxit <- control$maxit save.weights <- control$save.weights trace <- control$trace bf.maxit <- control$bf.maxit bf.epsilon <- control$bf.epsilon se.fit <- control$se.fit minimize.criterion <- control$min.criterion fv <- NULL n <- nrow(x) old.coeffs <- coefstart intercept.only <- ncol(x) == 1 && colnames(x) == "(Intercept)" y.names <- predictors.names <- NULL # May be overwritten in @initialize n.save <- n if (length(slot(family, "initialize"))) eval(slot(family, "initialize")) # Initialize mu & M (& optionally w) if (length(etastart)) { eta <- etastart mu <- if (length(mustart)) mustart else slot(family, "linkinv")(eta, extra = extra) } if (length(mustart)) { mu <- mustart if (length(body(slot(family, "linkfun")))) { eta <- slot(family, "linkfun")(mu, extra = extra) } else { warning("argument 'mustart' assigned a value ", "but there is no 'linkfun' slot to use it") } } validparams <- validfitted <- TRUE if (length(body(slot(family, "validparams")))) validparams <- slot(family, "validparams")(eta, y = y, extra = extra) if (length(body(slot(family, "validfitted")))) validfitted <- slot(family, "validfitted")(mu, y = y, extra = extra) if (!(validparams && validfitted)) stop("could not obtain valid initial values. 
", "Try using 'etastart', 'coefstart' or 'mustart', else ", "family-specific arguments such as 'imethod'.") M <- NCOL(eta) if (length(family@constraints)) eval(slot(family, "constraints")) Hlist <- process.constraints(constraints, x = x, M = M, specialCM = specialCM, Check.cm.rank = control$Check.cm.rank) ncolHlist <- unlist(lapply(Hlist, ncol)) if (nonparametric) { smooth.frame <- mf assignx <- attr(x, "assign") which <- assignx[smooth.labels] bf <- "s.vam" bf.call <- parse(text = paste( "s.vam(x, z, wz, tfit$smomat, which, tfit$smooth.frame,", "bf.maxit, bf.epsilon, trace, se = se.fit, X.vlm.save, ", "Hlist, ncolHlist, M = M, qbig = qbig, Umat = U, ", "all.knots = control$all.knots, nk = control$nk)", sep = ""))[[1]] qbig <- sum(ncolHlist[smooth.labels]) # Number of component funs smomat <- matrix(0, n, qbig) dy <- if (is.matrix(y)) dimnames(y)[[1]] else names(y) d2 <- if (is.null(predictors.names)) paste("(Additive predictor ",1:M,")", sep = "") else predictors.names dimnames(smomat) <- list(dy, vlabel(smooth.labels, ncolHlist[smooth.labels], M)) tfit <- list(smomat = smomat, smooth.frame = smooth.frame) } else { bf.call <- expression(vlm.wfit(xmat = X.vlm.save, z, Hlist = NULL, U = U, matrix.out = FALSE, is.vlmX = TRUE, qr = qr.arg, xij = NULL)) bf <- "vlm.wfit" } X.vlm.save <- lm2vlm.model.matrix(x, Hlist, xij = control$xij, Xm2 = Xm2) # 20160420 if (mgcvvgam) { Xvlm.aug <- get.X.VLM.aug(constraints = constraints, sm.osps.list = sm.osps.list) first.sm.osps <- TRUE # Useless actually } if (length(coefstart)) { eta <- if (ncol(X.vlm.save) > 1) { matrix(X.vlm.save %*% coefstart, n, M, byrow = TRUE) + offset } else { matrix(X.vlm.save * coefstart, n, M, byrow = TRUE) + offset } if (M == 1) eta <- c(eta) mu <- slot(family, "linkinv")(eta, extra = extra) } if (criterion != "coefficients") { tfun <- slot(family, criterion) # Needed 4 R so have to follow suit } iter <- 1 new.crit <- switch(criterion, coefficients = 1, tfun(mu = mu, y = y, w = w, res = FALSE, eta = eta, extra = extra)) old.crit <- ifelse(minimize.criterion, 10 * new.crit + 10, -10 * new.crit - 10) deriv.mu <- eval(slot(family, "deriv")) wz <- eval(slot(family, "weight")) if (control$checkwz) wz <- checkwz(wz, M = M, trace = trace, wzepsilon = control$wzepsilon) U <- vchol(wz, M = M, n = n, silent = !trace) tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = n) z <- eta + vbacksub(U, tvfor, M = M, n = n) - offset nrow.X.vlm <- nrow(X.vlm.save) ncol.X.vlm <- ncol(X.vlm.save) if (!nonparametric && nrow.X.vlm < ncol.X.vlm) stop("There are ", ncol.X.vlm, " parameters but only ", nrow.X.vlm, " observations") if (mgcvvgam) { bf.call <- expression(vlm.wfit(xmat = X.vlm.save, z, Hlist = Hlist, U = U, matrix.out = FALSE, is.vlmX = TRUE, qr = qr.arg, xij = NULL, Xvlm.aug = Xvlm.aug, sm.osps.list = sm.osps.list, constraints = constraints, first.sm.osps = first.sm.osps, control = control, # 20160813 trace = trace)) bf <- "vlm.wfit" } fully.cvged <- FALSE for (iter.outer in 1:control$Maxit.outer) { if (fully.cvged) break if (trace && mgcvvgam) { cat("VGAM outer iteration ", iter.outer, " =============================================\n") flush.console() } iter <- 1 # This is a reset for iter.outer > 1. 
one.more <- TRUE sm.osps.list$fixspar <- sm.osps.list$orig.fixspar while (one.more) { tfit <- eval(bf.call) # fit$smooth.frame is new if (mgcvvgam) { first.sm.osps <- tfit$first.sm.osps Xvlm.aug <- tfit$Xvlm.aug sm.osps.list <- tfit$sm.osps.list if (control$Maxit.outer > 1) sm.osps.list$fixspar <- rep_len(TRUE, length(sm.osps.list$fixspar)) magicfit <- tfit$magicfit } fv <- tfit$fitted.values # c.list$fit if (mgcvvgam) { fv <- head(fv, n * M) } new.coeffs <- tfit$coefficients # c.list$coeff if (length(slot(family, "middle"))) eval(slot(family, "middle")) eta <- fv + offset mu <- slot(family, "linkinv")(eta, extra = extra) if (length(family@middle2)) eval(family@middle2) old.crit <- new.crit new.crit <- switch(criterion, coefficients = new.coeffs, tfun(mu = mu, y = y, w = w, res = FALSE, eta = eta, extra = extra)) if (trace) { cat("VGAM ", bf, " loop ", iter, ": ", criterion, "= ") UUUU <- switch(criterion, coefficients = format(new.crit, dig = round(1 - log10(epsilon))), format(new.crit, dig = max(4, round(-0 - log10(epsilon) + log10(sqrt(eff.n)))))) switch(criterion, coefficients = {if (length(new.crit) > 2) cat("\n"); cat(UUUU, fill = TRUE, sep = ", ")}, cat(UUUU, fill = TRUE, sep = ", ")) } one.more <- eval(control$convergence) flush.console() if (!is.logical(one.more)) one.more <- FALSE if (one.more) { iter <- iter + 1 deriv.mu <- eval(slot(family, "deriv")) wz <- eval(slot(family, "weight")) if (control$checkwz) wz <- checkwz(wz, M = M, trace = trace, wzepsilon = control$wzepsilon) U <- vchol(wz, M = M, n = n, silent = !trace) tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = n) z <- eta + vbacksub(U, tvfor, M = M, n = n) - offset } else { fully.cvged <- if (mgcvvgam) (iter <= 2) else TRUE } old.coeffs <- new.coeffs } # End of while() } # End of for() if (maxit > 1 && iter >= maxit && !control$noWarning) warning("convergence not obtained in ", maxit, " IRLS iterations") if (control$Maxit.outer > 1 && iter.outer >= control$Maxit.outer && !control$noWarning) warning("convergence not obtained in ", control$Maxit.outer, " outer iterations") dnrow.X.vlm <- labels(X.vlm.save) xnrow.X.vlm <- dnrow.X.vlm[[2]] ynrow.X.vlm <- dnrow.X.vlm[[1]] if (length(slot(family, "fini"))) eval(slot(family, "fini")) if (M > 1) fv <- matrix(fv, n, M) final.coefs <- new.coeffs # Was tfit$coefficients prior to 20160317 asgn <- attr(X.vlm.save, "assign") names(final.coefs) <- xnrow.X.vlm if (!is.null(tfit$rank)) { rank <- tfit$rank } else { rank <- NCOL(x) } cnames <- xnrow.X.vlm if (!nonparametric && # The first condition needed for vgam() check.rank && rank < ncol.X.vlm) stop("vgam() only handles full-rank models (currently)") R <- tfit$qr$qr[1:ncol.X.vlm, 1:ncol.X.vlm, drop = FALSE] R[lower.tri(R)] <- 0 attributes(R) <- list(dim = c(ncol.X.vlm, ncol.X.vlm), dimnames = list(cnames, cnames), rank = rank) dim(fv) <- c(n, M) dn <- labels(x) yn <- dn[[1]] xn <- dn[[2]] wresiduals <- z - fv # Replaced by fv 20160408 if (M == 1) { fv <- as.vector(fv) wresiduals <- as.vector(wresiduals) names(wresiduals) <- names(fv) <- yn } else { dimnames(wresiduals) <- dimnames(fv) <- list(yn, predictors.names) } if (is.matrix(mu)) { if (length(dimnames(y)[[2]])) { y.names <- dimnames(y)[[2]] } if (length(dimnames(mu)[[2]])) { y.names <- dimnames(mu)[[2]] } dimnames(mu) <- list(yn, y.names) } else { names(mu) <- names(fv) } tfit$fitted.values <- NULL # Have to kill it off 20011203 fit <- structure(c(tfit, list(assign = asgn, constraints = Hlist, control = control, fitted.values = mu, formula = as.vector(attr(Terms, "formula")), 
iter = iter, offset = offset, rank = rank, R = R, terms = Terms))) if (qr.arg) { fit$qr <- tfit$qr dimnames(fit$qr$qr) <- dnrow.X.vlm } if (!mgcvvgam && !se.fit) { fit$varmat <- NULL } if (M == 1) { wz <- as.vector(wz) # Convert wz into a vector } # else fit$weights <- if (save.weights) wz else NULL NewHlist <- process.constraints(constraints, x, M, specialCM = specialCM, by.col = FALSE) misc <- list( colnames.x = xn, colnames.X.vlm = xnrow.X.vlm, criterion = criterion, function.name = function.name, intercept.only = intercept.only, predictors.names = predictors.names, M = M, n = n, new.assign = new.assign(x, NewHlist), nonparametric = nonparametric, nrow.X.vlm = nrow.X.vlm, orig.assign = attr(x, "assign"), p = ncol(x), ncol.X.vlm = ncol.X.vlm, ynames = colnames(y)) if (!mgcvvgam && se.fit && length(fit$s.xargument)) { misc$varassign <- varassign(Hlist, names(fit$s.xargument)) } if (nonparametric) { misc$smooth.labels <- smooth.labels } if (mgcvvgam) { misc$Xvlm.aug <- Xvlm.aug misc$sm.osps.list <- sm.osps.list misc$magicfit <- magicfit misc$iter.outer <- iter.outer } crit.list <- list() if (criterion != "coefficients") crit.list[[criterion]] <- fit[[criterion]] <- new.crit for (ii in names(.min.criterion.VGAM)) { if (ii != criterion && any(slotNames(family) == ii) && length(body(slot(family, ii)))) { fit[[ii]] <- crit.list[[ii]] <- (slot(family, ii))(mu = mu, y = y, w = w, res = FALSE, eta = eta, extra = extra) } } if (w[1] != 1 || any(w != w[1])) fit$prior.weights <- w if (length(slot(family, "last"))) eval(slot(family, "last")) if (!is.null(fit$smomat)) { fit$nl.chisq <- vgam.nlchisq(fit$qr, fit$resid, wz = wz, smomat = fit$smomat, deriv = deriv.mu, U = U, smooth.labels, attr(x, "assign"), M = M, n = n, constraints = Hlist) } if (!qr.arg) { fit$qr <- NULL } fit$misc <- NULL structure(c(fit, list(predictors = fv, # tfit$predictors, contrasts = attr(x, "contrasts"), control = control, crit.list = crit.list, extra = extra, family = family, iter = iter, misc = misc, post = post, x = x, y = y)), vclass = slot(family, "vfamily")) } # vgam.fit() new.assign <- function(X, Hlist) { M <- nrow(Hlist[[1]]) dn <- labels(X) xn <- dn[[2]] asgn <- attr(X, "assign") nasgn <- names(asgn) lasgn <- unlist(lapply(asgn, length)) ncolHlist <- unlist(lapply(Hlist, ncol)) names(ncolHlist) <- NULL # This is necessary for below to work temp2 <- vlabel(nasgn, ncolHlist, M) L <- length(temp2) newasgn <- vector("list", L) kk <- 0 low <- 1 for (ii in seq_along(asgn)) { len <- low:(low + ncolHlist[ii] * lasgn[ii] -1) temp <- matrix(len, ncolHlist[ii], lasgn[ii]) for (mm in 1:ncolHlist[ii]) newasgn[[kk + mm]] <- temp[mm, ] low <- low + ncolHlist[ii] * lasgn[ii] kk <- kk + ncolHlist[ii] } names(newasgn) <- temp2 newasgn } # new.assign VGAM/R/vcov.pvgam.R0000644000176200001440000002643313135276757013446 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. vcov.pvgam <- function(object, ...) { vcovpvgam(object, ...) } vcovpvgam <- function(object, special = FALSE, frequentist = FALSE, dispersion = NULL, unconditional = FALSE, ...) 
{ if (!special) { return(vcovvlm(object, ...)) } warning("vcovpvgam() is only 50% finished") print("in vcovpvgam; hi 2a") print("class(object)") print( class(object) ) M <- npred(object) n <- nobs(object, type = "lm") wz <- weights(object, type = "working") X.vlm.save <- model.matrix(object, type = "vlm") U <- vchol(wz, M = M, n = n) X.vlm <- mux111(U, X.vlm.save, M = M) X.vlm.aug <- rbind(X.vlm, model.matrix(object, type = "penalty")) qr1 <- qr(X.vlm.aug) qr2 <- qr(X.vlm) poststuff <- mgcv::magic.post.proc(X.vlm.aug, object = object@ospsslot$magicfit, w = NULL) magicfit <- object@ospsslot$magicfit rV <- magicfit$rV Vb <- poststuff$Vb Ve <- poststuff$Ve hhat <- poststuff$hat eedf <- poststuff$edf scale.param <- 1 # Assumed vc <- if (frequentist) { mat1 <- solve(crossprod(qr.R(qr1))) scale.param * (mat1 %*% crossprod(qr.R(qr2)) %*% mat1) } else { Vc <- NULL # Corrected ML or REML is not available. Vp <- scale.param * tcrossprod(solve(qr.R(qr1))) Vp2 <- rV %*% t(rV) # * sig2 # For checking print("max(abs(Vp - Vp2)); should be 0") print( max(abs(Vp - Vp2)) ) if (FALSE) { He <- SemiParFit$fit$hessian He.eig <- eigen(He, symmetric=TRUE) Vb <- He.eig$vectors %*% tcrossprod(diag(1/He.eig$values), He.eig$vectors) # this could be taken from magic as well Vb <- (Vb + t(Vb) ) / 2 HeSh <- He - SemiParFit$fit$S.h F <- Vb%*%HeSh # diag(SemiParFit$magpp$edf) HeSh <- He Ve <- Vb F <- F1 <- diag(rep(1,dim(Vb)[1])) R <- SemiParFit$bs.mgfit$R } if (unconditional && !is.null(Vc)) Vc else Vp } # Bayesian if (is.null(dispersion)) { sig2 <- 1 # zz vc <- summary(object)@dispersion * vc / sig2 } else { sig2 <- summary(object)@dispersion # 1 # zz vc <- dispersion * vc / sig2 } print("head(sort(diag(vc)))") print( head(sort(diag(vc))) ) print("head(sort(diag(Ve)))") print( head(sort(diag(Ve))) ) print("tail(sort(diag(vc)))") print( tail(sort(diag(vc))) ) print("tail(sort(diag(Ve)))") print( tail(sort(diag(Ve))) ) print("head(sort(diag(vc))) / head(sort(diag(Ve)))") print( head(sort(diag(vc))) / head(sort(diag(Ve))) ) print("max(abs(sort(diag(vc)) - sort(diag(Ve))))") print( max(abs(sort(diag(vc)) - sort(diag(Ve)))) ) vc } setMethod("vcov", "pvgam", function(object, ...) vcovpvgam(object, ...)) startstoppvgam <- function(object, ...) 
{ which.X.sm.osps <- object@ospsslot$sm.osps.list$which.X.sm.osps if (!length(which.X.sm.osps)) stop("no 'sm.os()' or 'sm.ps()' term in 'object'") all.ncol.Hk <- unlist(lapply(constraints(object, type = "term"), ncol)) names.which.X.sm.osps <- names(which.X.sm.osps) endf <- rep_len(NA_real_, sum(all.ncol.Hk[names.which.X.sm.osps])) names(endf) <- vlabel(names.which.X.sm.osps, all.ncol.Hk[names.which.X.sm.osps], M = npred(object)) stopstart <- NULL iptr <- 1 iterm <- 1 for (ii in names(all.ncol.Hk)) { if (length(which.X.sm.osps[[ii]])) { temp3 <- -1 + iptr + all.ncol.Hk[ii] * length(which.X.sm.osps[[ii]]) new.index <- iptr:temp3 # Includes all component functions wrt xk iptr <- iptr + length(new.index) # temp3 mat.index <- matrix(new.index, ncol = all.ncol.Hk[ii], byrow = TRUE) for (jay in 1:all.ncol.Hk[ii]) { cf.index <- mat.index[, jay] stopstart <- c(stopstart, list(cf.index)) iterm <- iterm + 1 } # for } else { iptr <- iptr + all.ncol.Hk[ii] } } # ii names(stopstart) <- names(endf) stopstart } summarypvgam <- function(object, dispersion = NULL, digits = options()$digits-2, presid = TRUE) { stuff <- summaryvglm(object, dispersion = dispersion, digits = digits, presid = presid) answer <- new("summary.pvgam", object, call = stuff@call, cov.unscaled = stuff@cov.unscaled, correlation = stuff@correlation, df = stuff@df, sigma = stuff@sigma) answer@misc$nopredictors <- stuff@misc$nopredictors answer@ospsslot <- object@ospsslot slot(answer, "coefficients") <- stuff@coefficients # Replace coef3 <- stuff@coef3 aassign <- attr(model.matrix(object, type = "vlm"), "assign") myterms <- names(object@ospsslot$sm.osps.list$which.X.sm.osps) index.exclude <- NULL for (ii in myterms) { index.exclude <- c(index.exclude, unlist(aassign[[ii]])) } slot(answer, "coef3") <- coef3[-index.exclude, , drop = FALSE] if (is.numeric(stuff@dispersion)) slot(answer, "dispersion") <- stuff@dispersion if (presid) { Presid <- residuals(object, type = "pearson") if (length(Presid)) answer@pearson.resid <- as.matrix(Presid) } pinv <- function(V, M, rank.tol = 1e-6) { D <- eigen(V, symmetric = TRUE) M1 <- length(D$values[D$values > rank.tol * D$values[1]]) if (M > M1) M <- M1 # avoid problems with zero eigen-values if (M+1 <= length(D$values)) D$values[(M+1):length(D$values)] <- 1 D$values <- 1 / D$values if (M+1 <= length(D$values)) D$values[(M+1):length(D$values)] <- 0 res <- D$vectors %*% (D$values * t(D$vectors)) ##D$u%*%diag(D$d)%*%D$v attr(res, "rank") <- M res } ## end of pinv startstop <- startstoppvgam(object) m <- length(startstop) df <- edf1 <- edf <- s.pv <- chi.sq <- array(0, m) names(chi.sq) <- names(startstop) p.type <- 5 # Frequentist est.disp <- if (is.logical(object@misc$estimated.dispersion)) object@misc$estimated.dispersion else FALSE pvgam.residual.df <- df.residual_pvgam(object) for (i in 1:m) { p <- coef(as(object, "pvgam"))[(startstop[[i]])] # params for smooth endf <- endfpvgam(object, diag.all = TRUE) # This is ENDF+1 actually edf1[i] <- edf[i] <- sum(endf[(startstop[[i]])]) if (FALSE && !is.null(object$edf1)) edf1[i] <- sum(object$edf1[(startstop[[i]])]) V <- if (p.type == 5) { Ve <- vcov(object, special = FALSE) Ve[(startstop[[i]]), (startstop[[i]]), drop = FALSE] } else { Vp <- vcov(object, special = TRUE, frequentist = FALSE) Vp[(startstop[[i]]), (startstop[[i]]), drop = FALSE] } if (p.type == 5) { M1 <- length(startstop[[i]]) # zz M <- min(M1, ceiling(2*sum(endf[(startstop[[i]])])) ) V <- pinv(V, M) # , rank.tol = 1e-5 chi.sq[i] <- t(p) %*% V %*% p df[i] <- attr(V, "rank") } if (p.type == 5) { 
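# Convert the Wald-type statistic chi.sq[i] = t(p) %*% V %*% p (with V
# a rank-truncated pseudoinverse from pinv() above) into a p-value: an
# F reference distribution is used when the dispersion was estimated,
# otherwise chi-squared with the estimated rank as degrees of freedom.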
s.pv[i] <- if (est.disp) { pf(chi.sq[i] / df[i], df1 = df[i], df2 = pvgam.residual.df, lower.tail = FALSE) } else { pchisq(chi.sq[i], df = df[i], lower.tail = FALSE) } if (df[i] < 0.1) s.pv[i] <- NA } if (est.disp) { if (p.type == 5) { s.table <- cbind(edf, df, chi.sq / df, s.pv) dimnames(s.table) <- list(names(chi.sq), c("edf", "Est.rank", "F", "p-value")) } else { s.table <- cbind(edf, df, chi.sq/df, s.pv) dimnames(s.table) <- list(names(chi.sq), c("edf", "Ref.df", "F", "p-value")) } } else { if (p.type == 5) { # This case is commonly executed s.table <- cbind(edf, df, chi.sq, s.pv) dimnames(s.table) <- list(names(chi.sq), c("edf", "Est.rank", "Chi.sq", "p-value")) } else { s.table <- cbind(edf, df, chi.sq, s.pv) dimnames(s.table) <- list(names(chi.sq), c("edf", "Ref.df", "Chi.sq", "p-value")) } } # else } # for (i) answer@post$s.table <- s.table aod <- data.frame(message = 'this does not work yet') slot(answer, "anova") <- aod answer } # summarypvgam() show.summary.pvgam <- function(x, quote = TRUE, prefix = "", digits = options()$digits-2, signif.stars = getOption("show.signif.stars")) { show.summary.vglm(x, quote = quote, prefix = prefix, digits = digits, top.half.only = TRUE) startstop <- startstoppvgam(x) m <- length(startstop) s.table <- x@post$s.table if (0 < m && length(s.table)) { cat("\nApproximate significance of smooth terms:\n") printCoefmat(s.table, digits = digits, signif.stars = signif.stars, has.Pvalue = TRUE, na.print = "NA", cs.ind = 1) } M <- x@misc$M Presid <- x@pearson.resid rdf <- x@df[2] cat("\nNumber of linear/additive predictors: ", M, "\n") if (!is.null(x@misc$predictors.names)) if (M == 1) cat("\nName of linear/additive predictor:", paste(x@misc$predictors.names, collapse = ", "), "\n") else if (M <= 5) cat("\nNames of linear/additive predictors:", paste(x@misc$predictors.names, collapse = ", "), "\n") prose <- "" if (length(x@dispersion)) { if (is.logical(x@misc$estimated.dispersion) && x@misc$estimated.dispersion) { prose <- "(Estimated) " } else { if (is.numeric(x@misc$default.dispersion) && x@dispersion == x@misc$default.dispersion) prose <- "(Default) " if (is.numeric(x@misc$default.dispersion) && x@dispersion != x@misc$default.dispersion) prose <- "(Pre-specified) " } cat(paste("\n", prose, "Dispersion Parameter for ", x@family@vfamily[1], " family: ", format(round(x@dispersion, digits)), "\n", sep = "")) } if (length(deviance(x))) cat("\nResidual deviance: ", format(round(deviance(x), digits)), "on", format(round(rdf, 3)), "degrees of freedom\n") if (length(logLik.vlm(x))) cat("\nLog-likelihood:", format(round(logLik.vlm(x), digits)), "on", format(round(rdf, 3)), "degrees of freedom\n") if (length(x@criterion)) { ncrit <- names(x@criterion) for (ii in ncrit) if (ii != "loglikelihood" && ii != "deviance") cat(paste(ii, ":", sep = ""), format(x@criterion[[ii]]), "\n") } if (is.Numeric(x@ospsslot$iter.outer)) { cat("\nNumber of outer iterations: ", x@ospsslot$iter.outer, "\n") cat("\nNumber of IRLS iterations at final outer iteration: ", x@iter, "\n") } else { cat("\nNumber of IRLS iterations: ", x@iter, "\n") } if (FALSE && length(x@anova)) { show.vanova(x@anova, digits = digits) # ".vanova" for Splus6 } invisible(NULL) } # show.summary.pvgam() setMethod("summary", "pvgam", function(object, ...) summarypvgam(object, ...)) setMethod("show", "summary.pvgam", function(object) show.summary.pvgam(object)) psintpvgam <- function(object, ...) { object@ospsslot$sm.osps.list$ps.int } if (!isGeneric("psint")) setGeneric("psint", function(object, ...) 
standardGeneric("psint"), package = "VGAM") setMethod("psint", "pvgam", function(object, ...) psintpvgam(object, ...)) VGAM/R/model.matrix.vglm.q0000644000176200001440000005640013135276757014763 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. attrassigndefault <- function(mmat, tt) { if (!inherits(tt, "terms")) stop("need terms object") aa <- attr(mmat, "assign") if (is.null(aa)) stop("argument is not really a model matrix") ll <- attr(tt, "term.labels") if (attr(tt, "intercept") > 0) ll <- c("(Intercept)", ll) aaa <- factor(aa, labels = ll) split(order(aa), aaa) } attrassignlm <- function(object, ...) attrassigndefault(model.matrix(object), object@terms) vlabel <- function(xn, ncolHlist, M, separator = ":", colon = FALSE) { if (length(xn) != length(ncolHlist)) stop("length of first two arguments not equal") n1 <- rep(xn, ncolHlist) if (M == 1) return(n1) n2 <- as.list(ncolHlist) n2 <- lapply(n2, seq) n2 <- unlist(n2) n2 <- as.character(n2) n2 <- paste(separator, n2, sep = "") n3 <- rep(ncolHlist, ncolHlist) if (!colon) n2[n3 == 1] <- "" n1n2 <- paste(n1, n2, sep = "") n1n2 } vlm2lm.model.matrix <- function(x.vlm, Hlist = NULL, which.linpred = 1, M = NULL) { if (is.numeric(M)) { if (M != nrow(Hlist[[1]])) stop("argument 'M' does not match argument 'Hlist'") } else { M <- nrow(Hlist[[1]]) } Hmatrices <- matrix(c(unlist(Hlist)), nrow = M) if (ncol(Hmatrices) != ncol(x.vlm)) stop("ncol(Hmatrices) != ncol(x.vlm)") n.lm <- nrow(x.vlm) / M if (round(n.lm) != n.lm) stop("'n.lm' does not seem to be an integer") linpred.index <- which.linpred vecTF <- Hmatrices[linpred.index, ] != 0 X.lm.jay <- x.vlm[(0:(n.lm - 1)) * M + linpred.index, vecTF, drop = FALSE] X.lm.jay } lm2vlm.model.matrix <- function(x, Hlist = NULL, assign.attributes = TRUE, M = NULL, xij = NULL, Xm2 = NULL) { if (length(Hlist) != ncol(x)) stop("length(Hlist) != ncol(x)") if (length(xij)) { if (inherits(xij, "formula")) xij <- list(xij) if (!is.list(xij)) stop("'xij' is not a list of formulae") } if (!is.numeric(M)) M <- nrow(Hlist[[1]]) nrow.X.lm <- nrow(x) if (all(trivial.constraints(Hlist) == 1)) { X.vlm <- if (M > 1) kronecker(x, diag(M)) else x ncolHlist <- rep(M, ncol(x)) } else { allB <- matrix(unlist(Hlist), nrow = M) ncolHlist <- unlist(lapply(Hlist, ncol)) Rsum <- sum(ncolHlist) X1 <- rep(c(t(x)), rep(ncolHlist, nrow.X.lm)) dim(X1) <- c(Rsum, nrow.X.lm) X.vlm <- kronecker(t(X1), matrix(1, M, 1)) * kronecker(matrix(1, nrow.X.lm, 1), allB) rm(X1) } dn <- labels(x) yn <- dn[[1]] xn <- dn[[2]] dimnames(X.vlm) <- list(vlabel(yn, rep(M, nrow.X.lm), M), vlabel(xn, ncolHlist, M)) if (assign.attributes) { attr(X.vlm, "contrasts") <- attr(x, "contrasts") attr(X.vlm, "factors") <- attr(x, "factors") attr(X.vlm, "formula") <- attr(x, "formula") attr(X.vlm, "class") <- attr(x, "class") attr(X.vlm, "order") <- attr(x, "order") attr(X.vlm, "term.labels") <- attr(x, "term.labels") nasgn <- oasgn <- attr(x, "assign") lowind <- 0 for (ii in seq_along(oasgn)) { mylen <- length(oasgn[[ii]]) * ncolHlist[oasgn[[ii]][1]] nasgn[[ii]] <- (lowind+1):(lowind+mylen) lowind <- lowind + mylen } # End of ii if (lowind != ncol(X.vlm)) stop("something gone wrong") attr(X.vlm, "assign") <- nasgn fred <- unlist(lapply(nasgn, length)) / unlist(lapply(oasgn, length)) vasgn <- vector("list", sum(fred)) kk <- 0 for (ii in seq_along(oasgn)) { temp <- matrix(nasgn[[ii]], ncol = length(oasgn[[ii]])) for (jloc in 1:nrow(temp)) { kk <- kk + 1 vasgn[[kk]] <- temp[jloc, ] } } names(vasgn) <- 
vlabel(names(oasgn), fred, M) attr(X.vlm, "vassign") <- vasgn attr(X.vlm, "constraints") <- Hlist } # End of if (assign.attributes) if (!length(xij)) return(X.vlm) at.x <- attr(x, "assign") at.vlmx <- attr(X.vlm, "assign") at.Xm2 <- attr(Xm2, "assign") for (ii in seq_along(xij)) { form.xij <- xij[[ii]] if (length(form.xij) != 3) stop("xij[[", ii, "]] is not a formula with a response") tform.xij <- terms(form.xij) aterm.form <- attr(tform.xij, "term.labels") # Does not include response if (length(aterm.form) != M) stop("xij[[", ii, "]] does not contain ", M, " terms") name.term.y <- as.character(form.xij)[2] cols.X.vlm <- at.vlmx[[name.term.y]] # May be > 1 in length. x.name.term.2 <- aterm.form[1] # Choose the first one One.such.term <- at.Xm2[[x.name.term.2]] for (bbb in seq_along(One.such.term)) { use.cols.Xm2 <- NULL for (sss in 1:M) { x.name.term.2 <- aterm.form[sss] one.such.term <- at.Xm2[[x.name.term.2]] use.cols.Xm2 <- c(use.cols.Xm2, one.such.term[bbb]) } # End of sss allXk <- Xm2[, use.cols.Xm2, drop = FALSE] cmat.no <- (at.x[[name.term.y]])[1] # 1st one will do (all the same). cmat <- Hlist[[cmat.no]] Rsum.k <- ncol(cmat) tmp44 <- kronecker(matrix(1, nrow.X.lm, 1), t(cmat)) * kronecker(allXk, matrix(1, ncol(cmat), 1)) # n*Rsum.k x M tmp44 <- array(t(tmp44), c(M, Rsum.k, nrow.X.lm)) tmp44 <- aperm(tmp44, c(1, 3, 2)) # c(M, n, Rsum.k) rep.index <- cols.X.vlm[((bbb-1)*Rsum.k+1):(bbb*Rsum.k)] X.vlm[, rep.index] <- c(tmp44) } # End of bbb } # End of for (ii in seq_along(xij)) if (assign.attributes) { attr(X.vlm, "vassign") <- vasgn attr(X.vlm, "assign") <- nasgn attr(X.vlm, "xij") <- xij } X.vlm } # lm2vlm.model.matrix model.matrix.vlm <- function(object, ...) model.matrixvlm(object, ...) model.matrixvlm <- function(object, type = c("vlm", "lm", "lm2", "bothlmlm2"), linpred.index = NULL, ...) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("vlm", "lm", "lm2", "bothlmlm2"))[1] if (length(linpred.index) && type != "lm") stop("Must set 'type = \"lm\"' when 'linpred.index' is ", "assigned a value") if (length(linpred.index) && length(object@control$xij)) stop("Currently cannot handle 'xij' models when 'linpred.index' is ", "assigned a value") x <- slot(object, "x") Xm2 <- if (any(slotNames(object) == "Xm2")) slot(object, "Xm2") else numeric(0) form2 <- if (any(slotNames(object) == "misc")) object@misc$form2 else NULL if (type == "lm2" && !length(form2)) return(Xm2) if (!length(x)) { data <- model.frame(object, xlev = object@xlevels, ...) kill.con <- if (length(object@contrasts)) object@contrasts else NULL x <- vmodel.matrix.default(object, data = data, contrasts.arg = kill.con) tt <- terms(object) attr(x, "assign") <- attrassigndefault(x, tt) } if ((type == "lm2" || type == "bothlmlm2") && !length(Xm2)) { object.copy2 <- object data <- model.frame(object.copy2, xlev = object.copy2@xlevels, ...) 
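# Rebuild the second ('lm2') design matrix Xm2 from the model frame of
# the copied object, re-attaching its 'assign' attribute (via 'form2')
# so that its terms can be indexed later, e.g. by the 'xij' handling in
# lm2vlm.model.matrix().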
kill.con <- if (length(object.copy2@contrasts)) object.copy2@contrasts else NULL Xm2 <- vmodel.matrix.default(object.copy2, data = data, contrasts.arg = kill.con) if (length(form2)) { attr(Xm2, "assign") <- attrassigndefault(Xm2, terms(form2)) } } if (type == "lm" && is.null(linpred.index)) { return(x) } else if (type == "lm2") { return(Xm2) } else if (type == "bothlmlm2") { return(list(X = x, Xm2 = Xm2)) } M <- object@misc$M Hlist <- object@constraints # == constraints(object, type = "lm") X.vlm <- lm2vlm.model.matrix(x = x, Hlist = Hlist, xij = object@control$xij, Xm2 = Xm2) if (type == "vlm") { return(X.vlm) } else if (type == "lm" && length(linpred.index)) { if (!is.Numeric(linpred.index, integer.valued = TRUE, positive = TRUE, length.arg = 1)) stop("bad input for argument 'linpred.index'") if (!length(intersect(linpred.index, 1:M))) stop("argument 'linpred.index' should have ", "a single value from the set 1:", M) Hlist <- Hlist n.lm <- nobs(object) # Number of rows of the LM matrix M <- object@misc$M # Number of linear/additive predictors Hmatrices <- matrix(c(unlist(Hlist)), nrow = M) jay <- linpred.index index0 <- Hmatrices[jay, ] != 0 X.lm.jay <- X.vlm[(0:(n.lm - 1)) * M + jay, index0, drop = FALSE] X.lm.jay } else { stop("am confused. Do not know what to return") } } setMethod("model.matrix", "vlm", function(object, ...) model.matrixvlm(object, ...)) model.matrixvgam <- function(object, type = c("lm", "vlm", "lm", "lm2", "bothlmlm2"), linpred.index = NULL, ...) { model.matrixvlm(object = object, type = type[1], linpred.index = linpred.index, ...) } setMethod("model.matrix", "vgam", function(object, ...) model.matrixvgam(object, ...)) model.framevlm <- function(object, setupsmart = TRUE, wrapupsmart = TRUE, ...) { dots <- list(...) nargs <- dots[match(c("data", "na.action", "subset"), names(dots), 0)] if (length(nargs) || !length(object@model)) { fcall <- object@call fcall$method <- "model.frame" fcall[[1]] <- as.name("vlm") fcall$smart <- FALSE if (setupsmart && length(object@smart.prediction)) { setup.smart("read", smart.prediction=object@smart.prediction) } fcall[names(nargs)] <- nargs env <- environment(object@terms$terms) # @terms or @terms$terms ?? if (is.null(env)) env <- parent.frame() ans <- eval(fcall, env, parent.frame()) if (wrapupsmart && length(object@smart.prediction)) { wrapup.smart() } ans } else object@model } if (!isGeneric("model.frame")) setGeneric("model.frame", function(formula, ...) standardGeneric("model.frame")) setMethod("model.frame", "vlm", function(formula, ...) model.framevlm(object = formula, ...)) vmodel.matrix.default <- function(object, data = environment(object), contrasts.arg = NULL, xlev = NULL, ...) 
{ t <- if (missing(data)) terms(object) else terms(object, data = data) if (is.null(attr(data, "terms"))) data <- model.frame(object, data, xlev = xlev) else { reorder <- match(sapply(attr(t, "variables"), deparse, width.cutoff = 500)[-1], names(data)) if (anyNA(reorder)) stop("model frame and formula mismatch in model.matrix()") if (!identical(reorder, seq_len(ncol(data)))) data <- data[, reorder, drop = FALSE] } int <- attr(t, "response") if (length(data)) { contr.funs <- as.character(getOption("contrasts")) namD <- names(data) for (i in namD) if (is.character(data[[i]])) { data[[i]] <- factor(data[[i]]) warning(gettextf("variable '%s' converted to a factor", i), domain = NA) } isF <- sapply(data, function(x) is.factor(x) || is.logical(x)) isF[int] <- FALSE isOF <- sapply(data, is.ordered) for (nn in namD[isF]) if (is.null(attr(data[[nn]], "contrasts"))) contrasts(data[[nn]]) <- contr.funs[1 + isOF[nn]] if (!is.null(contrasts.arg) && is.list(contrasts.arg)) { if (is.null(namC <- names(contrasts.arg))) stop("invalid 'contrasts.arg' argument") for (nn in namC) { if (is.na(ni <- match(nn, namD))) warning(gettextf( "variable '%s' is absent, its contrast will be ignored", nn), domain = NA) else { ca <- contrasts.arg[[nn]] if (is.matrix(ca)) contrasts(data[[ni]], ncol(ca)) <- ca else contrasts(data[[ni]]) <- contrasts.arg[[nn]] } } } } else { isF <- FALSE data <- list(x = rep(0, nrow(data))) } ans <- (model.matrix(t, data)) cons <- if (any(isF)) lapply(data[isF], function(x) attr(x, "contrasts")) else NULL attr(ans, "contrasts") <- cons ans } depvar.vlm <- function(object, type = c("lm", "lm2"), drop = FALSE, ...) { type <- match.arg(type, c("lm", "lm2"))[1] ans <- if (type == "lm") { object@y } else { object@Ym2 } ans[, , drop = drop] } if (!isGeneric("depvar")) setGeneric("depvar", function(object, ...) standardGeneric("depvar"), package = "VGAM") setMethod("depvar", "vlm", function(object, ...) depvar.vlm(object, ...)) setMethod("depvar", "rrvglm", function(object, ...) depvar.vlm(object, ...)) setMethod("depvar", "qrrvglm", function(object, ...) depvar.vlm(object, ...)) setMethod("depvar", "rrvgam", function(object, ...) depvar.vlm(object, ...)) setMethod("depvar", "rcim", function(object, ...) depvar.vlm(object, ...)) npred.vlm <- function(object, type = c("total", "one.response"), ...) { if (!missing(type)) type <- as.character(substitute(type)) type.arg <- match.arg(type, c("total", "one.response"))[1] MM <- if (length(object@misc$M)) object@misc$M else if (NCOL(predict(object)) > 0) NCOL(predict(object)) else stop("cannot seem to obtain 'M'") if (type.arg == "one.response") { M1.infos <- NULL infos.fun <- object@family@infos Ans.infos <- infos.fun() if (is.list(Ans.infos) && length(Ans.infos$M1)) M1.infos <- Ans.infos$M1 Q1 <- Ans.infos$Q1 if (is.numeric(Q1)) { S <- ncol(depvar(object)) / Q1 # Number of (multiple) responses if (is.numeric(M1.infos) && M1.infos * S != MM) warning("contradiction in values after computing it two ways") } M1 <- if (is.numeric(M1.infos)) M1.infos else if (is.numeric(MM )) MM else stop("failed to compute 'M'") M1 } else { # One response is assumed, by default MM } } if (!isGeneric("npred")) setGeneric("npred", function(object, ...) standardGeneric("npred"), package = "VGAM") setMethod("npred", "vlm", function(object, ...) npred.vlm(object, ...)) setMethod("npred", "rrvglm", function(object, ...) npred.vlm(object, ...)) setMethod("npred", "qrrvglm", function(object, ...) npred.vlm(object, ...)) setMethod("npred", "rrvgam", function(object, ...) 
npred.vlm(object, ...)) setMethod("npred", "rcim", function(object, ...) npred.vlm(object, ...)) hatvaluesvlm <- function(model, type = c("diagonal", "matrix", "centralBlocks"), ...) { if (!missing(type)) type <- as.character(substitute(type)) type.arg <- match.arg(type, c("diagonal", "matrix", "centralBlocks"))[1] qrSlot <- model@qr if (!is.list(qrSlot) && class(qrSlot) != "qr") stop("slot 'qr' should be a list") M <- npred(model) nn <- nobs(model, type = "lm") if (is.empty.list(qrSlot)) { wzedd <- weights(model, type = "working") UU <- vchol(wzedd, M = M, n = nn, silent = TRUE) # Few rows, many cols X.vlm <- model.matrix(model, type = "vlm") UU.X.vlm <- mux111(cc = UU, xmat = X.vlm, M = M) qrSlot <- qr(UU.X.vlm) } else { X.vlm <- NULL class(qrSlot) <- "qr" # S3 class } Q.S3 <- qr.Q(qrSlot) if (type.arg == "diagonal") { Diag.Hat <- rowSums(Q.S3^2) Diag.Elts <- matrix(Diag.Hat, nn, M, byrow = TRUE) if (length(model@misc$predictors.names) == M) colnames(Diag.Elts) <- model@misc$predictors.names if (length(rownames(model.matrix(model, type = "lm")))) rownames(Diag.Elts) <- rownames(model.matrix(model, type = "lm")) attr(Diag.Elts, "predictors.names") <- model@misc$predictors.names attr(Diag.Elts, "ncol.X.vlm") <- model@misc$ncol.X.vlm Diag.Elts } else if (type.arg == "matrix") { all.mat <- Q.S3 %*% t(Q.S3) if (!length(X.vlm)) X.vlm <- model.matrix(model, type = "vlm") dimnames(all.mat) <- list(rownames(X.vlm), rownames(X.vlm)) attr(all.mat, "M") <- M attr(all.mat, "predictors.names") <- model@misc$predictors.names attr(all.mat, "ncol.X.vlm") <- model@misc$ncol.X.vlm all.mat } else { ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) MMp1d2 <- M * (M + 1) / 2 all.rows.index <- rep((0:(nn-1)) * M, rep(MMp1d2, nn)) + ind1$row.index all.cols.index <- rep((0:(nn-1)) * M, rep(MMp1d2, nn)) + ind1$col.index H.ss <- rowSums(Q.S3[all.rows.index, ] * Q.S3[all.cols.index, ]) H.ss <- matrix(H.ss, nn, MMp1d2, byrow = TRUE) H.ss } } # hatvaluesvlm setMethod("hatvalues", "vlm", function(model, ...) hatvaluesvlm(model, ...)) setMethod("hatvalues", "vglm", function(model, ...) hatvaluesvlm(model, ...)) setMethod("hatvalues", "rrvglm", function(model, ...) hatvaluesvlm(model, ...)) setMethod("hatvalues", "qrrvglm", function(model, ...) hatvaluesvlm(model, ...)) setMethod("hatvalues", "rrvgam", function(model, ...) hatvaluesvlm(model, ...)) setMethod("hatvalues", "rcim", function(model, ...) hatvaluesvlm(model, ...)) hatplot.vlm <- function(model, multiplier = c(2, 3), lty = "dashed", xlab = "Observation", ylab = "Hat values", ylim = NULL, ...) { if (is(model, "vlm")) { hatval <- hatvalues(model, diag = TRUE) } else { hatval <- model } if (!is.matrix(hatval)) stop("argument 'model' seems neither a vglm() object or a matrix") ncol.X.vlm <- attr(hatval, "ncol.X.vlm") M <- attr(hatval, "M") predictors.names <- attr(hatval, "predictors.names") if (!length(predictors.names)) { predictors.names <- paste("Linear/additive predictor", 1:M) } if (length(M)) { N <- nrow(hatval) / M hatval <- matrix(hatval, N, M, byrow = TRUE) } else { M <- ncol(hatval) N <- nrow(hatval) } if (is.null(ylim)) ylim <- c(0, max(hatval)) for (jay in 1:M) { plot(hatval[, jay], type = "n", main = predictors.names[jay], ylim = ylim, xlab = xlab, ylab = ylab, ...) points(1:N, hatval[, jay], ...) abline(h = multiplier * ncol.X.vlm / (N * M), lty = lty, ...) } } if (!isGeneric("hatplot")) setGeneric("hatplot", function(model, ...) standardGeneric("hatplot"), package = "VGAM") setMethod("hatplot", "matrix", function(model, ...) 
hatplot.vlm(model, ...)) setMethod("hatplot", "vlm", function(model, ...) hatplot.vlm(model, ...)) setMethod("hatplot", "vglm", function(model, ...) hatplot.vlm(model, ...)) setMethod("hatplot", "rrvglm", function(model, ...) hatplot.vlm(model, ...)) setMethod("hatplot", "qrrvglm", function(model, ...) hatplot.vlm(model, ...)) setMethod("hatplot", "rrvgam", function(model, ...) hatplot.vlm(model, ...)) setMethod("hatplot", "rcim", function(model, ...) hatplot.vlm(model, ...)) dfbetavlm <- function(model, maxit.new = 1, trace.new = FALSE, smallno = 1.0e-8, ...) { if (!is(model, "vlm")) stop("argument 'model' does not seem to be a vglm() object") n.lm <- nobs(model, type = "lm") X.lm <- model.matrix(model, type = "lm") X.vlm <- model.matrix(model, type = "vlm") p.vlm <- ncol(X.vlm) # nvar(model, type = "vlm") M <- npred(model) etastart <- predict(model) offset <- matrix(model@offset, n.lm, M) new.control <- model@control pweights <- weights(model, type = "prior") orig.w <- if (is.numeric(model@extra$orig.w)) model@extra$orig.w else 1 y.integer <- if (is.logical(model@extra$y.integer)) model@extra$y.integer else FALSE coef.model <- coef(model) new.control$trace <- trace.new new.control$maxit <- maxit.new dfbeta <- matrix(0, n.lm, p.vlm) Terms.zz <- NULL for (ii in 1:n.lm) { if (trace.new) { cat("\n", "Observation ", ii, "\n") flush.console() } w.orig <- if (length(orig.w) != n.lm) rep_len(orig.w, n.lm) else orig.w w.orig[ii] <- w.orig[ii] * smallno # Relative fit <- vglm.fit(x = X.lm, X.vlm.arg = X.vlm, # Should be more efficient y = if (y.integer) round(depvar(model) * c(pweights) / c(orig.w)) else (depvar(model) * c(pweights) / c(orig.w)), w = w.orig, # Set to zero so that it is 'deleted'. Xm2 = NULL, Ym2 = NULL, etastart = etastart, # coefstart = NULL, offset = offset, family = model@family, control = new.control, criterion = new.control$criterion, # "coefficients", qr.arg = FALSE, constraints = constraints(model, type = "term"), extra = model@extra, Terms = Terms.zz, function.name = "vglm") dfbeta[ii, ] <- coef.model - fit$coeff } dimnames(dfbeta) <- list(rownames(X.lm), names(coef.model)) dfbeta } setMethod("dfbeta", "matrix", function(model, ...) dfbetavlm(model, ...)) setMethod("dfbeta", "vlm", function(model, ...) dfbetavlm(model, ...)) setMethod("dfbeta", "vglm", function(model, ...) dfbetavlm(model, ...)) setMethod("dfbeta", "rrvglm", function(model, ...) dfbetavlm(model, ...)) setMethod("dfbeta", "qrrvglm", function(model, ...) dfbetavlm(model, ...)) setMethod("dfbeta", "rrvgam", function(model, ...) dfbetavlm(model, ...)) setMethod("dfbeta", "rcim", function(model, ...) dfbetavlm(model, ...)) hatvaluesbasic <- function(X.vlm, diagWm, M = 1) { if (M > 1) stop("currently argument 'M' must be 1") nn <- nrow(X.vlm) ncol.X.vlm <- ncol(X.vlm) XtW <- t(c(diagWm) * X.vlm) UU <- sqrt(diagWm) # Only for M == 1 UU.X.vlm <- c(UU) * X.vlm # c(UU) okay for M==1 qrSlot <- qr(UU.X.vlm) Rmat <- qr.R(qrSlot) rinv <- diag(ncol.X.vlm) rinv <- backsolve(Rmat, rinv) Diag.Hat <- if (FALSE) { covun <- rinv %*% t(rinv) rhs.mat <- covun %*% XtW colSums(t(X.vlm) * rhs.mat) } else { mymat <- X.vlm %*% rinv rowSums(diagWm * mymat^2) } Diag.Hat } model.matrixpvgam <- function(object, type = c("vlm", "lm", "lm2", "bothlmlm2", "augmentedvlm", "penalty"), # This line is new linpred.index = NULL, ...) 
{ type <- match.arg(type, c("vlm", "lm", "lm2", "bothlmlm2", "augmentedvlm", "penalty"))[1] if (type == "augmentedvlm" || type == "penalty") { rbind(if (type == "penalty") NULL else model.matrixvlm(object, type = "vlm", linpred.index = linpred.index, ...), get.X.VLM.aug(constraints = constraints(object, type = "term"), sm.osps.list = object@ospsslot$sm.osps.list)) } else { model.matrixvlm(object, type = type, linpred.index = linpred.index, ...) } } setMethod("model.matrix", "pvgam", function(object, ...) model.matrixpvgam(object, ...)) VGAM/R/fittedvlm.R0000644000176200001440000000463213135276757013353 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. fittedvlm <- function(object, drop = FALSE, type.fitted = NULL, percentiles = NULL, ...) { if (is.null(type.fitted) && is.null(percentiles)) { answer <- if (drop) { if (!is.matrix(object@fitted.values) || !length(object@fitted.values)) stop("object@fitted.values is not a matrix or is empty") if (ncol(object@fitted.values) == 1) { c(object@fitted.values) } else { warning("ncol(object@fitted.values) is not 1") c(object@fitted.values) } } else { object@fitted.values } } else { linkinv <- object@family@linkinv new.extra <- object@extra if (length(percentiles)) { new.extra$percentiles <- percentiles } if (length(type.fitted)) { new.extra$type.fitted <- type.fitted } answer <- linkinv(eta = predict(object), extra = new.extra) linkinv <- object@family@linkinv answer <- if (drop) { c(answer) } else { as.matrix(answer) } } if (length(answer) && length(object@na.action)) { napredict(object@na.action[[1]], answer) } else { answer } } setMethod("fitted.values", "vlm", function(object, ...) fittedvlm(object, ...)) setMethod("fitted", "vlm", function(object, ...) fittedvlm(object, ...)) setMethod("fitted.values", "vglm", function(object, ...) fittedvlm(object, ...)) setMethod("fitted", "vglm", function(object, ...) fittedvlm(object, ...)) predictors.vglm <- function(object, matrix = TRUE, ...) { answer <- if (matrix) { object@predictors } else { if (!is.matrix(object@predictors) || !length(object@predictors)) stop("object@predictors is not a matrix or is empty") if (ncol(object@predictors) == 1) { c(object@predictors) } else { warning("ncol(object@predictors) is not 1") c(object@predictors) } } if (length(answer) && length(object@na.action)) { napredict(object@na.action[[1]], answer) } else { answer } } if (!isGeneric("predictors")) setGeneric("predictors", function(object, ...) standardGeneric("predictors")) setMethod("predictors", "vglm", function(object, ...) predictors.vglm(object, ...)) VGAM/R/family.robust.R0000644000176200001440000003431213135276757014151 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
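# The d/p/q/r functions below implement Huber's least favorable
# distribution: standard normal in the middle (|z| <= k) with
# exponential tails proportional to exp(k^2/2 - k*|z|) for |z| > k,
# where z = (x - mu)/sigma.  The quantity 'eps' computed below is the
# contamination proportion implied by the tuning constant k.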
edhuber <- function(x, k = 0.862, mu = 0, sigma = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) zedd <- (x - mu) / sigma fk <- dnorm(k) eps <- 1 - 1 / (pnorm(k) - pnorm(-k) + 2 * fk / k) ceps <- 1 / (pnorm(k) - pnorm(-k) + 2 * fk / k) if (log.arg) { val <- log(ceps) + dnorm(zedd, log = TRUE) val[zedd < (-k)] <- (log(ceps) + log(fk) + ( k * (zedd+k)))[zedd < (-k)] val[zedd > (+k)] <- (log(ceps) + log(fk) + (-k * (zedd-k)))[zedd > (+k)] } else { val <- (ceps) * dnorm(zedd) val[zedd < (-k)] <- ((ceps) * fk * exp( k * (zedd + k)))[zedd < (-k)] val[zedd > (+k)] <- ((ceps) * fk * exp(-k * (zedd - k)))[zedd > (+k)] } list(val = if (log.arg) val - log(sigma) else val / sigma, eps = eps) } dhuber <- function(x, k = 0.862, mu = 0, sigma = 1, log = FALSE) edhuber(x, k, mu, sigma, log = log)$val rhuber <- function(n, k = 0.862, mu = 0, sigma = 1) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n myl <- rep_len(0.0, use.n) lowlim <- 1 upplim <- 0 chunksize <- 2 * use.n while (lowlim <= use.n) { x <- rexp(chunksize) s <- sample(c(-1, 1), size = chunksize, replace = TRUE) y <- s*x/k u <- runif(chunksize) yok <- (abs(y) >= k | u <= exp(k * abs(y) - (k * k + y * y) / 2)) sumyok <- sum(yok) if (sumyok > 0) { upplim <- upplim + sumyok if (upplim > use.n) myl <- rep_len(myl, upplim) myl[lowlim:upplim] <- y[yok] lowlim <- lowlim + sumyok } } myl <- rep_len(myl, use.n) # Prune to right length rep_len(mu + sigma * myl, use.n) } qhuber <- function (p, k = 0.862, mu = 0, sigma = 1, lower.tail = TRUE, log.p = FALSE ) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") cnorm <- sqrt(2 * pi) * ((2 * pnorm(k) - 1) + 2 * dnorm(k) / k) if (lower.tail) { if (log.p) { ln.p <- p x <- pmin(exp(ln.p), -expm1(ln.p)) } else { x <- pmin(p, 1 - p) } } else { if (log.p) { ln.p <- p x <- pmin(-expm1(ln.p), exp(ln.p)) } else { x <- pmin(1 - p, p) } } q <- ifelse(x <= sqrt(2 * pi) * dnorm(k) / ( k * cnorm), log(k * cnorm * x) / k - k / 2, qnorm(abs(1 - pnorm(k) + x * cnorm / sqrt(2 * pi) - dnorm(k) / k))) ans <- if (lower.tail) { if (log.p) { ifelse(exp(ln.p) < 0.5, mu + q * sigma, mu - q * sigma) } else { ifelse(p < 0.5, mu + q * sigma, mu - q * sigma) } } else { if (log.p) { ifelse(exp(ln.p) > 0.5, mu + q * sigma, mu - q * sigma) } else { ifelse(p > 0.5, mu + q * sigma, mu - q * sigma) } } ans[k <= 0 | sigma <= 0] <- NaN ans } phuber <- function(q, k = 0.862, mu = 0, sigma = 1, lower.tail = TRUE, log.p = FALSE ) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") A1 <- (2 * dnorm(k) / k - 2 * pnorm(-k)) eps <- A1 / (1 + A1) zedd <- (q - mu) / sigma x <- -abs(zedd) p <- ifelse(x <= -k , exp(k^2 / 2) / k * exp(k * x) / sqrt(2 * pi), dnorm(k) / k + pnorm(x) - pnorm(-k)) if (lower.tail) { if (log.p) { ans <- ifelse(zedd <= 0, log(p) + log1p(-eps), log1p(exp(log(p) + log1p(-eps)))) } else { ans <- ifelse(zedd <= 0, exp(log(p) + log1p(-eps)), -expm1(log(p) + log1p(-eps))) } } else { if (log.p) { ans <- ifelse(zedd <= 0, log1p(exp(log(p) + log1p(-eps))), log(p) + log1p(-eps)) } else { ans <- ifelse(zedd <= 0, -expm1(log(p) + log1p(-eps)), exp(log(p) + log1p(-eps))) } } ans } 
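# Quick self-consistency checks for the functions above (a sketch, not
# run): qhuber() should invert phuber(), dhuber() should integrate to
# 1, and rhuber() draws should be centred at 'mu' by symmetry.
if (FALSE) {
  pp <- seq(0.05, 0.95, by = 0.10)
  max(abs(phuber(qhuber(pp, k = 0.862), k = 0.862) - pp))  # ~ 0
  integrate(dhuber, lower = -Inf, upper = Inf, k = 0.862)  # ~ 1
  set.seed(1)
  mean(rhuber(1000, k = 0.862, mu = 2))  # ~ 2
}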
huber2 <- function(llocation = "identitylink", lscale = "loge", k = 0.862, imethod = 1, zero = "scale") { A1 <- (2 * dnorm(k) / k - 2 * pnorm(-k)) eps <- A1 / (1 + A1) if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1 or 2 or 3 or 4") if (!is.Numeric(k, length.arg = 1, positive = TRUE)) stop("bad input for argument 'k'") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") new("vglmff", blurb = c("Huber least favorable distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), "\n\n", "Mean: location"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("location", "scale"), llocation = .llocat , lscale = .lscale , zero = .zero ) }, list( .zero = zero, .llocat = llocat, .lscale = lscale ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("location", .llocat , earg = .elocat, tag = FALSE), namesof("scale", .lscale , earg = .escale, tag = FALSE)) if (!length(etastart)) { junk <- lm.wfit(x = x, y = c(y), w = c(w)) scale.y.est <- sqrt( sum(c(w) * junk$resid^2) / junk$df.residual ) location.init <- if ( .llocat == "loge") pmax(1/1024, y) else { if ( .imethod == 3) { rep_len(weighted.mean(y, w), n) } else if ( .imethod == 2) { rep_len(median(rep(y, w)), n) } else if ( .imethod == 1) { junk$fitted } else { y } } etastart <- cbind( theta2eta(location.init, .llocat , earg = .elocat ), theta2eta(scale.y.est, .lscale , earg = .escale )) } }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .llocat , earg = .elocat ) }, list( .llocat = llocat, .elocat = elocat, .escale = escale ))), last = eval(substitute(expression({ misc$link <- c("location" = .llocat , "scale" = .lscale ) misc$earg <- list("location" = .elocat , "scale" = .escale ) misc$expected <- TRUE misc$k.huber <- .k misc$imethod <- .imethod misc$multipleResponses <- FALSE }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .k = k, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { location <- eta2theta(eta[, 1], .llocat , earg = .elocat ) myscale <- eta2theta(eta[, 2], .lscale , earg = .escale ) kay <- .k if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dhuber(y, k = kay, mu = location, sigma = myscale, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .k = k ))), vfamily = c("huber2"), validparams = eval(substitute(function(eta, y, extra = NULL) { mylocat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) myscale <- eta2theta(eta[, 2], .lscale , earg = .escale ) okay1 <- all(is.finite(mylocat)) && all(is.finite(myscale)) && all(0 < myscale) okay1 }, list( .llocat = llocat, .lscale = lscale, .elocat = 
elocat, .escale = escale, .k = k ))), deriv = eval(substitute(expression({ mylocat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) myscale <- eta2theta(eta[, 2], .lscale , earg = .escale ) myk <- .k zedd <- (y - mylocat) / myscale cond2 <- (abs(zedd) <= myk) cond3 <- (zedd > myk) dl.dlocat <- -myk + 0 * zedd # cond1 dl.dlocat[cond2] <- zedd[cond2] dl.dlocat[cond3] <- myk # myk is a scalar dl.dlocat <- dl.dlocat / myscale dl.dscale <- (-myk * zedd) dl.dscale[cond2] <- (zedd^2)[cond2] dl.dscale[cond3] <- ( myk * zedd)[cond3] dl.dscale <- (-1 + dl.dscale) / myscale dlocat.deta <- dtheta.deta(mylocat, .llocat , earg = .elocat ) dscale.deta <- dtheta.deta(myscale, .lscale , earg = .escale ) ans <- c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta) ans }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .eps = eps, .k = k ))), weight = eval(substitute(expression({ wz <- matrix(NA_real_, n, 2) # diag matrix; y is one-col too temp4 <- erf(myk / sqrt(2)) ned2l.dlocat2 <- temp4 * (1 - .eps) / myscale^2 ned2l.dscale2 <- (dnorm(myk) * (1 - myk^2) + temp4) * 2 * (1 - .eps) / (myk * myscale^2) wz[, iam(1, 1, M)] <- ned2l.dlocat2 * dlocat.deta^2 wz[, iam(2, 2, M)] <- ned2l.dscale2 * dscale.deta^2 ans c(w) * wz }), list( .eps = eps )))) } huber1 <- function(llocation = "identitylink", k = 0.862, imethod = 1) { A1 <- (2 * dnorm(k) / k - 2 * pnorm(-k)) eps <- A1 / (1 + A1) if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1 or 2 or 3 or 4") if (!is.Numeric(k, length.arg = 1, positive = TRUE)) stop("bad input for argument 'k'") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") new("vglmff", blurb = c("Huber least favorable distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat), "\n\n", "Mean: location"), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("location", .llocat , earg = .elocat, tag = FALSE)) if (!length(etastart)) { junk <- lm.wfit(x = x, y = c(y), w = c(w)) location.init <- if ( .llocat == "loge") pmax(1/1024, y) else { if ( .imethod == 3) { rep_len(weighted.mean(y, w), n) } else if ( .imethod == 2) { rep_len(median(rep(y, w)), n) } else if ( .imethod == 1) { junk$fitted } else { y } } etastart <- cbind( theta2eta(location.init, .llocat , earg = .elocat )) } }), list( .llocat = llocat, .elocat = elocat, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta, .llocat , earg = .elocat ) }, list( .llocat = llocat, .elocat = elocat ))), last = eval(substitute(expression({ misc$link <- c("location" = .llocat ) misc$earg <- list("location" = .elocat ) misc$expected <- TRUE misc$k.huber <- .k misc$imethod <- .imethod misc$multipleResponses <- FALSE }), list( .llocat = llocat, .elocat = elocat, .k = k, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { location <- eta2theta(eta, .llocat , earg = .elocat ) kay <- .k if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dhuber(y, k = kay, mu = location, sigma = 1, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llocat = llocat, .elocat = elocat, .k = k ))), vfamily = c("huber1"), validparams = eval(substitute(function(eta, y, extra = NULL) { mylocat <- 
eta2theta(eta, .llocat , earg = .elocat ) okay1 <- all(is.finite(mylocat)) okay1 }, list( .llocat = llocat, .elocat = elocat, .k = k ))), deriv = eval(substitute(expression({ mylocat <- eta2theta(eta, .llocat , earg = .elocat ) myk <- .k zedd <- (y - mylocat) # / myscale cond2 <- (abs(zedd) <= myk) cond3 <- (zedd > myk) dl.dlocat <- -myk + 0 * zedd # cond1 dl.dlocat[cond2] <- zedd[cond2] dl.dlocat[cond3] <- myk # myk is a scalar dl.dlocat <- dl.dlocat # / myscale if (FALSE) { dl.dscale <- (-myk * zedd) dl.dscale[cond2] <- (zedd^2)[cond2] dl.dscale[cond3] <- ( myk * zedd)[cond3] dl.dscale <- (-1 + dl.dscale) / myscale } dlocat.deta <- dtheta.deta(mylocat, .llocat , earg = .elocat ) ans <- c(w) * cbind(dl.dlocat * dlocat.deta) ans }), list( .llocat = llocat, .elocat = elocat, .eps = eps, .k = k ))), weight = eval(substitute(expression({ wz <- matrix(NA_real_, n, 1) # diag matrix; y is one-col too temp4 <- erf(myk / sqrt(2)) ned2l.dlocat2 <- temp4 * (1 - .eps) # / myscale^2 wz[, iam(1,1,M)] <- ned2l.dlocat2 * dlocat.deta^2 ans c(w) * wz }), list( .eps = eps )))) } VGAM/R/family.ts.R0000644000176200001440000011024013135276757013254 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. rrar.Ci <- function(i, coeffs, aa, Ranks., MM) { index <- cumsum(c(aa, MM*Ranks.)) ans <- matrix(coeffs[(index[i]+1):index[i+1]], Ranks.[i], MM, byrow = TRUE) t(ans) } rrar.Ak1 <- function(MM, coeffs, Ranks., aa) { ptr <- 0 Ak1 <- diag(MM) for (jay in 1:MM) { for (i in 1:MM) { if (i > jay && (MM+1)-(Ranks.[jay]-1) <= i) { ptr <- ptr + 1 Ak1[i,jay] <- coeffs[ptr] } } } if (aa > 0 && ptr != aa) stop("something wrong") Ak1 } rrar.Di <- function(i, Ranks.) { if (Ranks.[1] == Ranks.[i]) diag(Ranks.[i]) else rbind(diag(Ranks.[i]), matrix(0, Ranks.[1] - Ranks.[i], Ranks.[i])) } rrar.Mi <- function(i, MM, Ranks., ki) { if (Ranks.[ki[i]] == MM) return(NULL) hi <- Ranks.[ki[i]] - Ranks.[ki[i+1]] Ji <- matrix(0, hi, Ranks.[1]) for (j in 1:hi) { Ji[j,j+Ranks.[ki[i+1]]] <- 1 } Mi <- matrix(0, MM-Ranks.[ki[i]], MM) # dim(Oi) == dim(Ji) for (j in 1:(MM-Ranks.[ki[i]])) { Mi[j,j+Ranks.[ki[i ]]] <- 1 } kronecker(Mi, Ji) } rrar.Mmat <- function(MM, uu, Ranks., ki) { Mmat <- NULL for (ii in uu:1) { Mmat <- rbind(Mmat, rrar.Mi(ii, MM, Ranks., ki)) } Mmat } block.diag <- function(A, B) { if (is.null(A) && is.null(B)) return(NULL) if (!is.null(A) && is.null(B)) return(A) if (is.null(A) && !is.null(B)) return(B) A <- as.matrix(A) B <- as.matrix(B) temp <- cbind(A, matrix(0, nrow(A), ncol(B))) rbind(temp, cbind(matrix(0, nrow(B), ncol(A)), B)) } rrar.Ht <- function(plag, MM, Ranks., coeffs, aa, uu, ki) { Htop <- Hbot <- NULL Mmat <- rrar.Mmat(MM, uu, Ranks., ki) # NULL if full rank Ak1 <- rrar.Ak1(MM, coeffs, Ranks., aa) if (!is.null(Mmat)) for (i in 1:plag) { Di <- rrar.Di(i, Ranks.) Ci <- rrar.Ci(i, coeffs, aa, Ranks., MM) temp <- Di %*% t(Ci) Htop <- cbind(Htop, Mmat %*% kronecker(diag(MM), temp)) } for (i in 1:plag) { Di <- rrar.Di(i, Ranks.) 
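
    # Stack the lag-i block kronecker(t(Di) %*% t(Ak1), diag(MM)) down
    # the diagonal of Hbot; block.diag() grows the matrix one block at
    # a time.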
temp <- kronecker(t(Di) %*% t(Ak1), diag(MM)) Hbot <- block.diag(Hbot, temp) } rbind(Htop, Hbot) } rrar.Ut <- function(y, tt, plag, MM) { Ut <- NULL if (plag>1) for (i in 1:plag) { Ut <- rbind(Ut, kronecker(diag(MM), cbind(y[tt-i,]))) } Ut } rrar.UU <- function(y, plag, MM, n) { UU <- NULL for (i in (plag+1):n) { UU <- rbind(UU, t(rrar.Ut(y, i, plag, MM))) } UU } rrar.Wmat <- function(y, Ranks., MM, ki, plag, aa, uu, n, coeffs) { temp1 <- rrar.UU(y, plag, MM, n) temp2 <- t(rrar.Ht(plag, MM, Ranks., coeffs, aa, uu, ki)) list(UU = temp1, Ht = temp2) } rrar.control <- function(stepsize = 0.5, save.weights = TRUE, ...) { if (stepsize <= 0 || stepsize > 1) { warning("bad value of stepsize; using 0.5 instead") stepsize <- 0.5 } list(stepsize = stepsize, save.weights = as.logical(save.weights)[1]) } rrar <- function(Ranks = 1, coefstart = NULL) { lag.p <- length(Ranks) new("vglmff", blurb = c("Nested reduced-rank vector autoregressive model AR(", lag.p, ")\n\n", "Link: ", namesof("mu_t", "identitylink"), ", t = ", paste(paste(1:lag.p, coll = ",", sep = ""))), initialize = eval(substitute(expression({ Ranks. <- .Ranks plag <- length(Ranks.) nn <- nrow(x) # original n indices <- 1:plag copy.X.vlm <- TRUE # X.vlm.save matrix changes at each iteration dsrank <- -sort(-Ranks.) # ==rev(sort(Ranks.)) if (any(dsrank != Ranks.)) stop("Ranks must be a non-increasing sequence") if (!is.matrix(y) || ncol(y) == 1) { stop("response must be a matrix with more than one column") } else { MM <- ncol(y) ki <- udsrank <- unique(dsrank) uu <- length(udsrank) for (i in 1:uu) ki[i] <- max((1:plag)[dsrank == udsrank[i]]) ki <- c(ki, plag+1) # For computing a Ranks. <- c(Ranks., 0) # For computing a aa <- sum( (MM-Ranks.[ki[1:uu]]) * (Ranks.[ki[1:uu]]-Ranks.[ki[-1]]) ) } if (!intercept.only) warning("ignoring explanatory variables") if (any(MM < Ranks.)) stop("'max(Ranks)' can only be ", MM, " or less") y.save <- y # Save the original if (any(w != 1)) stop("all weights should be 1") new.coeffs <- .coefstart # Needed for iter = 1 of $weight new.coeffs <- if (length(new.coeffs)) rep_len(new.coeffs, aa+sum(Ranks.)*MM) else runif(aa+sum(Ranks.)*MM) temp8 <- rrar.Wmat(y.save, Ranks., MM, ki, plag, aa, uu, nn, new.coeffs) X.vlm.save <- temp8$UU %*% temp8$Ht if (!length(etastart)) { etastart <- X.vlm.save %*% new.coeffs etastart <- matrix(etastart, ncol = ncol(y), byrow = TRUE) } extra$Ranks. <- Ranks.; extra$aa <- aa extra$plag <- plag; extra$nn <- nn extra$MM <- MM; extra$coeffs <- new.coeffs; extra$y.save <- y.save keep.assign <- attr(x, "assign") x <- x[-indices, , drop = FALSE] if (is.R()) attr(x, "assign") <- keep.assign y <- y[-indices, , drop = FALSE] w <- w[-indices] n.save <- n <- nn - plag }), list( .Ranks = Ranks, .coefstart = coefstart ))), linkinv = function(eta, extra = NULL) { aa <- extra$aa coeffs <- extra$coeffs MM <- extra$MM nn <- extra$nn plag <- extra$plag Ranks. <- extra$Ranks. y.save <- extra$y.save tt <- (1+plag):nn mu <- matrix(0, nn-plag, MM) Ak1 <- rrar.Ak1(MM, coeffs, Ranks., aa) for (i in 1:plag) { Di <- rrar.Di(i, Ranks.) Ci <- rrar.Ci(i, coeffs, aa, Ranks., MM) mu <- mu + y.save[tt-i, , drop = FALSE] %*% t(Ak1 %*% Di %*% t(Ci)) } mu }, last = expression({ misc$plag <- plag misc$Ranks <- Ranks. 
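    # Save the estimated structural matrices; the AR coefficient
    # matrices are reconstructed below as Phi_i = Ak1 %*% D_i %*% t(C_i),
    # and Z = y %*% t(solve(Ak1)).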
misc$Ak1 <- Ak1 misc$omegahat <- omegahat misc$Cmatrices <- Cmatrices misc$Dmatrices <- Dmatrices misc$Hmatrix <- temp8$Ht misc$Phimatrices <- vector("list", plag) for (ii in 1:plag) { misc$Phimatrices[[ii]] <- Ak1 %*% Dmatrices[[ii]] %*% t(Cmatrices[[ii]]) } misc$Z <- y.save %*% t(solve(Ak1)) }), vfamily = "rrar", validparams = function(eta, y, extra = NULL) { okay1 <- TRUE okay1 }, deriv = expression({ temp8 <- rrar.Wmat(y.save, Ranks., MM, ki, plag, aa, uu, nn, new.coeffs) X.vlm.save <- temp8$UU %*% temp8$Ht extra$coeffs <- new.coeffs resmat <- y tt <- (1+plag):nn Ak1 <- rrar.Ak1(MM, new.coeffs, Ranks., aa) Cmatrices <- Dmatrices <- vector("list", plag) for (ii in 1:plag) { Dmatrices[[ii]] <- Di <- rrar.Di(ii, Ranks.) Cmatrices[[ii]] <- Ci <- rrar.Ci(ii, new.coeffs, aa, Ranks., MM) resmat <- resmat - y.save[tt - ii, , drop = FALSE] %*% t(Ak1 %*% Di %*% t(Ci)) } omegahat <- (t(resmat) %*% resmat) / n # MM x MM omegainv <- solve(omegahat) omegainv <- solve(omegahat) ind1 <- iam(NA, NA, MM, both = TRUE) wz <- matrix(omegainv[cbind(ind1$row, ind1$col)], nn-plag, length(ind1$row), byrow = TRUE) mux22(t(wz), y-mu, M = extra$MM, as.matrix = TRUE) }), weight = expression({ wz })) } vglm.garma.control <- function(save.weights = TRUE, ...) { list(save.weights = as.logical(save.weights)[1]) } garma <- function(link = "identitylink", p.ar.lag = 1, q.ma.lag = 0, coefstart = NULL, step = 1.0) { if (!is.Numeric(p.ar.lag, integer.valued = TRUE, length.arg = 1)) stop("bad input for argument 'p.ar.lag'") if (!is.Numeric(q.ma.lag, integer.valued = TRUE, length.arg = 1)) stop("bad input for argument 'q.ma.lag'") if (q.ma.lag != 0) stop("sorry, only q.ma.lag = 0 is currently implemented") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("GARMA(", p.ar.lag, ",", q.ma.lag, ")\n\n", "Link: ", namesof("mu_t", link, earg = earg), ", t = ", paste(paste(1:p.ar.lag, coll = ",", sep = ""))), initialize = eval(substitute(expression({ plag <- .p.ar.lag predictors.names <- namesof("mu", .link , earg = .earg , tag = FALSE) indices <- 1:plag tt.index <- (1 + plag):nrow(x) p.lm <- ncol(x) copy.X.vlm <- TRUE # x matrix changes at each iteration if ( .link == "logit" || .link == "probit" || .link == "cloglog" || .link == "cauchit") { delete.zero.colns <- TRUE eval(process.categorical.data.VGAM) mustart <- mustart[tt.index, 2] y <- y[, 2] } else { } x.save <- x # Save the original y.save <- y # Save the original w.save <- w # Save the original new.coeffs <- .coefstart # Needed for iter = 1 of @weight new.coeffs <- if (length(new.coeffs)) rep_len(new.coeffs, p.lm + plag) else c(rnorm(p.lm, sd = 0.1), rep_len(0, plag)) if (!length(etastart)) { etastart <- x[-indices, , drop = FALSE] %*% new.coeffs[1:p.lm] } x <- cbind(x, matrix(NA_real_, n, plag)) # Right size now dx <- dimnames(x.save) morenames <- paste("(lag", 1:plag, ")", sep = "") dimnames(x) <- list(dx[[1]], c(dx[[2]], morenames)) x <- x[-indices, , drop = FALSE] class(x) <- "matrix" y <- y[-indices] w <- w[-indices] n.save <- n <- n - plag more <- vector("list", plag) names(more) <- morenames for (ii in 1:plag) more[[ii]] <- ii + max(unlist(attr(x.save, "assign"))) attr(x, "assign") <- c(attr(x.save, "assign"), more) }), list( .link = link, .p.ar.lag = p.ar.lag, .coefstart = coefstart, .earg = earg ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta, link = .link , earg = .earg) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(mu = .link 
) misc$earg <- list(mu = .earg ) misc$plag <- plag }), list( .link = link, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (residuals) { switch( .link , identitylink = y - mu, loge = w * (y / mu - 1), reciprocal = w * (y / mu - 1), inverse = w * (y / mu - 1), w * (y / mu - (1-y) / (1 - mu))) } else { ll.elts <- switch( .link , identitylink = c(w) * (y - mu)^2, loge = c(w) * (-mu + y * log(mu)), reciprocal = c(w) * (-mu + y * log(mu)), inverse = c(w) * (-mu + y * log(mu)), c(w) * (y * log(mu) + (1-y) * log1p(-mu))) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg ))), middle2 = eval(substitute(expression({ realfv <- fv for (ii in 1:plag) { realfv <- realfv + old.coeffs[ii + p.lm] * (x.save[tt.index-ii, 1:p.lm, drop = FALSE] %*% new.coeffs[1:p.lm]) # + } true.eta <- realfv + offset mu <- family@linkinv(true.eta, extra) # overwrite mu with correct one }), list( .link = link, .earg = earg ))), vfamily = c("garma", "vglmgam"), validparams = eval(substitute(function(eta, y, extra = NULL) { mu <- eta2theta(eta, link = .link , earg = .earg ) okay1 <- all(is.finite(mu)) okay1 }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ dl.dmu <- switch( .link , identitylink = y-mu, loge = (y - mu) / mu, reciprocal = (y - mu) / mu, inverse = (y - mu) / mu, (y - mu) / (mu * (1 - mu))) dmu.deta <- dtheta.deta(mu, .link , earg = .earg) Step <- .step # This is another method of adjusting step lengths Step * c(w) * dl.dmu * dmu.deta }), list( .link = link, .step = step, .earg = earg ))), weight = eval(substitute(expression({ x[, 1:p.lm] <- x.save[tt.index, 1:p.lm] # Reinstate for (ii in 1:plag) { temp <- theta2eta(y.save[tt.index-ii], .link , earg = .earg ) x[, 1:p.lm] <- x[, 1:p.lm] - x.save[tt.index-ii, 1:p.lm] * new.coeffs[ii + p.lm] x[, p.lm+ii] <- temp - x.save[tt.index-ii, 1:p.lm, drop = FALSE] %*% new.coeffs[1:p.lm] } class(x) <- "matrix" # Added 20020227; 20040226 if (iter == 1) old.coeffs <- new.coeffs X.vlm.save <- lm2vlm.model.matrix(x, Hlist, xij = control$xij) vary <- switch( .link , identitylink = 1, loge = mu, reciprocal = mu^2, inverse = mu^2, mu * (1 - mu)) c(w) * dtheta.deta(mu, link = .link , earg = .earg )^2 / vary }), list( .link = link, .earg = earg )))) } if (FALSE) { setClass(Class = "Coef.rrar", representation( "plag" = "integer", "Ranks" = "integer", "omega" = "integer", "C" = "matrix", "D" = "matrix", "H" = "matrix", "Z" = "matrix", "Phi" = "list", # list of matrices "Ak1" = "matrix")) Coef.rrar <- function(object, ...) { result = new(Class = "Coef.rrar", "plag" = object@misc$plag, "Ranks" = object@misc$Ranks, "omega" = object@misc$omega, "C" = object@misc$C, "D" = object@misc$D, "H" = object@misc$H, "Z" = object@misc$Z, "Phi" = object@misc$Phi, "Ak1" = object@misc$Ak1) } show.Coef.rrar <- function(object) { cat(object@plag) } setMethod("Coef", "rrar", function(object, ...) 
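           # Note: as written, this method body calls the generic Coef()
           # on the same object (infinite recursion); Coef.rrar(object, ...)
           # was presumably intended.  The whole block is disabled by the
           # enclosing if (FALSE).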
Coef(object, ...)) setMethod("show", "Coef.rrar", function(object) show.Coef.rrar(object)) } dAR1 <- function(x, drift = 0, # Stationarity is the default var.error = 1, ARcoef1 = 0.0, type.likelihood = c("exact", "conditional"), log = FALSE) { type.likelihood <- match.arg(type.likelihood, c("exact", "conditional"))[1] is.vector.x <- is.vector(x) x <- as.matrix(x) drift <- as.matrix(drift) var.error <- as.matrix(var.error) ARcoef1 <- as.matrix(ARcoef1) LLL <- max(nrow(x), nrow(drift), nrow(var.error), nrow(ARcoef1)) UUU <- max(ncol(x), ncol(drift), ncol(var.error), ncol(ARcoef1)) x <- matrix(x, LLL, UUU) drift <- matrix(drift, LLL, UUU) var.error <- matrix(var.error, LLL, UUU) rho <- matrix(ARcoef1, LLL, UUU) if (any(abs(rho) > 1)) warning("Values of argument 'ARcoef1' are greater ", "than 1 in absolute value") if (!is.logical(log.arg <- log) || length(log) != 1) stop("Bad input for argument 'log'") rm(log) ans <- matrix(0.0, LLL, UUU) var.noise <- var.error / (1 - rho^2) ans[ 1, ] <- dnorm(x = x[1, ], mean = drift[ 1, ] / (1 - rho[1, ]), sd = sqrt(var.noise[1, ]), log = log.arg) ans[-1, ] <- dnorm(x = x[-1, ], mean = drift[-1, ] + rho[-1, ] * x[-nrow(x), ], sd = sqrt(var.error[-1, ]), log = log.arg) if (type.likelihood == "conditional") ans[1, ] <- NA if (is.vector.x) as.vector(ans) else ans } if (FALSE) AR1.control <- function(epsilon = 1e-6, maxit = 30, stepsize = 1,...){ list(epsilon = epsilon, maxit = maxit, stepsize = stepsize, ...) } AR1 <- function(ldrift = "identitylink", lsd = "loge", lvar = "loge", lrho = "rhobit", idrift = NULL, isd = NULL, ivar = NULL, irho = NULL, imethod = 1, ishrinkage = 0.95, # 0.90; unity means a constant type.likelihood = c("exact", "conditional"), type.EIM = c("exact", "approximate"), var.arg = FALSE, # TRUE, nodrift = FALSE, # TRUE, print.EIM = FALSE, zero = c(if (var.arg) "var" else "sd", "rho") # "ARcoeff1" ) { type.likelihood <- match.arg(type.likelihood, c("exact", "conditional"))[1] if (length(isd) && !is.Numeric(isd, positive = TRUE)) stop("Bad input for argument 'isd'") if (length(ivar) && !is.Numeric(ivar, positive = TRUE)) stop("Bad input for argument 'ivar'") if (length(irho) && (!is.Numeric(irho) || any(abs(irho) > 1.0))) stop("Bad input for argument 'irho'") type.EIM <- match.arg(type.EIM, c("exact", "approximate"))[1] poratM <- (type.EIM == "exact") if (!is.logical(nodrift) || length(nodrift) != 1) stop("argument 'nodrift' must be a single logical") if (!is.logical(var.arg) || length(var.arg) != 1) stop("argument 'var.arg' must be a single logical") if (!is.logical(print.EIM)) stop("Invalid 'print.EIM'.") ismn <- idrift lsmn <- as.list(substitute(ldrift)) esmn <- link2list(lsmn) lsmn <- attr(esmn, "function.name") lsdv <- as.list(substitute(lsd)) esdv <- link2list(lsdv) lsdv <- attr(esdv, "function.name") lvar <- as.list(substitute(lvar)) evar <- link2list(lvar) lvar <- attr(evar, "function.name") lrho <- as.list(substitute(lrho)) erho <- link2list(lrho) lrho <- attr(erho, "function.name") n.sc <- if (var.arg) "var" else "sd" l.sc <- if (var.arg) lvar else lsdv e.sc <- if (var.arg) evar else esdv new("vglmff", blurb = c(ifelse(nodrift, "Two", "Three"), "-parameter autoregressive process of order-1\n\n", "Links: ", if (nodrift) "" else paste(namesof("drift", lsmn, earg = esmn), ", ", sep = ""), namesof(n.sc , l.sc, earg = e.sc), ", ", namesof("rho", lrho, earg = erho), "\n", "Model: Y_t = drift + rho * Y_{t-1} + error_{t},", "\n", " where 'error_{2:n}' ~ N(0, sigma^2) ", "independently", if (nodrift) ", and drift = 0" else "", "\n", "Mean: 
drift / (1 - rho)", "\n", "Correlation: rho = ARcoef1", "\n", "Variance: sd^2 / (1 - rho^2)"), constraints = eval(substitute(expression({ M1 <- 3 - .nodrift dotzero <- .zero # eval(negzero.expression.VGAM) constraints <- cm.zero.VGAM(constraints, x = x, zero = .zero , M = M, predictors.names = predictors.names, M1 = M1) }), list( .zero = zero, .nodrift = nodrift ))), infos = eval(substitute(function(...) { list(M1 = 3 - .nodrift , Q1 = 1, expected = TRUE, multipleResponse = TRUE, type.likelihood = .type.likelihood , ldrift = if ( .nodrift ) NULL else .lsmn , edrift = if ( .nodrift ) NULL else .esmn , lvar = .lvar , lsd = .lsdv , evar = .evar , esd = .esdv , lrho = .lrho , erho = .erho , zero = .zero ) }, list( .lsmn = lsmn, .lvar = lvar, .lsdv = lsdv, .lrho = lrho, .esmn = esmn, .evar = evar, .esdv = esdv, .erho = erho, .type.likelihood = type.likelihood, .nodrift = nodrift, .zero = zero))), initialize = eval(substitute(expression({ extra$M1 <- M1 <- 3 - .nodrift check <- w.y.check(w = w, y = y, Is.positive.y = FALSE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- check$w y <- check$y if ( .type.likelihood == "conditional") { w[1, ] <- 1.0e-6 } else { if (!(.nodrift )) w[1, ] <- 1.0e-1 } NOS <- ncoly <- ncol(y) n <- nrow(y) M <- M1*NOS var.names <- param.names("var", NOS) sdv.names <- param.names("sd", NOS) smn.names <- if ( .nodrift ) NULL else param.names("drift", NOS) rho.names <- param.names("rho", NOS) mynames1 <- smn.names mynames2 <- if ( .var.arg ) var.names else sdv.names mynames3 <- rho.names predictors.names <- c(if ( .nodrift ) NULL else namesof(smn.names, .lsmn , earg = .esmn , tag = FALSE), if ( .var.arg ) namesof(var.names, .lvar , earg = .evar , tag = FALSE) else namesof(sdv.names, .lsdv , earg = .esdv , tag = FALSE), namesof(rho.names, .lrho , earg = .erho , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] if ( .nodrift ) y <- scale(y, scale = FALSE) if (!length(etastart)) { init.smn <- Init.mu(y = y, w = w, imethod = .imethod , # x = x, imu = .ismn , ishrinkage = .ishrinkage , pos.only = FALSE) init.rho <- matrix(if (length( .irho )) .irho else 0.1, n, NOS, byrow = TRUE) init.sdv <- matrix(if (length( .isdv )) .isdv else 1.0, n, NOS, byrow = TRUE) init.var <- matrix(if (length( .ivar )) .ivar else 1.0, n, NOS, byrow = TRUE) for (jay in 1: NOS) { mycor <- cor(y[-1, jay], y[-n, jay]) init.smn[ , jay] <- mean(y[, jay]) * (1 - mycor) if (!length( .irho )) init.rho[, jay] <- sign(mycor) * min(0.95, abs(mycor)) if (!length( .ivar )) init.var[, jay] <- var(y[, jay]) * (1 - mycor^2) if (!length( .isdv )) init.sdv[, jay] <- sqrt(init.var[, jay]) } # for etastart <- cbind(if ( .nodrift ) NULL else theta2eta(init.smn, .lsmn , earg = .esmn ), if ( .var.arg ) theta2eta(init.var, .lvar , earg = .evar ) else theta2eta(init.sdv, .lsdv , earg = .esdv ), theta2eta(init.rho, .lrho , earg = .erho )) etastart <- etastart[, interleave.VGAM(M, M1 = M1), drop = FALSE] } # end of etastart }), list( .lsmn = lsmn, .lrho = lrho, .lsdv = lsdv, .lvar = lvar, .esmn = esmn, .erho = erho, .esdv = esdv, .evar = evar, .ismn = ismn, .irho = irho, .isdv = isd , .ivar = ivar, .type.likelihood = type.likelihood, .ishrinkage = ishrinkage, .poratM = poratM, .var.arg = var.arg, .nodrift = nodrift, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { M1 <- 3 - .nodrift NOS <- ncol(eta)/M1 ar.smn <- if ( .nodrift ) 0 else eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lsmn , earg = .esmn ) ar.rho <- eta2theta(eta[, 
M1*(1:NOS) , drop = FALSE], .lrho , earg = .erho ) ar.smn / (1 - ar.rho) }, list ( .lsmn = lsmn, .lrho = lrho , .lsdv = lsdv, .lvar = lvar , .var.arg = var.arg, .type.likelihood = type.likelihood, .nodrift = nodrift, .esmn = esmn, .erho = erho , .esdv = esdv, .evar = evar ))), last = eval(substitute(expression({ if (any(abs(ar.rho) > 1)) warning("Regularity conditions are violated at the final", "IRLS iteration, since 'abs(rho) > 1") M1 <- extra$M1 temp.names <- c(mynames1, mynames2, mynames3) temp.names <- temp.names[interleave.VGAM(M1 * ncoly, M1 = M1)] misc$link <- rep_len( .lrho , M1 * ncoly) misc$earg <- vector("list", M1 * ncoly) names(misc$link) <- names(misc$earg) <- temp.names for (ii in 1:ncoly) { if ( !( .nodrift )) misc$link[ M1*ii-2 ] <- .lsmn misc$link[ M1*ii-1 ] <- if ( .var.arg ) .lvar else .lsdv misc$link[ M1*ii ] <- .lrho if ( !( .nodrift )) misc$earg[[M1*ii-2]] <- .esmn misc$earg[[M1*ii-1]] <- if ( .var.arg ) .evar else .esdv misc$earg[[M1*ii ]] <- .erho } misc$type.likelihood <- .type.likelihood misc$var.arg <- .var.arg misc$M1 <- M1 misc$expected <- TRUE misc$imethod <- .imethod misc$multipleResponses <- TRUE misc$nodrift <- .nodrift }), list( .lsmn = lsmn, .lrho = lrho, .lsdv = lsdv, .lvar = lvar, .esmn = esmn, .erho = erho, .esdv = esdv, .evar = evar, .irho = irho, .isdv = isd , .ivar = ivar, .nodrift = nodrift, .poratM = poratM, .var.arg = var.arg, .type.likelihood = type.likelihood, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals= FALSE, eta, extra = NULL, summation = TRUE) { M1 <- 3 - .nodrift NOS <- ncol(eta)/M1 if ( .var.arg ) { ar.var <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lvar , earg = .evar ) ar.sdv <- sqrt(ar.var) } else { ar.sdv <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lsdv , earg = .esdv ) ar.var <- ar.sdv^2 } ar.smn <- if ( .nodrift ) 0 else eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lsmn , earg = .esmn ) ar.rho <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lrho , earg = .erho ) if (residuals) { stop("Loglikelihood not implemented yet to handle", "residuals.") } else { loglik.terms <- c(w) * dAR1(x = y, drift = ar.smn, var.error = ar.var, type.likelihood = .type.likelihood , ARcoef1 = ar.rho, log = TRUE) loglik.terms <- as.matrix(loglik.terms) if (summation) { sum(if ( .type.likelihood == "exact") loglik.terms else loglik.terms[-1, ] ) } else { loglik.terms } } }, list( .lsmn = lsmn, .lrho = lrho , .lsdv = lsdv, .lvar = lvar , .var.arg = var.arg, .type.likelihood = type.likelihood, .nodrift = nodrift, .esmn = esmn, .erho = erho , .esdv = esdv, .evar = evar ))), vfamily = c("AR1"), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- 3 - .nodrift n <- nrow(eta) NOS <- ncol(eta)/M1 ncoly <- NCOL(y) if ( .var.arg ) { ar.var <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lvar , earg = .evar ) ar.sdv <- sqrt(ar.var) } else { ar.sdv <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lsdv , earg = .esdv ) ar.var <- ar.sdv^2 } ar.smn <- if ( .nodrift ) matrix(0, n, NOS) else eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lsmn , earg = .esmn ) ar.rho <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lrho , earg = .erho ) okay1 <- all(is.finite(ar.sdv)) && all(0 < ar.sdv) && all(is.finite(ar.smn)) && all(is.finite(ar.rho)) okay1 }, list( .lsmn = lsmn, .lrho = lrho , .lsdv = lsdv, .lvar = lvar , .var.arg = var.arg, .type.likelihood = type.likelihood, .nodrift = nodrift, .esmn = esmn, .erho = erho , .esdv = esdv, .evar = evar ))), simslot = eval(substitute( function(object, nsim) { 
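    # Simulate nsim response paths from the fitted AR(1) process: the
    # first observation is drawn about the fitted value, and each
    # subsequent one recursively as drift + rho * y[t-1] + Gaussian noise.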
pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) fva <- fitted(object) M1 <- 3 - .nodrift NOS <- ncol(eta)/M1 if ( .var.arg ) { ar.var <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lvar , earg = .evar ) ar.sdv <- sqrt(ar.var) } else { ar.sdv <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lsdv , earg = .esdv ) ar.var <- ar.sdv^2 } ar.smn <- if ( .nodrift ) matrix(0, n, NOS) else eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lsmn , earg = .esmn ) ar.rho <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lrho , earg = .erho ) ans <- array(0, c(nrow(eta), NOS, nsim)) for (jay in 1:NOS) { ans[1, jay, ] <- rnorm(nsim, m = fva[1, jay], # zz sd = sqrt(ar.var[1, jay])) for (ii in 2:nrow(eta)) ans[ii, jay, ] <- ar.smn[ii, jay] + ar.rho[ii, jay] * ans[ii-1, jay, ] + rnorm(nsim, sd = sqrt(ar.var[ii, jay])) } ans <- matrix(c(ans), c(nrow(eta) * NOS, nsim)) ans }, list( .lsmn = lsmn, .lrho = lrho , .lsdv = lsdv, .lvar = lvar , .var.arg = var.arg, .type.likelihood = type.likelihood, .nodrift = nodrift, .esmn = esmn, .erho = erho , .esdv = esdv, .evar = evar ))), deriv = eval(substitute(expression({ M1 <- 3 - .nodrift NOS <- ncol(eta)/M1 ncoly <- NCOL(y) if ( .var.arg ) { ar.var <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lvar , earg = .evar ) ar.sdv <- sqrt(ar.var) } else { ar.sdv <- eta2theta(eta[, M1*(1:NOS) - 1, drop = FALSE], .lsdv , earg = .esdv ) ar.var <- ar.sdv^2 } ar.smn <- if ( .nodrift ) matrix(0, n, NOS) else eta2theta(eta[, M1*(1:NOS) - 2, drop = FALSE], .lsmn , earg = .esmn ) ar.rho <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lrho , earg = .erho ) if (any(abs(ar.rho) < 1e-2)) warning("Estimated values of 'rho' are too close to zero.") help2 <- (length(colnames(x)) >= 2) myMeans <- matrix(colMeans(y), nrow = n, ncol = NOS, by = TRUE) yLag <- matrix(y, ncol = NOS) temp4 <- matrix(0.0, nrow = n, ncol = NOS) temp4[-1, ] <- y[-1, , drop = FALSE] - ar.smn[-1, , drop = FALSE] yLag[-1, ] <- y[-n, ] temp1 <- matrix(0.0, nrow = n, ncol = NOS) temp1[-1, ] <- y[-1, , drop = FALSE] - (ar.smn[-1, ,drop = FALSE] + ar.rho[-1, , drop = FALSE] * y[-n, , drop = FALSE]) temp1[1, ] <- y[1, ] - ar.smn[1, ] dl.dsmn <- temp1 / ar.var dl.dsmn[1, ] <- ( (y[1, ] - myMeans[1, ]) * (1 + ar.rho[1, ]) ) / ar.var[1, ] if ( .var.arg ) { dl.dvarSD <- temp1^2 / ( 2 * ar.var^2) - 1 / (2 * ar.var) dl.dvarSD[1, ] <- ( (1 - ar.rho[1, ]^2) * (y[1, ] - myMeans[1, ])^2 ) /(2 * ar.var[1, ]^2) - 1 / (2 * ar.var[1, ]) } else { dl.dvarSD <- temp1^2 / ar.sdv^3 - 1 / ar.sdv dl.dvarSD[1, ] <- ( (1 - ar.rho[1, ]^2) * (y[1, ] - myMeans[1, ])^2 ) / ar.sdv[1, ]^3 - 1/ar.sdv[1, ] } dl.drho <- rbind(rep_len(0, 1), ( (y[-n, , drop = FALSE] - myMeans[-n, ]) * temp1[-1, , drop = FALSE ] )/ ar.var[-1, ] ) dl.drho[1, ] <- (ar.rho[1, ] * (y[1, ] - myMeans[1, ])^2 ) / ar.var[1, ] - ar.rho[1, ] / (1 - ar.rho[1, ]^2) dsmn.deta <- dtheta.deta(ar.smn, .lsmn , earg = .esmn ) drho.deta <- dtheta.deta(ar.rho, .lrho , earg = .erho ) if ( .var.arg ) { dvarSD.deta <- dtheta.deta(ar.var, .lvar , earg = .evar ) } else { dvarSD.deta <- dtheta.deta(ar.sdv, .lsdv , earg = .esdv ) } myderiv <- c(w) * cbind(if ( .nodrift ) NULL else dl.dsmn * dsmn.deta, dl.dvarSD * dvarSD.deta, dl.drho * drho.deta) myderiv <- myderiv[, interleave.VGAM(M, M1 = M1)] myderiv }), list( .lsmn = lsmn, .lrho = lrho, .lsdv = lsdv, .lvar = lvar, .esmn = esmn, .erho = erho, .esdv = esdv, .evar = evar, .nodrift = nodrift , .var.arg = var.arg, 
.type.likelihood = type.likelihood ))), weight = eval(substitute(expression({ ned2l.dsmn <- 1 / ar.var ned2l.dsmn[1, ] <- ( (1 + ar.rho[1, ]) / (1 - ar.rho[1, ]) ) * (1 / ar.var[1, ]) # Here, same results for the first and t > 1 observations. ned2l.dvarSD <- if ( .var.arg ) 1 / (2 * ar.var^2) else 2 / ar.var gamma0 <- (1 - help2) * ar.var/(1 - ar.rho^2) + help2 * (yLag - myMeans)^2 ned2l.drho <- gamma0 / ar.var ned2l.drho[1, ] <- 2 * ar.rho[1, ]^2 / (1 - ar.rho[1, ]^2)^2 ned2l.drdv <- matrix(0.0, nrow = n, ncol = NOS) ned2l.drdv[1, ] <- 2 * temp4[1, ] / ((1 - temp4[1, ]^2) * ar.sdv[1, ]) ncol.wz <- M + (M - 1) + ifelse( .nodrift , 0, M - 2) ncol.pf <- 3 * (M + ( .nodrift ) ) - 3 wz <- matrix(0, nrow = n, ncol = ncol.wz) helpPor <- .poratM pf.mat <- if (helpPor) AR1EIM(x = scale(y, scale = FALSE), var.arg = .var.arg , p.drift = 0, WNsd = ar.sdv, ARcoeff1 = ar.rho ) else array(0.0, dim= c(n, NOS, ncol.pf)) if (!( .nodrift )) wz[, M1*(1:NOS) - 2] <- ( (helpPor) * pf.mat[, , 1] + (1 - (helpPor)) * ned2l.dsmn) * dsmn.deta^2 wz[, M1*(1:NOS) - 1] <- ( (helpPor) * pf.mat[, , 2 ] + (1 - (helpPor)) * ned2l.dvarSD) * dvarSD.deta^2 wz[, M1*(1:NOS) ] <- ( (helpPor) * pf.mat[, , 3] + (1 - (helpPor)) * ned2l.drho) * drho.deta^2 wz[, M1*(1:NOS) + (M - 1) ] <- ((helpPor) * pf.mat[, , 4] + (1 - (helpPor)) * ned2l.drdv) * drho.deta * dvarSD.deta wz <- w.wz.merge(w = w, wz = wz, n = n, M = ncol.wz, ndepy = NOS) if ( .print.EIM ) { wz2 <- matrix(0, nrow = n, ncol = ncol.wz) if (!(.nodrift )) wz2[, M1*(1:NOS) - 2] <- ned2l.dsmn wz2[, M1*(1:NOS) - 1] <- if ( .var.arg ) 1 / (2 * ar.var^2) else 2 / ar.var wz2[, M1*(1:NOS) ] <- ned2l.drho wz2 <- wz2[, interleave.VGAM( M1 * NOS, M1)] if (NOS > 1) { matAux1 <- matAux2 <- matrix(NA_real_, nrow = n, ncol = NOS) approxMat <- array(wz2[, 1:(M1*NOS)], dim = c(n, M1, NOS)) for (kk in 1:NOS) { matAux1[, kk] <- rowSums(approxMat[, , kk]) matAux2[, kk] <- rowSums(pf.mat[, kk , ]) } matAux <- cbind(matAux1, if (.poratM ) matAux2 else NULL) colnames(matAux) <- c(paste("ApproxEIM.R",1:NOS, sep = ""), if (!(.poratM )) NULL else paste("ExactEIM.R",1:NOS, sep = "")) matAux <- matAux[, interleave.VGAM( (1 + .poratM) * NOS, M1 = 1 + .poratM)] } else { matAux <- cbind(rowSums(wz2), if (helpPor) rowSums(pf.mat[, 1, ][, 1:3]) else NULL) colnames(matAux) <- c("Approximate", if (helpPor) "Exact" else NULL) } print(matAux[1:10, , drop = FALSE]) } wz }), list( .var.arg = var.arg, .type.likelihood = type.likelihood, .nodrift = nodrift, .poratM = poratM, .print.EIM = print.EIM ))) ) } VGAM/R/summary.vgam.q0000644000176200001440000001443713135276757014046 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
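
# Typical use (a minimal sketch, not run: the 'hunua' data frame ships
# with VGAM, but the formula is purely illustrative):
if (FALSE) {
  fit <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = hunua)
  summary(fit)  # dispatches to summaryvgam() below, which adds the
                # approximate chi-squared/F tests for the s() terms
}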
summaryvgam <- function(object, dispersion = NULL, digits = options()$digits-2, presid = TRUE, nopredictors = FALSE) { if (length(dispersion) && dispersion == 0 && length(object@family@summary.dispersion) && !object@family@summary.dispersion) { stop("cannot use the general VGLM formula (based on a residual ", "sum of squares) for computing the dispersion parameter") } newobject <- object class(newobject) <- "vglm" stuff <- summaryvglm(newobject, dispersion = dispersion) rdf <- stuff@df[2] <- object@df.residual # NA M <- object@misc$M nrow.X.vlm <- object@misc$nrow.X.vlm rank <- if (is.null(object@qr$rank)) length(object@coefficients) else object@qr$rank useF <- object@misc$useF if (is.null(useF)) useF <- FALSE df <- unlist(lapply(object@misc$new.assign, length)) nldf <- object@nl.df if (length(df)) { aod <- as.matrix(round(df, 1)) dimnames(aod) <- list(names(df), "Df") if (!is.null(object@nl.chisq)) { aod <- cbind(aod, NA, NA, NA) nl.chisq <- object@nl.chisq / object@dispersion special <- abs(nldf) < 0.1 # This was the quick fix in s.vam() nldf[special] <- 1 # Give it a plausible value for pchisq & pf snames <- names(nldf) aod[snames, 2] <- round(nldf, 1) aod[snames, 3] <- if (useF) nl.chisq/nldf else nl.chisq aod[snames, 4] <- if (useF) pf(nl.chisq / nldf, nldf, rdf, lower.tail = FALSE) else pchisq(nl.chisq, nldf, lower.tail = FALSE) if (any(special)) { aod[snames[special], 2:4] <- NA } rnames <- c("Df", "Npar Df", "Npar Chisq", "P(Chi)") if (useF) rnames[3:4] <- c("Npar F", "Pr(F)") dimnames(aod) <- list(names(df), rnames) heading <- if (useF) "\nDF for Terms and Approximate F-values for Nonparametric Effects\n" else "\nDF for Terms and Approximate Chi-squares for Nonparametric Effects\n" } else { heading <- "DF for Terms\n\n" } aod <- as.vanova(data.frame(aod, check.names = FALSE), heading) class(aod) <- "data.frame" } else { aod <- data.frame() } answer <- new("summary.vgam", object, call = stuff@call, cov.unscaled = stuff@cov.unscaled, correlation = stuff@correlation, df = stuff@df, sigma = stuff@sigma) slot(answer, "coefficients") <- stuff@coefficients # Replace if (is.numeric(stuff@dispersion)) slot(answer, "dispersion") <- stuff@dispersion if (presid) { Presid <- residuals(object, type = "pearson") if (length(Presid)) answer@pearson.resid <- as.matrix(Presid) } answer@misc$nopredictors <- nopredictors slot(answer, "anova") <- aod answer } show.summary.vgam <- function(x, quote = TRUE, prefix = "", digits = options()$digits-2, nopredictors = NULL) { M <- x@misc$M cat("\nCall:\n", paste(deparse(x@call), sep = "\n", collapse = "\n"), "\n\n", sep = "") Presid <- x@pearson.resid rdf <- x@df[2] if (FALSE && !is.null(Presid) && all(!is.na(Presid))) { if (rdf/M > 5) { rq <- apply(as.matrix(Presid), 2, quantile) # 5 x M dimnames(rq) <- list(c("Min", "1Q", "Median", "3Q", "Max"), x@misc$predictors.names) cat("\nPearson residuals:\n") print(t(rq), digits = digits) } else if (rdf > 0) { cat("\nPearson residuals:\n") print(Presid, digits = digits) } } use.nopredictors <- if (is.logical(nopredictors)) nopredictors else x@misc$nopredictors # 20140728 if (!is.logical(use.nopredictors)) { warning("cannot determine 'nopredictors'; choosing FALSE") use.nopredictors <- FALSE } cat("\nNumber of linear predictors: ", M, "\n") if (!is.null(x@misc$predictors.names) && !use.nopredictors) { if (M == 1) { cat("\nName of linear predictor:", paste(x@misc$predictors.names, collapse = ", "), "\n") } else if (M <= 5) { cat("\nNames of linear predictors:", paste(x@misc$predictors.names, collapse = ", "), fill = 
TRUE) } } prose <- "" if (length(x@dispersion)) { if (is.logical(x@misc$estimated.dispersion) && x@misc$estimated.dispersion) { prose <- "(Estimated) " } else { if (is.numeric(x@misc$default.dispersion) && x@dispersion == x@misc$default.dispersion) prose <- "(Default) " if (is.numeric(x@misc$default.dispersion) && x@dispersion != x@misc$default.dispersion) prose <- "(Pre-specified) " } cat(paste("\n", prose, "Dispersion Parameter for ", x@family@vfamily[1], " family: ", format(round(x@dispersion, digits)), "\n", sep = "")) } if (length(deviance(x))) cat("\nResidual deviance: ", format(round(deviance(x), digits)), "on", format(round(rdf, 3)), "degrees of freedom\n") if (length(logLik.vlm(x))) cat("\nLog-likelihood:", format(round(logLik.vlm(x), digits)), "on", format(round(rdf, 3)), "degrees of freedom\n") if (length(x@criterion)) { ncrit <- names(x@criterion) for (ii in ncrit) if (ii != "loglikelihood" && ii != "deviance") cat(paste(ii, ":", sep = ""), format(x@criterion[[ii]]), "\n") } cat("\nNumber of iterations: ", x@iter, "\n") if (length(x@anova)) { show.vanova(x@anova, digits = digits) # ".vanova" for Splus6 } invisible(NULL) } setMethod("summary", "vgam", function(object, ...) summaryvgam(object, ...)) setMethod("show", "summary.vgam", function(object) show.summary.vgam(object)) show.vanova <- function(x, digits = .Options$digits, ...) { rrr <- row.names(x) heading <- attr(x, "heading") if (!is.null(heading)) cat(heading, sep = "\n") attr(x, "heading") <- NULL for (ii in seq_along(x)) { xx <- x[[ii]] xna <- is.na(xx) xx <- format(zapsmall(xx, digits)) xx[xna] <- "" x[[ii]] <- xx } print.data.frame(as.data.frame(x, row.names = rrr)) invisible(x) } as.vanova <- function(x, heading) { if (!is.data.frame(x)) stop("x must be a data frame") rrr <- row.names(x) attr(x, "heading") <- heading x <- as.data.frame(x, row.names = rrr) x } VGAM/R/family.loglin.R0000644000176200001440000002755013135276757014125 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. loglinb2 <- function(exchangeable = FALSE, zero = "u12") { if (!is.logical(exchangeable)) warning("argument 'exchangeable' should be a single logical") new("vglmff", blurb = c("Log-linear model for binary data\n\n", "Links: ", "Identity: u1, u2, u12", "\n"), constraints = eval(substitute(expression({ cm.intercept.default <- diag(3) constraints <- cm.VGAM(matrix(c(1,1,0, 0,0,1), 3, 2), x = x, bool = .exchangeable , constraints = constraints, apply.int = TRUE, cm.default = cm.intercept.default, cm.intercept.default = cm.intercept.default) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .exchangeable = exchangeable, .zero = zero ))), infos = eval(substitute(function(...) 
{ list(M1 = 3, Q1 = 4, # ncol(fitted(object)) expected = TRUE, multipleResponses = FALSE, parameters.names = c("u1", "u2", "u12"), zero = .zero ) }, list( .zero = zero ))), initialize = expression({ temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.max = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y if (ncol(y) != 2) stop("ncol(y) must be = 2") predictors.names <- c("u1", "u2", "u12") if (length(mustart) + length(etastart) == 0) { mustart <- matrix(NA_real_, nrow(y), 4) mustart[,1] <- weighted.mean((1-y[,1])*(1-y[,2]), w) mustart[,2] <- weighted.mean((1-y[,1])*y[,2], w) mustart[,3] <- weighted.mean(y[,1]*(1-y[,2]), w) mustart[,4] <- weighted.mean(y[,1]*y[,2], w) if (any(mustart == 0)) stop("some combinations of the response not realized") } }), linkinv = function(eta, extra = NULL) { u1 <- eta[,1] u2 <- eta[,2] u12 <- eta[,3] denom <- 1 + exp(u1) + exp(u2) + exp(u1 + u2 + u12) cbind("00" = 1/denom, "01" = exp(u2) / denom, "10" = exp(u1) / denom, "11" = exp(u1+u2+u12) / denom) }, last = expression({ misc$link <- c("u1" = "identitylink", "u2" = "identitylink", "u12" = "identitylink") misc$earg <- list("u1" = list(), "u2" = list(), "u12" = list()) misc$expected <- TRUE misc$multipleResponses <- TRUE }), linkfun = function(mu, extra = NULL) { u0 <- log(mu[,1]) u2 <- log(mu[,2]) - u0 u1 <- log(mu[,3]) - u0 u12 <- log(mu[,4]) - u0 - u1 - u2 cbind(u1, u2, u12) }, loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { u1 <- eta[,1] u2 <- eta[,2] u12 <- eta[,3] denom <- 1 + exp(u1) + exp(u2) + exp(u1 + u2 + u12) u0 <- -log(denom) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (u0 + u1*y[,1] + u2*y[,2] + u12*y[,1]*y[,2]) if (summation) { sum(ll.elts) } else { ll.elts } } }, vfamily = c("loglinb2"), validparams = function(eta, y, extra = NULL) { u1 <- eta[, 1] u2 <- eta[, 2] u12 <- eta[, 3] okay1 <- all(is.finite(u1 )) && all(is.finite(u2 )) && all(is.finite(u12)) okay1 }, deriv = expression({ u1 <- eta[, 1] u2 <- eta[, 2] u12 <- eta[, 3] denom <- 1 + exp(u1) + exp(u2) + exp(u1 + u2 + u12) du0.du1 <- -(exp(u1) + exp(u1 + u2 + u12)) / denom du0.du2 <- -(exp(u2) + exp(u1 + u2 + u12)) / denom du0.du12 <- -exp(u1 + u2 + u12) / denom c(w) * cbind(du0.du1 + y[,1], du0.du2 + y[,2], du0.du12 + y[,1] * y[,2]) }), weight = expression({ d2u0.du1.2 <- -(exp(u1) + exp(u1 + u2 + u12)) * (1+exp(u2)) / denom^2 d2u0.du22 <- -(exp(u2) + exp(u1 + u2 + u12)) * (1+exp(u1)) / denom^2 d2u0.du122 <- -exp(u1 + u2 + u12) * (1+exp(u1)+exp(u2)) / denom^2 d2u0.du1u2 <- -(exp(u1 + u2 + u12) - exp(u1 + u2)) / denom^2 d2u0.du1u3 <- -(1 + exp(u2)) * exp(u1 + u2 + u12) / denom^2 d2u0.du2u3 <- -(1 + exp(u1)) * exp(u1 + u2 + u12) / denom^2 wz <- matrix(NA_real_, n, dimm(M)) wz[,iam(1,1,M)] <- -d2u0.du1.2 wz[,iam(2,2,M)] <- -d2u0.du22 wz[,iam(3,3,M)] <- -d2u0.du122 wz[,iam(1,2,M)] <- -d2u0.du1u2 wz[,iam(1,3,M)] <- -d2u0.du1u3 wz[,iam(2,3,M)] <- -d2u0.du2u3 c(w) * wz })) } loglinb3 <- function(exchangeable = FALSE, zero = c("u12", "u13", "u23")) { if (!is.logical(exchangeable)) warning("argument 'exchangeable' should be a single logical") new("vglmff", blurb = c("Log-linear model for trivariate binary data\n\n", "Links: ", "Identity: u1, u2, u3, u12, u13, u23", "\n"), constraints = eval(substitute(expression({ cm.intercept.default <- diag(6) constraints <- cm.VGAM(matrix(c(1,1,1,0,0,0, 0,0,0,1,1,1), 6, 2), x = x, bool = .exchangeable , constraints = constraints, apply.int = TRUE, cm.default = 
cm.intercept.default, cm.intercept.default = cm.intercept.default) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 6) }), list( .exchangeable = exchangeable, .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 6, Q1 = 8, # ncol(fitted(object)) expected = TRUE, multipleResponses = FALSE, parameters.names = c("u1", "u2", "u3", "u12", "u13", "u23"), zero = .zero ) }, list( .zero = zero ))), initialize = expression({ predictors.names <- c("u1", "u2", "u3", "u12", "u13", "u23") temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.max = 3, out.wy = TRUE, colsyperw = 3, maximize = TRUE) w <- temp5$w y <- temp5$y if (ncol(y) != 3) stop("ncol(y) must be = 3") if (FALSE) extra$my.expression <- expression({ u1 <- eta[, 1] u2 <- eta[, 2] u3 <- eta[, 3] u12 <- eta[, 4] u13 <- eta[, 5] u23 <- eta[, 6] denom <- 1 + exp(u1) + exp(u2) + exp(u3) + exp(u1 + u2 + u12) + exp(u1 + u3 + u13) + exp(u2 + u3 + u23) + exp(u1 + u2 + u3 + u12 + u13 + u23) }) if (length(mustart) + length(etastart) == 0) { mustart <- matrix(NA_real_, nrow(y), 2^3) mustart[,1] <- weighted.mean((1-y[,1])*(1-y[,2])*(1-y[,3]), w) mustart[,2] <- weighted.mean((1-y[,1])*(1-y[,2])*y[,3], w) mustart[,3] <- weighted.mean((1-y[,1])*y[,2]*(1-y[,3]), w) mustart[,4] <- weighted.mean((1-y[,1])*y[,2]*y[,3], w) mustart[,5] <- weighted.mean(y[,1]*(1-y[,2])*(1-y[,3]), w) mustart[,6] <- weighted.mean(y[,1]*(1-y[,2])*y[,3], w) mustart[,7] <- weighted.mean(y[,1]*y[,2]*(1-y[,3]), w) mustart[,8] <- weighted.mean(y[,1]*y[,2]*y[,3], w) if (any(mustart == 0)) stop("some combinations of the response not realized") } }), linkinv = function(eta, extra = NULL) { u1 <- eta[, 1] u2 <- eta[, 2] u3 <- eta[, 3] u12 <- eta[, 4] u13 <- eta[, 5] u23 <- eta[, 6] denom <- 1 + exp(u1) + exp(u2) + exp(u3) + exp(u1 + u2 + u12) + exp(u1 + u3 + u13) + exp(u2 + u3 + u23) + exp(u1 + u2 + u3 + u12 + u13 + u23) cbind("000" = 1, "001" = exp(u3), "010" = exp(u2), "011" = exp(u2+u3+u23), "100" = exp(u1), "101" = exp(u1+u3+u13), "110" = exp(u1+u2+u12), "111" = exp(u1+u2+u3+u12+u13+u23)) / denom }, last = expression({ misc$link <- rep_len("identitylink", M) names(misc$link) <- predictors.names misc$earg <- list(u1 = list(), u2 = list(), u3 = list(), u12 = list(), u13 = list(), u23 = list()) misc$expected <- TRUE misc$multipleResponses <- TRUE }), linkfun = function(mu, extra = NULL) { u0 <- log(mu[,1]) u3 <- log(mu[,2]) - u0 u2 <- log(mu[,3]) - u0 u23 <- log(mu[,4]) - u0 - u2 - u3 u1 <- log(mu[,5]) - u0 u13 <- log(mu[,6]) - u0 - u1 - u3 u12 <- log(mu[,7]) - u0 - u1 - u2 cbind(u1, u2, u3, u12, u13, u23) }, loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { u1 <- eta[, 1] u2 <- eta[, 2] u3 <- eta[, 3] u12 <- eta[, 4] u13 <- eta[, 5] u23 <- eta[, 6] denom <- 1 + exp(u1) + exp(u2) + exp(u3) + exp(u1 + u2 + u12) + exp(u1 + u3 + u13) + exp(u2 + u3 + u23) + exp(u1 + u2 + u3 + u12 + u13 + u23) u0 <- -log(denom) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (u0 + u1*y[,1] + u2*y[,2] + u3*y[,3] +u12*y[,1]*y[,2] + u13*y[,1]*y[,3] + u23*y[,2]*y[,3]) if (summation) { sum(ll.elts) } else { ll.elts } } }, vfamily = c("loglinb3"), validparams = function(eta, y, extra = NULL) { okay1 <- all(is.finite(eta)) okay1 }, deriv = expression({ u1 <- eta[, 1] u2 <- eta[, 2] u3 <- eta[, 3] u12 <- eta[, 4] u13 <- eta[, 5] u23 <- eta[, 6] denom <- 1 + exp(u1) + exp(u2) + exp(u3) + exp(u1 + u2 + u12) + exp(u1 + u3 + u13) + exp(u2 
+ u3 + u23) + exp(u1 + u2 + u3 + u12 + u13 + u23) allterms <- exp(u1+u2+u3+u12+u13+u23) A1 <- exp(u1) + exp(u1 + u2 + u12) + exp(u1 + u3 + u13) + allterms A2 <- exp(u2) + exp(u1 + u2 + u12) + exp(u2 + u3 + u23) + allterms A3 <- exp(u3) + exp(u3 + u2 + u23) + exp(u1 + u3 + u13) + allterms A12 <- exp(u1 + u2 + u12) + allterms A13 <- exp(u1 + u3 + u13) + allterms A23 <- exp(u2 + u3 + u23) + allterms c(w) * cbind(-A1/denom + y[,1], -A2/denom + y[,2], -A3/denom + y[,3], -A12/denom + y[,1]*y[,2], -A13/denom + y[,1]*y[,3], -A23/denom + y[,2]*y[,3]) }), weight = expression({ u0 <- -log(denom) dA2.du1 <- exp(u1 + u2 + u12) + allterms dA3.du1 <- exp(u1 + u3 + u13) + allterms dA3.du2 <- exp(u2 + u3 + u23) + allterms wz <- matrix(NA_real_, n, dimm(6)) expu0 <- exp(u0) wz[,iam(1,1,M)] <- A1 * (1 - expu0 * A1) wz[,iam(2,2,M)] <- A2 * (1 - expu0 * A2) wz[,iam(3,3,M)] <- A3 * (1 - expu0 * A3) wz[,iam(1,2,M)] <- (dA2.du1 - expu0 * A1 * A2) wz[,iam(1,3,M)] <- (dA3.du1 - expu0 * A1 * A3) wz[,iam(2,3,M)] <- (dA3.du2 - expu0 * A2 * A3) wz[,iam(4,4,M)] <- A12 * (1 - expu0 * A12) wz[,iam(5,5,M)] <- A13 * (1 - expu0 * A13) wz[,iam(6,6,M)] <- A23 * (1 - expu0 * A23) wz[,iam(4,6,M)] <- (allterms - expu0 * A12 * A23) wz[,iam(5,6,M)] <- (allterms - expu0 * A12 * A23) wz[,iam(4,5,M)] <- (allterms - expu0 * A12 * A13) wz[,iam(1,4,M)] <- A12 * (1 - expu0 * A1) wz[,iam(1,5,M)] <- A13 * (1 - expu0 * A1) wz[,iam(1,6,M)] <- (allterms - expu0 * A1 * A23) wz[,iam(2,4,M)] <- A12 * (1 - expu0 * A2) wz[,iam(2,5,M)] <- (allterms - expu0 * A2 * A13) wz[,iam(2,6,M)] <- A23 * (1 - expu0 * A2) wz[,iam(3,4,M)] <- (allterms - expu0 * A3 * A12) wz[,iam(3,5,M)] <- A13 * (1 - expu0 * A3) wz[,iam(3,6,M)] <- A23 * (1 - expu0 * A3) wz <- expu0 * wz c(w) * wz })) } VGAM/R/summary.vlm.q0000644000176200001440000001233313135276757013703 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. summaryvlm <- function(object, correlation = FALSE, dispersion = NULL, Colnames = c("Estimate", "Std. 
Error", "z value", "Pr(>|z|)"), presid = TRUE) { if (is.logical(object@misc$BFGS) && object@misc$BFGS) warning("the estimated variance-covariance matrix is ", "usually inaccurate because the working weight matrices are ", "obtained by a crude BFGS quasi-Newton approximation") M <- object@misc$M n <- object@misc$n nrow.X.vlm <- object@misc$nrow.X.vlm ncol.X.vlm <- object@misc$ncol.X.vlm # May be NULL for CQO objects Coefs <- object@coefficients cnames <- names(Coefs) Presid <- if (presid) { Presid <- residualsvlm(object, type = "pearson") # NULL if pooled.weight Presid } else { NULL } if (anyNA(Coefs)) { warning("Some NAs in the coefficients---no summary is ", " provided; returning 'object'") return(object) } rdf <- object@df.residual if (!length(dispersion)) { if (is.numeric(object@misc$dispersion)) { dispersion <- object@misc$dispersion if (all(dispersion == 0)) stop("dispersion shouldn't be zero here!") } else { dispersion <- 1 object@misc$estimated.dispersion <- FALSE } } else if (dispersion == 0) { dispersion <- if (!length(object@ResSS)) { stop("object@ResSS is empty") } else { object@ResSS / object@df.residual } object@misc$estimated.dispersion <- TRUE } else { if (is.numeric(object@misc$dispersion) && object@misc$dispersion != dispersion) warning("overriding the value of object@misc$dispersion") object@misc$estimated.dispersion <- FALSE } sigma <- sqrt(dispersion) # Can be a vector if (is.Numeric(ncol.X.vlm)) { R <- object@R if (ncol.X.vlm < max(dim(R))) stop("R is rank deficient") covun <- chol2inv(R) dimnames(covun) <- list(cnames, cnames) } coef3 <- matrix(rep(Coefs, 4), ncol = 4) dimnames(coef3) <- list(cnames, Colnames) SEs <- sqrt(diag(covun)) if (length(sigma) == 1 && is.Numeric(ncol.X.vlm)) { coef3[, 2] <- SEs %o% sigma # Fails here when sigma is a vector coef3[, 3] <- coef3[, 1] / coef3[, 2] pvalue <- 2 * pnorm(-abs(coef3[, 3])) coef3[, 4] <- pvalue if (is.logical(object@misc$estimated.dispersion) && object@misc$estimated.dispersion) coef3 <- coef3[, -4] # Delete the pvalues column } else { coef3[, 1] <- coef3[, 2] <- coef3[, 3] <- coef3[, 4] <- NA coef3 <- coef3[, -4] # Delete the pvalues column } if (correlation) { correl <- covun * outer(1 / SEs, 1 / SEs) diag(correl) <- 1.0 dimnames(correl) <- list(cnames, cnames) } else { correl <- matrix(0, 0, 0) # was NULL, but now a special matrix } answer <- new("summary.vlm", object, coef3 = coef3, correlation = correl, df = c(ncol.X.vlm, rdf), sigma = sigma) if (is.Numeric(ncol.X.vlm)) answer@cov.unscaled <- covun answer@dispersion <- dispersion # Overwrite this if (length(Presid)) answer@pearson.resid <- as.matrix(Presid) answer } show.summary.vlm <- function(x, digits = NULL, quote = TRUE, prefix = "") { M <- x@misc$M coef3 <- x@coef3 # ficients correl <- x@correlation if (is.null(digits)) { digits <- options()$digits } else { old.digits <- options(digits = digits) on.exit(options(old.digits)) } cat("\nCall:\n") dput(x@call) Presid <- x@pearson.resid rdf <- x@df[2] if (length(Presid) && all(!is.na(Presid))) { if (rdf/M > 5) { rq <- apply(as.matrix(Presid), 2, quantile) # 5 x M dimnames(rq) <- list(c("Min", "1Q", "Median", "3Q", "Max"), x@misc$predictors.names) cat("\nPearson residuals:\n") print(t(rq), digits = digits) } else if (rdf > 0) { cat("\nPearson residuals:\n") print(Presid, digits = digits) } } if (!all(is.na(coef3))) { cat("\nCoefficients:\n") print(coef3, digits = digits) } cat("\nNumber of responses: ", M, "\n") if (length(x@misc$predictors.names)) if (M == 1) { cat("\nName of response:", 
paste(x@misc$predictors.names, collapse = ", "), "\n") } else { UUU <- paste(x@misc$predictors.names, collapse = ", ") UUU <- x@misc$predictors.names cat("\nNames of responses:\n") cat(UUU, fill = TRUE, sep = ", ") } if (!is.null(x@ResSS)) cat("\nResidual Sum of Squares:", format(round(x@ResSS, digits)), "on", round(rdf, digits), "degrees of freedom\n") if (length(correl)) { ncol.X.vlm <- dim(correl)[2] if (ncol.X.vlm > 1) { cat("\nCorrelation of Coefficients:\n") ll <- lower.tri(correl) correl[ll] <- format(round(correl[ll], digits)) correl[!ll] <- "" print(correl[-1, -ncol.X.vlm, drop = FALSE], quote = FALSE, digits = digits) } } invisible(NULL) } setMethod("summary", "vlm", function(object, ...) summaryvlm(object, ...)) setMethod("show", "summary.vlm", function(object) show.summary.vlm(object)) VGAM/R/family.circular.R0000644000176200001440000004076713135276757014452 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. dcard <- function(x, mu, rho, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(mu), length(rho)) if (length(x) != L) x <- rep_len(x, L) if (length(mu) != L) mu <- rep_len(mu, L) if (length(rho) != L) rho <- rep_len(rho, L) logdensity <- rep_len(log(0), L) xok <- (x > 0) & (x < (2*pi)) logdensity[xok] <- -log(2*pi) + log1p(2 * rho[xok] * cos(x[xok]-mu[xok])) logdensity[mu <= 0] <- NaN logdensity[mu >= 2*pi] <- NaN logdensity[rho <= -0.5] <- NaN logdensity[rho >= 0.5] <- NaN if (log.arg) logdensity else exp(logdensity) } pcard <- function(q, mu, rho, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ans <- log((q + 2 * rho * (sin(q-mu) + sin(mu))) / (2*pi)) ans[q <= 0 ] <- -Inf ans[q >= (2*pi)] <- 0 } else { ans <- (q + 2 * rho * (sin(q-mu) + sin(mu))) / (2*pi) ans[q <= 0] <- 0 ans[q >= (2*pi)] <- 1 } } else { if (log.p) { ans <- log1p(-(q + 2 * rho * (sin(q-mu) + sin(mu))) / (2*pi)) ans[q <= 0] <- 0 ans[q >= (2*pi)] <- -Inf } else { ans <- (2*pi - q - 2 * rho * (sin(q-mu) + sin(mu))) / (2*pi) ans[q <= 0] <- 1 ans[q >= (2*pi)] <- 0 } } ans[mu < 0 | mu > 2*pi] <- NaN # A warning() may be a good idea here ans[abs(rho) > 0.5] <- NaN ans } qcard <- function(p, mu, rho, tolerance = 1.0e-7, maxits = 500, lower.tail = TRUE, log.p = FALSE) { if (!is.Numeric(p) || any(p < 0) || any(p > 1)) stop("'p' must be between 0 and 1") nn <- max(length(p), length(mu), length(rho)) if (length(p) != nn) p <- rep_len(p, nn) if (length(mu) != nn) mu <- rep_len(mu, nn) if (length(rho) != nn) rho <- rep_len(rho, nn) if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ln.p <- p for (its in 1:maxits) { oldans <- 2 * pi * exp(ln.p) ans <- oldans - (oldans + 2 * rho * (sin(oldans-mu)+sin(mu)) - 2*pi*exp(ln.p)) / (1 + 2 * rho * cos(oldans - mu)) index <- (ans < 0) | (ans > 2*pi) # 20141216 KaiH Remove ans == 0 if (any(index)) { ans[index] <- runif (sum(index), 0, 2*pi) } if (max(abs(ans - oldans)) < tolerance) break if (its == maxits) { warning("did not converge") break } oldans <- ans } } else { for (its in 1:maxits) { oldans <- 2 * pi * p ans <- oldans - (oldans + 2 * rho * 
(sin(oldans-mu)+sin(mu)) - 2*pi*p) / (1 + 2 * rho * cos(oldans - mu)) index <- (ans < 0) | (ans > 2*pi) # 20141216 KaiH Remove ans == 0 if (any(index)) { ans[index] <- runif(sum(index), 0, 2*pi) } if (max(abs(ans - oldans)) < tolerance) break if (its == maxits) { warning("did not converge") break } oldans <- ans } } } else { if (log.p) { ln.p <- p for (its in 1:maxits) { oldans <- - 2 * pi * expm1(ln.p) ans <- oldans - (oldans + 2 * rho * (sin(oldans-mu)+sin(mu)) + 2*pi*expm1(ln.p)) / (1 + 2 * rho * cos(oldans - mu)) index <- (ans < 0) | (ans > 2*pi) if (any(index)) { ans[index] <- runif (sum(index), 0, 2*pi) } if (max(abs(ans - oldans)) < tolerance) break if (its == maxits) { warning("did not converge") break } oldans <- ans } } else { for (its in 1:maxits) { oldans <- 2 * pi - 2 * pi * p ans <- oldans - (oldans + 2 * rho * (sin(oldans-mu)+sin(mu)) - 2*pi + 2*pi*p) / (1 + 2 * rho * cos(oldans - mu)) index <- (ans < 0) | (ans > 2*pi) if (any(index)) { ans[index] <- runif (sum(index), 0, 2*pi) } if (max(abs(ans - oldans)) < tolerance) break if (its == maxits) { warning("did not converge") break } oldans <- ans } } } ans[mu < 0 | mu > 2*pi] <- NaN # A warning() may be a good idea here ans[abs(rho) > 0.5] <- NaN ans } rcard <- function(n, mu, rho, ...) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (!is.Numeric(mu) || any(mu < 0) || any(mu > 2*pi)) stop("argument 'mu' must be between 0 and 2*pi inclusive") if (!is.Numeric(rho) || max(abs(rho) > 0.5)) stop("argument 'rho' must be between -0.5 and 0.5 inclusive") mu <- rep_len(mu, use.n) rho <- rep_len(rho, use.n) qcard(runif(use.n), mu = mu, rho = rho, ...) } cardioid.control <- function(save.weights = TRUE, ...) { list(save.weights = save.weights) } cardioid <- function( lmu = extlogit(min = 0, max = 2*pi), lrho = extlogit(min = -0.5, max = 0.5), imu = NULL, irho = 0.3, nsimEIM = 100, zero = NULL) { lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lrho <- as.list(substitute(lrho)) erho <- link2list(lrho) lrho <- attr(erho, "function.name") if (length(imu) && (!is.Numeric(imu, positive = TRUE) || any(imu > 2*pi))) stop("bad input for argument 'imu'") if (!is.Numeric(irho) || max(abs(irho)) > 0.5) stop("bad input for argument 'irho'") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE) || nsimEIM <= 50) stop("'nsimEIM' should be an integer greater than 50") new("vglmff", blurb = c("Cardioid distribution\n\n", "Links: ", namesof("mu", lmu, earg = emu, tag = FALSE), ", ", namesof("rho", lrho, earg = erho, tag = FALSE), "\n", "Mean: ", "pi + (rho/pi) *", "((2*pi-mu)*sin(2*pi-mu)+cos(2*pi-mu)-mu*sin(mu)-cos(mu))"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("mu", "rho"), nsimEIM = .nsimEIM , lmu = .lmu , lrho = .lrho , zero = .zero ) }, list( .zero = zero, .lmu = lmu, .lrho = lrho, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y if (any((y <= 0) | (y >=2*pi))) stop("the response must be in (0, 2*pi)") predictors.names <- c( namesof("mu", .lmu , earg = .emu , tag = FALSE), namesof("rho", .lrho , earg = .erho , tag = FALSE)) if (!length(etastart)) { rho.init <- rep_len(if (length( .irho )) .irho else 0.3, n) cardioid.Loglikfun <- function(mu, y, x, w, extraargs) { rho <- extraargs$irho sum(w * (-log(2*pi) + log1p(2*rho*cos(y-mu)))) } mu.grid <- seq(0.1, 6.0, len = 19) mu.init <- if (length( .imu )) .imu else grid.search(mu.grid, objfun = cardioid.Loglikfun, y = y, x = x, w = w, extraargs = list(irho = rho.init)) mu.init <- rep_len(mu.init, length(y)) etastart <- cbind(theta2eta( mu.init, .lmu , earg = .emu ), theta2eta(rho.init, .lrho , earg = .erho )) } }), list( .lmu = lmu, .lrho = lrho, .imu = imu, .irho = irho, .emu = emu, .erho = erho ))), linkinv = eval(substitute(function(eta, extra = NULL){ mu <- eta2theta(eta[, 1], link = .lmu , earg = .emu ) rho <- eta2theta(eta[, 2], link = .lrho , earg = .erho ) pi + (rho/pi) * ((2*pi-mu)*sin(2*pi-mu) + cos(2*pi-mu) - mu*sin(mu) - cos(mu)) }, list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho ))), last = eval(substitute(expression({ misc$link <- c("mu" = .lmu , "rho" = .lrho ) misc$earg <- list("mu" = .emu , "rho" = .erho ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM }), list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { mu <- eta2theta(eta[, 1], link = .lmu , earg = .emu ) rho <- eta2theta(eta[, 2], link = .lrho , earg = .erho ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dcard(x = y, mu = mu, rho = rho, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho ))), vfamily = c("cardioid"), validparams = eval(substitute(function(eta, y, extra = NULL) { mu <- eta2theta(eta[, 1], link = .lmu , earg = .emu ) rho <- eta2theta(eta[, 2], link = .lrho , earg = .erho ) okay1 <- all(is.finite(mu )) && all( 0 < mu & mu < 2*pi) && all(is.finite(rho)) && all(-0.5 < rho & rho < 0.5) okay1 }, list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho ))), deriv = eval(substitute(expression({ mu <- eta2theta(eta[, 1], link = .lmu , earg = .emu ) rho <- eta2theta(eta[, 2], link = .lrho , earg = .erho ) dmu.deta <- dtheta.deta(mu, link = .lmu , earg = .emu ) drho.deta <- dtheta.deta(rho, link = .lrho , earg = .erho ) dl.dmu <- 2 * rho * sin(y-mu) / (1 + 2 * rho * cos(y-mu)) dl.drho <- 2 * cos(y-mu) / (1 + 2 * rho * cos(y-mu)) c(w) * cbind(dl.dmu * dmu.deta, dl.drho * drho.deta) }), list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho, .nsimEIM = nsimEIM ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) index0 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) for (ii in 1:( .nsimEIM )) { ysim <- rcard(n, mu=mu, rho=rho) dl.dmu <- 2 * rho * sin(ysim-mu) / (1 + 2 * rho * cos(ysim-mu)) dl.drho <- 2 * cos(ysim-mu) / (1 + 2 * rho * cos(ysim-mu)) rm(ysim) temp3 <- 
cbind(dl.dmu, dl.drho) run.varcov <- ((ii-1) * run.varcov + temp3[, ind1$row.index] * temp3[, ind1$col.index]) / ii } wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov dtheta.detas <- cbind(dmu.deta, drho.deta) wz <- wz * dtheta.detas[, index0$row] * dtheta.detas[, index0$col] c(w) * wz }), list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho, .nsimEIM = nsimEIM )))) } vonmises <- function(llocation = extlogit(min = 0, max = 2*pi), lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = NULL) { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") ilocat <- ilocation if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Von Mises distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), "\n", "\n", "Mean: location"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, expected = TRUE, multipleResponses = FALSE, parameters.names = c("location", "scale"), zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y) predictors.names <- c(namesof("location", .llocat , earg = .elocat , tag = FALSE), namesof("scale", .lscale , earg = .escale , tag = FALSE)) if (!length(etastart)) { if ( .imethod == 1) { locat.init <- mean(y) rat10 <- sqrt((sum(w*cos(y )))^2 + sum(w*sin(y))^2) / sum(w) scale.init <- sqrt(1 - rat10) } else { locat.init <- median(y) scale.init <- sqrt(sum(w*abs(y - locat.init)) / sum(w)) } locat.init <- rep_len(if (length( .ilocat )) .ilocat else locat.init,n) scale.init <- rep_len(if (length( .iscale )) .iscale else 1, n) etastart <- cbind( theta2eta(locat.init, .llocat , earg = .elocat ), theta2eta(scale.init, .lscale , earg = .escale )) } y <- y %% (2*pi) # Coerce after initial values have been computed }), list( .imethod = imethod, .ilocat = ilocat, .escale = escale, .elocat = elocat, .lscale = lscale, .llocat = llocat, .iscale = iscale ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .llocat , earg = .elocat ) %% (2*pi) }, list( .escale = escale, .lscale = lscale, .llocat = llocat, .elocat = elocat ))), last = eval(substitute(expression({ misc$link <- c(location = .llocat , scale = .lscale ) misc$earg <- list(location = .elocat , scale = .escale ) }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (Scale * cos(y - locat) - log(mbesselI0(x = Scale))) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .escale = escale, .lscale = lscale, .llocat = llocat, .elocat = elocat ))), vfamily = c("vonmises"), validparams = eval(substitute(function(eta, y, extra = NULL) { locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale 
) okay1 <- all(is.finite(locat)) && all(0 < locat & locat < 2*pi) && all(is.finite(Scale)) && all(0 < Scale) okay1 }, list( .escale = escale, .lscale = lscale, .llocat = llocat, .elocat = elocat ))), deriv = eval(substitute(expression({ locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2], .lscale , earg = .escale ) tmp6 <- mbesselI0(x = Scale, deriv = 2) dl.dlocat <- Scale * sin(y - locat) dl.dscale <- cos(y - locat) - tmp6[, 2] / tmp6[, 1] dlocat.deta <- dtheta.deta(locat, .llocat , earg = .elocat ) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta) }), list( .escale = escale, .lscale = lscale, .llocat = llocat, .elocat = elocat ))), weight = eval(substitute(expression({ ned2l.dlocat2 <- Scale * tmp6[, 2] / tmp6[, 1] ned2l.dscale2 <- tmp6[, 3] / tmp6[, 1] - (tmp6[, 2] / tmp6[, 1])^2 wz <- matrix(0, nrow = n, ncol = 2) # diagonal wz[, iam(1, 1, M)] <- ned2l.dlocat2 * dlocat.deta^2 wz[, iam(2, 2, M)] <- ned2l.dscale2 * dscale.deta^2 c(w) * wz }), list( .escale = escale, .elocat = elocat, .lscale = lscale, .llocat = llocat )))) } VGAM/R/vlm.R0000644000176200001440000001216713135276760012147 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. vlm <- function(formula, data = list(), weights = NULL, subset = NULL, na.action = na.fail, prior.weights = NULL, control = vlm.control(...), method = "qr", model = FALSE, x.arg = FALSE, y.arg = TRUE, qr.arg = TRUE, contrasts = NULL, constraints = NULL, extra = NULL, offset = NULL, smart = TRUE, ...) { dataname <- as.character(substitute(data)) # "list" if no data= function.name <- "vlm" ocall <- match.call() if (smart) setup.smart("write") if (missing(data)) data <- environment(formula) mf <- match.call(expand.dots = FALSE) m <- match(c("formula", "data", "subset", "weights", "na.action", "offset"), names(mf), 0) mf <- mf[c(1, m)] mf$drop.unused.levels <- TRUE mf[[1]] <- as.name("model.frame") mf <- eval(mf, parent.frame()) switch(method, model.frame = return(mf), qr = 1, stop("invalid 'method': ", method)) mt <- attr(mf, "terms") if (method != "qr") stop("only method = 'qr' is implemented") xlev <- .getXlevels(mt, mf) y <- model.response(mf, "any") # model.extract(mf, "response") x <- if (!is.empty.model(mt)) model.matrix(mt, mf, contrasts) else matrix(, NROW(y), 0) attr(x, "assign") <- attrassigndefault(x, mt) offset <- model.offset(mf) if (is.null(offset)) offset <- 0 # yyy ??? 
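  ## The zero default above just keeps the following length(offset) test
  ## well-defined; vlm() itself rejects nonzero offsets via the stop() below.
  ## A minimal usage sketch (simulated data; the object names here are
  ## invented for illustration only):
  ##   nn <- 100; x2 <- runif(nn)
  ##   ymat <- cbind(y1 = 1 + 2 * x2 + rnorm(nn),
  ##                 y2 = -1 + x2 + rnorm(nn))
  ##   fit <- vlm(ymat ~ x2)
  ##   coef(fit)  # One coefficient per (linear predictor, regressor) pair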
if (length(offset) && any(offset != 0)) stop("offsets are redundant for (vector) linear models") wz <- model.weights(mf) y <- as.matrix(y) M <- NCOL(y) n <- nrow(x) dy <- dimnames(y) dy1 <- if (length(dy[[1]])) dy[[1]] else dimnames(mf)[[1]] dy2 <- if (length(dy[[2]])) dy[[2]] else paste("Y", 1:M, sep = "") dimnames(y) <- list(dy1, dy2) predictors.names <- dy2 if (!length(prior.weights)) { prior.weights <- rep_len(1, n) names(prior.weights) <- dy1 } if (any(prior.weights <= 0)) stop("only positive weights allowed") if (!length(wz)) { wz <- matrix(prior.weights, n, M) identity.wts <- TRUE } else { identity.wts <- FALSE temp <- NCOL(wz) if (temp < M || temp > M*(M+1)/2) stop("input 'w' must have between ", M, " and ", M*(M+1)/2, " columns") wz <- prior.weights * wz } control <- control Hlist <- process.constraints(constraints, x, M) intercept.only <- ncol(x) == 1 && dimnames(x)[[2]] == "(Intercept)" fit <- vlm.wfit(xmat = x, zmat = y, Hlist = Hlist, wz = wz, U = NULL, matrix.out = FALSE, is.vlmX = FALSE, ResSS = TRUE, qr = qr.arg, x.ret = TRUE, offset = offset) ncol.X.vlm <- fit$rank fit$R <- fit$qr$qr[1:ncol.X.vlm, 1:ncol.X.vlm, drop = FALSE] fit$R[lower.tri(fit$R)] <- 0 fit$constraints <- Hlist dnrow.X.vlm <- labels(fit$X.vlm) xnrow.X.vlm <- dnrow.X.vlm[[2]] dn <- labels(x) xn <- dn[[2]] dX.vlm <- as.integer(dim(fit$X.vlm)) nrow.X.vlm <- dX.vlm[[1]] ncol.X.vlm <- dX.vlm[[2]] misc <- list( colnames.x = xn, colnames.X.vlm = xnrow.X.vlm, function.name = function.name, intercept.only=intercept.only, predictors.names = predictors.names, M = M, n = nrow(x), nrow.X.vlm = nrow.X.vlm, orig.assign = attr(x, "assign"), p = ncol(x), ncol.X.vlm = ncol.X.vlm, ynames = dimnames(y)[[2]]) fit$misc <- misc fit$misc$dataname <- dataname if (smart) { fit$smart.prediction <- get.smart.prediction() wrapup.smart() } answer <- new("vlm", "assign" = attr(x, "assign"), "call" = ocall, "coefficients" = fit$coefficients, "constraints" = fit$constraints, "control" = control, "criterion" = list(deviance = fit$ResSS), "dispersion" = 1, "df.residual" = fit$df.residual, "df.total" = n*M, "effects" = fit$effects, "fitted.values"= as.matrix(fit$fitted.values), "misc" = fit$misc, "model" = if (model) mf else data.frame(), "R" = fit$R, "rank" = fit$rank, "residuals" = as.matrix(fit$residuals), "ResSS" = fit$ResSS, "smart.prediction" = as.list(fit$smart.prediction), "terms" = list(terms = mt)) if (!smart) answer@smart.prediction <- list(smart.arg = FALSE) slot(answer, "prior.weights") <- as.matrix(prior.weights) if (length(attr(x, "contrasts"))) slot(answer, "contrasts") <- attr(x, "contrasts") slot(answer, "na.action") <- if (length(aaa <- attr(mf, "na.action"))) list(aaa) else list() if (length(offset)) slot(answer, "offset") <- as.matrix(offset) if (qr.arg) { class(fit$qr) <- "list" slot(answer, "qr") <- fit$qr } if (x.arg) slot(answer, "x") <- x # The 'small' design matrix if (control$save.weights) slot(answer, "weights") <- wz if (length(xlev)) slot(answer, "xlevels") <- xlev if (y.arg) slot(answer, "y") <- as.matrix(y) answer } attr(vlm, "smart") <- TRUE VGAM/R/logLik.vlm.q0000644000176200001440000000761613135276757013437 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. logLik.vlm <- function(object, summation = TRUE, ...) 
{ if (summation) { object@criterion$loglikelihood } else { Args <- formals(args(object@family@loglikelihood)) if (length(Args$summation) == 0) stop("there is no 'summation' argument for the function in the ", "'loglikelihood' slot of the object.") object@family@loglikelihood(mu = fitted(object), y = depvar(object), w = as.vector(weights(object, type = "prior")), residuals = FALSE, eta = predict(object), extra = object@extra, summation = summation) } } logLik.qrrvglm <- function(object, summation = TRUE, ...) { ff.code <- object@family ll.ff.code <- ff.code@loglikelihood prior.weights <- weights(object, type = "prior") if (is.matrix(prior.weights) && ncol(prior.weights) == 1) prior.weights <- c(prior.weights) loglik.try <- ll.ff.code(mu = fitted(object), y = depvar(object), w = prior.weights, residuals = FALSE, eta = predict(object), extra = object@extra, summation = summation) if (!is.numeric(loglik.try)) loglik.try <- NULL loglik.try } if (!isGeneric("logLik")) setGeneric("logLik", function(object, ...) standardGeneric("logLik"), package = "VGAM") setMethod("logLik", "vlm", function(object, ...) logLik.vlm(object, ...)) setMethod("logLik", "vglm", function(object, ...) logLik.vlm(object, ...)) setMethod("logLik", "vgam", function(object, ...) logLik.vlm(object, ...)) setMethod("logLik", "qrrvglm", function(object, ...) logLik.qrrvglm(object, ...)) setMethod("logLik", "rrvgam", function(object, ...) logLik.qrrvglm(object, ...)) constraints.vlm <- function(object, type = c("lm", "term"), all = TRUE, which, matrix.out = FALSE, colnames.arg = TRUE, # 20130827 rownames.arg = TRUE, # 20170606 ...) { type <- match.arg(type, c("lm", "term"))[1] Hlist <- ans <- slot(object, "constraints") # For "lm" (formerly "vlm") if (type == "term") { oassign.LM <- object@misc$orig.assign x.LM <- model.matrix(object) att.x.LM <- attr(x.LM, "assign") names.att.x.LM <- names(att.x.LM) ppp <- length(names.att.x.LM) ans <- vector("list", ppp) for (ii in 1:ppp) { col.ptr <- (oassign.LM[[ii]])[1] # 20110114 ans[[ii]] <- (Hlist[[col.ptr]]) } names(ans) <- names.att.x.LM } # End of "term" if (matrix.out) { if (all) { M <- npred(object) mat.ans <- matrix(unlist(ans), nrow = M) if (length(object@misc$predictors.names) == M) rownames(mat.ans) <- object@misc$predictors.names if (length(object@misc$colnames.X_vlm) == ncol(mat.ans)) colnames(mat.ans) <- object@misc$colnames.X_vlm if (colnames.arg || rownames.arg) { rownames.cm <- colnames(predict(object)) if (!rownames.arg || nrow(mat.ans) != length(rownames.cm)) rownames.cm <- NULL colnames.cm <- if (colnames.arg) colnames(model.matrix(object, type = "vlm")) else NULL dimnames(mat.ans) <- list(rownames.cm, colnames.cm) } mat.ans } else { ans[[which]] } } else { if (all) ans else ans[[which]] } } if (!isGeneric("constraints")) setGeneric("constraints", function(object, ...) standardGeneric("constraints")) setMethod("constraints", "vlm", function(object, ...) constraints.vlm(object, ...)) VGAM/R/vgam.control.q0000644000176200001440000001165613135276760014023 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. vgam.control <- function(all.knots = FALSE, bf.epsilon = 1e-7, bf.maxit = 30, checkwz = TRUE, Check.rank = TRUE, Check.cm.rank = TRUE, criterion = names(.min.criterion.VGAM), epsilon = 1e-7, maxit = 30, Maxit.outer = 10, noWarning = FALSE, na.action=na.fail, nk = NULL, save.weights = FALSE, se.fit = TRUE, trace = FALSE, wzepsilon = .Machine$double.eps^0.75, xij = NULL, gamma.arg = 1, ...) 
{ if (mode(criterion) != "character" && mode(criterion) != "name") criterion <- as.character(substitute(criterion)) criterion <- pmatch(criterion[1], names(.min.criterion.VGAM), nomatch = 1) criterion <- names(.min.criterion.VGAM)[criterion] if (!is.logical(checkwz) || length(checkwz) != 1) stop("bad input for argument 'checkwz'") if (!is.Numeric(wzepsilon, length.arg = 1, positive = TRUE)) stop("bad input for argument 'wzepsilon'") if (length(all.knots) > 1) warning("all.knots should be of length 1; using first value only") if (!is.Numeric(bf.epsilon, length.arg = 1, positive = TRUE)) { warning("bad input for argument 'bf.epsilon'; using 0.00001 instead") bf.epsilon <- 0.00001 } if (!is.Numeric(bf.maxit, length.arg = 1, positive = TRUE, integer.valued = TRUE)) { warning("bad input for argument 'bf.maxit'; using 30 instead") bf.maxit <- 30 } if (!is.Numeric(epsilon, length.arg = 1, positive = TRUE)) { warning("bad input for argument 'epsilon'; using 0.0001 instead") epsilon <- 0.0001 } if (!is.Numeric(maxit, length.arg = 1, positive = TRUE, integer.valued = TRUE)) { warning("bad input for argument 'maxit'; using 30 instead") maxit <- 30 } if (!is.Numeric(Maxit.outer, length.arg = 1, positive = TRUE, integer.valued = TRUE)) { warning("bad input for argument 'Maxit.outer'; ", "using 20 instead") Maxit.outer <- 20 } convergence <- expression({ switch(criterion, coefficients = if (iter == 1) iter < maxit else (iter < maxit && max(abs(new.coeffs - old.coeffs) / ( abs(old.coeffs) + epsilon)) > epsilon), iter < maxit && sqrt(sqrt(eff.n)) * abs(old.crit - new.crit) / ( abs(old.crit) + epsilon) > epsilon) }) if (!is.Numeric(gamma.arg, length.arg = 1)) stop("bad input for argument 'gamma.arg'") if (gamma.arg < 0.5 || 3 < gamma.arg) warning("input for argument 'gamma.arg' looks dubious") list(all.knots = as.logical(all.knots)[1], bf.epsilon = bf.epsilon, bf.maxit = bf.maxit, checkwz = checkwz, Check.rank = Check.rank, Check.cm.rank = Check.cm.rank, convergence = convergence, criterion = criterion, epsilon = epsilon, maxit = maxit, Maxit.outer = Maxit.outer, noWarning = as.logical(noWarning)[1], nk = nk, min.criterion = .min.criterion.VGAM, save.weights = as.logical(save.weights)[1], se.fit = as.logical(se.fit)[1], trace = as.logical(trace)[1], xij = if (is(xij, "formula")) list(xij) else xij, wzepsilon = wzepsilon, gamma.arg = gamma.arg) } vgam.nlchisq <- function(qr, resid, wz, smomat, deriv, U, smooth.labels, assign, M, n, constraints) { attr(qr, "class") <- "qr" class(qr) <- "qr" if (!is.matrix(smomat)) smomat <- as.matrix(smomat) if (!is.matrix(wz)) wz <- as.matrix(wz) if (!is.matrix(deriv)) deriv <- as.matrix(deriv) if (!is.matrix(resid)) resid <- as.matrix(resid) trivc <- trivial.constraints(constraints) ans <- rep_len(NA_real_, ncol(smomat)) Uderiv <- vbacksub(U, t(deriv), M = M, n = n) # \bU_i^{-1} \biu_i ptr <- 0 for (ii in seq_along(smooth.labels)) { cmat <- constraints[[ smooth.labels[ii] ]] index <- (ptr + 1):(ptr + ncol(cmat)) for (jay in index) { yy <- t(cmat[, jay-ptr, drop = FALSE]) yy <- kronecker(smomat[, jay, drop = FALSE], yy) # n x M Us <- mux22(U, yy, M = M, upper = TRUE, as.matrix = TRUE) # n * M Uss <- matrix(c(t(Us)), nrow = n * M, ncol = 1) Rsw <- qr.resid(qr, Uss) vRsw <- matrix(Rsw, nrow = n, ncol = M, byrow = TRUE) newans <- vbacksub(U, t(vRsw), M = M, n = n) ans[jay] <- sum(vRsw^2 + 2 * newans * deriv) } ptr <- ptr + ncol(cmat) } names(ans) <- dimnames(smomat)[[2]] ans } VGAM/R/family.categorical.R0000644000176200001440000034362413135276757015121 0ustar liggesusers# 
These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. process.categorical.data.VGAM <- expression({ extra$y.integer <- TRUE if (!all(w == 1)) extra$orig.w <- w if (!is.matrix(y)) { yf <- as.factor(y) lev <- levels(yf) llev <- length(lev) nn <- length(yf) y <- matrix(0, nn, llev) y[cbind(1:nn,as.vector(unclass(yf)))] <- 1 dimnames(y) <- list(names(yf), lev) if (llev <= 1) stop("the response matrix does not have 2 or more columns") } else { nn <- nrow(y) } nvec <- rowSums(y) if (min(y) < 0 || any(round(y) != y)) stop("the response must be non-negative counts (integers)") if (!exists("delete.zero.colns") || (exists("delete.zero.colns") && delete.zero.colns)) { sumy2 <- colSums(y) if (any(index <- sumy2 == 0)) { y <- y[, !index, drop = FALSE] sumy2 <- sumy2[!index] if (all(index) || ncol(y) <= 1) stop("'y' matrix has 0 or 1 columns") warning("Deleted ", sum(!index), " columns of the response matrix due to zero counts") } } if (any(miss <- (nvec == 0))) { smiss <- sum(miss) warning("Deleted ", smiss, " rows of the response matrix due to zero counts") x <- x[!miss,, drop = FALSE] y <- y[!miss,, drop = FALSE] w <- cbind(w) w <- w[!miss,, drop = FALSE] nvec <- nvec[!miss] nn <- nn - smiss } w <- w * nvec nvec[nvec == 0] <- 1 y <- prop.table(y, 1) # Convert to proportions if (length(mustart) + length(etastart) == 0) { mustart <- y + (1 / ncol(y) - y) / nvec } }) Deviance.categorical.data.vgam <- function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (ncol(y) == 1 || ncol(mu) == 1) stop("arguments 'y' and 'mu' must have at least 2 columns") double.eps <- sqrt( .Machine$double.xmin ) devy <- y nonz <- (y != 0) devy[nonz] <- y[nonz] * log(y[nonz]) devmu <- 0 * y # filler; y*log(mu) gives a warning (fixed up anyway). 
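  ## Each cell will contribute 2 * w[i] * y[i, j] * log(y[i, j] / mu[i, j])
  ## to the multinomial deviance, i.e.
  ##   D = 2 * sum_{i,j} w_i * y_ij * log(y_ij / mu_ij),
  ## with the convention 0 * log(0) == 0 (hence the 'nonz' mask above).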
if (any(smallmu <- (mu * (1 - mu) < double.eps))) { warning("fitted values close to 0 or 1") smu <- mu[smallmu] smy <- y[smallmu] smu <- ifelse(smu < double.eps, double.eps, smu) devmu[smallmu] <- smy * log(smu) } devmu[!smallmu] <- y[!smallmu] * log(mu[!smallmu]) devi <- 2 * (devy - devmu) if (residuals) { M <- if (is.matrix(eta)) ncol(eta) else 1 if (M > 1) return(NULL) devi <- devi %*% rep_len(1, ncol(devi)) # deviance = \sum_i devi[i] return(c(sign(y[, 1] - mu[, 1]) * sqrt(abs(devi) * w))) } else { dev.elts <- c(w) * devi if (summation) { sum(dev.elts) } else { dev.elts } } } dmultinomial <- function(x, size = NULL, prob, log = FALSE, dochecking = TRUE, smallno = 1.0e-7) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) x <- as.matrix(x) prob <- as.matrix(prob) if (((K <- ncol(x)) <= 1) || ncol(prob) != K) stop("arguments 'x' and 'prob' must be matrices with ", "two or more columns") if (dochecking) { if (min(prob) < 0) stop("argument 'prob' contains some negative values") if (any(abs((rsprob <- rowSums(prob)) - 1) > smallno)) stop("some rows of 'prob' do not add to unity") if (any(abs(x - round(x)) > smallno)) stop("argument 'x' should be integer-valued") if (length(size)) { if (any(abs(size - rowSums(x)) > smallno)) stop("rowSums(x) does not agree with argument 'size'") } else { size <- round(rowSums(x)) } } else { if (!length(size)) size <- round(rowSums(prob)) } logdensity <- lgamma(size + 1) + rowSums(x * log(prob) - lgamma(x + 1)) if (log.arg) logdensity else exp(logdensity) } # dmultinomial() sratio <- function(link = "logit", parallel = FALSE, reverse = FALSE, zero = NULL, whitespace = FALSE) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.logical(reverse) || length(reverse) != 1) stop("argument 'reverse' must be a single logical") stopifnot(is.logical(whitespace) && length(whitespace) == 1) fillerChar <- ifelse(whitespace, " ", "") new("vglmff", blurb = c("Stopping ratio model\n\n", "Links: ", namesof(if (reverse) ifelse(whitespace, "P[Y = j+1|Y <= j+1]", "P[Y=j+1|Y<=j+1]") else ifelse(whitespace, "P[Y = j|Y >= j]", "P[Y=j|Y>=j]"), link, earg = earg), "\n", "Variance: ", ifelse(whitespace, "mu[,j] * (1 - mu[,j]); -mu[,j] * mu[,k]", "mu[,j]*(1-mu[,j]); -mu[,j]*mu[,k]")), infos = eval(substitute(function(...) { list(M1 = NA, # zz -1? 
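       ## M1 is left as NA because the number of linear predictors,
       ## M = (number of response levels) - 1, is not known until the
       ## response has been seen.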
Q1 = NA, expected = TRUE, multipleResponses = FALSE, parameters.names = as.character(NA), parallel = .parallel , reverse = .reverse , whitespace = .whitespace , zero = .zero , link = .link ) }, list( .link = link, .zero = zero, .parallel = parallel, .reverse = reverse, .whitespace = whitespace ))), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = M) }), list( .parallel = parallel, .zero = zero ))), deviance = Deviance.categorical.data.vgam, initialize = eval(substitute(expression({ if (is.factor(y) && !is.ordered(y)) warning("response should be ordinal---see ordered()") delete.zero.colns <- TRUE eval(process.categorical.data.VGAM) extra$wy.prod <- TRUE M <- ncol(y) - 1 mynames <- if ( .reverse ) paste("P[Y", .fillerChar, "=", .fillerChar, 2:(M+1), "|Y", .fillerChar, "<=", .fillerChar, 2:(M+1), "]", sep = "") else paste("P[Y", .fillerChar, "=", .fillerChar, 1:M, "|Y", .fillerChar, ">=", .fillerChar, 1:M, "]", sep = "") predictors.names <- namesof(mynames, .link , short = TRUE, earg = .earg ) y.names <- paste("mu", 1:(M+1), sep = "") extra$mymat <- if ( .reverse ) tapplymat1(y, "cumsum") else tapplymat1(y[, ncol(y):1], "cumsum")[, ncol(y):1] extra$colnames.y <- colnames(y) }), list( .earg = earg, .link = link, .reverse = reverse, .fillerChar = fillerChar, .whitespace = whitespace ))), linkinv = eval(substitute( function(eta, extra = NULL) { if (!is.matrix(eta)) eta <- as.matrix(eta) fv.mat <- if ( .reverse ) { M <- NCOL(eta) djr <- eta2theta(eta, .link , earg = .earg ) temp <- tapplymat1(1 - djr[, M:1], "cumprod")[, M:1] cbind(1, djr) * cbind(temp, 1) } else { dj <- eta2theta(eta, .link , earg = .earg ) temp <- tapplymat1(1 - dj, "cumprod") cbind(dj, 1) * cbind(1, temp) } label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = 1) }, list( .earg = earg, .link = link, .reverse = reverse) )), last = eval(substitute(expression({ misc$link <- rep_len( .link , M) names(misc$link) <- mynames misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg misc$parameters <- mynames misc$reverse <- .reverse misc$fillerChar <- .fillerChar misc$whitespace <- .whitespace extra <- list() # kill what was used }), list( .earg = earg, .link = link, .reverse = reverse, .fillerChar = fillerChar, .whitespace = whitespace ))), linkfun = eval(substitute( function(mu, extra = NULL) { cump <- tapplymat1(mu, "cumsum") if ( .reverse ) { djr <- mu[, -1] / cump[, -1] theta2eta(djr, .link , earg = .earg ) } else { M <- ncol(mu) - 1 dj <- if (M == 1) mu[, 1] else mu[, 1:M] / (1 - cbind(0, cump[, 1:(M-1)])) theta2eta(dj, .link , earg = .earg ) } }, list( .earg = earg, .link = link, .reverse = reverse) )), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) ll.elts <- (if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE) if (summation) { 
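      ## ll.elts holds one weighted multinomial log-likelihood term per
      ## observation; summing them gives the scalar criterion monitored
      ## during the IRLS iterations.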
      sum(ll.elts)
    } else {
      ll.elts
    }
  },
  vfamily = c("sratio", "VGAMordinal", "VGAMcategorical"),
  validparams = eval(substitute(function(eta, y, extra = NULL) {
    djr <- eta2theta(eta, .link , earg = .earg )  # dj or djr
    okay1 <- all(is.finite(djr)) && all(0 < djr & djr < 1)
    okay1
  }, list( .earg = earg, .link = link, .reverse = reverse) )),
  deriv = eval(substitute(expression({
    if (!length(extra$mymat)) {
      extra$mymat <- if ( .reverse ) tapplymat1(y, "cumsum") else
                     tapplymat1(y[, ncol(y):1], "cumsum")[, ncol(y):1]
    }
    if ( .reverse ) {
      djr <- eta2theta(eta, .link , earg = .earg )
      Mp1 <- ncol(extra$mymat)
      c(w) * (y[, -1] / djr - extra$mymat[, -Mp1] / (1 - djr)) *
      dtheta.deta(djr, .link , earg = .earg )
    } else {
      dj <- eta2theta(eta, .link , earg = .earg )
      c(w) * (y[, -ncol(y)] / dj - extra$mymat[, -1] / (1 - dj)) *
      dtheta.deta(dj, .link , earg = .earg )
    }
  }), list( .earg = earg, .link = link, .reverse = reverse) )),
  weight = eval(substitute(expression({
    if ( .reverse ) {
      cump <- tapplymat1(mu, "cumsum")
      ddjr.deta <- dtheta.deta(djr, .link , earg = .earg )
      wz <- c(w) * ddjr.deta^2 *
            (mu[, -1] / djr^2 + cump[, 1:M] / (1 - djr)^2)
    } else {
      ccump <- tapplymat1(mu[, ncol(mu):1], "cumsum")[, ncol(mu):1]
      ddj.deta <- dtheta.deta(dj, .link , earg = .earg )
      wz <- c(w) * ddj.deta^2 *
            (mu[, 1:M] / dj^2 + ccump[, -1] / (1 - dj)^2)
    }
    wz
  }), list( .earg = earg, .link = link, .reverse = reverse ))))
}  # sratio()

cratio <- function(link = "logit", parallel = FALSE,
                   reverse = FALSE, zero = NULL, whitespace = FALSE) {
  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")
  if (!is.logical(reverse) || length(reverse) != 1)
    stop("argument 'reverse' must be a single logical")
  stopifnot(is.logical(whitespace) && length(whitespace) == 1)
  fillerChar <- ifelse(whitespace, " ", "")
  new("vglmff",
  blurb = c("Continuation ratio model\n\n",
            "Links: ",
            namesof(if (reverse)
              ifelse(whitespace, "P[Y < j+1|Y <= j+1]", "P[Y<j+1|Y<=j+1]") else
              ifelse(whitespace, "P[Y > j|Y >= j]", "P[Y>j|Y>=j]"),
              link, earg = earg), "\n",
            "Variance: ",
            ifelse(whitespace,
                   "mu[,j] * (1 - mu[,j]); -mu[,j] * mu[,k]",
                   "mu[,j]*(1-mu[,j]); -mu[,j]*mu[,k]")),
  infos = eval(substitute(function(...)
  { list(M1 = NA, # zz -1?
Q1 = NA, expected = TRUE, multipleResponses = FALSE, parameters.names = as.character(NA), parallel = .parallel , reverse = .reverse , whitespace = .whitespace , zero = .zero , link = .link ) }, list( .link = link, .zero = zero, .parallel = parallel, .reverse = reverse, .whitespace = whitespace ))), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = M) }), list( .parallel = parallel, .zero = zero ))), deviance = Deviance.categorical.data.vgam, initialize = eval(substitute(expression({ if (is.factor(y) && !is.ordered(y)) warning("response should be ordinal---see ordered()") delete.zero.colns <- TRUE eval(process.categorical.data.VGAM) M <- ncol(y) - 1 mynames <- if ( .reverse ) paste("P[Y", .fillerChar, "<", .fillerChar, 2:(M+1), "|Y", .fillerChar, "<=", .fillerChar, 2:(M+1), "]", sep = "") else paste("P[Y", .fillerChar, ">", .fillerChar, 1:M, "|Y", .fillerChar, ">=", .fillerChar, 1:M, "]", sep = "") predictors.names <- namesof(mynames, .link , earg = .earg , short = TRUE) y.names <- paste("mu", 1:(M+1), sep = "") extra$mymat <- if ( .reverse ) tapplymat1(y, "cumsum") else tapplymat1(y[, ncol(y):1], "cumsum")[, ncol(y):1] extra$colnames.y <- colnames(y) }), list( .earg = earg, .link = link, .reverse = reverse, .fillerChar = fillerChar, .whitespace = whitespace ))), linkinv = eval(substitute( function(eta, extra = NULL) { if (!is.matrix(eta)) eta <- as.matrix(eta) fv.mat <- if ( .reverse ) { M <- ncol(eta) djrs <- eta2theta(eta, .link , earg = .earg ) temp <- tapplymat1(djrs[, M:1], "cumprod")[, M:1] cbind(1, 1 - djrs) * cbind(temp, 1) } else { djs <- eta2theta(eta, .link , earg = .earg ) temp <- tapplymat1(djs, "cumprod") cbind(1 - djs, 1) * cbind(1, temp) } label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = 1) }, list( .earg = earg, .link = link, .reverse = reverse) )), last = eval(substitute(expression({ misc$link <- rep_len( .link , M) names(misc$link) <- mynames misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg misc$parameters <- mynames misc$reverse <- .reverse misc$fillerChar <- .fillerChar misc$whitespace <- .whitespace extra <- list() # kill what was used }), list( .earg = earg, .link = link, .reverse = reverse, .fillerChar = fillerChar, .whitespace = whitespace ))), linkfun = eval(substitute( function(mu, extra = NULL) { cump <- tapplymat1(mu, "cumsum") if ( .reverse ) { djrs <- 1 - mu[, -1] / cump[, -1] theta2eta(djrs, .link , earg = .earg ) } else { M <- ncol(mu) - 1 djs <- if (M == 1) 1 - mu[, 1] else 1 - mu[, 1:M] / (1 - cbind(0, cump[, 1:(M-1)])) theta2eta(djs, .link , earg = .earg ) } }, list( .earg = earg, .link = link, .reverse = reverse) )), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) ll.elts <- (if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE) if (summation) { 
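      ## Same structure as in sratio()'s @loglikelihood: with
      ## summation = FALSE the per-observation terms are returned,
      ## e.g. for logLik(fit, summation = FALSE) via logLik.vlm().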
sum(ll.elts) } else { ll.elts } } }, vfamily = c("cratio", "VGAMordinal", "VGAMcategorical"), validparams = eval(substitute(function(eta, y, extra = NULL) { djrs <- eta2theta(eta, .link , earg = .earg ) # djs or djrs okay1 <- all(is.finite(djrs)) && all(0 < djrs & djrs < 1) okay1 }, list( .earg = earg, .link = link, .reverse = reverse) )), deriv = eval(substitute(expression({ if (!length(extra$mymat)) { extra$mymat <- if ( .reverse ) tapplymat1(y, "cumsum") else tapplymat1(y[, ncol(y):1], "cumsum")[, ncol(y):1] } if ( .reverse ) { djrs <- eta2theta(eta, .link , earg = .earg ) Mp1 <- ncol(extra$mymat) -c(w) * (y[, -1]/(1 - djrs) - extra$mymat[, -Mp1]/djrs) * dtheta.deta(djrs, .link , earg = .earg ) } else { djs <- eta2theta(eta, .link , earg = .earg ) -c(w) * (y[, -ncol(y)]/(1 - djs) - extra$mymat[, -1]/djs) * dtheta.deta(djs, .link , earg = .earg ) } }), list( .earg = earg, .link = link, .reverse = reverse) )), weight = eval(substitute(expression({ if ( .reverse ) { cump <- tapplymat1(mu, "cumsum") ddjrs.deta <- dtheta.deta(djrs, .link , earg = .earg ) wz <- c(w) * ddjrs.deta^2 * (mu[, -1] / (1 - djrs)^2 + cump[, 1:M] / djrs^2) } else { ccump <- tapplymat1(mu[, ncol(mu):1], "cumsum")[, ncol(mu):1] ddjs.deta <- dtheta.deta(djs, .link , earg = .earg ) wz <- c(w) * ddjs.deta^2 * (mu[, 1:M] / (1 - djs)^2 + ccump[, -1] / djs^2) } wz }), list( .earg = earg, .link = link, .reverse = reverse )))) } # cratio() vglm.multinomial.deviance.control <- function(maxit = 21, panic = FALSE, ...) { if (maxit < 1) { warning("bad value of maxit; using 21 instead") maxit <- 21 } list(maxit = maxit, panic = as.logical(panic)[1]) } vglm.multinomial.control <- function(maxit = 21, panic = FALSE, criterion = c("aic1", "aic2", names( .min.criterion.VGAM )), ...) { if (mode(criterion) != "character" && mode(criterion) != "name") criterion <- as.character(substitute(criterion)) criterion <- match.arg(criterion, c("aic1", "aic2", names( .min.criterion.VGAM )))[1] if (maxit < 1) { warning("bad value of maxit; using 21 instead") maxit <- 21 } list(maxit = maxit, panic = as.logical(panic)[1], criterion = criterion, min.criterion = c("aic1" = FALSE, "aic2" = TRUE, .min.criterion.VGAM)) } vglm.VGAMcategorical.control <- function(maxit = 30, trace = FALSE, panic = TRUE, ...) 
{ if (maxit < 1) { warning("bad value of maxit; using 200 instead") maxit <- 200 } list(maxit = maxit, trace = as.logical(trace)[1], panic = as.logical(panic)[1]) } # vglm.VGAMcategorical.control() multinomial <- function(zero = NULL, parallel = FALSE, nointercept = NULL, refLevel = "(Last)", whitespace = FALSE) { if (length(refLevel) != 1) stop("the length of 'refLevel' must be one") if ( is.numeric(refLevel) && !is.Numeric(refLevel, integer.valued = TRUE, positive = TRUE)) stop("argument 'refLevel' is not a positive integer") if (is.character(refLevel)) { if (refLevel == "(Last)") refLevel <- -1 } if (is.factor(refLevel)) { if (is.ordered(refLevel)) warning("argument 'refLevel' is from an ordered factor") refLevel <- as.character(refLevel) == levels(refLevel) refLevel <- (seq_along(refLevel))[refLevel] if (!is.Numeric(refLevel, length.arg = 1, integer.valued = TRUE, positive = TRUE)) stop("could not coerce 'refLevel' into a single positive integer") } stopifnot(is.logical(whitespace) && length(whitespace) == 1) fillerChar <- ifelse(whitespace, " ", "") new("vglmff", blurb = c("Multinomial logit model\n\n", "Links: ", if (is.numeric(refLevel)) { if (refLevel < 0) { ifelse(whitespace, "log(mu[,j] / mu[,M+1]), j = 1:M,\n", "log(mu[,j]/mu[,M+1]), j=1:M,\n") } else { if (refLevel == 1) { paste("log(mu[,", "j]", fillerChar, "/", fillerChar, "mu[,", refLevel, "]), j", fillerChar, "=", fillerChar, "2:(M+1),\n", sep = "") } else { paste("log(mu[,", "j]", fillerChar, "/", "mu[,", refLevel, "]), j", fillerChar, "=", fillerChar, "c(1:", refLevel-1, ",", fillerChar, refLevel+1, ":(M+1)),\n", sep = "") } } } else { # refLevel is character paste("log(mu[,", "j]", fillerChar, "/", "mu[,'", refLevel, "']), j", fillerChar, " != '", fillerChar, refLevel, "',\n", sep = "") }, "Variance: ", ifelse(whitespace, "mu[,j] * (1 - mu[,j]); -mu[,j] * mu[,k]", "mu[,j]*(1-mu[,j]); -mu[,j]*mu[,k]")), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , apply.int = TRUE, constraints = constraints) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = M) constraints <- cm.nointercept.VGAM(constraints, x, .nointercept , M) }), list( .parallel = parallel, .zero = zero, .nointercept = nointercept ))), deviance = Deviance.categorical.data.vgam, infos = eval(substitute(function(...) { list(parallel = .parallel , refLevel = .refLevel , # original M1 = -1, link = "multilogit", link1parameter = FALSE, # The link is multiparameter expected = TRUE, hadof = FALSE, multipleResponses = FALSE, parameters.names = as.character(NA), zero = .zero ) }, list( .zero = zero, .refLevel = refLevel, .parallel = parallel ))), initialize = eval(substitute(expression({ if (is.factor(y) && is.ordered(y)) warning("response should be nominal, not ordinal") delete.zero.colns <- TRUE eval(process.categorical.data.VGAM) M <- ncol(y)-1 use.refLevel <- if (is.numeric( .refLevel )) { if ( .refLevel < 0) M+1 else .refLevel } else { # Is character. Match it with the levels of the response. tmp6 <- match( .refLevel , colnames(y)) if (is.na(tmp6)) stop("could not match argument 'refLevel' with any columns ", "of the response matrix") tmp6 } if (use.refLevel > (M+1)) stop("argument 'refLevel' has a value that is too high") extra$use.refLevel <- use.refLevel # Used in all other slots. 
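    ## Sketch of the effect of 'refLevel' ('mydat' below is a hypothetical
    ## data frame whose response y is a 3-level factor): refLevel = 2 makes
    ## level 2 the baseline, so the M = 2 linear predictors are
    ## log(mu[, 1] / mu[, 2]) and log(mu[, 3] / mu[, 2]):
    ##   fit <- vglm(y ~ x2, multinomial(refLevel = 2), data = mydat)
    ##   coef(fit, matrix = TRUE)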
allbut.refLevel <- (1:(M+1))[-use.refLevel] predictors.names <- paste("log(mu[,", allbut.refLevel, "]", .fillerChar, "/", .fillerChar, "mu[,", use.refLevel, "])", sep = "") extra$colnames.y <- colnames(y) }), list( .refLevel = refLevel, .fillerChar = fillerChar, .whitespace = whitespace ))), linkinv = eval(substitute( function(eta, extra = NULL) { if (anyNA(eta)) warning("there are NAs in eta in slot inverse") ans <- multilogit(eta, refLevel = extra$use.refLevel, # .refLevel , inverse = TRUE) if (anyNA(ans)) warning("there are NAs here in slot linkinv") if (min(ans) == 0 || max(ans) == 1) warning("fitted probabilities numerically 0 or 1 occurred") label.cols.y(ans, colnames.y = extra$colnames.y, NOS = 1) }), list( .refLevel = refLevel )), last = eval(substitute(expression({ misc$link <- "multilogit" misc$earg <- list(multilogit = list( M = M, refLevel = use.refLevel )) dy <- dimnames(y) if (!is.null(dy[[2]])) dimnames(fit$fitted.values) <- dy misc$nointercept <- .nointercept misc$parallel <- .parallel misc$refLevel <- use.refLevel # if ( .refLevel<0) M+1 else .refLevel misc$refLevel.orig <- .refLevel misc$zero <- .zero }), list( .refLevel = refLevel, .nointercept = nointercept, .parallel = parallel, .zero = zero ))), linkfun = eval(substitute( function(mu, extra = NULL) { multilogit(mu, refLevel = extra$use.refLevel) # .refLevel }), list( .refLevel = refLevel )), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) ll.elts <- (if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE) if (summation) { sum(ll.elts) } else { ll.elts } } }, vfamily = c("multinomial", "VGAMcategorical"), hadof = eval(substitute( function(eta, extra = list(), linpred.index = 1, w = 1, dim.wz = c(NROW(eta), NCOL(eta) * (NCOL(eta)+1)/2), deriv = 1, ...) 
{ multinomial.eim.deriv1 <- function(mu, jay, w) { M <- ncol(mu) # Not ncol(mu) - 1 since one coln has been deleted MM12 <- M * (M + 1) / 2 # Full matrix wz1 <- matrix(0, NROW(mu), MM12) ind5 <- iam(NA, NA, M = M, both = TRUE) for (i in 1:MM12) { i1 <- ind5$row.index[i] j1 <- ind5$col.index[i] if (i1 == jay && j1 == jay) { wz1[, iam(i1, j1, M = M)] <- (1 - 2 * mu[, i1]) #* } if (i1 == jay && j1 != jay) { wz1[, iam(i1, j1, M = M)] <- -mu[, j1] # -(1 - 2 * mu[, i1]) * } if (i1 != jay && j1 == jay) { wz1[, iam(i1, j1, M = M)] <- -mu[, i1] # -(1 - 2 * mu[, j1]) * } } # for (i) c(w) * wz1 } # multinomial.eim.deriv1 multinomial.eim.deriv2 <- function(mu, jay, w) { M <- ncol(mu) # Not ncol(mu) - 1 since one coln has been deleted zz.yettodo <- 0 # NA_real_ MM12 <- M * (M + 1) / 2 # Full matrix wz1 <- matrix(NA_real_, NROW(mu), MM12) ind5 <- iam(NA, NA, M = M, both = TRUE) for (i in 1:MM12) { i1 <- ind5$row.index[i] j1 <- ind5$col.index[i] if (i1 == jay && j1 == jay) { wz1[, iam(i1, j1, M = M)] <- zz.yettodo } if (i1 != jay && j1 == i1) { wz1[, iam(i1, j1, M = M)] <- -mu[, i1] * mu[, jay] * (1 - 2 * mu[, i1] - 2 * mu[, jay] + 6 * mu[, i1] * mu[, jay]) } if (i1 == jay && j1 != jay) { wz1[, iam(i1, j1, M = M)] <- zz.yettodo } if (i1 != jay && j1 == jay) { wz1[, iam(i1, j1, M = M)] <- zz.yettodo } if (any(is.na(wz1[, iam(i1, j1, M = M)]))) { wz1[, iam(i1, j1, M = M)] <- 2 * mu[, i1] * mu[, j1] * mu[, jay] * (1 - 3 * mu[, jay]) } } # for (i) cat("\n\n\n\n") c(w) * wz1 } # multinomial.eim.deriv2 M <- NCOL(eta) use.refLevel <- extra$use.refLevel # Restore its value if (!is.numeric(use.refLevel)) { warning("variable 'use.refLevel' cannot be found. ", "Trying the original value.") use.refLevel <- .refLevel # Only if numeric... if (use.refLevel == "(Last)") use.refLevel <- M+1 } mu.use <- multilogit(eta, refLevel = use.refLevel, inverse = TRUE) mu.use <- pmax(mu.use, .Machine$double.eps * 1.0e-0) index <- iam(NA, NA, M, both = TRUE, diag = TRUE) myinc <- (index$row.index >= use.refLevel) index$row.index[myinc] <- index$row.index[myinc] + 1 myinc <- (index$col.index >= use.refLevel) index$col.index[myinc] <- index$col.index[myinc] + 1 switch(as.character(deriv), "0" = { wz <- -mu.use[, index$row] * mu.use[, index$col] wz[, 1:M] <- wz[, 1:M] + mu.use[, -use.refLevel ] c(w) * wz }, "1" = { multinomial.eim.deriv1(mu.use[, -use.refLevel, drop = FALSE], jay = linpred.index, w = w) }, "2" = { multinomial.eim.deriv2(mu.use[, -use.refLevel, drop = FALSE], jay = linpred.index, w = w) }, stop("argument 'deriv' must be 0 or 1 or 2")) }, list( .refLevel = refLevel ))), validparams = eval(substitute(function(eta, y, extra = NULL) { probs <- multilogit(eta, refLevel = extra$use.refLevel, inverse = TRUE) # .refLevel okay1 <- all(is.finite(probs)) && all(0 < probs & probs < 1) okay1 }, list( .refLevel = refLevel ))), deriv = eval(substitute(expression({ use.refLevel <- extra$use.refLevel # Restore its value c(w) * (y[, -use.refLevel] - mu[, -use.refLevel]) }), list( .refLevel = refLevel ))), weight = eval(substitute(expression({ mytiny <- (mu < sqrt(.Machine$double.eps)) | (mu > 1.0 - sqrt(.Machine$double.eps)) if (M == 1) { wz <- mu[, 3 - use.refLevel] * (1 - mu[, 3 - use.refLevel]) } else { index <- iam(NA, NA, M, both = TRUE, diag = TRUE) myinc <- (index$row.index >= use.refLevel) index$row.index[myinc] <- index$row.index[myinc] + 1 myinc <- (index$col.index >= use.refLevel) index$col.index[myinc] <- index$col.index[myinc] + 1 wz <- -mu[, index$row] * mu[, index$col] wz[, 1:M] <- wz[, 1:M] + mu[, -use.refLevel ] } atiny 
<- (mytiny %*% rep(1, ncol(mu))) > 0 # apply(mytiny, 1, any) if (any(atiny)) { if (M == 1) wz[atiny] <- wz[atiny] * (1 + .Machine$double.eps^0.5) + .Machine$double.eps else wz[atiny, 1:M] <- wz[atiny, 1:M] * (1 + .Machine$double.eps^0.5) + .Machine$double.eps } c(w) * wz }), list( .refLevel = refLevel )))) } # multinomial() cumulative <- function(link = "logit", parallel = FALSE, # Does not apply to the intercept reverse = FALSE, multiple.responses = FALSE, whitespace = FALSE) { apply.parint <- FALSE link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") stopifnot(is.logical(whitespace) && length(whitespace) == 1) fillerChar <- ifelse(whitespace, " ", "") if (!is.logical(multiple.responses) || length(multiple.responses) != 1) stop("argument 'multiple.responses' must be a single logical") if (!is.logical(reverse) || length(reverse) != 1) stop("argument 'reverse' must be a single logical") new("vglmff", blurb = if ( multiple.responses ) c(paste("Multivariate cumulative", link, "model\n\n"), "Links: ", namesof(if (reverse) ifelse(whitespace, "P[Y1 >= j+1]", "P[Y1>=j+1]") else ifelse(whitespace, "P[Y1 <= j]", "P[Y1<=j]"), link, earg = earg), ", ...") else c(paste("Cumulative", link, "model\n\n"), "Links: ", namesof(if (reverse) ifelse(whitespace, "P[Y >= j+1]", "P[Y>=j+1]") else ifelse(whitespace, "P[Y <= j]", "P[Y<=j]"), link, earg = earg)), infos = eval(substitute(function(...) { list(M1 = NA, # zz -1? Q1 = NA, expected = TRUE, hadof = TRUE, multipleResponses = .multiple.responses , parameters.names = as.character(NA), parallel = .parallel , reverse = .reverse , whitespace = .whitespace , link = .link ) }, list( .link = link, .parallel = parallel, .multiple.responses = multiple.responses, .reverse = reverse, .whitespace = whitespace ))), constraints = eval(substitute(expression({ if ( .multiple.responses ) { if ( !length(constraints) ) { Llevels <- extra$Llevels NOS <- extra$NOS Hk.matrix <- kronecker(diag(NOS), matrix(1,Llevels-1,1)) constraints <- cm.VGAM(Hk.matrix, x = x, bool = .parallel , apply.int = .apply.parint , constraints = constraints) } } else { constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , apply.int = .apply.parint , constraints = constraints) } }), list( .parallel = parallel, .multiple.responses = multiple.responses, .apply.parint = apply.parint ))), deviance = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { answer <- if ( .multiple.responses ) { totdev <- 0 NOS <- extra$NOS Llevels <- extra$Llevels for (iii in 1:NOS) { cindex <- (iii-1)*(Llevels-1) + 1:(Llevels-1) aindex <- (iii-1)*(Llevels ) + 1:(Llevels) totdev <- totdev + Deviance.categorical.data.vgam( mu = mu[, aindex, drop = FALSE], y = y[, aindex, drop = FALSE], w = w, residuals = residuals, eta = eta[, cindex, drop = FALSE], extra = extra, summation = TRUE) } totdev } else { Deviance.categorical.data.vgam(mu = mu, y = y, w = w, residuals = residuals, eta = eta, extra = extra, summation = TRUE) } answer }, list( .earg = earg, .link = link, .multiple.responses = multiple.responses ) )), initialize = eval(substitute(expression({ if (colnames(x)[1] != "(Intercept)") warning("there seems to be no intercept term!") if (is.factor(y) && !is.ordered(y)) warning("response should be ordinal---see ordered()") extra$multiple.responses <- .multiple.responses if ( .multiple.responses ) { checkCut(y) # Check the input; stops if there is an error. 
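      ## This branch fits several ordinal responses at once, each column of
      ## the response coded with the integer levels 1, 2, ..., Llevels.
      ## Hypothetical sketch ('ymat' holds one such column per response):
      ##   fit <- vglm(ymat ~ x2, cumulative(multiple.responses = TRUE),
      ##               data = mydat)
      ## Each response then contributes Llevels - 1 linear predictors.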
if (any(w != 1) || NCOL(w) != 1) stop("the 'weights' argument must be a vector of all ones") Llevels <- max(y) delete.zero.colns <- FALSE orig.y <- cbind(y) # Convert y into a matrix if necessary NOS <- NCOL(orig.y) use.y <- use.mustart <- NULL for (iii in 1:NOS) { y <- as.factor(orig.y[,iii]) eval(process.categorical.data.VGAM) use.y <- cbind(use.y, y) use.mustart <- cbind(use.mustart, mustart) } mustart <- use.mustart y <- use.y # n x (Llevels*NOS) M <- NOS * (Llevels-1) mynames <- y.names <- NULL for (iii in 1:NOS) { Y.names <- paste("Y", iii, sep = "") mu.names <- paste("mu", iii, ".", sep = "") mynames <- c(mynames, if ( .reverse ) paste("P[", Y.names, ">=", 2:Llevels, "]", sep = "") else paste("P[", Y.names, "<=", 1:(Llevels-1), "]", sep = "")) y.names <- c(y.names, paste(mu.names, 1:Llevels, sep = "")) } predictors.names <- namesof(mynames, .link , short = TRUE, earg = .earg ) extra$NOS <- NOS extra$Llevels <- Llevels } else { delete.zero.colns <- TRUE eval(process.categorical.data.VGAM) M <- ncol(y) - 1 mynames <- if ( .reverse ) paste("P[Y", .fillerChar , ">=", .fillerChar, 2:(1+M), "]", sep = "") else paste("P[Y", .fillerChar , "<=", .fillerChar, 1:M, "]", sep = "") predictors.names <- namesof(mynames, .link , short = TRUE, earg = .earg ) y.names <- paste("mu", 1:(M+1), sep = "") if (NCOL(w) == 1) { if (length(mustart) && all(c(y) %in% c(0, 1))) for (iii in 1:ncol(y)) mustart[,iii] <- weighted.mean(y[,iii], w) } extra$colnames.y <- colnames(y) } }), list( .reverse = reverse, .multiple.responses = multiple.responses, .link = link, .earg = earg, .fillerChar = fillerChar, .whitespace = whitespace ))), linkinv = eval(substitute( function(eta, extra = NULL) { answer <- if ( .multiple.responses ) { NOS <- extra$NOS Llevels <- extra$Llevels fv.mat <- matrix(0, nrow(eta), NOS * Llevels) for (iii in 1:NOS) { cindex <- (iii-1)*(Llevels-1) + 1:(Llevels-1) aindex <- (iii-1)*(Llevels) + 1:(Llevels) if ( .reverse ) { ccump <- cbind(1, eta2theta(eta[, cindex, drop = FALSE], .link , earg = .earg )) fv.mat[, aindex] <- cbind(-tapplymat1(ccump, "diff"), ccump[, ncol(ccump)]) } else { cump <- cbind(eta2theta(eta[, cindex, drop = FALSE], .link , earg = .earg ), 1) fv.mat[, aindex] <- cbind(cump[, 1], tapplymat1(cump, "diff")) } } label.cols.y(fv.mat, NOS = NOS, colnames.y = if (is.null(extra$colnames.y)) NULL else rep_len(extra$colnames.y, ncol(fv.mat))) } else { fv.mat <- if ( .reverse ) { ccump <- cbind(1, eta2theta(eta, .link , earg = .earg )) cbind(-tapplymat1(ccump, "diff"), ccump[, ncol(ccump)]) } else { cump <- cbind(eta2theta(eta, .link , earg = .earg ), 1) cbind(cump[, 1], tapplymat1(cump, "diff")) } label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = 1) } answer }, list( .reverse = reverse, .link = link, .earg = earg, .multiple.responses = multiple.responses ))), last = eval(substitute(expression({ if ( .multiple.responses ) { misc$link <- .link misc$earg <- list( .earg ) } else { misc$link <- rep_len( .link , M) names(misc$link) <- mynames misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg } misc$fillerChar <- .fillerChar misc$whitespace <- .whitespace misc$parameters <- mynames misc$reverse <- .reverse misc$parallel <- .parallel misc$multiple.responses <- .multiple.responses }), list( .reverse = reverse, .parallel = parallel, .link = link, .earg = earg, .fillerChar = fillerChar, .multiple.responses = multiple.responses, .whitespace = whitespace ))), linkfun = eval(substitute( function(mu, extra = NULL) { answer <- if ( 
.multiple.responses ) { NOS <- extra$NOS Llevels <- extra$Llevels eta.matrix <- matrix(0, nrow(mu), NOS*(Llevels-1)) for (iii in 1:NOS) { cindex <- (iii-1)*(Llevels-1) + 1:(Llevels-1) aindex <- (iii-1)*(Llevels) + 1:(Llevels) cump <- tapplymat1(as.matrix(mu[, aindex]), "cumsum") eta.matrix[,cindex] = theta2eta(if ( .reverse ) 1-cump[, 1:(Llevels-1)] else cump[, 1:(Llevels-1)], .link , earg = .earg ) } eta.matrix } else { cump <- tapplymat1(as.matrix(mu), "cumsum") M <- NCOL(mu) - 1 theta2eta(if ( .reverse ) 1-cump[, 1:M] else cump[, 1:M], .link , earg = .earg ) } answer }, list( .link = link, .earg = earg, .reverse = reverse, .multiple.responses = multiple.responses ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) ll.elts <- (if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE) if (summation) { sum(ll.elts) } else { ll.elts } } }, vfamily = c("cumulative", "VGAMordinal", "VGAMcategorical"), hadof = eval(substitute( function(eta, extra = list(), linpred.index = 1, w = 1, dim.wz = c(NROW(eta), NCOL(eta) * (NCOL(eta)+1)/2), deriv = 1, ...) { if ( .multiple.responses ) return(NA_real_, dim.wz[1], dim.wz[2]) cumulative.eim.deriv1 <- function(mu, jay, w, reverse = FALSE) { M <- ncol(mu) - 1 wz1 <- matrix(0, NROW(mu), M + M-1) # Tridiagonal wz1[, iam(jay, jay, M = M)] <- 1 / (mu[, jay+1])^2 - 1 / (mu[, jay ])^2 if (1 <= jay-1) wz1[, iam(jay-1, jay-1, M = M)] <- -1 / (mu[, jay ])^2 if (jay+1 <= M) wz1[, iam(jay+1, jay+1, M = M)] <- 1 / (mu[, jay+1])^2 if (1 < M && jay+1 <= M) wz1[, iam(jay, jay+1, M = M)] <- -1 / (mu[, jay+1])^2 if (1 < M && 1 <= jay-1) wz1[, iam(jay-1, jay, M = M)] <- 1 / (mu[, jay ])^2 (if (reverse) -c(w) else c(w)) * wz1 } # cumulative.eim.deriv1 cumulative.eim.deriv2 <- function(mu, jay, w) { M <- ncol(mu) - 1 wz2 <- matrix(0, NROW(mu), M + M-1) # Tridiagonal wz2[, iam(jay, jay, M = M)] <- 1 / (mu[, jay+1])^3 + 1 / (mu[, jay ])^3 if (1 <= jay-1) wz2[, iam(jay-1, jay-1, M = M)] <- 1 / (mu[, jay ])^3 if (jay+1 <= M) wz2[, iam(jay+1, jay+1, M = M)] <- 1 / (mu[, jay+1])^3 if (1 < M && jay+1 <= M) wz2[, iam(jay, jay+1, M = M)] <- -1 / (mu[, jay+1])^3 if (1 < M && 1 <= jay-1) wz2[, iam(jay-1, jay, M = M)] <- -1 / (mu[, jay ])^3 2 * c(w) * wz2 } # cumulative.eim.deriv2 probs <- if ( .reverse ) { ccump <- cbind(1, eta2theta(eta, .link , earg = .earg )) cbind(-tapplymat1(ccump, "diff"), ccump[, ncol(ccump)]) } else { cump <- cbind(eta2theta(eta, .link , earg = .earg ), 1) cbind(cump[, 1], tapplymat1(cump, "diff")) } mu.use <- pmax(probs, .Machine$double.eps * 1.0e-0) switch(as.character(deriv), "0" = { M <- ncol(eta) wz <- c(w) * (1 / mu.use[, 1:M] + 1 / mu.use[, -1]) if (M > 1) wz <- cbind(wz, -c(w) / mu.use[, 2:M]) wz }, "1" = { cumulative.eim.deriv1(mu = mu.use, jay = linpred.index, w = w, reverse = .reverse ) }, "2" = { cumulative.eim.deriv2(mu = mu.use, jay = linpred.index, w = w) }, stop("argument 'deriv' must be 0 or 1 or 2")) }, list( .link = link, .earg = earg, .reverse = reverse, .multiple.responses = multiple.responses 
))), validparams = eval(substitute(function(eta, y, extra = NULL) { if ( .multiple.responses ) { return(TRUE) } probs <- if ( .reverse ) { ccump <- cbind(1, eta2theta(eta, .link , earg = .earg )) cbind(-tapplymat1(ccump, "diff"), ccump[, ncol(ccump)]) } else { cump <- cbind(eta2theta(eta, .link , earg = .earg ), 1) cbind(cump[, 1], tapplymat1(cump, "diff")) } okay1 <- all(is.finite(probs)) && all(0 < probs & probs < 1) if (!okay1) warning("It seems that the nonparallelism assumption has ", "resulted in intersecting linear/additive predictors. ", "Try propodds() or fitting a partial nonproportional ", "odds model or choosing ", "some other link function, etc.") okay1 }, list( .link = link, .earg = earg, .reverse = reverse, .multiple.responses = multiple.responses ))), deriv = eval(substitute(expression({ mu.use <- pmax(mu, .Machine$double.eps * 1.0e-0) deriv.answer <- if ( .multiple.responses ) { NOS <- extra$NOS Llevels <- extra$Llevels dcump.deta <- resmat <- matrix(0, n, NOS * (Llevels-1)) for (iii in 1:NOS) { cindex <- (iii-1)*(Llevels-1) + 1:(Llevels-1) aindex <- (iii-1)*(Llevels) + 1:(Llevels-1) cump <- eta2theta(eta[,cindex, drop = FALSE], .link , earg = .earg ) dcump.deta[,cindex] <- dtheta.deta(cump, .link , earg = .earg ) resmat[,cindex] <- (y[, aindex, drop = FALSE] /mu.use[, aindex, drop = FALSE] - y[, 1+aindex, drop = FALSE] /mu.use[, 1+aindex, drop = FALSE]) } (if ( .reverse ) -c(w) else c(w)) * dcump.deta * resmat } else { cump <- eta2theta(eta, .link , earg = .earg ) dcump.deta <- dtheta.deta(cump, .link , earg = .earg ) c(if ( .reverse ) -c(w) else c(w)) * dcump.deta * (y[, -(M+1)] / mu.use[, -(M+1)] - y[, -1] / mu.use[, -1]) } deriv.answer }), list( .link = link, .earg = earg, .reverse = reverse, .multiple.responses = multiple.responses ))), weight = eval(substitute(expression({ if ( .multiple.responses ) { NOS <- extra$NOS Llevels <- extra$Llevels wz <- matrix(0, n, NOS*(Llevels-1)) # Diag elts only for a start for (iii in 1:NOS) { cindex <- (iii-1)*(Llevels-1) + 1:(Llevels-1) aindex <- (iii-1)*(Llevels) + 1:(Llevels-1) wz[, cindex] <- c(w) * dcump.deta[, cindex, drop = FALSE]^2 * (1 / mu.use[, aindex, drop = FALSE] + 1 / mu.use[, 1+aindex, drop = FALSE]) } if (Llevels-1 > 1) { iii <- 1 oindex <- (iii-1) * (Llevels-1) + 1:(Llevels-2) wz <- cbind(wz, -c(w) * dcump.deta[, oindex] * dcump.deta[, 1+oindex]) if (NOS > 1) { cptrwz <- ncol(wz) # Like a pointer wz <- cbind(wz, matrix(0, nrow(wz), (NOS-1) * (Llevels-1))) for (iii in 2:NOS) { oindex <- (iii-1)*(Llevels-1) + 1:(Llevels-2) wz[,cptrwz + 1 + (1:(Llevels-2))] = -c(w) * dcump.deta[,oindex] * dcump.deta[, 1+oindex] cptrwz <- cptrwz + Llevels - 1 # Move it along a bit } } } } else { wz <- c(w) * dcump.deta^2 * (1/mu.use[, 1:M] + 1/mu.use[, -1]) if (M > 1) wz <- cbind(wz, -c(w) * dcump.deta[, -M] * dcump.deta[, 2:M] / mu.use[, 2:M]) } wz }), list( .earg = earg, .link = link, .multiple.responses = multiple.responses )))) } # cumulative() propodds <- function(reverse = TRUE, whitespace = FALSE) { if (!is.logical(reverse) || length(reverse) != 1) stop("argument 'reverse' must be a single logical") cumulative(parallel = TRUE, reverse = reverse, whitespace = whitespace) } acat <- function(link = "loge", parallel = FALSE, reverse = FALSE, zero = NULL, whitespace = FALSE) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.logical(reverse) || length(reverse) != 1) stop("argument 'reverse' must be a single logical") stopifnot(is.logical(whitespace) && length(whitespace) == 1) 
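  # Adjacent-categories family: with reverse = FALSE the j-th linear
  # predictor is eta_j = link(P[Y = j+1] / P[Y = j]), j = 1, ..., M.
  # A hedged usage sketch in the style of ?acat ('pneumo' is a VGAM
  # data set; 'let' is log(exposure.time) as in the cumulative()
  # sketch earlier):
  #   fit <- vglm(cbind(normal, mild, severe) ~ let, acat, data = pneumo)
  #   coef(fit, matrix = TRUE)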
fillerChar <- ifelse(whitespace, " ", "") new("vglmff", blurb = c("Adjacent-categories model\n\n", "Links: ", namesof(if (reverse) ifelse(whitespace, "P[Y = j] / P[Y = j + 1]", "P[Y=j]/P[Y=j+1]") else ifelse(whitespace, "P[Y = j + 1] / P[Y = j]", "P[Y=j+1]/P[Y=j]"), link, earg = earg), "\n", "Variance: ", ifelse(whitespace, "mu[,j] * (1 - mu[,j]); -mu[,j] * mu[,k]", "mu[,j]*(1-mu[,j]); -mu[,j]*mu[,k]")), infos = eval(substitute(function(...) { list(M1 = NA, # zz -1? Q1 = NA, expected = TRUE, multipleResponses = FALSE, parameters.names = as.character(NA), parallel = .parallel , reverse = .reverse , whitespace = .whitespace , zero = .zero , link = .link ) }, list( .link = link, .zero = zero, .parallel = parallel, .reverse = reverse, .whitespace = whitespace ))), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = M) }), list( .parallel = parallel, .zero = zero ))), deviance = Deviance.categorical.data.vgam, initialize = eval(substitute(expression({ if (is.factor(y) && !is.ordered(y)) warning("response should be ordinal---see ordered()") delete.zero.colns <- TRUE eval(process.categorical.data.VGAM) M <- ncol(y) - 1 mynames <- if ( .reverse ) paste("P[Y", .fillerChar , "=", 1:M, "]", .fillerChar , "/", .fillerChar , "P[Y", .fillerChar , "=", .fillerChar , 2:(M+1), "]", sep = "") else paste("P[Y", .fillerChar , "=", .fillerChar , 2:(M+1), "]", .fillerChar , "/", .fillerChar , "P[Y", .fillerChar , "=", .fillerChar , 1:M, "]", sep = "") predictors.names <- namesof(mynames, .link , short = TRUE, earg = .earg ) y.names <- paste("mu", 1:(M+1), sep = "") extra$colnames.y <- colnames(y) }), list( .earg = earg, .link = link, .reverse = reverse, .fillerChar = fillerChar, .whitespace = whitespace ))), linkinv = eval(substitute( function(eta, extra = NULL) { if (!is.matrix(eta)) eta <- as.matrix(eta) M <- ncol(eta) fv.mat <- if ( .reverse ) { zetar <- eta2theta(eta, .link , earg = .earg ) temp <- tapplymat1(zetar[, M:1], "cumprod")[, M:1, drop = FALSE] cbind(temp, 1) / drop(1 + temp %*% rep(1, ncol(temp))) } else { zeta <- eta2theta(eta, .link , earg = .earg ) temp <- tapplymat1(zeta, "cumprod") cbind(1, temp) / drop(1 + temp %*% rep(1, ncol(temp))) } label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = 1) }, list( .earg = earg, .link = link, .reverse = reverse) )), last = eval(substitute(expression({ misc$link <- rep_len( .link , M) names(misc$link) <- mynames misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg misc$parameters <- mynames misc$reverse <- .reverse misc$fillerChar <- .fillerChar misc$whitespace <- .whitespace }), list( .earg = earg, .link = link, .reverse = reverse, .fillerChar = fillerChar, .whitespace = whitespace ))), linkfun = eval(substitute( function(mu, extra = NULL) { M <- ncol(mu) - 1 theta2eta(if ( .reverse ) mu[, 1:M] / mu[, -1] else mu[, -1] / mu[, 1:M], .link , earg = .earg ) }, list( .earg = earg, .link = link, .reverse = reverse) )), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if 
(max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) ll.elts <- (if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE) if (summation) { sum(ll.elts) } else { ll.elts } } }, vfamily = c("acat", "VGAMordinal", "VGAMcategorical"), validparams = eval(substitute(function(eta, y, extra = NULL) { if (!is.matrix(eta)) eta <- as.matrix(eta) M <- ncol(eta) probs <- if ( .reverse ) { zetar <- eta2theta(eta, .link , earg = .earg ) temp <- tapplymat1(zetar[, M:1], "cumprod")[, M:1, drop = FALSE] cbind(temp, 1) / drop(1 + temp %*% rep(1, ncol(temp))) } else { zeta <- eta2theta(eta, .link , earg = .earg ) temp <- tapplymat1(zeta, "cumprod") cbind(1, temp) / drop(1 + temp %*% rep(1, ncol(temp))) } okay1 <- all(is.finite(probs)) && all(0 < probs & probs < 1) okay1 }, list( .earg = earg, .link = link, .reverse = reverse) )), deriv = eval(substitute(expression({ zeta <- eta2theta(eta, .link , earg = .earg ) # May be zetar dzeta.deta <- dtheta.deta(zeta, .link , earg = .earg ) d1 <- acat.deriv(zeta, M = M, n = n, reverse = .reverse ) score <- attr(d1, "gradient") / d1 answer <- if ( .reverse ) { cumy <- tapplymat1(y, "cumsum") c(w) * dzeta.deta * (cumy[, 1:M] / zeta - score) } else { ccumy <- tapplymat1(y[, ncol(y):1], "cumsum")[, ncol(y):1] c(w) * dzeta.deta * (ccumy[, -1] / zeta - score) } answer }), list( .earg = earg, .link = link, .reverse = reverse) )), weight = eval(substitute(expression({ wz <- matrix(NA_real_, n, dimm(M)) hess <- attr(d1, "hessian") / d1 if (M > 1) for (jay in 1:(M-1)) for (kay in (jay+1):M) wz[,iam(jay, kay,M)] <- (hess[, jay, kay] - score[, jay] * score[, kay]) * dzeta.deta[, jay] * dzeta.deta[, kay] if ( .reverse ) { cump <- tapplymat1(mu, "cumsum") wz[, 1:M] <- (cump[, 1:M] / zeta^2 - score^2) * dzeta.deta^2 } else { ccump <- tapplymat1(mu[, ncol(mu):1], "cumsum")[, ncol(mu):1] wz[, 1:M] <- (ccump[, -1] / zeta^2 - score^2) * dzeta.deta^2 } c(w) * wz }), list( .earg = earg, .link = link, .reverse = reverse )))) } # acat() acat.deriv <- function(zeta, reverse, M, n) { alltxt <- NULL for (ii in 1:M) { index <- if (reverse) ii:M else 1:ii vars <- paste("zeta", index, sep = "") txt <- paste(vars, collapse = "*") alltxt <- c(alltxt, txt) } alltxt <- paste(alltxt, collapse = " + ") alltxt <- paste(" ~ 1 +", alltxt) txt <- as.formula(alltxt) allvars <- paste("zeta", 1:M, sep = "") d1 <- deriv3(txt, allvars, hessian = TRUE) zeta <- as.matrix(zeta) for (ii in 1:M) assign(paste("zeta", ii, sep = ""), zeta[, ii]) ans <- eval(d1) ans } # acat.deriv() brat <- function(refgp = "last", refvalue = 1, ialpha = 1) { if (!is.Numeric(ialpha, positive = TRUE)) stop("'ialpha' must contain positive values only") if (!is.Numeric(refvalue, length.arg = 1, positive = TRUE)) stop("'refvalue' must be a single positive value") if (!is.character(refgp) && !is.Numeric(refgp, length.arg = 1, integer.valued = TRUE, positive = TRUE)) stop("'refgp' must be a single positive integer") new("vglmff", blurb = c(paste("Bradley-Terry model (without ties)\n\n"), "Links: ", namesof("alpha's", "loge")), infos = eval(substitute(function(...) { list(M1 = NA, # zz -1? 
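         # Bradley-Terry without ties: P(i beats j) =
         # alpha_i / (alpha_i + alpha_j), with one alpha fixed at
         # 'refvalue' for identifiability.  A hedged sketch (the 3 x 3
         # wins matrix is invented for illustration; Brat() flattens it
         # so that vglm() sees a one-row response):
         #   wins <- matrix(c(NA, 10, 2,  5, NA, 7,  8, 3, NA),
         #                  3, 3, byrow = TRUE,
         #                  dimnames = list(LETTERS[1:3], LETTERS[1:3]))
         #   fit <- vglm(Brat(wins) ~ 1, brat)
         #   Coef(fit)  # Estimated abilities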
Q1 = NA, expected = TRUE, multipleResponses = FALSE, parameters.names = as.character(NA), refvalue = .refvalue , refgp = .refgp , ialpha = .ialpha ) }, list( .ialpha = ialpha, .refgp = refgp, .refvalue = refvalue ))), initialize = eval(substitute(expression({ are.ties <- attr(y, "are.ties") # If Brat() was used if (is.logical(are.ties) && are.ties) stop("use bratt(), not brat(), when there are ties") try.index <- 1:400 M <- (seq_along(try.index))[(try.index+1)*(try.index) == ncol(y)] if (!is.finite(M)) stop("cannot determine 'M'") ialpha <- matrix(rep_len( .ialpha , M), n, M, byrow = TRUE) etastart <- matrix(theta2eta(ialpha, "loge", earg = list(theta = NULL)), n, M, byrow = TRUE) refgp <- .refgp if (!intercept.only) warning("this function only works with intercept-only models") extra$ybrat.indices <- .brat.indices(NCo = M+1, are.ties = FALSE) uindex <- if ( .refgp == "last") 1:M else (1:(M+1))[-( .refgp ) ] predictors.names <- namesof(paste("alpha", uindex, sep = ""), "loge", short = TRUE) }), list( .refgp = refgp, .ialpha = ialpha ))), linkinv = eval(substitute( function(eta, extra = NULL) { probs <- NULL eta <- as.matrix(eta) # in case M = 1 for (ii in 1:nrow(eta)) { alpha <- .brat.alpha(eta2theta(eta[ii, ], "loge", earg = list(theta = NULL)), .refvalue , .refgp ) alpha1 <- alpha[extra$ybrat.indices[, "rindex"]] alpha2 <- alpha[extra$ybrat.indices[, "cindex"]] probs <- rbind(probs, alpha1 / (alpha1 + alpha2)) } dimnames(probs) <- dimnames(eta) probs }, list( .refgp = refgp, .refvalue = refvalue) )), last = eval(substitute(expression({ misc$link <- rep_len("loge", M) names(misc$link) <- paste("alpha", uindex, sep = "") misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- list(theta = NULL) misc$refgp <- .refgp misc$refvalue <- .refvalue }), list( .refgp = refgp, .refvalue = refvalue ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) ll.elts <- (if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE) if (summation) { sum(ll.elts) } else { ll.elts } } }, vfamily = c("brat", "VGAMcategorical"), validparams = eval(substitute(function(eta, y, extra = NULL) { probs <- NULL eta <- as.matrix(eta) # in case M = 1 for (ii in 1:nrow(eta)) { alpha <- .brat.alpha(eta2theta(eta[ii, ], "loge", earg = list(theta = NULL)), .refvalue , .refgp ) alpha1 <- alpha[extra$ybrat.indices[, "rindex"]] alpha2 <- alpha[extra$ybrat.indices[, "cindex"]] probs <- rbind(probs, alpha1 / (alpha1 + alpha2)) } okay1 <- all(is.finite(probs)) && all(0 < probs & probs < 1) okay1 }, list( .refvalue = refvalue, .refgp = refgp) )), deriv = eval(substitute(expression({ ans <- NULL uindex <- if ( .refgp == "last") 1:M else (1:(M+1))[-( .refgp ) ] eta <- as.matrix(eta) # in case M = 1 for (ii in 1:nrow(eta)) { alpha <- .brat.alpha(eta2theta(eta[ii, ], "loge", earg = list(theta = NULL)), .refvalue, .refgp ) ymat <- InverseBrat(y[ii, ], NCo = M+1, diag = 0) answer <- rep_len(0, M) for (aa in 1:(M+1)) { answer <- answer + (1 - (aa == 
uindex)) * (ymat[uindex, aa] * alpha[aa] - ymat[aa, uindex] * alpha[uindex]) / (alpha[aa] + alpha[uindex]) } ans <- rbind(ans, w[ii] * answer) } dimnames(ans) <- dimnames(eta) ans }), list( .refvalue = refvalue, .refgp = refgp) )), weight = eval(substitute(expression({ wz <- matrix(0, n, dimm(M)) for (ii in 1:nrow(eta)) { alpha <- .brat.alpha(eta2theta(eta[ii, ], "loge", earg = list(theta = NULL)), .refvalue, .refgp) ymat <- InverseBrat(y[ii, ], NCo = M+1, diag = 0) for (aa in 1:(M+1)) { wz[ii, 1:M] <- wz[ii, 1:M] + (1 - (aa == uindex)) * (ymat[aa, uindex] + ymat[uindex, aa]) * alpha[aa] * alpha[uindex] / (alpha[aa] + alpha[uindex])^2 } if (M > 1) { ind5 <- iam(1, 1, M, both = TRUE, diag = FALSE) wz[ii, (M+1):ncol(wz)] <- -(ymat[cbind(uindex[ind5$row], uindex[ind5$col])] + ymat[cbind(uindex[ind5$col], uindex[ind5$row])]) * alpha[uindex[ind5$col]] * alpha[uindex[ind5$row]] / (alpha[uindex[ind5$row]] + alpha[uindex[ind5$col]])^2 } } wz <- c(w) * wz wz }), list( .refvalue = refvalue, .refgp = refgp )))) } # brat() bratt <- function(refgp = "last", refvalue = 1, ialpha = 1, i0 = 0.01) { if (!is.Numeric(i0, length.arg = 1, positive = TRUE)) stop("'i0' must be a single positive value") if (!is.Numeric(ialpha, positive = TRUE)) stop("'ialpha' must contain positive values only") if (!is.Numeric(refvalue, length.arg = 1, positive = TRUE)) stop("'refvalue' must be a single positive value") if (!is.character(refgp) && !is.Numeric(refgp, length.arg = 1, integer.valued = TRUE, positive = TRUE)) stop("'refgp' must be a single positive integer") new("vglmff", blurb = c(paste("Bradley-Terry model (with ties)\n\n"), "Links: ", namesof("alpha's", "loge"), ", log(alpha0)"), infos = eval(substitute(function(...) { list(M1 = NA, # zz -1? Q1 = NA, expected = TRUE, multipleResponses = FALSE, parameters.names = as.character(NA), refvalue = .refvalue , refgp = .refgp , i0 = .i0 , ialpha = .ialpha ) }, list( .ialpha = ialpha, .i0 = i0, .refgp = refgp, .refvalue = refvalue ))), initialize = eval(substitute(expression({ try.index <- 1:400 M <- (seq_along(try.index))[(try.index*(try.index-1)) == ncol(y)] if (!is.Numeric(M, length.arg = 1, integer.valued = TRUE)) stop("cannot determine 'M'") NCo <- M # Number of contestants are.ties <- attr(y, "are.ties") # If Brat() was used if (is.logical(are.ties)) { if (!are.ties) stop("use brat(), not bratt(), when there are no ties") ties <- attr(y, "ties") } else { are.ties <- FALSE ties <- 0 * y } ialpha <- rep_len( .ialpha, NCo-1) ialpha0 <- .i0 etastart <- cbind(matrix(theta2eta(ialpha, "loge", list(theta = NULL)), n, NCo-1, byrow = TRUE), theta2eta(rep_len(ialpha0, n), "loge", list(theta = NULL))) refgp <- .refgp if (!intercept.only) warning("this function only works with intercept-only models") extra$ties <- ties # Flat (1-row) matrix extra$ybrat.indices <- .brat.indices(NCo = NCo, are.ties = FALSE) extra$tbrat.indices <- .brat.indices(NCo = NCo, are.ties = TRUE) extra$dnties <- dimnames(ties) uindex <- if (refgp == "last") 1:(NCo-1) else (1:(NCo))[-refgp ] predictors.names <- c( namesof(paste("alpha", uindex, sep = ""), "loge", short = TRUE), namesof("alpha0", "loge", short = TRUE)) }), list( .refgp = refgp, .i0 = i0, .ialpha = ialpha ))), linkinv = eval(substitute( function(eta, extra = NULL) { probs <- qprobs <- NULL M <- ncol(eta) for (ii in 1:nrow(eta)) { alpha <- .brat.alpha(eta2theta(eta[ii, -M], "loge"), .refvalue , .refgp ) alpha0 <- loge(eta[ii, M], inverse = TRUE) alpha1 <- alpha[extra$ybrat.indices[, "rindex"]] alpha2 <- alpha[extra$ybrat.indices[, "cindex"]] 
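      # Row ii below collects P(i beats j) =
      # alpha_i / (alpha_i + alpha_j + alpha0); the corresponding tie
      # probability alpha0 / (alpha_i + alpha_j + alpha0) is kept in
      # 'qprobs' and attached as the "probtie" attribute.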
probs <- rbind( probs, alpha1 / (alpha1 + alpha2 + alpha0)) # qprobs <- rbind(qprobs, alpha0 / (alpha1 + alpha2 + alpha0)) # } if (length(extra$dnties)) dimnames(qprobs) <- extra$dnties attr(probs, "probtie") <- qprobs probs }, list( .refgp = refgp, .refvalue = refvalue) )), last = eval(substitute(expression({ misc$link <- rep_len("loge", M) names(misc$link) <- c(paste("alpha", uindex, sep = ""), "alpha0") misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- list(theta = NULL) misc$refgp <- .refgp misc$refvalue <- .refvalue misc$alpha <- alpha misc$alpha0 <- alpha0 }), list( .refgp = refgp, .refvalue = refvalue ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (y * log(mu) + 0.5 * extra$ties * log(attr(mu, "probtie"))) if (summation) { sum(ll.elts) } else { ll.elts } } }, vfamily = c("bratt", "VGAMcategorical"), validparams = eval(substitute(function(eta, y, extra = NULL) { probs <- qprobs <- NULL M <- ncol(eta) for (ii in 1:nrow(eta)) { alpha <- .brat.alpha(eta2theta(eta[ii, -M], "loge"), .refvalue , .refgp ) alpha0 <- loge(eta[ii, M], inverse = TRUE) alpha1 <- alpha[extra$ybrat.indices[, "rindex"]] alpha2 <- alpha[extra$ybrat.indices[, "cindex"]] probs <- rbind( probs, alpha1 / (alpha1 + alpha2 + alpha0)) # qprobs <- rbind(qprobs, alpha0 / (alpha1 + alpha2 + alpha0)) # } okay1 <- all(is.finite( probs)) && all(0 < probs & probs < 1) && all(is.finite(qprobs)) && all(0 < qprobs & qprobs < 1) okay1 }, list( .refvalue = refvalue, .refgp = refgp) )), deriv = eval(substitute(expression({ ans <- NULL ties <- extra$ties NCo <- M uindex <- if ( .refgp == "last") 1:(M-1) else (1:(M))[-( .refgp )] eta <- as.matrix(eta) for (ii in 1:nrow(eta)) { alpha <- .brat.alpha(eta2theta(eta[ii, -M], "loge", earg = list(theta = NULL)), .refvalue, .refgp ) alpha0 <- loge(eta[ii, M], inverse = TRUE) ymat <- InverseBrat( y[ii, ], NCo = M, diag = 0) tmat <- InverseBrat(ties[ii, ], NCo = M, diag = 0) answer <- rep_len(0, NCo-1) # deriv wrt eta[-M] for (aa in 1:NCo) { Daj <- alpha[aa] + alpha[uindex] + alpha0 pja <- alpha[uindex] / Daj answer <- answer + alpha[uindex] * (-ymat[aa, uindex] + ymat[uindex, aa] * (1 - pja) / pja - tmat[uindex, aa]) / Daj } deriv0 <- 0 # deriv wrt eta[M] for (aa in 1:(NCo-1)) for (bb in (aa+1):NCo) { Dab <- alpha[aa] + alpha[bb] + alpha0 qab <- alpha0 / Dab deriv0 <- deriv0 + alpha0 * (-ymat[aa, bb] - ymat[bb,aa] + tmat[aa, bb] * (1 - qab) / qab) / Dab } ans <- rbind(ans, w[ii] * c(answer, deriv0)) } dimnames(ans) <- dimnames(eta) ans }), list( .refvalue = refvalue, .refgp = refgp) )), weight = eval(substitute(expression({ wz <- matrix(0, n, dimm(M)) # includes diagonal for (ii in 1:nrow(eta)) { alpha <- .brat.alpha(eta2theta(eta[ii, -M], "loge", earg = list(theta = NULL)), .refvalue, .refgp) alpha0 <- loge(eta[ii, M], inverse = TRUE) ymat <- InverseBrat( y[ii, ], NCo = M, diag = 0) tmat <- InverseBrat(ties[ii, ], NCo = M, diag = 0) for (aa in 1:(NCo)) { Daj <- alpha[aa] + alpha[uindex] + alpha0 pja <- alpha[uindex] / Daj nja <- ymat[aa,uindex] + ymat[uindex,aa] + tmat[uindex,aa] wz[ii, 1:(NCo-1)] <- wz[ii, 1:(NCo - 1)] + alpha[uindex]^2 * nja * (1 - pja) / (pja * Daj^2) if (aa < NCo) for (bb in (aa+1):(NCo)) { nab <- ymat[aa,bb] + ymat[bb,aa] + tmat[bb,aa] Dab <- alpha[aa] + alpha[bb] + alpha0 qab <- alpha0 / Dab wz[ii, NCo] <- wz[ii,NCo] + alpha0^2 * nab * (1-qab) / (qab * Dab^2) } } if (NCo > 2) 
{ ind5 <- iam(1, 1, M = NCo, both = TRUE, diag = FALSE) alphajunk <- c(alpha, junk = NA) mat4 <- cbind(uindex[ind5$row],uindex[ind5$col]) wz[ii,(M+1):ncol(wz)] <- -(ymat[mat4] + ymat[mat4[, 2:1]] + tmat[mat4]) * alphajunk[uindex[ind5$col]] * alphajunk[uindex[ind5$row]] / (alpha0 + alphajunk[uindex[ind5$row]] + alphajunk[uindex[ind5$col]])^2 } for (sss in seq_along(uindex)) { jay <- uindex[sss] naj <- ymat[, jay] + ymat[jay, ] + tmat[, jay] Daj <- alpha[jay] + alpha + alpha0 wz[ii, iam(sss, NCo, M = NCo, diag = TRUE)] <- -alpha[jay] * alpha0 * sum(naj / Daj^2) } } wz <- c(w) * wz wz }), list( .refvalue = refvalue, .refgp = refgp )))) } # bratt() .brat.alpha <- function(vec, value, posn) { if (is.character(posn)) if (posn != "last") stop("can only handle \"last\"") else return(c(vec, value)) c(if (posn == 1) NULL else vec[1:(posn-1)], value, if (posn == length(vec) + 1) NULL else vec[posn:length(vec)]) } .brat.indices <- function(NCo, are.ties = FALSE) { if (!is.Numeric(NCo, length.arg = 1, integer.valued = TRUE) || NCo < 2) stop("bad input for 'NCo'") m <- diag(NCo) if (are.ties) { cbind(rindex = row(m)[col(m) < row(m)], cindex = col(m)[col(m) < row(m)]) } else cbind(rindex = row(m)[col(m) != row(m)], cindex = col(m)[col(m) != row(m)]) } Brat <- function(mat, ties = 0 * mat, string = c(">", "=="), whitespace = FALSE) { stopifnot(is.logical(whitespace) && length(whitespace) == 1) fillerChar <- ifelse(whitespace, " ", "") string <- paste(fillerChar, string, fillerChar, sep = "") allargs <- list(mat) # ,... callit <- if (length(names(allargs))) names(allargs) else as.character(seq_along(allargs)) ans <- ans.ties <- NULL for (ii in seq_along(allargs)) { m <- allargs[[ii]] if (!is.matrix(m) || dim(m)[1] != dim(m)[2]) stop("m must be a square matrix") diag(ties) <- 0 if (!all(ties == t(ties))) stop("ties must be a symmetric matrix") are.ties <- any(ties > 0) diag(ties) <- NA diag(m) <- 0 # Could have been NAs if (anyNA(m)) stop("missing values not allowed (except on the diagonal)") diag(m) <- NA dm <- as.data.frame.table(m) dt <- as.data.frame.table(ties) dm <- dm[!is.na(dm$Freq), ] dt <- dt[!is.na(dt$Freq), ] usethis1 <- paste(dm[, 1], string[1], dm[, 2], sep = "") usethis2 <- paste(dm[, 1], string[2], dm[, 2], sep = "") ans <- rbind(ans, matrix(dm$Freq, nrow = 1)) ans.ties <- rbind(ans.ties, matrix(dt$Freq, nrow = 1)) } dimnames(ans) <- list(callit, usethis1) dimnames(ans.ties) <- list(callit, usethis2) attr(ans, "ties") <- ans.ties attr(ans, "are.ties") <- are.ties ans } # Brat() InverseBrat <- function(yvec, NCo = (1:900)[(1:900)*((1:900)-1) == ncol(rbind(yvec))], multiplicity = if (is.matrix(yvec)) nrow(yvec) else 1, diag = NA, string = c(">", "=="), whitespace = FALSE) { stopifnot(is.logical(whitespace) && length(whitespace) == 1) fillerChar <- ifelse(whitespace, " ", "") string <- paste(fillerChar, string, fillerChar, sep = "") ans <- array(diag, c(NCo, NCo, multiplicity)) yvec.orig <- yvec yvec <- c(yvec) ptr <- 1 for (mul in 1:multiplicity) for (i1 in 1:(NCo)) for (i2 in 1:(NCo)) if (i1 != i2) { ans[i2, i1, mul] <- yvec[ptr] ptr <- ptr + 1 } ans <- if (multiplicity > 1) ans else matrix(ans, NCo, NCo) if (is.array(yvec.orig) || is.matrix(yvec.orig)) { names.yvec <- dimnames(yvec.orig)[[2]] ii <- strsplit(names.yvec, string[1]) cal <- NULL for (kk in c(NCo, 1:(NCo-1))) cal <- c(cal, (ii[[kk]])[1]) if (multiplicity>1) { dimnames(ans) <- list(cal, cal, dimnames(yvec.orig)[[1]]) } else { dimnames(ans) <- list(cal, cal) } } ans } # InverseBrat() ordpoisson <- function(cutpoints, countdata = 
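  # Ordinal Poisson model: level k of the response arises when a
  # latent Poisson(mu) count falls in (cutpoints[k-1], cutpoints[k]];
  # the final cutpoint must be Inf.  A hedged sketch, assuming a
  # response 'y' coded 1, 2, 3, a covariate 'x2', and known cutpoints
  # 0 and 2:
  #   fit <- vglm(y ~ x2, ordpoisson(cutpoints = c(0, 2, Inf)))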
FALSE, NOS = NULL, Levels = NULL, init.mu = NULL, parallel = FALSE, zero = NULL, link = "loge") { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") fcutpoints <- cutpoints[is.finite(cutpoints)] if (!is.Numeric(fcutpoints, integer.valued = TRUE) || any(fcutpoints < 0)) stop("'cutpoints' must have non-negative integer or Inf ", "values only") if (is.finite(cutpoints[length(cutpoints)])) cutpoints <- c(cutpoints, Inf) if (!is.logical(countdata) || length(countdata) != 1) stop("argument 'countdata' must be a single logical") if (countdata) { if (!is.Numeric(NOS, integer.valued = TRUE, positive = TRUE)) stop("'NOS' must have integer values only") if (!is.Numeric(Levels, integer.valued = TRUE, positive = TRUE) || any(Levels < 2)) stop("'Levels' must have integer values (>= 2) only") Levels <- rep_len(Levels, NOS) } new("vglmff", blurb = c(paste("Ordinal Poisson model\n\n"), "Link: ", namesof("mu", link, earg = earg)), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , apply.int = TRUE, constraints = constraints) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .parallel = parallel, .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("mu"), lmu = .link , zero = .zero ) }, list( .zero = zero, .link = link ))), initialize = eval(substitute(expression({ orig.y <- cbind(y) # Convert y into a matrix if necessary if ( .countdata ) { extra$NOS <- M <- NOS <- .NOS extra$Levels <- Levels <- .Levels y.names <- dimnames(y)[[2]] # Hopefully the user inputted them } else { if (any(w != 1) || NCOL(w) != 1) stop("the 'weights' argument must be a vector of all ones") extra$NOS <- M <- NOS <- if (is.Numeric( .NOS )) .NOS else ncol(orig.y) Levels <- rep_len(if (is.Numeric( .Levels )) .Levels else 0, NOS) if (!is.Numeric( .Levels )) for (iii in 1:NOS) { Levels[iii] <- length(unique(sort(orig.y[,iii]))) } extra$Levels <- Levels } initmu <- if (is.Numeric( .init.mu )) rep_len( .init.mu , NOS) else NULL cutpoints <- rep_len( .cutpoints, sum(Levels)) delete.zero.colns <- FALSE use.y <- if ( .countdata ) y else matrix(0, n, sum(Levels)) use.etastart <- matrix(0, n, M) cptr <- 1 for (iii in 1:NOS) { y <- factor(orig.y[,iii], levels=(1:Levels[iii])) if ( !( .countdata )) { eval(process.categorical.data.VGAM) # Creates mustart and y use.y[,cptr:(cptr+Levels[iii]-1)] <- y } use.etastart[,iii] <- if (is.Numeric(initmu)) initmu[iii] else median(cutpoints[cptr:(cptr+Levels[iii]-1-1)]) cptr <- cptr + Levels[iii] } mustart <- NULL # Overwrite it etastart <- theta2eta(use.etastart, .link , earg = .earg ) y <- use.y # n x sum(Levels) M <- NOS for (iii in 1:NOS) { mu.names <- paste("mu", iii, ".", sep = "") } ncoly <- extra$ncoly <- sum(Levels) cp.vector <- rep_len( .cutpoints , ncoly) extra$countdata <- .countdata extra$cutpoints <- cp.vector extra$n <- n mynames <- param.names("mu", M) predictors.names <- namesof(mynames, .link , earg = .earg , tag = FALSE) }), list( .link = link, .countdata = countdata, .earg = earg, .cutpoints=cutpoints, .NOS=NOS, .Levels=Levels, .init.mu = init.mu ))), linkinv = eval(substitute( function(eta, extra = NULL) { mu <- eta2theta(eta, link= .link , earg = .earg ) # Poisson means mu <- cbind(mu) mu }, list( .link = link, .earg = earg, .countdata = countdata ))), last = eval(substitute(expression({ if ( .countdata ) { misc$link <- .link 
misc$earg <- list( .earg ) } else { misc$link <- rep_len( .link , M) names(misc$link) <- mynames misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg } misc$parameters <- mynames misc$countdata <- .countdata misc$true.mu = FALSE # $fitted is not a true mu }), list( .link = link, .countdata = countdata, .earg = earg ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (residuals) { stop("loglikelihood residuals not implemented yet") } else { probs <- ordpoissonProbs(extra, mu) index0 <- y == 0 probs[index0] <- 1 pindex0 <- probs == 0 probs[pindex0] <- 1 if (summation) { sum(pindex0) * (-1.0e+10) + sum(w * y * log(probs)) } else { stop("20140311; 'summation=F' not done yet") } } }, vfamily = c("ordpoisson", "VGAMcategorical"), deriv = eval(substitute(expression({ probs <- ordpoissonProbs(extra, mu) probs.use <- pmax(probs, .Machine$double.eps * 1.0e-0) cp.vector <- extra$cutpoints NOS <- extra$NOS Levels <- extra$Levels resmat <- matrix(0, n, M) dl.dprob <- y / probs.use dmu.deta <- dtheta.deta(mu, .link , earg = .earg ) dprob.dmu <- ordpoissonProbs(extra, mu, deriv = 1) cptr <- 1 for (iii in 1:NOS) { for (kkk in 1:Levels[iii]) { resmat[,iii] <- resmat[,iii] + dl.dprob[,cptr] * dprob.dmu[,cptr] cptr <- cptr + 1 } } resmat <- c(w) * resmat * dmu.deta resmat }), list( .link = link, .earg = earg, .countdata=countdata ))), weight = eval(substitute(expression({ d2l.dmu2 <- matrix(0, n, M) # Diagonal matrix cptr <- 1 for (iii in 1:NOS) { for (kkk in 1:Levels[iii]) { d2l.dmu2[,iii] <- d2l.dmu2[,iii] + dprob.dmu[,cptr]^2 / probs.use[,cptr] cptr <- cptr + 1 } } wz <- c(w) * d2l.dmu2 * dmu.deta^2 wz }), list( .earg = earg, .link = link, .countdata = countdata )))) } ordpoissonProbs <- function(extra, mu, deriv = 0) { cp.vector <- extra$cutpoints NOS <- extra$NOS if (deriv == 1) { dprob.dmu <- matrix(0, extra$n, extra$ncoly) } else { probs <- matrix(0, extra$n, extra$ncoly) } mu <- cbind(mu) cptr <- 1 for (iii in 1:NOS) { if (deriv == 1) { dprob.dmu[,cptr] <- -dpois(x = cp.vector[cptr], lambda = mu[,iii]) } else { probs[,cptr] <- ppois(q = cp.vector[cptr], lambda = mu[,iii]) } cptr <- cptr + 1 while (is.finite(cp.vector[cptr])) { if (deriv == 1) { dprob.dmu[,cptr] <- dpois(x = cp.vector[cptr-1], lambda = mu[,iii]) - dpois(x = cp.vector[cptr ], lambda = mu[,iii]) } else { probs[,cptr] <- ppois(q = cp.vector[cptr ], lambda = mu[,iii]) - ppois(q = cp.vector[cptr-1], lambda = mu[,iii]) } cptr <- cptr + 1 } if (deriv == 1) { dprob.dmu[,cptr] <- dpois(x = cp.vector[cptr-1], lambda = mu[,iii]) - dpois(x = cp.vector[cptr ], lambda = mu[,iii]) } else { probs[,cptr] <- ppois(q = cp.vector[cptr ], lambda = mu[,iii]) - ppois(q = cp.vector[cptr-1], lambda = mu[,iii]) } cptr <- cptr + 1 } if (deriv == 1) dprob.dmu else probs } findFirstMethod <- function(methodsfn, charvec) { answer <- NULL for (ii in seq_along(charvec)) { if (existsMethod(methodsfn, signature(VGAMff = charvec[ii]))) { answer <- charvec[ii] break } } answer } margeff <- function(object, subset = NULL, ...) { try.this <- findFirstMethod("margeffS4VGAM", object@family@vfamily) if (length(try.this)) { margeffS4VGAM(object = object, subset = subset, VGAMff = new(try.this), ...) 
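    # A hedged usage sketch for margeff(): marginal effects
    # d P(Y = j) / d x_k for every observation, as a p x (M+1) x n
    # array.  Mirroring ?margeff with VGAM's 'pneumo' data:
    #   fit <- vglm(cbind(normal, mild, severe) ~ let, propodds,
    #               data = pneumo)
    #   me <- margeff(fit)
    #   me[, , 1]  # Effects evaluated at the first observation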
} else {
    stop("Could not find a methods function for 'margeffS4VGAM' ",
         "emanating from '", object@family@vfamily[1], "'")
  }
}


subsetarray3 <- function(array3, subset = NULL) {
  if (is.null(subset)) array3 else array3[, , subset]
}


setClass("VGAMcategorical", contains = "vglmff")
setClass("VGAMordinal",     contains = "VGAMcategorical")
setClass("multinomial",     contains = "VGAMcategorical")
setClass("acat",            contains = "VGAMordinal")
setClass("cumulative",      contains = "VGAMordinal")
setClass("cratio",          contains = "VGAMordinal")
setClass("sratio",          contains = "VGAMordinal")


setMethod("margeffS4VGAM",  signature(VGAMff = "VGAMcategorical"),
          function(object, subset = NULL, VGAMff, ...) {
  object@post$M <- M   <- object@misc$M
  object@post$n <- nnn <- object@misc$n
  invisible(object)
})


setMethod("margeffS4VGAM",  signature(VGAMff = "multinomial"),
          function(object, subset = NULL, VGAMff, ...) {
  object <- callNextMethod(VGAMff = VGAMff, object = object,
                           subset = subset, ...)
  M <- object@misc$M
  nnn <- object@misc$n
  cfit <- coefvlm(object, matrix.out = TRUE)
  rlev <- object@misc$refLevel
  if (!length(rlev))
    rlev <- M+1  # Default
  Bmat <- matrix(0, nrow(cfit), 1 + ncol(cfit))
  Bmat[, -rlev] <- cfit
  ppp <- nrow(Bmat)
  pvec1 <- fitted(object)[1, ]
  rownames(Bmat) <- rownames(cfit)
  colnames(Bmat) <- if (length(names(pvec1))) names(pvec1) else
                    paste("mu", 1:(M+1), sep = "")
  BB <- array(Bmat, c(ppp, M+1, nnn))
  pvec <- c(t(fitted(object)))
  pvec <- rep(pvec, each = ppp)
  temp1 <- array(BB * pvec, c(ppp, M+1, nnn))
  temp2 <- aperm(temp1, c(2, 1, 3))  # (M+1) x ppp x nnn
  temp2 <- colSums(temp2)  # ppp x nnn
  temp2 <- array(rep(temp2, each = M+1), c(M+1, ppp, nnn))
  temp2 <- aperm(temp2, c(2, 1, 3))  # ppp x (M+1) x nnn
  temp3 <- pvec
  ans.mlm <- array((BB - temp2) * temp3,
                   c(ppp, M+1, nnn),
                   dimnames = list(dimnames(Bmat)[[1]],
                                   dimnames(Bmat)[[2]],
                                   dimnames(fitted(object))[[1]]))
  return(subsetarray3(ans.mlm, subset = subset))
})


setMethod("margeffS4VGAM",  signature(VGAMff = "VGAMordinal"),
          function(object, subset = NULL, VGAMff, ...) {
  M <- object@misc$M
  nnn <- object@misc$n
  object@post$reverse <- object@misc$reverse
  object@post$linkfunctions <- linkfunctions <- object@misc$link
  object@post$all.eargs <- all.eargs <- object@misc$earg
  object@post$Bmat <- Bmat <- coefvlm(object, matrix.out = TRUE)
  object@post$ppp <- nrow(Bmat)
  etamat <- predict(object)
  hdot <- Thetamat <- etamat
  for (jlocal in 1:M) {
    Thetamat[, jlocal] <- eta2theta(etamat[, jlocal],
                                    link = linkfunctions[jlocal],
                                    earg = all.eargs[[jlocal]])
    hdot[, jlocal] <- dtheta.deta(Thetamat[, jlocal],
                                  link = linkfunctions[jlocal],
                                  earg = all.eargs[[jlocal]])
  }  # jlocal
  object@post$hdot <- hdot
  object@post$Thetamat <- Thetamat
  object
})


setMethod("margeffS4VGAM",  signature(VGAMff = "cumulative"),
          function(object, subset = NULL, VGAMff, ...) {
  object <- callNextMethod(VGAMff = VGAMff, object = object,
                           subset = subset, ...)
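  # For cumulative(), writing h() for the inverse link and b_j for the
  # j-th row of coefficients, the fitted probabilities are differences
  # of h(eta_j), so with reverse = FALSE:
  #   d p_1    /d x =  h'(eta_1) b_1,
  #   d p_j    /d x =  h'(eta_j) b_j - h'(eta_{j-1}) b_{j-1},  1 < j <= M,
  #   d p_{M+1}/d x = -h'(eta_M) b_M;
  # reverse = TRUE flips the signs.  'resmat' below assembles exactly this.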
reverse <- object@post$reverse linkfunctions <- object@post$linkfunctions all.eargs <- object@post$all.eargs Bmat <- cfit <- object@post$Bmat ppp <- object@post$ppp etamat <- predict(object) # nnn x M fitmat <- fitted(object) # nnn x (M + 1) nnn <- nrow(etamat) M <- ncol(etamat) hdot <- object@post$hdot Thetamat <- object@post$Thetamat hdot.big <- kronecker(hdot, matrix(1, ppp, 1)) # Enlarged resmat <- cbind(hdot.big, 1) resmat[, 1] <- ifelse(reverse, -1, 1) * hdot.big[, 1] * cfit[, 1] if (M > 1) { for (jlocal in 2:M) { resmat[, jlocal] <- ifelse(reverse, -1, 1) * (hdot.big[, jlocal ] * cfit[, jlocal ] - hdot.big[, jlocal - 1] * cfit[, jlocal - 1]) } # jlocal } # if resmat[, M+1] <- ifelse(reverse, 1, -1) * hdot.big[, M] * cfit[, M] ans.cum <- array(resmat, c(ppp, nnn, M+1), dimnames = list(dimnames(Bmat)[[1]], dimnames(fitted(object))[[1]], dimnames(fitted(object))[[2]])) ans.cum <- aperm(ans.cum, c(1, 3, 2)) # ppp x (M+1) x nnn subsetarray3(ans.cum, subset = subset) }) setMethod("margeffS4VGAM", signature(VGAMff = "acat"), function(object, subset = NULL, VGAMff, ...) { object <- callNextMethod(VGAMff = VGAMff, object = object, subset = subset, ...) reverse <- object@post$reverse linkfunctions <- object@post$linkfunctions all.eargs <- object@post$all.eargs Bmat <- cfit <- object@post$Bmat ppp <- object@post$ppp etamat <- predict(object) # nnn x M fitmat <- fitted(object) # nnn x (M + 1) nnn <- nrow(etamat) M <- ncol(etamat) hdot <- object@post$hdot Thetamat <- object@post$Thetamat expcs.etamat <- if (reverse) exp(tapplymat1(etamat[, M:1, drop = FALSE], "cumsum")[, M:1, drop = FALSE]) else exp(tapplymat1(etamat, "cumsum")) csexpcs.etavec <- rowSums(expcs.etamat) if (!all(object@misc$link == "loge")) stop("currently only the 'loge' link is supported") acat.derivs <- function(jay, tee, M, expcs.etamat, Thetamat, prob1, probMplus1, reverse = FALSE) { if (jay > M+1) stop("argument 'jay' out of range") if (M < tee) stop("argument 'tee' out of range") if (reverse) { # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, dpMplus1.detat <- -(probMplus1^2) * rowSums(expcs.etamat[, 1:tee, drop = FALSE]) if (jay == M+1) { return(dpMplus1.detat) } if (jay <= tee) { return((probMplus1 + dpMplus1.detat) * expcs.etamat[, jay]) } if (tee < jay) { return(dpMplus1.detat * expcs.etamat[, jay]) } } else { # reverse = FALSE ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, dp1.detat <- -(prob1^2) * rowSums(expcs.etamat[, tee:M, drop = FALSE]) if (jay == 1) { return(dp1.detat) } if (jay <= tee) { return(dp1.detat * expcs.etamat[, jay-1]) } if (tee < jay) { return((prob1 + dp1.detat) * expcs.etamat[, jay-1]) } } # reverse ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, } # acat.derivs A <- array(0, c(i = nnn, vars = ppp, probs = M + 1, etas = M)) ansarray <- array(0, c(vars = ppp, i = nnn, probs = M + 1)) if (reverse) { probMplus1 <- 1 / (1 + csexpcs.etavec) # Last level of Y } else { prob1 <- 1 / (1 + csexpcs.etavec) # First level of Y } for (jlocal in 1:(M+1)) { for (tlocal in 1:M) { A[, , jlocal, tlocal] <- acat.derivs(jay = jlocal, tee = tlocal, M = M, expcs.etamat = expcs.etamat, Thetamat = Thetamat, prob1 = prob1, probMplus1 = probMplus1, reverse = reverse) } } A <- aperm(A, c(2, 1, 3, 4)) # c(ppp, nnn, M+1, M) for (jlocal in 1:(M + 1)) { for (tlocal in 1:M) { ansarray[,, jlocal] <- ansarray[,, jlocal] + A[,, jlocal, tlocal] * Bmat[, tlocal] } } ans.acat <- aperm(ansarray, c(1, 3, 2)) # c(ppp, M+1, nnn) dimnames(ans.acat) <- list(rownames(Bmat), colnames(fitmat), rownames(etamat)) subsetarray3(ans.acat, subset = subset) }) cratio.derivs <- function(jay, 
tee, hdot, M, cpThetamat, Thetamat, reverse = FALSE) { if (jay >= M+1) stop("argument 'jay' out of range") if (M < tee) stop("argument 'tee' out of range") if (reverse) { # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, if (jay == 1) { return(hdot[, tee] * cpThetamat[, 1] / Thetamat[, tee]) } if (jay-1 == tee) { return(-hdot[, jay-1] * cpThetamat[, jay]) } if (jay <= tee) { return((1 - Thetamat[, jay-1]) * hdot[, tee] * cpThetamat[, jay] / Thetamat[, tee]) } return(rep_len(0, nrow(Thetamat))) # Since jay-1 > tee } else { # reverse = FALSE ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, if (jay == 1 && tee == 1) { return(-hdot[, 1]) } if (jay == tee) { return(-hdot[, jay] * cpThetamat[, jay-1]) } if (tee < jay) { return((1 - Thetamat[, jay]) * hdot[, tee] * cpThetamat[, jay-1] / Thetamat[, tee]) } return(rep_len(0, nrow(Thetamat))) # Since jay < tee } # reverse ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, } # cratio.derivs setMethod("margeffS4VGAM", signature(VGAMff = "cratio"), function(object, subset = NULL, VGAMff, ...) { object <- callNextMethod(VGAMff = VGAMff, object = object, subset = subset, ...) reverse <- object@post$reverse linkfunctions <- object@post$linkfunctions all.eargs <- object@post$all.eargs Bmat <- cfit <- object@post$Bmat ppp <- object@post$ppp etamat <- predict(object) # nnn x M fitmat <- fitted(object) # nnn x (M + 1) nnn <- nrow(etamat) M <- ncol(etamat) hdot <- object@post$hdot Thetamat <- object@post$Thetamat vfamily <- object@family@vfamily c.nots <- any(vfamily == "cratio") if (any(vfamily == "cratio")) { cpThetamat <- if (reverse) tapplymat1( Thetamat[, M:1, drop = FALSE], "cumprod")[, M:1, drop = FALSE] else tapplymat1( Thetamat, "cumprod") } A <- array(0, c(i = nnn, vars = ppp, probs = M + 1, etas = M)) ansarray <- array(0, c(vars = ppp, i = nnn, probs = M + 1)) choosemat <- if (c.nots) Thetamat else 1 - Thetamat if (min(choosemat) <= 0) warning("division by 0 may occur") if (reverse) { for (tlocal in 1:M) { for (jlocal in 1:tlocal) { A[, , jlocal, tlocal] <- cratio.derivs(jay = jlocal, tee = tlocal, hdot = ifelse(c.nots, 1, -1) * hdot, M = M, cpThetamat = cpThetamat, Thetamat = choosemat, reverse = reverse) } } if (M > 1) for (jlocal in 2:M) { A[, , jlocal, jlocal-1] <- cratio.derivs(jay = jlocal, tee = jlocal-1, hdot = ifelse(c.nots, 1, -1) * hdot, M = M, cpThetamat = cpThetamat, Thetamat = choosemat, reverse = reverse) } } else { for (jlocal in 1:M) { for (tlocal in 1:jlocal) { A[, , jlocal, tlocal] <- cratio.derivs(jay = jlocal, tee = tlocal, hdot = ifelse(c.nots, 1, -1) * hdot, M = M, cpThetamat = cpThetamat, Thetamat = choosemat, reverse = reverse) } } } if (reverse) { A[, , M+1, M] <- ifelse(c.nots, -1, 1) * hdot[, M] } else { for (jlocal in 1:M) { for (tlocal in 1:jlocal) { A[, , M+1, tlocal] <- if (c.nots) { A[, , M+1, tlocal] - A[, , jlocal, tlocal] } else { -hdot[, tlocal] * cpThetamat[, M] / choosemat[, tlocal] } } } } A <- aperm(A, c(2, 1, 3, 4)) # c(ppp, nnn, M+1, M) for (jlocal in 1:(M + 1)) { for (tlocal in 1:M) { ansarray[,, jlocal] <- ansarray[,, jlocal] + A[,, jlocal, tlocal] * Bmat[, tlocal] } } ans.csratio <- aperm(ansarray, c(1, 3, 2)) # c(ppp, M+1, nnn) dimnames(ans.csratio) <- list(rownames(Bmat), colnames(fitmat), rownames(etamat)) subsetarray3(ans.csratio, subset = subset) # "cratio" and "sratio" }) setMethod("margeffS4VGAM", signature(VGAMff = "sratio"), function(object, subset = NULL, VGAMff, ...) { object <- callNextMethod(VGAMff = VGAMff, object = object, subset = subset, ...) 
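  # sratio() models stopping ratios P[Y = j | Y >= j] while cratio()
  # models continuation ratios P[Y > j | Y >= j]; both methods reuse
  # cratio.derivs(), with 'choosemat' selecting Theta or 1 - Theta
  # (and a sign flip on 'hdot') to switch between the two families.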
reverse <- object@post$reverse linkfunctions <- object@post$linkfunctions all.eargs <- object@post$all.eargs Bmat <- cfit <- object@post$Bmat ppp <- object@post$ppp etamat <- predict(object) # nnn x M fitmat <- fitted(object) # nnn x (M + 1) nnn <- nrow(etamat) M <- ncol(etamat) hdot <- object@post$hdot Thetamat <- object@post$Thetamat vfamily <- object@family@vfamily c.nots <- any(vfamily == "cratio") if (any(vfamily == "sratio")) { cpThetamat <- if (reverse) tapplymat1(1 - Thetamat[, M:1, drop = FALSE], "cumprod")[, M:1, drop = FALSE] else tapplymat1(1 - Thetamat, "cumprod") } A <- array(0, c(i = nnn, vars = ppp, probs = M + 1, etas = M)) ansarray <- array(0, c(vars = ppp, i = nnn, probs = M + 1)) choosemat <- if (c.nots) Thetamat else 1 - Thetamat if (min(choosemat) <= 0) warning("division by 0 may occur") if (reverse) { for (tlocal in 1:M) { for (jlocal in 1:tlocal) { A[, , jlocal, tlocal] <- cratio.derivs(jay = jlocal, tee = tlocal, hdot = ifelse(c.nots, 1, -1) * hdot, M = M, cpThetamat = cpThetamat, Thetamat = choosemat, reverse = reverse) } } if (M > 1) for (jlocal in 2:M) { A[, , jlocal, jlocal-1] <- cratio.derivs(jay = jlocal, tee = jlocal-1, hdot = ifelse(c.nots, 1, -1) * hdot, M = M, cpThetamat = cpThetamat, Thetamat = choosemat, reverse = reverse) } } else { for (jlocal in 1:M) { for (tlocal in 1:jlocal) { A[, , jlocal, tlocal] <- cratio.derivs(jay = jlocal, tee = tlocal, hdot = ifelse(c.nots, 1, -1) * hdot, M = M, cpThetamat = cpThetamat, Thetamat = choosemat, reverse = reverse) } } } if (reverse) { A[, , M+1, M] <- ifelse(c.nots, -1, 1) * hdot[, M] } else { for (jlocal in 1:M) { for (tlocal in 1:jlocal) { A[, , M+1, tlocal] <- if (c.nots) { A[, , M+1, tlocal] - A[, , jlocal, tlocal] } else { -hdot[, tlocal] * cpThetamat[, M] / choosemat[, tlocal] } } } } A <- aperm(A, c(2, 1, 3, 4)) # c(ppp, nnn, M+1, M) for (jlocal in 1:(M + 1)) { for (tlocal in 1:M) { ansarray[,, jlocal] <- ansarray[,, jlocal] + A[,, jlocal, tlocal] * Bmat[, tlocal] } } ans.csratio <- aperm(ansarray, c(1, 3, 2)) # c(ppp, M+1, nnn) dimnames(ans.csratio) <- list(rownames(Bmat), colnames(fitmat), rownames(etamat)) subsetarray3(ans.csratio, subset = subset) # "cratio" and "sratio" }) margefff <- function(object, subset = NULL) { ii <- subset if (!is(object, "vglm")) stop("'object' is not a vglm() object") if (!any(temp.logical <- is.element(c("multinomial", "cumulative", "acat", "cratio", "sratio"), object@family@vfamily))) stop("'object' is not a 'multinomial' or 'acat' or 'cumulative' ", " or 'cratio' or 'sratio' VGLM!") vfamily <- object@family@vfamily if (is(object, "vgam")) stop("'object' is a vgam() object") if (length(object@control$xij)) stop("'object' contains 'xij' terms") if (length(object@misc$form2)) stop("'object' contains 'form2' terms") oassign <- object@misc$orig.assign if (any(unlist(lapply(oassign, length)) > 1)) warning("some terms in 'object' create more than one column of ", "the LM design matrix") nnn <- object@misc$n M <- object@misc$M # ncol(B) # length(pvec) - 1 if (any(vfamily == "multinomial")) { rlev <- object@misc$refLevel cfit <- coefvlm(object, matrix.out = TRUE) B <- if (!length(rlev)) { cbind(cfit, 0) } else { if (rlev == M+1) { # Default cbind(cfit, 0) } else if (rlev == 1) { cbind(0, cfit) } else { cbind(cfit[, 1:(rlev-1)], 0, cfit[, rlev:M]) } } ppp <- nrow(B) pvec1 <- fitted(object)[1, ] colnames(B) <- if (length(names(pvec1))) names(pvec1) else paste("mu", 1:(M+1), sep = "") if (is.null(ii)) { BB <- array(B, c(ppp, M+1, nnn)) pvec <- c(t(fitted(object))) pvec <- rep(pvec, 
each = ppp) temp1 <- array(BB * pvec, c(ppp, M+1, nnn)) temp2 <- aperm(temp1, c(2, 1, 3)) # (M+1) x ppp x nnn temp2 <- colSums(temp2) # ppp x nnn temp2 <- array(rep(temp2, each = M+1), c(M+1, ppp, nnn)) temp2 <- aperm(temp2, c(2, 1, 3)) # ppp x (M+1) x nnn temp3 <- pvec ans <- array((BB - temp2) * temp3, c(ppp, M+1, nnn), dimnames = list(dimnames(B)[[1]], dimnames(B)[[2]], dimnames(fitted(object))[[1]])) return(ans) } else if (is.numeric(ii) && length(ii) == 1) { pvec <- fitted(object)[ii, ] temp1 <- B * matrix(pvec, ppp, M+1, byrow = TRUE) temp2 <- matrix(rowSums(temp1), ppp, M+1) temp3 <- matrix(pvec, nrow(B), M+1, byrow = TRUE) return((B - temp2) * temp3) } else { if (is.logical(ii)) ii <- (1:nnn)[ii] ans <- array(0, c(ppp, M+1, length(ii)), dimnames = list(dimnames(B)[[1]], dimnames(B)[[2]], dimnames(fitted(object)[ii, ])[[1]])) for (ilocal in seq_along(ii)) { pvec <- fitted(object)[ii[ilocal], ] temp1 <- B * matrix(pvec, ppp, M+1, byrow = TRUE) temp2 <- matrix(rowSums(temp1), ppp, M+1) temp3 <- matrix(pvec, nrow(B), M+1, byrow = TRUE) ans[ , , ilocal] <- (B - temp2) * temp3 } return(ans) } } # "multinomial" reverse <- object@misc$reverse linkfunctions <- object@misc$link all.eargs <- object@misc$earg B <- cfit <- coefvlm(object, matrix.out = TRUE) ppp <- nrow(B) etamat <- predict(object) # nnn x M fitmat <- fitted(object) # nnn x (M + 1) nnn <- nrow(etamat) hdot <- Thetamat <- etamat for (jlocal in 1:M) { Thetamat[, jlocal] <- eta2theta(etamat[, jlocal], link = linkfunctions[jlocal], earg = all.eargs[[jlocal]]) hdot[, jlocal] <- dtheta.deta(Thetamat[, jlocal], link = linkfunctions[jlocal], earg = all.eargs[[jlocal]]) } # jlocal if (any(vfamily == "acat")) { expcs.etamat <- if (reverse) exp(tapplymat1(etamat[, M:1, drop = FALSE], "cumsum")[, M:1, drop = FALSE]) else exp(tapplymat1(etamat, "cumsum")) csexpcs.etavec <- rowSums(expcs.etamat) } if (any(vfamily == "cratio")) { cpThetamat <- if (reverse) tapplymat1( Thetamat[, M:1, drop = FALSE], "cumprod")[, M:1, drop = FALSE] else tapplymat1( Thetamat, "cumprod") } if (any(vfamily == "sratio")) { cpThetamat <- if (reverse) tapplymat1(1 - Thetamat[, M:1, drop = FALSE], "cumprod")[, M:1, drop = FALSE] else tapplymat1(1 - Thetamat, "cumprod") } if (is.logical(is.multivariateY <- object@misc$multiple.responses) && is.multivariateY) stop("cannot handle cumulative(multiple.responses = TRUE)") if (any(vfamily == "cumulative")) { hdot.big <- kronecker(hdot, matrix(1, ppp, 1)) # Enlarged resmat <- cbind(hdot.big, 1) resmat[, 1] <- ifelse(reverse, -1, 1) * hdot.big[, 1] * cfit[, 1] if (M > 1) { for (jlocal in 2:M) resmat[, jlocal] <- ifelse(reverse, -1, 1) * (hdot.big[, jlocal ] * cfit[, jlocal ] - hdot.big[, jlocal - 1] * cfit[, jlocal - 1]) } # jlocal resmat[, M+1] <- ifelse(reverse, 1, -1) * hdot.big[, M] * cfit[, M] temp1 <- array(resmat, c(ppp, nnn, M+1), dimnames = list(dimnames(B)[[1]], dimnames(fitted(object))[[1]], dimnames(fitted(object))[[2]])) temp1 <- aperm(temp1, c(1, 3, 2)) # ppp x (M+1) x nnn if (is.null(ii)) { return(temp1) } else if (is.numeric(ii) && (length(ii) == 1)) { return(temp1[, , ii]) } else { return(temp1[, , ii]) } } # "cumulative" if (any(vfamily == "acat")) { if (!all(object@misc$link == "loge")) stop("currently only the 'loge' link is supported") acat.derivs <- function(jay, tee, M, expcs.etamat, Thetamat, prob1, probMplus1, reverse = FALSE) { if (jay > M+1) stop("argument 'jay' out of range") if (M < tee) stop("argument 'tee' out of range") if (reverse) { # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, dpMplus1.detat <- 
-(probMplus1^2) * rowSums(expcs.etamat[, 1:tee, drop = FALSE]) if (jay == M+1) { return(dpMplus1.detat) } if (jay <= tee) { return((probMplus1 + dpMplus1.detat) * expcs.etamat[, jay]) } if (tee < jay) { return(dpMplus1.detat * expcs.etamat[, jay]) } } else { # reverse = FALSE ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, dp1.detat <- -(prob1^2) * rowSums(expcs.etamat[, tee:M, drop = FALSE]) if (jay == 1) { return(dp1.detat) } if (jay <= tee) { return(dp1.detat * expcs.etamat[, jay-1]) } if (tee < jay) { return((prob1 + dp1.detat) * expcs.etamat[, jay-1]) } } # reverse ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, } # acat.derivs A <- array(0, c(i = nnn, vars = ppp, probs = M + 1, etas = M)) ansarray <- array(0, c(vars = ppp, i = nnn, probs = M + 1)) if (reverse) { probMplus1 <- 1 / (1 + csexpcs.etavec) # Last level of Y } else { prob1 <- 1 / (1 + csexpcs.etavec) # First level of Y } for (jlocal in 1:(M+1)) { for (tlocal in 1:M) { A[, , jlocal, tlocal] <- acat.derivs(jay = jlocal, tee = tlocal, M = M, expcs.etamat = expcs.etamat, Thetamat = Thetamat, prob1 = prob1, probMplus1 = probMplus1, reverse = reverse) } } A <- aperm(A, c(2, 1, 3, 4)) # c(ppp, nnn, M+1, M) for (jlocal in 1:(M + 1)) { for (tlocal in 1:M) { ansarray[,, jlocal] <- ansarray[,, jlocal] + A[,, jlocal, tlocal] * B[, tlocal] } } ans.acat <- aperm(ansarray, c(1, 3, 2)) # c(ppp, M+1, nnn) dimnames(ans.acat) <- list(rownames(B), colnames(fitmat), rownames(etamat)) return(ans.acat) } # "acat" c.nots <- any(vfamily == "cratio") cratio.derivs <- function(jay, tee, hdot, M, cpThetamat, Thetamat, reverse = FALSE) { if (jay >= M+1) stop("argument 'jay' out of range") if (M < tee) stop("argument 'tee' out of range") if (reverse) { # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, if (jay == 1) { return(hdot[, tee] * cpThetamat[, 1] / Thetamat[, tee]) } if (jay-1 == tee) { return(-hdot[, jay-1] * cpThetamat[, jay]) } if (jay <= tee) { return((1 - Thetamat[, jay-1]) * hdot[, tee] * cpThetamat[, jay] / Thetamat[, tee]) } return(rep_len(0, nrow(Thetamat))) # Since jay-1 > tee } else { # reverse = FALSE ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, if (jay == 1 && tee == 1) { return(-hdot[, 1]) } if (jay == tee) { return(-hdot[, jay] * cpThetamat[, jay-1]) } if (tee < jay) { return((1 - Thetamat[, jay]) * hdot[, tee] * cpThetamat[, jay-1] / Thetamat[, tee]) } return(rep_len(0, nrow(Thetamat))) # Since jay < tee } # reverse ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, } # cratio.derivs A <- array(0, c(i = nnn, vars = ppp, probs = M + 1, etas = M)) ansarray <- array(0, c(vars = ppp, i = nnn, probs = M + 1)) choosemat <- if (c.nots) Thetamat else 1 - Thetamat if (min(choosemat) <= 0) warning("division by 0 may occur") if (any(vfamily == "cratio" | vfamily == "sratio")) { if (reverse) { for (tlocal in 1:M) { for (jlocal in 1:tlocal) { A[, , jlocal, tlocal] <- cratio.derivs(jay = jlocal, tee = tlocal, hdot = ifelse(c.nots, 1, -1) * hdot, M = M, cpThetamat = cpThetamat, Thetamat = choosemat, reverse = reverse) } } if (M > 1) for (jlocal in 2:M) { A[, , jlocal, jlocal-1] <- cratio.derivs(jay = jlocal, tee = jlocal-1, hdot = ifelse(c.nots, 1, -1) * hdot, M = M, cpThetamat = cpThetamat, Thetamat = choosemat, reverse = reverse) } } else { for (jlocal in 1:M) { for (tlocal in 1:jlocal) { A[, , jlocal, tlocal] <- cratio.derivs(jay = jlocal, tee = tlocal, hdot = ifelse(c.nots, 1, -1) * hdot, M = M, cpThetamat = cpThetamat, Thetamat = choosemat, reverse = reverse) } } } if (reverse) { A[, , M+1, M] <- ifelse(c.nots, -1, 1) * hdot[, M] } else { for (jlocal in 1:M) { for (tlocal in 1:jlocal) { A[, , M+1, tlocal] <- if 
(c.nots) {
          A[, , M+1, tlocal] - A[, , jlocal, tlocal]
        } else {
          -hdot[, tlocal] * cpThetamat[, M] / choosemat[, tlocal]
        }
      }
    }
  }
  A <- aperm(A, c(2, 1, 3, 4))  # c(ppp, nnn, M+1, M)
  for (jlocal in 1:(M + 1)) {
    for (tlocal in 1:M) {
      ansarray[,, jlocal] <- ansarray[,, jlocal] +
                             A[,, jlocal, tlocal] * B[, tlocal]
    }
  }
  ans.csratio <- aperm(ansarray, c(1, 3, 2))  # c(ppp, M+1, nnn)
  dimnames(ans.csratio) <- list(rownames(B),
                                colnames(fitmat),
                                rownames(etamat))
  return(ans.csratio)
  }  # "cratio" and "sratio"
}  # margefff


prplot <- function(object,
                   control = prplot.control(...), ...) {
  if (!any(slotNames(object) == "family") ||
      !any(object@family@vfamily == "VGAMcategorical"))
    stop("'object' does not seem to be a VGAM categorical model object")
  if (!any(object@family@vfamily == "cumulative"))
    stop("'object' does not seem to be a VGAM 'cumulative' model object")
  control <- prplot.control(...)
  object <- plot.vgam(object, plot.arg = FALSE, raw = FALSE)  # , ...
  if (length(names(object@preplot)) != 1)
    stop("object needs to have only one term")
  MM <- object@misc$M
  use.y <- cbind((object@preplot[[1]])$y)
  Constant <- attr(object@preplot, "Constant")
  if (is.numeric(Constant) && length(Constant) == ncol(use.y))
    use.y <- use.y + matrix(Constant,
                            nrow(use.y), ncol(use.y), byrow = TRUE)
  for (ii in 1:MM) {
    use.y[, ii] <- eta2theta(use.y[, ii],
                             link = object@misc$link[[ii]],
                             earg = object@misc$earg[[ii]])
  }
  if (ncol(use.y) != MM)
    use.y <- use.y[, 1:MM, drop = FALSE]
  use.x <- (object@preplot[[1]])$x
  myxlab <- if (length(control$xlab)) control$xlab else
            (object@preplot[[1]])$xlab
  mymain <- if (MM <= 3) paste(object@misc$parameters, collapse = ", ") else
            paste(object@misc$parameters[c(1, MM)], collapse = ",...,")
  if (length(control$main))
    mymain <- control$main
  myylab <- if (length(control$ylab)) control$ylab else "Probability"
  matplot(use.x, use.y, type = "l",
          xlab = myxlab, ylab = myylab,
          lty = control$lty, col = control$col, las = control$las,
          xlim = if (is.Numeric(control$xlim)) control$xlim else
                 range(use.x),
          ylim = if (is.Numeric(control$ylim)) control$ylim else
                 range(use.y),
          main = mymain)
  if (control$rug.arg)
    rug(use.x, col = control$rcol, lwd = control$rlwd)
  invisible(object)
}


prplot.control <- function(xlab = NULL, ylab = "Probability",
                           main = NULL,
                           xlim = NULL, ylim = NULL,
                           lty = par()$lty, col = par()$col,
                           rcol = par()$col,
                           lwd = par()$lwd, rlwd = par()$lwd,
                           las = par()$las,
                           rug.arg = FALSE, ...) {
  list(xlab = xlab, ylab = ylab,
       xlim = xlim, ylim = ylim,
       lty = lty, col = col, rcol = rcol,
       lwd = lwd, rlwd = rlwd, rug.arg = rug.arg,
       las = las, main = main)
}


is.parallel.matrix <- function(object, ...)
  is.matrix(object) &&
  all(!is.na(object)) &&
  all(c(object) == 1) &&
  ncol(object) == 1


is.parallel.vglm <- function(object, type = c("term", "lm"), ...) {
  type <- match.arg(type, c("term", "lm"))[1]
  Hlist <- constraints(object, type = type)
  unlist(lapply(Hlist, is.parallel.matrix))
}


if (!isGeneric("is.parallel"))
  setGeneric("is.parallel",
             function(object, ...) standardGeneric("is.parallel"),
             package = "VGAM")


setMethod("is.parallel", "matrix",
          function(object, ...)
          is.parallel.matrix(object, ...))


setMethod("is.parallel", "vglm",
          function(object, ...)
          is.parallel.vglm(object, ...))


is.zero.matrix <- function(object, ...)
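# For each column of a coefficient matrix this returns TRUE when all
# non-intercept rows are zero, i.e. that linear/additive predictor is
# intercept-only (cf. the 'zero' argument of many VGAM families).
# Hedged sketch: is.zero(coef(fit, matrix = TRUE)) on a fitted vglm().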
{ rnames <- rownames(object) intercept.index <- if (length(rnames)) { if (any(rnames == "(Intercept)")) { (seq_along(rnames))[rnames == "(Intercept)"] } else { stop("the matrix does not seem to have an intercept") NULL } } else { stop("the matrix does not seem to have an intercept") NULL } if (nrow(object) <= 1) stop("the matrix needs to have more than one row, ", "i.e., more than ", "an intercept on the RHS of the formula") cfit <- object[-intercept.index, , drop = FALSE] foo <- function(conmat.col) all(!is.na(conmat.col)) && all(c(conmat.col) == 0) unlist(apply(cfit, 2, foo)) } is.zero.vglm <- function(object, ...) { is.zero.matrix(coef(object, matrix = TRUE)) } if (!isGeneric("is.zero")) setGeneric("is.zero", function(object, ...) standardGeneric("is.zero"), package = "VGAM") setMethod("is.zero", "matrix", function(object, ...) is.zero.matrix(object, ...)) setMethod("is.zero", "vglm", function(object, ...) is.zero.vglm(object, ...)) setMethod("showvglmS4VGAM", signature(VGAMff = "acat"), function(object, VGAMff, ...) { cat("\nThis is an adjacent categories model with", 1 + object@misc$M, "levels\n") invisible(object) }) setMethod("showvgamS4VGAM", signature(VGAMff = "acat"), function(object, VGAMff, ...) { cat("\nThis is an adjacent categories model with", 1 + object@misc$M, "levels\n") invisible(object) }) setMethod("showvglmS4VGAM", signature(VGAMff = "multinomial"), function(object, VGAMff, ...) { cat("\nThis is a multinomial logit model with", 1 + object@misc$M, "levels\n") invisible(object) }) setMethod("showvgamS4VGAM", signature(VGAMff = "multinomial"), function(object, VGAMff, ...) { cat("\nThis is a multinomial logit model with", 1 + object@misc$M, "levels\n") invisible(object) }) VGAM/R/build.terms.vlm.q0000644000176200001440000000505013135276757014434 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. if (!isGeneric("terms")) setGeneric("terms", function(x, ...) standardGeneric("terms")) terms.vlm <- function(x, ...) { termsvlm(x, ...) } termsvlm <- function(x, form.number = 1, ...) { if (!is.Numeric(form.number, integer.valued = TRUE, length.arg = 1, positive = TRUE) || form.number > 2) stop("argument 'form.number' must be 1 or 2") v <- if (form.number == 1) { v <- x@terms if (!length(v)) stop("terms slot is empty") v$terms } else if (form.number == 2) { x@misc$Terms2 } if (length(v)) { v } else { warning("no terms component; returning a NULL") NULL } } setMethod("terms", "vlm", function(x, ...) 
terms.vlm(x, ...))


Build.terms.vlm <-
  function(x, coefs, cov = NULL, assign, collapse = TRUE, M,
           dimname = NULL, coefmat = NULL) {

  cov.true <- !is.null(cov)
  if (collapse) {
    fit <- matrix(x %*% coefs, ncol = M, byrow = TRUE)
    dimnames(fit) <- dimname
    if (M == 1)
      fit <- c(fit)
    if (cov.true) {
      var <- rowSums((x %*% cov) * x)
      list(fitted.values = fit,
           se.fit = if (M == 1) c(sqrt(var)) else
                    matrix(sqrt(var), ncol = M, byrow = TRUE,
                           dimnames = dimname))
    } else {
      fit
    }
  } else {
    constant <- attr(x, "constant")
    if (!is.null(constant)) {
      constant <- as.vector(t(coefmat) %*% constant)
    }

    if (missing(assign))
      assign <- attr(x, "assign")
    if (is.null(assign))
      stop("Need an 'assign' list")

    fit <- array(0, c(nrow(x), length(assign)),
                 list(dimnames(x)[[1]], names(assign)))
    if (cov.true)
      se <- fit
    TL <- sapply(assign, length)
    simple <- (TL == 1)
    complex <- (TL > 1)
    if (any(simple)) {
      asss <- unlist(assign[simple])
      ones <- rep_len(1, nrow(x))
      fit[, simple] <- x[, asss] * outer(ones, coefs[asss])
      if (cov.true)
        se[, simple] <- abs(x[, asss]) *
                        outer(ones, sqrt(diag(cov))[asss])
    }
    if (any(complex)) {
      assign <- assign[complex]
      for (term in names(assign)) {
        TT <- assign[[term]]
        xt <- x[, TT]
        fit[, term] <- xt %*% coefs[TT]
        if (cov.true) {
          se[, term] <- sqrt(rowSums((xt %*% cov[TT, TT]) * xt))
        }
      }
    }
    attr(fit, "constant") <- constant

    if (cov.true) list(fitted.values = fit, se.fit = se) else fit
  }
}  # Build.terms.vlm()
VGAM/R/family.extremes.R0000644000176200001440000032017713135276757014466 0ustar liggesusers# These functions are
# Copyright (C) 1998-2017 T.W. Yee, University of Auckland.
# All rights reserved.


rgev <- function(n, location = 0, scale = 1, shape = 0) {

  use.n <- if ((length.n <- length(n)) > 1) length.n else
           if (!is.Numeric(n, integer.valued = TRUE,
                           length.arg = 1, positive = TRUE))
             stop("bad input for argument 'n'") else n

  if (!is.Numeric(location))
    stop("bad input for argument 'location'")
  if (!is.Numeric(shape))
    stop("bad input for argument 'shape'")

  ans <- numeric(use.n)
  if (length(shape) != use.n)
    shape <- rep_len(shape, use.n)
  if (length(location) != use.n)
    location <- rep_len(location, use.n)
  if (length(scale) != use.n)
    scale <- rep_len(scale, use.n)

  scase <- abs(shape) < sqrt( .Machine$double.eps )
  nscase <- sum(scase)
  if (use.n - nscase)
    ans[!scase] <- location[!scase] + scale[!scase] *
      ((-log(runif(use.n - nscase)))^(-shape[!scase]) - 1) /
      shape[!scase]
  if (nscase)
    ans[scase] <- rgumbel(nscase, location = location[scase],
                          scale = scale[scase])
  ans[scale <= 0] <- NaN
  ans
}


dgev <- function(x, location = 0, scale = 1, shape = 0,
                 log = FALSE,
                 tolshape0 = sqrt( .Machine$double.eps )) {
  oobounds.log <- -Inf  # 20160412; No longer an argument.
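  ## For shape != 0 the log-density evaluated below is
  ##   log f(x) = -log(scale) - A^(-1/shape) - (1 + 1/shape) * log(A),
  ## with A = 1 + shape * (x - location) / scale and support A > 0;
  ## for |shape| < tolshape0 the Gumbel density is used instead.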
if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(tolshape0, length.arg = 1, positive = TRUE)) stop("bad input for argument 'tolshape0'") use.n <- max(length(x), length(location), length(scale), length(shape)) if (length(shape) != use.n) shape <- rep_len(shape, use.n) if (length(location) != use.n) location <- rep_len(location, use.n) if (length(scale) != use.n) scale <- rep_len(scale, use.n) if (length(x) != use.n) x <- rep_len(x, use.n) logdensity <- rep_len(log(0), use.n) scase <- (abs(shape) < tolshape0) nscase <- sum(scase) if (use.n - nscase) { zedd <- 1 + shape * (x - location) / scale xok <- (!scase) & (zedd > 0) logdensity[xok] <- -log(scale[xok]) - zedd[xok]^(-1/shape[xok]) - (1 + 1/shape[xok]) * log(zedd[xok]) outofbounds <- (!scase) & (zedd <= 0) if (any(outofbounds)) { logdensity[outofbounds] <- oobounds.log no.oob <- sum(outofbounds) } } if (nscase) { logdensity[scase] <- dgumbel(x[scase], location = location[scase], scale = scale[scase], log = TRUE) } logdensity[scale <= 0] <- NaN logdensity[is.infinite(x)] <- log(0) # 20141209 KaiH if (log.arg) logdensity else exp(logdensity) } pgev <- function(q, location = 0, scale = 1, shape = 0, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.arg <- log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") use.n <- max(length(q), length(location), length(scale), length(shape)) if (length(shape) != use.n) shape <- rep_len(shape, use.n) if (length(location) != use.n) location <- rep_len(location, use.n) if (length(scale) != use.n) scale <- rep_len(scale, use.n) if (length(q) != use.n) q <- rep_len(q, use.n) scase0 <- abs(shape) < sqrt( .Machine$double.eps ) # Effectively 0 zedd <- (q - location) / scale use.zedd <- pmax(0, 1 + shape * zedd) if (lower.tail) { if (log.p) { ans <- -use.zedd^(-1 / shape) } else { ans <- exp(-use.zedd^(-1 / shape)) } } else { if (log.p) { ans <- log(-expm1(-use.zedd^(-1 / shape))) } else { ans <- -expm1(-use.zedd^(-1 / shape)) } } if (any(scase0)) { ans[scase0] <- pgumbel(q[scase0], location = location[scase0], scale = scale[scase0], lower.tail = lower.tail, log.p = log.p) } ans[scale <= 0] <- NaN ans } qgev <- function(p, location = 0, scale = 1, shape = 0, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") use.n <- max(length(p), length(location), length(scale), length(shape)) if (length(shape) != use.n) shape <- rep_len(shape, use.n) if (length(location) != use.n) location <- rep_len(location, use.n) if (length(scale) != use.n) scale <- rep_len(scale, use.n) if (length(p) != use.n) p <- rep_len(p, use.n) scase0 <- abs(shape) < sqrt( .Machine$double.eps ) if (lower.tail) { if (log.p) { ln.p <- p ans <- location + scale * ((-ln.p)^(-shape) - 1) / shape ans[ln.p > 0] <- NaN } else { ans <- location + scale * ((-log(p))^(-shape) - 1) / shape ans[p == 1] <- Inf ans[p > 1] <- NaN } } else { if (log.p) { ln.p <- p ans <- location + scale * ((-log1p(-exp(ln.p)))^(-shape) - 1) / shape ans[ln.p > 0] <- NaN } else { ans <- location + scale * ((-log1p(-p))^(-shape) - 1) / shape ans[p == 1] <- Inf ans[p > 1] <- NaN ans[p < 0] <- NaN } } if (any(scase0)) ans[scase0] <- qgumbel(p[scase0], location = location[scase0], scale = scale[scase0], lower.tail = lower.tail, log.p = log.p) ans[scale <= 0] <- NaN ans } gev <- function( llocation = "identitylink", lscale = "loge", 
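    ## Note: the default logoff(offset = 0.5) link keeps shape > -0.5,
    ## the region where the usual regularity conditions for maximum
    ## likelihood estimation of the GEV hold (cf. Smith, 1985).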
lshape = logoff(offset = 0.5), percentiles = c(95, 99), ilocation = NULL, iscale = NULL, ishape = NULL, imethod = 1, gprobs.y = (1:9)/10, # 20160713; grid for finding locat.init gscale.mux = exp((-5:5)/6), # exp(-5:5), gshape = (-5:5) / 11 + 0.01, # c(-0.45, 0.45), iprobs.y = NULL, tolshape0 = 0.001, type.fitted = c("percentiles", "mean"), zero = c("scale", "shape")) { ilocat <- ilocation type.fitted <- match.arg(type.fitted, c("percentiles", "mean"))[1] llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") if (length(percentiles) && (!is.Numeric(percentiles, positive = TRUE) || max(percentiles) >= 100)) stop("bad input for argument 'percentiles'") if (!is.Numeric(imethod, length.arg = 1, positive = TRUE, integer.valued = TRUE) || imethod > 2.5) stop("argument 'imethod' must be 1 or 2") if (length(ishape) && !is.Numeric(ishape)) stop("bad input for argument 'ishape'") if (!is.Numeric(tolshape0, length.arg = 1, positive = TRUE) || tolshape0 > 0.1) stop("bad input for argument 'tolshape0'") new("vglmff", blurb = c("Generalized extreme value distribution\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), ", ", namesof("shape", lshape, earg = eshape)), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 3, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("location", "scale", "shape"), llocation = .llocat , lscale = .lscale , lshape = .lshape , type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .llocat = llocation, .lscale = lscale, .lshape = lshape, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ temp16 <- w.y.check(w = w, y = y, Is.nonnegative.y = FALSE, Is.integer.y = FALSE, ncol.w.max = 1, # Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = NULL, # Ignore this argument maximize = TRUE) w <- temp16$w y <- temp16$y M1 <- extra$M1 <- 3 ncoly <- ncol(y) extra$ncoly <- ncoly extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) extra$percentiles <- .percentiles extra$M1 <- M1 mynames1 <- "location" mynames2 <- "scale" mynames3 <- "shape" predictors.names <- c( namesof(mynames1, .llocat , earg = .elocat , short = TRUE), namesof(mynames2, .lscale , earg = .escale , short = TRUE), namesof(mynames3, .lshape , earg = .eshape , short = TRUE)) if (ncol(y) > 1) y <- -t(apply(-y, 1, sort, na.last = TRUE)) r.vec <- rowSums(cbind(!is.na(y))) if (any(r.vec == 0)) stop("A row contains all missing values") NOS.proxy <- 1 gprobs.y <- .gprobs.y ilocat <- .ilocat # Default is NULL if (length(ilocat)) ilocat <- matrix(ilocat, n, NOS.proxy, byrow = TRUE) if (!length(etastart)) { locat.init <- shape.init <- scale.init <- matrix(NA_real_, n, NOS.proxy) if (length( .iprobs.y )) gprobs.y <- .iprobs.y gscale.mux <- .gscale.mux # gscale.mux is on a relative scale gshape <- .gshape for (jay in 1:NOS.proxy) { # For each response 'y_jay'... 
scale.init.jay <- sd(y[, 1]) * sqrt(6) / pi  # Based on the Gumbel
        scale.init.jay <- gscale.mux * scale.init.jay
        if (length( .iscale ))
          scale.init.jay <- .iscale  # iscale is on an absolute scale
        if (length( .ishape ))
          gshape <- .ishape  # ishape is on an absolute scale

        locat.init.jay <- if ( .imethod == 1) {
          quantile(y[, jay], probs = gprobs.y)  # + 1/16
        } else {
          weighted.mean(y[, jay], w = w[, 1])
        }
        if (length(ilocat))
          locat.init.jay <- ilocat[, jay]

        gev.Loglikfun3 <- function(shapeval, locatval, scaleval,
                                   y, x, w, extraargs) {
          sum(c(w) * dgev(x = y, locat = locatval,
                          scale = scaleval, shape = shapeval,
                          log = TRUE), na.rm = TRUE)
        }
        try.this <-
          grid.search3(gshape, locat.init.jay, scale.init.jay,
                       objfun = gev.Loglikfun3,
                       y = y[, 1], w = w[, jay],
                       ret.objfun = TRUE,  # Last value is the loglik
                       extraargs = NULL)
        shape.init[, jay] <- try.this["Value1"]
        locat.init[, jay] <- try.this["Value2"]
        scale.init[, jay] <- try.this["Value3"]
      }  # for (jay ...)

      etastart <-
        cbind(theta2eta(locat.init, .llocat , .elocat ),
              theta2eta(scale.init, .lscale , .escale ),
              theta2eta(shape.init, .lshape , .eshape ))
    }
  }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape,
            .elocat = elocat, .escale = escale, .eshape = eshape,
            .ilocat = ilocat, .ishape = ishape, .iscale = iscale,
            .gprobs.y = gprobs.y, .gscale.mux = gscale.mux,
            .iprobs.y = iprobs.y, .gshape = gshape,
            .type.fitted = type.fitted, .percentiles = percentiles,
            .tolshape0 = tolshape0, .imethod = imethod ))),

  linkinv = eval(substitute(function(eta, extra = NULL) {
    NOS <- ncol(eta) / c(M1 = 3)
    Locat <- eta2theta(eta[, 1], .llocat , .elocat )
    sigma <- eta2theta(eta[, 2], .lscale , .escale )
    shape <- eta2theta(eta[, 3], .lshape , .eshape )

    type.fitted <- if (length(extra$type.fitted)) {
      extra$type.fitted
    } else {
      warning("cannot find 'type.fitted'. Returning 'percentiles'.")
      "percentiles"
    }
    type.fitted <- match.arg(type.fitted,
                             c("percentiles", "mean"))[1]
    pcent <- extra$percentiles
    LP <- length(pcent)

    if (type.fitted == "percentiles" &&  # Upward compatibility:
        LP > 0) {
      fv <- matrix(NA_real_, nrow(eta), LP)
      for (ii in 1:LP) {
        fv[, ii] <- qgev(pcent[ii] / 100, loc = Locat,
                         scale = sigma, shape = shape)
      }
      fv <- label.cols.y(fv, colnames.y = extra$colnames.y,
                         NOS = NOS, percentiles = pcent,
                         one.on.one = FALSE)
    } else {
      is.zero <- (abs(shape) < .tolshape0 )
      EulerM <- -digamma(1)
      fv <- Locat + sigma * EulerM  # When shape = 0, is Gumbel
      fv[!is.zero] <- Locat[!is.zero] + sigma[!is.zero] *
        (gamma(1 - shape[!is.zero]) - 1) / shape[!is.zero]
      fv[shape >= 1] <- NA  # Mean exists only if shape < 1.
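      ## That is, E(Y) = Locat + sigma * (gamma(1 - shape) - 1) / shape
      ## for 0 != shape < 1, and E(Y) = Locat + sigma * EulerM in the
      ## shape == 0 (Gumbel) limit.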
fv <- label.cols.y(fv, colnames.y = extra$colnames.y, NOS = NOS) } fv }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .tolshape0 = tolshape0 ))), last = eval(substitute(expression({ misc$earg <- vector("list", M) names(misc$earg) <- c(mynames1, mynames2, mynames3) misc$earg[[1]] <- .elocat misc$earg[[2]] <- .escale misc$earg[[3]] <- .eshape misc$link <- c( .llocat , .lscale , .lshape ) names(misc$link) <- c(mynames1, mynames2, mynames3) misc$M1 <- M1 misc$multipleResponses <- FALSE misc$true.mu <- !length( .percentiles ) # @fitted is not a true mu misc$percentiles <- .percentiles misc$tolshape0 <- .tolshape0 if (ncol(y) == 1) y <- as.vector(y) if (any(shape < -0.5)) warning("some values of the shape parameter are less than -0.5") }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .tolshape0 = tolshape0, .percentiles = percentiles ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL) { Locat <- eta2theta(eta[, 1], .llocat , .elocat ) sigma <- eta2theta(eta[, 2], .lscale , .escale ) shape <- eta2theta(eta[, 3], .lshape , .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { new.answer <- sum(c(w) * dgev(x = y, location = Locat, scale = sigma, shape = shape, tolshape0 = .tolshape0 , log = TRUE), na.rm = TRUE) new.answer } }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .tolshape0 = tolshape0 ))), vfamily = c("gev", "vextremes"), validparams = eval(substitute(function(eta, y, extra = NULL) { Locat <- eta2theta(eta[, c(TRUE, FALSE, FALSE)], .llocat , .elocat ) sigma <- eta2theta(eta[, c(FALSE, TRUE, FALSE)], .lscale , .escale ) shape <- eta2theta(eta[, c(FALSE, FALSE, TRUE)], .lshape , .eshape ) okay1 <- all(is.finite(Locat)) && all(is.finite(sigma)) && all(sigma > 0) && all(is.finite(shape)) okay.support <- if (okay1) { Boundary <- Locat - sigma / shape all((shape == 0) || (shape < 0 & y < Boundary) || (shape > 0 & y > Boundary)) } else { TRUE } if (!okay.support) warning("current parameter estimates are at the boundary of ", "the parameter space. 
", "Try fitting a Gumbel model instead.") okay1 && okay.support }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape ))), deriv = eval(substitute(expression({ M1 <- 3 r.vec <- rowSums(cbind(!is.na(y))) Locat <- eta2theta(eta[, 1], .llocat , .elocat ) sigma <- eta2theta(eta[, 2], .lscale , .escale ) shape <- eta2theta(eta[, 3], .lshape , .eshape ) dmu.deta <- dtheta.deta(Locat, .llocat , .elocat ) dsi.deta <- dtheta.deta(sigma, .lscale , .escale ) dxi.deta <- dtheta.deta(shape, .lshape , .eshape ) is.zero <- (abs(shape) < .tolshape0 ) ii <- 1:nrow(eta) zedd <- (y - Locat) / sigma A <- 1 + shape * zedd dA.dxi <- zedd # matrix dA.dmu <- -shape/sigma # vector dA.dsigma <- -shape * zedd / sigma # matrix pow <- 1 + 1 / shape A1 <- A[cbind(ii, r.vec)] AAr1 <- dA.dmu/(shape * A1^pow) - pow * rowSums(cbind(dA.dmu/A), na.rm = TRUE) AAr2 <- dA.dsigma[cbind(ii,r.vec)] / (shape * A1^pow) - pow * rowSums(cbind(dA.dsigma/A), na.rm = TRUE) AAr3 <- 1/(shape * A1^pow) - pow * rowSums(cbind(dA.dsigma/A), na.rm = TRUE) dl.dmu <- AAr1 dl.dsi <- AAr2 - r.vec/sigma dl.dxi <- rowSums(cbind(log(A)), na.rm = TRUE)/shape^2 - pow * rowSums(cbind(dA.dxi/A), na.rm = TRUE) - (log(A1) / shape^2 - dA.dxi[cbind(ii, r.vec)] / (shape*A1)) * A1^(-1/shape) if (any(is.zero)) { zorro <- c(zedd[cbind(1:n, r.vec)]) zorro <- zorro[is.zero] ezm1 <- -expm1(-zorro) # 1 - exp(-zorro) dl.dmu[is.zero] <- ezm1 / sigma[is.zero] dl.dsi[is.zero] <- (zorro * ezm1 - 1) / sigma[is.zero] dl.dxi[is.zero] <- zorro * (ezm1 * zorro / 2 - 1) } c(w) * cbind(dl.dmu * dmu.deta, dl.dsi * dsi.deta, dl.dxi * dxi.deta) }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .tolshape0 = tolshape0 ))), weight = eval(substitute(expression({ kay <- -shape dd <- digamma(r.vec - kay + 1) ddd <- digamma(r.vec + 1) # Unnecessarily evaluated at each iteration temp13 <- -kay * dd + (kay^2 - kay + 1) / (1 - kay) temp33 <- 1 - 2 * kay * ddd + kay^2 * (1 + trigamma(r.vec + 1) + ddd^2) temp23 <- -kay * dd + (1 + (1-kay)^2) / (1-kay) GR.gev <- function(jay, ri, kay) gamma(ri - jay * kay + 1) / gamma(ri) tmp2 <- (1 - kay)^2 * GR.gev(2, r.vec, kay) # Latter is GR2 tmp1 <- (1 - 2*kay) * GR.gev(1, r.vec, kay) # Latter is GR1 k0 <- (1 - 2*kay) k1 <- k0 * kay k2 <- k1 * kay k3 <- k2 * kay # kay^3 * (1-2*kay) wz <- matrix(NA_real_, n, 6) wz[, iam(1, 1, M)] <- tmp2 / (sigma^2 * k0) wz[, iam(1, 2, M)] <- (tmp2 - tmp1) / (sigma^2 * k1) wz[, iam(1, 3, M)] <- (tmp1 * temp13 - tmp2) / (sigma * k2) wz[, iam(2, 2, M)] <- (r.vec*k0 - 2*tmp1 + tmp2) / (sigma^2 * k2) wz[, iam(2, 3, M)] <- (r.vec*k1*ddd + tmp1 * temp23 - tmp2 - r.vec*k0) / (sigma * k3) wz[, iam(3, 3, M)] <- (2*tmp1*(-temp13) + tmp2 + r.vec*k0*temp33) / (k3*kay) if (any(is.zero)) { if (ncol(y) > 1) stop("cannot handle shape == 0 with a multivariate response") EulerM <- -digamma(1) wz[is.zero, iam(2, 2, M)] <- (pi^2/6 +(1-EulerM)^2)/sigma[is.zero]^2 wz[is.zero, iam(3, 3, M)] <- 2.4236 wz[is.zero, iam(1, 2, M)] <- (digamma(2) + 2 * (EulerM - 1)) / sigma[is.zero]^2 wz[is.zero, iam(1, 3, M)] <- -(trigamma(1) / 2 + digamma(1) * (digamma(1)/2 + 1))/sigma[is.zero] wz[is.zero, iam(2, 3, M)] <- (-dgammadx(2, 3)/6 + dgammadx(1, 1) + 2*dgammadx(1, 2) + 2*dgammadx(1, 3) / 3) / sigma[is.zero] if (FALSE ) { wz[, iam(1, 2, M)] <- 2 * r.vec / sigma^2 wz[, iam(2, 2, M)] <- -4 * r.vec * digamma(r.vec + 1) + 2 * r.vec + (4 * dgammadx(r.vec + 1, deriv.arg = 1) - 3 * dgammadx(r.vec + 1, deriv.arg = 2)) / gamma(r.vec) # 
# Not checked
        }
      }

      wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] * dmu.deta^2
      wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] * dsi.deta^2
      wz[, iam(3, 3, M)] <- wz[, iam(3, 3, M)] * dxi.deta^2
      wz[, iam(1, 2, M)] <- wz[, iam(1, 2, M)] * dmu.deta * dsi.deta
      wz[, iam(1, 3, M)] <- wz[, iam(1, 3, M)] * dmu.deta * (-dxi.deta)
      wz[, iam(2, 3, M)] <- wz[, iam(2, 3, M)] * dsi.deta * (-dxi.deta)
      c(w) * wz
  }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape,
            .elocat = elocat, .escale = escale, .eshape = eshape ))))
}


dgammadx <- function(x, deriv.arg = 1) {
  if (deriv.arg == 0) {
    gamma(x)
  } else if (deriv.arg == 1) {
    digamma(x) * gamma(x)
  } else if (deriv.arg == 2) {
    gamma(x) * (trigamma(x) + digamma(x)^2)
  } else if (deriv.arg == 3) {
    gamma(x) * (psigamma(x, deriv = 2) +
                2 * digamma(x) * trigamma(x)) +
    Recall(x, deriv.arg = 1) * (trigamma(x) + digamma(x)^2)
  } else if (deriv.arg == 4) {
    Recall(x, deriv.arg = 2) * (trigamma(x) + digamma(x)^2) +
    2 * Recall(x, deriv.arg = 1) *
        (psigamma(x, deriv = 2) + 2*digamma(x) * trigamma(x)) +
    gamma(x) * (psigamma(x, deriv = 3) + 2*trigamma(x)^2 +
                2 * digamma(x) * psigamma(x, deriv = 2))
  } else {
    stop("cannot handle 'deriv' > 4")
  }
}


gevff <- function(
          llocation = "identitylink",
          lscale = "loge",
          lshape = logoff(offset = 0.5),
          percentiles = c(95, 99),
          ilocation = NULL, iscale = NULL, ishape = NULL,
          imethod = 1,
          gprobs.y = (1:9)/10,  # 20160713; grid for finding locat.init
          gscale.mux = exp((-5:5)/6),  # exp(-5:5),
          gshape = (-5:5) / 11 + 0.01,  # c(-0.45, 0.45),
          iprobs.y = NULL,
          tolshape0 = 0.001,
          type.fitted = c("percentiles", "mean"),
          zero = c("scale", "shape")) {

  ilocat <- ilocation

  if (length(iscale) && !is.Numeric(iscale, positive = TRUE))
    stop("bad input for argument 'iscale'")

  type.fitted <- match.arg(type.fitted,
                           c("percentiles", "mean"))[1]

  llocat <- as.list(substitute(llocation))
  elocat <- link2list(llocat)
  llocat <- attr(elocat, "function.name")

  lscale <- as.list(substitute(lscale))
  escale <- link2list(lscale)
  lscale <- attr(escale, "function.name")

  lshape <- as.list(substitute(lshape))
  eshape <- link2list(lshape)
  lshape <- attr(eshape, "function.name")

  if (length(percentiles) &&
      (!is.Numeric(percentiles, positive = TRUE) ||
       max(percentiles) >= 100))
    stop("bad input for argument 'percentiles'")

  if (!is.Numeric(imethod, length.arg = 1, positive = TRUE,
                  integer.valued = TRUE) || imethod > 2.5)
    stop("argument 'imethod' must be 1 or 2")

  if (length(ishape) && !is.Numeric(ishape))
    stop("bad input for argument 'ishape'")

  if (!is.Numeric(tolshape0, length.arg = 1, positive = TRUE) ||
      tolshape0 > 0.1)
    stop("bad input for argument 'tolshape0'")

  new("vglmff",
  blurb = c("Generalized extreme value distribution\n",
            "Links: ",
            namesof("location", link = llocat, earg = elocat), ", ",
            namesof("scale", link = lscale, earg = escale), ", ",
            namesof("shape", link = lshape, earg = eshape)),

  constraints = eval(substitute(expression({
    constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M,
                                predictors.names = predictors.names,
                                M1 = 3)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...)
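  ## infos() returns only static metadata: M1 = 3 linear predictors
  ## per response, the parameter names, and the links; this seems to
  ## be what the zero-argument machinery consults before fitting.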
{ list(M1 = 3, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("location", "scale", "shape"), llocation = .llocat , lscale = .lscale , lshape = .lshape , type.fitted = .type.fitted , percentiles = .percentiles , zero = .zero ) }, list( .zero = zero, .llocat = llocation, .lscale = lscale, .lshape = lshape, .type.fitted = type.fitted, .percentiles = percentiles ))), initialize = eval(substitute(expression({ temp16 <- w.y.check(w = w, y = y, Is.nonnegative.y = FALSE, Is.integer.y = FALSE, ncol.w.max = Inf, # Differs from [e]gev()! ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp16$w y <- temp16$y M1 <- extra$M1 <- 3 NOS <- ncoly <- ncol(y) extra$ncoly <- ncoly extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) extra$percentiles <- .percentiles extra$M1 <- M1 M <- M1 * ncoly # Is now true! mynames1 <- param.names("location", NOS) mynames2 <- param.names("scale", NOS) mynames3 <- param.names("shape", NOS) predictors.names <- c( namesof(mynames1, .llocat , earg = .elocat , short = TRUE), namesof(mynames2, .lscale , earg = .escale , short = TRUE), namesof(mynames3, .lshape , earg = .eshape , short = TRUE))[ interleave.VGAM(M, M1 = M1)] gprobs.y <- .gprobs.y ilocat <- .ilocat # Default is NULL if (length(ilocat)) ilocat <- matrix(ilocat, n, NOS, byrow = TRUE) if (!length(etastart)) { if ( .lshape == "extlogit" && length( .ishape ) && (any( .ishape <= eshape$min | .ishape >= eshape$max))) stop("bad input for argument 'eshape'") locat.init <- shape.init <- scale.init <- matrix(NA_real_, n, NOS) if (length( .iprobs.y )) gprobs.y <- .iprobs.y gscale.mux <- .gscale.mux # gscale.mux is on a relative scale gshape <- .gshape for (jay in 1:NOS) { # For each response 'y_jay'... do: scale.init.jay <- sd(y[, jay]) * sqrt(6) / pi # Based on the Gumbel scale.init.jay <- gscale.mux * scale.init.jay if (length( .iscale )) scale.init.jay <- .iscale # iscale is on an absolute scale if (length( .ishape )) gshape <- .ishape # ishape is on an absolute scale locat.init.jay <- if ( .imethod == 1) { quantile(y[, jay], probs = gprobs.y) # + 1/16 } else { weighted.mean(y[, jay], w = w[, jay]) } if (length(ilocat)) locat.init.jay <- ilocat[, jay] gevff.Loglikfun3 <- function(shapeval, locatval, scaleval, y, x, w, extraargs) { sum(c(w) * dgev(x = y, locat = locatval, scale = scaleval, shape = shapeval, log = TRUE), na.rm = TRUE) } try.this <- grid.search3(gshape, locat.init.jay, scale.init.jay, objfun = gevff.Loglikfun3, y = y[, jay], w = w[, jay], ret.objfun = TRUE, # Last value is the loglik extraargs = NULL) shape.init[, jay] <- try.this["Value1" ] locat.init[, jay] <- try.this["Value2" ] scale.init[, jay] <- try.this["Value3" ] } # for (jay ...) 
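      ## grid.search3() returns the (shape, location, scale) grid point
      ## with the largest GEV log-likelihood; these values seed the
      ## initial linear predictors (etastart) constructed next.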
etastart <- cbind(theta2eta(locat.init, .llocat , earg = .elocat ), theta2eta(scale.init, .lscale , earg = .escale ), theta2eta(shape.init, .lshape , earg = .eshape )) etastart <- etastart[, interleave.VGAM(M, M1 = M1), drop = FALSE] } }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .ilocat = ilocat, .iscale = iscale, .ishape = ishape, .gshape = gshape, .gprobs.y = gprobs.y, .gscale.mux = gscale.mux, .iprobs.y = iprobs.y, .percentiles = percentiles, .tolshape0 = tolshape0, .imethod = imethod, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 3) Locat <- eta2theta(eta[, c(TRUE, FALSE, FALSE)], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, c(FALSE, TRUE, FALSE)], .lscale , earg = .escale ) shape <- eta2theta(eta[, c(FALSE, FALSE, TRUE)], .lshape , earg = .eshape ) type.fitted <- if (length(extra$type.fitted)) { extra$type.fitted } else { warning("cannot find 'type.fitted'. Returning 'percentiles'.") "percentiles" } type.fitted <- match.arg(type.fitted, c("percentiles", "mean"))[1] pcent <- extra$percentiles LP <- length(pcent) if (type.fitted == "percentiles" && # Upward compatibility: LP > 0) { fv <- matrix(NA_real_, nrow(eta), LP * NOS) icol <- (0:(NOS-1)) * LP for (ii in 1:LP) { icol <- icol + 1 fv[, icol] <- qgev(pcent[ii] / 100, loc = Locat, scale = Scale, shape = shape) } fv <- label.cols.y(fv, colnames.y = extra$colnames.y, NOS = NOS, percentiles = pcent, one.on.one = FALSE) } else { is.zero <- (abs(shape) < .tolshape0 ) EulerM <- -digamma(1) fv <- Locat + Scale * EulerM # When shape == 0 it is Gumbel fv[!is.zero] <- Locat[!is.zero] + Scale[!is.zero] * (gamma(1 - shape[!is.zero]) - 1) / shape[!is.zero] fv[shape >= 1] <- NA # Mean exists only if shape < 1. 
fv <- label.cols.y(fv, colnames.y = extra$colnames.y, NOS = NOS) } fv }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .tolshape0 = tolshape0 ))), last = eval(substitute(expression({ temp0303 <- c(rep_len( .llocat , NOS), rep_len( .lscale , NOS), rep_len( .lshape , NOS)) names(temp0303) <- c(mynames1, mynames2, mynames3) temp0303 <- temp0303[interleave.VGAM(M, M1 = M1)] misc$link <- temp0303 # Already named misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:NOS) { misc$earg[[M1*ii-2]] <- .elocat misc$earg[[M1*ii-1]] <- .escale misc$earg[[M1*ii ]] <- .eshape } misc$true.mu <- !length( .percentiles ) # @fitted is not a true mu misc$percentiles <- .percentiles misc$tolshape0 <- .tolshape0 if (any(shape < -0.5)) warning("some values of the shape parameter are less than -0.5") }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .tolshape0 = tolshape0, .percentiles = percentiles ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { M1 <- 3 NOS <- ncol(eta) / M1 Locat <- eta2theta(eta[, c(TRUE, FALSE, FALSE)], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, c(FALSE, TRUE, FALSE)], .lscale , earg = .escale ) shape <- eta2theta(eta[, c(FALSE, FALSE, TRUE)], .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dgev(x = y, location = Locat, scale = Scale, shape = shape, tolshape0 = .tolshape0 , log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .tolshape0 = tolshape0 ))), vfamily = c("gevff", "vextremes"), validparams = eval(substitute(function(eta, y, extra = NULL) { Locat <- eta2theta(eta[, c(TRUE, FALSE, FALSE)], .llocat , .elocat ) Scale <- eta2theta(eta[, c(FALSE, TRUE, FALSE)], .lscale , .escale ) shape <- eta2theta(eta[, c(FALSE, FALSE, TRUE)], .lshape , .eshape ) okay1 <- all(is.finite(Locat)) && all(is.finite(Scale)) && all(Scale > 0) && all(is.finite(shape)) okay.support <- if (okay1) { Boundary <- Locat - Scale / shape all((shape == 0) || (shape < 0 & y < Boundary) || (shape > 0 & y > Boundary)) } else { TRUE } if (!okay.support) warning("current parameter estimates are at the boundary of ", "the parameter space. 
", "Try fitting a Gumbel model instead.") okay1 && okay.support }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .tolshape0 = tolshape0 ))), deriv = eval(substitute(expression({ M1 <- 3 NOS <- ncol(eta) / M1 Locat <- eta2theta(eta[, c(TRUE, FALSE, FALSE)], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, c(FALSE, TRUE, FALSE)], .lscale , earg = .escale ) shape <- eta2theta(eta[, c(FALSE, FALSE, TRUE)], .lshape , earg = .eshape ) is.zero <- (abs(shape) < .tolshape0 ) zedd <- (y - Locat) / Scale A <- 1 + shape * zedd dA.dlocat <- -shape / Scale dA.dshape <- zedd dA.dScale <- -shape * zedd / Scale pow <- 1 + 1/shape if (any(bad <- A <= 0, na.rm = TRUE)) stop(sum(bad, na.rm = TRUE), " observations violating boundary constraints in '@deriv'") AA <- 1 / (shape * A^pow)- pow / A dl.dlocat <- dA.dlocat * AA dl.dscale <- dA.dScale * AA - 1/Scale dl.dshape <- log(A)/shape^2 - pow * dA.dshape / A - (log(A)/shape^2 - dA.dshape / (shape*A)) * A^(-1/shape) if (any(is.zero)) { omez <- -expm1(-zedd[is.zero]) zedd0 <- zedd[is.zero] dl.dlocat[is.zero] <- omez / Scale[is.zero] dl.dscale[is.zero] <- (zedd0 * omez - 1) / Scale[is.zero] dl.dshape[is.zero] <- zedd0 * (omez * zedd0 / 2 - 1) } dlocat.deta <- dtheta.deta(Locat, .llocat , earg = .elocat ) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) ans <- c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta, dl.dshape * dshape.deta) ans <- ans[, interleave.VGAM(M, M1 = M1)] ans }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .tolshape0 = tolshape0 ))), weight = eval(substitute(expression({ EulerM <- -digamma(1) bad <- A <= 0 if (any(bad, na.rm = TRUE)) stop(sum(bad, na.rm = TRUE), " observations violating boundary constraints in '@weight'") shape[abs(shape + 0.5) < .tolshape0 ] <- -0.499 temp100 <- gamma(2 + shape) pp <- (1 + shape)^2 * gamma(1 + 2*shape) qq <- temp100 * (digamma(1 + shape) + (1 + shape)/shape) ned2l.dlocat2 <- pp / Scale^2 ned2l.dscale2 <- (1 - 2*temp100 + pp) / (Scale * shape)^2 ned2l.dshape2 <- (pi^2 / 6 + (1 - EulerM + 1/shape)^2 - (2*qq - pp/shape)/shape) / shape^2 ned2l.dlocsca <- -(pp - temp100) / (Scale^2 * shape) ned2l.dscasha <- -(1 - EulerM + (1 - temp100)/shape - qq + pp/shape) / (Scale * shape^2) ned2l.dlocsha <- -(qq - pp/shape) / (Scale * shape) if (any(is.zero)) { ned2l.dscale2[is.zero] <- (pi^2/6 + (1-EulerM)^2) / Scale[is.zero]^2 ned2l.dshape2[is.zero] <- 2.4236 ned2l.dlocsca[is.zero] <- (digamma(2) + 2*(EulerM - 1)) / Scale[is.zero]^2 ned2l.dscasha[is.zero] <- -( -dgammadx(2, 3) / 6 + dgammadx(1, 1) + 2*dgammadx(1, 2) + 2*dgammadx(1, 3) / 3) / Scale[is.zero] ned2l.dlocsha[is.zero] <- (trigamma(1) / 2 + digamma(1)* (digamma(1) / 2 + 1)) / Scale[is.zero] } wz <- array( c(c(w) * ned2l.dlocat2 * dlocat.deta^2, c(w) * ned2l.dscale2 * dscale.deta^2, c(w) * ned2l.dshape2 * dshape.deta^2, c(w) * ned2l.dlocsca * dlocat.deta * dscale.deta, c(w) * ned2l.dscasha * dscale.deta * dshape.deta, c(w) * ned2l.dlocsha * dlocat.deta * dshape.deta), dim = c(n, NOS, 6)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .eshape = eshape, .tolshape0 = tolshape0 )))) } rgumbel <- function(n, location = 0, scale = 1) { answer <- location - scale * log(-log(runif(n))) answer[scale <= 0] <- NaN answer } dgumbel <- function(x, location = 0, scale = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for 
argument 'log'") rm(log) zedd <- (x - location) / scale logdensity <- -zedd - exp(-zedd) - log(scale) logdensity[is.infinite(x)] <- log(0) # 20141209 KaiH if (log.arg) logdensity else exp(logdensity) } qgumbel <- function(p, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ln.p <- p ans <- location - scale * log(-ln.p) } else { ans <- location - scale * log(-log(p)) ans[p == 0] <- -Inf ans[p == 1] <- Inf } } else { if (log.p) { ln.p <- p ans <- location - scale * log(-log(-expm1(ln.p))) ans[ln.p > 0] <- NaN } else { ans <- location - scale * log(-log1p(-p)) ans[p == 0] <- Inf ans[p == 1] <- -Inf } } ans[scale <= 0] <- NaN ans } pgumbel <- function(q, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ans <- -exp(-(q - location) / scale) ans[q <= -Inf] <- -Inf ans[q == Inf] <- 0 } else { ans <- exp(-exp(-(q - location) / scale)) ans[q <= -Inf] <- 0 ans[q == Inf] <- 1 } } else { if (log.p) { ans <- log(-expm1(-exp(-(q - location) / scale))) ans[q <= -Inf] <- 0 ans[q == Inf] <- -Inf } else { ans <- -expm1(-exp(-(q - location) / scale)) ans[q <= -Inf] <- 1 ans[q == Inf] <- 0 } } ans[scale <= 0] <- NaN ans } gumbel <- function(llocation = "identitylink", lscale = "loge", iscale = NULL, R = NA, percentiles = c(95, 99), mpv = FALSE, zero = NULL) { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.logical(mpv) || length(mpv) != 1) stop("bad input for argument 'mpv'") if (length(percentiles) && (!is.Numeric(percentiles, positive = TRUE) || max(percentiles) >= 100)) stop("bad input for argument 'percentiles'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") new("vglmff", blurb = c("Gumbel distribution for extreme value regression\n", "Links: ", namesof("location", llocat, earg = elocat ), ", ", namesof("scale", lscale, earg = escale )), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("location", "scale"), llocation = .llocat , lscale = .lscale , mpv = .mpv , zero = .zero ) }, list( .zero = zero, .llocat = llocation, .lscale = lscale, .mpv = mpv ))), initialize = eval(substitute(expression({ predictors.names <- c(namesof("location", .llocat , earg = .elocat , short = TRUE), namesof("scale", .lscale , earg = .escale , short = TRUE)) y <- as.matrix(y) if (ncol(y) > 1) y <- -t(apply(-y, 1, sort, na.last = TRUE)) w <- as.matrix(w) if (ncol(w) != 1) stop("the 'weights' argument must be a vector or ", "1-column matrix") r.vec <- rowSums(cbind(!is.na(y))) if (any(r.vec == 0)) stop("There is at least one row of the response containing all NAs") if (ncol(y) > 1) { yiri <- y[cbind(1:nrow(y), r.vec)] sc.init <- if (is.Numeric( .iscale, positive = TRUE)) .iscale else {3 * (rowMeans(y, na.rm = TRUE) - yiri)} sc.init <- rep_len(sc.init, nrow(y)) sc.init[sc.init <= 0.0001] <- 1 # Used to be .iscale loc.init <- yiri + sc.init * log(r.vec) } else { sc.init <- if (is.Numeric( .iscale, positive = TRUE)) .iscale else 1.1 * (0.01 + sqrt(6 * var(y))) / pi sc.init <- rep_len(sc.init, n) EulerM <- -digamma(1) loc.init <- (y - sc.init * EulerM) loc.init[loc.init <= 0] <- min(y) } extra$R <- .R extra$mpv <- .mpv extra$percentiles <- .percentiles if (!length(etastart)) { etastart <- cbind(theta2eta(loc.init, .llocat , earg = .elocat ), theta2eta( sc.init, .lscale , earg = .escale )) } }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .iscale = iscale, .R = R, .mpv = mpv, .percentiles = percentiles ))), linkinv = eval(substitute(function(eta, extra = NULL) { loc <- eta2theta(eta[, 1], .llocat , earg = .elocat ) sigma <- eta2theta(eta[, 2], .lscale , earg = .escale ) pcent <- extra$percentiles LP <- length(pcent) # may be 0 if (LP > 0) { mpv <- extra$mpv mu <- matrix(NA_real_, nrow(eta), LP + mpv) # LP may be 0 Rvec <- extra$R for (ii in 1:LP) { ci <- if (is.Numeric(Rvec)) Rvec * (1 - pcent[ii] / 100) else -log(pcent[ii] / 100) mu[, ii] <- loc - sigma * log(ci) } if (mpv) mu[, ncol(mu)] <- loc - sigma * log(log(2)) dmn2 <- paste(as.character(pcent), "%", sep = "") if (mpv) dmn2 <- c(dmn2, "MPV") dimnames(mu) <- list(dimnames(eta)[[1]], dmn2) } else { EulerM <- -digamma(1) mu <- loc + sigma * EulerM } mu }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), last = eval(substitute(expression({ misc$links <- c(location = .llocat , scale = .lscale ) misc$earg <- list(location = .elocat , scale = .escale ) misc$R <- .R misc$mpv <- .mpv misc$true.mu <- !length( .percentiles ) # @fitted is not a true mu misc$percentiles <- .percentiles }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .percentiles = percentiles, .mpv = mpv, .R = R ))), vfamily = c("gumbel", "vextremes"), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { loc <- eta2theta(eta[, 1], .llocat, earg = .elocat ) sigma <- eta2theta(eta[, 2], .lscale , earg = .escale ) r.vec <- rowSums(cbind(!is.na(y))) yiri <- y[cbind(1:nrow(y), r.vec)] ans <- -r.vec * log(sigma) - exp( -(yiri-loc)/sigma ) max.r.vec <- max(r.vec) for (jay in 1:max.r.vec) { index <- (jay <= r.vec) ans[index] <- ans[index] - (y[index,jay] - loc[index])/sigma[index] } if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * ans if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llocat = llocat, 
.lscale = lscale, .elocat = elocat, .escale = escale ))), deriv = eval(substitute(expression({ locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) sigma <- eta2theta(eta[, 2], .lscale , earg = .escale ) r.vec <- rowSums(cbind(!is.na(y))) yiri <- y[cbind(1:nrow(y), r.vec)] yi.bar <- rowMeans(y, na.rm = TRUE) temp2 <- (yiri - locat) / sigma term2 <- exp(-temp2) dlocat.deta <- dtheta.deta(locat, .llocat , earg = .elocat ) dsigma.deta <- dtheta.deta(sigma, .lscale , earg = .escale ) dl.dlocat <- (r.vec - term2) / sigma dl.dsigma <- (rowSums((y - locat) / sigma, na.rm = TRUE) - r.vec - temp2 * term2) / sigma c(w) * cbind(dl.dlocat * dlocat.deta, dl.dsigma * dsigma.deta) }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), weight = eval(substitute(expression({ temp6 <- digamma(r.vec) # , integer = T temp5 <- digamma(1:max(r.vec)) # , integer=T temp5 <- matrix(temp5, n, max(r.vec), byrow = TRUE) temp5[col(temp5) > r.vec] <- 0 temp5 <- temp5 %*% rep(1, ncol(temp5)) wz <- matrix(NA_real_, n, dimm(M = 2)) # 3=dimm(M = 2) wz[, iam(1, 1, M)] <- r.vec / sigma^2 wz[, iam(2, 1, M)] <- -(1 + r.vec * temp6) / sigma^2 wz[, iam(2, 2, M)] <- (2*(r.vec+1)*temp6 + r.vec*(trigamma(r.vec) + temp6^2) + 2 - r.vec - 2*temp5) / sigma^2 wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] * dlocat.deta^2 wz[, iam(2, 1, M)] <- wz[, iam(2, 1, M)] * dsigma.deta * dlocat.deta wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] * dsigma.deta^2 c(w) * wz }), list( .lscale = lscale )))) } rgpd <- function(n, location = 0, scale = 1, shape = 0) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (!is.Numeric(location)) stop("bad input for argument 'location'") if (!is.Numeric(shape)) stop("bad input for argument 'shape'") ans <- numeric(use.n) if (length(shape) != use.n) shape <- rep_len(shape, use.n) if (length(location) != use.n) location <- rep_len(location, use.n) if (length(scale) != use.n) scale <- rep_len(scale, use.n) scase <- abs(shape) < sqrt( .Machine$double.eps ) nscase <- sum(scase) if (use.n - nscase) ans[!scase] <- location[!scase] + scale[!scase] * ((runif(use.n - nscase))^(-shape[!scase])-1) / shape[!scase] if (nscase) ans[scase] <- location[scase] - scale[scase] * log(runif(nscase)) ans[scale <= 0] <- NaN ans } dgpd <- function(x, location = 0, scale = 1, shape = 0, log = FALSE, tolshape0 = sqrt( .Machine$double.eps )) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) oobounds.log <- -Inf if (!is.Numeric(tolshape0, length.arg = 1, positive = TRUE)) stop("bad input for argument 'tolshape0'") L <- max(length(x), length(location), length(scale), length(shape)) if (length(shape) != L) shape <- rep_len(shape, L) if (length(location) != L) location <- rep_len(location, L) if (length(scale) != L) scale <- rep_len(scale, L) if (length(x) != L) x <- rep_len(x, L) logdensity <- rep_len(log(0), L) scase <- abs(shape) < tolshape0 nscase <- sum(scase) if (L - nscase) { zedd <- (x-location) / scale xok <- (!scase) & (zedd > 0) & (1 + shape*zedd > 0) logdensity[xok] <- -(1 + 1/shape[xok])*log1p(shape[xok]*zedd[xok]) - log(scale[xok]) outofbounds <- (!scase) & ((zedd <= 0) | (1 + shape*zedd <= 0)) if (any(outofbounds)) { logdensity[outofbounds] <- oobounds.log } } if (nscase) { xok <- scase & (x > location) logdensity[xok] <- -(x[xok] - location[xok]) / scale[xok] - log(scale[xok]) outofbounds <- scase & (x <= location) if 
(any(outofbounds)) { logdensity[outofbounds] <- oobounds.log } } logdensity[scale <= 0] <- NaN if (log.arg) logdensity else exp(logdensity) } pgpd <- function(q, location = 0, scale = 1, shape = 0, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") use.n <- max(length(q), length(location), length(scale), length(shape)) ans <- numeric(use.n) if (length(shape) != use.n) shape <- rep_len(shape, use.n) if (length(location) != use.n) location <- rep_len(location, use.n) if (length(scale) != use.n) scale <- rep_len(scale, use.n) if (length(q) != use.n) q <- rep_len(q, use.n) zedd <- (q - location) / scale use.zedd <- pmax(zedd, 0) scase0 <- abs(shape) < sqrt( .Machine$double.eps ) nscase0 <- sum(scase0) if (use.n - nscase0) { ans <- 1 - pmax(1 + shape * use.zedd, 0)^(-1/shape) } if (nscase0) { pos <- (zedd >= 0) ind9 <- ( pos & scase0) ans[ind9] <- -expm1(-use.zedd[ind9]) ind9 <- (!pos & scase0) ans[ind9] <- 0 } ans[scale <= 0] <- NaN if (lower.tail) { if (log.p) log(ans) else ans } else { if (log.p) log1p(-ans) else 1-ans } } qgpd <- function(p, location = 0, scale = 1, shape = 0, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.arg <- log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") rm(log.p) if (lower.tail) { if (log.arg) p <- exp(p) } else { p <- if (log.arg) -expm1(p) else 1 - p } use.n <- max(length(p), length(location), length(scale), length(shape)) ans <- numeric(use.n) if (length(shape) != use.n) shape <- rep_len(shape, use.n) if (length(location) != use.n) location <- rep_len(location, use.n) if (length(scale) != use.n) scale <- rep_len(scale, use.n) if (length(p) != use.n) p <- rep_len(p, use.n) scase <- abs(shape) < sqrt( .Machine$double.eps ) nscase <- sum(scase) if (use.n - nscase) { ans[!scase] <- location[!scase] + scale[!scase] * ((1-p[!scase])^(-shape[!scase]) - 1) / shape[!scase] } if (nscase) { ans[scase] <- location[scase] - scale[scase] * log1p(-p[scase]) } ans[p < 0] <- NaN ans[p > 1] <- NaN ans[(p == 0)] <- location[p == 0] ans[(p == 1) & (shape >= 0)] <- Inf ind5 <- (p == 1) & (shape < 0) ans[ind5] <- location[ind5] - scale[ind5] / shape[ind5] ans[scale <= 0] <- NaN ans } gpd <- function(threshold = 0, lscale = "loge", lshape = logoff(offset = 0.5), percentiles = c(90, 95), iscale = NULL, ishape = NULL, tolshape0 = 0.001, type.fitted = c("percentiles", "mean"), imethod = 1, zero = "shape") { type.fitted <- match.arg(type.fitted, c("percentiles", "mean"))[1] lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") if (!is.Numeric(threshold)) stop("bad input for argument 'threshold'") if (!is.Numeric(imethod, length.arg = 1, positive = TRUE, integer.valued = TRUE) || imethod > 2.5) stop("argument 'imethod' must be 1 or 2") if (length(percentiles) && (!is.Numeric(percentiles, positive = TRUE) || max(percentiles) >= 100)) stop("bad input for argument 'percentiles'") if (!is.Numeric(tolshape0, length.arg = 1, positive = TRUE) || tolshape0 > 0.1) stop("bad input for argument 'tolshape0'") new("vglmff", blurb = c("Generalized Pareto distribution\n", "Links: ", namesof("scale", link = lscale, earg = escale), ", ", namesof("shape", 
link = lshape, earg = eshape)), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("scale", "shape"), lscale = .lscale , lshape = .lshape , type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .type.fitted = type.fitted, .lscale = lscale, .lshape = lshape ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 2 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly y.names <- dimnames(y)[[2]] if (length(y.names) != ncoly) y.names <- paste("Y", 1:ncoly, sep = "") extra$y.names <- y.names extra$type.fitted <- .type.fitted extra$percentiles <- .percentiles extra$colnames.y <- colnames(y) Threshold <- if (is.Numeric( .threshold )) .threshold else 0 Threshold <- matrix(Threshold, n, ncoly, byrow = TRUE) if (is.Numeric( .threshold )) { orig.y <- y } ystar <- as.matrix(y - Threshold) # Operate on ystar if (min(ystar, na.rm = TRUE) < 0) stop("some response values, after subtracting ", "argument 'threshold', are negative. ", "Maybe argument 'subset' should be used. ", "A threshold value no more than ", min(orig.y, na.rm = TRUE), " is needed.") extra$threshold <- Threshold mynames1 <- param.names("scale", ncoly) mynames2 <- param.names("shape", ncoly) predictors.names <- c(namesof(mynames1, .lscale , earg = .escale , tag = FALSE), namesof(mynames2, .lshape , earg = .eshape , tag = FALSE))[ interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { meany <- colSums(ystar * w) / colSums(w) vary <- apply(ystar, 2, var) mediany <- apply(ystar, 2, median) init.xii <- if (length( .ishape )) .ishape else { if ( .imethod == 1) -0.5 * (meany^2 / vary - 1) else 0.5 * (1 - mediany^2 / vary) } init.sig <- if (length( .iscale )) .iscale else { if (.imethod == 1) 0.5 * meany * (meany^2 / vary + 1) else abs(1 - init.xii) * mediany } init.xii <- matrix(init.xii, n, ncoly, byrow = TRUE) init.sig <- matrix(init.sig, n, ncoly, byrow = TRUE) init.sig[init.sig <= 0.0] <- 0.01 # sigma > 0 init.xii[init.xii <= -0.5] <- -0.40 # FS works if xi > -0.5 init.xii[init.xii >= 1.0] <- 0.90 # Mean/var exists if xi < 1/0.5 if ( .lshape == "loge") init.xii[init.xii <= 0.0] <- 0.05 etastart <- cbind(theta2eta(init.sig, .lscale , earg = .escale ), theta2eta(init.xii, .lshape , earg = .eshape ))[, interleave.VGAM(M, M1 = M1)] } }), list( .lscale = lscale, .lshape = lshape, .iscale = iscale, .ishape = ishape, .escale = escale, .eshape = eshape, .percentiles = percentiles, .threshold = threshold, .type.fitted = type.fitted, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { sigma <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , earg = .escale ) shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) if (!is.matrix(sigma)) sigma <- as.matrix(sigma) if (!is.matrix(shape)) shape <- as.matrix(shape) type.fitted <- if (length(extra$type.fitted)) { extra$type.fitted } else { warning("cannot find 'type.fitted'. 
Returning 'percentiles'.") "percentiles" } type.fitted <- match.arg(type.fitted, c("percentiles", "mean"))[1] M1 <- 2 NOS <- ncol(eta) / M1 pcent <- extra$percentiles # Post-20140912 LP <- length(pcent) # NULL means LP == 0 and the mean is returned ncoly <- ncol(eta) / M1 if (!length(y.names <- extra$y.names)) y.names <- paste("Y", 1:ncoly, sep = "") Threshold <- extra$threshold if (type.fitted == "percentiles" && # Upward compatibility: LP > 0) { do.one <- function(yvec, shape, scale, threshold, percentiles = c(90, 95), y.name = NULL, tolshape0 = 0.001) { is.zero <- (abs(shape) < tolshape0 ) # A matrix LP <- length(percentiles) fv <- matrix(NA_real_, length(shape), LP) is.zero <- (abs(shape) < tolshape0) for (ii in 1:LP) { temp <- 1 - percentiles[ii] / 100 fv[!is.zero, ii] <- threshold[!is.zero] + (temp^(-shape[!is.zero]) - 1) * scale[!is.zero] / shape[!is.zero] fv[ is.zero, ii] <- threshold[is.zero] - scale[is.zero] * log(temp) } post.name <- paste(as.character(percentiles), "%", sep = "") dimnames(fv) <- list(dimnames(shape)[[1]], if (is.null(y.name)) post.name else paste(y.name, post.name, sep = " ")) fv } # do.one fv <- matrix(-1, nrow(sigma), LP * ncoly) for (jlocal in 1:ncoly) { block.mat.fv <- do.one(yvec = y[, jlocal], shape = shape[, jlocal], scale = sigma[, jlocal], threshold = Threshold[, jlocal], percentiles = pcent, y.name = if (ncoly > 1) y.names[jlocal] else NULL, tolshape0 = .tolshape0 ) fv[, (jlocal - 1) * LP + (1:LP)] <- block.mat.fv } fv <- label.cols.y(fv, colnames.y = extra$colnames.y, NOS = NOS, percentiles = pcent, one.on.one = FALSE) } else { fv <- Threshold + sigma / (1 - shape) fv[shape >= 1] <- Inf # Mean exists only if shape < 1. fv <- label.cols.y(fv, colnames.y = extra$colnames.y, NOS = NOS) } fv }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .threshold = threshold, .tolshape0 = tolshape0 ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .lscale , ncoly), rep_len( .lshape , ncoly))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .escale misc$earg[[M1*ii ]] <- .eshape } misc$true.mu <- FALSE # @fitted is not a true mu misc$percentiles <- .percentiles misc$tolshape0 <- .tolshape0 if (any(Shape < -0.5)) warning("some values of the shape parameter are less than -0.5") }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .threshold = threshold, .tolshape0 = tolshape0, .percentiles = percentiles ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { sigma <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , earg = .escale ) Shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) Threshold <- extra$threshold if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dgpd(x = y, location = Threshold, scale = sigma, shape = Shape, tolshape0 = .tolshape0 , log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .tolshape0 = tolshape0, .escale = escale, .eshape = eshape, .lscale = lscale, .lshape = lshape ))), vfamily = c("gpd", "vextremes"), validparams = eval(substitute(function(eta, y, extra = NULL) { sigma <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , earg = .escale ) Shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) Locat <- extra$threshold okay1 <- 
all(is.finite(Locat)) && all(is.finite(sigma)) && all(sigma > 0) && all(is.finite(Shape)) okay.support <- if (okay1) { Boundary <- Locat - sigma / Shape all((y > Locat) & ((Shape < 0 & y < Boundary) || (Shape >= 0 & y < Inf))) } else { TRUE } if (!okay.support) warning("current parameter estimates are at the boundary of ", "the parameter space. ", "This model needs attention.") okay1 && okay.support }, list( .tolshape0 = tolshape0, .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ))), deriv = eval(substitute(expression({ M1 <- 2 sigma <- eta2theta(eta[, c(TRUE, FALSE)], .lscale , earg = .escale ) Shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) Threshold <- extra$threshold ystar <- y - Threshold # Operate on ystar A <- 1 + Shape * ystar / sigma mytolerance <- .Machine$double.eps bad <- (A <= mytolerance) if (any(bad) && any(w[bad] != 0)) { cat(sum(w[bad],na.rm = TRUE), # "; ignoring them" "observations violating boundary constraints\n") flush.console() } if (any(is.zero <- (abs(Shape) < .tolshape0 ))) { } igpd <- !is.zero & !bad iexp <- is.zero & !bad dl.dShape <- dl.dsigma <- rep_len(0, length(y)) dl.dsigma[igpd] <- ((1+Shape[igpd]) * ystar[igpd] / (sigma[igpd] + Shape[igpd] * ystar[igpd])-1) / sigma[igpd] dl.dShape[igpd] <- log(A[igpd])/Shape[igpd]^2 - (1 + 1/Shape[igpd]) * ystar[igpd] / (A[igpd] * sigma[igpd]) dl.dShape[iexp] <- ystar[iexp] * (0.5*ystar[iexp]/sigma[iexp] - 1) / sigma[iexp] dsigma.deta <- dtheta.deta(sigma, .lscale , earg = .escale ) dShape.deta <- dtheta.deta(Shape, .lshape , earg = .eshape ) myderiv <- c(w) * cbind(dl.dsigma * dsigma.deta, dl.dShape * dShape.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .tolshape0 = tolshape0, .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ))), weight = eval(substitute(expression({ ned2l.dscale2 <- 1 / ((1+2*Shape) * sigma^2) ned2l.dshape2 <- 2 / ((1+2*Shape) * (1+Shape)) ned2l.dshapescale <- 1 / ((1+2*Shape) * (1+Shape) * sigma) # > 0 ! S <- M / M1 wz <- array(c(c(w) * ned2l.dscale2 * dsigma.deta^2, c(w) * ned2l.dshape2 * dShape.deta^2, c(w) * ned2l.dshapescale * dsigma.deta * dShape.deta), dim = c(n, S, 3)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lscale = lscale )))) } meplot.default <- function(y, main = "Mean Excess Plot", xlab = "Threshold", ylab = "Mean Excess", lty = c(2, 1:2), conf = 0.95, col = c("blue", "black", "blue"), type = "l", ...) { if (!is.Numeric(y)) stop("bad input for argument 'y'") n <- length(y) sy <- sort(y) dsy <- rev(sy) # decreasing sequence me <- rev(cumsum(dsy)) / (n:1) - sy me2 <- rev(cumsum(dsy^2)) var <- (me2 - (n:1) * (me+sy)^2) / (n:1) ci <- qnorm((1+conf)/2) * sqrt(abs(var)) / sqrt(n:1) ci[length(ci)] <- NA mymat <- cbind(me - ci, me, me + ci) sy <- sy - sqrt( .Machine$double.eps ) matplot(sy, mymat, main = main, xlab = xlab, ylab = ylab, lty = lty, col = col, type = type, ...) invisible(list(threshold = sy, meanExcess = me, plusminus = ci)) } meplot.vlm <- function(object, ...) { if (!length(y <- object@y)) stop("y slot is empty") ans <- meplot(as.numeric(y), ...) invisible(ans) } if (!isGeneric("meplot")) setGeneric("meplot", function(object, ...) standardGeneric("meplot")) setMethod("meplot", "numeric", function(object, ...) meplot.default(y=object, ...)) setMethod("meplot", "vlm", function(object, ...) meplot.vlm(object, ...)) guplot.default <- function(y, main = "Gumbel Plot", xlab = "Reduced data", ylab = "Observed data", type = "p", ...) 
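## A Gumbel plot graphs the sorted data against the reduced values
## -log(-log((i - 0.5) / n)), i = 1, ..., n; rough linearity is
## informal support for a Gumbel (shape == 0) model.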
{ if (!is.Numeric(y)) stop("bad input for argument 'y'") n <- length(y) sy <- sort(y) x <- -log(-log(((1:n) - 0.5) / n)) plot(x, sy, main = main, xlab = xlab, ylab = ylab, type = type, ...) invisible(list(x = x, y = sy)) } guplot.vlm <- function(object, ...) { if (!length(y <- object@y)) stop("y slot is empty") ans <- guplot(as.numeric(y), ...) invisible(ans) } if (!isGeneric("guplot")) setGeneric("guplot", function(object, ...) standardGeneric("guplot")) setMethod("guplot", "numeric", function(object, ...) guplot.default(y=object, ...)) setMethod("guplot", "vlm", function(object, ...) guplot.vlm(object, ...)) gumbelff <- function(llocation = "identitylink", lscale = "loge", iscale = NULL, R = NA, percentiles = c(95, 99), zero = "scale", # Was NULL in egumbel() mpv = FALSE) { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.logical(mpv) || length(mpv) != 1) stop("bad input for argument 'mpv'") if (length(percentiles) && (!is.Numeric(percentiles, positive = TRUE) || max(percentiles) >= 100)) stop("bad input for argument 'percentiles'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") new("vglmff", blurb = c("Gumbel distribution (multiple responses allowed)\n\n", "Links: ", namesof("location", llocat, earg = elocat, tag = TRUE), ", ", namesof("scale", lscale, earg = escale, tag = TRUE), "\n", "Mean: location + scale*0.5772..\n", "Variance: pi^2 * scale^2 / 6"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("location", "scale"), llocation = .llocat , lscale = .lscale , mpv = .mpv , zero = .zero ) }, list( .zero = zero, .llocat = llocation, .lscale = lscale, .mpv = mpv ))), initialize = eval(substitute(expression({ temp16 <- w.y.check(w = w, y = y, Is.nonnegative.y = FALSE, Is.integer.y = FALSE, ncol.w.max = Inf, # Differs from gumbel()! ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp16$w y <- temp16$y M1 <- extra$M1 <- 2 NOS <- ncoly <- ncol(y) extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly # Is now true! mynames1 <- param.names("location", NOS) mynames2 <- param.names("scale", NOS) predictors.names <- c( namesof(mynames1, .llocat , earg = .elocat , short = TRUE), namesof(mynames2, .lscale , earg = .escale , short = TRUE))[ interleave.VGAM(M, M1 = M1)] extra$R <- .R extra$mpv <- .mpv extra$percentiles <- .percentiles if (!length(etastart)) { locat.init <- scale.init <- matrix(NA_real_, n, NOS) EulerM <- -digamma(1) for (jay in 1:NOS) { # For each response 'y_jay'... 
scale.init.jay <- 1.5 * (0.01 + sqrt(6 * var(y[, jay]))) / pi
        if (length( .iscale ))
          scale.init.jay <- .iscale  # iscale is on an absolute scale
        scale.init[, jay] <- scale.init.jay
        locat.init[, jay] <- (y[, jay] - scale.init[, jay] * EulerM)
      }  # NOS
      etastart <-
        cbind(theta2eta(locat.init, .llocat , earg = .elocat ),
              theta2eta(scale.init, .lscale , earg = .escale ))
      etastart <- etastart[, interleave.VGAM(M, M1 = M1), drop = FALSE]
    }
  }), list( .llocat = llocat, .lscale = lscale,
            .elocat = elocat, .escale = escale,
            .iscale = iscale,
            .R = R, .mpv = mpv, .percentiles = percentiles ))),
  linkinv = eval(substitute( function(eta, extra = NULL) {
    M1 <- 2
    NOS <- ncol(eta) / M1
    Locat <- eta2theta(eta[, c(TRUE, FALSE)], .llocat , earg = .elocat )
    Scale <- eta2theta(eta[, c(FALSE, TRUE)], .lscale , earg = .escale )
    EulerM <- -digamma(1)
    pcent <- extra$percentiles
    mpv <- extra$mpv
    LP <- length(pcent)  # may be 0
    if (!LP) return(Locat + Scale * EulerM)
    fv <- matrix(NA_real_, nrow(eta), (LP + mpv) * NOS)
    dmn2 <- c(if (LP >= 1)
                paste(as.character(pcent), "%", sep = "") else NULL,
              if (mpv) "MPV" else NULL)
    dmn2 <- rep_len(dmn2, ncol(fv))
    Rvec <- extra$R
    if (1 <= LP) {
      icol <- (0:(NOS-1)) * (LP + mpv)
      for (ii in 1:LP) {
        icol <- icol + 1
        use.p <- if (is.Numeric(Rvec))
                   exp(-Rvec * (1 - pcent[ii] / 100)) else
                   pcent[ii] / 100
        fv[, icol] <- qgumbel(use.p, loc = Locat, scale = Scale)
      }
    }
    if (mpv) {
      icol <- (0:(NOS-1)) * (LP + mpv)
      icol <- icol + 1 + LP
      fv[, icol] <- Locat - Scale * log(log(2))
    }
    dimnames(fv) <- list(dimnames(eta)[[1]], dmn2)
    fv
  }, list( .llocat = llocat, .lscale = lscale,
           .elocat = elocat, .escale = escale ))),
  last = eval(substitute(expression({
    temp0303 <- c(rep_len( .llocat , NOS),
                  rep_len( .lscale , NOS))
    names(temp0303) <- c(mynames1, mynames2)
    temp0303 <- temp0303[interleave.VGAM(M, M1 = M1)]
    misc$link <- temp0303  # Already named
    misc$earg <- vector("list", M)
    names(misc$earg) <- names(misc$link)
    for (ii in 1:NOS) {
      misc$earg[[M1*ii-1]] <- .elocat
      misc$earg[[M1*ii  ]] <- .escale
    }
    misc$true.mu <- !length( .percentiles )  # @fitted is not a true mu
    misc$R <- .R
    misc$mpv <- .mpv
    misc$percentiles <- .percentiles
  }), list( .llocat = llocat, .lscale = lscale, .mpv = mpv,
            .elocat = elocat, .escale = escale,
            .R = R, .percentiles = percentiles ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta,
             extra = NULL, summation = TRUE) {
    M1 <- 2
    NOS <- ncol(eta) / M1
    Locat <- eta2theta(eta[, c(TRUE, FALSE)], .llocat , earg = .elocat )
    Scale <- eta2theta(eta[, c(FALSE, TRUE)], .lscale , earg = .escale )
    if (residuals) {
      stop("loglikelihood residuals not implemented yet")
    } else {
      ll.elts <- c(w) * dgumbel(x = y, location = Locat, scale = Scale,
                                log = TRUE)
      if (summation) {
        sum(ll.elts)
      } else {
        ll.elts
      }
    }
  }, list( .llocat = llocat, .lscale = lscale,
           .elocat = elocat, .escale = escale ))),
  vfamily = "gumbelff",
  validparams = eval(substitute(function(eta, y, extra = NULL) {
    Locat <- eta2theta(eta[, c(TRUE, FALSE)], .llocat , .elocat )
    Scale <- eta2theta(eta[, c(FALSE, TRUE)], .lscale , .escale )
    okay1 <- all(is.finite(Locat)) &&
             all(is.finite(Scale)) && all(Scale > 0)
    okay1
  }, list( .llocat = llocat, .lscale = lscale,
           .elocat = elocat, .escale = escale ))),
  deriv = eval(substitute(expression({
    M1 <- 2
    NOS <- ncol(eta) / M1
    Locat <- eta2theta(eta[, c(TRUE, FALSE)], .llocat , earg = .elocat )
    Scale <- eta2theta(eta[, c(FALSE, TRUE)], .lscale , earg = .escale )
    zedd <- (y - Locat) / Scale
    temp2 <- -expm1(-zedd)
    dl.dlocat <- temp2 / Scale
    dl.dscale <- -1/Scale + temp2 * zedd / Scale
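# An end-to-end sketch of gumbelff() (illustrative only and never
# executed; the simulated data and coefficient values are hypothetical):
if (FALSE) {
  set.seed(123)
  gdata <- data.frame(x2 = runif(200))
  gdata$y <- rgumbel(200, location = 1 + 2 * gdata$x2, scale = exp(0.5))
  gfit <- vglm(y ~ x2, gumbelff(percentiles = c(95, 99)), data = gdata)
  coef(gfit, matrix = TRUE)
  head(fitted(gfit))  # the 95% and 99% quantiles, not the mean
}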
dlocat.deta <- dtheta.deta(Locat, .llocat , earg = .elocat ) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) ans <- c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta) ans <- ans[, interleave.VGAM(M, M1 = M1)] ans }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), weight = expression({ digamma1 <- digamma(1) ned2l.dscale2 <- ((2 + digamma1) * digamma1 + trigamma(1) + 1) / Scale^2 ned2l.dlocat2 <- 1 / Scale^2 ned2l.dlocsca <- -(1 + digamma1) / Scale^2 wz <- array( c(c(w) * ned2l.dlocat2 * dlocat.deta^2, c(w) * ned2l.dscale2 * dscale.deta^2, c(w) * ned2l.dlocsca * dlocat.deta * dscale.deta), dim = c(n, NOS, 3)) wz <- arwz2wz(wz, M = M, M1 = M1) wz })) } cens.gumbel <- function(llocation = "identitylink", lscale = "loge", iscale = NULL, mean = TRUE, percentiles = NULL, zero = "scale") { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.logical(mean) || length(mean) != 1) stop("mean must be a single logical value") if (!mean && (!is.Numeric(percentiles, positive = TRUE) || any(percentiles >= 100))) stop("valid percentiles values must be given when mean = FALSE") new("vglmff", blurb = c("Censored Gumbel distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat, tag = TRUE), ", ", namesof("scale", lscale, earg = escale, tag = TRUE), "\n", "Mean: location + scale*0.5772..\n", "Variance: pi^2 * scale^2 / 6"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
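## Note on the EIM entries in the gumbelff @weight slot above:
## digamma(1) = -gamma (Euler's constant) and trigamma(1) = pi^2/6, so
## ned2l.dscale2 reduces to the standard Gumbel result
## ((1 - gamma)^2 + pi^2/6) / Scale^2. Numerically (comments only):
##   (2 + digamma(1)) * digamma(1) + trigamma(1) + 1  # ~ 1.82368
##   (1 + digamma(1))^2 + trigamma(1)                 # the same quantity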
{
    list(M1 = 2,
         Q1 = 1,
         expected = TRUE,
         multipleResponses = FALSE,
         parameters.names = c("location", "scale"),
         llocation = .llocat ,
         lscale = .lscale ,
         percentiles = .percentiles ,
         zero = .zero )
  }, list( .zero = zero,
           .llocat = llocation, .lscale = lscale,
           .percentiles = percentiles ))),
  initialize = eval(substitute(expression({
    y <- cbind(y)
    if (ncol(y) > 1)
      stop("Use gumbel.block() to handle multivariate responses")
    if (any(y <= 0))  # was: any(y) <= 0, which could never be TRUE
      stop("all response values must be positive")
    if (!length(extra$leftcensored))
      extra$leftcensored <- rep_len(FALSE, n)
    if (!length(extra$rightcensored))
      extra$rightcensored <- rep_len(FALSE, n)
    if (any(extra$rightcensored & extra$leftcensored))
      stop("some observations are both right and left censored!")
    predictors.names <-
      c(namesof("location", .llocat, earg = .elocat , tag = FALSE),
        namesof("scale",    .lscale , earg = .escale , tag = FALSE))
    if (!length(etastart)) {
      sca.init <- if (is.Numeric( .iscale, positive = TRUE))
                    .iscale else 1.1 * sqrt(var(y) * 6 ) / pi
      sca.init <- rep_len(sca.init, n)
      EulerM <- -digamma(1)
      loc.init <- (y - sca.init * EulerM)
      loc.init[loc.init <= 0] <- min(y)
      etastart <-
        cbind(theta2eta(loc.init, .llocat , earg = .elocat ),
              theta2eta(sca.init, .lscale , earg = .escale ))
    }
  }), list( .lscale = lscale, .iscale = iscale,
            .llocat = llocat,
            .elocat = elocat, .escale = escale ))),
  linkinv = eval(substitute( function(eta, extra = NULL) {
    loc <- eta2theta(eta[, 1], .llocat)
    sc  <- eta2theta(eta[, 2], .lscale)
    EulerM <- -digamma(1)
    if (.mean) loc + sc * EulerM else {
      LP <- length(.percentiles)  # 0 if NULL
      mu <- matrix(NA_real_, nrow(eta), LP)
      for (ii in 1:LP) {
        ci <- -log( .percentiles[ii] / 100)
        mu[, ii] <- loc - sc * log(ci)
      }
      dmn2 <- paste(as.character(.percentiles), "%", sep = "")
      dimnames(mu) <- list(dimnames(eta)[[1]], dmn2)
      mu
    }
  }, list( .lscale = lscale, .percentiles = percentiles,
           .llocat = llocat,
           .elocat = elocat, .escale = escale , .mean = mean ))),
  last = eval(substitute(expression({
    misc$link <- c(location = .llocat, scale = .lscale)
    misc$earg <- list(location = .elocat, scale = .escale )
    misc$true.mu <- .mean  # if FALSE then @fitted is not a true mu
    misc$percentiles <- .percentiles
  }), list( .lscale = lscale, .mean = mean,
            .llocat = llocat,
            .elocat = elocat, .escale = escale ,
            .percentiles = percentiles ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta,
             extra = NULL, summation = TRUE) {
    loc <- eta2theta(eta[, 1], .llocat , earg = .elocat )
    sc  <- eta2theta(eta[, 2], .lscale , earg = .escale )
    zedd <- (y - loc) / sc
    cenL <- extra$leftcensored
    cenU <- extra$rightcensored
    cen0 <- !cenL & !cenU  # uncensored obsns
    Fy <- exp(-exp(-zedd))
    ell1 <- -log(sc[cen0]) - zedd[cen0] - exp(-zedd[cen0])
    ell2 <- log(Fy[cenL])
    ell3 <- log1p(-Fy[cenU])
    if (residuals) stop("loglikelihood residuals not ",
                        "implemented yet") else
    sum(w[cen0] * ell1) + sum(w[cenL] * ell2) + sum(w[cenU] * ell3)
  }, list( .lscale = lscale,
           .llocat = llocat,
           .elocat = elocat, .escale = escale ))),
  vfamily = "cens.gumbel",
  deriv = eval(substitute(expression({
    cenL <- extra$leftcensored
    cenU <- extra$rightcensored
    cen0 <- !cenL & !cenU  # uncensored obsns
    loc <- eta2theta(eta[, 1], .llocat, earg = .elocat )
    sc  <- eta2theta(eta[, 2], .lscale , earg = .escale )
    zedd <- (y - loc) / sc
    temp2 <- -expm1(-zedd)
    dl.dloc <- temp2 / sc
    dl.dsc <- -1/sc + temp2 * zedd / sc
    dloc.deta <- dtheta.deta(loc, .llocat, earg = .elocat )
    dsc.deta  <- dtheta.deta(sc, .lscale , earg = .escale )
    ezedd <- exp(-zedd)
    Fy <- exp(-ezedd)
    dFy.dloc <- -ezedd * Fy / sc
    dFy.dsc <- zedd * dFy.dloc #
-zedd * exp(-zedd) * Fy / sc if (any(cenL)) { dl.dloc[cenL] <- -ezedd[cenL] / sc[cenL] dl.dsc[cenL] <- -zedd[cenL] * ezedd[cenL] / sc[cenL] } if (any(cenU)) { dl.dloc[cenU] <- -dFy.dloc[cenU] / (1-Fy[cenU]) dl.dsc[cenU] <- -dFy.dsc[cenU] / (1-Fy[cenU]) } c(w) * cbind(dl.dloc * dloc.deta, dl.dsc * dsc.deta) }), list( .lscale = lscale, .llocat = llocat, .elocat = elocat, .escale = escale ))), weight = expression({ A1 <- ifelse(cenL, Fy, 0) A3 <- ifelse(cenU, 1-Fy, 0) A2 <- 1 - A1 - A3 # Middle; uncensored digamma1 <- digamma(1) ed2l.dsc2 <- ((2+digamma1)*digamma1 + trigamma(1) + 1) / sc^2 ed2l.dloc2 <- 1 / sc^2 ed2l.dlocsc <- -(1 + digamma1) / sc^2 wz <- matrix(NA_real_, n, dimm(M = 2)) wz[, iam(1, 1, M)] <- A2 * ed2l.dloc2 * dloc.deta^2 wz[, iam(2, 2, M)] <- A2 * ed2l.dsc2 * dsc.deta^2 wz[, iam(1, 2, M)] <- A2 * ed2l.dlocsc * dloc.deta * dsc.deta d2l.dloc2 <- -ezedd / sc^2 d2l.dsc2 <- (2 - zedd) * zedd * ezedd / sc^2 d2l.dlocsc <- (1 - zedd) * ezedd / sc^2 wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)]-A1^2 * d2l.dloc2 * dloc.deta^2 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)]-A1^2 * d2l.dsc2 * dsc.deta^2 wz[, iam(1, 2, M)] <- wz[, iam(1, 2, M)]-A1^2 * d2l.dlocsc * dloc.deta * dsc.deta d2Fy.dloc2 <- dFy.dloc * dl.dloc + Fy * d2l.dloc2 d2Fy.dsc2 <- dFy.dsc * dl.dsc + Fy * d2l.dsc2 d2Fy.dlocsc <- dFy.dsc * dl.dloc + Fy * d2l.dlocsc d2l.dloc2 <- -((1-Fy) * d2Fy.dloc2 - dFy.dloc^2) / (1-Fy)^2 d2l.dsc2 <- -((1-Fy) * d2Fy.dsc2 - dFy.dsc^2) / (1-Fy)^2 d2l.dlocsc <- -((1-Fy) * d2Fy.dlocsc - dFy.dloc * dFy.dsc) / (1-Fy)^2 wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)]-A3^2 * d2l.dloc2 * dloc.deta^2 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)]-A3^2 * d2l.dsc2 * dsc.deta^2 wz[, iam(1, 2, M)] <- wz[, iam(1, 2, M)]-A3^2 * d2l.dlocsc * dloc.deta * dsc.deta c(w) * wz })) } dfrechet <- function(x, location = 0, scale = 1, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(scale), length(shape), length(location)) if (length(x) != L) x <- rep_len(x, L) if (length(scale) != L) scale <- rep_len(scale, L) if (length(shape) != L) shape <- rep_len(shape, L) if (length(location) != L) location <- rep_len(location, L) logdensity <- rep_len(log(0), L) xok <- (x > location) rzedd <- scale / (x - location) logdensity[xok] <- log(shape[xok]) - (rzedd[xok]^shape[xok]) + (shape[xok]+1) * log(rzedd[xok]) - log(scale[xok]) logdensity[shape <= 0] <- NaN logdensity[scale <= 0] <- NaN if (log.arg) logdensity else exp(logdensity) } pfrechet <- function(q, location = 0, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") rzedd <- scale / (q - location) if (lower.tail) { if (log.p) { ans <- -(rzedd^shape) ans[q <= location] <- -Inf } else { ans <- exp(-(rzedd^shape)) ans[q <= location] <- 0 } } else { if (log.p) { ans <- log(-expm1(-(rzedd^shape))) ans[q <= location] <- 0 } else { ans <- -expm1(-(rzedd^shape)) ans[q <= location] <- 1 } } ans } qfrechet <- function(p, location = 0, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ln.p <- p ans <- location + scale * (-ln.p)^(-1 / shape) ans[ln.p > 0] <- NaN } else { ans <- location + scale * 
(-log(p))^(-1 / shape) ans[p < 0] <- NaN ans[p == 0] <- location ans[p == 1] <- Inf ans[p > 1] <- NaN } } else { if (log.p) { ln.p <- p ans <- location + scale * (-log(-expm1(ln.p)))^(-1 / shape) ans[ln.p > 0] <- NaN } else { ans <- location + scale * (-log1p(-p))^(-1 / shape) ans[p < 0] <- NaN ans[p == 0] <- Inf ans[p == 1] <- location ans[p > 1] <- NaN } } ans } rfrechet <- function(n, location = 0, scale = 1, shape) { if (!is.Numeric(scale, positive = TRUE)) stop("scale must be positive") if (!is.Numeric(shape, positive = TRUE)) stop("shape must be positive") location + scale * (-log(runif(n)))^(-1/shape) } frechet.control <- function(save.weights = TRUE, ...) { list(save.weights = save.weights) } frechet <- function(location = 0, lscale = "loge", lshape = logoff(offset = -2), iscale = NULL, ishape = NULL, nsimEIM = 250, zero = NULL) { if (!is.Numeric(location)) stop("bad input for argument 'location'") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") stopifnot(nsimEIM > 10, length(nsimEIM) == 1, nsimEIM == round(nsimEIM)) new("vglmff", blurb = c("2-parameter Frechet distribution\n", "Links: ", namesof("scale", link = lscale, earg = escale ), ", ", namesof("shape", link = lshape, earg = eshape )), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("scale", "shape"), lscale = .lscale , lshape = .lshape , nsimEIM = .nsimEIM , zero = .zero ) }, list( .zero = zero, .lscale = lscale, .lshape = lshape, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("scale", .lscale , earg = .escale , short = TRUE), namesof("shape", .lshape , earg = .eshape , short = TRUE)) extra$location <- rep_len( .location , n) # stored here if (!length(etastart)) { locinit = extra$location if (any(y <= locinit)) stop("initial values for 'location' are out of range") frech.aux <- function(shapeval, y, x, w, extraargs) { myprobs <- c(0.25, 0.5, 0.75) myobsns <- quantile(y, probs = myprobs) myquant <- (-log(myprobs))^(-1/shapeval) myfit <- lsfit(x = myquant, y = myobsns, intercept = TRUE) sum(myfit$resid^2) } shape.grid <- c(100, 70, 40, 20, 12, 8, 4, 2, 1.5) shape.grid <- c(1 / shape.grid, 1, shape.grid) try.this <- grid.search(shape.grid, objfun = frech.aux, y = y, x = x, w = w, maximize = FALSE, abs.arg = TRUE) shape.init <- if (length( .ishape )) rep_len( .ishape , n) else { rep_len(try.this , n) # variance exists if shape > 2 } myprobs <- c(0.25, 0.5, 0.75) myobsns <- quantile(y, probs = myprobs) myquant <- (-log(myprobs))^(-1/shape.init[1]) myfit <- lsfit(x = myquant, y = myobsns) Scale.init <- if (length( .iscale )) rep_len( .iscale , n) else { if (all(shape.init > 1)) { myfit$coef[2] } else { rep_len(1.0, n) } } etastart <- cbind(theta2eta(Scale.init, .lscale , earg = .escale ), theta2eta(shape.init, .lshape , earg = .eshape )) } }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .iscale = iscale, .ishape = ishape, .location = location ))), linkinv = eval(substitute(function(eta, extra = NULL) 
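## The mean returned below, loc + Scale * gamma(1 - 1/shape), exists
## only for shape > 1 (hence the NA otherwise). Consistency checks for
## the [dpqr]frechet() functions above (comments only; values made up):
##   pfrechet(qfrechet(0.3, shape = 3), shape = 3)   # 0.3
##   mean(rfrechet(1e5, scale = 2, shape = 3))       # ~ 2 * gamma(2/3)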
{ loc <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) ans <- rep_len(NA_real_, length(shape)) ok <- shape > 1 ans[ok] <- loc[ok] + Scale[ok] * gamma(1 - 1/shape[ok]) ans }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ))), last = eval(substitute(expression({ misc$links <- c("scale" = .lscale , "shape" = .lshape ) misc$earg <- list("scale" = .escale , "shape" = .eshape ) misc$nsimEIM <- .nsimEIM }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { loctn <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dfrechet(x = y, location = loctn, scale = Scale, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ))), vfamily = c("frechet", "vextremes"), deriv = eval(substitute(expression({ loctn <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) rzedd <- Scale / (y - loctn) # reciprocial of zedd dl.dloctn <- (shape + 1) / (y - loctn) - (shape / (y - loctn)) * (rzedd)^shape dl.dScale <- shape * (1 - rzedd^shape) / Scale dl.dshape <- 1 / shape + log(rzedd) * (1 - rzedd^shape) dthetas.detas <- cbind( dScale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ), dShape.deta <- dtheta.deta(shape, .lshape , earg = .eshape )) c(w) * cbind(dl.dScale, dl.dshape) * dthetas.detas }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) if (length( .nsimEIM )) { for (ii in 1:( .nsimEIM )) { ysim <- rfrechet(n, location = loctn, scale = Scale, shape = shape) rzedd <- Scale / (ysim - loctn) # reciprocial of zedd dl.dloctn <- (shape + 1) / (ysim - loctn) - (shape / (ysim - loctn)) * (rzedd)^shape dl.dScale <- shape * (1 - rzedd^shape) / Scale dl.dshape <- 1 / shape + log(rzedd) * (1 - rzedd^shape) rm(ysim) temp3 <- cbind(dl.dScale, dl.dshape) run.varcov <- run.varcov + temp3[, ind1$row.index] * temp3[, ind1$col.index] } run.varcov <- run.varcov / .nsimEIM wz = if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz = c(w) * wz * dthetas.detas[, ind1$row.index] * dthetas.detas[, ind1$col.index] } else { stop("argument 'nsimEIM' must be numeric") } wz }), list( .nsimEIM = nsimEIM )))) } rec.normal.control <- function(save.weights = TRUE, ...) 
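## A hedged sketch of fitting frechet() (it uses a simulated EIM via
## 'nsimEIM', which is why frechet.control() sets save.weights = TRUE).
## Kept as comments; the data are hypothetical:
##   set.seed(1)
##   fdata <- data.frame(y = rfrechet(400, scale = 2, shape = 4))
##   ffit <- vglm(y ~ 1, frechet(location = 0), data = fdata)
##   Coef(ffit)   # scale and shape on their original scales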
{ list(save.weights = save.weights) } rec.normal <- function(lmean = "identitylink", lsd = "loge", imean = NULL, isd = NULL, imethod = 1, zero = NULL) { lmean <- as.list(substitute(lmean)) emean <- link2list(lmean) lmean <- attr(emean, "function.name") lsdev <- as.list(substitute(lsd)) esdev <- link2list(lsdev) lsdev <- attr(esdev, "function.name") isdev <- isd if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3.5) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("Upper record values from a univariate normal distribution\n\n", "Links: ", namesof("mean", lmean, earg = emean, tag = TRUE), "; ", namesof("sd", lsdev, earg = esdev, tag = TRUE), "\n", "Variance: sd^2"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = FALSE, multipleResponses = FALSE, parameters.names = c("mean", "sd"), lmean = .lmean , lsd = .lsd , imethod = .imethod , zero = .zero ) }, list( .zero = zero, .lmean = lmean, .lsd = lsd, .imethod = imethod ))), initialize = eval(substitute(expression({ predictors.names <- c(namesof("mean", .lmean, earg = .emean , tag = FALSE), namesof("sd", .lsdev, earg = .esdev , tag = FALSE)) if (ncol(y <- cbind(y)) != 1) stop("response must be a vector or a one-column matrix") if (any(diff(y) <= 0)) stop("response must have increasingly larger and larger values") if (any(w != 1)) warning("weights should have unit values only") if (!length(etastart)) { mean.init <- if (length( .imean )) rep_len( .imean , n) else { if (.lmean == "loge") pmax(1/1024, min(y)) else min(y)} sd.init <- if (length( .isdev)) rep_len( .isdev , n) else { if (.imethod == 1) 1*(sd(c(y))) else if (.imethod == 2) 5*(sd(c(y))) else .5*(sd(c(y))) } etastart <- cbind(theta2eta(rep_len(mean.init, n), .lmean , .emean ), theta2eta(rep_len(sd.init, n), .lsdev , .esdev )) } }), list( .lmean = lmean, .lsdev = lsdev, .emean = emean, .esdev = esdev, .imean = imean, .isdev = isdev, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .lmean, .emean ) }, list( .lmean = lmean, .emean = emean ))), last = eval(substitute(expression({ misc$link <- c("mu" = .lmean , "sd" = .lsdev ) misc$earg <- list("mu" = .emean , "sd" = .esdev ) }), list( .lmean = lmean, .lsdev = lsdev, .emean = emean, .esdev = esdev ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { sdev <- eta2theta(eta[, 2], .lsdev ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { zedd <- (y - mu) / sdev NN <- nrow(eta) if (summation) { sum(w * (-log(sdev) - 0.5 * zedd^2)) - sum(w[-NN] * pnorm(zedd[-NN], lower.tail = FALSE, log.p = TRUE)) } else { stop("cannot handle 'summation = FALSE' yet") } } }, list( .lsdev = lsdev, .esdev = esdev ))), vfamily = c("rec.normal"), deriv = eval(substitute(expression({ NN <- nrow(eta) mymu <- eta2theta(eta[, 1], .lmean) sdev <- eta2theta(eta[, 2], .lsdev) zedd <- (y - mymu) / sdev temp200 <- dnorm(zedd) / (1-pnorm(zedd)) dl.dmu <- (zedd - temp200) / sdev dl.dmu[NN] <- zedd[NN] / sdev[NN] dl.dsd <- (-1 + zedd^2 - zedd * temp200) / sdev dl.dsd[NN] <- (-1 + zedd[NN]^2) / sdev[NN] dmu.deta <- dtheta.deta(mymu, .lmean, .emean ) dsd.deta <- dtheta.deta(sdev, .lsdev, .esdev ) if (iter == 1) { etanew <- eta } else { derivold <- derivnew etaold <- etanew etanew 
<- eta } derivnew <- c(w) * cbind(dl.dmu * dmu.deta, dl.dsd * dsd.deta) derivnew }), list( .lmean = lmean, .lsdev = lsdev, .emean = emean, .esdev = esdev ))), weight = expression({ if (iter == 1) { wznew <- cbind(matrix(w, n, M), matrix(0, n, dimm(M)-M)) } else { wzold <- wznew wznew <- qnupdate(w = w, wzold = wzold, dderiv = (derivold - derivnew), deta = etanew-etaold, M = M, trace = trace) # weights incorporated in args } wznew })) } rec.exp1.control <- function(save.weights = TRUE, ...) { list(save.weights = save.weights) } rec.exp1 <- function(lrate = "loge", irate = NULL, imethod = 1) { lrate <- as.list(substitute(lrate)) erate <- link2list(lrate) lrate <- attr(erate, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3.5) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("Upper record values from a ", "1-parameter exponential distribution\n\n", "Links: ", namesof("rate", lrate, earg = erate, tag = TRUE), "\n", "Variance: 1/rate^2"), initialize = eval(substitute(expression({ predictors.names <- c(namesof("rate", .lrate , earg = .erate , tag = FALSE)) if (ncol(y <- cbind(y)) != 1) stop("response must be a vector or a one-column matrix") if (any(diff(y) <= 0)) stop("response must have increasingly larger and larger values") if (any(w != 1)) warning("weights should have unit values only") if (!length(etastart)) { rate.init <- if (length( .irate )) rep_len( .irate , n) else { init.rate <- if (.imethod == 1) length(y) / y[length(y), 1] else if (.imethod == 2) 1/mean(y) else 1/median(y) if (.lrate == "loge") pmax(1/1024, init.rate) else init.rate } etastart <- cbind(theta2eta(rep_len(rate.init, n), .lrate , .erate )) } }), list( .lrate = lrate, .erate = erate, .irate = irate, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta, .lrate , .erate ) }, list( .lrate = lrate, .erate = erate ))), last = eval(substitute(expression({ misc$link <- c("rate" = .lrate) misc$earg <- list("rate" = .erate) misc$expected = TRUE }), list( .lrate = lrate, .erate = erate ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { rate <- eta2theta(eta, .lrate , .erate ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { NN <- length(eta) y <- cbind(y) if (summation) { sum(w * log(rate)) - w[NN] * rate[NN] * y[NN, 1] } else { stop("cannot handle 'summation = FALSE' yet") } } }, list( .lrate = lrate, .erate = erate ))), vfamily = c("rec.exp1"), deriv = eval(substitute(expression({ NN <- length(eta) rate <- c(eta2theta(eta, .lrate , .erate )) dl.drate <- 1 / rate dl.drate[NN] <- 1/ rate[NN] - y[NN, 1] drate.deta <- dtheta.deta(rate, .lrate , .erate ) c(w) * cbind(dl.drate * drate.deta) }), list( .lrate = lrate, .erate = erate ))), weight = expression({ ed2l.drate2 <- 1 / rate^2 wz <- drate.deta^2 * ed2l.drate2 c(w) * wz })) } dpois.points <- function(x, lambda, ostatistic, dimension = 2, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(lambda), length(ostatistic), length(dimension)) if (length(x) != L) x <- rep_len(x, L) if (length(lambda) != L) lambda <- rep_len(lambda, L) if (length(ostatistic) != L) ostatistic <- rep_len(ostatistic, L) if (length(dimension) != L) dimension <- rep_len(dimension, L) if (!all(dimension %in% c(2, 3))) stop("argument 'dimension' must have values 2 and/or 3") ans2 <- log(2) + ostatistic * 
log(pi * lambda) - lgamma(ostatistic) + (2 * ostatistic - 1) * log(x) - lambda * pi * x^2 ans2[x < 0 | is.infinite(x)] <- log(0) # 20141209 KaiH ans3 <- log(3) + ostatistic * log(4 * pi * lambda / 3) - lgamma(ostatistic) + (3 * ostatistic - 1) * log(x) - (4/3) * lambda * pi * x^3 ans3[x < 0 | is.infinite(x)] <- log(0) # 20141209 KaiH ans <- ifelse(dimension == 2, ans2, ans3) if (log.arg) ans else exp(ans) } poisson.points <- function(ostatistic, dimension = 2, link = "loge", idensity = NULL, imethod = 1) { if (!is.Numeric(ostatistic, length.arg = 1, positive = TRUE)) stop("argument 'ostatistic' must be a single positive integer") if (!is.Numeric(dimension, positive = TRUE, length.arg = 1, integer.valued = TRUE) || dimension > 3) stop("argument 'dimension' must be 2 or 3") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.Numeric(imethod, length.arg = 1, positive = TRUE, integer.valued = TRUE) || imethod > 2.5) stop("argument 'imethod' must be 1 or 2") if (length(idensity) && !is.Numeric(idensity, positive = TRUE)) stop("bad input for argument 'idensity'") new("vglmff", blurb = c(if (dimension == 2) "Poisson-points-on-a-plane distances distribution\n" else "Poisson-points-on-a-volume distances distribution\n", "Link: ", namesof("density", link, earg = earg), "\n\n", if (dimension == 2) "Mean: gamma(s+0.5) / (gamma(s) * sqrt(density * pi))" else "Mean: gamma(s+1/3) / (gamma(s) * (4*density*pi/3)^(1/3))"), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("density"), link = .link ) }, list( .link = link ))), initialize = eval(substitute(expression({ if (NCOL(y) != 1) stop("response must be a vector or a one-column matrix") if (any(y <= 0)) stop("response must contain positive values only") predictors.names <- namesof("density", .link, earg = .earg , tag = FALSE) if (!length(etastart)) { use.this <- if ( .imethod == 1) median(y) + 1/8 else weighted.mean(y,w) if ( .dimension == 2) { myratio <- exp(lgamma( .ostatistic + 0.5) - lgamma( .ostatistic )) density.init <- if (is.Numeric( .idensity )) rep_len( .idensity , n) else rep_len(myratio^2 / (pi * use.this^2), n) etastart <- theta2eta(density.init, .link , earg = .earg ) } else { myratio <- exp(lgamma( .ostatistic + 1/3) - lgamma( .ostatistic )) density.init <- if (is.Numeric( .idensity )) rep_len( .idensity , n) else rep_len(3 * myratio^3 / (4 * pi * use.this^3), n) etastart <- theta2eta(density.init, .link , earg = .earg ) } } }), list( .link = link, .earg = earg, .ostatistic = ostatistic, .dimension = dimension, .imethod = imethod, .idensity = idensity ))), linkinv = eval(substitute(function(eta, extra = NULL) { density <- eta2theta(eta, .link, earg = .earg) if ( .dimension == 2) { myratio <- exp(lgamma( .ostatistic + 0.5) - lgamma( .ostatistic )) myratio / sqrt(density * pi) } else { myratio <- exp(lgamma( .ostatistic + 1/3) - lgamma( .ostatistic)) myratio / (4 * density * pi/3)^(1/3) } }, list( .link = link, .earg = earg, .ostatistic = ostatistic, .dimension = dimension ))), last = eval(substitute(expression({ misc$link <- c("density" = .link ) misc$earg <- list("density" = .earg ) misc$ostatistic <- .ostatistic misc$dimension <- .dimension }), list( .link = link, .earg = earg, .ostatistic = ostatistic, .dimension = dimension ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { density <- eta2theta(eta, .link, earg = .earg) if (residuals) { 
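# dpois.points() above is the density of the distance from an origin to
# the 'ostatistic'th nearest point of a Poisson process of intensity
# 'lambda' in the plane (dimension = 2) or in space (dimension = 3).
# An illustrative check that it integrates to one (never executed):
if (FALSE) {
  integrate(dpois.points, 0, Inf, lambda = 1.5, ostatistic = 2)  # ~ 1
  integrate(dpois.points, 0, Inf, lambda = 1.5, ostatistic = 2,
            dimension = 3)                                       # ~ 1
}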
stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dpois.points(y, lambda = density, ostatistic = .ostatistic , dimension = .dimension , log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg, .ostatistic = ostatistic, .dimension = dimension ))), vfamily = c("poisson.points"), deriv = eval(substitute(expression({ density <- eta2theta(eta, .link, earg = .earg) dl.ddensity <- if ( .dimension == 2) { .ostatistic / density - pi * y^2 } else { .ostatistic / density - (4/3) * pi * y^3 } ddensity.deta <- dtheta.deta(density, .link , earg = .earg ) c(w) * dl.ddensity * ddensity.deta }), list( .link = link, .earg = earg, .ostatistic = ostatistic, .dimension = dimension ))), weight = eval(substitute(expression({ ned2l.ddensity2 <- .ostatistic / density^2 wz <- ddensity.deta^2 * ned2l.ddensity2 c(w) * wz }), list( .link = link, .earg = earg, .ostatistic = ostatistic, .dimension = dimension )))) } VGAM/R/calibrate.R0000644000176200001440000006004613135276757013304 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. calibrate.qrrvglm.control <- function(object, trace = FALSE, # passed into optim() Method.optim = "BFGS", # passed into optim(method = Method) gridSize = ifelse(Rank == 1, 9, 5), varI.latvar = FALSE, ...) { Rank <- object@control$Rank eq.tolerances <- object@control$eq.tolerances if (!is.Numeric(gridSize, positive = TRUE, integer.valued = TRUE, length.arg = 1)) stop("bad input for argument 'gridSize'") if (gridSize < 2) stop("'gridSize' must be >= 2") list( trace = as.numeric(trace)[1], Method.optim = Method.optim, gridSize = gridSize, varI.latvar = as.logical(varI.latvar)[1]) } # calibrate.qrrvglm.control calibrate.qrrvglm <- function(object, newdata = NULL, type = c("latvar", "predictors", "response", "vcov", "all3or4"), initial.vals = NULL, ...) { Quadratic <- if (is.logical(object@control$Quadratic)) object@control$Quadratic else FALSE # T if CQO, F if CAO newdata.orig <- newdata if (!length(newdata)) { newdata <- data.frame(depvar(object)) } if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("latvar", "predictors", "response", "vcov", "all3or4"))[1] if (!Quadratic && type == "vcov") stop("cannot have 'type=\"vcov\"' when object is ", "a \"rrvgam\" object") if (!all(weights(object, type = "prior") == 1)) warning("not all the prior weights of 'object' are 1; assuming ", "they all are here") if (!is.data.frame(newdata)) newdata <- data.frame(newdata) if (!all(object@misc$ynames %in% colnames(newdata))) stop("need the following colnames in 'newdata': ", paste(object@misc$ynames, collapse = ", ")) newdata <- newdata[, object@misc$ynames, drop = FALSE] if (!is.matrix(newdata)) newdata <- as.matrix(newdata) nn <- nrow(newdata) obfunct <- slot(object@family, object@misc$criterion) minimize.obfunct <- if (Quadratic) object@control$min.criterion else TRUE # Logical; TRUE for CAO objects because deviance is minimized if (!is.logical(minimize.obfunct)) stop("object@control$min.criterion is not a logical") optim.control <- calibrate.qrrvglm.control(object = object, ...) 
use.optim.control <- optim.control use.optim.control$Method.optim <- use.optim.control$gridSize <- use.optim.control$varI.latvar <- NULL if ((Rank <- object@control$Rank) > 2) stop("currently can only handle Rank = 1 and 2") Coefobject <- if (Quadratic) { Coef(object, varI.latvar = optim.control$varI.latvar) } else { Coef(object) } if (!length(initial.vals)) { L <- apply(latvar(object), 2, min) U <- apply(latvar(object), 2, max) initial.vals <- if (Rank == 1) cbind(seq(L, U, length = optim.control$gridSize)) else expand.grid(seq(L[1], U[1], length = optim.control$gridSize), seq(L[2], U[2], length = optim.control$gridSize)) } M <- npred(object) v.simple <- if (Quadratic) { length(object@control$colx1.index) == 1 && names(object@control$colx1.index) == "(Intercept)" && (if (any(names(constraints(object)) == "(Intercept)")) trivial.constraints(constraints(object))["(Intercept)"] == 1 else TRUE) } else { FALSE # To simplify things for "rrvgam" objects } B1bix <- if (v.simple) { matrix(Coefobject@B1, nn, M, byrow = TRUE) } else { Xlm <- predict.vlm(as(object, "vglm"), # object, newdata = newdata.orig, type = "Xlm") Xlm[, names(object@control$colx1.index), drop = FALSE] %*% (if (Quadratic) Coefobject@B1 else object@coefficients[1:M]) } # !v.simple objfun1 <- function(lv1val, x = NULL, y, w = 1, extraargs) { ans <- sum(c(w) * extraargs$Obfunction( bnu = c(lv1val), y0 = y, extra = extraargs$object.extra, objfun = extraargs$obfunct, Object = extraargs$Object, # Needed for "rrvgam" objects Coefs = extraargs$Coefobject, B1bix = extraargs$B1bix, misc.list = extraargs$object.misc, everything = FALSE, mu.function = extraargs$mu.function)) ans } objfun2 <- function(lv1val, lv2val, x = NULL, y, w = 1, extraargs) { ans <- sum(c(w) * extraargs$Obfunction( bnu = c(lv1val, lv2val), y0 = y, extra = extraargs$object.extra, objfun = extraargs$obfunct, Object = extraargs$Object, # Needed for "rrvgam" objects Coefs = extraargs$Coefobject, B1bix = extraargs$B1bix, misc.list = extraargs$object.misc, everything = FALSE, mu.function = extraargs$mu.function)) ans } mu.function <- slot(object@family, "linkinv") wvec <- 1 # zz; Assumed here mylist <- list(object.extra = object@extra, Obfunction = if (Quadratic) .my.calib.objfunction.qrrvglm else .my.calib.objfunction.rrvgam, Coefobject = Coefobject, B1bix = NA, # Will be replaced below obfunct = obfunct, object.misc = object@misc, Object = if (Quadratic) 666 else object, mu.function = mu.function) init.vals <- matrix(NA_real_, nn, Rank) for (i1 in 1:nn) { if (optim.control$trace) cat("Grid searching initial values for observation", i1, "-----------------\n") y0 <- newdata[i1, , drop = FALSE] # drop added 20150624 mylist$B1bix <- B1bix[i1, ] try.this <- if (Rank == 1) grid.search(initial.vals[, 1], objfun = objfun1, y = y0 , w = wvec, ret.objfun = TRUE, extraargs = mylist) else grid.search2(initial.vals[, 1], initial.vals[, 2], objfun = objfun2, y = y0, w = wvec, ret.objfun = TRUE, extraargs = mylist) lv1.init <- try.this[if (Rank == 1) "Value" else "Value1"] lv2.init <- if (Rank >= 2) try.this["Value2"] else NULL init.vals[i1, ] <- c(lv1.init, lv2.init) } # for i1 BestOFpar <- matrix(NA_real_, nn, Rank) BestOFvalues <- rep(NA_real_, nn) # Best OF objective function values for (i1 in 1:nn) { if (optim.control$trace) { cat("\nOptimizing for observation", i1, "-----------------\n") flush.console() } ans <- optim(par = init.vals[i1, ], fn = if (Quadratic) .my.calib.objfunction.qrrvglm else .my.calib.objfunction.rrvgam, method = optim.control$Method.optim, # "BFGS" or "CG" 
or... control = c(fnscale = ifelse(minimize.obfunct, 1, -1), use.optim.control), # as.vector() needed y0 = newdata[i1, , drop = FALSE], # drop added 20150624 extra = object@extra, objfun = obfunct, Object = if (Quadratic) 666 else object, Coefs = Coefobject, B1bix = B1bix[i1, , drop = FALSE], misc.list = object@misc, everything = FALSE, # Differs from below mu.function = mu.function) if (optim.control$trace) { if (ans$convergence == 0) cat("Successful convergence\n") else cat("Unsuccessful convergence\n") flush.console() } if (ans$convergence == 0) { BestOFpar[i1, ] <- ans$par BestOFvalues[i1] <- ans$value } # else do nothing since NA_real_ signifies convergence failure } # for i1 pretty <- function(BestOFpar, newdata, Rank) { if (Rank == 1) { if (!is.null(dimnames(newdata)[[1]])) { BestOFpar <- c(BestOFpar) names(BestOFpar) <- dimnames(newdata)[[1]] } } else { dimnames(BestOFpar) <- list(dimnames(newdata)[[1]], if (Rank == 1) "latvar" else paste("latvar", 1:Rank, sep = "")) } BestOFpar } # pretty BestOFpar <- pretty(BestOFpar, newdata, Rank) attr(BestOFpar,"objectiveFunction") <- pretty(BestOFvalues, newdata, Rank = 1) if (type == "latvar") return(BestOFpar) choose.fun <- if (Quadratic) .my.calib.objfunction.qrrvglm else .my.calib.objfunction.rrvgam etaValues <- matrix(NA_real_, nn, M) muValues <- matrix(NA_real_, nn, ncol(fitted(object))) vcValues <- if (Quadratic) array(0, c(Rank, Rank, nn)) else NULL if (optim.control$trace) cat("\n") for (i1 in 1:nn) { if (optim.control$trace) { cat("Evaluating quantities for observation", i1, "-----------------\n") flush.console() } ans <- choose.fun( bnu = if (Rank == 1) BestOFpar[i1] else BestOFpar[i1, ], y0 = newdata[i1, , drop = FALSE], # drop added 20150624 extra = object@extra, objfun = obfunct, Object = if (Quadratic) 666 else object, Coefs = Coefobject, B1bix = B1bix[i1, , drop = FALSE], misc.list = object@misc, everything = TRUE, mu.function = mu.function) muValues[i1, ] <- ans$mu etaValues[i1, ] <- ans$eta if (Quadratic) vcValues[, , i1] <- ans$vcmat # Might be NULL, e.g., "rvgam" } # for i1 dimnames(muValues) <- dimnames(newdata) dimnames(etaValues) <- list(dimnames(newdata)[[1]], dimnames(object@predictors)[[2]]) if (Quadratic) dimnames(vcValues) <- list(as.character(1:Rank), as.character(1:Rank), dimnames(newdata)[[1]]) switch(type, response = muValues, predictors = etaValues, vcov = vcValues, all3or4 = list(latvar = BestOFpar, predictors = etaValues, response = muValues, vcov = if (Quadratic) vcValues else NULL)) } # calibrate.qrrvglm .my.calib.objfunction.qrrvglm <- function(bnu, y0, extra = NULL, objfun, Coefs, Object, B1bix, misc.list, everything = TRUE, mu.function) { bnumat <- cbind(bnu) Rank <- length(bnu) eta <- cbind(c(B1bix)) + Coefs@A %*% bnumat M <- misc.list$M check.eta <- matrix(0, M, 1) for (ss in 1:M) { temp <- Coefs@D[, , ss, drop = FALSE] dim(temp) <- dim(temp)[1:2] # c(Rank, Rank) eta[ss, 1] <- eta[ss, 1] + t(bnumat) %*% temp %*% bnumat if (FALSE) { warning("this line is wrong:") alf <- loge(Coefs@Maximum[ss]) # zz get the link function tolmat <- Coefs@Tolerance[, , ss, drop = FALSE] check.eta[ss, 1] <- alf - 0.5 * t(bnumat) %*% solve(tolmat) %*% bnumat } # FALSE } # for ss eta <- matrix(eta, 1, M, byrow = TRUE) mu <- rbind(mu.function(eta, extra)) # Make sure it has one row value <- objfun(mu = mu, y = y0, w = 1, # ignore prior.weights on the object residuals = FALSE, eta = eta, extra = extra) if (everything) { vcmat <- matrix(0, Rank, Rank) for (ss in 1:M) { vec1 <- cbind(Coefs@A[ss, ]) + 2 * matrix(Coefs@D[, , ss], 
Rank, Rank) %*% bnumat vcmat <- vcmat + mu[1, ss] * vec1 %*% t(vec1) } vcmat <- solve(vcmat) } else { vcmat <- NULL } if (everything) list(eta = eta, mu = mu, value = value, vcmat = vcmat) else value } # .my.calib.objfunction.qrrvglm .my.calib.objfunction.rrvgam <- function(bnu, y0, extra = NULL, objfun, Object, # Needed for "rrvgam" objects Coefs, B1bix, # Actually not needed here misc.list, everything = TRUE, mu.function) { Rank <- length(bnu) NOS <- Coefs@NOS eta <- matrix(NA_real_, 1, NOS) for (jlocal in 1:NOS) { eta[1, jlocal] <- predictrrvgam(Object, grid = bnu, sppno = jlocal, Rank = Rank, deriv = 0)$yvals } mu <- rbind(mu.function(eta, extra)) # Make sure it has one row value <- objfun(mu = mu, y = y0, w = 1, # ignore prior.weights on the object residuals = FALSE, eta = eta, extra = extra) vcmat <- NULL # No theory as of yet to compute the vcmat if (everything) list(eta = eta, mu = mu, value = value, vcmat = vcmat) else value } # .my.calib.objfunction.rrvgam calibrate.rrvglm <- function(object, newdata = NULL, type = c("latvar", "predictors", "response", "vcov", "all3or4"), initial.vals = NULL, # For one observation only ...) { Quadratic <- FALSE # Because this function was adapted from CQO code. newdata.orig <- newdata if (!length(newdata)) { newdata <- data.frame(depvar(object)) } if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("latvar", "predictors", "response", "vcov", "all3or4"))[1] if (!all(weights(object, type = "prior") == 1)) warning("not all the prior weights of 'object' are 1; assuming ", "they all are here") if (!is.data.frame(newdata)) newdata <- data.frame(newdata) if (!all(object@misc$ynames %in% colnames(newdata))) stop("need the following colnames in 'newdata': ", paste(object@misc$ynames, collapse = ", ")) newdata <- newdata[, object@misc$ynames, drop = FALSE] if (!is.matrix(newdata)) newdata <- as.matrix(newdata) nn <- nrow(newdata) obfunct <- slot(object@family, object@misc$criterion) minimize.obfunct <- object@control$min.criterion # zz if (!is.logical(minimize.obfunct)) stop("object@control$min.criterion is not a logical") minimize.obfunct <- as.vector(minimize.obfunct) optim.control <- calibrate.rrvglm.control(object = object, ...) 
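# For an RR-VGLM the calibration below maximizes the fitted model's
# criterion over the latent variables nu, using eta = B1bix + A %*% nu.
# Illustrative call only ('rfit' and 'newy' are hypothetical objects):
if (FALSE) {
  cal <- calibrate(rfit, newdata = newy, type = "all3or4")
  cal$latvar  # point estimates of nu
  cal$vcov    # approximate variance-covariance matrices, one per row
}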
use.optim.control <- optim.control use.optim.control$Method.optim <- use.optim.control$gridSize <- use.optim.control$varI.latvar <- NULL if ((Rank <- object@control$Rank) > 3) stop("currently can only handle Rank = 1, 2 and 3") Coefobject <- if (Quadratic) { Coef(object, varI.latvar = optim.control$varI.latvar) } else { Coef(object) } if (!length(initial.vals)) { L <- apply(latvar(object), 2, min) U <- apply(latvar(object), 2, max) initial.vals <- if (Rank == 1) cbind(seq(L, U, length = optim.control$gridSize)) else if (Rank == 2) expand.grid(seq(L[1], U[1], length = optim.control$gridSize), seq(L[2], U[2], length = optim.control$gridSize)) else expand.grid(seq(L[1], U[1], length = optim.control$gridSize), seq(L[2], U[2], length = optim.control$gridSize), seq(L[3], U[3], length = optim.control$gridSize)) } # !length(initial.vals) v.simple <- length(object@control$colx1.index) == 1 && names(object@control$colx1.index) == "(Intercept)" && (if (any(names(constraints(object)) == "(Intercept)")) trivial.constraints(constraints(object))["(Intercept)"] == 1 else TRUE) B1bix <- if (v.simple) { matrix(Coefobject@B1, nn, M, byrow = TRUE) } else { Xlm <- predict.vlm(as(object, "vglm"), # object, newdata = newdata.orig, type = "Xlm") Xlm[, names(object@control$colx1.index), drop = FALSE] %*% Coefobject@B1 } # !v.simple objfun1 <- function(lv1val, x = NULL, y, w = 1, extraargs) { ans <- sum(c(w) * extraargs$Obfunction( bnu = c(lv1val), y0 = y, extra = extraargs$object.extra, objfun = extraargs$obfunct, Object = extraargs$Object, Coefs = extraargs$Coefobject, B1bix = extraargs$B1bix, misc.list = extraargs$object.misc, everything = FALSE, mu.function = extraargs$mu.function)) ans } objfun2 <- function(lv1val, lv2val, x = NULL, y, w = 1, extraargs) { ans <- sum(c(w) * extraargs$Obfunction( bnu = c(lv1val, lv2val), y0 = y, extra = extraargs$object.extra, objfun = extraargs$obfunct, Object = extraargs$Object, Coefs = extraargs$Coefobject, B1bix = extraargs$B1bix, misc.list = extraargs$object.misc, everything = FALSE, mu.function = extraargs$mu.function)) ans } objfun3 <- function(lv1val, lv2val, lv3val, x = NULL, y, w = 1, extraargs) { ans <- sum(c(w) * extraargs$Obfunction( bnu = c(lv1val, lv2val, lv3val), y0 = y, extra = extraargs$object.extra, objfun = extraargs$obfunct, Object = extraargs$Object, Coefs = extraargs$Coefobject, B1bix = extraargs$B1bix, misc.list = extraargs$object.misc, everything = FALSE, mu.function = extraargs$mu.function)) ans } mu.function <- slot(object@family, "linkinv") wvec <- 1 # zz; Assumed here mylist <- list(object.extra = object@extra, Obfunction = .my.calib.objfunction.rrvglm, Coefobject = Coefobject, B1bix = NA, # Will be replaced below obfunct = obfunct, object.misc = object@misc, Object = 666, # object, mu.function = mu.function) M <- npred(object) init.vals <- matrix(NA_real_, nn, Rank) for (i1 in 1:nn) { if (optim.control$trace) cat("Grid searching initial values for observation", i1, "-----------------\n") y0 <- newdata[i1, , drop = FALSE] # drop added 20150624 mylist$B1bix <- B1bix[i1, ] try.this <- if (Rank == 1) grid.search(initial.vals[, 1], objfun = objfun1, y = y0 , w = wvec, ret.objfun = TRUE, extraargs = mylist) else if (Rank == 2) grid.search2(initial.vals[, 1], initial.vals[, 2], objfun = objfun2, y = y0, w = wvec, ret.objfun = TRUE, extraargs = mylist) else grid.search3(initial.vals[, 1], initial.vals[, 2], initial.vals[, 3], objfun = objfun3, y = y0, w = wvec, ret.objfun = TRUE, extraargs = mylist) lv1.init <- try.this[if (Rank == 1) "Value" else "Value1"] 
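# The two-stage pattern used here (coarse grid search for starting
# values, then optim() refinement) in a self-contained toy form:
if (FALSE) {
  f <- function(b) (b[1] - 1)^2 + (b[2] + 0.5)^2
  grid <- as.matrix(expand.grid(seq(-2, 2, length.out = 9),
                                seq(-2, 2, length.out = 9)))
  start <- grid[which.min(apply(grid, 1, f)), ]
  optim(start, f, method = "BFGS")$par  # approx c(1, -0.5)
}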
lv2.init <- if (Rank >= 2) try.this["Value2"] else NULL lv3.init <- if (Rank >= 3) try.this["Value3"] else NULL init.vals[i1, ] <- c(lv1.init, lv2.init, lv3.init) } # for i1 BestOFpar <- matrix(NA_real_, nn, Rank) BestOFvalues <- rep(NA_real_, nn) # Best OF objective function values for (i1 in 1:nn) { if (optim.control$trace) { cat("\nOptimizing for observation", i1, "-----------------\n") flush.console() } ans <- optim(par = init.vals[i1, ], fn = .my.calib.objfunction.rrvglm, method = optim.control$Method.optim, # "BFGS" or "CG" or... control = c(fnscale = ifelse(minimize.obfunct, 1, -1), use.optim.control), # as.vector() needed y0 = newdata[i1, , drop = FALSE], # drop added 20150624 extra = object@extra, objfun = obfunct, Object = 666, # object, Coefs = Coefobject, B1bix = B1bix[i1, , drop = FALSE], misc.list = object@misc, everything = FALSE, # Differs from below mu.function = mu.function) if (optim.control$trace) { if (ans$convergence == 0) cat("Successful convergence\n") else cat("Unsuccessful convergence\n") flush.console() } if (ans$convergence == 0) { BestOFpar[i1, ] <- ans$par BestOFvalues[i1] <- ans$value } # else do nothing since NA_real_ signifies convergence failure } # for i1 pretty <- function(BestOFpar, newdata, Rank) { if (Rank == 1) { BestOFpar <- c(BestOFpar) names(BestOFpar) <- dimnames(newdata)[[1]] } else dimnames(BestOFpar) <- list(dimnames(newdata)[[1]], if (Rank == 1) "latvar" else paste("latvar", 1:Rank, sep = "")) BestOFpar } # pretty BestOFpar <- pretty(BestOFpar, newdata, Rank) attr(BestOFpar,"objectiveFunction") <- pretty(BestOFvalues, newdata, Rank = 1) if (type == "latvar") return(BestOFpar) etaValues <- matrix(NA_real_, nn, M) muValues <- matrix(NA_real_, nn, ncol(fitted(object))) vcValues <- if (Quadratic) array(0, c(Rank, Rank, nn)) else NULL if (optim.control$trace) cat("\n") for (i1 in 1:nn) { if (optim.control$trace) { cat("Evaluating quantities for observation", i1, "-----------------\n") flush.console() } ans <- .my.calib.objfunction.rrvglm( bnu = if (Rank == 1) BestOFpar[i1] else BestOFpar[i1, ], y0 = newdata[i1, , drop = FALSE], # drop added 20150624 extra = object@extra, objfun = obfunct, Object = 666, # object, Coefs = Coefobject, B1bix = B1bix[i1, , drop = FALSE], misc.list = object@misc, everything = TRUE, mu.function = mu.function) muValues[i1, ] <- ans$mu etaValues[i1, ] <- ans$eta if (Quadratic) vcValues[, , i1] <- ans$vcmat # Might be NULL, e.g., "rvgam" } # for i1 dimnames(muValues) <- dimnames(newdata) dimnames(etaValues) <- list(dimnames(newdata)[[1]], dimnames(object@predictors)[[2]]) if (Quadratic) dimnames(vcValues) <- list(as.character(1:Rank), as.character(1:Rank), dimnames(newdata)[[1]]) switch(type, response = muValues, predictors = etaValues, vcov = vcValues, all3or4 = list(latvar = BestOFpar, predictors = etaValues, response = muValues, vcov = if (Quadratic) vcValues else NULL)) } # calibrate.rrvglm .my.calib.objfunction.rrvglm <- function(bnu, y0, extra = NULL, objfun, Coefs, Object, # Not needed B1bix, misc.list, everything = TRUE, mu.function) { bnumat <- cbind(bnu) Rank <- length(bnu) eta <- cbind(c(B1bix)) + Coefs@A %*% bnumat M <- misc.list$M eta <- matrix(eta, 1, M, byrow = TRUE) mu <- rbind(mu.function(eta, extra = extra)) # Make sure it has 1 row value <- objfun(mu = mu, y = y0, w = 1, # ignore prior.weights on the object zz residuals = FALSE, eta = eta, extra = extra) if (everything) { vcmat <- matrix(0, Rank, Rank) for (ss in 1:M) { vec1 <- cbind(Coefs@A[ss, ]) vcmat <- vcmat + mu[1, ss] * vec1 %*% t(vec1) } vcmat <- 
solve(vcmat) } else { vcmat <- NULL } if (everything) list(eta = eta, mu = mu, value = value, vcmat = vcmat) else value } # .my.calib.objfunction.rrvglm calibrate.rrvglm.control <- function(object, trace = FALSE, # passed into optim() Method.optim = "BFGS", # passed into optim(method = Method) gridSize = if (Rank == 1) 7 else 5, ...) { Rank <- object@control$Rank if (!is.Numeric(gridSize, positive = TRUE, integer.valued = TRUE, length.arg = 1)) stop("bad input for argument 'gridSize'") if (gridSize < 2) stop("argument 'gridSize' must be >= 2") list( trace = as.numeric(trace)[1], Method.optim = Method.optim, gridSize = gridSize ) } # calibrate.rrvglm.control VGAM/R/generic.q0000644000176200001440000000304713135276757013027 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. add1.vgam <- function(...) stop("no add1 method implemented for vgam() models (yet)") alias.vgam <- function(...) stop("no alias method implemented for vgam() models (yet)") anova.vgam <- function(...) stop("no anova method implemented for vgam() models (yet)") drop1.vgam <- function(...) stop("no drop1 method implemented for vgam() models (yet)") effects.vgam <- function(...) stop("no effects method implemented for vgam() models (yet)") proj.vgam <- function(...) stop("no proj method implemented for vgam() models (yet)") step.vgam <- function(...) stop("no step method implemented for vgam() models (yet)") update.vgam <- function(...) stop("no update method implemented for vgam() models (yet)") add1.vglm <- function(...) stop("no add1 method implemented for vglm() models (yet)") alias.vglm <- function(...) stop("no alias method implemented for vglm() models (yet)") anova.vglm <- function(...) stop("no anova method implemented for vglm() models (yet)") drop1.vglm <- function(...) stop("no drop1 method implemented for vglm() models (yet)") plot.vglm <- function(...) stop("no plot method implemented for vglm() models (yet)") proj.vglm <- function(...) stop("no proj method implemented for vglm() models (yet)") step.vglm <- function(...) stop("no step method implemented for vglm() models (yet)") update.vglm <- function(...) stop("no update method implemented for vglm() models (yet)") VGAM/R/family.genetic.R0000644000176200001440000006706213135276757014261 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. A1A2A3 <- function(link = "logit", inbreeding = FALSE, # HWE assumption is the default ip1 = NULL, ip2 = NULL, iF = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.logical(inbreeding) || length(inbreeding) > 1) stop("argument 'inbreeding' must be a single logical") new("vglmff", blurb = c("G1-G2-G3 phenotype (", ifelse(inbreeding, "without", "with"), " the Hardy-Weinberg equilibrium assumption)\n\n", "Links: ", namesof("p1", link, earg = earg, tag = FALSE), ", ", namesof("p2", link, earg = earg, tag = FALSE), if (inbreeding) paste(",", namesof("f", link, earg = earg, tag = FALSE)) else ""), deviance = Deviance.categorical.data.vgam, infos = eval(substitute(function(...) 
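## A hedged sketch of fitting A1A2A3() to genotype counts (comments
## only; the counts are hypothetical, and the six columns must carry
## the names checked in @initialize):
##   ymat <- rbind(c(108, 196, 54, 85, 48, 9))
##   colnames(ymat) <- c("A1A1", "A1A2", "A1A3", "A2A2", "A2A3", "A3A3")
##   gfit <- vglm(ymat ~ 1, A1A2A3(link = "logit"), trace = TRUE)
##   Coef(gfit)   # estimated allele frequencies p1 and p2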
{ list(Q1 = 6, M1 = ifelse( .inbreeding , 3, 2), expected = TRUE, multipleResponses = FALSE, parameters.names = c("p1", "p2", if ( .inbreeding ) "f" else NULL), link = if ( .inbreeding ) c("p1" = .link , "p2" = .link , "f" = .link ) else c("p1" = .link , "p2" = .link )) }, list( .link = link, .inbreeding = inbreeding ))), initialize = eval(substitute(expression({ mustart.orig <- mustart delete.zero.colns <- FALSE eval(process.categorical.data.VGAM) if (length(mustart.orig)) mustart <- mustart.orig ok.col.ny <- c("A1A1", "A1A2", "A1A3", "A2A2", "A2A3", "A3A3") if (length(col.ny <- colnames(y)) == length(ok.col.ny) && setequal(ok.col.ny, col.ny)) { if (!all(ok.col.ny == col.ny)) stop("the columns of the response matrix should have ", "names (output of colnames()) ordered as ", "c('A1A1','A1A2','A1A3','A2A2','A2A3','A3A3')") } predictors.names <- c(namesof("p1", .link , earg = .earg , tag = FALSE), namesof("p2", .link , earg = .earg , tag = FALSE), if ( .inbreeding ) namesof("f", .link , earg = .earg , tag = FALSE) else NULL) mustart <- (y + mustart) / 2 if (is.null(etastart)) { mydeterminant <- weighted.mean( mustart[, 2] * mustart[, 3] + mustart[, 2] * mustart[, 5] + mustart[, 3] * mustart[, 5], w) p1 <- if (is.numeric( .ip1 )) rep_len( .ip1 , n) else weighted.mean(mustart[, 2] * mustart[, 3], w) / mydeterminant p2 <- if (is.numeric( .ip2 )) rep_len( .ip2 , n) else weighted.mean(mustart[, 2] * mustart[, 5], w) / mydeterminant ff <- if (is.numeric( .iF )) rep_len( .iF , n) else weighted.mean(abs(1 - mustart[, 2] / (2 * p1 * p2)), w) p1 <- rep_len(p1, n) p2 <- rep_len(p2, n) ff <- rep_len(ff, n) p1[p1 < 0.05] <- 0.05 p1[p1 > 0.99] <- 0.99 p2[p2 < 0.05] <- 0.05 p2[p2 > 0.99] <- 0.99 ff[ff < 0.05] <- 0.05 ff[ff > 0.99] <- 0.99 etastart <- cbind(theta2eta(p1, .link , earg = .earg ), theta2eta(p2, .link , earg = .earg ), if ( .inbreeding ) theta2eta(ff, .link , earg = .earg ) else NULL) mustart <- NULL # Since etastart has been computed. 
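# With inbreeding coefficient f, @linkinv below uses the genotype
# probabilities f*p_i + (1-f)*p_i^2 (homozygotes) and 2*(1-f)*p_i*p_j
# (heterozygotes); these sum to one for any f in [0, 1]. A quick check
# (never executed):
if (FALSE) {
  p <- c(0.5, 0.3, 0.2); f <- 0.1
  sum(f * p + (1 - f) * p^2) +
    2 * (1 - f) * (p[1]*p[2] + p[1]*p[3] + p[2]*p[3])  # exactly 1
}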
} }), list( .link = link, .ip1 = ip1, .ip2 = ip2, .iF = iF, .inbreeding = inbreeding, .earg = earg))), linkinv = eval(substitute(function(eta, extra = NULL) { p1 <- eta2theta(eta[, 1], link = .link , earg = .earg ) p2 <- eta2theta(eta[, 2], link = .link , earg = .earg ) f <- if ( .inbreeding ) eta2theta(eta[, 3], link = .link , earg = .earg ) else 0 p3 <- abs(1 - p1 - p2) cbind("A1A1" = f*p1+(1-f)*p1^2, "A1A2" = 2*p1*p2*(1-f), "A1A3" = 2*p1*p3*(1-f), "A2A2" = f*p2+(1-f)*p2^2, "A2A3" = 2*p2*p3*(1-f), "A3A3" = f*p3+(1-f)*p3^2) }, list( .link = link, .earg = earg, .inbreeding = inbreeding))), last = eval(substitute(expression({ if ( .inbreeding ) { misc$link <- c(p1 = .link , p2 = .link , f = .link ) misc$earg <- list(p1 = .earg , p2 = .earg , f = .earg ) } else { misc$link <- c(p1 = .link , p2 = .link ) misc$earg <- list(p1 = .earg , p2 = .earg ) } misc$expected <- TRUE }), list( .link = link, .earg = earg, .inbreeding = inbreeding ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(dmultinomial(x = w * y, size = w, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("A1A2A3", "vgenetic"), validparams = eval(substitute(function(eta, y, extra = NULL) { p1 <- eta2theta(eta[, 1], link = .link , earg = .earg ) p2 <- eta2theta(eta[, 2], link = .link , earg = .earg ) p3 <- 1-p1-p2 okay1 <- all(is.finite(p1)) && all(0 < p1 & p1 < 1) && all(is.finite(p2)) && all(0 < p2 & p2 < 1) && all(is.finite(p3)) && all(0 < p3 & p3 < 1) okay2 <- TRUE if ( .inbreeding ) { f <- eta2theta(eta[, 3], link = .link , earg = .earg ) okay2 <- all(is.finite(f)) && all(0 <= f) # && all(f < 1) } okay1 && okay2 }, list( .link = link, .earg = earg, .inbreeding = inbreeding) )), deriv = eval(substitute(expression({ p1 <- eta2theta(eta[, 1], link = .link , earg = .earg ) p2 <- eta2theta(eta[, 2], link = .link , earg = .earg ) p3 <- 1-p1-p2 f <- if ( .inbreeding ) eta2theta(eta[, 3], link = .link , earg = .earg ) else 0 if ( .inbreeding ) { dP1 <- cbind(f + 2*p1*(1-f), 2*(1-f)*p2, 2*(1-f)*(1-p2-2*p1), 0, -2*(1-f)*p2, -f - 2*p3*(1-f)) dP2 <- cbind(0, 2*p1*(1-f), -2*(1-f)*p1, f+2*p2*(1-f), 2*(1-f)*(1-p1-2*p2), -f - 2*p3*(1-f)) dP3 <- cbind(p1*(1-p1), -2*p1*p2, -2*p1*p3, p2*(1-p2), -2*p2*p3, p3*(1-p3)) dl1 <- rowSums(y * dP1 / mu) dl2 <- rowSums(y * dP2 / mu) dl3 <- rowSums(y * dP3 / mu) dPP.deta <- dtheta.deta(cbind(p1, p2, f), link = .link , earg = .earg ) c(w) * cbind(dPP.deta[, 1] * dl1, dPP.deta[, 2] * dl2, dPP.deta[, 3] * dl3) } else { dl.dp1 <- (2*y[, 1]+y[, 2]+y[, 4])/p1 - (2*y[,6]+y[, 4]+y[,5])/(1-p1-p2) dl.dp2 <- (2*y[, 3]+y[, 2]+y[,5])/p2 - (2*y[,6]+y[, 4]+y[,5])/(1-p1-p2) dp1.deta <- dtheta.deta(p1, link = .link , earg = .earg ) dp2.deta <- dtheta.deta(p2, link = .link , earg = .earg ) c(w) * cbind(dl.dp1 * dp1.deta, dl.dp2 * dp2.deta) } }), list( .link = link, .earg = earg, .inbreeding = inbreeding ))), weight = eval(substitute(expression({ if ( .inbreeding ) { dPP <- array(c(dP1, dP2, dP3), c(n, 6, 3)) wz <- matrix(NA_real_, n, dimm(M)) # dimm(M)==6 because M==3 for (i1 in 1:M) for (i2 in i1:M) { index <- iam(i1, i2, M) wz[, index] <- rowSums(dPP[, , i1, drop = TRUE] * dPP[, , i2, drop = TRUE] / mu) * dPP.deta[, i1] * dPP.deta[, i2] } } else { qq <- 1-p1-p2 wz <- matrix(NA_real_, n, dimm(M)) # dimm(M)==3 because M==2 ned2l.dp12 <- 2 * (1/p1 + 1/qq) ned2l.dp22 <- 2 * (1/p2 + 1/qq) ned2l.dp1dp2 <- 2 / qq wz[, iam(1, 1, M)] <- ned2l.dp12 * dp1.deta^2 wz[, iam(2, 2, M)] <- ned2l.dp22 * dp2.deta^2 wz[, iam(1, 2, 
M)] <- ned2l.dp1dp2 * dp1.deta * dp2.deta } c(w) * wz }), list( .link = link, .earg = earg, .inbreeding = inbreeding )))) } MNSs <- function(link = "logit", imS = NULL, ims = NULL, inS = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("MNSs Blood Group System (MS-Ms-MNS-MNs-NS-Ns phenotype)\n\n", "Links: ", namesof("mS", link, earg = earg), ", ", namesof("ms", link, earg = earg), ", ", namesof("nS", link, earg = earg, tag = FALSE)), deviance = Deviance.categorical.data.vgam, initialize = eval(substitute(expression({ mustart.orig <- mustart delete.zero.colns <- FALSE eval(process.categorical.data.VGAM) if (length(mustart.orig)) mustart <- mustart.orig ok.col.ny <- c("MS","Ms","MNS","MNs","NS","Ns") if (length(col.ny <- colnames(y)) == length(ok.col.ny) && setequal(ok.col.ny, col.ny)) { if (!all(ok.col.ny == col.ny)) stop("the columns of the response matrix should have ", "names (output of colnames()) ordered as ", "c('MS','Ms','MNS','MNs','NS','Ns')") } predictors.names <- c(namesof("mS", .link , earg = .earg , tag = FALSE), namesof("ms", .link , earg = .earg , tag = FALSE), namesof("nS", .link , earg = .earg , tag = FALSE)) if (is.null(etastart)) { ms <- if (is.numeric(.ims)) rep_len( .ims , n) else c(sqrt(mustart[, 2])) ns <- c(sqrt(mustart[, 6])) nS <- if (is.numeric(.inS)) rep_len( .inS , n) else c(-ns + sqrt(ns^2 + mustart[, 5])) # Solve a quadratic eqn mS <- if (is.numeric(.imS)) rep_len( .imS , n) else 1-ns-ms-nS etastart <- cbind(theta2eta(mS, .link , earg = .earg ), theta2eta(ms, .link , earg = .earg ), theta2eta(nS, .link , earg = .earg )) mustart <- NULL # Since etastart has been computed. } }), list( .link = link, .imS = imS, .ims = ims, .inS = inS, .earg = earg))), linkinv = eval(substitute(function(eta, extra = NULL) { mS <- eta2theta(eta[, 1], link = .link , earg = .earg ) ms <- eta2theta(eta[, 2], link = .link , earg = .earg ) nS <- eta2theta(eta[, 3], link = .link , earg = .earg ) ns <- abs(1 - mS - ms - nS) cbind(MS = mS^2 + 2*mS*ms, Ms = ms^2, MNS = 2*(mS*nS + ms*nS + mS*ns), MNs = 2*ms*ns, NS = nS^2 + 2*nS*ns, Ns = ns^2) }, list( .link = link, .earg = earg))), last = eval(substitute(expression({ misc$link <- c(mS = .link , ms = .link , nS = .link ) misc$earg <- list(mS = .earg , ms = .earg , nS = .earg ) misc$expected <- TRUE }), list( .link = link, .earg = earg))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(dmultinomial(x = w * y, size = w, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("MNSs", "vgenetic"), validparams = eval(substitute(function(eta, y, extra = NULL) { mS <- eta2theta(eta[, 1], link = .link , earg = .earg ) ms <- eta2theta(eta[, 2], link = .link , earg = .earg ) nS <- eta2theta(eta[, 3], link = .link , earg = .earg ) ns <- 1-mS-ms-nS okay1 <- all(is.finite(mS)) && all(0 < mS & mS < 1) && all(is.finite(ms)) && all(0 < ms & ms < 1) && all(is.finite(nS)) && all(0 < nS & nS < 1) && all(is.finite(ns)) && all(0 < ns & ns < 1) okay1 }, list( .link = link, .earg = earg) )), deriv = eval(substitute(expression({ mS <- eta2theta(eta[, 1], link = .link , earg = .earg ) ms <- eta2theta(eta[, 2], link = .link , earg = .earg ) nS <- eta2theta(eta[, 3], link = .link , earg = .earg ) ns <- 1-mS-ms-nS dP1 <- cbind(2*(mS+ms), 0, 2*(nS+ns-mS), -2*ms, -2*nS, -2*ns) dP2 <- cbind(2*mS, 2*ms, 2*(nS-mS), 2*(ns-ms), -2*nS, -2*ns) dP3 <- cbind(0, 0, 2*ms, -2*ms, 2*ns, -2*ns) # n x 6 
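  # dP1, dP2 and dP3 are the n x 6 matrices of d(mu_j)/d(mS), d(mu_j)/d(ms)
  # and d(mu_j)/d(nS) for the six phenotype probabilities given in linkinv;
  # the multinomial score below is sum_j y_j * dP_kj / mu_j for each
  # parameter, chained with dtheta.deta() to move to the eta scale.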
    dl1 <- rowSums(y * dP1 / mu)
    dl2 <- rowSums(y * dP2 / mu)
    dl3 <- rowSums(y * dP3 / mu)
    dPP.deta <- dtheta.deta(cbind(mS, ms, nS), link = .link , earg = .earg )
    c(w) * dPP.deta * cbind(dl1, dl2, dl3)
  }), list( .link = link, .earg = earg))),
  weight = eval(substitute(expression({
    dPP <- array(c(dP1, dP2, dP3), c(n, 6, 3))
    wz <- matrix(NA_real_, n, dimm(M))  # dimm(M)==6 because M==3
    for (i1 in 1:M)
      for (i2 in i1:M) {
        index <- iam(i1, i2, M)
        wz[, index] <- rowSums(dPP[, , i1, drop = TRUE] *
                               dPP[, , i2, drop = TRUE] / mu) *
                       dPP.deta[, i1] * dPP.deta[, i2]
      }
    c(w) * wz
  }), list( .link = link, .earg = earg))))
}


 ABO <- function(link.pA = "logit", link.pB = "logit",
                 ipA = NULL, ipB = NULL, ipO = NULL, zero = NULL) {
  link.pA <- as.list(substitute(link.pA))
  earg.pA <- link2list(link.pA)
  link.pA <- attr(earg.pA, "function.name")
  link.pB <- as.list(substitute(link.pB))
  earg.pB <- link2list(link.pB)
  link.pB <- attr(earg.pB, "function.name")
  new("vglmff",
  blurb = c("ABO Blood Group System (A-B-AB-O phenotype)\n\n",
            "Links: ",
            namesof("pA", link.pA, earg = earg.pA, tag = FALSE), ", ",
            namesof("pB", link.pB, earg = earg.pB, tag = FALSE)),
  deviance = Deviance.categorical.data.vgam,
  constraints = eval(substitute(expression({
    constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M,
                                predictors.names = predictors.names, M1 = 2)
  }), list( .zero = zero ))),
  infos = eval(substitute(function(...) {
    list(M1 = 2,
         Q1 = 4,
         multipleResponses = FALSE,
         parameters.names = c("pA", "pB"),
         expected = TRUE,
         zero = .zero ,
         link = c("pA" = .link.pA , "pB" = .link.pB ),
         earg = c("pA" = .earg.pA , "pB" = .earg.pB ) )
  }, list( .link.pA = link.pA, .link.pB = link.pB,
           .earg.pA = earg.pA, .earg.pB = earg.pB,
           .zero = zero ))),
  initialize = eval(substitute(expression({
    mustart.orig <- mustart
    delete.zero.colns <- FALSE
    eval(process.categorical.data.VGAM)
    if (length(mustart.orig))
      mustart <- mustart.orig
    ok.col.ny <- c("A", "B", "AB", "O")
    if (length(col.ny <- colnames(y)) == length(ok.col.ny) &&
        setequal(ok.col.ny, col.ny)) {
      if (!all(ok.col.ny == col.ny))
        stop("the columns of the response matrix should have names ",
             "(output of colnames()) ordered as c('A', 'B', 'AB', 'O')")
    }
    predictors.names <-
      c(namesof("pA", .link.pA , earg = .earg.pA , tag = FALSE),
        namesof("pB", .link.pB , earg = .earg.pB , tag = FALSE))
    mustart <- (y + mustart) / 2
    if (!length(etastart)) {
      pO <- if (is.Numeric( .ipO )) rep_len( .ipO , n) else
            rep_len(c(sqrt( weighted.mean(mustart[, 4], w)) ), n)
      pA <- if (is.Numeric( .ipA )) rep_len( .ipA , n) else
            rep_len(c(1 - sqrt(weighted.mean(mustart[, 2] +
                                             mustart[, 4], w))), n)
      pB <- if (is.Numeric( .ipB )) rep_len( .ipB , n) else
            abs(1 - pA - pO)
      etastart <- cbind(theta2eta(pA, .link.pA , earg = .earg.pA ),
                        theta2eta(pB, .link.pB , earg = .earg.pB ))
      mustart <- NULL  # Since etastart has been computed.
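      # The initial values just computed exploit Hardy-Weinberg equilibrium:
      # P(O) = pO^2 gives pO = sqrt(P(O)), and
      # P(B) + P(O) = (pB + pO)^2 = (1 - pA)^2 gives
      # pA = 1 - sqrt(P(B) + P(O)); columns 2 and 4 of mustart are B and O.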
} }), list( .link.pA = link.pA, .link.pB = link.pB, .ipO = ipO, .ipA = ipA, .ipB = ipB, .earg.pA = earg.pA, .earg.pB = earg.pB ))), linkinv = eval(substitute(function(eta, extra = NULL) { pA <- eta2theta(eta[, 1], link = .link.pA , earg = .earg.pA ) pB <- eta2theta(eta[, 2], link = .link.pB , earg = .earg.pB ) pO <- abs(1 - pA - pB) cbind(A = pA*(pA+2*pO), B = pB*(pB+2*pO), AB = 2*pA*pB, O = pO*pO) }, list( .link.pA = link.pA, .link.pB = link.pB, .earg.pA = earg.pA, .earg.pB = earg.pB ))), last = eval(substitute(expression({ misc$link <- c(pA = .link.pA , pB = .link.pB ) misc$earg <- list(pA = .earg.pA , pB = .earg.pB ) misc$expected <- TRUE }), list( .link.pA = link.pA, .link.pB = link.pB, .earg.pA = earg.pA, .earg.pB = earg.pB ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(dmultinomial(x = w * y, size = w, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("ABO", "vgenetic"), validparams = eval(substitute(function(eta, y, extra = NULL) { ppp <- eta2theta(eta[, 1], link = .link.pA , earg = .earg.pA ) qqq <- eta2theta(eta[, 2], link = .link.pB , earg = .earg.pB ) rrr <- 1 - ppp - qqq # abs(1 - ppp - qqq) prior to 20160624 okay1 <- all(is.finite(ppp)) && all(0 < ppp & ppp < 1) && all(is.finite(qqq)) && all(0 < qqq & qqq < 1) && all(is.finite(rrr)) && all(0 < rrr & rrr < 1) okay1 }, list( .link.pA = link.pA, .link.pB = link.pB, .earg.pA = earg.pA, .earg.pB = earg.pB ))), deriv = eval(substitute(expression({ ppp <- eta2theta(eta[, 1], link = .link.pA , earg = .earg.pA ) qqq <- eta2theta(eta[, 2], link = .link.pB , earg = .earg.pB ) rrr <- 1 - ppp - qqq # abs(1 - ppp - qqq) pbar <- 2*rrr + ppp qbar <- 2*rrr + qqq naa <- y[, 1] nbb <- y[, 2] nab <- y[, 3] noo <- y[, 4] dl.dp <- (naa+nab)/ppp - naa/pbar - 2*nbb/qbar - 2*noo/rrr dl.dq <- (nbb+nab)/qqq - 2*naa/pbar - nbb/qbar - 2*noo/rrr dp.deta <- dtheta.deta(ppp, link = .link.pA , earg = .earg.pA ) dq.deta <- dtheta.deta(qqq, link = .link.pB , earg = .earg.pB ) c(w) * cbind(dl.dp * dp.deta, dl.dq * dq.deta) }), list( .link.pA = link.pA, .link.pB = link.pB, .earg.pA = earg.pA, .earg.pB = earg.pB ))), weight = eval(substitute(expression({ wz <- matrix(NA_real_, n, dimm(M)) # dimm(M)==3 because M==2 ned2l.dp2 <- (1 + 2/ppp + 4*qqq/qbar + ppp/pbar) ned2l.dq2 <- (1 + 2/qqq + 4*ppp/pbar + qqq/qbar) ned2l.dpdq <- 2 * (1 + qqq/qbar + ppp/pbar) wz[, iam(1, 1, M)] <- ned2l.dp2 * dp.deta^2 wz[, iam(2, 2, M)] <- ned2l.dq2 * dq.deta^2 wz[, iam(1, 2, M)] <- ned2l.dpdq * dp.deta * dq.deta c(w) * wz }), list( .link.pA = link.pA, .link.pB = link.pB, .earg.pA = earg.pA, .earg.pB = earg.pB )))) } AB.Ab.aB.ab <- function(link = "logit", init.p = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("AB-Ab-aB-ab phenotype\n\n", "Links: ", namesof("p", link, earg = earg, tag = TRUE)), deviance = Deviance.categorical.data.vgam, initialize = eval(substitute(expression({ mustart.orig <- mustart delete.zero.colns <- FALSE eval(process.categorical.data.VGAM) if (length(mustart.orig)) { mustart <- mustart.orig } ok.col.ny <- c("AB","Ab","aB","ab") if (length(col.ny <- colnames(y)) == length(ok.col.ny) && setequal(ok.col.ny, col.ny)) { if (!all(ok.col.ny == col.ny)) stop("the columns of the response matrix should have ", "names (output of colnames()) ordered as ", "c('AB','Ab','aB','ab')") } predictors.names <- namesof("p", .link , earg = .earg , tag = FALSE) mustart <- (y + mustart) / 2 
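  # In this model P(ab) = p^2 / 4 (see linkinv below), so a method-of-moments
  # starting value is p = sqrt(4 * P(ab)); etastart is then jittered slightly
  # to avoid a degenerate first iteration.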
if (is.null(etastart)) { p.init <- if (is.numeric( .init.p )) rep_len( .init.p , n) else rep_len(c(sqrt(4 * weighted.mean(mustart[, 4], w))), n) etastart <- cbind(theta2eta(p.init, .link , earg = .earg )) etastart <- jitter(etastart) mustart <- NULL # Since etastart has been computed. } }), list( .link = link, .init.p = init.p, .earg = earg))), linkinv = eval(substitute(function(eta,extra = NULL) { p <- eta2theta(eta, link = .link , earg = .earg ) pp4 <- p * p / 4 cbind(AB = 0.5 + pp4, Ab = 0.25 - pp4, aB = 0.25 - pp4, ab = pp4) }, list( .link = link, .earg = earg))), last = eval(substitute(expression({ misc$link <- c(p = .link ) misc$earg <- list(p = .earg ) misc$expected <- TRUE }), list( .link = link, .earg = earg))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(dmultinomial(x = w * y, size = w, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("AB.Ab.aB.ab", "vgenetic"), validparams = eval(substitute(function(eta, y, extra = NULL) { pp <- eta2theta(eta, link = .link , earg = .earg ) okay1 <- all(is.finite(pp)) && all(0 < pp & pp < 1) okay1 }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ pp <- eta2theta(eta, link = .link , earg = .earg ) p2 <- pp*pp nAB <- w * y[, 1] nAb <- w * y[, 2] naB <- w * y[, 3] nab <- w * y[, 4] dl.dp <- 8 * pp * (nAB/(2+p2) - (nAb+naB)/(1-p2) + nab/p2) dp.deta <- dtheta.deta(pp, link = .link , earg = .earg ) dl.dp * dp.deta }), list( .link = link, .earg = earg))), weight = eval(substitute(expression({ ned2l.dp2 <- 4 * p2 * (1/(2+p2) + 2/(1-p2) + 1/p2) wz <- cbind((dp.deta^2) * ned2l.dp2) c(w) * wz }), list( .link = link, .earg = earg)))) } AA.Aa.aa <- function(linkp = "logit", linkf = "logit", inbreeding = FALSE, # HWE assumption is the default ipA = NULL, ifp = NULL, zero = NULL) { linkp <- as.list(substitute(linkp)) eargp <- link2list(linkp) linkp <- attr(eargp, "function.name") linkf <- as.list(substitute(linkf)) eargf <- link2list(linkf) linkf <- attr(eargf, "function.name") if (!is.logical(inbreeding) || length(inbreeding) > 1) stop("argument 'inbreeding' must be a single logical") new("vglmff", blurb = c("AA-Aa-aa phenotype (", ifelse(inbreeding, "without", "with"), " the Hardy-Weinberg equilibrium assumption)\n\n", "Links: ", namesof("pA", linkp, earg = eargp, tag = FALSE), if (inbreeding) paste(",", namesof("f", linkf, earg = eargf, tag = FALSE)) else ""), deviance = Deviance.categorical.data.vgam, infos = eval(substitute(function(...) 
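  # infos() body: M1 = 2 when inbreeding = TRUE (both pA and f are modelled),
  # and M1 = 1 under the default Hardy-Weinberg assumption (pA only).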
{ list(M1 = ifelse( .inbreeding , 2, 1), Q1 = 3, multipleResponses = FALSE, parameters.names = c("pA", if ( .inbreeding ) "f" else NULL), expected = TRUE, zero = .zero , link = if ( .inbreeding ) c("pA" = .linkp , "f" = .linkf ) else c("pA" = .linkp )) }, list( .linkp = linkp, .linkf = linkf, .inbreeding = inbreeding, .zero = zero ))), initialize = eval(substitute(expression({ mustart.orig <- mustart delete.zero.colns <- FALSE eval(process.categorical.data.VGAM) if (length(mustart.orig)) mustart <- mustart.orig ok.col.ny <- c("AA", "Aa", "aa") if (length(col.ny <- colnames(y)) == length(ok.col.ny) && setequal(ok.col.ny, col.ny)) { if (!all(ok.col.ny == col.ny)) stop("the columns of the response matrix ", "should have names ", "(output of colnames()) ordered as c('AA','Aa','aa')") } predictors.names <- c(namesof("pA", .linkp , earg = .eargp , tag = FALSE), if ( .inbreeding ) namesof("f", .linkf , earg = .eargf , tag = FALSE) else NULL) mustart <- (y + mustart) / 2 if (is.null(etastart)) { pA <- if (is.numeric( .ipA )) rep_len( .ipA , n) else rep_len(c(sqrt( weighted.mean(mustart[, 1], w))), n) fp <- if (is.numeric( .ifp )) rep_len( .ifp , n) else runif(n) # 1- mustart[, 2]/(2*pA*(1-pA)) etastart <- cbind(theta2eta(pA, .linkp , earg = .eargp ), if ( .inbreeding ) theta2eta(fp, .linkf , earg = .eargf ) else NULL) mustart <- NULL # Since etastart has been computed. } }), list( .linkp = linkp, .linkf = linkf, .ipA = ipA, .ifp = ifp, .inbreeding = inbreeding, .eargp = eargp, .eargf = eargf ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta <- as.matrix(eta) pA <- eta2theta(eta[, 1], link = .linkp , earg = .eargp ) fp <- if ( .inbreeding ) eta2theta(eta[, 2], link = .linkf , earg = .eargf ) else 0 cbind(AA = pA^2 + pA * (1-pA) * fp, Aa = 2 * pA * (1-pA) * (1 - fp), aa = (1-pA)^2 + pA * (1-pA) * fp) }, list( .linkp = linkp, .linkf = linkf, .eargp = eargp, .eargf = eargf, .inbreeding = inbreeding ))), last = eval(substitute(expression({ if ( .inbreeding ) { misc$link <- c("pA" = .linkp, "f" = .linkf ) misc$earg <- list("pA" = .eargp, "f" = .eargf ) } else { misc$link <- c("pA" = .linkp ) misc$earg <- list("pA" = .eargp ) } misc$expected <- TRUE }), list( .linkp = linkp, .linkf = linkf, .eargp = eargp, .eargf = eargf, .inbreeding = inbreeding ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL) if (residuals) stop("loglikelihood residuals not implemented yet") else { sum(dmultinomial(x = w * y, size = w, prob = mu, log = TRUE, dochecking = FALSE)) }, vfamily = c("AA.Aa.aa", "vgenetic"), validparams = eval(substitute(function(eta, y, extra = NULL) { eta <- as.matrix(eta) pA <- eta2theta(eta[, 1], link = .linkp , earg = .eargp ) okay1 <- all(is.finite(pA)) && all(0 < pA & pA < 1) okay2 <- TRUE if ( .inbreeding ) { fp <- eta2theta(eta[, 2], link = .linkf , earg = .eargf ) okay2 <- all(is.finite(fp)) && all(0 <= fp) # && all(fp < 1) } okay1 && okay2 }, list( .linkp = linkp, .linkf = linkf, .eargp = eargp, .eargf = eargf, .inbreeding = inbreeding ))), deriv = eval(substitute(expression({ eta <- as.matrix(eta) pA <- eta2theta(eta[, 1], link = .linkp , earg = .eargp ) fp <- if ( .inbreeding ) eta2theta(eta[, 2], link = .linkf , earg = .eargf ) else 0 if ( .inbreeding ) { dP1 <- cbind(2*pA*(1-fp) + fp, 2*(1-fp)*(1-2*pA), -2*(1-pA) + fp*(1-2*pA)) dP2 <- cbind(pA*(1-pA), -2*pA*(1-pA), pA*(1-pA)) dl1 <- rowSums(y * dP1 / mu) dl2 <- rowSums(y * dP2 / mu) dPP.deta <- dtheta.deta(pA, link = .linkp , earg = .eargp ) dfp.deta <- dtheta.deta(fp, link = .linkf , earg = .eargf ) 
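      # dl1 and dl2 are the scores for pA and f on the theta scale; the
      # matrix returned below multiplies each by its dtheta/deta factor so
      # that the derivatives are with respect to the linear predictors.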
c(w) * cbind(dPP.deta * dl1, dfp.deta * dl2) } else { nAA <- c(w) * y[, 1] nAa <- c(w) * y[, 2] naa <- c(w) * y[, 3] dl.dpA <- (2*nAA+nAa)/pA - (nAa+2*naa)/(1-pA) dpA.deta <- dtheta.deta(pA, link = .linkp , earg = .eargp ) dl.dpA * dpA.deta } }), list( .linkp = linkp, .linkf = linkf, .eargp = eargp, .eargf = eargf, .inbreeding = inbreeding ))), weight = eval(substitute(expression({ if ( .inbreeding ) { dPP <- array(c(dP1, dP2), c(n, 3, 2)) dPP.deta <- cbind(dtheta.deta(pA, link = .linkp , earg = .eargp ), dtheta.deta(fp, link = .linkf , earg = .eargf )) wz <- matrix(NA_real_, n, dimm(M)) # dimm(M)==3 because M==2 for (i1 in 1:M) for (i2 in i1:M) { index <- iam(i1, i2, M) wz[, index] <- rowSums(dPP[,, i1, drop = TRUE] * dPP[,, i2, drop = TRUE] / mu) * dPP.deta[, i1] * dPP.deta[, i2] } c(w) * wz } else { ned2l.dp2 <- 2 / (pA * (1-pA)) wz <- cbind(c(w) * ned2l.dp2 * dpA.deta^2) wz } }), list( .linkp = linkp, .linkf = linkf, .eargp = eargp, .eargf = eargf, .inbreeding = inbreeding )))) } VGAM/R/qrrvglm.control.q0000644000176200001440000001431313135276757014562 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. qrrvglm.control <- function(Rank = 1, Bestof = if (length(Cinit)) 1 else 10, checkwz = TRUE, Cinit = NULL, Crow1positive = TRUE, epsilon = 1.0e-06, EqualTolerances = NULL, eq.tolerances = TRUE, # 20140520; replaces EqualTolerances Etamat.colmax = 10, FastAlgorithm = TRUE, GradientFunction = TRUE, Hstep = 0.001, isd.latvar = rep_len(c(2, 1, rep_len(0.5, Rank)), Rank), iKvector = 0.1, iShape = 0.1, ITolerances = NULL, I.tolerances = FALSE, # 20140520; replaces ITolerances maxitl = 40, imethod = 1, Maxit.optim = 250, MUXfactor = rep_len(7, Rank), noRRR = ~ 1, Norrr = NA, optim.maxit = 20, Parscale = if (I.tolerances) 0.001 else 1.0, sd.Cinit = 0.02, SmallNo = 5.0e-13, trace = TRUE, Use.Init.Poisson.QO = TRUE, wzepsilon = .Machine$double.eps^0.75, ...) { if (!is.null(EqualTolerances)) { warning("argument 'EqualTolerances' is depreciated. ", "Use argument 'eq.tolerances'") if (is.logical(EqualTolerances)) { if (eq.tolerances != EqualTolerances) stop("arguments 'eq.tolerances' and 'EqualTolerances' differ") } else { stop("argument 'EqualTolerances' is not a logical") } } if (!is.null(ITolerances)) { warning("argument 'ITolerances' is depreciated. ", "Use argument 'I.tolerances'") if (is.logical(ITolerances)) { if (I.tolerances != ITolerances) stop("arguments 'I.tolerances' and 'ITolerances' differ") } else { stop("argument 'ITolerances' is not a logical") } } if (length(Norrr) != 1 || !is.na(Norrr)) { warning("argument 'Norrr' has been replaced by 'noRRR'. 
", "Assigning the latter but using 'Norrr' will become an error in ", "the next VGAM version soon.") noRRR <- Norrr } if (!is.Numeric(iShape, positive = TRUE)) stop("bad input for 'iShape'") if (!is.Numeric(iKvector, positive = TRUE)) stop("bad input for 'iKvector'") if (!is.Numeric(isd.latvar, positive = TRUE)) stop("bad input for 'isd.latvar'") if (any(isd.latvar < 0.2 | isd.latvar > 10)) stop("isd.latvar values must lie between 0.2 and 10") if (length(isd.latvar) > 1 && any(diff(isd.latvar) > 0)) stop("successive isd.latvar values must not increase") if (!is.Numeric(epsilon, positive = TRUE, length.arg = 1)) stop("bad input for 'epsilon'") if (!is.Numeric(Etamat.colmax, positive = TRUE, length.arg = 1) || Etamat.colmax < Rank) stop("bad input for 'Etamat.colmax'") if (!is.Numeric(Hstep, positive = TRUE, length.arg = 1)) stop("bad input for 'Hstep'") if (!is.Numeric(maxitl, positive = TRUE, length.arg = 1, integer.valued = TRUE)) stop("bad input for 'maxitl'") if (!is.Numeric(imethod, positive = TRUE, length.arg = 1, integer.valued = TRUE)) stop("bad input for 'imethod'") if (!is.Numeric(Maxit.optim, integer.valued = TRUE, positive = TRUE)) stop("Bad input for 'Maxit.optim'") if (!is.Numeric(MUXfactor, positive = TRUE)) stop("bad input for 'MUXfactor'") if (any(MUXfactor < 1 | MUXfactor > 10)) stop("MUXfactor values must lie between 1 and 10") if (!is.Numeric(optim.maxit, length.arg = 1, integer.valued = TRUE, positive = TRUE)) stop("Bad input for 'optim.maxit'") if (!is.Numeric(Rank, positive = TRUE, length.arg = 1, integer.valued = TRUE)) stop("bad input for 'Rank'") if (!is.Numeric(sd.Cinit, positive = TRUE, length.arg = 1)) stop("bad input for 'sd.Cinit'") if (I.tolerances && !eq.tolerances) stop("'eq.tolerances' must be TRUE if 'I.tolerances' is TRUE") if (!is.Numeric(Bestof, positive = TRUE, length.arg = 1, integer.valued = TRUE)) stop("bad input for 'Bestof'") FastAlgorithm = as.logical(FastAlgorithm)[1] if (!FastAlgorithm) stop("FastAlgorithm = TRUE is now required") if ((SmallNo < .Machine$double.eps) || (SmallNo > .0001)) stop("SmallNo is out of range") if (any(Parscale <= 0)) stop("Parscale must contain positive numbers only") if (!is.logical(checkwz) || length(checkwz) != 1) stop("bad input for 'checkwz'") if (!is.Numeric(wzepsilon, length.arg = 1, positive = TRUE)) stop("bad input for 'wzepsilon'") ans <- list( Bestof = Bestof, checkwz = checkwz, Cinit = Cinit, Crow1positive=as.logical(rep_len(Crow1positive, Rank)), ConstrainedQO = TRUE, # A constant, not a control parameter Corner = FALSE, # Needed for valt.1iter() Dzero = NULL, epsilon = epsilon, eq.tolerances = eq.tolerances, Etamat.colmax = Etamat.colmax, FastAlgorithm = FastAlgorithm, GradientFunction = GradientFunction, Hstep = Hstep, isd.latvar = rep_len(isd.latvar, Rank), iKvector = as.numeric(iKvector), iShape = as.numeric(iShape), I.tolerances = I.tolerances, maxitl = maxitl, imethod = imethod, Maxit.optim = Maxit.optim, min.criterion = TRUE, # needed for calibrate MUXfactor = rep_len(MUXfactor, Rank), noRRR = noRRR, optim.maxit = optim.maxit, OptimizeWrtC = TRUE, Parscale = Parscale, Quadratic = TRUE, Rank = Rank, save.weights = FALSE, sd.Cinit = sd.Cinit, SmallNo = SmallNo, str0 = NULL, Svd.arg = TRUE, Alpha = 0.5, Uncorrelated.latvar = TRUE, trace = trace, Use.Init.Poisson.QO = as.logical(Use.Init.Poisson.QO)[1], wzepsilon = wzepsilon) ans } VGAM/R/aamethods.q0000644000176200001440000003051013135276757013353 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. 
# All rights reserved. is.Numeric <- function(x, length.arg = Inf, integer.valued = FALSE, positive = FALSE) if (all(is.numeric(x)) && all(is.finite(x)) && (if (is.finite(length.arg)) length(x) == length.arg else TRUE) && (if (integer.valued) all(x == round(x)) else TRUE) && (if (positive) all(x>0) else TRUE)) TRUE else FALSE VGAMenv <- new.env() .VGAM.prototype.list = list( "constraints" = expression({}), "fini" = expression({}), "first" = expression({}), "initialize" = expression({}), "last" = expression({}), "middle" = expression({}), "middle2" = expression({}), "deriv" = expression({}), "weight" = expression({})) setClass("vglmff", representation( "blurb" = "character", "constraints" = "expression", "deviance" = "function", "fini" = "expression", "first" = "expression", "infos" = "function", # Added 20101203 "initialize" = "expression", "last" = "expression", "linkfun" = "function", "linkinv" = "function", "loglikelihood" = "function", "middle" = "expression", "middle2" = "expression", "summary.dispersion" = "logical", "vfamily" = "character", "validparams" = "function", # Added 20160305 "validfitted" = "function", # Added 20160305 "simslot" = "function", "hadof" = "function", "deriv" = "expression", "weight" = "expression"), # "call" prototype = .VGAM.prototype.list ) valid.vglmff <- function(object) { compulsory <- c("initialize", "weight", "deriv", "linkinv") for (ii in compulsory) { if (!length(slot(object, ii))) stop("slot ", ii, " is empty") } if (length(as.list(object@linkinv)) != 3) stop("wrong number of arguments in object@linkinv") } if (FALSE) setValidity("vglmff", valid.vglmff) show.vglmff <- function(object) { f <- object@vfamily if (is.null(f)) stop("not a VGAM family function") nn <- object@blurb cat("Family: ", f[1], "\n") if (length(f) > 1) cat("Informal classes:", paste(f, collapse = ", "), "\n") cat("\n") for (ii in seq_along(nn)) cat(nn[ii]) cat("\n") } setMethod("show", "vglmff", function(object) show.vglmff(object = object)) setClass("vlmsmall", representation( "call" = "call", "coefficients" = "numeric", "constraints" = "list", "control" = "list", "criterion" = "list", "fitted.values" = "matrix", "misc" = "list", "model" = "data.frame", "na.action" = "list", "post" = "list", "preplot" = "list", "prior.weights" = "matrix", "residuals" = "matrix", "weights" = "matrix", "x" = "matrix", "y" = "matrix"), ) setClass("vlm", representation( "assign" = "list", "callXm2" = "call", "contrasts" = "list", "df.residual" = "numeric", "df.total" = "numeric", "dispersion" = "numeric", "effects" = "numeric", "offset" = "matrix", "qr" = "list", "R" = "matrix", "rank" = "integer", "ResSS" = "numeric", "smart.prediction" = "list", "terms" = "list", "Xm2" = "matrix", "Ym2" = "matrix", "xlevels" = "list" ), contains = "vlmsmall" ) setClass("vglm", representation( "extra" = "list", "family" = "vglmff", "iter" = "numeric", "predictors" = "matrix"), contains = "vlm") setClass("vgam", representation( "Bspline" = "list", # each [[i]] is a "vsmooth.spline.fit" "nl.chisq" = "numeric", "nl.df" = "numeric", "spar" = "numeric", "s.xargument" = "character", "var" = "matrix"), contains = "vglm") setClass("pvgam", representation( "ospsslot" = "list"), contains = "vglm") setClass("summary.vgam", representation( anova = "data.frame", cov.unscaled = "matrix", correlation = "matrix", df = "numeric", pearson.resid = "matrix", sigma = "numeric"), prototype(anova = data.frame()), contains = "vgam") setClass("summary.vglm", representation( coef3 = "matrix", cov.unscaled = "matrix", correlation = "matrix", 
df = "numeric", pearson.resid = "matrix", sigma = "numeric"), contains = "vglm") setClass("summary.vlm", representation( coef3 = "matrix", cov.unscaled = "matrix", correlation = "matrix", df = "numeric", pearson.resid = "matrix", sigma = "numeric"), contains = "vlm") setClass("summary.pvgam", representation( anova = "data.frame", "ospsslot" = "list"), prototype(anova = data.frame()), contains = c("summary.vglm", "pvgam") ) setClass(Class = "rrvglm", contains = "vglm") if (FALSE) setClass("qrrvglm", representation( "assign" = "list", "call" = "call", "coefficients" = "numeric", "constraints" = "list", "contrasts" = "list", "control" = "list", "criterion" = "list", "df.residual" = "numeric", "df.total" = "numeric", "dispersion" = "numeric", "extra" = "list", "family" = "vglmff", "fitted.values"= "matrix", "iter" = "numeric", "misc" = "list", "model" = "data.frame", "na.action" = "list", "offset" = "matrix", "post" = "list", "predictors" = "matrix", "preplot" = "list", "prior.weights"= "matrix", "residuals" = "matrix", "smart.prediction" = "list", "terms" = "list", "weights" = "matrix", "x" = "matrix", "Xm2" = "matrix", "Ym2" = "matrix", "xlevels" = "list", "y" = "matrix") ) setClass(Class = "qrrvglm", contains = "rrvglm") if (FALSE) setAs("qrrvglm", "vglm", function(from) new("vglm", "extra"=from@extra, "family"=from@family, "iter"=from@iter, "predictors"=from@predictors, "assign"=from@assign, "call"=from@call, "coefficients"=from@coefficients, "constraints"=from@constraints, "contrasts"=from@contrasts, "control"=from@control, "criterion"=from@criterion, "df.residual"=from@df.residual, "df.total"=from@df.total, "dispersion"=from@dispersion, "effects"=from@effects, "fitted.values"=from@fitted.values, "misc"=from@misc, "model"=from@model, "na.action"=from@na.action, "offset"=from@offset, "post"=from@post, "preplot"=from@preplot, "prior.weights"=from@prior.weights, "qr"=from@qr, "R"=from@R, "rank"=from@rank, "residuals"=from@residuals, "ResSS"=from@ResSS, "smart.prediction"=from@smart.prediction, "terms"=from@terms, "weights"=from@weights, "x"=from@x, "xlevels"=from@xlevels, "y"=from@y)) setClass("rcim0", representation(not.needed = "numeric"), contains = "vglm") # Added 20110506 setClass("rcim", representation(not.needed = "numeric"), contains = "rrvglm") setClass("grc", representation(not.needed = "numeric"), contains = "rrvglm") setMethod("summary", "rcim", function(object, ...) summary.rcim(object, ...)) setMethod("summary", "grc", function(object, ...) summary.grc(object, ...)) if (FALSE) { setClass("vfamily", representation("list")) } if (!isGeneric("Coef")) setGeneric("Coef", function(object, ...) standardGeneric("Coef"), package = "VGAM") if (!isGeneric("Coefficients")) setGeneric("Coefficients", function(object, ...) standardGeneric("Coefficients"), package = "VGAM") if (!isGeneric("logLik")) setGeneric("logLik", function(object, ...) standardGeneric("logLik"), package = "VGAM") if (!isGeneric("plot")) setGeneric("plot", function(x, y, ...) standardGeneric("plot"), package = "VGAM") if (!isGeneric("vcov")) setGeneric("vcov", function(object, ...) standardGeneric("vcov"), package = "VGAM") setClass("uqo", representation( "latvar" = "matrix", "extra" = "list", "family" = "vglmff", "iter" = "numeric", "predictors" = "matrix"), contains = "vlmsmall") setClass(Class = "rrvgam", contains = "vgam") if (!isGeneric("lvplot")) setGeneric("lvplot", function(object, ...) standardGeneric("lvplot"), package = "VGAM") if (FALSE) { if (!isGeneric("ccoef")) setGeneric("ccoef", function(object, ...) 
{ .Deprecated("concoef") standardGeneric("ccoef") }) } if (!isGeneric("concoef")) setGeneric("concoef", function(object, ...) { standardGeneric("concoef") }) if (!isGeneric("model.matrix")) setGeneric("model.matrix", function(object, ...) standardGeneric("model.matrix")) if (!isGeneric("model.frame")) setGeneric("model.frame", function(formula, ...) standardGeneric("model.frame")) if (!isGeneric("predict")) setGeneric("predict", function(object, ...) standardGeneric("predict")) if (!isGeneric("resid")) setGeneric("resid", function(object, ...) standardGeneric("resid")) if (!isGeneric("AIC")) setGeneric("AIC", function(object, ..., k=2) standardGeneric("AIC"), package = "VGAM") if (!isGeneric("summary")) setGeneric("summary", function(object, ...) standardGeneric("summary"), package = "VGAM") if (!isGeneric("QR.R")) setGeneric("QR.R", function(object, ...) standardGeneric("QR.R"), package = "VGAM") setMethod("QR.R", "vglm", function(object, ...) { if (length(object@R)) object@R else { warning("empty 'R' slot on object. Returning a NULL") NULL } }) if (!isGeneric("QR.Q")) setGeneric("QR.Q", function(object, ...) standardGeneric("QR.Q"), package = "VGAM") setMethod("QR.Q", "vglm", function(object, ...) { qr.list <- object@qr if (length(qr.list)) { class(qr.list) <- "qr" qr.Q(qr.list) } else { warning("empty 'qr' slot on object. Returning a NULL") NULL } }) if (!isGeneric("margeffS4VGAM")) setGeneric("margeffS4VGAM", function(object, subset = NULL, VGAMff, ...) standardGeneric("margeffS4VGAM"), package = "VGAM") if (!isGeneric("summaryvglmS4VGAM")) setGeneric("summaryvglmS4VGAM", function(object, VGAMff, ...) standardGeneric("summaryvglmS4VGAM"), package = "VGAM") if (!isGeneric("showsummaryvglmS4VGAM")) setGeneric("showsummaryvglmS4VGAM", function(object, VGAMff, ...) standardGeneric("showsummaryvglmS4VGAM"), package = "VGAM") if (!isGeneric("showvglmS4VGAM")) setGeneric("showvglmS4VGAM", function(object, VGAMff, ...) standardGeneric("showvglmS4VGAM"), package = "VGAM") if (!isGeneric("showvgamS4VGAM")) setGeneric("showvgamS4VGAM", function(object, VGAMff, ...) standardGeneric("showvgamS4VGAM"), package = "VGAM") if (!isGeneric("predictvglmS4VGAM")) setGeneric("predictvglmS4VGAM", function(object, VGAMff, ...) standardGeneric("predictvglmS4VGAM"), package = "VGAM") VGAM/R/print.vlm.q0000644000176200001440000000350413135276757013342 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. show.vlm <- function(object) { if (!is.null(cl <- object@call)) { cat("Call:\n") dput(cl) } coef <- object@coefficients cat("\nCoefficients:\n") print(coef) rank <- object@rank if (is.null(rank)) rank <- sum(!is.na(coef)) n <- object@misc$n M <- object@misc$M nobs <- if (length(object@df.total)) object@df.total else n * M rdf <- object@df.residual if (is.null(rdf)) rdf <- (n - rank) * M cat("\nDegrees of Freedom:", nobs, "Total;", rdf, "Residual\n") if (length(deviance(object)) && is.finite(deviance(object))) cat("Deviance:", format(deviance(object)), "\n") if (length(object@ResSS) && is.finite(object@ResSS)) cat("Residual Sum of Squares:", format(object@ResSS), "\n") invisible(object) } setMethod("show", "vlm", function(object) show.vlm(object)) if (FALSE) print.vlm <- function(x, ...) { if (!is.null(cl <- x@call)) { cat("Call:\n") dput(cl) } coef <- x@coefficients cat("\nCoefficients:\n") print(coef, ...) 
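  # Degrees of freedom are counted on the VLM scale below: each of the n
  # cases contributes M rows, so the total is n * M and the residual degrees
  # of freedom are (n - rank) * M.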
rank <- x@rank if (is.null(rank)) rank <- sum(!is.na(coef)) n <- x@misc$n M <- x@misc$M nobs <- if (length(x@df.total)) x@df.total else n * M rdf <- x@df.residual if (is.null(rdf)) rdf <- (n - rank) * M cat("\nDegrees of Freedom:", nobs, "Total;", rdf, "Residual\n") if (length(deviance(x)) && is.finite(deviance(x))) cat("Deviance:", format(deviance(x)), "\n") if (length(x@ResSS) && is.finite(x@ResSS)) cat("Residual Sum of Squares:", format(x@ResSS), "\n") invisible(x) } if (!is.R()) { setMethod("show", "vlm", function(object) print.vlm(object)) } if (FALSE) setMethod("print", "vlm", function(x, ...) print.vlm(x, ...)) VGAM/R/lrp.R0000644000176200001440000000727313135276757012156 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. lrp.vglm <- function(object, which = NULL, # 1:p.vlm, omit1s = TRUE, trace = NULL, ...) { Pnames <- names(B0 <- coef(object)) nonA <- !is.na(B0) # Usually == rep(TRUE, p.vlm) if (any(is.na(B0))) stop("currently cannot handle NA-valued regression coefficients") pv0 <- t(as.matrix(B0)) # 1 x p.vlm p.vlm <- length(Pnames) if (is.character(which)) which <- match(which, Pnames) if (is.null(which)) which <- 1:p.vlm summ <- summary(object) M <- npred(object) Xm2 <- model.matrix(object, type = "lm2") # Could be a 0 x 0 matrix if (!length(Xm2)) Xm2 <- NULL # Make sure. This is safer clist <- constraints(object, type = "lm") # type = c("lm", "term") H1 <- clist[["(Intercept)"]] if (omit1s && length(H1) && any(which <= ncol(H1))) { if (length(clist) == 1) return(NULL) # Regressed against intercept only which <- which[which > ncol(H1)] } mf <- model.frame(object) Y <- model.response(mf) if (!is.factor(Y)) Y <- as.matrix(Y) n.lm <- nobs(object, type = "lm") OOO <- object@offset if (!length(OOO) || all(OOO == 0)) OOO <- matrix(0, n.lm, M) mt <- attr(mf, "terms") Wts <- model.weights(mf) if (length(Wts) == 0L) Wts <- rep(1, n.lm) # Safest (uses recycling and is a vector) Original.de <- deviance(object) # Could be NULL if (!(use.de <- is.Numeric(Original.de))) Original.ll <- logLik(object) DispersionParameter <- summ@dispersion if (!all(DispersionParameter == 1)) stop("Currently can only handle dispersion parameters ", "that are equal to 1") X.lm <- model.matrix(object, type = "lm") X.vlm <- model.matrix(object, type = "vlm") fam <- object@family LPmat <- predict(object) quasi.type <- if (length(tmp3 <- fam@infos()$quasi.type)) tmp3 else FALSE if (quasi.type) stop("currently this function cannot handle quasi-type models", " or models with an estimated dispersion parameter") ansvec <- B0[which] # Replace these with p-values iptr <- 0 for (i in which) { aa <- nonA aa[i] <- FALSE X.vlm.i <- X.vlm[, aa, drop = FALSE] X.lm.i <- X.lm # Try this # This is needed by vglm.fit(): attr(X.vlm.i, "assign") <- attr(X.vlm, "assign") # zz; this is wrong! 
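    # Within this loop, each coefficient in 'which' is dropped in turn by
    # deleting the corresponding column of X.vlm and refitting with
    # vglm.fit(); twice the drop in log-likelihood (or the deviance
    # difference) is then referred to a chi-squared(1) distribution below.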
attr( X.lm.i, "assign") <- attr( X.lm, "assign") if (is.logical(trace)) object@control$trace <- trace fm <- vglm.fit(x = X.lm.i, # Possibly use X.lm.i or else X.lm y = Y, w = Wts, X.vlm.arg = X.vlm.i, # X.vlm, Xm2 = Xm2, Terms = mt, extra = object@extra, etastart = LPmat, offset = OOO, # ooo, family = fam, control = object@control) zee <- if (use.de) { fm$crit.list[["deviance"]] - Original.de } else { 2 * (Original.ll - fm$crit.list[["loglikelihood"]]) } if (zee > -1e-3) { zee <- max(zee, 0) } else { stop("omitting 1 column has found a better solution, ", "so the original fit had not converged") } zedd <- zee # sgn * sqrt(zee) iptr <- iptr + 1 ansvec[iptr] <- pchisq(zedd, df = 1, lower.tail = FALSE) } # for i ansvec } if (!isGeneric("lrp")) setGeneric("lrp", function(object, ...) standardGeneric("lrp"), package = "VGAM") setMethod("lrp", "vglm", function(object, ...) lrp.vglm(object = object, ...)) VGAM/R/family.positive.R0000644000176200001440000024535213135276757014505 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. N.hat.posbernoulli <- function(eta, link, earg = list(), R = NULL, w = NULL, X.vlm = NULL, Hlist = NULL, extra = list(), model.type = c("0", "b", "t", "tb") ) { if (!is.null(w) && !all(w[1] == w)) warning("estimate of N may be wrong when prior weights ", "are not all the same") model.type <- match.arg(model.type, c("0", "b", "t", "tb"))[1] if (!is.matrix(eta)) eta <- as.matrix(eta) # May be needed for "0" tau <- switch(model.type, "0" = extra$tau, "b" = extra$tau, "t" = ncol(eta), "tb" = (ncol(eta) + 1) / 2) if (length(extra$tau) && extra$tau != tau) warning("variable 'tau' is mistaken") # Checking only jay.index <- switch(model.type, "0" = rep_len(1, tau), "b" = rep_len(1, tau), # Subset: 2 out of 1:2 "t" = 1:tau, # All of them "tb" = 1:tau) # Subset: first tau of them out of M = 2*tau-2 prc <- eta2theta(eta[, jay.index], link, earg = earg) # cap.probs prc <- as.matrix(prc) # Might be needed for Mtb(tau=2). if (FALSE && model.type == "tb") { if (tau == 2) prc <- cbind(prc, 1 - prc) if (tau > 3) stop("cannot handle tau > 3 yet") jay.index <- 1:tau # 'Restore' it coz its used below. zz?? 
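  # Horvitz-Thompson-type abundance estimate: QQQ = prod_j (1 - p_ij) is the
  # probability that animal i is never captured, pibbeta = 1 - QQQ, and
  # N.hat = sum_i 1 / pibbeta_i; ss2 is the variance contribution obtained by
  # treating the capture probabilities as known.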
} QQQ <- exp(rowSums(log1p(-prc))) pibbeta <- exp(log1p(-QQQ)) # One.minus.QQQ N.hat <- sum(1 / pibbeta) # Point estimate ss2 <- sum(QQQ / pibbeta^2) # Assumes bbeta is known if (length(extra$p.small) && any(pibbeta < extra$p.small) && !extra$no.warning) warning("The abundance estimation for this model can be unstable") if (length(R)) { dvect <- matrix(0, length(pibbeta), ncol = ncol(X.vlm)) M <- nrow(Hlist[[1]]) n.lm <- nrow(X.vlm) / M # Number of rows of the LM matrix dprc.deta <- dtheta.deta(prc, link, earg = earg) Hmatrices <- matrix(c(unlist(Hlist)), nrow = M) for (jay in 1:tau) { linpred.index <- jay.index[jay] Index0 <- Hmatrices[linpred.index, ] != 0 X.lm.jay <- X.vlm[(0:(n.lm - 1)) * M + linpred.index, Index0, drop = FALSE] dvect[, Index0] <- dvect[, Index0] + (QQQ / (1-prc[, jay])) * dprc.deta[, jay] * X.lm.jay } dvect <- dvect * (-1 / pibbeta^2) dvect <- colSums(dvect) # Now a vector ncol.X.vlm <- nrow(R) rinv <- diag(ncol.X.vlm) rinv <- backsolve(R, rinv) rowlen <- drop(((rinv^2) %*% rep_len(1, ncol.X.vlm))^0.5) covun <- rinv %*% t(rinv) vecTF <- FALSE for (jay in 1:tau) { linpred.index <- jay.index[jay] vecTF <- vecTF | (Hmatrices[linpred.index, ] != 0) } vecTF.index <- (seq_along(vecTF))[vecTF] covun <- covun[vecTF.index, vecTF.index, drop = FALSE] dvect <- dvect[vecTF.index, drop = FALSE] } list(N.hat = N.hat, SE.N.hat = if (length(R)) c(sqrt(ss2 + t(dvect) %*% covun %*% dvect)) else c(sqrt(ss2)) ) } aux.posbernoulli.t <- function(y, check.y = FALSE, rename = TRUE, name = "bei") { y <- as.matrix(y) if ((tau <- ncol(y)) == 1) stop("argument 'y' needs to be a matrix with at least two columns") if (check.y) { if (!all(y == 0 | y == 1 | y == 1/tau | is.na(y))) stop("response 'y' must contain 0s and 1s only") } zeddij <- cbind(0, t(apply(y, 1, cumsum))) # tau + 1 columns zij <- (0 + (zeddij > 0))[, 1:tau] # 0 or 1. 
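  # zij[i, j] = 1 if animal i was captured on some occasion before j, else 0;
  # these columns serve as the behavioural-effect covariates (e.g. 'bei').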
if (rename) { colnames(zij) <- paste(name, 1:ncol(y), sep = "") } else { if (length(colnames(y))) colnames(zij) <- colnames(y) } cp1 <- numeric(nrow(y)) for (jay in tau:1) cp1[y[, jay] > 0] <- jay if (any(cp1 == 0)) warning("some individuals were never captured!") yr1i <- zeddij[, tau + 1] - 1 list(cap.hist1 = zij, # A matrix of the same dimension as 'y' cap1 = cp1, # Aka ti1 y0i = cp1 - 1, yr0i = tau - cp1 - yr1i, yr1i = yr1i) } rposbern <- function(n, nTimePts = 5, pvars = length(xcoeff), xcoeff = c(-2, 1, 2), Xmatrix = NULL, # If is.null(Xmatrix) then it is created cap.effect = 1, is.popn = FALSE, link = "logit", earg.link = FALSE) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n orig.n <- use.n if (!is.popn) use.n <- 1.50 * use.n + 100 # Bigger due to rejections if (pvars == 0) stop("argument 'pvars' must be at least one") if (pvars > length(xcoeff)) stop("argument 'pvars' is too high") if (earg.link) { earg <- link } else { link <- as.list(substitute(link)) earg <- link2list(link) } link <- attr(earg, "function.name") cap.effect.orig <- cap.effect Ymatrix <- matrix(0, use.n, nTimePts, dimnames = list(as.character(1:use.n), paste("y", 1:nTimePts, sep = ""))) CHmatrix <- matrix(0, use.n, nTimePts, dimnames = list(as.character(1:use.n), paste("ch", 1:(nTimePts ), sep = ""))) if (is.null(Xmatrix)) { Xmatrix <- cbind(x1 = rep_len(1.0, use.n)) if (pvars > 1) Xmatrix <- cbind(Xmatrix, matrix(runif(n = use.n * (pvars-1)), use.n, pvars - 1, dimnames = list(as.character(1:use.n), paste("x", 2:pvars, sep = "")))) } lin.pred.baseline <- xcoeff[1] if (pvars > 1) lin.pred.baseline <- lin.pred.baseline + Xmatrix[, 2:pvars, drop = FALSE] %*% xcoeff[2:pvars] sumrowy <- rep_len(0, use.n) cap.effect <- rep_len(cap.effect.orig, use.n) for (jlocal in 1:nTimePts) { CHmatrix[, jlocal] <- as.numeric(sumrowy > 0) caught.before.TF <- (CHmatrix[, jlocal] > 0) lin.pred <- lin.pred.baseline + caught.before.TF * cap.effect Ymatrix[, jlocal] <- rbinom(use.n, size = 1, prob = eta2theta(lin.pred, link = link, earg = earg)) sumrowy <- sumrowy + Ymatrix[, jlocal] } index0 <- (sumrowy == 0) if (all(!index0)) stop("bug in this code: cannot handle no animals being caught") Ymatrix <- Ymatrix[!index0, , drop = FALSE] Xmatrix <- Xmatrix[!index0, , drop = FALSE] CHmatrix <- CHmatrix[!index0, , drop = FALSE] ans <- data.frame(Ymatrix, Xmatrix, CHmatrix # zCHmatrix, ) if (!is.popn) { ans <- if (nrow(ans) >= orig.n) { ans[1:orig.n, ] } else { rbind(ans, Recall(n = orig.n - nrow(ans), nTimePts = nTimePts, pvars = pvars, xcoeff = xcoeff, cap.effect = cap.effect.orig, link = earg, earg.link = TRUE)) } } rownames(ans) <- as.character(1:nrow(ans)) attr(ans, "pvars") <- pvars attr(ans, "nTimePts") <- nTimePts attr(ans, "cap.effect") <- cap.effect.orig attr(ans, "is.popn") <- is.popn attr(ans, "n") <- n ans } dposbern <- function(x, prob, prob0 = prob, log = FALSE) { x <- as.matrix(x) prob <- as.matrix(prob) prob0 <- as.matrix(prob0) if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (ncol(x) < 2) stop("columns of argument 'x' should be 2 or more") logAA0 <- rowSums(log1p(-prob0)) AA0 <- exp(logAA0) ell1 <- x * log(prob) + (1 - x) * log1p(-prob) - log1p(-AA0) / ncol(x) if (log.arg) ell1 else exp(ell1) } prob.munb.size.VGAM <- function(munb, size) { prob <- size / (size + munb) inf.munb <- is.infinite(munb) inf.size <- is.infinite(size) prob[inf.munb] <- 0 
prob[inf.size] <- 1 prob[inf.munb & inf.size] <- NaN prob[size < 0 | munb < 0] <- NaN prob } dposnegbin <- function(x, size, prob = NULL, munb = NULL, log = FALSE) { if (length(munb)) { if (length(prob)) stop("arguments 'prob' and 'munb' both specified") } else { if (!length(prob)) stop("Only one of 'prob' or 'munb' must be specified") } if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(prob), length(munb), length(size)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(size) != LLL) size <- rep_len(size, LLL) ans <- if (length(munb)) { if (length(munb) != LLL) munb <- rep_len(munb, LLL) dnbinom(x = x, size = size, mu = munb, log = TRUE) } else { if (length(prob) != LLL) prob <- rep_len(prob, LLL) dnbinom(x = x, size = size, prob = prob, log = TRUE) } index0 <- (x == 0) & !is.na(size) # & (!is.na(prob) | !is.na(munb)) ans[ index0] <- log(0.0) ans[!index0] <- ans[!index0] - ( if (length(prob)) pnbinom(0, size = size[!index0], prob = prob[!index0], lower.tail = FALSE, log.p = TRUE) else pnbinom(0, size = size[!index0], mu = munb[!index0], lower.tail = FALSE, log.p = TRUE)) if (!log.arg) ans <- exp(ans) if (!length(prob)) prob <- prob.munb.size.VGAM(munb, size) ans[prob == 0 | prob == 1] <- NaN ans } pposnegbin <- function(q, size, prob = NULL, munb = NULL, lower.tail = TRUE, log.p = FALSE) { if (length(munb)) { if (length(prob)) stop("arguments 'prob' and 'munb' both specified") } else { if (!length(prob)) stop("Only one of 'prob' or 'munb' must be specified") } LLL <- max(length(q), length(prob), length(munb), length(size)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(size) != LLL) size <- rep_len(size, LLL) if (length(munb)) { if (length(munb) != LLL) munb <- rep_len(munb, LLL) } else { if (length(prob) != LLL) prob <- rep_len(prob, LLL) } tail.prob <- if (length(prob)) dnbinom(0, size = size, prob = prob) else dnbinom(0, size = size, mu = munb) vall <- rep_len(ifelse(lower.tail, log(0), log(1)), LLL) ans <- if (length(prob)) { ifelse(q < 1, vall, (if (lower.tail) log(pnbinom(q, size = size, prob = prob) - tail.prob) else pnbinom(q, size = size, prob = prob, lower.tail = FALSE, log.p = TRUE)) - log1p(-tail.prob)) } else { ifelse(q < 1, vall, (if (lower.tail) log(pnbinom(q, size = size, mu = munb) - tail.prob) else pnbinom(q, size = size, mu = munb, lower.tail = FALSE, log.p = TRUE)) - log1p(-tail.prob)) } if (!log.p) ans <- exp(ans) if (!length(prob)) prob <- prob.munb.size.VGAM(munb, size) ans[prob == 0 | prob == 1] <- NaN ans } qposnegbin <- function(p, size, prob = NULL, munb = NULL) { if (length(munb)) { if (length(prob)) stop("arguments 'prob' and 'munb' both specified") } else { if (!length(prob)) stop("Only one of 'prob' or 'munb' must be specified") } ans <- if (length(munb)) { qnbinom(pnbinom(0, size = size, mu = munb, lower.tail = FALSE) * p + dnbinom(0, size = size, mu = munb), size = size, mu = munb) } else { qnbinom(pnbinom(0, size = size, prob = prob, lower.tail = FALSE) * p + dnbinom(0, size = size, prob = prob), size = size, prob = prob) } ans[p == 1] <- Inf ans[p < 0 | 1 < p] <- NaN if (!length(prob)) prob <- prob.munb.size.VGAM(munb, size) ans[prob == 0 | prob == 1] <- NaN ans } rposnegbin <- function(n, size, prob = NULL, munb = NULL) { if (length(munb)) { if (length(prob)) stop("arguments 'prob' and 'munb' both specified") } else { if (!length(prob)) stop("Only one of 'prob' or 'munb' must be specified") } ans <- if (length(munb)) { qnbinom(runif(n, min = dnbinom(0, size = size, mu 
= munb)), size = size, mu = munb) } else { qnbinom(runif(n, min = dnbinom(0, size = size, prob = prob)), size = size, prob = prob) } if (!length(prob)) prob <- prob.munb.size.VGAM(munb, size) ans[prob == 0 | prob == 1] <- NaN ans } EIM.posNB.specialp <- function(munb, size, y.max = NULL, # Must be an integer cutoff.prob = 0.995, prob0, df0.dkmat, df02.dkmat2, intercept.only = FALSE, second.deriv = TRUE) { if (intercept.only) { munb <- munb[1] size <- size[1] prob0 <- prob0[1] df0.dkmat <- df0.dkmat[1] df02.dkmat2 <- df02.dkmat2[1] } y.min <- 0 # Same as negbinomial() actually. A fixed const really if (!is.numeric(y.max)) { eff.p <- sort(c(cutoff.prob, 1 - cutoff.prob)) y.max <- max(qposnegbin(p = eff.p[2], munb = munb, size = size)) + 10 } Y.mat <- if (intercept.only) y.min:y.max else matrix(y.min:y.max, length(munb), y.max-y.min+1, byrow = TRUE) neff.row <- ifelse(intercept.only, 1, nrow(Y.mat)) neff.col <- ifelse(intercept.only, length(Y.mat), ncol(Y.mat)) if (FALSE) { Y.mat2 <- Y.mat + 1 trigg.term0 <- if (intercept.only) { dposnegbin(Y.mat2, size=size, munb=munb) %*% trigamma(Y.mat2+size) } else { rowSums(dposnegbin(Y.mat2, size = size, munb = munb) * trigamma(Y.mat2 + size)) } } trigg.term <- if (TRUE) { answerC <- .C("eimpnbinomspecialp", as.integer(intercept.only), as.double(neff.row), as.double(neff.col), as.double(size), as.double(pposnegbin(Y.mat, size = size, munb = munb, lower.tail = FALSE)), rowsums = double(neff.row)) answerC$rowsums } mymu <- munb / (1 - prob0) # E(Y) ned2l.dk2 <- trigg.term - munb / (size * (size + munb)) - (mymu - munb) / (munb + size)^2 if (second.deriv) ned2l.dk2 <- ned2l.dk2 - df02.dkmat2 / (1 - prob0) - (df0.dkmat / (1 - prob0))^2 ned2l.dk2 } # end of EIM.posNB.specialp() EIM.posNB.speciald <- function(munb, size, y.min = 1, # 20160201; must be an integer y.max = NULL, # Must be an integer cutoff.prob = 0.995, prob0, df0.dkmat, df02.dkmat2, intercept.only = FALSE, second.deriv = TRUE) { if (intercept.only) { munb <- munb[1] size <- size[1] prob0 <- prob0[1] df0.dkmat <- df0.dkmat[1] df02.dkmat2 <- df02.dkmat2[1] } if (!is.numeric(y.max)) { eff.p <- sort(c(cutoff.prob, 1 - cutoff.prob)) y.max <- max(qposnegbin(p = eff.p[2], munb = munb, size = size)) + 10 } Y.mat <- if (intercept.only) y.min:y.max else matrix(y.min:y.max, length(munb), y.max-y.min+1, byrow = TRUE) trigg.term <- if (intercept.only) { dposnegbin(Y.mat, size = size, munb = munb) %*% trigamma(Y.mat + size) } else { rowSums(dposnegbin(Y.mat, size = size, munb = munb) * trigamma(Y.mat + size)) } mymu <- munb / (1 - prob0) # E(Y) ned2l.dk2 <- trigamma(size) - munb / (size * (size + munb)) - (mymu - munb) / (munb + size)^2 - trigg.term if (second.deriv) ned2l.dk2 <- ned2l.dk2 - df02.dkmat2 / (1 - prob0) - (df0.dkmat / (1 - prob0))^2 ned2l.dk2 } # end of EIM.posNB.speciald() posNBD.Loglikfun2 <- function(munbval, sizeval, y, x, w, extraargs) { sum(c(w) * dposnegbin(x = y, munb = munbval, size = sizeval, log = TRUE)) } posnegbinomial.control <- function(save.weights = TRUE, ...) 
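# save.weights defaults to TRUE because some working weights may be obtained
# by simulation (simulated Fisher scoring) and are costly to recompute.
# A minimal usage sketch for the family itself (not run; 'pdata' is a
# hypothetical data frame constructed here purely for illustration):
#   pdata <- data.frame(y = rposnegbin(100, munb = 5, size = 2))
#   fit <- vglm(y ~ 1, posnegbinomial, data = pdata, trace = TRUE)
#   coef(fit, matrix = TRUE)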
{ list(save.weights = save.weights) } posnegbinomial <- function( zero = "size", type.fitted = c("mean", "munb", "prob0"), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, # higher is better for large 'size' eps.trig = 1e-7, max.support = 4000, # 20160201; I have changed this max.chunk.MB = 30, # max.memory = Inf is allowed lmunb = "loge", lsize = "loge", imethod = 1, imunb = NULL, iprobs.y = NULL, # 0.35, gprobs.y = ppoints(8), isize = NULL, gsize.mux = exp(c(-30, -20, -15, -10, -6:3))) { if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (length(isize) && !is.Numeric(isize, positive = TRUE)) stop("bad input for argument 'isize'") lmunb <- as.list(substitute(lmunb)) emunb <- link2list(lmunb) lmunb <- attr(emunb, "function.name") lsize <- as.list(substitute(lsize)) esize <- link2list(lsize) lsize <- attr(esize, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "munb", "prob0"))[1] if (!is.Numeric(eps.trig, length.arg = 1, positive = TRUE) || eps.trig > 0.001) stop("argument 'eps.trig' must be positive and smaller in value") if (!is.Numeric(nsimEIM, length.arg = 1, positive = TRUE, integer.valued = TRUE)) stop("argument 'nsimEIM' must be a positive integer") if (nsimEIM <= 30) warning("argument 'nsimEIM' should be greater than 30, say") new("vglmff", blurb = c("Positive-negative binomial distribution\n\n", "Links: ", namesof("munb", lmunb, earg = emunb), ", ", namesof("size", lsize, earg = esize), "\n", "Mean: munb / (1 - (size / (size + munb))^size)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, mds.min = .mds.min , multipleResponses = TRUE, parameters.names = c("munb", "size"), nsimEIM = .nsimEIM , eps.trig = .eps.trig , lmunb = .lmunb , emunb = .emunb , type.fitted = .type.fitted , zero = .zero , lsize = .lsize , esize = .esize ) }, list( .lmunb = lmunb, .lsize = lsize, .isize = isize, .emunb = emunb, .esize = esize, .zero = zero, .nsimEIM = nsimEIM, .eps.trig = eps.trig, .imethod = imethod, .type.fitted = type.fitted, .mds.min = mds.min))), initialize = eval(substitute(expression({ M1 <- 2 temp12 <- w.y.check(w = w, y = y, Is.integer.y = TRUE, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp12$w y <- temp12$y M <- M1 * ncol(y) extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) predictors.names <- c( namesof(param.names("munb", NOS), .lmunb , earg = .emunb , tag=FALSE), namesof(param.names("size", NOS), .lsize , earg = .esize , tag=FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] gprobs.y <- .gprobs.y imunb <- .imunb # Default in NULL if (length(imunb)) imunb <- matrix(imunb, n, NOS, byrow = TRUE) if (!length(etastart)) { munb.init <- size.init <- matrix(NA_real_, n, NOS) gprobs.y <- .gprobs.y if (length( .iprobs.y )) gprobs.y <- .iprobs.y gsize.mux <- .gsize.mux # gsize.mux is on a relative scale for (jay in 1:NOS) { # For each response 'y_jay'... 
do: wm.yj <- weighted.mean(y[, jay], w = w[, jay]) munb.init.jay <- if ( .imethod == 1 ) { negbinomial.initialize.yj(y[, jay] - 1, w[, jay], gprobs.y = gprobs.y, wm.yj = wm.yj) + 1 - 1/4 } else { wm.yj - 1/2 } if (length(imunb)) { munb.init.jay <- sample(x = imunb[, jay], size = 10, replace = TRUE) munb.init.jay <- unique(sort(munb.init.jay)) } gsize <- gsize.mux * 0.5 * (mean(munb.init.jay) + wm.yj) if (length( .isize )) gsize <- .isize # isize is on an absolute scale try.this <- grid.search2(munb.init.jay, gsize, objfun = posNBD.Loglikfun2, y = y[, jay], w = w[, jay], ret.objfun = TRUE) # Last value is the loglik munb.init[, jay] <- try.this["Value1"] size.init[, jay] <- try.this["Value2"] } # for (jay ...) etastart <- cbind( theta2eta(munb.init , .lmunb , earg = .emunb ), theta2eta(size.init, .lsize , earg = .esize )) etastart <- etastart[, interleave.VGAM(M, M1 = M1), drop = FALSE] } }), list( .lmunb = lmunb, .lsize = lsize, .imunb = imunb, .isize = isize, .emunb = emunb, .esize = esize, .gprobs.y = gprobs.y, .gsize.mux = gsize.mux, .iprobs.y = iprobs.y, .imethod = imethod, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 2) type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. ", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "munb", "prob0"))[1] TF <- c(TRUE, FALSE) munb <- eta2theta(eta[, TF, drop = FALSE], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, !TF, drop = FALSE], .lsize , earg = .esize ) small.size <- 1e-10 if (any(ind4 <- (kmat < small.size))) { warning("estimates of 'size' are very small. Taking evasive action.") kmat[ind4] <- small.size } tempk <- 1 / (1 + munb / kmat) # kmat / (kmat + munb) prob0 <- tempk^kmat oneminusf0 <- 1 - prob0 smallval <- .mds.min # Something like this is needed if (any(big.size <- (munb / kmat < smallval))) { prob0[big.size] <- exp(-munb[big.size]) # The limit as kmat-->Inf oneminusf0[big.size] <- -expm1(-munb[big.size]) } ans <- switch(type.fitted, "mean" = munb / oneminusf0, "munb" = munb, "prob0" = prob0) # P(Y=0) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lsize = lsize, .lmunb = lmunb, .esize = esize, .emunb = emunb, .mds.min = mds.min ))), last = eval(substitute(expression({ temp0303 <- c(rep_len( .lmunb , NOS), rep_len( .lsize , NOS)) names(temp0303) <- c(param.names("munb", NOS), param.names("size", NOS)) temp0303 <- temp0303[interleave.VGAM(M, M1 = M1)] misc$link <- temp0303 # Already named misc$earg <- vector("list", M1*NOS) names(misc$earg) <- names(misc$link) for (ii in 1:NOS) { misc$earg[[M1*ii-1]] <- .emunb misc$earg[[M1*ii ]] <- .esize } misc$max.chunk.MB <- .max.chunk.MB misc$cutoff.prob <- .cutoff.prob misc$imethod <- .imethod misc$nsimEIM <- .nsimEIM misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .lmunb = lmunb, .lsize = lsize, .emunb = emunb, .esize = esize, .cutoff.prob = cutoff.prob, .max.chunk.MB = max.chunk.MB, .nsimEIM = nsimEIM, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { TFvec <- c(TRUE, FALSE) munb <- eta2theta(eta[, TFvec, drop = FALSE], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, !TFvec, drop = FALSE], .lsize , earg = .esize ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dposnegbin(x = y, size = kmat, munb = munb, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, 
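  # The log-likelihood above uses dposnegbin(), the positive (zero-truncated)
  # negative binomial density, weighted by the prior weights w.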
list( .lmunb = lmunb, .lsize = lsize, .emunb = emunb, .esize = esize ))), vfamily = c("posnegbinomial"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) munb <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lmunb, earg = .emunb ) kmat <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lsize, earg = .esize ) rposnegbin(nsim * length(munb), size = kmat, munb = munb) }, list( .lmunb = lmunb, .lsize = lsize, .emunb = emunb, .esize = esize ))), validparams = eval(substitute(function(eta, y, extra = NULL) { munb <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lmunb , earg = .emunb ) size <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lsize , earg = .esize ) small.size.absolute <- 1e-14 # 20160909 smallval <- .mds.min # .munb.div.size okay1 <- all(is.finite(munb)) && all(0 < munb) && all(is.finite(size)) && all(0 < size) && all(small.size.absolute < size) overdispersion <- if (okay1) all(smallval < munb / size) else FALSE if (!overdispersion) warning("parameter 'size' has very large values relative ", "to 'munb'; ", "try fitting a positive-Poisson ", "model instead.") okay1 && overdispersion }, list( .lmunb = lmunb, .emunb = emunb, .lsize = lsize, .esize = esize, .mds.min = mds.min))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- extra$NOS TFvec <- c(TRUE, FALSE) munb <- eta2theta(eta[, TFvec, drop = FALSE], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, !TFvec, drop = FALSE], .lsize , earg = .esize ) smallval <- .mds.min # Something like this is needed if (any(big.size <- munb / kmat < smallval)) { if (FALSE) warning("parameter 'size' has very large values; ", "try fitting a positive-Poisson ", "model instead") kmat[big.size] <- munb[big.size] / smallval } dmunb.deta <- dtheta.deta(munb, .lmunb , earg = .emunb ) dsize.deta <- dtheta.deta(kmat, .lsize , earg = .esize ) tempk <- 1 / (1 + munb / kmat) # kmat / (kmat + munb) tempm <- munb / (kmat + munb) prob0 <- tempk^kmat oneminusf0 <- 1 - prob0 AA16 <- tempm + log(tempk) df0.dmunb <- -tempk * prob0 df0.dkmat <- prob0 * AA16 df02.dmunb2 <- prob0 * tempk * (1 + 1/kmat) / (1 + munb/kmat) df02.dkmat2 <- prob0 * ((tempm^2) / kmat + AA16^2) df02.dkmat.dmunb <- -prob0 * (tempm/kmat + AA16) / (1 + munb/kmat) if (any(big.size)) { prob0[big.size] <- exp(-munb[big.size]) # The limit as kmat-->Inf oneminusf0[big.size] <- -expm1(-munb[big.size]) df0.dmunb[big.size] <- -tempk[big.size] * prob0[big.size] df0.dkmat[big.size] <- prob0[big.size] * AA16[big.size] df02.dmunb2[big.size] <- prob0[big.size] * tempk[big.size] * (1 + 1/kmat[big.size]) / (1 + smallval) df02.dkmat2[big.size] <- prob0[big.size] * ((tempm[big.size])^2 / kmat[big.size] + AA16[big.size]^2) df02.dkmat.dmunb[big.size] <- -prob0[big.size] * (tempm[big.size]/kmat[big.size] + AA16[big.size]) / (1 + smallval) } smallno <- 1e-6 if (TRUE && any(near.boundary <- oneminusf0 < smallno)) { warning("solution near the boundary; either there is no need ", "to fit a positive NBD or the distribution is centred ", "on the value 1") oneminusf0[near.boundary] <- smallno prob0[near.boundary] <- 1 - oneminusf0[near.boundary] } dl.dmunb <- y / munb - (1 + y/kmat) / (1 + munb/kmat) + df0.dmunb / oneminusf0 dl.dsize <- digamma(y + kmat) - digamma(kmat) - (y - munb) / (munb + kmat) + log(tempk) + df0.dkmat / oneminusf0 if (any(big.size)) { } myderiv <- c(w) * cbind(dl.dmunb * dmunb.deta, dl.dsize * dsize.deta) myderiv[, 
interleave.VGAM(M, M1 = M1)] }), list( .lmunb = lmunb, .lsize = lsize, .emunb = emunb, .esize = esize, .mds.min = mds.min ))), weight = eval(substitute(expression({ wz <- matrix(0, n, M+M-1) mymu <- munb / oneminusf0 # Is the same as 'mu', == E(Y) max.support <- .max.support max.chunk.MB <- .max.chunk.MB ind2 <- matrix(FALSE, n, NOS) # Used for SFS for (jay in 1:NOS) { eff.p <- sort(c( .cutoff.prob , 1 - .cutoff.prob )) Q.mins <- 1 Q.maxs <- qposnegbin(p = eff.p[2] , munb = munb[, jay], size = kmat[, jay]) + 10 eps.trig <- .eps.trig Q.MAXS <- pmax(10, ceiling(1 / sqrt(eps.trig))) # Q.maxs <- pmin(Q.maxs, Q.MAXS) ind1 <- if (max.chunk.MB > 0) (Q.maxs - Q.mins < max.support) else FALSE if ((NN <- sum(ind1)) > 0) { Object.Size <- NN * 8 * max(Q.maxs - Q.mins) / (2^20) n.chunks <- if (intercept.only) 1 else max(1, ceiling( Object.Size / max.chunk.MB)) chunk.rows <- ceiling(NN / n.chunks) ind2[, jay] <- ind1 # Save this wind2 <- which(ind1) upr.ptr <- 0 lwr.ptr <- upr.ptr + 1 while (lwr.ptr <= NN) { upr.ptr <- min(upr.ptr + chunk.rows, NN) sind2 <- wind2[lwr.ptr:upr.ptr] wz[sind2, M1*jay] <- EIM.posNB.specialp(munb = munb[sind2, jay], size = kmat[sind2, jay], y.max = max(Q.maxs[sind2]), cutoff.prob = .cutoff.prob , prob0 = prob0[sind2, jay], df0.dkmat = df0.dkmat[sind2, jay], df02.dkmat2 = df02.dkmat2[sind2, jay], intercept.only = intercept.only) if (any(eim.kk.TF <- wz[sind2, M1*jay] <= 0 | is.na(wz[sind2, M1*jay]))) { ind2[sind2[eim.kk.TF], jay] <- FALSE } lwr.ptr <- upr.ptr + 1 } # while } # if } # end of for (jay in 1:NOS) for (jay in 1:NOS) { run.varcov <- 0 ii.TF <- !ind2[, jay] # Not assigned above if (any(ii.TF)) { kkvec <- kmat[ii.TF, jay] muvec <- munb[ii.TF, jay] for (ii in 1:( .nsimEIM )) { ysim <- rposnegbin(sum(ii.TF), munb = muvec, size = kkvec) dl.dk <- digamma(ysim + kkvec) - digamma(kkvec) - (ysim - muvec) / (muvec + kkvec) + log1p(-muvec / (kkvec + muvec)) + df0.dkmat[ii.TF, jay] / oneminusf0[ii.TF, jay] run.varcov <- run.varcov + dl.dk^2 } # end of for loop run.varcov <- c(run.varcov / .nsimEIM ) ned2l.dk2 <- if (intercept.only) mean(run.varcov) else run.varcov wz[ii.TF, M1*jay] <- ned2l.dk2 # * (dsize.deta[ii.TF, jay])^2 } } # jay wz[, M1*(1:NOS) ] <- wz[, M1*(1:NOS) ] * dsize.deta^2 save.weights <- !all(ind2) ned2l.dmunb2 <- mymu / munb^2 - ((1 + mymu/kmat) / kmat) / (1 + munb/kmat)^2 - df02.dmunb2 / oneminusf0 - (df0.dmunb / oneminusf0)^2 wz[, M1*(1:NOS) - 1] <- ned2l.dmunb2 * dmunb.deta^2 ned2l.dmunbsize <- (munb - mymu) / (munb + kmat)^2 - df02.dkmat.dmunb / oneminusf0 - df0.dmunb * df0.dkmat / oneminusf0^2 wz[, M + M1*(1:NOS) - 1] <- ned2l.dmunbsize * dmunb.deta * dsize.deta w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = NOS) }), list( .cutoff.prob = cutoff.prob, .eps.trig = eps.trig, .max.support = max.support, .max.chunk.MB = max.chunk.MB, .nsimEIM = nsimEIM )))) } dposgeom <- function(x, prob, log = FALSE) { dgeom(x - 1, prob = prob, log = log) } pposgeom <- function(q, prob) { L <- max(length(q), length(prob)) if (length(q) != L) q <- rep_len(q, L) if (length(prob) != L) prob <- rep_len(prob, L) ans <- ifelse(q < 1, 0, (pgeom(q, prob) - dgeom(0, prob)) / pgeom(0, prob, lower.tail = FALSE)) ans[prob == 1] <- NaN ans[prob == 0] <- NaN ans } qposgeom <- function(p, prob) { ans <- qgeom(pgeom(0, prob, lower.tail = FALSE) * p + dgeom(0, prob), prob) ans[p == 1] <- Inf ans[p <= 0] <- NaN ans[1 < p] <- NaN ans[prob == 0] <- NaN ans[prob == 1] <- NaN ans } rposgeom <- function(n, prob) { ans <- qgeom(p = runif(n, min = dgeom(0, prob)), prob) ans[prob == 0] <- NaN 
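# The qgeom() call above samples the zero-truncated geometric by
# inversion: U ~ Unif[dgeom(0, prob), 1) forces qgeom(U, prob) >= 1
# almost surely.  Truncation here is just a unit shift, which is
# why dposgeom() simply calls dgeom(x - 1, prob); e.g. (commented):
#   dposgeom(3, 0.4)   # 0.144
#   dgeom(2, 0.4)      # 0.144, the same value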
ans[prob == 1] <- NaN ans } dpospois <- function(x, lambda, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(lambda)) if (length(x) != L) x <- rep_len(x, L) if (length(lambda) != L) lambda <- rep_len(lambda, L) ans <- if (log.arg) { ifelse(x == 0, log(0.0), dpois(x, lambda, log = TRUE) - log1p(-exp(-lambda))) } else { ifelse(x == 0, 0, -dpois(x, lambda) / expm1(-lambda)) } ans[lambda <= 0] <- NaN ans } ppospois <- function(q, lambda) { L <- max(length(q), length(lambda)) if (length(q) != L) q <- rep_len(q, L) if (length(lambda) != L) lambda <- rep_len(lambda, L) ans <- ifelse(q < 1, 0, (ppois(q, lambda) - dpois(0, lambda)) / ppois(0, lambda, lower.tail = FALSE)) ans[lambda <= 0] <- NaN ans } qpospois <- function(p, lambda) { ans <- qpois(ppois(0, lambda, lower.tail = FALSE) * p + dpois(0, lambda), lambda = lambda) ans[p == 1] <- Inf ans[p < 0] <- NaN ans[1 < p] <- NaN ans[lambda <= 0] <- NaN ans } rpospois <- function(n, lambda) { ans <- qpois(p = runif(n, min = dpois(0, lambda)), lambda) ans[lambda <= 0] <- NaN ans } pospoisson <- function(link = "loge", type.fitted = c("mean", "lambda", "prob0"), expected = TRUE, ilambda = NULL, imethod = 1, zero = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.logical(expected) || length(expected) != 1) stop("bad input for argument 'expected'") if (length( ilambda) && !is.Numeric(ilambda, positive = TRUE)) stop("bad input for argument 'ilambda'") type.fitted <- match.arg(type.fitted, c("mean", "lambda", "prob0"))[1] new("vglmff", blurb = c("Positive-Poisson distribution\n\n", "Links: ", namesof("lambda", link, earg = earg, tag = FALSE)), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("lambda"), link = .link , type.fitted = .type.fitted , expected = .expected , earg = .earg) }, list( .link = link, .earg = earg, .expected = expected, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 1 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("lambda", ncoly) predictors.names <- namesof(mynames1, .link , earg = .earg, tag = FALSE) if (!length(etastart)) { lambda.init <- Init.mu(y = y, w = w, imethod = .imethod , imu = .ilambda ) etastart <- theta2eta(lambda.init, .link , earg = .earg) } }), list( .link = link, .earg = earg, .ilambda = ilambda, .imethod = imethod, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- NCOL(eta) / c(M1 = 1) type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "lambda", "prob0"))[1] lambda <- eta2theta(eta, .link , earg = .earg ) ans <- switch(type.fitted, "mean" = -lambda / expm1(-lambda), "lambda" = lambda, "prob0" = exp(-lambda)) # P(Y=0) as it were label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- rep_len( .link , M) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for (ii in 1:M) misc$earg[[ii]] <- .earg misc$M1 <- M1 misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .link = link, .earg = earg, .expected = expected ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { lambda <- eta2theta(eta, .link , earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dpospois(x = y, lambda = lambda, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg ))), vfamily = c("pospoisson"), validparams = eval(substitute(function(eta, y, extra = NULL) { lambda <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(lambda)) && all(0 < lambda) okay1 }, list( .link = link, .earg = earg ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) lambda <- eta2theta(eta, .link , earg = .earg ) rpospois(nsim * length(lambda), lambda) }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ lambda <- eta2theta(eta, .link , earg = .earg ) temp6 <- expm1(lambda) dl.dlambda <- y / lambda - 1 - 1 / temp6 dlambda.deta <- dtheta.deta(lambda, .link , earg = .earg ) c(w) * dl.dlambda * dlambda.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ if ( .expected ) { ned2l.dlambda2 <- (1 + 1 / temp6) * (1/lambda - 1/temp6) wz <- ned2l.dlambda2 * dlambda.deta^2 } else { d2l.dlambda2 <- y / lambda^2 - (1 + 1 / temp6 + 1) / temp6 d2lambda.deta2 <- d2theta.deta2(lambda, .link , earg = .earg) wz <- (dlambda.deta^2) * d2l.dlambda2 - dl.dlambda * d2lambda.deta2 } c(w) * wz }), list( .link = link, .earg = earg, .expected = expected )))) } dposbinom <- function(x, size, prob, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(size), length(prob)) if (length(x) != L) x <- rep_len(x, L) if (length(size) != L) size <- rep_len(size, L) if (length(prob) != L) prob <- rep_len(prob, L) answer <- NaN * x is0 <- (x == 0) ok2 <- (prob > 0) & (prob <= 1) & (size == round(size)) & (size > 0) answer <- dbinom(x = x, size = size, prob = prob, log = TRUE) - log1p(-dbinom(x = 0, size = size, prob = prob)) answer[!ok2] <- NaN if (log.arg) { answer[is0 & ok2] <- log(0.0) } else { answer <- exp(answer) answer[is0 & ok2] <- 0.0 } answer } pposbinom <- function(q, size, prob ) { if (!is.Numeric(prob, positive = TRUE)) stop("no zero or non-numeric values allowed for argument 'prob'") L <- max(length(q), length(size), length(prob)) if (length(q) != L) q <- rep_len(q, L) if (length(size) != L) size <- rep_len(size, L) if (length(prob) != L) prob <- rep_len(prob, L) ifelse(q < 1, 0, (pbinom(q = q, size, prob) - dbinom(x = 0, size, prob)) / pbinom(q = 0, size, prob, lower.tail = FALSE)) } qposbinom <- 
function(p, size, prob ) { ans <- qbinom(pbinom(0, size, prob, lower.tail = FALSE) * p + dbinom(0, size, prob), size, prob) ans[p == 1] <- size[p == 1] ans[p == 0] <- 1 ans[prob == 0] <- NaN ans[p < 0] <- NaN ans[1 < p] <- NaN ans } rposbinom <- function(n, size, prob) { qbinom(p = runif(n, min = dbinom(0, size, prob)), size, prob) } posbinomial <- function(link = "logit", multiple.responses = FALSE, parallel = FALSE, omit.constant = FALSE, p.small = 1e-4, no.warning = FALSE, zero = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.logical(multiple.responses) || length(multiple.responses) != 1) stop("bad input for argument 'multiple.responses'") if (!is.logical(omit.constant) || length(omit.constant) != 1) stop("bad input for argument 'omit.constant'") if (!is.Numeric(p.small, positive = TRUE, length.arg = 1)) stop("bad input for argument 'p.small'") new("vglmff", blurb = c("Positive-binomial distribution\n\n", "Links: ", if (multiple.responses) c(namesof("prob1", link, earg = earg, tag = FALSE), ",...,", namesof("probM", link, earg = earg, tag = FALSE)) else namesof("prob", link, earg = earg, tag = FALSE), "\n"), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .parallel = parallel, .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, expected = TRUE, multipleResponses = .multiple.responses , parameters.names = c("prob"), p.small = .p.small , no.warning = .no.warning , zero = .zero ) }, list( .zero = zero, .p.small = p.small, .multiple.responses = multiple.responses, .no.warning = no.warning ))), initialize = eval(substitute(expression({ mustart.orig <- mustart if ( .multiple.responses ) { temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 1 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly extra$p.small <- .p.small extra$no.warning <- .no.warning extra$orig.w <- w mustart <- matrix(colSums(y) / colSums(w), # Not colSums(y * w)... 
n, ncoly, byrow = TRUE) } else { eval(binomialff(link = .earg , # earg = .earg , earg.link = TRUE)@initialize) } if ( .multiple.responses ) { dn2 <- if (is.matrix(y)) dimnames(y)[[2]] else NULL dn2 <- if (length(dn2)) { paste("E[", dn2, "]", sep = "") } else { paste("prob", 1:M, sep = "") } predictors.names <- namesof(if (M > 1) dn2 else "prob", .link , earg = .earg, short = TRUE) w <- matrix(w, n, ncoly) y <- y / w # Now sample proportion } else { predictors.names <- namesof("prob", .link , earg = .earg , tag = FALSE) } if (length(extra)) extra$w <- w else extra <- list(w = w) if (!length(etastart)) { mustart.use <- if (length(mustart.orig)) mustart.orig else mustart etastart <- cbind(theta2eta(mustart.use, .link , earg = .earg )) } mustart <- NULL nvec <- if (NCOL(y) > 1) { NULL } else { if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) } extra$tau <- if (length(nvec) && length(unique(nvec) == 1)) nvec[1] else NULL }), list( .link = link, .p.small = p.small, .no.warning = no.warning, .earg = earg, .multiple.responses = multiple.responses ))), linkinv = eval(substitute(function(eta, extra = NULL) { w <- extra$w binprob <- eta2theta(eta, .link , earg = .earg ) nvec <- if ( .multiple.responses ) { w } else { if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) } binprob / (1.0 - (1.0 - binprob)^nvec) }, list( .link = link, .earg = earg, .multiple.responses = multiple.responses ))), last = eval(substitute(expression({ extra$w <- NULL # Kill it off misc$link <- rep_len( .link , M) names(misc$link) <- if (M > 1) dn2 else "prob" misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg misc$expected <- TRUE misc$omit.constant <- .omit.constant misc$needto.omit.constant <- TRUE # Safety mechanism misc$multiple.responses <- .multiple.responses w <- as.numeric(w) if (length(extra$tau)) { R <- tfit$qr$qr[1:ncol.X.vlm, 1:ncol.X.vlm, drop = FALSE] R[lower.tri(R)] <- 0 tmp6 <- N.hat.posbernoulli(eta = eta, link = .link , earg = .earg , R = R, w = w, X.vlm = X.vlm.save, Hlist = Hlist, # 20150428; bug fixed here extra = extra, model.type = "0") extra$N.hat <- tmp6$N.hat extra$SE.N.hat <- tmp6$SE.N.hat } }), list( .link = link, .earg = earg, .multiple.responses = multiple.responses, .omit.constant = omit.constant ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { ycounts <- if ( .multiple.responses ) { round(y * extra$orig.w) } else { if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts } nvec <- if ( .multiple.responses ) { w } else { if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) } use.orig.w <- if (is.numeric(extra$orig.w)) extra$orig.w else 1 binprob <- eta2theta(eta, .link , earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { answer <- c(use.orig.w) * dposbinom(x = ycounts, size = nvec, prob = binprob, log = TRUE) if ( .omit.constant ) { answer <- answer - c(use.orig.w) * lchoose(n = nvec, k = ycounts) } ll.elts <- answer if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg, .multiple.responses = multiple.responses, .omit.constant = omit.constant ))), vfamily = c("posbinomial"), validparams = eval(substitute(function(eta, y, extra = NULL) { binprob <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(binprob)) && all(0 < binprob & binprob < 1) okay1 }, list( .link = link, .earg = earg ))), simslot = 
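# --------------------------------------------------------------
# Usage sketch for posbinomial() (commented; 'pdata', 'y' and
# 'size' are invented names).  As with binomialff(), whose
# initialize slot is reused above, a two-column response of
# (successes, failures) supplies the binomial totals:
#   fit <- vglm(cbind(y, size - y) ~ 1, posbinomial,
#               data = pdata, trace = TRUE)
# --------------------------------------------------------------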
eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if ( .multiple.responses ) stop("cannot run simulate() when 'multiple.responses = TRUE'") eta <- predict(object) binprob <- eta2theta(eta, .link , earg = .earg ) extra <- object@extra w <- extra$w # Usual code w <- pwts # 20140101 nvec <- if ( .multiple.responses ) { w } else { if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) } rposbinom(nsim * length(eta), size = nvec, prob = binprob) }, list( .link = link, .earg = earg, .multiple.responses = multiple.responses, .omit.constant = omit.constant ))), deriv = eval(substitute(expression({ use.orig.w <- if (is.numeric(extra$orig.w)) extra$orig.w else rep_len(1, n) nvec <- if ( .multiple.responses ) { w } else { if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) } binprob <- eta2theta(eta, .link , earg = .earg ) dmu.deta <- dtheta.deta(binprob, .link , earg = .earg ) temp1 <- 1 - (1 - binprob)^nvec temp2 <- (1 - binprob)^2 temp3 <- (1 - binprob)^(nvec-2) dl.dmu <- y / binprob - (1 - y) / (1 - binprob) - (1 - binprob) * temp3 / temp1 c(w) * dl.dmu * dmu.deta }), list( .link = link, .earg = earg, .multiple.responses = multiple.responses ))), weight = eval(substitute(expression({ ned2l.dmu2 <- 1 / (binprob * temp1) + (1 - mu) / temp2 - (nvec-1) * temp3 / temp1 - nvec * (temp2^(nvec-1)) / temp1^2 wz <- c(w) * ned2l.dmu2 * dmu.deta^2 wz }), list( .link = link, .earg = earg, .multiple.responses = multiple.responses )))) } posbernoulli.t <- function(link = "logit", parallel.t = FALSE ~ 1, iprob = NULL, p.small = 1e-4, no.warning = FALSE) { apply.parint <- FALSE link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (length(iprob)) if (!is.Numeric(iprob, positive = TRUE) || max(iprob) >= 1) stop("argument 'iprob' must have values in (0, 1)") if (!is.logical(apply.parint) || length(apply.parint) != 1) stop("argument 'apply.parint' must be a single logical") if (!is.Numeric(p.small, positive = TRUE, length.arg = 1)) stop("bad input for argument 'p.small'") new("vglmff", blurb = c("Positive-Bernoulli (capture-recapture) model ", "with temporal effects (M_{t}/M_{th})\n\n", "Links: ", namesof("prob1", link, earg = earg, tag = FALSE), ", ", namesof("prob2", link, earg = earg, tag = FALSE), ", ..., ", namesof("probM", link, earg = earg, tag = FALSE), "\n"), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel.t , constraints = constraints, apply.int = .apply.parint , # TRUE, cm.default = diag(M), cm.intercept.default = diag(M)) }), list( .parallel.t = parallel.t, .apply.parint = apply.parint ))), infos = eval(substitute(function(...) 
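# --------------------------------------------------------------
# Usage sketch (commented): a deermice-style capture-recapture
# data frame with tau = 6 binary capture-history columns is
# assumed.  The M_t model has one probability per occasion:
#   fit.t <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ 1,
#                 posbernoulli.t, data = deermice, trace = TRUE)
#   fit.t@extra$N.hat      # Population-size estimate (see 'last')
#   fit.t@extra$SE.N.hat   # and its standard error
# --------------------------------------------------------------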
{ list(M1 = 1, Q1 = NA, expected = TRUE, multipleResponses = TRUE, parameters.names = c("prob"), p.small = .p.small , no.warning = .no.warning , apply.parint = .apply.parint , parallel.t = .parallel.t ) }, list( .parallel.t = parallel.t, .p.small = p.small, .no.warning = no.warning, .apply.parint = apply.parint ))), initialize = eval(substitute(expression({ M1 <- 1 mustart.orig <- mustart y <- as.matrix(y) M <- ncoly <- ncol(y) extra$ncoly <- ncoly <- ncol(y) extra$tau <- tau <- ncol(y) extra$orig.w <- w extra$p.small <- .p.small extra$no.warning <- .no.warning w <- matrix(w, n, ncoly) mustart <- matrix(colSums(y) / colSums(w), n, ncol(y), byrow = TRUE) mustart[mustart == 0] <- 0.05 mustart[mustart == 1] <- 0.95 if (ncoly == 1) stop("the response is univariate, therefore use posbinomial()") if (!all(y == 0 | y == 1)) stop("response must contain 0s and 1s only") if (!all(w == 1)) stop("argument 'weight' must contain 1s only") dn2 <- if (is.matrix(y)) dimnames(y)[[2]] else NULL dn2 <- if (length(dn2)) { paste("E[", dn2, "]", sep = "") } else { paste("prob", 1:M, sep = "") } predictors.names <- namesof(dn2, .link , earg = .earg, short = TRUE) if (length(extra)) extra$w <- w else extra <- list(w = w) if (!length(etastart)) { mustart.use <- if (length(mustart.orig)) { mustart.orig } else { mustart } etastart <- cbind(theta2eta(mustart.use, .link , earg = .earg )) } mustart <- NULL }), list( .link = link, .earg = earg, .p.small = p.small, .no.warning = no.warning ))), linkinv = eval(substitute(function(eta, extra = NULL) { tau <- extra$ncoly probs <- eta2theta(eta, .link , earg = .earg ) logAA0 <- rowSums(log1p(-probs)) AA0 <- exp(logAA0) AAA <- exp(log1p(-AA0)) # 1 - AA0 fv <- probs / AAA fv }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ extra$w <- NULL # Kill it off misc$link <- rep_len( .link , M) names(misc$link) <- if (M > 1) dn2 else "prob" misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg misc$multiple.responses <- TRUE misc$iprob <- .iprob R <- tfit$qr$qr[1:ncol.X.vlm, 1:ncol.X.vlm, drop = FALSE] R[lower.tri(R)] <- 0 tmp6 <- N.hat.posbernoulli(eta = eta, link = .link , earg = .earg , R = R, w = w, X.vlm = X.vlm.save, Hlist = Hlist, # 20150428; bug fixed here extra = extra, model.type = "t") extra$N.hat <- tmp6$N.hat extra$SE.N.hat <- tmp6$SE.N.hat misc$parallel.t <- .parallel.t misc$apply.parint <- .apply.parint }), list( .link = link, .earg = earg, .parallel.t = parallel.t, .apply.parint = apply.parint, .iprob = iprob ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { ycounts <- y use.orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 probs <- eta2theta(eta, .link , earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(use.orig.w) * dposbern(x = ycounts, # size = 1, # Bernoulli trials prob = probs, prob0 = probs, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg ))), vfamily = c("posbernoulli.t"), validparams = eval(substitute(function(eta, y, extra = NULL) { probs <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(probs)) && all(0 < probs & probs < 1) okay1 }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ probs <- eta2theta(eta, .link , earg = .earg ) dprobs.deta <- dtheta.deta(probs, .link , earg = .earg ) logAA0 <- rowSums(log1p(-probs)) AA0 <- exp(logAA0) AAA <- exp(log1p(-AA0)) # 1 - 
AA0 B.s <- AA0 / (1 - probs) B.st <- array(AA0, c(n, M, M)) for (slocal in 1:(M-1)) for (tlocal in (slocal+1):M) B.st[, slocal, tlocal] <- B.st[, tlocal, slocal] <- B.s[, slocal] / (1 - probs[, tlocal]) temp2 <- (1 - probs)^2 dl.dprobs <- y / probs - (1 - y) / (1 - probs) - B.s / AAA deriv.ans <- c(w) * dl.dprobs * dprobs.deta deriv.ans }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ ned2l.dprobs2 <- 1 / (probs * AAA) + 1 / temp2 - probs / (AAA * temp2) - (B.s / AAA)^2 wz <- matrix(NA_real_, n, dimm(M)) wz[, 1:M] <- ned2l.dprobs2 * (dprobs.deta^2) for (slocal in 1:(M-1)) for (tlocal in (slocal+1):M) wz[, iam(slocal, tlocal, M = M)] <- dprobs.deta[, slocal] * dprobs.deta[, tlocal] * (B.st[, slocal,tlocal] + B.s [, slocal] * B.s [, tlocal] / AAA) / (-AAA) wz }), list( .link = link, .earg = earg )))) } posbernoulli.b <- function(link = "logit", drop.b = FALSE ~ 1, type.fitted = c("likelihood.cond", "mean.uncond"), I2 = FALSE, ipcapture = NULL, iprecapture = NULL, p.small = 1e-4, no.warning = FALSE ) { type.fitted <- match.arg(type.fitted, c("likelihood.cond", "mean.uncond"))[1] link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") apply.parint.b <- FALSE if (length(ipcapture)) if (!is.Numeric(ipcapture, positive = TRUE) || max(ipcapture) >= 1) stop("argument 'ipcapture' must have values in (0, 1)") if (length(iprecapture)) if (!is.Numeric(iprecapture, positive = TRUE) || max(iprecapture) >= 1) stop("argument 'iprecapture' must have values in (0, 1)") if (!is.logical(I2) || length(I2) != 1) stop("argument 'I2' must be a single logical") if (!is.Numeric(p.small, positive = TRUE, length.arg = 1)) stop("bad input for argument 'p.small'") new("vglmff", blurb = c("Positive-Bernoulli (capture-recapture) model ", "with behavioural effects (M_{b}/M_{bh})\n\n", "Links: ", namesof("pcapture", link, earg = earg, tag = FALSE), ", ", namesof("precapture", link, earg = earg, tag = FALSE), "\n"), constraints = eval(substitute(expression({ cm.intercept.default <- if ( .I2 ) diag(2) else cbind(0:1, 1) constraints <- cm.VGAM(matrix(1, 2, 1), x = x, bool = .drop.b , constraints = constraints, apply.int = .apply.parint.b , # TRUE, cm.default = cm.intercept.default, # diag(2), cm.intercept.default = cm.intercept.default) }), list( .drop.b = drop.b, .I2 = I2, .apply.parint.b = apply.parint.b ))), infos = eval(substitute(function(...) 
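# --------------------------------------------------------------
# Usage sketch (commented; same assumed data layout as for
# posbernoulli.t() above).  The M_b model has two linear
# predictors, the capture and recapture probabilities:
#   fit.b <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ 1,
#                 posbernoulli.b, data = deermice, trace = TRUE)
#   coef(fit.b, matrix = TRUE)   # pcapture and precapture
# --------------------------------------------------------------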
{ list(M1 = 2, expected = TRUE, multipleResponses = FALSE, parameters.names = c("pcapture", "precapture"), p.small = .p.small , no.warning = .no.warning , type.fitted = .type.fitted , apply.parint.b = .apply.parint.b ) }, list( .apply.parint.b = apply.parint.b, .p.small = p.small, .no.warning = no.warning, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ M1 <- 2 if (!is.matrix(y) || ncol(y) == 1) stop("the response appears to be univariate") if (!all(y == 0 | y == 1)) stop("response must contain 0s and 1s only") orig.y <- y extra$orig.w <- w extra$tau <- tau <- ncol(y) extra$ncoly <- ncoly <- ncol(y) extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) extra$p.small <- .p.small extra$no.warning <- .no.warning mustart.orig <- mustart M <- 2 tmp3 <- aux.posbernoulli.t(y, rename = FALSE) y0i <- extra$y0i <- tmp3$y0i yr0i <- extra$yr0i <- tmp3$yr0i yr1i <- extra$yr1i <- tmp3$yr1i cap1 <- extra$cap1 <- tmp3$cap1 cap.hist1 <- extra$cap.hist1 <- tmp3$cap.hist1 temp5 <- w.y.check(w = w, y = y, Is.integer.y = TRUE, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.min = 2, ncol.y.max = Inf, out.wy = TRUE, colsyperw = ncol(y), maximize = TRUE) w <- temp5$w # Retain the 0-1 response y <- temp5$y # Retain the 0-1 response mustart <- matrix(colMeans(y), n, tau, byrow = TRUE) mustart <- (mustart + orig.y) / 2 predictors.names <- c(namesof( "pcapture", .link , earg = .earg, short = TRUE), namesof("precapture", .link , earg = .earg, short = TRUE)) if (!length(etastart)) { mustart.use <- if (length(mustart.orig)) { mustart.orig } else { mustart } etastart <- cbind(theta2eta(rowMeans(mustart.use), .link , earg = .earg ), theta2eta(rowMeans(mustart.use), .link , earg = .earg )) if (length( .ipcapture )) etastart[, 1] <- theta2eta( .ipcapture , .link , earg = .earg ) if (length( .iprecapture )) etastart[, 2] <- theta2eta( .iprecapture , .link , earg = .earg ) } mustart <- NULL }), list( .link = link, .earg = earg, .type.fitted = type.fitted, .p.small = p.small, .no.warning = no.warning, .ipcapture = ipcapture, .iprecapture = iprecapture ))), linkinv = eval(substitute(function(eta, extra = NULL) { cap.probs <- eta2theta(eta[, 1], .link , earg = .earg ) rec.probs <- eta2theta(eta[, 2], .link , earg = .earg ) tau <- extra$tau prc <- matrix(cap.probs, nrow(eta), tau) prr <- matrix(rec.probs, nrow(eta), tau) logQQQ <- rowSums(log1p(-prc)) QQQ <- exp(logQQQ) AAA <- exp(log1p(-QQQ)) # 1 - QQQ type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning 'likelihood.cond'.") "likelihood.cond" } type.fitted <- match.arg(type.fitted, c("likelihood.cond", "mean.uncond"))[1] if ( type.fitted == "likelihood.cond") { probs.numer <- prr mat.index <- cbind(1:nrow(prc), extra$cap1) probs.numer[mat.index] <- prc[mat.index] probs.numer[extra$cap.hist1 == 0] <- prc[extra$cap.hist1 == 0] fv <- probs.numer / AAA } else { fv <- prc - prr for (jay in 2:tau) fv[, jay] <- fv[, jay-1] * (1 - cap.probs) fv <- (fv + prr) / AAA } label.cols.y(fv, colnames.y = extra$colnames.y, NOS = tau) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c( .link , .link ) names(misc$link) <- predictors.names misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) misc$earg[[1]] <- .earg misc$earg[[2]] <- .earg misc$expected <- TRUE misc$multiple.responses <- TRUE misc$ipcapture <- .ipcapture misc$iprecapture <- .iprecapture misc$drop.b <- .drop.b misc$multipleResponses <- FALSE misc$apply.parint.b <- .apply.parint.b R <- tfit$qr$qr[1:ncol.X.vlm, 1:ncol.X.vlm, drop = FALSE] R[lower.tri(R)] <- 0 tmp6 <- N.hat.posbernoulli(eta = eta, link = .link , earg = .earg , R = R, w = w, X.vlm = X.vlm.save, Hlist = Hlist, # 20150428; bug fixed here extra = extra, model.type = "b") extra$N.hat <- tmp6$N.hat extra$SE.N.hat <- tmp6$SE.N.hat }), list( .link = link, .earg = earg, .drop.b = drop.b, .ipcapture = ipcapture, .iprecapture = iprecapture, .apply.parint.b = apply.parint.b ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { tau <- extra$ncoly ycounts <- y use.orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 cap.probs <- eta2theta(eta[, 1], .link , earg = .earg ) rec.probs <- eta2theta(eta[, 2], .link , earg = .earg ) prc <- matrix(cap.probs, nrow(eta), tau) prr <- matrix(rec.probs, nrow(eta), tau) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { probs.numer <- prr mat.index <- cbind(1:nrow(prc), extra$cap1) probs.numer[mat.index] <- prc[mat.index] probs.numer[extra$cap.hist1 == 0] <- prc[extra$cap.hist1 == 0] ll.elts <- c(use.orig.w) * dposbern(x = ycounts, # Bernoulli trials prob = probs.numer, prob0 = prc, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg ))), vfamily = c("posbernoulli.b"), validparams = eval(substitute(function(eta, y, extra = NULL) { cap.probs <- eta2theta(eta[, 1], .link , earg = .earg ) rec.probs <- eta2theta(eta[, 2], .link , earg = .earg ) okay1 <- all(is.finite(cap.probs)) && all(0 < cap.probs & cap.probs < 1) && all(is.finite(rec.probs)) && all(0 < rec.probs & rec.probs < 1) okay1 }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ cap.probs <- eta2theta(eta[, 1], .link , earg = .earg ) rec.probs <- eta2theta(eta[, 2], .link , earg = .earg ) y0i <- extra$y0i yr0i <- extra$yr0i yr1i <- extra$yr1i cap1 <- extra$cap1 tau <- extra$tau dcapprobs.deta <- dtheta.deta(cap.probs, .link , earg = .earg ) drecprobs.deta <- dtheta.deta(rec.probs, .link , earg = .earg ) QQQ <- (1 - cap.probs)^tau dl.dcap <- 1 / cap.probs - y0i / (1 - cap.probs) - tau * ((1 - cap.probs)^(tau - 1)) / (1 - QQQ) dl.drec <- yr1i / rec.probs - yr0i / (1 - rec.probs) deriv.ans <- c(w) * cbind(dl.dcap * dcapprobs.deta, dl.drec * drecprobs.deta) deriv.ans }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ wz <- matrix(0, n, M) # Diagonal EIM dA.dcapprobs <- -tau * ((1 - QQQ) * (tau-1) * (1 - cap.probs)^(tau-2) + tau * (1 - 
cap.probs)^(2*tau -2)) / (1 - QQQ)^2 prc <- matrix(cap.probs, n, tau) prr <- matrix(rec.probs, n, tau) dQ.dprc <- -QQQ / (1 - prc) QQQcummat <- exp(t( apply(log1p(-prc), 1, cumsum))) GGG <- (1 - QQQ - cap.probs * (1 + (tau-1) * QQQ)) / ( cap.probs * (1-cap.probs)^2) wz.pc <- GGG / (1 - QQQ) + 1 / cap.probs^2 + dA.dcapprobs wz[, iam(1, 1, M = M)] <- wz.pc * dcapprobs.deta^2 # Efficient wz.pr <- (tau - (1 - QQQ) / cap.probs) / ( rec.probs * (1 - rec.probs) * (1 - QQQ)) wz[, iam(2, 2, M = M)] <- wz.pr * drecprobs.deta^2 wz <- c(w) * wz wz }), list( .link = link, .earg = earg )))) } posbernoulli.tb <- function(link = "logit", parallel.t = FALSE ~ 1, parallel.b = FALSE ~ 0, drop.b = FALSE ~ 1, type.fitted = c("likelihood.cond", "mean.uncond"), imethod = 1, iprob = NULL, p.small = 1e-4, no.warning = FALSE, ridge.constant = 0.01, ridge.power = -4) { apply.parint.t <- FALSE apply.parint.b <- TRUE apply.parint.d <- FALSE # For 'drop.b' actually. link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") type.fitted <- match.arg(type.fitted, c("likelihood.cond", "mean.uncond"))[1] if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (!is.Numeric(ridge.constant) || ridge.constant < 0) warning("argument 'ridge.constant' should be non-negative") if (!is.Numeric(ridge.power) || ridge.power > 0) warning("argument 'ridge.power' should be non-positive") if (length(iprob)) if (!is.Numeric(iprob, positive = TRUE) || max(iprob) >= 1) stop("argument 'iprob' must have values in (0, 1)") if (!is.Numeric(p.small, positive = TRUE, length.arg = 1)) stop("bad input for argument 'p.small'") new("vglmff", blurb = c("Positive-Bernoulli (capture-recapture) model\n", "with temporal and behavioural effects (M_{tb}/M_{tbh})\n\n", "Links: ", namesof("pcapture.1", link, earg = earg, tag = FALSE), ", ..., ", namesof("pcapture.tau", link, earg = earg, tag = FALSE), ", ", namesof("precapture.2", link, earg = earg, tag = FALSE), ", ..., ", namesof("precapture.tau", link, earg = earg, tag = FALSE)), constraints = eval(substitute(expression({ constraints.orig <- constraints cm1.d <- cmk.d <- matrix(0, M, 1) # All 0s inside con.d <- cm.VGAM(matrix(1, M, 1), x = x, bool = .drop.b , constraints = constraints.orig, apply.int = .apply.parint.d , # FALSE, cm.default = cmk.d, cm.intercept.default = cm1.d) cm1.t <- cmk.t <- rbind(diag(tau), diag(tau)[-1, ]) # More readable con.t <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel.t , # Same as .parallel.b constraints = constraints.orig, apply.int = .apply.parint.t , # FALSE, cm.default = cmk.t, cm.intercept.default = cm1.t) cm1.b <- cmk.b <- rbind(matrix(0, tau, tau-1), diag(tau-1)) con.b <- cm.VGAM(matrix(c(rep_len(0, tau ), rep_len(1, tau-1)), M, 1), x = x, bool = .parallel.b , # Same as .parallel.b constraints = constraints.orig, apply.int = .apply.parint.b , # FALSE, cm.default = cmk.b, cm.intercept.default = cm1.b) con.use <- con.b for (klocal in seq_along(con.b)) { con.use[[klocal]] <- cbind(if (any(con.d[[klocal]] == 1)) NULL else con.b[[klocal]], con.t[[klocal]]) } constraints <- con.use }), list( .parallel.t = parallel.t, .parallel.b = parallel.b, .drop.b = drop.b, .apply.parint.b = apply.parint.b, .apply.parint.d = apply.parint.d, .apply.parint.t = apply.parint.t ))), infos = eval(substitute(function(...) 
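# --------------------------------------------------------------
# Note: the EIM of the M_tb model is often poorly conditioned,
# so the weight slot below adds a ridge to the working weights,
# wz.mean * ridge.constant * iter^ridge.power, which with the
# default ridge.power = -4 decays over the IRLS iterations.
# Usage sketch (commented; deermice-style data assumed):
#   fit.tb <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ 1,
#                  posbernoulli.tb, data = deermice)
# --------------------------------------------------------------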
{ list(M1 = 2, expected = TRUE, multipleResponses = TRUE, parameters.names = as.character(NA), ridge.constant = .ridge.constant , ridge.power = .ridge.power , drop.b = .drop.b, imethod = .imethod , type.fitted = .type.fitted , p.small = .p.small , no.warning = .no.warning , apply.parint.b = .apply.parint.b , apply.parint.t = .apply.parint.t , parallel.t = .parallel.t , parallel.b = .parallel.b ) }, list( .parallel.t = parallel.t, .parallel.b = parallel.b, .drop.b = drop.b, .type.fitted = type.fitted, .p.small = p.small, .no.warning = no.warning, .imethod = imethod, .ridge.constant = ridge.constant, .ridge.power = ridge.power, .apply.parint.b = apply.parint.b, .apply.parint.t = apply.parint.t ))), initialize = eval(substitute(expression({ M1 <- 2 # Not quite true if (NCOL(w) > 1) stop("variable 'w' should be a vector or one-column matrix") w <- c(w) # Make it a vector mustart.orig <- mustart y <- as.matrix(y) extra$tau <- tau <- ncol(y) extra$ncoly <- ncoly <- ncol(y) extra$orig.w <- w extra$ycounts <- y extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) M <- M1 * tau - 1 # recap.prob.1 is unused mustart <- (y + matrix(apply(y, 2, weighted.mean, w = w), n, tau, byrow = TRUE)) / 2 mustart[mustart < 0.01] <- 0.01 mustart[mustart > 0.99] <- 0.99 mustart <- cbind(mustart, mustart[, -1]) extra$p.small <- .p.small extra$no.warning <- .no.warning if (!all(y == 0 | y == 1)) stop("response must contain 0s and 1s only") tmp3 <- aux.posbernoulli.t(y) cap.hist1 <- extra$cap.hist1 <- tmp3$cap.hist1 dn2.cap <- paste("pcapture.", 1:ncoly, sep = "") dn2.recap <- paste("precapture.", 2:ncoly, sep = "") predictors.names <- c( namesof(dn2.cap, .link , earg = .earg, short = TRUE), namesof(dn2.recap, .link , earg = .earg, short = TRUE)) if (length(extra)) extra$w <- w else extra <- list(w = w) if (!length(etastart)) { mu.init <- if ( .imethod == 1) { if (length( .iprob )) matrix( .iprob , n, M, byrow = TRUE) else if (length(mustart.orig)) matrix(rep_len(mustart.orig, n * M), n, M) else mustart # Already n x M } else { matrix(runif(n * M), n, M) } etastart <- theta2eta(mu.init, .link , earg = .earg ) # n x M } mustart <- NULL }), list( .link = link, .earg = earg, .type.fitted = type.fitted, .p.small = p.small, .no.warning = no.warning, .iprob = iprob, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { tau <- extra$ncoly taup1 <- tau + 1 probs <- eta2theta(eta, .link , earg = .earg ) prc <- probs[, 1:tau] prr <- cbind(0, # == pr1.ignored probs[, taup1:ncol(probs)]) # 1st coln ignored logQQQ <- rowSums(log1p(-prc)) QQQ <- exp(logQQQ) AAA <- exp(log1p(-QQQ)) # 1 - QQQ type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning 'likelihood.cond'.") "likelihood.cond" } type.fitted <- match.arg(type.fitted, c("likelihood.cond", "mean.uncond"))[1] if ( type.fitted == "likelihood.cond") { probs.numer <- prr mat.index <- cbind(1:nrow(prc), extra$cap1) probs.numer[mat.index] <- prc[mat.index] probs.numer[extra$cap.hist1 == 0] <- prc[extra$cap.hist1 == 0] fv <- probs.numer / AAA } else { fv <- matrix(prc[, 1] / AAA, nrow(prc), ncol(prc)) fv[, 2] <- (prc[, 2] + prc[, 1] * (prr[, 2] - prc[, 2])) / AAA if (tau >= 3) { QQQcummat <- exp(t( apply(log1p(-prc), 1, cumsum))) for (jay in 3:tau) { sum1 <- prc[, 1] for (kay in 2:(jay-1)) sum1 <- sum1 + prc[, kay] * QQQcummat[, kay-1] fv[, jay] <- prc[, jay] * QQQcummat[, jay-1] + prr[, jay] * sum1 } fv[, 3:tau] <- fv[, 3:tau] / AAA } } label.cols.y(fv, colnames.y = extra$colnames.y, NOS = NOS) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ extra$w <- NULL # Kill it off misc$link <- rep_len( .link , M) names(misc$link) <- c(dn2.cap, dn2.recap) misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg misc$multiple.responses <- TRUE misc$iprob <- .iprob R <- tfit$qr$qr[1:ncol.X.vlm, 1:ncol.X.vlm, drop = FALSE] R[lower.tri(R)] <- 0 tmp6 <- N.hat.posbernoulli(eta = eta, link = .link , earg = .earg , R = R, w = w, X.vlm = X.vlm.save, Hlist = Hlist, # 20150428; bug fixed here extra = extra, model.type = "tb") extra$N.hat <- tmp6$N.hat extra$SE.N.hat <- tmp6$SE.N.hat misc$drop.b <- .drop.b misc$parallel.t <- .parallel.t misc$parallel.b <- .parallel.b misc$apply.parint.b <- .apply.parint.b misc$apply.parint.t <- .apply.parint.t misc$ridge.constant <- .ridge.constant misc$ridge.power <- .ridge.power }), list( .link = link, .earg = earg, .apply.parint.b = apply.parint.b, .apply.parint.t = apply.parint.t, .parallel.t = parallel.t, .parallel.b = parallel.b, .drop.b = drop.b, .ridge.constant = ridge.constant, .ridge.power = ridge.power, .iprob = iprob ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { tau <- extra$ncoly taup1 <- tau + 1 ycounts <- y use.orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 probs <- eta2theta(eta, .link , earg = .earg ) prc <- probs[, 1:tau] prr <- cbind(0, # pr1.ignored probs[, taup1:ncol(probs)]) # 1st coln ignored if (residuals) { stop("loglikelihood residuals not implemented yet") } else { probs.numer <- prr mat.index <- cbind(1:nrow(prc), extra$cap1) probs.numer[mat.index] <- prc[mat.index] probs.numer[extra$cap.hist1 == 0] <- prc[extra$cap.hist1 == 0] ll.elts <- c(use.orig.w) * dposbern(x = ycounts, # size = 1, # Bernoulli trials prob = probs.numer, prob0 = prc, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg ))), vfamily = c("posbernoulli.tb"), validparams = eval(substitute(function(eta, y, extra = NULL) { probs <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(probs)) && all(0 < probs & probs < 1) okay1 }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ tau <- extra$ncoly taup1 <- tau + 1 probs <- eta2theta(eta, .link , earg = .earg ) prc <- probs[, 1:tau] prr <- cbind(pr1.ignored = 0, probs[, taup1:ncol(probs)]) # 1st coln ignored logQQQ <- rowSums(log1p(-prc)) QQQ <- exp(logQQQ) dprobs.deta <- dtheta.deta(probs, .link , earg = .earg ) dQ.dprc <- -QQQ / (1 - prc) d2Q.dprc <- array(0, c(n, tau, tau)) for (jay in 1:(tau-1)) for (kay in (jay+1):tau) d2Q.dprc[, jay, kay] <- d2Q.dprc[, kay, jay] <- QQQ / ((1 - 
prc[, jay]) * (1 - prc[, kay])) dl.dpc <- dl.dpr <- matrix(0, n, tau) # 1st coln of dl.dpr is ignored for (jay in 1:tau) { dl.dpc[, jay] <- (1 - extra$cap.hist1[, jay]) * ( y[, jay] / prc[, jay] - (1 - y[, jay]) / (1 - prc[, jay])) + dQ.dprc[, jay] / (1 - QQQ) } for (jay in 2:tau) { dl.dpr[, jay] <- extra$cap.hist1[, jay] * ( y[, jay] / prr[, jay] - (1 - y[, jay]) / (1 - prr[, jay])) } deriv.ans <- c(w) * cbind(dl.dpc, dl.dpr[, -1]) * dprobs.deta deriv.ans }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ wz <- matrix(0, n, sum(M:(M - (tau - 1)))) QQQcummat <- exp(t( apply(log1p(-prc), 1, cumsum))) wz.pc <- (QQQcummat / prc - QQQ / (1 - QQQ)) / ((1 - QQQ) * (1 - prc)^2) wz[, 1:tau] <- wz.pc wz.pr <- as.matrix((1 - QQQcummat / (1 - prc)) / ( prr * (1 - prr) * (1 - QQQ))) wz[, taup1:M] <- wz.pr[, -1] for (jay in 1:(tau-1)) for (kay in (jay+1):tau) wz[, iam(jay, kay, M = M)] <- -(d2Q.dprc[, jay, kay] + dQ.dprc[, jay] * dQ.dprc[, kay] / (1 - QQQ)) / (1 - QQQ) cindex <- iam(NA, NA, M = M, both = TRUE) cindex$row.index <- cindex$row.index[1:ncol(wz)] cindex$col.index <- cindex$col.index[1:ncol(wz)] wz <- wz * dprobs.deta[, cindex$row.index] * dprobs.deta[, cindex$col.index] wz.mean <- mean(wz[, 1:tau]) wz.adjustment <- wz.mean * .ridge.constant * iter^( .ridge.power ) wz[, 1:tau] <- wz[, 1:tau] + wz.adjustment c(w) * wz }), list( .link = link, .earg = earg, .ridge.constant = ridge.constant, .ridge.power = ridge.power )))) } setClass("posbernoulli.tb", contains = "vglmff") setClass("posbernoulli.t", contains = "posbernoulli.tb") setClass("posbernoulli.b", contains = "posbernoulli.tb") setClass("posbinomial", contains = "posbernoulli.b") setMethod("summaryvglmS4VGAM", signature(VGAMff = "posbernoulli.tb"), function(object, VGAMff, ...) { object@post }) setMethod("showsummaryvglmS4VGAM", signature(VGAMff = "posbernoulli.tb"), function(object, VGAMff, ...) { if (length(object@extra$N.hat) == 1 && is.numeric(object@extra$N.hat)) { cat("\nEstimate of N: ", round(object@extra$N.hat, digits = 3), "\n") cat("\nStd. Error of N: ", round(object@extra$SE.N.hat, digits = 3), "\n") confint.N <- object@extra$N.hat + c(Lower = -1, Upper = 1) * qnorm(0.975) * object@extra$SE.N.hat cat("\nApproximate 95 percent confidence interval for N:\n") print(round(confint.N, digits = 2)) } }) setMethod("showsummaryvglmS4VGAM", signature(VGAMff = "posbernoulli.b"), function(object, VGAMff, ...) { callNextMethod(VGAMff = VGAMff, object = object, ...) }) setMethod("showsummaryvglmS4VGAM", signature(VGAMff = "posbernoulli.t"), function(object, VGAMff, ...) { callNextMethod(VGAMff = VGAMff, object = object, ...) }) VGAM/R/vglm.R0000644000176200001440000001700013135276760012305 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. vglm <- function(formula, family = stop("argument 'family' needs to be assigned"), data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = vglm.control(...), offset = NULL, method = "vglm.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = list(), form2 = NULL, qr.arg = TRUE, smart = TRUE, ...) 
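# --------------------------------------------------------------
# Minimal call sketch (commented; 'mydata' is hypothetical):
#   fit <- vglm(cbind(y1, y2, y3) ~ x2 + x3, multinomial,
#               data = mydata, trace = TRUE)
#   coef(fit, matrix = TRUE)
# As in lm()/glm(), method = "model.frame" makes the call return
# the model frame without fitting.
# --------------------------------------------------------------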
{ dataname <- as.character(substitute(data)) # "list" if no data= function.name <- "vglm" ocall <- match.call() if (smart) setup.smart("write") if (missing(data)) data <- environment(formula) mf <- match.call(expand.dots = FALSE) m <- match(c("formula", "data", "subset", "weights", "na.action", "etastart", "mustart", "offset"), names(mf), 0) mf <- mf[c(1, m)] mf$drop.unused.levels <- TRUE mf[[1]] <- as.name("model.frame") mf <- eval(mf, parent.frame()) switch(method, model.frame = return(mf), vglm.fit = 1, stop("invalid 'method': ", method)) mt <- attr(mf, "terms") xlev <- .getXlevels(mt, mf) y <- model.response(mf, "any") # model.extract(mf, "response") x <- if (!is.empty.model(mt)) model.matrix(mt, mf, contrasts) else matrix(, NROW(y), 0) attr(x, "assign") <- attrassigndefault(x, mt) if (!is.null(form2)) { if (!is.null(subset)) stop("argument 'subset' cannot be used when ", "argument 'form2' is used") retlist <- shadowvglm(formula = form2, family = family, data = data, na.action = na.action, control = vglm.control(...), method = method, model = model, x.arg = x.arg, y.arg = y.arg, contrasts = contrasts, constraints = constraints, extra = extra, qr.arg = qr.arg) Ym2 <- retlist$Ym2 Xm2 <- retlist$Xm2 if (length(Ym2)) { if (NROW(Ym2) != NROW(y)) stop("number of rows of 'y' and 'Ym2' are unequal") } if (length(Xm2)) { if (NROW(Xm2) != NROW(x)) stop("number of rows of 'x' and 'Xm2' are unequal") } } else { Xm2 <- Ym2 <- NULL } offset <- model.offset(mf) if (is.null(offset)) offset <- 0 # yyy ??? w <- model.weights(mf) if (!length(w)) { w <- rep_len(1, nrow(mf)) } else if (NCOL(w) == 1 && any(w < 0)) stop("negative weights not allowed") if (is.character(family)) family <- get(family) if (is.function(family)) family <- family() if (!inherits(family, "vglmff")) { stop("'family = ", family, "' is not a VGAM family function") } eval(vcontrol.expression) if (length(slot(family, "first"))) eval(slot(family, "first")) vglm.fitter <- get(method) fit <- vglm.fitter(x = x, y = y, w = w, offset = offset, Xm2 = Xm2, Ym2 = Ym2, etastart = etastart, mustart = mustart, coefstart = coefstart, family = family, control = control, constraints = constraints, extra = extra, qr.arg = qr.arg, Terms = mt, function.name = function.name, ...) 
fit$misc$dataname <- dataname if (smart) { fit$smart.prediction <- get.smart.prediction() wrapup.smart() } answer <- new(Class = "vglm", "assign" = attr(x, "assign"), "call" = ocall, "coefficients" = fit$coefficients, "constraints" = fit$constraints, "criterion" = fit$crit.list, "df.residual" = fit$df.residual, "df.total" = fit$df.total, "dispersion" = 1, "effects" = fit$effects, "family" = fit$family, "misc" = fit$misc, "model" = if (model) mf else data.frame(), "R" = fit$R, "rank" = fit$rank, "residuals" = as.matrix(fit$residuals), "ResSS" = fit$ResSS, "smart.prediction" = as.list(fit$smart.prediction), "terms" = list(terms = mt)) if (!smart) answer@smart.prediction <- list(smart.arg = FALSE) if (qr.arg) { class(fit$qr) <- "list" slot(answer, "qr") <- fit$qr } if (length(attr(x, "contrasts"))) slot(answer, "contrasts") <- attr(x, "contrasts") if (length(fit$fitted.values)) slot(answer, "fitted.values") <- as.matrix(fit$fitted.values) slot(answer, "na.action") <- if (length(aaa <- attr(mf, "na.action"))) list(aaa) else list() if (length(offset)) slot(answer, "offset") <- as.matrix(offset) if (length(fit$weights)) slot(answer, "weights") <- as.matrix(fit$weights) if (x.arg) slot(answer, "x") <- fit$x # The 'small' (lm) design matrix if (x.arg && length(Xm2)) slot(answer, "Xm2") <- Xm2 # The second (lm) design matrix if (y.arg && length(Ym2)) slot(answer, "Ym2") <- as.matrix(Ym2) # The second response if (!is.null(form2)) { slot(answer, "callXm2") <- retlist$call answer@misc$Terms2 <- retlist$Terms2 } answer@misc$formula <- formula answer@misc$form2 <- form2 if (length(xlev)) slot(answer, "xlevels") <- xlev if (y.arg) slot(answer, "y") <- as.matrix(fit$y) slot(answer, "control") <- fit$control slot(answer, "extra") <- if (length(fit$extra)) { if (is.list(fit$extra)) fit$extra else { warning("'extra' is not a list, therefore placing ", "'extra' into a list") list(fit$extra) } } else list() # R-1.5.0 slot(answer, "iter") <- fit$iter slot(answer, "post") <- fit$post fit$predictors <- as.matrix(fit$predictors) # Must be a matrix if (length(fit$misc$predictors.names) == ncol(fit$predictors)) dimnames(fit$predictors) <- list(dimnames(fit$predictors)[[1]], fit$misc$predictors.names) slot(answer, "predictors") <- fit$predictors if (length(fit$prior.weights)) slot(answer, "prior.weights") <- as.matrix(fit$prior.weights) answer } attr(vglm, "smart") <- TRUE shadowvglm <- function(formula, family, data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = vglm.control(...), offset = NULL, method = "vglm.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = list(), qr.arg = FALSE, ...) 
{ dataname <- as.character(substitute(data)) # "list" if no data= function.name <- "shadowvglm" ocall <- match.call() if (missing(data)) data <- environment(formula) mf <- match.call(expand.dots = FALSE) m <- match(c("formula", "data", "subset", "weights", "na.action", "etastart", "mustart", "offset"), names(mf), 0) mf <- mf[c(1, m)] mf$drop.unused.levels <- TRUE mf[[1]] <- as.name("model.frame") mf <- eval(mf, parent.frame()) switch(method, model.frame = return(mf), vglm.fit = 1, stop("invalid 'method': ", method)) mt <- attr(mf, "terms") x <- y <- NULL xlev <- .getXlevels(mt, mf) y <- model.response(mf, "any") # model.extract(mf, "response") x <- if (!is.empty.model(mt)) model.matrix(mt, mf, contrasts) else matrix(, NROW(y), 0) attr(x, "assign") <- attrassigndefault(x, mt) list(Xm2 = x, Ym2 = y, call = ocall, Terms2 = mt) } VGAM/R/mux.q0000644000176200001440000002224713135276757012227 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. mux34 <- function(xmat, cc, symmetric = FALSE) { if (!is.matrix(xmat)) xmat <- as.matrix(xmat) d <- dim(xmat) nnn <- d[1] RRR <- d[2] if (length(cc) == 1) cc <- matrix(cc, 1, 1) if (!is.matrix(cc)) stop("'cc' is not a matrix") c( .C("VGAM_C_mux34", as.double(xmat), as.double(cc), as.integer(nnn), as.integer(RRR), as.integer(symmetric), ans = as.double(rep_len(0.0, nnn)), NAOK = TRUE)$ans) } mux2 <- function(cc, xmat) { if (!is.matrix(xmat)) xmat <- as.matrix(xmat) d <- dim(xmat) n <- d[1] p <- d[2] if (is.matrix(cc)) cc <- array(cc, c(dim(cc), n)) d <- dim(cc) M <- d[1] if (d[2] != p || d[3] != n) stop("dimension size inconformable") ans <- rep_len(NA_real_, n*M) fred <- .C("mux2ccc", as.double(cc), as.double(t(xmat)), ans = as.double(ans), as.integer(p), as.integer(n), as.integer(M), NAOK = TRUE) matrix(fred$ans, n, M, byrow = TRUE) } mux22 <- function(cc, xmat, M, upper = FALSE, as.matrix = FALSE) { n <- ncol(cc) index <- iam(NA, NA, M, both = TRUE, diag = TRUE) dimm.value <- nrow(cc) # Usually M or M(M+1)/2 ans <- rep_len(NA_real_, n*M) fred <- .C("mux22ccc", as.double(cc), as.double(t(xmat)), ans = as.double(ans), as.integer(dimm.value), as.integer(index$row), as.integer(index$col), as.integer(n), as.integer(M), wk = double(M*M), as.integer(as.numeric(upper)), NAOK = TRUE) if (!as.matrix) fred$ans else { dim(fred$ans) <- c(M, n) t(fred$ans) } } mux5 <- function(cc, x, M, matrix.arg = FALSE) { dimx <- dim(x) dimcc <- dim(cc) r <- dimx[2] if (matrix.arg) { n <- dimcc[1] neltscci <- ncol(cc) cc <- t(cc) } else { n <- dimcc[3] if (dimcc[1] != dimcc[2] || dimx[1] != dimcc[1] || (length(dimx) == 3 && dimx[3] != dimcc[3])) stop('input nonconformable') neltscci <- M*(M+1)/2 } if (is.matrix(x)) x <- array(x,c(M, r, n)) index.M <- iam(NA, NA, M, both = TRUE, diag = TRUE) index.r <- iam(NA, NA, r, both = TRUE, diag = TRUE) size <- if (matrix.arg) dimm(r)*n else r*r*n fred <- .C("mux5ccc", as.double(cc), as.double(x), ans = double(size), as.integer(M), as.integer(n), as.integer(r), as.integer(neltscci), as.integer(dimm(r)), as.integer(as.numeric(matrix.arg)), double(M*M), double(r*r), as.integer(index.M$row), as.integer(index.M$col), as.integer(index.r$row), as.integer(index.r$col), ok3 = as.integer(1), NAOK = TRUE) if (fred$ok3 == 0) stop("can only handle matrix.arg == 1") if (matrix.arg) { ans <- fred$ans dim(ans) <- c(dimm(r), n) t(ans) } else { array(fred$ans, c(r, r, n)) } } mux55 <- function(evects, evals, M) { d <- dim(evects) n <- ncol(evals) if (d[1] != M || d[2] != M || d[3] != n || 
nrow(evals)!= M || ncol(evals) != n) stop("input nonconformable") MMp1d2 <- M*(M+1)/2 # The answer is a full-matrix index <- iam(NA, NA, M, both = TRUE, diag = TRUE) fred <- .C("mux55ccc", as.double(evects), as.double(evals), ans = double(MMp1d2 * n), double(M*M), double(M*M), as.integer(index$row), as.integer(index$col), as.integer(M), as.integer(n), NAOK = TRUE) dim(fred$ans) <- c(MMp1d2, n) fred$ans } mux7 <- function(cc, x) { dimx <- dim(x) dimcc <- dim(cc) if (dimx[1]!= dimcc[2] || (length(dimx) == 3 && dimx[3]!= dimcc[3])) stop('input nonconformable') M <- dimcc[1] qq <- dimcc[2] n <- dimcc[3] r <- dimx[2] if (is.matrix(x)) x <- array(x, c(qq, r, n)) ans <- array(NA, c(M, r, n)) fred <- .C("mux7ccc", as.double(cc), as.double(x), ans = as.double(ans), as.integer(M), as.integer(qq), as.integer(n), as.integer(r), NAOK = TRUE) array(fred$ans, c(M, r, n)) } mux11 <- function(cc, xmat) { dcc <- dim(cc) d <- dim(xmat) M <- dcc[1] R <- d[2] n <- dcc[3] if (M != dcc[2] || d[1] != n*M) stop("input inconformable") Xmat <- array(c(t(xmat)), c(R, M, n)) Xmat <- aperm(Xmat, c(2, 1, 3)) # Xmat becomes M x R x n mat <- mux7(cc, Xmat) # mat is M x R x n mat <- aperm(mat, c(2, 1, 3)) # mat becomes R x M x n mat <- matrix(c(mat), n*M, R, byrow = TRUE) mat } mux111 <- function(cc, xmat, M, upper = TRUE) { if (!is.matrix(xmat)) xmat <- as.matrix(xmat) if (!is.matrix(cc)) cc <- t(as.matrix(cc)) R <- ncol(xmat) n <- nrow(xmat) / M index <- iam(NA, NA, M, both = TRUE, diag = TRUE) dimm.value <- nrow(cc) # Usually M or M(M+1)/2 fred <- .C("mux111ccc", as.double(cc), b = as.double(t(xmat)), as.integer(M), as.integer(R), as.integer(n), wkcc = double(M * M), wk2 = double(M * R), as.integer(index$row), as.integer(index$col), as.integer(dimm.value), as.integer(upper), NAOK = TRUE) ans <- fred$b dim(ans) <- c(R, n * M) d <- dimnames(xmat) dimnames(ans) <- list(d[[2]], d[[1]]) t(ans) } mux15 <- function(cc, xmat) { n <- nrow(xmat) M <- ncol(xmat) if (nrow(cc) != M || ncol(cc) != M) stop("input inconformable") if (max(abs(t(cc)-cc))>0.000001) stop("argument 'cc' is not symmetric") ans <- rep_len(NA_real_, n*M*M) fred <- .C("mux15ccc", as.double(cc), as.double(t(xmat)), ans = as.double(ans), as.integer(M), as.integer(n), NAOK = TRUE) array(fred$ans, c(M, M, n)) } vforsub <- function(cc, b, M, n) { index <- iam(NA, NA, M, both = TRUE, diag = TRUE) dimm.value <- nrow(cc) # M or M(M+1)/2 fred <- .C("vforsubccc", as.double(cc), b = as.double(t(b)), as.integer(M), as.integer(n), wk = double(M*M), as.integer(index$row), as.integer(index$col), as.integer(dimm.value), NAOK = TRUE) dim(fred$b) <- c(M, n) fred$b } vbacksub <- function(cc, b, M, n) { index <- iam(NA, NA, M, both = TRUE, diag = TRUE) dimm.value <- nrow(cc) if (nrow(b) != M || ncol(b) != n) stop("dimension size inconformable") fred <- .C("vbacksubccc", as.double(cc), b = as.double(b), as.integer(M), as.integer(n), wk = double(M*M), as.integer(index$row), as.integer(index$col), as.integer(dimm.value), NAOK = TRUE) if (M == 1) { fred$b } else { dim(fred$b) <- c(M, n) t(fred$b) } } vchol <- function(cc, M, n, silent = FALSE, callno = 0) { index <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) cc <- t(cc) MM <- nrow(cc) # cc is big enough to hold its Cholesky decom. 
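# Note: each column of 'cc' stores one M x M working-weight
# matrix in VGAM's matrix-band format, diagonal elements first;
# iam() maps a (row, col) pair to its position in that storage.
# For example (commented; diag = TRUE assumed as above):
#   iam(1, 1, M = 3)                   # 1, the first diagonal
#   iam(NA, NA, M = 3, both = TRUE)    # all row/col index pairs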
fred <- .C("vcholccc", cc = as.double(cc), as.integer(M), as.integer(n), ok = integer(n), wk = double(M*M), as.integer(index$row), as.integer(index$col), as.integer(MM), NAOK = TRUE) failed <- (fred$ok != 1) if ((correction.needed <- any(failed))) { index <- (1:n)[failed] if (!silent) { if (length(index) < 11) warning("weight matri", ifelse(length(index) > 1, "ces ","x "), paste(index, collapse = ", "), " not positive-definite") } } ans <- fred$cc dim(ans) <- c(MM, n) if (correction.needed) { temp <- cc[, index, drop = FALSE] tmp777 <- vchol.greenstadt(temp, M = M, silent = silent, callno = callno + 1) if (length(index) == n) { ans <- tmp777[1:nrow(ans), , drop = FALSE] } else { ans[, index] <- tmp777 # restored 20031016 } } dim(ans) <- c(MM, n) # Make sure ans } vchol.greenstadt <- function(cc, M, silent = FALSE, callno = 0) { MM <- dim(cc)[1] n <- dim(cc)[2] if (!silent) cat(paste("Applying Greenstadt modification to ", n, " matri", ifelse(n > 1, "ces", "x"), "\n", sep = "")) temp <- veigen(cc, M = M) # , mat = TRUE) dim(temp$vectors) <- c(M, M, n) # Make sure (when M = 1) for mux5 dim(temp$values) <- c(M, n) # Make sure (when M = 1) for mux5 is.neg <- (temp$values < .Machine$double.eps) is.pos <- (temp$values > .Machine$double.eps) zilch <- (!is.pos & !is.neg) temp$values <- abs(temp$values) temp.small.value <- quantile(temp$values[!zilch], prob = 0.15) if (callno > 2) { temp.small.value <- abs(temp.small.value) * 1.50^callno small.value <- temp.small.value temp$values[zilch] <- small.value } if (callno > 9) { warning("taking drastic action; setting all wz to ", "scaled versions of the order-M identity matrix") cc2mean <- abs(colMeans(cc[1:M, , drop = FALSE])) temp$values <- matrix(cc2mean, M, n, byrow = TRUE) temp$vectors <- array(c(diag(M)), c(M, M, n)) } temp3 <- mux55(temp$vectors, temp$values, M = M) #, matrix.arg = TRUE) ans <- vchol(t(temp3), M = M, n = n, silent = silent, callno = callno + 1) #, matrix.arg = TRUE) if (nrow(ans) == MM) ans else ans[1:MM, , drop = FALSE] } VGAM/R/family.rcqo.R0000644000176200001440000003461213135276757013602 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
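# --------------------------------------------------------------
# Usage sketch (commented; sizes and seed are arbitrary):
# simulate rank-1 constrained quadratic ordination data with
# Poisson counts, then inspect the generated variables.
#   mydata <- rcqo(n = 100, p = 5, S = 10, family = "poisson",
#                  seed = 123)
#   names(mydata)   # Expect x2,...,x5 and y1,...,y10
# --------------------------------------------------------------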
rcqo <- function(n, p, S, Rank = 1, family = c("poisson", "negbinomial", "binomial-poisson", "Binomial-negbinomial", "ordinal-poisson", "Ordinal-negbinomial", "gamma2"), eq.maximums = FALSE, eq.tolerances = TRUE, es.optimums = FALSE, lo.abundance = if (eq.maximums) hi.abundance else 10, hi.abundance = 100, sd.latvar = head(1.5/2^(0:3), Rank), sd.optimums = ifelse(es.optimums, 1.5/Rank, 1) * ifelse(scale.latvar, sd.latvar, 1), sd.tolerances = 0.25, Kvector = 1, Shape = 1, sqrt.arg = FALSE, log.arg = FALSE, rhox = 0.5, breaks = 4, # ignored unless family = "ordinal" seed = NULL, optimums1.arg = NULL, Crow1positive = TRUE, xmat = NULL, # Can be input scale.latvar = TRUE) { family <- match.arg(family, c("poisson", "negbinomial", "binomial-poisson", "Binomial-negbinomial", "ordinal-poisson", "Ordinal-negbinomial", "gamma2"))[1] if (!is.Numeric(n, integer.valued = TRUE, positive = TRUE, length.arg = 1)) stop("bad input for argument 'n'") if (!is.Numeric(p, integer.valued = TRUE, positive = TRUE, length.arg = 1) || p < 1 + Rank) stop("bad input for argument 'p'") if (!is.Numeric(S, integer.valued = TRUE, positive = TRUE, length.arg = 1)) stop("bad input for argument 'S'") if (!is.Numeric(Rank, integer.valued = TRUE, positive = TRUE, length.arg = 1) || Rank > 4) stop("bad input for argument 'Rank'") if (!is.Numeric(Kvector, positive = TRUE)) stop("bad input for argument 'Kvector'") if (!is.Numeric(rhox) || abs(rhox) >= 1) stop("bad input for argument 'rhox'") if (length(seed) && !is.Numeric(seed, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'seed'") if (!is.logical(eq.tolerances) || length(eq.tolerances) > 1) stop("bad input for argument 'eq.tolerances'") if (!is.logical(sqrt.arg) || length(sqrt.arg) > 1) stop("bad input for argument 'sqrt.arg'") if (family != "negbinomial" && sqrt.arg) warning("argument 'sqrt.arg' is used only with family='negbinomial'") if (!eq.tolerances && !is.Numeric(sd.tolerances, positive = TRUE)) stop("bad input for argument 'sd.tolerances'") if (!is.Numeric(lo.abundance, positive = TRUE)) stop("bad input for argument 'lo.abundance'") if (!is.Numeric(sd.latvar, positive = TRUE)) stop("bad input for argument 'sd.latvar'") if (!is.Numeric(sd.optimums, positive = TRUE)) stop("bad input for argument 'sd.optimums'") if (eq.maximums && lo.abundance != hi.abundance) stop("arguments 'lo.abundance' and 'hi.abundance' must ", "be equal when 'eq.maximums = TRUE'") if (any(lo.abundance > hi.abundance)) stop("lo.abundance > hi.abundance is not allowed") if (!is.logical(Crow1positive)) { stop("bad input for argument 'Crow1positive'") } else { Crow1positive <- rep_len(Crow1positive, Rank) } Shape <- rep_len(Shape, S) sd.latvar <- rep_len(sd.latvar, Rank) sd.optimums <- rep_len(sd.optimums, Rank) sd.tolerances <- rep_len(sd.tolerances, Rank) AA <- sd.optimums / 3^0.5 if (Rank > 1 && any(diff(sd.latvar) > 0)) stop("argument 'sd.latvar' must be a vector with non-increasing values") if (FALSE) change.seed.expression <- expression({ if (!exists(".Random.seed", envir = .GlobalEnv, inherits = FALSE)) { runif(1) # initialize the RNG if necessary } if (is.null(seed)) { RNGstate <- get(".Random.seed", envir = .GlobalEnv) } else { R.seed <- get(".Random.seed", envir = .GlobalEnv) set.seed(seed) RNGstate <- structure(seed, kind = as.list(RNGkind())) on.exit(assign(".Random.seed", R.seed, envir = .GlobalEnv)) } }) change.seed.expression <- expression({ if (length(seed)) set.seed(seed) }) eval(change.seed.expression) V <- matrix(rhox, p-1, p-1) diag(V) <- 1 L <- chol(V) if
(length(xmat)) { xnames <- colnames(xmat) } else { eval(change.seed.expression) xmat <- matrix(rnorm(n*(p-1)), n, p-1) %*% L xmat <- scale(xmat, center = TRUE) xnames <- paste("x", 2:p, sep = "") dimnames(xmat) <- list(as.character(1:n), xnames) } eval(change.seed.expression) Ccoefs <- matrix(rnorm((p-1)*Rank), p-1, Rank) latvarmat <- cbind(xmat %*% Ccoefs) if (Rank > 1) { Rmat <- chol(var(latvarmat)) iRmat <- solve(Rmat) latvarmat <- latvarmat %*% iRmat # var(latvarmat) == diag(Rank) Ccoefs <- Ccoefs %*% iRmat } for (r in 1:Rank) if (( Crow1positive[r] && Ccoefs[1, r] < 0) || (!Crow1positive[r] && Ccoefs[1, r] > 0)) { Ccoefs[ , r] <- -Ccoefs[ , r] latvarmat[ , r] <- -latvarmat[ , r] } if (scale.latvar) { for (r in 1:Rank) { sd.latvarr <- sd(latvarmat[, r]) latvarmat[, r] <- latvarmat[, r] * sd.latvar[r] / sd.latvarr Ccoefs[, r] <- Ccoefs[, r] * sd.latvar[r] / sd.latvarr } } else { sd.latvarr <- NULL for (r in 1:Rank) { sd.latvarr <- c(sd.latvarr, sd(latvarmat[, r])) } } if (es.optimums) { if (!is.Numeric(S^(1/Rank), integer.valued = TRUE) || S^(1/Rank) < 2) stop("S^(1/Rank) must be an integer greater or equal to 2") if (Rank == 1) { optimums <- matrix(NA_real_, S, Rank) for (r in 1:Rank) { optimums[, r] <- seq(-AA, AA, len = S^(1/Rank)) } } else if (Rank == 2) { optimums <- expand.grid(latvar1 = seq(-AA[1], AA[1], len = S^(1/Rank)), latvar2 = seq(-AA[2], AA[2], len = S^(1/Rank))) } else if (Rank == 3) { optimums <- expand.grid(latvar1 = seq(-AA[1], AA[1], len = S^(1/Rank)), latvar2 = seq(-AA[2], AA[2], len = S^(1/Rank)), latvar3 = seq(-AA[3], AA[3], len = S^(1/Rank))) } else { optimums <- expand.grid(latvar1 = seq(-AA[1], AA[1], len = S^(1/Rank)), latvar2 = seq(-AA[2], AA[2], len = S^(1/Rank)), latvar3 = seq(-AA[3], AA[3], len = S^(1/Rank)), latvar4 = seq(-AA[4], AA[4], len = S^(1/Rank))) } if (Rank > 1) optimums <- matrix(unlist(optimums), S, Rank) # Make sure its a matrix } else { optimums <- matrix(1, S, Rank) eval(change.seed.expression) for (r in 1:Rank) { optimums[, r] <- rnorm(n = S, sd = sd.optimums[r]) } } for (r in 1:Rank) optimums[, r] <- optimums[, r] * sd.optimums[r] / sd(optimums[, r]) if (length(optimums1.arg) && Rank == 1) for (r in 1:Rank) optimums[, r] <- optimums1.arg ynames <- paste("y", 1:S, sep = "") Kvector <- rep_len(Kvector, S) names(Kvector) <- ynames latvarnames <- if (Rank == 1) "latvar" else paste("latvar", 1:Rank, sep = "") Tols <- if (eq.tolerances) { matrix(1, S, Rank) } else { eval(change.seed.expression) temp <- matrix(1, S, Rank) if (S > 1) for (r in 1:Rank) { temp[-1, r] <- rnorm(S-1, mean = 1, sd = sd.tolerances[r]) if (any(temp[, r] <= 0)) stop("negative tolerances!") temp[, r] <- temp[, r]^2 # Tolerance matrix = var-cov matrix) } temp } dimnames(Tols) <- list(ynames, latvarnames) dimnames(Ccoefs) <- list(xnames, latvarnames) dimnames(optimums) <- list(ynames, latvarnames) loeta <- log(lo.abundance) # May be a vector hieta <- log(hi.abundance) eval(change.seed.expression) log.maximums <- runif(S, min = loeta, max = hieta) names(log.maximums) <- ynames etamat <- matrix(log.maximums, n, S, byrow = TRUE) for (jay in 1:S) { optmat <- matrix(optimums[jay, ], nrow = n, ncol = Rank, byrow = TRUE) tolmat <- matrix( Tols[jay, ], nrow = n, ncol = Rank, byrow = TRUE) temp <- cbind((latvarmat - optmat) / tolmat) for (r in 1:Rank) etamat[, jay] <- etamat[, jay] - 0.5 * (latvarmat[, r] - optmat[jay, r]) * temp[, r] } rootdist <- switch(family, "poisson" = 1, "binomial-poisson" = 1, "ordinal-poisson" = 1, "negbinomial" = 2, "Binomial-negbinomial" = 2, 
"Ordinal-negbinomial" = 2, "gamma2" = 3) eval(change.seed.expression) if (rootdist == 1) { ymat <- matrix(rpois(n * S, lambda = exp(etamat)), n, S) } else if (rootdist == 2) { mKvector <- matrix(Kvector, n, S, byrow = TRUE) ymat <- matrix(rnbinom(n = n * S, mu = exp(etamat), size = mKvector), n, S) if (sqrt.arg) ymat <- ymat^0.5 } else if (rootdist == 3) { Shape <- matrix(Shape, n, S, byrow = TRUE) ymat <- matrix(rgamma(n * S, shape = Shape, scale = exp(etamat) / Shape), n, S) if (log.arg) ymat <- log(ymat) } else { stop("argument 'rootdist' unmatched") } tmp1 <- NULL if (any(family == c("ordinal-poisson", "Ordinal-negbinomial"))) { tmp1 <- cut(c(ymat), breaks = breaks, labels = NULL) ymat <- cut(c(ymat), breaks = breaks, labels = FALSE) dim(ymat) <- c(n,S) } if (any(family == c("binomial-poisson", "Binomial-negbinomial"))) ymat <- 0 + (ymat > 0) myform <- as.formula(paste(paste("cbind(", paste(paste("y", 1:S, sep = ""), collapse = ", "), ") ~ ", sep = ""), paste(paste("x", 2:p, sep = ""), collapse = "+"), sep = "")) dimnames(ymat) <- list(as.character(1:n), ynames) ans <- data.frame(xmat, ymat) attr(ans, "concoefficients") <- Ccoefs attr(ans, "Crow1positive") <- Crow1positive attr(ans, "family") <- family attr(ans, "formula") <- myform # Useful for running cqo() on the data attr(ans, "Rank") <- Rank attr(ans, "family") <- family attr(ans, "Kvector") <- Kvector attr(ans, "log.maximums") <- log.maximums attr(ans, "lo.abundance") <- lo.abundance attr(ans, "hi.abundance") <- hi.abundance attr(ans, "optimums") <- optimums attr(ans, "log.arg") <- log.arg attr(ans, "latvar") <- latvarmat attr(ans, "eta") <- etamat attr(ans, "eq.tolerances") <- eq.tolerances attr(ans, "eq.maximums") <- eq.maximums || all(lo.abundance == hi.abundance) attr(ans, "es.optimums") <- es.optimums attr(ans, "seed") <- seed # RNGstate attr(ans, "sd.tolerances") <- sd.tolerances attr(ans, "sd.latvar") <- if (scale.latvar) sd.latvar else sd.latvarr attr(ans, "sd.optimums") <- sd.optimums attr(ans, "Shape") <- Shape attr(ans, "sqrt") <- sqrt.arg attr(ans, "tolerances") <- Tols^0.5 # Like a standard deviation attr(ans, "breaks") <- if (length(tmp1)) attributes(tmp1) else breaks ans } if (FALSE) dcqo <- function(x, p, S, family = c("poisson", "binomial", "negbinomial", "ordinal"), Rank = 1, eq.tolerances = TRUE, eq.maximums = FALSE, EquallySpacedOptima = FALSE, lo.abundance = if (eq.maximums) 100 else 10, hi.abundance = 100, sd.tolerances = 1, sd.optimums = 1, nlevels = 4, # ignored unless family = "ordinal" seed = NULL) { warning("20060612; needs a lot of work based on rcqo()") if (mode(family) != "character" && mode(family) != "name") family <- as.character(substitute(family)) family <- match.arg(family, c("poisson", "binomial", "negbinomial", "ordinal"))[1] if (!is.Numeric(p, integer.valued = TRUE, positive = TRUE, length.arg = 1) || p < 2) stop("bad input for argument 'p'") if (!is.Numeric(S, integer.valued = TRUE, positive = TRUE, length.arg = 1)) stop("bad input for argument 'S'") if (!is.Numeric(Rank, integer.valued = TRUE, positive = TRUE, length.arg = 1)) stop("bad input for argument 'Rank'") if (length(seed) && !is.Numeric(seed, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'seed'") if (!is.logical(eq.tolerances) || length(eq.tolerances)>1) stop("bad input for argument 'eq.tolerances)'") if (eq.maximums && lo.abundance != hi.abundance) stop("'lo.abundance' and 'hi.abundance' must ", "be equal when 'eq.tolerances = TRUE'") if (length(seed)) set.seed(seed) xmat <- matrix(rnorm(n*(p-1)), n, p-1, 
dimnames = list(as.character(1:n), paste("x", 2:p, sep = ""))) Ccoefs <- matrix(rnorm((p-1)*Rank), p-1, Rank) latvarmat <- xmat %*% Ccoefs optimums <- matrix(rnorm(Rank*S, sd = sd.optimums), S, Rank) Tols <- if (eq.tolerances) matrix(1, S, Rank) else matrix(rnorm(Rank*S, mean = 1, sd = 1), S, Rank) loeta <- log(lo.abundance) hieta <- log(hi.abundance) log.maximums <- runif(S, min = loeta, max = hieta) etamat <- matrix(log.maximums, n, S, byrow = TRUE) for (jay in 1:S) { optmat <- matrix(optimums[jay, ], n, Rank, byrow = TRUE) tolmat <- matrix( Tols[jay, ], n, Rank, byrow = TRUE) temp <- cbind((latvarmat - optmat) * tolmat) for (r in 1:Rank) etamat[, jay] <- etamat[, jay] - 0.5 * temp[, r] * (latvarmat[, r] - optmat[jay, r]) } ymat <- if (family == "negbinomial") { } else { matrix(rpois(n * S, lambda = exp(etamat)), n, S) } if (family == "binomial") ymat <- 0 + (ymat > 0) dimnames(ymat) <- list(as.character(1:n), paste("y", 1:S, sep = "")) ans <- data.frame(xmat, ymat) attr(ans, "concoefficients") <- Ccoefs attr(ans, "family") <- family ans } getInitVals <- function(gvals, llfun, ...) { LLFUN <- match.fun(llfun) ff <- function(myx, ...) LLFUN(myx, ...) objFun <- gvals for (ii in seq_along(gvals)) objFun[ii] <- ff(myx = gvals[ii], ...) try.this <- gvals[objFun == max(objFun)] # Usually scalar, maybe vector try.this } campp <- function(q, size, prob, mu) { if (!missing(mu)) { if (!missing(prob)) stop("arguments 'prob' and 'mu' both specified") prob <- size/(size + mu) } K <- (1/3) * ((9*q+8) / (q+1) - ((9*size-1)/size) * (mu/(q+1))^(1/3)) / sqrt( (1/size) * (mu/(q+1))^(2/3) + 1 / (q+1)) # Note the +, not - pnorm(K) } VGAM/R/family.rcim.R0000644000176200001440000010203513135276757013563 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. rcim <- function(y, family = poissonff, Rank = 0, M1 = NULL, weights = NULL, which.linpred = 1, Index.corner = ifelse(is.null(str0), 0, max(str0)) + 1:Rank, rprefix = "Row.", cprefix = "Col.", iprefix = "X2.", offset = 0, str0 = if (Rank) 1 else NULL, # Ignored if Rank == 0 summary.arg = FALSE, h.step = 0.0001, rbaseline = 1, cbaseline = 1, has.intercept = TRUE, M = NULL, rindex = 2:nrow(y), # Row index cindex = 2:ncol(y), # Col index iindex = 2:nrow(y), # Interaction index ...) 
{ rindex <- unique(sort(rindex)) cindex <- unique(sort(cindex)) iindex <- unique(sort(iindex)) if (Rank == 0 && !has.intercept) warning("probably 'has.intercept == TRUE' is better for a rank-0 model") ncoly <- ncol(y) noroweffects <- FALSE nocoleffects <- FALSE if (!is.Numeric(which.linpred, length.arg = 1, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'which.linpred'") if (!is.character(rprefix)) stop("argument 'rprefix' must be character") if (!is.character(cprefix)) stop("argument 'cprefix' must be character") if (is.character(family)) family <- get(family) if (is.function(family)) family <- ((family)()) if (!inherits(family, "vglmff")) { stop("'family = ", family, "' is not a VGAM family function") } efamily <- family if (!is.Numeric(M1)) { iefamily <- efamily@infos if (is.function(iefamily)) M1 <- (iefamily())$M1 if (is.Numeric(M1)) M1 <- abs(M1) } if (!is.Numeric(M1)) { if (!is.Numeric(M)) warning("cannot determine the value of 'M1'.", "Assuming the value one.") M1 <- 1 } M <- if (is.null(M)) M1 * ncol(y) else M special <- (M > 1) && (M1 == 1) object.save <- y y <- if (is(y, "rrvglm")) { depvar(object.save) } else { as(as.matrix(y), "matrix") } if (length(dim(y)) != 2 || nrow(y) < 3 || ncol(y) < 3) stop("argument 'y' must be a matrix with >= 3 rows & columns, or ", "a rrvglm() object") .rcim.df <- if (!noroweffects) data.frame("Row.2" = I.col(2, nrow(y))) else # See below if (!nocoleffects) data.frame("Col.2" = I.col(2, nrow(y))) else # See below stop("at least one of 'noroweffects' and 'nocoleffects' must be FALSE") min.row.val <- rindex[1] # == min(rindex) since it is sorted # Usually 2 min.col.val <- cindex[1] # == min(cindex) since it is sorted # Usually 2 if (!noroweffects) { colnames( .rcim.df ) <- paste(rprefix, as.character(min.row.val), # "2", sep = "") # Overwrite "Row.2" } else if (!nocoleffects) { colnames( .rcim.df ) <- paste(cprefix, as.character(min.col.val), # "2", sep = "") # Overwrite "Col.2" } yn1 <- if (length(dimnames(y)[[1]])) dimnames(y)[[1]] else paste(iprefix, 1:nrow(y), sep = "") warn.save <- options()$warn options(warn = -3) # Suppress the warnings (hopefully, temporarily) if (any(!is.na(as.numeric(substring(yn1, 1, 1))))) yn1 <- paste(iprefix, 1:nrow(y), sep = "") options(warn = warn.save) nrprefix <- as.name(rprefix) ncprefix <- as.name(cprefix) assign(rprefix, factor(1:nrow(y))) modmat.row <- substitute( model.matrix( ~ .rprefix ), list( .rprefix = nrprefix )) LLL <- ifelse(special, M, ncol(y)) assign(cprefix, factor(1:LLL)) modmat.col <- substitute( model.matrix( ~ .cprefix ), list( .cprefix = ncprefix )) modmat.row <- eval( modmat.row ) modmat.col <- eval( modmat.col ) Hlist <- if (has.intercept) { list("(Intercept)" = matrix(1, LLL, 1)) } else { temp <- list("Row.2" = matrix(1, LLL, 1)) # Overwrite this name: names(temp) <- paste(rprefix, as.character(min.row.val), sep = "") temp } if (!noroweffects) for (ii in rindex) { Hlist[[paste(rprefix, ii, sep = "")]] <- matrix(1, LLL, 1) .rcim.df[[paste(rprefix, ii, sep = "")]] <- modmat.row[, ii] } if (!nocoleffects) for (ii in cindex) { temp6.mat <- modmat.col[, ii, drop = FALSE] Hlist[[paste(cprefix, ii, sep = "")]] <- temp6.mat .rcim.df[[paste(cprefix, ii, sep = "")]] <- rep_len(1, nrow(y)) } if (Rank > 0) { for (ii in iindex) { Hlist[[yn1[ii]]] <- diag(LLL) .rcim.df[[yn1[ii]]] <- I.col(ii, nrow(y)) } } dimnames(.rcim.df) <- list(if (length(dimnames(y)[[1]])) dimnames(y)[[1]] else as.character(iindex), dimnames( .rcim.df )[[2]]) str1 <- paste(if (has.intercept) "~ 1 + " else "~ -1 + 
", rprefix, as.character(min.row.val), # "2", sep = "") if (nrow(y) > 2) str1 <- paste(str1, paste(rprefix, rindex[-1], sep = "", collapse = " + "), sep = " + ") str1 <- paste(str1, paste(cprefix, cindex, sep = "", collapse = " + "), sep = " + ") str2 <- paste("y", str1) if (Rank > 0) { str2 <- paste(str2, paste(yn1[iindex], sep = "", collapse = " + "), sep = " + ") } controlfun <- if (Rank == 0) vglm.control else rrvglm.control # orig. mycontrol <- controlfun(Rank = Rank, Index.corner = Index.corner, str0 = str0, ...) if (mycontrol$trace) { } if ((mindim <- min(nrow(y), ncol(y))) <= Rank) { stop("argument 'Rank' is too high. Must be a value from 0 ", "to ", mindim - 1, " inclusive") } if (Rank > 0) mycontrol$noRRR <- as.formula(str1) # Overwrite this assign(".rcim.df", .rcim.df, envir = VGAM::VGAMenv) warn.save <- options()$warn options(warn = -3) # Suppress the warnings (hopefully, temporarily) if (mycontrol$trace) { } if (M1 > 1) { orig.Hlist <- Hlist kmat1 <- matrix(0, nrow = M1, ncol = 1) kmat1[which.linpred, 1] <- 1 kmat0 <- (diag(M1))[, -which.linpred, drop = FALSE] for (ii in seq_along(Hlist)) { Hlist[[ii]] <- kronecker(Hlist[[ii]], kmat1) } if (has.intercept) Hlist[["(Intercept)"]] <- cbind(Hlist[["(Intercept)"]], kronecker(matrix(1, ncoly, 1), kmat0)) if (mycontrol$trace) { } } offset.matrix <- matrix(offset, nrow = nrow(y), ncol = M) # byrow = TRUE answer <- if (Rank > 0) { if (is(object.save, "rrvglm")) object.save else rrvglm(as.formula(str2), family = family, constraints = Hlist, offset = offset.matrix, weights = if (length(weights)) weights else rep_len(1, nrow(y)), ..., control = mycontrol, data = .rcim.df ) } else { if (is(object.save, "vglm")) object.save else vglm(as.formula(str2), family = family, constraints = Hlist, offset = offset.matrix, weights = if (length(weights)) weights else rep_len(1, nrow(y)), ..., control = mycontrol, data = .rcim.df ) } options(warn = warn.save) # Restore warnings back to prior state answer <- if (summary.arg) { if (Rank > 0) { summary.rrvglm(as(answer, "rrvglm"), h.step = h.step) } else { summary(answer) } } else { as(answer, ifelse(Rank > 0, "rcim", "rcim0")) } answer@misc$rbaseline <- rbaseline answer@misc$cbaseline <- cbaseline answer@misc$which.linpred <- which.linpred answer@misc$offset <- offset.matrix answer } summaryrcim <- function(object, ...) { rcim(depvar(object), summary.arg = TRUE, ...) } setClass("rcim0", representation(not.needed = "numeric"), contains = "vglm") # Added 20110506 setClass("rcim", representation(not.needed = "numeric"), contains = "rrvglm") setMethod("summary", "rcim0", function(object, ...) summaryrcim(object, ...)) setMethod("summary", "rcim", function(object, ...) 
summaryrcim(object, ...)) Rcim <- function(mat, rbaseline = 1, cbaseline = 1) { mat <- as.matrix(mat) RRR <- dim(mat)[1] CCC <- dim(mat)[2] rnames <- if (is.null(rownames(mat))) { paste("X", 1:RRR, sep = "") } else { rownames(mat) } cnames <- if (is.null(colnames(mat))) { paste("Y", 1:CCC, sep = "") } else { colnames(mat) } r.index <- if (is.character(rbaseline)) which(rownames(mat) == rbaseline) else if (is.numeric(rbaseline)) rbaseline else stop("argument 'rbaseline' must be numeric ", "or a character matching one of the row names") c.index <- if (is.character(cbaseline)) which(colnames(mat) == cbaseline) else if (is.numeric(cbaseline)) cbaseline else stop("argument 'cbaseline' must be numeric ", "or a character matching one of the column names") if (length(r.index) != 1) stop("Could not match with argument 'rbaseline'") if (length(c.index) != 1) stop("Could not match with argument 'cbaseline'") yswap <- rbind(mat[r.index:RRR, ], if (r.index > 1) mat[1:(r.index - 1),] else NULL) yswap <- cbind(yswap[, c.index:CCC], if (c.index > 1) yswap[, 1:(c.index - 1)] else NULL) new.rnames <- rnames[c(r.index:RRR, if (r.index > 1) 1:(r.index - 1) else NULL)] new.cnames <- cnames[c(c.index:CCC, if (c.index > 1) 1:(c.index - 1) else NULL)] colnames(yswap) <- new.cnames rownames(yswap) <- new.rnames yswap } plotrcim0 <- function(object, centered = TRUE, which.plots = c(1, 2), hline0 = TRUE, hlty = "dashed", hcol = par()$col, hlwd = par()$lwd, rfirst = 1, cfirst = 1, rtype = "h", ctype = "h", rcex.lab = 1, rcex.axis = 1, # rlabels = FALSE, rtick = FALSE, ccex.lab = 1, ccex.axis = 1, # clabels = FALSE, ctick = FALSE, rmain = "Row effects", rsub = "", rxlab = "", rylab = "Row effects", cmain = "Column effects", csub = "", cxlab = "", cylab = "Column effects", rcol = par()$col, ccol = par()$col, no.warning = FALSE, ...) { nparff <- if (is.numeric(object@family@infos()$M1)) { object@family@infos()$M1 } else { 1 } if (!no.warning && is.numeric(object@control$Rank) && object@control$Rank != 0) warning("argument 'object' is not Rank-0") n.lm <- nrow(object@y) cobj <- coefficients(object) upperbound <- if (!is.numeric(object@control$Rank) || object@control$Rank == 0) length(cobj) else length(object@control$colx1.index) orig.roweff <- c("Row.1" = 0, cobj[(nparff + 1) : (nparff + n.lm - 1)]) orig.coleff <- c("Col.1" = 0, cobj[(nparff + n.lm) : upperbound]) last.r <- length(orig.roweff) last.c <- length(orig.coleff) orig.raxisl <- rownames(object@y) orig.caxisl <- colnames(object@y) if (is.null(orig.raxisl)) orig.raxisl <- as.character(1:nrow(object@y)) if (is.null(orig.caxisl)) orig.caxisl <- as.character(1:ncol(object@y)) roweff.orig <- roweff <- orig.roweff[c(rfirst:last.r, if (rfirst > 1) 1:(rfirst-1) else NULL)] coleff.orig <- coleff <- orig.coleff[c(cfirst:last.c, if (cfirst > 1) 1:(cfirst-1) else NULL)] if (centered) { roweff <- scale(roweff, scale = FALSE) # Center it only coleff <- scale(coleff, scale = FALSE) # Center it only } raxisl <- orig.raxisl[c(rfirst:last.r, if (rfirst > 1) 1:(rfirst-1) else NULL)] caxisl <- orig.caxisl[c(cfirst:last.c, if (cfirst > 1) 1:(cfirst-1) else NULL)] if (any(which.plots == 1, na.rm = TRUE)) { plot(roweff, type = rtype, axes = FALSE, col = rcol, main = rmain, sub = rsub, xlab = rxlab, ylab = rylab, ...) axis(1, at = seq_along(raxisl), cex.lab = rcex.lab, cex.axis = rcex.axis, labels = raxisl) axis(2, cex.lab = rcex.lab, ...)
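# (The abline() below draws the hline0 reference at zero; with
# centered = TRUE the effects were mean-centred above, so each point is
# read as a departure from the average row effect.)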
# las = rlas) if (hline0) abline(h = 0, lty = hlty, col = hcol, lwd = hlwd) } if (any(which.plots == 2, na.rm = TRUE)) { plot(coleff, type = ctype, axes = FALSE, col = ccol, main = cmain, # lwd = 2, xpd = FALSE, sub = csub, xlab = cxlab, ylab = cylab, ...) axis(1, at = seq_along(caxisl), cex.lab = ccex.lab, cex.axis = ccex.axis, labels = caxisl) axis(2, cex.lab = ccex.lab, ...) # las = clas) if (hline0) abline(h = 0, lty = hlty, col = hcol, lwd = hlwd) } object@post$row.effects = roweff object@post$col.effects = coleff object@post$raw.row.effects = roweff.orig object@post$raw.col.effects = coleff.orig invisible(object) } setMethod("plot", "rcim0", function(x, y, ...) plotrcim0(object = x, ...)) setMethod("plot", "rcim", function(x, y, ...) plotrcim0(object = x, ...)) moffset <- function(mat, roffset = 0, coffset = 0, postfix = "", rprefix = "Row.", cprefix = "Col." ) { if ((is.numeric(roffset) && (roffset == 0)) && (is.numeric(coffset) && (coffset == 0))) return(mat) vecmat <- c(unlist(mat)) ind1 <- if (is.character(roffset)) which(rownames(mat) == roffset) else if (is.numeric(roffset)) roffset + 1 else stop("argument 'roffset' not matched (character). ", "It must be numeric, ", "else character and match the ", "row names of the response") ind2 <- if (is.character(coffset)) which(colnames(mat) == coffset) else if (is.numeric(coffset)) coffset + 1 else stop("argument 'coffset' not matched (character). ", "It must be numeric, ", "else character and match the ", "column names of the response") if (!is.Numeric(ind1, positive = TRUE, integer.valued = TRUE, length.arg = 1) || !is.Numeric(ind2, positive = TRUE, integer.valued = TRUE, length.arg = 1)) stop("bad input for arguments 'roffset' and/or 'coffset'") if (ind1 > nrow(mat)) stop("too large a value for argument 'roffset'") if (ind2 > ncol(mat)) stop("too large a value for argument 'coffset'") start.ind <- (ind2 - 1)* nrow(mat) + ind1 svecmat <- vecmat[c(start.ind:(nrow(mat) * ncol(mat)), 0:(start.ind - 1))] rownames.mat <- rownames(mat) if (length(rownames.mat) != nrow(mat)) rownames.mat <- paste(rprefix, 1:nrow(mat), sep = "") colnames.mat <- colnames(mat) if (length(colnames.mat) != ncol(mat)) colnames.mat <- paste(cprefix, 1:ncol(mat), sep = "") newrn <- if (roffset > 0) c(rownames.mat[c(ind1:nrow(mat))], paste(rownames.mat[0:(ind1-1)], postfix, sep = "")) else rownames.mat newcn <- c(colnames.mat[c(ind2:ncol(mat), 0:(ind2 - 1))]) if (roffset > 0) newcn <- paste(newcn, postfix, sep = "") newmat <- matrix(svecmat, nrow(mat), ncol(mat), dimnames = list(newrn, newcn)) newmat } Confint.rrnb <- function(rrnb2, level = 0.95) { if (class(rrnb2) != "rrvglm") stop("argument 'rrnb2' does not appear to be a rrvglm() object") if (!any(rrnb2@family@vfamily == "negbinomial")) stop("argument 'rrnb2' does not appear to be a negbinomial() fit") if (rrnb2@control$Rank != 1) stop("argument 'rrnb2' is not Rank-1") if (rrnb2@misc$M != 2) stop("argument 'rrnb2' does not have M = 2") if (!all(rrnb2@misc$link == "loge")) stop("argument 'rrnb2' does not have log links for both parameters") a21.hat <- (Coef(rrnb2)@A)["loge(size)", 1] beta11.hat <- Coef(rrnb2)@B1["(Intercept)", "loge(mu)"] beta21.hat <- Coef(rrnb2)@B1["(Intercept)", "loge(size)"] delta1.hat <- exp(a21.hat * beta11.hat - beta21.hat) delta2.hat <- 2 - a21.hat SE.a21.hat <- sqrt(vcovrrvglm(rrnb2)["I(latvar.mat)", "I(latvar.mat)"]) ci.a21 <- a21.hat + c(-1, 1) * qnorm(1 - (1 - level)/2) * SE.a21.hat (ci.delta2 <- 2 - rev(ci.a21)) # e.g., the 95 percent CI list(a21.hat = a21.hat, beta11.hat = 
beta11.hat, beta21.hat = beta21.hat, CI.a21 = ci.a21, CI.delta2 = ci.delta2, delta1 = delta1.hat, delta2 = delta2.hat, SE.a21.hat = SE.a21.hat) } Confint.nb1 <- function(nb1, level = 0.95) { if (class(nb1) != "vglm") stop("argument 'nb1' does not appear to be a vglm() object") if (!any(nb1@family@vfamily == "negbinomial")) stop("argument 'nb1' does not appear to be a negbinomial() fit") if (!all(unlist(constraints(nb1)[-1]) == 1)) stop("argument 'nb1' does not appear to have 'parallel = TRUE'") if (!all(unlist(constraints(nb1)[1]) == c(diag(nb1@misc$M)))) stop("argument 'nb1' does not have 'parallel = FALSE' ", "for the intercept") if (nb1@misc$M != 2) stop("argument 'nb1' does not have M = 2") if (!all(nb1@misc$link == "loge")) stop("argument 'nb1' does not have log links for both parameters") cnb1 <- coefficients(as(nb1, "vglm"), matrix = TRUE) mydiff <- (cnb1["(Intercept)", "loge(size)"] - cnb1["(Intercept)", "loge(mu)"]) delta0.hat <- exp(mydiff) (phi0.hat <- 1 + 1 / delta0.hat) # MLE of phi0 myvcov <- vcov(as(nb1, "vglm")) # Not great; improve this! myvec <- cbind(c(-1, 1, rep_len(0, nrow(myvcov) - 2))) (se.mydiff <- sqrt(t(myvec) %*% myvcov %*% myvec)) ci.mydiff <- mydiff + c(-1, 1) * qnorm(1 - (1 - level)/2) * se.mydiff ci.delta0 <- ci.exp.mydiff <- exp(ci.mydiff) (ci.phi0 <- 1 + 1 / rev(ci.delta0)) # e.g., the 95 percent CI for phi0 list(CI.phi0 = ci.phi0, CI.delta0 = ci.delta0, delta0 = delta0.hat, phi0 = phi0.hat) } plota21 <- function(rrvglm2, show.plot = TRUE, nseq.a21 = 31, se.eachway = c(5, 5), # == c(LHS, RHS), trace.arg = TRUE, lwd = 2, ...) { if (class(rrvglm2) != "rrvglm") stop("argument 'rrvglm2' does not appear to be a rrvglm() object") if (rrvglm2@control$Rank != 1) stop("argument 'rrvglm2' is not Rank-1") if (rrvglm2@misc$M != 2) stop("argument 'rrvglm2' does not have M = 2") loglik.orig <- logLik(rrvglm2) temp1 <- Confint.rrnb(rrvglm2) # zz a21.hat <- (Coef(rrvglm2)@A)[2, 1] SE.a21.hat <- temp1$SE.a21.hat SE.a21.hat <- sqrt(vcov(rrvglm2)["I(latvar.mat)", "I(latvar.mat)"]) big.ci.a21 <- a21.hat + c(-1, 1) * se.eachway * SE.a21.hat seq.a21 <- seq(big.ci.a21[1], big.ci.a21[2], length = nseq.a21) Hlist.orig <- constraints.vlm(rrvglm2, type = "term") alreadyComputed <- !is.null(rrvglm2@post$a21.matrix) a21.matrix <- if (alreadyComputed) rrvglm2@post$a21.matrix else cbind(a21 = seq.a21, loglikelihood = 0) prev.etastart <- predict(rrvglm2) # Halves the computing time funname <- "vglm" listcall <- as.list(rrvglm2@call) if (!alreadyComputed) for (ii in 1:nseq.a21) { if (trace.arg) print(ii) argslist <- vector("list", length(listcall) - 1) for (kay in 2:(length(listcall))) argslist[[kay - 1]] <- listcall[[kay]] names(argslist) <- c(names(listcall)[-1]) argslist$trace <- trace.arg argslist$etastart <- prev.etastart argslist$constraints <- Hlist.orig for (kay in 2:length(argslist[["constraints"]])) { argslist[["constraints"]][[kay]] <- rbind(1, a21.matrix[ii, 1]) } fitnew <- do.call(what = funname, args = argslist) a21.matrix[ii, 2] <- logLik(fitnew) prev.etastart <- predict(fitnew) } if (show.plot) { plot(a21.matrix[ ,1], a21.matrix[ ,2], type = "l", col = "blue", cex.lab = 1.1, xlab = expression(a[21]), ylab = "Log-likelihood") # ... 
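# The dashed reference lines below mark the a21 of the original fit
# (vertical), the maximised log-likelihood (horizontal), and the cutoff
# logLik - qchisq(0.95, df = 1)/2, whose intersections with the profile
# trace out an approximate 95 percent CI; the grey verticals are the
# Wald limits a21.hat +/- 1.96 * SE.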
abline(v = (Hlist.orig[[length(Hlist.orig)]])[2, 1], col = "darkorange", lty = "dashed") abline(h = loglik.orig, col = "darkorange", lty = "dashed") abline(h = loglik.orig - qchisq(0.95, df = 1) / 2, col = "darkorange", lty = "dashed") abline(v = a21.hat + c(-1, 1) * 1.96 * SE.a21.hat, col = "gray50", lty = "dashed", lwd = lwd) } # End of (show.plot) rrvglm2@post <- list(a21.matrix = a21.matrix) invisible(rrvglm2) } Qvar <- function(object, factorname = NULL, which.linpred = 1, coef.indices = NULL, labels = NULL, dispersion = NULL, reference.name = "(reference)", estimates = NULL ) { if (!is.Numeric(which.linpred, length.arg = 1, integer.valued = TRUE, positive = TRUE)) stop("argument 'which.linpred' must be a positive integer") coef.indices.saved <- coef.indices if (!is.matrix(object)) { model <- object if (is.null(factorname) && is.null(coef.indices)) { stop("arguments 'factorname' and 'coef.indices' are ", "both NULL") } if (is.null(coef.indices)) { M <- npred(model) if (M < which.linpred) stop("argument 'which.linpred' must be a value from the set 1:", M) newfactorname <- if (M > 1) { clist <- constraints(model, type = "term") Hk <- clist[[factorname]] Mdot <- ncol(Hk) Hk.row <- Hk[which.linpred, ] if (sum(Hk.row != 0) > 1) stop("cannot handle rows of constraint matrices with more ", "than one nonzero value") foo <- function(ii) switch(as.character(ii), "1" = "1st", "2" = "2nd", "3" = "3rd", paste(ii, "th", sep = "")) if (sum(Hk.row != 0) == 0) stop("factor '", factorname, "' is not used in the ", foo(which.linpred), " eta (linear predictor)") row.index <- (1:Mdot)[Hk.row != 0] all.labels <- vlabel(factorname, ncolHlist = Mdot, M = M) all.labels[row.index] } else { factorname } colptr <- attr(model.matrix(object, type = "vlm"), "vassign") colptr <- if (M > 1) { colptr[newfactorname] } else { colptr[[newfactorname]] } coef.indices <- colptr contmat <- if (length(model@xlevels[[factorname]]) == length(coef.indices)) { diag(length(coef.indices)) } else { eval(call(model@contrasts[[factorname]], model@xlevels[[factorname]])) } rownames(contmat) <- model@xlevels[[factorname]] if (is.null(estimates)) { if (M > 1) { estimates <- matrix(-1, nrow(contmat), 1) # Used to be nc = Mdot ii <- 1 estimates[, ii] <- contmat %*% (coefvlm(model)[(coef.indices[[ii]])]) } else { estimates <- contmat %*% (coefvlm(model)[coef.indices]) } } Covmat <- vcov(model, dispersion = dispersion) covmat <- Covmat[unlist(coef.indices), unlist(coef.indices), drop = FALSE] covmat <- if (M > 1) { ii <- 1 ans <- contmat %*% Covmat[(colptr[[ii]]), (colptr[[ii]])] %*% t(contmat) ans } else { contmat %*% covmat %*% t(contmat) } } else { # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, kk <- length(coef.indices) refPos <- numeric(0) if (0 %in% coef.indices) { refPos <- which(coef.indices == 0) coef.indices <- coef.indices[-refPos] } covmat <- vcov(model, dispersion = dispersion) covmat <- covmat[coef.indices, coef.indices, drop = FALSE] if (is.null(estimates)) estimates <- coefvlm(model)[coef.indices] if (length(refPos) == 1) { if (length(estimates) != kk) estimates <- c(0, estimates) covmat <- rbind(0, cbind(0, covmat)) names(estimates)[1] <- rownames(covmat)[1] <- colnames(covmat)[1] <- reference.name if (refPos != 1) { perm <- if (refPos == kk) c(2:kk, 1) else c(2:refPos, 1, (refPos + 1):kk) estimates <- estimates[perm] covmat <- covmat[perm, perm, drop = FALSE] } } } # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, return(Recall(covmat, factorname = factorname, which.linpred = which.linpred, coef.indices =
coef.indices.saved, labels = labels, dispersion = dispersion, estimates = estimates ) ) } else { # ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; covmat <- object if (length(labels)) rownames(covmat) <- colnames(covmat) <- labels if ((LLL <- dim(covmat)[1]) <= 2) stop("This function works only for factors with 3 ", "or more levels") } # ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; allvcov <- covmat for (ilocal in 1:LLL) for (jlocal in ilocal:LLL) allvcov[ilocal, jlocal] <- allvcov[jlocal, ilocal] <- covmat[ilocal, ilocal] + covmat[jlocal, jlocal] - covmat[ilocal, jlocal] * 2 diag(allvcov) <- rep_len(1.0, LLL) # Any positive value should do wmat <- matrix(1.0, LLL, LLL) diag(wmat) <- sqrt( .Machine$double.eps ) logAllvcov <- log(allvcov) attr(logAllvcov, "Prior.Weights") <- wmat attr(logAllvcov, "estimates") <- estimates attr(logAllvcov, "coef.indices") <- coef.indices attr(logAllvcov, "factorname") <- factorname attr(logAllvcov, "regularVar") <- diag(covmat) attr(logAllvcov, "which.linpred") <- which.linpred logAllvcov } # End of Qvar() WorstErrors <- function(qv.object) { stop("20110729; does not work") reducedForm <- function(covmat, qvmat) { nlevels <- dim(covmat)[1] firstRow <- covmat[1, ] ones <- rep_len(1, nlevels) J <- outer(ones, ones) notzero <- 2:nlevels r.covmat <- covmat + (firstRow[1]*J) - outer(firstRow, ones) - outer(ones, firstRow) r.covmat <- r.covmat[notzero, notzero] qv1 <- qvmat[1, 1] r.qvmat <- (qvmat + qv1*J)[notzero, notzero] list(r.covmat, r.qvmat) } covmat <- qv.object$covmat qvmat <- diag(qv.object$qvframe$quasiVar) r.form <- reducedForm(covmat, qvmat) r.covmat <- r.form[[1]] r.qvmat <- r.form[[2]] inverse.sqrt <- solve(chol(r.covmat)) evalues <- eigen(t(inverse.sqrt) %*% r.qvmat %*% inverse.sqrt, symmetric = TRUE)$values sqrt(c(min(evalues), max(evalues))) - 1 } IndentPrint <- function(object, indent = 4, ...) { stop("20110729; does not work") zz <- "" tc <- textConnection("zz", "w", local = TRUE) sink(tc) try(print(object, ...)) sink() close(tc) indent <- paste(rep_len(" ", indent), sep = "", collapse = "") cat(paste(indent, zz, sep = ""), sep = "\n") } Print.qv <- function(x, ...) { stop("20110729; does not work") } summary.qvar <- function(object, ...) { relerrs <- 1 - sqrt(exp(residuals(object, type = "response"))) diag(relerrs) <- NA minErrSimple <- round(100 * min(relerrs, na.rm = TRUE), 1) maxErrSimple <- round(100 * max(relerrs, na.rm = TRUE), 1) estimates <- c(object@extra$attributes.y$estimates) if (!length(names(estimates)) && is.matrix(object@extra$attributes.y$estimates)) names( estimates) <- rownames(object@extra$attributes.y$estimates) if (!length(names(estimates))) names( estimates) <- paste("Level", seq_along(estimates), sep = "") regularVar <- c(object@extra$attributes.y$regularVar) QuasiVar <- exp(diag(fitted(object))) / 2 QuasiSE <- sqrt(QuasiVar) structure(list(estimate = estimates, SE = sqrt(regularVar), minErrSimple = minErrSimple, maxErrSimple = maxErrSimple, quasiSE = QuasiSE, object = object, quasiVar = QuasiVar), class = "summary.qvar") } print.summary.qvar <- function(x, ...) 
{ object <- x$object minErrSimple <- x$minErrSimple maxErrSimple <- x$maxErrSimple x$minErrSimple <- NULL x$maxErrSimple <- NULL x$object <- NULL if (length(cl <- object@call)) { cat("Call:\n") dput(cl) } facname <- c(object@extra$attributes.y$factorname) if (length(facname)) cat("Factor name: ", facname, "\n") if (length(object@dispersion)) cat("\nDispersion: ", object@dispersion, "\n\n") x <- as.data.frame(c(x)) print.data.frame(x) cat("\nWorst relative errors in SEs of simple contrasts (%): ", minErrSimple, ", ", maxErrSimple, "\n") invisible(x) } qvar <- function(object, se = FALSE, ...) { if (!inherits(object, "rcim") && !inherits(object, "rcim0")) warning("argument 'object' should be an 'rcim' or 'rcim0' object. ", "This call may fail.") if (!any(object@family@vfamily %in% c("uninormal", "normal1"))) warning("argument 'object' does not seem to have used ", "a 'uninormal' family.") if (!any(object@misc$link == "explink")) warning("argument 'object' does not seem to have used ", "an 'explink' link function.") quasiVar <- diag(predict(object)[, c(TRUE, FALSE)]) / 2 if (se) sqrt(quasiVar) else quasiVar } plotqvar <- qvplot <- function(object, interval.width = 2, ylab = "Estimate", xlab = NULL, # x$factorname, ylim = NULL, main = "", level.names = NULL, conf.level = 0.95, warn.ratio = 10, border = "transparent", # None points.arg = TRUE, length.arrows = 0.25, angle = 30, lwd = par()$lwd, scol = par()$col, slwd = par()$lwd, slty = par()$lty, ...) { if (!is.numeric(interval.width) && !is.numeric(conf.level)) stop("at least one of arguments 'interval.width' and 'conf.level' ", "should be numeric") if (!any("uninormal" %in% object@family@vfamily)) stop("argument 'object' does not appear to be a ", "rcim(..., uninormal) object") estimates <- c(object@extra$attributes.y$estimates) if (!length(names(estimates)) && is.matrix(object@extra$attributes.y$estimates)) names(estimates) <- rownames(object@extra$attributes.y$estimates) if (length(level.names) == length(estimates)) { names(estimates) <- level.names } else if (!length(names(estimates))) names(estimates) <- paste("Level", seq_along(estimates), sep = "") QuasiVar <- exp(diag(fitted(object))) / 2 QuasiSE <- sqrt(QuasiVar) if (!is.numeric(estimates)) stop("Cannot plot, because there are no 'proper' ", "parameter estimates") if (!is.numeric(QuasiSE)) stop("Cannot plot, because there are no ", "quasi standard errors") faclevels <- factor(names(estimates), levels = names(estimates)) xvalues <- seq_along(faclevels) tops <- estimates + interval.width * QuasiSE tails <- estimates - interval.width * QuasiSE if (is.numeric(conf.level)) { zedd <- abs(qnorm((1 - conf.level) / 2)) lsd.tops <- estimates + zedd * QuasiSE / sqrt(2) lsd.tails <- estimates - zedd * QuasiSE / sqrt(2) if (max(QuasiSE) / min(QuasiSE) > warn.ratio) warning("Quasi SEs appear to be quite different... the ", "LSD intervals may not be very accurate") } else { lsd.tops <- NULL lsd.tails <- NULL } if (is.null(ylim)) ylim <- range(c(tails, tops, lsd.tails, lsd.tops), na.rm = TRUE) if (is.null(xlab)) xlab <- "Factor level" plot(faclevels, estimates, border = border, ylim = ylim, xlab = xlab, ylab = ylab, lwd = lwd, main = main, ...) if (points.arg) points(estimates, ...)
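# Interval construction: the plain segments below span
# estimate +/- interval.width * quasi-SE, while the arrowed intervals use
# z * quasi-SE / sqrt(2) (comparison, i.e. LSD-type, intervals), so two
# levels differ at roughly the conf.level when their arrowed intervals
# do not overlap.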
if (is.numeric(interval.width)) { segments(xvalues, tails, xvalues, tops, col = scol, lty = slty, lwd = slwd) } if (is.numeric(conf.level)) { arrows(xvalues, lsd.tails, xvalues, lsd.tops, col = scol, lty = slty, lwd = slwd, code = 3, length = length.arrows, angle = angle) } if (any(slotNames(object) == "post")) { object@post$estimates <- estimates object@post$xvalues <- xvalues if (is.numeric(interval.width)) { object@post$tails <- tails object@post$tops <- tops } if (is.numeric(conf.level)) { object@post$lsd.tails <- lsd.tails object@post$lsd.tops <- lsd.tops } } invisible(object) } VGAM/R/family.zeroinf.R0000644000176200001440000101644513135276757014317 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. dzanegbin <- function(x, size, prob = NULL, munb = NULL, pobs0 = 0, log = FALSE) { if (length(munb)) { if (length(prob)) stop("arguments 'prob' and 'munb' both specified") prob <- 1 / (1 + munb/size) } if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(pobs0), length(prob), length(size)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(pobs0) != LLL) pobs0 <- rep_len(pobs0, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(size) != LLL) size <- rep_len(size, LLL) ans <- rep_len(0.0, LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") if (!is.Numeric(prob, positive = TRUE) || max(prob, na.rm = TRUE) > 1) stop("argument 'prob' must be in (0,1]") if (!is.Numeric(size, positive = TRUE)) stop("argument 'size' must be in (0,Inf)") index0 <- x == 0 if (log.arg) { ans[ index0] <- log(pobs0[index0]) ans[!index0] <- log1p(-pobs0[!index0]) + dposnegbin(x[!index0], prob = prob[!index0], size = size[!index0], log = TRUE) } else { ans[ index0] <- pobs0[index0] ans[!index0] <- (1 - pobs0[!index0]) * dposnegbin(x[!index0], prob = prob[!index0], size = size[!index0]) } ans } pzanegbin <- function(q, size, prob = NULL, munb = NULL, pobs0 = 0) { if (length(munb)) { if (length(prob)) stop("arguments 'prob' and 'munb' both specified") prob <- 1 / (1 + munb/size) } LLL <- max(length(q), length(pobs0), length(prob), length(size)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(pobs0) != LLL) pobs0 <- rep_len(pobs0, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(size) != LLL) size <- rep_len(size, LLL) ans <- rep_len(0.0, LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") qindex <- (q > 0) ans[ qindex] <- pobs0[qindex] + (1 - pobs0[qindex]) * pposnegbin(q[qindex], size = size[qindex], prob = prob[qindex]) ans[q < 0] <- 0 ans[q == 0] <- pobs0[q == 0] ans <- pmax(0, ans) ans <- pmin(1, ans) ans } qzanegbin <- function(p, size, prob = NULL, munb = NULL, pobs0 = 0) { if (length(munb)) { if (length(prob)) stop("arguments 'prob' and 'munb' both specified") prob <- 1 / (1 + munb/size) } LLL <- max(length(p), length(pobs0), length(prob), length(size)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(pobs0) != LLL) pobs0 <- rep_len(pobs0, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(size) != LLL) size <- rep_len(size, LLL) ans <- rep_len(0.0, LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be between 0 and 1 inclusive") ans <- p ans[p <= pobs0] <- 0 pindex <- (p > pobs0) ans[pindex] <- qposnegbin((p[pindex] - pobs0[pindex]) / (1 - pobs0[pindex]), 
prob = prob[pindex], size = size[pindex]) ans } rzanegbin <- function(n, size, prob = NULL, munb = NULL, pobs0 = 0) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (length(munb)) { if (length(prob)) stop("arguments 'prob' and 'munb' both specified") prob <- 1 / (1 + munb/size) } ans <- rposnegbin(n = use.n, prob = prob, size = size) if (length(pobs0) != use.n) pobs0 <- rep_len(pobs0, use.n) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be between 0 and 1 inclusive") ifelse(runif(use.n) < pobs0, 0, ans) } dzapois <- function(x, lambda, pobs0 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(lambda), length(pobs0)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(lambda) != LLL) lambda <- rep_len(lambda, LLL) if (length(pobs0) != LLL) pobs0 <- rep_len(pobs0, LLL) ans <- rep_len(0.0, LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") index0 <- (x == 0) if (log.arg) { ans[ index0] <- log(pobs0[index0]) ans[!index0] <- log1p(-pobs0[!index0]) + dpospois(x[!index0], lambda[!index0], log = TRUE) } else { ans[ index0] <- pobs0[index0] ans[!index0] <- (1 - pobs0[!index0]) * dpospois(x[!index0], lambda[!index0]) } ans } pzapois <- function(q, lambda, pobs0 = 0) { LLL <- max(length(q), length(lambda), length(pobs0)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(lambda) != LLL) lambda <- rep_len(lambda, LLL) if (length(pobs0) != LLL) pobs0 <- rep_len(pobs0, LLL) ans <- rep_len(0.0, LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") ans[q > 0] <- pobs0[q > 0] + (1-pobs0[q > 0]) * ppospois(q[q > 0], lambda[q > 0]) ans[q < 0] <- 0 ans[q == 0] <- pobs0[q == 0] ans <- pmax(0, ans) ans <- pmin(1, ans) ans } qzapois <- function(p, lambda, pobs0 = 0) { LLL <- max(length(p), length(lambda), length(pobs0)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(lambda) != LLL) lambda <- rep_len(lambda, LLL) if (length(pobs0) != LLL) pobs0 <- rep_len(pobs0, LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be between 0 and 1 inclusive") ans <- p ind4 <- (p > pobs0) ans[!ind4] <- 0 ans[ ind4] <- qpospois((p[ind4] - pobs0[ind4]) / (1 - pobs0[ind4]), lambda = lambda[ind4]) ans } rzapois <- function(n, lambda, pobs0 = 0) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n ans <- rpospois(use.n, lambda) if (length(pobs0) != use.n) pobs0 <- rep_len(pobs0, use.n) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must in [0,1]") ifelse(runif(use.n) < pobs0, 0, ans) } dzipois <- function(x, lambda, pstr0 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(lambda), length(pstr0)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(lambda) != LLL) lambda <- rep_len(lambda, LLL) if (length(pstr0) != LLL) pstr0 <- rep_len(pstr0, LLL) ans <- x + lambda + pstr0 index0 <- (x == 0) if (log.arg) { ans[ index0] <- log(pstr0[ index0] + (1 - pstr0[ index0]) * dpois(x[ index0], lambda[ index0])) ans[!index0] <- log1p(-pstr0[!index0]) + dpois(x[!index0], 
lambda[!index0], log = TRUE) } else { ans[ index0] <- pstr0[ index0] + (1 - pstr0[ index0]) * dpois(x[ index0], lambda[ index0]) ans[!index0] <- (1 - pstr0[!index0]) * dpois(x[!index0], lambda[!index0]) } deflat.limit <- -1 / expm1(lambda) ans[pstr0 < deflat.limit] <- NaN ans[pstr0 > 1] <- NaN ans } pzipois <- function(q, lambda, pstr0 = 0) { LLL <- max(length(pstr0), length(lambda), length(q)) if (length(pstr0) != LLL) pstr0 <- rep_len(pstr0, LLL) if (length(lambda) != LLL) lambda <- rep_len(lambda, LLL) if (length(q) != LLL) q <- rep_len(q, LLL) ans <- ppois(q, lambda) ans <- ifelse(q < 0, 0, pstr0 + (1 - pstr0) * ans) deflat.limit <- -1 / expm1(lambda) ans[pstr0 < deflat.limit] <- NaN ans[pstr0 > 1] <- NaN ans } qzipois <- function(p, lambda, pstr0 = 0) { LLL <- max(length(p), length(lambda), length(pstr0)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(lambda) != LLL) lambda <- rep_len(lambda, LLL) if (length(pstr0) != LLL) pstr0 <- rep_len(pstr0, LLL) ans <- rep_len(NA_real_, LLL) deflat.limit <- -1 / expm1(lambda) ans[p <= pstr0] <- 0 pindex <- (pstr0 < p) & (deflat.limit <= pstr0) ans[pindex] <- qpois((p[pindex] - pstr0[pindex]) / (1 - pstr0[pindex]), lambda = lambda[pindex]) ans[pstr0 < deflat.limit] <- NaN ans[1 < pstr0] <- NaN ans[lambda < 0] <- NaN ans[p < 0] <- NaN ans[1 < p] <- NaN ans } rzipois <- function(n, lambda, pstr0 = 0) { qzipois(runif(n), lambda, pstr0 = pstr0) } yip88 <- function(link = "loge", n.arg = NULL, imethod = 1) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Zero-inflated Poisson (based on Yip (1988))\n\n", "Link: ", namesof("lambda", link, earg), "\n", "Variance: (1 - pstr0) * lambda"), first = eval(substitute(expression({ zero <- y == 0 if (any(zero)) { if (length(extra)) extra$sumw <- sum(w) else extra <- list(sumw=sum(w)) if (is.numeric(.n.arg) && extra$sumw != .n.arg) stop("value of 'n.arg' conflicts with data ", "(it need not be specified anyway)") warning("trimming out the zero observations") axa.save <- attr(x, "assign") x <- x[!zero,, drop = FALSE] attr(x, "assign") <- axa.save # Don't lose these!! 
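# Note: following Yip (1988), the zero counts are discarded here and only
# the positive counts are modelled; the zero-inflation probability pstr0
# is not estimated directly but recovered in @linkinv (and @last) from
# sum(w) and n.arg.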
w <- w[!zero] y <- y[!zero] } else { if (!is.numeric(.n.arg)) stop("n.arg must be supplied") } }), list( .n.arg = n.arg ))), initialize = eval(substitute(expression({ narg <- if (is.numeric(.n.arg)) .n.arg else extra$sumw if (sum(w) > narg) stop("sum(w) > narg") w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- namesof("lambda", .link, list(theta = NULL), tag = FALSE) if (!length(etastart)) { lambda.init <- Init.mu(y = y, w = w, imethod = .imethod , # x = x, pos.only = FALSE) etastart <- theta2eta(lambda.init, .link , earg = .earg ) } if (length(extra)) { extra$sumw <- sum(w) extra$narg <- narg # For @linkinv } else { extra <- list(sumw = sum(w), narg = narg) } }), list( .link = link, .earg = earg, .n.arg = n.arg, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { lambda <- eta2theta(eta, .link, .earg) temp5 <- exp(-lambda) pstr0 <- (1 - temp5 - extra$sumw/extra$narg) / (1 - temp5) if (any(pstr0 <= 0)) stop("non-positive value(s) of pstr0") (1 - pstr0) * lambda }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(lambda = .link ) misc$earg <- list(lambda = .earg ) if (intercept.only) { suma <- extra$sumw pstr0 <- (1 - temp5[1] - suma / narg) / (1 - temp5[1]) pstr0 <- if (pstr0 < 0 || pstr0 > 1) NA else pstr0 misc$pstr0 <- pstr0 } }), list( .link = link, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { lambda <- eta2theta(eta, .link) temp5 <- exp(-lambda) pstr0 <- (1 - temp5 - extra$sumw / extra$narg) / (1 - temp5) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dzipois(x = y, pstr0 = pstr0, lambda = lambda, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg ))), vfamily = c("yip88"), validparams = eval(substitute(function(eta, y, extra = NULL) { lambda <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(lambda)) && all(0 < lambda) okay1 }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ lambda <- eta2theta(eta, .link , earg = .earg ) temp5 <- exp(-lambda) dl.dlambda <- -1 + y/lambda - temp5/(1-temp5) dlambda.deta <- dtheta.deta(lambda, .link , earg = .earg ) w * dl.dlambda * dlambda.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ d2lambda.deta2 <- d2theta.deta2(lambda, .link , earg = .earg ) d2l.dlambda2 <- -y / lambda^2 + temp5 / (1 - temp5)^2 -w * (d2l.dlambda2*dlambda.deta^2 + dl.dlambda*d2lambda.deta2) }), list( .link = link, .earg = earg )))) } zapoisson <- function(lpobs0 = "logit", llambda = "loge", type.fitted = c("mean", "lambda", "pobs0", "onempobs0"), imethod = 1, ipobs0 = NULL, ilambda = NULL, ishrinkage = 0.95, probs.y = 0.35, zero = NULL) { lpobs.0 <- as.list(substitute(lpobs0)) epobs.0 <- link2list(lpobs.0) lpobs.0 <- attr(epobs.0, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "lambda", "pobs0", "onempobs0"))[1] new("vglmff", blurb = c("Zero-altered Poisson ", "(Bernoulli and positive-Poisson conditional model)\n\n", "Links: ", namesof("pobs0", lpobs.0, earg = epobs.0, tag = FALSE), ", ", namesof("lambda", llambda, earg = elambda, tag = FALSE), "\n", "Mean: (1 - pobs0) * lambda / (1 - exp(-lambda))"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, 
predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("pobs0", "lambda"), type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y extra$y0 <- y0 <- ifelse(y == 0, 1, 0) extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$skip.these <- skip.these <- matrix(as.logical(y0), n, NOS) extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("pobs0", ncoly) mynames2 <- param.names("lambda", ncoly) predictors.names <- c(namesof(mynames1, .lpobs.0 , earg = .epobs.0 , tag = FALSE), namesof(mynames2, .llambda , earg = .elambda , tag = FALSE))[ interleave.VGAM(M1*NOS, M1 = M1)] if (!length(etastart)) { lambda.init <- Init.mu(y = y, w = w, imethod = .imethod , # x = x, imu = .ilambda , ishrinkage = .ishrinkage , pos.only = TRUE, probs.y = .probs.y ) etastart <- cbind(theta2eta(if (length( .ipobs0 )) .ipobs0 else (0.5 + w * y0) / (1 + w), .lpobs.0 , earg = .epobs.0 ), theta2eta(lambda.init, .llambda , earg = .elambda )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M1 = M1)] } }), list( .lpobs.0 = lpobs.0, .llambda = llambda, .epobs.0 = epobs.0, .elambda = elambda, .ipobs0 = ipobs0, .ilambda = ilambda, .ishrinkage = ishrinkage, .probs.y = probs.y, .imethod = imethod, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "lambda", "pobs0", "onempobs0"))[1] M1 <- 2 NOS <- ncol(eta) / M1 pobs.0 <- cbind(eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lpobs.0 , earg = .epobs.0 )) lambda <- cbind(eta2theta(eta[, M1*(1:NOS)-0, drop = FALSE], .llambda , earg = .elambda )) ans <- switch(type.fitted, "mean" = (1 - pobs.0) * lambda / (-expm1(-lambda)), "lambda" = lambda, "pobs0" = pobs.0, # P(Y=0) "onempobs0" = 1 - pobs.0) # P(Y>0) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lpobs.0 = lpobs.0, .llambda = llambda, .epobs.0 = epobs.0, .elambda = elambda ))), last = eval(substitute(expression({ temp.names <- c(rep_len( .lpobs.0 , NOS), rep_len( .llambda , NOS)) temp.names <- temp.names[interleave.VGAM(M1*NOS, M1 = M1)] misc$link <- temp.names names(misc$link) <- c(mynames1, mynames2)[interleave.VGAM(M1*NOS, M1 = M1)] misc$earg <- vector("list", M1 * NOS) names(misc$earg) <- names(misc$link) for (ii in 1:NOS) { misc$earg[[M1*ii-1]] <- .epobs.0 misc$earg[[M1*ii ]] <- .elambda } }), list( .lpobs.0 = lpobs.0, .llambda = llambda, .epobs.0 = epobs.0, .elambda = elambda ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { pobs0 <- cbind(eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lpobs.0, earg = .epobs.0)) lambda <- cbind(eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .llambda, earg = .elambda )) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dzapois(x = y, pobs0 = pobs0, lambda = lambda, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lpobs.0 = lpobs.0, .llambda = llambda, .epobs.0 = epobs.0, .elambda = elambda ))), vfamily = c("zapoisson"), validparams = eval(substitute(function(eta, y, extra = NULL) { TFvec <- c(TRUE, FALSE) phimat <- eta2theta(eta[, TFvec, drop = FALSE], .lpobs.0 , earg = .epobs.0 ) lambda <- eta2theta(eta[, !TFvec, drop = FALSE], .llambda , earg = .elambda ) okay1 <- all(is.finite(lambda)) && all(0 < lambda) && all(is.finite(phimat)) && all(0 < phimat & phimat < 1) okay1 }, list( .lpobs.0 = lpobs.0, .llambda = llambda, .epobs.0 = epobs.0, .elambda = elambda ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) pobs0 <- eta2theta(eta[, c(TRUE, FALSE)], .lpobs.0 , earg = .epobs.0 ) lambda <- eta2theta(eta[, c(FALSE, TRUE)], .llambda , earg = .elambda ) rzapois(nsim * length(lambda), lambda = lambda, pobs0 = pobs0) }, list( .lpobs.0 = lpobs.0, .llambda = llambda, .epobs.0 = epobs.0, .elambda = elambda ))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- ncol(eta) / M1 # extra$NOS y0 <- extra$y0 skip <- extra$skip.these TFvec <- c(TRUE, FALSE) phimat <- eta2theta(eta[, TFvec, drop = FALSE], .lpobs.0 , earg = .epobs.0 ) lambda <- eta2theta(eta[, !TFvec, drop = FALSE], .llambda , earg = .elambda ) dl.dlambda <- y / lambda + 1 / expm1(-lambda) dl.dphimat <- -1 / (1 - phimat) # For y > 0 obsns for (spp. in 1:NOS) { dl.dphimat[skip[, spp.], spp.] <- 1 / phimat[skip[, spp.], spp.] dl.dlambda[skip[, spp.], spp.] 
<- 0 } dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) mu.phi0 <- phimat temp3 <- if ( .lpobs.0 == "logit") { c(w) * (y0 - mu.phi0) } else { c(w) * dtheta.deta(mu.phi0, link = .lpobs.0 , earg = .epobs.0 ) * dl.dphimat } ans <- cbind(temp3, c(w) * dl.dlambda * dlambda.deta) ans <- ans[, interleave.VGAM(ncol(ans), M1 = M1)] ans }), list( .lpobs.0 = lpobs.0, .llambda = llambda, .epobs.0 = epobs.0, .elambda = elambda ))), weight = eval(substitute(expression({ wz <- matrix(0.0, n, M1 * NOS) temp5 <- expm1(lambda) ned2l.dlambda2 <- (1 - phimat) * (temp5 + 1) * (1 / lambda - 1 / temp5) / temp5 wz[, NOS+(1:NOS)] <- c(w) * ned2l.dlambda2 * dlambda.deta^2 tmp100 <- mu.phi0 * (1 - mu.phi0) tmp200 <- if ( .lpobs.0 == "logit" && is.empty.list( .epobs.0 )) { cbind(c(w) * tmp100) } else { cbind(c(w) * (1 / tmp100) * dtheta.deta(mu.phi0, link = .lpobs.0 , earg = .epobs.0 )^2) } if (FALSE) for (ii in 1:NOS) { index200 <- abs(tmp200[, ii]) < .Machine$double.eps if (any(index200)) { tmp200[index200, ii] <- 10.0 * .Machine$double.eps^(3/4) } } wz[, 1:NOS] <- tmp200 wz <- wz[, interleave.VGAM(ncol(wz), M1 = M1)] wz }), list( .lpobs.0 = lpobs.0, .epobs.0 = epobs.0 )))) } # End of zapoisson zapoissonff <- function(llambda = "loge", lonempobs0 = "logit", type.fitted = c("mean", "lambda", "pobs0", "onempobs0"), imethod = 1, ilambda = NULL, ionempobs0 = NULL, ishrinkage = 0.95, probs.y = 0.35, zero = "onempobs0") { llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") lonempobs0 <- as.list(substitute(lonempobs0)) eonempobs0 <- link2list(lonempobs0) lonempobs0 <- attr(eonempobs0, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "lambda", "pobs0", "onempobs0"))[1] new("vglmff", blurb = c("Zero-altered Poisson ", "(Bernoulli and positive-Poisson conditional model)\n\n", "Links: ", namesof("lambda", llambda, earg = elambda, tag = FALSE), ", ", namesof("onempobs0", lonempobs0, earg = eonempobs0, tag = FALSE), "\n", "Mean: onempobs0 * lambda / (1 - exp(-lambda))"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
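# zapoissonff() fits the same zero-altered Poisson model as zapoisson()
# but parameterizes the zero part as onempobs0 = 1 - pobs0 = P(Y > 0),
# so with the default "logit" link a positive coefficient raises the
# probability of a nonzero count.  By default 'onempobs0' is
# intercept-only (see argument 'zero').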
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("lambda", "onempobs0"), type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.integer.y = TRUE, Is.nonnegative.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y extra$y0 <- y0 <- ifelse(y == 0, 1, 0) extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$skip.these <- skip.these <- matrix(as.logical(y0), n, NOS) extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("lambda", ncoly) mynames2 <- param.names("onempobs0", ncoly) predictors.names <- c(namesof(mynames1, .llambda, earg = .elambda , tag = FALSE), namesof(mynames2, .lonempobs0 , earg = .eonempobs0 , tag = FALSE))[ interleave.VGAM(M1*NOS, M1 = M1)] if (!length(etastart)) { lambda.init <- Init.mu(y = y, w = w, imethod = .imethod , # x = x, imu = .ilambda, ishrinkage = .ishrinkage, pos.only = TRUE, probs.y = .probs.y ) etastart <- cbind(theta2eta(lambda.init, .llambda , earg = .elambda ), theta2eta(1 - (0.5 + w * y0) / (1 + w), .lonempobs0 , earg = .eonempobs0 )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M1 = M1)] } }), list( .lonempobs0 = lonempobs0, .llambda = llambda, .eonempobs0 = eonempobs0, .elambda = elambda, .ilambda = ilambda, .ishrinkage = ishrinkage, .probs.y = probs.y, .type.fitted = type.fitted, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. ", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "lambda", "pobs0", "onempobs0"))[1] M1 <- 2 NOS <- ncol(eta) / M1 lambda <- cbind(eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .llambda , earg = .elambda )) onempobs0 <- cbind(eta2theta(eta[, M1*(1:NOS)-0, drop = FALSE], .lonempobs0 , earg = .eonempobs0 )) ans <- switch(type.fitted, "mean" = onempobs0 * lambda / (-expm1(-lambda)), "lambda" = lambda, "pobs0" = 1 - onempobs0, # P(Y=0) "onempobs0" = onempobs0) # P(Y>0) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lonempobs0 = lonempobs0, .llambda = llambda, .eonempobs0 = eonempobs0, .elambda = elambda ))), last = eval(substitute(expression({ misc$expected <- TRUE misc$multipleResponses <- TRUE temp.names <- c(rep_len( .llambda , NOS), rep_len( .lonempobs0 , NOS)) temp.names <- temp.names[interleave.VGAM(M1*NOS, M1 = M1)] misc$link <- temp.names names(misc$link) <- c(mynames1, mynames2)[interleave.VGAM(M1*NOS, M1 = M1)] misc$earg <- vector("list", M1 * NOS) names(misc$earg) <- names(misc$link) for (ii in 1:NOS) { misc$earg[[M1*ii-1]] <- .elambda misc$earg[[M1*ii ]] <- .eonempobs0 } }), list( .lonempobs0 = lonempobs0, .llambda = llambda, .eonempobs0 = eonempobs0, .elambda = elambda ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { NOS <- extra$NOS M1 <- 2 lambda <- cbind(eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .llambda , earg = .elambda )) onempobs0 <- cbind(eta2theta(eta[, M1*(1:NOS)-0, drop = FALSE], .lonempobs0 , earg = .eonempobs0 )) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dzapois(x = y, lambda = lambda, pobs0 = 1 - onempobs0, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lonempobs0 = lonempobs0, 
.llambda = llambda, .eonempobs0 = eonempobs0, .elambda = elambda ))), vfamily = c("zapoissonff"), validparams = eval(substitute(function(eta, y, extra = NULL) { TFvec <- c(TRUE, FALSE) lambda <- eta2theta(eta[, TFvec, drop=FALSE], .llambda , e= .elambda ) onempobs0 <- eta2theta(eta[, !TFvec, drop=FALSE], .lonempobs0 , e= .eonempobs0 ) okay1 <- all(is.finite(lambda)) && all(0 < lambda) && all(is.finite(onempobs0)) && all(0 < onempobs0 & onempobs0 < 1) okay1 }, list( .lonempobs0 = lonempobs0, .llambda = llambda, .eonempobs0 = eonempobs0, .elambda = elambda ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) lambda <- eta2theta(eta[, c(TRUE, FALSE)], .llambda , earg = .elambda ) onempobs0 <- eta2theta(eta[, c(FALSE, TRUE)], .lonempobs0 , earg = .eonempobs0 ) rzapois(nsim * length(lambda), lambda = lambda, pobs0 = 1 - onempobs0) }, list( .lonempobs0 = lonempobs0, .llambda = llambda, .eonempobs0 = eonempobs0, .elambda = elambda ))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- ncol(eta) / M1 # extra$NOS y0 <- extra$y0 skip <- extra$skip.these lambda <- cbind(eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .llambda, earg = .elambda )) omphimat <- cbind(eta2theta(eta[, M1*(1:NOS)-0, drop = FALSE], .lonempobs0, earg = .eonempobs0 )) phimat <- 1 - omphimat dl.dlambda <- y / lambda + 1 / expm1(-lambda) dl.dPHImat <- +1 / (omphimat) # For y > 0 obsns for (spp. in 1:NOS) { dl.dPHImat[skip[, spp.], spp.] <- -1 / phimat[skip[, spp.], spp.] dl.dlambda[skip[, spp.], spp.] <- 0 } dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) mu.phi0 <- omphimat temp3 <- if ( FALSE && .lonempobs0 == "logit") { } else { c(w) * dtheta.deta(mu.phi0, link = .lonempobs0 , earg = .eonempobs0 ) * dl.dPHImat } ans <- cbind(c(w) * dl.dlambda * dlambda.deta, temp3) ans <- ans[, interleave.VGAM(ncol(ans), M1 = M1)] ans }), list( .lonempobs0 = lonempobs0, .llambda = llambda, .eonempobs0 = eonempobs0, .elambda = elambda ))), weight = eval(substitute(expression({ wz <- matrix(0.0, n, M1 * NOS) temp5 <- expm1(lambda) ned2l.dlambda2 <- (1 - phimat) * (temp5 + 1) * (1 / lambda - 1 / temp5) / temp5 wz[, 0 * NOS + (1:NOS)] <- c(w) * ned2l.dlambda2 * dlambda.deta^2 tmp100 <- mu.phi0 * (1.0 - mu.phi0) tmp200 <- if ( .lonempobs0 == "logit" && is.empty.list( .eonempobs0 )) { cbind(c(w) * tmp100) } else { cbind(c(w) * (1 / tmp100) * dtheta.deta(mu.phi0, link = .lonempobs0, earg = .eonempobs0)^2) } wz[, 1 * NOS + (1:NOS)] <- tmp200 wz <- wz[, interleave.VGAM(ncol(wz), M1 = M1)] wz }), list( .lonempobs0 = lonempobs0, .eonempobs0 = eonempobs0 )))) } # End of zapoissonff zanegbinomial.control <- function(save.weights = TRUE, ...) 
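# Control function for zanegbinomial(): the working weights are saved on
# the fitted object because parts of the EIM for 'size' may be obtained
# by simulated Fisher scoring (see the weight slot below), and
# recomputing them would require fresh simulation.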
{ list(save.weights = save.weights) } zanegbinomial <- function( zero = "size", type.fitted = c("mean", "munb", "pobs0"), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, # higher is better for large 'size' eps.trig = 1e-7, max.support = 4000, # 20160127; I have changed this max.chunk.MB = 30, # max.memory = Inf is allowed lpobs0 = "logit", lmunb = "loge", lsize = "loge", imethod = 1, ipobs0 = NULL, imunb = NULL, iprobs.y = NULL, gprobs.y = (0:9)/10, # 20160709; grid for finding munb.init isize = NULL, gsize.mux = exp(c(-30, -20, -15, -10, -6:3))) { if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (!is.Numeric(eps.trig, length.arg = 1, positive = TRUE) || eps.trig > 0.001) stop("argument 'eps.trig' must be positive and smaller in value") if (!is.Numeric(nsimEIM, length.arg = 1, positive = TRUE, integer.valued = TRUE)) stop("argument 'nsimEIM' must be a positive integer") if (nsimEIM <= 30) warning("argument 'nsimEIM' should be greater than 30, say") if (length(ipobs0) && (!is.Numeric(ipobs0, positive = TRUE) || max(ipobs0) >= 1)) stop("If given, argument 'ipobs0' must contain values in (0,1) only") if (length(isize) && !is.Numeric(isize, positive = TRUE)) stop("If given, argument 'isize' must contain positive values only") lpobs0 <- as.list(substitute(lpobs0)) epobs0 <- link2list(lpobs0) lpobs0 <- attr(epobs0, "function.name") lmunb <- as.list(substitute(lmunb)) emunb <- link2list(lmunb) lmunb <- attr(emunb, "function.name") lsize <- as.list(substitute(lsize)) esize <- link2list(lsize) lsize <- attr(esize, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "munb", "pobs0"))[1] ipobs0.small <- 1/64 # A number easily represented exactly new("vglmff", blurb = c("Zero-altered negative binomial (Bernoulli and\n", "positive-negative binomial conditional model)\n\n", "Links: ", namesof("pobs0", lpobs0, earg = epobs0, tag = FALSE), ", ", namesof("munb", lmunb, earg = emunb, tag = FALSE), ", ", namesof("size", lsize, earg = esize, tag = FALSE), "\n", "Mean: (1 - pobs0) * munb / (1 - (size / (size + ", "munb))^size)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
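# A minimal usage sketch of zanegbinomial().  The data frame 'zdata' and
# its columns are hypothetical, for illustration only:
#   zdata <- data.frame(x2 = runif(nn <- 500))
#   zdata <- transform(zdata, y1 = rzanegbin(nn, munb = exp(1 + x2),
#                                            size = exp(1), pobs0 = 0.25))
#   fit <- vglm(y1 ~ x2, zanegbinomial, data = zdata, trace = TRUE)
#   coef(fit, matrix = TRUE)  # logit(pobs0), loge(munb), loge(size)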
{ list(M1 = 3, Q1 = 1, expected = TRUE, mds.min = .mds.min , imethod = .imethod , multipleResponses = TRUE, parameters.names = c("pobs0", "munb", "size"), nsimEIM = .nsimEIM , eps.trig = .eps.trig , type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .imethod = imethod, .nsimEIM = nsimEIM, .eps.trig = eps.trig, .type.fitted = type.fitted, .mds.min = mds.min ))), initialize = eval(substitute(expression({ M1 <- 3 temp16 <- w.y.check(w = w, y = y, Is.integer.y = TRUE, Is.nonnegative.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp16$w y <- temp16$y extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species M <- M1 * ncoly extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("pobs0", NOS) mynames2 <- param.names("munb", NOS) mynames3 <- param.names("size", NOS) predictors.names <- c(namesof(mynames1, .lpobs0 , earg = .epobs0 , tag = FALSE), namesof(mynames2, .lmunb , earg = .emunb , tag = FALSE), namesof(mynames3, .lsize , earg = .esize , tag = FALSE))[ interleave.VGAM(M1*NOS, M1 = M1)] extra$y0 <- y0 <- ifelse(y == 0, 1, 0) extra$skip.these <- skip.these <- matrix(as.logical(y0), n, NOS) gprobs.y <- .gprobs.y imunb <- .imunb # Default in NULL if (length(imunb)) imunb <- matrix(imunb, n, NOS, byrow = TRUE) if (!length(etastart)) { munb.init <- size.init <- matrix(NA_real_, n, NOS) gprobs.y <- .gprobs.y if (length( .iprobs.y )) gprobs.y <- .iprobs.y gsize.mux <- .gsize.mux # gsize.mux is on a relative scale for (jay in 1:NOS) { # For each response 'y_jay'... do: TFvec <- y[, jay] > 0 # Important to exclude the 0s posyvec <- y[TFvec, jay] munb.init.jay <- if ( .imethod == 1 ) { quantile(posyvec, probs = gprobs.y) - 1/2 # + 1/16 } else { weighted.mean(posyvec, w = w[TFvec, jay]) - 1/2 } if (length(imunb)) munb.init.jay <- imunb[, jay] gsize <- gsize.mux * 0.5 * (mean(munb.init.jay) + weighted.mean(posyvec, w = w[TFvec, jay])) if (length( .isize )) gsize <- .isize # isize is on an absolute scale try.this <- grid.search2(munb.init.jay, gsize, objfun = posNBD.Loglikfun2, y = posyvec, # x = x[TFvec, , drop = FALSE], w = w[TFvec, jay], ret.objfun = TRUE) # Last value is the loglik munb.init[, jay] <- try.this["Value1"] size.init[, jay] <- try.this["Value2"] } # for (jay ...) pobs0.init <- matrix(if (length( .ipobs0 )) .ipobs0 else -1, nrow = n, ncol = NOS, byrow = TRUE) for (jay in 1:NOS) { if (any(pobs0.init[, jay] < 0)) { index.y0 <- (y[, jay] < 0.5) pobs0.init[, jay] <- max(min(weighted.mean(index.y0, w[, jay]), 1 - .ipobs0.small ), .ipobs0.small ) } } etastart <- cbind(theta2eta(pobs0.init, .lpobs0 , earg = .epobs0 ), theta2eta(munb.init, .lmunb , earg = .emunb ), theta2eta(size.init, .lsize , earg = .esize )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M1 = M1)] } # End of if (!length(etastart)) }), list( .lpobs0 = lpobs0, .lmunb = lmunb, .lsize = lsize, .epobs0 = epobs0, .emunb = emunb, .esize = esize, .ipobs0 = ipobs0, .isize = isize, .ipobs0.small = ipobs0.small, .imunb = imunb, .gprobs.y = gprobs.y, .gsize.mux = gsize.mux, .imethod = imethod, .type.fitted = type.fitted, .iprobs.y = iprobs.y ))), linkinv = eval(substitute(function(eta, extra = NULL) { type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "munb", "pobs0"))[1] M1 <- 3 NOS <- ncol(eta) / M1 phi0 <- eta2theta(eta[, M1*(1:NOS)-2], .lpobs0 , earg = .epobs0 ) munb <- eta2theta(eta[, M1*(1:NOS)-1], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, M1*(1:NOS) ], .lsize , earg = .esize ) tempk <- 1 / (1 + munb / kmat) # kmat / (kmat + munb) prob0 <- tempk^kmat # p(0) from negative binomial oneminusf0 <- 1 - prob0 smallval <- .mds.min # Something like this is needed if (any(big.size <- munb / kmat < smallval)) { prob0[big.size] <- exp(-munb[big.size]) # The limit as kmat-->Inf oneminusf0[big.size] <- -expm1(-munb[big.size]) } ans <- switch(type.fitted, "mean" = (1 - phi0) * munb / oneminusf0, "munb" = munb, "pobs0" = phi0) # P(Y=0) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lpobs0 = lpobs0, .lsize = lsize, .lmunb = lmunb, .epobs0 = epobs0, .emunb = emunb, .esize = esize, .mds.min = mds.min ))), last = eval(substitute(expression({ misc$link <- c(rep_len( .lpobs0 , NOS), rep_len( .lmunb , NOS), rep_len( .lsize , NOS))[interleave.VGAM(M1*NOS, M1 = M1)] temp.names <- c(mynames1, mynames2, mynames3)[interleave.VGAM(M1*NOS, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M1*NOS) names(misc$earg) <- temp.names for (ii in 1:NOS) { misc$earg[[M1*ii - 2]] <- .epobs0 misc$earg[[M1*ii - 1]] <- .emunb misc$earg[[M1*ii ]] <- .esize } misc$nsimEIM <- .nsimEIM misc$ipobs0 <- .ipobs0 misc$isize <- .isize misc$multipleResponses <- TRUE }), list( .lpobs0 = lpobs0, .lmunb = lmunb, .lsize = lsize, .epobs0 = epobs0, .emunb = emunb, .esize = esize, .ipobs0 = ipobs0, .isize = isize, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { M1 <- 3 NOS <- ncol(eta) / M1 phi0 <- eta2theta(eta[, M1*(1:NOS)-2], .lpobs0 , earg = .epobs0 ) munb <- eta2theta(eta[, M1*(1:NOS)-1], .lmunb , earg = .emunb ) size <- eta2theta(eta[, M1*(1:NOS) ], .lsize , earg = .esize ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dzanegbin(x = y, pobs0 = phi0, munb = munb, size = size, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lpobs0 = lpobs0, .lmunb = lmunb, .lsize = lsize, .epobs0 = epobs0, .emunb = emunb, .esize = esize ))), vfamily = c("zanegbinomial"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) phi0 <- eta2theta(eta[, c(TRUE, FALSE, FALSE)], .lpobs0 , earg = .epobs0 ) munb <- eta2theta(eta[, c(FALSE, TRUE, FALSE)], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, c(FALSE, FALSE, TRUE)], .lsize , earg = .esize ) rzanegbin(nsim * length(munb), pobs0 = phi0, munb = munb, size = kmat) }, list( .lpobs0 = lpobs0, .lmunb = lmunb, .lsize = lsize, .epobs0 = epobs0, .emunb = emunb, .esize = esize ))), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- 3 NOS <- ncol(eta) / M1 phi0 <- eta2theta(eta[, M1*(1:NOS)-2, drop = FALSE], .lpobs0 , earg = .epobs0 ) munb <- eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lmunb , earg = .emunb ) size <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lsize , earg = .esize ) okay1 <- all(is.finite(munb)) && all(0 < munb) && all(is.finite(size)) && all(0 < size) && all(is.finite(phi0)) && all(0 < phi0 & phi0 < 1) smallval <- .mds.min # .munb.div.size overdispersion <- if (okay1) 
all(munb / size > smallval) else FALSE if (!overdispersion) warning("parameter 'size' has very large values; ", "try fitting a zero-altered Poisson ", "model instead.") okay1 && overdispersion }, list( .lpobs0 = lpobs0, .lmunb = lmunb, .lsize = lsize, .epobs0 = epobs0, .emunb = emunb, .esize = esize, .mds.min = mds.min))), deriv = eval(substitute(expression({ M1 <- 3 NOS <- ncol(eta) / M1 y0 <- extra$y0 phi0 <- eta2theta(eta[, M1*(1:NOS)-2, drop = FALSE], .lpobs0 , earg = .epobs0 ) munb <- eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lsize , earg = .esize ) skip <- extra$skip.these dphi0.deta <- dtheta.deta(phi0, .lpobs0 , earg = .epobs0 ) dmunb.deta <- dtheta.deta(munb, .lmunb , earg = .emunb ) dsize.deta <- dtheta.deta(kmat, .lsize , earg = .esize ) smallval <- .mds.min # Something like this is needed if (any(big.size <- munb / kmat < smallval)) { if (FALSE) warning("parameter 'size' has very large values; ", "try fitting a zero-altered Poisson ", "model instead") kmat[big.size] <- munb[big.size] / smallval } tempk <- 1 / (1 + munb / kmat) # kmat / (kmat + munb) tempm <- munb / (kmat + munb) prob0 <- tempk^kmat oneminusf0 <- 1 - prob0 AA16 <- tempm + log(tempk) df0.dmunb <- -tempk * prob0 df0.dkmat <- prob0 * AA16 df02.dmunb2 <- prob0 * tempk * (1 + 1/kmat) / (1 + munb/kmat) df02.dkmat2 <- prob0 * ((tempm^2) / kmat + AA16^2) df02.dkmat.dmunb <- -prob0 * (tempm/kmat + AA16) / (1 + munb/kmat) if (any(big.size)) { prob0[big.size] <- exp(-munb[big.size]) # The limit as kmat-->Inf oneminusf0[big.size] <- -expm1(-munb[big.size]) df0.dmunb[big.size] <- -tempk[big.size] * prob0[big.size] df0.dkmat[big.size] <- prob0[big.size] * AA16[big.size] df02.dmunb2[big.size] <- prob0[big.size] * tempk[big.size] * (1 + 1/kmat[big.size]) / (1 + smallval) df02.dkmat2[big.size] <- prob0[big.size] * ((tempm[big.size])^2 / kmat[big.size] + AA16[big.size]^2) df02.dkmat.dmunb[big.size] <- -prob0[big.size] * (tempm[big.size]/kmat[big.size] + AA16[big.size]) / (1+smallval) } mymu <- munb / oneminusf0 # E(Y) of Pos-NBD dl.dphi0 <- -1 / (1 - phi0) dl.dmunb <- y / munb - (1 + y/kmat) / (1 + munb/kmat) + df0.dmunb / oneminusf0 dl.dsize <- digamma(y + kmat) - digamma(kmat) - (y - munb) / (munb + kmat) + log(tempk) + df0.dkmat / oneminusf0 if (any(big.size)) { } dl.dphi0[y == 0] <- 1 / phi0[y == 0] # Do it in one line skip <- extra$skip.these for (spp. in 1:NOS) { dl.dsize[skip[, spp.], spp.] <- dl.dmunb[skip[, spp.], spp.] 
<- 0 } dl.deta23 <- c(w) * cbind(dl.dmunb * dmunb.deta, dl.dsize * dsize.deta) dl.deta1 <- if ( .lpobs0 == "logit") { c(w) * (y0 - phi0) } else { c(w) * dl.dphi0 * dphi0.deta } ans <- cbind(dl.deta1, dl.deta23) ans <- ans[, interleave.VGAM(ncol(ans), M1 = M1)] ans }), list( .lpobs0 = lpobs0 , .lmunb = lmunb , .lsize = lsize , .epobs0 = epobs0 , .emunb = emunb , .esize = esize, .mds.min = mds.min ))), weight = eval(substitute(expression({ wz <- matrix(0, n, M + M-1) # tridiagonal max.support <- .max.support max.chunk.MB <- .max.chunk.MB mu.phi0 <- phi0 # pobs0 # phi0 tmp100 <- mu.phi0 * (1 - mu.phi0) wz[, (1:NOS)*M1 - 2] <- if ( .lpobs0 == "logit" && is.empty.list( .epobs0 )) { cbind(c(w) * tmp100) } else { cbind(c(w) * (1 / tmp100) * dtheta.deta(mu.phi0, link = .lpobs0 , earg = .epobs0 )^2) } ned2l.dmunb2 <- mymu / munb^2 - ((1 + mymu/kmat) / kmat) / (1 + munb/kmat)^2 - df02.dmunb2 / oneminusf0 - (df0.dmunb / oneminusf0)^2 wz[, M1*(1:NOS) - 1] <- c(w) * (1 - phi0) * ned2l.dmunb2 * dmunb.deta^2 ned2l.dmunbsize <- (munb - mymu) / (munb + kmat)^2 - df02.dkmat.dmunb / oneminusf0 - df0.dmunb * df0.dkmat / oneminusf0^2 wz[, M + M1*(1:NOS) - 1] <- c(w) * (1 - phi0) * ned2l.dmunbsize * dmunb.deta * dsize.deta ind2 <- matrix(FALSE, n, NOS) # Used for SFS for (jay in 1:NOS) { eff.p <- sort(c( .cutoff.prob , 1 - .cutoff.prob )) Q.mins <- 1 Q.maxs <- qposnegbin(p = eff.p[2] , munb = munb[, jay], size = kmat[, jay]) + 10 eps.trig <- .eps.trig Q.MAXS <- pmax(10, ceiling(1 / sqrt(eps.trig))) Q.maxs <- pmin(Q.maxs, Q.MAXS) ind1 <- if (max.chunk.MB > 0) (Q.maxs - Q.mins < max.support) else FALSE if ((NN <- sum(ind1)) > 0) { Object.Size <- NN * 8 * max(Q.maxs - Q.mins) / (2^20) n.chunks <- if (intercept.only) 1 else max(1, ceiling( Object.Size / max.chunk.MB)) chunk.rows <- ceiling(NN / n.chunks) ind2[, jay] <- ind1 # Save this wind2 <- which(ind1) upr.ptr <- 0 lwr.ptr <- upr.ptr + 1 while (lwr.ptr <= NN) { upr.ptr <- min(upr.ptr + chunk.rows, NN) sind2 <- wind2[lwr.ptr:upr.ptr] wz[sind2, M1*jay] <- EIM.posNB.specialp(munb = munb[sind2, jay], size = kmat[sind2, jay], y.max = max(Q.maxs[sind2]), cutoff.prob = .cutoff.prob , prob0 = prob0[sind2, jay], df0.dkmat = df0.dkmat[sind2, jay], df02.dkmat2 = df02.dkmat2[sind2, jay], intercept.only = intercept.only) if (FALSE) wz2[sind2, M1*jay] <- EIM.posNB.speciald(munb = munb[sind2, jay], size = kmat[sind2, jay], y.min = min(Q.mins2[sind2]), y.max = max(Q.maxs[sind2]), cutoff.prob = .cutoff.prob , prob0 = prob0[sind2, jay], df0.dkmat = df0.dkmat[sind2, jay], df02.dkmat2 = df02.dkmat2[sind2, jay], intercept.only = intercept.only) # * if (any(eim.kk.TF <- wz[sind2, M1*jay] <= 0 | is.na(wz[sind2, M1*jay]))) { ind2[sind2[eim.kk.TF], jay] <- FALSE } lwr.ptr <- upr.ptr + 1 } # while } # if } # end of for (jay in 1:NOS) for (jay in 1:NOS) { run.varcov <- 0 ii.TF <- !ind2[, jay] # Not assigned above if (any(ii.TF)) { kkvec <- kmat[ii.TF, jay] muvec <- munb[ii.TF, jay] for (ii in 1:( .nsimEIM )) { ysim <- rzanegbin(sum(ii.TF), munb = muvec, size = kkvec, pobs0 = phi0[ii.TF, jay]) dl.dk <- digamma(ysim + kkvec) - digamma(kkvec) - (ysim - muvec) / (muvec + kkvec) + log1p(-muvec / (kkvec + muvec)) + df0.dkmat[ii.TF, jay] / oneminusf0[ii.TF, jay] dl.dk[ysim == 0] <- 0 run.varcov <- run.varcov + dl.dk^2 } # end of for loop run.varcov <- c(run.varcov / .nsimEIM ) ned2l.dk2 <- if (intercept.only) mean(run.varcov) else run.varcov wz[ii.TF, M1*jay] <- ned2l.dk2 # * (dsize.deta[ii.TF, jay])^2 } } # jay wz[, M1*(1:NOS) ] <- wz[, M1*(1:NOS) ] * dsize.deta^2 save.weights <- !all(ind2) 
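# Any EIM elements for 'size' that EIM.posNB.specialp() could not supply
# reliably (ind2 entry FALSE) have just been filled in by simulated
# Fisher scoring: the score for 'size' is squared and averaged over
# .nsimEIM random draws from the fitted ZANB.  save.weights records
# whether any simulation was used, so such weights are kept on the fit.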
wz[, M1*(1:NOS) ] <- c(w) * (1 - phi0) * wz[, M1*(1:NOS) ] wz }), list( .lpobs0 = lpobs0, .epobs0 = epobs0, .cutoff.prob = cutoff.prob, .eps.trig = eps.trig, .max.support = max.support, .max.chunk.MB = max.chunk.MB, .nsimEIM = nsimEIM )))) } # End of zanegbinomial() zanegbinomialff.control <- function(save.weights = TRUE, ...) { list(save.weights = save.weights) } zanegbinomialff <- function( lmunb = "loge", lsize = "loge", lonempobs0 = "logit", type.fitted = c("mean", "munb", "pobs0", "onempobs0"), isize = NULL, ionempobs0 = NULL, zero = c("size", "onempobs0"), mds.min = 1e-3, iprobs.y = NULL, # 0.35, gprobs.y = (0:9)/10, # 20160709; grid for finding munb.init cutoff.prob = 0.999, # higher is better for large 'size' eps.trig = 1e-7, max.support = 4000, # 20160127; I have changed this max.chunk.MB = 30, # max.memory = Inf is allowed gsize.mux = exp(c(-30, -20, -15, -10, -6:3)), imethod = 1, imunb = NULL, nsimEIM = 500) { if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (!is.Numeric(eps.trig, length.arg = 1, positive = TRUE) || eps.trig > 0.001) stop("argument 'eps.trig' must be positive and smaller in value") if (!is.Numeric(nsimEIM, length.arg = 1, positive = TRUE, integer.valued = TRUE)) stop("argument 'nsimEIM' must be a positive integer") if (nsimEIM <= 30) warning("argument 'nsimEIM' should be greater than 30, say") if (length(ionempobs0) && (!is.Numeric(ionempobs0, positive = TRUE) || max(ionempobs0) >= 1)) stop("If given, argument 'ionempobs0' must contain values ", "in (0,1) only") if (length(isize) && !is.Numeric(isize, positive = TRUE)) stop("If given, argument 'isize' must contain positive values only") lmunb <- as.list(substitute(lmunb)) emunb <- link2list(lmunb) lmunb <- attr(emunb, "function.name") lsize <- as.list(substitute(lsize)) esize <- link2list(lsize) lsize <- attr(esize, "function.name") lonempobs0 <- as.list(substitute(lonempobs0)) eonempobs0 <- link2list(lonempobs0) lonempobs0 <- attr(eonempobs0, "function.name") ipobs0.small <- 1/64 # A number easily represented exactly type.fitted <- match.arg(type.fitted, c("mean", "munb", "pobs0", "onempobs0"))[1] new("vglmff", blurb = c("Zero-altered negative binomial (Bernoulli and\n", "positive-negative binomial conditional model)\n\n", "Links: ", namesof("munb", lmunb, earg = emunb, tag = FALSE), ", ", namesof("size", lsize, earg = esize, tag = FALSE), ", ", namesof("onempobs0", lonempobs0, earg = eonempobs0, tag = FALSE), "\n", "Mean: onempobs0 * munb / (1 - (size / (size + ", "munb))^size)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
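# zanegbinomialff() is the reparameterized variant: the linear predictors
# are ordered (munb, size, onempobs0), where onempobs0 = 1 - pobs0 =
# P(Y > 0); by default both 'size' and 'onempobs0' are intercept-only.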
{ list(M1 = 3, Q1 = 1, expected = TRUE, mds.min = .mds.min , multipleResponses = TRUE, nsimEIM = .nsimEIM , parameters.names = c("munb", "size", "onempobs0"), eps.trig = .eps.trig , type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .nsimEIM = nsimEIM, .eps.trig = eps.trig, .type.fitted = type.fitted, .mds.min = mds.min ))), initialize = eval(substitute(expression({ M1 <- 3 temp16 <- w.y.check(w = w, y = y, Is.integer.y = TRUE, Is.nonnegative.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp16$w y <- temp16$y extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species M <- M1 * ncoly extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("munb", NOS) mynames2 <- param.names("size", NOS) mynames3 <- param.names("onempobs0", NOS) predictors.names <- c(namesof(mynames1, .lmunb , earg = .emunb , tag = FALSE), namesof(mynames2, .lsize , earg = .esize , tag = FALSE), namesof(mynames3, .lonempobs0 , earg = .eonempobs0 , tag = FALSE))[ interleave.VGAM(M1*NOS, M1 = M1)] extra$y0 <- y0 <- ifelse(y == 0, 1, 0) extra$skip.these <- skip.these <- matrix(as.logical(y0), n, NOS) gprobs.y <- .gprobs.y imunb <- .imunb # Default in NULL if (length(imunb)) imunb <- matrix(imunb, n, NOS, byrow = TRUE) if (!length(etastart)) { munb.init <- size.init <- matrix(NA_real_, n, NOS) gprobs.y <- .gprobs.y if (length( .iprobs.y )) gprobs.y <- .iprobs.y gsize.mux <- .gsize.mux # gsize.mux is on a relative scale for (jay in 1:NOS) { # For each response 'y_jay'... do: TFvec <- y[, jay] > 0 # Important to exclude the 0s posyvec <- y[TFvec, jay] munb.init.jay <- if ( .imethod == 1 ) { quantile(posyvec, probs = gprobs.y) - 1/2 # + 1/16 } else { weighted.mean(posyvec, w = w[TFvec, jay]) - 1/2 } if (length(imunb)) munb.init.jay <- imunb[, jay] gsize <- gsize.mux * 0.5 * (mean(munb.init.jay) + weighted.mean(posyvec, w = w[TFvec, jay])) if (length( .isize )) gsize <- .isize # isize is on an absolute scale try.this <- grid.search2(munb.init.jay, gsize, objfun = posNBD.Loglikfun2, y = posyvec, # x = x[TFvec, , drop = FALSE], w = w[TFvec, jay], ret.objfun = TRUE) # Last value is the loglik munb.init[, jay] <- try.this["Value1"] size.init[, jay] <- try.this["Value2"] } # for (jay ...) pobs0.init <- matrix(if (length( .ionempobs0 )) 1 - .ionempobs0 else -1, n, NOS, byrow = TRUE) for (jay in 1:NOS) { if (any(pobs0.init[, jay] < 0)) { index.y0 <- y[, jay] < 0.5 pobs0.init[, jay] <- max(min(mean(index.y0), 1 - .ipobs0.small ), .ipobs0.small ) } } etastart <- cbind(theta2eta(munb.init , .lmunb , earg = .emunb ), theta2eta(size.init , .lsize , earg = .esize ), theta2eta(1 - pobs0.init, .lonempobs0 , earg = .eonempobs0 )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M1 = M1)] } # End of if (!length(etastart)) }), list( .lonempobs0 = lonempobs0, .lmunb = lmunb, .lsize = lsize, .eonempobs0 = eonempobs0, .emunb = emunb, .esize = esize, .ionempobs0 = ionempobs0, .imunb = imunb, .isize = isize, .gprobs.y = gprobs.y, .gsize.mux = gsize.mux, .ipobs0.small = ipobs0.small, .imethod = imethod, .iprobs.y = iprobs.y, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "munb", "pobs0", "onempobs0"))[1] M1 <- 3 NOS <- ncol(eta) / M1 munb <- eta2theta(eta[, M1*(1:NOS)-2], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, M1*(1:NOS)-1], .lsize , earg = .esize ) onempobs0 <- eta2theta(eta[, M1*(1:NOS) ], .lonempobs0 , earg = .eonempobs0 ) tempk <- 1 / (1 + munb / kmat) # kmat / (kmat + munb); NBD p(0) prob0 <- tempk^kmat # p(0) from negative binomial oneminusf0 <- 1 - prob0 smallval <- .mds.min # Something like this is needed if (any(big.size <- munb / kmat < smallval)) { prob0[big.size] <- exp(-munb[big.size]) # The limit as kmat-->Inf oneminusf0[big.size] <- -expm1(-munb[big.size]) } ans <- switch(type.fitted, "mean" = onempobs0 * munb / oneminusf0, "munb" = munb, "pobs0" = 1 - onempobs0, # P(Y=0) "onempobs0" = onempobs0) # P(Y>0) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lonempobs0 = lonempobs0, .lsize = lsize, .lmunb = lmunb, .eonempobs0 = eonempobs0, .emunb = emunb, .esize = esize, .mds.min = mds.min ))), last = eval(substitute(expression({ misc$link <- c(rep_len( .lmunb , NOS), rep_len( .lsize , NOS), rep_len( .lonempobs0 , NOS))[ interleave.VGAM(M1*NOS, M1 = M1)] temp.names <- c(mynames1, mynames2, mynames3)[interleave.VGAM(M1*NOS, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M1*NOS) names(misc$earg) <- temp.names for (ii in 1:NOS) { misc$earg[[M1*ii-2]] <- .emunb misc$earg[[M1*ii-1]] <- .esize misc$earg[[M1*ii ]] <- .eonempobs0 } misc$nsimEIM <- .nsimEIM misc$imethod <- .imethod misc$ionempobs0 <- .ionempobs0 misc$isize <- .isize misc$multipleResponses <- TRUE }), list( .lonempobs0 = lonempobs0, .lmunb = lmunb, .lsize = lsize, .eonempobs0 = eonempobs0, .emunb = emunb, .esize = esize, .ionempobs0 = ionempobs0, .isize = isize, .nsimEIM = nsimEIM, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { M1 <- 3 NOS <- ncol(eta) / M1 munb <- eta2theta(eta[, M1*(1:NOS)-2], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, M1*(1:NOS)-1], .lsize , earg = .esize ) onempobs0 <- eta2theta(eta[, M1*(1:NOS) ], .lonempobs0 , earg = .eonempobs0 ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dzanegbin(x = y, pobs0 = 1 - onempobs0, munb = munb, size = kmat, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lonempobs0 = lonempobs0, .lmunb = lmunb, .lsize = lsize, .eonempobs0 = eonempobs0, .emunb = emunb, .esize = esize ))), vfamily = c("zanegbinomialff"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) munb <- eta2theta(eta[, c(TRUE, FALSE, FALSE)], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, c(FALSE, TRUE, FALSE)], .lsize , earg = .esize ) onempobs0 <- eta2theta(eta[, c(FALSE, FALSE, TRUE)], .lonempobs0 , earg = .eonempobs0 ) rzanegbin(nsim * length(munb), pobs0 = 1 - onempobs0, munb = munb, size = kmat) }, list( .lonempobs0 = lonempobs0, .lmunb = lmunb, .lsize = lsize, .eonempobs0 = eonempobs0, .emunb = emunb, .esize = esize ))), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- 3 NOS <- ncol(eta) / M1 munb <- eta2theta(eta[, M1*(1:NOS)-2, drop = FALSE], .lmunb , earg = .emunb ) size <- eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lsize , earg = .esize ) onempobs0 <- eta2theta(eta[, 
M1*(1:NOS) , drop = FALSE], .lonempobs0 , earg = .eonempobs0 ) okay1 <- all(is.finite(munb)) && all(0 < munb) && all(is.finite(size)) && all(0 < size) && all(is.finite(onempobs0)) && all(0 < onempobs0 & onempobs0 < 1) smallval <- .mds.min # .munb.div.size overdispersion <- if (okay1) all(munb / size > smallval) else FALSE if (!overdispersion) warning("parameter 'size' has very large values; ", "try fitting a zero-altered Poisson ", "model instead.") okay1 && overdispersion }, list( .lonempobs0 = lonempobs0, .lmunb = lmunb, .lsize = lsize, .eonempobs0 = eonempobs0, .emunb = emunb, .esize = esize, .mds.min = mds.min))), deriv = eval(substitute(expression({ M1 <- 3 NOS <- ncol(eta) / M1 y0 <- extra$y0 munb <- eta2theta(eta[, M1*(1:NOS)-2, drop = FALSE], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lsize , earg = .esize ) onempobs0 <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lonempobs0 , earg = .eonempobs0 ) skip <- extra$skip.these phi0 <- 1 - onempobs0 dmunb.deta <- dtheta.deta(munb, .lmunb , earg = .emunb ) dsize.deta <- dtheta.deta(kmat, .lsize , earg = .esize ) donempobs0.deta <- dtheta.deta(onempobs0, .lonempobs0 , earg = .eonempobs0 ) smallval <- .mds.min # Something like this is needed if (any(big.size <- munb / kmat < smallval)) { if (FALSE) warning("parameter 'size' has very large values; ", "try fitting a zero-altered Poisson ", "model instead") kmat[big.size] <- munb[big.size] / smallval } tempk <- 1 / (1 + munb / kmat) # kmat / (kmat + munb) tempm <- munb / (kmat + munb) prob0 <- tempk^kmat oneminusf0 <- 1 - prob0 AA16 <- tempm + log(tempk) df0.dmunb <- -tempk * prob0 df0.dkmat <- prob0 * AA16 df02.dmunb2 <- prob0 * tempk * (1 + 1/kmat) / (1 + munb/kmat) df02.dkmat2 <- prob0 * ((tempm^2) / kmat + AA16^2) df02.dkmat.dmunb <- -prob0 * (tempm/kmat + AA16) / (1 + munb/kmat) mymu <- munb / oneminusf0 # E(Y) of Pos-NBD dl.dmunb <- y / munb - (1 + y/kmat) / (1 + munb/kmat) + df0.dmunb / oneminusf0 dl.dsize <- digamma(y + kmat) - digamma(kmat) - (y - munb) / (munb + kmat) + log(tempk) + df0.dkmat / oneminusf0 dl.donempobs0 <- +1 / (onempobs0) if (any(big.size)) { } dl.donempobs0[y == 0] <- -1 / (1 - onempobs0[y == 0]) # Do it in 1 line skip <- extra$skip.these for (spp. in 1:NOS) { dl.dsize[skip[, spp.], spp.] <- dl.dmunb[skip[, spp.], spp.] 
<- 0 } dl.deta12 <- c(w) * cbind(dl.dmunb * dmunb.deta, dl.dsize * dsize.deta) dl.deta3 <- if ( .lonempobs0 == "logit") { -c(w) * (y0 - phi0) } else { -c(w) * dl.donempobs0 * donempobs0.deta } ans <- cbind(dl.deta12, dl.deta3) ans <- ans[, interleave.VGAM(ncol(ans), M1 = M1)] ans }), list( .lonempobs0 = lonempobs0 , .lmunb = lmunb , .lsize = lsize , .eonempobs0 = eonempobs0 , .emunb = emunb , .esize = esize, .mds.min = mds.min ))), weight = eval(substitute(expression({ wz <- matrix(0, n, M + M-1) # tridiagonal max.support <- .max.support max.chunk.MB <- .max.chunk.MB tmp100 <- onempobs0 * (1 - onempobs0) wz[, (1:NOS)*M1 ] <- if ( .lonempobs0 == "logit" && is.empty.list( .eonempobs0 )) { cbind(c(w) * tmp100) } else { cbind(c(w) * (1 / tmp100) * dtheta.deta(onempobs0, link = .lonempobs0 , earg = .eonempobs0 )^2) } ned2l.dmunb2 <- mymu / munb^2 - ((1 + mymu/kmat) / kmat) / (1 + munb/kmat)^2 - df02.dmunb2 / oneminusf0 - (df0.dmunb / oneminusf0)^2 wz[, M1*(1:NOS) - 2] <- c(w) * (1 - phi0) * ned2l.dmunb2 * dmunb.deta^2 ned2l.dmunbsize <- (munb - mymu) / (munb + kmat)^2 - df02.dkmat.dmunb / oneminusf0 - df0.dmunb * df0.dkmat / oneminusf0^2 wz[, M + M1*(1:NOS) - 2] <- c(w) * (1 - phi0) * ned2l.dmunbsize * dmunb.deta * dsize.deta ind2 <- matrix(FALSE, n, NOS) # Used for SFS for (jay in 1:NOS) { eff.p <- sort(c( .cutoff.prob , 1 - .cutoff.prob )) Q.mins <- 1 Q.maxs <- qposnegbin(p = eff.p[2] , munb = munb[, jay], size = kmat[, jay]) + 10 eps.trig <- .eps.trig Q.MAXS <- pmax(10, ceiling(1 / sqrt(eps.trig))) Q.maxs <- pmin(Q.maxs, Q.MAXS) ind1 <- if (max.chunk.MB > 0) (Q.maxs - Q.mins < max.support) else FALSE if ((NN <- sum(ind1)) > 0) { Object.Size <- NN * 8 * max(Q.maxs - Q.mins) / (2^20) n.chunks <- if (intercept.only) 1 else max(1, ceiling( Object.Size / max.chunk.MB)) chunk.rows <- ceiling(NN / n.chunks) ind2[, jay] <- ind1 # Save this wind2 <- which(ind1) upr.ptr <- 0 lwr.ptr <- upr.ptr + 1 while (lwr.ptr <= NN) { upr.ptr <- min(upr.ptr + chunk.rows, NN) sind2 <- wind2[lwr.ptr:upr.ptr] wz[sind2, M1*jay - 1] <- EIM.posNB.specialp(munb = munb[sind2, jay], size = kmat[sind2, jay], y.max = max(Q.maxs[sind2]), cutoff.prob = .cutoff.prob , prob0 = prob0[sind2, jay], df0.dkmat = df0.dkmat[sind2, jay], df02.dkmat2 = df02.dkmat2[sind2, jay], intercept.only = intercept.only) if (FALSE) wz2[sind2, M1*jay - 1] <- EIM.posNB.speciald(munb = munb[sind2, jay], size = kmat[sind2, jay], y.min = min(Q.mins2[sind2]), y.max = max(Q.maxs[sind2]), cutoff.prob = .cutoff.prob , prob0 = prob0[sind2, jay], df0.dkmat = df0.dkmat[sind2, jay], df02.dkmat2 = df02.dkmat2[sind2, jay], intercept.only = intercept.only) # * if (any(eim.kk.TF <- wz[sind2, M1*jay - 1] <= 0 | is.na(wz[sind2, M1*jay - 1]))) { ind2[sind2[eim.kk.TF], jay] <- FALSE } lwr.ptr <- upr.ptr + 1 } # while } # if } # end of for (jay in 1:NOS) for (jay in 1:NOS) { run.varcov <- 0 ii.TF <- !ind2[, jay] # Not assigned above if (any(ii.TF)) { kkvec <- kmat[ii.TF, jay] muvec <- munb[ii.TF, jay] for (ii in 1:( .nsimEIM )) { ysim <- rzanegbin(sum(ii.TF), munb = muvec, size = kkvec, pobs0 = phi0[ii.TF, jay]) dl.dk <- digamma(ysim + kkvec) - digamma(kkvec) - (ysim - muvec) / (muvec + kkvec) + log1p(-muvec / (kkvec + muvec)) + df0.dkmat[ii.TF, jay] / oneminusf0[ii.TF, jay] dl.dk[ysim == 0] <- 0 run.varcov <- run.varcov + dl.dk^2 } # end of for loop run.varcov <- c(run.varcov / .nsimEIM ) ned2l.dk2 <- if (intercept.only) mean(run.varcov) else run.varcov wz[ii.TF, M1*jay - 1] <- ned2l.dk2 # * (dsize.deta[ii.TF, jay])^2 } } # jay wz[, M1*(1:NOS) - 1] <- wz[, M1*(1:NOS) - 
1] * dsize.deta^2 save.weights <- !all(ind2) wz[, M1*(1:NOS) - 1] <- c(w) * (1 - phi0) * wz[, M1*(1:NOS) - 1] wz }), list( .lonempobs0 = lonempobs0, .eonempobs0 = eonempobs0, .cutoff.prob = cutoff.prob, .eps.trig = eps.trig, .max.support = max.support, .max.chunk.MB = max.chunk.MB, .nsimEIM = nsimEIM )))) } # End of zanegbinomialff() zipoisson <- function(lpstr0 = "logit", llambda = "loge", type.fitted = c("mean", "lambda", "pobs0", "pstr0", "onempstr0"), ipstr0 = NULL, ilambda = NULL, gpstr0 = NULL, # (1:9) / 10, imethod = 1, ishrinkage = 0.95, probs.y = 0.35, zero = NULL) { ipstr00 <- ipstr0 gpstr00 <- gpstr0 ipstr0.small <- 1/64 # A number easily represented exactly lpstr0 <- as.list(substitute(lpstr0)) epstr00 <- link2list(lpstr0) lpstr00 <- attr(epstr00, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "lambda", "pobs0", "pstr0", "onempstr0"))[1] if (length(ipstr00)) if (!is.Numeric(ipstr00, positive = TRUE) || any(ipstr00 >= 1)) stop("argument 'ipstr0' values must be inside the interval (0,1)") if (length(ilambda)) if (!is.Numeric(ilambda, positive = TRUE)) stop("argument 'ilambda' values must be positive") new("vglmff", blurb = c("Zero-inflated Poisson\n\n", "Links: ", namesof("pstr0", lpstr00, earg = epstr00 ), ", ", namesof("lambda", llambda, earg = elambda ), "\n", "Mean: (1 - pstr0) * lambda"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, hadof = TRUE, multipleResponses = TRUE, parameters.names = c("pstr0", "lambda"), type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("pstr0", ncoly) mynames2 <- param.names("lambda", ncoly) predictors.names <- c(namesof(mynames1, .lpstr00 , earg = .epstr00 , tag = FALSE), namesof(mynames2, .llambda , earg = .elambda , tag = FALSE))[ interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { matL <- Init.mu(y = y, w = w, imethod = .imethod , # x = x, imu = .ilambda , ishrinkage = .ishrinkage , pos.only = TRUE, probs.y = .probs.y ) matP <- matrix(if (length( .ipstr00 )) .ipstr00 else 0, n, ncoly, byrow = TRUE) phi.grid <- .gpstr00 # seq(0.02, 0.98, len = 21) ipstr0.small <- .ipstr0.small # A number easily represented exactly if (!length( .ipstr00 )) for (jay in 1:ncoly) { zipois.Loglikfun <- function(phival, y, x, w, extraargs) { sum(c(w) * dzipois(x = y, pstr0 = phival, lambda = extraargs$lambda, log = TRUE)) } Phi.init <- if (length(phi.grid)) { grid.search(phi.grid, objfun = zipois.Loglikfun, y = y[, jay], w = w[, jay], # x = x, extraargs = list(lambda = matL[, jay])) } else { pmax(ipstr0.small, weighted.mean(y[, jay] == 0, w[, jay]) - dpois(0, matL[, jay])) } if (mean(Phi.init == ipstr0.small) > 0.95 && .lpstr00 != "identitylink") warning("from the initial values only, the data appears to ", "have little or no 0-inflation, and possibly ", "0-deflation.") matP[, jay] <- Phi.init 
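# Phi.init comes either from a grid search over 'gpstr0' (maximizing the
# ZIP log-likelihood with lambda held at its initial value) or, by
# default, from a moment-type estimate: the observed proportion of zeros
# minus the zeros expected under a plain Poisson, floored at
# ipstr0.small so the initial value lies inside (0, 1).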
} # for (jay) etastart <- cbind(theta2eta(matP, .lpstr00 , earg = .epstr00 ), theta2eta(matL, .llambda , earg = .elambda ))[, interleave.VGAM(M, M1 = M1)] mustart <- NULL # Since etastart has been computed. } # End of !length(etastart) }), list( .lpstr00 = lpstr00, .llambda = llambda, .epstr00 = epstr00, .elambda = elambda, .ipstr00 = ipstr00, .ilambda = ilambda, .gpstr00 = gpstr00, .imethod = imethod, .probs.y = probs.y, .ipstr0.small = ipstr0.small, .type.fitted = type.fitted, .ishrinkage = ishrinkage ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 2) type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. ", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "lambda", "pobs0", "pstr0", "onempstr0"))[1] phimat <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr00 , earg = .epstr00 ) lambda <- eta2theta(eta[, c(FALSE, TRUE)], .llambda , earg = .elambda ) ans <- switch(type.fitted, "mean" = (1 - phimat) * lambda, "lambda" = lambda, "pobs0" = phimat + (1-phimat)*exp(-lambda), # P(Y=0) "pstr0" = phimat, "onempstr0" = 1 - phimat) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lpstr00 = lpstr00, .llambda = llambda, .epstr00 = epstr00, .elambda = elambda ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .lpstr00 , ncoly), rep_len( .llambda , ncoly))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .epstr00 misc$earg[[M1*ii ]] <- .elambda } misc$M1 <- M1 misc$imethod <- .imethod misc$expected <- TRUE misc$multipleResponses <- TRUE if (FALSE) { misc$pobs0 <- phimat + (1 - phimat) * exp(-lambda) # P(Y=0) if (length(dimnames(y)[[2]]) > 0) dimnames(misc$pobs0) <- dimnames(y) misc$pstr0 <- phimat if (length(dimnames(y)[[2]]) > 0) dimnames(misc$pstr0) <- dimnames(y) } }), list( .lpstr00 = lpstr00, .llambda = llambda, .epstr00 = epstr00, .elambda = elambda, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { phimat <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr00 , earg = .epstr00 ) lambda <- eta2theta(eta[, c(FALSE, TRUE)], .llambda , earg = .elambda ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dzipois(x = y, pstr0 = phimat, lambda = lambda, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lpstr00 = lpstr00, .llambda = llambda, .epstr00 = epstr00, .elambda = elambda ))), vfamily = c("zipoisson"), hadof = eval(substitute( function(eta, extra = list(), linpred.index = 1, w = 1, dim.wz = c(NROW(eta), NCOL(eta) * (NCOL(eta)+1)/2), deriv = 1, ...) 
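# The hadof slot returns, for the linear predictor selected by
# 'linpred.index', the EIM elements (deriv = 0) or their first
# derivative with respect to that parameter (deriv = 1); deriv = 2
# currently returns NA matrices.  Such EIM derivatives are used when
# detecting the Hauck-Donner effect, e.g., by hdeff().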
{ M1 <- 2 n <- NROW(eta) M <- NCOL(eta) phimat <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr00 , earg = .epstr00 ) lambda <- eta2theta(eta[, c(FALSE, TRUE)], .llambda , earg = .elambda ) which.param <- ifelse(linpred.index %% M1 == 1, "phi", "lambda") which.y <- ceiling(linpred.index / M1) prob0 <- exp(-lambda) pobs0 <- phimat + (1 - phimat) * prob0 if (deriv == 0) { ned2l.dphimat2 <- -expm1(-lambda) / ((1 - phimat) * pobs0) ned2l.dphimatlambda <- -exp(-lambda) / pobs0 ned2l.dlambda2 <- (1 - phimat) / lambda - phimat * (1 - phimat) * exp(-lambda) / pobs0 wz <- array(c(c(w) * ned2l.dphimat2, c(w) * ned2l.dlambda2, c(w) * ned2l.dphimatlambda), dim = c(n, M / M1, 3)) return(arwz2wz(wz, M = M, M1 = M1, full.arg = TRUE)) } if (which.param == "phi") { NED2l.dphimat2 <- +expm1(-lambda) * (1 - 2 * pobs0) / ((1 - phimat) * pobs0)^2 NED2l.dphimatlambda <- -exp(-lambda) * expm1(-lambda) / pobs0^2 NED2l.dlambda2 <- -1 / lambda - exp(-lambda) * ((1 - phimat)^2 * exp(-lambda) - phimat^2) / pobs0^2 } else { NED2l.dphimat2 <- exp(-lambda) / ((1 - phimat) * pobs0^2) NED2l.dphimatlambda <- phimat * exp(-lambda) / pobs0^2 NED2l.dlambda2 <- -(1 - phimat) / lambda^2 + phimat^2 * (1 - phimat) * exp(-lambda) / pobs0^2 } if (deriv == 2) NED2l.dphimat2 <- NED2l.dphimatlambda <- NED2l.dlambda2 <- matrix(NA_real_, n, M) WZ <- switch(as.character(deriv), "1" = array(c(c(w) * retain.col(NED2l.dphimat2, which.y), c(w) * retain.col(NED2l.dlambda2, which.y), c(w) * retain.col(NED2l.dphimatlambda, which.y)), dim = c(n, M / M1, 3)), "2" = array(c(c(w) * retain.col(NED2l.dphimat2, which.y), c(w) * retain.col(NED2l.dlambda2, which.y), c(w) * retain.col(NED2l.dphimatlambda, which.y)), dim = c(n, M / M1, 3)), stop("argument 'deriv' must be 0 or 1 or 2")) return(arwz2wz(WZ, M = M, M1 = M1, full.arg = TRUE)) }, list( .lpstr00 = lpstr00, .llambda = llambda, .epstr00 = epstr00, .elambda = elambda ))), validparams = eval(substitute(function(eta, y, extra = NULL) { phimat <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr00 , earg = .epstr00 ) lambda <- eta2theta(eta[, c(FALSE, TRUE)], .llambda , earg = .elambda ) okay1 <- all(is.finite(lambda)) && all(0 < lambda) && all(is.finite(phimat)) && all(phimat < 1) deflat.limit <- -1 / expm1(lambda) okay2.deflat <- TRUE if (okay1 && !(okay2.deflat <- all(deflat.limit < phimat))) warning("parameter 'pstr0' is too negative even allowing for ", "0-deflation.") okay1 && okay2.deflat }, list( .lpstr00 = lpstr00, .llambda = llambda, .epstr00 = epstr00, .elambda = elambda ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) phimat <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr00 , earg = .epstr00 ) lambda <- eta2theta(eta[, c(FALSE, TRUE)], .llambda , earg = .elambda ) rzipois(nsim * length(lambda), lambda = lambda, pstr0 = phimat) }, list( .lpstr00 = lpstr00, .llambda = llambda, .epstr00 = epstr00, .elambda = elambda ))), deriv = eval(substitute(expression({ M1 <- 2 phimat <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lpstr00 , earg = .epstr00 ) lambda <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .llambda , earg = .elambda ) prob0 <- exp(-lambda) pobs0 <- phimat + (1 - phimat) * prob0 index0 <- as.matrix(y == 0) dl.dphimat <- -expm1(-lambda) / pobs0 dl.dphimat[!index0] <- -1 / (1 - phimat[!index0]) dl.dlambda <- -(1 - phimat) * exp(-lambda) / pobs0 dl.dlambda[!index0] <- (y[!index0] - lambda[!index0]) / lambda[!index0] 
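# The score splits into two cases: for y = 0 the derivatives come from
# log P(Y = 0) = log(phimat + (1 - phimat) * exp(-lambda)); for y > 0
# the inflation term vanishes, giving the ordinary Poisson score
# (y - lambda) / lambda and dl.dphimat = -1 / (1 - phimat).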
dphimat.deta <- dtheta.deta(phimat, .lpstr00 , earg = .epstr00 ) dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) ans <- c(w) * cbind(dl.dphimat * dphimat.deta, dl.dlambda * dlambda.deta) ans <- ans[, interleave.VGAM(M, M1 = M1)] if ( .llambda == "loge" && is.empty.list( .elambda ) && any(lambda[!index0] < .Machine$double.eps)) { for (spp. in 1:(M / M1)) { ans[!index0[, spp.], M1 * spp.] <- w[!index0[, spp.]] * (y[!index0[, spp.], spp.] - lambda[!index0[, spp.], spp.]) } } ans }), list( .lpstr00 = lpstr00, .llambda = llambda, .epstr00 = epstr00, .elambda = elambda ))), weight = eval(substitute(expression({ ned2l.dphimat2 <- -expm1(-lambda) / ((1 - phimat) * pobs0) ned2l.dphimatlambda <- -exp(-lambda) / pobs0 ned2l.dlambda2 <- (1 - phimat) / lambda - phimat * (1 - phimat) * exp(-lambda) / pobs0 wz <- array(c(c(w) * ned2l.dphimat2 * dphimat.deta^2, c(w) * ned2l.dlambda2 * dlambda.deta^2, c(w) * ned2l.dphimatlambda * dphimat.deta * dlambda.deta), dim = c(n, M / M1, 3)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .llambda = llambda, .elambda = elambda )))) } # zipoisson zipoissonff <- function(llambda = "loge", lonempstr0 = "logit", type.fitted = c("mean", "lambda", "pobs0", "pstr0", "onempstr0"), ilambda = NULL, ionempstr0 = NULL, gonempstr0 = NULL, # (1:9) / 10, imethod = 1, ishrinkage = 0.95, probs.y = 0.35, zero = "onempstr0") { type.fitted <- match.arg(type.fitted, c("mean", "lambda", "pobs0", "pstr0", "onempstr0"))[1] llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") lonempstr0 <- as.list(substitute(lonempstr0)) eonempstr0 <- link2list(lonempstr0) lonempstr0 <- attr(eonempstr0, "function.name") ipstr0.small <- 1/64 # A number easily represented exactly if (length(ilambda)) if (!is.Numeric(ilambda, positive = TRUE)) stop("'ilambda' values must be positive") if (length(ionempstr0)) if (!is.Numeric(ionempstr0, positive = TRUE) || any(ionempstr0 >= 1)) stop("'ionempstr0' values must be inside the interval (0,1)") new("vglmff", blurb = c("Zero-inflated Poisson\n\n", "Links: ", namesof("lambda", llambda, earg = elambda), ", ", namesof("onempstr0", lonempstr0, earg = eonempstr0), "\n", "Mean: onempstr0 * lambda"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
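# zipoissonff() models onempstr0 = 1 - pstr0 = P(not a structural zero),
# with 'onempstr0' intercept-only by default.  A minimal sketch
# (hypothetical data, for illustration only):
#   zdata <- data.frame(x2 = runif(nn <- 1000))
#   zdata <- transform(zdata, y1 = rzipois(nn, lambda = exp(1 + x2),
#                                          pstr0 = 0.3))
#   fit <- vglm(y1 ~ x2, zipoissonff, data = zdata, trace = TRUE)
#   coef(fit, matrix = TRUE)  # loge(lambda), logit(onempstr0)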
{ list(M1 = 2, Q1 = 1, expected = TRUE, hadof = TRUE, multipleResponses = TRUE, parameters.names = c("lambda", "onempstr0"), type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("lambda", ncoly) mynames2 <- param.names("onempstr0", ncoly) predictors.names <- c(namesof(mynames1, .llambda , earg = .elambda , tag = FALSE), namesof(mynames2, .lonempstr0 , earg = .eonempstr0 , tag = FALSE))[ interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { matL <- Init.mu(y = y, w = w, imethod = .imethod , # x = x, imu = .ilambda , ishrinkage = .ishrinkage , pos.only = TRUE, probs.y = .probs.y ) matP <- matrix(if (length( .ionempstr0 )) .ionempstr0 else 0, n, ncoly, byrow = TRUE) phi0.grid <- .gonempstr0 ipstr0.small <- .ipstr0.small # Number easily represented exactly if (!length( .ionempstr0 )) for (jay in 1:ncoly) { zipois.Loglikfun <- function(phival, y, x, w, extraargs) { sum(c(w) * dzipois(x = y, pstr0 = phival, lambda = extraargs$lambda, log = TRUE)) } Phi0.init <- if (length(phi0.grid)) { grid.search(phi0.grid, objfun = zipois.Loglikfun, y = y[, jay], x = x, w = w[, jay], extraargs = list(lambda = matL[, jay])) } else { pmax(ipstr0.small, weighted.mean(y[, jay] == 0, w[, jay]) - dpois(0, matL[, jay])) } if (mean(Phi0.init == ipstr0.small) > 0.95 && .lonempstr0 != "identitylink") warning("from the initial values only, the data appears to ", "have little or no 0-inflation, and possibly ", "0-deflation.") matP[, jay] <- Phi0.init } # for (jay) etastart <- cbind(theta2eta( matL, .llambda , earg = .elambda ), theta2eta(1 - matP, .lonempstr0 , earg = .eonempstr0 ))[, interleave.VGAM(M, M1 = M1)] mustart <- NULL # Since etastart has been computed. } }), list( .lonempstr0 = lonempstr0, .llambda = llambda, .eonempstr0 = eonempstr0, .elambda = elambda, .ionempstr0 = ionempstr0, .ilambda = ilambda, .gonempstr0 = gonempstr0, .type.fitted = type.fitted, .probs.y = probs.y, .ipstr0.small = ipstr0.small, .imethod = imethod, .ishrinkage = ishrinkage ))), linkinv = eval(substitute(function(eta, extra = NULL) { type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "lambda", "pobs0", "pstr0", "onempstr0"))[1] M1 <- 2 NOS <- ncoly <- ncol(eta) / M1 lambda <- eta2theta(eta[, M1*(1:ncoly) - 1], .llambda , earg = .elambda ) onempstr0 <- eta2theta(eta[, M1*(1:ncoly) ], .lonempstr0 , earg = .eonempstr0 ) ans <- switch(type.fitted, "mean" = onempstr0 * lambda, "lambda" = lambda, "pobs0" = 1 + onempstr0 * expm1(-lambda), # P(Y=0) "pstr0" = 1 - onempstr0, "onempstr0" = onempstr0) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lonempstr0 = lonempstr0, .llambda = llambda, .eonempstr0 = eonempstr0, .elambda = elambda ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .llambda , ncoly), rep_len( .lonempstr0 , ncoly))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M1 * ncoly) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .elambda misc$earg[[M1*ii ]] <- .eonempstr0 } misc$M1 <- M1 misc$imethod <- .imethod if (FALSE) { misc$pobs0 <- (1 - onempstr0) + onempstr0 * exp(-lambda) # P(Y=0) misc$pobs0 <- as.matrix(misc$pobs0) if (length(dimnames(y)[[2]]) > 0) dimnames(misc$pobs0) <- dimnames(y) misc$pstr0 <- (1 - onempstr0) misc$pstr0 <- as.matrix(misc$pstr0) if (length(dimnames(y)[[2]]) > 0) dimnames(misc$pstr0) <- dimnames(y) } }), list( .lonempstr0 = lonempstr0, .llambda = llambda, .eonempstr0 = eonempstr0, .elambda = elambda, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { lambda <- eta2theta(eta[, c(TRUE, FALSE)], .llambda , earg = .elambda ) onempstr0 <- eta2theta(eta[, c(FALSE, TRUE)], .lonempstr0 , earg = .eonempstr0 ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dzipois(x = y, pstr0 = 1 - onempstr0, lambda = lambda, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lonempstr0 = lonempstr0, .llambda = llambda, .eonempstr0 = eonempstr0, .elambda = elambda ))), vfamily = c("zipoissonff"), hadof = eval(substitute( function(eta, extra = list(), linpred.index = 1, w = 1, dim.wz = c(NROW(eta), NCOL(eta) * (NCOL(eta)+1)/2), deriv = 1, ...) 
{ M1 <- 2 n <- NROW(eta) M <- NCOL(eta) lambda <- eta2theta(eta[, c(TRUE, FALSE)], .llambda , earg = .elambda ) onempstr0 <- eta2theta(eta[, c(FALSE, TRUE)], .lonempstr0 , earg = .eonempstr0 ) namevec <- c("lambda", "onempstr0") whichj <- 2 - (linpred.index %% M1) # \in 1:M1 which.param <- namevec[whichj] which.y <- ceiling(linpred.index / M1) if (deriv == 0) { denom <- 1 + onempstr0 * expm1(-lambda) ned2l.dlambda2 <- ( onempstr0) / lambda - onempstr0 * (1 - onempstr0) * exp(-lambda) / denom ned2l.donempstr0.2 <- -expm1(-lambda) / ((onempstr0) * denom) ned2l.dphilambda <- +exp(-lambda) / denom wz <- array(c(c(w) * ned2l.dlambda2, c(w) * ned2l.donempstr0.2, c(w) * ned2l.dphilambda), dim = c(n, M / M1, 3)) return(arwz2wz(wz, M = M, M1 = M1, full.arg = TRUE)) } d3.11 <- eval( deriv3( ~ onempstr0 / lambda - onempstr0 * (1 - onempstr0) * exp(-lambda) / (1 + onempstr0 * expm1(-lambda)), which.param, hessian = deriv == 2)) d3.22 <- eval( deriv3( ~ -expm1(-lambda) / (onempstr0 * (1 + onempstr0 * expm1(-lambda))), which.param, hessian = deriv == 2)) d3.12 <- eval( deriv3( ~ exp(-lambda) / (1 + onempstr0 * expm1(-lambda)), which.param, hessian = deriv == 2)) Dl.dlambda <- matrix(attr(d3.11, "gradient"), n, M/M1) Dl.donempstr0 <- matrix(attr(d3.22, "gradient"), n, M/M1) Dl.dphilambda <- matrix(attr(d3.12, "gradient"), n, M/M1) if (deriv == 2) { NED2l.dlambda2 <- matrix(attr(d3.11, "hessian"), n, M/M1) NED2l.donempstr0.2 <- matrix(attr(d3.22, "hessian"), n, M/M1) NED2l.dphilambda <- matrix(attr(d3.12, "hessian"), n, M/M1) } WZ <- switch(as.character(deriv), "1" = array(c(c(w) * retain.col(Dl.dlambda, which.y), c(w) * retain.col(Dl.donempstr0, which.y), c(w) * retain.col(Dl.dphilambda, which.y)), dim = c(n, M / M1, 3)), "2" = array(c(c(w) * retain.col(NED2l.dlambda2, which.y), c(w) * retain.col(NED2l.donempstr0.2, which.y), c(w) * retain.col(NED2l.dphilambda, which.y)), dim = c(n, M / M1, 3)), stop("argument 'deriv' must be 0 or 1 or 2")) return(arwz2wz(WZ, M = M, M1 = M1, full.arg = TRUE)) }, list( .lonempstr0 = lonempstr0, .llambda = llambda, .eonempstr0 = eonempstr0, .elambda = elambda ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) lambda <- eta2theta(eta[, c(TRUE, FALSE)], .llambda , earg = .elambda ) onempstr0 <- eta2theta(eta[, c(FALSE, TRUE)], .lonempstr0 , earg = .eonempstr0 ) rzipois(nsim * length(lambda), lambda = lambda, pstr0 = 1 - onempstr0) }, list( .lonempstr0 = lonempstr0, .llambda = llambda, .eonempstr0 = eonempstr0, .elambda = elambda ))), validparams = eval(substitute(function(eta, y, extra = NULL) { lambda <- eta2theta(eta[, c(TRUE, FALSE)], .llambda , earg = .elambda ) onempstr0 <- eta2theta(eta[, c(FALSE, TRUE)], .lonempstr0 , earg = .eonempstr0 ) okay1 <- all(is.finite(lambda )) && all(0 < lambda ) && all(is.finite(onempstr0)) && all(0 < onempstr0) deflat.limit <- -1 / expm1(lambda) okay2.deflat <- TRUE if (okay1 && !(okay2.deflat <- all(onempstr0 < 1 - deflat.limit))) warning("parameter 'onempstr0' is too positive even allowing for ", "0-deflation.") okay1 && okay2.deflat }, list( .lonempstr0 = lonempstr0, .llambda = llambda, .eonempstr0 = eonempstr0, .elambda = elambda ))), deriv = eval(substitute(expression({ M1 <- 2 ncoly <- ncol(eta) / M1 # extra$ncoly lambda <- eta2theta(eta[, c(TRUE, FALSE)], .llambda , earg = .elambda ) onempstr0 <- eta2theta(eta[, c(FALSE, TRUE)], .lonempstr0 , earg = 
.eonempstr0 ) dlambda.deta <- dtheta.deta(lambda , .llambda , earg = .elambda ) donempstr0.deta <- dtheta.deta(onempstr0, .lonempstr0 , earg = .eonempstr0 ) denom <- 1 + onempstr0 * expm1(-lambda) ind0 <- (y == 0) dl.dlambda <- -onempstr0 * exp(-lambda) / denom dl.dlambda[!ind0] <- (y[!ind0] - lambda[!ind0]) / lambda[!ind0] dl.donempstr0 <- expm1(-lambda) / denom dl.donempstr0[!ind0] <- 1 / onempstr0[!ind0] ans <- c(w) * cbind(dl.dlambda * dlambda.deta, dl.donempstr0 * donempstr0.deta) ans <- ans[, interleave.VGAM(ncol(ans), M1 = M1)] if ( .llambda == "loge" && is.empty.list( .elambda ) && any(lambda[!ind0] < .Machine$double.eps)) { for (spp. in 1:ncoly) { ans[!ind0[, spp.], M1 * spp.] <- w[!ind0[, spp.]] * (y[!ind0[, spp.], spp.] - lambda[!ind0[, spp.], spp.]) } } ans }), list( .lonempstr0 = lonempstr0, .llambda = llambda, .eonempstr0 = eonempstr0, .elambda = elambda ))), weight = eval(substitute(expression({ ned2l.dlambda2 <- ( onempstr0) / lambda - onempstr0 * (1 - onempstr0) * exp(-lambda) / denom ned2l.donempstr0.2 <- -expm1(-lambda) / ((onempstr0) * denom) ned2l.dphilambda <- +exp(-lambda) / denom wz <- array(c(c(w) * ned2l.dlambda2 * dlambda.deta^2, c(w) * ned2l.donempstr0.2 * donempstr0.deta^2, c(w) * ned2l.dphilambda * donempstr0.deta * dlambda.deta), dim = c(n, M / M1, 3)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .llambda = llambda )))) } zibinomial <- function(lpstr0 = "logit", lprob = "logit", type.fitted = c("mean", "prob", "pobs0", "pstr0", "onempstr0"), ipstr0 = NULL, zero = NULL, # 20130917; was originally zero = 1, multiple.responses = FALSE, imethod = 1) { if (as.logical(multiple.responses)) stop("argument 'multiple.responses' must be FALSE") lpstr0 <- as.list(substitute(lpstr0)) epstr0 <- link2list(lpstr0) lpstr0 <- attr(epstr0, "function.name") lprob <- as.list(substitute(lprob)) eprob <- link2list(lprob) lprob <- attr(eprob, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs0", "pstr0", "onempstr0"))[1] if (is.Numeric(ipstr0)) if (!is.Numeric(ipstr0, positive = TRUE) || any(ipstr0 >= 1)) stop("'ipstr0' values must be inside the interval (0,1)") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Zero-inflated binomial\n\n", "Links: ", namesof("pstr0", lpstr0, earg = epstr0), ", ", namesof("prob" , lprob , earg = eprob ), "\n", "Mean: (1 - pstr0) * prob"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
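# The 'infos' slot advertises bookkeeping constants (M1, the
# parameter names, the 'zero' specification, etc.) so that generic
# machinery can query the family without fitting anything, e.g.
#   zibinomial()@infos()$parameters.names   # c("pstr0", "prob")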
{ list(M1 = 2, type.fitted = .type.fitted , expected = TRUE, multipleResponses = FALSE, parameters.names = c("pstr0", "prob"), zero = .zero ) }, list( .zero = zero, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ if (!all(w == 1)) extra$orig.w <- w if (NCOL(y) == 1) { if (is.factor(y)) y <- y != levels(y)[1] nn <- rep_len(1, n) if (!all(y >= 0 & y <= 1)) stop("response values must be in [0, 1]") if (!length(mustart) && !length(etastart)) mustart <- (0.5 + w * y) / (1.0 + w) no.successes <- y if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(no.successes - round(no.successes)) > 1.0e-8)) stop("Number of successes must be integer-valued") } else if (NCOL(y) == 2) { if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(y - round(y)) > 1.0e-8)) stop("Count data must be integer-valued") y <- round(y) nvec <- y[, 1] + y[, 2] y <- ifelse(nvec > 0, y[, 1] / nvec, 0) w <- w * nvec if (!length(mustart) && !length(etastart)) mustart <- (0.5 + nvec * y) / (1 + nvec) } else { stop("for the binomialff family, response 'y' must be a ", "vector of 0 and 1's\n", "or a factor ", "(first level = fail, other levels = success),\n", "or a 2-column matrix where col 1 is the no. of ", "successes and col 2 is the no. of failures") } if ( .imethod == 1) mustart <- (mustart + y) / 2 if ( .imethod == 2) mustart <- mean(mustart) + 0 * y extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) predictors.names <- c(namesof("pstr0", .lpstr0 , earg = .epstr0 , tag = FALSE), namesof("prob" , .lprob , earg = .eprob , tag = FALSE)) extra$w <- w # Needed for @linkinv phi.init <- if (length( .ipstr0 )) .ipstr0 else { prob0.est <- sum(w[y == 0]) / sum(w) if ( .imethod == 1) { (prob0.est - (1 - mustart)^w) / (1 - (1 - mustart)^w) } else { prob0.est } } phi.init[phi.init <= 0.05] <- 0.05 # Last resort phi.init[phi.init >= 0.95] <- 0.95 # Last resort if ( length(mustart) && !length(etastart)) mustart <- cbind(rep_len(phi.init, n), mustart) # 1st coln not a real mu }), list( .lpstr0 = lpstr0, .lprob = lprob, .epstr0 = epstr0, .eprob = eprob, .ipstr0 = ipstr0, .type.fitted = type.fitted, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 2) pstr0 <- eta2theta(eta[, 1], .lpstr0 , earg = .epstr0 ) mubin <- eta2theta(eta[, 2], .lprob , earg = .eprob ) orig.w <- if (length(tmp3 <- extra$orig.w)) tmp3 else rep_len(1, nrow(eta)) priorw <- extra$w nvec <- priorw / orig.w type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs0", "pstr0", "onempstr0"))[1] ans <- switch(type.fitted, "mean" = (1 - pstr0) * mubin, "prob" = mubin, "pobs0" = pstr0 + (1-pstr0)*(1-mubin)^nvec, # P(Y=0) "pstr0" = pstr0, "onempstr0" = 1 - pstr0) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lpstr0 = lpstr0, .lprob = lprob, .epstr0 = epstr0, .eprob = eprob ))), last = eval(substitute(expression({ misc$link <- c("pstr0" = .lpstr0 , "prob" = .lprob ) misc$earg <- list("pstr0" = .epstr0 , "prob" = .eprob ) misc$imethod <- .imethod }), list( .lpstr0 = lpstr0, .lprob = lprob, .epstr0 = epstr0, .eprob = eprob, .imethod = imethod ))), linkfun = eval(substitute(function(mu, extra = NULL) { cbind(theta2eta(mu[, 1], .lpstr0 , earg = .epstr0 ), theta2eta(mu[, 2], .lprob , earg = .eprob )) }, list( .lpstr0 = lpstr0, .lprob = lprob, .epstr0 = epstr0, .eprob = eprob ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { pstr0 <- eta2theta(eta[, 1], .lpstr0 , earg = .epstr0 ) mubin <- eta2theta(eta[, 2], .lprob , earg = .eprob ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- dzibinom(x = round(w * y), size = w, prob = mubin, log = TRUE, pstr0 = pstr0) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lpstr0 = lpstr0, .lprob = lprob, .epstr0 = epstr0, .eprob = eprob ))), vfamily = c("zibinomial"), validparams = eval(substitute(function(eta, y, extra = NULL) { pstr0 <- eta2theta(eta[, 1], .lpstr0 , earg = .epstr0 ) probb <- eta2theta(eta[, 2], .lprob , earg = .eprob ) size <- extra$w okay1 <- all(is.finite(probb)) && all(0 < probb) && all(is.finite(pstr0)) && all(pstr0 < 1) prob0 <- (1 - probb)^size Prob0.check <- dbinom(0, size = size, prob = probb) deflat.limit <- -prob0 / (1 - prob0) okay2.deflat <- TRUE if (okay1 && !(okay2.deflat <- all(deflat.limit < pstr0))) warning("parameter 'pstr0' is too negative even allowing for ", "0-deflation.") okay1 && okay2.deflat }, list( .lpstr0 = lpstr0, .lprob = lprob, .epstr0 = epstr0, .eprob = eprob ))), deriv = eval(substitute(expression({ phi <- eta2theta(eta[, 1], .lpstr0 , earg = .epstr0 ) mubin <- eta2theta(eta[, 2], .lprob , earg = .eprob ) prob0 <- (1 - mubin)^w # Actually q^w pobs0 <- phi + (1 - phi) * prob0 index <- (y == 0) dl.dphi <- (1 - prob0) / pobs0 dl.dphi[!index] <- -1 / (1 - phi[!index]) dl.dmubin <- -w * (1 - phi) * (1 - mubin)^(w - 1) / pobs0 dl.dmubin[!index] <- w[!index] * ( y[!index] / mubin[!index] - (1 - y[!index]) / (1 - mubin[!index])) dphi.deta <- dtheta.deta(phi, .lpstr0 , earg = .epstr0 ) dmubin.deta <- dtheta.deta(mubin, .lprob , earg = .eprob ) ans <- cbind(dl.dphi * dphi.deta, dl.dmubin * dmubin.deta) if ( .lprob == "logit") { ans[!index, 2] <- w[!index] * (y[!index] - mubin[!index]) } ans }), list( .lpstr0 = lpstr0, .lprob = lprob, .epstr0 = epstr0, .eprob = eprob ))), weight = eval(substitute(expression({ wz <- matrix(NA_real_, nrow = n, ncol = dimm(M)) ned2l.dphi2 <- (1 - prob0) / ((1 - phi) * pobs0) ned2l.dphimubin <- -w * ((1 - mubin)^(w - 1)) / pobs0 ned2l.dmubin2 <- (w * (1 - phi) / (mubin * (1 - mubin)^2)) * (1 - mubin - w * mubin * (1 - mubin)^w * phi / pobs0) wz[,iam(1, 1, M)] <- ned2l.dphi2 * dphi.deta^2 wz[,iam(2, 2, M)] <- ned2l.dmubin2 * dmubin.deta^2 wz[,iam(1, 2, M)] <- ned2l.dphimubin * dphi.deta * dmubin.deta if (TRUE) { ind6 <- (wz[, iam(2, 2, M)] < .Machine$double.eps) if (any(ind6)) wz[ind6, iam(2, 2, M)] <- .Machine$double.eps } 
wz }), list( .lpstr0 = lpstr0, .lprob = lprob, .epstr0 = epstr0, .eprob = eprob )))) } zibinomialff <- function(lprob = "logit", lonempstr0 = "logit", type.fitted = c("mean", "prob", "pobs0", "pstr0", "onempstr0"), ionempstr0 = NULL, zero = "onempstr0", multiple.responses = FALSE, imethod = 1) { if (as.logical(multiple.responses)) stop("argument 'multiple.responses' must be FALSE") lprob <- as.list(substitute(lprob)) eprob <- link2list(lprob) lprob <- attr(eprob, "function.name") lonempstr0 <- as.list(substitute(lonempstr0)) eonempstr0 <- link2list(lonempstr0) lonempstr0 <- attr(eonempstr0, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs0", "pstr0", "onempstr0"))[1] if (is.Numeric(ionempstr0)) if (!is.Numeric(ionempstr0, positive = TRUE) || any(ionempstr0 >= 1)) stop("'ionempstr0' values must be inside the interval (0,1)") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Zero-inflated binomial\n\n", "Links: ", namesof("prob" , lprob , earg = eprob ), ", ", namesof("onempstr0", lonempstr0, earg = eonempstr0), "\n", "Mean: onempstr0 * prob"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = NA, expected = TRUE, multipleResponses = FALSE, parameters.names = c("prob", "onempstr0"), type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ if (!all(w == 1)) extra$orig.w <- w if (NCOL(y) == 1) { if (is.factor(y)) y <- y != levels(y)[1] nn <- rep_len(1, n) if (!all(y >= 0 & y <= 1)) stop("response values must be in [0, 1]") if (!length(mustart) && !length(etastart)) mustart <- (0.5 + w * y) / (1.0 + w) no.successes <- y if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(no.successes - round(no.successes)) > 1.0e-8)) stop("Number of successes must be integer-valued") } else if (NCOL(y) == 2) { if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(y - round(y)) > 1.0e-8)) stop("Count data must be integer-valued") y <- round(y) nvec <- y[, 1] + y[, 2] y <- ifelse(nvec > 0, y[, 1] / nvec, 0) w <- w * nvec if (!length(mustart) && !length(etastart)) mustart <- (0.5 + nvec * y) / (1 + nvec) } else { stop("for the binomialff family, response 'y' must be a ", "vector of 0 and 1's\n", "or a factor ", "(first level = fail, other levels = success),\n", "or a 2-column matrix where col 1 is the no. of ", "successes and col 2 is the no. 
of failures") } if ( .imethod == 1) mustart <- (mustart + y) / 2 extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) predictors.names <- c(namesof("prob" , .lprob , earg = .eprob , tag = FALSE), namesof("onempstr0", .lonempstr0 , earg = .eonempstr0 , tag = FALSE)) extra$w <- w # Needed for @linkinv onemphi.init <- if (length( .ionempstr0 )) .ionempstr0 else { prob0.est <- sum(w[y == 0]) / sum(w) if ( .imethod == 1) { 1 - (prob0.est - (1 - mustart)^w) / (1 - (1 - mustart)^w) } else { 1 - prob0.est } } onemphi.init[onemphi.init <= -0.10] <- 0.10 # Lots of sample variation onemphi.init[onemphi.init <= 0.05] <- 0.15 # Last resort onemphi.init[onemphi.init >= 0.80] <- 0.80 # Last resort if ( length(mustart) && !length(etastart)) mustart <- cbind(mustart, rep_len(onemphi.init, n)) # 1st coln not a real mu }), list( .lonempstr0 = lonempstr0, .lprob = lprob, .eonempstr0 = eonempstr0, .eprob = eprob, .ionempstr0 = ionempstr0, .type.fitted = type.fitted, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 2) mubin <- eta2theta(eta[, 1], .lprob , earg = .eprob ) onempstr0 <- eta2theta(eta[, 2], .lonempstr0 , earg = .eonempstr0 ) orig.w <- if (length(tmp3 <- extra$orig.w)) tmp3 else rep_len(1, nrow(eta)) priorw <- extra$w nvec <- priorw / orig.w type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. ", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs0", "pstr0", "onempstr0"))[1] ans <- switch(type.fitted, "mean" = (onempstr0) * mubin, "prob" = mubin, "pobs0" = 1 - onempstr0 + (onempstr0)*(1-mubin)^nvec, # P(Y=0) "pstr0" = 1 - onempstr0, "onempstr0" = onempstr0) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lonempstr0 = lonempstr0, .lprob = lprob, .eonempstr0 = eonempstr0, .eprob = eprob ))), last = eval(substitute(expression({ misc$link <- c("prob" = .lprob , "onempstr0" = .lonempstr0 ) misc$earg <- list("prob" = .eprob , "onempstr0" = .eonempstr0 ) misc$imethod <- .imethod misc$pobs0 <- phi + (1 - phi) * (1 - mubin)^w # [1] # P(Y=0) misc$pstr0 <- phi }), list( .lonempstr0 = lonempstr0, .lprob = lprob, .eonempstr0 = eonempstr0, .eprob = eprob, .imethod = imethod ))), linkfun = eval(substitute(function(mu, extra = NULL) { cbind(theta2eta(mu[, 1], .lprob , earg = .eprob ), theta2eta(mu[, 2], .lonempstr0 , earg = .eonempstr0 )) }, list( .lonempstr0 = lonempstr0, .lprob = lprob, .eonempstr0 = eonempstr0, .eprob = eprob ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { mubin <- eta2theta(eta[, 1], .lprob , earg = .eprob ) onempstr0 <- eta2theta(eta[, 2], .lonempstr0 , earg = .eonempstr0 ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- dzibinom(x = round(w * y), size = w, prob = mubin, log = TRUE, pstr0 = 1 - onempstr0) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lonempstr0 = lonempstr0, .lprob = lprob, .eonempstr0 = eonempstr0, .eprob = eprob ))), vfamily = c("zibinomialff"), validparams = eval(substitute(function(eta, y, extra = NULL) { probb <- eta2theta(eta[, 1], .lprob , earg = .eprob ) onempstr0 <- eta2theta(eta[, 2], .lonempstr0 , earg = .eonempstr0 ) size <- extra$w okay1 <- all(is.finite(probb)) && all(0 < probb) && all(is.finite(onempstr0)) && all(0 < onempstr0) prob0 <- (1 - probb)^size Prob0.check <- dbinom(0, size = size, prob = probb) deflat.limit <- -prob0 / (1 - prob0) okay2.deflat 
<- TRUE if (okay1 && !(okay2.deflat <- all(onempstr0 < 1 - deflat.limit))) warning("parameter 'onempstr0' is too positive even allowing for ", "0-deflation.") okay1 && okay2.deflat }, list( .lonempstr0 = lonempstr0, .lprob = lprob, .eonempstr0 = eonempstr0, .eprob = eprob ))), deriv = eval(substitute(expression({ mubin <- eta2theta(eta[, 1], .lprob , earg = .eprob ) onempstr0 <- eta2theta(eta[, 2], .lonempstr0 , earg = .eonempstr0 ) omphi <- onempstr0 phi <- 1 - onempstr0 prob0 <- (1 - mubin)^w # Actually q^w pobs0 <- phi + (omphi) * prob0 index <- (y == 0) dl.domphi <- -(1 - prob0) / pobs0 # Note "-" dl.domphi[!index] <- +1 / (omphi[!index]) # Note "+" dl.dmubin <- -w * (omphi) * (1 - mubin)^(w - 1) / pobs0 dl.dmubin[!index] <- w[!index] * ( y[!index] / mubin[!index] - (1 - y[!index]) / (1 - mubin[!index])) dmubin.deta <- dtheta.deta(mubin, .lprob , earg = .eprob ) domphi.deta <- dtheta.deta(omphi, .lonempstr0 , earg = .eonempstr0 ) ans <- cbind(dl.dmubin * dmubin.deta, dl.domphi * domphi.deta) if ( .lprob == "logit") { ans[!index, 1] <- w[!index] * (y[!index] - mubin[!index]) } ans }), list( .lonempstr0 = lonempstr0, .lprob = lprob, .eonempstr0 = eonempstr0, .eprob = eprob ))), weight = eval(substitute(expression({ wz <- matrix(NA_real_, nrow = n, ncol = dimm(M)) ned2l.domphi2 <- (1 - prob0) / ((omphi) * pobs0) ned2l.domphimubin <- +w * ((1 - mubin)^(w - 1)) / pobs0 # Note "+" ned2l.dmubin2 <- (w * (omphi) / (mubin * (1 - mubin)^2)) * (1 - mubin - w * mubin * (1 - mubin)^w * phi / pobs0) wz[,iam(1, 1, M)] <- ned2l.dmubin2 * dmubin.deta^2 wz[,iam(2, 2, M)] <- ned2l.domphi2 * domphi.deta^2 wz[,iam(1, 2, M)] <- ned2l.domphimubin * domphi.deta * dmubin.deta if (TRUE) { ind6 <- (wz[, iam(1, 1, M)] < .Machine$double.eps) if (any(ind6)) wz[ind6, iam(1, 1, M)] <- .Machine$double.eps } wz }), list( .lonempstr0 = lonempstr0, .lprob = lprob, .eonempstr0 = eonempstr0, .eprob = eprob )))) } dzibinom <- function(x, size, prob, pstr0 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(size), length(prob), length(pstr0)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(size) != LLL) size <- rep_len(size, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(pstr0) != LLL) pstr0 <- rep_len(pstr0, LLL) ans <- dbinom(x = x, size = size, prob = prob, log = TRUE) ans <- if (log.arg) { ifelse(x == 0, log(pstr0 + (1-pstr0) * exp(ans)), log1p(-pstr0) + ans) } else { ifelse(x == 0, pstr0 + (1-pstr0) * exp(ans) , (1-pstr0) * exp(ans)) } prob0 <- (1 - prob)^size deflat.limit <- -prob0 / (1 - prob0) ans[pstr0 < deflat.limit] <- NaN ans[pstr0 > 1] <- NaN ans } pzibinom <- function(q, size, prob, pstr0 = 0 ) { LLL <- max(length(pstr0), length(size), length(prob), length(q)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(size) != LLL) size <- rep_len(size, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(pstr0) != LLL) pstr0 <- rep_len(pstr0, LLL) ans <- pbinom(q, size, prob) # lower.tail = lower.tail, log.p = log.p ans <- ifelse(q < 0, 0, pstr0 + (1 - pstr0) * ans) prob0 <- (1 - prob)^size deflat.limit <- -prob0 / (1 - prob0) ans[pstr0 < deflat.limit] <- NaN ans[pstr0 > 1] <- NaN ans } qzibinom <- function(p, size, prob, pstr0 = 0 ) { LLL <- max(length(p), length(size), length(prob), length(pstr0)) p <- rep_len(p, LLL) size <- rep_len(size, LLL) prob <- rep_len(prob, LLL) pstr0 <- rep_len(pstr0, LLL) ans <- p ans[p <= pstr0] <- 0 ans[p > pstr0] <- qbinom((p[p > pstr0] - pstr0[p > 
pstr0]) / (1 - pstr0[p > pstr0]), size[p > pstr0], prob[p > pstr0]) prob0 <- (1 - prob)^size deflat.limit <- -prob0 / (1 - prob0) ind0 <- (deflat.limit <= pstr0) & (pstr0 < 0) if (any(ind0)) { pobs0 <- pstr0[ind0] + (1 - pstr0[ind0]) * prob0[ind0] ans[p[ind0] <= pobs0] <- 0 pindex <- (1:LLL)[ind0 & (p > pobs0)] Pobs0 <- pstr0[pindex] + (1 - pstr0[pindex]) * prob0[pindex] ans[pindex] <- qposbinom((p[pindex] - Pobs0) / (1 - Pobs0), size = size[pindex], prob = prob[pindex]) } ans[pstr0 < deflat.limit] <- NaN ans[pstr0 > 1] <- NaN ans } rzibinom <- function(n, size, prob, pstr0 = 0) { qzibinom(runif(n), size, prob, pstr0 = pstr0) } dzinegbin <- function(x, size, prob = NULL, munb = NULL, pstr0 = 0, log = FALSE) { if (length(munb)) { if (length(prob)) stop("arguments 'prob' and 'munb' both specified") prob <- size / (size + munb) } if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(pstr0), length(size), length(prob), length(x)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(size) != LLL) size <- rep_len(size, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(pstr0) != LLL) pstr0 <- rep_len(pstr0, LLL) ans <- dnbinom(x = x, size = size, prob = prob, log = log.arg) ans <- if (log.arg) ifelse(x == 0, log(pstr0+(1-pstr0)*exp(ans)), log1p(-pstr0) + ans) else ifelse(x == 0, pstr0+(1-pstr0)* ans, (1-pstr0) * ans) prob0 <- prob^size deflat.limit <- -prob0 / (1 - prob0) ans[pstr0 < deflat.limit] <- NaN ans[pstr0 > 1] <- NaN ans } pzinegbin <- function(q, size, prob = NULL, munb = NULL, pstr0 = 0) { if (length(munb)) { if (length(prob)) stop("arguments 'prob' and 'munb' both specified") prob <- size / (size + munb) } LLL <- max(length(pstr0), length(size), length(prob), length(q)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(size) != LLL) size <- rep_len(size, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(pstr0) != LLL) pstr0 <- rep_len(pstr0, LLL) ans <- pnbinom(q = q, size = size, prob = prob) ans <- ifelse(q < 0, 0, pstr0 + (1 - pstr0) * ans) prob0 <- prob^size deflat.limit <- -prob0 / (1 - prob0) ans[pstr0 < deflat.limit] <- NaN ans[pstr0 > 1] <- NaN ans } qzinegbin <- function(p, size, prob = NULL, munb = NULL, pstr0 = 0) { if (length(munb)) { if (length(prob)) stop("arguments 'prob' and 'munb' both specified") prob <- size/(size + munb) } LLL <- max(length(p), length(prob), length(pstr0), length(size), length(munb)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(pstr0) != LLL) pstr0 <- rep_len(pstr0, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(size) != LLL) size <- rep_len(size, LLL) if (length(munb) != LLL) munb <- rep_len(munb, LLL) ans <- rep_len(NA_real_, LLL) prob0 <- prob^size deflat.limit <- -prob0 / (1 - prob0) ans[p <= pstr0] <- 0 ind4 <- (pstr0 < p) & (deflat.limit <= pstr0) ans[ ind4] <- qnbinom(p = (p[ind4] - pstr0[ind4]) / (1 - pstr0[ind4]), size = size[ind4], prob = prob[ind4]) ans[pstr0 < deflat.limit] <- NaN ans[1 < pstr0] <- NaN ans[p < 0] <- NaN ans[1 < p] <- NaN ans } rzinegbin <- function(n, size, prob = NULL, munb = NULL, pstr0 = 0) { qzinegbin(runif(n), size = size, prob = prob, munb = munb, pstr0 = pstr0) } zinegbinomial.control <- function(save.weights = TRUE, ...) 
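# In the dzinegbin()/pzinegbin()/qzinegbin() functions above,
# 'deflat.limit' is the lower bound pstr0 >= -f(0) / (1 - f(0)),
# where f(0) = prob^size, below which the zero-modified pmf would go
# negative; values of 'pstr0' in [deflat.limit, 0) correspond to
# zero-DEFLATION rather than inflation, and anything smaller gives NaN.
# zinegbinomial() may compute some of its working weights by
# simulated Fisher scoring (see 'nsimEIM'), which is costly, hence
# the default save.weights = TRUE below keeps them on the fitted
# object.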
{ list(save.weights = save.weights) } zinegbinomial <- function( zero = "size", type.fitted = c("mean", "munb", "pobs0", "pstr0", "onempstr0"), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, # higher is better for large 'size' eps.trig = 1e-7, max.support = 4000, # 20160127; I have changed this max.chunk.MB = 30, # max.memory = Inf is allowed lpstr0 = "logit", lmunb = "loge", lsize = "loge", imethod = 1, ipstr0 = NULL, imunb = NULL, iprobs.y = NULL, isize = NULL, gprobs.y = (0:9)/10, # 20160709; grid for finding munb.init gsize.mux = exp(c(-30, -20, -15, -10, -6:3))) { if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") lpstr0 <- as.list(substitute(lpstr0)) epstr0 <- link2list(lpstr0) lpstr0 <- attr(epstr0, "function.name") lmunb <- as.list(substitute(lmunb)) emunb <- link2list(lmunb) lmunb <- attr(emunb, "function.name") lsize <- as.list(substitute(lsize)) esize <- link2list(lsize) lsize <- attr(esize, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "munb", "pobs0", "pstr0", "onempstr0"))[1] if (!is.Numeric(eps.trig, length.arg = 1, positive = TRUE) || eps.trig > 0.001) stop("argument 'eps.trig' must be positive and smaller in value") ipstr0.small <- 1/64 # A number easily represented exactly if (length(ipstr0) && (!is.Numeric(ipstr0, positive = TRUE) || any(ipstr0 >= 1))) stop("argument 'ipstr0' must contain values in (0,1)") if (length(isize) && !is.Numeric(isize, positive = TRUE)) stop("argument 'isize' must contain positive values only") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE)) stop("argument 'nsimEIM' must be a positive integer") if (nsimEIM <= 50) warning("argument 'nsimEIM' should be greater than 50, say") new("vglmff", blurb = c("Zero-inflated negative binomial\n\n", "Links: ", namesof("pstr0", lpstr0, earg = epstr0, tag = FALSE), ", ", namesof("munb", lmunb, earg = emunb, tag = FALSE), ", ", namesof("size", lsize, earg = esize, tag = FALSE), "\n", "Mean: (1 - pstr0) * munb"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
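# A minimal usage sketch for zinegbinomial(), assuming the same
# hypothetical toy data frame 'zdata' (count response 'y1',
# covariate 'x2') as above; illustrative only:
#   fit <- vglm(y1 ~ x2, zinegbinomial, data = zdata)
#   coef(fit, matrix = TRUE)                  # pstr0, munb, size
#   head(fitted(fit, type.fitted = "pobs0"))  # fitted P(Y = 0)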
{ list(M1 = 3, Q1 = 1, expected = TRUE, mds.min = .mds.min , multipleResponses = FALSE, parameters.names = c("pstr0", "munb", "size"), eps.trig = .eps.trig , type.fitted = .type.fitted , nsimEIM = .nsimEIM , zero = .zero ) }, list( .zero = zero, .nsimEIM = nsimEIM, .eps.trig = eps.trig, .type.fitted = type.fitted, .mds.min = mds.min))), initialize = eval(substitute(expression({ M1 <- 3 temp16 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp16$w y <- temp16$y extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("pstr0", NOS) mynames2 <- param.names("munb", NOS) mynames3 <- param.names("size", NOS) predictors.names <- c(namesof(mynames1, .lpstr0 , earg = .epstr0 , tag = FALSE), namesof(mynames2, .lmunb , earg = .emunb , tag = FALSE), namesof(mynames3, .lsize , earg = .esize , tag = FALSE))[ interleave.VGAM(M1*NOS, M1 = M1)] gprobs.y <- .gprobs.y imunb <- .imunb # Default is NULL if (length(imunb)) imunb <- matrix(imunb, n, NOS, byrow = TRUE) if (!length(etastart)) { munb.init <- size.init <- matrix(NA_real_, n, NOS) gprobs.y <- .gprobs.y if (length( .iprobs.y )) gprobs.y <- .iprobs.y gsize.mux <- .gsize.mux # gsize.mux is on a relative scale for (jay in 1:NOS) { # For each response 'y_jay'... do: TFvec <- y[, jay] > 0 # Important to exclude the 0s posyvec <- y[TFvec, jay] munb.init.jay <- if ( .imethod == 1 ) { quantile(posyvec, probs = gprobs.y) - 1/2 # + 1/16 } else { weighted.mean(posyvec, w = w[TFvec, jay]) - 1/2 } if (length(imunb)) munb.init.jay <- imunb[, jay] gsize <- gsize.mux * 0.5 * (mean(munb.init.jay) + weighted.mean(posyvec, w = w[TFvec, jay])) if (length( .isize )) gsize <- .isize # isize is on an absolute scale try.this <- grid.search2(munb.init.jay, gsize, objfun = posNBD.Loglikfun2, y = posyvec, # x = x[TFvec, , drop = FALSE], w = w[TFvec, jay], ret.objfun = TRUE) # Last value is the loglik munb.init[, jay] <- try.this["Value1"] size.init[, jay] <- try.this["Value2"] } # for (jay ...) if (length( .ipstr0 )) { pstr0.init <- matrix( .ipstr0 , n, ncoly, byrow = TRUE) } else { pstr0.init <- matrix(0, n, ncoly) ipstr0.small <- .ipstr0.small # Number easily represented exactly for (jay in 1:NOS) { Phi.init <- pmax(ipstr0.small, weighted.mean(y[, jay] == 0, w[, jay]) - dnbinom(0, mu = munb.init[, jay], size = size.init[, jay])) if (mean(Phi.init == ipstr0.small) > 0.95 && .lpstr0 != "identitylink") warning("from the initial values only, the data appears ", "to have little or no 0-inflation, and possibly ", "0-deflation.") pstr0.init[, jay] <- Phi.init } # for (jay) } etastart <- cbind(theta2eta(pstr0.init, .lpstr0 , earg = .epstr0 ), theta2eta(munb.init, .lmunb , earg = .emunb ), theta2eta(size.init, .lsize , earg = .esize )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M1 = M1)] } }), list( .lpstr0 = lpstr0, .lmunb = lmunb, .lsize = lsize, .epstr0 = epstr0, .emunb = emunb, .esize = esize, .ipstr0 = ipstr0, .imunb = imunb, .isize = isize, .gprobs.y = gprobs.y, .gsize.mux = gsize.mux, .type.fitted = type.fitted, .iprobs.y = iprobs.y, .ipstr0.small = ipstr0.small, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 3) type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "munb", "pobs0", "pstr0", "onempstr0"))[1] pstr0 <- eta2theta(eta[, c(TRUE, FALSE, FALSE)], .lpstr0 , earg = .epstr0 ) if (type.fitted %in% c("mean", "munb", "pobs0")) munb <- eta2theta(eta[, c(FALSE, TRUE, FALSE)], .lmunb , earg = .emunb ) if (type.fitted %in% c("pobs0")) { kmat <- eta2theta(eta[, c(FALSE, FALSE, TRUE)], .lsize , earg = .esize ) tempk <- 1 / (1 + munb / kmat) # kmat / (kmat + munb) prob0 <- tempk^kmat # p(0) from negative binomial smallval <- .mds.min # Something like this is needed if (any(big.size <- munb / kmat < smallval)) { prob0[big.size] <- exp(-munb[big.size]) # The limit as kmat-->Inf } } ans <- switch(type.fitted, "mean" = (1 - pstr0) * munb, "munb" = munb, "pobs0" = pstr0 + (1 - pstr0) * prob0, # P(Y=0) "pstr0" = pstr0, "onempstr0" = 1 - pstr0) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lpstr0 = lpstr0, .lsize = lsize, .lmunb = lmunb, .epstr0 = epstr0, .esize = esize, .emunb = emunb, .mds.min = mds.min ))), last = eval(substitute(expression({ misc$link <- c(rep_len( .lpstr0 , NOS), rep_len( .lmunb , NOS), rep_len( .lsize , NOS))[interleave.VGAM(M1*NOS, M1 = M1)] temp.names <- c(mynames1, mynames2, mynames3)[interleave.VGAM(M1*NOS, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M1*NOS) names(misc$earg) <- temp.names for (ii in 1:NOS) { misc$earg[[M1*ii-2]] <- .epstr0 misc$earg[[M1*ii-1]] <- .emunb misc$earg[[M1*ii ]] <- .esize } misc$ipstr0 <- .ipstr0 misc$isize <- .isize misc$max.chunk.MB <- .max.chunk.MB misc$cutoff.prob <- .cutoff.prob misc$imethod <- .imethod misc$nsimEIM <- .nsimEIM misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .lpstr0 = lpstr0, .lmunb = lmunb, .lsize = lsize, .epstr0 = epstr0, .emunb = emunb, .esize = esize, .ipstr0 = ipstr0, .isize = isize, .nsimEIM = nsimEIM, .imethod = imethod, .cutoff.prob = cutoff.prob, .max.chunk.MB = max.chunk.MB ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { pstr0 <- eta2theta(eta[, c(TRUE, FALSE, FALSE)], .lpstr0 , earg = .epstr0 ) munb <- eta2theta(eta[, c(FALSE, TRUE, FALSE)], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, c(FALSE, FALSE, TRUE)], .lsize , earg = .esize ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dzinegbin(x = y, size = kmat, munb = munb, pstr0 = pstr0, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lpstr0 = lpstr0, .lmunb = lmunb, .lsize = lsize, .epstr0 = epstr0, .emunb = emunb, .esize = esize ))), vfamily = c("zinegbinomial"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) pstr0 <- eta2theta(eta[, c(TRUE, FALSE, FALSE)], .lpstr0 , earg = .epstr0 ) munb <- eta2theta(eta[, c(FALSE, TRUE, FALSE)], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, c(FALSE, FALSE, TRUE)], .lsize , earg = .esize ) rzinegbin(nsim * length(munb), size = kmat, munb = munb, pstr0 = pstr0) }, list( .lpstr0 = lpstr0, .lmunb = lmunb, .lsize = lsize, .epstr0 = epstr0, .emunb = emunb, .esize = esize ))), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- 3 NOS <- ncol(eta) / M1 pstr0 <- eta2theta(eta[, M1*(1:NOS)-2, drop = FALSE], .lpstr0 , earg = .epstr0 ) munb <- eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lmunb , earg = .emunb ) size 
<- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lsize , earg = .esize ) okay1 <- all(is.finite(munb)) && all(0 < munb) && all(is.finite(size)) && all(0 < size) && all(is.finite(pstr0)) && all(pstr0 < 1) prob <- size / (size + munb) prob0 <- prob^size Prob0.check <- dnbinom(0, size = size, prob = prob) deflat.limit <- -prob0 / (1 - prob0) okay2.deflat <- TRUE if (okay1 && !(okay2.deflat <- all(deflat.limit < pstr0))) warning("parameter 'pstr0' is too negative even allowing for ", "0-deflation.") smallval <- .mds.min # .munb.div.size overdispersion <- if (okay1 && okay2.deflat) all(munb / size > smallval) else FALSE if (!overdispersion) warning("parameter 'size' has very large values; ", "try fitting a zero-inflated Poisson ", "model instead.") okay1 && okay2.deflat && overdispersion }, list( .lpstr0 = lpstr0, .lmunb = lmunb, .lsize = lsize, .epstr0 = epstr0, .emunb = emunb, .esize = esize, .mds.min = mds.min))), deriv = eval(substitute(expression({ M1 <- 3 NOS <- ncol(eta) / M1 pstr0 <- eta2theta(eta[, M1*(1:NOS)-2, drop = FALSE], .lpstr0 , earg = .epstr0 ) munb <- eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lsize , earg = .esize ) dpstr0.deta <- dtheta.deta(pstr0, .lpstr0 , earg = .epstr0 ) dmunb.deta <- dtheta.deta(munb , .lmunb , earg = .emunb ) dsize.deta <- dtheta.deta(kmat , .lsize , earg = .esize ) dthetas.detas <- (cbind(dpstr0.deta, dmunb.deta, dsize.deta))[, interleave.VGAM(M1*NOS, M1 = M1)] smallval <- .mds.min # Something like this is needed if (any(big.size <- munb / kmat < smallval)) { if (FALSE) warning("parameter 'size' has very large values; ", "try fitting a zero-inflated Poisson ", "model instead") kmat[big.size] <- munb[big.size] / smallval } tempk <- 1 / (1 + munb / kmat) # kmat / (kmat + munb) tempm <- munb / (kmat + munb) prob0 <- tempk^kmat oneminusf0 <- 1 - prob0 AA16 <- tempm + log(tempk) df0.dmunb <- -tempk * prob0 df0.dkmat <- prob0 * AA16 df02.dmunb2 <- prob0 * tempk * (1 + 1/kmat) / (1 + munb/kmat) df02.dkmat2 <- prob0 * ((tempm^2) / kmat + AA16^2) df02.dkmat.dmunb <- -prob0 * (tempm/kmat + AA16) / (1 + munb/kmat) AA <- pobs0 <- cbind(pstr0 + (1 - pstr0) * prob0) dl.dpstr0 <- -1 / (1 - pstr0) dl.dmunb <- y / munb - (1 + y/kmat) / (1 + munb/kmat) dl.dsize <- digamma(y + kmat) - digamma(kmat) - (y - munb) / (munb + kmat) + log(tempk) if (any(big.size)) { dl.dsize[big.size] <- 1e-7 # A small number } for (spp. in 1:NOS) { index0 <- (y[, spp.] == 0) if (all(index0) || all(!index0)) stop("must have some 0s AND some positive counts in the data") pstr0. <- pstr0[index0, spp.] tempk. <- tempk[index0, spp.] # kmat. / (kmat. + munb.) tempm. <- tempm[index0, spp.] # munb. / (kmat. + munb.) prob0. <- prob0[index0, spp.] # tempk.^kmat. df0.dmunb. <- df0.dmunb[index0, spp.] # -tempk.* prob0. df0.dkmat. <- df0.dkmat[index0, spp.] # prob0.*(tempm.+log(tempk.)) denom. <- AA[index0, spp.] # pstr0. + (1 - pstr0.) * prob0. dl.dpstr0[index0, spp.] <- (1 - prob0.) / denom. dl.dmunb[index0, spp.] <- (1 - pstr0.) * df0.dmunb. / denom. dl.dsize[index0, spp.] <- (1 - pstr0.) * df0.dkmat. / denom. } # of spp. 
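# The loop above overwrites the scores at y == 0 with their mixture
# forms, e.g. dl/dpstr0 = (1 - f(0)) / (pstr0 + (1 - pstr0) * f(0)),
# where f(0) = (k/(k + munb))^k is the parent NB point mass at zero;
# for y > 0 the zero-inflation part contributes only the factor
# (1 - pstr0), which gives the dl.dpstr0 <- -1 / (1 - pstr0) used
# before the loop.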
dl.dthetas <- cbind(dl.dpstr0, dl.dmunb, dl.dsize)[, interleave.VGAM(M1*NOS, M1 = M1)] ans <- c(w) * dl.dthetas * dthetas.detas ans }), list( .lpstr0 = lpstr0, .lmunb = lmunb, .lsize = lsize, .epstr0 = epstr0, .emunb = emunb, .esize = esize, .mds.min = mds.min ))), weight = eval(substitute(expression({ wz <- matrix(0, n, M + M-1 + M-2) mymu <- munb / oneminusf0 # Is the same as 'mu', == E(Y) max.support <- .max.support max.chunk.MB <- .max.chunk.MB ind2 <- matrix(FALSE, n, NOS) # Used for SFS for (jay in 1:NOS) { eff.p <- sort(c( .cutoff.prob , 1 - .cutoff.prob )) Q.mins <- 1 Q.maxs <- qposnegbin(p = eff.p[2] , munb = munb[, jay], size = kmat[, jay]) + 10 eps.trig <- .eps.trig Q.MAXS <- pmax(10, ceiling(1 / sqrt(eps.trig))) Q.maxs <- pmin(Q.maxs, Q.MAXS) ind1 <- if (max.chunk.MB > 0) (Q.maxs - Q.mins < max.support) else FALSE if ((NN <- sum(ind1)) > 0) { Object.Size <- NN * 8 * max(Q.maxs - Q.mins) / (2^20) n.chunks <- if (intercept.only) 1 else max(1, ceiling( Object.Size / max.chunk.MB)) chunk.rows <- ceiling(NN / n.chunks) ind2[, jay] <- ind1 # Save this wind2 <- which(ind1) upr.ptr <- 0 lwr.ptr <- upr.ptr + 1 while (lwr.ptr <= NN) { upr.ptr <- min(upr.ptr + chunk.rows, NN) sind2 <- wind2[lwr.ptr:upr.ptr] wz[sind2, M1*jay] <- EIM.posNB.specialp(munb = munb[sind2, jay], size = kmat[sind2, jay], y.max = max(Q.maxs[sind2]), cutoff.prob = .cutoff.prob , prob0 = prob0[sind2, jay], df0.dkmat = df0.dkmat[sind2, jay], df02.dkmat2 = df02.dkmat2[sind2, jay], intercept.only = intercept.only, second.deriv = FALSE) if (FALSE) wz2[sind2, M1*jay] <- EIM.posNB.speciald(munb = munb[sind2, jay], size = kmat[sind2, jay], y.min = min(Q.mins2[sind2]), y.max = max(Q.maxs[sind2]), cutoff.prob = .cutoff.prob , prob0 = prob0[sind2, jay], df0.dkmat = df0.dkmat[sind2, jay], df02.dkmat2 = df02.dkmat2[sind2, jay], intercept.only = intercept.only, second.deriv = FALSE) wz[sind2, M1*jay] <- wz[sind2, M1*jay] * (1 - AA[sind2, jay]) - (1-pstr0[sind2, jay]) * (df02.dkmat2[sind2, jay] - (1-pstr0[sind2, jay]) * (df0.dkmat[sind2, jay]^2) / AA[sind2, jay]) if (any(eim.kk.TF <- wz[sind2, M1*jay] <= 0 | is.na(wz[sind2, M1*jay]))) { ind2[sind2[eim.kk.TF], jay] <- FALSE } lwr.ptr <- upr.ptr + 1 } # while } } # end of for (jay in 1:NOS) for (jay in 1:NOS) { run.varcov <- 0 ii.TF <- !ind2[, jay] # Not assigned above if (any(ii.TF)) { kkvec <- kmat[ii.TF, jay] muvec <- munb[ii.TF, jay] PSTR0 <- pstr0[ii.TF, jay] for (ii in 1:( .nsimEIM )) { ysim <- rzinegbin(sum(ii.TF), pstr0 = PSTR0, mu = muvec, size = kkvec) index0 <- (ysim == 0) dl.dk <- digamma(ysim + kkvec) - digamma(kkvec) - (ysim - muvec) / (muvec + kkvec) + log1p(-muvec / (kkvec + muvec)) # + ans0 <- (1 - PSTR0) * df0.dkmat[ii.TF , jay] / AA[ii.TF , jay] dl.dk[index0] <- ans0[index0] run.varcov <- run.varcov + dl.dk^2 } # end of for loop run.varcov <- c(run.varcov / .nsimEIM ) ned2l.dk2 <- if (intercept.only) mean(run.varcov) else run.varcov wz[ii.TF, M1*jay] <- ned2l.dk2 # * (dsize.deta[ii.TF, jay])^2 } } wz[, M1*(1:NOS) ] <- wz[, M1*(1:NOS) ] * dsize.deta^2 save.weights <- !all(ind2) ned2l.dpstr02 <- oneminusf0 / (AA * (1 - pstr0)) wz[, M1*(1:NOS) - 2] <- ned2l.dpstr02 * dpstr0.deta^2 ned2l.dpstr0.dmunb <- df0.dmunb / AA wz[, M + M1*(1:NOS) - 2] <- ned2l.dpstr0.dmunb * dpstr0.deta * dmunb.deta ned2l.dpstr0.dsize <- df0.dkmat / AA wz[, M + M-1 + M1*(1:NOS) - 2] <- ned2l.dpstr0.dsize * dpstr0.deta * dsize.deta ned2l.dmunb2 <- (1 - AA) * (mymu / munb^2 - ((1 + mymu/kmat) / kmat) / (1 + munb/kmat)^2) - (1-pstr0) * (df02.dmunb2 - (1 - pstr0) * (df0.dmunb^2) / AA) wz[, 
M1*(1:NOS) - 1] <- ned2l.dmunb2 * dmunb.deta^2 dAA.dmunb <- (1 - pstr0) * df0.dmunb ned2l.dmunbsize <- (1 - AA) * (munb - mymu) / (munb + kmat)^2 - (1-pstr0) * (df02.dkmat.dmunb - df0.dkmat * dAA.dmunb / AA) wz[, M + M1*(1:NOS) - 1] <- ned2l.dmunbsize * dmunb.deta * dsize.deta w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = NOS) }), list( .lpstr0 = lpstr0, .epstr0 = epstr0, .nsimEIM = nsimEIM, .cutoff.prob = cutoff.prob, .eps.trig = eps.trig, .max.support = max.support, .max.chunk.MB = max.chunk.MB )))) } # End of zinegbinomial zinegbinomialff.control <- function(save.weights = TRUE, ...) { list(save.weights = save.weights) } zinegbinomialff <- function(lmunb = "loge", lsize = "loge", lonempstr0 = "logit", type.fitted = c("mean", "munb", "pobs0", "pstr0", "onempstr0"), imunb = NULL, isize = NULL, ionempstr0 = NULL, zero = c("size", "onempstr0"), imethod = 1, iprobs.y = NULL, # 0.35, cutoff.prob = 0.999, # higher is better for large 'size' eps.trig = 1e-7, max.support = 4000, # 20160127; I have changed this max.chunk.MB = 30, # max.memory = Inf is allowed gprobs.y = (0:9)/10, # 20160709; grid for finding munb.init gsize.mux = exp((-12:6)/2), mds.min = 1e-3, nsimEIM = 500) { if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") lmunb <- as.list(substitute(lmunb)) emunb <- link2list(lmunb) lmunb <- attr(emunb, "function.name") lsize <- as.list(substitute(lsize)) esize <- link2list(lsize) lsize <- attr(esize, "function.name") lonempstr0 <- as.list(substitute(lonempstr0)) eonempstr0 <- link2list(lonempstr0) lonempstr0 <- attr(eonempstr0, "function.name") ipstr0.small <- 1/64 # A number easily represented exactly type.fitted <- match.arg(type.fitted, c("mean", "munb", "pobs0", "pstr0", "onempstr0"))[1] if (!is.Numeric(eps.trig, length.arg = 1, positive = TRUE) || eps.trig > 0.001) stop("argument 'eps.trig' must be positive and smaller in value") if (length(ionempstr0) && (!is.Numeric(ionempstr0, positive = TRUE) || any(ionempstr0 >= 1))) stop("argument 'ionempstr0' must contain values in (0,1)") if (length(isize) && !is.Numeric(isize, positive = TRUE)) stop("argument 'isize' must contain positive values only") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE)) stop("argument 'nsimEIM' must be a positive integer") if (nsimEIM <= 50) warning("argument 'nsimEIM' should be greater than 50, say") new("vglmff", blurb = c("Zero-inflated negative binomial\n\n", "Links: ", namesof("munb", lmunb, earg = emunb, tag = FALSE), ", ", namesof("size", lsize, earg = esize, tag = FALSE), ", ", namesof("onempstr0", lonempstr0, earg = eonempstr0, tag = FALSE), "\n", "Mean: (1 - pstr0) * munb"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
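# zinegbinomialff() fits the same model as zinegbinomial() but is
# parameterized by (munb, size, onempstr0), where
# onempstr0 = 1 - pstr0 = P(not a structural zero).  A minimal
# sketch, assuming the same hypothetical 'zdata' as above:
#   fit2 <- vglm(y1 ~ x2, zinegbinomialff, data = zdata)
#   head(fitted(fit2, type.fitted = "onempstr0"))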
{ list(M1 = 3, Q1 = 1, expected = TRUE, mds.min = .mds.min , multipleResponses = TRUE, parameters.names = c("munb", "size", "onempstr0"), eps.trig = .eps.trig , nsimEIM = .nsimEIM , type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .nsimEIM = nsimEIM, .eps.trig = eps.trig, .type.fitted = type.fitted, .mds.min = mds.min ))), initialize = eval(substitute(expression({ M1 <- 3 temp16 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp16$w y <- temp16$y extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("munb", NOS) mynames2 <- param.names("size", NOS) mynames3 <- param.names("onempstr0", NOS) predictors.names <- c(namesof(mynames1, .lmunb , earg = .emunb , tag = FALSE), namesof(mynames2, .lsize , earg = .esize , tag = FALSE), namesof(mynames3, .lonempstr0 , earg = .eonempstr0 , tag = FALSE))[ interleave.VGAM(M1*NOS, M1 = M1)] gprobs.y <- .gprobs.y imunb <- .imunb # Default is NULL if (length(imunb)) imunb <- matrix(imunb, n, NOS, byrow = TRUE) if (!length(etastart)) { munb.init <- size.init <- matrix(NA_real_, n, NOS) gprobs.y <- .gprobs.y if (length( .iprobs.y )) gprobs.y <- .iprobs.y gsize.mux <- .gsize.mux # gsize.mux is on a relative scale for (jay in 1:NOS) { # For each response 'y_jay'... do: TFvec <- y[, jay] > 0 # Important to exclude the 0s posyvec <- y[TFvec, jay] munb.init.jay <- if ( .imethod == 1 ) { quantile(posyvec, probs = gprobs.y) - 1/2 # + 1/16 } else { weighted.mean(posyvec, w = w[TFvec, jay]) - 1/2 } if (length(imunb)) munb.init.jay <- imunb[, jay] gsize <- gsize.mux * 0.5 * (mean(munb.init.jay) + weighted.mean(posyvec, w = w[TFvec, jay])) if (length( .isize )) gsize <- .isize # isize is on an absolute scale try.this <- grid.search2(munb.init.jay, gsize, objfun = posNBD.Loglikfun2, y = posyvec, # x = x[TFvec, , drop = FALSE], w = w[TFvec, jay], ret.objfun = TRUE) # Last value is the loglik munb.init[, jay] <- try.this["Value1"] size.init[, jay] <- try.this["Value2"] } # for (jay ...) if (length( .ionempstr0 )) { onempstr0.init <- matrix( .ionempstr0 , n, ncoly, byrow = TRUE) } else { onempstr0.init <- matrix(0, n, ncoly) ipstr0.small <- .ipstr0.small # Easily represented exactly for (jay in 1:NOS) { Phi.init <- pmax(ipstr0.small, weighted.mean(y[, jay] == 0, w[, jay]) - dnbinom(0, mu = munb.init[, jay], size = size.init[, jay])) if (mean(Phi.init == ipstr0.small) > 0.95) warning("from the initial values only, the data appears ", "to have little or no 0-inflation") onempstr0.init[, jay] <- 1 - Phi.init } # for (jay) } etastart <- cbind(theta2eta(munb.init, .lmunb , earg = .emunb ), theta2eta(size.init, .lsize , earg = .esize ), theta2eta(onempstr0.init, .lonempstr0 , earg = .eonempstr0 )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M1 = M1)] } }), list( .lonempstr0 = lonempstr0, .lmunb = lmunb, .lsize = lsize, .eonempstr0 = eonempstr0, .emunb = emunb, .esize = esize, .ionempstr0 = ionempstr0, .imunb = imunb, .isize = isize, .gprobs.y = gprobs.y, .gsize.mux = gsize.mux, .type.fitted = type.fitted, .ipstr0.small = ipstr0.small, .iprobs.y = iprobs.y, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "munb", "pobs0", "pstr0", "onempstr0"))[1] M1 <- 3 NOS <- ncol(eta) / M1 if (type.fitted %in% c("mean", "munb", "pobs0")) munb <- eta2theta(eta[, M1*(1:NOS)-2, drop = FALSE], .lmunb , earg = .emunb ) if (type.fitted %in% c("pobs0")) { kmat <- eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lsize , earg = .esize ) tempk <- 1 / (1 + munb / kmat) # kmat / (kmat + munb) prob0 <- tempk^kmat # p(0) from negative binomial smallval <- .mds.min # Something like this is needed if (any(big.size <- munb / kmat < smallval)) { prob0[big.size] <- exp(-munb[big.size]) # The limit as kmat-->Inf } } onempstr0 <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lonempstr0 , earg = .eonempstr0 ) ans <- switch(type.fitted, "mean" = onempstr0 * munb, "munb" = munb, "pobs0" = 1 - onempstr0 + onempstr0 * prob0, # P(Y=0) "pstr0" = 1 - onempstr0, "onempstr0" = onempstr0) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lonempstr0 = lonempstr0, .lsize = lsize, .lmunb = lmunb, .eonempstr0 = eonempstr0, .esize = esize, .emunb = emunb, .type.fitted = type.fitted, .mds.min = mds.min ))), last = eval(substitute(expression({ misc$link <- c(rep_len( .lmunb , NOS), rep_len( .lsize , NOS), rep_len( .lonempstr0 , NOS))[interleave.VGAM(M1*NOS, M1 = M1)] temp.names <- c(mynames1, mynames2, mynames3)[interleave.VGAM(M1*NOS, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M1*NOS) names(misc$earg) <- temp.names for (ii in 1:NOS) { misc$earg[[M1*ii-2]] <- .emunb misc$earg[[M1*ii-1]] <- .esize misc$earg[[M1*ii ]] <- .eonempstr0 } misc$imethod <- .imethod misc$nsimEIM <- .nsimEIM misc$expected <- TRUE misc$M1 <- M1 misc$ionempstr0 <- .ionempstr0 misc$isize <- .isize misc$multipleResponses <- TRUE }), list( .lonempstr0 = lonempstr0, .lmunb = lmunb, .lsize = lsize, .eonempstr0 = eonempstr0, .emunb = emunb, .esize = esize, .ionempstr0 = ionempstr0, .isize = isize, .nsimEIM = nsimEIM, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { M1 <- 3 NOS <- extra$NOS munb <- eta2theta(eta[, M1*(1:NOS)-2, drop = FALSE], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lsize , earg = .esize ) onempstr0 <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lonempstr0 , earg = .eonempstr0 ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dzinegbin(x = y, size = kmat, munb = munb, pstr0 = 1 - onempstr0, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lonempstr0 = lonempstr0, .lmunb = lmunb, .lsize = lsize, .eonempstr0 = eonempstr0, .emunb = emunb, .esize = esize ))), vfamily = c("zinegbinomialff"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) munb <- eta2theta(eta[, c(TRUE, FALSE, FALSE)], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, c(FALSE, TRUE, FALSE)], .lsize , earg = .esize ) onempstr0 <- eta2theta(eta[, c(FALSE, FALSE, TRUE)], .lpstr0 , earg = .epstr0 ) rzinegbin(nsim * length(munb), size = kmat, munb = munb, pstr0 = 1 - onempstr0) }, list( .lonempstr0 = lonempstr0, .lmunb = lmunb, .lsize = lsize, .eonempstr0 = eonempstr0, .emunb = emunb, .esize = esize ))), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- 3 NOS <- ncol(eta) / M1 munb <- 
eta2theta(eta[, M1*(1:NOS)-2, drop = FALSE], .lmunb , earg = .emunb ) size <- eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lsize , earg = .esize ) onempstr0 <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lonempstr0 , earg = .eonempstr0 ) okay1 <- all(is.finite(munb)) && all(0 < munb) && all(is.finite(size)) && all(0 < size) && all(is.finite(onempstr0)) && all(0 < onempstr0) prob <- size / (size + munb) prob0 <- prob^size Prob0.check <- dnbinom(0, size = size, prob = prob) deflat.limit <- -prob0 / (1 - prob0) okay2.deflat <- TRUE if (okay1 && !(okay2.deflat <- all(onempstr0 < 1 - deflat.limit))) warning("parameter 'onempstr0' is too positive even allowing for ", "0-deflation.") smallval <- .mds.min # .munb.div.size overdispersion <- if (okay1 && okay2.deflat) all(munb / size > smallval) else FALSE if (!overdispersion) warning("parameter 'size' has very large values; ", "try fitting a zero-inflated Poisson ", "model instead.") okay1 && okay2.deflat && overdispersion }, list( .lonempstr0 = lonempstr0, .lmunb = lmunb, .lsize = lsize, .eonempstr0 = eonempstr0, .emunb = emunb, .esize = esize, .mds.min = mds.min))), deriv = eval(substitute(expression({ M1 <- 3 NOS <- ncol(eta) / M1 munb <- eta2theta(eta[, M1*(1:NOS)-2, drop = FALSE], .lmunb , earg = .emunb ) kmat <- eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lsize , earg = .esize ) onempstr0 <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lonempstr0 , earg = .eonempstr0 ) donempstr0.deta <- dtheta.deta(onempstr0, .lonempstr0 , earg = .eonempstr0 ) dmunb.deta <- dtheta.deta(munb , .lmunb , earg = .emunb ) dsize.deta <- dtheta.deta(kmat , .lsize , earg = .esize ) dthetas.detas <- (cbind(dmunb.deta, dsize.deta, donempstr0.deta))[, interleave.VGAM(M1*NOS, M1 = M1)] smallval <- .mds.min # Something like this is needed if (any(big.size <- munb / kmat < smallval)) { if (FALSE) warning("parameter 'size' has very large values; ", "try fitting a zero-inflated Poisson ", "model instead") kmat[big.size] <- munb[big.size] / smallval } tempk <- 1 / (1 + munb / kmat) # kmat / (kmat + munb) tempm <- munb / (kmat + munb) prob0 <- tempk^kmat oneminusf0 <- 1 - prob0 AA16 <- tempm + log(tempk) df0.dmunb <- -tempk * prob0 df0.dkmat <- cbind(prob0 * AA16) df02.dmunb2 <- prob0 * tempk * (1 + 1/kmat) / (1 + munb/kmat) df02.dkmat2 <- prob0 * ((tempm^2) / kmat + AA16^2) df02.dkmat.dmunb <- -prob0 * (tempm/kmat + AA16) / (1 + munb/kmat) pstr0 <- 1 - onempstr0 AA <- pobs0 <- cbind(pstr0 + (onempstr0) * prob0) dl.dmunb <- y / munb - (1 + y/kmat) / (1 + munb/kmat) dl.dsize <- digamma(y + kmat) - digamma(kmat) - (y - munb) / (munb + kmat) + log(tempk) dl.donempstr0 <- +1 / (onempstr0) for (spp. in 1:NOS) { index0 <- (y[, spp.] == 0) if (all(index0) || all(!index0)) stop("must have some 0s AND some positive counts in the data") kmat. <- kmat[index0, spp.] munb. <- munb[index0, spp.] onempstr0. <- onempstr0[index0, spp.] tempk. <- kmat. / (kmat. + munb.) tempm. <- munb. / (kmat. + munb.) prob0. <- tempk.^kmat. df0.dmunb. <- -tempk.* prob0. df0.dkmat. <- prob0. * (tempm. + log(tempk.)) denom. <- 1 - onempstr0. + (onempstr0.) * prob0. dl.donempstr0[index0, spp.] <- -(1 - prob0.) / denom. # note "-" dl.dmunb[index0, spp.] <- (onempstr0.) * df0.dmunb. / denom. dl.dsize[index0, spp.] <- (onempstr0.) * df0.dkmat. / denom. } # of spp. 
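# Same y == 0 score adjustments as in zinegbinomial()'s @deriv slot,
# but expressed in terms of onempstr0 = 1 - pstr0; hence the sign
# reversal in dl.donempstr0 relative to dl.dpstr0 (see the 'note'
# comments above).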
dl.dthetas <- cbind(dl.dmunb, dl.dsize, dl.donempstr0)[, interleave.VGAM(M1*NOS, M1 = M1)] c(w) * dl.dthetas * dthetas.detas }), list( .lonempstr0 = lonempstr0, .lmunb = lmunb, .lsize = lsize, .eonempstr0 = eonempstr0, .emunb = emunb, .esize = esize, .mds.min = mds.min ))), weight = eval(substitute(expression({ wz <- matrix(0, n, M + M-1 + M-2) mymu <- munb / oneminusf0 # Is the same as 'mu', == E(Y) max.support <- .max.support max.chunk.MB <- .max.chunk.MB ind2 <- matrix(FALSE, n, NOS) # Used for SFS for (jay in 1:NOS) { eff.p <- sort(c( .cutoff.prob , 1 - .cutoff.prob )) Q.mins <- 1 Q.maxs <- qposnegbin(p = eff.p[2] , munb = munb[, jay], size = kmat[, jay]) + 10 eps.trig <- .eps.trig Q.MAXS <- pmax(10, ceiling(1 / sqrt(eps.trig))) Q.maxs <- pmin(Q.maxs, Q.MAXS) ind1 <- if (max.chunk.MB > 0) (Q.maxs - Q.mins < max.support) else FALSE if ((NN <- sum(ind1)) > 0) { Object.Size <- NN * 8 * max(Q.maxs - Q.mins) / (2^20) n.chunks <- if (intercept.only) 1 else max(1, ceiling( Object.Size / max.chunk.MB)) chunk.rows <- ceiling(NN / n.chunks) ind2[, jay] <- ind1 # Save this wind2 <- which(ind1) upr.ptr <- 0 lwr.ptr <- upr.ptr + 1 while (lwr.ptr <= NN) { upr.ptr <- min(upr.ptr + chunk.rows, NN) sind2 <- wind2[lwr.ptr:upr.ptr] wz[sind2, M1*jay - 1] <- EIM.posNB.specialp(munb = munb[sind2, jay], size = kmat[sind2, jay], y.max = max(Q.maxs[sind2]), cutoff.prob = .cutoff.prob , prob0 = prob0[sind2, jay], df0.dkmat = df0.dkmat[sind2, jay], df02.dkmat2 = df02.dkmat2[sind2, jay], intercept.only = intercept.only, second.deriv = FALSE) if (FALSE) wz2[sind2, M1*jay - 1] <- EIM.posNB.speciald(munb = munb[sind2, jay], size = kmat[sind2, jay], y.min = min(Q.mins2[sind2]), y.max = max(Q.maxs[sind2]), cutoff.prob = .cutoff.prob , prob0 = prob0[sind2, jay], df0.dkmat = df0.dkmat[sind2, jay], df02.dkmat2 = df02.dkmat2[sind2, jay], intercept.only = intercept.only, second.deriv = FALSE) wz[sind2, M1*jay - 1] <- wz[sind2, M1*jay - 1] * (1 - AA[sind2, jay]) - (1-pstr0[sind2, jay]) * (df02.dkmat2[sind2, jay] - (1-pstr0[sind2, jay]) * (df0.dkmat[sind2, jay]^2) / AA[sind2, jay]) if (any(eim.kk.TF <- wz[sind2, M1*jay - 1] <= 0 | is.na(wz[sind2, M1*jay - 1]))) { ind2[sind2[eim.kk.TF], jay] <- FALSE } lwr.ptr <- upr.ptr + 1 } # while } } # end of for (jay in 1:NOS) for (jay in 1:NOS) { run.varcov <- 0 ii.TF <- !ind2[, jay] # Not assigned above if (any(ii.TF)) { kkvec <- kmat[ii.TF, jay] muvec <- munb[ii.TF, jay] PSTR0 <- pstr0[ii.TF, jay] for (ii in 1:( .nsimEIM )) { ysim <- rzinegbin(sum(ii.TF), pstr0 = PSTR0, mu = muvec, size = kkvec) index0 <- (ysim == 0) dl.dk <- digamma(ysim + kkvec) - digamma(kkvec) - (ysim - muvec) / (muvec + kkvec) + log1p(-muvec / (kkvec + muvec)) # + ans0 <- (1 - PSTR0) * df0.dkmat[ii.TF , jay] / AA[ii.TF , jay] dl.dk[index0] <- ans0[index0] run.varcov <- run.varcov + dl.dk^2 } # end of for loop run.varcov <- c(run.varcov / .nsimEIM ) ned2l.dk2 <- if (intercept.only) mean(run.varcov) else run.varcov wz[ii.TF, M1*jay - 1] <- ned2l.dk2 # * (dsize.deta[ii.TF,jay])^2 } } wz[, M1*(1:NOS) - 1] <- wz[, M1*(1:NOS) - 1] * dsize.deta^2 save.weights <- !all(ind2) ned2l.donempstr02 <- oneminusf0 / (AA * (onempstr0)) wz[, M1*(1:NOS) ] <- ned2l.donempstr02 * donempstr0.deta^2 ned2l.donempstr0.dmunb <- -df0.dmunb / AA # Negated (1/2) wz[, M + M-1 + M1*(1:NOS) - 2] <- ned2l.donempstr0.dmunb * donempstr0.deta * dmunb.deta ned2l.donempstr0.dsize <- -df0.dkmat / AA # Negated (2/2) wz[, M + M1*(1:NOS) - 1] <- ned2l.donempstr0.dsize * donempstr0.deta * dsize.deta ned2l.dmunb2 <- (1 - AA) * (mymu / munb^2 - ((1 + 
mymu/kmat) / kmat) / (1 + munb/kmat)^2) - (1-pstr0) * (df02.dmunb2 - (1 - pstr0) * (df0.dmunb^2) / AA) wz[, M1*(1:NOS) - 2] <- ned2l.dmunb2 * dmunb.deta^2 dAA.dmunb <- (onempstr0) * df0.dmunb ned2l.dmunbsize <- (1 - AA) * (munb - mymu) / (munb + kmat)^2 - (onempstr0) * (df02.dkmat.dmunb - df0.dkmat * dAA.dmunb / AA) wz[, M + M1*(1:NOS) - 2] <- ned2l.dmunbsize * dmunb.deta * dsize.deta w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = NOS) }), list( .lonempstr0 = lonempstr0, .eonempstr0 = eonempstr0, .nsimEIM = nsimEIM, .cutoff.prob = cutoff.prob, .eps.trig = eps.trig, .max.support = max.support, .max.chunk.MB = max.chunk.MB )))) } # End of zinegbinomialff dzigeom <- function(x, prob, pstr0 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(prob), length(pstr0)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(pstr0) != LLL) pstr0 <- rep_len(pstr0, LLL) ans <- dgeom(x = x, prob = prob, log = TRUE) ans <- if (log.arg) { ifelse(x == 0, log(pstr0 + (1 - pstr0) * exp(ans)), log1p(-pstr0) + ans) } else { ifelse(x == 0, pstr0 + (1 - pstr0) * exp(ans) , (1 - pstr0) * exp(ans)) } prob0 <- prob deflat.limit <- -prob0 / (1 - prob0) ans[pstr0 < deflat.limit] <- NaN ans[pstr0 > 1] <- NaN ans } pzigeom <- function(q, prob, pstr0 = 0) { LLL <- max(length(q), length(prob), length(pstr0)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(pstr0) != LLL) pstr0 <- rep_len(pstr0, LLL) ans <- pgeom(q, prob) ans <- ifelse(q < 0, 0, pstr0 + (1-pstr0) * ans) prob0 <- prob deflat.limit <- -prob0 / (1 - prob0) ans[pstr0 < deflat.limit] <- NaN ans[pstr0 > 1] <- NaN ans } qzigeom <- function(p, prob, pstr0 = 0) { LLL <- max(length(p), length(prob), length(pstr0)) ans <- p <- rep_len(p, LLL) prob <- rep_len(prob, LLL) pstr0 <- rep_len(pstr0, LLL) ans[p <= pstr0] <- 0 ind1 <- (p > pstr0) ans[ind1] <- qgeom((p[ind1] - pstr0[ind1]) / (1 - pstr0[ind1]), prob = prob[ind1]) prob0 <- prob deflat.limit <- -prob0 / (1 - prob0) ind0 <- (deflat.limit <= pstr0) & (pstr0 < 0) if (any(ind0)) { pobs0 <- pstr0[ind0] + (1 - pstr0[ind0]) * prob0[ind0] ans[p[ind0] <= pobs0] <- 0 pindex <- (1:LLL)[ind0 & (p > pobs0)] Pobs0 <- pstr0[pindex] + (1 - pstr0[pindex]) * prob0[pindex] ans[pindex] <- 1 + qgeom((p[pindex] - Pobs0) / (1 - Pobs0), prob = prob[pindex]) } ans[pstr0 < deflat.limit] <- NaN ans[pstr0 > 1] <- NaN ans } rzigeom <- function(n, prob, pstr0 = 0) { qzigeom(runif(n), prob, pstr0 = pstr0) } zigeometric <- function( lpstr0 = "logit", lprob = "logit", type.fitted = c("mean", "prob", "pobs0", "pstr0", "onempstr0"), ipstr0 = NULL, iprob = NULL, imethod = 1, bias.red = 0.5, zero = NULL) { expected <- TRUE lpstr0 <- as.list(substitute(lpstr0)) epstr0 <- link2list(lpstr0) lpstr0 <- attr(epstr0, "function.name") lprob <- as.list(substitute(lprob)) eprob <- link2list(lprob) lprob <- attr(eprob, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs0", "pstr0", "onempstr0"))[1] if (length(ipstr0)) if (!is.Numeric(ipstr0, positive = TRUE) || ipstr0 >= 1) stop("argument 'ipstr0' is out of range") if (length(iprob)) if (!is.Numeric(iprob, positive = TRUE) || iprob >= 1) stop("argument 'iprob' is out of range") if (!is.Numeric(bias.red, length.arg = 1, positive = TRUE) || bias.red > 1) stop("argument 'bias.red' must be between 0 and 1") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) 
|| imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("Zero-inflated geometric distribution,\n", "P[Y = 0] = pstr0 + (1 - pstr0) * prob,\n", "P[Y = y] = (1 - pstr0) * prob * (1 - prob)^y, ", "y = 1, 2, ...\n\n", "Link: ", namesof("pstr0", lpstr0, earg = epstr0), ", ", namesof("prob", lprob, earg = eprob ), "\n", "Mean: (1 - pstr0) * (1 - prob) / prob"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("pstr0", "prob"), type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("pstr0", ncoly) mynames2 <- param.names("prob", ncoly) predictors.names <- c(namesof(mynames1, .lpstr0, earg = .epstr0, tag = FALSE), namesof(mynames2, .lprob, earg = .eprob, tag = FALSE))[ interleave.VGAM(M1 * NOS, M1 = M1)] if (!length(etastart)) { prob.init <- if ( .imethod == 3) .bias.red / (1 + y + 1/8) else if ( .imethod == 2) .bias.red / (1 + matrix(colMeans(y) + 1/8, n, ncoly, byrow = TRUE)) else .bias.red / (1 + matrix(colSums(y * w) / colSums(w) + 1/8, n, ncoly, byrow = TRUE)) prob.init <- if (length( .iprob )) { matrix( .iprob , n, ncoly, byrow = TRUE) } else { prob.init # Already a matrix } prob0.est <- psze.init <- matrix(0, n, NOS) for (jlocal in 1:NOS) { prob0.est[, jlocal] <- sum(w[y[, jlocal] == 0, jlocal]) / sum(w[, jlocal]) psze.init[, jlocal] <- if ( .imethod == 3) prob0.est[, jlocal] / 2 else if ( .imethod == 1) pmax(0.05, (prob0.est[, jlocal] - median(prob.init[, jlocal]))) else prob0.est[, jlocal] / 5 } psze.init <- if (length( .ipstr0 )) { matrix( .ipstr0 , n, ncoly, byrow = TRUE) } else { psze.init # Already a matrix } etastart <- cbind(theta2eta(psze.init, .lpstr0, earg = .epstr0), theta2eta(prob.init, .lprob , earg = .eprob )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M1 = M1)] } }), list( .lprob = lprob, .lpstr0 = lpstr0, .eprob = eprob, .epstr0 = epstr0, .iprob = iprob, .ipstr0 = ipstr0, .type.fitted = type.fitted, .bias.red = bias.red, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 2) pstr0 <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr0 , earg = .epstr0 ) prob <- eta2theta(eta[, c(FALSE, TRUE)], .lprob , earg = .eprob ) type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs0", "pstr0", "onempstr0"))[1] ans <- switch(type.fitted, "mean" = (1 - pstr0) * (1 - prob) / prob, "prob" = prob, "pobs0" = pstr0 + (1 - pstr0) * prob, # P(Y=0) "pstr0" = pstr0, "onempstr0" = 1 - pstr0) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lprob = lprob, .lpstr0 = lpstr0, .eprob = eprob, .epstr0 = epstr0 ))), last = eval(substitute(expression({ temp.names <- c(rep_len( .lpstr0 , NOS), rep_len( .lprob , NOS)) temp.names <- temp.names[interleave.VGAM(M1*NOS, M1 = M1)] misc$link <- temp.names misc$earg <- vector("list", M1 * NOS) names(misc$link) <- names(misc$earg) <- c(mynames1, mynames2)[interleave.VGAM(M1*NOS, M1 = M1)] for (ii in 1:NOS) { misc$earg[[M1*ii-1]] <- .epstr0 misc$earg[[M1*ii ]] <- .eprob } misc$imethod <- .imethod misc$zero <- .zero misc$bias.red <- .bias.red misc$expected <- .expected misc$ipstr0 <- .ipstr0 misc$pobs0 <- pobs0 if (length(dimnames(y)[[2]]) > 0) dimnames(misc$pobs0) <- dimnames(y) misc$pstr0 <- pstr0 if (length(dimnames(y)[[2]]) > 0) dimnames(misc$pstr0) <- dimnames(y) }), list( .lprob = lprob, .lpstr0 = lpstr0, .eprob = eprob, .epstr0 = epstr0, .ipstr0 = ipstr0, .zero = zero, .expected = expected, .bias.red = bias.red, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { pstr0 <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr0 , earg = .epstr0 ) prob <- eta2theta(eta[, c(FALSE, TRUE)], .lprob , earg = .eprob ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dzigeom(x = y, prob = prob, pstr0 = pstr0, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lprob = lprob, .lpstr0 = lpstr0, .eprob = eprob, .epstr0 = epstr0 ))), vfamily = c("zigeometric"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) pstr0 <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr0 , earg = .epstr0 ) prob <- eta2theta(eta[, c(FALSE, TRUE)], .lprob , earg = .eprob ) rzigeom(nsim * length(pstr0), prob = prob, pstr0 = pstr0) }, list( .lprob = lprob, .lpstr0 = lpstr0, .eprob = eprob, .epstr0 = epstr0 ))), validparams = eval(substitute(function(eta, y, extra = NULL) { pstr0 <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr0 , earg = .epstr0 ) prob <- eta2theta(eta[, c(FALSE, TRUE)], .lprob , earg = .eprob ) okay1 <- all(is.finite(prob )) && all(0 < prob & prob < 1) && all(is.finite(pstr0)) && all(pstr0 < 1) prob0 <- prob deflat.limit <- -prob0 / (1 - prob0) okay2.deflat <- TRUE if (okay1 && !(okay2.deflat <- all(deflat.limit < pstr0))) warning("parameter 'pstr0' is too negative even allowing for ", "0-deflation.") okay1 && okay2.deflat }, list( .lprob = lprob, .lpstr0 = lpstr0, .eprob = eprob, .epstr0 = epstr0 ))), deriv = eval(substitute(expression({ M1 <- 2 pstr0 <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr0 , earg = .epstr0 ) prob <- eta2theta(eta[, c(FALSE, TRUE)], .lprob , earg = .eprob ) prob0 <- prob # P(Y == 0) from parent distribution, aka f(0) pobs0 <- pstr0 + (1 - pstr0) * prob0 # P(Y == 0) index0 <- (y == 0) dl.dpstr0 <- (1 - prob0) / pobs0 dl.dpstr0[!index0] <- -1 / (1 - pstr0[!index0]) dl.dprob <- (1 - pstr0) / pobs0 dl.dprob[!index0] <- 1 / prob[!index0] - y[!index0] / (1 - prob[!index0]) dpstr0.deta <- dtheta.deta(pstr0 , .lpstr0 , earg = .epstr0 ) 
dprob.deta <- dtheta.deta(prob, .lprob , earg = .eprob ) dl.deta12 <- c(w) * cbind(dl.dpstr0 * dpstr0.deta, dl.dprob * dprob.deta) dl.deta12 <- dl.deta12[, interleave.VGAM(ncol(dl.deta12), M1 = M1)] dl.deta12 }), list( .lprob = lprob, .lpstr0 = lpstr0, .eprob = eprob, .epstr0 = epstr0 ))), weight = eval(substitute(expression({ if ( .expected ) { ned2l.dprob2 <- (1 - pstr0)^2 / pobs0 + (1 - pstr0) * ((1 - prob) / prob) * (1 / prob + 1 / (1 - prob)^2) ned2l.dpstr0.prob <- 1 / pobs0 ned2l.dpstr02 <- (1 - prob0) / ((1 - pstr0) * pobs0) } else { od2l.dprob2 <- ((1 - pstr0) / pobs0)^2 od2l.dprob2[!index0] <- 1 / (prob[!index0])^2 + y[!index0] / (1 - prob[!index0])^2 od2l.dpstr0.prob <- (pobs0 + (1 - prob0) * (1 - pstr0)) / pobs0^2 od2l.dpstr0.prob[!index0] <- 0 od2l.dpstr02 <- ((1 - prob0) / pobs0)^2 od2l.dpstr02[!index0] <- 1 / (1 - pstr0[!index0])^2 } allvals <- if ( .expected ) c(c(w) * ned2l.dpstr02 * dpstr0.deta^2, c(w) * ned2l.dprob2 * dprob.deta^2, c(w) * ned2l.dpstr0.prob * dprob.deta * dpstr0.deta) else c(c(w) * od2l.dpstr02 * dpstr0.deta^2, c(w) * od2l.dprob2 * dprob.deta^2, c(w) * od2l.dpstr0.prob * dprob.deta * dpstr0.deta) wz <- array(allvals, dim = c(n, M / M1, 3)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lprob = lprob, .lpstr0 = lpstr0, .eprob = eprob, .epstr0 = epstr0, .expected = expected )))) } zigeometricff <- function(lprob = "logit", lonempstr0 = "logit", type.fitted = c("mean", "prob", "pobs0", "pstr0", "onempstr0"), iprob = NULL, ionempstr0 = NULL, imethod = 1, bias.red = 0.5, zero = "onempstr0") { expected <- TRUE lprob <- as.list(substitute(lprob)) eprob <- link2list(lprob) lprob <- attr(eprob, "function.name") lonempstr0 <- as.list(substitute(lonempstr0)) eonempstr0 <- link2list(lonempstr0) lonempstr0 <- attr(eonempstr0, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs0", "pstr0", "onempstr0"))[1] if (length(iprob)) if (!is.Numeric(iprob, positive = TRUE) || iprob >= 1) stop("argument 'iprob' is out of range") if (length(ionempstr0)) if (!is.Numeric(ionempstr0, positive = TRUE) || ionempstr0 >= 1) stop("argument 'ionempstr0' is out of range") if (!is.Numeric(bias.red, length.arg = 1, positive = TRUE) || bias.red > 1) stop("argument 'bias.red' must be between 0 and 1") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("Zero-inflated geometric distribution,\n", "P[Y = 0] = 1 - onempstr0 + onempstr0 * prob,\n", "P[Y = y] = onempstr0 * prob * (1 - prob)^y, ", "y = 1, 2, ...\n\n", "Link: ", namesof("prob", lprob, earg = eprob ), ", ", namesof("onempstr0", lonempstr0, earg = eonempstr0), "\n", "Mean: onempstr0 * (1 - prob) / prob"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
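  # The infos slot returns static metadata about this family function:
  # M1 = 2 linear predictors per response, Q1 = 1 response column per
  # response, and support for multiple responses.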
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("prob", "onempstr0"), type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("prob", ncoly) mynames2 <- param.names("onempstr0", ncoly) predictors.names <- c(namesof(mynames1, .lprob , earg = .eprob , tag = FALSE), namesof(mynames2, .lonempstr0 , earg = .eonempstr0 , tag = FALSE))[ interleave.VGAM(M1*NOS, M1 = M1)] if (!length(etastart)) { prob.init <- if ( .imethod == 3) .bias.red / (1 + y + 1/8) else if ( .imethod == 2) .bias.red / (1 + matrix(colMeans(y) + 1/8, n, ncoly, byrow = TRUE)) else .bias.red / (1 + matrix(colSums(y * w) / colSums(w) + 1/8, n, ncoly, byrow = TRUE)) prob.init <- if (length( .iprob )) { matrix( .iprob , n, ncoly, byrow = TRUE) } else { prob.init # Already a matrix } prob0.est <- psze.init <- matrix(0, n, NOS) for (jlocal in 1:NOS) { prob0.est[, jlocal] <- sum(w[y[, jlocal] == 0, jlocal]) / sum(w[, jlocal]) psze.init[, jlocal] <- if ( .imethod == 3) prob0.est[, jlocal] / 2 else if ( .imethod == 1) pmax(0.05, (prob0.est[, jlocal] - median(prob.init[, jlocal]))) else prob0.est[, jlocal] / 5 } psze.init <- if (length( .ionempstr0 )) { matrix( 1 - .ionempstr0 , n, ncoly, byrow = TRUE) } else { psze.init # Already a matrix } etastart <- cbind(theta2eta( prob.init, .lprob , earg = .eprob ), theta2eta(1 - psze.init, .lonempstr0 , earg = .eonempstr0 )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M1 = M1)] } }), list( .lprob = lprob, .lonempstr0 = lonempstr0, .eprob = eprob, .eonempstr0 = eonempstr0, .iprob = iprob, .ionempstr0 = ionempstr0, .type.fitted = type.fitted, .bias.red = bias.red, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 2) prob <- eta2theta(eta[, c(TRUE, FALSE)], .lprob , earg = .eprob ) onempstr0 <- eta2theta(eta[, c(FALSE, TRUE)], .lonempstr0 , earg = .eonempstr0 ) type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs0", "pstr0", "onempstr0"))[1] ans <- switch(type.fitted, "mean" = onempstr0 * (1 - prob) / prob, "prob" = prob, "pobs0" = 1 - onempstr0 + onempstr0 * prob, # P(Y=0) "pstr0" = 1 - onempstr0, "onempstr0" = onempstr0) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lprob = lprob, .lonempstr0 = lonempstr0, .eprob = eprob, .eonempstr0 = eonempstr0 ))), last = eval(substitute(expression({ temp.names <- c(rep_len( .lprob , NOS), rep_len( .lonempstr0 , NOS)) temp.names <- temp.names[interleave.VGAM(M1*NOS, M1 = M1)] misc$link <- temp.names misc$earg <- vector("list", M1 * NOS) names(misc$link) <- names(misc$earg) <- c(mynames1, mynames2)[interleave.VGAM(M1*NOS, M1 = M1)] for (ii in 1:NOS) { misc$earg[[M1*ii-1]] <- .eprob misc$earg[[M1*ii ]] <- .eonempstr0 } misc$imethod <- .imethod misc$zero <- .zero misc$bias.red <- .bias.red misc$expected <- .expected misc$ionempstr0 <- .ionempstr0 misc$pobs0 <- pobs0 if (length(dimnames(y)[[2]]) > 0) dimnames(misc$pobs0) <- dimnames(y) misc$onempstr0 <- onempstr0 if (length(dimnames(y)[[2]]) > 0) dimnames(misc$onempstr0) <- dimnames(y) }), list( .lprob = lprob, .lonempstr0 = lonempstr0, .eprob = eprob, .eonempstr0 = eonempstr0, .ionempstr0 = ionempstr0, .zero = zero, .expected = expected, .bias.red = bias.red, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { prob <- eta2theta(eta[, c(TRUE, FALSE)], .lprob , earg = .eprob ) onempstr0 <- eta2theta(eta[, c(FALSE, TRUE)], .lonempstr0 , earg = .eonempstr0 ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dzigeom(x = y, prob = prob, pstr0 = 1 - onempstr0, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lprob = lprob, .lonempstr0 = lonempstr0, .eprob = eprob, .eonempstr0 = eonempstr0 ))), vfamily = c("zigeometricff"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) prob <- eta2theta(eta[, c(TRUE, FALSE)], .lprob , earg = .eprob ) onempstr0 <- eta2theta(eta[, c(FALSE, TRUE)], .lonempstr0 , earg = .eonempstr0 ) rzigeom(nsim * length(onempstr0), prob = prob, pstr0 = 1 - onempstr0) }, list( .lprob = lprob, .lonempstr0 = lonempstr0, .eprob = eprob, .eonempstr0 = eonempstr0 ))), validparams = eval(substitute(function(eta, y, extra = NULL) { prob <- eta2theta(eta[, c(TRUE, FALSE)], .lprob , earg = .eprob ) onempstr0 <- eta2theta(eta[, c(FALSE, TRUE)], .lonempstr0 , earg = .eonempstr0 ) okay1 <- all(is.finite(onempobs0)) && all(0 < onempobs0) && all(is.finite(prob )) && all(0 < prob & prob < 1) prob0 <- prob deflat.limit <- -prob0 / (1 - prob0) okay2.deflat <- TRUE if (okay1 && !(okay2.deflat <- all(onempstr0 < 1 - deflat.limit))) warning("parameter 'onempstr0' is too positive even allowing for ", "0-deflation.") okay1 && okay2.deflat }, list( .lprob = lprob, .lonempstr0 = lonempstr0, .eprob = eprob, .eonempstr0 = eonempstr0 ))), deriv = eval(substitute(expression({ M1 <- 2 prob <- eta2theta(eta[, c(TRUE, FALSE)], .lprob , earg = .eprob ) onempstr0 <- eta2theta(eta[, c(FALSE, TRUE)], .lonempstr0 , earg = .eonempstr0 ) prob0 <- prob # P(Y == 0) from the parent distribution pobs0 <- 1 - onempstr0 + (onempstr0) * prob0 # P(Y == 0) index0 <- (y == 0) dl.donempstr0 <- -(1 - prob0) / 
pobs0 # zz dl.donempstr0[!index0] <- 1 / (onempstr0[!index0]) # zz dl.dprob <- (onempstr0) / pobs0 dl.dprob[!index0] <- 1 / prob[!index0] - y[!index0] / (1 - prob[!index0]) dprob.deta <- dtheta.deta(prob , .lprob , earg = .eprob ) donempstr0.deta <- dtheta.deta(onempstr0 , .lonempstr0 , earg = .eonempstr0 ) dl.deta12 <- c(w) * cbind(dl.dprob * dprob.deta, dl.donempstr0 * donempstr0.deta) dl.deta12 <- dl.deta12[, interleave.VGAM(ncol(dl.deta12), M1 = M1)] dl.deta12 }), list( .lprob = lprob, .lonempstr0 = lonempstr0, .eprob = eprob, .eonempstr0 = eonempstr0 ))), weight = eval(substitute(expression({ if ( .expected ) { ned2l.dprob2 <- (onempstr0)^2 / pobs0 + (onempstr0) * ((1 - prob) / prob) * (1 / prob + 1 / (1 - prob)^2) ned2l.donempstr0.prob <- -1 / pobs0 ned2l.donempstr02 <- (1 - prob0) / (( onempstr0) * pobs0) } else { od2l.dprob2 <- (( onempstr0) / pobs0)^2 od2l.dprob2[!index0] <- 1 / (prob[!index0])^2 + y[!index0] / (1 - prob[!index0])^2 od2l.donempstr0.prob <- -(pobs0 + (1 - prob0) * (onempstr0)) / pobs0^2 od2l.donempstr0.prob[!index0] <- 0 od2l.donempstr02 <- ((1 - prob0) / pobs0)^2 od2l.donempstr02[!index0] <- 1 / ( onempstr0[!index0])^2 } allvals <- if ( .expected ) c(c(w) * ned2l.dprob2 * dprob.deta^2, c(w) * ned2l.donempstr02 * donempstr0.deta^2, c(w) * ned2l.donempstr0.prob * dprob.deta * donempstr0.deta) else c(c(w) * od2l.dprob2 * dprob.deta^2, c(w) * od2l.donempstr02 * donempstr0.deta^2, c(w) * od2l.donempstr0.prob * dprob.deta * donempstr0.deta) wz <- array(allvals, dim = c(n, M / M1, 3)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lprob = lprob, .lonempstr0 = lonempstr0, .eprob = eprob, .eonempstr0 = eonempstr0, .expected = expected )))) } dzageom <- function(x, prob, pobs0 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(prob), length(pobs0)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(pobs0) != LLL) pobs0 <- rep_len(pobs0, LLL) ans <- rep_len(0.0, LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") index0 <- (x == 0) if (log.arg) { ans[ index0] <- log(pobs0[index0]) ans[!index0] <- log1p(-pobs0[!index0]) + dposgeom(x[!index0], prob = prob[!index0], log = TRUE) } else { ans[ index0] <- pobs0[index0] ans[!index0] <- (1-pobs0[!index0]) * dposgeom(x[!index0], prob = prob[!index0]) } ans } pzageom <- function(q, prob, pobs0 = 0) { LLL <- max(length(q), length(prob), length(pobs0)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(pobs0) != LLL) pobs0 <- rep_len(pobs0, LLL) ans <- rep_len(0.0, LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") ans[q > 0] <- pobs0[q > 0] + (1 - pobs0[q > 0]) * pposgeom(q[q > 0], prob = prob[q > 0]) ans[q < 0] <- 0 ans[q == 0] <- pobs0[q == 0] ans <- pmax(0, ans) ans <- pmin(1, ans) ans } qzageom <- function(p, prob, pobs0 = 0) { LLL <- max(length(p), length(prob), length(pobs0)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(pobs0) != LLL) pobs0 <- rep_len(pobs0, LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") ans <- p ind4 <- (p > pobs0) ans[!ind4] <- 0.0 ans[ ind4] <- qposgeom((p[ind4] - pobs0[ind4]) / (1 - pobs0[ind4]), prob = prob[ind4]) ans } rzageom <- function(n, prob, pobs0 = 0) { use.n <- if 
((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n ans <- rposgeom(use.n, prob) if (length(pobs0) != use.n) pobs0 <- rep_len(pobs0, use.n) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be between 0 and 1 inclusive") ifelse(runif(use.n) < pobs0, 0, ans) } dzabinom <- function(x, size, prob, pobs0 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(size), length(prob), length(pobs0)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(size) != LLL) size <- rep_len(size, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(pobs0) != LLL) pobs0 <- rep_len(pobs0, LLL) ans <- rep_len(0.0, LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") index0 <- (x == 0) if (log.arg) { ans[ index0] <- log(pobs0[index0]) ans[!index0] <- log1p(-pobs0[!index0]) + dposbinom(x[!index0], size = size[!index0], prob = prob[!index0], log = TRUE) } else { ans[ index0] <- pobs0[index0] ans[!index0] <- (1-pobs0[!index0]) * dposbinom(x[!index0], size = size[!index0], prob = prob[!index0]) } ans } pzabinom <- function(q, size, prob, pobs0 = 0) { LLL <- max(length(q), length(size), length(prob), length(pobs0)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(size) != LLL) size <- rep_len(size, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(pobs0) != LLL) pobs0 <- rep_len(pobs0, LLL) ans <- rep_len(0.0, LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") ans[q > 0] <- pobs0[q > 0] + (1 - pobs0[q > 0]) * pposbinom(q[q > 0], size = size[q > 0], prob = prob[q > 0]) ans[q < 0] <- 0 ans[q == 0] <- pobs0[q == 0] ans <- pmax(0, ans) ans <- pmin(1, ans) ans } qzabinom <- function(p, size, prob, pobs0 = 0) { LLL <- max(length(p), length(size), length(prob), length(pobs0)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(size) != LLL) size <- rep_len(size, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(pobs0) != LLL) pobs0 <- rep_len(pobs0, LLL) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be in [0,1]") ans <- p ind4 <- (p > pobs0) ans[!ind4] <- 0.0 ans[ ind4] <- qposbinom((p[ind4] - pobs0[ind4]) / (1 - pobs0[ind4]), size = size[ind4], prob = prob[ind4]) ans } rzabinom <- function(n, size, prob, pobs0 = 0) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n ans <- rposbinom(use.n, size, prob) if (length(pobs0) != use.n) pobs0 <- rep_len(pobs0, use.n) if (!is.Numeric(pobs0) || any(pobs0 < 0) || any(pobs0 > 1)) stop("argument 'pobs0' must be between 0 and 1 inclusive") ifelse(runif(use.n) < pobs0, 0, ans) } zabinomial <- function(lpobs0 = "logit", lprob = "logit", type.fitted = c("mean", "prob", "pobs0"), ipobs0 = NULL, iprob = NULL, imethod = 1, zero = NULL # Was zero = 2 prior to 20130917 ) { lpobs0 <- as.list(substitute(lpobs0)) epobs0 <- link2list(lpobs0) lpobs0 <- attr(epobs0, "function.name") lprob <- as.list(substitute(lprob)) eprob <- link2list(lprob) lprob <- attr(eprob, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs0"))[1] if (length(ipobs0)) if (!is.Numeric(ipobs0, positive = TRUE) || ipobs0 >= 1) 
stop("argument 'ipobs0' is out of range") if (length(iprob)) if (!is.Numeric(iprob, positive = TRUE) || iprob >= 1) stop("argument 'iprob' is out of range") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("Zero-altered binomial distribution ", "(Bernoulli and positive-binomial conditional model)\n\n", "P[Y = 0] = pobs0,\n", "P[Y = y] = (1 - pobs0) * dposbinom(x = y, size, prob), ", "y = 1, 2, ..., size,\n\n", "Link: ", namesof("pobs0", lpobs0, earg = epobs0), ", ", namesof("prob" , lprob, earg = eprob), "\n", "Mean: (1 - pobs0) * prob / (1 - (1 - prob)^size)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = NA, expected = TRUE, multipleResponses = FALSE, parameters.names = c("pobs0", "prob"), type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ if (!all(w == 1)) extra$orig.w <- w if (NCOL(y) == 1) { if (is.factor(y)) y <- y != levels(y)[1] nn <- rep_len(1, n) if (!all(y >= 0 & y <= 1)) stop("response values must be in [0, 1]") if (!length(mustart) && !length(etastart)) mustart <- (0.5 + w * y) / (1.0 + w) no.successes <- y if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(no.successes - round(no.successes)) > 1.0e-8)) stop("Number of successes must be integer-valued") } else if (NCOL(y) == 2) { if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(y - round(y)) > 1.0e-8)) stop("Count data must be integer-valued") y <- round(y) nvec <- y[, 1] + y[, 2] y <- ifelse(nvec > 0, y[, 1] / nvec, 0) w <- w * nvec if (!length(mustart) && !length(etastart)) mustart <- (0.5 + nvec * y) / (1 + nvec) } else { stop("for the binomialff family, response 'y' must be a ", "vector of 0 and 1's\n", "or a factor ", "(first level = fail, other levels = success),\n", "or a 2-column matrix where col 1 is the no. of ", "successes and col 2 is the no. 
of failures") } if (!all(w == 1)) extra$new.w <- w y <- as.matrix(y) extra$y0 <- y0 <- ifelse(y == 0, 1, 0) extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$skip.these <- skip.these <- matrix(as.logical(y0), n, NOS) extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) predictors.names <- c(namesof("pobs0", .lpobs0 , earg = .epobs0 , tag = FALSE), namesof("prob" , .lprob , earg = .eprob , tag = FALSE)) orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 new.w <- if (length(extra$new.w)) extra$new.w else 1 Size <- new.w / orig.w phi.init <- if (length( .ipobs0 )) .ipobs0 else { prob0.est <- sum(Size[y == 0]) / sum(Size) if ( .imethod == 1) { (prob0.est - (1 - mustart)^Size) / (1 - (1 - mustart)^Size) } else if ( .imethod == 2) { prob0.est } else { prob0.est * 0.5 } } phi.init[phi.init <= -0.10] <- 0.50 # Lots of sample variation phi.init[phi.init <= 0.01] <- 0.05 # Last resort phi.init[phi.init >= 0.99] <- 0.95 # Last resort if (!length(etastart)) { etastart <- cbind(theta2eta(phi.init, .lpobs0, earg = .epobs0 ), theta2eta( mustart, .lprob, earg = .eprob )) mustart <- NULL } }), list( .lprob = lprob, .lpobs0 = lpobs0, .eprob = eprob, .epobs0 = epobs0, .iprob = iprob, .ipobs0 = ipobs0, .imethod = imethod, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 2) type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. ", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs0"))[1] phi0 <- eta2theta(eta[, 1], .lpobs0 , earg = .epobs0 ) prob <- eta2theta(eta[, 2], .lprob , earg = .eprob ) orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 new.w <- if (length(extra$new.w)) extra$new.w else 1 Size <- new.w / orig.w ans <- switch(type.fitted, "mean" = (1 - phi0) * prob / (1 - (1 - prob)^Size), "prob" = prob, "pobs0" = phi0) # P(Y=0) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lprob = lprob, .lpobs0 = lpobs0, .eprob = eprob, .epobs0 = epobs0 ))), last = eval(substitute(expression({ misc$link <- c(prob = .lprob, pobs0 = .lpobs0 ) misc$earg <- list(prob = .eprob, pobs0 = .epobs0 ) misc$imethod <- .imethod misc$zero <- .zero misc$expected <- TRUE }), list( .lprob = lprob, .lpobs0 = lpobs0, .eprob = eprob, .epobs0 = epobs0, .zero = zero, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 new.w <- if (length(extra$new.w)) extra$new.w else 1 Size <- new.w / orig.w pobs0 <- eta2theta(eta[, 1], .lpobs0 , earg = .epobs0 ) prob <- eta2theta(eta[, 2], .lprob , earg = .eprob ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- orig.w * dzabinom(x = round(y * Size), size = Size, prob = prob, pobs0 = pobs0, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lprob = lprob, .lpobs0 = lpobs0, .eprob = eprob, .epobs0 = epobs0 ))), vfamily = c("zabinomial"), validparams = eval(substitute(function(eta, y, extra = NULL) { phi0 <- eta2theta(eta[, 1], .lpobs0 , earg = .epobs0 ) prob <- eta2theta(eta[, 2], .lprob , earg = .eprob ) okay1 <- all(is.finite(phi0)) && all(0 < phi0 & phi0 < 1) && all(is.finite(prob)) && all(0 < prob & prob < 1) okay1 }, list( .lprob = lprob, .lpobs0 = lpobs0, .eprob = eprob, .epobs0 = epobs0 ))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- if (length(extra$NOS)) extra$NOS else 1 
orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 new.w <- if (length(extra$new.w)) extra$new.w else 1 Size <- new.w / orig.w phi0 <- eta2theta(eta[, 1], .lpobs0 , earg = .epobs0 ) prob <- eta2theta(eta[, 2], .lprob , earg = .eprob ) dphi0.deta <- dtheta.deta(phi0, .lpobs0, earg = .epobs0 ) dprob.deta <- dtheta.deta(prob, .lprob , earg = .eprob ) df0.dprob <- -Size * (1 - prob)^(Size - 1) df02.dprob2 <- Size * (Size - 1) * (1 - prob)^(Size - 2) prob0 <- (1 - prob)^(Size) oneminusf0 <- 1 - prob0 dl.dphi0 <- -1 / (1 - phi0) dl.dprob <- c(w) * (y / prob - (1 - y) / (1 - prob)) + c(orig.w) * df0.dprob / oneminusf0 dl.dphi0[y == 0] <- 1 / phi0[y == 0] # Do it in one line skip <- extra$skip.these for (spp. in 1:NOS) { dl.dprob[skip[, spp.], spp.] <- 0 } ans <- cbind(c(orig.w) * dl.dphi0 * dphi0.deta, dl.dprob * dprob.deta) ans }), list( .lprob = lprob, .lpobs0 = lpobs0, .eprob = eprob, .epobs0 = epobs0 ))), weight = eval(substitute(expression({ wz <- matrix(0.0, n, M1) usualmeanY <- prob meanY <- (1 - phi0) * usualmeanY / oneminusf0 term1 <- c(Size) * (meanY / prob^2 - meanY / (1 - prob)^2) + c(Size) * (1 - phi0) / (1 - prob)^2 term2 <- -(1 - phi0) * df02.dprob2 / oneminusf0 term3 <- -(1 - phi0) * (df0.dprob / oneminusf0)^2 ned2l.dprob2 <- term1 + term2 + term3 wz[, iam(2, 2, M)] <- ned2l.dprob2 * dprob.deta^2 mu.phi0 <- phi0 tmp100 <- mu.phi0 * (1.0 - mu.phi0) tmp200 <- if ( .lpobs0 == "logit" && is.empty.list( .epobs0 )) { tmp100 } else { (dphi0.deta^2) / tmp100 } wz[, iam(1, 1, M)] <- tmp200 c(orig.w) * wz }), list( .lprob = lprob, .lpobs0 = lpobs0, .eprob = eprob, .epobs0 = epobs0 )))) } zabinomialff <- function(lprob = "logit", lonempobs0 = "logit", type.fitted = c("mean", "prob", "pobs0", "onempobs0"), iprob = NULL, ionempobs0 = NULL, imethod = 1, zero = "onempobs0") { lprob <- as.list(substitute(lprob)) eprob <- link2list(lprob) lprob <- attr(eprob, "function.name") lonempobs0 <- as.list(substitute(lonempobs0)) eonempobs0 <- link2list(lonempobs0) lonempobs0 <- attr(eonempobs0, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs0", "onempobs0"))[1] if (length(iprob)) if (!is.Numeric(iprob, positive = TRUE) || iprob >= 1) stop("argument 'iprob' is out of range") if (length(ionempobs0)) if (!is.Numeric(ionempobs0, positive = TRUE) || ionempobs0 >= 1) stop("argument 'ionempobs0' is out of range") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("Zero-altered binomial distribution ", "(Bernoulli and positive-binomial conditional model)\n\n", "P[Y = 0] = 1 - onempobs0,\n", "P[Y = y] = onempobs0 * dposbinom(x = y, size, prob), ", "y = 1, 2, ..., size,\n\n", "Link: ", namesof("prob" , lprob , earg = eprob ), ", ", namesof("onempobs0", lonempobs0, earg = eonempobs0), "\n", "Mean: onempobs0 * prob / (1 - (1 - prob)^size)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(M1 = 2, Q1 = NA, expected = TRUE, multipleResponses = FALSE, parameters.names = c("prob", "onempobs0"), type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ if (!all(w == 1)) extra$orig.w <- w if (NCOL(y) == 1) { if (is.factor(y)) y <- y != levels(y)[1] nn <- rep_len(1, n) if (!all(y >= 0 & y <= 1)) stop("response values must be in [0, 1]") if (!length(mustart) && !length(etastart)) mustart <- (0.5 + w * y) / (1.0 + w) no.successes <- y if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(no.successes - round(no.successes)) > 1.0e-8)) stop("Number of successes must be integer-valued") } else if (NCOL(y) == 2) { if (min(y) < 0) stop("Negative data not allowed!") if (any(abs(y - round(y)) > 1.0e-8)) stop("Count data must be integer-valued") y <- round(y) nvec <- y[, 1] + y[, 2] y <- ifelse(nvec > 0, y[, 1] / nvec, 0) w <- w * nvec if (!length(mustart) && !length(etastart)) mustart <- (0.5 + nvec * y) / (1 + nvec) } else { stop("for the binomialff family, response 'y' must be a ", "vector of 0 and 1's\n", "or a factor ", "(first level = fail, other levels = success),\n", "or a 2-column matrix where col 1 is the no. of ", "successes and col 2 is the no. of failures") } if (!all(w == 1)) extra$new.w <- w y <- as.matrix(y) extra$y0 <- y0 <- ifelse(y == 0, 1, 0) extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$skip.these <- skip.these <- matrix(as.logical(y0), n, NOS) extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) predictors.names <- c(namesof("prob" , .lprob , earg = .eprob , tag = FALSE), namesof("onempobs0", .lonempobs0 , earg = .eonempobs0 , tag = FALSE)) orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 new.w <- if (length(extra$new.w)) extra$new.w else 1 Size <- new.w / orig.w phi.init <- if (length( .ionempobs0 )) 1 - .ionempobs0 else { prob0.est <- sum(Size[y == 0]) / sum(Size) if ( .imethod == 1) { (prob0.est - (1 - mustart)^Size) / (1 - (1 - mustart)^Size) } else if ( .imethod == 2) { prob0.est } else { prob0.est * 0.5 } } phi.init[phi.init <= -0.10] <- 0.50 # Lots of sample variation phi.init[phi.init <= 0.01] <- 0.05 # Last resort phi.init[phi.init >= 0.99] <- 0.95 # Last resort if (!length(etastart)) { etastart <- cbind(theta2eta( mustart, .lprob , earg = .eprob ), theta2eta(1 - phi.init, .lonempobs0 , earg = .eonempobs0 )) mustart <- NULL } }), list( .lprob = lprob, .lonempobs0 = lonempobs0, .eprob = eprob, .eonempobs0 = eonempobs0, .iprob = iprob, .ionempobs0 = ionempobs0, .imethod = imethod, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 2) type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs0", "onempobs0"))[1] prob <- eta2theta(eta[, 1], .lprob , earg = .eprob ) onempobs0 <- eta2theta(eta[, 2], .lonempobs0 , earg = .eonempobs0 ) orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 new.w <- if (length(extra$new.w)) extra$new.w else 1 Size <- new.w / orig.w ans <- switch(type.fitted, "mean" = onempobs0 * prob / (1 - (1 - prob)^Size), "prob" = prob, "pobs0" = 1 - onempobs0, # P(Y=0) "onempobs0" = onempobs0) # P(Y>0) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lprob = lprob, .lonempobs0 = lonempobs0, .eprob = eprob, .eonempobs0 = eonempobs0 ))), last = eval(substitute(expression({ misc$link <- c(prob = .lprob , onempobs0 = .lonempobs0 ) misc$earg <- list(prob = .eprob , onempobs0 = .eonempobs0 ) misc$imethod <- .imethod misc$zero <- .zero misc$expected <- TRUE }), list( .lprob = lprob, .lonempobs0 = lonempobs0, .eprob = eprob, .eonempobs0 = eonempobs0, .zero = zero, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 new.w <- if (length(extra$new.w)) extra$new.w else 1 Size <- new.w / orig.w prob <- eta2theta(eta[, 1], .lprob , earg = .eprob ) onempobs0 <- eta2theta(eta[, 2], .lonempobs0 , earg = .eonempobs0 ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- orig.w * dzabinom(x = round(y * Size), size = Size, prob = prob, pobs0 = 1 - onempobs0, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lprob = lprob, .lonempobs0 = lonempobs0, .eprob = eprob, .eonempobs0 = eonempobs0 ))), vfamily = c("zabinomialff"), validparams = eval(substitute(function(eta, y, extra = NULL) { prob <- eta2theta(eta[, 1], .lprob , earg = .eprob ) onempobs0 <- eta2theta(eta[, 2], .lonempobs0 , earg = .eonempobs0 ) okay1 <- all(is.finite(onempobs0)) && all(0 < onempobs0 & onempobs0 < 1) && all(is.finite(prob )) && all(0 < prob & prob < 1) okay1 }, list( .lprob = lprob, .lonempobs0 = lonempobs0, .eprob = eprob, .eonempobs0 = eonempobs0 ))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- if (length(extra$NOS)) extra$NOS else 1 orig.w <- if (length(extra$orig.w)) extra$orig.w else 1 new.w <- if (length(extra$new.w)) extra$new.w else 1 Size <- new.w / orig.w prob <- eta2theta(eta[, 1], .lprob , earg = .eprob ) onempobs0 <- eta2theta(eta[, 2], .lonempobs0 , earg = .eonempobs0 ) phi0 <- 1 - onempobs0 dprob.deta <- dtheta.deta(prob , .lprob , earg = .eprob ) donempobs0.deta <- dtheta.deta(onempobs0, .lonempobs0 , earg = .eonempobs0 ) df0.dprob <- -Size * (1 - prob)^(Size - 1) df02.dprob2 <- Size * (Size - 1) * (1 - prob)^(Size - 2) prob0 <- (1 - prob)^(Size) oneminusf0 <- 1 - prob0 dl.dprob <- c(w) * (y / prob - (1 - y) / (1 - prob)) + c(orig.w) * df0.dprob / oneminusf0 dl.donempobs0 <- +1 / (onempobs0) dl.donempobs0[y == 0] <- -1 / (1 - onempobs0[y == 0]) # Do it in 1 line skip <- extra$skip.these for (spp. in 1:NOS) { dl.dprob[skip[, spp.], spp.] 
<- 0 } ans <- cbind( dl.dprob * dprob.deta, c(orig.w) * dl.donempobs0 * donempobs0.deta) ans }), list( .lprob = lprob, .lonempobs0 = lonempobs0, .eprob = eprob, .eonempobs0 = eonempobs0 ))), weight = eval(substitute(expression({ wz <- matrix(0.0, n, M1) usualmeanY <- prob meanY <- (1 - phi0) * usualmeanY / oneminusf0 term1 <- c(Size) * (meanY / prob^2 - meanY / (1 - prob)^2) + c(Size) * (1 - phi0) / (1 - prob)^2 term2 <- -(1 - phi0) * df02.dprob2 / oneminusf0 term3 <- -(1 - phi0) * (df0.dprob / oneminusf0)^2 ned2l.dprob2 <- term1 + term2 + term3 wz[, iam(1, 1, M)] <- ned2l.dprob2 * dprob.deta^2 mu.phi0 <- phi0 tmp100 <- mu.phi0 * (1.0 - mu.phi0) tmp200 <- if (FALSE && .lonempobs0 == "logit" && is.empty.list( .eonempobs0 )) { tmp100 } else { (donempobs0.deta^2) / tmp100 } wz[, iam(2, 2, M)] <- tmp200 c(orig.w) * wz }), list( .lprob = lprob, .lonempobs0 = lonempobs0, .eprob = eprob, .eonempobs0 = eonempobs0 )))) } zageometric <- function(lpobs0 = "logit", lprob = "logit", type.fitted = c("mean", "prob", "pobs0", "onempobs0"), imethod = 1, ipobs0 = NULL, iprob = NULL, zero = NULL) { lpobs0 <- as.list(substitute(lpobs0)) epobs0 <- link2list(lpobs0) lpobs0 <- attr(epobs0, "function.name") lprob <- as.list(substitute(lprob)) eprob <- link2list(lprob) lprob <- attr(eprob, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs0", "onempobs0"))[1] if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (length(iprob)) if (!is.Numeric(iprob, positive = TRUE) || max(iprob) >= 1) stop("argument 'iprob' out of range") if (length(ipobs0)) if (!is.Numeric(ipobs0, positive = TRUE) || max(ipobs0) >= 1) stop("argument 'ipobs0' out of range") new("vglmff", blurb = c("Zero-altered geometric ", "(Bernoulli and positive-geometric conditional model)\n\n", "Links: ", namesof("pobs0", lpobs0, earg = epobs0, tag = FALSE), ", ", namesof("prob" , lprob , earg = eprob , tag = FALSE), "\n", "Mean: (1 - pobs0) / prob"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("pobs0", "prob"), type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y extra$y0 <- y0 <- ifelse(y == 0, 1, 0) extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$skip.these <- skip.these <- matrix(as.logical(y0), n, NOS) extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("pobs0", ncoly) mynames2 <- param.names("prob", ncoly) predictors.names <- c(namesof(mynames1, .lpobs0 , earg = .epobs0 , tag = FALSE), namesof(mynames2, .lprob , earg = .eprob , tag = FALSE))[ interleave.VGAM(M1*NOS, M1 = M1)] if (!length(etastart)) { foo <- function(x) mean(as.numeric(x == 0)) phi0.init <- matrix(apply(y, 2, foo), n, ncoly, byrow = TRUE) if (length( .ipobs0 )) phi0.init <- matrix( .ipobs0 , n, ncoly, byrow = TRUE) prob.init <- if ( .imethod == 2) 1 / (1 + y + 1/16) else if ( .imethod == 1) (1 - phi0.init) / (1 + matrix(colSums(y * w) / colSums(w) + 1/16, n, ncoly, byrow = TRUE)) else (1 - phi0.init) / (1 + matrix(apply(y, 2, median), n, ncoly, byrow = TRUE) + 1/16) if (length( .iprob )) prob.init <- matrix( .iprob , n, ncoly, byrow = TRUE) etastart <- cbind(theta2eta(phi0.init, .lpobs0 , earg = .epobs0 ), theta2eta(prob.init, .lprob , earg = .eprob )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M1 = M1)] } }), list( .lpobs0 = lpobs0, .lprob = lprob, .epobs0 = epobs0, .eprob = eprob, .ipobs0 = ipobs0, .iprob = iprob, .imethod = imethod, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs0", "onempobs0"))[1] M1 <- 2 NOS <- ncol(eta) / M1 phi0 <- cbind(eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lpobs0 , earg = .epobs0 )) prob <- cbind(eta2theta(eta[, M1*(1:NOS)-0, drop = FALSE], .lprob , earg = .eprob )) ans <- switch(type.fitted, "mean" = (1 - phi0) / prob, "prob" = prob, "pobs0" = phi0, # P(Y=0) "onempobs0" = 1 - phi0) # P(Y>0) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lpobs0 = lpobs0, .lprob = lprob, .epobs0 = epobs0, .eprob = eprob ))), last = eval(substitute(expression({ temp.names <- c(rep_len( .lpobs0 , NOS), rep_len( .lprob , NOS)) temp.names <- temp.names[interleave.VGAM(M1*NOS, M1 = M1)] misc$link <- temp.names misc$earg <- vector("list", M1 * NOS) names(misc$link) <- names(misc$earg) <- c(mynames1, mynames2)[interleave.VGAM(M1*NOS, M1 = M1)] for (ii in 1:NOS) { misc$earg[[M1*ii-1]] <- .epobs0 misc$earg[[M1*ii ]] <- .eprob } misc$expected <- TRUE misc$imethod <- .imethod misc$ipobs0 <- .ipobs0 misc$iprob <- .iprob misc$multipleResponses <- TRUE }), list( .lpobs0 = lpobs0, .lprob = lprob, .epobs0 = epobs0, .eprob = eprob, .ipobs0 = ipobs0, .iprob = iprob, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { NOS <- extra$NOS M1 <- 2 phi0 <- cbind(eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lpobs0 , earg = .epobs0 )) prob <- cbind(eta2theta(eta[, M1*(1:NOS)-0, drop = FALSE], .lprob , earg = .eprob )) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dzageom(x = y, pobs0 = phi0, prob = prob, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lpobs0 = lpobs0, .lprob = lprob, .epobs0 = epobs0, .eprob = eprob ))), vfamily = c("zageometric"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) phi0 <- cbind(eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lpobs0 , earg = .epobs0 )) prob <- cbind(eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lprob , earg = .eprob )) rzageom(nsim * length(prob), prob = prob, pobs0 = phi0) }, list( .lpobs0 = lpobs0, .lprob = lprob, .epobs0 = epobs0, .eprob = eprob ))), validparams = eval(substitute(function(eta, y, extra = NULL) { phi0 <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lpobs0 , earg = .epobs0 ) prob <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lprob , earg = .eprob ) okay1 <- all(is.finite(phi0)) && all(0 < phi0 & phi0 < 1) && all(is.finite(prob)) && all(0 < prob & prob < 1) okay1 }, list( .lpobs0 = lpobs0, .lprob = lprob, .epobs0 = epobs0, .eprob = eprob ))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- ncol(eta) / M1 # extra$NOS y0 <- extra$y0 skip <- extra$skip.these phi0 <- cbind(eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lpobs0 , earg = .epobs0 )) prob <- cbind(eta2theta(eta[, M1*(1:NOS)-0, drop = FALSE], .lprob , earg = .eprob )) dl.dprob <- 1 / prob - (y - 1) / (1 - prob) dl.dphi0 <- -1 / (1 - phi0) for (spp. in 1:NOS) { dl.dphi0[skip[, spp.], spp.] <- 1 / phi0[skip[, spp.], spp.] dl.dprob[skip[, spp.], spp.] 
<- 0 } dphi0.deta <- dtheta.deta(phi0, .lpobs0 , earg = .epobs0 ) dprob.deta <- dtheta.deta(prob, .lprob , earg = .eprob ) ans <- c(w) * cbind(dl.dphi0 * dphi0.deta, dl.dprob * dprob.deta) ans <- ans[, interleave.VGAM(ncol(ans), M1 = M1)] ans }), list( .lpobs0 = lpobs0, .lprob = lprob, .epobs0 = epobs0, .eprob = eprob ))), weight = eval(substitute(expression({ wz <- matrix(0.0, n, M1*NOS) ned2l.dprob2 <- (1 - phi0) / (prob^2 * (1 - prob)) wz[, NOS+(1:NOS)] <- c(w) * ned2l.dprob2 * dprob.deta^2 mu.phi0 <- phi0 tmp100 <- mu.phi0 * (1.0 - mu.phi0) tmp200 <- if ( .lpobs0 == "logit" && is.empty.list( .epobs0 )) { cbind(c(w) * tmp100) } else { cbind(c(w) * (dphi0.deta^2) / tmp100) } wz[, 1:NOS] <- tmp200 wz <- wz[, interleave.VGAM(ncol(wz), M1 = M1)] wz }), list( .lpobs0 = lpobs0, .epobs0 = epobs0 )))) } # End of zageometric zageometricff <- function(lprob = "logit", lonempobs0 = "logit", type.fitted = c("mean", "prob", "pobs0", "onempobs0"), imethod = 1, iprob = NULL, ionempobs0 = NULL, zero = "onempobs0") { lprob <- as.list(substitute(lprob)) eprob <- link2list(lprob) lprob <- attr(eprob, "function.name") lonempobs0 <- as.list(substitute(lonempobs0)) eonempobs0 <- link2list(lonempobs0) lonempobs0 <- attr(eonempobs0, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs0", "onempobs0"))[1] if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (length(iprob)) if (!is.Numeric(iprob, positive = TRUE) || max(iprob) >= 1) stop("argument 'iprob' out of range") if (length(ionempobs0)) if (!is.Numeric(ionempobs0, positive = TRUE) || max(ionempobs0) >= 1) stop("argument 'ionempobs0' out of range") new("vglmff", blurb = c("Zero-altered geometric ", "(Bernoulli and positive-geometric conditional model)\n\n", "Links: ", namesof("prob" , lprob , earg = eprob , tag = FALSE), ", ", namesof("onempobs0", lonempobs0, earg = eonempobs0, tag = FALSE), "\n", "Mean: onempobs0 / prob"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("prob", "onempobs0"), type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y extra$y0 <- y0 <- ifelse(y == 0, 1, 0) extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$skip.these <- skip.these <- matrix(as.logical(y0), n, NOS) extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("prob", ncoly) mynames2 <- param.names("onempobs0", ncoly) predictors.names <- c(namesof(mynames1, .lprob , earg = .eprob , tag = FALSE), namesof(mynames2, .lonempobs0 , earg = .eonempobs0 , tag = FALSE))[ interleave.VGAM(M1*NOS, M1 = M1)] if (!length(etastart)) { foo <- function(x) mean(as.numeric(x == 0)) phi0.init <- matrix(apply(y, 2, foo), n, ncoly, byrow = TRUE) if (length( .ionempobs0 )) phi0.init <- matrix( 1 - .ionempobs0 , n, ncoly, byrow = TRUE) prob.init <- if ( .imethod == 2) 1 / (1 + y + 1/16) else if ( .imethod == 1) (1 - phi0.init) / (1 + matrix(colSums(y * w) / colSums(w) + 1/16, n, ncoly, byrow = TRUE)) else (1 - phi0.init) / (1 + matrix(apply(y, 2, median), n, ncoly, byrow = TRUE) + 1/16) if (length( .iprob )) prob.init <- matrix( .iprob , n, ncoly, byrow = TRUE) etastart <- cbind(theta2eta( prob.init, .lprob , earg = .eprob ), theta2eta(1 - phi0.init, .lonempobs0 , earg = .eonempobs0 )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M1 = M1)] } }), list( .lonempobs0 = lonempobs0, .lprob = lprob, .eonempobs0 = eonempobs0, .eprob = eprob, .ionempobs0 = ionempobs0, .iprob = iprob, .imethod = imethod, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs0", "onempobs0"))[1] M1 <- 2 NOS <- ncol(eta) / M1 prob <- cbind(eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lprob , earg = .eprob )) onempobs0 <- cbind(eta2theta(eta[, M1*(1:NOS)-0, drop = FALSE], .lonempobs0 , earg = .eonempobs0 )) ans <- switch(type.fitted, "mean" = onempobs0 / prob, "prob" = prob, "pobs0" = 1 - onempobs0, # P(Y=0) "onempobs0" = onempobs0) # P(Y>0) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lonempobs0 = lonempobs0, .lprob = lprob, .eonempobs0 = eonempobs0, .eprob = eprob ))), last = eval(substitute(expression({ temp.names <- c(rep_len( .lprob , NOS), rep_len( .lonempobs0 , NOS)) temp.names <- temp.names[interleave.VGAM(M1*NOS, M1 = M1)] misc$link <- temp.names misc$earg <- vector("list", M1 * NOS) names(misc$link) <- names(misc$earg) <- c(mynames1, mynames2)[interleave.VGAM(M1*NOS, M1 = M1)] for (ii in 1:NOS) { misc$earg[[M1*ii-1]] <- .eprob misc$earg[[M1*ii ]] <- .eonempobs0 } misc$expected <- TRUE misc$imethod <- .imethod misc$ionempobs0 <- .ionempobs0 misc$iprob <- .iprob misc$multipleResponses <- TRUE }), list( .lonempobs0 = lonempobs0, .lprob = lprob, .eonempobs0 = eonempobs0, .eprob = eprob, .ionempobs0 = ionempobs0, .iprob = iprob, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { NOS <- extra$NOS M1 <- 2 prob <- cbind(eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lprob , earg = .eprob )) onempobs0 <- cbind(eta2theta(eta[, M1*(1:NOS)-0, drop = FALSE], .lonempobs0 , earg = .eonempobs0 )) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dzageom(x = y, pobs0 = 1 - onempobs0, prob = prob, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lonempobs0 = lonempobs0, .lprob = lprob, .eonempobs0 = eonempobs0, .eprob = eprob ))), vfamily = c("zageometricff"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) onempobs0 <- cbind(eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lonempobs0 , earg = .eonempobs0 )) prob <- cbind(eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lprob , earg = .eprob )) rzageom(nsim * length(prob), pobs0 = 1 - onempobs0, prob = prob) }, list( .lonempobs0 = lonempobs0, .lprob = lprob, .eonempobs0 = eonempobs0, .eprob = eprob ))), validparams = eval(substitute(function(eta, y, extra = NULL) { prob <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lprob , earg = .eprob ) onempobs0 <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lonempobs0 , earg = .eonempobs0 ) okay1 <- all(is.finite(onempobs0)) && all(0 < onempobs0 & onempobs0 < 1) && all(is.finite(prob )) && all(0 < prob & prob < 1) okay1 }, list( .lonempobs0 = lonempobs0, .lprob = lprob, .eonempobs0 = eonempobs0, .eprob = eprob ))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- ncol(eta) / M1 # extra$NOS y0 <- extra$y0 skip <- extra$skip.these prob <- cbind(eta2theta(eta[, M1*(1:NOS)-1, drop = FALSE], .lprob , earg = .eprob )) onempobs0 <- cbind(eta2theta(eta[, M1*(1:NOS)-0, drop = FALSE], .lonempobs0 , earg = .eonempobs0 )) pobs0 <- 1 - onempobs0 dl.dprob <- 1 / prob - (y - 1) / (1 - prob) dl.donempobs0 <- +1 / (onempobs0) for (spp. in 1:NOS) { dl.donempobs0[skip[, spp.], spp.] <- -1 / pobs0[skip[, spp.], spp.] dl.dprob[skip[, spp.], spp.] 
<- 0 } dprob.deta <- dtheta.deta(prob, .lprob , earg = .eprob ) donempobs0.deta <- dtheta.deta(onempobs0, .lonempobs0 , earg = .eonempobs0 ) ans <- c(w) * cbind(dl.dprob * dprob.deta, dl.donempobs0 * donempobs0.deta) ans <- ans[, interleave.VGAM(ncol(ans), M1 = M1)] ans }), list( .lonempobs0 = lonempobs0, .lprob = lprob, .eonempobs0 = eonempobs0, .eprob = eprob ))), weight = eval(substitute(expression({ wz <- matrix(0.0, n, M1*NOS) ned2l.dprob2 <- (1 - pobs0) / (prob^2 * (1 - prob)) wz[, (1:NOS)] <- c(w) * ned2l.dprob2 * dprob.deta^2 mu.phi0 <- pobs0 # phi0 tmp100 <- mu.phi0 * (1.0 - mu.phi0) tmp200 <- if ( FALSE && .lonempobs0 == "logit" && is.empty.list( .eonempobs0 )) { cbind(c(w) * tmp100) } else { cbind(c(w) * (donempobs0.deta^2) / tmp100) } wz[, NOS+(1:NOS)] <- tmp200 wz <- wz[, interleave.VGAM(ncol(wz), M1 = M1)] wz }), list( .lonempobs0 = lonempobs0, .eonempobs0 = eonempobs0 )))) } # End of zageometricff deflat.limit.oipospois <- function(lambda) { if (any(lambda < 0)) stop("argument 'lambda' cannot be negative") ans <- -lambda / (expm1(lambda) - lambda) ans[is.infinite(lambda)] <- 0 ans } doipospois <- function(x, lambda, pstr1 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(lambda), length(pstr1)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(lambda) != LLL) lambda <- rep_len(lambda, LLL) if (length(pstr1) != LLL) pstr1 <- rep_len(pstr1, LLL) ans <- rep(NA_real_, LLL) index1 <- (x == 1) if (log.arg) { ans[ index1] <- log(pstr1[ index1] + (1 - pstr1[ index1]) * dpospois(x[ index1], lambda[ index1])) ans[!index1] <- log1p(-pstr1[!index1]) + dpospois(x[!index1], lambda[!index1], log = TRUE) } else { ans[ index1] <- pstr1[ index1] + (1 - pstr1[ index1]) * dpospois(x[ index1], lambda[ index1]) ans[!index1] <- (1 - pstr1[!index1]) * dpospois(x[!index1], lambda[!index1]) } deflat.limit <- deflat.limit.oipospois(lambda) ans[pstr1 < deflat.limit] <- NaN ans[pstr1 > 1] <- NaN ans } # doipospois poipospois <- function(q, lambda, pstr1 = 0) { LLL <- max(length(q), length(lambda), length(pstr1)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(lambda) != LLL) lambda <- rep_len(lambda, LLL) if (length(pstr1) != LLL) pstr1 <- rep_len(pstr1, LLL) ans <- rep_len(NA_real_, LLL) deflat.limit <- deflat.limit.oipospois(lambda) ans <- ppospois(q, lambda) #, lower.tail = lower.tail, log.p = log.p ans <- ifelse(q < 1, 0, pstr1 + (1 - pstr1) * ans) ans[pstr1 < deflat.limit] <- NaN ans[1 < pstr1] <- NaN ans[lambda <= 0] <- NaN ans } # poipospois qoipospois <- function(p, lambda, pstr1 = 0) { LLL <- max(length(p), length(lambda), length(pstr1)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(lambda) != LLL) lambda <- rep_len(lambda, LLL) if (length(pstr1) != LLL) pstr1 <- rep_len(pstr1, LLL) ans <- rep_len(NA_real_, LLL) deflat.limit <- deflat.limit.oipospois(lambda) ans[p <= pstr1] <- 1 pindex <- (deflat.limit <= pstr1) & (pstr1 < p) ans[pindex] <- qpospois((p[pindex] - pstr1[pindex]) / (1 - pstr1[pindex]), lambda = lambda[pindex]) ans[pstr1 < deflat.limit] <- NaN ans[1 < pstr1] <- NaN ans[p < 0] <- NaN ans[1 < p] <- NaN ans[lambda <= 0] <- NaN ans } # qoipospois roipospois <- function(n, lambda, pstr1 = 0) { ans <- qoipospois(runif(n), lambda, pstr1 = pstr1) ans } # roipospois oipospoisson <- function(lpstr1 = "logit", llambda = "loge", type.fitted = c("mean", "lambda", "pobs1", "pstr1", "onempstr1"), ilambda = NULL, gpstr1 = (1:19)/20, gprobs.y = (1:19)/20, # 20160518; grid for finding 
                          # lambd.init
                          imethod = 1, zero = NULL) {

  if (!is.Numeric(imethod, length.arg = 1,
                  integer.valued = TRUE, positive = TRUE) ||
      imethod > 2)
    stop("argument 'imethod' must be 1 or 2")

  gpstr10 <- gpstr1

  lpstr10 <- as.list(substitute(lpstr1))
  epstr10 <- link2list(lpstr10)
  lpstr10 <- attr(epstr10, "function.name")

  llambda <- as.list(substitute(llambda))
  elambda <- link2list(llambda)
  llambda <- attr(elambda, "function.name")

  type.fitted <- match.arg(type.fitted,
                   c("mean", "lambda", "pobs1", "pstr1", "onempstr1"))[1]

  if (length(ilambda))
    if (!is.Numeric(ilambda, positive = TRUE))
      stop("argument 'ilambda' values must be positive")

  new("vglmff",
  blurb = c("One-inflated positive Poisson\n\n",
            "Links: ",
            namesof("pstr1",  lpstr10, earg = epstr10 ), ", ",
            namesof("lambda", llambda, earg = elambda ), "\n",
            "Mean: pstr1 + (1 - pstr1) * lambda / (1 - exp(-lambda))"),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M,
                                predictors.names = predictors.names,
                                M1 = 2)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...) {
    list(M1 = 2,
         Q1 = 1,
         expected = TRUE,
         imethod = .imethod ,
         multipleResponses = TRUE,
         parameters.names = c("pstr1", "lambda"),
         type.fitted = .type.fitted ,
         zero = .zero )
  }, list( .zero = zero, .imethod = imethod,
           .type.fitted = type.fitted ))),

  initialize = eval(substitute(expression({
    M1 <- 2

    temp5 <- w.y.check(w = w, y = y,
                       Is.positive.y = TRUE,
                       Is.nonnegative.y = TRUE,
                       Is.integer.y = TRUE,
                       ncol.w.max = Inf,
                       ncol.y.max = Inf,
                       out.wy = TRUE,
                       colsyperw = 1,
                       maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    NOS <- ncoly <- ncol(y)
    extra$ncoly <- ncoly
    M <- M1 * ncoly
    extra$type.fitted <- .type.fitted
    extra$colnames.y  <- colnames(y)

    mynames1 <- param.names("pstr1",  ncoly)
    mynames2 <- param.names("lambda", ncoly)
    predictors.names <-
      c(namesof(mynames1, .lpstr10 , earg = .epstr10 , tag = FALSE),
        namesof(mynames2, .llambda , earg = .elambda , tag = FALSE))[
        interleave.VGAM(M, M1 = M1)]

    if (!length(etastart)) {
      lambd.init <- pstr1.init <- matrix(NA_real_, n, NOS)
      gpstr10  <- .gpstr10
      gprobs.y <- .gprobs.y
      ilambda  <- .ilambda

      oipospois.Loglikfun <- function(pstr1, lambda, y, x, w, extraargs) {
        sum(c(w) * doipospois(x = y, pstr1 = pstr1,
                              lambda = lambda, log = TRUE))
      }

      for (jay in 1:NOS) {  # For each response 'y_jay'... do:
        TFvec <- y[, jay] > 1  # Important to exclude the 1s
        posyvec <- y[TFvec, jay]  # Variable name unchanged (lazy)
        lambd.init.jay <- if ( .imethod == 1) {
          quantile(posyvec, probs = gprobs.y) - 1/2  # + 1/16
        } else if ( .imethod == 2) {
          weighted.mean(posyvec, w = w[TFvec, jay]) - 1/2
        } else {
          warning("argument 'imethod' should have the value 1 or 2")
        }

        if (length(ilambda)) {  # zz
          lambd.init.jay <- ilambda[jay]
        } else {
        }

        try.this <- grid.search2(gpstr10, lambd.init.jay,
                                 objfun = oipospois.Loglikfun,
                                 y = y[, jay],
                                 # x = x[TFvec, , drop = FALSE],
                                 w = w[, jay],
                                 ret.objfun = TRUE)  # Last value is the loglik
        pstr1.init[, jay] <- try.this["Value1"]
        lambd.init[, jay] <- (try.this["Value2"] + y[, jay]) / 2
        lambd.init[, jay] <- try.this["Value2"]
      }  # for (jay ...)

      etastart <-
        cbind(theta2eta(pstr1.init, .lpstr10 , earg = .epstr10 ),
              theta2eta(lambd.init, .llambda , earg = .elambda ))[,
              interleave.VGAM(M, M1 = M1)]
      mustart <- NULL  # Since etastart has been computed.
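      # Setting mustart to NULL means the fit starts from etastart;
      # the initial fitted values are then derived from these linear
      # predictors rather than supplied directly.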
} # End of !length(etastart)
  }), list( .lpstr10 = lpstr10, .llambda = llambda,
            .epstr10 = epstr10, .elambda = elambda,
            .ilambda = ilambda,
            .gpstr10 = gpstr10,
            .gprobs.y = gprobs.y,
            .imethod = imethod,  # .probs.y = probs.y,
            .type.fitted = type.fitted ))),

  linkinv = eval(substitute(function(eta, extra = NULL) {
    NOS <- ncol(eta) / c(M1 = 2)
    type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else {
      warning("cannot find 'type.fitted'. ", "Returning the 'mean'.")
      "mean"
    }
    type.fitted <- match.arg(type.fitted,
                             c("mean", "lambda", "pobs1",
                               "pstr1", "onempstr1"))[1]
    phimat <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr10 , earg = .epstr10 )
    lambda <- eta2theta(eta[, c(FALSE, TRUE)], .llambda , earg = .elambda )
    ans <- switch(type.fitted,
                  "mean"      = phimat - (1 - phimat) * lambda /
                                expm1(-lambda),
                  "lambda"    = lambda,
                  "pobs1"     = doipospois(1, lambda = lambda,
                                           pstr1 = phimat),  # Pr(Y=1)
                  "pstr1"     = phimat,
                  "onempstr1" = 1 - phimat)
    label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS)
  }, list( .lpstr10 = lpstr10, .llambda = llambda,
           .epstr10 = epstr10, .elambda = elambda ))),

  last = eval(substitute(expression({
    misc$link <- c(rep_len( .lpstr10 , NOS),
                   rep_len( .llambda , NOS))[interleave.VGAM(M, M1 = M1)]
    temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)]
    names(misc$link) <- temp.names
    misc$earg <- vector("list", M)
    names(misc$earg) <- temp.names
    for (ii in 1:ncoly) {
      misc$earg[[M1*ii-1]] <- .epstr10
      misc$earg[[M1*ii  ]] <- .elambda
    }
  }), list( .lpstr10 = lpstr10, .llambda = llambda,
            .epstr10 = epstr10, .elambda = elambda ))),

  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta, extra = NULL,
             summation = TRUE) {
    phimat <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr10 , earg = .epstr10 )
    lambda <- eta2theta(eta[, c(FALSE, TRUE)], .llambda , earg = .elambda )
    if (residuals) {
      stop("loglikelihood residuals not implemented yet")
    } else {
      ll.elts <- c(w) * doipospois(x = y, pstr1 = phimat,
                                   lambda = lambda, log = TRUE)
      if (summation) {
        sum(ll.elts)
      } else {
        ll.elts
      }
    }
  }, list( .lpstr10 = lpstr10, .llambda = llambda,
           .epstr10 = epstr10, .elambda = elambda ))),

  vfamily = c("oipospoisson"),

  simslot = eval(substitute(
    function(object, nsim) {
    pwts <- if (length(pwts <- object@prior.weights) > 0)
              pwts else weights(object, type = "prior")
    if (any(pwts != 1))
      warning("ignoring prior weights")
    eta <- predict(object)
    phimat <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr10 , earg = .epstr10 )
    lambda <- eta2theta(eta[, c(FALSE, TRUE)], .llambda , earg = .elambda )
    roipospois(nsim * length(lambda), lambda = lambda, pstr1 = phimat)
  }, list( .lpstr10 = lpstr10, .llambda = llambda,
           .epstr10 = epstr10, .elambda = elambda ))),

  validparams = eval(substitute(function(eta, y, extra = NULL) {
    phimat <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE],
                        .lpstr10 , earg = .epstr10 )
    lambda <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE],
                        .llambda , earg = .elambda )
    okay1 <- all(is.finite(lambda)) && all(0 < lambda) &&
             all(is.finite(phimat)) && all(phimat < 1)
    deflat.limit <- deflat.limit.oipospois(lambda)
    okay2.deflat <- TRUE
    if (okay1 && !(okay2.deflat <- all(deflat.limit < phimat)))
      warning("parameter 'pstr1' is too negative even allowing for ",
              "1-deflation.")
    okay1 && okay2.deflat
  }, list( .lpstr10 = lpstr10, .llambda = llambda,
           .epstr10 = epstr10, .elambda = elambda ))),

  deriv = eval(substitute(expression({
    M1 <- 2
    NOS <- M / M1
    phimat <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE],
                        .lpstr10 , earg = .epstr10 )
    lambda <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE],
                        .llambda , earg =
.elambda ) pmf1 <- -lambda * exp(-lambda) / expm1(-lambda) onempmf1 <- 1 - pmf1 # doipospois(1, lambda = lambda, pstr1 = phimat) pobs1 <- phimat + (1 - phimat) * pmf1 index1 <- as.matrix(y == 1) dl.dphimat <- onempmf1 / pobs1 dl.dphimat[!index1] <- -1 / (1 - phimat[!index1]) dpmf1.dlambda <- exp(-lambda) * (1 - lambda - exp(-lambda)) / (expm1(-lambda))^2 d3 <- deriv3( ~ exp(-lambda) * lambda / (1 - exp(-lambda)), c("lambda"), hessian = TRUE) eval.d3 <- eval(d3) d2pmf1.dlambda2 <- attr(eval.d3, "hessian") dim(d2pmf1.dlambda2) <- c(n, NOS) # Matrix it, even for NOS==1 dl.dlambda <- (1 - phimat) * dpmf1.dlambda / pobs1 # dl.dlambda[!index1] <- y[!index1] / lambda[!index1] - 1 - 1 / expm1(lambda[!index1]) dphimat.deta <- dtheta.deta(phimat, .lpstr10 , earg = .epstr10 ) dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) myderiv <- c(w) * cbind(dl.dphimat * dphimat.deta, dl.dlambda * dlambda.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lpstr10 = lpstr10, .llambda = llambda, .epstr10 = epstr10, .elambda = elambda ))), weight = eval(substitute(expression({ ned2l.dphimat2 <- onempmf1 / ((1 - phimat) * pobs1) # ned2l.dphimatlambda <- dpmf1.dlambda / pobs1 # ned2l.dlambda2 <- (((1 - phimat) * dpmf1.dlambda)^2) / pobs1 - (1 - phimat) * d2pmf1.dlambda2 + (1 - phimat) * (1/lambda - exp(-lambda) * (1 - exp(-lambda) - lambda * exp(-lambda)) / (expm1(-lambda))^3) wz <- array(c(c(w) * ned2l.dphimat2 * dphimat.deta^2, c(w) * ned2l.dlambda2 * dlambda.deta^2, c(w) * ned2l.dphimatlambda * dphimat.deta * dlambda.deta), dim = c(n, M / M1, 3)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .llambda = llambda, .elambda = elambda )))) } # oipospoisson doiposbinom <- function(x, size, prob, pstr1 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(size), length(prob), length(pstr1)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(size) != LLL) size <- rep_len(size, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(pstr1) != LLL) pstr1 <- rep_len(pstr1, LLL) ans <- x # + prob + pstr1 index1 <- (x == 1) if (log.arg) { ans[ index1] <- log(pstr1[ index1] + (1 - pstr1[ index1]) * dposbinom(x[ index1], size[ index1], prob[ index1])) ans[!index1] <- log1p(-pstr1[!index1]) + dposbinom(x[!index1], size[!index1], prob[!index1], log = TRUE) } else { ans[ index1] <- pstr1[ index1] + (1 - pstr1[ index1]) * dposbinom(x[ index1], size[ index1], prob[ index1]) ans[!index1] <- (1 - pstr1[!index1]) * dposbinom(x[!index1], size[!index1], prob[!index1]) } deflat.limit <- size * prob / (1 + (size-1) * prob - 1 / (1-prob)^(size-1)) ans[pstr1 < deflat.limit] <- NaN ans[1 < pstr1] <- NaN ans } # doiposbinom poiposbinom <- function(q, size, prob, pstr1 = 0) { LLL <- max(length(q), length(size), length(prob), length(pstr1)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(size) != LLL) size <- rep_len(size, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(pstr1) != LLL) pstr1 <- rep_len(pstr1, LLL) ans <- rep_len(NA_real_, LLL) ans <- pposbinom(q, size, prob) # lower.tail=lower.tail, log.p=log.p ans <- ifelse(q < 1, 0, pstr1 + (1 - pstr1) * ans) deflat.limit <- size * prob / (1 + (size-1) * prob - 1 / (1-prob)^(size-1)) ans[pstr1 < deflat.limit] <- NaN ans[1 < pstr1] <- NaN ans } qoiposbinom <- function(p, size, prob, pstr1 = 0) { LLL <- max(length(p), length(size), length(prob), length(pstr1)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(size) != LLL) size <- 
rep_len(size, LLL) if (length(prob) != LLL) prob <- rep_len(prob, LLL) if (length(pstr1) != LLL) pstr1 <- rep_len(pstr1, LLL) ans <- rep_len(NA_real_, LLL) deflat.limit <- size * prob / (1 + (size-1) * prob - 1 / (1-prob)^(size-1)) ans[p <= pstr1] <- 1 pindex <- (deflat.limit <= pstr1) & (pstr1 < p) ans[pindex] <- qposbinom((p[pindex] - pstr1[pindex]) / (1 - pstr1[pindex]), size = size[pindex], prob = prob[pindex]) ans[p == 0] <- 1 ans[prob == 0] <- NaN ans[pstr1 < deflat.limit] <- NaN ans[1 < pstr1] <- NaN ans[p < 0] <- NaN ans[1 < p] <- NaN ans } # qoiposbinom roiposbinom <- function(n, size, prob, pstr1 = 0) { qoiposbinom(runif(n), size, prob, pstr1 = pstr1) } # roiposbinom oiposbinomial <- function(lpstr1 = "logit", lprob = "logit", type.fitted = c("mean", "prob", "pobs1", "pstr1", "onempstr1"), iprob = NULL, gpstr1 = ppoints(9), # (1:19)/20, gprob = ppoints(9), # (1:19)/20, # 20160613; grid for finding prob multiple.responses = FALSE, zero = NULL) { gprobb <- gprob lpstr1 <- as.list(substitute(lpstr1)) epstr1 <- link2list(lpstr1) lpstr1 <- attr(epstr1, "function.name") lprobb <- as.list(substitute(lprob)) eprobb <- link2list(lprobb) lprobb <- attr(eprobb, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs1", "pstr1", "onempstr1"))[1] iprobb <- iprob if (length(iprobb)) if (!is.Numeric(iprobb, positive = TRUE) || any(iprobb >= 1)) stop("argument 'iprob' values must be in (0, 1)") new("vglmff", blurb = c("One-inflated positive binomial\n\n", "Links: ", namesof("pstr1", lpstr1, earg = epstr1 ), ", ", namesof("prob", lprobb, earg = eprobb ), "\n", "Mean: pstr1 + (1 - pstr1) * ", "size * prob / (1 - (1-prob)^size)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = .multiple.responses , # FALSE, # TRUE, parameters.names = c("pstr1", "prob"), type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .multiple.responses = multiple.responses, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ M1 <- 2 multiple.responses <- .multiple.responses y <- as.matrix(y) w.orig <- as.matrix(w) # zz this may be of a weird dimension temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, Is.integer.y = if (multiple.responses) FALSE else FALSE, ncol.w.max = if (multiple.responses) ncol(y) else 1, ncol.y.max = if (multiple.responses) ncol(y) else ncol(y), out.wy = TRUE, colsyperw = if (multiple.responses) 1 else ncol(y), maximize = TRUE) w <- temp5$w y <- temp5$y if (multiple.responses) { if (!all(w == round(w))) stop("the 'weights' argument must be integer valued") if (min(y) < 0 || max(y) > 1) stop("the response must be a proportion") Nvec <- w } else { if (ncol(y) > 1) { Nvec <- rowSums(y) y[, 1] <- y[, 1] / Nvec y <- y[, 1, drop = FALSE] w[, 1] <- w[, 1] * Nvec # == w.orig * Nvec w <- w[, 1, drop = FALSE] } else { Nvec <- w # rep_len(1, nrow(x)) if (!all(Nvec == round(Nvec))) stop("number of trials is not integer-valued") } } extra$Nvec <- Nvec w.orig <- matrix(w.orig, n, ncol(y)) NOS <- ncoly <- ncol(y) extra$ncoly <- ncoly M <- M1 * ncoly extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("pstr1", ncoly) mynames2 <- param.names("prob", ncoly) predictors.names <- c(namesof(mynames1, .lpstr1 , earg = .epstr1 , tag = FALSE), namesof(mynames2, .lprobb , earg = .eprobb , tag = FALSE))[ interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { probb.init <- pstr1.init <- matrix(NA_real_, n, NOS) gpstr1 <- .gpstr1 gprobb <- .gprobb iprobb <- .iprobb if (length(iprobb)) gprobb <- iprobb oiposbinom.Loglikfun <- function(pstr1, prob, y, x, w, extraargs) { sum(c(w) * doiposbinom(x = y, pstr1 = pstr1, size = extraargs$size, prob = prob, log = TRUE)) } for (jay in 1:NOS) { # For each response 'y_jay'... do: try.this <- grid.search2(gpstr1, gprobb, objfun = oiposbinom.Loglikfun, y = round(y[, jay] * Nvec[, jay]), w = 1, # w.orig[, jay], or 1, or w[, jay], possibly extraargs = list(size = Nvec), ret.objfun = TRUE) # Last value is the loglik pstr1.init[, jay] <- try.this["Value1"] probb.init[, jay] <- try.this["Value2"] } # for (jay ...) etastart <- cbind(theta2eta(pstr1.init, .lpstr1 , earg = .epstr1 ), theta2eta(probb.init, .lprobb , earg = .eprobb ))[, interleave.VGAM(M, M1 = M1)] mustart <- NULL # Since etastart has been computed. } # End of !length(etastart) }), list( .lpstr1 = lpstr1, .lprobb = lprobb, .epstr1 = epstr1, .eprobb = eprobb, .iprobb = iprobb, .gpstr1 = gpstr1, .gprobb = gprobb, .multiple.responses = multiple.responses, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 2) type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "prob", "pobs1", "pstr1", "onempstr1"))[1] pstr1 <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr1 , earg = .epstr1 ) probb <- eta2theta(eta[, c(FALSE, TRUE)], .lprobb , earg = .eprobb ) Nvec <- extra$Nvec if (!is.numeric(Nvec)) stop("something gone wrong with 'Nvec'") ans <- switch(type.fitted, "mean" = pstr1 + (1 - pstr1) * Nvec * probb / (1 - (1-probb)^Nvec), "prob" = probb, "pobs1" = doiposbinom(1, prob = probb, size = Nvec, pstr1 = pstr1), # Pr(Y=1) "pstr1" = pstr1, "onempstr1" = 1 - pstr1) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lpstr1 = lpstr1, .lprobb = lprobb, .epstr1 = epstr1, .eprobb = eprobb ))), last = eval(substitute(expression({ misc$link <- c(rep_len( .lpstr1 , NOS), rep_len( .lprobb , NOS))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .epstr1 misc$earg[[M1*ii ]] <- .eprobb } }), list( .lpstr1 = lpstr1, .lprobb = lprobb, .epstr1 = epstr1, .eprobb = eprobb ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { pstr1 <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr1 , earg = .epstr1 ) probb <- eta2theta(eta[, c(FALSE, TRUE)], .lprobb , earg = .eprobb ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * doiposbinom(x = round(extra$Nvec * y), size = extra$Nvec, # w, pstr1 = pstr1, prob = probb, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lpstr1 = lpstr1, .lprobb = lprobb, .epstr1 = epstr1, .eprobb = eprobb ))), vfamily = c("oiposbinomial"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) pstr1 <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr1 , earg = .epstr1 ) probb <- eta2theta(eta[, c(FALSE, TRUE)], .lprobb , earg = .eprobb ) Nvec <- object@extra$Nvec roiposbinom(nsim * length(probb), size = Nvec, probb = probb, pstr1 = pstr1) }, list( .lpstr1 = lpstr1, .lprobb = lprobb, .epstr1 = epstr1, .eprobb = eprobb ))), validparams = eval(substitute(function(eta, y, extra = NULL) { pstr1 <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lpstr1 , earg = .epstr1 ) probb <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lprobb , earg = .eprobb ) size <- extra$Nvec okay1 <- all(is.finite(pstr1)) && all(pstr1 < 1) && all(is.finite(probb)) && all(0 < probb & probb < 1) deflat.limit <- size * probb / (1 + (size-1) * probb - 1 / (1-probb)^(size-1)) okay2.deflat <- TRUE if (okay1 && !(okay2.deflat <- all(deflat.limit < pstr1))) warning("parameter 'pstr1' is too negative even allowing for ", "1-deflation.") okay1 && okay2.deflat }, list( .lpstr1 = lpstr1, .lprobb = lprobb, .epstr1 = epstr1, .eprobb = eprobb ))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- M / M1 pstr1 <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lpstr1 , earg = .epstr1 ) probb <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lprobb , earg = .eprobb ) size <- extra$Nvec Qn <- function(n, prob) (1 - prob)^n pmf1 <- size * probb * Qn(size-1, probb) / (1 - Qn(size, probb)) onempmf1 <- 1 - pmf1 # doiposbinom(1, probb = probb, pstr1 = pstr1) pobs1 <- pstr1 + (1 - pstr1) * pmf1 index1 <- as.matrix(round(w * y) == 1) dl.dpstr1 <- 
onempmf1 / pobs1 dl.dpstr1[!index1] <- -1 / (1 - pstr1[!index1]) d3 <- deriv3( ~ size * probb * ((1 - probb)^(size-1)) / (1 - (1 - probb)^size), c("probb"), hessian = TRUE) eval.d3 <- eval(d3) dpmf1.dprobb <- attr(eval.d3, "gradient") # For checking only d2pmf1.dprobb2 <- attr(eval.d3, "hessian") # dim(dpmf1.dprobb) <- c(n, NOS) # Matrix it, even for NOS==1 dim(d2pmf1.dprobb2) <- c(n, NOS) # Matrix it, even for NOS==1 dl.dprobb <- size * ( y / probb - (1 - y) / (1 - probb) - Qn(size-1, probb) / (1 - Qn(size, probb))) dl.dprobb[index1] <- (1 - pstr1[index1]) * dpmf1.dprobb[index1] / pobs1[index1] dpstr1.deta <- dtheta.deta(pstr1, .lpstr1 , earg = .epstr1 ) dprobb.deta <- dtheta.deta(probb, .lprobb , earg = .eprobb ) myderiv <- cbind(dl.dpstr1 * dpstr1.deta, # * c(w), dl.dprobb * dprobb.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lpstr1 = lpstr1, .lprobb = lprobb, .epstr1 = epstr1, .eprobb = eprobb ))), weight = eval(substitute(expression({ d4 <- deriv3( ~ size * ((1 - probb)^(size-1)) / (1 - (1 - probb)^size), c("probb"), hessian = FALSE) eval.d4 <- eval(d4) d2logonempmf0.dprobb2 <- attr(eval.d4, "gradient") dim(d2logonempmf0.dprobb2) <- c(n, NOS) # Matrix it, even for NOS==1 E2 <- function(size, prob) { size * prob * (1 - Qn(size-1, prob)) / (1 - Qn(size, prob) - size * prob * Qn(size-1, prob)) } E2mat <- E2(size, probb) RHS <- onempmf1 * ( E2mat / probb^2 + (size - E2mat) / (1-probb)^2 + d2logonempmf0.dprobb2) LHS <- -d2pmf1.dprobb2 + ((1-pstr1) / pobs1) * dpmf1.dprobb^2 ned2l.dpstr12 <- onempmf1 / ((1 - pstr1) * pobs1) ned2l.dpstr1probb <- dpmf1.dprobb / pobs1 ned2l.dprobb2 <- (1 - pstr1) * (LHS + RHS) wz <- array(c(ned2l.dpstr12 * dpstr1.deta^2, ned2l.dprobb2 * dprobb.deta^2, ned2l.dpstr1probb * dpstr1.deta * dprobb.deta), dim = c(n, M / M1, 3)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lprobb = lprobb, .eprobb = eprobb )))) } # oiposbinomial VGAM/R/cao.fit.q0000644000176200001440000020661713135276757012746 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. cao.fit <- function(x, y, w = rep_len(1, length(x[, 1])), etastart = NULL, mustart = NULL, coefstart = NULL, offset = 0, family, control = cao.control(...), criterion = "coefficients", qr.arg = FALSE, constraints = NULL, extra = NULL, Terms = Terms, function.name = "cao", ...) 
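## cao.fit() is the engine behind cao(): an outer optim(..., method =
## "BFGS") call over the constrained coefficients C wraps the inner
## backfitting/smoothing step performed by callcaoc().  A commented
## user-level sketch (uses the hspider data shipped with VGAM; see ?cao
## for the authoritative example -- this is only an illustration):
##   hspider[, 1:6] <- scale(hspider[, 1:6])
##   fit <- cao(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull) ~
##                WaterCon + BareSand + FallTwig + CoveMoss + ReflLux,
##              family = poissonff, data = hspider,
##              Rank = 1, df1.nl = 2.5, Bestof = 2)
##   Coef(fit)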
{ maxitl <- NULL fv <- NULL eff.n <- nrow(x) # + sum(abs(w[1:nrow(x)])) specialCM <- NULL post <- list() check.rank <- TRUE nonparametric <- TRUE optim.maxit <- control$optim.maxit save.weights <- control$save.weights trace <- control$trace minimize.criterion <- control$min.criterion n <- dim(x)[1] copy.X.vlm <- FALSE # May be overwritten in @initialize X.vlm.save <- NULL intercept.only <- ncol(x) == 1 && dimnames(x)[[2]] == "(Intercept)" y.names <- predictors.names <- NULL # May be overwritten in @initialize n.save <- n Rank <- control$Rank rrcontrol <- control # if (length(family@initialize)) eval(family@initialize) # Initialize mu and M (and optionally w) n <- n.save modelno <- switch(family@vfamily[1], "poissonff" = 2, "binomialff" = 1, "quasipoissonff" = 0, "quasibinomialff" = 0, "negbinomial" = 3, "gamma2" = 5, "gaussianff" = 8, 0) # stop("cannot fit this model using fast algorithm") if (!modelno) stop("the family function does not work with cao()") if (modelno == 1) modelno <- get("modelno", envir = VGAMenv) eval(rrr.init.expression) if (length(etastart)) { eta <- etastart mu <- if (length(mustart)) mustart else if (length(body(slot(family, "linkinv")))) slot(family, "linkinv")(eta, extra) else warning("argument 'etastart' assigned a value ", "but there is no 'linkinv' slot to use it") } if (length(mustart)) { mu <- mustart if (length(body(slot(family, "linkfun")))) { eta <- slot(family, "linkfun")(mu, extra) } else { warning("argument 'mustart' assigned a value ", "but there is no 'link' slot to use it") } } M <- if (is.matrix(eta)) ncol(eta) else 1 if (length(family@constraints)) eval(family@constraints) special.matrix <- matrix(-34956.125, M, M) # An unlikely used matrix just.testing <- cm.VGAM(special.matrix, x, rrcontrol$noRRR, constraints) findex <- trivial.constraints(just.testing, special.matrix) tc1 <- trivial.constraints(constraints) if (all(findex == 1)) stop("No covariates to form latent variables from.") colx1.index <- names.colx1.index <- NULL dx2 <- dimnames(x)[[2]] if (sum(findex)) { asx <- attr(x, "assign") for (ii in names(findex)) if (findex[ii]) { names.colx1.index <- c(names.colx1.index, dx2[asx[[ii]]]) colx1.index <- c(colx1.index, asx[[ii]]) } names(colx1.index) <- names.colx1.index } rrcontrol$colx1.index <- control$colx1.index <- colx1.index colx2.index <- 1:ncol(x) names(colx2.index) <- dx2 colx2.index <- colx2.index[-colx1.index] p1 <- length(colx1.index) p2 <- length(colx2.index) rrcontrol$colx2.index <- control$colx2.index <- colx2.index Cmat <- if (length(rrcontrol$Cinit)) { matrix(rrcontrol$Cinit, p2, Rank) } else { if (!rrcontrol$Use.Init.Poisson.QO) { matrix(rnorm(p2 * Rank, sd = rrcontrol$sd.Cinit), p2, Rank) } else { .Init.Poisson.QO(ymat = as.matrix(y), X1 = x[, colx1.index, drop = FALSE], X2 = x[, colx2.index, drop = FALSE], Rank = rrcontrol$Rank, trace = rrcontrol$trace, max.ncol.etamat = rrcontrol$Etamat.colmax, Crow1positive = rrcontrol$Crow1positive, constwt = any(family@vfamily[1] == c("negbinomial", "gamma2", "gaussianff")), takelog = any(family@vfamily[1] != c("gaussianff"))) } } rrcontrol$Cinit <- control$Cinit <- Cmat # Good for valt() Hlist <- process.constraints(constraints, x, M, specialCM = specialCM) nice31 <- checkCMCO(Hlist, control = control, modelno = modelno) if (nice31 != 1) stop("not nice") ncolHlist <- unlist(lapply(Hlist, ncol)) latvar.mat <- x[, colx2.index, drop = FALSE] %*% Cmat rmfromVGAMenv(c("etamat", "beta"), prefix = ".VGAM.CAO.") Nice21 <- length(names.colx1.index) == 1 && names.colx1.index == "(Intercept)" if 
(!Nice21) stop("'noRRR = ~ 1' is supported only, without constraints") NOS <- ifelse(modelno %in% c(3, 5), M/2, M) p1star. <- if (Nice21) ifelse(modelno %in% c(3, 5), 2, 1) else M p2star. <- if (Nice21) Rank else stop("not Nice21") pstar. <- p1star. + p2star. nstar <- if (Nice21) ifelse(modelno %in% c(3, 5), n * 2, n) else n * M lenbeta <- pstar. * ifelse(Nice21, NOS, 1) othint <- c(Rank, control$eq.tol, pstar. , dim2wz = 1, inited = 0, # w(, dimw) cols modelno, maxitl = control$maxitl, actnits = 0, twice = 0, p1star. , p2star. , Nice21, lenbeta, controlI.tolerances = 0, control$trace, p1, p2 = p2, imethod = control$imethod, bchat = 0) othdbl <- c(small = control$SmallNo, fseps = control$epsilon, .Machine$double.eps, iKvector = rep_len(control$iKvector, NOS), iShape = rep_len(control$iShape, NOS), resss = 0, bfeps = control$bf.epsilon, hstep = 0.1) for (iter in 1:optim.maxit) { if (control$trace) { cat("\nIteration", iter, "\n") flush.console() } conjgrad <- optim(par = c(Cmat), fn = callcaoc, gr = if (control$GradientFunction) calldcaoc else NULL, method = "BFGS", control = list(fnscale = 1, trace = as.integer(control$trace), maxit = control$Maxit.optim, REPORT = 10), etamat = eta, xmat = x, ymat = y, # as.matrix(y), wvec = w, modelno = modelno, Control = control, Nice21 = Nice21, p1star. = p1star. , p2star. = p2star. , n = n, M = M, othint = othint, othdbl = othdbl, alldump = FALSE) Cmat <- matrix(conjgrad$par, p2, Rank) # old becoz of scale(cmatrix) if (converged <- (conjgrad$convergence == 0)) break } if (!converged) { if (control$maxitl > 1) { warning("convergence not obtained in ", control$maxitl, " iterations.") } else { warning("convergence not obtained") } } else { } Cmat <- crow1C(Cmat, control$Crow1positive) # Make sure signs are right flush.console() temp9 <- callcaoc(cmatrix = Cmat, etamat = eta, xmat = x, ymat = y, wvec = w, modelno = modelno, Control = control, Nice21 = Nice21, p1star. = p1star. , p2star. = p2star. 
, n = n, M = M, othint = othint, othdbl = othdbl, alldump = TRUE) if (!is.list(extra)) extra <- list() extra$Cmat <- temp9$Cmat ynames <- dimnames(y)[[2]] extra$df1.nl <- temp9$df1.nl extra$lambda1 <- temp9$lambda1 extra$spar1 <- temp9$spar1 names(extra$df1.nl) <- names(extra$lambda1) <- names(extra$spar1) <- ynames if (Rank == 2) { extra$spar2 <- temp9$spar2 extra$lambda2 <- temp9$lambda2 extra$df2.nl <- temp9$df2.nl names(extra$df2.nl) <- names(extra$lambda2) <- names(extra$spar2) <- ynames } extra$alldeviance <- temp9$alldeviance names(extra$alldeviance) <- ynames mu <- matrix(temp9$fitted, n, NOS, byrow = TRUE) dn <- labels(x) yn <- dn[[1]] if (is.matrix(mu)) { if (length(dimnames(y)[[2]])) { y.names <- dimnames(y)[[2]] } if (length(dimnames(mu)[[2]])) { y.names <- dimnames(mu)[[2]] } dimnames(mu) <- list(yn, y.names) } else { names(mu) <- names(fv) } fit <- list( fitted.values = mu, Cmatrix = Cmat, terms = Terms) # terms: This used to be done in vglm() misc <- list( criterion = criterion, intercept.only = intercept.only, predictors.names = predictors.names, M = M, n = n, nonparametric = nonparametric, p = ncol(x), ynames = ynames) crit.list <- list() crit.list$deviance <- temp9$deviance if (w[1] != 1 || any(w != w[1])) fit$prior.weights <- w if (length(family@last)) eval(family@last) structure(c(fit, temp9, list( contrasts = attr(x, "contrasts"), control = control, crit.list = crit.list, extra = extra, family = family, iter = iter, misc = misc, post = post, x = x, y = y)), vclass = family@vfamily) } cao.control <- function(Rank = 1, all.knots = FALSE, criterion = "deviance", Cinit = NULL, Crow1positive = TRUE, epsilon = 1.0e-05, Etamat.colmax = 10, GradientFunction = FALSE, # For now 20041224 iKvector = 0.1, iShape = 0.1, noRRR = ~ 1, Norrr = NA, SmallNo = 5.0e-13, Use.Init.Poisson.QO = TRUE, Bestof = if (length(Cinit)) 1 else 10, maxitl = 10, # was 40 prior to 20100420 imethod = 1, bf.epsilon = 1.0e-7, bf.maxit = 10, # was 40 prior to 20100420 Maxit.optim = 250, optim.maxit = 20, sd.sitescores = 1.0, sd.Cinit = 0.02, suppress.warnings = TRUE, trace = TRUE, df1.nl = 2.5, # About 1.5--2.5 gives the flexibility of a quadratic df2.nl = 2.5, # About 1.5--2.5 gives the flexibility of a quadratic spar1 = 0, # 0 means df1.nl is used spar2 = 0, # 0 means df2.nl is used ...) { if (length(Norrr) != 1 || !is.na(Norrr)) { warning("argument 'Norrr' has been replaced by 'noRRR'. 
", "Assigning the latter but using 'Norrr' will become an error in ", "the next VGAM version soon.") noRRR <- Norrr } if (!is.Numeric(iShape, positive = TRUE)) stop("bad input for argument 'iShape'") if (!is.Numeric(iKvector, positive = TRUE)) stop("bad input for argument 'iKvector'") if (!is.Numeric(imethod, positive = TRUE, length.arg = 1, integer.valued = TRUE)) stop("bad input for argument 'imethod'") if (criterion != "deviance") stop("'criterion' must be 'deviance'") if (GradientFunction) stop("20050114; GradientFunction = TRUE not working yet") se.fit <- as.logical(FALSE) if (se.fit) stop("se.fit = FALSE handled only") if (length(Cinit) && !is.Numeric(Cinit)) stop("Bad input for argument 'Cinit'") if (!is.Numeric(Bestof, length.arg = 1, integer.valued = TRUE, positive = TRUE)) stop("Bad input for argument 'Bestof'") if (!is.Numeric(maxitl, length.arg = 1, integer.valued = TRUE, positive = TRUE)) stop("Bad input for argument 'maxitl'") if (!is.Numeric(bf.epsilon, length.arg = 1, positive = TRUE)) stop("Bad input for argument 'bf.epsilon'") if (!is.Numeric(bf.maxit, integer.valued = TRUE, positive = TRUE, length.arg = 1)) stop("Bad input for argument 'bf.maxit'") if (!is.Numeric(Etamat.colmax, positive = TRUE, length.arg = 1) || Etamat.colmax < Rank) stop("bad input for argument 'Etamat.colmax'") if (!is.Numeric(Maxit.optim, integer.valued = TRUE, positive = TRUE, length.arg = 1)) stop("Bad input for argument 'Maxit.optim'") if (!is.Numeric(optim.maxit, length.arg = 1, integer.valued = TRUE, positive = TRUE)) stop("Bad input for argument 'optim.maxit'") if (!is.Numeric(sd.sitescores, length.arg = 1, positive = TRUE)) stop("Bad input for argument 'sd.sitescores'") if (!is.Numeric(sd.Cinit, length.arg = 1, positive = TRUE)) stop("Bad input for argument 'sd.Cinit'") if (!is.Numeric(df1.nl) || any(df1.nl < 0)) stop("Bad input for argument 'df1.nl'") if (any(df1.nl >= 0 & df1.nl < 0.05)) { warning("'df1.nl' values between 0 and 0.05 converted to 0.05") df1.nl[df1.nl < 0.05] <- 0.05 } if (any(df1.nl > 3.5)) { warning("'df1.nl' values > 3.5 are excessive") } if (!is.Numeric(df2.nl) || any(df2.nl < 0)) stop("Bad input for argument 'df2.nl'") if (any(df2.nl >= 0 & df2.nl < 0.05)) { warning("'df2.nl' values between 0 and 0.05 converted to 0.05") df2.nl[df2.nl < 0.05] <- 0.05 } if (!is.Numeric(spar1) || any(spar1 < 0)) stop("Bad input for argument 'spar1'") if (!is.Numeric(spar2) || any(spar2 < 0)) stop("Bad input for argument 'spar2'") if (!is.Numeric(epsilon, positive = TRUE, length.arg = 1)) stop("Bad input for argument 'epsilon'") if (!is.Numeric(SmallNo, positive = TRUE, length.arg = 1)) stop("Bad input for argument 'SmallNo'") if ((SmallNo < .Machine$double.eps) || (SmallNo > .0001)) stop("'SmallNo' is out of range") ans <- list( Corner = FALSE, # A constant, not a control parameter; unneeded? eq.tolerances = FALSE, # A constant, not a control parameter; needed I.tolerances = FALSE, # A constant, not a control parameter; unneeded? Quadratic = FALSE, # A constant, not a control parameter; unneeded? 
all.knots = as.logical(all.knots)[1], Bestof = Bestof, Cinit = Cinit, ConstrainedO = TRUE, # A constant, not a control parameter criterion = criterion, Crow1positive = as.logical(rep_len(Crow1positive, Rank)), epsilon = epsilon, Etamat.colmax = Etamat.colmax, FastAlgorithm = TRUE, # A constant, not a control parameter GradientFunction = as.logical(GradientFunction), maxitl = maxitl, bf.epsilon = bf.epsilon, bf.maxit = bf.maxit, imethod = imethod, Maxit.optim = Maxit.optim, optim.maxit = optim.maxit, noRRR = noRRR, Rank = Rank, sd.sitescores = sd.sitescores, sd.Cinit = sd.Cinit, se.fit = se.fit, # If TRUE, then would need storage for S QR fits SmallNo = SmallNo, suppress.warnings = as.logical(suppress.warnings), trace = as.integer(trace), Use.Init.Poisson.QO = Use.Init.Poisson.QO, iKvector = as.numeric(iKvector), iShape = as.numeric(iShape), DF1 = 2.5, # Used as Default value if df1.nl has no default DF2 = 2.5, # Used as Default value if df2.nl has no default SPAR1 = 0, # Used as Default value if spar1 has no default SPAR2 = 0, # Used as Default value if spar2 has no default df1.nl = df1.nl, df2.nl = df2.nl, spar1 = spar1, spar2 = spar2) ans } create.cms <- function(Rank = 1, M, MSratio = 1, which, p1 = 1) { if (!is.Numeric(p1, length.arg = 1, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'p1'") Hlist. <- vector("list", p1 + Rank) for (rr in 1:(p1+Rank)) Hlist.[[rr]] <- diag(M) names(Hlist.) <- if (p1 == 1) c("(Intercept)", names(which)) else stop() if (MSratio == 2) { for (r in 1:Rank) Hlist.[[p1+r]] <- eijfun(1, M) } Hlist. } callcaoc <- function(cmatrix, etamat, xmat, ymat, wvec, modelno, Control, Nice21 = TRUE, p1star. = if (modelno %in% c(3, 5)) 2 else 1, p2star. = Rank, n, M, othint, othdbl, alldump = FALSE) { flush.console() control <- Control Rank <- control$Rank p1 <- length(control$colx1.index) p2 <- length(control$colx2.index) yn <- dimnames(ymat)[[2]] if (length(yn) != ncol(ymat)) stop("the column names of argument 'ymat' must be given") queue <- qbig <- Rank # 20051019; number of smooths per species NOS <- if (modelno %in% c(3, 5)) M/2 else M df1.nl <- procVec(control$df1.nl, yn = yn , Default = control$DF1) spar1 <- procVec(control$spar1, yn = yn , Default = control$SPAR1) df2.nl <- procVec(control$df2.nl, yn = yn , Default = control$DF2) spar2 <- procVec(control$spar2, yn = yn , Default = control$SPAR2) if (any(c(length(spar1), length(spar2), length(df1.nl), length(df2.nl)) != NOS)) stop("wrong length in at least one of arguments ", "'df1.nl', 'df2.nl', 'spar1', 'spar2'") cmatrix <- matrix(cmatrix, p2, Rank) # crow1C() needs a matrix as input cmatrix <- crow1C(cmatrix, crow1positive = control$Crow1positive) numat <- xmat[, control$colx2.index, drop = FALSE] %*% cmatrix evnu <- eigen(var(numat), symmetric = TRUE) temp7 <- if (Rank > 1) evnu$vector %*% diag(evnu$value^(-0.5)) else evnu$vector %*% evnu$value^(-0.5) cmatrix <- cmatrix %*% temp7 cmatrix <- crow1C(cmatrix, crow1positive = control$Crow1positive) numat <- xmat[, control$colx2.index, drop = FALSE] %*% cmatrix dim(numat) <- c(n, Rank) mynames5 <- if (Rank == 1) "latvar" else paste("latvar", 1:Rank, sep = "") nu1mat <- cbind("(Intercept)" = 1, latvar = numat) dimnames(nu1mat) <- list(dimnames(xmat)[[1]], c("(Intercept)", mynames5)) temp.smooth.frame <- vector("list", p1+Rank) # Temporary makeshift frame names(temp.smooth.frame) <- c(names(control$colx1.index), mynames5) for (uu in 1:(p1+Rank)) { temp.smooth.frame[[uu]] <- nu1mat[, uu] } temp.smooth.frame <- data.frame(temp.smooth.frame) for (uu 
in 1:Rank) { attr(temp.smooth.frame[,uu+p1], "spar") <- 0 # this value unused attr(temp.smooth.frame[,uu+p1], "df") <- 4 # this value unused } pstar. <- p1star. + p2star. # = Mdot + Rank nstar <- if (Nice21) ifelse(modelno %in% c(3, 5), n * 2, n) else n * M lenbeta <- pstar. * ifelse(Nice21, NOS, 1) # Holds the linear coeffs inited <- if (exists(".VGAM.CAO.etamat", envir = VGAMenv)) 1 else 0 usethiseta <- if (inited == 1) getfromVGAMenv("etamat", prefix = ".VGAM.CAO.") else t(etamat) if (anyNA(usethiseta)) { usethiseta <- t(etamat) # So that dim(usethiseta) == c(M,n) rmfromVGAMenv("etamat", prefix = ".VGAM.CAO.") } usethisbeta <- if (inited == 2) getfromVGAMenv("beta", prefix = ".VGAM.CAO.") else double(lenbeta) othint[5] <- inited # Refine initialization within C pstar <- NOS * pstar. bnumat <- if (Nice21) matrix(0, nstar, pstar.) else stop("code not written here") M. <- MSratio <- M / NOS # 1 or 2 usually which <- p1 + (1:Rank) # These columns are smoothed nwhich <- names(which) <- mynames5 origHlist <- Hlist. <- create.cms(Rank = Rank, M = M., MSratio = MSratio, which = which, p1 = p1) # For 1 species only ncolHlist. <- unlist(lapply(Hlist. , ncol)) smooth.frame <- s.vam(x = nu1mat, zedd = NULL, wz = NULL, smomat = NULL, which = which, smooth.frame = temp.smooth.frame, bf.maxit = control$bf.maxit, bf.epsilon = control$bf.epsilon, trace = FALSE, se.fit = control$se.fit, X.vlm.save = bnumat, Hlist = Hlist. , ncolHlist = ncolHlist. , M = M. , qbig = NULL, Umat = NULL, # NULL ==> unneeded all.knots = control$all.knots, nk = NULL, sf.only = TRUE) ldk <- 3 * max(ncolHlist.[nwhich]) + 1 # 20020711 dimw. <- M. # Smoothing one spp. at a time dim1U. <- M. wz. <- matrix(0, n, dimw. ) if (names(Hlist.)[1] != "(Intercept)") stop("something wrong here") Hlist.[[1]] <- NULL trivc <- rep_len(2 - M. , queue) ncbvec <- ncolHlist.[nwhich] ncolb <- max(ncbvec) qbig. <- NOS * qbig # == NOS * Rank; holds all the smooths if (!all(as.vector(ncbvec) == rep_len(1, queue))) stop("'ncbvec' not right---should be a queue-vector of ones") pbig <- pstar. # contr.sp <- list(low = -1.5, ## low = 0. was default till R 1.3.x high = 1.5, tol = 1e-4, ## tol = 0.001 was default till R 1.3.x eps = 2e-8, ## eps = 0.00244 was default till R 1.3.x maxit = 500 ) npetc <- c(n = nrow(nu1mat), p. = ncol(nu1mat), q = length(which), se.fit = control$se.fit, 0, control$bf.maxit, qrank = 0, M = M. , nbig = nstar, pbig = pbig, qbig = qbig, dim2wz = dimw. , dim1U = dim1U. , ierror = 0, ldk = ldk, contr.sp$maxit, iinfo = 0) if (Rank == 2) { smopar <- (c(spar1, spar2))[interleave.VGAM(4 * NOS, M1 = 2)] dofvec <- (1.0 + c(df1.nl, df2.nl))[interleave.VGAM(4 * NOS, M1 = 2)] lamvec <- 0 * dofvec stop("20100414; havent got Rank = 2 going yet") } else { smopar <- c(spar1, spar2) dofvec <- c(df1.nl, df2.nl) + 1.0 lamvec <- 0 * dofvec } ans1 <- .C("vcao6", numat = as.double(numat), ymat = as.double(ymat), wvec = as.double(wvec), etamat = as.double(usethiseta), fv = double(NOS*n), zedd = double(n*M), wz = double(n*M), U = double(M*n), # bnumat = as.double(bnumat), qr = double(nstar*pstar.), qraux = double(pstar.), qpivot = integer(pstar.), n = as.integer(n), M = as.integer(M), NOS = as.integer(NOS), nstar = as.integer(nstar), dim1U = as.integer( M ), # for U, not U. errcode = integer(1), othint = as.integer(othint), deviance = double(1 + NOS), # NOS more elts added 20100413 beta = as.double(usethisbeta), othdbl = as.double(othdbl), npetc = as.integer(npetc), M. = as.integer( M. 
), dofvec = as.double(dofvec), lamvec = as.double(lamvec), smopar = as.double(smopar), match = as.integer(smooth.frame$matcho), as.integer(smooth.frame$nef), which = as.integer(which), smomat = as.double(matrix(0, n, qbig. )), nu1mat = as.double(nu1mat), Hlist = as.double(unlist( Hlist. )), as.integer(ncbvec), smap = as.integer(1:(Rank+1)), # trivc = as.integer(trivc), levmat = double(NOS * sum(smooth.frame$neffec * ncbvec)), bcoefficients = double(NOS * sum(smooth.frame$nknots*ncbvec)), xknots = as.double(unlist(smooth.frame$knots)), bindex = as.integer(smooth.frame$bindex), lindex = as.integer(smooth.frame$lindex), nknots = as.integer(smooth.frame$nknots), kindex = as.integer(smooth.frame$kindex)) flush.console() if (ans1$errcode == 0) { assign2VGAMenv(c("etamat", "beta"), ans1, prefix = ".VGAM.CAO.") assign(".VGAM.CAO.cmatrix", matrix(cmatrix, p2, Rank), envir = VGAMenv) } else { if (!control$suppress.warnings) { cat("warning in callcaoc: error code = ", ans1$errcode, "\n") cat("warning in callcaoc: npetc[14] = ", ans1$npetc[14], "\n") flush.console() } rmfromVGAMenv(c("etamat", "beta"), prefix = ".VGAM.CAO.") } returnans <- if (alldump) { bindex <- ans1$bindex ncolHlist <- ncbvec Bspline2 <- vector("list", NOS) names(Bspline2) <- dimnames(ymat)[[2]] Bspline <- vector("list", length(nwhich)) names(Bspline) <- nwhich ind9 <- 0 # moving index for (sppno in 1:NOS) { for (ii in seq_along(nwhich)) { ind7 <- (smooth.frame$bindex[ii]):(smooth.frame$bindex[ii+1]-1) ans <- ans1$bcoeff[ind9+ind7] ans <- matrix(ans, ncol = ncolHlist[nwhich[ii]]) Bspline[[ii]] <- new(Class = "vsmooth.spline.fit", "Bcoefficients" = ans, "xmax" = smooth.frame$xmax[ii], "xmin" = smooth.frame$xmin[ii], "knots" = as.vector(smooth.frame$knots[[ii]])) } ind9 <- ind9 + smooth.frame$bindex[length(nwhich)+1] - 1 Bspline2[[sppno]] <- Bspline } qrank <- npetc[7] # Assume all species have the same qrank value dim(ans1$etamat) <- c(M, n) # was c(n, M) prior to 20060822 df1.nl <- ans1$dofvec[1:NOS] - 1.0 lambda1 <- ans1$lamvec[1:NOS] spar1 <- ans1$smopar[1:NOS] if (Rank == 2) { stop("20100414; this is not working yet") df2.nl <- ans1$dofvec[NOS + (1:NOS)] - 1.0 lambda2 <- ans1$lamvec[NOS + (1:NOS)] spar2 <- ans1$smopar[NOS + (1:NOS)] } list(deviance = ans1$deviance[1], alldeviance = ans1$deviance[-1], bcoefficients = ans1$bcoefficients, bindex = ans1$bindex, Bspline = Bspline2, Cmat = matrix(cmatrix, p2, Rank, dimnames = list( names(control$colx2.index), mynames5)), coefficients = ans1$beta, df1.nl = df1.nl, df2.nl = if (Rank == 2) df2.nl else NULL, df.residual = n*M - qrank - sum(ans1$df - 1), fitted = ans1$fv, # NOS x n kindex = ans1$kindex, lambda1 = lambda1, lambda2 = if (Rank == 2) lambda2 else NULL, predictors = matrix(ans1$etamat, n, M, byrow = TRUE), wresiduals = ans1$zedd - t(ans1$etamat), # n x M spar1 = spar1, spar2 = if (Rank == 2) spar2 else NULL) } else { ans1$deviance[1] } flush.console() returnans } calldcaoc <- function(cmatrix, etamat, xmat, ymat, wvec, modelno, Control, Nice21 = TRUE, p1star. = if (modelno %in% c(3, 5)) 2 else 1, p2star. 
= Rank, n, M, othint, othdbl, alldump = FALSE) { if (alldump) stop("really used?") flush.console() U <- NULL if (!Nice21) stop("'Nice21' must be TRUE") control <- Control Rank <- control$Rank p2 <- length(control$colx2.index) yn <- dimnames(ymat)[[2]] if (!length( yn )) yn <- paste("Y", 1:ncol(ymat), sep = "") cmatrix <- scale(cmatrix) xmat2 <- xmat[, control$colx2.index, drop = FALSE] #ccc numat <- xmat2 %*% matrix(cmatrix, p2, Rank) dim(numat) <- c(nrow(xmat), Rank) temp.smooth.frame <- vector("list", 1+Rank) # Temporary makeshift frame mynames5 <- if (Rank == 1) "latvar" else paste("latvar", 1:Rank, sep = "") names(temp.smooth.frame) <- c("(Intercept)", mynames5) temp.smooth.frame[[1]] <- rep_len(1, n) for (uu in 1:Rank) { temp.smooth.frame[[uu+1]] <- numat[, uu] } temp.smooth.frame <- data.frame(temp.smooth.frame) for (uu in 1:Rank) { attr(temp.smooth.frame[,uu+1], "spar") <- 0 # any old value attr(temp.smooth.frame[,uu+1], "df") <- 4 # any old value } pstar. <- p1star. + p2star. nstar <- if (Nice21) ifelse(modelno %in% c(3, 5), n * 2, n) else n * M NOS <- ifelse(modelno %in% c(3, 5), M / 2, M) lenbeta <- pstar. * ifelse(Nice21, NOS, 1) if (TRUE) { inited <- if (exists(".VGAM.CAO.etamat", envir = VGAMenv)) 1 else 0 usethiseta <- if (inited == 1) get(".VGAM.CAO.etamat", envir = VGAMenv) else t(etamat) } usethisbeta <- if (inited == 2) get(".VGAM.CAO.beta", envir = VGAMenv) else double(lenbeta) pstar <- NOS * pstar. bnumat <- if (Nice21) matrix(0, nstar, pstar) else stop("need 'Nice21'") M. <- MSratio <- M / NOS # 1 or 2 usually p1 <- 1 which <- p1 + (1:Rank) # The first 1 is the intercept term nwhich <- names(which) <- mynames5 origHlist <- Hlist. <- create.cms(Rank = Rank, M = M., MSratio = MSratio, which = which, p1 = p1) # For 1 species ncolHlist. <- unlist(lapply(Hlist. , ncol)) nu1mat <- cbind("(Intercept)" = 1, latvar = numat) dimnames(nu1mat) <- list(dimnames(xmat)[[1]], c("(Intercept)", "latvar")) smooth.frame <- s.vam(x = nu1mat, zedd = NULL, wz = NULL, smomat = NULL, which = which, smooth.frame = temp.smooth.frame, bf.maxit = control$bf.maxit, bf.epsilon = control$bf.epsilon, trace = FALSE, se.fit = control$se.fit, X.vlm.save = bnumat, Hlist = Hlist., ncolHlist = ncolHlist. , M = M. , qbig = NULL, Umat = U, # NULL value ==> not needed all.knots = control$all.knots, nk = NULL, sf.only = TRUE) ldk <- 4 * max(ncolHlist.[nwhich]) # was M; # Prior to 20020711 ldk <- 3 * max(ncolHlist.[nwhich]) + 1 # 20020711 wz. <- matrix(0, n, M. ) # not sure dimw. <- if (is.matrix( wz. )) ncol( wz. ) else 1 dim1U. <- M. # 20100410 queue <- qbig <- Rank # 20051019; number of smooths per species Hlist.[[1]] <- NULL trivc <- rep_len(2 - M. , queue) ncbvec <- ncolHlist.[nwhich] ncolb <- max(ncbvec) qbig. <- NOS * qbig # == NOS * Rank pbig <- pstar. # Not sure if (FALSE) { df1.nl <- rep_len(control$df1.nl, NOS) # This is used df2.nl <- rep_len(control$df2.nl, NOS) # This is used spar1 <- rep_len(control$spar1, NOS) # This is used spar2 <- rep_len(control$spar2, NOS) # This is used } else { df1.nl <- procVec(control$df1.nl, yn = yn , Default = control$DF1) df2.nl <- df1.nl # 20100417; stopgap spar1 <- procVec(control$spar1, yn = yn , Default = control$SPAR1) spar2 <- spar1 # 20100417; stopgap dofvec <- c(df1.nl, df2.nl) lamvec <- 0 * dofvec smopar <- c(spar1, spar2) } contr.sp <- list(low = -1.5, ## low = 0. 
was default till R 1.3.x high = 1.5, tol = 1e-4, ## tol = 0.001 was default till R 1.3.x eps = 2e-8, ## eps = 0.00244 was default till R 1.3.x maxit = 500 ) warning("20100405; this is old:") npetc <- c(n = n, p = 1+Rank, length(which), se.fit = control$se.fit, 0, maxitl = control$maxitl, qrank = 0, M = M. , n.M = n* M. , pbig = sum( ncolHlist.), qbig = qbig, dimw = dimw. , dim1U = dim1U. , ierror = 0, ldk = ldk) warning("20100405; this is new:") npetc <- c(n = nrow(nu1mat), p. = ncol(nu1mat), q = length(which), se.fit = control$se.fit, 0, control$bf.maxit, qrank = 0, M = M. , nbig = nstar, pbig = pbig, qbig = qbig, dim2wz = dimw. , dim1U = dim1U. , ierror = 0, ldk = ldk, contr.sp$maxit, iinfo = 0) flush.console() if (!Nice21) stop("need 'Nice21'") ans1 <- .C("vdcao6", numat = as.double(numat), as.double(ymat), as.double(wvec), etamat = as.double(usethiseta), fv = double(NOS*n), zedd = double(n*M), wz = double(n*M), U = double(M*n), # bnumat = as.double(bnumat), qr = double(nstar*pstar.), qraux = double(pstar.), qpivot = integer(pstar.), as.integer(n), as.integer(M), NOS = as.integer(NOS), as.integer(nstar), dim1U = as.integer(M), errcode = integer(1), othint = as.integer(othint), deviance = double(1 + NOS), beta = as.double(usethisbeta), othdbl = as.double(othdbl), as.double(xmat2), cmat = as.double(cmatrix), p2 = as.integer(p2), deriv = double(p2 * Rank), betasave = double(lenbeta), npetc = as.integer(npetc), M. = as.integer( M. ), dofvec = as.double(dofvec + 1.0), lamvec = as.double(0 * dofvec), smopar = as.double(smopar), match = as.integer(smooth.frame$matcho), as.integer(smooth.frame$nef), as.integer(which), smomat = as.double(matrix(0, n, qbig. )), nu1mat = as.double(nu1mat), as.double(unlist( Hlist. )), as.integer(ncbvec), smap = as.integer(1:(Rank+1)), trivc = as.integer(trivc), levmat = double(NOS * sum(smooth.frame$neffec * ncbvec)), bcoefficients = double(NOS * sum(smooth.frame$nknots * ncbvec)), xknots = as.double(unlist(smooth.frame$knots)), bindex = as.integer(smooth.frame$bindex), lindex = as.integer(smooth.frame$lindex), nknots = as.integer(smooth.frame$nknots), kindex = as.integer(smooth.frame$kindex)) flush.console() assign(".VGAM.CAO.etamat", ans1$etamat, envir = VGAMenv) assign(".VGAM.CAO.z", ans1$zedd, envir = VGAMenv) assign(".VGAM.CAO.U", ans1$U, envir = VGAMenv) # U if (ans1$errcode == 0) { } else { cat("warning in calldcaoc: error code = ", ans1$errcode, "\n") flush.console() } returnans <- if (alldump) { bindex <- ans1$bindex ncolHlist <- ncbvec Bspline2 <- vector("list", NOS) names(Bspline2) <- dimnames(ymat)[[2]] Bspline <- vector("list", length(nwhich)) names(Bspline) <- nwhich ind9 <- 0 # moving index for (jay in 1:NOS) { for (ii in seq_along(nwhich)) { ind9 <- ind9[length(ind9)] + (bindex[ii]):(bindex[ii+1]-1) ans <- ans1$bcoeff[ind9] ans <- matrix(ans, ncol = ncolHlist[nwhich[ii]]) Bspline[[ii]] <- new(Class = "vsmooth.spline.fit", "Bcoefficients" = ans, "xmax" = smooth.frame$xmax[ii], "xmin" = smooth.frame$xmin[ii], "knots" = as.vector(smooth.frame$knots[[ii]])) } Bspline2[[jay]] <- Bspline } qrank <- npetc[7] # Assume all species have the same qrank value dim(ans1$etamat) <- c(M,n) # bug: was c(n,M) prior to 20060822 list(deviance = ans1$deviance[1], alldeviance = ans1$deviance[-1], bcoefficients = ans1$bcoefficients, bindex = ans1$bindex, Bspline = Bspline2, Cmat = matrix(cmatrix, p2, Rank, dimnames = list( names(control$colx2.index), mynames5)), coefficients = ans1$beta, df1.nl = ans1$dofvec[1:NOS] - 1, df2.nl = if (Rank == 2) ans1$dofvec[2 * (1:NOS) - 
1] - 1 else NULL, lambda1 = ans1$lambda[1:NOS], lambda2 = if (Rank == 2) ans1$lambda[2 * (1:NOS) - 1] else NULL, df.residual = n * M - qrank - sum(ans1$df - 1), fitted = ans1$fv, kindex = ans1$kindex, predictors=matrix(ans1$etamat, n, M, byrow = TRUE), wresiduals = ans1$zedd - t(ans1$etamat), # n x M spar1 = ans1$smopar[1:NOS], spar2 = if (Rank == 2) ans1$smopar[2 * (1:NOS) - 1] else NULL) } else { ans1$deriv } flush.console() returnans } setClass(Class = "Coef.rrvgam", representation( "Bspline" = "list", "C" = "matrix", "Constrained" = "logical", "df1.nl" = "numeric", "df2.nl" = "numeric", "dispersion" = "numeric", "eta2" = "matrix", "latvar" = "matrix", "latvar.order" = "matrix", "M" = "numeric", "Maximum" = "numeric", "NOS" = "numeric", "Optimum" = "matrix", "Optimum.order"= "matrix", "Rank" = "numeric", "spar1" = "numeric", "spar2" = "numeric")) Coef.rrvgam <- function(object, epsOptimum = 0.00001, # Determines how accurately Optimum is estimated gridlen = 40, # Number of points on the grid (one level at a time) maxgriditer = 10, # Maximum number of iters allowed for grid search smallno = 0.05, ...) { if (!is.Numeric(epsOptimum, positive = TRUE, length.arg = 1)) stop("bad input for argument 'epsOptimum'") if (!is.Numeric(gridlen, positive = TRUE, integer.valued = TRUE) || gridlen < 5) stop("bad input for argument 'gridlen'") if (!is.Numeric(maxgriditer, positive = TRUE, length.arg = 1, integer.valued = TRUE) || maxgriditer < 3) stop("bad input for argument 'maxgriditer'") if (!is.logical(ConstrainedO <- object@control$ConstrainedO)) stop("cannot determine whether the model is constrained or not") if (!is.Numeric(smallno, positive = TRUE, length.arg = 1) || smallno > 0.5 || smallno < 0.0001) stop("bad input for argument 'smallno'") ocontrol <- object@control if ((Rank <- ocontrol$Rank) > 2) stop("'Rank' must be 1 or 2") gridlen <- rep_len(gridlen, Rank) M <- if (any(slotNames(object) == "predictors") && is.matrix(object@predictors)) ncol(object@predictors) else object@misc$M NOS <- if (length(object@y)) ncol(object@y) else M MSratio <- M / NOS # 1 or 2; First value is g(mean)=quadratic form in latvar nice21 <- (length(ocontrol$colx1.index) == 1) && (names(ocontrol$colx1.index) == "(Intercept)") if (!nice21) stop("Can only handle 'noRRR = ~ 1'") p1 <- length(ocontrol$colx1.index) p2 <- length(ocontrol$colx2.index) modelno <- object@control$modelno # 1,2,3,... 
or 0 ynames <- object@misc$ynames if (!length(ynames)) ynames <- object@misc$predictors.names if (!length(ynames)) ynames <- object@misc$ynames if (!length(ynames)) ynames <- paste("Y", 1:NOS, sep = "") lp.names <- object@misc$predictors.names if (!length(lp.names)) lp.names <- NULL latvar.names <- if (Rank == 1) "latvar" else paste("latvar", 1:Rank, sep = "") Cmat <- object@extra$Cmat # p2 x Rank (provided maxitl > 1) if (ConstrainedO) dimnames(Cmat) <- list(names(ocontrol$colx2.index), latvar.names) latvar.mat <- if (ConstrainedO) { object@x[, ocontrol$colx2.index, drop = FALSE] %*% Cmat } else { object@latvar } optimum <- matrix(NA_real_, Rank, NOS, dimnames = list(latvar.names, ynames)) extents <- apply(latvar.mat, 2, range) # 2 by R maximum <- rep_len(NA_real_, NOS) which.species <- 1:NOS # Do it for all species if (Rank == 1) { gridd <- cbind(seq(extents[1, 1], extents[2, 1], len = gridlen)) eta2matrix <- matrix(0, NOS, 1) # Added 20160716 } else { gridd <- expand.grid(seq(extents[1, 1], extents[2, 1], len = gridlen[1]), seq(extents[1, 2], extents[2, 2], len = gridlen[2])) eta2matrix <- matrix(0, NOS, 1) } gridd.orig <- gridd for (sppno in seq_along(which.species)) { gridd <- gridd.orig gridres1 <- gridd[2, 1] - gridd[1, 1] gridres2 <- if (Rank == 2) gridd[2, 2] - gridd[1, 2] else 0 griditer <- 1 thisSpecies <- which.species[sppno] indexSpecies <- if (is.character(which.species)) match(which.species[sppno], ynames) else which.species[sppno] if (is.na(indexSpecies)) stop("mismatch found in 'which.species'") while (griditer == 1 || ((griditer <= maxgriditer) && ((gridres1 > epsOptimum) || (gridres2 > epsOptimum)))) { temp <- predictrrvgam(object, grid = gridd, sppno = thisSpecies, Rank = Rank, deriv = 0, MSratio = MSratio) yvals <- temp$yvals # gridlen-vector xvals <- temp$xvals # gridlen x Rank; gridd if (length(temp$eta2)) eta2matrix[sppno, 1] <- temp$eta2 nnn <- length(yvals) index <- (1:nnn)[yvals == max(yvals)] if (length(index) != 1) warning("could not find a single maximum") if (Rank == 2) { initvalue <- rep_len(xvals[index,], Rank) # for optim() if (abs(initvalue[1] - extents[1, 1]) < smallno) initvalue[1] <- extents[1, 1] + smallno if (abs(initvalue[1] - extents[2, 1]) < smallno) initvalue[1] <- extents[2, 1] - smallno if (abs(initvalue[2] - extents[1, 2]) < smallno) initvalue[2] <- extents[1, 2] + smallno if (abs(initvalue[2] - extents[2, 2]) < smallno) initvalue[2] <- extents[2, 2] - smallno break } if (index == 1 || index == nnn) { maximum[sppno] <- optimum[1, sppno] <- NA gridres1 <- epsOptimum + 1 # equivalent to a break break # just in case } else { maximum[sppno] <- yvals[index] # On the eta scale optimum[1, sppno] <- xvals[index, 1] gridd[, 1] <- seq( max(extents[1, 1], optimum[1, sppno] - gridres1), min(extents[2, 1], optimum[1, sppno] + gridres1), len = gridlen) gridres1 <- gridd[2, 1] - gridd[1, 1] griditer <- griditer + 1 } } # of while if (Rank == 2) { myfun <- function(x, object, sppno, Rank = 1, deriv = 0, MSratio = 1) { x <- matrix(x, 1, length(x)) temp <- predictrrvgam(object, grid = x, sppno = sppno, Rank = Rank, deriv = deriv, MSratio = MSratio) temp$yval } answer <- optim(initvalue, myfun, gr = NULL, method = "L-BFGS-B", lower = extents[1, ], upper = extents[2, ], control = list(fnscale = -1), # maximize! 
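                    ## optim() minimizes by default; fnscale = -1 flips
                    ## the sign so the fitted curve is maximized, and
                    ## method "L-BFGS-B" with lower/upper = extents keeps
                    ## the search inside the bounding box of the site
                    ## scores.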
object = object, sppno = sppno, Rank = Rank, deriv = 0, MSratio = MSratio) for (rindex in 1:Rank) if (abs(answer$par[rindex] - extents[1, rindex]) > smallno && abs(answer$par[rindex] - extents[2, rindex]) > smallno) { optimum[rindex,sppno] <- answer$par[rindex] maximum[sppno] <- answer$value } } # end of Rank = 2 } # end of sppno myetamat <- rbind(maximum) if (MSratio == 2) myetamat <- kronecker(myetamat, matrix(1:0, 1, 2)) maximum <- object@family@linkinv(eta = myetamat, extra = object@extra) maximum <- c(maximum) # Convert from matrix to vector names(maximum) <- ynames ans <- new(Class = "Coef.rrvgam", Bspline = object@Bspline, Constrained = ConstrainedO, df1.nl = object@extra$df1.nl, latvar = latvar.mat, latvar.order = latvar.mat, Maximum = maximum, M = M, NOS = NOS, Optimum = optimum, Optimum.order = optimum, Rank = Rank, spar1 = object@extra$spar1) if (ConstrainedO) { ans@C <- Cmat } else { Cmat <- NULL } if (Rank == 2) { dimnames(eta2matrix) <- list(object@misc$predictors.names[c(FALSE, TRUE)], " ") ans@eta2 <- eta2matrix ans@df2.nl <- object@extra$df2.nl ans@spar2 <- object@extra$spar2 } for (rindex in 1:Rank) { ans@Optimum.order[rindex, ] <- order(ans@Optimum[rindex, ]) ans@latvar.order[, rindex] <- order(ans@latvar[, rindex]) } if (length(object@misc$estimated.dispersion) && object@misc$estimated.dispersion) { p <- length(object@coefficients) n <- object@misc$n M <- object@misc$M NOS <- if (length(object@y)) ncol(object@y) else M pstar <- p + length(Cmat) # Adjustment adjusted.dispersion <- object@misc$dispersion * (n * M - p) / (n * M - pstar) ans@dispersion <- adjusted.dispersion } if (MSratio == 2) { lcoef <- object@coefficients temp <- lcoef[((1:NOS)-1) * (2+Rank)+2] names(temp) <- object@misc$predictors.names[2 * (1:NOS)] ans@dispersion <- temp } dimnames(ans@Optimum) <- list(latvar.names, ynames) ans } show.Coef.rrvgam <- function(object, digits = max(2, options()$digits-2), ...) { Rank <- object@Rank NOS <- object@NOS M <- object@M Maximum <- if (length(object@Maximum)) cbind(Maximum = object@Maximum) else NULL optmat <- cbind(t(object@Optimum)) dimnames(optmat) <- list(dimnames(optmat)[[1]], if (Rank > 1) paste("Optimum", dimnames(optmat)[[2]], sep = ".") else "Optimum") if ( object@Constrained ) { cat("\nC matrix (constrained/canonical coefficients)\n") print(object@C, digits = digits, ...) } cat("\nOptimums and maximums\n") print(cbind(Optimum = optmat, Maximum), digits = max(1, digits-1)) cat("\nNonlinear degrees of freedom\n") if (Rank == 1) { print(cbind(df1.nl = object@df1.nl), digits = max(2, digits-1), ...) } else { print(cbind(df1.nl = object@df1.nl, df2.nl = object@df2.nl), digits = max(2, digits-1), ...) } invisible(object) } setMethod("show", "Coef.rrvgam", function(object) show.Coef.rrvgam(object)) setMethod("coef", "rrvgam", function(object, ...) Coef.rrvgam(object, ...)) setMethod("coefficients", "rrvgam", function(object, ...) Coef.rrvgam(object, ...)) setMethod("Coef", "rrvgam", function(object, ...) 
Coef.rrvgam(object, ...)) lvplot.rrvgam <- function(object, add = FALSE, show.plot = TRUE, rugplot = TRUE, y = FALSE, type = c("fitted.values", "predictors"), xlab = paste("Latent Variable", if (Rank == 1) "" else " 1", sep = ""), ylab = if (Rank == 1) switch(type, predictors = "Predictors", fitted.values = "Fitted values") else "Latent Variable 2", pcex = par()$cex, pcol = par()$col, pch = par()$pch, llty = par()$lty, lcol = par()$col, llwd = par()$lwd, label.arg= FALSE, adj.arg=-0.5, sites= FALSE, spch = NULL, scol = par()$col, scex = par()$cex, sfont = par()$font, which.species = NULL, check.ok = TRUE, ...) { type <- match.arg(type, c("fitted.values", "predictors"))[1] if ((Rank <- object@control$Rank) > 2) stop("can only handle 'Rank' = 1 or 2 models") M <- if (any(slotNames(object) == "predictors") && is.matrix(object@predictors)) ncol(object@predictors) else object@misc$M NOS <- ncol(object@y) MSratio <- M / NOS # First value is g(mean) = quadratic form in latvar n <- object@misc$n colx2.index <- object@control$colx2.index cx1i <- object@control$colx1.index if (!length(which.species)) which.species <- 1:NOS if (check.ok) if (!(length(cx1i) == 1 && names(cx1i) == "(Intercept)")) stop("latent variable plots allowable only ", "for 'noRRR = ~ 1' models") Coeflist <- Coef(object) Cmat <- Coeflist@C latvarmat <- Coeflist@latvar # n x Rank if (!show.plot) return(latvarmat) r.curves <- slot(object, type) if (MSratio != 1 && type == "predictors") stop("can only plot the predictors if M == S") MorS <- ncol(r.curves) # Actually, here, the value is S always. if (!add) { if (Rank == 1) { matplot(latvarmat, if ( y && type == "fitted.values") object@y[, which.species, drop = FALSE] else r.curves[, which.species, drop = FALSE], type = "n", xlab = xlab, ylab = ylab, ...) } else { # Rank == 2 matplot(c(Coeflist@Optimum[1, which.species], latvarmat[, 1]), c(Coeflist@Optimum[2, which.species], latvarmat[, 2]), type = "n", xlab = xlab, ylab = ylab, ...) 
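      ## The type = "n" calls above only set up the plotting frame; for
      ## Rank 2 the limits cover both the species' optima and all the
      ## site scores, so the later points() and text() calls land inside
      ## the plot region.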
} } pch <- rep_len(pch, length(which.species)) pcol <- rep_len(pcol, length(which.species)) pcex <- rep_len(pcex, length(which.species)) llty <- rep_len(llty, length(which.species)) lcol <- rep_len(lcol, length(which.species)) llwd <- rep_len(llwd, length(which.species)) adj.arg <- rep_len(adj.arg, length(which.species)) sppnames <- if (type == "predictors") dimnames(r.curves)[[2]] else dimnames(object@y)[[2]] if (Rank == 1) { for (sppno in seq_along(which.species)) { thisSpecies <- which.species[sppno] indexSpecies <- if (is.character(which.species)) match(which.species[sppno], sppnames) else which.species[sppno] if (is.na(indexSpecies)) stop("mismatch found in 'which.species'") xx <- latvarmat yy <- r.curves[, indexSpecies] ooo <- sort.list(xx) xx <- xx[ooo] yy <- yy[ooo] lines(xx, yy, col = lcol[sppno], lwd = llwd[sppno], lty = llty[sppno]) if (y && type == "fitted.values") { ypts <- object@y if (NCOL(ypts) == ncol(r.curves)) points(xx, ypts[ooo, sppno], col = pcol[sppno], cex = pcex[sppno], pch = pch[sppno]) } } if (rugplot) rug(xx) } else { if (sites) { text(latvarmat[,1], latvarmat[,2], adj = 0.5, labels = if (is.null(spch)) dimnames(latvarmat)[[1]] else rep_len(spch, nrow(latvarmat)), col = scol, cex = scex, font=sfont) } for (sppno in seq_along(which.species)) { thisSpecies <- which.species[sppno] indexSpecies <- if (is.character(which.species)) match(which.species[sppno], sppnames) else which.species[sppno] if (is.na(indexSpecies)) stop("mismatch found in 'which.species'") points(Coeflist@Optimum[1, indexSpecies], Coeflist@Optimum[2, indexSpecies], col = pcol[sppno], cex = pcex[sppno], pch = pch[sppno]) } if (label.arg) { for (sppno in seq_along(which.species)) { thisSpecies <- which.species[sppno] indexSpecies <- if (is.character(which.species)) match(which.species[sppno], sppnames) else which.species[sppno] text(Coeflist@Optimum[1, indexSpecies], Coeflist@Optimum[2, indexSpecies], labels = (dimnames(Coeflist@Optimum)[[2]])[indexSpecies], adj = adj.arg[sppno], col = pcol[sppno], cex = pcex[sppno]) } } } invisible(latvarmat) } setMethod("lvplot", "rrvgam", function(object, ...) { invisible(lvplot.rrvgam(object, ...))}) predict.rrvgam <- function (object, newdata = NULL, type = c("link", "response", "terms"), deriv = 0, ...) { type <- match.arg(type, c("link", "response", "terms"))[1] if (type != "link" && deriv != 0) stop("Setting deriv = requires type='link'") na.act <- object@na.action object@na.action <- list() ocontrol <- object@control nice21 <- (length(ocontrol$colx1.index) == 1) && (names(ocontrol$colx1.index) == "(Intercept)") if (!nice21) stop("Can only handle 'noRRR = ~ 1'") if (!length(newdata) && type == "response" && length(object@fitted.values)) { if (length(na.act)) { return(napredict(na.act[[1]], object@fitted.values)) } else { return(object@fitted.values) } } if (!length(newdata)) { X <- model.matrixvlm(object, type = "lm", ...) 
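    ## With no newdata, the LM-type design matrix is rebuilt from the
    ## fitted object itself; the latent variables are then recovered as
    ## X[, colx2.index] %*% C, exactly as at fit time.  A commented
    ## user-level sketch for a fitted rrvgam object 'fit':
    ##   head(predict(fit))                     # Additive predictors
    ##   head(predict(fit, type = "response"))  # Fitted means
    ##   head(predict(fit, deriv = 1))          # d eta / d latvar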
offset <- object@offset tt <- terms(object) if (!length(object@x)) attr(X, "assign") <- attrassignlm(X, tt) } else { if (is.smart(object) && length(object@smart.prediction)) { setup.smart("read", smart.prediction = object@smart.prediction) } tt <- terms(object) # 20030811; object@terms$terms X <- model.matrix(delete.response(tt), newdata, contrasts = if (length(object@contrasts)) object@contrasts else NULL, xlev = object@xlevels) if (nice21 && nrow(X) != nrow(newdata)) { as.save <- attr(X, "assign") X <- X[rep_len(1, nrow(newdata)),, drop = FALSE] dimnames(X) <- list(dimnames(newdata)[[1]], "(Intercept)") attr(X, "assign") <- as.save # Restored } offset <- if (!is.null(off.num <- attr(tt, "offset"))) { eval(attr(tt, "variables")[[off.num+1]], newdata) } else if (!is.null(object@offset)) eval(object@call$offset, newdata) if (is.smart(object) && length(object@smart.prediction)) { wrapup.smart() } attr(X, "assign") <- attrassigndefault(X, tt) } cancoefs <- concoef(object) latvarmat <- X[, ocontrol$colx2.index, drop = FALSE] %*% cancoefs Rank <- ocontrol$Rank NOS <- ncol(object@y) sppnames <- dimnames(object@y)[[2]] modelno <- ocontrol$modelno # 1,2,3,5 or 0 M <- if (any(slotNames(object) == "predictors") && is.matrix(object@predictors)) ncol(object@predictors) else object@misc$M MSratio <- M / NOS # First value is g(mean) = quadratic form in latvar if (type == "terms") { terms.mat <- matrix(0, nrow(X), Rank*NOS) # 1st R cols for spp.1, etc. interceptvector <- rep_len(0, NOS) } else { etamat <- matrix(0, nrow(X), M) # Could contain derivatives } ind8 <- 1:Rank which.species <- 1:NOS # Do it all for all species for (sppno in seq_along(which.species)) { thisSpecies <- which.species[sppno] indexSpecies <- if (is.character(which.species)) match(which.species[sppno], sppnames) else which.species[sppno] if (is.na(indexSpecies)) stop("mismatch found in 'which.species'") temp345 <- predictrrvgam(object, grid = latvarmat, sppno = thisSpecies, Rank = Rank, deriv = deriv, MSratio = MSratio, type = ifelse(type == "response", "link", type)) if (MSratio == 2) { if (any(type == c("link", "response"))) { etamat[, 2*sppno-1] <- temp345$yvals etamat[, 2*sppno ] <- temp345$eta2 } else { terms.mat[, ind8] <- temp345 interceptvector[sppno] <- attr(temp345, "constant") } } else { if (any(type == c("link", "response"))) { etamat[, sppno] <- temp345$yvals } else { terms.mat[, ind8] <- temp345 interceptvector[sppno] <- attr(temp345, "constant") } } ind8 <- ind8 + Rank } if (length(offset) && any(offset != 0)) etamat <- etamat + offset if (type == "link") { dimnames(etamat) <- list(dimnames(X)[[1]], if (deriv == 0) object@misc$predictors.names else NULL) return(etamat) } else if (type == "response") { fv <- object@family@linkinv(etamat, extra = object@extra) dimnames(fv) <- list(dimnames(fv)[[1]], dimnames(object@fitted.values)[[2]]) return(fv) } else { attr(terms.mat, "constant") <- interceptvector terms.mat } } setMethod("predict", "rrvgam", function(object, ...) 
predict.rrvgam(object, ...)) predictrrvgam <- function(object, grid, sppno, Rank = 1, deriv = 0, MSratio = 1, type = "link") { if (type != "link" && type != "terms") stop("'type' must be \"link\" or \"terms\"") if (ncol(grid <- as.matrix(grid)) != Rank) stop("'grid' must have ", Rank, " columns") if (!is.Numeric(1 + deriv, length.arg = 1, positive = TRUE, integer.valued = TRUE)) stop("'deriv' must be a non-negative integer") if (type == "terms" && deriv != 0) stop("'deriv' must be 0 when type=\"terms\"") temp.b <- object@Bspline[[sppno]] if (type == "terms") { meanlatvar <- colMeans(grid) answer <- matrix(0, nrow(grid), Rank) } else { nlfunvalues <- 0 } for (rindex in 1:Rank) { temp <- temp.b[[rindex]] # temp is of class "vsmooth.spline.fit" nlpart <- predict(temp, grid[, rindex], deriv = deriv) yvals <- nlpart$y if (type == "terms") { answer[, rindex] <- yvals } else { nlfunvalues <- nlfunvalues + yvals } } lcoef <- object@coefficients # linear coefs; don't use coef() (== Coef) llcoef <- lcoef[(1+(sppno-1)*(MSratio+Rank)):(sppno*(MSratio+Rank))] if (type == "terms") { interceptvector <- llcoef[1] for (rindex in 1:Rank) { answer[, rindex] <- answer[, rindex] + (grid[, rindex] - meanlatvar[rindex]) * llcoef[MSratio+rindex] interceptvector <- interceptvector + meanlatvar[rindex] * llcoef[MSratio+rindex] } } else { linpar <- if (deriv == 0) { llcoef[1] + grid %*% llcoef[-(1:MSratio)] } else { if (deriv == 1) llcoef[MSratio + rindex] else 0 } nlfunvalues <- nlfunvalues + linpar # Now complete } if (type == "terms") { attr(answer, "constant") <- interceptvector answer } else { list(xvals = grid, yvals = c(nlfunvalues), eta2 = if (MSratio == 2) llcoef[MSratio] else NULL) } } plot.rrvgam <- function(x, xlab = if (Rank == 1) "Latent Variable" else paste("Latent Variable", 1:Rank), ylab = NULL, residuals.arg = FALSE, pcol = par()$col, pcex = par()$cex, pch = par()$pch, lcol = par()$col, lwd = par()$lwd, lty = par()$lty, add = FALSE, main = NULL, center.cf = Rank > 1, WhichRank = 1:Rank, which.species = NULL, # a numeric or character vector rugplot = TRUE, se.arg = FALSE, deriv = 0, scale = 0, ylim = NULL, overlay = FALSE, ...)
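# plot.rrvgam(): plots the fitted component functions of a rank-1 or rank-2 'cao' fit against each latent variable, one frame per species unless 'overlay' is specified.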
{ Rank <- x@control$Rank if (!is.logical(center.cf) || length(center.cf) != 1) stop("bad input for argument 'center.cf'") if (Rank > 1 && !center.cf) stop("center.cf = TRUE is needed for models with Rank > 1") NOS <- ncol(x@y) sppnames <- dimnames(x@y)[[2]] modelno <- x@control$modelno # 1,2,3, or 0 M <- if (any(slotNames(x) == "predictors") && is.matrix(x@predictors)) ncol(x@predictors) else x@misc$M if (all((MSratio <- M / NOS) != c(1,2))) stop("bad value for 'MSratio'") pcol <- rep_len(pcol, Rank*NOS) pcex <- rep_len(pcex, Rank*NOS) pch <- rep_len(pch, Rank*NOS) lcol <- rep_len(lcol, Rank*NOS) lwd <- rep_len(lwd, Rank*NOS) lty <- rep_len(lty, Rank*NOS) xlab <- rep_len(xlab, Rank) if (!length(which.species)) which.species <- 1:NOS if (length(ylab)) ylab <- rep_len(ylab, length(which.species)) # Too long if overlay if (length(main)) main <- rep_len(main, length(which.species)) # Too long if overlay latvarmat <- latvar(x) nice21 <- length(x@control$colx1.index) == 1 && names(x@control$colx1.index) == "(Intercept)" if (!nice21) stop("can only handle intercept-only models") counter <- 0 for (sppno in seq_along(which.species)) { thisSpecies <- which.species[sppno] indexSpecies <- if (is.character(which.species)) match(which.species[sppno], sppnames) else which.species[sppno] if (is.na(indexSpecies)) stop("mismatch found in 'which.species'") terms.mat <- predictrrvgam(object = x, grid = latvarmat, type = "terms", sppno = indexSpecies, Rank = Rank, deriv = deriv, MSratio = MSratio) for (rindex in WhichRank) { xvals <- latvarmat[, rindex] yvals <- terms.mat[, rindex] ooo <- sort.list(xvals) xvals <- xvals[ooo] yvals <- yvals[ooo] if (!center.cf) yvals <- yvals + attr(terms.mat, "constant") if (!add) if (sppno == 1 || !overlay) { ylim.use <- if (length(ylim)) ylim else ylim.scale(range(yvals), scale) matplot(xvals, yvals, type = "n", xlab = xlab[rindex], ylab = if (length(ylab)) ylab[sppno] else ifelse(overlay, "Fitted functions", "Fitted function"), main = if (length(main)) main[sppno] else ifelse(overlay, "", sppnames[thisSpecies]), ylim = ylim.use, ...) } if (residuals.arg) { stop("cannot handle residuals = TRUE yet") } counter <- counter + 1 lines(xvals, yvals, col = lcol[counter], lwd = lwd[counter], lty = lty[counter]) if (rugplot) rug(xvals) } } invisible(x) } setMethod("plot", "rrvgam", function(x, y, ...) { if (!missing(y)) stop("cannot process the 'y' argument") invisible(plot.rrvgam(x, ...))}) persp.rrvgam <- function(x, show.plot = TRUE, xlim = NULL, ylim = NULL, zlim = NULL, # zlim ignored if Rank == 1 gridlength = if (Rank == 1) 301 else c(51, 51), which.species = NULL, xlab = if (Rank == 1) "Latent Variable" else "Latent Variable 1", ylab = if (Rank == 1) "Expected Value" else "Latent Variable 2", zlab = "Expected value", labelSpecies = FALSE, # For Rank == 1 only stretch = 1.05, # quick and dirty, Rank == 1 only main = "", ticktype = "detailed", col = if (Rank == 1) par()$col else "white", lty = par()$lty, lwd = par()$lwd, rugplot = FALSE, ...) 
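# persp.rrvgam(): evaluates the fitted values over a grid of latent variable values; draws the fitted curves when Rank == 1, and a perspective plot of the pointwise maximum fitted value over the chosen species when Rank == 2.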
{ object <- x # don't like x as the primary argument coefobj <- Coef(object) if ((Rank <- coefobj@Rank) > 2) stop("object must be a rank-1 or rank-2 model") fvmat <- fitted(object) NOS <- ncol(fvmat) # Number of species M <- if (any(slotNames(object) == "predictors") && is.matrix(object@predictors)) ncol(object@predictors) else object@misc$M MSratio <- M / NOS # First value is g(mean) = quadratic form in latvar xlim <- if (length(xlim)) xlim else range(coefobj@latvar[, 1]) if (!length(ylim.orig <- ylim)) { ylim <- if (Rank == 1) c(0, max(fvmat)*stretch) else range(coefobj@latvar[,2]) } xlim <- rep_len(xlim, 2) ylim <- rep_len(ylim, 2) gridlength <- rep_len(gridlength, Rank) latvar1 <- seq(xlim[1], xlim[2], length = gridlength[1]) latvar2 <- if (Rank == 2) seq(ylim[1], ylim[2], len = gridlength[2]) else NULL latvarmat <- if (Rank == 2) expand.grid(latvar1, latvar2) else cbind(latvar1) sppNames <- dimnames(object@y)[[2]] if (!length(which.species)) { which.species <- sppNames[1:NOS] which.species.numer <- 1:NOS } else if (is.numeric(which.species)) { which.species.numer <- which.species which.species <- sppNames[which.species.numer] # Convert to character } else { which.species.numer <- match(which.species, sppNames) } LP <- matrix(NA_real_, nrow(latvarmat), NOS) for (sppno in 1:NOS) { temp <- predictrrvgam(object = object, grid = latvarmat, sppno = sppno, Rank = Rank, deriv = 0, MSratio = MSratio) LP[, sppno] <- temp$yvals } if (MSratio == 2) { LP <- kronecker(LP, matrix(1:0, 1, 2)) # n x M } fitvals <- object@family@linkinv(LP, extra = object@extra) # n by NOS dimnames(fitvals) <- list(NULL, dimnames(fvmat)[[2]]) if (Rank == 1) { if (show.plot) { if (!length(ylim.orig)) ylim <- c(0, max(fitvals[,which.species.numer]) * stretch) # A revision col <- rep_len(col, length(which.species.numer)) lty <- rep_len(lty, length(which.species.numer)) lwd <- rep_len(lwd, length(which.species.numer)) matplot(latvar1, fitvals, xlab = xlab, ylab = ylab, type = "n", main = main, xlim = xlim, ylim = ylim, ...) if (rugplot) rug(latvar(object)) for (sppno in seq_along(which.species.numer)) { ptr2 <- which.species.numer[sppno] # points to species column lines(latvar1, fitvals[,ptr2], col = col[sppno], lty = lty[sppno], lwd = lwd[sppno], ...) if (labelSpecies) { ptr1 <- (1:nrow(fitvals))[max(fitvals[, ptr2]) == fitvals[, ptr2]] ptr1 <- ptr1[1] text(latvar1[ptr1], fitvals[ptr1, ptr2] + (stretch-1) * diff(range(ylim)), labels = sppNames[ptr2], col = col[sppno], ...) } } } } else { max.fitted <- matrix(fitvals[,which.species[1]], length(latvar1), length(latvar2)) if (length(which.species) > 1) for (sppno in which.species[-1]) { max.fitted <- pmax(max.fitted, matrix(fitvals[, sppno], length(latvar1), length(latvar2))) } if (!length(zlim)) zlim <- range(max.fitted, na.rm = TRUE) perspdefault <- getS3method("persp", "default") if (show.plot) perspdefault(latvar1, latvar2, max.fitted, zlim = zlim, xlab = xlab, ylab = ylab, zlab = zlab, ticktype = ticktype, col = col, main = main, ...) } invisible(list(fitted = fitvals, latvar1grid = latvar1, latvar2grid = if (Rank == 2) latvar2 else NULL, max.fitted = if (Rank == 2) max.fitted else NULL)) } if (!isGeneric("persp")) setGeneric("persp", function(x, ...) standardGeneric("persp")) setMethod("persp", "rrvgam", function(x, ...) persp.rrvgam(x = x, ...)) latvar.rrvgam <- function(object, ...) { Coef(object, ...)@latvar } if (!isGeneric("lv")) setGeneric("lv", function(object, ...)
{ .Deprecated("latvar") standardGeneric("lv") }, package = "VGAM") setMethod("lv", "rrvgam", function(object, ...) latvar.rrvgam(object, ...)) if (!isGeneric("latvar")) setGeneric("latvar", function(object, ...) standardGeneric("latvar")) setMethod("latvar", "rrvgam", function(object, ...) latvar.rrvgam(object, ...)) setClass(Class = "summary.rrvgam", representation("misc" = "list", "call" = "call"), contains = "Coef.rrvgam") summary.rrvgam <- function(object, ...) { answer <- Coef(object, ...) answer <- as(answer, "summary.rrvgam") answer@misc <- object@misc answer@call <- object@call answer } setMethod("summary", "rrvgam", function(object, ...) summary.rrvgam(object, ...)) show.summary.rrvgam <- function(x, ...) { cat("\nCall:\n") dput(x@call) show.Coef.rrvgam(x, ...) cat("\nNumber of species: ", x@NOS, "\n") if (length(x@misc$dispersion) == 1) { cat("\nDispersion parameter(s): ", x@misc$dispersion, "\n") } else if (is.Numeric(x@dispersion)) { cat("\nDispersion parameter(s)\n") print( x@dispersion, ... ) } invisible(x) } setMethod("show", "summary.rrvgam", function(object) show.summary.rrvgam(object)) concoef.rrvgam <- function(object, ...) { Coef(object, ...)@C } concoef.Coef.rrvgam <- function(object, ...) { if (length(list(...))) warning("Too late! Ignoring the extra arguments") object@C } if (FALSE) { if (!isGeneric("ccoef")) setGeneric("ccoef", function(object, ...) { .Deprecated("concoef") standardGeneric("ccoef") }) setMethod("ccoef", "rrvgam", function(object, ...) concoef.rrvgam(object, ...)) setMethod("ccoef", "Coef.rrvgam", function(object, ...) concoef.Coef.rrvgam(object, ...)) } setMethod("concoef", "rrvgam", function(object, ...) concoef.rrvgam(object, ...)) setMethod("concoef", "Coef.rrvgam", function(object, ...) concoef.Coef.rrvgam(object, ...)) if (!isGeneric("calibrate")) setGeneric("calibrate", function(object, ...) standardGeneric("calibrate")) setMethod("calibrate", "rrvgam", function(object, ...) calibrate.qrrvglm(object, ...)) setMethod("calibrate", "qrrvglm", function(object, ...) calibrate.qrrvglm(object, ...)) setMethod("calibrate", "rrvglm", function(object, ...) calibrate.rrvglm(object, ...)) Tol.rrvgam <- function(object, ...) { stop("The tolerance for a 'rrvgam' object is undefined") } if (!isGeneric("Tol")) setGeneric("Tol", function(object, ...) standardGeneric("Tol")) setMethod("Tol", "rrvgam", function(object, ...) Tol.rrvgam(object, ...)) setMethod("show", "rrvgam", function(object) show.vgam(object)) VGAM/R/nobs.R0000644000176200001440000000700513135276757012313 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. nobs.vlm <- function(object, type = c("lm", "vlm"), ...) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("lm", "vlm"))[1] if (type == "lm") { object@misc$n } else { object@misc$nrow.X.vlm } } if (!isGeneric("nobs")) setGeneric("nobs", function(object, ...) standardGeneric("nobs"), package = "VGAM") setMethod("nobs", "vlm", function(object, ...) nobs.vlm(object, ...)) nvar.vlm <- function(object, type = c("vlm", "lm"), ...) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("vlm", "lm"))[1] if (type == "lm") { object@misc$p } else { object@misc$ncol.X.vlm } } nvar.vgam <- function(object, type = c("vgam", "zz"), ...) 
{ if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("vgam", "zz"))[1] stop("function nvar.vgam() has not been written yet") if (type == "vgam") { object@misc$p } else { object@misc$ncol.X.vlm } } nvar.rrvglm <- function(object, type = c("rrvglm", "zz"), ...) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("rrvglm", "zz"))[1] stop("function nvar.rrvglm() has not been written yet") if (type == "rrvglm") { object@misc$p } else { object@misc$ncol.X.vlm } } nvar.qrrvglm <- function(object, type = c("qrrvglm", "zz"), ...) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("qrrvglm", "zz"))[1] stop("function nvar.qrrvglm() has not been written yet") if (type == "qrrvglm") { object@misc$p } else { object@misc$ncol.X.vlm } } nvar.rrvgam <- function(object, type = c("cao", "zz"), ...) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("cao", "zz"))[1] stop("function nvar.rrvgam() has not been written yet") if (type == "cao") { object@misc$p } else { object@misc$ncol.X.vlm } } nvar.rcim <- function(object, type = c("rcim", "zz"), ...) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("rcim", "zz"))[1] stop("function nvar.rcim() has not been written yet") if (type == "rcim") { object@misc$p } else { object@misc$ncol.X.vlm } } if (!isGeneric("nvar")) setGeneric("nvar", function(object, ...) standardGeneric("nvar"), package = "VGAM") setMethod("nvar", "vlm", function(object, ...) nvar.vlm(object, ...)) setMethod("nvar", "vgam", function(object, ...) nvar.vgam(object, ...)) setMethod("nvar", "rrvglm", function(object, ...) nvar.rrvglm(object, ...)) setMethod("nvar", "qrrvglm", function(object, ...) nvar.qrrvglm(object, ...)) setMethod("nvar", "rrvgam", function(object, ...) nvar.rrvgam(object, ...)) setMethod("nvar", "rcim", function(object, ...) nvar.rcim(object, ...)) VGAM/R/Links.R0000644000176200001440000001271613135276757012427 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. dtheta.deta <- function(theta, link = "identitylink", earg = list(theta = theta, # Needed inverse = TRUE, # 20150711: big change!!!! deriv = 1, short = TRUE, tag = FALSE)) { function.name <- link function.name2 <- attr(earg, "function.name") if (length(function.name2) && function.name != function.name2) { warning("apparent conflict in name of link function") } earg[["theta"]] <- theta # New data if (length(earg$inverse)) earg[["inverse"]] <- TRUE else earg$inverse <- TRUE earg[["deriv"]] <- 1 # New do.call(what = function.name, args = earg) } d2theta.deta2 <- function(theta, link = "identitylink", earg = list(theta = theta, # Needed inverse = TRUE, # 20150711: big change!!!!
deriv = 2, short = TRUE, tag = FALSE)) { function.name <- link function.name2 <- attr(earg, "function.name") if (length(function.name2) && function.name != function.name2) warning("apparent conflict in name of link function in D2theta.deta2") earg[["theta"]] <- theta # New data if (length(earg$inverse)) earg[["inverse"]] <- TRUE else earg$inverse <- TRUE earg[["deriv"]] <- 2 # New do.call(what = function.name, args = earg) } d3theta.deta3 <- function(theta, link = "identitylink", earg = list(theta = theta, inverse = TRUE, deriv = 3, short = TRUE, tag = FALSE)) { function.name <- link earg[["theta"]] <- theta # New data if (length(earg$inverse)) earg[["inverse"]] <- TRUE else earg$inverse <- TRUE earg[["deriv"]] <- 3 # New do.call(what = function.name, args = earg) } theta2eta <- function(theta, link = "identitylink", earg = list(theta = NULL)) { function.name <- link function.name2 <- attr(earg, "function.name") if (length(function.name2) && function.name != function.name2) warning("apparent conflict in name of link function") earg[["theta"]] <- theta # New data do.call(what = function.name, args = earg) } eta2theta <- function(theta, # This is really eta. link = "identitylink", earg = list(theta = NULL)) { orig.earg <- earg if (!is.list(earg)) stop("argument 'earg' is not a list") level1 <- length(earg) > 3 && length(intersect(names(earg), c("theta", "inverse", "deriv", "short", "tag"))) > 3 if (level1) earg <- list(oneOnly = earg) llink <- length(link) if (llink != length(earg)) stop("length of argument 'link' differs from ", "length of argument 'earg'") if (llink == 0) stop("length(earg) == 0 not allowed") if (llink == 1) { # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, if (is.list(earg[[1]])) earg <- earg[[1]] function.name <- link function.name2 <- attr(earg, "function.name") # May be, e.g., NULL if (length(function.name2) && function.name != function.name2) warning("apparent conflict in name of link function") earg[["theta"]] <- theta # New data earg[["inverse"]] <- TRUE # New return(do.call(what = function.name, args = earg)) } if (!is.matrix(theta) && length(theta) == length(earg)) theta <- rbind(theta) ans <- NULL for (iii in 1:llink) { use.earg <- earg[[iii]] use.earg[["inverse"]] <- TRUE # New use.earg[["theta"]] <- theta[, iii] # New use.function.name <- link[iii] ans <- cbind(ans, do.call(what = use.function.name, args = use.earg)) } if (length(orig.earg) == ncol(ans) && length(names(orig.earg)) > 0 && ncol(ans) > 0) colnames(ans) <- names(orig.earg) ans } namesof <- function(theta, link = "identitylink", earg = list(tag = tag, short = short), tag = FALSE, short = TRUE) { funname.only <- strsplit(as.character(link), "(", fixed = TRUE) funname.only <- (funname.only[[1]])[1] link <- funname.only earg[["theta"]] <- as.character(theta) earg[["tag"]] <- tag earg[["short"]] <- short do.call(link, args = earg) } if (FALSE) namesof <- function(theta, link = "identitylink", earg = list(tag = tag, short = short), tag = FALSE, short = TRUE) { earg[["theta"]] <- as.character(theta) earg[["tag"]] <- tag earg[["short"]] <- short do.call(link, args = earg) } link2list <- function(link ) { ans <- link fun.name <- as.character(ans[[1]]) big.list <- as.list(as.function(get(fun.name))) big.list[[length(big.list)]] <- NULL # Kill the body of code t.index <- pmatch(names(ans[-1]), names(big.list)) t.index if (anyNA(t.index)) stop("in '", fun.name, "' could not match argument(s) ", paste('"', names(ans[-1])[is.na(t.index)], '"', sep = "", collapse = ", ")) Big.list <- big.list trivial.call <- 
(length(t.index) == 0) if (!trivial.call) { Big.list[t.index] <- ans[-1] } attr(Big.list, "function.name") <- fun.name Big.list } VGAM/R/simulate.vglm.R0000644000176200001440000000223513135276760014133 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. simulate.vlm <- function (object, nsim = 1, seed = NULL, ...) { if (!exists(".Random.seed", envir = .GlobalEnv, inherits = FALSE)) runif(1) if (is.null(seed)) { RNGstate <- get(".Random.seed", envir = .GlobalEnv) } else { R.seed <- get(".Random.seed", envir = .GlobalEnv) set.seed(seed) RNGstate <- structure(seed, kind = as.list(RNGkind())) on.exit(assign(".Random.seed", R.seed, envir = .GlobalEnv)) } ftd <- fitted(object) nm <- names(ftd) n <- length(ftd) ntot <- n * nsim Fam <- if (inherits(object, "vlm")) { object@family } else { stop("cannot get at the 'family' slot") } val <- if (length(Fam@simslot) > 0) { Fam@simslot(object, nsim) } else { stop(gettextf("family '%s' not implemented", Fam), domain = NA) } if (!is.list(val)) { dim(val) <- c(n, nsim) val <- as.data.frame(val) } else { class(val) <- "data.frame" } names(val) <- paste("sim", seq_len(nsim), sep = "_") if (!is.null(nm)) row.names(val) <- nm attr(val, "seed") <- RNGstate val } VGAM/R/family.aunivariate.R0000644000176200001440000054141013135276757015145 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. hzeta.control <- function(save.weights = TRUE, ...) { list(save.weights = save.weights) } hzeta <- function(lshape = "loglog", ishape = NULL, nsimEIM = 100) { stopifnot(ishape > 0) stopifnot(nsimEIM > 10, length(nsimEIM) == 1, nsimEIM == round(nsimEIM)) lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") new("vglmff", blurb = c("Haight's Zeta distribution f(y) = (2y-1)^(-shape) - ", "(2y+1)^(-shape),\n", " shape>0, y = 1, 2,....\n\n", "Link: ", namesof("shape", lshape, earg = eshape), "\n\n", "Mean: (1-2^(-shape)) * zeta(shape) if shape>1", "\n", "Variance: (1-2^(1-shape)) * zeta(shape-1) - mean^2 if shape>2"), infos = eval(substitute(function(...) 
{ list(M1 = 1, Q1 = 1, expected = FALSE, multipleResponses = FALSE, parameters.names = c("shape"), lshape = .lshape , nsimEIM = .nsimEIM ) }, list( .nsimEIM = nsimEIM, .lshape = lshape ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, Is.integer.y = TRUE, Is.positive.y = TRUE) predictors.names <- namesof("shape", .lshape , earg = .eshape , tag = FALSE) if (!length(etastart)) { a.init <- if (length( .ishape)) .ishape else { if ((meany <- weighted.mean(y, w)) < 1.5) 3.0 else if (meany < 2.5) 1.4 else 1.1 } a.init <- rep_len(a.init, n) etastart <- theta2eta(a.init, .lshape , earg = .eshape ) } }), list( .lshape = lshape, .eshape = eshape, .ishape = ishape ))), linkinv = eval(substitute(function(eta, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) mu <- (1-2^(-shape)) * zeta(shape) mu[shape <= 1] <- Inf mu }, list( .lshape = lshape, .eshape = eshape ))), last = eval(substitute(expression({ misc$link <- c(shape = .lshape) misc$earg <- list(shape = .eshape ) misc$nsimEIM <- .nsimEIM }), list( .lshape = lshape, .eshape = eshape, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shape <- eta2theta(eta, .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dhzeta(x = y, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape = lshape, .eshape = eshape ))), vfamily = c("hzeta"), validparams = eval(substitute(function(eta, y, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) okay1 <- all(is.finite(shape)) && all(0 < shape) okay1 }, list( .lshape = lshape, .eshape = eshape ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) shape <- eta2theta(eta, .lshape , earg = .eshape ) rhzeta(nsim * length(shape), shape = shape) }, list( .lshape = lshape, .eshape = eshape ))), deriv = eval(substitute(expression({ shape <- eta2theta(eta, .lshape , earg = .eshape ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) d3 <- deriv3(~ log((2*y-1)^(-shape) - (2*y+1)^(-shape)), "shape", hessian = FALSE) eval.d3 <- eval(d3) dl.dshape <- attr(eval.d3, "gradient") c(w) * dl.dshape * dshape.deta }), list( .lshape = lshape, .eshape = eshape ))), weight = eval(substitute(expression({ sd3 <- deriv3(~ log((2*ysim-1)^(-shape) - (2*ysim+1)^(-shape)), "shape", hessian = FALSE) run.var <- 0 for (ii in 1:( .nsimEIM )) { ysim <- rhzeta(n, shape = shape) eval.sd3 <- eval(sd3) dl.dshape <- attr(eval.sd3, "gradient") rm(ysim) temp3 <- dl.dshape run.var <- ((ii-1) * run.var + temp3^2) / ii } wz <- if (intercept.only) matrix(colMeans(cbind(run.var)), n, dimm(M), byrow = TRUE) else cbind(run.var) wz <- wz * dshape.deta^2 c(w) * wz }), list( .nsimEIM = nsimEIM )))) } dhzeta <- function(x, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(shape, positive = TRUE)) stop("'shape' must be numeric and have positive values") nn <- max(length(x), length(shape)) if (length(x) != nn) x <- rep_len(x, nn) if (length(shape) != nn) shape <- rep_len(shape, nn) ox <- !is.finite(x) zero <- ox | round(x) != x | x < 1 ans <- rep_len(0, nn) ans[!zero] <- (2*x[!zero]-1)^(-shape[!zero]) - (2*x[!zero]+1)^(-shape[!zero]) if (log.arg) log(ans) else ans }
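# A minimal usage sketch (not run) of dhzeta()/phzeta() and the hzeta()
# family function above; the data frame 'hdata' below is hypothetical.
if (FALSE) {
  shape <- 1.5
  sum(dhzeta(1:10000, shape))  # Should be close to 1
  phzeta(5, shape)             # Same as 1 - (2*5+1)^(-shape)
  hdata <- data.frame(y = rhzeta(n = 200, shape = shape))
  fit <- vglm(y ~ 1, hzeta, data = hdata, trace = TRUE)
  Coef(fit)  # Estimate of 'shape' on the original scale
}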
phzeta <- function(q, shape, log.p = FALSE) { nn <- max(length(q), length(shape)) q <- rep_len(q, nn) shape <- rep_len(shape, nn) oq <- !is.finite(q) zero <- oq | q < 1 q <- floor(q) ans <- 0 * q ans[!zero] <- 1 - (2*q[!zero]+1)^(-shape[!zero]) ans[q == -Inf] <- 0 # 20141215 KaiH ans[q == Inf] <- 1 # 20141215 KaiH ans[shape <= 0] <- NaN if (log.p) log(ans) else ans } qhzeta <- function(p, shape) { if (!is.Numeric(p, positive = TRUE) || any(p >= 1)) stop("argument 'p' must have values inside the interval (0,1)") nn <- max(length(p), length(shape)) p <- rep_len(p, nn) shape <- rep_len(shape, nn) ans <- (((1 - p)^(-1/shape) - 1) / 2) # p is in (0,1) ans[shape <= 0] <- NaN floor(ans + 1) } rhzeta <- function(n, shape) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n shape <- rep_len(shape, use.n) ans <- (runif(use.n)^(-1/shape) - 1) / 2 ans[shape <= 0] <- NaN floor(ans + 1) } dkumar <- function(x, shape1, shape2, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(shape1), length(shape2)) if (length(x) != N) x <- rep_len(x, N) if (length(shape1) != N) shape1 <- rep_len(shape1, N) if (length(shape2) != N) shape2 <- rep_len(shape2, N) logdensity <- rep_len(log(0), N) xok <- (0 <= x & x <= 1) logdensity[xok] <- log(shape1[xok]) + log(shape2[xok]) + (shape1[xok] - 1) * log(x[xok]) + (shape2[xok] - 1) * log1p(-x[xok]^shape1[xok]) logdensity[shape1 <= 0] <- NaN logdensity[shape2 <= 0] <- NaN if (log.arg) logdensity else exp(logdensity) } rkumar <- function(n, shape1, shape2) { ans <- (1 - (runif(n))^(1/shape2))^(1/shape1) ans[(shape1 <= 0) | (shape2 <= 0)] <- NaN ans } qkumar <- function(p, shape1, shape2, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ln.p <- p ans <- (-expm1((1/shape2) * log(-expm1(ln.p))))^(1/shape1) ans[ln.p > 0] <- NaN } else { ans <- (-expm1((1/shape2) * log1p(-p)))^(1/shape1) ans[p < 0] <- NaN ans[p == 0] <- 0 ans[p == 1] <- 1 ans[p > 1] <- NaN } } else { if (log.p) { ln.p <- p ans <- (-expm1(ln.p / shape2))^(1/shape1) ans[ln.p > 0] <- NaN ans } else { ans <- (-expm1((1/shape2) * log(p)))^(1/shape1) ans[p < 0] <- NaN ans[p == 0] <- 1 ans[p == 1] <- 0 ans[p > 1] <- NaN } } ans[(shape1 <= 0) | (shape2 <= 0)] = NaN ans } pkumar <- function(q, shape1, shape2, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ans <- log(-expm1(shape2 * log1p(-q^shape1))) ans[q <= 0 ] <- -Inf ans[q >= 1] <- 0 } else { ans <- -expm1(shape2 * log1p(-q^shape1)) ans[q <= 0] <- 0 ans[q >= 1] <- 1 } } else { if (log.p) { ans <- shape2 * log1p(-q^shape1) ans[q <= 0] <- 0 ans[q >= 1] <- -Inf } else { ans <- exp(shape2 * log1p(-q^shape1)) ans[q <= 0] <- 1 ans[q >= 1] <- 0 } } ans[(shape1 <= 0) | (shape2 <= 0)] <- NaN ans } kumar <- function(lshape1 = "loge", lshape2 = "loge", ishape1 = NULL, ishape2 = NULL, gshape1 = exp(2*ppoints(5) - 1), tol12 = 1.0e-4, zero = NULL) { lshape1 <- as.list(substitute(lshape1)) eshape1 <- link2list(lshape1) lshape1 <- attr(eshape1, "function.name") lshape2 <- 
as.list(substitute(lshape2)) eshape2 <- link2list(lshape2) lshape2 <- attr(eshape2, "function.name") if (length(ishape1) && (!is.Numeric(ishape1, length.arg = 1, positive = TRUE))) stop("bad input for argument 'ishape1'") if (length(ishape2) && !is.Numeric(ishape2)) stop("bad input for argument 'ishape2'") if (!is.Numeric(tol12, length.arg = 1, positive = TRUE)) stop("bad input for argument 'tol12'") if (!is.Numeric(gshape1, positive = TRUE)) stop("bad input for argument 'gshape1'") new("vglmff", blurb = c("Kumaraswamy distribution\n\n", "Links: ", namesof("shape1", lshape1, eshape1, tag = FALSE), ", ", namesof("shape2", lshape2, eshape2, tag = FALSE), "\n", "Mean: shape2 * beta(1 + 1 / shape1, shape2)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("shape1", "shape2"), lshape1 = .lshape1 , lshape2 = .lshape2 , zero = .zero ) }, list( .zero = zero, .lshape1 = lshape1, .lshape2 = lshape2 ))), initialize = eval(substitute(expression({ checklist <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- checklist$w y <- checklist$y # Now 'w' and 'y' have the same dimension. if (any((y <= 0) | (y >= 1))) stop("the response must be in (0, 1)") extra$ncoly <- ncoly <- ncol(y) extra$M1 <- M1 <- 2 M <- M1 * ncoly mynames1 <- param.names("shape1", ncoly) mynames2 <- param.names("shape2", ncoly) predictors.names <- c(namesof(mynames1, .lshape1 , earg = .eshape1 , tag = FALSE), namesof(mynames2, .lshape2 , earg = .eshape2 , tag = FALSE))[ interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { kumar.Loglikfun <- function(shape1, y, x, w, extraargs) { mediany <- colSums(y * w) / colSums(w) shape2 <- log(0.5) / log1p(-(mediany^shape1)) sum(c(w) * dkumar(y, shape1 = shape1, shape2 = shape2, log = TRUE)) } shape1.grid <- .gshape1 shape1.init <- if (length( .ishape1 )) .ishape1 else grid.search(shape1.grid, objfun = kumar.Loglikfun, y = y, x = x, w = w) shape1.init <- matrix(shape1.init, n, ncoly, byrow = TRUE) mediany <- colSums(y * w) / colSums(w) shape2.init <- if (length( .ishape2 )) .ishape2 else log(0.5) / log1p(-(mediany^shape1.init)) shape2.init <- matrix(shape2.init, n, ncoly, byrow = TRUE) etastart <- cbind(theta2eta(shape1.init, .lshape1 , earg = .eshape1 ), theta2eta(shape2.init, .lshape2 , earg = .eshape2 ))[, interleave.VGAM(M, M1 = M1)] } }), list( .lshape1 = lshape1, .lshape2 = lshape2, .ishape1 = ishape1, .ishape2 = ishape2, .eshape1 = eshape1, .eshape2 = eshape2, .gshape1 = gshape1 ))), linkinv = eval(substitute(function(eta, extra = NULL) { shape1 <- eta2theta(eta[, c(TRUE, FALSE)], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, c(FALSE, TRUE)], .lshape2 , earg = .eshape2 ) shape2 * (base::beta(1 + 1/shape1, shape2)) }, list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 ))), last = eval(substitute(expression({ misc$link <- c(rep_len( .lshape1 , ncoly), rep_len( .lshape2 , ncoly))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .eshape1 misc$earg[[M1*ii ]] <- .eshape2 } }), list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = 
eshape1, .eshape2 = eshape2 ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shape1 <- eta2theta(eta[, c(TRUE, FALSE)], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, c(FALSE, TRUE)], .lshape2 , earg = .eshape2 ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dkumar(x = y, shape1, shape2, log = TRUE) if (summation) sum(ll.elts) else ll.elts } }, list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 ))), vfamily = c("kumar"), validparams = eval(substitute(function(eta, y, extra = NULL) { shape1 <- eta2theta(eta[, c(TRUE, FALSE)], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, c(FALSE, TRUE)], .lshape2 , earg = .eshape2 ) okay1 <- all(is.finite(shape1)) && all(0 < shape1) && all(is.finite(shape2)) && all(0 < shape2) okay1 }, list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 ))), simslot = eval(substitute( function(object, nsim) { eta <- predict(object) shape1 <- eta2theta(eta[, c(TRUE, FALSE)], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, c(FALSE, TRUE)], .lshape2 , earg = .eshape2 ) rkumar(nsim * length(shape1), shape1 = shape1, shape2 = shape2) }, list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 ))), deriv = eval(substitute(expression({ shape1 <- eta2theta(eta[, c(TRUE, FALSE)], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, c(FALSE, TRUE)], .lshape2 , earg = .eshape2 ) dshape1.deta <- dtheta.deta(shape1, link = .lshape1 , earg = .eshape1 ) dshape2.deta <- dtheta.deta(shape2, link = .lshape2 , earg = .eshape2 ) dl.dshape1 <- 1 / shape1 + log(y) - (shape2 - 1) * log(y) * (y^shape1) / (1 - y^shape1) dl.dshape2 <- 1 / shape2 + log1p(-y^shape1) dl.deta <- c(w) * cbind(dl.dshape1 * dshape1.deta, dl.dshape2 * dshape2.deta) dl.deta[, interleave.VGAM(M, M1 = M1)] }), list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 ))), weight = eval(substitute(expression({ ned2l.dshape11 <- (1 + (shape2 / (shape2 - 2)) * ((digamma(shape2) - digamma(2))^2 - (trigamma(shape2) - trigamma(2)))) / shape1^2 ned2l.dshape22 <- 1 / shape2^2 ned2l.dshape12 <- (digamma(2) - digamma(1 + shape2)) / ((shape2 - 1) * shape1) index1 <- (abs(shape2 - 1) < .tol12 ) # Fix up singular point at shape2 == 1 ned2l.dshape12[index1] <- -trigamma(2) / shape1[index1] index2 <- (abs(shape2 - 2) < .tol12 ) # Fix up singular point at shape2 == 2 ned2l.dshape11[index2] <- (1 - 2 * psigamma(2, deriv = 2)) / shape1[index2]^2 wz <- array(c(c(w) * ned2l.dshape11 * dshape1.deta^2, c(w) * ned2l.dshape22 * dshape2.deta^2, c(w) * ned2l.dshape12 * dshape1.deta * dshape2.deta), dim = c(n, M / M1, 3)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2, .tol12 = tol12 )))) } drice <- function(x, sigma, vee, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(vee), length(sigma)) if (length(x) != N) x <- rep_len(x, N) if (length(vee) != N) vee <- rep_len(vee , N) if (length(sigma ) != N) sigma <- rep_len(sigma , N) logdensity <- rep_len(log(0), N) xok <- (x > 0) x.abs <- abs(x[xok] * vee[xok] / sigma[xok]^2) logdensity[xok] <- log(x[xok]) - 2 * log(sigma[xok]) + (-(x[xok]^2+vee[xok]^2)/(2*sigma[xok]^2)) + log(besselI(x.abs, nu = 0, expon.scaled = TRUE)) + x.abs logdensity[sigma <= 0] <- NaN logdensity[vee < 0] <- 
NaN logdensity[is.infinite(x)] <- -Inf # 20141209 KaiH if (log.arg) logdensity else exp(logdensity) } rrice <- function(n, sigma, vee) { theta <- 1 # any number X <- rnorm(n, mean = vee * cos(theta), sd = sigma) Y <- rnorm(n, mean = vee * sin(theta), sd = sigma) sqrt(X^2 + Y^2) } marcumQ <- function(a, b, m = 1, lower.tail = TRUE, log.p = FALSE, ... ) { pchisq(b^2, df = 2*m, ncp = a^2, lower.tail = lower.tail, log.p = log.p, ... ) } price <- function(q, sigma, vee, lower.tail = TRUE, log.p = FALSE, ...) { ans <- marcumQ(vee/sigma, q/sigma, m = 1, lower.tail = lower.tail, log.p = log.p, ... ) ans } qrice <- function(p, sigma, vee, lower.tail = TRUE, log.p = FALSE, ... ) { sqrt(qchisq(p, df = 2, ncp = (vee/sigma)^2, lower.tail = lower.tail, log.p = log.p, ... )) * sigma } riceff.control <- function(save.weights = TRUE, ...) { list(save.weights = save.weights) } riceff <- function(lsigma = "loge", lvee = "loge", isigma = NULL, ivee = NULL, nsimEIM = 100, zero = NULL, nowarning = FALSE) { lvee <- as.list(substitute(lvee)) evee <- link2list(lvee) lvee <- attr(evee, "function.name") lsigma <- as.list(substitute(lsigma)) esigma <- link2list(lsigma) lsigma <- attr(esigma, "function.name") if (length(ivee) && !is.Numeric(ivee, positive = TRUE)) stop("bad input for argument 'ivee'") if (length(isigma) && !is.Numeric(isigma, positive = TRUE)) stop("bad input for argument 'isigma'") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE) || nsimEIM <= 50) stop("'nsimEIM' should be an integer greater than 50") new("vglmff", blurb = c("Rice distribution\n\n", "Links: ", namesof("sigma", lsigma, earg = esigma, tag = FALSE), ", ", namesof("vee", lvee, earg = evee, tag = FALSE), "\n", "Mean: ", "sigma*sqrt(pi/2)*exp(z/2)*((1-z)*", "besselI(-z/2, nu = 0) - z * besselI(-z/2, nu = 1)) ", "where z=-vee^2/(2*sigma^2)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(M1 = 2, Q1 = 1, expected = FALSE, multipleResponses = FALSE, parameters.names = c("sigma", "vee"), nsimEIM = .nsimEIM, lsigma = .lsigma , lvee = .lvee , zero = .zero ) }, list( .zero = zero, .lsigma = lsigma, .lvee = lvee, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("sigma", .lsigma , earg = .esigma , tag = FALSE), namesof("vee", .lvee , earg = .evee , tag = FALSE)) if (!length(etastart)) { riceff.Loglikfun <- function(vee, y, x, w, extraargs) { sigma.init <- sd(rep(y, w)) sum(c(w) * (log(y) - 2*log(sigma.init) + log(besselI(y*vee/sigma.init^2, nu = 0)) - (y^2 + vee^2) / (2*sigma.init^2))) } vee.grid <- seq(quantile(rep(y, w), probs = seq(0, 1, 0.2))["20%"], quantile(rep(y, w), probs = seq(0, 1, 0.2))["80%"], len = 11) vee.init <- if (length( .ivee )) .ivee else grid.search(vee.grid, objfun = riceff.Loglikfun, y = y, x = x, w = w) vee.init <- rep_len(vee.init, length(y)) sigma.init <- if (length( .isigma )) .isigma else sqrt(max((weighted.mean(y^2, w) - vee.init^2)/2, 0.001)) sigma.init <- rep_len(sigma.init, length(y)) etastart <- cbind(theta2eta(sigma.init, .lsigma , earg = .esigma ), theta2eta(vee.init, .lvee , earg = .evee )) } }), list( .lvee = lvee, .lsigma = lsigma, .ivee = ivee, .isigma = isigma, .evee = evee, .esigma = esigma ))), linkinv = eval(substitute(function(eta, extra = NULL) { vee <- eta2theta(eta[, 1], link = .lvee , earg = .evee ) sigma <- eta2theta(eta[, 2], link = .lsigma , earg = .esigma ) temp9 <- -vee^2 / (2*sigma^2) sigma * sqrt(pi/2) * ((1-temp9) * besselI(-temp9/2, nu = 0, expon = TRUE) - temp9 * besselI(-temp9/2, nu = 1, expon = TRUE)) }, list( .lvee = lvee, .lsigma = lsigma, .evee = evee, .esigma = esigma ))), last = eval(substitute(expression({ misc$link <- c("sigma" = .lsigma , "vee" = .lvee ) misc$earg <- list("sigma" = .esigma , "vee" = .evee ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list( .lvee = lvee, .lsigma = lsigma, .evee = evee, .esigma = esigma, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { sigma <- eta2theta(eta[, 1], link = .lsigma , earg = .esigma ) vee <- eta2theta(eta[, 2], link = .lvee , earg = .evee ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * drice(x = y, sigma = sigma, vee = vee, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lvee = lvee, .lsigma = lsigma, .evee = evee, .esigma = esigma ))), vfamily = c("riceff"), validparams = eval(substitute(function(eta, y, extra = NULL) { sigma <- eta2theta(eta[, 1], link = .lsigma , earg = .esigma ) vee <- eta2theta(eta[, 2], link = .lvee , earg = .evee ) okay1 <- all(is.finite(sigma)) && all(0 < sigma) && all(is.finite(vee )) && all(0 < vee ) okay1 }, list( .lvee = lvee, .lsigma = lsigma, .evee = evee, .esigma = esigma ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) sigma <- eta2theta(eta[, 1], link = .lsigma , earg = .esigma ) vee <- eta2theta(eta[, 2], link = .lvee , earg = .evee ) rrice(nsim * length(vee), vee = vee, sigma = sigma) }, list( .lvee = lvee, .lsigma = lsigma, .evee = evee, .esigma = 
esigma ))), deriv = eval(substitute(expression({ sigma <- eta2theta(eta[, 1], link = .lsigma , earg = .esigma ) vee <- eta2theta(eta[, 2], link = .lvee , earg = .evee ) dvee.deta <- dtheta.deta(vee, link = .lvee , earg = .evee ) dsigma.deta <- dtheta.deta(sigma, link = .lsigma , earg = .esigma ) temp8 <- y * vee / sigma^2 dl.dvee <- -vee/sigma^2 + (y/sigma^2) * besselI(temp8, nu = 1) / besselI(temp8, nu = 0) dl.dsigma <- -2/sigma + (y^2 + vee^2)/(sigma^3) - (2 * temp8 / sigma) * besselI(temp8, nu = 1) / besselI(temp8, nu = 0) c(w) * cbind(dl.dsigma * dsigma.deta, dl.dvee * dvee.deta) }), list( .lvee = lvee, .lsigma = lsigma, .evee = evee, .esigma = esigma, .nsimEIM = nsimEIM ))), weight = eval(substitute(expression({ run.var <- run.cov <- 0 for (ii in 1:( .nsimEIM )) { ysim <- rrice(n, vee = vee, sigma = sigma) temp8 <- ysim * vee / sigma^2 dl.dvee <- -vee/sigma^2 + (ysim/sigma^2) * besselI(temp8, nu = 1) / besselI(temp8, nu = 0) dl.dsigma <- -2/sigma + (ysim^2 + vee^2)/(sigma^3) - (2 * temp8 / sigma) * besselI(temp8, nu = 1) / besselI(temp8, nu = 0) rm(ysim) temp3 <- cbind(dl.dsigma, dl.dvee) run.var <- ((ii-1) * run.var + temp3^2) / ii run.cov <- ((ii-1) * run.cov + temp3[, 1] * temp3[, 2]) / ii } wz <- if (intercept.only) matrix(colMeans(cbind(run.var, run.cov)), n, dimm(M), byrow = TRUE) else cbind(run.var, run.cov) dtheta.detas <- cbind(dsigma.deta, dvee.deta) index0 <- iam(NA_real_, NA_real_, M = M, both = TRUE, diag = TRUE) wz <- wz * dtheta.detas[, index0$row] * dtheta.detas[, index0$col] c(w) * wz }), list( .lvee = lvee, .lsigma = lsigma, .evee = evee, .esigma = esigma, .nsimEIM = nsimEIM )))) } dskellam <- function(x, mu1, mu2, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(mu1), length(mu2)) if (length(x) != L) x <- rep_len(x, L) if (length(mu1) != L) mu1 <- rep_len(mu1, L) if (length(mu2) != L) mu2 <- rep_len(mu2, L) ok2 <- is.finite(mu1) & is.finite(mu2) & (mu1 >= 0) & (mu2 >= 0) ok3 <- (mu1 == 0) & (mu2 > 0) ok4 <- (mu1 > 0) & (mu2 == 0) ok5 <- (mu1 == 0) & (mu2 == 0) if (log.arg) { ans <- -mu1 - mu2 + 2 * sqrt(mu1*mu2) + 0.5 * x * log(mu1) - 0.5 * x * log(mu2) + log(besselI(2 * sqrt(mu1*mu2), nu = abs(x), expon.scaled = TRUE)) ans[ok3] <- dpois(x = -x[ok3], lambda = mu2[ok3], log = TRUE) ans[ok4] <- dpois(x = -x[ok4], lambda = mu1[ok4], log = TRUE) ans[ok5] <- dpois(x = x[ok5], lambda = 0.0, log = TRUE) ans[x != round(x)] = log(0.0) } else { ans <- (mu1/mu2)^(x/2) * exp(-mu1-mu2 + 2 * sqrt(mu1*mu2)) * besselI(2 * sqrt(mu1*mu2), nu = abs(x), expon.scaled = TRUE) ans[ok3] <- dpois(x = -x[ok3], lambda = mu2[ok3]) ans[ok4] <- dpois(x = -x[ok4], lambda = mu1[ok4]) ans[ok5] <- dpois(x = x[ok5], lambda = 0.0) ans[x != round(x)] <- 0.0 } ans[!ok2] <- NaN ans } rskellam <- function(n, mu1, mu2) { rpois(n, mu1) - rpois(n, mu2) } skellam.control <- function(save.weights = TRUE, ...) 
{ list(save.weights = save.weights) } skellam <- function(lmu1 = "loge", lmu2 = "loge", imu1 = NULL, imu2 = NULL, nsimEIM = 100, parallel = FALSE, zero = NULL) { lmu1 <- as.list(substitute(lmu1)) emu1 <- link2list(lmu1) lmu1 <- attr(emu1, "function.name") lmu2 <- as.list(substitute(lmu2)) emu2 <- link2list(lmu2) lmu2 <- attr(emu2, "function.name") if (length(imu1) && !is.Numeric(imu1, positive = TRUE)) stop("bad input for argument 'imu1'") if (length(imu2) && !is.Numeric(imu2, positive = TRUE)) stop("bad input for argument 'imu2'") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE) || nsimEIM <= 50) stop("argument 'nsimEIM' should be an integer greater than 50") new("vglmff", blurb = c("Skellam distribution\n\n", "Links: ", namesof("mu1", lmu1, earg = emu1, tag = FALSE), ", ", namesof("mu2", lmu2, earg = emu2, tag = FALSE), "\n", "Mean: mu1-mu2", "\n", "Variance: mu1+mu2"), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints, apply.int = TRUE) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .parallel = parallel, .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = FALSE, multipleResponses = FALSE, parameters.names = c("mu1", "mu2"), nsimEIM = .nsimEIM, lmu1 = .lmu1 , lmu2 = .lmu2 , zero = .zero ) }, list( .zero = zero, .lmu1 = lmu1, .lmu2 = lmu2, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, Is.integer.y = TRUE, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c( namesof("mu1", .lmu1, earg = .emu1, tag = FALSE), namesof("mu2", .lmu2, earg = .emu2, tag = FALSE)) if (!length(etastart)) { junk <- lm.wfit(x = x, y = c(y), w = c(w)) var.y.est <- sum(c(w) * junk$resid^2) / junk$df.residual mean.init <- weighted.mean(y, w) mu1.init <- max((var.y.est + mean.init) / 2, 0.01) mu2.init <- max((var.y.est - mean.init) / 2, 0.01) mu1.init <- rep_len(if (length( .imu1 )) .imu1 else mu1.init, n) mu2.init <- rep_len(if (length( .imu2 )) .imu2 else mu2.init, n) etastart <- cbind(theta2eta(mu1.init, .lmu1, earg = .emu1 ), theta2eta(mu2.init, .lmu2, earg = .emu2 )) } }), list( .lmu1 = lmu1, .lmu2 = lmu2, .imu1 = imu1, .imu2 = imu2, .emu1 = emu1, .emu2 = emu2 ))), linkinv = eval(substitute(function(eta, extra = NULL) { mu1 <- eta2theta(eta[, 1], link = .lmu1, earg = .emu1 ) mu2 <- eta2theta(eta[, 2], link = .lmu2, earg = .emu2 ) mu1 - mu2 }, list( .lmu1 = lmu1, .lmu2 = lmu2, .emu1 = emu1, .emu2 = emu2 ))), last = eval(substitute(expression({ misc$link <- c("mu1" = .lmu1, "mu2" = .lmu2) misc$earg <- list("mu1" = .emu1, "mu2" = .emu2 ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM }), list( .lmu1 = lmu1, .lmu2 = lmu2, .emu1 = emu1, .emu2 = emu2, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { mu1 <- eta2theta(eta[, 1], link = .lmu1, earg = .emu1 ) mu2 <- eta2theta(eta[, 2], link = .lmu2, earg = .emu2 ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- if ( is.logical( .parallel ) && length( .parallel ) == 1 && .parallel ) c(w) * log(besselI(2*mu1, nu = y, expon = TRUE)) else c(w) * (-mu1 - mu2 + 0.5 * y * log(mu1) - 0.5 * y * log(mu2) + 2 * sqrt(mu1*mu2) + # Use this when expon = TRUE log(besselI(2 * sqrt(mu1*mu2), nu = y, expon = TRUE))) if (summation) { sum(ll.elts) } else { 
ll.elts } } }, list( .lmu1 = lmu1, .lmu2 = lmu2, .emu1 = emu1, .emu2 = emu2, .parallel = parallel ))), vfamily = c("skellam"), validparams = eval(substitute(function(eta, y, extra = NULL) { mu1 <- eta2theta(eta[, 1], link = .lmu1, earg = .emu1 ) mu2 <- eta2theta(eta[, 2], link = .lmu2, earg = .emu2 ) okay1 <- all(is.finite(mu1)) && all(0 < mu1) && all(is.finite(mu2)) && all(0 < mu2) okay1 }, list( .lmu1 = lmu1, .lmu2 = lmu2, .emu1 = emu1, .emu2 = emu2 ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) mu1 <- eta2theta(eta[, 1], link = .lmu1, earg = .emu1 ) mu2 <- eta2theta(eta[, 2], link = .lmu2, earg = .emu2 ) rskellam(nsim * length(mu1), mu1, mu2) }, list( .lmu1 = lmu1, .lmu2 = lmu2, .emu1 = emu1, .emu2 = emu2, .parallel = parallel ))), deriv = eval(substitute(expression({ mu1 <- eta2theta(eta[, 1], link = .lmu1, earg = .emu1 ) mu2 <- eta2theta(eta[, 2], link = .lmu2, earg = .emu2 ) dmu1.deta <- dtheta.deta(mu1, link = .lmu1, earg = .emu1 ) dmu2.deta <- dtheta.deta(mu2, link = .lmu2, earg = .emu2 ) temp8 <- 2 * sqrt(mu1*mu2) temp9 <- besselI(temp8, nu = y, expon = TRUE) temp7 <- (besselI(temp8, nu = y-1, expon = TRUE) + besselI(temp8, nu = y+1, expon = TRUE)) / 2 temp6 <- temp7 / temp9 dl.dmu1 <- -1 + 0.5 * y / mu1 + sqrt(mu2/mu1) * temp6 dl.dmu2 <- -1 - 0.5 * y / mu2 + sqrt(mu1/mu2) * temp6 c(w) * cbind(dl.dmu1 * dmu1.deta, dl.dmu2 * dmu2.deta) }), list( .lmu1 = lmu1, .lmu2 = lmu2, .emu1 = emu1, .emu2 = emu2, .nsimEIM = nsimEIM ))), weight = eval(substitute(expression({ run.var <- run.cov <- 0 for (ii in 1:( .nsimEIM )) { ysim <- rskellam(n, mu1 = mu1, mu2 = mu2) temp9 <- besselI(temp8, nu = ysim, expon = TRUE) temp7 <- (besselI(temp8, nu = ysim-1, expon = TRUE) + besselI(temp8, nu = ysim+1, expon = TRUE)) / 2 temp6 <- temp7 / temp9 dl.dmu1 <- -1 + 0.5 * ysim/mu1 + sqrt(mu2/mu1) * temp6 dl.dmu2 <- -1 - 0.5 * ysim/mu2 + sqrt(mu1/mu2) * temp6 rm(ysim) temp3 <- cbind(dl.dmu1, dl.dmu2) run.var <- ((ii-1) * run.var + temp3^2) / ii run.cov <- ((ii-1) * run.cov + temp3[, 1] * temp3[, 2]) / ii } wz <- if (intercept.only) matrix(colMeans(cbind(run.var, run.cov)), n, dimm(M), byrow = TRUE) else cbind(run.var, run.cov) dtheta.detas <- cbind(dmu1.deta, dmu2.deta) index0 <- iam(NA_real_, NA_real_, M = M, both = TRUE, diag = TRUE) wz <- wz * dtheta.detas[, index0$row] * dtheta.detas[, index0$col] c(w) * wz }), list( .lmu1 = lmu1, .lmu2 = lmu2, .emu1 = emu1, .emu2 = emu2, .nsimEIM = nsimEIM )))) } dyules <- function(x, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if ( log.arg ) { ans <- log(shape) + lbeta(abs(x), shape+1) ans[(x != round(x)) | (x < 1)] <- log(0) } else { ans <- shape * beta(x, shape+1) ans[(x != round(x)) | (x < 1)] <- 0 } ans[!is.finite(shape) | (shape <= 0)] <- NaN ans } pyules <- function(q, shape, lower.tail = TRUE, log.p = FALSE) { tq <- trunc(q) if (lower.tail) { ans <- 1 - tq * beta(abs(tq), shape+1) ans[q < 1] <- 0 ans[is.infinite(q) & 0 < q] <- 1 # 20141215 KaiH } else { ans <- tq * beta(abs(tq), shape+1) ans[q < 1] <- 1 ans[is.infinite(q) & 0 < q] <- 0 # 20160713 } ans[shape <= 0] <- NaN if (log.p) log(ans) else ans } qyules <- function(p, shape) { LLL <- max(length(p), length(shape)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) ans <- rep_len(0, LLL) lo <- rep_len(1, LLL)
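# qyules() below: bracket the quantile by repeated interval doubling, refine with bisection on pyules(), then step to the correct integer with a final floor/ceiling check.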
approx.ans <- lo # True at lhs hi <- 2 * lo + 10 dont.iterate <- p == 1 | shape <= 0 done <- p <= pyules(hi, shape) | dont.iterate while (!all(done)) { hi.save <- hi[!done] hi[!done] <- 2 * lo[!done] + 10 lo[!done] <- hi.save done[!done] <- (p[!done] <= pyules(hi[!done], shape[!done])) } foo <- function(q, shape, p) pyules(q, shape) - p lhs <- (p <= dyules(1, shape)) | dont.iterate approx.ans[!lhs] <- bisection.basic(foo, lo[!lhs], hi[!lhs], tol = 1/16, shape = shape[!lhs], p = p[!lhs]) faa <- floor(approx.ans) ans <- ifelse(pyules(faa, shape) < p & p <= pyules(faa+1, shape), faa+1, faa) ans[p == 1] <- Inf ans[shape <= 0] <- NaN ans } # qyules ryules <- function(n, shape) { rgeom(n, prob = exp(-rexp(n, rate = shape))) + 1 } yulesimon.control <- function(save.weights = TRUE, ...) { list(save.weights = save.weights) } yulesimon <- function(lshape = "loge", ishape = NULL, nsimEIM = 200, zero = NULL) { if (length(ishape) && !is.Numeric(ishape, positive = TRUE)) stop("argument 'ishape' must be > 0") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE) || nsimEIM <= 50) stop("argument 'nsimEIM' should be an integer greater than 50") new("vglmff", blurb = c("Yule-Simon distribution f(y) = shape * beta(y, shape + 1), ", "shape > 0, y = 1, 2,..\n\n", "Link: ", namesof("shape", lshape, earg = eshape), "\n\n", "Mean: shape / (shape - 1), provided shape > 1\n", "Variance: shape^2 / ((shape - 1)^2 * (shape - 2)), ", "provided shape > 2"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(M1 = 1, Q1 = 1, expected = TRUE, multipleResponses = TRUE, nsimEIM = .nsimEIM , parameters.names = c("shape"), zero = .zero ) }, list( .zero = zero, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, Is.integer.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 1 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly mynames1 <- param.names("shape", ncoly) predictors.names <- namesof(mynames1, .lshape , earg = .eshape , tag = FALSE) if (!length(etastart)) { wmeany <- colSums(y * w) / colSums(w) + 1/8 shape.init <- wmeany / (wmeany - 1) shape.init <- matrix(if (length( .ishape )) .ishape else shape.init, n, M, byrow = TRUE) etastart <- theta2eta(shape.init, .lshape , earg = .eshape ) } }), list( .lshape = lshape, .eshape = eshape, .ishape = ishape ))), linkinv = eval(substitute(function(eta, extra = NULL) { ans <- shape <- eta2theta(eta, .lshape , earg = .eshape ) ans[shape > 1] <- (shape / (shape - 1))[shape > 1] ans[shape <= 1] <- NA ans }, list( .lshape = lshape, .eshape = eshape ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .lshape , ncoly)) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for (ii in 1:ncoly) { misc$earg[[ii]] <- .eshape } misc$M1 <- M1 misc$ishape <- .ishape misc$nsimEIM <- .nsimEIM }), list( .lshape = lshape, .eshape = eshape, .nsimEIM = nsimEIM, .ishape = ishape ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shape <- eta2theta(eta, .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dyules(x = y, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape = lshape, .eshape = eshape ))), vfamily = c("yulesimon"), validparams = eval(substitute(function(eta, y, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) okay1 <- all(is.finite(shape)) && all(0 < shape) okay1 }, list( .lshape = lshape, .eshape = eshape ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) shape <- eta2theta(eta, .lshape , earg = .eshape ) ryules(nsim * length(shape), shape = shape) }, list( .lshape = lshape, .eshape = eshape ))), deriv = eval(substitute(expression({ M1 <- 1 shape <- eta2theta(eta, .lshape , earg = .eshape ) dl.dshape <- 1/shape + digamma(1+shape) - digamma(1+shape+y) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) c(w) * dl.dshape * dshape.deta }), list( .lshape = lshape, .eshape = eshape ))), weight = eval(substitute(expression({ run.var <- 0 for (ii in 1:( .nsimEIM )) { ysim <- ryules(n, shape = shape) dl.dshape <- 1/shape + digamma(1+shape) - digamma(1+shape+ysim) rm(ysim) temp3 <- dl.dshape run.var <- ((ii-1) * run.var + temp3^2) / ii } wz <- if (intercept.only) matrix(colMeans(cbind(run.var)), n, M, byrow = TRUE) else cbind(run.var) wz <- wz * dshape.deta^2 c(w) * wz }), list( .nsimEIM = nsimEIM )))) } # yulesimon() dlind <- function(x, theta, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if ( log.arg ) { ans <- 2 * log(theta) + log1p(x) - theta * x - log1p(theta) ans[x < 0 | is.infinite(x)] <- log(0) # 20141209
KaiH } else { ans <- theta^2 * (1 + x) * exp(-theta * x) / (1 + theta) ans[x < 0 | is.infinite(x)] <- 0 # 20141209 KaiH } ans[theta <= 0] <- NaN ans } plind <- function(q, theta, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ans <- log(-expm1(-theta * q + log1p(q / (1 + 1/theta)))) ans[q <= 0 ] <- -Inf ans[q == Inf] <- 0 } else { ans <- -expm1(-theta * q + log1p(q / (1 + 1/theta))) ans[q <= 0] <- 0 ans[q == Inf] <- 1 } } else { if (log.p) { ans <- -theta * q + log1p(q / (1 + 1/theta)) ans[q <= 0] <- 0 ans[q == Inf] <- -Inf } else { ans <- exp(-theta * q + log1p(q / (1 + 1/theta))) ans[q <= 0] <- 1 ans[q == Inf] <- 0 } } ans[theta <= 0] <- NaN ans } rlind <- function(n, theta) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n ifelse(runif(use.n) < rep_len(1 / (1 + 1/theta), use.n), rexp(use.n, theta), rgamma(use.n, shape = 2, scale = 1 / theta)) } lindley <- function(link = "loge", itheta = NULL, zero = NULL) { if (length(itheta) && !is.Numeric(itheta, positive = TRUE)) stop("argument 'itheta' must be > 0") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Lindley distribution f(y) = ", "theta^2 * (1 + y) * exp(-theta * y) / (1 + theta), ", "theta > 0, y > 0,\n\n", "Link: ", namesof("theta", link, earg = earg), "\n\n", "Mean: (theta + 2) / (theta * (theta + 1))\n", "Variance: (theta^2 + 4 * theta + 2) / (theta * (theta + 1))^2"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
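# Illustrative, commented-out checks for dlind()/plind()/rlind() above and
# a usage sketch for the lindley() family (assumes VGAM is loaded; values
# quoted are approximate guides, not tested output):
#   integrate(dlind, 0, Inf, theta = 1.5)$value  # Should be close to 1
#   plind(2, theta = 1.5)   # Should match integrate(dlind, 0, 2, theta = 1.5)
#   ldata <- data.frame(y = rlind(1000, theta = 1.5))
#   fit <- vglm(y ~ 1, lindley, data = ldata, trace = TRUE)
#   Coef(fit)          # theta-hat, near 1.5
#   head(fitted(fit))  # (theta+2)/(theta*(theta+1)), about 0.933 at theta = 1.5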
{ list(M1 = 1, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("theta"), zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 1 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly mynames1 <- param.names("theta", ncoly) predictors.names <- namesof(mynames1, .link , earg = .earg , tag = FALSE) if (!length(etastart)) { wmeany <- colSums(y * w) / colSums(w) + 1/8 theta.init <- 1 / (wmeany + 1) theta.init <- matrix(if (length( .itheta )) .itheta else theta.init, n, M, byrow = TRUE) etastart <- theta2eta(theta.init, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .itheta = itheta ))), linkinv = eval(substitute(function(eta, extra = NULL) { theta <- eta2theta(eta, .link , earg = .earg ) (theta + 2) / (theta * (theta + 1)) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .link , ncoly)) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for (ii in 1:ncoly) { misc$earg[[ii]] <- .earg } misc$M1 <- M1 misc$itheta <- .itheta misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .link = link, .earg = earg, .itheta = itheta ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { theta <- eta2theta(eta, .link , earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dlind(x = y, theta = theta, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg ))), vfamily = c("lindley"), validparams = eval(substitute(function(eta, y, extra = NULL) { theta <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(theta)) && all(0 < theta) okay1 }, list( .link = link, .earg = earg ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) theta <- eta2theta(eta, .link , earg = .earg ) rlind(nsim * length(theta), theta = theta) }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ M1 <- 1 theta <- eta2theta(eta, .link , earg = .earg ) dl.dtheta <- 2 / theta - 1 / (1 + theta) - y DTHETA.DETA <- dtheta.deta(theta, .link , earg = .earg ) c(w) * dl.dtheta * DTHETA.DETA }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ ned2l.dtheta2 <- (theta^2 + 4 * theta + 2) / (theta * (1 + theta))^2 c(w) * ned2l.dtheta2 * DTHETA.DETA^2 }), list( .zero = zero )))) } dpoislindley <- function(x, theta, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if ( log.arg ) { ans <- 2 * log(theta) + log(theta + 2 + x) - (x+3) * log1p(theta) ans[(x != round(x)) | (x < 0)] <- log(0) } else { ans <- theta^2 * (theta + 2 + x) / (theta + 1)^(x+3) ans[(x != round(x)) | (x < 0)] <- 0 } ans[ # !is.finite(theta) | (theta <= 0)] <- NA ans } dslash <- function(x, mu = 0, sigma = 1, log = FALSE, smallno = .Machine$double.eps * 1000) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(sigma) || any(sigma <= 0)) stop("argument 'sigma' must be positive") L <- max(length(x), length(mu), 
length(sigma)) if (length(x) != L) x <- rep_len(x, L) if (length(mu) != L) mu <- rep_len(mu, L) if (length(sigma) != L) sigma <- rep_len(sigma, L) zedd <- (x-mu)/sigma if (log.arg) { ifelse(abs(zedd) < smallno, -log(2*sigma*sqrt(2*pi)), log1p(-exp(-zedd^2/2)) - log(sqrt(2*pi)*sigma*zedd^2)) } else { ifelse(abs(zedd) < smallno, 1/(2*sigma*sqrt(2*pi)), -expm1(-zedd^2/2)/(sqrt(2*pi)*sigma*zedd^2)) } } pslash <- function(q, mu = 0, sigma = 1, very.negative = -10000, lower.tail = TRUE, log.p = FALSE) { if (anyNA(q)) stop("argument 'q' must have non-missing values") if (!is.Numeric(mu)) stop("argument 'mu' must have finite and non-missing values") if (!is.Numeric(sigma, positive = TRUE)) stop("argument 'sigma' must have positive finite non-missing values") if (!is.Numeric(very.negative, length.arg = 1) || (very.negative >= 0)) stop("argument 'very.negative' must be quite negative") if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") L <- max(length(q), length(mu), length(sigma)) if (length(q) != L) q <- rep_len(q, L) if (length(mu) != L) mu <- rep_len(mu, L) if (length(sigma) != L) sigma <- rep_len(sigma, L) zedd <- (q - mu)/sigma ans <- as.numeric(q * NA) extreme.q <- FALSE for (ii in 1:L) { use.trick <- (-abs(zedd[ii]) <= very.negative) if (use.trick) { ans[ii] <- ifelse(zedd[ii] < 0, 0.0, 1.0) extreme.q <- TRUE } else if ((zedd[ii] >= very.negative) && zedd[ii] <= 0.0) { temp2 <- integrate(dslash, lower = q[ii], upper = mu[ii], mu = mu[ii], sigma = sigma[ii]) if (temp2$message != "OK") warning("integrate() failed on 'temp2'") ans[ii] <- 0.5 - temp2$value } else { temp1 <- integrate(dslash, lower = mu[ii], upper = q[ii], mu = mu[ii], sigma = sigma[ii]) if (temp1$message != "OK") warning("integrate() failed") ans[ii] <- 0.5 + temp1$value } } if (extreme.q) warning("returning 0 or 1 values for extreme values of argument 'q'") if (lower.tail) { if (log.p) log(ans) else ans } else { if (log.p) log1p(-ans) else -expm1(log(ans)) } } rslash <- function (n, mu = 0, sigma = 1) { rnorm(n = n, mean = mu, sd = sigma) / runif(n = n) } slash.control <- function(save.weights = TRUE, ...) 
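# pslash() above has no closed form: it calls integrate() once per
# observation and uses 'very.negative' to short-circuit extreme arguments,
# so it can be slow for long vectors. Commented-out checks (illustrative):
#   pslash(0)                         # 0.5 by symmetry about mu = 0
#   integrate(dslash, -Inf, 2)$value  # Should be close to pslash(2)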
{ list(save.weights = save.weights) } slash <- function(lmu = "identitylink", lsigma = "loge", imu = NULL, isigma = NULL, gprobs.y = ppoints(8), nsimEIM = 250, zero = NULL, smallno = .Machine$double.eps * 1000) { lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lsigma <- as.list(substitute(lsigma)) esigma <- link2list(lsigma) lsigma <- attr(esigma, "function.name") if (length(isigma) && !is.Numeric(isigma, positive = TRUE)) stop("argument 'isigma' must be > 0") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE) || nsimEIM <= 50) stop("argument 'nsimEIM' should be an integer greater than 50") if (!is.Numeric(gprobs.y, positive = TRUE) || max(gprobs.y) >= 1) stop("bad input for argument 'gprobs.y'") if (!is.Numeric(smallno, positive = TRUE) || smallno > 0.1) stop("bad input for argument 'smallno'") new("vglmff", blurb = c("Slash distribution\n\n", "Links: ", namesof("mu", lmu, earg = emu, tag = FALSE), ", ", namesof("sigma", lsigma, earg = esigma, tag = FALSE), "\n", paste( "1-exp(-(((y-mu)/sigma)^2)/2))/(sqrt(2*pi)*", "sigma*((y-mu)/sigma)^2)", "\ty!=mu", "\n1/(2*sigma*sqrt(2*pi))", "\t\t\t\t\t\t\ty=mu\n")), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("mu", "sigma"), lmu = .lmu , lsigma = .lsigma , zero = .zero ) }, list( .zero = zero, .lmu = lmu, .lsigma = lsigma ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c( namesof("mu", .lmu , earg = .emu, tag = FALSE), namesof("sigma", .lsigma , earg = .esigma, tag = FALSE)) if (!length(etastart)) { slash.Loglikfun <- function(mu, y, x, w, extraargs) { sigma <- if (is.Numeric(.isigma)) .isigma else max(0.01, ((quantile(rep(y, w), prob = 0.75)/2)-mu)/qnorm(0.75)) zedd <- (y-mu)/sigma sum(c(w) * ifelse(abs(zedd)<.smallno, -log(2*sigma*sqrt(2*pi)), log1p(-exp(-zedd^2/2)) - log(sqrt(2*pi) * sigma * zedd^2))) } gprobs.y <- .gprobs.y mu.grid <- quantile(rep(y, w), probs = gprobs.y) mu.grid <- seq(mu.grid[1], mu.grid[2], length=100) mu.init <- if (length( .imu )) .imu else grid.search(mu.grid, objfun = slash.Loglikfun, y = y, x = x, w = w) sigma.init <- if (is.Numeric(.isigma)) .isigma else max(0.01, ((quantile(rep(y, w), prob = 0.75)/2) - mu.init) / qnorm(0.75)) mu.init <- rep_len(mu.init, length(y)) etastart <- matrix(0, n, 2) etastart[, 1] <- theta2eta(mu.init, .lmu , earg = .emu ) etastart[, 2] <- theta2eta(sigma.init, .lsigma , earg = .esigma ) } }), list( .lmu = lmu, .lsigma = lsigma, .imu = imu, .isigma = isigma, .emu = emu, .esigma = esigma, .gprobs.y = gprobs.y, .smallno = smallno))), linkinv = eval(substitute(function(eta, extra = NULL) { NA * eta2theta(eta[, 1], link = .lmu , earg = .emu ) }, list( .lmu = lmu, .emu = emu ))), last = eval(substitute(expression({ misc$link <- c("mu" = .lmu , "sigma" = .lsigma ) misc$earg <- list("mu" = .emu , "sigma" = .esigma ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM }), list( .lmu = lmu, .lsigma = lsigma, .emu = emu, .esigma = esigma, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { mu <- eta2theta(eta[, 1], link = .lmu , earg = .emu ) sigma <- eta2theta(eta[, 2], link = 
.lsigma , earg = .esigma ) zedd <- (y - mu) / sigma if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dslash(x = y, mu = mu, sigma = sigma, log = TRUE, smallno = .smallno) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lmu = lmu, .lsigma = lsigma, .emu = emu, .esigma = esigma, .smallno = smallno ))), vfamily = c("slash"), validparams = eval(substitute(function(eta, y, extra = NULL) { mu <- eta2theta(eta[, 1], link = .lmu , earg = .emu ) sigma <- eta2theta(eta[, 2], link = .lsigma , earg = .esigma ) okay1 <- all(is.finite(mu)) && all(is.finite(sigma)) && all(0 < sigma) okay1 }, list( .lmu = lmu, .lsigma = lsigma, .emu = emu, .esigma = esigma ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) mu <- eta2theta(eta[, 1], link = .lmu , earg = .emu ) sigma <- eta2theta(eta[, 2], link = .lsigma , earg = .esigma ) rslash(nsim * length(sigma), mu = mu, sigma = sigma) }, list( .lmu = lmu, .lsigma = lsigma, .emu = emu, .esigma = esigma, .smallno = smallno ))), deriv = eval(substitute(expression({ mu <- eta2theta(eta[, 1], link = .lmu , earg = .emu ) sigma <- eta2theta(eta[, 2], link = .lsigma , earg = .esigma ) dmu.deta <- dtheta.deta(mu, link = .lmu , earg = .emu ) dsigma.deta <- dtheta.deta(sigma, link = .lsigma , earg = .esigma ) zedd <- (y - mu) / sigma d3 <- deriv3(~ w * log(1 - exp(-(((y - mu) / sigma)^2) / 2)) - log(sqrt(2 * pi) * sigma * ((y - mu) / sigma)^2), c("mu", "sigma")) eval.d3 <- eval(d3) dl.dthetas <- attr(eval.d3, "gradient") dl.dmu <- dl.dthetas[, 1] dl.dsigma <- dl.dthetas[, 2] ind0 <- (abs(zedd) < .smallno) dl.dmu[ind0] <- 0 dl.dsigma[ind0] <- -1 / sigma[ind0] c(w) * cbind(dl.dmu * dmu.deta, dl.dsigma * dsigma.deta) }), list( .lmu = lmu, .lsigma = lsigma, .emu = emu, .esigma = esigma, .smallno = smallno ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA_real_, NA_real_, M = M, both = TRUE, diag = TRUE) sd3 <- deriv3(~ w * log(1 - exp(-(((ysim - mu) / sigma)^2) / 2))- log(sqrt(2 * pi) * sigma * ((ysim - mu) / sigma)^2), c("mu", "sigma")) for (ii in 1:( .nsimEIM )) { ysim <- rslash(n, mu = mu, sigma = sigma) seval.d3 <- eval(sd3) dl.dthetas <- attr(seval.d3, "gradient") dl.dmu <- dl.dthetas[, 1] dl.dsigma <- dl.dthetas[, 2] temp3 <- cbind(dl.dmu, dl.dsigma) run.varcov <- run.varcov + temp3[, ind1$row] * temp3[, ind1$col] } run.varcov <- run.varcov / .nsimEIM wz <- if (intercept.only) matrix(colMeans(run.varcov, na.rm = FALSE), n, ncol(run.varcov), byrow = TRUE) else run.varcov dthetas.detas <- cbind(dmu.deta, dsigma.deta) wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] c(w) * wz }), list( .lmu = lmu, .lsigma = lsigma, .emu = emu, .esigma = esigma, .nsimEIM = nsimEIM, .smallno = smallno )))) } dnefghs <- function(x, tau, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(tau)) if (length(x) != N) x <- rep_len(x, N) if (length(tau) != N) tau <- rep_len(tau, N) logdensity <- log(sin(pi*tau)) + (1-tau)*x - log(pi) - log1pexp(x) logdensity[tau < 0] <- NaN logdensity[tau > 1] <- NaN if (log.arg) logdensity else exp(logdensity) } nefghs <- function(link = "logit", itau = NULL, imethod = 1) { if (length(itau) && !is.Numeric(itau, positive = TRUE) || any(itau >= 1)) stop("argument 'itau' must be in (0, 1)") link <- 
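# Commented-out usage sketch for the slash() family above (assumes VGAM is
# loaded; note fitted values are NA because the slash mean does not exist):
#   sdata <- data.frame(y = rslash(500, mu = 1, sigma = 2))
#   fit <- vglm(y ~ 1, slash, data = sdata, trace = TRUE)
#   Coef(fit)  # (mu, sigma) estimated via the simulated EIM (nsimEIM draws)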
as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Natural exponential family generalized hyperbolic ", "secant distribution\n", "f(y) = sin(pi*tau)*exp((1-tau)*y)/(pi*(1+exp(y))\n\n", "Link: ", namesof("tau", link, earg = earg), "\n\n", "Mean: pi / tan(pi * tau)\n"), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("tau"), ltau = .link ) }, list( .link = link ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- namesof("tau", .link , earg = .earg , tag = FALSE) if (!length(etastart)) { wmeany <- if ( .imethod == 1) weighted.mean(y, w) else median(rep(y, w)) if (abs(wmeany) < 0.01) wmeany <- 0.01 tau.init <- atan(pi / wmeany) / pi + 0.5 tau.init[tau.init < 0.03] <- 0.03 tau.init[tau.init > 0.97] <- 0.97 tau.init <- rep_len(if (length( .itau )) .itau else tau.init, n) etastart <- theta2eta(tau.init, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .itau = itau, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { tau <- eta2theta(eta, .link , earg = .earg ) pi / tan(pi * tau) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(tau = .link ) misc$earg <- list(tau = .earg ) misc$expected <- TRUE misc$imethod <- .imethod }), list( .link = link, .earg = earg, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { tau <- eta2theta(eta, .link , earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dnefghs(x = y, tau = tau, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg ))), vfamily = c("nefghs"), validparams = eval(substitute(function(eta, y, extra = NULL) { tau <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(tau)) && all(0 < tau) okay1 }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ tau <- eta2theta(eta, .link , earg = .earg ) dl.dtau <- pi / tan(pi * tau) - y dtau.deta <- dtheta.deta(tau, .link , earg = .earg ) w * dl.dtau * dtau.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ ned2l.dtau2 <- (pi / sin(pi * tau))^2 wz <- ned2l.dtau2 * dtau.deta^2 c(w) * wz }), list( .link = link )))) } dlogF <- function(x, shape1, shape2, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) logdensity <- shape1*x - lbeta(shape1, shape2) - (shape1 + shape2) * log1pexp(x) logdensity[is.infinite(x)] <- -Inf # 20141209 KaiH if (log.arg) logdensity else exp(logdensity) } logF <- function(lshape1 = "loge", lshape2 = "loge", ishape1 = NULL, ishape2 = 1, imethod = 1) { if (length(ishape1) && !is.Numeric(ishape1, positive = TRUE)) stop("argument 'ishape1' must be positive") if ( # length(ishape2) && !is.Numeric(ishape2, positive = TRUE)) stop("argument 'ishape2' must be positive") lshape1 <- as.list(substitute(lshape1)) eshape1 <- link2list(lshape1) lshape1 <- attr(eshape1, "function.name") lshape2 <- as.list(substitute(lshape2)) eshape2 <- link2list(lshape2) lshape2 <- attr(eshape2, "function.name") if 
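# Commented-out density checks for dnefghs() and dlogF() above
# (illustrative sketch; assumes VGAM is loaded):
#   integrate(dnefghs, -Inf, Inf, tau = 0.7)$value             # Close to 1
#   integrate(dlogF, -Inf, Inf, shape1 = 2, shape2 = 3)$value  # Close to 1
#   digamma(2) - digamma(3)  # Mean of the log F(2, 3) distribution: -0.5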
(!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("log F distribution\n", "f(y) = exp(-shape2 * y) / (beta(shape1, shape2) * ", "(1 + exp(-y))^(shape1 + shape2))\n\n", "Link: ", namesof("shape1", lshape1, earg = eshape1), ", ", namesof("shape2", lshape2, earg = eshape2), "\n\n", "Mean: digamma(shape1) - digamma(shape2)"), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("shape1", "shape2"), lshape1 = .lshape1 , lshape2 = .lshape2 , imethod = .imethod ) }, list( .imethod = imethod, .lshape1 = lshape1, .lshape2 = lshape2 ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c( namesof("shape1", .lshape1 , earg = .eshape1 , tag = FALSE), namesof("shape2", .lshape2 , earg = .eshape2 , tag = FALSE)) if (!length(etastart)) { wmeany <- if ( .imethod == 1) weighted.mean(y, w) else median(rep(y, w)) shape1.init <- shape2.init <- rep_len( .ishape2 , n) shape1.init <- if (length( .ishape1)) rep_len( .ishape1, n) else { index1 <- (y > wmeany) shape1.init[ index1] <- shape2.init[ index1] + 1/1 shape1.init[!index1] <- shape2.init[!index1] - 1/1 shape1.init <- pmax(shape1.init, 1/8) shape1.init } etastart <- cbind(theta2eta(shape1.init, .lshape1 , earg = .eshape1 ), theta2eta(shape2.init, .lshape2 , earg = .eshape2 )) } }), list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2, .ishape1 = ishape1, .ishape2 = ishape2, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { shape1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, 2], .lshape2 , earg = .eshape2 ) digamma(shape1) - digamma(shape2) }, list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 ))), last = eval(substitute(expression({ misc$link <- c(shape1 = .lshape1 , shape2 = .lshape2 ) misc$earg <- list(shape1 = .eshape1 , shape2 = .eshape2 ) misc$expected <- TRUE misc$imethod <- .imethod }), list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shape1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, 2], .lshape2 , earg = .eshape2 ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dlogF(x = y, shape1 = shape1, shape2 = shape2, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 ))), vfamily = c("logF"), validparams = eval(substitute(function(eta, y, extra = NULL) { shape1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, 2], .lshape2 , earg = .eshape2 ) okay1 <- all(is.finite(shape1)) && all(0 < shape1) && all(is.finite(shape2)) && all(0 < shape2) okay1 }, list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 ))), deriv = eval(substitute(expression({ shape1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, 2], .lshape2 , earg = .eshape2 ) tmp888 <- digamma(shape1 + shape2) - log1pexp(-y) dl.dshape1 <- tmp888 - digamma(shape1) dl.dshape2 <- tmp888 - digamma(shape2) - y dshape1.deta <- 
dtheta.deta(shape1, .lshape1 , earg = .eshape1 ) dshape2.deta <- dtheta.deta(shape2, .lshape2 , earg = .eshape2 ) c(w) * cbind(dl.dshape1 * dshape1.deta, dl.dshape2 * dshape2.deta) }), list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 ))), weight = eval(substitute(expression({ tmp888 <- trigamma(shape1 + shape2) ned2l.dshape12 <- trigamma(shape1) - tmp888 ned2l.dshape22 <- trigamma(shape2) - tmp888 ned2l.dshape1shape2 <- -tmp888 wz <- matrix(0, n, dimm(M)) wz[, iam(1, 1, M = M)] <- ned2l.dshape12 * dshape1.deta^2 wz[, iam(2, 2, M = M)] <- ned2l.dshape22 * dshape2.deta^2 wz[, iam(1, 2, M = M)] <- ned2l.dshape1shape2 * dshape1.deta * dshape2.deta c(w) * wz }), list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 )))) } dbenf <- function(x, ndigits = 1, log = FALSE) { if (!is.Numeric(ndigits, length.arg = 1, positive = TRUE, integer.valued = TRUE) || ndigits > 2) stop("argument 'ndigits' must be 1 or 2") lowerlimit <- ifelse(ndigits == 1, 1, 10) upperlimit <- ifelse(ndigits == 1, 9, 99) if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) ans <- x * NA indexTF <- is.finite(x) & (x >= lowerlimit) ans[indexTF] <- log10(1 + 1/x[indexTF]) ans[!is.na(x) & !is.nan(x) & ((x < lowerlimit) | (x > upperlimit) | (x != round(x)))] <- 0.0 if (log.arg) log(ans) else ans } rbenf <- function(n, ndigits = 1) { if (!is.Numeric(ndigits, length.arg = 1, positive = TRUE, integer.valued = TRUE) || ndigits > 2) stop("argument 'ndigits' must be 1 or 2") lowerlimit <- ifelse(ndigits == 1, 1, 10) upperlimit <- ifelse(ndigits == 1, 9, 99) use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n myrunif <- runif(use.n) ans <- rep_len(lowerlimit, use.n) for (ii in (lowerlimit+1):upperlimit) { indexTF <- (pbenf(ii-1, ndigits = ndigits) < myrunif) & (myrunif <= pbenf(ii, ndigits = ndigits)) ans[indexTF] <- ii } ans } pbenf <- function(q, ndigits = 1, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (!is.Numeric(ndigits, length.arg = 1, positive = TRUE, integer.valued = TRUE) || ndigits > 2) stop("argument 'ndigits' must be 1 or 2") lowerlimit <- ifelse(ndigits == 1, 1, 10) upperlimit <- ifelse(ndigits == 1, 9, 99) ans <- q * NA floorq <- floor(q) indexTF <- is.finite(q) & (floorq >= lowerlimit) if (ndigits == 1) { if (lower.tail) { if (log.p) { ans[indexTF] <- log(log10(1 + floorq[indexTF])) ans[q < lowerlimit ] <- -Inf ans[q >= upperlimit] <- 0 } else { ans[indexTF] <- log10(1 + floorq[indexTF]) ans[q < lowerlimit] <- 0 ans[q >= upperlimit] <- 1 } } else { if (log.p) { ans[indexTF] <- log1p(-log10(1 + floorq[indexTF])) ans[q < lowerlimit] <- 0 ans[q >= upperlimit] <- -Inf } else { ans[indexTF] <- log10(10 / (1 + floorq[indexTF])) ans[q < lowerlimit] <- 1 ans[q >= upperlimit] <- 0 } } } else { if (lower.tail) { if (log.p) { ans[indexTF] <- log(log10((1 + floorq[indexTF])/10)) ans[q < lowerlimit ] <- -Inf ans[q >= upperlimit] <- 0 } else { ans[indexTF] <- log10((1 + floorq[indexTF])/10) ans[q < lowerlimit] <- 0 ans[q >= upperlimit] <- 1 } } else { if (log.p) { ans[indexTF] <- log(log10(100/(1 + floorq[indexTF]))) ans[q < lowerlimit] <- 0 ans[q >= upperlimit] <- -Inf } else { ans[indexTF] <- log10(100/(1 + floorq[indexTF])) 
ans[q < lowerlimit] <- 1
        ans[q >= upperlimit] <- 0
      }
    }
  }
  ans
}
if (FALSE)
qbenf <- function(p, ndigits = 1) {
  if (!is.Numeric(ndigits, length.arg = 1, positive = TRUE,
                  integer.valued = TRUE) || ndigits > 2)
    stop("argument 'ndigits' must be 1 or 2")
  lowerlimit <- ifelse(ndigits == 1, 1, 10)
  upperlimit <- ifelse(ndigits == 1, 9, 99)
  bad <- !is.na(p) & !is.nan(p) & ((p < 0) | (p > 1))
  if (any(bad))
    stop("bad input for argument 'p'")
  ans <- rep_len(lowerlimit, length(p))
  for (ii in (lowerlimit+1):upperlimit) {
    indexTF <- is.finite(p) & (pbenf(ii-1, ndigits = ndigits) < p) &
               (p <= pbenf(ii, ndigits = ndigits))
    ans[indexTF] <- ii
  }
  ans[ is.na(p) | is.nan(p)] <- NA
  ans[!is.na(p) & !is.nan(p) & (p == 0)] <- lowerlimit
  ans[!is.na(p) & !is.nan(p) & (p == 1)] <- upperlimit
  ans
}
qbenf <- function(p, ndigits = 1, lower.tail = TRUE, log.p = FALSE) {
  if (!is.Numeric(ndigits, length.arg = 1, positive = TRUE,
                  integer.valued = TRUE) || ndigits > 2)
    stop("argument 'ndigits' must be 1 or 2")
  if (!is.logical(lower.tail) || length(lower.tail ) != 1)
    stop("bad input for argument 'lower.tail'")
  if (!is.logical(log.p) || length(log.p) != 1)
    stop("bad input for argument 'log.p'")
  if (log.p) {
    bad <- ((p > 0) | is.na(p) | is.nan(p))
  } else {
    bad <- ((p < 0) | (p > 1) | is.na(p) | is.nan(p))
  }
  if (any(bad))
    stop("bad input for argument 'p'")
  lowerlimit <- ifelse(ndigits == 1, 1, 10)
  upperlimit <- ifelse(ndigits == 1, 9, 99)
  ans <- rep_len(lowerlimit, length(p))
  if (lower.tail) {
    for (ii in (lowerlimit+1):upperlimit) {
      indexTF <- is.finite(p) &
                 (pbenf(ii-1, ndigits = ndigits,
                        lower.tail = lower.tail, log.p = log.p) < p) &
                 (p <= pbenf(ii, ndigits = ndigits,
                             lower.tail = lower.tail, log.p = log.p))
      ans[indexTF] <- ii
    }
  } else {
    ## when lower.tail = FALSE, pbenf(ii-1) >= p & pbenf(ii) < p
    for (ii in (lowerlimit+1):upperlimit) {
      indexTF <- is.finite(p) &
                 (pbenf(ii-1, ndigits = ndigits,
                        lower.tail = lower.tail, log.p = log.p) >= p) &
                 (p > pbenf(ii, ndigits = ndigits,
                            lower.tail = lower.tail, log.p = log.p))
      ans[indexTF] <- ii
    }
  }
  if (lower.tail) {
    if (log.p) {
      ans[p > 0] <- NaN
      ans[p == -Inf] <- lowerlimit
    } else {
      ans[p < 0] <- NaN
      ans[p == 0] <- lowerlimit
      ans[p == 1] <- upperlimit
      ans[p > 1] <- NaN
    }
  } else {
    if (log.p) {
      ans[p > 0] <- NaN
      ans[p == -Inf] <- upperlimit
    } else {
      ans[p < 0] <- NaN
      ans[p == 0] <- upperlimit
      ans[p == 1] <- lowerlimit
      ans[p > 1] <- NaN
    }
  }
  ans
}
truncgeometric <-
  function(upper.limit = Inf,  # lower.limit = 1,  # Inclusive
           link = "logit", expected = TRUE,
           imethod = 1, iprob = NULL, zero = NULL) {
  if (is.finite(upper.limit) &&
      !is.Numeric(upper.limit, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'upper.limit'")
  if (any(upper.limit < 0))
    stop("bad input for argument 'upper.limit'")
  if (!is.logical(expected) || length(expected) != 1)
    stop("bad input for argument 'expected'")
  link <- as.list(substitute(link))
  earg <- link2list(link)
  link <- attr(earg, "function.name")
  if (!is.Numeric(imethod, length.arg = 1,
                  integer.valued = TRUE, positive = TRUE) || imethod > 3)
    stop("argument 'imethod' must be 1 or 2 or 3")
  uu.ll <- min(upper.limit)
  new("vglmff",
  blurb = c("Truncated geometric distribution ", "(P[Y=y] =\n", " ",
            "prob * (1 - prob)^y / [1-(1-prob)^", uu.ll+1,
            "], y = 0,1,...,", uu.ll, ")\n",
            "Link: ", namesof("prob", link, earg = earg), "\n",
            "Mean: mu = 1 / prob - 1 ",
            ifelse(is.finite(upper.limit),
                   paste("- (", upper.limit+1, ") * (1 - prob)^",
                         upper.limit+1, " / (1 - ", "(1 - prob)^",
                         upper.limit+1, ")", sep = ""), "")),
  constraints = eval(substitute(expression({
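# Commented-out checks for the Benford functions dbenf()/pbenf()/qbenf()/
# rbenf() above (illustrative sketch; assumes VGAM is loaded):
#   dbenf(1:9)         # log10(1 + 1/d), the classical first-digit law
#   sum(dbenf(1:9))    # Exactly 1
#   qbenf(pbenf(1:9))  # Expect 1:9 (lower-tail round trip on the support)
#   qbenf(0.5)         # Median first digit: 3, since pbenf(3) = log10(4) >= 0.5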
constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, expected = .expected , imethod = .imethod , multipleResponses = TRUE, parameters.names = c("prob"), upper.limit = .upper.limit , zero = .zero ) }, list( .zero = zero, .expected = expected, .imethod = imethod, .upper.limit = upper.limit ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 1 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly extra$upper.limit <- matrix( .upper.limit , n, ncoly, byrow = TRUE) if (any(y > extra$upper.limit)) stop("some response values greater than argument 'upper.limit'") mynames1 <- param.names("prob", ncoly) predictors.names <- namesof(mynames1, .link , earg = .earg , tag = FALSE) if (!length(etastart)) { prob.init <- if ( .imethod == 2) 1 / (1 + y + 1/16) else if ( .imethod == 3) 1 / (1 + apply(y, 2, median) + 1/16) else 1 / (1 + colSums(y * w) / colSums(w) + 1/16) if (!is.matrix(prob.init)) prob.init <- matrix(prob.init, n, M, byrow = TRUE) if (length( .iprob )) prob.init <- matrix( .iprob , n, M, byrow = TRUE) etastart <- theta2eta(prob.init, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .upper.limit = upper.limit, .imethod = imethod, .iprob = iprob ))), linkinv = eval(substitute(function(eta, extra = NULL) { prob <- eta2theta(eta, .link , earg = .earg ) QQQ <- 1 - prob upper.limit <- extra$upper.limit tmp1 <- QQQ^(upper.limit+1) answer <- 1 / prob - 1 - (upper.limit+1) * tmp1 / (1 - tmp1) answer[!is.finite(answer)] <- 1 / prob[!is.finite(answer)] - 1 answer }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .link , ncoly)) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for (ii in 1:ncoly) { misc$earg[[ii]] <- .earg } misc$M1 <- M1 misc$multipleResponses <- TRUE misc$expected <- .expected misc$imethod <- .imethod misc$iprob <- .iprob }), list( .link = link, .earg = earg, .iprob = iprob, .upper.limit = upper.limit, .expected = expected, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { prob <- eta2theta(eta, .link , earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { upper.limit <- extra$upper.limit ll.elts <- c(w) * (dgeom(x = y, prob = prob, log = TRUE) - log1p(-(1.0 - prob)^(1 + upper.limit))) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg ))), vfamily = c("truncgeometric"), validparams = eval(substitute(function(eta, y, extra = NULL) { prob <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(prob)) && all(0 < prob & prob < 1) okay1 }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ prob <- eta2theta(eta, .link , earg = .earg ) sss <- upper.limit <- extra$upper.limit # Is a matrix QQQ <- 1 - prob tmp1 <- QQQ^(upper.limit + 1) dl.dprob <- 1 / prob + (0 - y) / (1 - prob) - (1+upper.limit) * QQQ^(upper.limit - 0) / (1 - tmp1) dl.dprob[!is.finite(upper.limit)] <- 1 / prob[!is.finite(upper.limit)] + (0 - y[!is.finite(upper.limit)]) / (1 - prob[!is.finite(upper.limit)]) dprobdeta <- dtheta.deta(prob, .link , earg = .earg ) c(w) * 
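# Commented-out usage sketch for truncgeometric() (illustrative; simulates
# a geometric sample truncated at 'upper.limit' by simple rejection, so
# adjust the sample sizes as needed):
#   U <- 5; yy <- rgeom(3000, prob = 0.25)
#   tdata <- data.frame(y = head(yy[yy <= U], 1000))
#   fit <- vglm(y ~ 1, truncgeometric(upper.limit = U), data = tdata)
#   Coef(fit)  # prob-hat, near 0.25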
cbind(dl.dprob * dprobdeta)
  }), list( .link = link, .earg = earg,
            .upper.limit = upper.limit,
            .expected = expected ))),
  weight = eval(substitute(expression({
    eim.oim.fun <- function(mu.y, sss)
      ifelse(is.finite(sss),
             1/prob^2 + (0 + mu.y) / QQQ^2 - (1+sss) *
             ((sss-0) * QQQ^(sss-1) / (1 - tmp1) +
              (1+sss) * QQQ^(2*sss) / (1 - tmp1)^2),
             1 / (prob^2 * (1 - prob)))
    ned2l.dprob2 <- if ( .expected ) {
      eim.oim.fun(mu, sss)
    } else {
      eim.oim.fun(y, sss)
    }
    wz <- ned2l.dprob2 * dprobdeta^2
    if ( !( .expected ))
      wz <- wz - dl.dprob * d2theta.deta2(prob, .link , earg = .earg )
    c(w) * wz
  }), list( .link = link, .earg = earg, .expected = expected ))))
}
betaff <-
  function(A = 0, B = 1, lmu = "logit", lphi = "loge",
           imu = NULL, iphi = NULL,  # imethod = 1,
           gprobs.y = ppoints(8),  # (1:9)/10,
           gphi = exp(-3:5)/4,
           zero = NULL) {
  if (!is.Numeric(A, length.arg = 1) ||
      !is.Numeric(B, length.arg = 1) || A >= B)
    stop("A must be < B, and both must be of length one")
  stdbeta <- (A == 0 && B == 1)
  lmu <- as.list(substitute(lmu))
  emu <- link2list(lmu)
  lmu <- attr(emu, "function.name")
  lphi <- as.list(substitute(lphi))
  ephi <- link2list(lphi)
  lphi <- attr(ephi, "function.name")
  if (length(imu) && (!is.Numeric(imu, positive = TRUE) ||
      any(imu <= A) || any(imu >= B)))
    stop("bad input for argument 'imu'")
  if (length(iphi) && !is.Numeric(iphi, positive = TRUE))
    stop("bad input for argument 'iphi'")
  new("vglmff",
  blurb = c("Beta distribution parameterized by mu and a ",
            "precision parameter\n",
            if (stdbeta)
            paste("f(y) = y^(mu*phi-1) * (1-y)^((1-mu)*phi-1)",
                  "/ beta(mu*phi,(1-mu)*phi),\n",
                  "0<y<1, 0<mu<1, phi>0\n\n") else
            paste("f(y) = (y-",A,")^(mu1*phi-1) * (",B,
                  "-y)^(((1-mu1)*phi)-1) / \n(beta(mu1*phi,(1-mu1)*phi) * (",
                  B, "-", A, ")^(phi-1)),\n",
                  A," < y < ",B, ", ", A," < mu < ",B,
                  ", mu = ", A, " + ", (B-A), " * mu1",
                  ", phi > 0\n\n", sep = ""),
            "Links: ",
            namesof("mu", lmu, earg = emu), ", ",
            namesof("phi", lphi, earg = ephi)),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.VGAM(constraints, x = x, .zero , M)
  }), list( .zero = zero ))),
  infos = eval(substitute(function(...) {
    list(M1 = 2, Q1 = 1,
         expected = TRUE,
         multipleResponses = FALSE,
         parameters.names = c("mu", "phi"),
         A = .A , B = .B ,
         zero = .zero )
  }, list( .zero = zero, .A = A, .B = B ))),
  initialize = eval(substitute(expression({
    if (min(y) <= .A || max(y) >= .B)
      stop("data not within (A, B)")
    temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y
    extra$A <- .A  # Needed for @validparams
    extra$B <- .B
    predictors.names <- c(namesof("mu", .lmu , .emu , short = TRUE),
                          namesof("phi", .lphi , .ephi, short = TRUE))
    if (!length(etastart)) {
      NOS <- 1
      muu.init <- phi.init <- matrix(NA_real_, n, NOS)
      gprobs.y <- .gprobs.y
      gphi <- if (length( .iphi )) .iphi else .gphi
      betaff.Loglikfun <- function(muu, phi, y, x, w, extraargs) {
        zedd <- (y - extraargs$A) / ( extraargs$B - extraargs$A)
        m1u <- (muu - extraargs$A) / ( extraargs$B - extraargs$A)
        shape1 <- phi * m1u
        shape2 <- (1 - m1u) * phi
        sum(c(w) * (dbeta(x = zedd, shape1, shape2, log = TRUE) -
                    log(abs( extraargs$B - extraargs$A ))))
      }
      for (jay in 1:NOS) {  # For each response 'y_jay'... do:
        gmuu <- if (length( .imu )) .imu else
                quantile(y[, jay], probs = gprobs.y)
        try.this <- grid.search2(gmuu, gphi,
                                 objfun = betaff.Loglikfun,
                                 y = y[, jay], w = w[, jay],
                                 extraargs = list(A = .A , B = .B ),
                                 ret.objfun = TRUE)  # Last value is the loglik
        muu.init[, jay] <- try.this["Value1"]
        phi.init[, jay] <- try.this["Value2"]
      }  # for (jay ...)
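# Commented-out usage sketch for betaff() (illustrative; mu is the mean and
# phi = shape1 + shape2 is the precision, so shape1 = mu*phi):
#   bdata <- data.frame(y = rbeta(1000, shape1 = 2, shape2 = 6))
#   fit <- vglm(y ~ 1, betaff, data = bdata, trace = TRUE)
#   Coef(fit)  # (mu, phi), near (0.25, 8) here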
if (FALSE) { mu.init <- if (is.Numeric( .imu )) .imu else { if ( .imethod == 1) weighted.mean(y, w) else (y + weighted.mean(y, w)) / 2 } mu1.init <- (mu.init - .A ) / ( .B - .A ) # In (0,1) phi.init <- if (is.Numeric( .iphi )) .iphi else max(0.01, -1 + ( .B - .A )^2 * mu1.init*(1-mu1.init)/var(y)) } etastart <- matrix(0, n, 2) etastart[, 1] <- theta2eta(muu.init, .lmu , earg = .emu ) etastart[, 2] <- theta2eta(phi.init, .lphi , earg = .ephi ) } }), list( .lmu = lmu, .lphi = lphi, .imu = imu, .iphi = iphi, .A = A, .B = B, .emu = emu, .ephi = ephi, .gprobs.y = gprobs.y, .gphi = gphi ))), linkinv = eval(substitute(function(eta, extra = NULL) { mu <- eta2theta(eta[, 1], .lmu , .emu ) mu }, list( .lmu = lmu, .emu = emu, .A = A, .B = B))), last = eval(substitute(expression({ misc$link <- c(mu = .lmu , phi = .lphi ) misc$earg <- list(mu = .emu , phi = .ephi ) misc$limits <- c( .A , .B ) misc$stdbeta <- .stdbeta }), list( .lmu = lmu, .lphi = lphi, .A = A, .B = B, .emu = emu, .ephi = ephi, .stdbeta = stdbeta ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { mu <- eta2theta(eta[, 1], .lmu , earg = .emu ) phi <- eta2theta(eta[, 2], .lphi , earg = .ephi ) m1u <- if ( .stdbeta ) mu else (mu - .A ) / ( .B - .A ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { shape1 <- phi * m1u shape2 <- (1 - m1u) * phi zedd <- (y - .A) / ( .B - .A) ll.elts <- c(w) * (dbeta(x = zedd, shape1 = shape1, shape2 = shape2, log = TRUE) - log( abs( .B - .A ))) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lmu = lmu, .lphi = lphi, .A = A, .B = B, .emu = emu, .ephi = ephi, .stdbeta = stdbeta ))), vfamily = "betaff", validparams = eval(substitute(function(eta, y, extra = NULL) { mu <- eta2theta(eta[, 1], .lmu , .emu ) phi <- eta2theta(eta[, 2], .lphi , .ephi ) okay1 <- all(is.finite(mu )) && all(extra$A < mu & mu < extra$B) && all(is.finite(phi)) && all(0 < phi) okay1 }, list( .lmu = lmu, .lphi = lphi, .A = A, .B = B, .emu = emu, .ephi = ephi ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) mu <- eta2theta(eta[, 1], .lmu , earg = .emu ) phi <- eta2theta(eta[, 2], .lphi , earg = .ephi ) m1u <- if ( .stdbeta ) mu else (mu - .A ) / ( .B - .A ) shape1 <- phi * m1u shape2 <- (1 - m1u) * phi .A + ( .B - .A ) * rbeta(nsim * length(shape1), shape1 = shape1, shape2 = shape2) }, list( .lmu = lmu, .lphi = lphi, .A = A, .B = B, .emu = emu, .ephi = ephi, .stdbeta = stdbeta ))), deriv = eval(substitute(expression({ mu <- eta2theta(eta[, 1], .lmu , .emu ) phi <- eta2theta(eta[, 2], .lphi , .ephi ) m1u <- if ( .stdbeta ) mu else (mu - .A) / ( .B - .A) dmu.deta <- dtheta.deta(mu, .lmu , .emu ) dmu1.dmu <- 1 / ( .B - .A ) dphi.deta <- dtheta.deta(phi, .lphi , .ephi ) temp1 <- m1u*phi temp2 <- (1-m1u)*phi if ( .stdbeta ) { dl.dmu1 <- phi*(digamma(temp2) - digamma(temp1) + log(y) - log1p(-y)) dl.dphi <- digamma(phi) - mu*digamma(temp1) - (1-mu)*digamma(temp2) + mu*log(y) + (1-mu)*log1p(-y) } else { dl.dmu1 <- phi*(digamma(temp2) - digamma(temp1) + log(y-.A) - log( .B-y)) dl.dphi <- digamma(phi) - m1u*digamma(temp1) - (1-m1u)*digamma(temp2) + m1u*log(y-.A) + (1-m1u)*log( .B-y) - log( .B -.A) } c(w) * cbind(dl.dmu1 * dmu1.dmu * dmu.deta, dl.dphi * dphi.deta) }), list( .lmu = lmu, .lphi = lphi, .emu = emu, .ephi = ephi, .A = A, .B = B, .stdbeta = stdbeta ))), weight 
= eval(substitute(expression({ ned2l.dmu12 <- (trigamma(temp1) + trigamma(temp2)) * phi^2 ned2l.dphi2 <- -trigamma(phi) + trigamma(temp1) * m1u^2 + trigamma(temp2) * (1-m1u)^2 ned2l.dmu1phi <- temp1 * trigamma(temp1) - temp2 * trigamma(temp2) wz <- matrix(NA_real_, n, dimm(M)) wz[, iam(1, 1, M)] <- ned2l.dmu12 * dmu1.dmu^2 * dmu.deta^2 wz[, iam(2, 2, M)] <- ned2l.dphi2 * dphi.deta^2 wz[, iam(1, 2, M)] <- ned2l.dmu1phi * dmu1.dmu * dmu.deta * dphi.deta c(w) * wz }), list( .A = A, .B = B )))) } betaR <- function(lshape1 = "loge", lshape2 = "loge", i1 = NULL, i2 = NULL, trim = 0.05, A = 0, B = 1, parallel = FALSE, zero = NULL) { lshape1 <- as.list(substitute(lshape1)) eshape1 <- link2list(lshape1) lshape1 <- attr(eshape1, "function.name") lshape2 <- as.list(substitute(lshape2)) eshape2 <- link2list(lshape2) lshape2 <- attr(eshape2, "function.name") if (length( i1 ) && !is.Numeric( i1, positive = TRUE)) stop("bad input for argument 'i1'") if (length( i2 ) && !is.Numeric( i2, positive = TRUE)) stop("bad input for argument 'i2'") if (!is.Numeric(A, length.arg = 1) || !is.Numeric(B, length.arg = 1) || A >= B) stop("A must be < B, and both must be of length one") stdbeta <- (A == 0 && B == 1) # stdbeta == T iff standard beta distn new("vglmff", blurb = c("Two-parameter Beta distribution ", "(shape parameters parameterization)\n", if (stdbeta) paste("y^(shape1-1) * (1-y)^(shape2-1) / B(shape1,shape2),", "0 <= y <= 1, shape1>0, shape2>0\n\n") else paste("(y-",A,")^(shape1-1) * (",B, "-y)^(shape2-1) / [B(shape1,shape2) * (", B, "-", A, ")^(shape1+shape2-1)], ", A," <= y <= ",B," shape1>0, shape2>0\n\n", sep = ""), "Links: ", namesof("shape1", lshape1, earg = eshape1), ", ", namesof("shape2", lshape2, earg = eshape2)), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints, apply.int = TRUE) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M) }), list( .parallel = parallel, .zero = zero ))), infos = eval(substitute(function(...) 
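# Commented-out usage sketch for betaR() (illustrative; the same kind of
# data as a betaff() fit, but in the (shape1, shape2) parameterization):
#   bdata <- data.frame(y = rbeta(1000, shape1 = 2, shape2 = 6))
#   fit <- vglm(y ~ 1, betaR, data = bdata, trace = TRUE)
#   Coef(fit)          # (shape1, shape2), near (2, 6)
#   head(fitted(fit))  # A + (B-A) * shape1/(shape1+shape2) = 0.25 here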
{ list(M1 = 2, Q1 = 1, A = .A, B = .B, multipleResponses = FALSE, zero = .zero ) }, list( .A = A, .B = B, .zero = zero ))), initialize = eval(substitute(expression({ if (min(y) <= .A || max(y) >= .B) stop("data not within (A, B)") if (NCOL(y) != 1) stop("response must be a vector or a one-column matrix") w.y.check(w = w, y = y) predictors.names <- c(namesof("shape1", .lshape1 , earg = .eshape1 , short = TRUE), namesof("shape2", .lshape2 , earg = .eshape2 , short = TRUE)) if (!length(etastart)) { mu1d <- mean(y, trim = .trim ) uu <- (mu1d - .A) / ( .B - .A) DD <- ( .B - .A)^2 pinit <- max(0.01, uu^2 * (1 - uu) * DD / var(y) - uu) qinit <- max(0.01, pinit * (1 - uu) / uu) etastart <- matrix(0, n, 2) etastart[, 1] <- theta2eta( pinit, .lshape1 , earg = .eshape1 ) etastart[, 2] <- theta2eta( qinit, .lshape2 , earg = .eshape2 ) } if (is.Numeric( .i1 )) etastart[, 1] <- theta2eta( .i1 , .lshape1 , earg = .eshape1 ) if (is.Numeric( .i2 )) etastart[, 2] <- theta2eta( .i2 , .lshape2 , earg = .eshape2 ) }), list( .lshape1 = lshape1, .lshape2 = lshape2, .i1 = i1, .i2 = i2, .trim = trim, .A = A, .B = B, .eshape1 = eshape1, .eshape2 = eshape2 ))), linkinv = eval(substitute(function(eta, extra = NULL) { shapes <- cbind(eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ), eta2theta(eta[, 2], .lshape2 , earg = .eshape2 )) .A + ( .B - .A ) * shapes[, 1] / (shapes[, 1] + shapes[, 2]) }, list( .lshape1 = lshape1, .lshape2 = lshape2, .A = A, .B = B, .eshape1 = eshape1, .eshape2 = eshape2 ))), last = eval(substitute(expression({ misc$link <- c(shape1 = .lshape1 , shape2 = .lshape2 ) misc$earg <- list(shape1 = .eshape1 , shape2 = .eshape2 ) misc$limits <- c( .A , .B ) }), list( .lshape1 = lshape1, .lshape2 = lshape2, .A = A, .B = B, .eshape1 = eshape1, .eshape2 = eshape2 ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shapes <- cbind(eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ), eta2theta(eta[, 2], .lshape2 , earg = .eshape2 )) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { zedd <- (y - .A ) / ( .B - .A ) ll.elts <- c(w) * (dbeta(x = zedd, shape1 = shapes[, 1], shape2 = shapes[, 2], log = TRUE) - log( abs( .B - .A ))) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape1 = lshape1, .lshape2 = lshape2, .A = A, .B = B, .eshape1 = eshape1, .eshape2 = eshape2 ))), vfamily = "betaR", validparams = eval(substitute(function(eta, y, extra = NULL) { shapes <- cbind(eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ), eta2theta(eta[, 2], .lshape2 , earg = .eshape2 )) okay1 <- all(is.finite(shapes)) && all(0 < shapes) okay1 }, list( .lshape1 = lshape1, .lshape2 = lshape2, .A = A, .B = B, .eshape1 = eshape1, .eshape2 = eshape2 ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) shapes <- cbind(eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ), eta2theta(eta[, 2], .lshape2 , earg = .eshape2 )) .A + ( .B - .A ) * rbeta(nsim * length(shapes[, 1]), shape1 = shapes[, 1], shape2 = shapes[, 2]) }, list( .lshape1 = lshape1, .lshape2 = lshape2, .A = A, .B = B, .eshape1 = eshape1, .eshape2 = eshape2 ))), deriv = eval(substitute(expression({ shapes <- cbind(eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ), eta2theta(eta[, 2], .lshape2 , earg = .eshape2 )) dshapes.deta <- cbind(dtheta.deta(shapes[, 1], .lshape1 , earg = .eshape1), dtheta.deta(shapes[, 
2], .lshape2 , earg = .eshape2)) dl.dshapes <- cbind(log(y - .A ), log( .B - y)) - digamma(shapes) + digamma(shapes[, 1] + shapes[, 2]) - log( .B - .A ) c(w) * dl.dshapes * dshapes.deta }), list( .lshape1 = lshape1, .lshape2 = lshape2, .A = A, .B = B, .eshape1 = eshape1, .eshape2 = eshape2 ))), weight = expression({ trig.sum <- trigamma(shapes[, 1] + shapes[, 2]) ned2l.dshape12 <- trigamma(shapes[, 1]) - trig.sum ned2l.dshape22 <- trigamma(shapes[, 2]) - trig.sum ned2l.dshape1shape2 <- -trig.sum wz <- matrix(NA_real_, n, dimm(M)) # dimm(M) == 3 wz[, iam(1, 1, M)] <- ned2l.dshape12 * dshapes.deta[, 1]^2 wz[, iam(2, 2, M)] <- ned2l.dshape22 * dshapes.deta[, 2]^2 wz[, iam(1, 2, M)] <- ned2l.dshape1shape2 * dshapes.deta[, 1] * dshapes.deta[, 2] c(w) * wz })) } betaprime <- function(lshape = "loge", ishape1 = 2, ishape2 = NULL, zero = NULL) { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") new("vglmff", blurb = c("Beta-prime distribution\n", "y^(shape1-1) * (1+y)^(-shape1-shape2) / Beta(shape1,shape2),", " y>0, shape1>0, shape2>0\n\n", "Links: ", namesof("shape1", lshape, earg = eshape), ", ", namesof("shape2", lshape, earg = eshape), "\n", "Mean: shape1/(shape2-1) provided shape2>1"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("shape1", "shape2"), lshape1 = .lshape , lshape2 = .lshape , zero = .zero ) }, list( .zero = zero, .lshape = lshape ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("shape1", .lshape , earg = .eshape , short = TRUE), namesof("shape2", .lshape , earg = .eshape , short = TRUE)) if (is.numeric( .ishape1) && is.numeric( .ishape2 )) { vec <- c( .ishape1, .ishape2 ) vec <- c(theta2eta(vec[1], .lshape , earg = .eshape ), theta2eta(vec[2], .lshape , earg = .eshape )) etastart <- matrix(vec, n, 2, byrow = TRUE) } if (!length(etastart)) { init1 <- if (length( .ishape1 )) rep_len( .ishape1 , n) else rep_len(1, n) init2 <- if (length( .ishape2 )) rep_len( .ishape2 , n) else 1 + init1 / (y + 0.1) etastart <- matrix(theta2eta(c(init1, init2), .lshape , earg = .eshape ), n, 2, byrow = TRUE) } }), list( .lshape = lshape, .eshape = eshape, .ishape1 = ishape1, .ishape2 = ishape2 ))), linkinv = eval(substitute(function(eta, extra = NULL) { shapes <- eta2theta(eta, .lshape , earg = .eshape ) ifelse(shapes[, 2] > 1, shapes[, 1] / (shapes[, 2] - 1), NA) }, list( .lshape = lshape, .eshape = eshape ))), last = eval(substitute(expression({ misc$link <- c(shape1 = .lshape , shape2 = .lshape ) misc$earg <- list(shape1 = .eshape , shape2 = .eshape ) }), list( .lshape = lshape, .eshape = eshape ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shapes <- eta2theta(eta, .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * ((shapes[, 1]-1) * log(y) - lbeta(shapes[, 1], shapes[, 2]) - (shapes[, 2] + shapes[, 1]) * log1p(y)) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape = lshape, .eshape = eshape ))), vfamily = "betaprime", validparams = eval(substitute(function(eta, y, extra = NULL) { shapes <- eta2theta(eta, .lshape , earg = .eshape ) okay1 <- all(is.finite(shapes)) && all(0 < 
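# Commented-out usage sketch for betaprime() (illustrative; if X ~ Beta(a, b)
# then X/(1-X) follows the beta-prime distribution with shapes (a, b)):
#   xx <- rbeta(1000, shape1 = 3, shape2 = 5)
#   pdata <- data.frame(y = xx / (1 - xx))
#   fit <- vglm(y ~ 1, betaprime, data = pdata, trace = TRUE)
#   Coef(fit)  # (shape1, shape2), near (3, 5); mean is shape1/(shape2-1)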
shapes)
    okay1
  }, list( .lshape = lshape, .eshape = eshape ))),
  deriv = eval(substitute(expression({
    shapes <- eta2theta(eta, .lshape , earg = .eshape )
    dshapes.deta <- dtheta.deta(shapes, .lshape , earg = .eshape )
    dl.dshapes <- cbind(log(y) - log1p(y) - digamma(shapes[, 1]) +
                        digamma(shapes[, 1] + shapes[, 2]),
                        - log1p(y) - digamma(shapes[, 2]) +
                        digamma(shapes[, 1] + shapes[, 2]))
    c(w) * dl.dshapes * dshapes.deta
  }), list( .lshape = lshape, .eshape = eshape ))),
  weight = expression({
    temp2 <- trigamma(shapes[, 1] + shapes[, 2])
    ned2l.dshape12 <- trigamma(shapes[, 1]) - temp2
    ned2l.dshape22 <- trigamma(shapes[, 2]) - temp2
    ned2l.dshape1shape2 <- -temp2
    wz <- matrix(NA_real_, n, dimm(M))  # 3 = dimm(M)
    wz[, iam(1, 1, M)] <- ned2l.dshape12 * dshapes.deta[, 1]^2
    wz[, iam(2, 2, M)] <- ned2l.dshape22 * dshapes.deta[, 2]^2
    wz[, iam(1, 2, M)] <- ned2l.dshape1shape2 * dshapes.deta[, 1] *
                          dshapes.deta[, 2]
    c(w) * wz
  }))
}  # betaprime
zoabetaR <- function(lshape1 = "loge", lshape2 = "loge",
                     lpobs0 = "logit", lpobs1 = "logit",
                     ishape1 = NULL, ishape2 = NULL, trim = 0.05,
                     type.fitted = c("mean", "pobs0", "pobs1", "beta.mean"),
                     parallel.shape = FALSE, parallel.pobs = FALSE,
                     zero = NULL) {
  A <- 0
  B <- 1
  lshape1 <- as.list(substitute(lshape1))
  eshape1 <- link2list(lshape1)
  lshape1 <- attr(eshape1, "function.name")
  lshape2 <- as.list(substitute(lshape2))
  eshape2 <- link2list(lshape2)
  lshape2 <- attr(eshape2, "function.name")
  lprobb0 <- as.list(substitute(lpobs0))
  eprobb0 <- link2list(lprobb0)
  lprobb0 <- attr(eprobb0, "function.name")
  lprobb1 <- as.list(substitute(lpobs1))
  eprobb1 <- link2list(lprobb1)
  lprobb1 <- attr(eprobb1, "function.name")
  if (length( ishape1 ) && !is.Numeric( ishape1, positive = TRUE))
    stop("bad input for argument 'ishape1'")
  if (length( ishape2 ) && !is.Numeric( ishape2, positive = TRUE))
    stop("bad input for argument 'ishape2'")
  if (!is.Numeric(A, length.arg = 1) ||
      !is.Numeric(B, length.arg = 1) || A >= B)
    stop("A must be < B, and both must be of length one")
  stdbeta <- (A == 0 && B == 1)  # stdbeta == TRUE iff standard beta distn
  type.fitted <- match.arg(type.fitted,
                           c("mean", "pobs0", "pobs1", "beta.mean"))[1]
  new("vglmff",
  blurb = c("Standard Beta distribution with 0- and \n",
            "1-inflation ", "(shape parameters parameterization)\n",
            if (stdbeta)
            paste("y^(shape1-1) * (1-y)^(shape2-1) / beta(shape1,shape2),",
                  "0 <= y <= 1, shape1>0, shape2>0\n\n") else
            paste("(y-",A,")^(shape1-1) * (",B,
                  "-y)^(shape2-1) / [beta(shape1,shape2) * (",
                  B, "-", A, ")^(shape1+shape2-1)], ",
                  A," <= y <= ",B," shape1>0, shape2>0, ",
                  "0 < pobs0 < 1, 0 < pobs1 < 1 \n\n", sep = ""),
            "Links: ",
            namesof("shape1", lshape1, earg = eshape1), ", ",
            namesof("shape2", lshape2, earg = eshape2), ", ",
            namesof("pobs0", lprobb0, earg = eprobb0), ", ",
            namesof("pobs1", lprobb1, earg = eprobb1)),
  constraints = eval(substitute(expression({
    constraints.orig <- constraints
    if (is.logical( .parallel.probb ) && .parallel.probb &&
        (cind0[1] + cind1[1] <= 1))
      warning("argument 'parallel.pobs' specified when there is only ",
              "one of 'pobs0' and 'pobs1'")
    cmk.s <- kronecker(matrix(1, NOS, 1), rbind(1, 1, 0, 0))
    cmk.S <- kronecker(diag(NOS), rbind(diag(2), 0*diag(2)))
    con.s <- cm.VGAM(cmk.s, x = x,
                     bool = .parallel.shape ,  # Same as .parallel.b
                     constraints = constraints.orig, apply.int = TRUE,
                     cm.default = cmk.S, cm.intercept.default = cmk.S)
    cmk.p <- kronecker(matrix(1, NOS, 1), rbind(0, 0, 1, 1))
    cmk.P <- kronecker(diag(NOS), rbind(0*diag(2), diag(2)))
    con.p <- cm.VGAM(cmk.p, x = x,
                     bool = .parallel.probb ,  #
                     constraints = constraints.orig, apply.int = TRUE,
                     cm.default = cmk.P, cm.intercept.default = cmk.P)
    con.use <- con.s
    for (klocal in seq_along(con.s)) {
      con.use[[klocal]] <- cbind(con.s[[klocal]], con.p[[klocal]])
      # Delete rows that are not needed:
      if (!cind0[1]) {
        con.use[[klocal]] <- (con.use[[klocal]])[c(TRUE, TRUE, FALSE, TRUE), ]
      }
      if (!cind1[1]) {
        con.use[[klocal]] <- (con.use[[klocal]])[c(TRUE, TRUE, TRUE, FALSE), ]
      }
      col.delete <- apply(con.use[[klocal]], 2,
                          function(HkCol) all(HkCol == 0))
      con.use[[klocal]] <- (con.use[[klocal]])[, !col.delete]
    }
    constraints <- con.use
    constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M,
                                predictors.names = predictors.names, M1 = M1)
  }), list( .parallel.shape = parallel.shape,
            .parallel.probb = parallel.pobs,
            .zero = zero ))),
  infos = eval(substitute(function(...) {
    list(M1 = NA,  # Either 3 or 4, data-dependent
         Q1 = 1,
         A = .A , B = .B ,
         expected = TRUE,
         multipleResponses = TRUE,
         type.fitted = .type.fitted ,
         zero = .zero )
  }, list( .A = A, .B = B, .type.fitted = type.fitted, .zero = zero ))),
  initialize = eval(substitute(expression({
    if (min(y) < .A || max(y) > .B)
      stop("data not within [A, B]")
    temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y
    ncoly <- NOS <- ncol(y)
    if (ncoly > 1 && !( .stdbeta ))
      stop("can only input multiple responses with the standard beta")
    cind0 <- colSums(ind0 <- y == 0) > 0
    cind1 <- colSums(ind1 <- y == 1) > 0
    if (!any(cind0 | cind1))
      stop("no 0s or 1s in the responses to perform 0- and/or ",
           "1-inflation! ", "Try using betaff() or betaR() instead.")
    if (ncoly > 1 &&
        !all(cind0 == cind0[1]) &&  # FALSE &&
        !all(cind1 == cind1[1]))
      stop("with multiple responses, cannot have 0-inflation in ",
           "some responses and 1-inflation in other responses")
    M1 <- 2 + cind0[1] + cind1[1]  # 4 when there is both 0 & 1-inflation
    M <- M1 * NOS
    mynames1 <- param.names("shape1", ncoly)
    mynames2 <- param.names("shape2", ncoly)
    mynames3 <- param.names("pobs0", ncoly)
    mynames4 <- param.names("pobs1", ncoly)
    predictors.names <-
      c(namesof(mynames1, .lshape1 , earg = .eshape1 , short = TRUE),
        namesof(mynames2, .lshape2 , earg = .eshape2 , short = TRUE),
        if (cind0[1]) namesof(mynames3, .lprobb0 , earg = .eprobb0 ,
                              short = TRUE) else NULL,
        if (cind1[1]) namesof(mynames4, .lprobb1 , earg = .eprobb1 ,
                              short = TRUE) else NULL)[
        interleave.VGAM(M, M1 = M1)]
    extra$type.fitted <- .type.fitted
    extra$colnames.y <- colnames(y)
    extra$M1 <- M1  # Determined from the data
    extra$cind0 <- cind0
    extra$cind1 <- cind1
    if (!length(etastart)) {
      p0init <- matrix(colMeans(ind0), n, ncoly, byrow = TRUE)
      p1init <- matrix(colMeans(ind1), n, ncoly, byrow = TRUE)
      mu1d <- matrix(NA_real_, n, NOS)
      for (jay in 1:ncoly) {
        yy <- y[, jay]
        yy <- yy[ .A < yy & yy < .B ]
        mu1d[, jay] <- weighted.mean(yy, trim = .trim )
      }
      uu <- (mu1d - .A ) / ( .B - .A )
      DD <- ( .B - .A )^2
      p.init <- if (is.Numeric( .ishape1 ))
        matrix( .ishape1 , n, ncoly, byrow = TRUE) else
        uu^2 * (1 - uu) * DD / var(yy) - uu
      p.init[p.init < 0.01] <- 0.01
      q.init <- if (is.Numeric( .ishape2 ))
        matrix( .ishape2 , n, ncoly, byrow = TRUE) else
        p.init * (1 - uu) / uu
      q.init[q.init < 0.01] <- 0.01
      etastart <- cbind(
        theta2eta(p.init, .lshape1 , earg = .eshape1 ),
        theta2eta(q.init, .lshape2 , earg = .eshape2 ),
        if (cind0[1]) theta2eta(p0init, .lprobb0 , earg = .eprobb0 ) else NULL,
        if (cind1[1]) theta2eta(p1init, .lprobb1 , earg = .eprobb1 ) else
        NULL)[, interleave.VGAM(M, M1 = M1)]
    }
  }), list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 =
eshape1, .eshape2 = eshape2, .lprobb0 = lprobb0, .lprobb1 = lprobb1, .eprobb0 = eprobb0, .eprobb1 = eprobb1, .ishape1 = ishape1, .ishape2 = ishape2, .trim = trim, .A = A, .B = B, .type.fitted = type.fitted, .stdbeta = stdbeta ))), linkinv = eval(substitute(function(eta, extra = NULL) { M1 <- extra$M1 cind0 <- extra$cind0 cind1 <- extra$cind1 NOS <- ncol(eta) / M1 shape1 <- eta2theta(eta[, c(TRUE, rep(FALSE, M1 - 1)), drop = FALSE], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, c(FALSE, TRUE, rep(FALSE, M1 - 2)), drop = FALSE], .lshape2 , earg = .eshape2 ) probb0 <- if (cind0[1]) eta2theta(eta[, c(FALSE, FALSE, TRUE, if (cind1[1]) FALSE else NULL), drop = FALSE], .lprobb0 , earg = .eprobb0 ) else 0 probb1 <- if (cind1[1]) eta2theta(eta[, c(FALSE, FALSE, if (cind0[1]) FALSE else NULL, TRUE), drop = FALSE], .lprobb1 , earg = .eprobb1 ) else 0 type.fitted <- match.arg(extra$type.fitted, c("mean", "pobs0", "pobs1", "beta.mean"))[1] ans <- switch(type.fitted, "mean" = (1 - probb0) * shape1 / (shape1 + shape2) + probb1 * shape2 / (shape1 + shape2), "beta.mean" = shape1/(shape1+shape2), # zz Mux by (1-pobs0-pobs1)?? "pobs0" = probb0, "pobs1" = probb1) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lshape1 = lshape1, .lshape2 = lshape2, .A = A, .B = B, .eshape1 = eshape1, .eshape2 = eshape2, .lprobb0 = lprobb0, .lprobb1 = lprobb1, .eprobb0 = eprobb0, .eprobb1 = eprobb1 ))), last = eval(substitute(expression({ misc$link <- rep_len( c( .lshape1 , .lshape2 , if (cind0[1]) .lprobb0 else NULL, if (cind1[1]) .lprobb1 else NULL), M) names(misc$link) <- c(mynames1, mynames2, if (cind0[1]) mynames3 else NULL, if (cind1[1]) mynames4 else NULL)[ interleave.VGAM(M, M1 = M1)] misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) jay <- 1 while (jay <= M) { misc$earg[[jay]] <- .eshape1 jay <- jay + 1 misc$earg[[jay]] <- .eshape2 jay <- jay + 1 if (cind0[1]) { misc$earg[[jay]] <- .eprobb0 jay <- jay + 1 } if (cind1[1]) { misc$earg[[jay]] <- .eprobb1 jay <- jay + 1 } } misc$supportlimits <- c( .A , .B ) }), list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2, .lprobb0 = lprobb0, .lprobb1 = lprobb1, .eprobb0 = eprobb0, .eprobb1 = eprobb1, .A = A, .B = B ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { M1 <- 4 M1 <- extra$M1 cind0 <- extra$cind0 cind1 <- extra$cind1 NOS <- ncol(eta) / M1 shape1 <- eta2theta(eta[, c(TRUE, rep(FALSE, M1 - 1)), drop = FALSE], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, c(FALSE, TRUE, rep(FALSE, M1 - 2)), drop = FALSE], .lshape2 , earg = .eshape2 ) probb0 <- if (cind0[1]) eta2theta(eta[, c(FALSE, FALSE, TRUE, if (cind1[1]) FALSE else NULL), drop = FALSE], .lprobb0 , earg = .eprobb0 ) else 0 probb1 <- if (cind1[1]) eta2theta(eta[, c(FALSE, FALSE, if (cind0[1]) FALSE else NULL, TRUE), drop = FALSE], .lprobb1 , earg = .eprobb1 ) else 0 if (residuals) { stop("loglikelihood residuals not implemented yet") } else { zedd <- (y - .A ) / ( .B - .A ) ll.elts <- c(w) * (dzoabeta(x = zedd, shape1 = shape1, shape2 = shape2, pobs0 = probb0, pobs1 = probb1, log = TRUE) - log( abs( .B - .A ))) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape1 = lshape1, .lshape2 = lshape2, .A = A, .B = B, .eshape1 = eshape1, .eshape2 = eshape2, .lprobb0 = lprobb0, .lprobb1 = lprobb1, .eprobb0 = eprobb0, .eprobb1 = eprobb1 ))), vfamily = "zoabetaR", validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- 4 M1 <- extra$M1 cind0 <- 
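# Commented-out usage sketch for zoabetaR() (illustrative; beta data with
# some observations pushed onto the 0 and 1 boundaries):
#   set.seed(1); n <- 1000; y <- rbeta(n, 2, 6)
#   y[runif(n) < 0.10] <- 0  # Roughly 10% zero-inflation
#   y[runif(n) < 0.05] <- 1  # Roughly  5% one-inflation
#   zdata <- data.frame(y = y)
#   fit <- vglm(y ~ 1, zoabetaR, data = zdata, trace = TRUE)
#   coef(fit, matrix = TRUE)  # shape1, shape2, pobs0, pobs1 on the link scale
#   head(fitted(fit, type.fitted = "pobs0"))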
extra$cind0 cind1 <- extra$cind1 NOS <- ncol(eta) / M1 shape1 <- eta2theta(eta[, c(TRUE, rep(FALSE, M1 - 1)), drop = FALSE], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, c(FALSE, TRUE, rep(FALSE, M1 - 2)), drop = FALSE], .lshape2 , earg = .eshape2 ) probb0 <- if (cind0[1]) eta2theta(eta[, c(FALSE, FALSE, TRUE, if (cind1[1]) FALSE else NULL), drop = FALSE], .lprobb0 , earg = .eprobb0 ) else 0.5 probb1 <- if (cind1[1]) eta2theta(eta[, c(FALSE, FALSE, if (cind0[1]) FALSE else NULL, TRUE), drop = FALSE], .lprobb1 , earg = .eprobb1 ) else 0.5 okay1 <- all(is.finite(shape1)) && all(0 < shape1) && all(is.finite(shape2)) && all(0 < shape2) && all(is.finite(probb0)) && all(0 < probb0 & probb0 < 1) && all(is.finite(probb1)) && all(0 < probb1 & probb1 < 1) okay1 }, list( .lshape1 = lshape1, .lshape2 = lshape2, .A = A, .B = B, .eshape1 = eshape1, .eshape2 = eshape2, .lprobb0 = lprobb0, .lprobb1 = lprobb1, .eprobb0 = eprobb0, .eprobb1 = eprobb1 ))), deriv = eval(substitute(expression({ M1 <- 4 M1 <- extra$M1 cind0 <- extra$cind0 cind1 <- extra$cind1 NOS <- ncol(eta) / M1 shape1 <- eta2theta(eta[, c(TRUE, rep(FALSE, M1 - 1)), drop = FALSE], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, c(FALSE, TRUE, rep(FALSE, M1 - 2)), drop = FALSE], .lshape2 , earg = .eshape2 ) probb0 <- if (cind0[1]) eta2theta(eta[, c(FALSE, FALSE, TRUE, if (cind1[1]) FALSE else NULL), drop = FALSE], .lprobb0 , earg = .eprobb0 ) else 0 probb1 <- if (cind1[1]) eta2theta(eta[, c(FALSE, FALSE, if (cind0[1]) FALSE else NULL, TRUE), drop = FALSE], .lprobb1 , earg = .eprobb1 ) else 0 dshape1.deta <- dtheta.deta(shape1, .lshape1 , earg = .eshape1 ) dshape2.deta <- dtheta.deta(shape2, .lshape2 , earg = .eshape2 ) dprobb0.deta <- dtheta.deta(probb0, .lprobb0 , earg = .eprobb0 ) dprobb1.deta <- dtheta.deta(probb1, .lprobb1 , earg = .eprobb1 ) index0 <- y == 0 index1 <- y == 1 indexi <- !index0 & !index1 # In the interior, i.e., (0, 1) dig.sum <- digamma(shape1 + shape2) QQ <- 1 - probb0 - probb1 if (cind0[1]) { dl.dprobb0 <- -1 / QQ dl.dprobb0[index0] <- 1 / probb0[index0] dl.dprobb0[index1] <- 0 } if (cind1[1]) { dl.dprobb1 <- -1 / QQ dl.dprobb1[index0] <- 0 dl.dprobb1[index1] <- 1 / probb1[index1] } dl.dshape1 <- log(y) - digamma(shape1) + dig.sum dl.dshape2 <- log1p(-y) - digamma(shape2) + dig.sum dl.dshape1[!indexi] <- 0 dl.dshape2[!indexi] <- 0 myderiv <- c(w) * cbind(dl.dshape1 * dshape1.deta, dl.dshape2 * dshape2.deta, if (cind0[1]) dl.dprobb0 * dprobb0.deta else NULL, if (cind1[1]) dl.dprobb1 * dprobb1.deta else NULL) colnames(myderiv) <- NULL myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lshape1 = lshape1, .lshape2 = lshape2, .A = A, .B = B, .eshape1 = eshape1, .eshape2 = eshape2, .lprobb0 = lprobb0, .lprobb1 = lprobb1, .eprobb0 = eprobb0, .eprobb1 = eprobb1 ))), weight = expression({ trig.sum <- trigamma(shape1 + shape2) ned2l.dshape12 <- (trigamma(shape1) - trig.sum) * QQ ned2l.dshape22 <- (trigamma(shape2) - trig.sum) * QQ ned2l.dprobb02 <- (1 - probb1) / (probb0 * QQ) ned2l.dprobb12 <- (1 - probb0) / (probb1 * QQ) ned2l.dshape1shape2 <- -trig.sum * QQ # (1 - probb0 - probb0) zz ned2l.dshape2probb0 <- 0 ned2l.dprobb0probb1 <- 1 / QQ ned2l.dshape1probb0 <- 0 ned2l.dshape2probb1 <- 0 ned2l.dshape1probb1 <- 0 ned2l.dshape1probb0 <- 0 wz <- array(c(c(w) * ned2l.dshape12 * dshape1.deta^2, c(w) * ned2l.dshape22 * dshape2.deta^2, if (cind0[1]) c(w) * ned2l.dprobb02 * dprobb0.deta^2 else NULL, if (cind1[1]) c(w) * ned2l.dprobb12 * dprobb1.deta^2 else NULL, c(w) * ned2l.dshape1shape2 * dshape1.deta * dshape2.deta, if 
(cind0[1])
                    c(w) * ned2l.dshape2probb0 * dshape2.deta * dprobb0.deta,
                  if (cind0[1] && cind1[1])
                    c(w) * ned2l.dprobb0probb1 * dprobb0.deta * dprobb1.deta,
                  if (cind0[1])
                    c(w) * ned2l.dshape1probb0 * dshape1.deta * dprobb0.deta,
                  if (cind1[1])
                    c(w) * ned2l.dshape2probb1 * dshape2.deta * dprobb1.deta,
                  if (cind1[1])
                    c(w) * ned2l.dshape1probb1 * dshape1.deta * dprobb1.deta),
                dim = c(n, M / M1, M1*(M1+1)/2))
    wz <- arwz2wz(wz, M = M, M1 = M1)  # tridiagonal but unexploited here
    wz
  }))
} # zoabetaR



dtopple <- function(x, shape, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  L <- max(length(x), length(shape))
  if (length(x)     != L) x     <- rep_len(x,     L)
  if (length(shape) != L) shape <- rep_len(shape, L)

  logdensity <- rep_len(log(0), L)
  xok <- (0 <= x) & (x <= 1)
  logdensity[xok] <- log(2) + log(shape[xok]) + log1p(-x[xok]) +
    (shape[xok] - 1) * (log(x[xok]) + log(2) + log1p(-x[xok]/2))
  logdensity[shape <= 0] <- NaN  # The shape parameter lies in (0, 1)
  logdensity[shape >= 1] <- NaN
  if (log.arg) logdensity else exp(logdensity)
}


ptopple <- function(q, shape, lower.tail = TRUE, log.p = FALSE) {
  if (!is.logical(lower.tail) || length(lower.tail) != 1)
    stop("bad input for argument 'lower.tail'")
  if (!is.logical(log.p) || length(log.p) != 1)
    stop("bad input for argument 'log.p'")

  if (lower.tail) {
    if (log.p) {
      ans <- shape * (log(q) + log(2) + log1p(-q/2))
      ans[q <= 0] <- -Inf
      ans[q >= 1] <- 0
    } else {
      ans <- (q * (2 - q))^shape
      ans[q <= 0] <- 0
      ans[q >= 1] <- 1
    }
  } else {
    if (log.p) {
      ans <- log1p(-(q * (2 - q))^shape)
      ans[q <= 0] <- 0
      ans[q >= 1] <- -Inf
    } else {
      ans <- exp(log1p(-(q * (2 - q))^shape))
      ans[q <= 0] <- 1
      ans[q >= 1] <- 0
    }
  }
  ans[shape <= 0] <- NaN
  ans[shape >= 1] <- NaN
  ans
}


qtopple <- function(p, shape) {
  ans <- -expm1(0.5 * log1p(-p^(1/shape)))
  ans[shape <= 0] <- NaN
  ans[shape >= 1] <- NaN
  ans
}


rtopple <- function(n, shape) {
  qtopple(runif(n), shape)
}


 topple <- function(lshape = "logit", zero = NULL, gshape = ppoints(8)) {

  lshape <- as.list(substitute(lshape))  # orig
  eshape <- link2list(lshape)
  lshape <- attr(eshape, "function.name")

  new("vglmff",
  blurb = c("Topp-Leone distribution F(y;shape) = (y * (2 - y))^shape, ",
            "0 < y < 1, 0 < shape < 1\n",
            "Link: ",
            namesof("shape", lshape, earg = eshape)),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.VGAM(constraints, x = x, .zero , M)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...)
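  ## Illustrative (not run) checks of the dpqr functions above; the
  ## Topp-Leone cdf is F(q) = (q * (2 - q))^shape on (0, 1):
  ##   ptopple(qtopple(0.3, shape = 0.7), shape = 0.7)   # 0.3
  ##   integrate(dtopple, 0, 1, shape = 0.7)             # approx. 1
  ##   set.seed(1); summary(rtopple(100, shape = 0.7))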
{ list(M1 = 1, Q1 = 1, expected = TRUE, hadof = TRUE, multipleResponses = TRUE, parameters.names = "shape", zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y if (any(y >= 1)) stop("response must be in (0, 1)") ncoly <- ncol(y) M1 <- 1 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly mynames1 <- param.names("shape", ncoly) predictors.names <- namesof(mynames1, .lshape , earg = .eshape , tag = FALSE) if (!length(etastart)) { shape.init <- matrix(0, nrow(x), ncoly) gshape <- .gshape topple.Loglikfun <- function(shape, y, x = NULL, w, extraargs = NULL) { sum(c(w) * dtopple(x = y, shape = shape, log = TRUE)) } for (jay in 1:ncoly) { shape.init[, jay] <- grid.search(gshape, objfun = topple.Loglikfun, y = y[, jay], w = w[, jay]) } etastart <- theta2eta(shape.init, .lshape , earg = .eshape ) } }), list( .lshape = lshape, .gshape = gshape, .eshape = eshape ))), linkinv = eval(substitute(function(eta, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) 1 - (gamma(1 + shape))^2 * 4^shape / gamma(2 * (1 + shape)) }, list( .lshape = lshape, .eshape = eshape ))), last = eval(substitute(expression({ misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for (ilocal in 1:ncoly) { misc$earg[[ilocal]] <- .eshape } misc$link <- rep_len( .lshape , ncoly) names(misc$link) <- mynames1 }), list( .lshape = lshape, .eshape = eshape ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shape <- eta2theta(eta, .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dtopple(x = y, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape = lshape, .eshape = eshape ))), vfamily = c("topple"), validparams = eval(substitute(function(eta, y, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) okay1 <- all(is.finite(shape)) && all(0 < shape & shape < 1) okay1 }, list( .lshape = lshape, .eshape = eshape ))), hadof = eval(substitute( function(eta, extra = list(), deriv = 1, linpred.index = 1, w = 1, dim.wz = c(NROW(eta), NCOL(eta) * (NCOL(eta)+1)/2), ...) 
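  ## This hadof slot returns the (prior-weighted) EIM element and its
  ## first few shape-derivatives: here -E[d2 ell / d shape^2] = 1/shape^2,
  ## so deriv = 1, 2, 3 give -2/shape^3, 6/shape^4 and -24/shape^5.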
{ shape <- eta2theta(eta, .lshape , earg = .eshape ) ans <- c(w) * switch(as.character(deriv), "0" = 1 / shape^2, "1" = -2 / shape^3, "2" = 6 / shape^4, "3" = -24 / shape^5, stop("argument 'deriv' must be 0, 1, 2 or 3")) if (deriv == 0) ans else retain.col(ans, linpred.index) # Since M1 = 1 }, list( .lshape = lshape, .eshape = eshape ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) shape <- eta2theta(eta, .lshape , earg = .eshape ) rtopple(nsim * length(shape), shape = c(shape)) }, list( .lshape = lshape, .eshape = eshape ))), deriv = eval(substitute(expression({ shape <- eta2theta(eta, .lshape , earg = .eshape ) dl.dshape <- 1 / shape + log(y) + log(2) + log1p(-y/2) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) c(w) * dl.dshape * dshape.deta }), list( .lshape = lshape, .eshape = eshape ))), weight = eval(substitute(expression({ ned2l.dshape2 <- 1 / shape^2 wz <- c(w) * ned2l.dshape2 * dshape.deta^2 wz }), list( .lshape = lshape, .eshape = eshape )))) } dzeta <- function(x, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(shape), length(x)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) ox <- !is.finite(x) zero <- ox | round(x) != x | x < 1 ans <- rep_len(if (log.arg) log(0) else 0, LLL) if (any(!zero)) { if (log.arg) { ans[!zero] <- (-shape[!zero]-1)*log(x[!zero]) - log(zeta(shape[!zero]+1)) } else { ans[!zero] <- x[!zero]^(-shape[!zero]-1) / zeta(shape[!zero]+1) } } if (any(ox)) ans[ox] <- if (log.arg) log(0) else 0 ans[shape <= 0] <- NaN # Added 20160617 ans } pzeta <- function(q, shape, lower.tail = TRUE) { LLL <- max(lenq <- length(q), lens <- length(shape)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) ans <- rep_len(0, LLL) aa <- 12 # Same as Zeta.aux() qfloor <- floor(q) for (nn in 1:(aa-1)) ans <- ans + as.numeric(nn <= qfloor) / nn^(shape+1) vecTF <- (aa-1 <= qfloor) if (lower.tail) { if (any(vecTF)) ans[vecTF] <- zeta(shape[vecTF]+1) - Zeta.aux(shape[vecTF]+1, qfloor[vecTF]+1) } else { ans <- zeta(shape+1) - ans if (any(vecTF)) ans[vecTF] <- Zeta.aux(shape[vecTF]+1, qfloor[vecTF]+1) } ans / zeta(shape+1) } # pzeta qzeta <- function(p, shape) { LLL <- max(lenp <- length(p), lens <- length(shape)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) ans <- rep_len(0, LLL) lo <- rep_len(1, LLL) approx.ans <- lo # True at lhs hi <- 2 * lo + 10 dont.iterate <- p == 1 | shape <= 0 done <- p <= pzeta(hi, shape) | dont.iterate while (!all(done)) { hi.save <- hi[!done] hi[!done] <- 2 * lo[!done] + 10 lo[!done] <- hi.save done[!done] <- (p[!done] <= pzeta(hi[!done], shape[!done])) } foo <- function(q, shape, p) pzeta(q, shape) - p lhs <- (p <= dzeta(1, shape)) | dont.iterate approx.ans[!lhs] <- bisection.basic(foo, lo[!lhs], hi[!lhs], tol = 1/16, shape = shape[!lhs], p = p[!lhs]) faa <- floor(approx.ans) ans <- ifelse(pzeta(faa, shape) < p & p <= pzeta(faa+1, shape), faa+1, faa) ans[p == 1] <- Inf ans[shape <= 0] <- NaN ans } # qzeta rzeta <- function(n, shape) { qzeta(runif(n), shape) } zetaff <- function(lshape = "loge", ishape = NULL, gshape = exp(-3:4)/4, zero = NULL) { if (length(ishape) && !is.Numeric(ishape, positive = TRUE)) stop("argument 'ishape' must be 
> 0") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") new("vglmff", blurb = c("Zeta distribution ", "f(y) = 1/(y^(shape+1) zeta(shape+1)), shape>0, y = 1, 2,..\n\n", "Link: ", namesof("shape", lshape, earg = eshape), "\n\n", "Mean: zeta(shape) / zeta(shape+1), provided shape>1\n", "Variance: zeta(shape-1) / zeta(shape+1) - mean^2, provided shape>2"), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, multipleResponses = TRUE, parameters.names = "shape", zero = .zero , lshape = .lshape ) }, list( .lshape = lshape, .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, Is.integer.y = TRUE, Is.positive.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) mynames1 <- param.names("shape", ncoly) predictors.names <- namesof(mynames1, .lshape , earg = .eshape , tag = FALSE) M1 <- 1 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly if (!length(etastart)) { zetaff.Loglikfun <- function(shape, y, x, w, extraargs) { sum(c(w) * dzeta(x = y, shape, log = TRUE)) } gshape <- .gshape if (!length( .ishape )) { shape.init <- matrix(NA_real_, n, M, byrow = TRUE) for (jay in 1:ncoly) { shape.init[, jay] <- grid.search(gshape, objfun = zetaff.Loglikfun, y = y[, jay], x = x, w = w[, jay]) } } else { shape.init <- matrix( .ishape , n, M, byrow = TRUE) } etastart <- theta2eta(shape.init, .lshape , earg = .eshape ) } }), list( .lshape = lshape, .eshape = eshape, .ishape = ishape, .gshape = gshape ))), linkinv = eval(substitute(function(eta, extra = NULL) { ans <- pp <- eta2theta(eta, .lshape , earg = .eshape ) ans[pp > 1] <- zeta(pp[pp > 1]) / zeta(pp[pp > 1] + 1) ans[pp <= 1] <- NA ans }, list( .lshape = lshape, .eshape = eshape ))), last = eval(substitute(expression({ misc$link <- rep_len( .lshape , ncoly) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for (jay in 1:ncoly) { misc$earg[[jay]] <- .eshape } }), list( .lshape = lshape, .eshape = eshape ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shape <- eta2theta(eta, .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dzeta(x = y, shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape = lshape, .eshape = eshape ))), vfamily = c("zetaff"), validparams = eval(substitute(function(eta, y, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) okay1 <- all(is.finite(shape)) && all(0 < shape) okay1 }, list( .lshape = lshape, .eshape = eshape ))), deriv = eval(substitute(expression({ shape <- eta2theta(eta, .lshape , earg = .eshape ) fred0 <- zeta(shape+1) fred1 <- zeta(shape+1, deriv = 1) dl.dshape <- -log(y) - fred1 / fred0 dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) c(w) * dl.dshape * dshape.deta }), list( .lshape = lshape, .eshape = eshape ))), weight = expression({ NOS <- NCOL(y) nd2l.dshape2 <- zeta(shape + 1, deriv = 2) / fred0 - (fred1/fred0)^2 wz <- nd2l.dshape2 * dshape.deta^2 w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = NOS) })) } gharmonic2 <- function(n, shape = 1) { if (!is.Numeric(n, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'n'") LLL <- max(length(n), length(shape)) if (length(n) != LLL) n <- rep_len(n, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) aa <- 12 ans <- rep_len(0, LLL) 
for (ii in 1:aa) ans <- ans + as.numeric(ii <= n) / ii^shape vecTF <- (aa < n) if (any(vecTF)) ans[vecTF] <- zeta(shape[vecTF]) - Zeta.aux(shape[vecTF], 1 + n[vecTF]) ans } gharmonic <- function(n, shape = 1, deriv = 0) { if (!is.Numeric(n, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'n'") if (!is.Numeric(deriv, length.arg = 1, integer.valued = TRUE) || deriv < 0) stop("bad input for argument 'deriv'") lognexponent <- deriv sign <- ifelse(deriv %% 2 == 0, 1, -1) ans <- if (length(n) == 1 && length(shape) == 1) { if (lognexponent != 0) sum(log(1:n)^lognexponent * (1:n)^(-shape)) else sum((1:n)^(-shape)) } else { LEN <- max(length(n), length(shape)) n <- rep_len(n, LEN) ans <- shape <- rep_len(shape, LEN) if (lognexponent != 0) { for (ii in 1:LEN) ans[ii] <- sum(log(1:n[ii])^lognexponent * (1:n[ii])^(-shape[ii])) } else { for (ii in 1:LEN) ans[ii] <- sum((1:n[ii])^(-shape[ii])) } ans } sign * ans } dzipf <- function(x, N, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(x)) stop("bad input for argument 'x'") if (!is.Numeric(N, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'N'") if (!is.Numeric(shape, positive = TRUE)) stop("bad input for argument 'shape'") nn <- max(length(x), length(N), length(shape)) if (length(x) != nn) x <- rep_len(x, nn) if (length(N) != nn) N <- rep_len(N, nn) if (length(shape) != nn) shape <- rep_len(shape, nn) ox <- !is.finite(x) zero <- ox | round(x) != x | x < 1 | x > N ans <- (if (log.arg) log(0) else 0) * x if (any(!zero)) if (log.arg) { ans[!zero] <- (-shape[!zero]) * log(x[!zero]) - log(gharmonic2(N[!zero], shape[!zero])) } else { ans[!zero] <- x[!zero]^(-shape[!zero]) / gharmonic2(N[!zero], shape[!zero]) } ans } pzipf <- function(q, N, shape, log.p = FALSE) { if (!is.Numeric(N, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'N'") nn <- max(length(q), length(N), length(shape)) if (length(q) != nn) q <- rep_len(q, nn) if (length(N) != nn) N <- rep_len(N, nn) if (length(shape) != nn) shape <- rep_len(shape, nn) oq <- !is.finite(q) dont.iterate <- shape <= 0 zeroOR1 <- oq | q < 1 | N <= q | dont.iterate floorq <- floor(q) ans <- 0 * floorq ans[oq | q >= N] <- 1 if (any(!zeroOR1)) ans[!zeroOR1] <- gharmonic2(floorq[!zeroOR1], shape[!zeroOR1]) / gharmonic2( N[!zeroOR1], shape[!zeroOR1]) ans[shape <= 0] <- NaN if (log.p) log(ans) else ans } qzipf <- function(p, N, shape) { if (!is.Numeric(p)) stop("bad input for argument 'p'") if (!is.Numeric(N, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'N'") if (!is.Numeric(shape, positive = TRUE)) stop("bad input for argument 'shape'") nn <- max(length(p), length(N), length(shape)) if (length(p) != nn) p <- rep_len(p, nn) if (length(N) != nn) N <- rep_len(N, nn) if (length(shape) != nn) shape <- rep_len(shape, nn) a <- rep_len(1, nn) b <- rep_len(N, nn) approx.ans <- a # True at lhs foo <- function(q, N, shape, p) pzipf(q, N, shape) - p dont.iterate <- p == 1 | shape <= 0 lhs <- (p <= dzipf(1, N, shape)) | dont.iterate approx.ans[!lhs] <- bisection.basic(foo, a[!lhs], b[!lhs], shape = shape[!lhs], tol = 1/16, p = p[!lhs], N = N[!lhs]) faa <- floor(approx.ans) ans <- ifelse(pzipf(faa, N, shape) < p & p <= pzipf(faa+1, N, shape), faa+1, faa) ans[shape <= 0] <- NaN ans[p == 1] <- N ans } # qzipf rzipf <- function(n, N, shape) { qzipf(runif(n), N, shape) } zipf <- function(N = NULL, lshape = "loge", ishape = NULL) { if (length(N) && (!is.Numeric(N, 
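  ## Illustrative (not run) checks of the Zipf dpqr functions above,
  ## whose pmf sums to 1 over the full support 1:N:
  ##   sum(dzipf(1:6, N = 6, shape = 2))                    # 1
  ##   pzipf(3, N = 6, shape = 2)   # sum(dzipf(1:3, N = 6, shape = 2))
  ##   qzipf(pzipf(3, N = 6, shape = 2), N = 6, shape = 2)  # 3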
positive = TRUE, integer.valued = TRUE, length.arg = 1) || N <= 1)) stop("bad input for argument 'N'") enteredN <- length(N) if (length(ishape) && !is.Numeric(ishape, positive = TRUE)) stop("argument 'ishape' must be > 0") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") new("vglmff", blurb = c("Zipf distribution f(y;s) = y^(-s) / sum((1:N)^(-s)),", " s > 0, y = 1, 2,...,N", ifelse(enteredN, paste(" = ", N, sep = ""), ""), "\n\n", "Link: ", namesof("shape", lshape, earg = eshape), "\n\n", "Mean: gharmonic(N, shape-1) / gharmonic(N, shape)"), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, multipleResponses = FALSE, parameters.names = "shape", N = enteredN, lshape = .lshape ) }, list( .lshape = lshape, .enteredN = enteredN ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, Is.integer.y = TRUE) predictors.names <- namesof("shape", .lshape , earg = .eshape , tag = FALSE) NN <- .N if (!is.Numeric(NN, length.arg = 1, positive = TRUE, integer.valued = TRUE)) NN <- max(y) if (max(y) > NN) stop("maximum of the response is greater than argument 'N'") if (any(y < 1)) stop("all response values must be in 1, 2, 3,...,N( = ", NN,")") extra$N <- NN if (!length(etastart)) { llfun <- function(shape, y, N, w) { sum(c(w) * dzipf(x = y, N = extra$N, shape = shape, log = TRUE)) } shape.init <- if (length( .ishape )) .ishape else getInitVals(gvals = seq(0.1, 3, length.out = 19), llfun = llfun, y = y, N = extra$N, w = w) shape.init <- rep_len(shape.init, length(y)) if ( .lshape == "loglog") shape.init[shape.init <= 1] <- 1.2 etastart <- theta2eta(shape.init, .lshape , earg = .eshape ) } }), list( .lshape = lshape, .eshape = eshape, .ishape = ishape, .N = N ))), linkinv = eval(substitute(function(eta, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) gharmonic2(extra$N, shape = shape - 1) / gharmonic2(extra$N, shape = shape) }, list( .lshape = lshape, .eshape = eshape ))), last = eval(substitute(expression({ misc$expected <- FALSE misc$link <- c(shape = .lshape) misc$earg <- list(shape = .eshape ) misc$N <- extra$N }), list( .lshape = lshape, .eshape = eshape ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shape <- eta2theta(eta, .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dzipf(x = y, N = extra$N, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape = lshape, .eshape = eshape ))), vfamily = c("zipf"), validparams = eval(substitute(function(eta, y, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) okay1 <- all(is.finite(shape)) && all(0 < shape) okay1 }, list( .lshape = lshape, .eshape = eshape ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) extra <- object@extra shape <- eta2theta(eta, .lshape , earg = .eshape ) rzipf(nsim * length(shape), N = extra$N, shape = shape) }, list( .lshape = lshape, .eshape = eshape ))), deriv = eval(substitute(expression({ shape <- eta2theta(eta, .lshape , earg = .eshape ) fred1 <- gharmonic(extra$N, shape, deriv = 1) fred0 <- gharmonic2(extra$N, shape) dl.dshape <- -log(y) - fred1 / fred0 dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) d2shape.deta2 <- d2theta.deta2(shape, .lshape , 
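    ## d2shape.deta2 is retained for the weight slot below: zipf() uses
    ## the observed information (misc$expected <- FALSE), via the chain
    ## rule
    ##   -d2 ell/d eta^2 = (d theta/d eta)^2 * (-d2 ell/d theta^2)
    ##                     - (d2 theta/d eta^2) * (d ell/d theta).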
earg = .eshape ) c(w) * dl.dshape * dshape.deta }), list( .lshape = lshape, .eshape = eshape ))), weight = expression({ d2l.dshape <- gharmonic(extra$N, shape, deriv = 2) / fred0 - (fred1/fred0)^2 wz <- c(w) * (dshape.deta^2 * d2l.dshape - d2shape.deta2 * dl.dshape) wz })) } deflat.limit.oizeta <- function(shape) { if (any(shape <= 0)) stop("argument 'shape' must be positive") ans <- -dzeta(1, shape) / pzeta(1, shape, lower.tail = FALSE) ans } doizeta <- function(x, shape, pstr1 = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(shape), length(pstr1)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(pstr1) != LLL) pstr1 <- rep_len(pstr1, LLL) ans <- rep(NA_real_, LLL) index1 <- (x == 1) if (log.arg) { ans[ index1] <- log(pstr1[ index1] + (1 - pstr1[ index1]) * dzeta(x[ index1], shape[ index1])) ans[!index1] <- log1p(-pstr1[!index1]) + dzeta(x[!index1], shape[!index1], log = TRUE) } else { ans[ index1] <- pstr1[ index1] + (1 - pstr1[ index1]) * dzeta(x[ index1], shape[ index1]) ans[!index1] <- (1 - pstr1[!index1]) * dzeta(x[!index1], shape[!index1]) } ans[pstr1 < deflat.limit.oizeta(shape)] <- NaN ans[pstr1 > 1] <- NaN ans } # doizeta poizeta <- function(q, shape, pstr1 = 0) { LLL <- max(length(q), length(shape), length(pstr1)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(pstr1) != LLL) pstr1 <- rep_len(pstr1, LLL) ans <- rep_len(NA_real_, LLL) deflat.limit <- deflat.limit.oizeta(shape) ans <- pzeta(q, shape) #, lower.tail = lower.tail, log.p = log.p ans <- ifelse(q < 1, 0, pstr1 + (1 - pstr1) * ans) ans[pstr1 < deflat.limit] <- NaN ans[1 < pstr1] <- NaN ans[shape <= 0] <- NaN ans } # poizeta qoizeta <- function(p, shape, pstr1 = 0) { LLL <- max(length(p), length(shape), length(pstr1)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(pstr1) != LLL) pstr1 <- rep_len(pstr1, LLL) ans <- rep_len(NA_real_, LLL) deflat.limit <- deflat.limit.oizeta(shape) ans[p <= pstr1] <- 1 pindex <- (deflat.limit <= pstr1) & (pstr1 < p) ans[pindex] <- qzeta((p[pindex] - pstr1[pindex]) / (1 - pstr1[pindex]), shape = shape[pindex]) ans[pstr1 < deflat.limit] <- NaN ans[1 < pstr1] <- NaN ans[p < 0] <- NaN ans[1 < p] <- NaN ans[shape <= 0] <- NaN ans } # qoizeta roizeta <- function(n, shape, pstr1 = 0) { qoizeta(runif(n), shape, pstr1 = pstr1) } oizeta <- function(lpstr1 = "logit", lshape = "loge", type.fitted = c("mean", "shape", "pobs1", "pstr1", "onempstr1"), ishape = NULL, gpstr1 = ppoints(8), gshape = exp((-3:3) / 4), # grid for finding shape.init zero = NULL) { lpstr1 <- as.list(substitute(lpstr1)) epstr1 <- link2list(lpstr1) lpstr1 <- attr(epstr1, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "shape", "pobs1", "pstr1", "onempstr1"))[1] if (length(ishape)) if (!is.Numeric(ishape, positive = TRUE)) stop("argument 'ishape' values must be positive") new("vglmff", blurb = c("One-inflated zeta regression\n\n", "Links: ", namesof("pstr1", lpstr1, earg = epstr1 ), ", ", namesof("shape", lshape, earg = eshape ), "\n", "Mean: pstr1 + (1 - pstr1) * zeta(shape) / ", "zeta(1 + shape), if shape > 1"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = 
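  ## Illustrative (not run) use of the one-inflated zeta dpqr functions
  ## above; pstr1 may even be slightly negative (one-deflation), down to
  ## deflat.limit.oizeta(shape):
  ##   doizeta(1:5, shape = 1.2, pstr1 = 0.3)
  ##   qoizeta(poizeta(4, 1.2, pstr1 = 0.3), 1.2, pstr1 = 0.3)   # 4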
predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("pstr1", "shape"), type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, Is.nonnegative.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y NOS <- ncoly <- ncol(y) extra$ncoly <- ncoly M <- M1 * ncoly extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) mynames1 <- param.names("pstr1", ncoly) mynames2 <- param.names("shape", ncoly) predictors.names <- c(namesof(mynames1, .lpstr1 , earg = .epstr1 , tag = FALSE), namesof(mynames2, .lshape , earg = .eshape , tag = FALSE))[ interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { shape.init <- pstr1.init <- matrix(NA_real_, n, NOS) gpstr1 <- .gpstr1 gshape <- .gshape oizeta.Loglikfun <- function(pstr1, shape, y, x, w, extraargs) { sum(c(w) * doizeta(x = y, pstr1 = pstr1, shape = shape, log = TRUE)) } for (jay in 1:NOS) { # For each response 'y_jay'... do: try.this <- grid.search2(gpstr1, gshape, objfun = oizeta.Loglikfun, y = y[, jay], # x = x[TFvec, , drop = FALSE], w = w[, jay], ret.objfun = TRUE) # Last value is the loglik pstr1.init[, jay] <- try.this["Value1"] shape.init[, jay] <- try.this["Value2"] } # for (jay ...) etastart <- cbind(theta2eta(pstr1.init, .lpstr1 , earg = .epstr1 ), theta2eta(shape.init, .lshape , earg = .eshape ))[, interleave.VGAM(M, M1 = M1)] mustart <- NULL # Since etastart has been computed. } # End of !length(etastart) }), list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape, .ishape = ishape, .gpstr1 = gpstr1, .gshape = gshape, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 2) type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "shape", "pobs1", "pstr1", "onempstr1"))[1] pstr1 <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr1 , earg = .epstr1 ) shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) Meanfun <- function(shape) { Mean <- shape Mean[shape > 1] <- zeta(shape[shape > 1]) / zeta(1 + shape[shape > 1]) Mean[shape <= 1] <- NA Mean } ans <- switch(type.fitted, "mean" = pstr1 + (1 - pstr1) * Meanfun(shape), "shape" = shape, "pobs1" = doizeta(1, shape = shape, pstr1 = pstr1), # P(Y=1) "pstr1" = pstr1, "onempstr1" = 1 - pstr1) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape ))), last = eval(substitute(expression({ misc$link <- c(rep_len( .lpstr1 , NOS), rep_len( .lshape , NOS))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .epstr1 misc$earg[[M1*ii ]] <- .eshape } }), list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { pstr1 <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr1 , earg = .epstr1 ) shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * doizeta(x = y, pstr1 = pstr1, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape ))), vfamily = c("oizeta"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) pstr1 <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr1 , earg = .epstr1 ) shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) roizeta(nsim * length(shape), shape = shape, pstr1 = pstr1) }, list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape ))), validparams = eval(substitute(function(eta, y, extra = NULL) { pstr1 <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lpstr1 , earg = .epstr1 ) shape <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lshape , earg = .eshape ) okay1 <- all(is.finite(shape)) && all(0 < shape) && all(is.finite(pstr1)) && all(pstr1 < 1) deflat.limit <- deflat.limit.oizeta(shape) okay2.deflat <- TRUE if (okay1 && !(okay2.deflat <- all(deflat.limit < pstr1))) warning("parameter 'pstr1' is too negative even allowing for ", "1-deflation.") okay1 && okay2.deflat }, list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape ))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- M / M1 pstr1 <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lpstr1 , earg = .epstr1 ) shape <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lshape , earg = .eshape ) pmf1 <- dzeta(1, shape) onempmf1 <- 1 - pmf1 # dozeta(1, shape = shape, pstr1 = pstr1) pobs1 <- pstr1 + (1 - pstr1) * pmf1 index1 <- as.matrix(y == 1) zeta0 <- zeta(shape + 1) zeta1 <- zeta(shape + 1, deriv = 1) zeta2 <- zeta(shape + 1, deriv = 2) dl.dpstr1 <- onempmf1 / pobs1 dl.dpstr1[!index1] <- -1 / (1 - pstr1[!index1]) dpmf1.dshape <- -zeta1 / zeta0^2 d2pmf1.dshape2 <- (2 * zeta1^2 / zeta0 - zeta2) / zeta0^2 
dl.dshape <- (1 - pstr1) * dpmf1.dshape / pobs1
    dl.dshape[!index1] <- -log(y[!index1]) -
                          zeta1[!index1] / zeta0[!index1]

    dpstr1.deta <- dtheta.deta(pstr1, .lpstr1 , earg = .epstr1 )
    dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape )

    myderiv <- c(w) * cbind(dl.dpstr1 * dpstr1.deta,
                            dl.dshape * dshape.deta)
    myderiv[, interleave.VGAM(M, M1 = M1)]
  }), list( .lpstr1 = lpstr1, .lshape = lshape,
            .epstr1 = epstr1, .eshape = eshape ))),

  weight = eval(substitute(expression({
    LHS <- ((1 - pstr1) / pobs1) * dpmf1.dshape^2 - d2pmf1.dshape2
    RHS <- (zeta2 - zeta1^2 / zeta0) / zeta0
    ned2l.dpstr12 <- onempmf1 / ((1 - pstr1) * pobs1)
    ned2l.dpstr1shape <- dpmf1.dshape / pobs1
    ned2l.dshape2 <- (1 - pstr1) * (LHS + (1 - pmf1) * RHS)
    wz <- array(c(c(w) * ned2l.dpstr12 * dpstr1.deta^2,
                  c(w) * ned2l.dshape2 * dshape.deta^2,
                  c(w) * ned2l.dpstr1shape * dpstr1.deta * dshape.deta),
                dim = c(n, M / M1, 3))
    wz <- arwz2wz(wz, M = M, M1 = M1)
    wz
  }), list( .lshape = lshape, .eshape = eshape ))))
} # oizeta


deflat.limit.oizipf <- function(N, shape) {
  if (any(shape <= 0))
    stop("argument 'shape' must be positive")
  ans <- 1 / (1 - 1 / dzipf(1, N, shape))
  ans
}


doizipf <- function(x, N, shape, pstr1 = 0, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)
  if (!is.Numeric(x))
    stop("bad input for argument 'x'")
  if (!is.Numeric(N, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'N'")
  if (!is.Numeric(shape, positive = TRUE))
    stop("bad input for argument 'shape'")

  nn <- max(length(x), length(N), length(shape), length(pstr1))
  if (length(x)     != nn) x     <- rep_len(x,     nn)
  if (length(N)     != nn) N     <- rep_len(N,     nn)
  if (length(shape) != nn) shape <- rep_len(shape, nn)
  if (length(pstr1) != nn) pstr1 <- rep_len(pstr1, nn)

  ans <- rep(NA_real_, nn)
  index1 <- (x == 1)
  if (log.arg) {
    ans[ index1] <- log(pstr1[ index1] + (1 - pstr1[ index1]) *
                        dzipf(x[ index1], N[ index1], shape[ index1]))
    ans[!index1] <- log1p(-pstr1[!index1]) +
                    dzipf(x[!index1], N[!index1], shape[!index1], log = TRUE)
  } else {
    ans[ index1] <- pstr1[ index1] + (1 - pstr1[ index1]) *
                    dzipf(x[ index1], N[ index1], shape[ index1])
    ans[!index1] <- (1 - pstr1[!index1]) *
                    dzipf(x[!index1], N[!index1], shape[!index1])
  }
  deflat.limit <- deflat.limit.oizipf(N, shape)
  ans[pstr1 < deflat.limit] <- NaN
  ans[pstr1 > 1] <- NaN
  ans
}


poizipf <- function(q, N, shape, pstr1 = 0) {
  LLL <- max(length(q), length(N), length(shape), length(pstr1))
  if (length(q)     != LLL) q     <- rep_len(q,     LLL)
  if (length(N)     != LLL) N     <- rep_len(N,     LLL)
  if (length(shape) != LLL) shape <- rep_len(shape, LLL)
  if (length(pstr1) != LLL) pstr1 <- rep_len(pstr1, LLL)
  ans <- rep_len(NA_real_, LLL)

  deflat.limit <- deflat.limit.oizipf(N, shape)

  ans <- pzipf(q, N, shape)  #, lower.tail = lower.tail, log.p = log.p
  ans <- ifelse(q < 1, 0, pstr1 + (1 - pstr1) * ans)
  ans[pstr1 < deflat.limit] <- NaN
  ans[1 < pstr1] <- NaN
  ans[shape <= 0] <- NaN
  ans
}


qoizipf <- function(p, N, shape, pstr1 = 0) {
  if (!is.Numeric(p))
    stop("bad input for argument 'p'")
  if (!is.Numeric(N, integer.valued = TRUE, positive = TRUE))
    stop("bad input for argument 'N'")
  if (!is.Numeric(shape, positive = TRUE))
    stop("bad input for argument 'shape'")

  nn <- max(length(p), length(N), length(shape), length(pstr1))
  if (length(p)     != nn) p     <- rep_len(p,     nn)
  if (length(N)     != nn) N     <- rep_len(N,     nn)
  if (length(shape) != nn) shape <- rep_len(shape, nn)
  if (length(pstr1) != nn) pstr1 <- rep_len(pstr1, nn)
  ans <- rep_len(NA_real_, nn)

  deflat.limit <- deflat.limit.oizipf(N, shape)

  dont.iterate <-
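## (Illustrative, not run.)  As with oizeta, the inflated probability is
## P(Y = 1) = pstr1 + (1 - pstr1) * dzipf(1, N, shape):
##   doizipf(1, N = 10, shape = 1.5, pstr1 = 0.2)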
1 < p ans[p <= pstr1] <- 1 pindex <- (pstr1 < p) & (deflat.limit <= pstr1) & !dont.iterate if (any(pindex)) ans[pindex] <- qzipf((p[pindex] - pstr1[pindex]) / (1 - pstr1[pindex]), N = N[pindex], shape = shape[pindex]) ans[pstr1 < deflat.limit] <- NaN ans[1 < pstr1] <- NaN ans[shape < 0] <- NaN ans[p < 0] <- NaN ans[1 < p] <- NaN ans } roizipf <- function(n, N, shape, pstr1 = 0) { qoizipf(runif(n), N, shape, pstr1 = pstr1) } oizipf <- function(N = NULL, lpstr1 = "logit", lshape = "loge", type.fitted = c("mean", "shape", "pobs1", "pstr1", "onempstr1"), ishape = NULL, gpstr1 = ppoints(8), gshape = exp((-3:3) / 4), # grid for finding shape.init zero = NULL) { if (length(N) && (!is.Numeric(N, positive = TRUE, integer.valued = TRUE, length.arg = 1) || N <= 1)) stop("bad input for argument 'N'") enteredN <- length(N) lpstr1 <- as.list(substitute(lpstr1)) epstr1 <- link2list(lpstr1) lpstr1 <- attr(epstr1, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") type.fitted <- match.arg(type.fitted, c("mean", "shape", "pobs1", "pstr1", "onempstr1"))[1] if (length(ishape)) if (!is.Numeric(ishape, positive = TRUE)) stop("argument 'ishape' values must be positive") new("vglmff", blurb = c("One-inflated Zipf distribution f(y; pstr1, shape) = pstr1 + ", "(1 - pstr1) * y^(-shape) / sum((1:N)^(-shape)),", " 0 < shape, y = 1, 2,...,N", ifelse(enteredN, paste(" = ", N, sep = ""), ""), "\n\n", "Links: ", namesof("pstr1", lpstr1, earg = epstr1 ), ", ", namesof("shape", lshape, earg = eshape ), "\n", "Mean: pstr1 + (1 - pstr1) * ", "gharmonic(N, shape-1) / gharmonic(N, shape)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("pstr1", "shape"), type.fitted = .type.fitted , zero = .zero ) }, list( .zero = zero, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y NOS <- ncoly <- ncol(y) extra$ncoly <- ncoly M <- M1 * ncoly extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) NN <- .N if (!is.Numeric(NN, length.arg = 1, positive = TRUE, integer.valued = TRUE)) NN <- max(y) if (max(y) > NN) stop("maximum of the response is greater than argument 'N'") extra$N <- NN mynames1 <- param.names("pstr1", ncoly) mynames2 <- param.names("shape", ncoly) predictors.names <- c(namesof(mynames1, .lpstr1 , earg = .epstr1 , tag = FALSE), namesof(mynames2, .lshape , earg = .eshape , tag = FALSE))[ interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { shape.init <- pstr1.init <- matrix(NA_real_, n, NOS) gpstr1 <- .gpstr1 gshape <- .gshape oizipf.Loglikfun <- function(pstr1, shape, y, x, w, extraargs) { sum(c(w) * doizipf(x = y, pstr1 = pstr1, N = extraargs$N, s = shape, log = TRUE)) } for (jay in 1:NOS) { # For each response 'y_jay'... do: try.this <- grid.search2(gpstr1, gshape, objfun = oizipf.Loglikfun, y = y[, jay], # x = x[TFvec, , drop = FALSE], w = w[, jay], extraargs = list(N = extra$N), ret.objfun = TRUE) # Last value is the loglik pstr1.init[, jay] <- try.this["Value1"] shape.init[, jay] <- try.this["Value2"] } # for (jay ...) 
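    ## The joint grid search above maximizes the loglikelihood over the
    ## grid gpstr1 x gshape, one response column at a time.  An
    ## illustrative (not run) fit, with 'zdata' a hypothetical data frame
    ## of counts in 1:N having an excess of 1s:
    ##   fit <- vglm(y ~ 1, oizipf(N = 30), data = zdata, trace = TRUE)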
etastart <- cbind(theta2eta(pstr1.init, .lpstr1 , earg = .epstr1 ), theta2eta(shape.init, .lshape , earg = .eshape ))[, interleave.VGAM(M, M1 = M1)] mustart <- NULL # Since etastart has been computed. } # End of !length(etastart) }), list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape, .ishape = ishape, .gpstr1 = gpstr1, .gshape = gshape, .type.fitted = type.fitted, .N = N ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 2) type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. ", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "shape", "pobs1", "pstr1", "onempstr1"))[1] pstr1 <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr1 , earg = .epstr1 ) shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) Meanfun <- function(shape, extra) { Mean <- shape Mean <- ( gharmonic2(extra$N, shape = shape - 1) / gharmonic2(extra$N, shape = shape)) Mean[shape <= 0] <- NaN Mean } ans <- switch(type.fitted, "mean" = pstr1 + (1 - pstr1) * Meanfun(shape, extra), "shape" = shape, "pobs1" = doizipf(1, N = extra$N, s = shape, pstr1 = pstr1), "pstr1" = pstr1, "onempstr1" = 1 - pstr1) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape ))), last = eval(substitute(expression({ misc$link <- c(rep_len( .lpstr1 , NOS), rep_len( .lshape , NOS))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .epstr1 misc$earg[[M1*ii ]] <- .eshape } }), list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { pstr1 <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr1 , earg = .epstr1 ) shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * doizipf(x = y, pstr1 = pstr1, s = shape, N = extra$N, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape ))), vfamily = c("oizipf"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) pstr1 <- eta2theta(eta[, c(TRUE, FALSE)], .lpstr1 , earg = .epstr1 ) shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) roizipf(nsim * length(shape), s = shape, pstr1 = pstr1, N = object@extra$N) }, list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = epstr1, .eshape = eshape ))), validparams = eval(substitute(function(eta, y, extra = NULL) { pstr1 <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lpstr1 , earg = .epstr1 ) shape <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lshape , earg = .eshape ) okay1 <- all(is.finite(shape)) && all(0 < shape) && all(is.finite(pstr1)) && all(pstr1 < 1) deflat.limit <- deflat.limit.oizipf(N = extra$N, s = shape) okay2.deflat <- TRUE if (okay1 && !(okay2.deflat <- all(deflat.limit < pstr1))) warning("parameter 'pstr1' is too negative even allowing for ", "1-deflation.") okay1 && okay2.deflat }, list( .lpstr1 = lpstr1, .lshape = lshape, .epstr1 = 
epstr1, .eshape = eshape ))),
  deriv = eval(substitute(expression({
    M1 <- 2
    NOS <- M / M1
    pstr1 <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE],
                       .lpstr1 , earg = .epstr1 )
    shape <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE],
                       .lshape , earg = .eshape )

    pmf1 <- dzipf(1, N = extra$N, shape = shape)
    onempmf1 <- 1 - pmf1  # dozeta(1, shape = shape, pstr1 = pstr1)
    pobs1 <- pstr1 + (1 - pstr1) * pmf1
    index1 <- as.matrix(y == 1)

    ghar0 <- gharmonic2(extra$N, shape)
    ghar1 <- gharmonic(extra$N, shape, deriv = 1)
    ghar2 <- gharmonic(extra$N, shape, deriv = 2)

    dl.dpstr1 <- onempmf1 / pobs1
    dl.dpstr1[!index1] <- -1 / (1 - pstr1[!index1])
    dpmf1.dshape <- -ghar1 / ghar0^2
    d2pmf1.dshape2 <- (2 * ghar1^2 / ghar0 - ghar2) / ghar0^2
    dl.dshape <- (1 - pstr1) * dpmf1.dshape / pobs1
    dl.dshape[!index1] <- -log(y[!index1]) -
                          ghar1[!index1] / ghar0[!index1]

    dpstr1.deta <- dtheta.deta(pstr1, .lpstr1 , earg = .epstr1 )
    dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape )

    myderiv <- c(w) * cbind(dl.dpstr1 * dpstr1.deta,
                            dl.dshape * dshape.deta)
    myderiv[, interleave.VGAM(M, M1 = M1)]
  }), list( .lpstr1 = lpstr1, .lshape = lshape,
            .epstr1 = epstr1, .eshape = eshape ))),

  weight = eval(substitute(expression({
    LHS <- ((1 - pstr1) / pobs1) * dpmf1.dshape^2 - d2pmf1.dshape2
    RHS <- (ghar2 - ghar1^2 / ghar0) / ghar0
    ned2l.dpstr12 <- onempmf1 / ((1 - pstr1) * pobs1)
    ned2l.dpstr1shape <- dpmf1.dshape / pobs1
    ned2l.dshape2 <- (1 - pstr1) * (LHS + (1 - pmf1) * RHS)
    wz <- array(c(c(w) * ned2l.dpstr12 * dpstr1.deta^2,
                  c(w) * ned2l.dshape2 * dshape.deta^2,
                  c(w) * ned2l.dpstr1shape * dpstr1.deta * dshape.deta),
                dim = c(n, M / M1, 3))
    wz <- arwz2wz(wz, M = M, M1 = M1)
    wz
  }), list( .lshape = lshape, .eshape = eshape ))))
} # oizipf


dotzeta <- function(x, shape, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  if (log.arg) {
    ans <- dzeta(x, shape, log = log.arg) - log1p(-dzeta(1, shape))
    ans[x == 1] <- log(0)
  } else {
    ans <- dzeta(x, shape) / (1 - dzeta(1, shape))
    ans[x == 1] <- 0
  }
  ans[shape < 0] <- NaN
  ans
}  # dotzeta


potzeta <- function(q, shape, log.p = FALSE) {
  if (log.p) {
    log(pzeta(q, shape) - dzeta(1, shape)) - log1p(-dzeta(1, shape))
  } else {
    (pzeta(q, shape) - dzeta(1, shape)) / (1 - dzeta(1, shape))
  }
}


qotzeta <- function(p, shape) {
  ans <- qzeta((1 - dzeta(1, shape)) * p + dzeta(1, shape), shape = shape)
  ans[p == 1] <- Inf
  ans[p < 0 | 1 < p] <- NaN
  ans[shape < 0] <- NaN
  ans
}  # qotzeta


rotzeta <- function(n, shape) {
  qotzeta(runif(n), shape)
}


 otzeta <- function(lshape = "loge",
                    ishape = NULL,
                    gshape = exp((-4:3)/4),
                    zero = NULL) {
  if (length(ishape) && !is.Numeric(ishape, positive = TRUE))
    stop("argument 'ishape' must be > 0")

  lshape <- as.list(substitute(lshape))
  eshape <- link2list(lshape)
  lshape <- attr(eshape, "function.name")

  new("vglmff",
  blurb = c("One-truncated Zeta distribution ",
            "f(y) = 1/(y^(shape+1) * (zeta(shape+1) - 1)), ",
            "0 < shape, y = 2, 3,...\n\n",
            "Link: ",
            namesof("shape", lshape, earg = eshape)),

  ## The original "infos" and "initialize" slots are missing from this
  ## copy of the source; what follows is a minimal reconstruction
  ## modelled on zetaff() above, and is an assumption rather than the
  ## verified original code.
  infos = eval(substitute(function(...) {
    list(M1 = 1,
         Q1 = 1,
         multipleResponses = TRUE,
         parameters.names = "shape",
         zero = .zero ,
         lshape = .lshape )
  }, list( .lshape = lshape, .zero = zero ))),

  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       ncol.w.max = Inf, ncol.y.max = Inf,
                       Is.integer.y = TRUE, Is.positive.y = TRUE,
                       out.wy = TRUE, colsyperw = 1, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y
    if (any(y < 2))
      stop("response values must be 2, 3,... for one-truncation")
    ncoly <- ncol(y)
    mynames1 <- param.names("shape", ncoly)
    predictors.names <-
      namesof(mynames1, .lshape , earg = .eshape , tag = FALSE)
    M1 <- 1
    extra$ncoly <- ncoly
    extra$M1 <- M1
    M <- M1 * ncoly
    if (!length(etastart)) {
      otzeta.Loglikfun <- function(shape, y, x, w, extraargs) {
        sum(c(w) * dotzeta(x = y, shape, log = TRUE))
      }
      gshape <- .gshape
      if (!length( .ishape )) {
        shape.init <- matrix(NA_real_, n, M, byrow = TRUE)
        for (jay in 1:ncoly) {
          shape.init[, jay] <-
            grid.search(gshape, objfun = otzeta.Loglikfun,
                        y = y[, jay], x = x, w = w[, jay])
        }
      } else {
        shape.init <- matrix( .ishape , n, M, byrow = TRUE)
      }
      etastart <- theta2eta(shape.init, .lshape , earg = .eshape )
    }
  }), list( .lshape = lshape, .eshape = eshape,
            .ishape = ishape, .gshape = gshape ))),

  linkinv = eval(substitute(function(eta, extra = NULL) {
    ans <- pp <- eta2theta(eta, .lshape , earg = .eshape )
    ans[pp > 1] <- zeta(pp[pp > 1]) / zeta(pp[pp > 1] + 1)
    ans[pp <= 1] <- NA
    pmf.1 <- dzeta(1, pp)
    (ans - pmf.1) / (1 - pmf.1)
  }, list( .lshape = lshape, .eshape = eshape ))),

  last = eval(substitute(expression({
    misc$link <- rep_len( .lshape , ncoly)
    names(misc$link) <- mynames1

    misc$earg <- vector("list", M)
    names(misc$earg) <- mynames1
    for (jay in 1:ncoly) {
      misc$earg[[jay]] <- .eshape
    }
  }), list( .lshape = lshape, .eshape = eshape ))),

  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta,
             extra = NULL, summation = TRUE) {
    shape <- eta2theta(eta, .lshape , earg = .eshape )
    if (residuals) {
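      ## dotzeta()/potzeta()/qotzeta() above use the usual truncation
      ## identities: for y >= 2, f_ot(y) = f(y)/(1 - f(1)) and
      ## F_ot(q) = (F(q) - f(1))/(1 - f(1)), so qotzeta() simply maps p
      ## to qzeta((1 - f(1))*p + f(1), shape).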
stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dotzeta(x = y, shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape = lshape, .eshape = eshape ))), vfamily = c("otzeta"), validparams = eval(substitute(function(eta, y, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) okay1 <- all(is.finite(shape)) && all(0 < shape) okay1 }, list( .lshape = lshape, .eshape = eshape ))), deriv = eval(substitute(expression({ shape <- eta2theta(eta, .lshape , earg = .eshape ) BBBB <- zeta(shape + 1) - 1 fred1 <- zeta(shape + 1, deriv = 1) dl.dshape <- -log(y) - fred1 / BBBB dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) c(w) * dl.dshape * dshape.deta }), list( .lshape = lshape, .eshape = eshape ))), weight = expression({ ned2l.dshape2 <- (zeta(shape + 1, deriv = 2) - fred1^2 / BBBB) / BBBB wz <- ned2l.dshape2 * dshape.deta^2 c(w) * wz })) } ddiffzeta <- function(x, shape, start = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(shape), length(x), length(start)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(start) != LLL) start <- rep_len(start, LLL) ox <- !is.finite(x) zero <- ox | round(x) != x | x < start ans <- rep_len(if (log.arg) log(0) else 0, LLL) if (any(!zero)) { ans[!zero] <- (start[!zero] / x[!zero]) ^(shape[!zero]) - (start[!zero] / (1 + x[!zero]))^(shape[!zero]) if (log.arg) ans[!zero] <- log(ans[!zero]) } if (any(ox)) ans[ox] <- if (log.arg) log(0) else 0 ans[shape <= 0] <- NaN ans[start != round(start) | start < 1] <- NaN ans } pdiffzeta <- function(q, shape, start = 1, lower.tail = TRUE) { LLL <- max(length(shape), length(q), length(start)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(start) != LLL) start <- rep_len(start, LLL) if (lower.tail) { ans <- 1 - (start / floor(1 + q))^shape } else { ans <- (start / floor(1 + q))^shape } ans[q < start] <- if (lower.tail) 0 else 1 ans[shape <= 0] <- NaN ans[start != round(start) | start < 1] <- NaN ans } # pdiffzeta qdiffzeta <- function(p, shape, start = 1) { LLL <- max(length(p), length(shape), length(start)) if (length(p) != LLL) p <- rep_len(p, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) if (length(start) != LLL) start <- rep_len(start, LLL) lo <- rep_len(start, LLL) approx.ans <- lo # True at lhs hi <- 2 * lo + 10 dont.iterate <- p == 1 | shape <= 0 | start != round(start) | start < 1 done <- p <= pdiffzeta(hi, shape, start = start) | dont.iterate max.iter <- 100 iter <- 0 while (!all(done) && iter < max.iter) { hi.save <- hi[!done] hi[!done] <- 2 * lo[!done] + 10 lo[!done] <- hi.save done[!done] <- is.infinite(hi[!done]) | (p[!done] <= pdiffzeta(hi[!done], shape[!done], start[!done])) iter <- iter + 1 } foo <- function(q, shape, start, p) pdiffzeta(q, shape, start) - p lhs <- (p <= ddiffzeta(start, shape, start = start)) | dont.iterate approx.ans[!lhs] <- bisection.basic(foo, lo[!lhs], hi[!lhs], tol = 1/16, shape = shape[!lhs], start = start[!lhs], p = p[!lhs]) faa <- floor(approx.ans) ans <- ifelse(pdiffzeta(faa , shape, start = start) < p & p <= pdiffzeta(faa+1, shape, start = start), faa+1, faa) ans[p == 1] <- Inf ans[shape <= 0] <- NaN ans[start != round(start) | start < 1] <- NaN ans } # qdiffzeta rdiffzeta <- function(n, shape, start = 1) { rr <- runif(n) qdiffzeta(rr, shape, start = start) } diffzeta <- function(start = 
1, lshape = "loge", ishape = NULL) { if (!is.Numeric(start, positive = TRUE, integer.valued = TRUE, length.arg = 1)) stop("bad input for argument 'start'") enteredstart <- length(start) if (length(ishape) && !is.Numeric(ishape, positive = TRUE)) stop("argument 'ishape' must be > 0") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") new("vglmff", blurb = c("Difference in 2 Zipf distributions ", "f(y;s) = y^(-shape) / sum((1:start)^(-shape)), ", "shape > 0, start, start+1,...", ifelse(enteredstart, paste("start = ", start, sep = ""), ""), "\n\n", "Link: ", namesof("shape", lshape, earg = eshape), "\n\n", "Mean: gharmonic(start, shape-1) / gharmonic(start, shape)"), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, expected = TRUE, multipleResponses = TRUE, start = .start , parameters.names = "shape") }, list( .start = start ))), initialize = eval(substitute(expression({ start <- .start temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, Is.integer.y = TRUE, Is.positive.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y if (any(y < start)) stop("some response values less than 'start'") predictors.names <- namesof("shape", .lshape , earg = .eshape , tag = FALSE) extra$start <- start if (!length(etastart)) { llfun <- function(shape, y, start, w) { sum(c(w) * ddiffzeta(x = y, start = extra$start, shape = shape, log = TRUE)) } shape.init <- if (length( .ishape )) .ishape else getInitVals(gvals = seq(0.1, 3.0, length.out = 19), llfun = llfun, y = y, start = extra$start, w = w) shape.init <- rep_len(shape.init, length(y)) if ( .lshape == "loglog") shape.init[shape.init <= 1] <- 1.2 etastart <- theta2eta(shape.init, .lshape , earg = .eshape ) } }), list( .lshape = lshape, .eshape = eshape, .ishape = ishape, .start = start ))), linkinv = eval(substitute(function(eta, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) aa <- extra$start if (length(aa) != 1 || aa < 1 || round(aa) != aa) stop("the 'start' variable must be of unit length") if (aa == 1) return(zeta(shape)) mymat <- matrix(1:aa, NROW(eta), aa, byrow = TRUE) temp1 <- rowSums(1 / mymat^shape) (aa^shape) * (zeta(shape) - temp1 + 1 / aa^(shape-1)) }, list( .lshape = lshape, .eshape = eshape ))), last = eval(substitute(expression({ misc$expected <- FALSE misc$link <- c(shape = .lshape ) misc$earg <- list(shape = .eshape ) misc$start <- extra$start }), list( .lshape = lshape, .eshape = eshape ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shape <- eta2theta(eta, .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * ddiffzeta(x = y, start = extra$start, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape = lshape, .eshape = eshape ))), vfamily = c("diffzeta"), validparams = eval(substitute(function(eta, y, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) okay1 <- all(is.finite(shape)) && all(0 < shape) okay1 }, list( .lshape = lshape, .eshape = eshape ))), deriv = eval(substitute(expression({ shape <- eta2theta(eta, .lshape , earg = .eshape ) temp1 <- extra$start / y temp2 <- extra$start / (y+1) AA <- temp1^shape - temp2^shape Aprime <- log(temp1) * temp1^shape - log(temp2) * temp2^shape dl.dshape <- Aprime / AA dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) c(w) * dl.dshape * dshape.deta }), list( .lshape = 
lshape, .eshape = eshape ))), weight = expression({ ned2l.dshape <- (Aprime / AA)^2 # Not quite FS. Half FS. wz <- c(w) * ned2l.dshape * dshape.deta^2 wz })) } VGAM/R/family.nbd.R0000644000176200001440000017016313135276757013403 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. Init.mu <- function(y, x = cbind("(Intercept)" = rep_len(1, NROW(y))), w = x, imethod = 1, imu = NULL, ishrinkage = 0.95, pos.only = FALSE, probs.y = 0.35) { if (!is.matrix(x)) x <- as.matrix(x) if (!is.matrix(y)) y <- as.matrix(y) if (!is.matrix(w)) w <- as.matrix(w) if (ncol(w) != ncol(y)) w <- matrix(w, nrow = nrow(y), ncol = ncol(y)) if (length(imu)) { MU.INIT <- matrix(imu, nrow(y), ncol(y), byrow = TRUE) return(MU.INIT) } if (!is.Numeric(ishrinkage, length.arg = 1) || ishrinkage < 0 || ishrinkage > 1) warning("bad input for argument 'ishrinkage'; ", "using the value 0.95 instead") if (imethod > 6) { warning("argument 'imethod' should be 1 or 2 or... 6; ", "using the value 1") imethod <- 1 } mu.init <- y for (jay in 1:ncol(y)) { TFvec <- if (pos.only) y[, jay] > 0 else TRUE locn.est <- if ( imethod %in% c(1, 4)) { weighted.mean(y[TFvec, jay], w[TFvec, jay]) + 1/16 } else if ( imethod %in% c(3, 6)) { c(quantile(y[TFvec, jay], probs = probs.y ) + 1/16) } else { median(y[TFvec, jay]) + 1/16 } if (imethod <= 3) { mu.init[, jay] <- ishrinkage * locn.est + (1 - ishrinkage ) * y[, jay] } else { medabsres <- median(abs(y[, jay] - locn.est)) + 1/32 allowfun <- function(z, maxtol = 1) sign(z) * pmin(abs(z), maxtol) mu.init[, jay] <- locn.est + (1 - ishrinkage ) * allowfun(y[, jay] - locn.est, maxtol = medabsres) mu.init[, jay] <- abs(mu.init[, jay]) + 1 / 1024 } } # of for (jay) mu.init } EIM.NB.specialp <- function(mu, size, y.max = NULL, # Must be an integer cutoff.prob = 0.995, intercept.only = FALSE, extra.bit = TRUE) { if (intercept.only) { mu <- mu[1] size <- size[1] } y.min <- 0 # A fixed constant really if (!is.numeric(y.max)) { eff.p <- sort(c(cutoff.prob, 1 - cutoff.prob)) y.max <- max(round(qnbinom(p = eff.p[2], mu = mu, size = size) * 1.1)) + 30 } Y.mat <- if (intercept.only) y.min:y.max else matrix(y.min:y.max, length(mu), y.max-y.min+1, byrow = TRUE) neff.row <- ifelse(intercept.only, 1, nrow(Y.mat)) neff.col <- ifelse(intercept.only, length(Y.mat), ncol(Y.mat)) if (FALSE) { trigg.term <- if (intercept.only) { check2 <- sum(pnbinom(Y.mat, size = size, mu = mu, lower.tail = FALSE) / (Y.mat + size)^2) check2 } else { check2 <- rowSums(pnbinom(Y.mat, size = size, mu = mu, lower.tail = FALSE) / (Y.mat + size)^2) check2 } } # FALSE if (TRUE) { answerC <- .C("eimpnbinomspecialp", as.integer(intercept.only), as.double(neff.row), as.double(neff.col), as.double(size), as.double(pnbinom(Y.mat, size = size, mu = mu, lower.tail = FALSE)), rowsums = double(neff.row)) trigg.term <- answerC$rowsums } # TRUE ned2l.dk2 <- trigg.term if (extra.bit) ned2l.dk2 <- ned2l.dk2 - 1 / size + 1 / (size + mu) ned2l.dk2 } # EIM.NB.specialp() EIM.NB.speciald <- function(mu, size, y.min = 0, # 20160201; must be an integer y.max = NULL, # Must be an integer cutoff.prob = 0.995, intercept.only = FALSE, extra.bit = TRUE) { if (intercept.only) { mu <- mu[1] size <- size[1] } if (!is.numeric(y.max)) { eff.p <- sort(c(cutoff.prob, 1 - cutoff.prob)) y.max <- max(round(qnbinom(p = eff.p[2], mu = mu, size = size) * 1.1)) + 30 } Y.mat <- if (intercept.only) y.min:y.max else matrix(y.min:y.max, length(mu), y.max-y.min+1, byrow = TRUE) trigg.term <- if (intercept.only) { 
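    ## The EIM element for 'size' is trigamma(size) - E[trigamma(Y + size)]
    ## - 1/size + 1/(size + mu); here the expectation is computed directly
    ## as sum_y dnbinom(y) * trigamma(y + size) over a support truncated at
    ## y.max (cf. the series representation used by EIM.NB.specialp()
    ## above).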
dnbinom(Y.mat, size = size, mu = mu) %*% trigamma(Y.mat + size) } else { rowSums(dnbinom(Y.mat, size = size, mu = mu) * trigamma(Y.mat + size)) } ned2l.dk2 <- trigamma(size) - trigg.term if (extra.bit) ned2l.dk2 <- ned2l.dk2 - 1 / size + 1 / (size + mu) ned2l.dk2 } # end of EIM.NB.speciald() NBD.Loglikfun2 <- function(munbval, sizeval, y, x, w, extraargs) { sum(c(w) * dnbinom(x = y, mu = munbval, size = sizeval, log = TRUE)) } negbinomial.initialize.yj <- function(yvec, wvec = rep(1, length(yvec)), gprobs.y = ppoints(9), wm.yj = weighted.mean(yvec, w = wvec)) { try.mu <- c(quantile(yvec, probs = gprobs.y) + 1/16, wm.yj) if (median(try.mu) < 1) { y.pos <- yvec[yvec > 0] try.mu <- c(min(try.mu), # 0.25, wm.yj, summary.default(y.pos)[c(1:3, 5)], quantile(y.pos, probs = gprobs.y) - 1/16) } unique(sort(try.mu)) } negbinomial.control <- function(save.weights = TRUE, ...) { list(save.weights = save.weights) } negbinomial <- function( zero = "size", parallel = FALSE, deviance.arg = FALSE, type.fitted = c("mean", "quantiles"), percentiles = c(25, 50, 75), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, # Maxiter = 5000, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, # max.memory = Inf is allowed lmu = "loge", lsize = "loge", imethod = 1, imu = NULL, iprobs.y = NULL, # 0.35, gprobs.y = ppoints(6), isize = NULL, gsize.mux = exp(c(-30, -20, -15, -10, -6:3))) { if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (!is.logical( deviance.arg ) || length( deviance.arg ) != 1) stop("argument 'deviance.arg' must be TRUE or FALSE") type.fitted <- match.arg(type.fitted, c("mean", "quantiles"))[1] lmunb <- as.list(substitute(lmu)) emunb <- link2list(lmunb) lmunb <- attr(emunb, "function.name") imunb <- imu lsize <- as.list(substitute(lsize)) esize <- link2list(lsize) lsize <- attr(esize, "function.name") if (!is.Numeric(eps.trig, length.arg = 1, positive = TRUE) || eps.trig > 1e-5) stop("argument 'eps.trig' must be positive and smaller in value") if (length(imunb) && !is.Numeric(imunb, positive = TRUE)) stop("bad input for argument 'imu'") if (length(isize) && !is.Numeric(isize, positive = TRUE)) stop("bad input for argument 'isize'") if (!is.Numeric(cutoff.prob, length.arg = 1) || cutoff.prob < 0.95 || cutoff.prob >= 1) stop("range error in the argument 'cutoff.prob'; ", "a value in [0.95, 1) is needed") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 10) warning("argument 'nsimEIM' should be an integer ", "greater than 10, say") if (is.logical( parallel ) && parallel && length(zero)) stop("need to set 'zero = NULL' when parallel = TRUE") ans <- new("vglmff", blurb = c("Negative binomial distribution\n\n", "Links: ", namesof("mu", lmunb, earg = emunb), ", ", namesof("size", lsize, earg = esize), "\n", "Mean: mu\n", "Variance: mu * (1 + mu / size) for NB-2"), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .parallel = parallel, .zero = zero ))), infos = eval(substitute(function(...) 
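  ## An illustrative (not run) fit; 'ndata' is a hypothetical data frame:
  ##   ndata <- data.frame(x2 = runif(nn <- 500))
  ##   ndata <- transform(ndata, y1 = rnbinom(nn, mu = exp(1 + x2),
  ##                                          size = exp(1)))
  ##   fit <- vglm(y1 ~ x2, negbinomial, data = ndata, trace = TRUE)
  ##   coef(fit, matrix = TRUE)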
{ list(M1 = 2, Q1 = 1, expected = TRUE, imethod = .imethod , mds.min = .mds.min , multipleResponses = TRUE, parameters.names = c("mu", "size"), type.fitted = .type.fitted , percentiles = .percentiles , lmu = .lmunb , lsize = .lsize , nsimEIM = .nsimEIM , eps.trig = .eps.trig , zero = .zero , max.chunk.MB = .max.chunk.MB , cutoff.prob = .cutoff.prob ) }, list( .zero = zero, .lsize = lsize, .lmunb = lmunb, .type.fitted = type.fitted, .percentiles = percentiles , .eps.trig = eps.trig, .imethod = imethod, .mds.min = mds.min, .cutoff.prob = cutoff.prob, .max.chunk.MB = max.chunk.MB, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ M1 <- 2 temp12 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp12$w y <- temp12$y assign("CQO.FastAlgorithm", ( .lmunb == "loge") && ( .lsize == "loge"), envir = VGAMenv) if (any(function.name == c("cqo", "cao")) && ((is.Numeric( .zero , length.arg = 1) && .zero != -2) || (is.character( .zero ) && .zero != "size"))) stop("argument zero = 'size' or zero = -2 is required") extra$type.fitted <- .type.fitted extra$percentiles <- .percentiles extra$colnames.y <- colnames(y) NOS <- ncoly <- ncol(y) # Number of species M <- M1 * NOS predictors.names <- c(namesof(param.names("mu", NOS), .lmunb , earg = .emunb , tag = FALSE), namesof(param.names("size", NOS), .lsize , earg = .esize , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] gprobs.y <- .gprobs.y imunb <- .imunb # Default is NULL if (length(imunb)) imunb <- matrix(imunb, n, NOS, byrow = TRUE) if (!length(etastart)) { munb.init <- size.init <- matrix(NA_real_, n, NOS) if (length( .iprobs.y )) gprobs.y <- .iprobs.y gsize.mux <- .gsize.mux # gsize.mux is on a relative scale for (jay in 1:NOS) { # For each response 'y_jay'... do: wm.yj <- weighted.mean(y[, jay], w = w[, jay]) munb.init.jay <- if ( .imethod == 1 ) { negbinomial.initialize.yj(y[, jay], w[, jay], gprobs.y = gprobs.y, wm.yj = wm.yj) } else { wm.yj } if (length(imunb)) munb.init.jay <- imunb[, jay] gsize <- gsize.mux * 0.5 * (mean(munb.init.jay) + wm.yj) if (length( .isize )) gsize <- .isize # isize is on an absolute scale try.this <- grid.search2(munb.init.jay, gsize, objfun = NBD.Loglikfun2, y = y[, jay], w = w[, jay], ret.objfun = TRUE) # Last value is the loglik munb.init[, jay] <- try.this["Value1"] size.init[, jay] <- try.this["Value2"] } # for (jay ...) 
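      # Hedged usage sketch of negbinomial() itself (inactive, following
      # this file's if (FALSE) convention); the data are invented:
      if (FALSE) {
        set.seed(1)
        ndata <- data.frame(x2 = runif(nn <- 200))
        ndata <- transform(ndata,
                           y1 = rnbinom(nn, mu = exp(1 + x2),
                                        size = exp(1)))
        fit <- vglm(y1 ~ x2, negbinomial, data = ndata, trace = TRUE)
        coef(fit, matrix = TRUE)  # Columns: loge(mu), loge(size)
      }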
      newemu <- .emunb
      if ( .lmunb == "nbcanlink") {
        newemu$size <- size.init
      }

      etastart <-
        cbind(theta2eta(munb.init, link = .lmunb , earg = newemu ),
              theta2eta(size.init, link = .lsize , earg = .esize ))

      if ( .lmunb == "nbcanlink") {
        cond1 <- is.na(etastart[, c(TRUE, FALSE)])
        cond2 <- etastart[, c(TRUE, FALSE)] >= 0  # eta1 must be < 0
        if (any(cond1) || any(cond2)) {
          etamu.tmp <- etastart[, c(TRUE, FALSE)]
          etamu.tmp[cond1 | cond2] <- -0.1  # Elementwise, hence "|"
          etastart[, c(TRUE, FALSE)] <- etamu.tmp
        }
      }

      if (M > M1)
        etastart <-
          etastart[, interleave.VGAM(M, M1 = M1), drop = FALSE]
    }
  }), list( .lmunb = lmunb, .lsize = lsize,
            .emunb = emunb, .esize = esize,
            .imunb = imunb,
            .gprobs.y = gprobs.y, .gsize.mux = gsize.mux,
            .deviance.arg = deviance.arg,
            .isize = isize, .iprobs.y = iprobs.y,
            .nsimEIM = nsimEIM, .zero = zero,
            .imethod = imethod,
            .type.fitted = type.fitted,
            .percentiles = percentiles ))),

  linkinv = eval(substitute(function(eta, extra = NULL) {
    NOS <- ncol(eta) / c(M1 = 2)
    kmat <- NULL
    munb <- if ( .lmunb == "nbcanlink") {
      newemu <- .emunb
      kmat <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE],
                        .lsize , earg = .esize )
      munb <- kmat / expm1(-eta[, c(TRUE, FALSE), drop = FALSE])
      munb
    } else {
      eta2theta(eta[, c(TRUE, FALSE), drop = FALSE],
                .lmunb , earg = .emunb )
    }

    type.fitted <- if (length(extra$type.fitted))
                     extra$type.fitted else {
                       warning("cannot find 'type.fitted'. ",
                               "Returning the 'mean'.")
                       "mean"
                     }
    type.fitted <- match.arg(type.fitted,
                             c("mean", "quantiles"))[1]

    if (type.fitted == "mean") {
      return(label.cols.y(munb, colnames.y = extra$colnames.y,
                          NOS = NOS))
    }

    if (is.null(kmat))
      kmat <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE],
                        .lsize , earg = .esize )
    percvec <- extra$percentiles
    lenperc <- length(percvec)
    jvec <- lenperc * (0:(NOS - 1))
    ans <- matrix(0, nrow(eta), lenperc * NOS)
    for (kay in 1:lenperc)
      ans[, jvec + kay] <-
        qnbinom(0.01 * percvec[kay], mu = munb, size = kmat)
    rownames(ans) <- rownames(eta)
    label.cols.y(ans, colnames.y = extra$colnames.y,
                 NOS = NOS, percentiles = percvec,
                 one.on.one = FALSE)
  }, list( .lmunb = lmunb, .lsize = lsize,
           .emunb = emunb, .esize = esize))),

  last = eval(substitute(expression({
    if (exists("CQO.FastAlgorithm", envir = VGAMenv))
      rm("CQO.FastAlgorithm", envir = VGAMenv)

    if (function.name == "cao")
      ind2 <- FALSE

    save.weights <- control$save.weights <- !all(ind2)

    temp0303 <- c(rep_len( .lmunb , NOS),
                  rep_len( .lsize , NOS))
    names(temp0303) <- c(param.names("mu",   NOS),
                         param.names("size", NOS))
    misc$link <- temp0303[interleave.VGAM(M, M1 = M1)]  # Already named

    misc$earg <- vector("list", M)
    names(misc$earg) <- names(misc$link)
    for (ii in 1:NOS) {
      misc$earg[[M1*ii-1]] <- if ( .lmunb == "nbcanlink")
                                newemu else .emunb
      misc$earg[[M1*ii  ]] <- .esize
    }
  }), list( .lmunb = lmunb, .lsize = lsize,
            .emunb = emunb, .esize = esize ))),

  linkfun = eval(substitute(function(mu, extra = NULL) {
    M1 <- 2

    eta.size <- theta2eta(if (is.numeric( .isize )) .isize else 1.0,
                          .lsize , earg = .esize )  # Needed just below

    newemu <- .emunb
    if ( .lmunb == "nbcanlink") {
      newemu$size <- eta2theta(eta.size, .lsize , earg = .esize )
    }

    eta.munb <- theta2eta(mu, .lmunb , earg = newemu)
    eta.size <- 0 * eta.munb + eta.size  # Right dimension now.
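    # A hedged numeric aside (inactive, in this file's if (FALSE)
    # style): for the canonical NB link, eta1 = log(munb / (munb + size))
    # is always negative, which is why initialize() above resets any
    # nonnegative starting values of eta1 to -0.1.
    if (FALSE) {
      log(4 / (4 + 1.5))  # e.g. mu = 4, size = 1.5 gives about -0.32
    }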
eta.temp <- cbind(eta.munb, eta.size) eta.temp[, interleave.VGAM(ncol(eta.temp), M1 = M1), drop = FALSE] }, list( .lmunb = lmunb, .lsize = lsize, .emunb = emunb, .esize = esize, .isize = isize ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { vecTF <- c(TRUE, FALSE) kmat <- eta2theta(eta[, !vecTF, drop=FALSE], .lsize , earg = .esize ) munb <- if ( .lmunb == "nbcanlink") { munb <- kmat / expm1(-eta[, vecTF, drop = FALSE]) if (min(munb) <= 0) { munb[munb <= 0] <- median(munb[munb > 0]) # 0.1 warning("'munb' has some negative values. ", "Using a temporary fix.") } munb } else { eta2theta(eta[, vecTF, drop = FALSE], .lmunb , earg = .emunb ) } if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dnbinom(x = y, mu = munb, size = kmat, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lmunb = lmunb, .lsize = lsize, .emunb = emunb, .esize = esize))), vfamily = c("negbinomial"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) vecTF <- c(TRUE, FALSE) munb <- cbind(eta2theta(eta[, vecTF], .lmunb , earg = .emunb )) size <- cbind(eta2theta(eta[, !vecTF], .lsize , earg = .esize )) rnbinom(nsim * length(munb), mu = munb, size = size) }, list( .lmunb = lmunb, .lsize = lsize, .emunb = emunb, .esize = esize ))), validparams = eval(substitute(function(eta, y, extra = NULL) { size <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lsize , earg = .esize ) munb <- if ( .lmunb == "nbcanlink") { munb <- size / expm1(-eta[, c(TRUE, FALSE), drop = FALSE]) munb } else { eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lmunb , earg = .emunb ) } smallval <- .mds.min # .munb.div.size okay1 <- all(is.finite(munb)) && all(0 < munb) && all(is.finite(size)) && all(0 < size) okay0 <- if ( .lmunb == "nbcanlink") all(eta[, c(TRUE, FALSE)] < 0) else TRUE overdispersion <- if (okay1) all(smallval < munb / size) else FALSE if (!overdispersion) warning("parameter 'size' has very large values relative ", "to 'mu'; ", "try fitting a quasi-Poisson ", "model instead.") okay1 && overdispersion && okay0 }, list( .lmunb = lmunb, .emunb = emunb, .lsize = lsize, .esize = esize, .mds.min = mds.min))), deriv = eval(substitute(expression({ if (iter == 1 && .deviance.arg ) { if (control$criterion != "coefficients" && control$half.step) warning("Argument 'criterion' should be 'coefficients' or ", "'half.step' should be 'FALSE' when 'deviance.arg = TRUE'") low.index <- ifelse(names(constraints)[1] == "(Intercept)", 2, 1) if (low.index <= length(constraints)) for (iii in low.index:length(constraints)) { conmat <- constraints[[iii]] if (any(conmat[c(FALSE, TRUE), ] != 0)) stop("argument 'deviance.arg' should only be TRUE for NB-2 ", "models; ", "non-zero elements detected for the 'size' parameter." ) } # for iii } # (iter == 1 && .deviance.arg ) vecTF <- c(TRUE, FALSE) M1 <- 2 NOS <- ncol(eta) / M1 kmat <- eta2theta(eta[, !vecTF, drop = FALSE], .lsize , earg = .esize ) munb <- if ( .lmunb == "nbcanlink") { munb <- kmat / expm1(-eta[, vecTF, drop = FALSE]) if (iter <= 2 && min(munb) <= 0) { munb[munb <= 0] <- median(munb[munb > 0]) warning("'munb' has some negative values. 
", "Using a temporary fix.") } munb } else { eta2theta(eta[, vecTF, drop = FALSE], .lmunb , earg = .emunb ) } smallval <- .mds.min # Something like this is needed if (any(big.size <- (munb / kmat < smallval))) { kmat[big.size] <- munb[big.size] / smallval } dl.dmunb <- y / munb - (1 + y/kmat) / (1 + munb/kmat) dl.dsize <- digamma(y + kmat) - digamma(kmat) + log1p(-munb / (kmat + munb)) - (y - munb) / (munb + kmat) if (any(big.size)) { dl.dsize[big.size] <- 1e-8 # A small number } dsize.deta2 <- dtheta.deta(kmat, .lsize , earg = .esize ) dmunb.deta1 <- if ( .lmunb == "nbcanlink") { dl.dsize <- digamma(y + kmat) - digamma(kmat) + log1p(-munb / (kmat + munb)) dmunb.deta1 <- nbcanlink(munb, size = kmat, wrt.param = 1, inverse = TRUE, deriv = 1) dmunb.deta1 } else { dtheta.deta(munb, .lmunb , earg = .emunb ) } myderiv <- c(w) * cbind(dl.dmunb * dmunb.deta1, dl.dsize * dsize.deta2) if (M > M1) myderiv <- myderiv[, interleave.VGAM(M, M1 = M1)] myderiv }), list( .lmunb = lmunb, .lsize = lsize, .emunb = emunb, .esize = esize, .deviance.arg = deviance.arg, .mds.min = mds.min ))), weight = eval(substitute(expression({ wz <- matrix(NA_real_, n, M) max.support <- .max.support max.chunk.MB <- .max.chunk.MB ind2 <- matrix(FALSE, n, NOS) # Used for SFS for (jay in 1:NOS) { eff.p <- sort(c( .cutoff.prob , 1 - .cutoff.prob )) Q.mins <- 0 Q.maxs <- round(qnbinom(p = eff.p[2], mu = munb[, jay], size = kmat[, jay]) * 1.1) + 30 eps.trig <- .eps.trig Q.MAXS <- if ( .lsize == "loge") pmax(10, ceiling(kmat[, jay] / sqrt(eps.trig))) else Inf Q.maxs <- pmin(Q.maxs, Q.MAXS) ind1 <- if (max.chunk.MB > 0) (Q.maxs - Q.mins < max.support) else FALSE if ((NN <- sum(ind1)) > 0) { Object.Size <- NN * 8 * max(Q.maxs - Q.mins) / (2^20) n.chunks <- if (intercept.only) 1 else max(1, ceiling( Object.Size / max.chunk.MB)) chunk.rows <- ceiling(NN / n.chunks) ind2[, jay] <- ind1 # Save this wind2 <- which(ind1) upr.ptr <- 0 lwr.ptr <- upr.ptr + 1 while (lwr.ptr <= NN) { upr.ptr <- min(upr.ptr + chunk.rows, NN) sind2 <- wind2[lwr.ptr:upr.ptr] wz[sind2, M1*jay] <- EIM.NB.specialp(mu = munb[sind2, jay], size = kmat[sind2, jay], y.max = max(Q.maxs[sind2]), cutoff.prob = .cutoff.prob , intercept.only = intercept.only) if (any(eim.kk.TF <- wz[sind2, M1*jay] <= 0)) { ind2[sind2[eim.kk.TF], jay] <- FALSE } lwr.ptr <- upr.ptr + 1 } # while } # if } # end of for (jay in 1:NOS) for (jay in 1:NOS) { run.varcov <- 0 ii.TF <- !ind2[, jay] # Not assigned above if (any(ii.TF)) { kkvec <- kmat[ii.TF, jay] muvec <- munb[ii.TF, jay] for (ii in 1:( .nsimEIM )) { ysim <- rnbinom(sum(ii.TF), mu = muvec, size = kkvec) dl.dsize <- digamma(ysim + kkvec) - digamma(kkvec) - (ysim - muvec) / (muvec + kkvec) + log1p( -muvec / (kkvec + muvec)) run.varcov <- run.varcov + dl.dsize^2 } # for ii run.varcov <- c(run.varcov / .nsimEIM ) ned2l.dsize2 <- if (intercept.only) mean(run.varcov) else run.varcov wz[ii.TF, M1*jay] <- ned2l.dsize2 } # any ii.TF } # for jay save.weights <- !all(ind2) ned2l.dmunb2 <- 1 / munb - 1 / (munb + kmat) wz[, c(TRUE, FALSE)] <- ned2l.dmunb2 * dmunb.deta1^2 if ( .lmunb == "nbcanlink") { wz[, !vecTF] <- wz[, !vecTF] + 1 / kmat - 1 / (kmat + munb) } wz[, !vecTF] <- wz[, !vecTF] * dsize.deta2^2 if ( .lmunb == "nbcanlink") { ned2l.dmunb.dsize <- 1 / (munb + kmat) wzoffdiag <- ned2l.dmunb.dsize * dmunb.deta1 * dsize.deta2 wz <- if (M > M1) { wzoffdiag <- kronecker(wzoffdiag, cbind(1, 0)) cbind(wz, wzoffdiag[, -ncol(wzoffdiag)]) } else { cbind(wz, wzoffdiag) } } w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = NOS) }), list( .cutoff.prob = 
cutoff.prob, .max.support = max.support, .max.chunk.MB = max.chunk.MB, .lmunb = lmunb, .lsize = lsize, .eps.trig = eps.trig, .nsimEIM = nsimEIM )))) if (deviance.arg) { ans@deviance <- eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { size <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lsize , earg = .esize ) if (residuals) { stop("this part of the function has not been written yet.") } else { dev.elts <- 2 * c(w) * (y * log(pmax(1, y) / mu) - (y + size) * log((y + size) / (mu + size))) if (summation) { sum(dev.elts) } else { dev.elts } } }, list( .lsize = lsize, .esize = esize, .lmunb = lmunb, .emunb = emunb ))) } ans } # negbinomial() polya.control <- function(save.weights = TRUE, ...) { list(save.weights = save.weights) } polya <- function( zero = "size", type.fitted = c("mean", "prob"), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, # Maxiter = 5000, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, # max.memory = Inf is allowed lprob = "logit", lsize = "loge", imethod = 1, iprob = NULL, iprobs.y = NULL, gprobs.y = ppoints(6), isize = NULL, gsize.mux = exp(c(-30, -20, -15, -10, -6:3)), imunb = NULL) { if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") deviance.arg <- FALSE # 20131212; for now type.fitted <- match.arg(type.fitted, c("mean", "prob"))[1] if (length(iprob) && !is.Numeric(iprob, positive = TRUE)) stop("bad input for argument 'iprob'") if (length(isize) && !is.Numeric(isize, positive = TRUE)) stop("bad input for argument 'isize'") if (!is.Numeric(eps.trig, length.arg = 1, positive = TRUE) || eps.trig > 0.001) stop("argument 'eps.trig' must be positive and smaller in value") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 10) warning("argument 'nsimEIM' should be an integer ", "greater than 10, say") lprob <- as.list(substitute(lprob)) eprob <- link2list(lprob) lprob <- attr(eprob, "function.name") lsize <- as.list(substitute(lsize)) esize <- link2list(lsize) lsize <- attr(esize, "function.name") ans <- new("vglmff", blurb = c("Polya (negative-binomial) distribution\n\n", "Links: ", namesof("prob", lprob, earg = eprob), ", ", namesof("size", lsize, earg = esize), "\n", "Mean: size * (1 - prob) / prob\n", "Variance: mean / prob"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, mds.min = .mds.min , type.fitted = .type.fitted , eps.trig = .eps.trig , parameters.names = c("prob", "size"), zero = .zero) }, list( .zero = zero, .eps.trig = eps.trig, .type.fitted = type.fitted, .mds.min = mds.min))), initialize = eval(substitute(expression({ M1 <- 2 if (any(function.name == c("cqo", "cao"))) stop("polya() does not work with cqo() or cao(). 
", "Try negbinomial()") temp12 <- w.y.check(w = w, y = y, Is.integer.y = TRUE, Is.nonnegative = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp12$w y <- temp12$y M <- M1 * ncol(y) NOS <- ncoly <- ncol(y) # Number of species extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) predictors.names <- c(namesof(param.names("prob", NOS), .lprob , earg = .eprob , tag = FALSE), namesof(param.names("size", NOS), .lsize , earg = .esize , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] if (is.null( .nsimEIM )) { save.weights <- control$save.weights <- FALSE } gprobs.y <- .gprobs.y imunb <- .imunb # Default in NULL if (length(imunb)) imunb <- matrix(imunb, n, NOS, byrow = TRUE) if (!length(etastart)) { munb.init <- size.init <- matrix(NA_real_, n, NOS) gprobs.y <- .gprobs.y if (length( .iprobs.y )) gprobs.y <- .iprobs.y gsize.mux <- .gsize.mux # gsize.mux is on a relative scale for (jay in 1:NOS) { # For each response 'y_jay'... do: munb.init.jay <- if ( .imethod == 1 ) { quantile(y[, jay], probs = gprobs.y) + 1/16 } else { weighted.mean(y[, jay], w = w[, jay]) } if (length(imunb)) munb.init.jay <- imunb[, jay] gsize <- gsize.mux * 0.5 * (mean(munb.init.jay) + weighted.mean(y[, jay], w = w[, jay])) if (length( .isize )) gsize <- .isize # isize is on an absolute scale try.this <- grid.search2(munb.init.jay, gsize, objfun = NBD.Loglikfun2, y = y[, jay], w = w[, jay], ret.objfun = TRUE) # Last value is the loglik munb.init[, jay] <- try.this["Value1"] size.init[, jay] <- try.this["Value2"] } # for (jay ...) prob.init <- if (length( .iprob )) matrix( .iprob , nrow(y), ncol(y), byrow = TRUE) else size.init / (size.init + munb.init) etastart <- cbind(theta2eta(prob.init, .lprob , earg = .eprob), theta2eta(size.init, .lsize , earg = .esize)) etastart <- etastart[, interleave.VGAM(M, M1 = M1), drop = FALSE] } }), list( .lprob = lprob, .lsize = lsize, .eprob = eprob, .esize = esize, .iprob = iprob, .isize = isize, .pinit = iprob, .gprobs.y = gprobs.y, .gsize.mux = gsize.mux, .iprobs.y = iprobs.y, .nsimEIM = nsimEIM, .zero = zero, .imethod = imethod , .imunb = imunb, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 2) pmat <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lprob , earg = .eprob ) kmat <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lsize , earg = .esize ) type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. 
", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "prob"))[1] ans <- switch(type.fitted, "mean" = kmat * (1 - pmat) / pmat, "prob" = pmat) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lprob = lprob, .eprob = eprob, .lsize = lsize, .esize = esize))), last = eval(substitute(expression({ temp0303 <- c(rep_len( .lprob , NOS), rep_len( .lsize , NOS)) names(temp0303) <- c(param.names("prob", NOS), param.names("size", NOS)) temp0303 <- temp0303[interleave.VGAM(M, M1 = M1)] misc$link <- temp0303 # Already named misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:NOS) { misc$earg[[M1*ii-1]] <- .eprob misc$earg[[M1*ii ]] <- .esize } misc$isize <- .isize misc$imethod <- .imethod misc$nsimEIM <- .nsimEIM }), list( .lprob = lprob, .lsize = lsize, .eprob = eprob, .esize = esize, .isize = isize, .nsimEIM = nsimEIM, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { pmat <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lprob , earg = .eprob) temp300 <- eta[, c(FALSE, TRUE), drop = FALSE] if ( .lsize == "loge") { bigval <- 68 temp300[temp300 > bigval] <- bigval temp300[temp300 < -bigval] <- -bigval } kmat <- eta2theta(temp300, .lsize , earg = .esize ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dnbinom(y, prob = pmat, size = kmat, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lsize = lsize, .lprob = lprob, .esize = esize, .eprob = eprob ))), vfamily = c("polya"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) pmat <- eta2theta(eta[, c(TRUE, FALSE)], .lprob , .eprob ) kmat <- eta2theta(eta[, c(FALSE, TRUE)], .lsize , .esize ) rnbinom(nsim * length(pmat), prob = pmat, size = kmat) }, list( .lprob = lprob, .lsize = lsize, .eprob = eprob, .esize = esize ))), validparams = eval(substitute(function(eta, y, extra = NULL) { pmat <- eta2theta(eta[, c(TRUE, FALSE)], .lprob , .eprob ) size <- eta2theta(eta[, c(FALSE, TRUE)], .lsize , .esize ) munb <- size * (1 / pmat - 1) smallval <- .mds.min # .munb.div.size okay1 <- all(is.finite(munb)) && all(0 < munb) && all(is.finite(size)) && all(0 < size) && all(is.finite(pmat)) && all(0 < pmat & pmat < 1) overdispersion <- if (okay1) all(munb / size > smallval) else FALSE if (!overdispersion) warning("parameter 'size' has very large values; ", "try fitting a quasi-Poisson ", "model instead.") okay1 && overdispersion }, list( .lprob = lprob, .eprob = eprob, .lsize = lsize, .esize = esize, .mds.min = mds.min))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- ncol(eta) / M1 pmat <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lprob , earg = .eprob ) temp3 <- eta[, c(FALSE, TRUE), drop = FALSE] if ( .lsize == "loge") { bigval <- 68 temp3[temp3 > bigval] <- bigval # pmin() collapses matrices temp3[temp3 < -bigval] <- -bigval } kmat <- as.matrix(eta2theta(temp3, .lsize , earg = .esize )) dl.dprob <- kmat / pmat - y / (1.0 - pmat) dl.dkayy <- digamma(y + kmat) - digamma(kmat) + log(pmat) dprob.deta <- dtheta.deta(pmat, .lprob , earg = .eprob ) dkayy.deta <- dtheta.deta(kmat, .lsize , earg = .esize ) myderiv <- c(w) * cbind(dl.dprob * dprob.deta, dl.dkayy * dkayy.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lprob = lprob, .lsize = lsize, 
.eprob = eprob, .esize = esize))), weight = eval(substitute(expression({ wz <- matrix(0, n, M + M - 1) # wz is 'tridiagonal' max.support <- .max.support max.chunk.MB <- .max.chunk.MB ind2 <- matrix(FALSE, n, NOS) # Used for SFS for (jay in 1:NOS) { eff.p <- sort(c( .cutoff.prob , 1 - .cutoff.prob )) Q.mins <- 0 Q.maxs <- round(qnbinom(p = eff.p[2], mu = mu[, jay], size = kmat[, jay]) * 1.1) + 30 eps.trig <- .eps.trig Q.MAXS <- pmax(10, ceiling(1 / sqrt(eps.trig))) Q.maxs <- pmin(Q.maxs, Q.MAXS) ind1 <- if (max.chunk.MB > 0) (Q.maxs - Q.mins < max.support) else FALSE if ((NN <- sum(ind1)) > 0) { Object.Size <- NN * 8 * max(Q.maxs - Q.mins) / (2^20) n.chunks <- if (intercept.only) 1 else max(1, ceiling( Object.Size / max.chunk.MB)) chunk.rows <- ceiling(NN / n.chunks) ind2[, jay] <- ind1 # Save this wind2 <- which(ind1) upr.ptr <- 0 lwr.ptr <- upr.ptr + 1 while (lwr.ptr <= NN) { upr.ptr <- min(upr.ptr + chunk.rows, NN) sind2 <- wind2[lwr.ptr:upr.ptr] wz[sind2, M1*jay] <- EIM.NB.specialp(mu = mu[sind2, jay], size = kmat[sind2, jay], y.max = max(Q.maxs[sind2]), cutoff.prob = .cutoff.prob , intercept.only = intercept.only, extra.bit = FALSE) lwr.ptr <- upr.ptr + 1 } # while } # if } # end of for (jay in 1:NOS) for (jay in 1:NOS) { run.varcov <- 0 ii.TF <- !ind2[, jay] # Not assigned above if (any(ii.TF)) { ppvec <- pmat[ii.TF, jay] kkvec <- kmat[ii.TF, jay] muvec <- mu[ii.TF, jay] for (ii in 1:( .nsimEIM )) { ysim <- rnbinom(sum(ii.TF), mu = muvec, size = kkvec) dl.dk <- digamma(ysim + kkvec) - digamma(kkvec) + log(ppvec) run.varcov <- run.varcov + dl.dk^2 } # end of for loop run.varcov <- c(run.varcov / .nsimEIM ) ned2l.dk2 <- if (intercept.only) mean(run.varcov) else run.varcov wz[ii.TF, M1*jay] <- ned2l.dk2 # * (dk.deta2[ii.TF, jay])^2 } } wz[, M1*(1:NOS) ] <- wz[, M1 * (1:NOS)] * dkayy.deta^2 save.weights <- !all(ind2) ned2l.dprob2 <- kmat / ((1 - pmat) * pmat^2) wz[, M1*(1:NOS) - 1] <- ned2l.dprob2 * dprob.deta^2 ned2l.dkayyprob <- -1 / pmat wz[, M + M1*(1:NOS) - 1] <- ned2l.dkayyprob * dkayy.deta * dprob.deta w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = NOS) }), list( .cutoff.prob = cutoff.prob, .eps.trig = eps.trig, .max.support = max.support, .max.chunk.MB = max.chunk.MB, .nsimEIM = nsimEIM )))) if (deviance.arg) ans@deviance <- eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { temp300 <- eta[, c(FALSE, TRUE), drop = FALSE] if (NCOL(y) > 1 && NCOL(w) > 1) stop("cannot handle matrix 'w' yet") if ( .lsize == "loge") { bigval <- 68 temp300[temp300 > bigval] <- bigval temp300[temp300 < -bigval] <- -bigval } else { stop("can only handle the 'loge' link") } kayy <- eta2theta(temp300, .lsize , earg = .esize) devi <- 2 * (y * log(ifelse(y < 1, 1, y) / mu) + (y + kayy) * log((mu + kayy) / (kayy + y))) if (residuals) { sign(y - mu) * sqrt(abs(devi) * w) } else { dev.elts <- sum(c(w) * devi) if (summation) { sum(dev.elts) } else { dev.elts } } }, list( .lsize = lsize, .eprob = eprob, .esize = esize ))) ans } # End of polya() polyaR.control <- function(save.weights = TRUE, ...) 
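# As with negbinomial.control(), save.weights = TRUE by default so the
# (possibly simulation-based) working weights are kept on the fit.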
{ list(save.weights = save.weights) } polyaR <- function( zero = "size", type.fitted = c("mean", "prob"), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, # Maxiter = 5000, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, # max.memory = Inf is allowed lsize = "loge", lprob = "logit", imethod = 1, iprob = NULL, iprobs.y = NULL, gprobs.y = ppoints(6), isize = NULL, gsize.mux = exp(c(-30, -20, -15, -10, -6:3)), imunb = NULL) { if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") deviance.arg <- FALSE # 20131212; for now type.fitted <- match.arg(type.fitted, c("mean", "prob"))[1] if (!is.Numeric(eps.trig, length.arg = 1, positive = TRUE) || eps.trig > 0.001) stop("argument 'eps.trig' must be positive and smaller in value") if (length(iprob) && !is.Numeric(iprob, positive = TRUE)) stop("bad input for argument 'iprob'") if (length(isize) && !is.Numeric(isize, positive = TRUE)) stop("bad input for argument 'isize'") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 10) warning("argument 'nsimEIM' should be an integer ", "greater than 10, say") lprob <- as.list(substitute(lprob)) eprob <- link2list(lprob) lprob <- attr(eprob, "function.name") lsize <- as.list(substitute(lsize)) esize <- link2list(lsize) lsize <- attr(esize, "function.name") ans <- new("vglmff", blurb = c("Polya (negative-binomial) distribution\n\n", "Links: ", namesof("size", lsize, earg = esize), ", ", namesof("prob", lprob, earg = eprob), "\n", "Mean: size * (1 - prob) / prob\n", "Variance: mean / prob"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, mds.min = .mds.min , multipleResponses = TRUE, type.fitted = .type.fitted , parameters.names = c("size", "prob"), eps.trig = .eps.trig , zero = .zero ) }, list( .zero = zero, .eps.trig = eps.trig, .type.fitted = type.fitted, .mds.min = mds.min))), initialize = eval(substitute(expression({ M1 <- 2 if (any(function.name == c("cqo", "cao"))) stop("polyaR() does not work with cqo() or cao(). ", "Try negbinomial()") temp12 <- w.y.check(w = w, y = y, Is.integer.y = TRUE, Is.nonnegative = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp12$w y <- temp12$y M <- M1 * ncol(y) NOS <- ncoly <- ncol(y) # Number of species extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) predictors.names <- c(namesof(param.names("size", NOS), .lsize , earg = .esize , tag = FALSE), namesof(param.names("prob", NOS), .lprob , earg = .eprob , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] if (is.null( .nsimEIM )) { save.weights <- control$save.weights <- FALSE } gprobs.y <- .gprobs.y imunb <- .imunb # Default in NULL if (length(imunb)) imunb <- matrix(imunb, n, NOS, byrow = TRUE) if (!length(etastart)) { munb.init <- size.init <- matrix(NA_real_, n, NOS) gprobs.y <- .gprobs.y if (length( .iprobs.y )) gprobs.y <- .iprobs.y gsize.mux <- .gsize.mux # gsize.mux is on a relative scale for (jay in 1:NOS) { # For each response 'y_jay'... 
do: munb.init.jay <- if ( .imethod == 1 ) { quantile(y[, jay], probs = gprobs.y) + 1/16 } else { weighted.mean(y[, jay], w = w[, jay]) } if (length(imunb)) munb.init.jay <- imunb[, jay] gsize <- gsize.mux * 0.5 * (mean(munb.init.jay) + weighted.mean(y[, jay], w = w[, jay])) if (length( .isize )) gsize <- .isize # isize is on an absolute scale try.this <- grid.search2(munb.init.jay, gsize, objfun = NBD.Loglikfun2, y = y[, jay], w = w[, jay], ret.objfun = TRUE) # Last value is the loglik munb.init[, jay] <- try.this["Value1"] size.init[, jay] <- try.this["Value2"] } # for (jay ...) prob.init <- if (length( .iprob )) matrix( .iprob , nrow(y), ncol(y), byrow = TRUE) else size.init / (size.init + munb.init) etastart <- cbind(theta2eta(size.init, .lsize , earg = .esize ), theta2eta(prob.init, .lprob , earg = .eprob )) etastart <- etastart[, interleave.VGAM(M, M1 = M1), drop = FALSE] } }), list( .lprob = lprob, .lsize = lsize, .eprob = eprob, .esize = esize, .iprob = iprob, .isize = isize, .pinit = iprob, .gprobs.y = gprobs.y, .gsize.mux = gsize.mux, .iprobs.y = iprobs.y, .nsimEIM = nsimEIM, .zero = zero, .imethod = imethod , .imunb = imunb, .type.fitted = type.fitted ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 2) kmat <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lsize , earg = .esize ) pmat <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lprob , earg = .eprob ) type.fitted <- if (length(extra$type.fitted)) extra$type.fitted else { warning("cannot find 'type.fitted'. ", "Returning the 'mean'.") "mean" } type.fitted <- match.arg(type.fitted, c("mean", "prob"))[1] ans <- switch(type.fitted, "mean" = kmat * (1 - pmat) / pmat, "prob" = pmat) label.cols.y(ans, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lprob = lprob, .eprob = eprob, .lsize = lsize, .esize = esize))), last = eval(substitute(expression({ temp0303 <- c(rep_len( .lprob , NOS), rep_len( .lsize , NOS)) names(temp0303) <- c(param.names("size", NOS), param.names("prob", NOS)) temp0303 <- temp0303[interleave.VGAM(M, M1 = M1)] misc$link <- temp0303 # Already named misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:NOS) { misc$earg[[M1*ii-1]] <- .esize misc$earg[[M1*ii ]] <- .eprob } misc$isize <- .isize misc$imethod <- .imethod misc$nsimEIM <- .nsimEIM }), list( .lprob = lprob, .lsize = lsize, .eprob = eprob, .esize = esize, .isize = isize, .nsimEIM = nsimEIM, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { pmat <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lprob , earg = .eprob) temp300 <- eta[, c(TRUE, FALSE), drop = FALSE] if ( .lsize == "loge") { bigval <- 68 temp300[temp300 > bigval] <- bigval temp300[temp300 < -bigval] <- -bigval } kmat <- eta2theta(temp300, .lsize , earg = .esize ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dnbinom(y, prob = pmat, size = kmat, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lsize = lsize, .lprob = lprob, .esize = esize, .eprob = eprob ))), vfamily = c("polyaR"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) kmat <- eta2theta(eta[, c(TRUE, FALSE)], .lsize , .esize ) pmat <- eta2theta(eta[, c(FALSE, TRUE)], .lprob , .eprob ) rnbinom(nsim * length(pmat), prob = pmat, size = 
kmat) }, list( .lprob = lprob, .lsize = lsize, .eprob = eprob, .esize = esize ))), validparams = eval(substitute(function(eta, y, extra = NULL) { size <- eta2theta(eta[, c(TRUE, FALSE)], .lsize , .esize ) pmat <- eta2theta(eta[, c(FALSE, TRUE)], .lprob , .eprob ) munb <- size * (1 / pmat - 1) smallval <- .mds.min # .munb.div.size overdispersion <- all(munb / size > smallval) ans <- all(is.finite(munb)) && all(0 < munb) && all(is.finite(size)) && all(0 < size) && all(is.finite(pmat)) && all(0 < pmat & pmat < 1) && overdispersion if (!overdispersion) warning("parameter 'size' has very large values; ", "try fitting a quasi-Poisson ", "model instead.") ans }, list( .lprob = lprob, .eprob = eprob, .lsize = lsize, .esize = esize, .mds.min = mds.min))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- ncol(eta) / M1 pmat <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lprob , earg = .eprob) temp3 <- eta[, c(TRUE, FALSE), drop = FALSE] if ( .lsize == "loge") { bigval <- 68 temp3[temp3 > bigval] <- bigval # pmin() collapses matrices temp3[temp3 < -bigval] <- -bigval } kmat <- as.matrix(eta2theta(temp3, .lsize , earg = .esize )) dl.dprob <- kmat / pmat - y / (1.0 - pmat) dl.dkayy <- digamma(y + kmat) - digamma(kmat) + log(pmat) dprob.deta <- dtheta.deta(pmat, .lprob , earg = .eprob) dkayy.deta <- dtheta.deta(kmat, .lsize , earg = .esize) myderiv <- c(w) * cbind(dl.dkayy * dkayy.deta, dl.dprob * dprob.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lprob = lprob, .lsize = lsize, .eprob = eprob, .esize = esize))), weight = eval(substitute(expression({ wz <- matrix(0.0, n, M + M - 1) # wz is 'tridiagonal' max.support <- .max.support max.chunk.MB <- .max.chunk.MB ind2 <- matrix(FALSE, n, NOS) # Used for SFS for (jay in 1:NOS) { eff.p <- sort(c( .cutoff.prob , 1 - .cutoff.prob )) Q.mins <- 0 Q.maxs <- round(qnbinom(p = eff.p[2], mu = mu[, jay], size = kmat[, jay]) * 1.1) + 30 eps.trig <- .eps.trig Q.MAXS <- pmax(10, ceiling(1 / sqrt(eps.trig) - kmat[, jay])) Q.maxs <- pmin(Q.maxs, Q.MAXS) ind1 <- if (max.chunk.MB > 0) (Q.maxs - Q.mins < max.support) else FALSE if ((NN <- sum(ind1)) > 0) { Object.Size <- NN * 8 * max(Q.maxs - Q.mins) / (2^20) n.chunks <- if (intercept.only) 1 else max(1, ceiling( Object.Size / max.chunk.MB)) chunk.rows <- ceiling(NN / n.chunks) ind2[, jay] <- ind1 # Save this wind2 <- which(ind1) upr.ptr <- 0 lwr.ptr <- upr.ptr + 1 while (lwr.ptr <= NN) { upr.ptr <- min(upr.ptr + chunk.rows, NN) sind2 <- wind2[lwr.ptr:upr.ptr] wz[sind2, M1*jay - 1] <- EIM.NB.specialp(mu = mu[sind2, jay], size = kmat[sind2, jay], y.max = max(Q.maxs[sind2]), cutoff.prob = .cutoff.prob , intercept.only = intercept.only, extra.bit = FALSE) lwr.ptr <- upr.ptr + 1 } # while } # if } # end of for (jay in 1:NOS) for (jay in 1:NOS) { run.varcov <- 0 ii.TF <- !ind2[, jay] # Not assigned above if (any(ii.TF)) { ppvec <- pmat[ii.TF, jay] kkvec <- kmat[ii.TF, jay] muvec <- mu[ii.TF, jay] for (ii in 1:( .nsimEIM )) { ysim <- rnbinom(sum(ii.TF), mu = muvec, size = kkvec) dl.dk <- digamma(ysim + kkvec) - digamma(kkvec) + log(ppvec) run.varcov <- run.varcov + dl.dk^2 } # end of for loop run.varcov <- c(run.varcov / .nsimEIM ) ned2l.dk2 <- if (intercept.only) mean(run.varcov) else run.varcov wz[ii.TF, M1*jay - 1] <- ned2l.dk2 # * (dk.deta2[ii.TF, jay])^2 } } wz[, M1*(1:NOS) - 1] <- wz[, M1*(1:NOS) - 1] * dkayy.deta^2 save.weights <- !all(ind2) ned2l.dprob2 <- kmat / ((1 - pmat) * pmat^2) wz[, M1*(1:NOS) ] <- ned2l.dprob2 * dprob.deta^2 ned2l.dkayyprob <- -1 / pmat wz[, M + M1*(1:NOS) - 1] <- ned2l.dkayyprob * 
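      # (wz uses VGAM's matrix-band storage: columns 1..M hold the
      #  diagonal EIM elements and columns M+1, ..., 2M-1 the first
      #  off-diagonals -- here the (size, prob) cross terms -- which
      #  w.wz.merge() below combines with the prior weights.)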
dkayy.deta * dprob.deta w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = NOS) }), list( .cutoff.prob = cutoff.prob, .eps.trig = eps.trig, .max.support = max.support, .max.chunk.MB = max.chunk.MB, .nsimEIM = nsimEIM )))) if (deviance.arg) ans@deviance <- eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { temp300 <- eta[, c(FALSE, TRUE), drop = FALSE] if (NCOL(y) > 1 && NCOL(w) > 1) stop("cannot handle matrix 'w' yet") if ( .lsize == "loge") { bigval <- 68 temp300[temp300 > bigval] <- bigval temp300[temp300 < -bigval] <- -bigval } else { stop("can only handle the 'loge' link") } kayy <- eta2theta(temp300, .lsize , earg = .esize) devi <- 2 * (y * log(ifelse(y < 1, 1, y) / mu) + (y + kayy) * log((mu + kayy) / (kayy + y))) if (residuals) { sign(y - mu) * sqrt(abs(devi) * w) } else { dev.elts <- sum(c(w) * devi) if (summation) { sum(dev.elts) } else { dev.elts } } }, list( .lsize = lsize, .eprob = eprob, .esize = esize ))) ans } # End of polyaR() negbinomial.size <- function(size = Inf, lmu = "loge", imu = NULL, iprobs.y = 0.35, imethod = 1, ishrinkage = 0.95, zero = NULL) { if (any(size <= 0)) stop("bad input for argument 'size'") if (anyNA(size)) stop("bad input for argument 'size'") lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") if (length(imu) && !is.Numeric(imu, positive = TRUE)) stop("bad input for argument 'imu'") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (!is.Numeric(ishrinkage, length.arg = 1) || ishrinkage < 0 || 1 < ishrinkage) stop("bad input for argument 'ishrinkage'") ans <- new("vglmff", blurb = c("Negative-binomial distribution with size known\n\n", "Links: ", namesof("mu", lmu, earg = emu), "\n", "Mean: mu\n", "Variance: mu * (1 + mu / size) for NB-2"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(M1 = 1, Q1 = 1, expected = TRUE, imethod = .imethod , ishrinkage = .ishrinkage , multipleResponses = TRUE, parameters.names = c("mu"), zero = .zero ) }, list( .imethod = imethod, .ishrinkage = ishrinkage, .zero = zero ))), initialize = eval(substitute(expression({ M1 <- 1 temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y M <- M1 * ncol(y) NOS <- ncoly <- ncol(y) # Number of species mynames1 <- param.names("mu", NOS) predictors.names <- namesof(mynames1, .lmu , earg = .emu , tag = FALSE) if (is.numeric( .mu.init )) MU.INIT <- matrix( .mu.init, nrow(y), ncol(y), byrow = TRUE) if (!length(etastart)) { mu.init <- y for (iii in 1:ncol(y)) { use.this <- if ( .imethod == 1) { weighted.mean(y[, iii], w[, iii]) + 1/16 } else if ( .imethod == 3) { c(quantile(y[, iii], probs = .iprobs.y ) + 1/16) } else { median(y[, iii]) + 1/16 } if (is.numeric( .mu.init )) { mu.init[, iii] <- MU.INIT[, iii] } else { medabsres <- median(abs(y[, iii] - use.this)) + 1/32 allowfun <- function(z, maxtol = 1) sign(z) * pmin(abs(z), maxtol) mu.init[, iii] <- use.this + (1 - .ishrinkage ) * allowfun(y[, iii] - use.this, maxtol = medabsres) mu.init[, iii] <- abs(mu.init[, iii]) + 1 / 1024 } } # of for (iii) kmat <- matrix( .size , n, NOS, byrow = TRUE) newemu <- .emu if ( .lmu == "nbcanlink") { newemu$size <- kmat } etastart <- cbind(theta2eta(mu.init , link = .lmu , earg = newemu )) } }), list( .lmu = lmu, .emu = emu, .mu.init = imu, .size = size, .iprobs.y = iprobs.y, .ishrinkage = ishrinkage, .zero = zero, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { M1 <- 1 eta <- cbind(eta) NOS <- ncol(eta) / M1 n <- nrow(eta) kmat <- matrix( .size , n, NOS, byrow = TRUE) newemu <- .emu if ( .lmu == "nbcanlink") { newemu$size <- kmat } eta2theta(eta, .lmu , earg = newemu) }, list( .lmu = lmu, .emu = emu, .size = size ))), last = eval(substitute(expression({ misc$link <- rep_len( .lmu , NOS) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for (ii in 1:NOS) { misc$earg[[ii]] <- newemu } misc$size <- kmat # Conformable size, i.e., is a matrix }), list( .lmu = lmu, .emu = emu ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { mu <- cbind(mu) y <- cbind(y) w <- cbind(w) eta <- cbind(eta) kmat <- matrix( .size , nrow(eta), ncol(eta), byrow = TRUE) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ind1 <- is.finite(kmat) NOS <- ncol(y) ans1 <- ans2 <- 0 for (kk in 1:NOS) { ind1 <- is.finite(kmat[, kk]) ans1 <- ans1 + sum(w[ind1] * dnbinom(x = y[ind1, kk], mu = mu[ind1, kk], size = kmat[ind1, kk], log = TRUE)) ans2 <- ans2 + sum(w[!ind1] * dpois(x = y[!ind1, kk], lambda = mu[!ind1, kk], log = TRUE)) } ans <- ans1 + ans2 ll.elts <- ans if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .size = size ))), vfamily = c("negbinomial.size"), validparams = eval(substitute(function(eta, y, extra = NULL) { eta <- as.matrix(eta) kmat <- matrix( .size , nrow(eta), ncol(eta), byrow = TRUE) newemu <- .emu if ( .lmu == "nbcanlink") { newemu$size <- kmat } munb <- eta2theta(eta, .lmu , earg = newemu ) okay1 <- all(is.finite(munb)) && all(0 < munb) && all(0 < kmat ) okay1 }, list( .lmu = lmu, .emu = emu, .size = size ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else 
weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") muuu <- fitted(object) n <- NROW(muuu) NOS <- NCOL(muuu) kmat <- matrix( .size , n, NOS, byrow = TRUE) rnbinom(nsim * length(muuu), mu = muuu, size = kmat) }, list( .size = size ))), deriv = eval(substitute(expression({ eta <- cbind(eta) M <- ncol(eta) kmat <- matrix( .size , n, M, byrow = TRUE) newemu <- .emu if ( .lmu == "nbcanlink") { newemu$size <- kmat newemu$wrt.param <- 1 } dl.dmu <- y / mu - (y + kmat) / (kmat + mu) if (any(fix.up <- !is.finite(dl.dmu))) dl.dmu[fix.up] <- (y/mu)[fix.up] - 1 dmu.deta <- dtheta.deta(mu, .lmu , earg = newemu) # eta1 c(w) * dl.dmu * dmu.deta }), list( .lmu = lmu, .emu = emu, .size = size ))), weight = eval(substitute(expression({ ned2l.dmunb2 <- 1 / mu - 1 / (mu + kmat) wz <- ned2l.dmunb2 * dmu.deta^2 c(w) * wz }), list( .lmu = lmu )))) ans } VGAM/R/print.vglm.q0000644000176200001440000001522113135276757013510 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. endfpvgam <- function(object, nonlinear.edf = TRUE, diag.all = FALSE, return.endf = TRUE, ...) { M <- npred(object) n <- nobs(object, type = "lm") wz <- weights(object, type = "working") X.vlm.save <- model.matrix(object, type = "vlm") U <- vchol(wz, M = M, n = n) X.vlm <- mux111(U, X.vlm.save, M = M) X.vlm.aug <- rbind(X.vlm, model.matrix(object, type = "penalty")) if (!object@ospsslot$magicfit$gcv.info$fully.converged) warning("fitted object has a GCV criterion that has not ", "fully converged") poststuff <- mgcv::magic.post.proc(X.vlm.aug, object = object@ospsslot$magicfit, w = NULL) if (!return.endf) return(poststuff) which.X.sm.osps <- object@ospsslot$sm.osps.list$which.X.sm.osps all.ncol.Hk <- unlist(lapply(constraints(object, type = "term"), ncol)) names.which.X.sm.osps <- names(which.X.sm.osps) endf <- rep_len(NA_real_, sum(all.ncol.Hk[names.which.X.sm.osps])) names(endf) <- vlabel(names.which.X.sm.osps, all.ncol.Hk[names.which.X.sm.osps], M = npred(object)) use.index <- NULL endf.all0 <- diag(solve(crossprod(X.vlm.aug), crossprod(X.vlm))) if (FALSE) { qr1 <- qr(X.vlm.aug) qr2 <- qr(X.vlm) endf.all <- diag(solve(crossprod(qr.R(qr1)), crossprod(qr.R(qr2)))) } endf.all <- endf.all0 if (diag.all) return(endf.all) startstop <- startstoppvgam(object) for (iterm in 1:length(startstop)) { endf[iterm] <- sum(endf.all[(startstop[[iterm]])]) } endf[endf < 1] <- 1 # Cannot be smoother than linear if (nonlinear.edf) endf - 1 else endf } # endfpvgam() show.pvgam <- function(object) { digits <- 3 if (!is.null(cl <- object@call)) { cat("\nCall:\n", paste(deparse(cl), sep = "\n", collapse = "\n"), "\n\n", sep = "") } magicfit <- object@ospsslot$magicfit if (FALSE) { XX <- model.matrix(object, type = "vlm") poststuff <- mgcv::magic.post.proc(XX, object = object@ospsslot$magicfit, w = NULL) } if (FALSE) { edf <- rep_len(NA_real_, n.smooth) cat("\nEstimated degrees of freedom:\n") for (i in 1:n.smooth) edf[i] <- sum(x$edf[x$smooth[[i]]$first.para:x$smooth[[i]]$last.para]) edf.str <- format(round(edf, digits = 4), digits = 3, scientific = FALSE) for (i in 1:n.smooth) { cat(edf.str[i], " ", sep = "") if (i%%7 == 0) cat("\n") } cat(" total =", round(sum(poststuff$edf), digits = 2), "\n") } endf <- endfpvgam(object) cat("\nEstimated nonlinear degrees of freedom:\n") # based on endfpvgam() print(round(endf, digits = digits + 2), digits = digits, scientific = FALSE) if (length(endf) > 1) cat("Total:", format(sum(endf), digits = digits), "\n") 
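  # Hedged usage sketch for the EDF extractor (inactive; assumes a
  # fitted "pvgam" object, e.g. from vgam() with an sm.os() term):
  if (FALSE) {
    fit2 <- vgam(agaaus ~ sm.os(altitude), binomialff, data = hunua)
    fit2         # show.pvgam() prints the nonlinear EDF table above
    endf(fit2)   # the same named vector, via the "endf" generic below
  }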
object@post$endf <- endf # Good to save this on the object if (FALSE) cat("\nEstimated degrees of freedom based on poststuff:", format(poststuff$edf, digits = digits), "\nTotal:", format(round(sum(poststuff$edf), digits = digits)), "\n") cat("\nUBRE score:", format(magicfit$score, digits = digits + 1), "\n\n") if (length(deviance(object))) cat("Residual deviance:", format(deviance(object)), "\n") llx <- logLik.vlm(object = object) if (length(llx)) cat("Log-likelihood:", format(llx), "\n") invisible(object) } setMethod("show", "pvgam", function(object) show.pvgam(object)) if (!isGeneric("endf")) setGeneric("endf", function(object, ...) standardGeneric("endf")) setMethod("endf", "pvgam", function(object, ...) endfpvgam(object, ...)) setMethod("endf", "summary.pvgam", function(object, ...) endfpvgam(object, ...)) show.vglm <- function(object) { if (!is.null(cl <- object@call)) { cat("\nCall:\n", paste(deparse(cl), sep = "\n", collapse = "\n"), "\n\n", sep = "") } coef <- object@coefficients if (any(nas <- is.na(coef))) { if (is.null(names(coef))) names(coef) <- paste("b", seq_along(coef), sep = "") cat("\nCoefficients: (", sum(nas), " not defined because of singularities)\n", sep = "") } else { cat("\nCoefficients:\n") } print(coef) rank <- object@rank if (!length(rank)) rank <- sum(!nas) nobs <- if (length(object@df.total)) object@df.total else length(object@residuals) rdf <- object@df.residual if (!length(rdf)) rdf <- nobs - rank cat("\nDegrees of Freedom:", nobs, "Total;", rdf, "Residual\n") if (length(deviance(object))) cat("Residual deviance:", format(deviance(object)), "\n") llx <- logLik.vlm(object = object) if (length(llx)) cat("Log-likelihood:", format(llx), "\n") if (length(object@criterion)) { ncrit <- names(object@criterion) for (ii in ncrit) if (ii != "loglikelihood" && ii != "deviance") cat(paste(ii, ":", sep = ""), format(object@criterion[[ii]]), "\n") } try.this <- findFirstMethod("showvglmS4VGAM", object@family@vfamily) if (length(try.this)) { showvglmS4VGAM(object = object, VGAMff = new(try.this)) } else { } invisible(object) } show.vgam <- function(object) { digits <- 2 if (!is.null(cl <- object@call)) { cat("\nCall:\n", paste(deparse(cl), sep = "\n", collapse = "\n"), "\n\n", sep = "") } coef <- object@coefficients nas <- is.na(coef) rank <- object@rank if (is.null(rank)) rank <- sum(!nas) nobs <- if (length(object@df.total)) object@df.total else length(object@residuals) rdf <- object@df.residual if (is.null(rdf)) rdf <- nobs - rank cat("\nDegrees of Freedom:", nobs, "Total;", format(round(rdf, digits = digits)), "Residual\n") if (length(deviance(object))) cat("Residual deviance:", format(deviance(object)), "\n") llx <- logLik.vlm(object = object) if (length(llx)) cat("Log-likelihood:", format(llx), "\n") criterion <- attr(terms(object), "criterion") if (!is.null(criterion) && criterion != "coefficients") cat(paste(criterion, ":", sep = ""), format(object[[criterion]]), "\n") try.this <- findFirstMethod("showvgamS4VGAM", object@family@vfamily) if (length(try.this)) { showvgamS4VGAM(object = object, VGAMff = new(try.this)) } else { } invisible(object) } setMethod("show", "vlm", function(object) show.vlm (object)) setMethod("show", "vglm", function(object) show.vglm(object)) setMethod("show", "vgam", function(object) show.vgam(object)) VGAM/R/family.bivariate.R0000644000176200001440000034467413135276757014620 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
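# A hedged sanity-check sketch for the Clayton copula helpers defined
# below (inactive, in this file's if (FALSE) style): the Monte Carlo
# integral of the density over the unit square should be close to 1.
if (FALSE) {
  set.seed(1)
  xy <- rbiclaytoncop(n = 1000, apar = 2)
  range(xy)  # Both coordinates lie in (0, 1)
  mean(dbiclaytoncop(runif(1e4), runif(1e4), apar = 2))  # Approx 1
}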
dbiclaytoncop <- function(x1, x2, apar = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) A <- x1^(-apar) + x2^(-apar) - 1 logdensity <- log1p(apar) - (1 + apar) * (log(x1) + log(x2)) - (2 + 1 / apar) * log(abs(A)) # Avoid warning out.square <- (x1 < 0) | (x1 > 1) | (x2 < 0) | (x2 > 1) logdensity[out.square] <- log(0.0) index0 <- (rep_len(apar, length(A)) < sqrt(.Machine$double.eps)) if (any(index0)) logdensity[index0] <- log(1.0) index1 <- (rep_len(apar, length(A)) < 0.0) | (A < 0.0) if (any(index1)) logdensity[index1] <- NaN if (log.arg) logdensity else exp(logdensity) } rbiclaytoncop <- function(n, apar = 0) { if (any(apar < 0)) stop("argument 'apar' must be greater or equal to 0") u1 <- runif(n = n) v2 <- runif(n = n) u2 <- (u1^(-apar) * (v2^(-apar / (1 + apar)) - 1) + 1)^(-1 / apar) index0 <- (rep_len(apar, length(u1)) < sqrt(.Machine$double.eps)) if (any(index0)) u2[index0] <- runif(sum(index0)) cbind(u1, u2) } biclaytoncop <- function(lapar = "loge", iapar = NULL, imethod = 1, parallel = FALSE, zero = NULL) { apply.parint <- TRUE lapar <- as.list(substitute(lapar)) eapar <- link2list(lapar) lapar <- attr(eapar, "function.name") if (length(iapar) && any(iapar <= 0)) stop("argument 'iapar' must have values in (0, Inf)") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("Bivariate Clayton copula distribution)\n", "Links: ", namesof("apar", lapar, earg = eapar)), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints, apply.int = .apply.parint ) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .zero = zero, .apply.parint = apply.parint, .parallel = parallel ))), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 2, apply.parint = .apply.parint , parameters.names = c("apar"), lapar = .lapar , parallel = .parallel , zero = .zero ) }, list( .zero = zero, .apply.parint = apply.parint, .lapar = lapar, .parallel = parallel ))), initialize = eval(substitute(expression({ M1 <- 1 Q1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, ncol.y.min = Q1, out.wy = TRUE, colsyperw = Q1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) extra$ncoly <- ncoly extra$M1 <- M1 extra$Q1 <- Q1 M <- M1 * (ncoly / Q1) mynames1 <- param.names("apar", M / M1) predictors.names <- namesof(mynames1, .lapar , earg = .eapar , short = TRUE) extra$colnames.y <- colnames(y) if (!length(etastart)) { apar.init <- matrix(if (length( .iapar )) .iapar else NA_real_, n, M / M1, byrow = TRUE) if (!length( .iapar )) for (spp. in 1:(M / M1)) { ymatj <- y[, (Q1 * spp. - 1):(Q1 * spp.)] apar.init0 <- if ( .imethod == 1) { k.tau <- kendall.tau(ymatj[, 1], ymatj[, 2], exact = FALSE, max.n = 500) max(0.1, 2 * k.tau / (1 - k.tau)) # Must be positive } else if ( .imethod == 2) { spearman.rho <- max(0.05, cor(ymatj[, 1], ymatj[, 2], meth = "spearman")) rhobit(spearman.rho) } else { pearson.rho <- max(0.05, cor(ymatj[, 1], ymatj[, 2])) rhobit(pearson.rho) } if (anyNA(apar.init[, spp.])) apar.init[, spp.] 
<- apar.init0 } etastart <- theta2eta(apar.init, .lapar , earg = .eapar ) } }), list( .imethod = imethod, .lapar = lapar, .eapar = eapar, .iapar = iapar ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- NCOL(eta) / c(M1 = 1) Q1 <- 2 fv.mat <- matrix(0.5, NROW(eta), NOS * Q1) label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = NOS) } , list( .lapar = lapar, .eapar = eapar ))), last = eval(substitute(expression({ M1 <- extra$M1 Q1 <- extra$Q1 misc$link <- rep_len( .lapar , M) temp.names <- mynames1 names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:M) { misc$earg[[ii]] <- .eapar } misc$M1 <- M1 misc$Q1 <- Q1 misc$imethod <- .imethod misc$expected <- TRUE misc$parallel <- .parallel misc$apply.parint <- .apply.parint misc$multipleResponses <- TRUE }) , list( .imethod = imethod, .parallel = parallel, .apply.parint = apply.parint, .lapar = lapar, .eapar = eapar ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { Alpha <- eta2theta(eta, .lapar , earg = .eapar ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dbiclaytoncop(x1 = c(y[, c(TRUE, FALSE)]), x2 = c(y[, c(FALSE, TRUE)]), apar = c(Alpha), log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } } , list( .lapar = lapar, .eapar = eapar, .imethod = imethod ))), vfamily = c("biclaytoncop"), validparams = eval(substitute(function(eta, y, extra = NULL) { Alpha <- eta2theta(eta, .lapar , earg = .eapar ) okay1 <- all(is.finite(Alpha)) && all(0 < Alpha) okay1 } , list( .lapar = lapar, .eapar = eapar, .imethod = imethod ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) Alpha <- eta2theta(eta, .lapar , earg = .eapar ) rbiclaytoncop(nsim * length(Alpha), apar = c(Alpha)) } , list( .lapar = lapar, .eapar = eapar ))), deriv = eval(substitute(expression({ Alpha <- eta2theta(eta, .lapar , earg = .eapar ) Yindex1 <- extra$Q1 * (1:(extra$ncoly/extra$Q1)) - 1 Yindex2 <- extra$Q1 * (1:(extra$ncoly/extra$Q1)) AA <- y[, Yindex1]^(-Alpha) + y[, Yindex2]^(-Alpha) - 1 dAA.dapar <- -y[, Yindex1]^(-Alpha) * log(y[, Yindex1]) - y[, Yindex2]^(-Alpha) * log(y[, Yindex2]) dl.dapar <- 1 / (1 + Alpha) - log(y[, Yindex1] * y[, Yindex2]) - dAA.dapar / AA * (2 + 1 / Alpha ) + log(AA) / Alpha^2 dapar.deta <- dtheta.deta(Alpha, .lapar , earg = .eapar ) dl.deta <- c(w) * cbind(dl.dapar) * dapar.deta dl.deta }), list( .lapar = lapar, .eapar = eapar, .imethod = imethod ))), weight = eval(substitute(expression({ par <- Alpha + 1 denom1 <- (3 * par -2) * (2 * par - 1) denom2 <- 2 * (par - 1) v1 <- trigamma(1 / denom2) v2 <- trigamma(par / denom2) v3 <- trigamma((2 * par - 1) / denom2) Rho. <- (1 + par * (v1 - v2) / denom2 + (v2 - v3) / denom2) / denom1 ned2l.dapar <- 1 / par^2 + 2 / (par * (par - 1) * (2 * par - 1)) + 4 * par / (3 * par - 2) - 2 * (2 * par - 1) * Rho. 
/ (par - 1) wz <- ned2l.dapar * dapar.deta^2 c(w) * wz }), list( .lapar = lapar, .eapar = eapar, .imethod = imethod )))) } dbistudentt <- function(x1, x2, df, rho = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) logdensity <- -(df/2 + 1) * log1p( (x1^2 + x2^2 - 2 * rho * x1 * x2) / (df * (1 - rho^2))) - log(2 * pi) - 0.5 * log1p(-rho^2) # - logdensity[df <= 0] <- NaN # Not picked up by dt(). logdensity[is.infinite(x1) | is.infinite(x2)] <- log(0) # 20141216 KaiH if (log.arg) logdensity else exp(logdensity) } if (FALSE) bistudent.deriv.dof <- function(u, v, nu, rho) { t1 <- qt(u, nu, 1, 0) t2 <- qt(v, nu, 1, 0) t3 <- -(nu + 2.0) / 2.0 t10 <- nu * (1.0 - rho * rho) t4 <- -2.0 * t1 * t2 / t10 t11 <- (t1 * t1 + t2 * t2 - 2.0 * rho * t1 * t2) t5 <- 2.0 * t11 * rho / t10 / (1.0 - rho * rho) t6 <- 1.0 + (t11 / t10) t7 <- rho / (1.0 - rho * rho) out <- (t3 * (t4 + t5) / t6 + t7) } bistudentt <- function(ldf = "loglog", lrho = "rhobit", idf = NULL, irho = NULL, imethod = 1, parallel = FALSE, zero = "rho") { apply.parint <- TRUE ldof <- as.list(substitute(ldf)) edof <- link2list(ldof) ldof <- attr(edof, "function.name") lrho <- as.list(substitute(lrho)) erho <- link2list(lrho) lrho <- attr(erho, "function.name") idof <- idf if (length(idof) && any(idof <= 1)) stop("argument 'idf' must have values in (1,Inf)") if (length(irho) && any(abs(irho) >= 1)) stop("argument 'irho' must have values in (-1,1)") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Bivariate student-t distribution\n", "Links: ", namesof("df", ldof, earg = edof), ", ", namesof("rho", lrho, earg = erho)), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints, apply.int = .apply.parint ) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero, .apply.parint = apply.parint, .parallel = parallel ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 2, parameters.names = c("df", "rho"), apply.parint = .apply.parint , parallel = .parallel , zero = .zero ) }, list( .zero = zero, .apply.parint = apply.parint, .parallel = parallel ))), initialize = eval(substitute(expression({ M1 <- 2 Q1 <- 2 temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, ncol.y.min = Q1, out.wy = TRUE, colsyperw = Q1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) extra$ncoly <- ncoly extra$M1 <- M1 extra$Q1 <- Q1 M <- M1 * (ncoly / Q1) mynames1 <- param.names("df", M / M1) mynames2 <- param.names("rho", M / M1) predictors.names <- c( namesof(mynames1, .ldof , earg = .edof , short = TRUE), namesof(mynames2, .lrho , earg = .erho , short = TRUE))[ interleave.VGAM(M, M1 = M1)] extra$colnames.y <- colnames(y) if (!length(etastart)) { dof.init <- matrix(if (length( .idof )) .idof else 0 + NA, n, M / M1, byrow = TRUE) rho.init <- matrix(if (length( .irho )) .irho else 0 + NA, n, M / M1, byrow = TRUE) if (!length( .idof ) || !length( .irho )) for (spp. in 1:(M / M1)) { ymatj <- y[, (M1 * spp. - 1):(M1 * spp.)] dof.init0 <- if ( .imethod == 1) { 2 + rexp(n = 1, rate = 0.1) } else { 10 } if (anyNA(dof.init[, spp.])) dof.init[, spp.] <- dof.init0 rho.init0 <- if ( .imethod == 2) { runif(n, min = -1 + 0.1, max = 1 - 0.1) } else { cor(ymatj[, 1], ymatj[, 2]) } if (anyNA(rho.init[, spp.])) rho.init[, spp.] 
<- rho.init0 } etastart <- cbind(theta2eta(dof.init, .ldof , earg = .edof ), theta2eta(rho.init, .lrho , earg = .erho )) etastart <- etastart[, interleave.VGAM(M, M1 = M1)] } }), list( .imethod = imethod, .lrho = lrho, .ldof = ldof, .erho = erho, .edof = edof, .idof = idof, .irho = irho ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 2) Q1 <- 2 fv.mat <- matrix(0, nrow(eta), Q1 * NOS) label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = NOS) } , list( .lrho = lrho, .ldof = ldof, .erho = erho, .edof = edof ))), last = eval(substitute(expression({ M1 <- extra$M1 Q1 <- extra$Q1 misc$link <- c(rep_len( .ldof , M / M1), rep_len( .lrho , M / M1))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:(M / M1)) { misc$earg[[M1*ii-1]] <- .edof misc$earg[[M1*ii ]] <- .erho } misc$M1 <- M1 misc$Q1 <- Q1 misc$imethod <- .imethod misc$expected <- TRUE misc$parallel <- .parallel misc$apply.parint <- .apply.parint misc$multipleResponses <- TRUE }) , list( .imethod = imethod, .parallel = parallel, .apply.parint = apply.parint, .lrho = lrho, .ldof = ldof, .erho = erho, .edof = edof ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { Dof <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .ldof , earg = .edof ) Rho <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lrho , earg = .erho ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { Yindex1 <- extra$Q1 * (1:(extra$ncoly/extra$Q1)) - 1 Yindex2 <- extra$Q1 * (1:(extra$ncoly/extra$Q1)) ll.elts <- c(w) * dbistudentt(x1 = y[, Yindex1, drop = FALSE], x2 = y[, Yindex2, drop = FALSE], df = Dof, rho = Rho, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lrho = lrho, .ldof = ldof, .erho = erho, .edof = edof, .imethod = imethod ))), vfamily = c("bistudentt"), validparams = eval(substitute(function(eta, y, extra = NULL) { Dof <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .ldof , earg = .edof ) Rho <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lrho , earg = .erho ) okay1 <- all(is.finite(Dof)) && all(0 < Dof) && all(is.finite(Rho)) && all(abs(Rho) < 1) okay1 }, list( .lrho = lrho, .ldof = ldof, .erho = erho, .edof = edof, .imethod = imethod ))), deriv = eval(substitute(expression({ M1 <- Q1 <- 2 Dof <- eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .ldof , earg = .edof ) Rho <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lrho , earg = .erho ) Yindex1 <- extra$Q1 * (1:(extra$ncoly/extra$Q1)) - 1 Yindex2 <- extra$Q1 * (1:(extra$ncoly/extra$Q1)) x1 <- c(y[, Yindex1]) # Convert into a vector x2 <- c(y[, Yindex2]) dee3 <- deriv3( ~ -(Dof/2 + 1) * log(1 + (x1^2 + x2^2 - 2 * Rho * x1 * x2) / (Dof * (1 - Rho^2))) - log(2 * pi) - 0.5 * log(1 - Rho^2), namevec = c("Dof", "Rho"), hessian = FALSE) eval.d3 <- eval(dee3) dl.dthetas <- attr(eval.d3, "gradient") dl.ddof <- matrix(dl.dthetas[, "Dof"], n, length(Yindex1)) dl.drho <- matrix(dl.dthetas[, "Rho"], n, length(Yindex2)) if (FALSE) { dd <- cbind(y, Rho, Dof) pp <- apply(dd, 1, function(x) BiCopPDF(x[1], x[2], family = 2, x[3], x[4])) alt.dl.ddof <- apply(dd, 1, function(x) BiCopDeriv(x[1], x[2], family = 2, x[3], x[4], "par2")) / pp alt.dl.drho <- apply(dd, 1, function(x) BiCopDeriv(x[1], x[2], family = 2, x[3], x[4], "par")) / pp } ddof.deta <- dtheta.deta(Dof, .ldof , earg = .edof ) drho.deta <- dtheta.deta(Rho, 
.lrho , earg = .erho ) ans <- c(w) * cbind(dl.ddof * ddof.deta, dl.drho * drho.deta) ans <- ans[, interleave.VGAM(M, M1 = M1)] ans }), list( .lrho = lrho, .ldof = ldof, .erho = erho, .edof = edof, .imethod = imethod ))), weight = eval(substitute(expression({ wz11 <- beta(2, Dof / 2) / Dof - beta(3, Dof / 2) * (Dof + 2) / (4 * Dof) wz12 <- -Rho / (2 * (1 - Rho^2)) * (beta(2, Dof / 2) - beta(3, Dof / 2) * (Dof + 2) / 2) wz22 <- (1 + Rho^2) / (1 - Rho^2)^2 + (Dof^2 + 2 * Dof) * Rho^2 * beta(3, Dof / 2) / (4 * (1 - Rho^2)^2) wz22 <- wz22 + (Dof^2 + 2 * Dof) * (2 - 3 * Rho^2 + Rho^6) * beta(3, Dof / 2) / (16 * (1 - Rho^2)^4) wz22 <- wz22 + (Dof^2 + 2 * Dof) * (1 + Rho^2) * # Replace - by + ??? beta(2, Dof / 2) / (4 * (1 - Rho^2)^2) # denom == 4 or 2 ??? ned2l.ddof2 <- wz11 ned2l.ddofrho <- wz12 ned2l.drho2 <- wz22 wz <- array(c(c(w) * ned2l.ddof2 * ddof.deta^2, c(w) * ned2l.drho2 * drho.deta^2, c(w) * ned2l.ddofrho * ddof.deta * drho.deta), dim = c(n, M / M1, 3)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lrho = lrho, .ldof = ldof, .erho = erho, .edof = edof, .imethod = imethod )))) } dbinormcop <- function(x1, x2, rho = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) x1 <- qnorm(x1) x2 <- qnorm(x2) logdensity <- (2 * rho * x1 * x2 - rho^2 * (x1^2 + x2^2)) / (2 * (1 - rho^2)) - 0.5 * log1p(-rho^2) if (log.arg) logdensity else exp(logdensity) } pbinormcop <- function(q1, q2, rho = 0) { if (!is.Numeric(q1, positive = TRUE) || any(q1 >= 1)) stop("bad input for argument 'q1'") if (!is.Numeric(q2, positive = TRUE) || any(q2 >= 1)) stop("bad input for argument 'q2'") if (!is.Numeric(rho) || any(abs(rho) >= 1)) stop("bad input for argument 'rho'") pbinorm(q1 = qnorm(q1), q2 = qnorm(q2), cov12 = rho) } rbinormcop <- function(n, rho = 0 #, inverse = FALSE ) { inverse <- FALSE ymat <- rbinorm(n = n, cov12 = rho) if (inverse) { ymat } else { cbind(y1 = pnorm(ymat[, 1]), y2 = pnorm(ymat[, 2])) } } binormalcop <- function(lrho = "rhobit", irho = NULL, imethod = 1, parallel = FALSE, zero = NULL) { apply.parint <- TRUE lrho <- as.list(substitute(lrho)) erho <- link2list(lrho) lrho <- attr(erho, "function.name") if (length(irho) && any(abs(irho) >= 1)) stop("argument 'irho' must have values in (-1,1)") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("Gaussian copula (based on the bivariate normal distribution)\n", "Links: ", namesof("rho", lrho, earg = erho)), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints, apply.int = .apply.parint ) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .zero = zero, .apply.parint = apply.parint, .parallel = parallel ))), infos = eval(substitute(function(...) 
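# A dormant sketch for the Gaussian-copula helpers defined above; an
# editorial example, not original source (it assumes the package is
# attached, and rho = 0 is the independence copula):
#   pbinormcop(0.5, 0.5, rho = 0)              # 0.25 under independence
#   set.seed(4); ymat <- rbinormcop(200, rho = 0.7)
#   cor(qnorm(ymat))                           # Off-diagonals near 0.7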
{ list(M1 = 1, Q1 = 2, parameters.names = c("rho"), apply.parint = .apply.parint , parallel = .parallel , zero = .zero ) }, list( .zero = zero, .apply.parint = apply.parint, .parallel = parallel ))), initialize = eval(substitute(expression({ M1 <- 1 Q1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, ncol.y.min = Q1, out.wy = TRUE, colsyperw = Q1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) extra$ncoly <- ncoly extra$M1 <- M1 extra$Q1 <- Q1 M <- M1 * (ncoly / Q1) mynames1 <- param.names("rho", M / M1) predictors.names <- c( namesof(mynames1, .lrho , earg = .erho , short = TRUE)) extra$colnames.y <- colnames(y) if (!length(etastart)) { rho.init <- matrix(if (length( .irho )) .irho else 0 + NA, n, M / M1, byrow = TRUE) if (!length( .irho )) for (spp. in 1:(M / M1)) { ymatj <- y[, (Q1 * spp. - 1):(Q1 * spp.)] rho.init0 <- if ( .imethod == 1) { sin(kendall.tau(ymatj[, 1], ymatj[, 2], exact = FALSE, max.n = 200) * pi / 2) } else if ( .imethod == 2) { sin(cor(ymatj[, 1], ymatj[, 2], method = "spearman") * pi / 6) * 2 } else { cor(ymatj[, 1], ymatj[, 2]) } if (anyNA(rho.init[, spp.])) rho.init[, spp.] <- rho.init0 } etastart <- theta2eta(rho.init, .lrho , earg = .erho ) } }), list( .imethod = imethod, .lrho = lrho, .erho = erho, .irho = irho ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- NCOL(eta) / c(M1 = 1) Q1 <- 2 fv.mat <- matrix(0.5, NROW(eta), NOS * Q1) label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = NOS) } , list( .lrho = lrho, .erho = erho ))), last = eval(substitute(expression({ M1 <- extra$M1 Q1 <- extra$Q1 misc$link <- rep_len( .lrho , M) temp.names <- mynames1 names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:M) { misc$earg[[ii]] <- .erho } misc$M1 <- M1 misc$Q1 <- Q1 misc$imethod <- .imethod misc$expected <- TRUE misc$parallel <- .parallel misc$apply.parint <- .apply.parint misc$multipleResponses <- TRUE }) , list( .imethod = imethod, .parallel = parallel, .apply.parint = apply.parint, .lrho = lrho, .erho = erho ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { Rho <- eta2theta(eta, .lrho , earg = .erho ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { Yindex1 <- extra$Q1 * (1:(extra$ncoly/extra$Q1)) - 1 Yindex2 <- extra$Q1 * (1:(extra$ncoly/extra$Q1)) ll.elts <- c(w) * dbinormcop(x1 = y[, Yindex1, drop = FALSE], x2 = y[, Yindex2, drop = FALSE], rho = Rho, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } } , list( .lrho = lrho, .erho = erho, .imethod = imethod ))), vfamily = c("binormalcop"), validparams = eval(substitute(function(eta, y, extra = NULL) { Rho <- eta2theta(eta, .lrho , earg = .erho ) okay1 <- all(is.finite(Rho)) && all(abs(Rho) < 1) okay1 }, list( .lrho = lrho, .erho = erho, .imethod = imethod ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) Rho <- eta2theta(eta, .lrho , earg = .erho ) rbinormcop(nsim * length(Rho), rho = c(Rho)) } , list( .lrho = lrho, .erho = erho ))), deriv = eval(substitute(expression({ Rho <- eta2theta(eta, .lrho , earg = .erho ) Yindex1 <- extra$Q1 * (1:(extra$ncoly/extra$Q1)) - 1 Yindex2 <- extra$Q1 * (1:(extra$ncoly/extra$Q1)) temp7 <- 1 - Rho^2 q.y <- qnorm(y) dl.drho <- ((1 + Rho^2) * q.y[, Yindex1] * q.y[, 
Yindex2] - Rho * (q.y[, Yindex1]^2 + q.y[, Yindex2]^2)) / temp7^2 + Rho / temp7 drho.deta <- dtheta.deta(Rho, .lrho , earg = .erho ) c(w) * cbind(dl.drho) * drho.deta }), list( .lrho = lrho, .erho = erho, .imethod = imethod ))), weight = eval(substitute(expression({ ned2l.drho <- (1 + Rho^2) / temp7^2 wz <- ned2l.drho * drho.deta^2 c(w) * wz }), list( .lrho = lrho, .erho = erho, .imethod = imethod )))) } bilogistic.control <- function(save.weights = TRUE, ...) { list(save.weights = save.weights) } bilogistic <- function(llocation = "identitylink", lscale = "loge", iloc1 = NULL, iscale1 = NULL, iloc2 = NULL, iscale2 = NULL, imethod = 1, zero = NULL) { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Bivariate logistic distribution\n\n", "Link: ", namesof("location1", llocat, elocat), ", ", namesof("scale1", lscale, escale), ", ", namesof("location2", llocat, elocat), ", ", namesof("scale2", lscale, escale), "\n", "\n", "Means: location1, location2"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 4) }), list( .zero = zero))), infos = eval(substitute(function(...) { list(M1 = 4, Q1 = 2, parameters.names = c("location1", "scale1", "location2", "scale2"), zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y extra$colnames.y <- colnames(y) predictors.names <- c(namesof("location1", .llocat, .elocat , tag = FALSE), namesof("scale1", .lscale, .escale , tag = FALSE), namesof("location2", .llocat, .elocat , tag = FALSE), namesof("scale2", .lscale, .escale , tag = FALSE)) if (!length(etastart)) { if ( .imethod == 1) { locat.init1 <- y[, 1] scale.init1 <- sqrt(3) * sd(y[, 1]) / pi locat.init2 <- y[, 2] scale.init2 <- sqrt(3) * sd(y[, 2]) / pi } else { locat.init1 <- median(rep(y[, 1], w)) locat.init2 <- median(rep(y[, 2], w)) const4 <- sqrt(3) / (sum(w) * pi) scale.init1 <- const4 * sum(c(w) *(y[, 1] - locat.init1)^2) scale.init2 <- const4 * sum(c(w) *(y[, 2] - locat.init2)^2) } loc1.init <- if (length( .iloc1 )) rep_len( .iloc1 , n) else rep_len(locat.init1, n) loc2.init <- if (length( .iloc2 )) rep_len( .iloc2 , n) else rep_len(locat.init2, n) scale1.init <- if (length( .iscale1 )) rep_len( .iscale1 , n) else rep_len(1, n) scale2.init <- if (length( .iscale2 )) rep_len( .iscale2 , n) else rep_len(1, n) if ( .llocat == "loge") locat.init1 <- abs(locat.init1) + 0.001 if ( .llocat == "loge") locat.init2 <- abs(locat.init2) + 0.001 etastart <- cbind(theta2eta(locat.init1, .llocat , .elocat ), theta2eta(scale1.init, .lscale , .escale ), theta2eta(locat.init2, .llocat , .elocat ), theta2eta(scale2.init, .lscale , .escale )) } }), list(.imethod = imethod, .iloc1 = iloc1, .iloc2 = iloc2, .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .iscale1 = iscale1, .iscale2 = iscale2))), linkinv = function(eta, extra = NULL) { NOS <- NCOL(eta) / c(M1 = 4) fv.mat <- eta[, 1:2] label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = NOS) }, last = eval(substitute(expression({ misc$link <- 
c(location1 = .llocat , scale1 = .lscale , location2 = .llocat , scale2 = .lscale ) misc$earg <- list(location1 = .elocat , scale1 = .escale , location2 = .elocat , scale2 = .escale ) misc$expected <- FALSE misc$BFGS <- TRUE misc$multipleResponses <- FALSE }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { locat1 <- eta2theta(eta[, 1], .llocat , .elocat ) Scale1 <- eta2theta(eta[, 2], .lscale , .escale ) locat2 <- eta2theta(eta[, 3], .llocat , .elocat ) Scale2 <- eta2theta(eta[, 4], .lscale , .escale ) zedd1 <- (y[, 1]-locat1) / Scale1 zedd2 <- (y[, 2]-locat2) / Scale2 if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (-zedd1 - zedd2 - 3 * log1p(exp(-zedd1) + exp(-zedd2)) - log(Scale1) - log(Scale2)) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), vfamily = c("bilogistic"), validparams = eval(substitute(function(eta, y, extra = NULL) { locat1 <- eta2theta(eta[, 1], .llocat , .elocat ) Scale1 <- eta2theta(eta[, 2], .lscale , .escale ) locat2 <- eta2theta(eta[, 3], .llocat , .elocat ) Scale2 <- eta2theta(eta[, 4], .lscale , .escale ) okay1 <- all(is.finite(locat1)) && all(is.finite(Scale1)) && all(0 < Scale1) && all(is.finite(locat2)) && all(is.finite(Scale2)) && all(0 < Scale2) okay1 }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) locat1 <- eta2theta(eta[, 1], .llocat , .elocat ) Scale1 <- eta2theta(eta[, 2], .lscale , .escale ) locat2 <- eta2theta(eta[, 3], .llocat , .elocat ) Scale2 <- eta2theta(eta[, 4], .lscale , .escale ) rbilogis(nsim * length(locat1), loc1 = locat1, scale1 = Scale1, loc2 = locat2, scale2 = Scale2) }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), deriv = eval(substitute(expression({ locat1 <- eta2theta(eta[, 1], .llocat , .elocat ) Scale1 <- eta2theta(eta[, 2], .lscale , .escale ) locat2 <- eta2theta(eta[, 3], .llocat , .elocat ) Scale2 <- eta2theta(eta[, 4], .lscale , .escale ) zedd1 <- (y[, 1]-locat1) / Scale1 zedd2 <- (y[, 2]-locat2) / Scale2 ezedd1 <- exp(-zedd1) ezedd2 <- exp(-zedd2) denom <- 1 + ezedd1 + ezedd2 dl.dlocat1 <- (1 - 3 * ezedd1 / denom) / Scale1 dl.dlocat2 <- (1 - 3 * ezedd2 / denom) / Scale2 dl.dscale1 <- (zedd1 - 1 - 3 * ezedd1 * zedd1 / denom) / Scale1 dl.dscale2 <- (zedd2 - 1 - 3 * ezedd2 * zedd2 / denom) / Scale2 dlocat1.deta <- dtheta.deta(locat1, .llocat , .elocat ) dlocat2.deta <- dtheta.deta(locat2, .llocat , .elocat ) dscale1.deta <- dtheta.deta(Scale1, .lscale , .escale ) dscale2.deta <- dtheta.deta(Scale2, .lscale , .escale ) if (iter == 1) { etanew <- eta } else { derivold <- derivnew etaold <- etanew etanew <- eta } derivnew <- c(w) * cbind(dl.dlocat1 * dlocat1.deta, dl.dscale1 * dscale1.deta, dl.dlocat2 * dlocat2.deta, dl.dscale2 * dscale2.deta) derivnew }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), weight = eval(substitute(expression({ if (iter == 1) { wznew <- cbind(matrix(w, n, M), matrix(0, n, dimm(M)-M)) } else { wzold <- wznew wznew <- qnupdate(w = w, wzold=wzold, dderiv=(derivold - derivnew), deta=etanew-etaold, M = M, 
trace = trace)  # weights incorporated in args
    }
    wznew
  }), list( .lscale = lscale, .escale = escale, .llocat = llocat))))
}


dbilogis <- function(x1, x2, loc1 = 0, scale1 = 1,
                     loc2 = 0, scale2 = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  L <- max(length(x1), length(x2), length(loc1), length(loc2),
           length(scale1), length(scale2))
  if (length(x1    ) != L) x1     <- rep_len(x1,     L)
  if (length(x2    ) != L) x2     <- rep_len(x2,     L)
  if (length(loc1  ) != L) loc1   <- rep_len(loc1,   L)
  if (length(loc2  ) != L) loc2   <- rep_len(loc2,   L)
  if (length(scale1) != L) scale1 <- rep_len(scale1, L)
  if (length(scale2) != L) scale2 <- rep_len(scale2, L)

  zedd1 <- (x1 - loc1) / scale1
  zedd2 <- (x2 - loc2) / scale2
  logdensity <- log(2) - zedd1 - zedd2 - log(scale1) - log(scale2) -
                3 * log1p(exp(-zedd1) + exp(-zedd2))
  logdensity[x1 == -Inf | x2 == -Inf] <- log(0)  # 20141216 KaiH
  if (log.arg) logdensity else exp(logdensity)
}


pbilogis <- function(q1, q2, loc1 = 0, scale1 = 1, loc2 = 0, scale2 = 1) {
  ans <- 1 / (1 + exp(-(q1-loc1)/scale1) + exp(-(q2-loc2)/scale2))
  ans[scale1 <= 0] <- NA
  ans[scale2 <= 0] <- NA
  ans
}


rbilogis <- function(n, loc1 = 0, scale1 = 1, loc2 = 0, scale2 = 1) {
  y1 <- rlogis(n = n, location = loc1, scale = scale1)
  ezedd1 <- exp(-(y1-loc1)/scale1)
  y2 <- loc2 - scale2 * log(1/sqrt(runif(n) / (1 + ezedd1)^2) - 1 - ezedd1)
  ans <- cbind(y1, y2)
  ans[scale2 <= 0, ] <- NA
  ans
}


freund61 <- function(la  = "loge", lap = "loge",
                     lb  = "loge", lbp = "loge",
                     ia = NULL, iap = NULL, ib = NULL, ibp = NULL,
                     independent = FALSE, zero = NULL) {
  la <- as.list(substitute(la))
  ea <- link2list(la)
  la <- attr(ea, "function.name")

  lap <- as.list(substitute(lap))
  eap <- link2list(lap)
  lap <- attr(eap, "function.name")

  lb <- as.list(substitute(lb))
  eb <- link2list(lb)
  lb <- attr(eb, "function.name")

  lbp <- as.list(substitute(lbp))
  ebp <- link2list(lbp)
  lbp <- attr(ebp, "function.name")

  new("vglmff",
  blurb = c("Freund (1961) bivariate exponential distribution\n",
            "Links: ",
            namesof("a",  la,  earg = ea ), ", ",
            namesof("ap", lap, earg = eap), ", ",
            namesof("b",  lb,  earg = eb ), ", ",
            namesof("bp", lbp, earg = ebp)),
  constraints = eval(substitute(expression({
    M1 <- 4
    Q1 <- 2
    constraints <- cm.VGAM(matrix(c(1, 1,0,0, 0,0, 1, 1), M, 2), x = x,
                           bool = .independent ,
                           constraints = constraints, apply.int = TRUE)
    constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M,
                                predictors.names = predictors.names,
                                M1 = 4)
  }), list( .independent = independent, .zero = zero))),
  infos = eval(substitute(function(...)
  { list(M1 = 4,
         Q1 = 2,
         expected = TRUE,
         multipleResponses = FALSE,
         parameters.names = c("a", "ap", "b", "bp"),
         la = .la , lap = .lap , lb = .lb , lbp = .lbp ,
         independent = .independent ,
         zero = .zero )
  }, list( .zero = zero,
           .la = la , .lap = lap , .lb = lb , .lbp = lbp ,
           .independent = independent ))),
  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2,
                       out.wy = TRUE, colsyperw = 2, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    predictors.names <-
      c(namesof("a",  .la ,  earg = .ea ,  short = TRUE),
        namesof("ap", .lap , earg = .eap , short = TRUE),
        namesof("b",  .lb ,  earg = .eb ,  short = TRUE),
        namesof("bp", .lbp , earg = .ebp , short = TRUE))

    extra$y1.lt.y2 <- y[, 1] < y[, 2]
    if (!(arr <- sum(extra$y1.lt.y2)) || arr == n)
      stop("identifiability problem: either all y1 < y2 or all y1 > y2")
  # [The remaining slots of freund61 are absent from this extract.]


bigamma.mckay <-
  function(lscale = "loge", lshape1 = "loge", lshape2 = "loge",
           iscale = NULL, ishape1 = NULL, ishape2 = NULL,
           imethod = 1, zero = "shape") {
  # The opening of bigamma.mckay is also missing here; this header and
  # the link2list() boilerplate are reconstructed from the slots below
  # and from the pattern of the other family functions in this file.
  lscale <- as.list(substitute(lscale))
  escale <- link2list(lscale)
  lscale <- attr(escale, "function.name")

  lshape1 <- as.list(substitute(lshape1))
  eshape1 <- link2list(lshape1)
  lshape1 <- attr(eshape1, "function.name")

  lshape2 <- as.list(substitute(lshape2))
  eshape2 <- link2list(lshape2)
  lshape2 <- attr(eshape2, "function.name")

  if (!is.Numeric(imethod, length.arg = 1,
                  integer.valued = TRUE, positive = TRUE) ||
      imethod > 2.5)
    stop("argument 'imethod' must be 1 or 2")

  new("vglmff",
  blurb = c("Bivariate gamma: McKay's distribution\n",
            "Links: ",
            namesof("scale",  lscale,  earg = escale ), ", ",
            namesof("shape1", lshape1, earg = eshape1), ", ",
            namesof("shape2", lshape2, earg = eshape2)),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M,
                                predictors.names = predictors.names,
                                M1 = 3)
  }), list( .zero = zero ))),
  infos = eval(substitute(function(...) {
    list(M1 = 3,
         Q1 = 2,
         expected = TRUE,
         multipleResponses = FALSE,
         parameters.names = c("scale", "shape1", "shape2"),
         lscale = .lscale , lshape1 = .lshape1 , lshape2 = .lshape2 ,
         zero = .zero )
  }, list( .zero = zero, .lscale = lscale ,
           .lshape1 = lshape1, .lshape2 = lshape2 ))),
  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2,
                       out.wy = TRUE, colsyperw = 2, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y
    extra$colnames.y <- colnames(y)

    if (any(y[, 1] >= y[, 2]))
      stop("the second column minus the first column must be a vector ",
           "of positive values")

    predictors.names <-
      c(namesof("scale",  .lscale,  .escale,  short = TRUE),
        namesof("shape1", .lshape1, .eshape1, short = TRUE),
        namesof("shape2", .lshape2, .eshape2, short = TRUE))

    if (!length(etastart)) {
      momentsY <- if ( .imethod == 1) {
        cbind(median(y[, 1]),  # This may not be monotonic
              median(y[, 2])) + 0.01
      } else {
        cbind(weighted.mean(y[, 1], w),
              weighted.mean(y[, 2], w))
      }

      mcg2.loglik <- function(thetaval, y, x, w, extraargs) {
        ainit <- a <- thetaval
        momentsY <- extraargs$momentsY
        p <- (1/a) * abs(momentsY[1]) + 0.01
        q <- (1/a) * abs(momentsY[2] - momentsY[1]) + 0.01
        sum(c(w) * (-(p+q)*log(a) - lgamma(p) - lgamma(q) +
                    (p - 1)*log(y[, 1]) +
                    (q - 1)*log(y[, 2]-y[, 1]) - y[, 2] / a ))
      }

      a.grid <- if (length( .iscale )) c( .iscale ) else
                c(0.01, 0.02, 0.05, 0.1, 0.2, 0.5,
                  1, 2, 5, 10, 20, 50, 100)
      extraargs <- list(momentsY = momentsY)
      ainit <- grid.search(a.grid, objfun = mcg2.loglik,
                           y = y, x = x, w = w, maximize = TRUE,
                           extraargs = extraargs)
      ainit <- rep_len(if (is.Numeric( .iscale )) .iscale else ainit, n)

      pinit <- (1/ainit) * abs(momentsY[1]) + 0.01
      qinit <- (1/ainit) * abs(momentsY[2] - momentsY[1]) + 0.01
      pinit <- rep_len(if (is.Numeric( .ishape1 )) .ishape1 else pinit, n)
      qinit <- rep_len(if (is.Numeric( .ishape2 )) .ishape2 else qinit, n)

      etastart <- cbind(theta2eta(ainit, .lscale),
                        theta2eta(pinit, .lshape1),
                        theta2eta(qinit, .lshape2))
    }
  }), list( .lscale = lscale, .lshape1 = lshape1, .lshape2 = lshape2,
            .escale = escale, .eshape1 = eshape1, .eshape2 = eshape2,
            .iscale = iscale, .ishape1 = ishape1, .ishape2 = ishape2,
            .imethod = imethod ))),
  linkinv =
eval(substitute(function(eta, extra = NULL) { NOS <- NCOL(eta) / c(M1 = 3) a <- eta2theta(eta[, 1], .lscale , .escale ) p <- eta2theta(eta[, 2], .lshape1 , .eshape1 ) q <- eta2theta(eta[, 3], .lshape2 , .eshape2 ) fv.mat <- cbind("y1" = p*a, "y2" = (p+q)*a) # Overwrite the colnames: label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lscale = lscale, .lshape1 = lshape1, .lshape2 = lshape2, .escale = escale, .eshape1 = eshape1, .eshape2 = eshape2 ))), last = eval(substitute(expression({ misc$link <- c("scale" = .lscale , "shape1" = .lshape1 , "shape2" = .lshape2 ) misc$earg <- list("scale" = .escale , "shape1" = .eshape1 , "shape2" = .eshape2 ) misc$ishape1 <- .ishape1 misc$ishape2 <- .ishape2 misc$iscale <- .iscale misc$expected <- TRUE misc$multipleResponses <- FALSE }), list( .lscale = lscale, .lshape1 = lshape1, .lshape2 = lshape2, .escale = escale, .eshape1 = eshape1, .eshape2 = eshape2, .iscale = iscale, .ishape1 = ishape1, .ishape2 = ishape2, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { a <- eta2theta(eta[, 1], .lscale , .escale ) p <- eta2theta(eta[, 2], .lshape1 , .eshape1 ) q <- eta2theta(eta[, 3], .lshape2 , .eshape2 ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (-(p+q)*log(a) - lgamma(p) - lgamma(q) + (p - 1)*log(y[, 1]) + (q - 1)*log(y[, 2]-y[, 1]) - y[, 2] / a) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale, .lshape1 = lshape1, .lshape2 = lshape2, .escale = escale, .eshape1 = eshape1, .eshape2 = eshape2 ))), vfamily = c("bigamma.mckay"), validparams = eval(substitute(function(eta, y, extra = NULL) { aparam <- eta2theta(eta[, 1], .lscale , .escale ) shape1 <- eta2theta(eta[, 2], .lshape1 , .eshape1 ) shape2 <- eta2theta(eta[, 3], .lshape2 , .eshape2 ) okay1 <- all(is.finite(aparam)) && all(0 < aparam) && all(is.finite(shape1)) && all(0 < shape1) && all(is.finite(shape2)) && all(0 < shape2) okay1 }, list( .lscale = lscale, .lshape1 = lshape1, .lshape2 = lshape2, .escale = escale, .eshape1 = eshape1, .eshape2 = eshape2 ))), deriv = eval(substitute(expression({ aparam <- eta2theta(eta[, 1], .lscale , .escale ) shape1 <- eta2theta(eta[, 2], .lshape1 , .eshape1 ) shape2 <- eta2theta(eta[, 3], .lshape2 , .eshape2 ) dl.da <- (-(shape1+shape2) + y[, 2] / aparam) / aparam dl.dshape1 <- -log(aparam) - digamma(shape1) + log(y[, 1]) dl.dshape2 <- -log(aparam) - digamma(shape2) + log(y[, 2]-y[, 1]) c(w) * cbind(dl.da * dtheta.deta(aparam, .lscale), dl.dshape1 * dtheta.deta(shape1, .lshape1), dl.dshape2 * dtheta.deta(shape2, .lshape2)) }), list( .lscale = lscale, .lshape1 = lshape1, .lshape2 = lshape2, .escale = escale, .eshape1 = eshape1, .eshape2 = eshape2 ))), weight = eval(substitute(expression({ d11 <- (shape1+shape2) / aparam^2 d22 <- trigamma(shape1) d33 <- trigamma(shape2) d12 <- 1 / aparam d13 <- 1 / aparam d23 <- 0 wz <- matrix(0, n, dimm(M)) wz[, iam(1, 1, M)] <- dtheta.deta(aparam, .lscale )^2 * d11 wz[, iam(2, 2, M)] <- dtheta.deta(shape1, .lshape1 )^2 * d22 wz[, iam(3, 3, M)] <- dtheta.deta(shape2, .lshape2 )^2 * d33 wz[, iam(1, 2, M)] <- dtheta.deta(aparam, .lscale ) * dtheta.deta(shape1, .lshape1 ) * d12 wz[, iam(1, 3, M)] <- dtheta.deta(aparam, .lscale ) * dtheta.deta(shape2, .lshape2 ) * d13 wz[, iam(2, 3, M)] <- dtheta.deta(shape1, .lshape1 ) * dtheta.deta(shape2, .lshape2 ) * d23 c(w) * wz }), list( .lscale = lscale, .lshape1 = lshape1, .lshape2 = lshape2 )))) } rbifrankcop <- 
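# Simulates from Frank's copula by conditional inversion; a dormant
# self-check of the d/p/r trio, added here as a sketch only (it assumes
# the package is attached, and apar = 1 is the independence case):
#   set.seed(1); ymat <- rbifrankcop(500, apar = 3)
#   all(dbifrankcop(ymat[, 1], ymat[, 2], apar = 3) > 0)  # TRUE
#   pbifrankcop(0.5, 0.5, apar = 1)            # 0.25 under independence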
function(n, apar) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (!is.Numeric(apar, positive = TRUE)) stop("bad input for argument 'apar'") if (length(apar) != use.n) apar <- rep_len(apar, use.n) U <- runif(use.n) V <- runif(use.n) T <- apar^U + (apar - apar^U) * V X <- U index <- (abs(apar - 1) < .Machine$double.eps) Y <- U if (any(!index)) Y[!index] <- logb(T[!index] / (T[!index] + (1 - apar[!index]) * V[!index]), base = apar[!index]) ans <- matrix(c(X, Y), nrow = use.n, ncol = 2) if (any(index)) { ans[index, 1] <- runif(sum(index)) # Uniform density for apar == 1 ans[index, 2] <- runif(sum(index)) } ans } pbifrankcop <- function(q1, q2, apar) { if (!is.Numeric(q1)) stop("bad input for 'q1'") if (!is.Numeric(q2)) stop("bad input for 'q2'") if (!is.Numeric(apar, positive = TRUE)) stop("bad input for 'apar'") L <- max(length(q1), length(q2), length(apar)) if (length(apar ) != L) apar <- rep_len(apar, L) if (length(q1 ) != L) q1 <- rep_len(q1, L) if (length(q2 ) != L) q2 <- rep_len(q2, L) x <- q1; y <- q2 index <- (x >= 1 & y < 1) | (y >= 1 & x < 1) | (x <= 0 | y <= 0) | (x >= 1 & y >= 1) | (abs(apar - 1) < .Machine$double.eps) ans <- as.numeric(index) if (any(!index)) ans[!index] <- logb(1 + ((apar[!index])^(x[!index]) - 1)* ((apar[!index])^(y[!index]) - 1)/(apar[!index] - 1), base = apar[!index]) ind2 <- (abs(apar - 1) < .Machine$double.eps) ans[ind2] <- x[ind2] * y[ind2] ans[x >= 1 & y < 1] <- y[x >= 1 & y < 1] # P(Y2 < q2) = q2 ans[y >= 1 & x < 1] <- x[y >= 1 & x < 1] # P(Y1 < q1) = q1 ans[x <= 0 | y <= 0] <- 0 ans[x >= 1 & y >= 1] <- 1 ans } if (FALSE) dbifrank <- function(x1, x2, apar, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) logdens <- (x1+x2)*log(apar) + log(apar-1) + log(log(apar)) - 2 * log(apar - 1 + (apar^x1 - 1) * (apar^x2 - 1)) if (log.arg) logdens else exp(logdens) } dbifrankcop <- function(x1, x2, apar, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(x1)) stop("bad input for 'x1'") if (!is.Numeric(x2)) stop("bad input for 'x2'") if (!is.Numeric(apar, positive = TRUE)) stop("bad input for 'apar'") L <- max(length(x1), length(x2), length(apar)) if (length(apar ) != L) apar <- rep_len(apar, L) if (length(x1 ) != L) x1 <- rep_len(x1, L) if (length(x2 ) != L) x2 <- rep_len(x2, L) if (log.arg) { denom <- apar-1 + (apar^x1 - 1) * (apar^x2 - 1) denom <- abs(denom) log((apar - 1) * log(apar)) + (x1+x2)*log(apar) - 2 * log(denom) } else { temp <- (apar - 1) + (apar^x1 - 1) * (apar^x2 - 1) index <- (abs(apar - 1) < .Machine$double.eps) ans <- x1 if (any(!index)) ans[!index] <- (apar[!index] - 1) * log(apar[!index]) * (apar[!index])^(x1[!index] + x2[!index]) / (temp[!index])^2 ans[x1 <= 0 | x2 <= 0 | x1 >= 1 | x2 >= 1] <- 0 ans[index] <- 1 ans } } bifrankcop.control <- function(save.weights = TRUE, ...) 
{ list(save.weights = save.weights) } bifrankcop <- function(lapar = "loge", iapar = 2, nsimEIM = 250) { lapar <- as.list(substitute(lapar)) eapar <- link2list(lapar) lapar <- attr(eapar, "function.name") if (!is.Numeric(iapar, positive = TRUE)) stop("argument 'iapar' must be positive") if (length(nsimEIM) && (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE) || nsimEIM <= 50)) stop("argument 'nsimEIM' should be an integer greater than 50") new("vglmff", blurb = c("Frank's bivariate copula\n", "Links: ", namesof("apar", lapar, earg = eapar )), initialize = eval(substitute(expression({ if (any(y <= 0) || any(y >= 1)) stop("the response must have values between 0 and 1") temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("apar", .lapar , earg = .eapar, short = TRUE)) extra$colnames.y <- colnames(y) if (!length(etastart)) { apar.init <- rep_len(.iapar , n) etastart <- cbind(theta2eta(apar.init, .lapar , earg = .eapar )) } }), list( .lapar = lapar, .eapar = eapar, .iapar = iapar))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- NCOL(eta) / c(M1 = 1) Q1 <- 2 fv.mat <- matrix(0.5, NROW(eta), NOS * Q1) label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lapar = lapar, .eapar = eapar ))), last = eval(substitute(expression({ misc$link <- c("apar" = .lapar ) misc$earg <- list("apar" = .eapar ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$pooled.weight <- pooled.weight misc$multipleResponses <- FALSE }), list( .lapar = lapar, .eapar = eapar, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { apar <- eta2theta(eta, .lapar , earg = .eapar ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dbifrankcop(x1 = y[, 1], x2 = y[, 2], apar = apar, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lapar = lapar, .eapar = eapar ))), vfamily = c("bifrankcop"), validparams = eval(substitute(function(eta, y, extra = NULL) { apar <- eta2theta(eta, .lapar , earg = .eapar ) okay1 <- all(is.finite(apar)) && all(0 < apar) okay1 }, list( .lapar = lapar, .eapar = eapar ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) apar <- eta2theta(eta, .lapar , earg = .eapar ) rbifrankcop(nsim * length(apar), apar = c(apar)) }, list( .lapar = lapar, .eapar = eapar ))), deriv = eval(substitute(expression({ apar <- eta2theta(eta, .lapar , earg = .eapar ) dapar.deta <- dtheta.deta(apar, .lapar , earg = .eapar ) de3 <- deriv3(~ (log((apar - 1) * log(apar)) + (y1+y2)*log(apar) - 2 * log(apar-1 + (apar^y1 - 1) * (apar^y2 - 1))), name = "apar", hessian = TRUE) denom <- apar-1 + (apar^y[, 1] - 1) * (apar^y[, 2] - 1) tmp700 <- 2*apar^(y[, 1]+y[, 2]) - apar^y[, 1] - apar^y[, 2] numerator <- 1 + y[, 1] * apar^(y[, 1] - 1) * (apar^y[, 2] - 1) + y[, 2] * apar^(y[, 2] - 1) * (apar^y[, 1] - 1) Dl.dapar <- 1/(apar - 1) + 1/(apar*log(apar)) + (y[, 1]+y[, 2])/apar - 2 * numerator / denom c(w) * Dl.dapar * dapar.deta }), list( .lapar = lapar, .eapar = eapar, .nsimEIM = nsimEIM ))), weight = eval(substitute(expression({ if ( is.Numeric( .nsimEIM)) { pooled.weight <- FALSE # For @last run.mean <- 0 for (ii in 1:( .nsimEIM 
)) { ysim <- rbifrankcop(n, apar = apar) y1 <- ysim[, 1]; y2 <- ysim[, 2]; eval.de3 <- eval(de3) d2l.dthetas2 <- attr(eval.de3, "hessian") rm(ysim) temp3 <- -d2l.dthetas2[, 1, 1] # M = 1 run.mean <- ((ii - 1) * run.mean + temp3) / ii } wz <- if (intercept.only) matrix(mean(run.mean), n, dimm(M)) else run.mean wz <- wz * dapar.deta^2 c(w) * wz } else { nump <- apar^(y[, 1]+y[, 2]-2) * (2 * y[, 1] * y[, 2] + y[, 1]*(y[, 1] - 1) + y[, 2]*(y[, 2] - 1)) - y[, 1]*(y[, 1] - 1) * apar^(y[, 1]-2) - y[, 2]*(y[, 2] - 1) * apar^(y[, 2]-2) D2l.dapar2 <- 1/(apar - 1)^2 + (1+log(apar))/(apar*log(apar))^2 + (y[, 1]+y[, 2])/apar^2 + 2 * (nump / denom - (numerator/denom)^2) d2apar.deta2 <- d2theta.deta2(apar, .lapar , earg = .eapar ) wz <- c(w) * (dapar.deta^2 * D2l.dapar2 - Dl.dapar * d2apar.deta2) if (TRUE && intercept.only) { wz <- cbind(wz) sumw <- sum(w) for (iii in 1:ncol(wz)) wz[,iii] <- sum(wz[, iii]) / sumw pooled.weight <- TRUE wz <- c(w) * wz # Put back the weights } else { pooled.weight <- FALSE } wz } }), list( .lapar = lapar, .eapar = eapar, .nsimEIM = nsimEIM )))) } gammahyperbola <- function(ltheta = "loge", itheta = NULL, expected = FALSE) { ltheta <- as.list(substitute(ltheta)) etheta <- link2list(ltheta) ltheta <- attr(etheta, "function.name") if (!is.logical(expected) || length(expected) != 1) stop("argument 'expected' must be a single logical") new("vglmff", blurb = c("Gamma hyperbola bivariate distribution\n", "Links: ", namesof("theta", ltheta, etheta)), initialize = eval(substitute(expression({ if (any(y[, 1] <= 0) || any(y[, 2] <= 1)) stop("the response has values that are out of range") temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y extra$colnames.y <- colnames(y) predictors.names <- c(namesof("theta", .ltheta , .etheta , short = TRUE)) if (!length(etastart)) { theta.init <- if (length( .itheta)) { rep_len( .itheta , n) } else { 1 / (y[, 2] - 1 + 0.01) } etastart <- cbind(theta2eta(theta.init, .ltheta , .etheta )) } }), list( .ltheta = ltheta, .etheta = etheta, .itheta = itheta))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- NCOL(eta) / c(M1 = 1) theta <- eta2theta(eta, .ltheta , .etheta ) fv.mat <- cbind(theta * exp(theta), 1 + 1 / theta) label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = NOS) }, list( .ltheta = ltheta, .etheta = etheta ))), last = eval(substitute(expression({ misc$link <- c("theta" = .ltheta ) misc$earg <- list("theta" = .etheta ) misc$expected <- .expected misc$multipleResponses <- FALSE }), list( .ltheta = ltheta, .etheta = etheta, .expected = expected ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { theta <- eta2theta(eta, .ltheta , .etheta ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (-exp(-theta) * y[, 1] / theta - theta * y[, 2]) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .ltheta = ltheta, .etheta = etheta ))), vfamily = c("gammahyperbola"), validparams = eval(substitute(function(eta, y, extra = NULL) { theta <- eta2theta(eta, .ltheta , .etheta ) okay1 <- all(is.finite(theta)) && all(0 < theta) okay1 }, list( .ltheta = ltheta, .etheta = etheta ))), deriv = eval(substitute(expression({ theta <- eta2theta(eta, .ltheta , .etheta ) Dl.dtheta <- exp(-theta) * y[, 1] * (1+theta) / theta^2 - y[, 2] DTHETA.deta <- dtheta.deta(theta, .ltheta , .etheta ) c(w) * Dl.dtheta * 
DTHETA.deta }), list( .ltheta = ltheta, .etheta = etheta ))), weight = eval(substitute(expression({ temp300 <- 2 + theta * (2 + theta) if ( .expected ) { D2l.dtheta2 <- temp300 / theta^2 wz <- c(w) * DTHETA.deta^2 * D2l.dtheta2 } else { D2l.dtheta2 <- temp300 * y[, 1] * exp(-theta) / theta^3 D2theta.deta2 <- d2theta.deta2(theta, .ltheta ) wz <- c(w) * (DTHETA.deta^2 * D2l.dtheta2 - Dl.dtheta * D2theta.deta2) } wz }), list( .ltheta = ltheta, .etheta = etheta, .expected = expected )))) } bifgmexp <- function(lapar = "rhobit", iapar = NULL, tola0 = 0.01, imethod = 1) { lapar <- as.list(substitute(lapar)) earg <- link2list(lapar) lapar <- attr(earg, "function.name") if (length(iapar) && (!is.Numeric(iapar, length.arg = 1) || abs(iapar) >= 1)) stop("argument 'iapar' must be a single number between -1 and 1") if (!is.Numeric(tola0, length.arg = 1, positive = TRUE)) stop("argument 'tola0' must be a single positive number") if (length(iapar) && abs(iapar) <= tola0) stop("argument 'iapar' must not be between -tola0 and tola0") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2.5) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Bivariate Farlie-Gumbel-Morgenstern ", "exponential distribution\n", # Morgenstern's "Links: ", namesof("apar", lapar, earg = earg )), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("apar", .lapar , earg = .earg , short = TRUE)) extra$colnames.y <- colnames(y) if (!length(etastart)) { ainit <- if (length(.iapar)) rep_len( .iapar , n) else { mean1 <- if ( .imethod == 1) median(y[, 1]) else mean(y[, 1]) mean2 <- if ( .imethod == 1) median(y[, 2]) else mean(y[, 2]) Finit <- 0.01 + mean(y[, 1] <= mean1 & y[, 2] <= mean2) ((Finit+expm1(-mean1)+exp(-mean2)) / exp(-mean1-mean2) - 1) / ( expm1(-mean1) * expm1(-mean2)) } etastart <- theta2eta(rep_len(ainit, n), .lapar , earg = .earg ) } }), list( .iapar = iapar, .lapar = lapar, .earg = earg, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- NCOL(eta) / c(M1 = 1) Q1 <- 2 fv.mat <- matrix(1, NROW(eta), NOS * Q1) label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lapar = lapar, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c("apar" = .lapar ) misc$earg <- list("apar" = .earg ) misc$expected <- FALSE misc$pooled.weight <- pooled.weight misc$multipleResponses <- FALSE }), list( .lapar = lapar, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { alpha <- eta2theta(eta, .lapar , earg = .earg ) alpha[abs(alpha) < .tola0 ] <- .tola0 if (residuals) { stop("loglikelihood residuals not implemented yet") } else { denom <- (1 + alpha - 2*alpha*(exp(-y[, 1]) + exp(-y[, 2])) + 4*alpha*exp(-y[, 1] - y[, 2])) ll.elts <- c(w) * (-y[, 1] - y[, 2] + log(denom)) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lapar = lapar, .earg = earg, .tola0 = tola0 ))), vfamily = c("bifgmexp"), # morgenstern validparams = eval(substitute(function(eta, y, extra = NULL) { alpha <- eta2theta(eta, .lapar , earg = .earg ) okay1 <- all(is.finite(alpha)) && all(abs(alpha) < 1) okay1 }, list( .lapar = lapar, .earg = earg, .tola0 = tola0 ))), deriv = eval(substitute(expression({ alpha <- eta2theta(eta, .lapar , earg = .earg ) alpha[abs(alpha) < 
.tola0 ] <- .tola0
    numerator <- 1 - 2*(exp(-y[, 1]) + exp(-y[, 2])) +
                 4*exp(-y[, 1] - y[, 2])
    denom <- (1 + alpha - 2*alpha*(exp(-y[, 1]) + exp(-y[, 2])) +
              4*alpha*exp(-y[, 1] - y[, 2]))
    dl.dalpha <- numerator / denom
    dalpha.deta <- dtheta.deta(alpha, .lapar , earg = .earg )
    c(w) * cbind(dl.dalpha * dalpha.deta)
  }), list( .lapar = lapar, .earg = earg, .tola0 = tola0 ))),
  weight = eval(substitute(expression({
    d2l.dalpha2 <- dl.dalpha^2
    d2alpha.deta2 <- d2theta.deta2(alpha, .lapar , earg = .earg )
    wz <- c(w) * (dalpha.deta^2 * d2l.dalpha2 - d2alpha.deta2 * dl.dalpha)
    if (TRUE && intercept.only) {
      wz <- cbind(wz)
      sumw <- sum(w)
      for (iii in 1:ncol(wz))
        wz[, iii] <- sum(wz[, iii]) / sumw
      pooled.weight <- TRUE
      wz <- c(w) * wz  # Put back the weights
    } else {
      pooled.weight <- FALSE
    }
    wz
  }), list( .lapar = lapar, .earg = earg ))))
}


rbifgmcop <- function(n, apar) {
  use.n <- if ((length.n <- length(n)) > 1) length.n else
           if (!is.Numeric(n, integer.valued = TRUE,
                           length.arg = 1, positive = TRUE))
             stop("bad input for argument 'n'") else n
  if (!is.Numeric(apar))
    stop("bad input for argument 'apar'")
  if (any(abs(apar) > 1))
    stop("argument 'apar' has values out of range")

  y1 <- V1 <- runif(use.n)
  V2 <- runif(use.n)
  temp <- 2*y1 - 1
  A <- apar * temp - 1
  B <- sqrt(1 - 2 * apar * temp + (apar*temp)^2 + 4 * apar * V2 * temp)
  y2 <- 2 * V2 / (B - A)
  matrix(c(y1, y2), nrow = use.n, ncol = 2)
}


dbifgmcop <- function(x1, x2, apar, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)
  if (!is.Numeric(apar))
    stop("bad input for 'apar'")
  if (any(abs(apar) > 1))
    stop("'apar' values out of range")

  L <- max(length(x1), length(x2), length(apar))
  if (length(x1)   != L) x1   <- rep_len(x1,   L)
  if (length(x2)   != L) x2   <- rep_len(x2,   L)
  if (length(apar) != L) apar <- rep_len(apar, L)

  ans <- 0 * x1
  xnok <- (x1 <= 0) | (x1 >= 1) | (x2 <= 0) | (x2 >= 1)
  if ( log.arg ) {
    ans[!xnok] <- log1p(apar[!xnok] * (1-2*x1[!xnok]) * (1-2*x2[!xnok]))
    ans[xnok] <- log(0)
  } else {
    ans[!xnok] <- 1 + apar[!xnok] * (1-2*x1[!xnok]) * (1-2*x2[!xnok])
    ans[xnok] <- 0
    if (any(ans < 0))
      stop("negative values in the density (apar out of range)")
  }
  ans
}


pbifgmcop <- function(q1, q2, apar) {
  if (!is.Numeric(q1)) stop("bad input for 'q1'")
  if (!is.Numeric(q2)) stop("bad input for 'q2'")
  if (!is.Numeric(apar)) stop("bad input for 'apar'")
  if (any(abs(apar) > 1)) stop("'apar' values out of range")

  L <- max(length(q1), length(q2), length(apar))
  if (length(q1)   != L) q1   <- rep_len(q1,   L)
  if (length(q2)   != L) q2   <- rep_len(q2,   L)
  if (length(apar) != L) apar <- rep_len(apar, L)

  x <- q1
  y <- q2
  index <- (x >= 1 & y < 1) | (y >= 1 & x < 1) |
           (x <= 0 | y <= 0) | (x >= 1 & y >= 1)
  ans <- as.numeric(index)
  if (any(!index)) {
    ans[!index] <- q1[!index] * q2[!index] *
                   (1 + apar[!index] * (1-q1[!index])*(1-q2[!index]))
  }
  ans[x >= 1 & y < 1] <- y[x >= 1 & y < 1]  # P(Y2 < q2) = q2
  ans[y >= 1 & x < 1] <- x[y >= 1 & x < 1]  # P(Y1 < q1) = q1
  ans[x <= 0 | y <= 0] <- 0
  ans[x >= 1 & y >= 1] <- 1
  ans
}


bifgmcop <- function(lapar = "rhobit", iapar = NULL, imethod = 1) {
  lapar <- as.list(substitute(lapar))
  earg <- link2list(lapar)
  lapar <- attr(earg, "function.name")

  if (!is.Numeric(imethod, length.arg = 1,
                  integer.valued = TRUE, positive = TRUE) ||
      imethod > 3.5)
    stop("argument 'imethod' must be 1 or 2 or 3")
  if (length(iapar) && (abs(iapar) >= 1))
    stop("'iapar' should be less than 1 in absolute value")

  new("vglmff",
  blurb =
c("Farlie-Gumbel-Morgenstern copula \n", # distribution "Links: ", namesof("apar", lapar, earg = earg )), initialize = eval(substitute(expression({ if (any(y < 0) || any(y > 1)) stop("the response must have values in the unit square") temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- namesof("apar", .lapar , earg = .earg , short = TRUE) extra$colnames.y <- colnames(y) if (!length(etastart)) { ainit <- if (length( .iapar )) .iapar else { if ( .imethod == 1) { 3 * cor(y[, 1], y[, 2], method = "spearman") } else if ( .imethod == 2) { 9 * kendall.tau(y[, 1], y[, 2]) / 2 } else { mean1 <- if ( .imethod == 1) weighted.mean(y[, 1], w) else median(y[, 1]) mean2 <- if ( .imethod == 1) weighted.mean(y[, 2], w) else median(y[, 2]) Finit <- weighted.mean(y[, 1] <= mean1 & y[, 2] <= mean2, w) (Finit / (mean1 * mean2) - 1) / ((1 - mean1) * (1 - mean2)) } } ainit <- min(0.95, max(ainit, -0.95)) etastart <- theta2eta(rep_len(ainit, n), .lapar , earg = .earg ) } }), list( .iapar = iapar, .lapar = lapar, .earg = earg, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- NCOL(eta) / c(M1 = 1) Q1 <- 2 fv.mat <- matrix(0.5, NROW(eta), NOS * Q1) label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lapar = lapar, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c("apar" = .lapar ) misc$earg <- list("apar" = .earg ) misc$expected <- FALSE misc$multipleResponses <- FALSE }), list( .lapar = lapar, .earg = earg))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { alpha <- eta2theta(eta, .lapar , earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dbifgmcop(x1 = y[, 1], x2 = y[, 2], apar = alpha, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lapar = lapar, .earg = earg ))), vfamily = c("bifgmcop"), validparams = eval(substitute(function(eta, y, extra = NULL) { alpha <- eta2theta(eta, .lapar , earg = .earg ) okay1 <- all(is.finite(alpha)) && all(abs(alpha) < 1) okay1 }, list( .lapar = lapar, .earg = earg ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) alpha <- eta2theta(eta, .lapar , earg = .earg ) rbifgmcop(nsim * length(alpha), apar = c(alpha)) }, list( .lapar = lapar, .earg = earg ))), deriv = eval(substitute(expression({ alpha <- eta2theta(eta, .lapar , earg = .earg ) dalpha.deta <- dtheta.deta(alpha, .lapar , earg = .earg ) numerator <- (1 - 2 * y[, 1]) * (1 - 2 * y[, 2]) denom <- 1 + alpha * numerator mytolerance <- .Machine$double.eps bad <- (denom <= mytolerance) # Range violation if (any(bad)) { cat("There are some range violations in @deriv\n") flush.console() denom[bad] <- 2 * mytolerance } dl.dalpha <- numerator / denom c(w) * cbind(dl.dalpha * dalpha.deta) }), list( .lapar = lapar, .earg = earg))), weight = eval(substitute(expression({ wz <- lerch(alpha^2, 2, 1.5) / 4 # Checked and correct wz <- wz * dalpha.deta^2 c(w) * wz }), list( .lapar = lapar, .earg = earg)))) } bigumbelIexp <- function(lapar = "identitylink", iapar = NULL, imethod = 1) { lapar <- as.list(substitute(lapar)) earg <- link2list(lapar) lapar <- attr(earg, "function.name") if (length(iapar) && 
!is.Numeric(iapar, length.arg = 1))
    stop("'iapar' must be a single number")
  if (!is.Numeric(imethod, length.arg = 1,
                  integer.valued = TRUE, positive = TRUE) ||
      imethod > 2.5)
    stop("argument 'imethod' must be 1 or 2")

  new("vglmff",
  blurb = c("Gumbel's Type I bivariate exponential distribution\n",
            "Links: ",
            namesof("apar", lapar, earg = earg )),
  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       Is.nonnegative.y = TRUE,
                       ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2,
                       out.wy = TRUE, colsyperw = 2, maximize = TRUE)
    w <- temp5$w
    y <- temp5$y
    extra$colnames.y <- colnames(y)

    predictors.names <- c(namesof("apar", .lapar , earg = .earg ,
                                  short = TRUE))

    if (!length(etastart)) {
      ainit <- if (length( .iapar )) rep_len( .iapar , n) else {
        mean1 <- if ( .imethod == 1) median(y[, 1]) else mean(y[, 1])
        mean2 <- if ( .imethod == 1) median(y[, 2]) else mean(y[, 2])
        Finit <- 0.01 + mean(y[, 1] <= mean1 & y[, 2] <= mean2)
        (log(Finit+expm1(-mean1)+exp(-mean2))+mean1+mean2)/(mean1*mean2)
      }
      etastart <- theta2eta(rep_len(ainit, n), .lapar , earg = .earg )
    }
  }), list( .iapar = iapar, .lapar = lapar, .earg = earg,
            .imethod = imethod ))),
  linkinv = eval(substitute(function(eta, extra = NULL) {
    NOS <- NCOL(eta) / c(M1 = 1)
    Q1 <- 2
    fv.mat <- matrix(1, NROW(eta), NOS * Q1)  # Unit-exponential margins
    label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = NOS)
  }, list( .lapar = lapar, .earg = earg ))),
  last = eval(substitute(expression({
    misc$link <- c("apar" = .lapar )
    misc$earg <- list("apar" = .earg )
    misc$expected <- FALSE
    misc$pooled.weight <- pooled.weight
    misc$multipleResponses <- FALSE
  }), list( .lapar = lapar, .earg = earg ))),
  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta,
             extra = NULL, summation = TRUE) {
    alpha <- eta2theta(eta, .lapar , earg = .earg )
    if (residuals) {
      stop("loglikelihood residuals not implemented yet")
    } else {
      denom <- (alpha*y[, 1] - 1) * (alpha*y[, 2] - 1) + alpha
      mytolerance <- .Machine$double.xmin
      bad <- (denom <= mytolerance)  # Range violation
      if (any(bad)) {
        cat("There are some range violations in @loglikelihood\n")
        flush.console()
      }
      if (summation) {
        sum(bad) * (-1.0e10) +
        sum(w[!bad] * (-y[!bad, 1] - y[!bad, 2] +
                       alpha[!bad] * y[!bad, 1] * y[!bad, 2] +
                       log(denom[!bad])))
      } else {
        stop("argument 'summation = FALSE' does not work yet")
      }
    }
  }, list( .lapar = lapar, .earg = earg ))),
  vfamily = c("bigumbelIexp"),
  validparams = eval(substitute(function(eta, y, extra = NULL) {
    alpha <- eta2theta(eta, .lapar , earg = .earg )
    okay1 <- all(is.finite(alpha))
    okay1
  }, list( .lapar = lapar, .earg = earg ))),
  deriv = eval(substitute(expression({
    alpha <- eta2theta(eta, .lapar , earg = .earg )
    numerator <- (alpha * y[, 1] - 1) * y[, 2] +
                 (alpha * y[, 2] - 1) * y[, 1] + 1
    denom <- (alpha * y[, 1] - 1) * (alpha * y[, 2] - 1) + alpha
    denom <- abs(denom)
    dl.dalpha <- numerator / denom + y[, 1] * y[, 2]
    dalpha.deta <- dtheta.deta(alpha, .lapar , earg = .earg )
    c(w) * cbind(dl.dalpha * dalpha.deta)
  }), list( .lapar = lapar, .earg = earg ))),
  weight = eval(substitute(expression({
    d2l.dalpha2 <- (numerator/denom)^2 - 2*y[, 1]*y[, 2] / denom
    d2alpha.deta2 <- d2theta.deta2(alpha, .lapar , earg = .earg )
    wz <- c(w) * (dalpha.deta^2 * d2l.dalpha2 - d2alpha.deta2 * dl.dalpha)
    if (TRUE && intercept.only) {
      wz <- cbind(wz)
      sumw <- sum(w)
      for (iii in 1:ncol(wz))
        wz[, iii] <- sum(wz[, iii]) / sumw
      pooled.weight <- TRUE
      wz <- c(w) * wz  # Put back the weights
    } else {
      pooled.weight <- FALSE
    }
    wz
  }), list( .lapar = lapar, .earg = earg ))))
}
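# A small round-trip check of the FGM copula functions above; a sketch
# only (it assumes VGAM is attached, and that an intercept-only vglm()
# fit recovers 'apar'), kept dormant with the same if (FALSE)
# convention used elsewhere in this file:
if (FALSE) {
  set.seed(123)
  ymat <- rbifgmcop(n = 1000, apar = 0.6)
  range(dbifgmcop(ymat[, 1], ymat[, 2], apar = 0.6))  # Nonnegative
  pbifgmcop(1, 0.7, apar = 0.6)  # Copula margin: C(1, q2) = q2 = 0.7
  fit <- vglm(ymat ~ 1, fam = bifgmcop)  # Intercept-only fit
  Coef(fit)  # Should be near 0.6
}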
pbiplackcop <- function(q1, q2, oratio) { if (!is.Numeric(q1)) stop("bad input for 'q1'") if (!is.Numeric(q2)) stop("bad input for 'q2'") if (!is.Numeric(oratio, positive = TRUE)) stop("bad input for 'oratio'") L <- max(length(q1), length(q2), length(oratio)) if (length(q1) != L) q1 <- rep_len(q1, L) if (length(q2) != L) q2 <- rep_len(q2, L) if (length(oratio) != L) oratio <- rep_len(oratio, L) x <- q1; y <- q2 index <- (x >= 1 & y < 1) | (y >= 1 & x < 1) | (x <= 0 | y <= 0) | (x >= 1 & y >= 1) | (abs(oratio - 1) < 1.0e-6) # .Machine$double.eps ans <- as.numeric(index) if (any(!index)) { temp1 <- 1 + (oratio[!index] - 1) * (q1[!index] + q2[!index]) temp2 <- temp1 - sqrt(temp1^2 - 4 * oratio[!index] * (oratio[!index] - 1) * q1[!index] * q2[!index]) ans[!index] <- 0.5 * temp2 / (oratio[!index] - 1) } ind2 <- (abs(oratio - 1) < 1.0e-6) # .Machine$double.eps ans[ind2] <- x[ind2] * y[ind2] ans[x >= 1 & y<1] <- y[x >= 1 & y<1] # P(Y2 < q2) = q2 ans[y >= 1 & x<1] <- x[y >= 1 & x<1] # P(Y1 < q1) = q1 ans[x <= 0 | y <= 0] <- 0 ans[x >= 1 & y >= 1] <- 1 ans } rbiplackcop <- function(n, oratio) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n y1 <- U <- runif(use.n) V <- runif(use.n) Z <- V * (1-V) y2 <- (2*Z*(y1*oratio^2 + 1 - y1) + oratio * (1 - 2 * Z) - (1 - 2 * V) * sqrt(oratio * (oratio + 4*Z*y1*(1-y1)*(1-oratio)^2))) / (oratio + Z*(1-oratio)^2) matrix(c(y1, 0.5 * y2), nrow = use.n, ncol = 2) } dbiplackcop <- function(x1, x2, oratio, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) ans <- log(oratio) + log1p((oratio - 1) * (x1+x2 - 2*x1*x2)) - 1.5 * log((1 + (x1+x2)*(oratio - 1))^2 - 4 * oratio * (oratio - 1)*x1*x2) ans[ # !is.na(x1) & !is.na(x2) & !is.na(oratio) & ((x1 < 0) | (x1 > 1) | (x2 < 0) | (x2 > 1))] <- log(0) if (log.arg) ans else exp(ans) } biplackettcop.control <- function(save.weights = TRUE, ...) 
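# biplackettcop() below pairs with the Plackett d/p/r helpers above; a
# dormant sketch, added editorially (it assumes the package is
# attached, and oratio = 1 means independence):
#   set.seed(2); ymat <- rbiplackcop(300, oratio = 4)
#   summary(dbiplackcop(ymat[, 1], ymat[, 2], oratio = 4))
#   pbiplackcop(0.5, 0.5, oratio = 1)          # 0.25 under independence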
{ list(save.weights = save.weights) } biplackettcop <- function(link = "loge", ioratio = NULL, imethod = 1, nsimEIM = 200) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (length(ioratio) && (!is.Numeric(ioratio, positive = TRUE))) stop("'ioratio' must be positive") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Plackett distribution (bivariate copula)\n", "Links: ", namesof("oratio", link, earg = earg )), initialize = eval(substitute(expression({ if (any(y < 0) || any(y > 1)) stop("the response must have values in the unit square") temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- namesof("oratio", .link , earg = .earg, short = TRUE) extra$colnames.y <- colnames(y) if (!length(etastart)) { orinit <- if (length( .ioratio )) .ioratio else { if ( .imethod == 2) { scorp <- cor(y)[1, 2] if (abs(scorp) <= 0.1) 1 else if (abs(scorp) <= 0.3) 3^sign(scorp) else if (abs(scorp) <= 0.6) 5^sign(scorp) else if (abs(scorp) <= 0.8) 20^sign(scorp) else 40^sign(scorp) } else { y10 <- weighted.mean(y[, 1], w) y20 <- weighted.mean(y[, 2], w) (0.5 + sum(w[(y[, 1] < y10) & (y[, 2] < y20)])) * (0.5 + sum(w[(y[, 1] >= y10) & (y[, 2] >= y20)])) / ( ((0.5 + sum(w[(y[, 1] < y10) & (y[, 2] >= y20)])) * (0.5 + sum(w[(y[, 1] >= y10) & (y[, 2] < y20)])))) } } etastart <- theta2eta(rep_len(orinit, n), .link , earg = .earg ) } }), list( .ioratio = ioratio, .link = link, .earg = earg, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- NCOL(eta) / c(M1 = 1) Q1 <- 2 fv.mat <- matrix(0.5, NROW(eta), NOS * Q1) label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = NOS) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(oratio = .link) misc$earg <- list(oratio = .earg) misc$expected <- FALSE misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list( .link = link, .earg = earg, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { oratio <- eta2theta(eta, .link , earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dbiplackcop(x1 = y[, 1], x2 = y[, 2], oratio = oratio, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg ))), vfamily = c("biplackettcop"), validparams = eval(substitute(function(eta, y, extra = NULL) { oratio <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(oratio)) && all(0 < oratio) okay1 }, list( .link = link, .earg = earg ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) oratio <- eta2theta(eta, .link , earg = .earg ) rbiplackcop(nsim * length(oratio), oratio = c(oratio)) }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ oratio <- eta2theta(eta, .link , earg = .earg ) doratio.deta <- dtheta.deta(oratio, .link , earg = .earg ) y1 <- y[, 1] y2 <- y[, 2] de3 <- deriv3(~ (log(oratio) + log(1+(oratio - 1) * (y1+y2-2*y1*y2)) - 1.5 * log((1 + (y1+y2)*(oratio - 1))^2 - 4 * oratio * (oratio - 1)*y1*y2)), name = "oratio", 
hessian = FALSE)
    eval.de3 <- eval(de3)
    dl.doratio <- attr(eval.de3, "gradient")
    c(w) * dl.doratio * doratio.deta
  }), list( .link = link, .earg = earg ))),
  weight = eval(substitute(expression({
    sd3 <- deriv3(~ (log(oratio) +
                     log(1+(oratio - 1) * (y1sim+y2sim-2*y1sim*y2sim)) -
                     1.5 * log((1 + (y1sim+y2sim)*(oratio - 1))^2 -
                               4 * oratio * (oratio - 1)*y1sim*y2sim)),
                  name = "oratio", hessian = FALSE)
    run.var <- 0
    for (ii in 1:( .nsimEIM )) {
      ysim <- rbiplackcop(n, oratio = oratio)
      y1sim <- ysim[, 1]
      y2sim <- ysim[, 2]  # Second column, not column 1 twice
      eval.sd3 <- eval(sd3)
      dl.doratio <- attr(eval.sd3, "gradient")
      rm(ysim, y1sim, y2sim)
      temp3 <- dl.doratio
      run.var <- ((ii - 1) * run.var + temp3^2) / ii
    }
    wz <- if (intercept.only)
            matrix(colMeans(cbind(run.var)), n, dimm(M), byrow = TRUE) else
            cbind(run.var)
    wz <- wz * doratio.deta^2
    c(w) * wz
  }), list( .link = link, .earg = earg, .nsimEIM = nsimEIM ))))
}


dbiamhcop <- function(x1, x2, apar, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)

  L <- max(length(x1), length(x2), length(apar))
  if (length(apar) != L) apar <- rep_len(apar, L)
  if (length(x1)   != L) x1   <- rep_len(x1,   L)
  if (length(x2)   != L) x2   <- rep_len(x2,   L)

  temp <- 1 - apar*(1-x1)*(1-x2)
  if (log.arg) {
    ans <- log1p(-apar+2*apar*x1*x2/temp) - 2*log(temp)
    ans[(x1 <= 0) | (x1 >= 1) | (x2 <= 0) | (x2 >= 1)] <- log(0)
  } else {
    ans <- (1-apar+2*apar*x1*x2/temp) / (temp^2)
    ans[(x1 <= 0) | (x1 >= 1) | (x2 <= 0) | (x2 >= 1)] <- 0
  }
  ans[abs(apar) > 1] <- NA
  ans
}


pbiamhcop <- function(q1, q2, apar) {
  if (!is.Numeric(q1)) stop("bad input for 'q1'")
  if (!is.Numeric(q2)) stop("bad input for 'q2'")
  if (!is.Numeric(apar)) stop("bad input for 'apar'")

  L <- max(length(q1), length(q2), length(apar))
  if (length(q1)   != L) q1   <- rep_len(q1,   L)
  if (length(q2)   != L) q2   <- rep_len(q2,   L)
  if (length(apar) != L) apar <- rep_len(apar, L)

  x <- q1
  y <- q2
  index <- (x >= 1 & y < 1) | (y >= 1 & x < 1) |
           (x <= 0 | y <= 0) | (x >= 1 & y >= 1)
  ans <- as.numeric(index)
  if (any(!index)) {
    ans[!index] <- (q1[!index] * q2[!index]) / (1 -
                   apar[!index] * (1-q1[!index]) * (1-q2[!index]))
  }
  ans[x >= 1 & y < 1] <- y[x >= 1 & y < 1]  # P(Y2 < q2) = q2
  ans[y >= 1 & x < 1] <- x[y >= 1 & x < 1]  # P(Y1 < q1) = q1
  ans[x <= 0 | y <= 0] <- 0
  ans[x >= 1 & y >= 1] <- 1
  ans[abs(apar) > 1] <- NA
  ans
}


rbiamhcop <- function(n, apar) {
  use.n <- if ((length.n <- length(n)) > 1) length.n else
           if (!is.Numeric(n, integer.valued = TRUE,
                           length.arg = 1, positive = TRUE))
             stop("bad input for argument 'n'") else n
  if (any(abs(apar) > 1))
    stop("'apar' values out of range")
  U1 <- V1 <- runif(use.n)
  V2 <- runif(use.n)
  b <- 1-V1
  A <- -apar*(2*b*V2+1)+2*apar^2*b^2*V2+1
  B <- apar^2*(4*b^2*V2-4*b*V2+1)+apar*(4*V2-4*b*V2-2)+1
  U2 <- (2*V2*(apar*b - 1)^2)/(A+sqrt(B))
  matrix(c(U1, U2), nrow = use.n, ncol = 2)
}

biamhcop.control <- function(save.weights = TRUE, ...)
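# biamhcop() below pairs with the Ali-Mikhail-Haq d/p/r helpers above;
# a dormant sketch, added editorially (it assumes the package is
# attached, and apar = 0 means independence):
#   set.seed(3); ymat <- rbiamhcop(300, apar = 0.5)
#   summary(dbiamhcop(ymat[, 1], ymat[, 2], apar = 0.5))
#   pbiamhcop(0.5, 0.5, apar = 0)              # 0.25 under independence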
{ list(save.weights = save.weights) } biamhcop <- function(lapar = "rhobit", iapar = NULL, imethod = 1, nsimEIM = 250) { lapar <- as.list(substitute(lapar)) eapar <- link2list(lapar) lapar <- attr(eapar, "function.name") if (length(iapar) && (abs(iapar) > 1)) stop("'iapar' should be less than or equal to 1 in absolute value") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("imethod must be 1 or 2") if (length(nsimEIM) && (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE) || nsimEIM <= 50)) stop("'nsimEIM' should be an integer greater than 50") new("vglmff", blurb = c("Ali-Mikhail-Haq distribution\n", "Links: ", namesof("apar", lapar, earg = eapar )), initialize = eval(substitute(expression({ if (any(y < 0) || any(y > 1)) stop("the response must have values in the unit square") temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.max = 2, ncol.y.min = 2, out.wy = TRUE, colsyperw = 2, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("apar", .lapar, earg = .eapar, short = TRUE)) extra$colnames.y <- colnames(y) if (!length(etastart)) { ainit <- if (length( .iapar )) .iapar else { mean1 <- if ( .imethod == 1) weighted.mean(y[, 1], w) else median(y[, 1]) mean2 <- if ( .imethod == 1) weighted.mean(y[, 2], w) else median(y[, 2]) Finit <- weighted.mean(y[, 1] <= mean1 & y[, 2] <= mean2, w) (1 - (mean1 * mean2 / Finit)) / ((1-mean1) * (1-mean2)) } ainit <- min(0.95, max(ainit, -0.95)) etastart <- theta2eta(rep_len(ainit, n), .lapar , earg = .eapar ) } }), list( .lapar = lapar, .eapar = eapar, .iapar = iapar, .imethod = imethod))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- NCOL(eta) / c(M1 = 1) Q1 <- 2 fv.mat <- matrix(0.5, NROW(eta), NOS * Q1) label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = NOS) }, list( .lapar = lapar, .eapar = eapar ))), last = eval(substitute(expression({ misc$link <- c("apar" = .lapar ) misc$earg <- list("apar" = .eapar ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list( .lapar = lapar, .eapar = eapar, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { apar <- eta2theta(eta, .lapar, earg = .eapar ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dbiamhcop(x1 = y[, 1], x2 = y[, 2], apar = apar, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lapar = lapar, .eapar = eapar ))), vfamily = c("biamhcop"), validparams = eval(substitute(function(eta, y, extra = NULL) { apar <- eta2theta(eta, .lapar, earg = .eapar ) okay1 <- all(is.finite(apar)) && all(abs(apar) < 1) okay1 }, list( .lapar = lapar, .eapar = eapar ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) apar <- eta2theta(eta, .lapar , earg = .eapar ) rbiamhcop(nsim * length(apar), apar = c(apar)) }, list( .lapar = lapar, .eapar = eapar ))), deriv = eval(substitute(expression({ apar <- eta2theta(eta, .lapar, earg = .eapar ) dapar.deta <- dtheta.deta(apar, .lapar, earg = .eapar ) y1 <- y[, 1] y2 <- y[, 2] de3 <- deriv3(~ (log(1 - apar+ (2 * apar*y1*y2/(1-apar*(1-y1)*(1-y2)))) - 2 * log(1 - apar*(1-y1)*(1-y2))) , name = "apar", hessian = FALSE) eval.de3 <- eval(de3) dl.dapar <- attr(eval.de3, "gradient") c(w) * dl.dapar 
* dapar.deta
    }), list( .lapar = lapar, .eapar = eapar ))),
    weight = eval(substitute(expression({
      sd3 <- deriv3(~ (log(1 - apar + (2 * apar * y1sim * y2sim /
                       (1 - apar * (1 - y1sim) * (1-y2sim)))) -
                       2 * log(1-apar*(1-y1sim)*(1-y2sim))),
                    name = "apar", hessian = FALSE)
      run.var <- 0
      for (ii in 1:( .nsimEIM )) {
        ysim <- rbiamhcop(n, apar = apar)
        y1sim <- ysim[, 1]
        y2sim <- ysim[, 2]  # Second column of the simulated response
        eval.sd3 <- eval(sd3)
        dl.dapar <- attr(eval.sd3, "gradient")
        rm(ysim, y1sim, y2sim)
        temp3 <- dl.dapar
        run.var <- ((ii - 1) * run.var + temp3^2) / ii
      }
      wz <- if (intercept.only)
        matrix(colMeans(cbind(run.var)), n, dimm(M), byrow = TRUE) else
        cbind(run.var)
      wz <- wz * dapar.deta^2
      c(w) * wz
    }), list( .lapar = lapar, .eapar = eapar, .nsimEIM = nsimEIM ))))
}


dbinorm <- function(x1, x2, mean1 = 0, mean2 = 0,
                    var1 = 1, var2 = 1, cov12 = 0, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)
  sd1 <- sqrt(var1)
  sd2 <- sqrt(var2)
  rho <- cov12 / (sd1 * sd2)
  temp5 <- 1 - rho^2
  zedd1 <- (x1 - mean1) / sd1
  zedd2 <- (x2 - mean2) / sd2
  logpdf <- -log(2 * pi) - log(sd1) - log(sd2) - 0.5 * log1p(-rho^2) +
            -(0.5 / temp5) * (zedd1^2 + (-2 * rho * zedd1 + zedd2) * zedd2)
  logpdf[is.infinite(x1) | is.infinite(x2)] <- log(0)  # 20141216 KaiH
  if (log.arg) logpdf else exp(logpdf)
}


rbinorm <- function(n, mean1 = 0, mean2 = 0,
                    var1 = 1, var2 = 1, cov12 = 0) {
  Y1 <- rnorm(n)
  Y2 <- rnorm(n)
  X1 <- sqrt(var1) * Y1 + mean1
  delta <- sqrt(var2 - (cov12^2) / var1)
  X2 <- cov12 * Y1 / sqrt(var1) + delta * Y2 + mean2
  ans <- cbind(X1, X2)
  ans[is.na(delta), ] <- NA
  ans
}


binormal <- function(lmean1 = "identitylink", lmean2 = "identitylink",
                     lsd1 = "loge", lsd2 = "loge", lrho = "rhobit",
                     imean1 = NULL, imean2 = NULL,
                     isd1 = NULL, isd2 = NULL, irho = NULL,
                     imethod = 1, eq.mean = FALSE, eq.sd = FALSE,
                     zero = c("sd", "rho")) {
  lmean1 <- as.list(substitute(lmean1))
  emean1 <- link2list(lmean1)
  lmean1 <- attr(emean1, "function.name")
  lmean2 <- as.list(substitute(lmean2))
  emean2 <- link2list(lmean2)
  lmean2 <- attr(emean2, "function.name")
  lsd1 <- as.list(substitute(lsd1))
  esd1 <- link2list(lsd1)
  lsd1 <- attr(esd1, "function.name")
  lsd2 <- as.list(substitute(lsd2))
  esd2 <- link2list(lsd2)
  lsd2 <- attr(esd2, "function.name")
  lrho <- as.list(substitute(lrho))
  erho <- link2list(lrho)
  lrho <- attr(erho, "function.name")
  trivial1 <- is.logical(eq.mean) && length(eq.mean) == 1 && !eq.mean
  trivial2 <- is.logical(eq.sd ) && length(eq.sd ) == 1 && !eq.sd
  if (!is.Numeric(imethod, length.arg = 1,
                  integer.valued = TRUE, positive = TRUE) || imethod > 2)
    stop("argument 'imethod' must be 1 or 2")
  new("vglmff",
  blurb = c("Bivariate normal distribution\n",
            "Links: ",
            namesof("mean1", lmean1, earg = emean1 ), ", ",
            namesof("mean2", lmean2, earg = emean2 ), ", ",
            namesof("sd1", lsd1, earg = esd1 ), ", ",
            namesof("sd2", lsd2, earg = esd2 ), ", ",
            namesof("rho", lrho, earg = erho )),
  constraints = eval(substitute(expression({
    constraints.orig <- constraints
    M1 <- 5
    NOS <- M / M1
    cm1.m <- cmk.m <- kronecker(diag(NOS), rbind(diag(2), matrix(0, 3, 2)))
    con.m <- cm.VGAM(kronecker(diag(NOS), rbind(1, 1, 0, 0, 0)),
                     x = x, bool = .eq.mean ,
                     # constraints = constraints.orig,
                     apply.int = TRUE,
                     cm.default = cmk.m,
                     cm.intercept.default = cm1.m)
    cm1.s <- cmk.s <- kronecker(diag(NOS), rbind(matrix(0, 2, 2), diag(2),
                                                 matrix(0, 1, 2)))
    con.s <- cm.VGAM(kronecker(diag(NOS), rbind(0, 0, 1, 1, 0)),
                     x = x, bool = .eq.sd ,
                     # constraints = constraints.orig,
                     apply.int = TRUE,
                     cm.default = cmk.s,
                     cm.intercept.default = cm1.s)
    con.use <- con.m
    for
(klocal in seq_along(con.m)) { con.use[[klocal]] <- cbind(con.m[[klocal]], con.s[[klocal]], kronecker(matrix(1, NOS, 1), diag(5)[, 5])) } constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 5) }), list( .zero = zero, .eq.sd = eq.sd, .eq.mean = eq.mean ))), infos = eval(substitute(function(...) { list(M1 = 5, Q1 = 2, expected = TRUE, multipleResponses = FALSE, parameters.names = c("mean1", "mean2", "sd1", "sd2", "rho"), eq.mean = .eq.mean , eq.sd = .eq.sd , zero = .zero ) }, list( .zero = zero, .eq.mean = eq.mean, .eq.sd = eq.sd ))), initialize = eval(substitute(expression({ Q1 <- 2 temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = Q1, ncol.y.min = Q1, out.wy = TRUE, colsyperw = Q1, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c( namesof("mean1", .lmean1 , earg = .emean1 , short = TRUE), namesof("mean2", .lmean2 , earg = .emean2 , short = TRUE), namesof("sd1", .lsd1 , earg = .esd1 , short = TRUE), namesof("sd2", .lsd2 , earg = .esd2 , short = TRUE), namesof("rho", .lrho , earg = .erho , short = TRUE)) extra$colnames.y <- colnames(y) if (!length(etastart)) { imean1 <- rep_len(if (length( .imean1 )) .imean1 else weighted.mean(y[, 1], w = w), n) imean2 <- rep_len(if (length( .imean2 )) .imean2 else weighted.mean(y[, 2], w = w), n) isd1 <- rep_len(if (length( .isd1 )) .isd1 else sd(y[, 1]), n) isd2 <- rep_len(if (length( .isd2 )) .isd2 else sd(y[, 2]), n) irho <- rep_len(if (length( .irho )) .irho else cor(y[, 1], y[,2]),n) if ( .imethod == 2) { imean1 <- abs(imean1) + 0.01 imean2 <- abs(imean2) + 0.01 } etastart <- cbind(theta2eta(imean1, .lmean1 , earg = .emean1 ), theta2eta(imean2, .lmean2 , earg = .emean2 ), theta2eta(isd1, .lsd1 , earg = .esd1 ), theta2eta(isd2, .lsd2 , earg = .esd2 ), theta2eta(irho, .lrho , earg = .erho )) } }), list( .lmean1 = lmean1, .lmean2 = lmean2, .emean1 = emean1, .emean2 = emean2, .lsd1 = lsd1 , .lsd2 = lsd2 , .lrho = lrho, .esd1 = esd1 , .esd2 = esd2 , .erho = erho, .imethod = imethod, .imean1 = imean1, .imean2 = imean2, .isd1 = isd1, .isd2 = isd2, .irho = irho ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- ncol(eta) / c(M1 = 5) mean1 <- eta2theta(eta[, 1], .lmean1 , earg = .emean1 ) mean2 <- eta2theta(eta[, 2], .lmean2 , earg = .emean2 ) fv.mat <- cbind(mean1, mean2) label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = NOS) } , list( .lmean1 = lmean1, .lmean2 = lmean2, .emean1 = emean1, .emean2 = emean2, .lsd1 = lsd1 , .lsd2 = lsd2 , .lrho = lrho, .esd1 = esd1 , .esd2 = esd2 , .erho = erho ))), last = eval(substitute(expression({ misc$link <- c("mean1" = .lmean1 , "mean2" = .lmean2 , "sd1" = .lsd1 , "sd2" = .lsd2 , "rho" = .lrho ) misc$earg <- list("mean1" = .emean1 , "mean2" = .emean2 , "sd1" = .esd1 , "sd2" = .esd2 , "rho" = .erho ) misc$expected <- TRUE misc$multipleResponses <- FALSE }) , list( .lmean1 = lmean1, .lmean2 = lmean2, .emean1 = emean1, .emean2 = emean2, .lsd1 = lsd1 , .lsd2 = lsd2 , .lrho = lrho, .esd1 = esd1 , .esd2 = esd2 , .erho = erho ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { mean1 <- eta2theta(eta[, 1], .lmean1 , earg = .emean1 ) mean2 <- eta2theta(eta[, 2], .lmean2 , earg = .emean2 ) sd1 <- eta2theta(eta[, 3], .lsd1 , earg = .esd1 ) sd2 <- eta2theta(eta[, 4], .lsd2 , earg = .esd2 ) Rho <- eta2theta(eta[, 5], .lrho , earg = .erho ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dbinorm(x1 = y[, 1], x2 = y[, 2], 
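                ## The five working parameters (mean1, mean2, sd1, sd2,
                ## rho) are mapped onto dbinorm()'s moment
                ## parameterization here: var1 = sd1^2, var2 = sd2^2,
                ## and cov12 = rho * sd1 * sd2.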
mean1 = mean1, mean2 = mean2, var1 = sd1^2, var2 = sd2^2, cov12 = Rho * sd1 * sd2, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } } , list( .lmean1 = lmean1, .lmean2 = lmean2, .emean1 = emean1, .emean2 = emean2, .lsd1 = lsd1 , .lsd2 = lsd2 , .lrho = lrho, .esd1 = esd1 , .esd2 = esd2 , .erho = erho, .imethod = imethod ))), vfamily = c("binormal"), validparams = eval(substitute(function(eta, y, extra = NULL) { mean1 <- eta2theta(eta[, 1], .lmean1, earg = .emean1) mean2 <- eta2theta(eta[, 2], .lmean2, earg = .emean2) sd1 <- eta2theta(eta[, 3], .lsd1 , earg = .esd1 ) sd2 <- eta2theta(eta[, 4], .lsd2 , earg = .esd2 ) Rho <- eta2theta(eta[, 5], .lrho , earg = .erho ) okay1 <- all(is.finite(mean1)) && all(is.finite(mean2)) && all(is.finite(sd1 )) && all(0 < sd1) && all(is.finite(sd2 )) && all(0 < sd2) && all(is.finite(Rho )) && all(abs(Rho) < 1) okay1 } , list( .lmean1 = lmean1, .lmean2 = lmean2, .emean1 = emean1, .emean2 = emean2, .lsd1 = lsd1 , .lsd2 = lsd2 , .lrho = lrho, .esd1 = esd1 , .esd2 = esd2 , .erho = erho, .imethod = imethod ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) mean1 <- eta2theta(eta[, 1], .lmean1 , earg = .emean1 ) mean2 <- eta2theta(eta[, 2], .lmean2 , earg = .emean2 ) sd1 <- eta2theta(eta[, 3], .lsd1 , earg = .esd1 ) sd2 <- eta2theta(eta[, 4], .lsd2 , earg = .esd2 ) Rho <- eta2theta(eta[, 5], .lrho , earg = .erho ) rbinorm(nsim * length(sd1), mean1 = mean1, mean2 = mean2, var1 = sd1^2, var2 = sd2^2, cov12 = Rho * sd1 * sd2) } , list( .lmean1 = lmean1, .lmean2 = lmean2, .emean1 = emean1, .emean2 = emean2, .lsd1 = lsd1 , .lsd2 = lsd2 , .lrho = lrho, .esd1 = esd1 , .esd2 = esd2 , .erho = erho ))), deriv = eval(substitute(expression({ mean1 <- eta2theta(eta[, 1], .lmean1, earg = .emean1) mean2 <- eta2theta(eta[, 2], .lmean2, earg = .emean2) sd1 <- eta2theta(eta[, 3], .lsd1 , earg = .esd1 ) sd2 <- eta2theta(eta[, 4], .lsd2 , earg = .esd2 ) Rho <- eta2theta(eta[, 5], .lrho , earg = .erho ) zedd1 <- (y[, 1] - mean1) / sd1 zedd2 <- (y[, 2] - mean2) / sd2 temp5 <- 1 - Rho^2 SigmaInv <- matrix(0, n, dimm(2)) SigmaInv[, iam(1, 1, M = 2)] <- 1 / ((sd1^2) * temp5) SigmaInv[, iam(2, 2, M = 2)] <- 1 / ((sd2^2) * temp5) SigmaInv[, iam(1, 2, M = 2)] <- -Rho / (sd1 * sd2 * temp5) dl.dmeans <- mux22(t(SigmaInv), y - cbind(mean1, mean2), M = 2, as.matrix = TRUE) dl.dsd1 <- -1 / sd1 + zedd1 * (zedd1 - Rho * zedd2) / (sd1 * temp5) dl.dsd2 <- -1 / sd2 + zedd2 * (zedd2 - Rho * zedd1) / (sd2 * temp5) dl.drho <- -Rho * (zedd1^2 - 2 * Rho * zedd1 * zedd2 + zedd2^2) / temp5^2 + zedd1 * zedd2 / temp5 + Rho / temp5 dmean1.deta <- dtheta.deta(mean1, .lmean1) dmean2.deta <- dtheta.deta(mean2, .lmean2) dsd1.deta <- dtheta.deta(sd1 , .lsd1 ) dsd2.deta <- dtheta.deta(sd2 , .lsd2 ) drho.deta <- dtheta.deta(Rho , .lrho ) dthetas.detas <- cbind(dmean1.deta, dmean2.deta, dsd1.deta, dsd2.deta, drho.deta) c(w) * cbind(dl.dmeans[, 1], dl.dmeans[, 2], dl.dsd1, dl.dsd2, dl.drho) * dthetas.detas }), list( .lmean1 = lmean1, .lmean2 = lmean2, .emean1 = emean1, .emean2 = emean2, .lsd1 = lsd1 , .lsd2 = lsd2 , .lrho = lrho, .esd1 = esd1 , .esd2 = esd2 , .erho = erho, .imethod = imethod ))), weight = eval(substitute(expression({ wz <- matrix(0.0, n, dimm(M)) wz[, iam(1, 1, M)] <- SigmaInv[, iam(1, 1, M = 2)] wz[, iam(2, 2, M)] <- SigmaInv[, iam(2, 2, M = 2)] wz[, iam(1, 2, M)] <- SigmaInv[, iam(1, 2, M = 2)] wz[, iam(3, 3, M)] 
<- (1 + 1 / temp5) / sd1^2 wz[, iam(4, 4, M)] <- (1 + 1 / temp5) / sd2^2 wz[, iam(3, 4, M)] <- -(Rho^2) / (temp5 * sd1 * sd2) wz[, iam(5, 5, M)] <- (1 + Rho^2) / temp5^2 wz[, iam(3, 5, M)] <- -Rho / (sd1 * temp5) wz[, iam(4, 5, M)] <- -Rho / (sd2 * temp5) for (ilocal in 1:M) for (jlocal in ilocal:M) wz[, iam(ilocal, jlocal, M)] <- wz[, iam(ilocal, jlocal, M)] * dthetas.detas[, ilocal] * dthetas.detas[, jlocal] c(w) * wz }), list( .lmean1 = lmean1, .lmean2 = lmean2, .emean1 = emean1, .emean2 = emean2, .lsd1 = lsd1 , .lsd2 = lsd2 , .lrho = lrho, .esd1 = esd1 , .esd2 = esd2 , .erho = erho, .imethod = imethod )))) } gumbelI <- function(la = "identitylink", earg = list(), ia = NULL, imethod = 1) { la <- as.list(substitute(la)) earg <- link2list(la) la <- attr(earg, "function.name") if (length(ia) && !is.Numeric(ia, length.arg = 1)) stop("'ia' must be a single number") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2.5) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Gumbel's Type I Bivariate Distribution\n", "Links: ", namesof("a", la, earg = earg )), initialize = eval(substitute(expression({ if (!is.matrix(y) || ncol(y) != 2) stop("the response must be a 2 column matrix") if (any(y < 0)) stop("the response must have non-negative values only") extra$colnames.y <- colnames(y) predictors.names <- c(namesof("a", .la, earg = .earg , short = TRUE)) if (!length(etastart)) { ainit <- if (length( .ia )) rep_len( .ia , n) else { mean1 <- if ( .imethod == 1) median(y[,1]) else mean(y[,1]) mean2 <- if ( .imethod == 1) median(y[,2]) else mean(y[,2]) Finit <- 0.01 + mean(y[,1] <= mean1 & y[,2] <= mean2) (log(Finit+expm1(-mean1)+exp(-mean2))+mean1+mean2)/(mean1*mean2) } etastart <- theta2eta(rep_len(ainit, n), .la , earg = .earg ) } }), list( .ia=ia, .la = la, .earg = earg, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- NCOL(eta) / c(M1 = 1) Q1 <- 2 fv.mat <- matrix(1, NROW(eta), NOS * Q1) label.cols.y(fv.mat, colnames.y = extra$colnames.y, NOS = NOS) }, list( .la = la ))), last = eval(substitute(expression({ misc$link <- c("a" = .la ) misc$earg <- list("a" = .earg ) misc$expected <- FALSE misc$pooled.weight <- pooled.weight }), list( .la = la, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { alpha <- eta2theta(eta, .la, earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { denom <- (alpha*y[,1] - 1) * (alpha*y[,2] - 1) + alpha mytolerance <- .Machine$double.xmin bad <- (denom <= mytolerance) # Range violation if (any(bad)) { cat("There are some range violations in @deriv\n") flush.console() denom[bad] <- 2 * mytolerance } ll.elts <- c(w) * (-y[,1] - y[,2] + alpha*y[,1]*y[,2] + log(denom)) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .la = la, .earg = earg ))), vfamily = c("gumbelI"), validparams = eval(substitute(function(eta, y, extra = NULL) { alpha <- eta2theta(eta, .la , earg = .earg ) okay1 <- all(is.finite(alpha)) okay1 } , list( .la = la, .earg = earg ))), deriv = eval(substitute(expression({ alpha <- eta2theta(eta, .la, earg = .earg ) numerator <- (alpha*y[,1] - 1)*y[,2] + (alpha*y[,2] - 1)*y[,1] + 1 denom <- (alpha*y[,1] - 1) * (alpha*y[,2] - 1) + alpha denom <- abs(denom) dl.dalpha <- numerator / denom + y[,1]*y[,2] dalpha.deta <- dtheta.deta(alpha, .la, earg = .earg ) c(w) * cbind(dl.dalpha * dalpha.deta) }), list( .la = la, .earg = earg ))), weight = 
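  ## gumbelI uses observed (not expected) information: 'wz' below is a
  ## Newton-Raphson-type working weight (misc$expected is FALSE), and
  ## for intercept-only models the weights are pooled over observations
  ## (pooled.weight <- TRUE).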
eval(substitute(expression({ d2l.dalpha2 <- (numerator/denom)^2 - 2*y[,1]*y[,2] / denom d2alpha.deta2 <- d2theta.deta2(alpha, .la, earg = .earg ) wz <- w * (dalpha.deta^2 * d2l.dalpha2 - d2alpha.deta2 * dl.dalpha) if (TRUE && intercept.only) { wz <- cbind(wz) sumw <- sum(w) for (iii in 1:ncol(wz)) wz[,iii] <- sum(wz[,iii]) / sumw pooled.weight <- TRUE wz <- c(w) * wz # Put back the weights } else pooled.weight <- FALSE wz }), list( .la = la, .earg = earg )))) } kendall.tau <- function(x, y, exact = FALSE, max.n = 3000) { if ((N <- length(x)) != length(y)) stop("arguments 'x' and 'y' do not have equal lengths") NN <- if (!exact && N > max.n) { cindex <- sample.int(n = N, size = max.n, replace = FALSE) x <- x[cindex] y <- y[cindex] max.n } else { N } ans3 <- c( .C("VGAM_C_kend_tau", as.double(x), as.double(y), as.integer(NN), ans = double(3), NAOK = TRUE)$ans) con <- ans3[1] + ans3[2] / 2 # Ties put half and half dis <- ans3[3] + ans3[2] / 2 (con - dis) / (con + dis) } if (FALSE) kendall.tau <- function(x, y, exact = TRUE, max.n = 1000) { if ((N <- length(x)) != length(y)) stop("arguments 'x' and 'y' do not have equal lengths") index <- iam(NA, NA, M = N, both = TRUE) index$row.index <- index$row.index[-(1:N)] index$col.index <- index$col.index[-(1:N)] NN <- if (!exact && N > max.n) { cindex <- sample.int(n = N, size = max.n, replace = FALSE) index$row.index <- index$row.index[cindex] index$col.index <- index$col.index[cindex] max.n } else{ choose(N, 2) } con <- sum((x[index$row.index] - x[index$col.index]) * (y[index$row.index] - y[index$col.index]) > 0) dis <- NN - con (con - dis) / (con + dis) } dbistudenttcop <- function(x1, x2, df, rho = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) u1 <- qt(x1, df = df) u2 <- qt(x2, df = df) logdensity <- -(df/2 + 1) * log1p( (u1^2 + u2^2 - 2 * rho * u1 * u2) / (df * (1 - rho^2))) - log(2*pi) - 0.5 * log1p(-rho^2) - dt(u1, df = df, log = TRUE) - dt(u2, df = df, log = TRUE) if (log.arg) logdensity else exp(logdensity) } VGAM/R/vgam.R0000644000176200001440000003244613135276760012305 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. vgam <- function(formula, family = stop("argument 'family' needs to be assigned"), data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = vgam.control(...), offset = NULL, method = "vgam.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = list(), form2 = NULL, # Added 20130730 qr.arg = FALSE, smart = TRUE, ...) 
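## A minimal usage sketch of vgam() (not run); the data frame 'hdata'
## and the choice df = 3 are hypothetical, for illustration only:
##   hdata <- data.frame(x2 = runif(100))
##   hdata$y <- rpois(100, lambda = exp(1 + sin(2 * hdata$x2)))
##   fit <- vgam(y ~ s(x2, df = 3), family = poissonff, data = hdata)
##   plot(fit, se = TRUE)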
{ dataname <- as.character(substitute(data)) # "list" if no data= function.name <- "vgam" ocall <- match.call() if (smart) setup.smart("write") if (missing(data)) data <- environment(formula) mtsave <- terms(formula, specials = c("s", "sm.os", "sm.ps"), data = data) mf <- match.call(expand.dots = FALSE) m <- match(c("formula", "data", "subset", "weights", "na.action", "etastart", "mustart", "offset"), names(mf), 0) mf <- mf[c(1, m)] mf$drop.unused.levels <- TRUE mf[[1]] <- as.name("model.frame") mf <- eval(mf, parent.frame()) switch(method, model.frame = return(mf), vgam.fit = 1, stop("invalid 'method': ", method)) mt <- attr(mf, "terms") xlev <- .getXlevels(mt, mf) y <- model.response(mf, "any") # model.extract(mf, "response") x <- if (!is.empty.model(mt)) model.matrix(mt, mf, contrasts) else matrix(, NROW(y), 0) attr(x, "assign") <- attrassigndefault(x, mt) if (!is.null(form2)) { if (!is.null(subset)) stop("argument 'subset' cannot be used when ", "argument 'form2' is used") retlist <- shadowvgam(formula = form2, family = family, data = data, na.action = na.action, control = vgam.control(...), method = method, model = model, x.arg = x.arg, y.arg = y.arg, contrasts = contrasts, constraints = constraints, extra = extra, qr.arg = qr.arg) Ym2 <- retlist$Ym2 Xm2 <- retlist$Xm2 if (length(Ym2)) { if (NROW(Ym2) != NROW(y)) stop("number of rows of 'y' and 'Ym2' are unequal") } if (length(Xm2)) { if (NROW(Xm2) != NROW(x)) stop("number of rows of 'x' and 'Xm2' are unequal") } } else { Xm2 <- Ym2 <- NULL } offset <- model.offset(mf) if (is.null(offset)) offset <- 0 # yyy ??? mf2 <- mf if (!missing(subset)) { mf2$subset <- NULL mf2 <- eval(mf2, parent.frame()) # mf2 is the full data frame. spars2 <- lapply(mf2, attr, "spar") dfs2 <- lapply(mf2, attr, "df") sx2 <- lapply(mf2, attr, "s.xargument") for (ii in seq_along(mf)) { if (length(sx2[[ii]])) { attr(mf[[ii]], "spar") <- spars2[[ii]] attr(mf[[ii]], "dfs2") <- dfs2[[ii]] attr(mf[[ii]], "s.xargument") <- sx2[[ii]] } } rm(mf2) } w <- model.weights(mf) if (!length(w)) { w <- rep_len(1, nrow(mf)) } else if (NCOL(w) == 1 && any(w < 0)) stop("negative weights not allowed") if (is.character(family)) family <- get(family) if (is.function(family)) family <- family() if (!inherits(family, "vglmff")) { stop("'family = ", family, "' is not a VGAM family function") } eval(vcontrol.expression) n <- dim(x)[1] if (length(slot(family, "first"))) eval(slot(family, "first")) aa <- attributes(mtsave) smoothers <- aa$specials mgcv.sm.os <- length(smoothers$sm.os) > 0 mgcv.sm.ps <- length(smoothers$sm.ps) > 0 mgcv.sm.PS <- length(smoothers$sm.PS) > 0 any.sm.os.terms <- mgcv.sm.os any.sm.ps.terms <- mgcv.sm.ps || mgcv.sm.PS mgcv.s <- length(smoothers$s) > 0 if ((any.sm.os.terms || any.sm.ps.terms) && mgcv.s) stop("cannot include both s() and any of sm.os() or ", "sm.ps() (or sm.PS()) terms in the formula") if (any.sm.os.terms && any.sm.ps.terms) stop("cannot include both sm.os() and ", "sm.ps() (or sm.PS()) terms in the formula") nonparametric <- length(smoothers$s) > 0 if (nonparametric) { ff <- apply(aa$factors[smoothers[["s"]],,drop = FALSE], 2, any) smoothers[["s"]] <- if (any(ff)) seq(along = ff)[aa$order == 1 & ff] else NULL smooth.labels <- aa$term.labels[unlist(smoothers)] } else { function.name <- "vglm" # This is effectively so } are.sm.os.terms <- length(smoothers$sm.os) > 0 are.sm.ps.terms <- (length(smoothers$sm.ps) + length(smoothers$sm.PS)) > 0 if (are.sm.os.terms || are.sm.ps.terms) { control$criterion <- "coefficients" # Overwrite if necessary if 
(length(smoothers$sm.os) > 0) { ff.sm.os <- apply(aa$factors[smoothers[["sm.os"]],,drop = FALSE], 2, any) smoothers[["sm.os"]] <- if (any(ff.sm.os)) seq(along = ff.sm.os)[aa$order == 1 & ff.sm.os] else NULL smooth.labels <- aa$term.labels[unlist(smoothers)] } if (length(smoothers$sm.ps) > 0) { ff.sm.ps <- apply(aa$factors[smoothers[["sm.ps"]],,drop = FALSE], 2, any) smoothers[["sm.ps"]] <- if (any(ff.sm.ps)) seq(along = ff.sm.ps)[aa$order == 1 & ff.sm.ps] else NULL smooth.labels <- aa$term.labels[unlist(smoothers)] } assignx <- attr(x, "assign") which.X.sm.osps <- assignx[smooth.labels] Data <- mf[, names(which.X.sm.osps), drop = FALSE] attr(Data, "class") <- NULL S.arg <- lapply(Data, attr, "S.arg") sparlist <- lapply(Data, attr, "spar") ridge.adj <- lapply(Data, attr, "ridge.adj") fixspar <- lapply(Data, attr, "fixspar") ps.int <- lapply(Data, attr, "ps.int") # FYI only; for sm.ps() knots <- lapply(Data, attr, "knots") # FYI only; for sm.os() term.labels <- aa$term.labels } sm.osps.list <- if (any.sm.os.terms || any.sm.ps.terms) list(indexterms = if (any.sm.os.terms) ff.sm.os else ff.sm.ps, intercept = aa$intercept, which.X.sm.osps = which.X.sm.osps, S.arg = S.arg, sparlist = sparlist, ridge.adj = ridge.adj, term.labels = term.labels, fixspar = fixspar, orig.fixspar = fixspar, # For posterity ps.int = ps.int, # FYI only knots = knots, # FYI only assignx = assignx) else NULL fit <- vgam.fit(x = x, y = y, w = w, mf = mf, Xm2 = Xm2, Ym2 = Ym2, # Added 20130730 etastart = etastart, mustart = mustart, coefstart = coefstart, offset = offset, family = family, control = control, constraints = constraints, extra = extra, qr.arg = qr.arg, Terms = mtsave, nonparametric = nonparametric, smooth.labels = smooth.labels, function.name = function.name, sm.osps.list = sm.osps.list, ...) 
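  ## Post-fitting clean-up: negative nonlinear degrees of freedom are
  ## clamped to zero, the bulky 'smooth.frame' component is stripped,
  ## and the slots of the returned "vgam" (or "pvgam") object are
  ## assembled below.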
if (is.Numeric(fit$nl.df) && any(fit$nl.df < 0)) { fit$nl.df[fit$nl.df < 0] <- 0 } if (!is.null(fit[["smooth.frame"]])) { fit <- fit[-1] # Strip off smooth.frame } else { } fit$smomat <- NULL # Not needed fit$call <- ocall if (model) fit$model <- mf if (!x.arg) fit$x <- NULL if (!y.arg) fit$y <- NULL if (nonparametric) fit$misc$smooth.labels <- smooth.labels fit$misc$dataname <- dataname if (smart) fit$smart.prediction <- get.smart.prediction() answer <- new( if (any.sm.os.terms || any.sm.ps.terms) "pvgam" else "vgam", "assign" = attr(x, "assign"), "call" = fit$call, "coefficients" = fit$coefficients, "constraints" = fit$constraints, "criterion" = fit$crit.list, "df.residual" = fit$df.residual, "dispersion" = 1, "family" = fit$family, "misc" = fit$misc, "model" = if (model) mf else data.frame(), "R" = fit$R, "rank" = fit$rank, "residuals" = as.matrix(fit$residuals), "ResSS" = fit$ResSS, "smart.prediction" = as.list(fit$smart.prediction), "terms" = list(terms = fit$terms)) if (!smart) answer@smart.prediction <- list(smart.arg = FALSE) if (qr.arg) { class(fit$qr) <- "list" slot(answer, "qr") <- fit$qr } if (length(attr(x, "contrasts"))) slot(answer, "contrasts") <- attr(x, "contrasts") if (length(fit$fitted.values)) slot(answer, "fitted.values") <- as.matrix(fit$fitted.values) slot(answer, "na.action") <- if (length(aaa <- attr(mf, "na.action"))) list(aaa) else list() if (length(offset)) slot(answer, "offset") <- as.matrix(offset) if (length(fit$weights)) slot(answer, "weights") <- as.matrix(fit$weights) if (x.arg) slot(answer, "x") <- x # The 'small' design matrix if (length(fit$misc$Xvlm.aug)) { slot(answer, "ospsslot") <- list(Xvlm.aug = fit$misc$Xvlm.aug, sm.osps.list = fit$misc$sm.osps.list, magicfit = fit$misc$magicfit, iter.outer = fit$misc$iter.outer) fit$misc$Xvlm.aug <- NULL fit$misc$sm.osps.list <- NULL fit$misc$magicfit <- NULL fit$misc$iter.outer <- NULL } if (x.arg && length(Xm2)) slot(answer, "Xm2") <- Xm2 # The second (lm) design matrix if (y.arg && length(Ym2)) slot(answer, "Ym2") <- as.matrix(Ym2) # The second response if (!is.null(form2)) slot(answer, "callXm2") <- retlist$call answer@misc$formula <- formula answer@misc$form2 <- form2 if (length(xlev)) slot(answer, "xlevels") <- xlev if (y.arg) slot(answer, "y") <- as.matrix(fit$y) answer@misc$formula <- formula slot(answer, "control") <- fit$control if (length(fit$extra)) { slot(answer, "extra") <- fit$extra } slot(answer, "iter") <- fit$iter slot(answer, "post") <- fit$post fit$predictors <- as.matrix(fit$predictors) # Must be a matrix dimnames(fit$predictors) <- list(dimnames(fit$predictors)[[1]], fit$misc$predictors.names) slot(answer, "predictors") <- fit$predictors if (length(fit$prior.weights)) slot(answer, "prior.weights") <- as.matrix(fit$prior.weights) if (nonparametric) { slot(answer, "Bspline") <- fit$Bspline slot(answer, "nl.chisq") <- fit$nl.chisq if (is.Numeric(fit$nl.df)) slot(answer, "nl.df") <- fit$nl.df slot(answer, "spar") <- fit$spar slot(answer, "s.xargument") <- fit$s.xargument if (length(fit$varmat)) { slot(answer, "var") <- fit$varmat } } if (length(fit$effects)) slot(answer, "effects") <- fit$effects if (nonparametric && is.buggy.vlm(answer)) { warning("some s() terms have constraint matrices that have columns", " which are not orthogonal;", " try using sm.os() or sm.ps() instead of s().") } else { } answer } attr(vgam, "smart") <- TRUE shadowvgam <- function(formula, family, data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, 
control = vgam.control(...), offset = NULL, method = "vgam.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = list(), qr.arg = FALSE, ...) { dataname <- as.character(substitute(data)) # "list" if no data= function.name <- "shadowvgam" ocall <- match.call() if (missing(data)) data <- environment(formula) mf <- match.call(expand.dots = FALSE) m <- match(c("formula", "data", "subset", "weights", "na.action", "etastart", "mustart", "offset"), names(mf), 0) mf <- mf[c(1, m)] mf$drop.unused.levels <- TRUE mf[[1]] <- as.name("model.frame") mf <- eval(mf, parent.frame()) switch(method, model.frame = return(mf), vgam.fit = 1, stop("invalid 'method': ", method)) mt <- attr(mf, "terms") x <- y <- NULL xlev <- .getXlevels(mt, mf) y <- model.response(mf, "any") # model.extract(mf, "response") x <- if (!is.empty.model(mt)) model.matrix(mt, mf, contrasts) else matrix(, NROW(y), 0) attr(x, "assign") <- attrassigndefault(x, mt) list(Xm2 = x, Ym2 = y, call = ocall) } # shadowvgam is.buggy.vlm <- function(object, each.term = FALSE, ...) { Hk.list <- constraints(object) ncl <- names(Hk.list) TFvec <- rep_len(FALSE, length(ncl)) names(TFvec) <- ncl if (!is(object, "vgam")) { return(if (each.term) TFvec else any(TFvec)) } if (!length(object@nl.chisq)) { return(if (each.term) TFvec else any(TFvec)) } for (kay in seq_along(ncl)) { cmat <- Hk.list[[kay]] if (ncol(cmat) > 1 && substring(ncl[kay], 1, 2) == "s(") { CMat <- crossprod(cmat) # t(cmat) %*% cmat TFvec[kay] <- any(CMat[lower.tri(CMat)] != 0 | CMat[upper.tri(CMat)] != 0) } } if (each.term) TFvec else any(TFvec) } if (!isGeneric("is.buggy")) setGeneric("is.buggy", function(object, ...) standardGeneric("is.buggy"), package = "VGAM") setMethod("is.buggy", signature(object = "vlm"), function(object, ...) is.buggy.vlm(object, ...)) VGAM/R/rrvglm.control.q0000644000176200001440000001310113135276757014373 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. rrvglm.control <- function(Rank = 1, Algorithm = c("alternating", "derivative"), Corner = TRUE, Uncorrelated.latvar = FALSE, Wmat = NULL, Svd.arg = FALSE, Index.corner = if (length(str0)) head((1:1000)[-str0], Rank) else 1:Rank, Ainit = NULL, Alpha = 0.5, Bestof = 1, Cinit = NULL, Etamat.colmax = 10, sd.Ainit = 0.02, sd.Cinit = 0.02, str0 = NULL, noRRR = ~ 1, Norrr = NA, noWarning = FALSE, trace = FALSE, Use.Init.Poisson.QO = FALSE, checkwz = TRUE, Check.rank = TRUE, Check.cm.rank = TRUE, wzepsilon = .Machine$double.eps^0.75, ...) { if (length(Norrr) != 1 || !is.na(Norrr)) { warning("argument 'Norrr' has been replaced by 'noRRR'. 
", "Assigning the latter but using 'Norrr' will become an ", "error in the next VGAM version soon.") noRRR <- Norrr } if (mode(Algorithm) != "character" && mode(Algorithm) != "name") Algorithm <- as.character(substitute(Algorithm)) Algorithm <- match.arg(Algorithm, c("alternating", "derivative"))[1] if (Svd.arg) Corner <- FALSE if (!is.Numeric(Rank, positive = TRUE, length.arg = 1, integer.valued = TRUE)) stop("bad input for 'Rank'") if (!is.Numeric(Alpha, positive = TRUE, length.arg = 1) || Alpha > 1) stop("bad input for 'Alpha'") if (!is.Numeric(Bestof, positive = TRUE, length.arg = 1, integer.valued = TRUE)) stop("bad input for 'Bestof'") if (!is.Numeric(sd.Ainit, positive = TRUE, length.arg = 1)) stop("bad input for 'sd.Ainit'") if (!is.Numeric(sd.Cinit, positive = TRUE, length.arg = 1)) stop("bad input for 'sd.Cinit'") if (!is.Numeric(Etamat.colmax, positive = TRUE, length.arg = 1) || Etamat.colmax < Rank) stop("bad input for 'Etamat.colmax'") if (length(str0) && (any(round(str0) != str0) || any(str0 < 1))) stop("bad input for the argument 'str0'") Quadratic <- FALSE if (!Quadratic && Algorithm == "derivative" && !Corner) { dd <- "derivative algorithm only supports corner constraints" if (length(Wmat) || Uncorrelated.latvar || Svd.arg) stop(dd) warning(dd) Corner <- TRUE } if (Quadratic && Algorithm != "derivative") stop("Quadratic model can only be fitted using the derivative algorithm") if (Corner && (Svd.arg || Uncorrelated.latvar || length(Wmat))) stop("cannot have 'Corner = TRUE' and either 'Svd = TRUE' or ", "'Uncorrelated.latvar = TRUE' or Wmat") if (Corner && length(intersect(str0, Index.corner))) stop("cannot have arguments 'str0' and 'Index.corner' having ", "common values") if (length(Index.corner) != Rank) stop("length(Index.corner) != Rank") if (!is.logical(checkwz) || length(checkwz) != 1) stop("bad input for 'checkwz'") if (!is.Numeric(wzepsilon, length.arg = 1, positive = TRUE)) stop("bad input for 'wzepsilon'") if (class(noRRR) != "formula" && !is.null(noRRR)) stop("argument 'noRRR' should be a formula or a NULL") ans <- c(vglm.control( trace = trace, checkwz = checkwz, Check.rank = Check.rank, Check.cm.rank = Check.cm.rank, wzepsilon = wzepsilon, noWarning = noWarning, ...), switch(Algorithm, "alternating" = valt.control(...), "derivative" = rrvglm.optim.control(...)), list(Rank = Rank, Ainit = Ainit, Algorithm = Algorithm, Alpha = Alpha, Bestof = Bestof, Cinit = Cinit, Index.corner = Index.corner, noRRR = noRRR, Corner = Corner, Uncorrelated.latvar = Uncorrelated.latvar, Wmat = Wmat, OptimizeWrtC = TRUE, # OptimizeWrtC, Quadratic = FALSE, # A constant now, here. sd.Ainit = sd.Ainit, sd.Cinit = sd.Cinit, Etamat.colmax = Etamat.colmax, str0 = str0, Svd.arg = Svd.arg, Use.Init.Poisson.QO = Use.Init.Poisson.QO), if (Quadratic) qrrvglm.control(Rank = Rank, ...) else NULL) if (Quadratic && ans$I.tolerances) { ans$Svd.arg <- FALSE ans$Uncorrelated.latvar <- FALSE ans$Corner <- FALSE } ans$half.stepsizing <- FALSE # Turn it off ans } setClass("summary.rrvglm", representation("rrvglm", coef3 = "matrix", cov.unscaled = "matrix", correlation = "matrix", df = "numeric", pearson.resid = "matrix", sigma = "numeric")) setMethod("summary", "rrvglm", function(object, ...) 
summary.rrvglm(object, ...)) show.summary.rrvglm <- function(x, digits = NULL, quote = TRUE, prefix = "", signif.stars = NULL) { show.summary.vglm(x, digits = digits, quote = quote, prefix = prefix) invisible(x) NULL } setMethod("show", "summary.rrvglm", function(object) show.summary.rrvglm(x = object)) setMethod("coefficients", "summary.rrvglm", function(object, ...) object@coef3) setMethod("coef", "summary.rrvglm", function(object, ...) object@coef3) VGAM/R/family.basics.R0000644000176200001440000011770613135276757014110 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. subsetcol <- Select <- function( data = list(), prefix = "y", lhs = NULL, rhs = NULL, # Can be "0" to suppress an intercept, else "". rhs2 = NULL, # Can be "0" to suppress an intercept, else "". rhs3 = NULL, # Can be "0" to suppress an intercept, else "". as.character = FALSE, as.formula.arg = FALSE, tilde = TRUE, exclude = NULL, sort.arg = TRUE) { if (is.character(exclude)) if (any(nchar(prefix) == 0)) stop("bad input for argument 'exclude'") if (!is.logical(sort.arg) || length(sort.arg) != 1) stop("bad input for argument 'sort.arg'") col.names <- colnames(data) if (is.logical(prefix)) { index <- if (prefix) seq_along(col.names) else stop("cannot have 'prefix = FALSE'") } else { index <- NULL for (ii in seq_along(prefix)) { small.col.names <- substr(col.names, 1, nchar(prefix[ii])) index <- c(index, grep(prefix[ii], small.col.names)) } } temp.col.names <- col.names[index] if (length(exclude)) { exclude.index <- NULL for (ii in seq_along(exclude)) { exclude.index <- c(exclude.index, (seq_along(col.names))[exclude[ii] == col.names]) } exclude.index <- unique(sort(exclude.index)) index <- setdiff(index, exclude.index) temp.col.names <- col.names[index] } if (sort.arg) { ooo <- order(temp.col.names) index <- index[ooo] temp.col.names <- temp.col.names[ooo] } ltcn.positive <- (length(temp.col.names) > 0) if (as.formula.arg) { form.string <- paste0(ifelse(length(lhs), lhs, ""), ifelse(tilde, " ~ ", ""), if (ltcn.positive) paste(temp.col.names, collapse = " + ") else "", ifelse(ltcn.positive && length(rhs ), " + ", ""), ifelse(length(rhs ), rhs, ""), ifelse(length(rhs2), paste(" +", rhs2), ""), ifelse(length(rhs3), paste(" +", rhs3), "")) if (as.character) { form.string } else { as.formula(form.string) } } else { if (as.character) { paste0("cbind(", paste(temp.col.names, collapse = ", "), ")") } else { ans <- if (is.matrix(data)) data[, index] else if (is.list(data)) data[index] else stop("argument 'data' is neither a list or a matrix") if (length(ans)) { as.matrix(ans) } else { NULL } } } } if (FALSE) subsetc <- function(x, select, prefix = NULL, subset = TRUE, drop = FALSE, exclude = NULL, sort.arg = !is.null(prefix), as.character = FALSE) { if (!is.null(prefix)) { if (!missing(select)) warning("overwriting argument 'select' by something ", "using 'prefix'") select <- grepl(paste0("^", prefix), colnames(x)) } if (missing(select)) { vars <- TRUE } else { nl <- as.list(seq_along(x)) # as.list(1L:ncol(x)) names(nl) <- names(x) # colnames(x) vars <- eval(substitute(select), nl, parent.frame()) } ans <- x[subset & !is.na(subset), vars, drop = drop] if (sort.arg) { cna <- colnames(ans) ooo <- order(cna) ans <- ans[, ooo, drop = drop] } if (!is.null(exclude)) { cna <- colnames(ans) ooo <- match(exclude, cna) ans <- ans[, -ooo, drop = drop] } if (as.character) { cna <- colnames(ans) paste0("cbind(", paste0(cna, collapse = ", "), ")") } else { ans } } grid.search 
<- function(vov, objfun, y, x, w, extraargs = NULL, maximize = TRUE, abs.arg = FALSE, ret.objfun = FALSE, ...) { if (!is.vector(vov)) stop("argument 'vov' must be a vector") objvals <- vov for (ii in seq_along(vov)) objvals[ii] <- objfun(vov[ii], y = y, x = x, w = w, extraargs = extraargs, ...) try.this <- if (abs.arg) { if (maximize) vov[abs(objvals) == max(abs(objvals))] else vov[abs(objvals) == min(abs(objvals))] } else { if (maximize) vov[objvals == max(objvals)] else vov[objvals == min(objvals)] } if (!length(try.this)) stop("something has gone wrong!") ans <- if (length(try.this) == 1) try.this else sample(try.this, size = 1) myvec <- objvals[ans == vov] # Could be a vector if (ret.objfun) c(Value = ans, ObjFun = myvec[1]) else ans } grid.search2 <- function(vov1, vov2, objfun, y, x, w, extraargs = NULL, maximize = TRUE, # abs.arg = FALSE, ret.objfun = FALSE, ...) { if (!is.vector(vov1)) stop("argument 'vov1' must be a vector") if (!is.vector(vov2)) stop("argument 'vov2' must be a vector") allmat1 <- expand.grid(vov1 = as.vector(vov1), vov2 = as.vector(vov2)) objvals <- numeric(nrow(allmat1)) for (ii in seq_along(objvals)) objvals[ii] <- objfun(allmat1[ii, "vov1"], allmat1[ii, "vov2"], y = y, x = x, w = w, extraargs = extraargs, ...) ind5 <- if (maximize) which.max(objvals) else which.min(objvals) c(Value1 = allmat1[ind5, "vov1"], Value2 = allmat1[ind5, "vov2"], ObjFun = if (ret.objfun) objvals[ind5] else NULL) } grid.search3 <- function(vov1, vov2, vov3, objfun, y, x, w, extraargs = NULL, maximize = TRUE, # abs.arg = FALSE, ret.objfun = FALSE, ...) { if (!is.vector(vov1)) stop("argument 'vov1' must be a vector") if (!is.vector(vov2)) stop("argument 'vov2' must be a vector") if (!is.vector(vov3)) stop("argument 'vov3' must be a vector") allmat1 <- expand.grid(vov1 = as.vector(vov1), vov2 = as.vector(vov2), vov3 = as.vector(vov3)) objvals <- numeric(nrow(allmat1)) for (ii in seq_along(objvals)) objvals[ii] <- objfun(allmat1[ii, "vov1"], allmat1[ii, "vov2"], allmat1[ii, "vov3"], y = y, x = x, w = w, extraargs = extraargs, ...) ind5 <- if (maximize) which.max(objvals) else which.min(objvals) c(Value1 = allmat1[ind5, "vov1"], Value2 = allmat1[ind5, "vov2"], Value3 = allmat1[ind5, "vov3"], ObjFun = if (ret.objfun) objvals[ind5] else NULL) } grid.search4 <- function(vov1, vov2, vov3, vov4, objfun, y, x, w, extraargs = NULL, maximize = TRUE, # abs.arg = FALSE, ret.objfun = FALSE, ...) { if (!is.vector(vov1)) stop("argument 'vov1' must be a vector") if (!is.vector(vov2)) stop("argument 'vov2' must be a vector") if (!is.vector(vov3)) stop("argument 'vov3' must be a vector") if (!is.vector(vov4)) stop("argument 'vov4' must be a vector") allmat1 <- expand.grid(vov1 = as.vector(vov1), vov2 = as.vector(vov2), vov3 = as.vector(vov3), vov4 = as.vector(vov4)) objvals <- numeric(nrow(allmat1)) for (ii in seq_along(objvals)) objvals[ii] <- objfun(allmat1[ii, "vov1"], allmat1[ii, "vov2"], allmat1[ii, "vov3"], allmat1[ii, "vov4"], y = y, x = x, w = w, extraargs = extraargs, ...) 
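    ## Exhaustive search: every row of the expand.grid() lattice has now
    ## been scored; which.max()/which.min() selects the best 4-tuple of
    ## candidate values below.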
ind5 <- if (maximize) which.max(objvals) else which.min(objvals) c(Value1 = allmat1[ind5, "vov1"], Value2 = allmat1[ind5, "vov2"], Value3 = allmat1[ind5, "vov3"], Value4 = allmat1[ind5, "vov4"], ObjFun = if (ret.objfun) objvals[ind5] else NULL) } getind <- function(constraints, M, ncolx) { if (!length(constraints)) { constraints <- vector("list", ncolx) for (ii in 1:ncolx) constraints[[ii]] <- diag(M) } ans <- vector("list", M+1) names(ans) <- c(paste("eta", 1:M, sep = ""), "ncolX.vlm") temp2 <- matrix(unlist(constraints), nrow = M) for (kk in 1:M) { ansx <- NULL for (ii in seq_along(constraints)) { temp <- constraints[[ii]] isfox <- any(temp[kk, ] != 0) if (isfox) { ansx <- c(ansx, ii) } } ans[[kk]] <- list(xindex = ansx, X.vlmindex = (1:ncol(temp2))[temp2[kk,] != 0]) } ans[[M+1]] <- ncol(temp2) ans } cm.VGAM <- function(cm, x, bool, constraints, apply.int = FALSE, cm.default = diag(nrow(cm)), # 20121226 cm.intercept.default = diag(nrow(cm)) # 20121226 ) { if (is.null(bool)) return(NULL) if (!is.matrix(cm)) stop("argument 'cm' is not a matrix") M <- nrow(cm) asgn <- attr(x, "assign") if (is.null(asgn)) stop("the 'assign' attribute is missing from 'x'; this ", "may be due to some missing values") # 20100306 nasgn <- names(asgn) ninasgn <- nasgn[nasgn != "(Intercept)"] if (!length(constraints)) { constraints <- vector("list", length(nasgn)) for (ii in seq_along(nasgn)) { constraints[[ii]] <- cm.default # diag(M) } names(constraints) <- nasgn if (any(nasgn == "(Intercept)")) constraints[["(Intercept)"]] <- cm.intercept.default } if (!is.list(constraints)) stop("argument 'constraints' must be a list") if (length(constraints) != length(nasgn) || any(sort(names(constraints)) != sort(nasgn))) { cat("\nnames(constraints)\n") print(names(constraints) ) cat("\nnames(attr(x, 'assign'))\n") print( nasgn ) stop("The above do not match; 'constraints' is half-pie") } if (is.logical(bool)) { if (bool) { if (any(nasgn == "(Intercept)") && apply.int) constraints[["(Intercept)"]] <- cm if (length(ninasgn)) for (ii in ninasgn) constraints[[ii]] <- cm } else { return(constraints) } } else { tbool <- terms(bool) if (attr(tbool, "response")) { ii <- attr(tbool, "factors") default <- dimnames(ii)[[1]] default <- default[1] default <- if (is.null(default[1])) { t.or.f <- attr(tbool, "variables") t.or.f <- as.character( t.or.f ) if (t.or.f[1] == "list" && length(t.or.f) == 2 && (t.or.f[2] == "TRUE" || t.or.f[2] == "FALSE")) { t.or.f <- as.character( t.or.f[2] ) parse(text = t.or.f)[[1]] } else { stop("something gone awry") } } else { parse(text = default[1])[[1]] # Original } default <- as.logical(eval(default)) } else { default <- TRUE } tl <- attr(tbool, "term.labels") if (attr(tbool, "intercept")) tl <- c("(Intercept)", tl) for (ii in nasgn) { if ( default && any(tl == ii)) constraints[[ii]] <- cm if (!default && !any(tl == ii)) constraints[[ii]] <- cm } } constraints } cm.nointercept.VGAM <- function(constraints, x, nointercept, M) { asgn <- attr(x, "assign") nasgn <- names(asgn) if (is.null(constraints)) { constraints <- vector("list", length(nasgn)) # list() names(constraints) <- nasgn } if (!is.list(constraints)) stop("'constraints' must be a list") for (ii in seq_along(asgn)) constraints[[nasgn[ii]]] <- if (is.null(constraints[[nasgn[ii]]])) diag(M) else eval(constraints[[nasgn[ii]]]) if (is.null(nointercept)) return(constraints) if (!is.numeric(nointercept)) stop("'nointercept' must be numeric") nointercept <- unique(sort(nointercept)) if (length(nointercept) == 0 || length(nointercept) >= M) stop("too 
few or too many values") if (any(nointercept < 1 | nointercept > M)) stop("'nointercept' out of range") if (nasgn[1] != "(Intercept)" || M == 1) stop("Need an (Intercept) constraint matrix with M>1") if (!identical(constraints[["(Intercept)"]], diag(M))) warning("Constraint matrix of (Intercept) not diagonal") temp <- constraints[["(Intercept)"]] temp <- temp[, -nointercept, drop = FALSE] constraints[["(Intercept)"]] <- temp constraints } cm.zero.VGAM <- function(constraints, x, zero = NULL, M = 1, predictors.names, M1 = 1) { dotzero <- zero # Transition if (length(dotzero) == 1 && (dotzero == "" || is.na(dotzero))) dotzero <- NULL if (is.character(dotzero)) { which.numeric.all <- NULL for (ii in seq_along(dotzero)) { which.ones <- grep(dotzero[ii], predictors.names, fixed = TRUE) if (length(which.ones)) { which.numeric.all <- c(which.numeric.all, which.ones) } else { warning("some values of argument 'zero' are unmatched. Ignoring them") } } # for ii which.numeric <- unique(sort(which.numeric.all)) if (!length(which.numeric)) { warning("No values of argument 'zero' were matched.") which.numeric <- NULL } else if (length(which.numeric.all) > length(which.numeric)) { warning("There were redundant values of argument 'zero'.") } dotzero <- which.numeric } # if is.character(dotzero) posdotzero <- dotzero[dotzero > 0] negdotzero <- dotzero[dotzero < 0] zneg.index <- if (length(negdotzero)) { if (!is.Numeric(-negdotzero, positive = TRUE, integer.valued = TRUE) || max(-negdotzero) > M1) stop("bad input for argument 'zero'") bigUniqInt <- 1080 zneg.index <- rep(0:bigUniqInt, rep(length(negdotzero), 1 + bigUniqInt)) * M1 + abs(negdotzero) sort(intersect(zneg.index, 1:M)) } else { NULL } zpos.index <- if (length(posdotzero)) posdotzero else NULL z.Index <- if (!length(dotzero)) NULL else unique(sort(c(zneg.index, zpos.index))) zero <- z.Index # Transition asgn <- attr(x, "assign") nasgn <- names(asgn) if (is.null(constraints)) { constraints <- vector("list", length(nasgn)) # list() names(constraints) <- nasgn } if (!is.list(constraints)) stop("'constraints' must be a list") for (ii in seq_along(asgn)) constraints[[nasgn[ii]]] <- if (is.null(constraints[[nasgn[ii]]])) diag(M) else eval(constraints[[nasgn[ii]]]) if (is.null(zero)) return(constraints) if (any(zero < 1 | zero > M)) stop("argument 'zero' out of range; should have values between ", "1 and ", M, " inclusive") if (nasgn[1] != "(Intercept)") stop("cannot fit an intercept to a no-intercept model") if (2 <= length(constraints)) for (ii in 2:length(constraints)) { Hmatk <- constraints[[nasgn[ii]]] Hmatk[zero, ] <- 0 index <- NULL for (kk in 1:ncol(Hmatk)) if (all(Hmatk[, kk] == 0)) index <- c(index, kk) if (length(index) == ncol(Hmatk)) stop("constraint matrix has no columns!") if (!is.null(index)) Hmatk <- Hmatk[, -index, drop = FALSE] constraints[[nasgn[ii]]] <- Hmatk } # for ii constraints } process.constraints <- function(constraints, x, M, by.col = TRUE, specialCM = NULL, Check.cm.rank = TRUE # 20140626 ) { asgn <- attr(x, "assign") nasgn <- names(asgn) if (is.null(constraints)) { constraints <- vector("list", length(nasgn)) for (ii in seq_along(nasgn)) constraints[[ii]] <- diag(M) names(constraints) <- nasgn } if (is.matrix(constraints)) constraints <- list(constraints) if (!is.list(constraints)) stop("'constraints' must be a list") lenconstraints <- length(constraints) if (lenconstraints > 0) for (ii in 1:lenconstraints) { list.elt <- constraints[[ii]] if (is.function(list.elt)) { list.elt <- list.elt() } constraints[[ii]] <- 
eval(list.elt) if (!is.null (constraints[[ii]]) && !is.matrix(constraints[[ii]])) stop("'constraints[[", ii, "]]' is not a matrix") } if (is.null(names(constraints))) names(constraints) <- rep_len(nasgn, lenconstraints) temp <- vector("list", length(nasgn)) names(temp) <- nasgn for (ii in seq_along(nasgn)) temp[[nasgn[ii]]] <- if (is.null(constraints[[nasgn[ii]]])) diag(M) else eval(constraints[[nasgn[ii]]]) for (ii in seq_along(asgn)) { if (!is.matrix(temp[[ii]])) { stop("not a constraint matrix") } if (ncol(temp[[ii]]) > M) stop("constraint matrix has too many columns") } if (!by.col) return(temp) constraints <- temp Hlist <- vector("list", ncol(x)) for (ii in seq_along(asgn)) { cols <- asgn[[ii]] ictr <- 0 for (jay in cols) { ictr <- ictr + 1 cm <- if (is.list(specialCM) && any(nasgn[ii] == names(specialCM))) { slist <- specialCM[[(nasgn[ii])]] slist[[ictr]] } else { constraints[[ii]] } Hlist[[jay]] <- cm } } names(Hlist) <- dimnames(x)[[2]] if (Check.cm.rank) { all.svd.d <- function(x) svd(x)$d mylist <- lapply(Hlist, all.svd.d) if (max(unlist(lapply(mylist, length))) > M) stop("some constraint matrices have more than ", M, "columns") MyVector <- unlist(mylist) if (min(MyVector) < 1.0e-10) stop("some constraint matrices are not of ", "full column-rank: ", paste(names(MyVector)[MyVector < 1.0e-10], collapse = ", ")) } Hlist } trivial.constraints <- function(Hlist, target = diag(M)) { if (is.null(Hlist)) return(1) if (is.matrix(Hlist)) Hlist <- list(Hlist) M <- dim(Hlist[[1]])[1] if (!is.matrix(target)) stop("target is not a matrix") dimtar <- dim(target) trivc <- rep_len(1, length(Hlist)) names(trivc) <- names(Hlist) for (ii in seq_along(Hlist)) { d <- dim(Hlist[[ii]]) if (d[1] != dimtar[1]) trivc[ii] <- 0 if (d[2] != dimtar[2]) trivc[ii] <- 0 if (d[1] != M) trivc[ii] <- 0 if (length(Hlist[[ii]]) != length(target)) trivc[ii] <- 0 if (trivc[ii] == 0) next if (!all(c(Hlist[[ii]]) == c(target))) trivc[ii] <- 0 if (trivc[ii] == 0) next } trivc } add.constraints <- function(constraints, new.constraints, overwrite = FALSE, check = FALSE) { empty.list <- function(l) (is.null(l) || (is.list(l) && length(l) == 0)) if (empty.list(constraints)) if (is.list(new.constraints)) return(new.constraints) else return(list()) # Both NULL probably constraints <- as.list(constraints) new.constraints <- as.list(new.constraints) nc <- names(constraints) # May be NULL nn <- names(new.constraints) # May be NULL if (is.null(nc) || is.null(nn)) stop("lists must have names") if (any(nc == "") || any(nn == "")) stop("lists must have names") if (!empty.list(constraints) && !empty.list(new.constraints)) { for (ii in nn) { if (any(ii == nc)) { if (check && (!(all(dim(constraints[[ii]]) == dim(new.constraints[[ii]])) && all( constraints[[ii]] == new.constraints[[ii]])))) stop("apparent contradiction in the specification ", "of the constraints") if (overwrite) constraints[[ii]] <- new.constraints[[ii]] } else constraints[[ii]] <- new.constraints[[ii]] } } else { if (!empty.list(constraints)) return(as.list(constraints)) else return(as.list(new.constraints)) } constraints } iam <- function(j, k, M, # hbw = M, both = FALSE, diag = TRUE) { jay <- j kay <- k if (M == 1) if (!diag) stop("cannot handle this") if (M == 1) if (both) return(list(row.index = 1, col.index = 1)) else return(1) upper <- if (diag) M else M - 1 i2 <- as.list(upper:1) i2 <- lapply(i2, seq) i2 <- unlist(i2) i1 <- matrix(1:M, M, M) i1 <- if (diag) c(i1[row(i1) >= col(i1)]) else c(i1[row(i1) > col(i1)]) if (both) { list(row.index = i2, col.index = i1) } 
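  ## iam() maps the (j, k) element of a symmetric M x M matrix to its
  ## column in the matrix-band storage used for working weight matrices;
  ## the M diagonal elements are stored first, e.g. with M = 3:
  ##   iam(1, 1, M = 3)  # 1
  ##   iam(1, 2, M = 3)  # 4 (first off-diagonal, after the 3 diagonals)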
else { if (jay > M || kay > M || jay < 1 || kay < 1) stop("range error in j or k") both <- (i1 == jay & i2 == kay) | (i1 == kay & i2 == jay) (seq_along(i2))[both] } } dimm <- function(M, hbw = M) { if (!is.numeric(hbw)) hbw <- M if (hbw > M || hbw < 1) stop("range error in argument 'hbw'") hbw * (2 * M - hbw +1) / 2 } m2a <- function(m, M, upper = FALSE, allow.vector = FALSE) { if (!is.numeric(m)) stop("argument 'm' is not numeric") if (!is.matrix(m)) m <- cbind(m) n <- nrow(m) dimm <- ncol(m) index <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) if (dimm > length(index$row.index)) stop("bad value for 'M'; it is too small") if (dimm < M) { stop("bad value for 'M'; it is too big") } fred <- .C("m2accc", as.double(t(m)), ans=double(M*M*n), as.integer(dimm), as.integer(index$row-1), as.integer(index$col-1), as.integer(n), as.integer(M), as.integer(as.numeric(upper)), NAOK = TRUE) dim(fred$ans) <- c(M, M, n) alpn <- NULL dimnames(fred$ans) <- list(alpn, alpn, dimnames(m)[[1]]) fred$a } a2m <- function(a, hbw = M) { if (is.matrix(a) && ncol(a) == nrow(a)) a <- array(a, c(nrow(a), ncol(a), 1)) if (!is.array(a)) dim(a) <- c(1,1,length(a)) M <- dim(a)[1] n <- dim(a)[3] dimm.value <- dimm(M, hbw) index <- iam(NA, NA, M, both = TRUE, diag = TRUE) fred <- .C("a2mccc", as.double(a), m = double(dimm.value * n), as.integer(dimm.value), as.integer(index$row-1), as.integer(index$col-1), as.integer(n), as.integer(M), NAOK = TRUE) dim(fred$m) <- c(dimm.value,n) fred$m <- t(fred$m) if (hbw != M) attr(fred$m, "hbw") <- hbw if (length(lpn <- dimnames(a)[[1]]) != 0) attr(fred$m, "predictors.names") <- lpn fred$m } vindex <- function(M, row.arg = FALSE, col.arg = FALSE, length.arg = M * (M + 1) / 2) { if ((row.arg + col.arg) != 1) stop("only one of row and col must be TRUE") if (M == 1) { ans <- 1 } else { if (row.arg) { i1 <- matrix(1:M, M, M) ans <- c(i1[row(i1) + col(i1) <= (M + 1)]) } else { i1 <- matrix(1:M, M, M) ans <- c(i1[row(i1) >= col(i1)]) } } if (length.arg > length(ans)) stop("argument 'length.arg' too big") rep_len(ans, length.arg) } wweights <- function(object, matrix.arg = TRUE, deriv.arg = FALSE, ignore.slot = FALSE, checkwz = TRUE) { if (length(wz <- object@weights) && !ignore.slot && !deriv.arg) { return(wz) } M <- object@misc$M # Done below n <- object@misc$n # Done below if (any(slotNames(object) == "extra")) { extra <- object@extra if (length(extra) == 1 && !length(names(extra))) { extra <- extra[[1]] } } mu <- object@fitted.values if (any(slotNames(object) == "predictors")) eta <- object@predictors mt <- terms(object) # object@terms$terms; 20030811 Hlist <- constraints <- object@constraints new.coeffs <- object@coefficients if (any(slotNames(object) == "iter")) iter <- object@iter w <- rep_len(1, n) if (any(slotNames(object) == "prior.weights")) w <- object@prior.weights if (!length(w)) w <- rep_len(1, n) x <- object@x if (!length(x)) x <- model.matrixvlm(object, type = "lm") y <- object@y if (!length(y)) y <- depvar(object) if (length(object@misc$form2)) { Xm2 <- object@Xm2 if (!length(Xm2)) Xm2 <- model.matrix(object, type = "lm2") Ym2 <- object@Ym2 } if (any(slotNames(object) == "family")) { infos.list <- object@family@infos() if (length(infos.list)) for (ii in names(infos.list)) { assign(ii, infos.list[[ii]]) } } if (any(slotNames(object) == "control")) for (ii in names(object@control)) { assign(ii, object@control[[ii]]) } if (length(object@misc)) for (ii in names(object@misc)) { assign(ii, object@misc[[ii]]) } if (any(slotNames(object) == "family")) { expr <- object@family@deriv 
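    ## The family's @deriv and @weight slots are unevaluated expressions;
    ## they are eval()uated here in the current frame, which was
    ## populated above with x, y, w, eta, extra, etc., so that the
    ## working weights can be reconstructed after fitting.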
deriv.mu <- eval(expr) if (!length(wz)) { expr <- object@family@weight wz <- eval(expr) if (M > 1) dimnames(wz) <- list(dimnames(wz)[[1]], NULL) # Remove colnames wz <- if (matrix.arg) as.matrix(wz) else c(wz) } if (deriv.arg) list(deriv = deriv.mu, weights = wz) else wz } else { NULL } } pweights <- function(object, ...) { ans <- object@prior.weights if (length(ans)) { ans } else { temp <- object@y ans <- rep_len(1, nrow(temp)) # Assumed all equal and unity. names(ans) <- dimnames(temp)[[1]] ans } } procVec <- function(vec, yn, Default) { if (anyNA(vec)) stop("vec cannot contain any NAs") L <- length(vec) nvec <- names(vec) # vec[""] undefined named <- length(nvec) # FALSE for c(1,3) if (named) { index <- (1:L)[nvec == ""] default <- if (length(index)) vec[index] else Default } else { default <- vec } answer <- rep_len(default, length(yn)) names(answer) <- yn if (named) { nvec2 <- nvec[nvec != ""] if (length(nvec2)) { if (any(!is.element(nvec2, yn))) stop("some names given which are superfluous") answer <- rep_len(NA_real_, length(yn)) names(answer) <- yn answer[nvec2] <- vec[nvec2] answer[is.na(answer)] <- rep_len(default, sum(is.na(answer))) } } answer } if (FALSE) { } weightsvglm <- function(object, type = c("prior", "working"), matrix.arg = TRUE, ignore.slot = FALSE, deriv.arg = FALSE, ...) { weightsvlm(object, type = type, matrix.arg = matrix.arg, ignore.slot = ignore.slot, deriv.arg = deriv.arg, ...) } weightsvlm <- function(object, type = c("prior", "working"), matrix.arg = TRUE, ignore.slot = FALSE, deriv.arg = FALSE, ...) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("prior", "working"))[1] if (type == "working") { wweights(object = object, matrix.arg = matrix.arg, deriv.arg = deriv.arg, ignore.slot = ignore.slot, ...) } else { if (deriv.arg) stop("cannot set 'deriv = TRUE' when 'type=\"prior\"'") ans <- pweights(object) if (matrix.arg) as.matrix(ans) else c(ans) } } if (!isGeneric("weights")) setGeneric("weights", function(object, ...) standardGeneric("weights")) setMethod("weights", "vlm", function(object, ...) weightsvlm(object, ...)) setMethod("weights", "vglm", function(object, ...) 
weightsvglm(object, ...)) qnupdate <- function(w, wzold, dderiv, deta, M, keeppd = TRUE, trace = FALSE, reset = FALSE, effpos=.Machine$double.eps^0.75) { if (M == 1) { dderiv <- cbind(dderiv) deta <- cbind(deta) } Bs <- mux22(t(wzold), deta, M = M, upper = FALSE, as.matrix = TRUE) # n x M sBs <- c( (deta * Bs) %*% rep_len(1, M) ) # should have positive values sy <- c( (dderiv * deta) %*% rep_len(1, M) ) wznew <- wzold index <- iam(NA, NA, M = M, both = TRUE) index$row.index <- rep_len(index$row.index, ncol(wzold)) index$col.index <- rep_len(index$col.index, ncol(wzold)) updateThese <- if (keeppd) (sy > effpos) else rep_len(TRUE, length(sy)) if (!keeppd || any(updateThese)) { wznew[updateThese,] <- wznew[updateThese,] - Bs[updateThese,index$row] * Bs[updateThese,index$col] / sBs[updateThese] + dderiv[updateThese,index$row] * dderiv[updateThese,index$col] / sy[updateThese] notupdated <- sum(!updateThese) if (notupdated && trace) cat(notupdated, "weight matrices not updated out of", length(sy), "\n") } else { warning("no BFGS quasi-Newton update made at all") cat("no BFGS quasi-Newton update made at all\n") flush.console() } wznew } mbesselI0 <- function(x, deriv.arg = 0) { if (!is.Numeric(deriv.arg, length.arg = 1, integer.valued = TRUE, positive = TRUE) && deriv.arg != 0) stop("argument 'deriv.arg' must be a single non-negative integer") if (!(deriv.arg == 0 || deriv.arg == 1 || deriv.arg == 2)) stop("argument 'deriv' must be 0, 1, or 2") if (!is.Numeric(x)) stop("bad input for argument 'x'") nn <- length(x) if (FALSE) { } ans <- matrix(NA_real_, nrow = nn, ncol = deriv.arg+1) ans[, 1] <- besselI(x, nu = 0) if (deriv.arg>=1) ans[,2] <- besselI(x, nu = 1) if (deriv.arg>=2) ans[,3] <- ans[,1] - ans[,2] / x ans } VGAM.matrix.norm <- function(A, power = 2, suppressWarning = FALSE) { if ((nrow(A) != ncol(A)) && !suppressWarning) warning("norms should be calculated for square matrices; ", "'A' is not square") if (power == "F") { sqrt(sum(A^2)) } else if (power == 1) { max(colSums(abs(A))) } else if (power == 2) { sqrt(max(eigen(t(A) %*% A, symmetric = TRUE)$value)) } else if (!is.finite(power)) { max(colSums(abs(A))) } else { stop("argument 'power' not recognized") } } rmfromVGAMenv <- function(varnames, prefix = "") { evarnames <- paste(prefix, varnames, sep = "") for (ii in evarnames) { mytext1 <- "exists(x = ii, envir = VGAMenv)" myexp1 <- parse(text = mytext1) is.there <- eval(myexp1) if (is.there) { rm(list = ii, envir = VGAMenv) } } } existsinVGAMenv <- function(varnames, prefix = "") { evarnames <- paste(prefix, varnames, sep = "") ans <- NULL for (ii in evarnames) { mytext1 <- "exists(x = ii, envir = VGAMenv)" myexp1 <- parse(text = mytext1) is.there <- eval(myexp1) ans <- c(ans, is.there) } ans } assign2VGAMenv <- function(varnames, mylist, prefix = "") { evarnames <- paste(prefix, varnames, sep = "") for (ii in seq_along(varnames)) { assign(evarnames[ii], mylist[[(varnames[ii])]], envir = VGAMenv) } } getfromVGAMenv <- function(varname, prefix = "") { varname <- paste(prefix, varname, sep = "") if (length(varname) > 1) stop("'varname' must be of length 1") get(varname, envir = VGAMenv) } lerch <- function(x, s, v, tolerance = 1.0e-10, iter = 100) { if (!is.Numeric(x) || !is.Numeric(s) || !is.Numeric(v)) stop("bad input in 'x', 's', and/or 'v'") if (is.complex(c(x,s,v))) stop("complex arguments not allowed in 'x', 's' and 'v'") if (!is.Numeric(tolerance, length.arg = 1, positive = TRUE) || tolerance > 0.01) stop("bad input for argument 'tolerance'") if (!is.Numeric(iter, length.arg = 1, 
integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'iter'") L <- max(length(x), length(s), length(v)) x <- rep_len(x, L) s <- rep_len(s, L) v <- rep_len(v, L) xok <- abs(x) < 1 & !(v <= 0 & v == round(v)) x[!xok] <- 0 # Fix this later ans <- .C("lerchphi123", err = integer(L), as.integer(L), as.double(x), as.double(s), as.double(v), acc=as.double(tolerance), result=double(L), as.integer(iter)) ifelse(ans$err == 0 & xok , ans$result, NA) } negzero.expression.VGAM <- expression({ if (length(dotzero) == 1 && (dotzero == "" || is.na(dotzero))) dotzero <- NULL if (is.character(dotzero)) { which.numeric.all <- NULL for (ii in seq_along(dotzero)) { which.ones <- grep(dotzero[ii], predictors.names, fixed = TRUE) if (length(which.ones)) { which.numeric.all <- c(which.numeric.all, which.ones) } else { warning("some values of argument 'zero' are unmatched. Ignoring them") } } which.numeric <- unique(sort(which.numeric.all)) if (!length(which.numeric)) { warning("No values of argument 'zero' were matched.") which.numeric <- NULL } else if (length(which.numeric.all) > length(which.numeric)) { warning("There were redundant values of argument 'zero'.") } dotzero <- which.numeric } posdotzero <- dotzero[dotzero > 0] negdotzero <- dotzero[dotzero < 0] zneg.index <- if (length(negdotzero)) { if (!is.Numeric(-negdotzero, positive = TRUE, integer.valued = TRUE) || max(-negdotzero) > M1) stop("bad input for argument 'zero'") bigUniqInt <- 1080 zneg.index <- rep(0:bigUniqInt, rep(length(negdotzero), 1 + bigUniqInt)) * M1 + abs(negdotzero) sort(intersect(zneg.index, 1:M)) } else { NULL } zpos.index <- if (length(posdotzero)) posdotzero else NULL z.Index <- if (!length(dotzero)) NULL else unique(sort(c(zneg.index, zpos.index))) constraints <- cm.zero.VGAM(constraints, x = x, z.Index, M = M) }) is.empty.list <- function(mylist) { is.list(mylist) && length(unlist(mylist)) == 0 } interleave.VGAM <- function(.M, M1, inverse = FALSE) { if (inverse) { NRs <- (.M)/M1 if (round(NRs) != NRs) stop("Incompatible number of parameters") c(matrix(1:(.M), nrow = NRs, byrow = TRUE)) } else { c(matrix(1:(.M), nrow = M1, byrow = TRUE)) } } interleave.cmat <- function(cmat1, cmat2) { ncol1 <- ncol(cmat1) ncol2 <- ncol(cmat2) if (ncol1 == 1) { return(cbind(cmat1, cmat2)) } else { # ncol1 > 1 if (ncol2 == 1) { return(cbind(cmat1[, 1], cmat2, cmat1[, -1])) } else if (ncol1 != ncol2) { warning("this function is confused. Returning cbind(cmat1, cmat2)") return(cbind(cmat1[, 1], cmat2, cmat1[, -1])) } else { # ncol1 == ncol2 and both are > 1. 
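      # Interleaving trick (added comment): kronecker(cmat1, cbind(1, 0))
      # puts the columns of cmat1 into the odd column positions,
      # zero-padding the even ones, and kronecker(cmat2, cbind(0, 1))
      # does the converse, so the sum below has columns
      # cmat1[, 1], cmat2[, 1], cmat1[, 2], cmat2[, 2], ...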
      kronecker(cmat1, cbind(1, 0)) +
      kronecker(cmat2, cbind(0, 1))
    }
  }
}


w.wz.merge <- function(w, wz, n, M, ndepy,
                       intercept.only = FALSE) {
  wz <- as.matrix(wz)
  if (ndepy == 1)
    return( c(w) * wz)
  if (intercept.only)
    warning("yettodo: support intercept.only == TRUE")
  if (NCOL(w) > ndepy)
    stop("number of columns of 'w' exceeds number of responses")
  w <- matrix(w, n, ndepy)
  w.rep <- matrix(0, n, ncol(wz))
  M1 <- M / ndepy
  all.indices <- iam(NA, NA, M = M, both = TRUE)
  if (FALSE)
  for (ii in 1:ncol(wz)) {
    if ((ind1 <- ceiling(all.indices$row[ii] / M1)) ==
                 ceiling(all.indices$col[ii] / M1)) {
      w.rep[, ii] <- w[, ind1]
    }
  }  # ii
  res.Ind1 <- ceiling(all.indices$row.index / M1)
  Ind1 <- res.Ind1 == ceiling(all.indices$col.index / M1)
  LLLL <- min(ncol(wz), length(Ind1))
  Ind1 <- Ind1[1:LLLL]
  res.Ind1 <- res.Ind1[1:LLLL]
  for (ii in 1:ndepy) {
    sub.ind1 <- (1:LLLL)[Ind1 & (res.Ind1 == ii)]
    w.rep[, sub.ind1] <- w[, ii]
  }  # ii
  w.rep * wz
}


w.y.check <- function(w, y,
                      ncol.w.max = 1,
                      ncol.y.max = 1,
                      ncol.w.min = 1,
                      ncol.y.min = 1,
                      out.wy = FALSE,
                      colsyperw = 1,
                      maximize = FALSE,
                      Is.integer.y = FALSE,
                      Is.positive.y = FALSE,
                      Is.nonnegative.y = FALSE,
                      prefix.w = "PriorWeight",
                      prefix.y = "Response") {
  if (!is.matrix(w))
    w <- as.matrix(w)
  if (!is.matrix(y))
    y <- as.matrix(y)
  n.lm <- nrow(y)
  rn.w <- rownames(w)
  rn.y <- rownames(y)
  cn.w <- colnames(w)
  cn.y <- colnames(y)
  if (Is.integer.y && any(y != round(y)))
    stop("response variable 'y' must be integer-valued")
  if (Is.positive.y && any(y <= 0))
    stop("response variable 'y' must be positive-valued")
  if (Is.nonnegative.y && any(y < 0))
    stop("response variable 'y' must be 0 or positive-valued")
  if (nrow(w) != n.lm)
    stop("nrow(w) should be equal to nrow(y)")
  if (ncol(w) > ncol.w.max)
    stop("prior-weight variable 'w' has too many columns")
  if (ncol(y) > ncol.y.max)
    stop("response variable 'y' has too many columns; ",
         "only ", ncol.y.max, " allowed")
  if (ncol(w) < ncol.w.min)
    stop("prior-weight variable 'w' has too few columns")
  if (ncol(y) < ncol.y.min)
    stop("response variable 'y' has too few columns; ",
         "at least ", ncol.y.min, " needed")
  if (min(w) <= 0)
    stop("prior-weight variable 'w' must contain positive values only")
  if (is.numeric(colsyperw) && ncol(y) %% colsyperw != 0)
    stop("number of columns of the response variable 'y' is not ",
         "a multiple of ", colsyperw)
  if (maximize) {
    Ncol.max.w <- max(ncol(w), ncol(y) / colsyperw)
    Ncol.max.y <- max(ncol(y), ncol(w) * colsyperw)
  } else {
    Ncol.max.w <- ncol(w)
    Ncol.max.y <- ncol(y)
  }
  if (out.wy && ncol(w) < Ncol.max.w) {
    nblanks <- sum(cn.w == "")
    if (nblanks > 0)
      cn.w[cn.w == ""] <- paste(prefix.w, 1:nblanks, sep = "")
    if (length(cn.w) < Ncol.max.w)
      cn.w <- c(cn.w,
                paste(prefix.w, (length(cn.w)+1):Ncol.max.w, sep = ""))
    w <- matrix(w, n.lm, Ncol.max.w, dimnames = list(rn.w, cn.w))
  }
  if (out.wy && ncol(y) < Ncol.max.y) {
    nblanks <- sum(cn.y == "")
    if (nblanks > 0)
      cn.y[cn.y == ""] <- paste(prefix.y, 1:nblanks, sep = "")
    if (length(cn.y) < Ncol.max.y)
      cn.y <- c(cn.y,
                paste(prefix.y, (length(cn.y)+1):Ncol.max.y, sep = ""))
    y <- matrix(y, n.lm, Ncol.max.y, dimnames = list(rn.y, cn.y))
  }
  list(w = if (out.wy) w else NULL,
       y = if (out.wy) y else NULL)
}


arwz2wz <- function(arwz, M = 1, M1 = 1,
                    rm.trailing.cols = TRUE,
                    full.arg = FALSE) {
  if (length(dim.arwz <- dim(arwz)) != 3)
    stop("dimension of 'arwz' should be of length 3")
  n <- dim.arwz[1]
  ndepy <- dim.arwz[2]
  dim.val <- dim.arwz[3]
  if (ndepy == 1) {
    dim(arwz) <- c(n, dim.val)
    return(arwz)
  }
  wz <- matrix(0.0, n,
               if (full.arg) M*(M+1)/2 else sum(M:(M-M1+1)))
  ind1 <- iam(NA, NA, M =
              M1, both = TRUE, diag = TRUE)
  len.ind1 <- dim.val  # length(ind1$col.index)
  for (ii in 1:ndepy) {
    for (jlocal in 1:len.ind1) {
      wz[, iam(M1 * (ii - 1) + ind1$row[jlocal],
               M1 * (ii - 1) + ind1$col[jlocal],
               M = M)] <- arwz[, ii, jlocal]
    }
  }
  if (rm.trailing.cols && !full.arg) {
    colind <- ncol(wz)
    while (all(wz[, colind] == 0))
      colind <- colind - 1
    if (colind < ncol(wz))
      wz <- wz[, 1:colind, drop = FALSE]
  }
  wz
}


param.names <- function(string, S) {
  if (S == 1) string else paste(string, 1:S, sep = "")
}


vweighted.mean.default <- function (x, w, ..., na.rm = FALSE) {
  temp5 <- w.y.check(w = w, y = x,
                     ncol.w.max = Inf, ncol.y.max = Inf,
                     out.wy = TRUE,
                     colsyperw = 1, maximize = TRUE,
                     Is.integer.y = FALSE,
                     Is.positive.y = FALSE,
                     Is.nonnegative.y = FALSE,
                     prefix.w = "PriorWeight",
                     prefix.y = "Response")
  x <- temp5$y
  w <- temp5$w
  ans <- numeric(ncol(w))
  for (ii in 1:ncol(w))
    ans[ii] <- weighted.mean(x[, ii], w = w[, ii], ..., na.rm = na.rm)
  ans
}


familyname.vlm <- function(object, all = FALSE, ...) {
  ans <- object@family@vfamily
  if (all) ans else ans[1]
}


familyname.vglmff <- function(object, all = FALSE, ...) {
  ans <- object@vfamily
  if (all) ans else ans[1]
}


if (!isGeneric("familyname"))
  setGeneric("familyname", function(object, ...)
             standardGeneric("familyname"))


setMethod("familyname", "vglmff", function(object, ...)
          familyname.vglmff(object, ...))


setMethod("familyname", "vlm", function(object, ...)
          familyname.vlm(object, ...))


bisection.basic <- function(f, a, b, tol = 1e-9, nmax = 500, ...) {
  if (!all(sign(f(a, ...)) * sign(f(b, ...)) <= 0))
    stop("roots do not exist between 'a' and 'b'")
  N <- 1
  while (N <= nmax) {
    mid <- (a + b) / 2
    save.f <- f(mid, ...)
    if (all(save.f == 0 | (b - a)/2 < tol)) {
      return(mid)
    }
    N <- N + 1
    vecTF <- sign(save.f) == sign(f(a, ...))
    a[ vecTF] <- mid[ vecTF]
    b[!vecTF] <- mid[!vecTF]
  }
  warning("did not converge. Returning final root")
  mid
}


retain.col <- function(mat, coln ) {
  if (is.matrix(mat))  # && exclude
    mat[, -coln] <- 0
  mat
}
VGAM/R/deviance.vlm.q0000644000176200001440000001110113135276757013754 0ustar liggesusers# These functions are
# Copyright (C) 1998-2017 T.W. Yee, University of Auckland.
# All rights reserved.


deviance.vlm <- function(object, summation = TRUE, ...) {
  if (summation) {
    object@criterion$deviance
  } else {
    Args <- formals(args(object@family@deviance))
    if (length(Args$summation) == 0)
      stop("there is no 'summation' argument for the function in the ",
           "'deviance' slot of the object.")
    object@family@deviance(mu = fitted(object),
                           y = depvar(object),
                           w = weights(object, type = "prior"),
                           residuals = FALSE,
                           eta = predict(object),
                           extra = object@extra,
                           summation = summation)
  }
}


if (FALSE)
deviance.vglm <- function(object, summation = TRUE, ...)
  object@criterion$deviance


if (!isGeneric("deviance"))
  setGeneric("deviance", function(object, ...)
             standardGeneric("deviance"))


setMethod("deviance", "vlm", function(object, ...)
          deviance.vlm(object, ...))


if (FALSE)
setMethod("deviance", "vglm", function(object, ...)
          deviance.vglm(object, ...))


deviance.qrrvglm <- function(object, summation = TRUE,
                             history = FALSE, ...) {
  if (history) {
    if (summation) {
      return(object@misc$deviance.Bestof)
    } else {
      stop("cannot handle 'history = TRUE' when 'summation = FALSE'")
    }
  }
  deviance.vlm(object, summation = summation, ...)
}


setMethod("deviance", "qrrvglm", function(object, ...)
          deviance.qrrvglm(object, ...))


setMethod("deviance", "rrvgam", function(object, ...)
          deviance.qrrvglm(object, ...))


df.residual_vlm <- function(object, type = c("vlm", "lm"), ...)
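# (Added comment.) type = "vlm" returns the stored residual df of the
# big VLM regression, while type = "lm" returns nobs(., type = "lm")
# minus the per-linear-predictor parameter counts from nvar_vlm().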
{ type <- type[1] switch(type, vlm = object@df.residual, lm = nobs(object, type = "lm") - nvar_vlm(object, type = "lm"), stop("argument 'type' unmatched")) } setMethod("df.residual", "vlm", function(object, ...) df.residual_vlm(object, ...)) df.residual_pvgam <- function(object, ...) { nobs(object, type = "lm") * npred(object) - sum(endf(object, diag.all = TRUE)) } setMethod("df.residual", "pvgam", function(object, ...) df.residual_pvgam(object, ...)) nvar_vlm <- function(object, ...) { M <- npred(object) allH <- matrix(unlist(constraints(object, type = "lm")), nrow = M) checkNonZero <- function(m) sum(as.logical(m)) numPars <- apply(allH, 1, checkNonZero) if (length(object@misc$predictors.names) == M) names(numPars) <- object@misc$predictors.names NumPars <- rep_len(0, M) for (jay in 1:M) { X.lm.jay <- model.matrix(object, type = "lm", linpred.index = jay) NumPars[jay] <- ncol(X.lm.jay) } if (length(object@misc$predictors.names) == M) names(NumPars) <- object@misc$predictors.names if (!all(NumPars == numPars)) { print(NumPars - numPars) # Should be all 0s stop("something wrong in nvar_vlm()") } numPars } if (FALSE) { set.seed(123) zapdat <- data.frame(x2 = runif(nn <- 2000)) zapdat <- transform(zapdat, p0 = logit(-0.5 + 1*x2, inverse = TRUE), lambda = loge( 0.5 + 2*x2, inverse = TRUE), f1 = gl(4, 50, labels = LETTERS[1:4]), x3 = runif(nn)) zapdat <- transform(zapdat, y = rzapois(nn, lambda, p0)) with(zapdat, table(y)) fit1 <- vglm(y ~ x2, zapoisson, zapdat, trace = TRUE) fit1 <- vglm(y ~ bs(x2), zapoisson, zapdat, trace = TRUE) coef(fit1, matrix = TRUE) # These should agree with the above values fit2 <- vglm(y ~ bs(x2) + x3, zapoisson(zero = 2), zapdat, trace = TRUE) coef(fit2, matrix = TRUE) clist <- list("(Intercept)" = diag(2), "x2" = rbind(0,1), "x3" = rbind(1,0)) fit3 <- vglm(y ~ x2 + x3, zapoisson(zero = NULL), zapdat, constraints = clist, trace = TRUE) coef(fit3, matrix = TRUE) constraints(fit2, type = "term") constraints(fit2, type = "lm") head(model.matrix(fit2, type = "term")) head(model.matrix(fit2, type = "lm")) allH <- matrix(unlist(constraints(fit1)), nrow = fit1@misc$M) allH <- matrix(unlist(constraints(fit2)), nrow = fit2@misc$M) allH <- matrix(unlist(constraints(fit3)), nrow = fit3@misc$M) checkNonZero <- function(m) sum(as.logical(m)) (numPars <- apply(allH, 1, checkNonZero)) nvar_vlm(fit1) nvar_vlm(fit2) nvar_vlm(fit3) } VGAM/R/family.mixture.R0000644000176200001440000007015113135276757014331 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. mix2normal.control <- function(trace = TRUE, ...) 
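# (Added comment.) Tracing is enabled by default for the two-component
# mixture families: their fits can converge slowly or to a local
# optimum, so watching the iterations is advisable.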
{ list(trace = trace) } mix2normal <- function(lphi = "logit", lmu = "identitylink", lsd = "loge", iphi = 0.5, imu1 = NULL, imu2 = NULL, isd1 = NULL, isd2 = NULL, qmu = c(0.2, 0.8), eq.sd = TRUE, nsimEIM = 100, zero = "phi") { lphi <- as.list(substitute(lphi)) ephi <- link2list(lphi) lphi <- attr(ephi, "function.name") lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lsd <- as.list(substitute(lsd)) esd <- link2list(lsd) lsd <- attr(esd, "function.name") emu1 <- emu2 <- emu esd1 <- esd2 <- esd if (!is.Numeric(qmu, length.arg = 2, positive = TRUE) || any(qmu >= 1)) stop("bad input for argument 'qmu'") if (length(iphi) && (!is.Numeric(iphi, length.arg = 1, positive = TRUE) || iphi>= 1)) stop("bad input for argument 'iphi'") if (length(imu1) && !is.Numeric(imu1)) stop("bad input for argument 'imu1'") if (length(imu2) && !is.Numeric(imu2)) stop("bad input for argument 'imu2'") if (length(isd1) && !is.Numeric(isd1, positive = TRUE)) stop("bad input for argument 'isd1'") if (length(isd2) && !is.Numeric(isd2, positive = TRUE)) stop("bad input for argument 'isd2'") if (!is.logical(eq.sd) || length(eq.sd) != 1) stop("bad input for argument 'eq.sd'") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE) || nsimEIM <= 10) stop("'nsimEIM' should be an integer greater than 10") new("vglmff", blurb = c("Mixture of two univariate normals\n\n", "Links: ", namesof("phi", lphi, earg = ephi, tag = FALSE), ", ", namesof("mu1", lmu, earg = emu1, tag = FALSE), ", ", namesof("sd1", lsd, earg = esd1, tag = FALSE), ", ", namesof("mu2", lmu, earg = emu2, tag = FALSE), ", ", namesof("sd2", lsd, earg = esd2, tag = FALSE), "\n", "Mean: phi*mu1 + (1 - phi)*mu2\n", "Variance: phi*sd1^2 + (1 - phi)*sd2^2 + ", "phi*(1 - phi)*(mu1-mu2)^2"), constraints = eval(substitute(expression({ constraints <- cm.VGAM(rbind(diag(4), c(0, 0, 1, 0)), x = x, bool = .eq.sd , constraints = constraints, apply.int = TRUE) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 5) }), list( .zero = zero, .eq.sd = eq.sd ))), infos = eval(substitute(function(...) 
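  # (Added comment.) The @infos slot returns family metadata (M1,
  # parameter and link names, the zero argument, nsimEIM, etc.) that
  # can be queried without a fitted object.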
{ list(M1 = 5, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("phi", "mu1", "sd1", "mu2", "sd2"), nsimEIM = .nsimEIM , lphi = .lphi , lmu1 = .lmu , lsd1 = .lsd , lmu2 = .lmu , lsd2 = .lsd , zero = .zero ) }, list( .zero = zero, .nsimEIM = nsimEIM, .lphi = lphi, .lmu = lmu , .lsd = lsd ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c( namesof("phi", .lphi , earg = .ephi , tag = FALSE), namesof("mu1", .lmu , earg = .emu1 , tag = FALSE), namesof("sd1", .lsd , earg = .esd1 , tag = FALSE), namesof("mu2", .lmu , earg = .emu2 , tag = FALSE), namesof("sd2", .lsd , earg = .esd2 , tag = FALSE)) if (!length(etastart)) { qy <- quantile(y, prob = .qmu ) init.phi <- rep_len(if (length( .iphi )) .iphi else 0.5, n) init.mu1 <- rep_len(if (length( .imu1 )) .imu1 else qy[1], n) init.mu2 <- rep_len(if (length( .imu2 )) .imu2 else qy[2], n) ind.1 <- if (init.mu1[1] < init.mu2[1]) 1:round(n* init.phi[1]) else round(n* init.phi[1]):n ind.2 <- if (init.mu1[1] < init.mu2[1]) round(n* init.phi[1]):n else 1:round(n* init.phi[1]) sorty <- sort(y) init.sd1 <- rep_len(if (length( .isd1 )) .isd1 else sd(sorty[ind.1]), n) init.sd2 <- rep_len(if (length( .isd2 )) .isd2 else sd(sorty[ind.2]), n) if ( .eq.sd ) { init.sd1 <- init.sd2 <- (init.sd1 + init.sd2) / 2 if (!identical( .esd1 , .esd2 )) stop("'esd1' and 'esd2' must be equal if 'eq.sd = TRUE'") } etastart <- cbind( theta2eta(init.phi, .lphi , earg = .ephi ), theta2eta(init.mu1, .lmu , earg = .emu1 ), theta2eta(init.sd1, .lsd , earg = .esd1 ), theta2eta(init.mu2, .lmu , earg = .emu2 ), theta2eta(init.sd2, .lsd , earg = .esd2 )) } }), list(.lphi = lphi, .lmu = lmu, .iphi = iphi, .imu1 = imu1, .imu2 = imu2, .ephi = ephi, .emu1 = emu1, .emu2 = emu2, .esd1 = esd1, .esd2 = esd2, .eq.sd = eq.sd, .lsd = lsd, .isd1 = isd1, .isd2 = isd2, .qmu = qmu))), linkinv = eval(substitute(function(eta, extra = NULL){ phi <- eta2theta(eta[, 1], link = .lphi , earg = .ephi ) mu1 <- eta2theta(eta[, 2], link = .lmu , earg = .emu1 ) mu2 <- eta2theta(eta[, 4], link = .lmu , earg = .emu2 ) phi * mu1 + (1 - phi) * mu2 }, list( .lphi = lphi, .lmu = lmu, .ephi = ephi, .emu1 = emu1, .emu2 = emu2, .esd1 = esd1, .esd2 = esd2 ))), last = eval(substitute(expression({ misc$link <- c("phi" = .lphi , "mu1" = .lmu , "sd1" = .lsd , "mu2" = .lmu , "sd2" = .lsd ) misc$earg <- list("phi" = .ephi , "mu1" = .emu1 , "sd1" = .esd1 , "mu2" = .emu2 , "sd2" = .esd2 ) misc$expected <- TRUE misc$eq.sd <- .eq.sd misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list(.lphi = lphi, .lmu = lmu, .lsd = lsd, .eq.sd = eq.sd, .ephi = ephi, .emu1 = emu1, .emu2 = emu2, .esd1 = esd1, .esd2 = esd2, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { phi <- eta2theta(eta[, 1], link = .lphi , earg = .ephi ) mu1 <- eta2theta(eta[, 2], link = .lmu , earg = .emu1 ) sd1 <- eta2theta(eta[, 3], link = .lsd , earg = .esd1 ) mu2 <- eta2theta(eta[, 4], link = .lmu , earg = .emu2 ) sd2 <- eta2theta(eta[, 5], link = .lsd , earg = .esd2 ) f1 <- dnorm(y, mean = mu1, sd = sd1) f2 <- dnorm(y, mean = mu2, sd = sd2) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * log(phi*f1 + (1 - phi)*f2) if (summation) { sum(ll.elts) } else { ll.elts } } }, list(.lphi = lphi, .lmu = lmu, .ephi = ephi, .emu1 = emu1, .emu2 = emu2, .esd1 = esd1, .esd2 = esd2, 
.lsd = lsd ))), vfamily = c("mix2normal"), validparams = eval(substitute(function(eta, y, extra = NULL) { phi <- eta2theta(eta[, 1], link = .lphi , earg = .ephi ) mu1 <- eta2theta(eta[, 2], link = .lmu , earg = .emu1 ) sd1 <- eta2theta(eta[, 3], link = .lsd , earg = .esd1 ) mu2 <- eta2theta(eta[, 4], link = .lmu , earg = .emu2 ) sd2 <- eta2theta(eta[, 5], link = .lsd , earg = .esd2 ) okay1 <- all(is.finite(mu1)) && all(is.finite(mu2)) && all(is.finite(sd1)) && all(0 < sd1) && all(is.finite(sd2)) && all(0 < sd2) && all(is.finite(phi)) && all(0 < phi & phi < 1) okay1 }, list(.lphi = lphi, .lmu = lmu, .ephi = ephi, .emu1 = emu1, .emu2 = emu2, .esd1 = esd1, .esd2 = esd2, .lsd = lsd ))), deriv = eval(substitute(expression({ phi <- eta2theta(eta[, 1], link = .lphi , earg = .ephi ) mu1 <- eta2theta(eta[, 2], link = .lmu , earg = .emu1 ) sd1 <- eta2theta(eta[, 3], link = .lsd , earg = .esd1 ) mu2 <- eta2theta(eta[, 4], link = .lmu , earg = .emu2 ) sd2 <- eta2theta(eta[, 5], link = .lsd , earg = .esd2 ) dphi.deta <- dtheta.deta(phi, link = .lphi , earg = .ephi ) dmu1.deta <- dtheta.deta(mu1, link = .lmu , earg = .emu1 ) dmu2.deta <- dtheta.deta(mu2, link = .lmu , earg = .emu2 ) dsd1.deta <- dtheta.deta(sd1, link = .lsd , earg = .esd1 ) dsd2.deta <- dtheta.deta(sd2, link = .lsd , earg = .esd2 ) f1 <- dnorm(y, mean = mu1, sd = sd1) f2 <- dnorm(y, mean = mu2, sd = sd2) pdf <- phi*f1 + (1 - phi)*f2 z1 <- (y-mu1) / sd1 z2 <- (y-mu2) / sd2 df1.dmu1 <- z1 * f1 / sd1 df2.dmu2 <- z2 * f2 / sd2 df1.dsd1 <- (z1^2 - 1) * f1 / sd1 df2.dsd2 <- (z2^2 - 1) * f2 / sd2 dl.dphi <- (f1-f2) / pdf dl.dmu1 <- phi * df1.dmu1 / pdf dl.dmu2 <- (1 - phi) * df2.dmu2 / pdf dl.dsd1 <- phi * df1.dsd1 / pdf dl.dsd2 <- (1 - phi) * df2.dsd2 / pdf c(w) * cbind(dl.dphi * dphi.deta, dl.dmu1 * dmu1.deta, dl.dsd1 * dsd1.deta, dl.dmu2 * dmu2.deta, dl.dsd2 * dsd2.deta) }), list(.lphi = lphi, .lmu = lmu, .lsd = lsd, .ephi = ephi, .emu1 = emu1, .emu2 = emu2, .esd1 = esd1, .esd2 = esd2, .nsimEIM = nsimEIM ))), weight = eval(substitute(expression({ d3 <- deriv3(~ log( phi * dnorm((ysim-mu1)/sd1) / sd1 + (1 - phi) * dnorm((ysim-mu2)/sd2) / sd2), c("phi","mu1","sd1","mu2","sd2"), hessian = TRUE) run.mean <- 0 for (ii in 1:( .nsimEIM )) { ysim <- ifelse(runif(n) < phi, rnorm(n, mu1, sd1), rnorm(n, mu2, sd2)) eval.d3 <- eval(d3) d2l.dthetas2 <- attr(eval.d3, "hessian") rm(ysim) temp3 <- matrix(0, n, dimm(M)) for (ss in 1:M) for (tt in ss:M) temp3[,iam(ss,tt, M)] <- -d2l.dthetas2[, ss, tt] run.mean <- ((ii-1) * run.mean + temp3) / ii } wz <- if (intercept.only) matrix(colMeans(run.mean), n, dimm(M), byrow = TRUE) else run.mean dtheta.detas <- cbind(dphi.deta, dmu1.deta, dsd1.deta, dmu2.deta, dsd2.deta) index0 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) wz <- wz * dtheta.detas[, index0$row] * dtheta.detas[, index0$col] c(w) * wz }), list(.lphi = lphi, .lmu = lmu, .nsimEIM = nsimEIM )))) } mix2poisson.control <- function(trace = TRUE, ...) 
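# (Added comment.) As for mix2normal.control() above, trace = TRUE is
# the default so that slow or problematic convergence is visible.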
{ list(trace = trace) } mix2poisson <- function(lphi = "logit", llambda = "loge", iphi = 0.5, il1 = NULL, il2 = NULL, qmu = c(0.2, 0.8), nsimEIM = 100, zero = "phi") { lphi <- as.list(substitute(lphi)) ephi <- link2list(lphi) lphi <- attr(ephi, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") el1 <- el2 <- elambda if (!is.Numeric(qmu, length.arg = 2, positive = TRUE) || any(qmu >= 1)) stop("bad input for argument 'qmu'") if (length(iphi) && (!is.Numeric(iphi, length.arg = 1, positive = TRUE) || iphi >= 1)) stop("bad input for argument 'iphi'") if (length(il1) && !is.Numeric(il1)) stop("bad input for argument 'il1'") if (length(il2) && !is.Numeric(il2)) stop("bad input for argument 'il2'") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE) || nsimEIM <= 10) stop("'nsimEIM' should be an integer greater than 10") new("vglmff", blurb = c("Mixture of two Poisson distributions\n\n", "Links: ", namesof("phi", lphi, earg = ephi, tag = FALSE), ", ", namesof("lambda1", llambda, earg = el1, tag = FALSE), ", ", namesof("lambda2", llambda, earg = el2, tag = FALSE), "\n", "Mean: phi*lambda1 + (1 - phi)*lambda2"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 3, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("phi", "lambda1", "lambda2"), nsimEIM = .nsimEIM , lphi = .lphi , llambda1 = .llambda , llambda2 = .llambda , zero = .zero ) }, list( .zero = zero, .nsimEIM = nsimEIM, .lphi = lphi, .llambda = llambda ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.max = 1, Is.integer.y = TRUE, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("phi", .lphi , earg = .ephi , tag = FALSE), namesof("lambda1", .llambda , earg = .el1 , tag = FALSE), namesof("lambda2", .llambda , earg = .el2 , tag = FALSE)) if (!length(etastart)) { qy <- quantile(y, prob = .qmu) init.phi <- rep_len(if (length( .iphi )) .iphi else 0.5, n) init.lambda1 <- rep_len(if (length( .il1 )) .il1 else qy[1], n) init.lambda2 <- rep_len(if (length( .il2 )) .il2 else qy[2], n) if (!length(etastart)) etastart <- cbind(theta2eta(init.phi, .lphi , earg = .ephi ), theta2eta(init.lambda1, .llambda , earg = .el1 ), theta2eta(init.lambda2, .llambda , earg = .el2 )) } }), list( .lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2, .iphi = iphi, .il1 = il1, .il2 = il2, .qmu = qmu))), linkinv = eval(substitute(function(eta, extra = NULL){ phi <- eta2theta(eta[, 1], link = .lphi , earg = .ephi ) lambda1 <- eta2theta(eta[, 2], link = .llambda , earg = .el1 ) lambda2 <- eta2theta(eta[, 3], link = .llambda , earg = .el2 ) phi * lambda1 + (1 - phi) * lambda2 }, list( .lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2 ))), last = eval(substitute(expression({ misc$link <- c("phi" = .lphi , "lambda1" = .llambda , "lambda2" = .llambda ) misc$earg <- list("phi" = .ephi , "lambda1" = .el1 , "lambda2" = .el2 ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list( .lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { phi <- 
eta2theta(eta[, 1], link = .lphi , earg = .ephi ) lambda1 <- eta2theta(eta[, 2], link = .llambda , earg = .el1 ) lambda2 <- eta2theta(eta[, 3], link = .llambda , earg = .el2 ) f1 <- dpois(y, lam = lambda1) f2 <- dpois(y, lam = lambda2) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * log(phi*f1 + (1 - phi)*f2) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2 ))), vfamily = c("mix2poisson"), validparams = eval(substitute(function(eta, y, extra = NULL) { phi <- eta2theta(eta[, 1], link = .lphi , earg = .ephi ) lambda1 <- eta2theta(eta[, 2], link = .llambda , earg = .el1 ) lambda2 <- eta2theta(eta[, 3], link = .llambda , earg = .el2 ) okay1 <- all(is.finite(phi)) && all(0 < phi & phi < 1) && all(is.finite(lambda1)) && all(0 < lambda1) && all(is.finite(lambda2)) && all(0 < lambda2) okay1 }, list( .lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2 ))), deriv = eval(substitute(expression({ phi <- eta2theta(eta[, 1], link = .lphi , earg = .ephi ) lambda1 <- eta2theta(eta[, 2], link = .llambda , earg = .el1 ) lambda2 <- eta2theta(eta[, 3], link = .llambda , earg = .el2 ) dphi.deta <- dtheta.deta(phi, link = .lphi , earg = .ephi ) dlambda1.deta <- dtheta.deta(lambda1, link = .llambda , earg = .el1 ) dlambda2.deta <- dtheta.deta(lambda2, link = .llambda , earg = .el2 ) f1 <- dpois(x = y, lam = lambda1) f2 <- dpois(x = y, lam = lambda2) pdf <- phi*f1 + (1 - phi)*f2 df1.dlambda1 <- dpois(y-1, lam = lambda1) - f1 df2.dlambda2 <- dpois(y-1, lam = lambda2) - f2 dl.dphi <- (f1-f2) / pdf dl.dlambda1 <- phi * df1.dlambda1 / pdf dl.dlambda2 <- (1 - phi) * df2.dlambda2 / pdf c(w) * cbind(dl.dphi * dphi.deta, dl.dlambda1 * dlambda1.deta, dl.dlambda2 * dlambda2.deta) }), list( .lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2, .nsimEIM = nsimEIM ))), weight = eval(substitute(expression({ run.mean <- 0 for (ii in 1:( .nsimEIM )) { ysim <- ifelse(runif(n) < phi, rpois(n, lambda1), rpois(n, lambda2)) f1 <- dpois(x = ysim, lam = lambda1) f2 <- dpois(x = ysim, lam = lambda2) pdf <- phi*f1 + (1 - phi)*f2 df1.dlambda1 <- dpois(ysim-1, lam = lambda1) - f1 df2.dlambda2 <- dpois(ysim-1, lam = lambda2) - f2 dl.dphi <- (f1 - f2) / pdf dl.dlambda1 <- phi * df1.dlambda1 / pdf dl.dlambda2 <- (1 - phi) * df2.dlambda2 / pdf d2f1.dlambda12 <- dpois(ysim-2, lambda1) - 2*dpois(ysim-1, lambda1) + dpois(ysim, lambda1) d2f2.dlambda22 <- dpois(ysim-2, lambda2) - 2*dpois(ysim-1, lambda2) + dpois(ysim, lambda2) d2l.dphi2 <- dl.dphi^2 d2l.dlambda12 <- phi * (phi * df1.dlambda1^2 / pdf - d2f1.dlambda12) / pdf d2l.dlambda22 <- (1 - phi) * ((1 - phi) * df2.dlambda2^2 / pdf - d2f2.dlambda22) / pdf d2l.dlambda1lambda2 <- phi * (1 - phi) * df1.dlambda1 * df2.dlambda2 / pdf^2 d2l.dphilambda1 <- df1.dlambda1 * (phi*(f1-f2)/pdf - 1) / pdf d2l.dphilambda2 <- df2.dlambda2 * ((1 - phi)*(f1-f2)/pdf - 1) / pdf rm(ysim) temp3 <- matrix(0, n, dimm(M)) temp3[, iam(1, 1, M = 3)] <- d2l.dphi2 temp3[, iam(2, 2, M = 3)] <- d2l.dlambda12 temp3[, iam(3, 3, M = 3)] <- d2l.dlambda22 temp3[, iam(1, 2, M = 3)] <- d2l.dphilambda1 temp3[, iam(1, 3, M = 3)] <- d2l.dphilambda2 temp3[, iam(2, 3, M = 3)] <- d2l.dlambda1lambda2 run.mean <- ((ii-1) * run.mean + temp3) / ii } wz <- if (intercept.only) matrix(colMeans(run.mean), n, dimm(M), byrow = TRUE) else run.mean dtheta.detas <- cbind(dphi.deta, dlambda1.deta, dlambda2.deta) index0 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) wz <- wz * 
dtheta.detas[, index0$row] * dtheta.detas[, index0$col] c(w) * wz }), list( .lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2, .nsimEIM = nsimEIM )))) } mix2exp.control <- function(trace = TRUE, ...) { list(trace = trace) } mix2exp <- function(lphi = "logit", llambda = "loge", iphi = 0.5, il1 = NULL, il2 = NULL, qmu = c(0.8, 0.2), nsimEIM = 100, zero = "phi") { lphi <- as.list(substitute(lphi)) ephi <- link2list(lphi) lphi <- attr(ephi, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") el1 <- el2 <- elambda if (!is.Numeric(qmu, length.arg = 2, positive = TRUE) || any(qmu >= 1)) stop("bad input for argument 'qmu'") if (length(iphi) && (!is.Numeric(iphi, length.arg = 1, positive = TRUE) || iphi >= 1)) stop("bad input for argument 'iphi'") if (length(il1) && !is.Numeric(il1)) stop("bad input for argument 'il1'") if (length(il2) && !is.Numeric(il2)) stop("bad input for argument 'il2'") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE) || nsimEIM <= 10) stop("'nsimEIM' should be an integer greater than 10") new("vglmff", blurb = c("Mixture of two univariate exponentials\n\n", "Links: ", namesof("phi", lphi, earg = ephi, tag = FALSE), ", ", namesof("lambda1", llambda, earg = el1 , tag = FALSE), ", ", namesof("lambda2", llambda, earg = el2 , tag = FALSE), "\n", "Mean: phi / lambda1 + (1 - phi) / lambda2\n"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 3, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("phi", "lambda1", "lambda2"), nsimEIM = .nsimEIM , lphi = .lphi , llambda1 = .llambda , llambda2 = .llambda , zero = .zero ) }, list( .zero = zero, .nsimEIM = nsimEIM, .lphi = lphi, .llambda = llambda ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("phi", .lphi , earg = .ephi , tag = FALSE), namesof("lambda1", .llambda , earg = .el1 , tag = FALSE), namesof("lambda2", .llambda , earg = .el2 , tag = FALSE)) if (!length(etastart)) { qy <- quantile(y, prob = .qmu) init.phi <- rep_len(if (length( .iphi )) .iphi else 0.5, n) init.lambda1 <- rep_len(if (length( .il1 )) .il1 else 1/qy[1], n) init.lambda2 <- rep_len(if (length( .il2 )) .il2 else 1/qy[2], n) if (!length(etastart)) etastart <- cbind(theta2eta(init.phi, .lphi , earg = .ephi ), theta2eta(init.lambda1, .llambda , earg = .el1 ), theta2eta(init.lambda2, .llambda , earg = .el2 )) } }), list(.lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2, .iphi = iphi, .il1 = il1, .il2 = il2, .qmu = qmu))), linkinv = eval(substitute(function(eta, extra = NULL){ phi <- eta2theta(eta[, 1], link = .lphi , earg = .ephi ) lambda1 <- eta2theta(eta[, 2], link = .llambda , earg = .el1 ) lambda2 <- eta2theta(eta[, 3], link = .llambda , earg = .el2 ) phi / lambda1 + (1 - phi) / lambda2 }, list(.lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2 ))), last = eval(substitute(expression({ misc$link <- c("phi" = .lphi , "lambda1" = .llambda , "lambda2" = .llambda ) misc$earg <- list("phi" = .ephi , "lambda1" = .el1 , "lambda2" = .el2 ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list(.lphi = 
lphi, .llambda = llambda, .nsimEIM = nsimEIM, .ephi = ephi, .el1 = el1, .el2 = el2 ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { phi <- eta2theta(eta[, 1], link = .lphi , earg = .ephi ) lambda1 <- eta2theta(eta[, 2], link = .llambda , earg = .el1 ) lambda2 <- eta2theta(eta[, 3], link = .llambda , earg = .el2 ) f1 <- dexp(y, rate=lambda1) f2 <- dexp(y, rate=lambda2) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * log(phi*f1 + (1 - phi)*f2) if (summation) { sum(ll.elts) } else { ll.elts } } }, list(.lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2 ))), vfamily = c("mix2exp"), validparams = eval(substitute(function(eta, y, extra = NULL) { phi <- eta2theta(eta[, 1], link = .lphi , earg = .ephi ) lambda1 <- eta2theta(eta[, 2], link = .llambda , earg = .el1 ) lambda2 <- eta2theta(eta[, 3], link = .llambda , earg = .el2 ) okay1 <- all(is.finite(phi)) && all(0 < phi & phi < 1) && all(is.finite(lambda1)) && all(0 < lambda1) && all(is.finite(lambda2)) && all(0 < lambda2) okay1 }, list( .lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2 ))), deriv = eval(substitute(expression({ phi <- eta2theta(eta[, 1], link = .lphi , earg = .ephi ) lambda1 <- eta2theta(eta[, 2], link = .llambda , earg = .el1 ) lambda2 <- eta2theta(eta[, 3], link = .llambda , earg = .el2 ) dphi.deta <- dtheta.deta(phi, link = .lphi , earg = .ephi ) dlambda1.deta <- dtheta.deta(lambda1, link = .llambda , earg = .el1 ) dlambda2.deta <- dtheta.deta(lambda2, link = .llambda , earg = .el2 ) f1 <- dexp(x = y, rate = lambda1) f2 <- dexp(x = y, rate = lambda2) pdf <- phi*f1 + (1 - phi)*f2 df1.dlambda1 <- exp(-lambda1*y) - y * dexp(y, rate = lambda1) df2.dlambda2 <- exp(-lambda2*y) - y * dexp(y, rate = lambda2) dl.dphi <- (f1-f2) / pdf dl.dlambda1 <- phi * df1.dlambda1 / pdf dl.dlambda2 <- (1 - phi) * df2.dlambda2 / pdf c(w) * cbind(dl.dphi * dphi.deta, dl.dlambda1 * dlambda1.deta, dl.dlambda2 * dlambda2.deta) }), list(.lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2 ))), weight = eval(substitute(expression({ run.mean <- 0 for (ii in 1:( .nsimEIM )) { ysim <- ifelse(runif(n) < phi, rexp(n, lambda1), rexp(n, lambda2)) f1 <- dexp(x = ysim, rate=lambda1) f2 <- dexp(x = ysim, rate=lambda2) pdf <- phi*f1 + (1 - phi)*f2 df1.dlambda1 <- exp(-lambda1*ysim) - ysim * dexp(ysim, rate = lambda1) df2.dlambda2 <- exp(-lambda2*ysim) - ysim * dexp(ysim, rate = lambda2) dl.dphi <- (f1-f2) / pdf dl.dlambda1 <- phi * df1.dlambda1 / pdf dl.dlambda2 <- (1 - phi) * df2.dlambda2 / pdf d2f1.dlambda12 <- ysim*(ysim*lambda1-2)*exp(-lambda1*ysim) d2f2.dlambda22 <- ysim*(ysim*lambda2-2)*exp(-lambda2*ysim) d2l.dphi2 <- dl.dphi^2 d2l.dlambda12 <- phi * (phi * df1.dlambda1^2 / pdf - d2f1.dlambda12) / pdf d2l.dlambda22 <- (1 - phi) * ((1 - phi) * df2.dlambda2^2 / pdf - d2f2.dlambda22) / pdf d2l.dlambda1lambda2 <- phi * (1 - phi) * df1.dlambda1 * df2.dlambda2 / pdf^2 d2l.dphilambda1 <- df1.dlambda1 * (phi*(f1-f2)/pdf - 1) / pdf d2l.dphilambda2 <- df2.dlambda2 * ((1 - phi)*(f1-f2)/pdf - 1) / pdf rm(ysim) temp3 <- matrix(0, n, dimm(M)) temp3[, iam(1, 1, M = 3)] <- d2l.dphi2 temp3[, iam(2, 2, M = 3)] <- d2l.dlambda12 temp3[, iam(3, 3, M = 3)] <- d2l.dlambda22 temp3[, iam(1, 2, M = 3)] <- d2l.dphilambda1 temp3[, iam(1, 3, M = 3)] <- d2l.dphilambda2 temp3[, iam(2, 3, M = 3)] <- d2l.dlambda1lambda2 run.mean <- ((ii-1) * run.mean + temp3) / ii } wz <- if (intercept.only) matrix(colMeans(run.mean), n, 
dimm(M), byrow = TRUE) else run.mean dtheta.detas <- cbind(dphi.deta, dlambda1.deta, dlambda2.deta) index0 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) wz <- wz * dtheta.detas[, index0$row] * dtheta.detas[, index0$col] c(w) * wz }), list(.lphi = lphi, .llambda = llambda, .ephi = ephi, .el1 = el1, .el2 = el2, .nsimEIM = nsimEIM )))) } VGAM/R/getxvlmaug.R0000644000176200001440000001122013135276760013521 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. mroot2 <- function(A) { if (!isTRUE(all.equal(A, t(A)))) stop("Supplied matrix not symmetric") U <- chol(A, pivot = TRUE, tol = 0) opiv <- order(attr(U, "pivot")) r <- attr(U, "rank") p <- ncol(U) if (r < p) U[(r+1):p, (r+1):p] <- 0 rank <- r U <- U[, opiv, drop = FALSE] U } # mroot2 mroot3 <- function(A, rank = NULL, transpose = FALSE) { if (is.null(rank)) rank <- 0 if (!isTRUE(all.equal(A, t(A)))) stop("Supplied matrix not symmetric") U <- suppressWarnings(chol(A, pivot = TRUE, tol = 0)) piv <- order(attr(U, "pivot")) r <- attr(U, "rank") p <- ncol(U) if (r < p) U[(r+1):p, (r+1):p] <- 0 if (rank < 1) rank <- r U <- U[, piv, drop = FALSE] if (transpose) t(U[1:rank, , drop = FALSE]) else U[1:rank, , drop = FALSE] } # mroot3 get.X.VLM.aug <- function(constraints = constraints, sm.osps.list = sm.osps.list) { assignx <- sm.osps.list$assignx nassignx <- names(assignx) indexterms <- sm.osps.list$indexterms which.X.sm.osps <- sm.osps.list$which.X.sm.osps S.arg <- sm.osps.list$S.arg sparlist <- sm.osps.list$sparlist ridge.adj <- sm.osps.list$ridge.adj term.labels <- sm.osps.list$term.labels spar.new <- list() pen.new.list <- list() ncol.X.sm.osps <- sapply(which.X.sm.osps, length) ncolHlist.model <- unlist(lapply(constraints, ncol)) ncolHlist.new <- ncolHlist.model if (names(constraints)[[1]] == "(Intercept)") { ncolHlist.new <- ncolHlist.new[-1] nassignx <- nassignx[-1] } ncol.H.sm.osps <- ncolHlist.new[indexterms] nsm.osps <- nassignx[indexterms] sparlen <- sapply(sparlist, length) for (ii in seq_along(ncol.H.sm.osps)) { nspar <- sparlen[ii] # sparlen[[ii]] sparlist.use <- sparlist[[ii]] sparlist.use[sparlist.use < 0] <- 0 spar.new[[ii]] <- if (nspar == ncol.H.sm.osps[ii]) { sparlist.use } else { if (ncol.H.sm.osps[ii] < nspar) warning("too many 'spar' values; using the first few") rep_len(sparlist.use, ncol.H.sm.osps[ii]) } names(spar.new)[[ii]] <- nsm.osps[ii] # nsm.osps[[ii]] if (ridge.adj[[ii]] == 0) { spar.diag <- diag(sqrt(spar.new[[ii]])) pen.noridge <- kronecker(spar.diag, S.arg[[ii]]) ooo <- matrix(1:(ncol.H.sm.osps[ii] * ncol.X.sm.osps[ii]), ncol = ncol.X.sm.osps[ii], byrow = TRUE) pen.new.list[[ii]] <- pen.noridge[, ooo] names(pen.new.list)[[ii]] <- nsm.osps[ii] # nsm.osps[[ii]] } else { ioffset <- 0 joffset <- 0 Dmat1 <- matrix(0, ncol.H.sm.osps[ii] * (ncol(S.arg[[ii]]) + nrow(S.arg[[ii]])), ncol.H.sm.osps[ii] * ncol(S.arg[[ii]])) for (jay in 1:(ncol.H.sm.osps[ii])) { pen.set <- mroot2(sqrt(spar.new[[ii]][jay]) * S.arg[[ii]] + sqrt(ridge.adj[[ii]]) * diag(ncol(S.arg[[ii]]))) pen.ridge <- rbind(pen.set, sqrt(ridge.adj[[ii]]) * diag(ncol(S.arg[[ii]]))) Dmat1[ioffset + 1:nrow(pen.ridge), joffset + 1:ncol(pen.ridge)] <- pen.ridge ioffset <- ioffset + nrow(pen.ridge) joffset <- joffset + ncol(pen.ridge) } # for jay ooo <- matrix(1:(ncol.H.sm.osps[ii] * ncol.X.sm.osps[ii]), nrow = ncol.H.sm.osps[ii], # Redundant really ncol = ncol.X.sm.osps[ii], byrow = TRUE) pen.new.list[[ii]] <- Dmat1[, c(ooo), drop = FALSE] names(pen.new.list)[[ii]] <- nsm.osps[ii] # nsm.osps[[ii]] 
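      # (Added comment.) c(ooo) permutes the block-built penalty
      # columns from constraint-major order into basis-major order,
      # apparently to match the column ordering of the corresponding
      # big (VLM) model matrix.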
ioffset <- 0 joffset <- 0 } # if-else ridge.adj } # for ncol.allterms <- sapply(assignx, length) ncol.model <- if (names(constraints)[[1]] == "(Intercept)") ncol.allterms[-1] else ncol.allterms nrowpen.new.list <- sapply(pen.new.list, nrow) nrowPen <- sum(nrowpen.new.list) ncolPen <- sum(ncol.allterms * ncolHlist.model) iioffset <- 0 Dmat2 <- matrix(0, nrowPen, ncolPen) jay <- 0 jjoffset <- if (names(constraints)[[1]] == "(Intercept)") ncolHlist.model[1] else 0 for (ii in seq_along(term.labels)) { if (indexterms[ii]) { jay <- jay + 1 ind.x <- iioffset + 1:nrow(pen.new.list[[jay]]) ind.y <- jjoffset + 1:ncol(pen.new.list[[jay]]) Dmat2[ind.x, ind.y] <- pen.new.list[[jay]] iioffset <- iioffset + nrow(pen.new.list[[jay]]) jjoffset <- jjoffset + ncol(pen.new.list[[jay]]) } else { jjoffset <- jjoffset + ncolHlist.new[ii] * ncol.model[ii] } } # ii Xvlm.aug <- Dmat2 attr(Xvlm.aug, "spar.vlm") <- spar.new Xvlm.aug } # get.X.VLM.aug VGAM/R/family.survival.R0000644000176200001440000003407313135276757014512 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. double.cens.normal <- function(r1 = 0, r2 = 0, lmu = "identitylink", lsd = "loge", imu = NULL, isd = NULL, zero = "sd") { if (!is.Numeric(r1, length.arg = 1, integer.valued = TRUE) || r1 < 0) stop("bad input for 'r1'") if (!is.Numeric(r2, length.arg = 1, integer.valued = TRUE) || r2 < 0) stop("bad input for 'r2'") lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lsd <- as.list(substitute(lsd)) esd <- link2list(lsd) lsd <- attr(esd, "function.name") new("vglmff", blurb = c("Univariate normal distribution with double censoring\n\n", "Links: ", namesof("mu", lmu, earg = emu, tag = TRUE), ", ", namesof("sd", lsd, earg = esd, tag = TRUE), "\n", "Variance: sd^2"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }) , list( .zero = zero))), infos = eval(substitute(function(...) 
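  # (Added comment.) r1 and r2, the numbers of observations censored
  # below the smallest and above the largest uncensored value, are
  # fixed constants of the family; only mu and sd are modelled.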
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("mu", "sd"), lmu = .lmu , lsd = .lsd , zero = .zero ) }, list( .zero = zero, .lmu = lmu, .lsd = lsd ))), initialize = eval(substitute(expression({ predictors.names <- c(namesof("mu", .lmu , earg = .emu , tag = FALSE), namesof("sd", .lsd , earg = .esd , tag = FALSE)) if (ncol(y <- cbind(y)) != 1) stop("the response must be a vector or a one-column matrix") if (length(w) != n || !is.Numeric(w, integer.valued = TRUE, positive = TRUE)) stop("the argument 'weights' must be a vector ", "of positive integers") sumw <- sum(w) extra$bign <- sumw + .r1 + .r2 # Tot num of censored & uncensored obsns if (!length(etastart)) { yyyy.est <- if (length( .imu )) .imu else median(y) sd.y.est <- if (length( .isd )) .isd else { junk <- lm.wfit(x = x, y = c(y), w = c(w)) 1.25 * sqrt( sum(w * junk$resid^2) / junk$df.residual ) } yyyy.est <- rep_len(yyyy.est , n) sd.y.est <- rep_len(sd.y.est , n) etastart <- cbind(mu = theta2eta(yyyy.est, .lmu , earg = .emu ), sd = theta2eta(sd.y.est, .lsd , earg = .esd )) } }) , list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .imu = imu, .isd = isd, .r1 = r1, .r2 = r2 ))), linkinv = function(eta, extra = NULL) eta[, 1], last = eval(substitute(expression({ misc$link <- c(mu = .lmu , sd = .lsd ) misc$earg <- list(mu = .emu , sd = .esd ) misc$multipleResponses <- FALSE misc$expected <- TRUE misc$r1 <- .r1 misc$r2 <- .r2 }) , list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .r1 = r1, .r2 = r2 ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { sd <- eta2theta(eta[, 2], .lsd , earg = .esd ) if (!summation) stop("cannot handle 'summation = FALSE' yet") if (residuals) { stop("loglikelihood residuals not implemented yet") } else { sum(w * dnorm(y, m = mu, sd = sd, log = TRUE)) + (if ( .r1 == 0) 0 else { z1 <- min((y - mu) / sd) Fz1 <- pnorm(z1) .r1 * log(Fz1)}) + (if ( .r2 == 0) 0 else { z2 <- max((y - mu) / sd) Fz2 <- pnorm(z2) .r2 * log1p(-Fz2)}) } } , list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .r1 = r1, .r2 = r2 ))), vfamily = c("double.cens.normal"), validparams = eval(substitute(function(eta, y, extra = NULL) { mu <- eta[, 1] sd <- eta2theta(eta[, 2], .lsd , earg = .esd ) okay1 <- all(is.finite(mu)) && all(is.finite(sd)) && all(0 < sd) okay1 }, list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .r1 = r1, .r2 = r2 ))), deriv = eval(substitute(expression({ sd <- eta2theta(eta[, 2], .lsd, earg =.esd) q1 <- .r1 / extra$bign q2 <- .r2 / extra$bign pee <- 1 - q1 - q2 # 1 if r1==r2==0 z1 <- if ( .r1 == 0) - 100 else min((y - mu) / sd) # 100==Inf z2 <- if ( .r2 == 0) + 100 else max((y - mu) / sd) # 100==Inf fz1 <- if ( .r1 == 0) 0 else dnorm(z1) fz2 <- if ( .r2 == 0) 0 else dnorm(z2) Fz1 <- if ( .r1 == 0) 0.02 else pnorm(z1) # 0/0 undefined Fz2 <- if ( .r2 == 0) 0.99 else pnorm(z2) dl.dmu <- (y - mu) / sd^2 + ((- .r1 * fz1/Fz1 + .r2 * fz2/(1-Fz2)) / sd) / (n*w) dl.dsd <- -1/sd + (y-mu)^2 / sd^3 + ((- .r1 * z1*fz1/Fz1 + .r2 * z2*fz2/(1-Fz2)) / sd) / (n*w) dmu.deta <- dtheta.deta(mu, .lmu , earg =.emu ) dsd.deta <- dtheta.deta(sd, .lsd , earg =.esd ) c(w) * cbind(dl.dmu * dmu.deta, dl.dsd * dsd.deta) }) , list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .r1 = r1, .r2 = r2 ))), weight = expression({ wz <- matrix(NA_real_, n, dimm(M)) Q.1 <- ifelse(q1 == 0, 1, q1) # Saves division by 0 below; not elegant Q.2 <- ifelse(q2 == 0, 1, q2) # Saves division by 0 below; not elegant ed2l.dmu2 <- 1 / (sd^2) + 
((fz1*(z1+fz1/Q.1) - fz2*(z2-fz2/Q.2)) / sd^2) / (pee*w) ed2l.dmusd <- ((fz1-fz2 + z1*fz1*(z1+fz1/Q.1) - z2*fz2*(z2-fz2/Q.2)) / sd^2) / (pee*w) ed2l.dsd2 <- 2 / (sd^2) + ((z1*fz1-z2*fz2 + z1^2 *fz1 *(z1+fz1/Q.1) - z2^2 *fz2*(z2-fz2/Q.2)) / sd^2) / (pee*w) wz[,iam(1,1,M)] <- w * ed2l.dmu2 * dmu.deta^2 wz[,iam(2,2,M)] <- w * ed2l.dsd2 * dsd.deta^2 wz[,iam(1,2,M)] <- w * ed2l.dmusd * dsd.deta * dmu.deta wz })) } dbisa <- function(x, scale = 1, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(shape), length(scale)) if (length(x) != L) x <- rep_len(x, L) if (length(shape) != L) shape <- rep_len(shape, L) if (length(scale) != L) scale <- rep_len(scale, L) logdensity <- rep_len(log(0), L) xok <- (x > 0) xifun <- function(x) { temp <- sqrt(x) temp - 1/temp } logdensity[xok] <- dnorm(xifun(x[xok] / scale[xok]) / shape[xok], log = TRUE) + log1p(scale[xok]/x[xok]) - log(2) - log(shape[xok]) - 0.5 * log(x[xok]) - 0.5 * log(scale[xok]) logdensity[scale <= 0] <- NaN logdensity[shape <= 0] <- NaN if (log.arg) logdensity else exp(logdensity) } pbisa <- function(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) { ans <- pnorm(((temp <- sqrt(q/scale)) - 1/temp) / shape, lower.tail = lower.tail, log.p = log.p) ans[scale < 0 | shape < 0] <- NaN ans[q <= 0] <- if (lower.tail) ifelse(log.p, log(0), 0) else ifelse(log.p, log(1), 1) ans } qbisa <- function(p, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") A <- qnorm(p, lower.tail = lower.tail, log.p = log.p) temp1 <- A * shape * sqrt(4 + A^2 * shape^2) ans1 <- (2 + A^2 * shape^2 + temp1) * scale / 2 ans2 <- (2 + A^2 * shape^2 - temp1) * scale / 2 if (lower.tail) { if (log.p) { ln.p <- p ans <- ifelse(exp(p) < 0.5, pmin(ans1, ans2), pmax(ans1, ans2)) ans[ln.p == -Inf] <- 0 ans[ln.p == 0] <- Inf #ans[ln.p > 0] <- NaN } else { ans <- ifelse(p < 0.5, pmin(ans1, ans2), pmax(ans1, ans2)) #ans[p < 0] <- NaN ans[p == 0] <- 0 ans[p == 1] <- Inf #ans[p > 1] <- NaN } } else { if (log.p) { ln.p <- p ans <- ifelse(-expm1(p) < 0.5, pmin(ans1, ans2), pmax(ans1, ans2)) ans[ln.p == -Inf] <- Inf ans[ln.p == 0] <- 0 #ans[ln.p > 0] <- NaN } else { ans <- ifelse(p > 0.5, pmin(ans1, ans2), pmax(ans1, ans2)) #ans[p < 0] <- NaN ans[p == 0] <- Inf ans[p == 1] <- 0 #ans[p > 1] <- NaN } } ans[scale < 0 | shape < 0] <- NaN ans } rbisa <- function(n, scale = 1, shape) { A <- rnorm(n) temp1 <- A * shape temp1 <- temp1 * sqrt(4 + temp1^2) ans1 <- (2 + A^2 * shape^2 + temp1) * scale / 2 ans2 <- (2 + A^2 * shape^2 - temp1) * scale / 2 ans <- ifelse(A < 0, pmin(ans1, ans2), pmax(ans1, ans2)) ans[shape <= 0] <- NaN ans[scale <= 0] <- NaN ans } bisa <- function(lscale = "loge", lshape = "loge", iscale = 1, ishape = NULL, imethod = 1, zero = "shape", nowarning = FALSE) { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (length(ishape) && !is.Numeric(ishape, positive = TRUE)) stop("bad input for argument 'ishape'") if (!is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") 
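  # (Added comment.) imethod selects the shape initialisation used in
  # @initialize below: 1 gives a moment-type estimate from the sample
  # mean and harmonic mean of y, whereas 2 and 3 use cruder
  # pmax()-based starting values.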
new("vglmff", blurb = c("Birnbaum-Saunders distribution\n\n", "Links: ", namesof("scale", lscale, earg = escale, tag = TRUE), "; ", namesof("shape", lshape, earg = eshape, tag = TRUE)), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }) , list( .zero = zero))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("scale", "shape"), lscale = .lscale , lshape = .lshape , zero = .zero ) }, list( .zero = zero, .lscale = lscale, .lshape = lshape ))), initialize = eval(substitute(expression({ if (ncol(y <- cbind(y)) != 1) stop("the response must be a vector or a one-column matrix") predictors.names <- c(namesof("scale", .lscale , earg = .escale , tag = FALSE), namesof("shape", .lshape , earg = .eshape , tag = FALSE)) if (!length(etastart)) { scale.init <- rep_len( .iscale , n) shape.init <- if (is.Numeric( .ishape)) rep_len( .ishape , n) else { if ( .imethod == 1) { ybar <- rep_len(weighted.mean(y, w), n) ybarr <- rep_len(1 / weighted.mean(1/y, w), n) # Reqrs y > 0 sqrt(ybar / scale.init + scale.init / ybarr - 2) } else if ( .imethod == 2) { sqrt(2*( pmax(y, scale.init+0.1) / scale.init - 1)) } else { ybar <- rep_len(weighted.mean(y, w), n) sqrt(2*(pmax(ybar, scale.init + 0.1) / scale.init - 1)) } } etastart <- cbind(theta2eta(scale.init, .lscale , earg = .escale ), theta2eta(shape.init, .lshape , earg = .eshape )) } }) , list( .lshape = lshape, .lscale = lscale, .ishape = ishape, .iscale = iscale, .eshape = eshape, .escale = escale, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { sc <- eta2theta(eta[, 1], .lscale , earg = .escale ) sh <- eta2theta(eta[, 2], .lshape , earg = .eshape ) sc * (1 + sh^2 / 2) }, list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale ))), last = eval(substitute(expression({ misc$link <- c(scale = .lscale , shape = .lshape ) misc$earg <- list(scale = .escale , shape = .eshape ) misc$expected <- TRUE misc$multipleResponses <- FALSE }) , list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { sc <- eta2theta(eta[, 1], .lscale , earg = .escale ) sh <- eta2theta(eta[, 2], .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dbisa(x = y, scale = sc, shape = sh, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale ))), vfamily = c("bisa"), validparams = eval(substitute(function(eta, y, extra = NULL) { sc <- eta2theta(eta[, 1], .lscale , earg = .escale ) sh <- eta2theta(eta[, 2], .lshape , earg = .eshape ) okay1 <- all(is.finite(sc)) && all(0 < sc) && all(is.finite(sh)) && all(0 < sh) okay1 }, list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale ))), deriv = eval(substitute(expression({ sc <- eta2theta(eta[, 1], .lscale , earg = .escale ) sh <- eta2theta(eta[, 2], .lshape , earg = .eshape ) dl.dsh <- ((y/sc - 2 + sc/y) / sh^2 - 1) / sh dl.dsc <- -0.5 / sc + 1/(y+sc) + sqrt(y) * ((y+sc)/y) * (sqrt(y/sc) - sqrt(sc/y)) / (2 * sh^2 * sc^1.5) dsh.deta <- dtheta.deta(sh, .lshape , earg = .eshape ) dsc.deta <- dtheta.deta(sc, .lscale , earg = .escale ) c(w) * cbind(dl.dsc * dsc.deta, dl.dsh * dsh.deta) }) , list( .lshape = lshape, .lscale = 
lscale, .eshape = eshape, .escale = escale ))), weight = eval(substitute(expression({ wz <- matrix(NA_real_, n, M) # Diagonal!! wz[, iam(2, 2, M)] <- 2 * dsh.deta^2 / sh^2 hfunction <- function(alpha) alpha * sqrt(pi/2) - pi * exp(2/alpha^2) * pnorm(2/alpha, lower.tail = FALSE) wz[, iam(1, 1, M)] <- dsc.deta^2 * (sh * hfunction(sh) / sqrt(2*pi) + 1) / (sh*sc)^2 c(w) * wz }), list( .zero = zero )))) } VGAM/R/cqo.R0000644000176200001440000001225213135276757012134 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. cqo <- function(formula, family = stop("argument 'family' needs to be assigned"), data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = qrrvglm.control(...), offset = NULL, method = "cqo.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = NULL, smart = TRUE, ...) { dataname <- as.character(substitute(data)) # "list" if no data = function.name <- "cqo" ocall <- match.call() if (smart) setup.smart("write") mt <- terms(formula, data = data) if (missing(data)) data <- environment(formula) mf <- match.call(expand.dots = FALSE) mf$family <- mf$method <- mf$model <- mf$x.arg <- mf$y.arg <- mf$control <- mf$contrasts <- mf$constraints <- mf$extra <- NULL mf$coefstart <- mf$etastart <- mf$... <- NULL mf$smart <- NULL mf$drop.unused.levels <- TRUE mf[[1]] <- as.name("model.frame") mf <- eval(mf, parent.frame()) if (method == "model.frame") return(mf) na.act <- attr(mf, "na.action") xvars <- as.character(attr(mt, "variables"))[-1] if ((yvar <- attr(mt, "response")) > 0) xvars <- xvars[-yvar] xlev <- if (length(xvars) > 0) { xlev <- lapply(mf[xvars], levels) xlev[!sapply(xlev, is.null)] } y <- model.response(mf, "numeric") # model.extract(mf, "response") x <- model.matrix(mt, mf, contrasts) attr(x, "assign") <- attrassigndefault(x, mt) offset <- model.offset(mf) if (is.null(offset)) offset <- 0 # yyy ??? w <- model.weights(mf) if (!length(w)) { w <- rep_len(1, nrow(mf)) } else if (NCOL(w) == 1 && any(w < 0)) stop("negative weights not allowed") if (is.character(family)) family <- get(family) if (is.function(family)) family <- family() if (!inherits(family, "vglmff")) { stop("'family = ", family, "' is not a VGAM family function") } control$criterion <- "coefficients" # Specifically 4 vcontrol.expression eval(vcontrol.expression) if (!is.null(family@first)) eval(family@first) cqo.fitter <- get(method) deviance.Bestof <- rep_len(NA_real_, control$Bestof) for (tries in 1:control$Bestof) { if (control$trace && (control$Bestof>1)) cat(paste("\n========================= Fitting model", tries, "=========================\n")) onefit <- cqo.fitter(x = x, y = y, w = w, offset = offset, etastart = etastart, mustart = mustart, coefstart = coefstart, family = family, control = control, constraints = constraints, extra = extra, Terms = mt, function.name = function.name, ...) 
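    # (Added comment.) cqo.fitter() is run control$Bestof times, each
    # time from different (typically random) initial values; the
    # comparison below keeps the fit whose stored criterion (the
    # deviance, or else the log-likelihood) is smallest.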
deviance.Bestof[tries] <- if (length(onefit$crit.list$deviance)) onefit$crit.list$deviance else onefit$crit.list$loglikelihood if (tries == 1 || min(deviance.Bestof[1:(tries-1)]) > deviance.Bestof[tries]) fit <- onefit } fit$misc$deviance.Bestof <- deviance.Bestof fit$misc$dataname <- dataname if (smart) { fit$smart.prediction <- get.smart.prediction() wrapup.smart() } answer <- new(Class = "qrrvglm", "assign" = attr(x, "assign"), "call" = ocall, "coefficients" = fit$coefficients, "constraints" = fit$constraints, "criterion" = fit$crit.list, # list("deviance" = min(deviance.Bestof)), "dispersion" = 1, "family" = fit$family, "misc" = fit$misc, "model" = if (model) mf else data.frame(), "residuals" = as.matrix(fit$residuals), "smart.prediction" = as.list(fit$smart.prediction), "terms" = list(terms = mt)) if (!smart) answer@smart.prediction <- list(smart.arg = FALSE) if (length(attr(x, "contrasts"))) slot(answer, "contrasts") <- attr(x, "contrasts") if (length(fit$fitted.values)) slot(answer, "fitted.values") <- as.matrix(fit$fitted.values) slot(answer, "na.action") <- if (length(na.act)) list(na.act) else list() if (length(offset)) slot(answer, "offset") <- as.matrix(offset) if (length(fit$weights)) slot(answer, "weights") <- as.matrix(fit$weights) if (x.arg) slot(answer, "x") <- fit$x # The 'small' design matrix if (length(xlev)) slot(answer, "xlevels") <- xlev if (y.arg) slot(answer, "y") <- as.matrix(fit$y) fit$control$min.criterion <- TRUE # needed for calibrate; a special case slot(answer, "control") <- fit$control slot(answer, "extra") <- if (length(fit$extra)) { if (is.list(fit$extra)) fit$extra else { warning("'extra' is not a list, therefore placing ", "'extra' into a list") list(fit$extra) } } else list() # R-1.5.0 slot(answer, "iter") <- fit$iter fit$predictors <- as.matrix(fit$predictors) # Must be a matrix dimnames(fit$predictors) <- list(dimnames(fit$predictors)[[1]], fit$misc$predictors.names) slot(answer, "predictors") <- fit$predictors if (length(fit$prior.weights)) slot(answer, "prior.weights") <- as.matrix(fit$prior.weights) answer } attr(cqo, "smart") <- TRUE VGAM/R/sm.ps.R0000644000176200001440000001144713135276760012411 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. sm.ps <- function(x, ..., ps.int = NULL, spar = -1, # was 0 prior to 20160810 degree = 3, p.order = 2, ridge.adj = 1e-5, # ridge.inv = 0.0001, spillover = 0.01, maxspar = 1e12, outer.ok = FALSE, mux = NULL, # 1.25, fixspar = FALSE) { xs <- substitute(x) ans <- as.character(xs) x.index <- as.vector(x) x.orig <- x.index xdots <- list(...) 
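# Note: extra vectors supplied via '...' (the xij facility) are pooled with
# 'x' below, so that a single B-spline basis with one common knot sequence,
# spanning the combined range, serves all of them.
# A minimal usage sketch (hypothetical response 'y' and data frame 'pdat'):
#   fit <- vgam(y ~ sm.ps(x2, ps.int = 10), poissonff, data = pdat)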
uses.xij <- length(xdots) > 0 if (uses.xij) x.index <- as.vector(c(x.index, unlist(xdots))) if (is.null(ps.int)) { ps.int <- if (length(mux)) { nux <- length(unique(x.index)) ceiling(mux * log(nux)) } else { min(max(degree, 7), length(x.index) - 2) } } # if (is.null(ps.int)) if (length(x.index) - 1 <= ps.int) stop("argument 'ps.int' is too large") xl <- min(x.index) xr <- max(x.index) if (smart.mode.is("read")) { smartlist <- get.smart() xl <- smartlist$xl # Overwrite its value xr <- smartlist$xr # Overwrite its value ps.int <- smartlist$ps.int # Ditto spar <- smartlist$spar degree <- smartlist$degree p.order <- smartlist$p.order ridge.adj <- smartlist$ridge.adj spillover <- smartlist$spillover maxspar <- smartlist$maxspar maXX <- smartlist$maXX Cmat <- smartlist$Cmat outer.ok <- smartlist$outer.ok mux <- smartlist$mux fixspar <- smartlist$fixspar } else { maXX <- NULL Cmat <- NULL } xmax <- xr + spillover * (xr - xl) xmin <- xl - spillover * (xr - xl) dx <- (xmax - xmin) / ps.int nx <- names(x.index) nax <- is.na(x.index) if (nas <- any(nax)) x.index <- x[!nax] s.order <- degree + 1 if (length(ps.int)) { nAknots <- ps.int - 1 if (nAknots < 1) { nAknots <- 1 warning("'ps.int' was too small; have used 2") } if (FALSE && # nux < 6 && smart.mode.is("write")) warning("smoothing when there are less than 6 distinct 'x' values", " is not advised") if (nAknots > 0) { Aknots <- seq(from = xmin - degree * dx, to = xmax + degree * dx, by = dx) } else { knots <- NULL } } # length(ps.int) basis <- splineDesign(Aknots, x.index, s.order, 0 * x.index, outer.ok = outer.ok) n.col <- ncol(basis) if (nas) { nmat <- matrix(NA_real_, length(nax), n.col) nmat[!nax, ] <- basis basis <- nmat } dimnames(basis) <- list(1:nrow(basis), 1:n.col) if ((p.order - n.col + 1) > 0) { p.order <- n.col - 1 warning("argument 'p.order' was too large; have used ", n.col - 1) } fixspar <- rep_len(fixspar, max(length(fixspar), length(spar))) spar <- rep_len( spar, max(length(fixspar), length(spar))) if (any(spar < 0 & fixspar)) { spar[spar < 0 & fixspar] <- 0 warning("some 'spar' values are negative : have used 'spar' = ", paste(spar, collapse = ", ")) } if (any(spar > maxspar)) { spar[spar > maxspar] <- maxspar warning("some 'spar' values are > ", maxspar, ": ", "for stability have used 'spar' = ", paste(spar, collapse = ", ")) } aug <- if (p.order > 0) diff(diag(n.col), diff = p.order) else diag(n.col) pen.aug <- crossprod(aug) if (is.null(maXX)) maXX <- mean(abs(crossprod(basis))) maS <- mean(abs(pen.aug)) / maXX pen.aug <- pen.aug / maS kk <- ncol(basis) if (is.null(Cmat)) Cmat <- matrix(colSums(basis), 1, kk) qrCt <- qr(t(Cmat)) jay <- nrow(Cmat) # 1 XZ <- t(qr.qty(qrCt, t(basis))[(jay+1):kk, ]) ZtSZ <- qr.qty(qrCt, t(qr.qty(qrCt, t(pen.aug))))[(jay+1):kk, (jay+1):kk] basis <- XZ if (smart.mode.is("write")) put.smart(list(xl = xl, xr = xr, ps.int = ps.int, spar = spar, degree = degree, p.order = p.order, ridge.adj = ridge.adj, spillover = spillover, maxspar = maxspar, maXX = maXX, Cmat = Cmat, outer.ok = outer.ok, mux = mux, fixspar = fixspar)) basis <- basis[seq_along(x.orig), , drop = FALSE] attr(basis, "S.arg") <- ZtSZ attr(basis, "degree") <- degree attr(basis, "knots") <- Aknots attr(basis, "spar") <- spar # Vector attr(basis, "p.order") <- p.order attr(basis, "ps.int") <- ps.int attr(basis, "ridge.adj") <- ridge.adj attr(basis, "outer.ok") <- outer.ok attr(basis, "mux") <- mux attr(basis, "fixspar") <- fixspar basis } VGAM/R/family.binomial.R0000644000176200001440000034041113135276757014425 0ustar liggesusers# These 
functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. process.binomial2.data.VGAM <- expression({ if (!all(w == 1)) extra$orig.w <- w if (!is.matrix(y)) { yf <- as.factor(y) lev <- levels(yf) llev <- length(lev) if (llev != 4) stop("response must have 4 levels") nn <- length(yf) y <- matrix(0, nn, llev) y[cbind(1:nn, as.vector(unclass(yf)))] <- 1 colnamesy <- paste(lev, ":", c("00", "01", "10", "11"), sep = "") dimnames(y) <- list(names(yf), colnamesy) input.type <- 1 } else if (ncol(y) == 2) { if (!all(y == 0 | y == 1)) stop("response must contain 0's and 1's only") col.index <- y[, 2] + 2*y[, 1] + 1 # 1:4 nn <- nrow(y) y <- matrix(0, nn, 4) y[cbind(1:nn, col.index)] <- 1 dimnames(y) <- list(dimnames(y)[[1]], c("00", "01", "10", "11")) input.type <- 2 } else if (ncol(y) == 4) { input.type <- 3 } else stop("response unrecognized") nvec <- rowSums(y) w <- w * nvec y <- y / nvec # Convert to proportions if (length(mustart) + length(etastart) == 0) { mu <- y + (1 / ncol(y) - y) / nvec dimnames(mu) <- dimnames(y) mustart <- mu } }) betabinomial.control <- function(save.weights = TRUE, ...) { list(save.weights = save.weights) } betabinomial <- function(lmu = "logit", lrho = "logit", irho = NULL, imethod = 1, ishrinkage = 0.95, nsimEIM = NULL, zero = "rho") { lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lrho <- as.list(substitute(lrho)) erho <- link2list(lrho) lrho <- attr(erho, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1, 2, 3 or 4") if (!is.Numeric(ishrinkage, length.arg = 1) || ishrinkage < 0 || ishrinkage > 1) stop("bad input for argument 'ishrinkage'") if (!is.null(nsimEIM)) { if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 10) warning("'nsimEIM' should be an integer greater than 10, say") } new("vglmff", blurb = c("Beta-binomial model\n", "Links: ", namesof("mu", lmu, earg = emu), ", ", namesof("rho", lrho, earg = erho), "\n", "Mean: mu", "\n", "Variance: mu*(1-mu)*(1+(w-1)*rho)/w"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 3, expected = TRUE, multipleResponses = FALSE, parameters.names = c("mu", "rho"), imethod = .imethod , ishrinkage = .ishrinkage , nsimEIM = .nsimEIM , lmu = .lmu , lrho = .lrho , zero = .zero ) }, list( .lmu = lmu, .lrho = lrho, .imethod = imethod, .ishrinkage = ishrinkage, .nsimEIM = nsimEIM, .zero = zero ))), initialize = eval(substitute(expression({ if (!all(w == 1)) extra$orig.w <- w if (is.null( .nsimEIM )) { save.weights <- control$save.weights <- FALSE } mustart.orig <- mustart eval(binomialff()@initialize) # Note: n,w,y,mustart is changed if (length(mustart.orig)) mustart <- mustart.orig # Retain it if inputted ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts if (max(abs(ycounts - round(ycounts))) > 1.0e-6) warning("the response (as counts) does not appear to ", "be integer-valued. 
Am rounding to integer values.") ycounts <- round(ycounts) # Make sure it is an integer predictors.names <- c(namesof("mu", .lmu , earg = .emu , tag = FALSE), namesof("rho", .lrho , earg = .erho , tag = FALSE)) if (!length(etastart)) { betabinomial.Loglikfun <- function(rhoval, y, x, w, extraargs) { shape1 <- extraargs$mustart * (1-rhoval) / rhoval shape2 <- (1-extraargs$mustart) * (1-rhoval) / rhoval ycounts <- extraargs$ycounts # Ought to be integer-valued nvec <- extraargs$nvec sum(dbetabinom.ab(x = ycounts, size = nvec, shape1 = shape1, shape2 = shape2, log = TRUE)) } rho.grid <- seq(0.05, 0.95, len = 25) # rvar = mustart.use <- if (length(mustart.orig)) { mustart.orig } else if ( .imethod == 1) { rep_len(weighted.mean(y, w), n) } else if ( .imethod == 2) { .ishrinkage * weighted.mean(y, w) + (1 - .ishrinkage ) * y } else if ( .imethod == 3) { y.matrix <- cbind(y) mat.temp <- matrix(colMeans(y.matrix), nrow(y.matrix), ncol(y.matrix), byrow = TRUE) 0.5 * mustart + 0.5 * mat.temp } else { mustart } try.this <- grid.search(rho.grid, objfun = betabinomial.Loglikfun, y = y, x = x, w = w, extraargs = list( ycounts = ycounts, nvec = if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w), mustart = mustart.use)) init.rho <- if (is.Numeric( .irho )) rep_len( .irho , n) else rep_len(try.this, n) etastart <- cbind(theta2eta(mustart.use, .lmu , earg = .emu ), theta2eta(init.rho, .lrho , earg = .erho )) mustart <- NULL # Since etastart has been computed. } }), list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho, .imethod = imethod, .ishrinkage = ishrinkage, .nsimEIM = nsimEIM, .irho = irho ))), linkinv = eval(substitute(function(eta, extra = NULL) eta2theta(eta[, 1], .lmu , earg = .emu ), list( .lmu = lmu, .emu = emu ))), last = eval(substitute(expression({ misc$link <- c(mu = .lmu , rho = .lrho) misc$earg <- list(mu = .emu , rho = .erho ) misc$zero <- .zero misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$rho <- 1 / (shape1 + shape2 + 1) }), list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho, .nsimEIM = nsimEIM, .zero = zero ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts mymu <- eta2theta(eta[, 1], .lmu , earg = .emu ) rho <- eta2theta(eta[, 2], .lrho , earg = .erho ) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) rho <- pmax(rho, smallno) rho <- pmin(rho, 1 - smallno) shape1 <- mymu * (1 - rho) / rho shape2 <- (1 - mymu) * (1 - rho) / rho nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- (if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dbetabinom.ab(x = ycounts, size = nvec, shape1 = shape1, shape2 = shape2, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho ))), vfamily = c("betabinomial"), validparams = eval(substitute(function(eta, y, extra = NULL) { mymu <- eta2theta(eta[, 1], .lmu , earg = .emu ) rho <- eta2theta(eta[, 2], .lrho , earg = .erho ) okay1 <- all(is.finite(mymu)) && all(0 < mymu & mymu < 1) && all(is.finite(rho )) && all(0 < rho & rho < 1) okay1 }, list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho ))), simslot = eval(substitute( function(object, nsim) { pwts <- 
if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") w <- pwts eta <- predict(object) extra <- object@extra mymu <- eta2theta(eta[, 1], .lmu , earg = .emu ) rho <- eta2theta(eta[, 2], .lrho , earg = .erho ) nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) rbetabinom(nsim * length(rho), size = nvec, prob = mymu, rho = rho) }, list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho ))), deriv = eval(substitute(expression({ nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts ycounts <- round(ycounts) mymu <- eta2theta(eta[, 1], .lmu , earg = .emu ) rho <- eta2theta(eta[, 2], .lrho , earg = .erho ) smallno <- 100 * .Machine$double.eps rho <- pmax(rho, smallno) rho <- pmin(rho, 1-smallno) shape1 <- mymu * (1 - rho) / rho shape2 <- (1 - mymu) * (1 - rho) / rho dshape1.dmu <- (1 - rho) / rho dshape2.dmu <- -(1 - rho) / rho dshape1.drho <- -mymu / rho^2 dshape2.drho <- -(1 - mymu) / rho^2 dmu.deta <- dtheta.deta(mymu, .lmu , earg = .emu ) drho.deta <- dtheta.deta(rho, .lrho , earg = .erho ) dl.dmu <- dshape1.dmu * (digamma(shape1+ycounts) - digamma(shape2+nvec-ycounts) - digamma(shape1) + digamma(shape2)) dl.drho <- (-1/rho^2) * (mymu * digamma(shape1 + ycounts) + (1 - mymu) * digamma(shape2 + nvec - ycounts) - digamma(shape1 + shape2 + nvec) - mymu * digamma(shape1) - (1 - mymu)*digamma(shape2) + digamma(shape1+shape2)) c(if (is.numeric(extra$orig.w)) extra$orig.w else 1) * cbind(dl.dmu * dmu.deta, dl.drho * drho.deta) }), list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho ))), weight = eval(substitute(expression({ if (is.null( .nsimEIM )) { wz <- matrix(NA_real_, n, dimm(M)) #3=dimm(2) wz11 <- -(expected.betabin.ab(nvec, shape1, shape2, TRUE) - trigamma(shape1+shape2+nvec) - trigamma(shape1) + trigamma(shape1+shape2)) wz22 <- -(expected.betabin.ab(nvec, shape1, shape2, FALSE) - trigamma(shape1+shape2+nvec) - trigamma(shape2) + trigamma(shape1+shape2)) wz21 <- -(trigamma(shape1+shape2) - trigamma(shape1+shape2+nvec)) wz[, iam(1, 1, M)] <- dmu.deta^2 * (wz11 * dshape1.dmu^2 + wz22 * dshape2.dmu^2 + 2 * wz21 * dshape1.dmu * dshape2.dmu) wz[, iam(2, 2, M)] <- drho.deta^2 * (wz11 * dshape1.drho^2 + wz22 * dshape2.drho^2 + 2 * wz21 * dshape1.drho * dshape2.drho) wz[, iam(2, 1, M)] <- dmu.deta * drho.deta * (dshape1.dmu*(wz11*dshape1.drho + wz21*dshape2.drho) + dshape2.dmu*(wz21*dshape1.drho + wz22*dshape2.drho)) wz * (if (is.numeric(extra$orig.w)) extra$orig.w else 1) } else { run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) dthetas.detas <- cbind(dmu.deta, drho.deta) for (ii in 1:( .nsimEIM )) { ysim <- rbetabinom.ab(n = n, size = nvec, shape1 = shape1, shape2 = shape2) dl.dmu <- dshape1.dmu * (digamma(shape1+ysim) - digamma(shape2+nvec-ysim) - digamma(shape1) + digamma(shape2)) dl.drho <- (-1/rho^2) * (mymu * digamma(shape1+ysim) + (1-mymu) * digamma(shape2+nvec-ysim) - digamma(shape1+shape2+nvec) - mymu * digamma(shape1) - (1-mymu)*digamma(shape2) + digamma(shape1+shape2)) temp3 <- cbind(dl.dmu, dl.drho) # n x M matrix run.varcov <- run.varcov + temp3[, ind1$row.index] * temp3[, ind1$col.index] } run.varcov <- run.varcov / .nsimEIM wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] wz * (if 
(is.numeric(extra$orig.w)) extra$orig.w else 1) } }), list( .lmu = lmu, .lrho = lrho, .emu = emu, .erho = erho, .nsimEIM = nsimEIM )))) } dbinom2.or <- function(mu1, mu2 = if (exchangeable) mu1 else stop("'mu2' not specified"), oratio = 1, exchangeable = FALSE, tol = 0.001, colnames = c("00", "01", "10", "11"), ErrorCheck = TRUE) { if (ErrorCheck) { if (!is.Numeric(mu1, positive = TRUE) || max(mu1) >= 1) stop("bad input for argument 'mu1'") if (!is.Numeric(mu2, positive = TRUE) || max(mu2) >= 1) stop("bad input for argument 'mu2'") if (!is.Numeric(oratio, positive = TRUE)) stop("bad input for argument 'oratio'") if (!is.Numeric(tol, positive = TRUE, length.arg = 1) || tol > 0.1) stop("bad input for argument 'tol'") if (exchangeable && max(abs(mu1 - mu2)) > 0.00001) stop("argument 'exchangeable' is TRUE but 'mu1' and 'mu2' differ") } L <- max(length(mu1), length(mu2), length(oratio)) if (length(oratio) != L) oratio <- rep_len(oratio, L) if (length(mu1 ) != L) mu1 <- rep_len(mu1, L) if (length(mu2 ) != L) mu2 <- rep_len(mu2, L) a.temp <- 1 + (mu1+mu2)*(oratio-1) b.temp <- -4 * oratio * (oratio-1) * mu1 * mu2 temp <- sqrt(a.temp^2 + b.temp) p11 <- ifelse(abs(oratio-1) < tol, mu1*mu2, (a.temp-temp)/(2*(oratio-1))) p01 <- mu2 - p11 p10 <- mu1 - p11 p00 <- 1 - p11 - p01 - p10 matrix(c(p00, p01, p10, p11), L, 4, dimnames = list(NULL, colnames)) } rbinom2.or <- function(n, mu1, mu2 = if (exchangeable) mu1 else stop("argument 'mu2' not specified"), oratio = 1, exchangeable = FALSE, tol = 0.001, twoCols = TRUE, colnames = if (twoCols) c("y1", "y2") else c("00", "01", "10", "11"), ErrorCheck = TRUE) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (ErrorCheck) { if (!is.Numeric(mu1, positive = TRUE) || max(mu1) >= 1) stop("bad input for argument 'mu1'") if (!is.Numeric(mu2, positive = TRUE) || max(mu2) >= 1) stop("bad input for argument 'mu2'") if (!is.Numeric(oratio, positive = TRUE)) stop("bad input for argument 'oratio'") if (!is.Numeric(tol, positive = TRUE, length.arg = 1) || tol > 0.1) stop("bad input for argument 'tol'") if (exchangeable && max(abs(mu1 - mu2)) > 0.00001) stop("argument 'exchangeable' is TRUE but 'mu1' and 'mu2' differ") } dmat <- dbinom2.or(mu1 = mu1, mu2 = mu2, oratio = oratio, exchangeable = exchangeable, tol = tol, ErrorCheck = ErrorCheck) answer <- matrix(0, use.n, 2, dimnames = list(NULL, if (twoCols) colnames else NULL)) yy <- runif(use.n) cs1 <- dmat[, "00"] + dmat[, "01"] cs2 <- cs1 + dmat[, "10"] index <- (dmat[, "00"] < yy) & (yy <= cs1) answer[index, 2] <- 1 index <- (cs1 < yy) & (yy <= cs2) answer[index, 1] <- 1 index <- (yy > cs2) answer[index,] <- 1 if (twoCols) { answer } else { answer4 <- matrix(0, use.n, 4, dimnames = list(NULL, colnames)) answer4[cbind(1:use.n, 1 + 2*answer[, 1] + answer[, 2])] <- 1 answer4 } } binom2.or <- function(lmu = "logit", lmu1 = lmu, lmu2 = lmu, loratio = "loge", imu1 = NULL, imu2 = NULL, ioratio = NULL, zero = "oratio", exchangeable = FALSE, tol = 0.001, more.robust = FALSE) { lmu1 <- lmu1 lmu2 <- lmu2 lmu1 <- as.list(substitute(lmu1)) emu1 <- link2list(lmu1) lmu1 <- attr(emu1, "function.name") lmu2 <- as.list(substitute(lmu2)) emu2 <- link2list(lmu2) lmu2 <- attr(emu2, "function.name") loratio <- as.list(substitute(loratio)) eoratio <- link2list(loratio) loratio <- attr(eoratio, "function.name") if (!is.logical(exchangeable)) warning("argument 'exchangeable' should be a single logical") if 
(is.logical(exchangeable) && exchangeable && ((lmu1 != lmu2) || !identical(emu1, emu2))) warning("exchangeable = TRUE but marginal links are not equal") if (!is.Numeric(tol, positive = TRUE, length.arg = 1) || tol > 0.1) stop("bad input for argument 'tol'") new("vglmff", blurb = c("Bivariate binomial regression with an odds ratio\n", "Links: ", namesof("mu1", lmu1, earg = emu1), ", ", namesof("mu2", lmu2, earg = emu2), "; ", namesof("oratio", loratio, earg = eoratio)), constraints = eval(substitute(expression({ cm.intercept.default <- diag(3) constraints <- cm.VGAM(matrix(c(1, 1, 0, 0, 0, 1), 3, 2), x = x, bool = .exchangeable , constraints = constraints, apply.int = TRUE, cm.default = cm.intercept.default, cm.intercept.default = cm.intercept.default) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .exchangeable = exchangeable, .zero = zero ))), deviance = Deviance.categorical.data.vgam, infos = eval(substitute(function(...) { list(M1 = 3, expected = TRUE, multipleResponses = FALSE, parameters.names = c("mu1", "mu2", "oratio"), exchangeable = .exchangeable , lmu1 = .lmu1 , lmu2 = .lmu2 , loratio = .loratio , zero = .zero ) }, list( .lmu1 = lmu1, .lmu2 = lmu2, .loratio = loratio, .zero = zero, .exchangeable = exchangeable ))), initialize = eval(substitute(expression({ mustart.orig <- mustart eval(process.binomial2.data.VGAM) if (length(mustart.orig)) mustart <- mustart.orig # Retain it if inputted predictors.names <- c(namesof("mu1", .lmu1 , earg = .emu1 , short = TRUE), namesof("mu2", .lmu2 , earg = .emu2 , short = TRUE), namesof("oratio", .loratio , earg = .eoratio , short = TRUE)) if (!length(etastart)) { pmargin <- cbind(mustart[, 3] + mustart[, 4], mustart[, 2] + mustart[, 4]) ioratio <- if (length( .ioratio )) rep_len( .ioratio , n) else mustart[, 4] * mustart[, 1] / (mustart[, 2] * mustart[, 3]) if (length( .imu1 )) pmargin[, 1] <- .imu1 if (length( .imu2 )) pmargin[, 2] <- .imu2 etastart <- cbind(theta2eta(pmargin[, 1], .lmu1 , earg = .emu1 ), theta2eta(pmargin[, 2], .lmu2 , earg = .emu2 ), theta2eta(ioratio, .loratio , earg = .eoratio )) } }), list( .lmu1 = lmu1, .lmu2 = lmu2, .loratio = loratio, .emu1 = emu1, .emu2 = emu2, .eoratio = eoratio, .imu1 = imu1, .imu2 = imu2, .ioratio = ioratio ))), linkinv = eval(substitute(function(eta, extra = NULL) { pmargin <- cbind(eta2theta(eta[, 1], .lmu1 , earg = .emu1 ), eta2theta(eta[, 2], .lmu2 , earg = .emu2 )) oratio <- eta2theta(eta[, 3], .loratio , earg = .eoratio ) a.temp <- 1 + (pmargin[, 1] + pmargin[, 2]) * (oratio - 1) b.temp <- -4 * oratio * (oratio-1) * pmargin[, 1] * pmargin[, 2] temp <- sqrt(a.temp^2 + b.temp) pj4 <- ifelse(abs(oratio-1) < .tol , pmargin[, 1] * pmargin[, 2], (a.temp-temp)/(2*(oratio-1))) pj2 <- pmargin[, 2] - pj4 pj3 <- pmargin[, 1] - pj4 cbind("00" = 1-pj4-pj2-pj3, "01" = pj2, "10" = pj3, "11" = pj4) }, list( .lmu1 = lmu1, .lmu2 = lmu2, .loratio = loratio, .emu1 = emu1, .emu2 = emu2, .eoratio = eoratio, .tol = tol ))), last = eval(substitute(expression({ misc$link <- c(mu1 = .lmu1 , mu2 = .lmu2 , oratio = .loratio ) misc$earg <- list(mu1 = .emu1 , mu2 = .emu2 , oratio = .eoratio ) misc$tol <- .tol misc$expected <- TRUE }), list( .lmu1 = lmu1, .lmu2 = lmu2, .loratio = loratio, .emu1 = emu1, .emu2 = emu2, .eoratio = eoratio, .tol = tol ))), linkfun = eval(substitute(function(mu, extra = NULL) { pmargin <- cbind(mu[, 3]+mu[, 4], mu[, 2]+mu[, 4]) oratio <- mu[, 4]*mu[, 1] / (mu[, 2]*mu[, 3]) cbind(theta2eta(pmargin[, 1], .lmu1 , earg = .emu1), 
theta2eta(pmargin[, 2], .lmu2 , earg = .emu2), theta2eta(oratio, .loratio, earg = .eoratio)) }, list( .lmu1 = lmu1, .lmu2 = lmu2, .loratio = loratio, .emu1 = emu1, .emu2 = emu2, .eoratio = eoratio ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (residuals) { stop("loglikelihood residuals not implemented yet") } else { if ( .more.robust) { vsmallno <- 1.0e4 * .Machine$double.xmin mu[mu < vsmallno] <- vsmallno } ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) ll.elts <- (if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .more.robust = more.robust ))), vfamily = c("binom2.or", "binom2"), validparams = eval(substitute(function(eta, y, extra = NULL) { pmargin <- cbind(eta2theta(eta[, 1], .lmu1 , earg = .emu1 ), eta2theta(eta[, 2], .lmu2 , earg = .emu2 )) oratio <- eta2theta(eta[, 3], .loratio , earg = .eoratio ) okay1 <- all(is.finite(pmargin)) && all(0 < pmargin & pmargin < 1) && all(is.finite(oratio )) && all(0 < oratio) okay1 }, list( .lmu1 = lmu1, .lmu2 = lmu2, .loratio = loratio, .emu1 = emu1, .emu2 = emu2, .eoratio = eoratio ))), deriv = eval(substitute(expression({ smallno <- 1.0e4 * .Machine$double.eps mu.use <- mu mu.use[mu.use < smallno] <- smallno mu.use[mu.use > 1 - smallno] <- 1 - smallno pmargin <- cbind(mu.use[, 3] + mu.use[, 4], mu.use[, 2] + mu.use[, 4]) pmargin[, 1] <- pmax( smallno, pmargin[, 1]) pmargin[, 1] <- pmin(1 - smallno, pmargin[, 1]) pmargin[, 2] <- pmax( smallno, pmargin[, 2]) pmargin[, 2] <- pmin(1 - smallno, pmargin[, 2]) oratio <- mu.use[, 4]*mu.use[, 1] / (mu.use[, 2]*mu.use[, 3]) use.oratio <- pmax(smallno, oratio) a.temp <- 1 + (pmargin[, 1]+pmargin[, 2])*(oratio-1) b.temp <- -4 * oratio * (oratio-1) * pmargin[, 1] * pmargin[, 2] temp9 <- sqrt(a.temp^2 + b.temp) coeff12 <- -0.5 + (2*oratio*pmargin - a.temp) / (2*temp9) dl.dmu1 <- coeff12[, 2] * (y[, 1]/mu.use[, 1]-y[, 3]/mu.use[, 3]) - (1+coeff12[, 2]) * (y[, 2]/mu.use[, 2]-y[, 4]/mu.use[, 4]) dl.dmu2 <- coeff12[, 1] * (y[, 1]/mu.use[, 1]-y[, 2]/mu.use[, 2]) - (1+coeff12[, 1]) * (y[, 3]/mu.use[, 3]-y[, 4]/mu.use[, 4]) coeff3 <- (y[, 1]/mu.use[, 1] - y[, 2]/mu.use[, 2] - y[, 3]/mu.use[, 3] + y[, 4]/mu.use[, 4]) Vab <- pmax(smallno, 1 / (1/mu.use[, 1] + 1/mu.use[, 2] + 1/mu.use[, 3] + 1/mu.use[, 4])) dp11.doratio <- Vab / use.oratio dl.doratio <- coeff3 * dp11.doratio c(w) * cbind(dl.dmu1 * dtheta.deta(pmargin[, 1], .lmu1, earg = .emu1), dl.dmu2 * dtheta.deta(pmargin[, 2], .lmu2, earg = .emu2), dl.doratio * dtheta.deta(oratio, .loratio, earg = .eoratio)) }), list( .lmu1 = lmu1, .lmu2 = lmu2, .loratio = loratio, .emu1 = emu1, .emu2 = emu2, .eoratio = eoratio ))), weight = eval(substitute(expression({ Deltapi <- mu.use[, 3]*mu.use[, 2] - mu.use[, 4]*mu.use[, 1] myDelta <- pmax(smallno, mu.use[, 1] * mu.use[, 2] * mu.use[, 3] * mu.use[, 4]) pqmargin <- pmargin * (1-pmargin) pqmargin[pqmargin < smallno] <- smallno wz <- matrix(0, n, 4) wz[, iam(1, 1, M)] <- (pqmargin[, 2] * Vab / myDelta) * dtheta.deta(pmargin[, 1], .lmu1, earg = .emu1)^2 wz[, iam(2, 2, M)] <- (pqmargin[, 1] * Vab / myDelta) * 
dtheta.deta(pmargin[, 2], .lmu2, earg = .emu2)^2 wz[, iam(3, 3, M)] <- (Vab / use.oratio^2) * dtheta.deta(use.oratio, .loratio, earg = .eoratio)^2 wz[, iam(1, 2, M)] <- (Vab * Deltapi / myDelta) * dtheta.deta(pmargin[, 1], .lmu1, earg = .emu1) * dtheta.deta(pmargin[, 2], .lmu2, earg = .emu2) c(w) * wz }), list( .lmu1 = lmu1, .lmu2 = lmu2, .loratio = loratio, .emu1 = emu1, .emu2 = emu2, .eoratio = eoratio )))) } # binom2.or setClass("binom2", contains = "vglmff") setClass("binom2.or", contains = "binom2") setMethod("summaryvglmS4VGAM", signature(VGAMff = "binom2.or"), function(object, VGAMff, ...) { cfit <- coef.vlm(object, matrix = TRUE) if (rownames(cfit)[1] == "(Intercept)" && all(cfit[-1, 3] == 0)) { object@post$oratio <- eta2theta(cfit[1, 3], link = object@misc$link[3], earg = object@misc$earg[[3]]) } object@post }) setMethod("showsummaryvglmS4VGAM", signature(VGAMff = "binom2.or"), function(object, VGAMff, ...) { if (length(object@post$oratio) == 1 && is.numeric(object@post$oratio)) { cat("\nOdds ratio: ", round(object@post$oratio, digits = 4), "\n") } }) dbinom2.rho <- function(mu1, mu2 = if (exchangeable) mu1 else stop("'mu2' not specified"), rho = 0, exchangeable = FALSE, colnames = c("00", "01", "10", "11"), ErrorCheck = TRUE) { if (ErrorCheck) { if (!is.Numeric(mu1, positive = TRUE) || max(mu1) >= 1) stop("bad input for argument 'mu1'") if (!is.Numeric(mu2, positive = TRUE) || max(mu2) >= 1) stop("bad input for argument 'mu2'") if (!is.Numeric(rho) || min(rho) <= -1 || max(rho) >= 1) stop("bad input for argument 'rho'") if (exchangeable && max(abs(mu1 - mu2)) > 0.00001) stop("argument 'exchangeable' is TRUE but 'mu1' and 'mu2' differ") } nn <- max(length(mu1), length(mu2), length(rho)) rho <- rep_len(rho, nn) mu1 <- rep_len(mu1, nn) mu2 <- rep_len(mu2, nn) eta1 <- qnorm(mu1) eta2 <- qnorm(mu2) p11 <- pbinorm(eta1, eta2, cov12 = rho) p01 <- mu2 - p11 p10 <- mu1 - p11 p00 <- 1.0 - p01 - p10 - p11 matrix(c(p00, p01, p10, p11), nn, 4, dimnames = list(NULL, colnames)) } rbinom2.rho <- function(n, mu1, mu2 = if (exchangeable) mu1 else stop("argument 'mu2' not specified"), rho = 0, exchangeable = FALSE, twoCols = TRUE, colnames = if (twoCols) c("y1", "y2") else c("00", "01", "10", "11"), ErrorCheck = TRUE) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (ErrorCheck) { if (!is.Numeric(mu1, positive = TRUE) || max(mu1) >= 1) stop("bad input for argument 'mu1'") if (!is.Numeric(mu2, positive = TRUE) || max(mu2) >= 1) stop("bad input for argument 'mu2'") if (!is.Numeric(rho) || min(rho) <= -1 || max(rho) >= 1) stop("bad input for argument 'rho'") if (exchangeable && max(abs(mu1 - mu2)) > 0.00001) stop("argument 'exchangeable' is TRUE but 'mu1' and 'mu2' differ") } dmat <- dbinom2.rho(mu1 = mu1, mu2 = mu2, rho = rho, exchangeable = exchangeable, ErrorCheck = ErrorCheck) answer <- matrix(0, use.n, 2, dimnames = list(NULL, if (twoCols) colnames else NULL)) yy <- runif(use.n) cs1 <- dmat[, "00"] + dmat[, "01"] cs2 <- cs1 + dmat[, "10"] index <- (dmat[, "00"] < yy) & (yy <= cs1) answer[index, 2] <- 1 index <- (cs1 < yy) & (yy <= cs2) answer[index, 1] <- 1 index <- (yy > cs2) answer[index,] <- 1 if (twoCols) { answer } else { answer4 <- matrix(0, use.n, 4, dimnames = list(NULL, colnames)) answer4[cbind(1:use.n, 1 + 2*answer[, 1] + answer[, 2])] <- 1 answer4 } } binom2.rho.control <- function(save.weights = TRUE, ...) 
{ list(save.weights = save.weights) } binom2.rho <- function(lmu = "probit", # added 20120817, order swapped 20151128 lrho = "rhobit", imu1 = NULL, imu2 = NULL, irho = NULL, imethod = 1, zero = "rho", # 3 exchangeable = FALSE, grho = seq(-0.95, 0.95, by = 0.05), nsimEIM = NULL) { lrho <- as.list(substitute(lrho)) erho <- link2list(lrho) lrho <- attr(erho, "function.name") lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") if (lmu != "probit") warning("argument 'lmu' should be 'probit'") lmu12 <- "probit" # But emu may contain some arguments. emu12 <- emu # list() if (is.Numeric(nsimEIM)) { if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 100) warning("'nsimEIM' should be an integer greater than 100") } if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Bivariate probit model\n", "Links: ", namesof("mu1", lmu12, earg = emu12), ", ", namesof("mu2", lmu12, earg = emu12), ", ", namesof("rho", lrho, earg = erho)), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(c(1, 1, 0, 0, 0, 1), 3, 2), x = x, bool = .exchangeable , constraints = constraints, apply.int = TRUE) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .exchangeable = exchangeable, .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 3, expected = TRUE, multipleResponses = FALSE, parameters.names = c("mu1", "mu2", "rho"), lmu1 = .lmu12, lmu2 = .lmu12, lrho = .lrho , zero = .zero ) }, list( .lmu12 = lmu12, .lrho = lrho, .zero = zero ))), initialize = eval(substitute(expression({ mustart.orig <- mustart eval(process.binomial2.data.VGAM) if (length(mustart.orig)) mustart <- mustart.orig # Retain it if inputted predictors.names <- c( namesof("mu1", .lmu12 , earg = .emu12 , short = TRUE), namesof("mu2", .lmu12 , earg = .emu12 , short = TRUE), namesof("rho", .lrho , earg = .erho, short = TRUE)) if (is.null( .nsimEIM )) { save.weights <- control$save.weights <- FALSE } ycounts <- if (is.numeric(extra$orig.w)) y * c(w) / extra$orig.w else y * c(w) # Convert proportions to counts if (max(abs(ycounts - round(ycounts))) > 1.0e-6) warning("the response (as counts) does not appear to ", "be integer-valued. 
Am rounding to integer values.") ycounts <- round(ycounts) # Make sure it is an integer nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) if (is.null(etastart)) { if (length(mustart.orig)) { mu1.init <- mustart.orig[, 3] + mustart.orig[, 4] mu2.init <- mustart.orig[, 2] + mustart.orig[, 4] } else if ( .imethod == 1) { glm1.fit <- glm(cbind(ycounts[, 3] + ycounts[, 4], ycounts[, 1] + ycounts[, 2]) ~ x - 1, fam = binomial("probit")) glm2.fit <- glm(cbind(ycounts[, 2] + ycounts[, 4], ycounts[, 1] + ycounts[, 3]) ~ x - 1, fam = binomial("probit")) mu1.init <- fitted(glm1.fit) mu2.init <- fitted(glm2.fit) } else if ( .imethod == 2) { mu1.init <- if (is.Numeric( .imu1 )) rep_len( .imu1 , n) else mu[, 3] + mu[, 4] mu2.init <- if (is.Numeric( .imu2 )) rep_len( .imu2 , n) else mu[, 2] + mu[, 4] } else { stop("bad value for argument 'imethod'") } binom2.rho.Loglikfun <- function(rhoval, y, x, w, extraargs) { init.mu1 <- extraargs$initmu1 init.mu2 <- extraargs$initmu2 ycounts <- extraargs$ycounts nvec <- extraargs$nvec eta1 <- qnorm(init.mu1) eta2 <- qnorm(init.mu2) p11 <- pbinorm(eta1, eta2, cov12 = rhoval) p01 <- pmin(init.mu2 - p11, init.mu2) p10 <- pmin(init.mu1 - p11, init.mu1) p00 <- 1.0 - p01 - p10 - p11 mumat <- abs(cbind("00" = p00, "01" = p01, "10" = p10, "11" = p11)) mumat <- mumat / rowSums(mumat) mumat[mumat < 1.0e-100] <- 1.0e-100 sum((if (is.numeric(extraargs$orig.w)) extraargs$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mumat, log = TRUE, dochecking = FALSE)) } rho.grid <- .grho # seq(-0.95, 0.95, len = 31) try.this <- grid.search(rho.grid, objfun = binom2.rho.Loglikfun, y = y, x = x, w = w, extraargs = list( orig.w = extra$orig.w, ycounts = ycounts, initmu1 = mu1.init, initmu2 = mu2.init, nvec = nvec )) rho.init <- if (is.Numeric( .irho )) rep_len( .irho , n) else { try.this } etastart <- cbind(theta2eta(mu1.init, .lmu12 , earg = .emu12 ), theta2eta(mu2.init, .lmu12 , earg = .emu12 ), theta2eta(rho.init, .lrho , earg = .erho )) mustart <- NULL # Since etastart has been computed. 
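# Note: when imethod = 1 the marginals are initialized by separate probit
# GLM fits above, and a starting value for rho is then chosen by profiling
# the joint log-likelihood over the 'grho' grid via grid.search().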
} }), list( .lmu12 = lmu12, .lrho = lrho, .emu12 = emu12, .erho = erho, .grho = grho, .irho = irho, .imethod = imethod, .nsimEIM = nsimEIM, .imu1 = imu1, .imu2 = imu2 ))), linkinv = eval(substitute(function(eta, extra = NULL) { pmargin <- cbind(eta2theta(eta[, 1], .lmu12 , earg = .emu12 ), eta2theta(eta[, 2], .lmu12 , earg = .emu12 )) rho <- eta2theta(eta[, 3], .lrho , earg = .erho ) p11 <- pbinorm(eta[, 1], eta[, 2], cov12 = rho) p01 <- pmin(pmargin[, 2] - p11, pmargin[, 2]) p10 <- pmin(pmargin[, 1] - p11, pmargin[, 1]) p00 <- 1.0 - p01 - p10 - p11 ansmat <- abs(cbind("00" = p00, "01" = p01, "10" = p10, "11" = p11)) ansmat / rowSums(ansmat) }, list( .lmu12 = lmu12, .lrho = lrho, .emu12 = emu12, .erho = erho ))), last = eval(substitute(expression({ misc$link <- c(mu1 = .lmu12 , mu2 = .lmu12 , rho = .lrho ) misc$earg <- list(mu1 = .emu12 , mu2 = .emu12 , rho = .erho ) misc$nsimEIM <- .nsimEIM misc$expected <- TRUE misc$multipleResponses <- FALSE }), list( .lmu12 = lmu12, .lrho = lrho, .nsimEIM = nsimEIM, .emu12 = emu12, .erho = erho ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ycounts <- if (is.numeric(extra$orig.w)) y * c(w) / extra$orig.w else y * c(w) # Convert proportions to counts smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) ll.elts <- (if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .erho = erho ))), vfamily = c("binom2.rho", "binom2"), validparams = eval(substitute(function(eta, y, extra = NULL) { pmargin <- cbind(eta2theta(eta[, 1], .lmu12 , earg = .emu12 ), eta2theta(eta[, 2], .lmu12 , earg = .emu12 )) rhovec <- eta2theta(eta[, 3], .lrho , earg = .erho ) okay1 <- all(is.finite(pmargin)) && all( 0 < pmargin & pmargin < 1) && all(is.finite(rhovec )) && all(-1 < rhovec & rhovec < 1) okay1 }, list( .lmu12 = lmu12, .lrho = lrho, .emu12 = emu12, .erho = erho ))), deriv = eval(substitute(expression({ nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) ycounts <- if (is.numeric(extra$orig.w)) y * c(w) / extra$orig.w else y * c(w) # Convert proportions to counts pmargin <- cbind(eta2theta(eta[, 1], .lmu12 , earg = .emu12 ), eta2theta(eta[, 2], .lmu12 , earg = .emu12 )) rhovec <- eta2theta(eta[, 3], .lrho , earg = .erho ) p11 <- pbinorm(eta[, 1], eta[, 2], cov12 = rhovec) p01 <- pmargin[, 2] - p11 p10 <- pmargin[, 1] - p11 p00 <- 1 - p01 - p10 - p11 ABmat <- (eta[, 1:2] - rhovec * eta[, 2:1]) / sqrt(pmax(1e5 * .Machine$double.eps, 1.0 - rhovec^2)) PhiA <- pnorm(ABmat[, 1]) PhiB <- pnorm(ABmat[, 2]) onemPhiA <- pnorm(ABmat[, 1], lower.tail = FALSE) onemPhiB <- pnorm(ABmat[, 2], lower.tail = FALSE) smallno <- 1000 * .Machine$double.eps p00[p00 < smallno] <- smallno p01[p01 < smallno] <- smallno p10[p10 < smallno] <- smallno p11[p11 < smallno] <- smallno dprob00 <- dbinorm(eta[, 1], eta[, 2], cov12 = rhovec) dl.dprob1 <- PhiB * (ycounts[, 4]/p11 - ycounts[, 2]/p01) + onemPhiB * (ycounts[, 3]/p10 - ycounts[, 1]/p00) dl.dprob2 <- PhiA * (ycounts[, 4]/p11 - ycounts[, 3]/p10) + onemPhiA * (ycounts[, 2]/p01 - ycounts[, 1]/p00) dl.drho <- (ycounts[, 4]/p11 - ycounts[, 3]/p10 - 
ycounts[, 2]/p01 + ycounts[, 1]/p00) * dprob00 dprob1.deta <- dtheta.deta(pmargin[, 1], .lmu12 , earg = .emu12 ) dprob2.deta <- dtheta.deta(pmargin[, 2], .lmu12 , earg = .emu12 ) drho.deta <- dtheta.deta(rhovec, .lrho , earg = .erho ) dthetas.detas <- cbind(dprob1.deta, dprob2.deta, drho.deta) (if (is.numeric(extra$orig.w)) extra$orig.w else 1) * cbind(dl.dprob1, dl.dprob2, dl.drho) * dthetas.detas }), list( .lmu12 = lmu12, .lrho = lrho, .emu12 = emu12, .erho = erho ))), weight = eval(substitute(expression({ if (is.null( .nsimEIM )) { ned2l.dprob1prob1 <- PhiB^2 * (1/p11 + 1/p01) + onemPhiB^2 * (1/p10 + 1/p00) ned2l.dprob2prob2 <- PhiA^2 * (1/p11 + 1/p10) + onemPhiA^2 * (1/p01 + 1/p00) ned2l.dprob1prob2 <- PhiA * ( PhiB/p11 - onemPhiB/p10) + onemPhiA * (onemPhiB/p00 - PhiB/p01) ned2l.dprob1rho <- (PhiB * (1/p11 + 1/p01) - onemPhiB * (1/p10 + 1/p00)) * dprob00 ned2l.dprob2rho <- (PhiA * (1/p11 + 1/p10) - onemPhiA * (1/p01 + 1/p00)) * dprob00 ned2l.drho2 <- (1/p11 + 1/p01 + 1/p10 + 1/p00) * dprob00^2 wz <- matrix(0, n, dimm(M)) # 6=dimm(M) wz[, iam(1, 1, M)] <- ned2l.dprob1prob1 * dprob1.deta^2 wz[, iam(2, 2, M)] <- ned2l.dprob2prob2 * dprob2.deta^2 wz[, iam(3, 3, M)] <- ned2l.drho2 * drho.deta^2 wz[, iam(1, 2, M)] <- ned2l.dprob1prob2 * dprob1.deta * dprob2.deta wz[, iam(2, 3, M)] <- ned2l.dprob2rho * dprob2.deta * drho.deta wz[, iam(1, 3, M)] <- ned2l.dprob1rho * dprob1.deta * drho.deta } else { run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) for (ii in 1:( .nsimEIM )) { ysim <- rbinom2.rho(n, mu1 = pmargin[, 1], mu2 = pmargin[, 2], twoCols = FALSE, rho = rhovec) dl.dprob1 <- PhiB * (ysim[, 4]/p11 - ysim[, 2]/p01) + onemPhiB * (ysim[, 3]/p10 - ysim[, 1]/p00) dl.dprob2 <- PhiA * (ysim[, 4]/p11 - ysim[, 3]/p10) + onemPhiA * (ysim[, 2]/p01 - ysim[, 1]/p00) dl.drho <- (ysim[, 4]/p11 - ysim[, 3]/p10 - ysim[, 2]/p01 + ysim[, 1]/p00) * dprob00 rm(ysim) temp3 <- cbind(dl.dprob1, dl.dprob2, dl.drho) run.varcov <- ((ii-1) * run.varcov + temp3[, ind1$row.index] * temp3[, ind1$col.index]) / ii } wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] } c(w) * wz }), list( .nsimEIM = nsimEIM )))) } dnorm2 <- function(x, y, rho = 0, log = FALSE) { warning("decommissioning dnorm2() soon; use ", "dbinorm(..., cov12 = rho) instead") if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) logdnorm2 <- (-0.5*(x * (x - 2*y*rho) + y^2) / (1.0 - rho^2)) - log(2 * pi) - 0.5 * log1p(-rho^2) if (log.arg) { logdnorm2 } else { exp(logdnorm2) } } pbinorm <- function(q1, q2, mean1 = 0, mean2 = 0, var1 = 1, var2 = 1, cov12 = 0) { sd1 <- sqrt(var1) sd2 <- sqrt(var2) rho <- cov12 / (sd1 * sd2) if (anyNA(q1) || anyNA(q2) || anyNA(sd1) || anyNA(sd2) || anyNA(mean1) || anyNA(mean2) || anyNA(rho)) stop("no NAs allowed in arguments or variables 'q1', 'q2',", " 'mean1', 'mean2', 'sd1', 'sd2', 'cov12'") if (min(rho) < -1 || max(rho) > +1) stop("correlation 'rho' is out of range") if (length(mean1) > 1 && length(mean2) == 1 && length(var1) == 1 && length(var2) == 1 && length(cov12) == 1) warning("the call to pnorm2() seems based on the old version ", "of the arguments") LLL <- max(length(q1), length(q2), length(mean1), length(mean2), length(sd1), length(sd2), length(rho)) if (length(q1) != LLL) q1 <- rep_len(q1, LLL) if (length(q2) != LLL) q2 <- rep_len(q2, LLL) if (length(mean1) != LLL) mean1 <- rep_len(mean1, LLL) if (length(mean2) != LLL) mean2 <- 
rep_len(mean2, LLL) if (length(sd1) != LLL) sd1 <- rep_len(sd1, LLL) if (length(sd2) != LLL) sd2 <- rep_len(sd2, LLL) if (length(rho) != LLL) rho <- rep_len(rho, LLL) Zedd1 <- Z1 <- (q1 - mean1) / sd1 Zedd2 <- Z2 <- (q2 - mean2) / sd2 is.inf1.neg <- is.infinite(Z1) & Z1 < 0 # -Inf is.inf1.pos <- is.infinite(Z1) & Z1 > 0 # +Inf is.inf2.neg <- is.infinite(Z2) & Z2 < 0 # -Inf is.inf2.pos <- is.infinite(Z2) & Z2 > 0 # +Inf Zedd1[is.inf1.neg] <- 0 Zedd1[is.inf1.pos] <- 0 Zedd2[is.inf2.neg] <- 0 Zedd2[is.inf2.pos] <- 0 ans <- Zedd1 singler <- ifelse(length(rho) == 1, 1, 0) answer <- .C("pnorm2ccc", ah = as.double(-Zedd1), ak = as.double(-Zedd2), r = as.double(rho), size = as.integer(LLL), singler = as.integer(singler), ans = as.double(ans))$ans if (any(answer < 0.0)) warning("some negative values returned") answer[is.inf1.neg] <- 0 answer[is.inf1.pos] <- pnorm(Z2[is.inf1.pos]) # pnorm(Z2[is.inf1.neg]) answer[is.inf2.neg] <- 0 answer[is.inf2.pos] <- pnorm(Z1[is.inf2.pos]) # pnorm(Z1[is.inf2.neg]) answer } pnorm2 <- function(x1, x2, mean1 = 0, mean2 = 0, var1 = 1, var2 = 1, cov12 = 0) { warning("decommissioning pnorm2() soon; use ", "pbinorm() instead") sd1 <- sqrt(var1) sd2 <- sqrt(var2) rho <- cov12 / (sd1 * sd2) if (anyNA(x1) || anyNA(x2) || anyNA(sd1) || anyNA(sd2) || anyNA(mean1) || anyNA(mean2) || anyNA(rho)) stop("no NAs allowed in arguments or variables 'x1', 'x2',", " 'mean1', 'mean2', 'sd1', 'sd2', 'cov12'") if (min(rho) < -1 || max(rho) > +1) stop("correlation 'rho' is out of range") if (length(mean1) > 1 && length(mean2) == 1 && length(var1) == 1 && length(var2) == 1 && length(cov12) == 1) warning("the call to pnorm2() seems based on the old version ", "of the arguments") LLL <- max(length(x1), length(x2), length(mean1), length(mean2), length(sd1), length(sd2), length(rho)) if (length(x1) != LLL) x1 <- rep_len(x1, LLL) if (length(x2) != LLL) x2 <- rep_len(x2, LLL) if (length(mean1) != LLL) mean1 <- rep_len(mean1, LLL) if (length(mean2) != LLL) mean2 <- rep_len(mean2, LLL) if (length(sd1) != LLL) sd1 <- rep_len(sd1, LLL) if (length(sd2) != LLL) sd2 <- rep_len(sd2, LLL) if (length(rho) != LLL) rho <- rep_len(rho, LLL) Z1 <- (x1 - mean1) / sd1 Z2 <- (x2 - mean2) / sd2 ans <- Z1 singler <- ifelse(length(rho) == 1, 1, 0) answer <- .C("pnorm2ccc", ah = as.double(-Z1), ak = as.double(-Z2), r = as.double(rho), size = as.integer(LLL), singler = as.integer(singler), ans = as.double(ans))$ans if (any(answer < 0.0)) warning("some negative values returned") answer } my.dbinom <- function(x, size = stop("no 'size' argument"), prob = stop("no 'prob' argument")) { exp(lgamma(size + 1) - lgamma(size - x + 1) - lgamma(x + 1) + x * log(prob / (1 - prob)) + size * log1p(-prob)) } size.binomial <- function(prob = 0.5, link = "loge") { if (any(prob <= 0 | prob >= 1)) stop("some values of prob out of range") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Binomial with n unknown, prob known (prob = ", prob, ")\n", "Links: ", namesof("size", link, tag = TRUE), " (treated as real-valued)\n", "Variance: Var(Y) = size * prob * (1-prob);", " Var(size) is intractable"), initialize = eval(substitute(expression({ predictors.names <- "size" extra$temp2 <- rep_len( .prob , n) if (is.null(etastart)) { nvec <- (y+0.1)/extra$temp2 etastart <- theta2eta(nvec, .link ) } }), list( .prob = prob, .link = link ))), linkinv = eval(substitute(function(eta, extra = NULL) { nvec <- eta2theta(eta, .link ) nvec * extra$temp2 }, list( .link = link ))), last = 
eval(substitute(expression({ misc$link <- c(size = .link ) misc$prob <- extra$temp2 }), list( .link = link ))), linkfun = eval(substitute(function(mu, extra = NULL) { nvec <- mu / extra$temp2 theta2eta(nvec, .link) }, list( .link = link ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { nvec <- mu / extra$temp2 if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (lgamma(nvec+1) - lgamma(y+1) - lgamma(nvec-y+1) + y * log( .prob / (1- .prob )) + nvec * log1p(- .prob )) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .prob = prob ))), vfamily = c("size.binomial"), validparams = eval(substitute(function(eta, y, extra = NULL) { nvec <- eta2theta(eta, .link ) okay1 <- all(is.finite(nvec)) && all( 0 < nvec) okay1 }, list( .link = link ))), deriv = eval(substitute(expression({ nvec <- mu/extra$temp2 dldnvec <- digamma(nvec+1) - digamma(nvec-y+1) + log1p(-extra$temp2) dnvecdeta <- dtheta.deta(nvec, .link ) c(w) * cbind(dldnvec * dnvecdeta) }), list( .link = link ))), weight = eval(substitute(expression({ d2ldnvec2 <- trigamma(nvec+1) - trigamma(nvec-y+1) d2ldnvec2[y == 0] <- -sqrt( .Machine$double.eps ) wz <- -c(w) * dnvecdeta^2 * d2ldnvec2 wz }), list( .link = link )))) } dbetabinom.ab <- function(x, size, shape1, shape2, log = FALSE, Inf.shape = exp(20), # 1e6, originally limit.prob = 0.5 # Strictly should be NaN ) { Bigg <- Inf.shape Bigg2 <- Inf.shape # big.shape # exp(34) # Found empirically if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(size), length(shape1), length(shape2)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(size) != LLL) size <- rep_len(size, LLL) if (length(shape1) != LLL) shape1 <- rep_len(shape1, LLL) if (length(shape2) != LLL) shape2 <- rep_len(shape2, LLL) is.infinite.shape1 <- is.infinite(shape1) # Includes -Inf !! 
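# Note: infinite shape parameters are handled separately below; they give
# the limiting binomial cases prob = 0 (shape2 = Inf), prob = 1
# (shape1 = Inf), and prob = limit.prob when both shapes are infinite.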
is.infinite.shape2 <- is.infinite(shape2) ans <- x ans[TRUE] <- log(0) ans[is.na(x)] <- NA ans[is.nan(x)] <- NaN ok0 <- !is.na(shape1) & !is.na(shape2) & !is.na(x) & !is.na(size) okk <- (round(x) == x) & (x >= 0) & (x <= size) & !is.infinite.shape1 & !is.infinite.shape2 & ok0 if (any(okk)) { ans[okk] <- lchoose(size[okk], x[okk]) + lbeta(shape1[okk] + x[okk], shape2[okk] + size[okk] - x[okk]) - lbeta(shape1[okk], shape2[okk]) endpt1 <- (x == size) & ((shape1 < 1/Bigg) | (shape2 < 1/Bigg)) & ok0 if (any(endpt1)) { ans[endpt1] <- lgamma( size[endpt1] + shape1[endpt1]) + lgamma(shape1[endpt1] + shape2[endpt1]) - (lgamma( size[endpt1] + shape1[endpt1] + shape2[endpt1]) + lgamma(shape1[endpt1])) } # endpt1 endpt2 <- (x == 0) & ((shape1 < 1/Bigg) | (shape2 < 1/Bigg)) & ok0 if (any(endpt2)) { ans[endpt2] <- lgamma( size[endpt2] + shape2[endpt2]) + lgamma(shape1[endpt2] + shape2[endpt2]) - (lgamma( size[endpt2] + shape1[endpt2] + shape2[endpt2]) + lgamma(shape2[endpt2])) } # endpt2 endpt3 <- ((Bigg < shape1) | (Bigg < shape2)) & ok0 if (any(endpt3)) { ans[endpt3] <- lchoose(size[endpt3], x[endpt3]) + lbeta(shape1[endpt3] + x[endpt3], shape2[endpt3] + size[endpt3] - x[endpt3]) - lbeta(shape1[endpt3], shape2[endpt3]) } # endpt3 } # if (any(okk)) if (!log.arg) { ans <- exp(ans) } ok1 <- !is.infinite.shape1 & is.infinite.shape2 # rho==0 & prob==0 ok2 <- is.infinite.shape1 & !is.infinite.shape2 # rho==0 & prob==1 ok3 <- Bigg2 < shape1 & Bigg2 < shape2 ok4 <- is.infinite.shape1 & is.infinite.shape2 # prob undefined if (any(ok3)) { prob1 <- shape1[ok3] / (shape1[ok3] + shape2[ok3]) ans[ok3] <- dbinom(x = x[ok3], size = size[ok3], prob = prob1, log = log.arg) if (any(ok4)) { ans[ok4] <- dbinom(x = x[ok4], size = size[ok4], prob = limit.prob, log = log.arg) } } # ok3 if (any(ok1)) ans[ok1] <- dbinom(x = x[ok1], size = size[ok1], prob = 0, # finite / (finite + Inf) == 0 log = log.arg) if (any(ok2)) ans[ok2] <- dbinom(x = x[ok2], size = size[ok2], prob = 1, # Inf / (finite + Inf) == 1 log = log.arg) ans[shape1 < 0] <- NaN ans[shape2 < 0] <- NaN ans } pbetabinom.ab <- function(q, size, shape1, shape2, log.p = FALSE) { if (!is.Numeric(q)) stop("bad input for argument 'q'") if (!is.Numeric(size, integer.valued = TRUE)) stop("bad input for argument 'size'") if (!is.Numeric(shape1, positive = TRUE)) stop("bad input for argument 'shape1'") if (!is.Numeric(shape2, positive = TRUE)) stop("bad input for argument 'shape2'") LLL <- max(length(q), length(size), length(shape1), length(shape2)) if (length(q) != LLL) q <- rep_len(q, LLL) if (length(shape1) != LLL) shape1 <- rep_len(shape1, LLL) if (length(shape2) != LLL) shape2 <- rep_len(shape2, LLL) if (length(size) != LLL) size <- rep_len(size, LLL) ans <- q # Retains names(q) ans[] <- 0 # Set all elements to 0 if (max(abs(size - size[1])) < 1.0e-08 && max(abs(shape1 - shape1[1])) < 1.0e-08 && max(abs(shape2 - shape2[1])) < 1.0e-08) { if (any(is.infinite(qstar <- floor(q)))) stop("argument 'q' must be finite") temp <- if (max(qstar) >= 0) { dbetabinom.ab(0:max(qstar), size = size[1], shape1 = shape1[1], shape2 = shape2[1]) } else { 0 * qstar } unq <- unique(qstar) for (ii in unq) { index <- (qstar == ii) ans[index] <- if (ii >= 0) sum(temp[1:(1+ii)]) else 0 } } else { for (ii in 1:LLL) { qstar <- floor(q[ii]) ans[ii] <- if (qstar >= 0) { sum(dbetabinom.ab(x = 0:qstar, size = size[ii], shape1 = shape1[ii], shape2 = shape2[ii])) } else 0 } } if (log.p) log(ans) else ans } rbetabinom.ab <- function(n, size, shape1, shape2, .dontuse.prob = NULL) { # checkargs = TRUE if 
(!is.Numeric(size, integer.valued = TRUE)) stop("bad input for argument 'size'") if (any(shape1 < 0, na.rm = TRUE)) stop("negative values for argument 'shape1' not allowed") if (any(shape2 < 0, na.rm = TRUE)) stop("negative values for argument 'shape2' not allowed") use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (length(size) != use.n) size <- rep_len(size, use.n) if (length(shape1) != use.n) shape1 <- rep_len(shape1, use.n) if (length(shape2) != use.n) shape2 <- rep_len(shape2, use.n) ans <- rep_len(NA_real_, use.n) okay0 <- is.finite(shape1) & is.finite(shape2) if (smalln <- sum(okay0)) ans[okay0] <- rbinom(n = smalln, size = size[okay0], prob = rbeta(n = smalln, shape1 = shape1[okay0], shape2 = shape2[okay0])) okay1 <- is.na(shape1) & is.infinite(shape2) # rho=0 & prob==0 okay2 <- is.infinite(shape1) & is.na(shape2) # rho = 0 & prob == 1 okay3 <- is.infinite(shape1) & is.infinite(shape2) # rho=0 & 0 3) stop("argument 'imethod' must be 1, 2 or 3") if (length(ishape2) && !is.Numeric(ishape2, positive = TRUE)) stop("bad input for argument 'ishape2'") if (!is.null(nsimEIM)) { if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 10) warning("'nsimEIM' should be an integer greater than 10, say") } new("vglmff", blurb = c("Beta-binomial model\n", "Links: ", namesof("shape1", lshape1, earg = eshape1), ", ", namesof("shape2", lshape2, earg = eshape2), "\n", "Mean: mu = shape1 / (shape1+shape2)", "\n", "Variance: mu * (1-mu) * (1+(w-1)*rho) / w, ", "where rho = 1 / (shape1+shape2+1)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, expected = TRUE, multipleResponses = FALSE, parameters.names = c("shape1", "shape2"), lshape1 = .lshape1 , lshape2 = .lshape2 , zero = .zero ) }, list( .zero = zero, .lshape1 = lshape1, .lshape2 = lshape2 ))), initialize = eval(substitute(expression({ if (!all(w == 1)) extra$orig.w <- w if (is.null( .nsimEIM )) { save.weights <- control$save.weights <- FALSE } mustart.orig <- mustart eval(binomialff()@initialize) # Note: n,w,y,mustart is changed if (length(mustart.orig)) mustart <- mustart.orig # Retain it if inputted predictors.names <- c(namesof("shape1", .lshape1 , earg = .eshape1 , tag = FALSE), namesof("shape2", .lshape2 , earg = .eshape2 , tag = FALSE)) if (!length(etastart)) { mustart.use <- if (length(mustart.orig)) mustart.orig else mustart shape1 <- rep_len( .ishape1 , n) shape2 <- if (length( .ishape2 )) { rep_len( .ishape2 , n) } else if (length(mustart.orig)) { shape1 * (1 / mustart.use - 1) } else if ( .imethod == 1) { shape1 * (1 / weighted.mean(y, w) - 1) } else if ( .imethod == 2) { temp777 <- .ishrinkage * weighted.mean(y, w) + (1 - .ishrinkage ) * y shape1 * (1 / temp777 - 1) } else { shape1 * (1 / weighted.mean(mustart.use, w) - 1) } ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts if (max(abs(ycounts - round(ycounts))) > 1.0e-6) warning("the response (as counts) does not appear to ", "be integer-valued. 
Am rounding to integer values.") ycounts <- round(ycounts) # Make sure it is an integer etastart <- cbind(theta2eta(shape1, .lshape1 , earg = .eshape1 ), theta2eta(shape2, .lshape2 , earg = .eshape2 )) mustart <- NULL # Since etastart has been computed. } }), list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2, .ishape1 = ishape1, .ishape2 = ishape2, .nsimEIM = nsimEIM, .imethod = imethod, .ishrinkage = ishrinkage ))), linkinv = eval(substitute(function(eta, extra = NULL) { shape1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, 2], .lshape2 , earg = .eshape2 ) shape1 / (shape1 + shape2) }, list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 ))), last = eval(substitute(expression({ misc$link <- c("shape1" = .lshape1 , "shape2" = .lshape2 ) misc$earg <- list("shape1" = .eshape1 , "shape2" = .eshape2 ) shape1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, 2], .lshape2 , earg = .eshape2 ) misc$rho <- 1 / (shape1 + shape2 + 1) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$zero <- .zero }), list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2, .nsimEIM = nsimEIM, .zero = zero ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) shape1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, 2], .lshape2 , earg = .eshape2 ) nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- (if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dbetabinom.ab(x = ycounts, size = nvec, shape1 = shape1, shape2 = shape2, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 ))), vfamily = c("betabinomialff"), validparams = eval(substitute(function(eta, y, extra = NULL) { shape1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, 2], .lshape2 , earg = .eshape2 ) okay1 <- all(is.finite(shape1)) && all(0 < shape1) && all(is.finite(shape2)) && all(0 < shape2) okay1 }, list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") w <- pwts eta <- predict(object) extra <- object@extra shape1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, 2], .lshape2 , earg = .eshape2 ) nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) rbetabinom.ab(nsim * length(shape1), size = nvec, shape1 = shape1, shape2 = shape2) }, list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 ))), deriv = eval(substitute(expression({ nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts shape1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, 
2], .lshape2 , earg = .eshape2 ) dshape1.deta <- dtheta.deta(shape1, .lshape1 , earg = .eshape1 ) dshape2.deta <- dtheta.deta(shape2, .lshape2 , earg = .eshape2 ) dl.dshape1 <- digamma(shape1+ycounts) - digamma(shape1+shape2+nvec) - digamma(shape1) + digamma(shape1 + shape2) dl.dshape2 <- digamma(nvec + shape2 - ycounts) - digamma(shape1 + shape2 + nvec) - digamma(shape2) + digamma(shape1 + shape2) (if (is.numeric(extra$orig.w)) extra$orig.w else 1) * cbind(dl.dshape1 * dshape1.deta, dl.dshape2 * dshape2.deta) }), list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2 ))), weight = eval(substitute(expression({ if (is.null( .nsimEIM)) { wz <- matrix(NA_real_, n, dimm(M)) #3=dimm(2) wz[, iam(1, 1, M)] <- -(expected.betabin.ab(nvec,shape1,shape2, TRUE) - trigamma(shape1+shape2+nvec) - trigamma(shape1) + trigamma(shape1+shape2)) * dshape1.deta^2 wz[, iam(2, 2, M)] <- -(expected.betabin.ab(nvec,shape1,shape2, FALSE) - trigamma(shape1+shape2+nvec) - trigamma(shape2) + trigamma(shape1+shape2)) * dshape2.deta^2 wz[, iam(2, 1, M)] <- -(trigamma(shape1+shape2) - trigamma(shape1+shape2+nvec)) * dshape1.deta * dshape2.deta wz * (if (is.numeric(extra$orig.w)) extra$orig.w else 1) } else { run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) dthetas.detas <- cbind(dshape1.deta, dshape2.deta) for (ii in 1:( .nsimEIM )) { ysim <- rbetabinom.ab(n = n, size = nvec, shape1 = shape1, shape2 = shape2) checkargs = .checkargs dl.dshape1 <- digamma(shape1+ysim) - digamma(shape1+shape2+nvec) - digamma(shape1) + digamma(shape1+shape2) dl.dshape2 <- digamma(nvec+shape2-ysim) - digamma(shape1+shape2+nvec) - digamma(shape2) + digamma(shape1+shape2) rm(ysim) temp3 <- cbind(dl.dshape1, dl.dshape2) # n x M matrix run.varcov <- ((ii-1) * run.varcov + temp3[, ind1$row.index]* temp3[, ind1$col.index]) / ii } wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] wz * (if (is.numeric(extra$orig.w)) extra$orig.w else 1) } }), list( .lshape1 = lshape1, .lshape2 = lshape2, .eshape1 = eshape1, .eshape2 = eshape2, .nsimEIM = nsimEIM )))) } betageometric <- function(lprob = "logit", lshape = "loge", iprob = NULL, ishape = 0.1, moreSummation = c(2, 100), tolerance = 1.0e-10, zero = NULL) { lprob <- as.list(substitute(lprob)) eprob <- link2list(lprob) lprob <- attr(eprob, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") if (!is.Numeric(ishape, positive = TRUE)) stop("bad input for argument 'ishape'") if (!is.Numeric(moreSummation, positive = TRUE, length.arg = 2, integer.valued = TRUE)) stop("bad input for argument 'moreSummation'") if (!is.Numeric(tolerance, positive = TRUE, length.arg = 1) || 1.0 - tolerance >= 1.0) stop("bad input for argument 'tolerance'") new("vglmff", blurb = c("Beta-geometric distribution\n", "Links: ", namesof("prob", lprob, earg = eprob), ", ", namesof("shape", lshape, earg = eshape)), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
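# infos: static metadata for the beta-geometric family.  M1 = 2 because
# each response carries two linear predictors, 'prob' and the
# heterogeneity parameter 'shape' (as shape -> 0 the ordinary geometric
# distribution is recovered).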
{ list(M1 = 2, expected = TRUE, multipleResponses = FALSE, parameters.names = c("prob", "shape"), lprob = .lprob , lshape = .lshape , zero = .zero ) }, list( .lprob = lprob, .lshape = lshape, .zero = zero ))), initialize = eval(substitute(expression({ eval(geometric()@initialize) predictors.names <- c(namesof("prob", .lprob , earg = .eprob , tag = FALSE), namesof("shape", .lshape , earg = .eshape , tag = FALSE)) if (length( .iprob )) prob.init <- rep_len( .iprob , n) if (!length(etastart) || NCOL(etastart) != 2) { shape.init <- rep_len( .ishape , n) etastart <- cbind(theta2eta(prob.init, .lprob , earg = .eprob ), theta2eta(shape.init, .lshape , earg = .eshape )) } }), list( .iprob = iprob, .ishape = ishape, .lprob = lprob, .eprob = eprob, .eshape = eshape, .lshape = lshape ))), linkinv = eval(substitute(function(eta, extra = NULL) { prob <- eta2theta(eta[, 1], .lprob , earg = .eprob ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) mymu <- (1-prob) / (prob - shape) ifelse(mymu >= 0, mymu, NA) }, list( .lprob = lprob, .lshape = lshape, .eprob = eprob, .eshape = eshape ))), last = eval(substitute(expression({ misc$link <- c("prob" = .lprob , "shape" = .lshape ) misc$earg <- list("prob" = .eprob , "shape" = .eshape ) if (intercept.only) { misc$shape1 <- shape1[1] # These quantities computed in @deriv misc$shape2 <- shape2[1] } misc$expected <- TRUE misc$tolerance <- .tolerance misc$zero <- .zero misc$moreSummation = .moreSummation }), list( .lprob = lprob, .lshape = lshape, .eprob = eprob, .eshape = eshape, .tolerance = tolerance, .moreSummation = moreSummation, .zero = zero ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { prob <- eta2theta(eta[, 1], .lprob , earg = .eprob ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) ans <- log(prob) maxy <- max(y) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { for (ii in 1:maxy) { index <- (ii <= y) ans[index] <- ans[index] + log1p(-prob[index] + (ii-1) * shape[index]) - log1p((ii-1) * shape[index]) } ans <- ans - log1p((y+1-1) * shape) ll.elts <- w * ans if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lprob = lprob, .lshape = lshape, .eprob = eprob, .eshape = eshape ))), vfamily = c("betageometric"), validparams = eval(substitute(function(eta, y, extra = NULL) { prob <- eta2theta(eta[, 1], .lprob , earg = .eprob ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) okay1 <- all(is.finite(prob )) && all(0 < prob & prob < 1) && all(is.finite(shape)) && all(0 < shape) okay1 }, list( .lprob = lprob, .lshape = lshape, .eprob = eprob, .eshape = eshape ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) prob <- eta2theta(eta[, 1], .lprob , earg = .eprob ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) rbetageom(nsim * length(shape), shape1 = shape, shape2 = shape) }, list( .lprob = lprob, .lshape = lshape, .eprob = eprob, .eshape = eshape ))), deriv = eval(substitute(expression({ prob <- eta2theta(eta[, 1], .lprob , earg = .eprob ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) shape1 <- prob / shape shape2 <- (1 - prob) / shape dprob.deta <- dtheta.deta(prob , .lprob , earg = .eprob ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) dl.dprob <- 1 / prob dl.dshape <- 0 * y maxy <- max(y) for (ii in 1:maxy) { index <- (ii <= y) 
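# Score contributions are accumulated term-by-term: observation j picks
# up the summand for every ii <= y[j], mirroring the pair of log1p()
# terms summed over ii in @loglikelihood above.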
dl.dprob[index] <- dl.dprob[index] - 1/(1-prob[index]+(ii-1) * shape[index]) dl.dshape[index] <- dl.dshape[index] + (ii-1)/(1-prob[index]+(ii-1) * shape[index]) - (ii-1)/(1+(ii-1) * shape[index]) } dl.dshape <- dl.dshape - (y+1 -1)/(1+(y+1 -1) * shape) c(w) * cbind(dl.dprob * dprob.deta, dl.dshape * dshape.deta) }), list( .lprob = lprob, .lshape = lshape, .eprob = eprob, .eshape = eshape ))), weight = eval(substitute(expression({ wz <- matrix(0, n, dimm(M)) #3=dimm(2) wz[, iam(1, 1, M)] <- 1 / prob^2 moresum <- .moreSummation maxsummation <- round(maxy * moresum[1] + moresum[2]) for (ii in 3:maxsummation) { temp7 <- 1 - pbetageom(q = ii-1-1, shape1 = shape1, shape2 = shape2) denom1 <- (1-prob+(ii-2)*shape)^2 denom2 <- (1+(ii-2)*shape)^2 wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] + temp7 / denom1 wz[, iam(1, 2, M)] <- wz[, iam(1, 2, M)] - (ii-2) * temp7 / denom1 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] + (ii-2)^2 * temp7 / denom1 - (ii-1)^2 * temp7 / denom2 if (max(temp7) < .tolerance ) break } ii <- 2 temp7 <- 1 - pbetageom(q=ii-1-1, shape1 = shape1, shape2 = shape2) denom1 <- (1-prob+(ii-2)*shape)^2 denom2 <- (1+(ii-2)*shape)^2 wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] + temp7 / denom1 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] - (ii-1)^2 * temp7 / denom2 wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] * dprob.deta^2 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] * dshape.deta^2 wz[, iam(2, 1, M)] <- wz[, iam(2, 1, M)] * dprob.deta * dshape.deta c(w) * wz }), list( .lprob = lprob, .lshape = lshape, .eprob = eprob, .eshape = eshape, .moreSummation = moreSummation, .tolerance = tolerance )))) } seq2binomial <- function(lprob1 = "logit", lprob2 = "logit", iprob1 = NULL, iprob2 = NULL, parallel = FALSE, # apply.parint = TRUE, zero = NULL) { apply.parint <- TRUE lprob1 <- as.list(substitute(lprob1)) eprob1 <- link2list(lprob1) lprob1 <- attr(eprob1, "function.name") lprob2 <- as.list(substitute(lprob2)) eprob2 <- link2list(lprob2) lprob2 <- attr(eprob2, "function.name") if (length(iprob1) && (!is.Numeric(iprob1, positive = TRUE) || max(iprob1) >= 1)) stop("bad input for argument 'iprob1'") if (length(iprob2) && (!is.Numeric(iprob2, positive = TRUE) || max(iprob2) >= 1)) stop("bad input for argument 'iprob2'") new("vglmff", blurb = c("Sequential binomial distribution ", "(Crowder and Sweeting, 1989)\n", "Links: ", namesof("prob1", lprob1, earg = eprob1), ", ", namesof("prob2", lprob2, earg = eprob2)), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints, apply.int = .apply.parint ) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .parallel = parallel, .apply.parint = apply.parint, .zero = zero ))), infos = eval(substitute(function(...) 
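# infos: metadata for the sequential binomial model of Crowder and
# Sweeting (1989): 'prob1' is the first-stage success probability and
# 'prob2' the second-stage probability conditional on first-stage
# success.  A hedged usage sketch (variable names hypothetical): with
# mvec first-stage trials, r successes, and s second-stage successes,
#   vglm(cbind(r/mvec, s/r) ~ 1, seq2binomial, weights = mvec)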
{ list(M1 = 2, expected = TRUE, multipleResponses = FALSE, parameters.names = c("prob1", "prob2"), lprob1 = .lprob1 , lprob2 = .lprob2 , zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ if (!is.vector(w)) stop("the 'weights' argument must be a vector") if (any(abs(w - round(w)) > 1e-6)) stop("the 'weights' argument does not seem to be integer-valued") if (ncol(y <- cbind(y)) != 2) stop("the response must be a 2-column matrix") if (any(y < 0 | y > 1)) stop("the response must have values between 0 and 1") w <- round(w) rvector <- w * y[, 1] if (any(abs(rvector - round(rvector)) > 1.0e-8)) warning("number of successes in column one ", "should be integer-valued") svector <- rvector * y[, 2] if (any(abs(svector - round(svector)) > 1.0e-8)) warning("number of successes in", " column two should be integer-valued") predictors.names <- c(namesof("prob1", .lprob1 , earg = .eprob1 , tag = FALSE), namesof("prob2", .lprob2 , earg = .eprob2 , tag = FALSE)) prob1.init <- if (is.Numeric( .iprob1)) rep_len( .iprob1 , n) else rep_len(weighted.mean(y[, 1], w = w), n) prob2.init <- if (is.Numeric( .iprob2 )) rep_len( .iprob2 , n) else rep_len(weighted.mean(y[, 2], w = w*y[, 1]), n) if (!length(etastart)) { etastart <- cbind(theta2eta(prob1.init, .lprob1 , earg = .eprob1 ), theta2eta(prob2.init, .lprob2 , earg = .eprob2 )) } }), list( .iprob1 = iprob1, .iprob2 = iprob2, .lprob1 = lprob1, .lprob2 = lprob2, .eprob1 = eprob1, .eprob2 = eprob2 ))), linkinv = eval(substitute(function(eta, extra = NULL) { prob1 <- eta2theta(eta[, 1], .lprob1 , earg = .eprob1 ) prob2 <- eta2theta(eta[, 2], .lprob2 , earg = .eprob2 ) cbind(prob1, prob2) }, list( .lprob1 = lprob1, .lprob2 = lprob2, .eprob1 = eprob1, .eprob2 = eprob2 ))), last = eval(substitute(expression({ misc$link <- c("prob1" = .lprob1 , "prob2" = .lprob2 ) misc$earg <- list("prob1" = .eprob1 , "prob2" = .eprob2 ) misc$expected <- TRUE misc$zero <- .zero misc$parallel <- .parallel misc$apply.parint <- .apply.parint }), list( .lprob1 = lprob1, .lprob2 = lprob2, .eprob1 = eprob1, .eprob2 = eprob2, .parallel = parallel, .apply.parint = apply.parint, .zero = zero ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { prob1 <- eta2theta(eta[, 1], .lprob1 , earg = .eprob1 ) prob2 <- eta2theta(eta[, 2], .lprob2 , earg = .eprob2 ) smallno <- 100 * .Machine$double.eps prob1 <- pmax(prob1, smallno) prob1 <- pmin(prob1, 1-smallno) prob2 <- pmax(prob2, smallno) prob2 <- pmin(prob2, 1-smallno) mvector <- w rvector <- w * y[, 1] svector <- rvector * y[, 2] if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ans1 <- dbinom(rvector, size = mvector, prob = prob1, log = TRUE) + dbinom(svector, size = rvector, prob = prob2, log = TRUE) ll.elts <- ans1 if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lprob1 = lprob1, .lprob2 = lprob2, .eprob1 = eprob1, .eprob2 = eprob2 ))), vfamily = c("seq2binomial"), validparams = eval(substitute(function(eta, y, extra = NULL) { prob1 <- eta2theta(eta[, 1], .lprob1 , earg = .eprob1 ) prob2 <- eta2theta(eta[, 2], .lprob2 , earg = .eprob2 ) okay1 <- all(is.finite(prob1)) && all(0 < prob1 & prob1 < 1) && all(is.finite(prob2)) && all(0 < prob2 & prob2 < 1) okay1 }, list( .lprob1 = lprob1, .lprob2 = lprob2, .eprob1 = eprob1, .eprob2 = eprob2 ))), deriv = eval(substitute(expression({ prob1 <- eta2theta(eta[, 1], .lprob1 , earg = .eprob1 ) prob2 <- eta2theta(eta[, 2], .lprob2 , earg = .eprob2 ) smallno <- 100 * 
.Machine$double.eps prob1 <- pmax(prob1, smallno) prob1 <- pmin(prob1, 1-smallno) prob2 <- pmax(prob2, smallno) prob2 <- pmin(prob2, 1-smallno) dprob1.deta <- dtheta.deta(prob1, .lprob1, earg = .eprob1) dprob2.deta <- dtheta.deta(prob2, .lprob2, earg = .eprob2) mvector <- w rvector <- w * y[, 1] svector <- rvector * y[, 2] dl.dprob1 <- rvector / prob1 - (mvector-rvector) / (1-prob1) dl.dprob2 <- svector / prob2 - (rvector-svector) / (1-prob2) cbind(dl.dprob1 * dprob1.deta, dl.dprob2 * dprob2.deta) }), list( .lprob1 = lprob1, .lprob2 = lprob2, .eprob1 = eprob1, .eprob2 = eprob2 ))), weight = eval(substitute(expression({ wz <- matrix(0, n, M) wz[, iam(1, 1, M)] <- (dprob1.deta^2) / (prob1 * (1-prob1)) wz[, iam(2, 2, M)] <- (dprob2.deta^2) * prob1 / (prob2 * (1-prob2)) c(w) * wz }), list( .lprob1 = lprob1, .lprob2 = lprob2, .eprob1 = eprob1, .eprob2 = eprob2 )))) } zipebcom <- function(lmu12 = "cloglog", lphi12 = "logit", loratio = "loge", imu12 = NULL, iphi12 = NULL, ioratio = NULL, zero = c("phi12", "oratio"), tol = 0.001, addRidge = 0.001) { lmu12 <- as.list(substitute(lmu12)) emu12 <- link2list(lmu12) lmu12 <- attr(emu12, "function.name") lphi12 <- as.list(substitute(lphi12)) ephi12 <- link2list(lphi12) lphi12 <- attr(ephi12, "function.name") loratio <- as.list(substitute(loratio)) eoratio <- link2list(loratio) loratio <- attr(eoratio, "function.name") if (!is.Numeric(tol, positive = TRUE, length.arg = 1) || tol > 0.1) stop("bad input for argument 'tol'") if (!is.Numeric(addRidge, length.arg = 1, positive = TRUE) || addRidge > 0.5) stop("bad input for argument 'addRidge'") if (lmu12 != "cloglog") warning("argument 'lmu12' should be 'cloglog'") new("vglmff", blurb = c("Exchangeable bivariate ", lmu12, " odds-ratio model based on\n", "a zero-inflated Poisson distribution\n\n", "Links: ", namesof("mu12", lmu12, earg = emu12), ", ", namesof("phi12", lphi12, earg = ephi12), ", ", namesof("oratio", loratio, earg = eoratio)), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
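# infos: metadata for zipebcom; M1 = 3 linear predictors (mu12, phi12,
# oratio), with phi12 and oratio intercept-only under the default
# 'zero'.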
{ list(M1 = 3, expected = TRUE, multipleResponses = FALSE, parameters.names = c("mu12", "phi12", "oratio"), lmu12 = .lmu12 , lphi12 = .lphi12 , loratio = .loratio , zero = .zero ) }, list( .zero = zero, .lmu12 = lmu12, .lphi12 = lphi12, .loratio = loratio ))), initialize = eval(substitute(expression({ eval(process.binomial2.data.VGAM) predictors.names <- c( namesof("mu12", .lmu12 , earg = .emu12 , tag = FALSE), namesof("phi12", .lphi12 , earg = .ephi12 , tag = FALSE), namesof("oratio", .loratio , earg = .eoratio , tag = FALSE)) propY1.eq.0 <- weighted.mean(y[,'00'], w) + weighted.mean(y[,'01'], w) propY2.eq.0 <- weighted.mean(y[,'00'], w) + weighted.mean(y[,'10'], w) if (length( .iphi12) && any( .iphi12 > propY1.eq.0)) warning("iphi12 must be less than the sample proportion of Y1==0") if (length( .iphi12) && any( .iphi12 > propY2.eq.0)) warning("iphi12 must be less than the sample proportion of Y2==0") if (!length(etastart)) { pstar.init <- ((mu[, 3]+mu[, 4]) + (mu[, 2]+mu[, 4])) / 2 phi.init <- if (length(.iphi12)) rep_len( .iphi12 , n) else min(propY1.eq.0 * 0.95, propY2.eq.0 * 0.95, pstar.init/1.5) oratio.init <- if (length( .ioratio)) rep_len( .ioratio , n) else mu[, 4]*mu[, 1]/(mu[, 2]*mu[, 3]) mu12.init <- if (length(.imu12)) rep_len( .imu12 , n) else pstar.init / (1-phi.init) etastart <- cbind( theta2eta(mu12.init, .lmu12 , earg = .emu12 ), theta2eta(phi.init, .lphi12, earg = .ephi12), theta2eta(oratio.init, .loratio, earg = .eoratio)) mustart <- NULL } }), list( .lmu12 = lmu12, .lphi12 = lphi12, .loratio = loratio, .emu12 = emu12, .ephi12 = ephi12, .eoratio = eoratio, .imu12 = imu12, .iphi12 = iphi12, .ioratio = ioratio ))), linkinv = eval(substitute(function(eta, extra = NULL) { A1vec <- eta2theta(eta[, 1], .lmu12 , earg = .emu12 ) phivec <- eta2theta(eta[, 2], .lphi12 , earg = .ephi12 ) pmargin <- matrix((1 - phivec) * A1vec, nrow(eta), 2) oratio <- eta2theta(eta[, 3], .loratio , earg = .eoratio ) a.temp <- 1 + (pmargin[, 1]+pmargin[, 2])*(oratio-1) b.temp <- -4 * oratio * (oratio-1) * pmargin[, 1] * pmargin[, 2] temp <- sqrt(a.temp^2 + b.temp) pj4 <- ifelse(abs(oratio-1) < .tol, pmargin[, 1]*pmargin[, 2], (a.temp-temp)/(2*(oratio-1))) pj2 <- pmargin[, 2] - pj4 pj3 <- pmargin[, 1] - pj4 cbind("00" = 1-pj4-pj2-pj3, "01" = pj2, "10" = pj3, "11" = pj4) }, list( .tol = tol, .lmu12 = lmu12, .lphi12 = lphi12, .loratio = loratio, .emu12 = emu12, .ephi12 = ephi12, .eoratio = eoratio ))), last = eval(substitute(expression({ misc$link <- c("mu12" = .lmu12 , "phi12" = .lphi12 , "oratio" = .loratio ) misc$earg <- list("mu12" = .emu12 , "phi12" = .ephi12 , "oratio" = .eoratio ) misc$tol <- .tol misc$expected <- TRUE misc$addRidge <- .addRidge }), list( .tol = tol, .addRidge = addRidge, .lmu12 = lmu12, .lphi12 = lphi12, .loratio = loratio, .emu12 = emu12, .ephi12 = ephi12, .eoratio = eoratio ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) ll.elts <- (if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE) if (summation) { sum(ll.elts) } 
else { ll.elts } } }, vfamily = c("zipebcom"), validparams = eval(substitute(function(eta, y, extra = NULL) { A1vec <- eta2theta(eta[, 1], .lmu12 , earg = .emu12 ) smallno <- .Machine$double.eps^(2/4) A1vec[A1vec > 1.0 - smallno] <- 1.0 - smallno phivec <- eta2theta(eta[, 2], .lphi12 , earg = .ephi12 ) oratio <- eta2theta(eta[, 3], .loratio , earg = .eoratio ) okay1 <- all(is.finite(A1vec )) && all(0 < A1vec & A1vec < 1) && all(is.finite(phivec)) && all(0 < phivec & phivec < 1) && all(is.finite(oratio)) && all(0 < oratio) okay1 }, list( .lmu12 = lmu12, .lphi12 = lphi12, .loratio = loratio, .emu12 = emu12, .ephi12 = ephi12, .eoratio = eoratio ))), deriv = eval(substitute(expression({ A1vec <- eta2theta(eta[, 1], .lmu12 , earg = .emu12 ) smallno <- .Machine$double.eps^(2/4) A1vec[A1vec > 1.0 -smallno] <- 1.0 - smallno phivec <- eta2theta(eta[, 2], .lphi12, earg = .ephi12) pmargin <- matrix((1 - phivec) * A1vec, n, 2) oratio <- eta2theta(eta[, 3], .loratio, earg = .eoratio) Vab <- 1 / (1/mu[, 1] + 1/mu[, 2] + 1/mu[, 3] + 1/mu[, 4]) Vabc <- 1/mu[, 1] + 1/mu[, 2] denom3 <- 2 * oratio * mu[, 2] + mu[, 1] + mu[, 4] temp1 <- oratio * mu[, 2] + mu[, 4] dp11star.dp1unstar <- 2*(1-phivec)*Vab * Vabc dp11star.dphi1 <- -2 * A1vec * Vab * Vabc dp11star.doratio <- Vab / oratio yandmu <- (y[, 1]/mu[, 1] - y[, 2]/mu[, 2] - y[, 3]/mu[, 3] + y[, 4]/mu[, 4]) dp11.doratio <- Vab / oratio check.dl.doratio <- yandmu * dp11.doratio cyandmu <- (y[, 2]+y[, 3])/mu[, 2] - 2 * y[, 1]/mu[, 1] dl.dmu1 <- dp11star.dp1unstar * yandmu + (1-phivec) * cyandmu dl.dphi1 <- dp11star.dphi1 * yandmu - A1vec * cyandmu dl.doratio <- check.dl.doratio dthetas.detas = cbind(dtheta.deta(A1vec, .lmu12 , earg = .emu12 ), dtheta.deta(phivec, .lphi12, earg = .ephi12), dtheta.deta(oratio, .loratio, earg = .eoratio)) c(w) * cbind(dl.dmu1, dl.dphi1, dl.doratio) * dthetas.detas }), list( .lmu12 = lmu12, .lphi12 = lphi12, .loratio = loratio, .emu12 = emu12, .ephi12 = ephi12, .eoratio = eoratio ))), weight = eval(substitute(expression({ wz <- matrix(0, n, 4) alternwz11 <- 2 * (1-phivec)^2 * (2/mu[, 1] + 1/mu[, 2] - 2*Vab*Vabc^2) * (dthetas.detas[, 1])^2 wz[, iam(1, 1, M)] <- alternwz11 alternwz22 <- 2* A1vec^2 *(2/mu[, 1] + 1/mu[, 2] - 2*Vab*Vabc^2) * (dthetas.detas[, 2])^2 wz[, iam(2, 2, M)] <- alternwz22 alternwz12 <- -2*A1vec*(1-phivec)* (2/mu[, 1] + 1/mu[, 2] - 2*Vab*Vabc^2) * dthetas.detas[, 1] * dthetas.detas[, 2] wz[, iam(1, 2, M)] <- alternwz12 alternwz33 <- (Vab / oratio^2) * dthetas.detas[, 3]^2 wz[, iam(3, 3, M)] <- alternwz33 wz[, 1:2] <- wz[, 1:2] * (1 + .addRidge) c(w) * wz }), list( .addRidge = addRidge )))) } binom2.Rho <- function(rho = 0, imu1 = NULL, imu2 = NULL, exchangeable = FALSE, nsimEIM = NULL) { lmu12 <- "probit" emu12 <- list() if (is.Numeric(nsimEIM)) { if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 100) warning("'nsimEIM' should be an integer greater than 100") } if (min(rho) <= -1 || 1 <= max(rho)) stop("argument 'rho' should lie in (-1, 1)") new("vglmff", blurb = c("Bivariate probit model with rho = ", format(rho), "\n", "Links: ", namesof("mu1", lmu12, earg = emu12), ", ", namesof("mu2", lmu12, earg = emu12)), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(c(1, 1), 2, 1), x = x, bool = .exchangeable , constraints = constraints, apply.int = TRUE) }), list( .exchangeable = exchangeable ))), deviance = Deviance.categorical.data.vgam, infos = eval(substitute(function(...) 
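# infos: metadata for the fixed-rho bivariate probit.  Only mu1 and mu2
# are estimated (rho is supplied by the user and held fixed, hence its
# absence from parameters.names).
# (cf. M1 = 3 below, although only two linear predictors are fitted
# per response here.)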
{ list(M1 = 3, expected = TRUE, multipleResponses = FALSE, parameters.names = c("mu1", "mu2"), lmu12 = .lmu12 ) }, list( .lmu12 = lmu12 ))), initialize = eval(substitute(expression({ eval(process.binomial2.data.VGAM) predictors.names <- c( namesof("mu1", .lmu12 , earg = .emu12 , short = TRUE), namesof("mu2", .lmu12 , earg = .emu12 , short = TRUE)) if (is.null( .nsimEIM )) { save.weights <- control$save.weights <- FALSE } if (is.null(etastart)) { mu1.init <- if (is.Numeric( .imu1 )) rep_len( .imu1 , n) else mu[, 3] + mu[, 4] mu2.init <- if (is.Numeric( .imu2 )) rep_len( .imu2 , n) else mu[, 2] + mu[, 4] etastart <- cbind(theta2eta(mu1.init, .lmu12 , earg = .emu12 ), theta2eta(mu2.init, .lmu12 , earg = .emu12 )) mustart <- NULL } }), list( .lmu12 = lmu12, .emu12 = emu12, .nsimEIM = nsimEIM, .imu1 = imu1, .imu2 = imu2 ))), linkinv = eval(substitute(function(eta, extra = NULL) { pmargin <- cbind(eta2theta(eta[, 1], .lmu12 , earg = .emu12 ), eta2theta(eta[, 2], .lmu12 , earg = .emu12 )) rhovec <- rep_len( .rho , nrow(eta)) p11 <- pbinorm(eta[, 1], eta[, 2], cov12 = rhovec) p01 <- pmin(pmargin[, 2] - p11, pmargin[, 2]) p10 <- pmin(pmargin[, 1] - p11, pmargin[, 1]) p00 <- 1 - p01 - p10 - p11 ansmat <- abs(cbind("00"=p00, "01"=p01, "10"=p10, "11"=p11)) ansmat / rowSums(ansmat) }, list( .lmu12 = lmu12, .emu12 = emu12, .rho = rho ))), last = eval(substitute(expression({ misc$link <- c(mu1 = .lmu12 , mu2 = .lmu12 ) misc$earg <- list(mu1 = .emu12 , mu2 = .emu12 ) misc$nsimEIM <- .nsimEIM misc$expected <- TRUE misc$rho <- .rho }), list( .lmu12 = lmu12, .emu12 = emu12, .rho = rho, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ycounts <- if (is.numeric(extra$orig.w)) y * w / extra$orig.w else y * w # Convert proportions to counts nvec <- if (is.numeric(extra$orig.w)) round(w / extra$orig.w) else round(w) smallno <- 1.0e4 * .Machine$double.eps if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") ycounts <- round(ycounts) ll.elts <- (if (is.numeric(extra$orig.w)) extra$orig.w else 1) * dmultinomial(x = ycounts, size = nvec, prob = mu, log = TRUE, dochecking = FALSE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .rho = rho ))), vfamily = c("binom2.Rho", "binom2"), validparams = eval(substitute(function(eta, y, extra = NULL) { pmargin <- cbind(eta2theta(eta[, 1], .lmu12 , earg = .emu12 ), eta2theta(eta[, 2], .lmu12 , earg = .emu12 )) okay1 <- all(is.finite(pmargin)) && all(0 < pmargin & pmargin < 1) okay1 }, list( .lmu12 = lmu12, .emu12 = emu12, .rho = rho ))), deriv = eval(substitute(expression({ pmargin <- cbind(eta2theta(eta[, 1], .lmu12 , earg = .emu12 ), eta2theta(eta[, 2], .lmu12 , earg = .emu12 )) rhovec <- rep_len( .rho , nrow(eta)) p11 <- pbinorm(eta[, 1], eta[, 2], cov12 = rhovec) p01 <- pmargin[, 2]-p11 p10 <- pmargin[, 1]-p11 p00 <- 1-p01-p10-p11 ABmat <- (eta[, 1:2] - rhovec * eta[, 2:1]) / sqrt(pmax(1e5 * .Machine$double.eps, 1.0 - rhovec^2)) PhiA <- pnorm(ABmat[, 1]) PhiB <- pnorm(ABmat[, 2]) onemPhiA <- pnorm(ABmat[, 1], lower.tail = FALSE) onemPhiB <- pnorm(ABmat[, 2], lower.tail = FALSE) smallno <- 1000 * .Machine$double.eps p00[p00 < smallno] <- smallno p01[p01 < smallno] <- smallno p10[p10 < smallno] <- smallno p11[p11 < smallno] <- smallno dprob00 <- dibinorm(eta[, 1], eta[, 2], cov12 = rhovec) dl.dprob1 <- PhiB*(y[, 4]/p11-y[, 2]/p01) + onemPhiB*(y[, 
3]/p10-y[, 1]/p00) dl.dprob2 <- PhiA*(y[, 4]/p11-y[, 3]/p10) + onemPhiA*(y[, 2]/p01-y[, 1]/p00) dprob1.deta <- dtheta.deta(pmargin[, 1], .lmu12 , earg = .emu12 ) dprob2.deta <- dtheta.deta(pmargin[, 2], .lmu12 , earg = .emu12 ) dthetas.detas <- cbind(dprob1.deta, dprob2.deta) c(w) * cbind(dl.dprob1, dl.dprob2) * dthetas.detas }), list( .lmu12 = lmu12, .emu12 = emu12, .rho = rho ))), weight = eval(substitute(expression({ if (is.null( .nsimEIM)) { ned2l.dprob1prob1 <- PhiB^2 *(1/p11+1/p01) + onemPhiB^2 *(1/p10+1/p00) ned2l.dprob2prob2 <- PhiA^2 *(1/p11+1/p10) + onemPhiA^2 *(1/p01+1/p00) ned2l.dprob1prob2 <- PhiA * (PhiB/p11 - onemPhiB/p10) + onemPhiA * (onemPhiB/p00 - PhiB/p01) wz <- matrix(0, n, dimm(M)) # 6=dimm(M) wz[, iam(1, 1, M)] <- ned2l.dprob1prob1 * dprob1.deta^2 wz[, iam(2, 2, M)] <- ned2l.dprob2prob2 * dprob2.deta^2 wz[, iam(1, 2, M)] <- ned2l.dprob1prob2 * dprob1.deta * dprob2.deta } else { run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) for (ii in 1:( .nsimEIM )) { ysim <- rbinom2.rho(n = n, mu1 = pmargin[, 1], mu2 = pmargin[, 2], twoCols = FALSE, rho = rhovec) dl.dprob1 <- PhiB * (ysim[, 4]/p11-ysim[, 2]/p01) + onemPhiB * (ysim[, 3]/p10-ysim[, 1]/p00) dl.dprob2 <- PhiA * (ysim[, 4]/p11-ysim[, 3]/p10) + onemPhiA * (ysim[, 2]/p01-ysim[, 1]/p00) rm(ysim) temp3 <- cbind(dl.dprob1, dl.dprob2) run.varcov <- ((ii-1) * run.varcov + temp3[, ind1$row.index] * temp3[, ind1$col.index]) / ii } wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] } c(w) * wz }), list( .nsimEIM = nsimEIM )))) } binom2.rho.ss <- function(lrho = "rhobit", lmu = "probit", # added 20120817 imu1 = NULL, imu2 = NULL, irho = NULL, imethod = 1, zero = 3, exchangeable = FALSE, grho = seq(-0.95, 0.95, by = 0.05)) { lrho <- as.list(substitute(lrho)) e.rho <- link2list(lrho) l.rho <- attr(e.rho, "function.name") lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") if (lmu != "probit") warning("argument 'lmu' should be 'probit'. Changing it.") lmu12 <- "probit" # But emu may contain some arguments. emu12 <- emu # list() if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Bivariate probit model with sample selection\n", "Links: ", namesof("mu1", lmu12, earg = emu12), ", ", namesof("mu2", lmu12, earg = emu12), ", ", namesof("rho", l.rho, earg = e.rho)), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(c(1, 1, 0, 0, 0, 1), 3, 2), x = x, bool = .exchangeable , constraints = constraints, apply.int = TRUE) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .exchangeable = exchangeable, .zero = zero ))), infos = eval(substitute(function(...) 
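# infos: metadata for the bivariate probit model with sample selection;
# the default 'zero = 3' models rho as intercept-only.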
{ list(M1 = 3, multipleResponses = FALSE, parameters.names = c("mu1", "mu2", "rho"), zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ if (!is.matrix(y)) stop("response must be a 2- or 3-column matrix") ncoly <- ncol(y) temp5 <- w.y.check(w = w, y = y, ncol.w.min = 1, ncol.w.max = 1, ncol.y.min = 2, ncol.y.max = 3, Is.integer.y = TRUE, Is.nonnegative.y = TRUE, out.wy = TRUE, colsyperw = ncoly, maximize = TRUE) w <- temp5$w y <- temp5$y if (!all(c(y) == 0 | c(y) == 1)) stop("response matrix must have values 0 and 1 only") if (ncoly == 2) { extra$ymat2col <- y y <- cbind("0" = 1 - y[, 1], "10" = y[, 1] * (1 - y[, 2]), "11" = y[, 1] * y[, 2]) } else { if (!all(rowSums(y) == 1)) stop("response matrix must have two 0s and one 1 in each row") y1vec <- 1 - y[, 1] # Not a 0 means a 1. y2vec <- ifelse(y1vec == 1, y[, 3], 0) extra$ymat2col <- cbind(y1vec, y2vec) } predictors.names <- c( namesof("mu1", .lmu12 , earg = .emu12 , short = TRUE), namesof("mu2", .lmu12 , earg = .emu12 , short = TRUE), namesof("rho", .l.rho , earg = .e.rho, short = TRUE)) ycounts <- y nvec <- 1 if (!length(etastart)) { if (length(mustart)) { mu1.init <- mustart[, 1] mu2.init <- mustart[, 2] } else if ( .imethod == 1) { mu1.init <- weighted.mean(extra$ymat2col[, 1], c(w)) index1 <- (extra$ymat2col[, 1] == 1) mu2.init <- weighted.mean(extra$ymat2col[index1, 2], w[index1, 1]) mu1.init <- rep_len(mu1.init, n) mu2.init <- rep_len(mu2.init, n) } else if ( .imethod == 2) { warning("not working yet2") glm1.fit <- glm(ycounts ~ x - 1, weights = c(w), fam = binomial("probit")) glm2.fit <- glm(ycounts[, 2:1] ~ x - 1, weights = c(w), fam = binomial("probit")) mu1.init <- fitted(glm1.fit) mu2.init <- fitted(glm2.fit) } else { stop("bad value for argument 'imethod'") } if (length( .imu1 )) mu1.init <- rep_len( .imu1 , n) if (length( .imu2 )) mu2.init <- rep_len( .imu2 , n) binom2.rho.ss.Loglikfun <- function(rhoval, y, x, w, extraargs) { init.mu1 <- extraargs$initmu1 init.mu2 <- extraargs$initmu2 ymat2col <- extraargs$ymat2col nvec <- extraargs$nvec eta1 <- qnorm(init.mu1) eta2 <- qnorm(init.mu2) smallno <- 1000 * .Machine$double.eps p11 <- pmax(smallno, pbinorm(eta1, eta2, cov12 = rhoval)) p10 <- pmax(smallno, pnorm( eta1) - p11) p0 <- pmax(smallno, pnorm(-eta1)) mumat <- abs(cbind("0" = p0, "10" = p10, "11" = p11)) # rows sum to unity smallpos <- 1.0e-100 mumat[mumat < smallpos] <- smallpos ycounts <- y # n x 3 use.mu <- mumat # cbind(p0, p10, p11) retval <- sum(c(w) * dmultinomial(x = ycounts, size = nvec, prob = use.mu, # mumat, log = TRUE, dochecking = FALSE)) retval } rho.grid <- .grho # seq(-0.95, 0.95, len = 31) try.this <- grid.search(rho.grid, objfun = binom2.rho.ss.Loglikfun, y = y, x = x, w = w, extraargs = list( ymat2col = extra$ymat2col, initmu1 = mu1.init, initmu2 = mu2.init, nvec = nvec )) rho.init <- if (is.Numeric( .irho )) rep_len( .irho , n) else { try.this } etastart <- cbind(theta2eta(mu1.init, .lmu12 , earg = .emu12 ), theta2eta(mu2.init, .lmu12 , earg = .emu12 ), theta2eta(rho.init, .l.rho , earg = .e.rho )) } mustart <- NULL # Coz etastart has been computed and/or no @linkfun. 
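# At this point rho.init has been set either from the user's 'irho' or
# by a coarse grid search over .grho (the default
# grho = seq(-0.95, 0.95, by = 0.05) gives 39 candidate values),
# maximizing the multinomial log-likelihood at the initial mu1 and mu2.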
}), list( .lmu12 = lmu12, .l.rho = l.rho, .emu12 = emu12, .e.rho = e.rho, .grho = grho, .irho = irho, .imethod = imethod, .imu1 = imu1, .imu2 = imu2 ))), linkinv = eval(substitute(function(eta, extra = NULL) { rhovec <- eta2theta(eta[, 3], .l.rho , earg = .e.rho ) smallno <- 1000 * .Machine$double.eps p11 <- pmax(smallno, pbinorm(eta[, 1], eta[, 2], cov12 = rhovec)) p10 <- pmax(smallno, pnorm( eta[, 1]) - p11) p0 <- pmax(smallno, pnorm(-eta[, 1])) sumprob <- p11 + p10 + p0 p11 <- p11 / sumprob p10 <- p10 / sumprob p0 <- p0 / sumprob ansmat <- abs(cbind("0" = p0, # p0 == P(Y_1 = 0) "10" = p10, "11" = p11)) ansmat }, list( .lmu12 = lmu12, .l.rho = l.rho, .emu12 = emu12, .e.rho = e.rho ))), last = eval(substitute(expression({ misc$link <- c(mu1 = .lmu12 , mu2 = .lmu12 , rho = .l.rho ) misc$earg <- list(mu1 = .emu12 , mu2 = .emu12 , rho = .e.rho ) misc$expected <- TRUE misc$multipleResponses <- FALSE }), list( .lmu12 = lmu12, .l.rho = l.rho, .emu12 = emu12, .e.rho = e.rho ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ycounts <- y # n x 3 nvec <- 1 smallno <- 1000 * .Machine$double.eps rhovec <- eta2theta(eta[, 3], .l.rho , earg = .e.rho ) p11 <- pmax(smallno, pbinorm(eta[, 1], eta[, 2], cov12 = rhovec)) p10 <- pmax(smallno, pnorm( eta[, 1]) - p11) p0 <- pmax(smallno, pnorm(-eta[, 1])) sumprob <- p11 + p10 + p0 p11 <- p11 / sumprob p10 <- p10 / sumprob p0 <- p0 / sumprob ll.elts <- c(w) * dmultinomial(x = ycounts, size = nvec, prob = mu, # use.mu, log = TRUE, dochecking = FALSE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .l.rho = l.rho, .e.rho = e.rho ))), vfamily = c("binom2.rho.ss", "binom2"), validparams = eval(substitute(function(eta, y, extra = NULL) { pmargin <- cbind(eta2theta(eta[, 1], .lmu12 , earg = .emu12 ), eta2theta(eta[, 2], .lmu12 , earg = .emu12 )) rhovec <- eta2theta(eta[, 3], .l.rho , earg = .e.rho ) okay1 <- all(is.finite(pmargin)) && all( 0 -1e-3) { zee <- max(zee, 0) } else { stop("profiling has found a better solution, ", "so original fit had not converged") } zedd <- sgn * sqrt(zee) zi <- c(zi, zedd) } # while } # for sgn si. <- order(zi) prof[[pnamesi]] <- structure(data.frame(zi[si.]), names = profName) prof[[pnamesi]]$par.vals <- pvi[si., ,drop = FALSE] } # for i val <- structure(prof, original.fit = object, summary = summ) class(val) <- c("profile.glm", "profile") val } if (!isGeneric("profile")) setGeneric("profile", function(fitted, ...) standardGeneric("profile"), package = "VGAM") setMethod("profile", "vglm", function(fitted, ...) profilevglm(object = fitted, ...)) vplot.profile <- function(x, ...) { nulls <- sapply(x, is.null) if (all(nulls)) return(NULL) x <- x[!nulls] nm <- names(x) nr <- ceiling(sqrt(length(nm))) oldpar <- par(mfrow = c(nr, nr)) on.exit(par(oldpar)) for (nm in names(x)) { tau <- x[[nm]][[1L]] parval <- x[[nm]][[2L]][, nm] dev.hold() plot(parval, tau, xlab = nm, ylab = "tau", type = "n") if (sum(tau == 0) == 1) points(parval[tau == 0], 0, pch = 3) splineVals <- spline(parval, tau) lines(splineVals$x, splineVals$y) dev.flush() } } vpairs.profile <- function(x, colours = 2:3, ...) 
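# Pairwise plot of profile traces, in the style of the pairs() method
# for "profile" objects in MASS: diagonal panels print the parameter
# name, off-diagonal panels overlay the spline-smoothed traces of each
# pair, and a cross (pch = 3) marks the fitted coefficients.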
{ parvals <- lapply(x, "[[", "par.vals") rng <- apply(do.call("rbind", parvals), 2L, range, na.rm = TRUE) Pnames <- colnames(rng) npar <- length(Pnames) coefs <- coef(attr(x, "original.fit")) form <- paste(as.character(formula(attr(x, "original.fit")))[c(2, 1, 3)], collapse = "") oldpar <- par(mar = c(0, 0, 0, 0), mfrow = c(1, 1), oma = c(3, 3, 6, 3), las = 1) on.exit(par(oldpar)) fin <- par("fin") dif <- (fin[2L] - fin[1L])/2 adj <- if (dif > 0) c(dif, 0, dif, 0) else c(0, -dif, 0, -dif) par(omi = par("omi") + adj) cex <- 1 + 1/npar frame() mtext(form, side = 3, line = 3, cex = 1.5, outer = TRUE) del <- 1/npar for (i in 1L:npar) { ci <- npar - i pi <- Pnames[i] for (j in 1L:npar) { dev.hold() pj <- Pnames[j] par(fig = del * c(j - 1, j, ci, ci + 1)) if (i == j) { par(new = TRUE) plot(rng[, pj], rng[, pi], axes = FALSE, xlab = "", ylab = "", type = "n") op <- par(usr = c(-1, 1, -1, 1)) text(0, 0, pi, cex = cex, adj = 0.5) par(op) } else { col <- colours if (i < j) col <- col[2:1] if (!is.null(parvals[[pj]])) { par(new = TRUE) plot(spline(x <- parvals[[pj]][, pj], y <- parvals[[pj]][, pi]), type = "l", xlim = rng[, pj], ylim = rng[, pi], axes = FALSE, xlab = "", ylab = "", col = col[2L]) pu <- par("usr") smidge <- 2/100 * (pu[4L] - pu[3L]) segments(x, pmax(pu[3L], y - smidge), x, pmin(pu[4L], y + smidge)) } else plot(rng[, pj], rng[, pi], axes = FALSE, xlab = "", ylab = "", type = "n") if (!is.null(parvals[[pi]])) { lines(x <- parvals[[pi]][, pj], y <- parvals[[pi]][, pi], type = "l", col = col[1L]) pu <- par("usr") smidge <- 2/100 * (pu[2L] - pu[1L]) segments(pmax(pu[1L], x - smidge), y, pmin(pu[2L], x + smidge), y) } points(coefs[pj], coefs[pi], pch = 3, cex = 3) } if (i == npar) axis(1) if (j == 1) axis(2) if (i == 1) axis(3) if (j == npar) axis(4) dev.flush() } } par(fig = c(0, 1, 0, 1)) invisible(x) } VGAM/R/rrvglm.R0000644000176200001440000001461713135276757012672 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. rrvglm <- function(formula, family = stop("argument 'family' needs to be assigned"), data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = rrvglm.control(...), offset = NULL, method="rrvglm.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = NULL, qr.arg = FALSE, smart = TRUE, ...) { dataname <- as.character(substitute(data)) # "list" if no data= function.name <- "rrvglm" ocall <- match.call() if (smart) setup.smart("write") mt <- terms(formula, data = data) if (missing(data)) data <- environment(formula) mf <- match.call(expand.dots = FALSE) mf$family <- mf$method <- mf$model <- mf$x.arg <- mf$y.arg <- mf$control <- mf$contrasts <- mf$constraints <- mf$extra <- mf$qr.arg <- NULL mf$coefstart <- mf$etastart <- mf$... <- NULL mf$smart <- NULL mf$drop.unused.levels <- TRUE mf[[1]] <- as.name("model.frame") mf <- eval(mf, parent.frame()) if (method == "model.frame") return(mf) na.act <- attr(mf, "na.action") xvars <- as.character(attr(mt, "variables"))[-1] if ((yvar <- attr(mt, "response")) > 0) xvars <- xvars[-yvar] xlev = .getXlevels(mt, mf) y <- model.response(mf, "any") x <- if (!is.empty.model(mt)) model.matrix(mt, mf, contrasts) else matrix(, NROW(y), 0) attr(x, "assign") = attrassigndefault(x, mt) offset <- model.offset(mf) if (is.null(offset)) offset <- 0 # yyy ??? 
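# Prior weights default to 1 for every row; negative weights are
# rejected just below.  A typical top-level call looks like the
# following sketch (illustrative only; 'pneumo' is a data frame
# supplied with VGAM):
#   rrvglm(cbind(normal, mild, severe) ~ let, multinomial, data = pneumo)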
w <- model.weights(mf) if (!length(w)) { w <- rep_len(1, nrow(mf)) } else if (NCOL(w) == 1 && any(w < 0)) stop("negative weights not allowed") if (is.character(family)) family <- get(family) if (is.function(family)) family <- family() if (!inherits(family, "vglmff")) { stop("'family=", family, "' is not a VGAM family function") } eval(vcontrol.expression) if (!is.null(family@first)) eval(family@first) if (control$Quadratic && control$FastAlgorithm && length(as.list(family@deviance)) <= 1) stop("The fast algorithm requires the family ", "function to have a deviance slot") rrvglm.fitter <- get(method) fit <- rrvglm.fitter(x = x, y = y, w = w, offset = offset, etastart = etastart, mustart = mustart, coefstart = coefstart, family = family, control = control, constraints = constraints, criterion = control$criterion, extra = extra, qr.arg = qr.arg, Terms = mt, function.name = function.name, ...) if (control$Bestof > 1) { deviance.Bestof <- rep_len(fit$crit.list$deviance, control$Bestof) for (tries in 2:control$Bestof) { if (control$trace && (control$Bestof>1)) cat(paste("\n========================= Fitting model", tries, "=========================\n\n")) it <- rrvglm.fitter(x = x, y = y, w = w, offset = offset, etastart = etastart, mustart = mustart, coefstart = coefstart, family = family, control = control, constraints = constraints, criterion = control$criterion, extra = extra, qr.arg = qr.arg, Terms = mt, function.name = function.name, ...) deviance.Bestof[tries] <- it$crit.list$deviance if (min(deviance.Bestof[1:(tries-1)]) > deviance.Bestof[tries]) fit <- it } fit$misc$deviance.Bestof = deviance.Bestof } fit$misc$dataname <- dataname if (smart) { fit$smart.prediction <- get.smart.prediction() wrapup.smart() } answer <- new(if (control$Quadratic) "qrrvglm" else "rrvglm", "assign" = attr(x, "assign"), "call" = ocall, "coefficients" = fit$coefficients, "constraints" = fit$constraints, "criterion" = fit$crit.list, "df.residual" = fit$df.residual, "df.total" = fit$df.total, "dispersion" = 1, "effects" = fit$effects, "family" = fit$family, "misc" = fit$misc, "model" = if (model) mf else data.frame(), "R" = fit$R, "rank" = fit$rank, "residuals" = as.matrix(fit$residuals), "ResSS" = fit$ResSS, "smart.prediction" = as.list(fit$smart.prediction), "terms" = list(terms = mt)) if (!smart) answer@smart.prediction <- list(smart.arg = FALSE) if (qr.arg) { class(fit$qr) = "list" slot(answer, "qr") = fit$qr } if (length(attr(x, "contrasts"))) slot(answer, "contrasts") = attr(x, "contrasts") if (length(fit$fitted.values)) slot(answer, "fitted.values") = as.matrix(fit$fitted.values) slot(answer, "na.action") = if (length(na.act)) list(na.act) else list() if (length(offset)) slot(answer, "offset") = as.matrix(offset) if (length(fit$weights)) slot(answer, "weights") = as.matrix(fit$weights) if (x.arg) slot(answer, "x") = fit$x # The 'small' design matrix if (length(xlev)) slot(answer, "xlevels") = xlev if (y.arg) slot(answer, "y") = as.matrix(fit$y) answer@misc$formula = formula slot(answer, "control") = fit$control slot(answer, "extra") = if (length(fit$extra)) { if (is.list(fit$extra)) fit$extra else { warning("\"extra\" is not a list, therefore placing \"extra\" into a list") list(fit$extra) } } else list() # R-1.5.0 slot(answer, "iter") = fit$iter fit$predictors = as.matrix(fit$predictors) # Must be a matrix dimnames(fit$predictors) = list(dimnames(fit$predictors)[[1]], fit$misc$predictors.names) slot(answer, "predictors") = fit$predictors if (length(fit$prior.weights)) slot(answer, "prior.weights") = 
as.matrix(fit$prior.weights) answer } attr(rrvglm, "smart") <- TRUE VGAM/R/predict.vgam.q0000644000176200001440000002472513135276757014004 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. predict.vgam <- function(object, newdata = NULL, type = c("link", "response", "terms"), se.fit = FALSE, deriv.arg = 0, terms.arg = NULL, raw = FALSE, all = TRUE, offset = 0, untransform = FALSE, dispersion = NULL, ...) { newdata <- if (missing(newdata)) { NULL } else { as.data.frame(newdata) } no.newdata <- (length(newdata) == 0) na.act <- object@na.action object@na.action <- list() if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("link", "response", "terms"))[1] if (untransform && (type != "link" || se.fit || deriv.arg != 0 || offset != 0)) stop("argument 'untransform = TRUE' only if type='link', ", "se.fit = FALSE, deriv = 0") if (raw && type!="terms") stop("'raw = TRUE' only works when 'type = \"terms\"'") if (!is.numeric(deriv.arg) || deriv.arg < 0 || deriv.arg != round(deriv.arg) || length(deriv.arg) > 1) stop("bad input for the 'deriv' argument") if (deriv.arg > 0 && type != "terms") stop("'deriv>0' can only be specified if 'type=\"terms\"'") if (deriv.arg != 0 && !(type != "response" && !se.fit)) stop("argument 'deriv' only works with type != 'response' and ", "se.fit = FALSE") if (se.fit && length(newdata)) stop("cannot specify 'se.fit = TRUE' when argument 'newdata' ", "is assigned") tt <- terms(object) # 20030811; object@terms$terms ttf <- attr(tt, "factors") tto <- attr(tt, "order") intercept <- attr(tt, "intercept") if (!intercept) stop("an intercept is assumed") M <- object@misc$M Hlist <- object@constraints ncolHlist <- unlist(lapply(Hlist, ncol)) if (intercept) ncolHlist <- ncolHlist[-1] if (raw) { Hlist <- canonical.Hlist(Hlist) object@constraints <- Hlist } if (!length(newdata)) { if (type == "link") { if (se.fit) { stop("cannot handle this option (se.fit = TRUE) currently") } else { answer <- if (length(na.act)) { napredict(na.act[[1]], object@predictors) } else { object@predictors } if (untransform) return(untransformVGAM(object, answer)) else return(answer) } } else if (type == "response") { if (se.fit) { stop("cannot handle this option (se.fit = TRUE) currently") } else { if (length(na.act)) { return(napredict(na.act[[1]], object@fitted.values)) } else { return(object@fitted.values) } } } predictor <- predict.vlm(object, type = "terms", se.fit = se.fit, terms.arg = terms.arg, raw = raw, all = all, offset = offset, dispersion = dispersion, ...) # deriv.arg = deriv.arg, newdata <- model.matrixvlm(object, type = "lm") } else { temp.type <- if (type == "link") "response" else type predictor <- predict.vlm(object, newdata, type = temp.type, se.fit = se.fit, terms.arg = terms.arg, raw = raw, all = all, offset = offset, dispersion = dispersion, ...) 
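# When newdata is supplied, delegate to predict.vlm(); since a VLM is
# linear, its "response" scale coincides with the linear predictors,
# which is why type = "link" was remapped to "response" above.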
# deriv.arg = deriv.arg, } if (deriv.arg > 0) if (se.fit) { predictor$fitted.values <- predictor$fitted.values * 0 predictor$se.fit <- predictor$se.fit * NA } else { predictor <- predictor * 0 } if (length(s.xargument <- object@s.xargument)) { dnames2 <- dimnames(newdata)[[2]] index1 <- match(s.xargument, dnames2, nomatch = FALSE) index2 <- match(names(s.xargument), dnames2, nomatch = FALSE) index <- index1 | index2 if (!length(index) || any(!index)) stop("required variables not found in newdata") if (is.null(tmp6 <- attr(if (se.fit) predictor$fitted.values else predictor, "vterm.assign"))) { Hlist <- subconstraints(object@misc$orig.assign, object@constraints) ncolHlist <- unlist(lapply(Hlist, ncol)) if (intercept) ncolHlist <- ncolHlist[-1] cs <- if (raw) cumsum(c(1, ncolHlist)) else cumsum(c(1, M + 0 * ncolHlist)) tmp6 <- vector("list", length(ncolHlist)) for (ii in seq_along(tmp6)) tmp6[[ii]] <- cs[ii]:(cs[ii+1]-1) names(tmp6) <- names(ncolHlist) } n.s.xargument <- names(s.xargument) # e.g., c("s(x)", "s(x2)") for (ii in n.s.xargument) { fred <- s.xargument[ii] if (!any(dimnames(newdata)[[2]] == fred)) fred <- ii xx <- newdata[, fred] # [, s.xargument[ii]] # [, nindex[ii]] rawMat <- predictvsmooth.spline.fit(object@Bspline[[ii]], x = xx, deriv = deriv.arg)$y eta.mat <- if (raw) rawMat else (rawMat %*% t(Hlist[[ii]])) if (type == "terms") { hhh <- tmp6[[ii]] if (se.fit) { predictor$fitted.values[, hhh] <- predictor$fitted.values[, hhh] + eta.mat TS <- predictor$sigma^2 temp.var <- if (raw) { tmp7 <- object@misc$varassign tmp7 <- tmp7[[ii]] object@var[, tmp7, drop = FALSE] } else { stop("cannot handle se's with raw = FALSE") } predictor$se.fit[, hhh] <- (predictor$se.fit[, hhh]^2 + TS * temp.var)^0.5 } else { predictor[, hhh] <- predictor[, hhh] + eta.mat } } else { if (se.fit) { predictor$fitted.values <- predictor$fitted.values + eta.mat TS <- 1 # out$residual.scale^2 TS <- predictor$sigma^2 TT <- ncol(object@var) predictor$se.fit <- sqrt(predictor$se.fit^2 + TS * object@var %*% rep_len(1, TT)) } else { predictor <- predictor + eta.mat } } } } if (type == "link") { if (no.newdata && length(na.act)) { return(napredict(na.act[[1]], predictor)) } else { return(predictor) } } else if (type == "response") { fv <- object@family@linkinv(if (se.fit) predictor$fitted.values else predictor, object@extra) if (is.matrix(fv) && is.matrix(object@fitted.values)) dimnames(fv) <- list(dimnames(fv)[[1]], dimnames(object@fitted.values)[[2]]) if (is.matrix(fv) && ncol(fv) == 1) fv <- c(fv) if (no.newdata && length(na.act)) { fv <- if (se.fit) { napredict(na.act[[1]], fv) } else { napredict(na.act[[1]], fv) } } if (se.fit) { return(list(fit = fv, se.fit = fv * NA)) } else { return(fv) } } else { if (deriv.arg >= 1) { if (se.fit) { attr(predictor$fitted.values, "constant") <- NULL } else { attr(predictor, "constant") <- NULL } } if (deriv.arg >= 1) { v <- attr(if (se.fit) predictor$fitted.values else predictor, "vterm.assign") is.lin <- is.linear.term(names(v)) coefmat <- coefvlm(object, matrix.out = TRUE) ord <- 0 for (ii in names(v)) { ord <- ord + 1 index <- v[[ii]] lindex <- length(index) if (is.lin[ii]) { if (tto[ord] > 1 || (length(ttf) && ttf[ii, ii])) { if (se.fit) { predictor$fitted.values[, index] <- if (tto[ord] > 1) NA else NA } else { predictor[, index] <- if (tto[ord] > 1) NA else NA } } else { ans <- coefmat[ii, 1:lindex] if (se.fit) { predictor$fitted.values[, index] <- if (deriv.arg == 1) matrix(ans, ncol = lindex, byrow = TRUE) else 0 } else { predictor[, index] <- if (deriv.arg == 1) 
matrix(ans, ncol = lindex, byrow = TRUE) else 0 } } } else if (length(s.xargument) && any(n.s.xargument == ii)) { ans <- coefmat[ii, 1:lindex] if (se.fit) { predictor$fitted.values[, index] <- predictor$fitted.values[, index] + (if (deriv.arg == 1) matrix(ans, nrow = nrow(predictor$fitted.values), ncol = lindex, byrow = TRUE) else 0) } else { predictor[, index] <- predictor[, index] + (if (deriv.arg == 1) matrix(ans, nrow = nrow(predictor), ncol = lindex, byrow = TRUE) else 0) } } else { cat("Derivatives of term ", ii, "are unknown\n") if (se.fit) { predictor$fitted.values[, index] <- NA } else { predictor[, index] <- NA } } } } if (no.newdata && length(na.act)) { if (se.fit) { predictor$fitted.values <- napredict(na.act[[1]], predictor$fitted.values) predictor$se.fit <- napredict(na.act[[1]], predictor$se.fit) } else { predictor <- napredict(na.act[[1]], predictor) } } if (se.fit) { attr(predictor$fitted.values, "derivative") <- deriv.arg } else { attr(predictor, "derivative") <- deriv.arg } return(predictor) } } setMethod("predict", "vgam", function(object, ...) predict.vgam(object, ...)) varassign <- function(constraints, n.s.xargument) { if (!length(n.s.xargument)) stop("length(n.s.xargument) must be > 0") ans <- vector("list", length(n.s.xargument)) ncolHlist <- unlist(lapply(constraints, ncol)) names(ans) <- n.s.xargument ptr <- 1 for (ii in n.s.xargument) { temp <- ncolHlist[[ii]] ans[[ii]] <- ptr:(ptr + temp - 1) ptr <- ptr + temp } ans } VGAM/R/effects.vglm.q0000644000176200001440000000107413135276757013774 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. effects.vlm <- function(object, ...) { cat("Sorry, this function has not been written yet. Returning a NULL.\n") invisible(NULL) } if (!isGeneric("effects")) setGeneric("effects", function(object, ...) standardGeneric("effects")) if (is.R()) { setMethod("effects", "vlm", function(object, ...) effects.vlm(object, ...)) } else { setMethod("effects", "vlm", function(object, ...) effects.vlm(object, ...)) } VGAM/R/family.others.R0000644000176200001440000020654313135276757014146 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
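# The functions below give the d/p/q/r quartet for the exponential-
# Poisson distribution (and, further down, the generalized Rayleigh);
# invalid 'rate' or 'shape' values yield NaN.  As a quick sanity sketch
# (illustrative only), qexppois() inverts pexppois():
#   pexppois(qexppois(0.3, rate = 2, shape = 1.5), rate = 2, shape = 1.5)
# should return 0.3 up to floating-point error.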
dexppois <- function(x, rate = 1, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(shape), length(rate)) if (length(x) != N) x <- rep_len(x, N) if (length(shape) != N) shape <- rep_len(shape, N) if (length(rate) != N) rate <- rep_len(rate, N) logdensity <- rep_len(log(0), N) xok <- (0 < x) logdensity[xok] <- log(shape[xok]) + log(rate[xok]) - log1p(-exp(-shape[xok])) - shape[xok] - rate[xok] * x[xok] + shape[xok] * exp(-rate[xok] * x[xok]) logdensity[shape <= 0] <- NaN logdensity[rate <= 0] <- NaN if (log.arg) logdensity else exp(logdensity) } qexppois<- function(p, rate = 1, shape, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ln.p <- p ans <- -log(log(exp(ln.p) * (-expm1(shape)) + exp(shape)) / shape) / rate ans[ln.p > 0] <- NaN } else { ans <- -log(log(p * (-expm1(shape)) + exp(shape)) / shape) / rate ans[p < 0] <- NaN ans[p > 1] <- NaN } } else { if (log.p) { ln.p <- p ans <- -log(log(expm1(ln.p) * expm1(shape) + exp(shape)) / shape) / rate ans[ln.p > 0] <- NaN } else { ans <- -log(log(p * expm1(shape) + 1) / shape) / rate ans[p < 0] <- NaN ans[p > 1] <- NaN } } ans[(shape <= 0) | (rate <= 0)] <- NaN ans } pexppois<- function(q, rate = 1, shape, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ans <- log((exp(shape * exp(-rate * q)) - exp(shape)) / -expm1(shape)) ans[q <= 0 ] <- -Inf ans[q == Inf] <- 0 } else { ans <- (exp(shape * exp(-rate * q)) - exp(shape)) / (-expm1(shape)) ans[q <= 0] <- 0 ans[q == Inf] <- 1 } } else { if (log.p) { ans <- log(expm1(shape * exp(-rate * q)) / expm1(shape)) ans[q <= 0] <- 0 ans[q == Inf] <- -Inf } else { ans <- expm1(shape * exp(-rate * q)) / expm1(shape) ans[q <= 0] <- 1 ans[q == Inf] <- 0 } } ans[(shape <= 0) | (rate <= 0)] <- NaN ans } rexppois <- function(n, rate = 1, shape) { ans <- -log(log(runif(n) * (-expm1(shape)) + exp(shape)) / shape) / rate ans[(shape <= 0) | (rate <= 0)] <- NaN ans } exppoisson <- function(lrate = "loge", lshape = "loge", irate = 2.0, ishape = 1.1, zero = NULL) { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") lratee <- as.list(substitute(lrate)) eratee <- link2list(lratee) lratee <- attr(eratee, "function.name") iratee <- irate if (length(ishape) && !is.Numeric(ishape, positive = TRUE)) stop("bad input for argument 'ishape'") if (length(iratee) && !is.Numeric(iratee, positive = TRUE)) stop("bad input for argument 'irate'") ishape[abs(ishape - 1) < 0.01] = 1.1 new("vglmff", blurb = c("Exponential Poisson distribution \n \n", "Links: ", namesof("rate", lratee, earg = eratee), ", ", namesof("shape", lshape, earg = eshape), "\n", "Mean: shape/(expm1(shape) * rate)) * ", "genhypergeo(c(1, 1), c(2, 2), shape)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero))), infos = eval(substitute(function(...) 
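# infos: static metadata for exppoisson -- M1 = 2 linear predictors per
# response ('rate' and 'shape'), expected information is used, and the
# usual 'zero' handling applies; consumed by VGAM's generic machinery.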
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("rate", "shape"), lrate = .lratee , lshape = .lshape , zero = .zero ) }, list( .zero = zero, .lratee = lratee, .lshape = lshape ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c( namesof("rate", .lratee , earg = .eratee , short = TRUE), namesof("shape", .lshape , earg = .eshape , short = TRUE)) if (!length(etastart)) { ratee.init <- if (length( .iratee )) rep_len( .iratee , n) else stop("Need to input a value into argument 'iratee'") shape.init <- if (length( .ishape )) rep_len( .ishape , n) else (1/ratee.init - mean(y)) / ((y * exp(-ratee.init * y))/n) ratee.init <- rep_len(weighted.mean(ratee.init, w = w), n) etastart <- cbind(theta2eta(ratee.init, .lratee , earg = .eratee ), theta2eta(shape.init, .lshape , earg = .eshape )) } }), list( .lshape = lshape, .lratee = lratee, .ishape = ishape, .iratee = iratee, .eshape = eshape, .eratee = eratee))), linkinv = eval(substitute(function(eta, extra = NULL) { ratee <- eta2theta(eta[, 1], .lratee , earg = .eratee ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) qexppois(p = 0.5, rate = ratee, shape = shape) }, list( .lshape = lshape, .lratee = lratee, .eshape = eshape, .eratee = eratee))), last = eval(substitute(expression({ misc$link <- c( rate = .lratee , shape = .lshape ) misc$earg <- list( rate = .eratee , shape = .eshape ) misc$expected <- TRUE misc$multipleResponses <- FALSE }), list( .lshape = lshape, .lratee = lratee, .eshape = eshape, .eratee = eratee))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { ratee <- eta2theta(eta[, 1], .lratee , earg = .eratee ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dexppois(x = y, shape = shape, rate = ratee, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lratee = lratee , .lshape = lshape , .eshape = eshape , .eratee = eratee ))), vfamily = c("exppoisson"), validparams = eval(substitute(function(eta, y, extra = NULL) { ratee <- eta2theta(eta[, 1], .lratee , earg = .eratee ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) okay1 <- all(is.finite(ratee)) && all(0 < ratee) && all(is.finite(shape)) && all(0 < shape) okay1 }, list( .lratee = lratee , .lshape = lshape , .eshape = eshape , .eratee = eratee ))), deriv = eval(substitute(expression({ ratee <- eta2theta(eta[, 1], .lratee , earg = .eratee ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) dl.dratee <- 1/ratee - y - y * shape * exp(-ratee * y) dl.dshape <- 1/shape - 1/expm1(shape) - 1 + exp(-ratee * y) dratee.deta <- dtheta.deta(ratee, .lratee , earg = .eratee ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) c(w) * cbind(dl.dratee * dratee.deta, dl.dshape * dshape.deta) }), list( .lshape = lshape, .lratee = lratee, .eshape = eshape, .eratee = eratee ))), weight = eval(substitute(expression({ temp1 <- -expm1(-shape) ned2l.dshape2 <- (1 + exp(2 * shape) - shape^2 * exp(shape) - 2 * exp(shape)) / (shape * temp1)^2 ned2l.dratee2 <- 1 / ratee^2 - (shape^2 * exp(-shape) / (4 * ratee^2 * temp1)) * genhypergeo(c(2, 2, 2), c(3, 3, 3), shape) ned2l.drateeshape <- (shape * exp(-shape) / (4 * ratee * temp1)) * genhypergeo(c(2, 2), c(3, 3), shape) wz <- matrix(0, n, dimm(M)) wz[, iam(1, 1, M)] <- dratee.deta^2 * ned2l.dratee2 wz[, iam(1, 
2, M)] <- dratee.deta * dshape.deta * ned2l.drateeshape wz[, iam(2, 2, M)] <- dshape.deta^2 * ned2l.dshape2 c(w) * wz }), list( .zero = zero )))) } dgenray <- function(x, scale = 1, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(shape), length(scale)) if (length(x) != N) x <- rep_len(x, N) if (length(shape) != N) shape <- rep_len(shape, N) if (length(scale) != N) scale <- rep_len(scale, N) logdensity <- rep_len(log(0), N) if (any(xok <- (x > 0))) { temp1 <- x[xok] / scale[xok] logdensity[xok] <- log(2) + log(shape[xok]) + log(x[xok]) - 2 * log(scale[xok]) - temp1^2 + (shape[xok] - 1) * log1p(-exp(-temp1^2)) } logdensity[(shape <= 0) | (scale <= 0)] <- NaN logdensity[is.infinite(x)] <- log(0) # 20141209 KaiH if (log.arg) { logdensity } else { exp(logdensity) } } pgenray <- function(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ans <- log((-expm1(-(q/scale)^2))^shape) ans[q <= 0 ] <- -Inf } else { ans <- (-expm1(-(q/scale)^2))^shape ans[q <= 0] <- 0 } } else { if (log.p) { ans <- log(-expm1(shape*log(-expm1(-(q/scale)^2)))) ans[q <= 0] <- 0 } else { ans <- -expm1(shape*log(-expm1(-(q/scale)^2))) ans[q <= 0] <- 1 } } ans[(shape <= 0) | (scale <= 0)] <- NaN ans } qgenray <- function(p, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ln.p <- p ans <- scale * sqrt(-log1p(-(exp(ln.p)^(1/shape)))) ans[ln.p > 0] <- NaN } else { ans <- scale * sqrt(-log1p(-(p^(1/shape)))) ans[p < 0] <- NaN ans[p > 1] <- NaN } } else { if (log.p) { ln.p <- p ans <- scale * sqrt(-log1p(-((-expm1(ln.p))^(1/shape)))) ans[ln.p > 0] <- NaN } else { ans <- scale * sqrt(-log1p(-exp((1/shape)*log1p(-p)))) ans[p < 0] <- NaN ans[p > 1] <- NaN } } ans[(shape <= 0) | (scale <= 0)] <- NaN ans } rgenray <- function(n, scale = 1, shape) { ans <- qgenray(runif(n), shape = shape, scale = scale) ans[(shape <= 0) | (scale <= 0)] <- NaN ans } genrayleigh.control <- function(save.weights = TRUE, ...) { list(save.weights = save.weights) } genrayleigh <- function(lscale = "loge", lshape = "loge", iscale = NULL, ishape = NULL, tol12 = 1.0e-05, nsimEIM = 300, zero = 2) { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (length(ishape) && !is.Numeric(ishape, positive = TRUE)) stop("bad input for argument 'ishape'") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE) || nsimEIM <= 50) stop("argument 'nsimEIM' should be an integer greater than 50") new("vglmff", blurb = c("Generalized Rayleigh distribution\n", "Links: ", namesof("scale", lscale, earg = escale), ", ", namesof("shape", lshape, earg = eshape), "\n"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("scale", "shape"), nsimEIM = .nsimEIM , lscale = .lscale , lshape = .lshape , zero = .zero ) }, list( .zero = zero, .lscale = lscale, .lshape = lshape, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c( namesof("scale", .lscale , earg = .escale , short = TRUE), namesof("shape", .lshape , earg = .eshape , short = TRUE)) if (!length(etastart)) { genrayleigh.Loglikfun <- function(scale, y, x, w, extraargs) { temp1 <- y / scale shape <- -1 / weighted.mean(log1p(-exp(-temp1^2)), w = w) ans <- sum(c(w) * (log(2) + log(shape) + log(y) - 2 * log(scale) - temp1^2 + (shape - 1) * log1p(-exp(-temp1^2)))) ans } scale.grid <- seq(0.2 * stats::sd(c(y)), 5.0 * stats::sd(c(y)), len = 29) scale.init <- if (length( .iscale )) .iscale else grid.search(scale.grid, objfun = genrayleigh.Loglikfun, y = y, x = x, w = w) scale.init <- rep_len(scale.init, length(y)) shape.init <- if (length( .ishape )) .ishape else -1 / weighted.mean(log1p(-exp(-(y/scale.init)^2)), w = w) shape.init <- rep_len(shape.init, length(y)) etastart <- cbind(theta2eta(scale.init, .lscale , earg = .escale ), theta2eta(shape.init, .lshape , earg = .eshape )) } }), list( .lscale = lscale, .lshape = lshape, .iscale = iscale, .ishape = ishape, .escale = escale, .eshape = eshape))), linkinv = eval(substitute(function(eta, extra = NULL) { Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) qgenray(p = 0.5, shape = shape, scale = Scale) }, list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale ))), last = eval(substitute(expression({ misc$link <- c(scale = .lscale, shape = .lshape ) misc$earg <- list(scale = .escale, shape = .eshape ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list( .lshape = lshape, .lscale = lscale, .eshape = eshape, .escale = escale, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dgenray(x = y, shape = shape, scale = Scale, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape = lshape , .lscale = lscale , .eshape = eshape , .escale = escale ))), vfamily = c("genrayleigh"), validparams = eval(substitute(function(eta, y, extra = NULL) { Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) okay1 <- all(is.finite(Scale)) && all(0 < Scale) && all(is.finite(shape)) && all(0 < shape) okay1 }, list( .lshape = lshape , .lscale = lscale , .eshape = eshape , .escale = escale ))), deriv = eval(substitute(expression({ Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) dthetas.detas <- cbind(dscale.deta, dshape.deta) temp1 <- y / Scale temp2 <- exp(-temp1^2) temp3 <- temp1^2 / Scale AAA <- 2 * temp1^2 / Scale # 2 * y^2 / Scale^3 BBB <- -expm1(-temp1^2) # denominator dl.dshape <- 1/shape + log1p(-temp2) dl.dscale <- -2 / Scale + AAA * (1 - (shape - 1) * 
temp2 / BBB) dl.dshape[!is.finite(dl.dshape)] = max(dl.dshape[is.finite(dl.dshape)]) answer <- c(w) * cbind(dl.dscale, dl.dshape) * dthetas.detas answer }), list( .lshape = lshape , .lscale = lscale, .eshape = eshape, .escale = escale ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) for (ii in 1:( .nsimEIM )) { ysim <- rgenray(n = n, shape = shape, scale = Scale) temp1 <- ysim / Scale temp2 <- exp(-temp1^2) # May be 1 if ysim is very close to 0. temp3 <- temp1^2 / Scale AAA <- 2 * temp1^2 / Scale # 2 * y^2 / Scale^3 BBB <- -expm1(-temp1^2) # denominator dl.dshape <- 1/shape + log1p(-temp2) dl.dscale <- -2 / Scale + AAA * (1 - (shape - 1) * temp2 / BBB) dl.dshape[!is.finite(dl.dshape)] <- max( dl.dshape[is.finite(dl.dshape)]) temp3 <- cbind(dl.dscale, dl.dshape) run.varcov <- run.varcov + temp3[, ind1$row.index] * temp3[, ind1$col.index] } run.varcov <- run.varcov / .nsimEIM wz <- if (intercept.only) matrix(colMeans(run.varcov, na.rm = FALSE), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] c(w) * wz }), list( .lshape = lshape , .lscale = lscale, .eshape = eshape, .escale = escale, .tol12 = tol12, .nsimEIM = nsimEIM )))) } dexpgeom <- function(x, scale = 1, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(scale), length(shape)) if (length(x) != N) x <- rep_len(x, N) if (length(scale) != N) scale <- rep_len(scale, N) if (length(shape) != N) shape <- rep_len(shape, N) logdensity <- rep_len(log(0), N) if (any(xok <- (x > 0))) { temp1 <- -x[xok] / scale[xok] logdensity[xok] <- -log(scale[xok]) + log1p(-shape[xok]) + temp1 - 2 * log1p(-shape[xok] * exp(temp1)) } logdensity[(scale <= 0) | (shape <= 0) | (shape >= 1)] <- NaN if (log.arg) { logdensity } else { exp(logdensity) } } pexpgeom <- function(q, scale = 1, shape) { temp1 <- -q / scale ans <- -expm1(temp1) / (1 - shape * exp(temp1)) ans[q <= 0] <- 0 ans[(scale <= 0) | (shape <= 0) | (shape >= 1)] <- NaN ans } qexpgeom <- function(p, scale = 1, shape) { ans <- (-scale) * log((p - 1) / (p * shape - 1)) ans[(scale <= 0) | (shape <= 0) | (shape >= 1)] <- NaN ans[p < 0] <- NaN ans[p > 1] <- NaN ans[p == 0] <- 0 ans[p == 1] <- Inf ans } rexpgeom <- function(n, scale = 1, shape) { ans <- qexpgeom(runif(n), shape = shape, scale = scale) ans[(scale <= 0) | (shape <= 0) | (shape >= 1)] <- NaN ans } expgeometric.control <- function(save.weights = TRUE, ...) 
{ list(save.weights = save.weights) } expgeometric <- function(lscale = "loge", lshape = "logit", iscale = NULL, ishape = NULL, tol12 = 1.0e-05, zero = 1, nsimEIM = 400) { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (length(ishape)) if (!is.Numeric(ishape, positive = TRUE) || any(ishape >= 1)) stop("bad input for argument 'ishape'") if (length(iscale)) if (!is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 50) stop("'nsimEIM' should be an integer greater than 50") new("vglmff", blurb = c("Exponential geometric distribution\n\n", "Links: ", namesof("scale", lscale, earg = escale), ", ", namesof("shape", lshape, earg = eshape), "\n", "Mean: ", "(shape - 1) * log(1 - ", "shape) / (shape / scale)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("scale", "shape"), nsimEIM = .nsimEIM , lscale = .lscale , lshape = .lshape , zero = .zero ) }, list( .zero = zero, .lscale = lscale, .lshape = lshape, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c( namesof("scale", .lscale , earg = .escale , short = TRUE), namesof("shape", .lshape , earg = .eshape , short = TRUE)) if (!length(etastart)) { scale.init <- if (is.Numeric( .iscale , positive = TRUE)) { rep_len( .iscale , n) } else { stats::sd(c(y)) # The papers scale parameter beta } shape.init <- if (is.Numeric( .ishape , positive = TRUE)) { rep_len( .ishape , n) } else { rep_len(2 - exp(median(y)/scale.init), n) } shape.init[shape.init >= 0.95] <- 0.95 shape.init[shape.init <= 0.05] <- 0.05 etastart <- cbind(theta2eta(scale.init, .lscale , earg = .escale ), theta2eta(shape.init, .lshape , earg = .eshape )) } }), list( .lscale = lscale, .lshape = lshape, .iscale = iscale, .ishape = ishape, .escale = escale, .eshape = eshape))), linkinv = eval(substitute(function(eta, extra = NULL) { Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) (shape - 1) * log1p(-shape) / (shape / Scale) }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ))), last = eval(substitute(expression({ misc$link <- c(scale = .lscale , shape = .lshape ) misc$earg <- list(scale = .escale , shape = .eshape ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dexpgeom(x = y, scale = Scale, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), vfamily = 
c("expgeometric"), validparams = eval(substitute(function(eta, y, extra = NULL) { Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) okay1 <- all(is.finite(Scale)) && all(0 < Scale) && all(is.finite(shape)) && all(0 < shape & shape < 1) okay1 }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), deriv = eval(substitute(expression({ Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) temp2 <- exp(-y / Scale) temp3 <- shape * temp2 temp4 <- y / Scale^2 dl.dscale <- -1 / Scale + temp4 + 2 * temp4 * temp3 / (1 - temp3) dl.dshape <- -1 / (1 - shape) + 2 * temp2 / (1 - temp3) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) dthetas.detas <- cbind(dscale.deta, dshape.deta) answer <- c(w) * cbind(dl.dscale, dl.dshape) * dthetas.detas answer }), list( .lscale = lscale , .lshape = lshape, .escale = escale, .eshape = eshape ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) if (length( .nsimEIM )) { for (ii in 1:( .nsimEIM )) { ysim <- rexpgeom(n, scale=Scale, shape=shape) temp2 <- exp(-ysim / Scale) temp3 <- shape * temp2 temp4 <- ysim / Scale^2 dl.dscale <- -1 / Scale + temp4 + 2 * temp4 * temp3 / (1 - temp3) dl.dshape <- -1 / (1 - shape) + 2 * temp2 / (1 - temp3) temp6 <- cbind(dl.dscale, dl.dshape) run.varcov <- run.varcov + temp6[,ind1$row.index] * temp6[,ind1$col.index] } run.varcov <- run.varcov / .nsimEIM wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] } c(w) * wz }), list( .nsimEIM = nsimEIM )))) } dexplog <- function(x, scale = 1, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(scale), length(shape)) if (length(x) != N) x <- rep_len(x, N) if (length(scale) != N) scale <- rep_len(scale, N) if (length(shape) != N) shape <- rep_len(shape, N) logdensity <- rep_len(log(0), N) if (any(xok <- (x > 0))) { temp1 <- -x[xok] / scale[xok] logdensity[xok] <- -log(-log(shape[xok])) - log(scale[xok]) + log1p(-shape[xok]) + temp1 - log1p(-(1-shape[xok]) * exp(temp1)) } logdensity[(scale <= 0) | (shape <= 0) | (shape >= 1)] <- NaN if (log.arg) { logdensity } else { exp(logdensity) } } pexplog <- function(q, scale = 1, shape) { ans <- 1 - log1p(-(1-shape) * exp(-q / scale)) / log(shape) ans[q <= 0] <- 0 ans[(scale <= 0) | (shape <= 0) | (shape >= 1)] <- NaN ans } qexplog <- function(p, scale = 1, shape) { ans <- -scale * (log1p(-shape^(1.0 - p)) - log1p(-shape)) ans[(scale <= 0) | (shape <= 0) | (shape >= 1)] <- NaN ans[p < 0] <- NaN ans[p > 1] <- NaN ans[p == 0] <- 0 ans[p == 1] <- Inf ans } rexplog <- function(n, scale = 1, shape) { ans <- qexplog(runif(n), scale = scale, shape = shape) ans[(scale <= 0) | (shape <= 0) | (shape >= 1)] <- NaN ans } explogff.control <- function(save.weights = TRUE, ...) 
{ list(save.weights = save.weights) } explogff <- function(lscale = "loge", lshape = "logit", iscale = NULL, ishape = NULL, tol12 = 1.0e-05, zero = 1, nsimEIM = 400) { lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") if (length(ishape)) if (!is.Numeric(ishape, positive = TRUE) || any(ishape >= 1)) stop("bad input for argument 'ishape'") if (length(iscale)) if (!is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE)) stop("bad input for argument 'nsimEIM'") if (nsimEIM <= 50) stop("argument 'nsimEIM' should be an integer greater than 50") new("vglmff", blurb = c("Exponential logarithmic distribution\n\n", "Links: ", namesof("scale", lscale, earg = escale), ", ", namesof("shape", lshape, earg = eshape), "\n", "Mean: ", "(-polylog(2, 1 - p) * scale) / log(shape)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("scale", "shape"), nsimEIM = .nsimEIM , lscale = .lscale , lshape = .lshape , zero = .zero ) }, list( .zero = zero, .lscale = lscale, .lshape = lshape, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c( namesof("scale", .lscale , earg = .escale , short = TRUE), namesof("shape", .lshape , earg = .eshape , short = TRUE)) if (!length(etastart)) { scale.init <- if (is.Numeric( .iscale , positive = TRUE)) { rep_len( .iscale , n) } else { stats::sd(c(y)) } shape.init <- if (is.Numeric( .ishape , positive = TRUE)) { rep_len( .ishape , n) } else { rep_len((exp(median(y)/scale.init) - 1)^2, n) } shape.init[shape.init >= 0.95] <- 0.95 shape.init[shape.init <= 0.05] <- 0.05 etastart <- cbind(theta2eta(scale.init, .lscale , earg = .escale ), theta2eta(shape.init, .lshape , earg = .eshape )) } }), list( .lscale = lscale, .lshape = lshape, .iscale = iscale, .ishape = ishape, .escale = escale, .eshape = eshape))), linkinv = eval(substitute(function(eta, extra = NULL) { scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) qexplog(p = 0.5, shape = shape, scale = scale) }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape ))), last = eval(substitute(expression({ misc$link <- c(scale = .lscale , shape = .lshape ) misc$earg <- list(scale = .escale , shape = .eshape ) misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$multipleResponses <- FALSE }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dexplog(x = y, scale = Scale, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), vfamily = c("explogff"), validparams = 
eval(substitute(function(eta, y, extra = NULL) { Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) okay1 <- all(is.finite(Scale)) && all(0 < Scale) && all(is.finite(shape)) && all(0 < shape & shape < 1) okay1 }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), deriv = eval(substitute(expression({ Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) temp2 <- exp(-y / Scale) temp3 <- y / Scale^2 temp4 <- 1 - shape dl.dscale <- (-1 / Scale) + temp3 + (temp4 * temp3 * temp2) / (1 - temp4 * temp2) dl.dshape <- -1 / (shape * log(shape)) - 1 / temp4 - temp2 / (1 - temp4 * temp2) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) dthetas.detas <- cbind(dscale.deta, dshape.deta) answer <- c(w) * cbind(dl.dscale, dl.dshape) * dthetas.detas answer }), list( .lscale = lscale , .lshape = lshape, .escale = escale, .eshape = eshape ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) if (length( .nsimEIM )) { for (ii in 1:( .nsimEIM )) { ysim <- rexplog(n, scale = Scale, shape = shape) temp2 <- exp(-ysim / Scale) temp3 <- ysim / Scale^2 temp4 <- 1 - shape dl.dscale <- (-1 / Scale) + temp3 + (temp4 * temp3 * temp2) / (1 - temp4 * temp2) dl.dshape <- -1 / (shape * log(shape)) - 1 / temp4 - temp2 / (1 - temp4 * temp2) temp6 <- cbind(dl.dscale, dl.dshape) run.varcov <- run.varcov + temp6[,ind1$row.index] * temp6[,ind1$col.index] } run.varcov <- run.varcov / .nsimEIM wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] } c(w) * wz }), list( .nsimEIM = nsimEIM )))) } dweibull3 <- function(x, location = 0, scale = 1, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) dweibull(x = x - location, shape = shape, scale = scale, log = log.arg) } pweibull3 <- function(q, location = 0, scale = 1, shape) { pweibull(q = q - location, scale = scale, shape = shape) } qweibull3 <- function(p, location = 0, scale = 1, shape) { location + qweibull(p = p, shape = shape, scale = scale) } rweibull3 <- function(n, location = 0, scale = 1, shape) { location + rweibull(n = n, shape = shape, scale = scale) } ### Two-piece normal (TPN) family dtpn <- function(x, location = 0, scale = 1, skewpar = 0.5, log.arg = FALSE) { if (any(skewpar <= 0 | skewpar >= 1 | scale <= 0 , na.rm = TRUE)) stop("some parameters out of bound") N <- max(length(x), length(location), length(scale), length(skewpar)) if (length(x) != N) x <- rep_len(x, N) if (length(scale) != N) scale <- rep_len(scale, N) if (length(location) != N) location <- rep_len(location, N) if (length(skewpar) != N) skewpar <- rep_len(skewpar, N) zedd <- (x - location) / scale log.s1 <- -zedd^2 / (8 * skewpar^2) log.s2 <- -zedd^2 / (8 * (1 - skewpar)^2) logdensity <- log.s1 logdensity[zedd > 0] <- log.s2[zedd > 0] logdensity <- logdensity -log(scale) - log(sqrt(2 * pi)) if (log.arg) logdensity else exp(logdensity) } ptpn <- function(q, location = 0, scale = 1, skewpar = 0.5) { if (any(skewpar <= 0 | skewpar >= 1 | scale <= 0 , na.rm = TRUE)) stop("some parameters out of bound") zedd <- (q - location) / scale s1 <- 2 * skewpar * pnorm(zedd, sd = 2 * skewpar) #/ scale s2 <- skewpar + (1 - skewpar) * pgamma(zedd^2 / (8 * 
(1-skewpar)^2), 0.5)
  ans <- rep_len(0.0, length(zedd))
  ans[zedd <= 0] <- s1[zedd <= 0]
  ans[zedd >  0] <- s2[zedd >  0]
  ans
}

pos <- function(x) ifelse(x > 0, x, 0.0)

qtpn <- function(p, location = 0, scale = 1, skewpar = 0.5) {
  pp <- p
  if (any(pp <= 0 | pp >= 1 |
          skewpar <= 0 | skewpar >= 1 |
          scale <= 0, na.rm = TRUE))
    stop("some parameters out of bound")
  # Recycle the vectors to equal lengths
  LLL <- max(length(pp), length(location),
             length(scale), length(skewpar))
  if (length(pp)       != LLL) pp       <- rep_len(pp,       LLL)
  if (length(location) != LLL) location <- rep_len(location, LLL)
  if (length(scale)    != LLL) scale    <- rep_len(scale,    LLL)
  if (length(skewpar)  != LLL) skewpar  <- rep_len(skewpar,  LLL)

  # Left piece (pp <= skewpar) first, then overwrite the right piece:
  ans <- qnorm(pp / (2 * skewpar), sd = 2 * skewpar)
  ans[pp > skewpar] <-
    sqrt(8 * (1 - skewpar)^2 *
         qgamma(pos(pp - skewpar) / (1 - skewpar), 0.5))[pp > skewpar]
  ans * scale + location
}

rtpn <- function(n, location = 0, scale = 1, skewpar = 0.5) {
  qtpn(p = runif(n), location = location, scale = scale,
       skewpar = skewpar)
}

tpnff <- function(llocation = "identitylink", lscale = "loge",
                  pp = 0.5, method.init = 1, zero = 2) {
  if (!is.Numeric(method.init, length.arg = 1,
                  integer.valued = TRUE, positive = TRUE) ||
      method.init > 4)
    stop("argument 'method.init' must be 1 or 2 or 3 or 4")
  if (!is.Numeric(pp, length.arg = 1, positive = TRUE))
    stop("bad input for argument 'pp'")

  llocat <- as.list(substitute(llocation))
  elocat <- link2list(llocat)
  llocat <- attr(elocat, "function.name")

  lscale <- as.list(substitute(lscale))
  escale <- link2list(lscale)
  lscale <- attr(escale, "function.name")

  new("vglmff",
  blurb = c("Two-piece normal distribution \n\n",
            "Links: ",
            namesof("location", llocat, earg = elocat), ", ",
            namesof("scale",    lscale, earg = escale), "\n\n",
            "Mean: "),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M,
                                predictors.names = predictors.names,
                                M1 = 2)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...)
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("location", "scale"), llocation = .llocat , lscale = .lscale , zero = .zero ) }, list( .zero = zero, .llocat = llocat, .lscale = lscale ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("location", .llocat , earg = .elocat , tag = FALSE), namesof("scale", .lscale , earg = .escale , tag = FALSE)) if (!length(etastart)) { junk <- lm.wfit(x = x, y = c(y), w = c(w)) scale.y.est <- sqrt( sum(c(w) * junk$resid^2) / junk$df.residual ) location.init <- if ( .llocat == "loge") pmax(1/1024, y) else { if ( .method.init == 3) { rep_len(weighted.mean(y, w), n) } else if ( .method.init == 2) { rep_len(median(rep(y, w)), n) } else if ( .method.init == 1) { junk$fitted } else { y } } etastart <- cbind( theta2eta(location.init, .llocat , earg = .elocat ), theta2eta(scale.y.est, .lscale , earg = .escale )) } }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .method.init = method.init ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .llocat , earg = .elocat ) }, list( .llocat = llocat, .elocat = elocat, .escale = escale ))), last = eval(substitute(expression({ misc$link <- c("location" = .llocat , "scale" = .lscale ) misc$earg <- list("location" = .elocat , "scale" = .escale ) misc$expected <- TRUE misc$pp <- .pp misc$method.init <- .method.init misc$multipleResponses <- FALSE }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .pp = pp, .method.init = method.init ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { location <- eta2theta(eta[, 1], .llocat , earg = .elocat ) myscale <- eta2theta(eta[, 2], .lscale , earg = .escale ) ppay <- .pp if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dtpn(y, skewpar = ppay, location = location, scale = myscale, log.arg = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .pp = pp ))), vfamily = c("tpnff"), validparams = eval(substitute(function(eta, y, extra = NULL) { mylocat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) myscale <- eta2theta(eta[, 2], .lscale , earg = .escale ) okay1 <- all(is.finite(mylocat)) && all(is.finite(myscale)) && all(0 < myscale) okay1 }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .pp = pp ))), deriv = eval(substitute(expression({ mylocat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) myscale <- eta2theta(eta[, 2], .lscale , earg = .escale ) mypp <- .pp zedd <- (y - mylocat) / myscale # cond1 <- (zedd <= 0) cond2 <- (zedd > 0) dl.dlocat <- zedd / (4 * mypp^2) # cond1 dl.dlocat[cond2] <- (zedd / (4 * (1 - mypp)^2))[cond2] dl.dlocat <- dl.dlocat / myscale dl.dscale <- zedd^2 / (4 * mypp^2) dl.dscale[cond2] <- (zedd^2 / (4 * (1 - mypp)^2))[cond2] dl.dscale <- (-1 + dl.dscale) / myscale #dl.dpp <- zedd^2 / (4 * mypp^3) #dl.dpp[cond2] <- -zedd^2 / (4 * (1 - mypp)^3)[cond2] dlocat.deta <- dtheta.deta(mylocat, .llocat, earg = .elocat) dscale.deta <- dtheta.deta(myscale, .lscale, earg = .escale) ans <- c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta) ans }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .pp = pp ))), weight = eval(substitute(expression({ wz <- matrix(0, n, M) 
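    # Only the two diagonal EIM entries are filled in here: tpnff
    # treats the skew parameter 'pp' as known, so M = 2; the skewpar
    # entries commented out below belong to tpnff3, where the skew
    # parameter is a third, estimated linear predictor.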
# diag matrix; y is one-col too
    temp10 <- mypp * (1 - mypp)
    ned2l.dlocat2 <- 1 / ((4 * temp10) * myscale^2)
    ned2l.dscale2 <- 2 / myscale^2
    wz[, iam(1, 1, M)] <- ned2l.dlocat2 * dlocat.deta^2
    wz[, iam(2, 2, M)] <- ned2l.dscale2 * dscale.deta^2
 #  wz[, iam(3, 3, M)] <- ned2l.dskewpar2 * dskewpar.deta^2
 #  wz[, iam(1, 3, M)] <- ned2l.dlocatdskewpar * dskewpar.deta * dlocat.deta
    c(w) * wz
  }))))
}

########################################################################

 tpnff3 <- function(llocation = "identitylink",
                    lscale    = "loge",
                    lskewpar  = "identitylink",
                    method.init = 1, zero = 2)
{
  if (!is.Numeric(method.init, length.arg = 1,
                  integer.valued = TRUE, positive = TRUE) ||
      method.init > 4)
    stop("argument 'method.init' must be 1 or 2 or 3 or 4")

  llocat <- as.list(substitute(llocation))
  elocat <- link2list(llocat)
  llocat <- attr(elocat, "function.name")

  lscale <- as.list(substitute(lscale))
  escale <- link2list(lscale)
  lscale <- attr(escale, "function.name")

  lskewp <- as.list(substitute(lskewpar))
  eskewp <- link2list(lskewp)
  lskewp <- attr(eskewp, "function.name")

  new("vglmff",
  blurb = c("Two-piece normal distribution \n\n",
            "Links: ",
            namesof("location", llocat, earg = elocat), ", ",
            namesof("scale",    lscale, earg = escale), ", ",
            namesof("skewpar",  lskewp, earg = eskewp), "\n\n",
            "Mean: "),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M,
                                predictors.names = predictors.names,
                                M1 = 3)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...) {
    list(M1 = 3,
         Q1 = 1,
         expected = TRUE,
         multipleResponses = FALSE,
         parameters.names = c("location", "scale", "skewpar"),
         llocation = .llocat ,
         lscale    = .lscale ,
         lskewpar  = .lskewp ,
         zero = .zero )
  }, list( .zero = zero, .llocat = llocat, .lscale = lscale,
           .lskewp = lskewp ))),

  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
                       out.wy = TRUE,
                       maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    predictors.names <-
      c(namesof("location", .llocat , earg = .elocat , tag = FALSE),
        namesof("scale",    .lscale , earg = .escale , tag = FALSE),
        namesof("skewpar",  .lskewp , earg = .eskewp , tag = FALSE))

    if (!length(etastart)) {
      junk <- lm.wfit(x = x, y = c(y), w = c(w))
      scale.y.est <- sqrt(sum(c(w) * junk$resid^2) / junk$df.residual)
      location.init <- if ( .llocat == "loge") pmax(1/1024, y) else {
        if ( .method.init == 3) {
          rep_len(weighted.mean(y, w), n)
        } else if ( .method.init == 2) {
          rep_len(median(rep(y, w)), n)
        } else if ( .method.init == 1) {
          junk$fitted
        } else {
          y
        }
      }
      skew.l.in <- sum((y < location.init)) / length(y)
      etastart <- cbind(
        theta2eta(location.init, .llocat , earg = .elocat ),
        theta2eta(scale.y.est,   .lscale , earg = .escale ),
        theta2eta(skew.l.in,     .lskewp , earg = .eskewp ))
    }
  }), list( .llocat = llocat, .lscale = lscale, .lskewp = lskewp,
            .elocat = elocat, .escale = escale, .eskewp = eskewp,
            .method.init = method.init ))),

  linkinv = eval(substitute(function(eta, extra = NULL) {
    eta2theta(eta[, 1], .llocat , earg = .elocat )
  }, list( .llocat = llocat,
           .elocat = elocat, .escale = escale ))),

  last = eval(substitute(expression({
    misc$link <- c("location" = .llocat ,
                   "scale"    = .lscale ,
                   "skewpar"  = .lskewp )
    misc$earg <- list("location" = .elocat ,
                      "scale"    = .escale ,
                      "skewpar"  = .eskewp )
    misc$expected <- TRUE
    misc$method.init <- .method.init
  }), list( .llocat = llocat, .lscale = lscale, .lskewp = lskewp,
            .elocat = elocat, .escale = escale, .eskewp = eskewp,
            .method.init = method.init ))),

  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta,
             extra = NULL, summation = TRUE) {
    locat <-
eta2theta(eta[, 1], .llocat , earg = .elocat ) myscale <- eta2theta(eta[, 2], .lscale , earg = .escale ) myskew <- eta2theta(eta[, 3], .lskewp , earg = .eskewp ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dtpn(y, location = locat, scale = myscale, skewpar = myskew, log.arg = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llocat = llocat, .lscale = lscale, .lskewp = lskewp, .elocat = elocat, .escale = escale, .eskewp = eskewp ))), vfamily = c("tpnff3"), validparams = eval(substitute(function(eta, y, extra = NULL) { mylocat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) myscale <- eta2theta(eta[, 2], .lscale , earg = .escale ) myskew <- eta2theta(eta[, 3], .lskewp , earg = .eskewp ) okay1 <- all(is.finite(mylocat)) && all(is.finite(myscale)) && all(0 < myscale) && all(is.finite(myskew )) okay1 }, list( .llocat = llocat, .lscale = lscale, .lskewp = lskewp, .elocat = elocat, .escale = escale, .eskewp = eskewp ))), deriv = eval(substitute(expression({ mylocat <- eta2theta(eta[, 1], .llocat, earg = .elocat) myscale <- eta2theta(eta[, 2], .lscale, earg = .escale) myskew <- eta2theta(eta[, 3], .lskewp, earg = .eskewp) zedd <- (y - mylocat) / myscale cond2 <- (zedd > 0) dl.dlocat <- zedd / (4 * myskew^2) # cond1 dl.dlocat[cond2] <- (zedd / (4 * (1 - myskew)^2))[cond2] dl.dlocat <- dl.dlocat / myscale dl.dscale <- zedd^2 / (4 * myskew^2) dl.dscale[cond2] <- (zedd^2 / (4 * (1 - myskew)^2))[cond2] dl.dscale <- (-1 + dl.dscale) / myscale dl.dskewpar <- zedd^2 / (4 * myskew^3) dl.dskewpar[cond2] <- (-zedd^2 / (4 * (1 - myskew)^3))[cond2] dlocat.deta <- dtheta.deta(mylocat, .llocat, earg = .elocat) dscale.deta <- dtheta.deta(myscale, .lscale, earg = .escale) dskewpar.deta <- dtheta.deta(myskew, .lskewp, earg = .eskewp) ans <- c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta, dl.dskewpar * dskewpar.deta ) ans }), list( .llocat = llocat, .lscale = lscale, .lskewp = lskewp, .elocat = elocat, .escale = escale, .eskewp = eskewp ))), weight = eval(substitute(expression({ wz <- matrix(NA_real_, n, dimm(M)) # diag matrix; y is one-col too temp10 <- myskew * (1 - myskew) ned2l.dlocat2 <- 1 / ((4 * temp10) * myscale^2) ned2l.dscale2 <- 2 / myscale^2 ned2l.dskewpar2 <- 3 / temp10 ned2l.dlocatdskewpar <- (-2 * sqrt(2)) / (temp10 * sqrt(pi) * myscale) wz[, iam(1, 1,M)] <- ned2l.dlocat2 * dlocat.deta^2 wz[, iam(2, 2,M)] <- ned2l.dscale2 * dscale.deta^2 wz[, iam(3, 3,M)] <- ned2l.dskewpar2 * dskewpar.deta^2 wz[, iam(1, 3,M)] <- ned2l.dlocatdskewpar * dskewpar.deta * dlocat.deta ans c(w) * wz })))) } dzoabeta <- function(x, shape1, shape2, pobs0 = 0, pobs1 = 0, log = FALSE, tol = .Machine$double.eps) { log.arg <- log rm(log) LLL <- max(length(x), length(shape1), length(shape2), length(pobs0), length(pobs1)) if (LLL != length(x)) x <- rep_len(x, LLL) if (LLL != length(shape1)) shape1 <- rep_len(shape1, LLL) if (LLL != length(shape2)) shape2 <- rep_len(shape2, LLL) if (LLL != length(pobs0)) pobs0 <- rep_len(pobs0, LLL) if (LLL != length(pobs1)) pobs1 <- rep_len(pobs1, LLL) ans <- rep_len(NA_real_, LLL) k1 <- (pobs0 < -tol | pobs1 < -tol | (pobs0 + pobs1) > (1 + tol)) k4 <- is.na(pobs0) | is.na(pobs1) ans[!k4 & !k1] <- dbeta(x[!k4 & !k1], shape1[!k4 & !k1], shape2[!k4 & !k1], log = TRUE) + log1p(-(pobs0[!k4 & !k1] + pobs1[!k4 & !k1])) k2 <- x == 0 & pobs0 > 0 & !is.na(x) k3 <- x == 1 & pobs1 > 0 & !is.na(x) ans[k2 & !k4 & !k1] <- log(pobs0[k2 & !k4 & !k1]) ans[k3 & !k4 & !k1] <- log(pobs1[k3 & !k4 & !k1]) if (!log.arg) ans <- exp(ans) 
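  # As with base R's d/p/q functions, invalid mixing probabilities
  # (pobs0 < 0, pobs1 < 0, or pobs0 + pobs1 > 1, up to 'tol'; the
  # mask k1 above) yield NaN together with a single warning.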
if (any(k1 & !k4)) {
    ans[k1 & !k4] <- NaN
    warning("NaNs produced")
  }
  ans
}

rzoabeta <- function(n, shape1, shape2, pobs0 = 0, pobs1 = 0,
                     tol = .Machine$double.eps) {
  use.n <- if ((length.n <- length(n)) > 1) {
    length.n
  } else {
    if (!is.Numeric(n, integer.valued = TRUE,
                    length.arg = 1, positive = TRUE)) {
      stop("bad input for argument 'n'")
    } else {
      n
    }
  }
  shape1 <- rep_len(shape1, use.n)
  shape2 <- rep_len(shape2, use.n)
  pobs0  <- rep_len(pobs0,  use.n)
  pobs1  <- rep_len(pobs1,  use.n)
  random.number <- runif(use.n)
  ans <- rep_len(NA_real_, use.n)
  k5 <- (pobs0 < -tol | pobs1 < -tol | (pobs0 + pobs1) > (1 + tol))
  k4 <- is.na(pobs0) | is.na(pobs1)
  ans[!k4] <- qzoabeta(random.number[!k4],
                       shape1 = shape1[!k4], shape2 = shape2[!k4],
                       pobs0  = pobs0[!k4],  pobs1  = pobs1[!k4])
  if (any(k5 & !k4)) {
    ans[k5 & !k4] <- NaN
    warning("NaNs produced")
  }
  ans
}

pzoabeta <- function(q, shape1, shape2,
                     pobs0 = 0, pobs1 = 0, lower.tail = TRUE,
                     log.p = FALSE, tol = .Machine$double.eps) {
  LLL <- max(length(q), length(shape1), length(shape2),
             length(pobs0), length(pobs1))
  if (LLL != length(q))      q      <- rep_len(q,      LLL)
  if (LLL != length(shape1)) shape1 <- rep_len(shape1, LLL)
  if (LLL != length(shape2)) shape2 <- rep_len(shape2, LLL)
  if (LLL != length(pobs0))  pobs0  <- rep_len(pobs0,  LLL)
  if (LLL != length(pobs1))  pobs1  <- rep_len(pobs1,  LLL)
  k3 <- (pobs0 < -tol | pobs1 < -tol | (pobs0 + pobs1) > (1 + tol))
  k4 <- is.na(pobs0) | is.na(pobs1)
  ans <- rep_len(NA_real_, LLL)
  ans[!k3 & !k4] <- pbeta(q[!k3 & !k4],
                          shape1[!k3 & !k4], shape2[!k3 & !k4],
                          log.p = TRUE) +
                    log1p(-(pobs0[!k3 & !k4] + pobs1[!k3 & !k4]))
  ans <- exp(ans)
  k1 <- q >= 0 & !is.na(q)
  k2 <- q >= 1 & !is.na(q)
  ans[k1 & !k3 & !k4] <- ans[k1 & !k3 & !k4] + pobs0[k1 & !k3 & !k4]
  ans[k2 & !k3 & !k4] <- ans[k2 & !k3 & !k4] + pobs1[k2 & !k3 & !k4]
  if (!lower.tail & log.p) {
    ans <- log1p(-ans)
  } else {
    if (!lower.tail) ans <- 1 - ans
    if (log.p) ans <- log(ans)
  }
  if (any(k3 & !k4)) {
    ans[k3 & !k4] <- NaN
    warning("NaNs produced")
  }
  ans
}

qzoabeta <- function(p, shape1, shape2,
                     pobs0 = 0, pobs1 = 0, lower.tail = TRUE,
                     log.p = FALSE, tol = .Machine$double.eps) {
  LLL <- max(length(p), length(shape1), length(shape2),
             length(pobs0), length(pobs1))
  if (LLL != length(p))      p      <- rep_len(p,      LLL)
  if (LLL != length(shape1)) shape1 <- rep_len(shape1, LLL)
  if (LLL != length(shape2)) shape2 <- rep_len(shape2, LLL)
  if (LLL != length(pobs0))  pobs0  <- rep_len(pobs0,  LLL)
  if (LLL != length(pobs1))  pobs1  <- rep_len(pobs1,  LLL)
  k0 <- (pobs0 < -tol | pobs1 < -tol | (pobs0 + pobs1) > (1 + tol))
  k4 <- is.na(pobs0) | is.na(pobs1)
  ans <- rep_len(NA_real_, LLL)
  if (!lower.tail & log.p) {
    p <- -expm1(p)
  } else {
    if (!lower.tail) p <- 1 - p
    if (log.p) p <- exp(p)
  }
  k1 <- p >= 0 & p <= pobs0 & !is.na(p)
  k2 <- p > pobs0 & p < (1 - pobs1) & !is.na(p)
  k3 <- p >= (1 - pobs1) & p <= 1 & !is.na(p)
  ans[k1 & !k0 & !k4] <- 0
  ans[k2 & !k0 & !k4] <-
    qbeta((p[k2 & !k0 & !k4] - pobs0[k2 & !k0 & !k4]) /
          (1 - pobs0[k2 & !k0 & !k4] - pobs1[k2 & !k0 & !k4]),
          shape1 = shape1[k2 & !k0 & !k4],
          shape2 = shape2[k2 & !k0 & !k4])
  ans[k3 & !k0 & !k4] <- 1
  if (any(k0 & !k4)) {
    ans[k0 & !k4] <- NaN
    warning("NaNs produced")
  }
  ans
}

log1mexp <- function(x) {
  if (any(x < 0 & !is.na(x)))
    stop("Inputs need to be non-negative!")
  ifelse(x <= log(2), log(-expm1(-x)), log1p(-exp(-x)))
}

log1pexp <- function(x) {
  ifelse(x <= -37, exp(x),
         ifelse(x <= 18, log1p(exp(x)),
                ifelse(x <= 33, x + exp(-x), x)))
}

dzoibetabinom.ab <- function(x, size, shape1, shape2,
                             pstr0 = 0, pstrsize = 0,
                             log = FALSE) {
  log.arg <- log
  rm(log)
  LLL <- max(length(x), length(size),
length(shape1), length(shape2), length(pstr0), length(pstrsize)) if (LLL != length(x)) x <- rep_len(x, LLL) if (LLL != length(size)) size <- rep_len(size, LLL) if (LLL != length(shape1)) shape1 <- rep_len(shape1, LLL) if (LLL != length(shape2)) shape2 <- rep_len(shape2, LLL) if (LLL != length(pstr0)) pstr0 <- rep_len(pstr0, LLL) if (LLL != length(pstrsize)) pstrsize <- rep_len(pstrsize, LLL) ans <- rep_len(NA_real_, LLL) k1 <- pstr0 < 0 | pstrsize < 0 | (pstr0 + pstrsize) > 1 k <- is.na(size) | is.na(shape1) | is.na(shape2) | is.na(pstr0) | is.na(pstrsize) | is.na(x) if (sum(!k & !k1) > 0) { ans[!k & !k1] <- dbetabinom.ab(x[!k & !k1], size[!k & !k1], shape1[!k & !k1], shape2[!k & !k1], log = TRUE) + log1p(-(pstr0[!k & !k1]+pstrsize[!k & !k1])) if (!log.arg) ans <- exp(ans) } k2 <- x == 0 & pstr0 > 0 k3 <- x == size & pstrsize > 0 if (sum(k2 & !k & !k1) > 0) ans[k2 & !k & !k1] <- pstr0[k2 & !k & !k1] + ans[k2 & !k & !k1] if (sum(k3 & !k & !k1) > 0) ans[k3 & !k & !k1] <- pstrsize[k3 & !k & !k1] + ans[k3 & !k & !k1] if (any(k1 & !k)) { ans[k1 & !k] <- NaN warning("NaNs produced") } ans } dzoibetabinom <- function(x, size, prob, rho = 0, pstr0 = 0, pstrsize = 0, log = FALSE) { dzoibetabinom.ab(x, size, shape1 = prob * (1 - rho) / rho, shape2 = (1 - prob) * (1 - rho) / rho, pstr0 = pstr0, pstrsize = pstrsize, log = log) } rzoibetabinom.ab <- function(n, size, shape1, shape2, pstr0 = 0, pstrsize = 0) { use.n <- if ((length.n <- length(n)) > 1) { length.n } else { if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) { stop("bad input for argument 'n'") } else { n } } size <- rep_len(size, use.n) shape1 <- rep_len(shape1, use.n) shape2 <- rep_len(shape2, use.n) pstr0 <- rep_len(pstr0, use.n) pstrsize <- rep_len(pstrsize, use.n) ans <- rep_len(NA_real_, use.n) k <- is.na(size) | is.na(shape1) | is.na(shape2) | is.na(pstr0) | is.na(pstrsize) k1 <- pstr0 < 0 | pstrsize < 0 | (pstr0 + pstrsize) > 1 random.number <- runif(use.n) k2 <- random.number[!k] < pstr0[!k] k3 <- pstr0[!k] <= random.number[!k] & random.number[!k] <= (1 - pstrsize[!k]) k4 <- (1 - pstrsize[!k]) < random.number[!k] if (sum(k2 & !k1 & !k) > 0) ans[k2 & !k1 & !k] <- 0 if (sum(k3 & !k1 & !k) > 0) ans[k3 & !k1 & !k] <- rbetabinom.ab(sum(k3 & !k1 & !k), size = size[k3 & !k1 & !k], shape1 = shape1[k3 & !k1 & !k], shape2 = shape2[k3 & !k1 & !k]) if (sum(k4 & !k1 & !k) > 0) ans[k4 & !k1 & !k] <- size[k4 & !k1 & !k] ans } rzoibetabinom <- function(n, size, prob, rho = 0, pstr0 = 0, pstrsize = 0) { rzoibetabinom.ab(n, size, shape1 = prob * (1 - rho) / rho, shape2 = (1 - prob) * (1 - rho) / rho, pstr0 = pstr0, pstrsize = pstrsize) } pzoibetabinom.ab <- function(q, size, shape1, shape2, pstr0 = 0, pstrsize = 0, lower.tail = TRUE, log.p = FALSE) { LLL <- max(length(q), length(size), length(shape1), length(shape2), length(pstr0), length(pstrsize)) if (LLL != length(q)) q <- rep_len(q, LLL) if (LLL != length(size)) size <- rep_len(size, LLL) if (LLL != length(shape1)) shape1 <- rep_len(shape1, LLL) if (LLL != length(shape2)) shape2 <- rep_len(shape2, LLL) if (LLL != length(pstr0)) pstr0 <- rep_len(pstr0, LLL) if (LLL != length(pstrsize)) pstrsize <- rep_len(pstrsize, LLL) ans <- rep_len(NA_real_, LLL) k <- is.na(size) | is.na(shape1) | is.na(shape2) | is.na(pstr0) | is.na(pstrsize) | is.na(q) k1 <- pstr0 < 0 | pstrsize < 0 | (pstr0 + pstrsize) > 1 if (sum(!k1 & !k) > 0) ans[!k & !k1] <- pbetabinom.ab(q[!k & !k1], size[!k & !k1], shape1[!k & !k1], shape2[!k & !k1], log.p = TRUE) + log1p(-(pstr0[!k & !k1] + pstrsize[!k & 
!k1])) ans <- exp(ans) k2 <- q >= 0 k3 <- q >= size if (sum(k2 & !k1 & !k) > 0) ans[k2 & !k & !k1] <- ans[k2 & !k & !k1] + pstr0[k2 & !k & !k1] if (sum(k3 & !k1 & !k) > 0) ans[k3 & !k & !k1] <- ans[k3 & !k & !k1] + pstrsize[k3 & !k & !k1] if (!lower.tail & log.p) { ans <- log1p(-ans) } else { if (!lower.tail) ans <- 1 - ans if (log.p) ans <- log(ans) } if (any(!k & k1)) { ans[!k & k1] <- NaN warning("NaNs produced") } ans } pzoibetabinom <- function(q, size, prob, rho, pstr0 = 0, pstrsize = 0, lower.tail = TRUE, log.p = FALSE) { pzoibetabinom.ab(q, size, shape1 = prob * (1 - rho) / rho, shape2 = (1 - prob) * (1 - rho) / rho, pstr0 = pstr0, pstrsize = pstrsize, lower.tail = lower.tail, log.p = log.p) } AR1EIM<- function(x = NULL, var.arg = NULL, p.drift = NULL, WNsd = NULL, ARcoeff1 = NULL, eps.porat = 1e-2) { if (!is.matrix(x)) stop("Argument 'x' must be a matrix.") yy <- x M <- 3 nn <- nrow(x) nn0 <- numeric(0) NOS <- ncol(x) if (!is.matrix(WNsd)) WNsd <- matrix(WNsd, nrow = nn, ncol = NOS, byrow = TRUE) if (!is.matrix(ARcoeff1)) ARcoeff1 <- matrix(ARcoeff1, nrow = nn, ncol = NOS, byrow = TRUE) if (!is.Numeric(eps.porat, length.arg = 1) || eps.porat < 0 || eps.porat > 1e-2) stop("Bad input for argument 'eps.porat'.") sdTSR <- colMeans(WNsd) sdTSv <- colMeans(WNsd) drift.v <- rep(p.drift, NOS)[1:NOS] Aux11 <- (NOS > 1) the1v <- colMeans(ARcoeff1) JFin <- array(0.0, dim = c(nn, NOS, M + (M - 1) + (M - 2) )) for (spp in 1:NOS) { x <- yy[, spp] the1 <- the1v[spp] drift.p <- drift.v[spp] sdTS <- sdTSv[spp] r <- numeric(nn) r <- AR1.gammas(x = x, lags = nn - 1) r[nn] <- r[1] s0 <- numeric(nn) s1 <- numeric(nn) s1 <- if (var.arg) (the1^(0:(nn - 1))) / (1 - the1^2) else 2 * (the1^(0:(nn - 1))) * sdTS / (1 - the1^2) s2 <- numeric(nn) help1 <- c(0:(nn - 1)) s2 <- help1 * (the1^(help1 - 1)) * (sdTS^2) / (1 - the1^2) + 2 * (sdTS^2) * (the1^(help1 + 1)) / (1 - the1^2)^2 sMat <- cbind(s0, s1, s2) J <- array(NA_real_, dim = c(length(the1) + 2, length(the1) + 2, nn)) Jp <- array(NA_real_, dim = c(length(the1) + 2, length(the1) + 2, nn)) alpha <- numeric(nn) alpha[1] <- 1 delta <- r[1] eta <- matrix(NA_real_, nrow = nn, ncol = M) eta[1, ] <- cbind(s0[1], s1[1], s2[1]) psi <- matrix(0, nrow = nn, ncol = length(the1) + 2) psi[1, ] <- cbind(s0[1], s1[1], s2[1]) / r[1] u0 <- rep(1/(1 - sign(the1v[1]) * min(0.975, abs(the1v[1]))), nn ) u1 <- rep(drift.p/(1 - the1)^2, nn) uMat <- cbind(u0, rep(0, nn), u1) aux1 <- matrix(sMat[1, ], nrow = 2 + length(the1), ncol = 2 + length(the1), byrow = TRUE) diag(aux1) <- sMat[1, ] J[, , 1] <- Jp[, , 1] <- aux1 * t(aux1) / (2 * r[1]^2) J[1, 1, 1] <- Jp[1, 1, 1] <- 1 / sdTS^2 JFin[1, spp, 1:M] <- Jp[, , 1][row(Jp[, , 1]) == col(Jp[, , 1])] Neps.porat <- 1.819*eps.porat*(1e-10) dk <- matrix(NA_real_, nrow = 1, ncol = length(the1) + 2) eR <- matrix(NA_real_, nrow = 1, ncol = length(the1) + 2) cAux2 <- d55 <- numeric(nn); d55[1] <- 0.1 for (jay in 1:(nn - 1)) { cAux <- as.numeric(alpha[1:jay] %*% r[2:(jay + 1)][length(r[2:(jay + 1)]):1])/delta dk <- alpha[1:jay] %*% sMat[2:(jay + 1), , drop = FALSE][length(sMat[2:(jay + 1)]):1, ] delta <- delta * (1 - cAux^2) d55[jay + 1] <- cAux^2 if ((d55[jay + 1] < eps.porat*1e-2) || (jay > 1e1)) { nn0 <- jay break } eta[jay + 1, ] <- dk tAux <- numeric(jay + 1) tAux <- alpha[1:(jay + 1)] - cAux * alpha[1:(jay + 1)][(jay + 1):1] alpha[1:(jay + 1)] <- tAux[1:(jay + 1)] eR <- alpha[1:(jay + 1)][(jay + 1):1] %*% eta[1:(jay + 1), , drop = FALSE] tAux <- eta[1:(jay + 1), ] - cAux * eta[1:(jay + 1), ][(jay + 1):1, ] eta[1:(jay + 1), ] <- tAux AuxE 
<- matrix(eR, nrow = jay + 1, ncol = M, byrow = TRUE) Aux3 <- matrix(alpha[1:(jay + 1)][(jay + 1):1], nrow = jay + 1, ncol = M, byrow = FALSE) Aux4 <- matrix(alpha[1:(jay + 1)], nrow = jay + 1, ncol = M, byrow = FALSE) tAux <- psi[1:(jay + 1), ] - cAux * psi[1:(jay + 1), ][(jay + 1):1, ] + AuxE * (Aux3 - cAux * Aux4) / delta if (any(dim(psi[1:(jay + 1), ])) != any(dim(tAux)) ) stop("Invalids 'psi' and 'tAux'.") psi[1:(jay + 1), ] <- tAux fk <- alpha[1:(jay + 1)] %*% eta[1:(jay + 1), ] gk <- alpha[1:(jay + 1)][(jay + 1):1] %*% uMat[1:(jay + 1), ] Auxf <- matrix(fk, nrow = M, ncol = M, byrow = FALSE) Auxg <- matrix(gk, nrow = M, ncol = M, byrow = FALSE) J[, , jay + 1] <- J[, , jay] + t(eta[1:(jay + 1), ]) %*% psi[1:(jay + 1), ] / delta - 0.5 * Auxf * t(Auxf) / delta^2 + Auxg * t(Auxg) / delta Jp[, , jay + 1] <- J[, , jay + 1] - J[, , jay] JFin[jay + 1, spp , 1:M ] <- Jp[, , jay + 1][col(Jp[, , jay + 1]) == row(Jp[, , jay + 1])] helpC <- numeric(0) for (kk in 1:(M - 1)) { TF1 <- ( col(Jp[, , jay + 1]) >= row(Jp[, , jay + 1]) ) TF2 <- (abs(col(Jp[, , jay + 1]) - row(Jp[, , jay + 1])) == kk ) helpC <- c(helpC, Jp[, , jay + 1][TF1 & TF2]) } rm(TF1, TF2) JFin[jay + 1, spp , -(1:M) ] <- helpC } if (length(nn0)) for (kk in nn0:(nn - 1)) { J[, , kk + 1] <- J[, , nn0] + (kk - nn0 + 1) * Jp[, , nn0] Jp[, , kk + 1] <- J[, , kk + 1] - J[, , kk] JFin[kk + 1, spp , 1:M ] <- Jp[, , kk + 1][col(Jp[, , kk + 1]) == row(Jp[, , kk + 1])] helpC <- numeric(0) for (ll in 1:(M - 1)) { TF1 <- ( col(Jp[, , kk + 1]) >= row(Jp[, , kk + 1]) ) TF2 <- (abs(col(Jp[, , kk + 1]) - row(Jp[, , kk + 1])) == ll) helpC <- c(helpC, Jp[, , kk + 1][TF1 & TF2]) } rm(TF1, TF2) JFin[kk + 1, spp , -(1:M) ] <- helpC } JFin[which(JFin <= Neps.porat)] <- abs( JFin[which(JFin <= Neps.porat)]) } JFin } # End AR1.gammas <- function(x, y = NULL, lags = 1) { xx <- matrix(x, ncol = 1) nx <- nrow(xx) if (lags < 0 || !(is.Numeric(lags, integer.valued = TRUE))) stop("'lags' must be a positive integer.") if (length(y)) { yy <- matrix(y, ncol = 1) ny <- nrow(yy) if (nx != ny) stop("Number of rows differs.") else n <- nx } else { yy <- xx n <- nrow(xx) } myD <- numeric(lags + 1) myD[1] <- if (length(y)) cov(xx, yy) else cov(xx, xx) # i.e. var(xx) if (lags > 0) for (ii in 1:lags) myD[ii + 1] <- cov(xx[-(1:ii), 1], yy[1:(n - ii) , 1]) myD } VGAM/R/smart.R0000644000176200001440000003757613135276757012520 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
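# A minimal sketch of the write/read cycle implemented by the
# smart-prediction functions below.  It is wrapped in if (FALSE) so
# that sourcing this file never executes it; the data are made up,
# and only functions defined in this file are used.
if (FALSE) {
  x <- runif(10)
  setup.smart("write")               # Begin recording parameters
  scaled.x <- sm.scale1(x)           # put.smart() stores mean(x), sd(x)
  saved <- get.smart.prediction()    # Harvest the recorded list
  wrapup.smart()

  setup.smart("read", smart.prediction = saved)
  scaled.new <- sm.scale1(runif(5))  # Reuses the original centre/scale
  wrapup.smart()
}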
smartpredenv <- new.env() smart.mode.is <- function(mode.arg = NULL) { if (!length(mode.arg)) { if (exists(".smart.prediction", envir = smartpredenv)) { get(".smart.prediction.mode", envir = smartpredenv) } else { "neutral" } } else { if (mode.arg != "neutral" && mode.arg != "read" && mode.arg != "write") stop("argument \"mode.arg\" must be one of", " \"neutral\", \"read\" or \"write\"") if (exists(".smart.prediction", envir = smartpredenv)) { get(".smart.prediction.mode", envir = smartpredenv) == mode.arg } else { mode.arg == "neutral" } } } setup.smart <- function(mode.arg, smart.prediction = NULL, max.smart = 30) { actual <- if (mode.arg == "write") vector("list", max.smart) else if (mode.arg == "read") smart.prediction else stop("value of 'mode.arg' unrecognized") wrapup.smart() # make sure if (length(actual)) { assign(".smart.prediction", actual, envir = smartpredenv) assign(".smart.prediction.counter", 0, envir = smartpredenv) assign(".smart.prediction.mode", mode.arg, envir = smartpredenv) assign(".max.smart", max.smart, envir = smartpredenv) assign(".smart.prediction", actual, envir = smartpredenv) } } wrapup.smart <- function() { if (exists(".smart.prediction", envir = smartpredenv)) rm(".smart.prediction", envir = smartpredenv) if (exists(".smart.prediction.counter", envir = smartpredenv)) rm(".smart.prediction.counter", envir = smartpredenv) if (exists(".smart.prediction.mode", envir = smartpredenv)) rm(".smart.prediction.mode", envir = smartpredenv) if (exists(".max.smart", envir = smartpredenv)) rm(".max.smart", envir = smartpredenv) } get.smart.prediction <- function() { smart.prediction.counter <- get(".smart.prediction.counter", envir = smartpredenv) max.smart <- get(".max.smart", envir = smartpredenv) if (smart.prediction.counter > 0) { smart.prediction <- get(".smart.prediction", envir = smartpredenv) if (max.smart >= (smart.prediction.counter + 1)) for(i in max.smart:(smart.prediction.counter + 1)) smart.prediction[[i]] <- NULL smart.prediction } else NULL } put.smart <- function(smart) { max.smart <- get(".max.smart", envir = smartpredenv) smart.prediction.counter <- get(".smart.prediction.counter", envir = smartpredenv) smart.prediction <- get(".smart.prediction", envir = smartpredenv) smart.prediction.counter <- smart.prediction.counter + 1 if (smart.prediction.counter > max.smart) { max.smart <- max.smart + (inc.smart <- 10) # can change inc.smart smart.prediction <- c(smart.prediction, vector("list", inc.smart)) assign(".max.smart", max.smart, envir = smartpredenv) } smart.prediction[[smart.prediction.counter]] <- smart assign(".smart.prediction", smart.prediction, envir = smartpredenv) assign(".smart.prediction.counter", smart.prediction.counter, envir = smartpredenv) } get.smart <- function() { smart.prediction <- get(".smart.prediction", envir = smartpredenv) smart.prediction.counter <- get(".smart.prediction.counter", envir = smartpredenv) smart.prediction.counter <- smart.prediction.counter + 1 assign(".smart.prediction.counter", smart.prediction.counter, envir = smartpredenv) smart <- smart.prediction[[smart.prediction.counter]] smart } smart.expression <- expression({ smart <- get.smart() assign(".smart.prediction.mode", "neutral", envir = smartpredenv) .smart.match.call <- as.character(smart$match.call) smart$match.call <- NULL # Kill it off for the do.call ans.smart <- do.call(.smart.match.call[1], c(list(x=x), smart)) assign(".smart.prediction.mode", "read", envir = smartpredenv) ans.smart }) is.smart <- function(object) { if (is.function(object)) { if 
(is.logical(a <- attr(object, "smart"))) a else FALSE } else { if (length(slotNames(object))) { if (length(object@smart.prediction) == 1 && is.logical(object@smart.prediction$smart.arg)) object@smart.prediction$smart.arg else any(slotNames(object) == "smart.prediction") } else { if (length(object$smart.prediction) == 1 && is.logical(object$smart.prediction$smart.arg)) object$smart.prediction$smart.arg else any(names(object) == "smart.prediction") } } } sm.bs <- function (x, df = NULL, knots = NULL, degree = 3, intercept = FALSE, Boundary.knots = range(x)) { x <- x # Evaluate x; needed for nested calls, e.g., sm.bs(sm.scale(x)). if (smart.mode.is("read")) { return(eval(smart.expression)) } nx <- names(x) x <- as.vector(x) nax <- is.na(x) if (nas <- any(nax)) x <- x[!nax] if (!missing(Boundary.knots)) { Boundary.knots <- sort(Boundary.knots) outside <- (ol <- x < Boundary.knots[1]) | (or <- x > Boundary.knots[2L]) } else outside <- FALSE ord <- 1 + (degree <- as.integer(degree)) if (ord <= 1) stop("'degree' must be integer >= 1") if (!missing(df) && missing(knots)) { nIknots <- df - ord + (1 - intercept) if (nIknots < 0) { nIknots <- 0 warning("'df' was too small; have used ", ord - (1 - intercept)) } knots <- if (nIknots > 0) { knots <- seq(from = 0, to = 1, length = nIknots + 2)[-c(1, nIknots + 2)] stats::quantile(x[!outside], knots) } } Aknots <- sort(c(rep(Boundary.knots, ord), knots)) if (any(outside)) { warning("some 'x' values beyond boundary knots may ", "cause ill-conditioned bases") derivs <- 0:degree scalef <- gamma(1L:ord) basis <- array(0, c(length(x), length(Aknots) - degree - 1L)) if (any(ol)) { k.pivot <- Boundary.knots[1L] xl <- cbind(1, outer(x[ol] - k.pivot, 1L:degree, "^")) tt <- splines::splineDesign(Aknots, rep(k.pivot, ord), ord, derivs) basis[ol, ] <- xl %*% (tt/scalef) } if (any(or)) { k.pivot <- Boundary.knots[2L] xr <- cbind(1, outer(x[or] - k.pivot, 1L:degree, "^")) tt <- splines::splineDesign(Aknots, rep(k.pivot, ord), ord, derivs) basis[or, ] <- xr %*% (tt/scalef) } if (any(inside <- !outside)) basis[inside, ] <- splines::splineDesign(Aknots, x[inside], ord) } else basis <- splines::splineDesign(Aknots, x, ord) if (!intercept) basis <- basis[, -1L, drop = FALSE] n.col <- ncol(basis) if (nas) { nmat <- matrix(NA_real_, length(nax), n.col) nmat[!nax, ] <- basis basis <- nmat } dimnames(basis) <- list(nx, 1L:n.col) a <- list(degree = degree, knots = if (is.null(knots)) numeric(0L) else knots, Boundary.knots = Boundary.knots, intercept = intercept, Aknots = Aknots) attributes(basis) <- c(attributes(basis), a) class(basis) <- c("bs", "basis", "matrix") if (smart.mode.is("write")) put.smart(list(df = df, knots = knots, degree = degree, intercept = intercept, Boundary.knots = Boundary.knots, match.call = match.call())) basis } attr( sm.bs, "smart") <- TRUE sm.ns <- function (x, df = NULL, knots = NULL, intercept = FALSE, Boundary.knots = range(x)) { x <- x # Evaluate x; needed for nested calls, e.g., sm.bs(sm.scale(x)). 
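  # In "read" (prediction) mode, smart.expression re-invokes sm.ns()
  # with the arguments saved at fitting time (df, knots,
  # Boundary.knots, intercept), so the prediction-time basis is
  # constructed exactly as the fitted one was.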
if (smart.mode.is("read")) { return(eval(smart.expression)) } nx <- names(x) x <- as.vector(x) nax <- is.na(x) if (nas <- any(nax)) x <- x[!nax] if (!missing(Boundary.knots)) { Boundary.knots <- sort(Boundary.knots) outside <- (ol <- x < Boundary.knots[1L]) | (or <- x > Boundary.knots[2L]) } else outside <- FALSE if (!missing(df) && missing(knots)) { nIknots <- df - 1 - intercept if (nIknots < 0) { nIknots <- 0 warning("'df' was too small; have used ", 1 + intercept) } knots <- if (nIknots > 0) { knots <- seq.int(0, 1, length.out = nIknots + 2L)[-c(1L, nIknots + 2L)] stats::quantile(x[!outside], knots) } } else nIknots <- length(knots) Aknots <- sort(c(rep(Boundary.knots, 4), knots)) if (any(outside)) { basis <- array(0, c(length(x), nIknots + 4L)) if (any(ol)) { k.pivot <- Boundary.knots[1L] xl <- cbind(1, x[ol] - k.pivot) tt <- splines::splineDesign(Aknots, rep(k.pivot, 2L), 4, c(0, 1)) basis[ol, ] <- xl %*% tt } if (any(or)) { k.pivot <- Boundary.knots[2L] xr <- cbind(1, x[or] - k.pivot) tt <- splines::splineDesign(Aknots, rep(k.pivot, 2L), 4, c(0, 1)) basis[or, ] <- xr %*% tt } if (any(inside <- !outside)) basis[inside, ] <- splines::splineDesign(Aknots, x[inside], 4) } else basis <- splines::splineDesign(Aknots, x, 4) const <- splines::splineDesign(Aknots, Boundary.knots, 4, c(2, 2)) if (!intercept) { const <- const[, -1, drop = FALSE] basis <- basis[, -1, drop = FALSE] } qr.const <- qr(t(const)) basis <- as.matrix((t(qr.qty(qr.const, t(basis))))[, -(1L:2L), drop = FALSE]) n.col <- ncol(basis) if (nas) { nmat <- matrix(NA_real_, length(nax), n.col) nmat[!nax, ] <- basis basis <- nmat } dimnames(basis) <- list(nx, 1L:n.col) a <- list(degree = 3, knots = if (is.null(knots)) numeric(0) else knots, Boundary.knots = Boundary.knots, intercept = intercept, Aknots = Aknots) attributes(basis) <- c(attributes(basis), a) class(basis) <- c("ns", "basis", "matrix") if (smart.mode.is("write")) put.smart(list(df = df, knots = knots, intercept = intercept, Boundary.knots = Boundary.knots, match.call = match.call())) basis } attr( sm.ns, "smart") <- TRUE sm.poly <- function (x, ..., degree = 1, coefs = NULL, raw = FALSE) { x <- x # Evaluate x; needed for nested calls, e.g., sm.bs(sm.scale(x)). if (!raw && smart.mode.is("read")) { smart <- get.smart() degree <- smart$degree coefs <- smart$coefs raw <- smart$raw } dots <- list(...) 
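  # As in stats::poly(): a single unnamed scalar in '...' is taken to
  # be 'degree'; any other use of '...' signals a multivariate call,
  # which is handed off to polym().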
if (nd <- length(dots)) { if (nd == 1 && length(dots[[1]]) == 1L) degree <- dots[[1L]] else return(polym(x, ..., degree = degree, raw = raw)) } if (is.matrix(x)) { m <- unclass(as.data.frame(cbind(x, ...))) return(do.call("polym", c(m, degree = degree, raw = raw))) } if (degree < 1) stop("'degree' must be at least 1") if (smart.mode.is("write") || smart.mode.is("neutral")) if (degree >= length(x)) stop("degree must be less than number of points") if (anyNA(x)) stop("missing values are not allowed in 'poly'") n <- degree + 1 if (raw) { if (degree >= length(unique(x))) stop("'degree' must be less than number of unique points") Z <- outer(x, 1L:degree, "^") colnames(Z) <- 1L:degree attr(Z, "degree") <- 1L:degree class(Z) <- c("poly", "matrix") return(Z) } if (is.null(coefs)) { if (degree >= length(unique(x))) stop("'degree' must be less than number of unique points") xbar <- mean(x) x <- x - xbar X <- outer(x, seq_len(n) - 1, "^") QR <- qr(X) if (QR$rank < degree) stop("'degree' must be less than number of unique points") z <- QR$qr z <- z * (row(z) == col(z)) raw <- qr.qy(QR, z) norm2 <- colSums(raw^2) alpha <- (colSums(x * raw^2)/norm2 + xbar)[1L:degree] Z <- raw/rep(sqrt(norm2), each = length(x)) colnames(Z) <- 1L:n - 1L Z <- Z[, -1, drop = FALSE] attr(Z, "degree") <- 1:degree attr(Z, "coefs") <- list(alpha = alpha, norm2 = c(1, norm2)) class(Z) <- c("poly", "matrix") } else { alpha <- coefs$alpha norm2 <- coefs$norm2 Z <- matrix(, length(x), n) Z[, 1] <- 1 Z[, 2] <- x - alpha[1L] if (degree > 1) for (i in 2:degree) Z[, i + 1] <- (x - alpha[i]) * Z[, i] - (norm2[i + 1]/norm2[i]) * Z[, i - 1] Z <- Z/rep(sqrt(norm2[-1L]), each = length(x)) colnames(Z) <- 0:degree Z <- Z[, -1, drop = FALSE] attr(Z, "degree") <- 1L:degree attr(Z, "coefs") <- list(alpha = alpha, norm2 = norm2) class(Z) <- c("poly", "matrix") } if (smart.mode.is("write")) put.smart(list(degree = degree, coefs = attr(Z, "coefs"), raw = FALSE, # raw is changed above match.call = match.call())) Z } attr(sm.poly, "smart") <- TRUE sm.scale.default <- function (x, center = TRUE, scale = TRUE) { x <- as.matrix(x) if (smart.mode.is("read")) { return(eval(smart.expression)) } nc <- ncol(x) if (is.logical(center)) { if (center) { center <- colMeans(x, na.rm = TRUE) x <- sweep(x, 2L, center, check.margin = FALSE) } } else if (is.numeric(center) && (length(center) == nc)) x <- sweep(x, 2L, center, check.margin = FALSE) else stop("length of 'center' must equal the number of columns of 'x'") if (is.logical(scale)) { if (scale) { f <- function(v) { v <- v[!is.na(v)] sqrt(sum(v^2) / max(1, length(v) - 1L)) } scale <- apply(x, 2L, f) x <- sweep(x, 2L, scale, "/", check.margin = FALSE) } } else if (is.numeric(scale) && length(scale) == nc) x <- sweep(x, 2L, scale, "/", check.margin = FALSE) else stop("length of 'scale' must equal the number of columns of 'x'") if (is.numeric(center)) attr(x, "scaled:center") <- center if (is.numeric(scale)) attr(x, "scaled:scale") <- scale if (smart.mode.is("write")) { put.smart(list(center = center, scale = scale, match.call = match.call())) } x } attr(sm.scale.default, "smart") <- TRUE sm.scale <- function (x, center = TRUE, scale = TRUE) UseMethod("sm.scale") attr(sm.scale, "smart") <- TRUE sm.min1 <- function(x) { x <- x # Evaluate x; needed for nested calls, e.g., sm.bs(sm.scale(x)). 
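  # sm.min1() is the simplest possible smart function: it caches min(x)
  # in "write" mode and, in "read" mode, overwrites the value computed
  # from the new data with the cached fit-time minimum.  So if x ranged
  # over [2, 9] at fit time (hypothetical numbers), prediction on new
  # data whose minimum is 5 still returns 2.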
minx <- min(x) if (smart.mode.is("read")) { smart <- get.smart() minx <- smart$minx # Overwrite its value } else if (smart.mode.is("write")) put.smart(list(minx = minx)) minx } attr(sm.min1, "smart") <- TRUE sm.min2 <- function(x, .minx = min(x)) { x <- x # Evaluate x; needed for nested calls, e.g., sm.bs(sm.scale(x)). if (smart.mode.is("read")) { # Use recursion return(eval(smart.expression)) } else if (smart.mode.is("write")) put.smart(list( .minx = .minx , match.call = match.call())) .minx } attr(sm.min2, "smart") <- TRUE sm.scale1 <- function(x, center = TRUE, scale = TRUE) { x <- x # Evaluate x; needed for nested calls, e.g., sm.bs(sm.scale(x)). if (!is.vector(x)) stop("argument 'x' must be a vector") if (smart.mode.is("read")) { smart <- get.smart() return((x - smart$Center) / smart$Scale) } if (is.logical(center)) center <- if (center) mean(x) else 0 if (is.logical(scale)) scale <- if (scale) sqrt(var(x)) else 1 if (smart.mode.is("write")) put.smart(list(Center = center, Scale = scale)) (x - center) / scale } attr(sm.scale1, "smart") <- TRUE sm.scale2 <- function(x, center = TRUE, scale = TRUE) { x <- x # Evaluate x; needed for nested calls, e.g., sm.bs(sm.scale(x)). if (!is.vector(x)) stop("argument 'x' must be a vector") if (smart.mode.is("read")) { return(eval(smart.expression)) # Recursion used } if (is.logical(center)) center <- if (center) mean(x) else 0 if (is.logical(scale)) scale <- if (scale) sqrt(var(x)) else 1 if (smart.mode.is("write")) put.smart(list(center = center, scale = scale, match.call = match.call())) (x - center) / scale } attr(sm.scale2, "smart") <- TRUE VGAM/R/coef.vlm.q0000644000176200001440000000736613135276757013134 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. coef.vlm <- function(object, ...) { coefvlm(object, ...) } coefvlm <- function(object, matrix.out = FALSE, label = TRUE, colon = FALSE) { ans <- object@coefficients if (colon) { if (matrix.out) stop("cannot have 'matrix.out = TRUE' and 'colon = TRUE'") if (!label) stop("cannot have 'label = FALSE' and 'colon = TRUE'") d1 <- object@misc$colnames.x Hlist <- object@constraints M <- object@misc$M ncolHlist <- unlist(lapply(Hlist, ncol)) new.labs <- vlabel(xn = d1, ncolHlist, M = M, colon = colon) names(ans) <- new.labs return(ans) } if (!label) names(ans) <- NULL if (!matrix.out) return(ans) ncolx <- object@misc$p # = length(object@constraints) M <- object@misc$M Hlist <- object@constraints if (all(trivial.constraints(Hlist) == 1)) { Bmat <- matrix(ans, nrow = ncolx, ncol = M, byrow = TRUE) } else { Bmat <- matrix(NA_real_, nrow = ncolx, ncol = M) if (!matrix.out) return(ans) ncolHlist <- unlist(lapply(Hlist, ncol)) nasgn <- names(Hlist) temp <- c(0, cumsum(ncolHlist)) for (ii in seq_along(nasgn)) { index <- (temp[ii] + 1):temp[ii + 1] cmat <- Hlist[[nasgn[ii]]] Bmat[ii,] <- cmat %*% ans[index] } } if (label) { d1 <- object@misc$colnames.x d2 <- object@misc$predictors.names # Could be NULL dimnames(Bmat) <- list(d1, d2) } Bmat } # end of coefvlm setMethod("coefficients", "vlm", function(object, ...) coefvlm(object, ...)) setMethod("coef", "vlm", function(object, ...) coefvlm(object, ...)) setMethod("coefficients", "vglm", function(object, ...) coefvlm(object, ...)) setMethod("coef", "vglm", function(object, ...) coefvlm(object, ...)) setMethod("coefficients", "summary.vglm", function(object, ...) object@coef3) setMethod("coef", "summary.vglm", function(object, ...) object@coef3) Coef.vlm <- function(object, ...) 
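# Coef.vlm() first looks for a function named "Coef.<vfamily>" (built
# from the family's vfamily slot) and dispatches to it if one exists.
# Otherwise, for an intercept-only fit with trivial constraints, it
# back-transforms the coefficients from the link (eta) scale to the
# parameter scale via eta2theta().  Illustrative sketch ('d' is a
# hypothetical data frame):
#   fit <- vglm(y ~ 1, negbinomial, data = d)
#   coef(fit)  # link scale: intercepts for loge(mu) and loge(size)
#   Coef(fit)  # back-transformed: mu and size themselves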
{ LL <- length(object@family@vfamily) funname <- paste("Coef.", object@family@vfamily[LL], sep = "") if (exists(funname)) { newcall <- paste("Coef.", object@family@vfamily[LL], "(object, ...)", sep = "") newcall <- parse(text = newcall)[[1]] return(eval(newcall)) } answer <- if (length(tmp2 <- object@misc$link) && object@misc$intercept.only && trivial.constraints(object@constraints)) { if (!is.list(use.earg <- object@misc$earg)) use.earg <- list() answer <- eta2theta(rbind(coefvlm(object)), link = object@misc$link, earg = use.earg) answer <- c(answer) if (length(ntmp2 <- names(tmp2)) == object@misc$M) names(answer) <- ntmp2 answer } else { coefvlm(object, ... ) } if (length(tmp3 <- object@misc$parameter.names) && object@misc$intercept.only && trivial.constraints(object@constraints)) { answer <- c(answer) if (length(tmp3) == object@misc$M && is.character(tmp3)) names(answer) <- tmp3 } answer } setMethod("Coefficients", "vlm", function(object, ...) Coef.vlm(object, ...)) setMethod("Coef", "vlm", function(object, ...) Coef.vlm(object, ...)) coefvgam <- function(object, type = c("linear", "nonlinear"), ...) { type <- match.arg(type, c("linear", "nonlinear"))[1] if (type == "linear") { coefvlm(object, ...) } else { object@Bspline } } setMethod("coefficients", "vgam", function(object, ...) coefvgam(object, ...)) setMethod("coef", "vgam", function(object, ...) coefvgam(object, ...)) VGAM/R/family.exp.R0000644000176200001440000004473113135276757013435 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. qeunif <- function(p, min = 0, max = 1, Maxit.nr = 10, Tol.nr = 1.0e-6, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(log.arg <- log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") rm(log.p) # 20150102 KaiH if (lower.tail) { if (log.arg) p <- exp(p) } else { p <- if (log.arg) -expm1(p) else 1 - p } ppp <- p vsmallno <- sqrt(.Machine$double.eps) smallno <- 0.10 if (any(min >= max)) stop("argument 'min' has values greater or equal ", "to argument 'max'") if (!is.Numeric( Tol.nr, length.arg = 1, positive = TRUE) || Tol.nr > 0.10) stop("argument 'Tol.nr' is not a single positive value, ", "or is too large") nrok <- ppp >= vsmallno & ppp <= 1.0 - vsmallno & is.finite(ppp) eee <- qbeta(ppp, shape1 = 3, shape2 = 3) eee[ppp < smallno] <- sqrt(ppp[ppp < smallno]) eee[ppp > 1.0 - smallno] <- 1.0 - sqrt(1.0 - ppp[ppp > 1.0 - smallno]) for (iii in 1:Maxit.nr) { realdiff <- (peunif(eee[nrok]) - ppp[nrok]) / deunif(eee[nrok]) eee[nrok] <- eee[nrok] - realdiff if (all(abs(realdiff) / (1.0 + abs(realdiff)) < Tol.nr )) break if (iii == Maxit.nr) warning("did not converge") } if (max(abs(peunif(eee[nrok]) - ppp[nrok])) > Tol.nr) warning("did not converge on the second check") eee[ppp < vsmallno] <- sqrt( ppp[ppp < vsmallno]) eee[ppp > 1.0 - vsmallno] <- 1.0 - sqrt(1.0 - ppp[ppp > 1.0 - vsmallno]) eee[ppp == 0] <- 0 eee[ppp == 1] <- 1 eee[ppp < 0] <- NA eee[ppp > 1] <- NA min + eee * (max - min) } peunif <- function(q, min = 0, max = 1, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (any(min >= max)) stop("argument 'min' has values greater or equal to argument 'max'") eee <- (q - min) / (max - min) if (lower.tail) { if (log.p) { Gofy <- -log1p((1/eee - 1)^2) Gofy[eee < 0] <- -Inf Gofy[eee > 1] <- 0.0 } else { Gofy <- eee^2 / (exp(2*log1p(-eee)) + 
eee^2)  # KaiH
      Gofy <- 1 / (1 + (1/eee - 1)^2)
      Gofy[eee < 0] <- 0.0
      Gofy[eee > 1] <- 1.0
    }
  } else {
    if (log.p) {
      Gofy <- 2*log1p(-eee) - log(exp(2*log1p(-eee)) + eee^2)
      Gofy[eee < 0] <- 0.0
      Gofy[eee > 1] <- -Inf
    } else {
      Gofy <- exp(2*log1p(-eee)) / (exp(2*log1p(-eee)) + eee^2)
      Gofy[eee < 0] <- 1
      Gofy[eee > 1] <- 0
    }
  }
  Gofy
}

deunif <- function(x, min = 0, max = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)
  if (any(min >= max))
    stop("argument 'min' has values greater or equal to argument 'max'")
  eee <- (x - min) / (max - min)
  if (log.arg) {
    ans <- log(2) + log(eee) + log1p(-eee) -
           2.0 * log1p(-2*eee*(1-eee)) -  # |2y(1-y) - 1| = 1 - 2y(1-y) on [0, 1]
           log(max - min)
    ans[eee <= 0.0] <- log(0.0)
    ans[eee >= 1.0] <- log(0.0)
  } else {
    gunif <- function(y)
      as.numeric(y >= 0 & y <= 1) * 2*y*(1-y) / (2*y*(1-y) - 1)^2
    ans <- gunif(eee) / (max - min)
    ans[is.infinite(x)] <- 0  # 20141209 KaiH
  }
  ans
}

reunif <- function(n, min = 0, max = 1) {
  qeunif(runif(n), min = min, max = max)
}

qenorm <- function(p, mean = 0, sd = 1, Maxit.nr = 10, Tol.nr = 1.0e-6,
                   lower.tail = TRUE, log.p = FALSE) {
  if (!is.logical(lower.tail) || length(lower.tail) != 1)
    stop("bad input for argument 'lower.tail'")
  if (!is.logical(log.p) || length(log.p) != 1)
    stop("bad input for argument 'log.p'")
  ppp <- p
  if (!is.Numeric( Tol.nr, length.arg = 1, positive = TRUE) ||
      Tol.nr > 0.10)
    stop("argument 'Tol.nr' is not a single ",
         "positive value, or is too large")
  nrok <- is.finite(ppp)
  eee <- qnorm(ppp, sd = 2/3, lower.tail = lower.tail, log.p = log.p)
  gnorm <- function(y) dnorm(y) / (y * (1-2*pnorm(y)) - 2*dnorm(y))^2
  for (iii in 1:Maxit.nr) {
    if (lower.tail) {
      realdiff <- if (log.p) {
        ln.ppp <- ppp
        (penorm(eee[nrok]) - exp(ln.ppp[nrok])) / gnorm(eee[nrok])
      } else {
        (penorm(eee[nrok]) - ppp[nrok]) / gnorm(eee[nrok])
      }
    } else {
      realdiff <- if (log.p) {
        ln.ppp <- ppp
        (penorm(eee[nrok]) + expm1(ln.ppp[nrok])) / gnorm(eee[nrok])
      } else {
        (penorm(eee[nrok]) + expm1(log(ppp[nrok]))) / gnorm(eee[nrok])
      }
    }
    eee[nrok] <- eee[nrok] - realdiff
    if (all(abs(realdiff) / (1.0 + abs(realdiff)) < Tol.nr ))
      break
    if (iii == Maxit.nr)
      warning("did not converge")
  }
  if (max(abs(penorm(eee[nrok]) - ppp[nrok])) > Tol.nr)
    warning("did not converge on the second check")
  if (lower.tail) {
    if (log.p) {
      eee[ln.ppp > 0] <- NaN
    } else {
      eee[ppp == 0] <- -Inf
      eee[ppp == 1] <- Inf
      eee[ppp < 0] <- NaN
      eee[ppp > 1] <- NaN
    }
  } else {
    if (log.p) {
      eee[ln.ppp > 0] <- NaN
    } else {
      eee[ppp == 0] <- Inf
      eee[ppp == 1] <- -Inf
      eee[ppp < 0] <- NaN
      eee[ppp > 1] <- NaN
    }
  }
  eee * ifelse(sd >= 0, sd, NaN) + mean
}

penorm <- function(q, mean = 0, sd = 1,
                   lower.tail = TRUE, log.p = FALSE) {
  if (!is.logical(lower.tail) || length(lower.tail) != 1)
    stop("bad input for argument 'lower.tail'")
  if (!is.logical(log.p) || length(log.p) != 1)
    stop("bad input for argument 'log.p'")
  eee <- (q - mean) / sd
  tmp1 <- -dnorm(eee) - eee * pnorm(eee)
  if (lower.tail) {
    if (log.p) {
      Gofy <- log(tmp1 / (2 * tmp1 + eee))
      Gofy[eee <= -Inf] <- -Inf
      Gofy[eee >=  Inf] <- 0
    } else {
      Gofy <- tmp1 / (2 * tmp1 + eee)
      Gofy[eee <= -Inf] <- 0.0
      Gofy[eee >=  Inf] <- 1.0
    }
  } else {
    if (log.p) {
      Gofy <- log((tmp1 + eee) / (2 * tmp1 + eee))
      Gofy[eee <= -Inf] <- 0
      Gofy[eee >=  Inf] <- -Inf
    } else {
      Gofy <- (tmp1 + eee) / (2 * tmp1 + eee)
      Gofy[eee <= -Inf] <- 1
      Gofy[eee >=  Inf] <- 0
    }
  }
  Gofy
}

denorm <- function(x, mean = 0, sd = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)
  eee <- (x - mean) / sd
  if (log.arg) {
    ans <-
dnorm(eee, log = TRUE) -
           2.0 * log(2*dnorm(eee) + eee * (2*pnorm(eee) - 1)) -  # |y*(1-2*pnorm(y)) - 2*dnorm(y)|
           log(sd)
  } else {
    gnorm <- function(y) dnorm(y) / (y * (1-2*pnorm(y)) - 2*dnorm(y))^2
    ans <- gnorm(eee) / sd
    ans[sd <= 0.0] <- NaN
  }
  ans
}

renorm <- function(n, mean = 0, sd = 1) {
  qenorm(runif(n), mean = mean, sd = sd)
}

qeexp <- function(p, rate = 1, Maxit.nr = 10, Tol.nr = 1.0e-6,
                  lower.tail = TRUE, log.p = FALSE) {
  if (!is.logical(log.arg <- log.p) || length(log.p) != 1)
    stop("bad input for argument 'log.p'")
  rm(log.p)  # 20150102 KaiH
  if (lower.tail) {
    if (log.arg) p <- exp(p)
  } else {
    p <- if (log.arg) -expm1(p) else 1 - p
  }
  ppp <- p
  vsmallno <- sqrt(.Machine$double.eps)
  if (!is.Numeric( Tol.nr, length.arg = 1, positive = TRUE) ||
      Tol.nr > 0.10)
    stop("argument 'Tol.nr' is not a single positive value, or ",
         "is too large")
  nrok <- ppp >= vsmallno & is.finite(ppp)
  eee <- qf(1.0 * ppp, df1 = 4.0, df2 = 44)
  if ( any(rangex <- ppp < 0.8) )
    eee[rangex] <- qrayleigh(ppp[rangex], scale = 0.8)
  eee[ppp < vsmallno] <- sqrt(ppp[ppp < vsmallno])
  for (iii in 1:Maxit.nr) {
    realdiff <- (peexp(eee[nrok]) - ppp[nrok]) / deexp(eee[nrok])
    eee[nrok] <- eee[nrok] - realdiff
    if (all(abs(realdiff) / (1.0 + abs(realdiff)) < Tol.nr ))
      break
    if (iii == Maxit.nr)
      warning("did not converge")
  }
  if (max(abs(peexp(eee[nrok]) - ppp[nrok])) > Tol.nr)
    warning("did not converge on the second check")
  eee[ppp < vsmallno] <- sqrt(ppp[ppp < vsmallno])
  eee[ppp == 0] <- 0
  eee[ppp == 1] <- Inf
  eee[ppp < 0] <- NaN
  eee[ppp > 1] <- NaN
  eee / rate
}

peexp <- function(q, rate = 1, lower.tail = TRUE, log.p = FALSE) {
  if (!is.logical(lower.tail) || length(lower.tail) != 1)
    stop("bad input for argument 'lower.tail'")
  if (!is.logical(log.p) || length(log.p) != 1)
    stop("bad input for argument 'log.p'")
  eee <- q * rate
  tmp1 <- -expm1(-eee) - eee
  if (lower.tail) {
    if (log.p) {
      Gofy <- log(-tmp1) - log(expm1(-eee) + exp(-eee) + eee)
      Gofy[eee < 0 ] <- -Inf
      Gofy[eee == Inf] <- 0
    } else {
      Gofy <- tmp1 / (-expm1(-eee) - exp(-eee) - eee)
      Gofy[eee < 0] <- 0
      Gofy[eee == Inf] <- 1
    }
  } else {
    if (log.p) {
      Gofy <- -eee - log(expm1(-eee) + exp(-eee) + eee)
      Gofy[eee < 0] <- 0
      Gofy[eee == Inf] <- -Inf
    } else {
      Gofy <- exp(-eee) / (expm1(-eee) + exp(-eee) + eee)
      Gofy[eee < 0] <- 1
      Gofy[eee == Inf] <- 0
    }
  }
  Gofy
}

deexp <- function(x, rate = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)
  if (any(rate <= 0))
    stop("argument 'rate' must have positive values")
  eee <- x * rate
  if (log.arg) {
    ans <- log(eee) - eee -
           2.0 * log(2 * exp(-eee) + eee - 1) +  # |(1-y) - 2*exp(-y)| = 2*exp(-y) + y - 1
           log(rate)
    ans[is.infinite(x)] <- log(0)
  } else {
    gexp <- function(y)
      as.numeric(y >= 0) * y * exp(-y) / ((1-y) - 2 * exp(-y))^2
    ans <- gexp(eee) * rate
    ans[rate <= 0.0] <- NaN
    ans[is.infinite(x)] <- 0
  }
  ans
}

reexp <- function(n, rate = 1) {
  qeexp(runif(n), rate = rate)
}

dsc.t2 <- function(x, location = 0, scale = 1, log = FALSE) {
  if (!is.logical(log.arg <- log) || length(log) != 1)
    stop("bad input for argument 'log'")
  rm(log)
  zedd <- (x - location) / scale
  zedd[scale <= 0] <- NaN
  if (log.arg) {
    log(0.25) - 1.5 * log1p((zedd / 2)^2) - log(scale)
  } else {
    2 / (scale * (4 + zedd^2)^1.5)
  }
}

psc.t2 <- function(q, location = 0, scale = 1,
                   lower.tail = TRUE, log.p = FALSE) {
  if (!is.logical(lower.tail) || length(lower.tail) != 1)
    stop("bad input for argument 'lower.tail'")
  if (!is.logical(log.p) || length(log.p) != 1)
    stop("bad input for argument 'log.p'")
  zedd <- (q - location) / scale
  zedd[scale <= 0] <- NaN
  if (lower.tail) {
    if (log.p) {
      ans <- log(0.5) + log1p(zedd / sqrt(4 +
zedd^2)) ans[q == -Inf] <- log(0) ans[q == Inf] <- log(1) } else { ans <- 0.5 * (1 + zedd / sqrt(4 + zedd^2)) ans[q == -Inf] <- 0 ans[q == Inf] <- 1 } } else { if (log.p) { ans <- log(0.5) + log1p(-zedd / sqrt(4 + zedd^2)) ans[q == -Inf] <- log(1) ans[q == Inf] <- log(0) } else { ans <- 0.5 * exp(log1p(-zedd / sqrt(4 + zedd^2))) ans[q == -Inf] <- 1 ans[q == Inf] <- 0 } } ans } qsc.t2 <- function(p, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ln.p <- p ans <- exp(0.5*(ln.p - log(-expm1(ln.p)))) - exp(0.5*(log(-expm1(ln.p)) - ln.p)) ans[ln.p > 0] <- NaN } else { ans <- exp(0.5*(log(p) - log1p(-p))) - exp(0.5*(log1p(-p) - log(p))) ans[p < 0] <- NaN ans[p == 0] <- -Inf ans[p == 1] <- Inf ans[p > 1] <- NaN } } else { if (log.p) { ln.p <- p ans <- exp(0.5*(log(-expm1(ln.p)) - ln.p)) - exp(0.5*(ln.p - log(-expm1(ln.p)))) ans[ln.p > 0] <- NaN ans } else { ans <- exp(0.5*(log1p(-p) - log(p))) - exp(0.5*(log(p) - log1p(-p))) ans[p < 0] <- NaN ans[p == 0] <- Inf ans[p == 1] <- -Inf ans[p > 1] <- NaN } } answer <- ans * scale + location answer[scale <= 0] <- NaN answer } rsc.t2 <- function(n, location = 0, scale = 1) { answer <- qsc.t2(runif(n)) * scale + location answer[scale <= 0] <- NaN answer } sc.studentt2 <- function(percentile = 50, llocation = "identitylink", lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = "scale") { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") ilocat <- ilocation if (length(ilocat) && (!is.Numeric(ilocat, length.arg = 1, positive = TRUE))) stop("bad input for argument 'ilocation'") if (length(iscale) && !is.Numeric(iscale)) stop("bad input for argument 'iscale'") if (!is.Numeric(percentile, positive = TRUE) || any(percentile >= 100)) stop("bad input for argument 'percentile'") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("'imethod' must be 1 or 2") new("vglmff", blurb = c("Scaled Student t distribution with 2 degrees of freedom\n\n", "Links: ", namesof("location", llocat, earg = elocat, tag = FALSE), ", ", namesof("scale", lscale, earg = escale, tag = FALSE), "\n\n", "Mean: location\n", "Variance: infinite"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
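  # The 'infos' slot supplies static metadata about the family: M1 is
  # the number of linear predictors per response (2 here: location and
  # scale), Q1 the number of response columns per response, and 'zero'
  # names the parameters that are modelled as intercept-only.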
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("location", "scale"), llocation = .llocation , lscale = .lscale , zero = .zero ) }, list( .zero = zero, .llocation = llocation, .lscale = lscale ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c( namesof("location", .llocat , earg = .elocat , tag = FALSE), namesof("scale", .lscale , earg = .escale , tag = FALSE)) if (!length(etastart)) { locat.init <- if ( .imethod == 2) { weighted.mean(y, w) } else { median(y) } Scale.init <- if (length( .iscale )) .iscale else diff(quantile(y, prob = c(0.25, 0.75))) / (2 * 1.155) + 1.0e-5 locat.init <- rep_len(locat.init, length(y)) Scale.init <- rep_len(Scale.init, length(y)) etastart <- cbind(theta2eta(locat.init, .llocat , earg = .elocat ), theta2eta(Scale.init, .lscale , earg = .escale )) } }), list( .llocat = llocat, .lscale = lscale, .ilocat = ilocat, .iscale = iscale, .elocat = elocat, .escale = escale, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL){ Perce <- .percentile locat <- eta2theta(eta[, 1], link = .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2], link = .lscale , earg = .escale ) answer <- matrix(locat, nrow(eta), length(Perce)) for (ii in seq_along(Perce)) answer[, ii] <- qsc.t2(Perce[ii] / 100, loc = locat, sc = Scale) dimnames(answer) <- list(dimnames(eta)[[1]], paste(as.character(Perce), "%", sep = "")) answer }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .percentile = percentile ))), last = eval(substitute(expression({ misc$link <- c("location" = .llocat , "scale" = .lscale ) misc$earg <- list("location" = .elocat , "scale" = .escale ) misc$expected <- TRUE misc$percentile <- .percentile misc$imethod <- .imethod misc$multipleResponses <- FALSE ncoly <- ncol(y) for (ii in seq_along( .percentile )) { y.use <- if (ncoly > 1) y[, ii] else y mu <- cbind(mu) extra$percentile[ii] <- 100 * weighted.mean(y.use <= mu[, ii], w) } names(extra$percentile) <- colnames(mu) }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale, .imethod = imethod, .percentile = percentile ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { locat <- eta2theta(eta[, 1], link = .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2], link = .lscale , earg = .escale ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dsc.t2(x = y, location = locat, scale = Scale, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), vfamily = c("sc.studentt2"), validparams = eval(substitute(function(eta, y, extra = NULL) { locat <- eta2theta(eta[, 1], link = .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2], link = .lscale , earg = .escale ) okay1 <- all(is.finite(locat)) && all(is.finite(Scale)) && all(0 < Scale) okay1 }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), deriv = eval(substitute(expression({ locat <- eta2theta(eta[, 1], link = .llocat , earg = .elocat ) Scale <- eta2theta(eta[, 2], link = .lscale , earg = .escale ) dlocat.deta <- dtheta.deta(locat, link = .llocat , earg = .elocat ) dscale.deta <- dtheta.deta(Scale, link = .lscale , earg = .escale ) zedd <- (y - locat) / Scale dl.dlocat <- 3 * zedd / (Scale * (4 + 
zedd^2)) dl.dscale <- 3 * zedd^2 / (Scale * (4 + zedd^2)) - 1 / Scale c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta) }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale ))), weight = eval(substitute(expression({ ned2l.dlocat2 <- 0.3 / Scale^2 ned2l.dscale2 <- 2.0 / (3 * Scale^2) wz <- matrix(-10, n, M) # Diagonal EIM wz[, iam(1, 1, M = M)] <- ned2l.dlocat2 * dlocat.deta^2 wz[, iam(2, 2, M = M)] <- ned2l.dscale2 * dscale.deta^2 c(w) * wz }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale )))) } VGAM/R/family.rrr.R0000644000176200001440000032663513135276757013454 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. replace.constraints <- function(Hlist, cm, index) { for (iii in index) Hlist[[iii]] <- cm Hlist } valt.control <- function( Alphavec = c(2, 4, 6, 9, 12, 16, 20, 25, 30, 40, 50, 60, 80, 100, 125, 2^(8:12)), Criterion = c("ResSS", "coefficients"), Linesearch = FALSE, Maxit = 7, Suppress.warning = TRUE, Tolerance = 1e-7, ...) { if (mode(Criterion) != "character" && mode(Criterion) != "name") Criterion <- as.character(substitute(Criterion)) Criterion <- match.arg(Criterion, c("ResSS", "coefficients"))[1] list(Alphavec = Alphavec, Criterion = Criterion, Linesearch = Linesearch, Maxit = Maxit, Suppress.warning = Suppress.warning, Tolerance = Tolerance) } qrrvglm.xprod <- function(numat, Aoffset, Quadratic, I.tolerances) { Rank <- ncol(numat) moff <- NULL ans <- if (Quadratic) { index <- iam(NA, NA, M = Rank, diag = TRUE, both = TRUE) temp1 <- cbind(numat[, index$row] * numat[, index$col]) if (I.tolerances) { moff <- 0 for (ii in 1:Rank) moff <- moff - 0.5 * temp1[, ii] } cbind(numat, if (I.tolerances) NULL else temp1) } else { as.matrix(numat) } list(matrix = if (Aoffset > 0) ans else ans[, -(1:Rank), drop = FALSE], offset = moff) } valt <- function(x, z, U, Rank = 1, Hlist = NULL, Cinit = NULL, Alphavec = c(2, 4, 6, 9, 12, 16, 20, 25, 30, 40, 50, 60, 80, 100, 125, 2^(8:12)), Criterion = c("ResSS", "coefficients"), Crow1positive = rep_len(TRUE, Rank), colx1.index, Linesearch = FALSE, Maxit = 20, str0 = NULL, sd.Cinit = 0.02, Suppress.warning = FALSE, Tolerance = 1e-6, trace = FALSE, xij = NULL) { if (mode(Criterion) != "character" && mode(Criterion) != "name") Criterion <- as.character(substitute(Criterion)) Criterion <- match.arg(Criterion, c("ResSS", "coefficients"))[1] if (any(diff(Alphavec) <= 0)) stop("'Alphavec' must be an increasing sequence") if (!is.matrix(z)) z <- as.matrix(z) n <- nrow(z) M <- ncol(z) if (!is.matrix(x)) x <- as.matrix(x) colx2.index <- if (is.null(colx1.index)) 1:ncol(x) else (1:ncol(x))[-colx1.index] p1 <- length(colx1.index) p2 <- length(colx2.index) p <- p1 + p2 if (!p2) stop("'p2', the number of variables for the ", "reduced-rank regression, must be > 0") if (!length(Hlist)) { Hlist <- replace.constraints(vector("list", p), diag(M), 1:p) } dU <- dim(U) if (dU[2] != n) stop("input unconformable") clist2 <- replace.constraints(vector("list", Rank+p1), if (length(str0)) diag(M)[, -str0, drop = FALSE] else diag(M), 1:Rank) if (p1) { for (kk in 1:p1) clist2[[Rank+kk]] <- Hlist[[colx1.index[kk]]] } if (is.null(Cinit)) Cinit <- matrix(rnorm(p2*Rank, sd = sd.Cinit), p2, Rank) fit <- list(ResSS = 0) # Only for initial old.crit below C <- Cinit # This is input for the main iter loop old.crit <- switch(Criterion, coefficients = C, ResSS = fit$ResSS) recover <- 0 # Allow a few iterations between different line searches for 
(iter in 1:Maxit) { iter.save <- iter latvar.mat <- x[, colx2.index, drop = FALSE] %*% C new.latvar.model.matrix <- cbind(latvar.mat, if (p1) x[, colx1.index] else NULL) fit <- vlm.wfit(xmat = new.latvar.model.matrix, z, Hlist = clist2, U = U, matrix.out = TRUE, is.vlmX = FALSE, ResSS = FALSE, qr = FALSE, xij = xij) A <- t(fit$mat.coef[1:Rank, , drop = FALSE]) clist1 <- replace.constraints(Hlist, A, colx2.index) fit <- vlm.wfit(xmat = x, z, Hlist = clist1, U = U, matrix.out = TRUE, is.vlmX = FALSE, ResSS = TRUE, qr = FALSE, xij = xij) C <- fit$mat.coef[colx2.index, , drop = FALSE] %*% A %*% solve(t(A) %*% A) numat <- x[, colx2.index, drop = FALSE] %*% C evnu <- eigen(var(numat), symmetric = TRUE) temp7 <- if (Rank > 1) evnu$vector %*% diag(evnu$value^(-0.5)) else evnu$vector %*% evnu$value^(-0.5) C <- C %*% temp7 A <- A %*% t(solve(temp7)) temp8 <- crow1C(cmat = C, Crow1positive, amat = A) C <- temp8$cmat A <- temp8$amat ratio <- switch(Criterion, coefficients = max(abs(C - old.crit) / ( Tolerance + abs(C))), ResSS = max(abs(fit$ResSS - old.crit) / ( Tolerance + fit$ResSS))) if (trace) { cat(" Alternating iteration", iter, ", Convergence criterion = ", format(ratio), "\n") if (!is.null(fit$ResSS)) cat(" ResSS = ", fit$ResSS, "\n") flush.console() } if (ratio < Tolerance) { if (!Linesearch || (Linesearch && iter >= 3)) break } else if (iter == Maxit && !Suppress.warning) { warning("did not converge") } fini.linesearch <- FALSE if (Linesearch && iter - recover >= 2) { xnew <- C direction1 <- (xnew - xold) # / sqrt(1 + sum((xnew-xold)^2)) ftemp <- fit$ResSS # Most recent objective function use.alpha <- 0 # The current step relative to (xold, yold) for (itter in seq_along(Alphavec)) { CC <- xold + Alphavec[itter] * direction1 try.latvar.mat <- x[, colx2.index, drop = FALSE] %*% CC try.new.latvar.model.matrix <- cbind(try.latvar.mat, if (p1) x[, colx1.index] else NULL) try <- vlm.wfit(xmat = try.new.latvar.model.matrix, z, Hlist = clist2, U = U, matrix.out = TRUE, is.vlmX = FALSE, ResSS = TRUE, qr = FALSE, xij = xij) if (try$ResSS < ftemp) { use.alpha <- Alphavec[itter] fit <- try ftemp <- try$ResSS C <- CC A <- t(fit$mat.coef[1:Rank, , drop = FALSE]) latvar.mat <- x[, colx2.index, drop = FALSE] %*% C recover <- iter # Give it some altg iters to recover } else { if (trace && use.alpha > 0) { cat(" Finished line search using Alpha = ", use.alpha, "\n") flush.console() } fini.linesearch <- TRUE } if (fini.linesearch) break } # End of itter loop } xold <- C # Do not take care of drift old.crit <- switch(Criterion, coefficients = C, ResSS = fit$ResSS) } # End of iter loop list(A = A, C = C, fitted = fit$fitted, new.coeffs = fit$coef, ResSS = fit$ResSS) } lm2qrrvlm.model.matrix <- function(x, Hlist, C, control, assign = TRUE, no.thrills = FALSE) { Rank <- control$Rank colx1.index <- control$colx1.index Quadratic <- control$Quadratic Dzero <- control$Dzero Corner <- control$Corner I.tolerances <- control$I.tolerances M <- nrow(Hlist[[1]]) p1 <- length(colx1.index) combine2 <- c(control$str0, if (Corner) control$Index.corner else NULL) Qoffset <- if (Quadratic) ifelse(I.tolerances, 0, sum(1:Rank)) else 0 NoA <- length(combine2) == M # No unknown parameters in A clist2 <- if (NoA) { Aoffset <- 0 vector("list", Aoffset+Qoffset+p1) } else { Aoffset <- Rank replace.constraints(vector("list", Aoffset+Qoffset+p1), if (length(combine2)) diag(M)[, -combine2, drop = FALSE] else diag(M), 1:Rank) # If Corner then does not contain \bI_{Rank} } if (Quadratic && !I.tolerances) clist2 <- replace.constraints(clist2, 
if (control$eq.tolerances) matrix(1, M, 1) - eijfun(Dzero, M) else { if (length(Dzero)) diag(M)[,-Dzero, drop = FALSE] else diag(M)}, Aoffset + (1:Qoffset)) if (p1) for (kk in 1:p1) clist2[[Aoffset+Qoffset+kk]] <- Hlist[[colx1.index[kk]]] if (!no.thrills) { i63 <- iam(NA, NA, M=Rank, both = TRUE) names(clist2) <- c( if (NoA) NULL else paste("(latvar", 1:Rank, ")", sep = ""), if (Quadratic && Rank == 1 && !I.tolerances) "(latvar^2)" else if (Quadratic && Rank>1 && !I.tolerances) paste("(latvar", i63$row, ifelse(i63$row == i63$col, "^2", paste("*latvar", i63$col, sep = "")), ")", sep = "") else NULL, if (p1) names(colx1.index) else NULL) } latvar.mat <- x[, control$colx2.index, drop = FALSE] %*% C tmp900 <- qrrvglm.xprod(latvar.mat, Aoffset, Quadratic, I.tolerances) new.latvar.model.matrix <- cbind(tmp900$matrix, if (p1) x[,colx1.index] else NULL) if (!no.thrills) dimnames(new.latvar.model.matrix) <- list(dimnames(x)[[1]], names(clist2)) if (assign) { asx <- attr(x, "assign") asx <- vector("list", ncol(new.latvar.model.matrix)) names(asx) <- names(clist2) for (ii in seq_along(names(asx))) { asx[[ii]] <- ii } attr(new.latvar.model.matrix, "assign") <- asx } if (no.thrills) list(new.latvar.model.matrix = new.latvar.model.matrix, constraints = clist2, offset = tmp900$offset) else list(new.latvar.model.matrix = new.latvar.model.matrix, constraints = clist2, NoA = NoA, Aoffset = Aoffset, latvar.mat = latvar.mat, offset = tmp900$offset) } valt.2iter <- function(x, z, U, Hlist, A, control) { clist1 <- replace.constraints(Hlist, A, control$colx2.index) fit <- vlm.wfit(xmat = x, z, Hlist = clist1, U = U, matrix.out = TRUE, is.vlmX = FALSE, ResSS = TRUE, qr = FALSE, xij = control$xij) C <- fit$mat.coef[control$colx2.index, , drop = FALSE] %*% A %*% solve(t(A) %*% A) list(A = A, C = C, fitted = fit$fitted, new.coeffs = fit$coef, Hlist = clist1, ResSS = fit$ResSS) } valt.1iter <- function(x, z, U, Hlist, C, control, lp.names = NULL, nice31 = FALSE, MSratio = 1) { Rank <- control$Rank Quadratic <- control$Quadratic Index.corner <- control$Index.corner p1 <- length(control$colx1.index) M <- ncol(zedd <- as.matrix(z)) NOS <- M / MSratio Corner <- control$Corner I.tolerances <- control$I.tolerances Qoffset <- if (Quadratic) ifelse(I.tolerances, 0, sum(1:Rank)) else 0 tmp833 <- lm2qrrvlm.model.matrix(x = x, Hlist = Hlist, C = C, control = control) new.latvar.model.matrix <- tmp833$new.latvar.model.matrix clist2 <- tmp833$constraints # Does not contain \bI_{Rank} latvar.mat <- tmp833$latvar.mat if (Corner) zedd[,Index.corner] <- zedd[,Index.corner] - latvar.mat if (nice31 && MSratio == 1) { fit <- list(mat.coef = NULL, fitted.values = NULL, ResSS = 0) clist2 <- NULL # for vlm.wfit i5 <- rep_len(0, MSratio) for (ii in 1:NOS) { i5 <- i5 + 1:MSratio tmp100 <- vlm.wfit(xmat = new.latvar.model.matrix, zedd[, i5, drop = FALSE], Hlist = clist2, U = U[i5,, drop = FALSE], matrix.out = TRUE, is.vlmX = FALSE, ResSS = TRUE, qr = FALSE, Eta.range = control$Eta.range, xij = control$xij, lp.names = lp.names[i5]) fit$ResSS <- fit$ResSS + tmp100$ResSS fit$mat.coef <- cbind(fit$mat.coef, tmp100$mat.coef) fit$fitted.values <- cbind(fit$fitted.values, tmp100$fitted.values) } } else { fit <- vlm.wfit(xmat = new.latvar.model.matrix, zedd, Hlist = clist2, U = U, matrix.out = TRUE, is.vlmX = FALSE, ResSS = TRUE, qr = FALSE, Eta.range = control$Eta.range, xij = control$xij, lp.names = lp.names) } A <- if (tmp833$NoA) matrix(0, M, Rank) else t(fit$mat.coef[1:Rank,, drop = FALSE]) if (Corner) A[Index.corner,] <- diag(Rank) B1 <- if 
(p1) fit$mat.coef[-(1:(tmp833$Aoffset+Qoffset)),, drop = FALSE] else NULL fv <- as.matrix(fit$fitted.values) if (Corner) fv[,Index.corner] <- fv[,Index.corner] + latvar.mat Dmat <- if (Quadratic) { if (I.tolerances) { tmp800 <- matrix(0, M, Rank*(Rank+1)/2) tmp800[if (MSratio == 2) c(TRUE, FALSE) else TRUE, 1:Rank] <- -0.5 tmp800 } else t(fit$mat.coef[(tmp833$Aoffset+1): (tmp833$Aoffset+Qoffset),, drop = FALSE]) } else NULL list(Amat = A, B1 = B1, Cmat = C, Dmat = Dmat, fitted = if (M == 1) c(fv) else fv, new.coeffs = fit$coef, constraints = clist2, ResSS = fit$ResSS, offset = if (length(tmp833$offset)) tmp833$offset else NULL) } rrr.init.expression <- expression({ if (length(control$Quadratic) && control$Quadratic) copy.X.vlm <- TRUE if (function.name %in% c("cqo", "cao")) { modelno <- switch(family@vfamily[1], "poissonff" = 2, "quasipoissonff" = 2, "quasipoisson" = 2, "binomialff" = 1, "quasibinomialff" = 1, "quasibinomial" = 1, "negbinomial" = 3, "gamma2" = 5, "gaussianff" = 8, 0) # stop("cannot fit this model using fast algorithm") if (modelno == 1) modelno = get("modelno", envir = VGAMenv) rrcontrol$modelno = control$modelno = modelno if (modelno == 3 || modelno == 5) { M <- 2 * ifelse(is.matrix(y), ncol(y), 1) control$str0 <- rrcontrol$str0 <- seq(from = 2, to = M, by = 2) # Handles A control$Dzero <- rrcontrol$Dzero <- seq(from = 2, to = M, by = 2) # Handles D } } else { modelno <- 0 # Any value will do as the variable is unused. } }) rrr.alternating.expression <- expression({ alt <- valt(x, z, U, Rank = Rank, Hlist = Hlist, Cinit = rrcontrol$Cinit, Criterion = rrcontrol$Criterion, colx1.index = rrcontrol$colx1.index, Linesearch = rrcontrol$Linesearch, Maxit = rrcontrol$Maxit, str0 = rrcontrol$str0, sd.Cinit = rrcontrol$sd.Cinit, Suppress.warning = rrcontrol$Suppress.warning, Tolerance = rrcontrol$Tolerance, trace = trace, xij = control$xij) # This is subject to drift in A and C ans2 <- rrr.normalize(rrcontrol = rrcontrol, A=alt$A, C=alt$C, x = x) Amat <- ans2$A # Fed into Hlist below (in rrr.end.expression) tmp.fitted <- alt$fitted # Also fed; was alt2$fitted rrcontrol$Cinit <- ans2$C # For next valt() call eval(rrr.end.expression) # Put Amat into Hlist, and create new z }) adjust.Dmat.expression <- function(Mmat, Rank, Dmat, M) { if (length(Dmat)) { ind0 <- iam(NA, NA, both = TRUE, M = Rank) for (kay in 1:M) { elts <- Dmat[kay, , drop = FALSE] # Manual recycling if (length(elts) < Rank) elts <- matrix(elts, 1, Rank) Dk <- m2a(elts, M = Rank)[, , 1] Dk <- matrix(Dk, Rank, Rank) Dk <- t(Mmat) %*% Dk %*% Mmat # 20030822; Not diagonal in general. 
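      # Congruence transform: when the latent variables are rescaled so
      # that A becomes A %*% Mmat (equivalently nu becomes
      # solve(Mmat) %*% nu), each quadratic form t(nu) %*% D_k %*% nu is
      # left invariant only if D_k is replaced by
      # t(Mmat) %*% D_k %*% Mmat, which is what the line above computes.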
Dmat[kay, ] <- Dk[cbind(ind0$row.index[1:ncol(Dmat)], ind0$col.index[1:ncol(Dmat)])] } } Dmat } rrr.normalize <- function(rrcontrol, A, C, x, Dmat = NULL) { colx2.index <- rrcontrol$colx2.index Rank <- rrcontrol$Rank Index.corner <- rrcontrol$Index.corner M <- nrow(A) C.old <- C if (rrcontrol$Corner) { tmp87 <- A[Index.corner,, drop = FALSE] Mmat <- solve(tmp87) # The normalizing matrix C <- C %*% t(tmp87) A <- A %*% Mmat A[Index.corner,] <- diag(Rank) # Make sure Dmat <- adjust.Dmat.expression(Mmat = Mmat, Rank = Rank, Dmat = Dmat, M = M) } if (rrcontrol$Svd.arg) { temp <- svd(C %*% t(A)) if (!is.matrix(temp$v)) temp$v <- as.matrix(temp$v) C <- temp$u[, 1:Rank, drop = FALSE] %*% diag(temp$d[1:Rank]^(1-rrcontrol$Alpha), nrow = Rank) A <- diag(temp$d[1:Rank]^( rrcontrol$Alpha), nrow = Rank) %*% t(temp$v[, 1:Rank, drop = FALSE]) A <- t(A) Mmat <- t(C.old) %*% C.old %*% solve(t(C) %*% C.old) Dmat <- adjust.Dmat.expression(Mmat = Mmat, Rank = Rank, Dmat = Dmat, M = M) } if (rrcontrol$Uncorrelated.latvar) { latvar.mat <- x[, colx2.index, drop = FALSE] %*% C var.latvar.mat <- var(latvar.mat) UU <- chol(var.latvar.mat) Ut <- solve(UU) Mmat <- t(UU) C <- C %*% Ut A <- A %*% t(UU) Dmat <- adjust.Dmat.expression(Mmat = Mmat, Rank = Rank, Dmat = Dmat, M = M) } if (rrcontrol$Quadratic) { Mmat <- diag(Rank) for (LV in 1:Rank) if (( rrcontrol$Crow1positive[LV] && C[1,LV] < 0) || (!rrcontrol$Crow1positive[LV] && C[1,LV] > 0)) { C[,LV] <- -C[,LV] A[,LV] <- -A[,LV] Mmat[LV,LV] <- -1 } Dmat <- adjust.Dmat.expression(Mmat = Mmat, Rank = Rank, Dmat = Dmat, M = M) } list(Amat = A, Cmat = C, Dmat = Dmat) } rrr.end.expression <- expression({ if (exists(".VGAM.etamat", envir = VGAMenv)) rm(".VGAM.etamat", envir = VGAMenv) if (control$Quadratic) { if (!length(extra)) extra <- list() extra$Cmat <- Cmat # Saves the latest iteration extra$Dmat <- Dmat # Not the latest iteration extra$B1 <- B1.save # Not the latest iteration (not good) } else { Hlist <- replace.constraints(Hlist.save, Amat, colx2.index) } X.vlm.save <- if (control$Quadratic) { tmp300 <- lm2qrrvlm.model.matrix(x = x, Hlist = Hlist.save, C = Cmat, control = control) latvar.mat <- tmp300$latvar.mat # Needed at the top of new.s.call lm2vlm.model.matrix(tmp300$new.latvar.model.matrix, H.list, xij = control$xij) } else { lm2vlm.model.matrix(x, Hlist, xij = control$xij) } fv <- tmp.fitted # Contains \bI \bnu eta <- fv + offset if (FALSE && control$Rank == 1) { ooo <- order(latvar.mat[, 1]) } mu <- family@linkinv(eta, extra) if (anyNA(mu)) warning("there are NAs in mu") deriv.mu <- eval(family@deriv) wz <- eval(family@weight) if (control$checkwz) wz <- checkwz(wz, M = M, trace = trace, wzepsilon = control$wzepsilon) U <- vchol(wz, M = M, n = n, silent=!trace) tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = n) z <- eta + vbacksub(U, tvfor, M = M, n = n) - offset # Contains \bI \bnu }) rrr.derivative.expression <- expression({ which.optimizer <- if (control$Quadratic && control$FastAlgorithm) { "BFGS" } else { if (iter <= rrcontrol$Switch.optimizer) "Nelder-Mead" else "BFGS" } if (trace && control$OptimizeWrtC) { cat("\n\n") cat("Using", which.optimizer, "\n") flush.console() } constraints <- replace.constraints(constraints, diag(M), rrcontrol$colx2.index) nice31 <- (!control$eq.tol || control$I.tolerances) && all(trivial.constraints(constraints) == 1) theta0 <- c(Cmat) assign(".VGAM.dot.counter", 0, envir = VGAMenv) if (control$OptimizeWrtC) { if (control$Quadratic && control$FastAlgorithm) { if (iter == 2) { if (exists(".VGAM.etamat", envir = VGAMenv)) 
rm(".VGAM.etamat", envir = VGAMenv) } if (iter > 2 && !quasi.newton$convergence) { if (zthere <- exists(".VGAM.z", envir = VGAMenv)) { ..VGAM.z <- get(".VGAM.z", envir = VGAMenv) ..VGAM.U <- get(".VGAM.U", envir = VGAMenv) ..VGAM.beta <- get(".VGAM.beta", envir = VGAMenv) } if (zthere) { z <- matrix(..VGAM.z, n, M) # minus any offset U <- matrix(..VGAM.U, M, n) } } if (iter == 2 || quasi.newton$convergence) { NOS <- ifelse(modelno == 3 || modelno == 5, M/2, M) canfitok <- (exists("CQO.FastAlgorithm", envir=VGAMenv) && get("CQO.FastAlgorithm", envir = VGAMenv)) if (!canfitok) stop("cannot fit this model using fast algorithm") p2star <- if (nice31) ifelse(control$I.toleran, Rank, Rank+0.5*Rank*(Rank+1)) else (NOS*Rank + Rank*(Rank+1)/2 * ifelse(control$eq.tol, 1,NOS)) p1star <- if (nice31) p1 * ifelse(modelno == 3 || modelno == 5, 2, 1) else (ncol(X.vlm.save) - p2star) X.vlm.1save <- if (p1star > 0) X.vlm.save[,-(1:p2star)] else NULL quasi.newton <- optim(par = Cmat, fn = callcqof, gr <- if (control$GradientFunction) calldcqo else NULL, method = which.optimizer, control = list(fnscale = 1, trace = as.integer(control$trace), parscale = rep_len(control$Parscale, length(Cmat)), maxit = 250), etamat = eta, xmat = x, ymat = y, wvec = w, X.vlm.1save = if (nice31) NULL else X.vlm.1save, modelno = modelno, Control = control, n = n, M = M, p1star = p1star, p2star = p2star, nice31 = nice31) if (zthere <- exists(".VGAM.z", envir = VGAMenv)) { ..VGAM.z <- get(".VGAM.z", envir = VGAMenv) ..VGAM.U <- get(".VGAM.U", envir = VGAMenv) ..VGAM.beta <- get(".VGAM.beta", envir = VGAMenv) } if (zthere) { z <- matrix(..VGAM.z, n, M) # minus any offset U <- matrix(..VGAM.U, M, n) } } else { if (exists(".VGAM.offset", envir = VGAMenv)) rm(".VGAM.offset", envir = VGAMenv) } } else { use.reltol <- if (length(rrcontrol$Reltol) >= iter) rrcontrol$Reltol[iter] else rev(rrcontrol$Reltol)[1] quasi.newton <- optim(par = theta0, fn = rrr.derivC.ResSS, method = which.optimizer, control = list(fnscale = rrcontrol$Fnscale, maxit = rrcontrol$Maxit, abstol = rrcontrol$Abstol, reltol = use.reltol), U = U, z = if (control$I.tolerances) z + offset else z, M = M, xmat = x, # varbix2 = varbix2, Hlist = Hlist, rrcontrol = rrcontrol) } Cmat <- matrix(quasi.newton$par, p2, Rank, byrow = FALSE) if (Rank > 1 && rrcontrol$I.tolerances) { numat <- x[, rrcontrol$colx2.index, drop = FALSE] %*% Cmat evnu <- eigen(var(numat), symmetric = TRUE) Cmat <- Cmat %*% evnu$vector numat <- x[, rrcontrol$colx2.index, drop = FALSE] %*% Cmat offset <- if (Rank > 1) -0.5*rowSums(numat^2) else -0.5*numat^2 } } alt <- valt.1iter(x = x, z = z, U = U, Hlist = Hlist, C = Cmat, nice31 = nice31, control = rrcontrol, lp.names = predictors.names) if (length(alt$offset)) offset <- alt$offset B1.save <- alt$B1 # Put later into extra tmp.fitted <- alt$fitted # contains \bI_{Rank} \bnu if Corner if (modelno != 33 && control$OptimizeWrtC) alt <- rrr.normalize(rrc = rrcontrol, A = alt$Amat, C = alt$Cmat, x = x, Dmat = alt$Dmat) if (trace && control$OptimizeWrtC) { cat("\n") cat(which.optimizer, "using optim():\n") cat("Objective = ", quasi.newton$value, "\n") cat("Parameters (= c(C)) = ", if (length(quasi.newton$par) < 5) "" else "\n") cat(alt$Cmat, fill = TRUE) cat("\n") cat("Number of function evaluations = ", quasi.newton$count[1], "\n") if (length(quasi.newton$message)) cat("Message = ", quasi.newton$message, "\n") cat("\n") flush.console() } Amat <- alt$Amat # Needed in rrr.end.expression Cmat <- alt$Cmat # Needed in rrr.end.expression if Quadratic Dmat <- alt$Dmat # 
Put later into extra eval(rrr.end.expression) # Put Amat into Hlist, and create new z }) rrr.derivC.ResSS <- function(theta, U, z, M, xmat, Hlist, rrcontrol, omit.these = NULL) { if (rrcontrol$trace) { cat(".") flush.console() } alreadyThere <- exists(".VGAM.dot.counter", envir = VGAMenv) if (alreadyThere) { VGAM.dot.counter <- get(".VGAM.dot.counter", envir = VGAMenv) VGAM.dot.counter <- VGAM.dot.counter + 1 assign(".VGAM.dot.counter", VGAM.dot.counter, envir = VGAMenv) if (VGAM.dot.counter > max(50, options()$width - 5)) { if (rrcontrol$trace) { cat("\n") flush.console() } assign(".VGAM.dot.counter", 0, envir = VGAMenv) } } Cmat <- matrix(theta, length(rrcontrol$colx2.index), rrcontrol$Rank) tmp700 <- lm2qrrvlm.model.matrix(x = xmat, Hlist = Hlist, no.thrills = !rrcontrol$Corner, C = Cmat, control = rrcontrol, assign = FALSE) Hlist <- tmp700$constraints # Does not contain \bI_{Rank} \bnu if (rrcontrol$Corner) { z <- as.matrix(z) # should actually call this zedd z[, rrcontrol$Index.corner] <- z[, rrcontrol$Index.corner] - tmp700$latvar.mat } if (length(tmp700$offset)) z <- z - tmp700$offset vlm.wfit(xmat = tmp700$new.latvar.model.matrix, zmat = z, Hlist = Hlist, ncolx = ncol(xmat), U = U, only.ResSS = TRUE, matrix.out = FALSE, is.vlmX = FALSE, ResSS = TRUE, qr = FALSE, Eta.range = rrcontrol$Eta.range, xij = rrcontrol$xij)$ResSS } rrvglm.optim.control <- function(Fnscale = 1, Maxit = 100, Switch.optimizer = 3, Abstol = -Inf, Reltol = sqrt(.Machine$double.eps), ...) { list(Fnscale = Fnscale, Maxit = Maxit, Switch.optimizer = Switch.optimizer, Abstol = Abstol, Reltol = Reltol) } nlminbcontrol <- function(Abs.tol = 10^(-6), Eval.max = 91, Iter.max = 91, Rel.err = 10^(-6), Rel.tol = 10^(-6), Step.min = 10^(-6), X.tol = 10^(-6), ...) { list(Abs.tol = Abs.tol, Eval.max = Eval.max, Iter.max = Iter.max, Rel.err = Rel.err, Rel.tol = Rel.tol, Step.min = Step.min, X.tol = X.tol) } Coef.qrrvglm <- function(object, varI.latvar = FALSE, refResponse = NULL, ...) { if (length(varI.latvar) != 1 || !is.logical(varI.latvar)) stop("'varI.latvar' must be TRUE or FALSE") if (length(refResponse) > 1) stop("argument 'refResponse' must be of length 0 or 1") if (length(refResponse) && is.Numeric(refResponse)) if (!is.Numeric(refResponse, length.arg = 1, integer.valued = TRUE)) stop("bad input for argument 'refResponse'") if (!is.logical(ConstrainedQO <- object@control$ConstrainedQO)) stop("cannot determine whether the model is constrained or not") ocontrol <- object@control coef.object <- object@coefficients Rank <- ocontrol$Rank M <- object@misc$M NOS <- if (length(object@y)) ncol(object@y) else M MSratio <- M / NOS # First value is g(mean) = quadratic form in latvar Quadratic <- if (ConstrainedQO) ocontrol$Quadratic else TRUE if (!Quadratic) stop("object is not a quadratic ordination object") p1 <- length(ocontrol$colx1.index) p2 <- length(ocontrol$colx2.index) Index.corner <- ocontrol$Index.corner str0 <- ocontrol$str0 eq.tolerances <- ocontrol$eq.tolerances Dzero <- ocontrol$Dzero Corner <- if (ConstrainedQO) ocontrol$Corner else FALSE estI.tol <- if (ConstrainedQO) object@control$I.tolerances else FALSE modelno <- object@control$modelno # 1, 2, 3, 4, 5, 6, 7 or 0 combine2 <- c(str0, if (Corner) Index.corner else NULL) NoA <- length(combine2) == M # A is fully known. 
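  # Qoffset counts the quadratic (D-matrix) coefficients per linear
  # predictor: Rank*(Rank+1)/2 of them when the tolerances are
  # estimated, but 0 under I.tolerances, since D is then fixed at
  # -0.5 * diag(Rank) and handled through an offset instead.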
Qoffset <- if (Quadratic) ifelse(estI.tol, 0, sum(1:Rank)) else 0 ynames <- object@misc$ynames if (!length(ynames)) ynames <- object@misc$predictors.names if (!length(ynames)) ynames <- object@misc$ynames if (!length(ynames)) ynames <- paste("Y", 1:NOS, sep = "") lp.names <- object@misc$predictors.names if (!length(lp.names)) lp.names <- NULL dzero.vector <- rep_len(FALSE, M) if (length(Dzero)) dzero.vector[Dzero] <- TRUE names(dzero.vector) <- ynames latvar.names <- if (Rank == 1) "latvar" else paste("latvar", 1:Rank, sep = "") td.expression <- function(Dmat, Amat, M, Dzero, Rank, bellshaped) { Tolerance <- Darray <- m2a(Dmat, M = Rank) for (ii in 1:M) if (length(Dzero) && any(Dzero == ii)) { Tolerance[, , ii] <- NA # Darray[,,ii] == O bellshaped[ii] <- FALSE } else { Tolerance[, , ii] <- -0.5 * solve(Darray[, , ii]) bellshaped[ii] <- all(eigen(Tolerance[, , ii], symmetric = TRUE)$values > 0) } optimum <- matrix(NA_real_, Rank, M) for (ii in 1:M) if (bellshaped[ii]) optimum[, ii] <- Tolerance[, , ii] %*% cbind(Amat[ii, ]) list(optimum = optimum, Tolerance = Tolerance, Darray = Darray, bellshaped = bellshaped) } Amat <- object@extra$Amat # M x Rank Cmat <- object@extra$Cmat # p2 x Rank Dmat <- object@extra$Dmat # B1 <- object@extra$B1 # bellshaped <- rep_len(FALSE, M) if (is.character(refResponse)) { refResponse <- (1:NOS)[refResponse == ynames] if (length(refResponse) != 1) stop("could not match argument 'refResponse' with any response") } ptr1 <- 1 candidates <- if (length(refResponse)) refResponse else { if (length(ocontrol$Dzero)) (1:M)[-ocontrol$Dzero] else (1:M)} repeat { if (ptr1 > 0) { this.spp <- candidates[ptr1] } elts <- Dmat[this.spp,, drop = FALSE] if (length(elts) < Rank) elts <- matrix(elts, 1, Rank) Dk <- m2a(elts, M = Rank)[, , 1] # Hopefully negative-def temp400 <- eigen(Dk, symmetric = TRUE) ptr1 <- ptr1 + 1 if (all(temp400$value < 0)) break if (ptr1 > length(candidates)) break } if (all(temp400$value < 0)) { temp1tol <- -0.5 * solve(Dk) dim(temp1tol) <- c(Rank,Rank) Mmat <- t(chol(temp1tol)) if (ConstrainedQO) { temp900 <- solve(t(Mmat)) Cmat <- Cmat %*% temp900 Amat <- Amat %*% Mmat } if (length(Cmat)) { temp800 <- crow1C(Cmat, ocontrol$Crow1positive, amat = Amat) Cmat <- temp800$cmat Amat <- temp800$amat } Dmat <- adjust.Dmat.expression(Mmat = Mmat, Rank = Rank, Dmat = Dmat, M = M) retlist <- td.expression(Dmat = Dmat, Amat = Amat, M = M, Dzero = Dzero, Rank = Rank, bellshaped = bellshaped) optimum <- retlist$optimum Tolerance <- retlist$Tolerance Darray <- retlist$Darray bellshaped <- retlist$bellshaped } else { if (length(refResponse) == 1) stop("tolerance matrix specified by 'refResponse' ", "is not positive-definite") else warning("could not find any positive-definite ", "tolerance matrix") } if (ConstrainedQO) if (Rank > 1) { if (!length(xmat <- object@x)) stop("cannot obtain the model matrix") numat <- xmat[,ocontrol$colx2.index, drop = FALSE] %*% Cmat evnu <- eigen(var(numat), symmetric = TRUE) Mmat <- solve(t(evnu$vector)) Cmat <- Cmat %*% evnu$vector # == Cmat %*% solve(t(Mmat)) Amat <- Amat %*% Mmat temp800 <- crow1C(Cmat, ocontrol$Crow1positive, amat = Amat) Cmat <- temp800$cmat Amat <- temp800$amat Dmat <- adjust.Dmat.expression(Mmat = Mmat, Rank = Rank, Dmat = Dmat, M = M) retlist <- td.expression(Dmat = Dmat, Amat = Amat, M = M, Dzero = Dzero, Rank = Rank, bellshaped = bellshaped) optimum <- retlist$optimum Tolerance <- retlist$Tolerance Darray <- retlist$Darray bellshaped <- retlist$bellshaped } if (ConstrainedQO) if (varI.latvar) { if (!length(xmat <- 
object@x)) stop("cannot obtain the model matrix") numat <- xmat[,ocontrol$colx2.index, drop = FALSE] %*% Cmat sdnumat <- apply(cbind(numat), 2, sd) Mmat <- if (Rank > 1) diag(sdnumat) else matrix(sdnumat, 1, 1) Cmat <- Cmat %*% solve(t(Mmat)) Amat <- Amat %*% Mmat temp800 <- crow1C(Cmat, ocontrol$Crow1positive, amat = Amat) Cmat <- temp800$cmat Amat <- temp800$amat Cmat # Not needed Dmat <- adjust.Dmat.expression(Mmat = Mmat, Rank = Rank, Dmat = Dmat, M = M) retlist <- td.expression(Dmat = Dmat, Amat = Amat, M = M, Dzero = Dzero, Rank = Rank, bellshaped = bellshaped) optimum <- retlist$optimum Tolerance <- retlist$Tolerance Darray <- retlist$Darray bellshaped <- retlist$bellshaped } cx1i <- ocontrol$colx1.index maximum <- if (length(cx1i) == 1 && names(cx1i) == "(Intercept)") { eta.temp <- B1 for (ii in 1:M) eta.temp[ii] <- eta.temp[ii] + Amat[ii, , drop = FALSE] %*% optimum[, ii, drop = FALSE] + t(optimum[, ii, drop = FALSE]) %*% Darray[,, ii, drop = TRUE] %*% optimum[, ii, drop = FALSE] mymax <- object@family@linkinv(rbind(eta.temp), extra = object@extra) c(mymax) # Convert from matrix to vector } else { 5 * rep_len(NA_real_, M) # Make "numeric" } names(maximum) <- ynames latvar.mat <- if (ConstrainedQO) { object@x[, ocontrol$colx2.index, drop = FALSE] %*% Cmat } else { object@latvar } dimnames(Amat) <- list(lp.names, latvar.names) if (ConstrainedQO) dimnames(Cmat) <- list(names(ocontrol$colx2.index), latvar.names) if (!length(xmat <- object@x)) stop("cannot obtain the model matrix") dimnames(latvar.mat) <- list(dimnames(xmat)[[1]], latvar.names) ans <- new(Class <- if (ConstrainedQO) "Coef.qrrvglm" else "Coef.uqo", A = Amat, B1 = B1, Constrained = ConstrainedQO, D = Darray, NOS = NOS, Rank = Rank, latvar = latvar.mat, latvar.order = latvar.mat, Optimum = optimum, Optimum.order = optimum, bellshaped = bellshaped, Dzero = dzero.vector, Maximum = maximum, Tolerance = Tolerance) if (ConstrainedQO) {ans@C <- Cmat} else {Cmat <- NULL} for (rrr in 1:Rank) ans@Optimum.order[rrr, ] <- order(ans@Optimum[rrr, ]) for (rrr in 1:Rank) ans@latvar.order[, rrr] <- order(ans@latvar[, rrr]) if (length(object@misc$estimated.dispersion) && object@misc$estimated.dispersion) { p <- length(object@coefficients) n <- object@misc$n M <- object@misc$M NOS <- if (length(object@y)) ncol(object@y) else M pstar <- if (ConstrainedQO) (p + length(Cmat)) else p + n*Rank # Adjustment; not sure about UQO adjusted.dispersion <- object@misc$dispersion * (n*M - p) / (n*M - pstar) ans@dispersion <- adjusted.dispersion } if (MSratio > 1) { keepIndex <- seq(from = 1, to = M, by = MSratio) ans@Dzero <- ans@Dzero[keepIndex] ans@Optimum <- ans@Optimum[,keepIndex, drop = FALSE] ans@Tolerance <- ans@Tolerance[,,keepIndex, drop = FALSE] ans@bellshaped <- ans@bellshaped[keepIndex] names(ans@Dzero) <- ynames } else { dimnames(ans@D) <- list(latvar.names, latvar.names, ynames) } names(ans@bellshaped) <- ynames dimnames(ans@Optimum) <- list(latvar.names, ynames) dimnames(ans@Tolerance) <- list(latvar.names, latvar.names, ynames) ans } # End of Coef.qrrvglm setClass(Class = "Coef.rrvglm", representation( "A" = "matrix", "B1" = "matrix", # This may be unassigned if p1 = 0. 
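    # Remaining slots: C is the p2 x Rank matrix of canonical
    # coefficients, and colx1.index / colx2.index record which columns
    # of the model matrix belong to x1 (the linear part) and x2 (the
    # reduced-rank part) respectively.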
"C" = "matrix", "Rank" = "numeric", "colx1.index" = "numeric", "colx2.index" = "numeric", "Atilde" = "matrix")) setClass(Class = "Coef.uqo", representation( "A" = "matrix", "B1" = "matrix", "Constrained" = "logical", "D" = "array", "NOS" = "numeric", "Rank" = "numeric", "latvar" = "matrix", "latvar.order" = "matrix", "Maximum" = "numeric", "Optimum" = "matrix", "Optimum.order" = "matrix", "bellshaped" = "logical", "dispersion" = "numeric", "Dzero" = "logical", "Tolerance" = "array")) setClass(Class = "Coef.qrrvglm", representation( "C" = "matrix"), contains = "Coef.uqo") show.Coef.qrrvglm <- function(x, ...) { object <- x Rank <- object@Rank M <- nrow(object@A) NOS <- object@NOS mymat <- matrix(NA_real_, NOS, Rank) if (Rank == 1) { # || object@Diagonal for (ii in 1:NOS) { fred <- if (Rank > 1) diag(object@Tolerance[, , ii, drop = FALSE]) else object@Tolerance[, , ii] if (all(fred > 0)) mymat[ii,] <- sqrt(fred) } dimnames(mymat) <- list(dimnames(object@Tolerance)[[3]], if (Rank == 1) "latvar" else paste("Tolerance", dimnames(mymat)[[2]], sep = "")) } else { for (ii in 1:NOS) { fred <- eigen(object@Tolerance[, , ii], symmetric = TRUE) if (all(fred$value > 0)) mymat[ii, ] <- sqrt(fred$value) } dimnames(mymat) <- list(dimnames(object@Tolerance)[[3]], paste("tol", 1:Rank, sep = "")) } dimnames(object@A) <- list(dimnames(object@A)[[1]], if (Rank > 1) paste("A", dimnames(object@A)[[2]], sep = ".") else "A") Maximum <- if (length(object@Maximum)) cbind(Maximum = object@Maximum) else NULL if (length(Maximum) && length(mymat) && Rank == 1) Maximum[is.na(mymat),] <- NA optmat <- cbind(t(object@Optimum)) dimnames(optmat) <- list(dimnames(optmat)[[1]], if (Rank > 1) paste("Optimum", dimnames(optmat)[[2]], sep = ".") else "Optimum") if (length(optmat) && length(mymat) && Rank == 1) optmat[is.na(mymat), ] <- NA if ( object@Constrained ) { cat("\nC matrix (constrained/canonical coefficients)\n") print(object@C, ...) } cat("\nB1 and A matrices\n") print(cbind(t(object@B1), A = object@A), ...) cat("\nOptimums and maximums\n") print(cbind(Optimum = optmat, Maximum), ...) if (Rank > 1) { # !object@Diagonal && Rank > 1 cat("\nTolerances\n") } else { cat("\nTolerance\n") } print(mymat, ...) cat("\nStandard deviation of the latent variables (site scores)\n") print(apply(cbind(object@latvar), 2, sd)) invisible(object) } setMethod("show", "Coef.qrrvglm", function(object) show.Coef.qrrvglm(object)) setMethod("summary", "qrrvglm", function(object, ...) summary.qrrvglm(object, ...)) predictqrrvglm <- function(object, newdata = NULL, type = c("link", "response", "latvar", "terms"), se.fit = FALSE, deriv = 0, dispersion = NULL, extra = object@extra, varI.latvar = FALSE, refResponse = NULL, ...) { if (se.fit) stop("cannot handle se.fit == TRUE yet") if (deriv != 0) stop("derivative is not equal to 0") if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("link", "response", "latvar", "terms"))[1] if (type == "latvar") stop("cannot handle type='latvar' yet") if (type == "terms") stop("cannot handle type='terms' yet") M <- object@misc$M Rank <- object@control$Rank na.act <- object@na.action object@na.action <- list() if (!length(newdata) && type == "response" && length(object@fitted.values)) { if (length(na.act)) { return(napredict(na.act[[1]], object@fitted.values)) } else { return(object@fitted.values) } } if (!length(newdata)) { X <- model.matrixvlm(object, type = "lm", ...) 
offset <- object@offset tt <- object@terms$terms # terms(object) if (!length(object@x)) attr(X, "assign") <- attrassignlm(X, tt) } else { if (is.smart(object) && length(object@smart.prediction)) { setup.smart("read", smart.prediction = object@smart.prediction) } tt <- object@terms$terms # terms(object) # 20030811; object@terms$terms X <- model.matrix(delete.response(tt), newdata, contrasts = if (length(object@contrasts)) object@contrasts else NULL, xlev = object@xlevels) if (nrow(X) != nrow(newdata)) { as.save <- attr(X, "assign") X <- X[rep_len(1, nrow(newdata)),, drop = FALSE] dimnames(X) <- list(dimnames(newdata)[[1]], "(Intercept)") attr(X, "assign") <- as.save # Restored } offset <- if (!is.null(off.num<-attr(tt,"offset"))) { eval(attr(tt,"variables")[[off.num+1]], newdata) } else if (!is.null(object@offset)) eval(object@call$offset, newdata) if (any(c(offset) != 0)) stop("currently cannot handle nonzero offsets") if (is.smart(object) && length(object@smart.prediction)) { wrapup.smart() } attr(X, "assign") <- attrassigndefault(X, tt) } ocontrol <- object@control Rank <- ocontrol$Rank NOS <- ncol(object@y) sppnames <- dimnames(object@y)[[2]] modelno <- ocontrol$modelno # 1, 2, 3, 5 or 0 M <- if (any(slotNames(object) == "predictors") && is.matrix(object@predictors)) ncol(object@predictors) else object@misc$M MSratio <- M / NOS # First value is g(mean) = quadratic form in latvar if (MSratio != 1) stop("can only handle MSratio == 1 for now") if (length(newdata)) { Coefs <- Coef(object, varI.latvar = varI.latvar, refResponse = refResponse) X1mat <- X[, ocontrol$colx1.index, drop = FALSE] X2mat <- X[, ocontrol$colx2.index, drop = FALSE] latvarmat <- as.matrix(X2mat %*% Coefs@C) # n x Rank etamat <- as.matrix(X1mat %*% Coefs@B1 + latvarmat %*% t(Coefs@A)) which.species <- 1:NOS # Do it all for all species for (sppno in seq_along(which.species)) { thisSpecies <- which.species[sppno] Dmat <- matrix(Coefs@D[,,thisSpecies], Rank, Rank) etamat[, thisSpecies] <- etamat[, thisSpecies] + mux34(latvarmat, Dmat, symmetric = TRUE) } } else { etamat <- object@predictors } pred <- switch(type, response = { fv <- if (length(newdata)) object@family@linkinv(etamat, extra) else fitted(object) if (M > 1 && is.matrix(fv)) { dimnames(fv) <- list(dimnames(fv)[[1]], dimnames(object@fitted.values)[[2]]) } fv }, link = etamat, latvar = stop("failure here"), terms = stop("failure here")) if (!length(newdata) && length(na.act)) { if (se.fit) { pred$fitted.values <- napredict(na.act[[1]], pred$fitted.values) pred$se.fit <- napredict(na.act[[1]], pred$se.fit) } else { pred <- napredict(na.act[[1]], pred) } } pred } setMethod("predict", "qrrvglm", function(object, ...) predictqrrvglm(object, ...)) coefqrrvglm <- function(object, matrix.out = FALSE, label = TRUE) { if (matrix.out) stop("currently cannot handle matrix.out = TRUE") coefvlm(object, matrix.out = matrix.out, label = label) } residualsqrrvglm <- function(object, type = c("deviance", "pearson", "working", "response", "ldot"), matrix.arg = TRUE) { stop("this function has not been written yet") } setMethod("residuals", "qrrvglm", function(object, ...) residualsqrrvglm(object, ...)) show.rrvglm <- function(x, ...) 
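# show() method for "rrvglm" objects: prints the call, the (possibly
# NA-containing) coefficients, the residual deviance, the
# log-likelihood, and any other fitted criteria.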
{
  if (!is.null(cl <- x@call)) {
    cat("Call:\n")
    dput(cl)
  }
  vecOfBetas <- x@coefficients
  if (any(nas <- is.na(vecOfBetas))) {
    if (is.null(names(vecOfBetas)))
      names(vecOfBetas) <- paste("b", seq_along(vecOfBetas), sep = "")
    cat("\nCoefficients: (", sum(nas),
        " not defined because of singularities)\n", sep = "")
  } else
    cat("\nCoefficients:\n")
  print.default(vecOfBetas, ...)  # used to be print()
  if (FALSE) {
    Rank <- x@Rank
    if (!length(Rank))
      Rank <- sum(!nas)
  }
  if (FALSE) {
    nobs <- if (length(x@df.total)) x@df.total else length(x@residuals)
    rdf <- x@df.residual
    if (!length(rdf))
      rdf <- nobs - Rank
  }
  cat("\n")
  if (length(deviance(x)))
    cat("Residual deviance:", format(deviance(x)), "\n")
  if (length(vll <- logLik.vlm(x)))
    cat("Log-likelihood:", format(vll), "\n")
  if (length(x@criterion)) {
    ncrit <- names(x@criterion)
    for (iii in ncrit)
      if (iii != "loglikelihood" && iii != "deviance")
        cat(paste(iii, ":", sep = ""), format(x@criterion[[iii]]), "\n")
  }
  invisible(x)
}
setMethod("show", "rrvglm", function(object) show.rrvglm(object))
summary.rrvglm <- function(object, correlation = FALSE,
                           dispersion = NULL, digits = NULL,
                           numerical = TRUE,
                           h.step = 0.0001,
                           kill.all = FALSE, omit13 = FALSE,
                           fixA = FALSE, presid = TRUE,
                           signif.stars = getOption("show.signif.stars"),
                           nopredictors = FALSE, ...) {
  if (!is.Numeric(h.step, length.arg = 1) || abs(h.step) > 1)
    stop("bad input for 'h.step'")
  if (!object@control$Corner)
    stop("this function works with corner constraints only")
  if (is.null(dispersion))
    dispersion <- object@misc$dispersion
  newobject <- as(object, "vglm")
  stuff <- summaryvglm(newobject, correlation = correlation,
                       dispersion = dispersion, presid = presid)
  answer <- new(Class = "summary.rrvglm",
                object,
                call = stuff@call,
                coef3 = stuff@coef3,
                cov.unscaled = stuff@cov.unscaled,
                correlation = stuff@correlation,
                df = stuff@df,
                sigma = stuff@sigma)
  if (is.numeric(stuff@dispersion))
    slot(answer, "dispersion") <- stuff@dispersion
  if (presid && length(stuff@pearson.resid))
    slot(answer, "pearson.resid") <- stuff@pearson.resid
  tmp5 <- get.rrvglm.se1(object, omit13 = omit13,
                         numerical = numerical, h.step = h.step,
                         kill.all = kill.all, fixA = fixA, ...)
  if (any(diag(tmp5$cov.unscaled) <= 0) ||
      any(eigen(tmp5$cov.unscaled, symmetric = TRUE)$values <= 0)) {
    warning("cov.unscaled is not positive definite")
  }
  answer@cov.unscaled <- tmp5$cov.unscaled
  od <- if (is.numeric(object@misc$disper))
    object@misc$disper else
    object@misc$default.disper
  if (is.numeric(dispersion)) {
    if (is.numeric(od) && dispersion != od)
      warning("dispersion != object@misc$dispersion; ",
              "using the former")
  } else {
    dispersion <- if (is.numeric(od)) od else 1
  }
  tmp8 <- object@misc$M - object@control$Rank -
          length(object@control$str0)
  answer@df[1] <- answer@df[1] + tmp8 * object@control$Rank
  answer@df[2] <- answer@df[2] - tmp8 * object@control$Rank
  if (dispersion == 0) {
    dispersion <- tmp5$ResSS / answer@df[2]  # Estimate
  }
  answer@coef3 <- get.rrvglm.se2(answer@cov.unscaled,
                                 dispersion = dispersion,
                                 coefficients = tmp5$coefficients)
  answer@dispersion <- dispersion
  answer@sigma <- dispersion^0.5
  answer@misc$signif.stars <- signif.stars  # 20160629
  answer@misc$nopredictors <- nopredictors  # 20150925
  answer
}
get.rrvglm.se1 <- function(fit, omit13 = FALSE, kill.all = FALSE,
                           numerical = TRUE,
                           fixA = FALSE, h.step = 0.0001,
                           trace.arg = FALSE, ...) {
  if (length(fit@control$Nested) && fit@control$Nested)
    stop("sorry, cannot handle nested models yet")
  str0 <- fit@control$str0
  if (!length(fit@x))
    stop("fit@x is empty. Run rrvglm(..., x = TRUE)")
  colx1.index <- fit@control$colx1.index  # May be NULL
  colx2.index <- fit@control$colx2.index
  Hlist <- fit@constraints
  ncolHlist <- unlist(lapply(Hlist, ncol))
  p1 <- length(colx1.index)  # May be 0
  p2 <- length(colx2.index)
  Rank <- fit@control$Rank  # fit@misc$Nested.Rank
  Amat <- fit@constraints[[colx2.index[1]]]
  B1mat <- if (p1)
    coefvlm(fit, matrix.out = TRUE)[colx1.index, , drop = FALSE] else
    NULL
  C.try <- coefvlm(fit, matrix.out = TRUE)[colx2.index, , drop = FALSE]
  Cmat <- C.try %*% Amat %*% solve(t(Amat) %*% Amat)
  x1mat <- if (p1) fit@x[, colx1.index, drop = FALSE] else NULL
  x2mat <- fit@x[, colx2.index, drop = FALSE]
  wz <- weights(fit, type = "work")  # old: wweights(fit) # fit@weights
  if (!length(wz))
    stop("cannot get fit@weights")
  M <- fit@misc$M
  n <- fit@misc$n
  Index.corner <- fit@control$Index.corner  # used to be (1:Rank);
  zmat <- fit@predictors + fit@residuals
  theta <- c(Amat[-c(Index.corner, str0), ])
  if (fit@control$checkwz)
    wz <- checkwz(wz, M = M, trace = trace.arg,
                  wzepsilon = fit@control$wzepsilon)
  U <- vchol(wz, M = M, n = n, silent = TRUE)
  delct.da <- if (numerical) {
    num.deriv.rrr(fit, M = M, r = Rank,
                  x1mat = x1mat, x2mat = x2mat, p2 = p2,
                  Index.corner, Aimat = Amat,
                  B1mat = B1mat, Cimat = Cmat,
                  h.step = h.step, colx2.index = colx2.index,
                  xij = fit@control$xij, str0 = str0)
  } else {
    dctda.fast.only(theta = theta, wz = wz, U = U, zmat,
                    M = M, r = Rank,
                    x1mat = x1mat, x2mat = x2mat, p2 = p2,
                    Index.corner, Aimat = Amat,
                    B1mat = B1mat, Cimat = Cmat,
                    xij = fit@control$xij, str0 = str0)
  }
  newobject <- as(fit, "vglm")
  sfit2233 <- summaryvglm(newobject)
  d8 <- dimnames(sfit2233@cov.unscaled)[[1]]
  cov2233 <- solve(sfit2233@cov.unscaled)  # Includes any intercepts
  dimnames(cov2233) <- list(d8, d8)
  log.vec33 <- NULL
  nassign <- names(fit@constraints)
  choose.from <- varassign(fit@constraints, nassign)
  for (ii in nassign)
    if (any(ii == names(colx2.index))) {
      log.vec33 <- c(log.vec33, choose.from[[ii]])
    }
  cov33 <- cov2233[ log.vec33,  log.vec33, drop = FALSE]  # r*p2 by r*p2
  cov23 <- cov2233[-log.vec33,  log.vec33, drop = FALSE]
  cov22 <- cov2233[-log.vec33, -log.vec33, drop = FALSE]
  latvar.mat <- x2mat %*% Cmat
  offs <- matrix(0, n, M)  # The "0" handles str0's
  offs[, Index.corner] <- latvar.mat
  if (M == (Rank + length(str0)))
    stop("cannot handle full-rank models yet")
  cm <- matrix(0, M, M - Rank - length(str0))
  cm[-c(Index.corner, str0), ] <- diag(M - Rank - length(str0))
  Hlist <- vector("list", length(colx1.index) + 1)
  names(Hlist) <- c(names(colx1.index), "I(latvar.mat)")
  for (ii in names(colx1.index))
    Hlist[[ii]] <- fit@constraints[[ii]]
  Hlist[["I(latvar.mat)"]] <- cm
  if (p1) {
    ooo <- fit@assign
    bb <- NULL
    for (ii in seq_along(ooo)) {
      if (any(ooo[[ii]][1] == colx1.index))
        bb <- c(bb, names(ooo)[ii])
    }
    has.intercept <- any(bb == "(Intercept)")
    bb[bb == "(Intercept)"] <- "1"
    if (p1 > 1)
      bb <- paste(bb, collapse = "+")
    if (has.intercept) {
      bb <- paste("zmat - offs ~ ", bb, " + I(latvar.mat)",
                  collapse = " ")
    } else {
      bb <- paste("zmat - offs ~ -1 + ", bb, " + I(latvar.mat)",
                  collapse = " ")
    }
    bb <- as.formula(bb)
  } else {
    bb <- as.formula("zmat - offs ~ -1 + I(latvar.mat)")
  }
  if (fit@misc$dataname == "list") {
    dspec <- FALSE
  } else {
    mytext1 <- "exists(x = fit@misc$dataname, envir = VGAMenv)"
    myexp1 <- parse(text = mytext1)
    is.there <- eval(myexp1)
    bbdata <- if (is.there)
      get(fit@misc$dataname, envir = VGAMenv) else
      get(fit@misc$dataname)
    dspec <- TRUE
  }
  fit1122 <- if (dspec)
    vlm(bb, constraints = Hlist, criterion = "d", weights = wz,
        data = bbdata, save.weights = TRUE, smart = FALSE, trace =
trace.arg, x.arg = TRUE) else vlm(bb, constraints = Hlist, criterion = "d", weights = wz, save.weights = TRUE, smart = FALSE, trace = trace.arg, x.arg = TRUE) sfit1122 <- summaryvlm(fit1122) d8 <- dimnames(sfit1122@cov.unscaled)[[1]] cov1122 <- solve(sfit1122@cov.unscaled) dimnames(cov1122) <- list(d8, d8) lcs <- length(coefvlm(sfit1122)) log.vec11 <- (lcs-(M-Rank-length(str0))*Rank+1):lcs cov11 <- cov1122[log.vec11, log.vec11, drop = FALSE] cov12 <- cov1122[ log.vec11, -log.vec11, drop = FALSE] cov22 <- cov1122[-log.vec11, -log.vec11, drop = FALSE] cov13 <- delct.da %*% cov33 if (omit13) cov13 <- cov13 * 0 # zero it if (kill.all) { cov13 <- cov13 * 0 # zero it if (fixA) { cov12 <- cov12 * 0 # zero it } else { cov23 <- cov23 * 0 # zero it } } cov13 <- -cov13 # Richards (1961) if (fixA) { cov.unscaled <- rbind(cbind(cov1122, rbind(cov13, cov23)), cbind(t(cov13), t(cov23), cov33)) } else { cov.unscaled <- rbind(cbind(cov11, cov12, cov13), cbind(rbind(t(cov12), t(cov13)), cov2233)) } ans <- solve(cov.unscaled) acoefs <- c(fit1122@coefficients[log.vec11], fit@coefficients) dimnames(ans) <- list(names(acoefs), names(acoefs)) list(cov.unscaled = ans, coefficients = acoefs, ResSS = sfit1122@ResSS) } get.rrvglm.se2 <- function(cov.unscaled, dispersion = 1, coefficients) { d8 <- dimnames(cov.unscaled)[[1]] ans <- matrix(coefficients, length(coefficients), 4) ans[, 2] <- sqrt(dispersion) * sqrt(diag(cov.unscaled)) ans[, 3] <- ans[, 1] / ans[, 2] ans[, 4] <- pnorm(-abs(ans[, 3])) dimnames(ans) <- list(d8, c("Estimate", "Std. Error", "z value", "Pr(>|z|)")) ans } num.deriv.rrr <- function(fit, M, r, x1mat, x2mat, p2, Index.corner, Aimat, B1mat, Cimat, h.step = 0.0001, colx2.index, xij = NULL, str0 = NULL) { nn <- nrow(x2mat) if (nrow(Cimat) != p2 || ncol(Cimat) != r) stop("'Cimat' wrong shape") dct.da <- matrix(NA_real_, (M-r-length(str0))*r, r*p2) if ((length(Index.corner) + length(str0)) == M) stop("cannot handle full rank models yet") cbindex <- (1:M)[-c(Index.corner, str0)] ptr <- 1 for (sss in 1:r) for (tt in cbindex) { small.Hlist <- vector("list", p2) pAmat <- Aimat pAmat[tt,sss] <- pAmat[tt,sss] + h.step # Perturb it for (ii in 1:p2) small.Hlist[[ii]] <- pAmat offset <- if (length(fit@offset)) fit@offset else 0 if (all(offset == 0)) offset <- 0 neweta <- x2mat %*% Cimat %*% t(pAmat) if (is.numeric(x1mat)) neweta <- neweta + x1mat %*% B1mat fit@predictors <- neweta newmu <- fit@family@linkinv(neweta, fit@extra) fit@fitted.values <- as.matrix(newmu) # 20100909 fred <- weights(fit, type = "w", deriv = TRUE, ignore.slot = TRUE) if (!length(fred)) stop("cannot get @weights and @deriv from object") wz <- fred$weights deriv.mu <- fred$deriv U <- vchol(wz, M = M, n = nn, silent = TRUE) tvfor <- vforsub(U, as.matrix(deriv.mu), M = M, n = nn) newzmat <- neweta + vbacksub(U, tvfor, M = M, n = nn) - offset if (is.numeric(x1mat)) newzmat <- newzmat - x1mat %*% B1mat newfit <- vlm.wfit(xmat = x2mat, zmat = newzmat, Hlist = small.Hlist, U = U, matrix.out = FALSE, is.vlmX = FALSE, ResSS = TRUE, qr = FALSE, x.ret = FALSE, offset = NULL, xij = xij) dct.da[ptr, ] <- (newfit$coef - t(Cimat)) / h.step ptr <- ptr + 1 } dct.da } dctda.fast.only <- function(theta, wz, U, zmat, M, r, x1mat, x2mat, p2, Index.corner, Aimat, B1mat, Cimat, xij = NULL, str0 = NULL) { if (length(str0)) stop("cannot handle 'str0' in dctda.fast.only()") nn <- nrow(x2mat) if (nrow(Cimat) != p2 || ncol(Cimat) != r) stop("Cimat wrong shape") fred <- kronecker(matrix(1, 1,r), x2mat) fred <- kronecker(fred, matrix(1,M, 1)) barney <- 
kronecker(Aimat, matrix(1, 1,p2)) barney <- kronecker(matrix(1, nn, 1), barney) temp <- array(t(barney*fred), c(p2*r, M, nn)) temp <- aperm(temp, c(2, 1, 3)) # M by p2*r by nn temp <- mux5(wz, temp, M = M, matrix.arg= TRUE) temp <- m2a(temp, M = p2 * r) # Note M != M here! G <- solve(rowSums(temp, dims = 2)) # p2*r by p2*r dc.da <- array(NA_real_, c(p2, r, M, r)) # different from other functions if (length(Index.corner) == M) stop("cannot handle full rank models yet") cbindex <- (1:M)[-Index.corner] # complement of Index.corner resid2 <- if (length(x1mat)) mux22(t(wz), zmat - x1mat %*% B1mat, M = M, upper = FALSE, as.matrix = TRUE) else mux22(t(wz), zmat , M = M, upper = FALSE, as.matrix = TRUE) for (sss in 1:r) for (ttt in cbindex) { fred <- t(x2mat) * matrix(resid2[, ttt], p2, nn, byrow = TRUE) # p2 * nn temp2 <- kronecker(I.col(sss, r), rowSums(fred)) for (kkk in 1:r) { Wiak <- mux22(t(wz), matrix(Aimat[,kkk], nn, M, byrow = TRUE), M = M, upper = FALSE, as.matrix = TRUE) # nn * M wxx <- Wiak[,ttt] * x2mat blocki <- t(x2mat) %*% wxx temp4a <- blocki %*% Cimat[,kkk] if (kkk == 1) { temp4b <- blocki %*% Cimat[,sss] } temp2 <- temp2 - kronecker(I.col(sss, r), temp4a) - kronecker(I.col(kkk, r), temp4b) } dc.da[,,ttt,sss] <- G %*% temp2 } ans1 <- dc.da[,,cbindex,, drop = FALSE] # p2 x r x (M-r) x r ans1 <- aperm(ans1, c(2, 1, 3, 4)) # r x p2 x (M-r) x r ans1 <- matrix(c(ans1), r*p2, (M-r)*r) ans1 <- t(ans1) ans1 } dcda.fast <- function(theta, wz, U, z, M, r, xmat, pp, Index.corner, intercept = TRUE, xij = NULL) { nn <- nrow(xmat) Aimat <- matrix(NA_real_, M, r) Aimat[Index.corner,] <- diag(r) Aimat[-Index.corner,] <- theta # [-(1:M)] if (intercept) { Hlist <- vector("list", pp+1) Hlist[[1]] <- diag(M) for (ii in 2:(pp+1)) Hlist[[ii]] <- Aimat } else { Hlist <- vector("list", pp) for (ii in 1:pp) Hlist[[ii]] <- Aimat } coeffs <- vlm.wfit(xmat = xmat, z, Hlist, U = U, matrix.out = TRUE, xij = xij)$mat.coef c3 <- coeffs <- t(coeffs) # transpose to make M x (pp+1) int.vec <- if (intercept) c3[, 1] else 0 # \boldeta_0 Cimat <- if (intercept) t(c3[Index.corner,-1, drop = FALSE]) else t(c3[Index.corner,, drop = FALSE]) if (nrow(Cimat)!=pp || ncol(Cimat)!=r) stop("Cimat wrong shape") fred <- kronecker(matrix(1, 1,r), if (intercept) xmat[,-1, drop = FALSE] else xmat) fred <- kronecker(fred, matrix(1,M, 1)) barney <- kronecker(Aimat, matrix(1, 1,pp)) barney <- kronecker(matrix(1, nn, 1), barney) temp <- array(t(barney*fred), c(r*pp,M,nn)) temp <- aperm(temp, c(2, 1, 3)) temp <- mux5(wz, temp, M = M, matrix.arg = TRUE) temp <- m2a(temp, M = r * pp) # Note M != M here! 
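# rowSums(temp, dims = 2) collapses the (r*pp) x (r*pp) x n array over
# the n observations; G below is the inverse of this weighted
# cross-product matrix and premultiplies the score-like vectors to give
# the derivatives of C.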
G <- solve(rowSums(temp, dims = 2)) dc.da <- array(NA_real_, c(pp, r, M, r)) # different from other functions cbindex <- (1:M)[-Index.corner] resid2 <- mux22(t(wz), z - matrix(int.vec, nn, M, byrow = TRUE), M = M, upper = FALSE, as.matrix = TRUE) # mat = TRUE, for (s in 1:r) for (tt in cbindex) { fred <- (if (intercept) t(xmat[, -1, drop = FALSE]) else t(xmat)) * matrix(resid2[, tt], pp, nn, byrow = TRUE) temp2 <- kronecker(I.col(s, r), rowSums(fred)) temp4 <- rep_len(0, pp) for (k in 1:r) { Wiak <- mux22(t(wz), matrix(Aimat[, k], nn, M, byrow = TRUE), M = M, upper = FALSE, as.matrix = TRUE) wxx <- Wiak[,tt] * (if (intercept) xmat[, -1, drop = FALSE] else xmat) blocki <- (if (intercept) t(xmat[, -1, drop = FALSE]) else t(xmat)) %*% wxx temp4 <- temp4 + blocki %*% Cimat[, k] } dc.da[,,tt,s] <- G %*% (temp2 - 2 * kronecker(I.col(s, r), temp4)) } ans1 <- dc.da[,,cbindex,, drop = FALSE] # pp x r x (M-r) x r ans1 <- aperm(ans1, c(2, 1, 3, 4)) # r x pp x (M-r) x r ans1 <- matrix(c(ans1), (M-r)*r, r*pp, byrow = TRUE) detastar.da <- array(0,c(M,r,r,nn)) for (s in 1:r) for (j in 1:r) { t1 <- t(dc.da[,j,,s]) t1 <- matrix(t1, M, pp) detastar.da[,j,s,] <- t1 %*% (if (intercept) t(xmat[,-1, drop = FALSE]) else t(xmat)) } etastar <- (if (intercept) xmat[,-1, drop = FALSE] else xmat) %*% Cimat eta <- matrix(int.vec, nn, M, byrow = TRUE) + etastar %*% t(Aimat) sumWinv <- solve((m2a(t(colSums(wz)), M = M))[, , 1]) deta0.da <- array(0,c(M,M,r)) AtWi <- kronecker(matrix(1, nn, 1), Aimat) AtWi <- mux111(t(wz), AtWi, M = M, upper= FALSE) # matrix.arg= TRUE, AtWi <- array(t(AtWi), c(r, M, nn)) for (ss in 1:r) { temp90 <- (m2a(t(colSums(etastar[, ss]*wz)), M = M))[, , 1] # MxM temp92 <- array(detastar.da[,,ss,], c(M, r, nn)) temp93 <- mux7(temp92, AtWi) temp91 <- rowSums(temp93, dims = 2) # M x M deta0.da[,,ss] <- -(temp90 + temp91) %*% sumWinv } ans2 <- deta0.da[-(1:r), , , drop = FALSE] # (M-r) x M x r ans2 <- aperm(ans2, c(1, 3, 2)) # (M-r) x r x M ans2 <- matrix(c(ans2), (M-r)*r, M) list(dc.da = ans1, dint.da = ans2) } rrr.deriv.ResSS <- function(theta, wz, U, z, M, r, xmat, pp, Index.corner, intercept = TRUE, xij = NULL) { Amat <- matrix(NA_real_, M, r) Amat[Index.corner,] <- diag(r) Amat[-Index.corner,] <- theta # [-(1:M)] if (intercept) { Hlist <- vector("list", pp+1) Hlist[[1]] <- diag(M) for (ii in 2:(pp+1)) Hlist[[ii]] <- Amat } else { Hlist <- vector("list", pp) for (ii in 1:pp) Hlist[[ii]] <- Amat } vlm.wfit(xmat = xmat, z, Hlist, U = U, matrix.out = FALSE, ResSS = TRUE, xij = xij)$ResSS } rrr.deriv.gradient.fast <- function(theta, wz, U, z, M, r, xmat, pp, Index.corner, intercept = TRUE) { nn <- nrow(xmat) Aimat <- matrix(NA_real_, M, r) Aimat[Index.corner,] <- diag(r) Aimat[-Index.corner,] <- theta # [-(1:M)] if (intercept) { Hlist <- vector("list", pp+1) Hlist[[1]] <- diag(M) for (i in 2:(pp+1)) Hlist[[i]] <- Aimat } else { Hlist <- vector("list", pp) for (i in 1:(pp)) Hlist[[i]] <- Aimat } coeffs <- vlm.wfit(xmat, z, Hlist, U = U, matrix.out= TRUE, xij = NULL)$mat.coef c3 <- coeffs <- t(coeffs) # transpose to make M x (pp+1) int.vec <- if (intercept) c3[, 1] else 0 # \boldeta_0 Cimat <- if (intercept) t(c3[Index.corner, -1, drop = FALSE]) else t(c3[Index.corner,, drop = FALSE]) if (nrow(Cimat) != pp || ncol(Cimat) != r) stop("Cimat wrong shape") fred <- kronecker(matrix(1, 1,r), if (intercept) xmat[, -1, drop = FALSE] else xmat) fred <- kronecker(fred, matrix(1, M, 1)) barney <- kronecker(Aimat, matrix(1, 1, pp)) barney <- kronecker(matrix(1, nn, 1), barney) temp <- array(t(barney*fred), c(r*pp, 
M, nn)) temp <- aperm(temp, c(2, 1, 3)) temp <- mux5(wz, temp, M = M, matrix.arg = TRUE) temp <- m2a(temp, M = r * pp) # Note M != M here! G <- solve(rowSums(temp, dims = 2)) dc.da <- array(NA_real_, c(pp, r, r, M)) cbindex <- (1:M)[-Index.corner] resid2 <- mux22(t(wz), z - matrix(int.vec, nn, M, byrow = TRUE), M = M, upper = FALSE, as.matrix = TRUE) for (s in 1:r) for (tt in cbindex) { fred <- (if (intercept) t(xmat[, -1, drop = FALSE]) else t(xmat)) * matrix(resid2[, tt], pp, nn, byrow = TRUE) temp2 <- kronecker(I.col(s, r), rowSums(fred)) temp4 <- rep_len(0, pp) for (k in 1:r) { Wiak <- mux22(t(wz), matrix(Aimat[, k], nn, M, byrow = TRUE), M = M, upper = FALSE, as.matrix = TRUE) wxx <- Wiak[,tt] * (if (intercept) xmat[, -1, drop = FALSE] else xmat) blocki <- (if (intercept) t(xmat[, -1, drop = FALSE]) else t(xmat)) %*% wxx temp4 <- temp4 + blocki %*% Cimat[, k] } dc.da[,,s,tt] <- G %*% (temp2 - 2 * kronecker(I.col(s, r), temp4)) } detastar.da <- array(0,c(M,r,r,nn)) for (s in 1:r) for (j in 1:r) { t1 <- t(dc.da[,j,s,]) t1 <- matrix(t1, M, pp) detastar.da[,j,s,] <- t1 %*% (if (intercept) t(xmat[, -1, drop = FALSE]) else t(xmat)) } etastar <- (if (intercept) xmat[, -1, drop = FALSE] else xmat) %*% Cimat eta <- matrix(int.vec, nn, M, byrow = TRUE) + etastar %*% t(Aimat) sumWinv <- solve((m2a(t(colSums(wz)), M = M))[, , 1]) deta0.da <- array(0, c(M, M, r)) AtWi <- kronecker(matrix(1, nn, 1), Aimat) AtWi <- mux111(t(wz), AtWi, M = M, upper = FALSE) # matrix.arg= TRUE, AtWi <- array(t(AtWi), c(r, M, nn)) for (ss in 1:r) { temp90 <- (m2a(t(colSums(etastar[, ss] * wz)), M = M))[, , 1] temp92 <- array(detastar.da[, , ss, ], c(M, r, nn)) temp93 <- mux7(temp92,AtWi) temp91 <- apply(temp93, 1:2,sum) # M x M temp91 <- rowSums(temp93, dims = 2) # M x M deta0.da[,,ss] <- -(temp90 + temp91) %*% sumWinv } ans <- matrix(0,M,r) fred <- mux22(t(wz), z - eta, M = M, upper = FALSE, as.matrix = TRUE) fred.array <- array(t(fred %*% Aimat),c(r, 1, nn)) for (s in 1:r) { a1 <- colSums(fred %*% t(deta0.da[,, s])) a2 <- colSums(fred * etastar[, s]) temp92 <- array(detastar.da[, , s, ],c(M, r, nn)) temp93 <- mux7(temp92, fred.array) a3 <- rowSums(temp93, dims = 2) ans[,s] <- a1 + a2 + a3 } ans <- -2 * c(ans[cbindex, ]) ans } vellipse <- function(R, ratio = 1, orientation = 0, center = c(0, 0), N = 300) { if (length(center) != 2) stop("argument 'center' must be of length 2") theta <- 2*pi*(0:N)/N x1 <- R*cos(theta) y1 <- ratio*R*sin(theta) x <- center[1] + cos(orientation)*x1 - sin(orientation)*y1 y <- center[2] + sin(orientation)*x1 + cos(orientation)*y1 cbind(x, y) } biplot.qrrvglm <- function(x, ...) 
{ stop("biplot.qrrvglm has been replaced by the function lvplot.qrrvglm") } lvplot.qrrvglm <- function(object, varI.latvar = FALSE, refResponse = NULL, add = FALSE, show.plot = TRUE, rug = TRUE, y = FALSE, type = c("fitted.values", "predictors"), xlab = paste("Latent Variable", if (Rank == 1) "" else " 1", sep = ""), ylab = if (Rank == 1) switch(type, predictors = "Predictors", fitted.values = "Fitted values") else "Latent Variable 2", pcex = par()$cex, pcol = par()$col, pch = par()$pch, llty = par()$lty, lcol = par()$col, llwd = par()$lwd, label.arg = FALSE, adj.arg = -0.1, ellipse = 0.95, Absolute = FALSE, elty = par()$lty, ecol = par()$col, elwd = par()$lwd, egrid = 200, chull.arg = FALSE, clty = 2, ccol = par()$col, clwd = par()$lwd, cpch = " ", C = FALSE, OriginC = c("origin", "mean"), Clty = par()$lty, Ccol = par()$col, Clwd = par()$lwd, Ccex = par()$cex, Cadj.arg = -0.1, stretchC = 1, sites = FALSE, spch = NULL, scol = par()$col, scex = par()$cex, sfont = par()$font, check.ok = TRUE, ...) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("fitted.values", "predictors"))[1] if (is.numeric(OriginC)) OriginC <- rep_len(OriginC, 2) else { if (mode(OriginC) != "character" && mode(OriginC) != "name") OriginC <- as.character(substitute(OriginC)) OriginC <- match.arg(OriginC, c("origin","mean"))[1] } if (length(ellipse) > 1) stop("ellipse must be of length 1 or 0") if (is.logical(ellipse)) {ellipse <- if (ellipse) 0.95 else NULL} Rank <- object@control$Rank if (Rank > 2) stop("can only handle rank 1 or 2 models") M <- object@misc$M NOS <- ncol(object@y) MSratio <- M / NOS # First value is g(mean) = quadratic form in latvar n <- object@misc$n colx2.index <- object@control$colx2.index cx1i <- object@control$colx1.index # May be NULL if (check.ok) if (!(length(cx1i) == 1 && names(cx1i) == "(Intercept)")) stop("latent variable plots allowable only for ", "noRRR = ~ 1 models") Coef.list <- Coef(object, varI.latvar = varI.latvar, refResponse = refResponse) if ( C) Cmat <- Coef.list@C nustar <- Coef.list@latvar # n x Rank if (!show.plot) return(nustar) r.curves <- slot(object, type) # n times M (\boldeta or \boldmu) if (!add) { if (Rank == 1) { matplot(nustar, if ( y && type == "fitted.values") object@y else r.curves, type = "n", xlab = xlab, ylab = ylab, ...) } else { # Rank == 2 matplot(c(Coef.list@Optimum[1, ], nustar[, 1]), c(Coef.list@Optimum[2, ], nustar[, 2]), type = "n", xlab = xlab, ylab = ylab, ...) 
} } pch <- rep_len(pch, ncol(r.curves)) pcol <- rep_len(pcol, ncol(r.curves)) pcex <- rep_len(pcex, ncol(r.curves)) llty <- rep_len(llty, ncol(r.curves)) lcol <- rep_len(lcol, ncol(r.curves)) llwd <- rep_len(llwd, ncol(r.curves)) elty <- rep_len(elty, ncol(r.curves)) ecol <- rep_len(ecol, ncol(r.curves)) elwd <- rep_len(elwd, ncol(r.curves)) adj.arg <- rep_len(adj.arg, ncol(r.curves)) if ( C ) { Clwd <- rep_len(Clwd, nrow(Cmat)) Clty <- rep_len(Clty, nrow(Cmat)) Ccol <- rep_len(Ccol, nrow(Cmat)) Ccex <- rep_len(Ccex, nrow(Cmat)) Cadj.arg <- rep_len(Cadj.arg, nrow(Cmat)) } if (Rank == 1) { for (i in 1:ncol(r.curves)) { xx <- nustar yy <- r.curves[,i] o <- sort.list(xx) xx <- xx[o] yy <- yy[o] lines(xx, yy, col = lcol[i], lwd = llwd[i], lty = llty[i]) if ( y && type == "fitted.values") { ypts <- object@y if (NCOL(ypts) == ncol(r.curves)) points(xx, ypts[o,i], col = pcol[i], cex = pcex[i], pch = pch[i]) } } if (rug) rug(xx) } else { for (i in 1:ncol(r.curves)) points(Coef.list@Optimum[1, i], Coef.list@Optimum[2, i], col = pcol[i], cex = pcex[i], pch = pch[i]) if (label.arg) { for (i in 1:ncol(r.curves)) text(Coef.list@Optimum[1, i], Coef.list@Optimum[2, i], labels = (dimnames(Coef.list@Optimum)[[2]])[i], adj = adj.arg[i], col = pcol[i], cex = pcex[i]) } if (chull.arg) { hull <- chull(nustar[, 1], nustar[, 2]) hull <- c(hull, hull[1]) lines(nustar[hull, 1], nustar[hull, 2], type = "b", pch = cpch, lty = clty, col = ccol, lwd = clwd) } if (length(ellipse)) { ellipse.temp <- if (ellipse > 0) ellipse else 0.95 if (ellipse < 0 && (!object@control$eq.tolerances || varI.latvar)) stop("an equal-tolerances assumption and 'varI.latvar = FALSE' ", "is needed for 'ellipse' < 0") if ( check.ok ) { colx1.index <- object@control$colx1.index if (!(length(colx1.index) == 1 && names(colx1.index) == "(Intercept)")) stop("can only plot ellipses for intercept models only") } for (i in 1:ncol(r.curves)) { cutpoint <- object@family@linkfun( if (Absolute) ellipse.temp else Coef.list@Maximum[i] * ellipse.temp, extra = object@extra) if (MSratio > 1) cutpoint <- cutpoint[1, 1] cutpoint <- object@family@linkfun(Coef.list@Maximum[i], extra = object@extra) - cutpoint if (is.finite(cutpoint) && cutpoint > 0) { Mmat <- diag(rep_len(ifelse(object@control$Crow1positive, 1, -1), Rank)) etoli <- eigen(t(Mmat) %*% Coef.list@Tolerance[,,i] %*% Mmat, symmetric = TRUE) A <- ifelse(etoli$val[1]>0, sqrt(2*cutpoint*etoli$val[1]), Inf) B <- ifelse(etoli$val[2]>0, sqrt(2*cutpoint*etoli$val[2]), Inf) if (ellipse < 0) A <- B <- -ellipse / 2 theta.angle <- asin(etoli$vector[2, 1]) * ifelse(object@control$Crow1positive[2], 1, -1) if (object@control$Crow1positive[1]) theta.angle <- pi - theta.angle if (all(is.finite(c(A,B)))) lines(vellipse(R = 2*A, ratio = B/A, orientation = theta.angle, center = Coef.list@Optimum[, i], N = egrid), lwd = elwd[i], col =ecol[i], lty = elty[i]) } } } if ( C ) { if (is.character(OriginC) && OriginC == "mean") OriginC <- c(mean(nustar[, 1]), mean(nustar[, 2])) if (is.character(OriginC) && OriginC == "origin") OriginC <- c(0,0) for (i in 1:nrow(Cmat)) arrows(x0 = OriginC[1], y0 = OriginC[2], x1 = OriginC[1] + stretchC * Cmat[i, 1], y1 = OriginC[2] + stretchC * Cmat[i, 2], lty = Clty[i], col = Ccol[i], lwd = Clwd[i]) if (label.arg) { temp200 <- dimnames(Cmat)[[1]] for (i in 1:nrow(Cmat)) text(OriginC[1] + stretchC * Cmat[i, 1], OriginC[2] + stretchC * Cmat[i, 2], col = Ccol[i], labels = temp200[i], adj = Cadj.arg[i], cex = Ccex[i]) } } if (sites) { text(nustar[, 1], nustar[, 2], adj = 0.5, labels = if 
(is.null(spch)) dimnames(nustar)[[1]] else rep_len(spch, nrow(nustar)), col = scol, cex = scex, font = sfont) } } invisible(nustar) } lvplot.rrvglm <- function(object, A = TRUE, C = TRUE, scores = FALSE, show.plot = TRUE, groups = rep(1, n), gapC = sqrt(sum(par()$cxy^2)), scaleA = 1, xlab = "Latent Variable 1", ylab = "Latent Variable 2", Alabels= if (length(object@misc$predictors.names)) object@misc$predictors.names else paste("LP", 1:M, sep = ""), Aadj = par()$adj, Acex = par()$cex, Acol = par()$col, Apch = NULL, Clabels=rownames(Cmat), Cadj = par()$adj, Ccex = par()$cex, Ccol = par()$col, Clty = par()$lty, Clwd = par()$lwd, chull.arg = FALSE, ccex = par()$cex, ccol = par()$col, clty = par()$lty, clwd = par()$lwd, spch = NULL, scex = par()$cex, scol = par()$col, slabels = rownames(x2mat), ...) { if (object@control$Rank != 2 && show.plot) stop("can only handle rank-2 models") M <- object@misc$M n <- object@misc$n colx2.index <- object@control$colx2.index Coef.list <- Coef(object) Amat <- Coef.list@A Cmat <- Coef.list@C Amat <- Amat * scaleA dimnames(Amat) <- list(object@misc$predictors.names, NULL) Cmat <- Cmat / scaleA if (!length(object@x)) { object@x <- model.matrixvlm(object, type = "lm") } x2mat <- object@x[, colx2.index, drop = FALSE] nuhat <- x2mat %*% Cmat if (!show.plot) return(as.matrix(nuhat)) index.nosz <- 1:M allmat <- rbind(if (A) Amat else NULL, if (C) Cmat else NULL, if (scores) nuhat else NULL) plot(allmat[, 1], allmat[, 2], type = "n", xlab=xlab, ylab=ylab, ...) # xlim etc. supplied through ... if (A) { Aadj <- rep_len(Aadj, length(index.nosz)) Acex <- rep_len(Acex, length(index.nosz)) Acol <- rep_len(Acol, length(index.nosz)) if (length(Alabels) != M) stop("'Alabels' must be of length ", M) if (length(Apch)) { Apch <- rep_len(Apch, length(index.nosz)) for (i in index.nosz) points(Amat[i, 1], Amat[i, 2], pch=Apch[i],cex = Acex[i],col=Acol[i]) } else { for (i in index.nosz) text(Amat[i, 1], Amat[i, 2], Alabels[i], cex = Acex[i], col =Acol[i], adj=Aadj[i]) } } if (C) { p2 <- nrow(Cmat) gapC <- rep_len(gapC, p2) Cadj <- rep_len(Cadj, p2) Ccex <- rep_len(Ccex, p2) Ccol <- rep_len(Ccol, p2) Clwd <- rep_len(Clwd, p2) Clty <- rep_len(Clty, p2) if (length(Clabels) != p2) stop("'length(Clabels)' must be equal to ", p2) for (ii in 1:p2) { arrows(0, 0, Cmat[ii, 1], Cmat[ii, 2], lwd = Clwd[ii], lty = Clty[ii], col = Ccol[ii]) const <- 1 + gapC[ii] / sqrt(Cmat[ii, 1]^2 + Cmat[ii, 2]^2) text(const*Cmat[ii, 1], const*Cmat[ii, 2], Clabels[ii], cex = Ccex[ii], adj = Cadj[ii], col = Ccol[ii]) } } if (scores) { ugrp <- unique(groups) nlev <- length(ugrp) # number of groups clty <- rep_len(clty, nlev) clwd <- rep_len(clwd, nlev) ccol <- rep_len(ccol, nlev) if (length(spch)) spch <- rep_len(spch, n) scol <- rep_len(scol, n) scex <- rep_len(scex, n) for (ii in ugrp) { gp <- groups == ii if (nlev > 1 && (length(unique(spch[gp])) != 1 || length(unique(scol[gp])) != 1 || length(unique(scex[gp])) != 1)) warning("spch/scol/scex is different for individuals ", "from the same group") temp <- nuhat[gp,, drop = FALSE] if (length(spch)) { points(temp[, 1], temp[, 2], cex = scex[gp], pch = spch[gp], col = scol[gp]) } else { text(temp[, 1], temp[, 2], label = slabels, cex = scex[gp], col = scol[gp]) } if (chull.arg) { hull <- chull(temp[, 1], temp[, 2]) hull <- c(hull, hull[1]) lines(temp[hull, 1], temp[hull, 2], type = "b", lty = clty[ii], col = ccol[ii], lwd = clwd[ii], pch = " ") } } } invisible(nuhat) } Coef.rrvglm <- function(object, ...) 
{ M <- object@misc$M n <- object@misc$n colx1.index <- object@control$colx1.index colx2.index <- object@control$colx2.index p1 <- length(colx1.index) # May be 0 Amat <- object@constraints[[colx2.index[1]]] B1mat <- if (p1) coefvlm(object, matrix.out = TRUE)[colx1.index,, drop = FALSE] else NULL C.try <- coefvlm(object, matrix.out = TRUE)[colx2.index, , drop = FALSE] Cmat <- C.try %*% Amat %*% solve(t(Amat) %*% Amat) Rank <- object@control$Rank latvar.names <- if (Rank > 1) paste("latvar", 1:Rank, sep = "") else "latvar" dimnames(Amat) <- list(object@misc$predictors.names, latvar.names) dimnames(Cmat) <- list(dimnames(Cmat)[[1]], latvar.names) ans <- new(Class = "Coef.rrvglm", A = Amat, C = Cmat, Rank = Rank, colx2.index = colx2.index) if (!is.null(colx1.index)) { ans@colx1.index <- colx1.index ans@B1 <- B1mat } if (object@control$Corner) ans@Atilde <- Amat[-c(object@control$Index.corner, object@control$str0),, drop = FALSE] ans } setMethod("Coef", "rrvglm", function(object, ...) Coef.rrvglm(object, ...)) show.Coef.rrvglm <- function(x, ...) { object <- x cat("A matrix:\n") print(object@A, ...) cat("\nC matrix:\n") print(object@C, ...) p1 <- length(object@colx1.index) if (p1) { cat("\nB1 matrix:\n") print(object@B1, ...) } invisible(object) } if (!isGeneric("biplot")) setGeneric("biplot", function(x, ...) standardGeneric("biplot")) setMethod("Coef", "qrrvglm", function(object, ...) Coef.qrrvglm(object, ...)) setMethod("biplot", "qrrvglm", function(x, ...) { biplot.qrrvglm(x, ...)}) setMethod("lvplot", "qrrvglm", function(object, ...) { invisible(lvplot.qrrvglm(object, ...))}) setMethod("lvplot", "rrvglm", function(object, ...) { invisible(lvplot.rrvglm(object, ...))}) biplot.rrvglm <- function(x, ...) lvplot(object = x, ...) setMethod("biplot", "rrvglm", function(x, ...) invisible(biplot.rrvglm(x, ...))) summary.qrrvglm <- function(object, varI.latvar = FALSE, refResponse = NULL, ...) { answer <- object answer@post$Coef <- Coef(object, varI.latvar = varI.latvar, refResponse = refResponse, ...) # Store it here; non-elegant if (length((answer@post$Coef)@dispersion) && length(object@misc$estimated.dispersion) && object@misc$estimated.dispersion) answer@dispersion <- answer@misc$dispersion <- (answer@post$Coef)@dispersion as(answer, "summary.qrrvglm") } show.summary.qrrvglm <- function(x, ...) { cat("\nCall:\n") dput(x@call) print(x@post$Coef, ...) # non-elegant programming if (length(x@dispersion) > 1) { cat("\nDispersion parameters:\n") if (length(x@misc$ynames)) { names(x@dispersion) <- x@misc$ynames print(x@dispersion, ...) } else { cat(x@dispersion, fill = TRUE) } cat("\n") } else if (length(x@dispersion) == 1) { cat("\nDispersion parameter: ", x@dispersion, "\n") } } setClass("summary.qrrvglm", contains = "qrrvglm") setMethod("summary", "qrrvglm", function(object, ...) summary.qrrvglm(object, ...)) setMethod("show", "summary.qrrvglm", function(object) show.summary.qrrvglm(object)) setMethod("show", "Coef.rrvglm", function(object) show.Coef.rrvglm(object)) grc <- function(y, Rank = 1, Index.corner = 2:(1+Rank), str0 = 1, summary.arg = FALSE, h.step = 0.0001, ...) { myrrcontrol <- rrvglm.control(Rank = Rank, Index.corner = Index.corner, str0 = str0, ...) 
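# grc() fits a Goodman RC(Rank) association model to a matrix of counts:
# row and column indicator variables are constructed below and passed to
# rrvglm() with the poissonff() family and suitable constraint matrices.
# A minimal usage sketch (hypothetical count matrix 'ymat', at least 3 x 3):
#   grc1 <- grc(ymat, Rank = 1)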
object.save <- y if (is(y, "rrvglm")) { y <- object.save@y } else { y <- as.matrix(y) y <- as(y, "matrix") } if (length(dim(y)) != 2 || nrow(y) < 3 || ncol(y) < 3) stop("y must be a matrix with >= 3 rows & columns, ", "or a rrvglm() object") ei <- function(i, n) diag(n)[, i, drop = FALSE] .grc.df <- data.frame(Row.2 = I.col(2, nrow(y))) yn1 <- if (length(dimnames(y)[[1]])) dimnames(y)[[1]] else paste("X2.", 1:nrow(y), sep = "") warn.save <- options()$warn options(warn = -3) # Suppress the warnings (hopefully, temporarily) if (any(!is.na(as.numeric(substring(yn1, 1, 1))))) yn1 <- paste("X2.", 1:nrow(y), sep = "") options(warn = warn.save) Row. <- factor(1:nrow(y)) modmat.row <- model.matrix( ~ Row.) Col. <- factor(1:ncol(y)) modmat.col <- model.matrix( ~ Col.) cms <- list("(Intercept)" = matrix(1, ncol(y), 1)) for (ii in 2:nrow(y)) { cms[[paste("Row.", ii, sep = "")]] <- matrix(1, ncol(y), 1) .grc.df[[paste("Row.", ii, sep = "")]] <- modmat.row[,ii] } for (ii in 2:ncol(y)) { cms[[paste("Col.", ii, sep = "")]] <- modmat.col[,ii, drop = FALSE] .grc.df[[paste("Col.", ii, sep = "")]] <- rep_len(1, nrow(y)) } for (ii in 2:nrow(y)) { cms[[yn1[ii]]] <- diag(ncol(y)) .grc.df[[yn1[ii]]] <- I.col(ii, nrow(y)) } dimnames(.grc.df) <- list(if (length(dimnames(y)[[1]])) dimnames(y)[[1]] else as.character(1:nrow(y)), dimnames(.grc.df)[[2]]) str1 <- "~ Row.2" if (nrow(y) > 2) for (ii in 3:nrow(y)) str1 <- paste(str1, paste("Row.", ii, sep = ""), sep = " + ") for (ii in 2:ncol(y)) str1 <- paste(str1, paste("Col.", ii, sep = ""), sep = " + ") str2 <- paste("y ", str1) for (ii in 2:nrow(y)) str2 <- paste(str2, yn1[ii], sep = " + ") myrrcontrol$noRRR <- as.formula(str1) # Overwrite this assign(".grc.df", .grc.df, envir = VGAMenv) warn.save <- options()$warn options(warn = -3) # Suppress the warnings (hopefully, temporarily) answer <- if (is(object.save, "rrvglm")) object.save else rrvglm(as.formula(str2), family = poissonff, constraints = cms, control = myrrcontrol, data = .grc.df) options(warn = warn.save) if (summary.arg) { answer <- as(answer, "rrvglm") answer <- summary.rrvglm(answer, h.step = h.step) } else { answer <- as(answer, "grc") } if (exists(".grc.df", envir = VGAMenv)) rm(".grc.df", envir = VGAMenv) answer } summary.grc <- function(object, ...) { grc(object, summary.arg= TRUE, ...) } trplot.qrrvglm <- function(object, which.species = NULL, add = FALSE, show.plot = TRUE, label.sites = FALSE, sitenames = rownames(object@y), axes.equal = TRUE, cex = par()$cex, col = 1:(nos*(nos-1)/2), log = "", lty = rep_len(par()$lty, nos*(nos-1)/2), lwd = rep_len(par()$lwd, nos*(nos-1)/2), tcol = rep_len(par()$col, nos*(nos-1)/2), xlab = NULL, ylab = NULL, main = "", # "Trajectory plot", type = "b", check.ok = TRUE, ...) 
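# Trajectory plot for rank-1 QRR-VGLMs: fitted values for pairs of
# species are plotted against each other, ordered along the latent
# variable. A minimal usage sketch (assuming 'p1' is a rank-1 cqo() fit):
#   trplot(p1, which.species = 1:3, log = "xy", lwd = 2)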
{ coef.obj <- Coef(object) # use defaults for those two arguments if (coef.obj@Rank != 1) stop("object must be a rank-1 model") fv <- fitted(object) modelno <- object@control$modelno # 1, 2, 3, or 0 NOS <- ncol(fv) # Number of species M <- object@misc$M # nn <- nrow(fv) # Number of sites if (length(sitenames)) sitenames <- rep_len(sitenames, nn) sppNames <- dimnames(object@y)[[2]] if (!length(which.species)) { which.species <- sppNames[1:NOS] which.species.numer <- 1:NOS } else if (is.numeric(which.species)) { which.species.numer <- which.species which.species <- sppNames[which.species.numer] # Convert to character } else { which.species.numer <- match(which.species, sppNames) } nos <- length(which.species) # nos = number of species to be plotted if (length(which.species.numer) <= 1) stop("must have at least 2 species to be plotted") cx1i <- object@control$colx1.index if (check.ok) if (!(length(cx1i) == 1 && names(cx1i) == "(Intercept)")) stop("trajectory plots allowable only for noRRR = ~ 1 models") first.spp <- iam(1, 1,M = M,both = TRUE,diag = FALSE)$row.index second.spp <- iam(1, 1,M = M,both = TRUE,diag = FALSE)$col.index myxlab <- if (length(which.species.numer) == 2) { paste("Fitted value for", if (is.character(which.species.numer)) which.species.numer[1] else sppNames[which.species.numer[1]]) } else "Fitted value for 'first' species" myxlab <- if (length(xlab)) xlab else myxlab myylab <- if (length(which.species.numer) == 2) { paste("Fitted value for", if (is.character(which.species.numer)) which.species.numer[2] else sppNames[which.species.numer[2]]) } else "Fitted value for 'second' species" myylab <- if (length(ylab)) ylab else myylab if (!add) { xxx <- if (axes.equal) fv[,which.species.numer] else fv[,which.species.numer[first.spp]] yyy <- if (axes.equal) fv[,which.species.numer] else fv[,which.species.numer[second.spp]] matplot(xxx, yyy, type = "n", log = log, xlab = myxlab, ylab = myylab, main = main, ...) } lwd <- rep_len(lwd, nos*(nos-1)/2) col <- rep_len(col, nos*(nos-1)/2) lty <- rep_len(lty, nos*(nos-1)/2) tcol <- rep_len(tcol, nos*(nos-1)/2) oo <- order(coef.obj@latvar) # Sort by the latent variable ii <- 0 col <- rep_len(col, nos*(nos-1)/2) species.names <- NULL if (show.plot) for (i1 in seq(which.species.numer)) { for (i2 in seq(which.species.numer)) if (i1 < i2) { ii <- ii + 1 species.names <- rbind(species.names, cbind(sppNames[i1], sppNames[i2])) matplot(fv[oo, which.species.numer[i1]], fv[oo, which.species.numer[i2]], type = type, add = TRUE, lty = lty[ii], lwd = lwd[ii], col = col[ii], pch = if (label.sites) " " else "*" ) if (label.sites && length(sitenames)) text(fv[oo, which.species.numer[i1]], fv[oo, which.species.numer[i2]], labels = sitenames[oo], cex = cex, col = tcol[ii]) } } invisible(list(species.names = species.names, sitenames = sitenames[oo])) } if (!isGeneric("trplot")) setGeneric("trplot", function(object, ...) standardGeneric("trplot")) setMethod("trplot", "qrrvglm", function(object, ...) trplot.qrrvglm(object, ...)) setMethod("trplot", "rrvgam", function(object, ...) trplot.qrrvglm(object, ...)) vcovrrvglm <- function(object, ...) { summary.rrvglm(object, ...)@cov.unscaled } vcovqrrvglm <- function(object, I.tolerances = object@control$eq.tolerances, MaxScale = c("predictors", "response"), dispersion = rep_len(if (length(sobj@dispersion)) sobj@dispersion else 1, M), ...) 
{ stop("this function is not yet completed") if (mode(MaxScale) != "character" && mode(MaxScale) != "name") MaxScale <- as.character(substitute(MaxScale)) MaxScale <- match.arg(MaxScale, c("predictors", "response"))[1] if (MaxScale != "predictors") stop("can currently only handle MaxScale='predictors'") sobj <- summary(object) cobj <- Coef(object, I.tolerances = I.tolerances, ...) M <- nrow(cobj@A) dispersion <- rep_len(dispersion, M) if (cobj@Rank != 1) stop("object must be a rank 1 model") dvecMax <- cbind(1, -0.5 * cobj@A / c(cobj@D), (cobj@A / c(2*cobj@D))^2) dvecTol <- cbind(0, 0, 1 / c(-2 * cobj@D)^1.5) dvecOpt <- cbind(0, -0.5 / c(cobj@D), 0.5 * cobj@A / c(cobj@D^2)) if ((length(object@control$colx1.index) != 1) || (names(object@control$colx1.index) != "(Intercept)")) stop("Can only handle noRRR=~1 models") okvals <- c(3*M, 2*M+1) if (all(length(coef(object)) != okvals)) stop("Can only handle intercepts-only model with ", "eq.tolerances = FALSE") answer <- NULL Cov.unscaled <- array(NA_real_, c(3, 3, M), dimnames = list( c("(Intercept)", "latvar", "latvar^2"), c("(Intercept)", "latvar", "latvar^2"), dimnames(cobj@D)[[3]])) for (spp in 1:M) { index <- c(M + ifelse(object@control$eq.tolerances, 1, M) + spp, spp, M + ifelse(object@control$eq.tolerances, 1, spp)) vcov <- Cov.unscaled[,,spp] <- sobj@cov.unscaled[index, index] # Order is A, D, B1 se2Max <- dvecMax[spp,, drop = FALSE] %*% vcov %*% cbind(dvecMax[spp,]) se2Tol <- dvecTol[spp,, drop = FALSE] %*% vcov %*% cbind(dvecTol[spp,]) se2Opt <- dvecOpt[spp,, drop = FALSE] %*% vcov %*% cbind(dvecOpt[spp,]) answer <- rbind(answer, dispersion[spp]^0.5 * c(se2Opt = se2Opt, se2Tol = se2Tol, se2Max = se2Max)) } link.function <- if (MaxScale == "predictors") remove.arg(object@misc$predictors.names[1]) else "" dimnames(answer) <- list(dimnames(cobj@D)[[3]], c("Optimum", "Tolerance", if (nchar(link.function)) paste(link.function, "(Maximum)", sep = "") else "Maximum")) NAthere <- is.na(answer %*% rep_len(1, 3)) answer[NAthere,] <- NA # NA in tolerance means NA everywhere else new(Class = "vcov.qrrvglm", Cov.unscaled = Cov.unscaled, dispersion = dispersion, se = sqrt(answer)) } setMethod("vcov", "rrvglm", function(object, ...) vcovrrvglm(object, ...)) setMethod("vcov", "qrrvglm", function(object, ...) vcovqrrvglm(object, ...)) setClass(Class = "vcov.qrrvglm", representation( Cov.unscaled = "array", # permuted cov.unscaled dispersion = "numeric", se = "matrix")) model.matrix.qrrvglm <- function(object, type = c("latvar", "vlm"), ...) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("latvar", "vlm"))[1] switch(type, latvar = Coef(object, ...)@latvar, vlm = object@x) } setMethod("model.matrix", "qrrvglm", function(object, ...) model.matrix.qrrvglm(object, ...)) perspqrrvglm <- function(x, varI.latvar = FALSE, refResponse = NULL, show.plot = TRUE, xlim = NULL, ylim = NULL, zlim = NULL, # zlim ignored if Rank == 1 gridlength = if (Rank == 1) 301 else c(51, 51), which.species = NULL, xlab = if (Rank == 1) "Latent Variable" else "Latent Variable 1", ylab = if (Rank == 1) "Expected Value" else "Latent Variable 2", zlab = "Expected value", labelSpecies = FALSE, # For Rank == 1 only stretch = 1.05, # quick and dirty, Rank == 1 only main = "", ticktype = "detailed", col = if (Rank == 1) par()$col else "white", llty = par()$lty, llwd = par()$lwd, add1 = FALSE, ...) 
{ oylim <- ylim object <- x # Do not like x as the primary argument coef.obj <- Coef(object, varI.latvar = varI.latvar, refResponse = refResponse) if ((Rank <- coef.obj@Rank) > 2) stop("object must be a rank-1 or rank-2 model") fv <- fitted(object) NOS <- ncol(fv) # Number of species M <- object@misc$M xlim <- rep_len(if (length(xlim)) xlim else range(coef.obj@latvar[, 1]), 2) if (!length(oylim)) { ylim <- if (Rank == 1) c(0, max(fv) * stretch) else rep_len(range(coef.obj@latvar[, 2]), 2) } gridlength <- rep_len(gridlength, Rank) latvar1 <- seq(xlim[1], xlim[2], length = gridlength[1]) if (Rank == 1) { m <- cbind(latvar1) } else { latvar2 <- seq(ylim[1], ylim[2], length = gridlength[2]) m <- expand.grid(latvar1,latvar2) } if (dim(coef.obj@B1)[1] != 1 || dimnames(coef.obj@B1)[[1]] != "(Intercept)") stop("noRRR = ~ 1 is needed") LP <- coef.obj@A %*% t(cbind(m)) # M by n LP <- LP + c(coef.obj@B1) # Assumes \bix_1 = 1 (intercept only) mm <- as.matrix(m) N <- ncol(LP) for (jay in 1:M) { for (ii in 1:N) { LP[jay, ii] <- LP[jay, ii] + mm[ii, , drop = FALSE] %*% coef.obj@D[,,jay] %*% t(mm[ii, , drop = FALSE]) } } LP <- t(LP) # n by M fitvals <- object@family@linkinv(LP, extra = object@extra) # n by NOS dimnames(fitvals) <- list(NULL, dimnames(fv)[[2]]) sppNames <- dimnames(object@y)[[2]] if (!length(which.species)) { which.species <- sppNames[1:NOS] which.species.numer <- 1:NOS } else if (is.numeric(which.species)) { which.species.numer <- which.species which.species <- sppNames[which.species.numer] # Convert to character } else { which.species.numer <- match(which.species, sppNames) } if (Rank == 1) { if (show.plot) { if (!length(oylim)) ylim <- c(0, max(fitvals[,which.species.numer]) * stretch) # A revision col <- rep_len(col, length(which.species.numer)) llty <- rep_len(llty, length(which.species.numer)) llwd <- rep_len(llwd, length(which.species.numer)) if (!add1) matplot(latvar1, fitvals, xlab = xlab, ylab = ylab, type = "n", main = main, xlim = xlim, ylim = ylim, ...) for (jloc in seq_along(which.species.numer)) { ptr2 <- which.species.numer[jloc] # points to species column lines(latvar1, fitvals[, ptr2], col = col[jloc], lty = llty[jloc], lwd = llwd[jloc], ...) if (labelSpecies) { ptr1 <- (1:nrow(fitvals))[max(fitvals[, ptr2]) == fitvals[, ptr2]] ptr1 <- ptr1[1] text(latvar1[ptr1], fitvals[ptr1, ptr2] + (stretch-1) * diff(range(ylim)), label = sppNames[jloc], col = col[jloc], ...) } } } } else { max.fitted <- matrix(fitvals[, which.species[1]], length(latvar1), length(latvar2)) if (length(which.species) > 1) for (jlocal in which.species[-1]) { max.fitted <- pmax(max.fitted, matrix(fitvals[, jlocal], length(latvar1), length(latvar2))) } if (!length(zlim)) zlim <- range(max.fitted, na.rm = TRUE) perspdefault <- getS3method("persp", "default") if (show.plot) perspdefault(latvar1, latvar2, max.fitted, zlim = zlim, xlab = xlab, ylab = ylab, zlab = zlab, ticktype = ticktype, col = col, main = main, ...) } invisible(list(fitted = fitvals, latvar1grid = latvar1, latvar2grid = if (Rank == 2) latvar2 else NULL, max.fitted = if (Rank == 2) max.fitted else NULL)) } if (!isGeneric("persp")) setGeneric("persp", function(x, ...) standardGeneric("persp"), package = "VGAM") setMethod("persp", "qrrvglm", function(x, ...) perspqrrvglm(x = x, ...)) Rank.rrvglm <- function(object, ...) { object@control$Rank } Rank.qrrvglm <- function(object, ...) { object@control$Rank } Rank.rrvgam <- function(object, ...) { object@control$Rank } concoef.qrrvglm <- function(object, varI.latvar = FALSE, refResponse = NULL, ...) 
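# Accessor for the constrained (canonical) coefficients C, obtained via
# Coef(); e.g. concoef(p1) for a fitted cqo() or rrvglm() object.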
{ Coef(object, varI.latvar = varI.latvar, refResponse = refResponse, ...)@C } concoef.Coef.qrrvglm <- function(object, ...) { if (length(list(...))) warning("Too late! Ignoring the extra arguments") object@C } latvar.rrvglm <- function(object, ...) { ans <- lvplot(object, show.plot = FALSE) if (ncol(ans) == 1) dimnames(ans) <- list(dimnames(ans)[[1]], "lv") ans } latvar.qrrvglm <- function(object, varI.latvar = FALSE, refResponse = NULL, ...) { Coef(object, varI.latvar = varI.latvar, refResponse = refResponse, ...)@latvar } latvar.Coef.qrrvglm <- function(object, ...) { if (length(list(...))) warning("Too late! Ignoring the extra arguments") object@latvar } Max.qrrvglm <- function(object, varI.latvar = FALSE, refResponse = NULL, ...) { Coef(object, varI.latvar = varI.latvar, refResponse = refResponse, ...)@Maximum } Max.Coef.qrrvglm <- function(object, ...) { if (length(list(...))) warning("Too late! Ignoring the extra arguments") if (any(slotNames(object) == "Maximum")) object@Maximum else Max(object, ...) } Opt.qrrvglm <- function(object, varI.latvar = FALSE, refResponse = NULL, ...) { Coef(object, varI.latvar = varI.latvar, refResponse = refResponse, ...)@Optimum } Opt.Coef.qrrvglm <- function(object, ...) { if (length(list(...))) warning("Too late! Ignoring the extra arguments") object@Optimum } Tol.qrrvglm <- function(object, varI.latvar = FALSE, refResponse = NULL, ...) { Coef(object, varI.latvar = varI.latvar, refResponse = refResponse, ...)@Tolerance } Tol.Coef.qrrvglm <- function(object, ...) { if (length(list(...))) warning("Too late! Ignoring the extra arguments") if (any(slotNames(object) == "Tolerance")) object@Tolerance else Tol(object, ...) } if (FALSE) { if (!isGeneric("ccoef")) setGeneric("ccoef", function(object, ...) { .Deprecated("concoef") standardGeneric("ccoef") }) setMethod("ccoef", "rrvglm", function(object, ...) concoef.qrrvglm(object, ...)) setMethod("ccoef", "qrrvglm", function(object, ...) concoef.qrrvglm(object, ...)) setMethod("ccoef", "Coef.rrvglm", function(object, ...) concoef.Coef.qrrvglm(object, ...)) setMethod("ccoef", "Coef.qrrvglm", function(object, ...) concoef.Coef.qrrvglm(object, ...)) } if (!isGeneric("concoef")) setGeneric("concoef", function(object, ...) standardGeneric("concoef")) setMethod("concoef", "rrvglm", function(object, ...) concoef.qrrvglm(object, ...)) setMethod("concoef", "qrrvglm", function(object, ...) concoef.qrrvglm(object, ...)) setMethod("concoef", "Coef.rrvglm", function(object, ...) concoef.Coef.qrrvglm(object, ...)) setMethod("concoef", "Coef.qrrvglm", function(object, ...) concoef.Coef.qrrvglm(object, ...)) setMethod("coef", "qrrvglm", function(object, ...) Coef.qrrvglm(object, ...)) setMethod("coefficients", "qrrvglm", function(object, ...) Coef.qrrvglm(object, ...)) if (!isGeneric("lv")) setGeneric("lv", function(object, ...) { .Deprecated("latvar") standardGeneric("lv") }) setMethod("lv", "rrvglm", function(object, ...) { latvar.rrvglm(object, ...) }) setMethod("lv", "qrrvglm", function(object, ...) { latvar.qrrvglm(object, ...) }) setMethod("lv", "Coef.rrvglm", function(object, ...) { latvar.Coef.qrrvglm(object, ...) }) setMethod("lv", "Coef.qrrvglm", function(object, ...) { latvar.Coef.qrrvglm(object, ...) }) if (!isGeneric("latvar")) setGeneric("latvar", function(object, ...) standardGeneric("latvar")) setMethod("latvar", "rrvglm", function(object, ...) latvar.rrvglm(object, ...)) setMethod("latvar", "qrrvglm", function(object, ...) latvar.qrrvglm(object, ...)) setMethod("latvar", "Coef.rrvglm", function(object, ...) 
latvar.Coef.qrrvglm(object, ...)) setMethod("latvar", "Coef.qrrvglm", function(object, ...) latvar.Coef.qrrvglm(object, ...)) if (!isGeneric("Max")) setGeneric("Max", function(object, ...) standardGeneric("Max")) setMethod("Max", "qrrvglm", function(object, ...) Max.qrrvglm(object, ...)) setMethod("Max", "Coef.qrrvglm", function(object, ...) Max.Coef.qrrvglm(object, ...)) setMethod("Max", "rrvgam", function(object, ...) Coef(object, ...)@Maximum) if (!isGeneric("Opt")) setGeneric("Opt", function(object, ...) standardGeneric("Opt")) setMethod("Opt", "qrrvglm", function(object, ...) Opt.qrrvglm(object, ...)) setMethod("Opt", "Coef.qrrvglm", function(object, ...) Opt.Coef.qrrvglm(object, ...)) setMethod("Opt", "rrvgam", function(object, ...) Coef(object, ...)@Optimum) if (!isGeneric("Tol")) setGeneric("Tol", function(object, ...) standardGeneric("Tol")) setMethod("Tol", "qrrvglm", function(object, ...) Tol.qrrvglm(object, ...)) setMethod("Tol", "Coef.qrrvglm", function(object, ...) Tol.Coef.qrrvglm(object, ...)) cgo <- function(...) { stop("The function 'cgo' has been renamed 'cqo'. ", "Ouch! Sorry!") } clo <- function(...) { stop("Constrained linear ordination is fitted with ", "the function 'rrvglm'") } is.bell.vlm <- is.bell.rrvglm <- function(object, ...) { M <- object@misc$M ynames <- object@misc$ynames ans <- rep_len(FALSE, M) if (length(ynames)) names(ans) <- ynames ans } is.bell.uqo <- is.bell.qrrvglm <- function(object, ...) { is.finite(Max(object, ...)) } is.bell.rrvgam <- function(object, ...) { NA * Max(object, ...) } if (!isGeneric("is.bell")) setGeneric("is.bell", function(object, ...) standardGeneric("is.bell")) setMethod("is.bell","qrrvglm", function(object,...) is.bell.qrrvglm(object,...)) setMethod("is.bell","rrvglm", function(object, ...) is.bell.rrvglm(object, ...)) setMethod("is.bell","vlm", function(object, ...) is.bell.vlm(object, ...)) setMethod("is.bell","rrvgam", function(object, ...) is.bell.rrvgam(object, ...)) setMethod("is.bell","Coef.qrrvglm", function(object,...) is.bell.qrrvglm(object,...)) if (!isGeneric("Rank")) setGeneric("Rank", function(object, ...) standardGeneric("Rank")) setMethod("Rank", "rrvglm", function(object, ...) Rank.rrvglm(object, ...)) setMethod("Rank", "qrrvglm", function(object, ...) Rank.qrrvglm(object, ...)) setMethod("Rank", "rrvgam", function(object, ...) Rank.rrvgam(object, ...)) VGAM/R/family.nonlinear.R0000644000176200001440000005245713135276757014632 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. vnonlinear.control <- function(save.weights = TRUE, ...) 
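# Shared control function for the nonlinear-regression families in this
# file; save.weights = TRUE retains the working weights on the fitted
# object. For instance, the Michaelis-Menten family micmen() below takes
# its regressor through vglm()'s 'form2' argument, without an intercept.
# A minimal usage sketch (hypothetical data frame 'mydat' with substrate
# concentration u and rate y):
#   fit <- vglm(y ~ 1, micmen, data = mydat, form2 = ~ u - 1)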
{ list(save.weights = as.logical(save.weights)[1]) } subset.lohi <- function(xvec, yvec, probs.x = c(0.15, 0.85), type = c("median", "wtmean", "unwtmean"), wtvec = rep_len(1, length(xvec))) { if (!is.Numeric(probs.x, length.arg = 2)) stop("argument 'probs.x' must be numeric and of length two") min.q <- quantile(xvec, probs = probs.x[1] ) max.q <- quantile(xvec, probs = probs.x[2] ) if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("median", "wtmean", "unwtmean"))[1] if (type == "median") { y1bar <- median(yvec[xvec < min.q]) x1bar <- median(xvec[xvec < min.q]) y2bar <- median(yvec[xvec > max.q]) x2bar <- median(xvec[xvec > max.q]) } if (type == "wtmean") { y1bar <- weighted.mean(yvec[xvec < min.q], w = wtvec[xvec < min.q]) x1bar <- weighted.mean(xvec[xvec < min.q], w = wtvec[xvec < min.q]) y2bar <- weighted.mean(yvec[xvec > max.q], w = wtvec[xvec > max.q]) x2bar <- weighted.mean(xvec[xvec > max.q], w = wtvec[xvec > max.q]) } if (type == "unwtmean") { y1bar <- mean(yvec[xvec < min.q]) x1bar <- mean(xvec[xvec < min.q]) y2bar <- mean(yvec[xvec > max.q]) x2bar <- mean(xvec[xvec > max.q]) } if (x1bar >= x2bar) stop("cannot find two distinct x values; try decreasing the first ", "value of argument 'probs.x' and increasing the second value") list(x1bar = x1bar, y1bar = y1bar, x2bar = x2bar, y2bar = y2bar, slopeUp = (y2bar > y1bar)) } micmen.control <- function(save.weights = TRUE, ...) { list(save.weights = save.weights) } micmen <- function(rpar = 0.001, divisor = 10, init1 = NULL, init2 = NULL, imethod = 1, oim = TRUE, link1 = "identitylink", link2 = "identitylink", firstDeriv = c("nsimEIM", "rpar"), probs.x = c(0.15, 0.85), nsimEIM = 500, dispersion = 0, zero = NULL) { firstDeriv <- match.arg(firstDeriv, c("nsimEIM", "rpar"))[1] if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE)) stop("argument 'imethod' must be integer") if (!is.Numeric(probs.x, length.arg = 2)) stop("argument 'probs.x' must be numeric and of length two") if (!is.logical(oim) || length(oim) != 1) stop("argument 'oim' must be single logical") stopifnot(nsimEIM > 10, length(nsimEIM) == 1, nsimEIM == round(nsimEIM)) if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("'imethod' must be 1 or 2 or 3") estimated.dispersion <- (dispersion == 0) link1 <- as.list(substitute(link1)) earg1 <- link2list(link1) link1 <- attr(earg1, "function.name") link2 <- as.list(substitute(link2)) earg2 <- link2list(link2) link2 <- attr(earg2, "function.name") new("vglmff", blurb = c("Michaelis-Menton regression model\n", "Y_i = theta1 * u_i / (theta2 + u_i) + e_i\n\n", "Links: ", namesof("theta1", link1, earg = earg1), ", ", namesof("theta2", link2, earg = earg2), "\n", "Variance: constant"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero))), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { M <- if (is.matrix(y)) ncol(y) else 1 if (residuals) { if (M > 1) NULL else (y - mu) * sqrt(w) } else { ResSS.vgam(y - mu, w, M = M) } }, infos = eval(substitute(function(...) 
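# The 'infos' slot advertises family metadata (M1, Q1, parameter names,
# links and the 'zero' argument) to VGAM's generic machinery.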
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("theta1", "theta2"), link1 = .link1 , link2 = .link2 , zero = .zero ) }, list( .zero = zero, .link1 = link1, .link2 = link2 ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w if (!length(Xm2)) stop("regressor not found") if (NCOL(Xm2) != 1) stop("regressor not found or is not a vector. Use the ", "'form2' argument without an intercept") Xm2 <- as.vector(Xm2) # Make sure extra$Xm2 <- Xm2 # Needed for @linkinv predictors.names <- c(namesof("theta1", .link1 , earg = .earg1, tag = FALSE), namesof("theta2", .link2 , earg = .earg2, tag = FALSE)) if (length(mustart) || length(coefstart)) stop("cannot handle 'mustart' or 'coefstart'") if (!length(etastart)) { if ( .imethod == 3 ) { index0 <- (1:n)[Xm2 <= quantile(Xm2, prob = .probs.x[2] )] init1 <- median(y[index0]) init2 <- median(init1 * Xm2 / y - Xm2) } if ( .imethod == 1 || .imethod == 2) { mysubset <- subset.lohi(Xm2, y, probs.x = .probs.x, type = ifelse( .imethod == 1, "median", "wtmean"), wtvec = w) mat.x <- with(mysubset, cbind(c(x1bar, x2bar), -c(y1bar, y2bar))) theta.temp <- with(mysubset, solve(mat.x, c(x1bar * y1bar, x2bar * y2bar))) init1 <- theta.temp[1] init2 <- theta.temp[2] } if (length( .init1 )) init1 <- .init1 if (length( .init2 )) init2 <- .init2 etastart <- cbind( rep_len(theta2eta(init1, .link1 , earg = .earg1 ), n), rep_len(theta2eta(init2, .link2 , earg = .earg2 ), n)) } else { stop("cannot handle 'etastart' or 'mustart'") } }), list( .init1 = init1, .link1 = link1, .earg1 = earg1, .init2 = init2, .link2 = link2, .earg2 = earg2, .imethod = imethod, .probs.x = probs.x ))), linkinv = eval(substitute(function(eta, extra = NULL) { theta1 <- eta2theta(eta[, 1], .link1 , earg = .earg1 ) theta2 <- eta2theta(eta[, 2], .link2 , earg = .earg2 ) theta1 * extra$Xm2 / (theta2 + extra$Xm2) }, list( .link1 = link1, .earg1 = earg1, .link2 = link2, .earg2 = earg2))), last = eval(substitute(expression({ misc$link <- c(theta1 = .link1 , theta2 = .link2 ) misc$earg <- list(theta1 = .earg1 , theta2 = .earg2 ) misc$rpar <- rpar fit$df.residual <- n - rank # Not nrow.X.vlm - rank fit$df.total <- n # Not nrow.X.vlm extra$Xm2 <- NULL # Regressor is in control$regressor dpar <- .dispersion if (!dpar) { dpar <- sum(c(w) * (y - mu)^2) / (n - ncol.X.vlm) } misc$dispersion <- dpar misc$default.dispersion <- 0 misc$estimated.dispersion <- .estimated.dispersion misc$imethod <- .imethod misc$nsimEIM <- .nsimEIM misc$firstDeriv <- .firstDeriv misc$oim <- .oim misc$rpar <- rpar misc$orig.rpar <- .rpar misc$multipleResponses <- FALSE }), list( .link1 = link1, .earg1 = earg1, .link2 = link2, .earg2 = earg2, .dispersion = dispersion, .imethod = imethod, .firstDeriv = firstDeriv, .oim = oim, .rpar = rpar, .nsimEIM = nsimEIM, .estimated.dispersion = estimated.dispersion ))), summary.dispersion = FALSE, vfamily = c("micmen", "vnonlinear"), validparams = eval(substitute(function(eta, y, extra = NULL) { theta1 <- eta2theta(eta[, 1], .link1 , earg = .earg1 ) theta2 <- eta2theta(eta[, 2], .link2 , earg = .earg2 ) okay1 <- all(is.finite(theta1)) && all(is.finite(theta2)) okay1 }, list( .link1 = link1, .earg1 = earg1, .link2 = link2, .earg2 = earg2, .firstDeriv = firstDeriv, .rpar = rpar, .divisor = divisor ))), deriv = eval(substitute(expression({ theta1 <- eta2theta(eta[, 1], .link1 , earg = .earg1 ) theta2 <- eta2theta(eta[, 2], .link2 , earg = .earg2 ) dthetas.detas <- cbind(dtheta.deta(theta1, .link1 , 
earg = .earg1 ), dtheta.deta(theta2, .link2 , earg = .earg2 )) rpar <- if ( .firstDeriv == "rpar") { if (iter > 1) { max(rpar / .divisor, 1000 * .Machine$double.eps) } else { d3 <- deriv3(~ theta1 * Xm2 / (theta2 + Xm2), c("theta1", "theta2"), hessian = FALSE) .rpar } } else { .rpar } dmus.dthetas <- if (FALSE) { attr(eval(d3), "gradient") } else { dmu.dtheta1 <- Xm2 / (theta2 + Xm2) dmu.dtheta2 <- -theta1 * Xm2 / (Xm2 + theta2)^2 cbind(dmu.dtheta1, dmu.dtheta2) } myderiv <- if ( .firstDeriv == "rpar") { if (TRUE) { index <- iam(NA, NA, M = M, both = TRUE) temp200809 <- dmus.dthetas * dthetas.detas if (M > 1) temp200809[, 2:M] <- temp200809[, 2:M] + sqrt(rpar) c(w) * (y - mu) * temp200809 } else { c(w) * (y - mu) * cbind(dmus.dthetas[, 1] * dthetas.detas[, 1], dmus.dthetas[, 2] * dthetas.detas[, 2] + sqrt(rpar)) } } else { temp20101111 <- dmus.dthetas * dthetas.detas c(w) * (y - mu) * temp20101111 } myderiv }), list( .link1 = link1, .earg1 = earg1, .link2 = link2, .earg2 = earg2, .firstDeriv = firstDeriv, .rpar = rpar, .divisor = divisor ))), weight = eval(substitute(expression({ if ( .oim ) { wz <- matrix(0, n, dimm(M)) wz[, iam(1, 1, M)] <- Xm2 wz[, iam(1, 2, M)] <- y - 2 * mu wz[, iam(2, 2, M)] <- theta1 * (3 * mu - 2 * y) / (theta2 + Xm2) wz <- wz * Xm2 / (theta2 + Xm2)^2 } if ( .firstDeriv == "rpar") { if (FALSE) { wz <- dmus.dthetas[, index$row] * dmus.dthetas[, index$col] * dthetas.detas[, index$row] * dthetas.detas[, index$col] if (M > 1) wz[, 2:M] <- wz[, 2:M] + rpar } else { wz <- cbind(( dmus.dthetas[, 1] * dthetas.detas[, 1])^2, ( dmus.dthetas[, 2] * dthetas.detas[, 2])^2 + rpar, dmus.dthetas[, 1] * dmus.dthetas[, 2] * dthetas.detas[, 1] * dthetas.detas[, 2]) } } else { run.varcov <- 0 index0 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) mysigma <- 1 for (ii in 1:( .nsimEIM )) { ysim <- theta1 * Xm2 / (theta2 + Xm2) + rnorm(n, sd = mysigma) temp3 <- (ysim - mu) * dmus.dthetas * dthetas.detas run.varcov <- run.varcov + temp3[, index0$row.index] * temp3[, index0$col.index] } run.varcov <- run.varcov / .nsimEIM wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov } c(w) * wz }), list( .link1 = link1, .link2 = link2, .firstDeriv = firstDeriv, .nsimEIM = nsimEIM, .oim = oim )))) } skira.control <- function(save.weights = TRUE, ...) 
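# Hedged usage sketches for micmen() above and skira() below: for both
# families the regressor u_i is supplied through the 'form2' argument,
# not the main formula ('mydat' is hypothetical):
#   mfit <- vglm(y ~ 1, micmen, data = mydat, trace = TRUE, form2 = ~ u - 1)
#   coef(mfit, matrix = TRUE)        # estimates of theta1 and theta2
#   sfit <- vglm(y ~ 1, skira, data = mydat, form2 = ~ u - 1)
#   # skira() fits y = 1 / (theta1 + theta2 * u) + e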
{ list(save.weights = save.weights) } skira <- function(rpar = 0.1, divisor = 10, init1 = NULL, init2 = NULL, link1 = "identitylink", link2 = "identitylink", earg1 = list(), earg2 = list(), imethod = 1, oim = TRUE, probs.x = c(0.15, 0.85), smallno = 1.0e-3, nsimEIM = 500, firstDeriv = c("nsimEIM", "rpar"), dispersion = 0, zero = NULL) { firstDeriv <- match.arg(firstDeriv, c("nsimEIM", "rpar"))[1] if (!is.Numeric(probs.x, length.arg = 2)) stop("argument 'probs.x' must be numeric and of length two") estimated.dispersion <- dispersion == 0 if (mode(link1) != "character" && mode(link1) != "name") link1 <- as.character(substitute(link1)) if (mode(link2) != "character" && mode(link2) != "name") link2 <- as.character(substitute(link2)) if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE)) stop("argument 'imethod' must be integer") if (imethod > 5) stop("argument 'imethod' must be 1, 2, 3, 4 or 5") if (!is.list(earg1)) earg1 = list() if (!is.list(earg2)) earg2 = list() stopifnot(nsimEIM > 10, length(nsimEIM) == 1, nsimEIM == round(nsimEIM)) new("vglmff", blurb = c("Shinozaki-Kira regression model\n", "Y_i = 1 / (theta1 + theta2 * u_i) + e_i\n\n", "Links: ", namesof("theta1", link1, earg = earg1), ", ", namesof("theta2", link2, earg = earg2)), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { M <- if (is.matrix(y)) ncol(y) else 1 if (residuals) { if (M > 1) NULL else (y - mu) * sqrt(w) } else { ResSS.vgam(y - mu, w, M = M) } }, infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("theta1", "theta2"), link1 = .link1 , link2 = .link2 , zero = .zero ) }, list( .zero = zero, .link1 = link1, .link2 = link2 ))), initialize = eval(substitute(expression({ warning("20101105; need to fix a bug in the signs of initial vals") temp5 <- w.y.check(w = w, y = y, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y if (!length(Xm2)) stop("regressor not found") if (NCOL(Xm2) != 1) stop("regressor not found or is not a vector. 
", "Use the 'form2' argument without an intercept") Xm2 <- as.vector(Xm2) extra$Xm2 <- Xm2 predictors.names <- c(namesof("theta1", .link1 , earg = .earg1, tag = FALSE), namesof("theta2", .link2 , earg = .earg2, tag = FALSE)) if (length(mustart) || length(coefstart)) stop("cannot handle 'mustart' or 'coefstart'") if (!length(etastart)) { min.q <- quantile(Xm2, probs = .probs.x[1] ) max.q <- quantile(Xm2, probs = .probs.x[2] ) if ( .imethod == 3 || .imethod == 2 ) { mysubset <- subset.lohi(Xm2, y, probs.x = .probs.x, type = ifelse( .imethod == 2, "median", "wtmean"), wtvec = w) mat.x <- with(mysubset, cbind(c(1, 1), c(x1bar, x2bar)) * c(y1bar, y2bar)) theta.temp <- solve(mat.x, c(1, 1)) init1 <- theta.temp[1] init2 <- theta.temp[2] } else if ( .imethod == 1 ) { yy <- as.vector( y[(Xm2 > min.q) & (Xm2 < max.q)]) xx <- as.vector(Xm2[(Xm2 > min.q) & (Xm2 < max.q)]) ww <- as.vector( w[(Xm2 > min.q) & (Xm2 < max.q)]) yy[ abs(yy) < .smallno ] <- .smallno * sign(yy[ abs(yy) < .smallno ]) wt.temp <- (yy^4) * ww wt.temp.max <- median(wt.temp) * 100 wt.temp[wt.temp > wt.temp.max] <- wt.temp.max mylm.wfit <- lm.wfit(x = cbind(1, xx), y = c(1 / yy), w = c(wt.temp)) init1 <- mylm.wfit$coef[1] init2 <- mylm.wfit$coef[2] } else if (( .imethod == 4) || ( .imethod == 5)) { tempfit <- if ( .imethod == 4 ) { fitted(loess(y ~ Xm2)) } else { fitted(smooth.spline(Xm2, y, w = w, df = 2.0)) } mysubset <- subset.lohi(Xm2, y, probs.x = .probs.x, type = "wtmean", wtvec = w) mat.x <- with(mysubset, cbind(c(1, 1), c(x1bar, x2bar)) * c(y1bar, y2bar)) theta.temp <- solve(mat.x, c(1, 1)) init1 <- theta.temp[1] init2 <- theta.temp[2] } else { stop("argument 'imethod' unmatched") } mu <- 1 / (init1 + init2 * Xm2) matplot(Xm2, cbind(y, mu), col = c("blue", "green"), main = "Initial values in green") if ( .imethod == 1 ) { points(Xm2, 1 / (init1 + init2 * Xm2), col = "green") } else { with(mysubset, points(c(x1bar, x2bar), c(y1bar, y2bar), col = "red", pch = "+", cex = 2)) } if (length( .init1 )) init1 <- .init1 if (length( .init2 )) init2 <- .init2 etastart <- cbind( rep_len(theta2eta(init1, .link1 , earg = .earg1 ), n), rep_len(theta2eta(init2, .link2 , earg = .earg2 ), n)) } else { stop("cannot handle 'etastart' or 'mustart'") } }), list( .init1 = init1, .link1 = link1, .earg1 = earg1, .init2 = init2, .link2 = link2, .earg2 = earg2, .smallno = smallno, .probs.x = probs.x, .nsimEIM = nsimEIM, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { theta1 <- eta2theta(eta[, 1], .link1 , earg = .earg1 ) theta2 <- eta2theta(eta[, 2], .link2 , earg = .earg2 ) 1 / (theta1 + theta2 * extra$Xm2) }, list( .link1 = link1, .earg1 = earg1, .link2 = link2, .earg2 = earg2 ))), last = eval(substitute(expression({ misc$link <- c(theta1 = .link1 , theta2 = .link2 ) misc$earg <- list(theta1 = .earg1 , theta2 = .earg2 ) misc$rpar <- rpar misc$orig.rpar <- .rpar fit$df.residual <- n - rank fit$df.total <- n dpar <- .dispersion if (!dpar) { dpar <- sum(c(w) * (y - mu)^2) / (n - ncol.X.vlm) } misc$dispersion <- dpar misc$default.dispersion <- 0 misc$estimated.dispersion <- .estimated.dispersion misc$imethod <- .imethod misc$nsimEIM <- .nsimEIM misc$firstDeriv <- .firstDeriv misc$oim <- .oim misc$multipleResponses <- FALSE }), list( .link1 = link1, .earg1 = earg1, .link2 = link2, .earg2 = earg2, .dispersion = dispersion, .rpar = rpar, .imethod = imethod, .nsimEIM = nsimEIM, .firstDeriv = firstDeriv, .oim = oim, .estimated.dispersion = estimated.dispersion ))), summary.dispersion = FALSE, vfamily = c("skira", "vnonlinear"), 
validparams = eval(substitute(function(eta, y, extra = NULL) { theta1 <- eta2theta(eta[, 1], .link1 , earg = .earg1 ) theta2 <- eta2theta(eta[, 2], .link2 , earg = .earg2 ) okay1 <- all(is.finite(theta1)) && all(is.finite(theta2)) okay1 }, list( .link1 = link1, .earg1 = earg1, .link2 = link2, .earg2 = earg2, .firstDeriv = firstDeriv, .rpar = rpar, .divisor = divisor ))), deriv = eval(substitute(expression({ rpar <- if ( .firstDeriv == "rpar") { if (iter > 1) { max(rpar / .divisor, 1000 * .Machine$double.eps) } else { d3 <- deriv3( ~ 1 / (theta1 + theta2 * Xm2), c("theta1", "theta2"), hessian = FALSE) .rpar } } else { .rpar } theta1 <- eta2theta(eta[, 1], .link1 , earg = .earg1 ) theta2 <- eta2theta(eta[, 2], .link2 , earg = .earg2 ) dthetas.detas <- cbind(dtheta.deta(theta1, .link1 , earg = .earg1 ), dtheta.deta(theta2, .link2 , earg = .earg2 )) dmus.dthetas <- if (FALSE) { attr(eval(d3), "gradient") } else { dmu.dtheta1 <- -1 / (theta1 + theta2 * Xm2)^2 dmu.dtheta2 <- -Xm2 / (theta1 + theta2 * Xm2)^2 cbind(dmu.dtheta1, dmu.dtheta2) } myderiv <- if ( .firstDeriv == "nsimEIM") { c(w) * (y - mu) * dmus.dthetas * dthetas.detas } else { c(w) * (y - mu) * cbind(dmus.dthetas[, 1] * dthetas.detas[, 1], dmus.dthetas[, 2] * dthetas.detas[, 2] + sqrt(rpar)) } myderiv }), list( .link1 = link1, .earg1 = earg1, .link2 = link2, .earg2 = earg2, .firstDeriv = firstDeriv, .rpar = rpar, .divisor = divisor ))), weight = eval(substitute(expression({ if ( .firstDeriv == "rpar") { if (FALSE) { index5 <- iam(NA, NA, M = M, both = TRUE) wz <- dmus.dthetas[, index5$row] * dmus.dthetas[, index5$col] * dthetas.detas[, index5$row] * dthetas.detas[, index5$col] if (M > 1) wz[, -(1:M)] <- wz[, -(1:M)] / 100 } else { wz <- cbind((dmus.dthetas[, 1] * dthetas.detas[, 1])^2, (dmus.dthetas[, 2] * dthetas.detas[, 2])^2 + rpar, dmus.dthetas[, 1] * dmus.dthetas[, 2] * dthetas.detas[, 1] * dthetas.detas[, 2]) } } else { run.varcov <- 0 index0 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) mysigma <- sqrt( median( (y - mu)^2 ) ) / 100 mysigma <- 1 for (ii in 1:( .nsimEIM )) { ysim <- 1 / (theta1 + theta2 * Xm2) + rnorm(n, sd = mysigma) temp3 <- (ysim - mu) * dmus.dthetas * dthetas.detas run.varcov <- run.varcov + temp3[, index0$row.index] * temp3[, index0$col.index] } run.varcov <- run.varcov / .nsimEIM wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov } c(w) * wz }), list( .link1 = link1, .link2 = link2, .firstDeriv = firstDeriv, .nsimEIM = nsimEIM, .oim = oim )))) } VGAM/R/family.normal.R0000644000176200001440000032567013135276757014135 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
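# VGAM.weights.function() recycles the prior weights 'w' to an n x M matrix
# of working weights: a single column is replicated across the M diagonal
# elements, exactly M columns are taken as diagonal weights, and up to
# M*(M+1)/2 columns are accepted in matrix-band format.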
VGAM.weights.function <- function(w, M, n) { ncolw <- NCOL(w) if (ncolw == 1) { wz <- matrix(w, nrow = n, ncol = M) # w_i * diag(M) } else if (ncolw == M) { wz <- as.matrix(w) } else if (ncolw < M && M > 1) { stop("ambiguous input for 'weights'") } else if (ncolw > M*(M+1)/2) { stop("too many columns") } else { wz <- as.matrix(w) } wz } gaussianff <- function(dispersion = 0, parallel = FALSE, zero = NULL) { if (!is.Numeric(dispersion, length.arg = 1) || dispersion < 0) stop("bad input for argument 'dispersion'") estimated.dispersion <- dispersion == 0 new("vglmff", blurb = c("Vector linear/additive model\n", "Links: identitylink for Y1,...,YM"), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .parallel = parallel, .zero = zero ))), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL) { M <- if (is.matrix(y)) ncol(y) else 1 n <- if (is.matrix(y)) nrow(y) else length(y) wz <- VGAM.weights.function(w = w, M = M, n = n) if (residuals) { if (M > 1) { U <- vchol(wz, M = M, n = n) temp <- mux22(U, y-mu, M = M, upper = TRUE, as.matrix = TRUE) dimnames(temp) <- dimnames(y) temp } else (y-mu) * sqrt(wz) } else { ResSS.vgam(y-mu, wz = wz, M = M) } }, infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, expected = TRUE, multipleResponses = TRUE, quasi.type = TRUE, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ if (is.R()) assign("CQO.FastAlgorithm", TRUE, envir = VGAM::VGAMenv) else CQO.FastAlgorithm <<- TRUE if (any(function.name == c("cqo", "cao")) && (length( .zero ) || (is.logical( .parallel ) && .parallel ))) stop("cannot handle non-default arguments for cqo() and cao()") temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y M <- if (is.matrix(y)) ncol(y) else 1 dy <- dimnames(y) predictors.names <- if (!is.null(dy[[2]])) dy[[2]] else paste("Y", 1:M, sep = "") if (!length(etastart)) etastart <- 0 * y }), list( .parallel = parallel, .zero = zero ))), linkinv = function(eta, extra = NULL) eta, last = eval(substitute(expression({ dy <- dimnames(y) if (!is.null(dy[[2]])) dimnames(fit$fitted.values) <- dy dpar <- .dispersion if (!dpar) { wz <- VGAM.weights.function(w = w, M = M, n = n) temp5 <- ResSS.vgam(y-mu, wz = wz, M = M) dpar <- temp5 / (length(y) - (if (is.numeric(ncol(X.vlm.save))) ncol(X.vlm.save) else 0)) } misc$dispersion <- dpar misc$default.dispersion <- 0 misc$estimated.dispersion <- .estimated.dispersion misc$link <- rep_len("identitylink", M) names(misc$link) <- predictors.names misc$earg <- vector("list", M) for (ilocal in 1:M) misc$earg[[ilocal]] <- list() names(misc$link) <- predictors.names if (is.R()) { if (exists("CQO.FastAlgorithm", envir = VGAM::VGAMenv)) rm("CQO.FastAlgorithm", envir = VGAM::VGAMenv) } else { while (exists("CQO.FastAlgorithm")) remove("CQO.FastAlgorithm") } misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .dispersion = dispersion, .estimated.dispersion = estimated.dispersion ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { M <- if (is.matrix(y)) ncol(y) else 1 n <- if (is.matrix(y)) nrow(y) else length(y) wz <- VGAM.weights.function(w = w, M = M, n = n) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { temp1 <- 
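# For diagonal working weights, the Gaussian log-likelihood assembled below is
#   -0.5 * ResSS + 0.5 * sum(log(w_i)) - n * (M/2) * log(2*pi),
# with a log-determinant term replacing sum(log(w_i)) in the banded case.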
ResSS.vgam(y-mu, wz = wz, M = M) ll.elts <- if (M == 1 || ncol(wz) == M) { -0.5 * temp1 + 0.5 * (log(wz)) - n * (M / 2) * log(2*pi) } else { if (all(wz[1, ] == apply(wz, 2, min)) && all(wz[1, ] == apply(wz, 2, max))) { onewz <- m2a(wz[1, , drop = FALSE], M = M) onewz <- onewz[,, 1] # M x M logdet <- determinant(onewz)$modulus logretval <- -0.5 * temp1 + 0.5 * n * logdet - n * (M / 2) * log(2*pi) distval <- stop("variable 'distval' not computed yet") logretval <- -(ncol(onewz) * log(2 * pi) + logdet + distval)/2 logretval } else { logretval <- -0.5 * temp1 - n * (M / 2) * log(2*pi) for (ii in 1:n) { onewz <- m2a(wz[ii, , drop = FALSE], M = M) onewz <- onewz[,, 1] # M x M logdet <- determinant(onewz)$modulus logretval <- logretval + 0.5 * logdet } logretval } } if (summation) { sum(ll.elts) } else { ll.elts } } }, linkfun = function(mu, extra = NULL) mu, vfamily = "gaussianff", validparams = eval(substitute(function(eta, y, extra = NULL) { okay1 <- all(is.finite(eta)) okay1 }, list( .zero = zero ))), deriv = expression({ wz <- VGAM.weights.function(w = w, M = M, n = n) mux22(cc = t(wz), xmat = y-mu, M = M, as.matrix = TRUE) }), weight = expression({ wz })) } dposnorm <- function(x, mean = 0, sd = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(mean), length(sd)) if (length(x) != L) x <- rep_len(x, L) if (length(mean) != L) mean <- rep_len(mean, L) if (length(sd) != L) sd <- rep_len(sd, L) if (log.arg) { ifelse(x < 0, log(0), dnorm(x, mean = mean, sd = sd, log = TRUE) - pnorm(mean / sd, log.p = TRUE)) } else { ifelse(x < 0, 0, dnorm(x = x, mean = mean, sd = sd) / pnorm(mean/sd)) } } pposnorm <- function(q, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") ans <- (pnorm(q, mean = mean, sd = sd) - pnorm(0, mean = mean, sd = sd)) / pnorm(mean / sd) ans[q <= 0] <- 0 if (lower.tail) { if (log.p) log(ans) else ans } else { if (log.p) log1p(-ans) else 1-ans } } qposnorm <- function(p, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(log.arg <- log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") rm(log.p) # 20150102 KaiH if (lower.tail) { if (log.arg) p <- exp(p) } else { p <- if (log.arg) -expm1(p) else 1 - p } qnorm(p = p + (1 - p) * pnorm(0, mean = mean, sd = sd), mean = mean, sd = sd) } rposnorm <- function(n, mean = 0, sd = 1) { qnorm(p = runif(n, min = pnorm(0, mean = mean, sd = sd)), mean = mean, sd = sd) } if (FALSE) posnormal.control <- function(save.weights = TRUE, ...) 
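# Quick sanity checks for the [dpqr]posnorm() functions above (a sketch):
#   integrate(dposnorm, 0, Inf, mean = 1, sd = 2)$value          # close to 1
#   qposnorm(pposnorm(1.5, mean = 1, sd = 2), mean = 1, sd = 2)  # returns 1.5
#   min(rposnorm(1e4, mean = 1, sd = 2)) > 0                     # TRUE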
{ list(save.weights = save.weights) } posnormal <- function(lmean = "identitylink", lsd = "loge", eq.mean = FALSE, eq.sd = FALSE, gmean = exp((-5:5)/2), gsd = exp((-1:5)/2), imean = NULL, isd = NULL, probs.y = 0.10, imethod = 1, nsimEIM = NULL, zero = "sd") { if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") lmean <- as.list(substitute(lmean)) emean <- link2list(lmean) lmean <- attr(emean, "function.name") lsd <- as.list(substitute(lsd)) esd <- link2list(lsd) lsd <- attr(esd, "function.name") if (!is.logical(eq.mean) || length(eq.mean) != 1) stop("bad input for argument 'eq.mean'") if (!is.logical(eq.sd ) || length(eq.sd ) != 1) stop("bad input for argument 'eq.sd'") if (length(isd) && !is.Numeric(isd, positive = TRUE)) stop("bad input for argument 'isd'") if (length(nsimEIM)) if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE) || nsimEIM <= 10) stop("argument 'nsimEIM' should be an integer greater than 10") new("vglmff", blurb = c("Positive (univariate) normal distribution\n\n", "Links: ", namesof("mean", lmean, earg = emean, tag = TRUE), "; ", namesof("sd", lsd, earg = esd, tag = TRUE)), constraints = eval(substitute(expression({ constraints.orig <- constraints M1 <- 2 NOS <- M / M1 cm1.m <- cmk.m <- kronecker(diag(NOS), rbind(1, 0)) con.m <- cm.VGAM(kronecker(matrix(1, NOS, 1), rbind(1, 0)), x = x, bool = .eq.mean , # constraints = constraints.orig, apply.int = TRUE, cm.default = cmk.m, cm.intercept.default = cm1.m) cm1.s <- cmk.s <- kronecker(diag(NOS), rbind(0, 1)) con.s <- cm.VGAM(kronecker(matrix(1, NOS, 1), rbind(0, 1)), x = x, bool = .eq.sd , # constraints = constraints.orig, apply.int = TRUE, cm.default = cmk.s, cm.intercept.default = cm1.s) con.use <- con.m for (klocal in seq_along(con.m)) { con.use[[klocal]] <- interleave.cmat(con.m[[klocal]], con.s[[klocal]]) } constraints <- con.use constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero, .eq.sd = eq.sd, .eq.mean = eq.mean ))), infos = eval(substitute(function(...) 
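# A hedged usage sketch for posnormal() (simulated positive-normal data):
#   pdata <- data.frame(y = rposnorm(1000, mean = 1, sd = 2))
#   pfit <- vglm(y ~ 1, posnormal, data = pdata, trace = TRUE)
#   Coef(pfit)  # back-transformed estimates of 'mean' and 'sd'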
{ list(M1 = 2, Q1 = 1, eq.mean = .eq.mean , eq.sd = .eq.sd , multipleResponses = TRUE, parameters.names = c("mean", "sd"), zero = .zero ) }, list( .zero = zero, .eq.mean = eq.mean, .eq.sd = eq.sd ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y NOS <- ncol(y) M <- NOS * M1 mean.names <- param.names("mean", NOS) sdev.names <- param.names("sd", NOS) predictors.names <- c(namesof(mean.names , .lmean , earg = .emean , tag = FALSE), namesof(sdev.names , .lsd , earg = .esd , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { init.me <- matrix( if (length( .imean )) .imean else NA_real_, n, NOS, byrow = TRUE) init.sd <- matrix( if (length( .isd )) .isd else NA_real_, n, NOS, byrow = TRUE) mean.grid.orig <- .gmean sdev.grid.orig <- .gsd for (jay in 1:NOS) { yvec <- y[, jay] wvec <- w[, jay] if (anyNA(init.me[, jay])) { init.me[, jay] <- if ( .imethod == 1) { weighted.mean(yvec, wvec) } else if ( .imethod == 2) { quantile(yvec, probs = .probs.y ) } else if ( .imethod == 3) { median(yvec) } } if (anyNA(init.sd[, jay])) init.sd[, jay] <- sd(yvec) ll.posnormal <- function(sdev.val, y, x, w, extraargs) { ans <- sum(c(w) * dposnorm(x = y, mean = extraargs$Mean, sd = sdev.val, log = TRUE)) ans } sdev.grid <- sdev.grid.orig * init.sd[1, jay] mean.grid <- mean.grid.orig * init.me[1, jay] mean.grid <- sort(c(-mean.grid, mean.grid)) allmat1 <- expand.grid(Mean = mean.grid) allmat2 <- matrix(NA_real_, nrow(allmat1), 2) for (iloc in 1:nrow(allmat1)) { allmat2[iloc, ] <- grid.search(sdev.grid, objfun = ll.posnormal, y = yvec, x = x, w = wvec, ret.objfun = TRUE, # 2nd value is the loglik extraargs = list(Mean = allmat1[iloc, "Mean"])) } ind5 <- which.max(allmat2[, 2]) # 2nd value is the loglik if (!length( .imean )) init.me[, jay] <- allmat1[ind5, "Mean"] if (!length( .isd )) init.sd[, jay] <- allmat2[ind5, 1] } # jay etastart <- cbind(theta2eta(init.me, .lmean , earg = .emean ), theta2eta(init.sd, .lsd , earg = .esd )) etastart <- etastart[, interleave.VGAM(M, M1 = M1)] } }), list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd, .gmean = gmean, .gsd = gsd, .imean = imean, .isd = isd, .imethod = imethod, .probs.y = probs.y ))), linkinv = eval(substitute(function(eta, extra = NULL) { mymu <- eta2theta(eta[, c(TRUE, FALSE)], .lmean , earg = .emean ) mysd <- eta2theta(eta[, c(FALSE, TRUE)], .lsd , earg = .esd ) mymu + mysd * dnorm(-mymu/mysd) / pnorm(mymu/mysd) }, list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd ))), last = eval(substitute(expression({ misc$link <- c(rep_len( .lmean , NOS), rep_len( .lsd , NOS))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mean.names, sdev.names) names(misc$link) <- temp.names[interleave.VGAM(M, M1 = M1)] misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:NOS) { misc$earg[[M1*ii-1]] <- .emean misc$earg[[M1*ii ]] <- .esd } misc$expected <- TRUE misc$multipleResponses <- TRUE misc$nsimEIM <- .nsimEIM }), list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd, .nsimEIM = nsimEIM ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { mymu <- eta2theta(eta[, c(TRUE, FALSE)], .lmean , earg = .emean ) mysd <- eta2theta(eta[, c(FALSE, TRUE)], .lsd , earg = .esd ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * 
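# dposnorm(log = TRUE) below evaluates the positive-normal log-density
#   log dnorm(y; mu, sd) - log pnorm(mu/sd),  for y > 0,
# whose mean, mu + sd * dnorm(mu/sd) / pnorm(mu/sd), is what @linkinv returns.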
dposnorm(x = y, m = mymu, sd = mysd, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd ))), vfamily = c("posnormal"), validparams = eval(substitute(function(eta, y, extra = NULL) { mymu <- eta2theta(eta[, c(TRUE, FALSE)], .lmean , earg = .emean ) mysd <- eta2theta(eta[, c(FALSE, TRUE)], .lsd , earg = .esd ) okay1 <- all(is.finite(mymu)) && all(is.finite(mysd)) && all(0 < mysd) okay1 }, list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) mymu <- eta2theta(eta[, c(TRUE, FALSE)], .lmean , earg = .emean ) mysd <- eta2theta(eta[, c(FALSE, TRUE)], .lsd , earg = .esd ) rposnorm(nsim * length(mymu), mean = mymu, sd = mysd) }, list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd ))), deriv = eval(substitute(expression({ mymu <- eta2theta(eta[, c(TRUE, FALSE)], .lmean , earg = .emean ) mysd <- eta2theta(eta[, c(FALSE, TRUE)], .lsd , earg = .esd ) zedd <- (y-mymu) / mysd temp0 <- mymu / mysd imratio <- dnorm(temp0) / pnorm(temp0) dl.dmu <- (zedd - imratio) / mysd dl.dsd <- (temp0 * imratio + zedd^2 - 1) / mysd dmu.deta <- dtheta.deta(mymu, .lmean , earg = .emean ) dsd.deta <- dtheta.deta(mysd, .lsd , earg = .esd ) dthetas.detas <- cbind(dmu.deta, dsd.deta) myderiv <- c(w) * dthetas.detas * cbind(dl.dmu, dl.dsd) myderiv <- myderiv[, interleave.VGAM(M, M1 = M1)] myderiv }), list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) if (length( .nsimEIM )) { NOS <- M / M1 dThetas.detas <- dthetas.detas[, interleave.VGAM(M, M1 = M1)] wz <- matrix(0.0, n, M + M - 1) # wz is 'tridiagonal' ind1 <- iam(NA, NA, M = M1, both = TRUE, diag = TRUE) for (spp. in 1:NOS) { run.varcov <- 0 Mymu <- mymu[, spp.] Mysd <- mysd[, spp.] for (ii in 1:( .nsimEIM )) { ysim <- rposnorm(n, m = Mymu, sd = Mysd) zedd <- (ysim-Mymu) / Mysd dl.dmu <- (zedd - imratio) / Mysd dl.dsd <- (temp0 * imratio + zedd^2 - 1) / Mysd temp7 <- cbind(dl.dmu, dl.dsd) run.varcov <- run.varcov + temp7[, ind1$row.index] * temp7[, ind1$col.index] } run.varcov <- cbind(run.varcov / .nsimEIM ) wz1 <- if (intercept.only) matrix(colMeans(run.varcov), nrow = n, ncol = ncol(run.varcov), byrow = TRUE) else run.varcov wz1 <- wz1 * dThetas.detas[, M1 * (spp. - 1) + ind1$row] * dThetas.detas[, M1 * (spp. - 1) + ind1$col] for (jay in 1:M1) for (kay in jay:M1) { cptr <- iam((spp. - 1) * M1 + jay, (spp. - 1) * M1 + kay, M = M) wz[, cptr] <- wz1[, iam(jay, kay, M = M1)] } } # End of for (spp.) 
loop wz <- w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = M / M1) } else { ned2l.dmu2 <- (1 - imratio * (temp0 + imratio)) / mysd^2 ned2l.dmusd <- imratio * (1 + temp0 * (temp0 + imratio)) / mysd^2 ned2l.dsd2 <- (2 - imratio * (temp0 * (1 + temp0 * (temp0 + imratio)))) / mysd^2 wz <- array(c(c(w) * ned2l.dmu2 * dmu.deta^2, c(w) * ned2l.dsd2 * dsd.deta^2, c(w) * ned2l.dmusd * dmu.deta * dsd.deta), dim = c(n, M/M1, M1*(M1+1)/2)) wz <- arwz2wz(wz, M = M, M1 = M1) } wz }), list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd, .nsimEIM = nsimEIM )))) } dbetanorm <- function(x, shape1, shape2, mean = 0, sd = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) logden <- dnorm(x = x, mean = mean, sd = sd, log = TRUE) + (shape1-1) * pnorm(q = x, mean = mean, sd = sd, log.p = TRUE) + (shape2-1) * pnorm(q = x, mean = mean, sd = sd, log.p = TRUE, lower.tail = FALSE) - lbeta(shape1, shape2) logden[is.infinite(x)] <- log(0) # 20141210 KaiH if (log.arg) logden else exp(logden) } pbetanorm <- function(q, shape1, shape2, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE) { pbeta(q = pnorm(q = q, mean = mean, sd = sd), shape1 = shape1, shape2 = shape2, lower.tail = lower.tail, log.p = log.p) } qbetanorm <- function(p, shape1, shape2, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE) { qnorm(p = qbeta(p = p, shape1 = shape1, shape2 = shape2, lower.tail = lower.tail, log.p = log.p), mean = mean, sd = sd) } rbetanorm <- function(n, shape1, shape2, mean = 0, sd = 1) { qnorm(p = qbeta(p = runif(n), shape1 = shape1, shape2 = shape2), mean = mean, sd = sd) } dtikuv <- function(x, d, mean = 0, sigma = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(d, length.arg = 1) || max(d) >= 2) stop("bad input for argument 'd'") L <- max(length(x), length(mean), length(sigma)) if (length(x) != L) x <- rep_len(x, L) if (length(mean) != L) mean <- rep_len(mean, L) if (length(sigma) != L) sigma <- rep_len(sigma, L) hh <- 2 - d KK <- 1 / (1 + 1/hh + 0.75/hh^2) logden <- dnorm(x = x, mean = mean, sd = sigma, log = TRUE) + log(KK) + 2 * log1p(((x-mean)/sigma)^2 / (2*hh)) logden[is.infinite(x)] <- log(0) # 20141209 KaiH if (log.arg) logden else exp(logden) } ptikuv <- function(q, d, mean = 0, sigma = 1, lower.tail = TRUE, log.p = FALSE) { if (!is.Numeric(d, length.arg = 1) || max(d) >= 2) stop("bad input for argument 'd'") if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.arg <- log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") rm(log.p) # 20141231 KaiH L <- max(length(q), length(mean), length(sigma)) if (length(q) != L) q <- rep_len(q, L) if (length(mean) != L) mean <- rep_len(mean, L) if (length(sigma) != L) sigma <- rep_len(sigma, L) zedd1 <- 0.5 * ((q - mean) / sigma)^2 ans <- q*0 + 0.5 hh <- 2 - d KK <- 1 / (1 + 1/hh + 0.75/hh^2) if (any(lhs <- q < mean)) { ans[lhs] <- ( KK/(2*sqrt(pi))) * ( gamma(0.5) * (1 - pgamma(zedd1[lhs], 0.5)) + 2 * gamma(1.5) * (1 - pgamma(zedd1[lhs], 1.5)) / hh + gamma(2.5) * (1 - pgamma(zedd1[lhs], 2.5)) / hh^2) } if (any(rhs <- q > mean)) { ans[rhs] <- 1.0 - Recall(q = (2*mean[rhs] - q[rhs]), d = d, mean = mean[rhs], sigma = sigma[rhs]) } if (lower.tail) { if (log.arg) log(ans) else ans } else { if (log.arg) log1p(-ans) else 1 - ans } } qtikuv <- function(p, d, mean = 0, sigma = 1, lower.tail = TRUE, log.p = FALSE, ...) 
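# Sanity-check sketch: the beta-normal above is the normal quantile
# transform of a Beta(shape1, shape2) variate, so
#   pbetanorm(qbetanorm(0.3, 2, 3), 2, 3)      # returns 0.3
# and dtikuv() is a proper density whenever d < 2, e.g.
#   integrate(dtikuv, -Inf, Inf, d = 1)$value  # close to 1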
{ if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (!is.Numeric(d, length.arg = 1) || max(d) >= 2) stop("bad input for argument 'd'") orig.p <- p if (lower.tail) { if (log.p) p <- exp(p) } else { p <- if (log.p) -expm1(p) else 1 - p } L <- max(length(p), length(mean), length(sigma)) if (length(p) != L) p <- rep_len(p, L) if (length(mean) != L) mean <- rep_len(mean, L) if (length(sigma) != L) sigma <- rep_len(sigma, L) ans <- rep_len(0.0, L) myfun <- function(x, d, mean = 0, sigma = 1, p) ptikuv(q = x, d = d, mean = mean, sigma = sigma) - p for (ii in 1:L) { Lower <- ifelse(p[ii] <= 0.5, mean[ii] - 3 * sigma[ii], mean[ii]) while (ptikuv(q = Lower, d = d, mean = mean[ii], sigma = sigma[ii]) > p[ii]) Lower <- Lower - sigma[ii] Upper <- ifelse(p[ii] >= 0.5, mean[ii] + 3 * sigma[ii], mean[ii]) while (ptikuv(q = Upper, d = d, mean = mean[ii], sigma = sigma[ii]) < p[ii]) Upper <- Upper + sigma[ii] ans[ii] <- uniroot(f = myfun, lower = Lower, upper = Upper, d = d, p = p[ii], mean = mean[ii], sigma = sigma[ii], ...)$root } if (log.p) { ans[orig.p > 0] <- NaN } else { ans[orig.p < 0] <- NaN ans[orig.p > 1] <- NaN } ans } rtikuv <- function(n, d, mean = 0, sigma = 1, Smallno = 1.0e-6) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (!is.Numeric(d, length.arg = 1) || max(d) >= 2) stop("bad input for argument 'd'") if (!is.Numeric(mean, length.arg = 1)) stop("bad input for argument 'mean'") if (!is.Numeric(sigma, length.arg = 1)) stop("bad input for argument 'sigma'") if (!is.Numeric(Smallno, positive = TRUE, length.arg = 1) || Smallno > 0.01 || Smallno < 2 * .Machine$double.eps) stop("bad input for argument 'Smallno'") ans <- rep_len(0.0, use.n) ptr1 <- 1; ptr2 <- 0 hh <- 2 - d KK <- 1 / (1 + 1/hh + 0.75/hh^2) ymax <- ifelse(hh < 2, dtikuv(x = mean + sigma*sqrt(4 - 2*hh), d = d, mean = mean, sigma = sigma), KK / (sqrt(2 * pi) * sigma)) while (ptr2 < use.n) { Lower <- mean - 5 * sigma while (ptikuv(q = Lower, d = d, mean = mean, sigma = sigma) > Smallno) Lower <- Lower - sigma Upper <- mean + 5 * sigma while (ptikuv(q = Upper, d = d, mean = mean, sigma = sigma) < 1-Smallno) Upper <- Upper + sigma x <- runif(2*use.n, min = Lower, max = Upper) index <- runif(2*use.n, max = ymax) < dtikuv(x, d = d, mean = mean, sigma = sigma) sindex <- sum(index) if (sindex) { ptr2 <- min(use.n, ptr1 + sindex - 1) ans[ptr1:ptr2] <- (x[index])[1:(1+ptr2-ptr1)] ptr1 <- ptr2 + 1 } } ans } tikuv <- function(d, lmean = "identitylink", lsigma = "loge", isigma = NULL, zero = "sigma") { lmean <- as.list(substitute(lmean)) emean <- link2list(lmean) lmean <- attr(emean, "function.name") lsigma <- as.list(substitute(lsigma)) esigma <- link2list(lsigma) lsigma <- attr(esigma, "function.name") if (!is.Numeric(d, length.arg = 1) || max(d) >= 2) stop("bad input for argument 'd'") new("vglmff", blurb = c("Short-tailed symmetric [Tiku and Vaughan (1999)] ", "distribution\n", "Link: ", namesof("mean", lmean, earg = emean), ", ", namesof("sigma", lsigma, earg = esigma), "\n", "\n", "Mean: mean"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
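# A hedged usage sketch for tikuv() (simulated short-tailed data, d < 2):
#   tdata <- data.frame(y = rtikuv(1000, d = 1, mean = 1, sigma = 2))
#   tfit <- vglm(y ~ 1, tikuv(d = 1), data = tdata, trace = TRUE)
#   Coef(tfit)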
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("mean", "sigma"), zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y) predictors.names <- c(namesof("mean", .lmean , earg = .emean , tag = FALSE), namesof("sigma", .lsigma , earg = .esigma , tag = FALSE)) if (!length(etastart)) { sigma.init <- if (length( .isigma )) rep_len( .isigma , n) else { hh <- 2 - .d KK <- 1 / (1 + 1/hh + 0.75/hh^2) K2 <- 1 + 3/hh + 15/(4*hh^2) rep_len(sqrt(var(y) / (KK*K2)), n) } mean.init <- rep_len(weighted.mean(y, w), n) etastart <- cbind(theta2eta(mean.init, .lmean , earg = .emean ), theta2eta(sigma.init, .lsigma , earg = .esigma )) } }),list( .lmean = lmean, .lsigma = lsigma, .isigma = isigma, .d = d, .emean = emean, .esigma = esigma ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .lmean , earg = .emean ) }, list( .lmean = lmean, .emean = emean, .esigma = esigma ))), last = eval(substitute(expression({ misc$link <- c("mean" = .lmean , "sigma"= .lsigma ) misc$earg <- list("mean" = .emean , "sigma"= .esigma ) misc$expected <- TRUE misc$d <- .d }), list( .lmean = lmean, .lsigma = lsigma, .d = d, .emean = emean, .esigma = esigma ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { mymu <- eta2theta(eta[, 1], .lmean , earg = .emean ) sigma <- eta2theta(eta[, 2], .lsigma , earg = .esigma ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dtikuv(x = y, d = .d , mean = mymu, sigma = sigma, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lmean = lmean, .lsigma = lsigma, .d = d, .emean = emean, .esigma = esigma ))), vfamily = c("tikuv"), validparams = eval(substitute(function(eta, y, extra = NULL) { mymu <- eta2theta(eta[, 1], .lmean , earg = .emean ) sigma <- eta2theta(eta[, 2], .lsigma , earg = .esigma ) dee <- .d okay1 <- all(is.finite(mymu )) && all(is.finite(sigma)) && all(0 < sigma) && all(is.finite(dee )) && all(0 < dee & dee < 2) okay1 }, list( .lmean = lmean, .lsigma = lsigma, .d = d, .emean = emean, .esigma = esigma ))), deriv = eval(substitute(expression({ mymu <- eta2theta(eta[, 1], .lmean , earg = .emean ) sigma <- eta2theta(eta[, 2], .lsigma , earg = .esigma ) dmu.deta <- dtheta.deta(mymu, .lmean , earg = .emean ) dsigma.deta <- dtheta.deta(sigma, .lsigma, earg = .esigma) zedd <- (y - mymu) / sigma hh <- 2 - .d gzedd <- zedd / (1 + 0.5*zedd^2 / hh) dl.dmu <- zedd / sigma - 2 * gzedd / (hh*sigma) dl.dsigma <- (zedd^2 - 1 - 2 * zedd * gzedd / hh) / sigma c(w) * cbind(dl.dmu * dmu.deta, dl.dsigma * dsigma.deta) }), list( .lmean = lmean, .lsigma = lsigma, .d = d, .emean = emean, .esigma = esigma ))), weight = eval(substitute(expression({ ayy <- 1 / (2*hh) Dnos <- 1 - (2/hh) * (1 - ayy) / (1 + 2*ayy + 3*ayy^2) Dstar <- -1 + 3 * (1 + 2*ayy + 11*ayy^2) / (1 + 2*ayy + 3*ayy^2) ned2l.dmymu2 <- Dnos / sigma^2 ned2l.dnu2 <- Dstar / sigma^2 wz <- matrix(NA_real_, n, M) # diagonal matrix wz[, iam(1, 1, M)] <- ned2l.dmymu2 * dmu.deta^2 wz[, iam(2, 2, M)] <- ned2l.dnu2 * dsigma.deta^2 c(w) * wz }), list( .lmean = lmean, .lsigma = lsigma, .emean = emean, .esigma = esigma )))) } dfoldnorm <- function(x, mean = 0, sd = 1, a1 = 1, a2 = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) ans <- dnorm(x = x/(a1*sd) - mean/sd) / (a1*sd) + dnorm(x = x/(a2*sd) + mean/sd) / (a2*sd) ans[x < 0] <- 0 ans[a1 <= 0 | 
a2 <= 0] <- NA ans[sd <= 0] <- NA if (log.arg) log(ans) else ans } pfoldnorm <- function(q, mean = 0, sd = 1, a1 = 1, a2 = 1, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ans <- log(pnorm(q = q/(a1*sd) - mean/sd) - pnorm(q = -q/(a2*sd) - mean/sd)) ans[q <= 0 ] <- -Inf ans[q == Inf] <- 0 } else { ans <- pnorm(q = q/(a1*sd) - mean/sd) - pnorm(q = -q/(a2*sd) - mean/sd) ans[q <= 0] <- 0 ans[q == Inf] <- 1 } } else { if (log.p) { ans <- log(pnorm(q = q/(a1*sd) - mean/sd, lower.tail = FALSE) + pnorm(q = -q/(a2*sd) - mean/sd)) ans[q <= 0] <- 0 ans[q == Inf] <- -Inf } else { ans <- pnorm(q = q/(a1*sd) - mean/sd, lower.tail = FALSE) + pnorm(q = -q/(a2*sd) - mean/sd) ans[q <= 0] <- 1 ans[q == Inf] <- 0 } } ans[a1 <= 0 | a2 <= 0] <- NaN ans[sd <= 0] <- NaN ans } qfoldnorm <- function(p, mean = 0, sd = 1, a1 = 1, a2 = 1, lower.tail = TRUE, log.p = FALSE, ...) { if (!is.logical(log.arg <- log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") rm(log.p) if (lower.tail) { if (log.arg) p <- exp(p) } else { p <- if (log.arg) -expm1(p) else 1 - p } L <- max(length(p), length(mean), length(sd), length(a1), length(a2)) if (length(p) != L) p <- rep_len(p, L) if (length(mean) != L) mean <- rep_len(mean, L) if (length(sd) != L) sd <- rep_len(sd, L) if (length(a1) != L) a1 <- rep_len(a1, L) if (length(a2) != L) a2 <- rep_len(a2, L) ans <- rep_len(0.0 , L) myfun <- function(x, mean = 0, sd = 1, a1 = 1, a2 = 2, p) pfoldnorm(q = x, mean = mean, sd = sd, a1 = a1, a2 = a2) - p for (ii in 1:L) { mytheta <- mean[ii] / sd[ii] EY <- sd[ii] * ((a1[ii] + a2[ii]) * (mytheta * pnorm(mytheta) + dnorm(mytheta)) - a2[ii] * mytheta) Upper <- 2 * EY while (pfoldnorm(q = Upper, mean = mean[ii], sd = sd[ii], a1 = a1[ii], a2 = a2[ii]) < p[ii]) Upper <- Upper + sd[ii] ans[ii] <- uniroot(f = myfun, lower = 0, upper = Upper, mean = mean[ii], sd = sd[ii], a1 = a1[ii], a2 = a2[ii], p = p[ii], ...)$root } ans[a1 <= 0 | a2 <= 0] <- NaN ans[sd <= 0] <- NaN ans } rfoldnorm <- function(n, mean = 0, sd = 1, a1 = 1, a2 = 1) { X <- rnorm(n, mean = mean, sd = sd) ans <- pmax(a1 * X, -a2*X) ans[a1 <= 0 | a2 <= 0] <- NA ans[sd <= 0] <- NA ans } foldnormal <- function(lmean = "identitylink", lsd = "loge", imean = NULL, isd = NULL, a1 = 1, a2 = 1, nsimEIM = 500, imethod = 1, zero = NULL) { if (!is.Numeric(a1, positive = TRUE, length.arg = 1) || !is.Numeric(a2, positive = TRUE, length.arg = 1)) stop("bad input for arguments 'a1' and 'a2'") if (any(a1 <= 0 | a2 <= 0)) stop("arguments 'a1' and 'a2' must each be a positive value") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") lmean <- as.list(substitute(lmean)) emean <- link2list(lmean) lmean <- attr(emean, "function.name") lsd <- as.list(substitute(lsd)) esd <- link2list(lsd) lsd <- attr(esd, "function.name") if (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE) || nsimEIM <= 10) stop("argument 'nsimEIM' should be an integer greater than 10") if (length(imean) && !is.Numeric(imean)) stop("bad input for 'imean'") if (length(isd) && !is.Numeric(isd, positive = TRUE)) stop("bad input for 'isd'") new("vglmff", blurb = c("(Generalized) folded univariate normal distribution\n\n", "Link: ", namesof("mean", lmean, earg = emean, tag = TRUE), "; ", namesof("sd", lsd, earg = esd, tag = TRUE)), 
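# (The generalized folded normal is Y = max(a1*X, -a2*X), X ~ N(mean, sd^2);
#  a1 = a2 = 1 gives the usual folded normal |X|.  A hedged usage sketch:
#    fdata <- data.frame(y = rfoldnorm(1000, mean = 1, sd = 2))
#    ffit <- vglm(y ~ 1, foldnormal, data = fdata, trace = TRUE)
#    Coef(ffit)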
infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, a1 = .a1 , a2 = .a2 , multiple.responses = FALSE, parameters.names = c("mean", "sd"), zero = .zero , nsimEIM = .nsimEIM ) }, list( .zero = zero, .a1 = a1, .a2 = a2, .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- c(namesof("mean", .lmean , earg = .emean, tag = FALSE), namesof("sd", .lsd , earg = .esd, tag = FALSE)) if (!length(etastart)) { junk <- lm.wfit(x = x, y = c(y), w = c(w)) if (FALSE) { if ((NCOL(w) != 1) || any(w != round(w))) stop("'weights' must be a vector or a one-column matrix ", "with integer values") m1d <- meany <- weighted.mean(y, w) m2d <- weighted.mean(y^2, w) stddev <- sqrt( sum(c(w) * junk$resid^2) / junk$df.residual ) Ahat <- m1d^2 / m2d thetahat <- sqrt(max(1/Ahat -1, 0.1)) mean.init <- rep_len(if (length( .imean)) .imean else thetahat * sqrt((stddev^2 + meany^2) * Ahat), n) sd.init <- rep_len(if (length( .isd)) .isd else sqrt((stddev^2 + meany^2) * Ahat), n) } stddev <- sqrt( sum(c(w) * junk$resid^2) / junk$df.residual ) meany <- weighted.mean(y, w) mean.init <- rep_len(if (length( .imean )) .imean else {if ( .imethod == 1) median(y) else meany}, n) sd.init <- rep_len(if (length( .isd )) .isd else {if ( .imethod == 1) stddev else 1.2*sd(c(y))}, n) etastart <- cbind(theta2eta(mean.init, .lmean , earg = .emean ), theta2eta(sd.init, .lsd , earg = .esd )) } }), list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd, .imean = imean, .isd = isd, .a1 = a1, .a2 = a2, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { mymu <- eta2theta(eta[, 1], .lmean , earg = .emean ) mysd <- eta2theta(eta[, 2], .lsd , earg = .esd ) mytheta <- mymu / mysd mysd * (( .a1 + .a2 ) * (mytheta * pnorm(mytheta) + dnorm(mytheta)) - .a2 * mytheta) }, list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd, .a1 = a1, .a2 = a2 ))), last = eval(substitute(expression({ misc$link <- c("mu" = .lmean , "sd" = .lsd ) misc$earg <- list("mu" = .emean , "sd" = .esd ) misc$multipleResponses <- FALSE misc$expected <- TRUE misc$nsimEIM <- .nsimEIM misc$simEIM <- TRUE misc$imethod <- .imethod misc$a1 <- .a1 misc$a2 <- .a2 }), list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd, .imethod = imethod, .nsimEIM = nsimEIM, .a1 = a1, .a2 = a2 ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { mymu <- eta2theta(eta[, 1], .lmean , earg = .emean ) mysd <- eta2theta(eta[, 2], .lsd , earg = .esd ) a1vec <- .a1 a2vec <- .a2 if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dfoldnorm(y, mean = mymu, sd = mysd, a1 = a1vec, a2 = a2vec, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd, .a1 = a1, .a2 = a2 ))), vfamily = c("foldnormal"), validparams = eval(substitute(function(eta, y, extra = NULL) { mymu <- eta2theta(eta[, 1], .lmean , earg = .emean ) mysd <- eta2theta(eta[, 2], .lsd , earg = .esd ) okay1 <- all(is.finite(mymu)) && all(is.finite(mysd)) && all(0 < mysd) && all(is.finite( .a1 )) && all(0 < .a1 ) && all(is.finite( .a2 )) && all(0 < .a2 ) okay1 }, list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd, .a1 = a1, .a2 = a2 ))), deriv = eval(substitute(expression({ M1 <- 2 mymu <- eta2theta(eta[, 1], .lmean , earg = .emean ) mysd <- 
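# foldnormal uses deriv3() for exact score vectors below, and the @weight
# slot approximates the expected information by simulation (nsimEIM draws
# from rfoldnorm), since closed-form EIM elements are awkward here.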
eta2theta(eta[, 2], .lsd , earg = .esd ) dmu.deta <- dtheta.deta(mymu, .lmean , earg = .emean ) dsd.deta <- dtheta.deta(mysd, .lsd , earg = .esd ) a1vec <- .a1 a2vec <- .a2 d3 <- deriv3(~ log((exp(-0.5*(y/(a1vec*mysd) - mymu/mysd)^2)/a1vec + exp(-0.5*(y/(a2vec*mysd) + mymu/mysd)^2)/a2vec)/(mysd*sqrt(2*pi))), name = c("mymu", "mysd"), hessian = FALSE) eval.d3 <- eval(d3) dl.dthetas <- attr(eval.d3, "gradient") # == cbind(dl.dmu, dl.dsd) DTHETA.detas <- cbind(dmu.deta, dsd.deta) c(w) * DTHETA.detas * dl.dthetas }), list( .lmean = lmean, .lsd = lsd, .emean = emean, .esd = esd, .a1 = a1, .a2 = a2 ))), weight = eval(substitute(expression({ de3 <- deriv3(~ log((exp(-0.5*(ysim/(a1vec*mysd) - mymu/mysd)^2)/a1vec + exp(-0.5*(ysim/(a2vec*mysd) + mymu/mysd)^2)/a2vec)/(mysd*sqrt(2*pi))), name = c("mymu", "mysd"), hessian = TRUE) run.mean <- 0 for (ii in 1:( .nsimEIM )) { ysim <- abs(rnorm(n, m = mymu, sd = mysd)) ysim <- rfoldnorm(n = n, mean = mymu, sd = mysd, a1 = a1vec, a2 = a2vec) eval.de3 <- eval(de3) d2l.dthetas2 <- attr(eval.de3, "hessian") rm(ysim) temp3 <- matrix(0, n, dimm(M)) for (ss in 1:M) for (tt in ss:M) temp3[, iam(ss,tt, M)] <- -d2l.dthetas2[, ss,tt] run.mean <- ((ii-1) * run.mean + temp3) / ii } wz <- if (intercept.only) matrix(colMeans(run.mean), n, dimm(M), byrow = TRUE) else run.mean index0 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) wz <- wz * DTHETA.detas[, index0$row] * DTHETA.detas[, index0$col] }), list( .nsimEIM = nsimEIM, .a1 = a1, .a2 = a2 )))) } lqnorm.control <- function(trace = TRUE, ...) { list(trace = trace) } lqnorm <- function(qpower = 2, link = "identitylink", imethod = 1, imu = NULL, ishrinkage = 0.95) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.Numeric(qpower, length.arg = 1) || qpower <= 1) stop("bad input for argument 'qpower'") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (!is.Numeric(ishrinkage, length.arg = 1) || ishrinkage < 0 || ishrinkage > 1) stop("bad input for argument 'ishrinkage'") new("vglmff", blurb = c("Minimizing the q-norm of residuals\n", "Links: ", namesof("Y1", link, earg = earg, tag = TRUE)), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y M <- if (is.matrix(y)) ncol(y) else 1 dy <- dimnames(y) predictors.names <- if (!is.null(dy[[2]])) dy[[2]] else paste("mu", 1:M, sep = "") predictors.names <- namesof(predictors.names, link = .link, earg = .earg, short = TRUE) if (!length(etastart)) { meany <- weighted.mean(y, w) mean.init <- rep_len(if (length( .i.mu )) .i.mu else { if ( .imethod == 2) median(y) else if ( .imethod == 1) meany else .ishrinkage * meany + (1 - .ishrinkage ) * y }, n) etastart <- theta2eta(mean.init, link = .link, earg = .earg) } }), list( .imethod = imethod, .i.mu = imu, .ishrinkage = ishrinkage, .link = link, .earg = earg ))), linkinv = eval(substitute(function(eta, extra = NULL) { mu <- eta2theta(eta, link = .link , earg = .earg ) mu }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ dy <- dimnames(y) if (!is.null(dy[[2]])) dimnames(fit$fitted.values) = dy misc$link <- rep_len( .link , M) names(misc$link) <- predictors.names misc$earg <- list(mu = .earg) misc$qpower <- .qpower misc$imethod <- .imethod misc$objectiveFunction <- sum( c(w) * (abs(y - mu))^(.qpower) ) }), list( .qpower = qpower, .link = link, .earg = earg, 
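# lqnorm() minimizes the q-norm objective sum(w * |y - mu|^qpower) for
# qpower > 1; qpower = 2 reproduces ordinary (weighted) least squares.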
.imethod = imethod ))), linkfun = eval(substitute(function(mu, extra = NULL) { theta2eta(mu, link = .link, earg = .earg) }, list( .link = link, .earg = earg ))), vfamily = "lqnorm", validparams = eval(substitute(function(eta, y, extra = NULL) { okay1 <- all(is.finite(eta)) okay1 }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ dmu.deta <- dtheta.deta(theta=mu, link = .link, earg = .earg ) myresid <- y - mu signresid <- sign(myresid) temp2 <- (abs(myresid))^(.qpower-1) .qpower * c(w) * temp2 * signresid * dmu.deta }), list( .qpower = qpower, .link = link, .earg = earg ))), weight = eval(substitute(expression({ temp3 <- (abs(myresid))^(.qpower-2) wz <- .qpower * (.qpower - 1) * c(w) * temp3 * dmu.deta^2 wz }), list( .qpower = qpower, .link = link, .earg = earg )))) } dtobit <- function(x, mean = 0, sd = 1, Lower = 0, Upper = Inf, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(mean), length(sd), length(Lower), length(Upper)) if (length(x) != L) x <- rep_len(x, L) if (length(mean) != L) mean <- rep_len(mean, L) if (length(sd) != L) sd <- rep_len(sd, L) if (length(Lower) != L) Lower <- rep_len(Lower, L) if (length(Upper) != L) Upper <- rep_len(Upper, L) if (!all(Lower < Upper, na.rm = TRUE)) stop("all(Lower < Upper) is not TRUE") ans <- dnorm(x = x, mean = mean, sd = sd, log = log.arg) ans[x < Lower] <- if (log.arg) log(0.0) else 0.0 ans[x > Upper] <- if (log.arg) log(0.0) else 0.0 ind3 <- x == Lower ans[ind3] <- pnorm(q = Lower[ind3], mean = mean[ind3], sd = sd[ind3], log.p = log.arg) ind4 <- x == Upper ans[ind4] <- pnorm(q = Upper[ind4], mean = mean[ind4], sd = sd[ind4], lower.tail = FALSE, log.p = log.arg) ans } ptobit <- function(q, mean = 0, sd = 1, Lower = 0, Upper = Inf, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail) != 1) stop("argument 'lower.tail' must be a single logical") if (!is.logical(log.p) || length(log.p) != 1) stop("argument 'log.p' must be a single logical") if (!all(Lower < Upper, na.rm = TRUE)) stop("all(Lower < Upper) is not TRUE") ans <- pnorm(q = q, mean = mean, sd = sd, lower.tail = lower.tail, log.p = log.p) ind1 <- (q < Lower) ans[ind1] <- if (lower.tail) ifelse(log.p, log(0.0), 0.0) else ifelse(log.p, log(1.0), 1.0) ind2 <- (Upper <= q) ans[ind2] <- if (lower.tail) ifelse(log.p, log(1.0), 1.0) else ifelse(log.p, log(0.0), 0.0) ans } qtobit <- function(p, mean = 0, sd = 1, Lower = 0, Upper = Inf, lower.tail = TRUE, log.p = FALSE) { if (!all(Lower < Upper, na.rm = TRUE)) stop("all(Lower < Upper) is not TRUE") # 20150127 KaiH; add lower.tail = lower.tail, log.p = log.p ans <- qnorm(p, mean = mean, sd = sd, lower.tail = lower.tail, log.p = log.p) pnorm.Lower <- ptobit(q = Lower, mean = mean, sd = sd, lower.tail = lower.tail, log.p = log.p) pnorm.Upper <- ptobit(q = Upper, mean = mean, sd = sd, lower.tail = lower.tail, log.p = log.p) if (FALSE) { if (lower.tail) { ind1 <- (p <= pnorm.Lower) ans[ind1] <- Lower[ind1] ind2 <- (pnorm.Upper <= p) ans[ind2] <- Upper[ind2] } else { ind1 <- (p >= pnorm.Lower) ans[ind1] <- Lower[ind1] ind2 <- (pnorm.Upper >= p) ans[ind2] <- Upper[ind2] } } else { ans <- qnorm(p = p, mean = mean, sd = sd, lower.tail = lower.tail, log.p = log.p) ans <- pmax(ans, Lower) ans <- pmin(ans, Upper) } ans } rtobit <- function(n, mean = 0, sd = 1, Lower = 0, Upper = Inf) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = 
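# Note: the tobit distribution above is mixed, so dtobit() returns the
# point-mass probabilities P(X <= Lower) at x == Lower and P(X >= Upper)
# at x == Upper, with the underlying N(mean, sd^2) density strictly between.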
TRUE)) stop("bad input for argument 'n'") else n L <- use.n if (length(mean) != L) mean <- rep_len(mean, L) if (length(sd) != L) sd <- rep_len(sd, L) if (length(Lower) != L) Lower <- rep_len(Lower, L) if (length(Upper) != L) Upper <- rep_len(Upper, L) if (!all(Lower < Upper, na.rm = TRUE)) stop("all(Lower < Upper) is not TRUE") ans <- rnorm(n = use.n, mean = mean, sd = sd) cenL <- (ans < Lower) cenU <- (ans > Upper) if (FALSE) { ans[cenL] <- Lower[cenL] ans[cenU] <- Upper[cenU] } else { ans <- pmax(ans, Lower) ans <- pmin(ans, Upper) } attr(ans, "Lower") <- Lower attr(ans, "Upper") <- Upper attr(ans, "cenL") <- cenL attr(ans, "cenU") <- cenU ans } tobit <- function(Lower = 0, Upper = Inf, # See the trick described below. lmu = "identitylink", lsd = "loge", imu = NULL, isd = NULL, type.fitted = c("uncensored", "censored", "mean.obs"), byrow.arg = FALSE, imethod = 1, zero = "sd") { lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lsd <- as.list(substitute(lsd)) esd <- link2list(lsd) lsd <- attr(esd, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if ( # length(Lower) != 1 || length(Upper) != 1 || !is.numeric(Lower) || !is.numeric(Upper) || any(Lower >= Upper)) stop("arguments 'Lower' and 'Upper' must be numeric and ", "satisfy Lower < Upper") if (mode(type.fitted) != "character" && mode(type.fitted) != "name") type.fitted <- as.character(substitute(type.fitted)) type.fitted <- match.arg(type.fitted, c("uncensored", "censored", "mean.obs"))[1] stdTobit <- all(Lower == 0.0) && all(is.infinite(Upper)) && all(lmu == "identitylink") new("vglmff", blurb = c("Tobit model (censored normal)\n\n", "Links: ", namesof("mu", lmu, earg = emu, tag = TRUE), "; ", namesof("sd", lsd, earg = esd, tag = TRUE), "\n", "Mean: mu", "\n", "Conditional variance: sd^2"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
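# A hedged usage sketch for tobit() (default left-censoring at Lower = 0):
#   tdat <- data.frame(x2 = runif(200))
#   tdat <- transform(tdat, y = pmax(0, 1 + 4 * x2 + rnorm(200)))
#   tfit <- vglm(y ~ x2, tobit, data = tdat, trace = TRUE)
#   coef(tfit, matrix = TRUE)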
{ list(M1 = 2, Q1 = 1, type.fitted = .type.fitted , zero = .zero , multiple.responses = TRUE, parameters.names = c("mu", "sd"), byrow.arg = .byrow.arg , stdTobit = .stdTobit , expected = TRUE ) }, list( .zero = zero, .byrow.arg = byrow.arg, .stdTobit = stdTobit, .type.fitted = type.fitted ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M <- M1 * ncoly Lowmat <- matrix( .Lower , n, ncol = ncoly, byrow = .byrow.arg ) Uppmat <- matrix( .Upper , n, ncol = ncoly, byrow = .byrow.arg ) extra$type.fitted <- .type.fitted extra$colnames.y <- colnames(y) extra$censoredL <- (y <= Lowmat) extra$censoredU <- (y >= Uppmat) if (any(matTF <- (y < Lowmat))) { warning("replacing response values less than 'Lower' by 'Lower'") y[matTF] <- Lowmat[matTF] } if (any(matTF <- (y > Uppmat))) { warning("replacing response values greater than 'Upper' by 'Upper'") y[matTF] <- Uppmat[matTF] } temp1.names <- param.names("mu", ncoly) temp2.names <- param.names("sd", ncoly) predictors.names <- c(namesof(temp1.names, .lmu , earg = .emu , tag = FALSE), namesof(temp2.names, .lsd , earg = .esd , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { anyc <- cbind(extra$censoredL | extra$censoredU) i11 <- if ( .imethod == 1) anyc else matrix(FALSE, n, 1) # can be all data mu.init <- sd.init <- matrix(0.0, n, ncoly) for (jay in 1:ncol(y)) { if ( .imethod > 2) { mu.init[, jay] <- (y[, jay] + weighted.mean(y[, jay], w[, jay])) / 2 sd.init[, jay] <- pmax(weighted.mean((y[, jay] - mu.init[, jay])^2, w[, jay])^0.5, 0.001) } else { # .imethod <= 2 use.i11 <- i11[, jay] if (sum(!use.i11) < ncol(x)) { use.i11 <- rep_len(FALSE, n) } mylm <- lm.wfit(x = x[!use.i11, , drop = FALSE], y = y[!use.i11, jay], w = w[!use.i11, jay]) sd.init[, jay] <- sqrt( sum(w[!use.i11, jay] * mylm$resid^2) / mylm$df.residual ) * 1.5 mu.init[!use.i11, jay] <- mylm$fitted.values if (any(anyc[, jay])) mu.init[anyc[, jay], jay] <- x[anyc[, jay],, drop = FALSE] %*% mylm$coeff } # .imethod <= 2 } # for (jay in 1:ncol(y)) if (length( .Imu )) mu.init <- matrix( .Imu , n, ncoly, byrow = .byrow.arg ) if (length( .isd )) sd.init <- matrix( .isd , n, ncoly, byrow = .byrow.arg ) etastart <- cbind(theta2eta(mu.init, .lmu , earg = .emu ), theta2eta(sd.init, .lsd , earg = .esd )) etastart <- etastart[, interleave.VGAM(M, M1 = M1), drop = FALSE] } # if (!length(etastart)) }), list( .Lower = Lower, .Upper = Upper, .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .Imu = imu, .isd = isd, .type.fitted = type.fitted, .stdTobit = stdTobit, .byrow.arg = byrow.arg, .imethod = imethod ))), linkinv = eval(substitute( function(eta, extra = NULL) { M1 <- 2 NOS <- ncoly <- ncol(eta) / M1 mum <- eta2theta(eta[, M1*(1:ncoly)-1, drop = FALSE], .lmu , earg = .emu ) mum <- label.cols.y(mum, colnames.y = extra$colnames.y, NOS = NOS) type.fitted <- if (length(extra$type.fitted)) { extra$type.fitted } else { warning("cannot find 'type.fitted'. 
Returning 'uncensored'.") "uncensored" } type.fitted <- match.arg(type.fitted, c("uncensored", "censored", "mean.obs"))[1] if ( type.fitted == "uncensored") return(mum) Lowmat <- matrix( .Lower , nrow = nrow(eta), ncol = ncoly, byrow = .byrow.arg ) Uppmat <- matrix( .Upper , nrow = nrow(eta), ncol = ncoly, byrow = .byrow.arg ) if ( type.fitted == "censored") { mum[mum < Lowmat] <- Lowmat[mum < Lowmat] mum[mum > Uppmat] <- Uppmat[mum > Uppmat] return(mum) } else { sdm <- eta2theta(eta[, M1*(1:ncoly)-0, drop = FALSE], .lsd , earg = .esd ) zeddL <- (Lowmat - mum) / sdm zeddU <- (Uppmat - mum) / sdm Phi.L <- pnorm(zeddL) phi.L <- dnorm(zeddL) Phi.U <- pnorm(zeddU) phi.U <- dnorm(zeddU) mum * (Phi.U - Phi.L) + sdm * (phi.L - phi.U) + ifelse(is.infinite(Lowmat), 0, Lowmat * Phi.U ) + ifelse(is.infinite(Uppmat), 0, Uppmat * (1 - Phi.U)) } }, list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .byrow.arg = byrow.arg, .Lower = Lower, .Upper = Upper ))), last = eval(substitute(expression({ temp0303 <- c(rep_len( .lmu , ncoly), rep_len( .lsd , ncoly)) names(temp0303) <- c(param.names("mu", ncoly), param.names("sd", ncoly)) temp0303 <- temp0303[interleave.VGAM(M, M1 = M1)] misc$link <- temp0303 # Already named misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .emu misc$earg[[M1*ii ]] <- .esd } misc$multipleResponses <- TRUE misc$expected <- TRUE misc$imethod <- .imethod misc$M1 <- M1 misc$stdTobit <- .stdTobit misc$Lower <- Lowmat misc$Upper <- Uppmat }), list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .imethod = imethod, .stdTobit = stdTobit, .Lower = Lower, .Upper = Upper ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { M1 <- 2 y <- cbind(y) ncoly <- ncol(y) cenL <- extra$censoredL cenU <- extra$censoredU cen0 <- !cenL & !cenU # uncensored obsns Lowmat <- matrix( .Lower , nrow = nrow(eta), ncol = ncoly, byrow = .byrow.arg ) Uppmat <- matrix( .Upper , nrow = nrow(eta), ncol = ncoly, byrow = .byrow.arg ) mum <- eta2theta(eta[, M1*(1:ncoly)-1, drop = FALSE], .lmu , earg = .emu ) sdm <- eta2theta(eta[, M1*(1:ncoly)-0, drop = FALSE], .lsd , earg = .esd ) ell0 <- dnorm( y[cen0], mean = mum[cen0], sd = sdm[cen0], log = TRUE) ellL <- pnorm(Lowmat[cenL], mean = mum[cenL], sd = sdm[cenL], log.p = TRUE, lower.tail = TRUE) ellU <- pnorm(Uppmat[cenU], mean = mum[cenU], sd = sdm[cenU], log.p = TRUE, lower.tail = FALSE) wmat <- matrix(w, nrow = nrow(eta), ncol = ncoly) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- y # Right dimension only ll.elts[cen0] <- wmat[cen0] * ell0 ll.elts[cenL] <- wmat[cenL] * ellL ll.elts[cenU] <- wmat[cenU] * ellU if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .byrow.arg = byrow.arg, .Lower = Lower, .Upper = Upper ))), vfamily = c("tobit"), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- 2 ncoly <- NCOL(y) mum <- eta2theta(eta[, M1*(1:ncoly)-1, drop = FALSE], .lmu , earg = .emu ) sdm <- eta2theta(eta[, M1*(1:ncoly)-0, drop = FALSE], .lsd , earg = .esd ) okay1 <- all(is.finite(mum)) && all(is.finite(sdm)) && all(0 < sdm) okay1 }, list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .byrow.arg = byrow.arg, .Lower = Lower, .Upper = Upper ))), deriv = eval(substitute(expression({ M1 <- 2 y <- cbind(y) ncoly <- ncol(y) mills.ratio1 <- function(x) { ans <- exp(dnorm(x, log = TRUE) - pnorm(x, log = TRUE)) if (any(vecTF <- (x < -1e2))) 
{ xvneg <- x[vecTF] ans[vecTF] <- -xvneg / (1 - 1/xvneg^2 + 3 / xvneg^4) } ans } # mills.ratio1() mills.ratio2 <- function(x) { ans <- exp(2 * dnorm(x, log = TRUE) - pnorm(x, log = TRUE)) ans[x < -40] <- 0 ans } moment.k.dnorm <- function(z, k = 0) { if (any(k < 0)) stop("this function works only for non-negative 'k'") ans <- dnorm(z) * z^k ans[is.infinite(z)] <- 0 ans } moment.millsratio2 <- function(zedd) { ans <- exp(2 * (log(abs(zedd)) + dnorm(zedd, log = TRUE)) - pnorm(zedd, log = TRUE)) ans[is.infinite(zedd)] <- 0 # Needed for zedd == Inf and -Inf ans } Lowmat <- matrix( .Lower , nrow = nrow(eta), ncol = ncoly, byrow = .byrow.arg ) Uppmat <- matrix( .Upper , nrow = nrow(eta), ncol = ncoly, byrow = .byrow.arg ) cenL <- extra$censoredL cenU <- extra$censoredU cen0 <- !cenL & !cenU # uncensored obsns mum <- eta2theta(eta[, M1*(1:ncoly)-1, drop = FALSE], .lmu , earg = .emu ) sdm <- eta2theta(eta[, M1*(1:ncoly)-0, drop = FALSE], .lsd , earg = .esd ) zedd <- (y - mum) / sdm dl.dmu <- zedd / sdm dl.dsd <- (zedd^2 - 1) / sdm dmu.deta <- dtheta.deta(mum, .lmu , earg = .emu ) dsd.deta <- dtheta.deta(sdm, .lsd , earg = .esd ) if (any(cenL)) { mumL <- Lowmat - mum temp21L <- mumL[cenL] / sdm[cenL] fred21 <- mills.ratio1(temp21L) dl.dmu[cenL] <- -fred21 / sdm[cenL] dl.dsd[cenL] <- fred21 * (-temp21L / sdm[cenL]) } if (any(cenU)) { mumU <- Uppmat - mum temp21U <- mumU[cenU] / sdm[cenU] fred21 <- -mills.ratio1(-temp21U) dl.dmu[cenU] <- -fred21 / sdm[cenU] # Negated dl.dsd[cenU] <- fred21 * (-temp21U / sdm[cenU]) } dthetas.detas <- cbind(dmu.deta, dsd.deta) dThetas.detas <- dthetas.detas[, interleave.VGAM(M, M1 = M1)] myderiv <- cbind(c(w) * dl.dmu, c(w) * dl.dsd) * dthetas.detas myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lmu = lmu, .lsd = lsd, .emu = emu, .esd = esd, .byrow.arg = byrow.arg, .Lower = Lower, .Upper = Upper ))), weight = eval(substitute(expression({ v.large <- 3.5 v.small <- -5.0 # pnorm(-5) == 3e-07 v.large <- 5.5 v.small <- -6.5 # pnorm(-5) == 3e-07 if ( .stdTobit ) { wz <- matrix(0.0, n, M + M - 1) # wz is 'tridiagonal' wz1 <- matrix(0.0, n, dimm(M1)) ind1 <- iam(NA, NA, M = M1, both = TRUE, diag = TRUE) for (spp. in 1:ncoly) { zedd0 <- ( mum[, spp.]) / sdm[, spp.] phivec <- dnorm(zedd0) Phivec <- pnorm(zedd0) phicPhi <- mills.ratio1(-zedd0) wz1[, iam(1, 2, M = M1)] <- phivec * (1 + zedd0 * (zedd0 - phicPhi)) wz1[, iam(1, 1, M = M1)] <- Phivec + mills.ratio2(-zedd0) + moment.k.dnorm(-zedd0, k = 1) wz1[, iam(2, 2, M = M1)] <- 2 * Phivec + moment.k.dnorm(-zedd0, k = 2) * mills.ratio1(-zedd0) + moment.k.dnorm(-zedd0, k = 1) + moment.k.dnorm(-zedd0, k = 3) if (FALSE && any(index1 <- (zedd0 < v.small))) { wz1[index1, iam(1, 1, M = M1)] <- 1e-7 wz1[index1, iam(1, 2, M = M1)] <- 0 wz1[index1, iam(2, 2, M = M1)] <- 1e-7 } if (FALSE && any(index1 <- (zedd0 > v.large))) { wz1[index1, iam(1, 1, M = M1)] <- 1 wz1[index1, iam(1, 2, M = M1)] <- 0 wz1[index1, iam(2, 2, M = M1)] <- 2 } wz1 <- wz1 * dThetas.detas[, M1 * (spp. - 1) + ind1$row] * dThetas.detas[, M1 * (spp. - 1) + ind1$col] for (jay in 1:M1) for (kay in jay:M1) { cptr <- iam((spp. - 1) * M1 + jay, (spp. - 1) * M1 + kay, M = M) wz[, cptr] <- wz1[, iam(jay, kay, M = M1)] } } # End of for (spp.) 
loop } else { # Not a standard Tobit model ,,,,,,,,,,,,,,,,,,,,,,,,,,,, A.i <- (Lowmat - mum) / sdm B.i <- (Uppmat - mum) / sdm phivec.A <- dnorm(A.i) phivec.B <- dnorm(B.i) Phivec.A <- pnorm(A.i) Phivec.B <- pnorm(B.i) Phivec.BB <- pnorm(-B.i) phiPhi.A <- mills.ratio1( A.i) phicPhi.B <- mills.ratio1(-B.i) ned2l.dmumu <- Phivec.B - Phivec.A + moment.k.dnorm( A.i, k = 1) + mills.ratio2( A.i) + moment.k.dnorm(-B.i, k = 1) + mills.ratio2(-B.i) ned2l.dsdsd <- 2 * (Phivec.B - Phivec.A) + 3 * (moment.k.dnorm( A.i, k = 1) + moment.k.dnorm(-B.i, k = 1)) - 2 * moment.k.dnorm(-B.i, k = 1) + moment.k.dnorm(-B.i, k = 3) + moment.millsratio2(-B.i) - 2 * moment.k.dnorm( A.i, k = 1) + moment.k.dnorm( A.i, k = 3) + moment.millsratio2( A.i) ned2l.dmusd <- phivec.A - phivec.B + moment.k.dnorm( A.i, k = 2) + moment.k.dnorm( A.i, k = 1) * mills.ratio1( A.i) + moment.k.dnorm( B.i, k = 2) + moment.k.dnorm(-B.i, k = 1) * mills.ratio1(-B.i) if (TRUE && any(index1 <- (A.i < v.small))) { ned2l.dmusd[index1] <- 0 } if (TRUE && any(index1 <- (B.i > v.large))) { ned2l.dmusd[index1] <- 0 } wz <- array(c(ned2l.dmumu * dmu.deta^2, ned2l.dsdsd * dsd.deta^2, ned2l.dmusd * dmu.deta * dsd.deta), dim = c(n, M / M1, 3)) wz <- arwz2wz(wz, M = M, M1 = M1) } # Not a standard Tobit model w.wz.merge(w = w / sdm^2, wz = wz, n = n, M = M, ndepy = ncoly) }), list( .lmu = lmu, .Lower = Lower, .Upper = Upper, .lsd = lsd, .stdTobit = stdTobit )))) } # End of tobit() normal1 <- uninormal <- function(lmean = "identitylink", lsd = "loge", lvar = "loge", var.arg = FALSE, imethod = 1, isd = NULL, parallel = FALSE, smallno = 1.0e-5, zero = "sd") { apply.parint <- FALSE lmean <- as.list(substitute(lmean)) emean <- link2list(lmean) lmean <- attr(emean, "function.name") lsdev <- as.list(substitute(lsd)) esdev <- link2list(lsdev) lsdev <- attr(esdev, "function.name") lvare <- as.list(substitute(lvar)) evare <- link2list(lvare) lvare <- attr(evare, "function.name") if (!is.Numeric(smallno, length.arg = 1, positive = TRUE)) stop("argument 'smallno' must be positive and close to 0") if (smallno > 0.1) { warning("replacing argument 'smallno' with 0.1") smallno <- 0.1 } if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1 or 2 or 3 or 4") if (!is.logical(var.arg) || length(var.arg) != 1) stop("argument 'var.arg' must be a single logical") if (!is.logical(apply.parint) || length(apply.parint) != 1) stop("argument 'apply.parint' must be a single logical") if (is.logical(parallel) && parallel && length(zero)) stop("set 'zero = NULL' if 'parallel = TRUE'") new("vglmff", blurb = c("Univariate normal distribution\n\n", "Links: ", namesof("mean", lmean, earg = emean, tag = TRUE), "; ", if (var.arg) namesof("var", lvare, earg = evare, tag = TRUE) else namesof("sd" , lsdev, earg = esdev, tag = TRUE), "\n", if (var.arg) "Variance: var" else "Variance: sd^2"), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints, apply.int = .apply.parint ) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero, .parallel = parallel, .apply.parint = apply.parint ))), infos = eval(substitute(function(...) 
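# A minimal usage sketch (hypothetical data frame 'ndata' with
# responses y1 and y2): uninormal() handles multiple responses, and
# zero = "sd" keeps each sd as intercept-only.
#   fit <- vglm(cbind(y1, y2) ~ x2, uninormal(zero = "sd"), data = ndata)
#   coef(fit, matrix = TRUE)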
{ list(M1 = 2, Q1 = 1, expected = TRUE, hadof = TRUE, multipleResponses = TRUE, parameters.names = c("mean", if ( .var.arg ) "var" else "sd"), var.arg = .var.arg , parallel = .parallel , zero = .zero ) }, list( .zero = zero , .parallel = parallel , .var.arg = var.arg ))), initialize = eval(substitute(expression({ orig.y <- y if (length(attr(orig.y, "Prior.Weights"))) { if (any(c(w) != 1)) warning("replacing the 'weights' argument by the 'Prior.Weights'", "attribute of the response (probably due to Qvar()") w <- attr(orig.y, "Prior.Weights") extra$attributes.y <- attributes(orig.y) } else { } temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 2 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly mynames1 <- param.names("mean", ncoly) mynames2 <- param.names(if ( .var.arg ) "var" else "sd", ncoly) predictors.names <- c(namesof(mynames1, .lmean , earg = .emean , tag = FALSE), if ( .var.arg ) namesof(mynames2, .lvare , earg = .evare , tag = FALSE) else namesof(mynames2, .lsdev , earg = .esdev , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] extra$predictors.names <- predictors.names if (!length(etastart)) { sdev.init <- mean.init <- matrix(0, n, ncoly) for (jay in 1:ncoly) { jfit <- lm.wfit(x = x, y = y[, jay], w = w[, jay]) mean.init[, jay] <- if ( .lmean == "loge") pmax(1/1024, y[, jay]) else if ( .imethod == 1) median(y[, jay]) else if ( .imethod == 2) weighted.mean(y[, jay], w = w[, jay]) else if ( .imethod == 3) weighted.mean(y[, jay], w = w[, jay]) * 0.5 + y[, jay] * 0.5 else mean(jfit$fitted) sdev.init[, jay] <- if ( .imethod == 1) { sqrt( sum(w[, jay] * (y[, jay] - mean.init[, jay])^2) / sum(w[, jay]) ) } else if ( .imethod == 2) { if (jfit$df.resid > 0) sqrt( sum(w[, jay] * jfit$resid^2) / jfit$df.resid ) else sqrt( sum(w[, jay] * jfit$resid^2) / sum(w[, jay]) ) } else if ( .imethod == 3) { sqrt( sum(w[, jay] * (y[, jay] - mean.init[, jay])^2) / sum(w[, jay]) ) } else { sqrt( sum(w[, jay] * abs(y[, jay] - mean.init[, jay])) / sum(w[, jay]) ) } if (any(sdev.init[, jay] <= sqrt( .Machine$double.eps ) )) sdev.init[, jay] <- 1.01 } if (length( .isdev )) { sdev.init <- matrix( .isdev , n, ncoly, byrow = TRUE) } etastart <- cbind(theta2eta(mean.init, .lmean , earg = .emean ), if ( .var.arg ) theta2eta(sdev.init^2, .lvare , earg = .evare ) else theta2eta(sdev.init , .lsdev , earg = .esdev )) etastart <- etastart[, interleave.VGAM(ncol(etastart), M1 = M1)] colnames(etastart) <- predictors.names } }), list( .lmean = lmean, .lsdev = lsdev, .lvare = lvare, .emean = emean, .esdev = esdev, .evare = evare, .isdev = isd, .var.arg = var.arg, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { M1 <- extra$M1 ncoly <- extra$ncoly if ( .lmean == "explink") { if (any(eta[, M1*(1:ncoly) - 1] <= 0)) { warning("turning some columns of 'eta' positive in @linkinv") for (ii in 1:ncoly) eta[, M1*ii - 1] <- pmax( .smallno , eta[, M1*ii - 1]) } } eta2theta(eta[, M1*(1:ncoly) - 1], .lmean , earg = .emean ) }, list( .lmean = lmean, .emean = emean, .esdev = esdev , .evare = evare, .smallno = smallno ))), last = eval(substitute(expression({ M1 <- extra$M1 temp.names <- c(mynames1, mynames2) temp.names <- temp.names[interleave.VGAM(M1 * ncoly, M1 = M1)] misc$link <- rep_len( .lmean , M1 * ncoly) misc$earg <- vector("list", M1 * ncoly) names(misc$link) <- names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$link[ M1*ii-1 ] <- .lmean 
misc$link[ M1*ii ] <- if ( .var.arg ) .lvare else .lsdev misc$earg[[M1*ii-1]] <- .emean misc$earg[[M1*ii ]] <- if ( .var.arg ) .evare else .esdev } misc$var.arg <- .var.arg misc$M1 <- M1 misc$expected <- TRUE misc$imethod <- .imethod misc$multipleResponses <- TRUE misc$parallel <- .parallel misc$apply.parint <- .apply.parint misc$smallno <- .smallno }), list( .lmean = lmean, .lsdev = lsdev, .lvare = lvare, .emean = emean, .esdev = esdev, .evare = evare, .parallel = parallel, .apply.parint = apply.parint, .smallno = smallno, .var.arg = var.arg, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { ncoly <- extra$ncoly M1 <- extra$M1 if ( .lmean == "explink") { if (any(eta[, M1*(1:ncoly) - 1] <= 0)) { warning("turning some columns of 'eta' positive in @loglikelihood") for (ii in 1:ncoly) eta[, M1*ii - 1] <- pmax( .smallno , eta[, M1*ii - 1]) } } if ( .var.arg ) { Varm <- eta2theta(eta[, M1*(1:ncoly)], .lvare , earg = .evare ) sdev <- sqrt(Varm) } else { sdev <- eta2theta(eta[, M1*(1:ncoly)], .lsdev , earg = .esdev ) } if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dnorm(y, m = mu, sd = sdev, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lsdev = lsdev, .lvare = lvare, .esdev = esdev, .evare = evare, .lmean = lmean, .smallno = smallno, .var.arg = var.arg ))), vfamily = c("uninormal"), hadof = eval(substitute( function(eta, extra = list(), linpred.index = 1, w = 1, dim.wz = c(NROW(eta), NCOL(eta) * (NCOL(eta)+1)/2), deriv = 1, ...) { M1 <- 2 n <- NROW(eta) M <- NCOL(eta) mymu <- eta2theta(eta[, c(TRUE, FALSE)], .lmean , earg = .emean ) if ( .var.arg ) { Varm <- eta2theta(eta[, c(FALSE, TRUE)], .lvare , earg = .evare ) sdev <- sqrt(Varm) } else { sdev <- eta2theta(eta[, c(FALSE, TRUE)], .lsdev , earg = .esdev ) Varm <- sdev^2 # Not needed really } which.param <- ifelse(linpred.index %% M1 == 1, "mean", if ( .var.arg ) "var" else "sd") which.y <- ceiling(linpred.index / M1) if (deriv == 0) { ned2l.dmu2 <- 1 / sdev^2 ned2l.dsd2 <- 2 / sdev^2 ned2l.dva2 <- 0.5 / Varm^2 wz <- array(c(c(w) * ned2l.dmu2, c(w) * (if ( .var.arg ) ned2l.dva2 else ned2l.dsd2), c(w) * ned2l.dmu2 * 0), # diagonal dim = c(n, M / M1, 3)) return(arwz2wz(wz, M = M, M1 = M1, full.arg = TRUE)) } if (deriv == 1) { if (which.param == "mean") { NED2l.dmu2 <- NED2l.dsd2 <- matrix(0, n, M) } else { NED2l.dmu2 <- if ( .var.arg ) (-1 / Varm^2) else 1 * (-2 / sdev^3) NED2l.dsd2 <- if ( .var.arg ) (-1 / Varm^3) else 2 * (-2 / sdev^3) } } if (deriv == 2) { if (which.param == "mean") { NED2l.dmu2 <- NED2l.dsd2 <- matrix(0, n, M) } else { NED2l.dmu2 <- if ( .var.arg ) (2 / Varm^3) else 1 * (6 / sdev^4) NED2l.dsd2 <- if ( .var.arg ) (3 / Varm^4) else 2 * (6 / sdev^4) } } WZ <- switch(as.character(deriv), "1" = array(c(c(w) * retain.col(NED2l.dmu2, which.y), c(w) * retain.col(NED2l.dsd2, which.y), c(w) * retain.col(NED2l.dmu2 * 0, which.y)), dim = c(n, M / M1, 3)), "2" = array(c(c(w) * retain.col(NED2l.dmu2, which.y), c(w) * retain.col(NED2l.dsd2, which.y), c(w) * retain.col(NED2l.dmu2 * 0, which.y)), dim = c(n, M / M1, 3)), stop("argument 'deriv' must be 0 or 1 or 2")) return(arwz2wz(WZ, M = M, M1 = M1, full.arg = TRUE)) }, list( .lmean = lmean, .lsdev = lsdev, .lvare = lvare, .emean = emean, .esdev = esdev, .evare = evare, .var.arg = var.arg ))), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- 2 ncoly <- NCOL(y) mymu <- eta2theta( eta[, M1*(1:ncoly) - 1], .lmean , 
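# With M1 = 2 the columns of eta interleave as
# (mean1, sd1, mean2, sd2, ...), so M1*(1:ncoly) - 1 indexes the mean
# predictors and M1*(1:ncoly) the sd (or var) predictors.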
earg = .emean ) if ( .var.arg ) { Varm <- eta2theta(eta[, M1*(1:ncoly) ], .lvare , earg = .evare ) sdev <- 111 } else { sdev <- eta2theta(eta[, M1*(1:ncoly) ], .lsdev , earg = .esdev ) Varm <- 111 } okay1 <- all(is.finite(mymu)) && all(is.finite(sdev)) && all(0 < sdev) && all(is.finite(Varm)) && all(0 < Varm) okay2 <- TRUE if ( .lmean == "explink") { okay2 <- all(0 < eta[, M1*(1:ncoly) - 1]) } okay1 && okay2 }, list( .lmean = lmean, .lsdev = lsdev, .lvare = lvare, .emean = emean, .esdev = esdev, .evare = evare, .smallno = smallno, .var.arg = var.arg ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") mymu <- fitted(object) eta <- predict(object) if ( .var.arg ) { Varm <- eta2theta(eta[, c(FALSE, TRUE)], .lvare , earg = .evare ) sdev <- sqrt(Varm) } else { sdev <- eta2theta(eta[, c(FALSE, TRUE)], .lsdev , earg = .esdev ) } rnorm(nsim * length(mymu), mean = mymu, sd = sdev) }, list( .lsdev = lsdev, .lvare = lvare, .esdev = esdev, .evare = evare, .lmean = lmean, .smallno = smallno, .var.arg = var.arg ))), deriv = eval(substitute(expression({ ncoly <- extra$ncoly M1 <- extra$M1 if ( .lmean == "explink") { if (any(eta[, M1*(1:ncoly) - 1] <= 0)) { warning("turning some columns of 'eta' positive in @deriv") for (ii in 1:ncoly) eta[, M1*ii - 1] <- pmax( .smallno , eta[, M1*ii - 1]) } } mymu <- eta2theta( eta[, M1*(1:ncoly) - 1], .lmean , earg = .emean ) if ( .var.arg ) { Varm <- eta2theta(eta[, M1*(1:ncoly) ], .lvare , earg = .evare ) sdev <- sqrt(Varm) } else { sdev <- eta2theta(eta[, M1*(1:ncoly) ], .lsdev , earg = .esdev ) } dl.dmu <- (y - mymu) / sdev^2 if ( .var.arg ) { dl.dva <- -0.5 / Varm + 0.5 * (y - mymu)^2 / sdev^4 } else { dl.dsd <- -1.0 / sdev + (y - mymu)^2 / sdev^3 } dmu.deta <- dtheta.deta(mymu, .lmean , earg = .emean ) if ( .var.arg ) { dva.deta <- dtheta.deta(Varm, .lvare , earg = .evare ) } else { dsd.deta <- dtheta.deta(sdev, .lsdev , earg = .esdev ) } ans <- c(w) * cbind(dl.dmu * dmu.deta, if ( .var.arg ) dl.dva * dva.deta else dl.dsd * dsd.deta) ans <- ans[, interleave.VGAM(ncol(ans), M1 = M1)] ans }), list( .lmean = lmean, .lsdev = lsdev, .lvare = lvare, .emean = emean, .esdev = esdev, .evare = evare, .smallno = smallno, .var.arg = var.arg ))), weight = eval(substitute(expression({ wz <- matrix(NA_real_, n, M) # Diagonal matrix ned2l.dmu2 <- 1 / sdev^2 if ( .var.arg ) { ned2l.dva2 <- 0.5 / Varm^2 } else { ned2l.dsd2 <- 2 / sdev^2 } wz[, M1*(1:ncoly) - 1] <- ned2l.dmu2 * dmu.deta^2 wz[, M1*(1:ncoly) ] <- if ( .var.arg ) { ned2l.dva2 * dva.deta^2 } else { ned2l.dsd2 * dsd.deta^2 } w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = ncoly) }), list( .var.arg = var.arg )))) } # End of uninormal() normal.vcm <- function(link.list = list("(Default)" = "identitylink"), earg.list = list("(Default)" = list()), lsd = "loge", lvar = "loge", esd = list(), evar = list(), var.arg = FALSE, imethod = 1, icoefficients = NULL, isd = NULL, zero = "sd", sd.inflation.factor = 2.50) { orig.esd <- esd orig.evar <- evar lsd <- as.list(substitute(lsd)) esd <- link2list(lsd) lsd <- attr(esd, "function.name") lvar <- as.list(substitute(lvar)) evar <- link2list(lvar) lvar <- attr(evar, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1 or 2 or 3 or 4") if (!is.logical(var.arg) || length(var.arg) != 1) stop("argument 'var.arg' must be a single logical") 
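# One plausible call (a sketch only; 'vdata', 'x2', 'x3' are
# hypothetical, and the varying coefficients are assumed to be supplied
# via 'form2'): link.list maps columns of the form2 model matrix to
# links, with "(Default)" as the fallback.
#   fit <- vglm(y1 ~ 1, form2 = ~ x2 + x3,
#               normal.vcm(link.list = list("x2" = "loge",
#                                           "(Default)" = "identitylink")),
#               data = vdata)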
new("vglmff", blurb = c("Univariate normal distribution with ", "varying coefficients\n\n", "Links: ", "G1: g1(coeff:v1), ", "G2: g2(coeff:v2)", ", ..., ", if (var.arg) namesof("var", lvar, earg = evar, tag = TRUE) else namesof("sd" , lsd, earg = esd, tag = TRUE), "; ", "\n", if (var.arg) "Variance: var" else "Variance: sd^2"), constraints = eval(substitute(expression({ M1 <- NA if (FALSE) { dotzero <- .zero if (is.character(dotzero) && dotzero == "M") dotzero <- M M1 <- NA eval(negzero.expression.VGAM) } else { constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = M) # 20151222; Okay for 1 response? } }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = NA, Q1 = 1, multipleResponses = FALSE, # zz unsure parameters.names = as.character(NA), # zz unsure zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ asgn2 <- attr(Xm2, "assign") nasgn2 <- names(asgn2) link.list.lengths <- unlist(lapply(asgn2, length)) link.list <- .link.list earg.list <- .earg.list if (FALSE) { if (length(link.list) > 0) if (length(nasgn2) != length(names(link.list)) || !all(sort(nasgn2) == sort(names(link.list)))) stop("names of 'link.list' do not match argument 'form2'") if (length(earg.list) > 0) if (length(nasgn2) != length(names(earg.list)) || !all(sort(nasgn2) == sort(names(earg.list)))) stop("names of 'earg.list' do not match argument 'form2'") } link.list.ordered <- vector("list", ncol(Xm2)) earg.list.ordered <- vector("list", ncol(Xm2)) if (sum(names(link.list) == "(Default)") > 1) stop("only one default allowed in argument 'link.list'!") if (sum(names(earg.list) == "(Default)") > 1) stop("only one default allowed in argument 'earg.list'!") default.link <- if (any(names(link.list) == "(Default)")) link.list[["(Default)"]] else "identitylink" default.earg <- if (any(names(earg.list) == "(Default)")) earg.list[["(Default)"]] else list() names(link.list.ordered) <- names(earg.list.ordered) <- colnames(Xm2) i.ptr <- 1 for (jlocal in seq_along(nasgn2)) { for (klocal in 1:link.list.lengths[jlocal]) { link.list.ordered[[i.ptr]] <- if (any(names(link.list) == nasgn2[jlocal])) link.list[[(nasgn2[jlocal])]] else default.link earg.list.ordered[[i.ptr]] <- if (any(names(earg.list) == nasgn2[jlocal])) earg.list[[(nasgn2[jlocal])]] else default.earg i.ptr <- i.ptr + 1 } } link.list <- link.list.ordered earg.list <- earg.list.ordered extra$link.list <- link.list extra$earg.list <- earg.list if (any(is.multilogit <- (unlist(link.list.ordered) == "multilogit"))) { if (sum(is.multilogit) < 2) stop("at least two 'multilogit' links need to be specified, ", "else none") col.index.is.multilogit <- (seq_along(is.multilogit))[is.multilogit] extra$col.index.is.multilogit <- col.index.is.multilogit extra$is.multilogit <- is.multilogit } temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, # M-1 ? out.wy = TRUE, colsyperw = 1, # Use M-1, not 1, for plotvgam(y=TRUE) maximize = TRUE) w <- temp5$w y <- temp5$y extra$ncoly <- ncoly <- ncol(y) extra$M <- M <- ncol(Xm2) + 1 - (length(extra$is.multilogit) > 0) M1 <- NA # Since this cannot be determined apriori. 
extra$M1 <- M1 extra$Xm2 <- Xm2 # Needed for @linkinv extra$depvar <- y mynames1 <- paste("coeff:", colnames(Xm2), sep = "") for (jlocal in seq_along(mynames1)) { mynames1[jlocal] <- namesof(mynames1[jlocal], link = link.list[[jlocal]], earg = earg.list[[jlocal]], short = TRUE) } extra$all.mynames1 <- all.mynames1 <- mynames1 if (LLL <- length(extra$is.multilogit)) { mynames1 <- mynames1[-max(extra$col.index.is.multilogit)] } mynames2 <- param.names(if ( .var.arg ) "var" else "sd", ncoly) predictors.names <- c(mynames1, if ( .var.arg ) namesof(mynames2, .lvar , earg = .evar , tag = FALSE) else namesof(mynames2, .lsd , earg = .esd , tag = FALSE)) extra$predictors.names <- predictors.names if (!length(etastart)) { jfit <- lm.wfit(x = Xm2, y = c(y), w = c(w)) jfit.coeff <- jfit$coeff if (icoefficients.given <- is.numeric( .icoefficients )) jfit.coeff <- rep_len( .icoefficients , length(jfit.coeff)) if (!icoefficients.given) for (jlocal in seq_along(nasgn2)) { if (link.list[[jlocal]] %in% c("cauchit", "probit", "cloglog", "logit", "logc", "golf", "polf", "nbolf") && abs(jfit.coeff[jlocal] - 0.5) >= 0.5) jfit.coeff[jlocal] <- 0.5 + sign(jfit.coeff[jlocal] - 0.5) * 0.25 if (link.list[[jlocal]] %in% c("rhobit", "fisherz") && abs(jfit.coeff[jlocal]) >= 1) jfit.coeff[jlocal] <- sign(jfit.coeff[jlocal]) * 0.5 if (link.list[[jlocal]] == "loglog" && abs(jfit.coeff[jlocal]) <= 1) jfit.coeff[jlocal] <- 1 + 1/8 if (link.list[[jlocal]] == "logoff" && is.numeric(LLL <- (earg.list[[jlocal]])$offset) && jfit.coeff[jlocal] <= -LLL) { jfit.coeff[jlocal] <- max((-LLL) * 1.05, (-LLL) * 0.95, -LLL + 1) } if (link.list[[jlocal]] == "loge" && jfit.coeff[jlocal] <= 0.001) jfit.coeff[jlocal] <- 1/8 } if (!icoefficients.given) { if (LLL <- length(extra$is.multilogit)) { raw.coeffs <- jfit.coeff[extra$col.index.is.multilogit] possum1 <- (0.01 + abs(raw.coeffs)) / sum(0.01 + abs(raw.coeffs)) jfit.coeff[extra$is.multilogit] <- possum1 } } thetamat.init <- matrix(jfit.coeff, n, length(jfit.coeff), byrow = TRUE) etamat.init <- 1 * thetamat.init # May delete a coln later for (jlocal in 1:ncol(etamat.init)) { earg.use <- if (!length(extra$earg.list)) { list(theta = NULL) } else { extra$earg.list[[jlocal]] } if (length(extra$is.multilogit) && !extra$is.multilogit[jlocal]) etamat.init[, jlocal] <- theta2eta(thetamat.init[, jlocal], link = extra$link.list[[jlocal]], earg = earg.use) } if (LLL <- length(extra$col.index.is.multilogit)) { etamat.init[, extra$col.index.is.multilogit[-LLL]] <- multilogit(thetamat.init[, extra$col.index.is.multilogit]) etamat.init <- etamat.init[, -max(extra$col.index.is.multilogit)] } w.sum1 <- w / sum(w) sdev.init <- if ( .imethod == 1) { sqrt( sum(w.sum1 * jfit$resid^2) ) } else if ( .imethod == 2) { sqrt( sum(w.sum1 * (abs(jfit$resid))^1.5) ) } else if ( .imethod == 3) { sqrt( sum(w.sum1 * abs(jfit$resid)) ) } else { wmean.init <- weighted.mean(y, w = w) # jfit$fitted sqrt( sum(w.sum1 * (y - wmean.init)^2) ) } sd.inflation.factor <- .sd.inflation.factor sdev.init <- sdev.init * sd.inflation.factor sdev.init <- pmax(sdev.init, ( .Machine$double.eps )^0.25) # Limit the smallness if (length( .isdev )) { sdev.init <- matrix( .isdev , n, ncoly, byrow = TRUE) } etastart <- cbind(etamat.init, # eta.equi.probs, if ( .var.arg ) theta2eta(sdev.init^2, .lvar , earg = .evar ) else theta2eta(sdev.init , .lsd , earg = .esd )) colnames(etastart) <- predictors.names } }), list( .link.list = link.list, .earg.list = earg.list, .lsd = lsd, .lvar = lvar, .esd = esd, .evar = evar, .orig.esd = orig.esd, .orig.evar = 
orig.evar, .var.arg = var.arg, .sd.inflation.factor = sd.inflation.factor, .isdev = isd, .icoefficients = icoefficients, .imethod = imethod ))), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- ncol(eta) NOS <- ncol(eta) / M1 sdev <- eta2theta(eta[, M1*(1:NOS) , drop = FALSE], .lsd , earg = .esd ) okay1 <- all(is.finite(sdev)) && all(0 < sdev) && all(is.finite(eta)) okay1 }, list( .link.list = link.list, .earg.list = earg.list, .lsd = lsd, .lvar = lvar, .esd = esd, .evar = evar, .orig.esd = orig.esd, .orig.evar = orig.evar, .var.arg = var.arg ))), linkinv = eval(substitute(function(eta, extra = NULL) { M <- ncol(eta) coffs <- eta[, -M, drop = FALSE] if (LLL <- length(extra$col.index.is.multilogit)) { last.one <- extra$col.index.is.multilogit[LLL] coffs <- cbind(coffs[, 1:(last.one-1)], probs.last.multilogit = 0, if (last.one == M) NULL else coffs[, last.one:ncol(coffs)]) colnames(coffs) <- extra$all.mynames1 } for (jlocal in 1:ncol(coffs)) { earg.use <- if (!length(extra$earg.list[[jlocal]])) { list(theta = NULL) } else { extra$earg.list[[jlocal]] } if (length(extra$is.multilogit) && !extra$is.multilogit[jlocal]) { iskip <- (jlocal > max(extra$col.index.is.multilogit)) coffs[, jlocal] <- eta2theta(eta[, jlocal - iskip], link = extra$link.list[[jlocal]], earg = earg.use) } } if (LLL <- length(extra$col.index.is.multilogit)) { coffs[, extra$col.index.is.multilogit] <- multilogit(eta[, extra$col.index.is.multilogit[-LLL], drop = FALSE], inverse = TRUE) } rowSums(extra$Xm2 * coffs) }, list( .link.list = link.list, .earg.list = earg.list, .esd = esd , .evar = evar ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(link.list.ordered, "sd" = if ( .var.arg ) .lvar else .lsd ) temp.earg.list <- c(earg.list.ordered, "sd" = if ( .var.arg ) list( .orig.evar ) else list( .orig.esd )) misc$earg <- temp.earg.list misc$var.arg <- .var.arg misc$M1 <- M1 misc$expected <- TRUE misc$imethod <- .imethod misc$multipleResponses <- FALSE misc$icoefficients <- .icoefficients }), list( .link.list = link.list, .earg.list = earg.list, .lsd = lsd, .lvar = lvar, .esd = esd, .evar = evar, .orig.esd = orig.esd, .orig.evar = orig.evar, .icoefficients = icoefficients, .var.arg = var.arg, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if ( .var.arg ) { Varm <- eta2theta(eta[, ncol(eta)], .lvar , earg = .evar ) sdev <- sqrt(Varm) } else { sdev <- eta2theta(eta[, ncol(eta)], .lsd , earg = .esd ) } if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dnorm(y, m = mu, sd = sdev, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lsd = lsd, .lvar = lvar, .esd = esd, .evar = evar, .var.arg = var.arg ))), vfamily = c("normal.vcm"), deriv = eval(substitute(expression({ if ( .var.arg ) { Varm <- eta2theta(eta[, M], .lvar , earg = .evar ) sdev <- sqrt(Varm) } else { sdev <- eta2theta(eta[, M], .lsd , earg = .esd ) } zedd <- (y - mu) / sdev dl.dmu <- c(zedd / sdev) # dl.dmu <- (y - mymu) / sdev^2 dmu.dcoffs <- Xm2 mymu <- mu coffs <- eta[, -M, drop = FALSE] # Exclude log(sdev) or log(var) if (LLL <- length(extra$is.multilogit)) { last.one <- max(extra$col.index.is.multilogit) coffs <- cbind(coffs[, 1:(last.one-1)], probsLastmultilogit = 0, if (last.one == M) NULL else coffs[, last.one:ncol(coffs)]) colnames(coffs) <- extra$all.mynames1 } dcoffs.deta <- coffs # Includes any last "multilogit" for (jlocal in 1:ncol(coffs)) { earg.use <- if 
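# Chain rule for the coefficient part: dl/deta_j =
# dl/dmu * dmu/dcoeff_j * dcoeff_j/deta_j, where dmu/dcoeff_j is the
# jth column of Xm2; any "multilogit" block is handled separately below
# because its coefficients share linear predictors.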
(!length(extra$earg.list[[jlocal]])) { list(theta = NULL) } else { extra$earg.list[[jlocal]] } if (!length(extra$is.multilogit) || !extra$is.multilogit[jlocal]) { iskip <- length(extra$is.multilogit) && (jlocal > max(extra$col.index.is.multilogit)) coffs[, jlocal] <- eta2theta(eta[, jlocal - iskip], link = extra$link.list[[jlocal]], earg = earg.use) } } if (LLL <- length(extra$col.index.is.multilogit)) { coffs[, extra$col.index.is.multilogit] <- multilogit(eta[, extra$col.index.is.multilogit[-LLL], drop = FALSE], inverse = TRUE) } for (jlocal in 1:ncol(coffs)) { if (!length(extra$is.multilogit) || !extra$is.multilogit[jlocal]) { earg.use <- if (!length(extra$earg.list[[jlocal]])) { list(theta = NULL) } else { extra$earg.list[[jlocal]] } dcoffs.deta[, jlocal] <- dtheta.deta(coffs[, jlocal], link = extra$link.list[[jlocal]], earg = earg.use) } } if ( .var.arg ) { dl.dva <- -0.5 / Varm + 0.5 * (y - mymu)^2 / sdev^4 } else { dl.dsd <- -1.0 / sdev + (y - mymu)^2 / sdev^3 } if ( .var.arg ) { dva.deta <- dtheta.deta(Varm, .lvar , earg = .evar ) } else { dsd.deta <- dtheta.deta(sdev, .lsd , earg = .esd ) } dMu.deta <- dmu.dcoffs * dcoffs.deta # n x pLM, but may change below if (LLL <- length(extra$col.index.is.multilogit)) { dMu.deta[, extra$col.index.is.multilogit[-LLL]] <- coffs[, extra$col.index.is.multilogit[-LLL]] * (dmu.dcoffs[, extra$col.index.is.multilogit[-LLL]] - rowSums(dmu.dcoffs[, extra$col.index.is.multilogit] * coffs[, extra$col.index.is.multilogit])) dMu.deta <- dMu.deta[, -extra$col.index.is.multilogit[LLL]] } dl.deta <- if ( .var.arg ) c(w) * cbind(dl.dmu * dMu.deta, "var" = c(dl.dva * dva.deta)) else c(w) * cbind(dl.dmu * dMu.deta, "sd" = c(dl.dsd * dsd.deta)) dl.deta }), list( .link.list = link.list, .lsd = lsd, .lvar = lvar, .earg.list = earg.list, .esd = esd, .evar = evar, .var.arg = var.arg ))), weight = eval(substitute(expression({ wz <- matrix(0.0, n, dimm(M)) # Treated as a general full matrix wz[, iam(M, M, M = M)] <- if ( .var.arg ) { ned2l.dva2 <- 0.5 / Varm^2 ned2l.dva2 * dva.deta^2 } else { ned2l.dsd2 <- 2 / sdev^2 ned2l.dsd2 * dsd.deta^2 } if (length(extra$col.index.is.multilogit)) { LLL <- max(extra$col.index.is.multilogit) dmu.dcoffs <- dmu.dcoffs[, -LLL] dcoffs.deta <- dcoffs.deta[, -LLL] } index <- iam(NA, NA, M , both = TRUE, diag = TRUE) indtw <- iam(NA, NA, M-1, both = TRUE, diag = TRUE) ned2l.dmu2 <- 1 / sdev^2 if ((LLL <- length(extra$col.index.is.multilogit))) { dmu.dcoffs[, extra$col.index.is.multilogit[-LLL]] <- dMu.deta[, extra$col.index.is.multilogit[-LLL]] dcoffs.deta[, extra$col.index.is.multilogit[-LLL]] <- 1 } twz <- crossprod(dmu.dcoffs * sqrt(c(w))) / sum(w) twz <- matrix(twz[cbind(indtw$row.index, indtw$col.index)], n, dimm(M-1), byrow = TRUE) if (length(indtw$row.index) != dimm(M-1)) stop("dim of twz incorrect") twz <- twz * dcoffs.deta[, indtw$row.index, drop = FALSE] * dcoffs.deta[, indtw$col.index, drop = FALSE] * ned2l.dmu2 for (ilocal in seq_along(indtw$row.index)) wz[, iam(indtw$row.index[ilocal], indtw$col.index[ilocal], M = M)] <- twz[, iam(indtw$row.index[ilocal], indtw$col.index[ilocal], M = M-1)] c(w) * wz }), list( .var.arg = var.arg )))) } # End of normal.vcm() lognormal <- function(lmeanlog = "identitylink", lsdlog = "loge", zero = "sdlog") { lmulog <- as.list(substitute(lmeanlog)) emulog <- link2list(lmulog) lmulog <- attr(emulog, "function.name") lsdlog <- as.list(substitute(lsdlog)) esdlog <- link2list(lsdlog) lsdlog <- attr(esdlog, "function.name") new("vglmff", blurb = c("Two-parameter (univariate) lognormal 
distribution\n\n", "Links: ", namesof("meanlog", lmulog, earg = emulog, tag = TRUE), ", ", namesof("sdlog", lsdlog, earg = esdlog, tag = TRUE)), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, lmeanlog = .lmeanlog , lsdlog = .lsdlog , expected = TRUE, multipleResponses = FALSE, parameters.names = c("meanlog", "sdlog"), zero = .zero ) }, list( .zero = zero, .lmeanlog = lmeanlog, .lsdlog = lsdlog ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, Is.positive.y = TRUE) predictors.names <- c(namesof("meanlog", .lmulog , earg = .emulog , tag = FALSE), namesof("sdlog", .lsdlog , earg = .esdlog , tag = FALSE)) if (!length(etastart)) { mylm <- lm.wfit(x = x, y = c(log(y)), w = c(w)) sdlog.y.est <- sqrt( sum(c(w) * mylm$resid^2) / mylm$df.residual ) etastart <- cbind( meanlog = rep_len(theta2eta(log(median(y)), .lmulog , earg = .emulog ), n), sdlog = rep_len(theta2eta(sdlog.y.est, .lsdlog , earg = .esdlog ), n)) } }), list( .lmulog = lmulog, .lsdlog = lsdlog, .emulog = emulog, .esdlog = esdlog ))), linkinv = eval(substitute(function(eta, extra = NULL) { mulog <- eta2theta(eta[, 1], .lmulog , earg = .emulog ) sdlog <- eta2theta(eta[, 2], .lsdlog , earg = .esdlog ) exp(mulog + 0.5 * sdlog^2) }, list( .lmulog = lmulog, .lsdlog = lsdlog, .emulog = emulog, .esdlog = esdlog ))), last = eval(substitute(expression({ misc$link <- c("meanlog" = .lmulog , "sdlog" = .lsdlog ) misc$earg <- list("meanlog" = .emulog , "sdlog" = .esdlog ) misc$expected <- TRUE }), list( .lmulog = lmulog, .lsdlog = lsdlog, .emulog = emulog, .esdlog = esdlog ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { mulog <- eta2theta(eta[, 1], .lmulog , earg = .emulog ) sdlog <- eta2theta(eta[, 2], .lsdlog , earg = .esdlog ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dlnorm(y, meanlog = mulog, sdlog = sdlog, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lmulog = lmulog, .lsdlog = lsdlog, .emulog = emulog, .esdlog = esdlog ))), vfamily = c("lognormal"), validparams = eval(substitute(function(eta, y, extra = NULL) { mulog <- eta2theta(eta[, 1], .lmulog , earg = .emulog ) sdlog <- eta2theta(eta[, 2], .lsdlog , earg = .esdlog ) okay1 <- all(is.finite(mulog)) && all(is.finite(sdlog)) && all(0 < sdlog) okay1 }, list( .lmulog = lmulog, .lsdlog = lsdlog, .emulog = emulog, .esdlog = esdlog ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) mulog <- eta2theta(eta[, c(TRUE, FALSE)], .lmulog , earg = .emulog ) sdlog <- eta2theta(eta[, c(FALSE, TRUE)], .lsdlog , earg = .esdlog ) rlnorm(nsim * length(mulog), meanlog = mulog, sdlog = sdlog) }, list( .lmulog = lmulog, .lsdlog = lsdlog, .emulog = emulog, .esdlog = esdlog ))), deriv = eval(substitute(expression({ mulog <- eta2theta(eta[, 1], .lmulog , earg = .emulog ) sdlog <- eta2theta(eta[, 2], .lsdlog , earg = .esdlog ) dmulog.deta <- dtheta.deta(mulog, .lmulog , earg = .emulog ) dsdlog.deta <- dtheta.deta(sdlog, .lsdlog , earg = .esdlog ) dl.dmulog <- (log(y) - mulog) / sdlog^2 dl.dsdlog <- -1 / sdlog + (log(y) - mulog)^2 / sdlog^3 c(w) * cbind(dl.dmulog * dmulog.deta, dl.dsdlog 
* dsdlog.deta) }), list( .lmulog = lmulog, .lsdlog = lsdlog, .emulog = emulog, .esdlog = esdlog ))), weight = expression({ wz <- matrix(NA_real_, n, 2) # Diagonal! ned2l.dmulog2 <- 1 / sdlog^2 ned2l.dsdlog2 <- 2 * ned2l.dmulog2 wz[, iam(1, 1, M)] <- ned2l.dmulog2 * dmulog.deta^2 wz[, iam(2, 2, M)] <- ned2l.dsdlog2 * dsdlog.deta^2 wz = c(w) * wz wz })) } dskewnorm <- function(x, location = 0, scale = 1, shape = 0, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) zedd <- (x - location) / scale loglik <- log(2) + dnorm(zedd, log = TRUE) + pnorm(shape * zedd, log.p = TRUE) - log(scale) loglik[is.infinite(x)] <- log(0) # 20141209 KaiH if (log.arg) { loglik } else { exp(loglik) } } rskewnorm <- function(n, location = 0, scale = 1, shape = 0) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n rho <- shape / sqrt(1 + shape^2) u0 <- rnorm(use.n) v <- rnorm(use.n) u1 <- rho * u0 + sqrt(1 - rho^2) * v ans <- location + scale * sign(u0) * u1 ans[scale <= 0] <- NA ans } skewnormal <- function(lshape = "identitylink", ishape = NULL, nsimEIM = NULL) { lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") if (length(nsimEIM) && (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE) || nsimEIM <= 10)) stop("argument 'nsimEIM' should be an integer greater than 10") new("vglmff", blurb = c("1-parameter skew-normal distribution\n\n", "Link: ", namesof("shape", lshape , earg = eshape), "\n", "Mean: shape * sqrt(2 / (pi * (1 + shape^2 )))\n", "Variance: 1-mu^2"), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, multipleResponses = FALSE, parameters.names = c("shape"), nsimEIM = .nsimEIM) }, list( .nsimEIM = nsimEIM ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y predictors.names <- namesof("shape", .lshape , earg = .eshape , tag = FALSE) if (!length(etastart)) { init.shape <- if (length( .ishape )) rep_len( .ishape , n) else { temp <- y index <- abs(y) < sqrt(2/pi)-0.01 temp[!index] <- y[!index] temp[index] <- sign(y[index]) / sqrt(2/(pi*y[index]*y[index])-1) temp } etastart <- matrix(init.shape, n, ncol(y)) } }), list( .lshape = lshape, .eshape = eshape, .ishape = ishape ))), linkinv = eval(substitute(function(eta, extra = NULL) { alpha <- eta2theta(eta, .lshape , earg = .eshape ) alpha * sqrt(2/(pi * (1+alpha^2 ))) }, list( .eshape = eshape, .lshape = lshape ))), last = eval(substitute(expression({ misc$link <- c(shape = .lshape) misc$earg <- list(shape = .eshape ) misc$nsimEIM <- .nsimEIM misc$expected <- (length( .nsimEIM ) > 0) }), list( .eshape = eshape, .lshape = lshape, .nsimEIM = nsimEIM ))), linkfun = eval(substitute(function(mu, extra = NULL) { alpha <- mu / sqrt(2/pi - mu^2) theta2eta(alpha, .lshape , earg = .eshape ) }, list( .eshape = eshape, .lshape = lshape ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { alpha <- eta2theta(eta, .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dskewnorm(x = y, location = 0, scale = 1, shape = alpha, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .eshape = eshape, .lshape = lshape ))), vfamily = c("skewnormal"), validparams = 
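# A minimal usage sketch (hypothetical data): the shape argument of
# skewnormal() controls the skewness, with shape = 0 recovering N(0, 1).
#   sdata <- data.frame(y1 = rskewnorm(1000, shape = 5))
#   fit <- vglm(y1 ~ 1, skewnormal(ishape = 3), data = sdata)
#   Coef(fit)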
eval(substitute(function(eta, y, extra = NULL) {
    alpha <- eta2theta(eta, .lshape , earg = .eshape )
    okay1 <- all(is.finite(alpha))
    okay1
  }, list( .eshape = eshape, .lshape = lshape ))),
  simslot = eval(substitute(
  function(object, nsim) {
    pwts <- if (length(pwts <- object@prior.weights) > 0)
              pwts else weights(object, type = "prior")
    if (any(pwts != 1))
      warning("ignoring prior weights")
    eta <- predict(object)
    alpha <- eta2theta(eta, .lshape , earg = .eshape )
    rskewnorm(nsim * length(alpha),
              location = 0, scale = 1, shape = alpha)
  }, list( .eshape = eshape, .lshape = lshape ))),
  deriv = eval(substitute(expression({
    alpha <- eta2theta(eta, .lshape , earg = .eshape )
    zedd <- y * alpha
    tmp76 <- pnorm(zedd)
    tmp86 <- dnorm(zedd)
    dl.dshape <- tmp86 * y / tmp76
    dshape.deta <- dtheta.deta(alpha, .lshape , earg = .eshape )
    c(w) * dl.dshape * dshape.deta
  }), list( .eshape = eshape, .lshape = lshape ))),
  weight = eval(substitute(expression({
    if (length( .nsimEIM )) {
      # Simulation-based (expected) information via Monte Carlo:
      run.mean <- 0
      for (ii in 1:( .nsimEIM )) {
        ysim <- rskewnorm(n, location = 0, scale = 1, shape = alpha)
        zedd <- ysim * alpha
        tmp76 <- pnorm(zedd)
        tmp86 <- dnorm(zedd)
        d2l.dshape2 <- -ysim * ysim * tmp86 *
                       (tmp76 * zedd + tmp86) / tmp76^2
        rm(ysim)
        run.mean <- ((ii-1) * run.mean + d2l.dshape2) / ii
      }
      if (intercept.only)
        run.mean <- mean(run.mean)
      wz <- -c(w) * (dshape.deta^2) * run.mean
    } else {
      # Observed information (Newton-Raphson):
      d2shape.deta2 <- d2theta.deta2(alpha, .lshape , earg = .eshape )
      d2l.dshape2 <- -y * y * tmp86 * (tmp76 * zedd + tmp86) / tmp76^2
      wz <- -(dshape.deta^2) * d2l.dshape2 - d2shape.deta2 * dl.dshape
      wz <- c(w) * wz
    }
    wz
  }), list( .eshape = eshape, .lshape = lshape, .nsimEIM = nsimEIM ))))
}
VGAM/R/formula.vlm.q0000644000176200001440000001331113135276757013650 0ustar liggesusers# These functions are
# Copyright (C) 1998-2017 T.W. Yee, University of Auckland.
# All rights reserved.

formula.vlm <- function(x, ...)
  formulavlm(x, ...)

formulavlm <- function(x, form.number = 1, ...) {
  if (!is.Numeric(form.number, integer.valued = TRUE,
                  length.arg = 1, positive = TRUE) ||
      form.number > 2)
    stop("argument 'form.number' must be 1 or 2")
  if (!any(slotNames(x) == "misc"))
    stop("cannot find slot 'misc'")
  if (form.number == 1) x@misc$formula else x@misc$form2
}

formulaNA.VGAM <- function(x, ...) {
  stop("a formula does not make sense for object 'x'")
}

setMethod("formula", "vlm",
          function(x, ...) formulavlm(x = x, ...))
setMethod("formula", "vglm",
          function(x, ...) formulavlm(x = x, ...))
setMethod("formula", "vgam",
          function(x, ...) formulavlm(x = x, ...))
setMethod("formula", "rrvglm",
          function(x, ...) formulavlm(x = x, ...))
setMethod("formula", "qrrvglm",
          function(x, ...) formulavlm(x = x, ...))
setMethod("formula", "grc",
          function(x, ...) formulavlm(x = x, ...))

variable.namesvlm <- function(object, full = FALSE, ...) {
  qrslot <- object@qr
  if (!length(qrslot$qr)) {
    use.this <- object@x
    if (!length(use.this))
      stop("argument 'object' has empty 'qr' and 'x' slots.")
  } else {
    use.this <- qrslot$qr
  }
  if (full) dimnames(use.this)[[2]] else
  if (object@rank) dimnames(use.this)[[2]][seq_len(object@rank)] else
  character(0)
}

variable.namesrrvglm <- function(object, ...) {
  qrslot <- object@qr
  if (!length(qrslot$qr)) {
    use.this <- object@x
    if (!length(use.this))
      stop("argument 'object' has empty 'qr' and 'x' slots.")
  } else {
    use.this <- qrslot$qr
  }
  dimnames(use.this)[[2]]
}

case.namesvlm <- function(object, full = FALSE, ...)
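# These extractor methods let the usual accessors work on VGAM fits,
# e.g. formula(fit) returns the main formula and
# formula(fit, form.number = 2) returns any 'form2' part.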
{ w <- weights(object, type="prior") use.this <- residuals(object, type = "working") if (!length(use.this)) use.this <- object@x if (!length(use.this)) use.this <- object@y if (!length(use.this)) stop("argument 'object' has empty 'x' and 'y' slots.") dn <- dimnames(use.this)[[1]] if (full || is.null(w) || NCOL(w) != 1) dn else dn[w != 0] } setMethod("variable.names", "vlm", function(object, ...) variable.namesvlm(object = object, ...)) setMethod("variable.names", "vglm", function(object, ...) variable.namesvlm(object = object, ...)) setMethod("variable.names", "vgam", function(object, ...) variable.namesvlm(object = object, ...)) setMethod("variable.names", "rrvglm", function(object, ...) variable.namesrrvglm(object = object, ...)) setMethod("variable.names", "qrrvglm", function(object, ...) variable.namesvlm(object = object, ...)) setMethod("variable.names", "grc", function(object, ...) variable.namesvlm(object = object, ...)) setMethod("case.names", "vlm", function(object, ...) case.namesvlm(object = object, ...)) setMethod("case.names", "vglm", function(object, ...) case.namesvlm(object = object, ...)) setMethod("case.names", "vgam", function(object, ...) case.namesvlm(object = object, ...)) setMethod("case.names", "rrvglm", function(object, ...) case.namesvlm(object = object, ...)) setMethod("case.names", "qrrvglm", function(object, ...) case.namesvlm(object = object, ...)) setMethod("case.names", "grc", function(object, ...) case.namesvlm(object = object, ...)) has.interceptvlm <- function(object, form.number = 1, ...) { if (!is.Numeric(form.number, integer.valued = TRUE, length.arg = 1, positive = TRUE) || form.number > 2) stop("argument 'form.number' must be 1 or 2") if (form.number == 1) { if (is.numeric(aa <- attr(terms(object), "intercept"))) as.logical(aa) else FALSE } else if (form.number == 2) { if (is.numeric(aa <- attr(terms(object, form.number = 2), "intercept"))) as.logical(aa) else FALSE } } if (!isGeneric("has.intercept")) setGeneric("has.intercept", function(object, ...) standardGeneric("has.intercept"), package = "VGAM") setMethod("has.intercept", "vlm", function(object, ...) has.interceptvlm(object, ...)) term.namesvlm <- function(model, form.number = 1, ...) { if (!is.Numeric(form.number, integer.valued = TRUE, length.arg = 1, positive = TRUE) || form.number > 2) stop("argument 'form.number' must be 1 or 2") aa <- if (has.intercept(model, form.number = form.number)) "(Intercept)" else NULL bb <- attr(terms(model, form.number = form.number), "term.labels") c(aa, bb) } if (!isGeneric("term.names")) setGeneric("term.names", function(model, ...) standardGeneric("term.names"), package = "VGAM") setMethod("term.names", "vlm", function(model, ...) term.namesvlm(model, ...)) responseNamevlm <- function(model, form.number = 1, ...) { TERMS.MODEL <-terms(model, form.number = form.number) if (length(aa <- attr(TERMS.MODEL, "dataClasses")) && length(bb <- attr(TERMS.MODEL, "response" )) && bb == 1) { names(aa)[1] } else { NULL } } if (!isGeneric("responseName")) setGeneric("responseName", function(model, ...) standardGeneric("responseName"), package = "VGAM") setMethod("responseName", "vlm", function(model, ...) responseNamevlm(model, ...)) VGAM/R/psv2magic.R0000644000176200001440000000535013135276760013240 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
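# psv2magic() appears designed to rearrange the VLM model matrix so
# that the columns of each O-spline term are contiguous, returning the
# permuted matrix together with 'sp', 'S', and 'OFF' in the form that
# mgcv::magic() expects (an inference from the returned names; not
# documented here).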
psv2magic <- function(x.VLM, constraints, spar.vlm, sm.osps.list) { colperm <- function(x, from, to) { ncx <- ncol(x) if (length(from) != length(to) || any(from != round(from)) || any(from < 1 | ncx < from) || any(duplicated(from)) || any(sort(from) != sort(to))) stop("invalid column permutation indices") perm <- seq_len(ncx) perm[to] <- perm[from] x[, perm] } assignx <- sm.osps.list$assignx nassignx <- names(assignx) indexterms <- sm.osps.list$indexterms which.X.sm.osps <- sm.osps.list$which.X.sm.osps term.labels <- sm.osps.list$term.labels ncol.X.sm.osps <- sapply(which.X.sm.osps, length) ncolHlist.model <- unlist(lapply(constraints, ncol)) ncolHlist.new <- ncolHlist.model if (names(constraints)[[1]] == "(Intercept)") { ncolHlist.new <- ncolHlist.new[-1] nassignx <- nassignx[-1] } ncol.H.ps <- ncolHlist.new[indexterms] num.osps.terms <- length(which.X.sm.osps) ncol.allterms <- sapply(assignx, length) ncol.model <- if (names(constraints)[[1]] == "(Intercept)") ncol.allterms[-1] else ncol.allterms jay <- 0 jjoffset <- if (names(constraints)[[1]] == "(Intercept)") ncolHlist.model[1] else 0 perm.list <- list() for (ii in seq_along(term.labels)) { if (indexterms[ii]) { jay <- jay + 1 perm.list[[jay]] <- matrix(jjoffset + 1:(ncol.X.sm.osps[jay] * ncol.H.ps[jay]), nrow = ncol.X.sm.osps[jay], # Redundant really ncol = ncol.H.ps[jay], byrow = TRUE) jjoffset <- jjoffset + ncol.H.ps[[jay]] * ncol.X.sm.osps[[jay]] } else { jjoffset <- jjoffset + ncolHlist.new[ii] * ncol.model[ii] } } # for ii vindex.min <- sapply(perm.list, min) # function(x) min(x) vindex.max <- sapply(perm.list, max) # function(x) max(x) oo1 <- vector("list", length(ncol.H.ps)) # list() for (ii in seq_along(ncol.H.ps)) { oo1[[ii]] <- seq.int(vindex.min[ii], vindex.max[ii]) } ooo <- unlist(oo1, use.names = FALSE) # do.call("c", oo1) ppp <- unlist(perm.list, use.names = FALSE) # do.call("c", perm.list) OFF.list <- vector("list", num.osps.terms) # list() for (ii in 1:num.osps.terms) { index <- 0 OFF.list[[ii]] <- numeric() for (jay in 1:(ncol.H.ps[ii])) { OFF.list[[ii]][jay] <- vindex.min[ii] + index index <- ncol.X.sm.osps[ii] * jay } } list(x.VLM.new = if (identical(ppp, ooo)) x.VLM else colperm(x.VLM, ppp, ooo), sp = unlist(spar.vlm), S.arg = rep(sm.osps.list$S.arg, ncol.H.ps), # Argument 'S' of magic() OFF = unlist(OFF.list)) } # psv2magic VGAM/R/predict.vlm.q0000644000176200001440000002630313135276757013642 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. predict.vlm <- function(object, newdata = NULL, type = c("response", "terms", "Xlm", "Xm2", "Xvlm"), # 20170418 this line added se.fit = FALSE, scale = NULL, terms.arg = NULL, raw = FALSE, dispersion = NULL, ...) 
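# A usage sketch (hypothetical 'fit' and 'ndata'):
#   predict(fit, newdata = ndata)                # default "response"
#   predict(fit, type = "terms", se.fit = TRUE)  # centred terms + SEs
#   predict(fit, type = "Xvlm")                  # the big VLM model matrix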
{ Xm2 <- NULL xij.used <- length(form2 <- object@misc$form2) || length(object@control$xij) if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("response", "terms", "Xlm", "Xm2", "Xvlm"))[1] na.act <- object@na.action object@na.action <- list() if (raw && type != "terms") stop("sorry, 'raw=TRUE' only works when 'type=\"terms\"'") if (!length(newdata) && type == "response" && !se.fit && length(object@fitted.values)) { if (length(na.act)) { return(napredict(na.act[[1]], object@fitted.values)) } else { return(object@fitted.values) } } ttob <- terms(object) # 20030811; object@terms$terms if (!length(newdata)) { offset <- object@offset if (xij.used) { bothList <- model.matrix(object, type = "bothlmlm2") X <- bothList$X Xm2 <- bothList$Xm2 } else { X <- model.matrix(object, type = "lm") } } else { if (is.smart(object) && length(object@smart.prediction)) { setup.smart("read", smart.prediction = object@smart.prediction) } X <- model.matrix(delete.response(ttob), newdata, contrasts = if (length(object@contrasts)) object@contrasts else NULL, xlev = object@xlevels) if (xij.used) { ttXm2 <- terms(form2) Xm2 <- model.matrix(delete.response(ttXm2), newdata, contrasts = if (length(object@contrasts)) object@contrasts else NULL, xlev = object@xlevels) } if (object@misc$intercept.only && nrow(X) != nrow(newdata)) { as.save <- attr(X, "assign") X <- X[rep_len(1, nrow(newdata)), , drop = FALSE] dimnames(X) <- list(dimnames(newdata)[[1]], "(Intercept)") attr(X, "assign") <- as.save # Restored } offset <- if (!is.null(off.num <- attr(ttob, "offset"))) { eval(attr(ttob, "variables")[[off.num + 1]], newdata) } else if (!is.null(object@offset)) eval(object@call$offset, newdata) if (is.smart(object) && length(object@smart.prediction)) { wrapup.smart() } attr(X, "assign") <- attrassigndefault(X, ttob) if (length(Xm2)) attr(Xm2, "assign") <- attrassigndefault(Xm2, ttXm2) } # newdata is given if (type == "Xlm") return(X) if (type == "Xm2") return(Xm2) hasintercept <- attr(ttob, "intercept") dx1 <- dimnames(X)[[1]] M <- object@misc$M Hlist <- object@constraints ncolHlist <- unlist(lapply(Hlist, ncol)) if (hasintercept) ncolHlist <- ncolHlist[-1] xbar <- x2bar <- NULL if (type == "terms" && hasintercept) { if (length(object@control$xij)) { x2bar <- colMeans(Xm2) * ifelse(type == "Xvlm", 0, 1) Xm2 <- sweep(Xm2, 2, x2bar) } xbar <- colMeans(X) * ifelse(type == "Xvlm", 0, 1) X <- sweep(X, 2, xbar) nac <- is.na(object@coefficients) if (any(nac)) { if (length(object@control$xij)) stop("cannot handle 'xij' argument when ", "there are NAs in the coefficients") X <- X[, !nac, drop = FALSE] xbar <- xbar[!nac] } } # if (type == "terms" && hasintercept) if (!is.null(newdata) && !is.data.frame(newdata)) newdata <- as.data.frame(newdata) nn <- if (!is.null(newdata)) nrow(newdata) else object@misc$n if (raw) { Hlist <- canonical.Hlist(Hlist) object@constraints <- Hlist } X_vlm <- lm2vlm.model.matrix(X, Hlist = Hlist, M = M, xij = object@control$xij, Xm2 = Xm2) attr(X_vlm, "constant") <- xbar attr(X_vlm, "constant2") <- x2bar if (type == "Xvlm") return(X_vlm) coefs <- coefvlm(object) vasgn <- attr(X_vlm, "vassign") if (type == "terms") { nv <- names(vasgn) if (hasintercept) nv <- nv[-(1:ncol(object@constraints[["(Intercept)"]]))] terms.arg <- if (is.null(terms.arg)) nv else terms.arg index <- charmatch(terms.arg, nv) if (all(index == 0)) { warning("no match found; returning all terms") index <- seq_along(nv) } vasgn <- vasgn[nv[index]] } # if (type == "terms") if 
(anyNA(object@coefficients)) stop("cannot handle NAs in 'object@coefficients'") dname2 <- object@misc$predictors.names if (se.fit) { object <- as(object, "vlm") # Coerce fit.summary <- summaryvlm(object, dispersion = dispersion) sigma <- if (is.numeric(fit.summary@sigma)) fit.summary@sigma else sqrt(deviance(object) / object@df.residual) # was @ResSS pred <- Build.terms.vlm(x = X_vlm, coefs = coefs, cov = sigma^2 * fit.summary@cov.unscaled, assign = vasgn, collapse = type != "terms", M = M, dimname = list(dx1, dname2), coefmat = coefvlm(object, matrix.out = TRUE)) pred$df <- object@df.residual pred$sigma <- sigma } else { pred <- Build.terms.vlm(x = X_vlm, coefs = coefs, cov = NULL, # Only this line differs from above assign = vasgn, collapse = type != "terms", M = M, dimname = list(dx1, dname2), coefmat = coefvlm(object, matrix.out = TRUE)) } # !se.fit constant <- attr(pred, "constant") if (type != "terms" && length(offset) && any(offset != 0)) { if (se.fit) { pred$fitted.values <- pred$fitted.values + offset } else { pred <- pred + offset } } if (type == "terms") { Hlist <- subconstraints(object@misc$orig.assign, object@constraints) ncolHlist <- unlist(lapply(Hlist, ncol)) if (hasintercept) ncolHlist <- ncolHlist[-1] cs <- cumsum(c(1, ncolHlist)) # Like a pointer for (ii in 1:(length(cs)-1)) if (cs[ii+1] - cs[ii] > 1) for (kk in (cs[ii]+1):(cs[ii+1]-1)) if (se.fit) { pred$fitted.values[, cs[ii]] <- pred$fitted.values[, cs[ii]] + pred$fitted.values[, kk] pred$se.fit[, cs[ii]] <- pred$se.fit[, cs[ii]] + pred$se.fit[, kk] } else { pred[, cs[ii]] <- pred[, cs[ii]] + pred[, kk] } if (se.fit) { pred$fitted.values <- pred$fitted.values[, cs[-length(cs)], drop = FALSE] pred$se.fit <- pred$se.fit[, cs[-length(cs)], drop = FALSE] } else { pred <- pred[, cs[-length(cs)], drop = FALSE] } pp <- if (se.fit) ncol(pred$fitted.values) else ncol(pred) if (se.fit) { dimnames(pred$fitted.values) <- dimnames(pred$se.fit) <- NULL dim(pred$fitted.values) <- dim(pred$se.fit) <- c(M, nn, pp) pred$fitted.values <- aperm(pred$fitted.values, c(2, 1, 3)) pred$se.fit <- aperm(pred$se.fit, c(2, 1, 3)) dim(pred$fitted.values) <- dim(pred$se.fit) <- c(nn, M*pp) } else { dimnames(pred) <- NULL # Saves a warning dim(pred) <- c(M, nn, pp) pred <- aperm(pred, c(2, 1, 3)) dim(pred) <- c(nn, M*pp) } if (raw) { kindex <- NULL for (ii in 1:pp) kindex <- c(kindex, (ii-1) * M + (1:ncolHlist[ii])) if (se.fit) { pred$fitted.values <- pred$fitted.values[, kindex, drop = FALSE] pred$se.fit <- pred$se.fit[, kindex, drop = FALSE] } else { pred <- pred[, kindex, drop = FALSE] } } temp <- if (raw) ncolHlist else rep_len(M, length(ncolHlist)) dd <- vlabel(names(ncolHlist), temp, M) if (se.fit) { dimnames(pred$fitted.values) <- dimnames(pred$se.fit) <- list(if (length(newdata)) dimnames(newdata)[[1]] else dx1, dd) } else { dimnames(pred) <- list(if (length(newdata)) dimnames(newdata)[[1]] else dx1, dd) } if (!length(newdata) && length(na.act)) { if (se.fit) { pred$fitted.values <- napredict(na.act[[1]], pred$fitted.values) pred$se.fit <- napredict(na.act[[1]], pred$se.fit) } else { pred <- napredict(na.act[[1]], pred) } } if (!raw) cs <- cumsum(c(1, M + 0 * ncolHlist)) fred <- vector("list", length(ncolHlist)) for (ii in seq_along(fred)) fred[[ii]] <- cs[ii]:(cs[ii+1]-1) names(fred) <- names(ncolHlist) if (se.fit) { attr(pred$fitted.values, "vterm.assign") <- attr(pred$se.fit, "vterm.assign") <- fred } else { attr(pred, "vterm.assign") <- fred } } # End of if (type == "terms") if (!is.null(xbar)) { if (se.fit) { attr(pred$fitted.values, 
"constant") <- constant } else { attr(pred, "constant") <- constant } } pred } # predict.vlm() setMethod("predict", "vlm", function(object, ...) predict.vlm(object, ...)) predict.vglm.se <- function(fit, ...) { H.ss <- hatvalues(fit, type = "centralBlocks") # diag = FALSE M <- npred(fit) nn <- nobs(fit, type = "lm") U <- vchol(weights(fit, type = "working"), M = M, n = nn) Uarray <- array(0, c(M, M, nn)) ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) MMp1d2 <- M * (M + 1) / 2 for (jay in 1:MMp1d2) Uarray[ind1$row.index[jay], ind1$col.index[jay], ] <- U[jay, ] Uinv.array <- apply(Uarray, 3, backsolve, x = diag(M)) dim(Uinv.array) <- c(M, M, nn) Utinv.array <- Uinv.array if (M > 1) for (jay in 1:(M-1)) { for (kay in (jay+1):M) { Utinv.array[kay, jay, ] <- Uinv.array[jay, kay, ] Utinv.array[jay, kay, ] <- 0 } } var.boldeta.i <- mux5(H.ss, Utinv.array, M = M, matrix.arg = TRUE) # First M cols are SE^2 sqrt(var.boldeta.i[, 1:M]) # SE(linear.predictor) sqrt(var.boldeta.i[, 1:M]) } subconstraints <- function(assign, constraints) { ans <- vector("list", length(assign)) if (!length(assign) || !length(constraints)) stop("assign and/or constraints is empty") for (ii in seq_along(assign)) ans[[ii]] <- constraints[[assign[[ii]][1]]] names(ans) <- names(assign) ans } is.linear.term <- function(ch) { lchar <- length(ch) ans <- rep_len(FALSE, lchar) for (ii in 1:lchar) { nc <- nchar(ch[ii]) x <- substring(ch[ii], 1:nc, 1:nc) ans[ii] <- all(x != "(" & x != "+" & x != "-" & x != "/" & x != "*" & x != "^") } names(ans) <- ch ans } canonical.Hlist <- function(Hlist) { for (ii in seq_along(Hlist)) { temp <- Hlist[[ii]] * 0 temp[cbind(1:ncol(temp), 1:ncol(temp))] <- 1 Hlist[[ii]] <- temp } Hlist } VGAM/R/lrwaldtest.R0000644000176200001440000003613413135276760013536 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. update_default <- function (object, formula., ..., evaluate = TRUE) { if (is.null(call <- getCall(object))) stop("need an object with call component") extras <- match.call(expand.dots = FALSE)$... if (!missing(formula.)) { call$formula <- update_formula(formula(object), formula.) } if (length(extras)) { existing <- !is.na(match(names(extras), names(call))) for (a in names(extras)[existing]) call[[a]] <- extras[[a]] if (any(!existing)) { call <- c(as.list(call), extras[!existing]) call <- as.call(call) } } if (evaluate) eval(call, parent.frame()) else call } update_formula <- function (old, new, ...) { tmp <- (update.formula(as.formula(old), as.formula(new))) out <- formula(terms.formula(tmp, simplify = TRUE)) return(out) } if (FALSE) print_anova <- function (x, digits = max(getOption("digits") - 2, 3), signif.stars = getOption("show.signif.stars"), ...) { x <- x@Body if (!is.null(heading <- attr(x, "heading"))) cat(heading, sep = "\n") nc <- dim(x)[2L] if (is.null(cn <- colnames(x))) stop("'anova' object must have colnames") has.P <- grepl("^(P|Pr)\\(", cn[nc]) zap.i <- 1L:(if (has.P) nc - 1 else nc) i <- which(substr(cn, 2, 7) == " value") i <- c(i, which(!is.na(match(cn, c("F", "Cp", "Chisq"))))) if (length(i)) zap.i <- zap.i[!(zap.i %in% i)] tst.i <- i if (length(i <- grep("Df$", cn))) zap.i <- zap.i[!(zap.i %in% i)] stats::printCoefmat(x, digits = digits, signif.stars = signif.stars, has.Pvalue = has.P, P.values = has.P, cs.ind = NULL, zap.ind = zap.i, tst.ind = tst.i, na.print = "", ...) invisible(x) } setGeneric("lrtest", function(object, ...) 
standardGeneric("lrtest"), package = "VGAM") setClass("VGAManova", representation( "Body" = "data.frame")) lrtest_vglm <- function(object, ..., no.warning = FALSE, # 20160802 name = NULL) { cls <- class(object)[1] nobs <- function(x) x@misc$nrow.X.vlm tlab <- function(x) attr(terms(x), "term.labels") if (is.null(name)) name <- function(x) paste(deparse(formula(x)), collapse = "\n") modelUpdate <- function(fm, update, no.warning = FALSE) { if (is.numeric(update)) { if (any(update < 1)) { if (!no.warning) warning("for numeric model specifications all values ", "have to be >=1") update <- abs(update)[abs(update) > 0] } if (any(update > length(tlab(fm)))) { if (!no.warning) warning("more terms specified than existent in the model: ", paste(as.character(update[update > length(tlab(fm))]), collapse = ", ")) update <- update[update <= length(tlab(fm))] } update <- tlab(fm)[update] } if (is.character(update)) { if (!all(update %in% tlab(fm))) { if (!no.warning) warning("terms specified that are not in the model:", paste(dQuote(update[!(update %in% tlab(fm))]), collapse = ", ")) update <- update[update %in% tlab(fm)] } if (length(update) < 1) stop("empty model specification") update <- as.formula(paste(". ~ . -", paste(update, collapse = " - "))) } if (inherits(update, "formula")) { update <- update_default(fm, update) } if (!inherits(update, cls)) { if (!no.warning) warning("original model was of class '", cls, "', updated model is of class '", class(update)[1], "'") } return(update) } objects <- list(object, ...) nmodels <- length(objects) if (nmodels < 2) { objects <- c(objects, . ~ 1) nmodels <- 2 } no.update <- sapply(objects, function(obj) inherits(obj, cls)) for (i in 2:nmodels) { objects[[i]] <- modelUpdate(objects[[i-1]], objects[[i]], no.warning = no.warning) } ns <- sapply(objects, nobs) if (any(ns != ns[1])) { for (i in 2:nmodels) { if (ns[1] != ns[i]) { if (no.update[i]) stop("models were not all fitted to ", "the same size of dataset") else { commonobs <- row.names(model.frame(objects[[i]])) %in% row.names(model.frame(objects[[i-1]])) objects[[i]] <- eval(substitute(update(objects[[i]], subset = commonobs), list(commonobs = commonobs))) if (nobs(objects[[i]]) != ns[1]) stop("models could not be fitted to the same size of dataset") } } } } rval <- matrix(rep_len(NA_real_, 5 * nmodels), ncol = 5) colnames(rval) <- c("#Df", "LogLik", "Df", "Chisq", "Pr(>Chisq)") rownames(rval) <- 1:nmodels logLlist <- lapply(objects, logLik) dflist <- lapply(objects, df.residual) rval[,1] <- unlist(dflist) rval[,2] <- unlist(logLlist) rval[2:nmodels, 3] <- rval[2:nmodels, 1] - rval[1:(nmodels-1), 1] rval[2:nmodels, 4] <- 2 * abs(rval[2:nmodels, 2] - rval[1:(nmodels-1), 2]) rval[,5] <- pchisq(rval[,4], round(abs(rval[,3])), lower.tail = FALSE) variables <- lapply(objects, name) title <- "Likelihood ratio test\n" topnote <- paste("Model ", format(1:nmodels), ": ", variables, sep = "", collapse = "\n") new("VGAManova", Body = structure(as.data.frame(rval), heading = c(title, topnote))) } setMethod("lrtest", "vglm", function(object, ...) lrtest_vglm(object = object, ...)) setMethod("show", "VGAManova", function(object) getS3method("print", "anova")(object@Body)) use.S3.lrtest <- TRUE use.S3.lrtest <- FALSE if (use.S3.lrtest) lrtest <- function(object, ...) 
{ UseMethod("lrtest") } if (use.S3.lrtest) lrtest.formula <- function(object, ..., data = list()) { object <- if (length(data) < 1) eval(call("lm", formula = as.formula(deparse(substitute(object))), environment(object))) else eval(call("lm", formula = as.formula(deparse(substitute(object))), data = as.name(deparse(substitute(data))), environment(data))) lrtest.default(object, ...) } if (use.S3.lrtest) lrtest.default <- function(object, ..., name = NULL) { cls <- class(object)[1] nobs <- function(x) NROW(residuals(x)) tlab <- function(x) attr(terms(x), "term.labels") if (is.null(name)) name <- function(x) paste(deparse(formula(x)), collapse = "\n") modelUpdate <- function(fm, update) { if (is.numeric(update)) { if (any(update < 1)) { warning("for numeric model specifications all values ", "have to be >=1") update <- abs(update)[abs(update) > 0] } if (any(update > length(tlab(fm)))) { warning("more terms specified than existent in the model: ", paste(as.character(update[update > length(tlab(fm))]), collapse = ", ")) update <- update[update <= length(tlab(fm))] } update <- tlab(fm)[update] } if (is.character(update)) { if (!all(update %in% tlab(fm))) { warning("terms specified that are not in the model: ", paste(dQuote(update[!(update %in% tlab(fm))]), collapse = ", ")) update <- update[update %in% tlab(fm)] } if (length(update) < 1) stop("empty model specification") update <- as.formula(paste(". ~ . -", paste(update, collapse = " - "))) } if (inherits(update, "formula")) update <- update(fm, update) if (!inherits(update, cls)) warning("original model was of class '", cls, "', updated model is of class '", class(update)[1], "'") return(update) } objects <- list(object, ...) nmodels <- length(objects) if (nmodels < 2) { objects <- c(objects, . ~ 1) print("objects 1") print( objects ) nmodels <- 2 } no.update <- sapply(objects, function(obj) inherits(obj, cls)) print("no.update") print( no.update ) for (i in 2:nmodels) objects[[i]] <- modelUpdate(objects[[i-1]], objects[[i]]) print("objects i") print( objects ) ns <- sapply(objects, nobs) if (any(ns != ns[1])) { for (i in 2:nmodels) { if (ns[1] != ns[i]) { if (no.update[i]) stop("models were not all fitted to ", "the same size of dataset") else { commonobs <- row.names(model.frame(objects[[i]])) %in% row.names(model.frame(objects[[i-1]])) print("commonobs") print( commonobs ) objects[[i]] <- eval(substitute(update(objects[[i]], subset = commonobs), list(commonobs = commonobs))) if (nobs(objects[[i]]) != ns[1]) stop("models could not be fitted to the same size of dataset") } } } } rval <- matrix(rep_len(NA_real_, 5 * nmodels), ncol = 5) colnames(rval) <- c("#Df", "LogLik", "Df", "Chisq", "Pr(>Chisq)") rownames(rval) <- 1:nmodels logL <- lapply(objects, logLik) rval[,1] <- as.numeric(sapply(logL, function(x) attr(x, "df"))) rval[,2] <- sapply(logL, as.numeric) rval[2:nmodels, 3] <- rval[2:nmodels, 1] - rval[1:(nmodels-1), 1] rval[2:nmodels, 4] <- 2 * abs(rval[2:nmodels, 2] - rval[1:(nmodels-1), 2]) rval[,5] <- pchisq(rval[,4], round(abs(rval[,3])), lower.tail = FALSE) variables <- lapply(objects, name) title <- "Likelihood ratio test\n" topnote <- paste("Model ", format(1:nmodels), ": ", variables, sep = "", collapse = "\n") structure(as.data.frame(rval), heading = c(title, topnote), class = c("anova", "data.frame")) } # End of lrtest.default if (FALSE) setGeneric("waldtest", function(object, ...) standardGeneric("waldtest"), package = "VGAM") if (FALSE) waldtest <- function(object, ...) 
{ UseMethod("waldtest") } waldtest_formula <- function(object, ..., data = list()) { stop("cannot find waldtest_lm()") object <- if (length(data) < 1) eval(call("lm", formula = as.formula(deparse(substitute(object))), environment(object))) else eval(call("lm", formula = as.formula(deparse(substitute(object))), data = as.name(deparse(substitute(data))), environment(data))) } waldtest_default <- function(object, ..., vcov = NULL, test = c("Chisq", "F"), name = NULL) { vcov. <- vcov cls <- class(object)[1] nobs <- function(x) NROW(residuals(x)) tlab <- function(x) attr(terms(x), "term.labels") if (is.null(name)) name <- function(x) paste(deparse(formula(x)), collapse = "\n") modelUpdate <- function(fm, update) { if (is.numeric(update)) { if (any(update < 1)) { warning("for numeric model specifications all values ", "have to be >=1") update <- abs(update)[abs(update) > 0] } if (any(update > length(tlab(fm)))) { warning("more terms specified than existent in the model: ", paste(as.character(update[update > length(tlab(fm))]), collapse = ", ")) update <- update[update <= length(tlab(fm))] } update <- tlab(fm)[update] } if (is.character(update)) { if (!all(update %in% tlab(fm))) { warning("terms specified that are not in the model: ", paste(dQuote(update[!(update %in% tlab(fm))]), collapse = ", ")) update <- update[update %in% tlab(fm)] } if (length(update) < 1) stop("empty model specification") update <- as.formula(paste(". ~ . -", paste(update, collapse = " - "))) } if (inherits(update, "formula")) update <- update(fm, update) if (!inherits(update, cls)) stop("original model was of class '", cls, "', updated model is of class '", class(update)[1], "'") return(update) } modelCompare <- function(fm, fm.up, vfun = NULL) { q <- length(coef(fm)) - length(coef(fm.up)) if (q > 0) { fm0 <- fm.up fm1 <- fm } else { fm0 <- fm fm1 <- fm.up } k <- length(coef(fm1)) n <- nobs(fm1) if (!all(tlab(fm0) %in% tlab(fm1))) stop("models are not nested") ovar <- which(!(names(coef(fm1)) %in% names(coef(fm0)))) vc <- if (is.null(vfun)) vcov(fm1) else if (is.function(vfun)) vfun(fm1) else vfun stat <- t(coef(fm1)[ovar]) %*% solve(vc[ovar,ovar]) %*% coef(fm1)[ovar] return(c(-q, stat)) } objects <- list(object, ...) nmodels <- length(objects) if (nmodels < 2) { objects <- c(objects, . ~ 1) nmodels <- 2 } no.update <- sapply(objects, function(obj) inherits(obj, cls)) for (i in 2:nmodels) objects[[i]] <- modelUpdate(objects[[i-1]], objects[[i]]) responses <- as.character(lapply(objects, function(x) deparse(terms(x)[[2]]))) sameresp <- responses == responses[1] if (!all(sameresp)) { objects <- objects[sameresp] warning("models with response ", deparse(responses[!sameresp]), " removed because response differs from ", "model 1") } ns <- sapply(objects, nobs) if (any(ns != ns[1])) { for (i in 2:nmodels) { if (ns[1] != ns[i]) { if (no.update[i]) stop("models were not all fitted to the ", "same size of dataset") else { commonobs <- row.names(model.frame(objects[[i]])) %in% row.names(model.frame(objects[[i-1]])) objects[[i]] <- eval(substitute(update(objects[[i]], subset = commonobs), list(commonobs = commonobs))) if (nobs(objects[[i]]) != ns[1]) stop("models could not be fitted to the same size of dataset") } } } } if (nmodels > 2 && !is.null(vcov.) && !is.function(vcov.)) stop("to compare more than 2 models `vcov.' 
needs to be a function") test <- match.arg(test) rval <- matrix(rep_len(NA_real_, 4 * nmodels), ncol = 4) colnames(rval) <- c("Res.Df", "Df", test, paste("Pr(>", test, ")", sep = "")) rownames(rval) <- 1:nmodels rval[,1] <- as.numeric(sapply(objects, df.residual)) for (i in 2:nmodels) rval[i, 2:3] <- modelCompare(objects[[i-1]], objects[[i]], vfun = vcov.) if (test == "Chisq") { rval[,4] <- pchisq(rval[,3], round(abs(rval[,2])), lower.tail = FALSE) } else { df <- rval[,1] for (i in 2:nmodels) if (rval[i, 2] < 0) df[i] <- rval[i-1, 1] rval[, 3] <- rval[, 3] / abs(rval[, 2]) rval[, 4] <- pf(rval[, 3], abs(rval[, 2]), df, lower.tail = FALSE) } variables <- lapply(objects, name) title <- "Wald test\n" topnote <- paste("Model ", format(1:nmodels), ": ", variables, sep = "", collapse = "\n") new("VGAManova", Body = structure(as.data.frame(rval), heading = c(title, topnote))) } if (FALSE) setMethod("waldtest", "vglm", function(object, ...) waldtest_vglm(object = object, ...)) VGAM/R/vglm.control.q0000644000176200001440000001155213135276760014031 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. .min.criterion.VGAM <- c("deviance" = TRUE, "loglikelihood" = FALSE, "AIC" = TRUE, "Likelihood" = FALSE, "ResSS" = TRUE, "coefficients" = TRUE) vlm.control <- function(save.weights = TRUE, tol = 1e-7, method = "qr", checkwz = TRUE, wzepsilon = .Machine$double.eps^0.75, ...) { if (tol <= 0) { warning("argument 'tol' not positive; using 1e-7 instead") tol <- 1e-7 } if (!is.logical(checkwz) || length(checkwz) != 1) stop("bad input for argument 'checkwz'") if (!is.Numeric(wzepsilon, length.arg = 1, positive = TRUE)) stop("bad input for argument 'wzepsilon'") list(save.weights = save.weights, tol = tol, method = method, checkwz = checkwz, wzepsilon = wzepsilon) } vglm.control <- function(checkwz = TRUE, Check.rank = TRUE, Check.cm.rank = TRUE, criterion = names(.min.criterion.VGAM), epsilon = 1e-7, half.stepsizing = TRUE, maxit = 30, noWarning = FALSE, stepsize = 1, save.weights = FALSE, trace = FALSE, wzepsilon = .Machine$double.eps^0.75, xij = NULL, ...) 
{ if (mode(criterion) != "character" && mode(criterion) != "name") criterion <- as.character(substitute(criterion)) criterion <- pmatch(criterion[1], names(.min.criterion.VGAM), nomatch = 1) criterion <- names(.min.criterion.VGAM)[criterion] if (!is.logical(checkwz) || length(checkwz) != 1) stop("bad input for argument 'checkwz'") if (!is.Numeric(wzepsilon, length.arg = 1, positive = TRUE)) stop("bad input for argument 'wzepsilon'") convergence <- expression({ switch(criterion, coefficients = if (iter == 1) iter < maxit else (iter < maxit && max(abs(new.crit - old.crit) / ( abs(old.crit) + epsilon)) > epsilon), iter < maxit && sqrt(eff.n) * abs(old.crit - new.crit) / ( abs(old.crit) + epsilon) > epsilon) }) if (!is.Numeric(epsilon, length.arg = 1, positive = TRUE)) { warning("bad input for argument 'epsilon'; using 0.00001 instead") epsilon <- 0.00001 } if (!is.Numeric(maxit, length.arg = 1, positive = TRUE, integer.valued = TRUE)) { warning("bad input for argument 'maxit'; using 30 instead") maxit <- 30 } if (!is.Numeric(stepsize, length.arg = 1, positive = TRUE)) { warning("bad input for argument 'stepsize'; using 1 instead") stepsize <- 1 } list(checkwz = checkwz, Check.rank = Check.rank, Check.cm.rank = Check.cm.rank, convergence = convergence, criterion = criterion, epsilon = epsilon, half.stepsizing = as.logical(half.stepsizing)[1], maxit = maxit, noWarning = as.logical(noWarning)[1], min.criterion = .min.criterion.VGAM, save.weights = as.logical(save.weights)[1], stepsize = stepsize, trace = as.logical(trace)[1], wzepsilon = wzepsilon, xij = if (is(xij, "formula")) list(xij) else xij) } vcontrol.expression <- expression({ control <- control # First one, e.g., vgam.control(...) mylist <- family@vfamily for (jay in length(mylist):1) { for (ii in 1:2) { temp <- paste(if (ii == 1) "" else paste(function.name, ".", sep = ""), mylist[jay], ".control", sep = "") if (exists(temp, envir = VGAMenv)) { temp <- get(temp) temp <- temp(...) for (kk in names(temp)) control[[kk]] <- temp[[kk]] } } } orig.criterion <- control$criterion if (control$criterion != "coefficients") { try.crit <- c(names(.min.criterion.VGAM), "coefficients") for (i in try.crit) { if (any(slotNames(family) == i) && length(body(slot(family, i)))) { control$criterion <- i break } else { control$criterion <- "coefficients" } } } control$min.criterion <- control$min.criterion[control$criterion] for (ii in 1:2) { temp <- paste(if (ii == 1) "" else paste(function.name, ".", sep = ""), family@vfamily[1], ".", control$criterion, ".control", sep = "") if (exists(temp, inherit = TRUE)) { temp <- get(temp) temp <- temp(...) for (k in names(temp)) control[[k]] <- temp[[k]] } } }) VGAM/R/residuals.vlm.q0000644000176200001440000001626013135276757014204 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
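# Usage sketch for the residuals() methods defined in this file.  This
# is a hedged illustration only, wrapped in if (FALSE) following the
# convention used for inactive code elsewhere in these files; it
# assumes the pneumo data frame and the propodds family shipped with
# VGAM.
if (FALSE) {
  fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
  head(resid(fit, type = "working"))   # n x M working residuals
  head(resid(fit, type = "pearson"))   # scaled by the working weights
  head(resid(fit, type = "response"))  # y - fitted(fit)
}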
residualsvlm <- function(object, type = c("response", "deviance", "pearson", "working")) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("response", "deviance", "pearson", "working"))[1] na.act <- object@na.action object@na.action <- list() pooled.weight <- object@misc$pooled.weight if (is.null(pooled.weight)) pooled.weight <- FALSE answer <- switch(type, working = if (pooled.weight) NULL else object@residuals, pearson = { if (pooled.weight) return(NULL) n <- object@misc$n M <- object@misc$M wz <- weights(object, type = "work") # $weights if (!length(wz)) wz <- if (M == 1) rep_len(1, n) else matrix(1, n, M) if (M == 1) { if (any(wz < 0)) warning("some weights are negative. ", "Their residual will be assigned NA") ans <- sqrt(c(wz)) * c(object@residuals) names(ans) <- names(object@residuals) ans } else { wz.sqrt <- matrix.power(wz, M = M, power = 0.5, fast = TRUE) ans <- mux22(wz.sqrt, object@residuals, M = M, upper = FALSE) dim(ans) <- c(M, n) ans <- t(ans) dimnames(ans) <- dimnames(object@residuals) # n x M ans } }, deviance = { M <- object@misc$M if (M > 1) return(NULL) residualsvlm(object, type = "pearson") }, response = object@residuals ) if (length(answer) && length(na.act)) { napredict(na.act[[1]], answer) } else { answer } } residualsvglm <- function(object, type = c("working", "pearson", "response", "deviance", "ldot"), matrix.arg = TRUE) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("working", "pearson", "response", "deviance", "ldot"))[1] na.act <- object@na.action object@na.action <- list() pooled.weight <- object@misc$pooled.weight if (is.null(pooled.weight)) pooled.weight <- FALSE answer <- switch(type, working = if (pooled.weight) NULL else object@residuals, pearson = { if (pooled.weight) return(NULL) n <- object@misc$n M <- object@misc$M wz <- weights(object, type = "work") # $weights if (M == 1) { if (any(wz < 0)) warning("some weights are negative. 
", "Their residual will be assigned NA") ans <- sqrt(c(wz)) * c(object@residuals) names(ans) <- names(object@residuals) ans } else { wz.sqrt <- matrix.power(wz, M = M, power = 0.5, fast = TRUE) ans <- mux22(wz.sqrt, object@residuals, M = M, upper = FALSE) dim(ans) <- c(M, n) ans <- t(ans) dimnames(ans) <- dimnames(object@residuals) # n x M ans } }, deviance = { n <- object@misc$n y <- as.matrix(object@y) mu <- object@fitted.values w <- object@prior.weights if (!length(w)) w <- rep_len(1, n) eta <- object@predictors dev.fn <- object@family@deviance # May not 'exist' for that model if (length(body(dev.fn)) > 0) { extra <- object@extra ans <- dev.fn(mu = mu,y = y, w = w, residuals = TRUE, eta = eta, extra) if (length(ans)) { lob <- labels(object@residuals) if (is.list(lob)) { if (is.matrix(ans)) dimnames(ans) <- lob else names(ans) <- lob[[1]] } else { names(ans) <- lob } } ans } else { NULL } }, ldot = { n <- object@misc$n y <- as.matrix(object@y) mu <- object@fitted w <- object@prior.weights if (is.null(w)) w <- rep_len(1, n) eta <- object@predictors if (!is.null(ll.fn <- object@family@loglikelihood)) { extra <- object@extra ans <- ll.fn(mu = mu,y = y,w = w, residuals = TRUE, eta = eta, extra) if (!is.null(ans)) { ans <- c(ans) # ldot residuals can only be a vector names(ans) <- labels(object@residuals) } ans } else { NULL } }, response = { y <- object@y mu <- fitted(object) true.mu <- object@misc$true.mu if (is.null(true.mu)) true.mu <- TRUE ans <- if (true.mu) y - mu else NULL if (!matrix.arg && length(ans)) { if (ncol(ans) == 1) { names.ans <- dimnames(ans)[[1]] ans <- c(ans) names(ans) <- names.ans ans } else { warning("ncol(ans) is not 1") ans } } else { ans } }) if (length(answer) && length(na.act)) { napredict(na.act[[1]], answer) } else { answer } } residualsqrrvglm <- function(object, type = c("response"), matrix.arg = TRUE) { if (mode(type) != "character" && mode(type) != "name") type <- as.character(substitute(type)) type <- match.arg(type, c("response"))[1] na.act <- object@na.action object@na.action <- list() pooled.weight <- object@misc$pooled.weight if (is.null(pooled.weight)) pooled.weight <- FALSE answer <- switch(type, working = if (pooled.weight) NULL else object@residuals, pearson = { stop("have not programmed pearson resids yet") }, deviance = { stop("have not programmed deviance resids yet") }, ldot = { stop("have not programmed ldot resids yet") }, response = { y <- object@y mu <- fitted(object) true.mu <- object@misc$true.mu if (is.null(true.mu)) true.mu <- TRUE ans <- if (true.mu) y - mu else NULL if (!matrix.arg && length(ans)) { if (ncol(ans) == 1) { names.ans <- dimnames(ans)[[1]] ans <- c(ans) names(ans) <- names.ans ans } else { warning("ncol(ans) is not 1") ans } } else { ans } }) if (length(answer) && length(na.act)) { napredict(na.act[[1]], answer) } else { answer } } setMethod("residuals", "vlm", function(object, ...) residualsvlm(object, ...)) setMethod("residuals", "vglm", function(object, ...) residualsvglm(object, ...)) setMethod("residuals", "vgam", function(object, ...) residualsvglm(object, ...)) setMethod("residuals", "qrrvglm", function(object, ...) residualsqrrvglm(object, ...)) setMethod("resid", "vlm", function(object, ...) residualsvlm(object, ...)) setMethod("resid", "vglm", function(object, ...) residualsvglm(object, ...)) setMethod("resid", "vgam", function(object, ...) residualsvglm(object, ...)) setMethod("resid", "qrrvglm", function(object, ...) 
residualsqrrvglm(object, ...)) VGAM/R/confint.vlm.R0000644000176200001440000000557113135276757013615 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. confintvglm <- function(object, parm = "(All)", level = 0.95, method = c("wald", "profile"), trace = NULL, ...) { method <- match.arg(method, c("wald", "profile"))[1] cf <- coef(object) pnames <- names(cf) if (is.character(parm) && length(parm) == 1 && parm == "(All)") parm <- pnames else if (is.numeric(parm)) parm <- pnames[parm] format.perc <- function(probs, digits) paste(format(100 * probs, trim = TRUE, scientific = FALSE, digits = digits), "%") if (method == "wald") { aa <- (1 - level) / 2 aa <- c(aa, 1 - aa) pct <- format.perc(aa, 3) fac <- qnorm(aa) ci <- array(NA, dim = c(length(parm), 2L), dimnames = list(parm, pct)) ses <- sqrt(diag(vcov(object)))[parm] ci[] <- cf[parm] + ses %o% fac return(ci) } # if (method == "wald") ppv <- profilevglm(object, which = parm, alpha = (1 - level) / 4, trace = trace, ...) MASSconfint.profile.glm(ppv, parm = parm, level = level, trace = trace, ...) } # confintvglm confintrrvglm <- function(object, parm, level = 0.95, ...) { stop("currently this function has not been written") } confintvgam <- function(object, parm, level = 0.95, ...) { stop("currently this function has not been written") } if (!isGeneric("confint")) setGeneric("confint", function(object, parm, level = 0.95, ...) standardGeneric("confint"), package = "VGAM") setMethod("confint", "vglm", function(object, parm, level = 0.95, ...) confintvglm(object = object, parm = if (missing(parm)) "(All)" else parm, level = level, ...)) setMethod("confint", "rrvglm", function(object, parm, level = 0.95, ...) confintrrvglm(object = object, parm = parm, level = level, ...)) setMethod("confint", "vgam", function(object, parm, level = 0.95, ...) confintvgam(object = object, parm = parm, level = level, ...)) MASSconfint.profile.glm <- function (object, parm = seq_along(pnames), level = 0.95, ...) { of <- attr(object, "original.fit") pnames <- names(coef(of)) if (is.character(parm)) parm <- match(parm, pnames, nomatch = 0L) a <- (1 - level)/2 a <- c(a, 1 - a) pct <- paste(round(100 * a, 1), "%") ci <- array(NA, dim = c(length(parm), 2L), dimnames = list(pnames[parm], pct)) cutoff <- qnorm(a) for (pm in parm) { pro <- object[[pnames[pm]]] if (is.null(pro)) next if (length(pnames) > 1L) sp <- spline(x = pro[, "par.vals"][, pm], y = pro[, 1]) else sp <- spline(x = pro[, "par.vals"], y = pro[, 1]) ci[pnames[pm], ] <- approx(sp$y, sp$x, xout = cutoff)$y } drop(ci) } VGAM/R/family.sur.R0000644000176200001440000001721513135276757013447 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. SURff <- function(mle.normal = FALSE, divisor = c("n", "n-max(pj,pk)", "sqrt((n-pj)*(n-pk))"), parallel = FALSE, Varcov = NULL, matrix.arg = FALSE) { apply.parint <- TRUE lmean <- "identitylink" lsdev <- "loge" emean <- list() esdev <- list() if (!is.logical(mle.normal) || length(mle.normal) != 1) stop("argument 'mle.normal' must be a single logical") if (!is.logical(apply.parint) || length(apply.parint) != 1) stop("argument 'apply.parint' must be a single logical") divisor <- match.arg(divisor, c("n", "n-max(pj,pk)", "sqrt((n-pj)*(n-pk))"))[1] if (mle.normal && divisor != "n") warning("MLE requires 'n' as the value of argument 'divisor'. 
", "The solution will probably not be the MLE") ret.ff <- new("vglmff", blurb = c("Seemingly unrelated regressions"), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints, apply.int = .apply.parint ) }), list( .parallel = parallel, .apply.parint = apply.parint ))), infos = eval(substitute(function(...) { list(M1 = 1, # zz??? Q1 = 1, parallel = .parallel , expected = TRUE, multipleResponses = TRUE, parameters.names = as.character(NA)) }, list( .parallel = parallel ))), initialize = eval(substitute(expression({ if (!is.matrix(y) || ncol(y) == 1) stop("response must be a matrix with at least 2 columns") ncoly <- ncol(y) if (is.logical( .parallel ) && .parallel && !all(as.logical(trivial.constraints(constraints)))) warning("setting 'parallel = TRUE' with nontrivial constraints may not ", "make sense") temp5 <- w.y.check(w = w, y = y, ncol.w.min = 1, ncol.w.max = 1, ncol.y.max = Inf, Is.integer.y = FALSE, Is.positive.y = FALSE, out.wy = TRUE, colsyperw = ncoly, maximize = TRUE) w <- temp5$w y <- temp5$y if (!all(w[1, 1] == w)) stop("all prior 'weights' must currently have equal values") ncoly <- ncol(y) M1 <- 1 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly predictors.names <- if (!length(ddd <- dimnames(y)[[2]])) paste("Y", 1:M, sep = "") else ddd extra$wz <- matrix(1, nrow(x), M) if (!length(etastart)) { etastart <- matrix(0, n, M) Hlist.early <- process.constraints(constraints, x, M, specialCM = specialCM) X.vlm.early <- lm2vlm.model.matrix(x, Hlist.early, xij = control$xij, Xm2 = Xm2) Hmatrices <- matrix(c(unlist(Hlist.early)), nrow = M) jay.index <- 1:ncol(Hmatrices) extra$ncols.X.lm <- numeric(ncoly) for (jay in 1:ncoly) { X.lm.jay <- vlm2lm.model.matrix(x.vlm = X.vlm.early, Hlist = Hlist.early, which.linpred = jay, M = M) extra$ncols.X.lm[jay] <- ncol(X.lm.jay) etastart[, jay] <- y[, jay] - lsfit(x = X.lm.jay, y = y[, jay], wt = c(w), intercept = FALSE)$residuals } # jay } # !length(etastart) }), list( .parallel = parallel ))), linkinv = function(eta, extra = NULL) eta, last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .lmean , ncoly)) temp.names <- predictors.names names(misc$link) <- temp.names misc$earg <- vector("list", M1 * ncoly) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii]] <- .emean } names(misc$earg) <- temp.names misc$M1 <- M1 misc$expected <- TRUE misc$divisor <- .divisor misc$values.divisor <- round(n / ratio.df) }), list( .lmean = lmean, .lsdev = lsdev, .emean = emean, .esdev = esdev, .divisor = divisor ))), vfamily = "SURff", validparams = eval(substitute(function(eta, y, extra = NULL) { mymu <- eta okay1 <- all(is.finite(mymu)) okay1 }, list( .lmean = lmean, .emean = emean, .divisor = divisor ))), deriv = eval(substitute(expression({ mymu <- eta iam.indices <- iam(NA, NA, M = M, both = TRUE) resmat <- y - mymu Sigma.elts <- colMeans(resmat[, iam.indices$row.index] * resmat[, iam.indices$col.index]) if ( .divisor != "n") { ratio.df <- n / switch( .divisor , "n-max(pj,pk)" = n - pmax(extra$ncols.X.lm[iam.indices$row.index], extra$ncols.X.lm[iam.indices$col.index]), "sqrt((n-pj)*(n-pk))" = sqrt((n - extra$ncols.X.lm[iam.indices$row.index]) * (n - extra$ncols.X.lm[iam.indices$col.index])), stop("argument 'divisor' unmatched")) Sigma.elts <- Sigma.elts * ratio.df } else { ratio.df <- rep_len(1, M*(M+1)/2) } Sigma.mat <- matrix(0, M, M) Sigma.mat[cbind(iam.indices$row.index, iam.indices$col.index)] <- Sigma.elts 
Sigma.mat[cbind(iam.indices$col.index, iam.indices$row.index)] <- Sigma.elts invSigma.mat <- chol2inv(chol(Sigma.mat)) temp3 <- matrix(invSigma.mat[cbind(iam.indices$row.index, iam.indices$col.index)], M*(M+1)/2, n) dl.dmu <- mux22(temp3, y - mymu, M = M, upper = FALSE, as.matrix = TRUE) dmu.deta <- dtheta.deta(mymu, .lmean , earg = .emean ) c(w) * dl.dmu * dmu.deta }), list( .lmean = lmean, .emean = emean, .divisor = divisor ))), weight = eval(substitute(expression({ if (length( .Varcov )) { Sigma.mat <- if ( .matrix.arg ) .Varcov else { temp.vec <- rep_len( .Varcov , M*(M+1)/2) temp.mat <- matrix(0, M, M) temp.mat[cbind(iam.indices$col.index, iam.indices$row.index)] <- temp.vec temp.mat[cbind(iam.indices$row.index, iam.indices$col.index)] <- temp.vec temp.mat } invSigma.mat <- chol2inv(chol(Sigma.mat)) } wz <- extra$wz <- c(w) * matrix(invSigma.mat[cbind(iam.indices$col.index, iam.indices$row.index)], n, M*(M+1)/2, byrow = TRUE) extra$Sigma.mat <- Sigma.mat extra$invSigma.mat <- invSigma.mat wz }), list( .divisor = divisor, .Varcov = Varcov, .matrix.arg = matrix.arg )))) if (mle.normal) { ret.ff@loglikelihood <- function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (!summation) stop("cannot handle 'summation = FALSE' yet") M <- if (is.matrix(y)) ncol(y) else 1 n <- if (is.matrix(y)) nrow(y) else length(y) wz <- extra$wz temp1 <- ResSS.vgam(y-mu, wz = wz, M = M) onewz <- if (length(extra$invSigma.mat)) extra$invSigma.mat else (m2a(wz[1, , drop = FALSE], M = M))[,, 1] # M x M logdet <- determinant(onewz)$modulus logretval <- -0.5 * temp1 + 0.5 * n * logdet - n * (M / 2) * log(2*pi) logretval } } ret.ff } VGAM/R/s.q0000644000176200001440000000142313135276757011651 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. s <- function(x, df = 4, spar = 0, ...) { xs <- substitute(x) ans <- as.character(xs) if (length(ans) > 1) stop("argument 'x' must be of length one") call <- deparse(sys.call()) if (NCOL(x) > 1) stop("argument 'x' must be a vector") if (!is.null(levels(x))) { x <- if (is.ordered(x)) { as.vector(x) } else stop("unordered factors cannot be used as smoothing variables") } attr(x, "spar") <- spar attr(x, "df") <- df attr(x, "call") <- call attr(x, "class") <- "smooth" attr(x, "s.xargument") <- ans # Needed for prediction and constraints a <- is.na(x) if (any(a)) attr(x, "NAs") <- seq(along = x)[a] x } VGAM/R/vgam.match.q0000644000176200001440000000526413135276760013435 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
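# A hedged usage sketch for s() above, wrapped in if (FALSE) like other
# inactive code in these files.  s() merely tags its argument with the
# 'df', 'spar' and call attributes; the smoothing itself is done inside
# vgam().  The example assumes the hunua data frame and the binomialff
# family shipped with VGAM.
if (FALSE) {
  fit <- vgam(agaaus ~ s(altitude, df = 3), binomialff, data = hunua)
  plot(fit, se = TRUE)
}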
vgam.match <- function(x, all.knots = FALSE, nk = NULL) { if (is.list(x)) { nvar <- length(x) if (length(nk)) nk <- rep_len(nk, nvar) temp <- vgam.match(x[[1]], all.knots = all.knots, nk = nk[1]) ooo <- matrix(temp$matcho, length(temp$matcho), nvar) neffec <- rep(temp$neffec, nvar) xmin <- rep(temp$xmin, nvar) xmax <- rep(temp$xmax, nvar) nknots <- rep(temp$nknots, nvar) knots <- vector("list", nvar) knots[[1]] <- temp$knots if (nvar > 1) for (ii in 2:nvar) { temp <- vgam.match(x[[ii]], all.knots = all.knots, nk = nk[ii]) ooo[, ii] <- temp$matcho neffec[ii] <- temp$neffec nknots[ii] <- temp$nknots knots[[ii]] <- temp$knots xmin[ii] <- temp$xmin xmax[ii] <- temp$xmax } names(nknots) <- names(knots) <- names(neffec) <- names(xmin) <- names(xmax) <- names(x) dimnames(ooo) <- list(NULL, names(x)) return(list(matcho = ooo, neffec = neffec, nknots = nknots, knots = knots, xmin = xmin, xmax = xmax)) } if (!is.null(attributes(x)$NAs) || anyNA(x)) stop("cannot smooth on variables with NAs") sx <- unique(sort(as.vector(x))) # "as.vector()" strips off attributes ooo <- match(x, sx) # as.integer(match(x, sx)) # sx[o]==x neffec <- length(sx) # as.integer(length(sx)) if (neffec < 7) stop("smoothing variables must have at least 7 unique values") xmin <- sx[1] # Don't use rounded value xmax <- sx[neffec] xbar <- (sx - xmin) / (xmax - xmin) noround <- TRUE # Improvement 20020803 if (all.knots) { knot <- if (noround) { valid.vknotl2(c(rep(xbar[1], 3), xbar, rep(xbar[neffec], 3))) } else { c(rep(xbar[1], 3), xbar, rep(xbar[neffec], 3)) } if (length(nk)) warning("overriding nk by all.knots = TRUE") nk <- length(knot) - 4 # No longer: neffec + 2 } else { chosen <- length(nk) if (chosen && (nk > neffec+2 || nk <= 5)) stop("bad value for 'nk'") if (!chosen) nk <- 0 knot.list <- .C("vknootl2", as.double(xbar), as.integer(neffec), knot = double(neffec+6), k = as.integer(nk+4), chosen = as.integer(chosen)) if (noround) { knot <- valid.vknotl2(knot.list$knot[1:(knot.list$k)]) knot.list$k <- length(knot) } else { knot <- knot.list$knot[1:(knot$k)] } nk <- knot.list$k - 4 } if (nk <= 5) stop("not enough distinct knots found") return(list(matcho = ooo, neffec = neffec, nknots = nk, knots = knot, xmin = xmin, xmax = xmax)) } VGAM/R/family.univariate.R0000644000176200001440000110346213135276757015006 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. mccullagh89 <- function(ltheta = "rhobit", lnu = logoff(offset = 0.5), itheta = NULL, inu = NULL, zero = NULL) { ltheta <- as.list(substitute(ltheta)) etheta <- link2list(ltheta) ltheta <- attr(etheta, "function.name") lnuvec <- as.list(substitute(lnu)) enuvec <- link2list(lnuvec) lnuvec <- attr(enuvec, "function.name") inuvec <- inu new("vglmff", blurb = c("McCullagh (1989)'s distribution \n", "f(y) = (1-2*theta*y+theta^2)^(-nu) * [1 - y^2]^(nu-1/2) /\n", " Beta[nu+1/2, 1/2], ", " -1 < y < 1, -1 < theta < 1, nu > -1/2\n", "Links: ", namesof("theta", ltheta, earg = etheta), ", ", namesof("nu", lnuvec, earg = enuvec), "\n", "\n", "Mean: nu*theta/(1+nu)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("theta", "nu"), ltheta = .ltheta , lnu = .lnu , zero = .zero ) }, list( .zero = zero, .ltheta = ltheta, .lnu = lnuvec ))), initialize = eval(substitute(expression({ w.y.check(w, y) y <- as.numeric(y) if (any(y <= -1 | y >= 1)) stop("all y values must be in (-1, 1)") predictors.names <- c(namesof("theta", .ltheta , earg = .etheta , tag = FALSE), namesof("nu", .lnuvec , earg = .enuvec , tag = FALSE)) if (!length(etastart)) { theta.init <- if (length( .itheta )) { rep_len( .itheta , n) } else { mccullagh89.aux <- function(thetaval, y, x, w, extraargs) mean((y - thetaval) * (thetaval^2 - 1) / (1 - 2*thetaval*y + thetaval^2)) theta.grid <- seq(-0.9, 0.9, by = 0.05) try.this <- grid.search(theta.grid, objfun = mccullagh89.aux, y = y, x = x, w = w, maximize = FALSE, abs.arg = TRUE) try.this <- rep_len(try.this, n) try.this } tmp <- y / (theta.init - y) tmp[tmp < -0.4] <- -0.4 tmp[tmp > 10.0] <- 10.0 nuvec.init <- rep_len(if (length( .inuvec )) .inuvec else tmp, n) nuvec.init[!is.finite(nuvec.init)] <- 0.4 etastart <- cbind(theta2eta(theta.init, .ltheta , earg = .etheta ), theta2eta(nuvec.init, .lnuvec , earg = .enuvec )) } }), list( .ltheta = ltheta, .lnuvec = lnuvec, .etheta = etheta, .enuvec = enuvec, .inuvec = inuvec, .itheta = itheta ))), linkinv = eval(substitute(function(eta, extra = NULL) { Theta <- eta2theta(eta[, 1], .ltheta , earg = .etheta ) nuvec <- eta2theta(eta[, 2], .lnuvec , earg = .enuvec ) nuvec * Theta / (1 + nuvec) }, list( .ltheta = ltheta, .lnuvec = lnuvec, .etheta = etheta, .enuvec = enuvec ))), last = eval(substitute(expression({ misc$link <- c("theta" = .ltheta , "nu" = .lnuvec ) misc$earg <- list("theta" = .etheta , "nu" = .enuvec ) }), list( .ltheta = ltheta, .lnuvec = lnuvec, .etheta = etheta, .enuvec = enuvec ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { Theta <- eta2theta(eta[, 1], .ltheta , earg = .etheta ) nuvec <- eta2theta(eta[, 2], .lnuvec , earg = .enuvec ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * ((nuvec - 0.5) * log1p(-y^2) - nuvec * log1p(-2*Theta*y + Theta^2) - lbeta(nuvec + 0.5, 0.5)) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .ltheta = ltheta, .lnuvec = lnuvec, .etheta = etheta, .enuvec = enuvec ))), vfamily = c("mccullagh89"), validparams = eval(substitute(function(eta, y, extra = NULL) { Theta <- eta2theta(eta[, 1], .ltheta , earg = .etheta ) nuvec <- eta2theta(eta[, 2], .lnuvec , earg = .enuvec ) okay1 <- all(is.finite(Theta)) && all(abs(Theta) < 1) && all(is.finite(nuvec)) && all(-0.5 < nuvec) okay1 }, list( .ltheta = ltheta, .lnuvec = lnuvec, .etheta = etheta, .enuvec = enuvec ))), deriv = eval(substitute(expression({ Theta <- eta2theta(eta[, 1], .ltheta , earg = .etheta ) nuvec <- eta2theta(eta[, 2], .lnuvec , earg = .enuvec ) dTheta.deta <- dtheta.deta(Theta, .ltheta , earg = .etheta ) dnuvec.deta <- dtheta.deta(nuvec, .lnuvec , earg = .enuvec ) dl.dTheta <- 2 * nuvec * (y-Theta) / (1 -2*Theta*y + Theta^2) dl.dnuvec <- log1p(-y^2) - log1p(-2 * Theta * y + Theta^2) - digamma(nuvec + 0.5) + digamma(nuvec + 1) c(w) * cbind(dl.dTheta * dTheta.deta, dl.dnuvec * dnuvec.deta) }), list( .ltheta = ltheta, .lnuvec = lnuvec, .etheta = etheta, .enuvec = enuvec ))), weight = eval(substitute(expression({ ned2l.dTheta2 <- (2 * nuvec^2 / (1+nuvec)) / (1-Theta^2) ned2l.dnuvec2 <- trigamma(nuvec+0.5) - trigamma(nuvec+1) wz <- 
matrix(NA_real_, n, M) # diagonal matrix wz[, iam(1, 1, M)] <- ned2l.dTheta2 * dTheta.deta^2 wz[, iam(2, 2, M)] <- ned2l.dnuvec2 * dnuvec.deta^2 c(w) * wz }), list( .ltheta = ltheta, .lnuvec = lnuvec )))) } dirmultinomial <- function(lphi = "logit", iphi = 0.10, parallel = FALSE, zero = "M") { lphi <- as.list(substitute(lphi)) ephi <- link2list(lphi) lphi <- attr(ephi, "function.name") if (!is.Numeric(iphi, positive = TRUE) || max(iphi) >= 1.0) stop("bad input for argument 'iphi'") new("vglmff", blurb = c("Dirichlet-multinomial distribution\n\n", "Links: ", "log(prob[1]/prob[M]), ..., log(prob[M-1]/prob[M]), ", namesof("phi", lphi, earg = ephi), "\n", "\n", "Mean: shape_j / sum_j(shape_j)"), constraints = eval(substitute(expression({ .ZERO <- .zero if (is.character( .ZERO)) .ZERO <- eval(parse(text = .ZERO)) .PARALLEL <- .parallel if (is.logical( .PARALLEL) && .PARALLEL) { mycmatrix <- if (length( .ZERO )) stop("can only handle parallel = TRUE when zero = NULL") else cbind(rbind(matrix(1, M - 1, 1), 0), rbind(matrix(0, M - 1, 1), 1)) } else { mycmatrix <- if (M == 1) diag(1) else diag(M) } constraints <- cm.VGAM(mycmatrix, x = x, bool = .PARALLEL , constraints, apply.int = TRUE) constraints <- cm.zero.VGAM(constraints, x = x, .ZERO , M) }), list( .parallel = parallel, .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = NA, Q1 = NA, expected = TRUE, multipleResponses = FALSE, parameters.names = c("phi"), lphi = .lphi , zero = .zero ) }, list( .zero = zero, .lphi = lphi ))), initialize = eval(substitute(expression({ mustart.orig <- mustart delete.zero.colns <- TRUE eval(process.categorical.data.VGAM) if (length(mustart.orig)) mustart <- mustart.orig y <- as.matrix(y) ycount <- as.matrix(y * c(w)) M <- ncol(y) if (max(abs(ycount - round(ycount))) > 1.0e-6) warning("there appears to be non-integer responses") if (min(ycount) < 0) stop("all values of the response (matrix) must be non-negative") predictors.names <- c(paste("log(prob[,", 1:(M-1), "]/prob[,", M, "])", sep = ""), namesof("phi", .lphi , short = TRUE)) extra$n2 <- w # aka omega, must be integer # as.vector(rowSums(y)) if (!length(etastart)) { if (length(mustart.orig)) { prob.init <- mustart } else { prob.init <- colSums(ycount) prob.init <- prob.init / sum(prob.init) prob.init <- matrix(prob.init, n, M, byrow = TRUE) } phi.init <- rep_len( .iphi , n) etastart <- cbind(log(prob.init[, -M] / prob.init[, M]), theta2eta(phi.init, .lphi , earg = .ephi )) } mustart <- NULL # Since etastart has been computed. 
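# At this point etastart has M-1 columns of multinomial-logit
# predictors, log(prob[, j]/prob[, M]) for j = 1, ..., M-1, plus a
# final column for the linked overdispersion parameter phi; the
# cluster sizes (the prior weights) were stored in extra$n2 above.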
}), list( .lphi = lphi, .ephi = ephi, .iphi = iphi ))), linkinv = eval(substitute(function(eta, extra = NULL) { M <- if (is.matrix(eta)) ncol(eta) else 1 temp <- cbind(exp(eta[, -M, drop = FALSE]), 1) prop.table(temp, 1) }, list( .ephi = ephi, .lphi = lphi ))), last = eval(substitute(expression({ misc$link <- c(rep_len("loge", M-1), .lphi ) names(misc$link) <- c( paste("prob[,", 1:(M-1), "]/prob[,", M, "])", sep = ""), "phi") misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:(M-1)) misc$earg[[ii]] <- list() misc$earg[[M]] <- .ephi misc$expected <- TRUE if (intercept.only) { # phi & probs computed in @deriv misc$shape <- probs[1, ] * (1 / phi[1] - 1) } }), list( .ephi = ephi, .lphi = lphi ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { M <- if (is.matrix(eta)) ncol(eta) else 1 probs <- cbind(exp(eta[, -M]), 1) probs <- prop.table(probs, 1) phi <- eta2theta(eta[, M], .lphi , earg = .ephi ) n <- length(phi) ycount <- as.matrix(y * c(w)) ycount <- round(ycount) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ans <- rep_len(0.0, n) omega <- extra$n2 for (jay in 1:M) { maxyj <- max(ycount[, jay]) loopOveri <- (n < maxyj) if (loopOveri) { for (iii in 1:n) { rrr <- 1:ycount[iii, jay] # a vector if (ycount[iii, jay] > 0) ans[iii] <- ans[iii] + sum(log((1-phi[iii]) * probs[iii, jay] + (rrr-1)*phi[iii])) } } else { for (rrr in 1:maxyj) { index <- (rrr <= ycount[, jay]) & (ycount[, jay] > 0) if (any(index)) ans[index] <- ans[index] + log((1-phi[index]) * probs[index, jay] + (rrr-1) * phi[index]) } } } # end of jay loop maxomega <- max(omega) loopOveri <- n < maxomega if (loopOveri) { for (iii in 1:n) { rrr <- 1:omega[iii] ans[iii]<- ans[iii] - sum(log1p(-phi[iii] + (rrr-1) * phi[iii])) } } else { for (rrr in 1:maxomega) { ind8 <- rrr <= omega ans[ind8] <- ans[ind8] - log1p(-phi[ind8] + (rrr-1) * phi[ind8]) } } ll.elts <- ans if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .ephi = ephi, .lphi = lphi ))), vfamily = c("dirmultinomial"), validparams = eval(substitute(function(eta, y, extra = NULL) { probs <- cbind(exp(eta[, -M]), 1) probs <- prop.table(probs, 1) phi <- eta2theta(eta[, M], .lphi , earg = .ephi ) okay1 <- all(is.finite(probs)) && all(0 < probs & probs < 1) && all(is.finite(phi )) && all(0 < phi & phi < 1) okay1 }, list( .ephi = ephi, .lphi = lphi ))), deriv = eval(substitute(expression({ probs <- cbind(exp(eta[, -M]), 1) probs <- prop.table(probs, 1) phi <- eta2theta(eta[, M], .lphi , earg = .ephi ) dl.dprobs <- matrix(0.0, n, M-1) dl.dphi <- rep_len(0.0, n) omega <- extra$n2 ycount <- as.matrix(y * c(w)) ycount <- round(ycount) for (jay in 1:M) { maxyj <- max(ycount[, jay]) loopOveri <- n < maxyj if (loopOveri) { for (iii in 1:n) { rrr <- 1:ycount[iii, jay] if (ycount[iii, jay] > 0) { PHI <- phi[iii] dl.dphi[iii] <- dl.dphi[iii] + sum((rrr-1-probs[iii, jay]) / ((1-PHI)*probs[iii, jay] + (rrr-1)*PHI)) tmp9 <- (1-PHI) / ((1-PHI)*probs[iii, jay] + (rrr-1)*PHI) if (jay < M) { dl.dprobs[iii, jay] <- dl.dprobs[iii, jay] + sum(tmp9) } else { for (jay2 in 1:(M-1)) dl.dprobs[iii, jay2]<-dl.dprobs[iii, jay2]-sum(tmp9) } } } } else { for (rrr in 1:maxyj) { index <- (rrr <= ycount[, jay]) & (ycount[, jay] > 0) PHI <- phi[index] dl.dphi[index] <- dl.dphi[index] + (rrr-1-probs[index, jay]) / ((1-PHI)*probs[index, jay] + (rrr-1)*PHI) tmp9 <- (1-PHI) / ((1-PHI)*probs[index, jay] + (rrr-1)*PHI) if (jay < M) { dl.dprobs[index, jay] <- dl.dprobs[index, jay] + tmp9 } else 
{ for (jay2 in 1:(M-1)) dl.dprobs[index, jay2] <- dl.dprobs[index, jay2] - tmp9 } } } } # end of jay loop maxomega <- max(omega) loopOveri <- n < maxomega if (loopOveri) { for (iii in 1:n) { rrr <- 1:omega[iii] dl.dphi[iii]<-dl.dphi[iii] - sum((rrr-2)/(1 + (rrr-2)*phi[iii])) } } else { for (rrr in 1:maxomega) { index <- rrr <= omega dl.dphi[index] <- dl.dphi[index] - (rrr-2)/(1 + (rrr-2)*phi[index]) } } dprobs.deta <- probs[, -M] * (1 - probs[, -M]) # n x (M-1) dphi.deta <- dtheta.deta(phi, .lphi , earg = .ephi ) ans <- cbind(dl.dprobs * dprobs.deta, dl.dphi * dphi.deta) ans }), list( .ephi = ephi, .lphi = lphi ))), weight = eval(substitute(expression({ wz <- matrix(0, n, dimm(M)) loopOveri <- (n < maxomega) if (loopOveri) { for (iii in 1:n) { rrr <- 1:omega[iii] # A vector PHI <- phi[iii] pYiM.ge.rrr <- 1 - pbetabinom.ab(q = rrr-1, size = omega[iii], shape1 <- probs[iii, M]*(1/PHI-1), shape2 <- (1-probs[iii, M])*(1/PHI-1)) # A vector denomM <- ((1-PHI)*probs[iii, M] + (rrr-1)*PHI)^2 # A vector wz[iii, iam(M, M, M)] <- wz[iii, iam(M, M, M)] + sum(probs[iii, M]^2 * pYiM.ge.rrr / denomM) - sum(1 / (1 + (rrr-2)*PHI)^2) for (jay in 1:(M-1)) { denomj <- ((1-PHI)*probs[iii, jay] + (rrr-1)*PHI)^2 pYij.ge.rrr <- 1 - pbetabinom.ab(q = rrr-1, size = omega[iii], shape1<-probs[iii, jay]*(1/PHI-1), shape2<-(1-probs[iii, jay])*(1/PHI-1)) wz[iii, iam(jay, jay, M)] <- wz[iii, iam(jay, jay, M)] + sum(pYij.ge.rrr / denomj) + sum(pYiM.ge.rrr / denomM) for (kay in jay:(M-1)) if (kay > jay) { wz[iii, iam(jay, kay, M)] <- wz[iii, iam(jay, kay, M)] + sum(pYiM.ge.rrr / denomM) } wz[iii, iam(jay, M, M)] <- wz[iii, iam(jay, M, M)] + sum(probs[iii, jay] * pYij.ge.rrr / denomj) - sum(probs[iii, M] * pYiM.ge.rrr / denomM) wz[iii, iam(M, M, M)] <- wz[iii, iam(M, M, M)] + sum(probs[iii, jay]^2 * pYij.ge.rrr / denomj) } # end of jay loop } # end of iii loop } else { for (rrr in 1:maxomega) { ind5 <- rrr <= omega PHI <- phi[ind5] pYiM.ge.rrr <- 1 - pbetabinom.ab(q = rrr-1, size = omega[ind5], shape1 <- probs[ind5, M]*(1/PHI-1), shape2 <- (1-probs[ind5, M])*(1/PHI-1)) denomM <- ((1-PHI)*probs[ind5, M] + (rrr-1)*PHI)^2 wz[ind5, iam(M, M, M)] <- wz[ind5, iam(M, M, M)] + probs[ind5, M]^2 * pYiM.ge.rrr / denomM - 1 / (1 + (rrr-2)*PHI)^2 for (jay in 1:(M-1)) { denomj <- ((1-PHI)*probs[ind5, jay] + (rrr-1)*PHI)^2 pYij.ge.rrr <- 1 - pbetabinom.ab(q = rrr-1, size = omega[ind5], shape1<-probs[ind5, jay]*(1/PHI-1), shape2<-(1-probs[ind5, jay])*(1/PHI-1)) wz[ind5, iam(jay, jay, M)] <- wz[ind5, iam(jay, jay, M)] + pYij.ge.rrr / denomj + pYiM.ge.rrr / denomM for (kay in jay:(M-1)) if (kay > jay) { wz[ind5, iam(jay, kay, M)] <- wz[ind5, iam(jay, kay, M)] + pYiM.ge.rrr / denomM } wz[ind5, iam(jay, M, M)] <- wz[ind5, iam(jay, M, M)] + probs[ind5, jay] * pYij.ge.rrr / denomj - probs[ind5, M] * pYiM.ge.rrr / denomM wz[ind5, iam(M, M, M)] <- wz[ind5, iam(M, M, M)] + probs[ind5, jay]^2 * pYij.ge.rrr / denomj } # end of jay loop } # end of rrr loop } for (jay in 1:(M-1)) for (kay in jay:(M-1)) wz[, iam(jay, kay, M)] <- wz[, iam(jay, kay, M)] * (1-phi)^2 for (jay in 1:(M-1)) wz[, iam(jay, M, M)] <- wz[, iam(jay, M, M)] * (phi-1) / phi wz[, iam(M, M, M)] <- wz[, iam(M, M, M)] / phi^2 d1Thetas.deta <- cbind(dprobs.deta, dphi.deta) index <- iam(NA, NA, M, both = TRUE, diag = TRUE) wz <- wz * d1Thetas.deta[, index$row] * d1Thetas.deta[, index$col] wz }), list( .ephi = ephi, .lphi = lphi )))) } dirmul.old <- function(link = "loge", ialpha = 0.01, parallel = FALSE, zero = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- 
attr(earg, "function.name") if (!is.Numeric(ialpha, positive = TRUE)) stop("'ialpha' must contain positive values only") new("vglmff", blurb = c("Dirichlet-Multinomial distribution\n\n", "Links: ", namesof("shape1", link, earg = earg), ", ..., ", namesof("shapeM", link, earg = earg), "\n\n", "Posterior mean: (n_j + shape_j)/(2*sum(n_j) + ", "sum(shape_j))\n"), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints, apply.int = TRUE) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M) }), list( .parallel = parallel, .zero = zero ))), initialize = eval(substitute(expression({ y <- as.matrix(y) M <- ncol(y) if (any(y != round(y ))) stop("all y values must be integer-valued") predictors.names <- namesof(paste("shape", 1:M, sep = ""), .link , earg = .earg , short = TRUE) extra$n2 <- rowSums(y) # Nb. don't multiply by 2 extra$y <- y if (!length(etastart)) { yy <- if (is.numeric( .ialpha)) matrix( .ialpha , n, M, byrow = TRUE) else matrix(runif(n*M), n, M) etastart <- theta2eta(yy, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .ialpha = ialpha ))), linkinv = eval(substitute(function(eta, extra = NULL) { shape <- eta2theta(eta, .link , earg = .earg ) M <- if (is.matrix(eta)) ncol(eta) else 1 sumshape <- as.vector(shape %*% rep_len(1, M)) (extra$y + shape) / (extra$n2 + sumshape) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- rep_len( .link , M) names(misc$link) <- paste("shape", 1:M, sep = "") misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:M) misc$earg[[ii]] <- .earg misc$pooled.weight <- pooled.weight }), list( .link = link, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shape <- eta2theta(eta, .link , earg = .earg ) M <- if (is.matrix(eta)) ncol(eta) else 1 sumshape <- as.vector(shape %*% rep_len(1, M)) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (lgamma(sumshape) - lgamma(extra$n2 + sumshape )) + c(w) * (lgamma(y + shape) - lgamma(shape )) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg ))), vfamily = c("dirmul.old"), validparams = eval(substitute(function(eta, y, extra = NULL) { shape <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(shape)) && all(0 < shape) okay1 }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ shape <- eta2theta(eta, .link , earg = .earg ) sumshape <- as.vector(shape %*% rep_len(1, M)) dl.dsh <- digamma(sumshape) - digamma(extra$n2 + sumshape) + digamma(y + shape) - digamma(shape) dsh.deta <- dtheta.deta(shape, .link , earg = .earg ) c(w) * dl.dsh * dsh.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ index <- iam(NA, NA, M, both = TRUE, diag = TRUE) wz <- matrix(trigamma(sumshape) - trigamma(extra$n2 + sumshape), nrow = n, ncol = dimm(M)) wz[, 1:M] <- wz[, 1:M] + trigamma(y + shape) - trigamma(shape) wz <- -wz * dsh.deta[, index$row] * dsh.deta[, index$col] if (TRUE && intercept.only) { sumw <- sum(w) for (ii in 1:ncol(wz)) wz[, ii] <- sum(wz[, ii]) / sumw pooled.weight <- TRUE wz <- c(w) * wz # Put back the weights } else pooled.weight <- FALSE wz }), list( .link = link, .earg = earg )))) } rdiric <- function(n, shape, dimension = NULL, is.matrix.shape = FALSE) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, 
length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n shape.orig <- shape if (is.matrix.shape) { if (!is.matrix(shape)) stop("argument 'shape' is not a matrix") if (!is.numeric(dimension)) dimension <- ncol(shape) n.shape <- nrow(shape) shape <- kronecker(matrix(1, use.n, 1), shape) ans <- rgamma(use.n * n.shape * dimension, shape) dim(ans) <- c(use.n * n.shape, dimension) } else { if (!is.numeric(dimension)) dimension <- length(shape) if (length(shape) != dimension) shape <- rep_len(shape, dimension) ans <- rgamma(use.n * dimension, rep(shape, rep(use.n, dimension))) dim(ans) <- c(use.n, dimension) } ans <- ans / rowSums(ans) names.shape.orig <- names(shape.orig) if (is.character(names.shape.orig) && !is.matrix.shape) colnames(ans) <- names.shape.orig ans } dirichlet <- function(link = "loge", parallel = FALSE, zero = NULL, imethod = 1) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Dirichlet distribution\n\n", "Links: ", namesof("shape_j", link, earg = earg), "\n\n", "Mean: shape_j/(1 + sum(shape_j)), j = 1,..,ncol(y)"), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints, apply.int = TRUE) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M) }), list( .parallel = parallel, .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = NA, Q1 = NA, expected = TRUE, multipleResponses = FALSE, parameters.names = c("shape"), link = .link , zero = .zero ) }, list( .zero = zero, .link = link ))), initialize = eval(substitute(expression({ y <- as.matrix(y) M <- ncol(y) w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = Inf, out.wy = FALSE, colsyperw = NULL, maximize = FALSE) if (any(y <= 0) || any(y >= 1)) stop("all y values must be > 0 and < 1") mynames1 <- paste("shape", 1:M, sep = "") predictors.names <- namesof(mynames1, .link , earg = .earg , short = TRUE) if (!length(etastart)) { yy <- if ( .imethod == 2) { matrix(colMeans(y), nrow(y), M, byrow = TRUE) } else { 0.5 * (y + matrix(colMeans(y), nrow(y), M, byrow = TRUE)) } etastart <- theta2eta(yy, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { shape <- eta2theta(eta, .link , earg = .earg ) prop.table(shape, 1) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- rep_len( .link , M) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for (ii in 1:M) misc$earg[[ii]] <- .earg misc$imethod <- .imethod }), list( .link = link, .earg = earg, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shape <- eta2theta(eta, .link , earg = .earg ) M <- if (is.matrix(eta)) ncol(eta) else 1 sumshape <- as.vector(shape %*% rep_len(1, M)) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- (c(w) * lgamma(sumshape)) - (c(w) * lgamma(shape)) + (c(w) * (shape-1) * log(y)) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg ))), vfamily = c("dirichlet"), validparams = eval(substitute(function(eta, y, extra = NULL) { shape <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(shape)) && all(0 < shape) 
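# All Dirichlet shape parameters must be finite and strictly positive;
# the logical okay1 (returned on the next line) flags iterates that
# fall outside this parameter space.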
okay1 }, list( .link = link, .earg = earg ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) M <- NCOL(eta) Shape <- eta2theta(eta, .link , earg = .earg ) rdiric(nsim, # has a different meaning; shape = as.matrix(Shape), dimension = M, is.matrix.shape = TRUE) # 20140106; This is new }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ shape <- eta2theta(eta, .link , earg = .earg ) sumshape <- as.vector(shape %*% rep_len(1, M)) dl.dsh <- digamma(sumshape) - digamma(shape) + log(y) dsh.deta <- dtheta.deta(shape, .link , earg = .earg ) c(w) * dl.dsh * dsh.deta }), list( .link = link, .earg = earg ))), weight = expression({ index <- iam(NA, NA, M, both = TRUE, diag = TRUE) wz <- matrix(-trigamma(sumshape), nrow = n, ncol = dimm(M)) wz[, 1:M] <- trigamma(shape) + wz[, 1:M] wz <- c(w) * wz * dsh.deta[, index$row] * dsh.deta[, index$col] wz })) } cauchy.control <- function(save.weights = TRUE, ...) { list(save.weights = save.weights) } cauchy <- function(llocation = "identitylink", lscale = "loge", ilocation = NULL, iscale = NULL, iprobs = seq(0.2, 0.8, by = 0.2), imethod = 1, nsimEIM = NULL, zero = "scale") { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (length(nsimEIM) && (!is.Numeric(nsimEIM, length.arg = 1, integer.valued = TRUE) || nsimEIM <= 50)) stop("argument 'nsimEIM' should be an integer greater than 50") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") if (!is.Numeric(iprobs, positive = TRUE) || max(iprobs) >= 1) stop("bad input for argument 'iprobs'") new("vglmff", blurb = c("Two-parameter Cauchy distribution ", "(location & scale unknown)\n\n", "Link: ", namesof("location", llocat, earg = elocat), "\n", namesof("scale", lscale, earg = escale), "\n\n", "Mean: NA\n", "Variance: NA"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
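  # The infos slot only advertises metadata about the family
  # (M1 = 2 linear predictors per response, the parameter names,
  # the default 'zero') for other functions to query; it plays no
  # part in the fitting itself.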
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("location", "scale"), llocation = .llocat , lscale = .lscale , zero = .zero ) }, list( .zero = zero, .llocat = llocat, .lscale = lscale ))), initialize = eval(substitute(expression({ predictors.names <- c( namesof("location", .llocat , earg = .elocat , tag = FALSE), namesof("scale", .lscale , earg = .escale , tag = FALSE)) w.y.check(w = w, y = y) if (!length(etastart)) { loc.init <- if (length( .ilocat)) .ilocat else { if ( .imethod == 2) median(rep(y, w)) else if ( .imethod == 3) y else { cauchy2.Loglikfun <- function(loc, y, x, w, extraargs) { iprobs <- .iprobs qy <- quantile(rep(y, w), probs = iprobs) ztry <- tan(pi*(iprobs-0.5)) btry <- (qy - loc) / ztry scal <- median(btry, na.rm = TRUE) if (scal <= 0) scal <- 0.1 sum(c(w) * dcauchy(x = y, loc = loc, scale = scal, log = TRUE)) } loc.grid <- c(quantile(y, probs = seq(0.1, 0.9, by = 0.05))) try.this <- grid.search(loc.grid, objfun = cauchy2.Loglikfun, y = y, x = x, w = w) try.this <- rep_len(c(try.this), n) try.this } } loc.init <- rep_len(c(loc.init), n) sca.init <- if (length( .iscale )) .iscale else { iprobs <- .iprobs qy <- quantile(rep(y, w), probs = iprobs) ztry <- tan(pi*(iprobs-0.5)) btry <- (qy - loc.init[1]) / ztry sca.init <- median(btry, na.rm = TRUE) if (sca.init <= 0) sca.init <- 0.01 sca.init } sca.init <- rep_len(c(sca.init), n) if ( .llocat == "loge") loc.init <- abs(loc.init) + 0.01 etastart <- cbind(theta2eta(loc.init, .llocat , earg = .elocat ), theta2eta(sca.init, .lscale , earg = .escale )) } }), list( .ilocat = ilocat, .elocat = elocat, .llocat = llocat, .iscale = iscale, .escale = escale, .lscale = lscale, .iprobs = iprobs, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .llocat , earg = .elocat ) }, list( .llocat = llocat, .elocat = elocat ))), last = eval(substitute(expression({ misc$expected <- TRUE misc$link <- c("location" = .llocat , "scale" =.lscale) misc$earg <- list("location" = .elocat , "scale" = .escale ) misc$imethod <- .imethod }), list( .escale = escale, .elocat = elocat, .imethod = imethod, .llocat = llocat, .lscale = lscale ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) myscale <- eta2theta(eta[, 2], .lscale , earg = .escale ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dcauchy(x = y, loc = locat, sc = myscale, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat ))), vfamily = c("cauchy"), validparams = eval(substitute(function(eta, y, extra = NULL) { location <- eta2theta(eta[, 1], .llocat , earg = .elocat ) myscale <- eta2theta(eta[, 2], .lscale , earg = .escale ) okay1 <- all(is.finite(location)) && all(is.finite(myscale )) && all(0 < myscale) okay1 }, list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) locat <- eta2theta(eta[, 1], .llocat , earg = .elocat ) myscale <- eta2theta(eta[, 2], .lscale , earg = .escale ) rcauchy(nsim * length(myscale), loc = locat, sc = myscale) }, list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = 
llocat ))), deriv = eval(substitute(expression({ location <- eta2theta(eta[, 1], .llocat , earg = .elocat ) myscale <- eta2theta(eta[, 2], .lscale , earg = .escale ) dlocation.deta <- dtheta.deta(location, .llocat , earg = .elocat ) dscale.deta <- dtheta.deta(myscale, .lscale , earg = .escale ) Z <- (y-location) / myscale dl.dlocation <- 2 * Z / ((1 + Z^2) * myscale) dl.dscale <- (Z^2 - 1) / ((1 + Z^2) * myscale) c(w) * cbind(dl.dlocation * dlocation.deta, dl.dscale * dscale.deta) }), list( .escale = escale, .lscale = lscale, .elocat = elocat, .llocat = llocat ))), weight = eval(substitute(expression({ run.varcov <- 0 ind1 <- iam(NA, NA, M = M, both = TRUE, diag = TRUE) dthetas.detas = cbind(dlocation.deta, dscale.deta) if (length( .nsimEIM )) { for (ii in 1:( .nsimEIM )) { ysim <- rcauchy(n, loc = location, scale = myscale) Z <- (ysim-location) / myscale dl.dlocation <- 2 * Z / ((1 + Z^2) * myscale) dl.dscale <- (Z^2 - 1) / ((1 + Z^2) * myscale) rm(ysim) temp3 <- matrix(c(dl.dlocation, dl.dscale), n, 2) run.varcov <- ((ii-1) * run.varcov + temp3[, ind1$row.index] * temp3[, ind1$col.index]) / ii } wz <- if (intercept.only) matrix(colMeans(run.varcov), n, ncol(run.varcov), byrow = TRUE) else run.varcov wz <- wz * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] wz <- c(w) * matrix(wz, n, dimm(M)) } else { wz <- cbind(matrix(0.5 / myscale^2, n, 2), matrix(0, n, 1)) * dthetas.detas[, ind1$row] * dthetas.detas[, ind1$col] wz <- c(w) * wz[, 1:M] # diagonal wz } wz }), list( .escale = escale, .lscale = lscale, .nsimEIM = nsimEIM, .elocat = elocat, .llocat = llocat )))) } cauchy1 <- function(scale.arg = 1, llocation = "identitylink", ilocation = NULL, imethod = 1) { llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation if (!is.Numeric(scale.arg, positive = TRUE)) stop("bad input for 'scale.arg'") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("One-parameter Cauchy distribution ", "(location unknown, scale known)\n\n", "Link: ", namesof("location", llocat, earg = elocat), "\n\n", "Mean: NA\n", "Variance: NA"), infos = eval(substitute(function(...) 
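  # A minimal usage sketch (hypothetical data; the scale is treated
  # as known and only the location is estimated):
  #   cdata <- data.frame(y = rcauchy(500, location = 10, scale = 0.4))
  #   fit <- vglm(y ~ 1, cauchy1(scale.arg = 0.4), data = cdata)
  #   Coef(fit)  # estimated location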
{ list(M1 = 1, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("location"), llocation = .llocat , imethod = .imethod ) }, list( .llocat = llocat, .imethod = imethod ))), initialize = eval(substitute(expression({ predictors.names <- namesof("location", .llocat , earg = .elocat , tag = FALSE) w.y.check(w = w, y = y) if (!length(etastart)) { loc.init <- if (length( .ilocat)) .ilocat else { if ( .imethod == 2) median(rep(y, w)) else if ( .imethod == 3) y else { cauchy1.Loglikfun <- function(loc, y, x, w, extraargs) { scal <- extraargs sum(c(w) * dcauchy(x = y, loc = loc, scale = scal, log = TRUE)) } loc.grid <- quantile(y, probs = seq(0.1, 0.9, by = 0.05)) try.this <- grid.search(loc.grid, objfun = cauchy1.Loglikfun, y = y, x = x, w = w, extraargs = .scale.arg ) try.this <- rep_len(try.this, n) try.this } } loc.init <- rep_len(loc.init, n) if ( .llocat == "loge") loc.init = abs(loc.init)+0.01 etastart <- theta2eta(loc.init, .llocat , earg = .elocat ) } }), list( .scale.arg = scale.arg, .ilocat = ilocat, .elocat = elocat, .llocat = llocat, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta, .llocat , earg = .elocat ) }, list( .llocat = llocat, .elocat = elocat ))), last = eval(substitute(expression({ misc$link <- c("location" = .llocat ) misc$earg <- list("location" = .elocat ) misc$expected <- TRUE misc$scale.arg <- .scale.arg }), list( .scale.arg = scale.arg, .elocat = elocat, .llocat = llocat ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { locat <- eta2theta(eta, .llocat , earg = .elocat ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dcauchy(x = y, loc = locat, scale = .scale.arg , log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .elocat = elocat, .scale.arg = scale.arg, .llocat = llocat ))), vfamily = c("cauchy1"), validparams = eval(substitute(function(eta, y, extra = NULL) { locat <- eta2theta(eta, .llocat , earg = .elocat ) okay1 <- all(is.finite(locat)) okay1 }, list( .elocat = elocat, .scale.arg = scale.arg, .llocat = llocat ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) locat <- eta2theta(eta, .llocat , earg = .elocat ) rcauchy(nsim * length(locat), loc = locat, sc = .scale.arg ) }, list( .elocat = elocat, .scale.arg = scale.arg, .llocat = llocat ))), deriv = eval(substitute(expression({ locat <- eta2theta(eta, .llocat , earg = .elocat ) temp <- (y-locat)/.scale.arg dl.dlocat <- 2 * temp / ((1 + temp^2) * .scale.arg) dlocation.deta <- dtheta.deta(locat, .llocat , earg = .elocat ) c(w) * dl.dlocat * dlocation.deta }), list( .elocat = elocat, .scale.arg = scale.arg, .llocat = llocat ))), weight = eval(substitute(expression({ wz <- c(w) * dlocation.deta^2 / ( .scale.arg^2 * 2) wz }), list( .elocat = elocat, .scale.arg = scale.arg, .llocat = llocat )))) } logistic1 <- function(llocation = "identitylink", scale.arg = 1, imethod = 1) { if (!is.Numeric(scale.arg, length.arg = 1, positive = TRUE)) stop("'scale.arg' must be a single positive number") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") new("vglmff", 
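  # A minimal usage sketch (hypothetical data; scale.arg is fixed,
  # only the location is estimated):
  #   ldata <- data.frame(y = rlogis(200, location = 2, scale = 0.5))
  #   fit <- vglm(y ~ 1, logistic1(scale.arg = 0.5), data = ldata)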
blurb = c("One-parameter logistic distribution ", "(location unknown, scale known)\n\n", "Link: ", namesof("location", llocat, earg = elocat), "\n\n", "Mean: location", "\n", "Variance: (pi*scale)^2 / 3"), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("location"), scale.arg = .scale.arg , llocation = .llocation ) }, list( .llocation = llocation, .scale.arg = scale.arg ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y) predictors.names <- namesof("location", .llocat , earg = .elocat , tag = FALSE) if (!length(etastart)) { locat.init <- if ( .imethod == 1) y else median(rep(y, w)) locat.init <- rep_len(locat.init, n) if ( .llocat == "loge") locat.init <- abs(locat.init) + 0.001 etastart <- theta2eta(locat.init, .llocat , earg = .elocat ) } }), list( .imethod = imethod, .llocat = llocat, .elocat = elocat ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta, .llocat , earg = .elocat ) }, list( .llocat = llocat, .elocat = elocat ))), last = eval(substitute(expression({ misc$link <- c(location = .llocat) misc$earg <- list(location = .elocat ) misc$scale.arg <- .scale.arg }), list( .llocat = llocat, .elocat = elocat, .scale.arg = scale.arg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { locat <- eta2theta(eta, .llocat , earg = .elocat ) zedd <- (y-locat) / .scale.arg if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dlogis(x = y, locat = locat, scale = .scale.arg , log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llocat = llocat, .elocat = elocat, .scale.arg = scale.arg ))), vfamily = c("logistic1"), validparams = eval(substitute(function(eta, y, extra = NULL) { locat <- eta2theta(eta, .llocat , earg = .elocat ) okay1 <- all(is.finite(locat)) okay1 }, list( .elocat = elocat, .scale.arg = scale.arg, .llocat = llocat ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) locat <- eta2theta(eta, .llocat , earg = .elocat ) rlogis(nsim * length(locat), location = locat, scale = .scale.arg ) }, list( .llocat = llocat, .elocat = elocat, .scale.arg = scale.arg ))), deriv = eval(substitute(expression({ locat <- eta2theta(eta, .llocat , earg = .elocat ) ezedd <- exp(-(y-locat) / .scale.arg ) dl.dlocat <- (1 - ezedd) / ((1 + ezedd) * .scale.arg) dlocat.deta <- dtheta.deta(locat, .llocat , earg = .elocat ) c(w) * dl.dlocat * dlocat.deta }), list( .llocat = llocat, .elocat = elocat, .scale.arg = scale.arg ))), weight = eval(substitute(expression({ wz <- c(w) * dlocat.deta^2 / ( .scale.arg^2 * 3) wz }), list( .scale.arg = scale.arg )))) } erlang <- function(shape.arg, lscale = "loge", imethod = 1, zero = NULL) { if (!is.Numeric(shape.arg, # length.arg = 1, integer.valued = TRUE, positive = TRUE)) stop("'shape' must be a positive integer") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") new("vglmff", blurb = c("Erlang distribution\n\n", "Link: ", namesof("scale", lscale, earg = escale), "\n", "\n", "Mean: shape * scale", "\n", "Variance: shape * scale^2"), constraints = 
eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, multipleResponses = TRUE, shape.arg = .shape.arg , parameters.names = c("scale"), expected = TRUE, hadof = TRUE, zero = .zero ) }, list( .zero = zero, .shape.arg = shape.arg ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 1 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly parameters.names <- param.names("scale", ncoly) predictors.names <- namesof(parameters.names, .lscale , earg = .escale , tag = FALSE) shape.mat <- matrix( .shape.arg , NROW(y), NCOL(y), byrow = TRUE) if (!length(etastart)) { sc.init <- if ( .imethod == 1) { y / shape.mat } else if ( .imethod == 2) { (colSums(y * w) / colSums(w)) / shape.mat } else if ( .imethod == 3) { matrix(apply(y, 2, median), n, ncoly, byrow = TRUE) / shape.mat } if ( !is.matrix(sc.init)) sc.init <- matrix(sc.init, n, M, byrow = TRUE) etastart <- theta2eta(sc.init, .lscale , earg = .escale ) } }), list( .lscale = lscale, .escale = escale, .shape.arg = shape.arg, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta <- as.matrix(eta) SC <- eta2theta(eta, .lscale , earg = .escale ) shape.mat <- matrix( .shape.arg , nrow(eta), ncol(eta), byrow = TRUE) shape.mat * SC }, list( .lscale = lscale, .escale = escale, .shape.arg = shape.arg ))), last = eval(substitute(expression({ misc$link <- c(rep_len( .lscale , ncoly)) names(misc$link) <- parameters.names misc$earg <- vector("list", M) names(misc$earg) <- parameters.names for (ii in 1:ncoly) { misc$earg[[ii]] <- .escale } misc$shape.arg <- .shape.arg }), list( .lscale = lscale, .escale = escale, .shape.arg = shape.arg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { sc <- eta2theta(eta, .lscale , earg = .escale ) shape.mat <- matrix( .shape.arg , NROW(y), NCOL(y), byrow = TRUE) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (( shape.mat - 1) * log(y) - y / sc - shape.mat * log(sc) - lgamma( shape.mat )) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale, .escale = escale, .shape.arg = shape.arg ))), vfamily = c("erlang"), validparams = eval(substitute(function(eta, y, extra = NULL) { sc <- eta2theta(eta, .lscale , earg = .escale ) okay1 <- all(is.finite(sc)) && all(0 < sc) okay1 }, list( .lscale = lscale, .escale = escale, .shape.arg = shape.arg ))), hadof = eval(substitute( function(eta, extra = list(), linpred.index = 1, w = 1, dim.wz = c(NROW(eta), NCOL(eta) * (NCOL(eta)+1)/2), deriv = 1, ...) 
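  # hadof returns the expected information element for the scale
  # parameter when deriv = 0, and its first three derivatives with
  # respect to scale for deriv = 1, 2, 3; diagnostics such as
  # hdeff() can make use of these higher-order quantities.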
{ sc <- eta2theta(eta, .lscale , earg = .escale ) shape.mat <- matrix( .shape.arg , NROW(eta), NCOL(eta), byrow = TRUE) ans <- c(w) * switch(as.character(deriv), "0" = shape.mat / sc^2, "1" = ( -2) * shape.mat / sc^3, "2" = ( +6) * shape.mat / sc^4, "3" = (-24) * shape.mat / sc^5, stop("argument 'deriv' must be 0, 1, 2 or 3")) if (deriv == 0) ans else retain.col(ans, linpred.index) # Since M1 = 1 }, list( .lscale = lscale, .escale = escale, .shape.arg = shape.arg ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) Scale <- eta2theta(eta, .lscale , earg = .escale ) shape.mat <- matrix( .shape.arg , NROW(eta), NCOL(eta), byrow = TRUE) rgamma(nsim * length(Scale), shape = shape.mat , scale = Scale ) }, list( .lscale = lscale, .escale = escale, .shape.arg = shape.arg ))), deriv = eval(substitute(expression({ sc <- eta2theta(eta, .lscale , earg = .escale ) shape.mat <- matrix( .shape.arg , NROW(eta), NCOL(eta), byrow = TRUE) dl.dsc <- (y / sc - shape.mat) / sc dsc.deta <- dtheta.deta(sc, .lscale , earg = .escale ) c(w) * dl.dsc * dsc.deta }), list( .lscale = lscale, .escale = escale, .shape.arg = shape.arg ))), weight = eval(substitute(expression({ ned2l.dsc2 <- shape.mat / sc^2 wz <- c(w) * dsc.deta^2 * ned2l.dsc2 wz }), list( .escale = escale, .shape.arg = shape.arg )))) } # erlang dbort <- function(x, Qsize = 1, a = 0.5, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(x)) stop("bad input for argument 'x'") if (!is.Numeric(Qsize, length.arg = 1, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'Qsize'") if (!is.Numeric(a, positive = TRUE) || max(a) >= 1) stop("bad input for argument 'a'") N <- max(length(x), length(Qsize), length(a)) if (length(x) != N) x <- rep_len(x, N) if (length(a) != N) a <- rep_len(a, N) if (length(Qsize) != N) Qsize <- rep_len(Qsize, N) xok <- (x >= Qsize) & (x == round(x)) & (a > 0) & (a < 1) ans <- rep_len(if (log.arg) log(0) else 0, N) # loglikelihood ans[xok] <- log(Qsize[xok]) - lgamma(x[xok] + 1 - Qsize[xok]) + (x[xok] - 1 - Qsize[xok]) * log(x[xok]) + (x[xok] - Qsize[xok]) * log(a[xok]) - a[xok] * x[xok] if (!log.arg) { ans[xok] <- exp(ans[xok]) } ans } rbort <- function(n, Qsize = 1, a = 0.5) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (!is.Numeric(Qsize, length.arg = 1, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'Qsize'") if (!is.Numeric(a, positive = TRUE) || max(a) >= 1) stop("bad input for argument 'a'") N <- use.n qsize <- rep_len(Qsize, N) a <- rep_len(a, N) totqsize <- qsize fini <- (qsize < 1) while (any(!fini)) { additions <- rpois(sum(!fini), a[!fini]) qsize[!fini] <- qsize[!fini] + additions totqsize[!fini] <- totqsize[!fini] + additions qsize <- qsize - 1 fini <- fini | (qsize < 1) } totqsize } borel.tanner <- function(Qsize = 1, link = "logit", imethod = 1) { if (!is.Numeric(Qsize, length.arg = 1, integer.valued = TRUE, positive = TRUE)) stop("bad input for argument 'Qsize'") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1 or 2, 3 or 4") 
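  # The Borel-Tanner distribution is the total progeny of a branching
  # process started from Qsize ancestors with Poisson(a) offspring, so
  # the support is y = Qsize, Qsize + 1, ... and, for 0 < a < 1, the
  # mean is Qsize / (1 - a).
  # A minimal usage sketch (hypothetical data):
  #   bdata <- data.frame(y = rbort(300, Qsize = 1, a = 0.3))
  #   fit <- vglm(y ~ 1, borel.tanner(Qsize = 1), data = bdata)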
new("vglmff", blurb = c("Borel-Tanner distribution\n\n", "Link: ", namesof("a", link, earg = earg), "\n\n", "Mean: Qsize / (1-a)", "\n", "Variance: Qsize * a / (1 - a)^3"), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, Qsize = .Qsize , link = .link , multipleResponses = FALSE ) }, list( .Qsize = Qsize, .link = link ))), initialize = eval(substitute(expression({ if (any(y < .Qsize )) stop("all y values must be >= ", .Qsize ) w.y.check(w = w, y = y, Is.positive.y = TRUE, Is.integer.y = TRUE) predictors.names <- namesof("a", .link , earg = .earg , tag = FALSE) if (!length(etastart)) { a.init <- switch(as.character( .imethod ), "1" = 1 - .Qsize / (y + 1/8), "2" = rep_len(1 - .Qsize / weighted.mean(y, w), n), "3" = rep_len(1 - .Qsize / median(y), n), "4" = rep_len(0.5, n)) etastart <- theta2eta(a.init, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .Qsize = Qsize, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { aa <- eta2theta(eta, .link , earg = .earg ) .Qsize / (1 - aa) }, list( .link = link, .earg = earg, .Qsize = Qsize ))), last = eval(substitute(expression({ misc$link <- c(a = .link) misc$earg <- list(a = .earg ) misc$expected <- TRUE misc$Qsize <- .Qsize }), list( .link = link, .earg = earg, .Qsize = Qsize ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { aa <- eta2theta(eta, .link , earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dbort(x = y, Qsize = .Qsize , a = aa, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg, .Qsize = Qsize ))), vfamily = c("borel.tanner"), validparams = eval(substitute(function(eta, y, extra = NULL) { aa <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(aa)) && all(0 < aa) okay1 }, list( .link = link, .earg = earg, .Qsize = Qsize ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) aa <- eta2theta(eta, .link , earg = .earg ) rbort(nsim * length(aa), Qsize = .Qsize , a = aa) }, list( .link = link, .earg = earg, .Qsize = Qsize ))), deriv = eval(substitute(expression({ aa <- eta2theta(eta, .link , earg = .earg ) dl.da <- (y - .Qsize ) / aa - y da.deta <- dtheta.deta(aa, .link , earg = .earg ) c(w) * dl.da * da.deta }), list( .link = link, .earg = earg, .Qsize = Qsize ))), weight = eval(substitute(expression({ ned2l.da2 <- .Qsize / (aa * (1 - aa)) wz <- c(w) * ned2l.da2 * da.deta^2 wz }), list( .Qsize = Qsize )))) } dfelix <- function(x, rate = 0.25, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(x)) stop("bad input for argument 'x'") if (!is.Numeric(rate, positive = TRUE)) stop("bad input for argument 'rate'") N <- max(length(x), length(rate)) if (length(x) != N) x <- rep_len(x, N) if (length(rate) != N) rate <- rep_len(rate, N) xok <- (x %% 2 == 1) & (x == round(x)) & (x >= 1) & (rate > 0) & (rate < 0.5) ans <- rep_len(if (log.arg) log(0) else 0, N) # loglikelihood ans[xok] <- ((x[xok]-3)/2) * log(x[xok]) + ((x[xok]-1)/2) * log(rate[xok]) - lgamma(x[xok]/2 + 0.5) - rate[xok] * x[xok] if (!log.arg) { ans[xok] <- exp(ans[xok]) } ans } felix <- function(lrate = extlogit(min = 0, max = 0.5), imethod = 1) { lrate <- as.list(substitute(lrate)) erate <- 
link2list(lrate) lrate <- attr(erate, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 4) stop("argument 'imethod' must be 1 or 2, 3 or 4") new("vglmff", blurb = c("Felix distribution\n\n", "Link: ", namesof("rate", lrate, earg = erate), "\n\n", "Mean: 1/(1-2*rate)"), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("rate"), lrate = .lrate , imethod = .imethod ) }, list( .imethod = imethod, .lrate = lrate ))), initialize = eval(substitute(expression({ if (any(y < 1) || any((y+1)/2 != round((y+1)/2))) warning("response should be positive, odd and integer-valued") w.y.check(w = w, y = y) predictors.names <- namesof("rate", .lrate , earg = .erate , tag = FALSE) if (!length(etastart)) { wymean <- weighted.mean(y, w) a.init <- switch(as.character( .imethod ), "1" = (y - 1 + 1/8) / (2 * (y + 1/8) + 1/8), "2" = rep_len((wymean-1+1/8) / (2*(wymean+1/8)+1/8), n), "3" = rep_len((median(y)-1+1/8) / (2*(median(y)+1/8)+1/8), n), "4" = rep_len(0.25, n)) etastart <- theta2eta(a.init, .lrate , earg = .erate ) } }), list( .lrate = lrate, .erate = erate, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { rate <- eta2theta(eta, .lrate , earg = .erate ) 1 / (1 - 2 * rate) }, list( .lrate = lrate, .erate = erate ))), last = eval(substitute(expression({ misc$link <- c(rate = .lrate) misc$earg <- list(rate = .erate ) }), list( .lrate = lrate, .erate = erate ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { rate <- eta2theta(eta, .lrate , earg = .erate ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dfelix(x = y, rate = rate, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lrate = lrate, .erate = erate ))), vfamily = c("felix"), deriv = eval(substitute(expression({ rate <- eta2theta(eta, .lrate , earg = .erate ) dl.da <- (y - 1) / (2 * rate) - y da.deta <- dtheta.deta(rate, .lrate , earg = .erate ) c(w) * dl.da * da.deta }), list( .lrate = lrate, .erate = erate ))), weight = eval(substitute(expression({ ned2l.da2 <- 1 / (rate * (1 - 2 * rate)) wz <- c(w) * da.deta^2 * ned2l.da2 wz }), list( .lrate = lrate )))) } simple.exponential <- function() { new("vglmff", blurb = c("Simple exponential distribution\n", "Link: log(rate)\n"), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { devy <- -log(y) - 1 devmu <- -log(mu) - y / mu devi <- 2 * (devy - devmu) if (residuals) { sign(y - mu) * sqrt(abs(devi) * c(w)) } else { dev.elts <- c(w) * devi if (summation) sum(dev.elts) else dev.elts } }, loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (residuals) return(NULL) if (summation) sum(c(w) * dexp(y, rate = 1 / mu, log = TRUE)) else c(w) * dexp(y, rate = 1 / mu, log = TRUE) }, initialize = expression({ predictors.names <- "loge(rate)" mustart <- y + (y == 0) / 8 }), linkinv = function(eta, extra = NULL) exp(-eta), linkfun = function(mu, extra = NULL) -log(mu), vfamily = "simple.exponential", deriv = expression({ rate <- 1 / mu dl.drate <- mu - y drate.deta <- dtheta.deta(rate, "loge") c(w) * dl.drate * drate.deta }), weight = expression({ ned2l.drate2 <- 1 / rate^2 # EIM wz <- c(w) * drate.deta^2 * ned2l.drate2 wz })) } better.exponential <- function(link = "loge", location = 0, expected = TRUE, ishrinkage = 0.95, parallel = FALSE, zero = 
NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Exponential distribution\n\n", "Link: ", namesof("rate", link, earg, tag = TRUE), "\n", "Mean: ", "mu = ", if (all(location == 0)) "1 / rate" else if (length(unique(location)) == 1) paste(location[1], "+ 1 / rate") else "location + 1 / rate"), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints, apply.int = TRUE) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M) }), list( .parallel = parallel, .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, multipleResponses = TRUE, zero = .zero ) }, list( .zero = zero ))), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { location <- extra$location devy <- -log(y - location) - 1 devmu <- -log(mu - location) - (y - location ) / (mu - location) devi <- 2 * (devy - devmu) if (residuals) sign(y - mu) * sqrt(abs(devi) * w) else { dev.elts <- c(w) * devi if (summation) sum(dev.elts) else dev.elts } }, initialize = eval(substitute(expression({ checklist <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- checklist$w # So ncol(w) == ncol(y) y <- checklist$y extra$ncoly <- ncoly <- ncol(y) extra$M1 <- M1 <- 1 M <- M1 * ncoly extra$location <- matrix( .location , n, ncoly, byrow = TRUE) if (any(y <= extra$location)) stop("all responses must be greater than argument 'location'") mynames1 <- param.names("rate", M) predictors.names <- namesof(mynames1, .link , earg = .earg , short = TRUE) if (length(mustart) + length(etastart) == 0) mustart <- matrix(colSums(y * w) / colSums(w), n, M, byrow = TRUE) * .ishrinkage + (1 - .ishrinkage ) * y + 1 / 8 if (!length(etastart)) etastart <- theta2eta(1 / (mustart-extra$location), .link , .earg ) }), list( .location = location, .link = link, .earg = earg, .ishrinkage = ishrinkage ))), linkinv = eval(substitute(function(eta, extra = NULL) extra$location + 1 / eta2theta(eta, .link , earg = .earg ), list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- rep_len( .link , M) misc$earg <- vector("list", M) names(misc$link) <- names(misc$earg) <- mynames1 for (ii in 1:M) misc$earg[[ii]] <- .earg misc$location <- .location misc$expected <- .expected }), list( .link = link, .earg = earg, .expected = expected, .location = location ))), linkfun = eval(substitute(function(mu, extra = NULL) theta2eta(1 / (mu - extra$location), .link , earg = .earg ), list( .link = link, .earg = earg ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) if (residuals) stop("loglikelihood residuals not implemented yet") else { rate <- 1 / (mu - extra$location) ll.elts <- c(w) * dexp(y - extra$location, rate = rate, log = TRUE) if (summation) sum(ll.elts) else ll.elts }, vfamily = c("better.exponential"), simslot = eval(substitute(function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") mu <- fitted(object) rate <- 1 / (mu - object@extra$location) rexp(nsim * length(rate), rate = rate) }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ rate <- 1 / (mu - extra$location) dl.drate <- mu - y drate.deta <- dtheta.deta(rate, .link , earg = .earg ) c(w) * dl.drate * drate.deta }), list( .link = link, .earg = 
earg ))), weight = eval(substitute(expression({ ned2l.drate2 <- (mu - extra$location)^2 wz <- ned2l.drate2 * drate.deta^2 # EIM if (! .expected ) { # Use the OIM, not the EIM d2rate.deta2 <- d2theta.deta2(rate, .link , earg = .earg ) wz <- wz - dl.drate * d2rate.deta2 } c(w) * wz }), list( .link = link, .expected = expected, .earg = earg )))) } exponential <- function(link = "loge", location = 0, expected = TRUE, ishrinkage = 0.95, parallel = FALSE, zero = NULL) { if (!is.logical(expected) || length(expected) != 1) stop("bad input for argument 'expected'") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.Numeric(ishrinkage, length.arg = 1) || ishrinkage < 0 || ishrinkage > 1) stop("bad input for argument 'ishrinkage'") new("vglmff", blurb = c("Exponential distribution\n\n", "Link: ", namesof("rate", link, earg, tag = TRUE), "\n", "Mean: ", "mu = ", if (all(location == 0)) "1 / rate" else if (length(unique(location)) == 1) paste(location[1], "+ 1 / rate") else "location + 1 / rate"), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints, apply.int = TRUE) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M) }), list( .parallel = parallel, .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, zero = .zero ) }, list( .zero = zero ))), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { location <- extra$location devy <- -log(y - location) - 1 devmu <- -log(mu - location) - (y - location ) / (mu - location) devi <- 2 * (devy - devmu) if (residuals) { sign(y - mu) * sqrt(abs(devi) * w) } else { dev.elts <- c(w) * devi if (summation) { sum(dev.elts) } else { dev.elts } } }, initialize = eval(substitute(expression({ checklist <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- checklist$w y <- checklist$y ncoly <- ncol(y) M1 <- 1 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly extra$location <- matrix( .location , n, ncoly, byrow = TRUE) # By row! 
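    # Initial fitted values shrink each response toward its weighted
    # column mean: with the default ishrinkage = 0.95, mustart is
    # dominated by the column means rather than the raw data.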
if (any(y <= extra$location)) stop("all responses must be greater than ", extra$location) mynames1 <- param.names("rate", M) predictors.names <- namesof(mynames1, .link , earg = .earg , short = TRUE) if (length(mustart) + length(etastart) == 0) mustart <- matrix(colSums(y * w) / colSums(w), n, M, byrow = TRUE) * .ishrinkage + (1 - .ishrinkage ) * y + 1 / 8 if (!length(etastart)) etastart <- theta2eta(1 / (mustart - extra$location), .link , earg = .earg ) }), list( .location = location, .link = link, .earg = earg, .ishrinkage = ishrinkage ))), linkinv = eval(substitute(function(eta, extra = NULL) extra$location + 1 / eta2theta(eta, .link , earg = .earg ), list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- rep_len( .link , M) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for (ii in 1:M) misc$earg[[ii]] <- .earg misc$location <- .location misc$expected <- .expected misc$multipleResponses <- TRUE misc$M1 <- M1 }), list( .link = link, .earg = earg, .expected = expected, .location = location ))), linkfun = eval(substitute(function(mu, extra = NULL) theta2eta(1 / (mu - extra$location), .link , earg = .earg ), list( .link = link, .earg = earg ))), loglikelihood = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { rate <- 1 / (mu - extra$location) ll.elts <- c(w) * dexp(x = y - extra$location, rate, log = TRUE) if (summation) sum(ll.elts) else ll.elts }, vfamily = c("exponential"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") mu <- fitted(object) rate <- 1 / (mu - object@extra$location) rexp(nsim * length(rate), rate = rate) }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ rate <- 1 / (mu - extra$location) dl.drate <- mu - y drate.deta <- dtheta.deta(rate, .link , earg = .earg ) c(w) * dl.drate * drate.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ ned2l.drate2 <- (mu - extra$location)^2 wz <- ned2l.drate2 * drate.deta^2 if (! .expected ) { # Use the OIM, not the EIM d2rate.deta2 <- d2theta.deta2(rate, .link , earg = .earg ) wz <- wz - dl.drate * d2rate.deta2 } c(w) * wz }), list( .link = link, .expected = expected, .earg = earg )))) } gamma1 <- function(link = "loge", zero = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("1-parameter Gamma distribution\n", "Link: ", namesof("shape", link, earg = earg, tag = TRUE), "\n", "Mean: mu (=shape)\n", "Variance: mu (=shape)"), constraints = eval(substitute(expression({ dotzero <- .zero M1 <- 1 eval(negzero.expression.VGAM) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
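  # The 1-parameter gamma has unit scale, so mean = variance = shape.
  # A minimal usage sketch (hypothetical data):
  #   gdata <- data.frame(y = rgamma(100, shape = exp(2)))
  #   fit <- vglm(y ~ 1, gamma1, data = gdata)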
{
    list(M1 = 1,
         Q1 = 1,
         zero = .zero )
  }, list( .zero = zero ))),

  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
              Is.positive.y = TRUE,
              ncol.w.max = Inf,
              ncol.y.max = Inf,
              out.wy = TRUE,
              colsyperw = 1,
              maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    M <- if (is.matrix(y)) ncol(y) else 1
    M1 <- 1

    mynames1 <- param.names("shape", M)
    predictors.names <-
      namesof(mynames1, .link , earg = .earg , short = TRUE)

    if (!length(etastart))
      etastart <- cbind(theta2eta(y + 1/8, .link , earg = .earg ))
  }), list( .link = link, .earg = earg ))),

  linkinv = eval(substitute(function(eta, extra = NULL)
    eta2theta(eta, .link , earg = .earg ),
  list( .link = link, .earg = earg ))),

  last = eval(substitute(expression({
    misc$link <- rep_len( .link , M)
    names(misc$link) <- mynames1

    misc$earg <- vector("list", M)
    names(misc$earg) <- names(misc$link)
    for (ii in 1:M)
      misc$earg[[ii]] <- .earg

    misc$expected <- TRUE
    misc$multipleResponses <- TRUE
    misc$M1 <- M1
  }), list( .link = link, .earg = earg ))),

  linkfun = eval(substitute(function(mu, extra = NULL)
    theta2eta(mu, .link , earg = .earg ),
  list( .link = link, .earg = earg ))),

  loglikelihood =
    function(mu, y, w, residuals = FALSE, eta,
             extra = NULL, summation = TRUE)
    if (residuals) {
      stop("loglikelihood residuals not implemented yet")
    } else {
      ll.elts <- c(w) * dgamma(x = y, shape = mu, scale = 1, log = TRUE)
      if (summation) {
        sum(ll.elts)
      } else {
        ll.elts
      }
    },
  vfamily = c("gamma1"),

  simslot = eval(substitute(
  function(object, nsim) {
    pwts <- if (length(pwts <- object@prior.weights) > 0)
              pwts else weights(object, type = "prior")
    if (any(pwts != 1))
      warning("ignoring prior weights")
    mu <- fitted(object)
    rgamma(nsim * length(mu), shape = mu, scale = 1)
  }, list( .link = link, .earg = earg ))),

  deriv = eval(substitute(expression({
    shape <- mu
    dl.dshape <- log(y) - digamma(shape)
    dshape.deta <- dtheta.deta(shape, .link , earg = .earg )
    c(w) * dl.dshape * dshape.deta
  }), list( .link = link, .earg = earg ))),

  weight = expression({
    ned2l.dshape2 <- trigamma(shape)
    wz <- ned2l.dshape2 * dshape.deta^2
    c(w) * wz
  }))
}



gammaR <- function(lrate = "loge", lshape = "loge",
                   irate = NULL, ishape = NULL,
                   lss = TRUE,
                   zero = "shape") {

  expected <- TRUE  # FALSE does not work well

  iratee <- irate

  lratee <- as.list(substitute(lrate))
  eratee <- link2list(lratee)
  lratee <- attr(eratee, "function.name")

  lshape <- as.list(substitute(lshape))
  eshape <- link2list(lshape)
  lshape <- attr(eshape, "function.name")

  if (length( iratee) && !is.Numeric(iratee, positive = TRUE))
    stop("bad input for argument 'irate'")
  if (length( ishape) && !is.Numeric(ishape, positive = TRUE))
    stop("bad input for argument 'ishape'")

  if (!is.logical(expected) || length(expected) != 1)
    stop("bad input for argument 'expected'")

  ratee.TF <- if (lss) c(TRUE, FALSE) else c(FALSE, TRUE)
  scale.12 <- if (lss) 1:2 else 2:1
  blurb.vec <- c(namesof("rate",  lratee, earg = eratee),
                 namesof("shape", lshape, earg = eshape))
  blurb.vec <- blurb.vec[scale.12]

  new("vglmff",
  blurb = c("2-parameter Gamma distribution\n",
            "Links:    ",
            blurb.vec[1], ", ",
            blurb.vec[2], "\n",
            "Mean:     mu = shape/rate\n",
            "Variance: (mu^2)/shape = shape/rate^2"),
  constraints = eval(substitute(expression({
    constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M,
                                predictors.names = predictors.names,
                                M1 = 2)
  }), list( .zero = zero ))),

  infos = eval(substitute(function(...) {
    list(M1 = 2,
         Q1 = 1,
         expected = .expected ,
         multipleResponses = TRUE,
         zero = .zero )
  }, list( .zero = zero,
           .scale.12 = scale.12,
           .ratee.TF = ratee.TF,
           .expected = expected ))),

  initialize = eval(substitute(expression({
    temp5 <- w.y.check(w = w, y = y,
              Is.positive.y = TRUE,
              ncol.w.max = Inf,
              ncol.y.max = Inf,
              out.wy = TRUE,
              colsyperw = 1,
              maximize = TRUE)
    w <- temp5$w
    y <- temp5$y

    ncoly <- ncol(y)
    M1 <- 2
    extra$ncoly <- ncoly
    extra$M1 <- M1
    M <- M1 * ncoly

    if ( .lss ) {
      mynames1 <- param.names("rate",  ncoly)
      mynames2 <- param.names("shape", ncoly)
      predictors.names <-
          c(namesof(mynames1, .lratee , earg = .eratee , tag = FALSE),
            namesof(mynames2, .lshape , earg = .eshape , tag = FALSE))
    } else {
      mynames1 <- param.names("shape", ncoly)
      mynames2 <- param.names("rate",  ncoly)
      predictors.names <-
          c(namesof(mynames1, .lshape , earg = .eshape , tag = FALSE),
            namesof(mynames2, .lratee , earg = .eratee , tag = FALSE))
    }
    parameters.names <- c(mynames1,
                          mynames2)[interleave.VGAM(M, M1 = M1)]
    predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)]

    Ratee.init <- matrix(if (length( .iratee )) .iratee else 0 + NA,
                         n, ncoly, byrow = TRUE)
    Shape.init <- matrix(if (length( .ishape )) .ishape else 0 + NA,
                         n, ncoly, byrow = TRUE)

    if (!length(etastart)) {
      mymu <- y + 0.167 * (y == 0)

      for (ilocal in 1:ncoly) {
        junk <- lsfit(x, y[, ilocal], wt = w[, ilocal],
                      intercept = FALSE)
        var.y.est <- sum(c(w[, ilocal]) * junk$resid^2) / (nrow(x) -
                     length(junk$coef))

        if (!is.Numeric(Shape.init[, ilocal]))
          Shape.init[, ilocal] <- (mymu[, ilocal])^2 / var.y.est

        if (!is.Numeric(Ratee.init[, ilocal]))
          Ratee.init[, ilocal] <- Shape.init[, ilocal] / mymu[, ilocal]
      }

      if ( .lshape == "loglog")
        Shape.init[Shape.init <= 1] <- 3.1  # Hopefully value is big enough

      etastart <- if ( .lss )
        cbind(theta2eta(Ratee.init, .lratee , earg = .eratee ),
              theta2eta(Shape.init, .lshape , earg = .eshape ))[,
              interleave.VGAM(M, M1 = M1)] else
        cbind(theta2eta(Shape.init, .lshape , earg = .eshape ),
              theta2eta(Ratee.init, .lratee , earg = .eratee ))[,
              interleave.VGAM(M, M1 = M1)]
    }
  }), list( .lratee = lratee, .lshape = lshape,
            .iratee = iratee, .ishape = ishape,
            .eratee = eratee, .eshape = eshape,
            .scale.12 = scale.12, .ratee.TF = ratee.TF,
            .lss = lss ))),

  linkinv = eval(substitute(function(eta, extra = NULL) {
    Ratee <- eta2theta(eta[,   .ratee.TF  ], .lratee , earg = .eratee )
    Shape <- eta2theta(eta[, !( .ratee.TF )], .lshape , earg = .eshape )
    Shape / Ratee
  }, list( .lratee = lratee, .lshape = lshape,
           .eratee = eratee, .eshape = eshape,
           .scale.12 = scale.12, .ratee.TF = ratee.TF,
           .lss = lss ))),

  last = eval(substitute(expression({
    misc$multipleResponses <- TRUE

    M1 <- extra$M1
    avector <- if ( .lss ) c(rep_len( .lratee , ncoly),
                             rep_len( .lshape , ncoly)) else
                           c(rep_len( .lshape , ncoly),
                             rep_len( .lratee , ncoly))
    misc$link <- avector[interleave.VGAM(M, M1 = M1)]
    temp.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)]
    names(misc$link) <- temp.names

    misc$earg <- vector("list", M)
    names(misc$earg) <- temp.names
    for (ii in 1:ncoly) {
      misc$earg[[M1*ii-1]] <- if ( .lss ) .eratee else .eshape
      misc$earg[[M1*ii  ]] <- if ( .lss ) .eshape else .eratee
    }

    misc$M1 <- M1
  }), list( .lratee = lratee, .lshape = lshape,
            .eratee = eratee, .eshape = eshape,
            .scale.12 = scale.12, .ratee.TF = ratee.TF,
            .lss = lss ))),

  loglikelihood = eval(substitute(
    function(mu, y, w, residuals = FALSE, eta,
             extra = NULL, summation = TRUE) {
    Ratee <- eta2theta(eta[,   .ratee.TF  ], .lratee , earg = .eratee )
    Shape <- eta2theta(eta[, !( .ratee.TF )], .lshape , earg = .eshape )
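    # dgamma(rate = Ratee, log = TRUE) below evaluates
    # shape*log(rate) - lgamma(shape) + (shape-1)*log(y) - rate*y.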
if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dgamma(x=y, shape = Shape, rate = Ratee, log=TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lratee = lratee, .lshape = lshape, .eratee = eratee, .eshape = eshape, .scale.12 = scale.12, .ratee.TF = ratee.TF, .lss = lss ))), vfamily = c("gammaR"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) Ratee <- eta2theta(eta[, .ratee.TF ], .lratee , earg = .eratee ) Shape <- eta2theta(eta[, !( .ratee.TF )], .lshape , earg = .eshape ) rgamma(nsim * length(Shape), shape = Shape, rate = Ratee) }, list( .lratee = lratee, .lshape = lshape, .eratee = eratee, .eshape = eshape, .scale.12 = scale.12, .ratee.TF = ratee.TF, .lss = lss ))), deriv = eval(substitute(expression({ M1 <- 2 Ratee <- eta2theta(eta[, .ratee.TF ], .lratee , earg = .eratee ) Shape <- eta2theta(eta[, !( .ratee.TF )], .lshape , earg = .eshape ) dl.dratee <- mu - y dl.dshape <- log(y * Ratee) - digamma(Shape) dratee.deta <- dtheta.deta(Ratee, .lratee , earg = .eratee ) dshape.deta <- dtheta.deta(Shape, .lshape , earg = .eshape ) myderiv <- if ( .lss ) c(w) * cbind(dl.dratee * dratee.deta, dl.dshape * dshape.deta) else c(w) * cbind(dl.dshape * dshape.deta, dl.dratee * dratee.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lratee = lratee, .lshape = lshape, .eratee = eratee, .eshape = eshape, .scale.12 = scale.12, .ratee.TF = ratee.TF, .lss = lss ))), weight = eval(substitute(expression({ ned2l.dratee2 <- Shape / (Ratee^2) ned2l.drateeshape <- -1/Ratee ned2l.dshape2 <- trigamma(Shape) if ( .expected ) { ratee.adjustment <- 0 shape.adjustment <- 0 } else { d2ratee.deta2 <- d2theta.deta2(Ratee, .lratee , earg = .eratee ) d2shape.deta2 <- d2theta.deta2(Shape, .lshape , earg = .eshape ) ratee.adjustment <- dl.dratee * d2ratee.deta2 shape.adjustment <- dl.dshape * d2shape.deta2 } wz <- if ( .lss ) array(c(c(w) * (ned2l.dratee2 * dratee.deta^2 - ratee.adjustment), c(w) * (ned2l.dshape2 * dshape.deta^2 - shape.adjustment), c(w) * (ned2l.drateeshape * dratee.deta * dshape.deta)), dim = c(n, M / M1, 3)) else array(c(c(w) * (ned2l.dshape2 * dshape.deta^2 - shape.adjustment), c(w) * (ned2l.dratee2 * dratee.deta^2 - ratee.adjustment), c(w) * (ned2l.drateeshape * dratee.deta * dshape.deta)), dim = c(n, M / M1, 3)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lratee = lratee, .lshape = lshape, .eratee = eratee, .eshape = eshape, .expected = expected, .scale.12 = scale.12, .ratee.TF = ratee.TF, .lss = lss )))) } gamma2 <- function(lmu = "loge", lshape = "loge", imethod = 1, ishape = NULL, parallel = FALSE, deviance.arg = FALSE, zero = "shape") { if (!is.logical( deviance.arg ) || length( deviance.arg ) != 1) stop("argument 'deviance.arg' must be TRUE or FALSE") apply.parint <- FALSE lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") if (length( ishape) && !is.Numeric(ishape, positive = TRUE)) stop("bad input for argument 'ishape'") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (!is.logical(apply.parint) || length(apply.parint) != 1) stop("argument 'apply.parint' must be a single logical") if (is.logical(parallel) && parallel 
&& length(zero)) stop("set 'zero = NULL' if 'parallel = TRUE'") ans <- new("vglmff", blurb = c("2-parameter gamma distribution", " (McCullagh and Nelder 1989 parameterization)\n", "Links: ", namesof("mu", lmu, earg = emu), ", ", namesof("shape", lshape, earg = eshape), "\n", "Mean: mu\n", "Variance: (mu^2)/shape"), constraints = eval(substitute(expression({ constraints <- cm.VGAM(matrix(1, M, 1), x = x, bool = .parallel , constraints = constraints, apply.int = .apply.parint ) constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero, .parallel = parallel, .apply.parint = apply.parint ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, apply.parint = .apply.parint , expected = TRUE, multipleResponses = TRUE, parameters.names = c("mu", "shape"), parallel = .parallel , zero = .zero ) }, list( .apply.parint = apply.parint, .parallel = parallel, .zero = zero ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y assign("CQO.FastAlgorithm", ( .lmu == "loge" && .lshape == "loge"), envir = VGAMenv) if (any(function.name == c("cqo", "cao")) && is.Numeric( .zero , length.arg = 1) && .zero != -2) stop("argument zero = -2 is required") M <- M1 * ncol(y) NOS <- ncoly <- ncol(y) # Number of species temp1.names <- param.names("mu", NOS) temp2.names <- param.names("shape", NOS) predictors.names <- c(namesof(temp1.names, .lmu , earg = .emu , tag = FALSE), namesof(temp2.names, .lshape , earg = .eshape , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] if (is.logical( .parallel ) & .parallel & ncoly > 1) warning("the constraint matrices may not be correct with ", "multiple responses") if (!length(etastart)) { init.shape <- matrix(1.0, n, NOS) mymu <- y # + 0.167 * (y == 0) # imethod == 1 (the default) if ( .imethod == 2) { for (ii in 1:ncol(y)) { mymu[, ii] <- weighted.mean(y[, ii], w = w[, ii]) } } for (spp in 1:NOS) { junk <- lsfit(x, y[, spp], wt = w[, spp], intercept = FALSE) var.y.est <- sum(w[, spp] * junk$resid^2)/(n - length(junk$coef)) init.shape[, spp] <- if (length( .ishape )) .ishape else mymu[, spp]^2 / var.y.est if ( .lshape == "loglog") init.shape[init.shape[, spp] <= 1, spp] <- 3.1 } etastart <- cbind(theta2eta(mymu, .lmu , earg = .emu ), theta2eta(init.shape, .lshape , earg = .eshape )) etastart <- etastart[, interleave.VGAM(M, M1 = M1), drop = FALSE] } }), list( .lmu = lmu, .lshape = lshape, .ishape = ishape, .emu = emu, .eshape = eshape, .parallel = parallel, .apply.parint = apply.parint, .zero = zero, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, c(TRUE, FALSE), drop = FALSE], .lmu , earg = .emu ) }, list( .lmu = lmu, .emu = emu ))), last = eval(substitute(expression({ if (exists("CQO.FastAlgorithm", envir = VGAMenv)) rm("CQO.FastAlgorithm", envir = VGAMenv) tmp34 <- c(rep_len( .lmu , NOS), rep_len( .lshape , NOS)) names(tmp34) <- c(param.names("mu", NOS), param.names("shape", NOS)) tmp34 <- tmp34[interleave.VGAM(M, M1 = M1)] misc$link <- tmp34 # Already named misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:NOS) { misc$earg[[M1*ii-1]] <- .emu misc$earg[[M1*ii ]] <- .eshape } }), list( .lmu = lmu, .lshape = lshape, .emu = emu, .eshape = eshape ))), linkfun = eval(substitute(function(mu, extra = NULL) { temp <- theta2eta(mu, .lmu , 
earg = .emu ) temp <- cbind(temp, NA * temp) temp[, interleave.VGAM(ncol(temp), M1 = M1), drop = FALSE] }, list( .lmu = lmu, .emu = emu ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { mymu <- mu # eta2theta(eta[, 2*(1:NOS)-1], .lmu , earg = .emu ) shapemat <- eta2theta(eta[, c(FALSE, TRUE), drop = FALSE], .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dgamma(x = y, shape = c(shapemat), scale = c(mymu / shapemat), log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lmu = lmu, .lshape = lshape, .emu = emu, .eshape = eshape))), vfamily = c("gamma2"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) mymu <- eta2theta(eta[, c(TRUE, FALSE)], .lmu , earg = .emu ) shape <- eta2theta(eta[, c(FALSE, TRUE)], .lshape , earg = .eshape ) rgamma(nsim * length(shape), shape = c(shape), scale = c(mymu/shape)) }, list( .lmu = lmu, .lshape = lshape, .emu = emu, .eshape = eshape))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- ncol(eta) / M1 vecTF <- c(TRUE, FALSE) mymu <- eta2theta(eta[, vecTF], .lmu , earg = .emu ) shape <- eta2theta(eta[, !vecTF], .lshape , earg = .eshape ) dl.dmu <- shape * (y / mymu - 1) / mymu dl.dshape <- log(y) + log(shape) - log(mymu) + 1 - digamma(shape) - y / mymu dmu.deta <- dtheta.deta(mymu, .lmu , earg = .emu ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) myderiv <- c(w) * cbind(dl.dmu * dmu.deta, dl.dshape * dshape.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lmu = lmu, .lshape = lshape, .emu = emu, .eshape = eshape))), weight = eval(substitute(expression({ ned2l.dmu2 <- shape / (mymu^2) ned2l.dshape2 <- trigamma(shape) - 1 / shape wz <- matrix(NA_real_, n, M) # 2 = M1; diagonal! 
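    # In the (mu, shape) parameterization the EIM is diagonal (mu and
    # shape are orthogonal), so only the M diagonal elements are filled.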
wz[, vecTF] <- ned2l.dmu2 * dmu.deta^2 wz[, !vecTF] <- ned2l.dshape2 * dshape.deta^2 w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = NOS) }), list( .lmu = lmu )))) if (deviance.arg) ans@deviance <- eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { if (NCOL(y) > 1 && NCOL(w) > 1) stop("cannot handle matrix 'w' yet") M1 <- 2 NOS <- ncol(eta) / 2 temp300 <- eta[, 2*(1:NOS), drop = FALSE] shape <- eta2theta(temp300, .lshape , earg = .eshape ) devi <- -2 * (log(y/mu) - y/mu + 1) if (residuals) { warning("not 100% sure about these deviance residuals!") sign(y - mu) * sqrt(abs(devi) * w) } else { dev.elts <- c(w) * devi if (summation) { sum(dev.elts) } else { dev.elts } } }, list( .lshape = lshape ))) ans } geometric <- function(link = "logit", expected = TRUE, imethod = 1, iprob = NULL, zero = NULL) { if (!is.logical(expected) || length(expected) != 1) stop("bad input for argument 'expected'") link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("Geometric distribution ", "(P[Y=y] = prob * (1 - prob)^y, y = 0, 1, 2,...)\n", "Link: ", namesof("prob", link, earg = earg), "\n", "Mean: mu = (1 - prob) / prob\n", "Variance: mu * (1 + mu) = (1 - prob) / prob^2"), constraints = eval(substitute(expression({ dotzero <- .zero M1 <- 1 eval(negzero.expression.VGAM) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.nonnegative.y = TRUE, Is.integer.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 1 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly mynames1 <- param.names("prob", ncoly) predictors.names <- namesof(mynames1, .link , earg = .earg , tag = FALSE) if (!length(etastart)) { prob.init <- if ( .imethod == 2) 1 / (1 + y + 1/16) else if ( .imethod == 3) 1 / (1 + apply(y, 2, median) + 1/16) else 1 / (1 + colSums(y * w) / colSums(w) + 1/16) if (!is.matrix(prob.init)) prob.init <- matrix(prob.init, n, M, byrow = TRUE) if (length( .iprob )) prob.init <- matrix( .iprob , n, M, byrow = TRUE) etastart <- theta2eta(prob.init, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .imethod = imethod, .iprob = iprob ))), linkinv = eval(substitute(function(eta, extra = NULL) { prob <- eta2theta(eta, .link , earg = .earg ) (1 - prob) / prob }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .link , ncoly)) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for (ii in 1:ncoly) { misc$earg[[ii]] <- .earg } misc$M1 <- M1 misc$expected <- TRUE misc$multipleResponses <- TRUE misc$expected <- .expected misc$imethod <- .imethod misc$iprob <- .iprob }), list( .link = link, .earg = earg, .iprob = iprob, .expected = expected, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { prob <- eta2theta(eta, .link , earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dgeom(x = y, prob = prob, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg 
))), vfamily = c("geometric"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) prob <- eta2theta(eta, .link , earg = .earg ) rgeom(nsim * length(prob), prob = prob) }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ prob <- eta2theta(eta, .link , earg = .earg ) dl.dprob <- -y / (1 - prob) + 1 / prob dprobdeta <- dtheta.deta(prob, .link , earg = .earg ) c(w) * cbind(dl.dprob * dprobdeta) }), list( .link = link, .earg = earg, .expected = expected ))), weight = eval(substitute(expression({ ned2l.dprob2 <- if ( .expected ) { 1 / (prob^2 * (1 - prob)) } else { y / (1 - prob)^2 + 1 / prob^2 } wz <- ned2l.dprob2 * dprobdeta^2 if ( !( .expected )) wz <- wz - dl.dprob * d2theta.deta2(prob, .link , earg = .earg ) c(w) * wz }), list( .link = link, .earg = earg, .expected = expected )))) } dbetageom <- function(x, shape1, shape2, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(x)) stop("bad input for argument 'x'") if (!is.Numeric(shape1, positive = TRUE)) stop("bad input for argument 'shape1'") if (!is.Numeric(shape2, positive = TRUE)) stop("bad input for argument 'shape2'") N <- max(length(x), length(shape1), length(shape2)) if (length(x) != N) x <- rep_len(x, N) if (length(shape1) != N) shape1 <- rep_len(shape1, N) if (length(shape2) != N) shape2 <- rep_len(shape2, N) loglik <- lbeta(1+shape1, shape2 + abs(x)) - lbeta(shape1, shape2) xok <- (x == round(x) & x >= 0) loglik[!xok] <- log(0) if (log.arg) { loglik } else { exp(loglik) } } pbetageom <- function(q, shape1, shape2, log.p = FALSE) { if (!is.Numeric(q)) stop("bad input for argument 'q'") if (!is.Numeric(shape1, positive = TRUE)) stop("bad input for argument 'shape1'") if (!is.Numeric(shape2, positive = TRUE)) stop("bad input for argument 'shape2'") N <- max(length(q), length(shape1), length(shape2)) if (length(q) != N) q <- rep_len(q, N) if (length(shape1) != N) shape1 <- rep_len(shape1, N) if (length(shape2) != N) shape2 <- rep_len(shape2, N) ans <- q * 0 # Retains names(q) if (max(abs(shape1-shape1[1])) < 1.0e-08 && max(abs(shape2-shape2[1])) < 1.0e-08) { qstar <- floor(q) temp <- if (max(qstar) >= 0) dbetageom(x = 0:max(qstar), shape1 = shape1[1], shape2 = shape2[1]) else 0*qstar unq <- unique(qstar) for (ii in unq) { index <- (qstar == ii) ans[index] <- if (ii >= 0) sum(temp[1:(1+ii)]) else 0 } } else { for (ii in 1:N) { qstar <- floor(q[ii]) ans[ii] <- if (qstar >= 0) sum(dbetageom(x = 0:qstar, shape1 = shape1[ii], shape2 = shape2[ii])) else 0 } } if (log.p) log(ans) else ans } rbetageom <- function(n, shape1, shape2) { rgeom(n = n, prob = rbeta(n = n, shape1 = shape1, shape2 = shape2)) } simple.poisson <- function() { new("vglmff", blurb = c("Poisson distribution\n\n", "Link: log(lambda)", "\n", "Variance: lambda"), deviance = function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { nz <- y > 0 devi <- - (y - mu) devi[nz] <- devi[nz] + y[nz] * log(y[nz]/mu[nz]) if (residuals) { sign(y - mu) * sqrt(2 * abs(devi) * w) } else { dev.elts <- 2 * c(w) * devi if (summation) { sum(dev.elts) } else { dev.elts } } }, initialize = expression({ if (NCOL(w) != 1) stop("prior weight must be a vector or a one-column matrix") if (NCOL(y) != 1) stop("response must be a vector or a one-column matrix") predictors.names <- "loge(lambda)" mu <- 
(weighted.mean(y, w) + y) / 2 + 1/8 if (!length(etastart)) etastart <- log(mu) }), linkinv = function(eta, extra = NULL) exp(eta), last = expression({ misc$link <- c(lambda = "loge") misc$earg <- list(lambda = list()) }), link = function(mu, extra = NULL) log(mu), vfamily = "simple.poisson", deriv = expression({ lambda <- mu dl.dlambda <- -1 + y/lambda dlambda.deta <- dtheta.deta(theta = lambda, link = "loge") c(w) * dl.dlambda * dlambda.deta }), weight = expression({ d2l.dlambda2 <- 1 / lambda c(w) * d2l.dlambda2 * dlambda.deta^2 })) } studentt <- function(ldf = "loglog", idf = NULL, tol1 = 0.1, imethod = 1) { ldof <- as.list(substitute(ldf)) edof <- link2list(ldof) ldof <- attr(edof, "function.name") idof <- idf if (length(idof)) if (!is.Numeric(idof) || any(idof <= 1)) stop("argument 'idf' should be > 1") if (!is.Numeric(tol1, positive = TRUE)) stop("argument 'tol1' should be positive") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") new("vglmff", blurb = c("Student t-distribution\n\n", "Link: ", namesof("df", ldof, earg = edof), "\n", "Variance: df / (df - 2) if df > 2\n"), infos = eval(substitute(function(...) { list(M1 = 1, tol1 = .tol1 ) }, list( .tol1 = tol1 ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y) predictors.names <- namesof("df", .ldof , earg = .edof , tag = FALSE) if (!length(etastart)) { init.df <- if (length( .idof )) .idof else { VarY <- var(y) MadY <- mad(y) if (VarY <= (1 + .tol1 )) VarY <- 1.12 if ( .imethod == 1) { 2 * VarY / (VarY - 1) } else if ( .imethod == 2) { ifelse(MadY < 1.05, 30, ifelse(MadY > 1.2, 2, 5)) } else 10 } etastart <- rep_len(theta2eta(init.df, .ldof , earg = .edof ), length(y)) } }), list( .ldof = ldof, .edof = edof, .idof = idof, .tol1 = tol1, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { Dof <- eta2theta(eta, .ldof , earg = .edof ) ans <- 0 * eta ans[Dof <= 1] <- NA ans }, list( .ldof = ldof, .edof = edof ))), last = eval(substitute(expression({ misc$link <- c(df = .ldof ) misc$earg <- list(df = .edof ) misc$imethod <- .imethod misc$expected = TRUE }), list( .ldof = ldof, .edof = edof, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { Dof <- eta2theta(eta, .ldof , earg = .edof ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dt(x = y, df = Dof, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .ldof = ldof, .edof = edof ))), vfamily = c("studentt"), validparams = eval(substitute(function(eta, y, extra = NULL) { Dof <- eta2theta(eta, .ldof , earg = .edof ) okay1 <- all(is.finite(Dof)) && all(0 < Dof) okay1 }, list( .ldof = ldof, .edof = edof ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) Dof <- eta2theta(eta, .ldof , earg = .edof ) rt(nsim * length(Dof), df = Dof) }, list( .ldof = ldof, .edof = edof ))), deriv = eval(substitute(expression({ Dof <- eta2theta(eta, .ldof , earg = .edof ) ddf.deta <- dtheta.deta(Dof, .ldof , earg = .edof ) DDS <- function(df) digamma((df + 1) / 2) - digamma(df / 2) DDSp <- function(df) 0.5 * (trigamma((df + 1) / 2) - trigamma(df / 2)) temp0 <- 1 / Dof temp1 <- temp0 * y^2 dl.ddf <- 0.5 * (-temp0 - log1p(temp1) + (Dof + 1) * y^2 / 
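## Not run -- simple.poisson() is a bare-bones demonstration family; its fit
## should agree with the full poissonff().  'pdata' is invented:
##   pdata <- data.frame(x2 = rnorm(nn <- 100))
##   pdata <- transform(pdata, y = rpois(nn, lambda = exp(1 + x2)))
##   fit.a <- vglm(y ~ x2, simple.poisson, data = pdata)
##   fit.b <- vglm(y ~ x2, poissonff, data = pdata)
##   max(abs(coef(fit.a) - coef(fit.b)))   # Expected to be essentially zero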
(Dof^2 * (1 + temp1)) + DDS(Dof)) c(w) * dl.ddf * ddf.deta }), list( .ldof = ldof, .edof = edof ))), weight = eval(substitute(expression({ const2 <- (Dof + 0) / (Dof + 3) const2[!is.finite(Dof)] <- 1 # Handles Inf tmp6 <- DDS(Dof) nedl2.dnu2 <- 0.5 * (tmp6 *(const2 * tmp6 - 2 / (Dof + 1)) - DDSp(Dof)) wz <- c(w) * nedl2.dnu2 * ddf.deta^2 wz }), list( .ldof = ldof, .edof = edof )))) } Kayfun.studentt <- function(df, bigno = .Machine$double.eps^(-0.46)) { ind1 <- is.finite(df) const4 <- dnorm(0) ans <- df if (any(ind1)) ans[ind1] <- exp(lgamma((df[ind1] + 1) / 2) - lgamma( df[ind1] / 2)) / sqrt(pi * df[ind1]) ans[df <= 0] <- NaN ind2 <- (df >= bigno) if (any(ind2)) { dff <- df[ind2] ans[ind2] <- const4 # 1 / const3 # for handling df = Inf } ans[!ind1] <- const4 # 1 / const3 # for handling df = Inf ans } studentt3 <- function(llocation = "identitylink", lscale = "loge", ldf = "loglog", ilocation = NULL, iscale = NULL, idf = NULL, imethod = 1, zero = c("scale", "df")) { lloc <- as.list(substitute(llocation)) eloc <- link2list(lloc) lloc <- attr(eloc, "function.name") lsca <- as.list(substitute(lscale)) esca <- link2list(lsca) lsca <- attr(esca, "function.name") ldof <- as.list(substitute(ldf)) edof <- link2list(ldof) ldof <- attr(edof, "function.name") iloc <- ilocation isca <- iscale idof <- idf if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (length(iloc)) if (!is.Numeric(iloc)) stop("bad input in argument 'ilocation'") if (length(isca)) if (!is.Numeric(isca, positive = TRUE)) stop("argument 'iscale' should be positive") if (length(idof)) if (!is.Numeric(idof) || any(idof <= 1)) stop("argument 'idf' should be > 1") new("vglmff", blurb = c("Student t-distribution\n\n", "Link: ", namesof("location", lloc, earg = eloc), ", ", namesof("scale", lsca, earg = esca), ", ", namesof("df", ldof, earg = edof), "\n", "Variance: scale^2 * df / (df - 2) if df > 2\n"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
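## Not run -- studentt() estimates only the degrees of freedom (location 0,
## scale 1), so the response should already be standardised.  'tdata' is
## invented:
##   tdata <- data.frame(y = rt(n = 1000, df = 5))
##   tfit <- vglm(y ~ 1, studentt, data = tdata, trace = TRUE)
##   Coef(tfit)   # Estimate of df; the default "loglog" link keeps df > 1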
{ list(M1 = 3, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("location", "scale", "df"), zero = .zero) }, list( .zero = zero ))), initialize = eval(substitute(expression({ M1 <- 3 temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$M1 <- M1 M <- M1 * ncoly # mynames1 <- param.names("location", NOS) mynames2 <- param.names("scale", NOS) mynames3 <- param.names("df", NOS) predictors.names <- c(namesof(mynames1, .lloc , earg = .eloc , tag = FALSE), namesof(mynames2, .lsca , earg = .esca , tag = FALSE), namesof(mynames3, .ldof , earg = .edof , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M1 * NOS, M1 = M1)] if (!length(etastart)) { init.loc <- if (length( .iloc )) .iloc else { if ( .imethod == 2) apply(y, 2, median) else if ( .imethod == 3) (colMeans(y) + t(y)) / 2 else { colSums(w * y) / colSums(w) } } sdvec <- apply(y, 2, sd) init.sca <- if (length( .isca )) .isca else sdvec / 2.3 sdvec <- rep_len(sdvec, max(length(sdvec), length(init.sca))) init.sca <- rep_len(init.sca, max(length(sdvec), length(init.sca))) ind9 <- (sdvec / init.sca <= (1 + 0.12)) sdvec[ind9] <- sqrt(1.12) * init.sca[ind9] init.dof <- if (length( .idof )) .idof else (2 * (sdvec / init.sca)^2) / ((sdvec / init.sca)^2 - 1) if (!is.Numeric(init.dof) || init.dof <= 1) init.dof <- rep_len(3, ncoly) mat1 <- matrix(theta2eta(init.loc, .lloc , earg = .eloc ), n, NOS, byrow = TRUE) mat2 <- matrix(theta2eta(init.sca, .lsca , earg = .esca ), n, NOS, byrow = TRUE) mat3 <- matrix(theta2eta(init.dof, .ldof , earg = .edof ), n, NOS, byrow = TRUE) etastart <- cbind(mat1, mat2, mat3) etastart <- etastart[, interleave.VGAM(ncol(etastart), M1 = M1)] } }), list( .lloc = lloc, .eloc = eloc, .iloc = iloc, .lsca = lsca, .esca = esca, .isca = isca, .ldof = ldof, .edof = edof, .idof = idof, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- extra$NOS M1 <- extra$M1 Loc <- eta2theta(eta[, M1*(1:NOS)-2], .lloc , earg = .eloc ) Dof <- eta2theta(eta[, M1*(1:NOS)-0], .ldof , earg = .edof ) Loc[Dof <= 1] <- NA Loc }, list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .ldof = ldof, .edof = edof ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .lloc , NOS), rep_len( .lsca , NOS), rep_len( .ldof , NOS)) misc$link <- misc$link[interleave.VGAM(M1 * NOS, M1 = M1)] temp.names <- c(mynames1, mynames2, mynames3) temp.names <- temp.names[interleave.VGAM(M1 * NOS, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M1 * NOS) names(misc$earg) <- temp.names for (ii in 1:NOS) { misc$earg[[M1*ii-2]] <- .eloc misc$earg[[M1*ii-1]] <- .esca misc$earg[[M1*ii ]] <- .edof } misc$M1 <- M1 misc$imethod <- .imethod misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .ldof = ldof, .edof = edof, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { NOS <- extra$NOS M1 <- extra$M1 Loc <- eta2theta(eta[, M1*(1:NOS)-2], .lloc , earg = .eloc ) Sca <- eta2theta(eta[, M1*(1:NOS)-1], .lsca , earg = .esca ) Dof <- eta2theta(eta[, M1*(1:NOS)-0], .ldof , earg = .edof ) zedd <- (y - Loc) / Sca if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (dt(x = zedd, df = Dof, log = TRUE) - log(Sca)) if (summation) { sum(ll.elts) } else 
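## Note on the eta ordering used throughout studentt3(): with M1 = 3 linear
## predictors per response, interleave.VGAM() reorders the columns so that
## they are grouped by response, e.g. for NOS = 2 responses the order is
##   location1, scale1, df1, location2, scale2, df2.
## This is why linkinv/deriv below index the predictors as
## eta[, M1*(1:NOS)-2] (locations), eta[, M1*(1:NOS)-1] (scales) and
## eta[, M1*(1:NOS)] (dfs).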
{ ll.elts } } }, list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .ldof = ldof, .edof = edof ))), vfamily = c("studentt3"), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- extra$M1 NOS <- extra$NOS Loc <- eta2theta(eta[, M1*(1:NOS)-2], .lloc , earg = .eloc ) Sca <- eta2theta(eta[, M1*(1:NOS)-1], .lsca , earg = .esca ) Dof <- eta2theta(eta[, M1*(1:NOS)-0], .ldof , earg = .edof ) okay1 <- all(is.finite(Loc)) && all(is.finite(Sca)) && all(0 < Sca) && all(is.finite(Dof)) && all(0 < Dof) okay1 }, list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .ldof = ldof, .edof = edof ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) Loc <- eta2theta(eta[, c(TRUE, FALSE, FALSE)], .lloc , earg = .eloc ) Sca <- eta2theta(eta[, c(FALSE, TRUE, FALSE)], .lsca , earg = .esca ) Dof <- eta2theta(eta[, c(FALSE, FALSE, TRUE)], .ldof , earg = .edof ) Loc + Sca * rt(nsim * length(Dof), df = Dof) }, list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .ldof = ldof, .edof = edof ))), deriv = eval(substitute(expression({ M1 <- extra$M1 NOS <- extra$NOS Loc <- eta2theta(eta[, M1*(1:NOS)-2], .lloc , earg = .eloc ) Sca <- eta2theta(eta[, M1*(1:NOS)-1], .lsca , earg = .esca ) Dof <- eta2theta(eta[, M1*(1:NOS)-0], .ldof , earg = .edof ) dloc.deta <- cbind(dtheta.deta(theta = Loc, .lloc , earg = .eloc )) dsca.deta <- cbind(dtheta.deta(theta = Sca, .lsca , earg = .esca )) ddof.deta <- cbind(dtheta.deta(theta = Dof, .ldof , earg = .edof )) zedd <- (y - Loc) / Sca temp0 <- 1 / Dof temp1 <- temp0 * zedd^2 dl.dloc <- (Dof + 1) * zedd / (Sca * (Dof + zedd^2)) dl.dsca <- zedd * dl.dloc - 1 / Sca dl.ddof <- 0.5 * (-temp0 - log1p(temp1) + (Dof+1) * zedd^2 / (Dof^2 * (1 + temp1)) + digamma((Dof+1)/2) - digamma(Dof/2)) ans <- c(w) * cbind(dl.dloc * dloc.deta, dl.dsca * dsca.deta, dl.ddof * ddof.deta) ans <- ans[, interleave.VGAM(ncol(ans), M1 = M1)] ans }), list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .ldof = ldof, .edof = edof ))), weight = eval(substitute(expression({ const1 <- (Dof + 1) / (Dof + 3) const2 <- (Dof + 0) / (Dof + 3) const1[!is.finite(Dof)] <- 1 # Handles Inf const2[!is.finite(Dof)] <- 1 # Handles Inf const4 <- dnorm(0) ned2l.dlocat2 <- const1 / (Sca * (Kayfun.studentt(Dof) / const4))^2 ned2l.dscale2 <- 2 * const2 / Sca^2 DDS <- function(df) digamma((df + 1) / 2) - digamma(df/2) DDSp <- function(df) 0.5 * (trigamma((df + 1) / 2) - trigamma(df/2)) tmp6 <- DDS(Dof) edl2.dnu2 <- 0.5 * (tmp6 * (const2 * tmp6 - 2 / (Dof + 1)) - DDSp(Dof)) ned2l.dshape2 <- cbind(edl2.dnu2) # cosmetic name change ned2l.dshape.dlocat <- cbind(0 * Sca) ned2l.dshape.dscale <- cbind((-1 / (Dof + 1) + const2 * DDS(Dof))/Sca) wz <- array(c(c(w) * ned2l.dlocat2 * dloc.deta^2, c(w) * ned2l.dscale2 * dsca.deta^2, c(w) * ned2l.dshape2 * ddof.deta^2, c(w) * ned2l.dshape2 * 0, c(w) * ned2l.dshape.dscale * dsca.deta * ddof.deta, c(w) * ned2l.dshape.dlocat * dloc.deta * ddof.deta), dim = c(n, M / M1, 6)) wz <- arwz2wz(wz, M = M, M1 = M1) if (FALSE) { wz <- matrix(0.0, n, dimm(M)) wz[, M1*(1:NOS) - 2] <- ned2l.dlocat2 * dloc.deta^2 wz[, M1*(1:NOS) - 1] <- ned2l.dscale2 * dsca.deta^2 wz[, M1*(1:NOS) - 0] <- ned2l.dshape2 * ddof.deta^2 for (ii in ((1:NOS) - 1)) { ind3 <- 1 + ii wz[, iam(ii*M1 + 1, ii*M1 + 3, M = M)] <- ned2l.dshape.dlocat[, ind3] * dloc.deta[, ind3] * ddof.deta[, ind3] wz[, iam(ii*M1 + 2, ii*M1 + 3, 
M = M)] <- ned2l.dshape.dscale[, ind3] * dsca.deta[, ind3] * ddof.deta[, ind3] } while (all(wz[, ncol(wz)] == 0)) wz <- wz[, -ncol(wz)] } wz }), list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .ldof = ldof, .edof = edof )))) } studentt2 <- function(df = Inf, llocation = "identitylink", lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = "scale") { lloc <- as.list(substitute(llocation)) eloc <- link2list(lloc) lloc <- attr(eloc, "function.name") lsca <- as.list(substitute(lscale)) esca <- link2list(lsca) lsca <- attr(esca, "function.name") iloc <- ilocation; isca <- iscale doff <- df if (is.finite(doff)) if (!is.Numeric(doff, positive = TRUE)) stop("argument 'df' must be positive") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (length(iloc)) if (!is.Numeric(iloc)) stop("bad input in argument 'ilocation'") if (length(isca)) if (!is.Numeric(isca, positive = TRUE)) stop("argument 'iscale' should be positive") new("vglmff", blurb = c("Student t-distribution (2-parameter)\n\n", "Link: ", namesof("location", lloc, earg = eloc), ", ", namesof("scale", lsca, earg = esca), "\n", "Variance: scale^2 * df / (df - 2) if df > 2\n"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("location", "scale"), zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ M1 <- 2 temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, out.wy = TRUE, maximize = TRUE) w <- temp5$w y <- temp5$y extra$NOS <- NOS <- ncoly <- ncol(y) # Number of species extra$M1 <- M1 M <- M1 * ncoly # mynames1 <- param.names("location", NOS) mynames2 <- param.names("scale", NOS) predictors.names <- c(namesof(mynames1, .lloc , earg = .eloc , tag = FALSE), namesof(mynames2, .lsca , earg = .esca , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M1 * NOS, M1 = M1)] if (!length(etastart)) { init.loc <- if (length( .iloc )) .iloc else { if ( .imethod == 2) apply(y, 2, median) else if ( .imethod == 3) (colMeans(y) + t(y)) / 2 else { colSums(w * y) / colSums(w) } } sdvec <- apply(y, 2, sd) init.sca <- if (length( .isca )) .isca else sdvec / 2.3 mat1 <- matrix(theta2eta(init.loc, .lloc , earg = .eloc ), n, NOS, byrow = TRUE) mat2 <- matrix(theta2eta(init.sca, .lsca , earg = .esca ), n, NOS, byrow = TRUE) etastart <- cbind(mat1, mat2) etastart <- etastart[, interleave.VGAM(ncol(etastart), M1 = M1)] } }), list( .lloc = lloc, .eloc = eloc, .iloc = iloc, .lsca = lsca, .esca = esca, .isca = isca, .doff = doff, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { NOS <- extra$NOS M1 <- extra$M1 Loc <- eta2theta(eta[, M1*(1:NOS) - 1], .lloc , earg = .eloc ) Dof <- matrix( .doff , NROW(Loc), NOS, byrow = TRUE) Loc[Dof <= 1] <- NA Loc }, list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .doff = doff ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .lloc , NOS), rep_len( .lsca , NOS)) temp.names <- c(mynames1, mynames2) temp.names <- temp.names[interleave.VGAM(M1 * NOS, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M1 * NOS) names(misc$earg) <- temp.names for (ii in 1:NOS) { misc$earg[[M1*ii-1]] <- .eloc misc$earg[[M1*ii-0]] <- .esca } misc$M1 <- 
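## Not run -- usage sketch for studentt3(); 'tdata' is invented.  The default
## 'zero' makes scale and df intercept-only while the location gets the full
## formula:
##   tdata <- data.frame(x2 = runif(nn <- 1000))
##   tdata <- transform(tdata, y = 1 + 2 * x2 + 0.5 * rt(nn, df = 4))
##   fit3 <- vglm(y ~ x2, studentt3, data = tdata, trace = TRUE)
##   coef(fit3, matrix = TRUE)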
M1 misc$simEIM <- TRUE misc$df <- .doff misc$imethod <- .imethod misc$expected = TRUE misc$multipleResponses <- TRUE }), list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .doff = doff, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { NOS <- extra$NOS M1 <- extra$M1 Loc <- eta2theta(eta[, M1*(1:NOS)-1], .lloc , earg = .eloc ) Sca <- eta2theta(eta[, M1*(1:NOS)-0], .lsca , earg = .esca ) Dof <- matrix( .doff , NROW(Loc), NOS, byrow = TRUE) zedd <- (y - Loc) / Sca if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (dt(x = zedd, df = Dof, log = TRUE) - log(Sca)) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .doff = doff ))), vfamily = c("studentt2"), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- extra$M1 NOS <- extra$NOS Loc <- eta2theta(eta[, M1*(1:NOS)-1], .lloc , earg = .eloc ) Sca <- eta2theta(eta[, M1*(1:NOS)-0], .lsca , earg = .esca ) Dof <- .doff okay1 <- all(is.finite(Loc)) && all(is.finite(Sca)) && all(0 < Sca) && all(is.finite(Dof)) && all(0 < Dof) okay1 }, list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .doff = doff ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) extra <- object@extra NOS <- extra$NOS Loc <- eta2theta(eta[, c(TRUE, FALSE)], .lloc , earg = .eloc ) Sca <- eta2theta(eta[, c(FALSE, TRUE)], .lsca , earg = .esca ) Dof <- matrix( .doff , NROW(Loc), NOS, byrow = TRUE) Loc + Sca * rt(nsim * length(Sca), df = Dof) }, list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .doff = doff ))), deriv = eval(substitute(expression({ M1 <- extra$M1 NOS <- extra$NOS Loc <- eta2theta(eta[, M1*(1:NOS)-1], .lloc , earg = .eloc ) Sca <- eta2theta(eta[, M1*(1:NOS)-0], .lsca , earg = .esca ) Dof <- matrix( .doff , n, NOS, byrow = TRUE) dlocat.deta <- dtheta.deta(theta = Loc, .lloc , earg = .eloc ) dscale.deta <- dtheta.deta(theta = Sca, .lsca , earg = .esca ) zedd <- (y - Loc) / Sca temp0 <- 1 / Dof temp1 <- temp0 * zedd^2 dl.dlocat <- (Dof + 1) * zedd / (Sca * (Dof + zedd^2)) dl.dlocat[!is.finite(Dof)] <- zedd / Sca # Adjust for df=Inf dl.dscale <- zedd * dl.dlocat - 1 / Sca ans <- c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta) ans <- ans[, interleave.VGAM(ncol(ans), M1 = M1)] ans }), list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .doff = doff ))), weight = eval(substitute(expression({ const1 <- (Dof + 1) / (Dof + 3) const2 <- (Dof + 0) / (Dof + 3) const1[!is.finite( Dof )] <- 1 # Handles Inf const2[!is.finite( Dof )] <- 1 # Handles Inf const4 <- dnorm(0) ned2l.dlocat2 <- const1 / (Sca * (Kayfun.studentt(Dof) / const4))^2 ned2l.dscale2 <- 2.0 * const2 / Sca^2 # 2.0 seems to work wz <- matrix(NA_real_, n, M) #2=M; diagonal! 
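## The Student t is symmetric about its location, so the expected information
## between the location and scale parameters is zero; hence 'wz' for
## studentt2() is purely diagonal (M columns) and no iam() off-diagonal
## columns are needed.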
wz[, M1*(1:NOS) - 1] <- ned2l.dlocat2 * dlocat.deta^2 wz[, M1*(1:NOS) ] <- ned2l.dscale2 * dscale.deta^2 w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = NOS) }), list( .lloc = lloc, .eloc = eloc, .lsca = lsca, .esca = esca, .doff = doff )))) } chisq <- function(link = "loge", zero = NULL) { link <- as.list(substitute(link)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Chi-squared distribution\n\n", "Link: ", namesof("df", link, earg = earg, tag = FALSE)), constraints = eval(substitute(expression({ dotzero <- .zero M1 <- 1 eval(negzero.expression.VGAM) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 1 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly extra$ncoly <- NOS <- ncoly # Number of species mynames1 <- param.names("df", NOS) predictors.names <- namesof(mynames1, .link , earg = .earg , tag = FALSE) if (!length(mustart) && !length(etastart)) mustart <- y + (1 / 8) * (y == 0) }), list( .link = link, .earg = earg ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta, .link , earg = .earg ) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .link , ncoly)) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for (ii in 1:ncoly) { misc$earg[[ii]] <- .earg } misc$M1 <- M1 misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .link = link, .earg = earg ))), linkfun = eval(substitute(function(mu, extra = NULL) { theta2eta(mu, .link , earg = .earg ) }, list( .link = link, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { mydf <- eta2theta(eta, .link , earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dchisq(x = y, df = mydf, ncp = 0, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg ))), vfamily = "chisq", validparams = eval(substitute(function(eta, y, extra = NULL) { mydf <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(mydf)) && all(0 < mydf) okay1 }, list( .link = link, .earg = earg ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) Dof <- eta2theta(eta, .link , earg = .earg ) rchisq(nsim * length(Dof), df = Dof, ncp = 0) }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ mydf <- eta2theta(eta, .link , earg = .earg ) dl.dv <- (log(y / 2) - digamma(mydf / 2)) / 2 dv.deta <- dtheta.deta(mydf, .link , earg = .earg ) c(w) * dl.dv * dv.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ ned2l.dv2 <- trigamma(mydf / 2) / 4 wz <- ned2l.dv2 * dv.deta^2 c(w) * wz }), list( .link = link, .earg = earg )))) } dsimplex <- function(x, mu = 0.5, dispersion = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) sigma <- dispersion deeFun <- function(y, mu) (((y - mu) / (mu * (1 - mu)))^2) / (y * (1 - y)) logpdf <- (-0.5 * log(2 * pi) 
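## Not run -- usage sketches; the data frames are invented.
## studentt2() fixes df at a known value (default Inf, i.e. normal errors)
## and estimates location and scale:
##   tdata <- data.frame(y = 2 + 0.5 * rt(1000, df = 3))
##   fit2 <- vglm(y ~ 1, studentt2(df = 3), data = tdata)
##   Coef(fit2)   # Location near 2, scale near 0.5
## chisq() treats the degrees of freedom of a chi-squared response as the
## parameter to estimate:
##   cdata <- data.frame(y = rchisq(500, df = 7))
##   cfit <- vglm(y ~ 1, chisq, data = cdata)
##   Coef(cfit)   # Should be near 7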
- log(sigma) - 1.5 * log(x) - 1.5 * log1p(-x) - 0.5 * deeFun(x, mu) / sigma^2) logpdf[x <= 0.0] <- -Inf # log(0.0) logpdf[x >= 1.0] <- -Inf # log(0.0) logpdf[mu <= 0.0] <- NaN logpdf[mu >= 1.0] <- NaN logpdf[sigma <= 0.0] <- NaN if (log.arg) logpdf else exp(logpdf) } rsimplex <- function(n, mu = 0.5, dispersion = 1) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n oneval <- (length(mu) == 1 && length(dispersion) == 1) answer <- rep_len(0.0, use.n) mu <- rep_len(mu, use.n) dispersion <- rep_len(dispersion, use.n) Kay1 <- 3 * (dispersion * mu * (1-mu))^2 if (oneval) { Kay1 <- Kay1[1] # Since oneval means there is only one unique value mymu <- mu[1] myroots <- polyroot(c(-mymu^2, Kay1+2*mymu^2, -3*Kay1+1-2*mymu, 2*Kay1)) myroots <- myroots[abs(Im(myroots)) < 0.00001] myroots <- Re(myroots) myroots <- myroots[myroots >= 0.0] myroots <- myroots[myroots <= 1.0] pdfmax <- dsimplex(myroots, mymu, dispersion[1]) pdfmax <- rep_len(max(pdfmax), use.n) # For multiple peaks } else { pdfmax <- numeric(use.n) for (ii in 1:use.n) { myroots <- polyroot(c(-mu[ii]^2, Kay1[ii]+2*mu[ii]^2, -3*Kay1[ii]+1-2*mu[ii], 2*Kay1[ii])) myroots <- myroots[abs(Im(myroots)) < 0.00001] myroots <- Re(myroots) myroots <- myroots[myroots >= 0.0] myroots <- myroots[myroots <= 1.0] pdfmax[ii] <- max(dsimplex(myroots, mu[ii], dispersion[ii])) } } index <- 1:use.n nleft <- length(index) while (nleft > 0) { xx <- runif(nleft) # , 0, 1 yy <- runif(nleft, max = pdfmax[index]) newindex <- (1:nleft)[yy < dsimplex(xx, mu[index], dispersion[index])] if (length(newindex)) { answer[index[newindex]] <- xx[newindex] index <- setdiff(index, index[newindex]) nleft <- nleft - length(newindex) } } answer } simplex <- function(lmu = "logit", lsigma = "loge", imu = NULL, isigma = NULL, imethod = 1, ishrinkage = 0.95, zero = "sigma") { lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") lsigma <- as.list(substitute(lsigma)) esigma <- link2list(lsigma) lsigma <- attr(esigma, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (!is.Numeric(ishrinkage, length.arg = 1) || ishrinkage < 0 || ishrinkage > 1) stop("bad input for argument 'ishrinkage'") new("vglmff", blurb = c("Univariate simplex distribution\n\n", "f(y) = [2*pi*sigma^2*(y*(1-y))^3]^(-0.5) * \n", " exp[-0.5*(y-mu)^2 / (sigma^2 * y * ", "(1-y) * mu^2 * (1-mu)^2)],\n", " 0 < y < 1, 0 < mu < 1, sigma > 0\n\n", "Links: ", namesof("mu", lmu, earg = emu), ", ", namesof("sigma", lsigma, earg = esigma), "\n\n", "Mean: mu\n", "Variance function: V(mu) = mu^3 * (1 - mu)^3"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
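## rsimplex() above is an acceptance-rejection sampler: polyroot() finds the
## stationary points of the density (a cubic in y), dsimplex() evaluates the
## height at those candidate modes, and uniform proposals are accepted with
## probability f(x)/pdfmax.  Quick sanity check (not run):
##   set.seed(2)
##   yy <- rsimplex(5000, mu = 0.4, dispersion = 1)
##   mean(yy)   # Should be near 0.4, the mean of the simplex distribution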
{ list(M1 = 2, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("mu", "sigma"), lmu = .lmu , lsigma = .lsigma , zero = .zero ) }, list( .zero = zero, .lsigma = lsigma, .lmu = lmu ))), initialize = eval(substitute(expression({ if (any(y <= 0.0 | y >= 1.0)) stop("all 'y' values must be in (0,1)") w.y.check(w = w, y = y, Is.positive.y = TRUE) predictors.names <- c( namesof("mu", .lmu , earg = .emu , tag = FALSE), namesof("sigma", .lsigma , earg = .esigma , tag = FALSE)) deeFun <- function(y, mu) (((y - mu) / (mu * (1 - mu)))^2) / (y * (1 - y)) if (!length(etastart)) { use.this <- if ( .imethod == 3) weighted.mean(y, w = w) else if ( .imethod == 1) median(y) else mean(y, trim = 0.1) init.mu <- (1 - .ishrinkage ) * y + .ishrinkage * use.this mu.init <- rep_len(if (length( .imu )) .imu else init.mu, n) sigma.init <- if (length( .isigma )) rep_len( .isigma, n) else { use.this <- deeFun(y, mu = init.mu) rep_len(sqrt( if ( .imethod == 3) weighted.mean(use.this, w) else if ( .imethod == 1) median(use.this) else mean(use.this, trim = 0.1)), n) } etastart <- cbind(theta2eta(mu.init, .lmu , earg = .emu ), theta2eta(sigma.init, .lsigma , earg = .esigma )) } }), list( .lmu = lmu, .lsigma = lsigma, .emu = emu, .esigma = esigma, .imu = imu, .isigma = isigma, .ishrinkage = ishrinkage, .imethod = imethod ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .lmu , earg = .emu ) }, list( .lmu = lmu, .emu = emu ))), last = eval(substitute(expression({ misc$link <- c(mu = .lmu , sigma = .lsigma ) misc$earg <- list(mu = .emu , sigma = .esigma ) misc$imu <- .imu misc$isigma <- .isigma misc$imethod <- .imethod misc$ishrinkage <- .ishrinkage }), list( .lmu = lmu, .lsigma = lsigma, .imu = imu, .isigma = isigma, .emu = emu, .esigma = esigma, .ishrinkage = ishrinkage, .imethod = imethod ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { sigma <- eta2theta(eta[, 2], .lsigma , earg = .esigma ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dsimplex(x = y, mu = mu, dispersion = sigma, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lsigma = lsigma, .emu = emu, .esigma = esigma ))), vfamily = c("simplex"), validparams = eval(substitute(function(eta, y, extra = NULL) { mymu <- eta2theta(eta[, 1], .lmu , earg = .emu ) sigma <- eta2theta(eta[, 2], .lsigma , earg = .esigma ) okay1 <- all(is.finite(mymu )) && all(is.finite(sigma)) && all(0 < sigma) okay1 }, list( .lmu = lmu, .lsigma = lsigma, .emu = emu, .esigma = esigma ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) mymu <- eta2theta(eta[, 1], .lmu , earg = .emu ) sigma <- eta2theta(eta[, 2], .lsigma , earg = .esigma ) rsimplex(nsim * length(sigma), mu = mymu, dispersion = sigma) }, list( .lmu = lmu, .lsigma = lsigma, .emu = emu, .esigma = esigma ))), deriv = eval(substitute(expression({ deeFun <- function(y, mu) (((y - mu) / (mu * (1 - mu)))^2) / (y * (1 - y)) sigma <- eta2theta(eta[, 2], .lsigma , earg = .esigma ) dmu.deta <- dtheta.deta(mu, .lmu , earg = .emu ) dsigma.deta <- dtheta.deta(sigma, .lsigma , earg = .esigma ) dl.dmu <- (y - mu) * (deeFun(y, mu) + 1 / (mu * (1 - mu))^2) / (mu * (1 - mu) * sigma^2) dl.dsigma <- (deeFun(y, mu) / sigma^2 - 1) / sigma cbind(dl.dmu * dmu.deta, dl.dsigma * dsigma.deta) 
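## Not run -- usage sketch for simplex(); 'sdata' is invented:
##   sdata <- data.frame(x2 = runif(nn <- 1000))
##   sdata <- transform(sdata,
##                      y = rsimplex(nn, mu = logit(1 + x2, inverse = TRUE)))
##   sfit <- vglm(y ~ x2, simplex, data = sdata, trace = TRUE)
##   coef(sfit, matrix = TRUE)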
}), list( .lmu = lmu, .lsigma = lsigma, .emu = emu, .esigma = esigma ))), weight = eval(substitute(expression({ wz <- matrix(0.0, n, M) # Diagonal!! eim11 <- 3 / (mu * (1 - mu)) + 1 / (sigma^2 * (mu * (1 - mu))^3) wz[, iam(1, 1, M)] <- eim11 * dmu.deta^2 wz[, iam(2, 2, M)] <- (2 / sigma^2) * dsigma.deta^2 c(w) * wz }), list( .lmu = lmu, .lsigma = lsigma, .emu = emu, .esigma = esigma )))) } rigff <- function(lmu = "identitylink", llambda = "loge", imu = NULL, ilambda = 1) { if (!is.Numeric(ilambda, positive = TRUE)) stop("bad input for 'ilambda'") lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") new("vglmff", blurb = c("Reciprocal inverse Gaussian distribution \n", "f(y) = [lambda/(2*pi*y)]^(0.5) * \n", " exp[-0.5*(lambda/y) * (y-mu)^2], ", " 0 < y,\n", "Links: ", namesof("mu", lmu, earg = emu), ", ", namesof("lambda", llambda, earg = elambda), "\n\n", "Mean: mu"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("mu", .lmu , earg = .emu , tag = FALSE), namesof("lambda", .llambda , earg = .elambda , tag = FALSE)) if (!length(etastart)) { mu.init <- rep_len(if (length( .imu )) .imu else median(y), n) lambda.init <- rep_len(if (length( .ilambda )) .ilambda else sqrt(var(y)), n) etastart <- cbind(theta2eta(mu.init, .lmu , earg = .emu ), theta2eta(lambda.init, .llambda , earg = .elambda )) } }), list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda, .imu = imu, .ilambda = ilambda ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .lmu , earg = .emu ) }, list( .lmu = lmu, .emu = emu, .elambda = elambda ))), last = eval(substitute(expression({ misc$d3 <- d3 # because save.weights = FALSE misc$link <- c(mu = .lmu , lambda = .llambda ) misc$earg <- list(mu = .emu , lambda = .elambda ) misc$pooled.weight <- pooled.weight }), list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { lambda <- eta2theta(eta[, 2], .llambda , earg = .elambda ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (-0.5 * log(y) + 0.5 * log(lambda) - (0.5 * lambda/y) * (y - mu)^2) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llambda = llambda, .elambda = elambda, .emu = emu ))), vfamily = c("rigff"), validparams = eval(substitute(function(eta, y, extra = NULL) { mymu <- eta2theta(eta[, 1], .lmu , earg = .emu ) lambda <- eta2theta(eta[, 2], .llambda , earg = .elambda ) okay1 <- all(is.finite(mymu )) && all(is.finite(lambda)) && all(0 < lambda) okay1 }, list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda ))), deriv = eval(substitute(expression({ if (iter == 1) { d3 <- deriv3( ~ w * (-0.5*log(y) + 0.5*log(lambda) - (0.5*lambda/y) * (y-mu)^2), c("mu", "lambda"), hessian = TRUE) } lambda <- eta2theta(eta[, 2], .llambda , earg = .elambda ) eval.d3 <- eval(d3) dl.dthetas <- attr(eval.d3, "gradient") dmu.deta <- dtheta.deta(mu, .lmu , earg = .emu ) dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) dtheta.detas <- cbind(dmu.deta, dlambda.deta) dl.dthetas * dtheta.detas }), list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda ))), weight = eval(substitute(expression({ d2l.dthetas2 <- attr(eval.d3, "hessian") wz 
<- matrix(NA_real_, n, dimm(M)) #3=dimm(M) wz[, iam(1, 1, M)] <- -d2l.dthetas2[, 1, 1] * dtheta.detas[, 1]^2 wz[, iam(2, 2, M)] <- -d2l.dthetas2[, 2, 2] * dtheta.detas[, 2]^2 wz[, iam(1, 2, M)] <- -d2l.dthetas2[, 1, 2] * dtheta.detas[, 1] * dtheta.detas[, 2] if (! .expected ) { d2mudeta2 <- d2theta.deta2(mu, .lmu , earg = .emu ) d2lambda <- d2theta.deta2(lambda, .llambda , earg = .elambda ) wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] - dl.dthetas[, 1] * d2mudeta2 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] - dl.dthetas[, 2] * d2lambda } if (intercept.only) { sumw <- sum(w) for (ii in 1:ncol(wz)) wz[, ii] <- sum(wz[, ii]) / sumw pooled.weight <- TRUE wz <- c(w) * wz # Put back the weights } else { pooled.weight <- FALSE } wz }), list( .lmu = lmu, .llambda = llambda, .expected = FALSE, .emu = emu, .elambda = elambda )))) } hypersecant <- function(link.theta = extlogit(min = -pi/2, max = pi/2), init.theta = NULL) { link.theta <- as.list(substitute(link.theta)) earg <- link2list(link.theta) link.theta <- attr(earg, "function.name") new("vglmff", blurb = c("Hyperbolic Secant distribution \n", "f(y) = exp(theta*y + log(cos(theta ))) / (2*cosh(pi*y/2))\n", " for all y,\n", "Link: ", namesof("theta", link.theta , earg = earg), "\n\n", "Mean: tan(theta)"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- namesof("theta", .link.theta , earg = .earg , tag = FALSE) if (!length(etastart)) { theta.init <- rep_len(if (length( .init.theta )) .init.theta else median(y), n) etastart <- theta2eta(theta.init, .link.theta , earg = .earg ) } }), list( .link.theta = link.theta , .earg = earg, .init.theta = init.theta ))), linkinv = eval(substitute(function(eta, extra = NULL) { theta <- eta2theta(eta, .link.theta , earg = .earg ) tan(theta) }, list( .link.theta = link.theta , .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(theta = .link.theta ) misc$earg <- list(theta = .earg ) misc$expected <- TRUE }), list( .link.theta = link.theta , .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { theta <- eta2theta(eta, .link.theta , earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (theta*y + log(cos(theta)) - log(cosh(pi*y/2 ))) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link.theta = link.theta , .earg = earg ))), vfamily = c("hypersecant"), validparams = eval(substitute(function(eta, y, extra = NULL) { theta <- eta2theta(eta, .link.theta , earg = .earg ) okay1 <- all(is.finite(theta)) && all(abs(theta) < pi/2) okay1 }, list( .link.theta = link.theta , .earg = earg ))), deriv = eval(substitute(expression({ theta <- eta2theta(eta, .link.theta , earg = .earg ) dl.dthetas <- y - tan(theta) dparam.deta <- dtheta.deta(theta, .link.theta , earg = .earg ) c(w) * dl.dthetas * dparam.deta }), list( .link.theta = link.theta , .earg = earg ))), weight = expression({ d2l.dthetas2 <- 1 / cos(theta)^2 wz <- c(w) * d2l.dthetas2 * dparam.deta^2 wz })) } hypersecant01 <- function(link.theta = extlogit(min = -pi/2, max = pi/2), init.theta = NULL) { link.theta <- as.list(substitute(link.theta)) earg <- link2list(link.theta) link.theta <- attr(earg, "function.name") new("vglmff", blurb = c("Hyperbolic secant distribution \n", "f(y) = (cos(theta)/pi) * y^(-0.5+theta/pi) * \n", " (1-y)^(-0.5-theta/pi), ", " 0 < y < 1,\n", "Link: ", namesof("theta", link.theta , earg = earg), "\n\n", "Mean: 0.5 + theta/pi", 
"\n", "Variance: (pi^2 - 4*theta^2) / (8*pi^2)"), initialize = eval(substitute(expression({ if (any(y <= 0 | y >= 1)) stop("all response 'y' values must be in (0,1)") w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- namesof("theta", .link.theta , earg = .earg , tag = FALSE) if (!length(etastart)) { theta.init <- rep_len(if (length( .init.theta )) .init.theta else median(y), n) etastart <- theta2eta(theta.init, .link.theta , earg = .earg ) } }), list( .link.theta = link.theta , .earg = earg, .init.theta = init.theta ))), linkinv = eval(substitute(function(eta, extra = NULL) { theta <- eta2theta(eta, .link.theta , earg = .earg ) 0.5 + theta / pi }, list( .link.theta = link.theta , .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(theta = .link.theta ) misc$earg <- list(theta = .earg ) misc$expected <- TRUE }), list( .link.theta = link.theta , .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { theta <- eta2theta(eta, .link.theta , earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (log(cos(theta)) + (-0.5 + theta/pi) * log(y) + (-0.5 - theta/pi) * log1p(-y )) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link.theta = link.theta , .earg = earg ))), vfamily = c("hypersecant01"), validparams = eval(substitute(function(eta, y, extra = NULL) { theta <- eta2theta(eta, .link.theta , earg = .earg ) okay1 <- all(is.finite(theta)) && all(abs(theta) < pi/2) okay1 }, list( .link.theta = link.theta , .earg = earg ))), deriv = eval(substitute(expression({ theta <- eta2theta(eta, .link.theta , earg = .earg ) dl.dthetas <- -tan(theta) + logit(y) / pi dparam.deta <- dtheta.deta(theta, .link.theta , earg = .earg ) c(w) * dl.dthetas * dparam.deta }), list( .link.theta = link.theta , .earg = earg ))), weight = expression({ d2l.dthetas2 <- 1 / cos(theta)^2 wz <- c(w) * d2l.dthetas2 * dparam.deta^2 wz })) } leipnik <- function(lmu = "logit", llambda = logoff(offset = 1), imu = NULL, ilambda = NULL) { lmu <- as.list(substitute(lmu)) emu <- link2list(lmu) lmu <- attr(emu, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") if (is.Numeric(ilambda) && any(ilambda <= -1)) stop("argument 'ilambda' must be > -1") new("vglmff", blurb = c("Leipnik's distribution \n", "f(y) = (y(1-y))^(-1/2) * [1 + (y-mu)^2 / (y*(1-y))]^(-lambda/2) /\n", " Beta[(lambda+1)/2, 1/2], ", " 0 < y < 1, lambda > -1\n", "Links: ", namesof("mu", lmu, earg = emu), ", ", namesof("lambda", llambda, earg = elambda), "\n\n", "Mean: mu\n", "Variance: mu*(1-mu)"), initialize = eval(substitute(expression({ if (any(y <= 0 | y >= 1)) stop("all response 'y' values must be in (0,1)") w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("mu", .lmu , earg = .emu , tag = FALSE), namesof("lambda", .llambda , earg = .elambda , tag = FALSE)) if (!length(etastart)) { mu.init <- rep_len(if (length( .imu )) .imu else (y), n) lambda.init <- rep_len(if (length( .ilambda )) .ilambda else 1/var(y), n) etastart <- cbind(theta2eta(mu.init, .lmu , earg = .emu ), theta2eta(lambda.init, .llambda , earg = .elambda )) } }), list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda, .imu = imu, .ilambda = ilambda ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .lmu , earg = .emu ) }, list( 
.lmu = lmu, .emu = emu, .elambda = elambda ))), last = eval(substitute(expression({ misc$link <- c(mu = .lmu , lambda = .llambda ) misc$earg <- list(mu = .emu , lambda = .elambda ) misc$pooled.weight <- pooled.weight misc$expected <- FALSE }), list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { lambda <- eta2theta(eta[, 2], .llambda , earg = .elambda ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (-0.5*log(y*(1-y)) - 0.5 * lambda * log1p((y-mu)^2 / (y*(1-y ))) - lgamma((lambda+1)/2) + lgamma(1+ lambda/2 )) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llambda = llambda, .emu = emu, .elambda = elambda ))), vfamily = c("leipnik"), validparams = eval(substitute(function(eta, y, extra = NULL) { mymu <- eta2theta(eta[, 1], .lmu , earg = .emu ) lambda <- eta2theta(eta[, 2], .llambda , earg = .elambda ) okay1 <- all(is.finite(mymu )) && all( 0 < mymu & mymu < 1) && all(is.finite(lambda)) && all(-1 < lambda) okay1 }, list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda ))), deriv = eval(substitute(expression({ lambda <- eta2theta(eta[, 2], .llambda , earg = .elambda ) dl.dthetas = cbind(dl.dmu = lambda*(y-mu) / (y*(1-y)+(y-mu)^2), dl.dlambda= -0.5 * log1p((y-mu)^2 / (y*(1-y))) - 0.5*digamma((lambda+1)/2) + 0.5*digamma(1+lambda/2)) dmu.deta <- dtheta.deta(mu, .lmu , earg = .emu ) dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) dtheta.detas <- cbind(dmu.deta, dlambda.deta) c(w) * dl.dthetas * dtheta.detas }), list( .lmu = lmu, .llambda = llambda, .emu = emu, .elambda = elambda ))), weight = eval(substitute(expression({ denominator <- y*(1-y) + (y-mu)^2 d2l.dthetas2 <- array(NA_real_, c(n, 2, 2)) d2l.dthetas2[, 1, 1] <- c(w) * lambda*(-y*(1-y)+(y-mu)^2)/denominator^2 d2l.dthetas2[, 1, 2] <- d2l.dthetas2[, 2, 1] <- c(w) * (y-mu) / denominator d2l.dthetas2[, 2, 2] <- c(w) * (-0.25*trigamma((lambda+1)/2) + 0.25*trigamma(1+lambda/2)) wz <- matrix(NA_real_, n, dimm(M)) #3=dimm(M) wz[, iam(1, 1, M)] <- -d2l.dthetas2[, 1, 1] * dtheta.detas[, 1]^2 wz[, iam(2, 2, M)] <- -d2l.dthetas2[, 2, 2] * dtheta.detas[, 2]^2 wz[, iam(1, 2, M)] <- -d2l.dthetas2[, 1, 2] * dtheta.detas[, 1] * dtheta.detas[, 2] if (!.expected) { d2mudeta2 <- d2theta.deta2(mu, .lmu , earg = .emu ) d2lambda <- d2theta.deta2(lambda, .llambda , earg = .elambda ) wz[, iam(1, 1, M)] <- wz[, iam(1, 1, M)] - dl.dthetas[, 1] *d2mudeta2 wz[, iam(2, 2, M)] <- wz[, iam(2, 2, M)] - dl.dthetas[, 2] *d2lambda } if (intercept.only) { sumw <- sum(w) for (ii in 1:ncol(wz)) wz[, ii] <- sum(wz[, ii]) / sumw pooled.weight <- TRUE wz <- c(w) * wz # Put back the weights } else { pooled.weight <- FALSE } wz }), list( .lmu = lmu, .llambda = llambda, .expected = FALSE, .emu = emu, .elambda = elambda )))) } inv.binomial <- function(lrho = extlogit(min = 0.5, max = 1), llambda = "loge", irho = NULL, ilambda = NULL, zero = NULL) { lrho <- as.list(substitute(lrho)) erho <- link2list(lrho) lrho <- attr(erho, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") new("vglmff", blurb = c("Inverse binomial distribution\n\n", "Links: ", namesof("rho", lrho, earg = erho), ", ", namesof("lambda", llambda, earg = elambda), "\n", "Mean: lambda*(1-rho)/(2*rho-1)\n", "Variance: lambda*rho*(1-rho)/(2*rho-1)^3\n"), constraints = eval(substitute(expression({ constraints <- 
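## Not run -- usage sketch for leipnik(); note its 'expected = FALSE', so
## Newton-Raphson (observed information) is used, with pooled weights for
## intercept-only fits.  'ldata' is invented and convergence is not
## guaranteed:
##   ldata <- data.frame(y = rbeta(100, shape1 = 4, shape2 = 4))
##   lfit <- vglm(y ~ 1, leipnik(ilambda = 1), data = ldata, trace = TRUE)
##   coef(lfit, matrix = TRUE)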
cm.zero.VGAM(constraints, x = x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("rho", .lrho, earg = .erho, tag = FALSE), namesof("lambda", .llambda , earg = .elambda , tag = FALSE)) if (!length(etastart)) { covarn <- sd(c(y))^2 / weighted.mean(y, w) temp1 <- 0.5 + (1 + sqrt(1+8*covarn)) / (8*covarn) temp2 <- 0.5 + (1 - sqrt(1+8*covarn)) / (8*covarn) init.rho <- rep_len(if (length( .irho)) .irho else { ifelse(temp1 > 0.5 && temp1 < 1, temp1, temp2) }, n) init.lambda <- rep_len(if (length( .ilambda)) .ilambda else { (2*init.rho-1) * weighted.mean(y, w) / (1-init.rho)}, n) etastart <- cbind(theta2eta(init.rho, .lrho, earg = .erho), theta2eta(init.lambda, .llambda , earg = .elambda )) } }), list( .llambda = llambda, .lrho = lrho, .elambda = elambda, .erho = erho, .ilambda = ilambda, .irho = irho ))), linkinv = eval(substitute(function(eta, extra = NULL) { rho <- eta2theta(eta[, 1], .lrho, earg = .erho) lambda <- eta2theta(eta[, 2], .llambda , earg = .elambda ) ifelse(rho > 0.5, lambda*(1-rho)/(2*rho-1), NA) }, list( .llambda = llambda, .lrho = lrho, .elambda = elambda, .erho = erho ))), last = eval(substitute(expression({ misc$link <- c(rho= .lrho, lambda = .llambda ) misc$earg <- list(rho= .erho, lambda = .elambda ) misc$pooled.weight <- pooled.weight }), list( .llambda = llambda, .lrho = lrho, .elambda = elambda, .erho = erho ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { rho <- eta2theta(eta[, 1], .lrho , earg = .erho ) lambda <- eta2theta(eta[, 2], .llambda , earg = .elambda ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (log(lambda) - lgamma(2*y+lambda) - lgamma(y+1) - lgamma(y+lambda+1) + y*log(rho) + y*log1p(-rho) + lambda*log(rho)) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llambda = llambda, .lrho = lrho, .elambda = elambda, .erho = erho ))), vfamily = c("inv.binomial"), validparams = eval(substitute(function(eta, y, extra = NULL) { rho <- eta2theta(eta[, 1], .lrho , earg = .erho ) lambda <- eta2theta(eta[, 2], .llambda , earg = .elambda ) okay1 <- all(is.finite(rho )) && all(0.5 < rho & rho < 1) && all(is.finite(lambda)) && all(0 < lambda) okay1 }, list( .llambda = llambda, .lrho = lrho, .elambda = elambda, .erho = erho ))), deriv = eval(substitute(expression({ rho <- eta2theta(eta[, 1], .lrho , earg = .erho ) lambda <- eta2theta(eta[, 2], .llambda , earg = .elambda ) dl.drho <- (y + lambda)/rho - y/(1-rho) dl.dlambda <- 1/lambda - digamma(2*y+lambda) - digamma(y+lambda+1) + log(rho) drho.deta <- dtheta.deta(rho, .lrho , earg = .erho ) dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) c(w) * cbind(dl.drho * drho.deta, dl.dlambda * dlambda.deta ) }), list( .llambda = llambda, .lrho = lrho, .elambda = elambda, .erho = erho ))), weight = eval(substitute(expression({ ned2l.drho2 <- (mu+lambda) / rho^2 + mu / (1-rho)^2 d2l.dlambda2 <- 1/(lambda^2) + trigamma(2*y+lambda) + trigamma(y+lambda+1) ned2l.dlambdarho <- -1/rho wz <- matrix(NA_real_, n, dimm(M)) #3=dimm(M) wz[, iam(1, 1, M)] <- ned2l.drho2 * drho.deta^2 wz[, iam(1, 2, M)] <- ned2l.dlambdarho * dlambda.deta * drho.deta wz[, iam(2, 2, M)] <- d2l.dlambda2 * dlambda.deta^2 d2rhodeta2 <- d2theta.deta2(rho, .lrho, earg = .erho) d2lambda.deta2 <- d2theta.deta2(lambda, .llambda , earg = .elambda ) wz <- c(w) * wz if (intercept.only) { pooled.weight <- TRUE wz[, iam(2, 2, 
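## The initial 'rho' above comes from a moment match: for the inverse
## binomial, Var(Y)/E(Y) = rho/(2*rho - 1)^2.  Writing c for the sample
## variance/mean ratio ('covarn'), solving 4*c*rho^2 - (4*c + 1)*rho + c = 0
## gives the two roots 0.5 + (1 +/- sqrt(1 + 8*c))/(8*c) (i.e. temp1 and
## temp2), and the root lying in (0.5, 1) is taken.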
M)] <- sum(wz[, iam(2, 2, M)]) / sum(w) } else { pooled.weight <- FALSE } wz }), list( .llambda = llambda, .lrho = lrho, .elambda = elambda, .erho = erho )))) } dgenpois <- function(x, lambda = 0, theta, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(lambda), length(theta)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(lambda) != LLL) lambda <- rep_len(lambda, LLL) if (length(theta) != LLL) theta <- rep_len(theta, LLL) llans <- -x*lambda - theta + (x-1) * log(theta + x*lambda) + log(theta) - lgamma(x+1) llans[x < 0] <- log(0) llans[x != round(x)] <- log(0) # x should be integer-valued llans[lambda > 1] <- NaN if (any(ind1 <- (lambda < 0))) { epsilon <- 1.0e-9 # Needed to handle a "<" rather than a "<=". mmm <- pmax(4, floor(theta/abs(lambda) - epsilon)) llans[ind1 & mmm < pmax(-1, -theta/mmm)] <- NaN llans[ind1 & mmm < x] <- log(0) # probability 0, not NaN } if (log.arg) { llans } else { exp(llans) } } genpoisson <- function(llambda = "rhobit", ltheta = "loge", ilambda = NULL, itheta = NULL, use.approx = TRUE, imethod = 1, ishrinkage = 0.95, zero = "lambda") { llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") ltheta <- as.list(substitute(ltheta)) etheta <- link2list(ltheta) ltheta <- attr(etheta, "function.name") if (!is.Numeric(ishrinkage, length.arg = 1) || ishrinkage < 0 || ishrinkage > 1) stop("bad input for argument 'ishrinkage'") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (!is.logical(use.approx) || length(use.approx) != 1) stop("'use.approx' must be logical value") new("vglmff", blurb = c("Generalized Poisson distribution\n\n", "Links: ", namesof("lambda", llambda, earg = elambda), ", ", namesof("theta", ltheta, earg = etheta ), "\n", "Mean: theta / (1-lambda)\n", "Variance: theta / (1-lambda)^3"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 2) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, expected = FALSE, multipleResponses = TRUE, parameters.names = c("lambda", "theta"), imethod = .imethod , zero = .zero ) }, list( .zero = zero, .imethod = imethod ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.integer.y = TRUE, ncol.w.max = Inf, # 1, ncol.y.max = Inf, # 1, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y extra$ncoly <- ncoly <- NOS <- ncol(y) extra$M1 <- M1 <- 2 M <- M1 * ncoly mynames1 <- param.names("lambda", NOS) mynames2 <- param.names("theta", NOS) predictors.names <- c(namesof(mynames1, .llambda , earg = .elambda , tag = FALSE), namesof(mynames2, .ltheta , earg = .etheta , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] init.lambda <- init.theta <- matrix(0, n, NOS) for (spp. in 1: NOS) { init.lambda[, spp.] <- if ( .imethod == 1) { min(max(0.05, 1 - sqrt(weighted.mean(y[, spp.], w[, spp.]) / var(y[, spp.]))), 0.95) } else if ( .imethod == 2) { runif(n, max = 0.1) } else { runif(n, max = 0.7) } init.theta[, spp.] <- if ( .imethod == 2) { (y[, spp.] + weighted.mean(y[, spp.], w[, spp.])) / 2 } else if ( .imethod == 3) { (y[, spp.] + median(y[, spp.])) / 2 } else { (1 - .ishrinkage ) * y[, spp.] 
+ .ishrinkage * weighted.mean(y[, spp.], w[, spp.]) } } if (!length(etastart)) { init.lambda <- if (length( .ilambda )) matrix( .ilambda , n, NOS, byrow = TRUE) else init.lambda init.theta <- if (length( .itheta )) matrix( .itheta , n, NOS, byrow = TRUE) else init.theta etastart <- cbind(theta2eta(init.lambda, .llambda , earg = .elambda ), theta2eta(init.theta, .ltheta , earg = .etheta )) etastart <- etastart[, interleave.VGAM(M, M1 = M1), drop = FALSE] } }), list( .ltheta = ltheta, .llambda = llambda, .etheta = etheta, .elambda = elambda, .imethod = imethod, .ishrinkage = ishrinkage, .itheta = itheta, .ilambda = ilambda )) ), linkinv = eval(substitute(function(eta, extra = NULL) { lambda <- eta2theta(eta[, c(TRUE, FALSE)], .llambda , earg = .elambda ) theta <- eta2theta(eta[, c(FALSE, TRUE)], .ltheta , earg = .etheta ) theta / (1 - lambda) }, list( .ltheta = ltheta, .llambda = llambda, .etheta = etheta, .elambda = elambda ))), last = eval(substitute(expression({ M1 <- extra$M1 temp.names <- c(mynames1, mynames2) temp.names <- temp.names[interleave.VGAM(M1 * ncoly, M1 = M1)] misc$link <- rep_len( .llambda , M1 * ncoly) misc$earg <- vector("list", M1 * ncoly) names(misc$link) <- names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$link[ M1*ii-1 ] <- .llambda misc$link[ M1*ii ] <- .ltheta misc$earg[[M1*ii-1]] <- .elambda misc$earg[[M1*ii ]] <- .etheta } misc$M1 <- M1 misc$expected <- TRUE misc$imethod <- .imethod misc$multipleResponses <- TRUE misc$use.approx <- .use.approx }), list( .ltheta = ltheta, .llambda = llambda, .use.approx = use.approx, .imethod = imethod, .etheta = etheta, .elambda = elambda ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { lambda <- eta2theta(eta[, c(TRUE, FALSE)], .llambda , earg = .elambda ) theta <- eta2theta(eta[, c(FALSE, TRUE)], .ltheta , earg = .etheta ) index <- (y == 0) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- dgenpois(x = y, lambda = lambda, theta = theta, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .ltheta = ltheta, .llambda = llambda, .etheta = etheta, .elambda = elambda ))), vfamily = c("genpoisson"), validparams = eval(substitute(function(eta, y, extra = NULL) { lambda <- eta2theta(eta[, c(TRUE, FALSE)], .llambda , earg = .elambda ) theta <- eta2theta(eta[, c(FALSE, TRUE)], .ltheta , earg = .etheta ) mmm <- ifelse(lambda < 0, floor(-theta/lambda), Inf) if (any(mmm < 4)) { warning("the lower bound is less than 4; choosing 4") mmm <- pmax(mmm, 4) } Lbnd <- pmax(-1, -theta / mmm) okay1 <- all(is.finite(lambda)) && all(Lbnd < lambda & lambda < 1) && all(is.finite(theta )) && all(0 < theta) okay1 }, list( .ltheta = ltheta, .llambda = llambda, .etheta = etheta, .elambda = elambda ))), deriv = eval(substitute(expression({ M1 <- 2 NOS <- ncol(eta)/M1 lambda <- eta2theta(eta[, c(TRUE, FALSE)], .llambda , earg = .elambda ) theta <- eta2theta(eta[, c(FALSE, TRUE)], .ltheta , earg = .etheta ) dl.dlambda <- -y + y*(y-1) / (theta+y*lambda) dl.dtheta <- -1 + (y-1) / (theta+y*lambda) + 1/theta dTHETA.deta <- dtheta.deta(theta, .ltheta , earg = .etheta ) dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) myderiv <- c(w) * cbind(dl.dlambda * dlambda.deta, dl.dtheta * dTHETA.deta ) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .ltheta = ltheta, .llambda = llambda, .etheta = etheta, .elambda = elambda ))), weight = eval(substitute(expression({ wz <- matrix(0, n, M + M-1) # Tridiagonal if ( .use.approx ) { BBB 
<- (theta+2)*(theta+2*lambda-theta*lambda)-(theta^2)*(1-lambda) d2l.dlambda2 <- 2 * theta * (theta+2) / ((1-lambda) * BBB) d2l.dtheta2 <- 2 * (1 + lambda * (2/theta - 1)) / BBB d2l.dthetalambda <- 2 * theta / BBB wz[, M1*(1:NOS) - 1 ] <- d2l.dlambda2 * dlambda.deta^2 wz[, M1*(1:NOS) ] <- d2l.dtheta2 * dTHETA.deta^2 wz[, M1*(1:NOS) + M - 1] <- d2l.dthetalambda * dTHETA.deta * dlambda.deta } else { d2l.dlambda2 <- -y^2 * (y-1) / (theta+y*lambda)^2 d2l.dtheta2 <- -(y-1)/(theta+y*lambda)^2 - 1 / theta^2 d2l.dthetalambda <- -y * (y-1) / (theta+y*lambda)^2 wz[, M1*(1:NOS) - 1 ] <- -d2l.dlambda2 * dlambda.deta^2 wz[, M1*(1:NOS) ] <- -d2l.dtheta2 * dTHETA.deta^2 wz[, M1*(1:NOS) + M - 1] <- -d2l.dthetalambda * dTHETA.deta * dlambda.deta d2THETA.deta2 <- d2theta.deta2(theta, .ltheta , earg = .etheta ) d2lambdadeta2 <- d2theta.deta2(lambda, .llambda , earg = .elambda ) wz[, M1*(1:NOS) - 1 ] <- wz[, M1*(1:NOS) - 1 ] - dl.dlambda * d2lambdadeta2 wz[, M1*(1:NOS) ] <- wz[, M1*(1:NOS) ] - dl.dtheta * d2THETA.deta2 } wz <- w.wz.merge(w = w, wz = wz, n = n, M = M + (M - 1), ndepy = NOS) wz }), list( .ltheta = ltheta, .llambda = llambda, .use.approx = use.approx, .etheta = etheta, .elambda = elambda )))) } dlgamma <- function(x, location = 0, scale = 1, shape = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(scale, positive = TRUE)) stop("bad input for argument 'scale'") if (!is.Numeric(shape, positive = TRUE)) stop("bad input for argument 'shape'") z <- (x-location) / scale logden <- shape * z - exp(z) - log(scale) - lgamma(shape) logden[is.infinite(x)] <- log(0) # 20141210 if (log.arg) logden else exp(logden) } plgamma <- function(q, location = 0, scale = 1, shape = 1, lower.tail = TRUE, log.p = FALSE) { zedd <- (q - location) / scale ans <- pgamma(exp(zedd), shape, lower.tail = lower.tail, log.p = log.p) ans[scale < 0] <- NaN ans } qlgamma <- function(p, location = 0, scale = 1, shape = 1, lower.tail = TRUE, log.p = FALSE) { ans <- location + scale * log(qgamma(p, shape, log.p = log.p, lower.tail = lower.tail)) ans[scale < 0] <- NaN ans } rlgamma <- function(n, location = 0, scale = 1, shape = 1) { ans <- location + scale * log(rgamma(n, shape)) ans[scale < 0] <- NaN ans } lgamma1 <- function(lshape = "loge", ishape = NULL) { init.k <- ishape link <- as.list(substitute(lshape)) earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Log-gamma distribution ", "f(y) = exp(ky - e^y)/gamma(k), ", "shape=k>0\n\n", "Link: ", namesof("k", link, earg = earg), "\n", "\n", "Mean: digamma(k)", "\n"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- namesof("shape", .link , earg = .earg , tag = FALSE) if (!length(etastart)) { k.init <- if (length( .init.k)) rep_len( .init.k, length(y)) else { medy = median(y) if (medy < 2) 5 else if (medy < 4) 20 else exp(0.7 * medy) } etastart <- theta2eta(k.init, .link , earg = .earg ) } }), list( .link = link, .earg = earg, .init.k = init.k ))), linkinv = eval(substitute(function(eta, extra = NULL) { kay <- eta2theta(eta, .link , earg = .earg ) digamma(kay) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ misc$link <- c(shape = .link ) misc$earg <- list(shape = .earg ) misc$expected <- TRUE }), list( .link = link, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { kay <- eta2theta(eta, .link , 
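## Illustrative check (not run): the dpqr functions above are mutually
## consistent; Y = location + scale * log(G) with G ~ Gamma(shape), so
## plgamma() inverts qlgamma().  e.g., with arbitrary parameter values,
## qq <- qlgamma(0.75, location = 1, scale = 2, shape = 3)
## plgamma(qq, location = 1, scale = 2, shape = 3)  # 0.75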
earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dlgamma(x = y, location = 0, scale = 1, shape = kay, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg ))), vfamily = c("lgamma1"), validparams = eval(substitute(function(eta, y, extra = NULL) { kk <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(kk)) && all(0 < kk) okay1 }, list( .link = link, .earg = earg ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) kay <- eta2theta(eta, .link , earg = .earg ) rlgamma(nsim * length(kay), location = 0, scale = 1, shape = kay) }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ kk <- eta2theta(eta, .link , earg = .earg ) dl.dk <- y - digamma(kk) dk.deta <- dtheta.deta(kk, .link , earg = .earg ) c(w) * dl.dk * dk.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ ned2l.dk2 <- trigamma(kk) wz <- c(w) * dk.deta^2 * ned2l.dk2 wz }), list( .link = link, .earg = earg )))) } lgamma3 <- function(llocation = "identitylink", lscale = "loge", lshape = "loge", ilocation = NULL, iscale = NULL, ishape = 1, zero = c("scale", "shape")) { if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") new("vglmff", blurb = c("Log-gamma distribution", " f(y) = exp(k(y-a)/b - e^((y-a)/b))/(b*gamma(k)), ", "location=a, scale=b>0, shape=k>0\n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), ", ", namesof("shape", lshape, earg = eshape), "\n\n", "Mean: a + b * digamma(k)", "\n"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
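## Illustrative sketch (not run): a minimal intercept-only fit with the
## lgamma1() family above; rlgamma() gives standard log-gamma variates.
## The data and shape value here are arbitrary.  e.g.,
## ldata <- data.frame(y = rlgamma(1000, shape = exp(1)))
## fit <- vglm(y ~ 1, lgamma1, data = ldata, trace = TRUE)
## Coef(fit)  # Shape estimate, approx. exp(1); fitted(fit) is digamma(shape)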
{ list(M1 = 3, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("location", "scale", "shape"), llocation = .llocat , lscale = .lscale , lshape = .lshape , zero = .zero ) }, list( .zero = zero, .llocat = llocat , .lscale = lscale , .lshape = lshape ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("location", .llocat , earg = .elocat , tag = FALSE), namesof("scale", .lscale , earg = .escale , tag = FALSE), namesof("shape", .lshape , earg = .eshape , tag = FALSE)) if (!length(etastart)) { k.init <- if (length( .ishape )) rep_len( .ishape, length(y)) else { rep_len(exp(median(y)), length(y)) } scale.init <- if (length( .iscale )) rep_len( .iscale , length(y)) else { rep_len(sqrt(var(y) / trigamma(k.init)), length(y)) } loc.init <- if (length( .ilocat )) rep_len( .ilocat, length(y)) else { rep_len(median(y) - scale.init * digamma(k.init), length(y)) } etastart <- cbind(theta2eta(loc.init, .llocat , earg = .elocat ), theta2eta(scale.init, .lscale , earg = .escale ), theta2eta(k.init, .lshape , earg = .eshape )) } }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .ilocat = ilocat, .iscale = iscale, .ishape = ishape ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, 1], .llocat , earg = .elocat ) + eta2theta(eta[, 2], .lscale , earg = .escale ) * digamma(eta2theta(eta[, 3], .lshape , earg = .eshape )) }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), last = eval(substitute(expression({ misc$link <- c(location = .llocat , scale = .lscale , shape = .lshape) misc$earg <- list(location = .elocat , scale = .escale , shape = .eshape ) misc$multipleResponses <- FALSE }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { aa <- eta2theta(eta[, 1], .llocat , earg = .elocat ) bb <- eta2theta(eta[, 2], .lscale , earg = .escale ) kk <- eta2theta(eta[, 3], .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dlgamma(x = y, locat = aa, scale = bb, shape = kk, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), vfamily = c("lgamma3"), validparams = eval(substitute(function(eta, y, extra = NULL) { aa <- eta2theta(eta[, 1], .llocat , earg = .elocat ) bb <- eta2theta(eta[, 2], .lscale , earg = .escale ) kk <- eta2theta(eta[, 3], .lshape , earg = .eshape ) okay1 <- all(is.finite(kk)) && all(0 < kk) && all(is.finite(bb)) && all(0 < bb) && all(is.finite(aa)) okay1 }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) aa <- eta2theta(eta[, 1], .llocat , earg = .elocat ) bb <- eta2theta(eta[, 2], .lscale , earg = .escale ) kk <- eta2theta(eta[, 3], .lshape , earg = .eshape ) rlgamma(nsim * length(kk), location = aa, scale = bb, shape = kk) }, list( .llocat = llocat, .lscale = lscale, .lshape = 
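## The initial values in @initialize above follow from the moments of the
## three-parameter log-gamma distribution:  E(Y) = a + b*digamma(k)  and
## Var(Y) = b^2 * trigamma(k),  hence  b ~ sqrt(var(y)/trigamma(k))  and
## a ~ median(y) - b*digamma(k)  for a given starting value of k.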
lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), deriv = eval(substitute(expression({ a <- eta2theta(eta[, 1], .llocat , earg = .elocat ) b <- eta2theta(eta[, 2], .lscale , earg = .escale ) k <- eta2theta(eta[, 3], .lshape , earg = .eshape ) zedd <- (y-a)/b dl.da <- (exp(zedd) - k) / b dl.db <- (zedd * (exp(zedd) - k) - 1) / b dl.dk <- zedd - digamma(k) da.deta <- dtheta.deta(a, .llocat , earg = .elocat ) db.deta <- dtheta.deta(b, .lscale , earg = .escale ) dk.deta <- dtheta.deta(k, .lshape , earg = .eshape ) c(w) * cbind(dl.da * da.deta, dl.db * db.deta, dl.dk * dk.deta) }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), weight = eval(substitute(expression({ ned2l.da2 <- k / b^2 ned2l.db2 <- (1 + k*(trigamma(k+1) + (digamma(k+1))^2)) / b^2 ned2l.dk2 <- trigamma(k) ned2l.dadb <- (1 + k*digamma(k)) / b^2 ned2l.dadk <- 1 / b ned2l.dbdk <- digamma(k) / b wz <- matrix(NA_real_, n, dimm(M)) wz[, iam(1, 1, M)] <- ned2l.da2 * da.deta^2 wz[, iam(2, 2, M)] <- ned2l.db2 * db.deta^2 wz[, iam(3, 3, M)] <- ned2l.dk2 * dk.deta^2 wz[, iam(1, 2, M)] <- ned2l.dadb * da.deta * db.deta wz[, iam(1, 3, M)] <- ned2l.dadk * da.deta * dk.deta wz[, iam(2, 3, M)] <- ned2l.dbdk * db.deta * dk.deta wz <- c(w) * wz wz }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape)))) } dprentice74 <- function(x, location = 0, scale = 1, shape, log = FALSE, tol0 = 1e-4) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) LLL <- max(length(x), length(location), length(scale), length(shape)) if (length(x) != LLL) x <- rep_len(x, LLL) if (length(location) != LLL) location <- rep_len(location, LLL) if (length(scale) != LLL) scale <- rep_len(scale, LLL) if (length(shape) != LLL) shape <- rep_len(shape, LLL) tmp55 <- shape^(-2) doubw <- (x - location) * shape / scale + digamma(tmp55) ll.elts <- log(abs(shape)) - log(scale) - lgamma(tmp55) + doubw * tmp55 - exp(doubw) if (any((shape0 <- abs(shape) < tol0), na.rm = TRUE)) ll.elts[shape0] <- dnorm(x[shape0], location[shape0], scale[shape0], log = TRUE) if (log.arg) ll.elts else exp(ll.elts) } prentice74 <- function(llocation = "identitylink", lscale = "loge", lshape = "identitylink", ilocation = NULL, iscale = NULL, ishape = NULL, imethod = 1, glocation.mux = exp((-4:4)/2), gscale.mux = exp((-4:4)/2), gshape = qt(ppoints(6), df = 1), # exp((-5:5)/2), probs.y = 0.3, zero = c("scale", "shape")) { if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") ilocat <- ilocation lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") new("vglmff", blurb = c("Log-gamma distribution (Prentice, 1974)\n", "f(y; a, b, q) = |q| * exp(w/q^2 - e^w) / (b*gamma(1/q^2)),\n", "w = (y-a)*q/b + digamma(1/q^2),\n", "location = a, scale = b > 0, shape = q\n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), ", ", namesof("shape", lshape, earg = eshape), "\n", "\n", "Mean: a", "\n"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 3) }), list( .zero = zero ))), 
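## Illustrative check (not run): as the shape q -> 0 the Prentice (1974)
## log-gamma density converges to the normal density, which is why
## dprentice74() above switches to dnorm() when abs(shape) < tol0.  e.g.,
## dprentice74(0.5, location = 0, scale = 1, shape = 1e-5)
## dnorm(0.5)  # Nearly identical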
infos = eval(substitute(function(...) { list(M1 = 3, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("location", "scale", "shape"), imethod = .imethod , llocation = .llocat , lscale = .lscale , lshape = .lshape , zero = .zero ) }, list( .zero = zero, .imethod = imethod , .llocat = llocat , .lscale = lscale , .lshape = lshape ))), initialize = eval(substitute(expression({ M1 <- 3 Q1 <- 1 temp5 <- w.y.check(w = w, y = y, Is.positive.y = FALSE, Is.integer.y = FALSE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y NOS <- ncoly <- ncol(y) # Number of species M <- M1 * ncoly temp1.names <- param.names("location", NOS) temp2.names <- param.names("scale", NOS) temp3.names <- param.names("shape", NOS) predictors.names <- c(namesof(temp1.names, .llocat , earg = .elocat , tag = FALSE), namesof(temp2.names, .lscale , earg = .escale , tag = FALSE), namesof(temp3.names, .lshape , earg = .eshape , tag = FALSE)) predictors.names <- predictors.names[interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { lo.init <- sc.init <- sh.init <- matrix(NA_real_, n, NOS) if (length( .ilocat )) lo.init <- matrix( .ilocat , n, NOS, byrow = TRUE) if (length( .iscale )) sc.init <- matrix( .iscale , n, NOS, byrow = TRUE) if (length( .ishape )) sh.init <- matrix( .ishape , n, NOS, byrow = TRUE) for (spp. in 1:NOS) { # For each response 'y_spp.'... do: yvec <- y[, spp.] wvec <- w[, spp.] mu.init <- switch( .imethod , median(yvec), # More reliable I think weighted.mean(yvec, w = wvec), quantile(yvec, prob = .probs.y )) glocat <- .glocat.mux * mu.init gscale <- .gscale.mux * abs(mu.init) gshape <- .gshape if (length( .ilocat )) glocat <- rep_len( .ilocat , NOS) if (length( .iscale )) gscale <- rep_len( .iscale , NOS) if (length( .ishape )) gshape <- rep_len( .ishape , NOS) ll.pren74 <- function(scaleval, locn, shape, x = x, y = y, w = w, extraargs) { ans <- sum(c(w) * dprentice74(x = y, scale = scaleval, locat = locn, shape = shape, log = TRUE)) ans } try.this <- grid.search3(gscale, glocat, gshape, objfun = ll.pren74, y = yvec, w = wvec, ret.objfun = TRUE) # Last value is the loglik sc.init[, spp.] <- try.this["Value1" ] lo.init[, spp.] <- try.this["Value2" ] sh.init[, spp.] <- try.this["Value3" ] if (FALSE) { sdy <- sqrt(var(yvec)) if (!length( .ishape )) { skewness <- mean((yvec - mean(yvec))^3) / sdy^3 # <0 LHS skewed sh.init[, spp.] <- (-skewness) } if (!length( .iscale )) sc.init[, spp.] <- sdy if (!length( .ilocat )) lo.init[, spp.] <- median(yvec) } } # End of for (spp. ...) 
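## grid.search3() above maximizes the log-likelihood over the outer product
## of the three initial-value grids (gscale, glocat, gshape); glocation.mux
## and gscale.mux scale those grids relative to a robust location estimate
## chosen by 'imethod' (median, weighted mean, or the probs.y quantile), and
## the best triple is read off the named "Value1"/"Value2"/"Value3" elements.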
etastart <- cbind(theta2eta(lo.init, .llocat , earg = .elocat ), theta2eta(sc.init, .lscale , earg = .escale ), theta2eta(sh.init, .lshape , earg = .eshape )) etastart <- etastart[, interleave.VGAM(M, M1 = M1)] } }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape, .ilocat = ilocat, .iscale = iscale, .ishape = ishape, .imethod = imethod , .glocat.mux = glocation.mux, .gscale.mux = gscale.mux, .gshape = gshape, .probs.y = probs.y ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta2theta(eta[, c(TRUE, FALSE, FALSE)], .llocat , earg = .elocat ) }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), last = eval(substitute(expression({ tmp34 <- c(rep_len( .llocat , NOS), rep_len( .lscale , NOS), rep_len( .lshape , NOS)) names(tmp34) <- c(temp1.names, temp2.names, temp3.names) tmp34 <- tmp34[interleave.VGAM(M, M1 = M1)] misc$link <- tmp34 # Already named misc$earg <- vector("list", M) names(misc$earg) <- names(misc$link) for (ii in 1:NOS) { misc$earg[[M1*ii-2]] <- .elocat misc$earg[[M1*ii-1]] <- .escale misc$earg[[M1*ii ]] <- .eshape } }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { TF1 <- c(TRUE, FALSE, FALSE) TF2 <- c(FALSE, TRUE, FALSE) TF3 <- c(FALSE, FALSE, TRUE) a <- eta2theta(eta[, TF1], .llocat , earg = .elocat ) b <- eta2theta(eta[, TF2], .lscale , earg = .escale ) k <- eta2theta(eta[, TF3], .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dprentice74(y, loc = a, scale = b, shape = k, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), vfamily = c("prentice74"), validparams = eval(substitute(function(eta, y, extra = NULL) { TF1 <- c(TRUE, FALSE, FALSE) TF2 <- c(FALSE, TRUE, FALSE) TF3 <- c(FALSE, FALSE, TRUE) aa <- eta2theta(eta[, TF1], .llocat , earg = .elocat ) bb <- eta2theta(eta[, TF2], .lscale , earg = .escale ) kk <- eta2theta(eta[, TF3], .lshape , earg = .eshape ) okay1 <- all(is.finite(kk)) && all(is.finite(bb)) && all(0 < bb) && all(is.finite(aa)) okay1 }, list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), deriv = eval(substitute(expression({ TF1 <- c(TRUE, FALSE, FALSE) TF2 <- c(FALSE, TRUE, FALSE) TF3 <- c(FALSE, FALSE, TRUE) a <- eta2theta(eta[, TF1], .llocat , earg = .elocat ) b <- eta2theta(eta[, TF2], .lscale , earg = .escale ) k <- eta2theta(eta[, TF3], .lshape , earg = .eshape ) tmp55 <- k^(-2) mustar <- digamma(tmp55) doubw <- (y-a)*k/b + mustar sigmastar2 <- trigamma(tmp55) dl.da <- k*(exp(doubw) - tmp55) / b dl.db <- ((doubw - mustar) * (exp(doubw) - tmp55) - 1) / b dl.dk <- 1/k - 2 * (doubw - mustar) / k^3 - (exp(doubw) - tmp55) * ((doubw - mustar) / k - 2 * sigmastar2 / k^3) da.deta <- dtheta.deta(a, .llocat , earg = .elocat ) db.deta <- dtheta.deta(b, .lscale , earg = .escale ) dk.deta <- dtheta.deta(k, .lshape , earg = .eshape ) myderiv <- c(w) * cbind(dl.da * da.deta, dl.db * db.deta, dl.dk * dk.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape))), weight = 
eval(substitute(expression({ ned2l.da2 <- 1 / b^2 ned2l.db2 <- (1 + sigmastar2 * tmp55) / b^2 ned2l.dk2 <- tmp55 - 3 * sigmastar2 * tmp55^2 + 4 * sigmastar2 * tmp55^4 * (sigmastar2 - k^2) ned2l.dadb <- k / b^2 ned2l.dadk <- (2*(sigmastar2*tmp55^2 - tmp55) - 1) / b ned2l.dbdk <- (sigmastar2*tmp55 - 1) / (b*k) wz <- array(c(c(w) * ned2l.da2 * da.deta^2, c(w) * ned2l.db2 * db.deta^2, c(w) * ned2l.dk2 * dk.deta^2, c(w) * ned2l.dadb * da.deta * db.deta, c(w) * ned2l.dbdk * db.deta * dk.deta, c(w) * ned2l.dadk * da.deta * dk.deta), dim = c(n, M / M1, 6)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .llocat = llocat, .lscale = lscale, .lshape = lshape, .elocat = elocat, .escale = escale, .eshape = eshape)))) } dgengamma.stacy <- function(x, scale = 1, d, k, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(d, positive = TRUE)) stop("bad input for argument 'd'") if (!is.Numeric(k, positive = TRUE)) stop("bad input for argument 'k'") N <- max(length(x), length(scale), length(d), length(k)) if (length(x) != N) x <- rep_len(x, N) if (length(d) != N) d <- rep_len(d, N) if (length(k) != N) k <- rep_len(k, N) if (length(scale) != N) scale <- rep_len(scale, N) Loglik <- rep_len(log(0), N) xok <- x > 0 if (any(xok)) { zedd <- (x[xok]/scale[xok])^(d[xok]) Loglik[xok] <- log(d[xok]) + (-d[xok] * k[xok]) * log(scale[xok]) + (d[xok] * k[xok]-1) * log(x[xok]) - zedd - lgamma(k[xok]) } Loglik[is.infinite(x)] <- log(0) # 20141208; KaiH. answer <- if (log.arg) { Loglik } else { exp(Loglik) } answer[scale < 0] <- NaN answer[scale == 0] <- NaN # Not strictly correct if (any(scale <= 0)) warning("NaNs produced") answer } pgengamma.stacy <- function(q, scale = 1, d, k, lower.tail = TRUE, log.p = FALSE) { zedd <- (q / scale)^d ans <- pgamma(zedd, k, lower.tail = lower.tail, log.p = log.p) ans[scale < 0] <- NaN ans[d <= 0] <- NaN ans } qgengamma.stacy <- function(p, scale = 1, d, k, lower.tail = TRUE, log.p = FALSE) { ans <- scale * qgamma(p, k, lower.tail = lower.tail, log.p = log.p)^(1/d) ans[scale < 0] <- NaN ans[d <= 0] <- NaN ans } rgengamma.stacy <- function(n, scale = 1, d, k) { ans <- scale * rgamma(n, k)^(1/d) ans[scale < 0] <- NaN ans[d <= 0] <- NaN ans } gengamma.stacy <- function(lscale = "loge", ld = "loge", lk = "loge", iscale = NULL, id = NULL, ik = NULL, imethod = 1, gscale.mux = exp((-4:4)/2), gshape1.d = exp((-5:5)/2), gshape2.k = exp((-5:5)/2), probs.y = 0.3, zero = c("d", "k") ) { lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") ld <- as.list(substitute(ld)) ed <- link2list(ld) ld <- attr(ed, "function.name") lk <- as.list(substitute(lk)) ek <- link2list(lk) lk <- attr(ek, "function.name") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 3) stop("argument 'imethod' must be 1 or 2 or 3") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") new("vglmff", blurb = c("Generalized gamma distribution ", "f(y; b, d, k) = \n", "d * b^(-d*k) * y^(d*k-1) * exp(-(y/b)^d)", " / gamma(k),\n", "scale=b>0, d>0, k>0\n\n"), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) TF1 <- c(TRUE, FALSE, FALSE) TF2 <- c(FALSE, TRUE, FALSE) TF3 <- c(FALSE, FALSE, TRUE) bb <- eta2theta(eta[, TF1], .lscale , earg = .escale ) dd <- eta2theta(eta[, TF2], .ld , earg = .ed ) kk <- eta2theta(eta[, TF3], .lk , earg = .ek ) rgengamma.stacy(nsim * 
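## Illustrative check (not run): Stacy's generalized gamma above contains the
## gamma (d = 1) and Weibull (k = 1) distributions as special cases.  e.g.,
## pgengamma.stacy(2, scale = 3, d = 1, k = 1.5)
## pgamma(2, shape = 1.5, scale = 3)    # Same
## pgengamma.stacy(2, scale = 3, d = 2, k = 1)
## pweibull(2, shape = 2, scale = 3)    # Same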
length(kk), scale = bb, d = dd, k = kk) }, list( .lscale = lscale, .ld = ld, .lk = lk, .escale = escale, .ed = ed, .ek = ek ))), deriv = eval(substitute(expression({ TF1 <- c(TRUE, FALSE, FALSE) TF2 <- c(FALSE, TRUE, FALSE) TF3 <- c(FALSE, FALSE, TRUE) b <- eta2theta(eta[, TF1], .lscale , earg = .escale ) d <- eta2theta(eta[, TF2], .ld , earg = .ed ) k <- eta2theta(eta[, TF3], .lk , earg = .ek ) tmp22 <- (y/b)^d tmp33 <- log(y/b) dl.db <- d * (tmp22 - k) / b dl.dd <- 1/d + tmp33 * (k - tmp22) dl.dk <- d * tmp33 - digamma(k) db.deta <- dtheta.deta(b, .lscale , earg = .escale ) dd.deta <- dtheta.deta(d, .ld , earg = .ed ) dk.deta <- dtheta.deta(k, .lk , earg = .ek ) myderiv <- c(w) * cbind(dl.db * db.deta, dl.dd * dd.deta, dl.dk * dk.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .lscale = lscale, .ld = ld, .lk = lk, .escale = escale, .ed = ed, .ek = ek ))), weight = eval(substitute(expression({ ned2l.db2 <- k * (d/b)^2 ned2l.dd2 <- (1 + k * (trigamma(k+1) + (digamma(k+1))^2)) / d^2 ned2l.dk2 <- trigamma(k) ned2l.dbdd <- -(1 + k*digamma(k)) / b ned2l.dbdk <- d / b ned2l.dddk <- -digamma(k) / d wz <- array(c(c(w) * ned2l.db2 * db.deta^2, c(w) * ned2l.dd2 * dd.deta^2, c(w) * ned2l.dk2 * dk.deta^2, c(w) * ned2l.dbdd * db.deta * dd.deta, c(w) * ned2l.dddk * dd.deta * dk.deta, c(w) * ned2l.dbdk * db.deta * dk.deta), dim = c(n, M / M1, 6)) wz <- arwz2wz(wz, M = M, M1 = M1) wz }), list( .lscale = lscale, .ld = ld, .lk = lk, .escale = escale, .ed = ed, .ek = ek )))) } dlevy <- function(x, location = 0, scale = 1, log.arg = FALSE) { logdensity <- 0.5 * log(scale / (2*pi)) - 1.5 * log(x - location) - 0.5 * scale / (x - location) if (log.arg) logdensity else exp(logdensity) } plevy <- function(q, location = 0, scale = 1) { erfc(sqrt(scale * 0.5 / (q - location))) } qlevy <- function(p, location = 0, scale = 1) { location + 0.5 * scale / (erfc(p, inverse = TRUE))^2 } rlevy <- function(n, location = 0, scale = 1) qlevy(runif(n), location = location, scale = scale) levy <- function(location = 0, lscale = "loge", iscale = NULL) { delta.known <- is.Numeric(location) # , length.arg = 1 if (!delta.known) stop("argument 'location' must be specified") idelta <- NULL delta <- location # Lazy to change variable names below link.gamma <- as.list(substitute(lscale)) earg <- link2list(link.gamma) link.gamma <- attr(earg, "function.name") new("vglmff", blurb = c("Levy distribution f(y) = sqrt(scale/(2*pi)) * ", "(y-location)^(-3/2) * \n", " exp(-scale / (2*(y-location ))),\n", " location < y < Inf, scale > 0", if (delta.known) "Link: " else "Links: ", namesof("scale", link.gamma, earg = earg), if (! 
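## Illustrative check (not run): the Levy CDF above is
## erfc(sqrt(scale/(2*(q - location)))); since erfc(z) equals
## 2*pnorm(z*sqrt(2), lower.tail = FALSE), with arbitrary values,
## plevy(3, location = 0, scale = 2)
## 2 * pnorm(sqrt(2/3), lower.tail = FALSE)  # Same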
delta.known) c(", ", namesof("delta", "identitylink", earg = list())), "\n\n", "Mean: NA", "\n"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("scale", .link.gamma , earg = .earg , tag = FALSE), if ( .delta.known) NULL else namesof("delta", "identitylink", earg = list(), tag = FALSE)) if (!length(etastart)) { delta.init <- if ( .delta.known) { if (min(y, na.rm = TRUE) <= .delta ) stop("'location' must be < min(y)") .delta } else { if (length( .idelta )) .idelta else min(y,na.rm = TRUE) - 1.0e-4 * diff(range(y,na.rm = TRUE)) } gamma.init <- if (length( .iscale )) .iscale else median(y - delta.init) # = 1/median(1/(y-delta.init)) gamma.init <- rep_len(gamma.init, length(y)) etastart <- cbind(theta2eta(gamma.init, .link.gamma , earg = .earg ), if ( .delta.known ) NULL else delta.init) } }), list( .link.gamma = link.gamma, .earg = earg, .delta.known = delta.known, .delta = delta, .idelta = idelta, .iscale = iscale ))), linkinv = eval(substitute(function(eta, extra = NULL) { eta <- as.matrix(eta) mygamma <- eta2theta(eta[, 1], .link.gamma , earg = .earg ) delta <- if ( .delta.known) .delta else eta[, 2] qlevy(p = 0.5, location = delta, scale = mygamma) }, list( .link.gamma = link.gamma, .earg = earg, .delta.known = delta.known, .delta = delta ))), last = eval(substitute(expression({ misc$link <- if ( .delta.known) NULL else c(delta = "identitylink") misc$link <- c(scale = .link.gamma , misc$link) misc$earg <- if ( .delta.known ) list(scale = .earg ) else list(scale = .earg , delta = list()) if ( .delta.known) misc$delta <- .delta }), list( .link.gamma = link.gamma, .earg = earg, .delta.known = delta.known, .delta = delta ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { eta <- as.matrix(eta) mygamma <- eta2theta(eta[, 1], .link.gamma , earg = .earg ) delta <- if ( .delta.known) .delta else eta[, 2] if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dlevy(x = y, location = delta, scale = mygamma, log.arg = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link.gamma = link.gamma, .earg = earg, .delta.known = delta.known, .delta = delta ))), vfamily = c("levy"), validparams = eval(substitute(function(eta, y, extra = NULL) { eta <- as.matrix(eta) mygamma <- eta2theta(eta[, 1], .link.gamma , earg = .earg ) okay1 <- all(is.finite(mygamma)) && all(0 < mygamma) okay1 }, list( .link.gamma = link.gamma, .earg = earg, .delta.known = delta.known, .delta = delta ))), deriv = eval(substitute(expression({ eta <- as.matrix(eta) mygamma <- eta2theta(eta[, 1], .link.gamma , earg = .earg ) delta <- if ( .delta.known ) .delta else eta[, 2] if (! .delta.known) dl.ddelta <- (3 - mygamma / (y-delta)) / (2 * (y-delta)) dl.dgamma <- 0.5 * (1 / mygamma - 1 / (y-delta)) dgamma.deta <- dtheta.deta(mygamma, .link.gamma , earg = .earg ) c(w) * cbind(dl.dgamma * dgamma.deta, if ( .delta.known ) NULL else dl.ddelta) }), list( .link.gamma = link.gamma, .earg = earg, .delta.known = delta.known, .delta = delta ))), weight = eval(substitute(expression({ wz <- matrix(NA_real_, n, dimm(M)) wz[, iam(1, 1, M)] <- 1 * dgamma.deta^2 if (! 
.delta.known ) { wz[, iam(1, 2, M)] <- 3 * dgamma.deta wz[, iam(2, 2, M)] <- 21 } wz <- c(w) * wz / (2 * mygamma^2) wz }), list( .link.gamma = link.gamma, .earg = earg, .delta.known = delta.known, .delta = delta )))) } dlino <- function(x, shape1, shape2, lambda = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) loglik <- dbeta(x = x, shape1 = shape1, shape2 = shape2, log = TRUE) + shape1 * log(lambda) - (shape1+shape2) * log1p(-(1-lambda) * x) loglik[is.infinite(x)] <- log(0) # 20141208 KaiH if (log.arg) loglik else exp(loglik) } plino <- function(q, shape1, shape2, lambda = 1, lower.tail = TRUE, log.p = FALSE) { ans <- pbeta(1/(1+(1/q-1)/lambda), # lambda * q / (1 - (1-lambda) * q), shape1 = shape1, shape2 = shape2, lower.tail = lower.tail, log.p = log.p) ans[lambda <= 0] <- NaN ans } qlino <- function(p, shape1, shape2, lambda = 1, lower.tail = TRUE, log.p = FALSE) { Y <- qbeta(p = p, shape1 = shape1, shape2 = shape2, lower.tail = lower.tail, log.p = log.p) ans <- Y / (lambda + (1-lambda)*Y) ans[lambda <= 0] <- NaN ans } rlino <- function(n, shape1, shape2, lambda = 1) { Y <- rbeta(n = n, shape1 = shape1, shape2 = shape2) ans <- Y / (lambda + (1 - lambda) * Y) ans[lambda <= 0] <- NaN ans } lino <- function(lshape1 = "loge", lshape2 = "loge", llambda = "loge", ishape1 = NULL, ishape2 = NULL, ilambda = 1, zero = NULL) { if (!is.Numeric(ilambda, positive = TRUE)) stop("bad input for argument 'ilambda'") lshape1 <- as.list(substitute(lshape1)) eshape1 <- link2list(lshape1) lshape1 <- attr(eshape1, "function.name") lshape2 <- as.list(substitute(lshape2)) eshape2 <- link2list(lshape2) lshape2 <- attr(eshape2, "function.name") llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") new("vglmff", blurb = c("Generalized Beta distribution (Libby and Novick, 1982)\n\n", "Links: ", namesof("shape1", lshape1, earg = eshape1), ", ", namesof("shape2", lshape2, earg = eshape2), ", ", namesof("lambda", llambda, earg = elambda), "\n", "Mean: something complicated"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ if (min(y) <= 0 || max(y) >= 1) stop("values of the response must be between 0 and 1 (0,1)") w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("shape1", .lshape1 , earg = .eshape1 , tag = FALSE), namesof("shape2", .lshape2 , earg = .eshape2 , tag = FALSE), namesof("lambda", .llambda , earg = .elambda , tag = FALSE)) if (!length(etastart)) { lambda.init <- rep_len(if (length( .ilambda )) .ilambda else 1, n) sh1.init <- if (length( .ishape1 )) rep_len( .ishape1 , n) else NULL sh2.init <- if (length( .ishape2 )) rep_len( .ishape2 , n) else NULL txY.init <- lambda.init * y / (1+lambda.init*y - y) mean1 <- mean(txY.init) mean2 <- mean(1/txY.init) if (!is.Numeric(sh1.init)) sh1.init <- rep_len((mean2 - 1) / (mean2 - 1/mean1), n) if (!is.Numeric(sh2.init)) sh2.init <- rep_len(sh1.init * (1-mean1) / mean1, n) etastart <- cbind(theta2eta(sh1.init, .lshape1 , earg = .eshape1), theta2eta(sh2.init, .lshape2 , earg = .eshape2), theta2eta(lambda.init, .llambda , earg = .elambda )) } }), list( .lshape1 = lshape1, .lshape2 = lshape2, .llambda = llambda, .eshape1 = eshape1, .eshape2 = eshape2, .elambda = elambda, .ishape1 = ishape1, .ishape2 = ishape2, .ilambda = ilambda ))), linkinv = 
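## Illustrative check (not run): with lambda = 1 the Libby-Novick generalized
## beta above reduces to the standard beta distribution.  e.g.,
## plino(0.4, shape1 = 2, shape2 = 3, lambda = 1)
## pbeta(0.4, shape1 = 2, shape2 = 3)  # Same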
eval(substitute(function(eta, extra = NULL) { shape1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1) shape2 <- eta2theta(eta[, 2], .lshape2 , earg = .eshape2) lambda <- eta2theta(eta[, 3], .llambda , earg = .elambda ) qlino(p = 0.5, shape1 = shape1, shape2 = shape2, lambda = lambda) }, list( .lshape1 = lshape1, .lshape2 = lshape2, .llambda = llambda, .eshape1 = eshape1, .eshape2 = eshape2, .elambda = elambda ))), last = eval(substitute(expression({ misc$link <- c(shape1 = .lshape1 , shape2 = .lshape2 , lambda = .llambda ) misc$earg <- list(shape1 = .eshape1 , shape2 = .eshape2 , lambda = .elambda ) }), list( .lshape1 = lshape1, .lshape2 = lshape2, .llambda = llambda, .eshape1 = eshape1, .eshape2 = eshape2, .elambda = elambda ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shape1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, 2], .lshape2 , earg = .eshape2 ) lambda <- eta2theta(eta[, 3], .llambda , earg = .elambda ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dlino(y, shape1 = shape1, shape2 = shape2, lambda = lambda, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape1 = lshape1, .lshape2 = lshape2, .llambda = llambda, .eshape1 = eshape1, .eshape2 = eshape2, .elambda = elambda ))), vfamily = c("lino"), validparams = eval(substitute(function(eta, y, extra = NULL) { shape1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, 2], .lshape2 , earg = .eshape2 ) lambda <- eta2theta(eta[, 3], .llambda , earg = .elambda ) okay1 <- all(is.finite(shape1)) && all(0 < shape1) && all(is.finite(shape2)) && all(0 < shape2) && all(is.finite(lambda)) && all(0 < lambda) okay1 }, list( .lshape1 = lshape1, .lshape2 = lshape2, .llambda = llambda, .eshape1 = eshape1, .eshape2 = eshape2, .elambda = elambda ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) shape1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ) shape2 <- eta2theta(eta[, 2], .lshape2 , earg = .eshape2 ) lambda <- eta2theta(eta[, 3], .llambda , earg = .elambda ) rlino(nsim * length(shape1), shape1 = shape1, shape2 = shape2, lambda = lambda) }, list( .lshape1 = lshape1, .lshape2 = lshape2, .llambda = llambda, .eshape1 = eshape1, .eshape2 = eshape2, .elambda = elambda ))), deriv = eval(substitute(expression({ sh1 <- eta2theta(eta[, 1], .lshape1 , earg = .eshape1 ) sh2 <- eta2theta(eta[, 2], .lshape2 , earg = .eshape2 ) lambda <- eta2theta(eta[, 3], .llambda , earg = .elambda ) temp1 <- log1p(-(1-lambda) * y) temp2 <- digamma(sh1+sh2) dl.dsh1 <- log(lambda) + log(y) - digamma(sh1) + temp2 - temp1 dl.dsh2 <- log1p(-y) - digamma(sh2) + temp2 - temp1 dl.dlambda <- sh1/lambda - (sh1+sh2) * y / (1 - (1-lambda) * y) dsh1.deta <- dtheta.deta(sh1, .lshape1 , earg = .eshape1) dsh2.deta <- dtheta.deta(sh2, .lshape2 , earg = .eshape2) dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) c(w) * cbind( dl.dsh1 * dsh1.deta, dl.dsh2 * dsh2.deta, dl.dlambda * dlambda.deta) }), list( .lshape1 = lshape1, .lshape2 = lshape2, .llambda = llambda, .eshape1 = eshape1, .eshape2 = eshape2, .elambda = elambda ))), weight = eval(substitute(expression({ temp3 <- trigamma(sh1+sh2) ned2l.dsh1 <- trigamma(sh1) - temp3 ned2l.dsh2 <- trigamma(sh2) - temp3 ned2l.dlambda2 <- sh1 * sh2 / 
(lambda^2 * (sh1+sh2+1)) ned2l.dsh1sh2 <- -temp3 ned2l.dsh1lambda <- -sh2 / ((sh1+sh2)*lambda) ned2l.dsh2lambda <- sh1 / ((sh1+sh2)*lambda) wz <- matrix(NA_real_, n, dimm(M)) #M==3 means 6=dimm(M) wz[, iam(1, 1, M)] <- ned2l.dsh1 * dsh1.deta^2 wz[, iam(2, 2, M)] <- ned2l.dsh2 * dsh2.deta^2 wz[, iam(3, 3, M)] <- ned2l.dlambda2 * dlambda.deta^2 wz[, iam(1, 2, M)] <- ned2l.dsh1sh2 * dsh1.deta * dsh2.deta wz[, iam(1, 3, M)] <- ned2l.dsh1lambda * dsh1.deta * dlambda.deta wz[, iam(2, 3, M)] <- ned2l.dsh2lambda * dsh2.deta * dlambda.deta wz <- c(w) * wz wz }), list( .lshape1 = lshape1, .lshape2 = lshape2, .llambda = llambda, .eshape1 = eshape1, .eshape2 = eshape2, .elambda = elambda )))) } dmaxwell <- function(x, rate, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(rate)) if (length(x) != L) x <- rep_len(x, L) if (length(rate) != L) rate <- rep_len(rate, L) logdensity <- rep_len(log(0), L) xok <- (x >= 0) logdensity[xok] <- 0.5 * log(2/pi) + 1.5 * log(rate[xok]) + 2 * log(x[xok]) - 0.5 * rate[xok] * x[xok]^2 logdensity[rate <= 0] <- NaN logdensity[x == Inf] <- log(0) if (log.arg) logdensity else exp(logdensity) } pmaxwell <- function(q, rate, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ans <- log(erf(q*sqrt(rate/2)) - q*exp(-0.5*rate*q^2) * sqrt(2*rate/pi)) ans[q <= 0 ] <- -Inf ans[q == Inf] <- 0 } else { ans <- erf(q*sqrt(rate/2)) - q*exp(-0.5*rate*q^2) * sqrt(2*rate/pi) ans[q <= 0] <- 0 ans[q == Inf] <- 1 } } else { if (log.p) { ans <- log1p(-erf(q*sqrt(rate/2)) + q*exp(-0.5*rate*q^2) * sqrt(2*rate/pi)) ans[q <= 0] <- 0 ans[q == Inf] <- -Inf } else { ans <- exp(log1p(-erf(q*sqrt(rate/2)) + q*exp(-0.5*rate*q^2) * sqrt(2*rate/pi))) ans[q <= 0] <- 1 ans[q == Inf] <- 0 } } ans } qmaxwell <- function(p, rate, lower.tail = TRUE, log.p = FALSE) { sqrt(2 * qgamma(p = p, 1.5, lower.tail = lower.tail, log.p = log.p) / rate) } rmaxwell <- function(n, rate) { sqrt(2 * rgamma(n = n, 1.5) / rate) } maxwell <- function(link = "loge", zero = NULL) { link <- as.list(substitute(link)) # orig earg <- link2list(link) link <- attr(earg, "function.name") new("vglmff", blurb = c("Maxwell distribution \n", "f(y;rate) = sqrt(2/pi) * rate^(3/2) * y^2 *", " exp(-0.5*rate*y^2), y>0, rate>0\n", "Link: ", namesof("rate", link, earg = earg), "\n", "\n", "Mean: sqrt(8 / (rate * pi))"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M) }), list( .zero = zero ))), infos = eval(substitute(function(...) 
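## Illustrative check (not run): if Y ~ Maxwell(rate) then rate * Y^2 is
## chi-squared with 3 degrees of freedom (cf. qmaxwell() above).  e.g.,
## pmaxwell(1.2, rate = 2)
## pchisq(2 * 1.2^2, df = 3)  # Same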
{ list(M1 = 1, Q1 = 1, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 1 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly mynames1 <- param.names("rate", ncoly) predictors.names <- namesof(mynames1, .link , earg = .earg , tag = FALSE) if (!length(etastart)) { a.init <- 8 / (pi * (y + 0.1)^2) etastart <- theta2eta(a.init, .link , earg = .earg ) } }), list( .link = link, .earg = earg ))), linkinv = eval(substitute(function(eta, extra = NULL) { aa <- eta2theta(eta, .link , earg = .earg ) sqrt(8 / (aa * pi)) }, list( .link = link, .earg = earg ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for (ilocal in 1:ncoly) { misc$earg[[ilocal]] <- .earg } misc$link <- rep_len( .link , ncoly) names(misc$link) <- mynames1 misc$M1 <- M1 misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .link = link, .earg = earg ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { aa <- eta2theta(eta, .link , earg = .earg ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dmaxwell(x = y, rate = aa, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .link = link, .earg = earg ))), vfamily = c("maxwell"), validparams = eval(substitute(function(eta, y, extra = NULL) { shape <- eta2theta(eta, .link , earg = .earg ) okay1 <- all(is.finite(shape)) && all(0 < shape) okay1 }, list( .link = link, .earg = earg ))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) aa <- eta2theta(eta, .link , earg = .earg ) rmaxwell(nsim * length(aa), rate = c(aa)) }, list( .link = link, .earg = earg ))), deriv = eval(substitute(expression({ aa <- eta2theta(eta, .link , earg = .earg ) dl.da <- 1.5 / aa - 0.5 * y^2 da.deta <- dtheta.deta(aa, .link , earg = .earg ) c(w) * dl.da * da.deta }), list( .link = link, .earg = earg ))), weight = eval(substitute(expression({ ned2l.da2 <- 1.5 / aa^2 wz <- c(w) * ned2l.da2 * da.deta^2 wz }), list( .link = link, .earg = earg )))) } dnaka <- function(x, scale = 1, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(shape), length(scale)) if (length(x) != L) x <- rep_len(x, L) if (length(shape) != L) shape <- rep_len(shape, L) if (length(scale) != L) scale <- rep_len(scale, L) logdensity <- rep_len(log(0), L) xok <- (x > 0) logdensity[xok] <- dgamma(x = x[xok]^2, shape = shape[xok], scale = scale[xok] / shape[xok], log = TRUE) + log(2) + log(x[xok]) logdensity[is.infinite(x)] <- log(0) # 20141208 KaiH if (log.arg) logdensity else exp(logdensity) } pnaka <- function(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) { ans <- pgamma(shape * q^2 / scale, shape = shape, lower.tail = lower.tail, log.p = log.p) ans[scale < 0] <- NaN ans } qnaka <- function(p, scale = 1, shape, ...) 
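## qnaka() below has no closed form: each quantile is bracketed by growing an
## upper bound from the mean sqrt(scale/shape)*gamma(shape+0.5)/gamma(shape)
## until pnaka() covers the requested probability, and the root of
## pnaka(q) = p is then found with uniroot(); arguments in ... are passed on
## to uniroot().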
{ if (!is.Numeric(p, positive = TRUE) || max(p) >= 1) stop("bad input for argument 'p'") if (!is.Numeric(shape, positive = TRUE)) stop("bad input for argument 'shape'") if (!is.Numeric(scale, positive = TRUE)) stop("bad input for argument 'scale'") L <- max(length(p), length(shape), length(scale)) if (length(p) != L) p <- rep_len(p, L) if (length(shape) != L) shape <- rep_len(shape, L) if (length(scale) != L) scale <- rep_len(scale, L) ans <- rep_len(0.0, L) myfun <- function(x, shape, scale = 1, p) pnaka(q = x, shape = shape, scale = scale) - p for (ii in 1:L) { EY <- sqrt(scale[ii]/shape[ii]) * gamma(shape[ii] + 0.5) / gamma(shape[ii]) Upper <- 5 * EY while (pnaka(q = Upper, shape = shape[ii], scale = scale[ii]) < p[ii]) Upper <- Upper + scale[ii] ans[ii] <- uniroot(f = myfun, lower = 0, upper = Upper, shape = shape[ii], scale = scale[ii], p = p[ii], ...)$root } ans } rnaka <- function(n, scale = 1, shape, Smallno = 1.0e-6) { use.n <- if ((length.n <- length(n)) > 1) length.n else if (!is.Numeric(n, integer.valued = TRUE, length.arg = 1, positive = TRUE)) stop("bad input for argument 'n'") else n if (!is.Numeric(scale, positive = TRUE, length.arg = 1)) stop("bad input for argument 'scale'") if (!is.Numeric(shape, positive = TRUE, length.arg = 1)) stop("bad input for argument 'shape'") if (!is.Numeric(Smallno, positive = TRUE, length.arg = 1) || Smallno > 0.01 || Smallno < 2 * .Machine$double.eps) stop("bad input for argument 'Smallno'") ans <- rep_len(0.0, use.n) ptr1 <- 1 ptr2 <- 0 ymax <- dnaka(x = sqrt(scale * (1 - 0.5 / shape)), shape = shape, scale = scale) while (ptr2 < use.n) { EY <- sqrt(scale / shape) * gamma(shape + 0.5) / gamma(shape) Upper <- EY + 5 * scale while (pnaka(q = Upper, shape = shape, scale = scale) < 1 - Smallno) Upper <- Upper + scale x <- runif(2*use.n, min = 0, max = Upper) index <- runif(2*use.n, max = ymax) < dnaka(x, shape = shape, scale = scale) sindex <- sum(index) if (sindex) { ptr2 <- min(use.n, ptr1 + sindex - 1) ans[ptr1:ptr2] <- (x[index])[1:(1+ptr2-ptr1)] ptr1 <- ptr2 + 1 } } ans } nakagami <- function(lscale = "loge", lshape = "loge", iscale = 1, ishape = NULL, nowarning = FALSE) { if (!is.null(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("argument 'iscale' must be a positive number or NULL") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") new("vglmff", blurb = c("Nakagami distribution f(y) = 2 * (shape/scale)^shape *\n", " ", "y^(2*shape-1) * exp(-shape*y^2/scale) / gamma(shape),\n", " ", "y>0, shape>0, scale>0\n", "Links: ", namesof("scale", lscale, earg = escale), ", ", namesof("shape", lshape, earg = eshape), "\n", "\n", "Mean: sqrt(scale/shape)*gamma(shape+0.5) / gamma(shape)"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("scale", .lscale , earg = .escale , tag = FALSE), namesof("shape", .lshape , earg = .eshape , tag = FALSE)) if (!length(etastart)) { init2 <- if (is.Numeric( .iscale , positive = TRUE)) rep_len( .iscale , n) else rep_len(1, n) init1 <- if (is.Numeric( .ishape, positive = TRUE)) rep_len( .ishape , n) else rep_len(init2 / (y+1/8)^2, n) etastart <- cbind(theta2eta(init2, .lscale , earg = .escale ), theta2eta(init1, .lshape , earg = .eshape )) } }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape, .ishape = ishape, .iscale = iscale ))), linkinv = 
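## Illustrative check (not run): if Y ~ Nakagami(scale, shape) then Y^2 is
## gamma distributed with that shape and mean 'scale' (cf. dnaka() above).
## e.g., with arbitrary values,
## pnaka(1.5, scale = 2, shape = 2)
## pgamma(1.5^2, shape = 2, scale = 2/2)  # Same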
eval(substitute(function(eta, extra = NULL) { scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) sqrt(scale/shape) * gamma(shape+0.5) / gamma(shape) }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), last = eval(substitute(expression({ misc$link <- c(scale = .lscale , shape = .lshape ) misc$earg <- list(scale = .escale , shape = .eshape ) misc$expected = TRUE }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dnaka(x = y, shape = shape, scale = scale, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), vfamily = c("nakagami"), validparams = eval(substitute(function(eta, y, extra = NULL) { Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) okay1 <- all(is.finite(Scale)) && all(0 < Scale) && all(is.finite(shape)) && all(0 < shape) okay1 }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), deriv = eval(substitute(expression({ Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) dl.dshape <- 1 + log(shape/Scale) - digamma(shape) + 2 * log(y) - y^2 / Scale dl.dscale <- -shape/Scale + shape * (y/Scale)^2 dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) c(w) * cbind(dl.dscale * dscale.deta, dl.dshape * dshape.deta) }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), weight = eval(substitute(expression({ d2l.dshape2 <- trigamma(shape) - 1/shape d2l.dscale2 <- shape / Scale^2 wz <- matrix(NA_real_, n, M) # diagonal wz[, iam(1, 1, M)] <- d2l.dscale2 * dscale.deta^2 wz[, iam(2, 2, M)] <- d2l.dshape2 * dshape.deta^2 c(w) * wz }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape)))) } drayleigh <- function(x, scale = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(scale)) if (length(x) != L) x <- rep_len(x, L) if (length(scale) != L) scale <- rep_len(scale, L) logdensity <- rep_len(log(0), L) xok <- (x > 0) logdensity[xok] <- log(x[xok]) - 0.5 * (x[xok]/scale[xok])^2 - 2 * log(scale[xok]) logdensity[is.infinite(x)] <- log(0) # 20141208 KaiH if (log.arg) logdensity else exp(logdensity) } prayleigh <- function(q, scale = 1, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ans <- log(-expm1(-0.5 * (q / scale)^2)) ans[q <= 0 ] <- -Inf } else { ans <- -expm1(-0.5 * (q / scale)^2) ans[q <= 0] <- 0 } } else { if (log.p) { ans <- -0.5 * (q / scale)^2 ans[q <= 0] <- 0 } else { ans <- exp(-0.5 * (q / scale)^2) ans[q <= 0] <- 1 } } ans[scale < 0] <- NaN ans } qrayleigh <- function(p, scale = 1, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for 
argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ln.p <- p ans <- scale * sqrt(-2 * log(-expm1(ln.p))) ans[ln.p > 0] <- NaN } else { ans <- scale * sqrt(-2 * log1p(-p)) ans[p < 0] <- NaN ans[p == 0] <- 0 ans[p == 1] <- Inf } } else { if (log.p) { ln.p <- p ans <- scale * sqrt(-2 * ln.p) ans[ln.p > 0] <- NaN ans } else { ans <- scale * sqrt(-2 * log(p)) ans[p > 1] <- NaN } } ans[scale <= 0] <- NaN ans } rrayleigh <- function(n, scale = 1) { ans <- scale * sqrt(-2 * log(runif(n))) ans[scale <= 0] <- NaN ans } rayleigh <- function(lscale = "loge", nrfs = 1 / 3 + 0.01, oim.mean = TRUE, zero = NULL) { lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") if (!is.Numeric(nrfs, length.arg = 1) || nrfs < 0 || nrfs > 1) stop("bad input for 'nrfs'") if (!is.logical(oim.mean) || length(oim.mean) != 1) stop("bad input for argument 'oim.mean'") new("vglmff", blurb = c("Rayleigh distribution\n\n", "f(y) = y*exp(-0.5*(y/scale)^2)/scale^2, y>0, scale>0\n\n", "Link: ", namesof("scale", lscale, earg = escale), "\n\n", "Mean: scale * sqrt(pi / 2)"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M = M, predictors.names = predictors.names, M1 = 1) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 1, Q1 = 1, expected = TRUE, multipleResponses = TRUE, parameters.names = c("scale"), zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 1 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly mynames1 <- param.names("scale", ncoly) predictors.names <- namesof(mynames1, .lscale , earg = .escale , tag = FALSE) if (!length(etastart)) { Ymat <- matrix(colSums(y) / colSums(w), n, ncoly, byrow = TRUE) b.init <- (Ymat + 1/8) / sqrt(pi/2) etastart <- theta2eta(b.init, .lscale , earg = .escale ) } }), list( .lscale = lscale, .escale = escale))), linkinv = eval(substitute(function(eta, extra = NULL) { Scale <- eta2theta(eta, .lscale , earg = .escale ) Scale * sqrt(pi / 2) }, list( .lscale = lscale, .escale = escale))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .lscale , ncoly)) names(misc$link) <- mynames1 misc$earg <- vector("list", M) names(misc$earg) <- mynames1 for (ii in 1:ncoly) { misc$earg[[ii]] <- .escale } misc$M1 <- M1 misc$multipleResponses <- TRUE misc$nrfs <- .nrfs }), list( .lscale = lscale, .escale = escale, .nrfs = nrfs ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { Scale <- eta2theta(eta, .lscale , earg = .escale ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * drayleigh(x = y, scale = Scale, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale, .escale = escale))), vfamily = c("rayleigh"), validparams = eval(substitute(function(eta, y, extra = NULL) { Scale <- eta2theta(eta, .lscale , earg = .escale ) okay1 <- all(is.finite(Scale)) && all(0 < Scale) okay1 }, list( .lscale = lscale, .escale = escale))), simslot = function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") Scale <- 
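## The simslot here recovers the scale parameter from the fitted mean, since
## E(Y) = scale * sqrt(pi/2) for the Rayleigh distribution.  The @weight slot
## below blends observed and expected information according to 'nrfs':
## nrfs = 0 gives pure Newton-Raphson, nrfs = 1 pure Fisher scoring.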
fitted(object) / sqrt(pi / 2) rrayleigh(nsim * length(Scale), scale = c(Scale)) }, deriv = eval(substitute(expression({ Scale <- eta2theta(eta, .lscale , earg = .escale ) dl.dScale <- ((y/Scale)^2 - 2) / Scale dScale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) c(w) * dl.dScale * dScale.deta }), list( .lscale = lscale, .escale = escale))), weight = eval(substitute(expression({ d2l.dScale2 <- (3 * (y/Scale)^2 - 2) / Scale^2 ned2l.dScale2 <- 4 / Scale^2 wz <- c(w) * dScale.deta^2 * ((1 - .nrfs) * d2l.dScale2 + .nrfs * ned2l.dScale2) if (intercept.only && .oim.mean ) { ave.oim <- weighted.mean(d2l.dScale2, rep_len(c(w), length(d2l.dScale2))) if (ave.oim > 0) { wz <- c(w) * dScale.deta^2 * ave.oim } } wz }), list( .lscale = lscale, .escale = escale, .nrfs = nrfs, .oim.mean = oim.mean )))) } dparetoIV <- function(x, location = 0, scale = 1, inequality = 1, shape = 1, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) N <- max(length(x), length(location), length(scale), length(inequality), length(shape)) if (length(x) != N) x <- rep_len(x, N) if (length(location) != N) location <- rep_len(location, N) if (length(inequality) != N) inequality <- rep_len(inequality, N) if (length(shape) != N) shape <- rep_len(shape, N) if (length(scale) != N) scale <- rep_len(scale, N) logdensity <- rep_len(log(0), N) xok <- (x > location) zedd <- (x - location) / scale logdensity[xok] <- log(shape[xok]) - log(scale[xok]) - log(inequality[xok]) + (1/inequality[xok]-1) * log(zedd[xok]) - (shape[xok]+1) * log1p(zedd[xok]^(1/inequality[xok])) logdensity[is.infinite(x)] <- log(0) # 20141208 KaiH if (log.arg) logdensity else exp(logdensity) } pparetoIV <- function(q, location = 0, scale = 1, inequality = 1, shape = 1, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") zedd <- (q - location) / scale if (lower.tail) { if (log.p) { answer <- log(-expm1(log1p(zedd^(1/inequality)) * (-shape))) answer[q <= 0 ] <- -Inf answer[q == Inf] <- 0 } else { answer <- -expm1(log1p(zedd^(1/inequality)) * (-shape)) answer[q <= 0] <- 0 answer[q == Inf] <- 1 } } else { if (log.p) { answer <- log1p(zedd^(1/inequality)) * (-shape) answer[q <= 0] <- 0 answer[q == Inf] <- -Inf } else { answer <- exp(log1p(zedd^(1/inequality)) * (-shape)) answer[q <= 0] <- 1 answer[q == Inf] <- 0 } } answer[scale <= 0 | shape <= 0 | inequality <= 0] <- NaN answer } qparetoIV <- function(p, location = 0, scale = 1, inequality = 1, shape = 1, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ln.p <- p ans <- location + scale * (expm1((-1/shape)*log(-expm1(ln.p))))^inequality ans[ln.p > 0] <- NaN } else { ans <- location + scale * (expm1((-1/shape) * log1p(-p)))^inequality ans[p < 0] <- NaN ans[p == 0] <- 0 ans[p == 1] <- Inf ans[p > 1] <- NaN } } else { if (log.p) { ln.p <- p ans <- location + scale * (expm1((-1/shape)*ln.p))^inequality ans[ln.p > 0] <- NaN ans } else { ans <- location + scale * (expm1((-1/shape)*log(p)))^inequality ans[p < 0] <- NaN ans[p == 0] <- Inf ans[p == 1] <- 0 ans[p > 1] <- NaN } } ans[scale <= 0 | shape <= 0 | inequality <= 0] <- NaN ans } rparetoIV <- function(n, location = 0, scale = 1, inequality 
= 1, shape = 1) { if (!is.Numeric(inequality, positive = TRUE)) stop("bad input for argument 'inequality'") ans <- location + scale * (-1 + runif(n)^(-1/shape))^inequality ans[scale <= 0] <- NaN ans[shape <= 0] <- NaN ans } dparetoIII <- function(x, location = 0, scale = 1, inequality = 1, log = FALSE) dparetoIV(x = x, location = location, scale = scale, inequality = inequality, shape = 1, log = log) pparetoIII <- function(q, location = 0, scale = 1, inequality = 1, lower.tail = TRUE, log.p = FALSE) pparetoIV(q = q, location = location, scale = scale, inequality = inequality, shape = 1, lower.tail = lower.tail, log.p = log.p) qparetoIII <- function(p, location = 0, scale = 1, inequality = 1, lower.tail = TRUE, log.p = FALSE) qparetoIV(p = p, location = location, scale = scale, inequality = inequality, shape = 1, lower.tail = lower.tail, log.p = log.p) rparetoIII <- function(n, location = 0, scale = 1, inequality = 1) rparetoIV(n = n, location= location, scale = scale, inequality = inequality, shape = 1) dparetoII <- function(x, location = 0, scale = 1, shape = 1, log = FALSE) dparetoIV(x = x, location = location, scale = scale, inequality = 1, shape = shape, log = log) pparetoII <- function(q, location = 0, scale = 1, shape = 1, lower.tail = TRUE, log.p = FALSE) pparetoIV(q = q, location = location, scale = scale, inequality = 1, shape = shape, lower.tail = lower.tail, log.p = log.p) qparetoII <- function(p, location = 0, scale = 1, shape = 1, lower.tail = TRUE, log.p = FALSE) qparetoIV(p = p, location = location, scale = scale, inequality = 1, shape = shape, lower.tail = lower.tail, log.p = log.p) rparetoII <- function(n, location = 0, scale = 1, shape = 1) rparetoIV(n = n, location = location, scale = scale, inequality = 1, shape = shape) dparetoI <- function(x, scale = 1, shape = 1, log = FALSE) dparetoIV(x = x, location = scale, scale = scale, inequality = 1, shape = shape, log = log) pparetoI <- function(q, scale = 1, shape = 1, lower.tail = TRUE, log.p = FALSE) pparetoIV(q = q, location = scale, scale = scale, inequality = 1, shape = shape, lower.tail = lower.tail, log.p = log.p) qparetoI <- function(p, scale = 1, shape = 1, lower.tail = TRUE, log.p = FALSE) qparetoIV(p = p, location = scale, scale = scale, inequality = 1, shape = shape, lower.tail = lower.tail, log.p = log.p) rparetoI <- function(n, scale = 1, shape = 1) rparetoIV(n = n, location = scale, scale = scale, inequality = 1, shape = shape) paretoIV <- function(location = 0, lscale = "loge", linequality = "loge", lshape = "loge", iscale = 1, iinequality = 1, ishape = NULL, imethod = 1) { if (!is.Numeric(location)) stop("argument 'location' must be numeric") if (is.Numeric(iscale) && any(iscale <= 0)) stop("argument 'iscale' must be positive") if (is.Numeric(iinequality) && any(iinequality <= 0)) stop("argument 'iinequality' must be positive") if (is.Numeric(ishape) && any(ishape <= 0)) stop("argument 'ishape' must be positive") if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE) || imethod > 2) stop("bad input for argument 'imethod'") if (linequality == "negloge" && location != 0) warning("The Burr distribution has 'location = 0' and ", "'linequality = negloge'") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") linequ <- as.list(substitute(linequality)) einequ <- link2list(linequ) linequ <- attr(einequ, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") iinequ <- iinequality 
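## The Pareto(I)-(III) wrappers above are special cases of Pareto(IV):
## type III fixes shape = 1, type II fixes inequality = 1, and type I
## additionally sets location = scale.  Illustrative check (not run):
## pparetoII(3, location = 0, scale = 2, shape = 1.5)
## pparetoIV(3, location = 0, scale = 2, inequality = 1, shape = 1.5)  # Same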
new("vglmff", blurb = c("Pareto(IV) distribution F(y)=1-[1+((y - ", location, ")/scale)^(1/inequality)]^(-shape),", "\n", " y > ", location, ", scale > 0, inequality > 0, shape > 0,\n", "Links: ", namesof("scale", lscale, earg = escale), ", ", namesof("inequality", linequ, earg = einequ ), ", ", namesof("shape", lshape, earg = eshape), "\n", "Mean: location + scale * NA"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("scale", .lscale , earg = .escale , tag = FALSE), namesof("inequality", .linequ , earg = .einequ , tag = FALSE), namesof("shape", .lshape , earg = .eshape , tag = FALSE)) extra$location <- location <- .location if (any(y <= location)) stop("the response must have values > than the 'location' argument") if (!length(etastart)) { inequ.init <- if (length( .iinequ )) .iinequ else 1 scale.init <- if (length( .iscale )) .iscale else 1 shape.init <- if (length( .ishape )) .ishape else NULL if (!length(shape.init)) { zedd <- (y - location) / scale.init if ( .imethod == 1) { A1 <- weighted.mean(1/(1 + zedd^(1/inequ.init)), w = w) A2 <- weighted.mean(1/(1 + zedd^(1/inequ.init))^2, w = w) } else { A1 <- median(1/(1 + zedd^(1/inequ.init ))) A2 <- median(1/(1 + zedd^(1/inequ.init))^2) } shape.init <- max(0.01, (2*A2-A1)/(A1-A2)) } etastart <- cbind( theta2eta(rep_len(scale.init, n), .lscale , earg = .escale ), theta2eta(rep_len(inequ.init, n), .linequ , earg = .einequ ), theta2eta(rep_len(shape.init, n), .lshape , earg = .eshape )) } }), list( .location = location, .lscale = lscale, .linequ = linequ, .lshape = lshape, .imethod = imethod, .escale = escale, .einequ = einequ, .eshape = eshape, .iscale = iscale, .iinequ = iinequ, .ishape = ishape ))), linkinv = eval(substitute(function(eta, extra = NULL) { location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) inequ <- eta2theta(eta[, 2], .linequ, earg = .einequ) shape <- eta2theta(eta[, 3], .lshape , earg = .eshape ) qparetoIV(p = 0.5, location = location, scale = Scale, inequality = inequ, shape = shape) }, list( .lscale = lscale, .linequ = linequ, .lshape = lshape, .escale = escale, .einequ = einequ, .eshape = eshape))), last = eval(substitute(expression({ misc$link <- c("scale" = .lscale , "inequality" = .linequ, "shape" = .lshape) misc$earg <- list("scale" = .escale , "inequality" = .einequ, "shape" = .eshape ) misc$location = extra$location # Use this for prediction }), list( .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ, .lshape = lshape, .eshape = eshape))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) inequ <- eta2theta(eta[, 2], .linequ, earg = .einequ) shape <- eta2theta(eta[, 3], .lshape , earg = .eshape ) zedd <- (y - location) / Scale if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dparetoIV(x = y, location = location, scale = Scale, inequ = inequ, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ, .lshape = lshape, .eshape = eshape))), vfamily = c("paretoIV"), validparams = eval(substitute(function(eta, y, extra = NULL) { location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) inequ <- eta2theta(eta[, 2], .linequ , earg = .einequ ) shape <- eta2theta(eta[, 3], .lshape , 
earg = .eshape ) okay1 <- all(is.finite(Scale)) && all(0 < Scale) && all(is.finite(inequ)) && all(0 < inequ) && all(is.finite(shape)) && all(0 < shape) okay1 }, list( .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ, .lshape = lshape, .eshape = eshape))), deriv = eval(substitute(expression({ location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) inequ <- eta2theta(eta[, 2], .linequ , earg = .einequ ) shape <- eta2theta(eta[, 3], .lshape , earg = .eshape ) zedd <- (y - location) / Scale temp100 <- 1 + zedd^(1/inequ) dl.dscale <- (shape - (1+shape) / temp100) / (inequ * Scale) dl.dinequ <- ((log(zedd) * (shape - (1+shape)/temp100)) / inequ - 1) / inequ dl.dshape <- -log(temp100) + 1/shape dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dinequ.deta <- dtheta.deta(inequ, .linequ, earg = .einequ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) c(w) * cbind(dl.dscale * dscale.deta, dl.dinequ * dinequ.deta, dl.dshape * dshape.deta) }), list( .lscale = lscale, .linequ = linequ, .lshape = lshape, .escale = escale, .einequ = einequ, .eshape = eshape))), weight = eval(substitute(expression({ temp200 <- digamma(shape) - digamma(1) - 1 d2scale.deta2 <- shape / ((inequ*Scale)^2 * (shape+2)) d2inequ.deta2 <- (shape * (temp200^2 + trigamma(shape) + trigamma(1) ) + 2*(temp200+1)) / (inequ^2 * (shape+2)) d2shape.deta2 <- 1 / shape^2 d2si.deta2 <- (shape*(-temp200) -1) / (inequ^2 * Scale * (shape+2)) d2ss.deta2 <- -1 / ((inequ*Scale) * (shape+1)) d2is.deta2 <- temp200 / (inequ*(shape+1)) wz <- matrix(0, n, dimm(M)) wz[, iam(1, 1, M)] <- dscale.deta^2 * d2scale.deta2 wz[, iam(2, 2, M)] <- dinequ.deta^2 * d2inequ.deta2 wz[, iam(3, 3, M)] <- dshape.deta^2 * d2shape.deta2 wz[, iam(1, 2, M)] <- dscale.deta * dinequ.deta * d2si.deta2 wz[, iam(1, 3, M)] <- dscale.deta * dshape.deta * d2ss.deta2 wz[, iam(2, 3, M)] <- dinequ.deta * dshape.deta * d2is.deta2 c(w) * wz }), list( .lscale = lscale, .linequ = linequ, .lshape = lshape, .escale = escale, .einequ = einequ, .eshape = eshape)))) } paretoIII <- function(location = 0, lscale = "loge", linequality = "loge", iscale = NULL, iinequality = NULL) { if (!is.Numeric(location)) stop("argument 'location' must be numeric") if (is.Numeric(iscale) && any(iscale <= 0)) stop("argument 'iscale' must be positive") if (is.Numeric(iinequality) && any(iinequality <= 0)) stop("argument 'iinequality' must be positive") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") linequ <- as.list(substitute(linequality)) einequ <- link2list(linequ) linequ <- attr(einequ, "function.name") iinequ <- iinequality new("vglmff", blurb = c("Pareto(III) distribution F(y)=1-[1+((y - ", location, ")/scale)^(1/inequality)]^(-1),", "\n", " y > ", location, ", scale > 0, inequality > 0, \n", "Links: ", namesof("scale", lscale, earg = escale), ", ", namesof("inequality", linequ, earg = einequ ), "\n", "Mean: location + scale * NA"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("scale", .lscale , earg = .escale , tag = FALSE), namesof("inequ", .linequ, earg = .einequ, tag = FALSE)) extra$location = location = .location if (any(y <= location)) stop("the response must have values > than the 'location' argument") if (!length(etastart)) { inequ.init <- if (length( .iinequ)) .iinequ else NULL scale.init <- if (length( .iscale )) .iscale else NULL if (!length(inequ.init) || !length(scale.init)) { probs <- 
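# Initial-value reasoning for paretoIII() (a sketch): type III has
# logit F(y) = (1/inequality) * log((y - location)/scale), that is,
#   log(y - location) = inequality * logit(p) + log(scale),
# so just below the empirical log-quantiles are regressed on
# logit(probs): the slope initialises 'inequality' and exp(intercept)
# initialises 'scale'.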
(1:4)/5 ytemp <- quantile(x = log(y-location), probs = probs) fittemp <- lsfit(logit(probs), ytemp, intercept = TRUE) if (!length(inequ.init)) inequ.init <- max(fittemp$coef["X"], 0.01) if (!length(scale.init)) scale.init <- exp(fittemp$coef["Intercept"]) } etastart<- cbind( theta2eta(rep_len(scale.init, n), .lscale , earg = .escale ), theta2eta(rep_len(inequ.init, n), .linequ , earg = .einequ )) } }), list( .location = location, .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ, .iscale = iscale, .iinequ = iinequ ))), linkinv = eval(substitute(function(eta, extra = NULL) { location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) inequ <- eta2theta(eta[, 2], .linequ, earg = .einequ) qparetoIII(p = 0.5, location = location, scale = Scale, inequality = inequ) }, list( .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ ))), last = eval(substitute(expression({ misc$link <- c("scale" = .lscale , "inequality" = .linequ) misc$earg <- list("scale" = .escale , "inequality" = .einequ) misc$location <- extra$location # Use this for prediction }), list( .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) inequ <- eta2theta(eta[, 2], .linequ , earg = .einequ ) zedd <- (y - location) / Scale if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dparetoIII(x = y, location = location, scale = Scale, inequ = inequ, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ ))), vfamily = c("paretoIII"), validparams = eval(substitute(function(eta, y, extra = NULL) { location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) inequ <- eta2theta(eta[, 2], .linequ, earg = .einequ) okay1 <- all(is.finite(Scale)) && all(0 < Scale) && all(is.finite(inequ)) && all(0 < inequ) okay1 }, list( .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ ))), deriv = eval(substitute(expression({ location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) inequ <- eta2theta(eta[, 2], .linequ, earg = .einequ) shape <- 1 zedd <- (y - location) / Scale temp100 <- 1 + zedd^(1/inequ) dl.dscale <- (shape - (1+shape) / temp100) / (inequ * Scale) dl.dinequ <- ((log(zedd) * (shape - (1+shape)/temp100)) / inequ - 1) / inequ dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dinequ.deta <- dtheta.deta(inequ, .linequ, earg = .einequ) c(w) * cbind(dl.dscale * dscale.deta, dl.dinequ * dinequ.deta) }), list( .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ ))), weight = eval(substitute(expression({ d2scale.deta2 <- 1 / ((inequ*Scale)^2 * 3) d2inequ.deta2 <- (1 + 2* trigamma(1)) / (inequ^2 * 3) wz <- matrix(0, n, M) # It is diagonal wz[, iam(1, 1, M)] <- dscale.deta^2 * d2scale.deta2 wz[, iam(2, 2, M)] <- dinequ.deta^2 * d2inequ.deta2 c(w) * wz }), list( .lscale = lscale, .linequ = linequ, .escale = escale, .einequ = einequ )))) } paretoII <- function(location = 0, lscale = "loge", lshape = "loge", iscale = NULL, ishape = NULL) { if (!is.Numeric(location)) stop("argument 'location' must be numeric") if (is.Numeric(iscale) && any(iscale <= 0)) stop("argument 'iscale' must be positive") if (is.Numeric(ishape) && any(ishape <= 0)) stop("argument 
'ishape' must be positive") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") new("vglmff", blurb = c("Pareto(II) distribution F(y)=1-[1+(y - ", location, ")/scale]^(-shape),", "\n", " y > ", location, ", scale > 0, shape > 0,\n", "Links: ", namesof("scale", lscale, earg = escale), ", ", namesof("shape", lshape, earg = eshape), "\n", "Mean: location + scale * NA"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("scale", .lscale , earg = .escale , tag = FALSE), namesof("shape", .lshape , earg = .eshape , tag = FALSE)) extra$location <- location <- .location if (any(y <= location)) stop("the response must have values > than the 'location' argument") if (!length(etastart)) { scale.init <- if (length( .iscale )) .iscale else NULL shape.init <- if (length( .ishape )) .ishape else NULL if (!length(shape.init) || !length(scale.init)) { probs <- (1:4)/5 scale.init.0 <- 1 ytemp <- quantile(x = log(y-location+scale.init.0), probs = probs) fittemp <- lsfit(x = log1p(-probs), y = ytemp, intercept = TRUE) if (!length(shape.init)) shape.init <- max(-1/fittemp$coef["X"], 0.01) if (!length(scale.init)) scale.init <- exp(fittemp$coef["Intercept"]) } etastart <- cbind(theta2eta(rep_len(scale.init, n), .lscale , earg = .escale ), theta2eta(rep_len(shape.init, n), .lshape , earg = .eshape )) } }), list( .location = location, .lscale = lscale, .escale = escale, .eshape = eshape, .lshape = lshape, .iscale = iscale, .ishape = ishape ))), linkinv = eval(substitute(function(eta, extra = NULL) { location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) qparetoII(p = 0.5, scale = Scale, shape = shape) }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), last = eval(substitute(expression({ misc$link <- c("scale" = .lscale , "shape" = .lshape) misc$earg <- list("scale" = .escale , "shape" = .eshape ) misc$location <- extra$location # Use this for prediction }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) zedd <- (y - location) / Scale if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dparetoII(x = y, location = location, scale = Scale, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), vfamily = c("paretoII"), validparams = eval(substitute(function(eta, y, extra = NULL) { location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) okay1 <- all(is.finite(Scale)) && all(0 < Scale) && all(is.finite(shape)) && all(0 < shape) okay1 }, list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), deriv = eval(substitute(expression({ location <- extra$location Scale <- eta2theta(eta[, 1], .lscale , earg = .escale ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) zedd <- (y - location) / Scale temp100 <- 1 + zedd dl.dscale <- (shape - 
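# Initial-value reasoning for paretoII() (a sketch): type II has survivor
# function S(y) = [1 + (y - location)/scale]^(-shape), so
#   log(y - location + scale) = log(scale) - (1/shape) * log(S(y)).
# Hence the lsfit() call in @initialize above regresses empirical
# log-quantiles on log1p(-probs) = log(S): the slope estimates -1/shape
# and exp(intercept) estimates 'scale' (with scale.init.0 = 1 standing
# in for the unknown scale inside the log).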
(1+shape) / temp100) / (1 * Scale) dl.dshape <- -log(temp100) + 1/shape dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) c(w) * cbind(dl.dscale * dscale.deta, dl.dshape * dshape.deta) }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape))), weight = eval(substitute(expression({ d2scale.deta2 <- shape / (Scale^2 * (shape+2)) d2shape.deta2 <- 1 / shape^2 d2ss.deta2 <- -1 / (Scale * (shape+1)) wz <- matrix(0, n, dimm(M)) wz[, iam(1, 1, M)] <- dscale.deta^2 * d2scale.deta2 wz[, iam(2, 2, M)] <- dshape.deta^2 * d2shape.deta2 wz[, iam(1, 2, M)] <- dscale.deta * dshape.deta * d2ss.deta2 c(w) * wz }), list( .lscale = lscale, .lshape = lshape, .escale = escale, .eshape = eshape)))) } dpareto <- function(x, scale = 1, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) L <- max(length(x), length(scale), length(shape)) if (length(x) != L) x <- rep_len(x, L) if (length(scale) != L) scale <- rep_len(scale, L) if (length(shape) != L) shape <- rep_len(shape, L) logdensity <- rep_len(log(0), L) xok <- (x >= scale) # 20141212 KaiH logdensity[xok] <- log(shape[xok]) + shape[xok] * log(scale[xok]) - (shape[xok]+1) * log(x[xok]) if (log.arg) logdensity else exp(logdensity) } ppareto <- function(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ans <- log1p(-(scale/q)^shape) ans[q <= scale] <- -Inf ans[q == Inf] <- 0 } else { ans <- exp(log1p(-(scale/q)^shape)) ans[q <= scale] <- 0 ans[q == Inf] <- 1 } } else { if (log.p) { ans <- log((scale/q)^shape) ans[q <= scale] <- 0 ans[q == Inf] <- -Inf } else { ans <- (scale/q)^shape ans[q <= scale] <- 1 ans[q == Inf] <- 0 } } ans[shape <= 0 | scale <= 0] <- NaN ans } qpareto <- function(p, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) { if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") if (lower.tail) { if (log.p) { ln.p <- p ans <- scale / (-expm1(ln.p))^(1/shape) ans[ln.p > 0] <- NaN } else { ans <- scale / exp(log1p(-p) * (1/shape)) ans[p < 0] <- NaN ans[p == 0] <- scale ans[p == 1] <- Inf ans[p > 1] <- NaN } } else { if (log.p) { ln.p <- p ans <- scale / exp(ln.p)^(1/shape) ans[ln.p > 0] <- NaN ans } else { ans <- scale / p^(1/shape) ans[p < 0] <- NaN ans[p == 0] <- Inf ans[p == 1] <- scale ans[p > 1] <- NaN } } ans[shape <= 0 | scale <= 0] <- NaN ans } rpareto <- function(n, scale = 1, shape) { ans <- scale / runif(n)^(1/shape) ans[scale <= 0] <- NaN ans[shape <= 0] <- NaN ans } paretoff <- function(scale = NULL, lshape = "loge") { if (is.Numeric(scale) && scale <= 0) stop("argument 'scale' must be positive") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") new("vglmff", blurb = c("Pareto distribution ", "f(y) = shape * scale^shape / y^(shape+1),", " 0<scale<y, shape>0,\n", "Link: ", namesof("shape", lshape, earg = eshape), "\n", "\n", "Mean: scale*shape/(shape-1) for shape>1"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- namesof("shape", .lshape , earg = .eshape , tag = FALSE) scalehat <- if (!length( .scale )) { scaleEstimated <- TRUE min(y) # - .smallno } else { scaleEstimated <- FALSE .scale } if (any(y < scalehat)) stop("the
value of 'scale' is too high ", "(requires 0 < scale < min(y))") extra$scale <- scalehat extra$scaleEstimated <- scaleEstimated if (!length(etastart)) { k.init <- (y + 1/8) / (y - scalehat + 1/8) etastart <- theta2eta(k.init, .lshape , earg = .eshape ) } }), list( .lshape = lshape, .eshape = eshape, .scale = scale ))), linkinv = eval(substitute(function(eta, extra = NULL) { k <- eta2theta(eta, .lshape , earg = .eshape ) scale <- extra$scale ifelse(k > 1, k * scale / (k-1), NA) }, list( .lshape = lshape, .eshape = eshape ))), last = eval(substitute(expression({ misc$link <- c(k = .lshape) misc$earg <- list(k = .eshape ) misc$scale <- extra$scale # Use this for prediction }), list( .lshape = lshape, .eshape = eshape ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shape <- eta2theta(eta, .lshape , earg = .eshape ) scale <- extra$scale if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dpareto(x = y, scale = scale, shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape = lshape, .eshape = eshape ))), vfamily = c("paretoff"), validparams = eval(substitute(function(eta, y, extra = NULL) { Scale <- extra$scale shape <- eta2theta(eta, .lshape , earg = .eshape ) okay1 <- all(is.finite(Scale)) && all(0 < Scale & Scale <= y) && all(is.finite(shape)) && all(0 < shape) okay1 }, list( .lshape = lshape, .eshape = eshape ))), deriv = eval(substitute(expression({ scale <- extra$scale k <- eta2theta(eta, .lshape , earg = .eshape ) dl.dk <- 1/k + log(scale/y) dk.deta <- dtheta.deta(k, .lshape , earg = .eshape ) c(w) * dl.dk * dk.deta }), list( .lshape = lshape, .eshape = eshape ))), weight = eval(substitute(expression({ ned2l.dk2 <- 1 / k^2 wz <- c(w) * dk.deta^2 * ned2l.dk2 wz }), list( .lshape = lshape, .eshape = eshape )))) } dtruncpareto <- function(x, lower, upper, shape, log = FALSE) { if (!is.logical(log.arg <- log) || length(log) != 1) stop("bad input for argument 'log'") rm(log) if (!is.Numeric(lower, positive = TRUE)) stop("argument 'lower' must be positive") if (!is.Numeric(upper, positive = TRUE)) stop("argument 'upper' must be positive") if (!is.Numeric(shape, positive = TRUE)) stop("argument 'shape' must be positive") L <- max(length(x), length(lower), length(upper), length(shape)) if (length(x) != L) x <- rep_len(x, L) if (length(shape) != L) shape <- rep_len(shape, L) if (length(lower) != L) lower <- rep_len(lower, L) if (length(upper) != L) upper <- rep_len(upper, L) logdensity <- rep_len(log(0), L) xok <- (0 < lower) & (lower < x) & (x < upper) & (shape > 0) logdensity[xok] <- log(shape[xok]) + shape[xok] * log(lower[xok]) - (shape[xok] + 1) * log(x[xok]) - log1p(-(lower[xok] / upper[xok])^(shape[xok])) logdensity[shape <= 0] <- NaN logdensity[upper < lower] <- NaN logdensity[0 > lower] <- NaN if (log.arg) logdensity else exp(logdensity) } ptruncpareto <- function(q, lower, upper, shape, lower.tail = TRUE, log.p = FALSE) { if (!is.Numeric(q)) stop("bad input for argument 'q'") if (!is.logical(lower.tail) || length(lower.tail ) != 1) stop("bad input for argument 'lower.tail'") if (!is.logical(log.arg <- log.p) || length(log.p) != 1) stop("bad input for argument 'log.p'") rm(log.p) # 20141231 KaiH L <- max(length(q), length(lower), length(upper), length(shape)) if (length(q) != L) q <- rep_len(q, L) if (length(shape) != L) shape <- rep_len(shape, L) if (length(lower) != L) lower <- rep_len(lower, L) if (length(upper) != L) upper <- 
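# Hedged consistency check for the truncated-Pareto d/p/q functions
# (illustrative values only):
#   qq <- qtruncpareto(c(0.25, 0.5, 0.75), lower = 1, upper = 9, shape = 2)
#   ptruncpareto(qq, lower = 1, upper = 9, shape = 2)  # ~ 0.25 0.50 0.75
# The log1p(-(lower/upper)^shape) term in dtruncpareto() is the
# normalising constant that makes the density integrate to 1 on
# (lower, upper).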
rep_len(upper, L) ans <- q * 0 xok <- (0 < lower) & (lower < q) & (q < upper) & (shape > 0) ans[xok] <- (1 - (lower[xok]/q[xok])^shape[xok]) / (1 - (lower[xok]/upper[xok])^shape[xok]) ans[q >= upper] <- 1 ans[upper < lower] <- NaN ans[lower <= 0] <- NaN ans[upper <= 0] <- NaN ans[shape <= 0] <- NaN if (lower.tail) { if (log.arg) log(ans) else ans } else { if (log.arg) log1p(-ans) else exp(log1p(-ans)) } } qtruncpareto <- function(p, lower, upper, shape) { if (!is.Numeric(p, positive = TRUE)) stop("bad input for argument 'p'") if (max(p) >= 1) stop("argument 'p' must be in (0, 1)") ans <- lower / (1 - p * (1 - (lower/upper)^shape))^(1/shape) ans[lower <= 0] <- NaN ans[upper <= 0] <- NaN ans[shape <= 0] <- NaN ans[upper < lower] <- NaN ans } rtruncpareto <- function(n, lower, upper, shape) { ans <- qtruncpareto(p = runif(n), lower = lower, upper = upper, shape = shape) ans[lower <= 0] <- NaN ans[upper <= 0] <- NaN ans[shape <= 0] <- NaN ans } truncpareto <- function(lower, upper, lshape = "loge", ishape = NULL, imethod = 1) { if (!is.Numeric(lower, positive = TRUE, length.arg = 1)) stop("bad input for argument 'lower'") if (!is.Numeric(upper, positive = TRUE, length.arg = 1)) stop("bad input for argument 'upper'") if (lower >= upper) stop("lower < upper is required") if (length(ishape) && !is.Numeric(ishape, positive = TRUE)) stop("bad input for argument 'ishape'") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") earg <- eshape if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") new("vglmff", blurb = c("Truncated Pareto distribution f(y) = shape * lower^shape /", "(y^(shape+1) * (1-(lower/upper)^shape)),", " 0 < lower < y < upper < Inf, shape>0\n", "Link: ", namesof("shape", lshape, earg = eshape), "\n", "\n", "Mean: shape*lower^shape*(upper^(1-shape)-lower^(1-shape)) /", " ((1-shape) * (1-(lower/upper)^shape))"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- namesof("shape", .lshape , earg = .eshape , tag = FALSE) if (any(y <= .lower)) stop("the value of argument 'lower' is too high ", "(requires '0 < lower < min(y)')") extra$lower <- .lower if (any(y >= .upper)) stop("the value of argument 'upper' is too low ", "(requires 'max(y) < upper')") extra$upper <- .upper if (!length(etastart)) { shape.init <- if (is.Numeric( .ishape )) 0 * y + .ishape else if ( .imethod == 2) { 0 * y + median(rep((y + 1/8) / (y - .lower + 1/8), times = w)) } else { truncpareto.Loglikfun <- function(shape, y, x, w, extraargs) { myratio <- .lower / .upper sum(c(w) * (log(shape) + shape * log( .lower ) - (shape + 1) * log(y) - log1p(-myratio^shape))) } shape.grid <- 2^((-4):4) try.this <- grid.search(shape.grid, objfun = truncpareto.Loglikfun, y = y, x = x, w = w) try.this <- rep_len(try.this, n) try.this } etastart <- theta2eta(shape.init, .lshape , earg = .eshape ) } }), list( .lshape = lshape, .eshape = eshape, .ishape = ishape, .imethod = imethod, .lower = lower, .upper = upper ))), linkinv = eval(substitute(function(eta, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) myratio <- .lower / .upper constprop <- shape * .lower^shape / (1 - myratio^shape) constprop * ( .upper^(1-shape) - .lower^(1-shape)) / (1-shape) }, list( .lshape = lshape, .lower = lower, .eshape = eshape, .upper = upper ))), last = eval(substitute(expression({ misc$link <- c(shape = .lshape ) misc$earg <- list(shape 
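# Minimal fitting sketch for truncpareto() below (hedged; 'tdata' is a
# hypothetical data frame):
#   tdata <- data.frame(y = rtruncpareto(300, lower = 1, upper = 9,
#                                        shape = 2))
#   fit <- vglm(y ~ 1, truncpareto(lower = 0.9, upper = 10), data = tdata)
#   Coef(fit)  # the shape estimate
# Note that 'lower' and 'upper' are known constants, not estimated:
# @initialize stops unless lower < min(y) and max(y) < upper, and when
# 'ishape' is NULL with imethod = 1 it grid-searches shape over
# 2^((-4):4) using the exact truncated log-likelihood.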
= .eshape ) misc$lower <- extra$lower misc$upper <- extra$upper misc$expected <- TRUE }), list( .lshape = lshape, .lower = lower, .eshape = eshape, .upper = upper ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { shape <- eta2theta(eta, .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dtruncpareto(x = y, lower = .lower , upper = .upper , shape = shape, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lshape = lshape, .lower = lower, .eshape = eshape, .upper = upper ))), vfamily = c("truncpareto"), validparams = eval(substitute(function(eta, y, extra = NULL) { shape <- eta2theta(eta, .lshape , earg = .eshape ) okay1 <- all(is.finite(shape)) && all(0 < shape) okay1 }, list( .lshape = lshape, .lower = lower, .eshape = eshape, .upper = upper ))), deriv = eval(substitute(expression({ shape <- eta2theta(eta, .lshape , earg = .eshape ) myratio <- .lower / .upper myratio2 <- myratio^shape tmp330 <- myratio2 * log(myratio) / (1 - myratio2) dl.dshape <- 1 / shape + log( .lower) - log(y) + tmp330 dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) c(w) * dl.dshape * dshape.deta }), list( .lshape = lshape, .lower = lower, .eshape = eshape, .upper = upper ))), weight = eval(substitute(expression({ ned2l.dshape2 <- 1 / shape^2 - tmp330^2 / myratio2 wz <- c(w) * dshape.deta^2 * ned2l.dshape2 wz }), list( .lshape = lshape, .lower = lower, .eshape = eshape, .upper = upper )))) } waldff <- function(llambda = "loge", ilambda = NULL) { llambda <- as.list(substitute(llambda)) elambda <- link2list(llambda) llambda <- attr(elambda, "function.name") new("vglmff", blurb = c("Standard Wald distribution\n\n", "f(y) = sqrt(lambda/(2*pi*y^3)) * ", "exp(-lambda*(y-1)^2/(2*y)), y&lambda>0", "\n", "Link: ", namesof("lambda", llambda, earg = elambda), "\n", "Mean: ", "1\n", "Variance: 1 / lambda"), infos = eval(substitute(function(...) 
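# Note: waldff() fits the *standard* Wald distribution, an inverse
# Gaussian with its mean fixed at 1, so only lambda is estimated and
# @linkinv returns 1 for every observation.  Hedged sketch (assumes
# VGAM's rinv.gaussian(); 'wdata' is a hypothetical data frame):
#   wdata <- data.frame(y = rinv.gaussian(500, mu = 1, lambda = 2))
#   fit <- vglm(y ~ 1, waldff(), data = wdata)
#   Coef(fit)  # lambda-hat, roughly 2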
{ list(M1 = 1, Q1 = 1, expected = TRUE, multipleResponses = FALSE, parameters.names = c("lambda"), llambda = .llambda ) }, list( .llambda = llambda ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, Is.positive.y = TRUE, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- namesof("lambda", .llambda , earg = .elambda , short = TRUE) if (!length(etastart)) { initlambda <- if (length( .ilambda )) .ilambda else 1 / (0.01 + (y-1)^2) initlambda <- rep_len(initlambda, n) etastart <- cbind(theta2eta(initlambda, link = .llambda , earg = .elambda )) } }), list( .llambda = llambda, .elambda = elambda, .ilambda = ilambda ))), linkinv = function(eta, extra = NULL) { 0 * eta + 1 }, last = eval(substitute(expression({ misc$link <- c(lambda = .llambda ) misc$earg <- list(lambda = .elambda ) }), list( .llambda = llambda, .elambda = elambda ))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { lambda <- eta2theta(eta, link = .llambda , earg = .elambda ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (0.5 * log(lambda/(2*pi*y^3)) - lambda * (y-1)^2 / (2*y)) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llambda = llambda, .elambda = elambda ))), vfamily = "waldff", validparams = eval(substitute(function(eta, y, extra = NULL) { lambda <- eta2theta(eta, .llambda , earg = .elambda ) okay1 <- all(is.finite(lambda)) && all(0 < lambda) okay1 }, list( .llambda = llambda, .elambda = elambda ))), deriv = eval(substitute(expression({ lambda <- eta2theta(eta, .llambda , earg = .elambda ) dl.dlambda <- 0.5 / lambda + 1 - 0.5 * (y + 1/y) dlambda.deta <- dtheta.deta(lambda, .llambda , earg = .elambda ) c(w) * cbind(dl.dlambda * dlambda.deta) }), list( .llambda = llambda, .elambda = elambda ))), weight = eval(substitute(expression({ d2l.dlambda2 <- 0.5 / lambda^2 c(w) * cbind(dlambda.deta^2 * d2l.dlambda2) }), list( .llambda = llambda, .elambda = elambda )))) } # waldff expexpff <- function(lrate = "loge", lshape = "loge", irate = NULL, ishape = 1.1, # ishape cannot be 1 tolerance = 1.0e-6, zero = NULL) { if (!is.Numeric(tolerance, positive = TRUE, length.arg = 1) || tolerance > 1.0e-2) stop("bad input for argument 'tolerance'") if (!is.Numeric(ishape, positive = TRUE)) stop("bad input for argument 'ishape'") if (length(irate) && !is.Numeric(irate, positive = TRUE)) stop("bad input for argument 'irate'") ishape[ishape == 1] <- 1.1 # Fails in @deriv iratee <- irate lratee <- as.list(substitute(lrate)) eratee <- link2list(lratee) lratee <- attr(eratee, "function.name") lshape <- as.list(substitute(lshape)) eshape <- link2list(lshape) lshape <- attr(eshape, "function.name") new("vglmff", blurb = c("Exponentiated Exponential Distribution\n", "Links: ", namesof("rate", lratee, earg = eratee), ", ", namesof("shape", lshape, earg = eshape), "\n", "Mean: (digamma(shape+1)-digamma(1)) / rate"), constraints = eval(substitute(expression({ constraints <- cm.zero.VGAM(constraints, x = x, .zero , M) }), list( .zero = zero ))), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- c(namesof("rate", .lratee , earg = .eratee , short = TRUE), namesof("shape", .lshape , earg = .eshape , short = TRUE)) if (!length(etastart)) { shape.init <- if (!is.Numeric( .ishape, positive = TRUE)) stop("argument 'ishape' must be positive") else rep_len( .ishape, n) ratee.init <- if (length( .iratee )) rep_len( .iratee , n) else (digamma(shape.init+1) - 
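# Initial-value reasoning for expexpff() (a sketch): the exponentiated
# exponential has E(Y) = (digamma(shape + 1) - digamma(1)) / rate, so
# given a trial shape the ratee.init expression here inverts that mean
# relation observation by observation (the +1/8 guards against
# near-zero responses), and the next line pools the results with a
# weighted mean.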
digamma(1)) / (y+1/8) ratee.init <- rep_len(weighted.mean(ratee.init, w = w), n) etastart <- cbind(theta2eta(ratee.init, .lratee , earg = .eratee ), theta2eta(shape.init, .lshape , earg = .eshape )) } }), list( .lshape = lshape, .lratee = lratee, .iratee = iratee, .ishape = ishape, .eshape = eshape, .eratee = eratee))), linkinv = eval(substitute(function(eta, extra = NULL) { ratee <- eta2theta(eta[, 1], .lratee , earg = .eratee ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) (digamma(shape+1) - digamma(1)) / ratee }, list( .lshape = lshape, .lratee = lratee, .eshape = eshape, .eratee = eratee))), last = eval(substitute(expression({ misc$link <- c("rate" = .lratee , "shape" = .lshape ) misc$earg <- list("rate" = .eratee , "shape" = .eshape ) misc$expected <- TRUE }), list( .lshape = lshape, .lratee = lratee, .eshape = eshape, .eratee = eratee))), loglikelihood= eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { ratee <- eta2theta(eta[, 1], .lratee , earg = .eratee ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (log(shape) + log(ratee) + (shape-1)*log1p(-exp(-ratee*y)) - ratee*y) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lratee = lratee, .lshape = lshape, .eshape = eshape, .eratee = eratee))), vfamily = c("expexpff"), validparams = eval(substitute(function(eta, y, extra = NULL) { ratee <- eta2theta(eta[, 1], .lratee , earg = .eratee ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) okay1 <- all(is.finite(ratee)) && all(0 < ratee) && all(is.finite(shape)) && all(0 < shape) okay1 }, list( .lratee = lratee, .lshape = lshape, .eshape = eshape, .eratee = eratee))), deriv = eval(substitute(expression({ ratee <- eta2theta(eta[, 1], .lratee , earg = .eratee ) shape <- eta2theta(eta[, 2], .lshape , earg = .eshape ) dl.dratee <- 1/ratee + (shape-1)*y*exp(-ratee*y)/(-expm1(-ratee*y))-y dl.dshape <- 1/shape + log1p(-exp(-ratee*y)) dratee.deta <- dtheta.deta(ratee, .lratee , earg = .eratee ) dshape.deta <- dtheta.deta(shape, .lshape , earg = .eshape ) c(w) * cbind(dl.dratee * dratee.deta, dl.dshape * dshape.deta) }), list( .lshape = lshape, .lratee = lratee, .eshape = eshape, .eratee = eratee))), weight = eval(substitute(expression({ d11 <- 1 / shape^2 # True for all shape d22 <- d12 <- rep_len(NA_real_, n) index2 <- abs(shape - 2) > .tolerance # index2 = shape != 1 largeno <- 10000 if (any(index2)) { Shape <- shape[index2] Shape[abs(Shape-1) < .tolerance] <- 1.001 # digamma(0) is undefined Scale <- ratee[index2] tmp200 <- trigamma(1)-trigamma(Shape-1) + (digamma(Shape-1)-digamma(1))^2 # Fails when Shape == 1 tmp300 <- trigamma(1)-digamma(Shape)+(digamma(Shape)-digamma(1))^2 d22[index2] <- (1 + Shape*(Shape-1)*tmp200/(Shape-2)) / Scale^2 + Shape*tmp300 / Scale^2 } if (any(!index2)) { Scale <- ratee[!index2] d22[!index2] <- (1 + 4 * sum(1/(2 + (0:largeno))^3)) / Scale^2 } index1 <- abs(shape - 1) > .tolerance # index1 <- shape != 1 if (any(index1)) { Shape <- shape[index1] Scale <- ratee[index1] d12[index1] <- -(Shape*(digamma(Shape)-digamma(1))/(Shape-1) - digamma(Shape+1) + digamma(1)) / Scale } if (any(!index1)) { Scale <- ratee[!index1] d12[!index1] <- -sum(1/(2 + (0:largeno))^2) / Scale } wz <- matrix(0, n, dimm(M)) wz[, iam(1, 1, M)] <- dratee.deta^2 * d22 wz[, iam(1, 2, M)] <- dratee.deta * dshape.deta * d12 wz[, iam(2, 2, M)] <- dshape.deta^2 * d11 c(w) * wz }), list( .tolerance = tolerance )))) } expexpff1 <- 
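# Hedged usage sketch for expexpff() above (illustrative; 'edata' is a
# hypothetical data frame).  No r-function is supplied here, but the
# CDF F(y) = (1 - exp(-rate*y))^shape inverts easily:
#   set.seed(1); rate <- 2; shape <- 3
#   edata <- data.frame(y = -log1p(-runif(400)^(1/shape)) / rate)
#   fit <- vglm(y ~ 1, expexpff(ishape = 2), data = edata, trace = TRUE)
#   Coef(fit)  # (rate, shape) estimates
# Recall 'ishape' must not be 1 (the code bumps it to 1.1) because
# parts of @deriv and @weight are undefined at shape == 1.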
function(lrate = "loge", irate = NULL, ishape = 1) { lrate <- as.list(substitute(lrate)) erate <- link2list(lrate) lrate <- attr(erate, "function.name") if (length(irate) && !is.Numeric(irate, positive = TRUE)) stop("bad input for argument 'irate'") new("vglmff", blurb = c("Exponentiated Exponential Distribution", " (profile likelihood estimation)\n", "Links: ", namesof("rate", lrate, earg = erate), "\n", "Mean: (digamma(shape+1)-digamma(1)) / rate"), initialize = eval(substitute(expression({ w.y.check(w = w, y = y, ncol.w.max = 1, ncol.y.max = 1) predictors.names <- namesof("rate", .lrate , earg = .erate , short = TRUE) if (length(w) != n || !is.Numeric(w, integer.valued = TRUE, positive = TRUE)) stop("argument 'weights' must be a vector of positive integers") if (!intercept.only) stop("this family function only works for an ", "intercept-only, i.e., y ~ 1") extra$yvector <- y extra$sumw <- sum(w) extra$w <- w if (!length(etastart)) { shape.init <- if (!is.Numeric( .ishape, positive = TRUE)) stop("argument 'ishape' must be positive") else rep_len( .ishape , n) rateinit <- if (length( .irate )) rep_len( .irate , n) else (digamma(shape.init+1) - digamma(1)) / (y+1/8) etastart <- cbind(theta2eta(rateinit, .lrate , earg = .erate )) } }), list( .lrate = lrate, .irate = irate, .ishape = ishape, .erate = erate))), linkinv = eval(substitute(function(eta, extra = NULL) { rate <- eta2theta(eta, .lrate , earg = .erate ) temp7 <- -expm1(-rate*extra$yvector) shape <- -extra$sumw / sum(extra$w*log(temp7)) # \gamma(\theta) (digamma(shape+1)-digamma(1)) / rate }, list( .lrate = lrate, .erate = erate))), last = eval(substitute(expression({ misc$link <- c("rate" = .lrate) misc$earg <- list("rate" = .erate ) temp7 <- -expm1(-rate*y) shape <- -extra$sumw / sum(w*log(temp7)) # \gamma(\theta) misc$shape <- shape # Store the ML estimate here misc$pooled.weight <- pooled.weight }), list( .lrate = lrate, .erate = erate))), loglikelihood= eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { rate <- eta2theta(eta, .lrate , earg = .erate ) temp7 <- -expm1(-rate*y) shape <- -extra$sumw / sum(w*log(temp7)) # \gamma(\theta) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * (log(shape) + log(rate) + (shape-1)*log1p(-exp(-rate*y)) - rate*y) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .lrate = lrate, .erate = erate))), vfamily = c("expexpff1"), validparams = eval(substitute(function(eta, y, extra = NULL) { rate <- eta2theta(eta, .lrate , earg = .erate ) okay1 <- all(is.finite(rate)) && all(0 < rate) okay1 }, list( .lrate = lrate, .erate = erate))), deriv = eval(substitute(expression({ rate <- eta2theta(eta, .lrate , earg = .erate ) temp6 <- exp(-rate*y) temp7 <- 1-temp6 shape <- -extra$sumw / sum(w*log(temp7)) # \gamma(\theta) d1 <- 1/rate + (shape-1)*y*temp6/temp7 - y c(w) * cbind(d1 * dtheta.deta(rate, .lrate , earg = .erate )) }), list( .lrate = lrate, .erate = erate))), weight = eval(substitute(expression({ d11 <- 1/rate^2 + y*(temp6/temp7^2) * ((shape-1) * (y*temp7+temp6) - y*temp6 / (log(temp7))^2) wz <- matrix(0, n, dimm(M)) wz[, iam(1, 1, M)] <- dtheta.deta(rate, .lrate , earg = .erate )^2 * d11 - d2theta.deta2(rate, .lrate , earg = .erate ) * d1 if (FALSE && intercept.only) { sumw <- sum(w) for (ii in 1:ncol(wz)) wz[, ii] <- sum(wz[, ii]) / sumw pooled.weight <- TRUE wz <- c(w) * wz # Put back the weights } else pooled.weight <- FALSE c(w) * wz }), list( .lrate = lrate, .erate = erate)))) } logistic <- 
function(llocation = "identitylink", lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = "scale") { ilocat <- ilocation if (!is.Numeric(imethod, length.arg = 1, integer.valued = TRUE, positive = TRUE) || imethod > 2) stop("argument 'imethod' must be 1 or 2") if (length(iscale) && !is.Numeric(iscale, positive = TRUE)) stop("bad input for argument 'iscale'") llocat <- as.list(substitute(llocation)) elocat <- link2list(llocat) llocat <- attr(elocat, "function.name") lscale <- as.list(substitute(lscale)) escale <- link2list(lscale) lscale <- attr(escale, "function.name") new("vglmff", blurb = c("Two-parameter logistic distribution\n\n", "Links: ", namesof("location", llocat, earg = elocat), ", ", namesof("scale", lscale, earg = escale), "\n", "\n", "Mean: location", "\n", "Variance: (pi * scale)^2 / 3"), constraints = eval(substitute(expression({ dotzero <- .zero M1 <- 2 Q1 <- 1 eval(negzero.expression.VGAM) }), list( .zero = zero ))), infos = eval(substitute(function(...) { list(M1 = 2, Q1 = 1, multipleResponses = TRUE, expected = TRUE, zero = .zero ) }, list( .zero = zero ))), initialize = eval(substitute(expression({ temp5 <- w.y.check(w = w, y = y, ncol.w.max = Inf, ncol.y.max = Inf, out.wy = TRUE, colsyperw = 1, maximize = TRUE) w <- temp5$w y <- temp5$y ncoly <- ncol(y) M1 <- 2 extra$ncoly <- ncoly extra$M1 <- M1 M <- M1 * ncoly mynames1 <- param.names("location", ncoly) mynames2 <- param.names("scale", ncoly) parameters.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)] predictors.names <- c(namesof(mynames1, .llocat , earg = .elocat , tag = FALSE), namesof(mynames2, .lscale , earg = .escale , tag = FALSE))[ interleave.VGAM(M, M1 = M1)] if (!length(etastart)) { if ( .imethod == 1) { locat.init <- y scale.init <- sqrt(3) * apply(y, 2, sd) / pi } else { locat.init <- scale.init <- NULL for (ii in 1:ncoly) { locat.init <- c(locat.init, median(rep(y[, ii], w[, ii]))) scale.init <- c(scale.init, sqrt(3) * sum(w[, ii] * (y[, ii] - locat.init[ii])^2) / (sum(w[, ii]) * pi)) } } locat.init <- matrix(if (length( .ilocat )) .ilocat else locat.init, n, ncoly, byrow = TRUE) if ( .llocat == "loge") locat.init <- abs(locat.init) + 0.001 scale.init <- matrix(if (length( .iscale )) .iscale else scale.init, n, ncoly, byrow = TRUE) etastart <- cbind( theta2eta(locat.init, .llocat , earg = .elocat ), theta2eta(scale.init, .lscale , earg = .escale ))[, interleave.VGAM(M, M1 = M1)] } }), list( .imethod = imethod, .elocat = elocat, .escale = escale, .llocat = llocat, .lscale = lscale, .ilocat = ilocat, .iscale = iscale ))), linkinv = eval(substitute(function(eta, extra = NULL) { M <- ncol(eta) M1 <- 2 ncoly <- M / M1 eta2theta(eta[, (1:ncoly) * M1 - 1], .llocat , earg = .elocat ) }, list( .llocat = llocat, .elocat = elocat ))), last = eval(substitute(expression({ M1 <- extra$M1 misc$link <- c(rep_len( .llocat , ncoly), rep_len( .lscale , ncoly))[interleave.VGAM(M, M1 = M1)] temp.names <- c(mynames1, mynames2)[ interleave.VGAM(M, M1 = M1)] names(misc$link) <- temp.names misc$earg <- vector("list", M) names(misc$earg) <- temp.names for (ii in 1:ncoly) { misc$earg[[M1*ii-1]] <- .elocat misc$earg[[M1*ii ]] <- .escale } misc$M1 <- M1 misc$imethod <- .imethod misc$expected <- TRUE misc$multipleResponses <- TRUE }), list( .imethod = imethod, .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale))), loglikelihood = eval(substitute( function(mu, y, w, residuals = FALSE, eta, extra = NULL, summation = TRUE) { M <- ncol(eta) M1 <- 2 ncoly <- M / M1 locat <- eta2theta(eta[, 
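# Hedged sketch for logistic() (illustrative; 'ldata' is hypothetical):
#   ldata <- data.frame(x2 = runif(200))
#   ldata <- transform(ldata,
#                      y = rlogis(200, location = 1 + 2 * x2, scale = 0.5))
#   fit <- vglm(y ~ x2, logistic(zero = "scale"), data = ldata)
#   coef(fit, matrix = TRUE)
# The default zero = "scale" makes each scale parameter intercept-only.
# The EIM is diagonal: ned2l.dlocat2 = 1/(3*Scale^2) and
# ned2l.dscale2 = (3 + pi^2)/(9*Scale^2), as used in @weight below.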
(1:ncoly)*M1-1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, (1:ncoly)*M1 ], .lscale , earg = .escale ) if (residuals) { stop("loglikelihood residuals not implemented yet") } else { ll.elts <- c(w) * dlogis(x = y, location = locat, scale = Scale, log = TRUE) if (summation) { sum(ll.elts) } else { ll.elts } } }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale))), vfamily = c("logistic"), validparams = eval(substitute(function(eta, y, extra = NULL) { M1 <- 2; M <- NCOL(eta) ncoly <- M / M1 locat <- eta2theta(eta[, (1:ncoly)*M1-1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, (1:ncoly)*M1 ], .lscale , earg = .escale ) okay1 <- all(is.finite(locat)) && all(is.finite(Scale)) && all(0 < Scale) okay1 }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale))), simslot = eval(substitute( function(object, nsim) { pwts <- if (length(pwts <- object@prior.weights) > 0) pwts else weights(object, type = "prior") if (any(pwts != 1)) warning("ignoring prior weights") eta <- predict(object) locat <- eta2theta(eta[, c(TRUE, FALSE)], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, c(FALSE, TRUE)], .lscale , earg = .escale ) rlogis(nsim * length(Scale), location = locat, scale = Scale) }, list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale))), deriv = eval(substitute(expression({ M1 <- 2 ncoly <- M / M1 locat <- eta2theta(eta[, (1:ncoly)*M1-1], .llocat , earg = .elocat ) Scale <- eta2theta(eta[, (1:ncoly)*M1 ], .lscale , earg = .escale ) zedd <- (y - locat) / Scale ezedd <- exp(-zedd) dl.dlocat <- (-expm1(-zedd)) / ((1 + ezedd) * Scale) dl.dscale <- zedd * (-expm1(-zedd)) / ((1 + ezedd) * Scale) - 1 / Scale dlocat.deta <- dtheta.deta(locat, .llocat , earg = .elocat ) dscale.deta <- dtheta.deta(Scale, .lscale , earg = .escale ) myderiv <- c(w) * cbind(dl.dlocat * dlocat.deta, dl.dscale * dscale.deta) myderiv[, interleave.VGAM(M, M1 = M1)] }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale))), weight = eval(substitute(expression({ ned2l.dlocat2 <- 1 / (3 * Scale^2) ned2l.dscale2 <- (3 + pi^2) / (9 * Scale^2) wz <- matrix(NA_real_, nrow = n, ncol = M) # diagonal wz[, (1:ncoly) * M1 - 1] <- ned2l.dlocat2 * dlocat.deta^2 wz[, (1:ncoly) * M1 ] <- ned2l.dscale2 * dscale.deta^2 w.wz.merge(w = w, wz = wz, n = n, M = M, ndepy = ncoly) }), list( .llocat = llocat, .lscale = lscale, .elocat = elocat, .escale = escale)))) } VGAM/R/family.vglm.R0000644000176200001440000000105613135276757013577 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. if (FALSE) family.vglm <- function(object, ...) object$vfamily if (FALSE) print.vfamily <- function(x, ...) { f <- x$vfamily if (is.null(f)) stop("not a VGAM family function") nn <- x$blurb if (is.null(nn)) invisible(return(x)) cat("Family: ", f[1], "\n") if (length(f)>1) cat("Classes:", paste(f, collapse=", "), "\n") cat("\n") for (ii in seq_along(nn)) cat(nn[ii]) cat("\n") invisible(return(x)) } VGAM/R/plot.vglm.R0000644000176200001440000000346013135276757013275 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. plotvlm <- function(object, residuals = NULL, rugplot= FALSE, ...) { stop("sorry, this function hasn't been written yet") } plotvglm <- function(x, which = "(All)", ...) 
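# Hedged usage note: for a vglm() fit, plot(fit) dispatches here.  The
# method draws Pearson residuals against each of the M linear
# predictors, then against the corresponding hat values (2*M panels in
# all), with 'which' selecting a subset, e.g.
#   plotvglm(fit, which = 1:2)
# Near-constant x values are jittered so the panels remain legible.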
{ show <- rep(FALSE, 10000) if (is.character(which) && which == "(All)") { show[TRUE] <- TRUE } else { show[which] <- TRUE } presid <- resid(x, type = "pearson") if (!is.matrix(presid) == 1) presid <- as.matrix(presid) lapred <- predict(x) M <- ncol(lapred) for (jay in 1:M) { if (show[jay]) { use.x <- lapred[, jay] if (one.x <- diff(range(use.x)) < 1e-10) use.x[TRUE] <- jitter(mean(use.x)) plot(use.x, presid[, jay], ylab = "Pearson residuals", xlab = paste(if (one.x) "Jittered l" else "L", "inear predictor ", jay, sep = ""), ...) } } hvmat <- hatvalues(x) for (jay in 1:M) { if (show[M + jay]) { use.x <- hvmat[, jay] if (one.x <- diff(range(use.x)) < 1e-10) use.x[TRUE] <- jitter(mean(use.x)) plot(use.x, presid[, jay], ylab = "Pearson residuals", xlab = paste(if (one.x) "Jittered h" else "H", "at values for linear predictor ", jay, sep = ""), ...) } } invisible(x) } setMethod("plot", "vlm", function(x, y, ...) { if (!missing(y)) stop("cannot process the 'y' argument") invisible(plotvlm(x, y, ...))}) setMethod("plot", "vglm", function(x, y, ...) { if (!missing(y)) stop("cannot process the 'y' argument") invisible(plotvglm(x = x, ...))}) VGAM/R/summary.vglm.q0000644000176200001440000003161313135276757014054 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. yformat <- function(x, digits = options()$digits) { format(ifelse(abs(x) < 0.001, signif(x, digits), round(x, digits))) } summaryvglm <- function(object, correlation = FALSE, dispersion = NULL, digits = NULL, presid = TRUE, hde.NA = TRUE, threshold.hde = 0.001, signif.stars = getOption("show.signif.stars"), nopredictors = FALSE, ... # Added 20151211 ) { if (length(dispersion) && dispersion == 0 && length(object@family@summary.dispersion) && !object@family@summary.dispersion) { stop("cannot use the general VGLM formula (based on a residual ", "sum of squares) for computing the dispersion parameter") } stuff <- summaryvlm( object, presid = FALSE, correlation = correlation, dispersion = dispersion) infos.fun <- object@family@infos infos.list <- infos.fun() summary.pvalues <- if (is.logical(infos.list$summary.pvalues)) infos.list$summary.pvalues else TRUE if (!summary.pvalues && ncol(stuff@coef3) == 4) stuff@coef3 <- stuff@coef3[, -4] # Delete the pvalues column answer <- new("summary.vglm", object, coef3 = stuff@coef3, cov.unscaled = stuff@cov.unscaled, correlation = stuff@correlation, df = stuff@df, sigma = stuff@sigma) if (presid) { Presid <- resid(object, type = "pearson") if (length(Presid)) answer@pearson.resid <- as.matrix(Presid) } slot(answer, "misc") <- stuff@misc # Replace answer@misc$signif.stars <- signif.stars # 20140728 answer@misc$nopredictors <- nopredictors # 20150831 if (is.numeric(stuff@dispersion)) slot(answer, "dispersion") <- stuff@dispersion try.this <- findFirstMethod("summaryvglmS4VGAM", object@family@vfamily) if (length(try.this)) { new.postslot <- summaryvglmS4VGAM(object = object, VGAMff = new(try.this), ...) answer@post <- new.postslot } else { } answer@post$hdeff <- hdeff(object, derivative = 2) answer@post$hde.NA <- hde.NA answer@post$threshold.hde <- threshold.hde answer } setMethod("summaryvglmS4VGAM", signature(VGAMff = "cumulative"), function(object, VGAMff, ...) { object@post <- callNextMethod(VGAMff = VGAMff, object = object, ...) 
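# The summaryvglmS4VGAM()/showsummaryvglmS4VGAM() method pairs here let
# individual families append extras to summary(): for cumulative() the
# exponentiated non-intercept coefficients are stashed in @post and
# printed, and for multinomial() the reference level of the response is
# reported.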
object@post$reverse <- object@misc$reverse cfit <- coef(object, matrix = TRUE) M <- ncol(cfit) if (rownames(cfit)[1] == "(Intercept)") object@post$expcoeffs <- exp(coef(object)[-(1:M)]) object@post }) setMethod("showsummaryvglmS4VGAM", signature(VGAMff = "cumulative"), function(object, VGAMff, ...) { if (length(object@post$expcoeffs)) { cat("\nExponentiated coefficients:\n") print(object@post$expcoeffs) } if (FALSE) { if (object@post$reverse) cat("Reversed\n\n") else cat("Not reversed\n\n") } }) setMethod("summaryvglmS4VGAM", signature(VGAMff = "multinomial"), function(object, VGAMff, ...) { object@post <- callNextMethod(VGAMff = VGAMff, object = object, ...) object@post$refLevel <- object@misc$refLevel object@post }) setMethod("showsummaryvglmS4VGAM", signature(VGAMff = "multinomial"), function(object, VGAMff, ...) { cat("\nReference group is level ", object@post$refLevel, " of the response\n") callNextMethod(VGAMff = VGAMff, object = object, ...) }) setMethod("summaryvglmS4VGAM", signature(VGAMff = "VGAMcategorical"), function(object, VGAMff, ...) { object@post }) setMethod("showsummaryvglmS4VGAM", signature(VGAMff = "VGAMcategorical"), function(object, VGAMff, ...) { }) setMethod("logLik", "summary.vglm", function(object, ...) logLik.vlm(object, ...)) show.summary.vglm <- function(x, digits = max(3L, getOption("digits") - 3L), # Same as glm() quote = TRUE, prefix = "", presid = TRUE, hde.NA = TRUE, threshold.hde = 0.001, signif.stars = NULL, # Use this if logical; 20140728 nopredictors = NULL, # Use this if logical; 20150831 top.half.only = FALSE, # Added 20160803 ... # Added 20151214 ) { M <- x@misc$M coef3 <- x@coef3 # icients correl <- x@correlation digits <- if (is.null(digits)) options()$digits - 2 else digits cat("\nCall:\n", paste(deparse(x@call), sep = "\n", collapse = "\n"), "\n\n", sep = "") if (is.logical(x@post$hde.NA) && x@post$hde.NA) { if (length(hado <- x@post$hdeff)) { HDE <- is.Numeric(hado[, "deriv1"]) & # Could be all NAs hado[, "deriv1"] < 0 if (any(HDE) && ncol(coef3) == 4) { HDE <- HDE & (x@post$threshold.hde < coef3[, 4]) coef3[HDE, 3:4] <- NA # 3:4 means WaldStat and p-value } } } Presid <- x@pearson.resid rdf <- x@df[2] if (presid && length(Presid) && all(!is.na(Presid)) && is.finite(rdf)) { if (rdf/M > 5) { rq <- apply(as.matrix(Presid), 2, quantile) # 5 x M dimnames(rq) <- list(c("Min", "1Q", "Median", "3Q", "Max"), x@misc$predictors.names) cat("\nPearson residuals:\n") print(t(rq), digits = digits) } else if (rdf > 0) { cat("\nPearson residuals:\n") print(Presid, digits = digits) } } use.signif.stars <- if (is.logical(signif.stars)) signif.stars else x@misc$signif.stars # 20140728 if (!is.logical(use.signif.stars)) use.signif.stars <- getOption("show.signif.stars") use.nopredictors <- if (is.logical(nopredictors)) nopredictors else x@misc$nopredictors # 20140728 if (!is.logical(use.nopredictors)) { warning("cannot determine 'nopredictors'; choosing FALSE") use.nopredictors <- FALSE } if (length(coef3)) { cat(if (top.half.only) "\nParametric coefficients:" else "\nCoefficients:", "\n") printCoefmat(coef3, digits = digits, signif.stars = use.signif.stars, na.print = "NA") } if (top.half.only) return(invisible(NULL)) cat("\nNumber of linear predictors: ", M, "\n") if (!is.null(x@misc$predictors.names) && !use.nopredictors) { if (M == 1) { cat("\nName of linear predictor:", paste(x@misc$predictors.names, collapse = ", "), "\n") } else if (M <= 5) { cat("\nNames of linear predictors:", paste(x@misc$predictors.names, collapse = ", "), fill = TRUE) } } prose <- "" 
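# Note on the Hauck-Donner handling earlier in this method: when
# hde.NA = TRUE, any estimate whose Wald statistic is past the tipping
# point (a negative first derivative reported by hdeff()) has its
# z value and p value printed as NA, provided the p value exceeds
# threshold.hde; the affected estimates are also listed by name in a
# warning paragraph printed near the end of the summary.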
if (length(x@dispersion)) { if (is.logical(x@misc$estimated.dispersion) && x@misc$estimated.dispersion) { prose <- "(Estimated) " } else { if (is.numeric(x@misc$default.dispersion) && x@dispersion == x@misc$default.dispersion) prose <- "(Default) " if (is.numeric(x@misc$default.dispersion) && x@dispersion != x@misc$default.dispersion) prose <- "(Pre-specified) " } if (any(x@dispersion != 1)) cat(paste("\n", prose, "Dispersion Parameter for ", x@family@vfamily[1], " family: ", yformat(x@dispersion, digits), "\n", sep = "")) } if (length(deviance(x))) { cat("\nResidual deviance:", yformat(deviance(x), digits)) if (is.finite(rdf)) cat(" on", round(rdf, digits), "degrees of freedom\n") else cat("\n") } if (length(vll <- logLik.vlm(x))) { cat("\nLog-likelihood:", yformat(vll, digits)) if (is.finite(rdf)) cat(" on", round(rdf, digits), "degrees of freedom\n") else cat("\n") } if (length(x@criterion)) { ncrit <- names(x@criterion) for (ii in ncrit) if (ii != "loglikelihood" && ii != "deviance") cat(paste(ii, ":", sep = ""), yformat(x@criterion[[ii]], digits), "\n") } cat("\nNumber of iterations:", format(trunc(x@iter)), "\n") if (!is.null(correl)) { ncol.X.vlm <- dim(correl)[2] if (ncol.X.vlm > 1) { cat("\nCorrelation of Coefficients:\n") ll <- lower.tri(correl) correl[ll] <- format(round(correl[ll], digits)) correl[!ll] <- "" print(correl[-1, -ncol.X.vlm, drop = FALSE], quote = FALSE, digits = digits) } } if (length(hado <- x@post$hdeff)) { if (is.Numeric(hado[, "deriv1"]) & # Could be all NAs all(hado[, "deriv1"] > 0)) cat("\nNo Hauck-Donner effect found in any of the estimates\n") if (is.Numeric(hado[, "deriv1"]) & # Could be all NAs any(hado[, "deriv1"] < 0)) { cat("\nWarning: Hauck-Donner effect detected in the following estimate(s):\n") cat(paste("'", rownames(hado)[hado[, "deriv1"] < 0], "'", collapse = ", ", sep = "")) cat("\n") } } try.this <- findFirstMethod("showsummaryvglmS4VGAM", x@family@vfamily) if (length(try.this)) { showsummaryvglmS4VGAM(object = x, VGAMff = new(try.this), ...) } else { } invisible(NULL) } setMethod("summary", "vglm", function(object, ...) summaryvglm(object, ...)) setMethod("show", "summary.vglm", function(object) show.summary.vglm(object)) if (FALSE) show.summary.binom2.or <- function(x, digits = max(3L, getOption("digits") - 3L) # Same as glm() ) { if (length(x@post$oratio) == 1 && is.numeric(x@post$oratio)) { cat("\nOdds ratio: ", round(x@post$oratio, digits), "\n") } } if (FALSE) setMethod("show", "summary.binom2.or", function(object) show.summary.vglm(object)) vcovdefault <- function(object, ...) { if (is.null(object@vcov)) stop("no default") object@vcov } vcov.vlm <- function(object, ...) { vcovvlm(object, ...) 
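# Hedged sketch of the 'untransform' option handled by vcovvlm() below
# ('fit' is hypothetical): for an intercept-only fit with trivial
# constraints,
#   vcov(fit)                       # covariance on the eta scale
#   vcov(fit, untransform = TRUE)   # delta method: parameter scale
# The untransformed version rescales by the outer product of
# d(theta)/d(eta) evaluated at the fitted linear predictors.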
} vcovvlm <- function(object, dispersion = NULL, untransform = FALSE) { so <- summaryvlm(object, correlation = FALSE, dispersion = dispersion) d <- if (any(slotNames(so) == "dispersion") && is.Numeric(so@dispersion)) so@dispersion else 1 answer <- d * so@cov.unscaled if (is.logical(OKRC <- object@misc$RegCondOK) && !OKRC) warning("MLE regularity conditions were violated ", "at the final iteration of the fitted object") if (!untransform) return(answer) new.way <- TRUE if (!is.logical(object@misc$intercept.only)) stop("cannot determine whether the object is", "an intercept-only fit, i.e., 'y ~ 1' is the response") if (!object@misc$intercept.only) stop("object must be an intercept-only fit, i.e., ", "y ~ 1 is the response") if (!all(trivial.constraints(constraints(object)) == 1)) stop("object must have trivial constraints") M <- object@misc$M tvector <- numeric(M) etavector <- predict(object)[1, ] # Contains transformed parameters LINK <- object@misc$link EARG <- object@misc$earg # This could be a NULL if (is.null(EARG)) EARG <- list(theta = NULL) if (!is.list(EARG)) stop("the 'earg' component of 'object@misc' must be a list") if (length(LINK) != M && length(LINK) != 1) stop("cannot obtain the link functions to untransform 'object'") if (!is.character(LINK)) stop("the 'link' component of 'object@misc' should ", "be a character vector") learg <- length(EARG) llink <- length(LINK) if (llink != learg) stop("the 'earg' component of 'object@misc' should ", "be a list of length ", learg) level1 <- length(EARG) > 3 && length(intersect(names(EARG), c("theta", "inverse", "deriv", "short", "tag"))) > 3 if (level1) EARG <- list(oneOnly = EARG) learg <- length(EARG) for (ii in 1:M) { TTheta <- etavector[ii] # Transformed theta use.earg <- if (llink == 1) EARG[[1]] else EARG[[ii]] function.name <- if (llink == 1) LINK else LINK[ii] if (new.way) { use.earg[["inverse"]] <- TRUE # New use.earg[["theta"]] <- TTheta # New Theta <- do.call(function.name, use.earg) use.earg[["inverse"]] <- TRUE # Reset this use.earg[["deriv"]] <- 1 # New use.earg[["theta"]] <- Theta # Renew this tvector[ii] <- do.call(function.name, use.earg) } else { stop("link functions handled in the new way now") } } # of for (ii in 1:M) tvector <- abs(tvector) answer <- (cbind(tvector) %*% rbind(tvector)) * answer if (length(dmn2 <- names(object@misc$link)) == M) dimnames(answer) <- list(dmn2, dmn2) answer } setMethod("vcov", "vlm", function(object, ...) vcovvlm(object, ...)) setMethod("vcov", "vglm", function(object, ...) vcovvlm(object, ...)) VGAM/R/s.vam.q0000644000176200001440000002042313135276757012434 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. 
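# s.vam() is the vector-additive-model backfitting driver behind vgam():
# it matches the smoothing variables once via vgam.match() (cached in
# 'smooth.frame' across IRLS iterations) and then calls the C routine
# "Yee_vbfa" to cycle B-spline smooths over the s() terms until
# bf.epsilon or bf.maxit is reached.  Hedged sketch ('hdata' is a
# hypothetical data frame):
#   fit <- vgam(y ~ s(x2, df = 3), poissonff, data = hdata)
#   fit@nl.df  # nonlinear df, cf. the 'nl.df' component returned here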
s.vam <- function(x, zedd, wz, smomat, which, smooth.frame, bf.maxit = 10, bf.epsilon = 0.001, trace = FALSE, se.fit = TRUE, X.vlm.save, Hlist, ncolHlist, M, qbig, Umat, all.knots = FALSE, nk = NULL, sf.only = FALSE) { nwhich <- names(which) dX.vlm <- as.integer(dim(X.vlm.save)) pbig <- dX.vlm[2] if (!length(smooth.frame$first)) { data <- smooth.frame[, nwhich, drop = FALSE] smooth.frame <- vgam.match(data, all.knots = all.knots, nk = nk) smooth.frame$first <- TRUE # Only executed at the first time dx <- as.integer(dim(x)) smooth.frame$n.lm <- dx[1] smooth.frame$p.lm <- dx[2] attr(data, "class") <- NULL osparv <- lapply(data, attr, "spar") # "o" for original odfvec <- lapply(data, attr, "df") s.xargument <- lapply(data, attr, "s.xargument") for (kk in seq_along(nwhich)) { ii <- nwhich[kk] temp <- osparv[[ii]] if (!is.numeric(temp) || any(temp < 0)) { stop("spar cannot be negative or non-numeric") } if (length(temp) > ncolHlist[ii]) { warning("only the first ", ncolHlist[ii], " values of ", "'spar' are used for variable '", s.xargument, "'") } osparv[[ii]] <- rep_len(temp, ncolHlist[ii]) # Recycle temp <- odfvec[[ii]] if (!is.numeric(temp) || any(temp < 1)) { stop("argument 'df' is non-numeric or less than 1") } if (length(temp) > ncolHlist[ii]) { warning("only the first ", ncolHlist[ii], " value(s) of 'df' ", "are used for variable '", s.xargument, "'") } odfvec[[ii]] <- rep_len(temp, ncolHlist[ii]) # Recycle if (max(temp) > smooth.frame$neffec[kk]-1) { stop("'df' value too high for variable '", s.xargument, "'") } if (any(osparv[[ii]] != 0) && any(odfvec[[ii]] != 4)) { stop("cannot specify both 'spar' and 'df'") } } # End of kk loop osparv <- unlist(osparv) odfvec <- unlist(odfvec) smooth.frame$osparv <- osparv # Original smooth.frame$odfvec <- odfvec # Original if (sum(smooth.frame$dfvec[smooth.frame$osparv == 0]) + pbig > smooth.frame$n.lm * sum(ncolHlist[nwhich])) { stop("too many parameters/dof for data on hand") } xnrow.X.vlm <- labels(X.vlm.save)[[2]] asgn <- attr(X.vlm.save, "assign") aa <- NULL for (ii in nwhich) { aa <- c(aa, xnrow.X.vlm[asgn[[ii]]]) } smooth.frame$ndfsparv <- aa # Stored here smooth.frame$xnrow.X.vlm <- xnrow.X.vlm # Stored here smooth.frame$s.xargument <- s.xargument # Stored here smooth.frame$smap <- as.vector(cumsum(c(1, ncolHlist[nwhich]))[seq_along(nwhich)]) smooth.frame$try.sparv <- osparv smooth.frame$bindex <- as.integer(cumsum(c(1, smooth.frame$nknots * ncolHlist[nwhich]))) smooth.frame$lindex <- as.integer(cumsum(c(1, smooth.frame$neffec * ncolHlist[nwhich]))) smooth.frame$kindex <- as.integer(cumsum(c(1, 4 + smooth.frame$nknots))) } else { smooth.frame$first <- FALSE } if (sf.only) { return(smooth.frame) } ldk <- 3 * max(ncolHlist[nwhich]) + 1 # 20020711 which <- unlist(which) p.lm <- smooth.frame$p.lm n.lm <- smooth.frame$n.lm dim2wz <- if (is.matrix(wz)) ncol(wz) else 1 dim1U <- if (is.matrix(Umat)) nrow(Umat) else 1 nHlist <- names(Hlist) for (ii in length(nHlist):1) { if (!any(nHlist[ii] == nwhich)) { Hlist[[ii]] <- NULL } } trivc <- trivial.constraints(Hlist) ncbvec <- ncolHlist[nwhich] ncolbmax <- max(ncbvec) contr.sp <- list(low = -1.5, ## low = 0. 
was default till R 1.3.x high = 1.5, tol = 1e-4, ## tol = 0.001 was default till R 1.3.x eps = 2e-8, ## eps = 0.00244 was default till R 1.3.x maxit = 500) fit <- .C("Yee_vbfa", # --------------------------------- npetc = as.integer(c(n.lm, p.lm, length(which), se.fit, 0, bf.maxit, qrank = 0, M, nbig = n.lm * M, pbig, qbig, dim2wz, dim1U, ier = 0, ldk = ldk, # ldk may be unused contr.sp$maxit, iinfo = 0 )), doubvec = as.double(c(bf.epsilon, resSS = 0, unlist(contr.sp[1:4]))), as.double(x), y = as.double(zedd), wz = as.double(wz), dfvec = as.double(smooth.frame$odfvec + 1), # 20130427; + 1 added lamvec = double(length(smooth.frame$odfvec)), sparv = as.double(smooth.frame$try.sparv), as.integer(smooth.frame$matcho), as.integer(smooth.frame$neffec), as.integer(which), smomat = as.double(smomat), etamat = double(M * n.lm), beta = double(pbig), varmat = if (se.fit) as.double(smomat) else double(1), qr = as.double(X.vlm.save), qraux = double(pbig), qpivot = as.integer(1:pbig), as.double(Umat), as.double(unlist(Hlist)), as.integer(ncbvec), as.integer(smooth.frame$smap), trivc = as.integer(trivc), levmat = double(sum(smooth.frame$neffec * ncbvec)), # 20130427; bcoefficients = double(sum(smooth.frame$nknots * ncbvec)), knots = as.double(unlist(smooth.frame$knots)), bindex = as.integer(smooth.frame$bindex), lindex = as.integer(smooth.frame$lindex), nknots = as.integer(smooth.frame$nknots), kindex = as.integer(smooth.frame$kindex)) # End of dotC if (exists("flush.console")) flush.console() dim(fit$qr) <- dim(X.vlm.save) dimnames(fit$qr) <- dimnames(X.vlm.save) dim(fit$y) <- dim(zedd) dimnames(fit$y) <- dimnames(zedd) dim(fit$smomat) <- dim(smomat) dimnames(fit$smomat) <- dimnames(smomat) # Needed for vgam.nlchisq if (se.fit) { dim(fit$varmat) <- dim(smomat) dimnames(fit$varmat) <- dimnames(smomat) } if (fit$npetc[14] != 0 || fit$npetc[17] != 0) { stop("something went wrong in the C function 'vbfa'") } fit$etamat <- if (M > 1) matrix(fit$etamat, n.lm, M, byrow = TRUE) else c(fit$etamat) # May no longer be a matrix nits <- fit$npetc[5] qrank <- fit$npetc[7] if (smooth.frame$first) { smooth.frame$try.sparv <- fit$sparv } if ((nits == bf.maxit) && bf.maxit > 1) { warning("'s.vam()' convergence not obtained in ", bf.maxit, " iterations") } R <- fit$qr[1:pbig, 1:pbig] R[lower.tri(R)] <- 0 Bspline <- vector("list", length(nwhich)) names(Bspline) <- nwhich for (ii in seq_along(nwhich)) { b.coefs <- fit$bcoeff[(smooth.frame$bindex[ii]): (smooth.frame$bindex[ii + 1] - 1)] b.coefs <- matrix(b.coefs, ncol = ncolHlist[nwhich[ii]]) Bspline[[ii]] <- new("vsmooth.spline.fit", "Bcoefficients" = b.coefs, "xmax" = smooth.frame$xmax[ii], "xmin" = smooth.frame$xmin[ii], "knots" = as.vector(smooth.frame$knots[[ii]])) } Leverages <- vector("list", length(nwhich)) names(Leverages) <- nwhich for (ii in seq_along(nwhich)) { levvec <- fit$levmat[(smooth.frame$lindex[ii]): (smooth.frame$lindex[ii+1]-1)] levmat <- matrix(levvec, nrow = smooth.frame$neffec[ii], ncol = ncolHlist[nwhich[ii]]) Leverages[[ii]] <- levmat } nl.df <- fit$dfvec - 1 # Decrement/increment ? retlist <- list( Bspline = Bspline, coefficients = fit$beta, df.residual = n.lm * M - qrank - sum(nl.df), # Decrement/increment ? 
fitted.values = fit$etamat, Leverages = Leverages, nl.df = nl.df, qr = list(qr = fit$qr, rank = qrank, qraux = fit$qraux, pivot = fit$qpivot), R = R, rank = qrank, residuals = fit$y - fit$etamat, ResSS = fit$doubvec[2], smomat = fit$smomat, sparv = fit$sparv, s.xargument = unlist(smooth.frame$s.xargument)) names(retlist$coefficients) <- smooth.frame$xnrow.X.vlm names(retlist$sparv) <- names(retlist$nl.df) <- smooth.frame$ndfspar if (se.fit) { retlist <- c(retlist, list(varmat = fit$varmat)) } c(list(smooth.frame = smooth.frame), retlist) } VGAM/R/bAIC.q0000644000176200001440000002060113135276757012144 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. check.omit.constant <- function(object) { if (is.logical(object@misc$needto.omit.constant) && object@misc$needto.omit.constant && !object@misc$omit.constant) warning("Probably 'omit.constant = TRUE' should have been set. ", "See the family function '", object@family@vfamily[1], "' help file.") } if (!isGeneric("nparam")) setGeneric("nparam", function(object, ...) standardGeneric("nparam"), package = "VGAM") nparam.vlm <- function(object, dpar = TRUE, ...) { estdisp <- object@misc$estimated.dispersion check.omit.constant(object) no.dpar <- if (length(estdisp) && is.logical(estdisp) && estdisp) length(object@misc$dispersion) else 0 tot.par <- length(coefvlm(object)) + as.numeric(dpar) * no.dpar tot.par } nparam.vgam <- function(object, dpar = TRUE, linear.only = FALSE, ...) { estdisp <- object@misc$estimated.dispersion check.omit.constant(object) no.dpar <- if (length(estdisp) && is.logical(estdisp) && estdisp) length(object@misc$dispersion) else 0 nldf <- if (is.Numeric(object@nl.df)) sum(object@nl.df) else 0 if (linear.only) { length(coefvlm(object)) + as.numeric(dpar) * no.dpar } else { length(coefvlm(object)) + as.numeric(dpar) * no.dpar + nldf } } nparam.rrvglm <- function(object, dpar = TRUE, ...) { check.omit.constant(object) estdisp <- object@misc$estimated.dispersion no.dpar <- if (length(estdisp) && is.logical(estdisp) && estdisp) length(object@misc$dispersion) else 0 str0 <- object@control$str0 MMM <- object@misc$M Rank <- object@control$Rank elts.tildeA <- (MMM - Rank - length(str0)) * Rank length(coefvlm(object)) + as.numeric(dpar) * no.dpar + elts.tildeA } nparam.qrrvglm <- function(object, dpar = TRUE, ...) 
{ check.omit.constant(object) estdisp <- object@misc$estimated.dispersion no.dpar <- if (length(estdisp) && is.logical(estdisp) && estdisp) length(object@misc$dispersion) else 0 str0 <- object@control$str0 MMM <- object@misc$M Rank <- object@control$Rank elts.tildeA <- (MMM - Rank - length(str0)) * Rank eq.tolerances <- object@control$eq.tolerances I.tolerances <- object@control$I.tolerances if (!(length(eq.tolerances) == 1 && is.logical(eq.tolerances))) stop("could not determine whether the fitted object used an ", "equal-tolerances assumption based on ", "argument 'eq.tolerances'") if (!(length(I.tolerances) == 1 && is.logical(I.tolerances))) stop("could not determine whether the fitted object used an ", "equal-tolerances assumption based on argument 'I.tolerances'") NOS <- if (length(object@y)) ncol(object@y) else MMM MSratio <- MMM / NOS # First value is g(mean) = quadratic form in l if (round(MSratio) != MSratio) stop("variable 'MSratio' is not an integer") elts.D <- ifelse(I.tolerances || eq.tolerances, 1, NOS) * Rank * (Rank + 1) / 2 elts.B1 <- length(object@extra$B1) elts.C <- length(object@extra$Cmat) num.params <- elts.B1 + elts.tildeA + elts.D + elts.C num.params } nparam.rrvgam <- function(object, dpar = TRUE, ...) { check.omit.constant(object) estdisp <- object@misc$estimated.dispersion no.dpar <- if (length(estdisp) && is.logical(estdisp) && estdisp) length(object@misc$dispersion) else 0 str0 <- object@control$str0 MMM <- object@misc$M Rank <- object@control$Rank NOS <- if (length(object@y)) ncol(object@y) else MMM MSratio <- MMM / NOS # First value is g(mean) = quadratic form in l if (round(MSratio) != MSratio) stop("variable 'MSratio' is not an integer") elts.B1 <- length(object@extra$B1) # 0 since a NULL elts.C <- length(object@extra$Cmat) elts.df1.nl <- sum(object@extra$df1.nl) num.params <- elts.B1 + elts.C + ( 2 * length(object@extra$df1.nl) + elts.df1.nl) - (Rank + length(str0)) * Rank num.params } setMethod("nparam", "vlm", function(object, ...) nparam.vlm(object, ...)) setMethod("nparam", "vglm", function(object, ...) nparam.vlm(object, ...)) setMethod("nparam", "vgam", function(object, ...) nparam.vgam(object, ...)) setMethod("nparam", "rrvglm", function(object, ...) nparam.rrvglm(object, ...)) setMethod("nparam", "qrrvglm", function(object, ...) nparam.qrrvglm(object, ...)) setMethod("nparam", "rrvgam", function(object, ...) nparam.rrvgam(object, ...)) if (!isGeneric("AIC")) setGeneric("AIC", function(object, ..., k = 2) standardGeneric("AIC"), package = "VGAM") AICvlm <- function(object, ..., corrected = FALSE, k = 2) { estdisp <- object@misc$estimated.dispersion tot.par <- nparam.vlm(object, dpar = TRUE) ans <- (-2) * logLik.vlm(object, ...) + k * tot.par if (corrected) { ans <- ans + k * tot.par * (tot.par + 1) / ( nobs(object) - tot.par - 1) } ans } AICvgam <- function(object, ..., k = 2) { sum.lco.no.dpar.nldf <- nparam.vgam(object, dpar = TRUE, linear.only = FALSE) -2 * logLik.vlm(object, ...) + k * sum.lco.no.dpar.nldf } AICrrvglm <- function(object, ..., k = 2) { sum.lco.no.dpar.A <- nparam.rrvglm(object, dpar = TRUE) (-2) * logLik.vlm(object, ...) + k * sum.lco.no.dpar.A } AICqrrvglm <- function(object, ..., k = 2) { loglik.try <- logLik.qrrvglm(object, ...) if (!is.numeric(loglik.try)) warning("cannot compute the log-likelihood of 'object'. 
", "Returning NULL") num.params <- nparam.qrrvglm(object, dpar = TRUE) if (is.numeric(loglik.try)) { (-2) * loglik.try + k * num.params } else { NULL } } AICrrvgam <- function(object, ..., k = 2) { loglik.try <- logLik(object, ...) if (!is.numeric(loglik.try)) warning("cannot compute the log-likelihood of 'object'. ", "Returning NULL") num.params <- nparam.rrvgam(object, dpar = TRUE) if (is.numeric(loglik.try)) { (-2) * loglik.try + k * num.params } else { NULL } } setMethod("AIC", "vlm", function(object, ..., k = 2) AICvlm(object, ..., k = k)) setMethod("AIC", "vglm", function(object, ..., k = 2) AICvlm(object, ..., k = k)) setMethod("AIC", "vgam", function(object, ..., k = 2) AICvgam(object, ..., k = k)) setMethod("AIC", "rrvglm", function(object, ..., k = 2) AICrrvglm(object, ..., k = k)) setMethod("AIC", "qrrvglm", function(object, ..., k = 2) AICqrrvglm(object, ..., k = k)) setMethod("AIC", "rrvgam", function(object, ..., k = 2) AICrrvgam(object, ..., k = k)) if (!isGeneric("AICc")) setGeneric("AICc", function(object, ..., k = 2) standardGeneric("AICc"), package = "VGAM") setMethod("AICc", "vlm", function(object, ..., k = 2) AICvlm(object, ..., corrected = TRUE, k = k)) setMethod("AICc", "vglm", function(object, ..., k = 2) AICvlm(object, ..., corrected = TRUE, k = k)) if (!isGeneric("BIC")) setGeneric("BIC", function(object, ..., k = log(nobs(object))) standardGeneric("BIC"), package = "VGAM") BICvlm <- function(object, ..., k = log(nobs(object))) { AICvlm(object, ..., k = k) } setMethod("BIC", "vlm", function(object, ..., k = log(nobs(object))) BICvlm(object, ..., k = k)) setMethod("BIC", "vglm", function(object, ..., k = log(nobs(object))) BICvlm(object, ..., k = k)) setMethod("BIC", "vgam", function(object, ..., k = log(nobs(object))) AICvgam(object, ..., k = k)) setMethod("BIC", "rrvglm", function(object, ..., k = log(nobs(object))) AICrrvglm(object, ..., k = k)) setMethod("BIC", "qrrvglm", function(object, ..., k = log(nobs(object))) AICqrrvglm(object, ..., k = k)) setMethod("BIC", "rrvgam", function(object, ..., k = log(nobs(object))) AICrrvgam(object, ..., k = k)) VGAM/R/hdeff.R0000644000176200001440000002110613135276757012424 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. hdeff.vglm <- function(object, derivative = NULL, se.arg = FALSE, ...) 
{ type <- if (length(derivative)) { if (is.Numeric(derivative, length.arg = 1, positive = TRUE, integer.valued = TRUE) && derivative %in% 1:2) "derivatives" else stop("bad input for argument 'derivative'") } else { "logical" } Fam <- if (inherits(object, "vlm")) { object@family } else { stop("cannot get at the 'family' slot") } Fam.infos <- Fam@infos() dfun <- if (is.logical(Fam.infos$hadof) && Fam.infos$hadof) { Fam@hadof } else { return(NULL) # This means its not implemented yet stop(gettextf( "hdeff() not implemented for family '%s' yet", Fam), domain = NA) } link1parameter <- Fam.infos$link1parameter if (is.null(link1parameter)) link1parameter <- TRUE # The default really M <- npred(object) # Some constraints span across responses all.Hk <- constraints(object, matrix = TRUE) X.vlm <- model.matrix(object, type = "vlm") eta.mat <- predict(object) nnn <- NROW(eta.mat) pwts <- weights(object, type = "prior") mylinks <- linkfun(object) # Of length 1 for GLMs, char only wwts <- weights(object, type = "working") dim.wz <- dim(wwts) # Inefficient p.VLM <- ncol(all.Hk) M1 <- npred(object, type = "one.response") vc2 <- vcov(object) SE2 <- diag.ixwx <- diag(vc2) SE1 <- sqrt(SE2) cobj <- coef(object) se2.deriv1 <- vec.deriv1 <- rep_len(NA_real_, p.VLM) names(vec.deriv1) <- names(cobj) if (type == "derivatives" && derivative == 2) { se2.deriv2 <- vec.deriv2 <- vec.deriv1 } D3thetas.Detas3 <- # May not be needed D2thetas.Detas2 <- D1thetas.Detas1 <- Param.mat <- matrix(NA_real_, nnn, M) if (link1parameter) { for (jay in 1:M) { # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Param.mat[, jay] <- Param.vec <- eta2theta(eta.mat[, jay], mylinks[jay], earg = object@misc$earg[[jay]]) D1thetas.Detas1[, jay] <- dtheta.deta(Param.vec, link = mylinks[jay], earg = object@misc$earg[[jay]]) D2thetas.Detas2[, jay] <- d2theta.deta2(Param.vec, link = mylinks[jay], earg = object@misc$earg[[jay]]) if (type == "derivatives" && derivative == 2) { D3thetas.Detas3[, jay] <- d3theta.deta3(Param.vec, link = mylinks[jay], earg = object@misc$earg[[jay]]) } } # for (jay) ind5 <- iam(NA, NA, both = TRUE, M = M) wz.tet <- D1thetas.Detas1[, ind5$row] * D1thetas.Detas1[, ind5$col] # n x MM12 Der1 <- D1thetas.Detas1 Der2 <- D2thetas.Detas2 } else { MM12 <- M * (M + 1) / 2 Param.mat <- eta2theta(eta.mat, mylinks, earg = object@misc$earg) myearg <- object@misc$earg[[1]] build.list <- list(theta = Param.mat, inverse = TRUE, deriv = 1) build.list <- c(build.list, myearg) # Hopefully no dups arg names build.list$all.derivs <- TRUE # For multinomial, etc. Der1 <- do.call(what = mylinks, args = build.list) if (type == "derivatives" && derivative == 2) { build.list$deriv <- 2 Der2 <- do.call(what = mylinks, args = build.list) } } # if (link1parameter) and (!link1parameter) for (kay in 1:p.VLM) { # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, dwz.dbetas <- d2wz.dbetas2 <- 0 # Good for the first instance of use. 
for (jay in 1:M) { # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, if (all.Hk[jay, kay] != 0) { # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, vecTF.jay <- as.logical(eijfun(jay, M)) bix.jk <- X.vlm[vecTF.jay, kay] # An n-vector if (link1parameter) { dfun1 <- dfun(eta.mat, extra = object@extra, linpred.index = jay, w = pwts, dim.wz = dim.wz, deriv = 1) dfun0 <- dfun(eta.mat, extra = object@extra, linpred.index = jay, w = pwts, dim.wz = dim.wz, deriv = 0) use.ncol <- NCOL(dfun0) # Reference value really if (NCOL(dfun1 ) < use.ncol) dfun1 <- cbind(dfun1, matrix(0, nnn, use.ncol - NCOL(dfun1))) if (use.ncol < NCOL(wz.tet)) wz.tet <- wz.tet[, 1:use.ncol, drop = FALSE] write.into.wz <- function(jay, nxM) { M <- NCOL(nxM) wz <- matrix(0, NROW(nxM), M*(M+1)/2) for (uuu in 1:M) wz[, iam(jay, uuu, M = M)] <- (1 + (jay == uuu)) * nxM[, uuu] wz } wz.lhs <- write.into.wz(jay, D1thetas.Detas1) if (use.ncol < NCOL(wz.lhs)) wz.lhs <- wz.lhs[, 1:use.ncol, drop = FALSE] dwz.dtheta.Der1 <- dfun1 * wz.tet * Der1[, jay] + Der2[, jay] * dfun0 * wz.lhs if (!is.matrix(dwz.dtheta.Der1)) dwz.dtheta.Der1 <- as.matrix(dwz.dtheta.Der1) } # else { if (link1parameter) { dwz.dbetakk <- dwz.dtheta.Der1 * bix.jk # * Der1[, jay] } else { dwz.dbetakk <- 0 for (uuu in 1:M) { dfun1 <- dfun(eta.mat, extra = object@extra, linpred.index = uuu, w = pwts, dim.wz = dim.wz, deriv = 1) dwz.dbetakk <- dwz.dbetakk + dfun1 * Der1[, iam(uuu, jay, M = M)] } # for uuu dwz.dbetakk <- dwz.dbetakk * bix.jk } if (!is.matrix(dwz.dbetakk)) dwz.dbetakk <- as.matrix(dwz.dbetakk) dwz.dbetas <- dwz.dbetas + dwz.dbetakk # Summed over 1:M if (type == "derivatives" && derivative == 2) { if (link1parameter) { Der3 <- D3thetas.Detas3 } dfun2 <- dfun(eta.mat, extra = object@extra, linpred.index = jay, w = pwts, dim.wz = dim.wz, deriv = 2) use.ncol <- if (link1parameter) NCOL(dwz.dtheta.Der1) else MM12 if (NCOL(dfun2) < use.ncol) dfun2 <- cbind(dfun2, matrix(0, nnn, use.ncol - NCOL(dfun2))) d2wz.dtheta2 <- if (link1parameter && M1 == 1) { dfun2 * (Der1[, jay])^4 + (dwz.dtheta.Der1 / Der1[, jay]) * Der2[, jay] + 4 * dfun1 * Der2[, jay] * (Der1[, jay])^2 + 2 * dfun0 * Der3[, jay] * Der1[, jay] } else { NA * dfun2 } d2wz.dbetakk2 <- d2wz.dtheta2 * bix.jk^2 if (!is.matrix(d2wz.dbetakk2)) d2wz.dbetakk2 <- as.matrix(d2wz.dbetakk2) d2wz.dbetas2 <- d2wz.dbetas2 + d2wz.dbetakk2 # Summed over 1:M } # (type == "derivatives" && derivative == 2) } # if (all.Hk[jay, kay] != 0) } # for (jay in 1:M) # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, tmp.mat <- mux111(t(dwz.dbetas), X.vlm, M = M, upper = FALSE) dA.dbeta <- crossprod(X.vlm, tmp.mat) # p.VLM x p.VLM temp1 <- dA.dbeta %*% vc2 d1ixwx.dbeta1 <- -(vc2 %*% temp1) SE2.kay <- SE2[kay] se2.deriv1[kay] <- diag(d1ixwx.dbeta1)[kay] vec.deriv1[kay] <- (1 - 0.5 * cobj[kay] * se2.deriv1[kay] / SE2.kay) / SE1[kay] if (type == "derivatives" && derivative == 2) { tmp.mat <- mux111(t(d2wz.dbetas2), X.vlm, M = M, upper = FALSE) d2A.dbeta2 <- crossprod(X.vlm, tmp.mat) # p.VLM x p.VLM d2ixwx.dbeta2 <- vc2 %*% (2 * temp1 %*% dA.dbeta - d2A.dbeta2) %*% vc2 se2.deriv2[kay] <- diag(d2ixwx.dbeta2)[kay] vec.deriv2[kay] <- (-se2.deriv1[kay] + 0.5 * cobj[kay] * (1.5 * ((se2.deriv1[kay])^2) / SE2.kay - se2.deriv2[kay])) / (SE2.kay^1.5) } # derivative == 2 } # for (kay in 1:p.VLM) # ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, se.deriv1 <- if (se.arg) 0.5 * se2.deriv1 / SE1 else NULL switch(type, logical = vec.deriv1 < 0, # yettodo: 2nd-deriv test later derivatives = if (derivative == 1) { if (se.arg) cbind(deriv1 = vec.deriv1, se.deriv1 = se.deriv1) else vec.deriv1 } 
else { cbind(deriv1 = vec.deriv1, deriv2 = vec.deriv2, se.deriv1 = if (se.arg) se.deriv1 else NULL, se.deriv2 = if (se.arg) 0.5 * (se2.deriv2 / SE1 - 0.5 * se2.deriv1^2 / SE2^1.5) else NULL) }) } # hdeff.vglm if (!isGeneric("hdeff")) setGeneric("hdeff", function(object, ...) standardGeneric("hdeff")) setMethod("hdeff", "vglm", function(object, ...) hdeff.vglm(object, ...)) VGAM/R/attrassign.R0000644000176200001440000000141613135276757013531 0ustar liggesusers# These functions are # Copyright (C) 1998-2017 T.W. Yee, University of Auckland. # All rights reserved. attrassignlm <- function(lmobj) { attrassign(model.matrix(lmobj), terms(lmobj)) } attrassigndefault <- function(mmat, tt) { if (!inherits(tt, "terms")) stop("need terms object") aa <- attr(mmat, "assign") if (is.null(aa)) stop("argument is not really a model matrix") ll <- attr(tt, "term.labels") if (attr(tt, "intercept") > 0) ll <- c("(Intercept)", ll) aaa <- factor(aa, labels = ll) split(order(aa), aaa) } if (!isGeneric("attrassign")) setGeneric("attrassign", function(object, ...) standardGeneric("attrassign")) setMethod("attrassign", "lm", function(object, ...) attrassignlm(object, ...)) VGAM/MD50000644000176200001440000010302113135730670011316 0ustar liggesusersbabb8a1c553c18356c0db0223bece6cc *BUGS 7ee5b2dc375f5ec613dffed100ca7b3d *ChangeLog 4d1e60d019acc18d42c78054e69be0c0 *DESCRIPTION e640665d8993539374917f850992ddc7 *LICENCE.note 70a069b2b3d8feaae43c04d592ae91cf *NAMESPACE d05f36561b01478928c8d722cb7143a6 *NEWS f69011dacdfa468fb550f7fe6e7115da *R/Links.R 15e6d7daf696fe77f77c2618f65b7f4a *R/aamethods.q 850fc2355e3cdad5e6da31d75e7e3e87 *R/attrassign.R be85f5ed7e4886fa80527e81b9f402f6 *R/bAIC.q d4a2289805cafc49f5685f734921adba *R/build.terms.vlm.q 6a35d111b193a461fda33b8889d7727c *R/calibrate.R 239c21deab95a059b658a4c55a24bb2d *R/cao.R 96f3ffb873380bb4b4804d5f74589c84 *R/cao.fit.q 8d90f7d73c88b7f70997d82e37270538 *R/coef.vlm.q dd4bac465a287f5db3d3cd37547ab1dc *R/confint.vlm.R 80721a276eedd15c3ab6d2bfdb94a6ae *R/cqo.R 2aa1e33b70242e41b5fbb0363c6d5202 *R/cqo.fit.q 6bd2dee0075dc98322bb67e173bc3411 *R/deviance.vlm.q 40b0bc3b6621172402a2e948c060947c *R/effects.vglm.q 5cf840785f3cffa521f1bc2f7b846c89 *R/family.actuary.R 50eafe39e7f9c4ecffdf38607391139e *R/family.aunivariate.R d07f620d26def69e688bac7464465ff2 *R/family.basics.R a926cef87b4187d9555b3ccf5f4ff3de *R/family.binomial.R 029e84e10266d7dcbbaf3064d99c7334 *R/family.bivariate.R 430040d78525cd84093e28f399315b50 *R/family.categorical.R 2fd385aa52dbdec75c3c711dee841b53 *R/family.censored.R 0a8f27601542f6e00b2120fae23db0e0 *R/family.circular.R ff42cacaf581014695c79903671a3ec2 *R/family.exp.R 95d81a8060f61238fef994f782974bed *R/family.extremes.R 9776b38d1240cce93a01fd1fd910d064 *R/family.functions.R 988ce78b40c2e99d41a3865fe1068adc *R/family.genetic.R ea8f1484a36e135637c8ea6fd5c093ec *R/family.glmgam.R 31a2db812c69b1316db4be8a232a3280 *R/family.loglin.R e8c92d46b60b400d0706b2a98995a9c4 *R/family.math.R a1a4c3db19ea37ba7a4506e4c63b583d *R/family.mixture.R 819b0d763de218e72ab5fe13ff840bc2 *R/family.nbd.R 152de0eff22c85e5f15fd0cf34e23d20 *R/family.nonlinear.R 8daa227f4124e052013126bba62a5680 *R/family.normal.R 3f643294d6f5b1b40d754aaf2e62e4b2 *R/family.oneinf.R 0f6b900978a1c8fbe99f058ad8cbe921 *R/family.others.R dcb4fd572b2b21e8ec6abf1de8f52672 *R/family.positive.R 7cb2e49ad56b3b6fb49da23ea3e4f91c *R/family.qreg.R d1765ac8518188f267c9676c6fb38975 *R/family.rcim.R e8a591f4dbdca517b3c7abb4c7148498 *R/family.rcqo.R 61612a0ab87e08723e2dcb61862da4e0 *R/family.robust.R 
7be5dd7ddc98b6e5457c2b0ca88d0ab6 *R/family.rrr.R 7c77d6e90c6344ecd1d86a21d6b53eb9 *R/family.sur.R 4d8c241af8a93744d2ef2bb560b71d28 *R/family.survival.R 6e83ee8908e08691d2a1f3ad937a4130 *R/family.ts.R d99f13c9894e3b2fca9bf41cf6602b2c *R/family.univariate.R 7fb585dbf26bfb316794f3dcdca2f177 *R/family.vglm.R cc6c957f3366e34ffeb6badc2fb91008 *R/family.zeroinf.R ce324d0c57408b541796f45f8c99e66d *R/fittedvlm.R 9e23063d5cf42a1274872cd58291ffe0 *R/formula.vlm.q 2eee5295f01795773e5ca2eb648d730e *R/generic.q f257433624cebe43f7ab54926c58e7ae *R/getxvlmaug.R c1f4425899dc0cecb198844248aa8d2d *R/hdeff.R 888cd656e8e90466dc831fb616829963 *R/links.q 0ae70775d53e5078a917acfae690a9a3 *R/logLik.vlm.q d32e722ce3a87330c5711a45721e1573 *R/lrp.R 19e85d36989af790dff4afb1726e42e0 *R/lrwaldtest.R 9399cba191664032f1ce2e4a4bbbbb0e *R/model.matrix.vglm.q d2e67df9b8b868d8d3c591069e4e8dec *R/mux.q fe713edeefaf3f3660835103fe2e3e64 *R/nobs.R 79469ed8e3d205fdd7903cceb11914f2 *R/plot.vgam.R 0556ee6f3ea4287744485a4c23faca05 *R/plot.vglm.R 541762a018c2c64b8d65230877116958 *R/predict.vgam.q 51b20db1da7c63d773b1c11af9f36d83 *R/predict.vglm.q 57b0b30cf8a97b561a820a167cd59fcb *R/predict.vlm.q 1afe12cfa18ae375b7bc1711145ec81d *R/print.vglm.q 79210aa7183e752a6fa036e193251af6 *R/print.vlm.q 637e2241be5a481067e3059764371c06 *R/profilevglm.R 945527e91fd66440a6b5849bbc30a949 *R/psv2magic.R b69ca0140390f60473398540808ed27e *R/qrrvglm.control.q 96747a8f20c4d4f47caeb53f8e92baab *R/qtplot.q 067ea212f72255109d664c77db84471d *R/residuals.vlm.q 8983fc2ddcb771329cfc5f5495d7fe95 *R/rrvglm.R af47d54734f99aee7201c93e7ef927d5 *R/rrvglm.control.q 4e5b97ff9f9f5bb5d1b4c2e8240b5dfb *R/rrvglm.fit.q 5683c1b71935ff8f42eeae1e3d30934c *R/s.q 6da8e34ae162ed21d59650805ff1623c *R/s.vam.q 688ad55cc7d6eba17fdbb1667eccdbe8 *R/simulate.vglm.R b5b74e062d32f11899d1ba96224e375c *R/sm.os.R 8a5cab372b14ee98fb7e3ecb877e4216 *R/sm.ps.R d57222b1fddf6de18688d0f9e2786d9d *R/smart.R 3e7923049875fdd3e16e3e79f3f96c7a *R/summary.vgam.q b0d7cb236267a5294f85b93ad53614c6 *R/summary.vglm.q d84c8c1fdb0d89fbc940e5dacbebf1fa *R/summary.vlm.q 6a9681424d34d61b971ef84b18d3aa3f *R/vcov.pvgam.R 1a022e12d1be3373b3956524ed52bb23 *R/vgam.R bdd171d87708a45deee6c442d0e17b71 *R/vgam.control.q 0fb9e7d906e6707834a13b64fdf595d4 *R/vgam.fit.q 5c97b388e0e51b4ca0464a1a3202de9b *R/vgam.match.q 4be3cc07a7691cb0660676c625839fe5 *R/vglm.R 203deda34c94348dee0918bff172dc88 *R/vglm.control.q c8e1180a8166816637de8b1ee60fac39 *R/vglm.fit.q a6982b19687791914118d2d8e87c18f0 *R/vlm.R ba69df50faefd39592539f77b0f46d94 *R/vlm.wfit.q 3a4defc1e8e86e0ea4e4113e1d94a9e9 *R/vsmooth.spline.q dda41a300447c473a76e0560126f748f *data/Huggins89.t1.rda ca84866e9a39a5f2026e1c8d7cc1e6a4 *data/Huggins89table1.rda d89f69ab78bc3c7a526960c8bdb9454b *data/V1.txt.gz 4cf4c61fa06f78232ad665a892984496 *data/alclevels.rda b063fbd14baa9a640209ba9841bd4e18 *data/alcoff.rda b5efe630e68eaa59fc9e5a046161581e *data/auuc.rda 4d0c64631ace24cfd5fa5b7273c59972 *data/backPain.rda 4fa3eac69a59ea5ed0123d54528e5595 *data/backPain.txt.gz f6449b3a535ada1d388b7e0165db8e0f *data/beggs.rda e039fd36c33b359830b2ac811ca7fc49 *data/bmi.nz.txt.xz ebdedc47496664b7eb59ef579317e914 *data/car.all.rda 582464fd1c8f03966d86ce58b9f08167 *data/cfibrosis.rda b29c1a4125f0898885b0a723442d6a92 *data/chest.nz.txt.bz2 4df5fd8b5db905c4c19071e1e6a698a4 *data/chinese.nz.txt.gz 3cb8bc8e1fc615416f0c8838a50b3f51 *data/coalminers.txt.gz 0ba5a7d7678d359651c9b712fd6270df *data/corbet.rda 36143f44e198df98e67027817061b57f *data/crashbc.rda 52fc905363c592086a4c57832cb46f8b *data/crashf.rda 
61dfa26dc9c4d023f909669c9b5359b7 *data/crashi.rda f1d69315faacf8918cf33370670da515 *data/crashmc.rda 39cfb28ad150498af86545a353c130b3 *data/crashp.rda 36b4aa03c54dd4e4e41c91386ff765e4 *data/crashtr.rda 2be7615e694efdc8f2b19b775242643e *data/deermice.rda 90143a2182e8e0a1797d44bb0dc34322 *data/ducklings.rda 08e87bb80a2364697b17ccec6260387c *data/enzyme.txt.gz 7f0dd373c134f1e5ae1e7acc1c04fa5f *data/finney44.rda 75df1d1b059b7be38e5393687c9d4892 *data/flourbeetle.rda 3125b7b004c671f9d4516999c8473eac *data/gew.txt.gz bec512b2d2d680889c9b71c7b97dbffd *data/grain.us.txt.bz2 9dcb8cdf026f5468fa70f8037fd72a0b *data/hormone.txt.bz2 33e3484fe6bcf0b1dc1d518f166d91fa *data/hspider.rda dffe21fbabf645127bccc3f3733098a7 *data/hunua.txt.bz2 6d6b08fe1d97518edbc41aaa4336c58e *data/lakeO.rda c552bffac2ec420295872d382b06f100 *data/leukemia.rda aba4885e0eeda8ee887a422fee01e02a *data/lirat.txt.gz 7d7e59127af09903659c5727d71acc56 *data/machinists.txt.gz c51f8bf697fecc98a009c460770ce008 *data/marital.nz.rda be0ac2390e71582a60caab17ecde4b92 *data/melbmaxtemp.rda 56490506642d6415ac67d9b6a7f7aff6 *data/olym08.txt.gz fe334fe839d5efbe61aa3a757c38faeb *data/olym12.txt.gz 3ed63397c4a34f3233326ade6cfd1279 *data/oxtemp.txt.gz a8ecbdd5fde152dced3b5403e35914d0 *data/pneumo.rda 0cd66b7ce4e596ad3ca75e1e2ec0a73c *data/prats.txt.gz d928020ad9ea0ad1ba922e5e11ace1a1 *data/prinia.rda e4f0ee62c65eadd4791719014428d376 *data/ruge.rda ab26dd2543cb413960e24cd3c3e66d44 *data/toxop.rda 1b059fc42c890bf89f2282298828d098 *data/ucberk.txt.gz 64adf51329c7660476908fbb4af1978f *data/venice.rda 66623652583f986c3a2690ffd95e9b39 *data/venice90.rda e990ca4deea25b60febd2d315a6a9ec4 *data/waitakere.txt.bz2 5fdf4881486e8fff1355e28a3c3653f5 *data/wine.rda 81f7f0844a196dc48e91870c4cfafc99 *demo/00Index 9327dcfa4015cf47172717bac166f353 *demo/binom2.or.R b9f0af62a654d77a3052997eb4cc15e2 *demo/cqo.R 2b8f46b985ee4354d6f2cc48ac81df8b *demo/distributions.R 541e1a831b9abf6a2a5bfe193b03b1b4 *demo/lmsqreg.R ab8081763fe2144558be25f3a154327b *demo/vgam.R 65570d10948785994d70d817f574bd96 *demo/zipoisson.R 4c1b5ce8ed92900b8bcb6746ad08b15f *inst/CITATION 285846db4de993ae1778be8166a5a113 *man/A1A2A3.Rd bf4deb73bc66a698f518b65290a52e7e *man/AA.Aa.aa.Rd 03439b6b6db1ebb78bf79bd11e8b560a *man/AB.Ab.aB.ab.Rd a97404c947185f987fcb72fa8a242e5d *man/ABO.Rd 467a312763e8a9a3c938aee39dac64f7 *man/AICvlm.Rd 6c0407b2dece39cc4d627564375a54b8 *man/AR1.Rd 129ea1a7a3f72390ecba97817f848d8e *man/AR1EIM.Rd ea156efd9638ec3d235442147ca21843 *man/AR1UC.Rd 447222e0a355dab67f394d1ed02011d0 *man/BICvlm.Rd 32daae0afb71eae3cdeefc042f4241c6 *man/Coef.Rd 7064da1eee8698b388f5d40c76c00ed1 *man/Coef.qrrvglm-class.Rd e7f724d393e8e41632c9bfe0777f2ba3 *man/Coef.qrrvglm.Rd 20dae27ef5c49a65318dc19d0059f0d3 *man/Coef.rrvglm-class.Rd 2090ca21bbfccf2ed98d694514654ae1 *man/Coef.rrvglm.Rd 4e4b7b0f0535956bd665e88e99681c1c *man/Coef.vlm.Rd c5b42970f5b3557fed1aaa9c4edf380e *man/CommonVGAMffArguments.Rd aecb9f3fb3bcde4e74f3722cc4f67efd *man/Huggins89.t1.Rd 63b7e9ae7538ac73ca7c2913ce093cb9 *man/Inv.gaussian.Rd 02b76640569fda28cd5cf2cd532dec00 *man/Links.Rd 500b60125f8954997037620b5b8505a6 *man/MNSs.Rd 5ddd860d2b28b025dbf94b80062e3fc6 *man/Max.Rd 4393fcf0fc30408567c6a8477a822503 *man/Opt.Rd 4f8d77d5f9c49e4ae5033a169bfe274e *man/ParetoUC.Rd 2ec707e8c6450699f244a27a35466e77 *man/QvarUC.Rd 826c063341f27d31e4778e5ef65eee35 *man/Rcim.Rd cef46e6d9d73924ca7f8b9e969e55b9c *man/SURff.Rd 01e0e6705724e5cdbcc9d5077cef7aa6 *man/Select.Rd ba415357209734874cf96a406bd9214b *man/SurvS4-class.Rd 6ed5239b716d4aaef069b66f248503f0 
*man/SurvS4.Rd d7625b064394b2c400802feb17ef400c *man/Tol.Rd 77777a8c452a5d806a1a4e5c3378be96 *man/UtilitiesVGAM.Rd 39394f12a25f8ae05334df307ccd10fe *man/V1.Rd 181c34c20563e98d09b441d9916adc23 *man/VGAM-package.Rd 421b43ebe193fc7dae0bcffdcd07227b *man/acat.Rd 99d6e410a20c6ffa691021a8a87c5c24 *man/alaplace3.Rd 8c0d8e4d9e634a0c2539e3a052afa9cc *man/alaplaceUC.Rd 5227bdac87a20561f26516d74684a5bf *man/amlbinomial.Rd 48d6f12735b9acbce463730f63468db9 *man/amlexponential.Rd a1d58b708d9db88ecce89fe3015445e7 *man/amlnormal.Rd cdaeaaafe03e6137fa24b41a6308ad06 *man/amlpoisson.Rd 9f1ddcb0af49daaec702a1284341d778 *man/auuc.Rd c5776da4bc66a6581a053cdb7de2cf0a *man/auxposbernoulli.t.Rd b9b949fcf6415abc0b15718ab5b8ed4c *man/backPain.Rd 111893c26485baa2b2616a0bf87c927e *man/beggs.Rd fbb153319516252a8dc9f0e40282a433 *man/benfUC.Rd ec06d832602e25e42d8d0d3d4ff4f655 *man/benini.Rd f4c7eaee169de8469b521abb5504c0f3 *man/beniniUC.Rd 1cd00f8000b3e2af73bff388590a3146 *man/betaII.Rd bd214a990456c1c8c29c7352e56fee90 *man/betaR.Rd 7217fa8ae33d3b4946e5275b0e93255d *man/betabinomUC.Rd 194cd150f9675c57214394982c0c724c *man/betabinomial.Rd c9d17bd3e9a704ac922eb935eade0eff *man/betabinomialff.Rd c3420e8dc291534ccbcfcdb6bf5ea9a5 *man/betaff.Rd cb0be46a372240c2416eafde863e9824 *man/betageomUC.Rd a321871b9722215e053297e8d8b34687 *man/betageometric.Rd e649861c7f8f6b151a8f95b8dce56bf9 *man/betanormUC.Rd 92dc23b72dcd7cca817690c70cb1a434 *man/betaprime.Rd 5cbd2127e8da755d996e032343ec982f *man/biamhcop.Rd 495e32601db2c4f22462811e27436c9d *man/biamhcopUC.Rd 8f59784ca8807108410c4f3fb1299ac1 *man/biclaytoncop.Rd 82b67433a10d4d25ac7bd66e3f4a2f33 *man/biclaytoncopUC.Rd 67daaae368a358fd35c714d258e54f04 *man/bifgmcop.Rd 2c51520c0e0485360fa5ee65989dbe2f *man/bifgmcopUC.Rd b8a59120e83fc9edb42af25537dcc93c *man/bifgmexp.Rd 17da2a456e6af88182e2d2fa59f189b7 *man/bifrankcop.Rd 4e57b0a38391fdfe5e57e39799ae9d6d *man/bifrankcopUC.Rd 24ffd4d97c8b5d9c71c6702c4ecb3316 *man/bigamma.mckay.Rd 913fd4c8407162fb1aa521583dc41ccc *man/bigumbelIexp.Rd ffcbfc72f334094f6dfd4842ab522e96 *man/bilogisUC.Rd cb0b07c6c1ac54aac522170d1dfb02bc *man/bilogistic.Rd 422973a9e84550aab73f4c797dcbfd23 *man/binom2.or.Rd 84dc66c2c5d0321ace962180382c7c59 *man/binom2.orUC.Rd 4cb0caf5f677d96c4bf95573d4357fbf *man/binom2.rho.Rd 9f2ac11550d4af21ccc838f98cf04014 *man/binom2.rhoUC.Rd 107152b5c7e7d2f949a65138b48023a3 *man/binomialff.Rd ed1c2cb576b135de3dde2a997ce87230 *man/binormal.Rd b7288a39f385659ce3fda48789c75f46 *man/binormalUC.Rd 020d2ccd1d470d6fd84a09af53119ebc *man/binormalcop.Rd 4d8649f87cb865fe10d55aa655749e10 *man/binormcopUC.Rd 1d943aad478481e7bf4c4b1a9540706c *man/biplackettcop.Rd 79d9cd96d00531b88793d55a07d29842 *man/biplackettcopUC.Rd bdad9ecfb116c4f30f930bcaf7208735 *man/biplot-methods.Rd 673b5f26090b304c6a87bb818bc78ad0 *man/bisa.Rd ce85dc490ea819c417ca847e0a80ace8 *man/bisaUC.Rd a7c09c2ae5e665ab7a45213b1baafb38 *man/bistudentt.Rd db08fad184b678ca63e8b290cec33312 *man/bistudenttUC.Rd 40e4a5e1a4e770be48eeea6337ecca83 *man/bmi.nz.Rd 732e21d872b4c4d96380f14e602114b8 *man/borel.tanner.Rd a25a019943aa0d82d35d6c46ec726c67 *man/bortUC.Rd e409b7d19bd012900f3edf51ce084318 *man/brat.Rd f2de98254c464126b697019198425e21 *man/bratUC.Rd 4b650c9b9cc0709dcba08a9f4661e957 *man/bratt.Rd df53c524cd1b1533425e33054bf27a7f *man/calibrate-methods.Rd 56c11a14cd7f934e408be19666e0a8b2 *man/calibrate.Rd f1e3618b6cfd61021132d0758841fed8 *man/calibrate.qrrvglm.Rd 072c314be10da18411923e776c8365d1 *man/calibrate.qrrvglm.control.Rd 451a2f2848b81ebcd22b52b7356ec40e *man/calibrate.rrvglm.Rd 
462d94ae3f1e437f4e548c4189c0fbc2 *man/calibrate.rrvglm.control.Rd 701feb6c08b20c5e097946f93cb83e17 *man/cao.Rd ca994a5e35760fa5ea5b91e5d55d7988 *man/cao.control.Rd 26c82ae6a9216158479d3707493127b0 *man/cardUC.Rd 711cd6465d3c6aadabb2e4b59964da79 *man/cardioid.Rd 2b2ef87eb1e638b5278405db1067deb5 *man/cauchit.Rd e827a0f2a0205030364895150f391d1e *man/cauchy.Rd a0df4cb63aae3cdd06afc498620e8ea3 *man/cdf.lmscreg.Rd c760b63501f6213c3c12716c3614685a *man/cens.gumbel.Rd d38abec814db0e3fb01ba9ccfeabba20 *man/cens.normal.Rd 72901f13efe7d772fc5ed78bd6c58cea *man/cens.poisson.Rd 77939979aa0728370bb60b66a2c98136 *man/cfibrosis.Rd aa6292da8fd825045a7d21b2ce8c78cf *man/cgo.Rd 2855aeac51aae2e795d607941b8273ac *man/chest.nz.Rd 922ebc06682ee2090eb1804d9939ec03 *man/chinese.nz.Rd 51f11030b641d0c57db0aec5d57de538 *man/chisq.Rd 64750de2ee96a799428e2301ab647dd8 *man/clo.Rd 412d70832fbb40ecf9f4d8f03243b545 *man/cloglog.Rd b1985e33c967fdddf79e10cbb646b974 *man/coalminers.Rd 74b5325202b9c0285519604f5705b27d *man/coefvgam.Rd 67cbcd6d989b8dd99d95611e19b61ca0 *man/coefvlm.Rd 1409b01c52bad85c87e9740fb003699a *man/concoef-methods.Rd e9a2bf379aac3e4035b8259463a5374b *man/concoef.Rd 7eac5145453feed2cdcd307741b912f5 *man/confintvglm.Rd 162b28160580b7f1cbb4452623e723f8 *man/constraints.Rd ef1f3f2351de5cf13132fd524197dc92 *man/corbet.Rd a9a75144cb8344438a1c5796068a5ed5 *man/cqo.Rd b85e14be939b981496370132cb1fe0eb *man/crashes.Rd 40e3fc904c8f5774b466d592b5871509 *man/cratio.Rd f1eb8262701aca944988394aa045eca9 *man/cumulative.Rd 2b5c56846f4b02069a7beeb19b21e370 *man/dagum.Rd 12192f19751804a540e6d0852e29726c *man/dagumUC.Rd d5439d37875ba50990406c5c5f8595eb *man/deermice.Rd d8fbb18d62bd64d50e7896e34ffd59d1 *man/deplot.lmscreg.Rd 522bbe5bf684007a8e8d02ec8ce3b911 *man/depvar.Rd 5a201348194119b8d79548bf1cd9f9e0 *man/df.residual.Rd 49407a051530fcd4424f64740e4f601c *man/diffzeta.Rd 1b4fb2baff32165b567969ff96f22c76 *man/diffzetaUC.Rd bc6022d55c367cdd33b768aa20948e30 *man/dirichlet.Rd 5d80c9aec9a30bf52ee26429deae8721 *man/dirmul.old.Rd affd1703898de2fce53af63eebe656c8 *man/dirmultinomial.Rd 55033c690b7e4ef405fc9009fe7fd01d *man/double.cens.normal.Rd 3ea7f60a4427620c3a50932ef7da33ac *man/double.expbinomial.Rd 2b74d3ee310b347a7112ce9c0b0ccb34 *man/ducklings.Rd 1e7015b921166b7564a38eb8c49c0436 *man/eexpUC.Rd 34dedcca98e6434bc1e9c7deca8a49fa *man/enormUC.Rd 62236ca35117508cb67241845240d5b3 *man/enzyme.Rd 980efa41e75a65ef1c0a8ccf943f6398 *man/erf.Rd 4e888f6bc8399670553b48ae319d349f *man/erlang.Rd 25e5bdbf1b46ac8d999ffaf284ed45e1 *man/eunifUC.Rd 64b225658109fddba5d01002764a63bd *man/expexpff.Rd cb7a983158b27b6fdfe1eafbcb6cf42a *man/expexpff1.Rd eccfa33017118bc7314ef168695a595e *man/expgeometric.Rd f39dd0be93d3e24eda78f08310ff4b2f *man/expgeometricUC.Rd 5bde1dc82f2facb1410342713dfd6407 *man/expint3.Rd 6cd9a3c431dd04ca30c58d836d77f2e4 *man/explink.Rd 89ce96662b931aa17182192618085ed0 *man/explogUC.Rd e51211ad603eeecbe72cd7f6db0e76e0 *man/explogff.Rd dca2e21888171a395e2b9e8ad788cd87 *man/exponential.Rd f3cca02f31b091259c7a8cf690f93148 *man/exppoisson.Rd 79f43e2f29b5cca093569fd81aea3abe *man/exppoissonUC.Rd 326f2e1ad8675fb4d3fc610bd4d473e4 *man/felix.Rd 0cb64513fed0f600d4d7012733d6e71d *man/felixUC.Rd 28102fa0c3908cda895835e03c6ca8d4 *man/fff.Rd 02da5dbf379f99aa8f74562b1a69aad5 *man/fill.Rd b929e2ab670eb59700bc4a1db07bbbc0 *man/finney44.Rd 676e766de613899da4bb649d736e5fac *man/fisherz.Rd fe861dacc59a40590ffec9752a5c31c0 *man/fisk.Rd 5966dbc9e396bd3cbb15b2650d885177 *man/fiskUC.Rd e2b8486d75b02113b0f249e7b2af6df0 *man/fittedvlm.Rd 
43a9d36771b0925338017e31e0e23c02 *man/flourbeetle.Rd 56da968ac438de6793e1de751093aa2f *man/foldnormUC.Rd 422c8e83b864b0336ef325f8f1502412 *man/foldnormal.Rd 704cb2521df31b2924262ad97b269f73 *man/foldsqrt.Rd 628edb6d51c54d246702e9521ba6470c *man/formulavlm.Rd 5968650ac0521408bbd7c06c842f77c8 *man/frechet.Rd dabb4b7cdd3422f239888fb85ca5a70b *man/frechetUC.Rd fd885ddeff27c79c445bc21f1d02265f *man/freund61.Rd 1159eecef9c45ae80bf2da78ca439985 *man/gamma1.Rd 4a46213a615e160d1de424276dc86fb6 *man/gamma2.Rd 5605f627059a37fca123209727ae358d *man/gammaR.Rd 09a8f1cbdd2216433910a23f982b2943 *man/gammahyperbola.Rd b4b0cf65a9d6064e4d4578fc00997fa9 *man/garma.Rd 6be60152befbb0b0c97bd1bb38934479 *man/gaussianff.Rd e539477266517e44c478d7a014d59e33 *man/genbetaII.Rd 45999add2a92fc243422b25bfc8f8198 *man/genbetaIIUC.Rd 1021037686d81712b08cc544a1037073 *man/gengamma.Rd 18ea379b3c4276c209405cc52618a0b9 *man/gengammaUC.Rd e76ba09ce1f37dcaefe798265764cc0c *man/genpoisUC.Rd a119afc00f962b1e029606fbb432c4da *man/genpoisson.Rd 15429ac99e67921a77cb78e47210d7fc *man/genrayleigh.Rd 2b8ec736188410b1502ce23ba1852463 *man/genrayleighUC.Rd 47cf2f1cabda023a3a982c4f1594bc6b *man/geometric.Rd ea16a72ebd8739cd2133e91fd9c92662 *man/get.smart.Rd d89a22500e2031841b7bcfa1d8607d44 *man/get.smart.prediction.Rd 1209ed3da3e11b7925de5eaa91a31f97 *man/gev.Rd a4fba685f8e85a0566b2d04cb9887508 *man/gevUC.Rd b67d72038a94936de43e9c9867e42696 *man/gew.Rd 98323d4ddee9e4389db885c5dfd9e8b5 *man/golf.Rd f67b351f60fea785dcb5c364bcb76439 *man/gompertz.Rd 8170cb9545cf35f1768db069b13a893e *man/gompertzUC.Rd a82c1ef0b1f4275354aa1ccdb2f3f032 *man/gpd.Rd 1ac579be6f58aef14c0fd352e242fbb1 *man/gpdUC.Rd ae22c83775126740de24f9b424eb1ac8 *man/grain.us.Rd 9e6745cde809d3f4d76d9454ed4bdfea *man/grc.Rd 086dd47f85b86acd9cdccd841f49b2cb *man/gumbel.Rd 50b5296cb8c594ec807e7093db9884cc *man/gumbelII.Rd 5099d1835eebc1b4610481e77463a50c *man/gumbelIIUC.Rd 58292f4d52c780593164af0311a765af *man/gumbelUC.Rd 7a67665b02e62cf7c115a6f689c4b489 *man/guplot.Rd 4b511af2b7f9ef9ebb409d803ebc09b0 *man/has.intercept.Rd d5ad348b7727127369874c7e7faf49bd *man/hatvalues.Rd e4c7cc98736c146f141920bec0ac91ba *man/hdeff.Rd 6fc61fd507dc4c1c75fb6c28bee0a2b2 *man/hormone.Rd 8b6e465e5fe73a935f46daa60046c9cd *man/hspider.Rd 62abee8f8a170a6bbe50d1ab3e56c347 *man/huber.Rd 95e3f876501a7fd2e84e35f4894477ef *man/huberUC.Rd fc4d3f7180812d4d2d9795729baade82 *man/hunua.Rd 17e69c7a5141452916dd851ff3d8da1b *man/hyperg.Rd 1d933319c8cb063e3cb36453b97ae4c6 *man/hypersecant.Rd 1c49eed2542b1209bc724ad32f60fabd *man/hzeta.Rd 41cf19f4dc45e8194f64a34e7c039f1d *man/hzetaUC.Rd 5c27582c146028bd203dc6138b22bcef *man/iam.Rd 8f8543e4554295b88ca2d3444a28173b *man/identitylink.Rd 3270bfcad322c6b39ea015225ec71267 *man/inv.binomial.Rd 65dfc6df4db250babcb46684a65328cd *man/inv.gaussianff.Rd 1be0ee6f6ed383873765d3618a82ee72 *man/inv.lomax.Rd 4492e4a4f91d5fe7d4ec75a128bf4e07 *man/inv.lomaxUC.Rd b0fbf31decc06fbee3ecb4b78463a796 *man/inv.paralogistic.Rd 6f740a890a174ff4ff3879fa8719ec58 *man/inv.paralogisticUC.Rd 7153cd3b651179845eedeb69cb2f9c7d *man/is.buggy.Rd a501c3d3de4a744a0e0cdbc0673b543d *man/is.parallel.Rd e68a1f19e55cd95da21eec0b119c0ad8 *man/is.smart.Rd 1b33dcd08e9f444146fb7fe03a425add *man/is.zero.Rd e28d52ce917a7dc0a0da14ee2a44ad5c *man/kendall.tau.Rd bf860633c6991dbc98a3ae4a56679470 *man/kumar.Rd bcc3111b901e3cc621a4a49d48b66206 *man/kumarUC.Rd 6f5588e7ecb0328d282c94b391950240 *man/lakeO.Rd decbd103cc5311735e70d906d170c742 *man/lambertW.Rd d7ffbb868815836d70d5019f20b9af95 *man/laplace.Rd 
381e036b4bb5ec7eab83aec4924ea195 *man/laplaceUC.Rd 7a0dffd900869abd11094f369410c3cd *man/latvar.Rd 6139f5266e9dabcd024f5323f09abec7 *man/leipnik.Rd f14199e5a0c4d647de4fee04f1ba7edf *man/lerch.Rd 8c7fca39c92e5f79391a7881a0f44026 *man/leukemia.Rd 92fe3c03e74cb0fdd584bab129416eb4 *man/levy.Rd d3fb68f03d6cc946da6b48772bea3297 *man/lgammaUC.Rd 058b16f16ccc46ab0bc029419c60d71a *man/lgammaff.Rd 31b32afde6eea39dce5651070827e17f *man/lindUC.Rd c5b96c1bb5898f1948d946ebeabcd8b3 *man/lindley.Rd 0dbe4623dfb32dde6bb3dd3c7fefd509 *man/linkfun.Rd ce4327b5fa5398367a23005373ebf851 *man/linkfun.vglm.Rd 1923870b8d3c672d7a7736d1cb752d59 *man/lino.Rd f4fa53903badeb6850411bb464cbfa11 *man/linoUC.Rd 48d52e3536f40f4bb8b80ed13523e098 *man/lirat.Rd e86677e152d0f11cfdd23cf85adb373f *man/lms.bcg.Rd 02ed4ad7f248aa002395a7f1547ebfc6 *man/lms.bcn.Rd e851725d21bb3515fea3dfa2ed50a892 *man/lms.yjn.Rd 0dad131a129a97908dfa39adac5ca812 *man/log1mexp.Rd 7c108f33921093160892c22505f77e79 *man/logF.Rd 128e56816c06da9c68204c5e17f40909 *man/logF.UC.Rd 9f80bd504e1c75b0c7b29b3449cf7362 *man/logLikvlm.Rd e25ab9a04aa6fe3c1ca7e207127df654 *man/logUC.Rd fac0b2e9215006ccc2d19659f370038f *man/logc.Rd 0322131a22664e2eb8df4353fff97ec6 *man/loge.Rd d32bf68ce529579e1afc7198e557812a *man/logff.Rd 2d41c4483d71434df635770c3227661f *man/logistic.Rd 3457599cfa34218ea02070a931ce37bd *man/logit.Rd 81fbcf2fd0488f7467f10a9b04112c36 *man/logitoffsetlink.Rd 8822ba593955e90e63a8779aaf74d29b *man/loglapUC.Rd bac5b0432ecb0ea9a2a8813b514ba205 *man/loglaplace.Rd 19da55b6d1195e349514a7397c2611cc *man/loglinb2.Rd 4290a696c9eedd140e5d64489b6f29be *man/loglinb3.Rd fd4cd26d6ff2a4436bc76b32032697dc *man/loglog.Rd b60549cb1381a5e28c243336088c8d0c *man/lognormal.Rd 61a6b92080d54adddba15f5aff318167 *man/logoff.Rd 1d125855406e1d3298b00cbbb43f617c *man/lomax.Rd dbc62e15528097b42fb64d49be5f22f3 *man/lomaxUC.Rd 6bd022246e23e920706e334289786a92 *man/lqnorm.Rd 4a5c26f48c7f11be4f19121e1db90243 *man/lrpvglm.Rd c5cb78a3dd87e37d5dbaa4085d7a9f6b *man/lrtest.Rd f0a38f0b82c1525dcd51687a2f2768c1 *man/lvplot.Rd b58baa9997db4a2d17bffd823b18d91f *man/lvplot.qrrvglm.Rd d49cdb5fb7d5c6802adf306c149b4b11 *man/lvplot.rrvglm.Rd a22ccac7d14abadd6c77d953fb7a40e9 *man/machinists.Rd 84c2bc3a634004df81bcf51de759d69b *man/makeham.Rd ded9a294b923c79230f83eb84f3103e3 *man/makehamUC.Rd 2bbcbe064963fcc8c05f47ce043b9c56 *man/margeff.Rd 3ddf15a17c9065fcb8a97c4d4dca724d *man/marital.nz.Rd fe8730c68060188f47c05090d696f324 *man/maxwell.Rd 8fe9e02925adf9a10be7913a361c79b9 *man/maxwellUC.Rd 831b3c476b91b3f79edc76e8708b8d1e *man/mccullagh89.Rd a98728733448bc9c8943a2fb8f3f66bc *man/melbmaxtemp.Rd 4d8d0f37dc8249d00e52283764534e98 *man/meplot.Rd 620804acef2de6c732fd9bb2dc05f8df *man/micmen.Rd dba6353c3b5a28ab1d489fbfee0ed1a8 *man/mix2exp.Rd 98bf54f72e45f62c7048cea667f7e1bc *man/mix2normal.Rd 9441d372f9b21c0e0f7e649e07a685af *man/mix2poisson.Rd 131aaa836a137554786e8bda01d8e334 *man/model.framevlm.Rd d9b1e066f2438d36dab43f26214070f1 *man/model.matrixvlm.Rd 5e63d27fdf0187ee5f4163a3b17bc3e5 *man/moffset.Rd 3aee77737891012246b2b6e56e14f509 *man/multilogit.Rd 92bb80be2226f90308850fb8aab11feb *man/multinomial.Rd a4cefcc6cef23b9863209164da813db8 *man/nakagami.Rd ec4fd16fac5c0c2657dfecb963394ad2 *man/nakagamiUC.Rd 4bcc3a41b3bd571b15f17408c447858e *man/nbcanlink.Rd cac04298dcb393237420ae72229a8fc5 *man/nbolf.Rd 68432d390c8652c812d7abade62a2dc4 *man/negbinomial.Rd 686a17933c5953c4834200ca217b119e *man/negbinomial.size.Rd 4811ccf0037a161a3381c6390a54a004 *man/normal.vcm.Rd b80be56e3a81ee4e7b859e72b0017b12 
*man/notdocumentedyet.Rd 5e590acdda3ff0a9e2df0db8d233f848 *man/nparamvglm.Rd 880a8634be7d3a1b56be1c93545fb327 *man/oalog.Rd d5d5aa6c773fe976386fe73c300926c8 *man/oalogUC.Rd 1372b54544f32022ac1e7b0e0c653407 *man/oapospoisUC.Rd ac947e9ab52c8db4338fbb8fe74a9e65 *man/oapospoisson.Rd 4bad12a578bf78715b8564c0d4404494 *man/oazeta.Rd a095d22eecb1cc08cb577d3fe8980d25 *man/oazetaUC.Rd 5ce5c0f9814fe22b8bb29ba0c884d407 *man/oilog.Rd d09ac6330dc06648d12cff12a46e7301 *man/oilogUC.Rd 188f9232c145101e2406d295a03baafd *man/oiposbinomUC.Rd bd35edc6b99943212cdb1bba9e7aa19b *man/oiposbinomial.Rd d885552fcb8421235b192fe939c77b1e *man/oipospoisUC.Rd 2900e15a9d5d04db0c473a234fd20b80 *man/oipospoisson.Rd e5d5dd80195d9ca99e9119cb02a8067b *man/oizeta.Rd f17688c4d640452bbd9eb9a0a50e42e3 *man/oizetaUC.Rd e7d848e60f3a36a25df093deff2104be *man/oizipf.Rd a15a33c2c5565e2aa3c09c7d3b1c9546 *man/oizipfUC.Rd 98b83e406ea1968ba3e8b17d0933b2cf *man/olym.Rd 94ca3795c49c069e81d4abb217046057 *man/ordpoisson.Rd 03e5915aea0d7d6d764123a245f3d8f7 *man/otlog.Rd f6ec078f93412c9498e1c309f2cc3761 *man/otlogUC.Rd 6803fbdd9ae7dc3086c20e30abe93dce *man/otpospoisUC.Rd 502cc7989515c59ce6f1fd40bf705b12 *man/otpospoisson.Rd 244e33255edcc38590a49bab0cdc282c *man/otzeta.Rd 4b01e8ae04198fa40c89cbf83df24e21 *man/otzetaUC.Rd 5d9093c6c1297fb988ed0695d88ffeb2 *man/oxtemp.Rd 8313e214e8fb06aba430451440ad1f3a *man/paralogistic.Rd 383805a5130a512c207a6a30c28553d3 *man/paralogisticUC.Rd bb3a3f7ead9224fd8c9a4b0404bca995 *man/paretoIV.Rd 8111c66c525adc1c66cda4a6fa9a5d94 *man/paretoIVUC.Rd d5fc72aaf9ba9efb046c5be1f65fd11f *man/paretoff.Rd 9b2c4dd01870f3f32ba3c15ae95587cb *man/perks.Rd 295f59604bff838edcef5b96d0da8350 *man/perksUC.Rd 39f6cfbf1ae8bc5222d992e92fb4f954 *man/persp.qrrvglm.Rd 2712adc085e3af192860b8a966ca5d78 *man/pgamma.deriv.Rd 353c3ec4b00a91d4417bff571322ac49 *man/pgamma.deriv.unscaled.Rd 30c34e19012351e5099882d59a44baf3 *man/plotdeplot.lmscreg.Rd 34ef2a5ed9f1bfa72b34480b77845c5a *man/plotqrrvglm.Rd 4b406159b245829944a887876f3e713b *man/plotqtplot.lmscreg.Rd 963dc57ab1c5ac0a61c3be106219e332 *man/plotrcim0.Rd d464580d44d5a26f6c77288fc76793e9 *man/plotvgam.Rd f25590471e0e46cdeb4f1e392f2f04c7 *man/plotvgam.control.Rd 26fed3fa54d8840117d985ef588d081b *man/plotvglm.Rd 38cd2c95cba3ed8b74972d53df71426b *man/pneumo.Rd 2d5197dddcf4865060d96fcab0386ba2 *man/poisson.points.Rd 8c7d77fdf6933ab63d412be61e3fa0ec *man/poisson.pointsUC.Rd 0d809be1bcf0a62adb087dcaa54ad834 *man/poissonff.Rd 3fdf36e4c276e54a76645769d2ed72bc *man/polf.Rd 694c02f146b91973590273ad5cb4fd4f *man/polonoUC.Rd 164d34ccd72cc10ec9791bac5b1ed7fe *man/posbernUC.Rd e4df302db602b3c97346af14aca849d1 *man/posbernoulli.b.Rd 71dfacd1e28b00ac6ad6647c5a51ff5d *man/posbernoulli.t.Rd 3d033407af789f6441ea60bba129097e *man/posbernoulli.tb.Rd fd9833c48acf34cec92c36ca92734c1a *man/posbinomUC.Rd 5e0a62bad63329c57c079e34816d77ee *man/posbinomial.Rd 11c6a588bb83c54efcc4bb14d44de769 *man/posgeomUC.Rd 3ddd0176688f1477124da0158406b0d0 *man/posnegbinUC.Rd ee784865b3e4d1f9ae7a632dd1fdc47e *man/posnegbinomial.Rd a13ee6ad182d4f914aada7041f84332e *man/posnormUC.Rd 0cad58b1ace9b8b16c4f563c1e4af868 *man/posnormal.Rd f1013339a9d1c7745558d45d80f80cc4 *man/pospoisUC.Rd 4e8e0fd1d0589cc90578092f56fe8a47 *man/pospoisson.Rd 9fcfe365dcb6ac011a4657a1de224cfc *man/powerlink.Rd c4ba6c4f7daa451ff072c4dc6212482c *man/prats.Rd fb899ff42f1e50ad04f981036b560cd8 *man/predictqrrvglm.Rd 7d97e6a6bdb932f940ff3d02985de1b8 *man/predictvglm.Rd 9693278ea39deeb7b07a35ebc8c7454b *man/prentice74.Rd 5f4fbb060b2d8386d8d2bfde926d9d5d *man/prinia.Rd 
aee50b5fbbc6e550edf241b0bf7a037e *man/probit.Rd e5fb904fde9d3f7477b1fecd0c13fc67 *man/profilevglm.Rd 603ba91498e87096beca269b31993826 *man/propodds.Rd 241402d089ef4159f01fb4cd2c72b9a3 *man/prplot.Rd ab1399d5d5f71707fd46960dc3efad04 *man/put.smart.Rd 46b5a6ac57b987b42b63bb50b5ed9228 *man/qrrvglm.control.Rd 4d9e77b96958342af0ab14eb7efe6ed3 *man/qtplot.gumbel.Rd 70e2a9e06f485fe104196f9e8106eb42 *man/qtplot.lmscreg.Rd fda36bd798b4f79df236467bedfee142 *man/quasibinomialff.Rd 024c0d00d1d04c12bd1ae9df05705ec7 *man/quasipoissonff.Rd bbde69d1bad346cd4ad04763c96d6ffe *man/qvar.Rd d2436d379f06a0ba36a56ad83f5dfe59 *man/rayleigh.Rd a95c0df100dedc0b4e80be0659858441 *man/rayleighUC.Rd 1809b33212eea356d82f80436bb14976 *man/rcqo.Rd 297ed178a02d65d90eebe17ab7981127 *man/rdiric.Rd ecde375d79d50e892f54786fa4dd8327 *man/rec.exp1.Rd b2f3db92bcc08670d243983b9ccf050a *man/rec.normal.Rd 2acf75ab67b2a465ff82cebcce58d965 *man/reciprocal.Rd 892fc03d536bb55c2f3aeee3bf245268 *man/rhobit.Rd d907e0bbe40b4fb02b0763ab6076309e *man/riceUC.Rd 4ce494548d1104b52ff5362b6654e954 *man/riceff.Rd 4c325e0d372cb92f3d7257978c8d0051 *man/rigff.Rd aa87e119b195f40e7a9db901da1cea55 *man/rlplot.gevff.Rd a538533f1313c946eb0a2e5716513698 *man/rrar.Rd 17084ab97d2faf2df082ea5b5b4de6e8 *man/rrvglm-class.Rd c1b50b2396b52678c0c958f0495d3b21 *man/rrvglm.Rd bbcf2bc13a6493c9e1cd55dde8f7fc29 *man/rrvglm.control.Rd b6726fd139c577faab7116201b46a8f0 *man/rrvglm.optim.control.Rd ecc44804896b8f3d4a9d469a952fe9a6 *man/ruge.Rd 4ff8684ef85a13adc80a3f9cd21cdc05 *man/s.Rd 6341d64e2c95390be630ccedf4b1b89f *man/sc.studentt2.Rd 307960ccb037e87c18c6df4ba774f761 *man/sc.t2UC.Rd 8e497f59f72836001a13f44caa2127fd *man/seq2binomial.Rd 9985ea15444cc317e3e8fc2aad7200da *man/setup.smart.Rd 73f90d943920dda558b022de9d7e06cc *man/simplex.Rd 3a6df504e33f0a38210c7f07243af4b1 *man/simplexUC.Rd 55472b86057f6827b8da81a884066eff *man/simulate.vlm.Rd abb6970dec2c99061594d6429dfd156a *man/sinmad.Rd 95cbc5903a187d325c52c3d9d07ee252 *man/sinmadUC.Rd 31eae032b50794ce290bba66d7873df0 *man/skellam.Rd 2424940e3cff6d5a3ddd0ee99565ea39 *man/skellamUC.Rd 7fbee7ea95cc0dcf7cf04a73c229bf63 *man/skewnormUC.Rd 236f9327ac9a3f291712e2f064dc59f7 *man/skewnormal.Rd e4d5a9f6f1b67ad37ee6ea35fb4d4071 *man/slash.Rd 55e7c75854df8cfeeb3e871a9ed3b286 *man/slashUC.Rd 50be1c2b5a74611e6e872d4500bc85bf *man/sm.os.Rd 445ffb7a2db302f854a0d1bd7306281b *man/sm.ps.Rd 21bada3a13aca65ba49fb28127575144 *man/smart.expression.Rd 5726ef8bb900532df62b24bd4b7b8fe4 *man/smart.mode.is.Rd ef0defa486a8ea35cf8f0008dcacfd3c *man/smartpred.Rd 84ef8e73613b4e648d320236a8f61720 *man/sratio.Rd 9d461f98c11b2731f60b5d06460d2a35 *man/studentt.Rd c741f034c6a2a63d020f384e1057f51e *man/summarypvgam.Rd a1553608ed84216f359717b6aebc6079 *man/summaryvgam.Rd 545674564daf5ef50e64895b1741f790 *man/summaryvglm.Rd 9c715fb5d55b290394fae63526c67851 *man/tikuv.Rd 4777309021c9780ce7229ab29aa6ac6b *man/tikuvUC.Rd b36380d5c4189df2cb25a7c4644ce664 *man/tobit.Rd 3eac65d470861c7c8492b8c996ff88f6 *man/tobitUC.Rd 630313b8d7de3fb57ac2446cf966d652 *man/topple.Rd 198fe709514ab9e978abb67e5016521d *man/toppleUC.Rd b70afa170b0cf98a6c2a9eea9dc58483 *man/toxop.Rd 91785ee07f0a4a097b818f850a5fc7da *man/triangle.Rd 7d08069709e7beb97ace613a967cd70e *man/triangleUC.Rd 3184a6345636018d37b8c6ba9680df9c *man/trplot.Rd 9fb1ee580c26ba647e722eedb2680c93 *man/trplot.qrrvglm.Rd 346a3abab054b4d7c524770ffce9e1c6 *man/truncparetoUC.Rd 28b59fd1046ac92695de2e144bad7292 *man/truncweibull.Rd 90bd59df538b418b4250e7ca95b95ccb *man/ucberk.Rd 7bae24bbd750516a9045acc01bf8c7e6 
*man/undocumented-methods.Rd 356226011d38c8414e934fbae8b1663b *man/uninormal.Rd d10e73620e2c2358d1112319da8fbc26 *man/vcovvlm.Rd 3d62484053bac86524482b090739b567 *man/venice.Rd 8481f7f192836bfced10d8b3602fee0a *man/vgam-class.Rd b904739533434d3c3500acbb4bf54ffb *man/vgam.Rd 04dcc28c4bf1bddf845cf84eb18a1f2f *man/vgam.control.Rd 63631c9b681a02e88ddf1c561a056837 *man/vglm-class.Rd dabc62c2125bfb861cf5f46f9bc234dc *man/vglm.Rd f972a5160daa01f39cd570b5f54859e6 *man/vglm.control.Rd 49e00c8d497c046e65b9457c2f62baf6 *man/vglmff-class.Rd 1f2b7e686f738759842ed3440c022a8a *man/vonmises.Rd 5904db2ed5b83de53864477c981cf545 *man/vplot.profile.Rd 1e00d80d16a246873d74d6ff82bb08ea *man/vsmooth.spline.Rd 7977fc14291cc0edb2ecba062401f3fe *man/waitakere.Rd 336b5a87d1c8a1487c234571776198cc *man/waldff.Rd 74524fcccdf2a073afa7b16eb3ea2729 *man/weibull.mean.Rd 9ce85dfd3a2c65e056a5bdde204d9a26 *man/weibullR.Rd 926b0135ba89f142999133d8a386e018 *man/weightsvglm.Rd 590569895fbf4c69045cd5e4c75eae19 *man/wine.Rd a814b37503a9534c86789482ab81333f *man/wrapup.smart.Rd d37d3685ddda0eb2e65314d5df4d92c0 *man/yeo.johnson.Rd ba8a9e9e16a86f89352763dffb377942 *man/yip88.Rd 40d1463cbadda8c9659f9de4c059997a *man/yulesimon.Rd e2cb133a9b12c848730f8716944ade3e *man/yulesimonUC.Rd b3f454f3c002fd2eb31f4b38df803626 *man/zabinomUC.Rd c8d7a55e87998b3baaa10545046f4f5d *man/zabinomial.Rd 5ffb51ee3a3e86590daa4ce3ebd6b410 *man/zageomUC.Rd 59828103410f040ea22c4eba309cbf5e *man/zageometric.Rd 5033e639911af869f9aea3d5014067f6 *man/zanegbinUC.Rd f804f4552fe35142f67c3526fbfa2b54 *man/zanegbinomial.Rd cce8cebbf1f5b348ede3d0fe284201a1 *man/zapoisUC.Rd 75d7a648df8e8813590ec9723fccc6ee *man/zapoisson.Rd 6b4cfbabda746f57a831f2d762710a0f *man/zero.Rd 1a7e90a732be5e2c8bc2d7ba617bec9e *man/zeta.Rd 9364e9797c523a5114b6833c8452fc36 *man/zetaUC.Rd 73cd8928268a99e4870eb722e29b7c8d *man/zetaff.Rd 7b0a5c037cbadaa08859b1ce2cb860ec *man/zibinomUC.Rd 47ec000d6c8d502f60e9706b88d71f49 *man/zibinomial.Rd 070f470ac592a7d463c8fea7a239c787 *man/zigeomUC.Rd e83cc03487ae3b667eb4fde5647bc99a *man/zigeometric.Rd 5b0ce56f13bb255e3ee569373403b797 *man/zinegbinUC.Rd 028c3b0f5ae31395cbf7240a31b9d615 *man/zinegbinomial.Rd e5f47c53c8af2547c4cf3f867ebec4d2 *man/zipebcom.Rd 60fcc6b3c2d2cda0a13119e3d02fad08 *man/zipf.Rd 95724a61f6d83eb58c97246d14ddaaae *man/zipfUC.Rd 10be2ebb9b6f81f14463acd1cd226b2f *man/zipoisUC.Rd 7c6e79a518db7308271f3286270cc414 *man/zipoisson.Rd 1e98238d361280f9119e993fb5480910 *man/zoabetaR.Rd c6979016b5a054cf8e740fdf3e2a7620 *man/zoabetaUC.Rd f228920c30d2bf386c22f6042d6dac56 *src/VGAM_init.c f306f4262366ba8c13d31e6afd0e393b *src/caqo3.c ec1b60ab786ea922f9c9665ae352b147 *src/cqof.f 8daac3d03d7cb7a355a4c5ba548c9793 *src/ei.f 964e4ddd4ec4e99a1924ed513a3b124c *src/fgam.f 995dfd385f4c9857a7a4c804c9e7bb9c *src/gautr.c dc1ca5b4e9a67b6d48c25e7107112d9c *src/lerchphi.c c54afdee58cf86ecaf1072c492b49001 *src/lms.f 04cf8404fc40446bea25b2797f4a02f1 *src/muxr3.c 65ef45ba4e422c33db9848bb549ea93c *src/rgam.f 473bc0b2f4d6757fa9b397ac0d7c9e47 *src/rgam3.c 6aee7dc8f242ea6e9446ade5b7edeee5 *src/specfun3.c 4814bb73b4c3eedc7507ad99511c7dc5 *src/tyeepolygamma.f b8589097163df4491a094baff9352a6d *src/tyeepolygamma3.c 79cf39f1d83f25e29a6c56d344ea8d76 *src/vcall2.f 1875946181988bbab62f6a084c25ee12 *src/vdigami.f 790dbdf81e423556e0f09f68f2cdd591 *src/veigen.f 5ea414b5b42454c8efa73152c45ea62b *src/vgam.f 73b8d37419685738d4a7a151284299b4 *src/vgam3.c bbb4ca20dcf50cd985b411b9a65b68f2 *src/vlinpack1.f 14d546d52123df5ffa6dd5013470c89f *src/vlinpack2.f 5c49fbd5021e96518c4b3cc85ff82a39 
*src/vlinpack3.f 753359563526a9cd5ebac104dab2d754 *src/vmux.f 9083b462bcc275ee6dda47e97f1ebf94 *src/vmux3.c b19585d2495c46800b0c95f347fe89f9 *src/zeta3.c VGAM/DESCRIPTION0000755000176200001440000000320413135730670012521 0ustar liggesusersPackage: VGAM Version: 1.0-4 Date: 2017-07-24 Title: Vector Generalized Linear and Additive Models Author: Thomas W. Yee Maintainer: Thomas Yee Depends: R (>= 3.4.0), methods, stats, stats4, splines Suggests: VGAMdata, MASS, mgcv Description: An implementation of about 6 major classes of statistical regression models. At the heart of it are the vector generalized linear and additive model (VGLM/VGAM) classes, and the book "Vector Generalized Linear and Additive Models: With an Implementation in R" (Yee, 2015) gives details of the statistical framework and VGAM package. Currently only fixed-effects models are implemented, i.e., no random-effects models. Many (150+) models and distributions are estimated by maximum likelihood estimation (MLE) or penalized MLE, using Fisher scoring. VGLMs can be loosely thought of as multivariate GLMs. VGAMs are data-driven VGLMs (i.e., with smoothing). The other classes are RR-VGLMs (reduced-rank VGLMs), quadratic RR-VGLMs, reduced-rank VGAMs, RCIMs (row-column interaction models)---these classes perform constrained and unconstrained quadratic ordination (CQO/UQO) models in ecology, as well as constrained additive ordination (CAO). Note that these functions are subject to change; see the NEWS and ChangeLog files for latest changes. License: GPL-3 URL: https://www.stat.auckland.ac.nz/~yee/VGAM NeedsCompilation: yes BuildVignettes: yes LazyLoad: yes LazyData: yes Packaged: 2017-07-24 04:56:17 UTC; tyee001 Repository: CRAN Date/Publication: 2017-07-25 21:01:44 UTC VGAM/ChangeLog0000755000176200001440000000074513135276753012603 0ustar liggesusers2015-10-26 Thomas Yee * R/links.q (all link functions): big changes, when deriv >= 1 wrt 'inverse' argument. For example, logit(p, deriv = 1, inverse = TRUE) is now logit(p, deriv = 1, inverse = FALSE). Models fitted under <= VGAM 0.9-9 and saved might not work under >= VGAM 1.0-0. 2015-10-26 Thomas Yee * R/family.normal.R (tobit): tobit()@weight implements Fisher scoring entirely. VGAM/man/0000755000176200001440000000000013135276753011573 5ustar liggesusersVGAM/man/slash.Rd0000644000176200001440000000665113135276753013204 0ustar liggesusers\name{slash} \alias{slash} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Slash Distribution Family Function } \description{ Estimates the two parameters of the slash distribution by maximum likelihood estimation. } \usage{ slash(lmu = "identitylink", lsigma = "loge", imu = NULL, isigma = NULL, gprobs.y = ppoints(8), nsimEIM = 250, zero = NULL, smallno = .Machine$double.eps*1000) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu, lsigma}{ Parameter link functions applied to the \eqn{\mu}{mu} and \eqn{\sigma}{sigma} parameters, respectively. See \code{\link{Links}} for more choices. } % \item{emu, esigma}{ % List. Extra argument for each of the link functions. % See \code{earg} in \code{\link{Links}} for general information. %emu = list(), esigma = list(), % } \item{imu, isigma}{ Initial values. A \code{NULL} means an initial value is chosen internally. See \code{\link{CommonVGAMffArguments}} for more information. } \item{gprobs.y}{ Used to compute the initial values for \code{mu}. 
This argument is fed into the \code{probs} argument of \code{\link[stats]{quantile}} to construct a grid, which is used to evaluate the log-likelihood. This must have values between 0 and 1. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{smallno}{ Small positive number, used to test for the singularity. } } \details{ The standard slash distribution is the distribution of the ratio of a standard normal variable to an independent standard uniform(0,1) variable. It is mainly of use in simulation studies. One of its properties is that it has heavy tails, similar to those of the Cauchy. The general slash distribution can be obtained by replacing the univariate normal variable by a general normal \eqn{N(\mu,\sigma)}{N(mu,sigma)} random variable. It has a density that can be written as \deqn{f(y) = \left\{ \begin{array}{cl} 1/(2 \sigma \sqrt(2 \pi)) & if y=\mu, \\ 1-\exp(-(((y-\mu)/\sigma)^2)/2))/(\sqrt(2 pi) \sigma ((y-\mu)/\sigma)^2) & if y \ne \mu. \end{array} \right . }{% f(y) = 1/(2*sigma*sqrt(2*pi)) if y=mu = 1-exp(-(((x-mu)/sigma)^2)/2))/(sqrt(2*pi)*sigma*((x-mu)/sigma)^2) if y!=mu} where \eqn{\mu}{mu} and \eqn{\sigma}{sigma} are the mean and standard deviation of the univariate normal distribution respectively. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1994) \emph{Continuous Univariate Distributions}, 2nd edition, Volume 1, New York: Wiley. Kafadar, K. (1982) A Biweight Approach to the One-Sample Problem \emph{Journal of the American Statistical Association}, \bold{77}, 416--424. % multivariate skew-slash distribution. % jspi, 2006, 136: 209--220., by Wang, J. and Genton, M. G. } \author{ T. W. Yee and C. S. Chee } \note{ Fisher scoring using simulation is used. Convergence is often quite slow. Numerical problems may occur. } \seealso{ \code{\link{rslash}}, \code{\link{simulate.vlm}}. } \examples{ \dontrun{ sdata <- data.frame(y = rslash(n = 1000, mu = 4, sigma = exp(2))) fit <- vglm(y ~ 1, slash, data = sdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } } \keyword{models} \keyword{regression} VGAM/man/powerlink.Rd0000644000176200001440000000441513135276753014100 0ustar liggesusers\name{powerlink} \alias{powerlink} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Power Link Function } \description{ Computes the power transformation, including its inverse and the first two derivatives. } \usage{ powerlink(theta, power = 1, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{power}{ This denotes the power or exponent. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The power link function raises a parameter by a certain value of \code{power}. Care is needed because it is very easy to get numerical problems, e.g., if \code{power=0.5} and \code{theta} is negative. } \value{ For \code{powerlink} with \code{deriv = 0}, then \code{theta} raised to the power of \code{power}. And if \code{inverse = TRUE} then \code{theta} raised to the power of \code{1/power}. 
For \code{deriv = 1}, the function returns \emph{d} \code{theta} / \emph{d} \code{eta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. } %\references{ % McCullagh, P. and Nelder, J. A. (1989) % \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. % %} \author{ Thomas W. Yee } \note{ Numerical problems may occur for certain combinations of \code{theta} and \code{power}. Consequently this link function should be used with caution. } \seealso{ \code{\link{Links}}, \code{\link{loge}}. } \examples{ powerlink("a", power = 2, short = FALSE, tag = TRUE) powerlink(x <- 1:5) powerlink(x, power = 2) max(abs(powerlink(powerlink(x, power = 2), power = 2, inverse = TRUE) - x)) # Should be 0 powerlink(x <- (-5):5, power = 0.5) # Has NAs # 1/2 = 0.5 pdata <- data.frame(y = rbeta(n = 1000, shape1 = 2^2, shape2 = 3^2)) fit <- vglm(y ~ 1, betaR(lshape1 = powerlink(power = 0.5), i1 = 3, lshape2 = powerlink(power = 0.5), i2 = 7), data = pdata) t(coef(fit, matrix = TRUE)) Coef(fit) # Useful for intercept-only models vcov(fit, untransform = TRUE) } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/tobit.Rd0000644000176200001440000002550613135276753013203 0ustar liggesusers\name{tobit} \alias{tobit} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Tobit Model } \description{ Fits a Tobit model. } \usage{ tobit(Lower = 0, Upper = Inf, lmu = "identitylink", lsd = "loge", imu = NULL, isd = NULL, type.fitted = c("uncensored", "censored", "mean.obs"), byrow.arg = FALSE, imethod = 1, zero = "sd") } % 20151024 yettodo: maybe add a new option to 'type.fitted': % type.fitted = c("uncensored", "censored", "mean.obs", "truncated"), % where "truncated" is only concerned with values of y > Lower; % values of y <= Lower are ignored. % % % % % %- maybe also 'usage' for other objects documented here. \arguments{ \item{Lower}{ Numeric. It is the value \eqn{L} described below. Any value of the response that is less than this lower bound is assigned this value. Hence this should be the smallest possible value in the response variable. May be a vector (see below for more information). } \item{Upper}{ Numeric. It is the value \eqn{U} described below. Any value of the response that is greater than this upper bound is assigned this value. Hence this should be the largest possible value in the response variable. May be a vector (see below for more information). } \item{lmu, lsd}{ Parameter link functions for the mean and standard deviation parameters. See \code{\link{Links}} for more choices. The standard deviation is a positive quantity, therefore a log link is its default. } \item{imu, isd, byrow.arg}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{type.fitted}{ Type of fitted value returned. The first choice is the default and is the ordinary uncensored or unbounded linear model. If \code{"censored"} then the fitted values lie in the interval \eqn{[L, U]}. If \code{"mean.obs"} then the mean of the observations is returned; this is a doubly truncated normal distribution augmented by point masses at the truncation points (see \code{\link{dtobit}}). See \code{\link{CommonVGAMffArguments}} for more information. } \item{imethod}{ Initialization method. Either 1 or 2 or 3, this specifies some methods for obtaining initial values for the parameters. See \code{\link{CommonVGAMffArguments}} for information.
} \item{zero}{ A vector, e.g., containing the value 1 or 2. If so, the mean or standard deviation respectively are modelled as intercept-only. Setting \code{zero = NULL} means both linear/additive predictors are modelled as functions of the explanatory variables. See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The Tobit model can be written \deqn{y_i^* = x_i^T \beta + \varepsilon_i}{% y_i^* = x_i^T beta + e_i} where the \eqn{e_i \sim N(0,\sigma^2)}{e_i ~ N(0,sigma^2)} independently and \eqn{i=1,\ldots,n}{i=1,...,n}. However, we measure \eqn{y_i = y_i^*} only if \eqn{y_i^* > L} and \eqn{y_i^* < U} for some cutpoints \eqn{L} and \eqn{U}. Otherwise we let \eqn{y_i=L} or \eqn{y_i=U}, whichever is closer. The Tobit model is thus a multiple linear regression but with the responses censored if they fall below or above certain cutpoints. The defaults for \code{Lower} and \code{Upper} and \code{lmu} correspond to the \emph{standard} Tobit model. Fisher scoring is used for the standard and nonstandard models. By default, the mean \eqn{x_i^T \beta}{x_i^T beta} is the first linear/additive predictor, and the log of the standard deviation is the second linear/additive predictor. The Fisher information matrix for uncensored data is diagonal. The fitted values are the estimates of \eqn{x_i^T \beta}{x_i^T beta}. } \section{Warning }{ If values of the response and \code{Lower} and/or \code{Upper} are not integers then there is the danger that the value is wrongly interpreted as uncensored. For example, if the first 10 values of the response were \code{runif(10)} and \code{Lower} was assigned these values then testing \code{y[1:10] == Lower[1:10]} is numerically fraught. Currently, if any \code{y < Lower} or \code{y > Upper} then a warning is issued. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Tobin, J. (1958) Estimation of relationships for limited dependent variables. \emph{Econometrica}, \bold{26}, 24--36. } \author{ Thomas W. Yee } \note{ The response can be a matrix. If so, then \code{Lower} and \code{Upper} are recycled into a matrix with the number of columns equal to the number of responses, and the recycling is done row-wise \emph{if} \code{byrow.arg = TRUE}. The default order is as \code{\link[base]{matrix}}, which is \code{byrow.arg = FALSE}. For example, these are returned in \code{fit4@misc$Lower} and \code{fit4@misc$Upper} below. If there is no censoring then \code{\link{uninormal}} is recommended instead. Any value of the response less than \code{Lower} or greater than \code{Upper} will be assigned the value \code{Lower} and \code{Upper} respectively, and a warning will be issued. The fitted object has components \code{censoredL} and \code{censoredU} in the \code{extra} slot which specify whether observations are censored in that direction. The function \code{\link{cens.normal}} is an alternative to \code{tobit()}. % 20150417; McClelland Kemp bug: When obtaining initial values, if the algorithm would otherwise want to fit an underdetermined system of equations, then it uses the entire data set instead. This might result in rather poor quality initial values, and consequently, monitoring convergence is advised. } \seealso{ \code{\link{rtobit}}, \code{\link{cens.normal}}, \code{\link{uninormal}}, \code{\link{double.cens.normal}}, \code{\link{posnormal}}, \code{\link{CommonVGAMffArguments}}, \code{\link[stats:Normal]{rnorm}}.
} \examples{ # Here, fit1 is a standard Tobit model and fit2 is a nonstandard Tobit model tdata <- data.frame(x2 = seq(-1, 1, length = (nn <- 100))) set.seed(1) Lower <- 1; Upper <- 4 # For the nonstandard Tobit model tdata <- transform(tdata, Lower.vec = rnorm(nn, Lower, 0.5), Upper.vec = rnorm(nn, Upper, 0.5)) meanfun1 <- function(x) 0 + 2*x meanfun2 <- function(x) 2 + 2*x meanfun3 <- function(x) 2 + 2*x meanfun4 <- function(x) 3 + 2*x tdata <- transform(tdata, y1 = rtobit(nn, mean = meanfun1(x2)), # Standard Tobit model y2 = rtobit(nn, mean = meanfun2(x2), Lower = Lower, Upper = Upper), y3 = rtobit(nn, mean = meanfun3(x2), Lower = Lower.vec, Upper = Upper.vec), y4 = rtobit(nn, mean = meanfun3(x2), Lower = Lower.vec, Upper = Upper.vec)) with(tdata, table(y1 == 0)) # How many censored values? with(tdata, table(y2 == Lower | y2 == Upper)) # How many censored values? with(tdata, table(attr(y2, "cenL"))) with(tdata, table(attr(y2, "cenU"))) fit1 <- vglm(y1 ~ x2, tobit, data = tdata, trace = TRUE) coef(fit1, matrix = TRUE) summary(fit1) fit2 <- vglm(y2 ~ x2, tobit(Lower = Lower, Upper = Upper, type.f = "cens"), data = tdata, trace = TRUE) table(fit2@extra$censoredL) table(fit2@extra$censoredU) coef(fit2, matrix = TRUE) fit3 <- vglm(y3 ~ x2, tobit(Lower = with(tdata, Lower.vec), Upper = with(tdata, Upper.vec), type.f = "cens"), data = tdata, trace = TRUE) table(fit3@extra$censoredL) table(fit3@extra$censoredU) coef(fit3, matrix = TRUE) # fit4 is fit3 but with type.fitted = "uncen". fit4 <- vglm(cbind(y3, y4) ~ x2, tobit(Lower = rep(with(tdata, Lower.vec), each = 2), Upper = rep(with(tdata, Upper.vec), each = 2), byrow.arg = TRUE), data = tdata, crit = "coeff", trace = TRUE) head(fit4@extra$censoredL) # A matrix head(fit4@extra$censoredU) # A matrix head(fit4@misc$Lower) # A matrix head(fit4@misc$Upper) # A matrix coef(fit4, matrix = TRUE) \dontrun{ # Plot fit1--fit4 par(mfrow = c(2, 2)) plot(y1 ~ x2, tdata, las = 1, main = "Standard Tobit model", col = as.numeric(attr(y1, "cenL")) + 3, pch = as.numeric(attr(y1, "cenL")) + 1) legend(x = "topleft", leg = c("censored", "uncensored"), pch = c(2, 1), col = c("blue", "green")) legend(-1.0, 2.5, c("Truth", "Estimate", "Naive"), col = c("purple", "orange", "black"), lwd = 2, lty = c(1, 2, 2)) lines(meanfun1(x2) ~ x2, tdata, col = "purple", lwd = 2) lines(fitted(fit1) ~ x2, tdata, col = "orange", lwd = 2, lty = 2) lines(fitted(lm(y1 ~ x2, tdata)) ~ x2, tdata, col = "black", lty = 2, lwd = 2) # This is simplest but wrong! plot(y2 ~ x2, data = tdata, las = 1, main = "Tobit model", col = as.numeric(attr(y2, "cenL")) + 3 + as.numeric(attr(y2, "cenU")), pch = as.numeric(attr(y2, "cenL")) + 1 + as.numeric(attr(y2, "cenU"))) legend(x = "topleft", leg = c("censored", "uncensored"), pch = c(2, 1), col = c("blue", "green")) legend(-1.0, 3.5, c("Truth", "Estimate", "Naive"), col = c("purple", "orange", "black"), lwd = 2, lty = c(1, 2, 2)) lines(meanfun2(x2) ~ x2, tdata, col = "purple", lwd = 2) lines(fitted(fit2) ~ x2, tdata, col = "orange", lwd = 2, lty = 2) lines(fitted(lm(y2 ~ x2, tdata)) ~ x2, tdata, col = "black", lty = 2, lwd = 2) # This is simplest but wrong! 
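# (A sketch, not part of the original example: assuming the setup above,
# type.fitted = "mean.obs" requests fitted values that are the mean of
# the observations, i.e., a doubly truncated normal augmented by point
# masses at the truncation points; see the arguments section.)
fit2b <- vglm(y2 ~ x2, tobit(Lower = Lower, Upper = Upper,
                             type.fitted = "mean.obs"), data = tdata)
head(fitted(fit2b))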
plot(y3 ~ x2, data = tdata, las = 1, main = "Tobit model with nonconstant censor levels", col = as.numeric(attr(y3, "cenL")) + 2 + as.numeric(attr(y3, "cenU") * 2), pch = as.numeric(attr(y3, "cenL")) + 1 + as.numeric(attr(y3, "cenU") * 2)) legend(x = "topleft", leg = c("censored", "uncensored"), pch = c(2, 1), col = c("blue", "green")) legend(-1.0, 3.5, c("Truth", "Estimate", "Naive"), col = c("purple", "orange", "black"), lwd = 2, lty = c(1, 2, 2)) lines(meanfun3(x2) ~ x2, tdata, col = "purple", lwd = 2) lines(fitted(fit3) ~ x2, tdata, col = "orange", lwd = 2, lty = 2) lines(fitted(lm(y3 ~ x2, tdata)) ~ x2, tdata, col = "black", lty = 2, lwd = 2) # This is simplest but wrong! plot(y3 ~ x2, data = tdata, las = 1, main = "Tobit model with nonconstant censor levels", col = as.numeric(attr(y3, "cenL")) + 2 + as.numeric(attr(y3, "cenU") * 2), pch = as.numeric(attr(y3, "cenL")) + 1 + as.numeric(attr(y3, "cenU") * 2)) legend(x = "topleft", leg = c("censored", "uncensored"), pch = c(2, 1), col = c("blue", "green")) legend(-1.0, 3.5, c("Truth", "Estimate", "Naive"), col = c("purple", "orange", "black"), lwd = 2, lty = c(1, 2, 2)) lines(meanfun3(x2) ~ x2, data = tdata, col = "purple", lwd = 2) lines(fitted(fit4)[, 1] ~ x2, tdata, col = "orange", lwd = 2, lty = 2) lines(fitted(lm(y3 ~ x2, tdata)) ~ x2, data = tdata, col = "black", lty = 2, lwd = 2) # This is simplest but wrong! } } \keyword{models} \keyword{regression} VGAM/man/trplot.Rd0000644000176200001440000000411713135276753013411 0ustar liggesusers\name{trplot} \alias{trplot} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Trajectory Plot } \description{ Generic function for a trajectory plot. } \usage{ trplot(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ An object for which a trajectory plot is meaningful. } \item{\dots}{ Other arguments fed into the specific methods function of the model. They usually are graphical parameters, and sometimes they are fed into the methods function for \code{\link{Coef}}. } } \details{ Trajectory plots can be defined in different ways for different models. Many models have no such notion or definition. For quadratic and additive ordination models they plot the fitted values of two species against each other (more than two is theoretically possible, but not implemented in this software yet). } \value{ The value returned depends specifically on the methods function invoked. } \references{ Yee, T. W. (2012) On constrained and unconstrained quadratic ordination. \emph{Manuscript in preparation}. } \author{ Thomas W. Yee } %\note{ %} \seealso{ \code{\link{trplot.qrrvglm}}, \code{\link{perspqrrvglm}}, \code{\link{lvplot}}. 
} \examples{ \dontrun{ set.seed(123) hspider[, 1:6] <- scale(hspider[, 1:6]) # Standardized environmental vars p1cqo <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, quasipoissonff, data = hspider, Crow1positive = FALSE) nos <- ncol(depvar(p1cqo)) clr <- 1:nos # OR (1:(nos+1))[-7] to omit yellow trplot(p1cqo, which.species = 1:3, log = "xy", col = c("blue", "orange", "green"), lwd = 2, label = TRUE) -> ii legend(0.00005, 0.3, paste(ii$species[, 1], ii$species[, 2], sep = " and "), lwd = 2, lty = 1, col = c("blue", "orange", "green")) abline(a = 0, b = 1, lty = "dashed", col = "grey") } } \keyword{models} \keyword{regression} VGAM/man/identitylink.Rd0000644000176200001440000000463613135276753014602 0ustar liggesusers\name{identitylink} \alias{identitylink} \alias{negidentity} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Identity Link Function } \description{ Computes the identity transformation, including its inverse and the first two derivatives. } \usage{ identitylink(theta, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) negidentity(theta, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The identity link function \eqn{g(\theta)=\theta}{g(theta)=theta} should be available to every parameter estimated by the \pkg{VGAM} library. However, it usually results in numerical problems because the estimates lie outside the permitted range. Consequently, the result may contain \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. The function \code{negidentity} is the negative-identity link function and corresponds to \eqn{g(\theta)=-\theta}{g(theta)=-theta}. This is useful for some models, e.g., in the literature supporting the \code{\link{gevff}} function it seems that half of the authors use \eqn{\xi=-k}{xi=-k} for the shape parameter and the other half use \eqn{k} instead of \eqn{\xi}{xi}. } \value{ For \code{identitylink()}: for \code{deriv = 0}, the identity of \code{theta}, i.e., \code{theta} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{theta}. For \code{deriv = 1}, then the function returns \emph{d} \code{eta} / \emph{d} \code{theta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. For \code{negidentity()}: the results are similar to \code{identitylink()} except for a sign change in most cases. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \seealso{ \code{\link{Links}}, \code{\link{loge}}, \code{\link{logit}}, \code{\link{probit}}, \code{\link{powerlink}}. } \examples{ identitylink((-5):5) identitylink((-5):5, deriv = 1) identitylink((-5):5, deriv = 2) negidentity((-5):5) negidentity((-5):5, deriv = 1) negidentity((-5):5, deriv = 2) } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/finney44.Rd0000644000176200001440000000261013135276753013521 0ustar liggesusers\name{finney44} \alias{finney44} \docType{data} \title{ Toxicity trial for insects %% ~~ data name/kind ... ~~ } \description{ A data frame of a toxicity trial. %% ~~ A concise (1-5 lines) description of the dataset. 
~~ } \usage{data(finney44)} \format{ A data frame with 6 observations on the following 3 variables. \describe{ \item{\code{pconc}}{a numeric vector, percent concentration of pyrethrins. } \item{\code{hatched}}{number of eggs that hatched. } \item{\code{unhatched}}{number of eggs that did not hatch. } } } \details{ Finney (1944) describes a toxicity trial of five different concentrations of pyrethrins (percent) plus a control that were administered to eggs of \emph{Ephestia kuhniella}. The natural mortality rate is large, and a common adjustment is to use Abbott's formula. } %\source{ %% ~~ reference to a publication or URL from which the data were obtained ~~ %} \references{ Finney, D. J. (1944) The application of the probit method to toxicity test data adjusted for mortality in the controls. \emph{Annals of Applied Biology}, \bold{31}, 68--74. Abbott, W. S. (1925) A method of computing the effectiveness of an insecticide. \emph{Journal of Economic Entomology}, \bold{18}, 265--267. %% ~~ possibly secondary sources and usages ~~ } \examples{ data(finney44) transform(finney44, mortality = unhatched / (hatched + unhatched)) } \keyword{datasets} VGAM/man/VGAM-package.Rd0000644000176200001440000002146213135276753014212 0ustar liggesusers\name{VGAM-package} \alias{VGAM-package} \alias{VGAM} \docType{package} \title{ Vector Generalized Linear and Additive Models } \description{ \pkg{VGAM} provides functions for fitting vector generalized linear and additive models (VGLMs and VGAMs), and associated models (Reduced-rank VGLMs, Quadratic RR-VGLMs, Reduced-rank VGAMs). This package fits many models and distributions by maximum likelihood estimation (MLE) or penalized MLE. Also fits constrained ordination models in ecology such as constrained quadratic ordination (CQO). } \details{ This package centers on the \emph{iteratively reweighted least squares} (IRLS) algorithm. Other key words include Fisher scoring, additive models, penalized likelihood, reduced-rank regression and constrained ordination. The central modelling functions are \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}}, \code{\link{rcim}}, \code{\link{cqo}}, \code{\link{cao}}. For detailed control of fitting, each of these has its own control function, e.g., \code{\link{vglm.control}}. The package uses S4 (see \code{\link[methods]{methods-package}}). A companion package called \pkg{VGAMdata} contains some larger data sets which were shifted from \pkg{VGAM}. The classes of GLMs and GAMs are special cases of VGLMs and VGAMs. The VGLM/VGAM framework is intended to be very general so that it encompasses as many distributions and models as possible. VGLMs are limited only by the assumption that the regression coefficients enter through a set of linear predictors. The VGLM class is very large and encompasses a wide range of multivariate response types and models, e.g., it includes univariate and multivariate distributions, categorical data analysis, time series, survival analysis, generalized estimating equations, extreme values, correlated binary data, quantile and expectile regression, bioassay data and nonlinear least-squares problems. Crudely, VGAMs are to VGLMs what GAMs are to GLMs. Two types of VGAMs are implemented: 1st-generation VGAMs with \code{\link{s}} use vector backfitting, while 2nd-generation VGAMs with \code{\link{sm.os}} and \code{\link{sm.ps}} use O-splines and P-splines, do not use the backfitting algorithm, and have automatic smoothing parameter selection. The former is older and is based on Yee and Wild (1996).
The latter is more modern (Yee, Somchit and Wild, 2017) but it requires a reasonably large number of observations to work well. %(e.g., \eqn{n > 500}, say); and it does not always converge %and is not entirely reliable. %Vector smoothing (see \code{\link{vsmooth.spline}}) allows several %additive predictors to be estimated as a sum of smooth functions of %the covariates. For a complete list of this package, use \code{library(help = "VGAM")}. New \pkg{VGAM} family functions are continually being written and added to the package. A monograph about VGLM and VGAMs etc. appeared in October 2015. %but unfortunately will not be finished for a while. %~~ An overview of how to use the package, including the most important ~~ %~~ functions ~~ } \author{ Thomas W. Yee, \email{t.yee@auckland.ac.nz}. Maintainer: Thomas Yee \email{t.yee@auckland.ac.nz}. } \section{Warning}{ This package is undergoing continual development and improvement, therefore users should treat everything as subject to change. This includes the family function names, argument names, many of the internals, the use of link functions, and slot names. For example, all link functions may be renamed so that they end in \code{"link"}, e.g., \code{loglink()} instead of \code{loge()}. Some future pain can be avoided by using good programming techniques, e.g., using extractor/accessor functions such as \code{coef()}, \code{weights()}, \code{vcov()}, \code{predict()}. Nevertheless, please expect changes in all aspects of the package. See the \code{NEWS} file for a list of changes from version to version. } \references{ Yee, T. W. (2015) Vector Generalized Linear and Additive Models: With an Implementation in R. New York, USA: \emph{Springer}. Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. Yee, T. W. and Stephenson, A. G. (2007) Vector generalized linear and additive extreme value models. \emph{Extremes}, \bold{10}, 1--19. Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. Yee, T. W. (2008) The \code{VGAM} Package. \emph{R News}, \bold{8}, 28--39. Yee, T. W. (2010) The \pkg{VGAM} package for categorical data analysis. \emph{Journal of Statistical Software}, \bold{32}, 1--34. \url{http://www.jstatsoft.org/v32/i10/}. Yee, T. W. (2014) Reduced-rank vector generalized linear models with two linear predictors. \emph{Computational Statistics and Data Analysis}, \bold{71}, 889--902. Yee, T. W. and Somchit, C. and Wild, C. J. (2017) Penalized vector generalized additive models. Manuscript in preparation. My website for the \pkg{VGAM} package and book is at \url{https://www.stat.auckland.ac.nz/~yee} and I hope to put more resources there in the future, especially as relating to my book. %(Oldish) documentation accompanying the \pkg{VGAM} package at %\url{http://www.stat.auckland.ac.nz/~yee/VGAM} %contains some further information and examples. 
} \keyword{package} \keyword{models} \keyword{regression} \seealso{ \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}}, \code{\link{rcim}}, \code{\link{cqo}}, \code{\link{TypicalVGAMfamilyFunction}}, \code{\link{CommonVGAMffArguments}}, \code{\link{Links}}, \url{https://CRAN.R-project.org/package=VGAM}. %~~ Optional links to other man pages, e.g. ~~ %~~ \code{\link[:-package]{}} ~~ } \examples{ # Example 1; proportional odds model pneumo <- transform(pneumo, let = log(exposure.time)) (fit1 <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)) depvar(fit1) # Better than using fit1@y; dependent variable (response) weights(fit1, type = "prior") # Number of observations coef(fit1, matrix = TRUE) # p.179, in McCullagh and Nelder (1989) constraints(fit1) # Constraint matrices summary(fit1) # Example 2; zero-inflated Poisson model zdata <- data.frame(x2 = runif(nn <- 2000)) zdata <- transform(zdata, pstr0 = logit(-0.5 + 1*x2, inverse = TRUE), lambda = loge( 0.5 + 2*x2, inverse = TRUE)) zdata <- transform(zdata, y = rzipois(nn, lambda, pstr0 = pstr0)) with(zdata, table(y)) fit2 <- vglm(y ~ x2, zipoisson, data = zdata, trace = TRUE) coef(fit2, matrix = TRUE) # These should agree with the above values # Example 3; fit a two species GAM simultaneously fit3 <- vgam(cbind(agaaus, kniexc) ~ s(altitude, df = c(2, 3)), binomialff(multiple.responses = TRUE), data = hunua) coef(fit3, matrix = TRUE) # Not really interpretable \dontrun{ plot(fit3, se = TRUE, overlay = TRUE, lcol = 3:4, scol = 3:4) ooo <- with(hunua, order(altitude)) with(hunua, matplot(altitude[ooo], fitted(fit3)[ooo, ], type = "l", lwd = 2, col = 3:4, xlab = "Altitude (m)", ylab = "Probability of presence", las = 1, main = "Two plant species' response curves", ylim = c(0, 0.8))) with(hunua, rug(altitude)) } # Example 4; LMS quantile regression fit4 <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), data = bmi.nz, trace = TRUE) head(predict(fit4)) head(fitted(fit4)) head(bmi.nz) # Person 1 is near the lower quartile among people his age head(cdf(fit4)) \dontrun{ par(mfrow = c(1, 1), bty = "l", mar = c(5,4,4,3)+0.1, xpd = TRUE) qtplot(fit4, percentiles = c(5,50,90,99), main = "Quantiles", las = 1, xlim = c(15, 90), ylab = "BMI", lwd = 2, lcol = 4) # Quantile plot ygrid <- seq(15, 43, len = 100) # BMI ranges par(mfrow = c(1, 1), lwd = 2) # Density plot aa <- deplot(fit4, x0 = 20, y = ygrid, xlab = "BMI", col = "black", main = "Density functions at Age = 20 (black), 42 (red) and 55 (blue)") aa aa <- deplot(fit4, x0 = 42, y = ygrid, add = TRUE, llty = 2, col = "red") aa <- deplot(fit4, x0 = 55, y = ygrid, add = TRUE, llty = 4, col = "blue", Attach = TRUE) aa@post$deplot # Contains density function values } # Example 5; GEV distribution for extremes (fit5 <- vglm(maxtemp ~ 1, gevff, data = oxtemp, trace = TRUE)) head(fitted(fit5)) coef(fit5, matrix = TRUE) Coef(fit5) vcov(fit5) vcov(fit5, untransform = TRUE) sqrt(diag(vcov(fit5))) # Approximate standard errors \dontrun{ rlplot(fit5) } } % Until my monograph comes out and this package is released as version 1.0-0 % the user should treat everything subject to change. VGAM/man/ducklings.Rd0000644000176200001440000000271013135276753014045 0ustar liggesusers\name{ducklings} \alias{ducklings} \docType{data} \title{ Relative Frequencies of Serum Proteins in White Pekin Ducklings %% ~~ data name/kind ... ~~ } \description{ Relative frequencies of serum proteins in white Pekin ducklings as determined by electrophoresis. 
} \usage{data(ducklings)} \format{ The format is: chr "ducklings" } \details{ Columns \code{p1}, \code{p2}, \code{p3} stand for pre-albumin, albumin, globulins respectively. These were collected from 3-week-old white Pekin ducklings. Let \eqn{Y_1}{Y1} be proportional to the total milligrams of pre-albumin in the blood serum of a duckling. Similarly, let \eqn{Y_2}{Y2} and \eqn{Y_3}{Y3} be directly proportional to the same factor as \eqn{Y_1}{Y1} to the total milligrams respectively of albumin and globulins in its blood serum. The proportion of pre-albumin is given by \eqn{Y_1/(Y_1 + Y_2 + Y_3)}{Y1/(Y1 + Y2 + Y3)}, and similarly for the others. % Each set of 3 measurements is based on from 7 to 12 individual ducklings. %% ~~ If necessary, more details than the __description__ above ~~ } \source{ Mosimann, J. E. (1962) On the compound multinomial distribution, the multivariate \eqn{\beta}{beta}-distribution, and correlations among proportions. \emph{Biometrika}, \bold{49}, 65--82. } \seealso{ \code{\link{dirichlet}}. } %%\references{ %% ~~ possibly secondary sources and usages ~~ %%} \examples{ print(ducklings) } \keyword{datasets} VGAM/man/gengammaUC.Rd0000644000176200001440000000542313135276753014072 0ustar liggesusers\name{gengammaUC} \alias{gengammaUC} \alias{dgengamma.stacy} \alias{pgengamma.stacy} \alias{qgengamma.stacy} \alias{rgengamma.stacy} \title{The Generalized Gamma Distribution } \description{ Density, distribution function, quantile function and random generation for the generalized gamma distribution with scale parameter \code{scale}, and parameters \code{d} and \code{k}. } \usage{ dgengamma.stacy(x, scale = 1, d, k, log = FALSE) pgengamma.stacy(q, scale = 1, d, k, lower.tail = TRUE, log.p = FALSE) qgengamma.stacy(p, scale = 1, d, k, lower.tail = TRUE, log.p = FALSE) rgengamma.stacy(n, scale = 1, d, k) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Same as in \code{\link[stats]{runif}}. } \item{scale}{the (positive) scale parameter \eqn{b}.} \item{d, k}{the (positive) parameters \eqn{d} and \eqn{k}. Both can be thought of as shape parameters, where \eqn{d} is of the Weibull-type and \eqn{k} is of the gamma-type. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dgengamma.stacy} gives the density, \code{pgengamma.stacy} gives the distribution function, \code{qgengamma.stacy} gives the quantile function, and \code{rgengamma.stacy} generates random deviates. } \references{ Stacy, E. W. and Mihram, G. A. (1965) Parameter estimation for a generalized gamma distribution. \emph{Technometrics}, \bold{7}, 349--358. } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{gengamma.stacy}}, the \pkg{VGAM} family function for estimating the generalized gamma distribution by maximum likelihood estimation, for formulae and other details. Apart from \code{n}, all the above arguments may be vectors and are recycled to the appropriate length if necessary. } % \note{ % } \seealso{ \code{\link{gengamma.stacy}}.
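% A sketch of a useful check (an identity of Stacy's parameterization,
% not stated above: assuming it, (Y/scale)^d has a gamma distribution
% with shape parameter k). One then expects, e.g.,
% pgengamma.stacy(3, scale = 2, d = 1.5, k = 6)
% to agree with
% pgamma((3/2)^1.5, 6).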
} \examples{ \dontrun{ x <- seq(0, 14, by = 0.01); d <- 1.5; Scale <- 2; k <- 6 plot(x, dgengamma.stacy(x, Scale, d = d, k = k), type = "l", col = "blue", ylim = 0:1, main = "Blue is density, orange is cumulative distribution function", sub = "Purple are 5,10,...,95 percentiles", las = 1, ylab = "") abline(h = 0, col = "blue", lty = 2) lines(qgengamma.stacy(seq(0.05, 0.95, by = 0.05), Scale, d = d, k = k), dgengamma.stacy(qgengamma.stacy(seq(0.05, 0.95, by = 0.05), Scale, d = d, k = k), Scale, d = d, k = k), col = "purple", lty = 3, type = "h") lines(x, pgengamma.stacy(x, Scale, d = d, k = k), col = "orange") abline(h = 0, lty = 2) } } \keyword{distribution} VGAM/man/Coef.Rd0000644000176200001440000000415713135276753012745 0ustar liggesusers\name{Coef} \alias{Coef} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Computes Model Coefficients and Quantities } \description{ \code{Coef} is a generic function which computes model coefficients from objects returned by modelling functions. It is an auxiliary function to \code{\link[stats]{coef}} that enables extra capabilities for some specific models. } \usage{ Coef(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ An object for which the computation of other types of model coefficients or quantities is meaningful. } \item{\dots}{ Other arguments fed into the specific methods function of the model. } } \details{ This function can often be useful for \code{\link{vglm}} objects with just an intercept term in the RHS of the formula, e.g., \code{y ~ 1}. Then often this function will apply the inverse link functions to the parameters. See the example below. For reduced-rank VGLMs, this function can return the \bold{A}, \bold{C} matrices, etc. For quadratic and additive ordination models, this function can return ecologically meaningful quantities such as tolerances, optimums and maximums. } \value{ The value returned depends specifically on the methods function invoked. } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. } \author{ Thomas W. Yee } %\note{ %} \section{Warning }{ This function may not work for \emph{all} \pkg{VGAM} family functions. You should check your results on some artificial data before applying it to models fitted to real data. } \seealso{ \code{\link[stats]{coef}}, \code{\link{Coef.vlm}}, \code{\link{Coef.rrvglm}}, \code{\link{Coef.qrrvglm}}, \code{\link{depvar}}. } \examples{ nn <- 1000 bdata <- data.frame(y = rbeta(nn, shape1 = 1, shape2 = 3)) # Original scale fit <- vglm(y ~ 1, betaR, data = bdata, trace = TRUE) # Intercept-only model coef(fit, matrix = TRUE) # Both on a log scale Coef(fit) # On the original scale } \keyword{models} \keyword{regression} VGAM/man/summaryvglm.Rd0000644000176200001440000001435113135276753014451 0ustar liggesusers% Adapted from file src/library/stats/man/summary.glm.Rd % Part of the R package, http://www.R-project.org % Copyright 1995-2013 R Core Team % Distributed under GPL 2 or later \name{summaryvglm} \alias{summaryvglm} \alias{show.summary.vglm} \title{Summarizing Vector Generalized Linear Model Fits} \usage{ summaryvglm(object, correlation = FALSE, dispersion = NULL, digits = NULL, presid = TRUE, hde.NA = TRUE, threshold.hde = 0.001, signif.stars = getOption("show.signif.stars"), nopredictors = FALSE, ...)
\method{show}{summary.vglm}(x, digits = max(3L, getOption("digits") - 3L), quote = TRUE, prefix = "", presid = TRUE, hde.NA = TRUE, threshold.hde = 0.001, signif.stars = NULL, nopredictors = NULL, top.half.only = FALSE, ...) } \arguments{ \item{object}{an object of class \code{"vglm"}, usually, a result of a call to \code{\link{vglm}}.} \item{x}{an object of class \code{"summary.vglm"}, usually, a result of a call to \code{summaryvglm()}.} \item{dispersion}{ used mainly for GLMs. See \code{\link[stats]{summary.glm}}. } \item{correlation}{logical; if \code{TRUE}, the correlation matrix of the estimated parameters is returned and printed.} \item{digits}{the number of significant digits to use when printing. } % \item{symbolic.cor}{logical; if \code{TRUE}, print the correlations in % a symbolic form (see \code{\link{symnum}}) rather than as numbers.} \item{signif.stars}{logical; if \code{TRUE}, \sQuote{significance stars} are printed for each coefficient. } % \item{\dots}{further arguments passed to or from other methods.} \item{presid}{Pearson residuals; print out some summary statistics of these? } \item{hde.NA}{logical; if a test for the Hauck-Donner effect is done (for each coefficient) and it is affirmative, should that Wald test p-value be replaced by an \code{NA}? The default is to do so. Setting \code{hde.NA = FALSE} will print the p-value even though it will be biassed upwards. } \item{threshold.hde}{numeric; used if \code{hde.NA = TRUE} and the HDE is present for some coefficients. Only p-values greater than this argument will be replaced by an \code{NA}, the reason being that small p-values will already be statistically significant. } \item{quote}{ Fed into \code{print()}. } \item{nopredictors}{ logical; if \code{TRUE} the names of the linear predictors are not printed out. The default is that they are. } \item{top.half.only}{ logical; if \code{TRUE} then only print out the top half of the usual output. Used for P-VGAMs. } \item{prefix}{ Not used. } \item{\ldots}{ Not used. } } \description{ These functions are all \code{\link{methods}} for class \code{vglm} or \code{summary.vglm} objects. } \details{ \code{show.summary.vglm()} tries to be smart about formatting the coefficients, standard errors, etc., and additionally gives \sQuote{significance stars} if \code{signif.stars} is \code{TRUE}. The \code{coefficients} component of the result gives the estimated coefficients and their estimated standard errors, together with their ratio. This third column is labelled \code{z value} regardless of whether the dispersion is estimated or known (or fixed by the family). A fourth column gives the two-tailed p-value corresponding to the z ratio based on a Normal reference distribution. % (It is possible that the dispersion is % not known and there are no residual degrees of freedom from which to % estimate it. In that case the estimate is \code{NaN}.) % % % In general, the t distribution is not used, but the normal distribution is used. % Aliased coefficients are omitted in the returned object but restored % by the \code{print} method. Correlations are printed to two decimal places (or symbolically): to see the actual correlations print \code{summary(object)@correlation} directly. % The dispersion of a GLM is not used in the fitting process, but it is % needed to find standard errors.
% If \code{dispersion} is not supplied or \code{NULL}, % the dispersion is taken as \code{1} for the \code{binomial} and % \code{Poisson} families, and otherwise estimated by the residual % Chisquared statistic (calculated from cases with non-zero weights) % divided by the residual degrees of freedom. % \code{summary} can be used with Gaussian \code{glm} fits to handle the % case of a linear regression with known error variance, something not % handled by \code{\link{summary.lm}}. The Hauck-Donner effect (HDE) is tested for some models; see \code{\link{hdeff.vglm}} for details. Arguments \code{hde.NA} and \code{threshold.hde} here are meant to give some control over the output for this aberration of the Wald statistic (so that the p-value is biassed upwards). If the HDE is present, using \code{\link{lrp.vglm}} is a good alternative as p-values based on the likelihood ratio test tend to be more accurate than those from Wald tests and do not suffer from the HDE. % 20151215 It is possible for programmers to write a methods function to print out extra quantities when \code{summary(vglmObject)} is called. The generic function is \code{summaryvglmS4VGAM()}, and one can use the S4 function \code{\link[methods]{setMethod}} to compute the quantities needed. Also needed is the generic function \code{showsummaryvglmS4VGAM()} to actually print the quantities out. } \value{ \code{summaryvglm} returns an object of class \code{"summary.vglm"}; see \code{\link{summary.vglm-class}}. } \seealso{ \code{\link{vglm}}, \code{\link{confintvglm}}, \code{\link{vcovvlm}}, \code{\link[stats]{summary.glm}}, \code{\link[stats]{summary.lm}}, \code{\link[base]{summary}}, \code{\link{hdeff.vglm}}, \code{\link{lrp.vglm}}. } \examples{ ## For examples see example(glm) pneumo <- transform(pneumo, let = log(exposure.time)) (fit <- vglm(cbind(normal, mild, severe) ~ let, acat, data = pneumo)) coef(fit, matrix = TRUE) summary(fit) coef(summary(fit)) } \keyword{models} \keyword{regression} %\method{summary}{vglm}(object, correlation = FALSE, % dispersion = NULL, digits = NULL, % presid = TRUE, % signif.stars = getOption("show.signif.stars")) VGAM/man/betageomUC.Rd0000644000176200001440000000404313135276753014076 0ustar liggesusers\name{Betageom} \alias{Betageom} \alias{dbetageom} \alias{pbetageom} %\alias{qbetageom} \alias{rbetageom} \title{The Beta-Geometric Distribution} \description{ Density, distribution function, and random generation for the beta-geometric distribution. } \usage{ dbetageom(x, shape1, shape2, log = FALSE) pbetageom(q, shape1, shape2, log.p = FALSE) rbetageom(n, shape1, shape2) } \arguments{ \item{x, q}{vector of quantiles. } % \item{p}{vector of probabilities.} \item{n}{number of observations. Same as \code{\link[stats]{runif}}. } \item{shape1, shape2}{ the two (positive) shape parameters of the standard beta distribution. They are called \code{a} and \code{b} in \code{\link[base:Special]{beta}} respectively. } \item{log, log.p}{ Logical. If \code{TRUE} then all probabilities \code{p} are given as \code{log(p)}. } } \value{ \code{dbetageom} gives the density, \code{pbetageom} gives the distribution function, and \code{rbetageom} generates random deviates. % \code{qbetageom} gives the quantile function, and } \author{ T. W. Yee } \details{ The beta-geometric distribution is a geometric distribution whose probability of success is not constant but is generated from a beta distribution with parameters \code{shape1} and \code{shape2}.
Note that the mean of this beta distribution is \code{shape1/(shape1+shape2)}, which therefore is the mean of the probability of success. % See zz code{link{betageomzz}}, the \pkg{VGAM} family function % for estimating the parameters, % for the formula of the probability density function and other details. } \note{ \code{pbetageom} can be particularly slow. } \seealso{ \code{\link{geometric}}, \code{\link{betaff}}, \code{\link[stats:Beta]{Beta}}. } \examples{ \dontrun{ shape1 <- 1; shape2 <- 2; y <- 0:30 proby <- dbetageom(y, shape1, shape2, log = FALSE) plot(y, proby, type = "h", col = "blue", ylab = "P[Y=y]", main = paste( "Y ~ Beta-geometric(shape1=", shape1,", shape2=", shape2, ")", sep = "")) sum(proby) } } \keyword{distribution} VGAM/man/yulesimonUC.Rd0000644000176200001440000000337713135276753014340 0ustar liggesusers\name{Yules} \alias{Yules} \alias{dyules} \alias{pyules} \alias{qyules} \alias{ryules} \title{ Yule-Simon Distribution } \description{ Density, distribution function, quantile function and random generation for the Yule-Simon distribution. } \usage{ dyules(x, shape, log = FALSE) pyules(q, shape, lower.tail = TRUE, log.p = FALSE) qyules(p, shape) ryules(n, shape) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q, p, n}{ Same meaning as in \code{\link[stats]{Normal}}. % Vector of quantiles. For the density, it should be a vector % with positive integer values in order for the probabilities % to be positive. } % \item{p}{vector of probabilities.} % \item{n}{number of observations. Same as in \code{\link[stats]{runif}}. } \item{shape}{ See \code{\link{yulesimon}}. } % \item{log}{logical; if TRUE, the logarithm is returned. } \item{log, lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \details{ See \code{\link{yulesimon}}, the \pkg{VGAM} family function for estimating the parameter, for the formula of the probability density function and other details. } \value{ \code{dyules} gives the density, \code{pyules} gives the distribution function, \code{qyules} gives the quantile function, and \code{ryules} generates random deviates. } %\references{ % %} \author{ T. W. Yee } \note{ Numerical problems may occur with \code{qyules()} when \code{p} is very close to 1. } \seealso{ \code{\link{yulesimon}}. } \examples{ dyules(1:20, 2.1) ryules(20, 2.1) round(1000 * dyules(1:8, 2)) table(ryules(1000, 2)) \dontrun{ x <- 0:6 plot(x, dyules(x, shape = 2.2), type = "h", las = 1, col = "blue") } } \keyword{distribution} VGAM/man/binomialff.Rd0000644000176200001440000002273313135276753014177 0ustar liggesusers\name{binomialff} %\alias{binomial} \alias{binomialff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Binomial Family Function } \description{ Family function for fitting generalized linear models to binomial responses, where the dispersion parameter may be known or unknown. } \usage{ binomialff(link = "logit", dispersion = 1, multiple.responses = FALSE, onedpar = !multiple.responses, parallel = FALSE, zero = NULL, bred = FALSE, earg.link = FALSE) } %- maybe also 'usage' for other objects documented here. % apply.parint = FALSE, \arguments{ \item{link}{ Link function; see \code{\link{Links}} and \code{\link{CommonVGAMffArguments}} for more information. } \item{dispersion}{ Dispersion parameter. By default the dispersion parameter is known (unity), and so maximum likelihood is used to estimate the model.
However, the user can specify \code{dispersion = 0} to have it estimated, or else specify a known positive value (or values if \code{multiple.responses} is \code{TRUE}). } \item{multiple.responses}{ Multivariate response? If \code{TRUE}, then the response is interpreted as \eqn{M} independent binary responses, where \eqn{M} is the number of columns of the response matrix. In this case, the response matrix should have \eqn{M} columns consisting of counts (successes), and the \code{weights} argument should have \eqn{M} columns consisting of the number of trials (successes plus failures). % zero/one values only. If \code{FALSE} and the response is a (2-column) matrix, then the number of successes is given in the first column, and the second column is the number of failures. } \item{onedpar}{ One dispersion parameter? If \code{multiple.responses} is \code{TRUE}, then a separate dispersion parameter will be computed for each response (column), by default. Setting \code{onedpar = TRUE} will pool them so that there is only one dispersion parameter to be estimated. } \item{parallel}{ A logical or formula. Used only if \code{multiple.responses} is \code{TRUE}. This argument allows for the parallelism assumption whereby the regression coefficients for a variable are constrained to be equal over the \eqn{M} linear/additive predictors. If \code{parallel = TRUE} then the constraint is not applied to the intercepts. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2,\ldots,\eqn{M}\}, where \eqn{M} is the number of columns of the matrix response. See \code{\link{CommonVGAMffArguments}} for more information. } \item{earg.link}{ Details at \code{\link{CommonVGAMffArguments}}. } \item{bred}{ Details at \code{\link{CommonVGAMffArguments}}. Setting \code{bred = TRUE} should work for multiple responses (\code{multiple.responses = TRUE}) and all \pkg{VGAM} link functions; it has been tested for \code{\link{logit}} only (and it gives similar results to \pkg{brglm} but not identical), and further testing is required. One result from fitting bias-reduced binary regression is that finite regression coefficients occur when the data is separable (see example below). Currently \code{\link{hdeff.vglm}} does not work when \code{bred = TRUE}. } } \details{ This function is largely to mimic \code{\link[stats:Binomial]{binomial}}; however, there are some differences. If the dispersion parameter is unknown, then the resulting estimate is not fully a maximum likelihood estimate (see pp.124--8 of McCullagh and Nelder, 1989). A dispersion parameter that is less/greater than unity corresponds to under-/over-dispersion relative to the binomial model. Over-dispersion is more common in practice. Setting \code{multiple.responses = TRUE} is necessary when fitting a Quadratic RR-VGLM (see \code{\link{cqo}}) because the response is a matrix of \eqn{M} columns (e.g., one column per species). Then there will be \eqn{M} dispersion parameters (one per column of the response matrix). When used with \code{\link{cqo}} and \code{\link{cao}}, it may be preferable to use the \code{\link{cloglog}} link. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}}, \code{\link{cqo}}, and \code{\link{cao}}. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. Altman, M.
and Gill, J. and McDonald, M. P. (2004) \emph{Numerical Issues in Statistical Computing for the Social Scientist}, Hoboken, NJ, USA: Wiley-Interscience. Ridout, M. S. (1990) Non-convergence of Fisher's method of scoring---a simple example. \emph{GLIM Newsletter}, 20(6). } \author{ Thomas W. Yee } \note{ If \code{multiple.responses} is \code{FALSE} (default) then the response can be of one of two formats: a factor (first level taken as failure), or a 2-column matrix (first column = successes) of counts. The argument \code{weights} in the modelling function can also be specified as any vector of positive values. In general, 1 means success and 0 means failure (to check, see the \code{y} slot of the fitted object). Note that a general vector of proportions of success is no longer accepted. The notation \eqn{M} is used to denote the number of linear/additive predictors. If \code{multiple.responses} is \code{TRUE}, then the matrix response can only be of one format: a matrix of 1's and 0's (1 = success). The call \code{binomialff(dispersion = 0, ...)} is equivalent to \code{quasibinomialff(...)}. The latter was written so that R users of \code{quasibinomial()} would only need to add a ``\code{ff}'' to the end of the family function name. Regardless of whether the dispersion parameter is to be estimated or not, its value can be seen from the output of the \code{summary()} of the object. % With the introduction of name spaces for the \pkg{VGAM} package, % \code{"ff"} can be dropped for this family function. Fisher scoring is used. This can sometimes fail to converge by oscillating between successive iterations (Ridout, 1990). See the example below. } \seealso{ \code{\link{quasibinomialff}}, \code{\link{Links}}, \code{\link{rrvglm}}, \code{\link{cqo}}, \code{\link{cao}}, \code{\link{betabinomial}}, \code{\link{posbinomial}}, \code{\link{zibinomial}}, \code{\link{double.expbinomial}}, \code{\link{seq2binomial}}, \code{\link{amlbinomial}}, \code{\link{simplex}}, \code{\link[stats:Binomial]{binomial}}, \code{\link{simulate.vlm}}, \code{\link{hdeff.vglm}}, \pkg{safeBinaryRegression}. % \code{\link{matched.binomial}}, } \section{Warning }{ With a multivariate response, assigning a known dispersion parameter for \emph{each} response is not handled well yet. Currently, only a single known dispersion parameter is handled well. See the above note regarding \code{bred}. The maximum likelihood estimate will not exist if the data is \emph{completely separable} or \emph{quasi-completely separable}. See Chapter 10 of Altman et al. (2004) for more details, and \pkg{safeBinaryRegression}. Yet to do: add a \code{sepcheck = TRUE}, say, argument to detect this problem and give an appropriate warning. } \examples{ quasibinomialff() quasibinomialff(link = "probit") shunua <- hunua[sort.list(with(hunua, altitude)), ] # Sort by altitude fit <- vglm(agaaus ~ poly(altitude, 2), binomialff(link = cloglog), data = shunua) \dontrun{ plot(agaaus ~ jitter(altitude), shunua, ylab = "P(Agaaus = 1)", main = "Presence/absence of Agathis australis", col = "blue", las = 1) with(shunua, lines(altitude, fitted(fit), col = "orange", lwd = 2)) } # Fit two species simultaneously fit2 <- vgam(cbind(agaaus, kniexc) ~ s(altitude), binomialff(multiple.responses = TRUE), data = shunua) \dontrun{ with(shunua, matplot(altitude, fitted(fit2), type = "l", main = "Two species response curves", las = 1)) } # Shows that Fisher scoring can sometimes fail. See Ridout (1990).
ridout <- data.frame(v = c(1000, 100, 10), r = c(4, 3, 3), n = c(5, 5, 5)) (ridout <- transform(ridout, logv = log(v))) # The iterations oscillate between two local solutions: glm.fail <- glm(r / n ~ offset(logv) + 1, weights = n, binomial(link = 'cloglog'), ridout, trace = TRUE) coef(glm.fail) # vglm()'s half-stepping ensures the MLE of -5.4007 is obtained: vglm.ok <- vglm(cbind(r, n-r) ~ offset(logv) + 1, binomialff(link = cloglog), ridout, trace = TRUE) coef(vglm.ok) # Separable data set.seed(123) threshold <- 0 bdata <- data.frame(x2 = sort(rnorm(nn <- 100))) bdata <- transform(bdata, y1 = ifelse(x2 < threshold, 0, 1)) fit <- vglm(y1 ~ x2, binomialff(bred = TRUE), data = bdata, criter = "coef", trace = TRUE) coef(fit, matrix = TRUE) # Finite!! summary(fit) \dontrun{ plot(depvar(fit) ~ x2, data = bdata, col = "blue", las = 1) lines(fitted(fit) ~ x2, data = bdata, col = "orange") abline(v = threshold, col = "gray", lty = "dashed") } } \keyword{models} \keyword{regression} % a vector of proportions of success, % In particular, for a general vector of proportions, % you will need to specify \code{weights} because the number of trials % is needed. % To input general positive values into the \code{weights} argument of % \code{\link{vglm}}/\code{\link{vgam}} one needs to input a 2-column % response. VGAM/man/sm.os.Rd0000644000176200001440000003376613135276753013130 0ustar liggesusers\name{sm.os} \alias{sm.os} % % % 20161028; 20161213 % % % %- Also NEED an `\alias' for EACH other topic documented here. \title{ Defining O'Sullivan Spline Smooths in VGAM Formulas } \description{ This function represents an O-spline smooth term in a \code{vgam} formula and confers automatic smoothing parameter selection. } \usage{ sm.os(x, ..., niknots = 6, spar = -1, o.order = 2, alg.niknots = c("s", ".nknots.smspl")[1], all.knots = FALSE, ridge.adj = 1e-5, spillover = 0.01, maxspar = 1e12, outer.ok = FALSE, fixspar = FALSE) } % degree = 3, %- maybe also `usage' for other objects documented here. \arguments{ \item{x}{ covariate (abscissae) to be smoothed. Also called the regressor. If the \code{xij} facility is used then these covariates are inputted via the \code{\dots} argument. % Currently at least 7 unique \code{x} values are needed. } \item{\dots}{ Used to accommodate the other \eqn{M-1} covariates when the \code{xij} facility is used. See Section 3.4.4 of Yee (2015) for something very similar. This argument, found in the second argument, means that the other argument names must be fully specified if used, e.g., \code{outer.ok} and not \code{outer}. See the example below. In the example below, the term in the main formula is \code{sm.os(gcost.air, gcost.trn, gcost.bus)} and one might be tempted to use something like \code{sm.os(gcost)} to represent that \code{xij} term. However, this is not recommended because \code{sm.os(gcost)} might not have the same number of columns as \code{sm.os(gcost.air, gcost.trn, gcost.bus)} etc. That is, it is best to select one of the diagonal elements of the block matrix to represent that term. } \item{niknots}{ numeric, the number of \emph{interior} knots, called \eqn{K} below. The default is to use this value. If you want \code{alg.niknots} to operate then assign \code{NULL} to this argument. } \item{alg.niknots}{ character. The algorithm used to determine the number of interior knots. Only used when \code{all.knots = FALSE} and \code{niknots = NULL}. Note that \code{".nknots.smspl"} corresponds to the default of \code{\link[stats]{smooth.spline}}.
The value \code{"s"} corresponds to the same algorithm as \code{\link[VGAM]{s}}. % the other algorithms tend to give fewer knots than this choice % because when the model's \eqn{M} is large then the number % of parameters to be estimated and the amount of memory % used quickly grows. } \item{all.knots}{ logical. If \code{TRUE} then all distinct points in \code{x} are used as the interior knots. If \code{FALSE} (default) then a subset of \code{x[]} is used, specifically \code{x[j]} where the \code{niknots} indices are quantiles that are evenly spaced with respect to the argument \code{probs}---see \code{\link[stats]{quantile}}. If \code{all.knots = FALSE} and \code{niknots = NULL} then the argument \code{alg.niknots} is used to compute \code{niknots}. } \item{spar, maxspar}{ \code{spar} is a vector of smoothing parameters. Negative values mean that \code{\link[mgcv]{magic}} will choose initial values in order to do the optimization at each P-IRLS iteration. Positive values mean that they are used as initial values for \code{\link[mgcv]{magic}}. If \code{fixspar = TRUE} then \code{spar} should be assigned a vector of positive values (but having values less than \code{maxspar}); then the smoothing parameters will be fixed and \code{\link[mgcv]{magic}} will not be used. % non-negative regularization parameters for difference penalty, % whose values should be less than \code{maxspar}. % Can be a vector. % zz. } % \item{degree}{ % degree of B-spline basis. % Currently only the value 3 is implemented. % In the future one should usually assign 2 or 3; and % the values 1 or 4 might possibly be recommended. % zz--this argument may be unneeded. % } \item{o.order}{ The order of the O'Sullivan penalzed spline. Any one value from \code{1:4} is acceptable. The degree of the spline is \code{2 * o.order - 1}, so that cubic splines are the default. Setting \code{o.order = 1} results in a linear spline which is a piecewise linear function. % (p.191 ANZJS). } \item{ridge.adj}{ small positive number to stabilize linear dependencies among B-spline bases. } \item{spillover}{ small and positive proportion of the range used on the outside of the boundary values. This defines the endpoints \eqn{a} and \eqn{b} that cover the data \eqn{x_i}, i.e., we are interested in the interval \eqn{[a,b]} which contains all the abscissae. The interior knots are strictly inside \eqn{(a,b)}. % Untrue, see ANZJS. % Set \code{spillover = 0} to obtain the natural boundary conditions % (NBCs), hence a fit based on natural splines. } \item{outer.ok}{ Fed into the argument (by the same name) of \code{\link[splines]{splineDesign}}. } \item{fixspar}{ logical. If \code{TRUE} then \code{spar} should be a vector with positive values and the smoothing parameters are fixed at those values. If \code{FALSE} then \code{spar} contains the initial values for the smoothing parameters, and \code{\link[mgcv]{magic}} is called to determine (hopefully) some good values for the smoothing parameters. } } \details{ This function is currently used by \code{\link{vgam}} to allow automatic smoothing parameter selection based on O-splines to minimize an UBRE quantity. In contrast, \code{\link{s}} operates by having a prespecified amount of smoothing, e.g., its \code{df} argument. When the sample size is reasonably large this function is recommended over \code{\link{s}} also because backfitting is not required. This function therefore allows 2nd-generation VGAMs to be fitted (called G2-VGAMs, or Penalized-VGAMs). 
% A similar function is \code{\link{s}} which has a prespecified % amount of smoothing. This function should only be used with \code{\link{vgam}}. This function uses \code{\link[stats]{quantile}} to choose the knots, whereas \code{\link{sm.ps}} chooses equally-spaced knots. As Wand and Ormerod (2008) write, in most situations the differences will be minor, but it is possible for problems to arise for either strategy by constructing certain regression functions and predictor variable distributions. Any differences between O-splines and P-splines tend to be at the boundaries. O-splines have \emph{natural boundary constraints} so that the solution is linear beyond the boundary knots. Some arguments in decreasing order of precedence are: \code{all.knots}, \code{niknots}, \code{alg.niknots}. Unlike \code{\link[VGAM]{s}}, which is symbolic and does not perform any smoothing itself, this function does compute the penalized spline when used by \code{\link{vgam}}---it creates the appropriate columns of the model matrix. When this function is used within \code{\link{vgam}}, automatic smoothing parameter selection is implemented by calling \code{\link[mgcv]{magic}} after the necessary link-ups are done. By default this function centres the component function. This function is also \emph{smart}; it can be used for smart prediction (Section 18.6 of Yee (2015)). Automatic smoothing parameter selection is performed using \emph{performance-oriented iteration} whereby an optimization problem is solved at each IRLS iteration. % Occasionally there are convergence problems for this. % Eventually, in most cases, both model parameter estimates and % smoothing parameter estimates converge. This function works better when the sample size is large, e.g., when in the hundreds, say. % Also, if \eqn{n} is the number of \emph{distinct} abscissae, then % \code{sm.os} will fail if \eqn{n < 7}. % Unlike \code{\link[VGAM]{s}}, which is symbolic and does not perform % any smoothing itself, this function does compute the penalized spline % when used by \code{\link{vgam}}---it creates the appropriate columns % of the model matrix. When this function is used within % \code{\link{vgam}}, automatic smoothing parameter selection is % implemented by calling \code{\link[mgcv]{magic}} after the necessary % link-ups are done. % By default this function centres every component function. % This function is also \emph{smart}; it can be used for smart prediction % (Section 18.6 of Yee (2015)). % Automatic smoothing parameter selection is performed using % \emph{performance-oriented iteration} whereby an optimization % problem is solved at each IRLS iteration. % Occasionally there are convergence problems for this. % Eventually, in most cases, both model parameter estimates and % smoothing parameter estimates converge. } \value{ A matrix with attributes that are (only) used by \code{\link{vgam}}. The number of rows of the matrix is \code{length(x)}. The number of columns is a function of the number of interior knots \eqn{K} and the order of the O-spline \eqn{m}: \eqn{K+2m-1}. In code, this is \code{niknots + 2 * o.order - 1}, or using \code{\link{sm.ps}}-like arguments, \code{ps.int + degree - 1} (where \code{ps.int} should be more generally interpreted as the number of intervals. The formula is the same as \code{\link{sm.ps}}.). It transpires then that \code{\link{sm.os}} and \code{\link{sm.ps}} are very similar. % are very similar wrt return value, and % the the number of the knots; % but not wrt the location of the knots. 
% The \eqn{-1} is because of the centring.
}
\references{
Wand, M. P. and Ormerod, J. T. (2008).
On semiparametric regression with O'Sullivan penalized splines.
\emph{Australian and New Zealand Journal of Statistics},
\bold{50}(2): 179--198.

%Wood, S. N. (2004).
%Stable and efficient multiple smoothing parameter estimation
%for generalized additive models.
%\emph{J. Amer. Statist. Assoc.}, \bold{99}(467): 673--686.

%Yee, T. W. (2016).
%Comments on ``Smoothing parameter and model selection for
%general smooth models''
%by Wood, S. N. and Pya, N. and Safken, N.,
%\emph{J. Amer. Statist. Assoc.}, \bold{110}(516).
}

\author{
  T. W. Yee,
  with some of the essential R code coming
  from the appendix of Wand and Ormerod (2008).
}

\note{
  This function is currently under development and may change in
  the future.
  One might try using this function with \code{\link{vglm}}
  so as to fit a regression spline;
  however, the default value of \code{niknots} will probably
  be too high for most data sets.

% In particular, the default for \code{ps.int} is
% subject to change.
}

% ~Make other sections like WARNING with \section{WARNING }{....} ~
\section{Warning }{
  Being introduced into \pkg{VGAM} for the first time,
  this function (and those associated with it)
  should be used cautiously.
  Not all options are fully working or have been tested yet,
  and there are bound to be some bugs lurking around.
}

\seealso{
  \code{\link{vgam}},
  \code{\link{sm.ps}},
  \code{\link{s}},
  \code{\link{smartpred}},
  \code{\link{is.smart}},
  \code{\link{summarypvgam}},
  \code{\link[stats]{smooth.spline}},
  \code{\link[splines]{splineDesign}},
  \code{\link[splines]{bs}},
  \code{\link[mgcv]{magic}}.
}

\examples{
sm.os(runif(20))

\dontrun{
data("TravelMode", package = "AER")  # Need to install "AER" first
air.df <- subset(TravelMode, mode == "air")  # Form 4 smaller data frames
bus.df <- subset(TravelMode, mode == "bus")
trn.df <- subset(TravelMode, mode == "train")
car.df <- subset(TravelMode, mode == "car")
TravelMode2 <- data.frame(income    = air.df$income,
                          wait.air  = air.df$wait  - car.df$wait,
                          wait.trn  = trn.df$wait  - car.df$wait,
                          wait.bus  = bus.df$wait  - car.df$wait,
                          gcost.air = air.df$gcost - car.df$gcost,
                          gcost.trn = trn.df$gcost - car.df$gcost,
                          gcost.bus = bus.df$gcost - car.df$gcost,
                          wait      = air.df$wait)  # Value is unimportant
TravelMode2$mode <- subset(TravelMode, choice == "yes")$mode  # The response
TravelMode2 <- transform(TravelMode2, incom.air = income, incom.trn = 0,
                                      incom.bus = 0)
set.seed(1)
TravelMode2 <- transform(TravelMode2,
                         junkx2 = runif(nrow(TravelMode2)))

tfit2 <-
  vgam(mode ~ sm.os(gcost.air, gcost.trn, gcost.bus) + ns(junkx2, 4) +
              sm.os(incom.air, incom.trn, incom.bus) + wait ,
       crit = "coef",
       multinomial(parallel = FALSE ~ 1), data = TravelMode2,
       xij = list(sm.os(gcost.air, gcost.trn, gcost.bus) ~
                  sm.os(gcost.air, gcost.trn, gcost.bus) +
                  sm.os(gcost.trn, gcost.bus, gcost.air) +
                  sm.os(gcost.bus, gcost.air, gcost.trn),
                  sm.os(incom.air, incom.trn, incom.bus) ~
                  sm.os(incom.air, incom.trn, incom.bus) +
                  sm.os(incom.trn, incom.bus, incom.air) +
                  sm.os(incom.bus, incom.air, incom.trn),
                  wait ~ wait.air + wait.trn + wait.bus),
       form2 = ~ sm.os(gcost.air, gcost.trn, gcost.bus) +
                 sm.os(gcost.trn, gcost.bus, gcost.air) +
                 sm.os(gcost.bus, gcost.air, gcost.trn) +
                 wait +
                 sm.os(incom.air, incom.trn, incom.bus) +
                 sm.os(incom.trn, incom.bus, incom.air) +
                 sm.os(incom.bus, incom.air, incom.trn) +
                 junkx2 + ns(junkx2, 4) +
                 incom.air + incom.trn + incom.bus +
                 gcost.air + gcost.trn + gcost.bus +
                 wait.air + wait.trn + wait.bus)
par(mfrow = c(2, 2))
plot(tfit2,
se = TRUE, lcol = "orange", scol = "blue", ylim = c(-4, 4)) summary(tfit2) } } \keyword{models} \keyword{regression} \keyword{smooth} % binom2.or(exchangeable = TRUE ~ s(x2, 3)) VGAM/man/Coef.rrvglm-class.Rd0000644000176200001440000000424313135276753015354 0ustar liggesusers\name{Coef.rrvglm-class} \docType{class} \alias{Coef.rrvglm-class} \title{Class ``Coef.rrvglm'' } \description{ The most pertinent matrices and other quantities pertaining to a RR-VGLM. } \section{Objects from the Class}{ Objects can be created by calls of the form \code{Coef(object, ...)} where \code{object} is an object of class \code{rrvglm} (see \code{\link{rrvglm-class}}). In this document, \eqn{M} is the number of linear predictors and \eqn{n} is the number of observations. } \section{Slots}{ \describe{ \item{\code{A}:}{Of class \code{"matrix"}, \bold{A}. } \item{\code{B1}:}{Of class \code{"matrix"}, \bold{B1}. } \item{\code{C}:}{Of class \code{"matrix"}, \bold{C}. } \item{\code{Rank}:}{The rank of the RR-VGLM. } \item{\code{colx1.index}:}{Index of the columns of the \code{"vlm"}-type model matrix corresponding to the variables in \bold{x1}. These correspond to \bold{B1}. } \item{\code{colx2.index}:}{ Index of the columns of the \code{"vlm"}-type model matrix corresponding to the variables in \bold{x2}. These correspond to the reduced-rank regression. } \item{\code{Atilde}:}{Object of class \code{"matrix"}, the \bold{A} matrix with the corner rows removed. Thus each of the elements have been estimated. This matrix is returned only if corner constraints were used. } } } %\section{Methods}{ %No methods defined with class "Coef.rrvglm" in the signature. %} \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. } \author{ Thomas W. Yee } %\note{ ~~further notes~~ } % ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link{Coef.rrvglm}}, \code{\link{rrvglm}}, \code{\link{rrvglm-class}}, \code{print.Coef.rrvglm}. } \examples{ # Rank-1 stereotype model of Anderson (1984) pneumo <- transform(pneumo, let = log(exposure.time), x3 = runif(nrow(pneumo))) fit <- rrvglm(cbind(normal, mild, severe) ~ let + x3, multinomial, data = pneumo) coef(fit, matrix = TRUE) Coef(fit) # print(Coef(fit), digits = 3) } \keyword{classes} VGAM/man/posbernoulli.tb.Rd0000644000176200001440000002327613135276753015215 0ustar liggesusers\name{posbernoulli.tb} %\alias{posbernoulli} \alias{posbernoulli.tb} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive Bernoulli Family Function with Time and Behavioural Effects } \description{ Fits a GLM/GAM-like model to multiple Bernoulli responses where each row in the capture history matrix response has at least one success (capture). Sampling occasion effects and behavioural effects are accommodated. } \usage{ posbernoulli.tb(link = "logit", parallel.t = FALSE ~ 1, parallel.b = FALSE ~ 0, drop.b = FALSE ~ 1, type.fitted = c("likelihood.cond", "mean.uncond"), imethod = 1, iprob = NULL, p.small = 1e-4, no.warning = FALSE, ridge.constant = 0.01, ridge.power = -4) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link, imethod, iprob}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{parallel.t, parallel.b, drop.b}{ A logical, or formula with a logical as the response. See \code{\link{CommonVGAMffArguments}} for information. 
  The \code{parallel.}-type arguments specify whether the
  constraint matrices have a parallelism assumption for the
  temporal and behavioural effects.
  Argument \code{parallel.t} means parallel with respect to time,
  and matches the same argument name in
  \code{\link{posbernoulli.t}}.

  Suppose the model is intercept-only.
  Setting \code{parallel.t = FALSE ~ 0} results in the \eqn{M_b} model.
  Setting \code{drop.b = FALSE ~ 0} results in the \eqn{M_t} model
  because it drops columns off the constraint matrices corresponding
  to any behavioural effect.
  Setting \code{parallel.t = FALSE ~ 0} and
  setting \code{parallel.b = FALSE ~ 0} results in the \eqn{M_b} model.
  Setting \code{parallel.t = FALSE ~ 0},
  \code{parallel.b = FALSE ~ 0} and
  \code{drop.b = FALSE ~ 0} results in the \eqn{M_0} model.
  Note the default for \code{parallel.t} and \code{parallel.b}
  may be unsuitable for data sets which have a large \eqn{\tau}
  because of the large number of parameters;
  it might be too flexible.
  If it is desired to have the behaviour affect some of the other
  covariates then set \code{drop.b = TRUE ~ 0}.

  The default model has a different intercept for each
  sampling occasion, a time-parallelism assumption for
  all other covariates, and a dummy variable representing
  a single behavioural effect (also in the intercept).
  The most flexible model is obtained by setting
  \code{parallel.b = TRUE ~ 0},
  \code{parallel.t = TRUE ~ 0} and
  \code{drop.b = TRUE ~ 0}.
  This means that all possible temporal and behavioural effects are
  estimated, for the intercepts and other covariates.
  Such a model is \emph{not} recommended;
  it will contain a lot of parameters.
  }
  \item{type.fitted}{
  Character, one of the choices for the type of fitted value returned.
  The default is the first one.
  Partial matching is okay.
  For \code{"likelihood.cond"}:
  the probability defined by the conditional likelihood.
  For \code{"mean.uncond"}: the unconditional mean, which should
  agree with \code{\link[base]{colMeans}} applied to the response
  matrix for intercept-only models.
  }
  \item{ridge.constant, ridge.power}{
  These determine the ridge parameters at each IRLS iteration.
  They are the constant and power (exponent) for the ridge
  adjustment for the working weight matrices
  (the capture probability block matrix, hence
  the first \eqn{\tau} diagonal values).
  At iteration \eqn{a} of the IRLS algorithm
  a positive value is added to the first \eqn{\tau}{tau}
  diagonal elements of the working weight matrices to make
  them positive-definite.
  This adjustment is the mean of the diagonal elements of \code{wz}
  multiplied by \eqn{K \times a^p}{K * a^p}
  where \eqn{K} is \code{ridge.constant} and \eqn{p} is
  \code{ridge.power}.
  This is always positive but decays to zero as
  iterations proceed
  (provided \eqn{p} is negative etc.).
  }
  \item{p.small, no.warning}{
  See \code{\link{posbernoulli.t}}.
  }
}
\details{
  This model
  (commonly known as \eqn{M_{tb}}/\eqn{M_{tbh}}
  in the capture--recapture literature)
  operates on a response matrix of 0s and 1s
  (\eqn{n \times \tau}{n x tau}).
  See \code{\link{posbernoulli.t}}
  for information that is in common.
  It allows time and behavioural effects to be modelled.

  Evidently,
  the expected information matrix (EIM) seems \emph{not}
  to be of full rank (especially in early iterations), so
  \code{ridge.constant} and \code{ridge.power} are used to
  \emph{try} to fix up the problem.
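  As a small numerical sketch of the decay
  (using the default values \code{ridge.constant = 0.01} and
  \code{ridge.power = -4}; the object names below are arbitrary),
  the multiplier \eqn{K \times a^p}{K * a^p} over the first few
  IRLS iterations is

\preformatted{
K <- 0.01; p <- -4
a <- 1:5    # IRLS iteration numbers
K * a^p     # 0.01 0.000625 ... decays rapidly towards zero
}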
The default link functions are \eqn{(logit \,p_{c1},\ldots,logit \,p_{c\tau},logit \,p_{r2},\ldots,logit \,p_{r\tau})^T}{ (logit p_{c1},\ldots,logit p_{c,tau},logit p_{r2},\ldots,logit p_{r,tau})^T} where the subscript \eqn{c} denotes capture, the subscript \eqn{r} denotes recapture, and it is not possible to recapture the animal at sampling occasion 1. Thus \eqn{M = 2\tau - 1}{M=2*tau-1}. The parameters are currently prefixed by \code{pcapture} and \code{precapture} for the capture and recapture probabilities. This \pkg{VGAM} family function may be further modified in the future. % Not surprisingly, % the fitted values are similar to \code{\link{posbernoulli.t}} and % \code{\link{posbernoulli.b}}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ See \code{\link{posbernoulli.t}}. } \author{ Thomas W. Yee. } \note{ It is a good idea to apply the parallelism assumption to each sampling occasion except possibly with respect to the intercepts. Also, a simple behavioural effect such as being modelled using the intercept is recommended; if the behavioural effect is not parallel and/or allowed to apply to other covariates then there will probably be too many parameters, and hence, numerical problems. See \code{M_tbh.1} below. %Data-wise, at each sampling occasion, the \eqn{M_{tb}} model requires at least %one first capture and at least one noncapture. % If not all of the \eqn{2^{\tau}-1}{2^(tau) - 1} combinations of % the response matrix are not present then it pays to add % such rows to the response matrix and assign a small but % positive prior weight. % For example, if \eqn{\tau=2}{tau=2} then there should be % (0,1) rows, % (1,0) rows and % (1,1) rows present in the response matrix. It is a good idea to monitor convergence. Simpler models such as the \eqn{M_0}/\eqn{M_h} models are best fitted with \code{\link{posbernoulli.t}} or \code{\link{posbernoulli.b}} or \code{\link{posbinomial}}. % yettodo: % Some time in the future it might be possible to allow for a % different tau value for each row. % Then the response would be a matrix padded with NAs on the RHS. } \seealso{ \code{\link{posbernoulli.b}} (including \code{N.hat}), \code{\link{posbernoulli.t}}, \code{\link{posbinomial}}, \code{\link{Select}}, \code{\link{fill1}}, \code{\link{Huggins89table1}}, \code{\link{Huggins89.t1}}, \code{\link{deermice}}, \code{\link{prinia}}. 
}
\examples{
\dontrun{
# Example 1: simulated data
nTimePts <- 5  # (aka tau == # of sampling occasions)
nnn <- 1000    # Number of animals
pdata <- rposbern(n = nnn, nTimePts = nTimePts, pvars = 2)
dim(pdata); head(pdata)

M_tbh.1 <- vglm(cbind(y1, y2, y3, y4, y5) ~ x2,
                posbernoulli.tb, data = pdata, trace = TRUE)
coef(M_tbh.1)  # First element is the behavioural effect
coef(M_tbh.1, matrix = TRUE)
constraints(M_tbh.1, matrix = TRUE)
summary(M_tbh.1, presid = FALSE)  # Standard errors are approximate
head(fitted(M_tbh.1))
head(model.matrix(M_tbh.1, type = "vlm"), 21)
dim(depvar(M_tbh.1))

M_tbh.2 <- vglm(cbind(y1, y2, y3, y4, y5) ~ x2,
                posbernoulli.tb(parallel.t = FALSE ~ 0),
                data = pdata, trace = TRUE)
coef(M_tbh.2)  # First element is the behavioural effect
coef(M_tbh.2, matrix = TRUE)
constraints(M_tbh.2, matrix = TRUE)
summary(M_tbh.2, presid = FALSE)  # Standard errors are approximate
head(fitted(M_tbh.2))
head(model.matrix(M_tbh.2, type = "vlm"), 21)
dim(depvar(M_tbh.2))

# Example 2: deermice subset data
fit1 <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + weight,
             posbernoulli.t, data = deermice, trace = TRUE)
coef(fit1)
coef(fit1, matrix = TRUE)
constraints(fit1, matrix = TRUE)
summary(fit1, presid = FALSE)  # Standard errors are approximate

# fit1 is the same as Fit1 (an M_{th} model):
Fit1 <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + weight,
             posbernoulli.tb(drop.b = TRUE ~ sex + weight,
                             parallel.t = TRUE),  # No parallelism for the intercept
             data = deermice, trace = TRUE)
constraints(Fit1)
}
}
\keyword{models}
\keyword{regression}

%\section{Warning }{
% As this model is likely to be overparameterized, probably this
% function should not be used (for now?).
%
%% From Jakub:
% Estimation for the population size (and its SE) for the
% \eqn{M_{tb}} and \eqn{M_{tbh}} model may be wrong.
% But models
% \eqn{M_{0}},
% \eqn{M_{h}},
% \eqn{M_{b}},
% \eqn{M_{bh}},
% \eqn{M_{t}},
% \eqn{M_{th}}
% seem fine.
%
% Inference, especially using standard errors, may be fraught here
% because the EIM is, strictly speaking, not of full rank.
% A similar adjustment is made by \code{\link{zipebcom}}.
% It is a good idea to monitor convergence.
% The \eqn{M_0}/\eqn{M_h} models are best fitted with
% \code{\link{posbernoulli.t}} or \code{\link{posbinomial}} because
% the standard errors are more accurate.
%
%
%}

%yyy <- depvar(fit1)
%if (length(table(4 * yyy[, 1] + 2 * yyy[, 2] + 1 * yyy[, 3])) != 2^(ncol(yyy)) - 1)
% warning("not every combination is represented by a row in the response matrix")
VGAM/man/genpoisUC.Rd0000644000176200001440000000355213135276753013763 0ustar liggesusers\name{dgenpois}
\alias{dgenpois}
%\alias{pgenpois}
%\alias{qgenpois}
%\alias{rgenpois}
\title{The Generalized Poisson Distribution}
\description{
  Density for the Generalized Poisson Distribution.
}
\usage{
dgenpois(x, lambda = 0, theta, log = FALSE)
}
\arguments{
  \item{x}{vector of quantiles.}
  \item{lambda, theta}{
  See \code{\link{genpoisson}}.
  The default value of \code{lambda} corresponds to an
  ordinary Poisson distribution.
  }
  \item{log}{
  Logical.
  If \code{TRUE} then the logarithm of the density is returned.
  }
}
\value{
  \code{dgenpois} gives the density.
  The value \code{NaN} is returned for elements not satisfying
  the parameter restrictions, e.g., if \eqn{\lambda > 1}{lambda > 1}.

% \code{pgenpois} gives the distribution function, and
% \code{qgenpois} gives the quantile function, and
% \code{rgenpois} generates random deviates.
}
\author{ T. W. Yee }
\details{
  Most of the background to this function is given in
  \code{\link{genpoisson}}.
  Some warnings relevant to this distribution are given there,
  especially relating to the complicated range of the
  parameter \code{lambda} near \eqn{-1}.

  Note that numerical round off errors etc. can occur;
  see below for an example.
}
%\note{
%}
\seealso{
  \code{\link{genpoisson}},
  \code{\link[stats:Poisson]{dpois}}.
}
\examples{
sum(dgenpois(0:1000, lambda = -0.5, theta = 2))  # Not perfect...
\dontrun{ lambda <- -0.2; theta <- 2; y <- 0:10
proby <- dgenpois(y, lambda = lambda, theta = theta, log = FALSE)
plot(y, proby, type = "h", col = "blue", lwd = 2, ylab = "P[Y=y]",
     main = paste("Y ~ Generalized Poisson(lambda=", lambda,
                  ", theta=", theta, ")", sep = ""), las = 1,
     sub = "Orange is the Poisson probability function")
sum(proby)
lines(y + 0.1, dpois(y, theta), type = "h", lwd = 2, col = "orange") }
}
\keyword{distribution}
VGAM/man/hypersecant.Rd0000644000176200001440000000626213135276753014415 0ustar liggesusers\name{hypersecant}
\alias{hypersecant}
\alias{hypersecant01}
\alias{nef.hs}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Hyperbolic Secant Distribution Family Function }
\description{
  Estimation of the parameter of the hyperbolic secant
  distribution.
}
\usage{
hypersecant(link.theta = extlogit(min = -pi/2, max = pi/2),
            init.theta = NULL)
hypersecant01(link.theta = extlogit(min = -pi/2, max = pi/2),
              init.theta = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{link.theta}{
  Parameter link function applied to the
  parameter \eqn{\theta}{theta}.
  See \code{\link{Links}} for more choices.
  }
  \item{init.theta}{
  Optional initial value for \eqn{\theta}{theta}.
  If failure to converge occurs, try some other value.
  The default means an initial value is determined internally.
  }
}
\details{
  The probability density function of the hyperbolic secant
  distribution is given by
  \deqn{f(y;\theta) =
        \exp(\theta y + \log(\cos(\theta ))) / (2 \cosh(\pi y/2)),}{%
        f(y; theta) =
        exp(theta*y + log(cos(theta))) / (2*cosh(pi*y/2)),}
  for parameter \eqn{-\pi/2 < \theta < \pi/2}{-pi/2 < theta < pi/2}
  and all real \eqn{y}.
  The mean of \eqn{Y} is \eqn{\tan(\theta)}{tan(theta)}
  (returned as the fitted values).
  Morris (1982) calls this model NEF-HS
  (Natural Exponential Family-Hyperbolic Secant).
  It is used to generate NEFs, giving rise to the class of NEF-GHS
  (G for Generalized).

  Another parameterization is used for \code{hypersecant01()}:
  let \eqn{Y = (logit U) / \pi}{Y = (logit U) / pi}.
  Then this uses
  \deqn{f(u;\theta)=(\cos(\theta)/\pi) \times
        u^{-0.5+\theta/\pi} \times (1-u)^{-0.5-\theta/\pi},}{%
        f(u;theta) = (cos(theta)/pi) * u^(-0.5+theta/pi) *
        (1-u)^(-0.5-theta/pi),}
  for parameter \eqn{-\pi/2 < \theta < \pi/2}{-pi/2 < theta < pi/2}
  and \eqn{0 < u < 1}.
  Then the mean of \eqn{U} is \eqn{0.5 + \theta/\pi}{0.5 + theta/pi}
  (returned as the fitted values) and the variance is
  \eqn{(\pi^2 - 4 \theta^2) / (8\pi^2)}{(pi^2 - 4*theta^2) / (8*pi^2)}.

  For both parameterizations Newton-Raphson is the same as
  Fisher scoring.
}
\value{
  An object of class \code{"vglmff"}
  (see \code{\link{vglmff-class}}).
  The object is used by modelling functions
  such as \code{\link{vglm}},
  and \code{\link{vgam}}.
}
\references{
  Jorgensen, B. (1997)
  \emph{The Theory of Dispersion Models}.
  London: Chapman & Hall.
% p.101, Eqn (3.37) for hypersecant().
% p.101, Eqn (3.38) for hypersecant01().

  Morris, C. N. (1982)
  Natural exponential families with quadratic variance functions.
  \emph{The Annals of Statistics},
  \bold{10}(1), 65--80.
}
\author{ T. W. Yee }
%\note{
%}
\seealso{
  \code{\link{extlogit}}.
% \code{\link{nefghs}},
}
\examples{
hdata <- data.frame(x2 = rnorm(nn <- 200))
hdata <- transform(hdata, y = rnorm(nn))  # Not very good data!
fit1 <- vglm(y ~ x2, hypersecant, data = hdata,
             trace = TRUE, crit = "coef")
coef(fit1, matrix = TRUE)
fit1@misc$earg

# Not recommended:
fit2 <- vglm(y ~ x2, hypersecant(link = "identitylink"),
             data = hdata, trace = TRUE)
coef(fit2, matrix = TRUE)
fit2@misc$earg
}
\keyword{models}
\keyword{regression}
VGAM/man/zipf.Rd0000644000176200001440000000611513135276753013035 0ustar liggesusers\name{zipf}
\alias{zipf}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Zipf Distribution Family Function }
\description{
  Estimates the parameter of the Zipf distribution.
}
\usage{
zipf(N = NULL, lshape = "loge", ishape = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{N}{
  Number of elements, an integer satisfying \code{1 < N < Inf}.
  The default is to use the maximum value of the response.
  If given, \code{N} must be no less than the largest response value.
  If \code{N = Inf} and \eqn{s>1} then this is the zeta
  distribution (use \code{\link{zetaff}} instead).
  }
  \item{lshape}{
  Parameter link function applied to the (positive) shape
  parameter \eqn{s}.
  See \code{\link{Links}} for more choices.
  }
  \item{ishape}{
  Optional initial value for the parameter \eqn{s}.
  The default is to choose an initial value internally.
  If convergence failure occurs, use this argument to input a value.
  }
}
\details{
  The probability function for a response \eqn{Y} is
  \deqn{P(Y=y) = y^{-s} / \sum_{i=1}^N i^{-s},\ \ s>0,\ \ y=1,2,\ldots,N,}{%
        P(Y=y) = (y^(-s)) / sum((1:N)^(-s)), s>0, y=1,2,...,N,}
  where \eqn{s} is the exponent characterizing the distribution.
  The mean of \eqn{Y}, which is returned as the fitted values,
  is \eqn{\mu = H_{N,s-1} / H_{N,s}}{H(N,s-1) / H(N,s)}
  where \eqn{H_{n,m}= \sum_{i=1}^n i^{-m}}{H(n,m)=sum((1:n)^(-m))}
  is the \eqn{n}th generalized harmonic number.

  Zipf's law is an empirical law which is often applied
  to the study of the frequency of words in a corpus of
  natural language utterances.
  It states that the frequency of any word is inversely
  proportional to its rank in the frequency table.
  For example, \code{"the"} and \code{"of"} are the first two most
  common words, and Zipf's law states that \code{"the"} is twice
  as common as \code{"of"}.
  Many other natural phenomena conform to Zipf's law.
}
\value{
  An object of class \code{"vglmff"}
  (see \code{\link{vglmff-class}}).
  The object is used by modelling functions
  such as \code{\link{vglm}} and \code{\link{vgam}}.
}
\references{
  pp.526-- of Chapter 11 of
  Johnson N. L., Kemp, A. W. and Kotz S. (2005)
  \emph{Univariate Discrete Distributions},
  3rd edition,
  Hoboken, New Jersey, USA: Wiley.
}
\author{ T. W. Yee }
\note{
  Upon convergence, the \code{N} is stored as \code{@misc$N}.
}
\seealso{
  \code{\link{dzipf}},
  \code{\link{zetaff}},
  \code{\link{simulate.vlm}}.
}
\examples{
zdata <- data.frame(y = 1:5, ofreq = c(63, 14, 5, 1, 2))
zfit <- vglm(y ~ 1, zipf, data = zdata, trace = TRUE, weight = ofreq)
zfit <- vglm(y ~ 1, zipf(lshape = "identitylink", ishape = 3.4),
             data = zdata, trace = TRUE, weight = ofreq, crit = "coef")
zfit@misc$N
(shape.hat <- Coef(zfit))
with(zdata, weighted.mean(y, ofreq))
fitted(zfit, matrix = FALSE)
}
\keyword{models}
\keyword{regression}

%pp.465--471, Chapter 11 of
%Johnson N. L., Kotz S., and Kemp A. W.
(1993) %\emph{Univariate Discrete Distributions}, %2nd ed. %New York: Wiley. %http://www.math.uah.edu/stat/special/Zeta.html calls s 'shape' VGAM/man/calibrate.qrrvglm.Rd0000644000176200001440000001215413135276753015504 0ustar liggesusers\name{calibrate.qrrvglm} \alias{calibrate.qrrvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Calibration for CQO and CAO models } \description{ Performs maximum likelihood calibration for constrained and unconstrained quadratic and additive ordination models (CQO and CAO models are better known as QRR-VGLMs and RR-VGAMs respectively). } \usage{ calibrate.qrrvglm(object, newdata = NULL, type = c("latvar", "predictors", "response", "vcov", "all3or4"), initial.vals = NULL, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ The fitted CQO/CAO model. } \item{newdata}{ A data frame with new response data (usually new species data). The default is to use the original data used to fit the model; however, the calibration may take a long time to compute because the computations are expensive. } \item{type}{ What type of result is to be returned. The first are the calibrated latent variables or site scores. This must be computed always. The \code{"predictors"} are the linear/quadratic or additive predictors evaluated at the calibrated latent variables or site scores. The \code{"response"} are the fitted means evaluated at the calibrated latent variables or site scores. The \code{"vcov"} are the estimated variance-covariance matrices of the calibrated latent variables or site scores. The \code{"all3or4"} is for all of them, i.e., all \code{type}s. For CAO models, \code{"vcov"} is unavailable, so all 3 are returned. For CQO models, \code{"vcov"} is available, so all 4 are returned. } \item{initial.vals}{ Initial values for the search. For rank-1 models, this should be a vector of length \code{nrow(newdata)}, and for rank 2 models this should be a two column matrix with the number of rows equalling the number of rows in \code{newdata}. The default is a grid defined by arguments in \code{\link{calibrate.qrrvglm.control}}. } \item{\dots}{ Arguments that are fed into \code{\link{calibrate.qrrvglm.control}}. } } \details{ Given a fitted regression CQO/CAO model, maximum likelihood calibration is theoretically easy and elegant. However, the method assumes that all species are independent, which is not really true in practice. More details and references are given in Yee (2012). The function \code{\link[stats]{optim}} is used to search for the maximum likelihood solution. Good initial values are needed, and \code{\link{calibrate.qrrvglm.control}} allows the user some control over the choice of these. } \value{ The argument \code{type} determines what is returned. If \code{type = "all3or4"} then all the \code{type} values are returned in a list, with the following components. Each component has length \code{nrow(newdata)}. \item{latvar}{Calibrated latent variables or site scores. } \item{predictors }{linear/quadratic or additive predictors. For example, for Poisson families, this will be on a log scale, and for binomial families, this will be on a logit scale. } \item{response}{Fitted values of the response, evaluated at the calibrated latent variables or site scores. } \item{vcov}{Estimated variance-covariance matrix of the calibrated latent variables or site scores. Actually, these are stored in an array whose last dimension is \code{nrow(newdata)}. } } \references{ Yee, T. W. 
(2017) On constrained and unconstrained quadratic ordination. \emph{Manuscript in preparation}. ter Braak, C. J. F. 1995. Calibration. In: \emph{Data Analysis in Community and Landscape Ecology} by Jongman, R. H. G., ter Braak, C. J. F. and van Tongeren, O. F. R. (Eds.) Cambridge University Press, Cambridge. } \author{T. W. Yee} \note{ Despite the name of this function, CAO models are handled as well. % Despite the name of this function, UQO and CAO models are handled } \section{Warning }{ This function is computationally expensive. Setting \code{trace = TRUE} to get a running log is a good idea. } \seealso{ \code{\link{calibrate.qrrvglm.control}}, \code{\link{calibrate.rrvglm}}, \code{\link{calibrate}}, \code{\link{cqo}}, \code{\link{cao}}. % \code{\link{uqo}}, } \examples{ \dontrun{ hspider[, 1:6] <- scale(hspider[, 1:6]) # Stdze the environmental variables set.seed(123) p1 <- cqo(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, family = poissonff, data = hspider, Rank = 1, I.toler = TRUE, Crow1positive = TRUE) siteNos <- 1:2 # Calibrate these sites cp1 <- calibrate(p1, trace = TRUE, new = data.frame(depvar(p1)[siteNos, ])) } \dontrun{ # Graphically compare the actual site scores with their calibrated values persp(p1, main = "Site scores: solid=actual, dashed=calibrated", label = TRUE, col = "blue", las = 1) # Actual site scores: abline(v = latvar(p1)[siteNos], lty = 1, col = 1:length(siteNos)) abline(v = cp1, lty = 2, col = 1:length(siteNos)) # Calibrated values } } \keyword{models} \keyword{regression} VGAM/man/lindUC.Rd0000644000176200001440000000313213135276753013237 0ustar liggesusers\name{Lindley} \alias{Lindley} \alias{dlind} \alias{plind} %\alias{qlind} \alias{rlind} \title{The Lindley Distribution} \description{ Density, cumulative distribution function, and random generation for the Lindley distribution. % quantile function } % yettodo: 20170103; use csam-23-517.pdf to write plind() and/or qlind(). \usage{ dlind(x, theta, log = FALSE) plind(q, theta, lower.tail = TRUE, log.p = FALSE) rlind(n, theta) } %qlind(p, theta) \arguments{ \item{x, q}{vector of quantiles.} % \item{p}{vector of probabilities.} \item{n}{number of observations. Same as in \code{\link[stats]{runif}}. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{theta}{positive parameter. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dlind} gives the density, \code{plind} gives the cumulative distribution function, and \code{rlind} generates random deviates. % \code{qlind} gives the quantile function, and } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{lindley}} for details. } %\note{ % %} \seealso{ \code{\link{lindley}}. } \examples{ theta <- exp(-1); x <- seq(0.0, 17, length = 700) dlind(0:10, theta) \dontrun{ plot(x, dlind(x, theta), type = "l", las = 1, col = "blue", main = "dlind(x, theta = exp(-1))") abline(h = 1, col = "grey", lty = "dashed") } } \keyword{distribution} % probs <- seq(0.01, 0.99, by = 0.01) % max(abs(plind(qlind(p = probs, theta), theta) - probs)) # Should be 0 VGAM/man/bratt.Rd0000644000176200001440000001235713135276753013206 0ustar liggesusers\name{bratt} \alias{bratt} %- Also NEED an '\alias' for EACH other topic documented here. 
\title{ Bradley Terry Model With Ties } \description{ Fits a Bradley Terry model with ties (intercept-only model) by maximum likelihood estimation. } \usage{ bratt(refgp = "last", refvalue = 1, ialpha = 1, i0 = 0.01) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{refgp}{ Integer whose value must be from the set \{1,\ldots,\eqn{M}\}, where there are \eqn{M} competitors. The default value indicates the last competitor is used---but don't input a character string, in general. } \item{refvalue}{ Numeric. A positive value for the reference group. } \item{ialpha}{ Initial values for the \eqn{\alpha}{alpha}s. These are recycled to the appropriate length. } \item{i0}{ Initial value for \eqn{\alpha_0}{alpha_0}. If convergence fails, try another positive value. } } \details{ There are several models that extend the ordinary Bradley Terry model to handle ties. This family function implements one of these models. It involves \eqn{M} competitors who either win or lose or tie against each other. (If there are no draws/ties then use \code{\link{brat}}). The probability that Competitor \eqn{i} beats Competitor \eqn{j} is \eqn{\alpha_i / (\alpha_i+\alpha_j+\alpha_0)}{alpha_i / (alpha_i + alpha_j + alpha_0)}, where all the \eqn{\alpha}{alpha}s are positive. The probability that Competitor \eqn{i} ties with Competitor \eqn{j} is \eqn{\alpha_0 / (\alpha_i+\alpha_j+\alpha_0)}{alpha_0 / (alpha_i + alpha_j + alpha_0)}. Loosely, the \eqn{\alpha}{alpha}s can be thought of as the competitors' `abilities', and \eqn{\alpha_0}{alpha_0} is an added parameter to model ties. For identifiability, one of the \eqn{\alpha_i}{alpha_i} is set to a known value \code{refvalue}, e.g., 1. By default, this function chooses the last competitor to have this reference value. The data can be represented in the form of a \eqn{M} by \eqn{M} matrix of counts, where winners are the rows and losers are the columns. However, this is not the way the data should be inputted (see below). Excluding the reference value/group, this function chooses \eqn{\log(\alpha_j)}{log(alpha_j)} as the first \eqn{M-1} linear predictors. The log link ensures that the \eqn{\alpha}{alpha}s are positive. The last linear predictor is \eqn{\log(\alpha_0)}{log(alpha_0)}. The Bradley Terry model can be fitted with covariates, e.g., a home advantage variable, but unfortunately, this lies outside the VGLM theoretical framework and therefore cannot be handled with this code. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}. } \references{ Torsney, B. (2004) Fitting Bradley Terry models using a multiplicative algorithm. In: Antoch, J. (ed.) \emph{Proceedings in Computational Statistics COMPSTAT 2004}, Physica-Verlag: Heidelberg. Pages 513--526. } \author{ T. W. Yee } \note{ The function \code{\link{Brat}} is useful for coercing a \eqn{M} by \eqn{M} matrix of counts into a one-row matrix suitable for \code{bratt}. Diagonal elements are skipped, and the usual S order of \code{c(a.matrix)} of elements is used. There should be no missing values apart from the diagonal elements of the square matrix. The matrix should have winners as the rows, and losers as the columns. In general, the response should be a matrix with \eqn{M(M-1)} columns. Also, a symmetric matrix of ties should be passed into \code{\link{Brat}}. The diagonal of this matrix should be all \code{NA}s. Only an intercept model is recommended with \code{bratt}. 
  It doesn't really make sense to include covariates because of
  the limited VGLM framework.

  Notationally, note that the \pkg{VGAM} family function
  \code{\link{brat}} has \eqn{M+1} contestants,
  while \code{bratt} has \eqn{M} contestants.
}

\seealso{
  \code{\link{brat}},
  \code{\link{Brat}},
  \code{\link{binomialff}}.
}
\examples{
# citation statistics: being cited is a 'win'; citing is a 'loss'
journal <- c("Biometrika", "Comm.Statist", "JASA", "JRSS-B")
mat <- matrix(c( NA, 33, 320, 284,
                730, NA, 813, 276,
                498, 68,  NA, 325,
                221, 17, 142,  NA), 4, 4)
dimnames(mat) <- list(winner = journal, loser = journal)

# Add some ties. These are fictional data.
ties <- 5 + 0 * mat
ties[2, 1] <- ties[1, 2] <- 9

# Now fit the model
fit <- vglm(Brat(mat, ties) ~ 1, bratt(refgp = 1), trace = TRUE)
fit <- vglm(Brat(mat, ties) ~ 1, bratt(refgp = 1), trace = TRUE,
            crit = "coef")

summary(fit)
c(0, coef(fit))  # Log-abilities (in order of "journal"); last is log(alpha0)
c(1, Coef(fit))  # Abilities (in order of "journal"); last is alpha0

fit@misc$alpha   # alpha_1,...,alpha_M
fit@misc$alpha0  # alpha_0

fitted(fit)  # Probabilities of winning and tying, in awkward form
predict(fit)
(check <- InverseBrat(fitted(fit)))  # Probabilities of winning
qprob <- attr(fitted(fit), "probtie")  # Probabilities of a tie
qprobmat <- InverseBrat(c(qprob), NCo = nrow(ties))  # Probabilities of a tie
check + t(check) + qprobmat  # Should be 1s in the off-diagonals
}
\keyword{models}
\keyword{regression}
VGAM/man/golf.Rd0000644000176200001440000001060013135276753013010 0ustar liggesusers\name{golf}
\alias{golf}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Gamma-Ordinal Link Function }
\description{
  Computes the gamma-ordinal transformation,
  including its inverse and the first two derivatives.
}
\usage{
golf(theta, lambda = 1, cutpoint = NULL,
     inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{theta}{
  Numeric or character.
  See below for further details.
  }
  \item{lambda, cutpoint}{
  The former is the shape parameter in \code{\link{gamma2}}.
  \code{cutpoint} is optional; if \code{NULL} then \code{cutpoint}
  is omitted from the GOLF definition.
  If given, the cutpoints should be non-negative integers.
  If \code{golf()} is used as the link function in
  \code{\link{cumulative}} and the cutpoints are known,
  then one should choose
  \code{reverse = TRUE, parallel = FALSE ~ -1}.
  If the cutpoints are unknown, then choose
  \code{reverse = TRUE, parallel = TRUE}.
  }
  \item{inverse, deriv, short, tag}{
  Details at \code{\link{Links}}.
  }
}
\details{
  The gamma-ordinal link function (GOLF) can be applied to
  a parameter lying in the unit interval.
  Its purpose is to link cumulative probabilities associated with
  an ordinal response coming from an underlying 2-parameter gamma
  distribution.

  See \code{\link{Links}} for general information about
  \pkg{VGAM} link functions.
}
\value{
  See Yee (2012) for details.
}
\references{
  Yee, T. W. (2012)
  \emph{Ordinal ordination with normalizing link functions for count data},
  (in preparation).
}
\author{ Thomas W. Yee }

\note{
  Numerical values of \code{theta} too close to 0 or 1 or out of range
  result in large positive or negative values, or maybe 0 depending
  on the arguments.
  Although measures have been taken to handle cases where \code{theta}
  is too close to 1 or 0, numerical instabilities may still arise.
In terms of the threshold approach with cumulative probabilities for an ordinal response this link function corresponds to the gamma distribution (see \code{\link{gamma2}}) that has been recorded as an ordinal response using known cutpoints. } \section{Warning }{ Prediction may not work on \code{\link{vglm}} or \code{\link{vgam}} etc. objects if this link function is used. } \seealso{ \code{\link{Links}}, \code{\link{gamma2}}, \code{\link{polf}}, \code{\link{nbolf}}, \code{\link{cumulative}}. } \examples{ \dontrun{ golf("p", lambda = 1, short = FALSE) golf("p", lambda = 1, tag = TRUE) p <- seq(0.02, 0.98, len = 201) y <- golf(p, lambda = 1) y. <- golf(p, lambda = 1, deriv = 1, inverse = TRUE) max(abs(golf(y, lambda = 1, inverse = TRUE) - p)) # Should be 0 #\ dontrun{par(mfrow = c(2, 1), las = 1) #plot(p, y, type = "l", col = "blue", main = "golf()") #abline(h = 0, v = 0.5, col = "orange", lty = "dashed") #plot(p, y., type = "l", col = "blue", # main = "(Reciprocal of) first GOLF derivative") #} # Another example gdata <- data.frame(x2 = sort(runif(nn <- 1000))) gdata <- transform(gdata, x3 = runif(nn)) gdata <- transform(gdata, mymu = exp( 3 + 1 * x2 - 2 * x3)) lambda <- 4 gdata <- transform(gdata, y1 = rgamma(nn, shape = lambda, scale = mymu / lambda)) cutpoints <- c(-Inf, 10, 20, Inf) gdata <- transform(gdata, cuty = Cut(y1, breaks = cutpoints)) #\ dontrun{ par(mfrow = c(1, 1), las = 1) #with(gdata, plot(x2, x3, col = cuty, pch = as.character(cuty))) } with(gdata, table(cuty) / sum(table(cuty))) fit <- vglm(cuty ~ x2 + x3, cumulative(multiple.responses = TRUE, reverse = TRUE, parallel = FALSE ~ -1, link = golf(cutpoint = cutpoints[2:3], lambda = lambda)), data = gdata, trace = TRUE) head(depvar(fit)) head(fitted(fit)) head(predict(fit)) coef(fit) coef(fit, matrix = TRUE) constraints(fit) fit@misc } } \keyword{math} \keyword{models} \keyword{regression} % # Another example % nn <- 1000 % x2 <- sort(runif(nn)) % x3 <- runif(nn) % shape <- exp(0.0) % mymu <- exp( 3 + 1 * x2 - 2 * x3) % y1 <- rnbinom(nn, mu=mymu, size=shape) % cuty <- Cut(y1) % fit <- vglm(cuty ~ x2 + x3, fam = cumulative(link = "golf", rev = TRUE, % multiple.responses = TRUE, parallel = TRUE, earg = list(lambda=shape))) % coef(fit) % fit <- vglm(cuty ~ x2 + x3, fam = cumulative(link = "probit", rev = TRUE, % multiple.responses = TRUE, parallel = TRUE)) % coef(fit, matrix = TRUE) % coef(fit) VGAM/man/Inv.gaussian.Rd0000644000176200001440000000414013135276753014426 0ustar liggesusers\name{Inv.gaussian} \alias{Inv.gaussian} \alias{dinv.gaussian} \alias{pinv.gaussian} \alias{rinv.gaussian} \title{The Inverse Gaussian Distribution} \description{ Density, distribution function and random generation for the inverse Gaussian distribution. } \usage{ dinv.gaussian(x, mu, lambda, log = FALSE) pinv.gaussian(q, mu, lambda) rinv.gaussian(n, mu, lambda) } \arguments{ \item{x, q}{vector of quantiles.} %%\item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{mu}{the mean parameter.} \item{lambda}{the \eqn{\lambda}{lambda} parameter.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dinv.gaussian} gives the density, \code{pinv.gaussian} gives the distribution function, and \code{rinv.gaussian} generates random deviates. % \code{qinv.gaussian} gives the quantile function, and } \references{ Johnson, N. L. and Kotz, S. and Balakrishnan, N. 
(1994) \emph{Continuous Univariate Distributions}, 2nd edition, Volume 1, New York: Wiley. Taraldsen, G. and Lindqvist, B. H. (2005) The multiple roots simulation algorithm, the inverse Gaussian distribution, and the sufficient conditional Monte Carlo method. \emph{Preprint Statistics No. 4/2005}, Norwegian University of Science and Technology, Trondheim, Norway. } \author{ T. W. Yee } \details{ See \code{\link{inv.gaussianff}}, the \pkg{VGAM} family function for estimating both parameters by maximum likelihood estimation, for the formula of the probability density function. } \note{ Currently \code{qinv.gaussian} is unavailable. } \seealso{ \code{\link{inv.gaussianff}}, \code{\link{waldff}}. } \examples{ \dontrun{ x <- seq(-0.05, 4, len = 300) plot(x, dinv.gaussian(x, mu = 1, lambda = 1), type = "l", col = "blue",las = 1, main = "blue is density, orange is cumulative distribution function") abline(h = 0, col = "gray", lty = 2) lines(x, pinv.gaussian(x, mu = 1, lambda = 1), type = "l", col = "orange") } } \keyword{distribution} VGAM/man/vgam.Rd0000644000176200001440000003124713135276753013023 0ustar liggesusers\name{vgam} \alias{vgam} %\alias{vgam.fit} \title{ Fitting Vector Generalized Additive Models } % 20030215; This file is based a lot from vglm.Rd \description{ Fit a vector generalized additive model (VGAM). Both 1st-generation VGAMs (based on backfitting) and 2nd-generation VGAMs (based on P-splines, with automatic smoothing parameter selection) are implemented. This is a large class of models that includes generalized additive models (GAMs) and vector generalized linear models (VGLMs) as special cases. } \usage{ vgam(formula, family = stop("argument 'family' needs to be assigned"), data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = vgam.control(...), offset = NULL, method = "vgam.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = list(), form2 = NULL, qr.arg = FALSE, smart = TRUE, ...) } %- maybe also `usage' for other objects documented here. \arguments{ % The following comes from vglm.Rd but with minor tweaks \item{formula}{ a symbolic description of the model to be fit. The RHS of the formula is applied to each linear/additive predictor, and should include at least one \code{\link[VGAM]{sm.os}} term or \code{\link[VGAM]{sm.ps}} term or \code{\link[VGAM]{s}} term. Mixing both together is not allowed. Different variables in each linear/additive predictor can be chosen by specifying constraint matrices. } \item{family}{ Same as for \code{\link{vglm}}. } \item{data}{ an optional data frame containing the variables in the model. By default the variables are taken from \code{environment(formula)}, typically the environment from which \code{vgam} is called. } \item{weights, subset, na.action}{ Same as for \code{\link{vglm}}. Note that \code{subset} may be unreliable and to get around this problem it is best to use \code{\link[base]{subset}} to create a new smaller data frame and feed in the smaller data frame. See below for an example. This is a bug that needs fixing. } \item{etastart, mustart, coefstart}{ Same as for \code{\link{vglm}}. } \item{control}{ a list of parameters for controlling the fitting process. See \code{\link{vgam.control}} for details. } \item{method}{ the method to be used in fitting the model. The default (and presently only) method \code{vgam.fit} uses iteratively reweighted least squares (IRLS). 
  }
  \item{constraints, model, offset}{
  Same as for \code{\link{vglm}}.
  }
  \item{x.arg, y.arg}{
  logical values indicating whether the model matrix and
  response vector/matrix used in the fitting process should
  be assigned in the \code{x} and \code{y} slots.
  Note the model matrix is the LM model matrix;
  to get the VGAM model matrix type
  \code{model.matrix(vgamfit)} where \code{vgamfit} is
  a \code{vgam} object.
  }
  \item{contrasts, extra, form2, qr.arg, smart}{
  Same as for \code{\link{vglm}}.
  }
  \item{\dots}{
  further arguments passed into \code{\link{vgam.control}}.
  }
}
\details{
  A vector generalized additive model (VGAM) is loosely defined
  as a statistical model that is a function of \eqn{M} additive
  predictors.
  The central formula is given by
  \deqn{\eta_j = \sum_{k=1}^p f_{(j)k}(x_k)}{%
        eta_j = sum_{k=1}^p f_{(j)k}(x_k)}
  where \eqn{x_k}{x_k} is the \eqn{k}th explanatory variable
  (almost always \eqn{x_1=1} for the intercept term),
  and \eqn{f_{(j)k}} are smooth functions of \eqn{x_k} that are
  estimated by smoothers.
  The first term in the summation is just the intercept.
  Currently two types of smoothers are implemented:
  \code{\link[VGAM]{s}} represents the older and more traditional
  one, called a \emph{vector (cubic smoothing spline) smoother},
  and is based on Yee and Wild (1996);
  it is more similar to the \R{} package \pkg{gam}.
  The newer one is represented by \code{\link[VGAM]{sm.os}} and
  \code{\link[VGAM]{sm.ps}}, and these are based on O-splines and
  P-splines---they allow automatic smoothing parameter selection;
  it is more similar to the \R{} package \pkg{mgcv}.

  In the above, \eqn{j=1,\ldots,M} where \eqn{M} is finite.
  If all the functions are constrained to be linear then the
  resulting model is a vector generalized linear model (VGLM).
  VGLMs are best fitted with \code{\link{vglm}}.

  Vector (cubic smoothing spline) smoothers are represented
  by \code{s()} (see \code{\link[VGAM]{s}}).
  Local regression via \code{lo()} is \emph{not} supported.
  The results of \code{vgam} will differ from those of \code{gam()}
  (in the \pkg{gam} package) because \code{vgam()} uses a different
  knot selection algorithm.
  In general, fewer knots are chosen because the computation
  becomes expensive when the number of additive predictors
  \eqn{M} is large.

  Second-generation VGAMs are based on the O-splines and P-splines.
  The latter is due to Eilers and Marx (1996).
  Backfitting is not required, and estimation is performed using IRLS.
  The function \code{\link{sm.os}} represents a \emph{smart}
  implementation of O-splines.
  The function \code{\link{sm.ps}} represents a \emph{smart}
  implementation of P-splines.
  Denoted G2-VGAMs or P-VGAMs,
  this methodology should not be used unless the sample size
  is reasonably large.
  Usually a UBRE predictive criterion is optimized
  (at each IRLS iteration)
  because the scale parameter for VGAMs is usually assumed to
  be known.
  This search for optimal smoothing parameters does not always
  converge, and neither is it totally reliable.
  G2-VGAMs implicitly set \code{criterion = "coefficients"}
  so that convergence occurs when the change in the regression
  coefficients between 2 IRLS iterations is sufficiently small.
  Otherwise the search for the optimal smoothing parameters might
  cause the log-likelihood to decrease between 2 IRLS iterations.
  Currently \emph{outer iteration} is implemented, by default,
  rather than \emph{performance iteration} because the latter is
  more prone to converging to a local solution;
  see Wood (2004) for details.
One can use \emph{performance iteration} by setting \code{Maxit.outer = 1} in \code{\link{vgam.control}}. % outeriter % A suggested rule-of-thumb is at least 500 observations. The underlying algorithm of VGAMs is IRLS. First-generation VGAMs (called G1-VGAMs) are estimated by modified vector backfitting using vector splines. O-splines are used as the basis functions for the vector (smoothing) splines, which are a lower dimensional version of natural B-splines. The function \code{vgam.fit()} actually does the work. The smoothing code is based on F. O'Sullivan's BART code. % If more than one of \code{etastart}, \code{start} and \code{mustart} % is specified, the first in the list will be used. A closely related methodology based on VGAMs called \emph{constrained additive ordination} (CAO) first forms a linear combination of the explanatory variables (called \emph{latent variables}) and then fits a GAM to these. This is implemented in the function \code{\link{cao}} for a very limited choice of family functions. } \value{ For G1-VGAMs and G2-VGAMs, an object of class \code{"vgam"} or \code{"pvgam"} respectively (see \code{\link{vgam-class}} and \code{\link{pvgam-class}} for further information). } \references{ Wood, S. N. (2004). Stable and efficient multiple smoothing parameter estimation for generalized additive models. \emph{J. Amer. Statist. Assoc.}, \bold{99}(467): 673--686. Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. Yee, T. W. (2008) The \code{VGAM} Package. \emph{R News}, \bold{8}, 28--39. Yee, T. W. (2015) Vector Generalized Linear and Additive Models: With an Implementation in R. New York, USA: \emph{Springer}. Yee, T. W. (2016). Comments on ``Smoothing parameter and model selection for general smooth models'' by Wood, S. N. and Pya, N. and Safken, N., \emph{J. Amer. Statist. Assoc.}, \bold{110}(516). %Yee, T. W. and Somchit, C. and Wild, C. J. (2016) %Generation-2 %vector generalized additive models. %Manuscript in preparation. %Documentation accompanying the \pkg{VGAM} package at %\url{http://www.stat.auckland.ac.nz/~yee} %contains further information and examples. %Wood, S. N. (2004). %Stable and efficient multiple smoothing parameter estimation %for generalized additive models. %\emph{J. Amer. Statist. Assoc.}, \bold{99}(467): 673--686. } \author{ Thomas W. Yee } \note{ This function can fit a wide variety of statistical models. Some of these are harder to fit than others because of inherent numerical difficulties associated with some of them. Successful model fitting benefits from cumulative experience. Varying the values of arguments in the \pkg{VGAM} family function itself is a good first step if difficulties arise, especially if initial values can be inputted. A second, more general step, is to vary the values of arguments in \code{\link{vgam.control}}. A third step is to make use of arguments such as \code{etastart}, \code{coefstart} and \code{mustart}. Some \pkg{VGAM} family functions end in \code{"ff"} to avoid interference with other functions, e.g., \code{\link{binomialff}}, \code{\link{poissonff}}, \code{\link{gaussianff}}, \code{gammaff}. This is because \pkg{VGAM} family functions are incompatible with \code{\link[stats]{glm}} (and also \code{\link[gam]{gam}} in the \pkg{gam} library and \code{\link[mgcv]{gam}} in the \pkg{mgcv} library). The smart prediction (\code{\link{smartpred}}) library is packed with the \pkg{VGAM} library. 
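  As a small illustration of this naming convention
  (a hedged sketch with simulated data;
  the object names are arbitrary):

\preformatted{
pdat <- data.frame(x2 = runif(100))
pdat <- transform(pdat, y = rpois(100, lambda = exp(1 + x2)))
fit.glm  <- glm( y ~ x2, poisson,   data = pdat)  # stats::poisson()
fit.vglm <- vglm(y ~ x2, poissonff, data = pdat)  # VGAM::poissonff()
coef(fit.glm)
coef(fit.vglm)  # Should agree closely with coef(fit.glm)
}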
  The theory behind the scaling parameter is currently being made
  more rigorous, but it should give the same value as the scale
  parameter for GLMs.
}

%~Make other sections like WARNING with \section{WARNING }{....} ~
\section{WARNING}{
  For G1-VGAMs,
  currently \code{vgam} can only handle constraint matrices
  \code{cmat}, say, such that \code{crossprod(cmat)} is diagonal.
  This condition can be detected by \code{\link{is.buggy}}.
  VGAMs with constraint matrices that have non-orthogonal columns
  should be fitted with \code{\link{sm.os}} or \code{\link{sm.ps}}
  terms instead of \code{\link{s}}.
% This is a bug that I will try to fix up soon;

  See warnings in \code{\link{vglm.control}}.
}

\seealso{
  \code{\link{is.buggy}},
  \code{\link{vgam.control}},
  \code{\link{vgam-class}},
  \code{\link{vglmff-class}},
  \code{\link{plotvgam}},
  \code{\link{summaryvgam}},
  \code{\link{summarypvgam}},
  \code{\link{sm.os}},
  \code{\link{sm.ps}},
  \code{\link[VGAM]{s}},
  \code{\link[mgcv]{magic}}.
  \code{\link{vglm}},
  \code{\link{vsmooth.spline}},
  \code{\link{cao}}.
}

\examples{# Nonparametric proportional odds model
pneumo <- transform(pneumo, let = log(exposure.time))
vgam(cbind(normal, mild, severe) ~ s(let),
     cumulative(parallel = TRUE), data = pneumo, trace = TRUE)

# Nonparametric logistic regression
hfit <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = hunua)
\dontrun{ plot(hfit, se = TRUE) }
phfit <- predict(hfit, type = "terms", raw = TRUE, se = TRUE)
names(phfit)
head(phfit$fitted)
head(phfit$se.fit)
phfit$df
phfit$sigma

# Fit two species simultaneously
hfit2 <- vgam(cbind(agaaus, kniexc) ~ s(altitude, df = c(2, 3)),
              binomialff(multiple.responses = TRUE), data = hunua)
coef(hfit2, matrix = TRUE)  # Not really interpretable
\dontrun{ plot(hfit2, se = TRUE, overlay = TRUE, lcol = 3:4, scol = 3:4)

ooo <- with(hunua, order(altitude))
with(hunua, matplot(altitude[ooo], fitted(hfit2)[ooo,],
     ylim = c(0, 0.8), xlab = "Altitude (m)",
     ylab = "Probability of presence", las = 1,
     main = "Two plant species' response curves", type = "l", lwd = 2))
with(hunua, rug(altitude)) }

# The 'subset' argument does not work here. Use subset() instead.
set.seed(1)
zdata <- data.frame(x2 = runif(nn <- 500))
zdata <- transform(zdata, y = rbinom(nn, 1, 0.5))
zdata <- transform(zdata, subS = runif(nn) < 0.7)
sub.zdata <- subset(zdata, subS)  # Use this instead
if (FALSE)
  fit4a <- vgam(cbind(y, y) ~ s(x2, df = 2),
                binomialff(multiple.responses = TRUE),
                data = zdata, subset = subS)  # This fails!!!
fit4b <- vgam(cbind(y, y) ~ s(x2, df = 2),
              binomialff(multiple.responses = TRUE),
              data = sub.zdata)  # This succeeds!!!
fit4c <- vgam(cbind(y, y) ~ sm.os(x2),
              binomialff(multiple.responses = TRUE),
              data = sub.zdata)  # This succeeds!!!
\dontrun{ par(mfrow = c(2, 2))
plot(fit4b, se = TRUE, shade = TRUE, shcol = "pink")
plot(fit4c, se = TRUE, shade = TRUE, shcol = "pink") }
}
\keyword{models}
\keyword{regression}
\keyword{smooth}
VGAM/man/cens.gumbel.Rd0000644000176200001440000001021413135276753014262 0ustar liggesusers\name{cens.gumbel}
\alias{cens.gumbel}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Censored Gumbel Distribution }
\description{
  Maximum likelihood estimation of the 2-parameter Gumbel distribution
  when there are censored observations.
  A matrix response is not allowed.
}
\usage{
cens.gumbel(llocation = "identitylink", lscale = "loge", iscale = NULL,
            mean = TRUE, percentiles = NULL, zero = "scale")
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{llocation, lscale}{
Parameter link functions for the location and (positive) \eqn{scale} parameters. See \code{\link{Links}} for more choices.
}
\item{iscale}{
Numeric and positive. Initial value for \eqn{scale}. Recycled to the appropriate length. In general, a larger value is better than a smaller value. The default is to choose the value internally.
}
\item{mean}{
Logical. Return the mean? If \code{TRUE} then the mean is returned, otherwise percentiles given by the \code{percentiles} argument.
}
\item{percentiles}{
Numeric with values between 0 and 100. If \code{mean = FALSE} then the fitted values are percentiles, which must be specified by this argument.
}
\item{zero}{
An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The value (possibly values) must be from the set \{1,2\}, corresponding respectively to \eqn{location} and \eqn{scale}. If \code{zero = NULL} then all linear/additive predictors are modelled as a linear combination of the explanatory variables. The default is to model the \eqn{scale} parameter as an intercept only.
}
}
\details{
This \pkg{VGAM} family function is like \code{\link{gumbel}} but handles observations that are left-censored (so that the true value would be less than the observed value) or right-censored (so that the true value would be greater than the observed value). To indicate which type of censoring, input \code{extra = list(leftcensored = vec1, rightcensored = vec2)} where \code{vec1} and \code{vec2} are logical vectors the same length as the response. If the two components of this list are missing then the logical values are taken to be \code{FALSE}. The fitted object has these two components stored in the \code{extra} slot.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}.
}
\references{
Coles, S. (2001) \emph{An Introduction to Statistical Modeling of Extreme Values}. London: Springer-Verlag.
}
\author{ T. W. Yee }
\section{Warning}{
Numerical problems may occur if the amount of censoring is excessive.
}
\note{
See \code{\link{gumbel}} for details about the Gumbel distribution. The initial values are computed as if all the observations were uncensored, and therefore could be improved upon.
}
\seealso{
\code{\link{gumbel}}, \code{\link{gumbelff}}, \code{\link{rgumbel}}, \code{\link{guplot}}, \code{\link{gev}}, \code{\link{venice}}.
} \examples{ # Example 1 ystar <- venice[["r1"]] # Use the first order statistic as the response nn <- length(ystar) L <- runif(nn, 100, 104) # Lower censoring points U <- runif(nn, 130, 135) # Upper censoring points y <- pmax(L, ystar) # Left censored y <- pmin(U, y) # Right censored extra <- list(leftcensored = ystar < L, rightcensored = ystar > U) fit <- vglm(y ~ scale(year), data = venice, trace = TRUE, extra = extra, fam = cens.gumbel(mean = FALSE, perc = c(5, 25, 50, 75, 95))) coef(fit, matrix = TRUE) head(fitted(fit)) fit@extra # Example 2: simulated data nn <- 1000 ystar <- rgumbel(nn, loc = 1, scale = exp(0.5)) # The uncensored data L <- runif(nn, -1, 1) # Lower censoring points U <- runif(nn, 2, 5) # Upper censoring points y <- pmax(L, ystar) # Left censored y <- pmin(U, y) # Right censored \dontrun{par(mfrow = c(1, 2)); hist(ystar); hist(y);} extra <- list(leftcensored = ystar < L, rightcensored = ystar > U) fit <- vglm(y ~ 1, trace = TRUE, extra = extra, fam = cens.gumbel) coef(fit, matrix = TRUE) } \keyword{models} \keyword{regression} VGAM/man/betaR.Rd0000644000176200001440000001134113135276753013117 0ustar liggesusers\name{betaR} \alias{betaR} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Two-parameter Beta Distribution Family Function } \description{ Estimation of the shape parameters of the two-parameter beta distribution. } \usage{ betaR(lshape1 = "loge", lshape2 = "loge", i1 = NULL, i2 = NULL, trim = 0.05, A = 0, B = 1, parallel = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape1, lshape2, i1, i2}{ Details at \code{\link{CommonVGAMffArguments}}. See \code{\link{Links}} for more choices. } \item{trim}{ An argument which is fed into \code{mean()}; it is the fraction (0 to 0.5) of observations to be trimmed from each end of the response \code{y} before the mean is computed. This is used when computing initial values, and guards against outliers. } \item{A, B}{ Lower and upper limits of the distribution. The defaults correspond to the \emph{standard beta distribution} where the response lies between 0 and 1. } \item{parallel, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The two-parameter beta distribution is given by \eqn{f(y) =} \deqn{(y-A)^{shape1-1} \times (B-y)^{shape2-1} / [Beta(shape1,shape2) \times (B-A)^{shape1+shape2-1}]}{% (y-A)^(shape1-1) * (B-y)^(shape2-1) / [Beta(shape1,shape2) * (B-A)^(shape1+shape2-1)]} for \eqn{A < y < B}, and \eqn{Beta(.,.)} is the beta function (see \code{\link[base:Special]{beta}}). The shape parameters are positive, and here, the limits \eqn{A} and \eqn{B} are known. The mean of \eqn{Y} is \eqn{E(Y) = A + (B-A) \times shape1 / (shape1 + shape2)}{E(Y) = A + (B-A) * shape1 / (shape1 + shape2)}, and these are the fitted values of the object. For the standard beta distribution the variance of \eqn{Y} is \eqn{shape1 \times shape2 / [(1+shape1+shape2) \times (shape1+shape2)^2]}{ shape1 * shape2 / ((1+shape1+shape2) * (shape1+shape2)^2)}. If \eqn{\sigma^2= 1 / (1+shape1+shape2)} then the variance of \eqn{Y} can be written \eqn{\sigma^2 \mu (1-\mu)}{mu*(1-mu)*sigma^2} where \eqn{\mu=shape1 / (shape1 + shape2)}{mu=shape1 / (shape1 + shape2)} is the mean of \eqn{Y}. Another parameterization of the beta distribution involving the mean and a precision parameter is implemented in \code{\link{betaff}}. 
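As a quick numerical check of the mean formula above, the following hedged sketch (with arbitrary parameter values) compares the empirical and theoretical means:

set.seed(1)
A <- 5; B <- 13; shape1 <- 2; shape2 <- 3
y <- A + (B - A) * rbeta(10000, shape1, shape2)  # Beta on (A, B)
c(empirical = mean(y),
  theory = A + (B - A) * shape1 / (shape1 + shape2))  # Should agree closely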
% 20120525:
% Regularity conditions not satisfied; support depends on the parameters:
% If \eqn{A} and \eqn{B} are unknown, then the \pkg{VGAM} family function
% \code{beta4()} can be used to estimate these too.

}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}.
}
\references{
Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1995) Chapter 25 of: \emph{Continuous Univariate Distributions}, 2nd edition, Volume 2, New York: Wiley.

Gupta, A. K. and Nadarajah, S. (2004) \emph{Handbook of Beta Distribution and Its Applications}, New York: Marcel Dekker.

%Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011)
%\emph{Statistical Distributions},
%Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.

% Documentation accompanying the \pkg{VGAM} package at
% \url{https://www.stat.auckland.ac.nz/~yee}
% contains further information and examples.

}
\author{ Thomas W. Yee }
\note{
The response must have values in the interval (\eqn{A}, \eqn{B}). \pkg{VGAM} 0.7-4 and prior called this function \code{\link{betaff}}.
}
\seealso{
\code{\link{betaff}},
% \code{\link{zoibetaR}},
\code{\link[stats:Beta]{Beta}}, \code{\link{genbetaII}}, \code{\link{betaII}}, \code{\link{betabinomialff}}, \code{\link{betageometric}}, \code{\link{betaprime}}, \code{\link{rbetageom}}, \code{\link{rbetanorm}}, \code{\link{kumar}}, \code{\link{simulate.vlm}}.
}
\examples{
bdata <- data.frame(y = rbeta(n = 1000, shape1 = exp(0), shape2 = exp(1)))
fit <- vglm(y ~ 1, betaR(lshape1 = "identitylink", lshape2 = "identitylink"),
            data = bdata, trace = TRUE, crit = "coef")
fit <- vglm(y ~ 1, betaR, data = bdata, trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)  # Useful for intercept-only models

bdata <- transform(bdata, Y = 5 + 8 * y)  # From 5 to 13, not 0 to 1
fit <- vglm(Y ~ 1, betaR(A = 5, B = 13), data = bdata, trace = TRUE)
Coef(fit)
c(meanY = with(bdata, mean(Y)), head(fitted(fit),2))
}
\keyword{models}
\keyword{regression}

% 3/1/06; this works well:
% fit <- vglm(y~1, beta.abqn(link = logoff(offset = 1), tr = TRUE, crit = "c")
% 3/1/06; this does not work so well:
%  it <- vglm(y~1, beta.abqn(link = logoff(offset = 0), tr = TRUE, crit = "c")
% Interesting!!
VGAM/man/cloglog.Rd0000644000176200001440000001000413135276753013503 0ustar liggesusers\name{cloglog}
\alias{cloglog}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Complementary Log-log Link Function }
\description{
Computes the complementary log-log transformation, including its inverse and the first two derivatives.
}
\usage{
cloglog(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
        short = TRUE, tag = FALSE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{theta}{ Numeric or character. See below for further details. }
\item{bvalue}{ See \code{\link{Links}} for general information about links. }
\item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. }
}
\details{
The complementary log-log link function is commonly used for parameters that lie in the unit interval. Numerical values of \code{theta} close to 0 or 1 or out of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}.
}
\value{
For \code{deriv = 0}, the complementary log-log of \code{theta}, i.e., \code{log(-log(1 - theta))} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{1-exp(-exp(theta))}.
For \code{deriv = 1}, the function returns \emph{d} \code{eta} / \emph{d} \code{theta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal.

Here, all logarithms are natural logarithms, i.e., to base \eqn{e}.
}
\references{
McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall.
}
\author{ Thomas W. Yee }
\note{
Numerical instability may occur when \code{theta} is close to 1 or 0. One way of overcoming this is to use \code{bvalue}.

Changing 1s to 0s and 0s to 1s in the response means that effectively a loglog link is fitted. That is, transform \eqn{y} by \eqn{1-y}. That is why only one of \code{\link{cloglog}} and \code{loglog} is written.

With constrained ordination (e.g., \code{\link{cqo}} and \code{\link{cao}}) used with \code{\link{binomialff}}, a complementary log-log link function is preferred over the default \code{\link{logit}} link, for a good reason. See the example below.

In terms of the threshold approach with cumulative probabilities for an ordinal response this link function corresponds to the extreme value distribution.
}
\seealso{
\code{\link{Links}}, \code{\link{logitoffsetlink}}, \code{\link{logit}}, \code{\link{probit}}, \code{\link{cauchit}}.
}
\examples{
p <- seq(0.01, 0.99, by = 0.01)
cloglog(p)
max(abs(cloglog(cloglog(p), inverse = TRUE) - p))  # Should be 0

p <- c(seq(-0.02, 0.02, by = 0.01), seq(0.97, 1.02, by = 0.01))
cloglog(p)  # Has NAs
cloglog(p, bvalue = .Machine$double.eps)  # Has no NAs

\dontrun{ p <- seq(0.01, 0.99, by = 0.01)
plot(p, logit(p), type = "l", col = "limegreen", lwd = 2, las = 1,
     main = "Some probability link functions", ylab = "transformation")
lines(p, probit(p), col = "purple", lwd = 2)
lines(p, cloglog(p), col = "chocolate", lwd = 2)
lines(p, cauchit(p), col = "tan", lwd = 2)
abline(v = 0.5, h = 0, lty = "dashed")
legend(0.1, 4, c("logit", "probit", "cloglog", "cauchit"),
       col = c("limegreen", "purple", "chocolate", "tan"), lwd = 2)
}

\dontrun{ # This example shows that a cloglog link is preferred over the logit
n <- 500; p <- 5; S <- 3; Rank <- 1  # Species packing model:
mydata <- rcqo(n, p, S, eq.tol = TRUE, es.opt = TRUE, eq.max = TRUE,
               family = "binomial", hi.abundance = 5, seed = 123, Rank = Rank)
fitc <- cqo(attr(mydata, "formula"), I.tol = TRUE, data = mydata,
            fam = binomialff(multiple.responses = TRUE, link = "cloglog"),
            Rank = Rank)
fitl <- cqo(attr(mydata, "formula"), I.tol = TRUE, data = mydata,
            fam = binomialff(multiple.responses = TRUE, link = "logit"),
            Rank = Rank)

# Compare the fitted models (cols 1 and 3) with the truth (col 2)
cbind(concoef(fitc), attr(mydata, "concoefficients"), concoef(fitl))
}
}
\keyword{math}
\keyword{models}
\keyword{regression}
VGAM/man/constraints.Rd0000644000176200001440000001255313135276753014437 0ustar liggesusers\name{constraints}
\alias{constraints}
\alias{constraints.vlm}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Constraint Matrices }
\description{
Extractor function for the \emph{constraint matrices} of objects in the \pkg{VGAM} package.
}
\usage{
constraints(object, ...)
constraints.vlm(object, type = c("lm", "term"), all = TRUE, which,
                matrix.out = FALSE, colnames.arg = TRUE,
                rownames.arg = TRUE, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{object}{ Some \pkg{VGAM} object, for example, having class \code{\link{vglmff-class}}. }
\item{type}{ Character. Whether LM- or term-type constraints are to be returned.
The number of such matrices returned is equal to \code{nvar(object, type = "lm")} and the number of terms, respectively.
}
\item{all, which}{
If \code{all = FALSE} then \code{which} gives the integer index or a vector of logicals specifying the selection.
}
\item{matrix.out}{
Logical. If \code{TRUE} then the constraint matrices are \code{\link[base]{cbind}()ed} together. The result is usually more compact because the default is a list of constraint matrices.
}
\item{colnames.arg, rownames.arg}{
Logical. If \code{TRUE} then column and row names are assigned corresponding to the variables.
}
\item{\dots}{
Other possible arguments such as \code{type}.
}
}
\details{
Constraint matrices describe the relationship of coefficients/component functions of a particular explanatory variable between the linear/additive predictors in VGLM/VGAM models. For example, they may be all different (constraint matrix is the identity matrix) or all the same (constraint matrix has one column and has unit values).

VGLMs and VGAMs have constraint matrices which are \emph{known}. The class of RR-VGLMs has constraint matrices which are \emph{unknown} and are to be estimated.
}
\value{
The extractor function \code{constraints()} returns a list comprising constraint matrices---usually one for each column of the VLM model matrix, and in that order. The list is labelled with the variable names. Each constraint matrix has \eqn{M} rows, where \eqn{M} is the number of linear/additive predictors, and its rank is equal to the number of columns. A model with no constraints at all has an order \eqn{M} identity matrix as each variable's constraint matrix.

For \code{\link{vglm}} and \code{\link{vgam}} objects, feeding \code{type = "term"} constraint matrices back into the same model should work and give an identical model. The default is the \code{"lm"}-type constraint matrices; this is a list with one constraint matrix per column of the LM matrix. See the \code{constraints} argument of \code{\link{vglm}}, and the example below.
}
\author{T. W. Yee }
\note{
In all \pkg{VGAM} family functions \code{zero = NULL} means none of the linear/additive predictors are modelled as intercepts-only. Other arguments found in certain \pkg{VGAM} family functions which affect constraint matrices include \code{parallel} and \code{exchangeable}.

The \code{constraints} argument in \code{\link{vglm}} and \code{\link{vgam}} allows constraint matrices to be inputted. If so, then \code{constraints(fit, type = "lm")} can be fed into the \code{constraints} argument of the same object to get the same model.

The \code{xij} argument does not affect constraint matrices; rather, it allows each row of the constraint matrix to be multiplied by a specified vector.
}
\references{
Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493.

Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41.

%\url{http://www.stat.auckland.ac.nz/~yee} contains additional
%information.

}
\seealso{
\code{\link{is.parallel}}, \code{\link{is.zero}}. VGLMs are described in \code{\link{vglm-class}}; RR-VGLMs are described in \code{\link{rrvglm-class}}.

Arguments such as \code{zero} and \code{parallel} found in many \pkg{VGAM} family functions are a way of creating/modifying constraint matrices conveniently, e.g., see \code{\link{zero}}. See \code{\link{CommonVGAMffArguments}} for more information.
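To make the two extreme cases under Details concrete, here is a small sketch (with \eqn{M = 3} linear/additive predictors, say):

M <- 3
diag(M)          # All coefficients unconstrained: the identity matrix
matrix(1, M, 1)  # All coefficients equal: e.g., a parallelism constraint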
}
\examples{
# Fit the proportional odds model:
pneumo <- transform(pneumo, let = log(exposure.time))
(fit1 <- vglm(cbind(normal, mild, severe) ~ sm.bs(let, 3),
              cumulative(parallel = TRUE, reverse = TRUE), data = pneumo))
coef(fit1, matrix = TRUE)
constraints(fit1)  # Parallel assumption results in this
constraints(fit1, type = "term")  # Same as the default ("vlm"-type)
is.parallel(fit1)

# An equivalent model to fit1 (needs the type "term" constraints):
clist.term <- constraints(fit1, type = "term")  # "term"-type constraints
(fit2 <- vglm(cbind(normal, mild, severe) ~ sm.bs(let, 3), data = pneumo,
              cumulative(reverse = TRUE), constraints = clist.term))
abs(max(coef(fit1, matrix = TRUE) -
        coef(fit2, matrix = TRUE)))  # Should be zero

# Fit a rank-1 stereotype (RR-multinomial logit) model:
fit <- rrvglm(Country ~ Width + Height + HP, multinomial, data = car.all)
constraints(fit)  # All except the first are the estimated A matrix
}
\keyword{models}
\keyword{regression}
VGAM/man/wine.Rd0000644000176200001440000000420713135276753013027 0ustar liggesusers\name{wine}
\alias{wine}
\docType{data}
\title{ Bitterness in Wine Data
%% ~~ data name/kind ... ~~
}
\description{
This oenological data frame concerns the amount of bitterness in white wine; there were 72 assessments in total.
}
\usage{ data(wine) }
\format{
A data frame with 4 rows on the following 7 variables.

\describe{
\item{temp}{temperature, with levels cold and warm. }
\item{contact}{whether contact of the juice with the skin was allowed or avoided, for a specified period. Two levels: no or yes. }
\item{bitter1, bitter2, bitter3, bitter4, bitter5}{ numeric vectors, the counts. The order is none to most intense. }
}
}
\details{
The data set comes from Randall (1989) and concerns a factorial experiment for investigating factors that affect the bitterness of white wines. There are two factors in the experiment: temperature at the time of crushing the grapes and contact of the juice with the skin. Two bottles of wine were fermented for each of the treatment combinations. A panel of 9 judges was selected and trained to detect bitterness. Thus there were 72 assessments in total. Originally, the bitterness of the wine was recorded on a continuous scale in the interval from 0 (none) to 100 (intense) but later the values were grouped using equal lengths into five ordered categories 1, 2, 3, 4 and 5.
%% ~~ If necessary, more details than the __description__ above ~~
}
\source{
% Further information is at:
% September 30, 2013

Christensen, R. H. B. (2013) Analysis of ordinal data with cumulative link models---estimation with the R-package \pkg{ordinal}. R Package Version 2013.9-30. \url{https://CRAN.R-project.org/package=ordinal}.

%\url{https://www.R-project.org/package=ordinal}.
%\url{https://www.CRAN.R-project.org/package=ordinal}.
% Prior to 20150728

Randall, J. H. (1989) The analysis of sensory data by generalized linear model. \emph{Biometrical Journal} \bold{31}(7), 781--793.

Kosmidis, I. (2014) Improved estimation in cumulative link models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{76}, in press.
}
\examples{
wine
summary(wine)
}
\keyword{datasets}
VGAM/man/lambertW.Rd0000644000176200001440000000374413135276753013643 0ustar liggesusers\name{lambertW}
\alias{lambertW}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ The Lambert W function }
\description{
Computes the Lambert \emph{W} function for real values.
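A quick numerical check of the defining identity \eqn{W(z) \exp(W(z)) = z} (a minimal sketch):

z <- c(0.5, 1, exp(1))
w <- lambertW(z)
max(abs(w * exp(w) - z))  # Should be essentially zero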
} \usage{ lambertW(x, tolerance = 1e-10, maxit = 50) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ A vector of reals. } \item{tolerance}{ Accuracy desired. } \item{maxit}{ Maximum number of iterations of third-order Halley's method. } } \details{ The Lambert \eqn{W} function is the root of the equation \eqn{W(z) \exp(W(z)) = z}{W(z) * exp(W(z)) = z} for complex \eqn{z}. It is multi-valued if \eqn{z} is real and \eqn{z < -1/e}. For real \eqn{-1/e \leq z < 0}{-1/e <= z < 0} it has two possible real values, and currently only the upper branch is computed. } \value{ This function returns the principal branch of the \eqn{W} function for \emph{real} \eqn{z}. It returns \eqn{W(z) \geq -1}{W(z) >= -1}, and \code{NA} for \eqn{z < -1/e}. } \references{ Corless, R. M. and Gonnet, G. H. and Hare, D. E. G. and Jeffrey, D. J. and Knuth, D. E. (1996) On the Lambert \eqn{W} function. \emph{Advances in Computational Mathematics}, \bold{5}(4), 329--359. } \author{ T. W. Yee } \note{ If convergence does not occur then increase the value of \code{maxit} and/or \code{tolerance}. Yet to do: add an argument \code{lbranch = TRUE} to return the lower branch for real \eqn{-1/e \leq z < 0}{-1/e <= z < 0}; this would give \eqn{W(z) \leq -1}{W(z) <= -1}. } %% ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link[base:log]{log}}, \code{\link[base:log]{exp}}. } \examples{ \dontrun{ curve(lambertW, -exp(-1), 3, xlim = c(-1, 3), ylim = c(-2, 1), las = 1, col = "orange") abline(v = -exp(-1), h = -1, lwd = 2, lty = "dotted", col = "gray") abline(h = 0, v = 0, lty = "dashed", col = "blue") } } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{math} VGAM/man/predictqrrvglm.Rd0000644000176200001440000000452013135276753015130 0ustar liggesusers\name{predictqrrvglm} \alias{predictqrrvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Predict Method for a CQO fit } \description{ Predicted values based on a constrained quadratic ordination (CQO) object. } \usage{ predictqrrvglm(object, newdata=NULL, type = c("link", "response", "latvar", "terms"), se.fit = FALSE, deriv = 0, dispersion = NULL, extra = object@extra, varI.latvar = FALSE, refResponse = NULL, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ Object of class inheriting from \code{"qrrvglm"}. } \item{newdata}{ An optional data frame in which to look for variables with which to predict. If omitted, the fitted linear predictors are used. } \item{type, se.fit, dispersion, extra}{ See \code{\link{predictvglm}}. } \item{deriv}{ Derivative. Currently only 0 is handled. } \item{varI.latvar, refResponse}{ Arguments passed into \code{\link{Coef.qrrvglm}}. } \item{\dots}{ Currently undocumented. } } \details{ Obtains predictions from a fitted CQO object. Currently there are lots of limitations of this function; it is unfinished. % and optionally estimates standard errors of those predictions } \value{ See \code{\link{predictvglm}}. } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. } \author{ T. W. Yee } \note{ This function is not robust and has not been checked fully. } \seealso{ \code{\link{cqo}}. 
} \examples{ hspider[,1:6] <- scale(hspider[,1:6]) # Standardize the environmental vars set.seed(1234) # vvv p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, # vvv Arctperi, Auloalbi, Pardlugu, Pardmont, # vvv Pardnigr, Pardpull, Trocterr, Zoraspin) ~ # vvv WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, # vvv poissonff, data = hspider, Crow1positive = FALSE, I.tol = TRUE) # vvv sort(deviance(p1, history = TRUE)) # A history of all the iterations # vvv head(predict(p1)) # The following should be all zeros # vvv max(abs(predict(p1, new = head(hspider)) - head(predict(p1)))) # vvv max(abs(predict(p1, new = head(hspider), type = "res")-head(fitted(p1)))) } \keyword{models} \keyword{regression} VGAM/man/otpospoisUC.Rd0000644000176200001440000000344013135276753014352 0ustar liggesusers\name{Otpospois} \alias{Otpospois} \alias{dotpospois} \alias{potpospois} \alias{qotpospois} \alias{rotpospois} \title{ One-truncated Positive-Poisson Distribution } \description{ Density, distribution function, quantile function, and random generation for the one-truncated positive-Poisson distribution. } \usage{ dotpospois(x, lambda, log = FALSE) potpospois(q, lambda, log.p = FALSE) qotpospois(p, lambda) rotpospois(n, lambda) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q, p, n}{ Same as \code{\link{Pospois}}. } \item{lambda, log, log.p}{ Same as \code{\link{Pospois}}. } } \details{ The one-truncated positive-Poisson is a Poisson distribution but with the probability of a one and a zero being 0. That is, its support is 2, 3, \ldots. The other probabilities are scaled to add to unity. Some more details are given in \code{\link{pospoisson}}. } \value{ \code{dotpospois} gives the density, \code{potpospois} gives the distribution function, \code{qotpospois} gives the quantile function, and \code{rotpospois} generates random deviates. } %\references{ %} \author{ T. W. Yee } \note{ Given some response data, the \pkg{VGAM} family function \code{\link{otpospoisson}} estimates the parameter \code{lambda}. } \seealso{ \code{\link{otpospoisson}}, \code{\link{Pospois}}, \code{\link{Oipospois}}. } \examples{ dotpospois(1:20, 0.5) rotpospois(20, 0.5) \dontrun{ lambda <- 4; x <- 1:10 plot(x, dotpospois(x, lambda = lambda), type = "h", ylim = 0:1, sub = "lambda=4", las = 1, col = "blue", ylab = "Probability", main = "1-truncated positive-Poisson distribution: blue=PMF; orange=CDF") lines(x + 0.1, potpospois(x, lambda = lambda), col = "orange", lty = 3, type = "h") } } \keyword{distribution} VGAM/man/rrvglm.optim.control.Rd0000644000176200001440000000402513135276753016202 0ustar liggesusers\name{rrvglm.optim.control} \alias{rrvglm.optim.control} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Control Function for rrvglm() Calling optim() } \description{ Algorithmic constants and parameters for running \code{optim} within \code{rrvglm} are set using this function. } \usage{ rrvglm.optim.control(Fnscale = 1, Maxit = 100, Switch.optimizer = 3, Abstol = -Inf, Reltol = sqrt(.Machine$double.eps), ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{Fnscale}{ Passed into \code{optim} as \code{fnscale}. } \item{Maxit}{ Passed into \code{optim} as \code{maxit}. } \item{Switch.optimizer}{ Iteration number when the "Nelder-Mead" method of \code{optim} is switched to the quasi-Newton "BFGS" method. 
Assigning \code{Switch.optimizer} a negative number means BFGS is always used, while assigning \code{Switch.optimizer} a value greater than the maximum number of iterations means Nelder-Mead is always used.
}
\item{Abstol}{ Passed into \code{optim} as \code{abstol}. }
\item{Reltol}{ Passed into \code{optim} as \code{reltol}. }
\item{\dots}{ Ignored. }
}
\details{
See \code{\link[stats]{optim}} for more details.
}
\value{
A list with components equal to the arguments.
}
%\references{ ~put references to the literature/web site here ~ }
\author{ Thomas W. Yee }
\note{
The transition between optimization methods may be unstable, so users may have to vary the value of \code{Switch.optimizer}.

Practical experience with \code{Switch.optimizer} shows that setting it to too large a value may lead to a local solution, whereas setting it to a low value makes finding the global solution more likely. It appears that, if BFGS kicks in too late when the Nelder-Mead algorithm is starting to converge to a local solution, then switching to BFGS will not be sufficient to bypass convergence to that local solution.
}
\seealso{
\code{\link{rrvglm.control}}, \code{\link[stats]{optim}}.
}
%\examples{
%}
\keyword{models}
\keyword{regression}
VGAM/man/cao.control.Rd0000644000176200001440000002713213135276753014310 0ustar liggesusers\name{cao.control}
\alias{cao.control}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Control Function for RR-VGAMs (CAO) }
\description{
Algorithmic constants and parameters for a constrained additive ordination (CAO), by fitting a \emph{reduced-rank vector generalized additive model} (RR-VGAM), are set using this function. This is the control function for \code{\link{cao}}.
}
\usage{
cao.control(Rank = 1, all.knots = FALSE, criterion = "deviance", Cinit = NULL,
            Crow1positive = TRUE, epsilon = 1.0e-05,
            Etamat.colmax = 10,
            GradientFunction = FALSE,
            iKvector = 0.1,
            iShape = 0.1,
            noRRR = ~ 1, Norrr = NA,
            SmallNo = 5.0e-13,
            Use.Init.Poisson.QO = TRUE,
            Bestof = if (length(Cinit)) 1 else 10, maxitl = 10,
            imethod = 1, bf.epsilon = 1.0e-7, bf.maxit = 10,
            Maxit.optim = 250, optim.maxit = 20,
            sd.sitescores = 1.0, sd.Cinit = 0.02, suppress.warnings = TRUE,
            trace = TRUE, df1.nl = 2.5, df2.nl = 2.5, spar1 = 0,
            spar2 = 0, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{Rank}{
The numerical rank \eqn{R} of the model, i.e., the number of latent variables. Currently only \code{Rank = 1} is implemented.
}
\item{all.knots}{
Logical indicating if all distinct points of the smoothing variables are to be used as knots. Assigning the value \code{FALSE} means fewer knots are chosen when the number of distinct points is large, meaning less computational expense. See \code{\link{vgam.control}} for details.
}
\item{criterion}{
Convergence criterion. Currently, only one is supported: the deviance is minimized.
}
\item{Cinit}{
Optional initial \bold{C} matrix which may speed up convergence.
}
\item{Crow1positive}{
Logical vector of length \code{Rank} (recycled if necessary): are the elements of the first row of \bold{C} positive? For example, if \code{Rank} is 4, then specifying \code{Crow1positive = c(FALSE, TRUE)} will force \bold{C[1,1]} and \bold{C[1,3]} to be negative, and \bold{C[1,2]} and \bold{C[1,4]} to be positive.
}
\item{epsilon}{
Positive numeric. Used to test for convergence for GLMs fitted in FORTRAN. Larger values mean a loosening of the convergence criterion.

% Used only if \code{FastAlgorithm} is \code{TRUE}.

}
\item{Etamat.colmax}{
Positive integer, no smaller than \code{Rank}.
Controls the amount of memory used by \code{.Init.Poisson.QO()}. It is the maximum number of columns allowed for the pseudo-response and its weights. In general, the larger the value, the better the initial value. Used only if \code{Use.Init.Poisson.QO = TRUE}. } % \item{FastAlgorithm}{ % Logical. % Whether compiled code is used. % For \code{\link{cao}} this must be \code{TRUE}. % % } \item{GradientFunction}{ Logical. Whether \code{\link[stats]{optim}}'s argument \code{gr} is used or not, i.e., to compute gradient values. Used only if \code{FastAlgorithm} is \code{TRUE}. Currently, this argument must be set to \code{FALSE}. } \item{iKvector, iShape}{ See \code{\link{qrrvglm.control}}. } % \item{Hstep}{ Positive value. Used as the step size in the % finite difference approximation to the derivatives by % \code{\link[stats]{optim}}. % Used only if \code{GradientFunction} is \code{TRUE}. % % } % \item{Kinit}{ % Initial values for the index parameters \code{k} in the negative % binomial distribution (one per species). In general, a smaller number % is preferred over a larger number. The vector is recycled to the % number of responses (species). %} \item{noRRR}{ Formula giving terms that are \emph{not} to be included in the reduced-rank regression (or formation of the latent variables). The default is to omit the intercept term from the latent variables. Currently, only \code{noRRR = ~ 1} is implemented. } \item{Norrr}{ Defunct. Please use \code{noRRR}. Use of \code{Norrr} will become an error soon. } % \item{Parscale}{ % Numerical and positive-valued vector of length \bold{C} % (recycled if necessary). Passed into \code{optim(..., % control = list(parscale = Parscale))}; the elements of \bold{C} become % \bold{C} / \code{Parscale}. Setting \code{I.tolerances = TRUE} results % in line searches that are very large, therefore \bold{C} has to be % scaled accordingly to avoid large step sizes. % } \item{SmallNo}{ Positive numeric between \code{.Machine$double.eps} and \code{0.0001}. Used to avoid under- or over-flow in the IRLS algorithm. % Used only if \code{FastAlgorithm} is \code{TRUE}. } \item{Use.Init.Poisson.QO }{ Logical. If \code{TRUE} then the function \code{.Init.Poisson.QO} is used to obtain initial values for the canonical coefficients \bold{C}. If \code{FALSE} then random numbers are used instead. } \item{Bestof}{ Integer. The best of \code{Bestof} models fitted is returned. This argument helps guard against local solutions by (hopefully) finding the global solution from many fits. The argument works only when the function generates its own initial value for \bold{C}, i.e., when \bold{C} are \emph{not} passed in as initial values. The default is only a convenient minimal number and users are urged to increase this value. } \item{maxitl}{ Positive integer. Maximum number of Newton-Raphson/Fisher-scoring/local-scoring iterations allowed. } \item{imethod}{ See \code{\link{qrrvglm.control}}. } \item{bf.epsilon}{ Positive numeric. Tolerance used by the modified vector backfitting algorithm for testing convergence. } \item{bf.maxit}{ Positive integer. Number of backfitting iterations allowed in the compiled code. } \item{Maxit.optim}{ Positive integer. Number of iterations given to the function \code{\link[stats]{optim}} at each of the \code{optim.maxit} iterations. } \item{optim.maxit}{ Positive integer. Number of times \code{\link[stats]{optim}} is invoked. % At iteration \code{i}, the \code{i}th value of \code{Maxit.optim} % is fed into \code{\link[stats]{optim}}. 
}
% \item{se.fit}{
% Logical indicating whether approximate
% pointwise standard errors are to be saved on the object.
% Currently this argument must have the value \code{FALSE}.
% }
\item{sd.sitescores}{
Numeric. Standard deviation of the initial values of the site scores, which are generated from a normal distribution. Used when \code{Use.Init.Poisson.QO} is \code{FALSE}.
}
\item{sd.Cinit}{
Standard deviation of the initial values for the elements of \bold{C}. These are normally distributed with mean zero. This argument is used only if \code{Use.Init.Poisson.QO = FALSE}.
}
\item{suppress.warnings}{
Logical. Suppress warnings?
}
\item{trace}{
Logical indicating if output should be produced for each iteration. Having the value \code{TRUE} is a good idea for large data sets.
}
\item{df1.nl, df2.nl}{
Numeric and non-negative, recycled to length \emph{S}. Nonlinear degrees of freedom for smooths of the first and second latent variables. A value of 0 means the smooth is linear. Roughly, a value between 1.0 and 2.0 often has the approximate flexibility of a quadratic. The user should not assign too large a value to this argument, e.g., the value 4.0 is probably too high. The argument \code{df1.nl} is ignored if \code{spar1} is assigned a positive value or values. Ditto for \code{df2.nl}.
}
\item{spar1, spar2}{
Numeric and non-negative, recycled to length \emph{S}. Smoothing parameters of the smooths of the first and second latent variables. The larger the value, the more smooth (less wiggly) the fitted curves. These arguments are an alternative to specifying \code{df1.nl} and \code{df2.nl}. A value 0 (the default) for \code{spar1} means that \code{df1.nl} is used. Ditto for \code{spar2}. The values are on a scaled version of the latent variables. See Green and Silverman (1994) for more information.
}
\item{\dots}{ Ignored at present. }
}
\details{
Many of these arguments are identical to those of \code{\link{qrrvglm.control}}. Here, \eqn{R} is the \code{Rank}, \eqn{M} is the number of additive predictors, and \eqn{S} is the number of responses (species). Thus \eqn{M=S} for binomial and Poisson responses, and \eqn{M=2S} for the negative binomial and 2-parameter gamma distributions.

Allowing the smooths too much flexibility means the CAO optimization problem becomes more difficult to solve. This is because the number of local solutions increases as the nonlinearity of the smooths increases. In situations of high nonlinearity, many initial values should be used, so that \code{Bestof} should be assigned a larger value. In general, there should be a reasonable value of \code{df1.nl} somewhere between 0 and about 3 for most data sets.
}
\value{
A list with the components corresponding to its arguments, after some basic error checking.
}
\references{
Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213.

Green, P. J. and Silverman, B. W. (1994) \emph{Nonparametric Regression and Generalized Linear Models: A Roughness Penalty Approach}, London: Chapman & Hall.
}
\author{T. W. Yee}
\note{
The argument \code{df1.nl} can be inputted in the format \code{c(spp1 = 2, spp2 = 3, 2.5)}, say, meaning the default value is 2.5, but two species have alternative values.

If \code{spar1 = 0} and \code{df1.nl = 0} then this represents fitting linear functions (CLO). Currently, this is handled in the awkward manner of setting \code{df1.nl} to be a small positive value, so that the smooth is almost linear but not quite. A proper fix to this special case should be done in the near future.
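For instance, both ways of controlling the smooths' flexibility can be sketched as follows (the species names are purely illustrative):

cao.control(df1.nl = c(spp1 = 2, spp2 = 3, 2.5))$df1.nl  # Named per-species values
cao.control(spar1 = 0.5)$spar1  # Smoothing parameter; overrides df1.nl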
}
\seealso{
\code{\link{cao}}.
}
\examples{\dontrun{
hspider[,1:6] <- scale(hspider[,1:6])  # Standardized environmental vars
set.seed(123)
ap1 <- cao(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull, Zoraspin) ~
           WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
           family = poissonff, data = hspider,
           df1.nl = c(Zoraspin = 2.3, 2.1),
           Bestof = 10, Crow1positive = FALSE)
sort(deviance(ap1, history = TRUE))  # A history of all the iterations

Coef(ap1)

par(mfrow = c(2, 3))  # All or most of the curves are unimodal; some are
plot(ap1, lcol = "blue")  # quite symmetric. Hence a CQO model should be ok

par(mfrow = c(1, 1), las = 1)
index <- 1:ncol(depvar(ap1))  # lvplot is jagged because only 28 sites
lvplot(ap1, lcol = index, pcol = index, y = TRUE)

trplot(ap1, label = TRUE, col = index)
abline(a = 0, b = 1, lty = 2)

persp(ap1, label = TRUE, col = 1:4)
} }
\keyword{models}
\keyword{regression}

%cao.control(Rank = 1, all.knots = FALSE,
%            criterion = "deviance", Cinit = NULL,
%            Crow1positive = TRUE, epsilon = 1e-05,
%            Etamat.colmax = 10,
%%           FastAlgorithm = TRUE,
%%           is.loaded(symbol.For("cqo2f")),
%%           GradientFunction = FALSE,
%            iKvector = 0.1,
%            iShape = 0.1,
%            noRRR = ~1,
%%           Parscale = 1,
%            SmallNo = 5e-13,
%            Use.Init.Poisson.QO = TRUE,
%            Bestof = if (length(Cinit)) 1 else 10, maxitl = 40,
%            bf.epsilon = 1.0e-7, bf.maxit = 40,
%            Maxit.optim = 250, optim.maxit = 20,
%%           se.fit = FALSE,
%            sd.sitescores = 1,
%            sd.Cinit = 0.02, trace = TRUE,
%%           df1.nl = 2.5, spar1 = 0, ...)
VGAM/man/AR1EIM.Rd0000644000176200001440000002335613135276753013005 0ustar liggesusers\name{AR1EIM}
\alias{AR1EIM}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{Computation of the Exact EIM of an Order-1 Autoregressive Process }
\description{Computation of the exact Expected Information Matrix of the Autoregressive process of order-\eqn{1} (AR(\eqn{1})) with Gaussian white noise and stationary random components.
}
\usage{
AR1EIM(x = NULL, var.arg = NULL, p.drift = NULL,
       WNsd = NULL, ARcoeff1 = NULL, eps.porat = 1e-2)
}
\arguments{
\item{x}{
A vector of quantiles. The Gaussian time series for which the EIMs are computed. If multiple time series are being analyzed, then \code{x} must be a matrix where each column corresponds to a response. That is, the number of columns (denoted as \eqn{NOS}) must match the number of responses.
}
\item{var.arg}{
Logical. Same as with \code{\link[VGAM:AR1]{AR1}}.
}
\item{p.drift}{
A numeric vector with the \emph{scaled mean(s)} (commonly referred to as \emph{drift}) of the AR process(es) in turn. Its length matches the number of responses.
}
\item{WNsd, ARcoeff1}{
Matrices. The standard deviation of the white noise, and the correlation (coefficient) of the AR(\eqn{1}) model, for \bold{each} observation. That is, the dimension for each matrix is \eqn{N \times NOS}{N x NOS}, where \eqn{N} is the number of observations and \eqn{NOS} is the number of responses. Else, these arguments are recycled.
}
\item{eps.porat}{
A very small positive number to test whether the standard deviation (\code{WNsd}) is close enough to its value estimated in this function. See below for further details.
}
}
\details{
This function implements the algorithm of Porat and Friedlander (1986) to \emph{recursively} compute the exact expected information matrix (EIM) of Gaussian time series with stationary random components.
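In practice these exact EIMs are requested when fitting an AR(1) model with \code{\link[VGAM:vglm]{vglm}}; a hedged sketch (this assumes the \code{type.EIM} argument of \code{\link[VGAM:AR1]{AR1}}, which is present in recent \pkg{VGAM} versions):

set.seed(2)
tsdata <- data.frame(y = as.numeric(arima.sim(list(ar = 0.5), n = 200)))
fit <- vglm(y ~ 1, AR1(type.EIM = "exact"), data = tsdata, trace = TRUE)
coef(fit, matrix = TRUE)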
By default, when the VGLM/VGAM family function \code{\link[VGAM:AR1]{AR1}} is used to fit an AR(\eqn{1}) model via \code{\link[VGAM:vglm]{vglm}}, Fisher scoring is executed using the \bold{approximate} EIM for the AR process. However, this model can also be fitted using the \bold{exact} EIMs computed by \code{AR1EIM}. Given \eqn{N} consecutive data points, \eqn{ {y_{0}, y_{1}, \ldots, y_{N - 1} } }{ {y[0], y[1], \ldots, y[N - 1]} } with probability density \eqn{f(\boldsymbol{y})}{f(y)}, the Porat and Friedlander algorithm calculates the EIMs \eqn{ [J_{n-1}(\boldsymbol{\theta})] }{J(n-1)[\theta]}, for all \eqn{1 \leq n \leq N}{1 \le n \le N}. This is done based on the Levinson-Durbin algorithm for computing the orthogonal polynomials of a Toeplitz matrix. In particular, for the AR(\eqn{1}) model, the vector of parameters to be estimated under the VGAM/VGLM approach is \deqn{ \boldsymbol{\eta} = (\mu^{*}, loge(\sigma^2), rhobit(\rho)),}{ \eta = ( mu^*, loge(sigma^2), rhobit(rho)), } where \eqn{\sigma^2}{sigma^2} is the variance of the white noise and \eqn{mu^{*}}{mu^*} is the drift parameter (See \code{\link[VGAM:AR1]{AR1}} for further details on this). %Compared to \code{\link[stats]{arima}}, this family function differs %in the following ways. %1. %2. %3. %The following quote from \code{\link[stats]{arima}} reveals a weakness: %"jsdjfksf". %This is a well-known weakness in \code{\link[stats]{arima}}, however, %some simulations suggest that the VGAM se is more accurate. Consequently, for each observation \eqn{n = 1, \ldots, N}, the EIM, \eqn{J_{n}(\boldsymbol{\theta})}{Jn[\theta]}, has dimension \eqn{3 \times 3}{3 x 3}, where the diagonal elements are: %Notice, however, that the Porat and Friedlander algorithm considers %\eqn{ { y_t } }{ {y[t]}} as a zero-mean process. %Then, for each \eqn{n = 1, \ldots, N}, %\eqn{ [J_{n}(\boldsymbol{\theta})] }{Jn[\theta]} is a %\eqn{2 \times 2}{2 x 2} matrix, with elements \deqn{ J_{[n, 1, 1]} = E[ -\partial^2 \log f(\boldsymbol{y}) / \partial ( \mu^{*} )^2 ], }{ J[n, 1, 1] = E[ -\delta^2 log f(y) / \delta (mu^*)^2 ], } \deqn{ J_{[n, 2, 2]} = E[ -\partial^2 \log f(\boldsymbol{y}) / \partial (\sigma^2)^2 ], }{ J[n, 2, 2] = E[ - \delta^2 log f(y) / \delta (\sigma^2)^2 ],} and \deqn{ J_{[n, 3, 3]} = E[ -\partial^2 \log f(\boldsymbol{y}) / \partial ( \rho )^2 ]. }{ J[n, 3, 3] = E[ -\delta^2 log f(y) / \delta (rho)^2]. } As for the off-diagonal elements, one has the usual entries, i.e., \deqn{ J_{[n, 1, 2]} = J_{[n, 2, 1]} = E[ -\partial^2 \log f(\boldsymbol{y}) / \partial \sigma^2 \partial \rho], }{ J[n, 1, 2] = J[n, 2, 1] = E[ -\delta^2 log f(y) / \delta \sigma^2 \delta rho ],} etc. If \code{var.arg = FALSE}, then \eqn{\sigma} instead of \eqn{\sigma^2} is estimated. Therefore, \eqn{J_{[n, 2, 2]}}{J[n, 2, 2]}, \eqn{J_{[n, 1, 2]}}{J[n, 1, 2]}, etc., are correspondingly replaced. Once these expected values are internally computed, they are returned in an array of dimension \eqn{N \times 1 \times 6}{N x 1 x 6}, of the form \deqn{J[, 1, ] = [ J_{[ , 1, 1]}, J_{[ , 2, 2]}, J_{[ , 3, 3]}, J_{[ , 1, 2]}, J_{[, 2, 3]}, J_{[ , 1, 3]} ]. }{ J[, 1, ] = [ J[ , 1, 1], J[ , 2, 2], J[ , 3, 3], J[ , 1, 2], J[ , 2, 3], J[ , 1, 3] ]. } \code{AR1EIM} handles multiple time series, say \eqn{NOS}. If this happens, then it accordingly returns an array of dimension \eqn{N \times NOS \times 6 }{N x NOS x 6}. 
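For clarity, one row of this array can be reassembled into the full symmetric \eqn{3 \times 3}{3 x 3} EIM as follows (a hedged sketch; \code{row2eim()} is an invented helper, not part of \pkg{VGAM}):

row2eim <- function(j6) {  # j6 = (J11, J22, J33, J12, J23, J13)
  eim <- diag(j6[1:3])
  eim[1, 2] <- eim[2, 1] <- j6[4]
  eim[2, 3] <- eim[3, 2] <- j6[5]
  eim[1, 3] <- eim[3, 1] <- j6[6]
  eim
}
row2eim(c(1, 2, 3, 0.1, 0.2, 0.3))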
Here, \eqn{J[, k, ]}, for \eqn{k = 1, \ldots, NOS}, is a matrix of dimension \eqn{N \times 6}{N x 6}, which stores the EIMs for the \eqn{k^{th}}{k}th response, as above, i.e.,
\deqn{J[, k, ] = [ J_{[ , 1, 1]}, J_{[ , 2, 2]}, J_{[ , 3, 3]}, \ldots ], }{
J[, k, ] = [ J[ , 1, 1], J[ , 2, 2], J[ , 3, 3], \ldots ], }
the \emph{bandwidth} form, as required by \code{\link[VGAM:AR1]{AR1}}.
}
\value{
An array of dimension \eqn{N \times NOS \times 6}{N x NOS x 6}, as above.

This array stores the EIMs calculated from the joint density as a function of
\deqn{\boldsymbol{\theta} = (\mu^*, \sigma^2, \rho). }{
\theta = (mu^*, sigma^2, rho). }

Nevertheless, note that, under the VGAM/VGLM approach, the EIMs must be correspondingly calculated in terms of the linear predictors, \eqn{\boldsymbol{\eta}}{\eta}.
}
\note{
For simplicity, one can assume that the time series analyzed has a 0-mean. Consequently, where the family function \code{\link[VGAM:AR1]{AR1}} calls \code{AR1EIM} to compute the EIMs, the argument \code{p.drift} is internally set to a zero vector, whereas \code{x} is \emph{centered} by subtracting its mean value.
}
\section{Asymptotic behaviour of the algorithm}{
For large enough \eqn{n}, the EIMs, \eqn{J_n(\boldsymbol{\theta})}{Jn(\theta)}, become approximately linear in \eqn{n}. That is, for some \eqn{n_0}{n0},
\deqn{ J_n(\boldsymbol{\theta}) \equiv J_{n_0}(\boldsymbol{\theta}) + (n - n_0) \bar{J}(\boldsymbol{\theta}),~~~~~~(*) }{
Jn(\theta) -> Jn0(\theta) + (n - n0) * Jbar(\theta), (*) }
where \eqn{ \bar{J}(\boldsymbol{\theta}) }{ Jbar(\theta)} is a constant matrix.

This relationship is internally considered if a proper value of \eqn{n_0}{n0} is determined. Different ways can be adopted to find \eqn{n_0}{n0}. In \code{AR1EIM}, this is done by checking the difference between the internally estimated variances and the entered ones at \code{WNsd}. If this difference is less than \code{eps.porat} at some iteration, say at iteration \eqn{n_0}{n0}, then \code{AR1EIM} takes \eqn{ \bar{J}(\boldsymbol{\theta})}{Jbar(\theta)} as the last computed increment of \eqn{J_n(\boldsymbol{\theta})}{Jn(\theta)}, and extrapolates \eqn{J_k(\boldsymbol{\theta})}{Jk(\theta)}, for all \eqn{k \geq n_0 }{k \ge n0}, using \eqn{(*)}. Else, the algorithm will complete the iterations for \eqn{1 \leq n \leq N}{1 \le n \le N}.

Finally, note that the computational cost decreases considerably if the asymptotic relationship \eqn{(*)} is used to compute \eqn{J_k(\boldsymbol{\theta})}{Jk(\theta)}, \eqn{k \geq n_0 }{k \ge n0}. Normally, the number of operations involved in this algorithm is proportional to \eqn{N^2}.

See Porat and Friedlander (1986) for full details on the asymptotic behaviour of the algorithm.
}
\section{Warning}{
Arguments \code{WNsd}, and \code{ARcoeff1} are matrices of dimension \eqn{N \times NOS}{N x NOS}. Else, these arguments are accordingly recycled.
}
\references{
Porat, B. and Friedlander, B. (1986) Computation of the Exact Information Matrix of Gaussian Time Series with Stationary Random Components. \emph{IEEE Transactions on Acoustics, Speech, and Signal Processing}, \bold{34(1)}, 118--130.
}
\author{ V. Miranda and T. W. Yee. }
\seealso{
\code{\link[VGAM:AR1]{AR1}}.
}
\examples{
set.seed(1)
nn <- 500
ARcoeff1 <- c(0.3, 0.25)        # Will be recycled.
WNsd     <- c(exp(1), exp(1.5)) # Will be recycled.
p.drift  <- c(0, 0)             # Zero-mean Gaussian time series.
### Generate two (zero-mean) AR(1) processes ### ts1 <- p.drift[1]/(1 - ARcoeff1[1]) + arima.sim(model = list(ar = ARcoeff1[1]), n = nn, sd = WNsd[1]) ts2 <- p.drift[2]/(1 - ARcoeff1[2]) + arima.sim(model = list(ar = ARcoeff1[2]), n = nn, sd = WNsd[2]) ARdata <- matrix(cbind(ts1, ts2), ncol = 2) ### Compute the exact EIMs: TWO responses. ### ExactEIM <- AR1EIM(x = ARdata, var.arg = FALSE, p.drift = p.drift, WNsd = WNsd, ARcoeff1 = ARcoeff1) ### For response 1: head(ExactEIM[, 1 ,]) # NOTICE THAT THIS IS A (nn x 6) MATRIX! ### For response 2: head(ExactEIM[, 2 ,]) # NOTICE THAT THIS IS A (nn x 6) MATRIX! } VGAM/man/probit.Rd0000644000176200001440000000513113135276753013361 0ustar liggesusers\name{probit} \alias{probit} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Probit Link Function } \description{ Computes the probit transformation, including its inverse and the first two derivatives. } \usage{ probit(theta, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{bvalue}{ See \code{\link{Links}}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The probit link function is commonly used for parameters that lie in the unit interval. Numerical values of \code{theta} close to 0 or 1 or out of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. } \value{ For \code{deriv = 0}, the probit of \code{theta}, i.e., \code{qnorm(theta)} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{pnorm(theta)}. For \code{deriv = 1}, then the function returns \emph{d} \code{eta} / \emph{d} \code{theta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ Numerical instability may occur when \code{theta} is close to 1 or 0. One way of overcoming this is to use \code{bvalue}. In terms of the threshold approach with cumulative probabilities for an ordinal response this link function corresponds to the univariate normal distribution (see \code{\link{uninormal}}). } \seealso{ \code{\link{Links}}, \code{\link{logit}}, \code{\link{cloglog}}, \code{\link{cauchit}}. } \examples{ p <- seq(0.01, 0.99, by = 0.01) probit(p) max(abs(probit(probit(p), inverse = TRUE) - p)) # Should be 0 p <- c(seq(-0.02, 0.02, by = 0.01), seq(0.97, 1.02, by = 0.01)) probit(p) # Has NAs probit(p, bvalue = .Machine$double.eps) # Has no NAs \dontrun{p <- seq(0.01, 0.99, by = 0.01); par(lwd = (mylwd <- 2)) plot(p, logit(p), type = "l", col = "limegreen", ylab = "transformation", las = 1, main = "Some probability link functions") lines(p, probit(p), col = "purple") lines(p, cloglog(p), col = "chocolate") lines(p, cauchit(p), col = "tan") abline(v = 0.5, h = 0, lty = "dashed") legend(0.1, 4.0, c("logit", "probit", "cloglog", "cauchit"), col = c("limegreen", "purple", "chocolate", "tan"), lwd = mylwd) par(lwd = 1) } } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/bifrankcop.Rd0000644000176200001440000000603713135276753014206 0ustar liggesusers\name{bifrankcop} \alias{bifrankcop} %- Also NEED an '\alias' for EACH other topic documented here. 
\title{ Frank's Bivariate Distribution Family Function }
\description{
Estimate the association parameter of Frank's bivariate distribution by maximum likelihood estimation.
}
\usage{
bifrankcop(lapar = "loge", iapar = 2, nsimEIM = 250)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{lapar}{
Link function applied to the (positive) association parameter \eqn{\alpha}{alpha}. See \code{\link{Links}} for more choices.
}
\item{iapar}{
Numeric. Initial value for \eqn{\alpha}{alpha}. If a convergence failure occurs try assigning a different value.
}
\item{nsimEIM}{
See \code{\link{CommonVGAMffArguments}}.
}
}
\details{
The cumulative distribution function is
\deqn{P(Y_1 \leq y_1, Y_2 \leq y_2) = H_{\alpha}(y_1,y_2) =
\log_{\alpha} [1 + (\alpha^{y_1}-1)(\alpha^{y_2}-1)/ (\alpha-1)] }{%
P(Y1 <= y1, Y2 <= y2) = H_{alpha}(y1,y2) =
log_{alpha} [1 + (alpha^(y1)-1)*(alpha^(y2)-1)/ (alpha-1)] }
for \eqn{\alpha \ne 1}{alpha != 1}. Note the logarithm here is to base \eqn{\alpha}{alpha}. The support of the function is the unit square.

When \eqn{0 < \alpha < 1}{0 < alpha < 1} the probability density function \eqn{h_{\alpha}(y_1,y_2)}{h_{alpha}(y1,y2)} is symmetric with respect to the lines \eqn{y_2=y_1}{y2=y1} and \eqn{y_2=1-y_1}{y2=1-y1}. When \eqn{\alpha > 1}{alpha > 1} then \eqn{h_{\alpha}(y_1,y_2) = h_{1/\alpha}(1-y_1,y_2)}{h_{alpha}(y1,y2) = h_{1/alpha}(1-y1,y2)}.

If \eqn{\alpha=1}{alpha=1} then \eqn{H(y_1,y_2) = y_1 y_2}{H(y1,y2)=y1*y2}, i.e., uniform on the unit square. As \eqn{\alpha}{alpha} approaches 0 then \eqn{H(y_1,y_2) = \min(y_1,y_2)}{H(y1,y2)=min(y1,y2)}. As \eqn{\alpha}{alpha} approaches infinity then \eqn{H(y_1,y_2) = \max(0, y_1+y_2-1)}{H(y1,y2)=max(0,y1+y2-1)}.

The default is to use Fisher scoring implemented using \code{\link{rbifrankcop}}. For intercept-only models an alternative is to set \code{nsimEIM=NULL} so that a variant of Newton-Raphson is used.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}.
}
%% improve the references
\references{
Genest, C. (1987) Frank's family of bivariate distributions. \emph{Biometrika}, \bold{74}, 549--555.
}
\author{ T. W. Yee }
\note{
The response must be a two-column matrix. Currently, the fitted value is a matrix with two columns and values equal to a half. This is because the marginal distributions correspond to a standard uniform distribution.
}
\seealso{
\code{\link{rbifrankcop}}, \code{\link{bifgmcop}}, \code{\link{simulate.vlm}}.
}
\examples{
\dontrun{
ymat <- rbifrankcop(n = 2000, apar = exp(4))
plot(ymat, col = "blue")
fit <- vglm(ymat ~ 1, fam = bifrankcop, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
vcov(fit)
head(fitted(fit))
summary(fit)
}
}
\keyword{models}
\keyword{regression}
VGAM/man/biclaytoncopUC.Rd0000644000176200001440000000477513135276753015005 0ustar liggesusers\name{Biclaytoncop}
\alias{dbiclaytoncop}
%\alias{pbiclaytoncop}
\alias{rbiclaytoncop}
\title{Clayton Copula (Bivariate) Distribution}
\description{
Density and random generation for the (one parameter) bivariate Clayton copula distribution.
}
\usage{
dbiclaytoncop(x1, x2, apar = 0, log = FALSE)
rbiclaytoncop(n, apar = 0)
}
%pbiclaytoncop(q1, q2, rho = 0)
\arguments{
\item{x1, x2}{vector of quantiles. The \code{x1} and \code{x2} should both be in the interval \eqn{(0,1)}. }
\item{n}{number of observations. Same as \code{\link[stats]{rnorm}}. }
\item{apar}{the association parameter. Should be in the interval \eqn{[0, \infty)}{[0, Inf)}. The default corresponds to independence. }
\item{log}{
Logical. If \code{TRUE} then the logarithm is returned.

% Same as \code{\link[stats]{rnorm}}.
} } \value{ \code{dbiclaytoncop} gives the density at point (\code{x1},\code{x2}), \code{rbiclaytoncop} generates random deviates (a two-column matrix). % \code{pbiclaytoncop} gives the distribution function, and } \references{ % A Model for Association in Bivariate Survival Data Clayton, D. (1982) A model for association in bivariate survival data. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{44}, 414--422. } \author{ R. Feyter and T. W. Yee } \details{ See \code{\link{biclaytoncop}}, the \pkg{VGAM} family functions for estimating the parameter by maximum likelihood estimation, for the formula of the cumulative distribution function and other details. } \note{ \code{dbiclaytoncop()} does not yet handle \code{x1 = 0} and/or \code{x2 = 0}. %Yettodo: allow \code{x1} and/or \code{x2} to have values 1, %and to allow any values for \code{x1} and/or \code{x2} to be %outside the unit square. } \seealso{ \code{\link{biclaytoncop}}, \code{\link{binormalcop}}, \code{\link{binormal}}. } \examples{ \dontrun{ edge <- 0.01 # A small positive value N <- 101; x <- seq(edge, 1.0 - edge, len = N); Rho <- 0.7 ox <- expand.grid(x, x) zedd <- dbiclaytoncop(ox[, 1], ox[, 2], apar = Rho, log = TRUE) par(mfrow = c(1, 2)) contour(x, x, matrix(zedd, N, N), col = "blue", labcex = 1.5, las = 1) plot(rbiclaytoncop(1000, 2), col = "blue", las = 1) } } \keyword{distribution} %plot(r <- rbiclaytoncop(n = 3000, apar = exp(2)), col = "blue") %par(mfrow = c(1, 2)) %hist(r[, 1]) # Should be uniform %hist(r[, 2]) # Should be uniform VGAM/man/rdiric.Rd0000644000176200001440000000406613135276753013344 0ustar liggesusers\name{rdiric} \alias{rdiric} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Dirichlet distribution } \description{ Generates Dirichlet random variates. } \usage{ rdiric(n, shape, dimension = NULL, is.matrix.shape = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{n}{ number of observations. Note it has two meanings, see \code{is.matrix.shape} below. } \item{shape}{ the shape parameters. These must be positive. If \code{dimension} is specifed, values are recycled if necessary to length \code{dimension}. } \item{dimension}{ the dimension of the distribution. If \code{dimension} is not numeric then it is taken to be \code{length(shape)} (or \code{ncol(shape)} if \code{is.matrix.shape == TRUE}). } \item{is.matrix.shape}{ Logical. If \code{TRUE} then \code{shape} must be a matrix, and then \code{n} is no longer the number of rows of the answer but the answer has \code{n * nrow(shape)} rows. If \code{FALSE} (the default) then \code{shape} is a vector and each of the \code{n} rows of the answer have \code{shape} as its shape parameters. } } \details{ This function is based on a relationship between the gamma and Dirichlet distribution. Random gamma variates are generated, and then Dirichlet random variates are formed from these. } \value{ A \code{n} by \code{dimension} matrix of Dirichlet random variates. Each element is positive, and each row will sum to unity. If \code{shape} has names then these will become the column names of the answer. } \references{ Lange, K. (2002) \emph{Mathematical and Statistical Methods for Genetic Analysis}, 2nd ed. New York: Springer-Verlag. } \author{ Thomas W. Yee } \seealso{ \code{\link{dirichlet}} is a \pkg{VGAM} family function for fitting a Dirichlet distribution to data. 
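The gamma-to-Dirichlet construction described under Details can be sketched directly (hedged; this is not the package's internal code):

set.seed(3)
shape <- c(3, 1, 4)
g <- matrix(rgamma(5 * length(shape), shape = rep(shape, each = 5)),
            nrow = 5)   # Each column uses one shape value
g / rowSums(g)          # Each row is one Dirichlet(3, 1, 4) variate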
}
\examples{
ddata <- data.frame(rdiric(n = 1000, shape = c(y1 = 3, y2 = 1, y3 = 4)))
fit <- vglm(cbind(y1, y2, y3) ~ 1, dirichlet, data = ddata, trace = TRUE)
Coef(fit)
coef(fit, matrix = TRUE)
}
\keyword{distribution}
VGAM/man/is.smart.Rd0000644000176200001440000000304213135276753013621 0ustar liggesusers\name{is.smart}
\alias{is.smart}
\title{ Test For a Smart Object }
\description{
  Tests an object to see if it is smart.
}
\usage{
is.smart(object)
}
\arguments{
  \item{object}{
  a function or a fitted model.
  }
}
\value{
  Returns \code{TRUE} or \code{FALSE}, according to whether the
  \code{object} is smart or not.
}
\details{
  If \code{object} is a function then this function looks to see
  whether \code{object} has the logical attribute \code{"smart"}.
  If so then this is returned, else \code{FALSE}.

  If \code{object} is a fitted model then this function looks to see
  whether \code{object@smart.prediction} or
  \code{object\$smart.prediction} exists.
  If it does and it is not equal to \code{list(smart.arg=FALSE)} then
  a \code{TRUE} is returned, else \code{FALSE}.
  The reason for this is that, e.g., \code{lm(...,smart=FALSE)}
  and \code{vglm(...,smart=FALSE)} will return such a specific list.

  Writers of smart functions have to manually assign this attribute to
  their smart function after it has been written.
}
\examples{
is.smart(sm.min1)  # TRUE
is.smart(sm.poly)  # TRUE
library(splines)
is.smart(sm.bs)  # TRUE
is.smart(sm.ns)  # TRUE
is.smart(tan)  # FALSE
\dontrun{
udata <- data.frame(x2 = rnorm(9))
fit1 <- vglm(rnorm(9) ~ x2, uninormal, data = udata)
is.smart(fit1)  # TRUE
fit2 <- vglm(rnorm(9) ~ x2, uninormal, data = udata, smart = FALSE)
is.smart(fit2)  # FALSE
fit2@smart.prediction
}
}
%\keyword{smart}
\keyword{models}
\keyword{regression}
\keyword{programming}
% Converted by Sd2Rd version 1.10.6.1.
VGAM/man/machinists.Rd0000644000176200001440000000311113135276753014220 0ustar liggesusers\name{machinists}
\alias{machinists}
\docType{data}
\title{ Machinists Accidents }
\description{
  A small count data set involving 414 machinists in a three-month
  study of accidents, from around the end of WWI.
}
\usage{
data(machinists)
}
\format{
  A data frame with the following variables.
  \describe{
    \item{accidents}{
      The number of accidents
    }
    \item{ofreq}{
      Observed frequency, i.e., the number of machinists
      with that many accidents
    }
  }
}
\details{
  The data was collected over a period of three months.
  There were 414 machinists in total.
  Also, there were data collected over six months,
  but these are not given here.
}
\source{
  Incidence of Industrial Accidents.
  Report No. 4 (Industrial Fatigue Research Board),
  Stationery Office, London, 1919.
}
\references{
  Greenwood, M. and Yule, G. U. (1920).
  An Inquiry into the Nature of Frequency Distributions Representative
  of Multiple Happenings with Particular Reference to the Occurrence
  of Multiple Attacks of Disease or of Repeated Accidents.
  \emph{Journal of the Royal Statistical Society},
  \bold{83}, 255--279.
}
\seealso{
  \code{\link[VGAM]{negbinomial}},
  \code{\link[VGAM]{poissonff}}.
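% A hypothetical sketch (not from the original docs) of how these links
% might be used with this data set, treating ofreq as prior weights
% (that treatment is an assumption here):
% fit <- vglm(accidents ~ 1, negbinomial, weights = ofreq,
%             data = machinists)
% Coef(fit)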
}
\examples{
machinists
mean(with(machinists, rep(accidents, times = ofreq)))
var(with(machinists, rep(accidents, times = ofreq)))
\dontrun{ barplot(with(machinists, ofreq),
          names.arg = as.character(with(machinists, accidents)),
          main = "Machinists accidents",
          col = "lightblue", las = 1,
          ylab = "Frequency", xlab = "accidents")
}
}
\keyword{datasets}
%
%
VGAM/man/kendall.tau.Rd0000644000176200001440000000556113135276753014273 0ustar liggesusers\name{kendall.tau}
\alias{kendall.tau}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Kendall's Tau Statistic }
\description{
  Computes Kendall's Tau, which is a rank-based correlation measure,
  between two vectors.
}
\usage{
kendall.tau(x, y, exact = FALSE, max.n = 3000)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{x, y}{
  Numeric vectors.
  Must be of equal length.
  Ideally their values are continuous and not too discrete.
  Let \code{length(x)} be \eqn{N}, say.
  }
  \item{exact}{
  Logical. If \code{TRUE} then the exact value is computed.
  }
  \item{max.n}{
  Numeric. If \code{exact = FALSE} and \code{length(x)} is more than
  \code{max.n} then a random sample of \code{max.n} pairs is chosen.
  }
}
\details{
  Kendall's tau is a measure of dependency in a bivariate distribution.
  Loosely, two random variables are \emph{concordant} if large values
  of one random variable are associated with large values of the other
  random variable.
  Similarly, two random variables are \emph{discordant} if large values
  of one random variable are associated with small values of the other
  random variable.
  More formally, if \code{(x[i] - x[j])*(y[i] - y[j]) > 0} then that
  comparison is concordant \eqn{(i \neq j)}.
  And if \code{(x[i] - x[j])*(y[i] - y[j]) < 0} then that comparison
  is discordant \eqn{(i \neq j)}.

  Out of \code{choose(N, 2)} comparisons, let \eqn{c} and \eqn{d} be
  the number of concordant and discordant pairs.
  Then Kendall's tau can be estimated by \eqn{(c-d)/(c+d)}.
  If there are ties then half the ties are deemed concordant and half
  discordant so that \eqn{(c-d)/(c+d+t)} is used.
}
\value{
  Kendall's tau, which lies between \eqn{-1} and \eqn{1}.
}
%\references{
%}
%\author{
% T. W. Yee.
%}
%\note{
%This function has not been tested thoroughly.
%}
%% ~Make other sections like Warning with \section{Warning }{....} ~
\section{Warning}{
  If \code{length(x)} is large then the cost is \eqn{O(N^2)}, which is
  expensive! Under these circumstances it is not advisable to set
  \code{exact = TRUE} or \code{max.n} to a very large number.
}
\seealso{
  \code{\link{binormalcop}},
  \code{\link[stats]{cor}}.
}
\examples{
N <- 5000; x <- 1:N; y <- runif(N)
true.rho <- -0.8
ymat <- rbinorm(N, cov12 = true.rho)  # Bivariate normal, aka N_2
x <- ymat[, 1]
y <- ymat[, 2]
\dontrun{plot(x, y, col = "blue")}
kendall.tau(x, y)  # A random sample is taken here
kendall.tau(x, y)  # A random sample is taken here
kendall.tau(x, y, exact = TRUE)  # Costly if length(x) is large
kendall.tau(x, y, max.n = N)     # Same as exact = TRUE
(rhohat <- sin(kendall.tau(x, y) * pi / 2))  # This formula holds for N_2 actually
true.rho  # rhohat should be near this value
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{math}
VGAM/man/rcqo.Rd0000644000176200001440000003353213135276753013034 0ustar liggesusers\name{rcqo}
\alias{rcqo}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Constrained Quadratic Ordination }
\description{
  Random generation for constrained quadratic ordination (CQO).
} \usage{ rcqo(n, p, S, Rank = 1, family = c("poisson", "negbinomial", "binomial-poisson", "Binomial-negbinomial", "ordinal-poisson", "Ordinal-negbinomial", "gamma2"), eq.maximums = FALSE, eq.tolerances = TRUE, es.optimums = FALSE, lo.abundance = if (eq.maximums) hi.abundance else 10, hi.abundance = 100, sd.latvar = head(1.5/2^(0:3), Rank), sd.optimums = ifelse(es.optimums, 1.5/Rank, 1) * ifelse(scale.latvar, sd.latvar, 1), sd.tolerances = 0.25, Kvector = 1, Shape = 1, sqrt.arg = FALSE, log.arg = FALSE, rhox = 0.5, breaks = 4, seed = NULL, optimums1.arg = NULL, Crow1positive = TRUE, xmat = NULL, scale.latvar = TRUE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{n}{ Number of sites. It is denoted by \eqn{n} below. } \item{p}{ Number of environmental variables, including an intercept term. It is denoted by \eqn{p} below. Must be no less than \eqn{1+R} in value. } \item{S}{ Number of species. It is denoted by \eqn{S} below. } \item{Rank}{ The rank or the number of latent variables or true dimension of the data on the reduced space. This must be either 1, 2, 3 or 4. It is denoted by \eqn{R}. } \item{family}{ What type of species data is to be returned. The first choice is the default. If binomial then a 0 means absence and 1 means presence. If ordinal then the \code{breaks} argument is passed into the \code{breaks} argument of \code{\link[base]{cut}}. Note that either the Poisson or negative binomial distributions are used to generate binomial and ordinal data, and that an upper-case choice is used for the negative binomial distribution (this makes it easier for the user). If \code{"gamma2"} then this is the 2-parameter gamma distribution. % , % and the resulting values are % 1,2,\ldots,\code{breaks} if \code{breaks} is a single integer zz % else zz. } \item{eq.maximums}{ Logical. Does each species have the same maximum? See arguments \code{lo.abundance} and \code{hi.abundance}. } \item{eq.tolerances}{ Logical. Does each species have the same tolerance? If \code{TRUE} then the common value is 1 along every latent variable, i.e., all species' tolerance matrices are the order-\eqn{R} identity matrix. } \item{es.optimums}{ Logical. Do the species have equally spaced optimums? If \code{TRUE} then the quantity \eqn{S^{1/R}}{S^(1/R)} must be an integer with value 2 or more. That is, there has to be an appropriate number of species in total. This is so that a grid of optimum values is possible in \eqn{R}-dimensional latent variable space in order to place the species' optimums. Also see the argument \code{sd.tolerances}. } \item{lo.abundance, hi.abundance}{ Numeric. These are recycled to a vector of length \eqn{S}. The species have a maximum between \code{lo.abundance} and \code{hi.abundance}. That is, at their optimal environment, the mean abundance of each species is between the two componentwise values. If \code{eq.maximums} is \code{TRUE} then \code{lo.abundance} and \code{hi.abundance} must have the same values. If \code{eq.maximums} is \code{FALSE} then the logarithm of the maximums are uniformly distributed between \code{log(lo.abundance)} and \code{log(hi.abundance)}. } \item{sd.latvar}{ Numeric, of length \eqn{R} (recycled if necessary). Site scores along each latent variable have these standard deviation values. This must be a decreasing sequence of values because the first ordination axis contains the greatest spread of the species' site scores, followed by the second axis, followed by the third axis, etc. 
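% An illustrative call (hypothetical values, not from the original docs)
% showing a valid decreasing sequence for a rank-2 model:
% mydata <- rcqo(n = 100, p = 5, S = 8, Rank = 2, sd.latvar = c(1.5, 0.75))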
}
  \item{sd.optimums}{
  Numeric, of length \eqn{R} (recycled if necessary).
  If \code{es.optimums = FALSE} then, for the \eqn{r}th latent
  variable axis, the optimums of the species are generated from a
  normal distribution centered about 0.
  If \code{es.optimums = TRUE} then the \eqn{S} optimums are equally
  spaced about 0 along every latent variable axis.
  Regardless of the value of \code{es.optimums}, the optimums are then
  scaled to give standard deviation \code{sd.optimums[r]}.
  }
  \item{sd.tolerances}{
  Numeric.
  If \code{eq.tolerances = FALSE} then, for the \eqn{r}th latent
  variable, the species' tolerances are chosen from a normal
  distribution with mean 1 and standard deviation
  \code{sd.tolerances[r]}.
  However, the first species \code{y1} has its tolerance matrix set
  equal to the order-\eqn{R} identity matrix.
  All tolerance matrices for all species are diagonal in this function.
  This argument is ignored if \code{eq.tolerances} is \code{TRUE},
  otherwise it is recycled to length \eqn{R} if necessary.
  }
  \item{Kvector}{
  A vector of positive \eqn{k} values (recycled to length \eqn{S}
  if necessary) for the negative binomial distribution
  (see \code{\link{negbinomial}} for details).
  Note that a natural default value does not exist;
  however, the default value here is probably a realistic one, and
  that for large values of \eqn{\mu} one has
  \eqn{Var(Y) = \mu^2 / k}{Var(Y) = mu^2 / k} approximately.
  }
  \item{Shape}{
  A vector of positive \eqn{\lambda}{lambda} values (recycled to
  length \eqn{S} if necessary) for the 2-parameter gamma distribution
  (see \code{\link{gamma2}} for details).
  Note that a natural default value does not exist;
  however, the default value here is probably a realistic one, and
  that \eqn{Var(Y) = \mu^2 / \lambda}{Var(Y) = mu^2 / lambda}.
  }
  \item{sqrt.arg}{
  Logical. Take the square-root of the negative binomial counts?
  Assigning \code{sqrt.arg = TRUE} when \code{family="negbinomial"}
  means that the resulting species data can be considered very crudely
  to be approximately Poisson distributed.
  They will not be integers in general but are much easier (fewer
  numerical problems) to estimate using something like
  \code{cqo(..., family="poissonff")}.
  }
  \item{log.arg}{
  Logical. Take the logarithm of the gamma random variates?
  Assigning \code{log.arg = TRUE} when \code{family="gamma2"} means
  that the resulting species data can be considered very crudely to be
  approximately Gaussian distributed about its (quadratic) mean.
  The result is that it is much easier (fewer numerical problems) to
  estimate using something like \code{cqo(..., family="gaussianff")}.
  }
  \item{rhox}{
  Numeric, less than 1 in absolute value.
  The correlation between the environmental variables.
  The correlation matrix is a matrix of 1's along the diagonal and
  \code{rhox} in the off-diagonals.
  Note that each environmental variable is normally distributed with
  mean 0. The standard deviation of each environmental variable is
  chosen so that the site scores have the determined standard
  deviation, as given by argument \code{sd.latvar}.
  }
  \item{breaks}{
  If \code{family} is assigned an ordinal value then this argument is
  used to define the cutpoints. It is fed into the \code{breaks}
  argument of \code{\link[base]{cut}}.
  }
  \item{seed}{
  If given, it is passed into \code{\link[base:Random]{set.seed}}.
  This argument can be used to obtain reproducible results.
  If set, the value is saved as the \code{"seed"} attribute of the
  returned value.
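% A sketch (not from the original examples) of the reproducibility this
% argument provides:
% d1 <- rcqo(n = 50, p = 4, S = 5, seed = 123)
% d2 <- rcqo(n = 50, p = 4, S = 5, seed = 123)
% all.equal(d1, d2)  # Should be TRUE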
  The default will not change the random generator state, and return
  \code{\link[base:Random]{.Random.seed}} as the \code{"seed"}
  attribute.
  }
  \item{optimums1.arg}{
  If assigned and \code{Rank = 1} then these are the explicit
  optimums.
  Recycled to length \code{S}.
  }
  \item{Crow1positive}{
  See \code{\link{qrrvglm.control}} for details.
  }
  \item{xmat}{
  The \eqn{n} by \eqn{p-1} environmental matrix can be inputted.
  }
  \item{scale.latvar}{
  Logical. If \code{FALSE} the argument \code{sd.latvar} is ignored
  and no scaling of the latent variable values is performed.
  }
}
\details{
  This function generates data coming from a constrained quadratic
  ordination (CQO) model. In particular, data coming from a
  \emph{species packing model} can be generated with this function.
  The species packing model states that species have equal tolerances,
  equal maximums, and optimums which are uniformly distributed over
  the latent variable space. This can be achieved by assigning the
  arguments \code{es.optimums = TRUE}, \code{eq.maximums = TRUE},
  \code{eq.tolerances = TRUE}.

  At present, the Poisson and negative binomial abundances are
  generated first using \code{lo.abundance} and \code{hi.abundance},
  and if \code{family} is binomial or ordinal then it is converted
  into these forms.

  In CQO theory the \eqn{n} by \eqn{p} matrix \eqn{X} is partitioned
  into two parts \eqn{X_1} and \eqn{X_2}. The matrix \eqn{X_2}
  contains the `real' environmental variables whereas the variables
  in \eqn{X_1} are just for adjustment purposes; they contain the
  intercept terms and other variables that one wants to adjust for
  when (primarily) looking at the variables in \eqn{X_2}.
  This function has \eqn{X_1} only being a matrix of ones, i.e.,
  containing an intercept only.
}
\value{
  An \eqn{n} by \eqn{p-1+S} data frame with components and attributes.
  In the following the attributes are labelled with double quotes.
  \item{x2, x3, x4, \ldots, xp}{
  The environmental variables. This makes up the \eqn{n} by
  \eqn{p-1} \eqn{X_2} matrix.
  Note that \code{x1} is not present; it is effectively a vector of
  ones since it corresponds to an intercept term when
  \code{\link{cqo}} is applied to the data.
  }
  \item{y1, y2, y3, \ldots, yS}{
  The species data. This makes up the \eqn{n} by \eqn{S} matrix
  \eqn{Y}. This will be of the form described by the argument
  \code{family}.
  }
  \item{"concoefficients"}{
  The \eqn{p-1} by \eqn{R} matrix of constrained coefficients (or
  canonical coefficients). These are also known as weights or
  loadings.
  }
  \item{"formula"}{
  The formula involving the species and environmental variable names.
  This can be used directly in the \code{formula} argument of
  \code{\link{cqo}}.
  }
  \item{"log.maximums"}{
  The \eqn{S}-vector of species' maximums, on a log scale.
  These are uniformly distributed between \code{log(lo.abundance)}
  and \code{log(hi.abundance)}.
  }
  \item{"latvar"}{
  The \eqn{n} by \eqn{R} matrix of site scores. Each successive
  column (latent variable) has sample standard deviation equal to
  successive values of \code{sd.latvar}.
  }
  \item{"eta"}{
  The linear/additive predictor value.
  }
  \item{"optimums"}{
  The \eqn{S} by \eqn{R} matrix of species' optimums.
  }
  \item{"tolerances"}{
  The \eqn{S} by \eqn{R} matrix of species' tolerances.
  These are the square root of the diagonal elements of the tolerance
  matrices (recall that all tolerance matrices are restricted to being
  diagonal in this function).
} Other attributes are \code{"break"}, \code{"family"}, \code{"Rank"}, \code{"lo.abundance"}, \code{"hi.abundance"}, \code{"eq.tolerances"}, \code{"eq.maximums"}, \code{"seed"} as used. } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. ter Braak, C. J. F. and Prentice, I. C. (1988) A theory of gradient analysis. \emph{Advances in Ecological Research}, \bold{18}, 271--317. } \author{ T. W. Yee } \note{ This function is under development and is not finished yet. There may be a few bugs. Yet to do: add an argument that allows absences to be equal to the first level if ordinal data is requested. } \seealso{ \code{\link{cqo}}, \code{\link{qrrvglm.control}}, \code{\link[base]{cut}}, \code{\link{binomialff}}, \code{\link{poissonff}}, \code{\link{negbinomial}}, \code{\link{gamma2}}, \code{\link{gaussianff}}. } \examples{ \dontrun{ # Example 1: Species packing model: n <- 100; p <- 5; S <- 5 mydata <- rcqo(n, p, S, es.opt = TRUE, eq.max = TRUE) names(mydata) (myform <- attr(mydata, "formula")) fit <- cqo(myform, poissonff, mydata, Bestof = 3) # eq.tol = TRUE matplot(attr(mydata, "latvar"), mydata[,-(1:(p-1))], col = 1:S) persp(fit, col = 1:S, add = TRUE) lvplot(fit, lcol = 1:S, y = TRUE, pcol = 1:S) # The same plot as above # Compare the fitted model with the 'truth' concoef(fit) # The fitted model attr(mydata, "concoefficients") # The 'truth' c(apply(attr(mydata, "latvar"), 2, sd), apply(latvar(fit), 2, sd)) # Both values should be approx equal # Example 2: negative binomial data fitted using a Poisson model: n <- 200; p <- 5; S <- 5 mydata <- rcqo(n, p, S, fam = "negbin", sqrt = TRUE) myform <- attr(mydata, "formula") fit <- cqo(myform, fam = poissonff, dat = mydata) # I.tol = TRUE, lvplot(fit, lcol = 1:S, y = TRUE, pcol = 1:S) # Compare the fitted model with the 'truth' concoef(fit) # The fitted model attr(mydata, "concoefficients") # The 'truth' # Example 3: gamma2 data fitted using a Gaussian model: n <- 200; p <- 5; S <- 3 mydata <- rcqo(n, p, S, fam = "gamma2", log.arg = TRUE) fit <- cqo(attr(mydata, "formula"), fam = gaussianff, data = mydata) # I.tol = TRUE, matplot(attr(mydata, "latvar"), exp(mydata[, -(1:(p-1))]), col = 1:S) # 'raw' data # Fitted model to transformed data: lvplot(fit, lcol = 1:S, y = TRUE, pcol = 1:S) # Compare the fitted model with the 'truth' concoef(fit) # The fitted model attr(mydata, "concoefficients") # The 'truth' } } \keyword{distribution} \keyword{datagen} VGAM/man/zipfUC.Rd0000644000176200001440000000272013135276753013263 0ustar liggesusers\name{Zipf} \alias{Zipf} \alias{dzipf} \alias{pzipf} \alias{qzipf} \alias{rzipf} \title{The Zipf Distribution} \description{ Density, distribution function, quantile function and random generation for the Zipf distribution. } \usage{ dzipf(x, N, shape, log = FALSE) pzipf(q, N, shape, log.p = FALSE) qzipf(p, N, shape) rzipf(n, N, shape) } \arguments{ \item{x, q, p, n}{Same as \code{\link[stats]{Poisson}}. } \item{N, shape}{ the number of elements, and the exponent characterizing the distribution. See \code{\link{zipf}} for more details. } \item{log, log.p}{ Same meaning as in \code{\link[stats]{Normal}}. } } \value{ \code{dzipf} gives the density, \code{pzipf} gives the cumulative distribution function, \code{qzipf} gives the quantile function, and \code{rzipf} generates random deviates. } \author{ T. W. 
Yee }
\details{
  This is a finite version of the zeta distribution.
  See \code{\link{zetaff}} for more details.
  In general, these functions run slower and slower as \code{N}
  increases.
}
%\note{
%
%}
\seealso{
  \code{\link{zipf}}.
}
\examples{
N <- 10; shape <- 0.5; y <- 1:N
proby <- dzipf(y, N = N, shape = shape)
\dontrun{ plot(proby ~ y, type = "h", col = "blue", ylab = "Probability",
     ylim = c(0, 0.2),
     main = paste("Zipf(N = ",N,", shape = ",shape,")", sep = ""),
     lwd = 2, las = 1) }
sum(proby)  # Should be 1
max(abs(cumsum(proby) - pzipf(y, N = N, shape = shape)))  # Should be 0
}
\keyword{distribution}
VGAM/man/A1A2A3.Rd0000644000176200001440000000435113135276753012675 0ustar liggesusers\name{A1A2A3}
\alias{A1A2A3}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ The A1A2A3 Blood Group System }
\description{
  Estimates the three independent parameters of the A1A2A3 blood
  group system.
}
\usage{
A1A2A3(link = "logit", inbreeding = FALSE, ip1 = NULL, ip2 = NULL, iF = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{link}{
  Link function applied to \code{p1}, \code{p2} and \code{f}.
  See \code{\link{Links}} for more choices.
  }
  \item{inbreeding}{
  Logical. Is there inbreeding?
% Logical. Is the HWE assumption to be made?
  }
  \item{ip1, ip2, iF}{
  Optional initial values for \code{p1}, \code{p2} and \code{f}.
  }
}
\details{
  The parameters \code{p1} and \code{p2} are probabilities, so that
  \code{p3=1-p1-p2} is the third probability.
  The parameter \code{f} is the third independent parameter if
  \code{inbreeding = TRUE}.
  If \code{inbreeding = FALSE} then \eqn{f = 0} and Hardy-Weinberg
  Equilibrium (HWE) is assumed.
}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}}
  and \code{\link{vgam}}.
}
\references{
Lange, K. (2002)
\emph{Mathematical and Statistical Methods for Genetic Analysis},
2nd ed.
New York: Springer-Verlag.
}
\author{ T. W. Yee }
\note{
  The input can be a 6-column matrix of counts, with columns
  corresponding to \code{A1A1}, \code{A1A2}, \code{A1A3}, \code{A2A2},
  \code{A2A3}, \code{A3A3} (in order).
  Alternatively, the input can be a 6-column matrix of proportions
  (so each row adds to 1) and the \code{weights} argument is used to
  specify the total number of counts for each row.
}
\seealso{
  \code{\link{AA.Aa.aa}},
  \code{\link{AB.Ab.aB.ab}},
  \code{\link{ABO}},
  \code{\link{MNSs}}.
% \code{\link{AB.Ab.aB.ab2}},
}
\examples{
ymat <- cbind(108, 196, 429, 143, 513, 559)
fit <- vglm(ymat ~ 1, A1A2A3(link = probit), trace = TRUE, crit = "coef")
fit <- vglm(ymat ~ 1, A1A2A3(link = logit, ip1 = 0.3, ip2 = 0.3, iF = 0.02),
            trace = TRUE, crit = "coef")
Coef(fit)  # Estimated p1 and p2
rbind(ymat, sum(ymat) * fitted(fit))
sqrt(diag(vcov(fit)))
}
\keyword{models}
\keyword{regression}
VGAM/man/get.smart.prediction.Rd0000644000176200001440000000163713135276753016134 0ustar liggesusers\name{get.smart.prediction}
\alias{get.smart.prediction}
\title{ Retrieves ``.smart.prediction'' }
\description{
  Retrieves \code{.smart.prediction} from \code{smartpredenv}.
}
\usage{
get.smart.prediction()
}
\value{
  Returns with the list \code{.smart.prediction} from
  \code{smartpredenv}.
}
\details{
  A smart modelling function such as \code{\link[stats]{lm}} allows
  smart functions such as \code{\link[VGAM]{sm.bs}} to write to a
  data structure called \code{.smart.prediction} in
  \code{smartpredenv}.
  At the end of fitting, \code{get.smart.prediction} retrieves this
  data structure.
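% A schematic sketch (a hypothetical wrapper, mirroring the example
% below) of where this call sits inside a smart fitting function:
% myfit <- function(formula, data) {
%   fit <- lm(formula, data = data)      # Smart functions write here
%   fit$smart <- get.smart.prediction()  # Retrieve what they wrote
%   fit
% }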
  It is then attached to the object, and used for prediction later.
}
\seealso{
  \code{\link{get.smart}},
  \code{\link[stats]{lm}}.
}
\examples{
\dontrun{
fit$smart <- get.smart.prediction()  # Put at the end of lm()
}
}
%\keyword{smart}
\keyword{models}
\keyword{regression}
\keyword{programming}
% Converted by Sd2Rd version 1.10.
VGAM/man/oilogUC.Rd0000644000176200001440000000716613135276753013425 0ustar liggesusers\name{Oilog}
\alias{Oilog}
\alias{doilog}
\alias{poilog}
\alias{qoilog}
\alias{roilog}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ One-Inflated Logarithmic Distribution }
\description{
  Density, distribution function, quantile function and random
  generation for the one-inflated logarithmic distribution with
  parameter \code{pstr1}.
}
\usage{
doilog(x, shape, pstr1 = 0, log = FALSE)
poilog(q, shape, pstr1 = 0)
qoilog(p, shape, pstr1 = 0)
roilog(n, shape, pstr1 = 0)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{x, q, p, n}{Same as \code{\link[stats]{Uniform}}.}
  \item{shape}{ Vector of parameters that lie in \eqn{(0,1)}. }
  \item{pstr1}{
  Probability of a structural one (i.e., ignoring the logarithmic
  distribution), called \eqn{\phi}{phi}.
  The default value of \eqn{\phi = 0}{phi = 0} corresponds to the
  response having an ordinary logarithmic distribution.
  }
  \item{log}{Same as \code{\link[stats]{Uniform}}.}
}
\details{
  The random variable \eqn{Y} equals 1 with probability
  \eqn{\phi}{phi}, and is \eqn{Logarithmic(shape)} with probability
  \eqn{1-\phi}{1-phi}.
  Thus
  \deqn{P(Y=1) = \phi + (1-\phi) P(W=1)}{%
        P(Y=1) = phi + (1-phi) * P(W=1)}
  where \eqn{W} is distributed as a \eqn{Logarithmic(shape)} random
  variable.
  The \pkg{VGAM} family function \code{\link{oilog}} estimates
  \eqn{\phi}{phi} by MLE.
}
\value{
  \code{doilog} gives the density,
  \code{poilog} gives the distribution function,
  \code{qoilog} gives the quantile function, and
  \code{roilog} generates random deviates.
}
%\references{ }
\author{ T. W. Yee }
\note{
  The argument \code{pstr1} is recycled to the required length, and
  usually has values which lie in the interval \eqn{[0,1]}.

  These functions actually allow for the \emph{one-deflated
  logarithmic} distribution. Here, \code{pstr1} is also permitted to
  lie in the interval
  \code{[-dlog(1, shape) / (1 - dlog(1, shape)), 0]}.
  The resulting probability of a unit count is \emph{less than} the
  nominal logarithmic value, and the use of \code{pstr1} to stand for
  the probability of a structural 1 loses its meaning.
%
%
%
  When \code{pstr1} equals
  \code{-dlog(1, shape) / (1 - dlog(1, shape))}
  this corresponds to the 1-truncated logarithmic distribution.
}
\seealso{
  \code{\link{oilog}},
  \code{\link{rlog}},
  \code{\link{logff}},
  \code{\link{Otlog}}.
% \code{\link{zipf}}.
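% A quick sanity sketch (not part of the original examples): the
% one-inflated probabilities should still sum to (nearly) unity:
% sum(doilog(1:1000, shape = 0.5, pstr1 = 0.3))  # Should be close to 1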
} \examples{ shape <- 0.5; pstr1 <- 0.3; x <- (-1):7 (ii <- doilog(x, shape, pstr1 = pstr1)) max(abs(poilog(1:200, shape) - cumsum(shape^(1:200) / (-(1:200) * log1p(-shape))))) # Should be 0 \dontrun{ x <- 0:10 par(mfrow = c(2, 1)) # One-Inflated logarithmic barplot(rbind(doilog(x, shape, pstr1 = pstr1), dlog(x, shape)), beside = TRUE, col = c("blue", "orange"), main = paste("OILogff(", shape, ", pstr1 = ", pstr1, ") (blue) vs", " Logff(", shape, ") (orange)", sep = ""), names.arg = as.character(x)) deflat.limit <- -dlog(1, shape) / plog(1, shape, lower.tail = FALSE) newpstr1 <- round(deflat.limit, 3) + 0.001 # Inside but near the boundary barplot(rbind(doilog(x, shape, pstr1 = newpstr1), dlog(x, shape)), beside = TRUE, col = c("blue","orange"), main = paste("ODLogff(", shape, ", pstr1 = ", newpstr1, ") (blue) vs", " Logff(", shape, ") (orange)", sep = ""), names.arg = as.character(x)) } } \keyword{distribution} %qoilog(p, shape, pstr1 = 0) %roilog(n, shape, pstr1 = 0) % table(roilog(100, shape, pstr1 = pstr1)) % round(doilog(1:10, shape, pstr1 = pstr1) * 100) # Should be similar VGAM/man/polonoUC.Rd0000644000176200001440000001023413135276753013620 0ustar liggesusers\name{Polono} \alias{Polono} \alias{dpolono} \alias{ppolono} %\alias{qpolono} \alias{rpolono} \title{The Poisson Lognormal Distribution} \description{ Density, distribution function and random generation for the Poisson lognormal distribution. } \usage{ dpolono(x, meanlog = 0, sdlog = 1, bigx = 170, ...) ppolono(q, meanlog = 0, sdlog = 1, isOne = 1 - sqrt( .Machine$double.eps ), ...) rpolono(n, meanlog = 0, sdlog = 1) } \arguments{ \item{x, q}{vector of quantiles.} % \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{meanlog, sdlog }{ the mean and standard deviation of the normal distribution (on the log scale). They match the arguments in \code{\link[stats:Lognormal]{Lognormal}}. } \item{bigx}{ Numeric. This argument is for handling large values of \code{x} and/or when \code{\link[stats]{integrate}} fails. A first order Taylor series approximation [Equation (7) of Bulmer (1974)] is used at values of \code{x} that are greater or equal to this argument. For \code{bigx = 10}, he showed that the approximation has a relative error less than 0.001 for values of \code{meanlog} and \code{sdlog} ``likely to be encountered in practice''. The argument can be assigned \code{Inf} in which case the approximation is not used. } \item{isOne }{ Used to test whether the cumulative probabilities have effectively reached unity. } \item{...}{ Arguments passed into \code{\link[stats]{integrate}}. } } \value{ \code{dpolono} gives the density, \code{ppolono} gives the distribution function, and \code{rpolono} generates random deviates. % \code{qpolono} gives the quantile function, and } \references{ Bulmer, M. G. (1974) On fitting the Poisson lognormal distribution to species-abundance data. \emph{Biometrics}, \bold{30}, 101--110. } \author{ T. W. Yee. Some anonymous soul kindly wrote \code{ppolono()} and improved the original \code{dpolono()}. } \details{ The Poisson lognormal distribution is similar to the negative binomial in that it can be motivated by a Poisson distribution whose mean parameter comes from a right skewed distribution (gamma for the negative binomial and lognormal for the Poisson lognormal distribution). 
% See zz code{link{polonozz}}, the \pkg{VGAM} family function
% for estimating the parameters,
% for the formula of the probability density function and other details.
}
\note{
  By default, \code{dpolono} involves numerical integration that is
  performed using \code{\link[stats]{integrate}}. Consequently,
  computations are very slow and numerical problems may occur
  (if so then the use of \code{...} may be needed).
  Alternatively, for extreme values of \code{x}, \code{meanlog},
  \code{sdlog}, etc., lowering \code{bigx} so that the approximation
  is used avoids the call to \code{\link[stats]{integrate}};
  however, the answer may be a little inaccurate.

  For the maximum likelihood estimation of the 2 parameters a
  \pkg{VGAM} family function called \code{polono()}, say, has not
  been written yet.
}
\seealso{
  \code{\link{lognormal}},
  \code{\link{poissonff}},
  \code{\link{negbinomial}}.
}
\examples{
meanlog <- 0.5; sdlog <- 0.5; yy <- 0:19
sum(proby <- dpolono(yy, m = meanlog, sd = sdlog))  # Should be 1
max(abs(cumsum(proby) - ppolono(yy, m = meanlog, sd = sdlog)))  # Should be 0
\dontrun{ opar = par(no.readonly = TRUE)
par(mfrow = c(2, 2))
plot(yy, proby, type = "h", col = "blue", ylab = "P[Y=y]", log = "",
     main = paste("Poisson lognormal(m = ", meanlog,
                  ", sdl = ", sdlog, ")", sep = ""))
y <- 0:190  # More extreme values; use the approximation and plot on a log scale
(sum(proby <- dpolono(y, m = meanlog, sd = sdlog, bigx = 100)))  # Should be 1
plot(y, proby, type = "h", col = "blue", ylab = "P[Y=y] (log)", log = "y",
     main = paste("Poisson lognormal(m = ", meanlog,
                  ", sdl = ", sdlog, ")", sep = ""))  # Note the kink at bigx

# Random number generation
table(y <- rpolono(n = 1000, m = meanlog, sd = sdlog))
hist(y, breaks = ((-1):max(y))+0.5, prob = TRUE, border = "blue")
par(opar) }
}
\keyword{distribution}
VGAM/man/smart.expression.Rd0000644000176200001440000000142613135276753015411 0ustar liggesusers\name{smart.expression}
\alias{smart.expression}
\title{ S Expression for Smart Functions }
\description{
  \code{smart.expression} is an S expression for
  a smart function to call itself. It is best if you go through it
  line by line, but most users will not need to know anything about it.
  It requires the primary argument of the smart function to be called
  \code{"x"}.

  The list component \code{match.call} must be assigned the value of
  \code{match.call()} in the smart function; this is so that the smart
  function can call itself later.
}
\seealso{
  \code{\link[base]{match.call}}.
}
\examples{
print(sm.min2)
}
%\keyword{smartpred}
\keyword{models}
\keyword{regression}
\keyword{programming}
% Converted by Sd2Rd version 1.10.
% Edited manually 17/2/03, 9/7/03
VGAM/man/chest.nz.Rd0000644000176200001440000000254513135276753013624 0ustar liggesusers\name{chest.nz}
\alias{chest.nz}
\docType{data}
\title{ Chest Pain in NZ Adults Data}
\description{
  Presence/absence of chest pain in 10186 New Zealand adults.
}
\usage{data(chest.nz)}
\format{
  A data frame with 73 rows and the following 5 variables.
  \describe{
    \item{age}{a numeric vector; age (years).}
    \item{nolnor}{a numeric vector of counts; no pain on LHS or RHS.}
    \item{nolr}{a numeric vector of counts; no pain on LHS but pain
                on RHS.}
    \item{lnor}{a numeric vector of counts; no pain on RHS but pain
                on LHS.}
    \item{lr}{a numeric vector of counts; pain on LHS and RHS of
              chest.}
  }
}
\details{
  Each adult was asked their age and whether they experienced any
  pain or discomfort in their chest over the last six months.
  If yes, they indicated whether it was on their LHS and/or RHS of
  their chest.
}
\source{
  MacMahon, S., Norton, R., Jackson, R., Mackie, M. J.,
  Cheng, A., Vander Hoorn, S., Milne, A., McCulloch, A. (1995)
  Fletcher Challenge-University of Auckland Heart &
  Health Study: design and baseline findings.
  \emph{New Zealand Medical Journal},
  \bold{108}, 499--502.
}
\examples{
\dontrun{
fit <- vgam(cbind(nolnor, nolr, lnor, lr) ~ s(age, c(4, 3)),
            binom2.or(exchan = TRUE, zero = NULL), data = chest.nz)
coef(fit, matrix = TRUE)
}
\dontrun{ plot(fit, which.cf = 2, se = TRUE) }
}
\keyword{datasets}
VGAM/man/lgammaUC.Rd0000644000176200001440000000551313135276753013554 0ustar liggesusers\name{lgammaUC}
\alias{lgammaUC}
\alias{dlgamma}
\alias{plgamma}
\alias{qlgamma}
\alias{rlgamma}
\title{The Log-Gamma Distribution }
\description{
  Density, distribution function, quantile function and random
  generation for the log-gamma distribution with location parameter
  \code{location}, scale parameter \code{scale} and shape parameter
  \code{k}.
}
\usage{
dlgamma(x, location = 0, scale = 1, shape = 1, log = FALSE)
plgamma(q, location = 0, scale = 1, shape = 1,
        lower.tail = TRUE, log.p = FALSE)
qlgamma(p, location = 0, scale = 1, shape = 1,
        lower.tail = TRUE, log.p = FALSE)
rlgamma(n, location = 0, scale = 1, shape = 1)
}
\arguments{
  \item{x, q}{vector of quantiles.}
  \item{p}{vector of probabilities.}
  \item{n}{number of observations.
  Same as \code{\link[stats]{runif}}.
  }
  \item{location}{the location parameter \eqn{a}.}
  \item{scale}{the (positive) scale parameter \eqn{b}.}
  \item{shape}{the (positive) shape parameter \eqn{k}.}
  \item{log}{
  Logical.
  If \code{log = TRUE} then the logarithm of the density is returned.
  }
  \item{lower.tail, log.p}{
  Same meaning as in \code{\link[stats:Normal]{pnorm}}
  or \code{\link[stats:Normal]{qnorm}}.
  }
}
\value{
  \code{dlgamma} gives the density,
  \code{plgamma} gives the distribution function,
  \code{qlgamma} gives the quantile function, and
  \code{rlgamma} generates random deviates.
}
\references{
Kotz, S. and Nadarajah, S. (2000)
\emph{Extreme Value Distributions: Theory and Applications},
pages 48--49,
London: Imperial College Press.
}
\author{ T. W. Yee and Kai Huang }
\details{
  See \code{\link{lgamma1}}, the \pkg{VGAM} family function for
  estimating the one parameter standard log-gamma distribution by
  maximum likelihood estimation, for formulae and other details.
  Apart from \code{n}, all the above arguments may be vectors and
  are recycled to the appropriate length if necessary.
}
\note{
  The \pkg{VGAM} family function \code{\link{lgamma3}} is
  for the three parameter (nonstandard) log-gamma distribution.
}
\seealso{
  \code{\link{lgamma1}},
  \code{\link{prentice74}}.
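% A minimal consistency sketch (not part of the original examples):
% pp <- seq(0.1, 0.9, by = 0.2)
% max(abs(plgamma(qlgamma(pp, shape = 1.4), shape = 1.4) - pp))  # ~ 0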
}
\examples{
\dontrun{ loc <- 1; Scale <- 1.5; shape <- 1.4
x <- seq(-3.2, 5, by = 0.01)
plot(x, dlgamma(x, loc = loc, Scale, shape = shape), type = "l",
     col = "blue", ylim = 0:1,
     main = "Blue is density, orange is cumulative distribution function",
     sub = "Purple are 5,10,...,95 percentiles", las = 1, ylab = "")
abline(h = 0, col = "blue", lty = 2)
lines(qlgamma(seq(0.05, 0.95, by = 0.05), loc = loc, Scale, shape = shape),
      dlgamma(qlgamma(seq(0.05, 0.95, by = 0.05), loc = loc, scale = Scale,
                      shape = shape),
              loc = loc, Scale, shape = shape),
      col = "purple", lty = 3, type = "h")
lines(x, plgamma(x, loc = loc, Scale, shape = shape), col = "orange")
abline(h = 0, lty = 2) }
}
\keyword{distribution}
VGAM/man/smart.mode.is.Rd0000644000176200001440000000254213135276753014550 0ustar liggesusers\name{smart.mode.is}
\alias{smart.mode.is}
\title{ Determine What Mode the Smart Prediction is In }
\description{
  Determine which of three modes the smart prediction is currently in.
}
\usage{
smart.mode.is(mode.arg = NULL)
}
\arguments{
  \item{mode.arg}{
  a character string, either \code{"read"}, \code{"write"} or
  \code{"neutral"}.
}}
\value{
  If \code{mode.arg} is given, then either \code{TRUE} or \code{FALSE}
  is returned.
  If \code{mode.arg} is not given, then the mode (\code{"neutral"},
  \code{"read"} or \code{"write"}) is returned. Usually, the mode is
  \code{"neutral"}.
}
\seealso{
  \code{\link{put.smart}},
  \code{\link[splines]{bs}},
  \code{\link[stats]{poly}}.
}
\details{
  Smart functions such as \code{\link[splines]{bs}} and
  \code{\link[stats]{poly}} need to know what mode smart prediction
  is in. If it is in \code{"write"} mode then the parameters are
  saved to \code{.smart.prediction} using \code{\link{put.smart}}.
  If in \code{"read"} mode then the parameters are read in using
  \code{\link{get.smart}}. If in \code{"neutral"} mode then the smart
  function behaves like an ordinary function.
}
\examples{
print(sm.min1)
smart.mode.is()  # Returns "neutral"
smart.mode.is(smart.mode.is())  # Returns TRUE
}
%\keyword{smart}
\keyword{models}
\keyword{regression}
\keyword{programming}
% Converted by Sd2Rd version 1.10.
VGAM/man/lvplot.Rd0000644000176200001440000000400713135276753013403 0ustar liggesusers\name{lvplot}
\alias{lvplot}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Latent Variable Plot }
\description{
  Generic function for a \emph{latent variable plot}
  (also known as an \emph{ordination diagram} by ecologists).
}
\usage{
lvplot(object, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{object}{
  An object for which a latent variable plot is meaningful.
  }
  \item{\dots}{
  Other arguments fed into the specific methods function of the model.
  They usually are graphical parameters, and sometimes they are fed
  into the methods function for \code{\link{Coef}}.
  }
}
\details{
  Latent variables occur in reduced-rank regression models, as well
  as in quadratic and additive ordination. For the latter, latent
  variables are often called the \emph{site scores}.
  Latent variable plots were coined by Yee (2004), and have the latent
  variable as at least one of their axes.
}
\value{
  The value returned depends specifically on the methods function
  invoked.
}
\references{
Yee, T. W. (2004)
A new technique for maximum-likelihood
canonical Gaussian ordination.
\emph{Ecological Monographs},
\bold{74}, 685--701.

Yee, T. W. (2006)
Constrained additive ordination.
\emph{Ecology}, \bold{87}, 203--213.
}
\author{ Thomas W.
Yee }
\note{
  Latent variables are not really applicable to
  \code{\link{vglm}}/\code{\link{vgam}} models.
}
\seealso{
  \code{\link{lvplot.qrrvglm}},
  \code{lvplot.cao},
  \code{\link{latvar}},
  \code{\link{trplot}}.
}
\examples{
\dontrun{
hspider[,1:6] <- scale(hspider[,1:6])  # Standardized environmental vars
set.seed(123)
p1 <- cao(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull, Zoraspin) ~
          WaterCon + BareSand + FallTwig +
          CoveMoss + CoveHerb + ReflLux,
          family = poissonff, data = hspider, Bestof = 3,
          df1.nl = c(Zoraspin = 2.5, 3), Crow1positive = TRUE)
index <- 1:ncol(depvar(p1))
lvplot(p1, lcol = index, pcol = index, y = TRUE, las = 1)
}
}
\keyword{models}
\keyword{regression}
VGAM/man/SurvS4-class.Rd0000644000176200001440000000267013135276753014340 0ustar liggesusers\name{SurvS4-class}
\docType{class}
\alias{SurvS4-class}
%%%% 20120216 \alias{print,SurvS4-method}
\alias{show,SurvS4-method}
\title{Class "SurvS4" }
\description{
  S4 version of the Surv class.
}
\section{Objects from the Class}{A virtual Class: No objects may be
created from it.}
\section{Extends}{
%Class \code{"\linkS4class{Surv}"}, directly.
Class \code{"Surv"}, directly.
Class \code{"\linkS4class{matrix}"}, directly.
Class \code{"\linkS4class{oldClass}"}, by class "Surv", distance 2.
Class \code{"\linkS4class{structure}"}, by class "matrix", distance 2.
Class \code{"\linkS4class{array}"}, by class "matrix", distance 2.
Class \code{"\linkS4class{vector}"}, by class "matrix", distance 3,
with explicit coerce.
Class \code{"\linkS4class{vector}"}, by class "matrix", distance 4,
with explicit coerce.
}
\section{Methods}{
  \describe{
% \item{print}{\code{signature(x = "SurvS4")}: ... }
    \item{show}{\code{signature(object = "SurvS4")}: ... }
  }
}
\references{
  See \pkg{survival}.
}
\author{
  T. W. Yee.
}
\note{
  The purpose of having \code{\link{SurvS4}} in \pkg{VGAM} is so that
  the same input can be fed into \code{\link{vglm}} as for functions
  in \pkg{survival} such as \code{\link[survival]{survreg}}.
}
\section{Warning }{
  This code has not been thoroughly tested.
}
\seealso{
  \code{\link{SurvS4}}.
% or \code{\linkS4class{CLASSNAME}} for links to other classes
}
\examples{
showClass("SurvS4")
}
\keyword{classes}
VGAM/man/bistudenttUC.Rd0000644000176200001440000000474313135276753014507 0ustar liggesusers\name{Bistudentt}
\alias{Bistudentt}
\alias{dbistudentt}
%\alias{rbistudentt}
\title{Bivariate Student-t Distribution Density Function}
\description{
  Density for the bivariate Student-t distribution.
% cumulative distribution function
% quantile function
% and
% random generation
}
\usage{
dbistudentt(x1, x2, df, rho = 0, log = FALSE)
}
\arguments{
  \item{x1, x2}{vector of quantiles.}
  \item{df, rho}{
  vector of degrees of freedom and correlation parameter.
  For \code{df}, a value \code{Inf} is currently not working.
% standard deviations and correlation parameter.
  }
% \item{n}{number of observations.
% Same as \code{\link[stats]{rt}}.
% }
  \item{log}{
  Logical.
  If \code{log = TRUE} then the logarithm of the density is returned.
  }
% \item{rho}{
% See \code{\link{bistudenttal}}.
% }
}
\value{
  \code{dbistudentt} gives the density.
% \code{pnorm2} gives the cumulative distribution function,
% \code{qnorm2} gives the quantile function, and
% \code{rbistudentt} generates random deviates (\eqn{n} by 2 matrix).
}
% \author{ T. W. Yee }
\details{
% The default arguments correspond to the standard bivariate Student-t
% distribution with correlation parameter \eqn{\rho = 0}{rho = 0}.
% That is, two independent standard Student-t distributions.
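% A crude numeric sketch (not from the original docs; the grid and
% parameter values are arbitrary) that the density integrates to
% roughly 1:
% xg <- seq(-8, 8, by = 0.05); og <- expand.grid(xg, xg)
% sum(dbistudentt(og[, 1], og[, 2], df = 10, rho = 0.7)) * 0.05^2  # ~ 1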
% That is, two independent standard Student-t distibutions. % Let \code{sd1} be \code{sqrt(var1)} and % written \eqn{\sigma_1}{sigma_1}, etc. % Then the general formula for the correlation coefficient is % \eqn{\rho = cov / (\sigma_1 \sigma_2)}{rho = cov / (sigma_1 * sigma_2)} % where \eqn{cov} is argument \code{cov12}. % Thus if arguments \code{var1} and \code{var2} are left alone then % \code{cov12} can be inputted with \eqn{\rho}{rho}. One can think of this function as an extension of \code{\link[stats]{dt}} to two dimensions. See \code{\link{bistudentt}} for more information. } \references{ Schepsmeier, U. and Stober, J. (2013) Derivatives and Fisher information of bivariate copulas. \emph{Statistical Papers}. } %\section{Warning}{ % % %} %\note{ % For \code{rbistudentt()}, % if the \eqn{i}th variance-covariance matrix is not % positive-definite then the \eqn{i}th row is all \code{NA}s. %} \seealso{ \code{\link{bistudentt}}, \code{\link[stats]{dt}}. } \examples{ \dontrun{ N <- 101; x <- seq(-4, 4, len = N); Rho <- 0.7; mydf <- 10 ox <- expand.grid(x, x) zedd <- dbistudentt(ox[, 1], ox[, 2], df = mydf, rho = Rho, log = TRUE) contour(x, x, matrix(zedd, N, N), col = "blue", labcex = 1.5) } } \keyword{distribution} VGAM/man/AB.Ab.aB.ab.Rd0000644000176200001440000000330413135276753013567 0ustar liggesusers\name{AB.Ab.aB.ab} \alias{AB.Ab.aB.ab} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The AB-Ab-aB-ab Blood Group System } \description{ Estimates the parameter of the AB-Ab-aB-ab blood group system. } \usage{ AB.Ab.aB.ab(link = "logit", init.p = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to \code{p}. See \code{\link{Links}} for more choices. } \item{init.p}{ Optional initial value for \code{p}. } } \details{ This one parameter model involves a probability called \code{p}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Lange, K. (2002) \emph{Mathematical and Statistical Methods for Genetic Analysis}, 2nd ed. New York: Springer-Verlag. } \author{ T. W. Yee } \note{ The input can be a 4-column matrix of counts, where the columns are AB, Ab, aB and ab (in order). Alternatively, the input can be a 4-column matrix of proportions (so each row adds to 1) and the \code{weights} argument is used to specify the total number of counts for each row. } \seealso{ \code{\link{AA.Aa.aa}}, \code{\link{ABO}}, \code{\link{A1A2A3}}, \code{\link{MNSs}}. % \code{\link{AB.Ab.aB.ab2}}, } \examples{ ymat <- cbind(AB=1997, Ab=906, aB=904, ab=32) # Data from Fisher (1925) fit <- vglm(ymat ~ 1, AB.Ab.aB.ab(link = "identitylink"), trace = TRUE) fit <- vglm(ymat ~ 1, AB.Ab.aB.ab, trace = TRUE) rbind(ymat, sum(ymat)*fitted(fit)) Coef(fit) # Estimated p p <- sqrt(4*(fitted(fit)[, 4])) p*p summary(fit) } \keyword{models} \keyword{regression} VGAM/man/hzetaUC.Rd0000644000176200001440000000406713135276753013434 0ustar liggesusers\name{Hzeta} \alias{Hzeta} \alias{dhzeta} \alias{phzeta} \alias{qhzeta} \alias{rhzeta} \title{ Haight's Zeta Distribution } \description{ Density, distribution function, quantile function and random generation for Haight's zeta distribution with parameter \code{shape}. } \usage{ dhzeta(x, shape, log = FALSE) phzeta(q, shape, log.p = FALSE) qhzeta(p, shape) rhzeta(n, shape) } %- maybe also 'usage' for other objects documented here. 
\arguments{
  \item{x, q, p, n}{
  Same meaning as \code{\link[stats]{runif}}.
  }
  \item{shape}{
  The positive shape parameter.
  Called \eqn{\alpha}{alpha} below.
  }
  \item{log,log.p}{
  Same meaning as in \code{\link[stats:Normal]{pnorm}}
  or \code{\link[stats:Normal]{qnorm}}.
  }
}
\details{
  The probability function is
  \deqn{f(x) = (2x-1)^{(-\alpha)} - (2x+1)^{(-\alpha)},}{%
        f(x) = (2x-1)^(-alpha) - (2x+1)^(-alpha),}
  where \eqn{\alpha>0}{alpha>0} and \eqn{x=1,2,\ldots}{x=1,2,...}.
}
\value{
  \code{dhzeta} gives the density,
  \code{phzeta} gives the distribution function,
  \code{qhzeta} gives the quantile function, and
  \code{rhzeta} generates random deviates.
}
%\references{
%
% Pages 533--4 of
% Johnson N. L., Kemp, A. W. and Kotz S. (2005)
% \emph{Univariate Discrete Distributions},
% 3rd edition,
% Hoboken, New Jersey: Wiley.
%
%
%}
\author{ T. W. Yee and Kai Huang }
\note{
  Given some response data, the \pkg{VGAM} family function
  \code{\link{hzeta}} estimates the parameter \code{shape}.
}
\seealso{
  \code{\link{hzeta}},
  \code{\link{zeta}},
  \code{\link{zetaff}},
  \code{\link{simulate.vlm}}.
}
\examples{
dhzeta(1:20, 2.1)
rhzeta(20, 2.1)
round(1000 * dhzeta(1:8, 2))
table(rhzeta(1000, 2))
\dontrun{ shape <- 1.1; x <- 1:10
plot(x, dhzeta(x, shape = shape), type = "h", ylim = 0:1, lwd = 2,
     sub = paste("shape =", shape), las = 1, col = "blue",
     ylab = "Probability",
     main = "Haight's zeta: blue = density; orange = distribution function")
lines(x+0.1, phzeta(x, shape = shape), col = "orange", lty = 3, lwd = 2,
      type = "h") }
}
\keyword{distribution}
VGAM/man/levy.Rd0000644000176200001440000000667113135276753013051 0ustar liggesusers\name{levy}
\alias{levy}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Levy Distribution Family Function }
\description{
  Estimates the scale parameter of the Levy distribution by maximum
  likelihood estimation.
}
\usage{
levy(location = 0, lscale = "loge", iscale = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{location}{
  Location parameter. Must have a known value.
  Called \eqn{a} below.
% otherwise it is estimated (the default).
  }
  \item{lscale}{
  Parameter link function for the (positive) scale parameter \eqn{b}.
  See \code{\link{Links}} for more choices.
  }
  \item{iscale}{
  Initial value for the \eqn{b} parameter.
  By default, an initial value is chosen internally.
  }
}
\details{
  The Levy distribution is one of three stable distributions whose
  density function has a tractable form.
  The formula for the density is
  \deqn{f(y;b) = \sqrt{\frac{b}{2\pi}}
        \exp \left( \frac{-b}{2(y - a)} \right) / (y - a)^{3/2} }{%
        f(y;b) = sqrt(b / (2 pi))
        exp( -b / (2(y - a))) / (y - a)^{3/2} }
  where \eqn{a < y < \infty}{a < y < Inf} and \eqn{b > 0}{b > 0}.
  Note that if \eqn{a} is very close to \code{min(y)} (where \code{y}
  is the response), then numerical problems will occur.
  The mean does not exist. The median is returned as the fitted
  values.
}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}},
  and \code{\link{vgam}}.
}
\references{
Nolan, J. P. (2005)
\emph{Stable Distributions: Models for Heavy Tailed Data}.
% p.5
}
\author{ T. W. Yee }
%\note{
% If \eqn{\delta}{delta} is given, then only one parameter is estimated
% and the default is \eqn{\eta_1=\log(\gamma)}{eta1=log(gamma)}.
% If \eqn{\delta}{delta} is not given, then \eqn{\eta_2=\delta}{eta2=delta}.
%
%
%}
\seealso{
  The Nolan article was at
  \code{http://academic2.american.edu/~jpnolan/stable/chap1.pdf}.
% \code{\link{dlevy}}.
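% For location = 0 the Levy tail probability has a closed form in
% terms of the standard normal (a sketch derived from the erfc form of
% the CDF; not from the original docs):
% b <- 1; y <- c(1, 2)
% 2 * pnorm(sqrt(b / y)) - 1  # P(Y > y); cf. 0.6827 and 0.5205 below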
}
\examples{
nn <- 1000; loc1 <- 0; loc2 <- 10
myscale <- 1  # log link ==> 0 is the answer
ldata <- data.frame(y1 = loc1 + myscale/rnorm(nn)^2,  # Levy(myscale, a)
                    y2 = rlevy(nn, loc = loc2, scale = exp(+2)))
# Cf. Table 1.1 of Nolan for Levy(1,0)
with(ldata, sum(y1 > 1) / length(y1))  # Should be 0.6827
with(ldata, sum(y1 > 2) / length(y1))  # Should be 0.5205

fit1 <- vglm(y1 ~ 1, levy(location = loc1), data = ldata, trace = TRUE)
coef(fit1, matrix = TRUE)
Coef(fit1)
summary(fit1)
head(weights(fit1, type = "work"))

fit2 <- vglm(y2 ~ 1, levy(location = loc2), data = ldata, trace = TRUE)
coef(fit2, matrix = TRUE)
Coef(fit2)
c(median = with(ldata, median(y2)),
  fitted.median = head(fitted(fit2), 1))
}
\keyword{models}
\keyword{regression}
%%\eqn{\delta + \gamma \Gamma(-0.5) / (2\sqrt{\pi})}{delta +
%% gamma * gamma(-0.5) / (2*sqrt(pi))}
%%where \code{gamma} is a parameter but \code{gamma()} is the gamma function.
%%mygamma = exp(1)  # log link ==> 1 is the answer
%% alternative:
%%w = rgamma(n, shape=0.5)  # W ~ Gamma(0.5) distribution
%%mean(w)  # 0.5
%%mean(1/w)
%%y = delta + mygamma / (2 * w)  # This is Levy(mygamma, delta)
%%mean(y)
%%set.seed(123)
%%sum(y > 3) / length(y)  # Should be 0.4363
%%sum(y > 4) / length(y)  # Should be 0.3829
%%sum(y > 5) / length(y)  # Should be 0.3453
%fit <- vglm(y ~ 1, levy(idelta = delta, igamma = mygamma),
%            data = ldata, trace = TRUE)  # 2 parameters
VGAM/man/gompertz.Rd0000644000176200001440000000730013135276753013731 0ustar liggesusers\name{gompertz}
\alias{gompertz}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Gompertz Distribution Family Function }
\description{
  Maximum likelihood estimation of the 2-parameter Gompertz
  distribution.
}
\usage{
gompertz(lscale = "loge", lshape = "loge", iscale = NULL,
         ishape = NULL, nsimEIM = 500, zero = NULL, nowarning = FALSE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{nowarning}{ Logical. Suppress a warning?
  Ignored for \pkg{VGAM} 0.9-7 and higher.
  }
  \item{lshape, lscale}{
  Parameter link functions applied to the shape parameter \code{a},
  scale parameter \code{scale}.
  All parameters are positive.
  See \code{\link{Links}} for more choices.
  }
% \item{eshape, escale}{
% List. Extra argument for each of the links.
% eshape = list(), escale = list(),
% See \code{earg} in \code{\link{Links}} for general information.
% }
  \item{ishape, iscale}{
  Optional initial values.
  A \code{NULL} means a value is computed internally.
  }
  \item{nsimEIM, zero}{
  See \code{\link{CommonVGAMffArguments}}.
  }
}
\details{
  The Gompertz distribution has a cumulative distribution function
  \deqn{F(x;\alpha, \beta) = 1 - \exp[-(\alpha/\beta) \times
        (\exp(\beta x) - 1) ]}{%
        F(x;alpha, beta) = 1 - exp(-(alpha/beta) * (exp(beta * x) - 1) )}
  which leads to a probability density function
  \deqn{f(x; \alpha, \beta) = \alpha \exp(\beta x)
        \exp [-(\alpha/\beta) \times (\exp(\beta x) - 1) ]}{%
        f(x; alpha, beta) = alpha * exp(beta * x) *
        exp[-(alpha/beta) * (exp(beta * x) - 1) ]}
  for \eqn{\alpha > 0}{a > 0}, \eqn{\beta > 0}{b > 0}, \eqn{x > 0}.
  Here, \eqn{\beta} is called the scale parameter \code{scale}, and
  \eqn{\alpha} is called the shape parameter (one could refer to
  \eqn{\alpha}{a} as a location parameter and \eqn{\beta}{b} as a
  shape parameter---see Lenart (2012)).
  The mean involves an exponential integral function.
  Simulated Fisher scoring is used and multiple responses are handled.

  The Makeham distribution has an additional parameter compared to
  the Gompertz distribution.
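% A simulation sketch (not part of the original examples) checking the
% distribution function formula stated above:
% a <- 0.5; b <- 1; x <- 1
% mean(rgompertz(1e5, scale = b, shape = a) <= x)  # Empirical
% 1 - exp(-(a/b) * (exp(b * x) - 1))               # Theoretical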
If \eqn{X} is defined to be the result of sampling from a Gumbel distribution until a negative value \eqn{Z} is produced, then \eqn{X = -Z} has a Gompertz distribution. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Lenart, A. (2012) The moments of the Gompertz distribution and maximum likelihood estimation of its parameters. \emph{Scandinavian Actuarial Journal}, in press. } \author{ T. W. Yee } \section{Warning }{ The same warnings in \code{\link{makeham}} apply here too. } \seealso{ \code{\link{dgompertz}}, \code{\link{makeham}}, \code{\link{simulate.vlm}}. } \examples{ \dontrun{ gdata <- data.frame(x2 = runif(nn <- 1000)) gdata <- transform(gdata, eta1 = -1, eta2 = -1 + 0.2 * x2, ceta1 = 1, ceta2 = -1 + 0.2 * x2) gdata <- transform(gdata, shape1 = exp(eta1), shape2 = exp(eta2), scale1 = exp(ceta1), scale2 = exp(ceta2)) gdata <- transform(gdata, y1 = rgompertz(nn, scale = scale1, shape = shape1), y2 = rgompertz(nn, scale = scale2, shape = shape2)) fit1 <- vglm(y1 ~ 1, gompertz, data = gdata, trace = TRUE) fit2 <- vglm(y2 ~ x2, gompertz, data = gdata, trace = TRUE) coef(fit1, matrix = TRUE) Coef(fit1) summary(fit1) coef(fit2, matrix = TRUE) summary(fit2) } } \keyword{models} \keyword{regression} % probs.y = c(0.20, 0.50, 0.80) VGAM/man/Max.Rd0000644000176200001440000000433613135276753012615 0ustar liggesusers\name{Max} \alias{Max} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Maximums } \description{ Generic function for the \emph{maximums} (maxima) of a model. } \usage{ Max(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ An object for which the computation or extraction of a maximum (or maximums) is meaningful. } \item{\dots}{ Other arguments fed into the specific methods function of the model. Sometimes they are fed into the methods function for \code{\link{Coef}}. } } \details{ Different models can define a maximum in different ways. Many models have no such notion or definition. Maximums occur in quadratic and additive ordination, e.g., CQO or CAO. For these models the maximum is the fitted value at the optimum. For quadratic ordination models there is a formula for the optimum but for additive ordination models the optimum must be searched for numerically. If it occurs on the boundary, then the optimum is undefined. For a valid optimum, the fitted value at the optimum is the maximum. % e.g., CQO or UQO or CAO. } \value{ The value returned depends specifically on the methods function invoked. } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. } \author{ Thomas W. Yee } %\note{ %} %\section{Warning }{ %} \seealso{ \code{Max.qrrvglm}, \code{\link{Tol}}, \code{\link{Opt}}. 
} \examples{
\dontrun{ set.seed(111)  # This leads to the global solution
hspider[,1:6] <- scale(hspider[,1:6])  # Standardized environmental vars
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
                Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull,
                Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          quasipoissonff, Bestof = 2, data = hspider, Crow1positive = FALSE)

Max(p1)

index <- 1:ncol(depvar(p1))
persp(p1, col = index, las = 1, llwd = 2)
abline(h = Max(p1), lty = 2, col = index)
} }
\keyword{models} \keyword{regression}
VGAM/man/gevUC.Rd0000644000176200001440000000773013135276753013102 0ustar liggesusers\name{gevUC}
\alias{gevUC}
\alias{dgev}
\alias{pgev}
\alias{qgev}
\alias{rgev}
\title{The Generalized Extreme Value Distribution }
\description{
Density, distribution function, quantile function and random generation
for the generalized extreme value distribution (GEV) with location
parameter \code{location}, scale parameter \code{scale} and shape
parameter \code{shape}.
}
\usage{
dgev(x, location = 0, scale = 1, shape = 0, log = FALSE,
     tolshape0 = sqrt(.Machine$double.eps))
pgev(q, location = 0, scale = 1, shape = 0, lower.tail = TRUE, log.p = FALSE)
qgev(p, location = 0, scale = 1, shape = 0, lower.tail = TRUE, log.p = FALSE)
rgev(n, location = 0, scale = 1, shape = 0)
}
\arguments{
\item{x, q}{vector of quantiles.}
\item{p}{vector of probabilities.}
\item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. }
\item{location}{the location parameter \eqn{\mu}{mu}.}
\item{scale}{the (positive) scale parameter \eqn{\sigma}{sigma}. Must consist of positive values. }
\item{shape}{the shape parameter \eqn{\xi}{xi}.}
\item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. }
\item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Uniform]{punif}} or \code{\link[stats:Uniform]{qunif}}. }
\item{tolshape0}{
Positive numeric.
Threshold/tolerance value for testing whether \eqn{\xi}{xi} is zero.
If the absolute value of the estimate of \eqn{\xi}{xi} is less than
this value then it will be assumed zero and a Gumbel distribution
will be used.
}
% 20160412; Deprecated:
% \item{oobounds.log, giveWarning}{
% Numeric and logical.
% The GEV distribution has support in the region satisfying
% \code{1+shape*(x-location)/scale > 0}. Outside that region, the
% logarithm of the density is assigned \code{oobounds.log}, which
% equates to a zero density.
% It should not be assigned a positive number, and ideally is very negative.
% Since \code{\link{egev}} uses this function it is necessary
% to return a finite value outside this region so as to allow
% for half-stepping. Both arguments are in support of this.
% This argument and others match those of \code{\link{egev}}.
% }
}
\value{
\code{dgev} gives the density,
\code{pgev} gives the distribution function,
\code{qgev} gives the quantile function, and
\code{rgev} generates random deviates.
}
\references{
Coles, S. (2001)
\emph{An Introduction to Statistical Modeling of Extreme Values}.
London: Springer-Verlag.
}
\author{ T. W. Yee }
\details{
See \code{\link{gev}}, the \pkg{VGAM} family function for estimating the
3 parameters by maximum likelihood estimation, for formulae and other details.
Apart from \code{n}, all the above arguments may be vectors and are
recycled to the appropriate length if necessary.
}
\note{
The default value of \eqn{\xi = 0}{xi = 0} means the default
distribution is the Gumbel.
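% As a quick check of this shape = 0 (Gumbel) special case (a minimal
% sketch; the values of mu, sigma, and xx are arbitrary):
% mu <- 2; sigma <- 1.5; xx <- seq(-2, 8, by = 0.5)
% max(abs(pgev(xx, mu, sigma, shape = 0) -
%         exp(-exp(-(xx - mu) / sigma))))  # Should be 0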
Currently, these functions have different argument names compared with those in the \pkg{evd} package. } \seealso{ \code{\link{gev}}, \code{\link{gevff}}, \code{\link{vglm.control}}. } \examples{ loc <- 2; sigma <- 1; xi <- -0.4 pgev(qgev(seq(0.05, 0.95, by = 0.05), loc, sigma, xi), loc, sigma, xi) \dontrun{ x <- seq(loc - 3, loc + 3, by = 0.01) plot(x, dgev(x, loc, sigma, xi), type = "l", col = "blue", ylim = c(0, 1), main = "Blue is density, orange is cumulative distribution function", sub = "Purple are 10,...,90 percentiles", ylab = "", las = 1) abline(h = 0, col = "blue", lty = 2) lines(qgev(seq(0.1, 0.9, by = 0.1), loc, sigma, xi), dgev(qgev(seq(0.1, 0.9, by = 0.1), loc, sigma, xi), loc, sigma, xi), col = "purple", lty = 3, type = "h") lines(x, pgev(x, loc, sigma, xi), type = "l", col = "orange") abline(h = (0:10)/10, lty = 2, col = "gray50") } } \keyword{distribution} %dgev(x, location = 0, scale = 1, shape = 0, log = FALSE, tolshape0 = % sqrt(.Machine$double.eps), oobounds.log = -Inf, giveWarning = FALSE) VGAM/man/rrvglm-class.Rd0000644000176200001440000001623013135276753014500 0ustar liggesusers\name{rrvglm-class} \docType{class} \alias{rrvglm-class} \title{Class ``rrvglm'' } \description{ Reduced-rank vector generalized linear models. } \section{Objects from the Class}{ Objects can be created by calls to \code{\link{rrvglm}}. } \section{Slots}{ \describe{ \item{\code{extra}:}{ Object of class \code{"list"}; the \code{extra} argument on entry to \code{vglm}. This contains any extra information that might be needed by the family function. } \item{\code{family}:}{ Object of class \code{"vglmff"}. The family function. } \item{\code{iter}:}{ Object of class \code{"numeric"}. The number of IRLS iterations used. } \item{\code{predictors}:}{ Object of class \code{"matrix"} with \eqn{M} columns which holds the \eqn{M} linear predictors. } \item{\code{assign}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. This named list gives information matching the columns and the (LM) model matrix terms. } \item{\code{call}:}{ Object of class \code{"call"}, from class \code{ "vlm"}. The matched call. } \item{\code{coefficients}:}{ Object of class \code{"numeric"}, from class \code{ "vlm"}. A named vector of coefficients. } \item{\code{constraints}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. A named list of constraint matrices used in the fitting. } \item{\code{contrasts}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. The contrasts used (if any). } \item{\code{control}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. A list of parameters for controlling the fitting process. See \code{\link{vglm.control}} for details. } \item{\code{criterion}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. List of convergence criterion evaluated at the final IRLS iteration. } \item{\code{df.residual}:}{ Object of class \code{"numeric"}, from class \code{ "vlm"}. The residual degrees of freedom. } \item{\code{df.total}:}{ Object of class \code{"numeric"}, from class \code{ "vlm"}. The total degrees of freedom. } \item{\code{dispersion}:}{ Object of class \code{"numeric"}, from class \code{ "vlm"}. The scaling parameter. } \item{\code{effects}:}{ Object of class \code{"numeric"}, from class \code{ "vlm"}. The effects. } \item{\code{fitted.values}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"}. The fitted values. This is usually the mean but may be quantiles, or the location parameter, e.g., in the Cauchy model. 
} \item{\code{misc}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. A named list to hold miscellaneous parameters. } \item{\code{model}:}{ Object of class \code{"data.frame"}, from class \code{ "vlm"}. The model frame. } \item{\code{na.action}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. A list holding information about missing values. } \item{\code{offset}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"}. If non-zero, a \eqn{M}-column matrix of offsets. } \item{\code{post}:}{ Object of class \code{"list"}, from class \code{ "vlm"} where post-analysis results may be put. } \item{\code{preplot}:}{ Object of class \code{"list"}, from class \code{ "vlm"} used by \code{\link{plotvgam}}; the plotting parameters may be put here. } \item{\code{prior.weights}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"} holding the initially supplied weights. } \item{\code{qr}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. QR decomposition at the final iteration. } \item{\code{R}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"}. The \bold{R} matrix in the QR decomposition used in the fitting. } \item{\code{rank}:}{ Object of class \code{"integer"}, from class \code{ "vlm"}. Numerical rank of the fitted model. } \item{\code{residuals}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"}. The \emph{working} residuals at the final IRLS iteration. } \item{\code{ResSS}:}{ Object of class \code{"numeric"}, from class \code{ "vlm"}. Residual sum of squares at the final IRLS iteration with the adjusted dependent vectors and weight matrices. } \item{\code{smart.prediction}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. A list of data-dependent parameters (if any) that are used by smart prediction. } \item{\code{terms}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. The \code{\link[stats]{terms}} object used. } \item{\code{weights}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"}. The weight matrices at the final IRLS iteration. This is in matrix-band form. } \item{\code{x}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"}. The model matrix (LM, not VGLM). } \item{\code{xlevels}:}{ Object of class \code{"list"}, from class \code{ "vlm"}. The levels of the factors, if any, used in fitting. } \item{\code{y}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"}. The response, in matrix form. } \item{\code{Xm2}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. See \code{\link{vglm-class}}). } \item{\code{Ym2}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. See \code{\link{vglm-class}}). } \item{\code{callXm2}:}{ Object of class \code{"call"}, from class \code{ "vlm"}. The matched call for argument \code{form2}. } } } \section{Extends}{ Class \code{"vglm"}, directly. Class \code{"vlm"}, by class "vglm". } \section{Methods}{ \describe{ \item{biplot}{\code{signature(x = "rrvglm")}: biplot. } \item{Coef}{\code{signature(object = "rrvglm")}: more detailed coefficients giving \bold{A}, \eqn{\bold{B}_1}{\bold{B}1}, \bold{C}, etc. } \item{biplot}{\code{signature(object = "rrvglm")}: biplot. } \item{print}{\code{signature(x = "rrvglm")}: short summary of the object. } \item{summary}{\code{signature(object = "rrvglm")}: a more detailed summary of the object. } } } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. 
\emph{Journal of the Royal Statistical Society, Series B, Methodological},
\bold{58}, 481--493.
%\url{http://www.stat.auckland.ac.nz/~yee}
}
\author{ Thomas W. Yee }
\note{
The slots of \code{"rrvglm"} objects are currently identical to \code{"vglm"} objects.
}
% ~Make other sections like Warning with \section{Warning }{....} ~
% zzz need to make sure this function matches \code{\link{vglm-class}},
%where \code{\link{vglm-class}} is definitive.
\seealso{
\code{\link{rrvglm}},
\code{\link{lvplot.rrvglm}},
\code{\link{vglmff-class}}.
}
\examples{
\dontrun{ # Rank-1 stereotype model of Anderson (1984)
pneumo <- transform(pneumo, let = log(exposure.time),
                            x3 = runif(nrow(pneumo)))  # x3 is unrelated
fit <- rrvglm(cbind(normal, mild, severe) ~ let + x3,
              multinomial, data = pneumo, Rank = 1)
Coef(fit)
}
}
\keyword{classes}
% set.seed(111)
VGAM/man/Tol.Rd0000644000176200001440000000544613135276753012625 0ustar liggesusers\name{Tol}
\alias{Tol}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Tolerances }
\description{
Generic function for the \emph{tolerances} of a model.
}
\usage{
Tol(object, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{object}{ An object for which the computation or extraction of a tolerance or tolerances is meaningful. }
\item{\dots}{ Other arguments fed into the specific methods function of the model. Sometimes they are fed into the methods function for \code{\link{Coef}}. }
}
\details{
Different models can define a tolerance in different ways.
Many models have no such notion or definition.

Tolerances occur in quadratic ordination, i.e., CQO and UQO.
They have ecological meaning because a high tolerance for a species
means the species can survive over a large environmental range
(euryoecious species), whereas a small tolerance means the species'
niche is small (stenoecious species).
Mathematically, the tolerance is like the variance of a normal distribution.
}
\value{
The value returned depends specifically on the methods function invoked.
}
\references{
Yee, T. W. (2004)
A new technique for maximum-likelihood canonical Gaussian ordination.
\emph{Ecological Monographs}, \bold{74}, 685--701.

Yee, T. W. (2006)
Constrained additive ordination.
\emph{Ecology}, \bold{87}, 203--213.
}
\author{ Thomas W. Yee }
\note{
Tolerances are undefined for `linear' and additive ordination models.
They are well-defined for quadratic ordination models.
}
\section{Warning }{
There is a direct inverse relationship between the scaling of the
latent variables (site scores) and the tolerances.
One normalization is for the latent variables to have unit variance.
Another normalization is for all the tolerances to be unity.
These two normalizations cannot simultaneously hold in general.
For rank-\emph{R>1} models it becomes more complicated because
the latent variables are also uncorrelated.
An important argument when fitting quadratic ordination models is whether
\code{eq.tolerances} is \code{TRUE} or \code{FALSE}.
See Yee (2004) for details.
}
\seealso{
\code{Tol.qrrvglm}.
\code{\link{Max}},
\code{\link{Opt}},
\code{\link{cqo}},
\code{\link{rcim}} for UQO.
} \examples{
\dontrun{ set.seed(111)  # This leads to the global solution
hspider[,1:6] <- scale(hspider[, 1:6])  # Standardized environmental vars
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
                Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull,
                Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          Bestof = 2, quasipoissonff, data = hspider, Crow1positive = FALSE)

Tol(p1)
} }
\keyword{models} \keyword{regression}
VGAM/man/expexpff1.Rd0000644000176200001440000000667113135276753013772 0ustar liggesusers\name{expexpff1}
\alias{expexpff1}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Exponentiated Exponential Distribution }
\description{
Estimates the two parameters of the exponentiated exponential
distribution by maximizing a profile (concentrated) likelihood.
}
\usage{
expexpff1(lrate = "loge", irate = NULL, ishape = 1)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{lrate}{ Parameter link function for the (positive) \eqn{\lambda}{rate} parameter. See \code{\link{Links}} for more choices. }
\item{irate}{ Initial value for the \eqn{\lambda}{rate} parameter. By default, an initial value is chosen internally using \code{ishape}. }
\item{ishape}{ Initial value for the \eqn{\alpha}{shape} parameter. If convergence fails try setting a different value for this argument. }
}
\details{
See \code{\link{expexpff}} for details about the exponentiated
exponential distribution.
This family function uses a different algorithm for fitting the model.
Given \eqn{\lambda}{rate},
the MLE of \eqn{\alpha}{shape} is easily obtained in closed form
in terms of \eqn{\lambda}{rate}.
This family function maximizes a profile (concentrated) likelihood
with respect to \eqn{\lambda}{rate}.
Newton-Raphson is used here, in contrast to \code{\link{expexpff}},
which uses Fisher scoring.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}
and \code{\link{vgam}}.
}
\references{
Gupta, R. D. and Kundu, D. (2001)
Exponentiated exponential family: an alternative to gamma and Weibull distributions,
\emph{Biometrical Journal}, \bold{43}, 117--130.
}
\author{ T. W. Yee }
\note{
This family function works only for intercept-only models,
i.e., \code{y ~ 1} where \code{y} is the response.

The estimate of \eqn{\alpha}{shape} is attached to the \code{misc}
slot of the object, which is a list and contains the component \code{shape}.

As Newton-Raphson is used, the working weights are sometimes negative,
and some adjustment is made to these to make them positive.

As with \code{\link{expexpff}}, good initial values are needed.
Convergence may be slow.
}
\section{Warning }{The standard errors produced by a
\code{summary} of the model may be wrong.
}
\seealso{
\code{\link{expexpff}},
\code{\link{CommonVGAMffArguments}}.
} \examples{ # Ball bearings data (number of million revolutions before failure) edata <- data.frame(bbearings = c(17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.80, 51.84, 51.96, 54.12, 55.56, 67.80, 68.64, 68.64, 68.88, 84.12, 93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40)) fit <- vglm(bbearings ~ 1, expexpff1(ishape = 4), trace = TRUE, maxit = 250, checkwz = FALSE, data = edata) coef(fit, matrix = TRUE) Coef(fit) # Authors get c(0.0314, 5.2589) with log-lik -112.9763 logLik(fit) fit@misc$shape # Estimate of shape # Failure times of the airconditioning system of an airplane eedata <- data.frame(acplane = c(23, 261, 87, 7, 120, 14, 62, 47, 225, 71, 246, 21, 42, 20, 5, 12, 120, 11, 3, 14, 71, 11, 14, 11, 16, 90, 1, 16, 52, 95)) fit <- vglm(acplane ~ 1, expexpff1(ishape = 0.8), trace = TRUE, maxit = 50, checkwz = FALSE, data = eedata) coef(fit, matrix = TRUE) Coef(fit) # Authors get c(0.0145, 0.8130) with log-lik -152.264 logLik(fit) fit@misc$shape # Estimate of shape } \keyword{models} \keyword{regression} VGAM/man/zipoisUC.Rd0000644000176200001440000000712613135276753013635 0ustar liggesusers\name{Zipois} \alias{Zipois} \alias{dzipois} \alias{pzipois} \alias{qzipois} \alias{rzipois} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Inflated Poisson Distribution } \description{ Density, distribution function, quantile function and random generation for the zero-inflated and zero-deflated Poisson distribution with parameter \code{pstr0}. } \usage{ dzipois(x, lambda, pstr0 = 0, log = FALSE) pzipois(q, lambda, pstr0 = 0) qzipois(p, lambda, pstr0 = 0) rzipois(n, lambda, pstr0 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles. } \item{p}{vector of probabilities. } \item{n}{number of observations. Must be a single positive integer. } \item{lambda}{ Vector of positive means. } \item{pstr0}{ Probability of a structural zero (i.e., ignoring the Poisson distribution), called \eqn{\phi}{phi}. The default value of \eqn{\phi = 0}{phi = 0} corresponds to the response having an ordinary Poisson distribution. This argument may be negative to allow for 0-deflation, hence its interpretation as a probability ceases. } \item{log}{ Logical. Return the logarithm of the answer? } } \details{ The probability function of \eqn{Y} is 0 with probability \eqn{\phi}{phi}, and \eqn{Poisson(\lambda)}{Poisson(lambda)} with probability \eqn{1-\phi}{1-phi}. Thus \deqn{P(Y=0) =\phi + (1-\phi) P(W=0)}{% P(Y=0) = phi + (1-phi) * P(W=0)} where \eqn{W} is distributed \eqn{Poisson(\lambda)}{Poisson(lambda)}. } \value{ \code{dzipois} gives the density, \code{pzipois} gives the distribution function, \code{qzipois} gives the quantile function, and \code{rzipois} generates random deviates. } %\references{ } \author{ T. W. Yee } \note{ The argument \code{pstr0} is recycled to the required length, and must have values which lie in the interval \eqn{[0,1]}. These functions actually allow for the \emph{zero-deflated Poisson} distribution. Here, \code{pstr0} is also permitted to lie in the interval \code{[-1/expm1(lambda), 0]}. The resulting probability of a zero count is \emph{less than} the nominal Poisson value, and the use of \code{pstr0} to stand for the probability of a structural zero loses its meaning. When \code{pstr0} equals \code{-1/expm1(lambda)} this corresponds to the positive-Poisson distribution (e.g., see \code{\link{dpospois}}). } \seealso{ \code{\link{zipoisson}}, \code{\link[stats:Poisson]{dpois}}, \code{\link{rzinegbin}}. 
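% A quick check of the mixture formula above, P(Y=0) = phi + (1-phi) P(W=0)
% (a minimal sketch only; the parameter values are arbitrary):
% lambda <- 3; phi <- 0.2
% dzipois(0, lambda, pstr0 = phi) -
%   (phi + (1 - phi) * dpois(0, lambda))  # Should be 0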
} \examples{ lambda <- 3; pstr0 <- 0.2; x <- (-1):7 (ii <- dzipois(x, lambda, pstr0 = pstr0)) max(abs(cumsum(ii) - pzipois(x, lambda, pstr0 = pstr0))) # Should be 0 table(rzipois(100, lambda, pstr0 = pstr0)) table(qzipois(runif(100), lambda, pstr0)) round(dzipois(0:10, lambda, pstr0 = pstr0) * 100) # Should be similar \dontrun{ x <- 0:10 par(mfrow = c(2, 1)) # Zero-inflated Poisson barplot(rbind(dzipois(x, lambda, pstr0 = pstr0), dpois(x, lambda)), beside = TRUE, col = c("blue", "orange"), main = paste("ZIP(", lambda, ", pstr0 = ", pstr0, ") (blue) vs", " Poisson(", lambda, ") (orange)", sep = ""), names.arg = as.character(x)) deflat.limit <- -1 / expm1(lambda) # Zero-deflated Poisson newpstr0 <- round(deflat.limit / 1.5, 3) barplot(rbind(dzipois(x, lambda, pstr0 = newpstr0), dpois(x, lambda)), beside = TRUE, col = c("blue","orange"), main = paste("ZDP(", lambda, ", pstr0 = ", newpstr0, ") (blue) vs", " Poisson(", lambda, ") (orange)", sep = ""), names.arg = as.character(x)) } } \keyword{distribution} VGAM/man/gamma2.Rd0000644000176200001440000001444113135276753013232 0ustar liggesusers\name{gamma2} \alias{gamma2} %- Also NEED an '\alias' for EACH other topic documented here. \title{ 2-parameter Gamma Distribution } \description{ Estimates the 2-parameter gamma distribution by maximum likelihood estimation. } \usage{ gamma2(lmu = "loge", lshape = "loge", imethod = 1, ishape = NULL, parallel = FALSE, deviance.arg = FALSE, zero = "shape") } %- maybe also 'usage' for other objects documented here. % apply.parint = FALSE, \arguments{ \item{lmu, lshape}{ Link functions applied to the (positive) \emph{mu} and \emph{shape} parameters (called \eqn{\mu}{mu} and \eqn{a}{shape} respectively). See \code{\link{Links}} for more choices. } \item{ishape}{ Optional initial value for \emph{shape}. A \code{NULL} means a value is computed internally. If a failure to converge occurs, try using this argument. This argument is ignored if used within \code{\link{cqo}}; see the \code{iShape} argument of \code{\link{qrrvglm.control}} instead. } \item{imethod}{ An integer with value \code{1} or \code{2} which specifies the initialization method for the \eqn{\mu}{mu} parameter. If failure to converge occurs try another value (and/or specify a value for \code{ishape}). } \item{deviance.arg}{ Logical. If \code{TRUE}, the deviance function is attached to the object. Under ordinary circumstances, it should be left alone because it really assumes the shape parameter is at the maximum likelihood estimate. Consequently, one cannot use that criterion to minimize within the IRLS algorithm. It should be set \code{TRUE} only when used with \code{\link{cqo}} under the fast algorithm. } \item{zero}{ See \code{\link{CommonVGAMffArguments}} for information. % An integer specifying which % linear/additive predictor is to be modelled as an intercept only. % If assigned, the single value should be either 1 or 2 or \code{NULL}. % The default is to model \eqn{shape} as an intercept only. % A value \code{NULL} means neither 1 or 2. % Integer valued vector, usually assigned \eqn{-2} or \eqn{2} if % used at all. Specifies which of the two linear/additive predictors % are modelled as an intercept only. By default, the shape parameter % (after \code{lshape} is applied) is modelled as a single unknown % number that is estimated. It can be modelled as a function of % the explanatory variables by setting \code{zero = NULL}. A negative % value means that the value is recycled, so setting \eqn{-2} means % all shape parameters are intercept only. 
% See \code{\link{CommonVGAMffArguments}} for more information.
}
\item{parallel}{
Details at \code{\link{CommonVGAMffArguments}}.
If \code{parallel = TRUE} then the constraint is not applied to the intercept.
}
}
\details{
This distribution can model continuous skewed responses.
The density function is given by
\deqn{f(y;\mu,a) = \frac{\exp(-a y / \mu) \times (a y / \mu)^{a-1} \times a}{ \mu \times \Gamma(a)}}{%
f(y;mu,shape) = exp(-shape * y / mu) y^(shape-1) shape^(shape) / [mu^(shape) * gamma(shape)]}
for \eqn{\mu > 0}{mu > 0}, \eqn{a > 0}{shape > 0} and \eqn{y > 0}.
Here, \eqn{\Gamma(\cdot)}{gamma()} is the gamma function, as in
\code{\link[base:Special]{gamma}}.
The mean of \emph{Y} is \eqn{\mu=\mu}{mu=mu} (returned as the fitted
values) with variance \eqn{\sigma^2 = \mu^2 / a}{sigma^2 = mu^2 / shape}.
If \eqn{0 < a < 1}{0 < shape < 1} then the density has a pole at the
origin and decreases monotonically as \eqn{y} increases.
If \eqn{a > 1}{shape > 1} then the density is zero at the origin and
is unimodal with mode at \eqn{y = \mu - \mu / a}{y = mu - mu / shape};
the constraint \eqn{a > 1}{shape > 1} can be achieved with
\code{lshape = "loglog"}.
By default, the two linear/additive predictors are
\eqn{\eta_1=\log(\mu)}{eta1=log(mu)} and \eqn{\eta_2=\log(a)}{eta2=log(shape)}.
This family function implements Fisher scoring and the working
weight matrices are diagonal.

This \pkg{VGAM} family function handles \emph{multivariate} responses,
so that a matrix can be used as the response.
The number of columns is the number of species, say, and
\code{zero=-2} means that \emph{all} species have a shape parameter
equalling a (different) intercept only.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}
and \code{\link{vgam}}.
}
\references{
The parameterization of this \pkg{VGAM} family function is the
2-parameter gamma distribution described in the monograph

McCullagh, P. and Nelder, J. A. (1989)
\emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall.
}
\author{ T. W. Yee }
\note{
The response must be strictly positive.
A moment estimator for the shape parameter may be implemented in the future.

If \code{mu} and \code{shape} are vectors, then
\code{rgamma(n = n, shape = shape, scale = mu/shape)}
will generate random gamma variates of this parameterization, etc.;
see \code{\link[stats]{GammaDist}}.

For \code{\link{cqo}} and \code{\link{cao}}, taking the logarithm
of the response means (approximately) a \code{\link{gaussianff}} family
may be used on the transformed data.
}
\seealso{
\code{\link{gamma1}} for the 1-parameter gamma distribution,
\code{\link{gammaR}} for another parameterization of
the 2-parameter gamma distribution that is directly matched
with \code{\link[stats]{rgamma}},
\code{\link{bigamma.mckay}} for \emph{a} bivariate gamma distribution,
\code{\link{expexpff}},
\code{\link[stats]{GammaDist}},
\code{\link{golf}},
\code{\link{CommonVGAMffArguments}},
\code{\link{simulate.vlm}},
\code{\link{negloge}}.
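% A small simulation illustrating the rgamma() parameterization given in
% the Note above (a sketch only; nn, mu, and shape are arbitrary values):
% nn <- 10000; mu <- 4; shape <- 2
% yy <- rgamma(nn, shape = shape, scale = mu / shape)
% c(mean(yy), mu)           # Sample mean should be close to mu
% c(var(yy), mu^2 / shape)  # Sample variance should be close to mu^2 / shape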
} \examples{ # Essentially a 1-parameter gamma gdata <- data.frame(y = rgamma(n = 100, shape = exp(1))) fit1 <- vglm(y ~ 1, gamma1, data = gdata) fit2 <- vglm(y ~ 1, gamma2, data = gdata, trace = TRUE, crit = "coef") coef(fit2, matrix = TRUE) c(Coef(fit2), colMeans(gdata)) # Essentially a 2-parameter gamma gdata <- data.frame(y = rgamma(n = 500, rate = exp(-1), shape = exp(2))) fit2 <- vglm(y ~ 1, gamma2, data = gdata, trace = TRUE, crit = "coef") coef(fit2, matrix = TRUE) c(Coef(fit2), colMeans(gdata)) summary(fit2) } \keyword{models} \keyword{regression} VGAM/man/brat.Rd0000644000176200001440000001131713135276753013015 0ustar liggesusers\name{brat} \alias{brat} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Bradley Terry Model } \description{ Fits a Bradley Terry model (intercept-only model) by maximum likelihood estimation. } \usage{ brat(refgp = "last", refvalue = 1, ialpha = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{refgp}{ Integer whose value must be from the set \{1,\ldots,\eqn{M+1}\}, where there are \eqn{M+1} competitors. The default value indicates the last competitor is used---but don't input a character string, in general. } \item{refvalue}{ Numeric. A positive value for the reference group. } \item{ialpha}{ Initial values for the \eqn{\alpha}{alpha}s. These are recycled to the appropriate length. } } \details{ The Bradley Terry model involves \eqn{M+1} competitors who either win or lose against each other (no draws/ties allowed in this implementation--see \code{\link{bratt}} if there are ties). The probability that Competitor \eqn{i} beats Competitor \eqn{j} is \eqn{\alpha_i / (\alpha_i+\alpha_j)}{alpha_i / (alpha_i + alpha_j)}, where all the \eqn{\alpha}{alpha}s are positive. Loosely, the \eqn{\alpha}{alpha}s can be thought of as the competitors' `abilities'. For identifiability, one of the \eqn{\alpha_i}{alpha_i} is set to a known value \code{refvalue}, e.g., 1. By default, this function chooses the last competitor to have this reference value. The data can be represented in the form of a \eqn{M+1} by \eqn{M+1} matrix of counts, where winners are the rows and losers are the columns. However, this is not the way the data should be inputted (see below). Excluding the reference value/group, this function chooses \eqn{\log(\alpha_j)}{log(alpha_j)} as the \eqn{M} linear predictors. The log link ensures that the \eqn{\alpha}{alpha}s are positive. The Bradley Terry model can be fitted by logistic regression, but this approach is not taken here. The Bradley Terry model can be fitted with covariates, e.g., a home advantage variable, but unfortunately, this lies outside the VGLM theoretical framework and therefore cannot be handled with this code. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}. } \references{ Agresti, A. (2013) \emph{Categorical Data Analysis}, 3rd ed. Hoboken, NJ, USA: Wiley. Stigler, S. (1994) Citation patterns in the journals of statistics and probability. \emph{Statistical Science}, \bold{9}, 94--108. The \pkg{BradleyTerry2} package has more comprehensive capabilities than this function. } \author{ T. W. Yee } \note{ The function \code{\link{Brat}} is useful for coercing a \eqn{M+1} by \eqn{M+1} matrix of counts into a one-row matrix suitable for \code{brat}. Diagonal elements are skipped, and the usual S order of \code{c(a.matrix)} of elements is used. 
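% For instance (a minimal sketch; Brat() is documented in \code{\link{Brat}}),
% a 3 x 3 matrix of counts, i.e., M+1 = 3 competitors, is coerced to a 1-row
% matrix with M * (M+1) = 6 columns, the diagonal elements being dropped:
% m3 <- matrix(1:9, 3, 3)
% Brat(m3)  # A 1 x 6 matrix, in c(m3) order with the diagonal skipped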
There should be no missing values apart from the diagonal elements of the square matrix. The matrix should have winners as the rows, and losers as the columns. In general, the response should be a 1-row matrix with \eqn{M(M+1)} columns. Only an intercept model is recommended with \code{brat}. It doesn't make sense really to include covariates because of the limited VGLM framework. Notationally, note that the \pkg{VGAM} family function \code{\link{brat}} has \eqn{M+1} contestants, while \code{bratt} has \eqn{M} contestants. } \section{Warning }{ Presently, the residuals are wrong, and the prior weights are not handled correctly. Ideally, the total number of counts should be the prior weights, after the response has been converted to proportions. This would make it similar to family functions such as \code{\link{multinomial}} and \code{\link{binomialff}}. } \seealso{ \code{\link{bratt}}, \code{\link{Brat}}, \code{\link{multinomial}}, \code{\link{binomialff}}. } \examples{ # Citation statistics: being cited is a 'win'; citing is a 'loss' journal <- c("Biometrika", "Comm.Statist", "JASA", "JRSS-B") mat <- matrix(c( NA, 33, 320, 284, 730, NA, 813, 276, 498, 68, NA, 325, 221, 17, 142, NA), 4, 4) dimnames(mat) <- list(winner = journal, loser = journal) fit <- vglm(Brat(mat) ~ 1, brat(refgp = 1), trace = TRUE) fit <- vglm(Brat(mat) ~ 1, brat(refgp = 1), trace = TRUE, crit = "coef") summary(fit) c(0, coef(fit)) # Log-abilities (in order of "journal") c(1, Coef(fit)) # Abilities (in order of "journal") fitted(fit) # Probabilities of winning in awkward form (check <- InverseBrat(fitted(fit))) # Probabilities of winning check + t(check) # Should be 1's in the off-diagonals } \keyword{models} \keyword{regression} VGAM/man/leukemia.Rd0000644000176200001440000000120113135276753013650 0ustar liggesusers%\name{aml} \name{leukemia} %\alias{aml} \alias{leukemia} \docType{data} \title{Acute Myelogenous Leukemia Survival Data} \description{Survival in patients with Acute Myelogenous Leukemia} \usage{ %data(aml) data(leukemia) } \format{ \tabular{ll}{ time:\tab survival or censoring time\cr status:\tab censoring status\cr x: \tab maintenance chemotherapy given? (factor)\cr } } \source{ Rupert G. Miller (1997), \emph{Survival Analysis}. John Wiley & Sons. ISBN: 0-471-25218-2. } \note{ This data set has been transferred from \pkg{survival} and renamed from \code{aml} to \code{leukemia}. } \keyword{datasets} VGAM/man/iam.Rd0000644000176200001440000000677613135276753012650 0ustar liggesusers\name{iam} \alias{iam} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Index from Array to Matrix } \description{ Maps the elements of an array containing symmetric positive-definite matrices to a matrix with sufficient columns to hold them (called matrix-band format.) } \usage{ iam(j, k, M, both = FALSE, diag = TRUE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{j}{ An integer from the set \{\code{1:M}\} giving the row number of an element. } \item{k}{ An integer from the set \{\code{1:M}\} giving the column number of an element. } \item{M}{ The number of linear/additive predictors. This is the dimension of each positive-definite symmetric matrix. } \item{both}{ Logical. Return both the row and column indices? See below for more details. } \item{diag}{ Logical. Return the indices for the diagonal elements? If \code{FALSE} then only the strictly upper triangular part of the matrix elements are used. 
} }
\details{
Suppose we have \eqn{n} symmetric positive-definite square matrices,
each \eqn{M} by \eqn{M}, and these are stored in an \code{array} of
dimension \code{c(n,M,M)}.
Then these can be more compactly represented by a \code{matrix} of
dimension \code{c(n,K)} where \code{K} is an integer between
\code{M} and \code{M*(M+1)/2} inclusive.
The mapping between these two representations is given by this function.
It firstly enumerates by the diagonal elements, followed by the band
immediately above the diagonal, then the band above that one, etc.
The last element is \code{(1,M)}.
This function performs the mapping from elements \code{(j,k)}
of symmetric positive-definite square matrices to the columns of
another matrix representing such.
This is called the \emph{matrix-band} format and is used by
the \pkg{VGAM} package.
}
\value{
This function has a dual purpose depending on the value of \code{both}.
If \code{both = FALSE} then the column number corresponding to the
\code{j}-\code{k} element of the matrix is returned.
If \code{both = TRUE} then \code{j} and \code{k} are ignored and a list
with the following components is returned.
\item{row.index}{
The row indices of the upper triangular part of the matrix
(this may or may not include the diagonal elements, depending on the
argument \code{diag}).
}
\item{col.index}{
The column indices of the upper triangular part of the matrix
(this may or may not include the diagonal elements, depending on the
argument \code{diag}).
}
}
%\references{
% The website \url{http://www.stat.auckland.ac.nz/~yee} contains
% some additional information.
%
%
%}
\author{ T. W. Yee }
\note{
This function is used in the \code{weight} slot of many \pkg{VGAM}
family functions (see \code{\link{vglmff-class}}), especially those
whose \eqn{M} is determined by the data, e.g.,
\code{\link{dirichlet}}, \code{\link{multinomial}}.
}
\seealso{
\code{\link{vglmff-class}}.
%\code{ima}.
}
\examples{
iam(1, 2, M = 3)  # The 4th column represents element (1,2) of a 3x3 matrix
iam(NULL, NULL, M = 3, both = TRUE)  # Return the row and column indices

dirichlet()@weight

M <- 4
temp1 <- iam(NA, NA, M = M, both = TRUE)
mat1 <- matrix(NA, M, M)
mat1[cbind(temp1$row, temp1$col)] = 1:length(temp1$row)
mat1  # More commonly used

temp2 <- iam(NA, NA, M = M, both = TRUE, diag = FALSE)
mat2 <- matrix(NA, M, M)
mat2[cbind(temp2$row, temp2$col)] = 1:length(temp2$row)
mat2  # Rarely used
}
\keyword{manip}
\keyword{programming}
VGAM/man/betabinomial.Rd0000644000176200001440000001753513135276753014513 0ustar liggesusers\name{betabinomial}
\alias{betabinomial}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Beta-binomial Distribution Family Function }
\description{
Fits a beta-binomial distribution by maximum likelihood estimation.
The two parameters here are the mean and correlation coefficient.
}
\usage{
betabinomial(lmu = "logit", lrho = "logit", irho = NULL, imethod = 1,
             ishrinkage = 0.95, nsimEIM = NULL, zero = "rho")
}
%- maybe also 'usage' for other objects documented here.
% ishrinkage = 0.95, nsimEIM = NULL, zero = 2
\arguments{
\item{lmu, lrho}{
Link functions applied to the two parameters.
See \code{\link{Links}} for more choices.
The defaults ensure the parameters remain in \eqn{(0,1)},
however, see the warning below.
}
\item{irho}{
Optional initial value for the correlation parameter.
If given, it must be in \eqn{(0,1)}, and is recycled to the
necessary length.
Assign this argument a value if a convergence failure occurs.
Having \code{irho = NULL} means an initial value is obtained internally,
though this can give unsatisfactory results.
}
\item{imethod}{
An integer with value \code{1} or \code{2} or \ldots, which specifies
the initialization method for \eqn{\mu}{mu}.
If failure to converge occurs try another value and/or specify a
value for \code{irho}.
}
\item{zero}{
Specifies which linear/additive predictor is to be modelled as an
intercept only. If assigned, the single value can be either \code{1}
or \code{2}. The default is to have a single correlation parameter.
To model both parameters as functions of the covariates assign
\code{zero = NULL}.
See \code{\link{CommonVGAMffArguments}} for more information.
}
\item{ishrinkage, nsimEIM}{
See \code{\link{CommonVGAMffArguments}} for more information.
The argument \code{ishrinkage} is used only if \code{imethod = 2}.
Using the argument \code{nsimEIM} may offer large advantages for large
values of \eqn{N} and/or large data sets.
}
}
\details{
There are several parameterizations of the beta-binomial distribution.
This family function directly models the mean (i.e., the probability
of success) and the correlation parameter.
The model can be written
\eqn{T|P=p \sim Binomial(N,p)}{T|P=p ~ Binomial(N,p)}
where \eqn{P} has a beta distribution with shape parameters
\eqn{\alpha}{alpha} and \eqn{\beta}{beta}.
Here,
\eqn{N} is the number of trials (e.g., litter size),
\eqn{T=NY} is the number of successes, and
\eqn{p} is the probability of a success (e.g., a malformation).
That is, \eqn{Y} is the \emph{proportion} of successes.
Like \code{\link{binomialff}}, the fitted values are the
estimated probability of success
(i.e., \eqn{E[Y]} and not \eqn{E[T]})
and the prior weights \eqn{N} are attached separately on the
object in a slot.

The probability function is
\deqn{P(T=t) = {N \choose t} \frac{B(\alpha+t, \beta+N-t)} {B(\alpha, \beta)}}{%
P(T=t) = choose(N,t) B(alpha+t, beta+N-t) / B(alpha, beta)}
where \eqn{t=0,1,\ldots,N}, and \eqn{B} is the
\code{\link[base:Special]{beta}} function
with shape parameters \eqn{\alpha}{alpha} and \eqn{\beta}{beta}.
Recall \eqn{Y = T/N} is the real response being modelled.

The default model is
\eqn{\eta_1 = logit(\mu)}{eta1 = logit(mu)}
and \eqn{\eta_2 = logit(\rho)}{eta2 = logit(rho)}
because both parameters lie between 0 and 1.
The mean (of \eqn{Y}) is
\eqn{p = \mu = \alpha / (\alpha + \beta)}{p = mu = alpha / (alpha + beta)}
and the variance (of \eqn{Y}) is
\eqn{\mu(1-\mu)(1+(N-1)\rho)/N}{mu(1-mu)(1+(N-1)rho)/N}.
Here, the correlation \eqn{\rho}{rho} is given by
\eqn{1/(1 + \alpha + \beta)}{1/(1 + alpha + beta)}
and is the correlation between the \eqn{N} individuals
within a litter.
A \emph{litter effect} is typically reflected by a positive value of
\eqn{\rho}{rho}.
It is known as the \emph{over-dispersion parameter}.

This family function uses Fisher scoring.
Elements of the second-order expected derivatives with respect to
\eqn{\alpha}{alpha} and \eqn{\beta}{beta} are computed numerically,
which may fail for large \eqn{\alpha}{alpha}, \eqn{\beta}{beta},
\eqn{N} or else take a long time.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}.

Suppose \code{fit} is a fitted beta-binomial model.
Then \code{fit@y} contains the sample proportions \eqn{y},
\code{fitted(fit)} returns estimates of \eqn{E(Y)}, and
\code{weights(fit, type="prior")} returns the number of trials \eqn{N}.
}
\references{
Moore, D. F. and Tsiatis, A.
(1991)
Robust estimation of the variance in moment methods for
extra-binomial and extra-Poisson variation.
\emph{Biometrics}, \bold{47}, 383--401.

Prentice, R. L. (1986)
Binary regression using an extended beta-binomial distribution,
with discussion of correlation induced by
covariate measurement errors.
\emph{Journal of the American Statistical Association},
\bold{81}, 321--327.
}
\author{ T. W. Yee }
\note{
This function processes the input in the same way as
\code{\link{binomialff}}.
But it does not handle the case \eqn{N=1} very well because there
are two parameters to estimate, not one, for each row of the input.
Cases where \eqn{N=1} can be omitted via the \code{subset} argument
of \code{\link{vglm}}.

The \emph{extended} beta-binomial distribution of Prentice (1986)
is currently not implemented in the \pkg{VGAM} package as it has
range-restrictions for the correlation parameter that are currently
too difficult to handle in this package.
However, try \code{lrho = "rhobit"}.
}
\section{Warning }{
If the estimated rho parameter is close to 0 then it pays to try
\code{lrho = "rhobit"}.
One day this may become the default link function.

This family function is prone to numerical difficulties due to
the expected information matrices not being positive-definite or
being ill-conditioned over some regions of the parameter space.
If problems occur try setting \code{irho} to some numerical value,
\code{nsimEIM = 100}, say, or else use the \code{etastart} argument
of \code{\link{vglm}}, etc.
}
\seealso{
\code{\link{betabinomialff}},
\code{\link{Betabinom}},
\code{\link{binomialff}},
\code{\link{betaff}},
\code{\link{dirmultinomial}},
\code{\link{lirat}},
\code{\link{simulate.vlm}}.
}
\examples{
# Example 1
bdata <- data.frame(N = 10, mu = 0.5, rho = 0.8)
bdata <- transform(bdata,
                   y = rbetabinom(n = 100, size = N, prob = mu, rho = rho))
fit <- vglm(cbind(y, N-y) ~ 1, betabinomial, data = bdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
head(cbind(depvar(fit), weights(fit, type = "prior")))

# Example 2
fit <- vglm(cbind(R, N-R) ~ 1, betabinomial, lirat,
            trace = TRUE, subset = N > 1)
coef(fit, matrix = TRUE)
Coef(fit)
t(fitted(fit))
t(depvar(fit))
t(weights(fit, type = "prior"))

# Example 3, which is more complicated
lirat <- transform(lirat, fgrp = factor(grp))
summary(lirat)  # Only 5 litters in group 3
fit2 <- vglm(cbind(R, N-R) ~ fgrp + hb, betabinomial(zero = 2),
             data = lirat, trace = TRUE, subset = N > 1)
coef(fit2, matrix = TRUE)
\dontrun{ with(lirat,
     plot(hb[N > 1], fit2@misc$rho, xlab = "Hemoglobin",
          ylab = "Estimated rho",
          pch = as.character(grp[N > 1]), col = grp[N > 1])) }
\dontrun{  # cf. Figure 3 of Moore and Tsiatis (1991)
with(lirat, plot(hb, R / N, pch = as.character(grp), col = grp, las = 1,
                 xlab = "Hemoglobin level", ylab = "Proportion Dead",
                 main = "Fitted values (lines)"))
smalldf <- with(lirat, lirat[N > 1, ])
for (gp in 1:4) {
  xx <- with(smalldf, hb[grp == gp])
  yy <- with(smalldf, fitted(fit2)[grp == gp])
  ooo <- order(xx)
  lines(xx[ooo], yy[ooo], col = gp)
} }
}
\keyword{models}
\keyword{regression}
VGAM/man/zigeomUC.Rd0000644000176200001440000000476313135276753013614 0ustar liggesusers\name{Zigeom}
\alias{Zigeom}
\alias{dzigeom}
\alias{pzigeom}
\alias{qzigeom}
\alias{rzigeom}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Zero-Inflated Geometric Distribution }
\description{
Density, distribution function, quantile function and random generation
for the zero-inflated geometric distribution with parameter \code{pstr0}.
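% A quick check of the zero-probability identity in the Details below
% (a minimal sketch only; the parameter values are arbitrary):
% prob <- 0.5; phi <- 0.2
% dzigeom(0, prob, phi) -
%   (phi + (1 - phi) * dgeom(0, prob))  # Should be 0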
} \usage{ dzigeom(x, prob, pstr0 = 0, log = FALSE) pzigeom(q, prob, pstr0 = 0) qzigeom(p, prob, pstr0 = 0) rzigeom(n, prob, pstr0 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{prob}{see \code{\link[stats]{dgeom}}.} \item{n}{ Same as in \code{\link[stats]{runif}}. } \item{pstr0}{ Probability of structural zero (ignoring the geometric distribution), called \eqn{\phi}{phi}. The default value corresponds to the response having an ordinary geometric distribution. } \item{log}{ Logical. Return the logarithm of the answer? } } \details{ The probability function of \eqn{Y} is 0 with probability \eqn{\phi}{phi}, and \eqn{geometric(prob)} with probability \eqn{1-\phi}{1-phi}. Thus \deqn{P(Y=0) =\phi + (1-\phi) P(W=0)}{% P(Y=0) = phi + (1-phi) * P(W=0)} where \eqn{W} is distributed \eqn{geometric(prob)}. } \value{ \code{dzigeom} gives the density, \code{pzigeom} gives the distribution function, \code{qzigeom} gives the quantile function, and \code{rzigeom} generates random deviates. } %\references{ } \author{ T. W. Yee } \note{ The argument \code{pstr0} is recycled to the required length, and must have values which lie in the interval \eqn{[0,1]}. These functions actually allow for \emph{zero-deflation}. That is, the resulting probability of a zero count is \emph{less than} the nominal value of the parent distribution. See \code{\link{Zipois}} for more information. } \seealso{ \code{\link{zigeometric}}, \code{\link[stats]{dgeom}}. } \examples{ prob <- 0.5; pstr0 <- 0.2; x <- (-1):20 (ii <- dzigeom(x, prob, pstr0)) max(abs(cumsum(ii) - pzigeom(x, prob, pstr0))) # Should be 0 table(rzigeom(1000, prob, pstr0)) \dontrun{ x <- 0:10 barplot(rbind(dzigeom(x, prob, pstr0), dgeom(x, prob)), beside = TRUE, col = c("blue","orange"), ylab = "P[Y = y]", xlab = "y", las = 1, main = paste("zigeometric(", prob, ", pstr0 = ", pstr0, ") (blue) vs", " geometric(", prob, ") (orange)", sep = ""), names.arg = as.character(x)) } } \keyword{distribution} VGAM/man/sinmad.Rd0000644000176200001440000001024213135276753013334 0ustar liggesusers\name{sinmad} \alias{sinmad} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Singh-Maddala Distribution Family Function } \description{ Maximum likelihood estimation of the 3-parameter Singh-Maddala distribution. } \usage{ sinmad(lscale = "loge", lshape1.a = "loge", lshape3.q = "loge", iscale = NULL, ishape1.a = NULL, ishape3.q = NULL, imethod = 1, lss = TRUE, gscale = exp(-5:5), gshape1.a = exp(-5:5), gshape3.q = exp(-5:5), probs.y = c(0.25, 0.5, 0.75), zero = "shape") } %- maybe also 'usage' for other objects documented here. % zero = ifelse(lss, -(2:3), -c(1, 3)) \arguments{ \item{lss}{ See \code{\link{CommonVGAMffArguments}} for important information. } \item{lshape1.a, lscale, lshape3.q}{ Parameter link functions applied to the (positive) parameters \eqn{a}, \code{scale}, and \eqn{q}. See \code{\link{Links}} for more choices. } \item{iscale, ishape1.a, ishape3.q, imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for information. For \code{imethod = 2} a good initial value for \code{ishape3.q} is needed to obtain good estimates for the other parameters. } \item{gscale, gshape1.a, gshape3.q}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{probs.y}{ See \code{\link{CommonVGAMffArguments}} for information. 
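% A numerical check of the closed-form CDF given in the Details below
% (a minimal sketch; it assumes psinmad() of \code{\link{Sinmad}}, and
% the parameter values are arbitrary):
% aa <- 2; bb <- 3; qq <- 1.5; yy <- seq(0.1, 9, by = 0.4)
% max(abs(psinmad(yy, scale = bb, shape1.a = aa, shape3.q = qq) -
%         (1 - (1 + (yy / bb)^aa)^(-qq))))  # Should be 0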
} } \details{ The 3-parameter Singh-Maddala distribution is the 4-parameter generalized beta II distribution with shape parameter \eqn{p=1}. It is known under various other names, such as the Burr XII (or just the Burr distribution), Pareto IV, beta-P, and generalized log-logistic distribution. More details can be found in Kleiber and Kotz (2003). Some distributions which are special cases of the 3-parameter Singh-Maddala are the Lomax (\eqn{a=1}), Fisk (\eqn{q=1}), and paralogistic (\eqn{a=q}). The Singh-Maddala distribution has density \deqn{f(y) = aq y^{a-1} / [b^a \{1 + (y/b)^a\}^{1+q}]}{% f(y) = aq y^(a-1) / [b^a (1 + (y/b)^a)^(1+q)]} for \eqn{a > 0}, \eqn{b > 0}, \eqn{q > 0}, \eqn{y \geq 0}{y >= 0}. Here, \eqn{b} is the scale parameter \code{scale}, and the others are shape parameters. The cumulative distribution function is \deqn{F(y) = 1 - [1 + (y/b)^a]^{-q}.}{% F(y) = 1 - [1 + (y/b)^a]^(-q).} The mean is \deqn{E(Y) = b \, \Gamma(1 + 1/a) \, \Gamma(q - 1/a) / \Gamma(q)}{% E(Y) = b gamma(1 + 1/a) gamma(q - 1/a) / gamma(q)} provided \eqn{-a < 1 < aq}; these are returned as the fitted values. This family function handles multiple responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \note{ See the notes in \code{\link{genbetaII}}. } \seealso{ \code{\link{Sinmad}}, \code{\link{genbetaII}}, \code{\link{betaII}}, \code{\link{dagum}}, \code{\link{fisk}}, \code{\link{inv.lomax}}, \code{\link{lomax}}, \code{\link{paralogistic}}, \code{\link{inv.paralogistic}}, \code{\link{simulate.vlm}}. } \examples{ sdata <- data.frame(y = rsinmad(n = 1000, shape1 = exp(1), scale = exp(2), shape3 = exp(0))) fit <- vglm(y ~ 1, sinmad(lss = FALSE), data = sdata, trace = TRUE) fit <- vglm(y ~ 1, sinmad(lss = FALSE, ishape1.a = exp(1)), data = sdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) summary(fit) # Harder problem (has the shape3.q parameter going to infinity) set.seed(3) sdata <- data.frame(y1 = rbeta(1000, 6, 6)) # hist(with(sdata, y1)) if (FALSE) { # These struggle fit1 <- vglm(y1 ~ 1, sinmad(lss = FALSE), data = sdata, trace = TRUE) fit1 <- vglm(y1 ~ 1, sinmad(lss = FALSE), data = sdata, trace = TRUE, crit = "coef") Coef(fit1) } # Try this remedy: fit2 <- vglm(y1 ~ 1, sinmad(lss = FALSE, ishape3.q = 3, lshape3.q = "loglog"), data = sdata, trace = TRUE, stepsize = 0.05, maxit = 99) coef(fit2, matrix = TRUE) Coef(fit2) } \keyword{models} \keyword{regression} VGAM/man/oipospoisUC.Rd0000644000176200001440000000715213135276753014343 0ustar liggesusers\name{Oipospois} \alias{Oipospois} \alias{doipospois} \alias{poipospois} \alias{qoipospois} \alias{roipospois} %- Also NEED an '\alias' for EACH other topic documented here. \title{ One-Inflated Positive Poisson Distribution } \description{ Density, distribution function, quantile function and random generation for the one-inflated positive Poisson distribution with parameter \code{pstr1}. } \usage{ doipospois(x, lambda, pstr1 = 0, log = FALSE) poipospois(q, lambda, pstr1 = 0) qoipospois(p, lambda, pstr1 = 0) roipospois(n, lambda, pstr1 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, p, q, n}{Same as \code{\link{Pospois}}.} \item{lambda}{ Vector of positive means. 
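% A quick check of the mixture identity in the Details below (a sketch
% only; it assumes dpospois() of \code{\link{Pospois}}):
% lambda <- 3; phi <- 0.2
% doipospois(1, lambda, pstr1 = phi) -
%   (phi + (1 - phi) * dpospois(1, lambda))  # Should be 0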
}
\item{pstr1}{
Probability of a structural one
(i.e., ignoring the positive Poisson distribution),
called \eqn{\phi}{phi}.
The default value of \eqn{\phi = 0}{phi = 0} corresponds
to the response having a positive Poisson distribution.
}
\item{log}{ Logical. Return the logarithm of the answer? }
}
\details{
The probability function of \eqn{Y} is 1 with probability
\eqn{\phi}{phi}, and \eqn{PosPoisson(\lambda)}{PosPoisson(lambda)}
with probability \eqn{1-\phi}{1-phi}. Thus
\deqn{P(Y=1) =\phi + (1-\phi) P(W=1)}{%
P(Y=1) = phi + (1-phi) * P(W=1)}
where \eqn{W} is distributed as a
positive \eqn{Poisson(\lambda)}{Poisson(lambda)} random variate.
}
\value{
\code{doipospois} gives the density,
\code{poipospois} gives the distribution function,
\code{qoipospois} gives the quantile function, and
\code{roipospois} generates random deviates.
}
%\references{ }
\author{ T. W. Yee }
\note{
The argument \code{pstr1} is recycled to the required length,
and usually has values which lie in the interval \eqn{[0,1]}.

These functions actually allow for the \emph{one-deflated positive
Poisson} distribution. Here, \code{pstr1} is also permitted to lie in
the interval \code{[-lambda / (expm1(lambda) - lambda), 0]}.
The resulting probability of a unit count is \emph{less than}
the nominal positive Poisson value, and the use of \code{pstr1} to
stand for the probability of a structural 1 loses its meaning.
%
%
%
When \code{pstr1} equals \code{-lambda / (expm1(lambda) - lambda)}
this corresponds to the 0- and 1-truncated Poisson distribution.
}
\seealso{
\code{\link{Pospois}},
\code{\link{oapospoisson}},
\code{\link{oipospoisson}},
\code{\link{otpospoisson}},
\code{\link{pospoisson}},
\code{\link[stats:Poisson]{dpois}},
\code{\link{poissonff}}.
}
\examples{
lambda <- 3; pstr1 <- 0.2; x <- (-1):7
(ii <- doipospois(x, lambda, pstr1 = pstr1))
table(roipospois(100, lambda, pstr1 = pstr1))
round(doipospois(1:10, lambda, pstr1 = pstr1) * 100)  # Should be similar

\dontrun{ x <- 0:10
par(mfrow = c(2, 1))  # One-Inflated Positive Poisson
barplot(rbind(doipospois(x, lambda, pstr1 = pstr1),
                dpospois(x, lambda)),
        beside = TRUE, col = c("blue", "orange"),
        main = paste("OIPP(", lambda, ", pstr1 = ", pstr1, ") (blue) vs",
                     " PosPoisson(", lambda, ") (orange)", sep = ""),
        names.arg = as.character(x))

deflat.limit <- -lambda / (expm1(lambda) - lambda)  # 1-deflated Pos Poisson
newpstr1 <- round(deflat.limit, 3) + 0.001  # Inside and near the boundary
barplot(rbind(doipospois(x, lambda, pstr1 = newpstr1),
                dpospois(x, lambda)),
        beside = TRUE, col = c("blue","orange"),
        main = paste("ODPP(", lambda, ", pstr1 = ", newpstr1, ") (blue) vs",
                     " PosPoisson(", lambda, ") (orange)", sep = ""),
        names.arg = as.character(x)) }
}
\keyword{distribution}
VGAM/man/nparamvglm.Rd0000644000176200001440000000425013135276753014227 0ustar liggesusers\name{nparam.vlm}
\alias{nparam.vlm}
\alias{nparam}
%\alias{nparam.vglm}
\alias{nparam.vgam}
\alias{nparam.rrvglm}
\alias{nparam.qrrvglm}
\alias{nparam.rrvgam}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Number of Parameters }
\description{
Returns the number of parameters in a fitted model object.
}
\usage{
nparam(object, \dots)
nparam.vlm(object, dpar = TRUE, \dots)
nparam.vgam(object, dpar = TRUE, linear.only = FALSE, \dots)
nparam.rrvglm(object, dpar = TRUE, \dots)
nparam.qrrvglm(object, dpar = TRUE, \dots)
nparam.rrvgam(object, dpar = TRUE, \dots)
}
%- maybe also 'usage' for other objects documented here.
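% Since the code was copied from the AIC() methods functions (see the
% Details below), one simple consistency check is the AIC identity
% (a sketch only; it mirrors the example at the end of this file and
% assumes no estimated dispersion parameter):
% pneumo <- transform(pneumo, let = log(exposure.time))
% fit1 <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
% AIC(fit1) - (-2 * logLik(fit1) + 2 * nparam(fit1))  # Should be 0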
\arguments{ \item{object}{ Some \pkg{VGAM} object, for example, having class \code{\link{vglmff-class}}. } \item{\dots}{ Other possible arguments fed into the function. } \item{dpar}{ Logical, include any (estimated) dispersion parameters as a parameter? } \item{linear.only}{ Logical, include only the number of linear (parametric) parameters? } } \details{ The code was copied from the \code{AIC()} methods functions. } \value{ Returns a numeric value with the corresponding number of parameters. For \code{\link{vgam}} objects, this may be real rather than integer, because the nonlinear degrees of freedom is real-valued. } \author{T. W. Yee. } %\note{ % This code has not been checked fully. % % %} %\references{ % Sakamoto, Y., Ishiguro, M., and Kitagawa G. (1986). % \emph{Akaike Information Criterion Statistics}. % D. Reidel Publishing Company. %} \section{Warning }{ This code has not been double-checked. } \seealso{ VGLMs are described in \code{\link{vglm-class}}; VGAMs are described in \code{\link{vgam-class}}; RR-VGLMs are described in \code{\link{rrvglm-class}}; \code{\link{AICvlm}}. } \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) (fit1 <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)) coef(fit1) coef(fit1, matrix = TRUE) nparam(fit1) (fit2 <- vglm(hits ~ 1, quasipoissonff, weights = ofreq, data = V1)) coef(fit2) coef(fit2, matrix = TRUE) nparam(fit2) nparam(fit2, dpar = FALSE) } \keyword{models} \keyword{regression} VGAM/man/expgeometricUC.Rd0000644000176200001440000000426713135276753015016 0ustar liggesusers\name{expgeom} \alias{expgeom} \alias{dexpgeom} \alias{pexpgeom} \alias{qexpgeom} \alias{rexpgeom} \title{The Exponential Geometric Distribution} \description{ Density, distribution function, quantile function and random generation for the exponential geometric distribution. } \usage{ dexpgeom(x, scale = 1, shape, log = FALSE) pexpgeom(q, scale = 1, shape) qexpgeom(p, scale = 1, shape) rexpgeom(n, scale = 1, shape) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{scale, shape}{ positive scale and shape parameters. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dexpgeom} gives the density, \code{pexpgeom} gives the distribution function, \code{qexpgeom} gives the quantile function, and \code{rexpgeom} generates random deviates. } \author{ J. G. Lauder and T. W. Yee } \details{ See \code{\link{expgeometric}}, the \pkg{VGAM} family function for estimating the parameters, for the formula of the probability density function and other details. } \note{ We define \code{scale} as the reciprocal of the scale parameter used by Adamidis and Loukas (1998). } \seealso{ \code{\link{expgeometric}}, \code{\link{exponential}}, \code{\link{geometric}}. 
}
\examples{
\dontrun{
shape <- 0.5; scale <- 1; nn <- 501
x <- seq(-0.10, 3.0, len = nn)
plot(x, dexpgeom(x, scale, shape), type = "l", las = 1, ylim = c(0, 2),
     ylab = paste("[dp]expgeom(shape = ", shape, ", scale = ", scale, ")"),
     col = "blue", cex.main = 0.8,
     main = "Blue is density, red is cumulative distribution function",
     sub = "Purple lines are the 10,20,...,90 percentiles")
lines(x, pexpgeom(x, scale, shape), col = "red")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qexpgeom(probs, scale, shape)
lines(Q, dexpgeom(Q, scale, shape), col = "purple", lty = 3, type = "h")
lines(Q, pexpgeom(Q, scale, shape), col = "purple", lty = 3, type = "h")
abline(h = probs, col = "purple", lty = 3)
max(abs(pexpgeom(Q, scale, shape) - probs))  # Should be 0
}
}
\keyword{distribution}
VGAM/man/logF.Rd0000644000176200001440000000502213135276753012750 0ustar liggesusers\name{logF}
\alias{logF}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Log F Distribution Family Function }
\description{
  Maximum likelihood estimation of the 2-parameter log F distribution.
}
\usage{
logF(lshape1 = "loge", lshape2 = "loge",
     ishape1 = NULL, ishape2 = 1, imethod = 1)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{lshape1, lshape2}{
    % Character.
    Parameter link functions for the shape parameters.
    Called \eqn{\alpha}{alpha} and \eqn{\beta}{beta} respectively.
    See \code{\link{Links}} for more choices.
  }
  \item{ishape1, ishape2}{
    Optional initial values for the shape parameters.
    If given, they must be numeric, and the values are recycled
    to the appropriate length.
    The default is to choose the value internally.
    See \code{\link{CommonVGAMffArguments}} for more information.
  }
  \item{imethod}{
    Initialization method.
    Either the value 1, 2, or \ldots.
    See \code{\link{CommonVGAMffArguments}} for more information.
  }
}
\details{
  The density for this distribution is
  \deqn{f(y; \alpha, \beta) = \exp(\alpha y) / [B(\alpha,\beta)
        (1 + e^y)^{\alpha + \beta}] }{%
        f(y; alpha, beta) = exp(alpha * y) / [B(alpha, beta) *
        (1 + exp(y))^(alpha + beta)] }
  where \eqn{y} is real,
  \eqn{\alpha > 0}{alpha > 0},
  \eqn{\beta > 0}{beta > 0},
  and \eqn{B(., .)} is the beta function
  \code{\link[base:Special]{beta}}.
}
\value{
  An object of class \code{"vglmff"}
  (see \code{\link{vglmff-class}}).
  The object is used by modelling functions
  such as \code{\link{vglm}} and \code{\link{vgam}}.
}
\references{
  Jones, M. C. (2008).
  On a class of distributions with simple exponential tails.
  \emph{Statistica Sinica}, \bold{18}(3), 1101--1110.
  % Section 3.2.
}
\author{ Thomas W. Yee }
%\section{Warning}{
%
%}
%\note{
%}
\seealso{
  \code{\link{dlogF}},
  \code{\link{logff}}.
  % \code{\link{simulate.vlm}}.
}
\examples{
nn <- 1000
ldata <- data.frame(y1 = rnorm(nn, m = +1, sd = exp(2)),  # Not proper data
                    x2 = rnorm(nn, m = -1, sd = exp(2)),
                    y2 = rnorm(nn, m = -1, sd = exp(2)))  # Not proper data
fit1 <- vglm(y1 ~ 1,  logF, data = ldata, trace = TRUE)
fit2 <- vglm(y2 ~ x2, logF, data = ldata, trace = TRUE)
coef(fit2, matrix = TRUE)
summary(fit2)
vcov(fit2)
head(fitted(fit1))
with(ldata, mean(y1))
max(abs(head(fitted(fit1)) - with(ldata, mean(y1))))
}
\keyword{models}
\keyword{regression}
VGAM/man/hunua.Rd0000644000176200001440000000403613135276753013205 0ustar liggesusers\name{hunua}
\alias{hunua}
\docType{data}
\title{Hunua Ranges Data}
\description{
  The \code{hunua} data frame has 392 rows and 18 columns.
  Altitude is explanatory, and there are binary responses
  (presence/absence = 1/0 respectively) for 17 plant species.
}
\usage{data(hunua)}
\format{
  This data frame contains the following columns:
  \describe{
    \item{agaaus}{Agathis australis, or Kauri}
    \item{beitaw}{Beilschmiedia tawa, or Tawa}
    \item{corlae}{Corynocarpus laevigatus}
    \item{cyadea}{Cyathea dealbata}
    \item{cyamed}{Cyathea medullaris}
    \item{daccup}{Dacrydium cupressinum}
    \item{dacdac}{Dacrycarpus dacrydioides}
    \item{eladen}{Elaeocarpus dentatus}
    \item{hedarb}{Hedycarya arborea}
    \item{hohpop}{Hoheria populnea}
    \item{kniexc}{Knightia excelsa, or Rewarewa}
    \item{kuneri}{Kunzea ericoides}
    \item{lepsco}{Leptospermum scoparium}
    \item{metrob}{Metrosideros robusta}
    \item{neslan}{Nestegis lanceolata}
    \item{rhosap}{Rhopalostylis sapida}
    \item{vitluc}{Vitex lucens, or Puriri}
    \item{altitude}{meters above sea level}
  }
}
\details{
  These were collected from the Hunua Ranges, a small forest in
  southern Auckland, New Zealand.
  At 392 sites in the forest, the presence/absence of 17 plant
  species was recorded, as well as the altitude.
  Each site had an area of 200\eqn{m^2}{m^2}.
}
\source{
  Dr Neil Mitchell, University of Auckland.
}
%\references{
%  None.
%}
\seealso{
  \code{\link{waitakere}}.
}
\examples{
# Fit a GAM using vgam() and compare it with the Waitakere Ranges one
fit.h <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = hunua)
\dontrun{
plot(fit.h, se = TRUE, lcol = "orange", scol = "orange", llwd = 2,
     slwd = 2, main = "Orange is Hunua, Blue is Waitakere")
}
head(predict(fit.h, hunua, type = "response"))

fit.w <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = waitakere)
\dontrun{
plot(fit.w, se = TRUE, lcol = "blue", scol = "blue", add = TRUE)
}
head(predict(fit.w, hunua, type = "response"))  # Same as above?
}
\keyword{datasets}
VGAM/man/vglm.Rd0000644000176200001440000004423113135276753013033 0ustar liggesusers\name{vglm}
\alias{vglm}
%\alias{vglm.fit}
\title{Fitting Vector Generalized Linear Models }
\description{
  \code{vglm} is used to fit vector generalized linear models (VGLMs).
  This is a very large class of models
  that includes generalized linear models (GLMs) as a special case.
}
\usage{
vglm(formula,
     family = stop("argument 'family' needs to be assigned"),
     data = list(), weights = NULL, subset = NULL,
     na.action = na.fail, etastart = NULL, mustart = NULL,
     coefstart = NULL, control = vglm.control(...), offset = NULL,
     method = "vglm.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE,
     contrasts = NULL, constraints = NULL, extra = list(),
     form2 = NULL, qr.arg = TRUE, smart = TRUE, ...)
}
%- maybe also `usage' for other objects documented here.
\arguments{
  \item{formula}{
  a symbolic description of the model to be fit.
  The RHS of the formula is applied to each linear predictor.
  Different variables in each linear predictor
  can be chosen by specifying constraint matrices.
  }
  \item{family}{
  a function of class \code{"vglmff"}
  (see \code{\link{vglmff-class}}) describing
  what statistical model is to be fitted.
  This is called a ``\pkg{VGAM} family function''.
  See \code{\link{CommonVGAMffArguments}}
  for general information about many types of
  arguments found in this type of function.
  The argument name \code{"family"} is used loosely and for
  the ease of existing \code{\link[stats]{glm}} users;
  there is no concept of a formal ``error distribution'' for VGLMs.
  Possibly a better name for the argument would be \code{"model"},
  but unfortunately that name has already been taken.
  }
  \item{data}{
  an optional data frame containing the variables in the model.
  By default the variables are taken from \code{environment(formula)},
  typically the environment from which \code{vglm} is called.
  }
  \item{weights}{
  an optional vector or matrix of (prior fixed and known) weights
  to be used in the fitting process.
  If the \pkg{VGAM} family function handles multiple responses
  (\eqn{Q > 1} of them, say) then
  \code{weights} can be a matrix with \eqn{Q} columns.
  Each column matches the respective response.
  If it is a vector (the usual case) then it is recycled into a
  matrix with \eqn{Q} columns.
  The values of \code{weights} must be positive; try setting
  a very small value such as \code{1.0e-8} to effectively
  delete an observation.

% 20140507:
  Currently the \code{weights} argument does not support sampling
  weights from complex sampling designs.
  And currently sandwich estimators are not computed in any
  shape or form.
  The present weights are multiplied by the corresponding
  log-likelihood contributions.

% If \code{weights} is a matrix,
% then it must be in \emph{matrix-band} form, whereby the
% first \eqn{M} columns of the matrix are the diagonals,
% followed by the upper-diagonal band, followed by the
% band above that, etc. In this case, there can be up to
% \eqn{M(M+1)} columns, with the last column corresponding
% to the (1,\eqn{M}) elements of the weight matrices.
  }
  \item{subset}{
  an optional logical vector specifying a subset of
  observations to be used in the fitting process.
  }
  \item{na.action}{
  a function which indicates what should happen when
  the data contain \code{NA}s.
  The default is set by the \code{na.action} setting
  of \code{\link[base]{options}}, and is \code{na.fail}
  if that is unset.
  The ``factory-fresh'' default is \code{na.omit}.
  }
  \item{etastart}{
  starting values for the linear predictors.
  It is a \eqn{M}-column matrix with the same number of rows as
  the response.
  If \eqn{M = 1} then it may be a vector.
  Note that \code{etastart} and the output of \code{predict(fit)}
  should be comparable.
  Here, \code{fit} is the fitted object.
  }
  \item{mustart}{
  starting values for the fitted values.
  It can be a vector or a matrix;
  if a matrix, then it has the same number of rows as the response.
  Usually \code{mustart} and the output of \code{fitted(fit)}
  should be comparable.
  Some family functions do not make use of this argument.
  }
  \item{coefstart}{
  starting values for the coefficient vector.
  The length and order must match that of \code{coef(fit)}.
  }
  \item{control}{
  a list of parameters for controlling the fitting process.
  See \code{\link{vglm.control}} for details.
  }
  \item{offset}{
  a vector or \eqn{M}-column matrix of offset values.
  These are \emph{a priori} known and are added to the
  linear/additive predictors during fitting.
  }
  \item{method}{
  the method to be used in fitting the model.
  The default (and presently only) method \code{vglm.fit()}
  uses iteratively reweighted least squares (IRLS).
  }
  \item{model}{
  a logical value indicating whether the
  \emph{model frame}
  should be assigned in the \code{model} slot.
  }
  \item{x.arg, y.arg}{
  logical values indicating whether
  the model matrix and response vector/matrix used in the fitting
  process should be assigned in the \code{x} and \code{y} slots.
  Note the model matrix is the LM model matrix; to get the VGLM
  model matrix type \code{model.matrix(vglmfit)} where
  \code{vglmfit} is a \code{vglm} object.
  }
  \item{contrasts}{
  an optional list. See the \code{contrasts.arg}
  of \code{\link{model.matrix.default}}.
  }
  \item{constraints}{
  an optional list of constraint matrices.
The components of the list must be named with the term it corresponds to (and it must match in character format exactly). There are two types of input: \code{"lm"}-type and \code{"vlm"}-type. The former is a subset of the latter. The former has a matrix for each term of the LM matrix. The latter has a matrix for each column of the VLM matrix. After fitting, the \code{\link{constraints}} extractor function may be applied; it returns the \code{"vlm"}-type list of constraint matrices by default. If \code{"lm"}-type are returned by \code{\link{constraints}} then these can be fed into this argument and it should give the same model as before. Each constraint matrix must have \eqn{M} rows, and be of full-column rank. By default, constraint matrices are the \eqn{M} by \eqn{M} identity matrix unless arguments in the family function itself override these values, e.g., \code{parallel} (see \code{\link{CommonVGAMffArguments}}). If \code{constraints} is used it must contain \emph{all} the terms; an incomplete list is not accepted. } \item{extra}{ an optional list with any extra information that might be needed by the \pkg{VGAM} family function. } \item{form2}{ The second (optional) formula. If argument \code{xij} is used (see \code{\link{vglm.control}}) then \code{form2} needs to have \emph{all} terms in the model. Also, some \pkg{VGAM} family functions such as \code{\link{micmen}} use this argument to input the regressor variable. If given, the slots \code{@Xm2} and \code{@Ym2} may be assigned. Note that smart prediction applies to terms in \code{form2} too. } \item{qr.arg}{ logical value indicating whether the slot \code{qr}, which returns the QR decomposition of the VLM model matrix, is returned on the object. } \item{smart}{ logical value indicating whether smart prediction (\code{\link{smartpred}}) will be used. } \item{\dots}{ further arguments passed into \code{\link{vglm.control}}. } } \details{ A vector generalized linear model (VGLM) is loosely defined as a statistical model that is a function of \eqn{M} linear predictors. The central formula is given by \deqn{\eta_j = \beta_j^T x}{% eta_j = beta_j^T x} where \eqn{x}{x} is a vector of explanatory variables (sometimes just a 1 for an intercept), and \eqn{\beta_j}{beta_j} is a vector of regression coefficients to be estimated. Here, \eqn{j=1,\ldots,M}, where \eqn{M} is finite. Then one can write \eqn{\eta=(\eta_1,\ldots,\eta_M)^T}{eta=(eta_1,\ldots,\eta_M)^T} as a vector of linear predictors. Most users will find \code{vglm} similar in flavour to \code{\link[stats]{glm}}. The function \code{vglm.fit} actually does the work. % If more than one of \code{etastart}, \code{start} and \code{mustart} % is specified, the first in the list will be used. } \value{ An object of class \code{"vglm"}, which has the following slots. Some of these may not be assigned to save space, and will be recreated if necessary later. \item{extra}{the list \code{extra} at the end of fitting.} \item{family}{the family function (of class \code{"vglmff"}).} \item{iter}{the number of IRLS iterations used.} \item{predictors}{a \eqn{M}-column matrix of linear predictors.} \item{assign}{a named list which matches the columns and the (LM) model matrix terms.} \item{call}{the matched call.} \item{coefficients}{a named vector of coefficients.} \item{constraints}{ a named list of constraint matrices used in the fitting. 
  }
  \item{contrasts}{the contrasts used (if any).}
  \item{control}{list of control parameters used in the fitting.}
  \item{criterion}{list of convergence criteria evaluated at the
                   final IRLS iteration.}
  \item{df.residual}{the residual degrees of freedom.}
  \item{df.total}{the total degrees of freedom.}
  \item{dispersion}{the scaling parameter.}
  \item{effects}{the effects.}
  \item{fitted.values}{
  the fitted values, as a matrix.
  This is often the mean but may be quantiles, or the location
  parameter, e.g., in the Cauchy model.
  }
  \item{misc}{a list to hold miscellaneous parameters.}
  \item{model}{the model frame.}
  \item{na.action}{a list holding information about missing values.}
  \item{offset}{if non-zero, a \eqn{M}-column matrix of offsets.}
  \item{post}{a list where post-analysis results may be put.}
  \item{preplot}{used by \code{\link{plotvgam}}, the plotting
                 parameters may be put here.}
  \item{prior.weights}{
  initially supplied weights (the \code{weights} argument).
  Also see \code{\link{weightsvglm}}.
  }
  \item{qr}{the QR decomposition used in the fitting.}
  \item{R}{the \bold{R} matrix in the QR decomposition used
           in the fitting.}
  \item{rank}{numerical rank of the fitted model.}
  \item{residuals}{the \emph{working} residuals at the final
                   IRLS iteration.}
  \item{ResSS}{residual sum of squares at the final IRLS iteration
               with the adjusted dependent vectors and
               weight matrices.}
  \item{smart.prediction}{
  a list of data-dependent parameters (if any)
  that are used by smart prediction.
  }
  \item{terms}{the \code{\link[stats]{terms}} object used.}
  \item{weights}{the working weight matrices at the final
                 IRLS iteration.
                 This is in matrix-band form.}
  \item{x}{the model matrix (linear model LM, not VGLM).}
  \item{xlevels}{the levels of the factors, if any, used in fitting.}
  \item{y}{the response, in matrix form.}

  This slot information is repeated at \code{\link{vglm-class}}.
}
\references{
Yee, T. W. (2015)
\emph{Vector Generalized Linear and Additive Models:
With an Implementation in R}.
New York, USA: Springer.

Yee, T. W. and Hastie, T. J. (2003)
Reduced-rank vector generalized linear models.
\emph{Statistical Modelling},
\bold{3}, 15--41.

Yee, T. W. and Wild, C. J. (1996)
Vector generalized additive models.
\emph{Journal of the Royal Statistical Society,
      Series B, Methodological},
\bold{58}, 481--493.

Yee, T. W. (2014)
Reduced-rank vector generalized linear models with
two linear predictors.
\emph{Computational Statistics and Data Analysis},
\bold{71}, 889--902.

Yee, T. W. (2008)
The \code{VGAM} Package.
\emph{R News}, \bold{8}, 28--39.

% Documentation accompanying the \pkg{VGAM} package at
% \url{http://www.stat.auckland.ac.nz/~yee}
% contains further information and examples.
}
\author{ Thomas W. Yee }
\note{
  This function can fit a wide variety of statistical models.
  Some of these are harder to fit than others because of inherent
  numerical difficulties associated with some of them.
  Successful model fitting benefits from cumulative experience.
  Varying the values of arguments in the \pkg{VGAM} family function
  itself is a good first step if difficulties arise,
  especially if initial values can be inputted.
  A second, more general step is to vary the values of arguments
  in \code{\link{vglm.control}}.
  A third step is to make use of arguments such as
  \code{etastart}, \code{coefstart} and \code{mustart}.

  Some \pkg{VGAM} family functions end in \code{"ff"}
  to avoid interference with other functions, e.g.,
  \code{\link{binomialff}}, \code{\link{poissonff}},
  \code{\link{gaussianff}}, \code{gammaff}.
  This is because \pkg{VGAM} family functions are incompatible with
  \code{\link[stats]{glm}}
  (and also \code{\link[gam]{gam}} in the \pkg{gam} library and
  \code{\link[mgcv]{gam}} in the \pkg{mgcv} library).

  The smart prediction (\code{\link{smartpred}}) library is
  incorporated within the \pkg{VGAM} library.

  The theory behind the scaling parameter is currently being made
  more rigorous, but it should give the same value as the scale
  parameter for GLMs.

  In Example 5 below, the \code{xij} argument is used to illustrate
  covariates that are specific to a linear predictor.
  Here, \code{lop}/\code{rop} are the ocular pressures of the
  left/right eye (artificial data).
  Variables \code{leye} and \code{reye} might be the presence/absence
  of a particular disease on the LHS/RHS eye respectively.
  See \code{\link{vglm.control}} and \code{\link{fill}} for more
  details and examples.
}
%~Make other sections like WARNING with \section{WARNING }{....} ~
\section{WARNING}{
  See warnings in \code{\link{vglm.control}}.
  Also, see warnings under \code{weights} above regarding
  sampling weights from complex sampling designs.
}
\seealso{
  \code{\link{vglm.control}},
  \code{\link{vglm-class}},
  \code{\link{vglmff-class}},
  \code{\link{smartpred}},
  \code{vglm.fit},
  \code{\link{fill}},
  \code{\link{rrvglm}},
  \code{\link{vgam}}.
  Methods functions include
  \code{\link{AICvlm}},
  \code{\link{coefvlm}},
  \code{\link{confintvglm}},
  \code{\link{constraints.vlm}},
  \code{\link{fittedvlm}},
  \code{\link{hatvaluesvlm}},
  \code{\link{hdeff.vglm}},
  \code{\link{linkfun.vglm}},
  \code{\link{lrp.vglm}},
  \code{\link{nobs.vlm}},
  \code{\link{npred.vlm}},
  \code{\link{plotvglm}},
  \code{\link{predictvglm}},
  \code{\link{summaryvglm}},
  \code{\link{lrtest_vglm}},
  etc.
}
\examples{
# Example 1. See help(glm)
print(d.AD <- data.frame(treatment = gl(3, 3),
                         outcome = gl(3, 1, 9),
                         counts = c(18,17,15,20,10,20,25,13,12)))
vglm.D93 <- vglm(counts ~ outcome + treatment, family = poissonff,
                 data = d.AD, trace = TRUE)
summary(vglm.D93)

# Example 2. Multinomial logit model
pneumo <- transform(pneumo, let = log(exposure.time))
vglm(cbind(normal, mild, severe) ~ let, multinomial, data = pneumo)

# Example 3. Proportional odds model
fit3 <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
coef(fit3, matrix = TRUE)
constraints(fit3)
model.matrix(fit3, type = "lm")  # LM model matrix
model.matrix(fit3)               # Larger VGLM (or VLM) model matrix

# Example 4. Bivariate logistic model
fit4 <- vglm(cbind(nBnW, nBW, BnW, BW) ~ age, binom2.or, coalminers)
coef(fit4, matrix = TRUE)
depvar(fit4)  # Responses are proportions
weights(fit4, type = "prior")

# Example 5. The use of the xij argument (simple case).
# The constraint matrix for 'op' has one column.
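# (In outline: 'op' acts as a placeholder regressor; via 'xij' it is
#  replaced by 'lop' in the linear predictor for the left eye and by
#  'rop' for the right eye, while exchangeable = TRUE keeps the two
#  regression coefficients equal.)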
nn <- 1000
eyesdat <- round(data.frame(lop = runif(nn),
                            rop = runif(nn),
                             op = runif(nn)), digits = 2)
eyesdat <- transform(eyesdat,
                     eta1 = -1 + 2 * lop,
                     eta2 = -1 + 2 * rop)
eyesdat <- transform(eyesdat,
                     leye = rbinom(nn, size = 1,
                                   prob = logit(eta1, inverse = TRUE)),
                     reye = rbinom(nn, size = 1,
                                   prob = logit(eta2, inverse = TRUE)))
head(eyesdat)
fit5 <- vglm(cbind(leye, reye) ~ op,
             binom2.or(exchangeable = TRUE, zero = 3),
             data = eyesdat, trace = TRUE,
             xij = list(op ~ lop + rop + fill(lop)),
             form2 = ~ op + lop + rop + fill(lop))
coef(fit5)
coef(fit5, matrix = TRUE)
constraints(fit5)
}
\keyword{models}
\keyword{regression}

%eyesdat$leye <- ifelse(runif(n) < 1/(1+exp(-1+2*eyesdat$lop)), 1, 0)
%eyesdat$reye <- ifelse(runif(n) < 1/(1+exp(-1+2*eyesdat$rop)), 1, 0)
%coef(fit, matrix = TRUE, compress = FALSE)

% 20090506 zz Put these examples elsewhere:
%
%# Example 6. The use of the xij argument (complex case).
%# Here is one method to handle the xij argument with a term that
%# produces more than one column in the model matrix.
%# The constraint matrix for 'op' has essentially one column.
%POLY3 <- function(x, ...) {
%  # A cubic; ensures that the basis functions are the same.
%  poly(c(x,...), 3)[1:length(x),]
%  head(poly(c(x,...), 3), length(x), drop = FALSE)
%}
%
%fit6 <- vglm(cbind(leye, reye) ~ POLY3(op), trace = TRUE,
%             fam = binom2.or(exchangeable = TRUE, zero=3), data=eyesdat,
%             xij = list(POLY3(op) ~ POLY3(lop,rop) + POLY3(rop,lop) +
%                                    fill(POLY3(lop,rop))),
%             form2 = ~ POLY3(op) + POLY3(lop,rop) + POLY3(rop,lop) +
%                                   fill(POLY3(lop,rop)))
%coef(fit6)
%coef(fit6, matrix = TRUE)
%head(predict(fit6))
%\dontrun{
%plotvgam(fit6, se = TRUE)  # Wrong since it plots against op, not lop.
%}
%
%
%# Example 7. The use of the xij argument (simple case).
%# Each constraint matrix has 4 columns.
%ymat <- rdiric(n <- 1000, shape=c(4,7,3,1))
%mydat <- data.frame(x1=runif(n), x2=runif(n), x3=runif(n), x4=runif(n),
%                    z1=runif(n), z2=runif(n), z3=runif(n), z4=runif(n),
%                    X2=runif(n), Z2=runif(n))
%mydat <- round(mydat, dig=2)
%fit7 <- vglm(ymat ~ X2 + Z2, data=mydat, crit="c",
%             fam = dirichlet(parallel = TRUE),  # Intercept is also parallel.
%             xij = list(Z2 ~ z1 + z2 + z3 + z4,
%                        X2 ~ x1 + x2 + x3 + x4),
%             form2 = ~ Z2 + z1 + z2 + z3 + z4 +
%                       X2 + x1 + x2 + x3 + x4)
%head(model.matrix(fit7, type="lm"))   # LM model matrix
%head(model.matrix(fit7, type="vlm"))  # Big VLM model matrix
%coef(fit7)
%coef(fit7, matrix = TRUE)
%max(abs(predict(fit7)-predict(fit7, new=mydat)))  # Predicts correctly
%summary(fit7)
VGAM/man/amlpoisson.Rd0000644000176200001440000001162713135276753014253 0ustar liggesusers\name{amlpoisson}
\alias{amlpoisson}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Poisson Regression by Asymmetric Maximum Likelihood Estimation }
\description{
  Poisson quantile regression estimated by maximizing an
  asymmetric likelihood function.
}
\usage{
amlpoisson(w.aml = 1, parallel = FALSE, imethod = 1, digw = 4,
           link = "loge")
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{w.aml}{
  Numeric, a vector of positive constants controlling the percentiles.
  The larger the value the larger the fitted percentile value
  (the proportion of points below the ``w-regression plane'').
  The default value of unity results in the ordinary maximum
  likelihood (MLE) solution.
  }
  \item{parallel}{
  If \code{w.aml} has more than one value then
  this argument allows the quantile curves to differ by the same
  amount as a function of the covariates.
Setting this to be \code{TRUE} should force the quantile curves to not cross (although they may not cross anyway). See \code{\link{CommonVGAMffArguments}} for more information. } \item{imethod}{ Integer, either 1 or 2 or 3. Initialization method. Choose another value if convergence fails. } \item{digw }{ Passed into \code{\link[base]{Round}} as the \code{digits} argument for the \code{w.aml} values; used cosmetically for labelling. } \item{link}{ See \code{\link{poissonff}}. } } \details{ This method was proposed by Efron (1992) and full details can be obtained there. % Equation numbers below refer to that article. The model is essentially a Poisson regression model (see \code{\link{poissonff}}) but the usual deviance is replaced by an asymmetric squared error loss function; it is multiplied by \eqn{w.aml} for positive residuals. The solution is the set of regression coefficients that minimize the sum of these deviance-type values over the data set, weighted by the \code{weights} argument (so that it can contain frequencies). Newton-Raphson estimation is used here. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Efron, B. (1991) Regression percentiles using asymmetric squared error loss. \emph{Statistica Sinica}, \bold{1}, 93--125. Efron, B. (1992) Poisson overdispersion estimates based on the method of asymmetric maximum likelihood. \emph{Journal of the American Statistical Association}, \bold{87}, 98--107. Koenker, R. and Bassett, G. (1978) Regression quantiles. \emph{Econometrica}, \bold{46}, 33--50. Newey, W. K. and Powell, J. L. (1987) Asymmetric least squares estimation and testing. \emph{Econometrica}, \bold{55}, 819--847. } \author{ Thomas W. Yee } \note{ On fitting, the \code{extra} slot has list components \code{"w.aml"} and \code{"percentile"}. The latter is the percent of observations below the ``w-regression plane'', which is the fitted values. Also, the individual deviance values corresponding to each element of the argument \code{w.aml} is stored in the \code{extra} slot. For \code{amlpoisson} objects, methods functions for the generic functions \code{qtplot} and \code{cdf} have not been written yet. About the jargon, Newey and Powell (1987) used the name \emph{expectiles} for regression surfaces obtained by asymmetric least squares. This was deliberate so as to distinguish them from the original \emph{regression quantiles} of Koenker and Bassett (1978). Efron (1991) and Efron (1992) use the general name \emph{regression percentile} to apply to all forms of asymmetric fitting. Although the asymmetric maximum likelihood method very nearly gives regression percentiles in the strictest sense for the normal and Poisson cases, the phrase \emph{quantile regression} is used loosely in this \pkg{VGAM} documentation. In this documentation the word \emph{quantile} can often be interchangeably replaced by \emph{expectile} (things are informal here). } \section{Warning }{ If \code{w.aml} has more than one value then the value returned by \code{deviance} is the sum of all the (weighted) deviances taken over all the \code{w.aml} values. See Equation (1.6) of Efron (1992). } \seealso{ \code{\link{amlnormal}}, \code{\link{amlbinomial}}, \code{\link{alaplace1}}. 
} \examples{ set.seed(1234) mydat <- data.frame(x = sort(runif(nn <- 200))) mydat <- transform(mydat, y = rpois(nn, exp(0 - sin(8*x)))) (fit <- vgam(y ~ s(x), fam = amlpoisson(w.aml = c(0.02, 0.2, 1, 5, 50)), mydat, trace = TRUE)) fit@extra \dontrun{ # Quantile plot with(mydat, plot(x, jitter(y), col = "blue", las = 1, main = paste(paste(round(fit@extra$percentile, digits = 1), collapse = ", "), "percentile-expectile curves"))) with(mydat, matlines(x, fitted(fit), lwd = 2)) } } \keyword{models} \keyword{regression} VGAM/man/maxwellUC.Rd0000644000176200001440000000427013135276753013766 0ustar liggesusers\name{Maxwell} \alias{Maxwell} \alias{dmaxwell} \alias{pmaxwell} \alias{qmaxwell} \alias{rmaxwell} \title{The Maxwell Distribution} \description{ Density, distribution function, quantile function and random generation for the Maxwell distribution. } \usage{ dmaxwell(x, rate, log = FALSE) pmaxwell(q, rate, lower.tail = TRUE, log.p = FALSE) qmaxwell(p, rate, lower.tail = TRUE, log.p = FALSE) rmaxwell(n, rate) } \arguments{ \item{x, q, p, n}{ Same as \code{\link[stats:Uniform]{Uniform}}. } \item{rate}{the (rate) parameter.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dmaxwell} gives the density, \code{pmaxwell} gives the distribution function, \code{qmaxwell} gives the quantile function, and \code{rmaxwell} generates random deviates. } \references{ Balakrishnan, N. and Nevzorov, V. B. (2003) \emph{A Primer on Statistical Distributions}. Hoboken, New Jersey: Wiley. } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{maxwell}}, the \pkg{VGAM} family function for estimating the (rate) parameter \eqn{a} by maximum likelihood estimation, for the formula of the probability density function. } \note{ The Maxwell distribution is related to the Rayleigh distribution. } \seealso{ \code{\link{maxwell}}, \code{\link{Rayleigh}}, \code{\link{rayleigh}}. } \examples{ \dontrun{ rate <- 3; x <- seq(-0.5, 3, length = 100) plot(x, dmaxwell(x, rate = rate), type = "l", col = "blue", las = 1, main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles", ylab = "") abline(h = 0, col = "blue", lty = 2) lines(x, pmaxwell(x, rate = rate), type = "l", col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qmaxwell(probs, rate = rate) lines(Q, dmaxwell(Q, rate), col = "purple", lty = 3, type = "h") lines(Q, pmaxwell(Q, rate), col = "purple", lty = 3, type = "h") abline(h = probs, col = "purple", lty = 3) max(abs(pmaxwell(Q, rate) - probs)) # Should be zero } } \keyword{distribution} VGAM/man/betageometric.Rd0000644000176200001440000001015213135276753014673 0ustar liggesusers\name{betageometric} \alias{betageometric} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Beta-geometric Distribution Family Function } \description{ Maximum likelihood estimation for the beta-geometric distribution. } \usage{ betageometric(lprob = "logit", lshape = "loge", iprob = NULL, ishape = 0.1, moreSummation = c(2, 100), tolerance = 1.0e-10, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lprob, lshape}{ Parameter link functions applied to the parameters \eqn{p}{prob} and \eqn{\phi}{phi} (called \code{prob} and \code{shape} below). The former lies in the unit interval and the latter is positive. 
See \code{\link{Links}} for more choices. } \item{iprob, ishape}{ Numeric. Initial values for the two parameters. A \code{NULL} means a value is computed internally. } \item{moreSummation}{ Integer, of length 2. When computing the expected information matrix a series summation from 0 to \code{moreSummation[1]*max(y)+moreSummation[2]} is made, in which the upper limit is an approximation to infinity. Here, \code{y} is the response. } \item{tolerance}{ Positive numeric. When all terms are less than this then the series is deemed to have converged. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. If used, the value must be from the set \{1,2\}. } } \details{ A random variable \eqn{Y} has a 2-parameter beta-geometric distribution if \eqn{P(Y=y) = p (1-p)^y}{P(Y=y) = prob * (1-prob)^y} for \eqn{y=0,1,2,\ldots}{y=0,1,2,...} where \eqn{p}{prob} are generated from a standard beta distribution with shape parameters \code{shape1} and \code{shape2}. The parameterization here is to focus on the parameters \eqn{p}{prob} and \eqn{\phi = 1/(shape1+shape2)}{phi = 1/(shape1+shape2)}, where \eqn{\phi}{phi} is \code{shape}. The default link functions for these ensure that the appropriate range of the parameters is maintained. The mean of \eqn{Y} is \eqn{E(Y) = shape2 / (shape1-1) = (1-p) / (p-\phi)}{E(Y) = shape2 / (shape1-1) = (1-prob) / (prob-phi)} if \code{shape1 > 1}, and if so, then this is returned as the fitted values. The geometric distribution is a special case of the beta-geometric distribution with \eqn{\phi=0}{phi=0} (see \code{\link{geometric}}). However, fitting data from a geometric distribution may result in numerical problems because the estimate of \eqn{\log(\phi)}{log(phi)} will 'converge' to \code{-Inf}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Paul, S. R. (2005) Testing goodness of fit of the geometric distribution: an application to human fecundability data. \emph{Journal of Modern Applied Statistical Methods}, \bold{4}, 425--433. } \author{ T. W. Yee } \note{ The first iteration may be very slow; if practical, it is best for the \code{weights} argument of \code{\link{vglm}} etc. to be used rather than inputting a very long vector as the response, i.e., \code{vglm(y ~ 1, ..., weights = wts)} is to be preferred over \code{vglm(rep(y, wts) ~ 1, ...)}. If convergence problems occur try inputting some values of argument \code{ishape}. If an intercept-only model is fitted then the \code{misc} slot of the fitted object has list components \code{shape1} and \code{shape2}. } \seealso{ \code{\link{geometric}}, \code{\link{betaff}}, \code{\link{rbetageom}}. } \examples{ bdata <- data.frame(y = 0:11, wts = c(227,123,72,42,21,31,11,14,6,4,7,28)) fitb <- vglm(y ~ 1, betageometric, data = bdata, weight = wts, trace = TRUE) fitg <- vglm(y ~ 1, geometric, data = bdata, weight = wts, trace = TRUE) coef(fitb, matrix = TRUE) Coef(fitb) sqrt(diag(vcov(fitb, untransform = TRUE))) fitb@misc$shape1 fitb@misc$shape2 # Very strong evidence of a beta-geometric: pchisq(2 * (logLik(fitb) - logLik(fitg)), df = 1, lower.tail = FALSE) } \keyword{models} \keyword{regression} VGAM/man/paralogistic.Rd0000644000176200001440000000610113135276753014541 0ustar liggesusers\name{paralogistic} \alias{paralogistic} %- Also NEED an '\alias' for EACH other topic documented here. 
\title{ Paralogistic Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter paralogistic distribution. } \usage{ paralogistic(lscale = "loge", lshape1.a = "loge", iscale = NULL, ishape1.a = NULL, imethod = 1, lss = TRUE, gscale = exp(-5:5), gshape1.a = seq(0.75, 4, by = 0.25), probs.y = c(0.25, 0.5, 0.75), zero = "shape") } %- maybe also 'usage' for other objects documented here. % zero = ifelse(lss, -2, -1) \arguments{ \item{lss}{ See \code{\link{CommonVGAMffArguments}} for important information. } \item{lshape1.a, lscale}{ Parameter link functions applied to the (positive) parameters \eqn{a} and \code{scale}. See \code{\link{Links}} for more choices. } \item{iscale, ishape1.a, imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for information. For \code{imethod = 2} a good initial value for \code{ishape1.a} is needed to obtain good estimates for the other parameter. } \item{gscale, gshape1.a}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{probs.y}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The 2-parameter paralogistic distribution is the 4-parameter generalized beta II distribution with shape parameter \eqn{p=1} and \eqn{a=q}. It is the 3-parameter Singh-Maddala distribution with \eqn{a=q}. More details can be found in Kleiber and Kotz (2003). The 2-parameter paralogistic has density \deqn{f(y) = a^2 y^{a-1} / [b^a \{1 + (y/b)^a\}^{1+a}]}{% f(y) = a^2 y^(a-1) / [b^a (1 + (y/b)^a)^(1+a)]} for \eqn{a > 0}, \eqn{b > 0}, \eqn{y \geq 0}{y >= 0}. Here, \eqn{b} is the scale parameter \code{scale}, and \eqn{a} is the shape parameter. The mean is \deqn{E(Y) = b \, \Gamma(1 + 1/a) \, \Gamma(a - 1/a) / \Gamma(a)}{% E(Y) = b gamma(1 + 1/a) gamma(a - 1/a) / gamma(a)} provided \eqn{a > 1}; these are returned as the fitted values. This family function handles multiple responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \note{ See the notes in \code{\link{genbetaII}}. } \seealso{ \code{\link{Paralogistic}}, \code{\link{sinmad}}, \code{\link{genbetaII}}, \code{\link{betaII}}, \code{\link{dagum}}, \code{\link{fisk}}, \code{\link{inv.lomax}}, \code{\link{lomax}}, \code{\link{inv.paralogistic}}. } \examples{ pdata <- data.frame(y = rparalogistic(n = 3000, exp(1), scale = exp(1))) fit <- vglm(y ~ 1, paralogistic(lss = FALSE), data = pdata, trace = TRUE) fit <- vglm(y ~ 1, paralogistic(ishape1.a = 2.3, iscale = 5), data = pdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/laplaceUC.Rd0000644000176200001440000000651713135276753013724 0ustar liggesusers\name{laplaceUC} \alias{dlaplace} \alias{plaplace} \alias{qlaplace} \alias{rlaplace} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Laplace Distribution } \description{ Density, distribution function, quantile function and random generation for the Laplace distribution with location parameter \code{location} and scale parameter \code{scale}. 
}
\usage{
dlaplace(x, location = 0, scale = 1, log = FALSE)
plaplace(q, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE)
qlaplace(p, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE)
rlaplace(n, location = 0, scale = 1)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{x, q}{vector of quantiles.}
  \item{p}{vector of probabilities.}
  \item{n}{number of observations.
    Same as in \code{\link[stats]{runif}}.
  }
  \item{location}{
    the location parameter \eqn{a}, which is the mean.
  }
  \item{scale}{
    the scale parameter \eqn{b}.
    Must consist of positive values.
  }
  \item{log}{
    Logical.
    If \code{log = TRUE} then the logarithm of the density is returned.
  }
  \item{lower.tail, log.p}{
    Same meaning as in \code{\link[stats:Normal]{pnorm}}
    or \code{\link[stats:Normal]{qnorm}}.
  }
}
\details{
  The Laplace distribution is often known as the double-exponential
  distribution and, for modelling, has heavier tails than the
  normal distribution.
  The Laplace density function is
  \deqn{f(y) = \frac{1}{2b} \exp \left( - \frac{|y-a|}{b}
               \right) }{%
        f(y) = (1/(2b)) exp( -|y-a|/b ) }
  where \eqn{-\infty < y < \infty}{-Inf < y < Inf},
  \eqn{-\infty < a < \infty}{-Inf < a < Inf} and
  \eqn{b > 0}{b > 0}.
  The mean is \eqn{a}{a} and the variance is \eqn{2b^2}.

  See \code{\link{laplace}}, the \pkg{VGAM} family function
  for estimating the two parameters by maximum likelihood estimation,
  for formulae and details.
  Apart from \code{n}, all the above arguments may be vectors and
  are recycled to the appropriate length if necessary.
}
\value{
  \code{dlaplace} gives the density,
  \code{plaplace} gives the distribution function,
  \code{qlaplace} gives the quantile function, and
  \code{rlaplace} generates random deviates.
}
\references{
  Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011)
  \emph{Statistical Distributions},
  Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
}
\author{ T. W. Yee and Kai Huang}
%\note{
%  The \pkg{VGAM} family function \code{\link{laplace}}
%  estimates the two parameters by maximum likelihood estimation.
%}
\seealso{
  \code{\link{laplace}}.
}
\examples{
loc <- 1; b <- 2
y <- rlaplace(n = 100, loc = loc, scale = b)
mean(y)   # sample mean
loc       # population mean
var(y)    # sample variance
2 * b^2   # population variance

\dontrun{ loc <- 0; b <- 1.5; x <- seq(-5, 5, by = 0.01)
plot(x, dlaplace(x, loc, b), type = "l", col = "blue", ylim = c(0,1),
     main = "Blue is density, orange is cumulative distribution function",
     sub = "Purple are 5,10,...,95 percentiles", las = 1, ylab = "")
abline(h = 0, col = "blue", lty = 2)
lines(qlaplace(seq(0.05,0.95,by = 0.05), loc, b),
      dlaplace(qlaplace(seq(0.05, 0.95, by = 0.05), loc, b), loc, b),
      col = "purple", lty = 3, type = "h")
lines(x, plaplace(x, loc, b), type = "l", col = "orange")
abline(h = 0, lty = 2) }
plaplace(qlaplace(seq(0.05, 0.95, by = 0.05), loc, b), loc, b)
}
\keyword{distribution}
VGAM/man/linkfun.Rd0000644000176200001440000000231113135276753013525 0ustar liggesusers\name{linkfun}
\alias{linkfun}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Link Functions }
\description{
  Generic function for returning the link functions of a fitted object.
}
\usage{
linkfun(object, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{object}{ An object which has parameter link functions.
  }
  \item{\dots}{ Other arguments fed into the specific
    methods function of the model.
  }
}
\details{
  Fitted models in the \pkg{VGAM} package have parameter
  link functions.
  This generic function returns these.
}
\value{
  The value returned depends specifically on the methods
  function invoked.
} %\references{ %} \author{ Thomas W. Yee } %\note{ %} \seealso{ \code{\link{linkfun.vglm}}, \code{\link{multilogit}}, \code{\link{vglm}}. } \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) fit1 <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo) coef(fit1, matrix = TRUE) linkfun(fit1) linkfun(fit1, earg = TRUE) fit2 <- vglm(cbind(normal, mild, severe) ~ let, multinomial, data = pneumo) coef(fit2, matrix = TRUE) linkfun(fit2) linkfun(fit2, earg = TRUE) } \keyword{models} \keyword{regression} VGAM/man/pospoisson.Rd0000644000176200001440000000560613135276753014305 0ustar liggesusers\name{pospoisson} \alias{pospoisson} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive Poisson Distribution Family Function } \description{ Fits a positive Poisson distribution. } \usage{ pospoisson(link = "loge", type.fitted = c("mean", "lambda", "prob0"), expected = TRUE, ilambda = NULL, imethod = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function for the usual mean (lambda) parameter of an ordinary Poisson distribution. See \code{\link{Links}} for more choices. } \item{expected}{ Logical. Fisher scoring is used if \code{expected = TRUE}, else Newton-Raphson. } \item{ilambda, imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{type.fitted}{ See \code{\link{CommonVGAMffArguments}} for details. } } \details{ The positive Poisson distribution is the ordinary Poisson distribution but with the probability of zero being zero. Thus the other probabilities are scaled up (i.e., divided by \eqn{1-P[Y=0]}). The mean, \eqn{\lambda / (1 - \exp(-\lambda))}{lambda/(1-exp(-lambda))}, can be obtained by the extractor function \code{fitted} applied to the object. A related distribution is the zero-inflated Poisson, in which the probability \eqn{P[Y=0]} involves another parameter \eqn{\phi}{phi}. See \code{\link{zipoisson}}. } \section{Warning }{ Under- or over-flow may occur if the data is ill-conditioned. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Coleman, J. S. and James, J. (1961) The equilibrium size distribution of freely-forming groups. \emph{Sociometry}, \bold{24}, 36--45. %Documentation accompanying the \pkg{VGAM} package at %\url{http://www.stat.auckland.ac.nz/~yee} %contains further information and examples. } \author{ Thomas W. Yee } \note{ This family function can handle multiple responses. Yet to be done: a \code{quasi.pospoisson} which estimates a dispersion parameter. } \seealso{ \code{\link{Pospois}}, \code{\link{posnegbinomial}}, \code{\link{poissonff}}, \code{\link{otpospoisson}}, \code{\link{zapoisson}}, \code{\link{zipoisson}}, \code{\link{simulate.vlm}}. 
} \examples{ # Data from Coleman and James (1961) cjdata <- data.frame(y = 1:6, freq = c(1486, 694, 195, 37, 10, 1)) fit <- vglm(y ~ 1, pospoisson, data = cjdata, weights = freq) Coef(fit) summary(fit) fitted(fit) pdata <- data.frame(x2 = runif(nn <- 1000)) # Artificial data pdata <- transform(pdata, lambda = exp(1 - 2 * x2)) pdata <- transform(pdata, y1 = rpospois(nn, lambda)) with(pdata, table(y1)) fit <- vglm(y1 ~ x2, pospoisson, data = pdata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) } \keyword{models} \keyword{regression} VGAM/man/gev.Rd0000644000176200001440000002702413135276753012650 0ustar liggesusers\name{gev} \alias{gev} \alias{gevff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Generalized Extreme Value Distribution Family Function } \description{ Maximum likelihood estimation of the 3-parameter generalized extreme value (GEV) distribution. } \usage{ gev(llocation = "identitylink", lscale = "loge", lshape = logoff(offset = 0.5), percentiles = c(95, 99), ilocation = NULL, iscale = NULL, ishape = NULL, imethod = 1, gprobs.y = (1:9)/10, gscale.mux = exp((-5:5)/6), gshape = (-5:5) / 11 + 0.01, iprobs.y = NULL, tolshape0 = 0.001, type.fitted = c("percentiles", "mean"), zero = c("scale", "shape")) gevff(llocation = "identitylink", lscale = "loge", lshape = logoff(offset = 0.5), percentiles = c(95, 99), ilocation = NULL, iscale = NULL, ishape = NULL, imethod = 1, gprobs.y = (1:9)/10, gscale.mux = exp((-5:5)/6), gshape = (-5:5) / 11 + 0.01, iprobs.y = NULL, tolshape0 = 0.001, type.fitted = c("percentiles", "mean"), zero = c("scale", "shape")) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation, lscale, lshape}{ Parameter link functions for \eqn{\mu}{mu}, \eqn{\sigma}{sigma} and \eqn{\xi}{xi} respectively. See \code{\link{Links}} for more choices. For the shape parameter, the default \code{\link{logoff}} link has an offset called \eqn{A} below; and then the linear/additive predictor is \eqn{\log(\xi+A)}{log(xi+A)} which means that \eqn{\xi > -A}{xi > -A}. For technical reasons (see \bold{Details}) it is a good idea for \eqn{A = 0.5}. } % \item{Offset}{ % Numeric, of length 1. % Called \eqn{A} below. % Offset value if \code{lshape = "logoff"}. % Then the linear/additive predictor is % \eqn{\log(\xi+A)}{log(xi+A)} which means that % \eqn{\xi > -A}{xi > -A}. % For technical reasons (see \bold{Details}) it is a good idea for % \code{Offset = 0.5}. % } \item{percentiles}{ Numeric vector of percentiles used for the fitted values. Values should be between 0 and 100. This argument is ignored if \code{type.fitted = "mean"}. % 20140912: this is still true, but using 'type.fitted' is better. % However, if \code{percentiles = NULL}, then the mean % \eqn{\mu + \sigma (\Gamma(1-\xi)-1) / \xi}{mu + sigma * (gamma(1-xi)-1)/xi} % is returned, and this is only defined if \eqn{\xi<1}{xi<1}. } \item{type.fitted}{ See \code{\link{CommonVGAMffArguments}} for information. The default is to use the \code{percentiles} argument. If \code{"mean"} is chosen, then the mean \eqn{\mu + \sigma (\Gamma(1-\xi)-1) / \xi}{mu + sigma * (gamma(1-xi)-1)/xi} is returned as the fitted values, and these are only defined for \eqn{\xi<1}{xi<1}. } \item{ilocation, iscale, ishape}{ Numeric. Initial value for the location parameter, \eqn{\sigma}{sigma} and \eqn{\xi}{xi}. A \code{NULL} means a value is computed internally. The argument \code{ishape} is more important than the other two. 
  If a failure to converge occurs, or even a failure to obtain
  initial values, try assigning \code{ishape} some value
  (positive or negative; the sign can be very important).
  Also, in general, a larger value of \code{iscale} tends
  to be better than a smaller value.
% because they are initialized from the initial \eqn{\xi}{xi}.
  }
% \item{rshape}{
% Numeric, of length 2.
% Range of \eqn{\xi}{xi} if \code{lshape = "extlogit"} is chosen.
% The rationale for the default values is given below.
% }
% \item{mean}{
% Logical. If \code{TRUE}, the mean is computed and returned
% as the fitted values. This argument overrides the
% \code{percentiles} argument.
% See \bold{Details} for more details.
% }
  \item{imethod}{
  Initialization method.
  Either the value 1 or 2.
  If both methods fail then try using \code{ishape}.
  See \code{\link{CommonVGAMffArguments}} for information.
% Method 1 involves choosing the best \eqn{\xi}{xi} on the grid values
% given by \code{gshape}.
% Method 2 is similar to the method of moments.
  }
  \item{gshape}{
  Numeric vector.
  The values are used for a grid search for an initial value
  for \eqn{\xi}{xi}.
  See \code{\link{CommonVGAMffArguments}} for information.
% Used only if \code{imethod} equals 1.
  }
  \item{gprobs.y, gscale.mux, iprobs.y}{
  Numeric vectors, used for the initial values.
  See \code{\link{CommonVGAMffArguments}} for information.
  }
  \item{tolshape0}{
  Passed into \code{\link{dgev}} when computing the log-likelihood.
  }
  \item{zero}{
  A vector specifying which linear/additive predictors are
  modelled as intercepts only.
  The values can be from the set \{1,2,3\} corresponding
  respectively to \eqn{\mu}{mu}, \eqn{\sigma}{sigma}, \eqn{\xi}{xi}.
  If \code{zero = NULL} then all linear/additive predictors are
  modelled as a linear combination of the explanatory variables.
  For many data sets having \code{zero = 3} is a good idea.
  See \code{\link{CommonVGAMffArguments}} for information.
  }
}
\details{
  The GEV distribution function can be written
  \deqn{G(y) = \exp( -[ (y-\mu)/ \sigma  ]_{+}^{- 1/ \xi}) }{%
        G(y) =  exp( -[ (y- mu)/ sigma ]_{+}^{- 1/ xi}) }
  where \eqn{\sigma > 0}{sigma > 0},
  \eqn{-\infty < \mu < \infty}{-Inf < mu < Inf},
  and \eqn{1 + \xi(y-\mu)/\sigma > 0}{1 + xi*(y-mu)/sigma > 0}.
  Here, \eqn{x_+ = \max(x,0)}{x_+ = max(x,0)}.
  The \eqn{\mu}{mu}, \eqn{\sigma}{sigma}, \eqn{\xi}{xi} are known as
  the \emph{location}, \emph{scale} and \emph{shape} parameters
  respectively.
  The cases
  \eqn{\xi>0}{xi>0},
  \eqn{\xi<0}{xi<0},
  \eqn{\xi = 0}{xi = 0}
  correspond to the Frechet, Weibull, and Gumbel types respectively.
  It can be noted that the Gumbel (or Type I) distribution accommodates
  many commonly-used distributions such as the normal, lognormal,
  logistic, gamma, exponential and Weibull.

  For the GEV distribution, the \eqn{k}th moment about the mean exists
  if \eqn{\xi < 1/k}{xi < 1/k}.
  Provided they exist, the mean and variance are given by
  \eqn{\mu+\sigma\{ \Gamma(1-\xi)-1\}/ \xi}{mu + sigma \{ Gamma(1-xi)-1\} / xi}
  and
  \eqn{\sigma^2 \{ \Gamma(1-2\xi) - \Gamma^2(1-\xi) \} / \xi^2}{sigma^2 \{ Gamma(1-2 xi) - Gamma^2 (1- xi) \} / xi^2}
  respectively,
  where \eqn{\Gamma}{Gamma} is the gamma function.

  Smith (1985) established that when \eqn{\xi > -0.5}{xi > -0.5},
  the maximum likelihood estimators are completely regular.
  To have some control over the estimated \eqn{\xi}{xi} try
  using \code{lshape = logoff(offset = 0.5)}, say,
  or \code{lshape = extlogit(min = -0.5, max = 0.5)}, say.
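% A commented sketch of the idea (not run; 'oxtemp' ships with VGAM and
% is used in the examples below):
% fit <- vglm(maxtemp ~ 1, gevff(lshape = extlogit(min = -0.5, max = 0.5)),
%             data = oxtemp)  # Keeps the xi estimate within (-0.5, 0.5)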
% and when \eqn{-1 < \xi < -0.5}{-1 < xi < -0.5} they exist but are % non-regular; and when \eqn{\xi < -1}{xi < -1} then the maximum % likelihood estimators do not exist. In most environmental data % sets \eqn{\xi > -1}{xi > -1} so maximum likelihood works fine. } \section{Warning }{ Currently, if an estimate of \eqn{\xi}{xi} is too close to 0 then an error may occur for \code{gev()} with multivariate responses. In general, \code{gevff()} is more reliable than \code{gev()}. Fitting the GEV by maximum likelihood estimation can be numerically fraught. If \eqn{1 + \xi (y-\mu)/ \sigma \leq 0}{1 + xi*(y-mu)/sigma <= 0} then some crude evasive action is taken but the estimation process can still fail. This is particularly the case if \code{\link{vgam}} with \code{\link{s}} is used; then smoothing is best done with \code{\link{vglm}} with regression splines (\code{\link[splines]{bs}} or \code{\link[splines]{ns}}) because \code{\link{vglm}} implements half-stepsizing whereas \code{\link{vgam}} doesn't (half-stepsizing helps handle the problem of straying outside the parameter space.) } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Yee, T. W. and Stephenson, A. G. (2007) Vector generalized linear and additive extreme value models. \emph{Extremes}, \bold{10}, 1--19. Tawn, J. A. (1988) An extreme-value theory model for dependent observations. \emph{Journal of Hydrology}, \bold{101}, 227--250. Prescott, P. and Walden, A. T. (1980) Maximum likelihood estimation of the parameters of the generalized extreme-value distribution. \emph{Biometrika}, \bold{67}, 723--724. Smith, R. L. (1985) Maximum likelihood estimation in a class of nonregular cases. \emph{Biometrika}, \bold{72}, 67--90. } \author{ T. W. Yee } \note{ The \pkg{VGAM} family function \code{gev} can handle a multivariate (matrix) response, cf. multiple responses. If so, each row of the matrix is sorted into descending order and \code{NA}s are put last. With a vector or one-column matrix response using \code{gevff} will give the same result but be faster and it handles the \eqn{\xi = 0}{xi = 0} case. The function \code{gev} implements Tawn (1988) while \code{gevff} implements Prescott and Walden (1980). Function \code{egev()} has been replaced by the new family function \code{gevff()}. It now conforms to the usual \pkg{VGAM} philosophy of having \code{M1} linear predictors per (independent) response. This is the usual way multiple responses are handled. Hence \code{vglm(cbind(y1, y2)\ldots, gevff, \ldots)} will have 6 linear predictors and it is possible to constrain the linear predictors so that the answer is similar to \code{gev()}. Missing values in the response of \code{gevff()} will be deleted; this behaviour is the same as with almost every other \pkg{VGAM} family function. The shape parameter \eqn{\xi}{xi} is difficult to estimate accurately unless there is a lot of data. Convergence is slow when \eqn{\xi}{xi} is near \eqn{-0.5}. Given many explanatory variables, it is often a good idea to make sure \code{zero = 3}. The range restrictions of the parameter \eqn{\xi}{xi} are not enforced; thus it is possible for a violation to occur. Successful convergence often depends on having a reasonably good initial value for \eqn{\xi}{xi}. If failure occurs try various values for the argument \code{ishape}, and if there are covariates, having \code{zero = 3} is advised. 
} \seealso{ \code{\link{rgev}}, \code{\link{gumbel}}, \code{\link{gumbelff}}, \code{\link{guplot}}, \code{\link{rlplot.gevff}}, \code{\link{gpd}}, \code{\link{weibullR}}, \code{\link{frechet}}, \code{\link{extlogit}}, \code{\link{oxtemp}}, \code{\link{venice}}, \code{\link{CommonVGAMffArguments}}. %\code{\link{gevff}}, %\code{\link{ogev}}, } \examples{ \dontrun{ # Multivariate example fit1 <- vgam(cbind(r1, r2) ~ s(year, df = 3), gev(zero = 2:3), data = venice, trace = TRUE) coef(fit1, matrix = TRUE) head(fitted(fit1)) par(mfrow = c(1, 2), las = 1) plot(fit1, se = TRUE, lcol = "blue", scol = "forestgreen", main = "Fitted mu(year) function (centered)", cex.main = 0.8) with(venice, matplot(year, depvar(fit1)[, 1:2], ylab = "Sea level (cm)", col = 1:2, main = "Highest 2 annual sea levels", cex.main = 0.8)) with(venice, lines(year, fitted(fit1)[,1], lty = "dashed", col = "blue")) legend("topleft", lty = "dashed", col = "blue", "Fitted 95 percentile") # Univariate example (fit <- vglm(maxtemp ~ 1, gevff, data = oxtemp, trace = TRUE)) head(fitted(fit)) coef(fit, matrix = TRUE) Coef(fit) vcov(fit) vcov(fit, untransform = TRUE) sqrt(diag(vcov(fit))) # Approximate standard errors rlplot(fit) } } \keyword{models} \keyword{regression} % type.fitted = c("percentiles", "mean"), giveWarning = TRUE, % \item{gshape}{ % Numeric, of length 2. % Range of \eqn{\xi}{xi} used for a grid search for a good initial value % for \eqn{\xi}{xi}. % Used only if \code{imethod} equals 1. % } VGAM/man/foldnormUC.Rd0000644000176200001440000000526113135276753014136 0ustar liggesusers\name{Foldnorm} \alias{Foldnorm} \alias{dfoldnorm} \alias{pfoldnorm} \alias{qfoldnorm} \alias{rfoldnorm} \title{The Folded-Normal Distribution} \description{ Density, distribution function, quantile function and random generation for the (generalized) folded-normal distribution. } \usage{ dfoldnorm(x, mean = 0, sd = 1, a1 = 1, a2 = 1, log = FALSE) pfoldnorm(q, mean = 0, sd = 1, a1 = 1, a2 = 1, lower.tail = TRUE, log.p = FALSE) qfoldnorm(p, mean = 0, sd = 1, a1 = 1, a2 = 1, lower.tail = TRUE, log.p = FALSE, ...) rfoldnorm(n, mean = 0, sd = 1, a1 = 1, a2 = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Same as \code{\link[stats:Normal]{rnorm}}. } \item{mean, sd}{ see \code{\link[stats:Normal]{rnorm}}. } \item{a1, a2}{ see \code{\link{foldnormal}}. } \item{log}{ Logical. If \code{TRUE} then the log density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } \item{\ldots}{ Arguments that can be passed into \code{\link[stats]{uniroot}}. } } \value{ \code{dfoldnorm} gives the density, \code{pfoldnorm} gives the distribution function, \code{qfoldnorm} gives the quantile function, and \code{rfoldnorm} generates random deviates. } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{foldnormal}}, the \pkg{VGAM} family function for estimating the parameters, for the formula of the probability density function and other details. } \note{ \code{qfoldnorm()} runs very slowly because it calls \code{\link[stats]{uniroot}} for each value of the argument \code{p}. The solution is consequently not exact; the \code{...} can be used to obtain a more accurate solution if necessary. } \seealso{ \code{\link{foldnormal}}, \code{\link[stats]{uniroot}}. 
} \examples{ \dontrun{ m <- 1.5; SD <- exp(0) x <- seq(-1, 4, len = 501) plot(x, dfoldnorm(x, m = m, sd = SD), type = "l", ylim = 0:1, las = 1, ylab = paste("foldnorm(m = ", m, ", sd = ", round(SD, digits = 3), ")"), main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles", col = "blue") abline(h = 0, col = "gray50") lines(x, pfoldnorm(x, m = m, sd = SD), col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qfoldnorm(probs, m = m, sd = SD) lines(Q, dfoldnorm(Q, m = m, sd = SD), col = "purple", lty = 3, type = "h") lines(Q, pfoldnorm(Q, m = m, sd = SD), col = "purple", lty = 3, type = "h") abline(h = probs, col = "purple", lty = 3) max(abs(pfoldnorm(Q, m = m, sd = SD) - probs)) # Should be 0 } } \keyword{distribution} VGAM/man/cao.Rd0000644000176200001440000002776213135276753012642 0ustar liggesusers\name{cao} \alias{cao} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Fitting Constrained Additive Ordination (CAO) } \description{ A constrained additive ordination (CAO) model is fitted using the \emph{reduced-rank vector generalized additive model} (RR-VGAM) framework. } \usage{ cao(formula, family = stop("argument 'family' needs to be assigned"), data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = cao.control(...), offset = NULL, method = "cao.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = NULL, qr.arg = FALSE, smart = TRUE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{formula}{ a symbolic description of the model to be fit. The RHS of the formula is used to construct the latent variables, upon which the smooths are applied. All the variables in the formula are used for the construction of latent variables except for those specified by the argument \code{noRRR}, which is itself a formula. The LHS of the formula contains the response variables, which should be a matrix with each column being a response (species). } \item{family}{ a function of class \code{"vglmff"} (see \code{\link{vglmff-class}}) describing what statistical model is to be fitted. This is called a ``\pkg{VGAM} family function''. See \code{\link{CommonVGAMffArguments}} for general information about many types of arguments found in this type of function. See \code{\link{cqo}} for a list of those presently implemented. } \item{data}{ an optional data frame containing the variables in the model. By default the variables are taken from \code{environment(formula)}, typically the environment from which \code{cao} is called. } \item{weights}{ an optional vector or matrix of (prior) weights to be used in the fitting process. For \code{cao}, this argument currently should not be used. } \item{subset}{ an optional logical vector specifying a subset of observations to be used in the fitting process. } \item{na.action}{ a function which indicates what should happen when the data contain \code{NA}s. The default is set by the \code{na.action} setting of \code{\link[base]{options}}, and is \code{na.fail} if that is unset. The ``factory-fresh'' default is \code{na.omit}. } \item{etastart}{ starting values for the linear predictors. It is a \eqn{M}-column matrix. If \eqn{M=1} then it may be a vector. For \code{cao}, this argument currently should not be used. } \item{mustart}{ starting values for the fitted values. It can be a vector or a matrix. 
Some family functions do not make use of this argument. For \code{cao}, this argument currently should not be used. } \item{coefstart}{ starting values for the coefficient vector. For \code{cao}, this argument currently should not be used. } \item{control}{ a list of parameters for controlling the fitting process. See \code{\link{cao.control}} for details. } \item{offset}{ a vector or \eqn{M}-column matrix of offset values. These are \emph{a priori} known and are added to the linear predictors during fitting. For \code{cao}, this argument currently should not be used. } \item{method}{ the method to be used in fitting the model. The default (and presently only) method \code{cao.fit} uses iteratively reweighted least squares (IRLS) within FORTRAN code called from \code{\link[stats]{optim}}. } \item{model}{ a logical value indicating whether the \emph{model frame} should be assigned in the \code{model} slot. } \item{x.arg, y.arg}{ logical values indicating whether the model matrix and response vector/matrix used in the fitting process should be assigned in the \code{x} and \code{y} slots. Note the model matrix is the linear model (LM) matrix. } \item{contrasts}{ an optional list. See the \code{contrasts.arg} of \code{\link{model.matrix.default}}. } \item{constraints}{ an optional list of constraint matrices. For \code{cao}, this argument currently should not be used. The components of the list must be named with the term each corresponds to (and it must match in character format). Each constraint matrix must have \eqn{M} rows, and be of full-column rank. By default, constraint matrices are the \eqn{M} by \eqn{M} identity matrix unless arguments in the family function itself override these values. If \code{constraints} is used it must contain \emph{all} the terms; an incomplete list is not accepted. } \item{extra}{ an optional list with any extra information that might be needed by the family function. For \code{cao}, this argument currently should not be used. } \item{qr.arg}{ For \code{cao}, this argument currently should not be used. } \item{smart}{ logical value indicating whether smart prediction (\code{\link{smartpred}}) will be used. } \item{\dots}{ further arguments passed into \code{\link{cao.control}}. } } \details{ The arguments of \code{cao} are a mixture of those from \code{\link{vgam}} and \code{\link{cqo}}, but with some extras in \code{\link{cao.control}}. Currently, not all of the arguments work properly. CAO can loosely be thought of as the result of fitting generalized additive models (GAMs) to several responses (e.g., species) against a very small number of latent variables. Each latent variable is a linear combination of the explanatory variables; the coefficients \bold{C} (called \eqn{C} below) are called \emph{constrained coefficients} or \emph{canonical coefficients}, and are interpreted as weights or loadings. The \bold{C} are estimated by maximum likelihood estimation. It is often a good idea to apply \code{\link[base]{scale}} to each explanatory variable first. For each response (e.g., species), each latent variable is smoothed by a cubic smoothing spline; thus CAO is data-driven. If each smooth were a quadratic then CAO would simplify to \emph{constrained quadratic ordination} (CQO; formerly called \emph{canonical Gaussian ordination} or CGO). If each smooth were linear then CAO would simplify to \emph{constrained linear ordination} (CLO).
CLO can theoretically be fitted with \code{cao} by specifying \code{df1.nl = 0}; however, it is more efficient to use \code{\link{rrvglm}}. Currently, only \code{Rank = 1} is implemented, and only \code{noRRR = ~ 1} models are handled. % Poisson and binary responses are implemented (viz., % \code{\link{poissonff}}, \code{\link{binomialff}}), and % dispersion parameters for these must be assumed known. Hence using % \code{\link{quasipoissonff}} and \code{\link{quasibinomialff}} will % currently fail. Also, currently, only \code{noRRR = ~ 1} models are % handled. With binomial data, the default formula is \deqn{logit(P[Y_s=1]) = \eta_s = f_s(\nu), \ \ \ s=1,2,\ldots,S}{% logit(P[Y_s=1]) = eta_s = f_s(\nu), \ \ \ s=1,2,\ldots,S} where \eqn{x_2}{x_2} is a vector of environmental variables, and \eqn{\nu=C^T x_2}{nu=C^T x_2} is an \eqn{R}-vector of latent variables. The \eqn{\eta_s}{eta_s} is an additive predictor for species \eqn{s}, and it models the probabilities of presence as an additive model on the logit scale. The matrix \eqn{C} is estimated from the data, as well as the smooth functions \eqn{f_s}. The argument \code{noRRR = ~ 1} specifies that the vector \eqn{x_1}{x_1}, defined for RR-VGLMs and QRR-VGLMs, is simply a 1 for an intercept. Here, the intercept in the model is absorbed into the functions. A \code{\link{cloglog}} link may be preferable to a \code{\link{logit}} link. With Poisson count data, the formula is \deqn{\log(E[Y_s]) = \eta_s = f_s(\nu)}{% log(E[Y_s]) = eta_s = f_s(\nu)} which models the mean response as an additive model on the log scale. The fitted latent variables (site scores) are scaled to have unit variance. The concept of a tolerance is undefined for CAO models, but the optima and maxima are defined. The generic functions \code{\link{Max}} and \code{\link{Opt}} should work for CAO objects, but note that if the maximum occurs at the boundary then \code{\link{Max}} will return \code{NA}. Inference for CAO models is currently undeveloped. } \value{ An object of class \code{"cao"} (this may change to \code{"rrvgam"} in the future). Several generic functions can be applied to the object, e.g., \code{\link{Coef}}, \code{\link{concoef}}, \code{\link{lvplot}}, \code{\link{summary}}. } \references{ Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. % Documentation accompanying the \pkg{VGAM} package at % \url{http://www.stat.auckland.ac.nz/~yee} % contains further information and examples. } \author{T. W. Yee} \note{ CAO models are computationally expensive; therefore setting \code{trace = TRUE} is a good idea, as is running it on a simple random sample of the data set instead. Sometimes the IRLS algorithm does not converge within the FORTRAN code. This results in warnings being issued. In particular, if an error code of 3 is issued, then this indicates the IRLS algorithm has not converged. One possible remedy is to increase or decrease the nonlinear degrees of freedom so that the curves become more or less flexible, respectively. } \section{Warning }{ CAO is very costly to compute. With version 0.7-8 it took 28 minutes on a fast machine. I hope to look at ways of speeding things up in the future. Use \code{\link[base:Random]{set.seed}} just prior to calling \code{cao()} to make your results reproducible. The reason for this is that finding the optimal CAO model presents a difficult optimization problem, partly because the log-likelihood function contains many local solutions.
To obtain the (global) solution the user is advised to try \emph{many} initial values. This can be done by setting \code{Bestof} to some appropriate value (see \code{\link{cao.control}}). Trying many initial values becomes progressively more important as the nonlinear degrees of freedom of the smooths increase. % The code is a little fragile at this stage, so the function might % hang/lock up in the microsoft Windows version. Currently the dispersion parameter for a \code{\link{gaussianff}} CAO model is estimated slightly differently and may be slightly biased downwards (usually a little too small). } \seealso{ \code{\link{cao.control}}, \code{Coef.cao}, \code{\link{cqo}}, \code{\link{latvar}}, \code{\link{Opt}}, \code{\link{Max}}, \code{persp.cao}, \code{\link{poissonff}}, \code{\link{binomialff}}, \code{\link{negbinomial}}, \code{\link{gamma2}}, \code{\link{gaussianff}}, \code{\link[base:Random]{set.seed}}, \code{\link[gam]{gam}}, \code{\link[VGAMdata]{trapO}}. } \examples{ \dontrun{ hspider[, 1:6] <- scale(hspider[, 1:6]) # Standardized environmental vars set.seed(149) # For reproducible results ap1 <- cao(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, family = poissonff, data = hspider, Rank = 1, df1.nl = c(Pardpull = 2.7, 2.5), Bestof = 7, Crow1positive = FALSE) sort(deviance(ap1, history = TRUE)) # A history of all the iterations Coef(ap1) concoef(ap1) par(mfrow = c(2, 2)) plot(ap1) # All the curves are unimodal; some quite symmetric par(mfrow = c(1, 1), las = 1) index <- 1:ncol(depvar(ap1)) lvplot(ap1, lcol = index, pcol = index, y = TRUE) trplot(ap1, label = TRUE, col = index) abline(a = 0, b = 1, lty = 2) trplot(ap1, label = TRUE, col = "blue", log = "xy", which.sp = c(1, 3)) abline(a = 0, b = 1, lty = 2) persp(ap1, col = index, lwd = 2, label = TRUE) abline(v = Opt(ap1), lty = 2, col = index) abline(h = Max(ap1), lty = 2, col = index) } } \keyword{models} \keyword{regression} VGAM/man/oalog.Rd0000644000176200001440000000701213135276753013163 0ustar liggesusers\name{oalog} \alias{oalog} %\alias{oalogff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ One-Altered Logarithmic Distribution } \description{ Fits a one-altered logarithmic distribution based on a conditional model involving a Bernoulli distribution and a 1-truncated logarithmic distribution. } \usage{ oalog(lpobs1 = "logit", lshape = "logit", type.fitted = c("mean", "shape", "pobs1", "onempobs1"), ipobs1 = NULL, gshape = ppoints(8), zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lpobs1}{ Link function for the parameter \eqn{p_1}{pobs1} or \eqn{\phi}{phi}, called \code{pobs1} or \code{phi} here. See \code{\link{Links}} for more choices. } \item{lshape}{ See \code{\link{logff}} for details. } \item{gshape, type.fitted}{ See \code{\link{CommonVGAMffArguments}} and \code{\link{fittedvlm}} for information. } % \item{epobs1, eshape}{ % List. Extra argument for the respective links. % See \code{earg} in \code{\link{Links}} for general information. % epobs1 = list(), eshape = list(), % } \item{ipobs1, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The response \eqn{Y} is one with probability \eqn{p_1}{pobs1}, or \eqn{Y} has a 1-truncated logarithmic distribution with probability \eqn{1-p_1}{1-pobs1}. Thus \eqn{0 < p_1 < 1}{0 < pobs1 < 1}, which is modelled as a function of the covariates.
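Explicitly, if \eqn{f(y;s)} denotes the logarithmic probability function, this conditional model can be written as \deqn{P(Y=1) = \phi, \ \ \ P(Y=y) = (1-\phi) f(y;s) / [1 - f(1;s)], \ \ \ y=2,3,\ldots,}{P(Y=1) = phi, and P(Y=y) = (1-phi) * f(y;s) / [1 - f(1;s)] for y=2,3,...,} so that the probabilities of the 1-truncated logarithmic part sum to \eqn{1-\phi}{1-phi}.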
The one-altered logarithmic distribution differs from the one-inflated logarithmic distribution in that the former has ones coming from one source, whereas the latter has ones coming from the logarithmic distribution too. The one-inflated logarithmic distribution is implemented in the \pkg{VGAM} package. Some people call the one-altered logarithmic distribution a \emph{hurdle} model. The input can be a matrix (multiple responses). By default, the two linear/additive predictors of \code{oalog} are \eqn{(logit(\phi), logit(s))^T}{(logit(phi), logit(shape))^T}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. The \code{fitted.values} slot of the fitted object, which should be extracted by the generic function \code{fitted}, returns the mean \eqn{\mu}{mu} (default) which is given by \deqn{\mu = \phi + (1-\phi) A}{% mu = phi + (1- phi) A} where \eqn{A} is the mean of the 1-truncated logarithmic distribution. If \code{type.fitted = "pobs1"} then \eqn{p_1}{pobs1} is returned. } %\references{ % % %} %\section{Warning }{ %} \author{ T. W. Yee } \note{ This family function effectively combines \code{\link{binomialff}} and \code{\link{otlog}} into one family function. } \seealso{ \code{\link{Oalog}}, \code{\link{logff}}, \code{\link{oilog}}, \code{\link{CommonVGAMffArguments}}, \code{\link{simulate.vlm}}. } % \code{\link{poslogarithmic}}, \examples{ odata <- data.frame(x2 = runif(nn <- 1000)) odata <- transform(odata, pobs1 = logit(-1 + 2*x2, inverse = TRUE), shape = logit(-2 + 3*x2, inverse = TRUE)) odata <- transform(odata, y1 = roalog(nn, shape = shape, pobs1 = pobs1), y2 = roalog(nn, shape = shape, pobs1 = pobs1)) with(odata, table(y1)) ofit <- vglm(cbind(y1, y2) ~ x2, oalog, data = odata, trace = TRUE) coef(ofit, matrix = TRUE) head(fitted(ofit)) head(predict(ofit)) summary(ofit) } \keyword{models} \keyword{regression} VGAM/man/kumar.Rd0000644000176200001440000000575213135276753013202 0ustar liggesusers\name{kumar} \alias{kumar} %- Also NEED an '\alias' for EACH other topic documented here. \title{Kumaraswamy Distribution Family Function} \description{ Estimates the two parameters of the Kumaraswamy distribution by maximum likelihood estimation. } \usage{ kumar(lshape1 = "loge", lshape2 = "loge", ishape1 = NULL, ishape2 = NULL, gshape1 = exp(2*ppoints(5) - 1), tol12 = 1.0e-4, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape1, lshape2}{ Link functions for the two positive shape parameters, respectively, called \eqn{a} and \eqn{b} below. See \code{\link{Links}} for more choices. } % \item{eshape1, eshape2}{ % List. Extra argument for each of the links. % See \code{earg} in \code{\link{Links}} for general information. % eshape1 = list(), eshape2 = list(), % } \item{ishape1, ishape2}{ Numeric. Optional initial values for the two positive shape parameters. } \item{tol12}{ Numeric and positive. Tolerance for testing whether the second shape parameter is either 1 or 2. If so then the working weights need to handle these singularities. } \item{gshape1}{ Values for a grid search for the first shape parameter. See \code{\link{CommonVGAMffArguments}} for more information. % Lower and upper limits for a grid search for the first shape parameter. } \item{zero}{ See \code{\link{CommonVGAMffArguments}}.
} } \details{ The Kumaraswamy distribution has density function \deqn{f(y;a = shape1,b = shape2) = a b y^{a-1} (1-y^{a})^{b-1}}{% a*b*y^(a-1)*(1-y^a)^(b-1)} where \eqn{0 < y < 1} and the two shape parameters, \eqn{a} and \eqn{b}, are positive. The mean is \eqn{b \times Beta(1+1/a,b)}{b * Beta(1+1/a,b)} (returned as the fitted values) and the variance is \eqn{b \times Beta(1+2/a,b) - (b \times Beta(1+1/a,b))^2}{b * Beta(1+2/a,b) - (b * Beta(1+1/a,b))^2}. Applications of the Kumaraswamy distribution include the storage volume of a water reservoir. Fisher scoring is implemented. Handles multiple responses (matrix input). } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Kumaraswamy, P. (1980). A generalized probability density function for double-bounded random processes. \emph{Journal of Hydrology}, \bold{46}, 79--88. Jones, M. C. (2009). Kumaraswamy's distribution: A beta-type distribution with some tractability advantages. \emph{Statistical Methodology}, \bold{6}, 70--81. } \author{ T. W. Yee } %\note{ % %} \seealso{ \code{\link{dkumar}}, \code{\link{betaff}}, \code{\link{simulate.vlm}}. } \examples{ shape1 <- exp(1); shape2 <- exp(2) kdata <- data.frame(y = rkumar(n = 1000, shape1, shape2)) fit <- vglm(y ~ 1, kumar, data = kdata, trace = TRUE) c(with(kdata, mean(y)), head(fitted(fit), 1)) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/double.expbinomial.Rd0000644000176200001440000001530113135276753015642 0ustar liggesusers\name{double.expbinomial} \alias{double.expbinomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Double Exponential Binomial Distribution Family Function } \description{ Fits a double exponential binomial distribution by maximum likelihood estimation. The two parameters here are the mean and dispersion parameter. } \usage{ double.expbinomial(lmean = "logit", ldispersion = "logit", idispersion = 0.25, zero = "dispersion") } % idispersion = 0.25, zero = 2 %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmean, ldispersion}{ Link functions applied to the two parameters, called \eqn{\mu}{mu} and \eqn{\theta}{theta} respectively below. See \code{\link{Links}} for more choices. The defaults cause the parameters to be restricted to \eqn{(0,1)}. } \item{idispersion}{ Initial value for the dispersion parameter. If given, it must be in range, and is recycled to the necessary length. Use this argument if convergence failure occurs. } \item{zero}{ A vector specifying which linear/additive predictor is to be modelled as intercept-only. If assigned, the single value can be either \code{1} or \code{2}. The default is to have a single dispersion parameter value. To model both parameters as functions of the covariates assign \code{zero = NULL}. See \code{\link{CommonVGAMffArguments}} for more details. } } \details{ This distribution provides a way for handling overdispersion in a binary response. The double exponential binomial distribution belongs to the family of double exponential distributions proposed by Efron (1986). Below, equation numbers refer to that original article. Briefly, the idea is that an ordinary one-parameter exponential family allows the addition of a second parameter \eqn{\theta}{theta} which varies the dispersion of the family without changing the mean.
The extended family behaves like the original family with sample size changed from \eqn{n} to \eqn{n\theta}{n*theta}. The extended family is an exponential family in \eqn{\mu}{mu} when \eqn{n} and \eqn{\theta}{theta} are fixed, and an exponential family in \eqn{\theta}{theta} when \eqn{n} and \eqn{\mu}{mu} are fixed. Having \eqn{0 < \theta < 1}{0 < theta < 1} corresponds to overdispersion with respect to the binomial distribution. See Efron (1986) for full details. This \pkg{VGAM} family function implements an \emph{approximation} (2.10) to the exact density (2.4). It replaces the normalizing constant by unity since the true value nearly equals 1. The default model fitted is \eqn{\eta_1 = logit(\mu)}{eta1 = logit(mu)} and \eqn{\eta_2 = logit(\theta)}{eta2 = logit(theta)}. This restricts both parameters to lie between 0 and 1, although the dispersion parameter can be modelled over a larger parameter space by assigning the argument \code{ldispersion}. Approximately, the mean (of \eqn{Y}) is \eqn{\mu}{mu}. The \emph{effective sample size} is the dispersion parameter multiplied by the original sample size, i.e., \eqn{n\theta}{n*theta}. This family function uses Fisher scoring, and the two estimates are asymptotically independent because the expected information matrix is diagonal. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}. } \references{ Efron, B. (1986) Double exponential families and their use in generalized linear regression. \emph{Journal of the American Statistical Association}, \bold{81}, 709--721. } \author{ T. W. Yee } \note{ This function processes the input in the same way as \code{\link{binomialff}}; however, multiple responses are not allowed (\code{binomialff(multiple.responses = FALSE)}). } \section{Warning }{ Numerical difficulties can occur; if so, try using \code{idispersion}. } \seealso{ \code{\link{binomialff}}, \code{\link{toxop}}, \code{\link{CommonVGAMffArguments}}. } \examples{ # This example mimics the example in Efron (1986). # The results here differ slightly. # Scale the variables toxop <- transform(toxop, phat = positive / ssize, srainfall = scale(rainfall), # (6.1) sN = scale(ssize)) # (6.2) # A fit similar (should be identical) to Section 6 of Efron (1986). # But does not use poly(), and M = 1.25 here, as in (5.3) cmlist <- list("(Intercept)" = diag(2), "I(srainfall)" = rbind(1, 0), "I(srainfall^2)" = rbind(1, 0), "I(srainfall^3)" = rbind(1, 0), "I(sN)" = rbind(0, 1), "I(sN^2)" = rbind(0, 1)) fit <- vglm(cbind(phat, 1 - phat) * ssize ~ I(srainfall) + I(srainfall^2) + I(srainfall^3) + I(sN) + I(sN^2), double.expbinomial(ldisp = extlogit(min = 0, max = 1.25), idisp = 0.2, zero = NULL), toxop, trace = TRUE, constraints = cmlist) # Now look at the results coef(fit, matrix = TRUE) head(fitted(fit)) summary(fit) vcov(fit) sqrt(diag(vcov(fit))) # Standard errors # Effective sample size (not quite the last column of Table 1) head(predict(fit)) Dispersion <- extlogit(predict(fit)[,2], min = 0, max = 1.25, inverse = TRUE) c(round(weights(fit, type = "prior") * Dispersion, digits = 1)) # Ordinary logistic regression (gives same results as (6.5)) ofit <- vglm(cbind(phat, 1 - phat) * ssize ~ I(srainfall) + I(srainfall^2) + I(srainfall^3), binomialff, toxop, trace = TRUE) # Same as fit but it uses poly(), and can be plotted (cf.
Figure 1) cmlist2 <- list("(Intercept)" = diag(2), "poly(srainfall, degree = 3)" = rbind(1, 0), "poly(sN, degree = 2)" = rbind(0, 1)) fit2 <- vglm(cbind(phat, 1 - phat) * ssize ~ poly(srainfall, degree = 3) + poly(sN, degree = 2), double.expbinomial(ldisp = extlogit(min = 0, max = 1.25), idisp = 0.2, zero = NULL), toxop, trace = TRUE, constraints = cmlist2) \dontrun{ par(mfrow = c(1, 2)) plot(as(fit2, "vgam"), se = TRUE, lcol = "blue", scol = "orange") # Cf. Figure 1 # Cf. Figure 1(a) par(mfrow = c(1,2)) ooo <- with(toxop, sort.list(rainfall)) with(toxop, plot(rainfall[ooo], fitted(fit2)[ooo], type = "l", col = "blue", las = 1, ylim = c(0.3, 0.65))) with(toxop, points(rainfall[ooo], fitted(ofit)[ooo], col = "orange", type = "b", pch = 19)) # Cf. Figure 1(b) ooo <- with(toxop, sort.list(ssize)) with(toxop, plot(ssize[ooo], Dispersion[ooo], type = "l", col = "blue", las = 1, xlim = c(0, 100))) } } \keyword{models} \keyword{regression} VGAM/man/sc.t2UC.Rd0000644000176200001440000000570313135276753013250 0ustar liggesusers\name{Expectiles-sc.t2} \alias{Expectiles-sc.t2} \alias{dsc.t2} \alias{psc.t2} \alias{qsc.t2} \alias{rsc.t2} \title{ Expectiles/Quantiles of the Scaled Student t Distribution with 2 Df} \description{ Density function, distribution function, quantile/expectile function, and random generation for the scaled Student t distribution with 2 degrees of freedom. } \usage{ dsc.t2(x, location = 0, scale = 1, log = FALSE) psc.t2(q, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE) qsc.t2(p, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE) rsc.t2(n, location = 0, scale = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{ Vector of expectiles/quantiles. See the terminology note below. } \item{p}{ Vector of probabilities. % (tau or \eqn{\tau}). These should lie in \eqn{(0,1)}. } \item{n, log}{See \code{\link[stats:Uniform]{runif}}.} \item{location, scale}{ Location and scale parameters. The latter should have positive values. Values of these vectors are recycled. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:TDist]{pt}} or \code{\link[stats:TDist]{qt}}. } } \details{ A Student-t distribution with 2 degrees of freedom and a scale parameter of \code{sqrt(2)} is equivalent to the standard form of this distribution (called Koenker's distribution below). Further details about this distribution are given in \code{\link{sc.studentt2}}. } \value{ \code{dsc.t2(x)} gives the density function. \code{psc.t2(q)} gives the distribution function. \code{qsc.t2(p)} gives the expectile and quantile function. \code{rsc.t2(n)} gives \eqn{n} random variates. } \author{ T. W. Yee and Kai Huang } %\note{ %} \seealso{ \code{\link[stats:TDist]{dt}}, \code{\link{sc.studentt2}}.
} \examples{ my.p <- 0.25; y <- rsc.t2(nn <- 5000) (myexp <- qsc.t2(my.p)) sum(myexp - y[y <= myexp]) / sum(abs(myexp - y)) # Should be my.p # Equivalently: I1 <- mean(y <= myexp) * mean( myexp - y[y <= myexp]) I2 <- mean(y > myexp) * mean(-myexp + y[y > myexp]) I1 / (I1 + I2) # Should be my.p # Or: I1 <- sum( myexp - y[y <= myexp]) I2 <- sum(-myexp + y[y > myexp]) I1 / (I1 + I2) # Should be my.p # Non-standard Koenker distribution myloc <- 1; myscale <- 2 yy <- rsc.t2(nn, myloc, myscale) (myexp <- qsc.t2(my.p, myloc, myscale)) sum(myexp - yy[yy <= myexp]) / sum(abs(myexp - yy)) # Should be my.p psc.t2(mean(yy), myloc, myscale) # Should be 0.5 abs(qsc.t2(0.5, myloc, myscale) - mean(yy)) # Should be 0 abs(psc.t2(myexp, myloc, myscale) - my.p) # Should be 0 integrate(f = dsc.t2, lower = -Inf, upper = Inf, locat = myloc, scale = myscale) # Should be 1 y <- seq(-7, 7, len = 201) max(abs(dsc.t2(y) - dt(y / sqrt(2), df = 2) / sqrt(2))) # Should be 0 \dontrun{ plot(y, dsc.t2(y), type = "l", col = "blue", las = 1, ylim = c(0, 0.4), main = "Blue = Koenker; orange = N(0, 1)") lines(y, dnorm(y), type = "l", col = "orange") abline(h = 0, v = 0, lty = 2) } } \keyword{distribution} VGAM/man/inv.lomax.Rd0000644000176200001440000000575613135276753014004 0ustar liggesusers\name{inv.lomax} \alias{inv.lomax} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Inverse Lomax Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter inverse Lomax distribution. } \usage{ inv.lomax(lscale = "loge", lshape2.p = "loge", iscale = NULL, ishape2.p = NULL, imethod = 1, gscale = exp(-5:5), gshape2.p = exp(-5:5), probs.y = c(0.25, 0.5, 0.75), zero = "shape2.p") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lscale, lshape2.p}{ Parameter link functions applied to the (positive) parameters \eqn{b}, and \eqn{p}. See \code{\link{Links}} for more choices. } \item{iscale, ishape2.p, imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for information. For \code{imethod = 2} a good initial value for \code{ishape2.p} is needed to obtain a good estimate for the other parameter. } \item{gscale, gshape2.p}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{probs.y}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The 2-parameter inverse Lomax distribution is the 4-parameter generalized beta II distribution with shape parameters \eqn{a=q=1}. It is also the 3-parameter Dagum distribution with shape parameter \eqn{a=1}, as well as the beta distribution of the second kind with \eqn{q=1}. More details can be found in Kleiber and Kotz (2003). The inverse Lomax distribution has density \deqn{f(y) = p y^{p-1} / [b^p \{1 + y/b\}^{p+1}]}{% f(y) = p y^(p-1) / [b^p (1 + y/b)^(p+1)]} for \eqn{b > 0}, \eqn{p > 0}, \eqn{y \geq 0}{y >= 0}. Here, \eqn{b} is the scale parameter \code{scale}, and \eqn{p} is a shape parameter. The mean does not seem to exist; the \emph{median} is returned as the fitted values. This family function handles multiple responses. % 20140826 % The mean does not exist; \code{NA}s are returned as the fitted values. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \note{ See the notes in \code{\link{genbetaII}}.
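Also, integrating the density above gives the distribution function \eqn{F(y) = [y/(b+y)]^p}{F(y) = (y/(b+y))^p}, so that the median (and hence the fitted values) is \deqn{b / (2^{1/p} - 1).}{b / (2^(1/p) - 1).}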
} \seealso{ \code{\link{genbetaII}}, \code{\link{betaII}}, \code{\link{dagum}}, \code{\link{sinmad}}, \code{\link{fisk}}, \code{\link{lomax}}, \code{\link{paralogistic}}, \code{\link{inv.paralogistic}}, \code{\link{simulate.vlm}}. } \examples{ idata <- data.frame(y = rinv.lomax(n = 2000, scale = exp(2), exp(1))) fit <- vglm(y ~ 1, inv.lomax, data = idata, trace = TRUE) fit <- vglm(y ~ 1, inv.lomax(iscale = exp(3)), data = idata, trace = TRUE, epsilon = 1e-8, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/toppleUC.Rd0000644000176200001440000000427413135276753013624 0ustar liggesusers\name{Topple} \alias{Topple} \alias{dtopple} \alias{ptopple} \alias{qtopple} \alias{rtopple} \title{The Topp-Leone Distribution} \description{ Density, distribution function, quantile function and random generation for the Topp-Leone distribution. } \usage{ dtopple(x, shape, log = FALSE) ptopple(q, shape, lower.tail = TRUE, log.p = FALSE) qtopple(p, shape) rtopple(n, shape) } \arguments{ \item{x, q, p, n}{ Same as \code{\link[stats:Uniform]{Uniform}}. } \item{shape}{the (shape) parameter, which lies in \eqn{(0, 1)}.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dtopple} gives the density, \code{ptopple} gives the distribution function, \code{qtopple} gives the quantile function, and \code{rtopple} generates random deviates. } \references{ Topp, C. W. and Leone, F. C. (1955) A family of J-shaped frequency functions. \emph{Journal of the American Statistical Association}, \bold{50}, 209--219. } \author{ T. W. Yee } \details{ See \code{\link{topple}}, the \pkg{VGAM} family function for estimating the (shape) parameter \eqn{s} by maximum likelihood estimation, for the formula of the probability density function. } \note{ The Topp-Leone distribution is related to the triangle distribution. } \seealso{ \code{\link{topple}}, \code{\link{Triangle}}. } \examples{ \dontrun{ shape <- 0.7; x <- seq(0.02, 0.999, length = 300) plot(x, dtopple(x, shape = shape), type = "l", col = "blue", las = 1, main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles", ylab = "") abline(h = 0, col = "blue", lty = 2) lines(x, ptopple(x, shape = shape), type = "l", col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qtopple(probs, shape = shape) lines(Q, dtopple(Q, shape), col = "purple", lty = 3, type = "h") lines(Q, ptopple(Q, shape), col = "purple", lty = 3, type = "h") abline(h = probs, col = "purple", lty = 3) max(abs(ptopple(Q, shape) - probs)) # Should be zero } } \keyword{distribution} VGAM/man/gpdUC.Rd0000644000176200001440000000754413135276753013076 0ustar liggesusers\name{gpdUC} \alias{gpdUC} \alias{dgpd} \alias{pgpd} \alias{qgpd} \alias{rgpd} \title{The Generalized Pareto Distribution } \description{ Density, distribution function, quantile function and random generation for the generalized Pareto distribution (GPD) with location parameter \code{location}, scale parameter \code{scale} and shape parameter \code{shape}.
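For \eqn{\xi \neq 0}{xi != 0} the distribution function is \deqn{F(y) = 1 - [1 + \xi (y-\mu)/\sigma]^{-1/\xi}}{F(y) = 1 - [1 + xi (y-mu)/sigma]^(-1/xi)} on its support, while the \eqn{\xi = 0}{xi = 0} limit is the exponential distribution function \eqn{1 - \exp[-(y-\mu)/\sigma]}{1 - exp(-(y-mu)/sigma)}.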
} \usage{ dgpd(x, location = 0, scale = 1, shape = 0, log = FALSE, tolshape0 = sqrt(.Machine$double.eps)) pgpd(q, location = 0, scale = 1, shape = 0, lower.tail = TRUE, log.p = FALSE) qgpd(p, location = 0, scale = 1, shape = 0, lower.tail = TRUE, log.p = FALSE) rgpd(n, location = 0, scale = 1, shape = 0) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required.} \item{location}{the location parameter \eqn{\mu}{mu}.} \item{scale}{the (positive) scale parameter \eqn{\sigma}{sigma}.} \item{shape}{the shape parameter \eqn{\xi}{xi}.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Uniform]{punif}} or \code{\link[stats:Uniform]{qunif}}. } \item{tolshape0}{ Positive numeric. Threshold/tolerance value for testing whether \eqn{\xi}{xi} is zero. If the absolute value of the estimate of \eqn{\xi}{xi} is less than this value then it will be assumed zero and an exponential distribution will be used. } % \item{oobounds.log, giveWarning}{ % Numeric and logical. % The GPD distribution has support in the region satisfying % \code{(x-location)/scale > 0} % and % \code{1+shape*(x-location)/scale > 0}. % Outside that region, the % logarithm of the density is assigned \code{oobounds.log}, which % equates to a zero density. % It should not be assigned a positive number, and ideally is very negative. % Since \code{\link{gpd}} uses this function it is necessary % to return a finite value outside this region so as to allow % for half-stepping. Both arguments are in support of this. % This argument and others match those of \code{\link{gpd}}. % } } \value{ \code{dgpd} gives the density, \code{pgpd} gives the distribution function, \code{qgpd} gives the quantile function, and \code{rgpd} generates random deviates. } \references{ Coles, S. (2001) \emph{An Introduction to Statistical Modeling of Extreme Values}. London: Springer-Verlag. } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{gpd}}, the \pkg{VGAM} family function for estimating the two parameters by maximum likelihood estimation, for formulae and other details. Apart from \code{n}, all the above arguments may be vectors and are recycled to the appropriate length if necessary. } \note{ The default values of all three parameters, especially \eqn{\xi = 0}{xi = 0}, mean the default distribution is the exponential. Currently, these functions have different argument names compared with those in the \pkg{evd} package. } \seealso{ \code{\link{gpd}}, \code{\link[stats]{Exponential}}.
} \examples{ \dontrun{ loc <- 2; sigma <- 1; xi <- -0.4 x <- seq(loc - 0.2, loc + 3, by = 0.01) plot(x, dgpd(x, loc, sigma, xi), type = "l", col = "blue", ylim = c(0, 1), main = "Blue is density, red is cumulative distribution function", sub = "Purple are 5,10,...,95 percentiles", ylab = "", las = 1) abline(h = 0, col = "blue", lty = 2) lines(qgpd(seq(0.05, 0.95, by = 0.05), loc, sigma, xi), dgpd(qgpd(seq(0.05, 0.95, by = 0.05), loc, sigma, xi), loc, sigma, xi), col = "purple", lty = 3, type = "h") lines(x, pgpd(x, loc, sigma, xi), type = "l", col = "red") abline(h = 0, lty = 2) pgpd(qgpd(seq(0.05, 0.95, by = 0.05), loc, sigma, xi), loc, sigma, xi) } } \keyword{distribution} % oobounds.log = -Inf, giveWarning = FALSE VGAM/man/simplexUC.Rd0000644000176200001440000000362713135276753014003 0ustar liggesusers\name{Simplex } \alias{dsimplex} %\alias{psimplex} %\alias{qsimplex} \alias{rsimplex} \title{ Simplex Distribution } \description{ Density function, and random generation for the simplex distribution. } \usage{ dsimplex(x, mu = 0.5, dispersion = 1, log = FALSE) rsimplex(n, mu = 0.5, dispersion = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ Vector of quantiles. The support of the distribution is the interval \eqn{(0,1)}. } \item{mu, dispersion}{ Mean and dispersion parameters. The former lies in the interval \eqn{(0,1)} and the latter is positive. } \item{n, log}{ Same usage as \code{\link[stats:Uniform]{runif}}. } } \details{ The \pkg{VGAM} family function \code{\link{simplex}} fits this model; see that online help for more information. For \code{rsimplex()} the rejection method is used; it may be very slow if the density is highly peaked, and will fail if the density asymptotes at the boundary. } \value{ \code{dsimplex(x)} gives the density function, \code{rsimplex(n)} gives \eqn{n} random variates. } % \references{ % %} \author{ T. W. Yee } \seealso{ \code{\link{simplex}}. } \examples{ sigma <- c(4, 2, 1) # Dispersion parameter mymu <- c(0.1, 0.5, 0.7); xxx <- seq(0, 1, len = 501) \dontrun{ par(mfrow = c(3, 3)) # Figure 2.1 of Song (2007) for (iii in 1:3) for (jjj in 1:3) { plot(xxx, dsimplex(xxx, mymu[jjj], sigma[iii]), type = "l", col = "blue", xlab = "", ylab = "", main = paste("mu = ", mymu[jjj], ", sigma = ", sigma[iii], sep = "")) } } } \keyword{distribution} % mean(rsimplex(1000, mymu[2], sigma[2])) # Should be mu below % var(rsimplex(1000, mymu[2], sigma[2])) # Should be as below % (mu <- mymu[2]) % lambda <- (1 / sigma[2])^2 % mu * (1 - mu) - sqrt(lambda / 2) * exp(lambda / (mu^2 * (1 - mu)^2)) * % pgamma(lambda / (2 * mu^2 * (1 - mu)^2), 0.5, lower = FALSE) * gamma(0.5) VGAM/man/truncweibull.Rd0000644000176200001440000001074313135276753014606 0ustar liggesusers\name{truncweibull} \alias{truncweibull} %\alias{truncweibullff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Truncated Weibull Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter Weibull distribution with lower truncation. No observations should be censored. } \usage{ truncweibull(lower.limit = 1e-5, lAlpha = "loge", lBetaa = "loge", iAlpha = NULL, iBetaa = NULL, nrfs = 1, probs.y = c(0.2, 0.5, 0.8), imethod = 1, zero = "Betaa") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lower.limit}{ Positive lower truncation limits. Recycled to the same dimension as the response, going across rows first. 
The default, being close to 0, should mean effectively the same results as \code{\link{weibullR}} if there are no response values that are smaller. } \item{lAlpha, lBetaa}{ Parameter link functions applied to the (positive) parameters \code{Alpha} (called \eqn{\alpha} below) and (positive) \code{Betaa} (called \eqn{\beta} below). See \code{\link{Links}} for more choices. } \item{iAlpha, iBetaa}{ See \code{\link{CommonVGAMffArguments}}. } \item{imethod, nrfs, zero, probs.y}{ Details at \code{\link{weibullR}} and \code{\link{CommonVGAMffArguments}}. } } \details{ MLEs of the two parameters of the Weibull distribution are computed, subject to lower truncation. That is, all response values are greater than \code{lower.limit}, element-wise. For a particular observation, this can be any known positive value. This function is currently based directly on Wingo (1989) and his parameterization is used (it differs from \code{\link{weibullR}}). In particular, \eqn{\beta = a} and \eqn{\alpha = (1/b)^a} where \eqn{a} and \eqn{b} are as in \code{\link{weibullR}} and \code{\link[stats:Weibull]{dweibull}}. % More details about the Weibull density are \code{\link{weibullR}}. Upon fitting, the \code{extra} slot has a component called \code{lower.limit} which is of the same dimension as the response. The fitted values are the mean, which is computed using \code{\link{pgamma.deriv}} and \code{\link{pgamma.deriv.unscaled}}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Wingo, D. R. (1989) The left-truncated Weibull distribution: theory and computation. \emph{Statistical Papers}, \bold{30}(1), 39--48. } \author{ T. W. Yee } \note{ More improvements need to be made, e.g., initial values are currently based on no truncation. This \pkg{VGAM} family function handles multiple responses. } \section{Warning}{ This function may be converted to the same parameterization as \code{\link{weibullR}} at any time. Yet to do: one element of the EIM may be wrong (due to two interpretations of a formula; but it seems to work). Convergence is slower than usual and this may imply something is wrong; use argument \code{maxit}. In fact, it's probably because \code{\link{pgamma.deriv.unscaled}} is inaccurate at \code{q = 1} and \code{q = 2}. Also, convergence should be monitored, especially if the truncation means that a large proportion of the data is lost compared to an ordinary Weibull distribution. } \seealso{ \code{\link{weibullR}}, \code{\link[stats:Weibull]{dweibull}}, \code{\link{pgamma.deriv}}, \code{\link{pgamma.deriv.unscaled}}.
} \examples{ nn <- 5000; prop.lost <- 0.40 # Proportion lost to truncation wdata <- data.frame(x2 = runif(nn)) # Complete Weibull data wdata <- transform(wdata, Betaa = exp(1)) # > 2 is okay (satisfies regularity conds) wdata <- transform(wdata, Alpha = exp(0.5 - 1 * x2)) wdata <- transform(wdata, Shape = Betaa, # aaa = Betaa, # bbb = 1 / Alpha^(1 / Betaa), Scale = 1 / Alpha^(1 / Betaa)) wdata <- transform(wdata, y2 = rweibull(nn, shape = Shape, scale = Scale)) summary(wdata) lower.limit2 <- with(wdata, quantile(y2, prob = prop.lost)) # Proportion lost wdata <- subset(wdata, y2 > lower.limit2) # Smaller due to truncation fit1 <- vglm(y2 ~ x2, maxit = 100, trace = TRUE, truncweibull(lower.limit = lower.limit2), data = wdata) coef(fit1, matrix = TRUE) summary(fit1) vcov(fit1) head(fit1@extra$lower.limit) } \keyword{models} \keyword{regression} VGAM/man/otzeta.Rd0000644000176200001440000000325013135276753013370 0ustar liggesusers\name{otzeta} \alias{otzeta} %- Also NEED an '\alias' for EACH other topic documented here. \title{ One-truncated Zeta Distribution Family Function } \description{ Estimates the parameter of the 1-truncated zeta distribution. } \usage{ otzeta(lshape = "loge", ishape = NULL, gshape = exp((-4:3)/4), zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape, ishape, gshape, zero}{ Same as \code{\link{zetaff}}. } } \details{ The 1-truncated zeta distribution is the ordinary zeta distribution but with the probability of one being 0. Thus the other probabilities are scaled up (i.e., divided by \eqn{1-P[Y=1]}). The mean is returned by default as the fitted values. More details can be found at \code{\link{zetaff}}. Multiple responses are handled. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } %\references{ %} \author{ T. W. Yee } %\note{ % The \code{\link{zeta}} function may be used to compute values % of the zeta function. % % %} \seealso{ \code{\link{Otzeta}}, \code{\link{zetaff}}, \code{\link{oizeta}}, \code{\link{diffzeta}}, \code{\link{zeta}}, \code{\link{dzeta}}, \code{\link{hzeta}}, \code{\link{zipf}}. } \examples{ odata <- data.frame(x2 = runif(nn <- 1000)) # Artificial data odata <- transform(odata, shape = loge(-0.25 + x2, inverse = TRUE)) odata <- transform(odata, y1 = rotzeta(nn, shape)) with(odata, table(y1)) ofit <- vglm(y1 ~ x2, otzeta, data = odata, trace = TRUE, crit = "coef") coef(ofit, matrix = TRUE) } \keyword{models} \keyword{regression} % VGAM/man/quasibinomialff.Rd0000644000176200001440000001264613135276753015244 0ustar liggesusers\name{quasibinomialff} %\alias{quasibinomial} \alias{quasibinomialff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Quasi-Binomial Family Function } \description{ Family function for fitting generalized linear models to binomial responses, where the dispersion parameters are unknown. } \usage{ quasibinomialff(link = "logit", multiple.responses = FALSE, onedpar = !multiple.responses, parallel = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function. See \code{\link{Links}} for more choices. } \item{multiple.responses}{ Multiple responses? If \code{TRUE}, then the response is interpreted as \eqn{M} binary responses, where \eqn{M} is the number of columns of the response matrix. In this case, the response matrix should have zero/one values only. 
If \code{FALSE} and the response is a (2-column) matrix, then the number of successes is given in the first column and the second column is the number of failures. } \item{onedpar}{ One dispersion parameter? If \code{multiple.responses} is \code{TRUE}, then a separate dispersion parameter will be computed for each response (column), by default. Setting \code{onedpar = TRUE} will pool them so that there is only one dispersion parameter to be estimated. } \item{parallel}{ A logical or formula. Used only if \code{multiple.responses} is \code{TRUE}. This argument allows for the parallelism assumption whereby the regression coefficients for a variable are constrained to be equal over the \eqn{M} linear/additive predictors. } \item{zero}{ Can be an integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2,\ldots,\eqn{M}\}, where \eqn{M} is the number of columns of the matrix response. See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The final model is not fully estimated by maximum likelihood since the dispersion parameter is unknown (see pp. 124--128 of McCullagh and Nelder (1989) for more details). A dispersion parameter that is less/greater than unity corresponds to under-/over-dispersion relative to the binomial model. Over-dispersion is more common in practice. Setting \code{multiple.responses = TRUE} is necessary when fitting a Quadratic RR-VGLM (see \code{\link{cqo}}) because the response will be a matrix of \eqn{M} columns (e.g., one column per species). Then there will be \eqn{M} dispersion parameters (one per column of the response). } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}}, \code{\link{cqo}}, and \code{\link{cao}}. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ If \code{multiple.responses} is \code{FALSE} (the default), then the response can be of one of three formats: a factor (first level taken as success), a vector of proportions of success, or a 2-column matrix (first column = successes) of counts. The argument \code{weights} in the modelling function can also be specified. In particular, for a general vector of proportions, you will need to specify \code{weights} because the number of trials is needed. If \code{multiple.responses} is \code{TRUE}, then the matrix response can only be of one format: a matrix of 1s and 0s (1 = success). This function is only a front-end to the \pkg{VGAM} family function \code{binomialff()}; indeed, \code{quasibinomialff(...)} is equivalent to \code{binomialff(..., dispersion = 0)}. Here, the argument \code{dispersion = 0} signifies that the dispersion parameter is to be estimated. Regardless of whether the dispersion parameter is to be estimated or not, its value can be seen from the output from the \code{summary()} of the object. % With the introduction of name spaces for the \pkg{VGAM} package, % \code{"ff"} can be dropped for this family function. } \section{Warning }{ The log-likelihood pertaining to the ordinary family is used to test for convergence during estimation, and is printed out in the summary.
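As an aside, a minimal sketch of the equivalence with \code{binomialff(..., dispersion = 0)} stated in the note above (the \code{hunua} data of the examples below is assumed; \code{fit.q} and \code{fit.b} are illustrative names): \preformatted{fit.q <- vglm(agaaus ~ sqrt(altitude), quasibinomialff, data = hunua)
fit.b <- vglm(agaaus ~ sqrt(altitude), binomialff(dispersion = 0), data = hunua)
max(abs(coef(fit.q) - coef(fit.b)))  # Should be 0}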
} \seealso{ \code{\link{binomialff}}, \code{\link{rrvglm}}, \code{\link{cqo}}, \code{\link{cao}}, \code{\link{logit}}, \code{\link{probit}}, \code{\link{cloglog}}, \code{\link{cauchit}}, \code{\link{poissonff}}, \code{\link{quasipoissonff}}, \code{\link[stats]{quasibinomial}}. } \examples{ quasibinomialff() quasibinomialff(link = "probit") # Nonparametric logistic regression hunua <- transform(hunua, a.5 = sqrt(altitude)) # Transformation of altitude fit1 <- vglm(agaaus ~ poly(a.5, 2), quasibinomialff, hunua) fit2 <- vgam(agaaus ~ s(a.5, df = 2), quasibinomialff, hunua) \dontrun{ plot(fit2, se = TRUE, llwd = 2, lcol = "orange", scol = "orange", xlab = "sqrt(altitude)", ylim = c(-3, 1), main = "GAM and quadratic GLM fitted to species data") plotvgam(fit1, se = TRUE, lcol = "blue", scol = "blue", add = TRUE, llwd = 2) } fit1@misc$dispersion # dispersion parameter logLik(fit1) # Here, the dispersion parameter defaults to 1 fit0 <- vglm(agaaus ~ poly(a.5, 2), binomialff, hunua) fit0@misc$dispersion # dispersion parameter } \keyword{models} \keyword{regression} %AIC(fit1) %AIC(fit0) VGAM/man/mix2poisson.Rd0000644000176200001440000001226013135276753014355 0ustar liggesusers\name{mix2poisson} \alias{mix2poisson} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Mixture of Two Poisson Distributions } \description{ Estimates the three parameters of a mixture of two Poisson distributions by maximum likelihood estimation. } \usage{ mix2poisson(lphi = "logit", llambda = "loge", iphi = 0.5, il1 = NULL, il2 = NULL, qmu = c(0.2, 0.8), nsimEIM = 100, zero = "phi") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lphi, llambda}{ Link functions for the parameters \eqn{\phi}{phi} and \eqn{\lambda}{lambda}. See \code{\link{Links}} for more choices. } % \item{ephi, el1, el2}{ % ephi = list(), el1 = list(), el2 = list(), % List. Extra argument for each of the links. % See \code{earg} in \code{\link{Links}} for general information. % } \item{iphi}{ Initial value for \eqn{\phi}{phi}, whose value must lie between 0 and 1. } \item{il1, il2}{ Optional initial values for \eqn{\lambda_1}{lambda1} and \eqn{\lambda_2}{lambda2}. These values must be positive. The default is to compute initial values internally using the argument \code{qmu}. % If these arguments are supplied then practical experience % suggests they should be quite well-separated. } \item{qmu}{ Vector with two values giving the probabilities relating to the sample quantiles for obtaining initial values for \eqn{\lambda_1}{lambda1} and \eqn{\lambda_2}{lambda2}. The two values are fed in as the \code{probs} argument of \code{\link[stats]{quantile}}. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ The probability function can be loosely written as \deqn{P(Y=y) = \phi \, Poisson(\lambda_1) + (1-\phi) \, Poisson(\lambda_2)}{% P(Y=y) = phi * Poisson(lambda1) + (1-phi) * Poisson(lambda2)} where \eqn{\phi}{phi} is the probability an observation belongs to the first group, and \eqn{y=0,1,2,\ldots}{y=0,1,2,...}. The parameter \eqn{\phi}{phi} satisfies \eqn{0 < \phi < 1}{0 < phi < 1}. The mean of \eqn{Y} is \eqn{\phi \lambda_1 + (1-\phi) \lambda_2}{phi*lambda1 + (1-phi)*lambda2} and this is returned as the fitted values. By default, the three linear/additive predictors are \eqn{(logit(\phi), \log(\lambda_1), \log(\lambda_2))^T}{(logit(phi), log(lambda1), log(lambda2))^T}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } % \references{ ~put references to the literature/web site here ~ } \section{Warning }{ This \pkg{VGAM} family function requires care for a successful application. In particular, good initial values are required because of the presence of local solutions. Therefore running this function with several different combinations of arguments such as \code{iphi}, \code{il1}, \code{il2}, \code{qmu} is highly recommended. Graphical methods such as \code{\link[graphics]{hist}} can be used as an aid. With grouped data (i.e., using the \code{weights} argument) one has to use a large value of \code{nsimEIM}; see the example below. This \pkg{VGAM} family function is experimental and should be used with care. } \author{ T. W. Yee } \note{ The response must be integer-valued since \code{\link[stats]{dpois}} is invoked. Fitting this model successfully to data can be difficult due to local solutions and ill-conditioned data. It pays to fit the model several times with different initial values, and check that the best fit looks reasonable. Plotting the results is recommended. This function works better as \eqn{\lambda_1}{lambda1} and \eqn{\lambda_2}{lambda2} become more different. The default control argument \code{trace = TRUE} is to encourage monitoring convergence. } \seealso{ \code{\link[stats:Poisson]{rpois}}, \code{\link{poissonff}}, \code{\link{mix2normal}}. } \examples{ \dontrun{ # Example 1: simulated data nn <- 1000 mu1 <- exp(2.5) # Also known as lambda1 mu2 <- exp(3) (phi <- logit(-0.5, inverse = TRUE)) mdata <- data.frame(y = rpois(nn, ifelse(runif(nn) < phi, mu1, mu2))) mfit <- vglm(y ~ 1, mix2poisson, data = mdata) coef(mfit, matrix = TRUE) # Compare the results with the truth round(rbind('Estimated' = Coef(mfit), 'Truth' = c(phi, mu1, mu2)), digits = 2) ty <- with(mdata, table(y)) plot(names(ty), ty, type = "h", main = "Orange=estimate, blue=truth", ylab = "Frequency", xlab = "y") abline(v = Coef(mfit)[-1], lty = 2, col = "orange", lwd = 2) abline(v = c(mu1, mu2), lty = 2, col = "blue", lwd = 2) # Example 2: London Times data (Lange, 1997, p.31) ltdata1 <- data.frame(deaths = 0:9, freq = c(162, 267, 271, 185, 111, 61, 27, 8, 3, 1)) ltdata2 <- data.frame(y = with(ltdata1, rep(deaths, freq))) # Usually this does not work well unless nsimEIM is large Mfit <- vglm(deaths ~ 1, weight = freq, data = ltdata1, mix2poisson(iphi = 0.3, il1 = 1, il2 = 2.5, nsimEIM = 5000)) # This works better in general Mfit <- vglm(y ~ 1, mix2poisson(iphi = 0.3, il1 = 1, il2 = 2.5), data = ltdata2) coef(Mfit, matrix = TRUE) Coef(Mfit) } } \keyword{models} \keyword{regression} VGAM/man/poisson.points.Rd0000644000176200001440000000753613135276753015074 0ustar liggesusers\name{poisson.points} \alias{poisson.points} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Poisson-points-on-a-plane/volume Distances Distribution } \description{ Estimating the density parameter based on the distances from a fixed point to the u-th nearest point, in a plane or volume. } \usage{ poisson.points(ostatistic, dimension = 2, link = "loge", idensity = NULL, imethod = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{ostatistic}{ Order statistic. A single positive value, usually an integer. For example, the value 5 means the responses are the distances to the fifth nearest point (usually over many planes or volumes).
Non-integers are allowed because the value 1.5 coincides with \code{\link{maxwell}} when \code{dimension = 2}. Note: if \code{ostatistic = 1} and \code{dimension = 2} then this \pkg{VGAM} family function coincides with \code{\link{rayleigh}}. } \item{dimension}{ The value 2 or 3; 2 meaning a plane and 3 meaning a volume. } \item{link}{ Parameter link function applied to the (positive) density parameter, called \eqn{\lambda}{lambda} below. See \code{\link{Links}} for more choices. } \item{idensity}{ Optional initial value for the parameter. A \code{NULL} value means a value is obtained internally. Use this argument if convergence failure occurs. } \item{imethod}{ An integer with value \code{1} or \code{2} which specifies the initialization method for \eqn{\lambda}{lambda}. If convergence failure occurs, try the other value and/or specify a value for \code{idensity}. } } \details{ Suppose the number of points in any region of area \eqn{A} of the plane is a Poisson random variable with mean \eqn{\lambda A}{lambda*A} (i.e., \eqn{\lambda}{lambda} is the \emph{density} of the points). Given a fixed point \eqn{P}, define \eqn{D_1}, \eqn{D_2},\ldots to be the distances to the nearest point to \eqn{P}, the second nearest to \eqn{P}, etc. This \pkg{VGAM} family function estimates \eqn{\lambda}{lambda} since the probability density function for \eqn{D_u} is easily derived, \eqn{u=1,2,\ldots}{u=1,2,...}. Here, \eqn{u} corresponds to the argument \code{ostatistic}. Similarly, suppose the number of points in any volume \eqn{V} is a Poisson random variable with mean \eqn{\lambda V}{lambda*V} where, once again, \eqn{\lambda}{lambda} is the \emph{density} of the points. This \pkg{VGAM} family function estimates \eqn{\lambda}{lambda} by specifying the argument \code{ostatistic} and using \code{dimension = 3}. The mean of \eqn{D_u} is returned as the fitted values. Newton-Raphson is the same as Fisher-scoring. } \section{Warning}{ Convergence may be slow if the initial values are far from the solution. This often corresponds to the situation when the response values are all close to zero, i.e., there is a high density of points. Formulae such as the means have not been fully checked. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } %\references{ %} \author{ T. W. Yee } %\note{ %} \seealso{ \code{\link{poissonff}}, \code{\link{maxwell}}, \code{\link{rayleigh}}. } \examples{ pdata <- data.frame(y = rgamma(10, shape = exp(-1))) # Not proper data! ostat <- 2 fit <- vglm(y ~ 1, poisson.points(ostat, 2), data = pdata, trace = TRUE, crit = "coef") fit <- vglm(y ~ 1, poisson.points(ostat, 3), data = pdata, trace = TRUE, crit = "coef") # Slow convergence? fit <- vglm(y ~ 1, poisson.points(ostat, 3, idensi = 1), data = pdata, trace = TRUE, crit = "coef") head(fitted(fit)) with(pdata, mean(y)) coef(fit, matrix = TRUE) Coef(fit) } \keyword{models} \keyword{regression} VGAM/man/genrayleighUC.Rd0000644000176200001440000000461013135276753014611 0ustar liggesusers\name{genray} \alias{genray} \alias{dgenray} \alias{pgenray} \alias{qgenray} \alias{rgenray} \title{The Generalized Rayleigh Distribution} \description{ Density, distribution function, quantile function and random generation for the generalized Rayleigh distribution.
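Specifically, writing \eqn{b} for \code{scale} and \eqn{s} for \code{shape}, the distribution function is taken here to be \deqn{F(x) = [1 - \exp\{-(x/b)^2\}]^s, \ \ \ x > 0,}{F(x) = [1 - exp(-(x/b)^2)]^s, x > 0,} i.e., the Kundu and Raqab (2005) parameterization mentioned in the note below, but with \eqn{b} the reciprocal of their scale parameter.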
} \usage{ dgenray(x, scale = 1, shape, log = FALSE) pgenray(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) qgenray(p, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) rgenray(n, scale = 1, shape) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{scale, shape}{ positive scale and shape parameters. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dgenray} gives the density, \code{pgenray} gives the distribution function, \code{qgenray} gives the quantile function, and \code{rgenray} generates random deviates. } \author{ Kai Huang and J. G. Lauder and T. W. Yee } \details{ See \code{\link{genrayleigh}}, the \pkg{VGAM} family function for estimating the parameters, for the formula of the probability density function and other details. } \note{ We define \code{scale} as the reciprocal of the scale parameter used by Kundu and Raqab (2005). } \seealso{ \code{\link{genrayleigh}}, \code{\link{rayleigh}}. } \examples{ \dontrun{ shape <- 0.5; Scale <- 1; nn <- 501 x <- seq(-0.10, 3.0, len = nn) plot(x, dgenray(x, shape, scale = Scale), type = "l", las = 1, ylim = c(0, 1.2), ylab = paste("[dp]genray(shape = ", shape, ", scale = ", Scale, ")"), col = "blue", cex.main = 0.8, main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles") lines(x, pgenray(x, shape, scale = Scale), col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qgenray(probs, shape, scale = Scale) lines(Q, dgenray(Q, shape, scale = Scale), col = "purple", lty = 3, type = "h") lines(Q, pgenray(Q, shape, scale = Scale), col = "purple", lty = 3, type = "h") abline(h = probs, col = "purple", lty = 3) max(abs(pgenray(Q, shape, scale = Scale) - probs)) # Should be 0 } } \keyword{distribution} VGAM/man/plotvgam.Rd0000644000176200001440000001343013135276753013714 0ustar liggesusers\name{plotvgam} \alias{plotvgam} \alias{plot.vgam} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Default VGAM Plotting } \description{ Component functions of a \code{\link{vgam-class}} object can be plotted with \code{plotvgam()}. These are on the scale of the linear/additive predictor. } \usage{ plotvgam(x, newdata = NULL, y = NULL, residuals = NULL, rugplot = TRUE, se = FALSE, scale = 0, raw = TRUE, offset.arg = 0, deriv.arg = 0, overlay = FALSE, type.residuals = c("deviance", "working", "pearson", "response"), plot.arg = TRUE, which.term = NULL, which.cf = NULL, control = plotvgam.control(...), varxij = 1, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ A fitted \pkg{VGAM} object, e.g., produced by \code{\link{vgam}}, \code{\link{vglm}}, or \code{\link{rrvglm}}. } \item{newdata}{ Data frame. May be used to reconstruct the original data set. } \item{y}{ Unused. } \item{residuals}{ Logical. If \code{TRUE} then residuals are plotted. See \code{type.residuals} } \item{rugplot}{ Logical. If \code{TRUE} then a rug plot is plotted at the foot of each plot. These values are jittered to expose ties. } \item{se}{ Logical. If \code{TRUE} then approximate \eqn{\pm 2}{+-2} pointwise standard error bands are included in the plot. } \item{scale}{ Numerical. By default, each plot will have its own y-axis scale. 
However, by specifying a value, each plot's y-axis scale will be at
least \code{scale} wide.
}
\item{raw}{
Logical. If \code{TRUE} then the smooth functions are those obtained
directly by the algorithm, and are plotted without having to
premultiply with the constraint matrices.
If \code{FALSE} then the smooth functions have been premultiplied by
the constraint matrices.
The \code{raw} argument is directly fed into \code{predict.vgam()}.
}
\item{offset.arg}{
Numerical vector of length \eqn{r}.
These are added to the component functions.
Useful for separating out the functions when \code{overlay} is
\code{TRUE}.
If \code{overlay} is \code{TRUE} and there is one covariate then
using the intercept values as the offsets can be a good idea.
}
\item{deriv.arg}{
Numerical. The order of the derivative.
Should be assigned a small integer such as 0, 1, 2.
It applies only to \code{s()} terms; the derivative of the smooth
function is plotted.
}
\item{overlay}{
Logical. If \code{TRUE} then component functions of the same
covariate are overlaid on each other.
The functions are centered, so \code{offset.arg} can be useful when
\code{overlay} is \code{TRUE}.
}
\item{type.residuals}{
If \code{residuals} is \code{TRUE} then the first possible value of
this vector is used to specify the type of residual.
}
\item{plot.arg}{
Logical. If \code{FALSE} then no plot is produced.
}
\item{which.term}{
Character or integer vector containing all terms to be plotted, e.g.,
\code{which.term = c("s(age)", "s(height)")} or
\code{which.term = c(2, 5, 9)}.
By default, all are plotted.
}
\item{which.cf}{
An integer-valued vector specifying which linear/additive predictors
are to be plotted.
The values must be from the set \{1,2,\ldots,\eqn{r}\}.
By default, all are plotted.
}
\item{control}{
Other control parameters.
See \code{\link{plotvgam.control}}.
}
\item{\dots}{
Other arguments that can be fed into \code{\link{plotvgam.control}}.
This includes line colors, line widths, line types, etc.
}
\item{varxij}{
Positive integer.
If the \code{xij} argument of \code{\link{vglm.control}} was used,
this chooses which inner argument the component is plotted against.
This argument is related to \code{raw = TRUE} and terms such as
\code{NS(dum1, dum2)} and constraint matrices that have more than one
column.
The default would plot the smooth against \code{dum1} but setting
\code{varxij = 2} could mean plotting the smooth against \code{dum2}.
See the \pkg{VGAM} website for further information.
}
}
\details{
In this help file \eqn{M} is the number of linear/additive
predictors, and \eqn{r} is the number of columns of the constraint
matrix of interest.

Many of \code{plotvgam()}'s options can be found in
\code{\link{plotvgam.control}}, e.g., line types, line widths,
colors.
}
\value{
The original object, but with the \code{preplot} slot of the object
assigned information regarding the plot.
}
%\references{
%
%
%Yee, T. W. and Wild, C. J. (1996)
%Vector generalized additive models.
%\emph{Journal of the Royal Statistical Society, Series B, Methodological},
%\bold{58}, 481--493.
%
%
%Documentation accompanying the \pkg{VGAM} package at
%\url{http://www.stat.auckland.ac.nz/~yee}
%contains further information and examples.
%
%
%}
\author{ Thomas W. Yee }
\note{
While \code{plot(fit)} will work if \code{class(fit)} is
\code{"vgam"}, it is necessary to use \code{plotvgam(fit)} explicitly
otherwise.

\code{plotvgam()} is quite buggy at the moment.
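% An illustrative sketch (not run), assuming a hypothetical vgam() fit
% with two smooth terms; one term is selected with which.term and a
% minimum y-axis width is forced with the scale argument:
% fit2 <- vgam(y ~ s(x2) + s(x3), poissonff, data = mydata)  # mydata is hypothetical
% plotvgam(fit2, se = TRUE, which.term = "s(x2)", scale = 4)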
% \code{plotvgam()} works in a similar
% manner to S-PLUS's \code{plot.gam()}, however, there is no
% options for interactive construction of the plots yet.
}
\seealso{
\code{\link{vgam}},
\code{\link{plotvgam.control}},
\code{predict.vgam},
\code{\link{plotvglm}},
\code{\link{vglm}}.
}
\examples{
coalminers <- transform(coalminers, Age = (age - 42) / 5)
fit <- vgam(cbind(nBnW, nBW, BnW, BW) ~ s(Age),
            binom2.or(zero = NULL), data = coalminers)
\dontrun{ par(mfrow = c(1,3))
plot(fit, se = TRUE, ylim = c(-3, 2), las = 1)
plot(fit, se = TRUE, which.cf = 1:2, lcol = "blue", scol = "orange",
     ylim = c(-3, 2))
plot(fit, se = TRUE, which.cf = 1:2, lcol = "blue", scol = "orange",
     overlay = TRUE)
}
}
\keyword{models}
\keyword{regression}
\keyword{smooth}
\keyword{graphs}
VGAM/man/binormal.Rd0000644000176200001440000000646413135276753013673 0ustar liggesusers\name{binormal}
\alias{binormal}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Bivariate Normal Distribution Family Function }
\description{
Maximum likelihood estimation of the five parameters of a bivariate
normal distribution.
}
\usage{
binormal(lmean1 = "identitylink", lmean2 = "identitylink",
         lsd1 = "loge", lsd2 = "loge", lrho = "rhobit",
         imean1 = NULL, imean2 = NULL, isd1 = NULL, isd2 = NULL,
         irho = NULL, imethod = 1, eq.mean = FALSE, eq.sd = FALSE,
         zero = c("sd", "rho"))
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{lmean1, lmean2, lsd1, lsd2, lrho}{
Link functions applied to the means, standard deviations and
\code{rho} parameters.
See \code{\link{Links}} for more choices.
As the standard deviations are positive quantities, a log link is
their default.
}
\item{imean1, imean2, isd1, isd2, irho, imethod, zero}{
See \code{\link{CommonVGAMffArguments}} for more information.
}
\item{eq.mean, eq.sd}{
Logical or formula.
Constrains the means or the standard deviations to be equal.
% 20150530; FALSE now; they work separately:
% Only one of these arguments may be assigned a value.
}
}
\details{
For the bivariate normal distribution, this fits a linear model (LM)
to the means, and by default, the other parameters are
intercept-only.
The response should be a two-column matrix.
The correlation parameter is \code{rho}, which lies between \eqn{-1}
and \eqn{1} (thus the \code{\link{rhobit}} link is a reasonable
choice).
The fitted means are returned as the fitted values, which are in the
form of a two-column matrix.
Fisher scoring is implemented.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}
and \code{\link{vgam}}.
}
\section{Warning}{
This function may be renamed to \code{normal2()} or something like
that at a later date.
}
%\references{
%
%}
\author{ T. W. Yee }
\note{
If both equal means and equal standard deviations are desired then
use something like
\code{constraints = list("(Intercept)" = matrix(c(1,1,0,0,0, 0,0,1,1,0, 0,0,0,0,1), 5, 3))}
and maybe \code{zero = NULL} etc.
}
\seealso{
\code{\link{uninormal}},
\code{\link{gaussianff}},
\code{\link{pbinorm}},
\code{\link{bistudentt}}.
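% A sketch (not run) of the \note above, with a two-column response as
% in the example below; the columns of cm give a common mean, a common
% sd, and rho:
% cm <- matrix(c(1,1,0,0,0,  0,0,1,1,0,  0,0,0,0,1), 5, 3)
% fit0 <- vglm(cbind(y1, y2) ~ 1, binormal(zero = NULL), data = bdata,
%              constraints = list("(Intercept)" = cm))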
% \code{\link{pnorm2}},
}
\examples{
set.seed(123); nn <- 1000
bdata <- data.frame(x2 = runif(nn), x3 = runif(nn))
bdata <- transform(bdata,
                   y1 = rnorm(nn, 1 + 2 * x2),
                   y2 = rnorm(nn, 3 + 4 * x2))
fit1 <- vglm(cbind(y1, y2) ~ x2,
             binormal(eq.sd = TRUE), data = bdata, trace = TRUE)
coef(fit1, matrix = TRUE)
constraints(fit1)
summary(fit1)

# Estimated P(Y1 <= y1, Y2 <= y2) under the fitted model
var1  <- loge(2 * predict(fit1)[, "loge(sd1)"], inverse = TRUE)
var2  <- loge(2 * predict(fit1)[, "loge(sd2)"], inverse = TRUE)
cov12 <- rhobit(predict(fit1)[, "rhobit(rho)"], inverse = TRUE)
head(with(bdata, pbinorm(y1, y2,
                         mean1 = predict(fit1)[, "mean1"],
                         mean2 = predict(fit1)[, "mean2"],
                         var1 = var1, var2 = var2, cov12 = cov12)))
}
\keyword{models}
\keyword{regression}
VGAM/man/moffset.Rd0000644000176200001440000000733613135276753013532 0ustar liggesusers\name{moffset}
\alias{moffset}
\title{ Matrix Offset }
\description{
Modify a matrix by shifting successive elements.
}
\usage{
moffset(mat, roffset = 0, coffset = 0, postfix = "",
        rprefix = "Row.", cprefix = "Col.")
}
\arguments{
\item{mat}{
Data frame or matrix.
This ought to have at least three rows and three columns.
The elements are shifted in the order of \code{c(mat)}, i.e., going
down successive columns, as the columns go from left to right.
Wrapping of values is done.
}
\item{roffset, coffset}{
Numeric or character.
If numeric, the amount of shift (offset) for each row and column.
The default is no change to \code{mat}.
If character, the offset is computed by matching with the row or
column names.
For example, for \code{\link{alcoff}}, setting \code{roffset = "6"}
means that the effective day's data set starts from 6:00 am, and this
wraps around to include midnight to 05.59 am on the next day.
}
\item{postfix}{
Character.
Modified rows and columns are renamed by pasting this argument to the
end of each name.
The default is no change.
}
\item{rprefix, cprefix}{
Same as \code{\link{rcim}}.
}
}
\details{
This function allows a matrix to be rearranged so that element
(\code{roffset} + 1, \code{coffset} + 1) becomes the (1, 1) element.
The elements are assumed to be ordered in the same way as the
elements of \code{c(mat)}.

This function is applicable to, e.g., \code{\link{alcoff}}, where it
is useful to define the \emph{effective day} as starting at some
other hour than midnight, e.g., 6.00am.
This is because partying on Friday night continues on into Saturday
morning; therefore it is more interpretable to use the effective day
when considering a daily effect.

This is a data preprocessing function for \code{\link{rcim}} and
\code{\link{plotrcim0}}.

The difference between \code{\link{Rcim}} and \code{\link{moffset}}
is that \code{\link{Rcim}} only reorders the levels of the rows and
columns so that the data is shifted but not moved.
That is, a value in one row stays in that row, and ditto for a
column.
But in \code{\link{moffset}} values in one column can be moved to a
previous column.
See the examples below.
}
\value{
A matrix of the same dimension as its input.
}
\author{
T. W. Yee,
Alfian F. Hadi.
}
\note{
% This function was originally for a 24 x 7 dimensional matrix
% (24 hours of the day by 7 days per week) such as \code{\link{alcoff}}.
% Of course, this function can be applied to any moderately
% large matrix.
The input \code{mat} should have row names and column names.
}
\seealso{
\code{\link{Rcim}},
\code{\link{rcim}},
\code{\link{plotrcim0}},
\code{\link{alcoff}},
\code{\link{crashi}}.
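% A toy sketch (not run) of the rearrangement: with roffset = 1 and
% coffset = 0, element (2, 1) becomes the new (1, 1) element, and
% values wrap around in the order of c(mat):
% m <- matrix(1:9, 3, 3, dimnames = list(paste0("Row.", 1:3),
%                                        paste0("Col.", 1:3)))
% moffset(m, roffset = 1)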
} \examples{ moffset(alcoff, 3, 2, "*") # Some day's data is moved to previous day. Rcim(alcoff, 3 + 1, 2 + 1) # Data does not move as much. alcoff # Original data moffset(alcoff, 3, 2, "*") - Rcim(alcoff, 3+1, 2+1) # Note the differences # An 'effective day' data set: alcoff.e <- moffset(alcoff, roffset = "6", postfix = "*") fit.o <- rcim(alcoff) # default baselines are first row and col fit.e <- rcim(alcoff.e) # default baselines are first row and col \dontrun{ par(mfrow = c(2, 2), mar = c(9, 4, 2, 1)) plot(fit.o, rsub = "Not very interpretable", csub = "Not very interpretable") plot(fit.e, rsub = "More interpretable", csub = "More interpretable") } # Some checking all.equal(moffset(alcoff), alcoff) # Should be no change moffset(alcoff, 1, 1, "*") moffset(alcoff, 2, 3, "*") moffset(alcoff, 1, 0, "*") moffset(alcoff, 0, 1, "*") moffset(alcoff, "6", "Mon", "*") # This one is good # Customise row and column baselines fit2 <- rcim(Rcim(alcoff.e, rbaseline = "11", cbaseline = "Mon*")) } VGAM/man/vplot.profile.Rd0000644000176200001440000000264513135276753014674 0ustar liggesusers% file MASS/man/plot.profile.Rd % copyright (C) 1999-2008 W. N. Venables and B. D. Ripley % \name{vplot.profile} \alias{vplot.profile} \alias{vpairs.profile} \title{Plotting Functions for 'profile' Objects} \description{ \code{\link{plot}} and \code{\link{pairs}} methods for objects of class \code{"profile"}, but renamed as \code{vplot} and \code{vpairs}. % \code{\link{vplot}} and \code{\link{vpairs}} methods for objects of % class \code{"profile"}. } \usage{ vplot.profile(x, ...) vpairs.profile(x, colours = 2:3, ...) } \arguments{ \item{x}{an object inheriting from class \code{"profile"}.} \item{colours}{Colours to be used for the mean curves conditional on \code{x} and \code{y} respectively.} \item{\dots}{arguments passed to or from other methods.} } \details{ See \code{\link[MASS]{profile.glm}} for details. } \author{ T. W. Yee adapted this function from \code{\link[MASS]{profile.glm}}, written originally by D. M. Bates and W. N. Venables. (For S in 1996.) } \seealso{ \code{\link{profilevglm}}, \code{\link{confintvglm}}, \code{\link[MASS]{profile.glm}}, \code{\link[stats]{profile.nls}}. } \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) fit1 <- vglm(cbind(normal, mild, severe) ~ let, acat, trace = TRUE, data = pneumo) pfit1 <- profile(fit1, trace = FALSE) \dontrun{ vplot.profile(pfit1) vpairs.profile(pfit1) } } \keyword{models} \keyword{hplot} VGAM/man/is.zero.Rd0000644000176200001440000000265013135276753013456 0ustar liggesusers\name{is.zero} \alias{is.zero} \alias{is.zero.matrix} \alias{is.zero.vglm} \title{Zero Constraint Matrices} \description{ Returns a logical vector from a test of whether an object such as a matrix or VGLM object corresponds to a 'zero' assumption. } \usage{ is.zero.matrix(object, \dots) is.zero.vglm(object, \dots) } \arguments{ \item{object}{ an object such as a coefficient matrix of a \code{\link{vglm}} object, or a \code{\link{vglm}} object. } \item{\dots}{ additional optional arguments. Currently unused. } } \details{ These functions test the effect of the \code{zero} argument on a \code{\link{vglm}} object or the coefficient matrix of a \code{\link{vglm}} object. The latter is obtained by \code{coef(vglmObject, matrix = TRUE)}. } \value{ A vector of logicals, testing whether each linear/additive predictor has the \code{zero} argument applied to it. 
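% Roughly (a sketch, not run, using the fit from the example below),
% the test for a coefficient matrix reduces to checking that all the
% non-intercept coefficients of each linear/additive predictor are
% zero:
% cmat <- coef(fit, matrix = TRUE)  # One row per term, intercept first
% apply(cmat[-1, , drop = FALSE], 2, function(col) all(col == 0))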
It is \code{TRUE} if that linear/additive predictor is
intercept-only, i.e., all other regression coefficients are set to
zero.
No checking is done of the intercept term at all, i.e., whether it
was estimated in the first place.
}
\seealso{
\code{\link{constraints}},
\code{\link{vglm}}.
}
\examples{
coalminers <- transform(coalminers, Age = (age - 42) / 5)
fit <- vglm(cbind(nBnW,nBW,BnW,BW) ~ Age,
            binom2.or(zero = NULL), coalminers)
is.zero(fit)
is.zero(coef(fit, matrix = TRUE))
}
\keyword{models}
\keyword{regression}
VGAM/man/bifrankcopUC.Rd0000644000176200001440000000322213135276753014427 0ustar liggesusers\name{Frank}
\alias{Frank}
\alias{dbifrankcop}
\alias{pbifrankcop}
\alias{rbifrankcop}
\title{Frank's Bivariate Distribution}
\description{
Density, distribution function, and random generation for the (one
parameter) bivariate Frank distribution.
}
\usage{
dbifrankcop(x1, x2, apar, log = FALSE)
pbifrankcop(q1, q2, apar)
rbifrankcop(n, apar)
}
\arguments{
\item{x1, x2, q1, q2}{vector of quantiles.}
\item{n}{number of observations.
Same as in \code{\link[stats]{runif}}.
}
\item{apar}{the positive association parameter. }
\item{log}{
Logical.
If \code{log = TRUE} then the logarithm of the density is returned.
}
}
\value{
\code{dbifrankcop} gives the density,
\code{pbifrankcop} gives the distribution function, and
\code{rbifrankcop} generates random deviates (a two-column matrix).
}
\references{
Genest, C. (1987)
Frank's family of bivariate distributions.
\emph{Biometrika}, \bold{74}, 549--555.
}
\author{ T. W. Yee }
\details{
See \code{\link{bifrankcop}}, the \pkg{VGAM} family functions for
estimating the association parameter by maximum likelihood
estimation, for the formula of the cumulative distribution function
and other details.
}
%\note{
%}
\seealso{
\code{\link{bifrankcop}}.
}
\examples{
\dontrun{N <- 100; apar <- exp(2)
xx <- seq(-0.30, 1.30, len = N)
ox <- expand.grid(xx, xx)
zedd <- dbifrankcop(ox[, 1], ox[, 2], apar = apar)
contour(xx, xx, matrix(zedd, N, N))
zedd <- pbifrankcop(ox[, 1], ox[, 2], apar = apar)
contour(xx, xx, matrix(zedd, N, N))

plot(rr <- rbifrankcop(n = 3000, apar = exp(4)))
par(mfrow = c(1, 2))
hist(rr[, 1]); hist(rr[, 2])  # Should be uniform
}
}
\keyword{distribution}
VGAM/man/plotdeplot.lmscreg.Rd0000644000176200001440000000613313135276753015706 0ustar liggesusers\name{plotdeplot.lmscreg}
\alias{plotdeplot.lmscreg}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Density Plot for LMS Quantile Regression }
\description{
Plots a probability density function associated with an LMS quantile
regression.
}
\usage{
plotdeplot.lmscreg(answer, y.arg, add.arg = FALSE,
                   xlab = "", ylab = "density",
                   xlim = NULL, ylim = NULL,
                   llty.arg = par()$lty, col.arg = par()$col,
                   llwd.arg = par()$lwd, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{answer}{
Output from functions of the form \code{deplot.???} where \code{???}
is the name of the \pkg{VGAM} LMS family function, e.g.,
\code{lms.yjn}.
See below for details.
}
\item{y.arg}{
Numerical vector.
The values of the response variable at which to evaluate the density.
This should be a grid that is fine enough to ensure the plotted
curves are smooth.
}
\item{add.arg}{
Logical.
Add the density to an existing plot?
}
\item{xlab, ylab}{
Caption for the x- and y-axes.
See \code{\link[graphics]{par}}.
}
\item{xlim, ylim}{
Limits of the x- and y-axes.
See \code{\link[graphics]{par}}.
}
\item{llty.arg}{
Line type.
See the \code{lty} argument of \code{\link[graphics]{par}}.
}
\item{col.arg}{
Line color.
See the \code{col} argument of \code{\link[graphics]{par}}.
}
\item{llwd.arg}{
Line width.
See the \code{lwd} argument of \code{\link[graphics]{par}}.
}
\item{\dots}{
Arguments passed into the \code{plot} function when setting up the
entire plot.
Useful arguments here include \code{main} and \code{las}.
}
}
\details{
The above graphical parameters offer some flexibility when plotting
the densities.
}
\value{
The list \code{answer}, which has components
\item{newdata}{
The argument \code{newdata} above from the argument list of
\code{\link{deplot.lmscreg}}, or a one-row data frame constructed out
of the \code{x0} argument.
}
\item{y}{
The argument \code{y.arg} above.
}
\item{density}{
Vector of the density function values evaluated at \code{y.arg}.
}
}
\references{
Yee, T. W. (2004)
Quantile regression via vector generalized additive models.
\emph{Statistics in Medicine}, \bold{23}, 2295--2315.
%Documentation accompanying the \pkg{VGAM} package at
%\url{http://www.stat.auckland.ac.nz/~yee}
%contains further information and examples.
}
\author{ Thomas W. Yee }
\note{
While the graphical arguments of this function are useful to the
user, this function should not be called directly.
}
\seealso{
\code{\link{deplot.lmscreg}}.
}
\examples{
fit <- vgam(BMI ~ s(age, df = c(4,2)), lms.bcn(zero = 1), bmi.nz)
\dontrun{ y <- seq(15, 43, by = 0.25)
deplot(fit, x0 = 20, y = y, xlab = "BMI", col = "green", llwd = 2,
       main = "BMI distribution at ages 20 (green), 40 (blue), 60 (orange)")
deplot(fit, x0 = 40, y = y, add = TRUE, col = "blue", llwd = 2)
deplot(fit, x0 = 60, y = y, add = TRUE, col = "orange", llwd = 2) -> aa
names(aa@post$deplot)
aa@post$deplot$newdata
head(aa@post$deplot$y)
head(aa@post$deplot$density)
}
}
\keyword{dplot}
\keyword{models}
\keyword{regression}
VGAM/man/oapospoisUC.Rd0000644000176200001440000000424613135276753014334 0ustar liggesusers\name{Oapospois}
\alias{Oapospois}
\alias{doapospois}
\alias{poapospois}
\alias{qoapospois}
\alias{roapospois}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ One-Altered Positive-Poisson Distribution }
\description{
Density, distribution function, quantile function and random
generation for the one-altered positive-Poisson distribution with
parameter \code{pobs1}.
}
\usage{
doapospois(x, lambda, pobs1 = 0, log = FALSE)
poapospois(q, lambda, pobs1 = 0)
qoapospois(p, lambda, pobs1 = 0)
roapospois(n, lambda, pobs1 = 0)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{x, q, n, p}{ Same as in \code{\link[stats:Uniform]{Unif}}.}
\item{lambda, log}{ Same as in \code{\link{Otpospois}}. }
\item{pobs1}{
Probability of (an observed) one, called \eqn{pobs1}.
The default value of \code{pobs1 = 0} corresponds to the response
having a 1-truncated positive-Poisson distribution.
}
}
\details{
The probability that \eqn{Y} equals one is \code{pobs1}; otherwise
\eqn{Y} has a 1-truncated positive-Poisson(\code{lambda})
distribution.
}
\value{
\code{doapospois} gives the density,
\code{poapospois} gives the distribution function,
\code{qoapospois} gives the quantile function, and
\code{roapospois} generates random deviates.
}
%\references{ }
\author{ T. W. Yee }
\note{
The argument \code{pobs1} is recycled to the required length, and
must have values which lie in the interval \eqn{[0,1]}.
}
\seealso{
\code{\link{oapospoisson}},
\code{\link{Oipospois}},
\code{\link{Otpospois}}.
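% A sketch (not run) of the mixture structure, assuming dotpospois()
% (the 1-truncated density; see Otpospois) is available:
% x <- 1:7; lambda <- 3; pobs1 <- 0.3
% cbind(doapospois(x, lambda, pobs1 = pobs1),
%       ifelse(x == 1, pobs1, (1 - pobs1) * dotpospois(x, lambda)))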
} \examples{ lambda <- 3; pobs1 <- 0.30; x <- (-1):7 doapospois(x, lambda = lambda, pobs1 = pobs1) table(roapospois(100, lambda = lambda, pobs1 = pobs1)) \dontrun{ x <- 0:10 barplot(rbind(doapospois(x, lambda = lambda, pobs1 = pobs1), dpospois(x, lambda = lambda)), beside = TRUE, col = c("blue", "orange"), cex.main = 0.7, las = 1, ylab = "Probability", names.arg = as.character(x), main = paste("OAPP(lambda = ", lambda, ", pobs1 = ", pobs1, ") [blue] vs", " PosPoisson(lambda = ", lambda, ") [orange] densities", sep = "")) } } \keyword{distribution} VGAM/man/sinmadUC.Rd0000644000176200001440000000414713135276753013573 0ustar liggesusers\name{Sinmad} \alias{Sinmad} \alias{dsinmad} \alias{psinmad} \alias{qsinmad} \alias{rsinmad} \title{The Singh-Maddala Distribution} \description{ Density, distribution function, quantile function and random generation for the Singh-Maddala distribution with shape parameters \code{a} and \code{q}, and scale parameter \code{scale}. } \usage{ dsinmad(x, scale = 1, shape1.a, shape3.q, log = FALSE) psinmad(q, scale = 1, shape1.a, shape3.q, lower.tail = TRUE, log.p = FALSE) qsinmad(p, scale = 1, shape1.a, shape3.q, lower.tail = TRUE, log.p = FALSE) rsinmad(n, scale = 1, shape1.a, shape3.q) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1}, the length is taken to be the number required.} \item{shape1.a, shape3.q}{shape parameters.} \item{scale}{scale parameter.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dsinmad} gives the density, \code{psinmad} gives the distribution function, \code{qsinmad} gives the quantile function, and \code{rsinmad} generates random deviates. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{sinmad}}, which is the \pkg{VGAM} family function for estimating the parameters by maximum likelihood estimation. } \note{ The Singh-Maddala distribution is a special case of the 4-parameter generalized beta II distribution. } \seealso{ \code{\link{sinmad}}, \code{\link{genbetaII}}. } \examples{ sdata <- data.frame(y = rsinmad(n = 3000, scale = exp(2), shape1 = exp(1), shape3 = exp(1))) fit <- vglm(y ~ 1, sinmad(lss = FALSE, ishape1.a = 2.1), data = sdata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) } \keyword{distribution} VGAM/man/paretoIVUC.Rd0000644000176200001440000000756013135276753014053 0ustar liggesusers\name{ParetoIV} \alias{ParetoIV} \alias{dparetoIV} \alias{pparetoIV} \alias{qparetoIV} \alias{rparetoIV} \alias{ParetoIII} \alias{dparetoIII} \alias{pparetoIII} \alias{qparetoIII} \alias{rparetoIII} \alias{ParetoII} \alias{dparetoII} \alias{pparetoII} \alias{qparetoII} \alias{rparetoII} \alias{ParetoI} \alias{dparetoI} \alias{pparetoI} \alias{qparetoI} \alias{rparetoI} \title{The Pareto(IV/III/II) Distributions} \description{ Density, distribution function, quantile function and random generation for the Pareto(IV/III/II) distributions. 
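% A sketch (not run) of the usual hierarchy, assuming that Pareto(III)
% and Pareto(II) are the shape = 1 and inequality = 1 special cases of
% Pareto(IV) respectively:
% x <- seq(0.1, 4, by = 0.1)
% all.equal(dparetoIII(x, 0, 1, inequality = 0.5),
%           dparetoIV(x, 0, 1, inequality = 0.5, shape = 1))
% all.equal(dparetoII(x, 0, 1, shape = 2),
%           dparetoIV(x, 0, 1, inequality = 1, shape = 2))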
}
\usage{
dparetoIV(x, location = 0, scale = 1, inequality = 1, shape = 1,
          log = FALSE)
pparetoIV(q, location = 0, scale = 1, inequality = 1, shape = 1,
          lower.tail = TRUE, log.p = FALSE)
qparetoIV(p, location = 0, scale = 1, inequality = 1, shape = 1,
          lower.tail = TRUE, log.p = FALSE)
rparetoIV(n, location = 0, scale = 1, inequality = 1, shape = 1)
dparetoIII(x, location = 0, scale = 1, inequality = 1, log = FALSE)
pparetoIII(q, location = 0, scale = 1, inequality = 1,
           lower.tail = TRUE, log.p = FALSE)
qparetoIII(p, location = 0, scale = 1, inequality = 1,
           lower.tail = TRUE, log.p = FALSE)
rparetoIII(n, location = 0, scale = 1, inequality = 1)
dparetoII(x, location = 0, scale = 1, shape = 1, log = FALSE)
pparetoII(q, location = 0, scale = 1, shape = 1,
          lower.tail = TRUE, log.p = FALSE)
qparetoII(p, location = 0, scale = 1, shape = 1,
          lower.tail = TRUE, log.p = FALSE)
rparetoII(n, location = 0, scale = 1, shape = 1)
dparetoI(x, scale = 1, shape = 1, log = FALSE)
pparetoI(q, scale = 1, shape = 1, lower.tail = TRUE, log.p = FALSE)
qparetoI(p, scale = 1, shape = 1, lower.tail = TRUE, log.p = FALSE)
rparetoI(n, scale = 1, shape = 1)
}
\arguments{
\item{x, q}{vector of quantiles. }
\item{p}{vector of probabilities. }
\item{n}{number of observations.
Same as in \code{\link[stats]{runif}}.
% Must be a single positive integer.
}
\item{location}{the location parameter. }
\item{scale, shape, inequality}{the (positive) scale, inequality and
shape parameters. }
\item{log}{
Logical.
If \code{log = TRUE} then the logarithm of the density is returned.
}
\item{lower.tail, log.p}{
Same meaning as in \code{\link[stats:Normal]{pnorm}} or
\code{\link[stats:Normal]{qnorm}}.
}
}
\value{
Functions beginning with the letter \code{d} give the density,
functions beginning with the letter \code{p} give the distribution
function,
functions beginning with the letter \code{q} give the quantile
function, and
functions beginning with the letter \code{r} generate random
deviates.
}
\references{
Brazauskas, V. (2003)
Information matrix for Pareto(IV), Burr, and related distributions.
\emph{Comm. Statist. Theory and Methods}
\bold{32}, 315--325.

Arnold, B. C. (1983)
\emph{Pareto Distributions}.
Fairland, Maryland: International Cooperative Publishing House.
}
\author{ T. W. Yee and Kai Huang }
\details{
For the formulas and other details see \code{\link{paretoIV}}.
}
\note{
The functions \code{[dpqr]paretoI} are the same as
\code{[dpqr]pareto} except for a slight change in notation:
\eqn{s=k} and \eqn{b=\alpha}{b=alpha}; see \code{\link{Pareto}}.
}
\seealso{
\code{\link{paretoIV}},
\code{\link{Pareto}}.
}
\examples{
\dontrun{ x <- seq(-0.2, 4, by = 0.01)
loc <- 0; Scale <- 1; ineq <- 1; shape <- 1.0
plot(x, dparetoIV(x, loc, Scale, ineq, shape), type = "l",
     col = "blue",
     main = "Blue is density, orange is cumulative distribution function",
     sub = "Purple are 5,10,...,95 percentiles", ylim = 0:1,
     las = 1, ylab = "")
abline(h = 0, col = "blue", lty = 2)
Q <- qparetoIV(seq(0.05, 0.95, by = 0.05), loc, Scale, ineq, shape)
lines(Q, dparetoIV(Q, loc, Scale, ineq, shape),
      col = "purple", lty = 3, type = "h")
lines(x, pparetoIV(x, loc, Scale, ineq, shape), col = "orange")
abline(h = 0, lty = 2)
}
}
\keyword{distribution}
VGAM/man/bigamma.mckay.Rd0000644000176200001440000000753713135276753014566 0ustar liggesusers\name{bigamma.mckay}
\alias{bigamma.mckay}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Bivariate Gamma: McKay's Distribution }
\description{
Estimate the three parameters of McKay's bivariate gamma distribution
by maximum likelihood estimation.
}
\usage{
bigamma.mckay(lscale = "loge", lshape1 = "loge", lshape2 = "loge",
              iscale = NULL, ishape1 = NULL, ishape2 = NULL,
              imethod = 1, zero = "shape")
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{lscale, lshape1, lshape2}{
Link functions applied to the (positive) parameters \eqn{a}, \eqn{p}
and \eqn{q} respectively.
See \code{\link{Links}} for more choices.
}
\item{iscale, ishape1, ishape2}{
Optional initial values for \eqn{a}, \eqn{p} and \eqn{q}
respectively.
The default is to compute them internally.
}
\item{imethod, zero}{
See \code{\link{CommonVGAMffArguments}}.
}
}
\details{
One of the earliest forms of the bivariate gamma distribution has a
joint probability density function given by
\deqn{f(y_1,y_2;a,p,q) = (1/a)^{p+q} y_1^{p-1} (y_2-y_1)^{q-1}
\exp(-y_2 / a) / [\Gamma(p) \Gamma(q)]}{%
f(y1,y2;a,p,q) = (1/a)^(p+q) y1^(p-1) (y2-y1)^(q-1) exp(-y2/a) /
[gamma(p) gamma(q)]}
for \eqn{a > 0}, \eqn{p > 0}, \eqn{q > 0} and
\eqn{0 < y_1 < y_2}{0 < y1 < y2}.

The probability function of the Yule-Simon distribution is
\deqn{f(y;\rho) = \rho \, B(y, \rho+1),}{%
f(y;rho) = rho * beta(y, rho+1),}
where \eqn{\rho>0}{rho>0},
\eqn{B}{beta} is the \code{\link[base]{beta}} function,
and \eqn{y=1,2,\ldots}{y=1,2,...}.
The function \code{\link{dyules}} computes this probability function.
The mean of \eqn{Y}, which is returned as fitted values, is
\eqn{\rho/(\rho-1)}{rho/(rho-1)} provided \eqn{\rho > 1}{rho > 1}.
The variance of \eqn{Y} is
\eqn{\rho^2/((\rho-1)^2 (\rho-2))}{rho^2/((rho-1)^2 (rho-2))}
provided \eqn{\rho > 2}{rho > 2}.

The distribution was named after Udny Yule and Herbert A. Simon.
Simon originally called it the Yule distribution.
This family function can handle multiple responses.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}
and \code{\link{vgam}}.
}
\references{
Simon, H. A. (1955)
On a class of skew distribution functions.
\emph{Biometrika}, \bold{42}, 425--440.
}
\author{ T. W. Yee }
%\note{
%}
\seealso{
\code{\link{ryules}},
\code{\link{simulate.vlm}}.
}
\examples{
ydata <- data.frame(x2 = runif(nn <- 1000))
ydata <- transform(ydata, y = ryules(nn, shape = exp(1.5 - x2)))
with(ydata, table(y))
fit <- vglm(y ~ x2, yulesimon, data = ydata, trace = TRUE)
coef(fit, matrix = TRUE)
summary(fit)
}
\keyword{models}
\keyword{regression}
%# Generate some yulesimon random variates
%set.seed(123)
%nn = 400
%x = 1:20
%alpha = 1.1 # The parameter
%probs = dyulesimon(x, alpha)
%\dontrun{
%plot(x, probs, type="h", log="y")}
%cs = cumsum(probs)
%tab = table(cut(runif(nn), brea = c(0,cs,1)))
%index = (1:length(tab))[tab>0]
%y = rep(index, times=tab[index])
VGAM/man/cens.normal.Rd0000644000176200001440000000671613135276753014303 0ustar liggesusers\name{cens.normal}
\alias{cens.normal}
% 20131111: just for \pkg{cg}:
% 20140609: just for \pkg{cg}:
\alias{cennormal}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Censored Normal Distribution }
\description{
Maximum likelihood estimation for the normal distribution with left
and right censoring.
}
\usage{
cens.normal(lmu = "identitylink", lsd = "loge", imethod = 1,
            zero = "sd")
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{lmu, lsd}{
Parameter link functions applied to the mean and standard deviation
parameters.
See \code{\link{Links}} for more choices.
The standard deviation is a positive quantity; therefore a log link
is the default.
} \item{imethod}{ Initialization method. Either 1 or 2, this specifies two methods for obtaining initial values for the parameters. } \item{zero}{ A vector, e.g., containing the value 1 or 2; if so, the mean or standard deviation respectively are modelled as an intercept only. Setting \code{zero = NULL} means both linear/additive predictors are modelled as functions of the explanatory variables. See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ This function is like \code{\link{uninormal}} but handles observations that are left-censored (so that the true value would be less than the observed value) else right-censored (so that the true value would be greater than the observed value). To indicate which type of censoring, input \code{extra = list(leftcensored = vec1, rightcensored = vec2)} where \code{vec1} and \code{vec2} are logical vectors the same length as the response. If the two components of this list are missing then the logical values are taken to be \code{FALSE}. The fitted object has these two components stored in the \code{extra} slot. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } %\references{ %} \author{ T. W. Yee } \note{ This function, which is an alternative to \code{\link{tobit}}, cannot handle a matrix response and uses different working weights. If there are no censored observations then \code{\link{uninormal}} is recommended instead. % Function \code{\link{cens.normal1}} will be depreciated soon. % It is exactly the same as \code{\link{cens.normal}}. } \seealso{ \code{\link{tobit}}, \code{\link{uninormal}}, \code{\link{double.cens.normal}}. } \examples{ \dontrun{ cdata <- data.frame(x2 = runif(nn <- 1000)) # ystar are true values cdata <- transform(cdata, ystar = rnorm(nn, m = 100 + 15 * x2, sd = exp(3))) with(cdata, hist(ystar)) cdata <- transform(cdata, L = runif(nn, 80, 90), # Lower censoring points U = runif(nn, 130, 140)) # Upper censoring points cdata <- transform(cdata, y = pmax(L, ystar)) # Left censored cdata <- transform(cdata, y = pmin(U, y)) # Right censored with(cdata, hist(y)) Extra <- list(leftcensored = with(cdata, ystar < L), rightcensored = with(cdata, ystar > U)) fit1 <- vglm(y ~ x2, cens.normal, data = cdata, crit = "c", extra = Extra) fit2 <- vglm(y ~ x2, tobit(Lower = with(cdata, L), Upper = with(cdata, U)), data = cdata, crit = "c", trace = TRUE) coef(fit1, matrix = TRUE) max(abs(coef(fit1, matrix = TRUE) - coef(fit2, matrix = TRUE))) # Should be 0 names(fit1@extra) } } \keyword{models} \keyword{regression} VGAM/man/posnormUC.Rd0000644000176200001440000000375313135276753014017 0ustar liggesusers\name{Posnorm} \alias{Posnorm} \alias{dposnorm} \alias{pposnorm} \alias{qposnorm} \alias{rposnorm} \title{The Positive-Normal Distribution} \description{ Density, distribution function, quantile function and random generation for the univariate positive-normal distribution. } \usage{ dposnorm(x, mean = 0, sd = 1, log = FALSE) pposnorm(q, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE) qposnorm(p, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE) rposnorm(n, mean = 0, sd = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{mean, sd, log, lower.tail, log.p}{ see \code{\link[stats:Normal]{rnorm}}. 
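% A sketch (not run) of the assumed relationship with the ordinary
% normal density, i.e., a normal distribution truncated to the
% positive axis:
% m <- 0.8; x <- seq(0.1, 3, by = 0.1)
% max(abs(dposnorm(x, mean = m) -
%         dnorm(x, mean = m) / (1 - pnorm(0, mean = m))))  # Should be 0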
}
}
\value{
\code{dposnorm} gives the density,
\code{pposnorm} gives the distribution function,
\code{qposnorm} gives the quantile function, and
\code{rposnorm} generates random deviates.
}
\author{ T. W. Yee }
\details{
See \code{\link{posnormal}}, the \pkg{VGAM} family function for
estimating the parameters, for the formula of the probability density
function and other details.
}
%\note{
%}
\seealso{
\code{\link{posnormal}}.
}
\examples{
\dontrun{ m <- 0.8; x <- seq(-1, 4, len = 501)
plot(x, dposnorm(x, m = m), type = "l", las = 1, ylim = 0:1,
     ylab = paste("posnorm(m = ", m, ", sd = 1)"), col = "blue",
     main = "Blue is density, orange is cumulative distribution function",
     sub = "Purple lines are the 10,20,...,90 percentiles")
abline(h = 0, col = "grey")
lines(x, pposnorm(x, m = m), col = "orange", type = "l")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qposnorm(probs, m = m)
lines(Q, dposnorm(Q, m = m), col = "purple", lty = 3, type = "h")
lines(Q, pposnorm(Q, m = m), col = "purple", lty = 3, type = "h")
abline(h = probs, col = "purple", lty = 3)
max(abs(pposnorm(Q, m = m) - probs))  # Should be 0
}
}
\keyword{distribution}
% 20150207; bug involving ifelse() picked up for qposnorm().
VGAM/man/oalogUC.Rd0000644000176200001440000000405413135276753013416 0ustar liggesusers\name{Oalog}
\alias{Oalog}
\alias{doalog}
\alias{poalog}
\alias{qoalog}
\alias{roalog}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ One-Altered Logarithmic Distribution }
\description{
Density, distribution function, quantile function and random
generation for the one-altered logarithmic distribution with
parameter \code{pobs1}.
}
\usage{
doalog(x, shape, pobs1 = 0, log = FALSE)
poalog(q, shape, pobs1 = 0)
qoalog(p, shape, pobs1 = 0)
roalog(n, shape, pobs1 = 0)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{x, q, n, p}{ Same as in \code{\link[stats:Uniform]{Unif}}.}
\item{shape, log}{ Same as in \code{\link{Otlog}}. }
\item{pobs1}{
Probability of (an observed) one, called \eqn{pobs1}.
The default value of \code{pobs1 = 0} corresponds to the response
having a 1-truncated logarithmic distribution.
}
}
\details{
The probability that \eqn{Y} equals one is \code{pobs1}; otherwise
\eqn{Y} has a 1-truncated logarithmic(\code{shape}) distribution.
}
\value{
\code{doalog} gives the density,
\code{poalog} gives the distribution function,
\code{qoalog} gives the quantile function, and
\code{roalog} generates random deviates.
}
%\references{ }
\author{ T. W. Yee }
\note{
The argument \code{pobs1} is recycled to the required length, and
must have values which lie in the interval \eqn{[0,1]}.
}
\seealso{
\code{\link{oalog}},
\code{\link{oilog}},
\code{\link{Otlog}}.
}
\examples{
shape <- 0.75; pobs1 <- 0.10; x <- (-1):7
doalog(x, shape = shape, pobs1 = pobs1)
table(roalog(100, shape = shape, pobs1 = pobs1))

\dontrun{ x <- 0:10
barplot(rbind(doalog(x, shape = shape, pobs1 = pobs1),
              dlog(x, shape = shape)),
        beside = TRUE, col = c("blue", "orange"), cex.main = 0.7,
        las = 1, ylab = "Probability", names.arg = as.character(x),
        main = paste("OAL(shape = ", shape, ", pobs1 = ", pobs1,
                     ") [blue] vs", " Logarithmic(shape = ", shape,
                     ") [orange] densities", sep = ""))
}
}
\keyword{distribution}
VGAM/man/vonmises.Rd0000644000176200001440000001043713135276753013722 0ustar liggesusers\name{vonmises}
\alias{vonmises}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ von Mises Distribution Family Function }
\description{
Estimates the location and scale parameters of the von Mises
distribution by maximum likelihood estimation.
}
\usage{
vonmises(llocation = extlogit(min = 0, max = 2 * pi),
         lscale = "loge", ilocation = NULL, iscale = NULL,
         imethod = 1, zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{llocation, lscale}{
Parameter link functions applied to the location \eqn{a} parameter
and scale parameter \eqn{k}, respectively.
See \code{\link{Links}} for more choices.
For \eqn{k}, a log link is the default because the parameter is
positive.
}
\item{ilocation}{
Initial value for the location \eqn{a} parameter.
By default, an initial value is chosen internally using
\code{imethod}.
Assigning a value will override the argument \code{imethod}.
}
\item{iscale}{
Initial value for the scale \eqn{k} parameter.
By default, an initial value is chosen internally using
\code{imethod}.
Assigning a value will override the argument \code{imethod}.
}
\item{imethod}{
An integer with value \code{1} or \code{2} which specifies the
initialization method.
If failure to converge occurs try the other value, or else specify a
value for \code{ilocation} and \code{iscale}.
}
\item{zero}{
An integer-valued vector specifying which linear/additive predictors
are modelled as intercepts only.
The default is none of them.
If used, one can choose one value from the set \{1,2\}.
See \code{\link{CommonVGAMffArguments}} for more information.
}
% \item{hstep}{ Positive numeric. The \eqn{h} used for the finite difference
% approximation, e.g., in \eqn{(f(x+h)-f(x))/h} for the first
% derivative estimate of the modified Bessel function values.
% If too small, some half stepsizing may occur;
% if too large, numerical problems might occur.
% }
}
\details{
The (two-parameter) von Mises is the most commonly used distribution
in practice for circular data.
It has a density that can be written as
\deqn{f(y;a,k) = \frac{\exp[k\cos(y-a)]}{ 2\pi I_0(k)}}{%
f(y;a,k) = exp[k*cos(y-a)] / (2*pi*I0(k))}
where \eqn{0 \leq y < 2\pi}{0 <= y < 2*pi},
\eqn{k>0} is the scale parameter,
\eqn{a} is the location parameter, and
\eqn{I_0(k)}{I0(k)} is the modified Bessel function of order 0
evaluated at \eqn{k}.
The mean of \eqn{Y} (which is the fitted value) is \eqn{a} and the
circular variance is
\eqn{1 - I_1(k) / I_0(k)}{1 - I1(k) / I0(k)}
where \eqn{I_1(k)}{I1(k)} is the modified Bessel function of order 1.
By default,
\eqn{\eta_1=\log(a/(2\pi-a))}{eta1=log(a/(2*pi-a))} and
\eqn{\eta_2=\log(k)}{eta2=log(k)} for this family function.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}},
\code{\link{rrvglm}} and \code{\link{vgam}}.
}
\references{
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011)
\emph{Statistical Distributions},
Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
}
\author{ T. W. Yee }
\note{
The response and the fitted values are scaled so that
\eqn{0\leq y< 2\pi}{0<=y<2*pi}.
The linear/additive predictors are left alone.
Fisher scoring is used.
}
\section{Warning }{
Numerically, the von Mises distribution can be difficult to fit
because of a log-likelihood having multiple maxima.
The user is therefore encouraged to try different starting values,
i.e., make use of \code{ilocation} and \code{iscale}.
}
\seealso{
\code{\link[base]{Bessel}},
\code{\link{cardioid}}.
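% A quick sketch (not run) of the circular variance formula given in
% the details, using base R's besselI():
% k <- 2
% 1 - besselI(k, nu = 1) / besselI(k, nu = 0)  # Circular variance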
\pkg{CircStats} and \pkg{circular} currently have a lot more R
functions for circular data than the \pkg{VGAM} package.
}
\examples{
vdata <- data.frame(x2 = runif(nn <- 1000))
vdata <- transform(vdata,
                   y = rnorm(nn, m = 2+x2, sd = exp(0.2)))  # Bad data!!
fit <- vglm(y ~ x2, vonmises(zero = 2), data = vdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
with(vdata, range(y))  # Original data
range(depvar(fit))     # Processed data is in [0,2*pi)
}
\keyword{models}
\keyword{regression}
%later an extended logit link \eqn{\eta_1=\log(a/(2\pi-a))}{eta1=log(a/(2*pi-a))}
%might be provided for \eqn{\eta_1}{eta1}.
%\eqn{\eta_1=a}{eta1=a} and
VGAM/man/borel.tanner.Rd0000644000176200001440000000637513135276753014460 0ustar liggesusers\name{borel.tanner}
\alias{borel.tanner}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{Borel-Tanner Distribution Family Function}
\description{
Estimates the parameter of a Borel-Tanner distribution by maximum
likelihood estimation.
}
\usage{
borel.tanner(Qsize = 1, link = "logit", imethod = 1)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{Qsize}{
A positive integer.
It is called \eqn{Q} below and is the initial queue size.
The default value \eqn{Q = 1} corresponds to the Borel distribution.
}
\item{link}{
Link function for the parameter;
see \code{\link{Links}} for more choices and for general information.
}
\item{imethod}{
See \code{\link{CommonVGAMffArguments}}.
Valid values are 1, 2, 3 or 4.
}
}
\details{
The Borel-Tanner distribution (Tanner, 1953) describes the
distribution of the total number of customers served before a queue
vanishes given a single queue with random arrival times of customers
(at a constant rate \eqn{r} per unit time, and each customer taking a
constant time \eqn{b} to be served).
Initially the queue has \eqn{Q} people and the first one starts to be
served.
The two parameters appear in the density only in the form of the
product \eqn{rb}, so we use \eqn{a=rb}, say, to denote the single
parameter to be estimated.
The density function is
\deqn{f(y;a) = \frac{ Q }{(y-Q)!} y^{y-Q-1} a^{y-Q} \exp(-ay)
}{%
f(y;a) = (Q / (y-Q)!) * y^(y-Q-1) * a^(y-Q) * exp(-ay)}
where \eqn{y=Q,Q+1,Q+2,\ldots}{y=Q,Q+1,Q+2,...}.
The case \eqn{Q=1} corresponds to the \emph{Borel} distribution
(Borel, 1942).
For the \eqn{Q=1} case it is necessary for \eqn{0 < a < 1} for the
distribution to be proper.
The Borel distribution is a basic Lagrangian distribution of the
first kind.
The Borel-Tanner distribution is a \eqn{Q}-fold convolution of the
Borel distribution.

The mean is \eqn{Q/(1-a)} (returned as the fitted values) and the
variance is \eqn{Q a / (1-a)^3}{Q*a/(1-a)^3}.
The distribution has a very long tail unless \eqn{a} is small.
Fisher scoring is implemented.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}}
and \code{\link{vgam}}.
}
\references{
Tanner, J. C. (1953)
A problem of interference between two queues.
\emph{Biometrika}, \bold{40}, 58--69.

Borel, E. (1942)
Sur l'emploi du theoreme de Bernoulli pour faciliter le calcul d'une
infinite de coefficients.
Application au probleme de l'attente a un guichet.
\emph{Comptes Rendus, Academie des Sciences, Paris, Series A},
\bold{214}, 452--456.

Page 328 of Johnson N. L., Kemp, A. W. and Kotz S. (2005)
\emph{Univariate Discrete Distributions},
3rd edition, Hoboken, New Jersey: Wiley.

Consul, P. C. and Famoye, F.
(2006) \emph{Lagrangian Probability Distributions}, Boston, MA, USA: Birkhauser. } \author{ T. W. Yee } %\note{ % %} \seealso{ \code{\link{rbort}}, \code{\link{poissonff}}, \code{\link{felix}}. } \examples{ bdata <- data.frame(y = rbort(n <- 200)) fit <- vglm(y ~ 1, borel.tanner, data = bdata, trace = TRUE, crit = "c") coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/posnegbinomial.Rd0000644000176200001440000002041413135276753015071 0ustar liggesusers\name{posnegbinomial} \alias{posnegbinomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive Negative Binomial Distribution Family Function } \description{ Maximum likelihood estimation of the two parameters of a positive negative binomial distribution. } \usage{ posnegbinomial(zero = "size", type.fitted = c("mean", "munb", "prob0"), mds.min = 0.001, nsimEIM = 500, cutoff.prob = 0.999, eps.trig = 1e-07, max.support = 4000, max.chunk.MB = 30, lmunb = "loge", lsize = "loge", imethod = 1, imunb = NULL, iprobs.y = NULL, gprobs.y = ppoints(8), isize = NULL, gsize.mux = exp(c(-30, -20, -15, -10, -6:3))) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmunb}{ Link function applied to the \code{munb} parameter, which is the mean \eqn{\mu_{nb}}{munb} of an ordinary negative binomial distribution. See \code{\link{Links}} for more choices. } \item{lsize}{ Parameter link function applied to the dispersion parameter, called \code{k}. See \code{\link{Links}} for more choices. } \item{isize}{ Optional initial value for \code{k}, an index parameter. The value \code{1/k} is known as a dispersion parameter. If failure to converge occurs try different values (and/or use \code{imethod}). If necessary this vector is recycled to length equal to the number of responses. A value \code{NULL} means an initial value for each response is computed internally using a range of values. } \item{nsimEIM, zero, eps.trig}{ See \code{\link{CommonVGAMffArguments}}. } \item{mds.min, iprobs.y, cutoff.prob}{ Similar to \code{\link{negbinomial}}. } \item{imunb, max.support}{ Similar to \code{\link{negbinomial}}. } \item{max.chunk.MB, gsize.mux}{ Similar to \code{\link{negbinomial}}. } \item{imethod, gprobs.y}{ See \code{\link{negbinomial}}. } \item{type.fitted}{ See \code{\link{CommonVGAMffArguments}} for details. } } \details{ The positive negative binomial distribution is an ordinary negative binomial distribution but with the probability of a zero response being zero. The other probabilities are scaled to sum to unity. This family function is based on \code{\link{negbinomial}} and most details can be found there. To avoid confusion, the parameter \code{munb} here corresponds to the mean of an ordinary negative binomial distribution \code{\link{negbinomial}}. The mean of \code{posnegbinomial} is \deqn{\mu_{nb} / (1-p(0))}{% munb / (1-p(0))} where \eqn{p(0) = (k/(k + \mu_{nb}))^k}{p(0) = (k/(k + munb))^k} is the probability an ordinary negative binomial distribution has a zero value. The parameters \code{munb} and \code{k} are not independent in the positive negative binomial distribution, whereas they are in the ordinary negative binomial distribution. This function handles \emph{multiple} responses, so that a matrix can be used as the response. The number of columns is the number of species, say, and setting \code{zero = -2} means that \emph{all} species have a \code{k} equalling a (different) intercept only. 
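% A small numerical sketch (not run) of the mean formula above:
% munb <- 4; k <- 2
% p0 <- (k / (k + munb))^k  # P(Y = 0) for the ordinary NBD
% munb / (1 - p0)           # Mean of the positive NBD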
}
\section{Warning}{
This family function is fragile; at least two cases will lead to
numerical problems.
Firstly, the positive-Poisson model corresponds to \code{k} equalling
infinity.
If the data is positive-Poisson or close to positive-Poisson, then
the estimated \code{k} will diverge to \code{Inf} or some very large
value.
Secondly, if the data is clustered about the value 1 because the
\code{munb} parameter is close to 0 then numerical problems will also
occur.
Users should set \code{trace = TRUE} to monitor convergence.
In the situation when both cases hold, the result returned (which
will be untrustworthy) will depend on the initial values.

The negative binomial distribution (NBD) is a strictly unimodal
distribution.
Any data set that does not exhibit a mode (in the middle) makes the
estimation problem difficult.
The positive NBD inherits this feature.
Set \code{trace = TRUE} to monitor convergence.

See the example below of a data set where \code{posnegbinomial()}
fails; the so-called solution is \emph{extremely} poor.
This is partly due to the lack of a unimodal shape, because the
frequencies only decrease.
This long tail makes it very difficult to estimate the mean parameter
with any certainty.
A further consequence is that the \code{size} parameter is
numerically fraught.
% Then trying a \code{\link{loglog}} link might help
% handle this problem.

This \pkg{VGAM} family function inherits the same warnings as
\code{\link{negbinomial}}.
And if \code{k} is much less than 1 then the estimation may be slow.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}},
\code{\link{rrvglm}} and \code{\link{vgam}}.
}
\references{
Barry, S. C. and Welsh, A. H. (2002)
Generalized additive modelling and zero inflated count data.
\emph{Ecological Modelling}, \bold{157}, 179--188.

Williamson, E. and Bretherton, M. H. (1964)
Tables of the logarithmic series distribution.
\emph{Annals of Mathematical Statistics}, \bold{35}, 284--297.
}
\author{ Thomas W. Yee }
\note{
If the estimated \eqn{k} is very large then fitting a
\code{\link{pospoisson}} model is a good idea.
If both \code{munb} and \eqn{k} are large then it may be necessary to
decrease \code{eps.trig} and increase \code{max.support} so that the
EIMs are positive-definite, e.g., \code{eps.trig = 1e-8} and
\code{max.support = Inf}.
}
\seealso{
\code{\link{rposnegbin}},
\code{\link{pospoisson}},
\code{\link{negbinomial}},
\code{\link{zanegbinomial}},
\code{\link[stats:NegBinomial]{rnbinom}},
\code{\link{CommonVGAMffArguments}},
\code{\link{corbet}},
\code{\link{logff}},
\code{\link{simulate.vlm}}.
% \code{\link[MASS]{rnegbin}}.
}
\examples{
pdata <- data.frame(x2 = runif(nn <- 1000))
pdata <- transform(pdata,
                   y1 = rposnegbin(nn, munb = exp(0+2*x2), size = exp(1)),
                   y2 = rposnegbin(nn, munb = exp(1+2*x2), size = exp(3)))
fit <- vglm(cbind(y1, y2) ~ x2, posnegbinomial, data = pdata,
            trace = TRUE)
coef(fit, matrix = TRUE)
dim(depvar(fit))  # Using dim(fit@y) is not recommended

# Another artificial data example
pdata2 <- data.frame(munb = exp(2), size = exp(3)); nn <- 1000
pdata2 <- transform(pdata2, y3 = rposnegbin(nn, munb = munb, size = size))
with(pdata2, table(y3))
fit <- vglm(y3 ~ 1, posnegbinomial, data = pdata2, trace = TRUE)
coef(fit, matrix = TRUE)
with(pdata2, mean(y3))  # Sample mean
head(with(pdata2, munb/(1-(size/(size+munb))^size)), 1)  # Population mean
head(fitted(fit), 3)
head(predict(fit), 3)

# Example: Corbet (1943) butterfly Malaya data
fit <- vglm(ofreq ~ 1, posnegbinomial, weights = species, data = corbet)
coef(fit, matrix = TRUE)
Coef(fit)
(khat <- Coef(fit)["size"])
pdf2 <- dposnegbin(x = with(corbet, ofreq), mu = fitted(fit), size = khat)
print(with(corbet, cbind(ofreq, species, fitted = pdf2*sum(species))), dig = 1)
\dontrun{ with(corbet,
matplot(ofreq, cbind(species, fitted = pdf2*sum(species)), las = 1,
        xlab = "Observed frequency (of individual butterflies)",
        type = "b", ylab = "Number of species",
        col = c("blue", "orange"),
        main = "blue 1s = observed; orange 2s = fitted"))
}

\dontrun{
# This data set (courtesy of Maxim Gerashchenko) causes
# posnegbinomial() to fail
pnbd.fail <- data.frame(
 y1 = c(1:16, 18:21, 23:28, 33:38, 42, 44, 49:51, 55, 56, 58,
        59, 61:63, 66, 73, 76, 94, 107, 112, 124, 190, 191, 244),
 ofreq = c(130, 80, 38, 23, 22, 11, 21, 14, 6, 7, 9, 9, 9, 4, 4, 5, 1,
           4, 6, 1, 3, 2, 4, 3, 4, 5, 3, 1, 2, 1, 1, 4, 1, 2, 2, 1, 3,
           1, 1, 2, 2, 2, 1, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1))
fit.fail <- vglm(y1 ~ 1, weights = ofreq, posnegbinomial,
                 trace = TRUE, data = pnbd.fail)
}
}
\keyword{models}
\keyword{regression}

% bigN = with(corbet, sum(species))
%posnegbinomial(lmunb = "loge", lsize = "loge", imunb = NULL,
%               isize = NULL, zero = "size", nsimEIM = 250,
%               probs.y = 0.75, cutoff.prob = 0.999,
%               max.support = 2000, max.chunk.MB = 30,
%               gsize = exp((-4):4), ishrinkage = 0.95, imethod = 1)
VGAM/man/CommonVGAMffArguments.Rd0000644000176200001440000005610513135276753016176 0ustar liggesusers\name{CommonVGAMffArguments}
\alias{CommonVGAMffArguments}
\alias{TypicalVGAMfamilyFunction}
\title{Common VGAM Family Function Arguments }
\description{
Here is a description of some common and typical arguments found in
many \pkg{VGAM} family functions, e.g.,
\code{lsigma}, \code{isigma}, \code{gsigma}, \code{nsimEIM},
\code{parallel} and \code{zero}.
}
\usage{
TypicalVGAMfamilyFunction(lsigma = "loge", isigma = NULL,
                          link.list = list("(Default)" = "identitylink",
                                           x2 = "loge",
                                           x3 = "logoff",
                                           x4 = "multilogit",
                                           x5 = "multilogit"),
                          earg.list = list("(Default)" = list(),
                                           x2 = list(),
                                           x3 = list(offset = -1),
                                           x4 = list(),
                                           x5 = list()),
                          gsigma = exp(-5:5), parallel = TRUE,
                          ishrinkage = 0.95, nointercept = NULL,
                          imethod = 1,
                          type.fitted = c("mean", "quantiles", "pobs0",
                                          "pstr0", "onempstr0"),
                          percentiles = c(25, 50, 75),
                          probs.x = c(0.15, 0.85),
                          probs.y = c(0.25, 0.50, 0.75),
                          multiple.responses = FALSE, earg.link = FALSE,
                          whitespace = FALSE, bred = FALSE, lss = TRUE,
                          oim = FALSE, nsimEIM = 100, byrow.arg = FALSE,
                          zero = NULL)
}
\arguments{
% apply.parint = FALSE,
\item{lsigma}{
Character.
Link function applied to a parameter and not necessarily a mean.
See \code{\link{Links}} for a selection of choices.
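% For example (a sketch, not run; udata is hypothetical), a positive
% parameter such as a standard deviation would typically use a log
% link:
% fit <- vglm(y ~ x2, uninormal(lsd = "loge"), data = udata)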
If there is only one parameter then this argument is often called \code{link}. } % \item{esigma}{ % List. % Extra argument allowing for additional information, specific to the % link function. % See \code{\link{Links}} for more information. % If there is only one parameter then this argument is often called % \code{earg}. % } \item{link.list, earg.list}{ Some \pkg{VGAM} family functions (such as \code{\link{normal.vcm}}) implement models with potentially lots of parameter link functions. These two arguments allow many such links and extra arguments to be inputted more easily. One has something like \code{link.list = list("(Default)" = "identitylink", x2 = "loge", x3 = "logoff")} and \code{earg.list = list("(Default)" = list(), x2 = list(), x3 = list(offset = -1))}. Then any unnamed terms will have the default link with its corresponding extra argument. Note: the \code{\link{multilogit}} link is also possible, and if so, at least two instances of it are necessary. Then the last term is the baseline/reference group. } \item{isigma}{ Optional initial values can often be inputted using an argument beginning with \code{"i"}. For example, \code{"isigma"} and \code{"ilocation"}, or just \code{"init"} if there is one parameter. A value of \code{NULL} means a value is computed internally, i.e., a \emph{self-starting} \pkg{VGAM} family function. If a failure to converge occurs make use of these types of arguments. } \item{gsigma}{ Grid-search initial values can be inputted using an argument beginning with \code{"g"}, e.g., \code{"gsigma"}, \code{"gshape"} and \code{"gscale"}. If argument \code{isigma} is inputted then that has precedence over \code{gsigma}, etc. % The actual search values will be \code{unique(sort(c(gshape)))}, etc. If the grid search is 2-dimensional then it is advisable not to make the vectors too long as a nested \code{for} loop may be used. Ditto for 3-dimensions etc. Sometimes a \code{".mux"} is added as a suffix, e.g., \code{gshape.mux}; this means that the grid is created relatively and not absolutely, e.g., its values are multiplied by some single initial estimate of the parameter in order to create the grid on an absolute scale. Some family functions have an argument called \code{gprobs.y}. This is fed into the \code{probs} argument of \code{\link[stats:quantile]{quantile}} in order to obtain some values of central tendency of the response, i.e., some spread of values in the middle; these are used, e.g., when \code{imethod = 1}, to obtain an initial value for the mean. Some family functions have an argument called \code{iprobs.y}, and if so, then these values can overwrite \code{gprobs.y}. % Then the actual search values will be % \code{unique(sort(c(gshape, 1/gshape)))}, etc. } \item{parallel}{ A logical, or a simple formula specifying which terms have equal/unequal coefficients. The formula must be simple, i.e., additive with simple main effects terms. Interactions and nesting etc. are not handled. To handle complex formulas use the \code{constraints} argument (of \code{\link{vglm}} etc.); however, there is a lot more setting up involved and things will not be as convenient. Here are some examples. 1. \code{parallel = TRUE ~ x2 + x5} means the parallelism assumption is only applied to \eqn{X_2}, \eqn{X_5} and the intercept. 2. \code{parallel = TRUE ~ -1} and \code{parallel = TRUE ~ 0} mean the parallelism assumption is applied to \emph{no} variables at all.
Similarly, \code{parallel = FALSE ~ -1} and \code{parallel = FALSE ~ 0} mean the parallelism assumption is applied to \emph{all} the variables including the intercept. 3. \code{parallel = FALSE ~ x2 - 1} and \code{parallel = FALSE ~ x2 + 0} apply the parallelism constraint to all terms (including the intercept) except for \eqn{X_2}. This argument is common in \pkg{VGAM} family functions for categorical responses, e.g., \code{\link{cumulative}}, \code{\link{acat}}, \code{\link{cratio}}, \code{\link{sratio}}. For the proportional odds model (\code{\link{cumulative}}) having parallel constraints applied to each explanatory variable (except for the intercepts) means the fitted probabilities do not become negative or greater than 1. However, this parallelism or proportional-odds assumption ought to be checked. } % \item{apply.parint}{ % \emph{This variable will be deprecated shortly}. % Logical. % It refers to whether the parallelism constraint is % applied to the intercept too. % By default, in some models it does, in other models it does not. % Used only if \code{parallel = TRUE} (fully or partially with % respect to all the explanatory variables). % } \item{nsimEIM}{ Some \pkg{VGAM} family functions use simulation to obtain an approximate expected information matrix (EIM). For those that do, the \code{nsimEIM} argument specifies the number of random variates used per observation; the mean of \code{nsimEIM} random variates is taken. Thus \code{nsimEIM} controls the accuracy and a larger value may be necessary if the EIMs are not positive-definite. For intercept-only models (\code{y ~ 1}) the value of \code{nsimEIM} can be smaller (since the common value used is also then taken as the mean over the observations), especially if the number of observations is large. Some \pkg{VGAM} family functions provide two algorithms for estimating the EIM. If applicable, set \code{nsimEIM = NULL} to choose the other algorithm. } \item{imethod}{ An integer with value \code{1} or \code{2} or \code{3} or ... which specifies the initialization method for some parameters or a specific parameter. If failure to converge occurs try the next higher value, and continue until success. For example, \code{imethod = 1} might be the method of moments, and \code{imethod = 2} might be another method. If no value of \code{imethod} works then it will be necessary to use arguments such as \code{isigma}. For many \pkg{VGAM} family functions it is advisable to try this argument with all possible values to safeguard against problems such as converging to a local solution. \pkg{VGAM} family functions with this argument usually correspond to a model or distribution that is relatively hard to fit successfully, therefore care is needed to ensure the global solution is obtained. So using all possible values that this argument supplies is a good idea. } \item{type.fitted}{ Character. Type of fitted value returned by the \code{fitted()} methods function. The first choice is always the default. The available choices depend on what kind of family function it is. Using the first few letters of the chosen value is okay. See \code{\link{fittedvlm}} for more details. } \item{percentiles}{ Numeric vector, with values between 0 and 100 (although it is not recommended that exactly 0 or 100 be inputted). Used only if \code{type.fitted = "quantiles"} or \code{type.fitted = "percentiles"}, then this argument specifies the values of these quantiles. The argument name tries to reinforce that the values lie between 0 and 100.
See \code{\link{fittedvlm}} for more details. } \item{probs.x, probs.y}{ Numeric, with values in (0, 1). The probabilities that define quantiles with respect to some vector, usually an \code{x} or \code{y} of some sort. This is used to create two subsets of data corresponding to `low' and `high' values of x or y. Each value is separately fed into the \code{probs} argument of \code{\link[stats:quantile]{quantile}}. If the data set size is small then it may be necessary to increase/decrease slightly the first/second values respectively. } \item{lss}{ Logical. This stands for the ordering: location, scale and shape. Should the ordering of the parameters be in this order? Almost all \pkg{VGAM} family functions have this order by default, but in order to match the arguments of existing R functions, one might need to set \code{lss = FALSE}. For example, the arguments of \code{\link{weibullR}} are scale and shape, whereas those of \code{\link[stats]{rweibull}} are shape and scale. As a temporary measure (from \pkg{VGAM} 0.9-7 onwards but prior to version 1.0-0), some family functions such as \code{\link{sinmad}} have an \code{lss} argument without a default. For these, setting \code{lss = FALSE} will work. Later, \code{lss = TRUE} will be the default. Be careful with the \code{dpqr}-type functions, e.g., \code{\link{rsinmad}}. } \item{whitespace}{ Logical. Should white spaces (\code{" "}) be used in the labelling of the linear/additive predictors? Setting \code{TRUE} usually results in more readability but it occupies more columns of the output. } \item{oim}{ Logical. Should the observed information matrices (OIMs) be used for the working weights? In general, setting \code{oim = TRUE} means the Newton-Raphson algorithm, and \code{oim = FALSE} means Fisher-scoring. The latter uses the EIM, and is usually recommended. If \code{oim = TRUE} then \code{nsimEIM} is ignored. } \item{zero}{ Either an integer vector, or a vector of character strings. If an integer, then it specifies which linear/additive predictor is modelled as \emph{intercept-only}. That is, the regression coefficients are set to zero for all covariates except for the intercept. If \code{zero} is specified then it may be a vector with values from the set \eqn{\{1,2,\ldots,M\}}. The value \code{zero = NULL} means model \emph{all} linear/additive predictors as functions of the explanatory variables. Here, \eqn{M} is the number of linear/additive predictors. Technically, if \code{zero} contains the value \eqn{j} then the \eqn{j}th row of every constraint matrix (except for the intercept) consists of all 0 values. Some \pkg{VGAM} family functions allow the \code{zero} argument to accept negative values; if so then its absolute value is recycled over each (usual) response. For example, \code{zero = -2} for the two-parameter negative binomial distribution would mean, for each response, the second linear/additive predictor is modelled as intercept-only. That is, it applies to all the \eqn{k} parameters in \code{\link{negbinomial}} (this \pkg{VGAM} family function can handle a matrix of responses). Suppose \code{zero = zerovec} where \code{zerovec} is a vector of negative values. If \eqn{G} is the usual \eqn{M} value for a univariate response then the actual values for argument \code{zero} are all values in \code{c(abs(zerovec), G + abs(zerovec), 2*G + abs(zerovec), ... )} lying in the integer range \eqn{1} to \eqn{M}.
For example, setting \code{zero = -c(2, 3)} for a matrix response of 4 columns with \code{\link{zinegbinomial}} (which usually has \eqn{G = M = 3} for a univariate response) would be equivalent to \code{zero = c(2, 3, 5, 6, 8, 9, 11, 12)}. This example has \eqn{M = 12}. Note that if \code{zerovec} contains negative values then their absolute values should be elements from the set \code{1:G}. Note: \code{zero} may have positive and negative values, for example, setting \code{zero = c(-2, 3)} in the above example would be equivalent to \code{zero = c(2, 3, 5, 8, 11)}. The argument \code{zero} also accepts a character vector (for \pkg{VGAM} 1.0-1 onwards). Each value is fed into \code{\link[base]{grep}} with \code{fixed = TRUE}, meaning that wildcards \code{"*"} are not useful. See the example below---all the variants work; those with \code{LOCAT} issue a warning that that value is unmatched. Importantly, the parameter names are \code{c("location1", "scale1", "location2", "scale2")} because there are 2 responses. Yee (2015) described \code{zero} for only numerical input. Allowing character input is particularly important when the number of parameters cannot be determined without having the actual data first. For example, with time series data, an ARMA(\eqn{p},\eqn{q}) process might have parameters \eqn{\theta_1,\ldots,\theta_p} which should be intercept-only by default. Then specifying a numerical default value for \code{zero} would be too difficult (there are the drift and scale parameters too). However, it is possible with the character representation: \code{zero = "theta"} would achieve this. In the future, most \pkg{VGAM} family functions might be converted to the character representation---the advantage being that it is more readable. When programming a \pkg{VGAM} family function that allows character input, the variable \code{predictors.names} must be assigned correctly. %Note that \code{zero} accepts wildcards (cf. the Linux operating system): %\code{"location*"} means that \emph{all} location parameters %are intercept-only. % When programming a \pkg{VGAM} family function that allows character % input, the variables \code{parameters.names} % and \code{Q1} } \item{ishrinkage}{ Shrinkage factor \eqn{s} used for obtaining initial values. Numeric, between 0 and 1. In general, the formula used is something like \eqn{s \mu + (1-s) y}{s*mu + (1-s)*y} where \eqn{\mu}{mu} is a measure of central tendency such as a weighted mean or median, and \eqn{y} is the response vector. For example, the initial values are slight perturbations of the mean towards the actual data. For many types of models this method seems to work well and is often reasonably robust to outliers in the response. Often this argument is only used if the argument \code{imethod} is assigned a certain value. } \item{nointercept}{ An integer-valued vector specifying which linear/additive predictors have no intercepts. Any values must be from the set \{1,2,\ldots,\eqn{M}\}. A value of \code{NULL} means no such constraints. } \item{multiple.responses}{ Logical. Some \pkg{VGAM} family functions allow a multivariate or vector response. If so, then usually the response is a matrix with columns corresponding to the individual response variables. They are all fitted simultaneously. Arguments such as \code{parallel} may then be useful to allow for relationships between the regressions of each response variable. 
If \code{multiple.responses = TRUE} then sometimes the response is interpreted differently, e.g., \code{\link{posbinomial}} chooses the first column of a matrix response as success and combines the other columns as failure, but when \code{multiple.responses = TRUE} then each column of the response matrix is the number of successes and the \code{weights} argument is of the same dimension as the response and contains the number of trials. } \item{earg.link}{ Sometimes the link argument can receive \code{earg}-type input, such as \code{\link{quasibinomial}} calling \code{\link{binomial}}. This argument should be generally ignored. } \item{byrow.arg}{ Logical. Some \pkg{VGAM} family functions that handle multiple responses have arguments that allow input to be fed in which affect all the responses, e.g., \code{imu} for initializing a \code{mu} parameter. In such cases it is sometimes more convenient to input one value per response by setting \code{byrow.arg = TRUE}; then values are recycled in order to form a matrix of the appropriate dimension. This argument matches \code{byrow} in \code{\link[base]{matrix}}; in fact, the input is fed into \code{matrix(..., byrow = byrow.arg)}. This argument has no effect when there is one response. } \item{bred}{ Logical. Some \pkg{VGAM} family functions will allow bias-reduction based on the work by Kosmidis and Firth. Sometimes half-stepping is a good idea; set \code{stepsize = 0.5} and monitor convergence by setting \code{trace = TRUE}. } } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \section{Warning }{ The \code{zero} argument is supplied for convenience but conflicts can arise with other arguments, e.g., the \code{constraints} argument of \code{\link{vglm}} and \code{\link{vgam}}. See Example 5 below for an example. If not sure, use, e.g., \code{constraints(fit)} and \code{coef(fit, matrix = TRUE)} to check the result of a fit \code{fit}. The arguments \code{zero} and \code{nointercept} can be inputted with values that fail. For example, \code{multinomial(zero = 2, nointercept = 1:3)} means the second linear/additive predictor is identically zero, which will cause a failure. Be careful about the use of other potentially contradictory constraints, e.g., \code{multinomial(zero = 2, parallel = TRUE ~ x3)}. If in doubt, apply \code{constraints()} to the fitted object to check. \pkg{VGAM} family functions with the \code{nsimEIM} argument may have inaccurate working weight matrices. If so, then the standard errors of the regression coefficients may be inaccurate. Thus output from \code{summary(fit)}, \code{vcov(fit)}, etc. may be misleading. Changes relating to the \code{lss} argument have very important consequences and users must beware. Good programming style is to rely on the argument names and not on the order. } \details{ Full details will be given in documentation yet to be written, at a later date! } \references{ Yee, T. W. (2015) Vector Generalized Linear and Additive Models: With an Implementation in R. New York, USA: \emph{Springer}. Kosmidis, I. and Firth, D. (2009) Bias reduction in exponential family nonlinear models. \emph{Biometrika}, \bold{96}(4), 793--804. %Kosmidis, I. and Firth, D. (2010) %A generic algorithm for reducing bias in parametric estimation. %\emph{Electronic Journal of Statistics}, %\bold{4}, 1097--1112.
} \seealso{ \code{\link{Links}}, \code{\link{vglmff-class}}, \code{\link{UtilitiesVGAM}}, \code{\link{normal.vcm}}, \code{\link{multilogit}}. } \author{T. W. Yee} \note{ See \code{\link{Links}} regarding a major change in link functions, for version 0.9-0 and higher (released during the 2nd half of 2012). } \examples{ # Example 1 cumulative() cumulative(link = "probit", reverse = TRUE, parallel = TRUE) # Example 2 wdata <- data.frame(x2 = runif(nn <- 1000)) wdata <- transform(wdata, y = rweibull(nn, shape = 2 + exp(1 + x2), scale = exp(-0.5))) fit <- vglm(y ~ x2, weibullR(lshape = logoff(offset = -2), zero = 2), data = wdata) coef(fit, mat = TRUE) # Example 3; multivariate (multiple) response \dontrun{ ndata <- data.frame(x = runif(nn <- 500)) ndata <- transform(ndata, y1 = rnbinom(nn, mu = exp(3+x), size = exp(1)), # k is size y2 = rnbinom(nn, mu = exp(2-x), size = exp(0))) fit <- vglm(cbind(y1, y2) ~ x, negbinomial(zero = -2), data = ndata) coef(fit, matrix = TRUE) } # Example 4 \dontrun{ # fit1 and fit2 are equivalent fit1 <- vglm(ymatrix ~ x2 + x3 + x4 + x5, cumulative(parallel = FALSE ~ 1 + x3 + x5), data = cdata) fit2 <- vglm(ymatrix ~ x2 + x3 + x4 + x5, cumulative(parallel = TRUE ~ x2 + x4), data = cdata) } # Example 5 udata <- data.frame(x2 = rnorm(nn <- 200)) udata <- transform(udata, y1 = rnorm(nn, mean = 1 - 3*x2, sd = exp(1 + 0.2*x2)), y2 = rnorm(nn, mean = 1 - 3*x2, sd = exp(1))) args(uninormal) fit1 <- vglm(y1 ~ x2, uninormal, data = udata) # This is okay fit2 <- vglm(y2 ~ x2, uninormal(zero = 2), data = udata) # This is okay # This creates potential conflict clist <- list("(Intercept)" = diag(2), "x2" = diag(2)) fit3 <- vglm(y2 ~ x2, uninormal(zero = 2), data = udata, constraints = clist) # Conflict! coef(fit3, matrix = TRUE) # Shows that clist[["x2"]] was overwritten, constraints(fit3) # i.e., 'zero' seems to override the 'constraints' arg # Example 6 ('whitespace' argument) pneumo <- transform(pneumo, let = log(exposure.time)) fit1 <- vglm(cbind(normal, mild, severe) ~ let, sratio(whitespace = FALSE, parallel = TRUE), data = pneumo) fit2 <- vglm(cbind(normal, mild, severe) ~ let, sratio(whitespace = TRUE, parallel = TRUE), data = pneumo) head(predict(fit1), 2) # No white spaces head(predict(fit2), 2) # Uses white spaces # Example 7 ('zero' argument with character input) set.seed(123); n <- 1000 ldata <- data.frame(x2 = runif(n)) ldata <- transform(ldata, y1 = rlogis(n, loc = 0+5*x2, scale = exp(2))) ldata <- transform(ldata, y2 = rlogis(n, loc = 0+5*x2, scale = exp(0+1*x2))) ldata <- transform(ldata, w1 = runif(n)) ldata <- transform(ldata, w2 = runif(n)) fit7 <- vglm(cbind(y1, y2) ~ x2, # logistic(zero = "location1"), # location1 is intercept-only # logistic(zero = "location2"), # logistic(zero = "location*"), # Not okay... all is unmatched # logistic(zero = "scale1"), # logistic(zero = "scale2"), # logistic(zero = "scale"), # Both scale parameters are matched logistic(zero = c("location", "scale2")), # All but scale1 # logistic(zero = c("LOCAT", "scale2")), # Only scale2 is matched # logistic(zero = c("LOCAT")), # Nothing is matched # trace = TRUE, # weights = cbind(w1, w2), weights = w1, data = ldata) coef(fit7, matrix = TRUE) } \keyword{models} VGAM/man/posnegbinUC.Rd0000644000176200001440000000766513135276753014314 0ustar liggesusers\name{Posnegbin} \alias{Posnegbin} \alias{dposnegbin} \alias{pposnegbin} \alias{qposnegbin} \alias{rposnegbin} %- Also NEED an '\alias' for EACH other topic documented here. 
\title{ Positive-Negative Binomial Distribution } \description{ Density, distribution function, quantile function and random generation for the positive-negative binomial distribution. } \usage{ dposnegbin(x, size, prob = NULL, munb = NULL, log = FALSE) pposnegbin(q, size, prob = NULL, munb = NULL, lower.tail = TRUE, log.p = FALSE) qposnegbin(p, size, prob = NULL, munb = NULL) rposnegbin(n, size, prob = NULL, munb = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Fed into \code{\link[stats]{runif}}. } \item{size, prob, munb, log}{ Same arguments as those of an ordinary negative binomial distribution (see \code{\link[stats:NegBinomial]{dnbinom}}). Some arguments have been renamed slightly. % This is called \eqn{\theta}{theta} in the \code{\link[MASS]{rnegbin}} % function in the \code{MASS} library. Short vectors are recycled. The parameter \code{1/size} is known as a dispersion parameter; as \code{size} approaches infinity, the negative binomial distribution approaches a Poisson distribution. Note that \code{prob} must lie in \eqn{(0,1)}, otherwise a \code{NaN} is returned. } \item{log.p, lower.tail}{ Same arguments as those of an ordinary negative binomial distribution (see \code{\link[stats:NegBinomial]{pnbinom}}). } } \details{ The positive-negative binomial distribution is a negative binomial distribution but with the probability of a zero being zero. The other probabilities are scaled to add to unity. The mean therefore is \deqn{\mu / (1-p(0))}{% munb / (1-p(0))} where \eqn{\mu}{munb} is the mean of an ordinary negative binomial distribution. % 20120405; no longer true to a superior method: % The arguments of % \code{rposnegbin()} % are fed into % \code{\link[stats:NegBinomial]{rnbinom}} until \eqn{n} positive values % are obtained. } \value{ \code{dposnegbin} gives the density, \code{pposnegbin} gives the distribution function, \code{qposnegbin} gives the quantile function, and \code{rposnegbin} generates \eqn{n} random deviates. } \references{ Welsh, A. H., Cunningham, R. B., Donnelly, C. F. and Lindenmayer, D. B. (1996) Modelling the abundances of rare species: statistical models for counts with extra zeros. \emph{Ecological Modelling}, \bold{88}, 297--308. } \author{ T. W. Yee } %\note{ % 20120405; no longer true to a superior method: % The running time % of \code{rposnegbin()} % is slow when \code{munb} is very close to zero. % %} \seealso{ \code{\link{posnegbinomial}}, \code{\link{zanegbinomial}}, \code{\link{zinegbinomial}}, \code{\link[stats:NegBinomial]{rnbinom}}.
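% A quick numerical check of the Details above (a sketch): for x >= 1 the
% positive-NBD density is the ordinary NBD density rescaled by P(Y > 0).
% all.equal(dposnegbin(1:9, size = 4, munb = 5),
%           dnbinom(1:9, size = 4, mu = 5) /
%             pnbinom(0, size = 4, mu = 5, lower.tail = FALSE))  # TRUE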
% \code{\link[MASS]{rnegbin}}, } \examples{ munb <- 5; size <- 4; n <- 1000 table(y <- rposnegbin(n, munb = munb, size = size)) mean(y) # sample mean munb / (1 - (size / (size + munb))^size) # population mean munb / pnbinom(0, mu = munb, size = size, lower.tail = FALSE) # same as before x <- (-1):17 (ii <- dposnegbin(x, munb = munb, size = size)) max(abs(cumsum(ii) - pposnegbin(x, munb = munb, size = size))) # Should be 0 \dontrun{ x <- 0:10 barplot(rbind(dposnegbin(x, munb = munb, size = size), dnbinom(x, mu = munb, size = size)), beside = TRUE, col = c("blue","green"), main = paste("dposnegbin(munb = ", munb, ", size = ", size, ") (blue) vs", " dnbinom(mu = ", munb, ", size = ", size, ") (green)", sep = ""), names.arg = as.character(x)) } # Another test for pposnegbin() nn <- 5000 mytab <- cumsum(table(rposnegbin(nn, munb = munb, size = size))) / nn myans <- pposnegbin(sort(as.numeric(names(mytab))), munb = munb, size = size) max(abs(mytab - myans)) # Should be 0 } \keyword{distribution} VGAM/man/calibrate.qrrvglm.control.Rd0000644000176200001440000000663213135276753017167 0ustar liggesusers\name{calibrate.qrrvglm.control} \alias{calibrate.qrrvglm.control} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Control Function for CQO/CAO Calibration } \description{ Algorithmic constants and parameters for running \code{\link{calibrate.qrrvglm}} are set using this function. } \usage{ calibrate.qrrvglm.control(object, trace = FALSE, Method.optim = "BFGS", gridSize = ifelse(Rank == 1, 9, 5), varI.latvar = FALSE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ The fitted CQO/CAO model. The user should ignore this argument. % The fitted CQO/UQO/CAO model. The user should ignore this argument. } \item{trace}{ Logical indicating if output should be produced for each iteration. It is a good idea to set this argument to be \code{TRUE} since the computations are expensive. } \item{Method.optim}{ Character. Fed into the \code{method} argument of \code{\link[stats]{optim}}. } \item{gridSize}{ Numeric, recycled to length \code{Rank}. Controls the resolution of the grid used for initial values. For each latent variable, an equally spaced grid of length \code{gridSize} is cast from the smallest site score to the largest site score. Then the likelihood function is evaluated on the grid, and the best fit is chosen as the initial value. Thus increasing the value of \code{gridSize} increases the chance of obtaining the global solution; however, the computing time increases proportionately. } \item{varI.latvar}{ Logical. For CQO objects only, this argument is fed into \code{\link{Coef.qrrvglm}}. } \item{\dots}{ Avoids an error message for extraneous arguments. } } \details{ Most CQO/CAO users will only need to make use of \code{trace} and \code{gridSize}. These arguments should be used inside their call to \code{\link{calibrate.qrrvglm}}, not this function directly. } \value{ A list with the following components. \item{trace}{Numeric (even though the input can be logical). } \item{gridSize}{Positive integer. } \item{varI.latvar}{Logical.} } \references{ Yee, T. W. (2013) On constrained and unconstrained quadratic ordination. \emph{Manuscript in preparation}. } \author{T. W. Yee} \note{ Despite the name of this function, CAO models are handled as well. % Despite the name of this function, UQO and CAO models are handled } \seealso{ \code{\link{calibrate.qrrvglm}}, \code{\link{Coef.qrrvglm}}.
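% A hedged sketch of the Details above: 'trace' and 'gridSize' are meant
% to be passed within a calibrate() call on a fitted CQO object (such as
% 'p1' in the Examples below); 'gridSize = 17' is just an illustrative value.
% calibrate(p1, trace = TRUE, gridSize = 17,
%           new = data.frame(depvar(p1)[3:4, ]))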
} \examples{ \dontrun{ hspider[, 1:6] <- scale(hspider[, 1:6]) # Needed when I.tol = TRUE set.seed(123) p1 <- cqo(cbind(Alopacce, Alopcune, Pardlugu, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, family = poissonff, data = hspider, I.tol = TRUE) sort(deviance(p1, history = TRUE)) # A history of all the iterations siteNos <- 3:4 # Calibrate these sites cp1 <- calibrate(p1, trace = TRUE, new = data.frame(depvar(p1)[siteNos, ])) } \dontrun{ # Graphically compare the actual site scores with their calibrated values persp(p1, main = "Site scores: solid=actual, dashed=calibrated", label = TRUE, col = "blue", las = 1) abline(v = latvar(p1)[siteNos], lty = 1, col = 1:length(siteNos)) # Actual site scores abline(v = cp1, lty = 2, col = 1:length(siteNos)) # Calibrated values } } \keyword{models} \keyword{regression} VGAM/man/Coef.qrrvglm.Rd0000644000176200001440000001140513135276753014430 0ustar liggesusers\name{Coef.qrrvglm} \alias{Coef.qrrvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Returns Important Matrices etc. of a QO Object } \description{ This methods function returns important matrices etc. of a QO object. } \usage{ Coef.qrrvglm(object, varI.latvar = FALSE, refResponse = NULL, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ % A CQO or UQO object. A CQO object, i.e., one of class \code{"qrrvglm"}. } \item{varI.latvar}{ Logical indicating whether to scale the site scores (latent variables) to have variance-covariance matrix equal to the rank-\eqn{R} identity matrix. All models have uncorrelated site scores (latent variables), and this option stretches or shrinks the ordination axes if \code{TRUE}. See below for further details. } \item{refResponse}{ Integer or character. Specifies the \emph{reference response} or \emph{reference species}. By default, the reference species is found by searching sequentially starting from the first species until a positive-definite tolerance matrix is found. Then this tolerance matrix is transformed to the identity matrix. Then the site scores (latent variables) are made uncorrelated. See below for further details. % If \code{eq.tolerances=FALSE}, then transformations occur so that % the reference species has a tolerance matrix equal to the rank-\eqn{R} % identity matrix. } \item{\dots}{ Currently unused. } } \details{ If \code{I.tolerances=TRUE} or \code{eq.tolerances=TRUE} (and its estimated tolerance matrix is positive-definite) then all species' tolerances are unity by transformation or by definition, and the spread of the site scores can be compared to them. Vice versa, if one wishes to compare the tolerances with the site score variability then setting \code{varI.latvar=TRUE} is more appropriate. For rank-2 QRR-VGLMs, one of the species can be chosen so that the angle of its major axis and minor axis is zero, i.e., parallel to the ordination axes. This means the latent variables act on that species independently, and that its tolerance matrix is diagonal. The argument \code{refResponse} allows one to choose which is the reference species, which must have a positive-definite tolerance matrix, i.e., is bell-shaped. If \code{refResponse} is not specified, then the code will try to choose some reference species starting from the first species.
Although the \code{refResponse} argument could possibly be offered as an option when fitting the model, it is currently only available after fitting the model, e.g., in the functions \code{\link{Coef.qrrvglm}} and \code{\link{lvplot.qrrvglm}}. } \value{ The \bold{A}, \bold{B1}, \bold{C}, \bold{T}, \bold{D} matrices/arrays are returned, along with other slots. The returned object has class \code{"Coef.qrrvglm"} (see \code{\link{Coef.qrrvglm-class}}). % For UQO, \bold{C} is undefined. } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. } \author{ Thomas W. Yee } \note{ Consider an equal-tolerances Poisson/binomial CQO model with \code{noRRR = ~ 1}. For \eqn{R=1} it has about \eqn{2S+p_2}{2*S+p2} parameters. For \eqn{R=2} it has about \eqn{3S+2 p_2}{3*S+2*p2} parameters. Here, \eqn{S} is the number of species, and \eqn{p_2=p-1}{p2=p-1} is the number of environmental variables making up the latent variable. For an unequal-tolerances Poisson/binomial CQO model with \code{noRRR = ~ 1}, it has about \eqn{3S -1 +p_2}{3*S-1+p2} parameters for \eqn{R=1}, and about \eqn{6S -3 +2p_2}{6*S-3+2*p2} parameters for \eqn{R=2}. Since the total number of data points is \eqn{nS}{n*S}, where \eqn{n} is the number of sites, it pays to divide the number of data points by the number of parameters to get some idea about how much information the parameters contain. } % ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link{cqo}}, \code{\link{Coef.qrrvglm-class}}, \code{print.Coef.qrrvglm}, \code{\link{lvplot.qrrvglm}}. } \examples{ set.seed(123) x2 <- rnorm(n <- 100) x3 <- rnorm(n) x4 <- rnorm(n) latvar1 <- 0 + x3 - 2*x4 lambda1 <- exp(3 - 0.5 * ( latvar1-0)^2) lambda2 <- exp(2 - 0.5 * ( latvar1-1)^2) lambda3 <- exp(2 - 0.5 * ((latvar1+4)/2)^2) # Unequal tolerances y1 <- rpois(n, lambda1) y2 <- rpois(n, lambda2) y3 <- rpois(n, lambda3) set.seed(111) # vvv p1 <- cqo(cbind(y1, y2, y3) ~ x2 + x3 + x4, poissonff, trace = FALSE) \dontrun{ lvplot(p1, y = TRUE, lcol = 1:3, pch = 1:3, pcol = 1:3) } # vvv Coef(p1) # vvv print(Coef(p1), digits=3) } \keyword{models} \keyword{regression} VGAM/man/binom2.rho.Rd0000644000176200001440000001512413135276753014042 0ustar liggesusers\name{binom2.rho} \alias{binom2.rho} \alias{binom2.Rho} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Bivariate Probit Model (Family Function) } \description{ Fits a bivariate probit model to two binary responses. The correlation parameter rho is the measure of dependency. } \usage{ binom2.rho(lmu = "probit", lrho = "rhobit", imu1 = NULL, imu2 = NULL, irho = NULL, imethod = 1, zero = "rho", exchangeable = FALSE, grho = seq(-0.95, 0.95, by = 0.05), nsimEIM = NULL) binom2.Rho(rho = 0, imu1 = NULL, imu2 = NULL, exchangeable = FALSE, nsimEIM = NULL) } %- maybe also 'usage' for other objects documented here. % binom2.rho(lrho = "rhobit", lmu = "probit", imu1 = NULL, imu2 = NULL,...) \arguments{ \item{lmu}{ Link function applied to the marginal probabilities. Should be left alone. } \item{lrho}{ Link function applied to the \eqn{\rho}{rho} association parameter. See \code{\link{Links}} for more choices. } \item{imu1, imu2}{ Optional initial values for the two marginal probabilities. May be a vector. } \item{irho}{ Optional initial value for \eqn{\rho}{rho}. If given, this should lie between \eqn{-1} and \eqn{1}.
See below for more comments. } \item{zero}{ Specifies which linear/additive predictors are modelled as intercept-only. A \code{NULL} means none. Numerically, the \eqn{\rho}{rho} parameter is most easily modelled as an intercept only, hence the default. See \code{\link{CommonVGAMffArguments}} for more information. } \item{exchangeable}{ Logical. If \code{TRUE}, the two marginal probabilities are constrained to be equal. } \item{imethod, nsimEIM, grho}{ See \code{\link{CommonVGAMffArguments}} for more information. A value of at least 100 for \code{nsimEIM} is recommended; the larger the value the better. } \item{rho}{ Numeric vector. Values are recycled to the needed length, and ought to be in range, which is \eqn{(-1, 1)}. } } \details{ The \emph{bivariate probit model} was one of the earliest regression models to handle two binary responses jointly. It has a probit link for each of the two marginal probabilities, and models the association between the responses by the \eqn{\rho}{rho} parameter of a standard bivariate normal distribution (with zero means and unit variances). One can think of the joint probabilities as being \eqn{\Phi(\eta_1,\eta_2;\rho)}{Phi(eta1,eta2;rho)} where \eqn{\Phi}{Phi} is the cumulative distribution function of a standard bivariate normal distribution. Explicitly, the default model is \deqn{probit[P(Y_j=1)] = \eta_j,\ \ \ j=1,2}{% probit[P(Y_j=1)] = eta_j, j=1,2} for the marginals, and \deqn{rhobit[rho] = \eta_3.}{% rhobit[rho] = eta_3.} The joint probability is \eqn{P(Y_1=1,Y_2=1)=\Phi(\eta_1,\eta_2;\rho)}{P(Y_1=1,Y_2=1)=Phi(eta1,eta2;rho)}, and from this the other three joint probabilities are easily computed. The model is fitted by maximum likelihood estimation since the full likelihood is specified. Fisher scoring is implemented. The default models \eqn{\eta_3}{eta3} as a single parameter only, i.e., an intercept-only model for rho, but this can be circumvented by setting \code{zero = NULL} in order to model rho as a function of all the explanatory variables. The bivariate probit model should not be confused with a \emph{bivariate logit model} with a probit link (see \code{\link{binom2.or}}). The latter uses the odds ratio to quantify the association. Actually, the bivariate logit model is recommended over the bivariate probit model because the odds ratio is a more natural way of measuring the association between two binary responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. When fitted, the \code{fitted.values} slot of the object contains the four joint probabilities, labelled as \eqn{(Y_1,Y_2)}{(Y1,Y2)} = (0,0), (0,1), (1,0), (1,1), respectively. } \references{ Ashford, J. R. and Sowden, R. R. (1970) Multi-variate probit analysis. \emph{Biometrics}, \bold{26}, 535--546. Freedman, D. A. (2010) \emph{Statistical Models and Causal Inference: a Dialogue with the Social Sciences}, Cambridge: Cambridge University Press. Freedman, D. A. and Sekhon, J. S. (2010) Endogeneity in probit response models. \emph{Political Analysis}, \bold{18}, 138--150. } \author{ Thomas W. Yee } \note{ See \code{\link{binom2.or}} about the form of input the response should have. By default, a constant \eqn{\rho}{rho} is fitted because \code{zero = "rho"}. Set \code{zero = NULL} if you want the \eqn{\rho}{rho} parameter to be modelled as a function of the explanatory variables.
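% A hedged sketch of the Details above: recovering P(Y1=1, Y2=1) from the
% three linear predictors of a fit such as the one in the Examples below.
% eta <- predict(fit)  # Columns: probit(mu1), probit(mu2), rhobit(rho)
% p11 <- pbinorm(eta[, 1], eta[, 2],
%                cov12 = rhobit(eta[, 3], inverse = TRUE))
% max(abs(p11 - fitted(fit)[, 4]))  # 4th column is (Y1,Y2) = (1,1); near 0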
The value \eqn{\rho}{rho} lies in the interval \eqn{(-1,1)}{(-1,1)}, therefore a \code{\link{rhobit}} link is the default. Convergence problems can occur. If so, assign \code{irho} a range of values and monitor convergence (e.g., set \code{trace = TRUE}). Else try \code{imethod}. Practical experience shows that local solutions can occur, and that \code{irho} needs to be quite close to the (global) solution. Also, \code{imu1} and \code{imu2} may be used. This help file is mainly about \code{binom2.rho()}. \code{binom2.Rho()} fits a bivariate probit model with \emph{known} \eqn{\rho}{rho}. The inputted \code{rho} is saved in the \code{misc} slot of the fitted object, with \code{rho} as the component name. In some econometrics applications (e.g., Freedman 2010, Freedman and Sekhon 2010) one response is used as an explanatory variable, e.g., a \emph{recursive} binomial probit model. Such models will not work here. Historically, the bivariate probit model was the first \pkg{VGAM} family function I ever wrote, based on Ashford and Sowden (1970). I don't think they ever thought of it either! Hence the criticisms raised go beyond the use of what was originally intended. } \seealso{ \code{\link{rbinom2.rho}}, \code{\link{rhobit}}, \code{\link{pbinorm}}, \code{\link{binom2.or}}, \code{\link{loglinb2}}, \code{\link{coalminers}}, \code{\link{binomialff}}, \code{\link{fisherz}}. % \code{\link{pnorm2}}, } \examples{ coalminers <- transform(coalminers, Age = (age - 42) / 5) fit <- vglm(cbind(nBnW, nBW, BnW, BW) ~ Age, binom2.rho, data = coalminers, trace = TRUE) summary(fit) coef(fit, matrix = TRUE) } \keyword{models} \keyword{regression} % (i.e., \code{\link[stats:Normal]{pnorm}}) % with correlation parameter \eqn{\rho}{rho}. VGAM/man/olym.Rd0000644000176200001440000000446713135276753013055 0ustar liggesusers\name{olympics} \alias{olym08} \alias{olym12} \docType{data} \title{ 2008 and 2012 Summer Olympic Final Medal Count Data} \description{ Final medal count, by country, for the Summer 2008 and 2012 Olympic Games. } \usage{ data(olym08) data(olym12) } \format{ A data frame with 87 or 85 observations on the following 6 variables. \describe{ \item{\code{rank}}{a numeric vector, overall ranking of the countries. } \item{\code{country}}{a factor. } \item{\code{gold}}{a numeric vector, number of gold medals. } \item{\code{silver}}{a numeric vector, number of silver medals. } \item{\code{bronze}}{a numeric vector, number of bronze medals. } \item{\code{totalmedal}}{a numeric vector, total number of medals. } % \item{\code{country}}{a factor. character vector. } } } \details{ The events were held during (i) August 8--24, 2008, in Beijing; and (ii) July 27--August 12, 2012, in London. % This is a simple two-way contingency table of counts. } % \source{ % url{http://www.associatedcontent.com/article/979484/2008_summer_olympic_medal_count_total.html}, % url{http://www.london2012.com/medals/medal-count/}. % } \references{ The official English website was/is \code{http://en.beijing2008.cn} and \code{http://www.london2012.com}. Help from Viet Hoang Quoc is gratefully acknowledged. } \seealso{ \code{\link[VGAM]{grc}}. } \examples{ summary(olym08) summary(olym12) ## maybe str(olym08) ; plot(olym08) ... \dontrun{ par(mfrow = c(1, 2)) myylim <- c(0, 55) with(head(olym08, n = 8), barplot(rbind(gold, silver, bronze), col = c("gold", "grey", "brown"), # No "silver" or "bronze"!
# "gold", "grey71", "chocolate4", names.arg = country, cex.names = 0.5, ylim = myylim, beside = TRUE, main = "2008 Summer Olympic Final Medal Count", ylab = "Medal count", las = 1, sub = "Top 8 countries; 'gold'=gold, 'grey'=silver, 'brown'=bronze")) with(head(olym12, n = 8), barplot(rbind(gold, silver, bronze), col = c("gold", "grey", "brown"), # No "silver" or "bronze"! names.arg = country, cex.names = 0.5, ylim = myylim, beside = TRUE, main = "2012 Summer Olympic Final Medal Count", ylab = "Medal count", las = 1, sub = "Top 8 countries; 'gold'=gold, 'grey'=silver, 'brown'=bronze")) } } \keyword{datasets} VGAM/man/cqo.Rd0000644000176200001440000006105713135276753012655 0ustar liggesusers\name{cqo} \alias{cqo} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Fitting Constrained Quadratic Ordination (CQO)} \description{ A \emph{constrained quadratic ordination} (CQO; formerly called \emph{canonical Gaussian ordination} or CGO) model is fitted using the \emph{quadratic reduced-rank vector generalized linear model} (QRR-VGLM) framework. } \usage{ cqo(formula, family = stop("argument 'family' needs to be assigned"), data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = qrrvglm.control(...), offset = NULL, method = "cqo.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = NULL, smart = TRUE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{formula}{ a symbolic description of the model to be fit. The RHS of the formula is applied to each linear predictor. Different variables in each linear predictor can be chosen by specifying constraint matrices. } \item{family}{ a function of class \code{"vglmff"} (see \code{\link{vglmff-class}}) describing what statistical model is to be fitted. This is called a ``\pkg{VGAM} family function''. See \code{\link{CommonVGAMffArguments}} for general information about many types of arguments found in this type of function. Currently the following families are supported: \code{\link{poissonff}}, \code{\link{binomialff}} (\code{\link{logit}} and \code{\link{cloglog}} links available), \code{\link{negbinomial}}, \code{\link{gamma2}}, \code{\link{gaussianff}}. Sometimes special arguments are required for \code{cqo()}, e.g., \code{binomialff(multiple.responses = TRUE)}. Also, \code{\link{quasipoissonff}} and \code{\link{quasibinomialff}} may or may not work. % \code{negbinomial(deviance = TRUE)}, % \code{gamma2(deviance = TRUE)}. } \item{data}{ an optional data frame containing the variables in the model. By default the variables are taken from \code{environment(formula)}, typically the environment from which \code{cqo} is called. } \item{weights}{ an optional vector or matrix of (prior) weights to be used in the fitting process. Currently, this argument should not be used. } \item{subset}{ an optional logical vector specifying a subset of observations to be used in the fitting process. } \item{na.action}{ a function which indicates what should happen when the data contain \code{NA}s. The default is set by the \code{na.action} setting of \code{\link[base]{options}}, and is \code{na.fail} if that is unset. The ``factory-fresh'' default is \code{na.omit}. } \item{etastart}{ starting values for the linear predictors. It is a \eqn{M}-column matrix. If \eqn{M = 1} then it may be a vector. Currently, this argument probably should not be used. } \item{mustart}{ starting values for the fitted values. 
It can be a vector or a matrix. Some family functions do not make use of this argument. Currently, this argument probably should not be used. } \item{coefstart}{ starting values for the coefficient vector. Currently, this argument probably should not be used. } \item{control}{ a list of parameters for controlling the fitting process. See \code{\link{qrrvglm.control}} for details. } \item{offset}{ This argument must not be used. % especially when \code{I.tolerances = TRUE}. % a vector or \eqn{M}-column matrix of offset values. % These are \emph{a priori} known and are % added to the linear predictors during fitting. } \item{method}{ the method to be used in fitting the model. The default (and presently only) method \code{cqo.fit} uses \emph{iteratively reweighted least squares} (IRLS). } \item{model}{ a logical value indicating whether the \emph{model frame} should be assigned in the \code{model} slot. } \item{x.arg, y.arg}{ logical values indicating whether the model matrix and response matrix used in the fitting process should be assigned in the \code{x} and \code{y} slots. Note the model matrix is the LM model matrix. % ; to get the VGLM % model matrix type \code{model.matrix(vglmfit)} where % \code{vglmfit} is a \code{vglm} object. } \item{contrasts}{ an optional list. See the \code{contrasts.arg} of \code{model.matrix.default}. } \item{constraints}{ an optional list of constraint matrices. The components of the list must be named with the term it corresponds to (and it must match in character format). Each constraint matrix must have \eqn{M} rows, and be of full-column rank. By default, constraint matrices are the \eqn{M} by \eqn{M} identity matrix unless arguments in the family function itself override these values. If \code{constraints} is used it must contain \emph{all} the terms; an incomplete list is not accepted. Constraint matrices for \eqn{x_2}{x_2} variables are taken as the identity matrix. } \item{extra}{ an optional list with any extra information that might be needed by the family function. } % \item{qr.arg}{ logical value indicating whether % the slot \code{qr}, which returns the QR decomposition of the % VLM model matrix, is returned on the object. % } \item{smart}{ logical value indicating whether smart prediction (\code{\link{smartpred}}) will be used. } \item{\dots}{ further arguments passed into \code{\link{qrrvglm.control}}. } } \details{ QRR-VGLMs or \emph{constrained quadratic ordination} (CQO) models are estimated here by maximum likelihood estimation. Optimal linear combinations of the environmental variables are computed, called \emph{latent variables} (these appear as \code{latvar} for \eqn{R=1} else \code{latvar1}, \code{latvar2}, etc. in the output). Here, \eqn{R} is the \emph{rank} or the number of ordination axes. Each species' response is then a regression of these latent variables using quadratic polynomials on a transformed scale (e.g., log for Poisson counts, logit for presence/absence responses). The solution is obtained iteratively in order to maximize the log-likelihood function, or equivalently, minimize the deviance. 
The central formula (for Poisson and binomial species data) is given by \deqn{\eta = B_1^T x_1 + A \nu + \sum_{m=1}^M (\nu^T D_m \nu) e_m}{% eta = B_1^T x_1 + A nu + sum_{m=1}^M (nu^T D_m nu) e_m} where \eqn{x_1}{x_1} is a vector (usually just a 1 for an intercept), \eqn{x_2}{x_2} is a vector of environmental variables, \eqn{\nu=C^T x_2}{nu=C^T x_2} is an \eqn{R}-vector of latent variables, \eqn{e_m} is a vector of 0s but with a 1 in the \eqn{m}th position. The \eqn{\eta}{eta} is a vector of linear/additive predictors, e.g., the \eqn{m}th element is \eqn{\eta_m = \log(E[Y_m])}{eta_m = log(E[Y_m])} for the \eqn{m}th species. The matrices \eqn{B_1}, \eqn{A}, \eqn{C} and \eqn{D_m} are estimated from the data, i.e., contain the regression coefficients. The tolerance matrices satisfy \eqn{T_s = -\frac12 D_s^{-1}}{T_s = -0.5 D_s^(-1)}. Many important CQO details are directly related to arguments in \code{\link{qrrvglm.control}}, e.g., the argument \code{noRRR} specifies which variables comprise \eqn{x_1}{x_1}. Theoretically, the four most popular \pkg{VGAM} family functions to be used with \code{cqo} correspond to the Poisson, binomial, normal, and negative binomial distributions. The latter is a 2-parameter model. All of these are implemented, as well as the 2-parameter gamma. The Poisson is or should be catered for by \code{\link{quasipoissonff}} and \code{\link{poissonff}}, and the binomial by \code{\link{quasibinomialff}} and \code{\link{binomialff}}. Those beginning with \code{"quasi"} have dispersion parameters that are estimated for each species. %the negative binomial by \code{\link{negbinomial}}, and the normal by %\code{gaussianff}. %For overdispersed Poisson data, using \code{\link{quasipoissonff}} is %strongly recommended over \code{\link{negbinomial}}; the latter is %\emph{very} sensitive to departures from the model assumptions. For initial values, the function \code{.Init.Poisson.QO} should work reasonably well if the data is Poisson with species having equal tolerances. It can be quite good on binary data too. Otherwise the \code{Cinit} argument in \code{\link{qrrvglm.control}} can be used. %(and negative binomial) It is possible to relax the quadratic form to an additive model. The result is a data-driven approach rather than a model-driven approach, so that CQO is extended to \emph{constrained additive ordination} (CAO) when \eqn{R=1}. See \code{\link{cao}} for more details. In this documentation, \eqn{M} is the number of linear predictors, \eqn{S} is the number of responses (species). Then \eqn{M=S} for Poisson and binomial species data, and \eqn{M=2S} for negative binomial and gamma distributed species data. Incidentally, \emph{Unconstrained quadratic ordination} (UQO) may be performed by, e.g., fitting a Goodman's RC association model; see \code{\link{uqo}} and the Yee and Hadi (2014) referenced there. For UQO, the response is the usual site-by-species matrix and there are no environmental variables; the site scores are free parameters. UQO can be performed under the assumption that all species have the same tolerance matrices. } \value{ An object of class \code{"qrrvglm"}. % Note that the slot \code{misc} has a list component called % \code{deviance.Bestof} which gives the history of deviances over all % the iterations. } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. ter Braak, C. J. F. and Prentice, I. C. (1988) A theory of gradient analysis.
\emph{Advances in Ecological Research}, \bold{18}, 271--317. %Yee, T. W. (2005) %On constrained and unconstrained %quadratic ordination. %\emph{Manuscript in preparation}. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. } \author{ Thomas W. Yee. Thanks to Alvin Sou for converting a lot of the original FORTRAN code into C. } \note{ The input requires care, preparation and thought---\emph{a lot more} than other ordination methods. Here is a partial \bold{checklist}. \describe{ \item{(1)}{ The number of species should be kept reasonably low, e.g., 12 max. Feeding in 100+ species wholesale is a recipe for failure. Choose a few species carefully. Using 10 well-chosen species is better than 100+ species thrown in willy-nilly. } \item{(2)}{ Each species should be screened individually first, e.g., for presence/absence is the species totally absent or totally present at all sites? For presence/absence data \code{sort(colMeans(data))} can help avoid such species. } \item{(3)}{ The number of explanatory variables should be kept low, e.g., 7 max. } \item{(4)}{ Each explanatory variable should be screened individually first, e.g., is it heavily skewed or are there outliers? They should be plotted and then transformed where needed. They should not be too highly correlated with each other. } \item{(5)}{ Each explanatory variable should be scaled, e.g., to mean 0 and unit variance. This is especially needed for \code{I.tolerance = TRUE}. } \item{(6)}{ Keep the rank low. Only if the data is very good should a rank-2 model be attempted. Usually a rank-1 model is all that is practically possible even after a lot of work. The rank-1 model should always be attempted first. Then one might be clever and try using this model for initial values for a rank-2 model. } \item{(7)}{ If the number of sites is large then choose a random sample of them. For example, choose a maximum of 500 sites. This will reduce the memory and time expense of the computations. } \item{(8)}{ Try \code{I.tolerance = TRUE} or \code{eq.tolerance = FALSE} if the inputted data set is large, so as to reduce the computational expense. That's because the default, \code{I.tolerance = FALSE} and \code{eq.tolerance = TRUE}, is very memory hungry. } } By default, a rank-1 equal-tolerances QRR-VGLM model is fitted (see \code{\link{qrrvglm.control}} for the default control parameters). If \code{Rank > 1} then the latent variables are always transformed so that they are uncorrelated. By default, the argument \code{trace} is \code{TRUE} meaning a running log is printed out while the computations are taking place. This is because the algorithm is computationally expensive, therefore users might think that their computers have frozen if \code{trace = FALSE}! The argument \code{Bestof} in \code{\link{qrrvglm.control}} controls the number of models fitted (each uses different starting values) to the data. This argument is important because convergence may be to a \emph{local} solution rather than the \emph{global} solution. Using more starting values increases the chances of finding the global solution. Always plot an ordination diagram (use the generic function \code{\link{lvplot}}) and see if it looks sensible. Local solutions arise because the optimization problem is highly nonlinear, and this is particularly true for CAO. %Convergence of QRR-VGLMs can be difficult, especially for binary %data.
If this is so, then setting \code{I.tolerances = TRUE} or %\code{eq.tolerances = TRUE} may help, especially when the number of sites, %\eqn{n}, is small. %If the negative binomial family function \code{\link{negbinomial}} is %used for \code{cqo} then set \code{negbinomial(deviance = TRUE)} %is necessary. This means to minimize the deviance, which the fast %algorithm can handle. Many of the arguments applicable to \code{cqo} are common to \code{\link{vglm}} and \code{\link{rrvglm.control}}. The most important arguments are \code{Rank}, \code{noRRR}, \code{Bestof}, \code{I.tolerances}, \code{eq.tolerances}, \code{isd.latvar}, and \code{MUXfactor}. When fitting a 2-parameter model such as the negative binomial or gamma, it pays to have \code{eq.tolerances = TRUE} and \code{I.tolerances = FALSE}. This is because numerical problems can occur when fitting the model far away from the global solution when \code{I.tolerances = TRUE}. Setting the two arguments as described will slow down the computation considerably, however it is numerically more stable. In Example 1 below, an unequal-tolerances rank-1 QRR-VGLM is fitted to the hunting spiders dataset, and Example 2 is the equal-tolerances version. The latter is less likely to have convergence problems compared to the unequal-tolerances model. In Example 3 below, an equal-tolerances rank-2 QRR-VGLM is fitted to the hunting spiders dataset. The numerical difficulties encountered in fitting the rank-2 model suggests a rank-1 model is probably preferable. In Example 4 below, constrained binary quadratic ordination (in old nomenclature, constrained Gaussian logit ordination) is fitted to some simulated data coming from a species packing model. With multivariate binary responses, one must use \code{multiple.responses = TRUE} to indicate that the response (matrix) is multivariate. Otherwise, it is interpreted as a single binary response variable. In Example 5 below, the deviance residuals are plotted for each species. This is useful as a diagnostic plot. This is done by (re)regressing each species separately against the latent variable. Sometime in the future, this function might handle input of the form \code{cqo(x, y)}, where \code{x} and \code{y} are matrices containing the environmental and species data respectively. } \section{Warning }{ Local solutions are not uncommon when fitting CQO models. To increase the chances of obtaining the global solution, increase the value of the argument \code{Bestof} in \code{\link{qrrvglm.control}}. For reproducibility of the results, it pays to set a different random number seed before calling \code{cqo} (the function \code{\link[base:Random]{set.seed}} does this). The function \code{cqo} chooses initial values for \bold{C} using \code{.Init.Poisson.QO()} if \code{Use.Init.Poisson.QO = TRUE}, else random numbers. Unless \code{I.tolerances = TRUE} or \code{eq.tolerances = FALSE}, CQO is computationally expensive with memory and time. It pays to keep the rank down to 1 or 2. If \code{eq.tolerances = TRUE} and \code{I.tolerances = FALSE} then the cost grows quickly with the number of species and sites (in terms of memory requirements and time). The data needs to conform quite closely to the statistical model, and the environmental range of the data should be wide in order for the quadratics to fit the data well (bell-shaped response surfaces). 
If not, RR-VGLMs will be more appropriate because the response is linear on the transformed scale (e.g., log or logit) and the ordination is called \emph{constrained linear ordination} or CLO. Like many regression models, CQO is sensitive to outliers (in the environmental and species data), sparse data, high leverage points, multicollinearity etc. For these reasons, it is necessary to examine the data carefully for these features and take corrective action (e.g., omitting certain species, sites or environmental variables from the analysis, transforming certain environmental variables, etc.). Any optimum lying outside the convex hull of the site scores should not be trusted. Fitting a CAO is recommended first; then, after any transformations etc., a CQO can possibly be fitted. For binary data, it is necessary to have `enough' data. In general, the number of sites \eqn{n} ought to be much larger than the number of species \eqn{S}, e.g., at least 100 sites for two species. Compared to count (Poisson) data, numerical problems occur more frequently with presence/absence (binary) data. For example, if \code{Rank = 1} and if the response data for each species is a string of all absences, then all presences, then all absences (when enumerated along the latent variable), then infinite parameter estimates will occur. In general, setting \code{I.tolerances = TRUE} may help. This function was formerly called \code{cgo}. It has been renamed to reinforce a new nomenclature described in Yee (2006). } \seealso{ \code{\link{qrrvglm.control}}, \code{\link{Coef.qrrvglm}}, \code{\link{predictqrrvglm}}, \code{\link{rcqo}}, \code{\link{cao}}, \code{\link{uqo}}, \code{\link{rrvglm}}, \code{\link{poissonff}}, \code{\link{binomialff}}, \code{\link{negbinomial}}, \code{\link{gamma2}}, \code{\link{lvplot.qrrvglm}}, \code{\link{perspqrrvglm}}, \code{\link{trplot.qrrvglm}}, \code{\link{vglm}}, \code{\link[base:Random]{set.seed}}, \code{\link{hspider}}, \code{\link[VGAMdata]{trapO}}. % \code{\link{rrvglm.control}}, % \code{\link{vcovqrrvglm}}, %Documentation accompanying the \pkg{VGAM} package at %\url{http://www.stat.auckland.ac.nz/~yee} %contains further information and examples.
} \examples{ \dontrun{ # Example 1; Fit an unequal tolerances model to the hunting spiders data hspider[,1:6] <- scale(hspider[,1:6]) # Standardized environmental variables set.seed(1234) # For reproducibility of the results p1ut <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, fam = poissonff, data = hspider, Crow1positive = FALSE, eq.tol = FALSE) sort(deviance(p1ut, history = TRUE)) # A history of all the iterations if (deviance(p1ut) > 1177) warning("suboptimal fit obtained") S <- ncol(depvar(p1ut)) # Number of species clr <- (1:(S+1))[-7] # Omits yellow lvplot(p1ut, y = TRUE, lcol = clr, pch = 1:S, pcol = clr, las = 1) # Ordination diagram legend("topright", leg = colnames(depvar(p1ut)), col = clr, pch = 1:S, merge = TRUE, bty = "n", lty = 1:S, lwd = 2) (cp <- Coef(p1ut)) (a <- latvar(cp)[cp@latvar.order]) # Ordered site scores along the gradient # Names of the ordered sites along the gradient: rownames(latvar(cp))[cp@latvar.order] (aa <- Opt(cp)[, cp@Optimum.order]) # Ordered optimums along the gradient aa <- aa[!is.na(aa)] # Delete the species that is not unimodal names(aa) # Names of the ordered optimums along the gradient trplot(p1ut, which.species = 1:3, log = "xy", type = "b", lty = 1, lwd = 2, col = c("blue","red","green"), label = TRUE) -> ii # Trajectory plot legend(0.00005, 0.3, paste(ii$species[, 1], ii$species[, 2], sep = " and "), lwd = 2, lty = 1, col = c("blue", "red", "green")) abline(a = 0, b = 1, lty = "dashed") S <- ncol(depvar(p1ut)) # Number of species clr <- (1:(S+1))[-7] # Omits yellow persp(p1ut, col = clr, label = TRUE, las = 1) # Perspective plot # Example 2; Fit an equal tolerances model. Less numerically fraught. set.seed(1234) p1et <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, poissonff, data = hspider, Crow1positive = FALSE) sort(deviance(p1et, history = TRUE)) # A history of all the iterations if (deviance(p1et) > 1586) warning("suboptimal fit obtained") S <- ncol(depvar(p1et)) # Number of species clr <- (1:(S+1))[-7] # Omits yellow persp(p1et, col = clr, label = TRUE, las = 1) # Example 3: A rank-2 equal tolerances CQO model with Poisson data # This example is numerically fraught... need I.toler = TRUE too. 
set.seed(555) p2 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, poissonff, data = hspider, Crow1positive = FALSE, I.toler = TRUE, Rank = 2, Bestof = 3, isd.latvar = c(2.1, 0.9)) sort(deviance(p2, history = TRUE)) # A history of all the iterations if (deviance(p2) > 1127) warning("suboptimal fit obtained") lvplot(p2, ellips = FALSE, label = TRUE, xlim = c(-3,4), C = TRUE, Ccol = "brown", sites = TRUE, scol = "grey", pcol = "blue", pch = "+", chull = TRUE, ccol = "grey") # Example 4: species packing model with presence/absence data set.seed(2345) n <- 200; p <- 5; S <- 5 mydata <- rcqo(n, p, S, fam = "binomial", hi.abundance = 4, eq.tol = TRUE, es.opt = TRUE, eq.max = TRUE) myform <- attr(mydata, "formula") set.seed(1234) b1et <- cqo(myform, binomialff(multiple.responses = TRUE, link = "cloglog"), data = mydata) sort(deviance(b1et, history = TRUE)) # A history of all the iterations lvplot(b1et, y = TRUE, lcol = 1:S, pch = 1:S, pcol = 1:S, las = 1) Coef(b1et) # Compare the fitted model with the 'truth' cbind(truth = attr(mydata, "concoefficients"), fitted = concoef(b1et)) # Example 5: Plot the deviance residuals for diagnostic purposes set.seed(1234) p1et <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, poissonff, data = hspider, eq.tol = TRUE, trace = FALSE) sort(deviance(p1et, history = TRUE)) # A history of all the iterations if (deviance(p1et) > 1586) warning("suboptimal fit obtained") S <- ncol(depvar(p1et)) par(mfrow = c(3, 4)) for (ii in 1:S) { tempdata <- data.frame(latvar1 = c(latvar(p1et)), sppCounts = depvar(p1et)[, ii]) tempdata <- transform(tempdata, myOffset = -0.5 * latvar1^2) # For species ii, refit the model to get the deviance residuals fit1 <- vglm(sppCounts ~ offset(myOffset) + latvar1, poissonff, data = tempdata, trace = FALSE) # For checking: this should be 0 # print("max(abs(c(Coef(p1et)@B1[1,ii],Coef(p1et)@A[ii,1])-coef(fit1)))") # print( max(abs(c(Coef(p1et)@B1[1,ii],Coef(p1et)@A[ii,1])-coef(fit1))) ) # Plot the deviance residuals devresid <- resid(fit1, type = "deviance") predvalues <- predict(fit1) + fit1@offset ooo <- with(tempdata, order(latvar1)) plot(predvalues + devresid ~ latvar1, data = tempdata, col = "red", xlab = "latvar1", ylab = "", main = colnames(depvar(p1et))[ii]) with(tempdata, lines(latvar1[ooo], predvalues[ooo], col = "blue")) } } } \keyword{models} \keyword{regression} %legend("topright", x=1, y=135, leg = colnames(depvar(p1ut)), col = clr, % pch = 1:S, merge = TRUE, bty = "n", lty = 1:S, lwd = 2) VGAM/man/alaplace3.Rd0000644000176200001440000003353413135276753013717 0ustar liggesusers\name{alaplace} \alias{alaplace1} \alias{alaplace2} \alias{alaplace3} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Asymmetric Laplace Distribution Family Functions } \description{ Maximum likelihood estimation of the 1, 2 and 3-parameter asymmetric Laplace distributions (ALDs). The 2-parameter ALD may, with trepidation and lots of skill, sometimes be used as an approximation of quantile regression. 
} \usage{ alaplace1(tau = NULL, llocation = "identitylink", ilocation = NULL, kappa = sqrt(tau/(1 - tau)), Scale.arg = 1, ishrinkage = 0.95, parallel.locat = TRUE ~ 0, digt = 4, idf.mu = 3, zero = NULL, imethod = 1) alaplace2(tau = NULL, llocation = "identitylink", lscale = "loge", ilocation = NULL, iscale = NULL, kappa = sqrt(tau/(1 - tau)), ishrinkage = 0.95, parallel.locat = TRUE ~ 0, parallel.scale = FALSE ~ 0, digt = 4, idf.mu = 3, imethod = 1, zero = "scale") alaplace3(llocation = "identitylink", lscale = "loge", lkappa = "loge", ilocation = NULL, iscale = NULL, ikappa = 1, imethod = 1, zero = c("scale", "kappa")) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{tau, kappa}{ Numeric vectors with \eqn{0 < \tau < 1}{0 < tau < 1} and \eqn{\kappa >0}{kappa >0}. Most users will only specify \code{tau} since the estimated location parameter corresponds to the \eqn{\tau}{tau}th regression quantile, which is easier to understand. See below for details. } \item{llocation, lscale, lkappa}{ Character. Parameter link functions for location parameter \eqn{\xi}{xi}, scale parameter \eqn{\sigma}{sigma}, asymmetry parameter \eqn{\kappa}{kappa}. See \code{\link{Links}} for more choices. For example, the argument \code{llocation} can help handle count data by restricting the quantiles to be positive (use \code{llocation = "loge"}). However, \code{llocation} is best left alone since the theory only works properly with the identity link. } \item{ilocation, iscale, ikappa}{ Optional initial values. If given, it must be numeric and values are recycled to the appropriate length. The default is to choose the value internally. } \item{parallel.locat, parallel.scale}{ See the \code{parallel} argument of \code{\link{CommonVGAMffArguments}}. These arguments apply to the location and scale parameters. It generally only makes sense for the scale parameters to be equal, hence set \code{parallel.scale = TRUE}. Note that assigning \code{parallel.locat} the value \code{TRUE} circumvents the seriously embarrassing quantile crossing problem because all constraint matrices except for the intercept correspond to a parallelism assumption. } % \item{intparloc}{ Logical. % Defunct. % } % \item{eq.scale}{ Logical. % Should the scale parameters be equal? It is advised % to keep \code{eq.scale = TRUE} unchanged because it % does not make sense to have different values for each % \code{tau} value. % } \item{imethod}{ Initialization method. Either the value 1, 2, 3 or 4. } \item{idf.mu}{ Degrees of freedom for the cubic smoothing spline fit applied to get an initial estimate of the location parameter. See \code{\link{vsmooth.spline}}. Used only when \code{imethod = 3}. } \item{ishrinkage}{ How much shrinkage is used when initializing \eqn{\xi}{xi}. The value must be between 0 and 1 inclusive, and a value of 0 means the individual response values are used, and a value of 1 means the median or mean is used. This argument is used only when \code{imethod = 4}. See \code{\link{CommonVGAMffArguments}} for more information. } \item{Scale.arg}{ The value of the scale parameter \eqn{\sigma}{sigma}. This argument may be used to compute quantiles at different \eqn{\tau}{tau} values from an existing fitted \code{alaplace2()} model (practical only if it has a single value). If the model has \code{parallel.locat = TRUE} then only the intercept need be estimated; use an offset. See below for an example. 
% This is because the expected information matrix is diagonal, % i.e., the location and scale parameters are asymptotically independent. } \item{digt }{ Passed into \code{\link[base]{Round}} as the \code{digits} argument for the \code{tau} values; used cosmetically for labelling. } \item{zero}{ See \code{\link{CommonVGAMffArguments}} for more information. Where possible, the default is to model all the \eqn{\sigma}{sigma} and \eqn{\kappa}{kappa} as an intercept-only term. } } \details{ These \pkg{VGAM} family functions implement one variant of asymmetric Laplace distributions (ALDs) suitable for quantile regression. Kotz et al. (2001) call it \emph{the} ALD. Its density function is \deqn{f(y;\xi,\sigma,\kappa) = \frac{\sqrt{2}}{\sigma} \, \frac{\kappa}{1 + \kappa^2} \, \exp \left( - \frac{\sqrt{2}}{\sigma \, \kappa} |y - \xi | \right) }{% f(y;xi,sigma,kappa) = (sqrt(2)/sigma) * (kappa/(1+kappa^2)) * exp( -(sqrt(2) / (sigma * kappa)) * |y-xi| ) } for \eqn{y \leq \xi}{y <= xi}, and \deqn{f(y;\xi,\sigma,\kappa) = \frac{\sqrt{2}}{\sigma} \, \frac{\kappa}{1 + \kappa^2} \, \exp \left( - \frac{\sqrt{2} \, \kappa}{\sigma} |y - \xi | \right) }{% f(y;xi,sigma,kappa) = (sqrt(2)/sigma) * (kappa/(1+kappa^2)) * exp( - (sqrt(2) * kappa / sigma) * |y-xi| ) } for \eqn{y > \xi}{y > xi}. Here, the ranges are for all real \eqn{y} and \eqn{\xi}{xi}, positive \eqn{\sigma}{sigma} and positive \eqn{\kappa}{kappa}. The special case \eqn{\kappa = 1}{kappa = 1} corresponds to the (symmetric) Laplace distribution of Kotz et al. (2001). The mean is \eqn{\xi + \sigma (1/\kappa - \kappa) / \sqrt{2}}{xi + sigma * (1/kappa - kappa) / sqrt(2)} and the variance is \eqn{\sigma^2 (1 + \kappa^4) / (2 \kappa^2)}{sigma^2 * (1 + kappa^4) / (2 * kappa^2)}. The enumeration of the linear/additive predictors used for \code{alaplace2()} is the first location parameter followed by the first scale parameter, then the second location parameter followed by the second scale parameter, etc. For \code{alaplace3()}, only a vector response is handled and the last (third) linear/additive predictor is for the asymmetry parameter. It is known that the maximum likelihood estimate of the location parameter \eqn{\xi}{xi} corresponds to the regression quantile estimate of the classical quantile regression approach of Koenker and Bassett (1978). An important property of the ALD is that \eqn{P(Y \leq \xi) = \tau}{P(Y <= xi) = tau} where \eqn{\tau = \kappa^2 / (1 + \kappa^2)}{tau = kappa^2 / (1 + kappa^2)} so that \eqn{\kappa = \sqrt{\tau / (1-\tau)}}{kappa = sqrt(tau / (1-tau))}. (A small numeric check of this identity is given just before the Examples below.) Thus \code{alaplace2()} might be used as an alternative to \code{rq} in the \pkg{quantreg} package, although scoring is really an unsuitable algorithm for estimation here. Both \code{alaplace1()} and \code{alaplace2()} can handle multiple \code{tau} values, in which case the number of linear/additive predictors is dictated by the length of \code{tau} or \code{kappa}. The functions can also handle multiple responses (i.e., a matrix response), but only with a \emph{single-valued} \code{tau} or \code{kappa}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. In the \code{extra} slot of the fitted object are some list components which are useful, e.g., the sample proportion of values which are less than the fitted quantile curves. } \references{ Koenker, R. and Bassett, G. (1978) Regression quantiles.
\emph{Econometrica}, \bold{46}, 33--50. Kotz, S., Kozubowski, T. J. and Podgorski, K. (2001) \emph{The Laplace distribution and generalizations: a revisit with applications to communications, economics, engineering, and finance}, Boston: Birkhauser. % Yee, T. W. (2014) % Quantile regression for counts and proportions. % In preparation. } \author{ Thomas W. Yee } \section{Warning}{ These functions are experimental and especially subject to change or withdrawal. The MLE regularity conditions do not hold for this distribution so that misleading inferences may result, e.g., in the \code{summary} and \code{vcov} of the object. Care is needed with \code{tau} values which are too small, e.g., for count data with \code{llocation = "loge"} and if the sample proportion of zeros is greater than \code{tau}. } \note{ % Commented out 20090326 % The function \code{alaplace2()} is recommended over \code{alaplace1()} % for quantile regression because the solution is % invariant to location and scale, % i.e., linear transformation of the response produces the % same linear transformation of the fitted quantiles. These \pkg{VGAM} family functions use Fisher scoring. Convergence may be slow and half-stepping is usual (although one can use \code{trace = TRUE} to see which is the best model and then use \code{maxit} to choose that model) due to the regularity conditions not holding. Often the iterations slowly crawl towards the solution so monitoring the convergence (set \code{trace = TRUE}) is highly recommended. For large data sets it is a very good idea to keep the length of \code{tau}/\code{kappa} low to avoid large memory requirements. Then for \code{parallel.locat = FALSE} one can repeatedly fit a model with \code{alaplace1()} with one \eqn{\tau}{tau} at a time; and for \code{parallel.locat = TRUE} one can refit a model with \code{alaplace1()} with one \eqn{\tau}{tau} at a time but using offsets and an intercept-only model. A second method for solving the noncrossing quantile problem is illustrated below in Example 3. This is called the \emph{accumulative quantile method} (AQM) and details are in Yee (2015). It does not make the strong parallelism assumption. The functions \code{alaplace2()} and \code{\link{laplace}} differ slightly in terms of the parameterizations. } \seealso{ \code{\link{ralap}}, \code{\link{laplace}}, \code{\link{CommonVGAMffArguments}}, \code{\link{lms.bcn}}, \code{\link{amlnormal}}, \code{\link{sc.studentt2}}, \code{\link{simulate.vlm}}. 
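A small numeric check of the \eqn{\tau}{tau}--\eqn{\kappa}{kappa} identity stated in the Details section. This is an added sketch, not part of the original page; it assumes only base R plus \pkg{VGAM}'s \code{\link{ralap}}.

tau <- c(0.1, 0.25, 0.5, 0.9)
kappa <- sqrt(tau / (1 - tau))           # kappa = sqrt(tau/(1-tau))
all.equal(kappa^2 / (1 + kappa^2), tau)  # TRUE: recovers tau
set.seed(1)
w <- ralap(1e5, location = 0, scale = 1, kappa = kappa[2])
mean(w <= 0)  # Approximately tau[2] = 0.25, since P(Y <= xi) = tau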
} % set.seed(1) \examples{ \dontrun{ # Example 1: quantile regression with smoothing splines set.seed(123); adata <- data.frame(x2 = sort(runif(n <- 500))) mymu <- function(x) exp(-2 + 6*sin(2*x-0.2) / (x+0.5)^2) adata <- transform(adata, y = rpois(n, lambda = mymu(x2))) mytau <- c(0.25, 0.75); mydof <- 4 fit <- vgam(y ~ s(x2, df = mydof), data = adata, trace = TRUE, maxit = 900, alaplace2(tau = mytau, llocat = "loge", parallel.locat = FALSE)) fitp <- vgam(y ~ s(x2, df = mydof), data = adata, trace = TRUE, maxit = 900, alaplace2(tau = mytau, llocat = "loge", parallel.locat = TRUE)) par(las = 1); mylwd <- 1.5 with(adata, plot(x2, jitter(y, factor = 0.5), col = "orange", main = "Example 1; green: parallel.locat = TRUE", ylab = "y", pch = "o", cex = 0.75)) with(adata, matlines(x2, fitted(fit ), col = "blue", lty = "solid", lwd = mylwd)) with(adata, matlines(x2, fitted(fitp), col = "green", lty = "solid", lwd = mylwd)) finexgrid <- seq(0, 1, len = 1001) for (ii in 1:length(mytau)) lines(finexgrid, qpois(p = mytau[ii], lambda = mymu(finexgrid)), col = "blue", lwd = mylwd) fit@extra # Contains useful information # Example 2: regression quantile at a new tau value from an existing fit # Nb. regression splines are used here since it is easier. fitp2 <- vglm(y ~ sm.bs(x2, df = mydof), data = adata, trace = TRUE, alaplace1(tau = mytau, llocation = "loge", parallel.locat = TRUE)) newtau <- 0.5 # Want to refit the model with this tau value fitp3 <- vglm(y ~ 1 + offset(predict(fitp2)[, 1]), alaplace1(tau = newtau, llocation = "loge"), data = adata) with(adata, plot(x2, jitter(y, factor = 0.5), col = "orange", pch = "o", cex = 0.75, ylab = "y", main = "Example 2; parallel.locat = TRUE")) with(adata, matlines(x2, fitted(fitp2), col = "blue", lty = 1, lwd = mylwd)) with(adata, matlines(x2, fitted(fitp3), col = "black", lty = 1, lwd = mylwd)) # Example 3: noncrossing regression quantiles using a trick: obtain # successive solutions which are added to previous solutions; use a log # link to ensure increasing quantiles at any value of x. mytau <- seq(0.2, 0.9, by = 0.1) answer <- matrix(0, nrow(adata), length(mytau)) # Stores the quantiles adata <- transform(adata, offsety = y*0) usetau <- mytau for (ii in 1:length(mytau)) { # cat("\n\nii = ", ii, "\n") adata <- transform(adata, usey = y-offsety) iloc <- ifelse(ii == 1, with(adata, median(y)), 1.0) # Well-chosen! mydf <- ifelse(ii == 1, 5, 3) # Maybe less smoothing will help # lloc <- ifelse(ii == 1, "loge", "loge") # 2nd value must be "loge" fit3 <- vglm(usey ~ sm.ns(x2, df = mydf), data = adata, trace = TRUE, alaplace2(tau = usetau[ii], lloc = "loge", iloc = iloc)) answer[, ii] <- (if (ii == 1) 0 else answer[, ii-1]) + fitted(fit3) adata <- transform(adata, offsety = answer[, ii]) } # Plot the results. with(adata, plot(x2, y, col = "blue", main = paste("Noncrossing and nonparallel; tau = ", paste(mytau, collapse = ", ")))) with(adata, matlines(x2, answer, col = "orange", lty = 1)) # Zoom in near the origin. with(adata, plot(x2, y, col = "blue", xlim = c(0, 0.2), ylim = 0:1, main = paste("Noncrossing and nonparallel; tau = ", paste(mytau, collapse = ", ")))) with(adata, matlines(x2, answer, col = "orange", lty = 1)) } } \keyword{models} \keyword{regression} VGAM/man/loglaplace.Rd0000644000176200001440000002131213135276753014164 0ustar liggesusers\name{loglaplace} \alias{loglaplace1} \alias{logitlaplace1} % \alias{alaplace3} %- Also NEED an '\alias' for EACH other topic documented here.
\title{ Log-Laplace and Logit-Laplace Distribution Family Functions } \description{ Maximum likelihood estimation of the 1-parameter log-Laplace and the 1-parameter logit-Laplace distributions. These may be used for quantile regression for counts and proportions respectively. } \usage{ loglaplace1(tau = NULL, llocation = "loge", ilocation = NULL, kappa = sqrt(tau/(1 - tau)), Scale.arg = 1, ishrinkage = 0.95, parallel.locat = FALSE, digt = 4, idf.mu = 3, rep0 = 0.5, minquantile = 0, maxquantile = Inf, imethod = 1, zero = NULL) logitlaplace1(tau = NULL, llocation = "logit", ilocation = NULL, kappa = sqrt(tau/(1 - tau)), Scale.arg = 1, ishrinkage = 0.95, parallel.locat = FALSE, digt = 4, idf.mu = 3, rep01 = 0.5, imethod = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{tau, kappa}{ See \code{\link{alaplace1}}. } \item{llocation}{ Character. Parameter link function for the location parameter \eqn{\xi}{xi}. See \code{\link{Links}} for more choices. However, this argument should be left unchanged with count data because it restricts the quantiles to be positive. With proportions data \code{llocation} can be assigned a link such as \code{\link{logit}}, \code{\link{probit}}, \code{\link{cloglog}}, etc. } \item{ilocation}{ Optional initial values. If given, it must be numeric and values are recycled to the appropriate length. The default is to choose the value internally. } \item{parallel.locat}{ Logical. Should the quantiles be parallel on the transformed scale (argument \code{llocation})? Assigning this argument to \code{TRUE} circumvents the seriously embarrassing quantile crossing problem. } % \item{eq.scale}{ Logical. % Should the scale parameters be equal? It is advised to keep % \code{eq.scale = TRUE} unchanged because it does not make sense to % have different values for each \code{tau} value. % } \item{imethod}{ Initialization method. Either the value 1, 2, or \ldots. } \item{idf.mu, ishrinkage, Scale.arg, digt, zero}{ See \code{\link{alaplace1}}. } \item{rep0, rep01}{ Numeric, positive. Replacement values for 0s and 1s respectively. For count data, values of the response which are 0 are replaced by \code{rep0}; this avoids computing \code{log(0)}. For proportions data, values of the response which are 0 or 1 are replaced by \code{min(rangey01[1]/2, rep01/w[y <= 0])} and \code{max((1 + rangey01[2])/2, 1-rep01/w[y >= 1])} respectively; e.g., this avoids computing \code{logit(0)} or \code{logit(1)}. Here, \code{rangey01} is the 2-vector \code{range(y[(y > 0) & (y < 1)])} of the response. } \item{minquantile, maxquantile}{ Numeric. The minimum and maximum values possible in the quantiles. These arguments are effectively ignored by default since \code{\link{loge}} keeps all quantiles positive. However, if \code{llocation = logoff(offset = 1)} then it is possible that the fitted quantiles have value 0 because \code{minquantile = 0}. } } \details{ These \pkg{VGAM} family functions implement translations of the asymmetric Laplace distribution (ALD). The resulting variants may be suitable for quantile regression for count data or sample proportions. For example, for count data the logarithm of the response is assumed to follow an ALD; for proportions data the logit of the response is assumed to follow an ALD. A positive random variable \eqn{Y} is said to have a log-Laplace distribution if \eqn{Y = e^W}{Y = exp(W)} where \eqn{W} has an ALD. There are many variants of ALDs and the one used here is described in \code{\link{alaplace1}}.
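The following added sketch (not from the original page) illustrates the \eqn{Y = e^W}{Y = exp(W)} construction above; it assumes \pkg{VGAM}'s \code{\link{ralap}} for generating ALD variates. Because \code{exp()} is monotone, the \eqn{\tau}{tau}-quantile of the log-Laplace variate is the exponentiated ALD location.

set.seed(1)
tau <- 0.75; kappa <- sqrt(tau / (1 - tau))
w <- ralap(1e5, location = 2, scale = 1, kappa = kappa)  # W has an ALD
y <- exp(w)        # Y = exp(W) has a log-Laplace distribution
mean(y <= exp(2))  # Approximately tau = 0.75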
} \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. In the \code{extra} slot of the fitted object are some list components which are useful. For example, the sample proportion of values which are less than the fitted quantile curves, which is \code{sum(wprior[y <= location]) / sum(wprior)} internally. Here, \code{wprior} are the prior weights (called \code{ssize} below), \code{y} is the response and \code{location} is a fitted quantile curve. This definition comes about naturally from the transformed ALD data. } \references{ Kotz, S., Kozubowski, T. J. and Podgorski, K. (2001) \emph{The Laplace distribution and generalizations: a revisit with applications to communications, economics, engineering, and finance}, Boston: Birkhauser. Kozubowski, T. J. and Podgorski, K. (2003) Log-Laplace distributions. \emph{International Mathematical Journal}, \bold{3}, 467--495. Yee, T. W. (2012) Quantile regression for counts and proportions. In preparation. } \author{ Thomas W. Yee } \section{Warning}{ The \pkg{VGAM} family function \code{\link{logitlaplace1}} will not handle a vector of just 0s and 1s as the response; it will only work satisfactorily if the number of trials is large. See \code{\link{alaplace1}} for other warnings. Care is needed with \code{tau} values which are too small, e.g., for count data the sample proportion of zeros must be less than all values in \code{tau}. Similarly, this also holds with \code{\link{logitlaplace1}}, which also requires all \code{tau} values to be less than the sample proportion of ones. } \note{ The form of input for \code{\link{logitlaplace1}} as response is a vector of proportions (values in \eqn{[0,1]}) and the number of trials is entered into the \code{weights} argument of \code{\link{vglm}}/\code{\link{vgam}}. See Example 2 below. See \code{\link{alaplace1}} for other notes in general. } \seealso{ \code{\link{alaplace1}}, \code{\link{dloglap}}. } \examples{ # Example 1: quantile regression of counts with regression splines set.seed(123); my.k <- exp(0) adata <- data.frame(x2 = sort(runif(n <- 500))) mymu <- function(x) exp( 1 + 3*sin(2*x) / (x+0.5)^2) adata <- transform(adata, y = rnbinom(n, mu = mymu(x2), size = my.k)) mytau <- c(0.1, 0.25, 0.5, 0.75, 0.9); mydof = 3 # halfstepping is usual: fitp <- vglm(y ~ sm.bs(x2, df = mydof), data = adata, trace = TRUE, loglaplace1(tau = mytau, parallel.locat = TRUE)) \dontrun{ par(las = 1) # Plot on a log1p() scale mylwd <- 1.5 plot(jitter(log1p(y), factor = 1.5) ~ x2, adata, col = "red", pch = "o", main = "Example 1; darkgreen=truth, blue=estimated", cex = 0.75) with(adata, matlines(x2, log1p(fitted(fitp)), col = "blue", lty = 1, lwd = mylwd)) finexgrid <- seq(0, 1, len = 201) for (ii in 1:length(mytau)) lines(finexgrid, col = "darkgreen", lwd = mylwd, log1p(qnbinom(p = mytau[ii], mu = mymu(finexgrid), si = my.k))) } fitp@extra # Contains useful information # Example 2: sample proportions set.seed(123); nnn <- 1000; ssize <- 100 # ssize = 1 will not work! adata <- data.frame(x2 = sort(runif(nnn))) mymu <- function(x) logit( 1.0 + 4*x, inv = TRUE) adata <- transform(adata, ssize = ssize, y2 = rbinom(nnn, size = ssize, prob = mymu(x2)) / ssize) mytau <- c(0.25, 0.50, 0.75) fit1 <- vglm(y2 ~ sm.bs(x2, df = 3), data = adata, weights = ssize, trace = TRUE, logitlaplace1(tau = mytau, lloc = "cloglog", paral = TRUE)) \dontrun{ # Check the solution. Note: this may be like comparing apples with oranges. 
plotvgam(fit1, se = TRUE, scol = "red", lcol = "blue", main = "Truth = 'darkgreen'") # Centered approximately ! linkFunctionChar <- as.character(fit1@misc$link) adata <- transform(adata, trueFunction= theta2eta(theta = mymu(x2), link=linkFunctionChar)) with(adata, lines(x2, trueFunction - mean(trueFunction), col = "darkgreen")) # Plot the data + fitted quantiles (on the original scale) myylim <- with(adata, range(y2)) plot(y2 ~ x2, adata, col = "blue", ylim = myylim, las = 1, pch = ".", cex = 2.5) with(adata, matplot(x2, fitted(fit1), add = TRUE, lwd = 3, type = "l")) truecol <- rep(1:3, len = fit1@misc$M) # Add the 'truth' smallxgrid <- seq(0, 1, len = 501) for (ii in 1:length(mytau)) lines(smallxgrid, col = truecol[ii], lwd = 2, qbinom(p = mytau[ii], prob = mymu(smallxgrid), size = ssize) / ssize) # Plot on the eta (== logit()/probit()/...) scale with(adata, matplot(x2, predict(fit1), add = FALSE, lwd = 3, type = "l")) # Add the 'truth' for (ii in 1:length(mytau)) { true.quant <- qbinom(p = mytau[ii], pr = mymu(smallxgrid), si = ssize) / ssize lines(smallxgrid, theta2eta(theta = true.quant, link = linkFunctionChar), col = truecol[ii], lwd = 2) } } } \keyword{models} \keyword{regression} VGAM/man/sm.ps.Rd0000644000176200001440000001606213135276753013127 0ustar liggesusers\name{sm.ps} \alias{sm.ps} %- Also NEED an `\alias' for EACH other topic documented here. \title{ Defining Penalized Spline Smooths in VGAM Formulas } \description{ This function represents a P-spline smooth term in a \code{vgam} formula and confers automatic smoothing parameter selection. } \usage{ sm.ps(x, ..., ps.int = NULL, spar = -1, degree = 3, p.order = 2, ridge.adj = 1e-5, spillover = 0.01, maxspar = 1e12, outer.ok = FALSE, mux = NULL, fixspar = FALSE) } %- maybe also `usage' for other objects documented here. \arguments{ \item{x, \dots}{ See \code{\link{sm.os}}. % Currently at least 7 unique \code{x} values are needed. } \item{ps.int}{ the number of equally-spaced B-spline intervals. Note that the number of knots is equal to \code{ps.int + 2*degree + 1}. The default, signified by \code{NULL}, means that the maximum of the value 7 and \code{degree} is chosen. This usually means 6 interior knots for big data sets. However, if this is too high compared to the length of \code{x}, then some adjustment is made. In the case where \code{mux} is assigned a numerical value (suggestions: some value between 1 and 2) then \code{ceiling(mux * log(length(unique(x.index))))} is used, where \code{x.index} is the combined data. No matter what, the above is not guaranteed to work on every data set. This argument may change in the future. See also argument \code{mux}. % 20160805; correct: Note that the number of knots is equal to % \code{ps.int + 2*degree + 1}. Its called Aknots. % 20160801: % \code{ceiling(2.5 * log1p(length(unique(x.index)))) + 3} % Prior to 20160801: % The default, signified by \code{NULL}, means that % \code{ceiling(1.5 * log(length(unique(x.index))))} } \item{spar, maxspar}{ See \code{\link{sm.os}}. } \item{mux}{ numeric. If given, then this argument multiplies \code{log(length(unique(x)))} to obtain \code{ps.int}. If \code{ps.int} is given then this argument is ignored. } \item{degree}{ degree of B-spline basis. Usually this will be 2 or 3; and the values 1 or 4 might possibly be used. } \item{p.order}{ order of difference penalty (0 is the ridge penalty). } \item{ridge.adj, spillover}{ See \code{\link{sm.os}}. 
% however, setting this argument equal to 0 does not result in % the natural boundary conditions (NBCs). } \item{outer.ok, fixspar}{ See \code{\link{sm.os}}. } } \details{ This function can be used by \code{\link{vgam}} to allow automatic smoothing parameter selection based on P-splines and minimizing an UBRE quantity. % For large sample sizes (\eqn{> 500}, say) % Also, if \eqn{n} is the number of \emph{distinct} abscissae, then % \code{sm.ps} will fail if \eqn{n < 7}. This function should only be used with \code{\link{vgam}} and is an alternative to \code{\link{sm.os}}; see that function for some details that also apply here. } \value{ A matrix with attributes that are (only) used by \code{\link{vgam}}. The number of rows of the matrix is \code{length(x)} and the number of columns is \code{ps.int + degree - 1}. The latter is because the function is centred. } \references{ %Eilers, P. H. C. and Marx, B. D. (2002). %Generalized Linear Additive Smooth Structures. %\emph{Journal of Computational and Graphical Statistics}, %\bold{11}(4): 758--783. %Marx, B. D. and Eilers, P. H. C. (1998). %Direct generalized linear modeling %with penalized likelihood. %\emph{CSDA}, \bold{28}(2): 193--209. Eilers, P. H. C. and Marx, B. D. (1996). Flexible smoothing with B-splines and penalties (with comments and rejoinder). \emph{Statistical Science}, \bold{11}(2): 89--121. } \author{ B. D. Marx wrote the original function. Subsequent edits were made by T. W. Yee and C. Somchit. } \note{ This function is currently under development and may change in the future. In particular, the default for \code{ps.int} is subject to change. } % ~Make other sections like WARNING with \section{WARNING }{....} ~ \section{Warning }{ See \code{\link{sm.os}}. } \seealso{ \code{\link{sm.os}}, \code{\link{s}}, \code{\link{vgam}}, \code{\link{smartpred}}, \code{\link{is.smart}}, \code{\link{summarypvgam}}, \code{\link[splines]{splineDesign}}, \code{\link[splines]{bs}}, \code{\link[mgcv]{magic}}. 
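A quick added check (a sketch based on the Value section above, and hence only as reliable as that description) that the returned matrix has \code{ps.int + degree - 1} columns:

xx <- runif(30)
ncol(sm.ps(xx, ps.int = 5, degree = 3))  # Should be 5 + 3 - 1 = 7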
} \examples{ sm.ps(runif(20)) sm.ps(runif(20), ps.int = 5) \dontrun{ data("TravelMode", package = "AER") # Need to install "AER" first air.df <- subset(TravelMode, mode == "air") # Form 4 smaller data frames bus.df <- subset(TravelMode, mode == "bus") trn.df <- subset(TravelMode, mode == "train") car.df <- subset(TravelMode, mode == "car") TravelMode2 <- data.frame(income = air.df$income, wait.air = air.df$wait - car.df$wait, wait.trn = trn.df$wait - car.df$wait, wait.bus = bus.df$wait - car.df$wait, gcost.air = air.df$gcost - car.df$gcost, gcost.trn = trn.df$gcost - car.df$gcost, gcost.bus = bus.df$gcost - car.df$gcost, wait = air.df$wait) # Value is unimportant TravelMode2$mode <- subset(TravelMode, choice == "yes")$mode # The response TravelMode2 <- transform(TravelMode2, incom.air = income, incom.trn = 0, incom.bus = 0) set.seed(1) TravelMode2 <- transform(TravelMode2, junkx2 = runif(nrow(TravelMode2))) tfit2 <- vgam(mode ~ sm.ps(gcost.air, gcost.trn, gcost.bus) + ns(junkx2, 4) + sm.ps(incom.air, incom.trn, incom.bus) + wait , crit = "coef", multinomial(parallel = FALSE ~ 1), data = TravelMode2, xij = list(sm.ps(gcost.air, gcost.trn, gcost.bus) ~ sm.ps(gcost.air, gcost.trn, gcost.bus) + sm.ps(gcost.trn, gcost.bus, gcost.air) + sm.ps(gcost.bus, gcost.air, gcost.trn), sm.ps(incom.air, incom.trn, incom.bus) ~ sm.ps(incom.air, incom.trn, incom.bus) + sm.ps(incom.trn, incom.bus, incom.air) + sm.ps(incom.bus, incom.air, incom.trn), wait ~ wait.air + wait.trn + wait.bus), form2 = ~ sm.ps(gcost.air, gcost.trn, gcost.bus) + sm.ps(gcost.trn, gcost.bus, gcost.air) + sm.ps(gcost.bus, gcost.air, gcost.trn) + wait + sm.ps(incom.air, incom.trn, incom.bus) + sm.ps(incom.trn, incom.bus, incom.air) + sm.ps(incom.bus, incom.air, incom.trn) + junkx2 + ns(junkx2, 4) + incom.air + incom.trn + incom.bus + gcost.air + gcost.trn + gcost.bus + wait.air + wait.trn + wait.bus) par(mfrow = c(2, 2)) plot(tfit2, se = TRUE, lcol = "orange", scol = "blue", ylim = c(-4, 4)) summary(tfit2) } } \keyword{models} \keyword{regression} \keyword{smooth} % binom2.or(exchangeable = TRUE ~ s(x2, 3)) VGAM/man/oizipfUC.Rd0000644000176200001440000000701413135276753013614 0ustar liggesusers\name{Oizipf} \alias{Oizipf} \alias{doizipf} \alias{poizipf} \alias{qoizipf} \alias{roizipf} %- Also NEED an '\alias' for EACH other topic documented here. \title{ One-Inflated Zipf Distribution } \description{ Density, distribution function, quantile function and random generation for the one-inflated Zipf distribution with parameter \code{pstr1}. } \usage{ doizipf(x, N, shape, pstr1 = 0, log = FALSE) poizipf(q, N, shape, pstr1 = 0) qoizipf(p, N, shape, pstr1 = 0) roizipf(n, N, shape, pstr1 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q, p, n}{Same as \code{\link[stats]{Uniform}}.} \item{N, shape}{ See \code{\link{Zipf}}. } \item{pstr1}{ Probability of a structural one (i.e., ignoring the Zipf distribution), called \eqn{\phi}{phi}. The default value of \eqn{\phi = 0}{phi = 0} corresponds to the response having an ordinary Zipf distribution. } \item{log}{Same as \code{\link[stats]{Uniform}}.} } \details{ The random variable \eqn{Y} equals 1 with probability \eqn{\phi}{phi}, and follows a \eqn{Zipf(N, s)} distribution with probability \eqn{1-\phi}{1-phi}. Thus \deqn{P(Y=1) = \phi + (1-\phi) P(W=1)}{% P(Y=1) = phi + (1-phi) * P(W=1)} where \eqn{W} is distributed as a \eqn{Zipf(N, s)} random variable. The \pkg{VGAM} family function \code{\link{oizipf}} estimates the two parameters of this model by Fisher scoring.
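The two-component mixture above can be verified directly. This added sketch (not part of the original page) assumes \code{doizipf()} and \code{\link{dzipf}} behave as documented:

N <- 10; shape <- 1.5; pstr1 <- 0.3
x <- 1:N
mixture <- pstr1 * (x == 1) + (1 - pstr1) * dzipf(x, N, shape)
max(abs(doizipf(x, N, shape, pstr1 = pstr1) - mixture))  # Should be ~0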
} \value{ \code{doizipf} gives the density, \code{poizipf} gives the distribution function, \code{qoizipf} gives the quantile function, and \code{roizipf} generates random deviates. } %\references{ } \author{ T. W. Yee } \note{ The argument \code{pstr1} is recycled to the required length, and usually has values which lie in the interval \eqn{[0,1]}. These functions actually allow for the \emph{one-deflated Zipf} distribution. Here, \code{pstr1} is also permitted to lie in the interval \code{[-dzipf(1, N, s) / (1 - dzipf(1, N, s)), 0]}. The resulting probability of a unit count is \emph{less than} the nominal Zipf value, and the use of \code{pstr1} to stand for the probability of a structural 1 loses its meaning. % % % When \code{pstr1} equals \code{-dzipf(1, N, s) / (1 - dzipf(1, N, s))} this corresponds to the 1-truncated Zipf distribution. } \seealso{ \code{\link{oizeta}}, \code{\link{Zipf}}, \code{\link{zipf}}, \code{\link{Oizeta}}. } \examples{ N <- 10; shape <- 1.5; pstr1 <- 0.3; x <- (-1):N (ii <- doizipf(x, N, shape, pstr1 = pstr1)) \dontrun{ x <- 0:10 par(mfrow = c(2, 1)) # One-Inflated zipf barplot(rbind(doizipf(x, N, shape, pstr1 = pstr1), dzipf(x, N, shape)), beside = TRUE, col = c("blue", "orange"), main = paste("OIZipf(", N, ", ", shape, ", pstr1 = ", pstr1, ") (blue) vs", " Zipf(", N, ", ", shape, ") (orange)", sep = ""), names.arg = as.character(x)) deflat.limit <- -dzipf(1, N, shape) / (1 - dzipf(1, N, shape)) newpstr1 <- round(deflat.limit, 3) + 0.001 # Inside but near the boundary barplot(rbind(doizipf(x, N, shape, pstr1 = newpstr1), dzipf(x, N, shape)), beside = TRUE, col = c("blue", "orange"), main = paste("ODZipf(", N, ", ", shape, ", pstr1 = ", newpstr1, ") (blue) vs", " Zipf(", N, ", ", shape, ") (orange)", sep = ""), names.arg = as.character(x)) } } \keyword{distribution} %qoizipf(p, shape, pstr1 = 0) %roizipf(n, shape, pstr1 = 0) % table(roizipf(100, shape, pstr1 = pstr1)) % round(doizipf(1:10, shape, pstr1 = pstr1) * 100) # Should be similar VGAM/man/setup.smart.Rd0000644000176200001440000000441313135276753014351 0ustar liggesusers\name{setup.smart} \alias{setup.smart} \title{ Smart Prediction Setup } \description{ Sets up smart prediction in one of two modes: \code{"write"} and \code{"read"}. } \usage{ setup.smart(mode.arg, smart.prediction = NULL, max.smart = 30) } \arguments{ \item{mode.arg}{ \code{mode.arg} must be \code{"write"} or \code{"read"}. If in \code{"read"} mode then \code{smart.prediction} must be assigned the data structure \code{.smart.prediction} that was created while fitting. This is stored in \code{object@smart.prediction} or \code{object$smart.prediction} where \code{object} is the name of the fitted object. } \item{smart.prediction}{ If in \code{"read"} mode then \code{smart.prediction} must be assigned the list of data dependent parameters, which is stored on the fitted object. Otherwise, \code{smart.prediction} is ignored. } \item{max.smart}{ \code{max.smart} is the initial length of the list \code{.smart.prediction}. It is not important because \code{.smart.prediction} is made larger if needed. }} \value{ Nothing is returned. } \section{Side Effects}{ In \code{"write"} mode \code{.smart.prediction} in \code{smartpredenv} is assigned an empty list with \code{max.smart} components. In \code{"read"} mode \code{.smart.prediction} in \code{smartpredenv} is assigned \code{smart.prediction}.
Then \code{.smart.prediction.counter} in \code{smartpredenv} is assigned the value 0, and \code{.smart.prediction.mode} and \code{.max.smart} are written to \code{smartpredenv} too. } \details{ This function is only required by programmers writing a modelling function such as \code{\link[stats]{lm}} and \code{\link[stats]{glm}}, or the prediction functions of such, e.g., \code{\link[stats]{predict.lm}}. The function \code{setup.smart} operates by mimicking the operations of a first-in first-out data structure, i.e., a \emph{queue}. } \seealso{ \code{\link[stats]{lm}}, \code{\link[stats]{predict.lm}}. } \examples{ \dontrun{ setup.smart("write") # Put at the beginning of lm } \dontrun{# Put at the beginning of predict.lm setup.smart("read", smart.prediction = object$smart.prediction) } } %\keyword{smart} \keyword{models} \keyword{regression} \keyword{programming} % Converted by Sd2Rd version 1.10. VGAM/man/expint3.Rd0000644000176200001440000000511013135276753013451 0ustar liggesusers\name{expint} \alias{expint} \alias{expexpint} \alias{expint.E1} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Exponential Integral and Variants } \description{ Computes the exponential integral \eqn{Ei(x)} for real values, as well as \eqn{\exp(-x) \times Ei(x)}{exp(-x) * Ei(x)} and \eqn{E_1(x)} and their derivatives (up to the 3rd derivative). } \usage{ expint(x, deriv = 0) expexpint(x, deriv = 0) expint.E1(x, deriv = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ Numeric. Ideally a vector of positive reals. } \item{deriv}{Integer. Either 0, 1, 2 or 3. } } \details{ The exponential integral function \eqn{Ei(x)} is the integral of \eqn{\exp(t) / t}{exp(t) / t} from 0 to \eqn{x}, for positive real \eqn{x}. The function \eqn{E_1(x)} is the integral of \eqn{\exp(-t) / t}{exp(-t) / t} from \eqn{x} to infinity, for positive real \eqn{x}. } \value{ Function \code{expint(x, deriv = n)} returns the \eqn{n}th derivative of \eqn{Ei(x)} (up to the 3rd), function \code{expexpint(x, deriv = n)} returns the \eqn{n}th derivative of \eqn{\exp(-x) \times Ei(x)}{exp(-x) * Ei(x)} (up to the 3rd), function \code{expint.E1(x, deriv = n)} returns the \eqn{n}th derivative of \eqn{E_1(x)} (up to the 3rd). } \references{ \url{http://www.netlib.org/specfun/ei}. } \author{ T. W. Yee has simply written a small wrapper function to call the NETLIB FORTRAN code. Xiangjie Xue modified the functions to calculate derivatives. Higher derivatives can actually be calculated---please let me know if you need them. } \section{Warning }{ These functions have not been tested thoroughly. } %% ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link[base:log]{log}}, \code{\link[base:log]{exp}}. } \examples{ \dontrun{ par(mfrow = c(2, 2)) curve(expint, 0.01, 2, xlim = c(0, 2), ylim = c(-3, 5), las = 1, col = "orange") abline(v = (-3):5, h = (-4):5, lwd = 2, lty = "dotted", col = "gray") abline(h = 0, v = 0, lty = "dashed", col = "blue") curve(expexpint, 0.01, 2, xlim = c(0, 2), ylim = c(-3, 2), las = 1, col = "orange") abline(v = (-3):2, h = (-4):5, lwd = 2, lty = "dotted", col = "gray") abline(h = 0, v = 0, lty = "dashed", col = "blue") curve(expint.E1, 0.01, 2, xlim = c(0, 2), ylim = c(0, 5), las = 1, col = "orange") abline(v = (-3):2, h = (-4):5, lwd = 2, lty = "dotted", col = "gray") abline(h = 0, v = 0, lty = "dashed", col = "blue") } } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory.
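As an added numerical sanity check (a sketch using only base R's \code{integrate()} and the definitions in the Details section above):

x <- 1.5
c(expint.E1(x),  # E_1(x) and its defining integral should agree:
  integrate(function(t) exp(-t) / t, lower = x, upper = Inf)$value)
all.equal(expexpint(x), exp(-x) * expint(x))  # TRUE, by definition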
\keyword{math} VGAM/man/model.framevlm.Rd0000644000176200001440000000440713135276753014777 0ustar liggesusers\name{model.framevlm} \alias{model.framevlm} \title{Construct the Model Frame of a VLM Object} \usage{ model.framevlm(object, setupsmart = TRUE, wrapupsmart = TRUE, \dots) } \arguments{ \item{object}{a model object from the \pkg{VGAM} \R package that inherits from a \emph{vector linear model} (VLM), e.g., a model of class \code{"vglm"}.} \item{\dots}{further arguments such as \code{data}, \code{na.action}, \code{subset}. See \code{\link[stats]{model.frame}} for more information on these. } \item{setupsmart, wrapupsmart}{ Logical. Arguments to determine whether to use smart prediction. } } \description{ This function returns a \code{\link{data.frame}} with the variables. It is applied to an object which inherits from class \code{"vlm"} (e.g., a fitted model of class \code{"vglm"}). } \details{Since \code{object} is an object which inherits from class \code{"vlm"} (e.g., a fitted model of class \code{"vglm"}), the method will either return the saved model frame used when fitting the model (if any, selected by argument \code{model = TRUE}) or pass the call used when fitting on to the default method. This code implements \emph{smart prediction} (see \code{\link{smartpred}}). } \value{ A \code{\link{data.frame}} containing the variables used in the \code{object} plus those specified in \code{\dots}. } \seealso{ \code{\link[stats]{model.frame}}, \code{\link{model.matrixvlm}}, \code{\link{predictvglm}}, \code{\link{smartpred}}. } \references{ Chambers, J. M. (1992) \emph{Data for models.} Chapter 3 of \emph{Statistical Models in S} eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. } \examples{ # Illustrates smart prediction pneumo <- transform(pneumo, let = log(exposure.time)) fit <- vglm(cbind(normal, mild, severe) ~ poly(c(scale(let)), 2), multinomial, data = pneumo, trace = TRUE, x = FALSE) class(fit) check1 <- head(model.frame(fit)) check1 check2 <- model.frame(fit, data = head(pneumo)) check2 all.equal(unlist(check1), unlist(check2)) # Should be TRUE q0 <- head(predict(fit)) q1 <- head(predict(fit, newdata = pneumo)) q2 <- predict(fit, newdata = head(pneumo)) all.equal(q0, q1) # Should be TRUE all.equal(q1, q2) # Should be TRUE } \keyword{models} VGAM/man/loge.Rd0000644000176200001440000000543513135276753013014 0ustar liggesusers\name{loge} \alias{loge} \alias{negloge} \alias{logneg} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Log Link Function, and Variants } \description{ Computes the log transformation, including its inverse and the first two derivatives. } \usage{ loge(theta, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) negloge(theta, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) logneg(theta, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{bvalue}{ See \code{\link{Links}}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The log link function is very commonly used for parameters that are positive. Here, all logarithms are natural logarithms, i.e., to base \eqn{e}. Numerical values of \code{theta} close to 0 or out of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}.
The function \code{loge} computes \eqn{\log(\theta)}{log(theta)} whereas \code{negloge} computes \eqn{-\log(\theta)=\log(1/\theta)}{-log(theta)=log(1/theta)}. The function \code{logneg} computes \eqn{\log(-\theta)}{log(-theta)}, hence is suitable for parameters that are negative, e.g., a trap-shy effect in \code{\link{posbernoulli.b}}. } \value{ The following concerns \code{loge}. For \code{deriv = 0}, the log of \code{theta}, i.e., \code{log(theta)} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{exp(theta)}. For \code{deriv = 1}, then the function returns \emph{d} \code{eta} / \emph{d} \code{theta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ This function is called \code{loge} to avoid conflict with the \code{\link[base:Log]{log}} function. Numerical instability may occur when \code{theta} is close to 0 unless \code{bvalue} is used. } \seealso{ \code{\link{Links}}, \code{\link{explink}}, \code{\link{logit}}, \code{\link{logc}}, \code{\link{loglog}}, \code{\link[base:Log]{log}}, \code{\link{logoff}}, \code{\link{lambertW}}, \code{\link{posbernoulli.b}}. } \examples{ \dontrun{ loge(seq(-0.2, 0.5, by = 0.1)) loge(seq(-0.2, 0.5, by = 0.1), bvalue = .Machine$double.xmin) negloge(seq(-0.2, 0.5, by = 0.1)) negloge(seq(-0.2, 0.5, by = 0.1), bvalue = .Machine$double.xmin) } logneg(seq(-0.5, -0.2, by = 0.1)) } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/paretoff.Rd0000644000176200001440000001463113135276753013675 0ustar liggesusers\name{paretoff} \alias{paretoff} \alias{truncpareto} %- Also NEED an '\alias' for EACH other topic documented here. \title{Pareto and Truncated Pareto Distribution Family Functions } \description{ Estimates one of the parameters of the Pareto(I) distribution by maximum likelihood estimation. Also includes the upper truncated Pareto(I) distribution. } \usage{ paretoff(scale = NULL, lshape = "loge") truncpareto(lower, upper, lshape = "loge", ishape = NULL, imethod = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape}{ Parameter link function applied to the parameter \eqn{k}. See \code{\link{Links}} for more choices. A log link is the default because \eqn{k} is positive. } \item{scale}{ Numeric. The parameter \eqn{\alpha}{alpha} below. If the user inputs a number then it is assumed known with this value. The default means it is estimated by maximum likelihood estimation, which means \code{min(y)} is used, where \code{y} is the response vector. } \item{lower, upper}{ Numeric. Lower and upper limits for the truncated Pareto distribution. Each must be positive and of length 1. They are called \eqn{\alpha}{alpha} and \eqn{U} below. } \item{ishape}{ Numeric. Optional initial value for the shape parameter. A \code{NULL} means a value is obtained internally. If failure to converge occurs try specifying a value, e.g., 1 or 2. } \item{imethod}{ See \code{\link{CommonVGAMffArguments}} for information. If failure to converge occurs then try specifying a value for \code{ishape}. } } \details{ A random variable \eqn{Y} has a Pareto distribution if \deqn{P[Y>y] = C / y^{k}}{% P[Y>y] = C / y^k} for some positive \eqn{k} and \eqn{C}. This model is important in many applications due to the power law probability tail, especially for large values of \eqn{y}. 
The Pareto distribution, which is used a lot in economics, has a probability density function that can be written \deqn{f(y;\alpha,k) = k \alpha^k / y^{k+1}}{% f(y;alpha,k) = k * alpha^k / y^(k+1)} for \eqn{0 < \alpha < y}{0 < alpha < y} and \eqn{0 < k}{0 < k}. The mean of \eqn{Y} is \eqn{\alpha k/(k-1)}{alpha k/(k-1)} provided \eqn{k > 1}. Its variance is \eqn{\alpha^2 k /((k-1)^2 (k-2))}{alpha^2 k /((k-1)^2 (k-2))} provided \eqn{k > 2}. The upper truncated Pareto distribution has a probability density function that can be written \deqn{f(y) = k \alpha^k / [y^{k+1} (1-(\alpha/U)^k)]}{% f(y) = k * alpha^k / [y^(k+1) (1-(alpha/U)^k)]} for \eqn{0 < \alpha < y < U < \infty}{0 < alpha < y < U < Inf} and \eqn{k>0}. Possibly, better names for \eqn{k} are the \emph{index} and \emph{tail} parameters. Here, \eqn{\alpha}{alpha} and \eqn{U} are known. The mean of \eqn{Y} is \eqn{k \alpha^k (U^{1-k}-\alpha^{1-k}) / [(1-k)(1-(\alpha/U)^k)]}{ k * alpha^k * (U^(1-k)-alpha^(1-k)) / ((1-k) * (1-(alpha/U)^k))}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011) \emph{Statistical Distributions}, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition. Aban, I. B., Meerschaert, M. M. and Panorska, A. K. (2006) Parameter estimation for the truncated Pareto distribution, \emph{Journal of the American Statistical Association}, \bold{101}(473), 270--277. } \author{ T. W. Yee } \note{ Outside of economics, the Pareto distribution is known as the Bradford distribution. For \code{paretoff}, if the estimate of \eqn{k} is less than or equal to unity then the fitted values will be \code{NA}s. Also, \code{paretoff} fits the Pareto(I) distribution. See \code{\link{paretoIV}} for the more general Pareto(IV/III/II) distributions, but there is a slight change in notation: \eqn{s = k} and \eqn{b=\alpha}{b = alpha}. In some applications the Pareto law is truncated by a natural upper bound on the probability tail. The upper truncated Pareto distribution has three parameters (called \eqn{\alpha}{alpha}, \eqn{U} and \eqn{k} here) but the family function \code{truncpareto()} estimates only \eqn{k}. With known lower and upper limits, the ML estimator of \eqn{k} has the usual properties of MLEs. Aban et al. (2006) discuss other inferential details. } \section{Warning }{ The usual or unbounded Pareto distribution has two parameters (called \eqn{\alpha}{alpha} and \eqn{k} here) but the family function \code{paretoff} estimates only \eqn{k} using iteratively reweighted least squares. The MLE of the \eqn{\alpha}{alpha} parameter lies on the boundary and is \code{min(y)} where \code{y} is the response. Consequently, using the default argument values, the standard errors are incorrect when one does a \code{summary} on the fitted object. If the user inputs a value for \eqn{\alpha}{alpha} (via the \code{scale} argument) then it is assumed known with this value and then \code{summary} on the fitted object should be correct. Numerical problems may occur for small \eqn{k}, e.g., \eqn{k < 1}. } \seealso{ \code{\link{Pareto}}, \code{\link{Truncpareto}}, \code{\link{paretoIV}}, \code{\link{gpd}}, \code{\link{benini1}}. } \examples{ alpha <- 2; kay <- exp(3) pdata <- data.frame(y = rpareto(n = 1000, scale = alpha, shape = kay)) fit <- vglm(y ~ 1, paretoff, data = pdata, trace = TRUE) fit@extra # The estimate of alpha is here head(fitted(fit)) with(pdata, mean(y)) coef(fit, matrix = TRUE) summary(fit) # Standard errors are incorrect!!
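# Added check (hedged; not part of the original example): for k > 1 the
# theoretical mean alpha * k / (k - 1) should be close to the sample mean
# computed above with with(pdata, mean(y)).
alpha * kay / (kay - 1)
with(pdata, min(y))  # The boundary MLE of alpha noted in the Warning section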
# Here, alpha is assumed known fit2 <- vglm(y ~ 1, paretoff(scale = alpha), data = pdata, trace = TRUE) fit2@extra # alpha stored here head(fitted(fit2)) coef(fit2, matrix = TRUE) summary(fit2) # Standard errors are okay # Upper truncated Pareto distribution lower <- 2; upper <- 8; kay <- exp(2) pdata3 <- data.frame(y = rtruncpareto(n = 100, lower = lower, upper = upper, shape = kay)) fit3 <- vglm(y ~ 1, truncpareto(lower, upper), data = pdata3, trace = TRUE) coef(fit3, matrix = TRUE) c(fit3@misc$lower, fit3@misc$upper) } \keyword{models} \keyword{regression} % Package lmomco fits generalized pareto (three parameter) using % method of L-moments. VGAM/man/nbolf.Rd0000644000176200001440000000715213135276753013167 0ustar liggesusers\name{nbolf} \alias{nbolf} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Negative Binomial-Ordinal Link Function } \description{ Computes the negative binomial-ordinal transformation, including its inverse and the first two derivatives. } \usage{ nbolf(theta, cutpoint = NULL, k = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{cutpoint, k}{ Here, \code{k} is the \eqn{k} parameter associated with the negative binomial distribution; see \code{\link{negbinomial}}. The cutpoints should be non-negative integers. If \code{nbolf()} is used as the link function in \code{\link{cumulative}} then one should choose \code{reverse = TRUE, parallel = TRUE}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The negative binomial-ordinal link function (NBOLF) can be applied to a parameter lying in the unit interval. Its purpose is to link cumulative probabilities associated with an ordinal response coming from an underlying negative binomial distribution. See \code{\link{Links}} for general information about \pkg{VGAM} link functions. } \value{ See Yee (2017) for details. } \references{ Yee, T. W. (2017) \emph{Ordinal ordination with normalizing link functions for count data}, (in preparation). } \author{ Thomas W. Yee } \note{ Numerical values of \code{theta} too close to 0 or 1 or out of range result in large positive or negative values, or maybe 0 depending on the arguments. Although measures have been taken to handle cases where \code{theta} is too close to 1 or 0, numerical instabilities may still arise. In terms of the threshold approach with cumulative probabilities for an ordinal response this link function corresponds to the negative binomial distribution (see \code{\link{negbinomial}}) that has been recorded as an ordinal response using known cutpoints. } \section{Warning }{ Prediction may not work on \code{\link{vglm}} or \code{\link{vgam}} etc. objects if this link function is used. } \seealso{ \code{\link{Links}}, \code{\link{negbinomial}}, \code{\link{polf}}, \code{\link{golf}}, \code{nbolf2}, \code{\link{cumulative}}, \code{\link{CommonVGAMffArguments}}. } \examples{ \dontrun{ nbolf("p", cutpoint = 2, k = 1, short = FALSE) nbolf("p", cutpoint = 2, k = 1, tag = TRUE) p <- seq(0.02, 0.98, by = 0.01) y <- nbolf(p,cutpoint = 2, k = 1) y. 
y. <- nbolf(p, cutpoint = 2, k = 1, deriv = 1) max(abs(nbolf(y, cutpoint = 2, k = 1, inverse = TRUE) - p)) # Should be 0 # \dontrun{ par(mfrow = c(2, 1), las = 1) #plot(p, y, type = "l", col = "blue", main = "nbolf()") #abline(h = 0, v = 0.5, col = "red", lty = "dashed") # #plot(p, y., type = "l", col = "blue", # main = "(Reciprocal of) first NBOLF derivative") } # Another example nn <- 1000 x2 <- sort(runif(nn)) x3 <- runif(nn) mymu <- exp(3 + 1 * x2 - 2 * x3) k <- 4 y1 <- rnbinom(nn, mu = mymu, size = k) cutpoints <- c(-Inf, 10, 20, Inf) cuty <- Cut(y1, breaks = cutpoints) # \dontrun{ plot(x2, x3, col = cuty, pch = as.character(cuty)) } table(cuty) / sum(table(cuty)) fit <- vglm(cuty ~ x2 + x3, trace = TRUE, cumulative(reverse = TRUE, multiple.responses = TRUE, parallel = TRUE, link = nbolf(cutpoint = cutpoints[2:3], k = k))) head(depvar(fit)) head(fitted(fit)) head(predict(fit)) coef(fit) coef(fit, matrix = TRUE) constraints(fit) fit@misc } } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/V1.Rd0000644000176200001440000000444613135276753012350 0ustar liggesusers\name{V1} \alias{V1} \docType{data} \title{ V1 Flying-Bombs Hits in London } \description{ A small count data set. During WWII, V1 flying-bombs were fired from sites in France (Pas-de-Calais) and on the Dutch coast towards London. The number of hits per square grid around London was recorded. } \usage{ data(V1) } \format{ A data frame with the following variables. \describe{ \item{hits}{ Values between 0 and 4, and 7. The 7 is imputed from the paper (it was recorded as "5 and over"). } \item{ofreq}{ Observed frequency, i.e., the number of grids with that many hits. } } } \details{ The data concern 576 square grids, each of 0.25 square km, about south London. The area selected comprises 144 square km over which the basic probability function of the distribution was very nearly constant. V1s, which were one type of flying-bomb, were ``Vergeltungswaffen'' (vengeance weapons) fired at London during the summer of 1944. The V1s were informally called Buzz Bombs or Doodlebugs, and they were pulse-jet-powered with a warhead of 850 kg of explosives. Over 9500 were launched at London, and many were shot down by artillery and the RAF. Over the period considered the total number of bombs within the area was 537. It was asserted that the bombs tended to be grouped in clusters. However, a basic Poisson analysis shows this is not the case. Their guidance system being rather primitive, the data are consistent with a Poisson distribution, i.e., the hits were effectively random. } \source{ Clarke, R. D. (1946). An application of the Poisson distribution. \emph{Journal of the Institute of Actuaries}, \bold{72}(3), 481. } \references{ Feller, W. (1970). \emph{An Introduction to Probability Theory and Its Applications}, Vol. 1, Third Edition. John Wiley and Sons: New York, USA. % p.160--1 } \seealso{ \code{\link[VGAM]{poissonff}}. } \examples{ V1 mean(with(V1, rep(hits, times = ofreq))) var(with(V1, rep(hits, times = ofreq))) sum(with(V1, rep(hits, times = ofreq))) \dontrun{ barplot(with(V1, ofreq), names.arg = as.character(with(V1, hits)), main = "London V1 buzz bomb hits", col = "lightblue", las = 1, ylab = "Frequency", xlab = "Hits") } } \keyword{datasets} % % VGAM/man/frechet.Rd0000644000176200001440000001202713135276753013504 0ustar liggesusers\name{frechet} \alias{frechet} %\alias{frechet2} %\alias{frechet3} %- Also NEED an '\alias' for EACH other topic documented here.
\title{ Frechet Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter Frechet distribution. % and 3-parameter } \usage{ frechet(location = 0, lscale = "loge", lshape = logoff(offset = -2), iscale = NULL, ishape = NULL, nsimEIM = 250, zero = NULL) } %frechet3(anchor = NULL, ldifference = "loge", lscale = "loge", % lshape = "loglog", % ilocation = NULL, iscale = NULL, ishape = NULL, % zero = NULL, effpos = .Machine$double.eps^0.75) %- maybe also 'usage' for other objects documented here. \arguments{ \item{location}{ Numeric. Location parameter. It is called \eqn{a} below. } \item{lscale, lshape}{ Link functions for the parameters; see \code{\link{Links}} for more choices. } \item{iscale, ishape, zero, nsimEIM}{ See \code{\link{CommonVGAMffArguments}} for information. } % \item{edifference}{ % % Extra argument for the respective links. % See \code{earg} in \code{\link{Links}} for general information. % } % \item{anchor}{ % An ``anchor'' point for estimating the location parameter. This must % be a value no greater than \code{min(y)} where \code{y} is the response. % The location parameter is \eqn{A - D} where % \eqn{A} is the anchor, % \eqn{D} is the ``difference'' (default is to make this positive). % The default value of \code{anchor} means \code{min(y)} is chosen. % % } % \item{ldifference}{ % Parameter link function for the difference \eqn{D} between the anchor % point and the location parameter estimate. % The default keeps this difference positive so that numerical % problems are less likely to occur. % } % \item{ilocation}{ % Optional initial value for the location parameter. % A good choice can speed up the convergence rate markedly. % A \code{NULL} means it is chosen internally. % } } \details{ The (3-parameter) Frechet distribution has a density function that can be written \deqn{f(y) = \frac{sb}{(y-a)^2} [b/(y-a)]^{s-1} \, \exp[-(b/(y-a))^s] }{% f(y) = ((s*b) / (y-a)^2) * exp[-(b/(y-a))^s] * [b/(y-a)]^(s-1)} for \eqn{y > a} and scale parameter \eqn{b > 0}. The positive shape parameter is \eqn{s}. The cumulative distribution function is \deqn{F(y) = \exp[-(b/(y-a))^s]. }{% F(y) = exp[-(b/(y-a))^s].} The mean of \eqn{Y} is \eqn{a + b \Gamma(1-1/s)}{a + b*gamma(1-1/s)} for \eqn{s > 1} (these are returned as the fitted values). The variance of \eqn{Y} is \eqn{b^2 [ \Gamma(1-2/s) - \Gamma^2(1-1/s)]}{b^2 * [gamma(1 - 2/s) - gamma(1 - 1/s)^2]} for \eqn{s > 2}. Family \code{frechet} has \eqn{a} known, and \eqn{\log(b)}{log(b)} and \eqn{\log(s - 2)}{log(s - 2)} are the default linear/additive predictors. The working weights are estimated by simulated Fisher scoring. % Note that the \code{\link{loglog}} link ensures \eqn{s > 1}. % whereas \code{frechet3} estimates it. % Estimating \eqn{a} well requires a lot of data and % a good choice of \code{ilocation} will help speed up convergence. % For \code{frechet3} the default linear/additive predictors are % \eqn{\log(D)}{log(D)}, % It would be great if the first linear/additive predictor was a direct % function of the location parameter, but this can run the risk that % the estimate is out of range (i.e., greater than \code{min(y)}). } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Castillo, E., Hadi, A. S., Balakrishnan, N. and Sarabia, J. S. (2005) \emph{Extreme Value and Related Models with Applications in Engineering and Science}, Hoboken, NJ, USA: Wiley-Interscience.
} \author{ T. W. Yee } \section{Warning}{ % Convergence for \code{frechet3} can be very slow, especially if the % initial value for the location parameter is poor. Setting something % like \code{maxit = 200, trace = TRUE} is a good idea. Family function \code{frechet} may fail for low values of the shape parameter, e.g., near 2 or lower. } %\note{ % Family function \code{frechet3} uses % the BFGS quasi-Newton update formula for the % working weight matrices. Consequently the estimated variance-covariance % matrix may be inaccurate or simply wrong! The standard errors must be % therefore treated with caution; these are computed in functions such % as \code{vcov()} and \code{summary()}. % If \code{fit} is a \code{frechet3} fit then \code{fit@extra$location} % is the final estimate of the location parameter, and % \code{fit@extra$LHSanchor} is the anchor point. %} \seealso{ \code{\link{rfrechet}}, \code{\link{gev}}. } \examples{ \dontrun{ set.seed(123) fdata <- data.frame(y1 = rfrechet(nn <- 1000, shape = 2 + exp(1))) with(fdata, hist(y1)) fit2 <- vglm(y1 ~ 1, frechet, data = fdata, trace = TRUE) coef(fit2, matrix = TRUE) Coef(fit2) head(fitted(fit2)) with(fdata, mean(y1)) head(weights(fit2, type = "working")) vcov(fit2) } } \keyword{models} \keyword{regression} VGAM/man/cauchit.Rd0000644000176200001440000001053313135276753013504 0ustar liggesusers\name{cauchit} \alias{cauchit} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Cauchit Link Function } \description{ Computes the cauchit (tangent) link transformation, including its inverse and the first two derivatives. } \usage{ cauchit(theta, bvalue = .Machine$double.eps, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{bvalue}{ See \code{\link{Links}}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ This is an alternative link function for parameters that lie in the unit interval. This type of link bears the same relation to the Cauchy distribution as the probit link bears to the Gaussian. One characteristic of this link function is that its tails are heavier than those of the other links (see examples below). Numerical values of \code{theta} close to 0 or 1 or out of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. } \value{ For \code{deriv = 0}, the tangent of \code{theta}, i.e., \code{tan(pi * (theta-0.5))} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{0.5 + atan(theta)/pi}. For \code{deriv = 1}, the function returns \emph{d} \code{eta} / \emph{d} \code{theta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ Numerical instability may occur when \code{theta} is close to 1 or 0. One way of overcoming this is to use \code{bvalue}. In terms of the threshold approach with cumulative probabilities for an ordinal response, this link function corresponds to the Cauchy distribution (see \code{\link{cauchy1}}). } \seealso{ \code{\link{logit}}, \code{\link{probit}}, \code{\link{cloglog}}, \code{\link{loge}}, \code{\link{cauchy}}, \code{\link{cauchy1}}. } \examples{ p <- seq(0.01, 0.99, by = 0.01) cauchit(p)
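# Added check against the closed form stated in the Value section:
# cauchit(p) should equal tan(pi * (p - 0.5)).
max(abs(cauchit(p) - tan(pi * (p - 0.5))))  # Should be (essentially) 0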
max(abs(cauchit(cauchit(p), inverse = TRUE) - p)) # Should be 0 p <- c(seq(-0.02, 0.02, by = 0.01), seq(0.97, 1.02, by = 0.01)) cauchit(p) # Has no NAs \dontrun{ par(mfrow = c(2, 2), lwd = (mylwd <- 2)) y <- seq(-4, 4, length = 100) p <- seq(0.01, 0.99, by = 0.01) for (d in 0:1) { matplot(p, cbind(logit(p, deriv = d), probit(p, deriv = d)), type = "n", col = "purple", ylab = "transformation", las = 1, main = if (d == 0) "Some probability link functions" else "First derivative") lines(p, logit(p, deriv = d), col = "limegreen") lines(p, probit(p, deriv = d), col = "purple") lines(p, cloglog(p, deriv = d), col = "chocolate") lines(p, cauchit(p, deriv = d), col = "tan") if (d == 0) { abline(v = 0.5, h = 0, lty = "dashed") legend(0, 4.5, c("logit", "probit", "cloglog", "cauchit"), lwd = mylwd, col = c("limegreen", "purple", "chocolate", "tan")) } else abline(v = 0.5, lty = "dashed") } for (d in 0) { matplot(y, cbind(logit(y, deriv = d, inverse = TRUE), probit(y, deriv = d, inverse = TRUE)), type = "n", col = "purple", xlab = "transformation", ylab = "p", main = if (d == 0) "Some inverse probability link functions" else "First derivative", las = 1) lines(y, logit(y, deriv = d, inverse = TRUE), col = "limegreen") lines(y, probit(y, deriv = d, inverse = TRUE), col = "purple") lines(y, cloglog(y, deriv = d, inverse = TRUE), col = "chocolate") lines(y, cauchit(y, deriv = d, inverse = TRUE), col = "tan") if (d == 0) { abline(h = 0.5, v = 0, lty = "dashed") legend(-4, 1, c("logit", "probit", "cloglog", "cauchit"), lwd = mylwd, col = c("limegreen", "purple", "chocolate", "tan")) } } par(lwd = 1) } } \keyword{math} \keyword{models} \keyword{regression} %plot(y, logit(y, inverse = TRUE), type = "l", col = "limegreen", % xlab = "transformation", ylab = "p", % lwd=2, las=1, main = "Some inverse probability link functions") %lines(y, probit(y, inverse = TRUE), col = "purple", lwd=2) %lines(y, cloglog(y, inverse = TRUE), col = "chocolate", lwd=2) %abline(h=0.5, v = 0, lty = "dashed") VGAM/man/cratio.Rd0000644000176200001440000000771513135276753013345 0ustar liggesusers\name{cratio} \alias{cratio} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Ordinal Regression with Continuation Ratios } \description{ Fits a continuation ratio logit/probit/cloglog/cauchit/... regression model to a (preferably ordered) factor response. } \usage{ cratio(link = "logit", parallel = FALSE, reverse = FALSE, zero = NULL, whitespace = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to the \eqn{M} continuation ratio probabilities. See \code{\link{Links}} for more choices. } \item{parallel}{ A logical, or formula specifying which terms have equal/unequal coefficients. } \item{reverse}{ Logical. By default, the continuation ratios used are \eqn{\eta_j = logit(P[Y>j|Y \geq j])}{eta_j = logit(P[Y>j|Y>=j])} for \eqn{j=1,\dots,M}. If \code{reverse} is \code{TRUE}, then \eqn{\eta_j = logit(P[Y<j+1|Y \leq j+1])}{eta_j = logit(P[Y<j+1|Y<=j+1])} will be used. } \item{zero, whitespace}{ See \code{\link{CommonVGAMffArguments}} for information. } } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Agresti, A. (2013) \emph{Categorical Data Analysis}, 3rd ed. Hoboken, NJ, USA: Wiley. McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. Yee, T. W. (2010) The \pkg{VGAM} package for categorical data analysis.
\emph{Journal of Statistical Software}, \bold{32}(10), 1--34. \url{http://www.jstatsoft.org/v32/i10/}. } \author{ Thomas W. Yee } \note{ The response should be either a matrix of counts (with row sums that are all positive), or a factor. In both cases, the \code{y} slot returned by \code{vglm}/\code{vgam}/\code{rrvglm} is the matrix of counts. For a nominal (unordered) factor response, the multinomial logit model (\code{\link{multinomial}}) is more appropriate. Here is an example of the usage of the \code{parallel} argument. If there are covariates \code{x1}, \code{x2} and \code{x3}, then \code{parallel = TRUE ~ x1 + x2 -1} and \code{parallel = FALSE ~ x3} are equivalent. This would constrain the regression coefficients for \code{x1} and \code{x2} to be equal; those of the intercepts and \code{x3} would be different. } \section{Warning }{ No check is made to verify that the response is ordinal if the response is a matrix; see \code{\link[base:factor]{ordered}}. } \seealso{ \code{\link{sratio}}, \code{\link{acat}}, \code{\link{cumulative}}, \code{\link{multinomial}}, \code{\link{margeff}}, \code{\link{pneumo}}, \code{\link{logit}}, \code{\link{probit}}, \code{\link{cloglog}}, \code{\link{cauchit}}. } \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) (fit <- vglm(cbind(normal, mild, severe) ~ let, cratio(parallel = TRUE), data = pneumo)) coef(fit, matrix = TRUE) constraints(fit) predict(fit) predict(fit, untransform = TRUE) margeff(fit) } \keyword{models} \keyword{regression} %Simonoff, J. S. (2003) %\emph{Analyzing Categorical Data}, %New York: Springer-Verlag. VGAM/man/corbet.Rd0000644000176200001440000000254513135276753013346 0ustar liggesusers\name{corbet} \alias{corbet} \docType{data} \title{ Corbet's Butterfly Data } \description{ About 3300 individual butterflies were caught in Malaya by the naturalist Corbet. They were classified into about 500 species. } \usage{data(corbet)} \format{ A data frame with 24 observations on the following 2 variables. \describe{ \item{\code{species}}{Number of species. } \item{\code{ofreq}}{Observed frequency of individual butterflies of that species. } } } \details{ In the early 1940s Corbet spent two years trapping butterflies in Malaya. Of interest was the total number of species. Some species were so rare (e.g., 118 species had only one specimen) that it was thought likely that there were many unknown species. } \references{ Fisher, R. A., Corbet, A. S. and Williams, C. B. (1943) The relation between the number of species and the number of individuals in a random sample of an animal population. \emph{Journal of Animal Ecology}, \bold{12}, 42--58. } \examples{ summary(corbet) } \keyword{datasets} VGAM/man/rec.normal.Rd0000644000176200001440000000620013135276753014120 0ustar liggesusers\name{rec.normal} \alias{rec.normal} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Upper Record Values from a Univariate Normal Distribution } \description{ Maximum likelihood estimation of the two parameters of a univariate normal distribution when the observations are upper record values.
} \usage{ rec.normal(lmean = "identitylink", lsd = "loge", imean = NULL, isd = NULL, imethod = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmean, lsd}{ Link functions applied to the mean and sd parameters. See \code{\link{Links}} for more choices. } \item{imean, isd}{ Numeric. Optional initial values for the mean and sd. The default value \code{NULL} means they are computed internally, with the help of \code{imethod}. } \item{imethod}{ Integer, either 1, 2 or 3. Initialization method; three algorithms are implemented. Choose another value if convergence fails, or use \code{imean} and/or \code{isd}. } \item{zero}{ Can be an integer vector, containing the value 1 or 2. If so, the mean or standard deviation, respectively, is modelled as an intercept only. Usually, setting \code{zero = 2} will be used, if used at all. The default value \code{NULL} means both linear/additive predictors are modelled as functions of the explanatory variables. See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The response must be a vector or one-column matrix with strictly increasing values. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Arnold, B. C., Balakrishnan, N. and Nagaraja, H. N. (1998) \emph{Records}, New York: John Wiley & Sons. } \author{ T. W. Yee } \note{ This family function tries to solve a difficult problem, and the larger the data set the better. Convergence failure can commonly occur, and convergence may be very slow, so set \code{maxit = 200, trace = TRUE}, say. Inputting good initial values is advised. This family function uses the BFGS quasi-Newton update formula for the working weight matrices. Consequently the estimated variance-covariance matrix may be inaccurate or simply wrong! The standard errors must therefore be treated with caution; these are computed in functions such as \code{vcov()} and \code{summary()}. } \seealso{ \code{\link{uninormal}}, \code{\link{double.cens.normal}}. } \examples{ nn <- 10000; mymean <- 100 # First value is reference value or trivial record Rdata <- data.frame(rawy = c(mymean, rnorm(nn, mean = mymean, sd = exp(3)))) # Keep only observations that are records: rdata <- data.frame(y = unique(cummax(with(Rdata, rawy)))) fit <- vglm(y ~ 1, rec.normal, data = rdata, trace = TRUE, maxit = 200) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} %# Keep only observations that are records %delete = c(FALSE, rep(TRUE, len = n)) %for (i in 2:length(rawy)) % if (rawy[i] > max(rawy[1:(i-1)])) delete[i] = FALSE %(y = rawy[!delete]) VGAM/man/loglinb3.Rd0000644000176200001440000000607413135276753013602 0ustar liggesusers\name{loglinb3} \alias{loglinb3} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Loglinear Model for Three Binary Responses } \description{ Fits a loglinear model to three binary responses. } \usage{ loglinb3(exchangeable = FALSE, zero = c("u12", "u13", "u23")) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{exchangeable}{ Logical. If \code{TRUE}, the three marginal probabilities are constrained to be equal. } \item{zero}{ Which linear/additive predictors are modelled as intercept-only? A \code{NULL} means none. See \code{\link{CommonVGAMffArguments}} for further information.
} } \details{ The model is \eqn{P(Y_1=y_1,Y_2=y_2,Y_3=y_3) =}{P(Y1=y1,Y2=y2,Y3=y3) =} \deqn{\exp(u_0+u_1 y_1+u_2 y_2+u_3 y_3+u_{12} y_1 y_2+ u_{13} y_1 y_3+u_{23} y_2 y_3)}{% exp(u0 + u1*y1 + u2*y2 + u3*y3 + u12*y1*y2 + u13*y1*y3+ u23*y2*y3)} where \eqn{y_1}{y1}, \eqn{y_2}{y2} and \eqn{y_3}{y3} are 0 or 1, and the parameters are \eqn{u_1}{u1}, \eqn{u_2}{u2}, \eqn{u_3}{u3}, \eqn{u_{12}}{u12}, \eqn{u_{13}}{u13}, \eqn{u_{23}}{u23}. The normalizing parameter \eqn{u_0}{u0} can be expressed as a function of the other parameters. Note that a third-order association parameter, \eqn{u_{123}}{u123} for the product \eqn{y_1 y_2 y_3}{y1*y2*y3}, is assumed to be zero for this family function. The linear/additive predictors are \eqn{(\eta_1,\eta_2,\ldots,\eta_6)^T = (u_1,u_2,u_3,u_{12},u_{13},u_{23})^T}{(eta1,eta2,...,eta6) = (u1,u2,u3,u12,u13,u23)}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. When fitted, the \code{fitted.values} slot of the object contains the eight joint probabilities, labelled as \eqn{(Y_1,Y_2,Y_3)}{(Y1,Y2,Y3)} = (0,0,0), (0,0,1), (0,1,0), (0,1,1), (1,0,0), (1,0,1), (1,1,0), (1,1,1), respectively. } \references{ Yee, T. W. and Wild, C. J. (2001) Discussion to: ``Smoothing spline ANOVA for multivariate Bernoulli observations, with application to ophthalmology data (with discussion)'' by Gao, F., Wahba, G., Klein, R., Klein, B. \emph{Journal of the American Statistical Association}, \bold{96}, 127--160. McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. %Documentation accompanying the \pkg{VGAM} package at %\url{http://www.stat.auckland.ac.nz/~yee} %contains further information and examples. } \author{ Thomas W. Yee } \note{ The response must be a three-column matrix of ones and zeros only. Note that each of the 8 combinations of the multivariate response needs to appear in the data set; therefore data sets will need to be large in order for this family function to work. } \seealso{ \code{\link{loglinb2}}, \code{\link{hunua}}. } \examples{ fit <- vglm(cbind(cyadea, beitaw, kniexc) ~ altitude, loglinb3, data = hunua) coef(fit, matrix = TRUE) head(fitted(fit)) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/oizeta.Rd0000644000176200001440000000453213135276753013361 0ustar liggesusers\name{oizeta} \alias{oizeta} %- Also NEED an '\alias' for EACH other topic documented here. \title{ One-inflated Zeta Distribution Family Function } \description{ Fits a 1-inflated zeta distribution. } \usage{ oizeta(lpstr1 = "logit", lshape = "loge", type.fitted = c("mean", "shape", "pobs1", "pstr1", "onempstr1"), ishape = NULL, gpstr1 = ppoints(8), gshape = exp((-3:3) / 4), zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lpstr1, lshape}{ For \code{lpstr1}: the same idea as \code{\link{zipoisson}} except it applies to a structural 1. } \item{gpstr1, gshape, ishape}{ For initial values. See \code{\link{CommonVGAMffArguments}} for information. } \item{type.fitted, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The 1-inflated zeta distribution is a mixture distribution of the zeta distribution with some probability of obtaining a (structural) 1. Thus there are two sources for obtaining the value 1.
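That is, if \eqn{\phi}{phi} denotes the probability of a structural 1 and \eqn{f(y;s)}{f(y;s)} denotes the zeta probability function with shape parameter \eqn{s}, then the mixture takes the usual one-inflated form \deqn{P(Y=1) = \phi + (1-\phi) f(1;s),}{P(Y=1) = phi + (1-phi) * f(1;s),} and \eqn{P(Y=y) = (1-\phi) f(y;s)}{P(Y=y) = (1-phi) * f(y;s)} for \eqn{y=2,3,\ldots}{y=2,3,...}.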
This distribution is written here in a way that retains a similar notation to the zero-inflated Poisson, i.e., the probability \eqn{P[Y=1]} involves another parameter \eqn{\phi}{phi}. See \code{\link{zipoisson}}. This family function can handle multiple responses. } \section{Warning }{ Under- or over-flow may occur if the data is ill-conditioned. Lots of data is needed to estimate the parameters accurately. Usually the shape parameter is best modelled as intercept-only. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } %\references{ %} \author{ Thomas W. Yee } %\note{ %} \seealso{ \code{\link{Oizeta}}, \code{\link{zetaff}}, \code{\link{oazeta}}, \code{\link{otzeta}}, \code{\link{diffzeta}}, \code{\link{zeta}}, \code{\link{Oizipf}}. } \examples{ \dontrun{ odata <- data.frame(x2 = runif(nn <- 1000)) # Artificial data odata <- transform(odata, pstr1 = logit(-1 + x2, inverse = TRUE), shape = exp(-0.5)) odata <- transform(odata, y1 = roizeta(nn, shape, pstr1 = pstr1)) with(odata, table(y1)) fit1 <- vglm(y1 ~ x2, oizeta(zero = "shape"), data = odata, trace = TRUE) coef(fit1, matrix = TRUE) } } \keyword{models} \keyword{regression} VGAM/man/nakagami.Rd0000644000176200001440000000710413135276753013634 0ustar liggesusers\name{nakagami} \alias{nakagami} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Nakagami Distribution Family Function } \description{ Maximum likelihood estimation of the two parameters of the Nakagami distribution. } \usage{ nakagami(lscale = "loge", lshape = "loge", iscale = 1, ishape = NULL, nowarning = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{nowarning}{ Logical. Suppress a warning? } \item{lscale, lshape}{ Parameter link functions applied to the \emph{scale} and \emph{shape} parameters. Log links ensure they are positive. See \code{\link{Links}} for more choices and information. } \item{iscale, ishape}{ Optional initial values for the scale and shape parameters. For \code{ishape}, a \code{NULL} value means it is obtained in the \code{initialize} slot based on the value of \code{iscale}. For \code{iscale}, assigning a \code{NULL} means a value is obtained in the \code{initialize} slot; however, setting another numerical value is recommended if convergence fails or is too slow. } } \details{ The Nakagami distribution, which is useful for modelling wireless systems such as radio links, can be written \deqn{f(y) = 2 (shape/scale)^{shape} y^{2 \times shape-1} \exp(-shape \times y^2/scale) / \Gamma(shape)}{% f(y) = 2 * (shape/scale)^shape * y^(2*shape-1) * exp(-shape*y^2/scale) / gamma(shape)} for \eqn{y > 0}, \eqn{shape > 0}, \eqn{scale > 0}. The mean of \eqn{Y} is \eqn{\sqrt{scale/shape} \times \Gamma(shape+0.5) / \Gamma(shape)}{sqrt(scale/shape) * gamma(shape+0.5) / gamma(shape)} and these are returned as the fitted values. By default, the linear/additive predictors are \eqn{\eta_1=\log(scale)}{eta1=log(scale)} and \eqn{\eta_2=\log(shape)}{eta2=log(shape)}. Fisher scoring is implemented. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Nakagami, M. (1960) The \emph{m}-distribution: a general formula of intensity distribution of rapid fading, pp.3--36 in: \emph{Statistical Methods in Radio Wave Propagation}. W. C.
Hoffman, Ed., New York: Pergamon. } \author{ T. W. Yee } \note{ The Nakagami distribution is also known as the Nakagami-\emph{m} distribution, where \eqn{m=shape} here. Special cases: \eqn{m=0.5} is a one-sided Gaussian distribution and \eqn{m=1} is a Rayleigh distribution. The second moment is \eqn{E(Y^2) = scale}{E(Y^2) = scale}. If \eqn{Y} has a Nakagami distribution with parameters \emph{shape} and \emph{scale} then \eqn{Y^2} has a gamma distribution with shape parameter \emph{shape} and scale parameter \emph{scale/shape}. } \seealso{ \code{\link{rnaka}}, \code{\link{gamma2}}, \code{\link{rayleigh}}. } \examples{ nn <- 1000; shape <- exp(0); Scale <- exp(1) ndata <- data.frame(y1 = sqrt(rgamma(nn, shape = shape, scale = Scale/shape))) nfit <- vglm(y1 ~ 1, nakagami, data = ndata, trace = TRUE, crit = "coef") ndata <- transform(ndata, y2 = rnaka(nn, scale = Scale, shape = shape)) nfit <- vglm(y2 ~ 1, nakagami(iscale = 3), data = ndata, trace = TRUE) head(fitted(nfit)) with(ndata, mean(y2)) coef(nfit, matrix = TRUE) (Cfit <- Coef(nfit)) \dontrun{ sy <- with(ndata, sort(y2)) hist(with(ndata, y2), prob = TRUE, main = "", xlab = "y", ylim = c(0, 0.6), col = "lightblue") lines(dnaka(sy, scale = Cfit["scale"], shape = Cfit["shape"]) ~ sy, data = ndata, col = "orange") } } \keyword{models} \keyword{regression} VGAM/man/zipoisson.Rd0000644000176200001440000003050013135276753014115 0ustar liggesusers\name{zipoisson} \alias{zipoisson} \alias{zipoissonff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Inflated Poisson Distribution Family Function } \description{ Fits a zero-inflated or zero-deflated Poisson distribution by full maximum likelihood estimation. } \usage{ zipoisson(lpstr0 = "logit", llambda = "loge", type.fitted = c("mean", "lambda", "pobs0", "pstr0", "onempstr0"), ipstr0 = NULL, ilambda = NULL, gpstr0 = NULL, imethod = 1, ishrinkage = 0.95, probs.y = 0.35, zero = NULL) zipoissonff(llambda = "loge", lonempstr0 = "logit", type.fitted = c("mean", "lambda", "pobs0", "pstr0", "onempstr0"), ilambda = NULL, ionempstr0 = NULL, gonempstr0 = NULL, imethod = 1, ishrinkage = 0.95, probs.y = 0.35, zero = "onempstr0") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lpstr0, llambda}{ Link functions for the parameter \eqn{\phi}{phi} and the usual \eqn{\lambda}{lambda} parameter. See \code{\link{Links}} for more choices; see \code{\link{CommonVGAMffArguments}} for more information. For the zero-\emph{deflated} model see below. } \item{ipstr0, ilambda}{ Optional initial values for \eqn{\phi}{phi}, whose values must lie between 0 and 1. Optional initial values for \eqn{\lambda}{lambda}, whose values must be positive. The defaults are to compute an initial value internally for each. If a vector then recycling is used. } \item{lonempstr0, ionempstr0}{ Corresponding arguments for the other parameterization. See details below. } \item{type.fitted}{ Character. The type of fitted value to be returned. The first choice (the expected value) is the default. The estimated probability of an observed 0 is an alternative, else the estimated probability of a structural 0, or one minus the estimated probability of a structural 0. See \code{\link{CommonVGAMffArguments}} and \code{\link{fittedvlm}} for more information. } \item{imethod}{ An integer with value \code{1} or \code{2} which specifies the initialization method for \eqn{\lambda}{lambda}.
If failure to converge occurs try another value and/or else specify a value for \code{ishrinkage} and/or else specify a value for \code{ipstr0}. See \code{\link{CommonVGAMffArguments}} for more information. } \item{ishrinkage}{ How much shrinkage is used when initializing \eqn{\lambda}{lambda}. The value must be between 0 and 1 inclusive, and a value of 0 means the individual response values are used, and a value of 1 means the median or mean is used. This argument is used in conjunction with \code{imethod}. See \code{\link{CommonVGAMffArguments}} for more information. } \item{zero}{ Specifies which linear/additive predictors are to be modelled as intercept-only. If given, the value can be either 1 or 2, and the default is none of them. Setting \code{zero = 1} makes \eqn{\phi}{phi} a single parameter. See \code{\link{CommonVGAMffArguments}} for more information. } \item{gpstr0, gonempstr0, probs.y}{ Details at \code{\link{CommonVGAMffArguments}}. } } \details{ These models are a mixture of a Poisson distribution and the value 0; it has value 0 with probability \eqn{\phi}{phi} else is Poisson(\eqn{\lambda}{lambda}) distributed. Thus there are two sources for zero values, and \eqn{\phi}{phi} is the probability of a \emph{structural zero}. The model for \code{zipoisson()} can be written \deqn{P(Y = 0) = \phi + (1-\phi) \exp(-\lambda),}{% P(Y = 0) = phi + (1-phi) * exp(-lambda),} and for \eqn{y=1,2,\ldots}, \deqn{P(Y = y) = (1-\phi) \exp(-\lambda) \lambda^y / y!.}{% P(Y = y) = (1-phi) * exp(-lambda) * lambda^y / y!.} Here, the parameter \eqn{\phi}{phi} satisfies \eqn{0 < \phi < 1}{0 < phi < 1}. The mean of \eqn{Y} is \eqn{(1-\phi) \lambda}{(1-phi)*lambda} and these are returned as the fitted values, by default. The variance of \eqn{Y} is \eqn{(1-\phi) \lambda (1 + \phi \lambda)}{ (1-phi)*lambda*(1 + phi lambda)}. By default, the two linear/additive predictors of \code{zipoisson()} are \eqn{(logit(\phi), \log(\lambda))^T}{(logit(phi), log(lambda))^T}. The \pkg{VGAM} family function \code{zipoissonff()} has a few changes compared to \code{zipoisson()}. These are: (i) the order of the linear/additive predictors is switched so the Poisson mean comes first; (ii) \code{onempstr0} is now 1 minus the probability of a structural 0, i.e., the probability of the parent (Poisson) component, i.e., \code{onempstr0} is \code{1-pstr0}; (iii) argument \code{zero} has a new default so that the \code{onempstr0} is intercept-only by default. Now \code{zipoissonff()} is generally recommended over \code{zipoisson()} (and definitely recommended over \code{\link{yip88}}). Both functions implement Fisher scoring and can handle multiple responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Thas, O. and Rayner, J. C. W. (2005) Smooth tests for the zero-inflated Poisson distribution. \emph{Biometrics}, \bold{61}, 808--815. Data: Angers, J-F. and Biswas, A. (2003) A Bayesian analysis of zero-inflated generalized Poisson model. \emph{Computational Statistics & Data Analysis}, \bold{42}, 37--46. Cameron, A. C. and Trivedi, P. K. (1998) \emph{Regression Analysis of Count Data}. Cambridge University Press: Cambridge. Yee, T. W. (2014) Reduced-rank vector generalized linear models with two linear predictors. \emph{Computational Statistics and Data Analysis}, \bold{71}, 889--902. } \author{ T. W. 
Yee } \note{ % The \code{misc} slot has a component called % \code{pobs0} which is the estimate of \eqn{P(Y = 0)}. % Note that \eqn{P(Y = 0)} is not the parameter \eqn{\phi}{phi}. % The estimated probability of a structural 0 is returned in % the \code{misc} slot with component name \code{pstr0}. This family function can be used to estimate the 0-\emph{deflated} model, hence \code{pstr0} is not to be interpreted as a probability. One should set, e.g., \code{lpstr0 = "identitylink"}. Likewise, the functions in \code{\link{Zipois}} can handle the zero-deflated Poisson distribution too. Although the iterations might fall outside the parameter space, the \code{validparams} slot should keep them inside. A (somewhat) similar alternative for zero-deflation is to try the zero-altered Poisson model (see \code{\link{zapoisson}}). % Practically, it is restricted to intercept-models only % (see example below). % Also, one might need inputting good initial values % or using a simpler model to obtain initial values. % If there is a covariate then it is best to % constrain \code{pstr0} to be intercept-only, e.g., % by \code{zipoisson(lpstr0 = identitylink, zero = -1)}. The use of this \pkg{VGAM} family function with \code{\link{rrvglm}} can result in a so-called COZIGAM or COZIGLM. That is, a reduced-rank zero-inflated Poisson model (RR-ZIP) is a constrained zero-inflated generalized linear model. See \pkg{COZIGAM}. An RR-ZINB model can also be fitted easily; see \code{\link{zinegbinomial}}. Jargon-wise, a COZIGLM might be better described as a COZIVGLM-ZIP. } \section{Warning }{ Numerical problems can occur, e.g., when the probability of zero is actually less than, not more than, the nominal probability of zero. For example, in the Angers and Biswas (2003) data below, replacing 182 by 1 results in nonconvergence. Half-stepping is not uncommon. If failure to converge occurs, try using combinations of \code{imethod}, \code{ishrinkage}, \code{ipstr0}, and/or \code{zipoisson(zero = 1)} if there are explanatory variables. The default for \code{zipoissonff()} is to model the structural zero probability as intercept-only. } \seealso{ \code{\link{zapoisson}}, \code{\link{Zipois}}, \code{\link{yip88}}, \code{\link{rrvglm}}, \code{\link{zipebcom}}, \code{\link[stats:Poisson]{rpois}}, \code{\link{simulate.vlm}}, \code{\link{hdeff.vglm}}. } \examples{ # Example 1: simulated ZIP data zdata <- data.frame(x2 = runif(nn <- 1000)) zdata <- transform(zdata, pstr01 = logit(-0.5 + 1*x2, inverse = TRUE), pstr02 = logit( 0.5 - 1*x2, inverse = TRUE), Ps01 = logit(-0.5 , inverse = TRUE), Ps02 = logit( 0.5 , inverse = TRUE), lambda1 = loge(-0.5 + 2*x2, inverse = TRUE), lambda2 = loge( 0.5 + 2*x2, inverse = TRUE)) zdata <- transform(zdata, y1 = rzipois(nn, lambda = lambda1, pstr0 = Ps01), y2 = rzipois(nn, lambda = lambda2, pstr0 = Ps02)) with(zdata, table(y1)) # Eyeball the data with(zdata, table(y2)) fit1 <- vglm(y1 ~ x2, zipoisson(zero = 1), data = zdata, crit = "coef") fit2 <- vglm(y2 ~ x2, zipoisson(zero = 1), data = zdata, crit = "coef") coef(fit1, matrix = TRUE) # These should agree with the above values coef(fit2, matrix = TRUE) # These should agree with the above values # Fit both simultaneously, using a different parameterization: fit12 <- vglm(cbind(y1, y2) ~ x2, zipoissonff, data = zdata, crit = "coef") coef(fit12, matrix = TRUE) # These should agree with the above values
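# An added sanity check (a sketch, not in the original examples): the
# default fitted values are the ZIP mean (1 - pstr0) * lambda, so they
# should roughly track the true means used in the simulation.
head(cbind(fitted(fit1), with(zdata, (1 - Ps01) * lambda1)), 3)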
# For the first observation, compute the probability that y1 is # due to a structural zero. (fitted(fit1, type = "pstr0") / fitted(fit1, type = "pobs0"))[1] # Example 2: McKendrick (1926). Data from 223 Indian village households cholera <- data.frame(ncases = 0:4, # Number of cholera cases, wfreq = c(168, 32, 16, 6, 1)) # Frequencies fit <- vglm(ncases ~ 1, zipoisson, weights = wfreq, data = cholera, trace = TRUE) coef(fit, matrix = TRUE) with(cholera, cbind(actual = wfreq, fitted = round(dzipois(ncases, lambda = Coef(fit)[2], pstr0 = Coef(fit)[1]) * sum(wfreq), digits = 2))) # Example 3: data from Angers and Biswas (2003) abdata <- data.frame(y = 0:7, w = c(182, 41, 12, 2, 2, 0, 0, 1)) abdata <- subset(abdata, w > 0) fit <- vglm(y ~ 1, zipoisson(lpstr0 = probit, ipstr0 = 0.8), data = abdata, weights = w, trace = TRUE) fitted(fit, type = "pobs0") # Estimate of P(Y = 0) coef(fit, matrix = TRUE) Coef(fit) # Estimate of pstr0 and lambda fitted(fit) with(abdata, weighted.mean(y, w)) # Compare this with fitted(fit) summary(fit) # Example 4: zero-deflated model for intercept-only data zdata <- transform(zdata, lambda3 = loge(0.0, inverse = TRUE)) zdata <- transform(zdata, deflat.limit = -1 / expm1(lambda3)) # Boundary # The 'pstr0' parameter is negative and in parameter space: zdata <- transform(zdata, usepstr0 = deflat.limit / 2) # Not too near the boundary zdata <- transform(zdata, y3 = rzipois(nn, lambda3, pstr0 = usepstr0)) head(zdata) with(zdata, table(y3)) # A lot of deflation fit3 <- vglm(y3 ~ 1, zipoisson(zero = -1, lpstr0 = "identitylink"), data = zdata, trace = TRUE, crit = "coef") coef(fit3, matrix = TRUE) # Check how accurate it was: zdata[1, "usepstr0"] # Answer coef(fit3)[1] # Estimate Coef(fit3) vcov(fit3) # Is positive-definite # Example 5: This RR-ZIP is known as a COZIGAM or COZIVGLM-ZIP set.seed(123) rrzip <- rrvglm(Alopacce ~ sm.bs(WaterCon, df = 3), zipoisson(zero = NULL), data = hspider, trace = TRUE, Index.corner = 2) coef(rrzip, matrix = TRUE) Coef(rrzip) summary(rrzip) \dontrun{plotvgam(rrzip, lcol = "blue")} } \keyword{models} \keyword{regression} %# head(zdata, 1); pfit1 <- predict(fit1, zdata[1, ]); %# lambda <- loge(pfit1[2], inverse = TRUE) %# lambda <- (fitted(fit1, type = "mean") / fitted(fit1, type = "onempstr0"))[1] %# (prob.struc.0 <- pstr0 / dzipois(x = 0, lambda = lambda, pstr0 = pstr0)) % fit@misc$pobs0 # Estimate of P(Y = 0) %zipoisson(lpstr0 = "logit", llambda = "loge", % type.fitted = c("mean", "pobs0", "pstr0", "onempstr0"), % ipstr0 = NULL, ilambda = NULL, % imethod = 1, ishrinkage = 0.8, zero = NULL) %zipoissonff(llambda = "loge", lonempstr0 = "logit", % type.fitted = c("mean", "pobs0", "pstr0", "onempstr0"), % ilambda = NULL, ionempstr0 = NULL, % imethod = 1, ishrinkage = 0.8, zero = "onempstr0") VGAM/man/skellam.Rd0000644000176200001440000000626713135276753013515 0ustar liggesusers\name{skellam} \alias{skellam} %- Also NEED an '\alias' for EACH other topic documented here. \title{Skellam Distribution Family Function} \description{ Estimates the two parameters of a Skellam distribution by maximum likelihood estimation. } \usage{ skellam(lmu1 = "loge", lmu2 = "loge", imu1 = NULL, imu2 = NULL, nsimEIM = 100, parallel = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu1, lmu2}{ Link functions for the \eqn{\mu_1}{mu1} and \eqn{\mu_2}{mu2} parameters. See \code{\link{Links}} for more choices and for general information. } \item{imu1, imu2}{ Optional initial values for the parameters. See \code{\link{CommonVGAMffArguments}} for more information.
If convergence failure occurs (this \pkg{VGAM} family function seems to require good initial values) try using these arguments. } \item{nsimEIM, parallel, zero}{ See \code{\link{CommonVGAMffArguments}} for information. In particular, setting \code{parallel = TRUE} will constrain the two means to be equal. } } \details{ The Skellam distribution models the difference between two independent Poisson distributions (with means \eqn{\mu_{j}}{mu_j}, say). It has density function \deqn{f(y;\mu_1,\mu_2) = \left( \frac{ \mu_1 }{\mu_2} \right)^{y/2} \, \exp(-\mu_1-\mu_2 ) \, I_{|y|}( 2 \sqrt{ \mu_1 \mu_2}) }{% f(y;mu1,mu2) = ( mu1 / mu2 )^(y/2) * exp(-mu1-mu2) * I_(|y|)(2 * sqrt(mu1*mu2)) } where \eqn{y} is an integer, \eqn{\mu_1 > 0}{mu1 > 0}, \eqn{\mu_2 > 0}{mu2 > 0}. Here, \eqn{I_v} is the modified Bessel function of the first kind with order \eqn{v}. The mean is \eqn{\mu_1 - \mu_2}{mu1 - mu2} (returned as the fitted values), and the variance is \eqn{\mu_1 + \mu_2}{mu1 + mu2}. Simulated Fisher scoring is implemented. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \section{Warning }{ This \pkg{VGAM} family function seems fragile and very sensitive to the initial values. Use very cautiously!! } \references{ Skellam, J. G. (1946) The frequency distribution of the difference between two Poisson variates belonging to different populations. \emph{Journal of the Royal Statistical Society, Series A}, \bold{109}, 296. } %\author{ T. W. Yee } \note{ Numerical problems may occur for data if \eqn{\mu_1}{mu1} and/or \eqn{\mu_2}{mu2} are large. } \seealso{ \code{\link{dskellam}}, \code{\link[stats:Poisson]{dpois}}, \code{\link{poissonff}}. } \examples{ \dontrun{ sdata <- data.frame(x2 = runif(nn <- 1000)) sdata <- transform(sdata, mu1 = exp(1 + x2), mu2 = exp(1 + x2)) sdata <- transform(sdata, y = rskellam(nn, mu1, mu2)) fit1 <- vglm(y ~ x2, skellam, data = sdata, trace = TRUE, crit = "coef") fit2 <- vglm(y ~ x2, skellam(parallel = TRUE), data = sdata, trace = TRUE) coef(fit1, matrix = TRUE) coef(fit2, matrix = TRUE) summary(fit1) # Likelihood ratio test for equal means: pchisq(2 * (logLik(fit1) - logLik(fit2)), df = df.residual(fit2) - df.residual(fit1), lower.tail = FALSE) lrtest(fit1, fit2) # Alternative } } \keyword{models} \keyword{regression} VGAM/man/zoabetaR.Rd0000644000176200001440000000531513135276753013635 0ustar liggesusers\name{zoabetaR} \alias{zoabetaR} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero- and One-Inflated Beta Distribution Family Function } \description{ Estimation of the shape parameters of the two-parameter beta distribution plus the probabilities of a 0 and/or a 1. } \usage{ zoabetaR(lshape1 = "loge", lshape2 = "loge", lpobs0 = "logit", lpobs1 = "logit", ishape1 = NULL, ishape2 = NULL, trim = 0.05, type.fitted = c("mean", "pobs0", "pobs1", "beta.mean"), parallel.shape = FALSE, parallel.pobs = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape1, lshape2, lpobs0, lpobs1}{ Details at \code{\link{CommonVGAMffArguments}}. See \code{\link{Links}} for more choices. } \item{ishape1, ishape2}{ Details at \code{\link{CommonVGAMffArguments}}. } \item{trim, zero}{ Same as \code{\link{betaR}}. } \item{parallel.shape, parallel.pobs}{ See \code{\link{CommonVGAMffArguments}} for more information.
} \item{type.fitted}{ The choice \code{"beta.mean"} means that the mean of the beta distribution is returned; the 0s and 1s are ignored. See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The standard 2-parameter beta distribution has support on (0,1); however, many datasets have 0 and/or 1 values too. This family function handles 0s and 1s (at least one of them must be present) in the data set by modelling the probability of a 0 by a logistic regression (default link is the logit), and similarly for the probability of a 1. The remaining proportion, \code{1-pobs0-pobs1}, of the data comes from a standard beta distribution. This family function therefore extends \code{\link{betaR}}. One has \eqn{M=3} or \eqn{M=4} per response. Multiple responses are allowed. } \value{ Similar to \code{\link{betaR}}. } %\references{ %} \author{ Thomas W. Yee and Xiangjie Xue. } %\note{ %} \seealso{ \code{\link{Zoabeta}}, \code{\link{betaR}}, \code{\link{betaff}}, \code{\link[stats:Beta]{Beta}}, \code{\link{zipoisson}}. } \examples{ nn <- 1000; set.seed(1) bdata <- data.frame(x2 = runif(nn)) bdata <- transform(bdata, pobs0 = logit(-2 + x2, inverse = TRUE), pobs1 = logit(-2 + x2, inverse = TRUE)) bdata <- transform(bdata, y1 = rzoabeta(nn, shape1 = exp(1 + x2), shape2 = exp(2 - x2), pobs0 = pobs0, pobs1 = pobs1)) summary(bdata) fit1 <- vglm(y1 ~ x2, zoabetaR(parallel.pobs = TRUE), data = bdata, trace = TRUE) coef(fit1, matrix = TRUE) summary(fit1) } \keyword{regression} % y1 = rbeta(nn, shape1 = exp(1 + x2), shape2 = exp(2 - x2)) %rrr <- runif(nn) %bdata$y1[rrr < bdata$p0] <- 0 %bdata$y1[rrr > 1 - bdata$p1] <- 1 VGAM/man/zipebcom.Rd0000644000176200001440000002063113135276753013674 0ustar liggesusers\name{zipebcom} \alias{zipebcom} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Exchangeable Bivariate cloglog Odds-ratio Model From a Zero-inflated Poisson Distribution } \description{ Fits an exchangeable bivariate odds-ratio model to two binary responses with a complementary log-log link. The data are assumed to come from a zero-inflated Poisson distribution that has been converted to presence/absence. } \usage{ zipebcom(lmu12 = "cloglog", lphi12 = "logit", loratio = "loge", imu12 = NULL, iphi12 = NULL, ioratio = NULL, zero = c("phi12", "oratio"), tol = 0.001, addRidge = 0.001) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu12, imu12}{ Link function and optional initial values for the first (and second) marginal probabilities. Argument \code{lmu12} should be left alone. Argument \code{imu12} may be of length 2 (one element for each response). } \item{lphi12}{ Link function applied to the \eqn{\phi}{phi} parameter of the zero-inflated Poisson distribution (see \code{\link{zipoisson}}). See \code{\link{Links}} for more choices. } \item{loratio}{ Link function applied to the odds ratio. See \code{\link{Links}} for more choices. } \item{iphi12, ioratio}{ Optional initial values for \eqn{\phi}{phi} and the odds ratio. See \code{\link{CommonVGAMffArguments}} for more details. In general, good initial values (especially for \code{iphi12}) are often required; therefore use these arguments if convergence failure occurs. If inputted, the value of \code{iphi12} cannot be more than the sample proportions of zeros in either response. } % \item{ephi12, eoratio}{ % List. Extra argument for each of the links. % emu12 = list(), ephi12 = list(), eoratio = list(), % See \code{earg} in \code{\link{Links}} for general information.
% } \item{zero}{ Which linear/additive predictor is modelled as an intercept only? A \code{NULL} means none. By default, both \eqn{\phi}{phi} and the odds ratio are modelled as intercepts only, i.e., not as functions of the explanatory variables. } \item{tol}{ Tolerance for testing independence. Should be some small positive numerical value. } \item{addRidge}{ Some small positive numerical value. The first two diagonal elements of the working weight matrices are multiplied by \code{1+addRidge} to make them diagonally dominant, therefore positive-definite. } } \details{ This \pkg{VGAM} family function fits an exchangeable bivariate odds ratio model (\code{\link{binom2.or}}) with a \code{\link{cloglog}} link. The data are assumed to come from a zero-inflated Poisson (ZIP) distribution that has been converted to presence/absence. Explicitly, the default model is \deqn{cloglog[P(Y_j=1)/(1-\phi)] = \eta_1,\ \ \ j=1,2}{% cloglog[P(Y_j=1)/(1-phi)] = eta_1,\ \ \ j=1,2} for the (exchangeable) marginals, and \deqn{logit[\phi] = \eta_2,}{% logit[phi] = eta_2,} for the mixing parameter, and \deqn{\log[P(Y_{00}=1) P(Y_{11}=1) / (P(Y_{01}=1) P(Y_{10}=1))] = \eta_3,}{% log[P(Y_{00}=1) P(Y_{11}=1) / (P(Y_{01}=1) P(Y_{10}=1))] = eta_3,} specifies the dependency between the two responses. Here, the responses equal 1 for a success and a 0 for a failure, and the odds ratio is often written \eqn{\psi=p_{00}p_{11}/(p_{10}p_{01})}{psi=p00 p11 / (p10 p01)}. We have \eqn{p_{10} = p_{01}}{p10 = p01} because of the exchangeability. The second linear/additive predictor models the \eqn{\phi}{phi} parameter (see \code{\link{zipoisson}}). The third linear/additive predictor is the same as \code{\link{binom2.or}}, viz., the log odds ratio. Suppose dataset1 comes from a Poisson distribution that has been converted to presence/absence, and that both marginal probabilities are the same (exchangeable). Then \code{binom2.or("cloglog", exch = TRUE)} is appropriate. Now suppose dataset2 comes from a \emph{zero-inflated} Poisson distribution. The first linear/additive predictor of \code{zipebcom()} applied to dataset2 is the same as that of \code{binom2.or("cloglog", exch = TRUE)} applied to dataset1. That is, the \eqn{\phi}{phi} has been taken care of by \code{zipebcom()} so that it is just like the simpler \code{\link{binom2.or}}. Note that, for \eqn{\eta_1}{eta_1}, \code{mu12 = prob12 / (1-phi12)} where \code{prob12} is the probability of a 1 under the ZIP model. Here, \code{mu12} corresponds to \code{mu1} and \code{mu2} in the \code{\link{binom2.or}}-Poisson model. If \eqn{\phi=0}{phi=0} then \code{zipebcom()} should be equivalent to \code{binom2.or("cloglog", exch = TRUE)}. Full details are given in Yee and Dirnbock (2009). The leading \eqn{2 \times 2}{2 x 2} submatrix of the expected information matrix (EIM) has rank 1, not 2! This is due to the fact that the parameters corresponding to the first two linear/additive predictors are unidentifiable. The quick fix around this problem is to use the \code{addRidge} adjustment. The model is fitted by maximum likelihood estimation since the full likelihood is specified. Fisher scoring is implemented. The default models \eqn{\eta_2}{eta2} and \eqn{\eta_3}{eta3} as single parameters only, but this can be circumvented by setting \code{zero = NULL} in order to model the \eqn{\phi}{phi} and odds ratio as a function of all the explanatory variables. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. When fitted, the \code{fitted.values} slot of the object contains the four joint probabilities, labelled as \eqn{(Y_1,Y_2)}{(Y1,Y2)} = (0,0), (0,1), (1,0), (1,1), respectively. These estimated probabilities should be extracted with the \code{fitted} generic function. } \section{Warning }{ The fact that the EIM is not of full rank may mean the model is naturally ill-conditioned. It is not clear whether there are any negative theoretical consequences. For now it is certainly safer to fit \code{\link{binom2.or}} to bivariate binary responses. } \references{ Yee, T. W. and Dirnbock, T. (2009) Models for analysing species' presence/absence data at two time points. \emph{Journal of Theoretical Biology}, \bold{259}(4), 684--694. } %\author{ T. W. Yee } \note{ The \code{"12"} in the argument names reminds the user of the exchangeability assumption. The name of this \pkg{VGAM} family function stands for \emph{zero-inflated Poisson exchangeable bivariate complementary log-log odds-ratio model} or ZIP-EBCOM. See \code{\link{binom2.or}} for details that are pertinent to this \pkg{VGAM} family function too. Even better initial values are usually needed here. The \code{xij} (see \code{\link{vglm.control}}) argument enables environmental variables with different values at the two time points to be entered into an exchangeable \code{\link{binom2.or}} model. See the author's webpage for sample code. } \seealso{ \code{\link{binom2.or}}, \code{\link{zipoisson}}, \code{\link{cloglog}}, \code{\link{CommonVGAMffArguments}}. } \examples{ zdata <- data.frame(x2 = seq(0, 1, len = (nsites <- 2000))) zdata <- transform(zdata, eta1 = -3 + 5 * x2, phi1 = logit(-1, inverse = TRUE), oratio = exp(2)) zdata <- transform(zdata, mu12 = cloglog(eta1, inverse = TRUE) * (1-phi1)) tmat <- with(zdata, rbinom2.or(nsites, mu1 = mu12, oratio = oratio, exch = TRUE)) zdata <- transform(zdata, ybin1 = tmat[, 1], ybin2 = tmat[, 2]) with(zdata, table(ybin1, ybin2)) / nsites # For interest only \dontrun{ # Various plots of the data, for interest only par(mfrow = c(2, 2)) plot(jitter(ybin1) ~ x2, data = zdata, col = "blue") plot(jitter(ybin2) ~ jitter(ybin1), data = zdata, col = "blue") plot(mu12 ~ x2, data = zdata, col = "blue", type = "l", ylim = 0:1, ylab = "Probability", main = "Marginal probability and phi") with(zdata, abline(h = phi1[1], col = "red", lty = "dashed")) tmat2 <- with(zdata, dbinom2.or(mu1 = mu12, oratio = oratio, exch = TRUE)) with(zdata, matplot(x2, tmat2, col = 1:4, type = "l", ylim = 0:1, ylab = "Probability", main = "Joint probabilities")) } # Now fit the model to the data. fit <- vglm(cbind(ybin1, ybin2) ~ x2, zipebcom, data = zdata, trace = TRUE) coef(fit, matrix = TRUE) summary(fit) vcov(fit) } \keyword{models} \keyword{regression} VGAM/man/rrvglm.control.Rd0000644000176200001440000002165013135276753015056 0ustar liggesusers\name{rrvglm.control} \alias{rrvglm.control} %- Also NEED an `\alias' for EACH other topic documented here. \title{ Control Function for rrvglm() } \description{ Algorithmic constants and parameters for running \code{rrvglm} are set using this function.
} \usage{ rrvglm.control(Rank = 1, Algorithm = c("alternating", "derivative"), Corner = TRUE, Uncorrelated.latvar = FALSE, Wmat = NULL, Svd.arg = FALSE, Index.corner = if (length(str0)) head((1:1000)[-str0], Rank) else 1:Rank, Ainit = NULL, Alpha = 0.5, Bestof = 1, Cinit = NULL, Etamat.colmax = 10, sd.Ainit = 0.02, sd.Cinit = 0.02, str0 = NULL, noRRR = ~1, Norrr = NA, noWarning = FALSE, trace = FALSE, Use.Init.Poisson.QO = FALSE, checkwz = TRUE, Check.rank = TRUE, Check.cm.rank = TRUE, wzepsilon = .Machine$double.eps^0.75, ...) } %- maybe also `usage' for other objects documented here. \arguments{ \item{Rank}{ The numerical rank \eqn{R} of the model. Must be an element from the set \{1,2,\ldots,min(\eqn{M},\emph{p2})\}. Here, the vector of explanatory variables \bold{x} is partitioned into (\bold{x1},\bold{x2}), which is of dimension \emph{p1}+\emph{p2}. The variables making up \bold{x1} are given by the terms in the \code{noRRR} argument, and the rest of the terms comprise \bold{x2}. } \item{Algorithm}{ Character string indicating what algorithm is to be used. The default is the first one. } \item{Corner}{ Logical indicating whether corner constraints are to be used. This is one method for ensuring a unique solution. If \code{TRUE}, \code{Index.corner} specifies the \eqn{R} rows of the constraint matrices that are used as the corner constraints, i.e., they hold an order-\eqn{R} identity matrix. } \item{Uncorrelated.latvar}{ Logical indicating whether uncorrelated latent variables are to be used. This normalization forces the variance-covariance matrix of the latent variables to be \code{diag(Rank)}, i.e., unit variance and uncorrelated. This constraint does not lead to a unique solution because it can be rotated. } \item{Wmat}{ Yet to be done. } \item{Svd.arg}{ Logical indicating whether a singular value decomposition of the outer product is to be computed. This is another normalization which ensures uniqueness. See the argument \code{Alpha} below. } \item{Index.corner}{ Specifies the \eqn{R} rows of the constraint matrices that are used for the corner constraints, i.e., they hold an order-\eqn{R} identity matrix. } \item{Alpha}{ The exponent in the singular value decomposition that is used in the first part: if the SVD is \eqn{U D V^T}{ U \%*\% D \%*\% t(V) } then the first and second parts are \eqn{U D^{\alpha}}{ U \%*\% D^Alpha} and \eqn{D^{1-\alpha} V^T}{D^(1-Alpha) \%*\% t(V)} respectively. A value of 0.5 is `symmetrical'. This argument is used only when \code{Svd.arg=TRUE}. } \item{Bestof}{ Integer. The best of \code{Bestof} models fitted is returned. This argument helps guard against local solutions by (hopefully) finding the global solution from many fits. The argument works only when the function generates its own initial value for \bold{C}, i.e., when \bold{C} is \emph{not} passed in as initial values. } \item{Ainit, Cinit}{ Initial \bold{A} and \bold{C} matrices which may speed up convergence. They must be of the correct dimension. } \item{Etamat.colmax}{ Positive integer, no smaller than \code{Rank}. Controls the amount of memory used by \code{.Init.Poisson.QO()}. It is the maximum number of columns allowed for the pseudo-response and its weights. In general, the larger the value, the better the initial value. Used only if \code{Use.Init.Poisson.QO=TRUE}. } % \item{Quadratic}{ % Logical indicating whether a \emph{Quadratic} % RR-VGLM is to be fitted. If \code{TRUE}, an object of class % \code{"qrrvglm"} will be returned, otherwise \code{"rrvglm"}.
% } \item{str0}{ Integer vector specifying which rows of the estimated constraint matrices (\bold{A}) are to be all zeros. These are called \emph{structural zeros}. Must not have any common value with \code{Index.corner}, and be a subset of the vector \code{1:M}. The default, \code{str0 = NULL}, means no structural zero rows at all. } \item{sd.Ainit, sd.Cinit}{ Standard deviation of the initial values for the elements of \bold{A} and \bold{C}. These are normally distributed with mean zero. This argument is used only if \code{Use.Init.Poisson.QO = FALSE}. } % \item{ppar}{ Ignore this. } \item{noRRR}{ Formula giving terms that are \emph{not} to be included in the reduced-rank regression. That is, \code{noRRR} specifies which explanatory variables are in the \eqn{x_1}{x1} vector of \code{\link{rrvglm}}, and the rest go into \eqn{x_2}{x2}. The \eqn{x_1}{x1} variables constitute the \eqn{\bold{B}_1}{\bold{B}1} matrix in Yee and Hastie (2003). Those \eqn{x_2}{x2} variables which are subject to the reduced-rank regression correspond to the \eqn{\bold{B}_2}{\bold{B}2} matrix. Set \code{noRRR = NULL} for the reduced-rank regression to be applied to every explanatory variable including the intercept. } \item{Norrr}{ Defunct. Please use \code{noRRR}. Use of \code{Norrr} will become an error soon. } \item{trace}{ Logical indicating if output should be produced for each iteration. % Useful when \code{Quadratic=TRUE} because QRR-VGLMs are % computationally expensive and it's good to see that the program % is working! } \item{Use.Init.Poisson.QO}{ Logical indicating whether \code{.Init.Poisson.QO()} should be used to obtain initial values for \bold{C}. The function uses a new method that can work well if the data are Poisson counts coming from an equal-tolerances QRR-VGLM (CQO). This option is less realistic for RR-VGLMs compared to QRR-VGLMs. } \item{checkwz}{ Logical indicating whether the diagonal elements of the working weight matrices should be checked to see whether they are sufficiently positive, i.e., greater than \code{wzepsilon}. If not, any values less than \code{wzepsilon} are replaced with this value. } \item{noWarning, Check.rank, Check.cm.rank}{ Same as \code{\link{vglm.control}}. Ignored for \pkg{VGAM} 0.9-7 and higher. } \item{wzepsilon}{ Small positive number used to test whether the diagonals of the working weight matrices are sufficiently positive. } \item{\dots}{ Variables in \dots are passed into \code{\link{vglm.control}}. If the derivative algorithm is used then \dots are also passed into \code{\link{rrvglm.optim.control}}; and if the alternating algorithm is used then \dots are also passed into \code{\link{valt.control}}. } In the above, \eqn{R} is the \code{Rank} and \eqn{M} is the number of linear predictors. } \details{ % QRR-VGLMs are an extension of RR-VGLMs and are useful for constrained % ordination. QRR-VGLMs fitted with \pkg{VGAM} allow a maximum % likelihood solution to constrained quadratic ordination (CQO; % formerly called canonical Gaussian ordination) models. % For QRR-VGLMs, if \code{eq.tolerances=TRUE} and % \code{I.tolerances=FALSE} then the default is that the \bold{C} % matrix is constrained by forcing the latent variables to have sample % variance-covariance matrix equalling \code{diag(Rank)}, i.e., unit % variance and uncorrelated. \pkg{VGAM} supports three normalizations to ensure a unique solution. Of these, only corner constraints will work with \code{summary} of RR-VGLM objects. } \value{ A list with components matching the input names.
Some error checking is done, but not much. } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. } \author{ Thomas W. Yee } \note{ % The function call \code{cqo(...)} is equivalent to % \code{rrvglm(..., Quadratic=TRUE)}, and hence uses this function. % For QRR-VGLMs, the function \code{\link{qrrvglm.control}} is called too. The arguments in this function begin with an upper case letter to help avoid interference with those of \code{\link{vglm.control}}. In the example below a rank-1 \emph{stereotype} model (Anderson, 1984) is fitted. } %- \section{Warning }{ } \seealso{ \code{\link{rrvglm}}, \code{\link{rrvglm.optim.control}}, \code{\link{rrvglm-class}}, \code{\link{vglm}}, \code{\link{vglm.control}}, \code{\link{cqo}}. } \examples{ \dontrun{ set.seed(111) pneumo <- transform(pneumo, let = log(exposure.time), x3 = runif(nrow(pneumo))) # x3 is random noise fit <- rrvglm(cbind(normal, mild, severe) ~ let + x3, multinomial, data = pneumo, Rank = 1, Index.corner = 2) constraints(fit) vcov(fit) summary(fit) } } \keyword{models} \keyword{regression} VGAM/man/oapospoisson.Rd0000644000176200001440000000664313135276753014627 0ustar liggesusers\name{oapospoisson} \alias{oapospoisson} %\alias{oapospoisff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ One-Altered Positive-Poisson Distribution } \description{ Fits a one-altered positive-Poisson distribution based on a conditional model involving a Bernoulli distribution and a 1-truncated positive-Poisson distribution. } \usage{ oapospoisson(lpobs1 = "logit", llambda = "loge", type.fitted = c("mean", "lambda", "pobs1", "onempobs1"), ipobs1 = NULL, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lpobs1}{ Link function for the parameter \eqn{p_1}{pobs1} or \eqn{\phi}{phi}, called \code{pobs1} or \code{phi} here. See \code{\link{Links}} for more choices. } \item{llambda}{ See \code{\link{pospoisson}} for details. } \item{type.fitted}{ See \code{\link{CommonVGAMffArguments}} and \code{\link{fittedvlm}} for information. } \item{ipobs1, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The response \eqn{Y} is one with probability \eqn{p_1}{pobs1}, or \eqn{Y} has a 1-truncated positive-Poisson distribution with probability \eqn{1-p_1}{1-pobs1}. Thus \eqn{0 < p_1 < 1}{0 < pobs1 < 1}, which is modelled as a function of the covariates. The one-altered positive-Poisson distribution differs from the one-inflated positive-Poisson distribution in that the former has ones coming from one source, whereas the latter has ones coming from the positive-Poisson distribution too. The one-inflated positive-Poisson distribution is implemented in the \pkg{VGAM} package. Some people call the one-altered positive-Poisson a \emph{hurdle} model. The input can be a matrix (multiple responses). By default, the two linear/additive predictors of \code{oapospoisson} are \eqn{(logit(\phi), log(\lambda))^T}{(logit(phi), log(lambda))^T}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. 
The \code{fitted.values} slot of the fitted object, which should be extracted by the generic function \code{fitted}, returns the mean \eqn{\mu}{mu} (default) which is given by \deqn{\mu = \phi + (1-\phi) A}{% mu = phi + (1- phi) A} where \eqn{A} is the mean of the one-truncated positive-Poisson distribution. If \code{type.fitted = "pobs1"} then \eqn{p_1}{pobs1} is returned. } %\references{ % % %} %\section{Warning }{ %} \author{ T. W. Yee } \note{ This family function effectively combines \code{\link{binomialff}} and \code{\link{otpospoisson}} into one family function. } \seealso{ \code{\link{Oapospois}}, \code{\link{pospoisson}}, \code{\link{oipospoisson}}, \code{\link{CommonVGAMffArguments}}, \code{\link{simulate.vlm}}. } \examples{ odata <- data.frame(x2 = runif(nn <- 1000)) odata <- transform(odata, pobs1 = logit(-1 + 2*x2, inverse = TRUE), lambda = loge( 1 + 1*x2, inverse = TRUE)) odata <- transform(odata, y1 = roapospois(nn, lambda = lambda, pobs1 = pobs1), y2 = roapospois(nn, lambda = lambda, pobs1 = pobs1)) with(odata, table(y1)) ofit <- vglm(cbind(y1, y2) ~ x2, oapospoisson, data = odata, trace = TRUE) coef(ofit, matrix = TRUE) head(fitted(ofit)) head(predict(ofit)) summary(ofit) } \keyword{models} \keyword{regression} VGAM/man/oxtemp.Rd0000644000176200001440000000112313135276753013373 0ustar liggesusers\name{oxtemp} \alias{oxtemp} \docType{data} \title{ Oxford Temperature Data } \description{ Annual maximum temperatures collected at Oxford, UK. } \usage{data(oxtemp)} \format{ A data frame with 80 observations on the following 2 variables. \describe{ \item{maxtemp}{Annual maximum temperatures (in degrees Fahrenheit). } \item{year}{The values 1901 to 1980. } } } \details{ The data were collected from 1901 to 1980. } % zz: \source{ Unknown. } % \references{ % } \examples{ \dontrun{ fit <- vglm(maxtemp ~ 1, gevff, data = oxtemp, trace = TRUE) } } \keyword{datasets} VGAM/man/crashes.Rd0000644000176200001440000000723313135276753013517 0ustar liggesusers\name{crashes} \alias{crashi} \alias{crashf} \alias{crashtr} \alias{crashmc} \alias{crashbc} \alias{crashp} \alias{alcoff} \alias{alclevels} \docType{data} \title{Crashes on New Zealand Roads in 2009} \description{ A variety of reported crash data cross-classified by time (hour of the day) and day of the week, accumulated over 2009. These include fatalities and injuries (by car), and crashes involving trucks, motorcycles, bicycles and pedestrians. There are some alcohol-related data too. } \usage{ data(crashi) data(crashf) data(crashtr) data(crashmc) data(crashbc) data(crashp) data(alcoff) data(alclevels) } \format{ Data frames with hourly times as rows and days of the week as columns. The \code{alclevels} dataset has hourly times and alcohol levels. \describe{ \item{Mon, Tue, Wed, Thu, Fri, Sat, Sun}{ Day of the week. } \item{0-30, 31-50, 51-80, 81-100, 101-120, 121-150, 151-200, 201-250, 251-300, 301-350, 350+}{ Blood alcohol level (milligrams alcohol per 100 millilitres of blood). % Aggregate number of alcohol offenders or number of dead % drivers/passengers on NZ roads. } } } \details{ Each cell is the aggregate number of crashes reported at each hour-day combination, over the 2009 calendar year. The \code{rownames} of each data frame are the start times (hourly from midnight onwards) on a 24-hour clock, e.g., 21 means 9.00pm to 9.59pm.
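% For example, assuming the row labels are the plain hours "0" to "23"
% and the column labels are the day abbreviations described above (an
% untested sketch, not checked against the actual data objects), the
% number of alcohol offenders between 9.00pm and 9.59pm on Mondays
% could be extracted with:
%   alcoff["21", "Mon"]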
For crashes, \code{crashi} are the number of injuries by car, \code{crashf} are the number of fatalities by car (not included in \code{crashi}), \code{crashtr} are the number of crashes involving trucks, \code{crashmc} are the number of crashes involving motorcyclists, \code{crashbc} are the number of crashes involving bicycles, and \code{crashp} are the number of crashes involving pedestrians. For alcohol-related offences, \code{alcoff} are the number of alcohol offenders from breath screening drivers, and \code{alclevels} are the blood alcohol levels of fatally injured drivers. } \source{ \url{http://www.transport.govt.nz/research/Pages/Motor-Vehicle-Crashes-in-New-Zealand-2009.aspx}. Thanks to Warwick Goold and Alfian F. Hadi for assistance. } \references{ Motor Vehicles Crashes in New Zealand 2009; Statistical Statement Calendar Year 2009. Ministry of Transport, NZ Government; Yearly Report 2010. ISSN: 1176-3949 } \seealso{ \code{\link[VGAM]{rrvglm}}, \code{\link[VGAM]{rcim}}, \code{\link[VGAM]{grc}}. } \examples{ \dontrun{ plot(unlist(alcoff), type = "l", frame.plot = TRUE, axes = FALSE, col = "blue", bty = "o", main = "Alcoholic offenders on NZ roads, aggregated over 2009", sub = "Vertical lines at midnight (purple) and noon (orange)", xlab = "Day/hour", ylab = "Number of offenders") axis(1, at = 1 + (0:6) * 24 + 12, labels = colnames(alcoff)) axis(2, las = 1) axis(3:4, labels = FALSE, tick = FALSE) abline(v = sort(1 + c((0:7) * 24, (0:6) * 24 + 12)), lty = "dashed", col = c("purple", "orange")) } # Goodman's RC models \dontrun{ fitgrc1 <- grc(alcoff) # Rank-1 model fitgrc2 <- grc(alcoff, Rank = 2, Corner = FALSE, Uncor = TRUE) Coef(fitgrc2) } \dontrun{ biplot(fitgrc2, scaleA = 2.3, Ccol = "blue", Acol = "orange", Clabels = as.character(1:23), xlim = c(-1.3, 2.3), ylim = c(-1.2, 1)) } } \keyword{datasets} % % %\alias{crashi} Table 18, p.39 %\alias{crashf} Table 19, p.40 %\alias{crashtr} Table 30, p.66 %\alias{crashmc} Table 35, p.72 %\alias{crashbc} Table 40, p.77 %\alias{crashp} Table 45, p.84 %\alias{alcoff} Table 3, p.121 %\alias{alclevels} Table 2, p.132 % print(Coef(fitgrc2), digits = 2) VGAM/man/Rcim.Rd0000644000176200001440000000401213135276753012751 0ustar liggesusers\name{Rcim} \alias{Rcim} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Mark the Baseline of Row and Column on Matrix Data } \description{ Rearrange the rows and columns of the input so that the first row and first column are baseline. This function is for rank-zero row-column interaction models (RCIMs; i.e., general main effects models). } \usage{ Rcim(mat, rbaseline = 1, cbaseline = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{mat}{ Matrix, of dimension \eqn{r} by \eqn{c}. It is best that it is labelled with row and column names. } \item{rbaseline, cbaseline}{ Numeric (row number of the matrix \code{mat}) or character (matching a row name of \code{mat}) that the user wants as the row baseline or reference level. Similarly \code{cbaseline} for the column. } } \details{ This is a data preprocessing function for \code{\link{rcim}}. For rank-zero row-column interaction models this function establishes the baseline (or reference) levels of the matrix response with respect to the row and columns---these become the new first row and column. } \value{ Matrix of the same dimension as the input, with \code{rbaseline} and \code{cbaseline} specifying the first rows and columns. The default is no change in \code{mat}. } \author{ Alfian F. Hadi and T. W. Yee.
} \note{ This function is similar to \code{\link{moffset}}; see \code{\link{moffset}} for information about the differences. If numeric, the arguments \code{rbaseline} and \code{cbaseline} differ from arguments \code{roffset} and \code{coffset} in \code{\link{moffset}} by 1 (when elements of the matrix agree). } \seealso{ \code{\link{moffset}}, \code{\link{rcim}}, \code{\link{plotrcim0}}. } \examples{ (alcoff.e <- moffset(alcoff, roffset = "6", postfix = "*")) (aa <- Rcim(alcoff, rbaseline = "11", cbaseline = "Sun")) (bb <- moffset(alcoff, "11", "Sun", postfix = "*")) aa - bb # Note the difference! } VGAM/man/linoUC.Rd0000644000176200001440000000507013135276753013255 0ustar liggesusers\name{Lino} \alias{Lino} \alias{dlino} \alias{plino} \alias{qlino} \alias{rlino} \title{The Generalized Beta Distribution (Libby and Novick, 1982)} \description{ Density, distribution function, quantile function and random generation for the generalized beta distribution, as proposed by Libby and Novick (1982). } \usage{ dlino(x, shape1, shape2, lambda = 1, log = FALSE) plino(q, shape1, shape2, lambda = 1, lower.tail = TRUE, log.p = FALSE) qlino(p, shape1, shape2, lambda = 1, lower.tail = TRUE, log.p = FALSE) rlino(n, shape1, shape2, lambda = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Same as in \code{\link[stats]{runif}}. } \item{shape1, shape2, lambda}{ see \code{\link{lino}}. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dlino} gives the density, \code{plino} gives the distribution function, \code{qlino} gives the quantile function, and \code{rlino} generates random deviates. } %\references{ % Libby, D. L. and Novick, M. R. (1982) % Multivariate generalized beta distributions with applications to % utility assessment. % \emph{Journal of Educational Statistics}, % \bold{7}, 271--294. % % Gupta, A. K. and Nadarajah, S. (2004) % \emph{Handbook of Beta Distribution and Its Applications}, % NY: Marcel Dekker, Inc. % %} \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{lino}}, the \pkg{VGAM} family function for estimating the parameters, for the formula of the probability density function and other details. } %\note{ % %} \seealso{ \code{\link{lino}}. } \examples{ \dontrun{ lambda <- 0.4; shape1 <- exp(1.3); shape2 <- exp(1.3) x <- seq(0.0, 1.0, len = 101) plot(x, dlino(x, shape1 = shape1, shape2 = shape2, lambda = lambda), type = "l", col = "blue", las = 1, ylab = "", main = "Blue is density, red is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles") abline(h = 0, col = "blue", lty = 2) lines(x, plino(x, shape1 = shape1, shape2 = shape2, l = lambda), col = "red") probs <- seq(0.1, 0.9, by = 0.1) Q <- qlino(probs, shape1 = shape1, shape2 = shape2, lambda = lambda) lines(Q, dlino(Q, shape1 = shape1, shape2 = shape2, lambda = lambda), col = "purple", lty = 3, type = "h") plino(Q, shape1 = shape1, shape2 = shape2, l = lambda) - probs # Should be all 0 } } \keyword{distribution} VGAM/man/freund61.Rd0000644000176200001440000001550513135276753013522 0ustar liggesusers\name{freund61} \alias{freund61} %- Also NEED an '\alias' for EACH other topic documented here. 
\title{ Freund's (1961) Bivariate Extension of the Exponential Distribution } \description{ Estimate the four parameters of the Freund (1961) bivariate extension of the exponential distribution by maximum likelihood estimation. } \usage{ freund61(la = "loge", lap = "loge", lb = "loge", lbp = "loge", ia = NULL, iap = NULL, ib = NULL, ibp = NULL, independent = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{la, lap, lb, lbp}{ Link functions applied to the (positive) parameters \eqn{\alpha}{alpha}, \eqn{\alpha'}{alpha'}, \eqn{\beta}{beta} and \eqn{\beta'}{beta'}, respectively (the ``\code{p}'' stands for ``prime''). See \code{\link{Links}} for more choices. } \item{ia, iap, ib, ibp}{ Initial values for the four parameters respectively. The default is to estimate them all internally. } \item{independent}{ Logical. If \code{TRUE} then the parameters are constrained to satisfy \eqn{\alpha=\alpha'}{alpha=alpha'} and \eqn{\beta=\beta'}{beta=beta'}, which implies that \eqn{y_1}{y1} and \eqn{y_2}{y2} are independent and each have an ordinary exponential distribution. } \item{zero}{ A vector specifying which linear/additive predictors are modelled as intercepts only. The values can be from the set \{1,2,3,4\}. The default is none of them. See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ This model represents one type of bivariate extension of the exponential distribution that is applicable to certain problems, in particular, to two-component systems which can function if one of the components has failed. Examples include engine failures in two-engine planes, and paired organs such as people's eyes, ears and kidneys. Suppose \eqn{y_1}{y1} and \eqn{y_2}{y2} are random variables representing the lifetimes of two components \eqn{A} and \eqn{B} in a two-component system. The dependence between \eqn{y_1}{y1} and \eqn{y_2}{y2} is essentially such that the failure of the \eqn{B} component changes the parameter of the exponential life distribution of the \eqn{A} component from \eqn{\alpha}{alpha} to \eqn{\alpha'}{alpha'}, while the failure of the \eqn{A} component changes the parameter of the exponential life distribution of the \eqn{B} component from \eqn{\beta}{beta} to \eqn{\beta'}{beta'}. The joint probability density function is given by \deqn{f(y_1,y_2) = \alpha \beta' \exp(-\beta' y_2 - (\alpha+\beta-\beta')y_1) }{% f(y1,y2) = alpha * beta' * exp(-beta' * y2 - (alpha+beta-beta') * y1) } for \eqn{0 < y_1 < y_2}{0 < y1 < y2}, and \deqn{f(y_1,y_2) = \beta \alpha' \exp(-\alpha' y_1 - (\alpha+\beta-\alpha')y_2) }{% f(y1,y2) = beta * alpha' * exp(-alpha' * y1 - (alpha+beta-alpha') * y2) } for \eqn{0 < y_2 < y_1}{0 < y2 < y1}. Here, all four parameters are positive, as well as the responses \eqn{y_1}{y1} and \eqn{y_2}{y2}. Under this model, the probability that component \eqn{A} is the first to fail is \eqn{\alpha/(\alpha+\beta)}{alpha/(alpha+beta)}. The time to the first failure is distributed as an exponential distribution with rate \eqn{\alpha+\beta}{alpha+beta}. Furthermore, the distribution of the time from first failure to failure of the other component is a mixture of Exponential(\eqn{\alpha'}{alpha'}) and Exponential(\eqn{\beta'}{beta'}) with proportions \eqn{\beta/(\alpha+\beta)}{beta/(alpha+beta)} and \eqn{\alpha/(\alpha+\beta)}{alpha/(alpha+beta)} respectively. The marginal distributions are, in general, not exponential.
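% The properties in the preceding paragraph can be illustrated by a
% small simulation sketch (this is not part of \pkg{VGAM}; the
% parameter values and variable names below are arbitrary):
% alpha <- 1; beta <- 2; alphap <- 3; betap <- 4; nn <- 10000
% set.seed(1)
% first <- rexp(nn, rate = alpha + beta)         # Time to 1st failure
% Afirst <- runif(nn) < alpha / (alpha + beta)   # Did A fail first?
% gap <- rexp(nn, rate = ifelse(Afirst, betap, alphap))  # To 2nd failure
% mean(Afirst)  # Should be near alpha / (alpha + beta) = 1/3
% mean(first)   # Should be near 1 / (alpha + beta) = 1/3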
By default, the linear/additive predictors are \eqn{\eta_1=\log(\alpha)}{eta1=log(alpha)}, \eqn{\eta_2=\log(\alpha')}{eta2=log(alpha')}, \eqn{\eta_3=\log(\beta)}{eta3=log(beta)}, \eqn{\eta_4=\log(\beta')}{eta4=log(beta')}. A special case is when \eqn{\alpha=\alpha'}{alpha=alpha'} and \eqn{\beta=\beta'}{beta=beta'}, which means that \eqn{y_1}{y1} and \eqn{y_2}{y2} are independent, and both have an ordinary exponential distribution with means \eqn{1 / \alpha}{1/alpha} and \eqn{1 / \beta}{1/beta} respectively. Fisher scoring is used, and the initial values correspond to the MLEs of an intercept model. Consequently, convergence may take only one iteration. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Freund, J. E. (1961) A bivariate extension of the exponential distribution. \emph{Journal of the American Statistical Association}, \bold{56}, 971--977. } \author{ T. W. Yee } \note{ To estimate all four parameters, it is necessary to have some data where \eqn{y_1 857) stop("suboptimal fit obtained") persp(r1, xlim = c(-6, 5), col = 1:4, label = TRUE) # Involves all species persp(r2, xlim = c(-6, 5), ylim = c(-4, 5), theta = 10, phi = 20, zlim = c(0, 220)) # Omit the two dominant species to see what is behind them persp(r2, xlim = c(-6, 5), ylim = c(-4, 5), theta = 10, phi = 20, zlim = c(0, 220), which = (1:10)[-c(8, 10)]) # Use zlim to retain the original z-scale } } \keyword{models} \keyword{regression} \keyword{graphs} VGAM/man/fff.Rd0000644000176200001440000000624713135276753012630 0ustar liggesusers\name{fff} \alias{fff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ F Distribution Family Function } \description{ Maximum likelihood estimation of the (2-parameter) F distribution. } \usage{ fff(link = "loge", idf1 = NULL, idf2 = NULL, nsimEIM = 100, imethod = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Parameter link function for both parameters. See \code{\link{Links}} for more choices. The default keeps the parameters positive. } \item{idf1, idf2}{ Numeric and positive. Initial values for the parameters. The default is to choose each value internally. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } \item{imethod}{ Initialization method. Either the value 1 or 2. If both fail, try setting values for \code{idf1} and \code{idf2}. } % \item{zero}{ % An integer-valued vector specifying which % linear/additive predictors are modelled as intercepts only. % The value must be from the set \{1,2\}, corresponding % respectively to \eqn{df1} and \eqn{df2}. % By default all linear/additive predictors are modelled as % a linear combination of the explanatory variables. % % % } } \details{ The F distribution is named after Fisher and has a density function that has two parameters, called \code{df1} and \code{df2} here. This function treats these degrees of freedom as \emph{positive reals} rather than integers. The mean of the distribution is \eqn{df2/(df2-2)} provided \eqn{df2>2}, and its variance is \eqn{2 df2^2 (df1+df2-2)/(df1 (df2-2)^2 (df2-4))}{2*df2^2*(df1+df2-2)/ (df1*(df2-2)^2*(df2-4))} provided \eqn{df2>4}. The estimated mean is returned as the fitted values. Although the F distribution can be defined to accommodate a non-centrality parameter \code{ncp}, it is assumed zero here.
It should not be too difficult to handle any known \code{ncp}; this is something to do in the near future. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011) \emph{Statistical Distributions}, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition. } \author{ T. W. Yee } \section{Warning}{ Numerical problems will occur when the estimates of the parameters are too low or too high. } %\note{ % This family function uses the BFGS quasi-Newton update formula for the % working weight matrices. Consequently the estimated variance-covariance % matrix may be inaccurate or simply wrong! The standard errors must be % therefore treated with caution; these are computed in functions such % as \code{vcov()} and \code{summary()}. % %} \seealso{ \code{\link[stats:Fdist]{FDist}}. } \examples{ \dontrun{ fdata <- data.frame(x2 = runif(nn <- 2000)) fdata <- transform(fdata, df1 = exp(2+0.5*x2), df2 = exp(2-0.5*x2)) fdata <- transform(fdata, y = rf(nn, df1, df2)) fit <- vglm(y ~ x2, fff, data = fdata, trace = TRUE) coef(fit, matrix = TRUE) } } \keyword{models} \keyword{regression} VGAM/man/logF.UC.Rd0000644000176200001440000000225013135276753013256 0ustar liggesusers\name{dlogF} \alias{dlogF} % \alias{qnefghs} \title{ log F Distribution } \description{ Density for the log F distribution. % quantile function } \usage{ dlogF(x, shape1, shape2, log = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ Vector of quantiles. } \item{shape1, shape2}{Positive shape parameters. } % \item{p}{vector of probabilities.} % \item{n}{number of observations. A single positive integer.} \item{log}{ if \code{TRUE} then the log density is returned, else the density. } } \details{ The details are given in \code{\link{logF}}. } \value{ \code{dlogF} gives the density. % \code{pnefghs} gives the distribution function, and % \code{qnefghs} gives the quantile function, and % \code{rnefghs} generates random deviates. } %\references{ % % % %} \author{ T. W. Yee } %\note{ % %} \seealso{ \code{\link{hypersecant}}. % \code{\link{simulate.vlm}}. } \examples{ \dontrun{ shape1 <- 1.5; shape2 <- 0.5; x <- seq(-5, 8, length = 1001) plot(x, dlogF(x, shape1, shape2), type = "l", las = 1, col = "blue", ylab = "pdf", main = "log F density function") } } \keyword{distribution} VGAM/man/logistic.Rd0000644000176200001440000000741613135276753013707 0ustar liggesusers\name{logistic} \alias{logistic} \alias{logistic1} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Logistic Distribution Family Function } \description{ Estimates the location and scale parameters of the logistic distribution by maximum likelihood estimation. } \usage{ logistic1(llocation = "identitylink", scale.arg = 1, imethod = 1) logistic(llocation = "identitylink", lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = "scale") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation, lscale}{ Parameter link functions applied to the location parameter \eqn{l} and scale parameter \eqn{s}. See \code{\link{Links}} for more choices, and \code{\link{CommonVGAMffArguments}} for more information. } \item{scale.arg}{ Known positive scale parameter (called \eqn{s} below). } \item{ilocation, iscale}{ See \code{\link{CommonVGAMffArguments}} for information.
} \item{imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The two-parameter logistic distribution has a density that can be written \deqn{f(y;l,s) = \frac{\exp[-(y-l)/s]}{ s\left( 1 + \exp[-(y-l)/s] \right)^2}}{% f(y;l,s) = exp[-(y-l)/s] / [s * ( 1 + exp[-(y-l)/s] )^2] } where \eqn{s > 0} is the scale parameter, and \eqn{l} is the location parameter. The response \eqn{-\infty0}{s1>0} and \eqn{s_2>0}{s2>0} are the scale parameters, and \eqn{l_1}{l1} and \eqn{l_2}{l2} are the location parameters. Each of the two responses is unbounded, i.e., \eqn{-\infty a}, \eqn{b>0}, \eqn{g>0} and \eqn{s>0}. The \eqn{a}{a} is called the \emph{location} parameter, \eqn{b} the \emph{scale} parameter, \eqn{g} the \emph{inequality} parameter, and \eqn{s} the \emph{shape} parameter. The location parameter is assumed known, otherwise the Pareto(IV) distribution will not be a regular family. This assumption is not too restrictive in modelling because in typical applications this parameter is known, e.g., in insurance and reinsurance it is pre-defined by a contract and can be represented as a deductible or a retention level. The inequality parameter is so-called because of its interpretation in the economics context. If we choose a unit shape parameter value and a zero location parameter value, then the inequality parameter is the Gini index of inequality, provided \eqn{g \leq 1}{g<=1}. The fitted values are currently the median, e.g., \code{\link{qparetoIV}} is used for \code{paretoIV()}. % The fitted values are currently \code{NA} because I % haven't worked out what the mean of \eqn{Y} is yet. % The mean of \eqn{Y} is % \eqn{\alpha k/(k-1)}{alpha*k/(k-1)} provided \eqn{k>1}. % Its variance is % \eqn{\alpha^2 k /((k-1)^2 (k-2))}{alpha^2 k /((k-1)^2 (k-2))} % provided \eqn{k>2}. % The maximum likelihood estimator for the location parameter is % \code{min(y)}, i.e., the smallest response value. There are a number of special cases of the Pareto(IV) distribution. These include the Pareto(I), Pareto(II), Pareto(III), and Burr family of distributions. Denoting \eqn{PIV(a,b,g,s)} as the Pareto(IV) distribution, the Burr distribution \eqn{Burr(b,g,s)} is \eqn{PIV(a=0,b,1/g,s)}, the Pareto(III) distribution \eqn{PIII(a,b,g)} is \eqn{PIV(a,b,g,s=1)}, the Pareto(II) distribution \eqn{PII(a,b,s)} is \eqn{PIV(a,b,g=1,s)}, and the Pareto(I) distribution \eqn{PI(b,s)} is \eqn{PIV(b,b,g=1,s)}. Thus the Burr distribution can be fitted using the \code{\link{negloge}} link function and using the default \code{location=0} argument. The Pareto(I) distribution can be fitted using \code{\link{paretoff}} but there is a slight change in notation: \eqn{s=k} and \eqn{b=\alpha}{b=alpha}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Johnson N. L., Kotz S., and Balakrishnan N. (1994) \emph{Continuous Univariate Distributions, Volume 1}, 2nd ed. New York: Wiley. Brazauskas, V. (2003) Information matrix for Pareto(IV), Burr, and related distributions. \emph{Comm. Statist. Theory and Methods} \bold{32}, 315--325. Arnold, B. C. (1983) \emph{Pareto Distributions}. Fairland, Maryland: International Cooperative Publishing House. } \author{ T. W. Yee } \note{ The \code{extra} slot of the fitted object has a component called \code{"location"} which stores the location parameter value(s).
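% For a fitted model, say fit, this component could be retrieved with
% something like the following (a sketch only; 'fit' is hypothetical):
%   fit@extra$location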
} \section{Warning }{ The Pareto(IV) distribution is very general, for example, special cases include the Pareto(I), Pareto(II), Pareto(III), and Burr family of distributions. [Johnson et al. (1994) says on p.19 that fitting Type IV by ML is very difficult and rarely attempted]. Consequently, reasonably good initial values are recommended, and convergence to a local solution may occur. For this reason setting \code{trace=TRUE} is a good idea for monitoring the convergence. Large samples are ideally required to get reasonable results. } \seealso{ \code{\link{ParetoIV}}, \code{\link{paretoff}}, \code{\link{gpd}}. } \examples{ pdata <- data.frame(y = rparetoIV(2000, scale = exp(1), ineq = exp(-0.3), shape = exp(1))) \dontrun{par(mfrow = c(2, 1)) with(pdata, hist(y)); with(pdata, hist(log(y))) } fit <- vglm(y ~ 1, paretoIV, data = pdata, trace = TRUE) head(fitted(fit)) summary(pdata) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/simplex.Rd0000644000176200001440000000627013135276753013550 0ustar liggesusers\name{simplex} \alias{simplex} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Simplex Distribution Family Function } \description{ The two parameters of the univariate standard simplex distribution are estimated by full maximum likelihood estimation. } \usage{ simplex(lmu = "logit", lsigma = "loge", imu = NULL, isigma = NULL, imethod = 1, ishrinkage = 0.95, zero = "sigma") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu, lsigma}{ Link function for \code{mu} and \code{sigma}. See \code{\link{Links}} for more choices. } \item{imu, isigma}{ Optional initial values for \code{mu} and \code{sigma}. A \code{NULL} means a value is obtained internally. } \item{imethod, ishrinkage, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The probability density function can be written \deqn{f(y; \mu, \sigma) = [2 \pi \sigma^2 (y (1-y))^3]^{-0.5} \exp[-0.5 (y-\mu)^2 / (\sigma^2 y (1-y) \mu^2 (1-\mu)^2)] }{% f(y; mu, sigma) = [2* pi * sigma^2 * (y*(1-y))^3]^(-0.5) * exp[-0.5 * (y-mu)^2 / (sigma^2 * y * (1-y) * mu^2 * (1-mu)^2)] } for \eqn{0 < y < 1}, \eqn{0 < \mu < 1}{0 < mu < 1}, and \eqn{\sigma > 0}{sigma > 0}. The mean of \eqn{Y} is \eqn{\mu}{mu} (called \code{mu}, and returned as the fitted values). % This comes from Jorgensen but it is not confirmed by simulations: % The variance of \eqn{Y} is \eqn{\mu (1 - \mu) - \sqrt{ \lambda / 2} % \exp\{ \lambda / (\mu^2 (1 - \mu)^2) \} % \Gamma(\lambda / (2 \mu^2 (1 - \mu)^2), 0.5)}{ % mu * (1 - mu) - sqrt(lambda / 2) * % exp(lambda / (mu^2 * (1 - mu)^2)) * % Gamma(lambda / (2 * mu^2 * (1 - mu)^2), 0.5)}. % Here, \eqn{\Gamma(x, a)}{Gamma(x, a)} is the % `upper' normalized incomplete gamma function given by % \code{pgamma(x, a, lower = FALSE) * gamma(a)}. The second parameter, \code{sigma}, of this standard simplex distribution is known as the dispersion parameter. The unit variance function is \eqn{V(\mu) = \mu^3 (1-\mu)^3}{V(mu) = mu^3 (1-mu)^3}. Fisher scoring is applied to both parameters. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Jorgensen, B. (1997) \emph{The Theory of Dispersion Models}. London: Chapman & Hall Song, P. X.-K. (2007) \emph{Correlated Data Analysis: Modeling, Analytics, and Applications}. Springer. } \author{ T. W. 
Yee } \note{ This distribution is potentially useful for dispersion modelling. Numerical problems may occur when \code{mu} is very close to 0 or 1. } \seealso{ \code{\link{dsimplex}}, \code{\link{dirichlet}}, \code{\link{rig}}, \code{\link{binomialff}}. } \examples{ sdata <- data.frame(x2 = runif(nn <- 1000)) sdata <- transform(sdata, eta1 = 1 + 2 * x2, eta2 = 1 - 2 * x2) sdata <- transform(sdata, y = rsimplex(nn, mu = logit(eta1, inverse = TRUE), dispersion = exp(eta2))) (fit <- vglm(y ~ x2, simplex(zero = NULL), data = sdata, trace = TRUE)) coef(fit, matrix = TRUE) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/skewnormal.Rd0000644000176200001440000000661513135276753014252 0ustar liggesusers\name{skewnormal} \alias{skewnormal} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Univariate Skew-Normal Distribution Family Function } \description{ Maximum likelihood estimation of the shape parameter of a univariate skew-normal distribution. } \usage{ skewnormal(lshape = "identitylink", ishape = NULL, nsimEIM = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape, ishape, nsimEIM}{ See \code{\link{Links}} and \code{\link{CommonVGAMffArguments}}. } } \details{ The univariate skew-normal distribution has a density function that can be written \deqn{f(y) = 2 \, \phi(y) \, \Phi(\alpha y)}{% f(y) = 2 * phi(y) * Phi(alpha * y)} where \eqn{\alpha}{alpha} is the shape parameter. Here, \eqn{\phi}{phi} is the standard normal density and \eqn{\Phi}{Phi} its cumulative distribution function. When \eqn{\alpha=0}{alpha=0} the result is a standard normal distribution. When \eqn{\alpha=1}{alpha=1} it models the distribution of the maximum of two independent standard normal variates. When the absolute value of the shape parameter increases, the skewness of the distribution increases. The limit as the shape parameter tends to positive infinity results in the folded normal distribution or half-normal distribution. When the shape parameter changes its sign, the density is reflected about \eqn{y=0}. The mean of the distribution is \eqn{\mu=\alpha \sqrt{2/(\pi (1+\alpha^2))}}{mu=alpha*sqrt(2/(pi*(1+alpha^2)))} and these are returned as the fitted values. The variance of the distribution is \eqn{1-\mu^2}{1-mu^2}. The Newton-Raphson algorithm is used unless the \code{nsimEIM} argument is used. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Azzalini, A. (1985) A class of distributions which include the normal. \emph{Scandinavian Journal of Statistics}, \bold{12}, 171--178. Azzalini, A. and Capitanio, A. (1999) Statistical applications of the multivariate skew-normal distribution. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{61}, 579--602. } \author{ Thomas W. Yee } \note{ It is a good idea to use several different initial values to ensure that the global solution is obtained. This family function will be modified (hopefully soon) to handle a location and scale parameter too. } \section{Warning }{ It is well known that the EIM of Azzalini's skew-normal distribution is singular as the skewness parameter tends to zero, and this causes inferential problems. } \seealso{ \code{\link{skewnorm}}, \code{\link{uninormal}}, \code{\link{foldnormal}}.
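% As a quick arithmetic check of the mean formula in the details
% above: for shape = 5 (the value used in the first example below),
%   5 * sqrt(2 / (pi * (1 + 5^2)))  # approx 0.7824
% which can be compared with head(fitted(fit1), 1) and mean(y1).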
} \examples{ sdata <- data.frame(y1 = rskewnorm(nn <- 1000, shape = 5)) fit1 <- vglm(y1 ~ 1, skewnormal, data = sdata, trace = TRUE) coef(fit1, matrix = TRUE) head(fitted(fit1), 1) with(sdata, mean(y1)) \dontrun{ with(sdata, hist(y1, prob = TRUE)) x <- with(sdata, seq(min(y1), max(y1), len = 200)) with(sdata, lines(x, dskewnorm(x, shape = Coef(fit1)), col = "blue")) } sdata <- data.frame(x2 = runif(nn)) sdata <- transform(sdata, y2 = rskewnorm(nn, shape = 1 + 2*x2)) fit2 <- vglm(y2 ~ x2, skewnormal, data = sdata, trace = TRUE, crit = "coef") summary(fit2) } \keyword{models} \keyword{regression} VGAM/man/zigeometric.Rd0000644000176200001440000001254413135276753014411 0ustar liggesusers\name{zigeometric} \alias{zigeometric} \alias{zigeometricff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Inflated Geometric Distribution Family Function } \description{ Fits a zero-inflated geometric distribution by maximum likelihood estimation. } \usage{ zigeometric(lpstr0 = "logit", lprob = "logit", type.fitted = c("mean", "prob", "pobs0", "pstr0", "onempstr0"), ipstr0 = NULL, iprob = NULL, imethod = 1, bias.red = 0.5, zero = NULL) zigeometricff(lprob = "logit", lonempstr0 = "logit", type.fitted = c("mean", "prob", "pobs0", "pstr0", "onempstr0"), iprob = NULL, ionempstr0 = NULL, imethod = 1, bias.red = 0.5, zero = "onempstr0") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lpstr0, lprob}{ Link functions for the parameters \eqn{\phi}{phi} and \eqn{p}{prob} (\code{prob}). The usual geometric probability parameter is the latter. The probability of a structural zero is the former. See \code{\link{Links}} for more choices. For the zero-\emph{deflated} model see below. } % \item{eprob, epstr0}{ eprob = list(), epstr0 = list(), % List. Extra argument for the respective links. % See \code{earg} in \code{\link{Links}} for general information. % } \item{lonempstr0, ionempstr0}{ Corresponding arguments for the other parameterization. See details below. } \item{bias.red}{ A constant used in the initialization process of \code{pstr0}. It should lie between 0 and 1, with 1 having no effect. } \item{type.fitted}{ See \code{\link{CommonVGAMffArguments}} and \code{\link{fittedvlm}} for information. } \item{ipstr0, iprob}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{zero, imethod}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ Function \code{zigeometric()} is based on \deqn{P(Y=0) = \phi + (1-\phi) p,}{% P(Y=0) = phi + (1-phi) * prob,} for \eqn{y=0}, and \deqn{P(Y=y) = (1-\phi) p (1 - p)^{y}.}{% P(Y=y) = (1-phi) * prob * (1 - prob)^y.} for \eqn{y=1,2,\ldots}. The parameter \eqn{\phi}{phi} satisfies \eqn{0 < \phi < 1}{0 < phi < 1}. The mean of \eqn{Y} is \eqn{E(Y)=(1-\phi) p / (1-p)}{E(Y) = (1-phi) * prob / (1-prob)} and these are returned as the fitted values by default. By default, the two linear/additive predictors are \eqn{(logit(\phi), logit(p))^T}{(logit(phi), logit(prob))^T}. Multiple responses are handled. % 20130316: Estimated probabilities of a structural zero and an observed zero can be returned, as in \code{\link{zipoisson}}; see \code{\link{fittedvlm}} for information. The \pkg{VGAM} family function \code{zigeometricff()} has a few changes compared to \code{zigeometric()}. 
These are: (i) the order of the linear/additive predictors is switched so the geometric probability comes first; (ii) argument \code{onempstr0} is now 1 minus the probability of a structural zero, i.e., the probability of the parent (geometric) component, i.e., \code{onempstr0} is \code{1-pstr0}; (iii) argument \code{zero} has a new default so that the \code{onempstr0} is intercept-only by default. Now \code{zigeometricff()} is generally recommended over \code{zigeometric()}. Both functions implement Fisher scoring and can handle multiple responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } %\references{ %} \author{ T. W. Yee } \note{ % Numerical problems may occur since the initial values are currently % not very good. The zero-\emph{deflated} geometric distribution might be fitted by setting \code{lpstr0 = identitylink}, albeit not entirely reliably. See \code{\link{zipoisson}} for information that can be applied here. Else try the zero-altered geometric distribution (see \code{\link{zageometric}}). } %\section{Warning }{ % Numerical problems can occur. % Half-stepping is not uncommon. % If failure to converge occurs, make use of the argument \code{ipstr0}. % %} \seealso{ \code{\link{rzigeom}}, \code{\link{geometric}}, \code{\link{zageometric}}, \code{\link[stats]{rgeom}}, \code{\link{simulate.vlm}}. } \examples{ gdata <- data.frame(x2 = runif(nn <- 1000) - 0.5) gdata <- transform(gdata, x3 = runif(nn) - 0.5, x4 = runif(nn) - 0.5) gdata <- transform(gdata, eta1 = 1.0 - 1.0 * x2 + 2.0 * x3, eta2 = -1.0, eta3 = 0.5) gdata <- transform(gdata, prob1 = logit(eta1, inverse = TRUE), prob2 = logit(eta2, inverse = TRUE), prob3 = logit(eta3, inverse = TRUE)) gdata <- transform(gdata, y1 = rzigeom(nn, prob1, pstr0 = prob3), y2 = rzigeom(nn, prob2, pstr0 = prob3), y3 = rzigeom(nn, prob2, pstr0 = prob3)) with(gdata, table(y1)) with(gdata, table(y2)) with(gdata, table(y3)) head(gdata) fit1 <- vglm(y1 ~ x2 + x3 + x4, zigeometric(zero = 1), data = gdata, trace = TRUE) coef(fit1, matrix = TRUE) head(fitted(fit1, type = "pstr0")) fit2 <- vglm(cbind(y2, y3) ~ 1, zigeometric(zero = 1), data = gdata, trace = TRUE) coef(fit2, matrix = TRUE) summary(fit2) } \keyword{models} \keyword{regression} VGAM/man/has.intercept.Rd0000644000176200001440000000324613135276753014636 0ustar liggesusers\name{has.interceptvlm} %\name{confint} \alias{has.intercept} %\alias{has.intercept.vlm} \alias{has.interceptvlm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Has a Fitted VGLM Got an Intercept Term? } \description{ Looks at the \code{formula} to see if it has an intercept term. } \usage{ has.intercept(object, \dots) has.interceptvlm(object, form.number = 1, \dots) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A fitted model object. } \item{form.number}{Formula number, either 1 or 2, corresponding to the arguments \code{formula} and \code{form2} respectively. } \item{\dots}{Arguments that might be passed from one function to another. } } \details{ This methods function is a simple way to determine whether a fitted \code{\link{vglm}} object etc. has an intercept term or not. It is not entirely foolproof because one might suppress the intercept from the formula and then add to the formula a variable that has a constant value. } \value{ Returns a single logical. } %\references{ %} \author{ Thomas W.
Yee } %\note{ %} %\section{Warning }{ %} \seealso{ \code{\link{formulavlm}}, \code{termsvlm}. } \examples{ # Example: this is based on a glm example counts <- c(18,17,15,20,10,20,25,13,12) outcome <- gl(3, 1, 9); treatment <- gl(3, 3) pdata <- data.frame(counts, outcome, treatment) # Better style vglm.D93 <- vglm(counts ~ outcome + treatment, poissonff, data = pdata) formula(vglm.D93) term.names(vglm.D93) responseName(vglm.D93) has.intercept(vglm.D93) } \keyword{models} \keyword{regression} % \method{has.intercept}{vlm}(object, \dots) VGAM/man/betanormUC.Rd0000644000176200001440000000542513135276753014127 0ustar liggesusers\name{Betanorm} \alias{Betanorm} \alias{dbetanorm} \alias{pbetanorm} \alias{qbetanorm} \alias{rbetanorm} \title{The Beta-Normal Distribution} \description{ Density, distribution function, quantile function and random generation for the univariate beta-normal distribution. } \usage{ dbetanorm(x, shape1, shape2, mean = 0, sd = 1, log = FALSE) pbetanorm(q, shape1, shape2, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE) qbetanorm(p, shape1, shape2, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE) rbetanorm(n, shape1, shape2, mean = 0, sd = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Same as \code{\link[stats]{runif}}. } \item{shape1, shape2}{ the two (positive) shape parameters of the standard beta distribution. They are called \code{a} and \code{b} respectively in \code{\link[base:Special]{beta}}. } \item{mean, sd}{ the mean and standard deviation of the univariate normal distribution (\code{\link[stats:Normal]{Normal}}). } \item{log, log.p}{ Logical. If \code{TRUE} then all probabilities \code{p} are given as \code{log(p)}. } \item{lower.tail}{ Logical. If \code{FALSE} then the upper tail is returned, i.e., one minus the usual answer. } } \value{ \code{dbetanorm} gives the density, \code{pbetanorm} gives the distribution function, \code{qbetanorm} gives the quantile function, and \code{rbetanorm} generates random deviates. } \references{ pp.146--152 of Gupta, A. K. and Nadarajah, S. (2004) \emph{Handbook of Beta Distribution and Its Applications}, New York: Marcel Dekker. } \author{ T. W. Yee } \details{ The function \code{betauninormal}, the \pkg{VGAM} family function for estimating the parameters, has not yet been written. % for the formula of the probability density function and other details. } %\note{ %} %\seealso{ % zz code{link{betauninormal}}. %} \examples{ \dontrun{ shape1 <- 0.1; shape2 <- 4; m <- 1 x <- seq(-10, 2, len = 501) plot(x, dbetanorm(x, shape1, shape2, m = m), type = "l", ylim = 0:1, las = 1, ylab = paste("betanorm(",shape1,", ",shape2,", m=",m, ", sd=1)", sep = ""), main = "Blue is density, orange is cumulative distribution function", sub = "Gray lines are the 10,20,...,90 percentiles", col = "blue") lines(x, pbetanorm(x, shape1, shape2, m = m), col = "orange") abline(h = 0, col = "black") probs <- seq(0.1, 0.9, by = 0.1) Q <- qbetanorm(probs, shape1, shape2, m = m) lines(Q, dbetanorm(Q, shape1, shape2, m = m), col = "gray50", lty = 2, type = "h") lines(Q, pbetanorm(Q, shape1, shape2, m = m), col = "gray50", lty = 2, type = "h") abline(h = probs, col = "gray50", lty = 2) pbetanorm(Q, shape1, shape2, m = m) - probs # Should be all 0 } } \keyword{distribution} VGAM/man/exponential.Rd0000644000176200001440000000643613135276753014415 0ustar liggesusers\name{exponential} \alias{exponential} %- Also NEED an '\alias' for EACH other topic documented here.
\title{ Exponential Distribution } \description{ Maximum likelihood estimation for the exponential distribution. } \usage{ exponential(link = "loge", location = 0, expected = TRUE, ishrinkage = 0.95, parallel = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Parameter link function applied to the positive parameter \eqn{rate}. See \code{\link{Links}} for more choices. } \item{location}{ Numeric of length 1, the known location parameter, \eqn{A}, say. } \item{expected}{ Logical. If \code{TRUE} Fisher scoring is used, otherwise Newton-Raphson. The latter is usually faster. } \item{ishrinkage, parallel, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The family function assumes the response \eqn{Y} has density \deqn{f(y) = \lambda \exp(-\lambda (y-A))}{% f(y) = rate * exp(-rate * (y-A)) } for \eqn{y > A}, where \eqn{A} is the known location parameter. By default, \eqn{A=0}. Then \eqn{E(Y) = A + 1/ \lambda}{E(Y) = A + 1/rate} and \eqn{Var(Y) = 1/ \lambda^2}{Var(Y) = 1/rate^2}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011) \emph{Statistical Distributions}, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition. } \author{ T. W. Yee } \note{ Suppose \eqn{A = 0}. For a fixed time interval, the number of events is Poisson with mean \eqn{\lambda}{rate} if the time between events has an exponential distribution with mean \eqn{\lambda^{-1}}{1/rate}. The argument \code{rate} in \code{exponential} is the same as in \code{\link[stats:Exponential]{rexp}} etc. The argument \code{lambda} in \code{\link{rpois}} plays a similar role to \code{rate} here. } \seealso{ \code{\link{amlexponential}}, \code{\link{gpd}}, \code{\link{laplace}}, \code{\link{expgeometric}}, \code{\link{explogff}}, \code{\link{poissonff}}, \code{\link{mix2exp}}, \code{\link{freund61}}, \code{\link{simulate.vlm}}, \code{\link[stats]{Exponential}}. % \code{\link{cens.exponential}}, } \examples{ edata <- data.frame(x2 = runif(nn <- 100) - 0.5) edata <- transform(edata, x3 = runif(nn) - 0.5) edata <- transform(edata, eta = 0.2 - 0.7 * x2 + 1.9 * x3) edata <- transform(edata, rate = exp(eta)) edata <- transform(edata, y = rexp(nn, rate = rate)) with(edata, stem(y)) fit.slow <- vglm(y ~ x2 + x3, exponential, data = edata, trace = TRUE) fit.fast <- vglm(y ~ x2 + x3, exponential(exp = FALSE), data = edata, trace = TRUE, crit = "coef") coef(fit.slow, mat = TRUE) summary(fit.slow) # Compare results with a GPD. Has a threshold. threshold <- 0.5 gdata <- data.frame(y1 = threshold + rexp(n = 3000, rate = exp(1.5))) fit.exp <- vglm(y1 ~ 1, exponential(location = threshold), data = gdata) coef(fit.exp, matrix = TRUE) Coef(fit.exp) logLik(fit.exp) fit.gpd <- vglm(y1 ~ 1, gpd(threshold = threshold), data = gdata) coef(fit.gpd, matrix = TRUE) Coef(fit.gpd) logLik(fit.gpd) } \keyword{models} \keyword{regression} VGAM/man/multilogit.Rd0000644000176200001440000000645313135276753014263 0ustar liggesusers\name{multilogit} \alias{multilogit} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Multi-logit Link Function } \description{ Computes the multilogit transformation, including its inverse and the first two derivatives.
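% A small numeric sketch of the transformation described below (the
% probability values here are arbitrary and for illustration only):
%   probs <- cbind(0.3, 0.5, 0.2)   # M+1 = 3 classes; sums to unity
%   log(probs[, 1:2] / probs[, 3])  # Last level as the reference
%   multilogit(probs)               # Should agree with the above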
} \usage{ multilogit(theta, refLevel = "(Last)", M = NULL, whitespace = FALSE, bvalue = NULL, inverse = FALSE, deriv = 0, all.derivs = FALSE, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{refLevel, M, whitespace}{ See \code{\link{multinomial}}. } \item{bvalue}{ See \code{\link{Links}}. } \item{all.derivs}{ Logical. This is currently experimental only. % If \code{TRUE} then more partial derivatives are returned; % these is needed by, e.g., % \code{\link{hdeff.vglm}} for \code{\link{multinomial}} fits. % This argument might work for only some combinations of % the arguments, e.g., it should work at least for % \code{inverse = TRUE} and \code{deriv = 1}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The \code{multilogit()} link function is a generalization of the \code{\link{logit}} link to \eqn{M} levels/classes. It forms the basis of the \code{\link{multinomial}} logit model. It is sometimes called the \emph{multi-logit} link or the \emph{multinomial logit} link. When its inverse function is computed it returns values which are positive and add to unity. } \value{ For \code{multilogit} with \code{deriv = 0}, the multilogit of \code{theta}, i.e., \code{log(theta[, j]/theta[, M+1])} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{exp(theta[, j])/(1+rowSums(exp(theta)))}. For \code{deriv = 1}, then the function returns \emph{d} \code{eta} / \emph{d} \code{theta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. Here, all logarithms are natural logarithms, i.e., to base \emph{e}. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ Numerical instability may occur when \code{theta} is close to 1 or 0 (for \code{multilogit}). One way of overcoming this is to use, e.g., \code{bvalue}. Currently \code{care.exp()} is used to avoid \code{NA}s being returned if the probability is too close to 1. } \seealso{ \code{\link{Links}}, \code{\link{multinomial}}, \code{\link{logit}}, \code{\link{normal.vcm}}, \code{\link{CommonVGAMffArguments}}. } \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) fit <- vglm(cbind(normal, mild, severe) ~ let, multinomial, trace = TRUE, data = pneumo) # For illustration only! fitted(fit) predict(fit) multilogit(fitted(fit)) multilogit(fitted(fit)) - predict(fit) # Should be all 0s multilogit(predict(fit), inverse = TRUE) # rowSums() add to unity multilogit(predict(fit), inverse = TRUE, refLevel = 1) # For illustration only multilogit(predict(fit), inverse = TRUE) - fitted(fit) # Should be all 0s multilogit(fitted(fit), deriv = 1) multilogit(fitted(fit), deriv = 2) } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/exppoisson.Rd0000644000176200001440000000503013135276753014267 0ustar liggesusers\name{exppoisson} \alias{exppoisson} %- Also NEED an '\alias' for EACH other topic documented here. \title{Exponential Poisson Distribution Family Function} \description{ Estimates the two parameters of the exponential Poisson distribution by maximum likelihood estimation. } \usage{ exppoisson(lrate = "loge", lshape = "loge", irate = 2, ishape = 1.1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape, lrate}{ Link function for the two positive parameters. 
See \code{\link{Links}} for more choices. } \item{ishape, irate}{ Numeric. Initial values for the \code{shape} and \code{rate} parameters. Currently this function is not intelligent enough to obtain better initial values. } \item{zero}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ The exponential Poisson distribution has density function \deqn{f(y; \beta = rate, \lambda = shape) = \frac{\lambda \beta}{1 - e^{-\lambda}} \, e^{-\lambda - \beta y + \lambda \exp{(-\beta y)}}}{% f(y; a = shape, b = rate) = (a*b/(1 - e^(-a))) * e^{-a - b*y + a * e^(-b*y)}} where \eqn{y > 0}, and the parameters shape, \eqn{\lambda}{a}, and rate, \eqn{\beta}{b}, are positive. The distribution implies a population facing discrete hazard rates which are multiples of a base hazard. This \pkg{VGAM} family function requires the \code{hypergeo} package (to use its \code{genhypergeo} function). The median is returned as the fitted value. % This \pkg{VGAM} family function requires the \pkg{hypergeo} package % (to use their \code{\link[hypergeo]{genhypergeo}} function). } \section{Warning }{ This \pkg{VGAM} family function does not work properly! } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Kus, C. (2007). A new lifetime distribution. \emph{Computational Statistics and Data Analysis}, \bold{51}, 4497--4509. } \author{ J. G. Lauder, jamesglauder@gmail.com } \seealso{ \code{\link{dexppois}}, \code{\link{exponential}}, \code{\link{poisson}}. } \examples{ \dontrun{ shape <- exp(1); rate <- exp(2) rdata <- data.frame(y = rexppois(n = 1000, rate = rate, shape = shape)) library("hypergeo") # Required! fit <- vglm(y ~ 1, exppoisson, data = rdata, trace = FALSE, maxit = 1200) c(with(rdata, median(y)), head(fitted(fit), 1)) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } } \keyword{models} \keyword{regression} VGAM/man/diffzetaUC.Rd0000644000176200001440000000371113135276753014110 0ustar liggesusers\name{Diffzeta} \alias{Diffzeta} \alias{ddiffzeta} \alias{pdiffzeta} \alias{qdiffzeta} \alias{rdiffzeta} \title{ Differenced Zeta Distribution } \description{ Density, distribution function, quantile function, and random generation for the differenced zeta distribution. } \usage{ ddiffzeta(x, shape, start = 1, log = FALSE) pdiffzeta(q, shape, start = 1, lower.tail = TRUE) qdiffzeta(p, shape, start = 1) rdiffzeta(n, shape, start = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q, p, n}{ Same as in \code{\link[stats]{runif}}. } \item{shape, start}{ Details at \code{\link{diffzeta}}. % For \code{rdiffzeta()} this pa%arameter must be of length 1. } \item{log, lower.tail}{ Same as in \code{\link[stats]{runif}}. } } \details{ This distribution appears to work well for modelling the distribution of English words in texts. Some more details are given in \code{\link{diffzeta}}. } \value{ \code{ddiffzeta} gives the density, \code{pdiffzeta} gives the distribution function, \code{qdiffzeta} gives the quantile function, and \code{rdiffzeta} generates random deviates. } %\references{ %} \author{ T. W. Yee } \note{ Given some response data, the \pkg{VGAM} family function \code{\link{diffzeta}} estimates the parameter \code{shape}. Function \code{pdiffzeta()} suffers from the problems that \code{\link{plog}} sometimes has, i.e., when \code{p} is very close to 1. } \seealso{ \code{\link{diffzeta}}, \code{\link{zetaff}}, \code{\link{zipf}}, \code{\link{Oizeta}}.
} \examples{ ddiffzeta(1:20, 0.5, start = 2) rdiffzeta(20, 0.5) \dontrun{ shape <- 0.8; x <- 1:10
plot(x, ddiffzeta(x, shape = shape), type = "h", ylim = 0:1, sub = "shape=0.8",
las = 1, col = "blue", ylab = "Probability",
main = "Differenced zeta distribution: blue=PMF; orange=CDF")
lines(x + 0.1, pdiffzeta(x, shape = shape), col = "orange", lty = 3, type = "h") } }
\keyword{distribution} VGAM/man/rayleigh.Rd0000644000176200001440000001046613135276753013675 0ustar liggesusers\name{rayleigh} \alias{rayleigh} \alias{cens.rayleigh} %- Also NEED an '\alias' for EACH other topic documented here. \title{Rayleigh Distribution Family Function } \description{ Estimating the parameter of the Rayleigh distribution by maximum likelihood estimation. Right-censoring is allowed. } \usage{ rayleigh(lscale = "loge", nrfs = 1/3 + 0.01, oim.mean = TRUE, zero = NULL) cens.rayleigh(lscale = "loge", oim = TRUE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lscale}{ Parameter link function applied to the scale parameter \eqn{b}. See \code{\link{Links}} for more choices. A log link is the default because \eqn{b} is positive. } \item{nrfs}{ Numeric, of length one, with value in \eqn{[0,1]}. Weighting factor between Newton-Raphson and Fisher scoring. The value 0 means pure Newton-Raphson, while 1 means pure Fisher scoring. The default value uses a mixture of the two algorithms, and retains positive-definite working weights. } \item{oim.mean}{ Logical, used only for intercept-only models. \code{TRUE} means the mean of the OIM elements is used as the working weights. If \code{TRUE} then this argument has top priority for working out the working weights. \code{FALSE} means use another algorithm. } \item{oim}{ Logical. For censored data only, \code{TRUE} means the Newton-Raphson algorithm, and \code{FALSE} means Fisher scoring. } \item{zero}{ Details at \code{\link{CommonVGAMffArguments}}. } } \details{ The Rayleigh distribution, which is used in physics, has a probability density function that can be written \deqn{f(y) = y \exp(-0.5 (y/b)^2)/b^2}{% f(y) = y*exp(-0.5*(y/b)^2)/b^2} for \eqn{y > 0} and \eqn{b > 0}. The mean of \eqn{Y} is \eqn{b \sqrt{\pi / 2}}{b * sqrt(pi / 2)} (returned as the fitted values) and its variance is \eqn{b^2 (4-\pi)/2}{b^2 (4-pi)/2}. The \pkg{VGAM} family function \code{cens.rayleigh} handles right-censored data (the true value is greater than the observed value). To indicate which type of censoring, input \code{extra = list(rightcensored = vec2)} where \code{vec2} is a logical vector the same length as the response. If this component of the list is missing then the logical values are taken to be \code{FALSE}. The fitted object has this component stored in the \code{extra} slot. The \pkg{VGAM} family function \code{rayleigh} handles multiple responses. } \section{Warning}{ The theory behind the argument \code{oim} is not yet complete. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011) \emph{Statistical Distributions}, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition. } \author{ T. W. Yee } \note{ The \code{\link{poisson.points}} family function is more general so that if \code{ostatistic = 1} and \code{dimension = 2} then it coincides with \code{\link{rayleigh}}. Other related distributions are the Maxwell and Weibull distributions.
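% A quick numerical check of the mean and variance formulas above
% (a sketch only; the scale value is arbitrary):
% b <- exp(1)
% ys <- rrayleigh(10^5, scale = b)
% c(mean(ys), b * sqrt(pi / 2))    # Should be close
% c(var(ys), b^2 * (4 - pi) / 2)   # Should be close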
% http://www.math.uah.edu/stat/special/MultiNormal.html % The distribution of R is known as the Rayleigh distribution, %named for William Strutt, Lord Rayleigh. It is a member of the %family of Weibull distributions, named in turn for Waloddi Weibull. } \seealso{ \code{\link{Rayleigh}}, \code{\link{genrayleigh}}, \code{\link{riceff}}, \code{\link{maxwell}}, \code{\link{weibullR}}, \code{\link{poisson.points}}, \code{\link{simulate.vlm}}. } \examples{ nn <- 1000; Scale <- exp(2)
rdata <- data.frame(ystar = rrayleigh(nn, scale = Scale))
fit <- vglm(ystar ~ 1, rayleigh, data = rdata, trace = TRUE, crit = "coef")
head(fitted(fit))
with(rdata, mean(ystar))
coef(fit, matrix = TRUE)
Coef(fit)
# Censored data
rdata <- transform(rdata, U = runif(nn, 5, 15))
rdata <- transform(rdata, y = pmin(U, ystar))
\dontrun{ par(mfrow = c(1, 2))
hist(with(rdata, ystar)); hist(with(rdata, y)) }
extra <- with(rdata, list(rightcensored = ystar > U))
fit <- vglm(y ~ 1, cens.rayleigh, data = rdata, trace = TRUE,
            extra = extra, crit = "coef")
table(fit@extra$rightcensored)
coef(fit, matrix = TRUE)
head(fitted(fit)) } \keyword{models} \keyword{regression} VGAM/man/benini.Rd0000644000176200001440000000524313135276753013332 0ustar liggesusers\name{benini1} \alias{benini1} %- Also NEED an '\alias' for EACH other topic documented here. \title{Benini Distribution Family Function } \description{ Estimating the 1-parameter Benini distribution by maximum likelihood estimation. } \usage{ benini1(y0 = stop("argument 'y0' must be specified"), lshape = "loge", ishape = NULL, imethod = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{y0}{ Positive scale parameter. } \item{lshape}{ Parameter link function and extra argument of the parameter \eqn{s}, which is the shape parameter. See \code{\link{Links}} for more choices. A log link is the default because \eqn{s} is positive. } \item{ishape}{ Optional initial value for the shape parameter. The default is to compute the value internally. } \item{imethod, zero}{ Details at \code{\link{CommonVGAMffArguments}}. } } \details{ The Benini distribution has a probability density function that can be written \deqn{f(y) = 2 s \exp(-s[(\log(y/y_0))^2]) \log(y/y_0) / y }{% f(y) = 2*s*exp(-s * [(log(y/y0))^2]) * log(y/y0) / y} for \eqn{0 < y_0 < y}{0 < y0 < y}, and shape \eqn{s > 0}. The cumulative distribution function for \eqn{Y} is \deqn{F(y) = 1 - \exp(-s[(\log(y/y_0))^2]).}{% F(y) = 1 - exp(-s * [(log(y / y0))^2]). } Here, Newton-Raphson and Fisher scoring coincide. The median of \eqn{Y} is now returned as the fitted values, by default. This \pkg{VGAM} family function can handle multiple responses, which are inputted as a matrix. On fitting, the \code{extra} slot has a component called \code{y0} which contains the value of the \code{y0} argument. } %\section{Warning}{ % % % The median of \eqn{Y}, which are returned as the fitted values, % may be incorrect. % % %} \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. % Section 7.1, pp.235--8 } \author{ T. W. Yee } \note{ Yet to do: the 2-parameter Benini distribution estimates another shape parameter \eqn{a}{a} too. Hence, the code may change in the future. } \seealso{ \code{\link{Benini}}.
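% A direct check of the cumulative distribution function given above
% (a sketch, assuming pbenini() from the \code{\link{Benini}} page;
% the parameter values are arbitrary):
% y0 <- 1; s <- exp(2); y <- c(1.5, 2, 5)
% cbind(pbenini(y, y0 = y0, shape = s),
%       1 - exp(-s * (log(y / y0))^2))  # The two columns should agree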
} \examples{ y0 <- 1; nn <- 3000
bdata <- data.frame(y = rbenini(nn, y0 = y0, shape = exp(2)))
fit <- vglm(y ~ 1, benini1(y0 = y0), data = bdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
fit@extra$y0
c(head(fitted(fit), 1), with(bdata, median(y))) # Should be equal }
\keyword{models} \keyword{regression} VGAM/man/geometric.Rd0000644000176200001440000001044113135276753014040 0ustar liggesusers\name{geometric} \alias{geometric} \alias{truncgeometric} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Geometric (Truncated and Untruncated) Distributions } \description{ Maximum likelihood estimation for the geometric and truncated geometric distributions. } \usage{ geometric(link = "logit", expected = TRUE, imethod = 1, iprob = NULL, zero = NULL) truncgeometric(upper.limit = Inf, link = "logit", expected = TRUE, imethod = 1, iprob = NULL, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Parameter link function applied to the probability parameter \eqn{p}{prob}, which lies in the unit interval. See \code{\link{Links}} for more choices. } \item{expected}{ Logical. Fisher scoring is used if \code{expected = TRUE}, else Newton-Raphson. } \item{iprob, imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for details. } \item{upper.limit}{ Numeric. The upper limit of the (truncated) distribution. As a vector, it is recycled across responses first. The default value means both family functions should give the same result. } } \details{ A random variable \eqn{Y} has a 1-parameter geometric distribution if \eqn{P(Y=y) = p (1-p)^y}{P(Y=y) = prob * (1-prob)^y} for \eqn{y=0,1,2,\ldots}{y=0,1,2,...}. Here, \eqn{p}{prob} is the probability of success, and \eqn{Y} is the number of (independent) failures until a success occurs. Thus the response \eqn{Y} should be a non-negative integer. The mean of \eqn{Y} is \eqn{E(Y) = (1-p)/p}{E(Y) = (1-prob)/prob} and its variance is \eqn{Var(Y) = (1-p)/p^2}{Var(Y) = (1-prob)/prob^2}. The geometric distribution is a special case of the negative binomial distribution (see \code{\link{negbinomial}}). The geometric distribution is also related to the Borel distribution, which is a Lagrangian distribution. If \eqn{Y} has a geometric distribution with parameter \eqn{p}{prob} then \eqn{Y+1} has a positive-geometric distribution with the same parameter. Multiple responses are permitted. For \code{truncgeometric()}, the (upper) truncated geometric distribution can have response integer values from 0 to \code{upper.limit}. It has density \code{prob * (1 - prob)^y / [1-(1-prob)^(1+upper.limit)]}. For a generalized truncated geometric distribution with integer values \eqn{L} to \eqn{U}, say, subtract \eqn{L} from the response and feed in \eqn{U-L} as the upper limit. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011) \emph{Statistical Distributions}, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition. } \author{ T. W. Yee. Help from Viet Hoang Quoc is gratefully acknowledged. } %\note{ % %} \seealso{ \code{\link{negbinomial}}, \code{\link[stats]{Geometric}}, \code{\link{betageometric}}, \code{\link{expgeometric}}, \code{\link{zageometric}}, \code{\link{zigeometric}}, \code{\link{rbetageom}}, \code{\link{simulate.vlm}}.
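% A sketch verifying that the truncated density given above sums to
% unity (base R only; the values chosen are arbitrary):
% prob <- 0.4; upper.limit <- 5; y <- 0:upper.limit
% dtrunc <- prob * (1 - prob)^y / (1 - (1 - prob)^(1 + upper.limit))
% sum(dtrunc)  # Should be exactly 1
% all.equal(dtrunc, dgeom(y, prob) / pgeom(upper.limit, prob))  # TRUE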
} \examples{ gdata <- data.frame(x2 = runif(nn <- 1000) - 0.5) gdata <- transform(gdata, x3 = runif(nn) - 0.5, x4 = runif(nn) - 0.5) gdata <- transform(gdata, eta = -1.0 - 1.0 * x2 + 2.0 * x3) gdata <- transform(gdata, prob = logit(eta, inverse = TRUE)) gdata <- transform(gdata, y1 = rgeom(nn, prob)) with(gdata, table(y1)) fit1 <- vglm(y1 ~ x2 + x3 + x4, geometric, data = gdata, trace = TRUE) coef(fit1, matrix = TRUE) summary(fit1) # Truncated geometric (between 0 and upper.limit) upper.limit <- 5 tdata <- subset(gdata, y1 <= upper.limit) nrow(tdata) # Less than nn fit2 <- vglm(y1 ~ x2 + x3 + x4, truncgeometric(upper.limit), data = tdata, trace = TRUE) coef(fit2, matrix = TRUE) # Generalized truncated geometric (between lower.limit and upper.limit) lower.limit <- 1 upper.limit <- 8 gtdata <- subset(gdata, lower.limit <= y1 & y1 <= upper.limit) with(gtdata, table(y1)) nrow(gtdata) # Less than nn fit3 <- vglm(y1 - lower.limit ~ x2 + x3 + x4, truncgeometric(upper.limit - lower.limit), data = gtdata, trace = TRUE) coef(fit3, matrix = TRUE) } \keyword{models} \keyword{regression} VGAM/man/inv.paralogisticUC.Rd0000644000176200001440000000426413135276753015574 0ustar liggesusers\name{Inv.paralogistic} \alias{Inv.paralogistic} \alias{dinv.paralogistic} \alias{pinv.paralogistic} \alias{qinv.paralogistic} \alias{rinv.paralogistic} \title{The Inverse Paralogistic Distribution} \description{ Density, distribution function, quantile function and random generation for the inverse paralogistic distribution with shape parameters \code{a} and \code{p}, and scale parameter \code{scale}. } \usage{ dinv.paralogistic(x, scale = 1, shape1.a, log = FALSE) pinv.paralogistic(q, scale = 1, shape1.a, lower.tail = TRUE, log.p = FALSE) qinv.paralogistic(p, scale = 1, shape1.a, lower.tail = TRUE, log.p = FALSE) rinv.paralogistic(n, scale = 1, shape1.a) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1}, the length is taken to be the number required.} \item{shape1.a}{shape parameter.} \item{scale}{scale parameter.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dinv.paralogistic} gives the density, \code{pinv.paralogistic} gives the distribution function, \code{qinv.paralogistic} gives the quantile function, and \code{rinv.paralogistic} generates random deviates. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \details{ See \code{\link{inv.paralogistic}}, which is the \pkg{VGAM} family function for estimating the parameters by maximum likelihood estimation. } \note{ The inverse paralogistic distribution is a special case of the 4-parameter generalized beta II distribution. } \seealso{ \code{\link{inv.paralogistic}}, \code{\link{genbetaII}}. } \examples{ idata <- data.frame(y = rinv.paralogistic(n = 3000, exp(1), scale = exp(2))) fit <- vglm(y ~ 1, inv.paralogistic(lss = FALSE, ishape1.a = 2.1), data = idata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) } \keyword{distribution} VGAM/man/genbetaII.Rd0000644000176200001440000001403413135276753013713 0ustar liggesusers\name{genbetaII} \alias{genbetaII} %- Also NEED an '\alias' for EACH other topic documented here. 
\title{ Generalized Beta Distribution of the Second Kind } \description{ Maximum likelihood estimation of the 4-parameter generalized beta II distribution. } \usage{ genbetaII(lscale = "loge", lshape1.a = "loge", lshape2.p = "loge", lshape3.q = "loge", iscale = NULL, ishape1.a = NULL, ishape2.p = NULL, ishape3.q = NULL, lss = TRUE, gscale = exp(-5:5), gshape1.a = exp(-5:5), gshape2.p = exp(-5:5), gshape3.q = exp(-5:5), zero = "shape") } %- maybe also 'usage' for other objects documented here. % zero = ifelse(lss, -(2:4), -c(1, 3:4)) \arguments{ \item{lss}{ See \code{\link{CommonVGAMffArguments}} for important information. } \item{lshape1.a, lscale, lshape2.p, lshape3.q}{ Parameter link functions applied to the shape parameter \code{a}, scale parameter \code{scale}, shape parameter \code{p}, and shape parameter \code{q}. All four parameters are positive. See \code{\link{Links}} for more choices. } \item{iscale, ishape1.a, ishape2.p, ishape3.q}{ Optional initial values for the parameters. A \code{NULL} means a value is computed internally using the arguments \code{gscale}, \code{gshape1.a}, etc. } \item{gscale, gshape1.a, gshape2.p, gshape3.q}{ See \code{\link{CommonVGAMffArguments}} for information. Replaced by \code{iscale}, \code{ishape1.a} etc. if given. } % \item{gshape1.a, gscale, gshape2.p, gshape3.q}{ % See \code{\link{CommonVGAMffArguments}} for information. % } \item{zero}{ The default is to set all the shape parameters to be intercept-only. See \code{\link{CommonVGAMffArguments}} for information. % An integer-valued vector specifying which % linear/additive predictors are modelled as intercepts only. } } \details{ This distribution is most useful for unifying a substantial number of size distributions. For example, the Singh-Maddala, Dagum, Fisk (log-logistic), Lomax (Pareto type II), inverse Lomax, beta distribution of the second kind distributions are all special cases. Full details can be found in Kleiber and Kotz (2003), and Brazauskas (2002). The argument names given here are used by other families that are special cases of this family. Fisher scoring is used here and for the special cases too. The 4-parameter generalized beta II distribution has density \deqn{f(y) = a y^{ap-1} / [b^{ap} B(p,q) \{1 + (y/b)^a\}^{p+q}]}{% f(y) = a y^(ap-1) / [b^(ap) B(p,q) (1 + (y/b)^a)^(p+q)]} for \eqn{a > 0}, \eqn{b > 0}, \eqn{p > 0}, \eqn{q > 0}, \eqn{y \geq 0}{y >= 0}. Here \eqn{B} is the beta function, and \eqn{b} is the scale parameter \code{scale}, while the others are shape parameters. The mean is \deqn{E(Y) = b \, \Gamma(p + 1/a) \, \Gamma(q - 1/a) / (\Gamma(p) \, \Gamma(q))}{% E(Y) = b gamma(p + 1/a) gamma(q - 1/a) / ( gamma(p) gamma(q))} provided \eqn{-ap < 1 < aq}; these are returned as the fitted values. %The distribution is motivated by the incomplete beta function %\eqn{B_y(p,q)} which is the integral from 0 to \eqn{y} of the integrand %\eqn{u^{p-1} (1-u)^{q-1}}{u^(p-1) (1-u)^(q-1)} where \eqn{y>0}. This family function handles multiple responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. Brazauskas, V. (2002) Fisher information matrix for the Feller-Pareto distribution. \emph{Statistics & Probability Letters}, \bold{59}, 159--167. } \author{ T. W. Yee, with help from Victor Miranda. 
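% A numerical check of the mean formula above, via the Singh-Maddala
% special case (shape2.p == 1) simulated with rsinmad(); a sketch only,
% with arbitrary parameter values satisfying -ap < 1 < aq:
% a <- exp(1); b <- exp(2); p <- 1; q <- exp(1)
% ys <- rsinmad(10^5, shape1 = a, scale = b, shape3 = q)
% c(mean(ys),
%   b * gamma(p + 1/a) * gamma(q - 1/a) / (gamma(p) * gamma(q)))  # Close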
} \note{ The default is to use a grid search with respect to all four parameters; this is quite costly and is time consuming. If the self-starting initial values fail, try experimenting with the initial value arguments. Also, the constraint \eqn{-ap < 1 < aq} may be violated as the iterations progress so it pays to monitor convergence, e.g., set \code{trace = TRUE}. Successful convergence depends on having very good initial values. This is rather difficult for this distribution so that a grid search is conducted by default. One suggestion for increasing the estimation reliability is to set \code{stepsize = 0.5} and \code{maxit = 100}; see \code{\link{vglm.control}}. } \section{Warning}{ This distribution is very flexible and it is not generally recommended to use this family function when the sample size is small---numerical problems easily occur with small samples. Probably several hundred observations at least are needed in order to estimate the parameters with any level of confidence. Neither is the inclusion of covariates recommended at all---not unless there are several thousand observations. The mean is finite only when \eqn{-ap < 1 < aq}, and this can be easily violated by the parameter estimates for small sample sizes. Try fitting some of the special cases of this distribution (e.g., \code{\link{sinmad}}, \code{\link{fisk}}, etc.) first, and then possibly use those models for initial values for this distribution. } \seealso{ \code{\link{dgenbetaII}}, \code{\link{betaff}}, \code{\link{betaII}}, \code{\link{dagum}}, \code{\link{sinmad}}, \code{\link{fisk}}, \code{\link{lomax}}, \code{\link{inv.lomax}}, \code{\link{paralogistic}}, \code{\link{inv.paralogistic}}, \code{\link{lino}}, \code{\link{CommonVGAMffArguments}}, \code{\link{vglm.control}}. } \examples{ \dontrun{ gdata <- data.frame(y = rsinmad(3000, shape1 = exp(1), scale = exp(2), shape3 = exp(1))) # A special case! fit <- vglm(y ~ 1, genbetaII(lss = FALSE), data = gdata, trace = TRUE) fit <- vglm(y ~ 1, data = gdata, trace = TRUE, genbetaII(ishape1.a = 3, iscale = 7, ishape3.q = 2.3)) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } } \keyword{models} \keyword{regression} VGAM/man/zageomUC.Rd0000644000176200001440000000436113135276753013600 0ustar liggesusers\name{Zageom} \alias{Zageom} \alias{dzageom} \alias{pzageom} \alias{qzageom} \alias{rzageom} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Altered Geometric Distribution } \description{ Density, distribution function, quantile function and random generation for the zero-altered geometric distribution with parameter \code{pobs0}. } \usage{ dzageom(x, prob, pobs0 = 0, log = FALSE) pzageom(q, prob, pobs0 = 0) qzageom(p, prob, pobs0 = 0) rzageom(n, prob, pobs0 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{prob, log}{ Parameters from the ordinary geometric distribution (see \code{\link[stats:Geometric]{dgeom}}). } \item{pobs0}{ Probability of (an observed) zero, called \eqn{pobs0}. The default value of \code{pobs0 = 0} corresponds to the response having a positive geometric distribution. } } \details{ The probability function of \eqn{Y} is 0 with probability \code{pobs0}, else a positive geometric(prob) distribution. 
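% The two-part form just described can be checked directly (a sketch;
% the values are arbitrary, and dgeom() is from base R):
% x <- 1:6; prob <- 0.35; pobs0 <- 0.05
% cbind(dzageom(x, prob, pobs0),
%       (1 - pobs0) * dgeom(x, prob) / (1 - dgeom(0, prob)))  # Should agree
% dzageom(0, prob, pobs0)  # Simply pobs0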
} \value{ \code{dzageom} gives the density and \code{pzageom} gives the distribution function, \code{qzageom} gives the quantile function, and \code{rzageom} generates random deviates. } %\references{ } \author{ T. W. Yee } \note{ The argument \code{pobs0} is recycled to the required length, and must have values which lie in the interval \eqn{[0,1]}. } \seealso{ \code{\link{zageometric}}, \code{\link{zigeometric}}, \code{\link{rposgeom}}. } \examples{ prob <- 0.35; pobs0 <- 0.05; x <- (-1):7 dzageom(x, prob = prob, pobs0 = pobs0) table(rzageom(100, prob = prob, pobs0 = pobs0)) \dontrun{ x <- 0:10 barplot(rbind(dzageom(x, prob = prob, pobs0 = pobs0), dgeom(x, prob = prob)), beside = TRUE, col = c("blue", "orange"), cex.main = 0.7, las = 1, ylab = "Probability", names.arg = as.character(x), main = paste("ZAG(prob = ", prob, ", pobs0 = ", pobs0, ") [blue] vs", " Geometric(prob = ", prob, ") [orange] densities", sep = "")) } } \keyword{distribution} VGAM/man/df.residual.Rd0000644000176200001440000000436713135276753014274 0ustar liggesusers\name{df.residual} \alias{df.residual} \alias{df.residual_vlm} %\alias{df.residual.default} \title{Residual Degrees-of-Freedom} \description{ Returns the residual degrees-of-freedom extracted from a fitted VGLM object. } \usage{ df.residual_vlm(object, type = c("vlm", "lm"), \dots) } \arguments{ \item{object}{ an object for which the degrees-of-freedom are desired, e.g., a \code{\link{vglm}} object. } \item{type}{ the type of residual degrees-of-freedom wanted. In some applications the 'usual' LM-type value may be more appropriate. The default is the first choice. } \item{\dots}{ additional optional arguments. } } \details{ When a VGLM is fitted, a \emph{large} (VLM) generalized least squares (GLS) fit is done at each IRLS iteration. To do this, an ordinary least squares (OLS) fit is performed by transforming the GLS using Cholesky factors. The number of rows is \eqn{M} times the `ordinary' number of rows of the LM-type model: \eqn{nM}. Here, \eqn{M} is the number of linear/additive predictors. So the formula for the VLM-type residual degrees-of-freedom is \eqn{nM - p^{*}} where \eqn{p^{*}} is the number of columns of the `big' VLM matrix. The formula for the LM-type residual degrees-of-freedom is \eqn{n - p_{j}} where \eqn{p_{j}} is the number of columns of the `ordinary' LM matrix corresponding to the \eqn{j}th linear/additive predictor. } \value{ The value of the residual degrees-of-freedom extracted from the object. When \code{type = "vlm"} this is a single integer, and when \code{type = "lm"} this is a \eqn{M}-vector of integers. } \seealso{ \code{\link{vglm}}, \code{\link[stats]{deviance}}, \code{\link[stats]{lm}}. } \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) (fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)) head(model.matrix(fit, type = "vlm")) head(model.matrix(fit, type = "lm")) df.residual(fit, type = "vlm") # n * M - p_VLM nobs(fit, type = "vlm") # n * M nvar(fit, type = "vlm") # p_VLM df.residual(fit, type = "lm") # n - p_LM(j); Useful in some situations nobs(fit, type = "lm") # n nvar(fit, type = "lm") # p_LM nvar_vlm(fit, type = "lm") # p_LM(j) (<= p_LM elementwise) } \keyword{models} \keyword{regression} VGAM/man/predictvglm.Rd0000644000176200001440000001177213135276753014412 0ustar liggesusers\name{predictvglm} \alias{predictvglm} %- Also NEED an '\alias' for EACH other topic documented here. 
\title{Predict Method for a VGLM fit} \description{ Predicted values based on a vector generalized linear model (VGLM) object. } \usage{ predictvglm(object, newdata = NULL, type = c("link", "response", "terms"), se.fit = FALSE, deriv = 0, dispersion = NULL, untransform = FALSE, type.fitted = NULL, percentiles = NULL, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ Object of class inheriting from \code{"vlm"}, e.g., \code{\link{vglm}}. } \item{newdata}{ An optional data frame in which to look for variables with which to predict. If omitted, the fitted linear predictors are used. } \item{type}{ The value of this argument can be abbreviated. The type of prediction required. The default is the first one, meaning on the scale of the linear predictors. This should be a \eqn{n \times M}{n x M} matrix. The alternative \code{"response"} is on the scale of the response variable, and depending on the family function, this may or may not be the mean. Often this is the fitted value, e.g., \code{fitted(vglmObject)} (see \code{\link{fittedvlm}}). Note that the response is output from the \code{@linkinv} slot, where the \code{eta} argument is the \eqn{n \times M}{n x M} matrix of linear predictors. The \code{"terms"} option returns a matrix giving the fitted values of each term in the model formula on the linear predictor scale. The terms have been centered. } \item{se.fit}{ logical: return standard errors? } \item{deriv}{ Non-negative integer. Currently this must be zero. Later, this may be implemented for general values. } \item{dispersion}{ Dispersion parameter. This may be inputted at this stage, but the default is to use the dispersion parameter of the fitted model. } % \item{extra}{ % A list containing extra information. % This argument should be ignored. % } \item{type.fitted}{ Some \pkg{VGAM} family functions have an argument by the same name. If so, then one can obtain fitted values by setting \code{type = "response"} and choosing a value of \code{type.fitted} from what's available. If \code{type.fitted = "quantiles"} is available then the \code{percentiles} argument can be used to specify what quantile values are requested. } \item{percentiles}{ Used only if \code{type.fitted = "quantiles"} is available and is selected. } \item{untransform}{ Logical. Reverses any parameter link function. This argument only works if \code{type = "link", se.fit = FALSE, deriv = 0}. Setting \code{untransform = TRUE} does not work for all \pkg{VGAM} family functions; only ones where there is a one-to-one correspondence between a simple link function and a simple parameter might work. } \item{\dots}{Arguments passed into \code{predictvlm}. } } \details{ Obtains predictions and optionally estimates standard errors of those predictions from a fitted \code{\link{vglm}} object. This code implements \emph{smart prediction} (see \code{\link{smartpred}}). } \value{ If \code{se.fit = FALSE}, a vector or matrix of predictions. If \code{se.fit = TRUE}, a list with components \item{fitted.values}{Predictions} \item{se.fit}{Estimated standard errors} \item{df}{Degrees of freedom} \item{sigma}{The square root of the dispersion parameter} } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. } \author{ Thomas W. Yee } \note{ Setting \code{se.fit = TRUE} and \code{type = "response"} will generate an error. 
The arguments \code{type.fitted} and \code{percentiles} are provided in this function to give more convenience than modifying the \code{extra} slot directly. } \section{Warning }{ This function may change in the future. } \seealso{ \code{\link[stats]{predict}}, \code{\link{vglm}}, \code{predictvlm}, \code{\link{smartpred}}. } \examples{ # Illustrates smart prediction pneumo <- transform(pneumo, let = log(exposure.time)) fit <- vglm(cbind(normal, mild, severe) ~ poly(c(scale(let)), 2), propodds, data = pneumo, trace = TRUE, x.arg = FALSE) class(fit) (q0 <- head(predict(fit))) (q1 <- predict(fit, newdata = head(pneumo))) (q2 <- predict(fit, newdata = head(pneumo))) all.equal(q0, q1) # Should be TRUE all.equal(q1, q2) # Should be TRUE head(predict(fit)) head(predict(fit, untransform = TRUE)) p0 <- head(predict(fit, type = "response")) p1 <- head(predict(fit, type = "response", newdata = pneumo)) p2 <- head(predict(fit, type = "response", newdata = pneumo)) p3 <- head(fitted(fit)) all.equal(p0, p1) # Should be TRUE all.equal(p1, p2) # Should be TRUE all.equal(p2, p3) # Should be TRUE predict(fit, type = "terms", se = TRUE) } \keyword{models} \keyword{regression} % untransform = FALSE, extra = object@extra, VGAM/man/plotrcim0.Rd0000644000176200001440000001276613135276753014007 0ustar liggesusers\name{plotrcim0} \alias{plotrcim0} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Main Effects Plot for a Row-Column Interaction Model (RCIM) } \description{ Produces a main effects plot for Row-Column Interaction Models (RCIMs). } \usage{ plotrcim0(object, centered = TRUE, which.plots = c(1, 2), hline0 = TRUE, hlty = "dashed", hcol = par()$col, hlwd = par()$lwd, rfirst = 1, cfirst = 1, rtype = "h", ctype = "h", rcex.lab = 1, rcex.axis = 1, rtick = FALSE, ccex.lab = 1, ccex.axis = 1, ctick = FALSE, rmain = "Row effects", rsub = "", rxlab = "", rylab = "Row effects", cmain = "Column effects", csub = "", cxlab= "", cylab = "Column effects", rcol = par()$col, ccol = par()$col, no.warning = FALSE, ...) } \arguments{ \item{object}{ An \code{\link{rcim}} object. This should be of rank-0, i.e., main effects only and no interactions. } \item{which.plots}{ Numeric, describing which plots are to be plotted. The row effects plot is 1 and the column effects plot is 2. Set the value \code{0}, say, for no plots at all. } \item{centered}{ Logical. If \code{TRUE} then the row and column effects are centered (but not scaled) by \code{\link[base]{scale}}. If \code{FALSE} then the raw effects are used (of which the first are zero by definition). } \item{hline0, hlty, hcol, hlwd}{ \code{hline0} is logical. If \code{TRUE} then a horizontal line is plotted at 0 and the other arguments describe this line. Probably having \code{hline0 = TRUE} only makes sense when \code{centered = TRUE}. } \item{rfirst, cfirst}{ \code{rfirst} is the level of row that is placed first in the row effects plot, etc. } \item{rmain, cmain}{ Character. \code{rmain} is the main label in the row effects plot, etc. } \item{rtype, ctype, rsub, csub}{ See the \code{type} and \code{sub} arguments of \code{\link[graphics:plot]{plot}}. } %\item{rlabels, clabels}{ % rlabels = FALSE, clabels = FALSE, % Currently not functioning properly. % zz. % See \code{labels} argument of % \code{\link[graphics:plot]{plot}}. % %} \item{rxlab, rylab, cxlab, cylab}{ Character. For the row effects plot, \code{rxlab} is \code{xlab} and \code{rylab} is \code{ylab}; see \code{\link[graphics:par]{par}}. 
Ditto for \code{cxlab} and \code{cylab} for the column effects plot. } \item{rcex.lab, ccex.lab}{ Numeric. \code{rcex.lab} is \code{cex} for the row effects plot label, etc. } \item{rcex.axis, ccex.axis}{ Numeric. \code{rcex.axis} is the \code{cex} argument for the row effects axis label, etc. } \item{rtick, ctick}{ Logical. If \code{rtick = TRUE} then add ticks to the row effects plot, etc. } \item{rcol, ccol}{ \code{rcol} give a colour for the row effects plot, etc. } \item{no.warning}{ Logical. If \code{TRUE} then no warning is issued if the model is not rank-0. } %\item{llwd}{ % Fed into \code{lwd} of \code{\link[graphics:par]{par}}. % %} %\item{rlas, clas}{ % Fed into \code{las} of \code{\link[graphics:par]{par}}. % %} %\item{main}{ % Character. A common title. % %} %\item{type}{ % Fed into \code{type} of \code{\link[graphics:plot]{plot}}. % %} \item{...}{ Arguments fed into both \code{\link[graphics:plot]{plot}} calls. } } \details{ This function plots the row and column effects of a rank-0 RCIM. As the result is a main effects plot of a regression analysis, its interpretation when \code{centered = FALSE} is relative to the baseline (reference level) of a row and column, and should also be considered in light of the link function used. Many arguments that start with \code{"r"} refer to the row effects plot, and \code{"c"} for the column effects plot. } \value{ The original object with the \code{post} slot assigned additional information from the plot. } \note{ This function should be only used to plot the object of rank-0 RCIM. If the rank is positive then it will issue a warning. Using an argument \code{ylim} will mean the row and column effects are plotted on a common scale; see \code{\link[graphics]{plot.window}}. % This function is not finished yet. % There may be several bugs! } \author{ T. W. Yee, A. F. Hadi. } %\section{Warning}{ % %} \seealso{ \code{\link{moffset}} \code{\link{Rcim}}, \code{\link{rcim}}. } \examples{ alcoff.e <- moffset(alcoff, "6", "Mon", postfix = "*") # Effective day fit0 <- rcim(alcoff.e, family = poissonff) \dontrun{par(oma = c(0, 0, 4, 0), mfrow = 1:2) # For all plots below too ii <- plot(fit0, rcol = "blue", ccol = "orange", lwd = 4, ylim = c(-2, 2), # A common ylim cylab = "Effective daily effects", rylab = "Hourly effects", rxlab = "Hour", cxlab = "Effective day") ii@post # Endowed with additional information } # Negative binomial example \dontrun{ fit1 <- rcim(alcoff.e, negbinomial, trace = TRUE) plot(fit1, ylim = c(-2, 2)) } # Univariate normal example fit2 <- rcim(alcoff.e, uninormal, trace = TRUE) \dontrun{ plot(fit2, ylim = c(-200, 400)) } # Median-polish example \dontrun{ fit3 <- rcim(alcoff.e, alaplace1(tau = 0.5), maxit = 1000, trace = FALSE) plot(fit3, ylim = c(-200, 250)) } # Zero-inflated Poisson example on "crashp" (no 0s in alcoff) \dontrun{ cbind(rowSums(crashp)) # Easy to see the data cbind(colSums(crashp)) # Easy to see the data fit4 <- rcim(Rcim(crashp, rbaseline = "5", cbaseline = "Sun"), zipoissonff, trace = TRUE) plot(fit4, ylim = c(-3, 3)) } } VGAM/man/amlnormal.Rd0000644000176200001440000001427013135276753014050 0ustar liggesusers\name{amlnormal} \alias{amlnormal} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Asymmetric Least Squares Quantile Regression } \description{ Asymmetric least squares, a special case of maximizing an asymmetric likelihood function of a normal distribution. This allows for expectile/quantile regression using asymmetric least squares error loss. 
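% The asymmetric squared-error loss described in the Details below is
% simple enough to state as a one-line R function (a sketch only; this
% is not the internal implementation):
% asym.loss <- function(r, w) ifelse(r <= 0, r^2, w * r^2)
% curve(asym.loss(x, w = 3), -2, 2, ylab = "loss")  # w = 3 is arbitrary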
} \usage{ amlnormal(w.aml = 1, parallel = FALSE, lexpectile = "identitylink", iexpectile = NULL, imethod = 1, digw = 4) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{w.aml}{ Numeric, a vector of positive constants controlling the percentiles. The larger the value the larger the fitted percentile value (the proportion of points below the ``w-regression plane''). The default value of unity results in the ordinary least squares (OLS) solution. } \item{parallel}{ If \code{w.aml} has more than one value then this argument allows the quantile curves to differ by the same amount as a function of the covariates. Setting this to be \code{TRUE} should force the quantile curves to not cross (although they may not cross anyway). See \code{\link{CommonVGAMffArguments}} for more information. } \item{lexpectile, iexpectile}{ See \code{\link{CommonVGAMffArguments}} for more information. } \item{imethod}{ Integer, either 1 or 2 or 3. Initialization method. Choose another value if convergence fails. } \item{digw}{ Passed into \code{\link[base]{Round}} as the \code{digits} argument for the \code{w.aml} values; used cosmetically for labelling. } } \details{ This is an implementation of Efron (1991) and full details can be obtained there. Equation numbers below refer to that article. The model is essentially a linear model (see \code{\link[stats]{lm}}), however, the asymmetric squared error loss function for a residual \eqn{r} is \eqn{r^2} if \eqn{r \leq 0}{r <= 0} and \eqn{w r^2}{w*r^2} if \eqn{r > 0}. The solution is the set of regression coefficients that minimize the sum of these over the data set, weighted by the \code{weights} argument (so that it can contain frequencies). Newton-Raphson estimation is used here. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Efron, B. (1991) Regression percentiles using asymmetric squared error loss. \emph{Statistica Sinica}, \bold{1}, 93--125. } \author{ Thomas W. Yee } \note{ On fitting, the \code{extra} slot has list components \code{"w.aml"} and \code{"percentile"}. The latter is the percent of observations below the ``w-regression plane'', which corresponds to the fitted values. One difficulty is finding the \code{w.aml} value giving a specified percentile. One solution is to fit the model within a root finding function such as \code{\link[stats]{uniroot}}; see the example below. For \code{amlnormal} objects, methods functions for the generic functions \code{qtplot} and \code{cdf} have not been written yet. See the note in \code{\link{amlpoisson}} on the jargon, including \emph{expectiles} and \emph{regression quantiles}. The \code{deviance} slot computes the total asymmetric squared error loss (2.5). If \code{w.aml} has more than one value then the value returned by the slot is the sum taken over all the \code{w.aml} values. This \pkg{VGAM} family function is named \code{amlnormal()} for consistency with the other function names \code{\link{amlpoisson}}, \code{\link{amlbinomial}}, etc. In this documentation the word \emph{quantile} can often be interchangeably replaced by \emph{expectile} (things are informal here). } %\section{Warning }{ % The \code{loglikelihood} slot currently does not return the % log-likelihood but negative the total asymmetric squared error % loss (2.5).
% If \code{w} has more than one value then the value returned by % \code{loglikelihood} is the sum taken over all the \code{w} values. %} \seealso{ \code{\link{amlpoisson}}, \code{\link{amlbinomial}}, \code{\link{amlexponential}}, \code{\link{bmi.nz}}, \code{\link{alaplace1}}, \code{\link{denorm}}, \code{\link{lms.bcn}} and similar variants are alternative methods for quantile regression. } \examples{ \dontrun{ # Example 1 ooo <- with(bmi.nz, order(age)) bmi.nz <- bmi.nz[ooo, ] # Sort by age (fit <- vglm(BMI ~ sm.bs(age), amlnormal(w.aml = 0.1), data = bmi.nz)) fit@extra # Gives the w value and the percentile coef(fit, matrix = TRUE) # Quantile plot with(bmi.nz, plot(age, BMI, col = "blue", main = paste(round(fit@extra$percentile, digits = 1), "expectile-percentile curve"))) with(bmi.nz, lines(age, c(fitted(fit)), col = "black")) # Example 2 # Find the w values that give the 25, 50 and 75 percentiles find.w <- function(w, percentile = 50) { fit2 <- vglm(BMI ~ sm.bs(age), amlnormal(w = w), data = bmi.nz) fit2@extra$percentile - percentile } # Quantile plot with(bmi.nz, plot(age, BMI, col = "blue", las = 1, main = "25, 50 and 75 expectile-percentile curves")) for (myp in c(25, 50, 75)) { # Note: uniroot() can only find one root at a time bestw <- uniroot(f = find.w, interval = c(1/10^4, 10^4), percentile = myp) fit2 <- vglm(BMI ~ sm.bs(age), amlnormal(w = bestw$root), data = bmi.nz) with(bmi.nz, lines(age, c(fitted(fit2)), col = "orange")) } # Example 3; this is Example 1 but with smoothing splines and # a vector w and a parallelism assumption. ooo <- with(bmi.nz, order(age)) bmi.nz <- bmi.nz[ooo, ] # Sort by age fit3 <- vgam(BMI ~ s(age, df = 4), data = bmi.nz, trace = TRUE, amlnormal(w = c(0.1, 1, 10), parallel = TRUE)) fit3@extra # The w values, percentiles and weighted deviances # The linear components of the fit; not for human consumption: coef(fit3, matrix = TRUE) # Quantile plot with(bmi.nz, plot(age, BMI, col="blue", main = paste(paste(round(fit3@extra$percentile, digits = 1), collapse = ", "), "expectile-percentile curves"))) with(bmi.nz, matlines(age, fitted(fit3), col = 1:fit3@extra$M, lwd = 2)) with(bmi.nz, lines(age, c(fitted(fit )), col = "black")) # For comparison } } \keyword{models} \keyword{regression} VGAM/man/bifgmcop.Rd0000644000176200001440000000504613135276753013655 0ustar liggesusers\name{bifgmcop} \alias{bifgmcop} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Farlie-Gumbel-Morgenstern's Bivariate Distribution Family Function } \description{ Estimate the association parameter of Farlie-Gumbel-Morgenstern's bivariate distribution by maximum likelihood estimation. } \usage{ bifgmcop(lapar = "rhobit", iapar = NULL, imethod = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lapar, iapar, imethod}{ Details at \code{\link{CommonVGAMffArguments}}. See \code{\link{Links}} for more link function choices. } } \details{ The cumulative distribution function is \deqn{P(Y_1 \leq y_1, Y_2 \leq y_2) = y_1 y_2 ( 1 + \alpha (1 - y_1) (1 - y_2) ) }{% P(Y1 <= y1, Y2 <= y2) = y1 * y2 * ( 1 + alpha * (1 - y1) * (1 - y2) ) } for \eqn{-1 < \alpha < 1}{-1 < alpha < 1}. The support of the function is the unit square. The marginal distributions are the standard uniform distributions. When \eqn{\alpha = 0}{alpha=0} the random variables are independent. % A variant of Newton-Raphson is used, which only seems to work for an % intercept model. % It is a very good idea to set \code{trace=TRUE}. 
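% The FGM cumulative distribution function above is easily coded
% directly (a sketch; the helper name fgm.cdf is ours, not VGAM's):
% fgm.cdf <- function(y1, y2, alpha)
%   y1 * y2 * (1 + alpha * (1 - y1) * (1 - y2))
% fgm.cdf(0.5, 0.5, alpha = 0)    # 0.25, the independence value
% fgm.cdf(0.5, 0.5, alpha = 0.5)  # Slightly larger: positive dependence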
% This \pkg{VGAM} family function is prone to numerical difficulties. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Castillo, E., Hadi, A. S., Balakrishnan, N. Sarabia, J. S. (2005) \emph{Extreme Value and Related Models with Applications in Engineering and Science}, Hoboken, NJ, USA: Wiley-Interscience. Smith, M. D. (2007) Invariance theorems for Fisher information. \emph{Communications in Statistics---Theory and Methods}, \bold{36}(12), 2213--2222. } \author{ T. W. Yee } \note{ The response must be a two-column matrix. Currently, the fitted value is a matrix with two columns and values equal to 0.5. This is because each marginal distribution corresponds to a standard uniform distribution. % This \pkg{VGAM} family function should be used with caution. } \seealso{ \code{\link{rbifgmcop}}, \code{\link{bifrankcop}}, \code{\link{bifgmexp}}, \code{\link{simulate.vlm}}. } \examples{ ymat <- rbifgmcop(n = 1000, apar = rhobit(3, inverse = TRUE)) \dontrun{plot(ymat, col = "blue")} fit <- vglm(ymat ~ 1, fam = bifgmcop, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) head(fitted(fit)) } \keyword{models} \keyword{regression} % for real \eqn{\alpha}{alpha} (the range is data-dependent). VGAM/man/genbetaIIUC.Rd0000644000176200001440000000445713135276753014153 0ustar liggesusers\name{GenbetaII} \alias{GenbetaII} \alias{dgenbetaII} %\alias{pgenbetaII} %\alias{qgenbetaII} %\alias{rgenbetaII} \title{The Generalized Beta II Distribution} \description{ Density for the generalized beta II distribution with shape parameters \code{a} and \code{p} and \code{q}, and scale parameter \code{scale}. % distribution function, quantile function and random generation } \usage{ dgenbetaII(x, scale = 1, shape1.a, shape2.p, shape3.q, log = FALSE) } %pgenbetaII(q, scale = 1, shape1.a, shape2.p, shape3.q, % lower.tail = TRUE, log.p = FALSE) %qgenbetaII(p, scale = 1, shape1.a, shape2.p, shape3.q, % lower.tail = TRUE, log.p = FALSE) %rgenbetaII(n, scale = 1, shape1.a, shape3.q, shape3.q) \arguments{ % \item{x, q}{vector of quantiles.} \item{x}{vector of quantiles.} % \item{p}{vector of probabilities.} % \item{n}{number of observations. If \code{length(n) > 1}, the length % is taken to be the number required.} \item{shape1.a, shape2.p, shape3.q}{positive shape parameters.} \item{scale}{positive scale parameter.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } % \item{lower.tail, log.p}{ % Same meaning as in \code{\link[stats:Normal]{pnorm}} % or \code{\link[stats:Normal]{qnorm}}. % } } \value{ \code{dgenbetaII} gives the density. % \code{pgenbetaII} gives the distribution function, % \code{qgenbetaII} gives the quantile function, and % \code{rgenbetaII} generates random deviates. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \details{ See \code{\link{genbetaII}}, which is the \pkg{VGAM} family function for estimating the parameters by maximum likelihood estimation. Several distributions, such as the Singh-Maddala, are special case of this flexible 4-parameter distribution. The product of \code{shape1.a} and \code{shape2.p} determines the behaviour of the density at the origin. } %\note{ % % %} \seealso{ \code{\link{genbetaII}}. 
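% The three calls in the Examples below illustrate the remark above:
% since f(y) behaves like a * y^(a*p - 1) near the origin (up to a
% constant), dgenbetaII(0, ...) is finite and positive when
% shape1.a * shape2.p equals 1, infinite when the product is below 1,
% and zero when it exceeds 1.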
} \examples{ dgenbetaII(0, shape1.a = 1/4, shape2.p = 4, shape3.q = 3) dgenbetaII(0, shape1.a = 1/4, shape2.p = 2, shape3.q = 3) dgenbetaII(0, shape1.a = 1/4, shape2.p = 8, shape3.q = 3) } \keyword{distribution} VGAM/man/zapoisUC.Rd0000644000176200001440000000436613135276753013630 0ustar liggesusers\name{Zapois} \alias{Zapois} \alias{dzapois} \alias{pzapois} \alias{qzapois} \alias{rzapois} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Altered Poisson Distribution } \description{ Density, distribution function, quantile function and random generation for the zero-altered Poisson distribution with parameter \code{pobs0}. } \usage{ dzapois(x, lambda, pobs0 = 0, log = FALSE) pzapois(q, lambda, pobs0 = 0) qzapois(p, lambda, pobs0 = 0) rzapois(n, lambda, pobs0 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{lambda}{ Vector of positive means. } \item{pobs0}{ Probability of zero, called \eqn{pobs0}. The default value of \code{pobs0 = 0} corresponds to the response having a positive Poisson distribution. } \item{log}{ Logical. Return the logarithm of the answer? } } \details{ The probability function of \eqn{Y} is 0 with probability \code{pobs0}, else a positive \eqn{Poisson(\lambda)}{Poisson(lambda)}. } \value{ \code{dzapois} gives the density, \code{pzapois} gives the distribution function, \code{qzapois} gives the quantile function, and \code{rzapois} generates random deviates. } %\references{ } \author{ T. W. Yee } \note{ The argument \code{pobs0} is recycled to the required length, and must have values which lie in the interval \eqn{[0,1]}. } \seealso{ \code{\link{zapoisson}}, \code{\link{dzipois}}. } \examples{ lambda <- 3; pobs0 <- 0.2; x <- (-1):7 (ii <- dzapois(x, lambda, pobs0)) max(abs(cumsum(ii) - pzapois(x, lambda, pobs0))) # Should be 0 table(rzapois(100, lambda, pobs0)) table(qzapois(runif(100), lambda, pobs0)) round(dzapois(0:10, lambda, pobs0) * 100) # Should be similar \dontrun{ x <- 0:10 barplot(rbind(dzapois(x, lambda, pobs0), dpois(x, lambda)), beside = TRUE, col = c("blue", "green"), las = 1, main = paste("ZAP(", lambda, ", pobs0 = ", pobs0, ") [blue] vs", " Poisson(", lambda, ") [green] densities", sep = ""), names.arg = as.character(x), ylab = "Probability") } } \keyword{distribution} VGAM/man/bisa.Rd0000644000176200001440000001066313135276753013006 0ustar liggesusers\name{bisa} \alias{bisa} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Birnbaum-Saunders Distribution Family Function } \description{ Estimates the shape and scale parameters of the Birnbaum-Saunders distribution by maximum likelihood estimation. } \usage{ bisa(lscale = "loge", lshape = "loge", iscale = 1, ishape = NULL, imethod = 1, zero = "shape", nowarning = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{nowarning}{ Logical. Suppress a warning? Ignored for \pkg{VGAM} 0.9-7 and higher. } \item{lscale, lshape}{ Parameter link functions applied to the shape and scale parameters (\eqn{a} and \eqn{b} below). See \code{\link{Links}} for more choices. A log link is the default for both because they are positive. } \item{iscale, ishape}{ Initial values for \eqn{a} and \eqn{b}. A \code{NULL} means an initial value is chosen internally using \code{imethod}. 
} \item{imethod}{ An integer with value \code{1} or \code{2} or \code{3} which specifies the initialization method. If failure to converge occurs try the other value, or else specify a value for \code{ishape} and/or \code{iscale}. } \item{zero}{ Specifies which linear/additive predictor is modelled as intercept-only. If used, choose one value from the set \{1,2\}. See \code{\link{CommonVGAMffArguments}} for more details. % The default is none of them. } } \details{ The (two-parameter) Birnbaum-Saunders distribution has a cumulative distribution function that can be written as \deqn{F(y;a,b) = \Phi[ \xi(y/b)/a] }{% F(y;a,k) = pnorm[xi(y/b)/a] } where \eqn{\Phi(\cdot)}{pnorm()} is the cumulative distribution function of a standard normal (see \code{\link[stats:Normal]{pnorm}}), \eqn{\xi(t) = \sqrt{t} - 1 / \sqrt{t}}{xi(t) = t^(0.5) - t^(-0.5)}, \eqn{y > 0}, \eqn{a>0} is the shape parameter, \eqn{b>0} is the scale parameter. The mean of \eqn{Y} (which is the fitted value) is \eqn{b(1 + a^2/2)}{b*(1 + a*a/2)}. and the variance is \eqn{a^2 b^2 (1 + \frac{5}{4}a^2)}{a^2 b^2 (1 + (5/4)*a^2)}. By default, \eqn{\eta_1 = \log(a)}{eta1 = log(a)} and \eqn{\eta_2 = \log(b)}{eta2 = log(b)} for this family function. Note that \eqn{a} and \eqn{b} are orthogonal, i.e., the Fisher information matrix is diagonal. This family function implements Fisher scoring, and it is unnecessary to compute any integrals numerically. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Lemonte, A. J. and Cribari-Neto, F. and Vasconcellos, K. L. P. (2007) Improved statistical inference for the two-parameter Birnbaum-Saunders distribution. \emph{Computational Statistics \& Data Analysis}, \bold{51}, 4656--4681. Birnbaum, Z. W. and Saunders, S. C. (1969) A new family of life distributions. \emph{Journal of Applied Probability}, \bold{6}, 319--327. Birnbaum, Z. W. and Saunders, S. C. (1969) Estimation for a family of life distributions with applications to fatigue. \emph{Journal of Applied Probability}, \bold{6}, 328--347. Engelhardt, M. and Bain, L. J. and Wright, F. T. (1981) Inferences on the parameters of the Birnbaum-Saunders fatigue life distribution based on maximum likelihood estimation. \emph{Technometrics}, \bold{23}, 251--256. Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1995) \emph{Continuous Univariate Distributions}, 2nd edition, Volume 2, New York: Wiley. } \author{ T. W. Yee } %\note{ % %} %\section{Warning }{ %} \seealso{ \code{\link{pbisa}}, \code{\link{inv.gaussianff}}, \code{\link{CommonVGAMffArguments}}. 
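% A direct check of the cumulative distribution function given above
% (a sketch, assuming pbisa() as linked in the See Also; the parameter
% values are arbitrary):
% a <- 0.7; b <- 1.3; y <- c(0.5, 1, 2)
% xi <- function(t) sqrt(t) - 1 / sqrt(t)
% cbind(pbisa(y, scale = b, shape = a),
%       pnorm(xi(y / b) / a))  # The two columns should agree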
} \examples{ bdata1 <- data.frame(x2 = runif(nn <- 1000))
bdata1 <- transform(bdata1, shape = exp(-0.5 + x2), scale = exp(1.5))
bdata1 <- transform(bdata1, y = rbisa(nn, scale, shape))
fit1 <- vglm(y ~ x2, bisa(zero = 1), data = bdata1, trace = TRUE)
coef(fit1, matrix = TRUE)
\dontrun{ bdata2 <- data.frame(shape = exp(-0.5), scale = exp(0.5))
bdata2 <- transform(bdata2, y = rbisa(nn, scale, shape))
fit <- vglm(y ~ 1, bisa, data = bdata2, trace = TRUE)
with(bdata2, hist(y, prob = TRUE, ylim = c(0, 0.5), col = "lightblue"))
coef(fit, matrix = TRUE)
with(bdata2, mean(y))
head(fitted(fit))
x <- with(bdata2, seq(0, max(y), len = 200))
lines(dbisa(x, Coef(fit)[1], Coef(fit)[2]) ~ x, data = bdata2,
      col = "orange", lwd = 2) } }
\keyword{models} \keyword{regression} VGAM/man/pgamma.deriv.Rd0000644000176200001440000000623213135276753014437 0ustar liggesusers\name{pgamma.deriv} \alias{pgamma.deriv} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Derivatives of the Incomplete Gamma Integral } \description{ The first two derivatives of the incomplete gamma integral. } \usage{ pgamma.deriv(q, shape, tmax = 100) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{q, shape}{ As in \code{\link[stats]{pgamma}} but these must be vectors of positive values only and finite. } % \item{shape}{ % A vector of positive values. % %} \item{tmax}{ Maximum number of iterations allowed in the computation (per \code{q} value). } } \details{ Write \eqn{x = q} and \code{shape =} \eqn{a}. The first and second derivatives with respect to \eqn{q} and \eqn{a} are returned. This function is similar in spirit to \code{\link[stats]{pgamma}}; define \deqn{P(a,x) = \frac{1}{\Gamma(a)} \int_0^x t^{a-1} e^{-t} dt}{P(a,x) = 1/Gamma(a) integral_0^x t^(a-1) exp(-t) dt} so that \eqn{P(a, x)} is \code{pgamma(x, a)}. Currently a 6-column matrix is returned (in the future this may change and an argument may be supplied so that only what is required by the user is computed). The computations use a series expansion for \eqn{a \leq x \leq 1}{a <= x <= 1} or \eqn{x < a}, and otherwise a continued fraction expansion. Machine overflow can occur for large values of \eqn{x} when \eqn{x} is much greater than \eqn{a}. } \value{ The first 5 columns, running from left to right, are the derivatives with respect to: \eqn{x}, \eqn{x^2}, \eqn{a}, \eqn{a^2}, \eqn{xa}. The 6th column is \eqn{P(a, x)} (but it is not as accurate as calling \code{\link[stats]{pgamma}} directly). } \references{ Moore, R. J. (1982) Algorithm AS 187: Derivatives of the Incomplete Gamma Integral. \emph{Journal of the Royal Statistical Society, Series C} \emph{(Applied Statistics)}, \bold{31}(3), 330--335. } \author{ T. W. Yee wrote the wrapper function to the Fortran subroutine written by R. J. Moore. The subroutine was modified to run using double precision. The original code came from \code{http://lib.stat.cmu.edu/apstat/187}, but this website has since become stale. } \note{ If convergence does not occur then try increasing the value of \code{tmax}. Yet to do: add more arguments to give greater flexibility in the accuracy desired and to compute only quantities that are required by the user. } %% ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link{pgamma.deriv.unscaled}}, \code{\link[stats]{pgamma}}.
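% The 6th column can be compared against pgamma() directly (a sketch):
% q <- c(1, 2, 5); shape <- 2
% cbind(pgamma.deriv(q, shape)[, 6],
%       pgamma(q, shape))  # Should agree to several decimal places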
} \examples{ x <- seq(2, 10, length = 501) head(ans <- pgamma.deriv(x, 2)) \dontrun{ par(mfrow = c(2, 3)) for (jay in 1:6) plot(x, ans[, jay], type = "l", col = "blue", cex.lab = 1.5, cex.axis = 1.5, las = 1, log = "x", main = colnames(ans)[jay], xlab = "q", ylab = "") } } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{math} % Some part of R-2.15.2/src/library/stats/man/GammaDist.Rd used % An error in the article? % I believe comments in the code (C in fortran). % for \eqn{a \leq x \leq 1}{a <= x <= 1}, and VGAM/man/Coef.qrrvglm-class.Rd0000644000176200001440000001072413135276753015536 0ustar liggesusers\name{Coef.qrrvglm-class} \docType{class} \alias{Coef.qrrvglm-class} \title{Class ``Coef.qrrvglm'' } \description{ The most pertinent matrices and other quantities pertaining to a QRR-VGLM (CQO model). } \section{Objects from the Class}{ Objects can be created by calls of the form \code{Coef(object, ...)} where \code{object} is an object of class \code{"qrrvglm"} (created by \code{\link{cqo}}). In this document, \eqn{R} is the \emph{rank}, \eqn{M} is the number of linear predictors and \eqn{n} is the number of observations. } \section{Slots}{ \describe{ \item{\code{A}:}{Of class \code{"matrix"}, \bold{A}, which are the linear `coefficients' of the matrix of latent variables. It is \eqn{M} by \eqn{R}. } \item{\code{B1}:}{Of class \code{"matrix"}, \bold{B1}. These correspond to terms of the argument \code{noRRR}. } \item{\code{C}:}{Of class \code{"matrix"}, \bold{C}, the canonical coefficients. It has \eqn{R} columns. } \item{\code{Constrained}:}{Logical. Whether the model is a constrained ordination model. } \item{\code{D}:}{Of class \code{"array"}, \code{D[,,j]} is an order-\code{Rank} matrix, for \code{j} = 1,\dots,\eqn{M}. Ideally, these are negative-definite in order to make the response curves/surfaces bell-shaped. } \item{\code{Rank}:}{The rank (dimension, number of latent variables) of the RR-VGLM. Called \eqn{R}. } \item{\code{latvar}:}{\eqn{n} by \eqn{R} matrix of latent variable values. } \item{\code{latvar.order}:}{Of class \code{"matrix"}, the permutation returned when the function \code{\link{order}} is applied to each column of \code{latvar}. This enables each column of \code{latvar} to be easily sorted. } \item{\code{Maximum}:}{Of class \code{"numeric"}, the \eqn{M} maximum fitted values. That is, the fitted values at the optimums for \code{noRRR = ~ 1} models. If \code{noRRR} is not \code{~ 1} then these will be \code{NA}s. } \item{\code{NOS}:}{Number of species.} \item{\code{Optimum}:}{Of class \code{"matrix"}, the values of the latent variables where the optimums are. If the curves are not bell-shaped, then the value will be \code{NA} or \code{NaN}.} \item{\code{Optimum.order}:}{Of class \code{"matrix"}, the permutation returned when the function \code{\link{order}} is applied to each column of \code{Optimum}. This enables each row of \code{Optimum} to be easily sorted. } % \item{\code{Diagonal}:}{Vector of logicals: are the % \code{D[,,j]} diagonal? } \item{\code{bellshaped}:}{Vector of logicals: is each response curve/surface bell-shaped? } \item{\code{dispersion}:}{Dispersion parameter(s). } \item{\code{Dzero}:}{Vector of logicals, is each of the response curves linear in the latent variable(s)? It will be if and only if \code{D[,,j]} equals \bold{O}, for \code{j} = 1,\dots,\eqn{M} . 
} \item{\code{Tolerance}:}{Object of class \code{"array"}, \code{Tolerance[,,j]} is an order-\code{Rank} matrix, for \code{j} = 1,\dots,\eqn{M}, being the matrix of tolerances (squared if on the diagonal). These are denoted by \bold{T} in Yee (2004). Ideally, these are positive-definite in order to make the response curves/surfaces bell-shaped. The tolerance matrices satisfy \eqn{T_s = -\frac{1}{2} D_s^{-1}}{T_s = -0.5 * D_s^(-1)}. } } } %\section{Methods}{ %No methods defined with class "Coef.qrrvglm" in the signature. %} \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. } \author{ Thomas W. Yee } %\note{ ~~further notes~~ } % ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link{Coef.qrrvglm}}, \code{\link{cqo}}, \code{print.Coef.qrrvglm}. % \code{qrrvglm-class}, } \examples{ x2 <- rnorm(n <- 100) x3 <- rnorm(n) x4 <- rnorm(n) latvar1 <- 0 + x3 - 2*x4 lambda1 <- exp(3 - 0.5 * ( latvar1-0)^2) lambda2 <- exp(2 - 0.5 * ( latvar1-1)^2) lambda3 <- exp(2 - 0.5 * ((latvar1+4)/2)^2) y1 <- rpois(n, lambda1) y2 <- rpois(n, lambda2) y3 <- rpois(n, lambda3) yy <- cbind(y1, y2, y3) # vvv p1 <- cqo(yy ~ x2 + x3 + x4, fam = poissonff, trace = FALSE) \dontrun{ lvplot(p1, y = TRUE, lcol = 1:3, pch = 1:3, pcol = 1:3) } # vvv print(Coef(p1), digits = 3) } \keyword{classes} VGAM/man/MNSs.Rd0000644000176200001440000000374113135276753012707 0ustar liggesusers\name{MNSs} \alias{MNSs} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The MNSs Blood Group System } \description{ Estimates the three independent parameters of the MNSs blood group system. } \usage{ MNSs(link = "logit", imS = NULL, ims = NULL, inS = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to the three parameters. See \code{\link{Links}} for more choices. } \item{imS, ims, inS}{ Optional initial values for \code{mS}, \code{ms} and \code{nS} respectively. A \code{NULL} means they are computed internally. } } \details{ There are three independent parameters: \code{m_S}, \code{m_s}, \code{n_S}, say, so that \code{n_s = 1 - m_S - m_s - n_S}. We let the eta vector (transposed) be \code{(g(m_S), g(m_s), g(n_S))} where \code{g} is the link function. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Elandt-Johnson, R. C. (1971) \emph{Probability Models and Statistical Methods in Genetics}, New York: Wiley. } \author{ T. W. Yee } \note{ The input can be a 6-column matrix of counts, where the columns are MS, Ms, MNS, MNs, NS, Ns (in order). Alternatively, the input can be a 6-column matrix of proportions (so each row adds to 1) and the \code{weights} argument is used to specify the total number of counts for each row. } \seealso{ \code{\link{AA.Aa.aa}}, \code{\link{AB.Ab.aB.ab}}, \code{\link{ABO}}, \code{\link{A1A2A3}}.
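% A sketch of the proportions-plus-weights input described in the Note
% above (the counts are those of the example below; fitp is hypothetical):
% ymat <- cbind(MS = 295, Ms = 107, MNS = 379, MNs = 322, NS = 102, Ns = 214)
% fitp <- vglm(ymat / sum(ymat) ~ 1, MNSs, weights = sum(ymat))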
% \code{\link{AB.Ab.aB.ab2}}, } \examples{ # Order matters only: y <- cbind(MS = 295, Ms = 107, MNS = 379, MNs = 322, NS = 102, Ns = 214) fit <- vglm(y ~ 1, MNSs("logit", .25, .28, .08), trace = TRUE) fit <- vglm(y ~ 1, MNSs(link = logit), trace = TRUE, crit = "coef") Coef(fit) rbind(y, sum(y)*fitted(fit)) sqrt(diag(vcov(fit))) } \keyword{models} \keyword{regression} VGAM/man/bistudentt.Rd0000644000176200001440000000565513135276753014256 0ustar liggesusers\name{bistudentt} \alias{bistudentt} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Bivariate Student-t Family Function } \description{ Estimate the degrees of freedom and correlation parameters of the (bivariate) Student-t distribution by maximum likelihood estimation. } \usage{ bistudentt(ldf = "loglog", lrho = "rhobit", idf = NULL, irho = NULL, imethod = 1, parallel = FALSE, zero = "rho") } %- maybe also 'usage' for other objects documented here. %apply.parint = TRUE, \arguments{ \item{ldf, lrho, idf, irho, imethod}{ Details at \code{\link{CommonVGAMffArguments}}. See \code{\link{Links}} for more link function choices. } \item{parallel, zero}{ Details at \code{\link{CommonVGAMffArguments}}. } } \details{ The density function is \deqn{f(y_1, y_2; \nu, \rho) = \frac{1}{2\pi\sqrt{1-\rho^2}} \left( 1 + (y_1^2 + y_2^2 - 2\rho y_1 y_2) / (\nu (1-\rho^2)) \right)^{-(\nu+2)/2} }{% f(y1, y2; nu, rho) = (1/(2*pi*sqrt(1-rho^2))) * (1 + (y1^2 + y2^2 - 2*rho*y1*y2) / (nu*(1-rho^2)))^(-(nu+2)/2) } for \eqn{-1 < \rho < 1}{-1 < rho < 1}, and real \eqn{y_1}{y1} and \eqn{y_2}{y2}. % The support of the function is the interior of the unit square; % however, values of 0 and/or 1 are not allowed. % The marginal distributions are the standard uniform distributions. % When \eqn{\rho = 0}{rho=0} the random variables are % independent. This \pkg{VGAM} family function can handle multiple responses, for example, a six-column matrix where the first 2 columns are the first out of three responses, the next 2 columns being the next response, etc. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Schepsmeier, U. and Stober, J. (2013) Derivatives and Fisher information of bivariate copulas. \emph{Statistical Papers}. } \author{ T. W. Yee, with help from Thibault Vatter. } \note{ The response matrix must have a multiple of two columns. Currently, the fitted value is a matrix with the same number of columns and values equal to 0.0. } \section{Warning }{ The working weight matrices have not been fully checked. } \seealso{ \code{\link{dbistudentt}}, \code{\link{binormal}}, \code{\link[stats]{pt}}. } \examples{ nn <- 1000 mydof <- loglog(1, inverse = TRUE) ymat <- cbind(rt(nn, df = mydof), rt(nn, df = mydof)) bdata <- data.frame(y1 = ymat[, 1], y2 = ymat[, 2], y3 = ymat[, 1], y4 = ymat[, 2], x2 = runif(nn)) summary(bdata) \dontrun{ plot(ymat, col = "blue") } fit1 <- vglm(cbind(y1, y2, y3, y4) ~ 1, # 2 responses, e.g., (y1,y2) is the 1st fam = bistudentt, # crit = "coef", # Sometimes a good idea data = bdata, trace = TRUE) coef(fit1, matrix = TRUE) Coef(fit1) head(fitted(fit1)) summary(fit1) } \keyword{models} \keyword{regression} % VGAM/man/zibinomUC.Rd0000644000176200001440000000623713135276753013767 0ustar liggesusers\name{Zibinom} \alias{Zibinom} \alias{dzibinom} \alias{pzibinom} \alias{qzibinom} \alias{rzibinom} %- Also NEED an '\alias' for EACH other topic documented here.
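% A one-line check (a sketch) of the mixture formula in the Details
% section below, P(Y=0) = phi + (1-phi) * P(W=0):
% all.equal(dzibinom(0, size = 10, prob = 0.2, pstr0 = 0.5),
%           0.5 + 0.5 * dbinom(0, size = 10, prob = 0.2))  # Should be TRUE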
\title{ Zero-Inflated Binomial Distribution } \description{ Density, distribution function, quantile function and random generation for the zero-inflated binomial distribution with parameter \code{pstr0}. } \usage{ dzibinom(x, size, prob, pstr0 = 0, log = FALSE) pzibinom(q, size, prob, pstr0 = 0) qzibinom(p, size, prob, pstr0 = 0) rzibinom(n, size, prob, pstr0 = 0) } %- maybe also 'usage' for other objects documented here. %pzibinom(q, size, prob, pstr0 = 0, lower.tail = TRUE, log.p = FALSE) %qzibinom(p, size, prob, pstr0 = 0, lower.tail = TRUE, log.p = FALSE) \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{size}{number of trials. It is the \eqn{N} symbol in the formula given in \code{\link{zibinomial}}. } \item{prob}{probability of success on each trial. } \item{n}{ Same as in \code{\link[stats]{runif}}. } % \item{log, log.p, lower.tail}{ \item{log}{ Same as \code{\link[stats:Binomial]{pbinom}}.} \item{pstr0}{ Probability of a structural zero (i.e., ignoring the binomial distribution), called \eqn{\phi}{phi}. The default value of \eqn{\phi=0}{phi=0} corresponds to the response having an ordinary binomial distribution. } } \details{ The probability function of \eqn{Y} is 0 with probability \eqn{\phi}{phi}, and \eqn{Binomial(size, prob)}{Binomial(size, prob)} with probability \eqn{1-\phi}{1-phi}. Thus \deqn{P(Y=0) =\phi + (1-\phi) P(W=0)}{% P(Y=0) = phi + (1-phi) * P(W=0)} where \eqn{W} is distributed \eqn{Binomial(size, prob)}{Binomial(size, prob)}. } \value{ \code{dzibinom} gives the density, \code{pzibinom} gives the distribution function, \code{qzibinom} gives the quantile function, and \code{rzibinom} generates random deviates. } %\references{ } \author{ T. W. Yee } \note{ The argument \code{pstr0} is recycled to the required length, and must have values which lie in the interval \eqn{[0,1]}. These functions actually allow for \emph{zero-deflation}. That is, the resulting probability of a zero count is \emph{less than} the nominal value of the parent distribution. See \code{\link{Zipois}} for more information. } \seealso{ \code{\link{zibinomial}}, \code{\link[stats:Binomial]{dbinom}}. } \examples{ prob <- 0.2; size <- 10; pstr0 <- 0.5 (ii <- dzibinom(0:size, size, prob, pstr0 = pstr0)) max(abs(cumsum(ii) - pzibinom(0:size, size, prob, pstr0 = pstr0))) # Should be 0 table(rzibinom(100, size, prob, pstr0 = pstr0)) table(qzibinom(runif(100), size, prob, pstr0 = pstr0)) round(dzibinom(0:10, size, prob, pstr0 = pstr0) * 100) # Should be similar \dontrun{ x <- 0:size barplot(rbind(dzibinom(x, size, prob, pstr0 = pstr0), dbinom(x, size, prob)), beside = TRUE, col = c("blue", "green"), ylab = "Probability", main = paste("ZIB(", size, ", ", prob, ", pstr0 = ", pstr0, ") (blue) vs", " Binomial(", size, ", ", prob, ") (green)", sep=""), names.arg = as.character(x), las = 1, lwd = 2) } } \keyword{distribution} VGAM/man/rec.exp1.Rd0000644000176200001440000000375313135276753013517 0ustar liggesusers\name{rec.exp1} \alias{rec.exp1} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Upper Record Values from a 1-parameter Exponential Distribution } \description{ Maximum likelihood estimation of the rate parameter of a 1-parameter exponential distribution when the observations are upper record values. } \usage{ rec.exp1(lrate = "loge", irate = NULL, imethod = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lrate}{ Link function applied to the rate parameter. See \code{\link{Links}} for more choices. 
} \item{irate}{ Numeric. Optional initial values for the rate. The default value \code{NULL} means they are computed internally, with the help of \code{imethod}. } \item{imethod}{ Integer, either 1, 2 or 3. Initialization method; three algorithms are implemented. Choose another value if convergence fails, or use \code{irate}. } } \details{ The response must be a vector or one-column matrix with strictly increasing values. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Arnold, B. C., Balakrishnan, N. and Nagaraja, H. N. (1998) \emph{Records}, New York: John Wiley & Sons. } \author{ T. W. Yee } \note{ By default, this family function has the intercept-only MLE as the initial value, therefore convergence may only take one iteration. Fisher scoring is used. } \seealso{ \code{\link{exponential}}. } \examples{ rawy <- rexp(n <- 10000, rate = exp(1)) y <- unique(cummax(rawy)) # Keep only the records length(y) / y[length(y)] # MLE of rate fit <- vglm(y ~ 1, rec.exp1, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) } \keyword{models} \keyword{regression} %# Keep only the records %delete = c(FALSE, rep(TRUE, len = n-1)) %for (i in 2:length(rawy)) % if (rawy[i] > max(rawy[1:(i-1)])) delete[i] = FALSE %(y = rawy[!delete]) VGAM/man/yeo.johnson.Rd0000644000176200001440000000470013135276753014334 0ustar liggesusers\name{yeo.johnson} \alias{yeo.johnson} %- Also NEED an '\alias' for EACH other topic documented here. \title{Yeo-Johnson Transformation} \description{ Computes the Yeo-Johnson transformation, which is a normalizing transformation. } \usage{ yeo.johnson(y, lambda, derivative = 0, epsilon = sqrt(.Machine$double.eps), inverse = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{y}{Numeric, a vector or matrix. } \item{lambda}{Numeric. It is recycled to the same length as \code{y} if necessary. } \item{derivative}{Non-negative integer. The default is the ordinary function evaluation, otherwise the derivative with respect to \code{lambda}.} \item{epsilon}{ Numeric and positive value. The tolerance given to values of \code{lambda} when comparing it to 0 or 2. } \item{inverse}{ Logical. Return the inverse transformation? } } \details{ The Yeo-Johnson transformation can be thought of as an extension of the Box-Cox transformation. It handles both positive and negative values, whereas the Box-Cox transformation only handles positive values. Both can be used to transform the data so as to improve normality. They can be used to perform LMS quantile regression. } \value{ The Yeo-Johnson transformation or its inverse, or its derivatives with respect to \code{lambda}, of \code{y}. } \references{ Yeo, I.-K. and Johnson, R. A. (2000) A new family of power transformations to improve normality or symmetry. \emph{Biometrika}, \bold{87}, 954--959. Yee, T. W. (2004) Quantile regression via vector generalized additive models. \emph{Statistics in Medicine}, \bold{23}, 2295--2315. } \author{ Thomas W. Yee } \note{ If \code{inverse = TRUE} then the argument \code{derivative = 0} is required. } \seealso{ \code{\link{lms.yjn}}, \code{\link[MASS]{boxcox}}.
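% Two quick identity checks (a sketch): lambda = 1 gives the identity
% transformation, and inverse = TRUE undoes the transformation:
% y <- seq(-2, 2, by = 0.5)
% max(abs(yeo.johnson(y, lambda = 1) - y))  # Should be 0
% lam <- 0.5
% max(abs(yeo.johnson(yeo.johnson(y, lam), lam, inverse = TRUE) - y))  # ~0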
} \examples{ y <- seq(-4, 4, len = (nn <- 200)) ltry <- c(0, 0.5, 1, 1.5, 2) # Try these values of lambda lltry <- length(ltry) psi <- matrix(as.numeric(NA), nn, lltry) for (ii in 1:lltry) psi[, ii] <- yeo.johnson(y, lambda = ltry[ii]) \dontrun{ matplot(y, psi, type = "l", ylim = c(-4, 4), lwd = 2, lty = 1:lltry, ylab = "Yeo-Johnson transformation", col = 1:lltry, las = 1, main = "Yeo-Johnson transformation with some values of lambda") abline(v = 0, h = 0) legend(x = 1, y = -0.5, lty = 1:lltry, legend = as.character(ltry), lwd = 2, col = 1:lltry) } } \keyword{models} \keyword{regression} VGAM/man/binom2.rhoUC.Rd0000644000176200001440000000702713135276753014275 0ustar liggesusers\name{Binom2.rho} \alias{Binom2.rho} \alias{dbinom2.rho} \alias{rbinom2.rho} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Bivariate Probit Model } \description{ Density and random generation for a bivariate probit model. The correlation parameter rho is the measure of dependency. } \usage{ rbinom2.rho(n, mu1, mu2 = if (exchangeable) mu1 else stop("argument 'mu2' not specified"), rho = 0, exchangeable = FALSE, twoCols = TRUE, colnames = if (twoCols) c("y1","y2") else c("00", "01", "10", "11"), ErrorCheck = TRUE) dbinom2.rho(mu1, mu2 = if (exchangeable) mu1 else stop("'mu2' not specified"), rho = 0, exchangeable = FALSE, colnames = c("00", "01", "10", "11"), ErrorCheck = TRUE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{n}{ number of observations. Same as in \code{\link[stats]{runif}}. The arguments \code{mu1}, \code{mu2}, \code{rho} are recycled to this value. } \item{mu1, mu2}{ The marginal probabilities. Only \code{mu1} is needed if \code{exchangeable = TRUE}. Values should be between 0 and 1. } \item{rho}{ The correlation parameter. Must be numeric and lie between \eqn{-1} and \eqn{1}. The default value of zero means the responses are uncorrelated. } \item{exchangeable}{ Logical. If \code{TRUE}, the two marginal probabilities are constrained to be equal. } \item{twoCols}{ Logical. If \code{TRUE}, then a \eqn{n} \eqn{\times}{*} \eqn{2} matrix of 1s and 0s is returned. If \code{FALSE}, then a \eqn{n} \eqn{\times}{*} \eqn{4} matrix of 1s and 0s is returned. } \item{colnames}{ The \code{dimnames} argument of \code{\link[base]{matrix}} is assigned \code{list(NULL, colnames)}. } \item{ErrorCheck}{ Logical. Do some error checking of the input parameters? } } \details{ The function \code{rbinom2.rho} generates data coming from a bivariate probit model. The data might be fitted with the \pkg{VGAM} family function \code{\link{binom2.rho}}. The function \code{dbinom2.rho} does not really compute the density (because that does not make sense here) but rather returns the four joint probabilities. } \value{ The function \code{rbinom2.rho} returns either a 2 or 4 column matrix of 1s and 0s, depending on the argument \code{twoCols}. The function \code{dbinom2.rho} returns a 4 column matrix of joint probabilities; each row adds up to unity. } \author{ T. W. Yee } \seealso{ \code{\link{binom2.rho}}. 
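% A small sanity check (a sketch): the four joint probabilities returned
% by dbinom2.rho() should sum to unity in each row:
% rowSums(dbinom2.rho(mu1 = 0.6, mu2 = 0.7, rho = 0.3))  # Should be 1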
} \examples{ (myrho <- rhobit(2, inverse = TRUE)) # Example 1 ymat <- rbinom2.rho(nn <- 2000, mu1 = 0.8, rho = myrho, exch = TRUE) (mytab <- table(ymat[, 1], ymat[, 2], dnn = c("Y1", "Y2"))) fit <- vglm(ymat ~ 1, binom2.rho(exch = TRUE)) coef(fit, matrix = TRUE) bdata <- data.frame(x2 = sort(runif(nn))) # Example 2 bdata <- transform(bdata, mu1 = probit(-2+4*x2, inverse = TRUE), mu2 = probit(-1+3*x2, inverse = TRUE)) dmat <- with(bdata, dbinom2.rho(mu1, mu2, myrho)) ymat <- with(bdata, rbinom2.rho(nn, mu1, mu2, myrho)) fit2 <- vglm(ymat ~ x2, binom2.rho, data = bdata) coef(fit2, matrix = TRUE) \dontrun{ matplot(with(bdata, x2), dmat, lty = 1:4, col = 1:4, type = "l", main = "Joint probabilities", ylim = 0:1, lwd = 2, ylab = "Probability") legend(x = 0.25, y = 0.9, lty = 1:4, col = 1:4, lwd = 2, legend = c("1 = (y1=0, y2=0)", "2 = (y1=0, y2=1)", "3 = (y1=1, y2=0)", "4 = (y1=1, y2=1)")) } } \keyword{distribution} VGAM/man/fisherz.Rd0000644000176200001440000000552013135276753013536 0ustar liggesusers\name{fisherz} \alias{fisherz} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Fisher's Z Link Function } \description{ Computes the Fisher Z transformation, including its inverse and the first two derivatives. } \usage{ fisherz(theta, bminvalue = NULL, bmaxvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{bminvalue, bmaxvalue}{ Optional boundary values. Values of \code{theta} which are less than or equal to \eqn{-1} can be replaced by \code{bminvalue} before computing the link function value. Values of \code{theta} which are greater than or equal to \eqn{1} can be replaced by \code{bmaxvalue} before computing the link function value. See \code{\link{Links}}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The \code{fisherz} link function is commonly used for parameters that lie between \eqn{-1} and \eqn{1}. Numerical values of \code{theta} close to \eqn{-1} or \eqn{1} or out of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. } \value{ For \code{deriv = 0}, \code{0.5 * log((1+theta)/(1-theta))} (same as \code{atanh(theta)}) when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{(exp(2*theta)-1)/(exp(2*theta)+1)} (same as \code{tanh(theta)}). For \code{deriv = 1}, the function returns \emph{d} \code{eta} / \emph{d} \code{theta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. Here, all logarithms are natural logarithms, i.e., to base \emph{e}. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ Numerical instability may occur when \code{theta} is close to \eqn{-1} or \eqn{1}. One way of overcoming this is to use, e.g., \code{bminvalue}. The link function \code{\link{rhobit}} is very similar to \code{fisherz}; it is just twice the value of \code{fisherz}. This link function may be renamed to \code{atanhlink} in the near future. } \seealso{ \code{\link{Links}}, \code{\link{rhobit}}, \code{\link{atanh}}, \code{\link{logit}}.
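% Identity checks (a sketch) that follow from the Value and Note sections:
% p <- seq(-0.9, 0.9, by = 0.1)
% max(abs(fisherz(p) - atanh(p)))                 # Should be 0
% max(abs(fisherz(p, inverse = TRUE) - tanh(p)))  # Should be 0
% max(abs(rhobit(p) - 2 * fisherz(p)))            # Should be 0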
} \examples{ theta <- seq(-0.99, 0.99, by = 0.01) y <- fisherz(theta) \dontrun{ plot(theta, y, type = "l", las = 1, ylab = "", main = "fisherz(theta)", col = "blue") abline(v = (-1):1, h = 0, lty = 2, col = "gray") } x <- c(seq(-1.02, -0.98, by = 0.01), seq(0.97, 1.02, by = 0.01)) fisherz(x) # Has NAs fisherz(x, bminvalue = -1 + .Machine$double.eps, bmaxvalue = 1 - .Machine$double.eps) # Has no NAs } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/pneumo.Rd0000644000176200001440000000226013135276753013365 0ustar liggesusers\name{pneumo} \alias{pneumo} \docType{data} \title{Pneumoconiosis in Coalminers Data} \description{ The \code{pneumo} data frame has 8 rows and 4 columns. Exposure time is explanatory, and there are 3 ordinal response variables. } \usage{data(pneumo)} \format{ This data frame contains the following columns: \describe{ \item{exposure.time}{a numeric vector, in years} \item{normal}{a numeric vector, counts} \item{mild}{a numeric vector, counts} \item{severe}{a numeric vector, counts} } } \details{ These were collected from coalface workers. In the original data set, the two most severe categories were combined. } \source{ Ashford, J. R. (1959) An approach to the analysis of data for semi-quantal responses in biological assay. \emph{Biometrics}, \bold{15}, 573--581. } \seealso{ \code{\link{cumulative}}. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \examples{ # Fit the proportional odds model, p.179, in McCullagh and Nelder (1989) pneumo <- transform(pneumo, let = log(exposure.time)) vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo) } \keyword{datasets} VGAM/man/SURff.Rd0000644000176200001440000001250613135276753013053 0ustar liggesusers\name{SURff} \alias{SURff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Seemingly Unrelated Regressions Family Function %% ~~function to do ... ~~ } \description{ Fits a system of seemingly unrelated regressions. %% ~~ A concise (1-5 lines) description of what the function does. ~~ } \usage{ SURff(mle.normal = FALSE, divisor = c("n", "n-max(pj,pk)", "sqrt((n-pj)*(n-pk))"), parallel = FALSE, Varcov = NULL, matrix.arg = FALSE) } %- maybe also 'usage' for other objects documented here. %apply.parint = TRUE, \arguments{ % \item{estimator}{ %Character. %What estimator is computed. %% ~~Describe \code{estimator} here~~ %} \item{mle.normal}{ Logical. If \code{TRUE} then the MLE, assuming multivariate normal errors, is computed; the effect is just to add a \code{loglikelihood} slot to the returned object, so that the result is the \emph{maximum likelihood estimator}. } \item{divisor}{ Character; partial matching is allowed and the first choice is the default. The divisor for the estimate of the covariances. If \code{"n"} then the estimate will be biased. If one of the others, then the estimate will be unbiased for some elements. If \code{mle.normal = TRUE} and this argument is not \code{"n"} then a warning or an error will result. } \item{parallel}{ See \code{\link{CommonVGAMffArguments}}. If \code{parallel = TRUE} then the constraint applies to the intercept too. } \item{Varcov}{ Numeric. This may be assigned a variance-covariance matrix of the errors. If \code{matrix.arg} then this is a \eqn{M \times M}{M x M} matrix. If \code{!matrix.arg} then this is a \eqn{M \times M}{M x M} matrix in matrix-band format (a vector with at least \eqn{M} and at most \code{M*(M+1)/2} elements). } \item{matrix.arg}{ Logical. Of length one.
} } \details{ Proposed by Zellner (1962), the basic seemingly unrelated regressions (SUR) model is a set of LMs (\eqn{M > 1} of them) tied together at the error term level. Each LM's model matrix may potentially have its own set of predictor variables. Zellner's efficient (ZEF) estimator (also known as \emph{Zellner's two-stage Aitken estimator}) can be obtained by setting \code{maxit = 1} (and possibly \code{divisor = "sqrt"} or \code{divisor = "n-max"}). The default value of \code{maxit} (in \code{\link{vglm.control}}) probably means the \emph{iterative GLS} (IGLS) estimator is computed because IRLS will probably iterate to convergence. IGLS means, at each iteration, the residuals are used to estimate the error variance-covariance matrix, and then the matrix is used in the GLS. The IGLS estimator is also known as \emph{Zellner's iterative Aitken estimator}, or IZEF. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Zellner, A. (1962) An Efficient Method of Estimating Seemingly Unrelated Regressions and Tests for Aggregation Bias. \emph{J. Amer. Statist. Assoc.}, \bold{57}(298), 348--368. Kmenta, J. and Gilbert, R. F. (1968) Small Sample Properties of Alternative Estimators of Seemingly Unrelated Regressions. \emph{J. Amer. Statist. Assoc.}, \bold{63}(324), 1180--1200. } \author{ T. W. Yee. } \section{Warning }{ The default convergence criterion may be a little loose. Try setting \code{epsilon = 1e-11}, especially with \code{mle.normal = TRUE}. } \note{ The fitted object has slot \code{@extra$ncols.X.lm} which is an \eqn{M}-vector with the number of parameters for each LM. Also, \code{@misc$values.divisor} is the \eqn{M}-vector of \code{divisor} values. Constraint matrices are needed in order to specify which response variables each term on the RHS of the formula is a regressor for. See the \code{constraints} argument of \code{\link{vglm}} for more information. % This \pkg{VGAM} family function is currently experimental. } %% ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link{uninormal}}, \code{\link{gew}}. } \examples{ # Obtain some of the results of p.1199 of Kmenta and Gilbert (1968) clist <- list("(Intercept)" = diag(2), "capital.g" = rbind(1, 0), "value.g" = rbind(1, 0), "capital.w" = rbind(0, 1), "value.w" = rbind(0, 1)) zef1 <- vglm(cbind(invest.g, invest.w) ~ capital.g + value.g + capital.w + value.w, SURff(divisor = "sqrt"), maxit = 1, data = gew, trace = TRUE, constraints = clist) round(coef(zef1, matrix = TRUE), digits = 4) # ZEF zef1@extra$ncols.X.lm zef1@misc$divisor zef1@misc$values.divisor round(sqrt(diag(vcov(zef1))), digits = 4) # SEs nobs(zef1, type = "lm") df.residual(zef1, type = "lm") mle1 <- vglm(cbind(invest.g, invest.w) ~ capital.g + value.g + capital.w + value.w, SURff(mle.normal = TRUE), epsilon = 1e-11, data = gew, trace = TRUE, constraints = clist) round(coef(mle1, matrix = TRUE), digits = 4) # MLE round(sqrt(diag(vcov(mle1))), digits = 4) # SEs } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory.
\keyword{models} \keyword{regression} % Prior to 20141108: % SURff(mle.normal = TRUE, divisor = "n-max"), VGAM/man/ucberk.Rd0000644000176200001440000000365213135276753013341 0ustar liggesusers\name{ucberk} \alias{ucberk} \docType{data} \title{ University of California, Berkeley, Graduate Admissions } \description{ University of California, Berkeley, graduate admissions: counts cross-classified by acceptance/rejection and gender, for the six largest departments. } \usage{data(ucberk)} \format{ A data frame with 6 departmental groups and the following 5 columns. \describe{ \item{m.deny}{Counts of men denied admission. } \item{m.admit}{Counts of men admitted. } \item{w.deny}{Counts of women denied admission. } \item{w.admit}{Counts of women admitted. } \item{dept}{Department (the six largest), called \code{A}, \code{B}, \dots, \code{F}. } } } \details{ From Bickel et al. (1975), the data consist of applications for admission to graduate study at the University of California, Berkeley, for the fall 1973 quarter. In the admissions cycle for that quarter, the Graduate Division at Berkeley received approximately 15,000 applications, some of which were later withdrawn or transferred to a different proposed entry quarter by the applicants. Of the applications finally remaining for the fall 1973 cycle, 12,763 were sufficiently complete to permit a decision. There were about 101 graduate department and interdepartmental graduate majors. There were 8442 male applicants and 4321 female applicants. About 44 percent of the males and about 35 percent of the females were admitted. The data are well-known for illustrating Simpson's paradox. } %\source{ % % %} \references{ Bickel, P. J., Hammel, E. A. and O'Connell, J. W. (1975) Sex bias in graduate admissions: data from Berkeley. \emph{Science}, \bold{187}(4175): 398--404. Freedman, D., Pisani, R. and Purves, R. (1998) Chapter 2 of \emph{Statistics}, 3rd ed., W. W. Norton & Company. } \examples{ summary(ucberk) } \keyword{datasets} % 7 February 1975 % Bickel, et al., 187 (4175): 398-404 VGAM/man/zoabetaUC.Rd0000644000176200001440000000643213135276753013744 0ustar liggesusers\name{Zoabeta} \alias{Zoabeta} \alias{dzoabeta} \alias{pzoabeta} \alias{qzoabeta} \alias{rzoabeta} \title{The Zero/One-Inflated Beta Distribution} \description{ Density, distribution function, and random generation for the zero/one-inflated beta distribution. } \usage{ dzoabeta(x, shape1, shape2, pobs0 = 0, pobs1 = 0, log = FALSE, tol = .Machine$double.eps) pzoabeta(q, shape1, shape2, pobs0 = 0, pobs1 = 0, lower.tail = TRUE, log.p = FALSE, tol = .Machine$double.eps) qzoabeta(p, shape1, shape2, pobs0 = 0, pobs1 = 0, lower.tail = TRUE, log.p = FALSE, tol = .Machine$double.eps) rzoabeta(n, shape1, shape2, pobs0 = 0, pobs1 = 0, tol = .Machine$double.eps) } \arguments{ \item{x, q, p, n}{Same as \code{\link[stats]{Beta}}. } \item{pobs0, pobs1}{ vector of probabilities that 0 and 1 are observed (\eqn{\omega_0}{omega_0} and \eqn{\omega_1}{omega_1}). } \item{shape1, shape2}{ Same as \code{\link[stats]{Beta}}. They are called \code{a} and \code{b} in \code{\link[base:Special]{beta}} respectively. } \item{lower.tail, log, log.p}{ Same as \code{\link[stats]{Beta}}. } \item{tol}{ Numeric, tolerance for testing equality with 0 and 1. } } \value{ \code{dzoabeta} gives the density, \code{pzoabeta} gives the distribution function, \code{qzoabeta} gives the quantile, and \code{rzoabeta} generates random deviates. } \author{ Xiangjie Xue and T. W.
Yee } \details{ This distribution is a mixture of a discrete distribution with a continuous distribution. The cumulative distribution function of \eqn{Y} is \deqn{F(y) = (1 - \omega_0 - \omega_1) B(y) + \omega_0 \times I[0 \leq y] + \omega_1 \times I[1 \leq y]}{% F(y) = (1 - omega_0 - omega_1) B(y) + omega_0 * I[0 <= y] + omega_1 * I[1 <= y]} where \eqn{B(y)} is the cumulative distribution function of the beta distribution with the same shape parameters (\code{\link[stats]{pbeta}}), \eqn{\omega_0}{omega_0} is the inflated probability at 0 and \eqn{\omega_1}{omega_1} is the inflated probability at 1. The default values of \eqn{\omega_j}{omega_j} mean that these functions behave like the ordinary \code{\link[stats]{Beta}} when only the essential arguments are inputted. } %\note{ % % % %} \seealso{ \code{\link{zoabetaR}}, \code{\link[base:Special]{beta}}, \code{\link{betaR}}, \code{\link{Betabinom}}. } \examples{ \dontrun{ N <- 1000; y <- rzoabeta(N, 2, 3, 0.2, 0.2) hist(y, probability = TRUE, border = "blue", las = 1, main = "Blue = 0- and 1-altered; orange = ordinary beta") sum(y == 0) / N # Proportion of 0s sum(y == 1) / N # Proportion of 1s Ngrid <- 1000 lines(seq(0, 1, length = Ngrid), dbeta(seq(0, 1, length = Ngrid), 2, 3), col = "orange") lines(seq(0, 1, length = Ngrid), col = "blue", dzoabeta(seq(0, 1, length = Ngrid), 2, 3, 0.2, 0.2)) } } \keyword{distribution} %dzoabeta(c(-1, NA, 0.5, 2), 2, 3, 0.2, 0.2) # should be NA %dzoabeta(0.5, c(NA, Inf), 4, 0.2, 0.1) # should be NA %dzoabeta(0.5, 2.2, 4.3, NA, 0.3) # should be NA %dzoabeta(0.5, 2, 3, 0.5, 0.6) # should be NaN %set.seed(1234); k <- runif(1000) %sum(abs(qzoabeta(k, 2, 3) - qbeta(k, 2, 3)) > .Machine$double.eps) # Should be 0 %sum(abs(pzoabeta(k, 10, 7) - pbeta(k, 10, 7)) > .Machine$double.eps) # Should be 0 VGAM/man/foldsqrt.Rd0000644000176200001440000001061213135276753013720 0ustar liggesusers\name{foldsqrt} \alias{foldsqrt} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Folded Square Root Link Function } \description{ Computes the folded square root transformation, including its inverse and the first two derivatives. } \usage{ foldsqrt(theta, min = 0, max = 1, mux = sqrt(2), inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{min, max, mux}{ These are called \eqn{L}, \eqn{U} and \eqn{K} below. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The folded square root link function can be applied to parameters that lie between \eqn{L} and \eqn{U} inclusive. Numerical values of \code{theta} out of range result in \code{NA} or \code{NaN}. } \value{ For \code{foldsqrt} with \code{deriv = 0}: \eqn{K (\sqrt{\theta-L} - \sqrt{U-\theta})}{K * (sqrt(theta-L) - sqrt(U-theta))} or \code{mux * (sqrt(theta-min) - sqrt(max-theta))} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then some more complicated function that returns \code{NA} unless \code{theta} is between \code{-mux*sqrt(max-min)} and \code{mux*sqrt(max-min)}. For \code{deriv = 1}, the function returns \emph{d} \code{eta} / \emph{d} \code{theta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. } %\references{ % %} \author{ Thomas W. Yee } \note{ By default, if \code{theta} is 0 or 1 then the link function value is \code{-sqrt(2)} or \code{+sqrt(2)} respectively. These are finite values, therefore one cannot use this link function for general modelling of probabilities because of numerical problems, e.g., with \code{\link{binomialff}} and \code{\link{cumulative}}. See the example below. } \seealso{ \code{\link{Links}}. } \examples{ p <- seq(0.01, 0.99, by = 0.01) foldsqrt(p) max(abs(foldsqrt(foldsqrt(p), inverse = TRUE) - p)) # Should be 0 p <- c(seq(-0.02, 0.02, by = 0.01), seq(0.97, 1.02, by = 0.01)) foldsqrt(p) # Has NAs \dontrun{ p <- seq(0.01, 0.99, by = 0.01) par(mfrow = c(2, 2), lwd = (mylwd <- 2)) y <- seq(-4, 4, length = 100) for (d in 0:1) { matplot(p, cbind(logit(p, deriv = d), foldsqrt(p, deriv = d)), type = "n", col = "purple", ylab = "transformation", las = 1, main = if (d == 0) "Some probability link functions" else "First derivative") lines(p, logit(p, deriv = d), col = "limegreen") lines(p, probit(p, deriv = d), col = "purple") lines(p, cloglog(p, deriv = d), col = "chocolate") lines(p, foldsqrt(p, deriv = d), col = "tan") if (d == 0) { abline(v = 0.5, h = 0, lty = "dashed") legend(0, 4.5, c("logit", "probit", "cloglog", "foldsqrt"), lwd = 2, col = c("limegreen","purple","chocolate", "tan")) } else abline(v = 0.5, lty = "dashed") } for (d in 0) { matplot(y, cbind(logit(y, deriv = d, inverse = TRUE), foldsqrt(y, deriv = d, inverse = TRUE)), type = "n", col = "purple", xlab = "transformation", ylab = "p", lwd = 2, las = 1, main = if (d == 0) "Some inverse probability link functions" else "First derivative") lines(y, logit(y, deriv = d, inverse = TRUE), col = "limegreen") lines(y, probit(y, deriv = d, inverse = TRUE), col = "purple") lines(y, cloglog(y, deriv = d, inverse = TRUE), col = "chocolate") lines(y, foldsqrt(y, deriv = d, inverse = TRUE), col = "tan") if (d == 0) { abline(h = 0.5, v = 0, lty = "dashed") legend(-4, 1, c("logit", "probit", "cloglog", "foldsqrt"), lwd = 2, col = c("limegreen","purple","chocolate", "tan")) } } par(lwd = 1) } # This is lucky to converge fit.h <- vglm(agaaus ~ sm.bs(altitude), binomialff(link = foldsqrt(mux = 5)), data = hunua, trace = TRUE) \dontrun{ plotvgam(fit.h, se = TRUE, lcol = "orange", scol = "orange", main = "Orange is Hunua, Blue is Waitakere") } head(predict(fit.h, hunua, type = "response")) \dontrun{ # The following fails. pneumo <- transform(pneumo, let = log(exposure.time)) fit <- vglm(cbind(normal, mild, severe) ~ let, cumulative(link = foldsqrt(mux = 10), par = TRUE, rev = TRUE), data = pneumo, trace = TRUE, maxit = 200) } } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/gpd.Rd0000644000176200001440000002305013135276753012634 0ustar liggesusers\name{gpd} \alias{gpd} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Generalized Pareto Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter generalized Pareto distribution (GPD). } \usage{ gpd(threshold = 0, lscale = "loge", lshape = logoff(offset = 0.5), percentiles = c(90, 95), iscale = NULL, ishape = NULL, tolshape0 = 0.001, type.fitted = c("percentiles", "mean"), imethod = 1, zero = "shape") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{threshold}{ Numeric, values are recycled if necessary. The threshold value(s), called \eqn{\mu}{mu} below. } \item{lscale}{ Parameter link function for the scale parameter \eqn{\sigma}{sigma}. See \code{\link{Links}} for more choices. } \item{lshape}{ Parameter link function for the shape parameter \eqn{\xi}{xi}. See \code{\link{Links}} for more choices.
The default constrains the parameter to be greater than \eqn{-0.5} because if \eqn{\xi \leq -0.5}{xi <= -0.5} then Fisher scoring does not work. See the Details section below for more information. For the shape parameter, the default \code{\link{logoff}} link has an offset called \eqn{A} below; and then the second linear/additive predictor is \eqn{\log(\xi+A)}{log(xi+A)} which means that \eqn{\xi > -A}{xi > -A}. The working weight matrices are positive definite if \eqn{A = 0.5}. } % \item{Offset}{ % Numeric, of length 1. % Called \eqn{A} below. % Offset value if \code{lshape = "logoff"}. % Then the second linear/additive predictor is % \eqn{\log(\xi+A)}{log(xi+A)} which means that % \eqn{\xi > -A}{xi > -A}. % The working weight matrices are positive definite if \code{Offset = 0.5}. % } \item{percentiles}{ Numeric vector of percentiles used for the fitted values. Values should be between 0 and 100. See the example below for illustration. This argument is ignored if \code{type.fitted = "mean"}. % However, if \code{percentiles = NULL} then the mean % \eqn{\mu + \sigma / (1-\xi)}{mu + sigma / (1-xi)} is returned; % this is only defined if \eqn{\xi<1}{xi<1}. } \item{type.fitted}{ See \code{\link{CommonVGAMffArguments}} for information. The default is to use the \code{percentiles} argument. If \code{"mean"} is chosen, then the mean \eqn{\mu + \sigma / (1-\xi)}{mu + sigma / (1-xi)} is returned as the fitted values, and these are only defined for \eqn{\xi<1}{xi<1}. } \item{iscale, ishape}{ Numeric. Optional initial values for \eqn{\sigma}{sigma} and \eqn{\xi}{xi}. The default is to use \code{imethod} and compute a value internally for each parameter. Values of \code{ishape} should be between \eqn{-0.5} and \eqn{1}. Values of \code{iscale} should be positive. } % \item{rshape}{ % Numeric, of length 2. % Range of \eqn{\xi}{xi} if \code{lshape = "extlogit"} is chosen. % The default values ensures the algorithm works (\eqn{\xi > -0.5}{xi > -0.5}) % and the variance exists (\eqn{\xi < 0.5}{xi < 0.5}). % } \item{tolshape0}{ Passed into \code{\link{dgpd}} when computing the log-likelihood. } % \item{tolshape0}{ % Positive numeric. % Threshold/tolerance value for resting whether \eqn{\xi}{xi} is zero. % If the absolute value of the estimate of \eqn{\xi}{xi} is less than % this value then it will be assumed zero and exponential distribution % derivatives etc. will be used. % } \item{imethod}{ Method of initialization, either 1 or 2. The first is the method of moments, and the second is a variant of this. If neither work, try assigning values to arguments \code{ishape} and/or \code{iscale}. } \item{zero}{ Can be an integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. For one response, the value should be from the set \{1,2\} corresponding respectively to \eqn{\sigma}{sigma} and \eqn{\xi}{xi}. It is often a good idea for the \eqn{\sigma}{sigma} parameter only to be modelled through a linear combination of the explanatory variables because the shape parameter is probably best left as an intercept only: \code{zero = 2}. Setting \code{zero = NULL} means both parameters are modelled with explanatory variables. See \code{\link{CommonVGAMffArguments}} for more details. 
} } \details{ The distribution function of the GPD can be written \deqn{G(y) = 1 - [1 + \xi (y-\mu) / \sigma ]_{+}^{- 1/ \xi} }{% G(y) = 1 - [1 + xi (y-mu)/ sigma ]_{+}^{- 1/ xi} } where \eqn{\mu}{mu} is the location parameter (known, with value \code{threshold}), \eqn{\sigma > 0}{sigma > 0} is the scale parameter, \eqn{\xi}{xi} is the shape parameter, and \eqn{h_+ = \max(h,0)}{h_+ = max(h,0)}. The function \eqn{1-G} is known as the \emph{survivor function}. The limit \eqn{\xi \rightarrow 0}{xi --> 0} gives the \emph{shifted exponential} as a special case: \deqn{G(y) = 1 - \exp[-(y-\mu)/ \sigma]. }{% G(y) = 1 - exp[-(y-mu)/ sigma]. } The support is \eqn{y>\mu}{y>mu} for \eqn{\xi>0}{xi>0}, and \eqn{\mu < y <\mu-\sigma / \xi}{mu < y < mu-sigma/xi} for \eqn{\xi<0}{xi<0}. Smith (1985) showed that the case \eqn{\xi \leq -0.5}{xi <= -0.5} is nonregular; for the (regular) case \eqn{\xi > -0.5}{xi > -0.5} the classical asymptotic theory of maximum likelihood estimators is applicable; this is the default. Although for \eqn{\xi < -0.5}{xi < -0.5} the usual asymptotic properties do not apply, the maximum likelihood estimator generally exists and is superefficient for \eqn{-1 < \xi < -0.5}{-1 < xi < -0.5}, so it is ``better'' than normal. When \eqn{\xi < -1}{xi < -1} the maximum likelihood estimator generally does not exist as it effectively becomes a two-parameter problem. The mean of \eqn{Y} does not exist unless \eqn{\xi < 1}{xi < 1}, and the variance does not exist unless \eqn{\xi < 0.5}{xi < 0.5}. So if you want to fit a model with finite variance use \code{lshape = "extlogit"}. } \note{ The response in the formula of \code{\link{vglm}} and \code{\link{vgam}} is \eqn{y}. Internally, \eqn{y-\mu}{y-mu} is computed. This \pkg{VGAM} family function can handle multiple responses, which are inputted as a matrix. The response stored on the object is the original uncentred data. With functions \code{\link{rgpd}}, \code{\link{dgpd}}, etc., the argument \code{location} matches with the argument \code{threshold} here. } \section{Warning}{ Fitting the GPD by maximum likelihood estimation can be numerically fraught. If \eqn{1 + \xi (y-\mu)/ \sigma \leq 0}{1 + xi*(y-mu)/sigma <= 0} then some crude evasive action is taken but the estimation process can still fail. This is particularly the case if \code{\link{vgam}} with \code{\link{s}} is used. Then smoothing is best done with \code{\link{vglm}} with regression splines (\code{\link[splines]{bs}} or \code{\link[splines]{ns}}) because \code{\link{vglm}} implements half-stepsizing whereas \code{\link{vgam}} doesn't. Half-stepsizing helps handle the problem of straying outside the parameter space. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. However, for this \pkg{VGAM} family function, \code{\link{vglm}} is probably preferred over \code{\link{vgam}} when there is smoothing. } \references{ Yee, T. W. and Stephenson, A. G. (2007) Vector generalized linear and additive extreme value models. \emph{Extremes}, \bold{10}, 1--19. Coles, S. (2001) \emph{An Introduction to Statistical Modeling of Extreme Values}. London: Springer-Verlag. Smith, R. L. (1985) Maximum likelihood estimation in a class of nonregular cases. \emph{Biometrika}, \bold{72}, 67--90. } \author{ T. W. Yee } \seealso{ \code{\link{rgpd}}, \code{\link{meplot}}, \code{\link{gev}}, \code{\link{paretoff}}, \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{s}}.
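% A quick simulation check (a sketch) of the mean formula in Details:
% for xi < 1 the mean is mu + sigma / (1 - xi):
% set.seed(1); Mu <- 0.5; Sigma <- 2; xi <- 0.2
% mean(rgpd(1e5, location = Mu, scale = Sigma, shape = xi))  # Approximately
% Mu + Sigma / (1 - xi)                                      # the same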
} \examples{ # Simulated data from an exponential distribution (xi = 0) Threshold <- 0.5 gdata <- data.frame(y1 = Threshold + rexp(n = 3000, rate = 2)) fit <- vglm(y1 ~ 1, gpd(threshold = Threshold), data = gdata, trace = TRUE) head(fitted(fit)) summary(depvar(fit)) # The original uncentred data coef(fit, matrix = TRUE) # xi should be close to 0 Coef(fit) summary(fit) head(fit@extra$threshold) # Note the threshold is stored here # Check the 90 percentile ii <- depvar(fit) < fitted(fit)[1, "90\%"] 100 * table(ii) / sum(table(ii)) # Should be 90% # Check the 95 percentile ii <- depvar(fit) < fitted(fit)[1, "95\%"] 100 * table(ii) / sum(table(ii)) # Should be 95% \dontrun{ plot(depvar(fit), col = "blue", las = 1, main = "Fitted 90\% and 95\% quantiles") matlines(1:length(depvar(fit)), fitted(fit), lty = 2:3, lwd = 2) } # Another example gdata <- data.frame(x2 = runif(nn <- 2000)) Threshold <- 0; xi <- exp(-0.8) - 0.5 gdata <- transform(gdata, y2 = rgpd(nn, scale = exp(1 + 0.1*x2), shape = xi)) fit <- vglm(y2 ~ x2, gpd(Threshold), data = gdata, trace = TRUE) coef(fit, matrix = TRUE) \dontrun{ # Nonparametric fits # Not so recommended: fit1 <- vgam(y2 ~ s(x2), gpd(Threshold), data = gdata, trace = TRUE) par(mfrow = c(2, 1)) plot(fit1, se = TRUE, scol = "blue") # More recommended: fit2 <- vglm(y2 ~ sm.bs(x2), gpd(Threshold), data = gdata, trace = TRUE) plot(as(fit2, "vgam"), se = TRUE, scol = "blue") } } \keyword{models} \keyword{regression} % % # gdata <- transform(gdata, yy = y2 + rnorm(nn, sd = 0.1)) % % giveWarning = TRUE, imethod = 1, zero = "shape" VGAM/man/ParetoUC.Rd0000644000176200001440000000451313135276753013547 0ustar liggesusers\name{Pareto} \alias{Pareto} \alias{dpareto} \alias{ppareto} \alias{qpareto} \alias{rpareto} \title{The Pareto Distribution} \description{ Density, distribution function, quantile function and random generation for the Pareto(I) distribution with parameters \code{scale} and \code{shape}. } \usage{ dpareto(x, scale = 1, shape, log = FALSE) ppareto(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) qpareto(p, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) rpareto(n, scale = 1, shape) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Same as in \code{\link[stats]{runif}}. } \item{scale, shape}{the \eqn{\alpha}{alpha} and \eqn{k} parameters.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dpareto} gives the density, \code{ppareto} gives the distribution function, \code{qpareto} gives the quantile function, and \code{rpareto} generates random deviates. } \references{ Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011) \emph{Statistical Distributions}, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition. } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{paretoff}}, the \pkg{VGAM} family function for estimating the parameter \eqn{k} by maximum likelihood estimation, for the formula of the probability density function and the range restrictions imposed on the parameters. } %%\note{ %% The Pareto distribution is %%} \seealso{ \code{\link{paretoff}}, \code{\link{ParetoIV}}. 
} \examples{ alpha <- 3; k <- exp(1); x <- seq(2.8, 8, len = 300) \dontrun{ plot(x, dpareto(x, scale = alpha, shape = k), type = "l", main = "Pareto density split into 10 equal areas") abline(h = 0, col = "blue", lty = 2) qvec <- qpareto(seq(0.1, 0.9, by = 0.1), scale = alpha, shape = k) lines(qvec, dpareto(qvec, scale = alpha, shape = k), col = "purple", lty = 3, type = "h") } pvec <- seq(0.1, 0.9, by = 0.1) qvec <- qpareto(pvec, scale = alpha, shape = k) ppareto(qvec, scale = alpha, shape = k) qpareto(ppareto(qvec, scale = alpha, shape = k), scale = alpha, shape = k) - qvec # Should be 0 } \keyword{distribution} VGAM/man/negbinomial.size.Rd0000644000176200001440000000677613135276753015337 0ustar liggesusers\name{negbinomial.size} \alias{negbinomial.size} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Negative Binomial Distribution Family Function With Known Size} \description{ Maximum likelihood estimation of the mean parameter of a negative binomial distribution with known size parameter. } \usage{ negbinomial.size(size = Inf, lmu = "loge", imu = NULL, iprobs.y = 0.35, imethod = 1, ishrinkage = 0.95, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{size}{ Numeric, positive. Same as argument \code{size} of \code{\link[stats:NegBinomial]{rnbinom}}. If the response is a matrix then this is recycled to a matrix of the same dimension, by row (\code{\link[base]{matrix}} with \code{byrow = TRUE}). } \item{lmu, imu}{ Same as \code{\link{negbinomial}}. } \item{iprobs.y, imethod}{ Same as \code{\link{negbinomial}}. } \item{zero, ishrinkage}{ Same as \code{\link{negbinomial}}. } } \details{ This \pkg{VGAM} family function estimates only the mean parameter of the negative binomial distribution. See \code{\link{negbinomial}} for general information. Setting \code{size = 1} gives what might be called the NB-G (geometric model; see Hilbe (2011)). The default, \code{size = Inf}, corresponds to the Poisson distribution. } %\section{Warning}{ % %} \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Hilbe, J. M. (2011) \emph{Negative Binomial Regression}, 2nd Edition. Cambridge: Cambridge University Press. Yee, T. W. (2014) Reduced-rank vector generalized linear models with two linear predictors. \emph{Computational Statistics and Data Analysis}, \bold{71}, 889--902. } \author{ Thomas W. Yee } \note{ If \code{lmu = "nbcanlink"} in \code{negbinomial.size()} then the \code{size} argument here should be assigned and these values are recycled. % is placed inside the \code{earg} % argument of \code{nbcanlink()} as a matrix with conformable size. } \seealso{ \code{\link{negbinomial}}, \code{\link{nbcanlink}} (NB-C model), \code{\link{quasipoissonff}}, \code{\link{poissonff}}, \code{\link[stats:NegBinomial]{rnbinom}}, \code{\link{simulate.vlm}}. % \code{\link[MASS]{rnegbin}}. 
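% Side checks (a sketch) of the special cases mentioned in Details:
% size = 1 is the geometric (NB-G) and size = Inf is the Poisson:
% all.equal(dnbinom(0:5, size = 1, mu = 2), dgeom(0:5, prob = 1/3))  # TRUE
% all.equal(dnbinom(0:5, size = 1e9, mu = 2), dpois(0:5, lambda = 2),
%           tolerance = 1e-6)  # TRUE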
} \examples{ # Simulated data with various multiple responses size1 <- exp(1); size2 <- exp(2); size3 <- exp(0); size4 <- Inf ndata <- data.frame(x2 = runif(nn <- 1000)) ndata <- transform(ndata, eta1 = -1 - 2 * x2, # eta1 must be negative size1 = size1) ndata <- transform(ndata, mu1 = nbcanlink(eta1, size = size1, inv = TRUE)) ndata <- transform(ndata, y1 = rnbinom(nn, mu = mu1, size = size1), # NB-C y2 = rnbinom(nn, mu = exp(2 - x2), size = size2), y3 = rnbinom(nn, mu = exp(3 + x2), size = size3), # NB-G y4 = rpois(nn, lambda = exp(1 + x2))) # Also known as NB-C with size known (Hilbe, 2011) fit1 <- vglm(y1 ~ x2, negbinomial.size(size = size1, lmu = "nbcanlink"), data = ndata, trace = TRUE) coef(fit1, matrix = TRUE) head(fit1@misc$size) # size saved here fit2 <- vglm(cbind(y2, y3, y4) ~ x2, data = ndata, trace = TRUE, negbinomial.size(size = c(size2, size3, size4))) coef(fit2, matrix = TRUE) head(fit2@misc$size) # size saved here } \keyword{models} \keyword{regression} VGAM/man/erlang.Rd0000644000176200001440000000557013135276753013341 0ustar liggesusers\name{erlang} \alias{erlang} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Erlang Distribution } \description{ Estimates the scale parameter of the Erlang distribution by maximum likelihood estimation. } \usage{ erlang(shape.arg, lscale = "loge", imethod = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{shape.arg}{ The shape parameters. The user must specify a positive integer, or integers for multiple responses. They are recycled by row (\code{\link[base]{matrix}} with \code{byrow = TRUE}). } \item{lscale}{ Link function applied to the (positive) \eqn{scale} parameter. See \code{\link{Links}} for more choices. } \item{imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for more details. } } \details{ The Erlang distribution is a special case of the gamma distribution whose \emph{shape} parameter is a positive integer. If \code{shape.arg = 1} then it simplifies to the exponential distribution. As illustrated in the example below, the Erlang distribution is the distribution of the sum of \code{shape.arg} independent and identically distributed exponential random variates. The probability density function of the Erlang distribution is given by \deqn{f(y) = \exp(-y/scale) y^{shape-1} scale^{-shape} / \Gamma(shape)}{% f(y) = exp(-y/scale) y^(shape-1) scale^(-shape) / gamma(shape)} for known positive integer \eqn{shape}, unknown \eqn{scale > 0} and \eqn{y > 0}. Here, \eqn{\Gamma(shape)}{gamma(shape)} is the gamma function, as in \code{\link[base:Special]{gamma}}. The mean of \emph{Y} is \eqn{\mu=shape \times scale}{mu=shape*scale} and its variance is \eqn{shape \times scale^2}{shape*scale^2}. The linear/additive predictor, by default, is \eqn{\eta=\log(scale)}{eta=log(scale)}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Most standard texts on statistical distributions describe this distribution, e.g., Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011) \emph{Statistical Distributions}, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition. } \author{ T. W. Yee } \note{ Multiple responses are permitted. The \code{rate} parameter found in \code{\link{gammaR}} is \code{1/scale} here---see also \code{\link[stats]{rgamma}}. } \seealso{ \code{\link{gammaR}}, \code{\link{exponential}}, \code{\link{simulate.vlm}}.
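% The density formula in Details can be checked against dgamma()
% (a sketch), since the Erlang is a gamma with integer shape:
% y <- seq(0.1, 2, by = 0.1); Shape <- 3; Scale <- 0.5
% max(abs(dgamma(y, shape = Shape, scale = Scale) -
%         exp(-y/Scale) * y^(Shape-1) * Scale^(-Shape) / gamma(Shape)))  # ~0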
} \examples{ rate <- exp(2); myshape <- 3 edata <- data.frame(y = rep(0, nn <- 1000)) for (ii in 1:myshape) edata <- transform(edata, y = y + rexp(nn, rate = rate)) fit <- vglm(y ~ 1, erlang(shape = myshape), data = edata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) # Answer = 1/rate 1/rate summary(fit) } \keyword{models} \keyword{regression} VGAM/man/poisson.pointsUC.Rd0000644000176200001440000000355413135276753015326 0ustar liggesusers\name{PoissonPoints} \alias{PoissonPoints} \alias{dpois.points} %\alias{ppois.points} %\alias{qpois.points} \alias{rpois.points} \title{Poisson Points Distribution} \description{ Density for the PoissonPoints distribution. % distribution function, quantile function % and random generation } \usage{ dpois.points(x, lambda, ostatistic, dimension = 2, log = FALSE) } %ppois.points(q, lambda, ostatistic, dimension = 2, log = FALSE) %qpois.points(p, lambda, ostatistic, dimension = 2, log = FALSE) %rpois.points(n, lambda, ostatistic, dimension = 2, log = FALSE) \arguments{ \item{x}{vector of quantiles.} \item{lambda}{ the mean density of points. } \item{ostatistic}{ positive values, usually integers. } \item{dimension}{ Either 2 and/or 3. } % \item{p}{vector of probabilities.} % \item{n}{number of observations. % Same as \code{\link[stats:Uniform]{runif}}. % } \item{log}{ Logical; if TRUE, the logarithm is returned. } } \value{ \code{dpois.points} gives the density. % and % \code{ppois.points} gives the distribution function, % \code{qpois.points} gives the quantile function, and % \code{rpois.points} generates random deviates. } %\author{ T. W. Yee } \details{ See \code{\link{poisson.points}}, the \pkg{VGAM} family function for estimating the parameters, for the formula of the probability density function and other details. } %\section{Warning }{ %} \seealso{ \code{\link{poisson.points}}, \code{\link[stats:Poisson]{dpois}}, \code{\link{Maxwell}}. } \examples{ \dontrun{ lambda <- 1; xvec <- seq(0, 2, length = 400) plot(xvec, dpois.points(xvec, lambda, ostat = 1, dimension = 2), type = "l", las = 1, col = "blue", sub = "First order statistic", main = paste("PDF of PoissonPoints distribution with lambda = ", lambda, " and on the plane", sep = "")) } } \keyword{distribution} VGAM/man/zapoisson.Rd0000644000176200001440000001724713135276753014122 0ustar liggesusers\name{zapoisson} \alias{zapoisson} \alias{zapoissonff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Altered Poisson Distribution } \description{ Fits a zero-altered Poisson distribution based on a conditional model involving a Bernoulli distribution and a positive-Poisson distribution. } \usage{ zapoisson(lpobs0 = "logit", llambda = "loge", type.fitted = c("mean", "lambda", "pobs0", "onempobs0"), imethod = 1, ipobs0 = NULL, ilambda = NULL, ishrinkage = 0.95, probs.y = 0.35, zero = NULL) zapoissonff(llambda = "loge", lonempobs0 = "logit", type.fitted = c("mean", "lambda", "pobs0", "onempobs0"), imethod = 1, ilambda = NULL, ionempobs0 = NULL, ishrinkage = 0.95, probs.y = 0.35, zero = "onempobs0") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lpobs0}{ Link function for the parameter \eqn{p_0}{pobs0}, called \code{pobs0} here. See \code{\link{Links}} for more choices. } \item{llambda}{ Link function for the usual \eqn{\lambda}{lambda} parameter. See \code{\link{Links}} for more choices. } \item{type.fitted}{ See \code{\link{CommonVGAMffArguments}} and \code{\link{fittedvlm}} for information. 
} \item{lonempobs0}{ Corresponding argument for the other parameterization. See details below. } % \item{epobs0, elambda}{ % epobs0 = list(), elambda = list(), % Extra argument for the respective links. % See \code{earg} in \code{\link{Links}} for general information. % } \item{imethod, ipobs0, ionempobs0, ilambda, ishrinkage}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{probs.y, zero}{ See \code{\link{CommonVGAMffArguments}} for information. % Integer valued vector, usually assigned \eqn{-1} or \eqn{1} if used % at all. Specifies which of the two linear/additive predictors are % modelled as an intercept only. % By default, both linear/additive predictors are modelled using % the explanatory variables. % If \code{zero = 1} then the \eqn{p_0}{pobs0} parameter % (after \code{lpobs0} is applied) is modelled as a single unknown % number that is estimated. It is modelled as a function of the % explanatory variables by \code{zero = NULL}. A negative value % means that the value is recycled, so setting \eqn{-1} means % all \eqn{p_0}{pobs0} are intercept-only (for multiple responses). } } \details{ The response \eqn{Y} is zero with probability \eqn{p_0}{pobs0}, else \eqn{Y} has a positive-Poisson(\eqn{\lambda)}{lambda)} distribution with probability \eqn{1-p_0}{1-pobs0}. Thus \eqn{0 < p_0 < 1}{0 < pobs0 < 1}, which is modelled as a function of the covariates. The zero-altered Poisson distribution differs from the zero-inflated Poisson distribution in that the former has zeros coming from one source, whereas the latter has zeros coming from the Poisson distribution too. Some people call the zero-altered Poisson a \emph{hurdle} model. For one response/species, by default, the two linear/additive predictors for \code{zapoisson()} are \eqn{(logit(p_0), \log(\lambda))^T}{(logit(pobs0), log(lambda))^T}. The \pkg{VGAM} family function \code{zapoissonff()} has a few changes compared to \code{zapoisson()}. These are: (i) the order of the linear/additive predictors is switched so the Poisson mean comes first; (ii) argument \code{onempobs0} is now 1 minus the probability of an observed 0, i.e., the probability of the positive Poisson distribution, i.e., \code{onempobs0} is \code{1-pobs0}; (iii) argument \code{zero} has a new default so that the \code{onempobs0} is intercept-only by default. Now \code{zapoissonff()} is generally recommended over \code{zapoisson()}. Both functions implement Fisher scoring and can handle multiple responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. The \code{fitted.values} slot of the fitted object, which should be extracted by the generic function \code{fitted}, returns the mean \eqn{\mu}{mu} (default) which is given by \deqn{\mu = (1-p_0) \lambda / [1 - \exp(-\lambda)].}{% mu = (1-pobs0) * lambda / [1 - exp(-lambda)].} If \code{type.fitted = "pobs0"} then \eqn{p_0}{pobs0} is returned. } \references{ Welsh, A. H., Cunningham, R. B., Donnelly, C. F. and Lindenmayer, D. B. (1996) Modelling the abundances of rare species: statistical models for counts with extra zeros. \emph{Ecological Modelling}, \bold{88}, 297--308. Angers, J-F. and Biswas, A. (2003) A Bayesian analysis of zero-inflated generalized Poisson model. \emph{Computational Statistics & Data Analysis}, \bold{42}, 37--46. Yee, T. W. (2014) Reduced-rank vector generalized linear models with two linear predictors. 
\emph{Computational Statistics and Data Analysis}, \bold{71}, 889--902. %Documentation accompanying the \pkg{VGAM} package at %\url{http://www.stat.auckland.ac.nz/~yee} %contains further information and examples. } %20111123; this has been fixed up with proper FS using EIM. %\section{Warning }{ % Inference obtained from \code{summary.vglm} % and \code{summary.vgam} may or may not be correct. % In particular, the p-values, standard errors and degrees of % freedom may need adjustment. Use simulation on artificial % data to check that these are reasonable. % % %} \author{ T. W. Yee } \note{ There are subtle differences between this family function and \code{\link{zipoisson}} and \code{\link{yip88}}. In particular, \code{\link{zipoisson}} is a \emph{mixture} model whereas \code{zapoisson()} and \code{\link{yip88}} are \emph{conditional} models. Note this family function allows \eqn{p_0}{pobs0} to be modelled as functions of the covariates. % It can be thought of an extension % of \code{\link{yip88}}, which is also a conditional model but its % \eqn{\phi}{phi} parameter is a scalar only. This family function effectively combines \code{\link{pospoisson}} and \code{\link{binomialff}} into one family function. This family function can handle multiple responses, e.g., more than one species. } \seealso{ \code{\link{rzapois}}, \code{\link{zipoisson}}, \code{\link{pospoisson}}, \code{\link{posnegbinomial}}, \code{\link{binomialff}}, \code{\link{rpospois}}, \code{\link{CommonVGAMffArguments}}, \code{\link{simulate.vlm}}. } \examples{ zdata <- data.frame(x2 = runif(nn <- 1000)) zdata <- transform(zdata, pobs0 = logit( -1 + 1*x2, inverse = TRUE), lambda = loge(-0.5 + 2*x2, inverse = TRUE)) zdata <- transform(zdata, y = rzapois(nn, lambda, pobs0 = pobs0)) with(zdata, table(y)) fit <- vglm(y ~ x2, zapoisson, data = zdata, trace = TRUE) fit <- vglm(y ~ x2, zapoisson, data = zdata, trace = TRUE, crit = "coef") head(fitted(fit)) head(predict(fit)) head(predict(fit, untransform = TRUE)) coef(fit, matrix = TRUE) summary(fit) # Another example ------------------------------ # Data from Angers and Biswas (2003) abdata <- data.frame(y = 0:7, w = c(182, 41, 12, 2, 2, 0, 0, 1)) abdata <- subset(abdata, w > 0) Abdata <- data.frame(yy = with(abdata, rep(y, w))) fit3 <- vglm(yy ~ 1, zapoisson, data = Abdata, trace = TRUE, crit = "coef") coef(fit3, matrix = TRUE) Coef(fit3) # Estimate lambda (they get 0.6997 with SE 0.1520) head(fitted(fit3), 1) with(Abdata, mean(yy)) # Compare this with fitted(fit3) } \keyword{models} \keyword{regression} %zapoisson(lpobs0 = "logit", llambda = "loge", % type.fitted = c("mean", "pobs0", "onempobs0"), zero = NULL) %zapoissonff(llambda = "loge", lonempobs0 = "logit", % type.fitted = c("mean", "pobs0", "onempobs0"), zero = "onempobs0") VGAM/man/coefvgam.Rd0000644000176200001440000000311213135276753013646 0ustar liggesusers\name{coefvgam} \alias{coefvgam} \alias{coef,vgam-method} \alias{coefficients,vgam-method} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Extract Model Coefficients of a vgam() Object} \description{ Extracts the estimated coefficients from vgam() objects. } \usage{ coefvgam(object, type = c("linear", "nonlinear"), ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A \code{\link{vgam}} object. } \item{type}{ Character. The default is the first choice. } \item{\ldots}{ Optional arguments fed into \code{\link{coefvlm}}. 
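% A sketch of inspecting one smooth component via the slots listed
% under 'Value' (it repeats the model fit from the examples below):
% fit <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = hunua)
% comp1 <- coef(fit, type = "nonlinear")[[1]]
% comp1@Bcoefficients        # B-spline coefficients of the nonlinear part
% c(comp1@xmin, comp1@xmax)  # Range of altitude used by s()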
} } \details{ For VGAMs, because modified backfitting is performed, each fitted function is decomposed into a linear and a nonlinear (smooth) part. The argument \code{type} is used to return which one is wanted. } \value{ A vector if \code{type = "linear"}. A list if \code{type = "nonlinear"}, and each component of this list corresponds to an \code{\link{s}} term; the component contains an S4 object with slot names such as \code{"Bcoefficients"}, \code{"knots"}, \code{"xmin"}, \code{"xmax"}. } %\references{ % % %} \author{ Thomas W. Yee } %\note{ %} %\section{Warning }{ %} \seealso{ \code{\link{vgam}}, \code{\link{coefvlm}}, \code{\link[stats]{coef}}. % \code{\link{coef-method}}, } \examples{
fit <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = hunua)
coef(fit)  # Same as coef(fit, type = "linear")
(ii <- coef(fit, type = "nonlinear"))
is.list(ii)
names(ii)
slotNames(ii[[1]])
} \keyword{models} \keyword{regression} VGAM/man/cauchy.Rd0000644000176200001440000001103213135276753013333 0ustar liggesusers\name{cauchy} \alias{cauchy} \alias{cauchy1} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Cauchy Distribution Family Function } \description{ Estimates either the location parameter or both the location and scale parameters of the Cauchy distribution by maximum likelihood estimation. } \usage{
cauchy(llocation = "identitylink", lscale = "loge", ilocation = NULL,
       iscale = NULL, iprobs = seq(0.2, 0.8, by = 0.2), imethod = 1,
       nsimEIM = NULL, zero = "scale")
cauchy1(scale.arg = 1, llocation = "identitylink", ilocation = NULL,
        imethod = 1)
} %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation, lscale}{ Parameter link functions for the location parameter \eqn{a}{a} and the scale parameter \eqn{b}{b}. See \code{\link{Links}} for more choices. } \item{ilocation, iscale}{ Optional initial values for \eqn{a}{a} and \eqn{b}{b}. By default, an initial value is chosen internally for each. } \item{imethod}{ Integer, either 1, 2 or 3. Initialization method; three algorithms are implemented. The user should try all possible values to help avoid converging to a local solution. Also, choose another value if convergence fails, or use \code{ilocation} and/or \code{iscale}. } \item{iprobs}{ Probabilities used to find the respective sample quantiles; used to compute \code{iscale}. } \item{zero, nsimEIM}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{scale.arg}{ Known (positive) scale parameter, called \eqn{b}{b} below. } } \details{ The Cauchy distribution has density function \deqn{f(y;a,b) = \left\{ \pi b [1 + ((y-a)/b)^2] \right\}^{-1} }{%
f(y;a,b) = 1 / [pi * b * [1 + ((y-a)/b)^2]] } where \eqn{y} and \eqn{a} are real and finite, and \eqn{b>0}{b>0}. The distribution is symmetric about \eqn{a} and has a heavy tail. Its median and mode are \eqn{a}, but the mean does not exist. The fitted values are the estimates of \eqn{a}. Fisher scoring is the default but if \code{nsimEIM} is specified then Fisher scoring with simulation is used. If the scale parameter is known (\code{cauchy1}) then there may be multiple local maximum likelihood solutions for the location parameter. However, if both location and scale parameters are to be estimated (\code{cauchy}) then there is a unique maximum likelihood solution provided \eqn{n > 2} and less than half the data are located at any one point. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
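% A toy illustration of the multiple local maxima that can occur when
% the scale is known, as mentioned in the details above (all values
% are arbitrary; base R only):
% set.seed(1); yy <- rcauchy(20, location = 5, scale = 0.5)
% locs <- seq(0, 10, by = 0.05)
% ll <- sapply(locs, function(aa) sum(dcauchy(yy, aa, 0.5, log = TRUE)))
% locs[which.max(ll)]         # Cf. trying several imethod/ilocation values
% # plot(locs, ll, type = "l")  # The log-likelihood may have several bumps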
The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \section{Warning }{ It is well known that the Cauchy distribution may have local maxima in its likelihood function; make full use of \code{imethod}, \code{ilocation}, \code{iscale} etc. } \references{ Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011) \emph{Statistical Distributions}, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition. Barnett, V. D. (1966) Evaluation of the maximum-likelihood estimator where the likelihood equation has multiple roots. \emph{Biometrika}, \bold{53}, 151--165. Copas, J. B. (1975) On the unimodality of the likelihood for the Cauchy distribution. \emph{Biometrika}, \bold{62}, 701--704. Efron, B. and Hinkley, D. V. (1978) Assessing the accuracy of the maximum likelihood estimator: Observed versus expected Fisher information. \emph{Biometrika}, \bold{65}, 457--481. } \author{ T. W. Yee } \note{ Good initial values are needed. By default these \pkg{VGAM} family functions search for a starting value for \eqn{a}{a} on a grid. It also pays to select a wide range of initial values via the \code{ilocation} and/or \code{iscale} and/or \code{imethod} arguments. } \seealso{ \code{\link[stats:Cauchy]{Cauchy}}, \code{\link{cauchit}}, \code{\link{studentt}}, \code{\link{simulate.vlm}}. } \examples{
# Both location and scale parameters unknown
set.seed(123)
cdata <- data.frame(x2 = runif(nn <- 1000))
cdata <- transform(cdata, loc = exp(1 + 0.5 * x2), scale = exp(1))
cdata <- transform(cdata, y2 = rcauchy(nn, loc, scale))
fit2 <- vglm(y2 ~ x2, cauchy(lloc = "loge"), data = cdata, trace = TRUE)
coef(fit2, matrix = TRUE)
head(fitted(fit2))  # Location estimates
summary(fit2)

# Location parameter unknown
cdata <- transform(cdata, scale1 = 0.4)
cdata <- transform(cdata, y1 = rcauchy(nn, loc, scale1))
fit1 <- vglm(y1 ~ x2, cauchy1(scale = 0.4), data = cdata, trace = TRUE)
coef(fit1, matrix = TRUE)
} \keyword{models} \keyword{regression} VGAM/man/logitoffsetlink.Rd0000644000176200001440000000435713135276753015276 0ustar liggesusers\name{logitoffsetlink} \alias{logitoffsetlink} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Logit-with-an-Offset Link Function } \description{ Computes the logitoffsetlink transformation, including its inverse and the first two derivatives. } \usage{
logitoffsetlink(theta, offset = 0, inverse = FALSE, deriv = 0,
                short = TRUE, tag = FALSE)
} %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{offset}{ The offset value(s), which must be non-negative. It is called \eqn{K} below. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ This link function allows for some asymmetry compared to the ordinary \code{\link{logit}} link. The formula is \deqn{\log(\theta/(1-\theta) - K)}{%
log(theta/(1-theta) - K)} and the default value for the offset \eqn{K} corresponds to the ordinary \code{\link{logit}} link. When \code{inverse = TRUE} the value will lie in the interval \eqn{(K / (1+K), 1)}. } \value{ For \code{logitoffsetlink} with \code{deriv = 0}, the logitoffsetlink of \code{theta}, i.e., \code{log(theta/(1-theta) - K)} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{(K + exp(theta))/(1 + exp(theta) + K)}.
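% A numerical sketch of the interval stated above: the inverse link
% maps the real line onto (K/(1+K), 1) (the offset value is arbitrary):
% myK <- 0.4; etas <- seq(-15, 15, by = 0.5)
% range(logitoffsetlink(etas, offset = myK, inverse = TRUE))
% myK / (1 + myK)  # Lower limit of the interval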
For \code{deriv = 1}, the function returns \emph{d} \code{eta} / \emph{d} \code{theta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. Here, all logarithms are natural logarithms, i.e., to base \emph{e}. } \references{ Komori, O., Eguchi, S., et al. (2016) An asymmetric logistic model for ecological data. \emph{Methods in Ecology and Evolution}, \bold{7}. } \author{ Thomas W. Yee } \note{ This function is numerically less stable than \code{\link{logit}}. } \seealso{ \code{\link{Links}}, \code{\link{logit}}. } \examples{
p <- seq(0.05, 0.99, by = 0.01); myoff <- 0.05
logitoffsetlink(p, myoff)
max(abs(logitoffsetlink(logitoffsetlink(p, myoff), myoff,
                        inverse = TRUE) - p))  # Should be 0
} \keyword{math} \keyword{models} \keyword{regression} VGAM/man/zabinomial.Rd0000644000176200001440000001152013135276753014206 0ustar liggesusers\name{zabinomial} \alias{zabinomial} \alias{zabinomialff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Altered Binomial Distribution } \description{ Fits a zero-altered binomial distribution based on a conditional model involving a Bernoulli distribution and a positive-binomial distribution. } \usage{
zabinomial(lpobs0 = "logit", lprob = "logit",
           type.fitted = c("mean", "prob", "pobs0"),
           ipobs0 = NULL, iprob = NULL, imethod = 1, zero = NULL)
zabinomialff(lprob = "logit", lonempobs0 = "logit",
             type.fitted = c("mean", "prob", "pobs0", "onempobs0"),
             iprob = NULL, ionempobs0 = NULL, imethod = 1,
             zero = "onempobs0")
} %- maybe also 'usage' for other objects documented here. \arguments{ \item{lprob}{ Parameter link function applied to the probability parameter of the binomial distribution. See \code{\link{Links}} for more choices. } \item{lpobs0}{ Link function for the parameter \eqn{p_0}{pobs0}, called \code{pobs0} here. See \code{\link{Links}} for more choices. } \item{type.fitted}{ See \code{\link{CommonVGAMffArguments}} and \code{\link{fittedvlm}} for information. } \item{iprob, ipobs0}{ See \code{\link{CommonVGAMffArguments}}. } \item{lonempobs0, ionempobs0}{ Corresponding arguments for the other parameterization. See details below. } \item{imethod, zero}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ The response \eqn{Y} is zero with probability \eqn{p_0}{pobs0}, else \eqn{Y} has a positive-binomial distribution with probability \eqn{1-p_0}{1-pobs0}. Thus \eqn{0 < p_0 < 1}{0 < pobs0 < 1}, which may be modelled as a function of the covariates. The zero-altered binomial distribution differs from the zero-inflated binomial distribution in that the former has zeros coming from one source, whereas the latter has zeros coming from the binomial distribution too. The zero-inflated binomial distribution is implemented in \code{\link{zibinomial}}. Some people call the zero-altered binomial a \emph{hurdle} model. The input is currently a vector or one-column matrix. By default, the two linear/additive predictors for \code{zabinomial()} are \eqn{(logit(p_0), logit(p))^T}{(logit(pobs0), logit(prob))^T}. The \pkg{VGAM} family function \code{zabinomialff()} has a few changes compared to \code{zabinomial()}.
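% A quick simulation sketch of the hurdle mean given under 'Value'
% below, writing the positive-binomial mean as
% size * prob / (1 - (1 - prob)^size) (all values are arbitrary):
% size <- 10; prob <- 0.4; pobs0 <- 0.2
% yy <- rzabinom(20000, size = size, prob = prob, pobs0 = pobs0)
% c(mean(yy), (1 - pobs0) * size * prob / (1 - (1 - prob)^size))
% # Should be close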
These are: (i) the order of the linear/additive predictors is switched so the binomial probability comes first; (ii) argument \code{onempobs0} is now 1 minus the probability of an observed 0, i.e., the probability of the positive binomial distribution, i.e., \code{onempobs0} is \code{1-pobs0}; (iii) argument \code{zero} has a new default so that the \code{onempobs0} is intercept-only by default. Now \code{zabinomialff()} is generally recommended over \code{zabinomial()}. Both functions implement Fisher scoring and neither can handle multiple responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. The \code{fitted.values} slot of the fitted object, which should be extracted by the generic function \code{fitted}, returns the mean \eqn{\mu}{mu} (default) which is given by \deqn{\mu = (1-p_0) \mu_{b} / [1 - (1 - \mu_{b})^N]}{% mu = (1-pobs0) * mub / [1 - (1 - mub)^N]} where \eqn{\mu_{b}}{mub} is the usual binomial mean. If \code{type.fitted = "pobs0"} then \eqn{p_0}{pobs0} is returned. } %\references{ % % %} %\section{Warning }{ % %} \author{ T. W. Yee } \note{ The response should be a two-column matrix of counts, with first column giving the number of successes. Note this family function allows \eqn{p_0}{pobs0} to be modelled as functions of the covariates by having \code{zero = NULL}. It is a conditional model, not a mixture model. These family functions effectively combine \code{\link{posbinomial}} and \code{\link{binomialff}} into one family function. } \seealso{ \code{\link{dzabinom}}, \code{\link{zibinomial}}, \code{\link{posbinomial}}, \code{\link{binomialff}}, \code{\link[stats:Binomial]{dbinom}}, \code{\link{CommonVGAMffArguments}}. } \examples{ zdata <- data.frame(x2 = runif(nn <- 1000)) zdata <- transform(zdata, size = 10, prob = logit(-2 + 3*x2, inverse = TRUE), pobs0 = logit(-1 + 2*x2, inverse = TRUE)) zdata <- transform(zdata, y1 = rzabinom(nn, size = size, prob = prob, pobs0 = pobs0)) with(zdata, table(y1)) zfit <- vglm(cbind(y1, size - y1) ~ x2, zabinomial(zero = NULL), data = zdata, trace = TRUE) coef(zfit, matrix = TRUE) head(fitted(zfit)) head(predict(zfit)) summary(zfit) } \keyword{models} \keyword{regression} VGAM/man/cens.poisson.Rd0000644000176200001440000001132713135276753014507 0ustar liggesusers\name{cens.poisson} %\alias{cens.poisson} \alias{cens.poisson} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Censored Poisson Family Function } \description{ Family function for a censored Poisson response. } \usage{ cens.poisson(link = "loge", imu = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to the mean; see \code{\link{Links}} for more choices. } \item{imu}{ Optional initial value; see \code{\link{CommonVGAMffArguments}} for more information. } } \details{ Often a table of Poisson counts has an entry \emph{J+} meaning \eqn{\ge J}{>= J}. This family function is similar to \code{\link{poissonff}} but handles such censored data. The input requires \code{\link{SurvS4}}. Only a univariate response is allowed. The Newton-Raphson algorithm is used. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ See \pkg{survival} for background. } \author{ Thomas W. 
Yee } \note{ The function \code{\link{poissonff}} should be used when there are no censored observations. Also, \code{NA}s are not permitted with \code{\link{SurvS4}}, nor is \code{type = "counting"}. } \section{Warning }{ As the response is discrete, care is required with \code{\link{Surv}}, especially with \code{"interval"} censored data because of the \code{(start, end]} format. See the examples below. The examples have \code{y < L} as left censored and \code{y >= U} (formatted as \code{U+}) as right censored observations, therefore \code{L <= y < U} is for uncensored and/or interval censored observations. Consequently the input must be tweaked to conform to the \code{(start, end]} format. } \seealso{ \code{\link{SurvS4}}, \code{\link{poissonff}}, \code{\link{Links}}. } \examples{ # Example 1: right censored data set.seed(123); U <- 20 cdata <- data.frame(y = rpois(N <- 100, exp(3))) cdata <- transform(cdata, cy = pmin(U, y), rcensored = (y >= U)) cdata <- transform(cdata, status = ifelse(rcensored, 0, 1)) with(cdata, table(cy)) with(cdata, table(rcensored)) with(cdata, table(ii <- print(SurvS4(cy, status)))) # Check; U+ means >= U fit <- vglm(SurvS4(cy, status) ~ 1, cens.poisson, data = cdata, trace = TRUE) coef(fit, matrix = TRUE) table(print(depvar(fit))) # Another check; U+ means >= U # Example 2: left censored data L <- 15 cdata <- transform(cdata, cY = pmax(L, y), lcensored = y < L) # Note y < L, not cY == L or y <= L cdata <- transform(cdata, status = ifelse(lcensored, 0, 1)) with(cdata, table(cY)) with(cdata, table(lcensored)) with(cdata, table(ii <- print(SurvS4(cY, status, type = "left")))) # Check fit <- vglm(SurvS4(cY, status, type = "left") ~ 1, cens.poisson, data = cdata, trace = TRUE) coef(fit, matrix = TRUE) # Example 3: interval censored data cdata <- transform(cdata, Lvec = rep(L, len = N), Uvec = rep(U, len = N)) cdata <- transform(cdata, icensored = Lvec <= y & y < Uvec) # Not lcensored or rcensored with(cdata, table(icensored)) cdata <- transform(cdata, status = rep(3, N)) # 3 means interval censored cdata <- transform(cdata, status = ifelse(rcensored, 0, status)) # 0 means right censored cdata <- transform(cdata, status = ifelse(lcensored, 2, status)) # 2 means left censored # Have to adjust Lvec and Uvec because of the (start, end] format: cdata$Lvec[with(cdata, icensored)] <- cdata$Lvec[with(cdata, icensored)] - 1 cdata$Uvec[with(cdata, icensored)] <- cdata$Uvec[with(cdata, icensored)] - 1 # Unchanged: cdata$Lvec[with(cdata, lcensored)] <- cdata$Lvec[with(cdata, lcensored)] cdata$Lvec[with(cdata, rcensored)] <- cdata$Uvec[with(cdata, rcensored)] with(cdata, table(ii <- print(SurvS4(Lvec, Uvec, status, type = "interval")))) # Check fit <- vglm(SurvS4(Lvec, Uvec, status, type = "interval") ~ 1, cens.poisson, data = cdata, trace = TRUE) coef(fit, matrix = TRUE) table(print(depvar(fit))) # Another check # Example 4: Add in some uncensored observations index <- (1:N)[with(cdata, icensored)] index <- head(index, 4) cdata$status[index] <- 1 # actual or uncensored value cdata$Lvec[index] <- cdata$y[index] with(cdata, table(ii <- print(SurvS4(Lvec, Uvec, status, type = "interval")))) # Check fit <- vglm(SurvS4(Lvec, Uvec, status, type = "interval") ~ 1, cens.poisson, data = cdata, trace = TRUE, crit = "c") coef(fit, matrix = TRUE) table(print(depvar(fit))) # Another check } \keyword{models} \keyword{regression} VGAM/man/polf.Rd0000644000176200001440000000767713135276753013043 0ustar liggesusers\name{polf} \alias{polf} %- Also NEED an '\alias' for EACH other topic 
documented here. \title{ Poisson-Ordinal Link Function } \description{ Computes the Poisson-ordinal transformation, including its inverse and the first two derivatives. } \usage{ polf(theta, cutpoint = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{cutpoint}{ The cutpoints should be non-negative integers. If \code{polf()} is used as the link function in \code{\link{cumulative}} then one should choose \code{reverse = TRUE, parallel = TRUE}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The Poisson-ordinal link function (POLF) can be applied to a parameter lying in the unit interval. Its purpose is to link cumulative probabilities associated with an ordinal response coming from an underlying Poisson distribution. If the cutpoint is zero then a complementary log-log link is used. See \code{\link{Links}} for general information about \pkg{VGAM} link functions. } \value{ See Yee (2012) for details. } \references{ Yee, T. W. (2012) \emph{Ordinal ordination with normalizing link functions for count data}, (in preparation). } \author{ Thomas W. Yee } \note{ Numerical values of \code{theta} too close to 0 or 1 or out of range result in large positive or negative values, or maybe 0 depending on the arguments. Although measures have been taken to handle cases where \code{theta} is too close to 1 or 0, numerical instabilities may still arise. In terms of the threshold approach with cumulative probabilities for an ordinal response this link function corresponds to the Poisson distribution (see \code{\link{poissonff}}) that has been recorded as an ordinal response using known cutpoints. } \section{Warning }{ Prediction may not work on \code{\link{vglm}} or \code{\link{vgam}} etc. objects if this link function is used. } \seealso{ \code{\link{Links}}, \code{\link{ordpoisson}}, \code{\link{poissonff}}, \code{\link{nbolf}}, \code{\link{golf}}, \code{\link{cumulative}}. } \examples{ \dontrun{ polf("p", cutpoint = 2, short = FALSE) polf("p", cutpoint = 2, tag = TRUE) p <- seq(0.01, 0.99, by = 0.01) y <- polf(p, cutpoint = 2) y. 
<- polf(p, cutpoint = 2, deriv = 1) max(abs(polf(y, cutpoint = 2, inv = TRUE) - p)) # Should be 0 #\ dontrun{ par(mfrow = c(2, 1), las = 1) #plot(p, y, type = "l", col = "blue", main = "polf()") #abline(h = 0, v = 0.5, col = "orange", lty = "dashed") # #plot(p, y., type = "l", col = "blue", # main = "(Reciprocal of) first POLF derivative") #} # Rutherford and Geiger data ruge <- data.frame(yy = rep(0:14, times = c(57,203,383,525,532,408,273,139,45,27,10,4,0,1,1))) with(ruge, length(yy)) # 2608 1/8-minute intervals cutpoint <- 5 ruge <- transform(ruge, yy01 = ifelse(yy <= cutpoint, 0, 1)) fit <- vglm(yy01 ~ 1, binomialff(link = polf(cutpoint = cutpoint)), ruge) coef(fit, matrix = TRUE) exp(coef(fit)) # Another example pdata <- data.frame(x2 = sort(runif(nn <- 1000))) pdata <- transform(pdata, x3 = runif(nn)) pdata <- transform(pdata, mymu = exp( 3 + 1 * x2 - 2 * x3)) pdata <- transform(pdata, y1 = rpois(nn, lambda = mymu)) cutpoints <- c(-Inf, 10, 20, Inf) pdata <- transform(pdata, cuty = Cut(y1, breaks = cutpoints)) #\ dontrun{ with(pdata, plot(x2, x3, col = cuty, pch = as.character(cuty))) } with(pdata, table(cuty) / sum(table(cuty))) fit <- vglm(cuty ~ x2 + x3, data = pdata, trace = TRUE, cumulative(reverse = TRUE, parallel = TRUE, link = polf(cutpoint = cutpoints[2:3]), multiple.responses = TRUE)) head(depvar(fit)) head(fitted(fit)) head(predict(fit)) coef(fit) coef(fit, matrix = TRUE) constraints(fit) fit@misc$earg } } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/logoff.Rd0000644000176200001440000000422713135276753013343 0ustar liggesusers\name{logoff} \alias{logoff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Log Link Function with an Offset } \description{ Computes the log transformation with an offset, including its inverse and the first two derivatives. } \usage{ logoff(theta, offset = 0, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{offset}{ Offset value. See \code{\link{Links}}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The log-offset link function is very commonly used for parameters that are greater than a certain value. In particular, it is defined by \code{log(theta + offset)} where \code{offset} is the offset value. For example, if \code{offset = 0.5} then the value of \code{theta} is restricted to be greater than \eqn{-0.5}. Numerical values of \code{theta} close to \code{-offset} or out of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. } \value{ For \code{deriv = 0}, the log of \code{theta+offset}, i.e., \code{log(theta+offset)} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{exp(theta)-offset}. For \code{deriv = 1}, then the function returns \emph{d} \code{theta} / \emph{d} \code{eta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. Here, all logarithms are natural logarithms, i.e., to base \emph{e}. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ The default means this function is identical to \code{\link{loge}}. Numerical instability may occur when \code{theta} is close to \code{-offset}. } \seealso{ \code{\link{Links}}, \code{\link{loge}}. 
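% A small round-trip check of the log-offset link defined above (the
% offset value is arbitrary):
% theta <- seq(0.1, 2, by = 0.1)
% etas <- logoff(theta, offset = 0.5)
% max(abs(logoff(etas, offset = 0.5, inverse = TRUE) - theta))  # Should be 0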
} \examples{ \dontrun{ logoff(seq(-0.2, 0.5, by = 0.1)) logoff(seq(-0.2, 0.5, by = 0.1), offset = 0.5) log(seq(-0.2, 0.5, by = 0.1) + 0.5) } } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/posgeomUC.Rd0000644000176200001440000000523513135276753013770 0ustar liggesusers\name{Posgeom} \alias{Posgeom} \alias{dposgeom} \alias{pposgeom} \alias{qposgeom} \alias{rposgeom} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive-Geometric Distribution } \description{ Density, distribution function, quantile function and random generation for the positive-geometric distribution. } \usage{ dposgeom(x, prob, log = FALSE) pposgeom(q, prob) qposgeom(p, prob) rposgeom(n, prob) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Fed into \code{\link[stats]{runif}}. } \item{prob}{ vector of probabilities of success (of an ordinary geometric distribution). Short vectors are recycled. } \item{log}{ logical. } } \details{ The positive-geometric distribution is a geometric distribution but with the probability of a zero being zero. The other probabilities are scaled to add to unity. The mean therefore is \eqn{1/prob}{1/prob}. As \eqn{prob}{prob} decreases, the positive-geometric and geometric distributions become more similar. Like similar functions for the geometric distribution, a zero value of \code{prob} is not permitted here. } \value{ \code{dposgeom} gives the density, \code{pposgeom} gives the distribution function, \code{qposgeom} gives the quantile function, and \code{rposgeom} generates random deviates. } %\references{ %None. %} \author{ T. W. Yee } %\note{ % 20120405; no longer true to a superior method: % For \code{rposgeom()}, the arguments of the function are fed % into \code{\link[stats:Geometric]{rgeom}} until \eqn{n} positive % values are obtained. This may take a long time if \code{prob} % has values close to 1. % The family function \code{posgeometric} needs not be written. % If it were, then it would estimate % \eqn{prob}{prob} by maximum likelihood estimation. %} \seealso{ \code{\link{zageometric}}, \code{\link{zigeometric}}, \code{\link[stats:Geometric]{rgeom}}. % \code{posgeometric}, } \examples{ prob <- 0.75; y <- rposgeom(n = 1000, prob) table(y) mean(y) # Sample mean 1 / prob # Population mean (ii <- dposgeom(0:7, prob)) cumsum(ii) - pposgeom(0:7, prob) # Should be 0s table(rposgeom(100, prob)) table(qposgeom(runif(1000), prob)) round(dposgeom(1:10, prob) * 1000) # Should be similar \dontrun{ x <- 0:5 barplot(rbind(dposgeom(x, prob), dgeom(x, prob)), beside = TRUE, col = c("blue", "orange"), main = paste("Positive geometric(", prob, ") (blue) vs", " geometric(", prob, ") (orange)", sep = ""), names.arg = as.character(x), las = 1, lwd = 2) } } \keyword{distribution} VGAM/man/logUC.Rd0000644000176200001440000000407413135276753013100 0ustar liggesusers\name{Log} \alias{Log} \alias{dlog} \alias{plog} \alias{qlog} \alias{rlog} \title{ Logarithmic Distribution } \description{ Density, distribution function, quantile function, and random generation for the logarithmic distribution. } \usage{ dlog(x, shape, log = FALSE) plog(q, shape, log.p = FALSE) qlog(p, shape) rlog(n, shape) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q, p, n}{ Same interpretation as in \code{\link[stats]{runif}}. } \item{shape}{ The shape parameter value \eqn{c} described in in \code{\link{logff}}. 
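% A quick check that the logarithmic probabilities sum to unity
% (the shape value below is arbitrary):
% sum(dlog(1:1000, shape = 0.5))  # Should be almost exactly 1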
% Here it is called \code{shape} because \eqn{0 < shape < 1} is the range.
} } \keyword{distribution} VGAM/man/lgammaff.Rd0000644000176200001440000000760213135276753013641 0ustar liggesusers\name{lgamma1} \alias{lgamma1} \alias{lgamma3} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Log-gamma Distribution Family Function } \description{ Estimation of the parameters of the standard and nonstandard log-gamma distributions. } \usage{
lgamma1(lshape = "loge", ishape = NULL)
lgamma3(llocation = "identitylink", lscale = "loge", lshape = "loge",
        ilocation = NULL, iscale = NULL, ishape = 1,
        zero = c("scale", "shape"))
} %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation, lscale}{ Parameter link function applied to the location parameter \eqn{a} and the positive scale parameter \eqn{b}. See \code{\link{Links}} for more choices. } \item{lshape}{ Parameter link function applied to the positive shape parameter \eqn{k}. See \code{\link{Links}} for more choices. } \item{ishape}{ Initial value for \eqn{k}. If given, it must be positive. If failure to converge occurs, try some other value. The default means an initial value is determined internally. } \item{ilocation, iscale}{ Initial values for \eqn{a} and \eqn{b}. The defaults mean an initial value is determined internally for each. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2,3\}. The default value means none are modelled as intercept-only terms. See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The probability density function of the standard log-gamma distribution is given by \deqn{f(y;k)=\exp[ky - \exp(y)] / \Gamma(k),}{%
f(y;k) = exp[ky - exp(y)]/gamma(k),} for parameter \eqn{k>0}{k>0} and all real \eqn{y}. The mean of \eqn{Y} is \code{digamma(k)} (returned as the fitted values) and its variance is \code{trigamma(k)}. For the non-standard log-gamma distribution, one replaces \eqn{y} by \eqn{(y-a)/b}, where \eqn{a} is the location parameter and \eqn{b} is the positive scale parameter. Then the density function is \deqn{f(y)=\exp[k(y-a)/b - \exp((y-a)/b)] / (b \, \Gamma(k)).}{%
f(y) = exp[k(y-a)/b - exp((y-a)/b)]/(b*gamma(k)).} The mean and variance of \eqn{Y} are \code{a + b*digamma(k)} (returned as the fitted values) and \code{b^2 * trigamma(k)}, respectively. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kotz, S. and Nadarajah, S. (2000) \emph{Extreme Value Distributions: Theory and Applications}, pages 48--49, London: Imperial College Press. Johnson, N. L., Kotz, S. and Balakrishnan, N. (1995) \emph{Continuous Univariate Distributions}, 2nd edition, Volume 2, p. 89, New York: Wiley. } \author{ T. W. Yee } \note{ The standard log-gamma distribution can be viewed as a generalization of the standard type 1 extreme value density: when \eqn{k = 1} the distribution of \eqn{-Y} is the standard type 1 extreme value distribution. The standard log-gamma distribution is fitted with \code{lgamma1} and the non-standard (3-parameter) log-gamma distribution is fitted with \code{lgamma3}.
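% A numerical sanity check of the standard log-gamma density above and
% of its mean digamma(k), for an arbitrary shape value:
% k <- 2
% integrate(function(y) exp(k * y - exp(y)) / gamma(k), -Inf, Inf)
% # Should be 1
% c(integrate(function(y) y * exp(k * y - exp(y)) / gamma(k),
%             -Inf, Inf)$value, digamma(k))  # Should agree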
} \seealso{ \code{\link{rlgamma}}, \code{\link{gengamma.stacy}}, \code{\link{prentice74}}, \code{\link{gamma1}}, \code{\link[base:Special]{lgamma}}. } \examples{ ldata <- data.frame(y = rlgamma(100, shape = exp(1))) fit <- vglm(y ~ 1, lgamma1, data = ldata, trace = TRUE, crit = "coef") summary(fit) coef(fit, matrix = TRUE) Coef(fit) ldata <- data.frame(x2 = runif(nn <- 5000)) # Another example ldata <- transform(ldata, loc = -1 + 2 * x2, Scale = exp(1)) ldata <- transform(ldata, y = rlgamma(nn, loc, scale = Scale, shape = exp(0))) fit2 <- vglm(y ~ x2, lgamma3, data = ldata, trace = TRUE, crit = "c") coef(fit2, matrix = TRUE) } \keyword{models} \keyword{regression} VGAM/man/gamma1.Rd0000644000176200001440000000435613135276753013235 0ustar liggesusers\name{gamma1} \alias{gamma1} %- Also NEED an '\alias' for EACH other topic documented here. \title{ 1-parameter Gamma Distribution } \description{ Estimates the 1-parameter gamma distribution by maximum likelihood estimation. } \usage{ gamma1(link = "loge", zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to the (positive) \emph{shape} parameter. See \code{\link{Links}} for more choices and general information. } \item{zero}{ Details at \code{\link{CommonVGAMffArguments}}. } } \details{ The density function is given by \deqn{f(y) = \exp(-y) \times y^{shape-1} / \Gamma(shape)}{% f(y) = exp(-y) y^(shape-1) / gamma(shape)} for \eqn{shape > 0} and \eqn{y > 0}. Here, \eqn{\Gamma(shape)}{gamma(shape)} is the gamma function, as in \code{\link[base:Special]{gamma}}. The mean of \eqn{Y} (returned as the fitted values) is \eqn{\mu=shape}{mu=shape}, and the variance is \eqn{\sigma^2 = shape}{sigma^2 = shape}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Most standard texts on statistical distributions describe the 1-parameter gamma distribution, e.g., Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011) \emph{Statistical Distributions}, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition. } \author{ T. W. Yee } \note{ This \pkg{VGAM} family function can handle a multiple responses, which is inputted as a matrix. The parameter \eqn{shape} matches with \code{shape} in \code{\link[stats]{rgamma}}. The argument \code{rate} in \code{\link[stats]{rgamma}} is assumed 1 for this family function. If \eqn{rate} is unknown use the family function \code{\link{gammaR}} to estimate it too. } \seealso{ \code{\link{gammaR}} for the 2-parameter gamma distribution, \code{\link{lgamma1}}, \code{\link{lindley}}, \code{\link{simulate.vlm}}. } \examples{ gdata <- data.frame(y = rgamma(n = 100, shape = exp(3))) fit <- vglm(y ~ 1, gamma1, data = gdata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/acat.Rd0000644000176200001440000000711613135276753012777 0ustar liggesusers\name{acat} \alias{acat} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Ordinal Regression with Adjacent Categories Probabilities } \description{ Fits an adjacent categories regression model to an ordered (preferably) factor response. } \usage{ acat(link = "loge", parallel = FALSE, reverse = FALSE, zero = NULL, whitespace = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to the ratios of the adjacent categories probabilities. 
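% The default linear predictors are log(P[Y=j+1]/P[Y=j]); a small
% check using the pneumo example from below:
% pneumo <- transform(pneumo, let = log(exposure.time))
% fit <- vglm(cbind(normal, mild, severe) ~ let, acat, data = pneumo)
% log(fitted(fit)[1, 2] / fitted(fit)[1, 1])  # Should equal ...
% predict(fit)[1, 1]                          # ... this value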
See \code{\link{Links}} for more choices. } \item{parallel}{ A logical, or formula specifying which terms have equal/unequal coefficients. } \item{reverse}{ Logical. By default, the linear/additive predictors used are \eqn{\eta_j = \log(P[Y=j+1]/P[Y=j])}{eta_j = log(P[Y=j+1]/P[Y=j])} for \eqn{j=1,\ldots,M}. If \code{reverse} is \code{TRUE} then \eqn{\eta_j = \log(P[Y=j]/P[Y=j+1])}{eta_j=log(P[Y=j]/P[Y=j+1])} will be used. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2,\ldots,\eqn{M}\}. } \item{whitespace}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ In this help file the response \eqn{Y} is assumed to be a factor with ordered values \eqn{1,2,\ldots,M+1}, so that \eqn{M} is the number of linear/additive predictors \eqn{\eta_j}{eta_j}. By default, the log link is used because the ratio of two probabilities is positive. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Agresti, A. (2013) \emph{Categorical Data Analysis}, 3rd ed. Hoboken, NJ, USA: Wiley. Simonoff, J. S. (2003) \emph{Analyzing Categorical Data}, New York: Springer-Verlag. Yee, T. W. (2010) The \pkg{VGAM} package for categorical data analysis. \emph{Journal of Statistical Software}, \bold{32}, 1--34. \url{http://www.jstatsoft.org/v32/i10/}. %Documentation accompanying the \pkg{VGAM} package at %\url{https://www.stat.auckland.ac.nz/~yee} %contains further information and examples. } \author{ Thomas W. Yee } \note{ The response should be either a matrix of counts (with row sums that are all positive), or an ordered factor. In both cases, the \code{y} slot returned by \code{vglm}/\code{vgam}/\code{rrvglm} is the matrix of counts. For a nominal (unordered) factor response, the multinomial logit model (\code{\link{multinomial}}) is more appropriate. Here is an example of the usage of the \code{parallel} argument. If there are covariates \code{x1}, \code{x2} and \code{x3}, then \code{parallel = TRUE ~ x1 + x2 -1} and \code{parallel = FALSE ~ x3} are equivalent. This would constrain the regression coefficients for \code{x1} and \code{x2} to be equal; those of the intercepts and \code{x3} would be different. } \section{Warning }{ No check is made to verify that the response is ordinal if the response is a matrix; see \code{\link[base:factor]{ordered}}. } \seealso{ \code{\link{cumulative}}, \code{\link{cratio}}, \code{\link{sratio}}, \code{\link{multinomial}}, \code{\link{margeff}}, \code{\link{pneumo}}. } \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) (fit <- vglm(cbind(normal, mild, severe) ~ let, acat, data = pneumo)) coef(fit, matrix = TRUE) constraints(fit) model.matrix(fit) } \keyword{models} \keyword{regression} %pneumo$let <- log(pneumo$exposure.time) VGAM/man/otlogUC.Rd0000644000176200001440000000432513135276753013442 0ustar liggesusers\name{Otlog} \alias{Otlog} \alias{dotlog} \alias{potlog} \alias{qotlog} \alias{rotlog} \title{ One-truncated Logarithmic Distribution } \description{ Density, distribution function, quantile function, and random generation for the one-truncated logarithmic distribution. } \usage{ dotlog(x, shape, log = FALSE) potlog(q, shape, log.p = FALSE) qotlog(p, shape) rotlog(n, shape) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{ Vector of quantiles. 
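% One-truncation means the value 1 is impossible; a quick sketch
% (the shape value below is arbitrary):
% dotlog(1, shape = 0.5)            # Should be 0
% sum(dotlog(2:1000, shape = 0.5))  # Should be almost exactly 1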
For the density, it should be a vector with integer values \eqn{> 1} in order for the probabilities to be positive. } \item{p}{vector of probabilities.} \item{n}{number of observations. Same as in \code{\link[stats]{runif}}. } \item{shape}{ The parameter value \eqn{c} described in in \code{\link{logff}}. Here it is called \code{shape} because \eqn{0 0}, \eqn{c > 0} and \eqn{s \in (0, 1)}{0 < s < 1}. The mean, \eqn{(c (s - 1)/ s) \log(1 - s)}{(c * (s - 1)/ s) * log(1 - s)} is returned as the fitted values. Note the median is \eqn{c \log(2 - s)}{c * log(2 - s)}. Simulated Fisher scoring is implemented. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Adamidis, K., Loukas, S. (1998). A lifetime distribution with decreasing failure rate. \emph{Statistics and Probability Letters}, \bold{39}, 35--42. } \author{ J. G. Lauder and T. W. Yee } \note{ We define \code{scale} as the reciprocal of the scale parameter used by Adamidis and Loukas (1998). } \seealso{ \code{\link{dexpgeom}}, \code{\link{exponential}}, \code{\link{geometric}}. } \examples{ \dontrun{ Scale <- exp(2); shape = logit(-1, inverse = TRUE); edata <- data.frame(y = rexpgeom(n = 2000, scale = Scale, shape = shape)) fit <- vglm(y ~ 1, expgeometric, edata, trace = TRUE) c(with(edata, mean(y)), head(fitted(fit), 1)) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } } \keyword{models} \keyword{regression} VGAM/man/lomax.Rd0000644000176200001440000000615513135276753013211 0ustar liggesusers\name{lomax} \alias{lomax} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Lomax Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter Lomax distribution. } \usage{ lomax(lscale = "loge", lshape3.q = "loge", iscale = NULL, ishape3.q = NULL, imethod = 1, gscale = exp(-5:5), gshape3.q = seq(0.75, 4, by = 0.25), probs.y = c(0.25, 0.5, 0.75), zero = "shape") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lscale, lshape3.q}{ Parameter link function applied to the (positive) parameters \code{scale} and \code{q}. See \code{\link{Links}} for more choices. } \item{iscale, ishape3.q, imethod}{ See \code{\link{CommonVGAMffArguments}} for information. For \code{imethod = 2} a good initial value for \code{iscale} is needed to obtain a good estimate for the other parameter. } \item{gscale, gshape3.q, zero, probs.y}{ See \code{\link{CommonVGAMffArguments}}. } % \item{zero}{ % An integer-valued vector specifying which % linear/additive predictors are modelled as intercepts only. % Here, the values must be from the set \{1,2\} which correspond to % \code{scale}, \code{q}, respectively. % } } \details{ The 2-parameter Lomax distribution is the 4-parameter generalized beta II distribution with shape parameters \eqn{a=p=1}. It is probably more widely known as the Pareto (II) distribution. It is also the 3-parameter Singh-Maddala distribution with shape parameter \eqn{a=1}, as well as the beta distribution of the second kind with \eqn{p=1}. More details can be found in Kleiber and Kotz (2003). The Lomax distribution has density \deqn{f(y) = q / [b \{1 + y/b\}^{1+q}]}{% f(y) = q / [b (1 + y/b)^(1+q)]} for \eqn{b > 0}, \eqn{q > 0}, \eqn{y \geq 0}{y >= 0}. Here, \eqn{b} is the scale parameter \code{scale}, and \code{q} is a shape parameter. 
The cumulative distribution function is \deqn{F(y) = 1 - [1 + (y/b)]^{-q}.}{% F(y) = 1 - [1 + (y/b)]^(-q).} The mean is \deqn{E(Y) = b/(q-1)}{% E(Y) = b/(q-1)} provided \eqn{q > 1}; these are returned as the fitted values. This family function handles multiple responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \note{ See the notes in \code{\link{genbetaII}}. } \seealso{ \code{\link{Lomax}}, \code{\link{genbetaII}}, \code{\link{betaII}}, \code{\link{dagum}}, \code{\link{sinmad}}, \code{\link{fisk}}, \code{\link{inv.lomax}}, \code{\link{paralogistic}}, \code{\link{inv.paralogistic}}, \code{\link{simulate.vlm}}. } \examples{ ldata <- data.frame(y = rlomax(n = 1000, scale = exp(1), exp(2))) fit <- vglm(y ~ 1, lomax, data = ldata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/eexpUC.Rd0000644000176200001440000000750013135276753013255 0ustar liggesusers\name{Expectiles-Exponential} \alias{Expectiles-Exponential} \alias{eexp} \alias{deexp} \alias{peexp} \alias{qeexp} \alias{reexp} \title{ Expectiles of the Exponential Distribution } \description{ Density function, distribution function, and expectile function and random generation for the distribution associated with the expectiles of an exponential distribution. } \usage{ deexp(x, rate = 1, log = FALSE) peexp(q, rate = 1, lower.tail = TRUE, log.p = FALSE) qeexp(p, rate = 1, Maxit.nr = 10, Tol.nr = 1.0e-6, lower.tail = TRUE, log.p = FALSE) reexp(n, rate = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, p, q}{ See \code{\link{deunif}}. } \item{n, rate, log}{ See \code{\link[stats:Exponential]{rexp}}. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Exponential]{pexp}} or \code{\link[stats:Exponential]{qexp}}. } \item{Maxit.nr, Tol.nr}{ See \code{\link{deunif}}. } } \details{ General details are given in \code{\link{deunif}} including a note regarding the terminology used. Here, \code{exp} corresponds to the distribution of interest, \eqn{F}, and \code{eexp} corresponds to \eqn{G}. The addition of ``\code{e}'' is for the `other' distribution associated with the parent distribution. Thus \code{deexp} is for \eqn{g}, \code{peexp} is for \eqn{G}, \code{qeexp} is for the inverse of \eqn{G}, \code{reexp} generates random variates from \eqn{g}. For \code{qeexp} the Newton-Raphson algorithm is used to solve for \eqn{y} satisfying \eqn{p = G(y)}. Numerical problems may occur when values of \code{p} are very close to 0 or 1. } \value{ \code{deexp(x)} gives the density function \eqn{g(x)}. \code{peexp(q)} gives the distribution function \eqn{G(q)}. \code{qeexp(p)} gives the expectile function: the value \eqn{y} such that \eqn{G(y)=p}. \code{reexp(n)} gives \eqn{n} random variates from \eqn{G}. } %\references{ % %Jones, M. C. (1994) %Expectiles and M-quantiles are quantiles. %\emph{Statistics and Probability Letters}, %\bold{20}, 149--153. % %} \author{ T. W. Yee and Kai Huang } %\note{ %The ``\code{q}'', as the first character of ``\code{qeunif}'', %may be changed to ``\code{e}'' in the future, %the reason being to emphasize that the expectiles are returned. %Ditto for the argument ``\code{q}'' in \code{peunif}. 
% %} \seealso{ \code{\link{deunif}}, \code{\link{denorm}}, \code{\link{dexp}}. } \examples{ my.p <- 0.25; y <- rexp(nn <- 1000) (myexp <- qeexp(my.p)) sum(myexp - y[y <= myexp]) / sum(abs(myexp - y)) # Should be my.p \dontrun{ par(mfrow = c(2,1)) yy <- seq(-0, 4, len = nn) plot(yy, deexp(yy), col = "blue", ylim = 0:1, xlab = "y", ylab = "g(y)", type = "l", main = "g(y) for Exp(1); dotted green is f(y) = dexp(y)") lines(yy, dexp(yy), col = "darkgreen", lty = "dotted", lwd = 2) # 'original' plot(yy, peexp(yy), type = "l", col = "blue", ylim = 0:1, xlab = "y", ylab = "G(y)", main = "G(y) for Exp(1)") abline(v = 1, h = 0.5, col = "red", lty = "dashed") lines(yy, pexp(yy), col = "darkgreen", lty = "dotted", lwd = 2) } } \keyword{distribution} %# Equivalently: %I1 <- mean(y <= myexp) * mean( myexp - y[y <= myexp]) %I2 <- mean(y > myexp) * mean(-myexp + y[y > myexp]) %I1 / (I1 + I2) # Should be my.p %# Or: %I1 <- sum( myexp - y[y <= myexp]) %I2 <- sum(-myexp + y[y > myexp]) %# Non-standard exponential %myrate <- 8 %yy <- rexp(nn, rate = myrate) %(myexp <- qeexp(my.p, rate = myrate)) %sum(myexp - yy[yy <= myexp]) / sum(abs(myexp - yy)) # Should be my.p %peexp(-Inf, rate = myrate) # Should be 0 %peexp( Inf, rate = myrate) # Should be 1 %peexp(mean(yy), rate = myrate) # Should be 0.5 %abs(qeexp(0.5, rate = myrate) - mean(yy)) # Should be 0 %abs(peexp(myexp, rate = myrate) - my.p) # Should be 0 %integrate(f = deexp, lower = -1, upper = Inf, rate = myrate) # Should be 1 VGAM/man/AR1UC.Rd0000644000176200001440000000475413135276753012707 0ustar liggesusers\name{dAR1} \alias{dAR1} \alias{dAR1} %\alias{pAR1} %\alias{qAR1} %\alias{rAR1} \title{The AR-1 Autoregressive Process} \description{ Density for the AR-1 model. } \usage{ dAR1(x, drift = 0, var.error = 1, ARcoef1 = 0.0, type.likelihood = c("exact", "conditional"), log = FALSE) } \arguments{ \item{x,}{vector of quantiles.} \item{drift}{ the scaled mean (also known as the \emph{drift} parameter), \eqn{\mu^*}{mu^*}. Note that the mean is \eqn{\mu^* /(1-\rho)}{mu^* / (1-rho)}. The default corresponds to observations that have mean 0. } \item{log}{ Logical. If \code{TRUE} then the logarithm of the density is returned. } \item{type.likelihood, var.error, ARcoef1}{ See \code{\link{AR1}}. The argument \code{ARcoef1} is \eqn{\rho}{rho}. The argument \code{var.error} is the variance of the i.i.d. random noise, i.e., \eqn{\sigma^2}{sigma^2}. If \code{type.likelihood = "conditional"} then the first element or row of the result is currently assigned \code{NA}---this is because the density of the first observation is effectively ignored. } } \value{ \code{dAR1} gives the density. % \code{pAR1} gives the distribution function, and % \code{qAR1} gives the quantile function, and % \code{rAR1} generates random deviates. } \author{ T. W. Yee and Victor Miranda } \details{ Most of the background to this function is given in \code{\link{AR1}}. All the arguments are converted into matrices, and then all their dimensions are obtained. They are then coerced into the same size: the number of rows is the maximum of all the single rows, and ditto for the number of columns. } %\note{ %} \seealso{ \code{\link{AR1}}. 
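% Under stationarity the AR-1 marginal variance is
% sigma^2 / (1 - rho^2); a quick simulation sketch (values arbitrary):
% rho <- -0.5; sigma <- exp(1)
% tss <- arima.sim(n = 50000, model = list(ar = rho), sd = sigma)
% c(var(tss), sigma^2 / (1 - rho^2))  # Should be close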
} \examples{ nn <- 100; set.seed(1) tdata <- data.frame(index = 1:nn, TS1 = arima.sim(nn, model = list(ar = -0.50), sd = exp(1))) fit1 <- vglm(TS1 ~ 1, AR1, data = tdata, trace = TRUE) rhobit(-0.5) coef(fit1, matrix = TRUE) (Cfit1 <- Coef(fit1)) summary(fit1) # SEs are useful to know logLik(fit1) sum(dAR1(depvar(fit1), drift = Cfit1[1], var.error = (Cfit1[2])^2, ARcoef1 = Cfit1[3], log = TRUE)) fit2 <- vglm(TS1 ~ 1, AR1(type.likelihood = "cond"), data = tdata, trace = TRUE) (Cfit2 <- Coef(fit2)) # Okay for intercept-only models logLik(fit2) head(keep <- dAR1(depvar(fit2), drift = Cfit2[1], var.error = (Cfit2[2])^2, ARcoef1 = Cfit2[3], type.likelihood = "cond", log = TRUE)) sum(keep[-1]) } \keyword{distribution} VGAM/man/coalminers.Rd0000644000176200001440000000216713135276753014224 0ustar liggesusers\name{coalminers} \alias{coalminers} \docType{data} \title{ Breathlessness and Wheeze Amongst Coalminers Data} \description{ Coalminers who are smokers without radiological pneumoconiosis, classified by age, breathlessness and wheeze. } \usage{data(coalminers)} \format{ A data frame with 9 age groups with the following 5 columns. \describe{ \item{BW}{Counts with breathlessness and wheeze. } \item{BnW}{Counts with breathlessness but no wheeze. } \item{nBW}{Counts with no breathlessness but wheeze. } \item{nBnW}{Counts with neither breathlessness or wheeze. } \item{age}{Age of the coal miners (actually, the midpoints of the 5-year category ranges). } } } \details{ The data were published in Ashford and Sowden (1970). A more recent analysis is McCullagh and Nelder (1989, Section 6.6). } \source{ Ashford, J. R. and Sowden, R. R. (1970) Multi-variate probit analysis. \emph{Biometrics}, \bold{26}, 535--546. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}. 2nd ed. London: Chapman & Hall. } \examples{ str(coalminers) } \keyword{datasets} VGAM/man/concoef.Rd0000644000176200001440000000565013135276753013504 0ustar liggesusers\name{concoef} \alias{concoef} %\alias{ccoef} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Extract Model Constrained/Canonical Coefficients } \description{ \code{concoef} is a generic function which extracts the constrained (canonical) coefficients from objects returned by certain modelling functions. } \usage{ concoef(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ An object for which the extraction of canonical coefficients is meaningful. } \item{\dots}{ Other arguments fed into the specific methods function of the model. } } \details{ For constrained quadratic and ordination models, \emph{canonical coefficients} are the elements of the \bold{C} matrix used to form the latent variables. They are highly interpretable in ecology, and are looked at as weights or loadings. They are also applicable for reduced-rank VGLMs. } \value{ The value returned depends specifically on the methods function invoked. } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. } \author{ Thomas W. Yee } %\note{ %} \section{Warning }{ \code{\link{concoef}} replaces \code{ccoef}; the latter is deprecated. 
% \code{\link{concoef}} and \code{\link{ccoef}} are identical, % but the latter will be deprecated soon. For QO models, there is a direct inverse relationship between the scaling of the latent variables (site scores) and the tolerances. One normalization is for the latent variables to have unit variance. Another normalization is for all the species' tolerances to be unit (provided \code{eq.tolerances} is \code{TRUE}). These two normalizations cannot simultaneously hold in general. For rank \eqn{R} models with \eqn{R>1} it becomes more complicated because the latent variables are also uncorrelated. An important argument when fitting quadratic ordination models is whether \code{eq.tolerances} is \code{TRUE} or \code{FALSE}. See Yee (2004) for details. } \seealso{ \code{\link{concoef-method}}, \code{concoef.qrrvglm}, \code{concoef.cao}, \code{\link[stats]{coef}}. } \examples{ \dontrun{ set.seed(111) # This leads to the global solution hspider[,1:6] <- scale(hspider[,1:6]) # Standardized environmental vars p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, fam = quasipoissonff, data = hspider, Crow1positive = FALSE) concoef(p1) } } \keyword{models} \keyword{regression} VGAM/man/hdeff.Rd0000644000176200001440000001006713135276753013142 0ustar liggesusers\name{hdeff} \alias{hdeff} \alias{hdeff.vglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Hauck-Donner effects: detection test for Wald tests } \description{ A detection test for Hauck-Donner effects of each regression coefficient in a VGLM regression model. } \usage{ hdeff(object, ...) hdeff.vglm(object, derivative = NULL, se.arg = FALSE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A \code{\link{vglm}} object. Currently only a limited number of family functions have the HDE detection test: \code{\link{binomialff}}, \code{\link{cumulative}}, \code{\link{erlang}}, \code{\link{poissonff}}, \code{\link{topple}}, \code{\link{uninormal}}, \code{\link{zipoissonff}}, and \code{\link{zipoisson}}. More will be implemented in the near future. % \code{\link{multinomial}}, } \item{derivative}{ Numeric. Either 1 or 2. Currently only a few models having one linear predictor are handled when \code{derivative = 2}, e.g., \code{\link{binomialff}}, \code{\link{poissonff}}. } \item{se.arg}{ Logical. If \code{TRUE} then the derivatives of the standard errors are returned as well, otherwise the derivatives are of the Wald statistics. } \item{\dots}{ further arguments passed into the other methods functions. % e.g., \code{subset}. } } \details{ Hauck and Donner (1977) first observed an aberration whereby the Wald test statistic does not increase monotonically as the distance between the parameter estimate and the null value increases (called the Hauck-Donner effect, or HDE, here). This "disturbing" and "undesirable" underappreciated effect has since been observed in other regression models by various authors. This function computes the first, and possibly second, derivative of the Wald statistic for each regression coefficient. A negative value of the first derivative is indicative of the HDE being present. By default this function returns a labelled logical vector; a \code{TRUE} means the HDE is affirmative for that coefficient. Hence ideally all values are \code{FALSE}.
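% A minimal sketch of the detection idea, using the fitted model called
% fit in the example below: a negative first derivative of a Wald
% statistic flags the HDE for that coefficient.
% wald.d1 <- hdeff(fit, deriv = 1)  # First derivatives of Wald statistics
% wald.d1 < 0                       # TRUE wherever the HDE is present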
Any \code{TRUE} values suggest that the MLE is near the boundary of the parameter space, and that the p-value for that regression coefficient is biased upwards. % and that a likelihood ratio test is recommended. } \value{ By default, a vector of logicals. Setting \code{deriv = 1} returns a vector of first derivatives of the Wald statistics. Setting \code{deriv = 2} returns a 2-column matrix of first and second derivatives of the Wald statistics. Setting \code{se.arg = TRUE} returns an additional 1 or 2 columns. For those \pkg{VGAM} family functions whose HDE test has not yet been implemented a \code{NULL} is returned. } \references{ Hauck, J. W. W. and A. Donner (1977) Wald's test as applied to hypotheses in logit analysis. \emph{Journal of the American Statistical Association}, \bold{72}, 851--853. Corrigenda: JASA, \bold{75}, 482. % \textit{JASA 72(360): 851--3}] 75 (370), 482 Yee, T. W. (2017) \emph{Detecting the Hauck-Donner effect in Wald tests} (in preparation). } \author{ T. W. Yee. } \section{Warning }{ Some 2nd derivatives are \code{NA}, meaning that they have not been programmed in yet. } \note{ The function \code{\link{summaryvglm}} conducts the HDE detection test if possible and prints out a modified Wald table if some HDEs are detected. This function is currently a little beyond the experimental stage and may change quite a bit in the near future. } \seealso{ \code{\link{summaryvglm}}, \code{\link{vglm}}, \code{\link{lrp.vglm}}. % \code{\link{multinomial}}, % \code{\link{cumulative}}, } \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) fit <- vglm(cbind(normal, mild, severe) ~ let, data = pneumo, cumulative(reverse = TRUE, parallel = TRUE)) hdeff(fit) hdeff(fit, deriv = 1) hdeff(fit, deriv = 2) } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{models} \keyword{regression} VGAM/man/lognormal.Rd0000644000176200001440000001107013135276753014053 0ustar liggesusers\name{lognormal} \alias{lognormal} %\alias{lognormal3} %%- Also NEED an '\alias' for EACH other topic documented here. \title{ Lognormal Distribution } \description{ Maximum likelihood estimation of the (univariate) lognormal distribution. } \usage{ lognormal(lmeanlog = "identitylink", lsdlog = "loge", zero = "sdlog") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmeanlog, lsdlog}{ Parameter link functions applied to the mean and (positive) \eqn{\sigma}{sigma} (standard deviation) parameter. Both of these are on the log scale. See \code{\link{Links}} for more choices. } % \item{emeanlog, esdlog}{ % emeanlog = list(), esdlog = list(), % emeanlog = list(), esdlog = list(), % List. Extra argument for each of the links. % See \code{earg} in \code{\link{Links}} for general information. % } \item{zero}{ Specifies which linear/additive predictor is modelled as intercept-only. For \code{lognormal()}, the values can be from the set \{1,2\} which correspond to \code{mu}, \code{sigma}, respectively. See \code{\link{CommonVGAMffArguments}} for more information. % For \code{lognormal3()}, % the values must be from the set \{1,2,3\} where 3 is for % \eqn{\lambda}{\lambda}. } % \item{powers.try}{ % Numerical vector. The initial \eqn{lambda} is chosen % as the best value from \code{min(y) - 10^powers.try} where % \code{y} is the response. % } % \item{delta}{ % Numerical vector. An alternative method for % obtaining an initial \eqn{lambda}. Here, \code{delta = min(y)-lambda}. % If given, this supersedes the \code{powers.try} argument.
% The value must be positive. % } } \details{ A random variable \eqn{Y} has a 2-parameter lognormal distribution if \eqn{\log(Y)}{log(Y)} is distributed \eqn{N(\mu, \sigma^2)}{N(mu, sigma^2)}. The expected value of \eqn{Y}, which is \deqn{E(Y) = \exp(\mu + 0.5 \sigma^2)}{% E(Y) = exp(mu + 0.5 sigma^2)} and not \eqn{\mu}{mu}, make up the fitted values. The variance of \eqn{Y} is \deqn{Var(Y) = [\exp(\sigma^2) -1] \exp(2\mu + \sigma^2).}{% Var(Y) = [exp(sigma^2) -1] * exp(2 mu + sigma^2).} % A random variable \eqn{Y} has a 3-parameter lognormal distribution % if \eqn{\log(Y-\lambda)}{log(Y-lambda)} % is distributed \eqn{N(\mu, \sigma^2)}{N(mu, sigma^2)}. Here, % \eqn{\lambda < Y}{lambda < Y}. % The expected value of \eqn{Y}, which is % \deqn{E(Y) = \lambda + \exp(\mu + 0.5 \sigma^2)}{% % E(Y) = lambda + exp(mu + 0.5 sigma^2)} % and not \eqn{\mu}{mu}, make up the fitted values. % \code{lognormal()} and \code{lognormal3()} fit the 2- and 3-parameter % lognormal distribution respectively. Clearly, if the location % parameter \eqn{\lambda=0}{lambda=0} then both distributions coincide. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } %\note{ % The more commonly used 2-parameter lognormal distribution is the % 3-parameter lognormal distribution with \eqn{\lambda}{lambda} equal % to zero---see \code{\link{lognormal3}}. % % %} %\section{Warning}{ % Regularity conditions are not satisfied for the 3-parameter case: % results may be erroneous. % May withdraw it in later versions. % % %} \seealso{ \code{\link[stats]{rlnorm}}, \code{\link{uninormal}}, \code{\link{CommonVGAMffArguments}}, \code{\link{simulate.vlm}}. % \code{\link{lognormal3}}, } \examples{ ldata2 <- data.frame(x2 = runif(nn <- 1000)) ldata2 <- transform(ldata2, y1 = rlnorm(nn, mean = 1 + 2 * x2, sd = exp(-1)), y2 = rlnorm(nn, mean = 1, sd = exp(-1 + x2))) fit1 <- vglm(y1 ~ x2, lognormal(zero = 2), data = ldata2, trace = TRUE) fit2 <- vglm(y2 ~ x2, lognormal(zero = 1), data = ldata2, trace = TRUE) coef(fit1, matrix = TRUE) coef(fit2, matrix = TRUE) } \keyword{models} \keyword{regression} %lognormal3(lmeanlog = "identitylink", lsdlog = "loge", % powers.try = (-3):3, delta = NULL, zero = 2) %lambda <- 4 %ldata3 <- data.frame(y3 = lambda + rlnorm(1000, m = 1.5, sd = exp(-0.8))) %fit3 <- vglm(y3 ~ 1, lognormal3, data = ldata3, trace = TRUE, crit = "c") %coef(fit3, matrix = TRUE) %summary(fit3) %ldata <- data.frame(y1 = rlnorm(nn <- 1000, meanlog = 1.5, sdlog = exp(-0.8))) %fit1 <- vglm(y1 ~ 1, lognormal, data = ldata, trace = TRUE, crit = "c") %coef(fit1, matrix = TRUE) %Coef(fit1) VGAM/man/is.parallel.Rd0000644000176200001440000000303313135276753014267 0ustar liggesusers\name{is.parallel} \alias{is.parallel} \alias{is.parallel.matrix} \alias{is.parallel.vglm} \title{Parallelism Constraint Matrices} \description{ Returns a logical vector from a test of whether an object such as a matrix or VGLM object corresponds to a parallelism assumption. } \usage{ is.parallel.matrix(object, \dots) is.parallel.vglm(object, type = c("term", "lm"), \dots) } \arguments{ \item{object}{ an object such as a constraint matrix or a \code{\link{vglm}} object. } \item{type}{ passed into \code{\link{constraints}}. } \item{\dots}{ additional optional arguments. 
Currently unused. } } \details{ These functions may be useful for categorical models such as \code{\link{propodds}}, \code{\link{cumulative}}, \code{\link{acat}}, \code{\link{cratio}}, \code{\link{sratio}}, \code{\link{multinomial}}. } \value{ A vector of logicals, testing whether each constraint matrix is a one-column matrix of ones. Note that parallelism can still be thought of as holding if the constraint matrix has non-zero but constant values; however, this is currently not implemented. No checking is done that the constraint matrices have the same number of rows. } \seealso{ \code{\link{constraints}}, \code{\link{vglm}}. } \examples{ \dontrun{ require("VGAMdata") fit <- vglm(educ ~ sm.bs(age) * sex + ethnicity, cumulative(parallel = TRUE), head(xs.nz, 200)) is.parallel(fit) is.parallel(fit, type = "lm") # For each column of the LM matrix } } \keyword{models} \keyword{regression} VGAM/man/expexpff.Rd0000644000176200001440000001201113135276753013702 0ustar liggesusers\name{expexpff} \alias{expexpff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Exponentiated Exponential Distribution } \description{ Estimates the two parameters of the exponentiated exponential distribution by maximum likelihood estimation. } \usage{ expexpff(lrate = "loge", lshape = "loge", irate = NULL, ishape = 1.1, tolerance = 1.0e-6, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape, lrate}{ Parameter link functions for the \eqn{\alpha}{shape} and \eqn{\lambda}{rate} parameters. See \code{\link{Links}} for more choices. The defaults ensure both parameters are positive. } \item{ishape}{ Initial value for the \eqn{\alpha}{shape} parameter. If convergence fails try setting a different value for this argument. } \item{irate}{ Initial value for the \eqn{\lambda}{rate} parameter. By default, an initial value is chosen internally using \code{ishape}. } \item{tolerance}{ Numeric. Small positive value for testing whether values are close enough to 1 and 2. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The default is none of them. If used, choose one value from the set \{1,2\}. } } \details{ The exponentiated exponential distribution is an alternative to the Weibull and the gamma distributions. The formula for the density is \deqn{f(y;\lambda,\alpha) = \alpha \lambda (1-\exp(-\lambda y))^{\alpha-1} \exp(-\lambda y) }{% f(y;rate,shape) = shape rate (1-\exp(-rate y))^(shape-1) \exp(-rate y) } where \eqn{y>0}, \eqn{\lambda>0}{rate>0} and \eqn{\alpha>0}{shape>0}. The mean of \eqn{Y} is \eqn{(\psi(\alpha+1)-\psi(1))/\lambda}{(psi(shape+1)-psi(1))/rate} (returned as the fitted values) where \eqn{\psi}{psi} is the digamma function. The variance of \eqn{Y} is \eqn{(\psi'(1)-\psi'(\alpha+1))/\lambda^2}{(psi'(1)-psi'(shape+1))/ rate^2} where \eqn{\psi'}{psi'} is the trigamma function. This distribution has been called the two-parameter generalized exponential distribution by Gupta and Kundu (2006). A special case of the exponentiated exponential distribution, \eqn{\alpha=1}{shape=1}, is the exponential distribution. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Gupta, R. D. and Kundu, D. (2001) Exponentiated exponential family: an alternative to gamma and Weibull distributions, \emph{Biometrical Journal}, \bold{43}, 117--130. Gupta, R. D. and Kundu, D.
(2006) On the comparison of Fisher information of the Weibull and GE distributions, \emph{Journal of Statistical Planning and Inference}, \bold{136}, 3130--3144. } \author{ T. W. Yee } \note{ Fisher scoring is used; however, convergence is usually very slow. This suggests that there may be a bug, but I have yet to check that the expected information is correct. Also, I have yet to implement Type-I right censored data using the results of Gupta and Kundu (2006). Another algorithm for fitting this model is implemented in \code{\link{expexpff1}}. } \section{Warning }{ Practical experience shows that reasonably good initial values really help. In particular, try setting different values for the \code{ishape} argument if numerical problems are encountered or convergence fails. Even if convergence occurs, try perturbing the initial value to make sure the global solution is obtained and not a local solution. The algorithm may fail if the estimate of the shape parameter is too close to unity. } \seealso{ \code{\link{expexpff1}}, \code{\link{gammaR}}, \code{\link{weibullR}}, \code{\link{CommonVGAMffArguments}}. } \examples{ # A special case: exponential data edata <- data.frame(y = rexp(n <- 1000)) fit <- vglm(y ~ 1, fam = expexpff, data = edata, trace = TRUE, maxit = 99) coef(fit, matrix = TRUE) Coef(fit) # Ball bearings data (number of million revolutions before failure) edata <- data.frame(bbearings = c(17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.80, 51.84, 51.96, 54.12, 55.56, 67.80, 68.64, 68.64, 68.88, 84.12, 93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40)) fit <- vglm(bbearings ~ 1, fam = expexpff(irate = 0.05, ish = 5), trace = TRUE, maxit = 300, data = edata) coef(fit, matrix = TRUE) Coef(fit) # Authors get c(rate=0.0314, shape=5.2589) logLik(fit) # Authors get -112.9763 # Failure times of the airconditioning system of an airplane eedata <- data.frame(acplane = c(23, 261, 87, 7, 120, 14, 62, 47, 225, 71, 246, 21, 42, 20, 5, 12, 120, 11, 3, 14, 71, 11, 14, 11, 16, 90, 1, 16, 52, 95)) fit <- vglm(acplane ~ 1, fam = expexpff(ishape = 0.8, irate = 0.15), trace = TRUE, maxit = 99, data = eedata) coef(fit, matrix = TRUE) Coef(fit) # Authors get c(rate=0.0145, shape=0.8130) logLik(fit) # Authors get log-lik -152.264 } \keyword{models} \keyword{regression} VGAM/man/betaff.Rd0000644000176200001440000001172713135276753013315 0ustar liggesusers\name{betaff} \alias{betaff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Two-parameter Beta Distribution Family Function } \description{ Estimation of the mean and precision parameters of the beta distribution. } \usage{ betaff(A = 0, B = 1, lmu = "logit", lphi = "loge", imu = NULL, iphi = NULL, gprobs.y = ppoints(8), gphi = exp(-3:5)/4, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{A, B}{ Lower and upper limits of the distribution. The defaults correspond to the \emph{standard beta distribution} where the response lies between 0 and 1. } \item{lmu, lphi}{ Link function for the mean and precision parameters. The values \eqn{A} and \eqn{B} are extracted from the \code{min} and \code{max} arguments of \code{\link{extlogit}}. Consequently, only \code{\link{extlogit}} is allowed. % See below for more details. % See \code{\link{Links}} for more choices. } \item{imu, iphi}{ Optional initial value for the mean and precision parameters respectively. A \code{NULL} value means a value is obtained in the \code{initialize} slot.
} \item{gprobs.y, gphi, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The two-parameter beta distribution can be written \eqn{f(y) =} \deqn{(y-A)^{\mu_1 \phi-1} \times (B-y)^{(1-\mu_1) \phi-1} / [beta(\mu_1 \phi,(1-\mu_1) \phi) \times (B-A)^{\phi-1}]}{% (y-A)^(mu1*phi-1) * (B-y)^((1-mu1)*phi-1) / [beta(mu1*phi,(1-mu1)*phi) * (B-A)^(phi-1)]} for \eqn{A < y < B}, and \eqn{beta(.,.)} is the beta function (see \code{\link[base:Special]{beta}}). The parameter \eqn{\mu_1}{mu1} satisfies \eqn{\mu_1 = (\mu - A) / (B-A)}{mu1 = (mu - A) / (B-A)} where \eqn{\mu}{mu} is the mean of \eqn{Y}. That is, \eqn{\mu_1}{mu1} is the mean of a standard beta distribution: \eqn{E(Y) = A + (B-A) \times \mu_1}{E(Y) = A + (B-A)*mu1}, and these are the fitted values of the object. Also, \eqn{\phi}{phi} is positive and \eqn{A < \mu < B}{A < mu < B}. Here, the limits \eqn{A} and \eqn{B} are \emph{known}. Another parameterization of the beta distribution involving the raw shape parameters is implemented in \code{\link{betaR}}. For general \eqn{A} and \eqn{B}, the variance of \eqn{Y} is \eqn{(B-A)^2 \times \mu_1 \times (1-\mu_1) / (1+\phi)}{(B-A)^2 * mu1 * (1-mu1) / (1+phi)}. Then \eqn{\phi}{phi} can be interpreted as a \emph{precision} parameter in the sense that, for fixed \eqn{\mu}{mu}, the larger the value of \eqn{\phi}{phi}, the smaller the variance of \eqn{Y}. Also, \eqn{\mu_1 = shape1/(shape1+shape2)}{mu1=shape1/(shape1+shape2)} and \eqn{\phi = shape1+shape2}{phi = shape1+shape2}. Fisher scoring is implemented. % If \eqn{A} and \eqn{B} are unknown then the \pkg{VGAM} family function % \code{beta4()} can be used to estimate these too. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Ferrari, S. L. P. and Francisco C.-N. (2004) Beta regression for modelling rates and proportions. \emph{Journal of Applied Statistics}, \bold{31}, 799--815. % Documentation accompanying the \pkg{VGAM} package at % \url{https://www.stat.auckland.ac.nz/~yee} % contains further information and examples. } \author{ Thomas W. Yee } \note{ The response must have values in the interval (\eqn{A}, \eqn{B}). The user currently needs to manually choose \code{lmu} to match the input of arguments \code{A} and \code{B}, e.g., with \code{\link{extlogit}}; see the example below. } \seealso{ \code{\link{betaR}}, % \code{\link{zoibetaR}}, \code{\link[stats:Beta]{Beta}}, \code{\link{dzoabeta}}, \code{\link{genbetaII}}, \code{\link{betaII}}, \code{\link{betabinomialff}}, \code{\link{betageometric}}, \code{\link{betaprime}}, \code{\link{rbetageom}}, \code{\link{rbetanorm}}, \code{\link{kumar}}, \code{\link{extlogit}}, \code{\link{simulate.vlm}}.
} \examples{ bdata <- data.frame(y = rbeta(nn <- 1000, shape1 = exp(0), shape2 = exp(1))) fit1 <- vglm(y ~ 1, betaff, data = bdata, trace = TRUE) coef(fit1, matrix = TRUE) Coef(fit1) # Useful for intercept-only models # General A and B, and with a covariate bdata <- transform(bdata, x2 = runif(nn)) bdata <- transform(bdata, mu = logit(0.5 - x2, inverse = TRUE), prec = exp(3.0 + x2)) # prec == phi bdata <- transform(bdata, shape2 = prec * (1 - mu), shape1 = mu * prec) bdata <- transform(bdata, y = rbeta(nn, shape1 = shape1, shape2 = shape2)) bdata <- transform(bdata, Y = 5 + 8 * y) # From 5 to 13, not 0 to 1 fit <- vglm(Y ~ x2, data = bdata, trace = TRUE, betaff(A = 5, B = 13, lmu = extlogit(min = 5, max = 13))) coef(fit, matrix = TRUE) } \keyword{models} \keyword{regression} % imu = NULL, iphi = NULL, imethod = 1, zero = NULL) VGAM/man/hspider.Rd0000644000176200001440000000645613135276753013533 0ustar liggesusers\name{hspider} \alias{hspider} \docType{data} \title{ Hunting Spider Data } \description{ Abundance of hunting spiders in a Dutch dune area. } \usage{data(hspider)} \format{ A data frame with 28 observations (sites) on the following 18 variables. \describe{ \item{WaterCon}{Log percentage of soil dry mass.} \item{BareSand}{Log percentage cover of bare sand.} \item{FallTwig}{Log percentage cover of fallen leaves and twigs.} \item{CoveMoss}{Log percentage cover of the moss layer.} \item{CoveHerb}{Log percentage cover of the herb layer.} \item{ReflLux}{Reflection of the soil surface with cloudless sky.} \item{Alopacce}{Abundance of \emph{Alopecosa accentuata}.} \item{Alopcune}{Abundance of \emph{Alopecosa cuneata}.} \item{Alopfabr}{Abundance of \emph{Alopecosa fabrilis}.} \item{Arctlute}{Abundance of \emph{Arctosa lutetiana}.} \item{Arctperi}{Abundance of \emph{Arctosa perita}.} \item{Auloalbi}{Abundance of \emph{Aulonia albimana}.} \item{Pardlugu}{Abundance of \emph{Pardosa lugubris}.} \item{Pardmont}{Abundance of \emph{Pardosa monticola}.} \item{Pardnigr}{Abundance of \emph{Pardosa nigriceps}.} \item{Pardpull}{Abundance of \emph{Pardosa pullata}.} \item{Trocterr}{Abundance of \emph{Trochosa terricola}.} \item{Zoraspin}{Abundance of \emph{Zora spinimana}.} } } \details{ The data, which originally came from Van der Aart and Smeek-Enserink (1975) consists of abundances (numbers trapped over a 60 week period) and 6 environmental variables. There were 28 sites. This data set has been often used to illustrate ordination, e.g., using canonical correspondence analysis (CCA). In the example below, the data is used for constrained quadratic ordination (CQO; formerly called canonical Gaussian ordination or CGO), a numerically intensive method that has many superior qualities. See \code{\link{cqo}} for details. } %\source{ %} \references{ Van der Aart, P. J. M. and Smeek-Enserink, N. (1975) Correlations between distributions of hunting spiders (Lycosidae, Ctenidae) and environmental characteristics in a dune area. \emph{Netherlands Journal of Zoology}, \bold{25}, 1--45. 
} \examples{ summary(hspider) \dontrun{ # Standardize the environmental variables: hspider[, 1:6] <- scale(subset(hspider, select = WaterCon:ReflLux)) # Fit a rank-1 binomial CAO hsbin <- hspider # Binary species data hsbin[, -(1:6)] <- as.numeric(hsbin[, -(1:6)] > 0) set.seed(123) ahsb1 <- cao(cbind(Alopcune, Arctlute, Auloalbi, Zoraspin) ~ WaterCon + ReflLux, family = binomialff(multiple.responses = TRUE), df1.nl = 2.2, Bestof = 3, data = hsbin) par(mfrow = 2:1, las = 1) lvplot(ahsb1, type = "predictors", llwd = 2, ylab = "logit p", lcol = 1:9) persp(ahsb1, rug = TRUE, col = 1:10, lwd = 2) coef(ahsb1) } } \keyword{datasets} %# Fit a rank-1 Poisson CQO %set.seed(111) # This leads to the global solution %# vvv p1 = cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, %# vvv Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ %# vvv WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, %# vvv fam = poissonff, data = hspider, Crow1posit=FALSE) %# vvv nos = ncol(p1@y) %# vvv lvplot(p1, y=TRUE, lcol=1:nos, pch=1:nos, pcol=1:nos) %# vvv Coef(p1) %# vvv summary(p1) VGAM/man/calibrate.rrvglm.control.Rd0000644000176200001440000000266113135276753017004 0ustar liggesusers\name{calibrate.rrvglm.control} \alias{calibrate.rrvglm.control} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Control Function for CLO (RR-VGLM) Calibration } \description{ Algorithmic constants and parameters for running \code{\link{calibrate.rrvglm}} are set using this function. } \usage{ calibrate.rrvglm.control(object, trace = FALSE, Method.optim = "BFGS", gridSize = if (Rank == 1) 7 else 5, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ The fitted \code{\link{rrvglm}} model. The user should ignore this argument. % The fitted CLO model. The user should ignore this argument. } \item{trace, Method.optim}{ Same as \code{\link{calibrate.qrrvglm.control}}. } \item{gridSize}{ Same as \code{\link{calibrate.qrrvglm.control}}. } \item{\dots}{ Avoids an error message for extraneous arguments. } } \details{ Most CLO users will only need to make use of \code{trace} and \code{gridSize}. These arguments should be used inside their call to \code{\link{calibrate.rrvglm}}, not this function directly. } \value{ Similar to \code{\link{calibrate.qrrvglm.control}}. } %\references{ %} \author{T. W. Yee} %\note{ % Despite the name of this function, UQO and CAO models are handled % } \seealso{ \code{\link{calibrate.rrvglm}}, \code{\link{Coef.rrvglm}}. } %\examples{ %} \keyword{models} \keyword{regression} VGAM/man/formulavlm.Rd0000644000176200001440000000364113135276753014252 0ustar liggesusers\name{formulavlm} %\name{confint} \alias{formula.vlm} \alias{formulavlm} \alias{term.names} \alias{term.namesvlm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Model Formulae and Term Names for VGLMs } \description{ The methods function for \code{formula} to extract the formula from a fitted object, as well as a methods function to return the names of the terms in the formula. } \usage{ \method{formula}{vlm}(x, \dots) formulavlm(x, form.number = 1, \dots) term.names(model, \dots) term.namesvlm(model, form.number = 1, \dots) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, model}{ A fitted model object. } \item{form.number}{Formula number: either 1 or 2, corresponding to the arguments \code{formula} and \code{form2} respectively. } \item{\dots}{Same as \code{\link[stats]{formula}}.
} } \details{ The \code{formula} methods function is based on \code{\link[stats]{formula}}. } \value{ The \code{formula} methods function should return something similar to \code{\link[stats]{formula}}. The \code{term.names} methods function should return a character string with the terms in the formula; this includes any intercept (which is denoted by \code{"(Intercept)"} as the first element.) } %\references{ %} \author{ Thomas W. Yee } %\note{ %} %\section{Warning }{ %} \seealso{ \code{\link{has.interceptvlm}}. % \code{termsvlm}. } \examples{ # Example: this is based on a glm example counts <- c(18,17,15,20,10,20,25,13,12) outcome <- gl(3, 1, 9); treatment <- gl(3, 3) vglm.D93 <- vglm(counts ~ outcome + treatment, family = poissonff) formula(vglm.D93) pdata <- data.frame(counts, outcome, treatment) # Better style vglm.D93 <- vglm(counts ~ outcome + treatment, poissonff, data = pdata) formula(vglm.D93) term.names(vglm.D93) responseName(vglm.D93) has.intercept(vglm.D93) } \keyword{models} \keyword{regression} VGAM/man/felixUC.Rd0000644000176200001440000000256713135276753013433 0ustar liggesusers\name{Felix} \alias{Felix} \alias{dfelix} %\alias{pfelix} %\alias{qfelix} %\alias{rfelix} \title{The Felix Distribution} \description{ Density for the Felix distribution. % distribution function, quantile function % and random generation } \usage{ dfelix(x, rate = 0.25, log = FALSE) } %pfelix(q, rate = 0.25) %qfelix(p, rate = 0.25) %rfelix(n, rate = 0.25) \arguments{ \item{x}{vector of quantiles.} % \item{p}{vector of probabilities.} % \item{n}{number of observations. % Must be a positive integer of length 1.} \item{rate}{ See \code{\link{felix}}. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dfelix} gives the density. % \code{pfelix} gives the distribution function, % \code{qfelix} gives the quantile function, and % \code{rfelix} generates random deviates. } \author{ T. W. Yee } \details{ See \code{\link{felix}}, the \pkg{VGAM} family function for estimating the parameter, for the formula of the probability density function and other details. } \section{Warning }{ The default value of \code{rate} is subjective. } \seealso{ \code{\link{felix}}. } \examples{ \dontrun{ rate <- 0.25; x <- 1:15 plot(x, dfelix(x, rate), type = "h", las = 1, col = "blue", ylab = paste("dfelix(rate=", rate, ")"), main = "Felix density function") } } \keyword{distribution} VGAM/man/lino.Rd0000644000176200001440000001051313135276753013023 0ustar liggesusers\name{lino} \alias{lino} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Generalized Beta Distribution Family Function } \description{ Maximum likelihood estimation of the 3-parameter generalized beta distribution as proposed by Libby and Novick (1982). } \usage{ lino(lshape1 = "loge", lshape2 = "loge", llambda = "loge", ishape1 = NULL, ishape2 = NULL, ilambda = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape1, lshape2}{ Parameter link functions applied to the two (positive) shape parameters \eqn{a} and \eqn{b}. See \code{\link{Links}} for more choices. } \item{llambda}{ Parameter link function applied to the parameter \eqn{\lambda}{lambda}. See \code{\link{Links}} for more choices. } \item{ishape1, ishape2, ilambda}{ Initial values for the parameters. A \code{NULL} value means one is computed internally. The argument \code{ilambda} must be numeric, and the default corresponds to a standard beta distribution. 
} \item{zero}{ Can be an integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. Here, the values must be from the set \{1,2,3\} which correspond to \eqn{a}, \eqn{b}, \eqn{\lambda}{lambda}, respectively. See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ Proposed by Libby and Novick (1982), this distribution has density \deqn{f(y;a,b,\lambda) = \frac{\lambda^{a} y^{a-1} (1-y)^{b-1}}{ B(a,b) \{1 - (1-\lambda) y\}^{a+b}}}{% f(y;a,b,lambda) = lambda^a y^(a-1) (1-y)^(b-1) / [B(a,b) (1 - (1-lambda)*y)^(a+b)]} for \eqn{a > 0}, \eqn{b > 0}, \eqn{\lambda > 0}{lambda > 0}, \eqn{0 < y < 1}. Here \eqn{B} is the beta function (see \code{\link[base:Special]{beta}}). The mean is a complicated function involving the Gauss hypergeometric function. If \eqn{X} has a \code{lino} distribution with parameters \code{shape1}, \code{shape2}, \code{lambda}, then \eqn{Y=\lambda X/(1-(1-\lambda)X)}{Y = \lambda*X / (1 - (1-\lambda)*X)} has a standard beta distribution with parameters \code{shape1}, \code{shape2}. Since \eqn{\log(\lambda)=0}{log(lambda)=0} corresponds to the standard beta distribution, a \code{summary} of the fitted model performs a t-test for whether the data belong to a standard beta distribution (provided the \code{\link{loge}} link for \eqn{\lambda}{lambda} is used; this is the default). } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Libby, D. L. and Novick, M. R. (1982) Multivariate generalized beta distributions with applications to utility assessment. \emph{Journal of Educational Statistics}, \bold{7}, 271--294. Gupta, A. K. and Nadarajah, S. (2004) \emph{Handbook of Beta Distribution and Its Applications}, NY: Marcel Dekker, Inc. } \author{ T. W. Yee } \note{ The fitted values, which are usually the mean, have not been implemented yet. Currently the median is returned as the fitted values. % and consequently are \code{NA}s. Although Fisher scoring is used, the working weight matrices are positive-definite only in a certain region of the parameter space. Problems with this indicate poor initial values, an ill-conditioned model, or insufficient data, etc. This model can be difficult to fit. A reasonably good value of \code{ilambda} seems to be needed, so if the self-starting initial values fail, try experimenting with the initial value arguments. Experience suggests that it is better for \code{ilambda} to be a little larger, rather than smaller, than the true value. } \seealso{ \code{\link{Lino}}, \code{\link{genbetaII}}. } \examples{ ldata <- data.frame(y1 = rbeta(n = 1000, exp(0.5), exp(1))) # ~ standard beta fit <- vglm(y1 ~ 1, lino, data = ldata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) head(fitted(fit)) summary(fit) # Nonstandard beta distribution ldata <- transform(ldata, y2 = rlino(n = 1000, shape1 = exp(1), shape2 = exp(2), lambda = exp(1))) fit2 <- vglm(y2 ~ 1, lino(lshape1 = "identitylink", lshape2 = "identitylink", ilamb = 10), data = ldata, trace = TRUE) coef(fit2, matrix = TRUE) } \keyword{models} \keyword{regression} VGAM/man/zetaUC.Rd0000644000176200001440000000433713135276753013264 0ustar liggesusers\name{Zeta} \alias{Zeta} \alias{dzeta} \alias{pzeta} \alias{qzeta} \alias{rzeta} %- Also NEED an '\alias' for EACH other topic documented here.
\title{The Zeta Distribution } \description{ Density, distribution function, quantile function and random generation for the zeta distribution. } \usage{ dzeta(x, shape, log = FALSE) pzeta(q, shape, lower.tail = TRUE) qzeta(p, shape) rzeta(n, shape) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q, p, n}{Same as \code{\link[stats]{Poisson}}. } \item{shape}{ The positive shape parameter \eqn{s}. } \item{lower.tail, log}{ Same meaning as in \code{\link[stats]{Normal}}. } } \details{ The density function of the zeta distribution is given by \deqn{y^{-s-1} / \zeta(s+1)}{% y^(-s-1) / zeta(s+1)} where \eqn{s>0}, \eqn{y=1,2,\ldots}, and \eqn{\zeta}{zeta} is Riemann's zeta function. } \value{ \code{dzeta} gives the density, \code{pzeta} gives the distribution function, \code{qzeta} gives the quantile function, and \code{rzeta} generates random deviates. } \references{ Johnson N. L., Kotz S., and Balakrishnan N. (1993) \emph{Univariate Discrete Distributions}, 2nd ed. New York: Wiley. % Lindsey, J. K. (2002zz) % \emph{Applied Statistical Modelling}, 2nd ed. % London: Chapman & Hall.zz % Knight, K. (2002zz) % Theory book. % London: Chapman & Hall.zz } \author{ T. W. Yee } \note{ \code{qzeta()} runs slower and slower as \code{shape} approaches 0 and \code{p} approaches 1. The \pkg{VGAM} family function \code{\link{zetaff}} estimates the shape parameter \eqn{s}. } %\section{Warning}{ % These functions have not been fully tested. %} \seealso{ \code{\link{zeta}}, \code{\link{zetaff}}, \code{\link{Oazeta}}, \code{\link{Oizeta}}, \code{\link{Otzeta}}. } \examples{ dzeta(1:20, shape = 2) myshape <- 0.5 max(abs(pzeta(1:200, myshape) - cumsum(1/(1:200)^(1+myshape)) / zeta(myshape+1))) # Should be 0 \dontrun{ plot(1:6, dzeta(1:6, 2), type = "h", las = 1, col = "orange", ylab = "Probability", main = "zeta probability function; orange: shape = 2; blue: shape = 1") points(0.10 + 1:6, dzeta(1:6, 1), type = "h", col = "blue") } } \keyword{distribution} VGAM/man/pospoisUC.Rd0000644000176200001440000000545713135276753014005 0ustar liggesusers\name{Pospois} \alias{Pospois} \alias{dpospois} \alias{ppospois} \alias{qpospois} \alias{rpospois} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive-Poisson Distribution } \description{ Density, distribution function, quantile function and random generation for the positive-Poisson distribution. } \usage{ dpospois(x, lambda, log = FALSE) ppospois(q, lambda) qpospois(p, lambda) rpospois(n, lambda) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Fed into \code{\link[stats]{runif}}. } \item{lambda}{ vector of positive means (of an ordinary Poisson distribution). Short vectors are recycled. } \item{log}{ logical. } } \details{ The positive-Poisson distribution is a Poisson distribution but with the probability of a zero being zero. The other probabilities are scaled to add to unity. The mean therefore is \deqn{\lambda / (1-\exp(-\lambda)).}{% lambda / (1-exp(-lambda)).} As \eqn{\lambda}{lambda} increases, the positive-Poisson and Poisson distributions become more similar. Unlike similar functions for the Poisson distribution, a zero value of \code{lambda} returns a \code{NaN}. % Unlike similar functions for the Poisson distribution, a zero value % of \code{lambda} is not permitted here.
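% For example (assuming the behaviour documented above):
% dpospois(0:2, lambda = 1)  # First value is 0 since P(Y = 0) = 0
% dpospois(1, lambda = 0)    # NaN rather than an error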
} \value{ \code{dpospois} gives the density, \code{ppospois} gives the distribution function, \code{qpospois} gives the quantile function, and \code{rpospois} generates random deviates. } %\references{ %None. %} \author{ T. W. Yee } \note{ % 20120405; no longer true to a superior method: % For \code{rpospois}, the arguments of the function are fed % into \code{\link[stats:Poisson]{rpois}} until \eqn{n} positive % values are obtained. This may take a long time if \code{lambda} % has values close to 0. The family function \code{\link{pospoisson}} estimates \eqn{\lambda}{lambda} by maximum likelihood estimation. } \seealso{ \code{\link{pospoisson}}, \code{\link{zapoisson}}, \code{\link{zipoisson}}, \code{\link[stats:Poisson]{rpois}}. } \examples{ lambda <- 2; y <- rpospois(n = 1000, lambda) table(y) mean(y) # Sample mean lambda / (1 - exp(-lambda)) # Population mean (ii <- dpospois(0:7, lambda)) cumsum(ii) - ppospois(0:7, lambda) # Should be 0s table(rpospois(100, lambda)) table(qpospois(runif(1000), lambda)) round(dpospois(1:10, lambda) * 1000) # Should be similar \dontrun{ x <- 0:7 barplot(rbind(dpospois(x, lambda), dpois(x, lambda)), beside = TRUE, col = c("blue", "orange"), main = paste("Positive Poisson(", lambda, ") (blue) vs", " Poisson(", lambda, ") (orange)", sep = ""), names.arg = as.character(x), las = 1, lwd = 2) } } \keyword{distribution} VGAM/man/laplace.Rd0000644000176200001440000000641413135276753013470 0ustar liggesusers\name{laplace} \alias{laplace} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Laplace Distribution } \description{ Maximum likelihood estimation of the 2-parameter classical Laplace distribution. } \usage{ laplace(llocation = "identitylink", lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = "scale") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation, lscale}{ Character. Parameter link functions for location parameter \eqn{a} and scale parameter \eqn{b}. See \code{\link{Links}} for more choices. } \item{ilocation, iscale}{ Optional initial values. If given, it must be numeric and values are recycled to the appropriate length. The default is to choose the value internally. } \item{imethod}{ Initialization method. Either the value 1 or 2. } \item{zero}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The Laplace distribution is often known as the \emph{double-exponential} distribution and, for modelling, has heavier tails than the normal distribution. The Laplace density function is \deqn{f(y) = \frac{1}{2b} \exp \left( - \frac{|y-a|}{b} \right) }{% f(y) = (1/(2b)) exp( -|y-a|/b ) } where \eqn{-\infty < y < \infty}{-Inf < y < Inf} and \eqn{b>0}{b>0}. Its mean is \eqn{a} and its variance is \eqn{2b^2}. This parameterization is called the \emph{classical Laplace distribution} by Kotz et al. (2001), and the density is symmetric about \eqn{a}. For \code{y ~ 1} (where \code{y} is the response) the maximum likelihood estimate (MLE) for the location parameter is the sample median, and the MLE for \eqn{b} is \code{mean(abs(y-location))} (replace location by its MLE if unknown). } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Kotz, S., Kozubowski, T. J. and Podgorski, K. (2001) \emph{The Laplace distribution and generalizations: a revisit with applications to communications, economics, engineering, and finance}, Boston: Birkhauser. } \author{ T. W.
Yee } \section{Warning}{ This family function has not been fully tested. The MLE regularity conditions do not hold for this distribution, therefore misleading inferences may result, e.g., in the \code{summary} and \code{vcov} of the object. } \note{ This family function uses Fisher scoring. Convergence may be slow for non-intercept-only models; half-stepping is frequently required. } \seealso{ \code{\link{rlaplace}}, \code{\link{alaplace2}} (which differs slightly from this parameterization), \code{\link{exponential}}, \code{\link[stats]{median}}. } \examples{ ldata <- data.frame(y = rlaplace(nn <- 100, loc = 2, scale = exp(1))) fit <- vglm(y ~ 1, laplace, data = ldata, trace = TRUE, crit = "l") coef(fit, matrix = TRUE) Coef(fit) with(ldata, median(y)) ldata <- data.frame(x = runif(nn <- 1001)) ldata <- transform(ldata, y = rlaplace(nn, loc = 2, scale = exp(-1 + 1*x))) coef(vglm(y ~ x, laplace(iloc = 0.2, imethod = 2, zero = 1), data = ldata, trace = TRUE), matrix = TRUE) } \keyword{models} \keyword{regression} VGAM/man/cumulative.Rd0000644000176200001440000002701613135276753014246 0ustar liggesusers\name{cumulative} \alias{cumulative} %\alias{scumulative} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Ordinal Regression with Cumulative Probabilities } \description{ Fits a cumulative link regression model to a (preferably ordered) factor response. } \usage{ cumulative(link = "logit", parallel = FALSE, reverse = FALSE, multiple.responses = FALSE, whitespace = FALSE) } %apply.parint = FALSE, %scumulative(link = "logit", % lscale = "loge", escale = list(), % parallel = FALSE, sparallel = TRUE, reverse = FALSE, iscale = 1) %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to the \eqn{J} cumulative probabilities. See \code{\link{Links}} for more choices, e.g., for the cumulative \code{\link{probit}}/\code{\link{cloglog}}/\code{\link{cauchit}}/\ldots models. } % \item{lscale}{ % Link function applied to the \eqn{J} scaling parameters. % See \code{\link{Links}} for more choices. % % } \item{parallel}{ A logical or formula specifying which terms have equal/unequal coefficients. See below for more information about the parallelism assumption. The default results in what some people call the \emph{generalized ordered logit model} being fitted. If \code{parallel = TRUE} then it does not apply to the intercept. } % \item{sparallel}{ % For the scaling parameters. % A logical, or formula specifying which terms have % equal/unequal coefficients. % This argument is not applied to the intercept. % The \code{scumulative()} function requires covariates; for % intercept models use \code{cumulative()}. % } \item{reverse}{ Logical. By default, the cumulative probabilities used are \eqn{P(Y\leq 1)}{P(Y<=1)}, \eqn{P(Y\leq 2)}{P(Y<=2)}, \dots, \eqn{P(Y\leq J)}{P(Y<=J)}. If \code{reverse} is \code{TRUE} then \eqn{P(Y\geq 2)}{P(Y>=2)}, \eqn{P(Y\geq 3)}{P(Y>=3)}, \dots, \eqn{P(Y\geq J+1)}{P(Y>=J+1)} are used. This should be set to \code{TRUE} for \code{link=} \code{\link{golf}}, \code{\link{polf}}, \code{\link{nbolf}}. For these links the cutpoints must be an increasing sequence; if \code{reverse = FALSE} then the cutpoints must be a decreasing sequence. } \item{multiple.responses}{ Logical. Multiple responses? If \code{TRUE} then the input should be a matrix with values \eqn{1,2,\dots,L}, where \eqn{L=J+1} is the number of levels. Each column of the matrix is a response, i.e., multiple responses.
A suitable matrix can be obtained from \code{Cut}. } % \item{apply.parint}{ % Logical. % Whether the \code{parallel} argument should be applied to the intercept term. % This should be set to \code{TRUE} for \code{link=} % \code{\link{golf}}, % \code{\link{polf}}, % \code{\link{nbolf}}. % See \code{\link{CommonVGAMffArguments}} for more information. % % % } % \item{iscale}{ % Numeric. Initial values for the scale parameters. % } \item{whitespace}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ In this help file the response \eqn{Y} is assumed to be a factor with ordered values \eqn{1,2,\dots,J+1}. Hence \eqn{M} is the number of linear/additive predictors \eqn{\eta_j}{eta_j}; for \code{cumulative()} one has \eqn{M=J}. % and for \code{scumulative()} \eqn{M=2J}. This \pkg{VGAM} family function fits the class of \emph{cumulative link models} to (hopefully) an ordinal response. By default, the \emph{non-parallel} cumulative logit model is fitted, i.e., \deqn{\eta_j = logit(P[Y \leq j])}{% eta_j = logit(P[Y<=j])} where \eqn{j=1,2,\dots,M} and the \eqn{\eta_j}{eta_j} are not constrained to be parallel. This is also known as the \emph{non-proportional odds model}. If the logit link is replaced by a complementary log-log link (\code{\link{cloglog}}) then this is known as the \emph{proportional-hazards model}. In almost all the literature, the constraint matrices associated with this family of models are known. For example, setting \code{parallel = TRUE} will make all constraint matrices (except for the intercept) equal to a vector of \eqn{M} 1's. If the constraint matrices are equal, unknown and to be estimated, then this can be achieved by fitting the model as a reduced-rank vector generalized linear model (RR-VGLM; see \code{\link{rrvglm}}). Currently, reduced-rank vector generalized additive models (RR-VGAMs) have not been implemented here. % The scaled version of \code{cumulative()}, called \code{scumulative()}, % has \eqn{J} positive scaling factors. % They are described in pages 154 and 177 of McCullagh and Nelder (1989); % see their equation (5.4) in particular, % which they call the \emph{generalized rational model}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Agresti, A. (2013) \emph{Categorical Data Analysis}, 3rd ed. Hoboken, NJ, USA: Wiley. Agresti, A. (2010) \emph{Analysis of Ordinal Categorical Data}, 2nd ed. Hoboken, NJ, USA: Wiley. Dobson, A. J. and Barnett, A. (2008) \emph{An Introduction to Generalized Linear Models}, 3rd ed. Boca Raton: Chapman & Hall/CRC Press. McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. Simonoff, J. S. (2003) \emph{Analyzing Categorical Data}, New York: Springer-Verlag. Yee, T. W. (2010) The \pkg{VGAM} package for categorical data analysis. \emph{Journal of Statistical Software}, \bold{32}, 1--34. \url{http://www.jstatsoft.org/v32/i10/}. Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. %Further information and examples on categorical data analysis %by the \pkg{VGAM} package can be found at %\url{http://www.stat.auckland.ac.nz/~yee/VGAM/doc/categorical.pdf}. } \author{ Thomas W. Yee } \note{ The response should be either a matrix of counts (with row sums that are all positive), or a factor. 
In both cases, the \code{y} slot returned by \code{\link{vglm}}/\code{\link{vgam}}/\code{\link{rrvglm}} is the matrix of counts. The formula must contain an intercept term. Other \pkg{VGAM} family functions for an ordinal response include \code{\link{acat}}, \code{\link{cratio}}, \code{\link{sratio}}. For a nominal (unordered) factor response, the multinomial logit model (\code{\link{multinomial}}) is more appropriate. With the logit link, setting \code{parallel = TRUE} will fit a proportional odds model. Note that the \code{TRUE} here does not apply to the intercept term. In practice, the validity of the proportional odds assumption needs to be checked, e.g., by a likelihood ratio test (LRT). If it is acceptable for the data, then numerical problems are less likely to occur during the fitting, and there are fewer parameters. Numerical problems occur when the linear/additive predictors cross, which results in probabilities outside of \eqn{(0,1)}; setting \code{parallel = TRUE} will help avoid this problem. Here is an example of the usage of the \code{parallel} argument. If there are covariates \code{x2}, \code{x3} and \code{x4}, then \code{parallel = TRUE ~ x2 + x3 -1} and \code{parallel = FALSE ~ x4} are equivalent. This would constrain the regression coefficients for \code{x2} and \code{x3} to be equal; those of the intercepts and \code{x4} would be different. If the data is inputted in \emph{long} format (not \emph{wide} format, as in \code{\link{pneumo}} below) and the self-starting initial values are not good enough then try using \code{mustart}, \code{coefstart} and/or \code{etastart}. See the example below. To fit the proportional odds model one can use the \pkg{VGAM} family function \code{\link{propodds}}. Note that \code{propodds(reverse)} is equivalent to \code{cumulative(parallel = TRUE, reverse = reverse)} (which is equivalent to \code{cumulative(parallel = TRUE, reverse = reverse, link = "logit")}). It is for convenience only. A call to \code{cumulative()} is preferred since it reminds the user that a parallelism assumption is made, as well as being a lot more flexible. % In the future, this family function may be renamed to % ``\code{cups}'' (for \bold{cu}mulative \bold{p}robabilitie\bold{s}) % or ``\code{cute}'' (for \bold{cu}mulative probabili\bold{t}i\bold{e}s). % Please let me know if you strongly agree or disagree about this. } \section{Warning }{ No check is made to verify that the response is ordinal if the response is a matrix; see \code{\link[base:factor]{ordered}}. } \seealso{ \code{\link{propodds}}, \code{\link{prplot}}, \code{\link{margeff}}, \code{\link{acat}}, \code{\link{cratio}}, \code{\link{sratio}}, \code{\link{multinomial}}, \code{\link{pneumo}}, \code{\link{Links}}, \code{\link{hdeff.vglm}}, \code{\link{logit}}, \code{\link{probit}}, \code{\link{cloglog}}, \code{\link{cauchit}}, \code{\link{golf}}, \code{\link{polf}}, \code{\link{nbolf}}, \code{\link{logistic1}}.
} \examples{ # Fit the proportional odds model, p.179, in McCullagh and Nelder (1989) pneumo <- transform(pneumo, let = log(exposure.time)) (fit <- vglm(cbind(normal, mild, severe) ~ let, cumulative(parallel = TRUE, reverse = TRUE), data = pneumo)) depvar(fit) # Sample proportions (good technique) fit@y # Sample proportions (bad technique) weights(fit, type = "prior") # Number of observations coef(fit, matrix = TRUE) constraints(fit) # Constraint matrices apply(fitted(fit), 1, which.max) # Classification apply(predict(fit, newdata = pneumo, type = "response"), 1, which.max) # Classification # Check that the model is linear in let ---------------------- fit2 <- vgam(cbind(normal, mild, severe) ~ s(let, df = 2), cumulative(reverse = TRUE), data = pneumo) \dontrun{ plot(fit2, se = TRUE, overlay = TRUE, lcol = 1:2, scol = 1:2) } # Check the proportional odds assumption with a LRT ---------- (fit3 <- vglm(cbind(normal, mild, severe) ~ let, cumulative(parallel = FALSE, reverse = TRUE), data = pneumo)) pchisq(2 * (logLik(fit3) - logLik(fit)), df = length(coef(fit3)) - length(coef(fit)), lower.tail = FALSE) lrtest(fit3, fit) # More elegant # A factor() version of fit ---------------------------------- # This is in long format (cf. wide format above) Nobs <- round(depvar(fit) * c(weights(fit, type = "prior"))) sumNobs <- colSums(Nobs) # apply(Nobs, 2, sum) pneumo.long <- data.frame(symptoms = ordered(rep(rep(colnames(Nobs), nrow(Nobs)), times = c(t(Nobs))), levels = colnames(Nobs)), let = rep(rep(with(pneumo, let), each = ncol(Nobs)), times = c(t(Nobs)))) with(pneumo.long, table(let, symptoms)) # Should be same as pneumo (fit.long1 <- vglm(symptoms ~ let, data = pneumo.long, trace = TRUE, cumulative(parallel = TRUE, reverse = TRUE))) coef(fit.long1, matrix = TRUE) # Should be as coef(fit, matrix = TRUE) # Could try using mustart if fit.long1 failed to converge. mymustart <- matrix(sumNobs / sum(sumNobs), nrow(pneumo.long), ncol(Nobs), byrow = TRUE) fit.long2 <- vglm(symptoms ~ let, mustart = mymustart, cumulative(parallel = TRUE, reverse = TRUE), data = pneumo.long, trace = TRUE) coef(fit.long2, matrix = TRUE) # Should be as coef(fit, matrix = TRUE) } \keyword{models} \keyword{regression} % pneumo$let <- log(pneumo$exposure.time) VGAM/man/waldff.Rd0000644000176200001440000000347213135276753013331 0ustar liggesusers\name{waldff} \alias{waldff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Wald Distribution Family Function } \description{ Estimates the parameter of the standard Wald distribution by maximum likelihood estimation. } \usage{ waldff(llambda = "loge", ilambda = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llambda,ilambda}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The standard Wald distribution is a special case of the inverse Gaussian distribution with \eqn{\mu=1}{mu=1}. It has a density that can be written as \deqn{f(y;\lambda) = \sqrt{\lambda/(2\pi y^3)} \; \exp\left(-\lambda (y-1)^2/(2 y)\right)}{% f(y;lambda) = sqrt(lambda/(2*pi*y^3)) * exp(-lambda*(y-1)^2/(2*y)) } where \eqn{y>0} and \eqn{\lambda>0}{lambda>0}. The mean of \eqn{Y} is \eqn{1} (returned as the fitted values) and its variance is \eqn{1/\lambda}{1/lambda}. By default, \eqn{\eta=\log(\lambda)}{eta=log(lambda)}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Johnson, N. L. and Kotz, S.
and Balakrishnan, N. (1994) \emph{Continuous Univariate Distributions}, 2nd edition, Volume 1, New York: Wiley. } \author{ T. W. Yee } \note{ The \pkg{VGAM} family function \code{\link{inv.gaussianff}} estimates the location parameter \eqn{\mu}{mu} too. } \seealso{ \code{\link{inv.gaussianff}}, \code{\link{rinv.gaussian}}. } \examples{ wdata <- data.frame(y = rinv.gaussian(n = 1000, mu = 1, lambda = exp(1))) wfit <- vglm(y ~ 1, waldff(ilambda = 0.2), data = wdata, trace = TRUE) coef(wfit, matrix = TRUE) Coef(wfit) summary(wfit) } \keyword{models} \keyword{regression} VGAM/man/tikuv.Rd0000644000176200001440000000760613135276753013235 0ustar liggesusers\name{tikuv} \alias{tikuv} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Short-tailed Symmetric Distribution Family Function } \description{ Fits the short-tailed symmetric distribution of Tiku and Vaughan (1999). } \usage{ tikuv(d, lmean = "identitylink", lsigma = "loge", isigma = NULL, zero = "sigma") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{d}{ The \eqn{d} parameter. It must be a single numeric value less than 2. Then \eqn{h = 2-d>0} is another parameter. } \item{lmean, lsigma}{ Link functions for the mean and standard deviation parameters of the usual univariate normal distribution (see \bold{Details} below). They are \eqn{\mu}{mu} and \eqn{\sigma}{sigma} respectively. See \code{\link{Links}} for more choices. } % \item{emean, esigma}{ % List. Extra argument for each of the links. % See \code{earg} in \code{\link{Links}} for general information. %emean = list(), esigma = list(), % % } \item{isigma}{ Optional initial value for \eqn{\sigma}{sigma}. A \code{NULL} means a value is computed internally. } \item{zero}{ A vector specifying which linear/additive predictors are modelled as intercept-only. The values can be from the set \{1,2\}, corresponding respectively to \eqn{\mu}{mu}, \eqn{\sigma}{sigma}. If \code{zero = NULL} then all linear/additive predictors are modelled as a linear combination of the explanatory variables. For many data sets having \code{zero = 2} is a good idea. See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The short-tailed symmetric distribution of Tiku and Vaughan (1999) has a probability density function that can be written \deqn{f(y) = \frac{K}{\sqrt{2\pi} \sigma} \left[ 1 + \frac{1}{2h} \left( \frac{y-\mu}{\sigma} \right)^2 \right]^2 \exp\left( -\frac12 (y-\mu)^2 / \sigma^2 \right) }{% f(y) = (K/(sqrt(2*pi)*sigma)) * [1 + (1/(2*h)) * ((y-mu)/sigma)^2]^2 * exp( -0.5 * (y-mu)^2/ sigma^2) } where \eqn{h=2-d>0}, \eqn{K} is a function of \eqn{h}, \eqn{-\infty < y < \infty}{-Inf < y < Inf}, \eqn{\sigma > 0}{sigma > 0}. The mean of \eqn{Y} is \eqn{E(Y) = \mu}{E(Y) = mu} and this is returned as the fitted values. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Akkaya, A. D. and Tiku, M. L. (2008) Short-tailed distributions and inliers. \emph{Test}, \bold{17}, 282--296. Tiku, M. L. and Vaughan, D. C. (1999) A family of short-tailed symmetric distributions. \emph{Technical report, McMaster University, Canada}. } \author{ Thomas W. Yee } \note{ The density function is the product of a univariate normal density and a polynomial in the response \eqn{y}. The distribution is bimodal if \eqn{d>0}, else is unimodal. 
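% A rough visual check of this claim (a sketch; the m and s arguments
% follow the examples below):
% curve(dtikuv(x, d = 1, m = 0, s = 1), -4, 4)   # Bimodal since d > 0
% curve(dtikuv(x, d = -5, m = 0, s = 1), -4, 4)  # Unimodal since d < 0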
A normal distribution arises as the limit as \eqn{d} approaches \eqn{-\infty}{-Inf}, i.e., as \eqn{h} approaches \eqn{\infty}{Inf}. Fisher scoring is implemented. After fitting the value of \code{d} is stored in \code{@misc} with component name \code{d}. } \section{Warning }{ Under- or over-flow may occur if the data is ill-conditioned, e.g., when \eqn{d} is very close to 2 or approaches \code{-Inf}. } \seealso{ \code{\link{dtikuv}}, \code{\link{uninormal}}. } \examples{ m <- 1.0; sigma <- exp(0.5) tdata <- data.frame(y = rtikuv(n = 1000, d = 1, m = m, s = sigma)) tdata <- transform(tdata, sy = sort(y)) fit <- vglm(y ~ 1, tikuv(d = 1), data = tdata, trace = TRUE) coef(fit, matrix = TRUE) (Cfit <- Coef(fit)) with(tdata, mean(y)) \dontrun{ with(tdata, hist(y, prob = TRUE)) lines(dtikuv(sy, d = 1, m = Cfit[1], s = Cfit[2]) ~ sy, data = tdata, col = "orange") } } \keyword{models} \keyword{regression} VGAM/man/summaryvgam.Rd0000644000176200001440000000413313135276753014433 0ustar liggesusers% 20160804; Adapted from summary.vglm.Rd \name{summaryvgam} \alias{summaryvgam} \alias{show.summary.vgam} \title{Summarizing Vector Generalized Additive Model Fits} \usage{ summaryvgam(object, dispersion = NULL, digits = options()$digits - 2, presid = TRUE, nopredictors = FALSE) \method{show}{summary.vgam}(x, quote = TRUE, prefix = "", digits = options()$digits-2, nopredictors = NULL) } \arguments{ \item{object}{an object of class \code{"vgam"}, which is the result of a call to \code{\link{vgam}} with at least one \code{\link[VGAM]{s}} term. } \item{x}{an object of class \code{"summary.vgam"}, which is the result of a call to \code{summaryvgam()}. } \item{dispersion, digits, presid}{ See \code{\link{summaryvglm}}. } \item{quote, prefix, nopredictors}{ See \code{\link{summaryvglm}}. } } \description{ These functions are all \code{\link{methods}} for class \code{vgam} or \code{summary.vgam} objects. } \details{ This methods function reports a summary more similar to \code{summary.gam()} from \pkg{gam} than \code{\link[mgcv]{summary.gam}} from \pkg{mgcv}. It applies to G1-VGAMs using \code{\link{s}} and vector backfitting. In particular, an approximate score test for \emph{linearity} is conducted for each \code{\link{s}} term---see Section 4.3.4 of Yee (2015) for details. The p-values from this type of test tend to be biased upwards (too large). } \value{ \code{summaryvgam} returns an object of class \code{"summary.vgam"}; see \code{\link{summary.vgam-class}}. } \seealso{ \code{\link{vgam}}, \code{\link[stats]{summary.glm}}, \code{\link[stats]{summary.lm}}, \code{\link[mgcv]{summary.gam}} from \pkg{mgcv}, % A core R package \code{\link{summarypvgam}} for P-VGAMs. % \code{\link[gam]{summary.gam}}. % May not be installed. } \examples{ hfit <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = hunua) summary(hfit) summary(hfit)@anova # Table for (approximate) testing of linearity } \keyword{models} \keyword{regression} % summary(hfit)@post$s.table # For sm.ps() terms. VGAM/man/chinese.nz.Rd0000644000176200001440000000567013135276753014136 0ustar liggesusers\name{chinese.nz} \alias{chinese.nz} \docType{data} \title{ Chinese Population in New Zealand 1867--2001 Data} \description{ The Chinese population in New Zealand from 1867 to 2001, along with the whole of the New Zealand population. } \usage{data(chinese.nz)} \format{ A data frame with 27 observations on the following 4 variables. \describe{ \item{\code{year}}{Year. } \item{\code{male}}{Number of Chinese males. 
} \item{\code{female}}{Number of Chinese females. } \item{\code{nz}}{Total number in the New Zealand population. } } } \details{ Historically, there was a large exodus of Chinese from the Guangdong region starting in the mid-1800s to the gold fields of South Island of New Zealand, California, and southern Australia, etc. Discrimination then meant that only men were allowed entry, to hinder permanent settlement. In the case of New Zealand, the government relaxed its immigration laws after WWII to allow wives of Chinese already in NZ to join them because China had been among the Allied powers. Gradual relaxation in the immigration and an influx during the 1980s meant the Chinese population became increasingly demographically normal over time. The NZ total for the years 1867 and 1871 exclude the Maori population. Three modifications have been made to the female column to make the data internally consistent with the original table. % The second value of 4583 looks erroneous, as seen by the plot below. } %\source{ %} \references{ Page 6 of \emph{Aliens At My Table: Asians as New Zealanders See Them} by M. Ip and N. Murphy, (2005). Penguin Books. Auckland, New Zealand. } \examples{ \dontrun{ par(mfrow = c(1, 2)) plot(female / (male + female) ~ year, chinese.nz, type = "b", ylab = "Proportion", col = "blue", las = 1, cex = 0.015 * sqrt(male + female), # cex = 0.10 * sqrt((male + female)^1.5 / sqrt(female) / sqrt(male)), main = "Proportion of NZ Chinese that are female") abline(h = 0.5, lty = "dashed", col = "gray") fit1.cnz <- vglm(cbind(female, male) ~ year, binomialff, data = chinese.nz) fit2.cnz <- vglm(cbind(female, male) ~ sm.poly(year, 2), binomialff, data = chinese.nz) fit4.cnz <- vglm(cbind(female, male) ~ sm.bs(year, 5), binomialff, data = chinese.nz) lines(fitted(fit1.cnz) ~ year, chinese.nz, col = "purple", lty = 1) lines(fitted(fit2.cnz) ~ year, chinese.nz, col = "green", lty = 2) lines(fitted(fit4.cnz) ~ year, chinese.nz, col = "orange", lwd = 2, lty = 1) legend("bottomright", col = c("purple", "green", "orange"), lty = c(1, 2, 1), leg = c("linear", "quadratic", "B-spline")) plot(100*(male+female)/nz ~ year, chinese.nz, type = "b", ylab = "Percent", ylim = c(0, max(100*(male+female)/nz)), col = "blue", las = 1, main = "Percent of NZers that are Chinese") abline(h = 0, lty = "dashed", col = "gray") } } \keyword{datasets} % Albany, Auckland, New Zealand. VGAM/man/vglm.control.Rd0000644000176200001440000002660613135276753014520 0ustar liggesusers\name{vglm.control} \alias{vglm.control} %- Also NEED an `\alias' for EACH other topic documented here. \title{ Control Function for vglm() } \description{ Algorithmic constants and parameters for running \code{vglm} are set using this function. } \usage{ vglm.control(checkwz = TRUE, Check.rank = TRUE, Check.cm.rank = TRUE, criterion = names(.min.criterion.VGAM), epsilon = 1e-07, half.stepsizing = TRUE, maxit = 30, noWarning = FALSE, stepsize = 1, save.weights = FALSE, trace = FALSE, wzepsilon = .Machine$double.eps^0.75, xij = NULL, ...) } %- maybe also `usage' for other objects documented here. \arguments{ \item{checkwz}{ logical indicating whether the diagonal elements of the working weight matrices should be checked whether they are sufficiently positive, i.e., greater than \code{wzepsilon}. If not, any values less than \code{wzepsilon} are replaced with this value. } \item{Check.rank}{ logical indicating whether the rank of the VLM matrix should be checked. If this is not of full column rank then the results are not to be trusted. 
The default is to give an error message if the VLM matrix is not of full column rank. } \item{Check.cm.rank}{ logical indicating whether the rank of each constraint matrix should be checked. If this is not of full column rank then an error will occur. Under no circumstances should any constraint matrix have a rank less than the number of columns. } \item{criterion}{ character variable describing what criterion is to be used to test for convergence. The possibilities are listed in \code{.min.criterion.VGAM}, but most family functions only implement a few of these. } \item{epsilon}{ positive convergence tolerance epsilon. Roughly speaking, the Newton-Raphson/Fisher-scoring iterations are assumed to have converged when two successive \code{criterion} values are within \code{epsilon} of each other. } \item{half.stepsizing}{ logical indicating if half-stepsizing is allowed. For example, in maximizing a log-likelihood, if the next iteration has a log-likelihood that is less than the current value of the log-likelihood, then a half step will be taken. If the log-likelihood is still less than at the current position, a quarter-step will be taken etc. Eventually a step will be taken so that an improvement is made to the convergence criterion. \code{half.stepsizing} is ignored if \code{criterion == "coefficients"}. } \item{maxit}{ maximum number of (usually Fisher-scoring) iterations allowed. Sometimes Newton-Raphson is used. } \item{noWarning}{ logical indicating whether to suppress a warning if convergence is not obtained within \code{maxit} iterations. This is ignored if \code{maxit = 1} is set. } \item{stepsize}{ usual step size to be taken between each Newton-Raphson/Fisher-scoring iteration. It should be a value between 0 and 1, where a value of unity corresponds to an ordinary step. A value of 0.5 means half-steps are taken. Setting a value near zero will cause convergence to be generally slow but may help increase the chances of successful convergence for some family functions. } \item{save.weights}{ logical indicating whether the \code{weights} slot of a \code{"vglm"} object will be saved on the object. If not, it will be reconstructed when needed, e.g., \code{summary}. Some family functions have \code{save.weights = TRUE} and others have \code{save.weights = FALSE} in their control functions. } \item{trace}{ logical indicating if output should be produced for each iteration. Setting \code{trace = TRUE} is recommended in general because \pkg{VGAM} fits a very broad variety of models and distributions, and for some of them, convergence is intrinsically more difficult. Monitoring convergence can help check that the solution is reasonable or that a problem has occurred. It may suggest better initial values are needed, the making of invalid assumptions, or that the model is inappropriate for the data, etc. } \item{wzepsilon}{ small positive number used to test whether the diagonals of the working weight matrices are sufficiently positive. } \item{xij}{ A formula or a list of formulas. Each formula has a RHS giving \eqn{M} terms making up a covariate-dependent term (whose name is the response). That is, it creates a variable that takes on different values for each linear/additive predictor, e.g., the ocular pressure of each eye. The \eqn{M} terms must be unique; use \code{\link{fill1}}, \code{fill2}, \code{fill3}, etc. if necessary. Each formula should have a response which is taken as the name of that variable, and the \eqn{M} terms are enumerated in sequential order. 
Each of the \eqn{M} terms multiply each successive row of the constraint matrix. When \code{xij} is used, the use of \code{form2} is also required to give \emph{every} term used by the model. The function \code{\link{Select}} can be used to select variables beginning with the same character string. } % \item{jix}{ % A formula or a list of formulas specifying % which explanatory variables are to be plotted for each \code{xij} term. % For example, in the code below, % the term \code{BS(dumm)} could be plotted against either % \code{dum1} or \code{dum2}, therefore % either \code{jix=dum1} or \code{jix=dum2} are ok. % This argument is made use of by \code{plotvgam()}. % Each formula has a RHS giving \eqn{r_k} unique terms, % one for each column of the constraint matrix. % Each formula should have a response that matches the % \code{formula} argument. % The argument \code{jix} is a reversal of \code{xij} to emphasize % the same framework for handling terms involving covariates that have % different values for each linear/additive predictor. % % } \item{\dots}{ other parameters that may be picked up from control functions that are specific to the \pkg{VGAM} family function. } } \details{ Most of the control parameters are used within \code{vglm.fit} and you will have to look at that to understand the full details. Setting \code{save.weights = FALSE} is useful for some models because the \code{weights} slot of the object is the largest and so less memory is used to store the object. However, for some \pkg{VGAM} family function, it is necessary to set \code{save.weights = TRUE} because the \code{weights} slot cannot be reconstructed later. } \value{ A list with components matching the input names. A little error checking is done, but not much. The list is assigned to the \code{control} slot of \code{vglm} objects. } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. } \author{ Thomas W. Yee} \note{ Reiterating from above, setting \code{trace = TRUE} is recommended in general. In Example 2 below there are two covariates that have linear/additive predictor specific values. These are handled using the \code{xij} argument. } \section{Warning}{ For some applications the default convergence criterion should be tightened. Setting something like \code{criterion = "coef", epsilon = 1e-09} is one way to achieve this, and also add \code{trace = TRUE} to monitor the convergence. Setting \code{maxit} to some higher number is usually not needed, and needing to do so suggests something is wrong, e.g., an ill-conditioned model, over-fitting or under-fitting. } \seealso{ \code{\link{vglm}}, \code{\link{fill1}}. The author's homepage has further documentation about the \code{xij} argument; see also \code{\link{Select}}. } \examples{ # Example 1. pneumo <- transform(pneumo, let = log(exposure.time)) vglm(cbind(normal, mild, severe) ~ let, multinomial, data = pneumo, crit = "coef", step = 0.5, trace = TRUE, epsil = 1e-8, maxit = 40) # Example 2. The use of the xij argument (simple case). 
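# The idea, roughly: the term Z below takes the value z1 for eta_1,
# z2 for eta_2, etc. (M = 4 linear predictors here), and form2 must
# mention every term used anywhere by the model so that the big VLM
# model matrix can be constructed.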
ymat <- rdiric(n <- 1000, shape = rep(exp(2), len = 4)) mydat <- data.frame(x1 = runif(n), x2 = runif(n), x3 = runif(n), x4 = runif(n), z1 = runif(n), z2 = runif(n), z3 = runif(n), z4 = runif(n)) mydat <- transform(mydat, X = x1, Z = z1) mydat <- round(mydat, digits = 2) fit2 <- vglm(ymat ~ X + Z, dirichlet(parallel = TRUE), data = mydat, trace = TRUE, xij = list(Z ~ z1 + z2 + z3 + z4, X ~ x1 + x2 + x3 + x4), form2 = ~ Z + z1 + z2 + z3 + z4 + X + x1 + x2 + x3 + x4) head(model.matrix(fit2, type = "lm")) # LM model matrix head(model.matrix(fit2, type = "vlm")) # Big VLM model matrix coef(fit2) coef(fit2, matrix = TRUE) max(abs(predict(fit2)-predict(fit2, new = mydat))) # Predicts correctly summary(fit2) \dontrun{ # plotvgam(fit2, se = TRUE, xlab = "x1", which.term = 1) # Bug! # plotvgam(fit2, se = TRUE, xlab = "z1", which.term = 2) # Bug! plotvgam(fit2, xlab = "x1") # Correct plotvgam(fit2, xlab = "z1") # Correct } # Example 3. The use of the xij argument (complex case). set.seed(123) coalminers <- transform(coalminers, Age = (age - 42) / 5, dum1 = round(runif(nrow(coalminers)), digits = 2), dum2 = round(runif(nrow(coalminers)), digits = 2), dum3 = round(runif(nrow(coalminers)), digits = 2), dumm = round(runif(nrow(coalminers)), digits = 2)) BS <- function(x, ..., df = 3) sm.bs(c(x,...), df = df)[1:length(x),,drop = FALSE] NS <- function(x, ..., df = 3) sm.ns(c(x,...), df = df)[1:length(x),,drop = FALSE] # Equivalently... BS <- function(x, ..., df = 3) head(sm.bs(c(x,...), df = df), length(x), drop = FALSE) NS <- function(x, ..., df = 3) head(sm.ns(c(x,...), df = df), length(x), drop = FALSE) fit3 <- vglm(cbind(nBnW,nBW,BnW,BW) ~ Age + NS(dum1, dum2), fam = binom2.or(exchangeable = TRUE, zero = 3), xij = list(NS(dum1, dum2) ~ NS(dum1, dum2) + NS(dum2, dum1) + fill(NS( dum1))), form2 = ~ NS(dum1, dum2) + NS(dum2, dum1) + fill(NS(dum1)) + dum1 + dum2 + dum3 + Age + age + dumm, data = coalminers, trace = TRUE) head(model.matrix(fit3, type = "lm")) # LM model matrix head(model.matrix(fit3, type = "vlm")) # Big VLM model matrix coef(fit3) coef(fit3, matrix = TRUE) \dontrun{ plotvgam(fit3, se = TRUE, lcol = "red", scol = "blue", xlab = "dum1") } } \keyword{models} \keyword{regression} % zz 20090506 put elsewhere: % % %# Example 4. The use of the xij argument (complex case). %# Here is one method to handle the xij argument with a term that %# produces more than one column in the model matrix. %# The constraint matrix for 'op' has one column. %POLY3 <- function(x, ...) { % # A cubic; ensures that the basis functions are the same. % poly(c(x,...), 3)[1:length(x),] %} % %\dontrun{ %fit4 <- vglm(cbind(leye,reye) ~ POLY3(op), trace=TRUE, % fam = binom2.or(exchangeable=TRUE, zero=3), data=eyesdata, % xij = list(POLY3(op) ~ POLY3(lop,rop) + POLY3(rop,lop) + % fill(POLY3(lop,rop))), % form2 = ~ POLY3(op) + POLY3(lop,rop) + POLY3(rop,lop) + % fill(POLY3(lop,rop))) %coef(fit4) %coef(fit4, matrix=TRUE) %head(predict(fit4)) %} VGAM/man/lomaxUC.Rd0000644000176200001440000000535513135276753013442 0ustar liggesusers\name{Lomax} \alias{Lomax} \alias{dlomax} \alias{plomax} \alias{qlomax} \alias{rlomax} \title{The Lomax Distribution} \description{ Density, distribution function, quantile function and random generation for the Lomax distribution with scale parameter \code{scale} and shape parameter \code{q}. 
} \usage{ dlomax(x, scale = 1, shape3.q, log = FALSE) plomax(q, scale = 1, shape3.q, lower.tail = TRUE, log.p = FALSE) qlomax(p, scale = 1, shape3.q, lower.tail = TRUE, log.p = FALSE) rlomax(n, scale = 1, shape3.q) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1}, the length is taken to be the number required.} \item{scale}{scale parameter.} \item{shape3.q}{shape parameter.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dlomax} gives the density, \code{plomax} gives the distribution function, \code{qlomax} gives the quantile function, and \code{rlomax} generates random deviates. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{lomax}}, which is the \pkg{VGAM} family function for estimating the parameters by maximum likelihood estimation. } \note{ The Lomax distribution is a special case of the 4-parameter generalized beta II distribution. } \seealso{ \code{\link{lomax}}, \code{\link{genbetaII}}. } \examples{ probs <- seq(0.1, 0.9, by = 0.1) max(abs(plomax(qlomax(p = probs, shape3.q = 1), shape3.q = 1) - probs)) # Should be 0 \dontrun{ par(mfrow = c(1, 2)) x <- seq(-0.01, 5, len = 401) plot(x, dexp(x), type = "l", col = "black", ylab = "", ylim = c(0, 3), main = "Black is standard exponential, others are dlomax(x, shape3.q)") lines(x, dlomax(x, shape3.q = 1), col = "orange") lines(x, dlomax(x, shape3.q = 2), col = "blue") lines(x, dlomax(x, shape3.q = 5), col = "green") legend("topright", col = c("orange","blue","green"), lty = rep(1, 3), legend = paste("shape3.q =", c(1, 2, 5))) plot(x, pexp(x), type = "l", col = "black", ylab = "", las = 1, main = "Black is standard exponential, others are plomax(x, shape3.q)") lines(x, plomax(x, shape3.q = 1), col = "orange") lines(x, plomax(x, shape3.q = 2), col = "blue") lines(x, plomax(x, shape3.q = 5), col = "green") legend("bottomright", col = c("orange","blue","green"), lty = rep(1, 3), legend = paste("shape3.q =", c(1, 2, 5))) } } \keyword{distribution} VGAM/man/dirichlet.Rd0000644000176200001440000001022213135276753014026 0ustar liggesusers\name{dirichlet} \alias{dirichlet} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Fitting a Dirichlet Distribution } \description{ Fits a Dirichlet distribution to a matrix of compositions. } \usage{ dirichlet(link = "loge", parallel = FALSE, zero = NULL, imethod = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to each of the \eqn{M} (positive) shape parameters \eqn{\alpha_j}{alpha_j}. See \code{\link{Links}} for more choices. The default gives \eqn{\eta_j=\log(\alpha_j)}{eta_j=log(alpha_j)}. } \item{parallel, zero, imethod}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ In this help file the response is assumed to be a \eqn{M}-column matrix with positive values and whose rows each sum to unity. Such data can be thought of as compositional data. There are \eqn{M} linear/additive predictors \eqn{\eta_j}{eta_j}. The Dirichlet distribution is commonly used to model compositional data, including applications in genetics. 
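For example, \code{\link{rdiric}} generates such compositions; a minimal sketch (the shape values here are arbitrary):
\preformatted{
set.seed(1)
comp <- rdiric(n = 5, shape = c(1, 2, 3))  # A 5 x 3 matrix
rowSums(comp)  # Each row sums to unity
}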
Suppose \eqn{(Y_1,\ldots,Y_{M})^T}{(Y_1,\ldots,Y_M)^T} is the response. Then it has a Dirichlet distribution if \eqn{(Y_1,\ldots,Y_{M-1})^T}{(Y_1,\ldots,Y_{M-1})^T} has density \deqn{\frac{\Gamma(\alpha_{+})} {\prod_{j=1}^{M} \Gamma(\alpha_{j})} \prod_{j=1}^{M} y_j^{\alpha_{j} -1}}{% (Gamma(alpha_+) / prod_{j=1}^M gamma(alpha_j)) prod_{j=1}^M y_j^(alpha_j -1)} where \eqn{\alpha_+=\alpha_1+\cdots+\alpha_M}{alpha_+= alpha_1 + \dots + alpha_M}, \eqn{\alpha_j > 0}{alpha_j > 0}, and the density is defined on the unit simplex \deqn{\Delta_{M} = \left\{ (y_1,\ldots,y_{M})^T : y_1 > 0, \ldots, y_{M} > 0, \sum_{j=1}^{M} y_j = 1 \right\}. }{% Delta_M = { (y_1,\ldots,y_M)^T : y_1 > 0, \dots, y_M > 0, \sum_{j=1}^M y_j = 1 }. } One has \eqn{E(Y_j) = \alpha_j / \alpha_{+}}{E(Y_j) = alpha_j / alpha_{+}}, which are returned as the fitted values. For this distribution Fisher scoring corresponds to Newton-Raphson. The Dirichlet distribution can be motivated by considering the random variables \eqn{(G_1,\ldots,G_{M})^T}{(G_1,\ldots,G_M)^T} which are each independent and identically distributed as a gamma distribution with density \eqn{f(g_j)=g_j^{\alpha_j - 1} e^{-g_j} / \Gamma(\alpha_j)}{f(g_j)= g_j^(alpha_j - 1) e^(-g_j) / gamma(alpha_j)}. Then the Dirichlet distribution arises when \eqn{Y_j=G_j / (G_1 + \cdots + G_M)}{Y_j = G_j / (G_1 + ... + G_M)}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. When fitted, the \code{fitted.values} slot of the object contains the \eqn{M}-column matrix of means. } \references{ Lange, K. (2002) \emph{Mathematical and Statistical Methods for Genetic Analysis}, 2nd ed. New York: Springer-Verlag. Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011) \emph{Statistical Distributions}, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition. %Documentation accompanying the \pkg{VGAM} package at %\url{http://www.stat.auckland.ac.nz/~yee} %contains further information and examples. } \author{ Thomas W. Yee } \note{ The response should be a matrix of positive values whose rows each sum to unity. Similar to this is count data, where probably a multinomial logit model (\code{\link{multinomial}}) may be appropriate. Another similar distribution to the Dirichlet is the Dirichlet-multinomial (see \code{\link{dirmultinomial}}). } \seealso{ \code{\link{rdiric}}, \code{\link{dirmultinomial}}, \code{\link{multinomial}}, \code{\link{simplex}}. } % yettodo: use the data of \citet[p.81]{mosi:1962}. See % See also \citet[pp.8--9]{macd:2014}. \examples{ ddata <- data.frame(rdiric(n = 1000, shape = exp(c(y1 = -1, y2 = 1, y3 = 0)))) fit <- vglm(cbind(y1, y2, y3) ~ 1, dirichlet, data = ddata, trace = TRUE, crit = "coef") Coef(fit) coef(fit, matrix = TRUE) head(fitted(fit)) } \keyword{models} \keyword{regression} % colnames(ddata) <- paste("y", 1:3, sep = "") VGAM/man/oizipf.Rd0000644000176200001440000000457713135276753013377 0ustar liggesusers\name{oizipf} \alias{oizipf} %- Also NEED an '\alias' for EACH other topic documented here. \title{ One-inflated Zipf Distribution Family Function } \description{ Fits a 1-inflated Zipf distribution. } \usage{ oizipf(N = NULL, lpstr1 = "logit", lshape = "loge", type.fitted = c("mean", "shape", "pobs1", "pstr1", "onempstr1"), ishape = NULL, gpstr1 = ppoints(8), gshape = exp((-3:3) / 4), zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{N}{ Same as \code{\link{zipf}}. 
} \item{lpstr1, lshape}{ For \code{lpstr1}: the same idea as \code{\link{zipoisson}} except it applies to a structural 1. } \item{gpstr1, gshape, ishape}{ For initial values. See \code{\link{CommonVGAMffArguments}} for information. } \item{type.fitted, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The 1-inflated Zipf distribution is a mixture distribution of the Zipf distribution with some probability of obtaining a (structural) 1. Thus there are two sources for obtaining the value 1. This distribution is written here in a way that retains a similar notation to the zero-inflated Poisson, i.e., the probability \eqn{P[Y=1]} involves another parameter \eqn{\phi}{phi}. See \code{\link{zipoisson}}. This family function can handle multiple responses. } \section{Warning }{ Under- or over-flow may occur if the data is ill-conditioned. Lots of data is needed to estimate the parameters accurately. Usually the shape parameter is best modelled as intercept-only. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } %\references{ %} \author{ Thomas W. Yee } %\note{ %} \seealso{ \code{\link{Oizipf}}, \code{\link{zipf}}, \code{\link{Oizeta}}. } \examples{ \dontrun{ odata <- data.frame(x2 = runif(nn <- 1000)) # Artificial data odata <- transform(odata, pstr1 = logit(-1 + x2, inverse = TRUE), myN = 10, shape = exp(-0.5)) odata <- transform(odata, y1 = roizipf(nn, N = myN, s = shape, pstr1 = pstr1)) with(odata, table(y1)) fit1 <- vglm(y1 ~ x2, oizipf(zero = "shape"), data = odata, trace = TRUE) coef(fit1, matrix = TRUE) } } \keyword{models} \keyword{regression} VGAM/man/Coef.vlm.Rd0000644000176200001440000000321713135276753013536 0ustar liggesusers\name{Coef.vlm} \alias{Coef.vlm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Extract Model Coefficients for VLM Objects } \description{ Amongst other things, this function applies inverse link functions to the parameters of intercept-only VGLMs. } \usage{ Coef.vlm(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A fitted model. } \item{\dots}{ Arguments which may be passed into \code{\link[stats]{coef}}. } } \details{ Most \pkg{VGAM} family functions apply a link function to the parameters, e.g., positive parameters often have a log link, and parameters between 0 and 1 have a logit link. This function can back-transform the parameter estimates to the original scale. } \value{ For intercept-only models (e.g., the formula is \code{y ~ 1}) the back-transformed parameter estimates can be returned. } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. } \author{ Thomas W. Yee } %\note{ ~~further notes~~ } \section{Warning }{ This function may not work for \emph{all} \pkg{VGAM} family functions. You should check your results on some artificial data before applying it to models fitted to real data. } \seealso{ \code{\link{Coef}}, \code{\link[stats]{coef}}.
} \examples{ set.seed(123); nn <- 1000 bdata <- data.frame(y = rbeta(nn, shape1 = 1, shape2 = 3)) fit <- vglm(y ~ 1, betaff, data = bdata, trace = TRUE) # intercept-only model coef(fit, matrix = TRUE) # log scale Coef(fit) # On the original scale } \keyword{models} \keyword{regression} VGAM/man/weibull.mean.Rd0000644000176200001440000000656413135276753014453 0ustar liggesusers\name{weibull.mean} \alias{weibull.mean} %\alias{weibullff} %\alias{weibull.lsh} %\alias{weibull3} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Weibull Distribution Family Function, Parameterized by the Mean } \description{ Maximum likelihood estimation of the 2-parameter Weibull distribution. The mean is one of the parameters. No observations should be censored. } \usage{ weibull.mean(lmean = "loge", lshape = "loge", imean = NULL, ishape = NULL, probs.y = c(0.2, 0.5, 0.8), imethod = 1, zero = "shape") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmean, lshape}{ Parameter link functions applied to the (positive) mean parameter (called \eqn{\mu}{mu} below) and (positive) shape parameter (called \eqn{a} below). See \code{\link{Links}} for more choices. } \item{imean, ishape}{ Optional initial values for the mean and shape parameters. } \item{imethod, zero, probs.y}{ Details at \code{\link{CommonVGAMffArguments}}. } } \details{ See \code{\link{weibullR}} for most of the details of this family function. The mean of \eqn{Y} is \eqn{b \, \Gamma(1+ 1/a)}{b * gamma(1+ 1/a)} (returned as the fitted values), and this is the first parameter (a \code{\link{loge}} link is the default because it is positive). The other parameter is the positive shape parameter \eqn{a}, also having a default \code{\link{loge}} link. This \pkg{VGAM} family function currently does not handle censored data. Fisher scoring is used to estimate the two parameters. Although the expected information matrices used here are valid in all regions of the parameter space, the regularity conditions for maximum likelihood estimation are satisfied only if \eqn{a>2} (according to Kleiber and Kotz (2003)). If this is violated then a warning message is issued. One can enforce \eqn{a>2} by choosing \code{lshape = logoff(offset = -2)}. Common values of the shape parameter lie between 0.5 and 3.5. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \author{ T. W. Yee } \note{ See \code{\link{weibullR}} for more details. This \pkg{VGAM} family function handles multiple responses. } %\section{Warning}{ % This function is under development to handle other censoring situations. % The version of this function which will handle censored data will be %} \seealso{ \code{\link{weibullR}}, \code{\link[stats:Weibull]{dweibull}}, \code{\link{truncweibull}}, \code{\link{gev}}, \code{\link{lognormal}}, \code{\link{expexpff}}, \code{\link{maxwell}}, \code{\link{rayleigh}}, \code{\link{gumbelII}}.
} \examples{ wdata <- data.frame(x2 = runif(nn <- 1000)) # Complete data wdata <- transform(wdata, mu = exp(-1 + 1 * x2), x3 = rnorm(nn), shape1 = exp(1), shape2 = exp(2)) wdata <- transform(wdata, y1 = rweibull(nn, shape = shape1, scale = mu / gamma(1 + 1/shape1)), y2 = rweibull(nn, shape = shape2, scale = mu / gamma(1 + 1/shape2))) fit <- vglm(cbind(y1, y2) ~ x2 + x3, weibull.mean, data = wdata, trace = TRUE) coef(fit, matrix = TRUE) sqrt(diag(vcov(fit))) # SEs summary(fit, presid = FALSE) } \keyword{models} \keyword{regression} VGAM/man/cgo.Rd0000644000176200001440000000217513135276753012637 0ustar liggesusers\name{cgo} \alias{cgo} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Redirects the user to cqo } \description{ Redirects the user to the function \code{\link{cqo}}. } \usage{ cgo(...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{\dots}{ Ignored. } } \details{ The former function \code{cgo} has been renamed \code{\link{cqo}} because CGO (for \emph{canonical Gaussian ordination}) is a confusing and inaccurate name. CQO (for \emph{constrained quadratic ordination}) is better. This new nomenclature described in Yee (2006). } \value{ Nothing is returned; an error message is issued. } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. } \author{Thomas W. Yee} \section{Warning }{ The code, therefore, in Yee (2004) will not run without changing the \code{"g"} to a \code{"q"}. } \seealso{ \code{\link{cqo}}. } \examples{ \dontrun{ cgo() } } \keyword{models} \keyword{regression} VGAM/man/oiposbinomUC.Rd0000644000176200001440000000765713135276753014507 0ustar liggesusers\name{Oiposbinom} \alias{Oiposbinom} \alias{doiposbinom} \alias{poiposbinom} \alias{qoiposbinom} \alias{roiposbinom} %- Also NEED an '\alias' for EACH other topic documented here. \title{ One-Inflated Positive Binomial Distribution } \description{ Density, distribution function, quantile function and random generation for the one-inflated positive binomial distribution with parameter \code{pstr1}. } \usage{ doiposbinom(x, size, prob, pstr1 = 0, log = FALSE) poiposbinom(q, size, prob, pstr1 = 0) qoiposbinom(p, size, prob, pstr1 = 0) roiposbinom(n, size, prob, pstr1 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, p, q, n}{Same as \code{\link{Posbinom}}. } \item{size, prob}{Same as \code{\link{Posbinom}}. } \item{pstr1}{ Probability of a structural one (i.e., ignoring the positive binomial distribution), called \eqn{\phi}{phi}. The default value of \eqn{\phi = 0}{phi = 0} corresponds to the response having a positive binomial distribution. However, \code{pstr1} can also be negative, in which case it ceases its interpretation as a probability, and this is known as \emph{one-deflation}. } \item{log}{ Logical. Return the logarithm of the answer? } } \details{ The probability function of \eqn{Y} is 1 with probability \eqn{\phi}{phi}, and \eqn{PosBinomial(size, prob)}{PosBinomial(size, prob)} with probability \eqn{1-\phi}{1-phi}. Thus \deqn{P(Y=1) =\phi + (1-\phi) P(W=1)}{% P(Y=1) = phi + (1-phi) * P(W=1)} where \eqn{W} is distributed as a positive \eqn{binomial(size, prob)}{binomial(size, prob)} random variable. 
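A quick numerical check of this mixture formula (a small sketch; the parameter values are arbitrary):
\preformatted{
size <- 10; prob <- 0.2; phi <- 0.4
phi + (1 - phi) * dposbinom(1, size, prob)  # Mixture formula for P(Y=1)
doiposbinom(1, size, prob, pstr1 = phi)     # Should be identical
}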
} \value{ \code{doiposbinom} gives the density, \code{poiposbinom} gives the distribution function, \code{qoiposbinom} gives the quantile function, and \code{roiposbinom} generates random deviates. } %\references{ } \author{ T. W. Yee } \note{ The argument \code{pstr1} is recycled to the required length, and usually has values which lie in the interval \eqn{[0,1]}. % % % These functions actually allow for the \emph{one-deflated positive binomial} distribution. Here, \code{pstr1} is also permitted to lie in the interval \eqn{[-A, 0]} for some positive quantity \eqn{A}. The resulting probability of a unit value is \emph{less than} the nominal positive binomial value, and the use of \code{pstr1} to stand for the probability of a structural 1 loses its meaning. % % % If \code{pstr1} equals \eqn{-A} then this corresponds to the 0- and 1-truncated binomial distribution. } \seealso{ \code{\link{posbinomial}}, \code{\link[stats:binomial]{dbinom}}, \code{\link{binomialff}}. % \code{\link{oiposbinomial}}, } \examples{ size <- 10; prob <- 0.2; pstr1 <- 0.4; x <- (-1):size (ii <- doiposbinom(x, size, prob, pstr1 = pstr1)) table(roiposbinom(100, size, prob, pstr1 = pstr1)) round(doiposbinom(x, size, prob, pstr1 = pstr1) * 100) # Should be similar \dontrun{ x <- 0:size par(mfrow = c(2, 1)) # One-Inflated Positive Binomial barplot(rbind(doiposbinom(x, size, prob, pstr1 = pstr1), dposbinom(x, size, prob)), beside = TRUE, col = c("blue", "orange"), main = paste("OIPB(", size, ",", prob, ", pstr1 = ", pstr1, ") (blue) vs", " PosBinomial(", size, ",", prob, ") (orange)", sep = ""), names.arg = as.character(x)) # One-deflated Pos Binomial deflat.limit <- -dposbinom(1, size, prob) / (1 - dposbinom(1, size, prob)) deflat.limit <- size * prob / (1 + (size-1) * prob - 1 / (1-prob)^(size-1)) # Same value, written another way newpstr1 <- round(deflat.limit, 3) + 0.001 # A little from the boundary barplot(rbind(doiposbinom(x, size, prob, pstr1 = newpstr1), dposbinom(x, size, prob)), beside = TRUE, col = c("blue","orange"), main = paste("ODPB(", size, ",", prob, ", pstr1 = ", newpstr1, ") (blue) vs", " PosBinomial(", size, ",", prob, ") (orange)", sep = ""), names.arg = as.character(x)) } } \keyword{distribution} VGAM/man/gew.Rd0000644000176200001440000000467013135276753012653 0ustar liggesusers\name{gew} \alias{gew} \docType{data} \title{ General Electric and Westinghouse Data } \description{ General Electric and Westinghouse capital data. } \usage{data(gew)} \format{ A data frame with 20 observations on the following 7 variables. All variables are numeric vectors. Variables ending in \code{.g} correspond to General Electric and those ending in \code{.w} are Westinghouse. \describe{ \item{year}{The observations are the years from 1934 to 1953. } \item{invest.g, invest.w}{investment figures. These are \eqn{I=} Gross investment = additions to plant and equipment plus maintenance and repairs in millions of dollars deflated by \eqn{P_1}. } \item{capital.g, capital.w}{capital stocks. These are \eqn{C=} The stock of plant and equipment = accumulated sum of net additions to plant and equipment deflated by \eqn{P_1} minus depreciation allowance deflated by \eqn{P_3}. } \item{value.g, value.w}{market values. These are \eqn{F=} Value of the firm = price of common and preferred shares at December 31 (or average price of December 31 and January 31 of the following year) times number of common and preferred shares outstanding plus total book value of debt at December 31 in millions of dollars deflated by \eqn{P_2}.
} } } \details{ These data are a subset of a table in Boot and de Wit (1960), also known as the Grunfeld data. It is used a lot in econometrics, e.g., for seemingly unrelated regressions (see \code{\link[VGAM:SURff]{SURff}}). Here, \eqn{P_1 =} Implicit price deflator of producers durable equipment (base 1947), \eqn{P_2 =} Implicit price deflator of G.N.P. (base 1947), \eqn{P_3 =} Depreciation expense deflator = ten years moving average of wholesale price index of metals and metal products (base 1947). } \source{ Table 10 of: Boot, J. C. G. and de Wit, G. M. (1960) Investment Demand: An Empirical Contribution to the Aggregation Problem. \emph{International Economic Review}, \bold{1}, 3--30. Grunfeld, Y. (1958) The Determinants of Corporate Investment. Unpublished PhD Thesis (Chicago). } \seealso{ \code{\link[VGAM:SURff]{SURff}}, \url{http://statmath.wu.ac.at/~zeileis/grunfeld}. } \references{ Zellner, A. (1962) An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias. \emph{Journal of the American Statistical Association}, \bold{57}, 348--368. } \examples{ str(gew) } \keyword{datasets} VGAM/man/sratio.Rd0000644000176200001440000001011613135276753013362 0ustar liggesusers\name{sratio} \alias{sratio} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Ordinal Regression with Stopping Ratios } \description{ Fits a stopping ratio logit/probit/cloglog/cauchit/... regression model to an ordered (preferably) factor response. } \usage{ sratio(link = "logit", parallel = FALSE, reverse = FALSE, zero = NULL, whitespace = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to the \eqn{M} stopping ratio probabilities. See \code{\link{Links}} for more choices. } \item{parallel}{ A logical, or formula specifying which terms have equal/unequal coefficients. } \item{reverse}{ Logical. By default, the stopping ratios used are \eqn{\eta_j = logit(P[Y=j|Y \geq j])}{eta_j = logit(P[Y=j|Y>=j])} for \eqn{j=1,\dots,M}. If \code{reverse} is \code{TRUE}, then \eqn{\eta_j = logit(P[Y=j+1|Y \leq j+1])}{eta_j = logit(P[Y=j+1|Y<=j+1])} will be used. } \item{zero}{ Can be an integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2,\ldots,\eqn{M}\}. The default value means none are modelled as intercept-only terms. } \item{whitespace}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ In this help file the response \eqn{Y} is assumed to be a factor with ordered values \eqn{1,2,\dots,M+1}, so that \eqn{M} is the number of linear/additive predictors \eqn{\eta_j}{eta_j}. There are a number of definitions for the \emph{continuation ratio} in the literature. To make life easier, in the \pkg{VGAM} package, we use \emph{continuation} ratios (see \code{\link{cratio}}) and \emph{stopping} ratios. Continuation ratios deal with quantities such as \code{logit(P[Y>j|Y>=j])}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Agresti, A. (2013) \emph{Categorical Data Analysis}, 3rd ed. Hoboken, NJ, USA: Wiley. Simonoff, J. S. (2003) \emph{Analyzing Categorical Data}, New York, USA: Springer-Verlag. McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. Yee, T. W. 
(2010) The \pkg{VGAM} package for categorical data analysis. \emph{Journal of Statistical Software}, \bold{32}, 1--34. \url{http://www.jstatsoft.org/v32/i10/}. %Documentation accompanying the \pkg{VGAM} package at %\url{http://www.stat.auckland.ac.nz/~yee} %contains further information and examples. } \author{ Thomas W. Yee } \note{ The response should be either a matrix of counts (with row sums that are all positive), or a factor. In both cases, the \code{y} slot returned by \code{vglm}/\code{vgam}/\code{rrvglm} is the matrix of counts. For a nominal (unordered) factor response, the multinomial logit model (\code{\link{multinomial}}) is more appropriate. Here is an example of the usage of the \code{parallel} argument. If there are covariates \code{x1}, \code{x2} and \code{x3}, then \code{parallel = TRUE ~ x1 + x2 -1} and \code{parallel = FALSE ~ x3} are equivalent. This would constrain the regression coefficients for \code{x1} and \code{x2} to be equal; those of the intercepts and \code{x3} would be different. } \section{Warning }{ No check is made to verify that the response is ordinal if the response is a matrix; see \code{\link[base:factor]{ordered}}. } \seealso{ \code{\link{cratio}}, \code{\link{acat}}, \code{\link{cumulative}}, \code{\link{multinomial}}, \code{\link{margeff}}, \code{\link{pneumo}}, \code{\link{logit}}, \code{\link{probit}}, \code{\link{cloglog}}, \code{\link{cauchit}}. } \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) (fit <- vglm(cbind(normal, mild, severe) ~ let, sratio(parallel = TRUE), data = pneumo)) coef(fit, matrix = TRUE) constraints(fit) predict(fit) predict(fit, untransform = TRUE) } \keyword{models} \keyword{regression} VGAM/man/explogUC.Rd0000644000176200001440000000426113135276753013613 0ustar liggesusers\name{explog} \alias{explog} \alias{dexplog} \alias{pexplog} \alias{qexplog} \alias{rexplog} \title{The Exponential Logarithmic Distribution} \description{ Density, distribution function, quantile function and random generation for the exponential logarithmic distribution. } \usage{ dexplog(x, scale = 1, shape, log = FALSE) pexplog(q, scale = 1, shape) qexplog(p, scale = 1, shape) rexplog(n, scale = 1, shape) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{scale, shape}{ positive scale and shape parameters. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dexplog} gives the density, \code{pexplog} gives the distribution function, \code{qexplog} gives the quantile function, and \code{rexplog} generates random deviates. } \author{ J. G. Lauder and T. W. Yee } \details{ See \code{\link{explogff}}, the \pkg{VGAM} family function for estimating the parameters, for the formula of the probability density function and other details. } \note{ We define \code{scale} as the reciprocal of the scale parameter used by Tahmasabi and Rezaei (2008). } \seealso{ \code{\link{explogff}}, \code{\link{exponential}}. 
} \examples{ \dontrun{ shape <- 0.5; scale <- 2; nn <- 501 x <- seq(-0.50, 6.0, len = nn) plot(x, dexplog(x, scale, shape), type = "l", las = 1, ylim = c(0, 1.1), ylab = paste("[dp]explog(shape = ", shape, ", scale = ", scale, ")"), col = "blue", cex.main = 0.8, main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles") lines(x, pexplog(x, scale, shape), col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qexplog(probs, scale, shape = shape) lines(Q, dexplog(Q, scale, shape = shape), col = "purple", lty = 3, type = "h") lines(Q, pexplog(Q, scale, shape = shape), col = "purple", lty = 3, type = "h") abline(h = probs, col = "purple", lty = 3) max(abs(pexplog(Q, scale, shape = shape) - probs)) # Should be 0 } } \keyword{distribution} VGAM/man/logit.Rd0000644000176200001440000001436013135276753013204 0ustar liggesusers\name{logit} \alias{logit} \alias{extlogit} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Logit Link Function } \description{ Computes the logit transformation, including its inverse and the first two derivatives. } \usage{ logit(theta, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) extlogit(theta, min = 0, max = 1, bminvalue = NULL, bmaxvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{bvalue, bminvalue, bmaxvalue}{ See \code{\link{Links}}. These are boundary values. For \code{extlogit}, values of \code{theta} less than or equal to \eqn{A} or greater than or equal to \eqn{B} can be replaced by \code{bminvalue} and \code{bmaxvalue}. } % Extra argument for passing in additional information. % For \code{logit}, values of \code{theta} which are equal to 0 or 1 are % replaced by \code{earg} or \code{1-earg} % (respectively, and if given) before computing the logit. \item{min, max}{ For \code{extlogit}, \code{min} gives \eqn{A}, \code{max} gives \eqn{B}, and for out of range values, \code{bminvalue} and \code{bmaxvalue}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The logit link function is very commonly used for parameters that lie in the unit interval. Numerical values of \code{theta} close to 0 or 1 or out of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. The \emph{extended} logit link function \code{extlogit} should be used more generally for parameters that lie in the interval \eqn{(A,B)}, say. The formula is \deqn{\log((\theta-A)/(B-\theta))}{% log((theta-A)/(B-theta))} and the default values for \eqn{A} and \eqn{B} correspond to the ordinary logit function. Numerical values of \code{theta} close to \eqn{A} or \eqn{B} or out of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. However these can be replaced by values \eqn{bminvalue} and \eqn{bmaxvalue} first before computing the link function. } \value{ For \code{logit} with \code{deriv = 0}, the logit of \code{theta}, i.e., \code{log(theta/(1-theta))} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{exp(theta)/(1+exp(theta))}. For \code{deriv = 1}, then the function returns \emph{d} \code{eta} / \emph{d} \code{theta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. Here, all logarithms are natural logarithms, i.e., to base \emph{e}. } \references{ McCullagh, P. and Nelder, J. A. 
(1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ Numerical instability may occur when \code{theta} is close to 1 or 0 (for \code{logit}), or close to \eqn{A} or \eqn{B} for \code{extlogit}. One way of overcoming this is to use, e.g., \code{bvalue}. In terms of the threshold approach with cumulative probabilities for an ordinal response this link function corresponds to the univariate logistic distribution (see \code{\link{logistic}}). } \seealso{ \code{\link{Links}}, \code{\link{logitoffsetlink}}, \code{\link{probit}}, \code{\link{cloglog}}, \code{\link{cauchit}}, \code{\link{logistic1}}, \code{\link{loge}}, \code{\link[stats]{plogis}}, \code{\link{multilogit}}. } \examples{ p <- seq(0.01, 0.99, by = 0.01) logit(p) max(abs(logit(logit(p), inverse = TRUE) - p)) # Should be 0 p <- c(seq(-0.02, 0.02, by = 0.01), seq(0.97, 1.02, by = 0.01)) logit(p) # Has NAs logit(p, bvalue = .Machine$double.eps) # Has no NAs p <- seq(0.9, 2.2, by = 0.1) extlogit(p, min = 1, max = 2, bminvalue = 1 + .Machine$double.eps, bmaxvalue = 2 - .Machine$double.eps) # Has no NAs \dontrun{ par(mfrow = c(2,2), lwd = (mylwd <- 2)) y <- seq(-4, 4, length = 100) p <- seq(0.01, 0.99, by = 0.01) for (d in 0:1) { myinv <- (d > 0) matplot(p, cbind( logit(p, deriv = d, inverse = myinv), probit(p, deriv = d, inverse = myinv)), type = "n", col = "purple", ylab = "transformation", las = 1, main = if (d == 0) "Some probability link functions" else "1 / first derivative") lines(p, logit(p, deriv = d, inverse = myinv), col = "limegreen") lines(p, probit(p, deriv = d, inverse = myinv), col = "purple") lines(p, cloglog(p, deriv = d, inverse = myinv), col = "chocolate") lines(p, cauchit(p, deriv = d, inverse = myinv), col = "tan") if (d == 0) { abline(v = 0.5, h = 0, lty = "dashed") legend(0, 4.5, c("logit", "probit", "cloglog", "cauchit"), col = c("limegreen", "purple", "chocolate", "tan"), lwd = mylwd) } else abline(v = 0.5, lty = "dashed") } for (d in 0) { matplot(y, cbind(logit(y, deriv = d, inverse = TRUE), probit(y, deriv = d, inverse = TRUE)), las = 1, type = "n", col = "purple", xlab = "transformation", ylab = "p", main = if (d == 0) "Some inverse probability link functions" else "First derivative") lines(y, logit(y, deriv = d, inverse = TRUE), col = "limegreen") lines(y, probit(y, deriv = d, inverse = TRUE), col = "purple") lines(y, cloglog(y, deriv = d, inverse = TRUE), col = "chocolate") lines(y, cauchit(y, deriv = d, inverse = TRUE), col = "tan") if (d == 0) { abline(h = 0.5, v = 0, lty = "dashed") legend(-4, 1, c("logit", "probit", "cloglog", "cauchit"), col = c("limegreen", "purple", "chocolate", "tan"), lwd = mylwd) } } p <- seq(0.21, 0.59, by = 0.01) plot(p, extlogit(p, min = 0.2, max = 0.6), type = "l", col = "black", ylab = "transformation", xlim = c(0, 1), las = 1, main = "extlogit(p, min = 0.2, max = 0.6)") par(lwd = 1) } } \keyword{math} \keyword{models} \keyword{regression} %plot(y, logit(y, inverse = TRUE), type = "l", col = "limegreen", % xlab = "transformation", ylab = "p", % lwd = 2, las = 1, main = "Some inverse probability link functions") %lines(y, probit(y, inverse = TRUE), col = "purple", lwd = 2) %lines(y, cloglog(y, inverse = TRUE), col = "chocolate", lwd = 2) %abline(h = 0.5, v = 0, lty = "dashed") VGAM/man/zeta.Rd0000644000176200001440000001067713135276753013040 0ustar liggesusers\name{zeta} \alias{zeta} %- Also NEED an '\alias' for EACH other topic documented here. 
\title{ Riemann's Zeta Function } \description{ Computes Riemann's zeta function and its first two derivatives. It can also compute Hurwitz's zeta function. } \usage{ zeta(x, deriv = 0, shift = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ A complex-valued vector/matrix whose real values must be \eqn{\geq 1}{>= 1}; otherwise, \code{x} may be real. If \code{deriv} is 1 or 2 then \code{x} must be real and positive. } \item{deriv}{ An integer equalling 0 or 1 or 2, which is the order of the derivative. The default means the function itself (no derivative) is computed. } \item{shift}{ Positive and numeric, called \eqn{A} below. Allows for the Hurwitz zeta to be returned. The default corresponds to the Riemann formula. } } \details{ The (Riemann) formula for real \eqn{s} is \deqn{\sum_{n=1}^{\infty} 1 / n^s.}{% sum_{n=1}^Inf 1 / n^s.} While the usual definition involves an infinite series that converges when the real part of the argument is \eqn{> 1}, more efficient methods have been devised to compute the value. In particular, this function uses Euler-Maclaurin summation. Theoretically, the zeta function can be computed over the whole complex plane because of analytic continuation. The (Riemann) formula used here for analytic continuation is \deqn{\zeta(s) = 2^s \pi^{s-1} \sin(\pi s/2) \Gamma(1-s) \zeta(1-s).}{% zeta(s) = 2^s * pi^(s-1) * sin(pi*s/2) * gamma(1-s) * zeta(1-s).} This is actually one of several formulas, but this one was discovered by Riemann himself and is called the \emph{functional equation}. The Hurwitz zeta function for real \eqn{0 < s} is \deqn{\sum_{n=0}^{\infty} 1 / (A + n)^s.}{% sum_{n=0}^Inf 1 / (A + n)^s.} where \eqn{0 < A} is known here as the \code{shift}. Since \eqn{A=1} by default, this function will therefore return Riemann's zeta function by default. Currently derivatives are unavailable for the Hurwitz zeta function. } \section{Warning}{ This function has not been fully tested, especially the derivatives. In particular, analytic continuation does not work here for complex \code{x} with \code{Re(x)<1} because currently the \code{\link[base:Special]{gamma}} function does not handle complex arguments. } \value{ The default is a vector/matrix of computed values of Riemann's zeta function. If \code{shift} contains values not equal to 1, then this is Hurwitz's zeta function. % The derivative is attached as an attribute zz. } \references{ Riemann, B. (1859) Ueber die Anzahl der Primzahlen unter einer gegebenen Grosse. \emph{Monatsberichte der Berliner Akademie, November 1859}. Edwards, H. M. (1974) \emph{Riemann's Zeta Function}. Academic Press: New York. Markman, B. (1965) The Riemann zeta function. \emph{BIT}, \bold{5}, 138--141. Abramowitz, M. and Stegun, I. A. (1972) \emph{Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables}, New York: Dover Publications Inc. } \author{ T. W. Yee, with the help of Garry J. Tee. } \note{ Estimation of the parameter of the zeta distribution can be achieved with \code{\link{zetaff}}. } \seealso{ \code{\link{zetaff}}, \code{\link{oazeta}}, \code{\link{oizeta}}, \code{\link{otzeta}}, \code{\link{lerch}}, \code{\link[base:Special]{gamma}}.
} \examples{ zeta(2:10) \dontrun{ curve(zeta, -13, 0.8, xlim = c(-12, 10), ylim = c(-1, 4), col = "orange", las = 1, main = expression({zeta}(x))) curve(zeta, 1.2, 12, add = TRUE, col = "orange") abline(v = 0, h = c(0, 1), lty = "dashed", col = "gray") curve(zeta, -14, -0.4, col = "orange", main = expression({zeta}(x))) abline(v = 0, h = 0, lty = "dashed", col = "gray") # Close up plot x <- seq(0.04, 0.8, len = 100) # Plot of the first derivative plot(x, zeta(x, deriv = 1), type = "l", las = 1, col = "blue", xlim = c(0.04, 3), ylim = c(-6, 0), main = "zeta'(x)") x <- seq(1.2, 3, len = 100) lines(x, zeta(x, deriv = 1), col = "blue") abline(v = 0, h = 0, lty = "dashed", col = "gray") } zeta(2) - pi^2 / 6 # Should be 0 zeta(4) - pi^4 / 90 # Should be 0 zeta(6) - pi^6 / 945 # Should be 0 zeta(8) - pi^8 / 9450 # Should be 0 # zeta(0, deriv = 1) + 0.5 * log(2*pi) # Should be 0 } \keyword{math} % curve(zeta, -13, 0.8, xlim = c(-12, 10), ylim = c(-1, 4), col = "orange") % curve(zeta, 1.2, 12, add = TRUE, col = "orange") % abline(v = 0, h = c(0,1), lty = "dashed") VGAM/man/posbinomUC.Rd0000644000176200001440000001005013135276753014134 0ustar liggesusers\name{Posbinom} \alias{Posbinom} \alias{dposbinom} \alias{pposbinom} \alias{qposbinom} \alias{rposbinom} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive-Binomial Distribution } \description{ Density, distribution function, quantile function and random generation for the positive-binomial distribution. } \usage{ dposbinom(x, size, prob, log = FALSE) pposbinom(q, size, prob) qposbinom(p, size, prob) rposbinom(n, size, prob) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Fed into \code{\link[stats]{runif}}. } \item{size}{number of trials. It is the \eqn{N} symbol in the formula given in \code{\link{posbinomial}} and should be positive. } \item{prob}{probability of success on each trial. Should be in \eqn{(0,1)}. } % 20120407: % \item{log.p, lower.tail}{ % Arguments that are passed on to % \code{\link[stats:Binomial]{pbinom}} etc. % % } \item{log}{ See \code{\link[stats:Binomial]{dbinom}}. } } \details{ The positive-binomial distribution is a binomial distribution but with the probability of a zero being zero. The other probabilities are scaled to add to unity. The mean therefore is \deqn{\mu / (1-(1-\mu)^N)}{% mu / (1-(1-mu)^N)} where \eqn{\mu}{mu} is the argument \code{prob} above. As \eqn{\mu}{mu} increases, the positive-binomial and binomial distributions become more similar. Unlike similar functions for the binomial distribution, a zero value of \code{prob} is not permitted here. } \value{ \code{dposbinom} gives the density, \code{pposbinom} gives the distribution function, \code{qposbinom} gives the quantile function, and \code{rposbinom} generates random deviates. } %\references{ %None. %} \author{ T. W. Yee. } \note{ For \code{dposbinom()}, if arguments \code{size} or \code{prob} equal 0 then a \code{NaN} is returned. % 20120405; no longer true to a superior method: % For \code{rposbinom()}, the arguments of the function are fed into % \code{\link[stats:Binomial]{rbinom}} until \eqn{n} positive values % are obtained. This may take a long time if \code{prob} has values % close to 0. The family function \code{\link{posbinomial}} estimates the parameters by maximum likelihood estimation. 
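The rescaling described in \bold{Details} can be verified directly; a minimal sketch (the parameter values are arbitrary):
\preformatted{
size <- 10; prob <- 0.2; x <- 1:size
max(abs(dposbinom(x, size, prob) -
        dbinom(x, size, prob) / (1 - dbinom(0, size, prob))))  # Should be 0
}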
}
\seealso{
  \code{\link{posbinomial}},
  \code{\link{dposbern}},
  \code{\link{zabinomial}},
  \code{\link{zibinomial}},
  \code{\link[stats:Binomial]{rbinom}}.

}
\examples{
prob <- 0.2; size <- 10
table(y <- rposbinom(n = 1000, size, prob))
mean(y)  # Sample mean
size * prob / (1 - (1 - prob)^size)  # Population mean

(ii <- dposbinom(0:size, size, prob))
cumsum(ii) - pposbinom(0:size, size, prob)  # Should be 0s
table(rposbinom(100, size, prob))

table(qposbinom(runif(1000), size, prob))
round(dposbinom(1:10, size, prob) * 1000)  # Should be similar

\dontrun{ barplot(rbind(dposbinom(x = 0:size, size, prob),
                        dbinom(x = 0:size, size, prob)),
        beside = TRUE, col = c("blue", "green"),
        main = paste("Positive-binomial(", size, ",", prob, ") (blue) vs",
                     " Binomial(", size, ",", prob, ") (green)", sep = ""),
        names.arg = as.character(0:size), las = 1) }

# Simulated data example
nn <- 1000; sizeval1 <- 10; sizeval2 <- 20
pdata <- data.frame(x2 = seq(0, 1, length = nn))
pdata <- transform(pdata, prob1  = logit(-2 + 2 * x2, inverse = TRUE),
                          prob2  = logit(-1 + 1 * x2, inverse = TRUE),
                          sizev1 = rep(sizeval1, len = nn),
                          sizev2 = rep(sizeval2, len = nn))
pdata <- transform(pdata, y1 = rposbinom(nn, size = sizev1, prob = prob1),
                          y2 = rposbinom(nn, size = sizev2, prob = prob2))
with(pdata, table(y1))
with(pdata, table(y2))
# Multiple responses
fit2 <- vglm(cbind(y1, y2) ~ x2, posbinomial(multiple.responses = TRUE),
             trace = TRUE, data = pdata, weight = cbind(sizev1, sizev2))
coef(fit2, matrix = TRUE)
}
\keyword{distribution}
VGAM/man/loglog.Rd0000644000176200001440000000417713135276753013346 0ustar liggesusers\name{loglog}
\alias{loglog}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Log-log Link Function }
\description{
  Computes the log-log transformation, including its inverse and the
  first two derivatives.

}
\usage{
loglog(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
       short = TRUE, tag = FALSE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{theta}{
  Numeric or character.
  See below for further details.

  }
  \item{bvalue}{
  Values of \code{theta} which are less than or equal to 1 can be
  replaced by \code{bvalue} before computing the link function value.
  The component name \code{bvalue} stands for ``boundary value''.
  See \code{\link{Links}} for more information.

  }
  \item{inverse, deriv, short, tag}{
  Details at \code{\link{Links}}.

  }
}
\details{
  The log-log link function is commonly used for parameters that are
  greater than unity.
  Numerical values of \code{theta} close to 1 or out of range result in
  \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}.

}
\value{
  For \code{deriv = 0}, the log-log of \code{theta}, i.e.,
  \code{log(log(theta))} when \code{inverse = FALSE}, and if
  \code{inverse = TRUE} then \code{exp(exp(theta))}.

  For \code{deriv = 1}, the function returns
  \emph{d} \code{theta} / \emph{d} \code{eta} as a function of
  \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE}
  then it returns the reciprocal.

  Here, all logarithms are natural logarithms, i.e., to base \emph{e}.

}
\references{
  McCullagh, P. and Nelder, J. A. (1989)
  \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall.

}
\author{ Thomas W. Yee }

\note{
  Numerical instability may occur when \code{theta} is close to 1
  unless \code{bvalue} is used.

}

\seealso{
    \code{\link{Links}},
    \code{\link{loge}},
    \code{\link{logoff}}.
}
\examples{
x <- seq(0.8, 1.5, by = 0.1)
loglog(x)  # Has NAs
loglog(x, bvalue = 1.0 + .Machine$double.eps)  # Has no NAs

x <- seq(1.01, 10, len = 100)
loglog(x)
max(abs(loglog(loglog(x), inverse = TRUE) - x))  # Should be 0
}
\keyword{math}
\keyword{models}
\keyword{regression}
VGAM/man/mccullagh89.Rd0000644000176200001440000000657513135276753014213 0ustar liggesusers\name{mccullagh89}
\alias{mccullagh89}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{McCullagh (1989) Distribution Family Function}
\description{
  Estimates the two parameters of the McCullagh (1989) distribution by
  maximum likelihood estimation.

}
\usage{
mccullagh89(ltheta = "rhobit", lnu = logoff(offset = 0.5),
            itheta = NULL, inu = NULL, zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{ltheta, lnu}{
  Link functions for the \eqn{\theta}{theta} and \eqn{\nu}{nu}
  parameters.
  See \code{\link{Links}} for general information.

  }
  \item{itheta, inu}{
  Numeric.
  Optional initial values for \eqn{\theta}{theta} and \eqn{\nu}{nu}.
  The default is to internally compute them.

  }
  \item{zero}{
  See \code{\link{CommonVGAMffArguments}} for information.

  }
}
\details{
  The McCullagh (1989) distribution has density function
 \deqn{f(y;\theta,\nu) =
\frac{ \{ 1-y^2 \}^{\nu-\frac12}}
{ (1-2\theta y + \theta^2)^{\nu} \mbox{Beta}(\nu+\frac12, \frac12)}}{%
f(y;theta,nu) =
(1-y^2)^(nu-0.5) /
[ (1 - 2*theta*y+theta^2)^nu * Beta(nu+0.5, 0.5)]}
  where \eqn{-1 < y < 1} and \eqn{-1 < \theta < 1}{-1 < theta < 1}.
  This distribution is equation (1) in that paper.
  The parameter \eqn{\nu}{nu} satisfies \eqn{\nu > -1/2}{nu > -1/2},
  therefore the default is to use a log-offset link with offset equal
  to 0.5, i.e.,
  \eqn{\eta_2=\log(\nu+0.5)}{eta_2=log(nu+0.5)}.
  The mean of \eqn{Y} is
  \eqn{\nu \theta / (1+\nu)}{nu*theta/(1+nu)},
  and these are returned as the fitted values.

  This distribution is related to the Leipnik distribution (see Johnson
  et al. (1995)), is related to ultraspherical functions, and under
  certain conditions, arises as exit distributions for Brownian motion.
  Fisher scoring is implemented here and it uses a diagonal matrix so
  the parameters are globally orthogonal in the Fisher information
  sense.
  McCullagh (1989) also states that, to some extent,
  \eqn{\theta}{theta} and \eqn{\nu}{nu} have the properties of a
  location parameter and a precision parameter, respectively.

}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}},
  \code{\link{rrvglm}}
  and \code{\link{vgam}}.

}
\references{
McCullagh, P. (1989)
Some statistical properties of a family of continuous univariate
distributions.
\emph{Journal of the American Statistical Association},
\bold{84}, 125--129.

Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1995)
\emph{Continuous Univariate Distributions},
2nd edition, Volume 2, New York: Wiley.
(pages 612--617).

}
\author{ T. W. Yee }
\note{
  Convergence may be slow or fail unless the initial values are
  reasonably close. If a failure occurs, try assigning the argument
  \code{inu} and/or \code{itheta}. Figure 1 of McCullagh (1989) gives
  a broad range of densities for different values of
  \eqn{\theta}{theta} and \eqn{\nu}{nu}, and this could be consulted
  for obtaining reasonable initial values if all else fails.

}
\seealso{
  \code{\link{leipnik}},
  \code{\link{rhobit}},
  \code{\link{logoff}}.
}
%\section{Warning }{
%}
\examples{
# Limit as theta = 0, nu = Inf:
mdata <- data.frame(y = rnorm(n = 1000, sd = 0.2))
fit <- vglm(y ~ 1, mccullagh89, data = mdata, trace = TRUE)
head(fitted(fit))
with(mdata, mean(y))
summary(fit)
coef(fit, matrix = TRUE)
Coef(fit)
}
\keyword{models}
\keyword{regression}
VGAM/man/quasipoissonff.Rd0000644000176200001440000000774013135276753015141 0ustar liggesusers\name{quasipoissonff}
%\alias{quasipoisson}
\alias{quasipoissonff}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Quasi-Poisson Family Function }
\description{
  Fits a generalized linear model to a Poisson response, where the
  dispersion parameter is unknown.

}
\usage{
quasipoissonff(link = "loge", onedpar = FALSE, parallel = FALSE,
               zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{link}{
  Link function. See \code{\link{Links}} for more choices.

  }
  \item{onedpar}{
  One dispersion parameter? If the response is a matrix, then a
  separate dispersion parameter will be computed for each response
  (column), by default. Setting \code{onedpar = TRUE} will pool them
  so that there is only one dispersion parameter to be estimated.

  }
  \item{parallel}{
  A logical or formula. Used only if the response is a matrix.

  }
  \item{zero}{
  Can be an integer-valued vector specifying which linear/additive
  predictors are modelled as intercepts only. The values must be from
  the set \{1,2,\ldots,\eqn{M}\}, where \eqn{M} is the number of
  columns of the matrix response.
  See \code{\link{CommonVGAMffArguments}} for more information.

  }
}
\details{
  \eqn{M} defined above is the number of linear/additive predictors.

  If the dispersion parameter is unknown, then the resulting estimate
  is not fully a maximum likelihood estimate.

  A dispersion parameter that is less/greater than unity corresponds
  to under-/over-dispersion relative to the Poisson model.
  Over-dispersion is more common in practice.

  When fitting a Quadratic RR-VGLM, the response is a matrix of, say,
  \eqn{M} columns (e.g., one column per species). Then there will be
  \eqn{M} dispersion parameters (one per column of the response
  matrix).

}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}},
  \code{\link{vgam}},
  \code{\link{rrvglm}},
  \code{\link{cqo}},
  and \code{\link{cao}}.

}
\references{
  McCullagh, P. and Nelder, J. A. (1989)
  \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall.

}
\author{ Thomas W. Yee }

\note{
  This function will handle a matrix response automatically.

  The call \code{poissonff(dispersion = 0, ...)} is equivalent to
  \code{quasipoissonff(...)}. The latter was written so that R users
  of \code{quasipoisson()} would only need to add a ``\code{ff}'' to
  the end of the family function name.

  Regardless of whether the dispersion parameter is to be estimated or
  not, its value can be seen from the output from the \code{summary()}
  of the object.

% With the introduction of name spaces for the \pkg{VGAM} package,
% \code{"ff"} can be dropped for this family function.

}
\section{Warning }{
  See the warning in \code{\link{quasibinomialff}}.

}

\seealso{
  \code{\link{poissonff}},
  \code{\link{negbinomial}},
  \code{\link{loge}},
  \code{\link{rrvglm}},
  \code{\link{cqo}},
  \code{\link{cao}},
  \code{\link{binomialff}},
  \code{\link{quasibinomialff}},
  \code{\link[stats]{quasipoisson}}.
}
\examples{
quasipoissonff()

\dontrun{n <- 200; p <- 5; S <- 5
mydata <- rcqo(n, p, S, fam = "poisson", eq.tol = FALSE)
myform <- attr(mydata, "formula")
p1 <- cqo(myform, fam = quasipoissonff, eq.tol = FALSE, data = mydata)
sort(deviance(p1, history = TRUE))  # A history of all the iterations
lvplot(p1, y = TRUE, lcol = 1:S, pch = 1:S, pcol = 1:S)
summary(p1)  # The dispersion parameters are estimated
}}
\keyword{models}
\keyword{regression}

%n = 100
%x2 = rnorm(n)
%x3 = rnorm(n)
%x4 = rnorm(n)
%lv1 = 0 + x3 - 2*x4
%lambda1 = exp(3 - 0.5 * (lv1-0)^2)
%lambda2 = exp(2 - 0.5 * (lv1-1)^2)
%lambda3 = exp(2 - 0.5 * ((lv1+4)/2)^2)
%y1 = rpois(n, lambda1)
%y2 = rpois(n, lambda2)
%y3 = rpois(n, lambda3)
%p1 = cqo(cbind(y1,y2,y3) ~ x2 + x3 + x4, quasipoissonff)
%lvplot(p1, y=TRUE, lcol=1:3, pch=1:3, pcol=1:3)
%summary(p1)  # Three dispersion parameters are estimated
VGAM/man/bortUC.Rd0000644000176200001440000000315213135276753013261 0ustar liggesusers\name{Bort}
\alias{Bort}
\alias{dbort}
%\alias{pbort}
%\alias{qbort}
\alias{rbort}
\title{The Borel-Tanner Distribution}
\description{
  Density and random generation for the Borel-Tanner distribution.

% distribution function, quantile function

}
\usage{
dbort(x, Qsize = 1, a = 0.5, log = FALSE)
rbort(n, Qsize = 1, a = 0.5)
}
%pbort(q, Qsize = 1, a = 0.5)
%qbort(p, Qsize = 1, a = 0.5)
\arguments{
  \item{x}{vector of quantiles.}
% \item{p}{vector of probabilities.}
  \item{n}{number of observations.
  Must be a positive integer of length 1.}
  \item{Qsize, a}{
  See \code{\link{borel.tanner}}.

  }
  \item{log}{
  Logical.
  If \code{log = TRUE} then the logarithm of the density is returned.

  }
}
\value{
  \code{dbort} gives the density,
  \code{rbort} generates random deviates.

% \code{pbort} gives the distribution function,
% \code{qbort} gives the quantile function, and

}
\author{ T. W. Yee }
\details{
  See \code{\link{borel.tanner}}, the \pkg{VGAM} family function for
  estimating the parameter, for the formula of the probability density
  function and other details.

}
\section{Warning }{
  Looping is used for \code{\link{rbort}}, therefore values of
  \code{a} close to 1 will result in long (or infinite!) computational
  times. The default value of \code{a} is subjective.

}
\seealso{
  \code{\link{borel.tanner}}.

}
\examples{
\dontrun{ qsize <- 1; a <- 0.5; x <- qsize:(qsize+10)
plot(x, dbort(x, qsize, a), type = "h", las = 1, col = "blue",
     ylab = paste("fbort(qsize=", qsize, ", a=", a, ")"),
     log = "y", main = "Borel-Tanner density function") }
}
\keyword{distribution}
VGAM/man/paralogisticUC.Rd0000644000176200001440000000410113135276753014767 0ustar liggesusers\name{Paralogistic}
\alias{Paralogistic}
\alias{dparalogistic}
\alias{pparalogistic}
\alias{qparalogistic}
\alias{rparalogistic}
\title{The Paralogistic Distribution}
\description{
  Density, distribution function, quantile function and random
  generation for the paralogistic distribution with shape parameter
  \code{shape1.a} and scale parameter \code{scale}.

}
\usage{
dparalogistic(x, scale = 1, shape1.a, log = FALSE)
pparalogistic(q, scale = 1, shape1.a,
              lower.tail = TRUE, log.p = FALSE)
qparalogistic(p, scale = 1, shape1.a,
              lower.tail = TRUE, log.p = FALSE)
rparalogistic(n, scale = 1, shape1.a)
}
\arguments{
  \item{x, q}{vector of quantiles.}
  \item{p}{vector of probabilities.}
  \item{n}{number of observations. If \code{length(n) > 1}, the length
    is taken to be the number required.}
  \item{shape1.a}{shape parameter.}
  \item{scale}{scale parameter.}
  \item{log}{
  Logical.
  If \code{log = TRUE} then the logarithm of the density is returned.
} \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dparalogistic} gives the density, \code{pparalogistic} gives the distribution function, \code{qparalogistic} gives the quantile function, and \code{rparalogistic} generates random deviates. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{paralogistic}}, which is the \pkg{VGAM} family function for estimating the parameters by maximum likelihood estimation. } \note{ The paralogistic distribution is a special case of the 4-parameter generalized beta II distribution. } \seealso{ \code{\link{paralogistic}}, \code{\link{genbetaII}}. } \examples{ pdata <- data.frame(y = rparalogistic(n = 3000, scale = exp(1), exp(2))) fit <- vglm(y ~ 1, paralogistic(lss = FALSE, ishape1.a = 4.1), data = pdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) } \keyword{distribution} VGAM/man/oazeta.Rd0000644000176200001440000000644213135276753013353 0ustar liggesusers\name{oazeta} \alias{oazeta} %\alias{oazetaff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ One-Altered Zeta Distribution } \description{ Fits a one-altered zeta distribution based on a conditional model involving a Bernoulli distribution and a 1-truncated zeta distribution. } \usage{ oazeta(lpobs1 = "logit", lshape = "loge", type.fitted = c("mean", "shape", "pobs1", "onempobs1"), gshape = exp((-4:3)/4), ishape = NULL, ipobs1 = NULL, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lpobs1}{ Link function for the parameter \eqn{p_1}{pobs1} or \eqn{\phi}{phi}, called \code{pobs1} or \code{phi} here. See \code{\link{Links}} for more choices. } \item{lshape}{ See \code{\link{zeta}} for details. } \item{type.fitted}{ See \code{\link{CommonVGAMffArguments}} and \code{\link{fittedvlm}} for information. } \item{gshape, ishape, ipobs1, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The response \eqn{Y} is one with probability \eqn{p_1}{pobs1}, or \eqn{Y} has a 1-truncated zeta distribution with probability \eqn{1-p_1}{1-pobs1}. Thus \eqn{0 < p_1 < 1}{0 < pobs1 < 1}, which is modelled as a function of the covariates. The one-altered zeta distribution differs from the one-inflated zeta distribution in that the former has ones coming from one source, whereas the latter has ones coming from the zeta distribution too. The one-inflated zeta distribution is implemented in the \pkg{VGAM} package. Some people call the one-altered zeta a \emph{hurdle} model. The input can be a matrix (multiple responses). By default, the two linear/additive predictors of \code{oazeta} are \eqn{(logit(\phi), log(shape))^T}{(logit(phi), log(shape))^T}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. The \code{fitted.values} slot of the fitted object, which should be extracted by the generic function \code{fitted}, returns the mean \eqn{\mu}{mu} (default) which is given by \deqn{\mu = \phi + (1-\phi) A}{% mu = phi + (1- phi) A} where \eqn{A} is the mean of the one-truncated zeta distribution. If \code{type.fitted = "pobs1"} then \eqn{p_1}{pobs1} is returned. } %\references{ % % %} %\section{Warning }{ %} \author{ T. W. 
Yee }
\note{
  This family function effectively combines \code{\link{binomialff}}
  and \code{\link{otzeta}} into one family function.

}

\seealso{
  \code{\link{Oazeta}},
  \code{\link{zetaff}},
  \code{\link{oizeta}},
  \code{\link{otzeta}},
  \code{\link{CommonVGAMffArguments}},
  \code{\link{simulate.vlm}}.

}

\examples{
odata <- data.frame(x2 = runif(nn <- 1000))
odata <- transform(odata, pobs1 = logit(-1 + 2*x2, inverse = TRUE),
                          shape = loge( 1 + 1*x2, inverse = TRUE))
odata <- transform(odata, y1 = roazeta(nn, shape = shape, pobs1 = pobs1),
                          y2 = roazeta(nn, shape = shape, pobs1 = pobs1))
with(odata, table(y1))

ofit <- vglm(cbind(y1, y2) ~ x2, oazeta, data = odata, trace = TRUE)
coef(ofit, matrix = TRUE)
head(fitted(ofit))
head(predict(ofit))
summary(ofit)
}
\keyword{models}
\keyword{regression}
VGAM/man/logLikvlm.Rd0000644000176200001440000000542313135276753014026 0ustar liggesusers\name{logLik.vlm}
\alias{logLik.vlm}
%\alias{AICvglm}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{Extract Log-likelihood for VGLMs/VGAMs/etc. }
\description{
  Calculates the log-likelihood value or the element-by-element
  contributions of the log-likelihood.

}
\usage{
\method{logLik}{vlm}(object, summation = TRUE, \dots)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{object}{
  Some \pkg{VGAM} object, for example, having class
  \code{\link{vglmff-class}}.

  }
  \item{summation}{
  Logical, apply \code{\link[base]{sum}}?
  If \code{FALSE} then an \eqn{n}-vector or \eqn{n}-row matrix (with
  the number of responses as the number of columns) is returned.
  Each element is the contribution to the log-likelihood.

  }
  \item{\dots}{
  Currently unused.
  In the future:
  other possible arguments fed into \code{logLik} in order to compute
  the log-likelihood.

  }
}
\details{
  By default, this function returns the log-likelihood of the object.
  Thus this code relies on the log-likelihood being defined, and
  computed, for the object.

}
\value{
  Returns the log-likelihood of the object.
  If \code{summation = FALSE} then an \eqn{n}-vector or \eqn{n}-row
  matrix (with the number of responses as the number of columns) is
  returned.
  Each element is the contribution to the log-likelihood.
  The prior weights are assimilated within the answer.

}
\author{T. W. Yee. }
\note{
  Not all \pkg{VGAM} family functions currently have the
  \code{summation} argument implemented.

}
%\references{
%
%}
\section{Warning }{
  Not all \pkg{VGAM} family functions have had the \code{summation}
  checked.

}

\seealso{
  VGLMs are described in \code{\link{vglm-class}};
  VGAMs are described in \code{\link{vgam-class}};
  RR-VGLMs are described in \code{\link{rrvglm-class}};
  \code{\link[stats]{AIC}}.
}
\examples{
zdata <- data.frame(x2 = runif(nn <- 50))
zdata <- transform(zdata, Ps01    = logit(-0.5      , inverse = TRUE),
                          Ps02    = logit( 0.5      , inverse = TRUE),
                          lambda1 = loge(-0.5 + 2*x2, inverse = TRUE),
                          lambda2 = loge( 0.5 + 2*x2, inverse = TRUE))
zdata <- transform(zdata, y1 = rzipois(nn, lambda = lambda1, pstr0 = Ps01),
                          y2 = rzipois(nn, lambda = lambda2, pstr0 = Ps02))
with(zdata, table(y1))  # Eyeball the data
with(zdata, table(y2))

fit2 <- vglm(cbind(y1, y2) ~ x2, zipoisson(zero = NULL), data = zdata)
logLik(fit2)  # Summed over the two responses
sum(logLik(fit2, sum = FALSE))  # For checking purposes
(ll.matrix <- logLik(fit2, sum = FALSE))  # nn x 2 matrix
colSums(ll.matrix)  # log-likelihood for each response
}
\keyword{models}
\keyword{regression}

% logLik.vlm(object, summation = TRUE, \dots)
VGAM/man/explink.Rd0000644000176200001440000000441113135276753013534 0ustar liggesusers\name{explink}
\alias{explink}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Exponential Link Function }
\description{
  Computes the exponential transformation, including its inverse and
  the first two derivatives.

}
\usage{
explink(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
        short = TRUE, tag = FALSE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{theta}{
  Numeric or character.
  See below for further details.

  }
% \item{earg}{
% Optional list.
% See \code{\link{Links}} for general information about \code{earg}.
% }
  \item{bvalue}{
  See \code{\link{cloglog}}.

  }
  \item{inverse, deriv, short, tag}{
  Details at \code{\link{Links}}.

  }
}
\details{
  The exponential link function is potentially suitable for parameters
  that are positive.
  Numerical values of \code{theta} close to negative or positive
  infinity may result in \code{0}, \code{Inf}, \code{-Inf}, \code{NA}
  or \code{NaN}.

}
\value{
  For \code{explink} with \code{deriv = 0}, the exponential of
  \code{theta}, i.e., \code{exp(theta)} when \code{inverse = FALSE}.
  If \code{inverse = TRUE} then \code{log(theta)} is returned;
  if \code{theta} is not positive then it will return \code{NaN}.

  For \code{deriv = 1}, the function returns
  \emph{d} \code{eta} / \emph{d} \code{theta} as a function of
  \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE}
  then it returns the reciprocal.

  Here, all logarithms are natural logarithms, i.e., to base \emph{e}.

}
%\references{
%  McCullagh, P. and Nelder, J. A. (1989)
%  \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall.
%
%}
\author{ Thomas W. Yee }

\note{
  This function has particular use for computing quasi-variances when
  used with \code{\link{rcim}} and \code{\link{uninormal}}.

  Numerical instability may occur when \code{theta} is close to
  negative or positive infinity.
  One way of overcoming this (one day) is to use \code{bvalue}.

}
\seealso{
    \code{\link{Links}},
    \code{\link{loge}},
    \code{\link{rcim}},
    \code{\link{Qvar}},
    \code{\link{uninormal}}.

}
\examples{
theta <- rnorm(30)
explink(theta)
max(abs(explink(explink(theta), inverse = TRUE) - theta))  # Should be 0
}
\keyword{math}
\keyword{models}
\keyword{regression}
VGAM/man/skellamUC.Rd0000644000176200001440000000311013135276753013735 0ustar liggesusers\name{Skellam}
\alias{Skellam}
\alias{dskellam}
%\alias{pskellam}
%\alias{qskellam}
\alias{rskellam}
\title{The Skellam Distribution}
\description{
  Density and random generation for the Skellam distribution.
% distribution function, quantile function

}
\usage{
dskellam(x, mu1, mu2, log = FALSE)
rskellam(n, mu1, mu2)
}
%pskellam(q, mu1, mu2)
%qskellam(p, mu1, mu2)
\arguments{
  \item{x}{vector of quantiles.}
% \item{p}{vector of probabilities.}
  \item{n}{number of observations.
  Same as \code{\link[stats:Uniform]{runif}}.

  }
  \item{mu1, mu2}{
  See \code{\link{skellam}}.

  }
  \item{log}{
  Logical; if TRUE, the logarithm is returned.

  }
}
\value{
  \code{dskellam} gives the density, and
  \code{rskellam} generates random deviates.

% \code{pskellam} gives the distribution function,
% \code{qskellam} gives the quantile function, and

}
%\author{ T. W. Yee }
\details{
  See \code{\link{skellam}}, the \pkg{VGAM} family function for
  estimating the parameters, for the formula of the probability density
  function and other details.

}
\section{Warning }{
  Numerical problems may occur for data if \eqn{\mu_1}{mu1} and/or
  \eqn{\mu_2}{mu2} are large.
  The normal approximation for this case has not been implemented yet.

}
\seealso{
  \code{\link{skellam}},
  \code{\link[stats:Poisson]{dpois}}.

}
\examples{
\dontrun{ mu1 <- 1; mu2 <- 2; x <- (-7):7
plot(x, dskellam(x, mu1, mu2), type = "h", las = 1, col = "blue",
     main = paste("Density of Skellam distribution with mu1 = ", mu1,
                  " and mu2 = ", mu2, sep = "")) }
}
\keyword{distribution}
VGAM/man/zageometric.Rd0000644000176200001440000001266613135276753014376 0ustar liggesusers\name{zageometric}
\alias{zageometric}
\alias{zageometricff}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Zero-Altered Geometric Distribution }
\description{
  Fits a zero-altered geometric distribution based on a conditional
  model involving a Bernoulli distribution and a positive-geometric
  distribution.

}
\usage{
zageometric(lpobs0 = "logit", lprob = "logit",
            type.fitted = c("mean", "prob", "pobs0", "onempobs0"),
            imethod = 1, ipobs0 = NULL, iprob = NULL, zero = NULL)
zageometricff(lprob = "logit", lonempobs0 = "logit",
              type.fitted = c("mean", "prob", "pobs0", "onempobs0"),
              imethod = 1, iprob = NULL, ionempobs0 = NULL,
              zero = "onempobs0")
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{lpobs0}{
    Link function for the parameter \eqn{p_0}{pobs0} or
    \eqn{\phi}{phi}, called \code{pobs0} or \code{phi} here.
    See \code{\link{Links}} for more choices.

  }
  \item{lprob}{
    Parameter link function applied to the probability of success,
    called \code{prob} or \eqn{p}.
    See \code{\link{Links}} for more choices.

  }
  \item{type.fitted}{
    See \code{\link{CommonVGAMffArguments}} and
    \code{\link{fittedvlm}} for information.

  }
% \item{epobs0, eprob}{
% List. Extra argument for the respective links.
% See \code{earg} in \code{\link{Links}} for general information.
% epobs0 = list(), eprob = list(),
% }
  \item{ipobs0, iprob}{
    Optional initial values for the parameters.
    If given, they must be in range.
    For multi-column responses, these are recycled sideways.

  }
  \item{lonempobs0, ionempobs0}{
    Corresponding argument for the other parameterization.
    See details below.

  }
  \item{zero, imethod}{
    See \code{\link{CommonVGAMffArguments}}.

  }
}
\details{
  The response \eqn{Y} is zero with probability \eqn{p_0}{pobs0}, or
  \eqn{Y} has a positive-geometric distribution with probability
  \eqn{1-p_0}{1-pobs0}. Thus \eqn{0 < p_0 < 1}{0 < pobs0 < 1}, which
  is modelled as a function of the covariates. The zero-altered
  geometric distribution differs from the zero-inflated geometric
  distribution in that the former has zeros coming from one source,
  whereas the latter has zeros coming from the geometric distribution
  too.
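  As a small numerical sketch of this conditional structure (an
  informal check only, not part of the formal documentation; it
  assumes \code{\link{dzageom}} and
  \code{\link[stats:Geometric]{dgeom}}), the probability of a zero is
  \code{pobs0} itself, and the positive probabilities are rescaled
  geometric probabilities:

\preformatted{
pobs0 <- 0.3; prob <- 0.4; y <- 1:4
dzageom(0, prob = prob, pobs0 = pobs0)  # Equals pobs0
dzageom(y, prob = prob, pobs0 = pobs0) -
  (1 - pobs0) * dgeom(y, prob) / (1 - dgeom(0, prob))  # Should be all 0s
}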
The zero-inflated geometric distribution is implemented in the \pkg{VGAM} package. Some people call the zero-altered geometric a \emph{hurdle} model. The input can be a matrix (multiple responses). By default, the two linear/additive predictors of \code{zageometric} are \eqn{(logit(\phi), logit(p))^T}{(logit(phi), logit(prob))^T}. The \pkg{VGAM} family function \code{zageometricff()} has a few changes compared to \code{zageometric()}. These are: (i) the order of the linear/additive predictors is switched so the geometric probability comes first; (ii) argument \code{onempobs0} is now 1 minus the probability of an observed 0, i.e., the probability of the positive geometric distribution, i.e., \code{onempobs0} is \code{1-pobs0}; (iii) argument \code{zero} has a new default so that the \code{pobs0} is intercept-only by default. Now \code{zageometricff()} is generally recommended over \code{zageometric()}. Both functions implement Fisher scoring and can handle multiple responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. The \code{fitted.values} slot of the fitted object, which should be extracted by the generic function \code{fitted}, returns the mean \eqn{\mu}{mu} (default) which is given by \deqn{\mu = (1-\phi) / p.}{% mu = (1- phi) / p.} If \code{type.fitted = "pobs0"} then \eqn{p_0}{pobs0} is returned. } %\references{ % % %} \section{Warning }{ Convergence for this \pkg{VGAM} family function seems to depend quite strongly on providing good initial values. Inference obtained from \code{summary.vglm} and \code{summary.vgam} may or may not be correct. In particular, the p-values, standard errors and degrees of freedom may need adjustment. Use simulation on artificial data to check that these are reasonable. } \author{ T. W. Yee } \note{ Note this family function allows \eqn{p_0}{pobs0} to be modelled as functions of the covariates. It is a conditional model, not a mixture model. This family function effectively combines \code{\link{binomialff}} and \code{posgeometric()} and \code{\link{geometric}} into one family function. However, \code{posgeometric()} is not written because it is trivially related to \code{\link{geometric}}. } \seealso{ \code{\link{dzageom}}, \code{\link{geometric}}, \code{\link{zigeometric}}, \code{\link[stats:Geometric]{dgeom}}, \code{\link{CommonVGAMffArguments}}, \code{\link{simulate.vlm}}. } % \code{\link{posgeometric}}, \examples{ zdata <- data.frame(x2 = runif(nn <- 1000)) zdata <- transform(zdata, pobs0 = logit(-1 + 2*x2, inverse = TRUE), prob = logit(-2 + 3*x2, inverse = TRUE)) zdata <- transform(zdata, y1 = rzageom(nn, prob = prob, pobs0 = pobs0), y2 = rzageom(nn, prob = prob, pobs0 = pobs0)) with(zdata, table(y1)) fit <- vglm(cbind(y1, y2) ~ x2, zageometric, data = zdata, trace = TRUE) coef(fit, matrix = TRUE) head(fitted(fit)) head(predict(fit)) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/makehamUC.Rd0000644000176200001440000000534013135276753013717 0ustar liggesusers\name{Makeham} \alias{Makeham} \alias{dmakeham} \alias{pmakeham} \alias{qmakeham} \alias{rmakeham} \title{The Makeham Distribution} \description{ Density, cumulative distribution function, quantile function and random generation for the Makeham distribution. 
}
\usage{
dmakeham(x, scale = 1, shape, epsilon = 0, log = FALSE)
pmakeham(q, scale = 1, shape, epsilon = 0, lower.tail = TRUE,
         log.p = FALSE)
qmakeham(p, scale = 1, shape, epsilon = 0, lower.tail = TRUE,
         log.p = FALSE)
rmakeham(n, scale = 1, shape, epsilon = 0)
}
\arguments{
  \item{x, q}{vector of quantiles.}
  \item{p}{vector of probabilities.}
  \item{n}{number of observations.
  Same as in \code{\link[stats]{runif}}.

  }
  \item{log}{
  Logical.
  If \code{log = TRUE} then the logarithm of the density is returned.

  }
  \item{lower.tail, log.p}{
  Same meaning as in \code{\link[stats:Normal]{pnorm}} or
  \code{\link[stats:Normal]{qnorm}}.

  }
  \item{scale, shape}{positive scale and shape parameters. }
  \item{epsilon}{another parameter. Must be non-negative. See below. }

}
\value{
  \code{dmakeham} gives the density,
  \code{pmakeham} gives the cumulative distribution function,
  \code{qmakeham} gives the quantile function, and
  \code{rmakeham} generates random deviates.

}
\author{ T. W. Yee and Kai Huang }
\details{
  See \code{\link{makeham}} for details.
  The default value of \code{epsilon = 0} corresponds to the Gompertz
  distribution.
  The function \code{\link{pmakeham}} uses \code{\link{lambertW}}.

}
\references{
Jodra, P. (2009)
A closed-form expression for the quantile function of the
Gompertz-Makeham distribution.
\emph{Mathematics and Computers in Simulation},
\bold{79}, 3069--3075.

}
%\note{
%
%}
\seealso{
  \code{\link{makeham}},
  \code{\link{lambertW}}.

}
\examples{
probs <- seq(0.01, 0.99, by = 0.01)
Shape <- exp(-1); Scale <- exp(1); Epsilon <- exp(-1)
max(abs(pmakeham(qmakeham(p = probs, sca = Scale, Shape, eps = Epsilon),
                 sca = Scale, Shape, eps = Epsilon) - probs))  # Should be 0

\dontrun{ x <- seq(-0.1, 2.0, by = 0.01);
plot(x, dmakeham(x, sca = Scale, Shape, eps = Epsilon), type = "l",
     main = "Blue is density, orange is cumulative distribution function",
     sub = "Purple lines are the 10,20,...,90 percentiles",
     col = "blue", las = 1, ylab = "")
abline(h = 0, col = "blue", lty = 2)
lines(x, pmakeham(x, sca = Scale, Shape, eps = Epsilon), col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qmakeham(probs, sca = Scale, Shape, eps = Epsilon)
lines(Q, dmakeham(Q, sca = Scale, Shape, eps = Epsilon),
      col = "purple", lty = 3, type = "h")
pmakeham(Q, sca = Scale, Shape, eps = Epsilon) - probs  # Should be all zero
abline(h = probs, col = "purple", lty = 3) }
}
\keyword{distribution}
VGAM/man/coefvlm.Rd0000644000176200001440000000441213135276753013516 0ustar liggesusers\name{coefvlm}
\alias{coefvlm}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Extract Model Coefficients }
\description{
  Extracts the estimated coefficients from VLM objects such as VGLMs.

}
\usage{
coefvlm(object, matrix.out = FALSE, label = TRUE, colon = FALSE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{object}{
  An object for which the extraction of coefficients is meaningful.
  This will usually be a \code{\link{vglm}} object.

  }
  \item{matrix.out}{
  Logical. If \code{TRUE} then a matrix is returned.
  The explanatory variables are the rows.
  The linear/additive predictors are the columns.
  The constraint matrices are used to compute this matrix.

  }
  \item{label}{
  Logical. If \code{FALSE} then the \code{names} of the vector of
  coefficients are set to \code{NULL}.

  }
  \item{colon}{
  Logical. Explanatory variables which appear in more than one
  linear/additive predictor are labelled with a colon, e.g.,
  \code{age:1}, \code{age:2}.
However, if it only appears in one linear/additive predictor then the \code{:1} is omitted by default. Then setting \code{colon = TRUE} will add the \code{:1}. } } \details{ This function works in a similar way to applying \code{coef()} to a \code{\link[stats]{lm}} or \code{\link[stats]{glm}} object. However, for VGLMs, there are more options available. } \value{ A vector usually. A matrix if \code{matrix.out = TRUE}. } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. } \author{ Thomas W. Yee } %\note{ %} %\section{Warning }{ %} \seealso{ \code{\link{vglm}}, \code{\link{coefvgam}}, \code{\link[stats]{coef}}. % \code{\link{coef-method}}, } \examples{ zdata <- data.frame(x2 = runif(nn <- 200)) zdata <- transform(zdata, pstr0 = logit(-0.5 + 1*x2, inverse = TRUE), lambda = loge( 0.5 + 2*x2, inverse = TRUE)) zdata <- transform(zdata, y2 = rzipois(nn, lambda, pstr0 = pstr0)) fit2 <- vglm(y2 ~ x2, zipoisson(zero = 1), data = zdata, trace = TRUE) coef(fit2, matrix = TRUE) # Always a good idea coef(fit2) coef(fit2, colon = TRUE) } \keyword{models} \keyword{regression} VGAM/man/gumbel.Rd0000644000176200001440000002023313135276753013335 0ustar liggesusers\name{gumbel} \alias{gumbel} \alias{gumbelff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Gumbel Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter Gumbel distribution. } \usage{ gumbel(llocation = "identitylink", lscale = "loge", iscale = NULL, R = NA, percentiles = c(95, 99), mpv = FALSE, zero = NULL) gumbelff(llocation = "identitylink", lscale = "loge", iscale = NULL, R = NA, percentiles = c(95, 99), zero = "scale", mpv = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation, lscale}{ Parameter link functions for \eqn{\mu}{mu} and \eqn{\sigma}{sigma}. See \code{\link{Links}} for more choices. } \item{iscale}{ Numeric and positive. Optional initial value for \eqn{\sigma}{sigma}. Recycled to the appropriate length. In general, a larger value is better than a smaller value. A \code{NULL} means an initial value is computed internally. } \item{R}{ Numeric. Maximum number of values possible. See \bold{Details} for more details. } \item{percentiles}{ Numeric vector of percentiles used for the fitted values. Values should be between 0 and 100. This argument uses the argument \code{R} if assigned. If \code{percentiles = NULL} then the mean will be returned as the fitted values. % This argument is ignored if \code{mean = TRUE}. } \item{mpv}{ Logical. If \code{mpv = TRUE} then the \emph{median predicted value} (MPV) is computed and returned as the (last) column of the fitted values. This argument is ignored if \code{percentiles = NULL}. See \bold{Details} for more details. % This argument is ignored if \code{mean = TRUE}. } % \item{mean}{ % Logical. If \code{TRUE}, the mean is computed and returned % as the fitted values. This argument overrides the % \code{percentiles} and \code{mpv} arguments. % See \bold{Details} for more details. % } \item{zero}{ A vector specifying which linear/additive predictors are modelled as intercepts only. The value (possibly values) can be from the set \{1, 2\} corresponding respectively to \eqn{\mu}{mu} and \eqn{\sigma}{sigma}. By default all linear/additive predictors are modelled as a linear combination of the explanatory variables. See \code{\link{CommonVGAMffArguments}} for more details. 
}
}
\details{
  The Gumbel distribution is a generalized extreme value (GEV)
  distribution with \emph{shape} parameter \eqn{\xi = 0}{xi = 0}.
  Consequently it is more easily estimated than the GEV.
  See \code{\link{gev}} for more details.

  The quantity \eqn{R} is the maximum number of observations possible.
  For example, in the Venice data below, the top 10 daily values are
  recorded for each year; therefore \eqn{R = 365} because there are
  about 365 days per year.
  The MPV is the value of the response such that the probability of
  obtaining a value greater than the MPV is 0.5 out of \eqn{R}
  observations.
  For the Venice data, the MPV is the sea level such that there is an
  even chance that the highest level for a particular year exceeds the
  MPV.
  If \code{mpv = TRUE} then the column labelled \code{"MPV"} contains
  the MPVs when \code{fitted()} is applied to the fitted object.

  The formula for the mean of a response \eqn{Y} is
  \eqn{\mu+\sigma \times Euler}
  where \eqn{Euler} is a constant that has value approximately equal
  to 0.5772.
  The formulas for the percentiles are (if \code{R} is not given)
  \eqn{\mu-\sigma \times \log[-\log(P/100)]}{location - scale*log[-log(P/100)]}
  where \eqn{P} is the \code{percentile} argument value(s).
  If \code{R} is given then the percentiles are
  \eqn{\mu-\sigma \times \log[R(1-P/100)]}{location - scale*log[R*(1-P/100)]}.

}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}},
  and \code{\link{vgam}}.

}

\references{
  Yee, T. W. and Stephenson, A. G. (2007)
  Vector generalized linear and additive extreme value models.
  \emph{Extremes}, \bold{10}, 1--19.

  Smith, R. L. (1986)
  Extreme value theory based on the \emph{r} largest annual events.
  \emph{Journal of Hydrology}, \bold{86}, 27--43.

  Rosen, O. and Cohen, A. (1996)
  Extreme percentile regression.
  In: Haerdle, W. and Schimek, M. G. (eds.),
  \emph{Statistical Theory and Computational Aspects of Smoothing:
  Proceedings of the COMPSTAT '94 Satellite Meeting held in Semmering,
  Austria, 27--28 August 1994}, pp.200--214,
  Heidelberg: Physica-Verlag.

  Coles, S. (2001)
  \emph{An Introduction to Statistical Modeling of Extreme Values}.
  London: Springer-Verlag.

}
\author{ T. W. Yee }

\section{Warning}{
  When \code{R} is not given (the default) the fitted percentiles are
  that of the data, and not of the overall population. For example, in
  the example below, the 50 percentile is approximately the running
  median through the data, however, the data are the highest sea level
  measurements recorded each year (it therefore equates to the median
  predicted value or MPV).

}

\note{
  Like many other \pkg{VGAM} family functions, \code{gumbelff()}
  handles (independent) multiple responses.

% and is preferred to \code{gumbel()} because it is faster.

  \code{gumbel()} can handle a multivariate response, i.e., a matrix
  with more than one column. Each row of the matrix is sorted into
  descending order.
  Missing values in the response are allowed but require
  \code{na.action = na.pass}. The response matrix needs to be padded
  with any missing values. With a multivariate response one has a
  matrix \code{y}, say, where \code{y[, 2]} contains the second order
  statistics, etc.

% If a random variable \eqn{Y} has a \emph{reverse}
% \eqn{Gumbel(\mu,\sigma)}{Gumbel(mu,sigma)} distribution then \eqn{-Y}
% has a \eqn{Gumbel(-\mu,\sigma)}{Gumbel(-mu,sigma)} distribution.
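  As an informal numerical check of the percentile formula given under
  \bold{Details} (a sketch only, for the case where \code{R} is not
  given; it assumes the \code{\link{qgumbel}} function of this
  package):

\preformatted{
mu <- 100; sigma <- exp(1); P <- 95
qgumbel(P/100, location = mu, scale = sigma) -
  (mu - sigma * log(-log(P/100)))  # Should be 0
}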
% It appears that some define the reverse Gumbel the same as others
% who define the ordinary Gumbel distribution, e.g., in \pkg{gamlss}.

}
\seealso{
  \code{\link{rgumbel}},
  \code{\link{dgumbelII}},
  \code{\link{cens.gumbel}},
  \code{\link{guplot}},
  \code{\link{gev}},
  \code{\link{gevff}},
  \code{\link{venice}}.

% \code{\link{ogev}},

}

\examples{
# Example 1: Simulated data
gdata <- data.frame(y1 = rgumbel(n = 1000, loc = 100, scale = exp(1)))
fit1 <- vglm(y1 ~ 1, gumbelff(perc = NULL), data = gdata, trace = TRUE)
coef(fit1, matrix = TRUE)
Coef(fit1)
head(fitted(fit1))
with(gdata, mean(y1))

# Example 2: Venice data
(fit2 <- vglm(cbind(r1, r2, r3, r4, r5) ~ year, data = venice,
              gumbel(R = 365, mpv = TRUE), trace = TRUE))
head(fitted(fit2))
coef(fit2, matrix = TRUE)
sqrt(diag(vcov(summary(fit2))))  # Standard errors

# Example 3: Try a nonparametric fit ---------------------
# Use the entire data set, including missing values
# Same as as.matrix(venice[, paste0("r", 1:10)]):
Y <- Select(venice, "r", sort = FALSE)
fit3 <- vgam(Y ~ s(year, df = 3), gumbel(R = 365, mpv = TRUE),
             data = venice, trace = TRUE, na.action = na.pass)
depvar(fit3)[4:5, ]  # NAs used to pad the matrix

\dontrun{
# Plot the component functions
par(mfrow = c(2, 3), mar = c(6, 4, 1, 2) + 0.3, xpd = TRUE)
plot(fit3, se = TRUE, lcol = "blue", scol = "limegreen", lty = 1,
     lwd = 2, slwd = 2, slty = "dashed")

# Quantile plot --- plots all the fitted values
qtplot(fit3, mpv = TRUE, lcol = c(1, 2, 5), tcol = c(1, 2, 5), lwd = 2,
       pcol = "blue", tadj = 0.1, ylab = "Sea level (cm)")

# Plot the 99 percentile only
year <- venice[["year"]]
matplot(year, Y, ylab = "Sea level (cm)", type = "n")
matpoints(year, Y, pch = "*", col = "blue")
lines(year, fitted(fit3)[, "99\%"], lwd = 2, col = "orange")

# Check the 99 percentiles with a smoothing spline.
# Nb. (1-0.99) * 365 = 3.65 is approx. 4, meaning the 4th order
# statistic is approximately the 99 percentile.
plot(year, Y[, 4], ylab = "Sea level (cm)", type = "n",
     main = "Orange is 99 percentile, Green is a smoothing spline")
points(year, Y[, 4], pch = "4", col = "blue")
lines(year, fitted(fit3)[, "99\%"], lty = 1, col = "orange")
lines(smooth.spline(year, Y[, 4], df = 4), col = "limegreen", lty = 2)
}
}
\keyword{models}
\keyword{regression}
VGAM/man/bigumbelIexp.Rd0000644000176200001440000000560613135276753014505 0ustar liggesusers\name{bigumbelIexp}
\alias{bigumbelIexp}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Gumbel's Type I Bivariate Distribution Family Function }
\description{
  Estimate the association parameter of Gumbel's Type I bivariate
  distribution by maximum likelihood estimation.

}
\usage{
bigumbelIexp(lapar = "identitylink", iapar = NULL, imethod = 1)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{lapar}{
  Link function applied to the association parameter
  \eqn{\alpha}{alpha}.
  See \code{\link{Links}} for more choices.

  }
  \item{iapar}{
  Numeric. Optional initial value for \eqn{\alpha}{alpha}.
  By default, an initial value is chosen internally.
  If a convergence failure occurs try assigning a different value.
  Assigning a value will override the argument \code{imethod}.

  }
  \item{imethod}{
  An integer with value \code{1} or \code{2} which specifies the
  initialization method. If failure to converge occurs try the other
  value, or else specify a value for \code{iapar}.
}
}
\details{
  The cumulative distribution function is
  \deqn{P(Y_1 \leq y_1, Y_2 \leq y_2) =
        e^{-y_1-y_2+\alpha y_1 y_2}
        + 1 - e^{-y_1} - e^{-y_2} }{%
        P(Y1 <= y1, Y2 <= y2) =
        exp(-y1-y2+alpha*y1*y2) + 1 - exp(-y1) - exp(-y2) }
  for real \eqn{\alpha}{alpha}.
  The support of the function is for \eqn{y_1>0}{y1>0} and
  \eqn{y_2>0}{y2>0}.
  The marginal distributions are exponential with unit mean.

  A variant of Newton-Raphson is used, which only seems to work for an
  intercept model.
  It is a very good idea to set \code{trace = TRUE}.

}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}}
  and \code{\link{vgam}}.

}
\references{

%Castillo, E., Hadi, A. S., Balakrishnan, N. Sarabia, J. S. (2005)
%\emph{Extreme Value and Related Models with Applications in
%Engineering and Science},
%Hoboken, NJ, USA: Wiley-Interscience.

Gumbel, E. J. (1960)
Bivariate Exponential Distributions.
\emph{Journal of the American Statistical Association},
\bold{55}, 698--707.

% Journal of the American Statistical Association.
% Vol. 55, No. 292, Dec., 1960 > Bivariate Exponentia.

}
\author{ T. W. Yee }
\note{
  The response must be a two-column matrix.
  Currently, the fitted value is a matrix with two columns and values
  equal to 1. This is because each marginal distribution corresponds
  to an exponential distribution with unit mean.

  This \pkg{VGAM} family function should be used with caution.

}
\seealso{
  \code{\link{bifgmexp}}.

}
\examples{
nn <- 1000
gdata <- data.frame(y1 = rexp(nn), y2 = rexp(nn))
\dontrun{ with(gdata, plot(cbind(y1, y2))) }
fit <- vglm(cbind(y1, y2) ~ 1, bigumbelIexp, data = gdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
head(fitted(fit))
}
\keyword{models}
\keyword{regression}
VGAM/man/rrar.Rd0000644000176200001440000000614613135276753013035 0ustar liggesusers\name{rrar}
\alias{rrar}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Nested reduced-rank autoregressive models for multiple time series }
\description{
  Estimates the parameters of a nested reduced-rank autoregressive
  model for multiple time series.

}
\usage{
rrar(Ranks = 1, coefstart = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{Ranks}{
  Vector of integers: the ranks of the model.
  Each value must be at least one and no more than \code{M}, where
  \code{M} is the number of response variables in the time series.
  The length of \code{Ranks} is the \emph{lag}, which is often denoted
  by the symbol \emph{L} in the literature.

  }
  \item{coefstart}{
  Optional numerical vector of initial values for the coefficients.
  By default, the family function chooses these automatically.

  }
}
\details{
  Full details are given in Ahn and Reinsel (1988).
  Convergence may be very slow, so setting \code{maxits = 50}, say,
  may help. If convergence is not obtained, you might like to try
  inputting different initial values.
  Setting \code{trace = TRUE} in \code{\link{vglm}} is useful for
  monitoring the progress at each iteration.

}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}}
  and \code{\link{vgam}}.

}
\references{
  Ahn, S. and Reinsel, G. C. (1988)
  Nested reduced-rank autoregressive models for multiple time series.
  \emph{Journal of the American Statistical Association},
  \bold{83}, 849--856.

}
\author{ T. W.
Yee } \note{ This family function should be used within \code{\link{vglm}} and not with \code{\link{rrvglm}} because it does not fit into the RR-VGLM framework exactly. Instead, the reduced-rank model is formulated as a VGLM! A methods function \code{Coef.rrar}, say, has yet to be written. It would return the quantities \code{Ak1}, \code{C}, \code{D}, \code{omegahat}, \code{Phi}, etc. as slots, and then \code{show.Coef.rrar} would also need to be written. } \seealso{ \code{\link{vglm}}, \code{\link{grain.us}}. } \examples{ \dontrun{ year <- seq(1961 + 1/12, 1972 + 10/12, by = 1/12) par(mar = c(4, 4, 2, 2) + 0.1, mfrow = c(2, 2)) for (ii in 1:4) { plot(year, grain.us[, ii], main = names(grain.us)[ii], las = 1, type = "l", xlab = "", ylab = "", col = "blue") points(year, grain.us[, ii], pch = "*", col = "blue") } apply(grain.us, 2, mean) # mu vector cgrain <- scale(grain.us, scale = FALSE) # Center the time series only fit <- vglm(cgrain ~ 1, rrar(Ranks = c(4, 1)), trace = TRUE) summary(fit) print(fit@misc$Ak1, digits = 2) print(fit@misc$Cmatrices, digits = 3) print(fit@misc$Dmatrices, digits = 3) print(fit@misc$omegahat, digits = 3) print(fit@misc$Phimatrices, digits = 2) par(mar = c(4, 4, 2, 2) + 0.1, mfrow = c(4, 1)) for (ii in 1:4) { plot(year, fit@misc$Z[, ii], main = paste("Z", ii, sep = ""), type = "l", xlab = "", ylab = "", las = 1, col = "blue") points(year, fit@misc$Z[, ii], pch = "*", col = "blue") } } } \keyword{ts} \keyword{regression} \keyword{models} VGAM/man/gumbelIIUC.Rd0000644000176200001440000000425313135276753014013 0ustar liggesusers\name{Gumbel-II} \alias{Gumbel-II} \alias{dgumbelII} \alias{pgumbelII} \alias{qgumbelII} \alias{rgumbelII} \title{The Gumbel-II Distribution} \description{ Density, cumulative distribution function, quantile function and random generation for the Gumbel-II distribution. } \usage{ dgumbelII(x, scale = 1, shape, log = FALSE) pgumbelII(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) qgumbelII(p, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) rgumbelII(n, scale = 1, shape) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Same as in \code{\link[stats]{runif}}. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } \item{shape, scale}{positive shape and scale parameters. } } \value{ \code{dgumbelII} gives the density, \code{pgumbelII} gives the cumulative distribution function, \code{qgumbelII} gives the quantile function, and \code{rgumbelII} generates random deviates. } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{gumbelII}} for details. } %\note{ % %} \seealso{ \code{\link{gumbelII}}, \code{\link{dgumbel}}. 
}
\examples{
probs <- seq(0.01, 0.99, by = 0.01)
Scale <- exp(1); Shape <- exp(0.5)
max(abs(pgumbelII(qgumbelII(p = probs, shape = Shape, Scale),
                  shape = Shape, Scale) - probs))  # Should be 0

\dontrun{ x <- seq(-0.1, 10, by = 0.01);
plot(x, dgumbelII(x, shape = Shape, Scale), type = "l", col = "blue",
     las = 1,
     main = "Blue is density, orange is cumulative distribution function",
     sub = "Purple lines are the 10,20,...,90 percentiles",
     ylab = "", ylim = 0:1)
abline(h = 0, col = "blue", lty = 2)
lines(x, pgumbelII(x, shape = Shape, Scale), col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qgumbelII(probs, shape = Shape, Scale)
lines(Q, dgumbelII(Q, Scale, Shape), col = "purple", lty = 3, type = "h")
pgumbelII(Q, shape = Shape, Scale) - probs  # Should be all zero
abline(h = probs, col = "purple", lty = 3)
}
}
\keyword{distribution}
VGAM/man/oizetaUC.Rd0000644000176200001440000000671213135276753013613 0ustar liggesusers\name{Oizeta}
\alias{Oizeta}
\alias{doizeta}
\alias{poizeta}
\alias{qoizeta}
\alias{roizeta}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ One-Inflated Zeta Distribution }
\description{
  Density, distribution function, quantile function and random
  generation for the one-inflated zeta distribution with parameter
  \code{pstr1}.

}
\usage{
doizeta(x, shape, pstr1 = 0, log = FALSE)
poizeta(q, shape, pstr1 = 0)
qoizeta(p, shape, pstr1 = 0)
roizeta(n, shape, pstr1 = 0)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{x, q, p, n}{Same as \code{\link[stats]{Uniform}}.}
  \item{shape}{ Vector of positive shape parameters. }
  \item{pstr1}{
  Probability of a structural one (i.e., ignoring the zeta
  distribution), called \eqn{\phi}{phi}.
  The default value of \eqn{\phi = 0}{phi = 0} corresponds to the
  response having an ordinary zeta distribution.

  }
  \item{log}{Same as \code{\link[stats]{Uniform}}.}
}
\details{
  The response \eqn{Y} equals 1 with probability \eqn{\phi}{phi},
  and has a \eqn{Zeta(shape)} distribution with probability
  \eqn{1-\phi}{1-phi}. Thus
  \deqn{P(Y=1) =\phi + (1-\phi)  P(W=1)}{%
        P(Y=1) =  phi + (1-phi) * P(W=1)}
  where \eqn{W} is distributed as a \eqn{zeta(shape)} random variable.

}
\value{
  \code{doizeta} gives the density,
  \code{poizeta} gives the distribution function,
  \code{qoizeta} gives the quantile function, and
  \code{roizeta} generates random deviates.

}
%\references{ }
\author{ T. W. Yee }
\note{
  The argument \code{pstr1} is recycled to the required length, and
  usually has values which lie in the interval \eqn{[0,1]}.

  These functions actually allow for the \emph{one-deflated zeta}
  distribution. Here, \code{pstr1} is also permitted to lie in the
  interval
  \code{[-dzeta(1, shape) / (1 - dzeta(1, shape)), 0]}.
  The resulting probability of a unit count is \emph{less than} the
  nominal zeta value, and the use of \code{pstr1} to stand for the
  probability of a structural 1 loses its meaning.
%
%
%
  When \code{pstr1} equals
  \code{-dzeta(1, shape) / (1 - dzeta(1, shape))}
  this corresponds to the 1-truncated zeta distribution.

}

\seealso{
    \code{\link{Zeta}},
    \code{\link{zetaff}},
    \code{\link{Otzeta}}.
%   \code{\link{zipf}}.
}
\examples{
shape <- 1.5; pstr1 <- 0.3; x <- (-1):7
(ii <- doizeta(x, shape, pstr1 = pstr1))
max(abs(poizeta(1:200, shape) -
        cumsum(1/(1:200)^(1+shape)) / zeta(shape+1)))  # Should be 0

\dontrun{ x <- 0:10
par(mfrow = c(2, 1))  # One-Inflated zeta
barplot(rbind(doizeta(x, shape, pstr1 = pstr1),
                dzeta(x, shape)),
        beside = TRUE, col = c("blue", "orange"),
        main = paste("OIZeta(", shape, ", pstr1 = ", pstr1,
                     ") (blue) vs", " Zeta(", shape, ") (orange)",
                     sep = ""),
        names.arg = as.character(x))

deflat.limit <- -dzeta(1, shape) / pzeta(1, shape, lower.tail = FALSE)
newpstr1 <- round(deflat.limit, 3) + 0.001  # Inside but near the boundary
barplot(rbind(doizeta(x, shape, pstr1 = newpstr1),
                dzeta(x, shape)),
        beside = TRUE, col = c("blue","orange"),
        main = paste("ODZeta(", shape, ", pstr1 = ", newpstr1,
                     ") (blue) vs", " Zeta(", shape, ") (orange)",
                     sep = ""),
        names.arg = as.character(x))
}
}
\keyword{distribution}

%qoizeta(p, shape, pstr1 = 0)
%roizeta(n, shape, pstr1 = 0)
% table(roizeta(100, shape, pstr1 = pstr1))
% round(doizeta(1:10, shape, pstr1 = pstr1) * 100)  # Should be similar
VGAM/man/waitakere.Rd0000644000176200001440000000330213135276753014034 0ustar liggesusers\name{waitakere}
\alias{waitakere}
\docType{data}
\title{Waitakere Ranges Data}
\description{
  The \code{waitakere} data frame has 579 rows and 18 columns.
  Altitude is explanatory, and there are binary responses
  (presence/absence = 1/0 respectively) for 17 plant species.

}
\usage{data(waitakere)}
\format{
  This data frame contains the following columns:
  \describe{
    \item{agaaus}{Agathis australis, or Kauri}
    \item{beitaw}{Beilschmiedia tawa, or Tawa}
    \item{corlae}{Corynocarpus laevigatus}
    \item{cyadea}{Cyathea dealbata}
    \item{cyamed}{Cyathea medullaris}
    \item{daccup}{Dacrydium cupressinum}
    \item{dacdac}{Dacrycarpus dacrydioides}
    \item{eladen}{Elaeocarpus dentatus}
    \item{hedarb}{Hedycarya arborea}
    \item{hohpop}{Species name unknown}
    \item{kniexc}{Knightia excelsa, or Rewarewa}
    \item{kuneri}{Kunzea ericoides}
    \item{lepsco}{Leptospermum scoparium}
    \item{metrob}{Metrosideros robusta}
    \item{neslan}{Nestegis lanceolata}
    \item{rhosap}{Rhopalostylis sapida}
    \item{vitluc}{Vitex lucens, or Puriri}
    \item{altitude}{meters above sea level}
  }
}
\details{
  These were collected from the Waitakere Ranges, a small forest in
  northern Auckland, New Zealand. At 579 sites in the forest, the
  presence/absence of 17 plant species was recorded, as well as the
  altitude. Each site had an area of 200\eqn{m^2}{m^2}.

}
\source{
  Dr Neil Mitchell, University of Auckland.

}
%\references{
%None.
%}
\seealso{
  \code{\link{hunua}}.

}
\examples{
fit <- vgam(agaaus ~ s(altitude, df = 2), binomialff, waitakere)
head(predict(fit, waitakere, type = "response"))
\dontrun{ plot(fit, se = TRUE, lcol = "orange", scol = "blue") }
}
\keyword{datasets}
VGAM/man/qtplot.lmscreg.Rd0000644000176200001440000000460313135276753015043 0ustar liggesusers\name{qtplot.lmscreg}
\alias{qtplot.lmscreg}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Quantile Plot for LMS Quantile Regression }
\description{
  Plots quantiles associated with an LMS quantile regression.

}
\usage{
qtplot.lmscreg(object, newdata = NULL,
               percentiles = object@misc$percentiles,
               show.plot = TRUE, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{object}{
  A \pkg{VGAM} quantile regression model, i.e., an object produced by
  modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}
  with a family function beginning with \code{"lms."}, e.g.,
  \code{\link{lms.yjn}}.
} \item{newdata}{ Optional data frame for computing the quantiles. If missing, the original data is used. } \item{percentiles}{ Numerical vector with values between 0 and 100 that specify the percentiles (quantiles). The default is the set of percentiles used when the model was fitted. } \item{show.plot}{ Logical. Plot it? If \code{FALSE} no plot will be done. } \item{\dots}{ Graphical parameters that are passed into \code{\link{plotqtplot.lmscreg}}. } } \details{ The `primary' variable is defined as the main covariate upon which the regression or smoothing is performed. For example, in medical studies, it is often the age. In \pkg{VGAM}, it is possible to handle more than one covariate; however, the primary variable must be the first term after the intercept. } \value{ A list with the following components. \item{fitted.values }{A vector of fitted percentile values. } \item{percentiles }{The percentiles used. } } \references{ Yee, T. W. (2004) Quantile regression via vector generalized additive models. \emph{Statistics in Medicine}, \bold{23}, 2295--2315. %Documentation accompanying the \pkg{VGAM} package at %\url{http://www.stat.auckland.ac.nz/~yee} %contains further information and examples. } \author{ Thomas W. Yee } \note{ \code{\link{plotqtplot.lmscreg}} does the actual plotting. } \seealso{ \code{\link{plotqtplot.lmscreg}}, \code{\link{deplot.lmscreg}}, \code{\link{lms.bcn}}, \code{\link{lms.bcg}}, \code{\link{lms.yjn}}. } \examples{\dontrun{ fit <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), data = bmi.nz) qtplot(fit) qtplot(fit, perc = c(25, 50, 75, 95), lcol = "blue", tcol = "blue", llwd = 2) } } \keyword{graphs} \keyword{models} \keyword{regression} VGAM/man/hatvalues.Rd0000644000176200001440000002012213135276753014053 0ustar liggesusers% 20120312 % Modified from file src/library/stats/man/influence.measures.Rd \name{hatvalues} %\title{Regression Deletion Diagnostics} \title{Hat Values and Regression Deletion Diagnostics} %\concept{studentized residuals} %\concept{standardized residuals} %\concept{Cook's distances} %\concept{Covariance ratios} \concept{DFBETAs} %\concept{DFFITs} %\alias{influence.measures} %\alias{print.infl} %\alias{summary.infl} %\alias{hat} \alias{hatvalues} %\alias{hatvalues.lm} \alias{hatvaluesvlm} \alias{hatplot} \alias{hatplot.vlm} %\alias{rstandard} %\alias{rstandard.lm} %\alias{rstandard.glm} %\alias{rstudent} %\alias{rstudent.lm} %\alias{rstudent.glm} \alias{dfbeta} \alias{dfbetavlm} %\alias{dfbetas} %\alias{dfbetas.lm} %\alias{dffits} %\alias{covratio} %\alias{cooks.distance} %\alias{cooks.distance.lm} %\alias{cooks.distance.glm} \usage{ hatvalues(model, \dots) hatvaluesvlm(model, type = c("diagonal", "matrix", "centralBlocks"), \dots) hatplot(model, \dots) hatplot.vlm(model, multiplier = c(2, 3), lty = "dashed", xlab = "Observation", ylab = "Hat values", ylim = NULL, \dots) dfbetavlm(model, maxit.new = 1, trace.new = FALSE, smallno = 1.0e-8, ...) } \arguments{ \item{model}{an \R object, typically returned by \code{\link{vglm}}. %or \code{\link{glm}}. } \item{type}{Character. The default is the first choice, \code{"diagonal"}, in which case the diagonal elements of the \eqn{nM \times nM}{nM x nM} hat matrix are returned, as an \eqn{n \times M}{n x M} matrix. If \code{type = "matrix"} then the \emph{entire} hat matrix is returned. If \code{type = "centralBlocks"} then \eqn{n} central \eqn{M \times M}{M x M} block matrices are returned, in matrix-band format. } % \item{diag}{Logical. If \code{TRUE} then the diagonal elements % of the hat matrix are returned, else the \emph{entire} hat matrix is % returned. % In the latter case, it is a \eqn{nM \times nM}{nM x nM} matrix.
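% An editor's hedged sketch (not from the original sources): the entire
% hat matrix can be requested with
%   H <- hatvalues(fit, type = "matrix")
% and its trace should agree with the diagonal version, e.g.
%   all.equal(sum(diag(H)), sum(hatvalues(fit)))  # both equal fit@rank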
% } \item{multiplier}{Numeric, the multiplier. The usual rule-of-thumb is that values greater than two or three times the average leverage (at least for the linear model) should be checked. } \item{lty, xlab, ylab, ylim}{Graphical parameters, see \code{\link[graphics]{par}} etc. The default of \code{ylim} is \code{c(0, max(hatvalues(model)))} which means that if the horizontal dashed lines cannot be seen then there are no particularly influential observations. } \item{maxit.new, trace.new, smallno}{ Having \code{maxit.new = 1} will give a one IRLS step approximation from the ordinary solution (and no warnings!). Else having \code{maxit.new = 10}, say, should usually mean convergence will occur for all observations when they are removed one-at-a-time. Else having \code{maxit.new = 2}, say, should usually mean some lack of convergence will occur when observations are removed one-at-a-time. Setting \code{trace.new = TRUE} will produce some running output at each IRLS iteration and for each individual row of the model matrix. The argument \code{smallno} multiplies each value of the original prior weight (often unity); setting it identically to zero will result in an error, but setting a very small value effectively removes that observation. } % \item{infl}{influence structure as returned by % \code{\link{lm.influence}} or \code{\link{influence}} (the latter % only for the \code{glm} method of \code{rstudent} and % \code{cooks.distance}).} % \item{res}{(possibly weighted) residuals, with proper default.} % \item{sd}{standard deviation to use, see default.} % \item{dispersion}{dispersion (for \code{\link{glm}} objects) to use, % see default.} % \item{hat}{hat values \eqn{H_{ii}}{H[i,i]}, see default.} % \item{type}{type of residuals for \code{glm} method for \code{rstandard.}} % \item{x}{the \eqn{X} or design matrix.} % \item{intercept}{should an intercept column be prepended to \code{x}?} \item{\dots}{further arguments, for example, graphical parameters for \code{hatplot.vlm()}. % passed to or from other methods. } } \description{ When complete, this will be a suite of functions that can be used to compute some of the regression (leave-one-out deletion) diagnostics for the VGLM class. % This suite of functions can be used to compute some of the % regression (leave-one-out deletion) diagnostics for linear and % generalized linear models discussed in Belsley, Kuh and Welsch % (1980), Cook and Weisberg (1982), etc. } \details{ The invocation \code{hatvalues(vglmObject)} should return an \eqn{n \times M}{n x M} matrix of the diagonal elements of the hat (projection) matrix of a \code{\link{vglm}} object. To do this, the QR decomposition of the object is retrieved or reconstructed, and then straightforward calculations are performed. The invocation \code{hatplot(vglmObject)} should plot the diagonal of the hat matrix for each of the \eqn{M} linear/additive predictors. By default, two horizontal dashed lines are added; hat values higher than these ought to be checked. % The primary high-level function is \code{influence.measures} % which produces a class \code{"infl"} object tabular display % showing the DFBETAS for each model variable, DFFITS, covariance % ratios, Cook's distances and the diagonal elements of the % hat matrix. Cases which are influential with respect to any % of these measures are marked with an asterisk. % The functions \code{dfbetas}, \code{dffits}, \code{covratio} % and \code{cooks.distance} provide direct access to the % corresponding diagnostic quantities.
% Functions \code{rstandard} % and \code{rstudent} give the standardized and Studentized % residuals respectively. (These re-normalize the residuals to % have unit variance, using an overall and leave-one-out measure % of the error variance respectively.) % Values for generalized linear models are approximations, as % described in Williams (1987) (except that Cook's distances % are scaled as \eqn{F} rather than as chi-square values). The % approximations can be poor when some cases have large influence. % The optional \code{infl}, \code{res} and \code{sd} arguments are % there to encourage the use of these direct access functions, % in situations where, e.g., the underlying basic influence % measures (from \code{\link{lm.influence}} or the generic % \code{\link{influence}}) are already available. % Note that cases with \code{weights == 0} are \emph{dropped} from all % these functions, but that if a linear model has been fitted with % \code{na.action = na.exclude}, suitable values are filled in for the % cases excluded during fitting. % The function \code{hat()} exists mainly for S (version 2) % compatibility; we recommend using \code{hatvalues()} instead. } \note{ It is hoped that, soon, the full suite of functions described at \code{\link[stats]{influence.measures}} will be written for VGLMs. This will enable general regression deletion diagnostics to be available for the entire VGLM class. % For \code{hatvalues}, \code{dfbeta}, and \code{dfbetas}, the method % for linear models also works for generalized linear models. } \author{ T. W. Yee. } %\references{ % Belsley, D. A., Kuh, E. and Welsch, R. E. (1980) % \emph{Regression Diagnostics}. % New York: Wiley. % % Cook, R. D. and Weisberg, S. (1982) % \emph{Residuals and Influence in Regression}. % London: Chapman and Hall. % % Williams, D. A. (1987) % Generalized linear model diagnostics using the deviance and single % case deletions. \emph{Applied Statistics} \bold{36}, 181--191. % % Fox, J. (1997) % \emph{Applied Regression, Linear Models, and Related Methods}. Sage. % % Fox, J. (2002) % \emph{An R and S-Plus Companion to Applied Regression}. % Sage Publ.; \url{http://www.socsci.mcmaster.ca/jfox/Books/Companion/}. % % %} \seealso{ \code{\link{vglm}}, \code{\link{cumulative}}, \code{\link[stats]{influence.measures}}. } \examples{ # Proportional odds model, p.179, in McCullagh and Nelder (1989) pneumo <- transform(pneumo, let = log(exposure.time)) fit <- vglm(cbind(normal, mild, severe) ~ let, cumulative, data = pneumo) hatvalues(fit) # n x M matrix, with positive values all.equal(sum(hatvalues(fit)), fit@rank) # Should be TRUE \dontrun{ par(mfrow = c(1, 2)) hatplot(fit, ylim = c(0, 1), las = 1, col = "blue") } } \keyword{regression} VGAM/man/bifgmcopUC.Rd0000644000176200001440000000311013135276753014073 0ustar liggesusers\name{Bifgmcop} \alias{Bifgmcop} \alias{dbifgmcop} \alias{pbifgmcop} \alias{rbifgmcop} \title{The Farlie-Gumbel-Morgenstern Bivariate Distribution} \description{ Density, distribution function, and random generation for the (one-parameter) bivariate Farlie-Gumbel-Morgenstern distribution. } \usage{ dbifgmcop(x1, x2, apar, log = FALSE) pbifgmcop(q1, q2, apar) rbifgmcop(n, apar) } \arguments{ \item{x1, x2, q1, q2}{vector of quantiles.} \item{n}{number of observations. Same as in \code{\link[stats]{runif}}. } \item{apar}{the association parameter.} \item{log}{ Logical. If \code{TRUE} then the logarithm is returned.
} } \value{ \code{dbifgmcop} gives the density, \code{pbifgmcop} gives the distribution function, and \code{rbifgmcop} generates random deviates (a two-column matrix). } %\references{ % %} \author{ T. W. Yee } \details{ See \code{\link{bifgmcop}}, the \pkg{VGAM} family function for estimating the parameter by maximum likelihood estimation, for the formula of the cumulative distribution function and other details. } %\note{ %} \seealso{ \code{\link{bifgmcop}}. } \examples{ \dontrun{ N <- 101; x <- seq(0.0, 1.0, len = N); apar <- 0.7 ox <- expand.grid(x, x) zedd <- dbifgmcop(ox[, 1], ox[, 2], apar = apar) contour(x, x, matrix(zedd, N, N), col = "blue") zedd <- pbifgmcop(ox[, 1], ox[, 2], apar = apar) contour(x, x, matrix(zedd, N, N), col = "blue") plot(r <- rbifgmcop(n = 3000, apar = apar), col = "blue") par(mfrow = c(1, 2)) hist(r[, 1]) # Should be uniform hist(r[, 2]) # Should be uniform } } \keyword{distribution} VGAM/man/fiskUC.Rd0000644000176200001440000000360713135276753013254 0ustar liggesusers\name{Fisk} \alias{Fisk} \alias{dfisk} \alias{pfisk} \alias{qfisk} \alias{rfisk} \title{The Fisk Distribution} \description{ Density, distribution function, quantile function and random generation for the Fisk distribution with shape parameter \code{shape1.a} and scale parameter \code{scale}. } \usage{ dfisk(x, scale = 1, shape1.a, log = FALSE) pfisk(q, scale = 1, shape1.a, lower.tail = TRUE, log.p = FALSE) qfisk(p, scale = 1, shape1.a, lower.tail = TRUE, log.p = FALSE) rfisk(n, scale = 1, shape1.a) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required.} \item{shape1.a}{shape parameter.} \item{scale}{scale parameter.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dfisk} gives the density, \code{pfisk} gives the distribution function, \code{qfisk} gives the quantile function, and \code{rfisk} generates random deviates. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{fisk}}, which is the \pkg{VGAM} family function for estimating the parameters by maximum likelihood estimation. } \note{ The Fisk distribution is a special case of the 4-parameter generalized beta II distribution. } \seealso{ \code{\link{fisk}}, \code{\link{genbetaII}}. } \examples{ fdata <- data.frame(y = rfisk(n = 1000, shape = exp(1), scale = exp(2))) fit <- vglm(y ~ 1, fisk(lss = FALSE), data = fdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) } \keyword{distribution} VGAM/man/AICvlm.Rd0000644000176200001440000001024213135276753013174 0ustar liggesusers\name{AICvlm} \alias{AICvlm} %\alias{AICvglm} \alias{AICvgam} \alias{AICrrvglm} \alias{AICqrrvglm} \alias{AICrrvgam} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Akaike's Information Criterion } \description{ Calculates the Akaike information criterion for a fitted model object for which a log-likelihood value has been obtained. } \usage{ AICvlm(object, \dots, corrected = FALSE, k = 2) AICvgam(object, \dots, k = 2) AICrrvglm(object, \dots, k = 2) AICqrrvglm(object, \dots, k = 2) AICrrvgam(object, \dots, k = 2) } %- maybe also 'usage' for other objects documented here.
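% An editor's hedged numerical check of the formula given in Details below,
% assuming no estimated scale parameter so that length(coef(fit)) counts
% all the parameters of a vglm() fit:
%   AIC(fit) == -2 * c(logLik(fit)) + 2 * length(coef(fit))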
\arguments{ \item{object}{ Some \pkg{VGAM} object, for example, having class \code{\link{vglmff-class}}. } \item{\dots}{ Other possible arguments fed into \code{logLik} in order to compute the log-likelihood. } \item{corrected}{ Logical, perform the finite sample correction? } \item{k}{ Numeric, the penalty per parameter to be used; the default is the classical AIC. } } \details{ The following formula is used for VGLMs: \eqn{-2 \mbox{log-likelihood} + k n_{par}}{-2*log-likelihood + k*npar}, where \eqn{n_{par}}{npar} represents the number of parameters in the fitted model, and \eqn{k = 2} for the usual AIC. One could assign \eqn{k = \log(n)} (\eqn{n} the number of observations) for the so-called BIC or SBC (Schwarz's Bayesian criterion). This is the function \code{AICvlm()}. This code relies on the log-likelihood being defined, and computed, for the object. When comparing fitted objects, the smaller the AIC, the better the fit. The log-likelihood and hence the AIC is only defined up to an additive constant. Any estimated scale parameter (in GLM parlance) is used as one parameter. For VGAMs and CAO the nonlinear effective degrees of freedom for each smoothed component are used. This formula is heuristic. These are the functions \code{AICvgam()} and \code{AICrrvgam()} (the latter handles CAO fits). The finite sample correction is usually recommended when the sample size is small or when the number of parameters is large. When the sample size is large their difference tends to be negligible. The correction is described in Hurvich and Tsai (1989), and is based on a (univariate) linear model with normally distributed errors. } \value{ Returns a numeric value with the corresponding AIC (or BIC, or \dots, depending on \code{k}). } \author{T. W. Yee. } \note{ AIC has not been defined for QRR-VGLMs yet. Using AIC to compare \code{\link{posbinomial}} models with, e.g., \code{\link{posbernoulli.tb}} models requires \code{posbinomial(omit.constant = TRUE)}. See \code{\link{posbinomial}} for an example. A warning is given if it suspects a wrong \code{omit.constant} value was used. Where defined, \code{AICc(...)} is the same as \code{AIC(..., corrected = TRUE)}. } \references{ Hurvich, C. M. and Tsai, C.-L. (1989) Regression and time series model selection in small samples, \emph{Biometrika}, \bold{76}, 297--307. % Sakamoto, Y., Ishiguro, M., and Kitagawa G. (1986). % \emph{Akaike Information Criterion Statistics}. % D. Reidel Publishing Company. } \section{Warning }{ This code has not been double-checked. The general applicability of \code{AIC} for the VGLM/VGAM classes has not been developed fully. In particular, \code{AIC} should not be run on some \pkg{VGAM} family functions because of violation of certain regularity conditions, etc. } \seealso{ VGLMs are described in \code{\link{vglm-class}}; VGAMs are described in \code{\link{vgam-class}}; RR-VGLMs are described in \code{\link{rrvglm-class}}; \code{\link[stats]{AIC}}, \code{\link{BICvlm}}.
} \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) (fit1 <- vglm(cbind(normal, mild, severe) ~ let, cumulative(parallel = TRUE, reverse = TRUE), data = pneumo)) coef(fit1, matrix = TRUE) AIC(fit1) AICc(fit1) # Quick way AIC(fit1, corrected = TRUE) # Slow way (fit2 <- vglm(cbind(normal, mild, severe) ~ let, cumulative(parallel = FALSE, reverse = TRUE), data = pneumo)) coef(fit2, matrix = TRUE) AIC(fit2) AICc(fit2) AIC(fit2, corrected = TRUE) } \keyword{models} \keyword{regression} VGAM/man/linkfun.vglm.Rd0000644000176200001440000000327213135276753014500 0ustar liggesusers\name{linkfun.vglm} \alias{linkfun.vglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Link Functions for VGLMs } \description{ Returns the link functions, and parameter names, for \emph{vector generalized linear models} (VGLMs). } \usage{ linkfun.vglm(object, earg = FALSE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ Object of class \code{"vglm"}, i.e., a VGLM object. } \item{earg}{ Logical. Return the extra arguments associated with each link function? If \code{TRUE} then a list is returned. } \item{\dots}{ Arguments that might be used in the future. } } \details{ All fitted VGLMs have a link function applied to each parameter. This function returns these, and optionally, the extra arguments associated with them. } \value{ Usually just a (named) character string, with the link functions in order. It is named with the parameter names. If \code{earg = TRUE} then a list with the following components. \item{link}{ The default output. } \item{earg}{The extra arguments, in order. } } %\references{ %} \author{ Thomas W. Yee } \note{ Presently, the multinomial logit model has only one link function, \code{\link{multilogit}}, so a warning is not issued for that link. For other models, if the number of link functions does not equal \eqn{M} then a warning may be issued. } \seealso{ \code{\link{linkfun}}, \code{\link{multilogit}}, \code{\link{vglm}}. } \examples{ fit1 <- vgam(cbind(r1, r2) ~ s(year, df = 3), gev(zero = 2:3), data = venice) coef(fit1, matrix = TRUE) linkfun(fit1) linkfun(fit1, earg = TRUE) } \keyword{models} \keyword{regression} VGAM/man/grc.Rd0000644000176200001440000003656613135276753012655 0ustar liggesusers\name{grc} \alias{grc} \alias{rcim} \alias{uqo} %- Also NEED an `\alias' for EACH other topic documented here. \title{ Row-Column Interaction Models including Goodman's RC Association Model and Unconstrained Quadratic Ordination } \description{ Fits a Goodman's RC association model (GRC) to a matrix of counts, and more generally, row-column interaction models (RCIMs). RCIMs allow for unconstrained quadratic ordination (UQO). } \usage{ grc(y, Rank = 1, Index.corner = 2:(1 + Rank), str0 = 1, summary.arg = FALSE, h.step = 1e-04, ...) rcim(y, family = poissonff, Rank = 0, M1 = NULL, weights = NULL, which.linpred = 1, Index.corner = ifelse(is.null(str0), 0, max(str0)) + 1:Rank, rprefix = "Row.", cprefix = "Col.", iprefix = "X2.", offset = 0, str0 = if (Rank) 1 else NULL, summary.arg = FALSE, h.step = 0.0001, rbaseline = 1, cbaseline = 1, has.intercept = TRUE, M = NULL, rindex = 2:nrow(y), cindex = 2:ncol(y), iindex = 2:nrow(y), ...) } %- maybe also `usage' for other objects documented here. \arguments{ \item{y}{ For \code{grc()}: a matrix of counts. For \code{rcim()}: a general matrix response depending on \code{family}. Output from \code{table()} is acceptable; it is converted into a matrix. 
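For example, a call like \code{grc(table(rowfac, colfac))}, where \code{rowfac} and \code{colfac} are two hypothetical factors, fits a GRC model to their cross-tabulation.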
Note that \code{y} should be at least 3 by 3 in dimension. } \item{family}{ A \pkg{VGAM} family function. By default, the first linear/additive predictor is fitted using main effects plus an optional rank-\code{Rank} interaction term. Not all family functions are suitable or make sense. All other linear/additive predictors are fitted using an intercept-only, so it has a common value over all rows and columns. For example, \code{\link{zipoissonff}} may be suitable for counts but not \code{\link{zipoisson}} because of the ordering of the linear/additive predictors. If the \pkg{VGAM} family function does not have an \code{infos} slot then \code{M1} needs to be inputted (the number of linear predictors for an ordinary (usually univariate) response, aka \eqn{M}). The \pkg{VGAM} family function also needs to be able to handle multiple responses (currently not all of them can do this). } \item{Rank}{ An integer from the set \{0,\ldots,\code{min(nrow(y), ncol(y))}\}. This is the dimension of the fit in terms of the interaction. For \code{grc()} this argument must be positive. A value of 0 means no interactions (i.e., main effects only); each row and column is represented by an indicator variable. } \item{weights}{ Prior weights. Fed into \code{\link{rrvglm}} or \code{\link{vglm}}. } \item{which.linpred}{ Single integer. Specifies which linear predictor is modelled as the sum of an intercept, row effect, column effect plus an optional interaction term. It should be one value from the set \code{1:M1}. } \item{Index.corner}{ A vector of \code{Rank} integers. These are used to store the \code{Rank} by \code{Rank} identity matrix in the \code{A} matrix; corner constraints are used. } \item{rprefix, cprefix, iprefix}{ Character, for rows and columns and interactions respectively. For labelling the indicator variables. } \item{offset}{ Numeric. Either a matrix of the right dimension, else a single numeric expanded into such a matrix. } \item{str0}{ Ignored if \code{Rank = 0}, else an integer from the set \{1,\ldots,\code{min(nrow(y), ncol(y))}\}, specifying the row that is used as the structural zero. Passed into \code{\link{rrvglm.control}} if \code{Rank > 0}. Set \code{str0 = NULL} for none. } \item{summary.arg}{ Logical. If \code{TRUE} then a summary is returned. If \code{TRUE} then \code{y} may be the output (fitted object) of \code{grc()}. } \item{h.step}{ A small positive value that is passed into \code{summary.rrvglm()}. Only used when \code{summary.arg = TRUE}. } \item{\dots}{ Arguments that are passed into \code{rrvglm.control()}. } \item{M1}{ The number of linear predictors of the \pkg{VGAM} \code{family} function for an ordinary (univariate) response. Then the number of linear predictors of the \code{rcim()} fit is usually the number of columns of \code{y} multiplied by \code{M1}. The default is to evaluate the \code{infos} slot of the \pkg{VGAM} \code{family} function to try to evaluate it; see \code{\link{vglmff-class}}. If this information is not yet supplied by the family function then the value needs to be inputted manually using this argument. } \item{rbaseline, cbaseline}{ Baseline reference levels for the rows and columns. Currently stored on the object but not used. } \item{has.intercept}{ Logical. Include an intercept? } \item{M, cindex}{ \eqn{M} is the usual \pkg{VGAM} \eqn{M}, viz. the number of linear/additive predictors in total. 
Also, \code{cindex} means column index, and these point to the columns of \code{y} which are part of the vector of linear/additive predictor \emph{main effects}. For \code{family = multinomial} it is necessary to input these arguments as \code{M = ncol(y)-1} and \code{cindex = 2:(ncol(y)-1)}. % except for the possibly the first one (due to identifiability constraints). } \item{rindex, iindex}{ \code{rindex} means row index, and these are similar to \code{cindex}. \code{iindex} means interaction index, and these are similar to \code{cindex}. } } \details{ Goodman's RC association model fits a reduced-rank approximation to a table of counts. A Poisson model is assumed. The log of each cell mean is decomposed as an intercept plus a row effect plus a column effect plus a reduced-rank component. The latter can be collectively written \code{A \%*\% t(C)}, the product of two `thin' matrices. Indeed, \code{A} and \code{C} have \code{Rank} columns. By default, the first column and row of the interaction matrix \code{A \%*\% t(C)} is chosen to be structural zeros, because \code{str0 = 1}. This means the first row of \code{A} are all zeros. This function uses \code{options()$contrasts} to set up the row and column indicator variables. In particular, Equation (4.5) of Yee and Hastie (2003) is used. These are called \code{Row.} and \code{Col.} (by default) followed by the row or column number. The function \code{rcim()} is more general than \code{grc()}. Its default is a no-interaction model of \code{grc()}, i.e., rank-0 and a Poisson distribution. This means that each row and column has a dummy variable associated with it. The first row and first column are baseline. The power of \code{rcim()} is that many \pkg{VGAM} family functions can be assigned to its \code{family} argument. For example, \code{\link{uninormal}} fits something in between a 2-way ANOVA with and without interactions, \code{\link{alaplace2}} with \code{Rank = 0} is something like \code{\link[stats]{medpolish}}. Others include \code{\link{zipoissonff}} and \code{\link{negbinomial}}. Hopefully one day \emph{all} \pkg{VGAM} family functions will work when assigned to the \code{family} argument, although the result may not have meaning. \emph{Unconstrained quadratic ordination} (UQO) can be performed using \code{rcim()} and \code{grc()}. This has been called \emph{unconstrained Gaussian ordination} in the literature, however the word \emph{Gaussian} has two meanings which is confusing; it is better to use \emph{quadratic} because the bell-shape response surface is meant. UQO is similar to CQO (\code{\link{cqo}}) except there are no environmental/explanatory variables. Here, a GLM is fitted to each column (species) that is a quadratic function of hypothetical latent variables or gradients. Thus each row of the response has an associated site score, and each column of the response has an associated optimum and tolerance matrix. UQO can be performed here under the assumption that all species have the same tolerance matrices. See Yee and Hadi (2014) for details. It is not recommended that presence/absence data be inputted because the information content is so low for each site-species cell. The example below uses Poisson counts. } \value{ An object of class \code{"grc"}, which currently is the same as an \code{"rrvglm"} object. Currently, a rank-0 \code{rcim()} object is of class \code{\link{rcim0-class}}, else of class \code{"rcim"} (this may change in the future). 
% Currently, % a rank-0 \code{rcim()} object is of class \code{\link{vglm-class}}, % but it may become of class \code{"rcim"} one day. } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. Yee, T. W. and Hadi, A. F. (2014) Row-column interaction models, with an R implementation. \emph{Computational Statistics}, \bold{29}, 1427--1445. Goodman, L. A. (1981) Association models and canonical correlation in the analysis of cross-classifications having ordered categories. \emph{Journal of the American Statistical Association}, \bold{76}, 320--334. %Documentation accompanying the \pkg{VGAM} package at %\url{http://www.stat.auckland.ac.nz/~yee} %contains further information about the setting up of the %indicator variables. } \author{ Thomas W. Yee, with assistance from Alfian F. Hadi. } \note{ These functions set up the indicator variables etc. before calling \code{\link{rrvglm}} or \code{\link{vglm}}. The \code{...} is passed into \code{\link{rrvglm.control}} or \code{\link{vglm.control}}. This means, e.g., that \code{Rank = 1} is the default for \code{grc()}. The data should be labelled with \code{\link[base]{rownames}} and \code{\link[base]{colnames}}. Setting \code{trace = TRUE} is recommended to monitor convergence. Using \code{criterion = "coefficients"} can result in slow convergence. If \code{summary = TRUE} then \code{y} can be a \code{"grc"} object, in which case a summary can be returned. That is, \code{grc(y, summary = TRUE)} is equivalent to \code{summary(grc(y))}. It is not possible to plot a \code{grc(y, summary = TRUE)} or \code{rcim(y, summary = TRUE)} object. } \section{Warning}{ The function \code{rcim()} is experimental at this stage and may have bugs. Quite a lot of expertise is needed in fitting and interpreting such models. For example, the constraint matrices apply the reduced-rank regression to the first linear predictor (see \code{which.linpred}), while the other linear predictors are intercept-only and have a common value throughout the entire data set. This means that, by default, \code{family =} \code{\link{zipoissonff}} is appropriate but not \code{family =} \code{\link{zipoisson}}. Else set \code{family =} \code{\link{zipoisson}} and \code{which.linpred = 2}. To understand what is going on, do examine the constraint matrices of the fitted object, and reconcile this with Equations (4.3) to (4.5) of Yee and Hastie (2003). The functions temporarily create a data frame in the workspace called \code{.grc.df} or \code{.rcim.df}, which used to be needed by \code{summary.rrvglm()}. These data frames are deleted before the functions exit. If an error occurs then the data frames may be present in the workspace. } \seealso{ \code{\link{rrvglm}}, \code{\link{rrvglm.control}}, \code{\link{rrvglm-class}}, \code{summary.grc}, \code{\link{moffset}}, \code{\link{Rcim}}, \code{\link{Select}}, \code{\link{Qvar}}, \code{\link{plotrcim0}}, \code{\link{cqo}}, \code{\link{multinomial}}, \code{\link{alcoff}}, \code{\link{crashi}}, \code{\link{auuc}}, \code{\link[VGAM:olym08]{olym08}}, \code{\link[VGAM:olym12]{olym12}}, \code{\link{poissonff}}, \code{\link[stats]{medpolish}}.
} \examples{ # Example 1: Undergraduate enrolments at Auckland University in 1990 fitted(grc1 <- grc(auuc)) summary(grc1) grc2 <- grc(auuc, Rank = 2, Index.corner = c(2, 5)) fitted(grc2) summary(grc2) model3 <- rcim(auuc, Rank = 1, fam = multinomial, M = ncol(auuc)-1, cindex = 2:(ncol(auuc)-1), trace = TRUE) fitted(model3) summary(model3) # Median polish but not 100 percent reliable. Maybe call alaplace2()... \dontrun{ rcim0 <- rcim(auuc, fam = alaplace1(tau = 0.5), trace = FALSE, maxit = 500) round(fitted(rcim0), digits = 0) round(100 * (fitted(rcim0) - auuc) / auuc, digits = 0) # Discrepancy depvar(rcim0) round(coef(rcim0, matrix = TRUE), digits = 2) Coef(rcim0, matrix = TRUE) # constraints(rcim0) names(constraints(rcim0)) # Compare with medpolish(): (med.a <- medpolish(auuc)) fv <- med.a$overall + outer(med.a$row, med.a$col, "+") round(100 * (fitted(rcim0) - fv) / fv) # Hopefully should be all 0s } # Example 2: 2012 Summer Olympic Games in London \dontrun{ top10 <- head(olym12, 10) grc1.oly12 <- with(top10, grc(cbind(gold, silver, bronze))) round(fitted(grc1.oly12)) round(resid(grc1.oly12, type = "response"), digits = 1) # Response residuals summary(grc1.oly12) Coef(grc1.oly12) } # Example 3: Unconstrained quadratic ordination (UQO); see Yee and Hadi (2014) \dontrun{ n <- 100; p <- 5; S <- 10 pdata <- rcqo(n, p, S, es.opt = FALSE, eq.max = FALSE, eq.tol = TRUE, sd.latvar = 0.75) # Poisson counts true.nu <- attr(pdata, "latvar") # The 'truth'; site scores attr(pdata, "tolerances") # The 'truth'; tolerances Y <- Select(pdata, "y", sort = FALSE) # Y matrix (n x S); the "y" vars uqo.rcim1 <- rcim(Y, Rank = 1, str0 = NULL, # Delta covers entire n x M matrix iindex = 1:nrow(Y), # RRR covers the entire Y has.intercept = FALSE) # Suppress the intercept # Plot 1 par(mfrow = c(2, 2)) plot(attr(pdata, "optimums"), Coef(uqo.rcim1)@A, col = "blue", type = "p", main = "(a) UQO optimums", xlab = "True optimums", ylab = "Estimated (UQO) optimums") mylm <- lm(Coef(uqo.rcim1)@A ~ attr(pdata, "optimums")) abline(coef = coef(mylm), col = "orange", lty = "dashed") # Plot 2 fill.val <- NULL # Choose this for the new parameterization plot(attr(pdata, "latvar"), c(fill.val, concoef(uqo.rcim1)), las = 1, col = "blue", type = "p", main = "(b) UQO site scores", xlab = "True site scores", ylab = "Estimated (UQO) site scores" ) mylm <- lm(c(fill.val, concoef(uqo.rcim1)) ~ attr(pdata, "latvar")) abline(coef = coef(mylm), col = "orange", lty = "dashed") # Plots 3 and 4 myform <- attr(pdata, "formula") p1ut <- cqo(myform, family = poissonff, eq.tol = FALSE, trace = FALSE, data = pdata) c1ut <- cqo(Select(pdata, "y", sort = FALSE) ~ scale(latvar(uqo.rcim1)), family = poissonff, eq.tol = FALSE, trace = FALSE, data = pdata) lvplot(p1ut, lcol = 1:S, y = TRUE, pcol = 1:S, pch = 1:S, pcex = 0.5, main = "(c) CQO fitted to the original data", xlab = "Estimated (CQO) site scores") lvplot(c1ut, lcol = 1:S, y = TRUE, pcol = 1:S, pch = 1:S, pcex = 0.5, main = "(d) CQO fitted to the scaled UQO site scores", xlab = "Estimated (UQO) site scores") } } \keyword{models} \keyword{regression} % plot(grc.oly1) % oly2 <- with(top10, grc(cbind(gold,silver,bronze), Rank = 2)) # Saturated model % round(fitted(oly2)) % round(fitted(oly2)) - with(top10, cbind(gold,silver,bronze)) % summary(oly2) # Saturated model % zz 20100927 unsure % Then \code{.grc.df} is deleted before exiting the function. 
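% An editor's hedged equivalence sketch (an assumption, not from the original
% documentation): for a count matrix y, the default rank-0 Poisson fit
% rcim(y) should agree with the independence GLM
%   glm(c(y) ~ factor(c(row(y))) + factor(c(col(y))), family = poisson)
% in both deviance and fitted values.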
% print(Coef(rcim0, matrix = TRUE), digits = 3) % Prior to 201310: % str0 = if (!Rank) NULL else { % if (M1 == 1) 1 else setdiff(1:(M1 * ncol(y)), % c(1 + (1:ncol(y)) * M1, Index.corner)) % }, % str0 = if (Rank > 0) 1 else NULL, % Index.corner = if (!Rank) NULL else 1 + M1 * (1:Rank), VGAM/man/lms.yjn.Rd0000644000176200001440000001341613135276753013461 0ustar liggesusers\name{lms.yjn} \alias{lms.yjn} \alias{lms.yjn2} %- Also NEED an '\alias' for EACH other topic documented here. \title{ LMS Quantile Regression with a Yeo-Johnson Transformation to Normality } \description{ LMS quantile regression with the Yeo-Johnson transformation to normality. This family function is experimental and the LMS-BCN family function is recommended instead. } \usage{ lms.yjn(percentiles = c(25, 50, 75), zero = c("lambda", "sigma"), llambda = "identitylink", lsigma = "loge", idf.mu = 4, idf.sigma = 2, ilambda = 1, isigma = NULL, rule = c(10, 5), yoffset = NULL, diagW = FALSE, iters.diagW = 6) lms.yjn2(percentiles = c(25, 50, 75), zero = c("lambda", "sigma"), llambda = "identitylink", lmu = "identitylink", lsigma = "loge", idf.mu = 4, idf.sigma = 2, ilambda = 1.0, isigma = NULL, yoffset = NULL, nsimEIM = 250) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{percentiles}{ A numerical vector containing values between 0 and 100, which are the quantiles. They will be returned as `fitted values'. } \item{zero}{ See \code{\link{lms.bcn}}. } \item{llambda, lmu, lsigma}{ See \code{\link{lms.bcn}}. } \item{idf.mu, idf.sigma}{ See \code{\link{lms.bcn}}. } \item{ilambda, isigma}{ See \code{\link{lms.bcn}}. } \item{rule}{ Number of abscissae used in the Gaussian integration scheme to work out elements of the weight matrices. The values given are the possible choices, with the first value being the default. The larger the value, the more accurate the approximation is likely to be but involving more computational expense. } \item{yoffset}{ A value to be added to the response y, for the purpose of centering the response before fitting the model to the data. The default value, \code{NULL}, means \code{-median(y)} is used, so that the response actually used has median zero. The \code{yoffset} is saved on the object and used during prediction. } \item{diagW}{ Logical. This argument is offered because the expected information matrix may not be positive-definite. Using the diagonal elements of this matrix results in a higher chance of it being positive-definite, however convergence will be very slow. If \code{TRUE}, then the first \code{iters.diagW} iterations will use the diagonal of the expected information matrix. The default is \code{FALSE}, meaning faster convergence. } \item{iters.diagW}{ Integer. Number of iterations in which the diagonal elements of the expected information matrix are used. Only used if \code{diagW = TRUE}. } \item{nsimEIM}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ Given a value of the covariate, this function applies a Yeo-Johnson transformation to the response to best obtain normality. The parameters chosen to do this are estimated by maximum likelihood or penalized maximum likelihood. The function \code{lms.yjn2()} estimates the expected information matrices using simulation (and is consequently slower) while \code{lms.yjn()} uses numerical integration. Try the other if one function fails. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). 
The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Yeo, I.-K. and Johnson, R. A. (2000) A new family of power transformations to improve normality or symmetry. \emph{Biometrika}, \bold{87}, 954--959. Yee, T. W. (2004) Quantile regression via vector generalized additive models. \emph{Statistics in Medicine}, \bold{23}, 2295--2315. Yee, T. W. (2002) An Implementation for Regression Quantile Estimation. Pages 3--14. In: Haerdle, W. and Ronz, B., \emph{Proceedings in Computational Statistics COMPSTAT 2002}. Heidelberg: Physica-Verlag. %Documentation accompanying the \pkg{VGAM} package at %\url{http://www.stat.auckland.ac.nz/~yee} %contains further information and examples. } \author{ Thomas W. Yee } \note{ The response may contain both positive and negative values. In contrast, the LMS-Box-Cox-normal and LMS-Box-Cox-gamma methods only handle a positive response because the Box-Cox transformation cannot handle negative values. Some other notes can be found at \code{\link{lms.bcn}}. } \section{Warning }{ The computations are not simple; therefore, convergence may fail. In that case, try different starting values. The generic function \code{predict}, when applied to a \code{lms.yjn} fit, does not add back the \code{yoffset} value. As described above, this family function is experimental and the LMS-BCN family function is recommended instead. } \seealso{ \code{\link{lms.bcn}}, \code{\link{lms.bcg}}, \code{\link{qtplot.lmscreg}}, \code{\link{deplot.lmscreg}}, \code{\link{cdf.lmscreg}}, \code{\link{bmi.nz}}, \code{\link{amlnormal}}. } \examples{ fit <- vgam(BMI ~ s(age, df = 4), lms.yjn, bmi.nz, trace = TRUE) head(predict(fit)) head(fitted(fit)) head(bmi.nz) # Person 1 is near the lower quartile of BMI amongst people his age head(cdf(fit)) \dontrun{ # Quantile plot par(bty = "l", mar = c(5, 4, 4, 3) + 0.1, xpd = TRUE) qtplot(fit, percentiles = c(5, 50, 90, 99), main = "Quantiles", xlim = c(15, 90), las = 1, ylab = "BMI", lwd = 2, lcol = 4) # Density plot ygrid <- seq(15, 43, len = 100) # BMI ranges par(mfrow = c(1, 1), lwd = 2) (aa <- deplot(fit, x0 = 20, y = ygrid, xlab = "BMI", col = "black", main = "Density functions at Age = 20 (black), 42 (red) and 55 (blue)")) aa <- deplot(fit, x0 = 42, y = ygrid, add = TRUE, llty = 2, col = "red") aa <- deplot(fit, x0 = 55, y = ygrid, add = TRUE, llty = 4, col = "blue", Attach = TRUE) with(aa@post, deplot) # Contains density function values; == a@post$deplot } } \keyword{models} \keyword{regression} VGAM/man/meplot.Rd0000644000176200001440000001000613135276753013357 0ustar liggesusers\name{meplot} \alias{meplot} \alias{meplot.default} \alias{meplot.vlm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Mean Excess Plot } \description{ Mean excess plot (also known as a mean residual life plot), a diagnostic plot for the generalized Pareto distribution (GPD). } \usage{ meplot(object, ...) meplot.default(y, main = "Mean Excess Plot", xlab = "Threshold", ylab = "Mean Excess", lty = c(2, 1:2), conf = 0.95, col = c("blue", "black", "blue"), type = "l", ...) meplot.vlm(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{y}{ A numerical vector. \code{NA}s etc. are not allowed.} \item{main, xlab, ylab}{Character. Overall title for the plot, and titles for the x- and y-axes. } \item{lty}{Line type. The second value is for the mean excess value; the first and third values are for the envelope surrounding the confidence interval.
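For instance, \code{lty = c(3, 1, 3)} (a hypothetical alternative to the default) would draw dotted envelope lines and a solid mean excess curve.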
} \item{conf}{Confidence level. The default results in approximate 95 percent confidence intervals for each mean excess value. } \item{col}{Colour of the three lines. } \item{type}{Type of plot. The default means lines are joined between the mean excesses and also the upper and lower limits of the confidence intervals. } \item{object}{ An object that inherits class \code{"vlm"}, usually of class \code{\link{vglm-class}} or \code{\link{vgam-class}}. } \item{\dots}{ Graphical argument passed into \code{\link[graphics]{plot}}. See \code{\link[graphics]{par}} for an exhaustive list. The arguments \code{xlim} and \code{ylim} are particularly useful. } } \details{ If \eqn{Y} has a GPD with scale parameter \eqn{\sigma}{sigma} and shape parameter \eqn{\xi<1}{xi<1}, and if \eqn{u>0}, then \deqn{E(Y-u|Y>u) = \frac{\sigma+\xi u}{1-\xi}.}{% E(Y-u|Y>u) = (sigma + xi*u) / (1 - xi).} It is a linear function in \eqn{u}, the threshold. Note that \eqn{Y-u} is called the \emph{excess} and values of \eqn{Y} greater than \eqn{u} are called \emph{exceedances}. The empirical version used by these functions estimates the left-hand side of the equation by sample means. Values of \eqn{u} in the plot are the values of \eqn{y} itself. If the plot is roughly a straight line then the GPD is a good fit; this plot can be used to select an appropriate threshold value. See \code{\link{gpd}} for more details. If the plot is flat then the data may be exponential, and if it is curved then it may be Weibull or gamma. There is often a lot of variance/fluctuation at the RHS of the plot due to fewer observations. The function \code{meplot} is generic, and \code{meplot.default} and \code{meplot.vlm} are method functions for mean excess plots. } \value{ A list is returned invisibly with the following components. \item{threshold }{The x axis values. } \item{meanExcess }{The y axis values. Each value is a sample mean minus a value \eqn{u}. } \item{plusminus }{The amount which is added or subtracted from the mean excess to give the confidence interval. The last value is a \code{NA} because it is based on one observation. } } \references{ Davison, A. C. and Smith, R. L. (1990) Models for exceedances over high thresholds (with discussion). \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{52}, 393--442. Coles, S. (2001) \emph{An Introduction to Statistical Modeling of Extreme Values}. London: Springer-Verlag. } \author{ T. W. Yee } \note{ The function is designed for speed rather than accuracy; therefore, huge data sets with extremely large values may cause failure (the function \code{\link[base]{cumsum}} is used). Ties may not be well handled. } \seealso{ \code{\link{gpd}}. } \examples{ \dontrun{meplot(with(venice90, sealevel), las = 1) -> ii names(ii) abline(h = ii$meanExcess[1], col = "orange", lty = "dashed") par(mfrow = c(2, 2)) for (ii in 1:4) meplot(rgpd(1000), col = c("orange", "blue", "orange")) } } \keyword{models} \keyword{regression} VGAM/man/posbernUC.Rd0000644000176200001440000001155313135276753013767 0ustar liggesusers\name{posbernUC} \alias{posbernUC} \alias{dposbern} \alias{rposbern} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive Bernoulli Sequence Model } \description{ Density and random generation for multiple Bernoulli responses where each row in the response matrix has at least one success.
} \usage{ rposbern(n, nTimePts = 5, pvars = length(xcoeff), xcoeff = c(-2, 1, 2), Xmatrix = NULL, cap.effect = 1, is.popn = FALSE, link = "logit", earg.link = FALSE) dposbern(x, prob, prob0 = prob, log = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ response vector or matrix. Should only have 0 and 1 values, at least two columns, and each row should have at least one 1. } \item{nTimePts}{Number of sampling occasions. Called \eqn{\tau} in \code{\link{posbernoulli.b}} and \code{\link{posbernoulli.t}}. } \item{n}{number of observations. Usually a single positive integer, else the length of the vector is used. See argument \code{is.popn}. } \item{is.popn}{ Logical. If \code{TRUE} then argument \code{n} is the population size and what is returned may have substantially fewer rows than \code{n}. That is, if an animal has at least one 1 in its capture history then it is returned, else that animal is not returned because it was never captured. % Put in other words, only animals captured at least once are % returned in the sample. } \item{Xmatrix}{ Optional \bold{X} matrix. If given, the \bold{X} matrix is not generated internally. } \item{cap.effect}{ Numeric, the capture effect. Added to the linear predictor if captured previously. A positive or negative value corresponds to a trap-happy and trap-shy effect respectively. } % \item{double.ch}{ % Logical. % If \code{TRUE} then the values of \code{ch0}, \code{ch1}, \ldots are % 2 or 0, else 1 or 0. % Setting this argument \code{TRUE} means that a model can be fitted % with half the capture history in both denominator and numerator % (this is a compromise of the Huggins (1991) model where the full % capture history only appears in the numerator). % } \item{pvars}{ Number of other numeric covariates that make up the linear predictor. Labelled \code{x1}, \code{x2}, \ldots, where the first is an intercept, and the others are independent standard \code{\link[stats:Uniform]{runif}} random variates. The first \code{pvars} elements of \code{xcoeff} are used. } \item{xcoeff}{ The regression coefficients of the linear predictor. These correspond to \code{x1}, \code{x2}, \ldots, and the first is for the intercept. The length of \code{xcoeff} must be at least \code{pvars}. } \item{link, earg.link}{ The former is used to generate the probabilities for capture at each occasion. Other details at \code{\link{CommonVGAMffArguments}}. } \item{prob, prob0}{ Matrix of probabilities for the numerator and denominator respectively. The default does \emph{not} correspond to the \eqn{M_b} model since the \eqn{M_b} model has a denominator which involves the capture history. } \item{log}{ Logical. Return the logarithm of the answer? } } \details{ The form of the conditional likelihood is described in \code{\link{posbernoulli.b}} and/or \code{\link{posbernoulli.t}} and/or \code{\link{posbernoulli.tb}}. The denominator is equally shared among the elements of the matrix \code{x}. } \value{ \code{rposbern} returns a data frame with some attributes. The function generates random deviates (\eqn{\tau} columns labelled \code{y1}, \code{y2}, \ldots) for the response. Some indicator columns are also included (those starting with \code{ch} are for previous capture history). The default setting corresponds to an \eqn{M_{bh}} model that has a single trap-happy effect. Covariates \code{x1}, \code{x2}, \ldots have the same effect on capture/recapture at every sampling occasion (see the argument \code{parallel.t} in, e.g., \code{\link{posbernoulli.tb}}).
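As a hedged illustration, \code{head(rposbern(n = 10))} should display the response columns \code{y1}, \ldots, \code{y5} together with the \code{ch}-prefixed capture-history indicators just described.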
% and these are useful for the \code{xij} argument. The function \code{dposbern} gives the density. } %\references{ } \author{ Thomas W. Yee. } \note{ The \code{r}-type function is experimental only and does not follow the usual conventions of \code{r}-type R functions. It may change a lot in the future. The \code{d}-type function is more conventional and is less likely to change. } \seealso{ \code{\link{posbernoulli.tb}}, \code{\link{posbernoulli.b}}, \code{\link{posbernoulli.t}}. % \code{\link{huggins91}}, } \examples{ rposbern(n = 10) attributes(pdata <- rposbern(n = 100)) M.bh <- vglm(cbind(y1, y2, y3, y4, y5) ~ x2 + x3, posbernoulli.b(I2 = FALSE), data = pdata, trace = TRUE) constraints(M.bh) summary(M.bh) } \keyword{distribution} \keyword{datagen} %double.ch = FALSE, % and those starting with \code{z} are zero. VGAM/man/betaprime.Rd0000644000176200001440000000703213135276753014034 0ustar liggesusers\name{betaprime} \alias{betaprime} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Beta-Prime Distribution } \description{ Estimation of the two shape parameters of the beta-prime distribution by maximum likelihood. } \usage{ betaprime(lshape = "loge", ishape1 = 2, ishape2 = NULL, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape}{ Parameter link function applied to the two (positive) shape parameters. See \code{\link{Links}} for more choices. } \item{ishape1, ishape2, zero}{ See \code{\link{CommonVGAMffArguments}}. % Initial values for the first and second shape parameters. % A \code{NULL} value means it is obtained in the \code{initialize} slot. % Note that \code{ishape2} is obtained using \code{ishape1}. } % \item{zero}{ % An integer-valued vector specifying which linear/additive predictors % are modelled as intercepts only. The value must be from the set % \{1,2\} corresponding respectively to \code{shape1} and \code{shape2} % respectively. If \code{zero=NULL} then both parameters are modelled % with the explanatory variables. % } } %% what is the mean if shape2 < 1? \details{ The beta-prime distribution is given by \deqn{f(y) = y^{shape1-1} (1+y)^{-shape1-shape2} / B(shape1,shape2)}{% f(y) = y^(shape1-1) * (1+y)^(-shape1-shape2) / B(shape1,shape2) } for \eqn{y > 0}. The shape parameters are positive, and here, \eqn{B} is the beta function. The mean of \eqn{Y} is \eqn{shape1 / (shape2-1)} provided \eqn{shape2>1}; these are returned as the fitted values. If \eqn{Y} has a \eqn{Beta(shape1,shape2)} distribution then \eqn{Y/(1-Y)} and \eqn{(1-Y)/Y} have a \eqn{Betaprime(shape1,shape2)} and \eqn{Betaprime(shape2,shape1)} distribution respectively. Also, if \eqn{Y_1}{Y1} has a \eqn{gamma(shape1)} distribution and \eqn{Y_2}{Y2} has a \eqn{gamma(shape2)} distribution then \eqn{Y_1/Y_2}{Y1/Y2} has a \eqn{Betaprime(shape1,shape2)} distribution. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } %% zz not sure about the JKB reference. \references{ Johnson, N. L., Kotz, S. and Balakrishnan, N. (1995) Chapter 25 of: \emph{Continuous Univariate Distributions}, 2nd edition, Volume 2, New York: Wiley. %Documentation accompanying the \pkg{VGAM} package at %\url{https://www.stat.auckland.ac.nz/~yee} %contains further information and examples. } \author{ Thomas W. Yee } \note{ The response must have positive values only.
The beta-prime distribution is also known as the \emph{beta distribution of the second kind} or the \emph{inverted beta distribution}. } \seealso{ \code{\link{betaff}}, \code{\link[stats]{Beta}}. } \examples{ nn <- 1000 bdata <- data.frame(shape1 = exp(1), shape2 = exp(3)) bdata <- transform(bdata, yb = rbeta(nn, shape1, shape2)) bdata <- transform(bdata, y1 = (1-yb) / yb, y2 = yb / (1-yb), y3 = rgamma(nn, exp(3)) / rgamma(nn, exp(2))) fit1 <- vglm(y1 ~ 1, betaprime, data = bdata, trace = TRUE) coef(fit1, matrix = TRUE) fit2 <- vglm(y2 ~ 1, betaprime, data = bdata, trace = TRUE) coef(fit2, matrix = TRUE) fit3 <- vglm(y3 ~ 1, betaprime, data = bdata, trace = TRUE) coef(fit3, matrix = TRUE) # Compare the fitted values with(bdata, mean(y3)) head(fitted(fit3)) Coef(fit3) # Useful for intercept-only models } \keyword{models} \keyword{regression} VGAM/man/zetaff.Rd0000644000176200001440000000576513135276753013352 0ustar liggesusers\name{zetaff} \alias{zetaff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zeta Distribution Family Function } \description{ Estimates the parameter of the zeta distribution. } \usage{ zetaff(lshape = "loge", ishape = NULL, gshape = exp(-3:4)/4, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape, ishape, zero}{ These arguments apply to the (positive) parameter \eqn{p}. See \code{\link{Links}} for more choices. Choosing \code{\link{loglog}} constrains \eqn{p>1}, but may fail if the maximum likelihood estimate is less than one. See \code{\link{CommonVGAMffArguments}} for more information. } \item{gshape}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ In this long-tailed distribution the response must be a positive integer. The probability function for a response \eqn{Y} is \deqn{P(Y=y) = 1/[y^{p+1} \zeta(p+1)],\ \ \ p>0,\ \ \ y=1,2,...}{% P(Y=y) = 1/(y^(p+1) zeta(p+1)), p>0, y=1,2,...} where \eqn{\zeta}{zeta} is Riemann's zeta function. The parameter \eqn{p} is positive; therefore, a log link is the default. The mean of \eqn{Y} is \eqn{\mu = \zeta(p) / \zeta(p+1)}{mu = zeta(p)/zeta(p+1)} (provided \eqn{p>1}) and these are the fitted values. The variance of \eqn{Y} is \eqn{\zeta(p-1) / \zeta(p+1) - \mu^2}{zeta(p-1) / zeta(p+1) - mu^2} provided \eqn{p>2}. It appears that good initial values are needed for successful convergence. If convergence is not obtained, try several values ranging from values near 0 to values about 10 or more. Multiple responses are handled. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } %Lindsey, J. K. (1995) %\emph{Modelling Frequency and Count Data}. %Oxford: Clarendon Press. \references{ pp.527-- of Chapter 11 of Johnson N. L., Kemp, A. W. and Kotz S. (2005) \emph{Univariate Discrete Distributions}, 3rd edition, Hoboken, New Jersey: Wiley. Knight, K. (2000) \emph{Mathematical Statistics}. Boca Raton: Chapman & Hall/CRC Press. } \author{ T. W. Yee } \note{ The \code{\link{zeta}} function may be used to compute values of the zeta function. } \seealso{ \code{\link{zeta}}, \code{\link{oazeta}}, \code{\link{oizeta}}, \code{\link{otzeta}}, \code{\link{diffzeta}}, \code{\link{dzeta}}, \code{\link{hzeta}}, \code{\link{zipf}}.
} \examples{ zdata <- data.frame(y = 1:5, w = c(63, 14, 5, 1, 2)) # Knight, p.304 fit <- vglm(y ~ 1, zetaff, data = zdata, trace = TRUE, weight = w, crit = "c") (phat <- Coef(fit)) # 1.682557 with(zdata, cbind(round(dzeta(y, phat) * sum(w), 1), w)) with(zdata, weighted.mean(y, w)) fitted(fit, matrix = FALSE) predict(fit) # The following should be zero at the MLE: with(zdata, mean(log(rep(y, w))) + zeta(1+phat, deriv = 1) / zeta(1+phat)) } \keyword{models} \keyword{regression} % Also known as the Joos model or discrete Pareto distribution. VGAM/man/dirmul.old.Rd0000644000176200001440000001223113135276753014132 0ustar liggesusers\name{dirmul.old} \alias{dirmul.old} %- Also NEED an '\alias' for EACH other topic documented here. \title{Fitting a Dirichlet-Multinomial Distribution } \description{ Fits a Dirichlet-multinomial distribution to a matrix of non-negative integers. } \usage{ dirmul.old(link = "loge", ialpha = 0.01, parallel = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to each of the \eqn{M} (positive) shape parameters \eqn{\alpha_j}{alpha_j} for \eqn{j=1,\ldots,M}. See \code{\link{Links}} for more choices. Here, \eqn{M} is the number of columns of the response matrix. } \item{ialpha}{ Numeric vector. Initial values for the \code{alpha} vector. Must be positive. Recycled to length \eqn{M}. } \item{parallel}{ A logical, or formula specifying which terms have equal/unequal coefficients. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2,\ldots,\eqn{M}\}. } } % formula is p.49 of Lange 2002. \details{ The Dirichlet-multinomial distribution, which is somewhat similar to a Dirichlet distribution, has probability function \deqn{P(Y_1=y_1,\ldots,Y_M=y_M) = {2y_{*} \choose {y_1,\ldots,y_M}} \frac{\Gamma(\alpha_{+})}{\Gamma(2y_{*}+\alpha_{+})} \prod_{j=1}^M \frac{\Gamma(y_j+\alpha_{j})}{\Gamma(\alpha_{j})}}{% P(Y_1=y_1,\ldots,Y_M=y_M) = C_{y_1,\ldots,y_M}^{2y_{*}} Gamma(alpha_+) / Gamma( 2y_* + alpha_+) prod_{j=1}^M [ Gamma( y_j+ alpha_j) / Gamma( alpha_j)]} for \eqn{\alpha_j > 0}{alpha_j > 0}, \eqn{\alpha_+ = \alpha_1 + \cdots + \alpha_M}{alpha_+ = alpha_1 + \cdots + alpha_M}, and \eqn{2y_{*} = y_1 + \cdots + y_M}{2y_* = y_1 + \cdots + y_M}. Here, \eqn{a \choose b}{C_b^a} means ``\eqn{a} choose \eqn{b}'' and refers to combinations (see \code{\link[base]{choose}}). The (posterior) mean is \deqn{E(Y_j) = (y_j + \alpha_j) / (2y_{*} + \alpha_{+})}{% E(Y_j) = (y_j + alpha_j) / (2y_{*} + alpha_+)} for \eqn{j=1,\ldots,M}{j=1,\ldots,M}, and these are returned as the fitted values as a \eqn{M}-column matrix. % One situation that arises for the Dirichlet-multinomial distribution % is a locus with M codominant alleles. If in a sample of y_* people, % allele i appears y_j times, then the maximum likelihood estimate of % the ith allele frequency is y_j / (2y_*). } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Lange, K. (2002) \emph{Mathematical and Statistical Methods for Genetic Analysis}, 2nd ed. New York: Springer-Verlag. Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011) \emph{Statistical Distributions}, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition. Paul, S. R., Balasooriya, U. and Banerjee, T. 
(2005) Fisher information matrix of the Dirichlet-multinomial distribution. \emph{Biometrical Journal}, \bold{47}, 230--236. Tvedebrink, T. (2010) Overdispersion in allelic counts and \eqn{\theta}-correction in forensic genetics. \emph{Theoretical Population Biology}, \bold{78}, 200--210. %Documentation accompanying the \pkg{VGAM} package at %\url{http://www.stat.auckland.ac.nz/~yee} %contains further information and examples. } \author{ Thomas W. Yee } \note{ The response should be a matrix of non-negative values. Convergence seems to slow down if there are zero values. Currently, initial values can be improved upon. This function is almost defunct and may be withdrawn soon. Use \code{\link{dirmultinomial}} instead. } \seealso{ \code{\link{dirmultinomial}}, \code{\link{dirichlet}}, \code{\link{betabinomialff}}, \code{\link{multinomial}}. } \examples{ # Data from p.50 of Lange (2002) alleleCounts <- c(2, 84, 59, 41, 53, 131, 2, 0, 0, 50, 137, 78, 54, 51, 0, 0, 0, 80, 128, 26, 55, 95, 0, 0, 0, 16, 40, 8, 68, 14, 7, 1) dim(alleleCounts) <- c(8, 4) alleleCounts <- data.frame(t(alleleCounts)) dimnames(alleleCounts) <- list(c("White","Black","Chicano","Asian"), paste("Allele", 5:12, sep = "")) set.seed(123) # @initialize uses random numbers fit <- vglm(cbind(Allele5,Allele6,Allele7,Allele8,Allele9, Allele10,Allele11,Allele12) ~ 1, dirmul.old, trace = TRUE, crit = "c", data = alleleCounts) (sfit <- summary(fit)) vcov(sfit) round(eta2theta(coef(fit), fit@misc$link, fit@misc$earg), digits = 2) # not preferred round(Coef(fit), digits = 2) # preferred round(t(fitted(fit)), digits = 4) # 2nd row of Table 3.5 of Lange (2002) coef(fit, matrix = TRUE) pfit <- vglm(cbind(Allele5,Allele6,Allele7,Allele8,Allele9, Allele10,Allele11,Allele12) ~ 1, dirmul.old(parallel = TRUE), trace = TRUE, data = alleleCounts) round(eta2theta(coef(pfit, matrix = TRUE), pfit@misc$link, pfit@misc$earg), digits = 2) # 'Right' answer round(Coef(pfit), digits = 2) # 'Wrong' answer due to parallelism constraint } \keyword{models} \keyword{regression} VGAM/man/benfUC.Rd0000644000176200001440000000711713135276753013230 0ustar liggesusers\name{Benford} \alias{Benford} \alias{dbenf} \alias{pbenf} \alias{qbenf} \alias{rbenf} \title{ Benford's Distribution } \description{ Density, distribution function, quantile function, and random generation for Benford's distribution. } \usage{ dbenf(x, ndigits = 1, log = FALSE) pbenf(q, ndigits = 1, lower.tail = TRUE, log.p = FALSE) qbenf(p, ndigits = 1, lower.tail = TRUE, log.p = FALSE) rbenf(n, ndigits = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{ Vector of quantiles. See \code{ndigits}. } \item{p}{vector of probabilities.} \item{n}{number of observations. A single positive integer. Else if \code{length(n) > 1} then the length is taken to be the number required. } \item{ndigits}{ Number of leading digits, either 1 or 2. If 1 then the support of the distribution is \{1,\ldots,9\}, else \{10,\ldots,99\}. } \item{log, log.p}{ Logical. If \code{log.p = TRUE} then all probabilities \code{p} are given as \code{log(p)}. } \item{lower.tail}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \details{ Benford's Law (aka \emph{the significant-digit law}) is the empirical observation that in many naturally occurring tables of numerical data, the leading significant (nonzero) digit is not uniformly distributed in \eqn{\{1,2,\ldots,9\}}{1:9}.
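% A small sketch of the non-uniformity just described, using dbenf()
% from this file (digit 1 has probability log10(2), about 0.301,
% whereas a uniform distribution would give 1/9, about 0.111):
% cbind(benford = dbenf(1:9), uniform = rep(1/9, 9))
% sum(dbenf(1:9))  # Should be exactly 1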
Instead, the leading significant digit (\eqn{=D}, say) obeys the law \deqn{P(D=d) = \log_{10} \left( 1 + \frac1d \right)}{% P(D=d) = log10(1 + 1/d)} for \eqn{d=1,\ldots,9}. This means that the probability that the first significant digit is 1 is approximately \eqn{0.301}, etc. Benford's Law was apparently first discovered in 1881 by the astronomer/mathematician S. Newcomb. It arose from the observation that the pages of a book of logarithms were dirtiest at the beginning and progressively cleaner throughout. In 1938, a General Electric physicist called F. Benford rediscovered the law from the same observation. Over several years he collected data from sources as diverse as atomic weights, baseball statistics, numerical data from \emph{Reader's Digest}, and drainage areas of rivers. Applications of Benford's Law have been as diverse as fraud detection in accounting and the design of computers. } \value{ \code{dbenf} gives the density, \code{pbenf} gives the distribution function, \code{qbenf} gives the quantile function, and \code{rbenf} generates random deviates. } \references{ Benford, F. (1938) The Law of Anomalous Numbers. \emph{Proceedings of the American Philosophical Society}, \bold{78}, 551--572. Newcomb, S. (1881) Note on the Frequency of Use of the Different Digits in Natural Numbers. \emph{American Journal of Mathematics}, \bold{4}, 39--40. } \author{ T. W. Yee and Kai Huang } %\note{ % Currently only the leading digit is handled. % The first two leading digits would be the next simple extension. % %} %\seealso{ % \code{\link{logff}}. %} \examples{ dbenf(x <- c(0:10, NA, NaN, -Inf, Inf)) pbenf(x) \dontrun{ xx <- 1:9 barplot(dbenf(xx), col = "lightblue", las = 1, xlab = "Leading digit", ylab = "Probability", names.arg = as.character(xx), main = paste("Benford's distribution", sep = "")) hist(rbenf(n = 1000), border = "blue", prob = TRUE, main = "1000 random variates from Benford's distribution", xlab = "Leading digit", sub="Red is the true probability", breaks = 0:9 + 0.5, ylim = c(0, 0.35), xlim = c(0, 10.0)) lines(xx, dbenf(xx), col = "red", type = "h") points(xx, dbenf(xx), col = "red") } } \keyword{distribution} VGAM/man/biplackettcopUC.Rd0000644000176200001440000000336413135276753015144 0ustar liggesusers\name{Biplackett} \alias{Biplackett} \alias{dbiplackcop} \alias{pbiplackcop} \alias{rbiplackcop} \title{Plackett's Bivariate Copula } \description{ Density, distribution function, and random generation for the (one parameter) bivariate Plackett copula. %distribution. } \usage{ dbiplackcop(x1, x2, oratio, log = FALSE) pbiplackcop(q1, q2, oratio) rbiplackcop(n, oratio) } \arguments{ \item{x1, x2, q1, q2}{vector of quantiles.} \item{n}{number of observations. Same as in \code{\link[stats]{runif}}. } \item{oratio}{the positive odds ratio \eqn{\psi}{psi}.} \item{log}{ Logical. If \code{TRUE} then the logarithm is returned. } } \value{ \code{dbiplackcop} gives the density, \code{pbiplackcop} gives the distribution function, and \code{rbiplackcop} generates random deviates (a two-column matrix). } \references{ Mardia, K. V. (1967) Some contributions to contingency-type distributions. \emph{Biometrika}, \bold{54}, 235--249. } \author{ T. W. Yee } \details{ See \code{\link{biplackettcop}}, the \pkg{VGAM} family function for estimating the parameter by maximum likelihood estimation, for the formula of the cumulative distribution function and other details. } %\note{ %} \seealso{ \code{\link{biplackettcop}}, \code{\link{bifrankcop}}.
} \examples{ \dontrun{ N <- 101; oratio <- exp(1) x <- seq(0.0, 1.0, len = N) ox <- expand.grid(x, x) zedd <- dbiplackcop(ox[, 1], ox[, 2], oratio = oratio) contour(x, x, matrix(zedd, N, N), col = "blue") zedd <- pbiplackcop(ox[, 1], ox[, 2], oratio = oratio) contour(x, x, matrix(zedd, N, N), col = "blue") plot(rr <- rbiplackcop(n = 3000, oratio = oratio)) par(mfrow = c(1, 2)) hist(rr[, 1]) # Should be uniform hist(rr[, 2]) # Should be uniform } } \keyword{distribution} VGAM/man/normal.vcm.Rd0000644000176200001440000002210613135276753014137 0ustar liggesusers\name{normal.vcm} \alias{normal.vcm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Univariate Normal Distribution as a Varying-Coefficient Model } \description{ Maximum likelihood estimation of all the coefficients of a LM where each of the usual regression coefficients is modelled with other explanatory variables via parameter link functions. Thus this is a basic varying-coefficient model. } \usage{ normal.vcm(link.list = list("(Default)" = "identitylink"), earg.list = list("(Default)" = list()), lsd = "loge", lvar = "loge", esd = list(), evar = list(), var.arg = FALSE, imethod = 1, icoefficients = NULL, isd = NULL, zero = "sd", sd.inflation.factor = 2.5) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link.list, earg.list}{ Link functions and extra arguments applied to the coefficients of the LM, excluding the standard deviation/variance. See \code{\link{CommonVGAMffArguments}} for more information. The default is for an identity link to be applied to each of the regression coefficients. } \item{lsd, esd, lvar, evar}{ Link function and extra argument applied to the standard deviation/variance. See \code{\link{CommonVGAMffArguments}} for more information. Same as \code{\link{uninormal}}. } \item{icoefficients}{ Optional initial values for the coefficients. Recycled to length \eqn{M-1} (does not include the standard deviation/variance). Try using this argument if there is a link function that is not programmed explicitly to handle range restrictions in the \code{initialize} slot. } \item{var.arg, imethod, isd}{ Same as, or similar to, \code{\link{uninormal}}. } \item{zero}{ See \code{\link{CommonVGAMffArguments}} for more information. The default applies to the last one, viz. the standard deviation/variance parameter. } \item{sd.inflation.factor}{ Numeric, should be greater than 1. The initial value of the standard deviation is multiplied by this, unless \code{isd} is inputted. Experience has shown that it is safer to start off with a larger value rather than a smaller one. } } \details{ This function allows all the usual LM regression coefficients to be modelled as functions of other explanatory variables via parameter link functions. For example, we may want some of them to be positive. Or we may want a subset of them to be positive and add to unity. So a class of such models have been named \emph{varying-coefficient models} (VCMs). The usual linear model is specified through argument \code{form2}. As with all other \pkg{VGAM} family functions, the linear/additive predictors are specified through argument \code{formula}. The \code{\link{multilogit}} link allows a subset of the coefficients to be positive and add to unity. Either none or more than one call to \code{\link{multilogit}} is allowed. The last variable will be used as the baseline/reference group, and therefore excluded from the estimation. 
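% A minimal sketch of the multilogit() behaviour just described
% (made-up values; the last element acts as the baseline and is
% therefore not estimated):
% probs <- cbind(0.25, 0.25, 0.50)  # Positive and adding to unity
% (etas <- multilogit(probs))       # The two linear predictors
% multilogit(etas, inverse = TRUE)  # Recovers all three probabilities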
By default, the log of the standard deviation is the last linear/additive predictor. It is recommended that this parameter be estimated as intercept-only, for numerical stability. Technically, the Fisher information matrix is of unit-rank for all but the last parameter (the standard deviation/variance). Hence an approximation is used that pools over all the observations. This \pkg{VGAM} family function cannot handle multiple responses. Also, this function will probably not have the full capabilities of the class of varying-coefficient models as described by Hastie and Tibshirani (1993). However, it should be able to manage some simple models, especially involving the following links: \code{\link{identity}}, \code{\link{loge}}, \code{\link{logoff}}, \code{\link{loglog}}, \code{\link{logit}}, \code{\link{probit}}, \code{\link{cauchit}}, \code{\link{cloglog}}, \code{\link{rhobit}}, \code{\link{fisherz}}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Hastie, T. and Tibshirani, R. (1993) Varying-coefficient models. \emph{J. Roy. Statist. Soc. Ser. B}, \bold{55}, 757--796. } \author{ T. W. Yee } \section{Warning}{ This \pkg{VGAM} family function is fragile. One should monitor convergence, and possibly enter initial values, especially when there are non-\code{\link{identity}}-link functions. If the initial value of the standard deviation/variance is too small then numerical problems may occur. One trick is to fit an intercept-only model and feed its \code{predict()} output into argument \code{etastart} of a more complicated model. The use of the \code{zero} argument is recommended in order to keep models as simple as possible. % 20130730; No longer a bug: % Inference for an ordinary LM here differs from \code{\link[stats]{lm}}. % In particular, the SEs differ. } \note{ The standard deviation/variance parameter is best modelled as intercept-only. Yet to do: allow an argument such as \code{parallel} that enables many of the coefficients to be equal. Fix a bug: \code{Coef()} does not work for intercept-only models. } \seealso{ \code{\link{uninormal}}, \code{\link[stats:lm]{lm}}. % \code{link[locfit]{ethanol}}. } \examples{ ndata <- data.frame(x2 = runif(nn <- 2000)) # Note that coeff1 + coeff2 + coeff5 == 1. So try a "multilogit" link.
myoffset <- 10 ndata <- transform(ndata, coeff1 = 0.25, # "multilogit" link coeff2 = 0.25, # "multilogit" link coeff3 = exp(-0.5), # "loge" link coeff4 = logoff(+0.5, offset = myoffset, inverse = TRUE), # "logoff" link coeff5 = 0.50, # "multilogit" link coeff6 = 1.00, # "identitylink" link v2 = runif(nn), v3 = runif(nn), v4 = runif(nn), v5 = rnorm(nn), v6 = rnorm(nn)) ndata <- transform(ndata, Coeff1 = 0.25 - 0 * x2, Coeff2 = 0.25 - 0 * x2, Coeff3 = logit(-0.5 - 1 * x2, inverse = TRUE), Coeff4 = loglog( 0.5 - 1 * x2, inverse = TRUE), Coeff5 = 0.50 - 0 * x2, Coeff6 = 1.00 + 1 * x2) ndata <- transform(ndata, y1 = coeff1 * 1 + coeff2 * v2 + coeff3 * v3 + coeff4 * v4 + coeff5 * v5 + coeff6 * v6 + rnorm(nn, sd = exp(0)), y2 = Coeff1 * 1 + Coeff2 * v2 + Coeff3 * v3 + Coeff4 * v4 + Coeff5 * v5 + Coeff6 * v6 + rnorm(nn, sd = exp(0))) # An intercept-only model fit1 <- vglm(y1 ~ 1, form2 = ~ 1 + v2 + v3 + v4 + v5 + v6, normal.vcm(link.list = list("(Intercept)" = "multilogit", "v2" = "multilogit", "v3" = "loge", "v4" = "logoff", "(Default)" = "identitylink", "v5" = "multilogit"), earg.list = list("(Intercept)" = list(), "v2" = list(), "v4" = list(offset = myoffset), "v3" = list(), "(Default)" = list(), "v5" = list()), zero = c(1:2, 6)), data = ndata, trace = TRUE) coef(fit1, matrix = TRUE) summary(fit1) # This works only for intercept-only models: multilogit(rbind(coef(fit1, matrix = TRUE)[1, c(1, 2)]), inverse = TRUE) # A model with covariate x2 for the regression coefficients fit2 <- vglm(y2 ~ 1 + x2, form2 = ~ 1 + v2 + v3 + v4 + v5 + v6, normal.vcm(link.list = list("(Intercept)" = "multilogit", "v2" = "multilogit", "v3" = "logit", "v4" = "loglog", "(Default)" = "identitylink", "v5" = "multilogit"), earg.list = list("(Intercept)" = list(), "v2" = list(), "v3" = list(), "v4" = list(), "(Default)" = list(), "v5" = list()), zero = c(1:2, 6)), data = ndata, trace = TRUE) coef(fit2, matrix = TRUE) summary(fit2) } \keyword{models} \keyword{regression} VGAM/man/explogff.Rd0000644000176200001440000000461213135276753013677 0ustar liggesusers\name{explogff} \alias{explogff} %- Also NEED an '\alias' for EACH other topic documented here. \title{Exponential Logarithmic Distribution Family Function} \description{ Estimates the two parameters of the exponential logarithmic distribution by maximum likelihood estimation. } \usage{ explogff(lscale = "loge", lshape = "logit", iscale = NULL, ishape = NULL, tol12 = 1e-05, zero = 1, nsimEIM = 400) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lscale, lshape}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{tol12}{ Numeric. Tolerance for testing whether a parameter has value 1 or 2. } \item{iscale, ishape, zero, nsimEIM}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ The exponential logarithmic distribution has density function \deqn{f(y; c, s) = (1/(-\log s )) (((1/c) (1 - s) e^{-y/c}) / (1 - (1 - s) e^{-y/c}))}{% (1/(-log(s))) * (((1/c) * (1 - s) * e^(-y/c)) / (1 - (1 - s) * e^(-y/c)))} where \eqn{y > 0}, scale parameter \eqn{c > 0}, and shape parameter \eqn{s \in (0, 1)}{0 < s < 1}. The mean, \eqn{(-polylog(2, 1 - s) c) / \log(s)}{(-polylog(2, 1 - s) * c) / log(s)} is \emph{not} returned as the fitted values. Note that the median is \eqn{c \log(1 + \sqrt{s})}{c * log(1 + sqrt(s))} and it is \emph{currently} returned as the fitted values. Simulated Fisher scoring is implemented. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
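% A numerical check of the median formula given above (a sketch; it
% assumes the qexplog() parameterization of scale and shape in this
% package):
% Scale <- exp(1); shape <- 0.25
% c(qexplog(0.5, scale = Scale, shape = shape),
%   Scale * log(1 + sqrt(shape)))  # The two values should agree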
The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Tahmasbi, R. and Rezaei, S. (2008). A two-parameter lifetime distribution with decreasing failure rate. \emph{Computational Statistics and Data Analysis}, \bold{52}, 3889--3901. } \author{ J. G. Lauder and T. W. Yee } \note{ We define \code{scale} as the reciprocal of the rate parameter used by Tahmasbi and Rezaei (2008). Yet to do: find a \code{polylog()} function. } \seealso{ \code{\link{dexplog}}, \code{\link{exponential}}. } \examples{ \dontrun{ Scale <- exp(2); shape <- logit(-1, inverse = TRUE) edata <- data.frame(y = rexplog(n = 2000, scale = Scale, shape = shape)) fit <- vglm(y ~ 1, explogff, data = edata, trace = TRUE) c(with(edata, median(y)), head(fitted(fit), 1)) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } } \keyword{models} \keyword{regression} VGAM/man/get.smart.Rd0000644000176200001440000000257113135276753013773 0ustar liggesusers\name{get.smart} \alias{get.smart} \title{ Retrieve One Component of ``.smart.prediction'' } \description{ Retrieve one component of the list \code{.smart.prediction} from \code{smartpredenv}. } \usage{ get.smart() } \value{ Returns one list component of \code{.smart.prediction} from \code{smartpredenv}, in fact, \code{.smart.prediction[[.smart.prediction.counter]]}. The whole procedure mimics a first-in first-out structure (better known as a \emph{queue}). } \section{Side Effects}{ The variable \code{.smart.prediction.counter} in \code{smartpredenv} is incremented beforehand, and then written back to \code{smartpredenv}. } \details{ \code{get.smart} is used in \code{"read"} mode within a smart function: it retrieves parameters saved at the time of fitting, and is used for prediction. \code{get.smart} is only used in smart functions such as \code{\link[VGAM]{sm.poly}}; \code{get.smart.prediction} is only used in modelling functions such as \code{\link[stats]{lm}} and \code{\link[stats]{glm}}. The function \code{\link{get.smart}} gets only a part of \code{.smart.prediction} whereas \code{\link{get.smart.prediction}} gets the entire \code{.smart.prediction}. } \seealso{ \code{\link{get.smart.prediction}}. } \examples{ print(sm.min1) } %\keyword{smart} \keyword{models} \keyword{regression} \keyword{programming} % Converted by Sd2Rd version 1.10. VGAM/man/Select.Rd0000644000176200001440000001536613135276753013304 0ustar liggesusers\name{Select} \alias{Select} \alias{subsetcol} % \alias{subsetc} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Select Variables for a Formula Response or the RHS of a Formula %% ~~function to do ... ~~ } \description{ Select variables from a data frame whose names begin with a certain character string. %% ~~ A concise (1-5 lines) description of what the function does. ~~ } \usage{ Select(data = list(), prefix = "y", lhs = NULL, rhs = NULL, rhs2 = NULL, rhs3 = NULL, as.character = FALSE, as.formula.arg = FALSE, tilde = TRUE, exclude = NULL, sort.arg = TRUE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{data}{ A data frame or a matrix. %% ~~Describe \code{data} here~~ } \item{prefix}{ A vector of character strings, or a logical. If a character then the variables chosen from \code{data} begin with the value of \code{prefix}. If a logical then only \code{TRUE} is accepted and all the variables in \code{data} are chosen. %% ~~Describe \code{prefix} here~~ } \item{lhs}{ A character string. The response of a formula.
%% ~~Describe \code{lhs} here~~ } \item{rhs}{ A character string. Included as part of the RHS of a formula. Set \code{rhs = "0"} to suppress the intercept. %% ~~Describe \code{rhs} here~~ } \item{rhs2, rhs3}{ Same as \code{rhs} but appended to its RHS, i.e., \code{paste0(rhs, " + ", rhs2, " + ", rhs3)}. If used, \code{rhs} should be used first, and then possibly \code{rhs2} and then possibly \code{rhs3}. %% ~~Describe \code{rhs} here~~ } \item{as.character}{ Logical. Return the answer as a character string? %% ~~Describe \code{as.character} here~~ } \item{as.formula.arg}{ Logical. Is the answer a formula? %% ~~Describe \code{as.formula.arg} here~~ } \item{tilde}{ Logical. If \code{as.character} and \code{as.formula.arg} are both \code{TRUE} then include the tilde in the formula? } \item{exclude}{ Vector of character strings. Exclude these variables explicitly. %% ~~Describe \code{exclude} here~~ } \item{sort.arg}{ Logical. Sort the variables? %% ~~Describe \code{sort.arg} here~~ } } \details{ This is meant as a utility function to avoid manually: (i) making a \code{\link[base]{cbind}} call to construct a big matrix response, and (ii) constructing a formula involving a lot of terms. The savings can be made because the variables of interest begin with some prefix, e.g., with the character \code{"y"}. } \value{ If \code{as.character = FALSE} and \code{as.formula.arg = FALSE} then a matrix such as \code{cbind(y1, y2, y3)}. If \code{as.character = TRUE} and \code{as.formula.arg = FALSE} then a character string such as \code{"cbind(y1, y2, y3)"}. If \code{as.character = FALSE} and \code{as.formula.arg = TRUE} then a \code{\link[stats]{formula}} such as \code{lhs ~ y1 + y2 + y3}. If \code{as.character = TRUE} and \code{as.formula.arg = TRUE} then a character string such as \code{"lhs ~ y1 + y2 + y3"}. See the examples below. By default, if no variables beginning with the value of \code{prefix} are found then a \code{NULL} is returned. Setting \code{prefix = " "} is a way of selecting no variables. %% ~Describe the value returned %% If it is a LIST, use %% \item{comp1 }{Description of 'comp1'} %% \item{comp2 }{Description of 'comp2'} %% ... } %%\references{ %% ~put references to the literature/web site here ~ %%} \author{ T. W. Yee. %% ~~who you are~~ } \note{ This function is a bit experimental at this stage and may change in the near future. Some of its utility may be better achieved using \code{\link[base]{subset}} and its \code{select} argument, e.g., \code{subset(pdata, TRUE, select = y01:y10)}. For some models such as \code{\link{posbernoulli.t}} the order of the variables in the \code{xij} argument is crucial, therefore care must be taken with the argument \code{sort.arg}. In some instances, it may be good to rename variables \code{y1} to \code{y01}, \code{y2} to \code{y02}, etc. when there are variables such as \code{y14}. Currently \code{subsetcol()} and \code{Select()} are identical. One of these functions might be withdrawn in the future. %% ~~further notes~~ } %% ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link{vglm}}, \code{\link[base]{cbind}}, \code{\link[base]{subset}}, \code{\link[stats]{formula}}, \code{\link{fill}}.
%% ~~objects to See Also as \code{\link{help}}, ~~~ } \examples{ Pneumo <- pneumo colnames(Pneumo) <- c("y1", "y2", "y3", "x2") # The "y" variables are response Pneumo$x1 <- 1; Pneumo$x3 <- 3; Pneumo$x <- 0; Pneumo$x4 <- 4 # Add these Select(data = Pneumo) # Same as with(Pneumo, cbind(y1, y2, y3)) Select(Pneumo, "x") Select(Pneumo, "x", sort = FALSE, as.char = TRUE) Select(Pneumo, "x", exclude = "x1") Select(Pneumo, "x", exclude = "x1", as.char = TRUE) Select(Pneumo, c("x", "y")) Select(Pneumo, "z") # Now returns a NULL Select(Pneumo, " ") # Now returns a NULL Select(Pneumo, prefix = TRUE, as.formula = TRUE) Select(Pneumo, "x", exclude = c("x3", "x1"), as.formula = TRUE, lhs = "cbind(y1, y2, y3)", rhs = "0") Select(Pneumo, "x", exclude = "x1", as.formula = TRUE, as.char = TRUE, lhs = "cbind(y1, y2, y3)", rhs = "0") # Now a 'real' example: Huggins89table1 <- transform(Huggins89table1, x3.tij = t01) tab1 <- subset(Huggins89table1, rowSums(Select(Huggins89table1, "y")) > 0) # Same as # subset(Huggins89table1, y1 + y2 + y3 + y4 + y5 + y6 + y7 + y8 + y9 + y10 > 0) # Long way to do it: fit.th <- vglm(cbind(y01, y02, y03, y04, y05, y06, y07, y08, y09, y10) ~ x2 + x3.tij, xij = list(x3.tij ~ t01 + t02 + t03 + t04 + t05 + t06 + t07 + t08 + t09 + t10 - 1), posbernoulli.t(parallel.t = TRUE ~ x2 + x3.tij), data = tab1, trace = TRUE, form2 = ~ x2 + x3.tij + t01 + t02 + t03 + t04 + t05 + t06 + t07 + t08 + t09 + t10) # Short way to do it: Fit.th <- vglm(Select(tab1, "y") ~ x2 + x3.tij, xij = list(Select(tab1, "t", as.formula = TRUE, sort = FALSE, lhs = "x3.tij", rhs = "0")), posbernoulli.t(parallel.t = TRUE ~ x2 + x3.tij), data = tab1, trace = TRUE, form2 = Select(tab1, prefix = TRUE, as.formula = TRUE)) } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{models} \keyword{regression} % 20140524; For Fit.th before prefix = TRUE was allowed: % form2 = Select(tab1, "t", as.formula = TRUE, % rhs = "x2 + x3.tij")) % dim(subset(prinia, TRUE, select = grepl("^y", colnames(prinia)))) VGAM/man/vglm-class.Rd0000644000176200001440000001654313135276753014143 0ustar liggesusers\name{vglm-class} \docType{class} \alias{vglm-class} \title{Class ``vglm'' } \description{ Vector generalized linear models. } \section{Objects from the Class}{ Objects can be created by calls of the form \code{vglm(...)}. % ~~ describe objects here ~~ } \section{Slots}{ In the following, \eqn{M} is the number of linear predictors. \describe{ \item{\code{extra}:}{Object of class \code{"list"}; the \code{extra} argument on entry to \code{vglm}. This contains any extra information that might be needed by the family function. } \item{\code{family}:}{Object of class \code{"vglmff"}. The family function. } \item{\code{iter}:}{Object of class \code{"numeric"}. The number of IRLS iterations used. } \item{\code{predictors}:}{Object of class \code{"matrix"} with \eqn{M} columns which holds the \eqn{M} linear predictors. } \item{\code{assign}:}{Object of class \code{"list"}, from class \code{ "vlm"}. This named list gives information matching the columns and the (LM) model matrix terms. } \item{\code{call}:}{Object of class \code{"call"}, from class \code{ "vlm"}. The matched call. } \item{\code{coefficients}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. A named vector of coefficients. } \item{\code{constraints}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A named list of constraint matrices used in the fitting. 
} \item{\code{contrasts}:}{Object of class \code{"list"}, from class \code{ "vlm"}. The contrasts used (if any). } \item{\code{control}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A list of parameters for controlling the fitting process. See \code{\link{vglm.control}} for details. } \item{\code{criterion}:}{Object of class \code{"list"}, from class \code{ "vlm"}. List of convergence criterion evaluated at the final IRLS iteration. } \item{\code{df.residual}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. The residual degrees of freedom. } \item{\code{df.total}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. The total degrees of freedom. } \item{\code{dispersion}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. The scaling parameter. } \item{\code{effects}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. The effects. } \item{\code{fitted.values}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The fitted values. %This may be missing or consist entirely %of \code{NA}s, e.g., the Cauchy model. } \item{\code{misc}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A named list to hold miscellaneous parameters. } \item{\code{model}:}{Object of class \code{"data.frame"}, from class \code{ "vlm"}. The model frame. } \item{\code{na.action}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A list holding information about missing values. } \item{\code{offset}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. If non-zero, a \eqn{M}-column matrix of offsets. } \item{\code{post}:}{Object of class \code{"list"}, from class \code{ "vlm"} where post-analysis results may be put. } \item{\code{preplot}:}{Object of class \code{"list"}, from class \code{ "vlm"} used by \code{\link{plotvgam}}; the plotting parameters may be put here. } \item{\code{prior.weights}:}{Object of class \code{"matrix"}, from class \code{ "vlm"} holding the initially supplied weights. } \item{\code{qr}:}{Object of class \code{"list"}, from class \code{ "vlm"}. QR decomposition at the final iteration. } \item{\code{R}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The \bold{R} matrix in the QR decomposition used in the fitting. } \item{\code{rank}:}{Object of class \code{"integer"}, from class \code{ "vlm"}. Numerical rank of the fitted model. } \item{\code{residuals}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The \emph{working} residuals at the final IRLS iteration. } \item{\code{ResSS}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. Residual sum of squares at the final IRLS iteration with the adjusted dependent vectors and weight matrices. } \item{\code{smart.prediction}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A list of data-dependent parameters (if any) that are used by smart prediction. } \item{\code{terms}:}{Object of class \code{"list"}, from class \code{ "vlm"}. The \code{\link[stats]{terms}} object used. } \item{\code{weights}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The weight matrices at the final IRLS iteration. This is in matrix-band form. } \item{\code{x}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The model matrix (LM, not VGLM). } \item{\code{xlevels}:}{Object of class \code{"list"}, from class \code{ "vlm"}. The levels of the factors, if any, used in fitting. } \item{\code{y}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The response, in matrix form. 
} \item{\code{Xm2}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. See \code{\link{vglm-class}}). } \item{\code{Ym2}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. See \code{\link{vglm-class}}). } \item{\code{callXm2}:}{ Object of class \code{"call"}, from class \code{ "vlm"}. The matched call for argument \code{form2}. } } } \section{Extends}{ Class \code{"vlm"}, directly. } \section{Methods}{ \describe{ \item{cdf}{\code{signature(object = "vglm")}: cumulative distribution function. Applicable to, e.g., quantile regression and extreme value data models.} \item{deplot}{\code{signature(object = "vglm")}: Applicable to, e.g., quantile regression.} \item{deviance}{\code{signature(object = "vglm")}: deviance of the model (where applicable). } \item{plot}{\code{signature(x = "vglm")}: diagnostic plots. } \item{predict}{\code{signature(object = "vglm")}: extract the linear predictors or predict the linear predictors at a new data frame.} \item{print}{\code{signature(x = "vglm")}: short summary of the object. } \item{qtplot}{\code{signature(object = "vglm")}: quantile plot (only applicable to some models). } \item{resid}{\code{signature(object = "vglm")}: residuals. There are various types of these. } \item{residuals}{\code{signature(object = "vglm")}: residuals. Shorthand for \code{resid}. } \item{rlplot}{\code{signature(object = "vglm")}: return level plot. Useful for extreme value data models.} \item{summary}{\code{signature(object = "vglm")}: a more detailed summary of the object. } } } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. %\url{http://www.stat.auckland.ac.nz/~yee} } \author{ Thomas W. Yee } %\note{ ~~further notes~~ } %~Make other sections like WARNING with \section{WARNING }{....} ~ \seealso{ \code{\link{vglm}}, \code{\link{vglmff-class}}, \code{\link{vgam-class}}. } \examples{ # Multinomial logit model pneumo <- transform(pneumo, let = log(exposure.time)) vglm(cbind(normal, mild, severe) ~ let, multinomial, data = pneumo) } \keyword{classes} VGAM/man/genrayleigh.Rd0000644000176200001440000000465313135276753014370 0ustar liggesusers\name{genrayleigh} \alias{genrayleigh} %- Also NEED an '\alias' for EACH other topic documented here. \title{Generalized Rayleigh Distribution Family Function} \description{ Estimates the two parameters of the generalized Rayleigh distribution by maximum likelihood estimation. } \usage{ genrayleigh(lscale = "loge", lshape = "loge", iscale = NULL, ishape = NULL, tol12 = 1e-05, nsimEIM = 300, zero = 2) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lscale, lshape}{ Link function for the two positive parameters, scale and shape. See \code{\link{Links}} for more choices. } \item{iscale, ishape}{ Numeric. Optional initial values for the scale and shape parameters. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}}. } \item{tol12}{ Numeric and positive. Tolerance for testing whether the second shape parameter is either 1 or 2. If so then the working weights need to handle these singularities. 
} } \details{ The generalized Rayleigh distribution has density function \deqn{f(y;b = scale,s = shape) = (2 s y/b^{2}) e^{-(y/b)^{2}} (1 - e^{-(y/b)^{2}})^{s-1}}{% (2*s*y/b^2) * e^(-(y/b)^2) * (1 - e^(-(y/b)^2))^(s-1)} where \eqn{y > 0} and the two parameters, \eqn{b} and \eqn{s}, are positive. The mean cannot be expressed nicely so the median is returned as the fitted values. Applications of the generalized Rayleigh distribution include modeling strength data and general lifetime data. Simulated Fisher scoring is implemented. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Kundu, D., Raqab, M. C. (2005). Generalized Rayleigh distribution: different methods of estimations. \emph{Computational Statistics and Data Analysis}, \bold{49}, 187--200. } \author{ J. G. Lauder and T. W. Yee } \note{ We define \code{scale} as the reciprocal of the scale parameter used by Kundu and Raqab (2005). } \seealso{ \code{\link{dgenray}}, \code{\link{rayleigh}}. } \examples{ Scale <- exp(1); shape <- exp(1) rdata <- data.frame(y = rgenray(n = 1000, scale = Scale, shape = shape)) fit <- vglm(y ~ 1, genrayleigh, data = rdata, trace = TRUE) c(with(rdata, mean(y)), head(fitted(fit), 1)) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/auuc.Rd0000644000176200001440000000200013135276753013011 0ustar liggesusers\name{auuc} \alias{auuc} \docType{data} \title{ Auckland University Undergraduate Counts Data} \description{ Undergraduate student enrolments at the University of Auckland in 1990. } \usage{data(auuc)} \format{ A data frame with 4 observations on the following 5 variables. \describe{ \item{Commerce}{a numeric vector of counts.} \item{Arts}{a numeric vector of counts.} \item{SciEng}{a numeric vector of counts.} \item{Law}{a numeric vector of counts.} \item{Medicine}{a numeric vector of counts.} } } \details{ Each student is cross-classified by their colleges (Science and Engineering have been combined) and the socio-economic status (SES) of their fathers (1 = highest, down to 4 = lowest). } \source{ Dr Tony Morrison. } \references{ Wild, C. J. and Seber, G. A. F. (2000) \emph{Chance Encounters: A First Course in Data Analysis and Inference}, New York: Wiley. } \examples{ auuc \dontrun{ round(fitted(grc(auuc))) round(fitted(grc(auuc, Rank = 2))) } } \keyword{datasets} VGAM/man/logc.Rd0000644000176200001440000000365213135276753013004 0ustar liggesusers\name{logc} \alias{logc} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Complementary-log Link Function } \description{ Computes the Complementary-log Transformation, Including its Inverse and the First Two Derivatives. } \usage{ logc(theta, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{bvalue}{ See \code{\link{Links}}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The complementary-log link function is suitable for parameters that are less than unity. Numerical values of \code{theta} close to 1 or out of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. } \value{ For \code{deriv = 0}, the complementary log of \code{theta}, i.e., \code{log(1-theta)}, when \code{inverse = FALSE}, and \code{1-exp(theta)} when \code{inverse = TRUE}.
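% A round-trip sketch of the transformation just described:
% theta <- 0.3
% logc(theta)                        # log(1 - theta), i.e., log(0.7)
% logc(logc(theta), inverse = TRUE)  # Recovers theta = 0.3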
For \code{deriv = 1}, the function returns \emph{d} \code{eta} / \emph{d} \code{theta} as a function of \code{theta} if \code{inverse = FALSE}; if \code{inverse = TRUE} then it returns the reciprocal. Here, all logarithms are natural logarithms, i.e., to base \emph{e}. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ Numerical instability may occur when \code{theta} is close to 1. One way of overcoming this is to use \code{bvalue}. } \seealso{ \code{\link{Links}}, \code{\link{loge}}, \code{\link{cloglog}}, \code{\link{loglog}}, \code{\link{logoff}}. } \examples{ \dontrun{ logc(seq(-0.2, 1.1, by = 0.1)) # Has NAs } logc(seq(-0.2, 1.1, by = 0.1), bvalue = 1 - .Machine$double.eps) # Has no NAs } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/gammahyperbola.Rd0000644000176200001440000000532613135276753015060 0ustar liggesusers\name{gammahyperbola} \alias{gammahyperbola} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Gamma Hyperbola Bivariate Distribution } \description{ Estimate the parameter of a gamma hyperbola bivariate distribution by maximum likelihood estimation. } \usage{ gammahyperbola(ltheta = "loge", itheta = NULL, expected = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{ltheta}{ Link function applied to the (positive) parameter \eqn{\theta}{theta}. See \code{\link{Links}} for more choices. } \item{itheta}{ Initial value for the parameter. The default is to estimate it internally. } \item{expected}{ Logical. \code{FALSE} means the Newton-Raphson (using the observed information matrix) algorithm, otherwise the expected information matrix is used (Fisher scoring algorithm). } } \details{ The joint probability density function is given by \deqn{f(y_1,y_2) = \exp( -e^{-\theta} y_1 / \theta - \theta y_2 )}{% f(y1,y2) = exp( -exp(-theta) * y1 / theta - theta * y2) } for \eqn{\theta > 0}{theta > 0}, \eqn{y_1 > 0}{y1 > 0}, \eqn{y_2 > 1}{y2 > 1}. The random variables \eqn{Y_1}{Y1} and \eqn{Y_2}{Y2} are independent. The marginal distribution of \eqn{Y_1}{Y1} is an exponential distribution with rate parameter \eqn{\exp(-\theta)/\theta}{exp(-theta)/theta}. The marginal distribution of \eqn{Y_2}{Y2} is an exponential distribution that has been shifted to the right by 1 and has rate parameter \eqn{\theta}{theta}. The fitted values are stored in a two-column matrix with the marginal means, which are \eqn{\theta \exp(\theta)}{theta * exp(theta)} and \eqn{1 + 1/\theta}{1 + 1/theta}. The default algorithm is Newton-Raphson because Fisher scoring tends to be much slower for this distribution. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Reid, N. (2003) Asymptotics and the theory of inference. \emph{Annals of Statistics}, \bold{31}, 1695--1731. } \author{ T. W. Yee } \note{ The response must be a two-column matrix. } \seealso{ \code{\link{exponential}}.
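% A simulation sketch of the two marginal means given above
% (base R only):
% theta <- 0.5; nn <- 10000; set.seed(1)
% c(mean(rexp(nn, rate = exp(-theta)/theta)), theta * exp(theta))  # Agree approx.
% c(mean(rexp(nn, rate = theta) + 1), 1 + 1/theta)                 # Agree approx.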
} \examples{ gdata <- data.frame(x2 = runif(nn <- 1000)) gdata <- transform(gdata, theta = exp(-2 + x2)) gdata <- transform(gdata, y1 = rexp(nn, rate = exp(-theta)/theta), y2 = rexp(nn, rate = theta) + 1) fit <- vglm(cbind(y1, y2) ~ x2, gammahyperbola(expected = TRUE), data = gdata) coef(fit, matrix = TRUE) Coef(fit) head(fitted(fit)) summary(fit) } \keyword{models} \keyword{regression} % fit <- vglm(cbind(y1, y2) ~ x2, gammahyperbola, data = gdata, trace = TRUE, crit = "coef") VGAM/man/backPain.Rd0000644000176200001440000000263513135276753013600 0ustar liggesusers\name{backPain} \alias{backPain} \docType{data} \title{ Data on Back Pain Prognosis, from Anderson (1984) } \description{ Data from a study of patients suffering from back pain. Prognostic variables were recorded at presentation and progress was categorised three weeks after treatment. } \usage{data(backPain)} \format{ A data frame with 101 observations on the following 4 variables. \describe{ \item{x1}{length of previous attack.} \item{x2}{pain change.} \item{x3}{lordosis.} \item{pain}{an ordered factor describing the progress of each patient with levels \code{worse} < \code{same} < \code{slight.improvement} < \code{moderate.improvement} < \code{marked.improvement} < \code{complete.relief}. } } } \source{ \code{http://ideas.repec.org/c/boc/bocode/s419001.html} % \url{http://ideas.repec.org/c/boc/bocode/s419001.html} The data set and this help file were copied from \pkg{gnm} so that a vignette in \pkg{VGAM} could be run; the analysis is described in Yee (2010). } \references{ Anderson, J. A. (1984) Regression and Ordered Categorical Variables. \emph{J. R. Statist. Soc. B}, \bold{46(1)}, 1--30. Yee, T. W. (2010) The \pkg{VGAM} package for categorical data analysis. \emph{Journal of Statistical Software}, \bold{32}, 1--34. \url{http://www.jstatsoft.org/v32/i10/}. } \examples{ summary(backPain) } \keyword{datasets} % set.seed(1) % data(backPain) VGAM/man/plotqrrvglm.Rd0000644000176200001440000000450013135276753014452 0ustar liggesusers\name{plotqrrvglm} \alias{plotqrrvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Model Diagnostic Plots for QRR-VGLMs } \description{ The residuals of a QRR-VGLM are plotted for model diagnostic purposes. } \usage{ plotqrrvglm(object, rtype = c("response", "pearson", "deviance", "working"), ask = FALSE, main = paste(Rtype, "residuals vs latent variable(s)"), xlab = "Latent Variable", I.tolerances = object@control$eq.tolerances, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ An object of class \code{"qrrvglm"}. } \item{rtype}{ Character string giving residual type. By default, the first one is chosen. } \item{ask}{ Logical. If \code{TRUE}, the user is asked to hit the return key for the next plot. } \item{main}{ Character string giving the title of the plot. } \item{xlab}{ Character string giving the x-axis caption. } \item{I.tolerances}{ Logical. This argument is fed into \code{Coef(object, I.tolerances = I.tolerances)}. } \item{\dots}{ Other plotting arguments (see \code{\link[graphics]{par}}). } } \details{ Plotting the residuals can be very useful for checking that the model fit is adequate. } \value{ The original object. } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. } \author{Thomas W. Yee} \note{ An ordination plot of a QRR-VGLM can be obtained by \code{\link{lvplot.qrrvglm}}.
} \seealso{ \code{\link{lvplot.qrrvglm}}, \code{\link{cqo}}. } \examples{\dontrun{ # QRR-VGLM on the hunting spiders data # This is computationally expensive set.seed(111) # This leads to the global solution # hspider[, 1:6] <- scale(hspider[, 1:6]) # Standardize the environmental variables p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, quasipoissonff, data = hspider, Crow1positive = FALSE) par(mfrow = c(3, 4)) plot(p1, rtype = "response", col = "blue", pch = 4, las = 1, main = "") } } \keyword{dplot} \keyword{models} \keyword{regression} VGAM/man/enormUC.Rd0000644000176200001440000000760713135276753013444 0ustar liggesusers\name{Expectiles-Normal} \alias{Expectiles-Normal} \alias{enorm} \alias{denorm} \alias{penorm} \alias{qenorm} \alias{renorm} \title{ Expectiles of the Normal Distribution } \description{ Density function, distribution function, and expectile function and random generation for the distribution associated with the expectiles of a normal distribution. } \usage{ denorm(x, mean = 0, sd = 1, log = FALSE) penorm(q, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE) qenorm(p, mean = 0, sd = 1, Maxit.nr = 10, Tol.nr = 1.0e-6, lower.tail = TRUE, log.p = FALSE) renorm(n, mean = 0, sd = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, p, q}{ See \code{\link{deunif}}. } \item{n, mean, sd, log}{ See \code{\link[stats:Normal]{rnorm}}. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } \item{Maxit.nr, Tol.nr}{ See \code{\link{deunif}}. } } \details{ General details are given in \code{\link{deunif}} including a note regarding the terminology used. Here, \code{norm} corresponds to the distribution of interest, \eqn{F}, and \code{enorm} corresponds to \eqn{G}. The addition of ``\code{e}'' is for the `other' distribution associated with the parent distribution. Thus \code{denorm} is for \eqn{g}, \code{penorm} is for \eqn{G}, \code{qenorm} is for the inverse of \eqn{G}, \code{renorm} generates random variates from \eqn{g}. For \code{qenorm} the Newton-Raphson algorithm is used to solve for \eqn{y} satisfying \eqn{p = G(y)}. Numerical problems may occur when values of \code{p} are very close to 0 or 1. } \value{ \code{denorm(x)} gives the density function \eqn{g(x)}. \code{penorm(q)} gives the distribution function \eqn{G(q)}. \code{qenorm(p)} gives the expectile function: the value \eqn{y} such that \eqn{G(y)=p}. \code{renorm(n)} gives \eqn{n} random variates from \eqn{G}. } %\references{ % %Jones, M. C. (1994) %Expectiles and M-quantiles are quantiles. %\emph{Statistics and Probability Letters}, %\bold{20}, 149--153. % %} \author{ T. W. Yee and Kai Huang } %\note{ %The ``\code{q}'', as the first character of ``\code{qeunif}'', %may be changed to ``\code{e}'' in the future, %the reason being to emphasize that the expectiles are returned. %Ditto for the argument ``\code{q}'' in \code{peunif}. % %} \seealso{ \code{\link{deunif}}, \code{\link{deexp}}, \code{\link{dnorm}}, \code{\link{amlnormal}}, \code{\link{lms.bcn}}. 
} \examples{ my.p <- 0.25; y <- rnorm(nn <- 1000) (myexp <- qenorm(my.p)) sum(myexp - y[y <= myexp]) / sum(abs(myexp - y)) # Should be my.p # Non-standard normal mymean <- 1; mysd <- 2 yy <- rnorm(nn, mymean, mysd) (myexp <- qenorm(my.p, mymean, mysd)) sum(myexp - yy[yy <= myexp]) / sum(abs(myexp - yy)) # Should be my.p penorm(-Inf, mymean, mysd) # Should be 0 penorm( Inf, mymean, mysd) # Should be 1 penorm(mean(yy), mymean, mysd) # Should be 0.5 abs(qenorm(0.5, mymean, mysd) - mean(yy)) # Should be 0 abs(penorm(myexp, mymean, mysd) - my.p) # Should be 0 integrate(f = denorm, lower = -Inf, upper = Inf, mymean, mysd) # Should be 1 \dontrun{ par(mfrow = c(2, 1)) yy <- seq(-3, 3, len = nn) plot(yy, denorm(yy), type = "l", col="blue", xlab = "y", ylab = "g(y)", main = "g(y) for N(0,1); dotted green is f(y) = dnorm(y)") lines(yy, dnorm(yy), col = "darkgreen", lty = "dotted", lwd = 2) # 'original' plot(yy, penorm(yy), type = "l", col = "blue", ylim = 0:1, xlab = "y", ylab = "G(y)", main = "G(y) for N(0,1)") abline(v = 0, h = 0.5, col = "red", lty = "dashed") lines(yy, pnorm(yy), col = "darkgreen", lty = "dotted", lwd = 2) } } \keyword{distribution} %# Equivalently: %I1 = mean(y <= myexp) * mean( myexp - y[y <= myexp]) %I2 = mean(y > myexp) * mean(-myexp + y[y > myexp]) %I1 / (I1 + I2) # Should be my.p %# Or: %I1 = sum( myexp - y[y <= myexp]) %I2 = sum(-myexp + y[y > myexp]) VGAM/man/loglapUC.Rd0000644000176200001440000000676013135276753013601 0ustar liggesusers\name{loglapUC} \alias{dloglap} \alias{ploglap} \alias{qloglap} \alias{rloglap} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Log-Laplace Distribution } \description{ Density, distribution function, quantile function and random generation for the 3-parameter log-Laplace distribution with location parameter \code{location.ald}, scale parameter \code{scale.ald} (on the log scale), and asymmetry parameter \code{kappa}. } \usage{ dloglap(x, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)), log = FALSE) ploglap(q, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)), lower.tail = TRUE, log.p = FALSE) qloglap(p, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)), lower.tail = TRUE, log.p = FALSE) rloglap(n, location.ald = 0, scale.ald = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{ number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{location.ald, scale.ald}{ the location parameter \eqn{\xi}{xi} and the (positive) scale parameter \eqn{\sigma}{sigma}, on the log scale. } \item{tau}{ the quantile parameter \eqn{\tau}{tau}. Must consist of values in \eqn{(0,1)}. This argument is used to specify \code{kappa} and is ignored if \code{kappa} is assigned. } \item{kappa}{ the asymmetry parameter \eqn{\kappa}{kappa}. Must consist of positive values. } \item{log}{ if \code{TRUE}, probabilities \code{p} are given as \code{log(p)}. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \details{ A positive random variable \eqn{Y} is said to have a log-Laplace distribution if \eqn{\log(Y)} has an asymmetric Laplace distribution (ALD). There are many variants of ALDs and the one used here is described in \code{\link{alaplace3}}. 
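% A sketch of the relationship just described: if Y is log-Laplace then
% log(Y) has an asymmetric Laplace distribution, so the two distribution
% functions should agree (assumes palap() from this package):
% loc <- 0; sigma <- exp(0.5); kappa <- 1; y <- 2
% c(ploglap(y, loc, sigma, kappa = kappa),
%   palap(log(y), loc, sigma, kappa = kappa))  # Should be equal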
} \value{ \code{dloglap} gives the density, \code{ploglap} gives the distribution function, \code{qloglap} gives the quantile function, and \code{rloglap} generates random deviates. } \references{ Kozubowski, T. J. and Podgorski, K. (2003) Log-Laplace distributions. \emph{International Mathematical Journal}, \bold{3}, 467--495. } \author{ T. W. Yee and Kai Huang } %\note{ % The \pkg{VGAM} family function \code{\link{loglaplace3}} % estimates the three parameters by maximum likelihood estimation. %} \seealso{ \code{\link{dalap}}, \code{\link{alaplace3}}, \code{\link{loglaplace1}}. % \code{\link{loglaplace3}}. } \examples{ loc <- 0; sigma <- exp(0.5); kappa <- 1 x <- seq(-0.2, 5, by = 0.01) \dontrun{ plot(x, dloglap(x, loc, sigma, kappa = kappa), type = "l", col = "blue", main = "Blue is density, red is cumulative distribution function", ylim = c(0,1), sub = "Purple are 5,10,...,95 percentiles", las = 1, ylab = "") abline(h = 0, col = "blue", lty = 2) lines(qloglap(seq(0.05,0.95,by = 0.05), loc, sigma, kappa = kappa), dloglap(qloglap(seq(0.05,0.95,by = 0.05), loc, sigma, kappa = kappa), loc, sigma, kappa = kappa), col = "purple", lty = 3, type = "h") lines(x, ploglap(x, loc, sigma, kappa = kappa), type = "l", col = "red") abline(h = 0, lty = 2) } ploglap(qloglap(seq(0.05,0.95,by = 0.05), loc, sigma, kappa = kappa), loc, sigma, kappa = kappa) } \keyword{distribution} VGAM/man/smartpred.Rd0000644000176200001440000001742513135276753014072 0ustar liggesusers\name{smartpred} \alias{smartpred} \alias{sm.bs} \alias{sm.ns} \alias{sm.scale} \alias{sm.scale.default} \alias{sm.poly} \title{ Smart Prediction } \description{ Data-dependent parameters in formula terms can cause problems when predicting. The \pkg{smartpred} package saves data-dependent parameters on the object so that this problem is avoided. The \code{\link[stats]{lm}} and \code{\link[stats]{glm}} functions have been fixed properly. Note that the \pkg{VGAM} package by T. W. Yee automatically comes with smart prediction. } \usage{ sm.bs(x, df = NULL, knots = NULL, degree = 3, intercept = FALSE, Boundary.knots = range(x)) sm.ns(x, df = NULL, knots = NULL, intercept = FALSE, Boundary.knots = range(x)) sm.poly(x, ..., degree = 1, coefs = NULL, raw = FALSE) sm.scale(x, center = TRUE, scale = TRUE) } %\usage{ %lm() %glm() %ns() %bs() %poly() %scale() %vglm() %rrvglm() %vgam() %cao() %cqo() %uqo() %} \arguments{ \item{x}{ The \code{x} argument is actually common to them all. } \item{df, knots, intercept, Boundary.knots}{ See \code{\link[splines]{bs}} and/or \code{\link[splines]{ns}}. } \item{degree, \dots, coefs, raw}{ See \code{\link[stats]{poly}}. } \item{center, scale}{ See \code{\link[base]{scale}}. } } \value{ The usual value returned by \code{\link[splines]{bs}}, \code{\link[splines]{ns}}, \code{\link[stats]{poly}} and \code{\link[base]{scale}}. When used with functions such as \code{\link[VGAM]{vglm}} the data-dependent parameters are saved on one slot component called \code{smart.prediction}. } \section{Side Effects}{ The variables \code{.max.smart}, \code{.smart.prediction} and \code{.smart.prediction.counter} are created while the model is being fitted. They are created in a new environment called \code{smartpredenv}. These variables are deleted after the model has been fitted. However, if there is an error in the model fitting function or the model fitting is killed (e.g., by typing control-C) then these variables will be left in \code{smartpredenv}.
At the beginning of model fitting, these variables are deleted if present in \code{smartpredenv}. % In S-PLUS they are created in frame 1. During prediction, the variables \code{.smart.prediction} and \code{.smart.prediction.counter} are reconstructed and read by the smart functions when the model frame is re-evaluated. After prediction, these variables are deleted. If the modelling function is used with argument \code{smart = FALSE} (e.g., \code{vglm(..., smart = FALSE)}) then smart prediction will not be used, and the results should match with the original \R functions. } \details{ \R version 1.6.0 introduced a partial fix for the prediction problem, but it does not work all the time, e.g., for terms such as \code{I(poly(x, 3))}, \code{poly(c(scale(x)), 3)}, \code{bs(scale(x), 3)}, \code{scale(scale(x))}. See the examples below. Smart prediction, however, will always work. % albeit, not so elegantly. The basic idea is that the functions in the formula are now smart, and the modelling functions make use of these smart functions. Smart prediction works in two ways: using \code{\link{smart.expression}}, or using a combination of \code{\link{put.smart}} and \code{\link{get.smart}}. } \author{T. W. Yee and T. J. Hastie} %\note{ % In S-PLUS you will need to load in the \pkg{smartpred} library with % the argument \code{first = T}, e.g., % \code{library(smartpred, lib = "./mys8libs", first = T)}. % Here, \code{mys8libs} is the name of a directory of installed packages. % To install the smartpred package in Linux/Unix, type something like % \code{Splus8 INSTALL -l ./mys8libs ./smartpred_0.8-2.tar.gz}. %} %\note{ % In \R and % prior to the \pkg{VGAM} package using name spaces, the location of the % variables was the workspace. The present use of \code{smartpredenv} % is superior, and is somewhat similar to the S-PLUS implementation in % that the user is more oblivious to its existence. % %} \seealso{ \code{\link{get.smart.prediction}}, \code{\link{get.smart}}, \code{\link{put.smart}}, \code{\link{smart.expression}}, \code{\link{smart.mode.is}}, \code{\link{setup.smart}}, \code{\link{wrapup.smart}}. For \code{\link[VGAM]{vgam}} in \pkg{VGAM}, \code{\link[VGAM]{sm.ps}} is important. Commonly used data-dependent functions include \code{\link[base]{scale}}, \code{\link[stats]{poly}}, \code{\link[splines]{bs}}, \code{\link[splines]{ns}}. In \R, the functions \code{\link[splines]{bs}} and \code{\link[splines]{ns}} are in the \pkg{splines} package, and this package is automatically loaded in because it contains compiled code that \code{\link[splines]{bs}} and \code{\link[splines]{ns}} call. % The website \url{http://www.stat.auckland.ac.nz/~yee} % contains more information such as how to write a % smart function, and other technical details. The functions \code{\link[VGAM]{vglm}}, \code{\link[VGAM]{vgam}}, \code{\link[VGAM]{rrvglm}} and \code{\link[VGAM]{cqo}} in T. W. Yee's \pkg{VGAM} package are examples of modelling functions that employ smart prediction. } \section{WARNING }{ % In S-PLUS, % if the \code{"bigdata"} library is loaded then it is % \code{detach()}'ed. This is done because % \code{scale} cannot be made smart if \code{"bigdata"} is loaded % (it is loaded by default in the Windows version of % Splus 8.0, but not in Linux/Unix). % The function \code{\link[base]{search}} tells what is % currently attached.
% In \R and S-PLUS The functions \code{\link[splines]{bs}}, \code{\link[splines]{ns}}, \code{\link[stats]{poly}} and \code{\link[base]{scale}} are now left alone (from 2014-05 onwards) and no longer smart. They work via safe prediction. The smart versions of these functions have been renamed and they begin with \code{"sm."}. The functions \code{\link[splines]{predict.bs}} and \code{predict.ns} are not smart. That is because they operate on objects that contain attributes only and do not have list components or slots. The function \code{\link[stats:poly]{predict.poly}} is not smart. } \examples{ # Create some data first n <- 20 set.seed(86) # For reproducibility of the random numbers ldata <- data.frame(x2 = sort(runif(n)), y = sort(runif(n))) library("splines") # To get ns() in R # This will work for R 1.6.0 and later fit <- lm(y ~ ns(x2, df = 5), data = ldata) \dontrun{ plot(y ~ x2, data = ldata) lines(fitted(fit) ~ x2, data = ldata) new.ldata <- data.frame(x2 = seq(0, 1, len = n)) points(predict(fit, new.ldata) ~ x2, new.ldata, type = "b", col = 2, err = -1) } # The following fails for R 1.6.x and later. It can be # made to work with smart prediction provided # ns is changed to sm.ns and scale is changed to sm.scale: fit1 <- lm(y ~ ns(scale(x2), df = 5), data = ldata) \dontrun{ plot(y ~ x2, data = ldata, main = "Safe prediction fails") lines(fitted(fit1) ~ x2, data = ldata) points(predict(fit1, new.ldata) ~ x2, new.ldata, type = "b", col = 2, err = -1) } # Fit the above using smart prediction \dontrun{ library("VGAM") # The following requires the VGAM package to be loaded fit2 <- vglm(y ~ sm.ns(sm.scale(x2), df = 5), uninormal, data = ldata) fit2@smart.prediction plot(y ~ x2, data = ldata, main = "Smart prediction") lines(fitted(fit2) ~ x2, data = ldata) points(predict(fit2, new.ldata, type = "response") ~ x2, data = new.ldata, type = "b", col = 2, err = -1) } } %\keyword{smart} \keyword{models} \keyword{regression} \keyword{programming} %lm(..., smart = TRUE) %glm(..., smart = TRUE) %ns() %bs() %poly() %scale() %vglm(..., smart = TRUE) %rrvglm(..., smart = TRUE) %vgam(..., smart = TRUE) %cao(..., smart = TRUE) %cqo(..., smart = TRUE) %uqo(..., smart = TRUE) %library(smartpred, lib = "./mys8libs", first = T) VGAM/man/qtplot.gumbel.Rd0000644000176200001440000000760613135276753014670 0ustar liggesusers\name{qtplot.gumbel} \alias{qtplot.gumbel} \alias{qtplot.gumbelff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Quantile Plot for Gumbel Regression } \description{ Plots quantiles associated with a Gumbel model. } \usage{ qtplot.gumbel(object, show.plot = TRUE, y.arg = TRUE, spline.fit = FALSE, label = TRUE, R = object@misc$R, percentiles = object@misc$percentiles, add.arg = FALSE, mpv = object@misc$mpv, xlab = NULL, ylab = "", main = "", pch = par()$pch, pcol.arg = par()$col, llty.arg = par()$lty, lcol.arg = par()$col, llwd.arg = par()$lwd, tcol.arg = par()$col, tadj = 1, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A \pkg{VGAM} extremes model of the Gumbel type, produced by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}, and with a family function that is either \code{\link{gumbel}} or \code{\link{gumbelff}}. } \item{show.plot}{ Logical. Plot it? If \code{FALSE} no plot will be done. } \item{y.arg}{ Logical. Add the raw data on to the plot? } \item{spline.fit}{ Logical. Use a spline fit through the fitted percentiles? 
This can be useful if there are large gaps between some values along the covariate. } \item{label}{ Logical. Label the percentiles? } \item{R}{ See \code{\link{gumbel}}. } \item{percentiles}{ See \code{\link{gumbel}}. } \item{add.arg}{ Logical. Add the plot to an existing plot? } \item{mpv}{ See \code{\link{gumbel}}. } \item{xlab}{ Caption for the x-axis. See \code{\link[graphics]{par}}. } \item{ylab}{ Caption for the y-axis. See \code{\link[graphics]{par}}. } \item{main}{ Title of the plot. See \code{\link[graphics]{title}}. } \item{pch}{ Plotting character. See \code{\link[graphics]{par}}. } \item{pcol.arg}{ Color of the points. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{llty.arg}{ Line type. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{lcol.arg}{ Color of the lines. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{llwd.arg}{ Line width. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{tcol.arg}{ Color of the text (if \code{label} is \code{TRUE}). See the \code{col} argument of \code{\link[graphics]{par}}. } \item{tadj}{ Text justification. See the \code{adj} argument of \code{\link[graphics]{par}}. } \item{\dots}{ Arguments passed into the \code{plot} function when setting up the entire plot. Useful arguments here include \code{sub} and \code{las}. } } \details{ There should be a single covariate such as time. The quantiles specified by \code{percentiles} are plotted. } \value{ The object with a list called \code{qtplot} in the \code{post} slot of \code{object}. (If \code{show.plot = FALSE} then just the list is returned.) The list contains components \item{fitted.values}{ The percentiles of the response, possibly including the MPV. } \item{percentiles}{ The percentiles (a small vector of values between 0 and 100). } } %\references{ ~put references to the literature/web site here ~ } \author{ Thomas W. Yee } \note{ Unlike \code{\link{gumbel}}, one cannot have \code{percentiles = NULL}. } \seealso{ \code{\link{gumbel}}. } \examples{ ymat <- as.matrix(venice[, paste("r", 1:10, sep = "")]) fit1 <- vgam(ymat ~ s(year, df = 3), gumbel(R = 365, mpv = TRUE), data = venice, trace = TRUE, na.action = na.pass) head(fitted(fit1)) \dontrun{ par(mfrow = c(1, 1), bty = "l", xpd = TRUE, las = 1) qtplot(fit1, mpv = TRUE, lcol = c(1, 2, 5), tcol = c(1, 2, 5), lwd = 2, pcol = "blue", tadj = 0.4, ylab = "Sea level (cm)") qtplot(fit1, perc = 97, mpv = FALSE, lcol = 3, tcol = 3, lwd = 2, tadj = 0.4, add = TRUE) -> saved head(saved@post$qtplot$fitted) } } \keyword{graphs} \keyword{models} \keyword{regression} VGAM/man/prentice74.Rd0000644000176200001440000000740713135276753014056 0ustar liggesusers\name{prentice74} \alias{prentice74} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Prentice (1974) Log-gamma Distribution } \description{ Estimation of a 3-parameter log-gamma distribution described by Prentice (1974). } \usage{ prentice74(llocation = "identitylink", lscale = "loge", lshape = "identitylink", ilocation = NULL, iscale = NULL, ishape = NULL, imethod = 1, glocation.mux = exp((-4:4)/2), gscale.mux = exp((-4:4)/2), gshape = qt(ppoints(6), df = 1), probs.y = 0.3, zero = c("scale", "shape")) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation, lscale, lshape}{ Parameter link functions applied to the location parameter \eqn{a}, positive scale parameter \eqn{b} and the shape parameter \eqn{q}, respectively. See \code{\link{Links}} for more choices.
} \item{ilocation, iscale}{ Initial value for \eqn{a} and \eqn{b}, respectively. The defaults mean an initial value is determined internally for each. } \item{ishape}{ Initial value for \eqn{q}. If failure to converge occurs, try some other value. The default means an initial value is determined internally. } \item{imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for information. % Can be an integer-valued vector specifying which % linear/additive predictors are modelled as intercepts-only. % Then the values must be from the set \{1,2,3\}. } \item{glocation.mux, gscale.mux, gshape, probs.y}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The probability density function is given by \deqn{f(y;a,b,q) = |q|\,\exp(w/q^2 - e^w) / (b \, \Gamma(1/q^2)),}{% f(y;a,b,q) = |q| * exp(w/q^2 - e^w) / (b*gamma(1/q^2)),} for shape parameter \eqn{q \ne 0}{q != 0}, positive scale parameter \eqn{b > 0}, location parameter \eqn{a}, and all real \eqn{y}. Here, \eqn{w = (y-a)q/b+\psi(1/q^2)}{w = (y-a)*q/b+psi(1/q^2)} where \eqn{\psi}{psi} is the digamma function, \code{\link[base:Special]{digamma}}. The mean of \eqn{Y} is \eqn{a} (returned as the fitted values). This is a different parameterization compared to \code{\link{lgamma3}}. Special cases: \eqn{q = 0} is the normal distribution with standard deviation \eqn{b}, \eqn{q = -1} is the extreme value distribution for maximums, \eqn{q = 1} is the extreme value distribution for minima (Weibull). If \eqn{q > 0} then the distribution is left skew, else \eqn{q < 0} is right skew. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Prentice, R. L. (1974) A log gamma model and its maximum likelihood estimation. \emph{Biometrika}, \bold{61}, 539--544. %On Maximisation of the Likelihood for the Generalised Gamma Distribution. %Angela Noufaily & M.C. Jones, %29-Oct-2009, %\url{http://stats-www.open.ac.uk/TechnicalReports/} } \section{Warning }{ The special case \eqn{q = 0} is not handled, therefore estimates of \eqn{q} too close to zero may cause numerical problems. } \author{ T. W. Yee } \note{ The notation used here differs from Prentice (1974): \eqn{\alpha = a}{alpha = a}, \eqn{\sigma = b}{sigma = b}. Fisher scoring is used. } \seealso{ \code{\link{lgamma3}}, \code{\link[base:Special]{lgamma}}, \code{\link{gengamma.stacy}}. } \examples{ pdata <- data.frame(x2 = runif(nn <- 1000)) pdata <- transform(pdata, loc = -1 + 2*x2, Scale = exp(1)) pdata <- transform(pdata, y = rlgamma(nn, loc = loc, scale = Scale, shape = 1)) fit <- vglm(y ~ x2, prentice74(zero = 2:3), data = pdata, trace = TRUE) coef(fit, matrix = TRUE) # Note the coefficients for location } \keyword{models} \keyword{regression} VGAM/man/binom2.orUC.Rd0000644000176200001440000000741213135276753014123 0ustar liggesusers\name{Binom2.or} \alias{Binom2.or} \alias{dbinom2.or} \alias{rbinom2.or} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Bivariate Binary Regression with an Odds Ratio } \description{ Density and random generation for a bivariate binary regression model using an odds ratio as the measure of dependency. 
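% As a hedged numerical illustration of the dependency measure (the
% probability values here are hypothetical), the odds ratio of the four
% joint probabilities is:
% p00 <- 0.4; p01 <- 0.1; p10 <- 0.2; p11 <- 0.3
% (oratio <- (p00 * p11) / (p01 * p10))  # 6; unity means independence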
} \usage{ rbinom2.or(n, mu1, mu2 = if (exchangeable) mu1 else stop("argument 'mu2' not specified"), oratio = 1, exchangeable = FALSE, tol = 0.001, twoCols = TRUE, colnames = if (twoCols) c("y1","y2") else c("00", "01", "10", "11"), ErrorCheck = TRUE) dbinom2.or(mu1, mu2 = if (exchangeable) mu1 else stop("'mu2' not specified"), oratio = 1, exchangeable = FALSE, tol = 0.001, colnames = c("00", "01", "10", "11"), ErrorCheck = TRUE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{n}{ number of observations. Same as in \code{\link[stats]{runif}}. The arguments \code{mu1}, \code{mu2}, \code{oratio} are recycled to this value. } \item{mu1, mu2}{ The marginal probabilities. Only \code{mu1} is needed if \code{exchangeable = TRUE}. Values should be between 0 and 1. } \item{oratio}{ Odds ratio. Must be numeric and positive. The default value of unity means the responses are statistically independent. } \item{exchangeable}{ Logical. If \code{TRUE}, the two marginal probabilities are constrained to be equal. } \item{twoCols}{ Logical. If \code{TRUE}, then a \eqn{n} \eqn{\times}{*} \eqn{2} matrix of 1s and 0s is returned. If \code{FALSE}, then a \eqn{n} \eqn{\times}{*} \eqn{4} matrix of 1s and 0s is returned. } \item{colnames}{ The \code{dimnames} argument of \code{\link[base]{matrix}} is assigned \code{list(NULL, colnames)}. } \item{tol}{ Tolerance for testing independence. Should be some small positive numerical value. } \item{ErrorCheck}{ Logical. Do some error checking of the input parameters? } } \details{ The function \code{rbinom2.or} generates data coming from a bivariate binary response model. The data might be fitted with the \pkg{VGAM} family function \code{\link{binom2.or}}. The function \code{dbinom2.or} does not really compute the density (because that does not make sense here) but rather returns the four joint probabilities. } \value{ The function \code{rbinom2.or} returns either a 2 or 4 column matrix of 1s and 0s, depending on the argument \code{twoCols}. The function \code{dbinom2.or} returns a 4 column matrix of joint probabilities; each row adds up to unity. } \author{ T. W. Yee } \seealso{ \code{\link{binom2.or}}. } \examples{ nn <- 1000 # Example 1 ymat <- rbinom2.or(nn, mu1 = logit(1, inv = TRUE), oratio = exp(2), exch = TRUE) (mytab <- table(ymat[, 1], ymat[, 2], dnn = c("Y1", "Y2"))) (myor <- mytab["0","0"] * mytab["1","1"] / (mytab["1","0"] * mytab["0","1"])) fit <- vglm(ymat ~ 1, binom2.or(exch = TRUE)) coef(fit, matrix = TRUE) bdata <- data.frame(x2 = sort(runif(nn))) # Example 2 bdata <- transform(bdata, mu1 = logit(-2 + 4 * x2, inverse = TRUE), mu2 = logit(-1 + 3 * x2, inverse = TRUE)) dmat <- with(bdata, dbinom2.or(mu1 = mu1, mu2 = mu2, oratio = exp(2))) ymat <- with(bdata, rbinom2.or(n = nn, mu1 = mu1, mu2 = mu2, oratio = exp(2))) fit2 <- vglm(ymat ~ x2, binom2.or, data = bdata) coef(fit2, matrix = TRUE) \dontrun{ matplot(with(bdata, x2), dmat, lty = 1:4, col = 1:4, type = "l", main = "Joint probabilities", ylim = 0:1, ylab = "Probabilities", xlab = "x2", las = 1) legend("top", lty = 1:4, col = 1:4, legend = c("1 = (y1=0, y2=0)", "2 = (y1=0, y2=1)", "3 = (y1=1, y2=0)", "4 = (y1=1, y2=1)")) } } \keyword{distribution} VGAM/man/qrrvglm.control.Rd0000644000176200001440000005307713135276753015247 0ustar liggesusers\name{qrrvglm.control} \alias{qrrvglm.control} %- Also NEED an `\alias' for EACH other topic documented here. 
\title{ Control Function for QRR-VGLMs (CQO) } \description{ Algorithmic constants and parameters for a constrained quadratic ordination (CQO), by fitting a \emph{quadratic reduced-rank vector generalized linear model} (QRR-VGLM), are set using this function. It is the control function for \code{\link{cqo}}. } \usage{ qrrvglm.control(Rank = 1, Bestof = if (length(Cinit)) 1 else 10, checkwz = TRUE, Cinit = NULL, Crow1positive = TRUE, epsilon = 1.0e-06, EqualTolerances = NULL, eq.tolerances = TRUE, Etamat.colmax = 10, FastAlgorithm = TRUE, GradientFunction = TRUE, Hstep = 0.001, isd.latvar = rep_len(c(2, 1, rep_len(0.5, Rank)), Rank), iKvector = 0.1, iShape = 0.1, ITolerances = NULL, I.tolerances = FALSE, maxitl = 40, imethod = 1, Maxit.optim = 250, MUXfactor = rep_len(7, Rank), noRRR = ~ 1, Norrr = NA, optim.maxit = 20, Parscale = if (I.tolerances) 0.001 else 1.0, sd.Cinit = 0.02, SmallNo = 5.0e-13, trace = TRUE, Use.Init.Poisson.QO = TRUE, wzepsilon = .Machine$double.eps^0.75, ...) } %- maybe also `usage' for other objects documented here. \arguments{ In the following, \eqn{R} is the \code{Rank}, \eqn{M} is the number of linear predictors, and \eqn{S} is the number of responses (species). Thus \eqn{M=S} for binomial and Poisson responses, and \eqn{M=2S} for the negative binomial and 2-parameter gamma distributions. \item{Rank}{ The numerical rank \eqn{R} of the model, i.e., the number of ordination axes. Must be an element from the set \{1,2,\ldots,min(\eqn{M},\eqn{p_2}{p2})\} where the vector of explanatory variables \eqn{x} is partitioned into (\eqn{x_1},\eqn{x_2}), which is of dimension \eqn{p_1+p_2}{p1+p2}. The variables making up \eqn{x_1} are given by the terms in the \code{noRRR} argument, and the rest of the terms comprise \eqn{x_2}. } \item{Bestof}{ Integer. The best of \code{Bestof} models fitted is returned. This argument helps guard against local solutions by (hopefully) finding the global solution from many fits. The argument has value 1 if an initial value for \eqn{C} is inputted using \code{Cinit}. } \item{checkwz}{ logical indicating whether the diagonal elements of the working weight matrices should be checked whether they are sufficiently positive, i.e., greater than \code{wzepsilon}. If not, any values less than \code{wzepsilon} are replaced with this value. } \item{Cinit}{ Optional initial \eqn{C} matrix, which must be a \eqn{p_2}{p2} by \eqn{R} matrix. The default is to apply \code{.Init.Poisson.QO()} to obtain initial values. } \item{Crow1positive}{ Logical vector of length \code{Rank} (recycled if necessary): are the elements of the first row of \eqn{C} positive? For example, if \code{Rank} is 4, then specifying \code{Crow1positive = c(FALSE, TRUE)} will force \eqn{C[1,1]} and \eqn{C[1,3]} to be negative, and \eqn{C[1,2]} and \eqn{C[1,4]} to be positive. This argument allows for a reflection in the ordination axes because the coefficients of the latent variables are unique up to a sign. } \item{epsilon}{ Positive numeric. Used to test for convergence for GLMs fitted in C. Larger values mean a loosening of the convergence criterion. If an error code of 3 is reported, try increasing this value. } \item{eq.tolerances}{ Logical indicating whether each (quadratic) predictor will have equal tolerances. Having \code{eq.tolerances = TRUE} can help avoid numerical problems, especially with binary data. Note that the estimated (common) tolerance matrix may or may not be positive-definite. 
If it is then it can be scaled to the \eqn{R} by \eqn{R} identity matrix, i.e., made equivalent to \code{I.tolerances = TRUE}. Setting \code{I.tolerances = TRUE} will \emph{force} a common \eqn{R} by \eqn{R} identity matrix as the tolerance matrix to the data even if it is not appropriate. In general, setting \code{I.tolerances = TRUE} is preferred over \code{eq.tolerances = TRUE} because, if it works, it is much faster and uses less memory. However, \code{I.tolerances = TRUE} requires the environmental variables to be scaled appropriately. See \bold{Details} for more details. } \item{EqualTolerances}{ Defunct argument. Use \code{eq.tolerances} instead. } % \item{Eta.range}{ Numerical vector of length 2 or \code{NULL}. % Gives the lower and upper bounds on the values that can be taken % by the quadratic predictor (i.e., on the eta-scale). % Since \code{FastAlgorithm = TRUE}, this argument should be ignored. % } \item{Etamat.colmax}{ Positive integer, no smaller than \code{Rank}. Controls the amount of memory used by \code{.Init.Poisson.QO()}. It is the maximum number of columns allowed for the pseudo-response and its weights. In general, the larger the value, the better the initial value. Used only if \code{Use.Init.Poisson.QO = TRUE}. } \item{FastAlgorithm}{ Logical. Whether a new fast algorithm is to be used. The fast algorithm results in a large speed increase compared to Yee (2004). Some details of the fast algorithm are found in Appendix A of Yee (2006). Setting \code{FastAlgorithm = FALSE} will give an error. } \item{GradientFunction}{ Logical. Whether \code{\link[stats]{optim}}'s argument \code{gr} is used or not, i.e., to compute gradient values. Used only if \code{FastAlgorithm} is \code{TRUE}. The default value is usually faster on most problems. } \item{Hstep}{ Positive value. Used as the step size in the finite difference approximation to the derivatives by \code{\link[stats]{optim}}. % Used only if \code{FastAlgorithm} is \code{TRUE}. } \item{isd.latvar}{ Initial standard deviations for the latent variables (site scores). Numeric, positive and of length \eqn{R} (recycled if necessary). This argument is used only if \code{I.tolerances = TRUE}. Used by \code{.Init.Poisson.QO()} to obtain initial values for the constrained coefficients \eqn{C} adjusted to a reasonable value. It adjusts the spread of the site scores relative to a common species tolerance of 1 for each ordination axis. A value between 0.5 and 10 is recommended; a value such as 10 means that the range of the environmental space is very large relative to the niche width of the species. The successive values should decrease because the first ordination axis should have the most spread of site scores, followed by the second ordination axis, etc. } \item{iKvector, iShape}{ Numeric, recycled to length \eqn{S} if necessary. Initial values used for estimating the positive \eqn{k} and \eqn{\lambda}{lambda} parameters of the negative binomial and 2-parameter gamma distributions respectively. For further information see \code{\link{negbinomial}} and \code{\link{gamma2}}. These arguments override the \code{ik} and \code{ishape} arguments in \code{\link{negbinomial}} and \code{\link{gamma2}}. } \item{I.tolerances}{ Logical. If \code{TRUE} then the (common) tolerance matrix is the \eqn{R} by \eqn{R} identity matrix by definition. Note that having \code{I.tolerances = TRUE} implies \code{eq.tolerances = TRUE}, but not vice versa.
Internally, the quadratic terms will be treated as offsets (in GLM jargon) and so the models can potentially be fitted very efficiently. \emph{However, it is a very good idea to center and scale all numerical variables in the \eqn{x_2} vector}. See \bold{Details} for more details. The success of \code{I.tolerances = TRUE} often depends on suitable values for \code{isd.latvar} and/or \code{MUXfactor}. } \item{ITolerances}{ Defunct argument. Use \code{I.tolerances} instead. } \item{maxitl}{ Maximum number of times the optimizer is called or restarted. Most users should ignore this argument. } \item{imethod}{ Method of initialization. A positive integer 1 or 2 or 3 etc. depending on the \pkg{VGAM} family function. Currently it is used for \code{\link{negbinomial}} and \code{\link{gamma2}} only, and is used within the C code. } \item{Maxit.optim}{ Positive integer. Number of iterations given to the function \code{\link[stats]{optim}} at each of the \code{optim.maxit} iterations. } \item{MUXfactor}{ Multiplication factor for detecting large offset values. Numeric, positive and of length \eqn{R} (recycled if necessary). This argument is used only if \code{I.tolerances = TRUE}. Offsets are \eqn{-0.5} multiplied by the sum of the squares of all \eqn{R} latent variable values. If the latent variable values are too large then this will result in numerical problems. By too large, it is meant that the standard deviations of the latent variable values are greater than \code{MUXfactor[r] * isd.latvar[r]} for \code{r=1:Rank} (this is why centering and scaling all the numerical predictor variables in \eqn{x_2} is recommended). A value about 3 or 4 is recommended. If failure to converge occurs, try a slightly lower value. } \item{optim.maxit}{ Positive integer. Number of times \code{\link[stats]{optim}} is invoked. At iteration \code{i}, the \code{i}th value of \code{Maxit.optim} is fed into \code{\link[stats]{optim}}. } \item{noRRR}{ Formula giving terms that are \emph{not} to be included in the reduced-rank regression (or formation of the latent variables), i.e., those belonging to \eqn{x_1}. Those variables which do not make up the latent variable (reduced-rank regression) correspond to the \eqn{B_1}{B_1} matrix. The default is to omit the intercept term from the latent variables. } \item{Norrr}{ Defunct. Please use \code{noRRR}. Use of \code{Norrr} will become an error soon. } \item{Parscale}{ Numerical and positive-valued vector of length \eqn{C} (recycled if necessary). Passed into \code{optim(..., control = list(parscale = Parscale))}; the elements of \eqn{C} become \eqn{C} / \code{Parscale}. Setting \code{I.tolerances = TRUE} results in line searches that are very large, therefore \eqn{C} has to be scaled accordingly to avoid large step sizes. See \bold{Details} for more information. It's probably best to leave this argument alone. } \item{sd.Cinit}{ Standard deviation of the initial values for the elements of \eqn{C}. These are normally distributed with mean zero. This argument is used only if \code{Use.Init.Poisson.QO = FALSE} and \eqn{C} is not inputted using \code{Cinit}. } \item{trace}{ Logical indicating if output should be produced for each iteration. The default is \code{TRUE} because the calculations are numerically intensive, meaning it may take a long time, so that the user might think the computer has locked up if \code{trace = FALSE}. } % \item{Kinit}{ Initial values for the index parameters \code{k} in the % negative binomial distribution (one per species).
% In general, a smaller number is preferred over a larger number. % The vector is recycled to the number of responses (species). % The argument is currently not used. % } % \item{Dzero}{ Integer vector specifying which squared terms % are to be zeroed. These linear predictors will correspond to % a RR-VGLM. % The values must be elements from the set \{1,2,\ldots,\eqn{M}\}. % Used only if \code{Quadratic = TRUE} and \code{FastAlgorithm = FALSE}. % } \item{SmallNo}{ Positive numeric between \code{.Machine$double.eps} and \code{0.0001}. Used to avoid under- or over-flow in the IRLS algorithm. Used only if \code{FastAlgorithm} is \code{TRUE}. } \item{Use.Init.Poisson.QO}{ Logical. If \code{TRUE} then the function \code{.Init.Poisson.QO()} is used to obtain initial values for the canonical coefficients \eqn{C}. If \code{FALSE} then random numbers are used instead. } \item{wzepsilon}{ Small positive number used to test whether the diagonals of the working weight matrices are sufficiently positive. } \item{\dots}{ Ignored at present. } } \details{ Recall that the central formula for CQO is \deqn{\eta = B_1^T x_1 + A \nu + \sum_{m=1}^M (\nu^T D_m \nu) e_m}{% eta = B_1^T x_1 + A nu + sum_{m=1}^M (nu^T D_m nu) e_m} where \eqn{x_1}{x_1} is a vector (usually just a 1 for an intercept), \eqn{x_2}{x_2} is a vector of environmental variables, \eqn{\nu=C^T x_2}{nu=C^T x_2} is a \eqn{R}-vector of latent variables, \eqn{e_m} is a vector of 0s but with a 1 in the \eqn{m}th position. QRR-VGLMs are an extension of RR-VGLMs and allow for maximum likelihood solutions to constrained quadratic ordination (CQO) models. % For the fitting of QRR-VGLMs, the default is that the \eqn{C} matrix % (containing the \emph{canonical} or \emph{constrained coefficients} % corresponding to \eqn{x_2}) % is constrained by forcing the latent variables to have sample % variance-covariance matrix equalling \code{diag(Rank)}, i.e., % unit variance and uncorrelated. The tolerance matrices are, in % general, diagonal under such a constraint. Having \code{I.tolerances = TRUE} means all the tolerance matrices are the order-\eqn{R} identity matrix, i.e., it \emph{forces} bell-shaped curves/surfaces on all species. This results in a more difficult optimization problem (especially for 2-parameter models such as the negative binomial and gamma) because of overflow errors and it appears there are more local solutions. To help avoid the overflow errors, scaling \eqn{C} by the factor \code{Parscale} can help enormously. Even better, scaling \eqn{C} by specifying \code{isd.latvar} is more understandable to humans. If failure to converge occurs, try adjusting \code{Parscale}, or better, setting \code{eq.tolerances = TRUE} (and hope that the estimated tolerance matrix is positive-definite). To fit an equal-tolerances model, it is firstly best to try setting \code{I.tolerances = TRUE} and varying \code{isd.latvar} and/or \code{MUXfactor} if it fails to converge. If it still fails to converge after many attempts, try setting \code{eq.tolerances = TRUE}, however this will usually be a lot slower because it requires a lot more memory. With a \eqn{R > 1} model, the latent variables are always uncorrelated, i.e., the variance-covariance matrix of the site scores is a diagonal matrix. If setting \code{eq.tolerances = TRUE} is used and the common estimated tolerance matrix is positive-definite then that model is effectively the same as the \code{I.tolerances = TRUE} model (the two are transformations of each other). 
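% A hedged sketch of the scaling advice above, mirroring the example
% further below (the trial values for isd.latvar and MUXfactor are
% illustrative only):
% hspider[, 1:6] <- scale(hspider[, 1:6])  # Center and scale x_2 first
% fit <- cqo(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull) ~ WaterCon +
%              BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
%            poissonff, data = hspider, I.tolerances = TRUE,
%            isd.latvar = 1.5, MUXfactor = 4)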
In general, \code{I.tolerances = TRUE} is numerically more unstable and presents a more difficult problem to optimize; the arguments \code{isd.latvar} and/or \code{MUXfactor} often must be assigned some good value(s) (possibly found by trial and error) in order for convergence to occur. Setting \code{I.tolerances = TRUE} \emph{forces} a bell-shaped curve or surface onto all the species data; therefore this option should be used with deliberation. If unsuitable, the resulting fit may be very misleading. Usually it is a good idea for the user to set \code{eq.tolerances = FALSE} to see which species appear to have a bell-shaped curve or surface. Improvements to the fit can often be achieved using transformations, e.g., nitrogen concentration to log nitrogen concentration. Fitting a CAO model (see \code{\link{cao}}) first is a good idea for pre-examining the data and checking whether it is appropriate to fit a CQO model. %Suppose \code{FastAlgorithm = FALSE}. In theory (if %\code{Eta.range = NULL}), for QRR-VGLMs, the predictors have the values of %a quadratic form. However, when \code{Eta.range} is assigned a numerical %vector of length 2 (giving the endpoints of an interval), then those %values lying outside the interval are assigned the closest boundary %value. The \code{Eta.range} argument is provided to help avoid %numerical problems associated with the inner minimization problem. A %consequence of this is that the fitted values are bounded, e.g., between %\code{1/(1+exp(-Eta.range[1]))} and \code{1/(1+exp(-Eta.range[2]))} for %binary data (logit link), and greater than \code{exp(Eta.range[1])} for %Poisson data (log link). It is suggested that, for binary responses, %\code{c(-16, 16)} be used, and for Poisson responses, \code{c(-16, Inf)} %be used. The value \code{NULL} corresponds to \code{c(-Inf, Inf)}. } \value{ A list with components matching the input names. } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. } \author{ Thomas W. Yee } \note{ When \code{I.tolerances = TRUE} it is a good idea to apply \code{\link[base]{scale}} to all the numerical variables that make up the latent variable, i.e., those of \eqn{x_2}{x_2}. This is to make them have mean 0, and hence avoid large offset values which cause numerical problems. This function has many arguments that are in common with \code{\link{rrvglm.control}} and \code{\link{vglm.control}}. It is usually a good idea to try fitting a model with \code{I.tolerances = TRUE} first, and if convergence is unsuccessful, then try \code{eq.tolerances = TRUE} and \code{I.tolerances = FALSE}. Ordination diagrams with \code{eq.tolerances = TRUE} have a natural interpretation, but with \code{eq.tolerances = FALSE} they are more complicated and require, e.g., contours to be overlaid on the ordination diagram (see \code{\link{lvplot.qrrvglm}}). % and/or use the \code{Eta.range} argument. In the example below, an equal-tolerances CQO model is fitted to the hunting spiders data. Because \code{I.tolerances = TRUE}, it is a good idea to center all the \eqn{x_2} variables first. Upon fitting the model, the actual standard deviations of the site scores are computed. Ideally, the \code{isd.latvar} argument should have had this value for the best chances of getting good initial values.
For comparison, the model is refitted with that value and it should run faster and more reliably. } \section{Warning }{ The default value of \code{Bestof} is a bare minimum for many datasets; therefore it will be necessary to increase its value to increase the chances of obtaining the global solution. %Suppose \code{FastAlgorithm = FALSE}. %The fitted values of QRR-VGLMs can be restricted to lie between two values %in order to help make the computation numerically stable. For some data %sets, it may be necessary to use \code{Eta.range} to obtain convergence; %however, the fitted values etc. will no longer be accurate, especially at %small and/or large values. Convergence is slower when \code{Eta.range} %is used to restrict values. } \seealso{ \code{\link{cqo}}, \code{\link{rcqo}}, \code{\link{Coef.qrrvglm}}, \code{\link{Coef.qrrvglm-class}}, \code{\link[stats]{optim}}, \code{\link{binomialff}}, \code{\link{poissonff}}, \code{\link{negbinomial}}, \code{\link{gamma2}}, \code{\link{gaussianff}}. % \code{\link{rrvglm}}, % \code{\link{rrvglm.control}}, % \code{\link{rrvglm.optim.control}}, } \examples{ \dontrun{ # Poisson CQO with equal tolerances set.seed(111) # This leads to the global solution hspider[,1:6] <- scale(hspider[,1:6]) # Good idea when I.tolerances = TRUE p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, quasipoissonff, data = hspider, eq.tolerances = TRUE) sort(deviance(p1, history = TRUE)) # A history of all the iterations (isd.latvar <- apply(latvar(p1), 2, sd)) # Should be approx isd.latvar # Refit the model with better initial values set.seed(111) # This leads to the global solution p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, I.tolerances = TRUE, quasipoissonff, data = hspider, isd.latvar = isd.latvar) # Note the use of isd.latvar here sort(deviance(p1, history = TRUE)) # A history of all the iterations } } \keyword{models} \keyword{regression} %\dontrun{ %# 20120221; withdrawn for a while coz it creates a lot of error messages. %# Negative binomial CQO; smallest deviance is about 275.389 %set.seed(1234) # This leads to a reasonable (but not the global) solution? %nb1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, % Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ % WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, % I.tol = FALSE, eq.tol = TRUE, # A good idea for negbinomial % fam = negbinomial, data = hspider) %sort(deviance(nb1, history = TRUE)) # A history of all the iterations %summary(nb1) %} %\dontrun{ lvplot(nb1, lcol = 1:12, y = TRUE, pcol = 1:12) } VGAM/man/dirmultinomial.Rd0000644000176200001440000001565413135276753015116 0ustar liggesusers\name{dirmultinomial} \alias{dirmultinomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{Fitting a Dirichlet-Multinomial Distribution } \description{ Fits a Dirichlet-multinomial distribution to a matrix response. } \usage{ dirmultinomial(lphi = "logit", iphi = 0.10, parallel = FALSE, zero = "M") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lphi}{ Link function applied to the \eqn{\phi}{phi} parameter, which lies in the open unit interval \eqn{(0,1)}. See \code{\link{Links}} for more choices. } \item{iphi}{ Numeric.
Initial value for \eqn{\phi}{phi}. Must be in the open unit interval \eqn{(0,1)}. If a failure to converge occurs then try assigning this argument a different value. } \item{parallel}{ A logical (formula not allowed here) indicating whether the probabilities \eqn{\pi_1,\ldots,\pi_{M-1}}{pi_1,\ldots,pi_{M-1}} are to be equal via equal coefficients. Note \eqn{\pi_M}{pi_M} will generally be different from the other probabilities. Setting \code{parallel = TRUE} will only work if you also set \code{zero = NULL} because of interference between these arguments (with respect to the intercept term). } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \eqn{\{1,2,\ldots,M\}}. If the character \code{"M"} then this means the numerical value \eqn{M}, which corresponds to linear/additive predictor associated with \eqn{\phi}{phi}. Setting \code{zero = NULL} means none of the values from the set \eqn{\{1,2,\ldots,M\}}. } } \details{ The Dirichlet-multinomial distribution arises from a multinomial distribution where the probability parameters are not constant but are generated from a multivariate distribution called the Dirichlet distribution. The Dirichlet-multinomial distribution has probability function \deqn{P(Y_1=y_1,\ldots,Y_M=y_M) = {N_{*} \choose {y_1,\ldots,y_M}} \frac{ \prod_{j=1}^{M} \prod_{r=1}^{y_{j}} (\pi_j (1-\phi) + (r-1)\phi)}{ \prod_{r=1}^{N_{*}} (1-\phi + (r-1)\phi)}}{% P(Y_1=y_1,\ldots,Y_M=y_M) = C_{y_1,\ldots,y_M}^{N_{*}} prod_{j=1}^{M} prod_{r=1}^{y_{j}} (pi_j (1-phi) + (r-1)phi) / prod_{r=1}^{N_{*}} (1-phi + (r-1)phi)} where \eqn{\phi}{phi} is the \emph{over-dispersion} parameter and \eqn{N_{*} = y_1+\cdots+y_M}{N_* = y_1+\cdots+y_M}. Here, \eqn{a \choose b}{C_b^a} means ``\eqn{a} choose \eqn{b}'' and refers to combinations (see \code{\link[base]{choose}}). The above formula applies to each row of the matrix response. In this \pkg{VGAM} family function the first \eqn{M-1} linear/additive predictors correspond to the first \eqn{M-1} probabilities via \deqn{\eta_j = \log(P[Y=j]/ P[Y=M]) = \log(\pi_j/\pi_M)}{% eta_j = log(P[Y=j]/ P[Y=M]) = log(pi_j/pi_M)} where \eqn{\eta_j}{eta_j} is the \eqn{j}th linear/additive predictor (\eqn{\eta_M=0}{eta_M=0} by definition for \eqn{P[Y=M]} but not for \eqn{\phi}{phi}) and \eqn{j=1,\ldots,M-1}. The \eqn{M}th linear/additive predictor corresponds to \code{lphi} applied to \eqn{\phi}{phi}. Note that \eqn{E(Y_j) = N_* \pi_j}{E(Y_j) = N_* pi_j} but the probabilities (returned as the fitted values) \eqn{\pi_j}{pi_j} are bundled together as a \eqn{M}-column matrix. The quantities \eqn{N_*} are returned as the prior weights. The beta-binomial distribution is a special case of the Dirichlet-multinomial distribution when \eqn{M=2}; see \code{\link{betabinomial}}. It is easy to show that the first shape parameter of the beta distribution is \eqn{shape1=\pi(1/\phi-1)}{shape1=pi*(1/phi-1)} and the second shape parameter is \eqn{shape2=(1-\pi)(1/\phi-1)}{shape2=(1-pi)*(1/phi-1)}. Also, \eqn{\phi=1/(1+shape1+shape2)}{phi=1/(1+shape1+shape2)}, which is known as the \emph{intra-cluster correlation} coefficient. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. 
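% A quick numeric check of the beta-binomial correspondence described
% above (the values of phi and pi are hypothetical):
% phi <- 0.2; pi1 <- 0.4
% shape1 <- pi1 * (1/phi - 1)        # 1.6
% shape2 <- (1 - pi1) * (1/phi - 1)  # 2.4
% 1 / (1 + shape1 + shape2)          # Recovers phi = 0.2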
If the model is an intercept-only model then \code{@misc} (which is a list) has a component called \code{shape} which is a vector with the \eqn{M} values \eqn{\pi_j(1/\phi-1)}{pi_j * (1/phi-1)}. % zz not sure: These are the shape parameters of the underlying % Dirichlet distribution. } \references{ Paul, S. R., Balasooriya, U. and Banerjee, T. (2005) Fisher information matrix of the Dirichlet-multinomial distribution. \emph{Biometrical Journal}, \bold{47}, 230--236. Tvedebrink, T. (2010) Overdispersion in allelic counts and \eqn{\theta}-correction in forensic genetics. \emph{Theoretical Population Biology}, \bold{78}, 200--210. Yu, P. and Shaw, C. A. (2014). An Efficient Algorithm for Accurate Computation of the Dirichlet-Multinomial Log-Likelihood Function. \emph{Bioinformatics}, \bold{30}, 1547--54. % url {doi:10.1093/bioinformatics/btu079}. % number = {11}, % first published online February 11, 2014 } \author{ Thomas W. Yee } \section{Warning }{ This \pkg{VGAM} family function is prone to numerical problems, especially when there are covariates. } \note{ The response can be a matrix of non-negative integers, or else a matrix of sample proportions and the total number of counts in each row specified using the \code{weights} argument. This dual input option is similar to \code{\link{multinomial}}. To fit a `parallel' model with the \eqn{\phi}{phi} parameter being an intercept-only you will need to use the \code{constraints} argument. Currently, Fisher scoring is implemented. To compute the expected information matrix a \code{for} loop is used; this may be very slow when the counts are large. Additionally, convergence may be slower than usual due to round-off error when computing the expected information matrices. } \seealso{ \code{\link{dirmul.old}}, \code{\link{betabinomial}}, \code{\link{betabinomialff}}, \code{\link{dirichlet}}, \code{\link{multinomial}}. } \examples{ nn <- 5; M <- 4; set.seed(1) ydata <- data.frame(round(matrix(runif(nn * M, max = 100), nn, M))) # Integer counts colnames(ydata) <- paste("y", 1:M, sep = "") fit <- vglm(cbind(y1, y2, y3, y4) ~ 1, dirmultinomial, data = ydata, trace = TRUE) head(fitted(fit)) depvar(fit) # Sample proportions weights(fit, type = "prior", matrix = FALSE) # Total counts per row \dontrun{ ydata <- transform(ydata, x2 = runif(nn)) fit <- vglm(cbind(y1, y2, y3, y4) ~ x2, dirmultinomial, data = ydata, trace = TRUE) Coef(fit) coef(fit, matrix = TRUE) (sfit <- summary(fit)) vcov(sfit) } } \keyword{models} \keyword{regression} % zz \eqn{\alpha_j = P[Y=j] \times (1/\phi - 1)}{alpha_j = P[Y=j] * % (1/phi - 1)} are the shape parameters, % for \eqn{j=1,\ldots,M}. % Currently, initial values can be improved upon. % \dontrun{ # This does not work: VGAM/man/weibullR.Rd0000644000176200001440000001574013135276753013656 0ustar liggesusers\name{weibullR} \alias{weibullR} %\alias{weibullff} %\alias{weibull.lsh} %\alias{weibull3} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Weibull Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter Weibull distribution. No observations should be censored. } \usage{ weibullR(lscale = "loge", lshape = "loge", iscale = NULL, ishape = NULL, lss = TRUE, nrfs = 1, probs.y = c(0.2, 0.5, 0.8), imethod = 1, zero = "shape") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape, lscale}{ Parameter link functions applied to the (positive) shape parameter (called \eqn{a} below) and (positive) scale parameter (called \eqn{b} below). 
See \code{\link{Links}} for more choices. } \item{ishape, iscale}{ Optional initial values for the shape and scale parameters. } \item{nrfs}{ Currently this argument is ignored. Numeric, of length one, with value in \eqn{[0,1]}. Weighting factor between Newton-Raphson and Fisher scoring. The value 0 means pure Newton-Raphson, while 1 means pure Fisher scoring. The default value uses a mixture of the two algorithms, and retaining positive-definite working weights. } \item{imethod}{ Initialization method used if there are censored observations. Currently only the values 1 and 2 are allowed. } \item{zero, probs.y, lss}{ Details at \code{\link{CommonVGAMffArguments}}. } } \details{ The Weibull density for a response \eqn{Y} is \deqn{f(y;a,b) = a y^{a-1} \exp[-(y/b)^a] / (b^a)}{% f(y;a,b) = a y^(a-1) * exp(-(y/b)^a) / [b^a]} for \eqn{a > 0}, \eqn{b > 0}, \eqn{y > 0}. The cumulative distribution function is \deqn{F(y;a,b) = 1 - \exp[-(y/b)^a].}{% F(y;a,b) = 1 - exp(-(y/b)^a).} The mean of \eqn{Y} is \eqn{b \, \Gamma(1+ 1/a)}{b * gamma(1+ 1/a)} (returned as the fitted values), and the mode is at \eqn{b\,(1-1/a)^{1/a}}{b * (1- 1/a)^(1/a)} when \eqn{a>1}. The density is unbounded for \eqn{a<1}. The \eqn{k}th moment about the origin is \eqn{E(Y^k) = b^k \, \Gamma(1+ k/a)}{E(Y^k) = b^k * gamma(1+ k/a)}. The hazard function is \eqn{a t^{a-1} / b^a}{a * t^(a-1) / b^a}. This \pkg{VGAM} family function currently does not handle censored data. Fisher scoring is used to estimate the two parameters. Although the expected information matrices used here are valid in all regions of the parameter space, the regularity conditions for maximum likelihood estimation are satisfied only if \eqn{a>2} (according to Kleiber and Kotz (2003)). If this is violated then a warning message is issued. One can enforce \eqn{a>2} by choosing \code{lshape = logoff(offset = -2)}. Common values of the shape parameter lie between 0.5 and 3.5. Summarized in Harper et al. (2011), for inference, there are 4 cases to consider. If \eqn{a \le 1} then the MLEs are not consistent (and the smallest observation becomes a hyperefficient solution for the location parameter in the 3-parameter case). If \eqn{1 < a < 2} then MLEs exist but are not asymptotically normal. If \eqn{a = 2} then the MLEs exist and are normal and asymptotically efficient but with a slower convergence rate than when \eqn{a > 2}. If \eqn{a > 2} then MLEs have classical asymptotic properties. The 3-parameter (location is the third parameter) Weibull can be estimated by maximizing a profile log-likelihood (see, e.g., Harper et al. (2011) and Lawless (2003)), else try \code{\link{gev}} which is a better parameterization. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1994) \emph{Continuous Univariate Distributions}, 2nd edition, Volume 1, New York: Wiley. Lawless, J. F. (2003) \emph{Statistical Models and Methods for Lifetime Data}, 2nd ed. {Hoboken, NJ, USA: John Wiley & Sons}. Rinne, Horst. (2009) \emph{The Weibull Distribution: A Handbook}. Boca Raton, FL, USA: CRC Press. Gupta, R. D. and Kundu, D. 
(2006) On the comparison of Fisher information of the Weibull and GE distributions, \emph{Journal of Statistical Planning and Inference}, \bold{136}, 3130--3144. Harper, W. V. and Eschenbach, T. G. and James, T. R. (2011) Concerns about Maximum Likelihood Estimation for the Three-Parameter {W}eibull Distribution: Case Study of Statistical Software, \emph{The American Statistician}, \bold{65(1)}, {44--54}. Smith, R. L. (1985) Maximum likelihood estimation in a class of nonregular cases. \emph{Biometrika}, \bold{72}, 67--90. Smith, R. L. and Naylor, J. C. (1987) A comparison of maximum likelihood and Bayesian estimators for the three-parameter Weibull distribution. \emph{Applied Statistics}, \bold{36}, 358--369. } \author{ T. W. Yee } \note{ Successful convergence depends on having reasonably good initial values. If the initial values chosen by this function are not good, make use of the two initial value arguments. This \pkg{VGAM} family function handles multiple responses. The Weibull distribution is often an alternative to the lognormal distribution. The inverse Weibull distribution, which is that of \eqn{1/Y} where \eqn{Y} has a Weibull(\eqn{a,b}) distribution, is known as the log-Gompertz distribution. There are problems implementing the three-parameter Weibull distribution. These are because the classical regularity conditions for the asymptotic properties of the MLEs are not satisfied because the support of the distribution depends on one of the parameters. Other related distributions are the Maxwell and Rayleigh distributions. } \section{Warning}{ This function is under development to handle other censoring situations. The version of this function which will handle censored data will be called \code{cenweibull()}. It is currently being written and will use \code{\link{SurvS4}} as input. It should be released in later versions of \pkg{VGAM}. If the shape parameter is less than two then misleading inference may result, e.g., in the \code{summary} and \code{vcov} of the object. } \seealso{ \code{\link{weibull.mean}}, \code{\link[stats:Weibull]{dweibull}}, \code{\link{truncweibull}}, \code{\link{gev}}, \code{\link{lognormal}}, \code{\link{expexpff}}, \code{\link{maxwell}}, \code{\link{rayleigh}}, \code{\link{gumbelII}}. } \examples{ wdata <- data.frame(x2 = runif(nn <- 1000)) # Complete data wdata <- transform(wdata, y1 = rweibull(nn, shape = exp(1), scale = exp(-2 + x2)), y2 = rweibull(nn, shape = exp(2), scale = exp( 1 - x2))) fit <- vglm(cbind(y1, y2) ~ x2, weibullR, data = wdata, trace = TRUE) coef(fit, matrix = TRUE) vcov(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/plotqtplot.lmscreg.Rd0000644000176200001440000000767013135276753015745 0ustar liggesusers\name{plotqtplot.lmscreg} \alias{plotqtplot.lmscreg} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Quantile Plot for LMS Quantile Regression } \description{ Plots the quantiles associated with a LMS quantile regression. } \usage{ plotqtplot.lmscreg(fitted.values, object, newdata = NULL, percentiles = object@misc$percentiles, lp = NULL, add.arg = FALSE, y = if (length(newdata)) FALSE else TRUE, spline.fit = FALSE, label = TRUE, size.label = 0.06, xlab = NULL, ylab = "", pch = par()$pch, pcex = par()$cex, pcol.arg = par()$col, xlim = NULL, ylim = NULL, llty.arg = par()$lty, lcol.arg = par()$col, llwd.arg = par()$lwd, tcol.arg = par()$col, tadj = 1, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{fitted.values}{ Matrix of fitted values.
} \item{object}{ A \pkg{VGAM} quantile regression model, i.e., an object produced by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}} with a family function beginning with \code{"lms."}, e.g., \code{\link{lms.yjn}}. } \item{newdata}{ Data frame at which predictions are made. By default, the original data are used. } \item{percentiles}{ Numerical vector with values between 0 and 100 that specify the percentiles (quantiles). The default is to use the percentiles when fitting the model. For example, the value 50 corresponds to the median. } \item{lp}{ Length of \code{percentiles}. } \item{add.arg}{ Logical. Add the quantiles to an existing plot? } \item{y}{ Logical. Add the response as points to the plot? } \item{spline.fit}{ Logical. Add a spline curve to the plot? } \item{label}{ Logical. Add the percentiles (as text) to the plot? } \item{size.label}{ Numeric. How much room to leave at the RHS for the label. It is in percent (of the range of the primary variable). } \item{xlab}{ Caption for the x-axis. See \code{\link[graphics]{par}}. } \item{ylab}{ Caption for the y-axis. See \code{\link[graphics]{par}}. } \item{pch}{ Plotting character. See \code{\link[graphics]{par}}. } \item{pcex}{ Character expansion of the points. See \code{\link[graphics]{par}}. } \item{pcol.arg}{ Color of the points. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{xlim}{ Limits of the x-axis. See \code{\link[graphics]{par}}. } \item{ylim}{ Limits of the y-axis. See \code{\link[graphics]{par}}. } \item{llty.arg}{ Line type. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{lcol.arg}{ Color of the lines. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{llwd.arg}{ Line width. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{tcol.arg}{ Color of the text (if \code{label} is \code{TRUE}). See the \code{col} argument of \code{\link[graphics]{par}}. } \item{tadj}{ Text justification. See the \code{adj} argument of \code{\link[graphics]{par}}. } \item{\dots}{ Arguments passed into the \code{plot} function when setting up the entire plot. Useful arguments here include \code{main} and \code{las}. } } \details{ The above graphical parameters offer some flexibility when plotting the quantiles. } \value{ The matrix of fitted values. } \references{ Yee, T. W. (2004) Quantile regression via vector generalized additive models. \emph{Statistics in Medicine}, \bold{23}, 2295--2315. %Documentation accompanying the \pkg{VGAM} package at %\url{http://www.stat.auckland.ac.nz/~yee} %contains further information and examples. } \author{ Thomas W. Yee } \note{ While the graphical arguments of this function are useful to the user, this function should not be called directly. } \seealso{ \code{\link{qtplot.lmscreg}}. } \examples{\dontrun{ fit <- vgam(BMI ~ s(age, df = c(4,2)), lms.bcn(zero = 1), data = bmi.nz) qtplot(fit) qtplot(fit, perc = c(25,50,75,95), lcol = "blue", tcol = "blue", llwd = 2) } } \keyword{graphs} \keyword{models} \keyword{regression} VGAM/man/otzetaUC.Rd0000644000176200001440000000366613135276753013627 0ustar liggesusers\name{Otzeta} \alias{Otzeta} \alias{dotzeta} \alias{potzeta} \alias{qotzeta} \alias{rotzeta} \title{ One-truncated Zeta Distribution } \description{ Density, distribution function, quantile function, and random generation for the one-truncated zeta distribution.
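% A hedged sketch of the truncation identity (support is x = 2, 3, ...),
% expressed with dzeta() from \pkg{VGAM} (assuming its usual
% \code{shape} argument):
% x <- 2:10; shape <- 0.5
% all.equal(dotzeta(x, shape),
%           dzeta(x, shape) / (1 - dzeta(1, shape)))  # Should be TRUE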
} \usage{ dotzeta(x, shape, log = FALSE) potzeta(q, shape, log.p = FALSE) qotzeta(p, shape) rotzeta(n, shape) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q, p, n}{ Same as in \code{\link[stats]{runif}}. } \item{shape}{ The positive shape parameter described in \code{\link{zetaff}}. Here it is called \code{shape} because it is positive. % For \code{rotzeta()} this parameter must be of length 1. } \item{log, log.p}{ Same as in \code{\link[stats]{runif}}. } } \details{ The one-truncated zeta distribution is a zeta distribution but with the probability of a one being zero. The other probabilities are scaled to add to unity. Some more details are given in \code{\link{zetaff}}. } \value{ \code{dotzeta} gives the density, \code{potzeta} gives the distribution function, \code{qotzeta} gives the quantile function, and \code{rotzeta} generates random deviates. } %\references{ %} \author{ T. W. Yee } \note{ Given some response data, the \pkg{VGAM} family function \code{\link{otzeta}} estimates the parameter \code{shape}. % Function \code{potzeta()} suffers from the problems that % \code{\link{plog}} sometimes has. } \seealso{ \code{\link{Otzeta}}, \code{\link{zetaff}}, \code{\link{Oizeta}}. } \examples{ dotzeta(1:20, 0.5) rotzeta(20, 0.5) \dontrun{ shape <- 0.8; x <- 1:10 plot(x, dotzeta(x, shape = shape), type = "h", ylim = 0:1, sub = "shape=0.8", las = 1, col = "blue", ylab = "Probability", main = "1-truncated zeta distribution: blue=PMF; orange=CDF") lines(x + 0.1, potzeta(x, shape = shape), col = "orange", lty = 3, type = "h") } } \keyword{distribution} VGAM/man/auxposbernoulli.t.Rd0000644000176200001440000000510413135276753015557 0ustar liggesusers\name{aux.posbernoulli.t} \alias{aux.posbernoulli.t} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Auxiliary Function for the Positive Bernoulli Family Function with Time Effects } \description{ Returns behavioural effects indicator variables from a capture history matrix. } \usage{ aux.posbernoulli.t(y, check.y = FALSE, rename = TRUE, name = "bei") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{y}{ Capture history matrix. Rows are animals, columns are sampling occasions, and values should be 0s and 1s only. } \item{check.y}{ Logical, if \code{TRUE} then some basic checking is performed. } \item{rename, name}{ If \code{rename = TRUE} then the behavioural effects indicators are named using the value of \code{name} as the prefix. If \code{FALSE} then use the same column names as \code{y}. } } \details{ This function can help fit certain capture--recapture models (commonly known as \eqn{M_{tb}} or \eqn{M_{tbh}} (no prefix \eqn{h} means it is an intercept-only model) in the literature). See \code{\link{posbernoulli.t}} for details. } \value{ A list with the following components. \describe{ \item{cap.hist1}{ A matrix the same dimension as \code{y}. In any particular row there are 0s up to the first capture. Then there are 1s thereafter. } \item{cap1}{ A vector specifying which time occasion the animal was first captured. } \item{y0i}{ Number of noncaptures before the first capture. } \item{yr0i}{ Number of noncaptures after the first capture. } \item{yr1i}{ Number of recaptures after the first capture. } } }
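% A small illustration of these components (a hypothetical single
% capture history 0, 0, 1, 0, 1):
% ah <- aux.posbernoulli.t(rbind(c(0, 0, 1, 0, 1)))
% ah$cap1   # First capture at occasion 3
% ah$y0i    # 2 noncaptures before the first capture
% ah$yr0i   # 1 noncapture after the first capture
% ah$yr1i   # 1 recapture after the first capture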
% Rather than manually setting these up, they may be more conveniently % obtained by \code{\link{aux.posbernoulli.t}}. See % the example below. % % %} %\section{Warning }{ % % See \code{\link{posbernoulli.tb}}. % % %} \seealso{ \code{\link{posbernoulli.t}}, \code{\link{deermice}}. } \examples{ # Fit a M_tbh model to the deermice data: (pdata <- aux.posbernoulli.t(with(deermice, cbind(y1, y2, y3, y4, y5, y6)))) deermice <- data.frame(deermice, bei = 0, # Add this pdata$cap.hist1) # Incorporate these head(deermice) # Augmented with behavioural effect indicator variables tail(deermice) } \keyword{models} \keyword{regression} VGAM/man/beggs.Rd0000644000176200001440000000335013135276753013152 0ustar liggesusers\name{beggs} \alias{beggs} \docType{data} \title{Bacon and Eggs Data} \description{ Purchasing of bacon and eggs. } \usage{ data(beggs) } \format{ Data frame of a two way table. \describe{ \item{b0, b1, b2, b3, b4}{ The \code{b} refers to bacon. The number of times bacon was purchased was 0, 1, 2, 3, or 4. } \item{e0, e1, e2, e3, e4}{ The \code{e} refers to eggs. The number of times eggs was purchased was 0, 1, 2, 3, or 4. } } } \details{ The data is from Information Resources, Inc., a consumer panel based in a large US city [see Bell and Lattin (1998) for further details]. Starting in June 1991, the purchases in the bacon and fresh eggs product categories for a sample of 548 households over four consecutive store trips was tracked. Only those grocery shopping trips with a total basket value of at least five dollars was considered. For each household, the total number of bacon purchases in their four eligible shopping trips and the total number of egg purchases (usually a package of eggs) for the same trips, were counted. % Data from Bell and Latin (1998). % Also see Danaher and Hardie (2005). } \source{ Bell, D. R. and Lattin, J. M. (1998) Shopping Behavior and Consumer Preference for Store Price Format: Why `Large Basket' Shoppers Prefer EDLP. \emph{Marketing Science}, \bold{17}, 66--88. } \references{ Danaher, P. J. and Hardie, B. G. S. (2005) Bacon with Your Eggs? Applications of a New Bivariate Beta-Binomial Distribution. \emph{American Statistician}, \bold{59}(4), 282--286. } \seealso{ \code{\link[VGAM]{rrvglm}}, \code{\link[VGAM]{rcim}}, \code{\link[VGAM]{grc}}. } \examples{ beggs colSums(beggs) rowSums(beggs) } \keyword{datasets} % % VGAM/man/rrvglm.Rd0000644000176200001440000002373013135276753013400 0ustar liggesusers\name{rrvglm} \alias{rrvglm} %- Also NEED an `\alias' for EACH other topic documented here. \title{ Fitting Reduced-Rank Vector Generalized Linear Models (RR-VGLMs) } \description{ A \emph{reduced-rank vector generalized linear model} (RR-VGLM) is fitted. RR-VGLMs are VGLMs but some of the constraint matrices are estimated. In this documentation, \eqn{M} is the number of linear predictors. } \usage{ rrvglm(formula, family = stop("argument 'family' needs to be assigned"), data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = rrvglm.control(...), offset = NULL, method = "rrvglm.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = NULL, qr.arg = FALSE, smart = TRUE, ...) } %- maybe also `usage' for other objects documented here. \arguments{ \item{formula, family, weights}{ See \code{\link{vglm}}. } \item{data}{ an optional data frame containing the variables in the model. 
  By default the variables are taken from
  \code{environment(formula)}, typically the environment from
  which \code{rrvglm} is called.
  }
  \item{subset, na.action}{
  See \code{\link{vglm}}.
  }
  \item{etastart, mustart, coefstart}{
  See \code{\link{vglm}}.
  }
  \item{control}{
  a list of parameters for controlling the fitting process.
  See \code{\link{rrvglm.control}} for details.
  }
  \item{offset, model, contrasts}{
  See \code{\link{vglm}}.
  }
  \item{method}{
  the method to be used in fitting the model.
  The default (and presently only) method \code{rrvglm.fit}
  uses iteratively reweighted least squares (IRLS).
  }
  \item{x.arg, y.arg}{
  logical values indicating whether the model matrix and
  response vector/matrix used in the fitting process should
  be assigned in the \code{x} and \code{y} slots.
  Note the model matrix is the LM model matrix; to get the VGLM
  model matrix type \code{model.matrix(vglmfit)} where
  \code{vglmfit} is a \code{vglm} object.
  }
  \item{constraints}{
  See \code{\link{vglm}}.
  }
  \item{extra, smart, qr.arg}{
  See \code{\link{vglm}}.
  }
  \item{\dots}{
  further arguments passed into \code{\link{rrvglm.control}}.
  }
}
\details{
  The central formula is given by
  \deqn{\eta = B_1^T x_1 + A \nu}{%
        eta = B_1^T x_1 + A nu}
  where \eqn{x_1}{x1} is a vector (usually just a 1 for an intercept),
  \eqn{x_2}{x2} is another vector of explanatory variables, and
  \eqn{\nu = C^T x_2}{nu = C^T x_2} is an \eqn{R}-vector of
  latent variables.
  Here, \eqn{\eta}{eta} is a vector of linear predictors, e.g.,
  the \eqn{m}th element is
  \eqn{\eta_m = \log(E[Y_m])}{eta_m = log(E[Y_m])} for the
  \eqn{m}th Poisson response.
  The matrices \eqn{B_1}, \eqn{A} and \eqn{C} are estimated from the
  data, i.e., contain the regression coefficients.  For ecologists,
  the central formula represents a \emph{constrained linear ordination}
  (CLO) since it is linear in the latent variables.  It means that
  the response is a monotonically increasing or decreasing function
  of the latent variables.

  For identifiability it is common to enforce \emph{corner constraints}
  on \eqn{A}: by default, the top \eqn{R} by \eqn{R} submatrix is fixed
  to be the order-\eqn{R} identity matrix and the remainder of \eqn{A}
  is estimated.

  The underlying algorithm of RR-VGLMs is iteratively reweighted least
  squares (IRLS) with an optimizing algorithm applied within each IRLS
  iteration (e.g., alternating algorithm).
  In theory, any \pkg{VGAM} family function that works for
  \code{\link{vglm}} and \code{\link{vgam}} should work for
  \code{rrvglm} too.  The function that actually does the work is
  \code{rrvglm.fit}; it is \code{vglm.fit} with some extra code.
}
\value{
  An object of class \code{"rrvglm"}, which has the same slots as
  a \code{"vglm"} object.  The only difference is that some of the
  constraint matrices are estimated rather than known.  But \pkg{VGAM}
  stores the models in the same way internally.  The slots of
  \code{"vglm"} objects are described in \code{\link{vglm-class}}.
}
\references{
Yee, T. W. and Hastie, T. J. (2003)
Reduced-rank vector generalized linear models.
\emph{Statistical Modelling},
\bold{3}, 15--41.

Yee, T. W. (2004)
A new technique for maximum-likelihood
canonical Gaussian ordination.
\emph{Ecological Monographs},
\bold{74}, 685--701.

Anderson, J. A. (1984)
Regression and ordered categorical variables.
\emph{Journal of the Royal Statistical Society, Series B, Methodological},
\bold{46}, 1--30.

Yee, T. W. (2014)
Reduced-rank vector generalized linear models with two linear predictors.
\emph{Computational Statistics and Data Analysis},
\bold{71}, 889--902.
% Documentation accompanying the \pkg{VGAM} package at
% \url{http://www.stat.auckland.ac.nz/~yee}
% contains further information and examples.
}
\author{ Thomas W. Yee }
\note{
  The arguments of \code{rrvglm} are in general the same as those of
  \code{\link{vglm}} but with some extras in
  \code{\link{rrvglm.control}}.

  The smart prediction (\code{\link{smartpred}}) library is packaged
  with the \pkg{VGAM} library.

  In an example below, a rank-1 \emph{stereotype} model of Anderson
  (1984) is fitted to some car data.  The reduced-rank regression is
  performed, adjusting for two covariates.  Setting a trivial
  constraint matrix for the variables in \eqn{x_2}{x2} avoids a warning
  message when it is overwritten by a (common) estimated constraint
  matrix.  It shows that German cars tend to be more expensive than
  American cars, given a car of fixed weight and width.

  If \code{fit <- rrvglm(..., data = mydata)} then
  \code{summary(fit)} requires corner constraints and no missing
  values in \code{mydata}.
  Often the estimated variance-covariance matrix of the parameters is
  not positive-definite; if this occurs, try refitting the model with
  a different value for \code{Index.corner}.

  For \emph{constrained quadratic ordination} (CQO) see
  \code{\link{cqo}} for more details about QRR-VGLMs.

  With multiple binary responses, one must use
  \code{binomialff(multiple.responses = TRUE)} to indicate that the
  response is a matrix with one response per column.
  Otherwise, it is interpreted as a single binary response variable.
}
% zzz; arguments of \code{\link{vglm}} are definitive. They're copied here.
\seealso{
    \code{\link{rrvglm.control}},
    \code{\link{lvplot.rrvglm}}
    (same as \code{\link{biplot.rrvglm}}),
    \code{\link{rrvglm-class}},
    \code{\link{grc}},
    \code{\link{cqo}},
    \code{\link{vglmff-class}},
    \code{\link{vglm}},
    \code{\link{vglm-class}},
    \code{\link{smartpred}},
    \code{rrvglm.fit}.
    Special family functions include
    \code{\link{negbinomial}},
    \code{\link{zipoisson}}
    and \code{\link{zinegbinomial}}
    (see Yee (2014) and \pkg{COZIGAM}).
    Methods functions include
    \code{\link{Coef.rrvglm}},
    \code{\link{calibrate.rrvglm}},
    \code{summary.rrvglm},
    etc.
    Data include
    \code{\link{crashi}}.
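% As a numerical check of the central formula above, the pieces can be
% multiplied out by hand; a minimal sketch (not run; it assumes the
% rank-1 fit 'rrnb2' of Example 1 below, whose x1 part is just the
% intercept):
% A.mat  <- Coef(rrnb2)@A     # M x R matrix
% B1.mat <- Coef(rrnb2)@B1    # p1 x M matrix
% eta.by.hand <- matrix(1, nrow(mydata), 1) %*% B1.mat +
%                latvar(rrnb2) %*% t(A.mat)  # eta = B1^T x1 + A nu
% max(abs(predict(rrnb2) - eta.by.hand))     # Should be ~0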
% \code{\link{qrrvglm.control}}, % \code{\link{vcovqrrvglm}}, } \examples{ \dontrun{ # Example 1: RR negative binomial with Var(Y) = mu + delta1 * mu^delta2 nn <- 1000 # Number of observations delta1 <- 3.0 # Specify this delta2 <- 1.5 # Specify this; should be greater than unity a21 <- 2 - delta2 mydata <- data.frame(x2 = runif(nn), x3 = runif(nn)) mydata <- transform(mydata, mu = exp(2 + 3 * x2 + 0 * x3)) mydata <- transform(mydata, y2 = rnbinom(nn, mu = mu, size = (1/delta1)*mu^a21)) plot(y2 ~ x2, data = mydata, pch = "+", col = 'blue', las = 1, main = paste("Var(Y) = mu + ", delta1, " * mu^", delta2, sep = "")) rrnb2 <- rrvglm(y2 ~ x2 + x3, negbinomial(zero = NULL), data = mydata, trace = TRUE) a21.hat <- (Coef(rrnb2)@A)["loge(size)", 1] beta11.hat <- Coef(rrnb2)@B1["(Intercept)", "loge(mu)"] beta21.hat <- Coef(rrnb2)@B1["(Intercept)", "loge(size)"] (delta1.hat <- exp(a21.hat * beta11.hat - beta21.hat)) (delta2.hat <- 2 - a21.hat) # exp(a21.hat * predict(rrnb2)[1,1] - predict(rrnb2)[1,2]) # delta1.hat summary(rrnb2) # Obtain a 95 percent confidence interval for delta2: se.a21.hat <- sqrt(vcov(rrnb2)["I(latvar.mat)", "I(latvar.mat)"]) ci.a21 <- a21.hat + c(-1, 1) * 1.96 * se.a21.hat (ci.delta2 <- 2 - rev(ci.a21)) # The 95 percent confidence interval Confint.rrnb(rrnb2) # Quick way to get it # Plot the abundances and fitted values against the latent variable plot(y2 ~ latvar(rrnb2), data = mydata, col = "blue", xlab = "Latent variable", las = 1) ooo <- order(latvar(rrnb2)) lines(fitted(rrnb2)[ooo] ~ latvar(rrnb2)[ooo], col = "orange") # Example 2: stereotype model (reduced-rank multinomial logit model) data(car.all) scar <- subset(car.all, is.element(Country, c("Germany", "USA", "Japan", "Korea"))) fcols <- c(13,14,18:20,22:26,29:31,33,34,36) # These are factors scar[, -fcols] <- scale(scar[, -fcols]) # Standardize all numerical vars ones <- matrix(1, 3, 1) clist <- list("(Intercept)" = diag(3), Width = ones, Weight = ones, Disp. = diag(3), Tank = diag(3), Price = diag(3), Frt.Leg.Room = diag(3)) set.seed(111) fit <- rrvglm(Country ~ Width + Weight + Disp. + Tank + Price + Frt.Leg.Room, multinomial, data = scar, Rank = 2, trace = TRUE, constraints = clist, noRRR = ~ 1 + Width + Weight, Uncor = TRUE, Corner = FALSE, Bestof = 2) fit@misc$deviance # A history of the fits Coef(fit) biplot(fit, chull = TRUE, scores = TRUE, clty = 2, Ccex = 2, ccol = "blue", scol = "orange", Ccol = "darkgreen", Clwd = 2, main = "1=Germany, 2=Japan, 3=Korea, 4=USA") } } \keyword{models} \keyword{regression} %index <- with(car.all, Country == "Germany" | Country == "USA" | % Country == "Japan" | Country == "Korea") %scar <- car.all[index, ] # standardized car data %scar <- subset(car.all, % is.element(Country, c("Germany", "USA", "Japan", "Korea")) | % is.na(Country)) VGAM/man/bifgmexp.Rd0000644000176200001440000000660613135276753013673 0ustar liggesusers\name{bifgmexp} \alias{bifgmexp} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Bivariate Farlie-Gumbel-Morgenstern Exponential Distribution Family Function } \description{ Estimate the association parameter of FGM bivariate exponential distribution by maximum likelihood estimation. } \usage{ bifgmexp(lapar = "rhobit", iapar = NULL, tola0 = 0.01, imethod = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lapar}{ Link function for the association parameter \eqn{\alpha}{alpha}, which lies between \eqn{-1} and \eqn{1}. See \code{\link{Links}} for more choices and other information. } \item{iapar}{ Numeric. 
  Optional initial value for \eqn{\alpha}{alpha}.
  By default, an initial value is chosen internally.
  If a convergence failure occurs try assigning a different value.
  Assigning a value will override the argument \code{imethod}.
  }
  \item{tola0}{
  Positive numeric.
  If the estimate of \eqn{\alpha}{alpha} has an absolute value less than
  this then it is replaced by this value.
  This is an attempt to fix a numerical problem when the estimate is
  too close to zero.
  }
  \item{imethod}{
  An integer with value \code{1} or \code{2} which
  specifies the initialization method.
  If failure to converge occurs try the other value, or else specify a
  value for \code{iapar}.
  }
}
\details{
  The cumulative distribution function is
  \deqn{P(Y_1 \leq y_1, Y_2 \leq y_2) = e^{-y_1-y_2}
             ( 1 + \alpha [1 - e^{-y_1}] [1 - e^{-y_2}] )
       + 1 - e^{-y_1} - e^{-y_2} }{%
        P(Y1 <= y1, Y2 <= y2) = exp(-y1-y2) *
             ( 1 + alpha * [1 - exp(-y1)] * [1 - exp(-y2)] )
       + 1 - exp(-y1) - exp(-y2) }
  for \eqn{\alpha}{alpha} between \eqn{-1} and \eqn{1}.
  The support of the function is for \eqn{y_1>0}{y1>0} and
  \eqn{y_2>0}{y2>0}.
  The marginal distributions are exponential with unit mean.
  When \eqn{\alpha = 0}{alpha=0} then the random variables are
  independent, and this causes some problems in the estimation
  process since the distribution no longer depends on the parameter.

  A variant of Newton-Raphson is used, which only seems to work for an
  intercept model.
  It is a very good idea to set \code{trace = TRUE}.
}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}}
  and \code{\link{vgam}}.
}
\references{
Castillo, E., Hadi, A. S., Balakrishnan, N. and Sarabia, J. S. (2005)
\emph{Extreme Value and Related Models with Applications in
      Engineering and Science},
Hoboken, NJ, USA: Wiley-Interscience.
}
\author{ T. W. Yee }
\note{
  The response must be a two-column matrix.
  Currently, the fitted value is a matrix with two columns and values
  equal to 1.  This is because each marginal distribution corresponds
  to an exponential distribution with unit mean.

  This \pkg{VGAM} family function should be used with caution.
}
\seealso{
  \code{\link{bifgmcop}},
  \code{\link{bigumbelIexp}}.
}
\examples{
N <- 1000; mdata <- data.frame(y1 = rexp(N), y2 = rexp(N))
\dontrun{with(mdata, plot(y1, y2))}
fit <- vglm(cbind(y1, y2) ~ 1, bifgmexp, data = mdata, trace = TRUE)
fit <- vglm(cbind(y1, y2) ~ 1, bifgmexp, data = mdata,  # This may fail
            trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)
head(fitted(fit))
}
\keyword{models}
\keyword{regression}
VGAM/man/BICvlm.Rd0000644000176200001440000000703413135276753013202 0ustar liggesusers\name{BICvlm}
\alias{BICvlm}
%\alias{BICvglm}
\alias{BICvgam}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Bayesian Information Criterion }
\description{
  Calculates the Bayesian information criterion (BIC) for a fitted
  model object for which a log-likelihood value has been obtained.
}
\usage{
BICvlm(object, \dots, k = log(nobs(object)))
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{object, \dots}{
  Same as \code{\link{AICvlm}}.
  }
  \item{k}{
  Numeric, the penalty per parameter to be used;
  the default is \code{log(n)} where \code{n} is the number of
  observations.
  }
}
\details{
  The so-called BIC or SBC (Schwarz's Bayesian criterion) can be
  computed by calling \code{\link{AICvlm}} with a different \code{k}
  argument.
  See \code{\link{AICvlm}} for information and caveats.
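% For example, the relationship with \code{\link{AICvlm}} can be
% checked directly; a minimal sketch (not run; 'fit1' is the pneumo
% fit from the example below):
% abs(BIC(fit1) - AIC(fit1, k = log(nobs(fit1))))  # Should be 0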
}
\value{
  Returns a numeric value with the corresponding BIC, or \dots,
  depending on \code{k}.
}
\author{T. W. Yee. }
\note{
  BIC, AIC and other ICs can have many additive constants added to
  them.  The important things are the differences, since the minimum
  value corresponds to the best model.
  Preliminary testing shows absolute differences for some \pkg{VGAM}
  family functions such as \code{\link{gaussianff}};
  however, the values should agree for non-normal families.

  BIC has not been defined for QRR-VGLMs yet.
}
%\references{
%  Sakamoto, Y., Ishiguro, M., and Kitagawa G. (1986).
%  \emph{Akaike Information Criterion Statistics}.
%  D. Reidel Publishing Company.
%}
\section{Warning }{
  Like \code{\link{AICvlm}}, this code has not been double-checked.
  The general applicability of \code{BIC} for the VGLM/VGAM classes
  has not been developed fully.
  In particular, \code{BIC} should not be run on some \pkg{VGAM} family
  functions because of violation of certain regularity conditions, etc.

  Many \pkg{VGAM} family functions such as \code{\link{cumulative}}
  can have the number of observations absorbed into the prior weights
  argument (e.g., \code{weights} in \code{\link{vglm}}), either
  before or after fitting.  Almost all \pkg{VGAM} family functions
  can have the number of observations defined by the \code{weights}
  argument, e.g., as an observed frequency.
  \code{BIC} simply uses the number of rows of the model matrix, say,
  as defining \code{n}, hence the user must be very careful of this
  possible error.
  Use at your own risk!!
}
\seealso{
  \code{\link{AICvlm}},
  VGLMs are described in \code{\link{vglm-class}};
  VGAMs are described in \code{\link{vgam-class}};
  RR-VGLMs are described in \code{\link{rrvglm-class}};
  \code{\link[stats]{BIC}},
  \code{\link[stats]{AIC}}.
}
\examples{
pneumo <- transform(pneumo, let = log(exposure.time))
(fit1 <- vglm(cbind(normal, mild, severe) ~ let,
              cumulative(parallel = TRUE, reverse = TRUE), data = pneumo))
coef(fit1, matrix = TRUE)
BIC(fit1)
(fit2 <- vglm(cbind(normal, mild, severe) ~ let,
              cumulative(parallel = FALSE, reverse = TRUE), data = pneumo))
coef(fit2, matrix = TRUE)
BIC(fit2)

# These do not agree in absolute terms:
gdata <- data.frame(x2 = sort(runif(n <- 40)))
gdata <- transform(gdata, y1 = 1 + 2*x2 + rnorm(n, sd = 0.1))
fit.v <- vglm(y1 ~ x2, gaussianff, data = gdata)
fit.g <-  glm(y1 ~ x2, gaussian  , data = gdata)
fit.l <-   lm(y1 ~ x2, data = gdata)
c(BIC(fit.l), BIC(fit.g), BIC(fit.v))
c(AIC(fit.l), AIC(fit.g), AIC(fit.v))
c(AIC(fit.l) - AIC(fit.v), AIC(fit.g) - AIC(fit.v))
c(logLik(fit.l), logLik(fit.g), logLik(fit.v))
}
\keyword{models}
\keyword{regression}
VGAM/man/fisk.Rd0000644000176200001440000000630313135276753013020 0ustar liggesusers\name{fisk}
\alias{fisk}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Fisk Distribution Family Function }
\description{
  Maximum likelihood estimation of the 2-parameter
  Fisk distribution.
}
\usage{
fisk(lscale = "loge", lshape1.a = "loge", iscale = NULL,
     ishape1.a = NULL, imethod = 1, lss = TRUE, gscale = exp(-5:5),
     gshape1.a = seq(0.75, 4, by = 0.25), probs.y = c(0.25, 0.5, 0.75),
     zero = "shape")
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{lss}{ See \code{\link{CommonVGAMffArguments}} for important
  information.
  }
  \item{lshape1.a, lscale}{
  Parameter link functions applied to the
  (positive) parameters \eqn{a} and \code{scale}.
  See \code{\link{Links}} for more choices.
  }
  \item{iscale, ishape1.a, imethod, zero}{
  See \code{\link{CommonVGAMffArguments}} for information.
  For \code{imethod = 2} a good initial value for
  \code{iscale} is needed to obtain a good estimate for
  the other parameter.
  }
  \item{gscale, gshape1.a}{
  See \code{\link{CommonVGAMffArguments}} for information.
  }
  \item{probs.y}{
  See \code{\link{CommonVGAMffArguments}} for information.
  }
}
\details{
  The 2-parameter Fisk (aka log-logistic) distribution is the
  4-parameter generalized beta II distribution with shape parameters
  \eqn{q=p=1}.
  It is also the 3-parameter Singh-Maddala distribution with shape
  parameter \eqn{q=1}, as well as the Dagum distribution with
  \eqn{p=1}.
  More details can be found in Kleiber and Kotz (2003).

  The Fisk distribution has density
  \deqn{f(y) = a y^{a-1} / [b^a \{1 + (y/b)^a\}^2]}{%
        f(y) = a y^(a-1) / [b^a (1 + (y/b)^a)^2]}
  for \eqn{a > 0}, \eqn{b > 0}, \eqn{y \geq 0}{y >= 0}.
  Here, \eqn{b} is the scale parameter \code{scale},
  and \eqn{a} is a shape parameter.
  The cumulative distribution function is
  \deqn{F(y) = 1 - [1 + (y/b)^a]^{-1} = [1 + (y/b)^{-a}]^{-1}.}{%
        F(y) = 1 - [1 + (y/b)^a]^(-1) = [1 + (y/b)^(-a)]^(-1).}
  The mean is
  \deqn{E(Y) = b \, \Gamma(1 + 1/a) \, \Gamma(1 - 1/a)}{%
        E(Y) = b gamma(1 + 1/a) gamma(1 - 1/a)}
  provided \eqn{a > 1}; these are returned as the fitted values.
  This family function handles multiple responses.
}
\value{
  An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
  The object is used by modelling functions such as \code{\link{vglm}},
  and \code{\link{vgam}}.
}
\references{
Kleiber, C. and Kotz, S. (2003)
\emph{Statistical Size Distributions in Economics and
      Actuarial Sciences},
Hoboken, NJ, USA: Wiley-Interscience.
}
\author{ T. W. Yee }
\note{
  See the notes in \code{\link{genbetaII}}.
}
\seealso{
    \code{\link{Fisk}},
    \code{\link{genbetaII}},
    \code{\link{betaII}},
    \code{\link{dagum}},
    \code{\link{sinmad}},
    \code{\link{inv.lomax}},
    \code{\link{lomax}},
    \code{\link{paralogistic}},
    \code{\link{inv.paralogistic}},
    \code{\link{simulate.vlm}}.
}
\examples{
fdata <- data.frame(y = rfisk(n = 200, shape = exp(1), scale = exp(2)))
fit <- vglm(y ~ 1, fisk(lss = FALSE), data = fdata, trace = TRUE)
fit <- vglm(y ~ 1, fisk(ishape1.a = exp(2)), data = fdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
}
\keyword{models}
\keyword{regression}
VGAM/man/calibrate.rrvglm.Rd0000644000176200001440000000726013135276753015325 0ustar liggesusers\name{calibrate.rrvglm}
\alias{calibrate.rrvglm}
% 20170418
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Calibration for CLO models (RR-VGLMs) }
\description{
  Performs maximum likelihood calibration for constrained linear
  ordination models (CLO models are better known as RR-VGLMs).
}
\usage{
calibrate.rrvglm(object, newdata = NULL,
                 type = c("latvar", "predictors", "response", "vcov", "all3or4"),
                 initial.vals = NULL, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{object}{ The fitted \code{\link{rrvglm}} model.
  }
  \item{newdata}{ A data frame with new response data
  (e.g., new species data).
  The default is to use the original data used to fit the model;
  however, the calibration may take a long time to compute
  because the computations are expensive.
  }
  \item{type}{ Same as \code{\link{calibrate.qrrvglm}}.
  The \code{"all3or4"} is for all of them, i.e., all \code{type}s.
  For CLO models,
  \code{"vcov"} is unavailable, so all 3 are returned.
  }
  \item{initial.vals}{ Same as \code{\link{calibrate.qrrvglm}}.
  The default is a grid defined by arguments in
  \code{\link{calibrate.rrvglm.control}}.
  }
  \item{\dots}{
  Arguments that are fed into
  \code{\link{calibrate.rrvglm.control}}.
  }
}
\details{
  Given a fitted regression CLO model,
  maximum likelihood calibration is theoretically easy and elegant.
  However, the method assumes that all responses are independent.
  More details and references are given in Yee (2012).

  The function \code{\link[stats]{optim}} is used to search for
  the maximum likelihood solution.  Good initial values are needed,
  and \code{\link{calibrate.rrvglm.control}} allows the user some
  control over the choice of these.

  It is found empirically that the stereotype model (also known as a
  reduced-rank \code{\link{multinomial}} logit model) calibrates well
  only with grouped data.  When the response vector is all 0s except
  for one 1, then the MLE will probably be at \code{-Inf} or
  \code{+Inf}.
}
\value{
  See \code{\link{calibrate.qrrvglm}}.
}
%\references{
%}
\author{T. W. Yee}
%\note{
%}
\section{Warning }{
  This function assumes that the \emph{prior weights} are all unity;
  see \code{\link{weightsvglm}}.

  This function is computationally expensive for \code{Rank >= 1},
  and since it uses a \code{for()} loop several times it can be slow.
  Setting \code{trace = TRUE} to get a running log is a good idea.
}
\seealso{
  \code{\link{calibrate.qrrvglm}},
  \code{\link{calibrate}},
  \code{\link{rrvglm}},
  \code{\link{weightsvglm}}.
% \code{\link{cqo}},
% \code{\link{cao}}.
% \code{\link{uqo}},
}
\examples{
\dontrun{
nona.xs.nz <- na.omit(xs.nz)  # Overkill!!
nona.xs.nz$dmd     <- with(nona.xs.nz, round(drinkmaxday))
nona.xs.nz$feethr  <- with(nona.xs.nz, round(feethour))
nona.xs.nz$sleephr <- with(nona.xs.nz, round(sleep))
nona.xs.nz$beats   <- with(nona.xs.nz, round(pulse))

p2 <- rrvglm(cbind(dmd, feethr, sleephr, beats) ~ age + smokenow +
             depressed + embarrassed + fedup + hurt + miserable +  # 11 psychological
             nofriend + moody + nervous + tense + worry + worrier, # variables
             noRRR = ~ age + smokenow, trace = FALSE,
             poissonff, data = nona.xs.nz, Rank = 2)
cp2 <- calibrate(p2, new = head(nona.xs.nz, 9), type = "all", trace = TRUE)
cp2

two.cases <- nona.xs.nz[1:2, ]  # Another example
two.cases$dmd     <- c(4, 10)
two.cases$feethr  <- c(4, 7)
two.cases$sleephr <- c(7, 8)
two.cases$beats   <- c(62, 71)
cp2b <- calibrate(p2, new = two.cases, type = "all")
cp2b
}
}
\keyword{models}
\keyword{regression}
VGAM/man/cfibrosis.Rd0000644000176200001440000000255413135276753014053 0ustar liggesusers\name{cfibrosis}
\alias{cfibrosis}
\docType{data}
\title{ Cystic Fibrosis Data }
\description{
  This data frame concerns family data on cystic fibrosis.
}
\usage{
data(cfibrosis)
}
\format{
  A data frame with 24 rows and the following 4 variables.
  \describe{
    \item{siblings, affected, ascertained, families}{
      Over the ascertained families, the \eqn{k}th ascertained family
      has \eqn{s_k} siblings of whom \eqn{r_k} are affected and
      \eqn{a_k} are ascertained.
    }
  }
}
\details{
  The data set allows a classical segregation analysis to be performed;
  in particular, it allows Mendelian segregation ratios to be tested in
  nuclear family data.
  The likelihood has similarities with \code{\link{seq2binomial}}.
}
\source{
  The data is originally from Crow (1965) and
  appears as Table 2.3 of Lange (2002).

  Crow, J. F. (1965)
  Problems of ascertainment in the analysis of family data.
  Epidemiology and Genetics of Chronic Disease.
  Public Health Service Publication 1163,
  Neel J. V., Shaw M.
  W., Schull W. J., editors,
  Department of Health, Education, and Welfare, Washington, DC, USA.

  Lange, K. (2002)
  Mathematical and Statistical Methods for Genetic Analysis.
  Second Edition.
  Springer-Verlag: New York, USA.
}
\examples{
cfibrosis
summary(cfibrosis)
}
\keyword{datasets}
VGAM/man/weightsvglm.Rd0000644000176200001440000000770613135276753014424 0ustar liggesusers\name{weightsvglm}
\alias{weightsvglm}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Prior and Working Weights of a VGLM fit }
\description{
  Returns either the prior weights or working weights of a VGLM object.
}
\usage{
weightsvglm(object, type = c("prior", "working"),
            matrix.arg = TRUE, ignore.slot = FALSE,
            deriv.arg = FALSE, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{object}{
  a model object from the \pkg{VGAM} \R package
  that inherits from a \emph{vector generalized linear model} (VGLM),
  e.g., a model of class \code{"vglm"}.
  }
  \item{type}{
  Character, which type of weight is to be returned?
  The default is the first one.
  }
  \item{matrix.arg}{
  Logical, whether the answer is returned as a matrix.
  If not, it will be a vector.
  }
  \item{ignore.slot}{
  Logical.  If \code{TRUE} then \code{object@weights} is ignored even
  if it has been assigned, and the long calculation for
  \code{object@weights} is repeated.  This may give a slightly
  different answer because the final IRLS step at convergence may or
  may not assign the latest values of quantities such as the mean and
  weights.
  }
  \item{deriv.arg}{
  Logical.  If \code{TRUE} then a list with components \code{deriv}
  and \code{weights} is returned.  See below for more details.
  }
  \item{\dots}{
  Currently ignored.
  }
}
\details{
  Prior weights are usually inputted with the \code{weights} argument
  in functions such as \code{\link{vglm}} and \code{\link{vgam}}.
  They may refer to frequencies of the individual data or be weight
  matrices specified beforehand.

  Working weights are used by the IRLS algorithm.  They correspond to
  minus the (expected) second derivatives of the log-likelihood
  function with respect to the linear predictors.  The working weights
  correspond to positive-definite weight matrices and are returned in
  \emph{matrix-band} form, e.g., the first \eqn{M} columns correspond
  to the diagonals, etc.
}
\value{
  If \code{type = "working"} and \code{deriv = TRUE} then a list is
  returned with the two components described below.  Otherwise the
  prior or working weights are returned depending on the value of
  \code{type}.

  \item{deriv}{
  Typically the first derivative of the log-likelihood with respect
  to the linear predictors.  For example, this is the variable
  \code{deriv.mu} in \code{vglm.fit()}, or equivalently, the matrix
  returned in the \code{"deriv"} slot of a \pkg{VGAM} family function.
  }
  \item{weights }{
  The working weights.
  }
}
\references{
  Yee, T. W. and Hastie, T. J. (2003)
  Reduced-rank vector generalized linear models.
  \emph{Statistical Modelling},
  \bold{3}, 15--41.

  Chambers, J. M. and T. J. Hastie (eds) (1992)
  \emph{Statistical Models in S}.
  Wadsworth & Brooks/Cole.
}
\author{ Thomas W. Yee }
\note{
  This function is intended to be similar to
  \code{weights.glm} (see \code{\link[stats]{glm}}).
}
% ~Make other sections like Warning with \section{Warning }{....} ~
\seealso{
  \code{\link[stats]{glm}},
  \code{\link{vglmff-class}},
  \code{\link{vglm}}.
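% For instance, with M = 2 linear predictors the matrix-band format has
% three columns holding the (1,1), (2,2) and (1,2) elements; a small
% sketch of unpacking it (assumes a fitted "vglm" object 'fit' with
% M = 2, such as the pneumo fit in the example below):
% wz <- weights(fit, type = "working")  # n x 3, matrix-band form
% m2a(wz, M = 2)[, , 1]                 # 2 x 2 weight matrix of obsn 1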
}
\examples{
pneumo <- transform(pneumo, let = log(exposure.time))
(fit <- vglm(cbind(normal, mild, severe) ~ let,
             cumulative(parallel = TRUE, reverse = TRUE), data = pneumo))
depvar(fit)  # These are sample proportions
weights(fit, type = "prior", matrix = FALSE)  # Number of observations

# Look at the working residuals
nn <- nrow(model.matrix(fit, type = "lm"))
M <- ncol(predict(fit))

wwt <- weights(fit, type = "working", deriv = TRUE)  # In matrix-band format
wz <- m2a(wwt$weights, M = M)  # In array format
wzinv <- array(apply(wz, 3, solve), c(M, M, nn))
wresid <- matrix(NA, nn, M)  # Working residuals
for (ii in 1:nn)
  wresid[ii, ] <- wzinv[, , ii, drop = TRUE] \%*\% wwt$deriv[ii, ]
max(abs(c(resid(fit, type = "work")) - c(wresid)))  # Should be 0

(zedd <- predict(fit) + wresid)  # Adjusted dependent vector
}
\keyword{models}
\keyword{regression}
VGAM/man/lerch.Rd0000644000176200001440000000622013135276753013157 0ustar liggesusers\name{lerch}
\alias{lerch}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Lerch Phi Function }
\description{
  Computes the Lerch transcendental Phi function.
}
\usage{
lerch(x, s, v, tolerance = 1.0e-10, iter = 100)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{x, s, v}{
  Numeric.
  This function recycles values of \code{x}, \code{s}, and
  \code{v} if necessary.
  }
  \item{tolerance}{
  Numeric.
  Accuracy required, must be positive and less than 0.01.
  }
  \item{iter}{
  Maximum number of iterations allowed to obtain convergence.
  If \code{iter} is too small then a result of \code{NA} may occur;
  if so, try increasing its value.
  }
}
\details{
  The Lerch transcendental function is defined by
  \deqn{\Phi(x,s,v) = \sum_{n=0}^{\infty} \frac{x^n}{(n+v)^s}}{%
        Phi(x,s,v) = sum_{n=0}^{infty} x^n / (n+v)^s}
  where \eqn{|x|<1} and
  \eqn{v \neq 0, -1, -2, \ldots}{v != 0, -1, -2, ...}.
  Actually, \eqn{x} may be complex but this function only works
  for real \eqn{x}.

  The algorithm used is based on the relation
  \deqn{\Phi(x,s,v) = x^m \Phi(x,s,v+m) + \sum_{n=0}^{m-1}
        \frac{x^n}{(n+v)^s} .}{%
        Phi(x,s,v) = x^m Phi(x,s,v+m) + sum_{n=0}^{m-1} x^n / (n+v)^s . }
  See the URL below for more information.
  This function is a wrapper function for the C code described below.
}
\value{
  Returns the value of the function evaluated at the values of
  \code{x}, \code{s}, \code{v}.
  If the above ranges of \eqn{x} and \eqn{v} are not satisfied,
  or some numeric problems occur, then
  this function will return a \code{NA} for those values.
}
\references{
  Originally the code was found at
  \code{http://aksenov.freeshell.org/lerchphi/source/lerchphi.c}.

  Bateman, H. (1953)
  \emph{Higher Transcendental Functions}.
  Volume 1. McGraw-Hill, NY, USA.
}
\author{
  S. V. Aksenov and U. D. Jentschura wrote the C code.
  The R wrapper function was written by T. W. Yee.
}
\note{
  There are a number of special cases, e.g.,
  the Riemann zeta-function is given by
  \eqn{\zeta(s) = \Phi(x=1,s,v=1)}{zeta(s) = Phi(x=1,s,v=1)}.
  The special case of \eqn{s=1} corresponds to the hypergeometric 2F1,
  and this is implemented in the \pkg{gsl} package.
  The Lerch transcendental Phi function should not be confused with the
  Lerch zeta function though they are quite similar.
}
\section{Warning }{
  This function has not been thoroughly tested and contains bugs;
  for example, the zeta function cannot be computed with this function
  even though
  \eqn{\zeta(s) = \Phi(x=1,s,v=1)}{zeta(s) = Phi(x=1,s,v=1)}.
  There are many sources of problems such as lack of convergence,
  overflow and underflow, especially near singularities.
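% The recurrence relation in the details above can be checked
% numerically for |x| < 1; a small sketch (not run; m = 3 is arbitrary):
% x <- 0.5; s <- 2; v <- 1; m <- 3; n <- 0:(m-1)
% abs(lerch(x, s, v) -
%     (x^m * lerch(x, s, v + m) + sum(x^n / (n + v)^s)))  # Should be ~0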
If any problems occur then a \code{NA} will be returned. } \seealso{ \code{\link{zeta}}. } \examples{ \dontrun{ s <- 2; v <- 1; x <- seq(-1.1, 1.1, length = 201) plot(x, lerch(x, s = s, v = v), type = "l", col = "blue", las = 1, main = paste("lerch(x, s = ", s,", v =", v, ")", sep = "")) abline(v = 0, h = 1, lty = "dashed", col = "gray") s <- rnorm(n = 100) max(abs(zeta(s) - lerch(x = 1, s = s, v = 1))) # This fails (a bug); should be 0 } } \keyword{math} VGAM/man/frechetUC.Rd0000644000176200001440000000445313135276753013740 0ustar liggesusers\name{Frechet} \alias{Frechet} \alias{dfrechet} \alias{pfrechet} \alias{qfrechet} \alias{rfrechet} \title{The Frechet Distribution} \description{ Density, distribution function, quantile function and random generation for the three parameter Frechet distribution. } \usage{ dfrechet(x, location = 0, scale = 1, shape, log = FALSE) pfrechet(q, location = 0, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) qfrechet(p, location = 0, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) rfrechet(n, location = 0, scale = 1, shape) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Passed into \code{\link[stats:Uniform]{runif}}. } \item{location, scale, shape}{the location parameter \eqn{a}, scale parameter \eqn{b}, and shape parameter \eqn{s}.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Uniform]{punif}} or \code{\link[stats:Uniform]{qunif}}. } } \value{ \code{dfrechet} gives the density, \code{pfrechet} gives the distribution function, \code{qfrechet} gives the quantile function, and \code{rfrechet} generates random deviates. } \references{ Castillo, E., Hadi, A. S., Balakrishnan, N. Sarabia, J. S. (2005) \emph{Extreme Value and Related Models with Applications in Engineering and Science}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{frechet}}, the \pkg{VGAM} family function for estimating the 2 parameters (without location parameter) by maximum likelihood estimation, for the formula of the probability density function and range restrictions on the parameters. } %\note{ %} \seealso{ \code{\link{frechet}}. % \code{\link{frechet3}}. } \examples{ \dontrun{ shape <- 5 x <- seq(-0.1, 3.5, length = 401) plot(x, dfrechet(x, shape = shape), type = "l", ylab = "", las = 1, main = "Frechet density divided into 10 equal areas; orange = cdf") abline(h = 0, col = "blue", lty = 2) qq <- qfrechet(seq(0.1, 0.9, by = 0.1), shape = shape) lines(qq, dfrechet(qq, shape = shape), col = "purple", lty = 3, type = "h") lines(x, pfrechet(q = x, shape = shape), col = "orange") } } \keyword{distribution} VGAM/man/leipnik.Rd0000644000176200001440000000700713135276753013521 0ustar liggesusers\name{leipnik} \alias{leipnik} %- Also NEED an '\alias' for EACH other topic documented here. \title{Leipnik Distribution Family Function} \description{ Estimates the two parameters of a (transformed) Leipnik distribution by maximum likelihood estimation. } \usage{ leipnik(lmu = "logit", llambda = logoff(offset = 1), imu = NULL, ilambda = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu, llambda}{ Link function for the \eqn{\mu}{mu} and \eqn{\lambda}{lambda} parameters. See \code{\link{Links}} for more choices. } \item{imu, ilambda}{ Numeric. Optional initial values for \eqn{\mu}{mu} and \eqn{\lambda}{lambda}. 
} } \details{ The (transformed) Leipnik distribution has density function \deqn{f(y;\mu,\lambda) = \frac{ \{ y(1-y) \}^{-\frac12}}{ \mbox{Beta}( \frac{\lambda+1}{2}, \frac12 )} \left[ 1 + \frac{(y-\mu)^2 }{y(1-y)} \right]^{ -\frac{\lambda}{2}}}{% f(y;mu,lambda) = (y(1-y))^(-1/2) * (1 + (y-mu)^2 / (y*(1-y)))^(-lambda/2) / Beta((lambda+1)/2, 1/2)} where \eqn{0 < y < 1} and \eqn{\lambda > -1}{lambda > -1}. The mean is \eqn{\mu}{mu} (returned as the fitted values) and the variance is \eqn{1/\lambda}{1/lambda}. Jorgensen (1997) calls the above the \bold{transformed} Leipnik distribution, and if \eqn{y = (x+1)/2} and \eqn{\mu = (\theta+1)/2}{mu = (theta+1)/2}, then the distribution of \eqn{X} as a function of \eqn{x} and \eqn{\theta}{theta} is known as the the (untransformed) Leipnik distribution. Here, both \eqn{x} and \eqn{\theta}{theta} are in \eqn{(-1, 1)}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Jorgensen, B. (1997) \emph{The Theory of Dispersion Models}. London: Chapman & Hall Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1995) \emph{Continuous Univariate Distributions}, 2nd edition, Volume 2, New York: Wiley. (pages 612--617). } \author{ T. W. Yee } \note{ Convergence may be slow or fail. Until better initial value estimates are forthcoming try assigning the argument \code{ilambda} some numerical value if it fails to converge. Currently, Newton-Raphson is implemented, not Fisher scoring. Currently, this family function probably only really works for intercept-only models, i.e., \code{y ~ 1} in the formula. } %\section{Warning }{ % If \code{llambda="identitylink"} then it is possible that the % \code{lambda} estimate becomes less than \eqn{-1}, i.e., out of % bounds. One way to stop this is to choose \code{llambda = "loge"}, % however, \code{lambda} is then constrained to be positive. %} \seealso{ \code{\link{mccullagh89}}. } \examples{ ldata <- data.frame(y = rnorm(n = 2000, mean = 0.5, sd = 0.1)) # Not proper data fit <- vglm(y ~ 1, leipnik(ilambda = 1), data = ldata, trace = TRUE) head(fitted(fit)) with(ldata, mean(y)) summary(fit) coef(fit, matrix = TRUE) Coef(fit) sum(weights(fit)) # Sum of the prior weights sum(weights(fit, type = "work")) # Sum of the working weights } \keyword{models} \keyword{regression} %fit <- vglm(y ~ 1, leipnik(ilambda = 1), tr = TRUE, cri = "c", checkwz = FALSE) % leipnik(lmu = "logit", llambda = "loge", imu = NULL, ilambda = NULL) %fit <- vglm(y ~ 1, leipnik(ilambda = 1, llambda = logoff(offset = 1)), % data = ldata, trace = TRUE, crit = "coef") % fit <- vglm(y ~ 1, leipnik(ilambda = 1), data = ldata, trace = TRUE, checkwz = FALSE) VGAM/man/vgam-class.Rd0000644000176200001440000002036513135276753014125 0ustar liggesusers\name{vgam-class} \docType{class} \alias{vgam-class} \title{Class ``vgam'' } \description{ Vector generalized additive models. } \section{Objects from the Class}{ Objects can be created by calls of the form \code{vgam(...)}. % ~~ describe objects here ~~ } \section{Slots}{ \describe{ \item{\code{nl.chisq}:}{Object of class \code{"numeric"}. Nonlinear chi-squared values. } \item{\code{nl.df}:}{Object of class \code{"numeric"}. Nonlinear chi-squared degrees of freedom values. } \item{\code{spar}:}{Object of class \code{"numeric"} containing the (scaled) smoothing parameters. 
} \item{\code{s.xargument}:}{Object of class \code{"character"} holding the variable name of any \code{s()} terms. } \item{\code{var}:}{Object of class \code{"matrix"} holding approximate pointwise standard error information. } \item{\code{Bspline}:}{Object of class \code{"list"} holding the scaled (internal and boundary) knots, and the fitted B-spline coefficients. These are used for prediction. } \item{\code{extra}:}{Object of class \code{"list"}; the \code{extra} argument on entry to \code{vglm}. This contains any extra information that might be needed by the family function. } \item{\code{family}:}{Object of class \code{"vglmff"}. The family function. } \item{\code{iter}:}{Object of class \code{"numeric"}. The number of IRLS iterations used. } \item{\code{predictors}:}{Object of class \code{"matrix"} with \eqn{M} columns which holds the \eqn{M} linear predictors. } \item{\code{assign}:}{Object of class \code{"list"}, from class \code{ "vlm"}. This named list gives information matching the columns and the (LM) model matrix terms. } \item{\code{call}:}{Object of class \code{"call"}, from class \code{ "vlm"}. The matched call. } \item{\code{coefficients}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. A named vector of coefficients. } \item{\code{constraints}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A named list of constraint matrices used in the fitting. } \item{\code{contrasts}:}{Object of class \code{"list"}, from class \code{ "vlm"}. The contrasts used (if any). } \item{\code{control}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A list of parameters for controlling the fitting process. See \code{\link{vglm.control}} for details. } \item{\code{criterion}:}{Object of class \code{"list"}, from class \code{ "vlm"}. List of convergence criterion evaluated at the final IRLS iteration. } \item{\code{df.residual}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. The residual degrees of freedom. } \item{\code{df.total}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. The total degrees of freedom. } \item{\code{dispersion}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. The scaling parameter. } \item{\code{effects}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. The effects. } \item{\code{fitted.values}:}{ Object of class \code{"matrix"}, from class \code{ "vlm"}. The fitted values. This is usually the mean but may be quantiles, or the location parameter, e.g., in the Cauchy model. } \item{\code{misc}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A named list to hold miscellaneous parameters. } \item{\code{model}:}{Object of class \code{"data.frame"}, from class \code{ "vlm"}. The model frame. } \item{\code{na.action}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A list holding information about missing values. } \item{\code{offset}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. If non-zero, a \eqn{M}-column matrix of offsets. } \item{\code{post}:}{Object of class \code{"list"}, from class \code{ "vlm"} where post-analysis results may be put. } \item{\code{preplot}:}{Object of class \code{"list"}, from class \code{ "vlm"} used by \code{\link{plotvgam}}; the plotting parameters may be put here. } \item{\code{prior.weights}:}{Object of class \code{"matrix"}, from class \code{ "vlm"} holding the initially supplied weights. } \item{\code{qr}:}{Object of class \code{"list"}, from class \code{ "vlm"}. QR decomposition at the final iteration. 
} \item{\code{R}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The \bold{R} matrix in the QR decomposition used in the fitting. } \item{\code{rank}:}{Object of class \code{"integer"}, from class \code{ "vlm"}. Numerical rank of the fitted model. } \item{\code{residuals}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The \emph{working} residuals at the final IRLS iteration. } \item{\code{ResSS}:}{Object of class \code{"numeric"}, from class \code{ "vlm"}. Residual sum of squares at the final IRLS iteration with the adjusted dependent vectors and weight matrices. } \item{\code{smart.prediction}:}{Object of class \code{"list"}, from class \code{ "vlm"}. A list of data-dependent parameters (if any) that are used by smart prediction. } \item{\code{terms}:}{Object of class \code{"list"}, from class \code{ "vlm"}. The \code{\link[stats]{terms}} object used. } \item{\code{weights}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The weight matrices at the final IRLS iteration. This is in matrix-band form. } \item{\code{x}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The model matrix (LM, not VGLM). } \item{\code{xlevels}:}{Object of class \code{"list"}, from class \code{ "vlm"}. The levels of the factors, if any, used in fitting. } \item{\code{y}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. The response, in matrix form. } \item{\code{Xm2}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. See \code{\link{vglm-class}}). } \item{\code{Ym2}:}{Object of class \code{"matrix"}, from class \code{ "vlm"}. See \code{\link{vglm-class}}). } \item{\code{callXm2}:}{ Object of class \code{"call"}, from class \code{ "vlm"}. The matched call for argument \code{form2}. } } } \section{Extends}{ Class \code{"vglm"}, directly. Class \code{"vlm"}, by class \code{"vglm"}. } \section{Methods}{ \describe{ \item{cdf}{\code{signature(object = "vglm")}: cumulative distribution function. Useful for quantile regression and extreme value data models.} \item{deplot}{\code{signature(object = "vglm")}: density plot. Useful for quantile regression models.} \item{deviance}{\code{signature(object = "vglm")}: deviance of the model (where applicable). } \item{plot}{\code{signature(x = "vglm")}: diagnostic plots. } \item{predict}{\code{signature(object = "vglm")}: extract the additive predictors or predict the additive predictors at a new data frame.} \item{print}{\code{signature(x = "vglm")}: short summary of the object. } \item{qtplot}{\code{signature(object = "vglm")}: quantile plot (only applicable to some models). } \item{resid}{\code{signature(object = "vglm")}: residuals. There are various types of these. } \item{residuals}{\code{signature(object = "vglm")}: residuals. Shorthand for \code{resid}. } \item{rlplot}{\code{signature(object = "vglm")}: return level plot. Useful for extreme value data models.} \item{summary}{\code{signature(object = "vglm")}: a more detailed summary of the object. } } } \references{ Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. %\url{http://www.stat.auckland.ac.nz/~yee} } \author{ Thomas W. Yee } \note{ VGAMs have all the slots that \code{\link{vglm}} objects have (\code{\link{vglm-class}}), plus the first few slots described in the section above. 
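% Slots may be extracted with the @ operator or slot(); a small sketch
% (assumes the nonparametric proportional odds fit from the example
% below):
% fit <- vgam(cbind(normal, mild, severe) ~ s(let),
%             cumulative(parallel = TRUE), data = pneumo)
% fit@nl.df       # Nonlinear degrees of freedom of the s() term
% slotNames(fit)  # All slots of the "vgam" object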
} %~Make other sections like WARNING with \section{WARNING }{....} ~ \seealso{ \code{\link{vgam.control}}, \code{\link{vglm}}, \code{\link[VGAM]{s}}, \code{\link{vglm-class}}, \code{\link{vglmff-class}}. } \examples{ # Fit a nonparametric proportional odds model pneumo <- transform(pneumo, let = log(exposure.time)) vgam(cbind(normal, mild, severe) ~ s(let), cumulative(parallel = TRUE), data = pneumo) } \keyword{classes} \keyword{models} \keyword{regression} \keyword{smooth} VGAM/man/eunifUC.Rd0000644000176200001440000001301113135276753013414 0ustar liggesusers\name{Expectiles-Uniform} \alias{Expectiles-Uniform} \alias{eunif} \alias{deunif} \alias{peunif} \alias{qeunif} \alias{reunif} \title{ Expectiles of the Uniform Distribution } \description{ Density function, distribution function, and expectile function and random generation for the distribution associated with the expectiles of a uniform distribution. } \usage{ deunif(x, min = 0, max = 1, log = FALSE) peunif(q, min = 0, max = 1, lower.tail = TRUE, log.p = FALSE) qeunif(p, min = 0, max = 1, Maxit.nr = 10, Tol.nr = 1.0e-6, lower.tail = TRUE, log.p = FALSE) reunif(n, min = 0, max = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{ Vector of expectiles. See the terminology note below. } \item{p}{ Vector of probabilities. % (tau or \eqn{\tau}). These should lie in \eqn{(0,1)}. } \item{n, min, max, log}{ See \code{\link[stats:Uniform]{runif}}. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Uniform]{punif}} or \code{\link[stats:Uniform]{qunif}}. } \item{Maxit.nr}{ Numeric. Maximum number of Newton-Raphson iterations allowed. A warning is issued if convergence is not obtained for all \code{p} values. } \item{Tol.nr}{ Numeric. Small positive value specifying the tolerance or precision to which the expectiles are computed. } } \details{ Jones (1994) elucidated on the property that the expectiles of a random variable \eqn{X} with distribution function \eqn{F(x)} correspond to the quantiles of a distribution \eqn{G(x)} where \eqn{G} is related by an explicit formula to \eqn{F}. In particular, let \eqn{y} be the \eqn{p}-expectile of \eqn{F}. Then \eqn{y} is the \eqn{p}-quantile of \eqn{G} where \deqn{p = G(y) = (P(y) - y F(y)) / (2[P(y) - y F(y)] + y - \mu),}{ p = G(y) = (P(y) - y F(y)) / (2[P(y) - y F(y)] + y - mu),} and \eqn{\mu}{mu} is the mean of \eqn{X}. The derivative of \eqn{G} is \deqn{g(y) = (\mu F(y) - P(y)) / (2[P(y) - y F(y)] + y - \mu)^2 .}{ g(y) = ( mu F(y) - P(y)) / (2[P(y) - y F(y)] + y - mu)^2 .} Here, \eqn{P(y)} is the partial moment \eqn{\int_{-\infty}^{y} x f(x) \, dx}{int^{y} x f(x) dx} and \eqn{0 < p < 1}. The 0.5-expectile is the mean \eqn{\mu}{mu} and the 0.5-quantile is the median. A note about the terminology used here. Recall in the \emph{S} language there are the \code{dpqr}-type functions associated with a distribution, e.g., \code{\link[stats:Uniform]{dunif}}, \code{\link[stats:Uniform]{punif}}, \code{\link[stats:Uniform]{qunif}}, \code{\link[stats:Uniform]{runif}}, for the uniform distribution. Here, \code{unif} corresponds to \eqn{F} and \code{eunif} corresponds to \eqn{G}. The addition of ``\code{e}'' (for \emph{expectile}) is for the `other' distribution associated with the parent distribution. Thus \code{deunif} is for \eqn{g}, \code{peunif} is for \eqn{G}, \code{qeunif} is for the inverse of \eqn{G}, \code{reunif} generates random variates from \eqn{g}. For \code{qeunif} the Newton-Raphson algorithm is used to solve for \eqn{y} satisfying \eqn{p = G(y)}. 
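% Consequently qeunif() and peunif() are inverses of each other; a
% quick numerical check (default min = 0 and max = 1 assumed):
% p <- seq(0.05, 0.95, by = 0.05)
% max(abs(peunif(qeunif(p)) - p))  # Should be ~0, up to Tol.nr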
Numerical problems may occur when values of \code{p} are very close to 0 or 1. } \value{ \code{deunif(x)} gives the density function \eqn{g(x)}. \code{peunif(q)} gives the distribution function \eqn{G(q)}. \code{qeunif(p)} gives the expectile function: the expectile \eqn{y} such that \eqn{G(y) = p}. \code{reunif(n)} gives \eqn{n} random variates from \eqn{G}. } \references{ Jones, M. C. (1994) Expectiles and M-quantiles are quantiles. \emph{Statistics and Probability Letters}, \bold{20}, 149--153. Yee, T. W. (2012) Vector generalized linear and additive quantile and expectile regression. \emph{In preparation}. } \author{ T. W. Yee and Kai Huang } %\note{ %The ``\code{q}'', as the first character of ``\code{qeunif}'', %may be changed to ``\code{e}'' in the future, %the reason being to emphasize that the expectiles are returned. %Ditto for the argument ``\code{q}'' in \code{peunif}. % %} \seealso{ \code{\link{deexp}}, \code{\link{denorm}}, \code{\link{dunif}}, \code{\link{dsc.t2}}. } \examples{ my.p <- 0.25; y <- runif(nn <- 1000) (myexp <- qeunif(my.p)) sum(myexp - y[y <= myexp]) / sum(abs(myexp - y)) # Should be my.p # Equivalently: I1 <- mean(y <= myexp) * mean( myexp - y[y <= myexp]) I2 <- mean(y > myexp) * mean(-myexp + y[y > myexp]) I1 / (I1 + I2) # Should be my.p # Or: I1 <- sum( myexp - y[y <= myexp]) I2 <- sum(-myexp + y[y > myexp]) # Non-standard uniform mymin <- 1; mymax <- 8 yy <- runif(nn, mymin, mymax) (myexp <- qeunif(my.p, mymin, mymax)) sum(myexp - yy[yy <= myexp]) / sum(abs(myexp - yy)) # Should be my.p peunif(mymin, mymin, mymax) # Should be 0 peunif(mymax, mymin, mymax) # Should be 1 peunif(mean(yy), mymin, mymax) # Should be 0.5 abs(qeunif(0.5, mymin, mymax) - mean(yy)) # Should be 0 abs(qeunif(0.5, mymin, mymax) - (mymin+mymax)/2) # Should be 0 abs(peunif(myexp, mymin, mymax) - my.p) # Should be 0 integrate(f = deunif, lower = mymin - 3, upper = mymax + 3, min = mymin, max = mymax) # Should be 1 \dontrun{ par(mfrow = c(2,1)) yy <- seq(0.0, 1.0, len = nn) plot(yy, deunif(yy), type = "l", col = "blue", ylim = c(0, 2), xlab = "y", ylab = "g(y)", main = "g(y) for Uniform(0,1)") lines(yy, dunif(yy), col = "darkgreen", lty = "dotted", lwd = 2) # 'original' plot(yy, peunif(yy), type = "l", col = "blue", ylim = 0:1, xlab = "y", ylab = "G(y)", main = "G(y) for Uniform(0,1)") abline(a = 0.0, b = 1.0, col = "darkgreen", lty = "dotted", lwd = 2) abline(v = 0.5, h = 0.5, col = "red", lty = "dashed") } } \keyword{distribution} VGAM/man/posbernoulli.t.Rd0000644000176200001440000002437113135276753015050 0ustar liggesusers\name{posbernoulli.t} %\alias{posbernoulli} \alias{posbernoulli.t} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive Bernoulli Family Function with Time Effects } \description{ Fits a GLM/GAM-like model to multiple Bernoulli responses where each row in the capture history matrix response has at least one success (capture). Sampling occasion effects are accommodated. % Behavioural effects are accommodated via the \code{xij} argument % of \code{\link{vglm.control}}. } \usage{ posbernoulli.t(link = "logit", parallel.t = FALSE ~ 1, iprob = NULL, p.small = 1e-4, no.warning = FALSE) } %- maybe also 'usage' for other objects documented here. %apply.parint = FALSE, \arguments{ \item{link, iprob, parallel.t}{ See \code{\link{CommonVGAMffArguments}} for information. By default, the parallelism assumption does not apply to the intercept. 
Setting \code{parallel.t = FALSE ~ -1}, or equivalently \code{parallel.t = FALSE ~ 0}, results in the \eqn{M_0}/\eqn{M_h} model. } \item{p.small, no.warning}{ A small probability value used to give a warning for the Horvitz--Thompson estimator. Any estimated probability value less than \code{p.small} will result in a warning, however, setting \code{no.warning = TRUE} will suppress this warning if it occurs. This is because the Horvitz-Thompson estimator is the sum of the reciprocal of such probabilities, therefore any probability that is too close to 0 will result in an unstable estimate. } } \details{ These models (commonly known as \eqn{M_t} or \eqn{M_{th}} (no prefix \eqn{h} means it is an intercept-only model) in the capture--recapture literature) operate on a capture history matrix response of 0s and 1s (\eqn{n \times \tau}{n x tau}). Each column is a sampling occasion where animals are potentially captured (e.g., a field trip), and each row is an individual animal. Capture is a 1, else a 0. No removal of animals from the population is made (closed population), e.g., no immigration or emigration. Each row of the response matrix has at least one capture. Once an animal is captured for the first time, it is marked/tagged so that its future capture history can be recorded. Then it is released immediately back into the population to remix. It is released immediately after each recapture too. It is assumed that the animals are independent and that, for a given animal, each sampling occasion is independent. And animals do not lose their marks/tags, and all marks/tags are correctly recorded. The number of linear/additive predictors is equal to the number of sampling occasions, i.e., \eqn{M = \tau}, say. The default link functions are \eqn{(logit \,p_{1},\ldots,logit \,p_{\tau})^T}{(logit p_(1),\ldots,logit p_(tau))^T} where each \eqn{p_{j}} denotes the probability of capture at time point \eqn{j}. The fitted value returned is a matrix of probabilities of the same dimension as the response matrix. % Thus \eqn{M = \tau}{M = tau}. A conditional likelihood is maximized here using Fisher scoring. Each sampling occasion has a separate probability that is modelled here. The probabilities can be constrained to be equal by setting \code{parallel.t = FALSE ~ 0}; then the results are effectively the same as \code{\link{posbinomial}} except the binomial constants are not included in the log-likelihood. If \code{parallel.t = TRUE ~ 0} then each column should have at least one 1 and at least one 0. It is well-known that some species of animals are affected by capture, e.g., trap-shy or trap-happy. This \pkg{VGAM} family function does \emph{not} allow any behavioral effect to be modelled (\code{\link{posbernoulli.b}} and \code{\link{posbernoulli.tb}} do) because the denominator of the likelihood function must be free of behavioral effects. % via covariates that are specific to each sampling occasion, % e.g., through the \code{xij} argument. % Ignoring capture history effects would mean % \code{\link{posbinomial}} could be used by aggregating over % the sampling occasions. % If there are no covariates that are specific to each occasion % then the response matrix can be summed over the columns and % \code{\link{posbinomial}} could be used by aggregating over % the sampling occasions. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. 
Upon fitting the \code{extra} slot has a (list) component called \code{N.hat} which is a point estimate of the population size \eqn{N} (it is the Horvitz-Thompson (1952) estimator). And there is a component called \code{SE.N.hat} containing its standard error. } \references{ Huggins, R. M. (1991) Some practical aspects of a conditional likelihood approach to capture experiments. \emph{Biometrics}, \bold{47}, 725--732. Huggins, R. M. and Hwang, W.-H. (2011) A review of the use of conditional likelihood in capture--recapture experiments. \emph{International Statistical Review}, \bold{79}, 385--400. Otis, D. L. and Burnham, K. P. and White, G. C. and Anderson, D. R. (1978) Statistical inference from capture data on closed animal populations, \emph{Wildlife Monographs}, \bold{62}, 3--135. Yee, T. W. and Stoklosa, J. and Huggins, R. M. (2015) The \pkg{VGAM} package for capture--recapture data using the conditional likelihood. \emph{Journal of Statistical Software}, \bold{65}, 1--33. \url{http://www.jstatsoft.org/v65/i05/}. % \bold{65}(5), 1--33. } \author{ Thomas W. Yee. } \note{ % Models \eqn{M_{tbh}}{M_tbh} can be fitted using the % \code{xij} argument (see \code{\link{vglm.control}}) % to input the behavioural effect indicator % variables. Rather than manually setting these % up, they may be more conveniently obtained by % \code{\link{aux.posbernoulli.t}}. % See the example below. The \code{weights} argument of \code{\link{vglm}} need not be assigned, and the default is just a matrix of ones. Fewer numerical problems are likely to occur for \code{parallel.t = TRUE}. Data-wise, each sampling occasion may need at least one success (capture) and one failure. Less stringent conditions in the data are needed when \code{parallel.t = TRUE}. Ditto when parallelism is applied to the intercept too. % for \code{apply.parint = TRUE}. The response matrix is returned unchanged; i.e., not converted into proportions like \code{\link{posbinomial}}. If the response matrix has column names then these are used in the labelling, else \code{prob1}, \code{prob2}, etc. are used. Using \code{AIC()} or \code{BIC()} to compare \code{\link{posbernoulli.t}}, \code{\link{posbernoulli.b}}, \code{\link{posbernoulli.tb}} models with a \code{\link{posbinomial}} model requires \code{posbinomial(omit.constant = TRUE)} because one needs to remove the normalizing constant from the log-likelihood function. See \code{\link{posbinomial}} for an example. % If not all of the \eqn{2^{\tau}-1}{2^(tau) - 1} combinations of % the response matrix are not present then it pays to add % such rows to the response matrix and assign a small but % positive prior weight. % For example, if \eqn{\tau=2}{tau=2} then there should be % (0,1) rows, % (1,0) rows and % (1,1) rows present in the response matrix. } %\section{Warning }{ % % See \code{\link{posbernoulli.tb}}. % % %} \seealso{ \code{\link{posbernoulli.b}}, \code{\link{posbernoulli.tb}}, \code{\link{Select}}, \code{\link{deermice}}, \code{\link{Huggins89table1}}, \code{\link{Huggins89.t1}}, \code{\link{dposbern}}, \code{\link{rposbern}}, \code{\link{posbinomial}}, \code{\link{AICvlm}}, \code{\link{BICvlm}}, \code{\link{prinia}}. % \code{\link{aux.posbernoulli.t}}, % \code{\link{vglm.control}} for \code{xij}, % \code{\link{huggins91}}. 
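% The Horvitz--Thompson estimate in the 'extra' slot can be reproduced
% by hand; a minimal sketch (not run; it assumes the fit 'M.t' from
% the examples below):
% probs  <- fitted(M.t)                    # n x tau capture probabilities
% pi.hat <- 1 - apply(1 - probs, 1, prod)  # P(captured at least once)
% sum(1 / pi.hat)                          # Should equal M.t@extra$N.hat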
} \examples{ M.t <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ 1, posbernoulli.t, data = deermice, trace = TRUE) coef(M.t, matrix = TRUE) constraints(M.t, matrix = TRUE) summary(M.t, presid = FALSE) M.h.1 <- vglm(Select(deermice, "y") ~ sex + weight, trace = TRUE, posbernoulli.t(parallel.t = FALSE ~ -1), data = deermice) coef(M.h.1, matrix = TRUE) constraints(M.h.1) summary(M.h.1, presid = FALSE) head(depvar(M.h.1)) # Response capture history matrix dim(depvar(M.h.1)) M.th.2 <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + weight, trace = TRUE, posbernoulli.t(parallel.t = FALSE), data = deermice) lrtest(M.h.1, M.th.2) # Test the parallelism assumption wrt sex and weight coef(M.th.2) coef(M.th.2, matrix = TRUE) constraints(M.th.2) summary(M.th.2, presid = FALSE) head(model.matrix(M.th.2, type = "vlm"), 21) M.th.2@extra$N.hat # Estimate of the population size; should be about N M.th.2@extra$SE.N.hat # SE of the estimate of the population size # An approximate 95 percent confidence interval: round(M.th.2@extra$N.hat + c(-1, 1) * 1.96 * M.th.2@extra$SE.N.hat, 1) # Fit a M_h model, effectively the parallel M_t model, using posbinomial() deermice <- transform(deermice, ysum = y1 + y2 + y3 + y4 + y5 + y6, tau = 6) M.h.3 <- vglm(cbind(ysum, tau - ysum) ~ sex + weight, posbinomial(omit.constant = TRUE), data = deermice, trace = TRUE) max(abs(coef(M.h.1) - coef(M.h.3))) # Should be zero logLik(M.h.3) - logLik(M.h.1) # Difference is due to the binomial constants } \keyword{models} \keyword{regression} %# Fit a M_tbh model: %pdata <- aux.posbernoulli.t(with(deermice, cbind(y1, y2, y3, y4, y5, y6))) # Convenient %deermice <- data.frame(deermice, bei = 0, pdata$cap.hist1) # Put all into 1 dataframe %head(deermice) # Augmented with behavioural effect indicator variables %M.tbh.1 <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + weight + age + bei, % posbernoulli.t(parallel.t = TRUE ~ sex + weight + age + bei - 1), % data = deermice, trace = TRUE, % xij = list(bei ~ bei1 + bei2 + bei3 + bei4 + bei5 + bei6 - 1), % form2 = ~ bei1 + bei2 + bei3 + bei4 + bei5 + bei6 + % sex + weight + age + bei) %coef(M.tbh.1, matrix = TRUE) %head(deermice, 3) %head(model.matrix(M.tbh.1, type = "vlm"), 20) %summary(M.tbh.1, presid = FALSE) %head(depvar(M.tbh.1)) # Response capture history matrix %dim(depvar(M.tbh.1)) VGAM/man/amlexponential.Rd0000644000176200001440000001207313135276753015105 0ustar liggesusers\name{amlexponential} \alias{amlexponential} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Exponential Regression by Asymmetric Maximum Likelihood Estimation } \description{ Exponential expectile regression estimated by maximizing an asymmetric likelihood function. } \usage{ amlexponential(w.aml = 1, parallel = FALSE, imethod = 1, digw = 4, link = "loge") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{w.aml}{ Numeric, a vector of positive constants controlling the expectiles. The larger the value the larger the fitted expectile value (the proportion of points below the ``w-regression plane''). The default value of unity results in the ordinary maximum likelihood (MLE) solution. } \item{parallel}{ If \code{w.aml} has more than one value then this argument allows the quantile curves to differ by the same amount as a function of the covariates. Setting this to be \code{TRUE} should force the quantile curves to not cross (although they may not cross anyway). See \code{\link{CommonVGAMffArguments}} for more information. } \item{imethod}{ Integer, either 1 or 2 or 3. 
Initialization method. Choose another value if convergence fails. } \item{digw }{ Passed into \code{\link[base]{Round}} as the \code{digits} argument for the \code{w.aml} values; used cosmetically for labelling. } \item{link}{ See \code{\link{exponential}} and the warning below. } } \details{ The general methodology behind this \pkg{VGAM} family function is given in Efron (1992) and full details can be obtained there. % Equation numbers below refer to that article. This model is essentially an exponential regression model (see \code{\link{exponential}}) but the usual deviance is replaced by an asymmetric squared error loss function; it is multiplied by \eqn{w.aml} for positive residuals. The solution is the set of regression coefficients that minimize the sum of these deviance-type values over the data set, weighted by the \code{weights} argument (so that it can contain frequencies). Newton-Raphson estimation is used here. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Efron, B. (1992) Poisson overdispersion estimates based on the method of asymmetric maximum likelihood. \emph{Journal of the American Statistical Association}, \bold{87}, 98--107. } \author{ Thomas W. Yee } \note{ On fitting, the \code{extra} slot has list components \code{"w.aml"} and \code{"percentile"}. The latter is the percent of observations below the ``w-regression plane'', which is the fitted values. Also, the individual deviance values corresponding to each element of the argument \code{w.aml} is stored in the \code{extra} slot. For \code{amlexponential} objects, methods functions for the generic functions \code{qtplot} and \code{cdf} have not been written yet. See \code{\link{amlpoisson}} about comments on the jargon, e.g., \emph{expectiles} etc. In this documentation the word \emph{quantile} can often be interchangeably replaced by \emph{expectile} (things are informal here). } \section{Warning }{ Note that the \code{link} argument of \code{\link{exponential}} and \code{\link{amlexponential}} are currently different: one is the rate parameter and the other is the mean (expectile) parameter. If \code{w.aml} has more than one value then the value returned by \code{deviance} is the sum of all the (weighted) deviances taken over all the \code{w.aml} values. See Equation (1.6) of Efron (1992). } \seealso{ \code{\link{exponential}}, \code{\link{amlbinomial}}, \code{\link{amlpoisson}}, \code{\link{amlnormal}}, \code{\link{alaplace1}}, \code{\link{lms.bcg}}, \code{\link{deexp}}. 
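% A minimal sketch of the asymmetric weighting described above (purely
% illustrative; asym.weight is a hypothetical helper, not in VGAM):
% each deviance-type value is multiplied by w.aml when the corresponding
% residual is positive, and is left unweighted otherwise.
% asym.weight <- function(resid, w.aml = 1) ifelse(resid > 0, w.aml, 1)
% asym.weight(c(-2, -0.5, 0.3, 4), w.aml = 5)  # Returns 1 1 5 5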
}
\examples{
nn <- 2000
mydat <- data.frame(x = seq(0, 1, length = nn))
# An alternative mean function (unused; the next line overwrites mu):
# mydat <- transform(mydat, mu = loge(-0 + 1.5*x + 0.2*x^2, inverse = TRUE))
mydat <- transform(mydat, mu = loge(0 - sin(8*x), inverse = TRUE))
mydat <- transform(mydat, y = rexp(nn, rate = 1/mu))
(fit <- vgam(y ~ s(x, df = 5), amlexponential(w = c(0.001, 0.1, 0.5, 5, 60)),
             mydat, trace = TRUE))
fit@extra
\dontrun{ # These plots are against the sqrt scale (to increase clarity)
par(mfrow = c(1,2))
# Quantile plot
with(mydat, plot(x, sqrt(y), col = "blue", las = 1, main =
     paste(paste(round(fit@extra$percentile, digits = 1), collapse = ", "),
           "percentile-expectile curves")))
with(mydat, matlines(x, sqrt(fitted(fit)), lwd = 2, col = "blue", lty = 1))
# Compare the fitted expectiles with the quantiles
with(mydat, plot(x, sqrt(y), col = "blue", las = 1, main =
     paste(paste(round(fit@extra$percentile, digits = 1), collapse = ", "),
           "percentile curves are orange")))
with(mydat, matlines(x, sqrt(fitted(fit)), lwd = 2, col = "blue", lty = 1))
for (ii in fit@extra$percentile)
  with(mydat, matlines(x, sqrt(qexp(p = ii/100, rate = 1/mu)), col = "orange"))
}
}
\keyword{models}
\keyword{regression}
VGAM/man/uninormal.Rd0000644000176200001440000001042113135276753014064 0ustar liggesusers\name{uninormal}
\alias{uninormal}
\alias{normal1}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Univariate Normal Distribution }
\description{
Maximum likelihood estimation of the two parameters of a univariate normal distribution.
}
\usage{
uninormal(lmean = "identitylink", lsd = "loge", lvar = "loge",
          var.arg = FALSE, imethod = 1, isd = NULL, parallel = FALSE,
          smallno = 1e-05, zero = "sd")
}
%- maybe also 'usage' for other objects documented here.
% apply.parint = FALSE,
\arguments{
\item{lmean, lsd, lvar}{
Link functions applied to the mean and standard deviation/variance. See \code{\link{Links}} for more choices. Because the standard deviation and variance are positive quantities, a log link is their default (see \code{var.arg}).
}
% \item{emean, esd, evar}{
% List. Extra argument for the links.
% See \code{earg} in \code{\link{Links}} for general information.
% emean = list(), esd = list(), evar = list(),
% }
\item{var.arg}{
Logical. If \code{TRUE} then the second parameter is the variance and \code{lsd} and \code{esd} are ignored, else the standard deviation is used and \code{lvar} and \code{evar} are ignored.
}
\item{smallno}{
Numeric, positive but close to 0. Used specifically for quasi-variances; if the link for the mean is \code{\link{explink}} then any non-positive value of \code{eta} is replaced by this quantity (hopefully, temporarily and only during early iterations).
}
\item{imethod, parallel, isd, zero}{
See \code{\link{CommonVGAMffArguments}} for more information. If \code{lmean = "loge"} then try \code{imethod = 2}. If \code{parallel = TRUE} then the parallelism constraint is not applied to the intercept.
}
}
\details{
This fits a linear model (LM) as the first linear/additive predictor. So, by default, this is just the mean. By default, the log of the standard deviation is the second linear/additive predictor. The Fisher information matrix is diagonal. This \pkg{VGAM} family function can handle multiple responses.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}.
}
\references{
Forbes, C., Evans, M., Hastings, N. and Peacock, B.
(2011)
\emph{Statistical Distributions},
Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
}
\author{ T. W. Yee }
\section{Warning}{
\code{uninormal()} is the new name; \code{normal1()} is old and will be decommissioned soon.
}
\note{
Yet to do: allow an argument such as \code{eq.sd} that enables the standard deviations to be the same.
}
\seealso{
\code{\link{gaussianff}},
\code{\link{posnormal}},
\code{\link{mix2normal}},
\code{\link{normal.vcm}},
\code{\link{Qvar}},
\code{\link{tobit}},
\code{\link{cens.normal}},
\code{\link{foldnormal}},
\code{\link{skewnormal}},
\code{\link{double.cens.normal}},
\code{\link{SURff}},
\code{\link{AR1}},
\code{\link{huber2}},
\code{\link{studentt}},
\code{\link{binormal}},
\code{\link[stats:Normal]{dnorm}},
\code{\link{simulate.vlm}},
\code{\link{hdeff.vglm}}.
}
\examples{
udata <- data.frame(x2 = rnorm(nn <- 200))
udata <- transform(udata,
                   y1  = rnorm(nn, m = 1 - 3*x2, sd = exp(1 + 0.2*x2)),
                   y2a = rnorm(nn, m = 1 + 2*x2, sd = exp(1 + 2.0*x2)^0.5),
                   y2b = rnorm(nn, m = 1 + 2*x2, sd = exp(1 + 2.0*x2)^0.5))
fit1 <- vglm(y1 ~ x2, uninormal(zero = NULL), data = udata, trace = TRUE)
coef(fit1, matrix = TRUE)
fit2 <- vglm(cbind(y2a, y2b) ~ x2, data = udata, trace = TRUE,
             uninormal(var = TRUE, parallel = TRUE ~ x2, zero = NULL))
coef(fit2, matrix = TRUE)

# Generate data from N(mu = theta = 10, sigma = theta) and estimate theta.
theta <- 10
udata <- data.frame(y3 = rnorm(100, m = theta, sd = theta))
fit3a <- vglm(y3 ~ 1, uninormal(lsd = "identitylink"), data = udata,
              constraints = list("(Intercept)" = rbind(1, 1)))
fit3b <- vglm(y3 ~ 1, uninormal(lsd = "identitylink",
                                parallel = TRUE ~ 1, zero = NULL), data = udata)
coef(fit3a, matrix = TRUE)
coef(fit3b, matrix = TRUE)  # Same as fit3a
}
\keyword{models}
\keyword{regression}
VGAM/man/diffzeta.Rd0000644000176200001440000000364713135276753013660 0ustar liggesusers\name{diffzeta}
\alias{diffzeta}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Differenced Zeta Distribution Family Function }
\description{
Estimates the parameter of the differenced zeta distribution.
}
\usage{
diffzeta(start = 1, lshape = "loge", ishape = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{lshape, ishape}{
Same as \code{\link{zetaff}}.
}
\item{start}{
Smallest value of the support of the distribution. Must be a positive integer.
}
}
\details{
The PMF is
\deqn{P(Y=y) = (a/y)^{s} - (a/(1+y))^{s},\ \ s>0,\ \ y=a,a+1,\ldots,}{%
      P(Y=y) = (a/y)^(s) - (a/(1+y))^(s), s>0, y=a,a+1,...,}
where \eqn{s} is the positive shape parameter, and \eqn{a} is \code{start}. According to Moreno-Sanchez et al. (2016), this model fits quite well to about 40 percent of all the English books in the Project Gutenberg data base (about 30,000 texts). Multiple responses are handled.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}.
}
\references{
Moreno-Sanchez, I., Font-Clos, F. and Corral, A. (2016)
Large-Scale Analysis of Zipf's Law in English Texts.
\emph{PLoS ONE}, \bold{11}(1), 1--19.
}
\author{ T. W. Yee }
%\note{
% The \code{\link{zeta}} function may be used to compute values
% of the zeta function.
%
%
%}
\seealso{
\code{\link{Diffzeta}},
\code{\link{zetaff}},
\code{\link{zeta}},
\code{\link{zipf}}.
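% Because the PMF telescopes, it sums to unity over y = a, a+1, ...;
% a quick numerical sketch of this (the values of a and s are arbitrary):
% a <- 1; s <- 1.5; y <- a:1e5
% sum((a/y)^s - (a/(1+y))^s)  # Approximately 1 (exactly 1 in the limit)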
}
\examples{
odata <- data.frame(x2 = runif(nn <- 1000))  # Artificial data
odata <- transform(odata, shape = loge(-0.25 + x2, inverse = TRUE))
odata <- transform(odata, y1 = rdiffzeta(nn, shape))
with(odata, table(y1))
ofit <- vglm(y1 ~ x2, diffzeta, data = odata, trace = TRUE, crit = "coef")
coef(ofit, matrix = TRUE)
}
\keyword{models}
\keyword{regression}

%
VGAM/man/biplot-methods.Rd0000644000176200001440000000142113135276753015012 0ustar liggesusers\name{biplot-methods}
\docType{methods}
\alias{biplot,rrvglm-method}
\alias{biplot,qrrvglm-method}
\title{ Biplot of Constrained Regression Models }
\description{
\code{biplot} is a generic function applied to RR-VGLMs and QRR-VGLMs etc. It applies only to rank-1 and rank-2 models of these classes. For RR-VGLMs it plots the second latent variable scores against the first latent variable scores.
}
%\usage{
% \S4method{biplot}{cao,Coef.cao}(object, ...)
%}
\section{Methods}{
\describe{
\item{x}{
The object from which the latent variables are extracted and/or plotted.
}
}
}
\note{
See \code{\link{lvplot}} which is very much related to biplots.
}
\keyword{methods}
\keyword{classes}
%\keyword{ ~~ other possible keyword(s)}
\keyword{models}
\keyword{regression}
VGAM/man/chisq.Rd0000644000176200001440000000276713135276753013175 0ustar liggesusers\name{chisq}
\alias{chisq}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Chi-squared Distribution }
\description{
Maximum likelihood estimation of the degrees of freedom for a chi-squared distribution.
}
\usage{
chisq(link = "loge", zero = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{link, zero}{
See \code{\link{CommonVGAMffArguments}} for information.
}
}
\details{
The degrees of freedom is treated as a parameter to be estimated, and as real (not integer). Being positive, a log link is used by default. Fisher scoring is used.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}.
}
\references{
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011)
\emph{Statistical Distributions},
Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
}
\author{ T. W. Yee }
\note{
Multiple responses are permitted. There may be convergence problems if the degrees of freedom is very large or close to zero.
}
\seealso{
\code{\link[stats]{Chisquare}},
\code{\link{uninormal}}.
}
\examples{
cdata <- data.frame(x2 = runif(nn <- 1000))
cdata <- transform(cdata, y1 = rchisq(nn, df = exp(1 - 1 * x2)),
                          y2 = rchisq(nn, df = exp(2 - 2 * x2)))
fit <- vglm(cbind(y1, y2) ~ x2, chisq, data = cdata, trace = TRUE)
coef(fit, matrix = TRUE)
}
\keyword{models}
\keyword{regression}
VGAM/man/rayleighUC.Rd0000644000176200001440000000423313135276753014120 0ustar liggesusers\name{Rayleigh}
\alias{Rayleigh}
\alias{drayleigh}
\alias{prayleigh}
\alias{qrayleigh}
\alias{rrayleigh}
\title{The Rayleigh Distribution}
\description{
Density, distribution function, quantile function and random generation for the Rayleigh distribution with scale parameter \code{scale}.
}
\usage{
drayleigh(x, scale = 1, log = FALSE)
prayleigh(q, scale = 1, lower.tail = TRUE, log.p = FALSE)
qrayleigh(p, scale = 1, lower.tail = TRUE, log.p = FALSE)
rrayleigh(n, scale = 1)
}
\arguments{
\item{x, q}{vector of quantiles.}
\item{p}{vector of probabilities.}
\item{n}{number of observations. Fed into \code{\link[stats]{runif}}. }
\item{scale}{the scale parameter \eqn{b}.}
\item{log}{
Logical.
If \code{log = TRUE} then the logarithm of the density is returned.
}
\item{lower.tail, log.p}{
Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}.
}
}
\value{
\code{drayleigh} gives the density,
\code{prayleigh} gives the distribution function,
\code{qrayleigh} gives the quantile function, and
\code{rrayleigh} generates random deviates.
}
\references{
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011)
\emph{Statistical Distributions},
Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
}
\author{ T. W. Yee and Kai Huang }
\details{
See \code{\link{rayleigh}}, the \pkg{VGAM} family function for estimating the scale parameter \eqn{b} by maximum likelihood estimation, for the formula of the probability density function and range restrictions on the parameter \eqn{b}.
}
\note{
The Rayleigh distribution is related to the Maxwell distribution.
}
\seealso{
\code{\link{rayleigh}},
\code{\link{maxwell}}.
}
\examples{
\dontrun{ Scale <- 2; x <- seq(-1, 8, by = 0.1)
plot(x, drayleigh(x, scale = Scale), type = "l", ylim = c(0,1), las = 1,
     ylab = "",
     main = "Rayleigh density divided into 10 equal areas; orange = cdf")
abline(h = 0, col = "blue", lty = 2)
qq <- qrayleigh(seq(0.1, 0.9, by = 0.1), scale = Scale)
lines(qq, drayleigh(qq, scale = Scale), col = "purple", lty = 3, type = "h")
lines(x, prayleigh(x, scale = Scale), col = "orange") } }
\keyword{distribution}
VGAM/man/gumbelII.Rd0000644000176200001440000001023413135276753013557 0ustar liggesusers\name{gumbelII}
\alias{gumbelII}
%\alias{gumbelIIff}
%\alias{gumbelII.lsh}
%\alias{gumbelII3}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Gumbel-II Distribution Family Function }
\description{
Maximum likelihood estimation of the 2-parameter Gumbel-II distribution.
}
\usage{
gumbelII(lscale = "loge", lshape = "loge", iscale = NULL, ishape = NULL,
         probs.y = c(0.2, 0.5, 0.8), perc.out = NULL, imethod = 1,
         zero = "shape", nowarning = FALSE)
}
%- maybe also 'usage' for other objects documented here.
% zero = "scale", nowarning = FALSE 20151128
\arguments{
\item{nowarning}{ Logical. Suppress a warning? }
\item{lshape, lscale}{
Parameter link functions applied to the (positive) shape parameter (called \eqn{s} below) and (positive) scale parameter (called \eqn{b} below). See \code{\link{Links}} for more choices.
}
% \item{eshape, escale}{
% eshape = list(), escale = list(),
% Extra argument for the respective links.
% See \code{earg} in \code{\link{Links}} for general information.
% }
\item{ishape, iscale}{
Optional initial values for the shape and scale parameters.
}
\item{imethod}{
See \code{\link{weibullR}}.
}
\item{zero, probs.y}{
Details at \code{\link{CommonVGAMffArguments}}.
}
\item{perc.out}{
If the fitted values are to be quantiles then set this argument to be the percentiles of these, e.g., 50 for median.
}
}
\details{
The Gumbel-II density for a response \eqn{Y} is
\deqn{f(y;b,s) = s b^s y^{-s-1} \exp[-(y/b)^{-s}]}{%
      f(y;b,s) = s * b^s * y^(-s-1) * exp(-(y/b)^(-s))}
for \eqn{b > 0}, \eqn{s > 0}, \eqn{y > 0}. The cumulative distribution function is
\deqn{F(y;b,s) = \exp[-(y/b)^{-s}].}{%
      F(y;b,s) = exp(-(y/b)^(-s)).}
The mean of \eqn{Y} is \eqn{b \, \Gamma(1 - 1/s)}{b * gamma(1 - 1/s)} (returned as the fitted values) when \eqn{s>1}, and the variance is \eqn{b^2\,[\Gamma(1-2/s) - \Gamma^2(1-1/s)]}{b^2 * [Gamma(1-2/s) - Gamma(1-1/s)^2]} when \eqn{s>2}. This distribution looks similar to \code{\link{weibullR}}, and is due to Gumbel (1954).
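% A quick Monte Carlo sketch of the mean formula (the values of s and b
% are chosen arbitrarily; s > 1 is needed for the mean to exist):
% s <- 3; b <- 2; set.seed(1)
% mean(rgumbelII(1e6, scale = b, shape = s))  # Approximately equal to
% b * gamma(1 - 1/s)                          # the theoretical mean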
This \pkg{VGAM} family function currently does not handle censored data. Fisher scoring is used to estimate the two parameters. Probably similar regularity conditions hold for this distribution compared to the Weibull distribution. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Gumbel, E. J. (1954). Statistical theory of extreme values and some practical applications. \emph{Applied Mathematics Series}, volume 33, U.S. Department of Commerce, National Bureau of Standards, USA. } \author{ T. W. Yee } \note{ See \code{\link{weibullR}}. This \pkg{VGAM} family function handles multiple responses. } %\section{Warning}{ % This function is under development to handle other censoring situations. % The version of this function which will handle censored data will be % called \code{cengumbelII()}. It is currently being written and will use % \code{\link{SurvS4}} as input. % It should be released in later versions of \pkg{VGAM}. % % % If the shape parameter is less than two then misleading inference may % result, e.g., in the \code{summary} and \code{vcov} of the object. % % %} \seealso{ \code{\link{dgumbelII}}, \code{\link{gumbel}}, \code{\link{gev}}. } \examples{ gdata <- data.frame(x2 = runif(nn <- 1000)) gdata <- transform(gdata, heta1 = +1, heta2 = -1 + 0.1 * x2, ceta1 = 0, ceta2 = 1) gdata <- transform(gdata, shape1 = exp(heta1), shape2 = exp(heta2), scale1 = exp(ceta1), scale2 = exp(ceta2)) gdata <- transform(gdata, y1 = rgumbelII(nn, scale = scale1, shape = shape1), y2 = rgumbelII(nn, scale = scale2, shape = shape2)) fit <- vglm(cbind(y1, y2) ~ x2, gumbelII(zero = c(1, 2, 3)), data = gdata, trace = TRUE) coef(fit, matrix = TRUE) vcov(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/guplot.Rd0000644000176200001440000000517713135276753013406 0ustar liggesusers\name{guplot} \alias{guplot} \alias{guplot.default} \alias{guplot.vlm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Gumbel Plot } \description{ Produces a Gumbel plot, a diagnostic plot for checking whether the data appears to be from a Gumbel distribution. } \usage{ guplot(object, ...) guplot.default(y, main = "Gumbel Plot", xlab = "Reduced data", ylab = "Observed data", type = "p", ...) guplot.vlm(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{y}{ A numerical vector. \code{NA}s etc. are not allowed.} \item{main}{Character. Overall title for the plot. } \item{xlab}{Character. Title for the x axis. } \item{ylab}{Character. Title for the y axis. } \item{type}{Type of plot. The default means points are plotted. } \item{object}{ An object that inherits class \code{"vlm"}, usually of class \code{\link{vglm-class}} or \code{\link{vgam-class}}. } \item{\dots}{ Graphical argument passed into \code{\link[graphics]{plot}}. See \code{\link[graphics]{par}} for an exhaustive list. The arguments \code{xlim} and \code{ylim} are particularly useful. } } \details{ If \eqn{Y} has a Gumbel distribution then plotting the sorted values \eqn{y_i} versus the \emph{reduced values} \eqn{r_i} should appear linear. The reduced values are given by \deqn{r_i = -\log(-\log(p_i)) }{% r_i = - log(- log(p_i)) } where \eqn{p_i} is the \eqn{i}th plotting position, taken here to be \eqn{(i-0.5)/n}. Here, \eqn{n} is the number of observations. Curvature upwards/downwards may indicate a Frechet/Weibull distribution, respectively. 
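% A minimal sketch of the computation underlying guplot() (illustrative
% only): the sorted data are plotted against the reduced values.
% y <- rnorm(500); n <- length(y)
% pp <- ((1:n) - 0.5) / n       # Plotting positions
% rr <- -log(-log(pp))          # Reduced values
% plot(rr, sort(y))             # Roughly linear for Gumbel-like data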
Outliers may also be detected using this plot. The function \code{guplot} is generic, and \code{guplot.default} and \code{guplot.vlm} are some methods functions for Gumbel plots. } \value{ A list is returned invisibly with the following components. \item{x }{The reduced data. } \item{y }{The sorted y data. } } %% zz not sure about the reference \references{ Coles, S. (2001) \emph{An Introduction to Statistical Modeling of Extreme Values}. London: Springer-Verlag. Gumbel, E. J. (1958) \emph{Statistics of Extremes}. New York, USA: Columbia University Press. } \author{ T. W. Yee } \note{ The Gumbel distribution is a special case of the GEV distribution with shape parameter equal to zero. } \seealso{ \code{\link{gumbel}}, \code{\link{gumbelff}}, \code{\link{gev}}, \code{\link{venice}}. } \examples{\dontrun{guplot(rnorm(500), las = 1) -> ii names(ii) guplot(with(venice, r1), col = "blue") # Venice sea levels data }} \keyword{models} \keyword{regression} VGAM/man/cdf.lmscreg.Rd0000644000176200001440000000407513135276753014257 0ustar liggesusers\name{cdf.lmscreg} \alias{cdf.lmscreg} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Cumulative Distribution Function for LMS Quantile Regression } \description{ Computes the cumulative distribution function (CDF) for observations, based on a LMS quantile regression. } \usage{ cdf.lmscreg(object, newdata = NULL, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A \pkg{VGAM} quantile regression model, i.e., an object produced by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}} with a family function beginning with \code{"lms."}. } \item{newdata}{ Data frame where the predictions are to be made. If missing, the original data is used. } \item{\dots}{ Parameters which are passed into functions such as \code{cdf.lms.yjn}. } } \details{ The CDFs returned here are values lying in [0,1] giving the relative probabilities associated with the quantiles \code{newdata}. For example, a value near 0.75 means it is close to the upper quartile of the distribution. } \value{ A vector of CDF values lying in [0,1]. } \references{ Yee, T. W. (2004) Quantile regression via vector generalized additive models. \emph{Statistics in Medicine}, \bold{23}, 2295--2315. %Documentation accompanying the \pkg{VGAM} package at %\url{http://www.stat.auckland.ac.nz/~yee} %contains further information and examples. } \author{ Thomas W. Yee } \note{ The data are treated like quantiles, and the percentiles are returned. The opposite is performed by \code{\link{qtplot.lmscreg}}. The CDF values of the model have been placed in \code{@post$cdf} when the model was fitted. } \seealso{ \code{\link{deplot.lmscreg}}, \code{\link{qtplot.lmscreg}}, \code{\link{lms.bcn}}, \code{\link{lms.bcg}}, \code{\link{lms.yjn}}. } \examples{ fit <- vgam(BMI ~ s(age, df=c(4, 2)), lms.bcn(zero = 1), data = bmi.nz) head(fit@post$cdf) head(cdf(fit)) # Same head(depvar(fit)) head(fitted(fit)) cdf(fit, data.frame(age = c(31.5, 39), BMI = c(28.4, 24))) } \keyword{models} \keyword{regression} VGAM/man/grain.us.Rd0000644000176200001440000000202413135276753013606 0ustar liggesusers\name{grain.us} \alias{grain.us} \docType{data} \title{Grain Prices Data in USA } \description{ A 4-column matrix. 
}
\usage{data(grain.us)}
\format{
The columns are:
\describe{
\item{wheat.flour}{numeric}
\item{corn}{numeric}
\item{wheat}{numeric}
\item{rye}{numeric}
}
}
\details{
Monthly averages of grain prices in the United States for wheat flour, corn, wheat, and rye for the period January 1961 through October 1972. The units are US dollars per 100 pound sack for wheat flour, and per bushel for corn, wheat and rye.
}
\source{
Ahn and Reinsel (1988).
}
\references{
Ahn, S. K. and Reinsel, G. C. (1988)
Nested reduced-rank autoregressive models for multiple time series.
\emph{Journal of the American Statistical Association}, \bold{83}, 849--856.
}
\examples{
\dontrun{
cgrain <- scale(grain.us, scale = FALSE)  # Center the time series only
fit <- vglm(cgrain ~ 1, rrar(Rank = c(4, 1)),
            epsilon = 1e-3, stepsize = 0.5, trace = TRUE, maxit = 50)
summary(fit)
}
}
\keyword{datasets}
VGAM/man/bratUC.Rd0000644000176200001440000000545013135276753013246 0ustar liggesusers\name{Brat}
\alias{Brat}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Inputting Data to fit a Bradley Terry Model }
\description{
Takes in a square matrix of counts and outputs them in a form that is accessible to the \code{\link{brat}} and \code{\link{bratt}} family functions.
}
\usage{
Brat(mat, ties = 0 * mat, string = c(">", "=="), whitespace = FALSE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{mat}{
Matrix of counts, which is considered \eqn{M} by \eqn{M} in dimension when there are ties, and \eqn{M+1} by \eqn{M+1} when there are no ties. The rows are winners and the columns are losers, e.g., the 2-1 element is how many times Competitor 2 has beaten Competitor 1. The matrices are best labelled with the competitors' names.
}
\item{ties}{
Matrix of counts. This should be the same dimension as \code{mat}. By default, there are no ties. The matrix must be symmetric, and the diagonal should contain \code{NA}s.
}
\item{string}{
Character. The matrices are labelled with the first value of the descriptor, e.g., \code{"NZ > Oz"} `means' NZ beats Australia in rugby. Suggested alternatives include \code{" beats "} or \code{" wins against "}. The second value is used to handle ties.
}
\item{whitespace}{
Logical. If \code{TRUE} then a white space is added before and after \code{string}; it generally enhances readability. See \code{\link{CommonVGAMffArguments}} for some similar-type information.
}
}
\details{
In the \pkg{VGAM} package it is necessary for each matrix to be represented as a single row of data by \code{\link{brat}} and \code{\link{bratt}}. Hence the non-diagonal elements of the \eqn{M+1} by \eqn{M+1} matrix are concatenated into \eqn{M(M+1)} values (no ties), while if there are ties, the non-diagonal elements of the \eqn{M} by \eqn{M} matrix are concatenated into \eqn{M(M-1)} values.
}
\value{
A matrix with 1 row and either \eqn{M(M+1)} or \eqn{M(M-1)} columns.
}
\references{
Agresti, A. (2013)
\emph{Categorical Data Analysis},
3rd ed. Hoboken, NJ, USA: Wiley.
}
\author{ T. W. Yee }
\note{
This is a data preprocessing function for \code{\link{brat}} and \code{\link{bratt}}. Yet to do: merge \code{InverseBrat} into \code{brat}.
}
\seealso{
\code{\link{brat}},
\code{\link{bratt}},
\code{InverseBrat}.
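% A small dimension check of the above (using the journal matrix mat
% constructed in the examples below): a 4 x 4 matrix with no ties has
% M = 3, so Brat() returns a 1 x M(M+1) = 1 x 12 matrix.
% dim(Brat(mat))  # c(1, 12)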
} \examples{ journal <- c("Biometrika", "Comm Statist", "JASA", "JRSS-B") mat <- matrix(c( NA, 33, 320, 284, 730, NA, 813, 276, 498, 68, NA, 325, 221, 17, 142, NA), 4, 4) dimnames(mat) <- list(winner = journal, loser = journal) Brat(mat) # Less readable Brat(mat, whitespace = TRUE) # More readable vglm(Brat(mat, whitespace = TRUE) ~ 1, brat, trace = TRUE) } \keyword{models} \keyword{regression} VGAM/man/ordpoisson.Rd0000644000176200001440000001231013135276753014256 0ustar liggesusers\name{ordpoisson} \alias{ordpoisson} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Ordinal Poisson Family Function } \description{ Fits a Poisson regression where the response is ordinal (the Poisson counts are grouped between known cutpoints). } \usage{ ordpoisson(cutpoints, countdata = FALSE, NOS = NULL, Levels = NULL, init.mu = NULL, parallel = FALSE, zero = NULL, link = "loge") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{cutpoints}{ Numeric. The cutpoints, \eqn{K_l}. These must be non-negative integers. \code{Inf} values may be included. See below for further details. } \item{countdata}{ Logical. Is the response (LHS of formula) in count-data format? If not then the response is a matrix or vector with values \code{1}, \code{2}, \ldots, \code{L}, say, where \code{L} is the number of levels. Such input can be generated with \code{\link[base]{cut}} with argument \code{labels = FALSE}. If \code{countdata = TRUE} then the response is expected to be in the same format as \code{depvar(fit)} where \code{fit} is a fitted model with \code{ordpoisson} as the \pkg{VGAM} family function. That is, the response is matrix of counts with \code{L} columns (if \code{NOS = 1}). } \item{NOS}{ Integer. The number of species, or more generally, the number of response random variates. This argument must be specified when \code{countdata = TRUE}. Usually \code{NOS = 1}. } \item{Levels}{ Integer vector, recycled to length \code{NOS} if necessary. The number of levels for each response random variate. This argument should agree with \code{cutpoints}. This argument must be specified when \code{countdata = TRUE}. } \item{init.mu}{ Numeric. Initial values for the means of the Poisson regressions. Recycled to length \code{NOS} if necessary. Use this argument if the default initial values fail (the default is to compute an initial value internally). } \item{parallel, zero, link}{ See \code{\link{poissonff}}. } } \details{ This \pkg{VGAM} family function uses maximum likelihood estimation (Fisher scoring) to fit a Poisson regression to each column of a matrix response. The data, however, is ordinal, and is obtained from known integer cutpoints. Here, \eqn{l=1,\ldots,L} where \eqn{L} (\eqn{L \geq 2}{L >= 2}) is the number of levels. In more detail, let \eqn{Y^*=l} if \eqn{K_{l-1} < Y \leq K_{l}}{K_{l-1} < Y <= K_{l}} where the \eqn{K_l} are the cutpoints. We have \eqn{K_0=-\infty}{K_0=-Inf} and \eqn{K_L=\infty}{K_L=Inf}. The response for this family function corresponds to \eqn{Y^*} but we are really interested in the Poisson regression of \eqn{Y}. If \code{NOS=1} then the argument \code{cutpoints} is a vector \eqn{(K_1,K_2,\ldots,K_L)} where the last value (\code{Inf}) is optional. If \code{NOS>1} then the vector should have \code{NOS-1} \code{Inf} values separating the cutpoints. For example, if there are \code{NOS=3} responses, then something like \code{ordpoisson(cut = c(0, 5, 10, Inf, 20, 30, Inf, 0, 10, 40, Inf))} is valid. 
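% A small sketch of how such a combined cutpoints vector decomposes
% (illustrative only): each Inf terminates the cutpoints of one
% response, and each response has one more level than its number of
% finite cutpoints, so Levels would be c(4, 3, 4) here.
% cutpts.all <- c(0, 5, 10, Inf, 20, 30, Inf, 0, 10, 40, Inf)
% sum(is.infinite(cutpts.all))  # NOS = 3 responses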
} \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Yee, T. W. (2012) \emph{Ordinal ordination with normalizing link functions for count data}, (in preparation). } \author{ Thomas W. Yee } \note{ Sometimes there are no observations between two cutpoints. If so, the arguments \code{Levels} and \code{NOS} need to be specified too. See below for an example. } \section{Warning }{ The input requires care as little to no checking is done. If \code{fit} is the fitted object, have a look at \code{fit@extra} and \code{depvar(fit)} to check. } \seealso{ \code{\link{poissonff}}, \code{\link{polf}}, \code{\link[base:factor]{ordered}}. } \examples{ set.seed(123) # Example 1 x2 <- runif(n <- 1000); x3 <- runif(n) mymu <- exp(3 - 1 * x2 + 2 * x3) y1 <- rpois(n, lambda = mymu) cutpts <- c(-Inf, 20, 30, Inf) fcutpts <- cutpts[is.finite(cutpts)] # finite cutpoints ystar <- cut(y1, breaks = cutpts, labels = FALSE) \dontrun{ plot(x2, x3, col = ystar, pch = as.character(ystar)) } table(ystar) / sum(table(ystar)) fit <- vglm(ystar ~ x2 + x3, fam = ordpoisson(cutpoi = fcutpts)) head(depvar(fit)) # This can be input if countdata = TRUE head(fitted(fit)) head(predict(fit)) coef(fit, matrix = TRUE) fit@extra # Example 2: multivariate and there are no obsns between some cutpoints cutpts2 <- c(-Inf, 0, 9, 10, 20, 70, 200, 201, Inf) fcutpts2 <- cutpts2[is.finite(cutpts2)] # finite cutpoints y2 <- rpois(n, lambda = mymu) # Same model as y1 ystar2 <- cut(y2, breaks = cutpts2, labels = FALSE) table(ystar2) / sum(table(ystar2)) fit <- vglm(cbind(ystar,ystar2) ~ x2 + x3, fam = ordpoisson(cutpoi = c(fcutpts,Inf,fcutpts2,Inf), Levels = c(length(fcutpts)+1,length(fcutpts2)+1), parallel = TRUE), trace = TRUE) coef(fit, matrix = TRUE) fit@extra constraints(fit) summary(depvar(fit)) # Some columns have all zeros } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/posbernoulli.b.Rd0000644000176200001440000001747113135276753015031 0ustar liggesusers\name{posbernoulli.b} %\alias{posbernoulli} \alias{posbernoulli.b} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive Bernoulli Family Function with Behavioural Effects } \description{ Fits a GLM-/GAM-like model to multiple Bernoulli responses where each row in the capture history matrix response has at least one success (capture). Capture history behavioural effects are accommodated. } \usage{ posbernoulli.b(link = "logit", drop.b = FALSE ~ 1, type.fitted = c("likelihood.cond", "mean.uncond"), I2 = FALSE, ipcapture = NULL, iprecapture = NULL, p.small = 1e-4, no.warning = FALSE) } %- maybe also 'usage' for other objects documented here. % apply.parint = FALSE, \arguments{ \item{link, drop.b, ipcapture, iprecapture}{ See \code{\link{CommonVGAMffArguments}} for information about these arguments. By default the parallelism assumption does not apply to the intercept. With an intercept-only model setting \code{drop.b = TRUE ~ 1} results in the \eqn{M_0}/\eqn{M_h} model. % it just deletes the 2nd column of the constraint matrix corresponding % to the intercept. % The default value of \code{zero} means that the behavioural % effect is modelled as the difference between the % two intercepts. % That is, it is modelled through the intercept, and a % negative value of the second linear/additive predictor means trap shy, etc. } \item{I2}{ Logical. This argument is used for terms that are not parallel. 
If \code{TRUE} then the constraint matrix \code{diag(2)} (the general default constraint matrix in \pkg{VGAM}) is used, else \code{cbind(0:1, 1)}. The latter means the first element/column corresponds to the behavioural effect. Consequently it and its standard error etc. can be accessed directly without subtracting two quantities. } \item{type.fitted}{ Details at \code{\link{posbernoulli.tb}}. } \item{p.small, no.warning}{ See \code{\link{posbernoulli.t}}. } } \details{ This model (commonly known as \eqn{M_b}/\eqn{M_{bh}} in the capture--recapture literature) operates on a capture history matrix response of 0s and 1s (\eqn{n \times \tau}{n x tau}). See \code{\link{posbernoulli.t}} for details, e.g., common assumptions with other models. Once an animal is captured for the first time, it is marked/tagged so that its future capture history can be recorded. The effect of the recapture probability is modelled through a second linear/additive predictor. It is well-known that some species of animals are affected by capture, e.g., trap-shy or trap-happy. This \pkg{VGAM} family function \emph{does} allow the capture history to be modelled via such behavioural effects. So does \code{\link{posbernoulli.tb}} but \code{\link{posbernoulli.t}} cannot. % If \code{drop.b = TRUE} the parallelism does not apply to the intercept. The number of linear/additive predictors is \eqn{M = 2}, and the default links are \eqn{(logit \,p_c, logit \,p_r)^T}{(logit p_c, logit p_r)^T} where \eqn{p_c} is the probability of capture and \eqn{p_r} is the probability of recapture. The fitted value returned is of the same dimension as the response matrix, and depends on the capture history: prior to being first captured, it is \code{pcapture}. Afterwards, it is \code{precapture}. By default, the constraint matrices for the intercept term and the other covariates are set up so that \eqn{p_r} differs from \eqn{p_c} by a simple binary effect, on a logit scale. However, this difference (the behavioural effect) is more directly estimated by having \code{I2 = FALSE}. Then it allows an estimate of the trap-happy/trap-shy effect; these are positive/negative values respectively. If \code{I2 = FALSE} then the (nonstandard) constraint matrix used is \code{cbind(0:1, 1)}, meaning the first element can be interpreted as the behavioural effect. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } %\section{Warning }{ % % See \code{\link{posbernoulli.t}}. % % %} \references{ See \code{\link{posbernoulli.t}}. } \author{ Thomas W. Yee. } \note{ The dependent variable is \emph{not} scaled to row proportions. This is the same as \code{\link{posbernoulli.t}} and \code{\link{posbernoulli.tb}} but different from \code{\link{posbinomial}} and \code{\link{binomialff}}. % Monitor convergence by setting \code{trace = TRUE}. % To fit \eqn{M_{tb}}{M_tb} and \eqn{M_{tbh}}{M_tbh} % use \code{\link{posbernoulli.t}} with the \code{xij} % argument of \code{\link{vglm.control}}. } \seealso{ \code{\link{posbernoulli.t}} and \code{\link{posbernoulli.tb}} (including estimating \eqn{N}), \code{\link{deermice}}, \code{\link{dposbern}}, \code{\link{rposbern}}, \code{\link{posbinomial}}, \code{\link{aux.posbernoulli.t}}, \code{\link{prinia}}. % \code{\link{huggins91}}. 
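% The two intercept constraint matrices described above, side by side:
% diag(2)        # I2 = TRUE: separate capture and recapture intercepts
% cbind(0:1, 1)  # I2 = FALSE: 1st coefficient is the behavioural effect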
% \code{\link{vglm.control}} for \code{xij}, } \examples{ # deermice data ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, # Fit a M_b model M.b <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ 1, posbernoulli.b, data = deermice, trace = TRUE) coef(M.b)["(Intercept):1"] # Behavioural effect on the logit scale coef(M.b, matrix = TRUE) constraints(M.b, matrix = TRUE) summary(M.b, presid = FALSE) # Fit a M_bh model M.bh <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + weight, posbernoulli.b, data = deermice, trace = TRUE) coef(M.bh, matrix = TRUE) coef(M.bh)["(Intercept):1"] # Behavioural effect on the logit scale constraints(M.bh) # (2,1) element of "(Intercept)" is for the behavioural effect summary(M.bh, presid = FALSE) # Significant positive (trap-happy) behavioural effect # Approx. 95 percent confidence for the behavioural effect: SE.M.bh <- coef(summary(M.bh))["(Intercept):1", "Std. Error"] coef(M.bh)["(Intercept):1"] + c(-1, 1) * 1.96 * SE.M.bh # Fit a M_h model M.h <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + weight, posbernoulli.b(drop.b = TRUE ~ sex + weight), data = deermice, trace = TRUE) coef(M.h, matrix = TRUE) constraints(M.h, matrix = TRUE) summary(M.h, presid = FALSE) # Fit a M_0 model M.0 <- vglm(cbind( y1 + y2 + y3 + y4 + y5 + y6, 6 - y1 - y2 - y3 - y4 - y5 - y6) ~ 1, posbinomial, data = deermice, trace = TRUE) coef(M.0, matrix = TRUE) summary(M.0, presid = FALSE) # Simulated data set ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, set.seed(123); nTimePts <- 5; N <- 1000 # N is the popn size pdata <- rposbern(n = N, nTimePts = nTimePts, pvars = 2, is.popn = TRUE) nrow(pdata) # Less than N (because some animals were never captured) # The truth: xcoeffs are c(-2, 1, 2) and cap.effect = +1 M.bh.2 <- vglm(cbind(y1, y2, y3, y4, y5) ~ x2, posbernoulli.b, data = pdata, trace = TRUE) coef(M.bh.2) coef(M.bh.2, matrix = TRUE) constraints(M.bh.2, matrix = TRUE) summary(M.bh.2, presid = FALSE) head(depvar(M.bh.2)) # Capture history response matrix head(M.bh.2@extra$cap.hist1) # Info on its capture history head(M.bh.2@extra$cap1) # When it was first captured head(fitted(M.bh.2)) # Depends on capture history (trap.effect <- coef(M.bh.2)["(Intercept):1"]) # Should be +1 head(model.matrix(M.bh.2, type = "vlm"), 21) head(pdata) summary(pdata) dim(depvar(M.bh.2)) vcov(M.bh.2) M.bh.2@extra$N.hat # Estimate of the population size; should be about N M.bh.2@extra$SE.N.hat # SE of the estimate of the population size # An approximate 95 percent confidence interval: round(M.bh.2@extra$N.hat + c(-1, 1) * 1.96 * M.bh.2@extra$SE.N.hat, 1) } \keyword{models} \keyword{regression} %# Compare the models using a LRT %lrtest(M.bh, M.h) %(wald.pvalue <- 2 * pnorm(abs(summary(M.bh)@coef3["(Intercept):2", "z value"]), % lower.tail = FALSE)) # Two-sided pvalue VGAM/man/cardioid.Rd0000644000176200001440000000554513135276753013651 0ustar liggesusers\name{cardioid} \alias{cardioid} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Cardioid Distribution Family Function } \description{ Estimates the two parameters of the cardioid distribution by maximum likelihood estimation. } \usage{ cardioid(lmu = extlogit(min = 0, max = 2*pi), lrho = extlogit(min = -0.5, max = 0.5), imu = NULL, irho = 0.3, nsimEIM = 100, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu, lrho}{ Parameter link functions applied to the \eqn{\mu}{mu} and \eqn{\rho}{rho} parameters, respectively. See \code{\link{Links}} for more choices. } \item{imu, irho}{ Initial values. 
A \code{NULL} means an initial value is chosen internally. See \code{\link{CommonVGAMffArguments}} for more information. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The two-parameter cardioid distribution has a density that can be written as \deqn{f(y;\mu,\rho) = \frac{1}{2\pi} \left(1 + 2\, \rho \cos(y - \mu) \right) }{% f(y;mu,rho) = (1 + 2*rho*cos(y-mu)) / (2*pi)} where \eqn{0 < y < 2\pi}{0 < y < 2*pi}, \eqn{0 < \mu < 2\pi}{0 < mu < 2*pi}, and \eqn{-0.5 < \rho < 0.5}{-0.5 < rho < 0.5} is the concentration parameter. The default link functions enforce the range constraints of the parameters. For positive \eqn{\rho} the distribution is unimodal and symmetric about \eqn{\mu}{mu}. The mean of \eqn{Y} (which make up the fitted values) is \eqn{\pi + (\rho/\pi) ((2 \pi-\mu) \sin(2 \pi-\mu) + \cos(2 \pi-\mu) - \mu \sin(\mu) - \cos(\mu))}{ pi + (rho/pi) ((2*pi-mu)*sin(2*pi-mu) + cos(2*pi-mu) - mu*sin(mu) - cos(mu))}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Jammalamadaka, S. R. and SenGupta, A. (2001) \emph{Topics in Circular Statistics}, Singapore: World Scientific. } \author{ T. W. Yee } \note{ Fisher scoring using simulation is used. } \section{Warning }{ Numerically, this distribution can be difficult to fit because of a log-likelihood having multiple maximums. The user is therefore encouraged to try different starting values, i.e., make use of \code{imu} and \code{irho}. } \seealso{ \code{\link{rcard}}, \code{\link{extlogit}}, \code{\link{vonmises}}. \pkg{CircStats} and \pkg{circular} currently have a lot more R functions for circular data than the \pkg{VGAM} package. } \examples{ \dontrun{ cdata <- data.frame(y = rcard(n = 1000, mu = 4, rho = 0.45)) fit <- vglm(y ~ 1, cardioid, data = cdata, trace = TRUE) coef(fit, matrix=TRUE) Coef(fit) c(with(cdata, mean(y)), head(fitted(fit), 1)) summary(fit) } } \keyword{models} \keyword{regression} VGAM/man/triangleUC.Rd0000644000176200001440000000437213135276753014125 0ustar liggesusers\name{Triangle} \alias{Triangle} \alias{dtriangle} \alias{ptriangle} \alias{qtriangle} \alias{rtriangle} \title{The Triangle Distribution} \description{ Density, distribution function, quantile function and random generation for the Triangle distribution with parameter \code{theta}. } \usage{ dtriangle(x, theta, lower = 0, upper = 1, log = FALSE) ptriangle(q, theta, lower = 0, upper = 1, lower.tail = TRUE, log.p = FALSE) qtriangle(p, theta, lower = 0, upper = 1, lower.tail = TRUE, log.p = FALSE) rtriangle(n, theta, lower = 0, upper = 1) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Same as \code{\link[stats]{runif}}. } \item{theta}{the theta parameter which lies between \code{lower} and \code{upper}. } \item{lower, upper}{lower and upper limits of the distribution. Must be finite. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dtriangle} gives the density, \code{ptriangle} gives the distribution function, \code{qtriangle} gives the quantile function, and \code{rtriangle} generates random deviates. } %\references{ % %} \author{ T. W. 
Yee and Kai Huang } \details{ See \code{\link{triangle}}, the \pkg{VGAM} family function for estimating the parameter \eqn{\theta}{theta} by maximum likelihood estimation. } %\note{ % %} \seealso{ \code{\link{triangle}}, \code{\link{topple}}. } \examples{ \dontrun{ x <- seq(-0.1, 1.1, by = 0.01); theta <- 0.75 plot(x, dtriangle(x, theta = theta), type = "l", col = "blue", las = 1, main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles", ylim = c(0,2), ylab = "") abline(h = 0, col = "blue", lty = 2) lines(x, ptriangle(x, theta = theta), col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qtriangle(probs, theta = theta) lines(Q, dtriangle(Q, theta = theta), col = "purple", lty = 3, type = "h") ptriangle(Q, theta = theta) - probs # Should be all zero abline(h = probs, col = "purple", lty = 3) } } \keyword{distribution} VGAM/man/genpoisson.Rd0000644000176200001440000001137113135276753014251 0ustar liggesusers\name{genpoisson} \alias{genpoisson} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Generalized Poisson distribution } \description{ Estimation of the two-parameter generalized Poisson distribution. } \usage{ genpoisson(llambda = "rhobit", ltheta = "loge", ilambda = NULL, itheta = NULL, use.approx = TRUE, imethod = 1, ishrinkage = 0.95, zero = "lambda") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llambda, ltheta}{ Parameter link functions for \eqn{\lambda} and \eqn{\theta}. See \code{\link{Links}} for more choices. The \eqn{\lambda} parameter lies at least within the interval \eqn{[-1,1]}; see below for more details, and an alternative link is \code{\link{rhobit}}. The \eqn{\theta} parameter is positive, therefore the default is the log link. } \item{ilambda, itheta}{ Optional initial values for \eqn{\lambda} and \eqn{\theta}. The default is to choose values internally. } \item{use.approx}{ Logical. If \code{TRUE} then an approximation to the expected information matrix is used, otherwise Newton-Raphson is used. } \item{imethod}{ An integer with value \code{1} or \code{2} or \code{3} which specifies the initialization method for the parameters. If failure to converge occurs try another value and/or else specify a value for \code{ilambda} and/or \code{itheta}. } \item{ishrinkage, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } % \item{zero}{ % An integer vector, containing the value 1 or 2. % If so, \eqn{\lambda} or \eqn{\theta} respectively % are modelled as an intercept only. % If set to \code{NULL} then both linear/additive predictors are modelled % as functions of the explanatory variables. % } } \details{ The generalized Poisson distribution has density \deqn{f(y) = \theta(\theta+\lambda y)^{y-1} \exp(-\theta-\lambda y) / y!}{ f(y) = \theta(\theta+\lambda * y)^(y-1) * exp(-\theta-\lambda * y) / y!} for \eqn{\theta > 0} and \eqn{y = 0,1,2,\ldots}. Now \eqn{\max(-1,-\theta/m) \leq \lambda \leq 1}{ max(-1,-\theta/m) \le lambda \le 1} where \eqn{m (\geq 4)}{m (\ge 4)} is the greatest positive integer satisfying \eqn{\theta + m\lambda > 0} when \eqn{\lambda < 0} [and then \eqn{P(Y=y) = 0} for \eqn{y > m}]. Note the complicated support for this distribution means, for some data sets, the default link for \code{llambda} will not always work, and some tinkering may be required to get it running. 
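% As noted below, an ordinary Poisson distribution corresponds to
% lambda = 0; a quick numerical sketch of this (the argument order of
% dgenpois() is assumed here):
% yy <- 0:15
% max(abs(dgenpois(yy, lambda = 0, theta = 3) - dpois(yy, lambda = 3)))
% # Should be essentially zero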
As Consul and Famoye (2006) state on p.165, the lower limits on \eqn{\lambda} and \eqn{m \ge 4}{m >= 4} are imposed to ensure that there are at least 5 classes with nonzero probability when \eqn{\lambda} is negative. An ordinary Poisson distribution corresponds to \eqn{\lambda = 0}{lambda = 0}. The mean (returned as the fitted values) is \eqn{E(Y) = \theta / (1 - \lambda)} and the variance is \eqn{\theta / (1 - \lambda)^3}. For more information see Consul and Famoye (2006) for a summary and Consul (1989) for full details. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Consul, P. C. and Famoye, F. (2006) \emph{Lagrangian Probability Distributions}, Boston, USA: Birkhauser. Jorgensen, B. (1997) \emph{The Theory of Dispersion Models}. London: Chapman & Hall Consul, P. C. (1989) \emph{Generalized Poisson Distributions: Properties and Applications}. New York, USA: Marcel Dekker. } \section{Warning }{ Monitor convergence! This family function is fragile. Don't get confused because \code{theta} (and not \code{lambda}) here really matches more closely with \code{lambda} of \code{\link[stats:Poisson]{dpois}}. } \author{ T. W. Yee } \note{ This family function handles multiple responses. This distribution is potentially useful for dispersion modelling. Convergence problems may occur when \code{lambda} is very close to 0 or 1. If a failure occurs then you might want to try something like \code{llambda = extlogit(min = -0.9, max = 1)} to handle the LHS complicated constraint, and if that doesn't work, try \code{llambda = extlogit(min = -0.8, max = 1)}, etc. } \seealso{ \code{\link{poissonff}}, \code{\link[stats:Poisson]{dpois}}. \code{\link{dgenpois}}, \code{\link{rhobit}}, \code{\link{extlogit}}. } \examples{ gdata <- data.frame(x2 = runif(nn <- 200)) gdata <- transform(gdata, y1 = rpois(nn, exp(2 - x2))) # Poisson data fit <- vglm(y1 ~ x2, genpoisson, data = gdata, trace = TRUE) coef(fit, matrix = TRUE) summary(fit) } \keyword{models} \keyword{regression} % yettodo: see csda 2009, 53(9): 3478--3489. %{% f(y) = theta*(theta+lambda*y)^(y-1) exp(-theta-lambda*y) / y!} VGAM/man/QvarUC.Rd0000644000176200001440000002115613135276753013230 0ustar liggesusers\name{Qvar} \alias{Qvar} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Quasi-variances Preprocessing Function %% ~~function to do ... ~~ } \description{ Takes a \code{\link{vglm}} fit or a variance-covariance matrix, and preprocesses it for \code{\link{rcim}} and \code{\link{uninormal}} so that quasi-variances can be computed. %% ~~ A concise (1-5 lines) description of what the function does. ~~ } \usage{ Qvar(object, factorname = NULL, which.linpred = 1, coef.indices = NULL, labels = NULL, dispersion = NULL, reference.name = "(reference)", estimates = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A \code{"\link[=vglmff-class]{vglm}"} object or a variance-covariance matrix, e.g., \code{vcov(vglm.object)}. The former is preferred since it contains all the information needed. If a matrix then \code{factorname} and/or \code{coef.indices} should be specified to identify the factor. } \item{which.linpred}{ A single integer from the set \code{1:M}. Specifies which linear predictor to use. Let the value of \code{which.linpred} be called \eqn{j}. 
Then the factor should appear in that linear predictor, hence the \eqn{j}th row of the constraint matrix corresponding to the factor should have at least one nonzero value. Currently the \eqn{j}th row must have exactly one nonzero value because programming it for more than one nonzero value is difficult.
}
\item{factorname}{
Character. If the \code{\link{vglm}} object contains more than one factor as explanatory variable then this argument should be the name of the factor of interest. If \code{object} is a variance-covariance matrix then this argument should also be specified.
}
\item{labels}{
Character. Optional, for labelling the variance-covariance matrix.
}
\item{dispersion}{
Numeric. Optional, passed into \code{vcov()} with the same argument name.
}
\item{reference.name}{
Character. Label for the reference level.
}
\item{coef.indices}{
Optional numeric vector of length at least 3 specifying the indices of the factor from the variance-covariance matrix.
}
\item{estimates}{
An optional vector of estimated coefficients (redundant if \code{object} is a model).
}
}
\details{
Suppose a factor with \eqn{L} levels is an explanatory variable in a regression model. By default, R treats the first level as baseline so that its coefficient is set to zero. It estimates the other \eqn{L-1} coefficients which, together with their associated standard errors, form the conventional output. From the complete variance-covariance matrix one can compute \eqn{L} quasi-variances based on all pairwise differences of the coefficients. They are based on an approximation, and can be treated as uncorrelated. In minimizing the relative (not absolute) errors it is not hard to see that the estimation involves an RCIM (\code{\link{rcim}}) with an exponential link function (\code{\link{explink}}).

If \code{object} is a model, then at least one of \code{factorname} or \code{coef.indices} must be non-\code{NULL}. The value of \code{coef.indices}, if non-\code{NULL}, determines which rows and columns of the model's variance-covariance matrix to use. If \code{coef.indices} contains a zero, an extra row and column are included at the indicated position, to represent the zero variances and covariances associated with a reference level. If \code{coef.indices} is \code{NULL}, then \code{factorname} should be the name of a factor effect in the model, and is used in order to extract the necessary variance-covariance estimates.

Quasi-variances were first implemented in R with \pkg{qvcalc}. This implementation draws heavily from that.
}
\value{
An \eqn{L} by \eqn{L} matrix whose \eqn{i}-\eqn{j} element is the logarithm of the variance of the difference between the \eqn{i}th and \eqn{j}th coefficients, for all values of \eqn{i} and \eqn{j}. The diagonal elements are arbitrary and are set to zero.

The matrix has an attribute that corresponds to the prior weight matrix; it is accessed by \code{\link{uninormal}} and replaces the usual \code{weights} argument of \code{\link{vglm}}. This weight matrix has ones on the off-diagonals and some small positive number on the diagonals.
}
\references{
Firth, D. (2003)
Overcoming the reference category problem in the presentation of statistical models.
\emph{Sociological Methodology} \bold{33}, 1--18.

Firth, D. and de Menezes, R. X. (2004)
Quasi-variances.
\emph{Biometrika} \bold{91}, 65--80.

Yee, T. W. and Hadi, A. F. (2014)
Row-column interaction models, with an R implementation.
\emph{Computational Statistics}, \bold{29}, 1427--1445.
}
\author{ T. W.
Yee, based heavily on \code{qvcalc()} in \pkg{qvcalc} written by David Firth. } \note{ This is an adaptation of \code{qvcalc()} in \pkg{qvcalc}. It should work for all \code{\link{vglm}} models with one linear predictor, i.e., \eqn{M = 1}. For \eqn{M > 1} the factor should appear only in one of the linear predictors. It is important to set \code{maxit} to be larger than usual for \code{\link{rcim}} since convergence is slow. Upon successful convergence the \eqn{i}th row effect and the \eqn{i}th column effect should be equal. A simple computation involving the fitted and predicted values allows the quasi-variances to be extracted (see example below). A function to plot \emph{comparison intervals} has not been written here. } \section{Warning }{ Negative quasi-variances may occur (at most one of them), though they are rare in practice. If so then numerical problems may occur. See \code{qvcalc()} for more information. } \seealso{ \code{\link{rcim}}, \code{\link{vglm}}, \code{\link{qvar}}, \code{\link{uninormal}}, \code{\link{explink}}, \code{qvcalc()} in \pkg{qvcalc}, \code{\link[MASS]{ships}}. } \examples{ # Example 1 data("ships", package = "MASS") Shipmodel <- vglm(incidents ~ type + year + period, quasipoissonff, offset = log(service), # trace = TRUE, model = TRUE, data = ships, subset = (service > 0)) # Easiest form of input fit1 <- rcim(Qvar(Shipmodel, "type"), uninormal("explink"), maxit = 99) qvar(fit1) # Easy method to get the quasi-variances qvar(fit1, se = TRUE) # Easy method to get the quasi-standard errors (quasiVar <- exp(diag(fitted(fit1))) / 2) # Version 1 (quasiVar <- diag(predict(fit1)[, c(TRUE, FALSE)]) / 2) # Version 2 (quasiSE <- sqrt(quasiVar)) # Another form of input fit2 <- rcim(Qvar(Shipmodel, coef.indices = c(0, 2:5), reference.name = "typeA"), uninormal("explink"), maxit = 99) \dontrun{ qvplot(fit2, col = "green", lwd = 3, scol = "blue", slwd = 2, las = 1) } # The variance-covariance matrix is another form of input (not recommended) fit3 <- rcim(Qvar(cbind(0, rbind(0, vcov(Shipmodel)[2:5, 2:5])), labels = c("typeA", "typeB", "typeC", "typeD", "typeE"), estimates = c(typeA = 0, coef(Shipmodel)[2:5])), uninormal("explink"), maxit = 99) (QuasiVar <- exp(diag(fitted(fit3))) / 2) # Version 1 (QuasiVar <- diag(predict(fit3)[, c(TRUE, FALSE)]) / 2) # Version 2 (QuasiSE <- sqrt(QuasiVar)) \dontrun{ qvplot(fit3) } # Example 2: a model with M > 1 linear predictors \dontrun{ require("VGAMdata") xs.nz.f <- subset(xs.nz, sex == "F") xs.nz.f <- subset(xs.nz.f, !is.na(babies) & !is.na(age) & !is.na(ethnicity)) xs.nz.f <- subset(xs.nz.f, ethnicity != "Other") clist <- list("sm.bs(age, df = 4)" = rbind(1, 0), "sm.bs(age, df = 3)" = rbind(0, 1), "ethnicity" = diag(2), "(Intercept)" = diag(2)) fit1 <- vglm(babies ~ sm.bs(age, df = 4) + sm.bs(age, df = 3) + ethnicity, zipoissonff(zero = NULL), xs.nz.f, constraints = clist, trace = TRUE) Fit1 <- rcim(Qvar(fit1, "ethnicity", which.linpred = 1), uninormal("explink", imethod = 1), maxit = 99, trace = TRUE) Fit2 <- rcim(Qvar(fit1, "ethnicity", which.linpred = 2), uninormal("explink", imethod = 1), maxit = 99, trace = TRUE) } \dontrun{ par(mfrow = c(1, 2)) qvplot(Fit1, scol = "blue", pch = 16, main = expression(eta[1]), slwd = 1.5, las = 1, length.arrows = 0.07) qvplot(Fit2, scol = "blue", pch = 16, main = expression(eta[2]), slwd = 1.5, las = 1, length.arrows = 0.07) } } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory.
\keyword{models} \keyword{regression} % \code{\link[qvcalc:qvcalc]{qvcalc}} in \pkg{qvcalc} VGAM/man/lvplot.qrrvglm.Rd0000644000176200001440000003267213135276753015075 0ustar liggesusers\name{lvplot.qrrvglm} \alias{lvplot.qrrvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Latent Variable Plot for QO models } \description{ Produces an ordination diagram (latent variable plot) for quadratic ordination (QO) models. For rank-1 models, the x-axis is the first ordination/constrained/canonical axis. For rank-2 models, the x- and y-axes are the first and second ordination axes respectively. } \usage{ lvplot.qrrvglm(object, varI.latvar = FALSE, refResponse = NULL, add = FALSE, show.plot = TRUE, rug = TRUE, y = FALSE, type = c("fitted.values", "predictors"), xlab = paste("Latent Variable", if (Rank == 1) "" else " 1", sep = ""), ylab = if (Rank == 1) switch(type, predictors = "Predictors", fitted.values = "Fitted values") else "Latent Variable 2", pcex = par()$cex, pcol = par()$col, pch = par()$pch, llty = par()$lty, lcol = par()$col, llwd = par()$lwd, label.arg = FALSE, adj.arg = -0.1, ellipse = 0.95, Absolute = FALSE, elty = par()$lty, ecol = par()$col, elwd = par()$lwd, egrid = 200, chull.arg = FALSE, clty = 2, ccol = par()$col, clwd = par()$lwd, cpch = " ", C = FALSE, OriginC = c("origin", "mean"), Clty = par()$lty, Ccol = par()$col, Clwd = par()$lwd, Ccex = par()$cex, Cadj.arg = -0.1, stretchC = 1, sites = FALSE, spch = NULL, scol = par()$col, scex = par()$cex, sfont = par()$font, check.ok = TRUE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A CQO object. % A CQO or UQO object. } \item{varI.latvar}{ Logical that is fed into \code{\link{Coef.qrrvglm}}. } \item{refResponse}{ Integer or character that is fed into \code{\link{Coef.qrrvglm}}. } \item{add}{ Logical. Add to an existing plot? If \code{FALSE}, a new plot is made. } \item{show.plot}{ Logical. Plot it? } \item{rug}{ Logical. If \code{TRUE}, a rug plot is plotted at the foot of the plot (applies to rank-1 models only). These values are jittered to expose ties. } \item{y}{ Logical. If \code{TRUE}, the responses will be plotted (applies only to rank-1 models and if \code{type = "fitted.values"}). } \item{type}{ Either \code{"fitted.values"} or \code{"predictors"}, specifies whether the y-axis is on the response or eta-scales respectively. } \item{xlab}{ Caption for the x-axis. See \code{\link[graphics]{par}}. } \item{ylab}{ Caption for the y-axis. See \code{\link[graphics]{par}}. } \item{pcex}{ Character expansion of the points. Here, for rank-1 models, points are the response \emph{y} data. For rank-2 models, points are the optimums. See the \code{cex} argument in \code{\link[graphics]{par}}. } \item{pcol}{ Color of the points. See the \code{col} argument in \code{\link[graphics]{par}}. } \item{pch}{ Either an integer specifying a symbol or a single character to be used as the default in plotting points. See \code{\link[graphics]{par}}. The \code{pch} argument can be of length \eqn{M}, the number of species. } \item{llty}{ Line type. Rank-1 models only. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{lcol}{ Line color. Rank-1 models only. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{llwd}{ Line width. Rank-1 models only. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{label.arg}{ Logical. Label the optimums and \bold{C}? (applies to rank-2 models only).
} \item{adj.arg}{ Justification of text strings for labelling the optimums (applies to rank-2 models only). See the \code{adj} argument of \code{\link[graphics]{par}}. } \item{ellipse}{ Numerical, of length 0 or 1 (applies to rank-2 models only). If \code{Absolute} is \code{TRUE} then \code{ellipse} should be assigned a value that is used for the elliptical contouring. If \code{Absolute} is \code{FALSE} then \code{ellipse} should be assigned a value between 0 and 1, for example, setting \code{ellipse = 0.9} means an ellipse with contour = 90\% of the maximum will be plotted about each optimum. If \code{ellipse} is a negative value, then the function checks that the model is an equal-tolerances model and \code{varI.latvar = FALSE}, and if so, plots circles with radius \code{-ellipse}. For example, setting \code{ellipse = -1} will result in circular contours that have unit radius (in latent variable units). If \code{ellipse} is \code{NULL} or \code{FALSE} then no ellipse is drawn around the optimums. } \item{Absolute}{ Logical. If \code{TRUE}, the contours corresponding to \code{ellipse} are on an absolute scale. If \code{FALSE}, the contours corresponding to \code{ellipse} are on a relative scale. } \item{elty}{ Line type of the ellipses. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{ecol}{ Line color of the ellipses. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{elwd}{ Line width of the ellipses. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{egrid}{ Numerical. Line resolution of the ellipses. Choosing a larger value will result in smoother ellipses. Useful when ellipses are large. } \item{chull.arg}{ Logical. Add a convex hull around the site scores? } \item{clty}{ Line type of the convex hull. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{ccol}{ Line color of the convex hull. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{clwd}{ Line width of the convex hull. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{cpch}{ Character to be plotted at the intersection points of the convex hull. Having white spaces means that site labels are not obscured there. See the \code{pch} argument of \code{\link[graphics]{par}}. } \item{C}{ Logical. Add \bold{C} (represented by arrows emanating from \code{OriginC}) to the plot? } \item{OriginC}{ Character or numeric. Where the arrows representing \bold{C} emanate from. If character, it must be one of the choices given. By default the first is chosen. The value \code{"origin"} means \code{c(0,0)}. The value \code{"mean"} means the sample mean of the latent variables (centroid). Alternatively, the user may specify a numerical vector of length 2. } \item{Clty}{ Line type of the arrows representing \bold{C}. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{Ccol}{ Line color of the arrows representing \bold{C}. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{Clwd}{ Line width of the arrows representing \bold{C}. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{Ccex}{ Numeric. Character expansion of the labelling of \bold{C}. See the \code{cex} argument of \code{\link[graphics]{par}}. } \item{Cadj.arg}{ Justification of text strings when labelling \bold{C}. See the \code{adj} argument of \code{\link[graphics]{par}}. } \item{stretchC}{ Numerical. Stretching factor for \bold{C}. Instead of using \bold{C}, \code{stretchC * } \bold{C} is used. } \item{sites}{ Logical.
Add the site scores (aka latent variable values, nu's) to the plot? (applies to rank-2 models only). } \item{spch}{ Plotting character of the site scores. The default value of \code{NULL} means the row labels of the data frame are used. They often are the site numbers. See the \code{pch} argument of \code{\link[graphics]{par}}. } \item{scol}{ Color of the site scores. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{scex}{ Character expansion of the site scores. See the \code{cex} argument of \code{\link[graphics]{par}}. } \item{sfont}{ Font used for the site scores. See the \code{font} argument of \code{\link[graphics]{par}}. } % \item{Rotate}{ % Numeric or logical. % A value from the set \{1,2,\ldots,\eqn{M}\} indicating % which species (quadratic predictor) is to be chosen so that % its major and semi-minor axes are parallel to the latent variable % axes, i.e., that species' Tolerance matrix will be diagonal. % If \code{Rotate} is \code{TRUE}, the first species is selected for rotation. % By default a rotation is performed only if the tolerance matrices are equal, % and \code{Rotation} only applies when the rank is greater than one. % See \code{\link{Coef.qrrvglm}} for details. % } % \item{I.tolerances}{ % Logical. % If \code{TRUE}, the tolerances matrices are transformed so that they are % the order-\code{Rank} identity matrix. This means that a rank-2 % latent variable plot % can be interpreted naturally in terms of distances and directions. % See \code{\link{Coef.qrrvglm}} for details. % } \item{check.ok}{ Logical. Whether a check is performed to see that \code{noRRR = ~ 1} was used. It doesn't make sense to have a latent variable plot unless this is so. } \item{\dots}{ Arguments passed into the \code{plot} function when setting up the entire plot. Useful arguments here include \code{xlim} and \code{ylim}. } } \details{ This function only works for rank-1 and rank-2 QRR-VGLMs with argument \code{noRRR = ~ 1}. For unequal-tolerances models, the latent variable axes can be rotated so that at least one of the tolerance matrices is diagonal; see \code{\link{Coef.qrrvglm}} for details. Arguments beginning with ``\code{p}'' correspond to the points; e.g., \code{pcex} and \code{pcol} correspond to the size and color of the points. Such ``\code{p}'' arguments should be vectors of length 1, or \eqn{n}, the number of sites. For the rank-2 model, arguments beginning with ``\code{p}'' correspond to the optimums. } \value{ Returns a matrix of latent variables (site scores) regardless of whether a plot was produced or not. } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. } \author{ Thomas W. Yee } \note{ A species which does not have an optimum will not have an ellipse drawn even if requested, i.e., if its tolerance matrix is not positive-definite. % Stationary points which are not bell-shaped will not be plotted % at all. Plotting \bold{C} gives a visual display of the weights (loadings) of each of the variables used in the linear combination defining each latent variable. The arguments \code{elty}, \code{ecol} and \code{elwd} may be replaced in the future by \code{llty}, \code{lcol} and \code{llwd}, respectively. For rank-1 models, a similar function to this one is \code{\link{perspqrrvglm}}. It plots the fitted values on a finer grid rather than at the actual site scores here. The result is a collection of smooth bell-shaped curves.
However, it has the weakness that the plot is more divorced from the data; the user thinks it is the truth without an appreciation of the statistical variability in the estimates. % Yet to do: allow for the contour line to correspond to the tolerance % matrix itself. zz ?? In the example below, the data comes from an equal-tolerances model. The species' tolerance matrices are all the identity matrix, and the optimums are at (0,0), (1,1) and (-2,0) for species 1, 2, 3 respectively. } \section{Warning}{ Interpretation of a latent variable plot (CQO diagram) is potentially very misleading in terms of distances if (i) the tolerance matrices of the species are unequal and (ii) the contours of these tolerance matrices are not included in the ordination diagram. } \seealso{ \code{\link{lvplot}}, \code{\link{perspqrrvglm}}, \code{\link{Coef.qrrvglm}}, \code{\link[graphics]{par}}, \code{\link{cqo}}. } \examples{ set.seed(123); nn <- 200 cdata <- data.frame(x2 = rnorm(nn), # Has mean 0 (needed when I.tol=TRUE) x3 = rnorm(nn), # Has mean 0 (needed when I.tol=TRUE) x4 = rnorm(nn)) # Has mean 0 (needed when I.tol=TRUE) cdata <- transform(cdata, latvar1 = x2 + x3 - 2*x4, latvar2 = -x2 + x3 + 0*x4) # Nb. latvar2 is weakly correlated with latvar1 cdata <- transform(cdata, lambda1 = exp(6 - 0.5 * (latvar1-0)^2 - 0.5 * (latvar2-0)^2), lambda2 = exp(5 - 0.5 * (latvar1-1)^2 - 0.5 * (latvar2-1)^2), lambda3 = exp(5 - 0.5 * (latvar1+2)^2 - 0.5 * (latvar2-0)^2)) cdata <- transform(cdata, spp1 = rpois(nn, lambda1), spp2 = rpois(nn, lambda2), spp3 = rpois(nn, lambda3)) set.seed(111) \dontrun{ p2 <- cqo(cbind(spp1, spp2, spp3) ~ x2 + x3 + x4, poissonff, data = cdata, Rank = 2, I.tolerances = TRUE, Crow1positive = c(TRUE, FALSE)) # deviance = 505.81 if (deviance(p2) > 506) stop("suboptimal fit obtained") sort(deviance(p2, history = TRUE)) # A history of all the iterations Coef(p2) } \dontrun{ lvplot(p2, sites = TRUE, spch = "*", scol = "darkgreen", scex = 1.5, chull = TRUE, label = TRUE, Absolute = TRUE, ellipse = 140, adj = -0.5, pcol = "blue", pcex = 1.3, las = 1, Ccol = "orange", C = TRUE, Cadj = c(-0.3, -0.3, 1), Clwd = 2, Ccex = 1.4, main = paste("Contours at Abundance = 140 with", "convex hull of the site scores")) } \dontrun{ var(latvar(p2)) # A diagonal matrix, i.e., uncorrelated latent vars var(latvar(p2, varI.latvar = TRUE)) # Identity matrix Tol(p2)[, , 1:2] # Identity matrix Tol(p2, varI.latvar = TRUE)[, , 1:2] # A diagonal matrix } } \keyword{models} \keyword{regression} \keyword{graphs} VGAM/man/maxwell.Rd0000644000176200001440000000405013135276753013532 0ustar liggesusers\name{maxwell} \alias{maxwell} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Maxwell Distribution Family Function } \description{ Estimating the parameter of the Maxwell distribution by maximum likelihood estimation. } \usage{ maxwell(link = "loge", zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link, zero}{ Parameter link function applied to \eqn{a}, which is called the parameter \code{rate}. See \code{\link{Links}} for more choices and information; a log link is the default because the parameter is positive. More information is at \code{\link{CommonVGAMffArguments}}. } } \details{ The Maxwell distribution, which is used in the area of thermodynamics, has a probability density function that can be written \deqn{f(y;a) = \sqrt{2/\pi} a^{3/2} y^2 \exp(-0.5 a y^2)}{% f(y;a) = sqrt(2/pi) * a^(3/2) * y^2 * exp(-0.5*a*y^2)} for \eqn{y>0} and \eqn{a>0}. 
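As a quick numerical sanity check of this density, one can compare it with \code{dmaxwell} (see \code{\link{Maxwell}}); this is only an illustrative sketch, and it assumes that \code{dmaxwell} takes the parameter \eqn{a} via its \code{rate} argument, as described above:

a <- 2  # An arbitrary positive value for the parameter
f <- function(y) sqrt(2/pi) * a^(3/2) * y^2 * exp(-0.5 * a * y^2)
integrate(f, lower = 0, upper = Inf)$value      # Should be 1
ygrid <- seq(0.1, 4, by = 0.1)
max(abs(f(ygrid) - dmaxwell(ygrid, rate = a)))  # Should be near 0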
The mean of \eqn{Y} is \eqn{\sqrt{8 / (a \pi)}}{sqrt(8 / (a * pi))} (returned as the fitted values), and its variance is \eqn{(3\pi - 8)/(\pi a)}{(3*pi - 8)/(pi*a)}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ von Seggern, D. H. (1993) \emph{CRC Standard Curves and Surfaces}, Boca Raton, FL: CRC Press. } \author{ T. W. Yee } \note{ Fisher-scoring and Newton-Raphson are the same here. A related distribution is the Rayleigh distribution. This \pkg{VGAM} family function handles multiple responses. This \pkg{VGAM} family function can be mimicked by \code{poisson.points(ostatistic = 1.5, dimension = 2)}. } \seealso{ \code{\link{Maxwell}}, \code{\link{rayleigh}}, \code{\link{poisson.points}}. } \examples{ mdata <- data.frame(y = rmaxwell(1000, rate = exp(2))) fit <- vglm(y ~ 1, maxwell, data = mdata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) } \keyword{models} \keyword{regression} VGAM/man/is.buggy.Rd0000644000176200001440000000620013135276753013607 0ustar liggesusers\name{is.buggy} \alias{is.buggy} \alias{is.buggy.vlm} % 20150326 %- Also NEED an '\alias' for EACH other topic documented here. \title{ Does the fitted object suffer from a known bug? } \description{ Checks to see if a fitted object suffers from some known bug. } \usage{ is.buggy(object, ...) is.buggy.vlm(object, each.term = FALSE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A fitted \pkg{VGAM} object, e.g., from \code{\link{vgam}}. } \item{each.term}{ Logical. If \code{TRUE} then a logical is returned for each term. } \item{\dots}{ Unused for now. } } \details{ It is known that \code{\link{vgam}} with \code{\link{s}} terms does not correctly handle constraint matrices (\code{cmat}, say) when \code{crossprod(cmat)} is not diagonal. This function detects whether this is so or not. Note that probably all \pkg{VGAM} family functions have defaults where all \code{crossprod(cmat)}s are diagonal, and therefore do not suffer from this bug. It is more likely to occur if the user inputs constraint matrices using the \code{constraints} argument (and setting \code{zero = NULL} if necessary). Second-generation VGAMs based on \code{\link{sm.ps}} are a modern alternative to using \code{\link{s}}. They do not suffer from this bug. However, G2-VGAMs require a reasonably large sample size in order to work more reliably. } \value{ The default is a single logical (\code{TRUE} if any term is \code{TRUE}), otherwise a vector of such with each element corresponding to a term. If the value is \code{TRUE} then I suggest replacing the VGAM by a similar model fitted by \code{\link{vglm}} and using regression splines, e.g., \code{\link[splines]{bs}}, \code{\link[splines]{ns}}. } \author{ T. W. Yee } \note{ When the bug is fixed this function may be withdrawn, otherwise it will always return \code{FALSE}s! } \seealso{ \code{\link{vgam}}, \code{\link{vglm}}, \code{\link[VGAM]{s}}, \code{\link[VGAM]{sm.ps}}, \code{\link[splines]{bs}}, \code{\link[splines]{ns}}.
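The diagonality condition described above can be checked directly for any candidate constraint matrix; the following small sketch uses the same \code{cmat} as \code{fit2} in the examples below:

cmat <- matrix(c(1, 1, 0, 1), 2, 2)
crossprod(cmat)  # Not diagonal, so s() terms with this cmat are affected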
} \examples{ fit1 <- vgam(cbind(agaaus, kniexc) ~ s(altitude, df = c(3, 4)), binomialff(multiple.responses = TRUE), data = hunua) is.buggy(fit1) # Okay is.buggy(fit1, each.term = TRUE) # No terms are buggy fit2 <- vgam(cbind(agaaus, kniexc) ~ s(altitude, df = c(3, 4)), binomialff(multiple.responses = TRUE), data = hunua, constraints = list("(Intercept)" = diag(2), "s(altitude, df = c(3, 4))" = matrix(c(1, 1, 0, 1), 2, 2))) is.buggy(fit2) # TRUE is.buggy(fit2, each.term = TRUE) constraints(fit2) # fit2b is an approximate alternative to fit2: fit2b <- vglm(cbind(agaaus, kniexc) ~ bs(altitude, df = 3) + bs(altitude, df = 4), binomialff(multiple.responses = TRUE), data = hunua, constraints = list("(Intercept)" = diag(2), "bs(altitude, df = 3)" = rbind(1, 1), "bs(altitude, df = 4)" = rbind(0, 1))) is.buggy(fit2b) # Okay is.buggy(fit2b, each.term = TRUE) constraints(fit2b) } \keyword{models} \keyword{regression} VGAM/man/deplot.lmscreg.Rd0000644000176200001440000000533413135276753015011 0ustar liggesusers\name{deplot.lmscreg} \alias{deplot.lmscreg} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Density Plot for LMS Quantile Regression } \description{ Plots a probability density function associated with an LMS quantile regression. } \usage{ deplot.lmscreg(object, newdata = NULL, x0, y.arg, show.plot = TRUE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A \pkg{VGAM} quantile regression model, i.e., an object produced by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}} with a family function beginning with \code{"lms."}, e.g., \code{\link{lms.yjn}}. } \item{newdata}{ Optional data frame containing secondary variables such as sex. It should have a maximum of one row. The default is to use the original data. } \item{x0}{ Numeric. The value of the primary variable at which to make the `slice'. } \item{y.arg}{ Numerical vector. The values of the response variable at which to evaluate the density. This should be a grid that is fine enough to ensure the plotted curves are smooth. } \item{show.plot}{ Logical. Plot it? If \code{FALSE} no plot will be done. } \item{\dots}{ Graphical parameters that are passed into \code{\link{plotdeplot.lmscreg}}. } } \details{ This function calls, e.g., \code{deplot.lms.yjn} in order to compute the density function. } \value{ The original \code{object} but with a list placed in the slot \code{post}, called \code{@post$deplot}. The list has components \item{newdata }{ The argument \code{newdata} above, or a one-row data frame constructed out of the \code{x0} argument. } \item{y}{ The argument \code{y.arg} above. } \item{density}{ Vector of the density function values evaluated at \code{y.arg}. } } \references{ Yee, T. W. (2004) Quantile regression via vector generalized additive models. \emph{Statistics in Medicine}, \bold{23}, 2295--2315. } \author{ Thomas W. Yee } \note{ \code{\link{plotdeplot.lmscreg}} actually does the plotting. } \seealso{ \code{\link{plotdeplot.lmscreg}}, \code{\link{qtplot.lmscreg}}, \code{\link{lms.bcn}}, \code{\link{lms.bcg}}, \code{\link{lms.yjn}}.
} \examples{\dontrun{ fit <- vgam(BMI ~ s(age, df = c(4, 2)), fam = lms.bcn(zero = 1), data = bmi.nz) ygrid <- seq(15, 43, by = 0.25) deplot(fit, x0 = 20, y = ygrid, xlab = "BMI", col = "green", llwd = 2, main = "BMI distribution at ages 20 (green), 40 (blue), 60 (red)") deplot(fit, x0 = 40, y = ygrid, add = TRUE, col = "blue", llwd = 2) deplot(fit, x0 = 60, y = ygrid, add = TRUE, col = "red", llwd = 2) -> a names(a@post$deplot) a@post$deplot$newdata head(a@post$deplot$y) head(a@post$deplot$density) } } \keyword{graphs} \keyword{models} \keyword{regression} VGAM/man/dagum.Rd0000644000176200001440000000770513135276753013170 0ustar liggesusers\name{dagum} \alias{dagum} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Dagum Distribution Family Function } \description{ Maximum likelihood estimation of the 3-parameter Dagum distribution. } \usage{ dagum(lscale = "loge", lshape1.a = "loge", lshape2.p = "loge", iscale = NULL, ishape1.a = NULL, ishape2.p = NULL, imethod = 1, lss = TRUE, gscale = exp(-5:5), gshape1.a = seq(0.75, 4, by = 0.25), gshape2.p = exp(-5:5), probs.y = c(0.25, 0.5, 0.75), zero = "shape") } %- maybe also 'usage' for other objects documented here. % zero = ifelse(lss, -(2:3), -c(1, 3)) \arguments{ \item{lss}{ See \code{\link{CommonVGAMffArguments}} for important information. } \item{lshape1.a, lscale, lshape2.p}{ Parameter link functions applied to the (positive) parameters \code{a}, \code{scale}, and \code{p}. See \code{\link{Links}} for more choices. } \item{iscale, ishape1.a, ishape2.p, imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for information. For \code{imethod = 2} a good initial value for \code{ishape2.p} is needed to obtain a good estimate for the other parameter. } \item{gscale, gshape1.a, gshape2.p}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{probs.y}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The 3-parameter Dagum distribution is the 4-parameter generalized beta II distribution with shape parameter \eqn{q=1}. It is known under various other names, such as the Burr III, inverse Burr, beta-K, and 3-parameter kappa distribution. It can be considered a generalized log-logistic distribution. Some distributions which are special cases of the 3-parameter Dagum are the inverse Lomax (\eqn{a=1}), Fisk (\eqn{p=1}), and the inverse paralogistic (\eqn{a=p}). More details can be found in Kleiber and Kotz (2003). The Dagum distribution has a cumulative distribution function \deqn{F(y) = [1 + (y/b)^{-a}]^{-p}}{% F(y) = [1 + (y/b)^(-a)]^(-p)} which leads to a probability density function \deqn{f(y) = ap y^{ap-1} / [b^{ap} \{1 + (y/b)^a\}^{p+1}]}{% f(y) = ap y^(ap-1) / [b^(ap) (1 + (y/b)^a)^(p+1)]} for \eqn{a > 0}, \eqn{b > 0}, \eqn{p > 0}, \eqn{y \geq 0}{y >= 0}. Here, \eqn{b} is the scale parameter \code{scale}, and the others are shape parameters. The mean is \deqn{E(Y) = b \, \Gamma(p + 1/a) \, \Gamma(1 - 1/a) / \Gamma(p)}{% E(Y) = b gamma(p + 1/a) gamma(1 - 1/a) / gamma(p)} provided \eqn{-ap < 1 < a}; these are returned as the fitted values. This family function handles multiple responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \note{ See the notes in \code{\link{genbetaII}}. 
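The mean formula in the details above is easily checked by simulation; this is a hedged sketch only, assuming the \code{rdagum} arguments documented at \code{\link{Dagum}}:

set.seed(1)
b <- exp(2); a <- exp(1); p <- exp(1)  # These satisfy -ap < 1 < a
mean(rdagum(1e6, scale = b, shape1.a = a, shape2.p = p))  # Empirical mean
b * gamma(p + 1/a) * gamma(1 - 1/a) / gamma(p)            # Theoretical mean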
According to Kleiber and Kotz (2003), the MLE is rather sensitive to isolated observations located sufficiently far from the majority of the data. Reliable estimation of the scale parameter requires \eqn{n>7000}, while estimates for \eqn{a} and \eqn{p} can be considered unbiased for \eqn{n>2000} or 3000. } \seealso{ \code{\link{Dagum}}, \code{\link{genbetaII}}, \code{\link{betaII}}, \code{\link{sinmad}}, \code{\link{fisk}}, \code{\link{inv.lomax}}, \code{\link{lomax}}, \code{\link{paralogistic}}, \code{\link{inv.paralogistic}}, \code{\link{simulate.vlm}}. } \examples{ ddata <- data.frame(y = rdagum(n = 3000, scale = exp(2), shape1.a = exp(1), shape2.p = exp(1))) fit <- vglm(y ~ 1, dagum(lss = FALSE), data = ddata, trace = TRUE) fit <- vglm(y ~ 1, dagum(lss = FALSE, ishape1.a = exp(1)), data = ddata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/summarypvgam.Rd0000644000176200001440000000507213135276753014616 0ustar liggesusers% 20160804; Adapted from summary.vgam.Rd \name{summarypvgam} \alias{summarypvgam} \alias{show.summary.pvgam} \title{Summarizing Penalized Vector Generalized Additive Model Fits} \usage{ summarypvgam(object, dispersion = NULL, digits = options()$digits - 2, presid = TRUE) \method{show}{summary.pvgam}(x, quote = TRUE, prefix = "", digits = options()$digits - 2, signif.stars = getOption("show.signif.stars")) } \arguments{ \item{object}{an object of class \code{"pvgam"}, which is the result of a call to \code{\link{vgam}} with at least one \code{\link{sm.os}} or \code{\link{sm.ps}} term. } \item{x}{an object of class \code{"summary.pvgam"}, which is the result of a call to \code{summarypvgam()}. } \item{dispersion, digits, presid}{ See \code{\link{summaryvglm}}. } \item{quote, prefix, signif.stars}{ See \code{\link{summaryvglm}}. } } \description{ These functions are all \code{\link{methods}} for class \code{"pvgam"} or \code{summary.pvgam} objects. } \details{ This methods function reports a summary more similar to \code{\link[mgcv]{summary.gam}} from \pkg{mgcv} than \code{summary.gam()} from \pkg{gam}. It applies to G2-VGAMs using \code{\link{sm.os}} and O-splines, or else \code{\link{sm.ps}} and P-splines. In particular, the hypothesis test for whether each \code{\link{sm.os}} or \code{\link{sm.ps}} term can be deleted follows \code{\link[mgcv]{summary.gam}} quite closely. The p-values from this type of test tend to be biased downwards (too small) and correspond to \code{p.type = 5}. It is hoped that improved p-values will be implemented in the near future, somewhat like the default of \code{\link[mgcv]{summary.gam}}. This methods function was adapted from \code{\link[mgcv]{summary.gam}}. } \value{ \code{summarypvgam} returns an object of class \code{"summary.pvgam"}; see \code{\link{summary.pvgam-class}}. } \section{Warning }{ See \code{\link{sm.os}}. } \seealso{ \code{\link{vgam}}, \code{\link{summaryvgam}}, \code{\link{summary.pvgam-class}}, \code{\link{sm.os}}, \code{\link{sm.ps}}, \code{\link[stats]{summary.glm}}, \code{\link[stats]{summary.lm}}, \code{\link[mgcv]{summary.gam}} from \pkg{mgcv}, % A core R package \code{\link{summaryvgam}} for G1-VGAMs. % \code{\link[gam]{summary.gam}}. % May not be installed. } \examples{ hfit2 <- vgam(agaaus ~ sm.os(altitude), binomialff, data = hunua) coef(hfit2, matrix = TRUE) summary(hfit2) } \keyword{models} \keyword{regression} % summary(hfit2)@post$s.table # For sm.ps() terms.
VGAM/man/profilevglm.Rd0000644000176200001440000000447313135276753014410 0ustar liggesusers % file MASS/man/profilevglm.Rd % copyright (C) 1999-2008 W. N. Venables and B. D. Ripley \name{profilevglm} \alias{profilevglm} \title{Method for Profiling vglm Objects} \description{ Investigates the profile log-likelihood function for a fitted model of class \code{"vglm"}. } \usage{ profilevglm(object, which = 1:p.vlm, alpha = 0.01, maxsteps = 10, del = zmax/5, trace = NULL, \dots) } \arguments{ \item{object}{the original fitted model object.} \item{which}{the original model parameters which should be profiled. This can be a numeric or character vector. By default, all parameters are profiled. } \item{alpha}{highest significance level allowed for the profiling. % profile t-statistics. } \item{maxsteps}{maximum number of points to be used for profiling each parameter.} \item{del}{suggested change on the scale of the profile t-statistics. Default value chosen to allow profiling at about 10 parameter values.} \item{trace}{logical: should the progress of profiling be reported? The default is to use the \code{trace} value from the fitted object; see \code{\link{vglm.control}} for details. } \item{\dots}{further arguments passed to or from other methods.} } \value{ A list of classes \code{"profile.glm"} and \code{"profile"} with an element for each parameter being profiled. The elements are data-frames with two variables \item{par.vals}{a matrix of parameter values for each fitted model.} \item{tau}{the profile t-statistics.} } \details{ This function is called by \code{\link{confintvglm}} to do the profiling. See also \code{\link[MASS]{profile.glm}} for details. } \author{ T. W. Yee adapted this function from \code{\link[MASS]{profile.glm}}, written originally by D. M. Bates and W. N. Venables. (For S in 1996.) The help file was also used as a template. } \seealso{ \code{\link{vglm}}, \code{\link{confintvglm}}, \code{\link[stats]{profile}}, \code{\link[MASS]{profile.glm}}, \code{\link[MASS]{plot.profile}}. } \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) fit1 <- vglm(cbind(normal, mild, severe) ~ let, propodds, trace = TRUE, data = pneumo) pfit1 <- profile(fit1, trace = FALSE) confint(fit1, method = "profile", trace = FALSE) } \keyword{regression} \keyword{models} VGAM/man/studentt.Rd0000644000176200001440000001225713135276753013741 0ustar liggesusers\name{studentt} \alias{studentt} \alias{studentt2} \alias{studentt3} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Student t Distribution } \description{ Estimating the parameters of a Student t distribution. } \usage{ studentt (ldf = "loglog", idf = NULL, tol1 = 0.1, imethod = 1) studentt2(df = Inf, llocation = "identitylink", lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = "scale") studentt3(llocation = "identitylink", lscale = "loge", ldf = "loglog", ilocation = NULL, iscale = NULL, idf = NULL, imethod = 1, zero = c("scale", "df")) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llocation, lscale, ldf}{ Parameter link functions for each parameter, e.g., for degrees of freedom \eqn{\nu}{nu}. See \code{\link{Links}} for more choices. The defaults ensure the parameters are in range. A \code{\link{loglog}} link keeps the degrees of freedom greater than unity; see below. } \item{ilocation, iscale, idf}{ Optional initial values. If given, the values must be in range. The default is to compute an initial value internally.
} \item{tol1}{ A positive value, the tolerance for testing whether an initial value is 1. Best to leave this argument alone. } \item{df}{ Numeric, user-specified degrees of freedom. It may be of length equal to the number of columns of a response matrix. } \item{imethod, zero}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ The Student t density function is \deqn{f(y;\nu) = \frac{\Gamma((\nu+1)/2)}{\sqrt{\nu \pi} \Gamma(\nu/2)} \left(1 + \frac{y^2}{\nu} \right)^{-(\nu+1)/2}}{% f(y;nu) = (gamma((nu+1)/2) / (sqrt(nu*pi) gamma(nu/2))) * (1 + y^2 / nu)^{-(nu+1)/2}} for all real \eqn{y}. Then \eqn{E(Y)=0} if \eqn{\nu>1}{nu>1} (returned as the fitted values), and \eqn{Var(Y)= \nu/(\nu-2)}{Var(Y)= nu/(nu-2)} for \eqn{\nu > 2}{nu > 2}. When \eqn{\nu=1}{nu=1} then the Student \eqn{t}-distribution corresponds to the standard Cauchy distribution, \code{\link{cauchy1}}. When \eqn{\nu=2}{nu=2} with a scale parameter of \code{sqrt(2)} then the Student \eqn{t}-distribution corresponds to the standard (Koenker) distribution, \code{\link{sc.studentt2}}. The degrees of freedom can be treated as a parameter to be estimated, and as a real and not an integer. The Student t distribution is used for a variety of reasons in statistics, including robust regression. Let \eqn{Y = (T - \mu) / \sigma}{Y = (T - mu) / sigma} where \eqn{\mu}{mu} and \eqn{\sigma}{sigma} are the location and scale parameters respectively. Then \code{studentt3} estimates the location, scale and degrees of freedom parameters. And \code{studentt2} estimates the location, scale parameters for a user-specified degrees of freedom, \code{df}. And \code{studentt} estimates the degrees of freedom parameter only. The fitted values are the location parameters. By default the linear/additive predictors are \eqn{(\mu, \log(\sigma), \log\log(\nu))^T}{(mu, log(sigma), log log(nu))^T} or subsets thereof. In general convergence can be slow, especially when there are covariates. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Student (1908) The probable error of a mean. \emph{Biometrika}, \bold{6}, 1--25. Zhu, D. and Galbraith, J. W. (2010) A generalized asymmetric Student-\emph{t} distribution with application to financial econometrics. \emph{Journal of Econometrics}, \bold{157}, 297--305. } \author{ T. W. Yee } \note{ \code{studentt3()} and \code{studentt2()} can handle multiple responses. Practical experience has shown reasonably good initial values are required. If convergence failure occurs try using arguments such as \code{idf}. Local solutions are also possible, especially when the degrees of freedom is close to unity or the scale parameter is close to zero. A standard normal distribution corresponds to a \emph{t} distribution with infinite degrees of freedom. Consequently, if the data is close to normal, there may be convergence problems; best to use \code{\link{uninormal}} instead. } \seealso{ \code{\link{uninormal}}, \code{\link{cauchy1}}, \code{\link{logistic}}, \code{\link{huber2}}, \code{\link{sc.studentt2}}, \code{\link[stats]{TDist}}, \code{\link{simulate.vlm}}. 
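The infinite degrees of freedom case mentioned above is easily verified numerically with base R:

max(abs(dt(-3:3, df = Inf) - dnorm(-3:3)))  # 0; infinite df gives the normal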
} \examples{ tdata <- data.frame(x2 = runif(nn <- 1000)) tdata <- transform(tdata, y1 = rt(nn, df = exp(exp(0.5 - x2))), y2 = rt(nn, df = exp(exp(0.5 - x2)))) fit1 <- vglm(y1 ~ x2, studentt, data = tdata, trace = TRUE) coef(fit1, matrix = TRUE) fit2 <- vglm(y1 ~ x2, studentt2(df = exp(exp(0.5))), data = tdata) coef(fit2, matrix = TRUE) # df inputted into studentt2() not quite right fit3 <- vglm(cbind(y1, y2) ~ x2, studentt3, data = tdata, trace = TRUE) coef(fit3, matrix = TRUE) } \keyword{models} \keyword{regression} %Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011) %\emph{Statistical Distributions}, %Hoboken, NJ, USA: John Wiley and Sons, Fourth edition. VGAM/man/zinegbinUC.Rd0000644000176200001440000000654613135276753014132 0ustar liggesusers\name{Zinegbin} \alias{Zinegbin} \alias{dzinegbin} \alias{pzinegbin} \alias{qzinegbin} \alias{rzinegbin} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Inflated Negative Binomial Distribution } \description{ Density, distribution function, quantile function and random generation for the zero-inflated negative binomial distribution with parameter \code{pstr0}. } \usage{ dzinegbin(x, size, prob = NULL, munb = NULL, pstr0 = 0, log = FALSE) pzinegbin(q, size, prob = NULL, munb = NULL, pstr0 = 0) qzinegbin(p, size, prob = NULL, munb = NULL, pstr0 = 0) rzinegbin(n, size, prob = NULL, munb = NULL, pstr0 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{ Same as in \code{\link[stats]{runif}}. } \item{size, prob, munb, log}{ Arguments matching \code{\link[stats:NegBinomial]{dnbinom}}. The argument \code{munb} corresponds to \code{mu} in \code{\link[stats:NegBinomial]{dnbinom}} and has been renamed to emphasize the fact that it is the mean of the negative binomial \emph{component}. } \item{pstr0}{ Probability of structural zero (i.e., ignoring the negative binomial distribution), called \eqn{\phi}{phi}. } } \details{ The probability function of \eqn{Y} is 0 with probability \eqn{\phi}{phi}, and a negative binomial distribution with probability \eqn{1-\phi}{1-phi}. Thus \deqn{P(Y=0) =\phi + (1-\phi) P(W=0)}{% P(Y=0) = phi + (1-phi) * P(W=0)} where \eqn{W} is distributed as a negative binomial distribution (see \code{\link[stats:NegBinomial]{rnbinom}}.) See \code{\link{negbinomial}}, a \pkg{VGAM} family function, for the formula of the probability density function and other details of the negative binomial distribution. } \value{ \code{dzinegbin} gives the density, \code{pzinegbin} gives the distribution function, \code{qzinegbin} gives the quantile function, and \code{rzinegbin} generates random deviates. } %\references{ } \author{ T. W. Yee } \note{ The argument \code{pstr0} is recycled to the required length, and must have values which lie in the interval \eqn{[0,1]}. These functions actually allow for \emph{zero-deflation}. That is, the resulting probability of a zero count is \emph{less than} the nominal value of the parent distribution. See \code{\link{Zipois}} for more information. } \seealso{ \code{\link{zinegbinomial}}, \code{\link[stats:NegBinomial]{rnbinom}}, \code{\link{rzipois}}. 
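The formula for \eqn{P(Y=0)} given in the details above can be verified numerically; a short sketch:

munb <- 3; k <- 10; pstr0 <- 0.2
dzinegbin(0, size = k, munb = munb, pstr0 = pstr0)
pstr0 + (1 - pstr0) * dnbinom(0, size = k, mu = munb)  # Should be identical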
\examples{ munb <- 3; pstr0 <- 0.2; size <- k <- 10; x <- 0:10 (ii <- dzinegbin(x, pstr0 = pstr0, mu = munb, size = k)) max(abs(cumsum(ii) - pzinegbin(x, pstr0 = pstr0, mu = munb, size = k))) # 0 table(rzinegbin(100, pstr0 = pstr0, mu = munb, size = k)) table(qzinegbin(runif(1000), pstr0 = pstr0, mu = munb, size = k)) round(dzinegbin(x, pstr0 = pstr0, mu = munb, size = k) * 1000) # Should be similar \dontrun{barplot(rbind(dzinegbin(x, pstr0 = pstr0, mu = munb, size = k), dnbinom(x, mu = munb, size = k)), las = 1, beside = TRUE, col = c("blue", "green"), ylab = "Probability", main = paste("ZINB(mu = ", munb, ", k = ", k, ", pstr0 = ", pstr0, ") (blue) vs NB(mu = ", munb, ", size = ", k, ") (green)", sep = ""), names.arg = as.character(x)) } } \keyword{distribution} VGAM/man/clo.Rd0000644000176200001440000000264613135276753012643 0ustar liggesusers\name{clo} \alias{clo} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Redirects the User to rrvglm() } \description{ Redirects the user to the function \code{\link{rrvglm}}. } \usage{ clo(...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{\dots}{ Ignored. } } \details{ CLO stands for \emph{constrained linear ordination}, and is fitted with a statistical class of models called \emph{reduced-rank vector generalized linear models} (RR-VGLMs). It allows for generalized reduced-rank regression in that response types such as Poisson counts and presence/absence data can be handled. Currently in the \pkg{VGAM} package, \code{\link{rrvglm}} is used to fit RR-VGLMs. However, the Author's opinion is that linear responses to a latent variable (composite environmental gradient) are not as common as unimodal responses, and therefore \code{\link{cqo}} is often more appropriate. The new CLO/CQO/CAO nomenclature is described in Yee (2006). } \value{ Nothing is returned; an error message is issued. } \references{ Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. } \author{Thomas W. Yee} \seealso{ \code{\link{rrvglm}}, \code{\link{cqo}}. } \examples{ \dontrun{ clo() } } \keyword{models} \keyword{regression} VGAM/man/UtilitiesVGAM.Rd0000644000176200001440000000664713135276753014515 0ustar liggesusers\name{UtilitiesVGAM} \alias{UtilitiesVGAM} \alias{param.names} \alias{dimm} \alias{interleave.VGAM} \title{Utility Functions for the VGAM Package } \description{ A set of common utility functions used by \pkg{VGAM} family functions. } \usage{ param.names(string, S) dimm(M, hbw = M) interleave.VGAM(.M, M1, inverse = FALSE) } \arguments{ \item{string}{ Character. Name of the parameter. } \item{M, .M}{ Numeric. The total number of linear/additive predictors, called \eqn{M}. By total, it is meant summed over the number of responses. Often, \eqn{M} is the total number of parameters to be estimated (but this is not the same as the number of regression coefficients, unless the RHS of the formula is an intercept-only). The use of \code{.M} is unfortunate, but it is a compromise solution to what is presented in Yee (2015). Ideally, \code{.M} should be just \code{M}. } \item{M1}{ Numeric. The number of linear/additive predictors for one response, called \eqn{M_1}. This argument used to be called \code{M}, but is now renamed properly. } \item{inverse}{ Logical. Useful for the inverse function of \code{interleave.VGAM()}. } \item{S}{ Numeric. The number of responses.
} \item{hbw}{ Numeric. The half-bandwidth, which measures the number of bands emanating from the central diagonal band. } } \value{ For \code{param.names()}, this function returns the parameter names for \eqn{S} responses, i.e., \code{string} is returned unchanged if \eqn{S=1}, else \code{paste(string, 1:S, sep = "")}. For \code{dimm()}, this function returns the number of elements to be stored for each of the working weight matrices. They are represented as columns in the matrix \code{wz} in e.g., \code{vglm.fit()}. See the \emph{matrix-band} format described in Section 18.3.5 of Yee (2015). For \code{interleave.VGAM()}, this function returns a reordering of the linear/additive predictors depending on the number of responses. The arguments presented in Table 18.5 may not be valid in your version of Yee (2015). } %\section{Warning }{ % The \code{zero} argument is supplied for convenience but conflicts %} \details{ See Yee (2015) for some details about some of these functions. } \references{ Yee, T. W. (2015) \emph{Vector Generalized Linear and Additive Models: With an Implementation in R}. New York, USA: Springer. } \seealso{ \code{\link{CommonVGAMffArguments}}, \code{\link{VGAM-package}}. } \author{T. W. Yee. Victor Miranda added the \code{inverse} argument to \code{interleave.VGAM()}. } %\note{ % See \code{\link{Links}} regarding a major change in % %} \examples{ param.names("shape", 1) # "shape" param.names("shape", 3) # c("shape1", "shape2", "shape3") dimm(3, hbw = 1) # Diagonal matrix; the 3 elements need storage. dimm(3) # A general 3 x 3 symmetric matrix has 6 unique elements. dimm(3, hbw = 2) # Tridiagonal matrix; the 3-3 element is 0 and unneeded. M1 <- 2; ncoly <- 3; M <- ncoly * M1 mynames1 <- param.names("location", ncoly) mynames2 <- param.names("scale", ncoly) (parameters.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)]) # The following is/was in Yee (2015) and has a poor/deceptive style: (parameters.names <- c(mynames1, mynames2)[interleave.VGAM(M, M = M1)]) parameters.names[interleave.VGAM(M, M1 = M1, inverse = TRUE)] } \keyword{distribution} \keyword{regression} \keyword{programming} \keyword{models} VGAM/man/perks.Rd0000644000176200001440000000732213135276753013210 0ustar liggesusers\name{perks} \alias{perks} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Perks Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter Perks distribution. } \usage{ perks(lscale = "loge", lshape = "loge", iscale = NULL, ishape = NULL, gscale = exp(-5:5), gshape = exp(-5:5), nsimEIM = 500, oim.mean = FALSE, zero = NULL, nowarning = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{nowarning}{ Logical. Suppress a warning? Ignored for \pkg{VGAM} 0.9-7 and higher. } \item{lscale, lshape}{ Parameter link functions applied to the shape parameter \code{shape} and scale parameter \code{scale}. All parameters are treated as positive here. See \code{\link{Links}} for more choices. } % \item{eshape, escale}{ % List. Extra argument for each of the links. % See \code{earg} in \code{\link{Links}} for general information. % } \item{iscale, ishape}{ Optional initial values. A \code{NULL} means a value is computed internally. } \item{gscale, gshape}{ See \code{\link{CommonVGAMffArguments}}. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}}. } \item{oim.mean}{ Currently ignored.
} } \details{ The Perks distribution has cumulative distribution function \deqn{F(y; \alpha, \beta) = 1 - \left\{ \frac{1 + \alpha}{1 + \alpha e^{\beta y}} \right\}^{1 / \beta} }{% F(y; alpha, beta) = 1 - ((1 + alpha)/(1 + alpha * e^(beta * y)))^(1 / beta) } which leads to a probability density function \deqn{f(y; \alpha, \beta) = \left[ 1 + \alpha \right]^{1 / \beta} \alpha e^{\beta y} / (1 + \alpha e^{\beta y})^{1 + 1 / \beta} }{% f(y; alpha, beta) = [1 + alpha]^(1 / beta) * alpha * exp(beta * y) / (1 + alpha * exp(beta * y))^(1 + 1 / beta) } for \eqn{\alpha > 0}{alpha > 0}, \eqn{\beta > 0}{beta > 0}, \eqn{y > 0}. Here, \eqn{\beta}{beta} is called the scale parameter \code{scale}, and \eqn{\alpha}{alpha} is called a shape parameter. The moments for this distribution do not appear to be available in closed form. Simulated Fisher scoring is used and multiple responses are handled. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Perks, W. (1932) On some experiments in the graduation of mortality statistics. \emph{Journal of the Institute of Actuaries}, \bold{63}, 12--40. Richards, S. J. (2012) A handbook of parametric survival models for actuarial use. \emph{Scandinavian Actuarial Journal}, 1--25. } \author{ T. W. Yee } \section{Warning }{ A lot of care is needed because this is a rather difficult distribution for parameter estimation. If the self-starting initial values fail then try experimenting with the initial value arguments, especially \code{iscale}. Successful convergence depends on having very good initial values. Also, monitor convergence by setting \code{trace = TRUE}. } \seealso{ \code{\link{dperks}}, \code{\link{simulate.vlm}}. } \examples{ \dontrun{ set.seed(123) pdata <- data.frame(x2 = runif(nn <- 1000)) # x2 unused pdata <- transform(pdata, eta1 = -1, ceta1 = 1) pdata <- transform(pdata, shape1 = exp(eta1), scale1 = exp(ceta1)) pdata <- transform(pdata, y1 = rperks(nn, shape = shape1, scale = scale1)) fit1 <- vglm(y1 ~ 1, perks, data = pdata, trace = TRUE) coef(fit1, matrix = TRUE) summary(fit1) } } \keyword{models} \keyword{regression} %# fit1 <- vglm(y1 ~ 1, perks, data = pdata, trace = TRUE) %# fit2 <- vglm(y1 ~ 1, perks(imeth = 2), data = pdata, trace = TRUE) % Argument \code{probs.y} is used only when \code{imethod = 2}. VGAM/man/hzeta.Rd0000644000176200001440000000512113135276753013174 0ustar liggesusers\name{hzeta} \alias{hzeta} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Haight's Zeta Family Function } \description{ Estimating the parameter of Haight's zeta distribution. } \usage{ hzeta(lshape = "loglog", ishape = NULL, nsimEIM = 100) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape}{ Parameter link function for the parameter, called \eqn{\alpha}{alpha} below. See \code{\link{Links}} for more choices. Here, a log-log link keeps the parameter greater than one, meaning the mean is finite. } \item{ishape,nsimEIM}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The probability function is \deqn{f(y) = (2y-1)^{(-\alpha)} - (2y+1)^{(-\alpha)},}{% f(y) = (2y-1)^(-alpha) - (2y+1)^(-alpha),} where the parameter \eqn{\alpha>0}{alpha>0} and \eqn{y=1,2,\ldots}{y=1,2,...}. The function \code{\link{dhzeta}} computes this probability function.
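For instance, the following sketch confirms numerically that the probabilities telescope to sum to unity, and that they agree with \code{dhzeta}:

alpha <- 1.5; y <- 1:100000
probs <- (2*y - 1)^(-alpha) - (2*y + 1)^(-alpha)
sum(probs)  # Should be very close to 1 (a telescoping sum)
max(abs(probs[1:10] - dhzeta(1:10, shape = alpha)))  # Should be 0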
The mean of \eqn{Y}, which is returned as fitted values, is \eqn{(1-2^{-\alpha}) \zeta(\alpha)}{(1-2^(-alpha))*zeta(alpha)} provided \eqn{\alpha > 1}{alpha > 1}, where \eqn{\zeta}{zeta} is Riemann's zeta function. The mean is a decreasing function of \eqn{\alpha}{alpha}. The mean is infinite if \eqn{\alpha \leq 1}{alpha <= 1}, and the variance is infinite if \eqn{\alpha \leq 2}{alpha <= 2}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Pages 533--4 of Johnson N. L., Kemp, A. W. and Kotz S. (2005) \emph{Univariate Discrete Distributions}, 3rd edition, Hoboken, New Jersey: Wiley. } \author{ T. W. Yee } %\note{ %} \seealso{ \code{\link{Hzeta}}, \code{\link{zeta}}, \code{\link{zetaff}}, \code{\link{loglog}}, \code{\link{simulate.vlm}}. } \examples{ shape <- exp(exp(-0.1)) # The parameter hdata <- data.frame(y = rhzeta(n = 1000, shape)) fit <- vglm(y ~ 1, hzeta, data = hdata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) # Useful for intercept-only models; should be same as shape c(with(hdata, mean(y)), head(fitted(fit), 1)) summary(fit) } \keyword{models} \keyword{regression} %# Generate some hzeta random variates %set.seed(123) %nn <- 400 %x <- 1:20 %shape <- 1.1 # The parameter %probs <- dhzeta(x, shape) %\dontrun{ %plot(x, probs, type="h", log="y")} %cs <- cumsum(probs) %tab <- table(cut(runif(nn), brea = c(0,cs,1))) %index <- (1:length(tab))[tab>0] %y <- rep(index, times=tab[index]) VGAM/man/lqnorm.Rd0000644000176200001440000001000213135276753013363 0ustar liggesusers\name{lqnorm} %\alias{lqnorm} \alias{lqnorm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Minimizing the L-q norm Family Function } \description{ Minimizes the L-q norm of residuals in a linear model. } \usage{ lqnorm(qpower = 2, link = "identitylink", imethod = 1, imu = NULL, ishrinkage = 0.95) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{qpower}{ A single numeric, must be greater than one, called \eqn{q} below. The absolute value of residuals are raised to the power of this argument, and then summed. This quantity is minimized with respect to the regression coefficients. } \item{link}{ Link function applied to the `mean' \eqn{\mu}{mu}. See \code{\link{Links}} for more details. } \item{imethod}{ Must be 1, 2 or 3. See \code{\link{CommonVGAMffArguments}} for more information. Ignored if \code{imu} is specified. } \item{imu}{ Numeric, optional initial values used for the fitted values. The default is to use \code{imethod = 1}. } \item{ishrinkage}{ How much shrinkage is used when initializing the fitted values. The value must be between 0 and 1 inclusive, and a value of 0 means the individual response values are used, and a value of 1 means the median or mean is used. This argument is used in conjunction with \code{imethod = 3}. } } \details{ This function minimizes the objective function \deqn{ \sum_{i=1}^n \; w_i (|y_i - \mu_i|)^q }{% sum_{i=1}^n w_i (|y_i - mu_i|)^q } where \eqn{q} is the argument \code{qpower}, \eqn{\eta_i = g(\mu_i)}{eta_i = g(mu_i)} where \eqn{g} is the link function, and \eqn{\eta_i}{eta_i} is the vector of linear/additive predictors. The prior weights \eqn{w_i} can be inputted using the \code{weights} argument of \code{vlm}/\code{\link{vglm}}/\code{\link{vgam}} etc.; it should be just a vector here since this function handles only a single vector or one-column response. 
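To make the criterion concrete, a minimal sketch of the objective function being minimized is below (the function name is illustrative only; it is not part of \pkg{VGAM}):

lqnorm.objective <- function(y, mu, w = 1, qpower = 2)
  sum(w * abs(y - mu)^qpower)  # Reduces to weighted least squares when qpower = 2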
Numerical problems will occur when \eqn{q} is too close to one. Reasonable values probably range from about 1.5 upwards. The value \eqn{q=2} corresponds to ordinary least squares while \eqn{q=1} corresponds to the MLE of a double exponential (Laplace) distribution. The procedure becomes more sensitive to outliers the larger the value of \eqn{q}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. } \author{ Thomas W. Yee } \note{ This \pkg{VGAM} family function is an initial attempt to provide a more robust alternative for regression and/or offer a little more flexibility than least squares. The \code{@misc} slot of the fitted object contains a list component called \code{objectiveFunction} which is the value of the objective function at the final iteration. } \section{Warning }{ Convergence failure is common, therefore the user is advised to be cautious and monitor convergence! } \seealso{ \code{\link{gaussianff}}. } \examples{ set.seed(123) ldata <- data.frame(x = sort(runif(nn <- 10 ))) realfun <- function(x) 4 + 5*x ldata <- transform(ldata, y = realfun(x) + rnorm(nn, sd = exp(-1))) # Make the first observation an outlier ldata <- transform(ldata, y = c(4*y[1], y[-1]), x = c(-1, x[-1])) fit <- vglm(y ~ x, lqnorm(qpower = 1.2), data = ldata) coef(fit, matrix = TRUE) head(fitted(fit)) fit@misc$qpower fit@misc$objectiveFunction \dontrun{ # Graphical check with(ldata, plot(x, y, main = paste("LS = red, lqnorm = blue (qpower = ", fit@misc$qpower, "), truth = black", sep = ""), col = "blue")) lmfit <- lm(y ~ x, data = ldata) with(ldata, lines(x, fitted(fit), col = "blue")) with(ldata, lines(x, lmfit$fitted, col = "red")) with(ldata, lines(x, realfun(x), col = "black")) } } \keyword{models} \keyword{regression} VGAM/man/bilogisUC.Rd0000644000176200001440000000463513135276753013750 0ustar liggesusers\name{bilogis} \alias{bilogis} \alias{dbilogis} \alias{pbilogis} \alias{rbilogis} \title{Bivariate Logistic Distribution} \description{ Density, distribution function, and random generation for the 4-parameter bivariate logistic distribution. } \usage{ dbilogis(x1, x2, loc1 = 0, scale1 = 1, loc2 = 0, scale2 = 1, log = FALSE) pbilogis(q1, q2, loc1 = 0, scale1 = 1, loc2 = 0, scale2 = 1) rbilogis(n, loc1 = 0, scale1 = 1, loc2 = 0, scale2 = 1) } \arguments{ \item{x1, x2, q1, q2}{vector of quantiles.} \item{n}{number of observations. Same as \code{\link[stats]{rlogis}}. } \item{loc1, loc2}{the location parameters \eqn{l_1}{l1} and \eqn{l_2}{l2}.} \item{scale1, scale2}{the scale parameters \eqn{s_1}{s1} and \eqn{s_2}{s2}.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } } \value{ \code{dbilogis} gives the density, \code{pbilogis} gives the distribution function, and \code{rbilogis} generates random deviates (a two-column matrix). } \references{ Gumbel, E. J. (1961) Bivariate logistic distributions. \emph{Journal of the American Statistical Association}, \bold{56}, 335--349. } \author{ T. W. Yee } \details{ See \code{\link{bilogis}}, the \pkg{VGAM} family function for estimating the four parameters by maximum likelihood estimation, for the formula of the cumulative distribution function and other details.
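Each marginal of this distribution is logistic (see the note below), which allows a quick simulation check; a hedged sketch:

set.seed(1)
ymat <- rbilogis(n = 10000, loc1 = 5, scale1 = 2)
qq <- seq(-5, 15, by = 0.5)
max(abs(ecdf(ymat[, 1])(qq) - plogis(qq, location = 5, scale = 2)))  # Small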
} \note{ Gumbel (1961) proposed two bivariate logistic distributions with logistic distribution marginals, which he called Type I and Type II. The Type I is this one. The Type II belongs to the Morgenstern type. The \code{\link{biamhcop}} distribution has this distribution as a special case, arising when the two random variables are independent. % This note added 20140920 } \seealso{ \code{\link{bilogistic}}, \code{\link{biamhcop}}. } \examples{ \dontrun{ par(mfrow = c(1, 3)) ymat <- rbilogis(n = 2000, loc1 = 5, loc2 = 7, scale2 = exp(1)) myxlim <- c(-2, 15); myylim <- c(-10, 30) plot(ymat, xlim = myxlim, ylim = myylim) N <- 100 x1 <- seq(myxlim[1], myxlim[2], len = N) x2 <- seq(myylim[1], myylim[2], len = N) ox <- expand.grid(x1, x2) z <- dbilogis(ox[,1], ox[,2], loc1 = 5, loc2 = 7, scale2 = exp(1)) contour(x1, x2, matrix(z, N, N), main = "density") z <- pbilogis(ox[,1], ox[,2], loc1 = 5, loc2 = 7, scale2 = exp(1)) contour(x1, x2, matrix(z, N, N), main = "cdf") } } \keyword{distribution} VGAM/man/flourbeetle.Rd0000644000176200001440000000271513135276753014377 0ustar liggesusers\name{flourbeetle} \alias{flourbeetle} \docType{data} \title{Mortality of Flour Beetles from Carbon Disulphide} \description{ The \code{flourbeetle} data frame has 8 rows and 4 columns. Two columns are explanatory, the other two are responses. } \usage{data(flourbeetle)} \format{ This data frame contains the following columns: \describe{ \item{logdose}{\code{\link[base]{log10}} applied to \code{CS2mgL}. } \item{CS2mgL}{a numeric vector, the concentration of gaseous carbon disulphide in mg per litre. } \item{exposed}{a numeric vector, counts; the number of beetles exposed to the poison. } \item{killed}{a numeric vector, counts; the numbers killed. } } } \details{ These data were originally given in Table IV of Bliss (1935) and are the combination of two series of toxicological experiments involving \emph{Tribolium confusum}, also known as the flour beetle. Groups of such adult beetles were exposed for 5 hours to gaseous carbon disulphide at different concentrations, and their mortality measured. } \source{ Bliss, C. I. (1935) The calculation of the dosage-mortality curve. \emph{Annals of Applied Biology}, \bold{22}, 134--167. } \seealso{ \code{\link{binomialff}}, \code{\link{probit}}. } %\references{ % % % % % %} \examples{ fit1 <- vglm(cbind(killed, exposed - killed) ~ logdose, binomialff(link = probit), data = flourbeetle, trace = TRUE) summary(fit1) } \keyword{datasets} VGAM/man/rlplot.gevff.Rd0000644000176200001440000001202313135276753014470 0ustar liggesusers\name{rlplot.gevff} \alias{rlplot.gevff} \alias{rlplot.gev} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Return Level Plot for GEV Fits } \description{ A return level plot is constructed for a GEV-type model. } \usage{ rlplot.gevff(object, show.plot = TRUE, probability = c((1:9)/100, (1:9)/10, 0.95, 0.99, 0.995, 0.999), add.arg = FALSE, xlab = if(log.arg) "Return Period (log-scale)" else "Return Period", ylab = "Return Level", main = "Return Level Plot", pch = par()$pch, pcol.arg = par()$col, pcex = par()$cex, llty.arg = par()$lty, lcol.arg = par()$col, llwd.arg = par()$lwd, slty.arg = par()$lty, scol.arg = par()$col, slwd.arg = par()$lwd, ylim = NULL, log.arg = TRUE, CI = TRUE, epsilon = 1e-05, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A \pkg{VGAM} extremes model of the GEV-type, produced by \code{\link{vglm}} with a family function either \code{"gev"} or \code{"gevff"}.
} \item{show.plot}{ Logical. Plot it? If \code{FALSE} no plot will be done. } \item{probability}{ Numeric vector of probabilities used. } \item{add.arg}{ Logical. Add the plot to an existing plot? } \item{xlab}{ Caption for the x-axis. See \code{\link[graphics]{par}}. } \item{ylab}{ Caption for the y-axis. See \code{\link[graphics]{par}}. } \item{main}{ Title of the plot. See \code{\link[graphics]{title}}. } \item{pch}{ Plotting character. See \code{\link[graphics]{par}}. } \item{pcol.arg}{ Color of the points. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{pcex}{ Character expansion of the points. See the \code{cex} argument of \code{\link[graphics]{par}}. } \item{llty.arg}{ Line type. See the \code{lty} argument of \code{\link[graphics]{par}}. } \item{lcol.arg}{ Color of the lines. See the \code{col} argument of \code{\link[graphics]{par}}. } \item{llwd.arg}{ Line width. See the \code{lwd} argument of \code{\link[graphics]{par}}. } \item{slty.arg, scol.arg, slwd.arg}{ Corresponding arguments for the lines used for the confidence intervals. Used only if \code{CI = TRUE}. } \item{ylim}{ Limits for the y-axis. Numeric of length 2. } \item{log.arg}{ Logical. If \code{TRUE} then \code{log=""} otherwise \code{log="x"}. This changes the labelling of the x-axis only. } \item{CI}{ Logical. Add in a 95 percent confidence interval? } \item{epsilon}{ Numeric, close to zero. Used for the finite-difference approximation to the first derivatives with respect to each parameter. If too small, numerical problems will occur. } \item{\dots}{ Arguments passed into the \code{plot} function when setting up the entire plot. Useful arguments here include \code{sub} and \code{las}. } } \details{ A return level plot plots \eqn{z_p}{zp} versus \eqn{\log(y_p)}{log(yp)}. It is linear if the shape parameter \eqn{\xi=0}{xi=0}. If \eqn{\xi<0}{xi<0} then the plot is convex with asymptotic limit as \eqn{p} approaches zero at \eqn{\mu-\sigma / \xi}{mu-sigma/xi}. And if \eqn{\xi>0}{xi>0} then the plot is concave and has no finite bound. Here, \eqn{G(z_p) = 1-p}{G(zp) = 1-p} where \eqn{0 < p < 1}{0 < p < 1} (so that \eqn{p} is the probability of exceeding the return level \eqn{z_p}{zp}) and \eqn{G} is the cumulative distribution function of the fitted GEV-type distribution. } \examples{ # Any "gev" or "gevff" fit can be used here, e.g.: gdata <- data.frame(y = rgev(n = 100, scale = 2, shape = -0.1)) fit2 <- vglm(y ~ 1, gevff, data = gdata, trace = TRUE) \dontrun{ rlplot(fit2, las = 1) -> i1 rlplot(fit2, pcol = "darkorange", lcol = "blue", log.arg = FALSE, scol = "darkgreen", slty = "dashed", las = 1) -> i2 range(i2@post$rlplot$upper - i1@post$rlplot$upper) # Should be near 0 range(i2@post$rlplot$lower - i1@post$rlplot$lower) # Should be near 0 } } \keyword{graphs} \keyword{models} \keyword{regression} VGAM/man/inv.paralogistic.Rd0000644000176200001440000000623413135276753015341 0ustar liggesusers\name{inv.paralogistic} \alias{inv.paralogistic} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Inverse Paralogistic Distribution Family Function } \description{ Maximum likelihood estimation of the 2-parameter inverse paralogistic distribution. } \usage{ inv.paralogistic(lscale = "loge", lshape1.a = "loge", iscale = NULL, ishape1.a = NULL, imethod = 1, lss = TRUE, gscale = exp(-5:5), gshape1.a = seq(0.75, 4, by = 0.25), probs.y = c(0.25, 0.5, 0.75), zero = "shape") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lss}{ See \code{\link{CommonVGAMffArguments}} for important information. } \item{lshape1.a, lscale}{ Parameter link functions applied to the (positive) parameters \code{a} and \code{scale}. See \code{\link{Links}} for more choices. } \item{iscale, ishape1.a, imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for information.
For \code{imethod = 2} a good initial value for \code{ishape1.a} is needed to obtain a good estimate for the other parameter. } \item{gscale, gshape1.a}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{probs.y}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The 2-parameter inverse paralogistic distribution is the 4-parameter generalized beta II distribution with shape parameter \eqn{q=1} and \eqn{a=p}. It is the 3-parameter Dagum distribution with \eqn{a=p}. More details can be found in Kleiber and Kotz (2003). The inverse paralogistic distribution has density \deqn{f(y) = a^2 y^{a^2-1} / [b^{a^2} \{1 + (y/b)^a\}^{a+1}]}{% f(y) = a^2 y^(a^2-1) / [b^(a^2) (1 + (y/b)^a)^(a+1)]} for \eqn{a > 0}, \eqn{b > 0}, \eqn{y \geq 0}{y >= 0}. Here, \eqn{b} is the scale parameter \code{scale}, and \eqn{a} is the shape parameter. The mean is \deqn{E(Y) = b \, \Gamma(a + 1/a) \, \Gamma(1 - 1/a) / \Gamma(a)}{% E(Y) = b gamma(a + 1/a) gamma(1 - 1/a) / gamma(a)} provided \eqn{a > 1}; these are returned as the fitted values. This family function handles multiple responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \note{ See the notes in \code{\link{genbetaII}}. } \seealso{ \code{\link{Inv.paralogistic}}, \code{\link{genbetaII}}, \code{\link{betaII}}, \code{\link{dagum}}, \code{\link{sinmad}}, \code{\link{fisk}}, \code{\link{inv.lomax}}, \code{\link{lomax}}, \code{\link{paralogistic}}, \code{\link{simulate.vlm}}. } \examples{ idata <- data.frame(y = rinv.paralogistic(n = 3000, exp(1), scale = exp(2))) fit <- vglm(y ~ 1, inv.paralogistic(lss = FALSE), data = idata, trace = TRUE) fit <- vglm(y ~ 1, inv.paralogistic(imethod = 2, ishape1.a = 4), data = idata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/AA.Aa.aa.Rd0000644000176200001440000000512313135276753013244 0ustar liggesusers\name{AA.Aa.aa} \alias{AA.Aa.aa} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The AA-Aa-aa Blood Group System } \description{ Estimates the parameter of the AA-Aa-aa blood group system, with or without Hardy-Weinberg equilibrium. } \usage{ AA.Aa.aa(linkp = "logit", linkf = "logit", inbreeding = FALSE, ipA = NULL, ifp = NULL, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{linkp, linkf}{ Link functions applied to \code{pA} and \code{f}. See \code{\link{Links}} for more choices. } \item{ipA, ifp}{ Optional initial values for \code{pA} and \code{f}. } \item{inbreeding}{ Logical. Is there inbreeding? %HWE assumption to be made? } \item{zero}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ This one- or two-parameter model involves a probability called \code{pA}. The probability of getting a count in the first column of the input (an AA) is \code{pA*pA}. When \code{inbreeding = TRUE}, an additional parameter \code{f} is used. If \code{inbreeding = FALSE} then \eqn{f = 0} and Hardy-Weinberg Equilibrium (HWE) is assumed. The EIM is used if \code{inbreeding = FALSE}. % With Hardy Weinberg equilibrium (HWE), % Without the HWE assumption, an additional parameter \code{f} is used.
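Under HWE the three genotype probabilities are \code{pA^2}, \code{2*pA*(1-pA)} and \code{(1-pA)^2}; a minimal numerical sketch (with an assumed allele frequency) is: \preformatted{pA <- 0.4
c(AA = pA^2, Aa = 2 * pA * (1 - pA), aa = (1 - pA)^2)  # Sums to 1
}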
} \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Weir, B. S. (1996) \emph{Genetic Data Analysis II: Methods for Discrete Population Genetic Data}, Sunderland, MA: Sinauer Associates, Inc. } \author{ T. W. Yee } \note{ The input can be a 3-column matrix of counts, where the columns are AA, Aa and aa (in order). Alternatively, the input can be a 3-column matrix of proportions (so each row adds to 1) and the \code{weights} argument is used to specify the total number of counts for each row. } \section{Warning }{ Setting \code{inbreeding = FALSE} makes estimation difficult with non-intercept-only models. Currently, this code seems to work with intercept-only models. } \seealso{ \code{\link{AB.Ab.aB.ab}}, \code{\link{ABO}}, \code{\link{A1A2A3}}, \code{\link{MNSs}}. % \code{\link{AB.Ab.aB.ab2}}, } \examples{ y <- cbind(53, 95, 38) fit1 <- vglm(y ~ 1, AA.Aa.aa, trace = TRUE) fit2 <- vglm(y ~ 1, AA.Aa.aa(inbreeding = TRUE), trace = TRUE) rbind(y, sum(y) * fitted(fit1)) Coef(fit1) # Estimated pA Coef(fit2) # Estimated pA and f summary(fit1) } \keyword{models} \keyword{regression} VGAM/man/triangle.Rd0000644000176200001440000000754313135276753013675 0ustar liggesusers\name{triangle} \alias{triangle} %- Also NEED an '\alias' for EACH other topic documented here. \title{Triangle Distribution Family Function } \description{ Estimating the parameter of the triangle distribution by maximum likelihood. } \usage{ triangle(lower = 0, upper = 1, link = extlogit(min = 0, max = 1), itheta = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lower, upper}{lower and upper limits of the distribution. Must be finite. Called \eqn{A} and \eqn{B} respectively below. } \item{link}{ Parameter link function applied to the parameter \eqn{\theta}{theta}, which lies in \eqn{(A,B)}. See \code{\link{Links}} for more choices. The default constrains the estimate to lie in the interval. } \item{itheta}{ Optional initial value for the parameter. The default is to compute the value internally. } } \details{ The triangle distribution has a probability density function that consists of two lines joined at \eqn{\theta}{theta}, which is the location of the mode. The lines intersect the \eqn{y = 0} axis at \eqn{A} and \eqn{B}. Here, Fisher scoring is used. On fitting, the \code{extra} slot has components called \code{lower} and \code{upper} which contain the values of the above arguments (recycled to the right length). The fitted values are the mean of the distribution, which is \eqn{(A + B + \theta)/3}{(A + B + theta)/3}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Kotz, S. and van Dorp, J. R. (2004) \emph{Beyond Beta: Other Continuous Families of Distributions with Bounded Support and Applications}. Chapter 1. Singapore: World Scientific. Nguyen, H. D. and McLachlan, G. J. (2016) Maximum likelihood estimation of triangular and polygon distributions. \emph{Computational Statistics and Data Analysis}, \bold{102}, 23--36. } \author{ T. W. Yee } \section{Warning}{ The MLE regularity conditions do not hold for this distribution (e.g., the first derivative evaluated at the mode does not exist because it is not continuous), so misleading inferences may result, e.g., in the \code{summary} and \code{vcov} of the object.
Additionally, convergence to the MLE often appears to fail. } \note{ The response must contain values in \eqn{(A, B)}. For most data sets (especially small ones) it is very common for half-stepping to occur. % 20130603 Arguments \code{lower}, \code{upper} and \code{link} must match. For example, setting \code{lower = 0.2}, \code{upper = 4} and \code{link = extlogit(min = 0.2, max = 4.1)} will result in an error. Ideally, \code{link = extlogit(min = lower, max = upper)} ought to work but it does not (yet)! Minimal error checking is done for this deficiency. } \seealso{ \code{\link{Triangle}}, \code{\link{Topple}}, \code{\link{simulate.vlm}}. } \examples{ # Example 1 tdata <- data.frame(y = rtriangle(n <- 3000, theta = 3/4)) fit <- vglm(y ~ 1, triangle(link = "identitylink"), data = tdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) head(fit@extra$lower) head(fitted(fit)) with(tdata, mean(y)) # Example 2; Kotz and van Dorp (2004), p.14 rdata <- data.frame(y = c(0.1, 0.25, 0.3, 0.4, 0.45, 0.6, 0.75, 0.8)) fit <- vglm(y ~ 1, triangle(link = "identitylink"), data = rdata, trace = TRUE, crit = "coef", maxit = 1000) Coef(fit) # The MLE is the 3rd order statistic, which is 0.3. fit <- vglm(y ~ 1, triangle(link = "identitylink"), data = rdata, trace = TRUE, crit = "coef", maxit = 1001) Coef(fit) # The MLE is the 3rd order statistic, which is 0.3. } \keyword{models} \keyword{regression} % 20130603: yettodo: fix up so ideally % link = extlogit(min = lower, max = upper), itheta = NULL) % works. VGAM/man/Opt.Rd0000644000176200001440000000472213135276753012631 0ustar liggesusers\name{Opt} \alias{Opt} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Optimums } \description{ Generic function for the \emph{optimums} (or optima) of a model. } \usage{ Opt(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ An object for which the computation or extraction of an optimum (or optimums) is meaningful. } \item{\dots}{ Other arguments fed into the specific methods function of the model. Sometimes they are fed into the methods function for \code{\link{Coef}}. } } \details{ Different models can define an optimum in different ways. Many models have no such notion or definition. Optimums occur in quadratic and additive ordination, e.g., CQO or CAO. For these models the optimum is the value of the latent variable where the maximum occurs, i.e., where the fitted value achieves its highest value. For quadratic ordination models there is a formula for the optimum but for additive ordination models the optimum must be searched for numerically. If it occurs on the boundary, then the optimum is undefined. At an optimum, the fitted value of the response is called the \emph{maximum}. } \value{ The value returned depends specifically on the methods function invoked. } \references{ Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. } \author{ Thomas W. Yee } \note{ In ordination, the optimum of a species is sometimes called the \emph{species score}. } %\section{Warning }{ %} \seealso{ \code{Opt.qrrvglm}, \code{\link{Max}}, \code{\link{Tol}}.
} \examples{ set.seed(111) # This leads to the global solution hspider[,1:6] <- scale(hspider[,1:6]) # Standardized environmental vars # vvv p1 = cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, # vvv Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, # vvv Trocterr, Zoraspin) ~ # vvv WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, # vvv Bestof = 2, # vvv fam = quasipoissonff, data = hspider, Crow1positive=FALSE) # vvv Opt(p1) \dontrun{ index <- 1:ncol(depvar(p1)) persp(p1, col = index, las = 1, lwd = 2, main = "Vertical lines at the optimums") abline(v = Opt(p1), lty = 2, col = index) } } \keyword{models} \keyword{regression} VGAM/man/seq2binomial.Rd0000644000176200001440000000755613135276753014454 0ustar liggesusers\name{seq2binomial} \alias{seq2binomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Two-stage Sequential Binomial Distribution Family Function } \description{ Estimation of the probabilities of a two-stage binomial distribution. } \usage{ seq2binomial(lprob1 = "logit", lprob2 = "logit", iprob1 = NULL, iprob2 = NULL, parallel = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. % apply.parint = TRUE, \arguments{ \item{lprob1, lprob2}{ Parameter link functions applied to the two probabilities, called \eqn{p} and \eqn{q} below. See \code{\link{Links}} for more choices. } \item{iprob1, iprob2}{ Optional initial value for the first and second probabilities respectively. A \code{NULL} means a value is obtained in the \code{initialize} slot. } \item{parallel, zero}{ Details at \code{\link{CommonVGAMffArguments}}. If \code{parallel = TRUE} then the constraint also applies to the intercept. } } \details{ This \pkg{VGAM} family function fits the model described by Crowder and Sweeting (1989) which is described as follows. Each of \eqn{m} spores has a probability \eqn{p} of germinating. Of the \eqn{y_1}{y1} spores that germinate, each has a probability \eqn{q} of bending in a particular direction. Let \eqn{y_2}{y2} be the number that bend in the specified direction. The probability model for these data is \eqn{P(y_1,y_2) =}{P(y1,y2) =} \deqn{ {m \choose y_1} p^{y_1} (1-p)^{m-y_1} {y_1 \choose y_2} q^{y_2} (1-q)^{y_1-y_2}}{% {choose(m,y1)} p^{y1} (1-p)^{m-y1} {choose(y1,y2)} q^{y2} (1-q)^{y1-y2}} for \eqn{0 < p < 1}, \eqn{0 < q < 1}, \eqn{y_1=1,\ldots,m}{y1=1,\ldots,m} and \eqn{y_2=1,\ldots,y_1}{y2=1,\ldots,y1}. Here, \eqn{p} is \code{prob1}, \eqn{q} is \code{prob2}. Although the authors refer to this as the \emph{bivariate binomial} model, I have named it the \emph{(two-stage) sequential binomial} model. Fisher scoring is used. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Crowder, M. and Sweeting, T. (1989). Bayesian inference for a bivariate binomial distribution. \emph{Biometrika}, \bold{76}, 599--603. } \author{ Thomas W. Yee } \note{ The response must be a two-column matrix of sample proportions corresponding to \eqn{y_1}{y1} and \eqn{y_2}{y2}. The \eqn{m} values should be inputted with the \code{weights} argument of \code{\link{vglm}} and \code{\link{vgam}}. The fitted value is a two-column matrix of estimated probabilities \eqn{p} and \eqn{q}. A common form of error is when there are no trials for \eqn{y_1}{y1}, e.g., if \code{mvector} below has some values which are zero. } \seealso{ \code{\link{binomialff}}, \code{\link{cfibrosis}}.
} \examples{ sdata <- data.frame(mvector = round(rnorm(nn <- 100, m = 10, sd = 2)), x2 = runif(nn)) sdata <- transform(sdata, prob1 = logit(+2 - x2, inverse = TRUE), prob2 = logit(-2 + x2, inverse = TRUE)) sdata <- transform(sdata, successes1 = rbinom(nn, size = mvector, prob = prob1)) sdata <- transform(sdata, successes2 = rbinom(nn, size = successes1, prob = prob2)) sdata <- transform(sdata, y1 = successes1 / mvector) sdata <- transform(sdata, y2 = successes2 / successes1) fit <- vglm(cbind(y1, y2) ~ x2, seq2binomial, weight = mvector, data = sdata, trace = TRUE) coef(fit) coef(fit, matrix = TRUE) head(fitted(fit)) head(depvar(fit)) head(weights(fit, type = "prior")) # Same as with(sdata, mvector) # Number of first successes: head(depvar(fit)[, 1] * c(weights(fit, type = "prior"))) # Number of second successes: head(depvar(fit)[, 2] * c(weights(fit, type = "prior")) * depvar(fit)[, 1]) } \keyword{models} \keyword{regression} VGAM/man/ruge.Rd0000644000176200001440000000213513135276753013025 0ustar liggesusers\name{ruge} \alias{ruge} \docType{data} \title{Rutherford-Geiger Polonium Data} \description{ Decay counts of polonium recorded by Rutherford and Geiger (1910). } \usage{data(ruge)} \format{ This data frame contains the following columns: \describe{ \item{counts}{a numeric vector, counts or frequencies} \item{number}{a numeric vector, the number of decays} } } \details{ These are the radioactive decay counts of polonium recorded by Rutherford and Geiger (1910), representing the number of scintillations in 2608 intervals of 1/8 minute each. For example, zero scintillations were recorded in 57 of the intervals. The numbers of decays can be thought of as being approximately Poisson distributed. } \source{ Rutherford, E. and Geiger, H. (1910) The Probability Variations in the Distribution of alpha Particles. \emph{Philosophical Magazine}, \bold{20}, 698--704. } %\references{ %} \examples{ lambdahat <- with(ruge, weighted.mean(number, w = counts)) (N <- with(ruge, sum(counts))) with(ruge, cbind(number, counts, fitted = round(N * dpois(number, lam = lambdahat)))) } \keyword{datasets} VGAM/man/zanegbinomial.Rd0000644000176200001440000002327713135276753014704 0ustar liggesusers\name{zanegbinomial} \alias{zanegbinomial} \alias{zanegbinomialff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Altered Negative Binomial Distribution } \description{ Fits a zero-altered negative binomial distribution based on a conditional model involving a binomial distribution and a positive-negative binomial distribution. } \usage{ zanegbinomial(zero = "size", type.fitted = c("mean", "munb", "pobs0"), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, lpobs0 = "logit", lmunb = "loge", lsize = "loge", imethod = 1, ipobs0 = NULL, imunb = NULL, iprobs.y = NULL, gprobs.y = (0:9)/10, isize = NULL, gsize.mux = exp(c(-30, -20, -15, -10, -6:3))) zanegbinomialff(lmunb = "loge", lsize = "loge", lonempobs0 = "logit", type.fitted = c("mean", "munb", "pobs0", "onempobs0"), isize = NULL, ionempobs0 = NULL, zero = c("size", "onempobs0"), mds.min = 1e-3, iprobs.y = NULL, gprobs.y = (0:9)/10, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, gsize.mux = exp(c(-30, -20, -15, -10, -6:3)), imethod = 1, imunb = NULL, nsimEIM = 500) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lpobs0}{ Link function for the parameter \eqn{p_0}{pobs0}, called \code{pobs0} here. See \code{\link{Links}} for more choices.
} \item{lmunb}{ Link function applied to the \code{munb} parameter, which is the mean \eqn{\mu_{nb}}{munb} of an ordinary negative binomial distribution. See \code{\link{Links}} for more choices. } \item{lsize}{ Parameter link function applied to the reciprocal of the dispersion parameter, called \code{k}. That is, as \code{k} increases, the variance of the response decreases. See \code{\link{Links}} for more choices. } \item{type.fitted}{ See \code{\link{CommonVGAMffArguments}} and \code{\link{fittedvlm}} for information. } \item{lonempobs0, ionempobs0}{ Corresponding arguments for the other parameterization. See details below. } % \item{epobs0, emunb, esize}{ % List. Extra argument for the respective links. % See \code{earg} in \code{\link{Links}} for general information. % epobs0 = list(), emunb = list(), esize = list(), % } \item{ipobs0, imunb, isize}{ Optional initial values for \eqn{p_0}{pobs0}, \code{munb} and \code{k}. If given, it is okay to give one value for each response/species by inputting a vector whose length is the number of columns of the response matrix. } \item{zero}{ % Integer valued vector, may be assigned, e.g., \eqn{-3} or \eqn{3} if % the probability of an observed value is to be modelled with the % covariates. Specifies which of the three linear predictors are modelled as intercept-only. % By default, the \code{k} and \eqn{p_0}{pobs0} % parameters for each response are modelled as % single unknown numbers that are estimated. All parameters can be modelled as a function of the explanatory variables by setting \code{zero = NULL} (not recommended). A negative value means that the value is recycled, e.g., setting \eqn{-3} means all \code{k} are intercept-only for \code{zanegbinomial}. See \code{\link{CommonVGAMffArguments}} for more information. } \item{nsimEIM, imethod}{ See \code{\link{CommonVGAMffArguments}}. } % \item{ishrinkage}{ % See \code{\link{negbinomial}} % and \code{\link{CommonVGAMffArguments}}. % } \item{iprobs.y, gsize.mux, gprobs.y}{ See \code{\link{negbinomial}}. % and \code{\link{CommonVGAMffArguments}}. } \item{cutoff.prob, eps.trig}{ See \code{\link{negbinomial}}. % and \code{\link{CommonVGAMffArguments}}. } \item{mds.min, max.support, max.chunk.MB}{ See \code{\link{negbinomial}}. % and \code{\link{CommonVGAMffArguments}}. } } \details{ The response \eqn{Y} is zero with probability \eqn{p_0}{pobs0}, or \eqn{Y} has a positive-negative binomial distribution with probability \eqn{1-p_0}{1-pobs0}. Thus \eqn{0 < p_0 < 1}{0 < pobs0 < 1}, which is modelled as a function of the covariates. The zero-altered negative binomial distribution differs from the zero-inflated negative binomial distribution in that the former has zeros coming from one source, whereas the latter has zeros coming from the negative binomial distribution too. The zero-inflated negative binomial distribution is implemented in the \pkg{VGAM} package. Some people call the zero-altered negative binomial a \emph{hurdle} model. For one response/species, by default, the three linear/additive predictors for \code{zanegbinomial()} are \eqn{(logit(p_0), \log(\mu_{nb}), \log(k))^T}{(logit(pobs0), log(munb), log(k))^T}. This vector is recycled for multiple species. The \pkg{VGAM} family function \code{zanegbinomialff()} has a few changes compared to \code{zanegbinomial()}.
These are: (i) the order of the linear/additive predictors is switched so the negative binomial mean comes first; (ii) argument \code{onempobs0} is now 1 minus the probability of an observed 0, i.e., the probability of the positive negative binomial distribution, i.e., \code{onempobs0} is \code{1-pobs0}; (iii) argument \code{zero} has a new default so that \code{pobs0} is intercept-only. Now \code{zanegbinomialff()} is generally recommended over \code{zanegbinomial()}. Both functions implement Fisher scoring and can handle multiple responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. The \code{fitted.values} slot of the fitted object, which should be extracted by the generic function \code{fitted}, returns the mean \eqn{\mu}{mu} (default) which is given by \deqn{\mu = (1-p_0) \mu_{nb} / [1 - (k/(k+\mu_{nb}))^k].}{% mu = (1-pobs0) * munb / [1 - (k/(k+munb))^k].} If \code{type.fitted = "pobs0"} then \eqn{p_0}{pobs0} is returned. } \references{ Welsh, A. H., Cunningham, R. B., Donnelly, C. F. and Lindenmayer, D. B. (1996) Modelling the abundances of rare species: statistical models for counts with extra zeros. \emph{Ecological Modelling}, \bold{88}, 297--308. Yee, T. W. (2014) Reduced-rank vector generalized linear models with two linear predictors. \emph{Computational Statistics and Data Analysis}, \bold{71}, 889--902. } \section{Warning }{ This family function is fragile; it inherits the same difficulties as \code{\link{posnegbinomial}}. Convergence for this \pkg{VGAM} family function seems to depend quite strongly on providing good initial values. This \pkg{VGAM} family function is computationally expensive and usually runs slowly; setting \code{trace = TRUE} is useful for monitoring convergence. Inference obtained from \code{summary.vglm} and \code{summary.vgam} may or may not be correct. In particular, the p-values, standard errors and degrees of freedom may need adjustment. Use simulation on artificial data to check that these are reasonable. } \author{ T. W. Yee } \note{ Note that this family function allows \eqn{p_0}{pobs0} to be modelled as a function of the covariates provided \code{zero} is set correctly. It is a conditional model, not a mixture model. Simulated Fisher scoring is the algorithm. This family function effectively combines \code{\link{posnegbinomial}} and \code{\link{binomialff}} into one family function. This family function can handle multiple responses, e.g., more than one species. } \seealso{ \code{\link{dzanegbin}}, \code{\link{posnegbinomial}}, \code{\link{negbinomial}}, \code{\link{binomialff}}, \code{\link{rposnegbin}}, \code{\link{zinegbinomial}}, \code{\link{zipoisson}}, \code{\link[stats:NegBinomial]{dnbinom}}, \code{\link{CommonVGAMffArguments}}, \code{\link{simulate.vlm}}.
} \examples{ \dontrun{ zdata <- data.frame(x2 = runif(nn <- 2000)) zdata <- transform(zdata, pobs0 = logit(-1 + 2*x2, inverse = TRUE)) zdata <- transform(zdata, y1 = rzanegbin(nn, munb = exp(0+2*x2), size = exp(1), pobs0 = pobs0), y2 = rzanegbin(nn, munb = exp(1+2*x2), size = exp(1), pobs0 = pobs0)) with(zdata, table(y1)) with(zdata, table(y2)) fit <- vglm(cbind(y1, y2) ~ x2, zanegbinomial, data = zdata, trace = TRUE) coef(fit, matrix = TRUE) head(fitted(fit)) head(predict(fit)) } } \keyword{models} \keyword{regression} % lpobs0 = "logit", lmunb = "loge", lsize = "loge", % type.fitted = c("mean", "pobs0"), % ipobs0 = NULL, isize = NULL, zero = "size", % probs.y = 0.75, cutoff.prob = 0.999, % max.support = 2000, max.chunk.MB = 30, % gsize = exp((-4):4), % imethod = 1, nsimEIM = 250, ishrinkage = 0.95) %zanegbinomial( %zero = "size", type.fitted = c("mean", "pobs0"), % nsimEIM = 250, cutoff.prob = 0.999, % max.support = 2000, max.chunk.MB = 30, % lpobs0 = "logit", lmunb = "loge", lsize = "loge", % imethod = 1, ipobs0 = NULL, probs.y = 0.75, % ishrinkage = 0.95, isize = NULL, gsize = exp((-4):4)) %zanegbinomialff(lmunb = "loge", lsize = "loge", lonempobs0 = "logit", % type.fitted = c("mean", "pobs0", "onempobs0"), isize = NULL, % ionempobs0 = NULL, zero = c("size", "onempobs0"), % probs.y = 0.75, cutoff.prob = 0.999, % max.support = 2000, max.chunk.MB = 30, % gsize = exp((-4):4), % imethod = 1, nsimEIM = 250, ishrinkage = 0.95) VGAM/man/Coef.rrvglm.Rd0000644000176200001440000000264113135276753014251 0ustar liggesusers\name{Coef.rrvglm} \alias{Coef.rrvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Returns Important Matrices etc. of an RR-VGLM Object } \description{ This methods function returns important matrices etc. of an RR-VGLM object. } \usage{ Coef.rrvglm(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ An object of class \code{"rrvglm"}. } \item{\dots}{ Currently unused. } } \details{ The \bold{A}, \bold{B1}, \bold{C} matrices are returned, along with other slots. See \code{\link{rrvglm}} for details about RR-VGLMs. } \value{ An object of class \code{"Coef.rrvglm"} (see \code{\link{Coef.rrvglm-class}}). } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. } \author{ Thomas W. Yee } \note{ This function is an alternative to \code{coef.rrvglm}. } % ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link{Coef.rrvglm-class}}, \code{print.Coef.rrvglm}, \code{\link{rrvglm}}. } \examples{ # Rank-1 stereotype model of Anderson (1984) pneumo <- transform(pneumo, let = log(exposure.time), x3 = runif(nrow(pneumo))) fit <- rrvglm(cbind(normal, mild, severe) ~ let + x3, multinomial, data = pneumo) coef(fit, matrix = TRUE) Coef(fit) } \keyword{models} \keyword{regression} % # print(Coef(fit), digits = 3) VGAM/man/zanegbinUC.Rd0000644000176200001440000000507413135276753014115 0ustar liggesusers\name{Zanegbin} \alias{Zanegbin} \alias{dzanegbin} \alias{pzanegbin} \alias{qzanegbin} \alias{rzanegbin} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Altered Negative Binomial Distribution } \description{ Density, distribution function, quantile function and random generation for the zero-altered negative binomial distribution with parameter \code{pobs0}.
} \usage{ dzanegbin(x, size, prob = NULL, munb = NULL, pobs0 = 0, log = FALSE) pzanegbin(q, size, prob = NULL, munb = NULL, pobs0 = 0) qzanegbin(p, size, prob = NULL, munb = NULL, pobs0 = 0) rzanegbin(n, size, prob = NULL, munb = NULL, pobs0 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{size, prob, munb, log}{ Parameters from the ordinary negative binomial distribution (see \code{\link[stats:NegBinomial]{dnbinom}}). Some arguments have been renamed slightly. } \item{pobs0}{ Probability of zero, called \eqn{pobs0}. The default value of \code{pobs0 = 0} corresponds to the response having a positive-negative binomial distribution. } } \details{ The random variable \eqn{Y} equals 0 with probability \code{pobs0}; otherwise it follows a positive-negative binomial(\eqn{\mu_{nb}}{munb}, size) distribution. } \value{ \code{dzanegbin} gives the density, \code{pzanegbin} gives the distribution function, \code{qzanegbin} gives the quantile function, and \code{rzanegbin} generates random deviates. } %\references{ } \author{ T. W. Yee } \note{ The argument \code{pobs0} is recycled to the required length, and must have values which lie in the interval \eqn{[0,1]}. } \seealso{ \code{\link{zanegbinomial}}, \code{\link{rposnegbin}}. } \examples{ munb <- 3; size <- 4; pobs0 <- 0.3; x <- (-1):7 dzanegbin(x, munb = munb, size = size, pobs0 = pobs0) table(rzanegbin(100, munb = munb, size = size, pobs0 = pobs0)) \dontrun{ x <- 0:10 barplot(rbind(dzanegbin(x, munb = munb, size = size, pobs0 = pobs0), dnbinom(x, mu = munb, size = size)), beside = TRUE, col = c("blue", "green"), cex.main = 0.7, las = 1, ylab = "Probability", names.arg = as.character(x), main = paste("ZANB(munb = ", munb, ", size = ", size,", pobs0 = ", pobs0, ") [blue] vs", " NB(mu = ", munb, ", size = ", size, ") [green] densities", sep = "")) } } \keyword{distribution} VGAM/man/binormalcop.Rd0000644000176200001440000000750613135276753014373 0ustar liggesusers\name{binormalcop} \alias{binormalcop} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Gaussian Copula (Bivariate) Family Function } \description{ Estimate the correlation parameter of the (bivariate) Gaussian copula distribution by maximum likelihood estimation. } \usage{ binormalcop(lrho = "rhobit", irho = NULL, imethod = 1, parallel = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. % apply.parint = TRUE, \arguments{ \item{lrho, irho, imethod}{ Details at \code{\link{CommonVGAMffArguments}}. See \code{\link{Links}} for more link function choices. } \item{parallel, zero}{ Details at \code{\link{CommonVGAMffArguments}}. If \code{parallel = TRUE} then the constraint is applied to the intercept too. } } \details{ The cumulative distribution function is \deqn{P(Y_1 \leq y_1, Y_2 \leq y_2) = \Phi_2 ( \Phi^{-1}(y_1), \Phi^{-1}(y_2); \rho ) }{% P(Y1 <= y1, Y2 <= y2) = Phi_2(\Phi^(-1)(y_1), \Phi^(-1)(y_2); \rho)} for \eqn{-1 < \rho < 1}{-1 < rho < 1}, where \eqn{\Phi_2}{Phi_2} is the cumulative distribution function of a standard bivariate normal (see \code{\link{pbinorm}}), and \eqn{\Phi}{Phi} is the cumulative distribution function of a standard univariate normal (see \code{\link[stats]{pnorm}}). The support of the function is the interior of the unit square; however, values of 0 and/or 1 are not allowed.
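As a sketch of this definition (illustrative values assumed), the copula can be evaluated directly from \code{\link{pbinorm}} and \code{\link[stats]{qnorm}}: \preformatted{rho <- 0.5; y1 <- 0.3; y2 <- 0.7
pbinorm(qnorm(y1), qnorm(y2), cov12 = rho)  # P(Y1 <= y1, Y2 <= y2)
}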
The marginal distributions are the standard uniform distributions. When \eqn{\rho = 0}{rho=0} the random variables are independent. This \pkg{VGAM} family function can handle multiple responses, for example, a six-column matrix where the first 2 columns are the first of three responses, the next 2 columns are the next response, etc. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Schepsmeier, U. and Stober, J. (2013) Derivatives and Fisher information of bivariate copulas. \emph{Statistical Papers}. } \author{ T. W. Yee } \note{ The response matrix must have a multiple of two columns. Currently, the fitted value is a matrix with the same number of columns as the response, with all values equal to 0.5. This is because each marginal distribution corresponds to a standard uniform distribution. This \pkg{VGAM} family function is fragile; each response must be in the interior of the unit square. Setting \code{crit = "coef"} is sometimes a good idea because inaccuracies in \code{\link{pbinorm}} might mean unnecessary half-stepping will occur near the solution. } \seealso{ \code{\link{rbinormcop}}, \code{\link[stats]{pnorm}}, \code{\link{kendall.tau}}. } \examples{ nn <- 1000 ymat <- rbinormcop(n = nn, rho = rhobit(-0.9, inverse = TRUE)) bdata <- data.frame(y1 = ymat[, 1], y2 = ymat[, 2], y3 = ymat[, 1], y4 = ymat[, 2], x2 = runif(nn)) summary(bdata) \dontrun{ plot(ymat, col = "blue") } fit1 <- vglm(cbind(y1, y2, y3, y4) ~ 1, # 2 responses, e.g., (y1,y2) is the first fam = binormalcop, crit = "coef", # Sometimes a good idea data = bdata, trace = TRUE) coef(fit1, matrix = TRUE) Coef(fit1) head(fitted(fit1)) summary(fit1) # Another example; rho is a linear function of x2 bdata <- transform(bdata, rho = -0.5 + x2) ymat <- rbinormcop(n = nn, rho = with(bdata, rho)) bdata <- transform(bdata, y5 = ymat[, 1], y6 = ymat[, 2]) fit2 <- vgam(cbind(y5, y6) ~ s(x2), data = bdata, binormalcop(lrho = "identitylink"), trace = TRUE) \dontrun{ plot(fit2, lcol = "blue", scol = "orange", se = TRUE, las = 1) } } \keyword{models} \keyword{regression} % for real \eqn{\rho}{rho} in (-1,1). VGAM/man/binom2.or.Rd0000644000176200001440000002137513135276753013677 0ustar liggesusers\name{binom2.or} %- Also NEED an '\alias' for EACH other topic documented here. \alias{binom2.or} \title{ Bivariate Binary Regression with an Odds Ratio (Family Function) } \description{ Fits a Palmgren (bivariate odds-ratio model, or bivariate logistic regression) model to two binary responses. Actually, a bivariate logistic/probit/cloglog/cauchit model can be fitted. The odds ratio is used as a measure of dependency. } \usage{ binom2.or(lmu = "logit", lmu1 = lmu, lmu2 = lmu, loratio = "loge", imu1 = NULL, imu2 = NULL, ioratio = NULL, zero = "oratio", exchangeable = FALSE, tol = 0.001, more.robust = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu}{ Link function applied to the two marginal probabilities. See \code{\link{Links}} for more choices. See the note below. } \item{lmu1, lmu2}{ Link function applied to the first and second of the two marginal probabilities. } \item{loratio}{ Link function applied to the odds ratio. See \code{\link{Links}} for more choices. } \item{imu1, imu2, ioratio}{ Optional initial values for the marginal probabilities and odds ratio. See \code{\link{CommonVGAMffArguments}} for more details.
In general, good initial values are often required, so use these arguments if convergence failure occurs. } \item{zero}{ Which linear/additive predictor is modelled as an intercept only? The default is for the odds ratio. A \code{NULL} means none. See \code{\link{CommonVGAMffArguments}} for more details. } \item{exchangeable}{ Logical. If \code{TRUE}, the two marginal probabilities are constrained to be equal. } \item{tol}{ Tolerance for testing independence. Should be some small positive numerical value. } \item{more.robust}{ Logical. If \code{TRUE} then some measures are taken to compute the derivatives and working weights more robustly, i.e., in an attempt to avoid numerical problems. Currently this feature has not been fully debugged when set to \code{TRUE}. } } \details{ Also known informally as the \emph{Palmgren model}, the bivariate logistic model is a full-likelihood based model defined as two logistic regressions plus \code{log(oratio) = eta3} where \code{eta3} is the third linear/additive predictor relating the odds ratio to explanatory variables. Explicitly, the default model is \deqn{logit[P(Y_j=1)] = \eta_j,\ \ \ j=1,2}{% logit[P(Y_j=1)] = eta_j,\ \ \ j=1,2} for the marginals, and \deqn{\log[P(Y_{00}=1) P(Y_{11}=1) / (P(Y_{01}=1) P(Y_{10}=1))] = \eta_3,}{% log[P(Y_{00}=1) P(Y_{11}=1) / (P(Y_{01}=1) P(Y_{10}=1))] = eta_3,} specifies the dependency between the two responses. Here, the responses equal 1 for a success and 0 for a failure, and the odds ratio is often written \eqn{\psi=p_{00}p_{11}/(p_{10}p_{01})}{psi=p00 p11 / (p10 p01)}. The model is fitted by maximum likelihood estimation since the full likelihood is specified. The two binary responses are independent if and only if the odds ratio is unity, or equivalently, the log odds ratio is 0. Fisher scoring is implemented. The default models \eqn{\eta_3}{eta3} as a single parameter only, i.e., an intercept-only model, but this can be circumvented by setting \code{zero = NULL} in order to model the odds ratio as a function of all the explanatory variables. The function \code{binom2.or()} can handle other probability link functions such as \code{\link{probit}}, \code{\link{cloglog}} and \code{\link{cauchit}} links as well, so is quite general. In fact, the two marginal probabilities can each have a different link function. A similar model is the \emph{bivariate probit model} (\code{\link{binom2.rho}}), which is based on a standard bivariate normal distribution, but the bivariate probit model is less interpretable and flexible. The \code{exchangeable} argument should be used when the error structure is exchangeable, e.g., with eyes or ears data. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. When fitted, the \code{fitted.values} slot of the object contains the four joint probabilities, labelled as \eqn{(Y_1,Y_2)}{(Y1,Y2)} = (0,0), (0,1), (1,0), (1,1), respectively. These estimated probabilities should be extracted with the \code{fitted} generic function. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. le Cessie, S. and van Houwelingen, J. C. (1994) Logistic regression for correlated binary data. \emph{Applied Statistics}, \bold{43}, 95--108. Palmgren, J. (1989) \emph{Regression Models for Bivariate Binary Responses}. Technical Report no. 101, Department of Biostatistics, University of Washington, Seattle. Yee, T. W. and Dirnbock, T.
(2009) Models for analysing species' presence/absence data at two time points. \emph{Journal of Theoretical Biology}, \bold{259}(4), 684--694. % Documentation accompanying the \pkg{VGAM} package at % \url{https://www.stat.auckland.ac.nz/~yee} % contains further information and examples. } \author{ Thomas W. Yee } \note{ At present we call \code{\link{binom2.or}}-type models \emph{bivariate odds-ratio models}. The response should be either a 4-column matrix of counts (whose columns correspond to \eqn{(Y_1,Y_2)}{(Y1,Y2)} = (0,0), (0,1), (1,0), (1,1) respectively), or a two-column matrix where each column has two distinct values, or a factor with four levels. The function \code{\link{rbinom2.or}} may be used to generate such data. Successful convergence requires at least one case of each of the four possible outcomes. By default, a constant odds ratio is fitted because \code{zero = "oratio"}. Set \code{zero = NULL} if you want the odds ratio to be modelled as a function of the explanatory variables; however, numerical problems are more likely to occur. The argument \code{lmu}, which is actually redundant, is used for convenience and for upward compatibility: specifying \code{lmu} only means the link function will be applied to \code{lmu1} and \code{lmu2}. Users who want a different link function for each of the two marginal probabilities should use the \code{lmu1} and \code{lmu2} arguments, and the argument \code{lmu} is then ignored. It doesn't make sense to specify \code{exchangeable = TRUE} and have different link functions for the two marginal probabilities. Regarding Yee and Dirnbock (2009), the \code{xij} argument (see \code{\link{vglm.control}}) enables environmental variables with different values at the two time points to be entered into an exchangeable \code{\link{binom2.or}} model. See the author's webpage for sample code. } \seealso{ \code{\link{rbinom2.or}}, \code{\link{binom2.rho}}, \code{\link{loglinb2}}, \code{\link{zipebcom}}, \code{\link{coalminers}}, \code{\link{binomialff}}, \code{\link{logit}}, \code{\link{probit}}, \code{\link{cloglog}}, \code{\link{cauchit}}.
} \examples{ # Fit the model in Table 6.7 in McCullagh and Nelder (1989) coalminers <- transform(coalminers, Age = (age - 42) / 5) fit <- vglm(cbind(nBnW, nBW, BnW, BW) ~ Age, binom2.or(zero = NULL), data = coalminers) fitted(fit) summary(fit) coef(fit, matrix = TRUE) c(weights(fit, type = "prior")) * fitted(fit) # Table 6.8 \dontrun{ with(coalminers, matplot(Age, fitted(fit), type = "l", las = 1, xlab = "(age - 42) / 5", lwd = 2)) with(coalminers, matpoints(Age, depvar(fit), col=1:4)) legend(x = -4, y = 0.5, lty = 1:4, col = 1:4, lwd = 2, legend = c("1 = (Breathlessness=0, Wheeze=0)", "2 = (Breathlessness=0, Wheeze=1)", "3 = (Breathlessness=1, Wheeze=0)", "4 = (Breathlessness=1, Wheeze=1)")) } # Another model: pet ownership \dontrun{ data(xs.nz, package = "VGAMdata") # More homogeneous: petdata <- subset(xs.nz, ethnicity == "European" & age < 70 & sex == "M") petdata <- na.omit(petdata[, c("cat", "dog", "age")]) summary(petdata) with(petdata, table(cat, dog)) # Can compute the odds ratio fit <- vgam(cbind((1-cat) * (1-dog), (1-cat) * dog, cat * (1-dog), cat * dog) ~ s(age, df = 5), binom2.or(zero = 3), data = petdata, trace = TRUE) colSums(depvar(fit)) coef(fit, matrix = TRUE) } \dontrun{ # Plot the estimated probabilities ooo <- order(with(petdata, age)) matplot(with(petdata, age)[ooo], fitted(fit)[ooo, ], type = "l", xlab = "Age", ylab = "Probability", main = "Pet ownership", ylim = c(0, max(fitted(fit))), las = 1, lwd = 1.5) legend("topleft", col=1:4, lty = 1:4, leg = c("no cat or dog ", "dog only", "cat only", "cat and dog"), lwd = 1.5) } } \keyword{models} \keyword{regression} VGAM/man/concoef-methods.Rd0000644000176200001440000000232113135276753015135 0ustar liggesusers\name{concoef-methods} \docType{methods} %\alias{concoef,ANY-method} \alias{concoef-method} \alias{concoef,cao-method} \alias{concoef,Coef.cao-method} \alias{concoef,rrvglm-method} \alias{concoef,qrrvglm-method} \alias{concoef,Coef.rrvglm-method} \alias{concoef,Coef.qrrvglm-method} % %%\alias{ccoef-method} %%\alias{ccoef,cao-method} %%\alias{ccoef,Coef.cao-method} %%\alias{ccoef,rrvglm-method} %%\alias{ccoef,qrrvglm-method} %%\alias{ccoef,Coef.rrvglm-method} %%\alias{ccoef,Coef.qrrvglm-method} % % This does not work: %\alias{ccoef,cao,Coef.cao,rrvglm,qrrvglm,Coef.rrvglm,Coef.qrrvglm-method} % \title{ Constrained (Canonical) Coefficients } \description{ \code{concoef} is a generic function used to return the constrained (canonical) coefficients of a constrained ordination model. The function invokes particular methods which depend on the class of the first argument. } %\usage{ % \S4method{ccoef}{cao,Coef.cao,rrvglm,qrrvglm,Coef.rrvglm,Coef.qrrvglm}(object, ...) %} \section{Methods}{ \describe{ \item{object}{ The object from which the constrained coefficients are extracted. } } } \keyword{methods} \keyword{classes} %\keyword{ ~~ other possible keyword(s)} \keyword{models} \keyword{regression} VGAM/man/zinegbinomial.Rd0000644000176200001440000002546713135276753014727 0ustar liggesusers\name{zinegbinomial} \alias{zinegbinomial} \alias{zinegbinomialff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Inflated Negative Binomial Distribution Family Function } \description{ Fits a zero-inflated negative binomial distribution by full maximum likelihood estimation. 
} \usage{ zinegbinomial(zero = "size", type.fitted = c("mean", "munb", "pobs0", "pstr0", "onempstr0"), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, lpstr0 = "logit", lmunb = "loge", lsize = "loge", imethod = 1, ipstr0 = NULL, imunb = NULL, iprobs.y = NULL, isize = NULL, gprobs.y = (0:9)/10, gsize.mux = exp(c(-30, -20, -15, -10, -6:3))) zinegbinomialff(lmunb = "loge", lsize = "loge", lonempstr0 = "logit", type.fitted = c("mean", "munb", "pobs0", "pstr0", "onempstr0"), imunb = NULL, isize = NULL, ionempstr0 = NULL, zero = c("size", "onempstr0"), imethod = 1, iprobs.y = NULL, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, gprobs.y = (0:9)/10, gsize.mux = exp((-12:6)/2), mds.min = 1e-3, nsimEIM = 500) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lpstr0, lmunb, lsize}{ Link functions for the parameters \eqn{\phi}{pstr0}, the mean and \eqn{k}; see \code{\link{negbinomial}} for details, and \code{\link{Links}} for more choices. For the zero-\emph{deflated} model see below. } % \item{epstr0, emunb, esize}{ % epstr0 = list(), emunb = list(), esize = list(), % List. Extra arguments for the respective links. % See \code{earg} in \code{\link{Links}} for general information. % } \item{type.fitted}{ See \code{\link{CommonVGAMffArguments}} and \code{\link{fittedvlm}} for more information. } \item{ipstr0, isize, imunb}{ Optional initial values for \eqn{\phi}{pstr0} and \eqn{k}{k} and \eqn{\mu}{munb}. The default is to compute an initial value internally for both. If a vector then recycling is used. } \item{lonempstr0, ionempstr0}{ Corresponding arguments for the other parameterization. See details below. } \item{imethod}{ An integer with value \code{1} or \code{2} or \code{3} which specifies the initialization method for the mean parameter. If failure to converge occurs try another value. See \code{\link{CommonVGAMffArguments}} for more information. } \item{zero}{ Specifies which linear/additive predictors are to be modelled as intercept-only. They can be such that their absolute values are either 1 or 2 or 3. The default is the \eqn{\phi}{pstr0} and \eqn{k} parameters (both for each response). See \code{\link{CommonVGAMffArguments}} for more information. } \item{nsimEIM}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{iprobs.y, cutoff.prob, max.support, max.chunk.MB }{ See \code{\link{negbinomial}} and/or \code{\link{posnegbinomial}} for details. } \item{mds.min, eps.trig}{ See \code{\link{negbinomial}} for details. } \item{gprobs.y, gsize.mux}{ These arguments relate to grid searching in the initialization process. See \code{\link{negbinomial}} and/or \code{\link{posnegbinomial}} for details. } } \details{ These functions are based on \deqn{P(Y=0) = \phi + (1-\phi) (k/(k+\mu))^k,}{% P(Y=0) = phi + (1- phi) * (k/(k+munb))^k,} and for \eqn{y=1,2,\ldots}, \deqn{P(Y=y) = (1-\phi) \, dnbinom(y, \mu, k).}{% P(Y=y) = (1- phi) * dnbinom(y, munb, k).} The parameter \eqn{\phi}{phi} satisfies \eqn{0 < \phi < 1}{0 < phi < 1}. The mean of \eqn{Y} is \eqn{(1-\phi) \mu}{(1-phi)*munb} (returned as the fitted values). By default, the three linear/additive predictors for \code{zinegbinomial()} are \eqn{(logit(\phi), \log(\mu), \log(k))^T}{(logit(phi), log(munb), log(k))^T}. See \code{\link{negbinomial}}, another \pkg{VGAM} family function, for the formula of the probability density function and other details of the negative binomial distribution. 
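As a quick numerical sketch of the first formula above (illustrative parameter values assumed), \eqn{P(Y=0)} can be checked against \code{\link{dzinegbin}}: \preformatted{phi <- 0.2; munb <- 3; kk <- 2
phi + (1 - phi) * (kk / (kk + munb))^kk  # P(Y = 0) from the formula
dzinegbin(0, size = kk, munb = munb, pstr0 = phi)  # Should agree
}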
Independent multiple responses are handled. If so then arguments \code{ipstr0} and \code{isize} may be vectors with length equal to the number of responses. The \pkg{VGAM} family function \code{zinegbinomialff()} has a few changes compared to \code{zinegbinomial()}. These are: (i) the order of the linear/additive predictors is switched so the NB mean comes first; (ii) \code{onempstr0} is now 1 minus the probability of a structural 0, i.e., the probability of the parent (NB) component, i.e., \code{onempstr0} is \code{1-pstr0}; (iii) argument \code{zero} has a new default so that \code{onempstr0} is intercept-only. Now \code{zinegbinomialff()} is generally recommended over \code{zinegbinomial()}. Both functions implement Fisher scoring and can handle multiple responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } %\references{ % %} \author{ T. W. Yee } \note{ % 20130316: commenting out this: % For intercept-models, the \code{misc} slot has a component called % \code{pobs0} which is the estimate of \eqn{P(Y=0)}. % Note that \eqn{P(Y=0)} is not the parameter \eqn{\phi}{phi}. % 20130316: adding this: Estimated probabilities of a structural zero and an observed zero can be returned, as in \code{\link{zipoisson}}; see \code{\link{fittedvlm}} for more information. If \eqn{k} is large then the use of \pkg{VGAM} family function \code{\link{zipoisson}} is probably preferable. This follows because the Poisson is the limiting distribution of a negative binomial as \eqn{k} tends to infinity. The zero-\emph{deflated} negative binomial distribution might be fitted by setting \code{lpstr0 = identitylink}, albeit not entirely reliably. See \code{\link{zipoisson}} for information that can be applied here. Else try the zero-altered negative binomial distribution (see \code{\link{zanegbinomial}}). } \section{Warning }{ This model can be difficult to fit to data, and this family function is fragile. The model is especially difficult to fit reliably when the estimated \eqn{k} parameter is very large (so the model approaches a zero-inflated Poisson distribution) or much less than 1 (and gets more difficult as it approaches 0). Numerical problems can also occur, e.g., when the probability of a zero is actually less than, and not more than, the nominal probability of zero. Similarly, numerical problems can occur if there is little or no 0-inflation, or when the sample size is small. Half-stepping is not uncommon. Successful convergence is sensitive to the initial values; if failure to converge occurs, try using combinations of arguments \code{stepsize} (in \code{\link{vglm.control}}), \code{imethod}, \code{imunb}, \code{ipstr0}, \code{isize}, and/or \code{zero} if there are explanatory variables. Else try fitting an ordinary \code{\link{negbinomial}} model or a \code{\link{zipoisson}} model. % An infinite loop might occur if some of the fitted values % (the means) are too close to 0. % \code{ishrinkage}, This \pkg{VGAM} family function can be computationally expensive and can run slowly; setting \code{trace = TRUE} is useful for monitoring convergence. % 20160208; A bug caused this, but has been fixed now: % And \code{\link{zinegbinomial}} may converge slowly when % the estimated \eqn{k} parameter is less than 1; % and get slower as it approaches 0.
} \seealso{ \code{\link{Zinegbin}}, \code{\link{negbinomial}}, \code{\link[stats:Poisson]{rpois}}, \code{\link{CommonVGAMffArguments}}. } \examples{ \dontrun{ # Example 1 ndata <- data.frame(x2 = runif(nn <- 1000)) ndata <- transform(ndata, pstr0 = logit(-0.5 + 1 * x2, inverse = TRUE), munb = exp( 3 + 1 * x2), size = exp( 0 + 2 * x2)) ndata <- transform(ndata, y1 = rzinegbin(nn, mu = munb, size = size, pstr0 = pstr0)) with(ndata, table(y1)["0"] / sum(table(y1))) nfit <- vglm(y1 ~ x2, zinegbinomial(zero = NULL), data = ndata) coef(nfit, matrix = TRUE) summary(nfit) head(cbind(fitted(nfit), with(ndata, (1 - pstr0) * munb))) round(vcov(nfit), 3) # Example 2: RR-ZINB could also be called a COZIVGLM-ZINB-2 ndata <- data.frame(x2 = runif(nn <- 2000)) ndata <- transform(ndata, x3 = runif(nn)) ndata <- transform(ndata, eta1 = 3 + 1 * x2 + 2 * x3) ndata <- transform(ndata, pstr0 = logit(-1.5 + 0.5 * eta1, inverse = TRUE), munb = exp(eta1), size = exp(4)) ndata <- transform(ndata, y1 = rzinegbin(nn, pstr0 = pstr0, mu = munb, size = size)) with(ndata, table(y1)["0"] / sum(table(y1))) rrzinb <- rrvglm(y1 ~ x2 + x3, zinegbinomial(zero = NULL), data = ndata, Index.corner = 2, str0 = 3, trace = TRUE) coef(rrzinb, matrix = TRUE) Coef(rrzinb) } } \keyword{models} \keyword{regression} %zinegbinomial(lpstr0 = "logit", lmunb = "loge", lsize = "loge", % type.fitted = c("mean", "pobs0", "pstr0", "onempstr0"), % ipstr0 = NULL, isize = NULL, zero = "size", % imethod = 1, ishrinkage = 0.95, % probs.y = 0.75, cutoff.prob = 0.999, % max.support = 2000, max.chunk.MB = 30, % gpstr0 = 1:19/20, gsize = exp((-4):4), % nsimEIM = 250) %zinegbinomial(zero = "size", % type.fitted = c("mean", "pobs0", "pstr0", "onempstr0"), % nsimEIM = 250, cutoff.prob = 0.999, max.support = 2000, % max.chunk.MB = 30, % lpstr0 = "logit", lmunb = "loge", lsize = "loge", % imethod = 1, ipstr0 = NULL, imunb = NULL, % probs.y = 0.85, ishrinkage = 0.95, % isize = NULL, gpstr0 = 1:19/20, gsize = exp((-4):4)) %zinegbinomialff(lmunb = "loge", lsize = "loge", lonempstr0 = "logit", % type.fitted = c("mean", "pobs0", "pstr0", "onempstr0"), % isize = NULL, ionempstr0 = NULL, % zero = c("size", "onempstr0"), % imethod = 1, ishrinkage = 0.95, % probs.y = 0.75, cutoff.prob = 0.999, % max.support = 2000, max.chunk.MB = 30, % gonempstr0 = 1:19/20, gsize = exp((-4):4), % nsimEIM = 250) %ndata <- transform(ndata, % y1 = rzinegbin(nn, mu = munb, size = size, pstr0 = pstr0), % y2 = rzinegbin(nn, mu = munb, size = size, pstr0 = pstr0)) %with(ndata, table(y1)["0"] / sum(table(y1))) %fit <- vglm(cbind(y1, y2) ~ x2, zinegbinomial(zero = NULL), data = ndata) VGAM/man/venice.Rd0000644000176200001440000000725313135276753013342 0ustar liggesusers\name{venice} \alias{venice} \alias{venice90} \docType{data} \title{ Venice Maximum Sea Levels Data} \description{ Some sea levels data sets recorded at Venice, Italy. } \usage{ data(venice) data(venice90) } \format{ \code{venice} is a data frame with 51 observations on the following 11 variables. It concerns the maximum heights of sea levels between 1931 and 1981. \describe{ \item{year}{a numeric vector. } \item{r1,r2,r3,r4,r5,r6,r7,r8,r9,r10}{numeric vectors; \code{r1} is the highest recorded value, \code{r2} is the second highest recorded value, etc. } } \code{venice90} is a data frame with 455 observations on the following 7 variables. \describe{ \item{year, month, day, hour }{numeric vectors; actual time of the recording. } \item{sealevel}{numeric; sea level. 
}
\item{ohour}{numeric;
number of hours since the midnight between 31 Dec 1939 and
1 Jan 1940, i.e., the start of 1940.
}
\item{Year}{numeric vector;
approximate year as a real number.
The formula is \code{start.year + ohour / (365.26 * 24)}
where \code{start.year} is 1940.
One can treat \code{Year} as continuous whereas
\code{year} can be treated as both continuous and discrete.
}
}
}
\details{
Sea levels are in cm.
For \code{venice90}, the value 0 corresponds to a fixed reference
point (e.g., the mean sea level in 1897 at an old palace of Venice).
Since the relative (perceived) mean sea level has clearly been
trending upwards over time (an overall increase of more than 0.4 m
by 2010), the value 0 is now a very low and unusual measurement.

For \code{venice}, in 1935 only the top six values were recorded.

For \code{venice90}, this is a subset of a data set provided by
Paolo Pirazzoli consisting of hourly sea levels from 1940 to 2009.
Values greater than 90 cm were extracted, and then declustered
(each cluster provides no more than one value, and each value is
at least 24 hours apart).
Thus the values are more likely to be independent.
Of the original \code{(2009-1940+1)*365.26*24} values, about 7
percent comprise \code{venice90}.

Yet to do: check for consistency between the data sets.
Some external data sets elsewhere have some extremes recorded
at times not exactly on the hour.
}
\source{
Pirazzoli, P. (1982)
Maree estreme a Venezia (periodo 1872--1981).
\emph{Acqua Aria}, \bold{10}, 1023--1039.

Thanks to Paolo Pirazzoli and Alberto Tomasin for the
\code{venice90} data.
}
\references{
Smith, R. L. (1986)
Extreme value theory based on the \emph{r} largest annual events.
\emph{Journal of Hydrology}, \bold{86}, 27--43.

Battistin, D. and Canestrelli, P. (2006).
\emph{La serie storica delle maree a Venezia, 1872--2004}
(in Italian),
Comune di Venezia. Istituzione Centro Previsione e Segnalazioni
Maree.
}
\seealso{
\code{\link[VGAM]{guplot}},
\code{\link[VGAM]{gev}},
\code{\link[VGAM]{gpd}}.
}
\examples{
\dontrun{
matplot(venice[["year"]], venice[, -1], xlab = "Year",
        ylab = "Sea level (cm)", type = "l")

ymat <- as.matrix(venice[, paste("r", 1:10, sep = "")])
fit1 <- vgam(ymat ~ s(year, df = 3), gumbel(R = 365, mpv = TRUE),
             data = venice, trace = TRUE, na.action = na.pass)
head(fitted(fit1))

par(mfrow = c(2, 1), xpd = TRUE)
plot(fit1, se = TRUE, lcol = "blue", llwd = 2, slty = "dashed")

par(mfrow = c(1, 1), bty = "l", xpd = TRUE, las = 1)
qtplot(fit1, mpv = TRUE, lcol = c(1, 2, 5), tcol = c(1, 2, 5),
       llwd = 2, pcol = "blue", tadj = 0.1)

plot(sealevel ~ Year, data = venice90, type = "h", col = "blue")
summary(venice90)
dim(venice90)
round(100 * nrow(venice90) / ((2009 - 1940 + 1) * 365.26 * 24),
      digits = 3)
}
}
\keyword{datasets}
VGAM/man/lrpvglm.Rd0000644000176200001440000000755613135276753013552 0ustar liggesusers\name{lrp}
\alias{lrp}
\alias{lrp.vglm}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Likelihood ratio p-values }
\description{
Returns a vector of p-values from testing whether each estimated
coefficient of a VGLM regression model is 0 or not.
The methodology is based on a likelihood ratio test.
}
\usage{
lrp(object, ...)
lrp.vglm(object, which = NULL, omit1s = TRUE, trace = NULL, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{object}{
A \code{\link{vglm}} object.
}
\item{which}{
Numeric or character.
Specifies which regression coefficients are to be selected.
The default is to select them all, except the intercepts.
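For example, \code{which = 2:3} would select the second and third
regression coefficients; coefficient names, such as those shown by
\code{names(coef(object))}, should also be usable.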
}
\item{trace}{
Logical. If \code{TRUE} then some output is produced as each
regression coefficient is deleted (set to 0) and the IRLS iterations
proceed. The default is to use the \code{trace} value of the fitted
object; see \code{\link{vglm.control}}.
}
\item{omit1s}{
Logical. If \code{TRUE} (the default) then regression coefficients
deriving from the intercept term are not selected.
This is because, for some models such as \code{\link{propodds}} and
\code{\link{cumulative}}, setting an intercept equal to 0 does not
make much sense.
% By default this function does not return the p-values for
% those linear predictors deriving from an intercept.
}
\item{\dots}{
Further arguments passed into other methods functions.
% e.g., \code{subset}.
}
}
\details{
When \code{summary()} is applied to a \code{\link{vglm}} object a
Wald table is produced.
The corresponding p-values are generally viewed as inferior to those
from a likelihood ratio test (LRT).
For example, the Hauck and Donner (1977) effect (HDE) produces
p-values that are biased upwards (see \code{\link{hdeff}}).
Other reasons are that the Wald test is often less accurate
(especially in small samples) and is not invariant to
parameterization.

This function returns p-values based on the LRT by deleting one
column at a time from the big VLM matrix and then starting up IRLS
to convergence (hopefully).
Twice the difference between the log-likelihoods (or equivalently,
the difference in the deviances if they are defined) is
asymptotically chi-squared with 1 degree of freedom.
One might therefore expect the p-values from this function to be
more accurate and not suffer from the HDE.
Thus this function is an alternative to \code{\link{summaryvglm}}
for testing the significance of a regression coefficient.
}
\value{
By default, a vector of (2-sided test) p-values.
If the model is intercept-only then a \code{NULL} is returned
by default.
}
%\references{
%}
\author{ T. W. Yee.  }
\section{Warning }{
This function has not yet been thoroughly tested.
Convergence failure is possible for some models applied to certain
data sets; it is a good idea to set \code{trace = TRUE} to monitor
convergence.
}
\note{
Only models with a full likelihood are handled, so that quasi-type
models such as \code{\link{quasipoissonff}} should not be fed in.
One day this function might allow for terms, such as those arising
from \code{\link[stats]{poly}} and \code{\link[splines]{bs}}.
% i.e., some of the columns are grouped together,
}
\seealso{
\code{\link{hdeff}},
\code{\link{summaryvglm}},
\code{\link{vglm}},
\code{\link{confintvglm}},
\code{\link[stats]{pchisq}},
\code{\link{profilevglm}}.
% \code{\link{multinomial}},
% \code{\link{cumulative}},
}
\examples{
pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
cbind(coef(summary(fit)),
      "LRT pvalue" = lrp(fit, omit1s = FALSE))
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{models}
\keyword{regression}
VGAM/man/qvar.Rd0000644000176200001440000000455613135276753013035 0ustar liggesusers\name{qvar}
\alias{qvar}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Quasi-variances Extraction Function
%% ~~function to do ... ~~
}
\description{
Takes a \code{\link{rcim}} fit of the appropriate format and
returns either the quasi-variances or quasi-standard errors.
%% ~~ A concise (1-5 lines) description of what the function does. ~~
}
\usage{
qvar(object, se = FALSE, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{object}{
A \code{\link{rcim}} object that has family function
\code{\link{uninormal}} with the \code{\link{explink}} link.
See below for an example.
}
\item{se}{
Logical. If \code{FALSE} then the quasi-variances are returned,
else the square root of them, called quasi-standard errors.
}
\item{\ldots}{
Currently unused.
}
}
\details{
This function is ad hoc: it is simply equivalent to computing the
quasi-variances by
\code{diag(predict(fit1)[, c(TRUE, FALSE)]) / 2}.
This function is for convenience only.
Serious users of quasi-variances ought to understand why and how
this function works.
}
\value{
A vector of quasi-variances or quasi-standard errors.
}
%\references{
%
%}
\author{ T. W. Yee.  }
%\note{
% This is an adaptation of \code{qvcalc()} in \pkg{qvcalc}.
%
%
%}
%\section{Warning }{
% N
%
%
%}
\seealso{
\code{\link{rcim}},
\code{\link{uninormal}},
\code{\link{explink}},
\code{\link{Qvar}},
\code{\link[MASS]{ships}}.
%% ~~objects to See Also as \code{\link{help}}, ~~~
}
\examples{
data("ships", package = "MASS")

Shipmodel <- vglm(incidents ~ type + year + period, quasipoissonff,
                  offset = log(service),
                  data = ships, subset = (service > 0))

# Easiest form of input
fit1 <- rcim(Qvar(Shipmodel, "type"), uninormal("explink"), maxit = 99)
qvar(fit1)             # Quasi-variances
qvar(fit1, se = TRUE)  # Quasi-standard errors

# Manually compute them:
(quasiVar <- exp(diag(fitted(fit1))) / 2)                # Version 1
(quasiVar <- diag(predict(fit1)[, c(TRUE, FALSE)]) / 2)  # Version 2
(quasiSE  <- sqrt(quasiVar))

\dontrun{ qvplot(fit1, col = "green", lwd = 3, scol = "blue",
                 slwd = 2, las = 1) }
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{models}
\keyword{regression}
% \code{\link[qvcalc:qvcalc]{qvcalc}} in \pkg{qvcalc}
VGAM/man/riceUC.Rd0000644000176200001440000000533113135276753013236 0ustar liggesusers\name{Rice}
\alias{Rice}
\alias{drice}
\alias{price}
\alias{qrice}
\alias{rrice}
\title{The Rice Distribution}
\description{
Density, distribution function, quantile function and random
generation for the Rician distribution.
}
\usage{
drice(x, sigma, vee, log = FALSE)
price(q, sigma, vee, lower.tail = TRUE, log.p = FALSE, ...)
qrice(p, sigma, vee, lower.tail = TRUE, log.p = FALSE, ...)
rrice(n, sigma, vee)
}
\arguments{
\item{x, q}{vector of quantiles.}
\item{p}{vector of probabilities.}
\item{n}{number of observations.
Same as in \code{\link[stats]{runif}}.
}
\item{vee, sigma}{ See \code{\link{riceff}}.
}
\item{\dots}{
Other arguments such as \code{lower.tail}.
}
\item{lower.tail, log.p}{
Same meaning as in \code{\link[stats:Normal]{pnorm}}
or \code{\link[stats:Normal]{qnorm}}.
}
\item{log}{
Logical.
If \code{log = TRUE} then the logarithm of the density is returned.
}
}
\value{
\code{drice} gives the density,
\code{price} gives the distribution function,
\code{qrice} gives the quantile function, and
\code{rrice} generates random deviates.
}
\author{ T. W. Yee and Kai Huang }
\details{
See \code{\link{riceff}}, the \pkg{VGAM} family function for
estimating the two parameters,
for the formula of the probability density function and other
details.

Formulas for \code{price()} and \code{qrice()} are based on the
Marcum Q-function.
}
%\section{Warning }{
%
%}
\seealso{
\code{\link{riceff}}.
}
\examples{
\dontrun{ x <- seq(0.01, 7, len = 201)
plot(x, drice(x, vee = 0, sigma = 1), type = "n", las = 1, ylab = "",
     main = "Density of Rice distribution for various values of v")
sigma <- 1; vee <- c(0, 0.5, 1, 2, 4)
for (ii in 1:length(vee))
  lines(x, drice(x, vee = vee[ii], sigma), col = ii)
legend(x = 5, y = 0.6, legend = as.character(vee),
       col = 1:length(vee), lty = 1)

x <- seq(0, 4, by = 0.01); vee <- 1; sigma <- 1
probs <- seq(0.05, 0.95, by = 0.05)
plot(x, drice(x, vee = vee, sigma = sigma), type = "l", col = "blue",
     main = "Blue is density, orange is cumulative distribution function",
     ylim = c(0, 1), sub = "Purple are 5, 10, ..., 95 percentiles",
     las = 1, ylab = "", cex.main = 0.9)
abline(h = 0:1, col = "black", lty = 2)
Q <- qrice(probs, sigma, vee = vee)
lines(x, price(x, sigma, vee = vee), type = "l", col = "orange")
lines(Q, drice(Q, sigma, vee = vee), col = "purple", lty = 3, type = "h")
lines(Q, price(Q, sigma, vee = vee), col = "purple", lty = 3, type = "h")
abline(h = probs, col = "purple", lty = 3)
max(abs(price(Q, sigma, vee = vee) - probs))  # Should be 0
}
}
\keyword{distribution}
VGAM/man/yip88.Rd0000644000176200001440000001231013135276753013040 0ustar liggesusers\name{yip88}
\alias{yip88}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Zero-Inflated Poisson Distribution (Yip (1988) algorithm) }
\description{
Fits a zero-inflated Poisson distribution based on Yip (1988).
}
\usage{
yip88(link = "loge", n.arg = NULL, imethod = 1)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{link}{
Link function for the usual \eqn{\lambda}{lambda} parameter.
See \code{\link{Links}} for more choices.
}
\item{n.arg}{
The total number of observations in the data set.
Needed when the response variable has all the zeros deleted from
it, so that the number of zeros can be determined.
}
\item{imethod}{
Details at \code{\link{CommonVGAMffArguments}}.
}
}
\details{
The method implemented here, Yip (1988), maximizes a
\emph{conditional} likelihood.
Consequently, the methodology used here deletes the zeros from the
data set, and is thus related to the positive Poisson distribution
(where \eqn{P(Y=0) = 0}).

The random variable \eqn{Y} is 0 with probability \eqn{\phi}{phi},
and Poisson(\eqn{\lambda}{lambda}) with probability
\eqn{1-\phi}{1-phi}.
Thus
\deqn{P(Y=0) = \phi + (1-\phi) P(W=0)}{%
P(Y=0) = phi + (1-phi) * P(W=0)}
where \eqn{W} is Poisson(\eqn{\lambda}{lambda}).
The mean, \eqn{(1-\phi) \lambda}{(1-phi) * lambda}, can be obtained
by the extractor function \code{fitted} applied to the object.

This family function treats \eqn{\phi}{phi} as a scalar.
If you want to model both \eqn{\phi}{phi} and \eqn{\lambda}{lambda}
as a function of covariates, try \code{\link{zipoisson}}.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as \code{\link{vglm}},
\code{\link{rrvglm}} and \code{\link{vgam}}.
}
\references{
Yip, P. (1988)
Inference about the mean of a Poisson distribution in the presence
of a nuisance parameter.
\emph{The Australian Journal of Statistics}, \bold{30}, 299--306.

Angers, J-F. and Biswas, A. (2003)
A Bayesian analysis of zero-inflated generalized Poisson model.
\emph{Computational Statistics & Data Analysis}, \bold{42}, 37--46.
}
\author{ Thomas W. Yee }
\note{
The data may be inputted in two ways.
The first is when the response is a vector of positive values, with
the argument \code{n.arg} in \code{yip88} specifying the total
number of observations.
The second is to simply include all the data in the response.
In this case, the zeros are trimmed off during the computation, and
the \code{x} and \code{y} slots of the object, if assigned, will
reflect this.

The estimate of \eqn{\phi}{phi} is placed in the \code{misc} slot as
\code{@misc$pstr0}.
However, this estimate is computed only for intercept models, i.e.,
when the formula is of the form \code{y ~ 1}.
}
\section{Warning }{
Under- or over-flow may occur if the data is ill-conditioned.
Yip (1988) only considered \eqn{\phi}{phi} being a scalar and not
modelled as a function of covariates.
To get around this limitation, try \code{\link{zipoisson}}.

Inference obtained from \code{summary.vglm} and \code{summary.vgam}
may or may not be correct.
In particular, the p-values, standard errors and degrees of freedom
may need adjustment.
Use simulation on artificial data to check that these are
reasonable.
}
\seealso{
\code{\link{zipoisson}},
\code{\link{Zipois}},
\code{\link{zapoisson}},
\code{\link{pospoisson}},
\code{\link{poissonff}},
\code{\link{dzipois}}.
}
\examples{
phi <- 0.35; lambda <- 2  # Generate some artificial data
y <- rzipois(n <- 1000, lambda, phi)
table(y)

# Two equivalent ways of fitting the same model
fit1 <- vglm(y ~ 1, yip88(n = length(y)), subset = y > 0)
fit2 <- vglm(y ~ 1, yip88, trace = TRUE, crit = "coef")
(true.mean <- (1-phi) * lambda)
mean(y)
head(fitted(fit1))
fit1@misc$pstr0  # The estimate of phi

# Compare the ZIP with the positive Poisson distribution
pp <- vglm(y ~ 1, pospoisson, subset = y > 0, crit = "c")
coef(pp)
Coef(pp)
coef(fit1) - coef(pp)            # Same
head(fitted(fit1) - fitted(pp))  # Different

# Another example (Angers and Biswas, 2003) ---------------------
abdata <- data.frame(y = 0:7, w = c(182, 41, 12, 2, 2, 0, 0, 1))
abdata <- subset(abdata, w > 0)
yy <- with(abdata, rep(y, w))
fit3 <- vglm(yy ~ 1, yip88(n = length(yy)), subset = yy > 0)
fit3@misc$pstr0  # Estimate of phi (they get 0.5154 with SE 0.0707)
coef(fit3, matrix = TRUE)
Coef(fit3)  # Estimate of lambda (they get 0.6997 with SE 0.1520)
head(fitted(fit3))
mean(yy)  # Compare this with fitted(fit3)
}
\keyword{models}
\keyword{regression}

% 20140101; try to put into a data frame but it gives a numerical
% problem:
%# Another example (Angers and Biswas, 2003) ---------------------
%abdata <- data.frame(y = 0:7, w = c(182, 41, 12, 2, 2, 0, 0, 1))
%abdata <- subset(abdata, w > 0)
%abdata <- subset(abdata, y > 0)
%Abdata <- data.frame(yy = with(abdata, rep(y, w)))
%fit3 <- vglm(yy ~ 1, yip88(n = nrow(Abdata)), data = Abdata)
%fit3@misc$pstr0  # Estimate of phi (they get 0.5154 with SE 0.0707)
%coef(fit3, matrix = TRUE)
%Coef(fit3)  # Estimate of lambda (they get 0.6997 with SE 0.1520)
%head(fitted(fit3))
%with(Abdata, mean(yy))  # Compare this with fitted(fit3)
VGAM/man/confintvglm.Rd0000644000176200001440000001236113135276753014413 0ustar liggesusers\name{confintvglm}
%\name{confint}
% \alias{confint}
\alias{confintvglm}
\alias{confintrrvglm}
\alias{confintvgam}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Confidence Intervals for Parameters of VGLMs }
\description{
Computes confidence intervals (CIs) for one or more parameters in a
fitted model.
Currently the object must be a \code{"\link{vglm}"} object.
}
% confint(object, parm, level = 0.95, \dots)
\usage{
confintvglm(object, parm, level = 0.95,
            method = c("wald", "profile"), trace = NULL, \dots)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{object}{
A fitted model object.
}
\item{parm, level, \dots}{Same as \code{\link[stats]{confint}}.
}
\item{method}{Character. The default is the first method.
Abbreviations are allowed.
Currently \code{"profile"} is basically working;
it is likely to be more accurate, especially for small samples,
as it is based on a profile log-likelihood;
however, it is computationally intensive.
}
\item{trace}{
Logical. If \code{TRUE} then one can monitor the computation as it
progresses (because it is expensive).
The default is the original model's \code{trace} value
(see \code{\link{vglm.control}}).
Setting \code{FALSE} suppresses all intermediate output.
}
}
\details{
The default for this methods function is based on
\code{\link[stats]{confint.default}} and assumes asymptotic
normality.
In particular, the \code{\link[VGAM:coefvlm]{coef}} and \code{vcov}
methods functions are used for \code{\link[VGAM]{vglm-class}}
objects.

When \code{method = "profile"} the function
\code{\link{profilevglm}} is called to do the profiling.
The code is very heavily based on \code{\link[MASS]{profile.glm}}
which was originally written by D. M. Bates and W. N. Venables
(for S in 1996) and subsequently corrected by B. D. Ripley.
Sometimes the profiling method can give problems, for example,
\code{\link{cumulative}} requires the \eqn{M} linear predictors not
to intersect in the data cloud.
Such numerical problems are less common when
\code{method = "wald"};
however, it is well known that inference based on profile
likelihoods is generally more accurate than Wald, especially when
the sample size is small.
The deviance (\code{deviance(object)}) is used if possible, else
the difference \code{2 * (logLik(object) - ell)} is computed, where
\code{ell} are the values of the log-likelihood on a grid.

For Wald CIs and \code{\link[VGAM]{rrvglm-class}} objects, currently
an error message is produced because I haven't gotten around to
writing the methods function; it's not too hard, but I am too busy!
An interim measure is to coerce the object into a
\code{"\link{vglm}"} object, but then the confidence intervals will
tend to be too narrow because the estimated constraint matrices are
treated as known.

For Wald CIs and \code{\link[VGAM]{vgam-class}} objects, currently
an error message is produced because the theory is undeveloped.
}
\value{
Same as \code{\link[stats]{confint}}.
}
%\references{
%}
\author{
Thomas Yee adapted \code{\link[stats]{confint.lm}} to handle
\code{"vglm"} objects, for Wald-type confidence intervals.
Also, \code{\link[MASS]{profile.glm}} was originally written by
D. M. Bates and W. N. Venables (for S in 1996) and subsequently
corrected by B. D. Ripley.
This function effectively calls \code{confint.profile.glm()}
in \pkg{MASS}.
}
\note{
The order of the values of argument \code{method} may change in the
future without notice.
The functions \code{plot.profile.glm} and \code{pairs.profile.glm}
from \pkg{MASS} appear to work with output from this function.
}
%\section{Warning }{
%}
\seealso{
\code{\link{vcovvlm}},
\code{\link{summaryvglm}},
\code{\link{lrp.vglm}},
\code{\link[stats]{confint}},
\code{\link[MASS]{profile.glm}},
\code{plot.profile.glm},
\code{pairs.profile.glm}.
} \examples{ # Example 1: this is based on a glm example counts <- c(18,17,15,20,10,20,25,13,12) outcome <- gl(3, 1, 9); treatment <- gl(3, 3) glm.D93 <- glm(counts ~ outcome + treatment, family = poisson()) vglm.D93 <- vglm(counts ~ outcome + treatment, family = poissonff) confint(glm.D93) # needs MASS to be present on the system confint.default(glm.D93) # based on asymptotic normality confint(vglm.D93) confint(vglm.D93) - confint(glm.D93) # Should be all 0s confint(vglm.D93) - confint.default(glm.D93) # based on asympt. normality # Example 2: simulated negative binomial data with multiple responses ndata <- data.frame(x2 = runif(nn <- 100)) ndata <- transform(ndata, y1 = rnbinom(nn, mu = exp(3+x2), size = exp(1)), y2 = rnbinom(nn, mu = exp(2-x2), size = exp(0))) fit1 <- vglm(cbind(y1, y2) ~ x2, negbinomial, data = ndata, trace = TRUE) coef(fit1) coef(fit1, matrix = TRUE) confint(fit1) confint(fit1, "x2:1") # This might be improved to "x2" some day... \dontrun{ confint(fit1, method = "profile") # Computationally expensive confint(fit1, "x2:1", method = "profile", trace = FALSE) } fit2 <- rrvglm(y1 ~ x2, negbinomial(zero = NULL), data = ndata) confint(as(fit2, "vglm")) # Too narrow (SEs are biased downwards) } \keyword{models} \keyword{regression} VGAM/man/negbinomial.Rd0000644000176200001440000006172613135276753014362 0ustar liggesusers\name{negbinomial} \alias{negbinomial} \alias{polya} \alias{polyaR} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Negative Binomial Distribution Family Function } \description{ Maximum likelihood estimation of the two parameters of a negative binomial distribution. } \usage{ negbinomial(zero = "size", parallel = FALSE, deviance.arg = FALSE, type.fitted = c("mean", "quantiles"), percentiles = c(25, 50, 75), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, lmu = "loge", lsize = "loge", imethod = 1, imu = NULL, iprobs.y = NULL, gprobs.y = ppoints(6), isize = NULL, gsize.mux = exp(c(-30, -20, -15, -10, -6:3))) polya(zero = "size", type.fitted = c("mean", "prob"), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, lprob = "logit", lsize = "loge", imethod = 1, iprob = NULL, iprobs.y = NULL, gprobs.y = ppoints(6), isize = NULL, gsize.mux = exp(c(-30, -20, -15, -10, -6:3)), imunb = NULL) polyaR(zero = "size", type.fitted = c("mean", "prob"), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, lsize = "loge", lprob = "logit", imethod = 1, iprob = NULL, iprobs.y = NULL, gprobs.y = ppoints(6), isize = NULL, gsize.mux = exp(c(-30, -20, -15, -10, -6:3)), imunb = NULL) } % deviance.arg = FALSE, %- maybe also 'usage' for other objects documented here. \arguments{ \item{lmu, lsize, lprob}{ Link functions applied to the \eqn{\mu}{mu}, \eqn{k} and \eqn{p} parameters. See \code{\link{Links}} for more choices. Note that the \eqn{\mu}{mu}, \eqn{k} and \eqn{p} parameters are the \code{mu}, \code{size} and \code{prob} arguments of \code{\link[stats:NegBinomial]{rnbinom}} respectively. Common alternatives for \code{lsize} are \code{\link{negloge}} and \code{\link{reciprocal}}, and \code{\link{loglog}} (if \eqn{k > 1}). } \item{imu, imunb, isize, iprob}{ Optional initial values for the mean and \eqn{k} and \eqn{p}. For \eqn{k}, if failure to converge occurs then try different values (and/or use \code{imethod}). For a \eqn{S}-column response, \code{isize} can be of length \eqn{S}. 
A value \code{NULL} means an initial value for each response is
computed internally using a grid search based on \code{gsize.mux}.
The last argument is ignored if used within \code{\link{cqo}}; see
the \code{iKvector} argument of \code{\link{qrrvglm.control}}
instead.
In the future \code{isize} and \code{iprob} might be deprecated.
}
\item{nsimEIM}{
This argument is used for computing the diagonal element of the
\emph{expected information matrix} (EIM) corresponding to \eqn{k}
based on the \emph{simulated Fisher scoring} (SFS) algorithm.
See \code{\link{CommonVGAMffArguments}} for more information and
the notes below.
SFS is one of two algorithms for computing the EIM elements (so
that both algorithms may be used on a given data set).
SFS is faster than the exact method when \code{Qmax} is large.
}
\item{cutoff.prob}{
Fed into the \code{p} argument of
\code{\link[stats:NegBinomial]{qnbinom}} in order to obtain an
upper limit for the approximate support of the distribution, called
\code{Qmax}, say.
Similarly, the value \code{1-p} is fed into the \code{p} argument
of \code{\link[stats:NegBinomial]{qnbinom}} in order to obtain a
lower limit for the approximate support of the distribution, called
\code{Qmin}, say.
Hence the approximate support is \code{Qmin:Qmax}.
This argument should be numeric and close to 1 but never exactly 1.
It specifies how many terms of the infinite series for computing
the second diagonal element of the EIM are actually used.
The closer this argument is to 1, the more accurate the standard
errors of the regression coefficients will be.
If this argument is too small, convergence will take longer.
% The sum of the probabilites are added until they reach
% at least this value.
% (but no more than \code{Maxiter} terms allowed).
% Used in the finite series approximation.
% It is like specifying \code{p} in an imaginary function
% \code{qnegbin(p)}.
}
\item{max.chunk.MB, max.support}{
\code{max.support} is used to describe the eligibility of
individual observations to have their EIM computed by the
\emph{exact method}.
Here, we are concerned about computing the EIM wrt \eqn{k}.
The exact method algorithm operates separately on each response
variable, and it constructs a large matrix provided that the number
of columns is less than \code{max.support}.
If so, then the computations are done in chunks, so that no more
than about \code{max.chunk.MB} megabytes of memory is used at a
time (actually, it is proportional to this amount).
Regarding eligibility for this algorithm, each observation must
have the length of its approximate support (from the
\code{1-cutoff.prob} quantile up to the \code{cutoff.prob}
quantile) less than \code{max.support}.
If you have abundant memory then you might try setting
\code{max.chunk.MB = Inf}, but then the computations might take a
very long time.
Setting \code{max.chunk.MB = 0} or \code{max.support = 0} will
force the EIM to be computed using the SFS algorithm only (this
\emph{used to be} the default method for \emph{all} the
observations).
When the fitted values of the model are large and \eqn{k} is small,
the computation of the EIM will be costly with respect to time and
memory if the exact method is used.
Hence the argument \code{max.support} limits the cost in terms of
time.
For intercept-only models \code{max.support} is multiplied by a
number (such as 10) because only one inner product needs to be
computed.
Note: \code{max.support} is an upper bound and limits the number of
terms dictated by the \code{eps.trig} argument.
% Thus the number of columns of the matrix can be controlled by
% the argument \code{cutoff.prob}.
}
\item{mds.min}{
Numeric. Minimum value of the NBD mean divided by the \code{size}
parameter.
The closer this ratio is to 0, the closer the distribution is to a
Poisson.
Iterations will stop when an estimate of \eqn{k} is so large,
relative to the mean, that this ratio falls below this threshold
(this is treated as a boundary of the parameter space).
}
\item{eps.trig}{
Numeric. A small positive value used in the computation of the
EIMs.
It focusses on the denominator of the terms of a series.
Each term in the series (that is used to approximate an infinite
series) has a value greater than \code{size / sqrt(eps.trig)}, thus
very small terms are ignored.
Setting a smaller value will result in more accuracy, but it will
require greater computing time (when \eqn{k} is close to 0), and an
adjustment to \code{max.support} may be needed.
In particular, the quantity computed by special means is
\eqn{\psi'(k) - E[\psi'(Y+k)]}{trigamma(k) - E[trigamma(Y+k)]},
which is the difference between two \code{\link[base]{trigamma}}
functions.
It is part of the calculation of the EIM with respect to the
\code{size} parameter.
}
\item{gsize.mux}{
Similar to \code{gsigma} in \code{\link{CommonVGAMffArguments}}.
However, this grid is multiplied by the initial estimates of the
NBD mean parameter.
That is, it is on a relative scale rather than on an absolute
scale.
If the counts are very large in value then convergence failure
might occur; if so, then try a smaller value such as
\code{gsize.mux = exp(-40)}.
}
% \item{Maxiter}{
% Used in the finite series approximation.
% Integer. The maximum number of terms allowed when computing
% the second diagonal element of the EIM.
% In theory, the value involves an infinite series.
% If this argument is too small then the value may be inaccurate.
% }
\item{type.fitted, percentiles}{
See \code{\link{CommonVGAMffArguments}} for more information.
}
\item{deviance.arg}{
Logical.
If \code{TRUE}, the deviance is computed \emph{after} convergence.
It only works in the NB-2 model.
It is also necessary to set \code{criterion = "coefficients"} or
\code{half.step = FALSE} since one cannot use that criterion
properly for the minimization within the IRLS algorithm.
It should be set \code{TRUE} when used with \code{\link{cqo}} under
the fast algorithm.
% Pre-20131212:
% If \code{TRUE}, the deviance function is attached
% to the object. Under ordinary circumstances, it should be
% left alone because it really assumes the index parameter
% is at the maximum likelihood estimate. Consequently,
% one cannot use that criterion to minimize within the
% IRLS algorithm. It should be set \code{TRUE} only when
% used with \code{\link{cqo}} under the fast algorithm.
}
\item{imethod}{
An integer with value \code{1} or \code{2} etc. which specifies the
initialization method for the \eqn{\mu}{mu} parameter.
If failure to converge occurs try another value, and/or specify a
value for \code{iprobs.y} and/or \code{isize}.
}
\item{parallel}{
See \code{\link{CommonVGAMffArguments}} for more information.
Setting \code{parallel = TRUE} is useful in order to get something
similar to \code{\link{quasipoissonff}} or what is known as NB-1.
If \code{parallel = TRUE} then the parallelism constraint does not
apply to any intercept term.
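A worked NB-1 fit is given in Example 4 below.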
You should set \code{zero = NULL} too if \code{parallel = TRUE} to avoid a conflict. } \item{gprobs.y}{ A vector representing a grid; passed into the \code{probs} argument of \code{\link[stats:quantile]{quantile}} when \code{imethod = 1} to obtain an initial value for the mean of each response. Is overwritten by any value of \code{iprobs.y}. } \item{iprobs.y}{ Passed into the \code{probs} argument of \code{\link[stats:quantile]{quantile}} when \code{imethod = 1} to obtain an initial value for the mean of each response. Overwrites any value of \code{gprobs.y}. This argument might be deleted in the future. } % \item{ishrinkage}{ % How much shrinkage is used when initializing \eqn{\mu}{mu}. % The value must be between 0 and 1 inclusive, and % a value of 0 means the individual response values are used, % and a value of 1 means the median or mean is used. % This argument is used in conjunction with \code{imethod}. % If convergence failure occurs try setting this argument to 1. % } \item{zero}{ Can be an integer-valued vector, and if so, then it is usually assigned \eqn{-2} or \eqn{2}. Specifies which of the two linear/additive predictors are modelled as an intercept only. By default, the \eqn{k} parameter (after \code{lsize} is applied) is modelled as a single unknown number that is estimated. It can be modelled as a function of the explanatory variables by setting \code{zero = NULL}; this has been called a NB-H model by Hilbe (2011). A negative value means that the value is recycled, so setting \eqn{-2} means all \eqn{k} are intercept-only. See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The negative binomial distribution (NBD) can be motivated in several ways, e.g., as a Poisson distribution with a mean that is gamma distributed. There are several common parametrizations of the NBD. The one used by \code{negbinomial()} uses the mean \eqn{\mu}{mu} and an \emph{index} parameter \eqn{k}, both which are positive. Specifically, the density of a random variable \eqn{Y} is \deqn{f(y;\mu,k) = {y + k - 1 \choose y} \, \left( \frac{\mu}{\mu+k} \right)^y\, \left( \frac{k}{k+\mu} \right)^k }{% f(y;mu,k) = C_{y}^{y + k - 1} [mu/(mu+k)]^y [k/(k+mu)]^k} where \eqn{y=0,1,2,\ldots}, and \eqn{\mu > 0}{mu > 0} and \eqn{k > 0}. Note that the \emph{dispersion} parameter is \eqn{1/k}, so that as \eqn{k} approaches infinity the NBD approaches a Poisson distribution. The response has variance \eqn{Var(Y)=\mu+\mu^2/k}{Var(Y)=mu*(1+mu/k)}. When fitted, the \code{fitted.values} slot of the object contains the estimated value of the \eqn{\mu}{mu} parameter, i.e., of the mean \eqn{E(Y)}. It is common for some to use \eqn{\alpha=1/k}{alpha=1/k} as the ancillary or heterogeneity parameter; so common alternatives for \code{lsize} are \code{\link{negloge}} and \code{\link{reciprocal}}. For \code{polya} the density is \deqn{f(y;p,k) = {y + k - 1 \choose y} \, \left( 1 - p \right)^y\, p^k }{% f(y;p,k) = C_{y}^{y + k - 1} [1 - p]^y p^k} where \eqn{y=0,1,2,\ldots}, and \eqn{k > 0} and \eqn{0 < p < 1}{0 < p < 1}. Family function \code{polyaR()} is the same as \code{polya()} except the order of the two parameters are switched. The reason is that \code{polyaR()} tries to match with \code{\link[stats:NegBinomial]{rnbinom}} closely in terms of the argument order, etc. Should the probability parameter be of primary interest, probably, users will prefer using \code{polya()} rather than \code{polyaR()}. Possibly \code{polyaR()} will be decommissioned one day. 
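The two parameterizations above are linked by
\eqn{p = k/(k+\mu)}{p = k/(k+mu)},
which may be checked numerically, e.g., via this small sketch
(arbitrary values; not part of any fitting algorithm):
\preformatted{
# Both parameterizations give identical probabilities:
kay <- 2; munb <- 5; pp <- kay / (kay + munb)
all.equal(dnbinom(0:9, size = kay, mu = munb),
          dnbinom(0:9, size = kay, prob = pp))  # TRUE
}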
The NBD can be coerced into the classical GLM framework with one of
the parameters being of interest and the other treated as a
nuisance/scale parameter (this is implemented in the \pkg{MASS}
library).
The \pkg{VGAM} family function \code{negbinomial()} treats both
parameters on the same footing, and estimates them both by full
maximum likelihood estimation.
% SFS is employed as the default (see the \code{nsimEIM}
% argument).

The parameters \eqn{\mu}{mu} and \eqn{k} are independent (diagonal
EIM), and the confidence region for \eqn{k} is extremely skewed so
that its standard error is often of no practical use.
The parameter \eqn{1/k} has been used as a measure of aggregation.
For the NB-C the EIM is not diagonal.

These \pkg{VGAM} family functions handle \emph{multiple} responses,
so that a response matrix can be inputted.
The number of columns is the number of species, say, and setting
\code{zero = -2} means that \emph{all} species have a \eqn{k}
equalling a (different) intercept only.
}
\section{Warning}{
Poisson regression corresponds to \eqn{k} equalling infinity.
If the data is Poisson or close to Poisson, numerical problems may
occur.
Some corrective measures are taken, e.g., \eqn{k} is effectively
capped (relative to the mean) during estimation to some large value
and a warning is issued.
And setting \code{stepsize = 0.5} for half stepping is probably a
good idea too when the data is extreme.
% Possibly setting \code{crit = "coef"} is a good idea because
% the log-likelihood is often a \code{NaN} when the \code{size}
% value is very large.
% Note that \code{dnbinom(0, mu, size = Inf)} currently
% is a \code{NaN} (a bug),
% therefore if the data has some 0s then
% setting \code{crit = "coef"} will avoid the problem that
% the log-likelihood will be undefined during the last
% stages of estimation.
% Possibly choosing a log-log link may help in such cases,
% otherwise try \code{\link{poissonff}} or
% \code{\link{quasipoissonff}}. It is possible to fit a NBD
% that has a similar variance function as a quasi-Poisson; see
% the NB-1 example below.

The NBD is a strictly unimodal distribution.
Any data set that does not exhibit a mode (somewhere in the middle)
makes the estimation problem difficult.
Set \code{trace = TRUE} to monitor convergence.

These functions are fragile; the maximum likelihood estimate of the
index parameter is fraught with difficulty (see Lawless, 1987).
In general, \code{\link{quasipoissonff}} is more robust.
Other alternatives to \code{negbinomial} are to fit a NB-1 or RR-NB
(aka NB-P) model; see Yee (2014).
Also available are the NB-C, NB-H and NB-G.
Assigning values to the \code{isize} argument may lead to a local
solution, and smaller values are preferred over large values when
using this argument.

If one wants to force SFS to be used on all observations, then set
\code{max.support = 0} or \code{max.chunk.MB = 0}.
If one wants to force the exact method to be used for all
observations, then set \code{max.support = Inf}.
If the computer has \emph{much} memory, then trying
\code{max.chunk.MB = Inf} and \code{max.support = Inf} may provide
a small speed increase.
If SFS is used at all, then the \code{@weights} slot of the fitted
object will be a matrix; otherwise that slot will be a
\code{0 x 0} matrix.

Yet to do: write a family function which uses the method of moments
estimator for \eqn{k}.
}
\value{
An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}).
The object is used by modelling functions such as
\code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}.
}
\references{
Lawless, J. F. (1987)
Negative binomial and mixed Poisson regression.
\emph{The Canadian Journal of Statistics}
\bold{15}, 209--225.

Hilbe, J. M. (2011)
\emph{Negative Binomial Regression},
2nd Edition.
Cambridge: Cambridge University Press.

Bliss, C. and Fisher, R. A. (1953)
Fitting the negative binomial distribution to biological data.
\emph{Biometrics}
\bold{9}, 174--200.

Yee, T. W. (2014)
Reduced-rank vector generalized linear models with two linear
predictors.
\emph{Computational Statistics and Data Analysis},
\bold{71}, 889--902.
}
\author{ Thomas W. Yee,
and with a lot of help by Victor Miranda to get it going with
\code{\link{nbcanlink}} (NB-C).
}
\note{
% The \pkg{VGAM} package has a few other family functions for the
% negative binomial distribution. Currently, none of these others work
% very well.
These three functions implement two common parameterizations of the
negative binomial (NB).
Some people call the NB with integer \eqn{k} the \emph{Pascal}
distribution, whereas if \eqn{k} is real then this is the
\emph{Polya} distribution. I don't.
The one matching the details of
\code{\link[stats:NegBinomial]{rnbinom}} in terms of \eqn{p} and
\eqn{k} is \code{polya()}.

For \code{polya()} the code may fail when \eqn{p} is close to 0
or 1.
It is not yet compatible with \code{\link{cqo}} or
\code{\link{cao}}.

Suppose the response is called \code{ymat}.
For \code{negbinomial()} the diagonal element of the
\emph{expected information matrix} (EIM) for parameter \eqn{k}
involves an infinite series; consequently SFS (see \code{nsimEIM})
is used as the backup algorithm only.
SFS should be better if \code{max(ymat)} is large, e.g.,
\code{max(ymat) > 1000}, or if there are any outliers in
\code{ymat}.
The default algorithm involves a finite series approximation to the
support \code{0:Inf};
the arguments \code{max.support}, \code{max.chunk.MB} and
\code{cutoff.prob} are pertinent.
% \code{slope.mu},
% the arguments \code{Maxiter} and
% can be invoked by setting \code{nsimEIM = NULL}.

Regardless of the algorithm used, convergence problems may occur,
especially when the response has large outliers or is large in
magnitude.
If convergence failure occurs, try using arguments (in recommended
decreasing order)
\code{max.support},
\code{nsimEIM},
\code{cutoff.prob},
\code{iprobs.y},
\code{imethod},
\code{isize},
\code{zero},
\code{max.chunk.MB}.

The function \code{negbinomial} can be used by the fast algorithm
in \code{\link{cqo}}; however, setting
\code{eq.tolerances = TRUE} and \code{I.tolerances = FALSE} is
recommended.
% For \code{\link{cqo}} and \code{\link{cao}}, taking the square-root
% of the response means (approximately) a \code{\link{poissonff}} family
% may be used on the transformed data.
% If the negative binomial family function \code{\link{negbinomial}}
% is used for \code{cqo} then set \code{negbinomial(deviance = TRUE)}
% is necessary. This means to minimize the deviance, which the fast
% algorithm can handle.

In the first example below (Bliss and Fisher, 1953), from each of 6
McIntosh apple trees in an orchard that had been sprayed, 25 leaves
were randomly selected.
On each of the leaves, the number of adult female European red
mites was counted.

There are two special uses of \code{negbinomial} for handling count
data.
Firstly, when used by \code{\link{rrvglm}} this results in a
continuum of models between, and inclusive of, quasi-Poisson and
negative binomial regression.
This is known as a reduced-rank negative binomial model
\emph{(RR-NB)}.
It fits a negative binomial log-linear regression with variance
function
\eqn{Var(Y)=\mu+\delta_1 \mu^{\delta_2}}{Var(Y) = mu + delta1 * mu^delta2}
where \eqn{\delta_1}{delta1} and \eqn{\delta_2}{delta2} are
parameters to be estimated by MLE.
Confidence intervals are available for \eqn{\delta_2}{delta2};
therefore it can be decided whether the data are closer to
quasi-Poisson or to negative binomial.

Secondly, the use of \code{negbinomial} with
\code{parallel = TRUE} inside \code{\link{vglm}} can result in a
model similar to \code{\link{quasipoissonff}}.
This is named the \emph{NB-1} model.
The dispersion parameter is estimated by MLE whereas
\code{\link[stats:glm]{glm}} uses the method of moments.
In particular, it fits a negative binomial log-linear regression
with variance function
\eqn{Var(Y) = \phi_0 \mu}{Var(Y) = phi0 * mu}
where \eqn{\phi_0}{phi0} is a parameter to be estimated by MLE.
Confidence intervals are available for \eqn{\phi_0}{phi0}.
}
\seealso{
\code{\link{quasipoissonff}},
\code{\link{poissonff}},
\code{\link{zinegbinomial}},
\code{\link{negbinomial.size}} (e.g., NB-G),
\code{\link{nbcanlink}} (NB-C),
\code{\link{posnegbinomial}},
\code{\link{inv.binomial}},
\code{\link[stats:NegBinomial]{rnbinom}},
\code{\link{nbolf}},
\code{\link{rrvglm}},
\code{\link{cao}},
\code{\link{cqo}},
\code{\link{CommonVGAMffArguments}},
\code{\link{simulate.vlm}},
\code{\link[stats:ppoints]{ppoints}},
\code{\link[stats:NegBinomial]{qnbinom}}.
% \code{\link[MASS]{rnegbin}}.
}
\examples{
# Example 1: apple tree data (Bliss and Fisher, 1953)
appletree <- data.frame(y = 0:7, w = c(70, 38, 17, 10, 9, 3, 2, 1))
fit <- vglm(y ~ 1, negbinomial(deviance = TRUE), data = appletree,
            weights = w, crit = "coef")  # Obtain the deviance
fit <- vglm(y ~ 1, negbinomial(deviance = TRUE), data = appletree,
            weights = w, half.step = FALSE)  # Alternative method
summary(fit)
coef(fit, matrix = TRUE)
Coef(fit)  # For intercept-only models
deviance(fit)  # NB2 only; needs 'crit = "coef"' & 'deviance = TRUE' above

# Example 2: simulated data with multiple responses
\dontrun{
ndata <- data.frame(x2 = runif(nn <- 200))
ndata <- transform(ndata,
                   y1 = rnbinom(nn, mu = exp(3+x2), size = exp(1)),
                   y2 = rnbinom(nn, mu = exp(2-x2), size = exp(0)))
fit1 <- vglm(cbind(y1, y2) ~ x2, negbinomial, data = ndata, trace = TRUE)
coef(fit1, matrix = TRUE)
}

# Example 3: large counts imply SFS is used
\dontrun{
ndata <- transform(ndata, y3 = rnbinom(nn, mu = exp(10+x2), size = exp(1)))
with(ndata, range(y3))  # Large counts
fit2 <- vglm(y3 ~ x2, negbinomial, data = ndata, trace = TRUE)
coef(fit2, matrix = TRUE)
head(fit2@weights)  # Non-empty; SFS was used
}

# Example 4: an NB-1 to estimate a negative binomial with Var(Y) = phi0 * mu
nn <- 200  # Number of observations
phi0 <- 10  # Specify this; should be greater than unity
delta0 <- 1 / (phi0 - 1)
mydata <- data.frame(x2 = runif(nn), x3 = runif(nn))
mydata <- transform(mydata, mu = exp(2 + 3 * x2 + 0 * x3))
mydata <- transform(mydata, y3 = rnbinom(nn, mu = mu, size = delta0 * mu))
\dontrun{
plot(y3 ~ x2, data = mydata, pch = "+", col = "blue",
     main = paste("Var(Y) = ", phi0, " * mu", sep = ""), las = 1)
}
nb1 <- vglm(y3 ~ x2 + x3, negbinomial(parallel = TRUE, zero = NULL),
            data = mydata, trace = TRUE)
# Extracting out some quantities:
cnb1 <- coef(nb1, matrix = TRUE)
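# Under NB-1, size = delta0 * mu, so the 'loge(size)' and 'loge(mu)'
# linear predictors differ only by log(delta0); hence the difference
# of the two intercepts estimates log(delta0):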
mydiff <- (cnb1["(Intercept)", "loge(size)"] - cnb1["(Intercept)", "loge(mu)"]) delta0.hat <- exp(mydiff) (phi.hat <- 1 + 1 / delta0.hat) # MLE of phi summary(nb1) # Obtain a 95 percent confidence interval for phi0: myvec <- rbind(-1, 1, 0, 0) (se.mydiff <- sqrt(t(myvec) \%*\% vcov(nb1) \%*\% myvec)) ci.mydiff <- mydiff + c(-1.96, 1.96) * se.mydiff ci.delta0 <- ci.exp.mydiff <- exp(ci.mydiff) (ci.phi0 <- 1 + 1 / rev(ci.delta0)) # The 95 percent conf. interval for phi0 Confint.nb1(nb1) # Quick way to get it summary(glm(y3 ~ x2 + x3, quasipoisson, mydata))$disper # cf. moment estimator } \keyword{models} \keyword{regression} %lmu = "loge", lsize = "loge", % imu = NULL, isize = NULL, % nsimEIM = 250, cutoff.prob = 0.999, % max.support = 2000, max.chunk.MB = 30, % deviance.arg = FALSE, imethod = 1, % probs.y = 0.75, ishrinkage = 0.95, % gsize = exp((-4):4), % parallel = FALSE, ishrinkage = 0.95, zero = "size") %polya(lprob = "logit", lsize = "loge", % iprob = NULL, isize = NULL, probs.y = 0.75, nsimEIM = 100, % imethod = 1, ishrinkage = 0.95, zero = "size") %polyaR(lsize = "loge", lprob = "logit", % isize = NULL, iprob = NULL, probs.y = 0.75, nsimEIM = 100, % imethod = 1, ishrinkage = 0.95, zero = "size") VGAM/man/prplot.Rd0000644000176200001440000000465413135276753013413 0ustar liggesusers\name{prplot} \alias{prplot} \alias{prplot.control} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Probability Plots for Categorical Data Analysis } \description{ Plots the fitted probabilities for some very simplified special cases of categorical data analysis models. } \usage{ prplot(object, control = prplot.control(...), ...) prplot.control(xlab = NULL, ylab = "Probability", main = NULL, xlim = NULL, ylim = NULL, lty = par()$lty, col = par()$col, rcol = par()$col, lwd = par()$lwd, rlwd = par()$lwd, las = par()$las, rug.arg = FALSE, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ Currently only an \code{\link{cumulative}} object. This includes a \code{\link{propodds}} object since that \pkg{VGAM} family function is a special case of \code{\link{cumulative}}. } \item{control}{ List containing some basic graphical parameters. } \item{xlab, ylab, main, xlim, ylim, lty }{ See \code{\link[graphics]{par}} and \code{...} below. } \item{col, rcol, lwd, rlwd, las, rug.arg}{ See \code{\link[graphics]{par}} and \code{...} below. Arguments starting with \code{r} refer to the rug. Argument \code{rug.arg} is logical: add a rug for the distinct values of the explanatory variable? } \item{\dots}{ Arguments such as \code{xlab} which are fed into \code{prplot.control()}. Only a small selection of graphical arguments from \code{\link[graphics]{par}} are offered. } } \details{ For models involving one term in the RHS of the formula this function plots the fitted probabilities against the single explanatory variable. } \value{ The object is returned invisibly with the \code{preplot} slot assigned. This is obtained by a call to \code{plotvgam()}. } %\references{ %% ~put references to the literature/web site here ~ %} %\author{ %T. W. Yee %} \note{ This function is rudimentary. } %% ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link{cumulative}}. 
} \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo) M <- npred(fit) # Or fit@misc$M \dontrun{ prplot(fit) prplot(fit, lty = 1:M, col = (1:M)+2, rug = TRUE, las = 1, ylim = c(0, 1), rlwd = 2) } } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{graphs} \keyword{models} \keyword{regression} VGAM/man/put.smart.Rd0000644000176200001440000000241513135276753014021 0ustar liggesusers\name{put.smart} \alias{put.smart} \title{ Adds a List to the End of the List ``.smart.prediction'' } \description{ Adds a list to the end of the list \code{.smart.prediction} in \code{smartpredenv}. } \usage{ put.smart(smart) } \arguments{ \item{smart}{ a list containing parameters needed later for smart prediction. } } \value{ Nothing is returned. } \section{Side Effects}{ The variable \code{.smart.prediction.counter} in \code{smartpredenv} is incremented beforehand, and \code{.smart.prediction[[.smart.prediction.counter]]} is assigned the list \code{smart}. If the list \code{.smart.prediction} in \code{smartpredenv} is not long enough to hold \code{smart}, then it is made larger, and the variable \code{.max.smart} in \code{smartpredenv} is adjusted accordingly. } \details{ \code{put.smart} is used in \code{"write"} mode within a smart function. It saves parameters at the time of model fitting, which are later used for prediction. The function \code{put.smart} is the opposite of \code{\link{get.smart}}, and both deal with the same contents. } \seealso{ \code{\link{get.smart}}. } \examples{ print(sm.min1) } %\keyword{smart} \keyword{models} \keyword{regression} \keyword{programming} % Converted by Sd2Rd version 1.10. VGAM/man/riceff.Rd0000644000176200001440000000555113135276753013326 0ustar liggesusers\name{riceff} \alias{riceff} %- Also NEED an '\alias' for EACH other topic documented here. \title{Rice Distribution Family Function} \description{ Estimates the two parameters of a Rice distribution by maximum likelihood estimation. } \usage{ riceff(lsigma = "loge", lvee = "loge", isigma = NULL, ivee = NULL, nsimEIM = 100, zero = NULL, nowarning = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{nowarning}{ Logical. Suppress a warning? Ignored for \pkg{VGAM} 0.9-7 and higher. } \item{lvee, lsigma}{ Link functions for the \eqn{v} and \eqn{\sigma}{sigma} parameters. See \code{\link{Links}} for more choices and for general information. } \item{ivee, isigma}{ Optional initial values for the parameters. If convergence failure occurs (this \pkg{VGAM} family function seems to require good initial values) try using these arguments. See \code{\link{CommonVGAMffArguments}} for more information. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The Rician distribution has density function \deqn{f(y;v,\sigma) = \frac{ y }{\sigma^2} \, \exp(-(y^2+v^2) / (2\sigma^2)) \, I_0(y v / \sigma^2) }{% f(y;v,sigma) = (y/sigma^2) * exp(-(y^2+v^2) / (2*sigma^2)) * I_0(y*v/sigma^2)} where \eqn{y > 0}, \eqn{v > 0}, \eqn{\sigma > 0} and \eqn{I_0} is the modified Bessel function of the first kind with order zero. When \eqn{v = 0} the Rice distribution reduces to a Rayleigh distribution. The mean is \eqn{\sigma \sqrt{\pi/2} \exp(z/2) ((1-z) I_0(-z/2)-z I_1(-z/2))}{sigma*sqrt(pi/2) * exp(z/2)*((1-z) * I_0(-z/2)-z*I_1(-z/2))} (returned as the fitted values) where \eqn{z=-v^2/(2 \sigma^2)}{z=-v^2/(2*sigma^2)}. 
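This mean formula may be checked numerically against simulated
values, e.g., via this small sketch (arbitrary parameter values):
\preformatted{
sigma <- 1.5; vee <- 2; zz <- -vee^2 / (2 * sigma^2)
sigma * sqrt(pi/2) * exp(zz/2) * ((1 - zz) * besselI(-zz/2, 0) -
  zz * besselI(-zz/2, 1))            # Theoretical mean
mean(rrice(10^5, sigma, vee = vee))  # Approximately the same
}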
Simulated Fisher scoring is implemented. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Rice, S. O. (1945) Mathematical Analysis of Random Noise. \emph{Bell System Technical Journal}, \bold{24}, 46--156. } \author{ T. W. Yee } \note{ Convergence problems may occur for data where \eqn{v=0}; if so, use \code{\link{rayleigh}} or possibly use an \code{\link{identity}} link. When \eqn{v} is large (greater than 3, say) then the mean is approximately \eqn{v} and the standard deviation is approximately \eqn{\sigma}{sigma}. } \seealso{ \code{\link{drice}}, \code{\link{rayleigh}}, \code{\link[base:Bessel]{besselI}}, \code{\link{simulate.vlm}}. } \examples{ \dontrun{ sigma <- exp(1); vee <- exp(2) rdata <- data.frame(y = rrice(n <- 1000, sigma, vee = vee)) fit <- vglm(y ~ 1, riceff, data = rdata, trace = TRUE, crit = "coef") c(with(rdata, mean(y)), fitted(fit)[1]) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } } \keyword{models} \keyword{regression} VGAM/man/oilog.Rd0000644000176200001440000000447113135276753013201 0ustar liggesusers\name{oilog} \alias{oilog} %- Also NEED an '\alias' for EACH other topic documented here. \title{ One-inflated Logarithmic Distribution Family Function } \description{ Fits a 1-inflated logarithmic distribution. } \usage{ oilog(lpstr1 = "logit", lshape = "logit", type.fitted = c("mean", "shape", "pobs1", "pstr1", "onempstr1"), ishape = NULL, gpstr1 = ppoints(8), gshape = ppoints(8), zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lpstr1, lshape}{ Link functions. For \code{lpstr1}: the same idea as \code{\link{zipoisson}} except it applies to a structural 1. } \item{gpstr1, gshape, ishape}{ For initial values. See \code{\link{CommonVGAMffArguments}} for information. } \item{type.fitted, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The 1-inflated logarithmic distribution is a mixture distribution of the logarithmic distribution with some probability of obtaining a (structural) 1. Thus there are two sources for obtaining the value 1. This distribution is written here in a way that retains a similar notation to the one-inflated positive-Poisson, i.e., the probability \eqn{P[Y=1]} involves another parameter \eqn{\phi}{phi}. See \code{\link{oipospoisson}}. This family function can handle multiple responses. } %\section{Warning }{ % Under- or over-flow may occur if the data is ill-conditioned. % Lots of data is needed to estimate the parameters accurately. % Usually, probably the \code{shape} parameter is best modelled as % intercept-only. %} \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } %\references{ %} \author{ Thomas W. Yee } %\note{ %} \seealso{ \code{\link{Oilog}}, \code{\link{logff}}, \code{\link{Oizeta}}. 
} \examples{ \dontrun{ odata <- data.frame(x2 = runif(nn <- 1000)) # Artificial data odata <- transform(odata, pstr1 = logit(-1 + x2, inverse = TRUE), shape = 0.5) odata <- transform(odata, y1 = roilog(nn, shape, pstr1 = pstr1)) with(odata, table(y1)) fit1 <- vglm(y1 ~ x2, oilog(zero = "shape"), data = odata, trace = TRUE) coef(fit1, matrix = TRUE) } } \keyword{models} \keyword{regression} VGAM/man/log1mexp.Rd0000644000176200001440000000315613135276753013623 0ustar liggesusers\name{log1mexp} \alias{log1mexp} \alias{log1pexp} \title{ Logarithms with a Unit Offset and Exponential Term } \description{ Computes \code{log(1 + exp(x))} and \code{log(1 - exp(-x))} accurately. } \usage{ log1mexp(x) log1pexp(x) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ A vector of reals (numeric). Complex numbers are not allowed since \code{\link[base]{expm1}} and \code{\link[base]{log1p}} do not handle these. } } \details{ %% ~~ If necessary, more details than the description above ~~ Computes \code{log(1 + exp(x))} and \code{log(1 - exp(-x))} accurately. An adjustment is made when \eqn{x} is far from 0 in value, so that accuracy is retained. } \value{ \code{log1mexp(x)} gives the value of \eqn{\log(1-\exp(-x))}{log(1-exp(-x))}. \code{log1pexp(x)} gives the value of \eqn{\log(1+\exp(x))}{log(1+exp(x))}. } \references{ Maechler, Martin (2012). Accurately Computing log(1-exp(-|a|)). Accessed from the \pkg{Rmpfr} package. } \author{ This is a direct translation of the function in Martin Maechler's (2012) paper by Xiangjie Xue and T. W. Yee. } \note{ If \code{NA} or \code{NaN} is present in the input, the corresponding output will be \code{NA}. } %% ~Make other sections like Warning with \section{Warning }{....} ~ \seealso{ \code{\link[base]{log1p}}, \code{\link[base]{expm1}}, \code{\link[base]{exp}}, \code{\link[base]{log}} } \examples{ x <- c(10, 50, 100, 200, 400, 500, 800, 1000, 1e4, 1e5, 1e20, Inf, NA) log1pexp(x) log(1 + exp(x)) # Naive; suffers from overflow log1mexp(x) log(1 - exp(-x)) y <- -x log1pexp(y) log(1 + exp(y)) # Naive; suffers from inaccuracy } VGAM/man/undocumented-methods.Rd0000644000176200001440000003310013135276753016212 0ustar liggesusers\name{undocumented-methods} \docType{methods} %\alias{ccoef,ANY-method} %\alias{ccoef-method} % % % 201707 \alias{lrp,vglm-method} % 201704 \alias{hdeff,vglm-method} % 201607, 201608: \alias{psint,pvgam-method} \alias{summary,pvgam-method} \alias{show,summary.pvgam-method} \alias{df.residual,pvgam-method} \alias{endf,ANY-method} \alias{endf,pvgam-method} \alias{endf,summary.pvgam-method} \alias{vcov,pvgam-method} \alias{show,pvgam,ANY-method} \alias{show,pvgam-method} \alias{model.matrix,pvgam-method} % 201604: \alias{plot,pvgam,ANY-method} % 201602: \alias{predictvglmS4VGAM,ANY,binom2.or-method} % 201601: \alias{showvglmS4VGAM,ANY,acat-method} \alias{showvgamS4VGAM,ANY,acat-method} \alias{showvglmS4VGAM,ANY,multinomial-method} \alias{showvgamS4VGAM,ANY,multinomial-method} % %\alias{coef,vgam-method} %\alias{coefficients,vgam-method} % 201512: \alias{summaryvglmS4VGAM,ANY,binom2.or-method} \alias{showsummaryvglmS4VGAM,ANY,binom2.or-method} % \alias{summaryvglmS4VGAM,ANY,posbernoulli.tb-method} \alias{showsummaryvglmS4VGAM,ANY,posbernoulli.tb-method} % \alias{showsummaryvglmS4VGAM,ANY,posbernoulli.b-method} \alias{showsummaryvglmS4VGAM,ANY,posbernoulli.t-method} % \alias{summaryvglmS4VGAM,ANY,VGAMcategorical-method} \alias{summaryvglmS4VGAM,ANY,cumulative-method} \alias{summaryvglmS4VGAM,ANY,multinomial-method} %
\alias{showsummaryvglmS4VGAM,ANY,VGAMcategorical-method} \alias{showsummaryvglmS4VGAM,ANY,cumulative-method} \alias{showsummaryvglmS4VGAM,ANY,multinomial-method} % \alias{margeffS4VGAM,ANY,ANY,VGAMcategorical-method} \alias{margeffS4VGAM,ANY,ANY,VGAMordinal-method} \alias{margeffS4VGAM,ANY,ANY,acat-method} \alias{margeffS4VGAM,ANY,ANY,cratio-method} \alias{margeffS4VGAM,ANY,ANY,sratio-method} \alias{margeffS4VGAM,ANY,ANY,cumulative-method} \alias{margeffS4VGAM,ANY,ANY,multinomial-method} % %\alias{margeffS4VGAM,ANY,VGAMcategorical-method} %\alias{margeffS4VGAM,ANY,VGAMordinal-method} %\alias{margeffS4VGAM,ANY,acat-method} %\alias{margeffS4VGAM,ANY,cratio-method} %\alias{margeffS4VGAM,ANY,sratio-method} %\alias{margeffS4VGAM,ANY,cumulative-method} %\alias{margeffS4VGAM,ANY,multinomial-method} % % 201509: \alias{term.names,ANY-method} \alias{term.names,vlm-method} \alias{responseName,ANY-method} \alias{responseName,vlm-method} \alias{has.intercept,ANY-method} \alias{has.intercept,vlm-method} % 201508, for R 3.2.2: \alias{confint,ANY-method} \alias{confint,vglm-method} \alias{confint,vgam-method} \alias{confint,rrvglm-method} % % 201503, for R 3.1.3: \alias{is.buggy,ANY-method} \alias{is.buggy,vlm-method} \alias{familyname,ANY-method} \alias{familyname,vlm-method} \alias{familyname,vglmff-method} % % 201412 \alias{nparam,ANY-method} \alias{nparam,vlm-method} \alias{nparam,qrrvglm-method} \alias{nparam,rrvgam-method} \alias{nparam,vgam-method} \alias{nparam,vglm-method} \alias{nparam,rrvglm-method} \alias{linkfun,ANY-method} \alias{linkfun,vglm-method} % % % 201407 \alias{concoef,ANY-method} \alias{concoef,rrvgam-method} \alias{concoef,Coef.rrvgam-method} % % % 201406 \alias{QR.R,ANY-method} \alias{QR.R,vglm-method} \alias{QR.Q,ANY-method} \alias{QR.Q,vglm-method} % % % 201312 \alias{simulate,ANY-method} \alias{simulate,vlm-method} % % 20131104 \alias{family.name,ANY-method} \alias{family.name,vlm-method} \alias{family.name,vglmff-method} % 20130903 \alias{BIC,ANY-method} \alias{BIC,vlm-method} \alias{BIC,vglm-method} \alias{BIC,vgam-method} \alias{BIC,rrvglm-method} \alias{BIC,qrrvglm-method} \alias{BIC,rrvgam-method} % % 20121105 \alias{Rank,qrrvglm-method} \alias{Rank,rrvglm-method} \alias{Rank,rrvgam-method} % 20120821 \alias{model.matrix,vsmooth.spline-method} % % 20120511 \alias{is.parallel,matrix-method} \alias{is.parallel,vglm-method} \alias{is.parallel,ANY-method} \alias{is.zero,matrix-method} \alias{is.zero,vglm-method} \alias{is.zero,ANY-method} % % % 20120215 %\alias{print,vglmff-method} \alias{show,vglmff-method} % % % % 20120112 \alias{AIC,ANY-method} \alias{AICc,ANY-method} \alias{coef,ANY-method} \alias{logLik,ANY-method} \alias{plot,ANY-method} \alias{vcov,ANY-method} \alias{plot,rrvgam,ANY-method} \alias{plot,qrrvglm,ANY-method} \alias{plot,rcim,ANY-method} \alias{plot,rcim0,ANY-method} %\alias{plot,uqo,ANY-method} \alias{plot,vgam,ANY-method} \alias{plot,vglm,ANY-method} \alias{plot,vlm,ANY-method} \alias{plot,vsmooth.spline,ANY-method} % % % % % \alias{AIC,vlm-method} \alias{AIC,vglm-method} \alias{AIC,vgam-method} \alias{AIC,rrvglm-method} \alias{AIC,qrrvglm-method} \alias{AIC,rrvgam-method} \alias{AICc,vlm-method} \alias{AICc,vglm-method} %\alias{AICc,vgam-method} %\alias{AICc,rrvglm-method} %\alias{AICc,qrrvglm-method} \alias{attrassign,lm-method} \alias{calibrate,qrrvglm-method} \alias{calibrate,rrvgam-method} %\alias{calibrate,uqo-method} \alias{cdf,vglm-method} \alias{cdf,vgam-method} \alias{coefficients,rrvgam-method} \alias{coefficients,vlm-method} 
\alias{coefficients,vglm-method} \alias{coefficients,qrrvglm-method} %\alias{coefficients,uqo-method} \alias{coefficients,vsmooth.spline-method} \alias{coefficients,vsmooth.spline.fit-method} \alias{coefficients,summary.vglm-method} \alias{coefficients,summary.rrvglm-method} \alias{Coefficients,vlm-method} \alias{coef,rrvgam-method} \alias{coef,vlm-method} \alias{coef,vglm-method} \alias{coef,qrrvglm-method} %\alias{coef,uqo-method} \alias{coef,vsmooth.spline-method} \alias{coef,vsmooth.spline.fit-method} \alias{coef,summary.vglm-method} \alias{coef,summary.rrvglm-method} \alias{Coef,rrvgam-method} \alias{Coef,vlm-method} \alias{Coef,qrrvglm-method} \alias{Coef,rrvglm-method} %\alias{Coef,uqo-method} \alias{constraints,vlm-method} \alias{deplot,vglm-method} \alias{deplot,vgam-method} % \alias{depvar,ANY-method} \alias{depvar,rrvgam-method} \alias{depvar,qrrvglm-method} \alias{depvar,rcim-method} \alias{depvar,rrvglm-method} \alias{depvar,vlm-method} \alias{depvar,vsmooth.spline-method} % \alias{deviance,rrvgam-method} \alias{deviance,qrrvglm-method} \alias{deviance,vlm-method} %\alias{deviance,vglm-method} %\alias{deviance,uqo-method} \alias{df.residual,vlm-method} \alias{effects,vlm-method} \alias{fitted.values,qrrvglm-method} \alias{fitted.values,vlm-method} \alias{fitted.values,vglm-method} %\alias{fitted.values,uqo-method} \alias{fitted.values,vsmooth.spline-method} \alias{fitted,qrrvglm-method} \alias{fitted,vlm-method} \alias{fitted,vglm-method} %\alias{fitted,uqo-method} \alias{fitted,vsmooth.spline-method} % % %\alias{case.names,ANY-method} \alias{case.names,vlm-method} \alias{case.names,vgam-method} \alias{case.names,vglm-method} \alias{case.names,rrvglm-method} \alias{case.names,qrrvglm-method} \alias{case.names,grc-method} % %\alias{variable.names,ANY-method} \alias{variable.names,vlm-method} \alias{variable.names,vgam-method} \alias{variable.names,vglm-method} \alias{variable.names,rrvglm-method} \alias{variable.names,qrrvglm-method} \alias{variable.names,grc-method} % % %\alias{formula,ANY-method} \alias{formula,vlm-method} \alias{formula,vgam-method} \alias{formula,vglm-method} \alias{formula,rrvglm-method} \alias{formula,qrrvglm-method} \alias{formula,grc-method} %\alias{formula,uqo-method} % \alias{formula,vsmooth.spline-method} % % % \alias{hatvalues,ANY-method} \alias{hatvalues,vlm-method} \alias{hatvalues,vglm-method} \alias{hatvalues,rrvgam-method} \alias{hatvalues,qrrvglm-method} \alias{hatvalues,rcim-method} \alias{hatvalues,rrvglm-method} % % \alias{hatplot,ANY-method} \alias{hatplot,matrix-method} \alias{hatplot,vlm-method} \alias{hatplot,vglm-method} \alias{hatplot,rrvgam-method} \alias{hatplot,qrrvglm-method} \alias{hatplot,rcim-method} \alias{hatplot,rrvglm-method} % % \alias{dfbeta,ANY-method} \alias{dfbeta,matrix-method} \alias{dfbeta,vlm-method} \alias{dfbeta,vglm-method} \alias{dfbeta,rrvgam-method} \alias{dfbeta,qrrvglm-method} \alias{dfbeta,rcim-method} \alias{dfbeta,rrvglm-method} % % % \alias{guplot,numeric-method} \alias{guplot,vlm-method} %\alias{model.frame,ANY-method} \alias{model.frame,vlm-method} %\alias{plot,rcim0,ANY-method} %\alias{plot,rcim,ANY-method} %\alias{plot,rrvgam,ANY-method} %\alias{plot,vlm,ANY-method} %\alias{plot,vglm,ANY-method} %\alias{plot,vgam,ANY-method} %\alias{plot,qrrvglm,ANY-method} %\alias{plot,uqo,ANY-method} %\alias{plot,vsmooth.spline,ANY-method} \alias{predictors,vglm-method} \alias{rlplot,vglm-method} \alias{terms,vlm-method} %\alias{is.bell,uqo-method} \alias{is.bell,qrrvglm-method} \alias{is.bell,rrvglm-method} 
\alias{is.bell,vlm-method} \alias{is.bell,rrvgam-method} \alias{is.bell,Coef.qrrvglm-method} \alias{logLik,vlm-method} \alias{logLik,summary.vglm-method} \alias{logLik,vglm-method} \alias{logLik,vgam-method} \alias{logLik,qrrvglm-method} \alias{logLik,rrvgam-method} % \alias{lvplot,rrvgam-method} \alias{lvplot,qrrvglm-method} \alias{lvplot,rrvglm-method} %\alias{lvplot,uqo-method} % \alias{lv,rrvglm-method} \alias{lv,qrrvglm-method} \alias{lv,rrvgam-method} \alias{lv,Coef.rrvglm-method} \alias{lv,Coef.qrrvglm-method} \alias{lv,Coef.rrvgam-method} % \alias{lv,uqo-method} defunct %\alias{latvar,uqo-method} \alias{latvar,rrvgam-method} \alias{latvar,Coef.qrrvglm-method} \alias{latvar,Coef.rrvglm-method} \alias{latvar,rrvglm-method} \alias{latvar,qrrvglm-method} % \alias{Max,qrrvglm-method} \alias{Max,Coef.qrrvglm-method} %\alias{Max,uqo-method} \alias{Max,rrvgam-method} \alias{meplot,numeric-method} \alias{meplot,vlm-method} %\alias{model.matrix,ANY-method} \alias{model.matrix,qrrvglm-method} \alias{model.matrix,vlm-method} \alias{model.matrix,vgam-method} \alias{nobs,ANY-method} \alias{nobs,vlm-method} \alias{npred,ANY-method} \alias{npred,vlm-method} \alias{npred,rrvgam-method} \alias{npred,qrrvglm-method} \alias{npred,rcim-method} \alias{npred,rrvglm-method} \alias{nvar,ANY-method} \alias{nvar,vlm-method} \alias{nvar,vgam-method} \alias{nvar,rrvglm-method} \alias{nvar,qrrvglm-method} \alias{nvar,rrvgam-method} \alias{nvar,vlm-method} \alias{nvar,rcim-method} \alias{Opt,qrrvglm-method} \alias{Opt,Coef.qrrvglm-method} %\alias{Opt,uqo-method} \alias{Opt,rrvgam-method} \alias{persp,rrvgam-method} \alias{persp,qrrvglm-method} %\alias{persp,uqo-method} \alias{predict,rrvgam-method} \alias{predict,qrrvglm-method} \alias{predict,vgam-method} \alias{predict,vglm-method} \alias{predict,rrvglm-method} \alias{predict,vlm-method} %\alias{predict,uqo-method} \alias{predict,vsmooth.spline-method} \alias{predict,vsmooth.spline.fit-method} % % % Added 20090505: %\alias{print,ANY-method} % % % Added 20111224: \alias{lrtest,ANY-method} \alias{lrtest,vglm-method} %\alias{waldtest,ANY-method} \alias{print,VGAManova-method} \alias{show,VGAManova-method} % % % \alias{print,Coef.rrvgam-method} \alias{print,summary.rrvgam-method} \alias{print,qrrvglm-method} \alias{print,Coef.qrrvglm-method} \alias{print,rrvglm-method} % 20090505 \alias{print,summary.qrrvglm-method} \alias{print,Coef.rrvglm-method} \alias{print,vlm-method} \alias{print,vglm-method} \alias{print,vgam-method} \alias{print,summary.rrvglm-method} \alias{print,summary.vgam-method} \alias{print,summary.vglm-method} \alias{print,summary.vlm-method} %\alias{print,uqo-method} %\alias{print,Coef.uqo-method} %\alias{print,summary.uqo-method} \alias{print,vsmooth.spline-method} \alias{print,rrvgam-method} \alias{qtplot,vglm-method} \alias{qtplot,vgam-method} \alias{residuals,qrrvglm-method} \alias{residuals,vlm-method} \alias{residuals,vglm-method} \alias{residuals,vgam-method} %\alias{residuals,uqo-method} \alias{residuals,vsmooth.spline-method} \alias{resid,qrrvglm-method} \alias{resid,vlm-method} \alias{resid,vglm-method} \alias{resid,vgam-method} %\alias{resid,uqo-method} \alias{resid,vsmooth.spline-method} \alias{show,Coef.rrvgam-method} \alias{show,summary.rrvgam-method} \alias{show,qrrvglm-method} \alias{show,Coef.qrrvglm-method} \alias{show,rrvglm-method} % 20090505 \alias{show,summary.qrrvglm-method} \alias{show,Coef.rrvglm-method} \alias{show,vlm-method} \alias{show,vglm-method} \alias{show,vgam-method} \alias{show,summary.rrvglm-method} 
\alias{show,summary.vgam-method} \alias{show,summary.vglm-method} \alias{show,summary.vlm-method} %\alias{show,uqo-method} %\alias{show,Coef.uqo-method} %\alias{show,summary.uqo-method} \alias{show,vsmooth.spline-method} \alias{show,rrvgam-method} \alias{summary,grc-method} \alias{summary,rrvgam-method} \alias{summary,qrrvglm-method} \alias{summary,rcim-method} \alias{summary,rcim0-method} \alias{summary,rrvglm-method} \alias{summary,vgam-method} \alias{summary,vglm-method} \alias{summary,vlm-method} %\alias{summary,uqo-method} \alias{Tol,rrvgam-method} \alias{Tol,qrrvglm-method} \alias{Tol,Coef.qrrvglm-method} %\alias{Tol,uqo-method} %\alias{Tol,Coef.uqo-method} \alias{trplot,qrrvglm-method} %\alias{trplot,uqo-method} \alias{trplot,rrvgam-method} \alias{vcov,rrvglm-method} \alias{vcov,qrrvglm-method} \alias{vcov,vlm-method} \alias{vcov,vglm-method} \alias{vplot,factor-method} \alias{vplot,list-method} \alias{vplot,matrix-method} \alias{vplot,numeric-method} \alias{weights,vlm-method} \alias{weights,vglm-method} % % % This does not work (need one line for each one): %\alias{trplot,qrrvglm,uqo-method} % % % \title{ Undocumented Methods Functions } \description{ Lots of undocumented methods functions are aliased here. In the \pkg{VGAM} package there are currently many objects/methods/classes which are currently internal and/or undocumented. The help file suppresses the warnings when the package is 'CHECK'ed. } %\usage{ % \S4method{ccoef}{rrvgam,Coef.rrvgam,rrvglm,qrrvglm, % Coef.rrvglm,Coef.qrrvglm}(object, ...) %} \section{Methods}{ There are many methods and these will be documented over time. \describe{ \item{object}{ This argument is often used, and it is the primary object from which the function operates on. } } } \keyword{methods} \keyword{classes} %\keyword{ ~~ other possible keyword(s)} \keyword{models} \keyword{regression} \keyword{internal} VGAM/man/gumbelUC.Rd0000644000176200001440000000751313135276753013573 0ustar liggesusers\name{gumbelUC} \alias{dgumbel} \alias{pgumbel} \alias{qgumbel} \alias{rgumbel} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Gumbel Distribution } \description{ Density, distribution function, quantile function and random generation for the Gumbel distribution with location parameter \code{location} and scale parameter \code{scale}. } \usage{ dgumbel(x, location = 0, scale = 1, log = FALSE) pgumbel(q, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE) qgumbel(p, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE) rgumbel(n, location = 0, scale = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{location}{the location parameter \eqn{\mu}{mu}. This is not the mean of the Gumbel distribution (see \bold{Details} below). } \item{scale}{the scale parameter \eqn{\sigma}{sigma}. This is not the standard deviation of the Gumbel distribution (see \bold{Details} below). } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Uniform]{punif}} or \code{\link[stats:Uniform]{qunif}}. } } \details{ The Gumbel distribution is a special case of the \emph{generalized extreme value} (GEV) distribution where the shape parameter \eqn{\xi}{xi} = 0. The latter has 3 parameters, so the Gumbel distribution has two. 
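This special case can be checked numerically (a small sketch; \code{dgev} is the GEV density in \pkg{VGAM}): \preformatted{
x <- seq(-2.5, 3.5, by = 0.5)
max(abs(dgumbel(x, location = 1, scale = 2) -
        dgev(x, location = 1, scale = 2, shape = 0)))  # Should be 0
}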
The Gumbel distribution function is \deqn{G(y) = \exp \left( - \exp \left[ - \frac{y-\mu}{\sigma} \right] \right) }{% G(y) = exp( -exp[ - (y-mu)/sigma ] ) } where \eqn{-\infty < y < \infty}{-Inf < y < Inf} and \eqn{\sigma > 0}{sigma > 0}. Its mean is \deqn{\mu - \sigma * \gamma}{% mu - sigma * gamma} and its variance is \deqn{\sigma^2 * \pi^2 / 6}{% sigma^2 * pi^2 / 6} where \eqn{\gamma}{gamma} is Euler's constant (which can be obtained as \code{-digamma(1)}). See \code{\link{gumbel}}, the \pkg{VGAM} family function for estimating the two parameters by maximum likelihood estimation, for formulae and other details. Apart from \code{n}, all the above arguments may be vectors and are recycled to the appropriate length if necessary. } \value{ \code{dgumbel} gives the density, \code{pgumbel} gives the distribution function, \code{qgumbel} gives the quantile function, and \code{rgumbel} generates random deviates. } \references{ Coles, S. (2001) \emph{An Introduction to Statistical Modeling of Extreme Values}. London: Springer-Verlag. } \author{ T. W. Yee } \note{ The \pkg{VGAM} family function \code{\link{gumbel}} can estimate the parameters of a Gumbel distribution using maximum likelihood estimation. } \seealso{ \code{\link{gumbel}}, \code{\link{gumbelff}}, \code{\link{gev}}, \code{\link{dgompertz}}. } \examples{ mu <- 1; sigma <- 2; y <- rgumbel(n = 100, loc = mu, scale = sigma) c(mean(y), mu - sigma * digamma(1)) # Sample and population means c(var(y), sigma^2 * pi^2 / 6) # Sample and population variances \dontrun{ x <- seq(-2.5, 3.5, by = 0.01) loc <- 0; sigma <- 1 plot(x, dgumbel(x, loc, sigma), type = "l", col = "blue", ylim = c(0, 1), main = "Blue is density, red is cumulative distribution function", sub = "Purple are 5,10,...,95 percentiles", ylab = "", las = 1) abline(h = 0, col = "blue", lty = 2) lines(qgumbel(seq(0.05, 0.95, by = 0.05), loc, sigma), dgumbel(qgumbel(seq(0.05, 0.95, by = 0.05), loc, sigma), loc, sigma), col = "purple", lty = 3, type = "h") lines(x, pgumbel(x, loc, sigma), type = "l", col = "red") abline(h = 0, lty = 2) } } \keyword{distribution} VGAM/man/fill.Rd0000644000176200001440000002334513135276753013015 0ustar liggesusers\name{fill} \alias{fill} \alias{fill1} %- \alias{fill2} %- \alias{fill3} %- \alias{fill4} %- \alias{fill5} %- \alias{fill6} %- \alias{fill7} %- \alias{fill8} %- \alias{fill9} %- \alias{fill10} %- \alias{fill11} %- \alias{fill12} %- \alias{fill13} %- \alias{fill14} %- \alias{fill15} %- \alias{fill16} %- \alias{fill17} %- \alias{fill18} %- \alias{fill19} %- \alias{fill20} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Creates a Matrix of Appropriate Dimension } \description{ A support function for the argument \code{xij}, it generates a matrix of an appropriate dimension. } \usage{ fill(x, values = 0, ncolx = ncol(x)) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ A vector or matrix which is used to determine the dimension of the answer, in particular, the number of rows. After converting \code{x} to a matrix if necessary, the answer is a matrix of \code{values} values, of dimension \code{nrow(x)} by \code{ncolx}. } \item{values}{ Numeric. The answer contains these values, which are recycled \emph{columnwise} if necessary, i.e., as \code{matrix(values, ..., byrow=TRUE)}. } \item{ncolx}{ The number of columns of the returned matrix. The default is the number of columns of \code{x}. } } \details{ The \code{xij} argument for \code{\link{vglm}} allows the user to input variables specific to each linear/additive predictor.
For example, consider the bivariate logit model where the first/second linear/additive predictor is the logistic regression of the first/second binary response respectively. The third linear/additive predictor is \code{log(OR) = eta3}, where \code{OR} is the odds ratio. If one has ocular pressure as a covariate in this model then \code{xij} is required to handle the ocular pressure for each eye, since these will be different in general. [This contrasts with a variable such as \code{age}, the age of the person, which has a common value for both eyes.] In order to input these data into \code{\link{vglm}} one often finds that functions \code{fill}, \code{fill1}, etc. are useful. All terms in the \code{xij} and \code{formula} arguments in \code{\link{vglm}} must appear in the \code{form2} argument too. } \value{ \code{matrix(values, nrow=nrow(x), ncol=ncolx)}, i.e., a matrix consisting of values \code{values}, with the number of rows matching \code{x}, and the default number of columns is the number of columns of \code{x}. } %\references{ % More information can be found at % \url{http://www.stat.auckland.ac.nz/~yee}. % % %} % \section{Warning }{ % Care is needed in such cases. % See the examples below. % %} \author{ T. W. Yee } \note{ The effect of the \code{xij} argument is after other arguments such as \code{exchangeable} and \code{zero}. Hence \code{xij} does not affect constraint matrices. Additionally, there are currently 3 other identical \code{fill} functions, called \code{fill1}, \code{fill2} and \code{fill3}; if you need more then assign \code{fill4 = fill5 = fill1} etc. The reason for this is that if more than one \code{fill} function is needed then they must be unique. For example, if \eqn{M=4} then \code{xij = op ~ lop + rop + fill(mop) + fill(mop)} would reduce to \code{xij = op ~ lop + rop + fill(mop)}, whereas \code{xij = op ~ lop + rop + fill1(mop) + fill2(mop)} would retain all \eqn{M} terms, which is needed. % The constraint matrices, as returned by \code{constraints}, do not % have a different meaning when \code{xij} is used. In Examples 1 to 3 below, the \code{xij} argument illustrates covariates that are specific to a linear predictor. Here, \code{lop}/\code{rop} are the ocular pressures of the left/right eye in an artificial dataset, and \code{mop} is their mean. Variables \code{leye} and \code{reye} might be the presence/absence of a particular disease on the LHS/RHS eye respectively. % % Examples 1 and 2 are deliberately misspecified. % The output from, e.g., \code{coef(fit, matrix=TRUE)}, looks wrong but % is correct because the coefficients are multiplied by the zeros % produced from \code{fill}. In Example 3, the \code{xij} argument illustrates fitting the (exchangeable) model where there is a common smooth function of the ocular pressure. One should use regression splines since \code{\link{s}} in \code{\link{vgam}} does not handle the \code{xij} argument. However, regression splines such as \code{\link[splines]{bs}} and \code{\link[splines]{ns}} need to have the same basis functions here for both functions, and Example 3 illustrates a trick involving a function \code{BS} to obtain this, e.g., same knots. Although regression splines create more than a single column per term in the model matrix, \code{fill(BS(lop,rop))} creates the required (same) number of columns. } \seealso{ \code{\link{vglm.control}}, \code{\link{vglm}}, \code{\link{multinomial}}, \code{\link{Select}}. 
} \examples{ fill(runif(5)) fill(runif(5), ncol = 3) fill(runif(5), val = 1, ncol = 3) # Generate eyes data for the examples below. Eyes are independent (OR=1). nn <- 1000 # Number of people eyesdata <- data.frame(lop = round(runif(nn), 2), rop = round(runif(nn), 2), age = round(rnorm(nn, 40, 10))) eyesdata <- transform(eyesdata, mop = (lop + rop) / 2, # Mean ocular pressure op = (lop + rop) / 2, # Value unimportant unless plotting # op = lop, # Choose this if plotting eta1 = 0 - 2*lop + 0.04*age, # Linear predictor for left eye eta2 = 0 - 2*rop + 0.04*age) # Linear predictor for right eye eyesdata <- transform(eyesdata, leye = rbinom(nn, size = 1, prob = logit(eta1, inverse = TRUE)), reye = rbinom(nn, size = 1, prob = logit(eta2, inverse = TRUE))) # Example 1 # All effects are linear fit1 <- vglm(cbind(leye,reye) ~ op + age, family = binom2.or(exchangeable = TRUE, zero = 3), data = eyesdata, trace = TRUE, xij = list(op ~ lop + rop + fill(lop)), form2 = ~ op + lop + rop + fill(lop) + age) head(model.matrix(fit1, type = "lm")) # LM model matrix head(model.matrix(fit1, type = "vlm")) # Big VLM model matrix coef(fit1) coef(fit1, matrix = TRUE) # Unchanged with 'xij' constraints(fit1) max(abs(predict(fit1)-predict(fit1, new = eyesdata))) # Predicts correctly summary(fit1) \dontrun{ plotvgam(fit1, se = TRUE) # Wrong, e.g., because it plots against op, not lop. # So set op = lop in the above for a correct plot. } # Example 2 # Model OR as a linear function of mop fit2 <- vglm(cbind(leye,reye) ~ op + age, data = eyesdata, trace = TRUE, binom2.or(exchangeable = TRUE, zero = NULL), xij = list(op ~ lop + rop + mop), form2 = ~ op + lop + rop + mop + age) head(model.matrix(fit2, type = "lm")) # LM model matrix head(model.matrix(fit2, type = "vlm")) # Big VLM model matrix coef(fit2) coef(fit2, matrix = TRUE) # Unchanged with 'xij' max(abs(predict(fit2) - predict(fit2, new = eyesdata))) # Predicts correctly summary(fit2) \dontrun{ plotvgam(fit2, se = TRUE) # Wrong because it plots against op, not lop. } # Example 3. This model uses regression splines on ocular pressure. # It uses a trick to ensure common basis functions. BS <- function(x, ...) sm.bs(c(x,...), df = 3)[1:length(x), , drop = FALSE] # trick fit3 <- vglm(cbind(leye,reye) ~ BS(lop,rop) + age, family = binom2.or(exchangeable = TRUE, zero = 3), data = eyesdata, trace = TRUE, xij = list(BS(lop,rop) ~ BS(lop,rop) + BS(rop,lop) + fill(BS(lop,rop))), form2 = ~ BS(lop,rop) + BS(rop,lop) + fill(BS(lop,rop)) + lop + rop + age) head(model.matrix(fit3, type = "lm")) # LM model matrix head(model.matrix(fit3, type = "vlm")) # Big VLM model matrix coef(fit3) coef(fit3, matrix = TRUE) summary(fit3) fit3@smart.prediction max(abs(predict(fit3) - predict(fit3, new = eyesdata))) # Predicts correctly predict(fit3, new = head(eyesdata)) # Note the 'scalar' OR, i.e., zero=3 max(abs(head(predict(fit3)) - predict(fit3, new = head(eyesdata)))) # Should be 0 \dontrun{ plotvgam(fit3, se = TRUE, xlab = "lop") # Correct } # Example 4. Capture-recapture model with ephemeral and enduring # memory effects. Similar to Yang and Chao (2005), Biometrics. 
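# Notes for Example 4: via the 'xij' mechanism, the formula term Lag1 is
# replaced, for each of the M linear predictors, by the corresponding term
# on the RHS of the 'xij' formula below; fill() pads with zero columns
# where no lag-1 capture covariate applies, while y1,...,y5 supply the
# lagged (previous-occasion) capture indicators.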
deermice <- transform(deermice, Lag1 = y1) M.tbh.lag1 <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + weight + Lag1, posbernoulli.tb(parallel.t = FALSE ~ 0, parallel.b = FALSE ~ 0, drop.b = FALSE ~ 1), xij = list(Lag1 ~ fill(y1) + fill(y2) + fill(y3) + fill(y4) + fill(y5) + fill(y6) + y1 + y2 + y3 + y4 + y5), form2 = ~ sex + weight + Lag1 + fill(y1) + fill(y2) + fill(y3) + fill(y4) + fill(y5) + fill(y6) + y1 + y2 + y3 + y4 + y5 + y6, data = deermice, trace = TRUE) coef(M.tbh.lag1) } \keyword{models} \keyword{regression} %This function is unrelated to the \code{zero} argument found in many %\pkg{VGAM} family functions. [zz implies I should call it %\code{fill1(x, value=0, ncolx=ncol(x))} and create .Rd file for %\code{zero} argument.] %eyesdata$leye <- ifelse(runif(n) < exp(eta1)/(1+exp(eta1)), 1, 0) %eyesdata$reye <- ifelse(runif(n) < exp(eta2)/(1+exp(eta2)), 1, 0) % \deqn{logit P(Y_k=1) = f_k(x_{ijk}) }{% % logit P(Y_k=1) = f_k(x_{ijk}) } % for \code{k=1,2}. % fill1(lop, ncol=ncol(BS(lop,rop,mop))), data=eyesdata) % Models using the \code{xij} argument may or may not predict correctly, % and inference obtained using \code{summary} may be incorrect. VGAM/man/lindley.Rd0000644000176200001440000000425113135276753013524 0ustar liggesusers\name{lindley} \alias{lindley} %- Also NEED an '\alias' for EACH other topic documented here. \title{ 1-parameter Gamma Distribution } \description{ Estimates the (1-parameter) Lindley distribution by maximum likelihood estimation. } \usage{ lindley(link = "loge", itheta = NULL, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to the (positive) parameter. See \code{\link{Links}} for more choices. } % \item{earg}{ % List. Extra argument for the link. % See \code{earg} in \code{\link{Links}} for general information. % } \item{itheta, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The density function is given by \deqn{f(y; \theta) = \theta^2 (1 + y) \exp(-\theta y) / (1 + \theta)}{% f(y; theta) = theta^2 * (1 + y) * exp(-theta * y) / (1 + theta)} for \eqn{\theta > 0}{theta > 0} and \eqn{y > 0}. The mean of \eqn{Y} (returned as the fitted values) is \eqn{\mu = (\theta + 2) / (\theta (\theta + 1))}{mu = (theta + 2) / (theta * (theta + 1))}. The variance is \eqn{(\theta^2 + 4 \theta + 2) / (\theta (\theta + 1))^2}{(theta^2 + 4 * theta + 2) / (theta * (theta + 1))^2}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Lindley, D. V. (1958) Fiducial distributions and Bayes' theorem. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{20}, 102--107. Ghitany, M. E. and Atieh, B. and Nadarajah, S. (2008) Lindley distribution and its application. \emph{Math. Comput. Simul.}, \bold{78}, 493--506. } \author{ T. W. Yee } \note{ This \pkg{VGAM} family function can handle multiple responses (inputted as a matrix). Fisher scoring is implemented. } \seealso{ \code{\link{dlind}}, \code{\link{gammaR}}, \code{\link{simulate.vlm}}. 
} \examples{ ldata <- data.frame(y = rlind(n = 1000, theta = exp(3))) fit <- vglm(y ~ 1, lindley, data = ldata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/notdocumentedyet.Rd0000644000176200001440000003741113135276753015462 0ustar liggesusers\name{notdocumentedyet} \alias{notdocumentedyet} % % % % 201706 and 201707 % \alias{lrp.vglm} \alias{retain.col} \alias{d3theta.deta3} % 201704 \alias{ghn100} \alias{ghw100} %\alias{hdeff} %\alias{hdeff.vglm} \alias{dprentice74} % 201612 \alias{label.cols.y} \alias{valid.vknotl2} % 201611 %\alias{profilevglm} %\alias{vpairs.profile} %\alias{vplot.profile} % 201609 \alias{prob.munb.size.VGAM} \alias{negbinomial.initialize.yj} % 201607, 201608 \alias{mroot2} \alias{psint} \alias{psintpvgam} \alias{df.residual_pvgam} \alias{startstoppvgam} \alias{summary.pvgam-class} %%%%% \alias{summarypvgam} %%%% % \alias{show.summary.pvgam} \alias{endf} \alias{endfpvgam} \alias{vcov.pvgam} \alias{vcov.pvgam-class} \alias{vlabel} \alias{show.pvgam} \alias{model.matrixpvgam} % 201606 \alias{gharmonic} \alias{gharmonic2} \alias{bisection.basic} \alias{Zeta.aux} \alias{deflat.limit.oizeta} % 201605 \alias{deflat.limit.oipospois} % 20160418 (keyword: mgcvvgam) % \alias{ps} \alias{get.X.VLM.aug} \alias{psv2magic} % \alias{psvglm.fit} % \alias{psvlm.wfit} \alias{pvgam-class} % \alias{PS} % \alias{checkwz} \alias{vforsub} \alias{vbacksub} \alias{vchol} \alias{process.constraints} \alias{mux22} \alias{mux111} % % % 201602, 201603, 201604: \alias{genbetaII.Loglikfun4} \alias{posNBD.Loglikfun2} \alias{NBD.Loglikfun2} \alias{AR1.gammas} \alias{Init.mu} \alias{.min.criterion.VGAM} \alias{predictvglmS4VGAM} % 201601: \alias{EIM.NB.speciald} \alias{EIM.NB.specialp} \alias{EIM.posNB.speciald} \alias{EIM.posNB.specialp} \alias{showvglmS4VGAM} \alias{showvgamS4VGAM} %\alias{coefvgam} % % 201512: \alias{margeffS4VGAM} \alias{showsummaryvglmS4VGAM} \alias{summaryvglmS4VGAM} \alias{findFirstMethod} \alias{cratio.derivs} \alias{subsetarray3} \alias{tapplymat1} % 201509, for a bug in car::linearHypothesis() and car:::Anova(): \alias{as.char.expression} \alias{coef.vlm} \alias{vcov.vlm} \alias{model.matrix.vlm} %\alias{has.intercept} %\alias{has.interceptvlm} %\alias{term.names} %\alias{term.namesvlm} \alias{responseName} \alias{responseNamevlm} % % 201503, 201504, 201505, 201508; %\alias{confintvglm} \alias{qlms.bcn} \alias{dlms.bcn} \alias{dbetaII} % \alias{AR1.control} % \alias{param.names} % 20151105 %\alias{is.buggy} %\alias{is.buggy.vlm} % % 201412; %\alias{linkfun.vglm} % 201408; \alias{dlevy} \alias{plevy} \alias{qlevy} \alias{rlevy} % 201407; expected.betabin.ab is needed for zibetabinomialff() in YBook. 
\alias{grid.search} \alias{grid.search2} \alias{grid.search3} \alias{grid.search4} \alias{expected.betabin.ab} % 201406; % \alias{interleave.VGAM} DONE 20151204 \alias{interleave.cmat} % 201506; \alias{marcumQ} \alias{QR.Q} \alias{QR.R} % %\alias{sm.bs} %\alias{sm.ns} %\alias{sm.poly} %\alias{sm.scale} %\alias{sm.scale.default} % % % % % 201312; % \alias{simulate.vlm} % 201311; 20150316: modified to familyname \alias{familyname} \alias{familyname.vlm} \alias{familyname.vglmff} % 201309; \alias{I.col} \alias{BIC} \alias{check.omit.constant} % % 201308; %\alias{dbiclaytoncop} %\alias{rbiclaytoncop} %\alias{biclaytoncop} % % 201307; %\alias{posnormal.control} \alias{rec.normal.control} \alias{rec.exp1.control} %\alias{kendall.tau} %\alias{binormalcop} %\alias{dbinormcop} %\alias{pbinormcop} %\alias{rbinormcop} %\alias{expint, expexpint, expint.E1} % % 201302; % \alias{pgamma.deriv.unscaled} % \alias{pgamma.deriv} % \alias{digami} % % 201212; \alias{binom2.rho.ss} % % 20121105; % \alias{posbernoulli.b.control} \alias{N.hat.posbernoulli} \alias{Rank} \alias{Rank.rrvglm} \alias{Rank.qrrvglm} \alias{Rank.rrvgam} % 20121015; delete this later %\alias{huggins91.old} % % 20120912 \alias{arwz2wz} % % 20120813 New links (no earg) %\alias{Dtheta.deta} % Commented out 20170701 %\alias{D2theta.deta2} % Commented out 20170701 %\alias{Eta2theta} % Commented out 20170701 %\alias{Theta2eta} % Commented out 20170701 \alias{link2list} %\alias{Namesof} % Commented out 20170701 % % % % % 20120514, 20120528, \alias{w.wz.merge} \alias{w.y.check} \alias{vweighted.mean.default} % % 20120418 \alias{nvar_vlm} % 20120310 %\alias{hatvalues} %\alias{hatvalues.vlm} % % % 20120307 \alias{npred} \alias{npred.vlm} % % % % 20120215 % \alias{print.vglmff} \alias{show.vglmff} % \alias{print.vfamily} % \alias{show.Coef.rrar} % \alias{family.vglm} \alias{show.vgam} \alias{show.vglm} \alias{show.vlm} % \alias{print.vgam} % \alias{print.vglm} % \alias{print.vlm} % \alias{print.vlm.wfit} % % % % % 20120112 \alias{AIC} \alias{AICc} \alias{coef} \alias{logLik} \alias{plot} %\alias{vcov} % 20150828 %\alias{vcovvlm} % 20150828 \alias{VGAMenv} \alias{nobs} \alias{show.Coef.rrvgam} \alias{show.Coef.qrrvglm} \alias{show.Coef.rrvglm} \alias{show.rrvglm} \alias{show.summary.rrvgam} % \alias{show.summary.lms} \alias{show.summary.qrrvglm} % \alias{show.summary.rc.exponential} \alias{show.summary.rrvglm} %\alias{show.summary.uqo} % \alias{show.summary.vgam} % \alias{show.summary.vglm} % 20150831 \alias{show.summary.vlm} %\alias{show.uqo} \alias{show.vanova} \alias{show.vsmooth.spline} % % % % % % % % % % 20111224; lrtest and waldtest stuff %\alias{lrtest} %\alias{lrtest_vglm} %\alias{print_anova} \alias{update_default} \alias{update_formula} % %\alias{waldtest} %\alias{waldtest_vglm} %\alias{waldtest_default} %\alias{waldtest_formula} % % % % % 20110202; 20110317; James Lauder work %\alias{dexpgeom} %\alias{pexpgeom} %\alias{qexpgeom} %\alias{rexpgeom} %\alias{expgeometric} % %\alias{dweibull3} %\alias{pweibull3} %\alias{qweibull3} %\alias{rweibull3} %\alias{weibull3} % % % % 20110321; misc. datasets. %\alias{fibre1.5} %\alias{fibre15} % % % 20120206; for RR-NB, or rrn.tex. \alias{plota21} % % % 20110202; for Melbourne; these include datasets. 
\alias{azprocedure} \alias{Confint.rrnb} \alias{Confint.nb1} %\alias{gala} % \alias{melbmaxtemp} % % % %20111128; basics \alias{is.empty.list} % % % % %20101222; Alfian work %\alias{Rcim} % Has been written %\alias{plotrcim0} % Has been written %\alias{moffset} % Has been written % \alias{Qvar} \alias{plotqvar} \alias{qvplot} \alias{depvar.vlm} % % % % %20110411 %\alias{dbinorm} \alias{dnorm2} % %20090330 \alias{dclogloglap} \alias{dlogitlap} \alias{dprobitlap} \alias{logitlaplace1.control} \alias{loglaplace1.control} \alias{pclogloglap} \alias{plogitlap} \alias{pprobitlap} \alias{qclogloglap} \alias{qlogitlap} \alias{qprobitlap} \alias{rclogloglap} \alias{rlogitlap} \alias{rprobitlap} % % % % \alias{A1A2A3.orig} 20140909; same as A1A2A3(hwe = TRUE) \alias{AAaa.nohw} %\alias{AIC} %\alias{AIC.qrrvglm} %\alias{AIC.rrvglm} %\alias{AIC.vgam} %\alias{AIC.vglm} %\alias{AIC.vlm} % \alias{Build.terms} \alias{Build.terms.vlm} \alias{Coef.rrvgam} \alias{Coefficients} \alias{Cut} \alias{Deviance.categorical.data.vgam} \alias{InverseBrat} \alias{Max.Coef.qrrvglm} \alias{Max.qrrvglm} \alias{Opt.Coef.qrrvglm} \alias{Opt.qrrvglm} % \alias{R170.or.later} \alias{Tol.Coef.qrrvglm} %\alias{Tol.Coef.uqo} \alias{Tol.qrrvglm} %\alias{Tol.uqo} \alias{a2m} % \alias{abbott} % 20150320; no longer releasing family.quantal.R. % \alias{acat.deriv} % \alias{add.arg} % \alias{add.constraints} % \alias{add.hookey} \alias{add1} \alias{add1.vgam} \alias{add1.vglm} % \alias{adjust.Dmat.expression} \alias{alaplace1.control} \alias{alaplace2.control} \alias{alaplace3.control} % \alias{alias.vgam} % \alias{alias.vglm} \alias{anova.vgam} \alias{anova.vglm} % \alias{as.vanova} % \alias{attrassign} % \alias{attrassigndefault} % \alias{attrassignlm} % \alias{beta4} % \alias{betaffqn} \alias{biplot} \alias{biplot.qrrvglm} % \alias{block.diag} % \alias{borel.tanner} % \alias{callcaof} % \alias{callcqof} % \alias{calldcaof} % \alias{calldcqof} % \alias{callduqof} % \alias{calluqof} % \alias{canonical.Hlist} % \alias{cao.fit} \alias{car.all} \alias{care.exp} % \alias{concoef.Coef.rrvgam} \alias{concoef.Coef.qrrvglm} \alias{concoef.rrvgam} \alias{concoef.qrrvglm} % \alias{cdf} \alias{cdf.lms.bcg} \alias{cdf.lms.bcn} \alias{cdf.lms.yjn} \alias{cdf.vglm} \alias{cm.VGAM} \alias{cm.zero.VGAM} \alias{cm.nointercept.VGAM} \alias{coefficients} \alias{coefqrrvglm} % \alias{coefvlm} % 20140124 \alias{coefvsmooth.spline} \alias{coefvsmooth.spline.fit} % \alias{constraints.vlm} % \alias{cqo.fit} \alias{d2theta.deta2} % \alias{dcda.fast} % \alias{dctda.fast.only} \alias{deplot} \alias{deplot.default} \alias{deplot.lms.bcg} \alias{deplot.lms.bcn} \alias{deplot.lms.yjn} \alias{deplot.lms.yjn2} \alias{deplot.vglm} \alias{deviance} %\alias{deviance.uqo} %\alias{deviance.vglm} \alias{deviance.vlm} \alias{deviance.qrrvglm} %\alias{df.residual} %\alias{df.residual_vlm} % \alias{dimm} % 20151105 % \alias{dneg.binomial} \alias{dnorm2} %\alias{dotC} %\alias{dotFortran} % \alias{dpsi.dlambda.yjn} % \alias{drop1.vgam} % \alias{drop1.vglm} \alias{dtheta.deta} % \alias{dy.dyj} % \alias{dyj.dy} \alias{effects} % \alias{effects.vgam} % \alias{effects.vlm} % \alias{eifun} \alias{eijfun} \alias{eta2theta} %\alias{explink} % \alias{extract.arg} \alias{fff.control} \alias{fill2} \alias{fill3} \alias{fitted} \alias{fitted.values} %\alias{fitted.values.uqo} \alias{fittedvsmooth.spline} % \alias{variable.names} \alias{variable.namesvlm} \alias{variable.namesrrvglm} \alias{case.names} \alias{case.namesvlm} % \alias{formula} \alias{formulaNA.VGAM} \alias{gammaff} % 
\alias{get.arg} % \alias{get.rrvglm.se1} % \alias{get.rrvglm.se2} % \alias{getind} % \alias{gh.weight.yjn.11} % \alias{gh.weight.yjn.12} % \alias{gh.weight.yjn.13} % \alias{glag.weight.yjn.11} % \alias{glag.weight.yjn.12} % \alias{glag.weight.yjn.13} % \alias{gleg.weight.yjn.11} % \alias{gleg.weight.yjn.12} % \alias{gleg.weight.yjn.13} \alias{glm} % \alias{hypersecant} % \alias{hypersecant01} % \alias{ima} % \alias{inv.binomial} \alias{inverse.gaussianff} \alias{is.Numeric} \alias{is.bell} \alias{is.bell.rrvgam} \alias{is.bell.qrrvglm} \alias{is.bell.rrvglm} \alias{is.bell.vlm} \alias{Kayfun.studentt} % \alias{is.linear.term} % \alias{jitteruqo} \alias{lm} \alias{lm2qrrvlm.model.matrix} \alias{lm2vlm.model.matrix} \alias{vlm2lm.model.matrix} \alias{lms.bcg.control} \alias{lms.bcn.control} \alias{lms.yjn.control} \alias{lmscreg.control} % \alias{logLik.vlm} \alias{logLik.qrrvglm} % \alias{lv.Coef.rrvgam} 20090505 \alias{latvar.Coef.qrrvglm} \alias{latvar.rrvgam} \alias{latvar.rrvglm} \alias{latvar.qrrvglm} \alias{lvplot.rrvgam} \alias{m2a} %\alias{m2avglm} % \alias{matrix.power} \alias{mbesselI0} \alias{mix2exp.control} \alias{mix2normal.control} \alias{mix2poisson.control} \alias{model.matrix.qrrvglm} \alias{model.matrixvgam} % \alias{mux11} % \alias{mux15} % \alias{mux2} % \alias{mux5} % \alias{mux55} % \alias{mux7} % \alias{mux9} % \alias{my.dbinom} \alias{my1} \alias{my2} \alias{namesof} % \alias{natural.ig} % \alias{neg.binomial} % \alias{neg.binomial.k} % \alias{negbin.ab} % \alias{new.assign} \alias{nlminbcontrol} \alias{nbolf2} \alias{nobs.vlm} \alias{nvar} \alias{nvar.vlm} \alias{nvar.vgam} \alias{nvar.rrvglm} \alias{nvar.qrrvglm} \alias{nvar.rrvgam} \alias{nvar.rcim} % \alias{num.deriv.rrr} \alias{persp} \alias{persp.rrvgam} \alias{plot.rrvgam} \alias{plotpreplotvgam} %\alias{plotvglm} \alias{plotvlm} \alias{plotvsmooth.spline} % \alias{pnorm2} done 20120910 % \alias{poissonqn} \alias{predict} \alias{predict.rrvgam} \alias{predict.glm} \alias{predict.lm} \alias{predict.mlm} % \alias{predictqrrvglm} \alias{predict.rrvglm} %\alias{predict.uqo} \alias{predict.vgam} \alias{predict.vlm} \alias{predictrrvgam} \alias{predictors} \alias{predictors.vglm} \alias{predictvsmooth.spline} \alias{predictvsmooth.spline.fit} % \alias{preplotvgam} \alias{print} \alias{procVec} \alias{negzero.expression.VGAM} \alias{process.binomial2.data.VGAM} \alias{process.categorical.data.VGAM} % \alias{proj.vgam} % \alias{proj.vglm} \alias{put.caption} % \alias{pweights} % \alias{qrrvglm.xprod} \alias{qtplot} \alias{qtplot.default} \alias{qtplot.lms.bcg} \alias{qtplot.lms.bcn} \alias{explot.lms.bcn} \alias{qtplot.lms.yjn} \alias{qtplot.lms.yjn2} \alias{qtplot.vextremes} \alias{qtplot.vglm} \alias{quasiff} % \alias{rainfall} % \alias{remove.arg} % \alias{replace.constraints} \alias{resid} \alias{residuals} % \alias{residualsqrrvglm} % \alias{residualsuqo} % \alias{residualsvglm} % \alias{residualsvlm} % \alias{residvsmooth.spline} \alias{rlplot} \alias{rlplot.vextremes} \alias{rlplot.vglm} % \alias{rrar.Ak1} % \alias{rrar.Ci} % \alias{rrar.Di} % \alias{rrar.Ht} % \alias{rrar.Mi} % \alias{rrar.Mmat} % \alias{rrar.UU} % \alias{rrar.Ut} % \alias{rrar.Wmat} \alias{rrar.control} % \alias{rrr.alternating.expression} % \alias{rrr.deriv.gradient.fast} % \alias{rrr.deriv.rss} % \alias{rrr.derivC.rss} % \alias{rrr.derivative.expression} % \alias{rrr.end.expression} % \alias{rrr.init.expression} % \alias{rrr.normalize} % \alias{rrvglm.control.Gaussian} % \alias{rrvglm.fit} \alias{ResSS.vgam} \alias{s.vam} 
\alias{simple.exponential} \alias{better.exponential} \alias{simple.poisson} \alias{size.binomial} % % \alias{sm.min1} \alias{sm.min2} \alias{sm.scale1} \alias{sm.scale2} %\alias{stdze1} %\alias{stdze2} % % % \alias{step.vgam} % \alias{step.vglm} % \alias{subconstraints} \alias{summary.rrvgam} \alias{summary.grc} \alias{summary.lms} \alias{summary.qrrvglm} \alias{summary.rc.exponential} \alias{summaryrcim} \alias{summary.rrvglm} %\alias{summary.uqo} % \alias{summaryvgam} %\alias{summaryvglm} % 20150831 \alias{summaryvlm} % \alias{tapplymat1} \alias{terms.vlm} \alias{termsvlm} \alias{theta2eta} \alias{trivial.constraints} % \alias{update.vgam} % \alias{update.vglm} % \alias{uqo.fit} % \alias{valid.vglmff} % \alias{valid.vknotl2} \alias{valt.control} % \alias{valt} % \alias{valt.1iter} % \alias{valt.2iter} % \alias{valt.control} % \alias{varassign} % \alias{vchol.greenstadt} \alias{vcontrol.expression} % \alias{vcovdefault} % \alias{vcovqrrvglm} %\alias{vcovrrvglm} % 20150828 % \alias{vcovvlm} % \alias{veigen} % \alias{vellipse} % \alias{vgam.fit} % \alias{vgam.match} % \alias{vgam.nlchisq} % \alias{vgety} \alias{vgam.fit} \alias{vglm.fit} \alias{vglm.garma.control} \alias{vglm.multinomial.control} \alias{vglm.multinomial.deviance.control} \alias{dmultinomial} \alias{vglm.VGAMcategorical.control} % \alias{vindex} % \alias{vlabel} \alias{vlm} \alias{vlm.control} % \alias{vlm.wfit} \alias{vnonlinear.control} \alias{vplot} \alias{vplot.default} \alias{vplot.factor} \alias{vplot.list} \alias{vplot.matrix} \alias{vplot.numeric} \alias{vvplot.factor} \alias{weights} \alias{Wr1} \alias{Wr2} % \alias{wweighted.mean} \alias{wweights} % \alias{yformat} % \alias{ylim.scale} % % % %\alias{Coef.uqo-class} \alias{rrvgam-class} \alias{rcim0-class} \alias{rcim-class} \alias{grc-class} \alias{qrrvglm-class} \alias{summary.qrrvglm-class} \alias{summary.rrvglm-class} \alias{summary.vgam-class} \alias{summary.vglm-class} \alias{summary.vlm-class} %%% 20101216 \alias{summary.rcim-class} %\alias{summary.rcim-class} %\alias{summaryrcim-class} %\alias{uqo-class} \alias{vcov.qrrvglm-class} \alias{vlm-class} \alias{vlmsmall-class} \alias{vsmooth.spline-class} \alias{vsmooth.spline.fit-class} \alias{Coef.rrvgam-class} \alias{summary.rrvgam-class} % % %- Also NEED an '\alias' for EACH other topic documented here. \title{ Undocumented and Internally Used Functions and Classes } \description{ Those currently undocumented and internally used functions are aliased to this help file. Ditto for some classes. } %\usage{ %uninormal(lmean = "identitylink", lsd = "loge", zero = NULL) %} %- maybe also 'usage' for other objects documented here. %\arguments{ % \item{lmean}{ % Link function applied to the mean. % See \code{\link{Links}} for more choices. % % } %} \details{ In the \pkg{VGAM} package there are currently many objects/methods/classes which are currently internal and/or undocumented. The help file suppresses the warnings when the package is 'CHECK'ed. } \value{ Each objects/methods/classes may or may not have its own individual value. These will be documented over time. } %\references{ %} \author{ T. W. Yee } %\note{ % %} %\seealso{ % \code{gaussianff}, % \code{\link{posnormal}}. 
% %} %\examples{ %} \keyword{models} \keyword{regression} \keyword{internal} VGAM/man/binormcopUC.Rd0000644000176200001440000000403213135276753014301 0ustar liggesusers\name{Binormcop} \alias{Binormcop} \alias{dbinormcop} \alias{pbinormcop} \alias{rbinormcop} \title{Gaussian Copula (Bivariate) Distribution} \description{ Density, distribution function, and random generation for the (one parameter) bivariate Gaussian copula distribution. } \usage{ dbinormcop(x1, x2, rho = 0, log = FALSE) pbinormcop(q1, q2, rho = 0) rbinormcop(n, rho = 0) } \arguments{ \item{x1, x2, q1, q2}{vector of quantiles. The \code{x1} and \code{x2} should be in the interval \eqn{(0,1)}. Ditto for \code{q1} and \code{q2}. } \item{n}{number of observations. Same as \code{\link[stats]{rnorm}}. } \item{rho}{the correlation parameter. Should be in the interval \eqn{(-1,1)}. } \item{log}{ Logical. If \code{TRUE} then the logarithm is returned. % Same as \code{\link[stats]{rnorm}}. } } \value{ \code{dbinormcop} gives the density, \code{pbinormcop} gives the distribution function, and \code{rbinormcop} generates random deviates (a two-column matrix). } %\references{ % %} \author{ T. W. Yee } \details{ See \code{\link{binormalcop}}, the \pkg{VGAM} family functions for estimating the parameter by maximum likelihood estimation, for the formula of the cumulative distribution function and other details. } \note{ Yettodo: allow \code{x1} and/or \code{x2} to have values 1, and to allow any values for \code{x1} and/or \code{x2} to be outside the unit square. } \seealso{ \code{\link{binormalcop}}, \code{\link{binormal}}. } \examples{ \dontrun{ edge <- 0.01 # A small positive value N <- 101; x <- seq(edge, 1.0 - edge, len = N); Rho <- 0.7 ox <- expand.grid(x, x) zedd <- dbinormcop(ox[, 1], ox[, 2], rho = Rho, log = TRUE) contour(x, x, matrix(zedd, N, N), col = "blue", labcex = 1.5) zedd <- pbinormcop(ox[, 1], ox[, 2], rho = Rho) contour(x, x, matrix(zedd, N, N), col = "blue", labcex = 1.5) } } \keyword{distribution} %plot(r <- rbinormcop(n = 3000, rho = Rho), col = "blue") %par(mfrow = c(1, 2)) %hist(r[, 1]) # Should be uniform %hist(r[, 2]) # Should be uniform VGAM/man/truncparetoUC.Rd0000644000176200001440000000533713135276753014670 0ustar liggesusers\name{Truncpareto} \alias{Truncpareto} \alias{dtruncpareto} \alias{ptruncpareto} \alias{qtruncpareto} \alias{rtruncpareto} \title{The Truncated Pareto Distribution} \description{ Density, distribution function, quantile function and random generation for the upper truncated Pareto(I) distribution with parameters \code{lower}, \code{upper} and \code{shape}. } \usage{ dtruncpareto(x, lower, upper, shape, log = FALSE) ptruncpareto(q, lower, upper, shape, lower.tail = TRUE, log.p = FALSE) qtruncpareto(p, lower, upper, shape) rtruncpareto(n, lower, upper, shape) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n, log}{Same meaning as \code{\link[stats:Uniform]{runif}}. } \item{lower, upper, shape}{ the lower, upper and shape (\eqn{k}) parameters. If necessary, values are recycled. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dtruncpareto} gives the density, \code{ptruncpareto} gives the distribution function, \code{qtruncpareto} gives the quantile function, and \code{rtruncpareto} generates random deviates. } \references{ Aban, I. B., Meerschaert, M. M. and Panorska, A. K. 
(2006) Parameter estimation for the truncated Pareto distribution, \emph{Journal of the American Statistical Association}, \bold{101}(473), 270--277. } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{truncpareto}}, the \pkg{VGAM} family function for estimating the parameter \eqn{k} by maximum likelihood estimation, for the formula of the probability density function and the range restrictions imposed on the parameters. } %%\note{ %% The truncated Pareto distribution is %%} \seealso{ \code{\link{truncpareto}}. } \examples{ lower <- 3; upper <- 8; kay <- exp(0.5) \dontrun{ xx <- seq(lower - 0.5, upper + 0.5, len = 401) plot(xx, dtruncpareto(xx, low = lower, upp = upper, shape = kay), main = "Truncated Pareto density split into 10 equal areas", type = "l", ylim = 0:1, xlab = "x") abline(h = 0, col = "blue", lty = 2) qq <- qtruncpareto(seq(0.1, 0.9, by = 0.1), low = lower, upp = upper, shape = kay) lines(qq, dtruncpareto(qq, low = lower, upp = upper, shape = kay), col = "purple", lty = 3, type = "h") lines(xx, ptruncpareto(xx, low = lower, upp = upper, shape = kay), col = "orange") } pp <- seq(0.1, 0.9, by = 0.1) qq <- qtruncpareto(pp, lower = lower, upper = upper, shape = kay) ptruncpareto(qq, lower = lower, upper = upper, shape = kay) qtruncpareto(ptruncpareto(qq, lower = lower, upper = upper, shape = kay), lower = lower, upper = upper, shape = kay) - qq # Should be all 0 } \keyword{distribution} VGAM/man/depvar.Rd0000644000176200001440000000263613135276753013352 0ustar liggesusers\name{depvar} \alias{depvar} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Response Variable Extracted } \description{ A generic function that extracts the response/dependent variable from objects. } \usage{ depvar(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ An object that has some response/dependent variable. } \item{\dots}{ Other arguments fed into the specific methods function of the model. In particular, sometimes \code{type = c("lm", "lm2")} is available, in which case the first one is chosen if the user does not input a value. The latter value corresponds to argument \code{form2}, and sometimes a response for that is optional. } } \details{ By default this function is preferred to calling \code{fit@y}, say. } \value{ The response/dependent variable, usually as a matrix or vector. } %\references{ % %} \author{ Thomas W. Yee } %\note{ %} %\section{Warning }{ % This %} \seealso{ \code{\link[stats]{model.matrix}}, \code{\link{vglm}}. } \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) (fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)) fit@y # Sample proportions (not recommended) depvar(fit) # Better than using fit@y; dependent variable (response) weights(fit, type = "prior") # Number of observations } \keyword{models} \keyword{regression} VGAM/man/plotvglm.Rd0000644000176200001440000000453513135276753013735 0ustar liggesusers\name{plotvglm} \alias{plotvglm} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Plots for VGLMs } \description{ Currently this function plots the Pearson residuals versus the linear predictors (\eqn{M} plots) and plots the Pearson residuals versus the hat values (\eqn{M} plots). } \usage{ plotvglm(x, which = "(All)", ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ An object of class \code{"vglm"} (see \code{\link{vglm-class}}) or inherits from that class. % Same as \code{\link{plotvgam}}. 
} \item{which}{ If a subset of the plots is required, specify a subset of the numbers \code{1:(2*M)}. The default is to plot them all. } \item{\dots}{ Arguments fed into the primitive \code{\link[graphics]{plot}} functions. } } \details{ This function is under development. Currently it plots the Pearson residuals against the predicted values (on the transformed scale) and the hat values. There are \eqn{2M} plots in total, therefore users should call \code{\link[graphics]{par}} to assign, e.g., the \code{mfrow} argument. Note: Section 3.7 of Yee (2015) describes the Pearson residuals and hat values for VGLMs. } \value{ Returns the object invisibly. % Same as \code{\link{plotvgam}}. } %\references{ %} \author{ T. W. Yee } %\note{ % \code{plotvglm()} is quite buggy at the moment. % \code{plotvglm()} works in a similar % manner to S-PLUS's \code{plot.gam()}, however, there is no % options for interactive construction of the plots yet. %} \seealso{ \code{\link{plotvgam}}, \code{\link{plotvgam.control}}, \code{\link{vglm}}. } \examples{ \dontrun{ ndata <- data.frame(x2 = runif(nn <- 200)) ndata <- transform(ndata, y1 = rnbinom(nn, mu = exp(3+x2), size = exp(1))) fit1 <- vglm(y1 ~ x2, negbinomial, data = ndata, trace = TRUE) coef(fit1, matrix = TRUE) par(mfrow = c(2, 2)) plot(fit1) # Manually produce the four plots plot(fit1, which = 1, col = "blue", las = 1, main = "main1") abline(h = 0, lty = "dashed", col = "gray50") plot(fit1, which = 2, col = "blue", las = 1, main = "main2") abline(h = 0, lty = "dashed", col = "gray50") plot(fit1, which = 3, col = "blue", las = 1, main = "main3") plot(fit1, which = 4, col = "blue", las = 1, main = "main4") } } \keyword{models} \keyword{regression} \keyword{smooth} \keyword{graphs} VGAM/man/hyperg.Rd0000644000176200001440000001031213135276753013355 0ustar liggesusers\name{hyperg} %\alias{hyperg} \alias{hyperg} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Hypergeometric Family Function } \description{ Family function for a hypergeometric distribution where either the number of white balls or the total number of white and black balls are unknown. } \usage{ hyperg(N = NULL, D = NULL, lprob = "logit", iprob = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{N}{ Total number of white and black balls in the urn. Must be a vector with positive values, and is recycled, if necessary, to the same length as the response. One of \code{N} and \code{D} must be specified. } \item{D}{ Number of white balls in the urn. Must be a vector with positive values, and is recycled, if necessary, to the same length as the response. One of \code{N} and \code{D} must be specified. } \item{lprob}{ Link function for the probabilities. See \code{\link{Links}} for more choices. } \item{iprob}{ Optional initial value for the probabilities. The default is to choose initial values internally. } } \details{ Consider the scenario from \code{\link[stats]{dhyper}} where there are \eqn{N=m+n} balls in an urn, where \eqn{m} are white and \eqn{n} are black. A simple random sample (i.e., \emph{without} replacement) of \eqn{k} balls is taken. The response here is the sample \emph{proportion} of white balls. In this document, \code{N} is \eqn{N=m+n}, \code{D} is \eqn{m} (for the number of ``defectives'', in quality control terminology, or equivalently, the number of marked individuals). The parameter to be estimated is the population proportion of white balls, viz. \eqn{prob = m/(m+n)}. 
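The sample proportion \eqn{y/k} is unbiased for \eqn{prob} since \eqn{E(y) = k m / (m+n)}{E(y) = k*m/(m+n)}; a quick simulation with base R's \code{\link[stats]{rhyper}} illustrates this (a minimal sketch): \preformatted{
m <- 5; n <- 4; k <- 4
set.seed(1)
mean(rhyper(1e5, m = m, n = n, k = k) / k)  # Compare with:
m / (m + n)
}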
Depending on which one of \code{N} and \code{D} is inputted, the estimate of the other parameter can be obtained from the equation \eqn{prob = m/(m+n)}, or equivalently, \code{prob = D/N}. However, the log-factorials are computed using \code{\link[base]{lgamma}} and neither \eqn{m} nor \eqn{n} is restricted to being an integer. Thus if an integer \eqn{N} is to be estimated, it will be necessary to evaluate the likelihood function at integer values about the estimate, i.e., at \code{trunc(Nhat)} and \code{ceiling(Nhat)} where \code{Nhat} is the (real) estimate of \eqn{N}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}}, \code{\link{cqo}}, and \code{\link{cao}}. } \references{ Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011) \emph{Statistical Distributions}, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition. } \author{ Thomas W. Yee } \note{ The response can be of one of three formats: a factor (first level taken as success), a vector of proportions of success, or a 2-column matrix (first column = successes) of counts. The argument \code{weights} in the modelling function can also be specified. In particular, for a general vector of proportions, you will need to specify \code{weights} because the number of trials is needed. } \seealso{ \code{\link[stats]{dhyper}}, \code{\link{binomialff}}. } \section{Warning }{ No checking is done to ensure that certain values are within range, e.g., \eqn{k \leq N}{k <= N}. } \examples{ nn <- 100 m <- 5 # Number of white balls in the population k <- rep(4, len = nn) # Sample sizes n <- 4 # Number of black balls in the population y <- rhyper(nn = nn, m = m, n = n, k = k) yprop <- y / k # Sample proportions # N is unknown, D is known. Both models are equivalent: fit <- vglm(cbind(y,k-y) ~ 1, hyperg(D = m), trace = TRUE, crit = "c") fit <- vglm(yprop ~ 1, hyperg(D = m), weight = k, trace = TRUE, crit = "c") # N is known, D is unknown. Both models are equivalent: fit <- vglm(cbind(y, k-y) ~ 1, hyperg(N = m+n), trace = TRUE, crit = "l") fit <- vglm(yprop ~ 1, hyperg(N = m+n), weight = k, trace = TRUE, crit = "l") coef(fit, matrix = TRUE) Coef(fit) # Should be equal to the true population proportion unique(m / (m+n)) # The true population proportion fit@extra head(fitted(fit)) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/bmi.nz.Rd0000644000176200001440000000260513135276753013262 0ustar liggesusers\name{bmi.nz} \alias{bmi.nz} \docType{data} \title{ Body Mass Index of New Zealand Adults Data} \description{ The body mass indexes and ages from an approximate random sample of 700 New Zealand adults. } \usage{data(bmi.nz)} \format{ A data frame with 700 observations on the following 2 variables. \describe{ \item{age}{a numeric vector; their age (years). } \item{BMI}{a numeric vector; their body mass index, which is their weight divided by the square of their height (kg / \eqn{m^2}{m^2}).} } } \details{ The data are a random sample from the Fletcher Challenge/Auckland Heart and Health survey conducted in the early 1990s. There are some outliers in the data set. A variable \code{gender} would be useful, and may be added later. } \source{ Clinical Trials Research Unit, University of Auckland, New Zealand, \url{http://www.ctru.auckland.ac.nz}. } \references{ MacMahon, S., Norton, R., Jackson, R., Mackie, M. J., Cheng, A., Vander Hoorn, S., Milne, A., McCulloch, A.
(1995) Fletcher Challenge-University of Auckland Heart & Health Study: design and baseline findings. \emph{New Zealand Medical Journal}, \bold{108}, 499--502. } \examples{ \dontrun{ with(bmi.nz, plot(age, BMI, col = "blue")) fit <- vgam(BMI ~ s(age, df = c(2, 4, 2)), lms.yjn, data = bmi.nz, trace = TRUE) qtplot(fit, pcol = "blue", tcol = "brown", lcol = "brown") } } \keyword{datasets} VGAM/man/poissonff.Rd0000644000176200001440000001367213135276753014075 0ustar liggesusers\name{poissonff} %\alias{poisson} \alias{poissonff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Poisson Family Function } \description{ Family function for a generalized linear model fitted to Poisson responses. The dispersion parameters may be known or unknown. } \usage{ poissonff(link = "loge", dispersion = 1, onedpar = FALSE, imu = NULL, imethod = 1, parallel = FALSE, zero = NULL, bred = FALSE, earg.link = FALSE, type.fitted = c("mean", "quantiles"), percentiles = c(25, 50, 75)) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to the mean or means. See \code{\link{Links}} for more choices and information. } \item{dispersion}{ Dispersion parameter. By default the dispersion parameter is known and equal to 1, so maximum likelihood is used to estimate the model. However, the user can specify \code{dispersion = 0} to have it estimated, or else specify a known positive value (or values if the response is a matrix---one value per column). } \item{onedpar}{ One dispersion parameter? If the response is a matrix, then a separate dispersion parameter will be computed for each response (column), by default. Setting \code{onedpar=TRUE} will pool them so that there is only one dispersion parameter to be estimated. } \item{parallel}{ A logical or formula. Used only if the response is a matrix. } \item{imu, imethod}{ See \code{\link{CommonVGAMffArguments}} for more information. } \item{zero}{ Can be an integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2,\ldots,\eqn{M}\}, where \eqn{M} is the number of columns of the matrix response. See \code{\link{CommonVGAMffArguments}} for more information. } \item{bred, earg.link}{ Details at \code{\link{CommonVGAMffArguments}}. Setting \code{bred = TRUE} should work for multiple responses and all \pkg{VGAM} link functions; it has been tested for \code{\link{loge}}, \code{\link{identity}} but further testing is required. } \item{type.fitted, percentiles}{ Details at \code{\link{CommonVGAMffArguments}}. } } \details{ \eqn{M} defined above is the number of linear/additive predictors. If the dispersion parameter is unknown, then the resulting estimate is not fully a maximum likelihood estimate. A dispersion parameter that is less/greater than unity corresponds to under-/over-dispersion relative to the Poisson model. Over-dispersion is more common in practice. When fitting a Quadratic RR-VGLM (see \code{\link{cqo}}), the response is a matrix of \eqn{M}, say, columns (e.g., one column per species). Then there will be \eqn{M} dispersion parameters (one per column of the response matrix) if \code{dispersion = 0} and \code{onedpar = FALSE}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}}, \code{\link{cqo}}, and \code{\link{cao}}. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed.
London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ This function will handle a matrix response automatically. The call \code{poissonff(dispersion=0, ...)} is equivalent to \code{quasipoissonff(...)}. The latter was written so that R users of \code{quasipoisson()} would only need to add a ``\code{ff}'' to the end of the family function name. Regardless of whether the dispersion parameter is to be estimated or not, its value can be seen from the output from the \code{summary()} of the object. % With the introduction of name spaces for the \pkg{VGAM} package, % \code{"ff"} can be dropped for this family function. } \section{Warning }{ With multiple responses, assigning a known dispersion parameter for \emph{each} response is not handled well yet. Currently, only a single known dispersion parameter is handled well. } \seealso{ \code{\link{Links}}, \code{\link{quasipoissonff}}, \code{\link{hdeff.vglm}}, \code{\link{genpoisson}}, \code{\link{zipoisson}}, \code{\link{pospoisson}}, \code{\link{oipospoisson}}, \code{\link{otpospoisson}}, \code{\link{skellam}}, \code{\link{mix2poisson}}, \code{\link{cens.poisson}}, \code{\link{ordpoisson}}, \code{\link{amlpoisson}}, \code{\link{inv.binomial}}, \code{\link{simulate.vlm}}, \code{\link{loge}}, \code{\link{polf}}, \code{\link{rrvglm}}, \code{\link{cqo}}, \code{\link{cao}}, \code{\link{binomialff}}, \code{\link{quasibinomialff}}, \code{\link[stats]{poisson}}, \code{\link{poisson.points}}, \code{\link{ruge}}, \code{\link{V1}}. } \examples{ poissonff() set.seed(123) pdata <- data.frame(x2 = rnorm(nn <- 100)) pdata <- transform(pdata, y1 = rpois(nn, exp(1 + x2)), y2 = rpois(nn, exp(1 + x2))) (fit1 <- vglm(cbind(y1, y2) ~ x2, poissonff, data = pdata)) (fit2 <- vglm(y1 ~ x2, poissonff(bred = TRUE), data = pdata)) coef(fit1, matrix = TRUE) coef(fit2, matrix = TRUE) nn <- 200 cdata <- data.frame(x2 = rnorm(nn), x3 = rnorm(nn), x4 = rnorm(nn)) cdata <- transform(cdata, lv1 = 0 + x3 - 2*x4) cdata <- transform(cdata, lambda1 = exp(3 - 0.5 * (lv1-0)^2), lambda2 = exp(2 - 0.5 * (lv1-1)^2), lambda3 = exp(2 - 0.5 * ((lv1+4)/2)^2)) cdata <- transform(cdata, y1 = rpois(nn, lambda1), y2 = rpois(nn, lambda2), y3 = rpois(nn, lambda3)) \dontrun{ lvplot(p1, y = TRUE, lcol = 2:4, pch = 2:4, pcol = 2:4, rug = FALSE) } } \keyword{models} \keyword{regression} %# vvv p1 <- cqo(cbind(y1,y2,y3) ~ x2 + x3 + x4, poissonff, data = cdata, %# vvv eq.tol = FALSE, I.tol = FALSE) %# vvv summary(p1) # # Three dispersion parameters are all unity VGAM/man/inv.lomaxUC.Rd0000644000176200001440000000374013135276753014231 0ustar liggesusers\name{Inv.lomax} \alias{Inv.lomax} \alias{dinv.lomax} \alias{pinv.lomax} \alias{qinv.lomax} \alias{rinv.lomax} \title{The Inverse Lomax Distribution} \description{ Density, distribution function, quantile function and random generation for the inverse Lomax distribution with shape parameter \code{p} and scale parameter \code{scale}. } \usage{ dinv.lomax(x, scale = 1, shape2.p, log = FALSE) pinv.lomax(q, scale = 1, shape2.p, lower.tail = TRUE, log.p = FALSE) qinv.lomax(p, scale = 1, shape2.p, lower.tail = TRUE, log.p = FALSE) rinv.lomax(n, scale = 1, shape2.p) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1}, the length is taken to be the number required.} \item{shape2.p}{shape parameter.} \item{scale}{scale parameter.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. 
} \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dinv.lomax} gives the density, \code{pinv.lomax} gives the distribution function, \code{qinv.lomax} gives the quantile function, and \code{rinv.lomax} generates random deviates. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \details{ See \code{\link{inv.lomax}}, which is the \pkg{VGAM} family function for estimating the parameters by maximum likelihood estimation. } \note{ The inverse Lomax distribution is a special case of the 4-parameter generalized beta II distribution. } \seealso{ \code{\link{inv.lomax}}, \code{\link{genbetaII}}. } \examples{ idata <- data.frame(y = rinv.lomax(n = 1000, exp(2), exp(1))) fit <- vglm(y ~ 1, inv.lomax, data = idata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) } \keyword{distribution} VGAM/man/gammaR.Rd0000644000176200001440000001040213135276753013263 0ustar liggesusers\name{gammaR} \alias{gammaR} %- Also NEED an '\alias' for EACH other topic documented here. \title{ 2-parameter Gamma Distribution } \description{ Estimates the 2-parameter gamma distribution by maximum likelihood estimation. } \usage{ gammaR(lrate = "loge", lshape = "loge", irate = NULL, ishape = NULL, lss = TRUE, zero = "shape") } % zero = ifelse(lss, -2, -1) %- maybe also 'usage' for other objects documented here. \arguments{ % \item{nowarning}{ Logical. Suppress a warning? } \item{lrate, lshape}{ Link functions applied to the (positive) \emph{rate} and \emph{shape} parameters. See \code{\link{Links}} for more choices. } % \item{expected}{ % Logical. Use Fisher scoring? The default is yes, otherwise % Newton-Raphson is used. % expected = TRUE, % } \item{irate, ishape}{ Optional initial values for \emph{rate} and \emph{shape}. A \code{NULL} means a value is computed internally. If a failure to converge occurs, try using these arguments. } % \item{zero}{ % An integer specifying which % linear/additive predictor is to be modelled as an intercept only. % If assigned, the single value should be either 1 or 2 or \code{NULL}. % The default is to model \eqn{shape} as an intercept only. % A value \code{NULL} means neither 1 or 2. % } \item{zero, lss}{ Details at \code{\link{CommonVGAMffArguments}}. } } \details{ The density function is given by \deqn{f(y) = \exp(-rate \times y) \times y^{shape-1} \times rate^{shape} / \Gamma(shape)}{% f(y) = exp(-rate * y) y^(shape-1) rate^(shape) / gamma(shape)} for \eqn{shape > 0}, \eqn{rate > 0} and \eqn{y > 0}. Here, \eqn{\Gamma(shape)}{gamma(shape)} is the gamma function, as in \code{\link[base:Special]{gamma}}. The mean of \emph{Y} is \eqn{\mu = shape/rate}{mu = shape/rate} (returned as the fitted values) with variance \eqn{\sigma^2 = \mu^2 /shape = shape/rate^2}{sigma^2 = mu^2 /shape = shape/rate^2}. By default, the two linear/additive predictors are \eqn{\eta_1 = \log(shape)}{eta1 = log(shape)} and \eqn{\eta_2 = \log(rate)}{eta2 = log(rate)}. % expected = FALSE does not work well. 20140828. % The argument \code{expected} refers to the type of information % matrix. The expected information matrix corresponds to Fisher scoring % and is numerically better here. The observed information matrix % corresponds to the Newton-Raphson algorithm and may be withdrawn % from the family function in the future. 
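% A quick numerical check of the mean mu = shape/rate stated above
% (a sketch with assumed parameter values):
% set.seed(1)
% mean(rgamma(1e5, shape = exp(2), rate = exp(1)))  # Empirical mean
% exp(2) / exp(1)                                   # Theoretical shape/rate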
% If both algorithms work then the differences in the results are often not huge. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Most standard texts on statistical distributions describe the 2-parameter gamma distribution, e.g., Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011) \emph{Statistical Distributions}, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition. } \author{ T. W. Yee } \note{ The parameters \eqn{rate} and \eqn{shape} match with the arguments \code{rate} and \code{shape} of \code{\link[stats]{rgamma}}. The order of the arguments agrees too. Here, \eqn{scale = 1/rate} is used, so one can use \code{\link{negloge}}. Multiple responses are handled. If \eqn{rate = 1}, use the family function \code{\link{gamma1}} to estimate \eqn{shape}. } \seealso{ \code{\link{gamma1}} for the 1-parameter gamma distribution, \code{\link{gamma2}} for another parameterization of the 2-parameter gamma distribution, \code{\link{bigamma.mckay}} for \emph{a} bivariate gamma distribution, \code{\link{expexpff}}, \code{\link{simulate.vlm}}, \code{\link[stats]{rgamma}}, \code{\link{negloge}}. } \examples{ # Essentially a 1-parameter gamma gdata <- data.frame(y1 = rgamma(n <- 100, shape = exp(1))) fit1 <- vglm(y1 ~ 1, gamma1, data = gdata, trace = TRUE) fit2 <- vglm(y1 ~ 1, gammaR, data = gdata, trace = TRUE, crit = "coef") coef(fit2, matrix = TRUE) Coef(fit2) # Essentially a 2-parameter gamma gdata <- data.frame(y2 = rgamma(n = 500, rate = exp(1), shape = exp(2))) fit2 <- vglm(y2 ~ 1, gammaR, data = gdata, trace = TRUE, crit = "coef") coef(fit2, matrix = TRUE) Coef(fit2) summary(fit2) } \keyword{models} \keyword{regression} VGAM/man/topple.Rd0000644000176200001440000000333113135276753013365 0ustar liggesusers\name{topple} \alias{topple} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Topp-Leone Distribution Family Function } \description{ Estimating the parameter of the Topp-Leone distribution by maximum likelihood estimation. } \usage{ topple(lshape = "logit", zero = NULL, gshape = ppoints(8)) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape, zero, gshape}{ More information is at \code{\link{CommonVGAMffArguments}}. } } \details{ The Topple distribution has a probability density function that can be written \deqn{f(y;s) = 2 s (1 - y) [y (2-y)]^{s-1}}{% f(y;s) = 2 * s * (1 - y) * (y * (2-y))^(s-1)} for \eqn{0 < y < 1}{0 < y < 1} and \eqn{0 < s < 1}{0 < s < 1}.

Suppose \eqn{X} has an ordinary normal distribution with mean \eqn{\mu}{mu} and standard deviation \eqn{\sigma}{sigma}, and let \eqn{Y = a_1 X}{Y = a1*X} for \eqn{X \geq 0}{X >= 0}, and \eqn{Y = a_2 X}{Y = a2*X} for \eqn{X < 0}, where \eqn{a_1}{a1} and \eqn{a_2}{a2} are positive constants. Then \eqn{Y} is said to have a \emph{generalized folded normal distribution}. The ordinary folded normal distribution corresponds to the special case \eqn{a_1 = a_2 = 1}{a1 = a2 = 1}. The probability density function of the ordinary folded normal distribution can be written \code{dnorm(y, mean, sd) + dnorm(y, -mean, sd)} for \eqn{y \ge 0}. By default, \code{mean} and \code{log(sd)} are the linear/additive predictors. Having \code{mean=0} and \code{sd=1} results in the \emph{half-normal} distribution. The mean of an ordinary folded normal distribution is \deqn{E(Y) = \sigma \sqrt{2/\pi} \exp(-\mu^2/(2\sigma^2)) + \mu [1-2\Phi(-\mu/\sigma)] }{% E(Y) = sigma*sqrt(2/pi)*exp(-mu^2/(2*sigma^2)) + mu*[1-2*Phi(-mu/sigma)] } and these are returned as the fitted values. Here, \eqn{\Phi()}{Phi} is the cumulative distribution function of a standard normal (\code{\link[stats:Normal]{pnorm}}).
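% The E(Y) formula above can be verified numerically; a sketch with
% assumed values mu = 2 and sigma = exp(1), using abs(rnorm()) for the
% ordinary folded normal:
% mu <- 2; sigma <- exp(1)
% sigma * sqrt(2/pi) * exp(-mu^2 / (2*sigma^2)) + mu * (1 - 2 * pnorm(-mu/sigma))
% mean(abs(rnorm(1e6, mean = mu, sd = sigma)))  # Should agree closely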
} \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Lin, P. C. (2005) Application of the generalized folded-normal distribution to the process capability measures. \emph{International Journal of Advanced Manufacturing Technology}, \bold{26}, 825--830. Johnson, N. L. (1962) The folded normal distribution: accuracy of estimation by maximum likelihood. \emph{Technometrics}, \bold{4}, 249--256. } \author{ Thomas W. Yee } \note{ The response variable for this family function is the same as \code{\link{uninormal}} except positive values are required. Reasonably good initial values are needed. Fisher scoring using simulation is implemented. See \code{\link{CommonVGAMffArguments}} for general information about many of these arguments. Yet to do: implement the results of Johnson (1962) which gives expressions for the EIM, albeit, under a different parameterization. Also, one element of the EIM appears to require numerical integration. } \section{Warning }{ Under- or over-flow may occur if the data is ill-conditioned. It is recommended that several different initial values be used to help avoid local solutions. } \seealso{ \code{\link{rfoldnorm}}, \code{\link{uninormal}}, \code{\link[stats:Normal]{dnorm}}, \code{\link{skewnormal}}. } \examples{ \dontrun{ m <- 2; SD <- exp(1) fdata <- data.frame(y = rfoldnorm(n <- 1000, m = m, sd = SD)) hist(with(fdata, y), prob = TRUE, main = paste("foldnormal(m = ", m, ", sd = ", round(SD, 2), ")")) fit <- vglm(y ~ 1, foldnormal, data = fdata, trace = TRUE) coef(fit, matrix = TRUE) (Cfit <- Coef(fit)) # Add the fit to the histogram: mygrid <- with(fdata, seq(min(y), max(y), len = 200)) lines(mygrid, dfoldnorm(mygrid, Cfit[1], Cfit[2]), col = "orange") } } \keyword{models} \keyword{regression} VGAM/man/model.matrixvlm.Rd0000644000176200001440000000636313135276753015214 0ustar liggesusers\name{model.matrixvlm} \alias{model.matrixvlm} \title{Construct the Design Matrix of a VLM Object} \usage{ model.matrixvlm(object, type = c("vlm", "lm", "lm2", "bothlmlm2"), linpred.index = NULL, \dots) } \arguments{ \item{object}{an object of a class that inherits from the \emph{vector linear model} (VLM). } \item{type}{Type of design matrix returned. The first is the default. The value \code{"vlm"} is the VLM model matrix corresponding to the \code{formula} argument. The value \code{"lm"} is the LM model matrix corresponding to the \code{formula} argument. The value \code{"lm2"} is the second (LM) model matrix corresponding to the \code{form2} argument. The value \code{"bothlmlm2"} means both LM and VLM model matrices. } \item{linpred.index}{ Single integer. The index for a linear/additive predictor, it must have a value from the set \code{1:M}, and \code{type = "lm"} must be assigned. Then it returns a subset of the VLM matrix corresponding to the \code{linpred.index}th linear/additive predictor; this is a LM-type matrix. } \item{\dots}{further arguments passed to or from other methods. These include \code{data} (which is a data frame created with \code{\link{model.framevlm}}), \code{contrasts.arg}, and \code{xlev}. See \code{\link[stats]{model.matrix}} for more information. } } \description{ Creates a design matrix. Two types can be returned: a large one (class \code{"vlm"} or one that inherits from this such as \code{"vglm"}) or a small one (such as returned if it were of class \code{"lm"}). 
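% The size difference between the two types can be seen directly;
% a sketch (an M = 2 proportional odds fit is assumed here):
% pneumo <- transform(pneumo, let = log(exposure.time))
% fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
% dim(model.matrix(fit, type = "lm"))   # n rows, one per observation
% dim(model.matrix(fit, type = "vlm"))  # M * n rows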
} \details{ This function creates a design matrix from \code{object}. This can be a small LM object or a big VLM object (default). The latter is constructed from the former and the constraint matrices. This code implements \emph{smart prediction} (see \code{\link{smartpred}}). } \value{ The design matrix for a regression model with the specified formula and data. If \code{type = "bothlmlm2"} then a list is returned with components \code{"X"} and \code{"Xm2"}. } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. Chambers, J. M. (1992) \emph{Data for models.} Chapter 3 of \emph{Statistical Models in S} eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. } \seealso{ \code{\link[stats]{model.matrix}}, \code{\link{model.framevlm}}, \code{\link{predictvglm}}, \code{\link{smartpred}}. } \examples{ # Illustrates smart prediction pneumo <- transform(pneumo, let = log(exposure.time)) fit <- vglm(cbind(normal, mild, severe) ~ poly(c(scale(let)), 2), multinomial, data = pneumo, trace = TRUE, x = FALSE) class(fit) fit@x # Not saved on the object model.matrix(fit) model.matrix(fit, linpred.index = 1, type = "lm") model.matrix(fit, linpred.index = 2, type = "lm") (Check1 <- head(model.matrix(fit, type = "lm"))) (Check2 <- model.matrix(fit, data = head(pneumo), type = "lm")) all.equal(c(Check1), c(Check2)) q0 <- head(predict(fit)) q1 <- head(predict(fit, newdata = pneumo)) q2 <- predict(fit, newdata = head(pneumo)) all.equal(q0, q1) # Should be TRUE all.equal(q1, q2) # Should be TRUE } \keyword{models} VGAM/man/tikuvUC.Rd0000644000176200001440000000546513135276753013466 0ustar liggesusers\name{Tikuv} \alias{Tikuv} \alias{dtikuv} \alias{ptikuv} \alias{qtikuv} \alias{rtikuv} \title{A Short-tailed Symmetric Distribution } \description{ Density, cumulative distribution function, quantile function and random generation for the short-tailed symmetric distribution of Tiku and Vaughan (1999). } \usage{ dtikuv(x, d, mean = 0, sigma = 1, log = FALSE) ptikuv(q, d, mean = 0, sigma = 1, lower.tail = TRUE, log.p = FALSE) qtikuv(p, d, mean = 0, sigma = 1, lower.tail = TRUE, log.p = FALSE, ...) rtikuv(n, d, mean = 0, sigma = 1, Smallno = 1.0e-6) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Same as in \code{\link[stats]{runif}}. } \item{d, mean, sigma }{ arguments for the parameters of the distribution. See \code{\link{tikuv}} for more details. For \code{rtikuv}, arguments \code{mean} and \code{sigma} must be of length 1. } \item{Smallno}{ Numeric, a small value used by the rejection method for determining the lower and upper limits of the distribution. That is, \code{ptikuv(L) < Smallno} and \code{ptikuv(U) > 1-Smallno} where \code{L} and \code{U} are the lower and upper limits respectively. } \item{\ldots}{ Arguments that can be passed into \code{\link[stats]{uniroot}}. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dtikuv} gives the density, \code{ptikuv} gives the cumulative distribution function, \code{qtikuv} gives the quantile function, and \code{rtikuv} generates random deviates. } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{tikuv}} for more details. } %\note{ % %} \seealso{ \code{\link{tikuv}}. 
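% dtikuv() is a proper density for each d; a quick numerical check
% (d = 1 assumed):
% integrate(dtikuv, lower = -Inf, upper = Inf, d = 1)  # Should be ~1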
} \examples{ \dontrun{ par(mfrow = c(2, 1)) x <- seq(-5, 5, len = 401) plot(x, dnorm(x), type = "l", col = "black", ylab = "", las = 1, main = "Black is standard normal, others are dtikuv(x, d)") lines(x, dtikuv(x, d = -10), col = "orange") lines(x, dtikuv(x, d = -1 ), col = "blue") lines(x, dtikuv(x, d = 1 ), col = "green") legend("topleft", col = c("orange","blue","green"), lty = rep(1, len = 3), legend = paste("d =", c(-10, -1, 1))) plot(x, pnorm(x), type = "l", col = "black", ylab = "", las = 1, main = "Black is standard normal, others are ptikuv(x, d)") lines(x, ptikuv(x, d = -10), col = "orange") lines(x, ptikuv(x, d = -1 ), col = "blue") lines(x, ptikuv(x, d = 1 ), col = "green") legend("topleft", col = c("orange","blue","green"), lty = rep(1, len = 3), legend = paste("d =", c(-10, -1, 1))) } probs <- seq(0.1, 0.9, by = 0.1) ptikuv(qtikuv(p = probs, d = 1), d = 1) - probs # Should be all 0 } \keyword{distribution} VGAM/man/biclaytoncop.Rd0000644000176200001440000000764313135276753014556 0ustar liggesusers\name{biclaytoncop} \alias{biclaytoncop} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Clayton Copula (Bivariate) Family Function } \description{ Estimate the correlation parameter of the (bivariate) Clayton copula distribution by maximum likelihood estimation. } \usage{ biclaytoncop(lapar = "loge", iapar = NULL, imethod = 1, parallel = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lapar, iapar, imethod}{ Details at \code{\link{CommonVGAMffArguments}}. See \code{\link{Links}} for more link function choices. } \item{parallel, zero}{ Details at \code{\link{CommonVGAMffArguments}}. If \code{parallel = TRUE} then the constraint is also applied to the intercept. } } \details{ The cumulative distribution function is \deqn{P(u_1, u_2;\alpha) = (u_1^{-\alpha} + u_2^{-\alpha}-1)^{-1/\alpha}}{% P(u1,u2,alpha) = (u1^(-alpha) + u2^(-alpha)-1)^(-1/alpha)} for \eqn{0 \leq \alpha }{0 <= alpha}. Here, \eqn{\alpha}{alpha} is the association parameter. The support of the function is the interior of the unit square; however, values of 0 and/or 1 are not allowed (currently). The marginal distributions are the standard uniform distributions. When \eqn{\alpha = 0}{alpha=0} the random variables are independent. This \pkg{VGAM} family function can handle multiple responses, for example, a six-column matrix where the first 2 columns are the first of three responses, the next 2 columns are the second response, etc. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ %A Model for Association in Bivariate Survival Data. Clayton, D. (1982) A model for association in bivariate survival data. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{44}, 414--422. Stober, J. and Schepsmeier, U. (2013) Derivatives and Fisher information of bivariate copulas. \emph{Statistical Papers}. } \author{ R. Feyter and T. W. Yee } \note{ The response matrix must have a multiple of two columns. Currently, the fitted value is a matrix with the same number of columns and values equal to 0.5. This is because each marginal distribution corresponds to a standard uniform distribution. This \pkg{VGAM} family function is fragile; each response must be in the interior of the unit square.
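% For the Clayton copula, Kendall's tau equals alpha / (alpha + 2)
% (a standard result, not stated in this file); an empirical sketch:
% set.seed(1)
% ymat <- rbiclaytoncop(n = 5000, apar = exp(1))
% kendall.tau(ymat[, 1], ymat[, 2])  # Near exp(1) / (exp(1) + 2) = 0.576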
% Setting \code{crit = "coef"} is sometimes a good idea because % inaccuracies in \code{\link{pbinorm}} might mean % unnecessary half-stepping will occur near the solution. } \seealso{ \code{\link{rbiclaytoncop}}, \code{\link{dbiclaytoncop}}, \code{\link{kendall.tau}}. } \examples{ ymat <- rbiclaytoncop(n = (nn <- 1000), apar = exp(2)) bdata <- data.frame(y1 = ymat[, 1], y2 = ymat[, 2], y3 = ymat[, 1], y4 = ymat[, 2], x2 = runif(nn)) summary(bdata) \dontrun{ plot(ymat, col = "blue") } fit1 <- vglm(cbind(y1, y2, y3, y4) ~ 1, # 2 responses, e.g., (y1,y2) is the first biclaytoncop, data = bdata, trace = TRUE, crit = "coef") # Sometimes a good idea coef(fit1, matrix = TRUE) Coef(fit1) head(fitted(fit1)) summary(fit1) # Another example; apar is a function of x2 bdata <- transform(bdata, apar = exp(-0.5 + x2)) ymat <- rbiclaytoncop(n = nn, apar = with(bdata, apar)) bdata <- transform(bdata, y5 = ymat[, 1], y6 = ymat[, 2]) fit2 <- vgam(cbind(y5, y6) ~ s(x2), data = bdata, biclaytoncop(lapar = "loge"), trace = TRUE) \dontrun{ plot(fit2, lcol = "blue", scol = "orange", se = TRUE, las = 1) } } \keyword{models} \keyword{regression} % for real \eqn{\alpha}{alpha} in (-1,1). VGAM/man/posbinomial.Rd0000644000176200001440000001203313135276753014375 0ustar liggesusers\name{posbinomial} \alias{posbinomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Positive Binomial Distribution Family Function } \description{ Fits a positive binomial distribution. } \usage{ posbinomial(link = "logit", multiple.responses = FALSE, parallel = FALSE, omit.constant = FALSE, p.small = 1e-4, no.warning = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link, multiple.responses, parallel, zero}{ Details at \code{\link{CommonVGAMffArguments}}. } \item{omit.constant}{ Logical. If \code{TRUE} then the constant (\code{lchoose(size, size * yprop)} is omitted from the \code{loglikelihood} calculation. If the model is to be compared using \code{AIC()} or \code{BIC()} (see \code{\link{AICvlm}} or \code{\link{BICvlm}}) to the likes of \code{\link{posbernoulli.tb}} etc. then it is important to set \code{omit.constant = TRUE} because all models then will not have any normalizing constants in the likelihood function. Hence they become comparable. This is because the \eqn{M_0} Otis et al. (1978) model coincides with \code{posbinomial()}. See below for an example. Also see \code{\link{posbernoulli.t}} regarding estimating the population size (\code{N.hat} and \code{SE.N.hat}) if the number of trials is the same for all observations. } \item{p.small, no.warning}{ See \code{\link{posbernoulli.t}}. } } \details{ The positive binomial distribution is the ordinary binomial distribution but with the probability of zero being zero. Thus the other probabilities are scaled up (i.e., divided by \eqn{1-P(Y=0)}{1-P(Y=0)}). The fitted values are the ordinary binomial distribution fitted values, i.e., the usual mean. In the capture--recapture literature this model is called the \eqn{M_0} if it is an intercept-only model. Otherwise it is called the \eqn{M_h} when there are covariates. It arises from a sum of a sequence of \eqn{\tau}-Bernoulli random variates subject to at least one success (capture). Here, each animal has the same probability of capture or recapture, regardless of the \eqn{\tau} sampling occasions. Independence between animals and between sampling occasions etc. is assumed. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). 
The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Otis, D. L. et al. (1978) Statistical inference from capture data on closed animal populations, \emph{Wildlife Monographs}, \bold{62}, 3--135. Patil, G. P. (1962) Maximum likelihood estimation for generalised power series distributions and its application to a truncated binomial distribution. \emph{Biometrika}, \bold{49}, 227--237. Pearson, K. (1913) \emph{A Monograph on Albinism in Man}. Drapers Company Research Memoirs. } \author{ Thomas W. Yee } \note{ The input for this family function is the same as \code{\link{binomialff}}. If \code{multiple.responses = TRUE} then each column of the matrix response should be a count (the number of successes), and the \code{weights} argument should be a matrix of the same dimension as the response containing the number of trials. If \code{multiple.responses = FALSE} then the response input should be the same as \code{\link{binomialff}}. Yet to be done: a \code{quasi.posbinomial()} which estimates a dispersion parameter. } \section{Warning }{ Under- or over-flow may occur if the data is ill-conditioned. } \seealso{ \code{\link{posbernoulli.b}}, \code{\link{posbernoulli.t}}, \code{\link{posbernoulli.tb}}, \code{\link{binomialff}}, \code{\link{AICvlm}}, \code{\link{BICvlm}}, \code{\link{simulate.vlm}}. } \examples{ # Number of albinotic children in families with 5 kids (from Patil, 1962) ,,,, albinos <- data.frame(y = c(rep(1, 25), rep(2, 23), rep(3, 10), 4, 5), n = rep(5, 60)) fit1 <- vglm(cbind(y, n-y) ~ 1, posbinomial, albinos, trace = TRUE) summary(fit1) Coef(fit1) # = MLE of p = 0.3088 head(fitted(fit1)) sqrt(vcov(fit1, untransform = TRUE)) # SE = 0.0322 # Fit a M_0 model (Otis et al. 1978) to the deermice data ,,,,,,,,,,,,,,,,,,,,,,, M.0 <- vglm(cbind( y1 + y2 + y3 + y4 + y5 + y6, 6 - y1 - y2 - y3 - y4 - y5 - y6) ~ 1, trace = TRUE, posbinomial(omit.constant = TRUE), data = deermice) coef(M.0, matrix = TRUE) Coef(M.0) constraints(M.0, matrix = TRUE) summary(M.0) c( N.hat = M.0@extra$N.hat, # Since tau = 6, i.e., 6 Bernoulli trials per SE.N.hat = M.0@extra$SE.N.hat) # observation is the same for each observation # Compare it to the M_b using AIC and BIC M.b <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ 1, trace = TRUE, posbernoulli.b, data = deermice) sort(c(M.0 = AIC(M.0), M.b = AIC(M.b))) # Okay since omit.constant = TRUE sort(c(M.0 = BIC(M.0), M.b = BIC(M.b))) # Okay since omit.constant = TRUE } \keyword{models} \keyword{regression} % albinos <- transform(albinos, yprop = y / 5) VGAM/man/s.Rd0000644000176200001440000001251113135276753012324 0ustar liggesusers\name{s} \alias{s} %- Also NEED an `\alias' for EACH other topic documented here. \title{ Defining Smooths in VGAM Formulas } \description{ \code{s} is used in the definition of (vector) smooth terms within \code{vgam} formulas. This corresponds to 1st-generation VGAMs that use backfitting for their estimation. The effective degrees of freedom is prespecified. } \usage{ s(x, df = 4, spar = 0, ...) } %- maybe also `usage' for other objects documented here. \arguments{ \item{x}{ covariate (abscissae) to be smoothed. Note that \code{x} must be a \emph{single} variable and not a function of a variable. For example, \code{s(x)} is fine but \code{s(log(x))} will fail. In this case, let \code{logx <- log(x)} (in the data frame), say, and then use \code{s(logx)}. At this stage bivariate smoothers (\code{x} would be a two-column matrix) are not implemented. 
} \item{df}{ numerical vector of length \eqn{r}. Effective degrees of freedom: must lie between 1 (linear fit) and \eqn{n} (interpolation). Thus one could say that \code{df-1} is the \emph{effective nonlinear degrees of freedom} (ENDF) of the smooth. Recycling of values will be used if \code{df} is not of length \eqn{r}. If \code{spar} is positive then this argument is ignored. Thus \code{s()} means that the effective degrees of freedom is prespecified. If it is known that the component function(s) are more wiggly than usual then try increasing the value of this argument. } \item{spar}{ numerical vector of length \eqn{r}. Positive smoothing parameters (after scaling) . Larger values mean more smoothing so that the solution approaches a linear fit for that component function. A zero value means that \code{df} is used. Recycling of values will be used if \code{spar} is not of length \eqn{r}. } \item{\dots}{ Ignored for now. } } \details{ In this help file \eqn{M} is the number of additive predictors and \eqn{r} is the number of component functions to be estimated (so that \eqn{r} is an element from the set \{1,2,\ldots,\eqn{M}\}). Also, if \eqn{n} is the number of \emph{distinct} abscissae, then \code{s} will fail if \eqn{n < 7}. \code{s}, which is symbolic and does not perform any smoothing itself, only handles a single covariate. Note that \code{s} works in \code{\link{vgam}} only. It has no effect in \code{\link{vglm}} (actually, it is similar to the identity function \code{\link[base:AsIs]{I}} so that \code{s(x2)} is the same as \code{x2} in the LM model matrix). It differs from the \code{s()} of the \pkg{gam} package and the \code{\link[mgcv]{s}} of the \pkg{mgcv} package; they should not be mixed together. Also, terms involving \code{s} should be simple additive terms, and not involving interactions and nesting etc. For example, \code{myfactor:s(x2)} is not a good idea. % It also differs from the S-PLUS \code{s} which allows % \code{spar} to be negative; \pkg{VGAM} does not allow this. } \value{ A vector with attributes that are (only) used by \code{vgam}. } \references{ Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. } \author{ Thomas W. Yee } \note{ The vector cubic smoothing spline which \code{s()} represents is computationally demanding for large \eqn{M}. The cost is approximately \eqn{O(n M^3)} where \eqn{n} is the number of unique abscissae. Currently a bug relating to the use of \code{s()} is that only constraint matrices whose columns are orthogonal are handled correctly. If any \code{s()} term has a constraint matrix that does not satisfy this condition then a warning is issued. See \code{\link{is.buggy}} for more information. A more modern alternative to using \code{s} with \code{\link{vgam}} is to use \code{\link{sm.os}} or \code{\link{sm.ps}}. This does not require backfitting and allows automatic smoothing parameter selection. However, this alternative should only be used when the sample size is reasonably large (\eqn{> 500}, say). These are called Generation-2 VGAMs. Another alternative to using \code{s} with \code{\link{vgam}} is \code{\link[splines]{bs}} and/or \code{\link[splines]{ns}} with \code{\link{vglm}}. The latter implements half-stepping, which is helpful if convergence is difficult. 
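% For example, the regression-spline alternative mentioned above might
% look like this (a sketch only):
% library(splines)
% fit.bs <- vglm(agaaus ~ bs(altitude, df = 3), binomialff, data = hunua)
% coef(fit.bs)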
} % ~Make other sections like WARNING with \section{WARNING }{....} ~ \seealso{ \code{\link{vgam}}, \code{\link{is.buggy}}, \code{\link{sm.os}}, \code{\link{sm.ps}}, \code{\link{vsmooth.spline}}. } \examples{ # Nonparametric logistic regression fit1 <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = hunua) \dontrun{ plot(fit1, se = TRUE) } # Bivariate logistic model with artificial data nn <- 300 bdata <- data.frame(x1 = runif(nn), x2 = runif(nn)) bdata <- transform(bdata, y1 = rbinom(nn, size = 1, prob = logit(sin(2 * x2), inverse = TRUE)), y2 = rbinom(nn, size = 1, prob = logit(sin(2 * x2), inverse = TRUE))) fit2 <- vgam(cbind(y1, y2) ~ x1 + s(x2, 3), trace = TRUE, binom2.or(exchangeable = TRUE), data = bdata) coef(fit2, matrix = TRUE) # Hard to interpret \dontrun{ plot(fit2, se = TRUE, which.term = 2, scol = "blue") } } \keyword{models} \keyword{regression} \keyword{smooth} % binom2.or(exchangeable = TRUE ~ s(x2, 3)) VGAM/man/betaII.Rd0000644000176200001440000000616413135276753013226 0ustar liggesusers\name{betaII} \alias{betaII} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Beta Distribution of the Second Kind } \description{ Maximum likelihood estimation of the 3-parameter beta II distribution. } \usage{ betaII(lscale = "loge", lshape2.p = "loge", lshape3.q = "loge", iscale = NULL, ishape2.p = NULL, ishape3.q = NULL, imethod = 1, gscale = exp(-5:5), gshape2.p = exp(-5:5), gshape3.q = seq(0.75, 4, by = 0.25), probs.y = c(0.25, 0.5, 0.75), zero = "shape") } %- maybe also 'usage' for other objects documented here. % probs.y = c(0.25, 0.5, 0.75), zero = -(2:3) \arguments{ \item{lscale, lshape2.p, lshape3.q}{ Parameter link functions applied to the (positive) parameters \code{scale}, \code{p} and \code{q}. See \code{\link{Links}} for more choices. } \item{iscale, ishape2.p, ishape3.q, imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{gscale, gshape2.p, gshape3.q}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{probs.y}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The 3-parameter beta II is the 4-parameter \emph{generalized} beta II distribution with shape parameter \eqn{a=1}. It is also known as the Pearson VI distribution. Other distributions which are special cases of the 3-parameter beta II include the Lomax (\eqn{p=1}) and inverse Lomax (\eqn{q=1}). More details can be found in Kleiber and Kotz (2003). The beta II distribution has density \deqn{f(y) = y^{p-1} / [b^p B(p,q) \{1 + y/b\}^{p+q}]}{% f(y) = y^(p-1) / [b^p B(p,q) (1 + y/b)^(p+q)]} for \eqn{b > 0}, \eqn{p > 0}, \eqn{q > 0}, \eqn{y \geq 0}{y >= 0}. Here, \eqn{b} is the scale parameter \code{scale}, and the others are shape parameters. The mean is \deqn{E(Y) = b \, \Gamma(p + 1) \, \Gamma(q - 1) / (\Gamma(p) \, \Gamma(q))}{% E(Y) = b gamma(p + 1) gamma(q - 1) / ( gamma(p) gamma(q))} provided \eqn{q > 1}; these are returned as the fitted values. This family function handles multiple responses. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee } \note{ See the notes in \code{\link{genbetaII}}. 
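% The mean formula above simplifies to b * p / (q - 1); a numerical
% sketch using the representation Y = b * B / (1 - B) with B ~ Beta(p, q)
% (a standard result, assumed here rather than taken from this file):
% b <- exp(1); p <- 2; q <- 4; set.seed(1)
% B <- rbeta(1e5, p, q)
% mean(b * B / (1 - B))                                    # Empirical
% b * gamma(p + 1) * gamma(q - 1) / (gamma(p) * gamma(q))  # = b * p / (q - 1)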
} \seealso{ \code{\link{betaff}}, \code{\link{genbetaII}}, \code{\link{dagum}}, \code{\link{sinmad}}, \code{\link{fisk}}, \code{\link{inv.lomax}}, \code{\link{lomax}}, \code{\link{paralogistic}}, \code{\link{inv.paralogistic}}. } \examples{ bdata <- data.frame(y = rsinmad(2000, shape1.a = 1, shape3.q = exp(2), scale = exp(1))) # Not genuine data! fit <- vglm(y ~ 1, betaII, data = bdata, trace = TRUE) fit <- vglm(y ~ 1, betaII(ishape2.p = 0.7, ishape3.q = 0.7), data = bdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/cardUC.Rd0000644000176200001440000000517213135276753013230 0ustar liggesusers\name{Card} \alias{Card} \alias{dcard} \alias{pcard} \alias{qcard} \alias{rcard} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Cardioid Distribution } \description{ Density, distribution function, quantile function and random generation for the cardioid distribution. } \usage{ dcard(x, mu, rho, log = FALSE) pcard(q, mu, rho, lower.tail = TRUE, log.p = FALSE) qcard(p, mu, rho, tolerance = 1e-07, maxits = 500, lower.tail = TRUE, log.p = FALSE) rcard(n, mu, rho, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Same as in \code{\link[stats]{runif}}. } \item{mu, rho}{ See \code{\link{cardioid}} for more information. } \item{tolerance, maxits, ...}{ The first two are control parameters for the algorithm used to solve for the roots of a nonlinear system of equations; \code{tolerance} controls for the accuracy and \code{maxits} is the maximum number of iterations. \code{rcard} calls \code{qcard} so the \code{...} can be used to vary the two arguments. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \details{ See \code{\link{cardioid}}, the \pkg{VGAM} family function for estimating the two parameters by maximum likelihood estimation, for the formula of the probability density function and other details. } \value{ \code{dcard} gives the density, \code{pcard} gives the distribution function, \code{qcard} gives the quantile function, and \code{rcard} generates random deviates. } %\references{ } \author{ Thomas W. Yee and Kai Huang } \note{ Convergence problems might occur with \code{rcard}. } \seealso{ \code{\link{cardioid}}. 
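% The cardioid density integrates to one over (0, 2*pi); a quick check
% with assumed parameter values:
% integrate(dcard, lower = 0, upper = 2 * pi, mu = 4, rho = 0.4)  # ~1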
} \examples{ \dontrun{ mu <- 4; rho <- 0.4; x <- seq(0, 2*pi, len = 501) plot(x, dcard(x, mu, rho), type = "l", las = 1, ylim = c(0, 1), col = "blue", ylab = paste("[dp]card(mu=", mu, ", rho=", rho, ")"), main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles") lines(x, pcard(x, mu, rho), col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qcard(probs, mu, rho) lines(Q, dcard(Q, mu, rho), col = "purple", lty = 3, type = "h") lines(Q, pcard(Q, mu, rho), col = "purple", lty = 3, type = "h") abline(h = c(0,probs, 1), v = c(0, 2*pi), col = "purple", lty = 3) max(abs(pcard(Q, mu, rho) - probs)) # Should be 0 } } \keyword{distribution} VGAM/man/huberUC.Rd0000644000176200001440000000670213135276753013424 0ustar liggesusers\name{dhuber} \alias{dhuber} \alias{edhuber} \alias{rhuber} \alias{qhuber} \alias{phuber} \title{Huber's Least Favourable Distribution} \description{ Density, distribution function, quantile function and random generation for Huber's least favourable distribution, see Huber and Ronchetti (2009). } \usage{ dhuber(x, k = 0.862, mu = 0, sigma = 1, log = FALSE) edhuber(x, k = 0.862, mu = 0, sigma = 1, log = FALSE) rhuber(n, k = 0.862, mu = 0, sigma = 1) qhuber(p, k = 0.862, mu = 0, sigma = 1, lower.tail = TRUE, log.p = FALSE) phuber(q, k = 0.862, mu = 0, sigma = 1, lower.tail = TRUE, log.p = FALSE) } \arguments{ \item{x, q}{numeric vector, vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of random values to be generated. If \code{length(n) > 1} then the length is taken to be the number required. } \item{k}{numeric. Borderline value of central Gaussian part of the distribution. This is known as the tuning constant, and should be positive. For example, \code{k = 0.862} refers to a 20\% contamination neighborhood of the Gaussian distribution. If \code{k = 1.40} then this is 5\% contamination. } \item{mu}{numeric. distribution mean.} \item{sigma}{numeric. Distribution scale (\code{sigma = 1} defines the distribution in standard form, with standard Gaussian centre).} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the result is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \details{ Details are given in \code{\link{huber2}}, the \pkg{VGAM} family function for estimating the parameters \code{mu} and \code{sigma}. } \value{ \code{dhuber} gives out a vector of density values. \code{edhuber} gives out a list with components \code{val} (density values) and \code{eps} (contamination proportion). \code{rhuber} gives out a vector of random numbers generated by Huber's least favourable distribution. \code{phuber} gives the distribution function, \code{qhuber} gives the quantile function. } %\references{ % Huber, P. J. and Ronchetti, E. (2009) % \emph{Robust Statistics}, 2nd ed. New York: Wiley. % % % Huber, P. J. and Ronchetti, E. (2009) Robust Statistics % (2nd ed.). Wiley, New York. % % %} \author{ Christian Hennig wrote \code{[d,ed,r]huber()} (from \pkg{smoothmest}) and slight modifications were made by T. W. Yee to replace looping by vectorization and addition of the \code{log} argument. Arash Ardalan wrote \code{[pq]huber()}, and two arguments for these were implemented by Kai Huang. This helpfile was adapted from \pkg{smoothmest}. } \seealso{ \code{\link{huber2}}. 
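% dhuber() is a proper density, and edhuber() reports the implied
% contamination proportion eps; a quick sketch with the default k:
% integrate(dhuber, lower = -Inf, upper = Inf)$value  # Should be ~1
% edhuber(0, k = 0.862)$eps  # About 0.2, i.e., 20 percent contamination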
} \examples{ set.seed(123456) edhuber(1:5, k = 1.5) rhuber(5) \dontrun{ mu <- 3; xx <- seq(-2, 7, len = 100) # Plot CDF and PDF plot(xx, dhuber(xx, mu = mu), type = "l", col = "blue", las = 1, ylab = "", main = "blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles", ylim = 0:1) abline(h = 0, col = "blue", lty = 2) lines(xx, phuber(xx, mu = mu), type = "l", col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qhuber(probs, mu = mu) lines(Q, dhuber(Q, mu = mu), col = "purple", lty = 3, type = "h") lines(Q, phuber(Q, mu = mu), col = "purple", lty = 3, type = "h") abline(h = probs, col = "purple", lty = 3) phuber(Q, mu = mu) - probs # Should be all 0s } } \keyword{distribution} VGAM/man/lrtest.Rd0000644000176200001440000001110713135276753013377 0ustar liggesusers\name{lrtest} \alias{lrtest} \alias{lrtest_vglm} %\alias{update_formula} %\alias{update_default} \title{Likelihood Ratio Test of Nested Models} \description{ \code{lrtest} is a generic function for carrying out likelihood ratio tests. The default method can be employed for comparing nested VGLMs (see details below). } \usage{ lrtest(object, \dots) lrtest_vglm(object, \dots, no.warning = FALSE, name = NULL) } %\method{lrtest}{default}(object, \dots, name = NULL) %\method{lrtest}{formula}(object, \dots, data = list()) \arguments{ \item{object}{ a \code{\link{vglm}} object. See below for details. } \item{\dots}{ further object specifications passed to methods. See below for details. } \item{no.warning}{ logical; if \code{TRUE} then no warning is issued. For example, setting \code{TRUE} might be a good idea when testing for linearity of a variable for a \code{"pvgam"} object. } \item{name}{ a function for extracting a suitable name/description from a fitted model object. By default the name is queried by calling \code{\link{formula}}. } % \item{data}{ % a data frame containing the variables in the model. % % } } \details{ \code{lrtest} is intended to be a generic function for comparisons of models via asymptotic likelihood ratio tests. The default method consecutively compares the fitted model object \code{object} with the models passed in \code{\dots}. Instead of passing the fitted model objects in \code{\dots}, several other specifications are possible. The updating mechanism is the same as for \code{\link[lmtest]{waldtest}}: the models in \code{\dots} can be specified as integers, characters (both for terms that should be eliminated from the previous model), update formulas or fitted model objects. Except for the last case, the existence of an \code{\link[stats]{update}} method is assumed. See \code{\link[lmtest]{waldtest}} for details. Subsequently, an asymptotic likelihood ratio test for each two consecutive models is carried out: Twice the difference in log-likelihoods (as derived by the \code{\link[stats]{logLik}} methods) is compared with a Chi-squared distribution. % The \code{"formula"} method fits a \code{\link{lm}} % first and then calls the default method. } \note{ The code was adapted directly from \pkg{lmtest} (written by T. Hothorn, A. Zeileis, G. Millo, D. Mitchell) and made to work for VGLMs and S4. This help file also was adapted from \pkg{lmtest}. \emph{Approximate} LRTs might be applied to VGAMs, as produced by \code{\link{vgam}}, but it is probably better in inference to use \code{\link{vglm}} with regression splines (\code{\link[splines]{bs}} and \code{\link[splines]{ns}}). 
This methods function should not be applied to other models such as those produced by \code{\link{rrvglm}}, by \code{\link{cqo}}, by \code{\link{cao}}. } \section{Warning }{ Several \pkg{VGAM} family functions implement distributions which do not satisfying the usual regularity conditions needed for the LRT to work. No checking or warning is given for these. } \value{ An object of class \code{"VGAManova"} which contains a slot with the log-likelihood, degrees of freedom, the difference in degrees of freedom, likelihood ratio Chi-squared statistic and corresponding p value. These are printed by \code{stats:::print.anova()}; see \code{\link[stats]{anova}}. } \seealso{ \pkg{lmtest}, \code{\link{vglm}}, \code{\link{lrp.vglm}}. % \code{\link{waldtest}} % \code{update_default}, % \code{update_formula}. } \examples{ set.seed(1) pneumo <- transform(pneumo, let = log(exposure.time), x3 = runif(nrow(pneumo))) fit1 <- vglm(cbind(normal, mild, severe) ~ let , propodds, data = pneumo) fit2 <- vglm(cbind(normal, mild, severe) ~ let + x3, propodds, data = pneumo) fit3 <- vglm(cbind(normal, mild, severe) ~ let , cumulative, data = pneumo) # Various equivalent specifications of the LR test for testing x3 (ans1 <- lrtest(fit2, fit1)) ans2 <- lrtest(fit2, 2) ans3 <- lrtest(fit2, "x3") ans4 <- lrtest(fit2, . ~ . - x3) c(all.equal(ans1, ans2), all.equal(ans1, ans3), all.equal(ans1, ans4)) # Doing it manually (testStatistic <- 2 * (logLik(fit2) - logLik(fit1))) (mypval <- pchisq(testStatistic, df = length(coef(fit2)) - length(coef(fit1)), lower.tail = FALSE)) (ans4 <- lrtest(fit3, fit1)) # Test proportional odds (parallelism) assumption } \keyword{htest} %(testStatistic <- 2 * (logLik(fit3) - logLik(fit1))) %(mypval <- pchisq(testStatistic, df = length(coef(fit3)) - length(coef(fit1)), % lower.tail = FALSE)) VGAM/man/Links.Rd0000644000176200001440000002560113135276753013146 0ustar liggesusers\name{Links} \alias{Links} \alias{TypicalVGAMlink} \title{Link functions for VGLM/VGAM/etc. families} \description{ The \pkg{VGAM} package provides a number of (parameter) link functions which are described in general here. Collectively, they offer the user considerable choice and flexibility for modelling data. } \usage{ TypicalVGAMlink(theta, someParameter = 0, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } \arguments{ \item{theta}{ Numeric or character. This is usually \eqn{\theta}{theta} (default) but can sometimes be \eqn{\eta}{eta}, depending on the other arguments. If \code{theta} is character then \code{inverse} and \code{deriv} are ignored. The name \code{theta} should always be the name of the first argument. } \item{someParameter}{ Some parameter, e.g., an offset. } \item{bvalue}{ Boundary value, positive if given. If \code{0 < theta} then values of \code{theta} which are less than or equal to 0 can be replaced by \code{bvalue} before computing the link function value. Values of \code{theta} which are greater than or equal to 1 can be replaced by 1 minus \code{bvalue} before computing the link function value. The value \code{bvalue = .Machine$double.eps} is sometimes a reasonable value, or something slightly higher. } % \item{earg}{ % List. % Extra argument allowing for additional information, specific to the % link function. For example, for \code{\link{logoff}}, this will % contain the offset value. The argument \code{earg} is % always a list with \emph{named} components. See each specific link % function to find the component names for the list. 
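% A link and its inverse undo each other; e.g., for logit (a small sketch):
% p <- c(0.2, 0.5, 0.8)
% logit(logit(p), inverse = TRUE)  # Recovers p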
% % % Almost all \pkg{VGAM} family functions with a single link % function have an argument (often called \code{earg}) which will % allow parameters to be inputted for that link function. % For \pkg{VGAM} family functions with more than one link % function there usually will be an \code{earg}-type argument for % each link. For example, if there are two links called % \code{lshape} and \code{lscale} then % the \code{earg}-type arguments for these might be called % \code{eshape} and \code{escale}, say. % % } \item{inverse}{ Logical. If \code{TRUE} and \code{deriv = 0} then the inverse link value \eqn{\theta}{theta} is returned, hence the argument \code{theta} is really \eqn{\eta}{eta}. In all other cases, the argument \code{theta} is really \eqn{\theta}{theta}. } \item{deriv}{ Integer. Either 0, 1, or 2, specifying the order of the derivative. Some link functions handle values up to 3 or 4. } \item{short, tag}{ Logical. These are used for labelling the \code{blurb} slot of a \code{\link{vglmff-class}} object. These arguments are used only if \code{theta} is character, and the formula for the link is returned in character form. If \code{tag = TRUE} then the result is preceded by a little more information. } } \value{ Returns one of: the link function value or its first or second derivative, the inverse link or its first or second derivative, or a character description of the link. Here are the general details. If \code{inverse = FALSE} and \code{deriv = 0} (default) then the ordinary link function \eqn{\eta = g(\theta)}{eta = g(theta)} is returned. If \code{inverse = TRUE} and \code{deriv = 0} then the inverse link function value is returned, hence \code{theta} is really \eqn{\eta}{eta} (the only occasion this happens). If \code{inverse = FALSE} and \code{deriv = 1} then it is \eqn{d\eta / d\theta}{d eta / d theta} \emph{as a function of} \eqn{\theta}{theta}. If \code{inverse = FALSE} and \code{deriv = 2} then it is \eqn{d^2\eta / d\theta^2}{d^2 eta / d theta^2} \emph{as a function of} \eqn{\theta}{theta}. If \code{inverse = TRUE} and \code{deriv = 1} then it is \eqn{d\theta / d\eta}{d theta / d eta} \emph{as a function of} \eqn{\theta}{theta}. If \code{inverse = TRUE} and \code{deriv = 2} then it is \eqn{d^2\theta / d\eta^2}{d^2 theta / d eta^2} \emph{as a function of} \eqn{\theta}{theta}. It is only when \code{deriv = 1} that \code{linkfun(theta, deriv = 1, inverse = TRUE)} and \code{linkfun(theta, deriv = 1, inverse = FALSE)} are \emph{reciprocals} of each other. In particular, \code{linkfun(theta, deriv = 2, inverse = TRUE)} and \code{linkfun(theta, deriv = 2, inverse = FALSE)} are \emph{not} reciprocals of each other in general. % Prior to 20150711; this was what it was: % If \code{inverse = FALSE} and \code{deriv = 1} then it is % \eqn{d\theta / d\eta}{d theta / d eta} % \emph{as a function of} \eqn{\theta}{theta}. % If \code{inverse = FALSE} and \code{deriv = 2} then it is % \eqn{d^2\theta / d\eta^2}{d^2 theta / d eta^2} % \emph{as a function of} \eqn{\theta}{theta}. % If \code{inverse = TRUE} and \code{deriv = 0} then the inverse % link function is returned, hence \code{theta} is really % \eqn{\eta}{eta}. % If \code{inverse = TRUE} and \code{deriv} is positive then the % \emph{reciprocal} of the same link function with % \code{(theta = theta, someParameter, inverse = TRUE, deriv = deriv)} % is returned. } \details{ Almost all \pkg{VGAM} link functions have something similar to the argument list as given above.
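% The deriv = 1 reciprocal identity stated under 'Value' can be checked
% numerically (a small sketch using logit):
% p <- c(0.2, 0.5, 0.8)
% logit(p, deriv = 1) * logit(p, deriv = 1, inverse = TRUE)  # All 1s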
In this help file we have \eqn{\eta = g(\theta)}{eta = g(theta)} where \eqn{g} is the link function, \eqn{\theta}{theta} is the parameter and \eqn{\eta}{eta} is the linear/additive predictor. The link \eqn{g} must be strictly monotonic and twice-differentiable in its range. % The arguments \code{short} and \code{tag} are used only if % \code{theta} is character. % That is, there is a matching \code{earg} for each \code{link} argument. The following is a brief enumeration of all \pkg{VGAM} link functions. For parameters lying between 0 and 1 (e.g., probabilities): \code{\link{logit}}, \code{\link{probit}}, \code{\link{cloglog}}, \code{\link{cauchit}}, \code{\link{foldsqrt}}, \code{\link{logc}}, \code{\link{golf}}, \code{\link{polf}}, \code{\link{nbolf}}. For positive parameters (i.e., greater than 0): \code{\link{loge}}, \code{\link{negloge}}, \code{\link{powerlink}}. For parameters greater than 1: \code{\link{loglog}}. For parameters between \eqn{-1} and \eqn{1}: \code{\link{fisherz}}, \code{\link{rhobit}}. For parameters between \eqn{A} and \eqn{B}: \code{\link{extlogit}}, \code{\link{logoff}} (\eqn{B = \infty}{B = Inf}). For unrestricted parameters (i.e., any value): \code{\link{identity}}, \code{\link{negidentity}}, \code{\link{reciprocal}}, \code{\link{negreciprocal}}. % Other links: } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \section{Warning }{ The output of link functions changed at \pkg{VGAM} \code{0.9-9} (date was around 2015-07). What was formerly returned by \code{linkfun(theta, deriv = 1)} is now returned by \code{linkfun(theta, deriv = 1, inverse = TRUE)}, or equivalently, by \code{1 / linkfun(theta, deriv = 1, inverse = FALSE)}. Also, what was formerly returned by \code{linkfun(theta, deriv = 2)} equalled \code{1 / linkfun(theta, deriv = 2, inverse = TRUE)}; this was a bug. Altogether, these are big changes and the user should beware! One day in the future, \emph{all} \pkg{VGAM} link functions may be renamed to end in the characters \code{"link"}. } \seealso{ \code{\link{TypicalVGAMfamilyFunction}}, \code{\link{linkfun}}, \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}}. \code{\link{cqo}}, \code{\link{cao}}. % \code{\link{uqo}}. } \author{T. W. Yee} \note{ \pkg{VGAM} link functions are generally not compatible with other functions outside the package. In particular, they won't work with \code{\link[stats]{glm}} or any other package for fitting GAMs. From October 2006 onwards, all \pkg{VGAM} family functions will only contain one default value for each link argument rather than giving a vector of choices. For example, rather than \code{binomialff(link = c("logit", "probit", "cloglog", "cauchit", "identitylink"), ...)} it is now \code{binomialff(link = "logit", ...)}. No checking will be done to see if the user's choice is reasonable. This means that the user can write his/her own \pkg{VGAM} link function and use it within any \pkg{VGAM} family function. Altogether this provides greater flexibility. The downside is that the user must specify the \emph{full} name of the link function, by either assigning the link argument the full name as a character string, or just the name itself. See the examples below. From August 2012 onwards, a major change in link functions occurred. Argument \code{esigma} (and the like such as \code{earg}) used to be in \pkg{VGAM} prior to version 0.9-0 (released during the 2nd half of 2012).
The major change is that arguments such as \code{offset} that used to be passed in via those arguments can be passed in directly through the link function. For example, \code{gev(lshape = "logoff", eshape = list(offset = 0.5))} is replaced by \code{gev(lshape = logoff(offset = 0.5))}. The \code{@misc} slot no longer has \code{link} and \code{earg} components, but two other components replace these. Functions such as \code{dtheta.deta()}, \code{d2theta.deta2()}, \code{d3theta.deta3()}, \code{eta2theta()}, \code{theta2eta()} are modified. } \examples{ logit("a") logit("a", short = FALSE) logit("a", short = FALSE, tag = TRUE) logoff(1:5, offset = 1) # Same as log(1:5 + 1) powerlink(1:5, power = 2) # Same as (1:5)^2 \dontrun{ # This is old and no longer works: logoff(1:5, earg = list(offset = 1)) powerlink(1:5, earg = list(power = 2)) } fit1 <- vgam(agaaus ~ altitude, binomialff(link = "cloglog"), hunua) # best fit2 <- vgam(agaaus ~ altitude, binomialff(link = cloglog ), hunua) # okay \dontrun{ # This no longer works since "clog" is not a valid VGAM link function: fit3 <- vgam(agaaus ~ altitude, binomialff(link = "clog"), hunua) # not okay # No matter what the link, the estimated var-cov matrix is the same y <- rbeta(n = 1000, shape1 = exp(0), shape2 = exp(1)) fit1 <- vglm(y ~ 1, betaR(lshape1 = "identitylink", lshape2 = "identitylink"), trace = TRUE, crit = "coef") fit2 <- vglm(y ~ 1, betaR(lshape1 = logoff(offset = 1.1), lshape2 = logoff(offset = 1.1)), trace = TRUE) vcov(fit1, untransform = TRUE) vcov(fit1, untransform = TRUE) - vcov(fit2, untransform = TRUE) # Should be all 0s \dontrun{ # This is old: fit1@misc$earg # Some 'special' parameters fit2@misc$earg # Some 'special' parameters are here } par(mfrow = c(2, 2)) p <- seq(0.05, 0.95, len = 200) # A rather restricted range x <- seq(-4, 4, len = 200) plot(p, logit(p), type = "l", col = "blue") plot(x, logit(x, inverse = TRUE), type = "l", col = "blue") plot(p, logit(p, deriv = 1), type = "l", col = "blue") # 1 / (p*(1-p)) plot(p, logit(p, deriv = 2), type = "l", col = "blue") # (2*p-1)/(p*(1-p))^2 } } \keyword{models} VGAM/man/vglmff-class.Rd0000644000176200001440000002123013135276753014444 0ustar liggesusers\name{vglmff-class} \docType{class} \alias{vglmff-class} \title{Class ``vglmff'' } \description{ Family functions for the \pkg{VGAM} package } \section{Objects from the Class}{ Objects can be created by calls of the form \code{new("vglmff", ...)}. } \section{Slots}{ In the following, \eqn{M} is the number of linear/additive predictors. \describe{ \item{\code{blurb}:}{ Object of class \code{"character"} giving a small description of the model. Important arguments such as parameter link functions can be expressed here. } \item{\code{constraints}:}{ Object of class \code{"expression"} which sets up any constraint matrices defined by arguments in the family function. A \code{zero} argument is always fed into \code{cm.zero.vgam}, whereas other constraints are fed into \code{cm.vgam}. } \item{\code{deviance}:}{ Object of class \code{"function"} returning the deviance of the model. This slot is optional. If present, the function must have arguments \code{function(mu, y, w, residuals = FALSE, eta, extra = NULL)}. Deviance residuals are returned if \code{residuals = TRUE}. } \item{\code{fini}:}{ Object of class \code{"expression"} to insert code at a special position in \code{vglm.fit} or \code{vgam.fit}. This code is evaluated immediately after the fitting.
} \item{\code{first}:}{ Object of class \code{"expression"} to insert code at a special position in \code{\link{vglm}} or \code{\link{vgam}}. } \item{\code{infos}:}{ Object of class \code{"function"} which returns a list with components such as \code{M1}. At present only a very few \pkg{VGAM} family functions have this feature implemented. Those that do do not require specifying the \code{M1} argument when used with \code{\link{rcim}}. } \item{\code{initialize}:}{ Object of class \code{"expression"} used to perform error checking (especially for the variable \code{y}) and obtain starting values for the model. In general, \code{etastart} or \code{mustart} are assigned values based on the variables \code{y}, \code{x} and \code{w}. } \item{\code{linkinv}:}{ Object of class \code{"function"} which returns the fitted values, given the linear/additive predictors. The function must have arguments \code{function(eta, extra = NULL)}. } \item{\code{last}:}{ Object of class \code{"expression"} to insert code at a special position (at the very end) of \code{vglm.fit()} or \code{vgam.fit()}. This code is evaluated after the fitting. The list \code{misc} is often assigned components in this slot, which becomes the \code{misc} slot on the fitted object. } \item{\code{linkfun}:}{ Object of class \code{"function"} which, given the fitted values, returns the linear/additive predictors. If present, the function must have arguments \code{function(mu, extra = NULL)}. Most \pkg{VGAM} family functions do not have a \code{linkfun} function. They largely are for classical exponential families, i.e., GLMs. } \item{\code{loglikelihood}:}{ Object of class \code{"function"} returning the log-likelihood of the model. This slot is optional. If present, the function must have arguments \code{function(mu, y, w, residuals = FALSE, eta, extra = NULL)}. The argument \code{residuals} can be ignored because log-likelihood residuals aren't defined. } \item{\code{middle}:}{ Object of class \code{"expression"} to insert code at a special position in \code{vglm.fit} or \code{vgam.fit}. } \item{\code{middle2}:}{ Object of class \code{"expression"} to insert code at a special position in \code{vglm.fit} or \code{vgam.fit}. } \item{\code{simslot}:}{ Object of class \code{"function"} to allow \code{\link[stats]{simulate}} to work. } \item{\code{hadof}:}{ Object of class \code{"function"}; experimental. } \item{\code{summary.dispersion}:}{ Object of class \code{"logical"} indicating whether the general VGLM formula (based on a residual sum of squares) can be used for computing the scaling/dispersion parameter. It is \code{TRUE} for most models except for nonlinear regression models. } \item{\code{vfamily}:}{ Object of class \code{"character"} giving class information about the family function. Although not developed at this stage, more flexible classes are planned in the future. For example, family functions \code{sratio}, \code{cratio}, \code{cumulative}, and \code{acat} all operate on categorical data, therefore will have a special class called \code{"VGAMcat"}, say. Then if \code{fit} was a \code{vglm} object, then \code{coef(fit)} would print out the \code{vglm} coefficients plus \code{"VGAMcat"} information as well. } \item{\code{deriv}:}{ Object of class \code{"expression"} which returns a \eqn{M}-column matrix of first derivatives of the log-likelihood function with respect to the linear/additive predictors, i.e., the score vector. In Yee and Wild (1996) this is the \eqn{\bold{d}_i}{\bold{d}i} vector. 
Thus each row of the matrix returned by this slot is such a vector. } \item{\code{weight}:}{ Object of class \code{"expression"} which returns the second derivatives of the log-likelihood function with respect to the linear/additive predictors. This can be either the observed or expected information matrix, i.e., Newton-Raphson or Fisher-scoring respectively. In Yee and Wild (1996) this is the \eqn{\bold{W}_i}{\bold{W}i} matrix. Thus each row of the matrix returned by this slot is such a matrix. Like the \code{weights} slot of \code{vglm}/\code{vgam}, it is stored in \emph{matrix-band} form, whereby the first \eqn{M} columns of the matrix are the diagonals, followed by the upper-diagonal band, followed by the band above that, etc. In this case, there can be up to \eqn{M(M+1)/2} columns, with the last column corresponding to the (1,\eqn{M}) elements of the weight matrices. } \item{\code{validfitted, validparams}:}{ Functions that test that the fitted values and all parameters are within range. These functions can issue a warning if violations are detected. } } } \section{Methods}{ \describe{ \item{print}{\code{signature(x = "vglmff")}: short summary of the family function. } } } \references{ Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. %\url{http://www.stat.auckland.ac.nz/~yee} contains further %information on how to write \pkg{VGAM} family functions. %The file is amongst other \pkg{VGAM} PDF documentation. } \author{ Thomas W. Yee } \note{ With link functions etc., one must use \code{substitute} to embed the options into the code. There are two different forms: \code{eval(substitute(expression({...}), list(...)))} for expressions, and \code{eval(substitute( function(...) { ... }, list(...) )) } for functions. % 20130322; this is obsolete, and can delete it: % A unified method of handling arguments is to use % \code{match.arg}. This allows, for example, % \code{vglm(..., family = cratio(link = logit))} % and % \code{vglm(..., family = cratio(link = "logi"))} % to be equivalent (Nb. there is a \code{logit} function). The \code{extra} argument in \code{linkinv}, \code{linkfun}, \code{deviance}, \code{loglikelihood}, etc. matches with the argument \code{extra} in \code{\link{vglm}}, \code{\link{vgam}} and \code{\link{rrvglm}}. This allows input to be fed into all slots of a \pkg{VGAM} family function. The expression \code{deriv} is evaluated immediately prior to \code{weight}, so there is provision for re-use of variables etc. Programmers must be careful to choose variable names that do not interfere with \code{vglm.fit}, \code{vgam.fit()} etc. Programmers of \pkg{VGAM} family functions are encouraged to keep to previous conventions regarding the naming of arguments, e.g., \code{link} is the argument for parameter link functions, \code{zero} for allowing some of the linear/additive predictors to be an intercept term only, etc. In general, Fisher-scoring is recommended over Newton-Raphson where tractable. Although usually slightly slower in convergence, the weight matrices from using the expected information are positive-definite over a larger parameter space. } \section{Warning }{ \pkg{VGAM} family functions are not compatible with \code{\link[stats]{glm}}, nor \code{gam} (from either the \pkg{gam} or \pkg{mgcv} packages). } \seealso{ \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}}, \code{\link{rcim}}.
} \examples{ cratio() cratio(link = "cloglog") cratio(link = "cloglog", reverse = TRUE) } \keyword{classes} VGAM/man/beniniUC.Rd0000644000176200001440000000442413135276753013562 0ustar liggesusers\name{Benini} \alias{Benini} \alias{dbenini} \alias{pbenini} \alias{qbenini} \alias{rbenini} \title{The Benini Distribution} \description{ Density, distribution function, quantile function and random generation for the Benini distribution with parameter \code{shape}. } \usage{ dbenini(x, y0, shape, log = FALSE) pbenini(q, y0, shape, lower.tail = TRUE, log.p = FALSE) qbenini(p, y0, shape, lower.tail = TRUE, log.p = FALSE) rbenini(n, y0, shape) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Same as \code{\link[stats]{runif}}. } \item{y0}{the scale parameter \eqn{y_0}{y0}. } \item{shape}{the shape parameter \eqn{b}. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dbenini} gives the density, \code{pbenini} gives the distribution function, \code{qbenini} gives the quantile function, and \code{rbenini} generates random deviates. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{benini1}}, the \pkg{VGAM} family function for estimating the parameter \eqn{s} by maximum likelihood estimation, for the formula of the probability density function and other details. } %\note{ % %} \seealso{ \code{\link{benini1}}. } \examples{ \dontrun{ y0 <- 1; shape <- exp(1) xx <- seq(0.0, 4, len = 101) plot(xx, dbenini(xx, y0 = y0, shape = shape), type = "l", col = "blue", main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles", ylim = 0:1, las = 1, ylab = "", xlab = "x") abline(h = 0, col = "blue", lty = 2) lines(xx, pbenini(xx, y0 = y0, shape = shape), col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qbenini(probs, y0 = y0, shape = shape) lines(Q, dbenini(Q, y0 = y0, shape = shape), col = "purple", lty = 3, type = "h") pbenini(Q, y0 = y0, shape = shape) - probs # Should be all zero } } \keyword{distribution} VGAM/man/otpospoisson.Rd0000644000176200001440000000335613135276753014650 0ustar liggesusers\name{otpospoisson} \alias{otpospoisson} %- Also NEED an '\alias' for EACH other topic documented here. \title{ One-truncated Poisson Distribution } \description{ Estimating the (single) parameter of the 1-truncated positive Poisson distribution. } \usage{ otpospoisson(llambda = "loge", type.fitted = c("mean", "lambda", "prob0", "prob1"), ilambda = NULL, imethod = 1, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{llambda, type.fitted, ilambda}{ Same as \code{\link{pospoisson}}. } \item{imethod, zero}{ Same as \code{\link{pospoisson}}. } } \details{ The 1-truncated positive Poisson distribution has support on 2, 3, \ldots. It is a Poisson distribution but with the probability of a one or zero being 0. The other probabilities are scaled to add to unity. Some more details can be found at \code{\link{pospoisson}}. Multiple responses are permitted. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). 
The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } %\references{ %} \author{ T. W. Yee } %\note{ %} \seealso{ \code{\link{Otpospois}}, \code{\link{oipospoisson}}, \code{\link{simulate.vlm}}. } \examples{ odata <- data.frame(y1 = rotpospois(n = 1000, lambda = loge(1, inverse = TRUE))) ofit <- vglm(y1 ~ 1, otpospoisson, data = odata, trace = TRUE, crit = "c") coef(ofit, matrix = TRUE) Coef(ofit) \dontrun{with(odata, hist(y1, prob = TRUE, breaks = seq(0.5, max(y1) + 0.5, by = 1), border = "blue")) x <- seq(1, with(odata, max(y1)), by = 1) with(odata, lines(x, dotpospois(x, Coef(ofit)[1]), col = "orange", type = "h", lwd = 2)) } } \keyword{models} \keyword{regression} VGAM/man/dagumUC.Rd0000644000176200001440000000612413135276753013412 0ustar liggesusers\name{Dagum} \alias{Dagum} \alias{ddagum} \alias{pdagum} \alias{qdagum} \alias{rdagum} \title{The Dagum Distribution} \description{ Density, distribution function, quantile function and random generation for the Dagum distribution with shape parameters \code{a} and \code{p}, and scale parameter \code{scale}. } \usage{ ddagum(x, scale = 1, shape1.a, shape2.p, log = FALSE) pdagum(q, scale = 1, shape1.a, shape2.p, lower.tail = TRUE, log.p = FALSE) qdagum(p, scale = 1, shape1.a, shape2.p, lower.tail = TRUE, log.p = FALSE) rdagum(n, scale = 1, shape1.a, shape2.p) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1}, the length is taken to be the number required.} \item{shape1.a, shape2.p}{shape parameters.} \item{scale}{scale parameter.} \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{ddagum} gives the density, \code{pdagum} gives the distribution function, \code{qdagum} gives the quantile function, and \code{rdagum} generates random deviates. } \references{ Kleiber, C. and Kotz, S. (2003) \emph{Statistical Size Distributions in Economics and Actuarial Sciences}, Hoboken, NJ, USA: Wiley-Interscience. } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{dagum}}, which is the \pkg{VGAM} family function for estimating the parameters by maximum likelihood estimation. } \note{ The Dagum distribution is a special case of the 4-parameter generalized beta II distribution. } \seealso{ \code{\link{dagum}}, \code{\link{genbetaII}}. 
} \examples{ probs <- seq(0.1, 0.9, by = 0.1) shape1.a <- 1; shape2.p <- 2 # Should be 0: max(abs(pdagum(qdagum(p = probs, shape1.a = shape1.a, shape2.p = shape2.p), shape1.a = shape1.a, shape2.p = shape2.p) - probs)) \dontrun{ par(mfrow = c(1, 2)) x <- seq(-0.01, 5, len = 401) plot(x, dexp(x), type = "l", col = "black", ylab = "", las = 1, ylim = c(0, 1), main = "Black is standard exponential, others are ddagum(x, ...)") lines(x, ddagum(x, shape1.a = shape1.a, shape2.p = 1), col = "orange") lines(x, ddagum(x, shape1.a = shape1.a, shape2.p = 2), col = "blue") lines(x, ddagum(x, shape1.a = shape1.a, shape2.p = 5), col = "green") legend("topright", col = c("orange","blue","green"), lty = rep(1, len = 3), legend = paste("shape1.a =", shape1.a, ", shape2.p =", c(1, 2, 5))) plot(x, pexp(x), type = "l", col = "black", ylab = "", las = 1, main = "Black is standard exponential, others are pdagum(x, ...)") lines(x, pdagum(x, shape1.a = shape1.a, shape2.p = 1), col = "orange") lines(x, pdagum(x, shape1.a = shape1.a, shape2.p = 2), col = "blue") lines(x, pdagum(x, shape1.a = shape1.a, shape2.p = 5), col = "green") legend("bottomright", col = c("orange","blue","green"), lty = rep(1, len = 3), legend = paste("shape1.a =", shape1.a, ", shape2.p =", c(1, 2, 5))) } } \keyword{distribution} VGAM/man/gaussianff.Rd0000644000176200001440000001163613135276753014217 0ustar liggesusers\name{gaussianff} %\alias{gaussian} \alias{gaussianff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Gaussian (Normal) Family Function } \description{ Fits a generalized linear model to a response with Gaussian (normal) errors. The dispersion parameter may be known or unknown. } \usage{ gaussianff(dispersion = 0, parallel = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{parallel}{ A logical or formula. If a formula, the response of the formula should be a logical and the terms of the formula indicates whether or not those terms are parallel. } \item{dispersion}{ Dispersion parameter. If 0 then it is estimated and the moment estimate is put in \code{object@misc$dispersion}; it is assigned the value \deqn{ \sum_{i=1}^n \; (y_i - \eta_i)^T W_i (y_i - \eta_i) / (nM-p) }{% sum_{i=1}^n (y_i - eta_i)^T W_i (y_i - \eta_i) / (nM-p) } where \eqn{p} is the total number of parameters estimated (for RR-VGLMs the value used is the number of columns in the large \eqn{X} model matrix; this may not be correct). If the argument is assigned a positive quantity then it is assumed to be known with that value. % zz 28/8/06 check for RR-VGLMs % By default, maximum likelihood is used to % By default, maximum likelihood is used to % estimate the model because it is known. However, the user can specify % \code{dispersion = 0} to have it estimated. } \item{zero}{ An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2,\ldots,\eqn{M}\} where \eqn{M} is the number of columns of the matrix response. } } \details{ This function is usually used in conjunction with \code{\link{vglm}}, else \code{vlm} is recommended instead. The notation \eqn{M} is used to denote the number of linear/additive predictors. This function can handle any finite \eqn{M}, and the default is to use ordinary least squares. 
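For example, with the default settings (identity working-weight matrices) a multivariate fit is equivalent to separate ordinary least squares regressions, one per response. The following is a minimal sketch of this (all variable names here are arbitrary):
\preformatted{
mydat <- data.frame(x2 = runif(nn <- 100))
mydat <- transform(mydat, y1 = 1 + 2 * x2 + rnorm(nn, sd = 0.1),
                          y2 = 3 - 4 * x2 + rnorm(nn, sd = 0.1))
fit <- vglm(cbind(y1, y2) ~ x2, gaussianff, data = mydat)
coef(fit, matrix = TRUE)  # Cf. coef(lm(y1 ~ x2, mydat)), etc.
}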
A vector linear/additive model can be fitted by minimizing \deqn{ \sum_{i=1}^n \; (y_i - \eta_i)^T W_i (y_i - \eta_i) }{% sum_{i=1}^n (y_i - eta_i)^T W_i (y_i - eta_i) } where \eqn{y_i} is an \eqn{M}-vector and \eqn{\eta_i}{eta_i} is the vector of linear/additive predictors. The \eqn{W_i} is any positive-definite matrix, and the default is the order-\eqn{M} identity matrix. The \eqn{W_i} can be inputted using the \code{weights} argument of \code{vlm}/\code{\link{vglm}}/\code{\link{vgam}} etc., and the format is the \emph{matrix-band} format whereby it is an \eqn{n \times A}{n * A} matrix with the diagonals passed first, followed next by the upper band, all the way to the \eqn{(1,M)} element. Here, \eqn{A} has a maximum value of \eqn{M(M+1)/2} and a minimum value of \eqn{M}. Usually the \code{weights} argument of \code{vlm}/\code{\link{vglm}}/\code{\link{vgam}}/\code{\link{rrvglm}} is just a vector, in which case each element is multiplied by an order-\eqn{M} identity matrix. If in doubt, type something like \code{weights(object, type="working")} after the model has been fitted. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. } \author{ Thomas W. Yee } \note{ This \pkg{VGAM} family function is supposed to be similar to \code{\link[stats]{gaussian}} but is not compatible with \code{\link[stats]{glm}}. The \code{"ff"} in the name is added to avoid any masking problems. } % \section{Warning }{ % This function probably contains some bugs, so the user is advised to % be cautious. % % % } \seealso{ \code{\link{uninormal}}, \code{\link{huber2}}, \code{\link{lqnorm}}, \code{\link{binormal}}, \code{\link{SURff}}. \code{vlm}, \code{\link{vglm}}, \code{\link{vgam}}, \code{\link{rrvglm}}. } \examples{ gdata <- data.frame(x2 = sort(runif(n <- 40))) gdata <- transform(gdata, y1 = 1 + 2*x2 + rnorm(n, sd = 0.1), y2 = 3 + 4*x2 + rnorm(n, sd = 0.1), y3 = 7 + 4*x2 + rnorm(n, sd = 0.1)) fit <- vglm(cbind(y1,y2) ~ x2, gaussianff, data = gdata) coef(fit, matrix = TRUE) # For comparison: coef( lmfit <- lm(y1 ~ x2, data = gdata)) coef(glmfit <- glm(y2 ~ x2, data = gdata, gaussian)) vcov(fit) vcov(lmfit) t(weights(fit, type = "prior")) # Unweighted observations head(weights(fit, type = "working")) # Identity matrices # Reduced-rank VLM (rank-1) fit2 <- rrvglm(cbind(y1, y2, y3) ~ x2, gaussianff, data = gdata) Coef(fit2) } \keyword{models} \keyword{regression} VGAM/man/SurvS4.Rd0000644000176200001440000001224013135276753013227 0ustar liggesusers\name{SurvS4} \alias{SurvS4} \alias{is.SurvS4} %%%% 20120216 \alias{print.SurvS4} \alias{show.SurvS4} \alias{Math.SurvS4} \alias{Summary.SurvS4} \alias{[.SurvS4} \alias{format.SurvS4} \alias{as.data.frame.SurvS4} \alias{as.character.SurvS4} \alias{is.na.SurvS4} \alias{Ops.SurvS4} \title{ Create a Survival Object } \description{ Create a survival object, usually used as a response variable in a model formula. } \usage{ SurvS4(time, time2, event, type =, origin = 0) is.SurvS4(x) } \arguments{ \item{time}{ for right censored data, this is the follow-up time. For interval data, the first argument is the starting time for the interval.
} \item{x}{ any R object. } \item{event}{ The status indicator, normally 0=alive, 1=dead. Other choices are \code{TRUE}/\code{FALSE} (\code{TRUE} = death) or 1/2 (2=death). For interval censored data, the status indicator is 0=right censored, 1=event at \code{time}, 2=left censored, 3=interval censored. Although unusual, the event indicator can be omitted, in which case all subjects are assumed to have an event. } \item{time2}{ ending time of the interval for interval censored or counting process data only. Intervals are assumed to be open on the left and closed on the right, \code{(start, end]}. For counting process data, \code{event} indicates whether an event occurred at the end of the interval. } \item{type}{ character string specifying the type of censoring. Possible values are \code{"right"}, \code{"left"}, \code{"counting"}, \code{"interval"}, or \code{"interval2"}. The default is \code{"right"} or \code{"counting"} depending on whether the \code{time2} argument is absent or present, respectively. } \item{origin}{ for counting process data, the hazard function origin. This is most often used in conjunction with a model containing time dependent strata in order to align the subjects properly when they cross over from one stratum to another. } } \value{ An object of class \code{SurvS4} (formerly \code{Surv}). There are methods for \code{print}, \code{is.na}, and subscripting survival objects. \code{SurvS4} objects are implemented as a matrix of 2 or 3 columns. In the case of \code{is.SurvS4}, a logical value \code{TRUE} if \code{x} inherits from class \code{"SurvS4"}, otherwise a \code{FALSE}. } \details{ Typical usages are \preformatted{ SurvS4(time, event) SurvS4(time, time2, event, type=, origin=0) } In theory it is possible to represent interval censored data without a third column containing the explicit status. Exact, right censored, left censored and interval censored observations would be represented as intervals of (a,a), (a, infinity), (-infinity,b), and (a,b) respectively; each specifying the interval within which the event is known to have occurred. If \code{type = "interval2"} then the representation given above is assumed, with NA taking the place of infinity. If \code{type = "interval"} then \code{event} must be given. If \code{event} is \code{0}, \code{1}, or \code{2}, the relevant information is assumed to be contained in \code{time}, the value in \code{time2} is ignored, and the second column of the result will contain a placeholder. Presently, the only methods allowing interval censored data are the parametric models computed by \code{\link[survival]{survreg}}, so the distinction between open and closed intervals is unimportant. The distinction is important for counting process data and the Cox model. The function tries to distinguish between the use of 0/1 and 1/2 coding for left and right censored data using \code{if (max(status)==2)}. If 1/2 coding is used and all the subjects are censored, it will guess wrong. Use 0/1 coding in this case. } \author{ The code and documentation come from \pkg{survival}. Slight modifications have been made for conversion to S4 by T. W. Yee. Also, for \code{"interval"} data, \code{as.character.SurvS4()} has been modified to print intervals of the form \code{(start, end]} and not \code{[start, end]} as previously. (This makes a difference for discrete data, such as for \code{\link{cens.poisson}}). All \pkg{VGAM} family functions beginning with \code{"cen"} require the packaging function \code{Surv} to format the input.
} \note{ The purpose of having \code{SurvS4} in \pkg{VGAM} is so that the same input can be fed into \code{\link{vglm}} as functions in \pkg{survival} such as \code{\link[survival]{survreg}}. The class name has been changed from \code{"Surv"} to \code{"SurvS4"}; see \code{\link{SurvS4-class}}. The format \code{J+} is interpreted in \pkg{VGAM} as \eqn{\ge J}. If \code{type="interval"} then these should not be used in \pkg{VGAM}: \code{(L,U-]} or \code{(L,U+]}. % zz is this for type="count" only? } \seealso{ \code{\link{SurvS4-class}}, \code{\link{cens.poisson}}, \code{\link[survival]{survreg}}, \code{\link{leukemia}}. % \code{\link[survival]{coxph}}, % \code{\link[survival]{survfit}}, } \examples{ with(leukemia, SurvS4(time, status)) class(with(leukemia, SurvS4(time, status))) } \keyword{survival} % Converted by Sd2Rd version 0.3-2. % with(heart, SurvS4(start,stop,event)) VGAM/man/prinia.Rd0000644000176200001440000000622413135276753013350 0ustar liggesusers\name{prinia} \alias{prinia} \docType{data} \title{Yellow-bellied Prinia %% ~~ data name/kind ... ~~ } \description{ A data frame with yellow-bellied Prinia. } \usage{ data(prinia) } \format{ A data frame with 151 observations on the following 23 variables. \describe{ \item{length}{a numeric vector, the scaled wing length (zero mean and unit variance). } \item{fat}{a numeric vector, fat index; originally 1 (no fat) to 4 (very fat) but converted to 0 (no fat) versus 1 otherwise. } \item{cap}{a numeric vector, number of times the bird was captured or recaptured. } \item{noncap}{a numeric vector, number of times the bird was not captured. } \item{y01, y02, y03, y04, y05, y06}{ a numeric vector of 0s and 1s; for noncapture and capture resp. } \item{y07, y08, y09, y10, y11, y12}{ same as above. } \item{y13, y14, y15, y16, y17, y18, y19}{ same as above. } } } \details{ The yellow-bellied Prinia \emph{Prinia flaviventris} is a common bird species located in Southeast Asia. A capture--recapture experiment was conducted at the Mai Po Nature Reserve in Hong Kong during 1991, where captured individuals had their wing lengths measured and fat index recorded. A total of 19 weekly capture occasions were considered, where 151 distinct birds were captured. More generally, the prinias are a genus of small insectivorous birds, and are sometimes referred to as \emph{wren-warblers}. They are a little-known group of the tropical and subtropical Old World, the roughly 30 species being divided fairly equally between Africa and Asia. % 20131030; this is old: % The example below illustrates the necessity of creating % variables \code{y1}, \code{y2}, \ldots in order for % \code{\link{posbernoulli.b}}, % \code{\link{posbernoulli.t}} and % \code{\link{posbernoulli.tb}} to work. % In contrast, \code{\link{posbinomial}} may have a simple 2-column % matrix as the response. % \emph{Prinia inornate} is from the SS paper, not exactly this bird. %% ~~ If necessary, more details than the __description__ above ~~ } \source{ Thanks to Paul Yip for permission to make this data available. % Further information is at: % Huggins, R. M. and Yip, P. S. F. (1997). % Statistical analysis of removal experiments with the use of auxillary variables. % \emph{Statistica Sinica} \bold{7}, 705--712. Hwang, W.-H. and Huggins, R. M. (2007) Application of semiparametric regression models in the analysis of capture--recapture experiments. \emph{Australian and New Zealand Journal of Statistics} \bold{49}, 191--202. 
} \examples{ head(prinia) summary(prinia) rowSums(prinia[, c("cap", "noncap")]) # 19s # Fit a positive-binomial distribution (M.h) to the data: fit1 <- vglm(cbind(cap, noncap) ~ length + fat, posbinomial, data = prinia) # Fit another positive-binomial distribution (M.h) to the data: # The response input is suitable for posbernoulli.*-type functions. fit2 <- vglm(cbind(y01, y02, y03, y04, y05, y06, y07, y08, y09, y10, y11, y12, y13, y14, y15, y16, y17, y18, y19) ~ length + fat, posbernoulli.b(drop.b = FALSE ~ 0), data = prinia) } \keyword{datasets} VGAM/man/deermice.Rd0000644000176200001440000000417113135276753013642 0ustar liggesusers\name{deermice} %\alias{Perom} \alias{deermice} \docType{data} \title{ Captures of Peromyscus maniculatus, also known as deer mice. %% ~~ data name/kind ... ~~ } \description{ Captures of \emph{Peromyscus maniculatus} collected at East Stuart Gulch, Colorado, USA. %% ~~ A concise (1-5 lines) description of the dataset. ~~ } % data(Perom) \usage{ data(deermice) } \format{ The format is a data frame. } \details{ \emph{Peromyscus maniculatus} is a rodent native to North America. The deer mouse is small in size, only about 8 to 10 cm long, not counting the length of the tail. Originally, the columns of this data frame represent the sex (\code{m} or \code{f}), the ages (\code{y}: young, \code{sa}: semi-adult, \code{a}: adult), the weights in grams, and the capture histories of 38 individuals over 6 trapping occasions (1: captured, 0: not captured). The data set was collected by V. Reid and distributed with the \pkg{CAPTURE} program of Otis et al. (1978). \code{deermice} has 38 deermice whereas \code{Perom} had 36 deermice (\code{Perom} has been withdrawn.) In \code{deermice} the two semi-adults have been classified as adults. The \code{sex} variable has 1 for female, and 0 for male. %% ~~ If necessary, more details than the __description__ above ~~ } %\source{ %% ~~ reference to a publication or URL from which the data were obtained ~~ %} \references{ Huggins, R. M. (1991) Some practical aspects of a conditional likelihood approach to capture experiments. \emph{Biometrics}, \bold{47}, 725--732. Otis, D. L. et al. (1978) Statistical inference from capture data on closed animal populations, \emph{Wildlife Monographs}, \bold{62}, 3--135. %% ~~ possibly secondary sources and usages ~~ } \seealso{ \code{\link[VGAM:posbernoulli.b]{posbernoulli.b}}, \code{\link[VGAM:posbernoulli.t]{posbernoulli.t}}, \code{\link{fill1}}. } \examples{ head(deermice) \dontrun{ fit1 <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + age, posbernoulli.t(parallel.t = TRUE), data = deermice, trace = TRUE) coef(fit1) coef(fit1, matrix = TRUE) } } \keyword{datasets} VGAM/man/zibinomial.Rd0000644000176200001440000001545013135276753014224 0ustar liggesusers\name{zibinomial} \alias{zibinomial} \alias{zibinomialff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Zero-Inflated Binomial Distribution Family Function } \description{ Fits a zero-inflated binomial distribution by maximum likelihood estimation. } \usage{ zibinomial(lpstr0 = "logit", lprob = "logit", type.fitted = c("mean", "prob", "pobs0", "pstr0", "onempstr0"), ipstr0 = NULL, zero = NULL, multiple.responses = FALSE, imethod = 1) zibinomialff(lprob = "logit", lonempstr0 = "logit", type.fitted = c("mean", "prob", "pobs0", "pstr0", "onempstr0"), ionempstr0 = NULL, zero = "onempstr0", multiple.responses = FALSE, imethod = 1) } %- maybe also 'usage' for other objects documented here. 
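% A quick way to compare the two parameterizations is something like the
% following sketch (using the zdata simulated in the examples below);
% because both use logit links and logit(1 - p) = -logit(p), the two
% intercepts for the zero-inflation part should differ only in sign:
% fit.a <- vglm(cbind(y, sv - y) ~ 1, zibinomial,   data = zdata)
% fit.b <- vglm(cbind(y, sv - y) ~ 1, zibinomialff, data = zdata)
% coef(fit.a, matrix = TRUE); coef(fit.b, matrix = TRUE)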
\arguments{ \item{lpstr0, lprob}{ Link functions for the parameter \eqn{\phi}{phi} and the usual binomial probability \eqn{\mu}{prob} parameter. See \code{\link{Links}} for more choices. For the zero-\emph{deflated} model see below. } % \item{epstr0, eprob}{ % epstr0 = list(), eprob = list(), % List. Extra argument for the respective links. % See \code{earg} in \code{\link{Links}} for general information. % } \item{type.fitted}{ See \code{\link{CommonVGAMffArguments}} and \code{\link{fittedvlm}}. } \item{ipstr0}{ Optional initial values for \eqn{\phi}{phi}, whose values must lie between 0 and 1. The default is to compute an initial value internally. If a vector then recycling is used. } \item{lonempstr0, ionempstr0}{ Corresponding arguments for the other parameterization. See details below. } % \item{zero}{ % An integer specifying which linear/additive predictor is modelled % as intercepts only. If given, the value must be either 1 or 2, % and the default is the first. Setting \code{zero = NULL} enables both % \eqn{\phi}{phi} and \eqn{\mu}{prob} to be modelled as a function of % the explanatory variables. % See \code{\link{CommonVGAMffArguments}} for more information. % } \item{multiple.responses}{ Logical. Currently it must be \code{FALSE} to mean the function does not handle multiple responses. This is to remain compatible with the same argument in \code{\link{binomialff}}. } \item{zero, imethod}{ See \code{\link{CommonVGAMffArguments}} for information. Argument \code{zero} changed its default value for version 0.9-2. } } \details{ These functions are based on \deqn{P(Y=0) = \phi + (1-\phi) (1-\mu)^N,}{% P(Y=0) = phi + (1 - phi) * (1-prob)^N,} for \eqn{y=0}, and \deqn{P(Y=y) = (1-\phi) {N \choose Ny} \mu^{Ny} (1-\mu)^{N(1-y)}.}{% P(Y=y) = (1-phi) * choose(N,Ny) * prob^(N*y) * (1-prob)^(N*(1-y)).} for \eqn{y=1/N,2/N,\ldots,1}. That is, the response is a sample proportion out of \eqn{N} trials, and the argument \code{size} in \code{\link{rzibinom}} is \eqn{N} here. The parameter \eqn{\phi}{phi} is the probability of a structural zero, and it satisfies \eqn{0 < \phi < 1}{0 < phi < 1}. The mean of \eqn{Y} is \eqn{E(Y)=(1-\phi) \mu}{E(Y) = (1-phi) * prob} and these are returned as the fitted values by default. By default, the two linear/additive predictors for \code{zibinomial()} are \eqn{(logit(\phi), logit(\mu))^T}{(logit(phi), logit(prob))^T}. The \pkg{VGAM} family function \code{zibinomialff()} has a few changes compared to \code{zibinomial()}. These are: (i) the order of the linear/additive predictors is switched so the binomial probability comes first; (ii) argument \code{onempstr0} is now 1 minus the probability of a structural zero, i.e., the probability of the parent (binomial) component, i.e., \code{onempstr0} is \code{1-pstr0}; (iii) argument \code{zero} has a new default so that \code{onempstr0} is intercept-only by default. Now \code{zibinomialff()} is generally recommended over \code{zibinomial()}. Both functions implement Fisher scoring. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Welsh, A. H., Lindenmayer, D. B. and Donnelly, C. F. (2013) Fitting and interpreting occupancy models. \emph{PLOS One}, \bold{8}, 1--21. } \author{ T. W.
Yee } \note{ The response variable must have one of the formats described by \code{\link{binomialff}}, e.g., a factor or two column matrix or a vector of sample proportions with the \code{weights} argument specifying the values of \eqn{N}. To work well, one needs large values of \eqn{N} and \eqn{\mu>0}{prob>0}, i.e., the larger \eqn{N} and \eqn{\mu}{prob} are, the better. If \eqn{N = 1} then the model is unidentifiable since the number of parameters is excessive. Setting \code{stepsize = 0.5}, say, may aid convergence. % 20130316; commenting out this: % For intercept-models and constant \eqn{N} over the \eqn{n} observations, % the \code{misc} slot has a component called \code{pobs0} which is the % estimate of the probability of an observed 0, i.e., \eqn{P(Y=0)}. % This family function currently cannot handle a multivariate % response (only \code{multiple.responses = FALSE} can be handled). % 20130316; adding this: Estimated probabilities of a structural zero and an observed zero are returned, as in \code{\link{zipoisson}}. The zero-\emph{deflated} binomial distribution might be fitted by setting \code{lpstr0 = identitylink}, albeit, not entirely reliably. See \code{\link{zipoisson}} for information that can be applied here. Else try the zero-altered binomial distribution (see \code{\link{zabinomial}}). } \section{Warning }{ Numerical problems can occur. Half-stepping is not uncommon. If failure to converge occurs, make use of the argument \code{ipstr0} or \code{ionempstr0}, or \code{imethod}. } \seealso{ \code{\link{rzibinom}}, \code{\link{binomialff}}, \code{\link{posbinomial}}, \code{\link[stats:Binomial]{rbinom}}. } \examples{ size <- 10 # Number of trials; N in the notation above nn <- 200 zdata <- data.frame(pstr0 = logit( 0, inverse = TRUE), # 0.50 mubin = logit(-1, inverse = TRUE), # Mean of usual binomial sv = rep(size, length = nn)) zdata <- transform(zdata, y = rzibinom(nn, size = sv, prob = mubin, pstr0 = pstr0)) with(zdata, table(y)) fit <- vglm(cbind(y, sv - y) ~ 1, zibinomialff, data = zdata, trace = TRUE) fit <- vglm(cbind(y, sv - y) ~ 1, zibinomialff, data = zdata, trace = TRUE, stepsize = 0.5) coef(fit, matrix = TRUE) Coef(fit) # Useful for intercept-only models head(fitted(fit, type = "pobs0")) # Estimate of P(Y = 0) head(fitted(fit)) with(zdata, mean(y)) # Compare this with fitted(fit) summary(fit) } \keyword{models} \keyword{regression} % fit@misc$pobs0 # Estimate of P(Y = 0) VGAM/man/lirat.Rd0000644000176200001440000000451513135276753013202 0ustar liggesusers\name{lirat} \alias{lirat} \docType{data} \title{ Low-iron Rat Teratology Data } \description{ Low-iron rat teratology data. } \usage{data(lirat)} \format{ A data frame with 58 observations on the following 4 variables. \describe{ \item{\code{N}}{Litter size.} \item{\code{R}}{Number of dead fetuses.} \item{\code{hb}}{Hemoglobin level.} \item{\code{grp}}{Group number. Group 1 is the untreated (low-iron) group, group 2 received injections on day 7 or day 10 only, group 3 received injections on days 0 and 7, and group 4 received injections weekly.} } } \details{ The following description comes from Moore and Tsiatis (1991). The data comes from the experimental setup from Shepard et al. (1980), which is typical of studies of the effects of chemical agents or dietary regimens on fetal development in laboratory rats. Female rats were put in iron-deficient diets and divided into 4 groups. 
One group of controls was given weekly injections of iron supplement to bring their iron intake to normal levels, while another group was given only placebo injections. Two other groups were given fewer iron-supplement injections than the controls. The rats were made pregnant, sacrificed 3 weeks later, and the total number of fetuses and the number of dead fetuses in each litter were counted. For each litter the number of dead fetuses may be considered to be Binomial(\eqn{N,p}) where \eqn{N} is the litter size and \eqn{p} is the probability of a fetus dying. The parameter \eqn{p} is expected to vary from litter to litter, therefore the total variance of the proportions will be greater than that predicted by a binomial model, even when the covariates for hemoglobin level and experimental group are accounted for. } \source{ Moore, D. F. and Tsiatis, A. (1991) Robust Estimation of the Variance in Moment Methods for Extra-binomial and Extra-Poisson Variation. \emph{Biometrics}, \bold{47}, 383--401. } \references{ Shepard, T. H., Mackler, B. and Finch, C. A. (1980) Reproductive studies in the iron-deficient rat. \emph{Teratology}, \bold{22}, 329--334. } \examples{ \dontrun{ # cf. Figure 3 of Moore and Tsiatis (1991) plot(R / N ~ hb, data = lirat, pch = as.character(grp), col = grp, las = 1, xlab = "Hemoglobin level", ylab = "Proportion Dead") } } \keyword{datasets} VGAM/man/biamhcopUC.Rd0000644000176200001440000000306413135276753014077 0ustar liggesusers\name{Biamhcop} \alias{Biamhcop} \alias{dbiamhcop} \alias{pbiamhcop} \alias{rbiamhcop} \title{Ali-Mikhail-Haq Bivariate Distribution} \description{ Density, distribution function, and random generation for the (one parameter) bivariate Ali-Mikhail-Haq distribution. } \usage{ dbiamhcop(x1, x2, apar, log = FALSE) pbiamhcop(q1, q2, apar) rbiamhcop(n, apar) } \arguments{ \item{x1, x2, q1, q2}{vector of quantiles.} \item{n}{number of observations. Same as \code{\link[stats]{runif}} } \item{apar}{the association parameter.} \item{log}{ Logical. If \code{TRUE} then the logarithm is returned. } } \value{ \code{dbiamhcop} gives the density, \code{pbiamhcop} gives the distribution function, and \code{rbiamhcop} generates random deviates (a two-column matrix). } %\references{ % %} \author{ T. W. Yee and C. S. Chee} \details{ See \code{\link{biamhcop}}, the \pkg{VGAM} family functions for estimating the parameter by maximum likelihood estimation, for the formula of the cumulative distribution function and other details. } %\note{ %} \seealso{ \code{\link{biamhcop}}. } \examples{ x <- seq(0, 1, len = (N <- 101)); apar <- 0.7 ox <- expand.grid(x, x) zedd <- dbiamhcop(ox[, 1], ox[, 2], apar = apar) \dontrun{ contour(x, x, matrix(zedd, N, N), col = "blue") zedd <- pbiamhcop(ox[, 1], ox[, 2], apar = apar) contour(x, x, matrix(zedd, N, N), col = "blue") plot(r <- rbiamhcop(n = 1000, apar = apar), col = "blue") par(mfrow = c(1, 2)) hist(r[, 1]) # Should be uniform hist(r[, 2]) # Should be uniform } } \keyword{distribution} VGAM/man/double.cens.normal.Rd0000644000176200001440000000557013135276753015561 0ustar liggesusers\name{double.cens.normal} \alias{double.cens.normal} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Univariate Normal Distribution with Double Censoring } \description{ Maximum likelihood estimation of the two parameters of a univariate normal distribution when there is double censoring. 
} \usage{ double.cens.normal(r1 = 0, r2 = 0, lmu = "identitylink", lsd = "loge", imu = NULL, isd = NULL, zero = "sd") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{r1, r2}{ Integers. Number of smallest and largest values censored, respectively. } \item{lmu, lsd}{ Parameter link functions applied to the mean and standard deviation. See \code{\link{Links}} for more choices. } \item{imu, isd, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ This family function uses the Fisher information matrix given in Harter and Moore (1966). The matrix is not diagonal if either \code{r1} or \code{r2} are positive. By default, the mean is the first linear/additive predictor and the log of the standard deviation is the second linear/additive predictor. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Harter, H. L. and Moore, A. H. (1966) Iterative maximum-likelihood estimation of the parameters of normal populations from singly and doubly censored samples. \emph{Biometrika}, \bold{53}, 205--213. } \author{ T. W. Yee } \note{ This family function only handles a vector or one-column matrix response. The \code{weights} argument, if used, are interpreted as frequencies, therefore it must be a vector with positive integer values. With no censoring at all (the default), it is better (and equivalent) to use \code{\link{uninormal}}. } \seealso{ \code{\link{uninormal}}, \code{\link{cens.normal}}, \code{\link{tobit}}. } \examples{\dontrun{ # Repeat the simulations described in Harter and Moore (1966) SIMS <- 100 # Number of simulations (change this to 1000) mu.save <- sd.save <- rep(NA, len = SIMS) r1 <- 0; r2 <- 4; nn <- 20 for (sim in 1:SIMS) { y <- sort(rnorm(nn)) y <- y[(1+r1):(nn-r2)] # Delete r1 smallest and r2 largest fit <- vglm(y ~ 1, double.cens.normal(r1 = r1, r2 = r2)) mu.save[sim] <- predict(fit)[1, 1] sd.save[sim] <- exp(predict(fit)[1, 2]) # Assumes a log link and ~ 1 } c(mean(mu.save), mean(sd.save)) # Should be c(0,1) c(sd(mu.save), sd(sd.save)) } # Data from Sarhan and Greenberg (1962); MLEs are mu = 9.2606, sd = 1.3754 strontium90 <- data.frame(y = c(8.2, 8.4, 9.1, 9.8, 9.9)) fit <- vglm(y ~ 1, double.cens.normal(r1 = 2, r2 = 3, isd = 6), data = strontium90, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) } \keyword{models} \keyword{regression} VGAM/man/felix.Rd0000644000176200001440000000322613135276753013174 0ustar liggesusers\name{felix} \alias{felix} %- Also NEED an '\alias' for EACH other topic documented here. \title{Felix Distribution Family Function} \description{ Estimates the parameter of a Felix distribution by maximum likelihood estimation. } \usage{ felix(lrate = extlogit(min = 0, max = 0.5), imethod = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lrate}{ Link function for the parameter, called \eqn{a} below; see \code{\link{Links}} for more choices and for general information. } \item{imethod}{ See \code{\link{CommonVGAMffArguments}}. Valid values are 1, 2, 3 or 4. } } \details{ The Felix distribution is an important basic Lagrangian distribution. The density function is \deqn{f(y;a) = \frac{ 1 }{((y-1)/2)!} y^{(y-3)/2} a^{(y-1)/2} \exp(-ay) }{% f(y;a) = (1 / ((y-1)/2)!) * y^((y-3)/2) * a^((y-1)/2) * exp(-ay)} where \eqn{y=1,3,5,\ldots} and \eqn{0 < a < 0.5}. The mean is \eqn{1/(1-2a)} (returned as the fitted values). Fisher scoring is implemented. 
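The density formula above is easily checked numerically against \code{\link{dfelix}}, e.g., with something like the following sketch (the value of \eqn{a} is arbitrary, and \code{dfelix} is assumed to take the quantile and the rate as its first two arguments):
\preformatted{
a <- 0.25; y <- c(1, 3, 5, 7)
cbind(dfelix(y, a),
      y^((y - 3)/2) * a^((y - 1)/2) * exp(-a * y) /
      factorial((y - 1)/2))  # The two columns should agree
}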
} \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Consul, P. C. and Famoye, F. (2006) \emph{Lagrangian Probability Distributions}, Boston, USA: Birkhauser. } \author{ T. W. Yee } %\note{ % %} \seealso{ \code{\link{dfelix}}, \code{\link{borel.tanner}}. } \examples{ fdata <- data.frame(y = 2 * rpois(n = 200, 1) + 1) # Not real data! fit <- vglm(y ~ 1, felix, data = fdata, trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) summary(fit) } \keyword{models} \keyword{regression} VGAM/man/nbcanlink.Rd0000644000176200001440000001263213135276753014025 0ustar liggesusers\name{nbcanlink} \alias{nbcanlink} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Negative Binomial Canonical Link Function } \description{ Computes the negative binomial canonical link transformation, including its inverse and the first two derivatives. } \usage{ nbcanlink(theta, size = NULL, wrt.param = NULL, bvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. Typically the mean of a negative binomial distribution (NBD). See below for further details. } \item{size, wrt.param}{ \code{size} contains the \eqn{k} matrix which must be of a dimension conformable with \code{theta}. Also, if \code{deriv > 0} then \code{wrt.param} is either 1 or 2 (1 for with respect to the first parameter, and 2 for with respect to the second parameter (\code{size})). } \item{bvalue}{ Details at \code{\link{Links}}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The NBD canonical link is \eqn{\log(\theta/ (\theta + k))}{log(theta/(theta + k))} where \eqn{\theta}{theta} is the NBD mean. The canonical link is used for theoretically relating the NBD to the GLM class. This link function was specifically written for \code{\link{negbinomial}} and \code{\link{negbinomial.size}}, and should not be used elsewhere (these \pkg{VGAM} family functions have code that specifically handles \code{nbcanlink()}.) } \value{ For \code{deriv = 0}, the above equation when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{kmatrix / expm1(-theta)} where \code{theta} is really \code{eta}. For \code{deriv = 1}, the function returns \emph{d} \code{eta} / \emph{d} \code{theta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. } \references{ Miranda, V. S. and Yee, T. W. (2017) On mean function modelling for several one-parameter discrete distributions. \emph{Manuscript in preparation}. Yee, T. W. (2014) Reduced-rank vector generalized linear models with two linear predictors. \emph{Computational Statistics and Data Analysis}, \bold{71}, 889--902. Hilbe, J. M. (2011) \emph{Negative Binomial Regression}, 2nd Edition. Cambridge: Cambridge University Press. } \author{ Thomas W. Yee, with a lot of help from Victor Miranda to get it going with \code{\link{negbinomial}}. } \section{Warning}{ This function works with \code{\link{negbinomial}} but care is needed because it is numerically fraught. In particular, the first linear/additive predictor must have negative values, and finding good initial values may be difficult, leading to it crashing at the start. Hence the NB-C model is sensitive to the initial values and may converge to a local solution.
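One way to see why the first linear/additive predictor must be negative is to evaluate the link directly; since \eqn{0 < \theta/(\theta + k) < 1}{0 < theta/(theta + k) < 1}, the link value is always negative. A small sketch:
\preformatted{
mymu <- c(0.1, 1, 10, 100)
nbcanlink(mymu, size = cbind(rep(1, 4)))  # All values are negative
}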
Pages 210 and 309 of Hilbe (2011) note convergence difficulties (of Newton-Raphson type algorithms), and some of that applies here. Setting \code{trace = TRUE} is a good idea, as is trying various values of \code{imethod} in \code{\link{negbinomial}}. % This function should work okay with \code{\link{negbinomial.size}}. % Standard errors may be unreliable. } \note{ While theoretically nice, this function is not recommended in general since its value is always negative (linear predictors ought to be unbounded in general). A \code{\link{loge}} link for argument \code{lmu} is recommended instead. Numerical instability may occur when \code{theta} is close to 0 or 1. Values of \code{theta} which are less than or equal to 0 can be replaced by \code{bvalue} before computing the link function value. See \code{\link{Links}}. } \seealso{ \code{\link{negbinomial}}, \code{\link{negbinomial.size}}. } \examples{ nbcanlink("mu", short = FALSE) mymu <- 1:10 # Test some basic operations: kmatrix <- cbind(runif(length(mymu))) eta1 <- nbcanlink(mymu, size = kmatrix) ans2 <- nbcanlink(eta1, size = kmatrix, inverse = TRUE) max(abs(ans2 - mymu)) # Should be 0 \dontrun{ mymu <- seq(0.5, 10, length = 101) kmatrix <- matrix(10, length(mymu), 1) plot(nbcanlink(mymu, size = kmatrix) ~ mymu, las = 1, type = "l", col = "blue", xlab = expression({mu})) } # Estimate the parameters from some simulated data ndata <- data.frame(x2 = runif(nn <- 1000)) ndata <- transform(ndata, eta1 = -1 - 1 * x2, # eta1 < 0 size1 = exp(1), size2 = exp(2)) ndata <- transform(ndata, mu1 = nbcanlink(eta1, size = size1, inverse = TRUE), mu2 = nbcanlink(eta1, size = size2, inverse = TRUE)) ndata <- transform(ndata, y1 = rnbinom(nn, mu = mu1, size = size1), y2 = rnbinom(nn, mu = mu2, size = size2)) summary(ndata) nbcfit <- vglm(cbind(y1, y2) ~ x2, # negbinomial(lmu = "nbcanlink", imethod = 1), # Try this negbinomial(lmu = "nbcanlink", imethod = 2), # Try this data = ndata, trace = TRUE) coef(nbcfit, matrix = TRUE) summary(nbcfit) } \keyword{math} \keyword{models} \keyword{regression} % abline(h = 0, col = "lightgray", lty = "dashed", lwd = 2.0) % The variance-covariance matrix may be wrong when the % canonical link is used. % vcov(fit) # May be wrong % 20150714; yettodo: fix up this and getting it going. % Hint: the working weights are treated as diagonal, whereas it isn't! VGAM/man/tobitUC.Rd0000644000176200001440000000623213135276753013436 0ustar liggesusers\name{Tobit} \alias{Tobit} \alias{dtobit} \alias{ptobit} \alias{qtobit} \alias{rtobit} \title{The Tobit Distribution} \description{ Density, distribution function, quantile function and random generation for the Tobit model. } \usage{ dtobit(x, mean = 0, sd = 1, Lower = 0, Upper = Inf, log = FALSE) ptobit(q, mean = 0, sd = 1, Lower = 0, Upper = Inf, lower.tail = TRUE, log.p = FALSE) qtobit(p, mean = 0, sd = 1, Lower = 0, Upper = Inf, lower.tail = TRUE, log.p = FALSE) rtobit(n, mean = 0, sd = 1, Lower = 0, Upper = Inf) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{Lower, Upper}{vector of lower and upper thresholds. } \item{mean, sd, lower.tail, log, log.p}{ see \code{\link[stats:Normal]{rnorm}}. } } \value{ \code{dtobit} gives the density, \code{ptobit} gives the distribution function, \code{qtobit} gives the quantile function, and \code{rtobit} generates random deviates. } \author{ T. W.
Yee } \details{ See \code{\link{tobit}}, the \pkg{VGAM} family function for estimating the parameters, for details. Note that the density at \code{Lower} and \code{Upper} is the area to the left and right of those points. Thus there are two spikes (of finite probability mass); see the example below. Consequently, \code{dtobit(Lower) + dtobit(Upper) + } the area in between equals unity. % 20141223; this is old: % Note that the density at \code{Lower} and \code{Upper} is the % value of \code{\link[stats:Normal]{dnorm}} evaluated there plus % the area to the left/right of that point too. } %\note{ %} \seealso{ \code{\link{tobit}}, \code{\link[stats:Normal]{rnorm}}. } \examples{ mu <- 0.5; x <- seq(-2, 4, by = 0.01) Lower <- -1; Upper <- 2.0 integrate(dtobit, lower = Lower, upper = Upper, mean = mu, Lower = Lower, Upper = Upper)$value + dtobit(Lower, mean = mu, Lower = Lower, Upper = Upper) + dtobit(Upper, mean = mu, Lower = Lower, Upper = Upper) # Adds to unity \dontrun{ plot(x, ptobit(x, m = mu, Lower = Lower, Upper = Upper), type = "l", ylim = 0:1, las = 1, col = "orange", ylab = paste("ptobit(m = ", mu, ", sd = 1, Lower =", Lower, ", Upper =", Upper, ")"), main = "Orange is cumulative distribution function; blue is density", sub = "Purple lines are the 10,20,...,90 percentiles") abline(h = 0) lines(x, dtobit(x, m = mu, Lower = Lower, Upper = Upper), col = "blue") probs <- seq(0.1, 0.9, by = 0.1) Q <- qtobit(probs, m = mu, Lower = Lower, Upper = Upper) lines(Q, ptobit(Q, m = mu, Lower = Lower, Upper = Upper), col = "purple", lty = "dashed", type = "h") lines(Q, dtobit(Q, m = mu, Lower = Lower, Upper = Upper), col = "darkgreen", lty = "dashed", type = "h") abline(h = probs, col = "purple", lty = "dashed") max(abs(ptobit(Q, m = mu, Lower = Lower, Upper = Upper) - probs)) # Should be 0 endpts <- c(Lower, Upper) # Endpoints have a spike (not quite, actually) lines(endpts, dtobit(endpts, m = mu, Lower = Lower, Upper = Upper), col = "blue", lwd = 3, type = "h") } } \keyword{distribution} VGAM/man/oiposbinomial.Rd0000644000176200001440000001442413135276753014731 0ustar liggesusers\name{oiposbinomial} \alias{oiposbinomial} %\alias{oiposbinomialff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ One-Inflated Positive Binomial Distribution Family Function } \description{ Fits a one-inflated positive binomial distribution by maximum likelihood estimation. } \usage{ oiposbinomial(lpstr1 = "logit", lprob = "logit", type.fitted = c("mean", "prob", "pobs1", "pstr1", "onempstr1"), iprob = NULL, gpstr1 = ppoints(9), gprob = ppoints(9), multiple.responses = FALSE, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lpstr1, lprob}{ Link functions for the parameter \eqn{\phi}{phi} and the positive binomial probability \eqn{\mu}{prob} parameter. See \code{\link{Links}} for more choices. See \code{\link{CommonVGAMffArguments}} also. For the one-\emph{deflated} model see below. } % \item{epstr1, eprob}{ % epstr1 = list(), eprob = list(), % List. Extra argument for the respective links. % See \code{earg} in \code{\link{Links}} for general information. % } \item{type.fitted}{ See \code{\link{CommonVGAMffArguments}} and \code{\link{fittedvlm}}. } \item{iprob, gpstr1, gprob}{ For initial values; see \code{\link{CommonVGAMffArguments}}. } % \item{lonempstr1, ionempstr1}{ % Corresponding arguments for the other parameterization. % See details below. % } % \item{zero}{ % An integer specifying which linear/additive predictor is modelled % as intercepts only.
If given, the value must be either 1 or 2, % and the default is the first. Setting \code{zero = NULL} enables both % \eqn{\phi}{phi} and \eqn{\mu}{prob} to be modelled as a function of % the explanatory variables. % See \code{\link{CommonVGAMffArguments}} for more information. % } \item{multiple.responses}{ Logical. See \code{\link{binomialff}} and \code{\link{posbinomial}}. % Currently it must be \code{FALSE} to mean the % function does not handle multiple responses. This % is to remain compatible with the same argument in % \code{\link{binomialff}}. } \item{zero}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ These functions are based on \deqn{P(Y=y) = \phi + (1-\phi) N \mu (1-\mu)^{N-1} / (1-(1-\mu)^N),}{% P(Y=y) = phi + (1-phi) * N * prob * (1-prob)^(N-1) / (1-(1-prob)^N),} for \eqn{y=1/N}, and \deqn{P(Y=y) = (1-\phi) {N \choose Ny} \mu^{Ny} (1-\mu)^{N(1-y)} / (1-(1-\mu)^N).}{% P(Y=y) = (1-phi) * choose(N,Ny) * prob^(N*y) * (1-prob)^(N*(1-y)) / (1-(1-prob)^N).} for \eqn{y=2/N,\ldots,1}. That is, the response is a sample proportion out of \eqn{N} trials, and the argument \code{size} in \code{\link{roiposbinom}} is \eqn{N} here. Ideally \eqn{N > 2} is needed. The parameter \eqn{\phi}{phi} is the probability of a structural one, and it satisfies \eqn{0 < \phi < 1}{0 < phi < 1} (usually). The mean of \eqn{Y} is \eqn{E(Y)=\phi + (1-\phi) \mu / (1-(1-\mu)^N)}{E(Y) = phi + (1-phi) * prob / (1-(1-prob)^N)} and these are returned as the default fitted values. By default, the two linear/additive predictors for \code{oiposbinomial()} are \eqn{(logit(\phi), logit(\mu))^T}{(logit(phi), logit(prob))^T}. % The \pkg{VGAM} family function \code{oiposbinomialff()} has a few % changes compared to \code{oiposbinomial()}. % These are: % (i) the order of the linear/additive predictors is switched so the % binomial probability comes first; % (ii) argument \code{onempstr1} is now 1 minus % the probability of a structural zero, i.e., % the probability of the parent (binomial) component, % i.e., \code{onempstr1} is \code{1-pstr1}; % (iii) argument \code{zero} has a new default so that the \code{onempstr1} % is intercept-only by default. % Now \code{oiposbinomialff()} is generally recommended over % \code{oiposbinomial()}. % Both functions implement Fisher scoring. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } %\references{ % % %} \author{ T. W. Yee } \note{ The response variable should have one of the formats described by \code{\link{binomialff}}, e.g., a factor, a two-column matrix, or a vector of sample proportions with the \code{weights} argument specifying the values of \eqn{N}. To work well, one ideally needs large values of \eqn{N} and \eqn{\mu}{prob} much greater than 0, i.e., the larger \eqn{N} and \eqn{\mu}{prob} are, the better. If \eqn{N = 1} then the model is unidentifiable since the number of parameters is excessive. % 20130316; adding this: Estimated probabilities of a structural one and an observed one are returned, as in \code{\link{zipoisson}}. The one-\emph{deflated} positive binomial distribution might be fitted by setting \code{lpstr1 = "identitylink"}, albeit not entirely reliably. See \code{\link{zipoisson}} for information that can be applied here. % Else try the one-altered positive binomial distribution (see % \code{\link{oabinomial}}).
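% A sketch of fitting the one-deflated variant mentioned above (it
% reuses 'odata' and 'svec' from the examples below, and, as noted,
% reliability is not guaranteed):
% dfit <- vglm(y1 / svec ~ 1, oiposbinomial(lpstr1 = "identitylink"),
%              weights = svec, data = odata, trace = TRUE)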
} \seealso{ \code{\link{roiposbinom}}, \code{\link{posbinomial}}, \code{\link{binomialff}}, \code{\link[stats:Binomial]{rbinom}}. } \examples{ size <- 10 # Number of trials; N in the notation above nn <- 200 odata <- data.frame(pstr1 = logit( 0, inverse = TRUE), # 0.50 mubin1 = logit(-1, inverse = TRUE), # Mean of usual binomial svec = rep(size, length = nn), x2 = runif(nn)) odata <- transform(odata, mubin2 = logit(-1 + x2, inverse = TRUE)) odata <- transform(odata, y1 = roiposbinom(nn, svec, pr = mubin1, pstr1 = pstr1), y2 = roiposbinom(nn, svec, pr = mubin2, pstr1 = pstr1)) with(odata, table(y1)) fit1 <- vglm(y1 / svec ~ 1, oiposbinomial, data = odata, weights = svec, trace = TRUE, crit = "coef") fit2 <- vglm(y2 / svec ~ x2, oiposbinomial, data = odata, weights = svec, trace = TRUE) coef(fit1, matrix = TRUE) Coef(fit1) # Useful for intercept-only models head(fitted(fit1, type = "pobs1")) # Estimate of P(Y = 1) head(fitted(fit1)) with(odata, mean(y1)) # Compare this with fitted(fit1) summary(fit1) } \keyword{models} \keyword{regression} % fit@misc$pobs0 # Estimate of P(Y = 0) VGAM/man/vsmooth.spline.Rd0000644000176200001440000001552713135276753015064 0ustar liggesusers\name{vsmooth.spline} \alias{vsmooth.spline} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Vector cubic smoothing spline } \description{ Fits a vector cubic smoothing spline. } \usage{ vsmooth.spline(x, y, w = NULL, df = rep(5, M), spar = NULL, i.constraint = diag(M), x.constraint = diag(M), constraints = list("(Intercepts)" = i.constraint, x = x.constraint), all.knots = FALSE, var.arg = FALSE, scale.w = TRUE, nk = NULL, control.spar = list()) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ A vector, matrix or a list. If a list, the \code{x} component is used. If a matrix, the first column is used. \code{x} may also be a complex vector, in which case the real part is used, and the imaginary part is used for the response. In this help file, \code{n} is the number of unique values of \code{x}. } \item{y}{ A vector, matrix or a list. If a list, the \code{y} component is used. If a matrix, all but the first column are used. In this help file, \code{M} is the number of columns of \code{y} if there are no constraints on the functions. } \item{w}{ The weight matrices or the number of observations. If weight matrices, then this must be an \code{n}-row matrix with the elements in matrix-band form (see \code{iam}). If a vector, then these are the number of observations. By default, \code{w} is the \code{M} by \code{M} identity matrix for each observation, which in matrix-band form is \code{matrix(1, n, M)}. } \item{df}{ Numerical vector containing the degrees of freedom for each component function (smooth). If necessary, the vector is recycled to have length equal to the number of component functions to be estimated (\code{M} if there are no constraints), which equals the number of columns of the \code{x}-constraint matrix. A value of 2 means a linear fit, and each element of \code{df} should lie between 2 and \code{n}. The larger the values of \code{df}, the wigglier the smooths. } \item{spar}{ Numerical vector containing the non-negative smoothing parameters for each component function (smooth). If necessary, the vector is recycled to have length equal to the number of component functions to be estimated (\code{M} if there are no constraints), which equals the number of columns of the \code{x}-constraint matrix. A value of zero means the smooth goes through the data and hence is wiggly.
A value of \code{Inf} may be assigned, meaning the smooth will be linear. By default, the \code{NULL} value of \code{spar} means \code{df} is used to determine the smoothing parameters. } \item{all.knots}{ Logical. If \code{TRUE} then each distinct value of \code{x} will be a knot. By default, only a subset of the unique values of \code{x} are used; typically, the number of knots is \code{O(n^0.25)} for \code{n} large, but if \code{n <= 40} then all the unique values of \code{x} are used. } \item{i.constraint}{ An \code{M}-row constraint matrix for the intercepts. It must be of full column rank. By default, the constraint matrix for the intercepts is the \code{M} by \code{M} identity matrix, meaning no constraints. } \item{x.constraint}{ An \code{M}-row constraint matrix for \code{x}. It must be of full column rank. By default, the constraint matrix for \code{x} is the \code{M} by \code{M} identity matrix, meaning no constraints. } \item{constraints}{ An alternative to specifying \code{i.constraint} and \code{x.constraint}, this is a list with two components corresponding to the intercept and \code{x} respectively. They must both be \code{M}-row constraint matrices with full column rank. } \item{var.arg}{ Logical: return the pointwise variances of the fit? Currently, this corresponds only to the nonlinear part of the fit, and may be wrong. } \item{scale.w}{ Logical. By default, the weights \code{w} are scaled so that the diagonal elements have mean 1. } \item{nk}{ Number of knots. If used, this argument overrides \code{all.knots}, and must lie between 6 and \code{n}+2 inclusive. } \item{control.spar}{ See \code{\link[stats]{smooth.spline}}. } } \details{ The algorithm implemented is detailed in Yee (2000). It involves decomposing the component functions into a linear and nonlinear part, and using B-splines. The cost of the computation is \code{O(n M^3)}. The argument \code{spar} contains \emph{scaled} smoothing parameters. } \value{ An object of class \code{"vsmooth.spline"} (see \code{vsmooth.spline-class}). } \references{ Yee, T. W. (2000) Vector Splines and Other Vector Smoothers. Pages 529--534. In: Bethlehem, J. G. and van der Heijde, P. G. M. \emph{Proceedings in Computational Statistics COMPSTAT 2000}. Heidelberg: Physica-Verlag. } \author{ Thomas W. Yee } \note{ This function is quite similar to \code{\link[stats]{smooth.spline}} but offers less functionality. For example, cross validation is not implemented here. For \code{M = 1}, the results will generally be different, mainly due to the different way the knots are selected. The vector cubic smoothing spline which \code{s()} represents is computationally demanding for large \eqn{M}. The cost is approximately \eqn{O(n M^3)} where \eqn{n} is the number of unique abscissae. Yet to be done: return the \emph{unscaled} smoothing parameters. } %~Make other sections like WARNING with \section{WARNING }{....} ~ \section{WARNING}{ See \code{\link{vgam}} for information about an important bug. } \seealso{ \code{vsmooth.spline-class}, \code{plot.vsmooth.spline}, \code{predict.vsmooth.spline}, \code{iam}, \code{\link{sm.os}}, \code{\link[VGAM]{s}}, \code{\link[stats]{smooth.spline}}.
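% A sketch of supplying 'spar' directly instead of 'df' (the spar
% values are arbitrary; x, y and ww are as constructed in the
% examples below):
% fit.spar <- vsmooth.spline(x, y, w = ww, spar = c(0.2, 0.4, 0.2))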
} \examples{ nn <- 20; x <- 2 + 5*(nn:1)/nn x[2:4] <- x[5:7] # Allow duplication y1 <- sin(x) + rnorm(nn, sd = 0.13) y2 <- cos(x) + rnorm(nn, sd = 0.13) y3 <- 1 + sin(x) + rnorm(nn, sd = 0.13) # Run this for constraints y <- cbind(y1, y2, y3) ww <- cbind(rep(3, nn), 4, (1:nn)/nn) (fit <- vsmooth.spline(x, y, w = ww, df = 5)) \dontrun{ plot(fit) # The 1st and 3rd functions do not differ by a constant } mat <- matrix(c(1,0,1, 0,1,0), 3, 2) (fit2 <- vsmooth.spline(x, y, w = ww, df = 5, i.constr = mat, x.constr = mat)) # The 1st and 3rd functions do differ by a constant: mycols <- c("orange", "blue", "orange") \dontrun{ plot(fit2, lcol = mycols, pcol = mycols, las = 1) } p <- predict(fit, x = model.matrix(fit, type = "lm"), deriv = 0) max(abs(depvar(fit) - with(p, y))) # Should be 0; and fit@y is not good par(mfrow = c(3, 1)) ux <- seq(1, 8, len = 100) for (dd in 1:3) { pp <- predict(fit, x = ux, deriv = dd) \dontrun{with(pp, matplot(x, y, type = "l", main = paste("deriv =", dd), lwd = 2, ylab = "", cex.axis = 1.5, cex.lab = 1.5, cex.main = 1.5)) } } } \keyword{regression} \keyword{smooth} VGAM/man/prats.Rd0000644000176200001440000000375113135276753013217 0ustar liggesusers\name{prats} \alias{prats} \docType{data} \title{ Pregnant Rats Toxicological Experiment Data } \description{ Data from a small toxicological experiment. The subjects are fetuses from two randomized groups of pregnant rats, and they were given a placebo or chemical treatment. The number with birth defects was recorded, as well as each litter size. } \usage{ data(prats) } \format{ A data frame with the following variables. \describe{ \item{treatment}{ A \code{0} means control; a \code{1} means the chemical treatment. } \item{alive, litter.size}{ The number of fetuses alive at 21 days, out of the number of fetuses alive at 4 days (the litter size). } } } \details{ The data concern a toxicological experiment where the subjects are fetuses from two randomized groups of 16 pregnant rats each, and they were given a placebo or chemical treatment. The number with birth defects and the litter size were recorded. Half the rats were fed a control diet during pregnancy and lactation, and the diet of the other half was treated with a chemical. For each litter the number of pups alive at 4 days and the number of pups that survived the 21 day lactation period were recorded. } \source{ Weil, C. S. (1970) Selection of the valid number of sampling units and a consideration of their combination in toxicological studies involving reproduction, teratogenesis or carcinogenesis. \emph{Food and Cosmetics Toxicology}, \bold{8}(2), 177--182. %Food and Cosmetics Toxicology %Fd. Cosmet. Toxicol. } \references{ Williams, D. A. (1975) The Analysis of Binary Responses From Toxicological Experiments Involving Reproduction and Teratogenicity. \emph{Biometrics}, \bold{31}(4), 949--952. } \seealso{ \code{\link[VGAM]{betabinomial}}, \code{\link[VGAM]{betabinomialff}}. } \examples{ prats colSums(subset(prats, treatment == 0)) colSums(subset(prats, treatment == 1)) summary(prats) } \keyword{datasets} % % VGAM/man/kumarUC.Rd0000644000176200001440000000424513135276753013436 0ustar liggesusers\name{Kumar} \alias{Kumar} \alias{dkumar} \alias{pkumar} \alias{qkumar} \alias{rkumar} \title{The Kumaraswamy Distribution} \description{ Density, distribution function, quantile function and random generation for the Kumaraswamy distribution.
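% The Kumaraswamy cdf has the closed form F(x) = 1 - (1 - x^shape1)^shape2
% on (0, 1), so pkumar() can be checked directly; a small sketch with
% arbitrary parameter values:
% x <- 0.3; a <- 2; b <- 3
% all.equal(pkumar(x, a, b), 1 - (1 - x^a)^b)  # Expect TRUE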
} \usage{ dkumar(x, shape1, shape2, log = FALSE) pkumar(q, shape1, shape2, lower.tail = TRUE, log.p = FALSE) qkumar(p, shape1, shape2, lower.tail = TRUE, log.p = FALSE) rkumar(n, shape1, shape2) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{shape1, shape2}{ positive shape parameters. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dkumar} gives the density, \code{pkumar} gives the distribution function, \code{qkumar} gives the quantile function, and \code{rkumar} generates random deviates. } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{kumar}}, the \pkg{VGAM} family function for estimating the parameters, for the formula of the probability density function and other details. } %\note{ %} \seealso{ \code{\link{kumar}}. } \examples{ \dontrun{ shape1 <- 2; shape2 <- 2; nn <- 201; # shape1 <- shape2 <- 0.5; x <- seq(-0.05, 1.05, len = nn) plot(x, dkumar(x, shape1, shape2), type = "l", las = 1, ylim = c(0,1.5), ylab = paste("fkumar(shape1 = ", shape1, ", shape2 = ", shape2, ")"), col = "blue", cex.main = 0.8, main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles") lines(x, pkumar(x, shape1, shape2), col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qkumar(probs, shape1, shape2) lines(Q, dkumar(Q, shape1, shape2), col = "purple", lty = 3, type = "h") lines(Q, pkumar(Q, shape1, shape2), col = "purple", lty = 3, type = "h") abline(h = probs, col = "purple", lty = 3) max(abs(pkumar(Q, shape1, shape2) - probs)) # Should be 0 } } \keyword{distribution} VGAM/man/otlog.Rd0000644000176200001440000000305313135276753013207 0ustar liggesusers\name{otlog} \alias{otlog} %- Also NEED an '\alias' for EACH other topic documented here. \title{ One-truncated Logarithmic Distribution } \description{ Estimating the (single) parameter of the 1-truncated logarithmic distribution. } \usage{ otlog(lshape = "logit", gshape = ppoints(8), zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape, gshape, zero}{ Same as \code{\link{logff}}. } } \details{ The 1-truncated logarithmic distribution is a logarithmic distribution but with the probability of a one being zero. The other probabilities are scaled to add to unity. Some more details can be found at \code{\link{logff}}. Multiple responses are permitted. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } %\references{ %} \author{ T. W. Yee } %\note{ %} \seealso{ \code{\link{Otlog}}, \code{\link{logff}}, \code{\link{oalog}}, \code{\link{oilog}}, \code{\link{simulate.vlm}}. 
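% Because ones are truncated, the density at 1 is zero and the remaining
% probabilities sum to unity; a quick sketch (the shape value is
% arbitrary, and the infinite sum is truncated):
% shape <- logit(1/3, inverse = TRUE)
% dotlog(1, shape)             # Expect 0
% sum(dotlog(2:10000, shape))  # Expect a value very close to 1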
} \examples{ odata <- data.frame(y1 = rotlog(n = 1000, shape = logit(1/3, inverse = TRUE))) ofit <- vglm(y1 ~ 1, otlog, data = odata, trace = TRUE, crit = "c") coef(ofit, matrix = TRUE) Coef(ofit) \dontrun{with(odata, hist(y1, prob = TRUE, breaks = seq(0.5, max(y1) + 0.5, by = 1), border = "blue")) x <- seq(1, with(odata, max(y1)), by = 1) with(odata, lines(x, dotlog(x, Coef(ofit)[1]), col = "orange", type = "h", lwd = 2)) } } \keyword{models} \keyword{regression} VGAM/man/micmen.Rd0000644000176200001440000001054213135276753013334 0ustar liggesusers\name{micmen} \alias{micmen} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Michaelis-Menten Model } \description{ Fits a Michaelis-Menten nonlinear regression model. } \usage{ micmen(rpar = 0.001, divisor = 10, init1 = NULL, init2 = NULL, imethod = 1, oim = TRUE, link1 = "identitylink", link2 = "identitylink", firstDeriv = c("nsimEIM", "rpar"), probs.x = c(0.15, 0.85), nsimEIM = 500, dispersion = 0, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{rpar}{ Numeric. Initial positive ridge parameter. This is used to create positive-definite weight matrices. } \item{divisor}{ Numerical. The divisor used to divide the ridge parameter at each iteration until it is very small but still positive. The value of \code{divisor} should be greater than one. } \item{init1, init2}{ Numerical. Optional initial value for the first and second parameters, respectively. The default is to use a self-starting value. } \item{link1, link2}{ Parameter link function applied to the first and second parameters, respectively. See \code{\link{Links}} for more choices. } \item{dispersion}{ Numerical. Dispersion parameter. } \item{firstDeriv}{ Character. Algorithm for computing the first derivatives and working weights. The first is the default. } \item{imethod, probs.x}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } \item{oim}{ Use the OIM? See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The Michaelis-Menten model is given by \deqn{E(Y_i) = (\theta_1 u_i) / (\theta_2 + u_i)}{% E(Y_i) = theta1 * u_i / (theta2 + u_i)} where \eqn{\theta_1}{theta1} and \eqn{\theta_2}{theta2} are the two parameters. The relationship between iteratively reweighted least squares and the Gauss-Newton algorithm is given in Wedderburn (1974). However, the algorithm used by this family function is different. Details are given at the Author's web site. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Seber, G. A. F. and Wild, C. J. (1989) \emph{Nonlinear Regression}, New York: Wiley. Wedderburn, R. W. M. (1974) Quasi-likelihood functions, generalized linear models, and the Gauss-Newton method. \emph{Biometrika}, \bold{61}, 439--447. Bates, D. M. and Watts, D. G. (1988) \emph{Nonlinear Regression Analysis and Its Applications}, New York: Wiley. % Documentation accompanying the \pkg{VGAM} package at % \url{http://www.stat.auckland.ac.nz/~yee} % contains further information and examples. } \author{ T. W. Yee } \note{ The regressor values \eqn{u_i}{u_i} are inputted as the RHS of the \code{form2} argument. It should just be a simple term; no smart prediction is used. It should be just a single vector, so omit the intercept term. The LHS of the formula \code{form2} is ignored.
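% The Michaelis-Menten mean function in the Details section is easy to
% evaluate directly; a sketch with hypothetical parameter values,
% showing that E(Y) approaches theta1 as u grows:
% mm <- function(u, theta1, theta2) theta1 * u / (theta2 + u)
% mm(c(0.1, 1, 10, 100), theta1 = 2, theta2 = 0.5)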
To predict the response at new values of \eqn{u_i}{u_i} one must assign these values to the \code{@extra$Xm2} slot in the fitted object, e.g., see the example below. Numerical problems may occur. If so, try setting some initial values for the parameters. In the future, several self-starting initial values will be implemented. } \seealso{ \code{\link{enzyme}}. % \code{skira}. } \section{Warning }{ This function is not (nor could ever be) entirely reliable. Plotting the fitted function and monitoring convergence is recommended. } \examples{ fit <- vglm(velocity ~ 1, micmen, enzyme, trace = TRUE, crit = "coef", form2 = ~ conc - 1) summary(fit) \dontrun{ plot(velocity ~ conc, enzyme, xlab = "concentration", las = 1, col = "blue", main = "Michaelis-Menten equation for the enzyme data", ylim = c(0, max(velocity)), xlim = c(0, max(conc))) points(fitted(fit) ~ conc, enzyme, col = "red", pch = "+", cex = 1.5) # This predicts the response at a finer grid: newenzyme <- data.frame(conc = seq(0, max(with(enzyme, conc)), len = 200)) fit@extra$Xm2 <- newenzyme$conc # This assignment is needed for prediction lines(predict(fit, newenzyme, "response") ~ conc, newenzyme, col = "red") } } \keyword{models} \keyword{regression} VGAM/man/melbmaxtemp.Rd0000644000176200001440000000260713135276753014402 0ustar liggesusers\name{melbmaxtemp} \alias{melbmaxtemp} \docType{data} \title{ Melbourne Daily Maximum Temperatures} \description{ Melbourne daily maximum temperatures in degrees Celsius over the ten-year period 1981--1990. } \usage{ data(melbmaxtemp) } \format{ A vector with 3650 observations. } \details{ This is a time series from Melbourne, Australia. It is commonly used to give a difficult quantile regression problem since the data are bimodal. That is, a hot day is likely to be followed by either an equally hot day or one much cooler. However, an independence assumption is typically made. } %\source{ %\url{http://www.london2012.com/medals/medal-count/}. % % %} \references{ Hyndman, R. J. and Bashtannyk, D. M. and Grunwald, G. K. (1996). Estimating and visualizing conditional densities. \emph{J. Comput. Graph. Statist.}, \bold{5}(4), 315--336. } \seealso{ \code{\link[VGAM]{lms.bcn}}. } \examples{ summary(melbmaxtemp) \dontrun{ par(mfrow = c(1, 1), mar = c(5, 4, 0.2, 0.1) + 0.1, las = 1) melb <- data.frame(today = melbmaxtemp[-1], yesterday = melbmaxtemp[-length(melbmaxtemp)]) plot(today ~ yesterday, data = melb, xlab = "Yesterday's Max Temperature", ylab = "Today's Max Temperature", cex = 1.4, type = "n") points(today ~ yesterday, data = melb, pch = 0, cex = 0.50, col = "blue") abline(a = 0, b = 1, lty = 3) } } \keyword{datasets} VGAM/man/wrapup.smart.Rd0000644000176200001440000000203113135276753014521 0ustar liggesusers\name{wrapup.smart} \alias{wrapup.smart} \title{ Cleans Up After Smart Prediction } \description{ \code{wrapup.smart} deletes any variables used by smart prediction. Needed by both the modelling function and the prediction function. } \usage{ wrapup.smart() } \details{ The variables to be deleted are \code{.smart.prediction}, \code{.smart.prediction.counter}, and \code{.smart.prediction.mode}. The function \code{wrapup.smart} is useful in \R because these variables are held in \code{smartpredenv}. % In S-PLUS, % \code{wrapup.smart} is not really necessary because the variables are % placed in frame 1, which disappears when finished anyway. } %\references{ % See the technical help file at \url{http://www.stat.auckland.ac.nz/~yee} % for details. % % % %} \seealso{ \code{\link{setup.smart}}.
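% A sketch of checking the cleanup; this assumes the environment named
% in the Details section is reachable as VGAM:::smartpredenv, which is
% an assumption and may not hold:
% wrapup.smart()
% exists(".smart.prediction", envir = VGAM:::smartpredenv)  # Expect FALSE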
} \examples{ \dontrun{# Place this inside modelling functions such as lm, glm, vglm. wrapup.smart() # Put at the end of lm } } \keyword{models} \keyword{regression} \keyword{programming} % Converted by Sd2Rd version 1.10. VGAM/man/logff.Rd0000644000176200001440000000641513135276753013165 0ustar liggesusers\name{logff} \alias{logff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Logarithmic Distribution } \description{ Estimating the (single) parameter of the logarithmic distribution. } \usage{ logff(lshape = "logit", gshape = ppoints(8), zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape}{ Parameter link function for the parameter \eqn{c}, which lies between 0 and 1. See \code{\link{Links}} for more choices and information. Soon \code{logfflink()} will hopefully be available for event-rate data. } \item{gshape, zero}{ Details at \code{\link{CommonVGAMffArguments}}. } } \details{ The logarithmic distribution is a generalized power series distribution that is based specifically on the logarithmic series (scaled to a probability function). Its probability function is \eqn{f(y) = a c^y / y}{f(y) = a * c^y / y}, for \eqn{y=1,2,3,\ldots}{y=1,2,3,...}, where \eqn{0 < c < 1} (called \code{shape}), and \eqn{a = -1 / \log(1-c)}{a = -1 / log(1-c)}. The mean is \eqn{a c/(1-c)}{a*c/(1-c)} (returned as the fitted values) and variance is \eqn{a c (1-ac) /(1-c)^2}{a*c*(1-a*c)/(1-c)^2}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Chapter 7 of Johnson N. L., Kemp, A. W. and Kotz S. (2005) \emph{Univariate Discrete Distributions}, 3rd edition, Hoboken, New Jersey: Wiley. Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011) \emph{Statistical Distributions}, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition. } \author{ T. W. Yee } \note{ The function \code{\link[base:Log]{log}} computes the natural logarithm. In the \pkg{VGAM} library, a link function with option \code{\link{loge}} corresponds to this. Multiple responses are permitted. The logarithmic distribution is sometimes confused with the log-series distribution. The latter was used by Fisher et al. for species abundance data and has two parameters. } \seealso{ \code{\link{rlog}}, \code{\link{oalog}}, \code{\link{oilog}}, \code{\link{otlog}}, \code{\link[base:Log]{log}}, \code{\link{loge}}, \code{\link{logoff}}, \code{\link{explogff}}, \code{\link{simulate.vlm}}. 
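% A numerical check of the mean formula above (a sketch; the shape
% value is arbitrary and the infinite series is truncated):
% cc <- 0.5; a <- -1 / log1p(-cc)  # log1p(-cc) equals log(1 - cc)
% sum((1:1e5) * dlog(1:1e5, shape = cc))  # Should be close to ...
% a * cc / (1 - cc)                       # ... the theoretical mean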
} \examples{ ldata <- data.frame(y = rlog(n = 1000, shape = logit(0.2, inverse = TRUE))) fit <- vglm(y ~ 1, logff, data = ldata, trace = TRUE, crit = "c") coef(fit, matrix = TRUE) Coef(fit) \dontrun{with(ldata, hist(y, prob = TRUE, breaks = seq(0.5, max(y) + 0.5, by = 1), border = "blue")) x <- seq(1, with(ldata, max(y)), by = 1) with(ldata, lines(x, dlog(x, Coef(fit)[1]), col = "orange", type = "h", lwd = 2)) } # Example: Corbet (1943) butterfly Malaya data corbet <- data.frame(nindiv = 1:24, ofreq = c(118, 74, 44, 24, 29, 22, 20, 19, 20, 15, 12, 14, 6, 12, 6, 9, 9, 6, 10, 10, 11, 5, 3, 3)) fit <- vglm(nindiv ~ 1, logff, data = corbet, weights = ofreq) coef(fit, matrix = TRUE) shapehat <- Coef(fit)["shape"] pdf2 <- dlog(x = with(corbet, nindiv), shape = shapehat) print(with(corbet, cbind(nindiv, ofreq, fitted = pdf2 * sum(ofreq))), digits = 1) } \keyword{models} \keyword{regression} VGAM/man/plotvgam.control.Rd0000644000176200001440000000722713135276753015402 0ustar liggesusers\name{plotvgam.control} \alias{plotvgam.control} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Control Function for plotvgam() } \description{ Provides default values for many arguments available for \code{plotvgam()}. } \usage{ plotvgam.control(which.cf = NULL, xlim = NULL, ylim = NULL, llty = par()$lty, slty = "dashed", pcex = par()$cex, pch = par()$pch, pcol = par()$col, lcol = par()$col, rcol = par()$col, scol = par()$col, llwd = par()$lwd, slwd = par()$lwd, add.arg = FALSE, one.at.a.time = FALSE, .include.dots = TRUE, noxmean = FALSE, shade = FALSE, shcol = "gray80", ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{which.cf}{ Integer vector specifying which component functions are to be plotted (for each covariate). Must have values from the set \{1,2,\ldots,\eqn{M}\}. } \item{xlim}{ Range for the x-axis. } \item{ylim}{ Range for the y-axis. } \item{llty}{ Line type for the fitted functions (lines). Fed into \code{par(lty)}. } \item{slty}{ Line type for the standard error bands. Fed into \code{par(lty)}. } \item{pcex}{ Character expansion for the points (residuals). Fed into \code{par(cex)}. } \item{pch}{ Character used for the points (residuals). Same as \code{par(pch)}. } \item{pcol}{ Color of the points. Fed into \code{par(col)}. } \item{lcol}{ Color of the fitted functions (lines). Fed into \code{par(col)}. } \item{rcol}{ Color of the rug plot. Fed into \code{par(col)}. } \item{scol}{ Color of the standard error bands. Fed into \code{par(col)}. } \item{llwd}{ Line width of the fitted functions (lines). Fed into \code{par(lwd)}. } \item{slwd}{ Line width of the standard error bands. Fed into \code{par(lwd)}. } \item{add.arg}{ Logical. If \code{TRUE} then the plot will be added to an existing plot, otherwise a new plot will be made. } \item{one.at.a.time}{ Logical. If \code{TRUE} then the plots are done one at a time, with the user having to hit the return key between the plots. } \item{.include.dots}{ Not to be used by the user. } \item{noxmean}{ Logical. If \code{TRUE} then the point at the mean of \eqn{x}, which is added when standard errors are specified and it thinks the function is linear, is not added. One might use this argument if \code{ylab} is specified. } \item{shade, shcol}{ \code{shade} is logical; if \code{TRUE} then the pointwise SE band is shaded gray by default. The colour can be adjusted by setting \code{shcol}. 
These arguments are ignored unless \code{se = TRUE} and \code{overlay = FALSE}; if \code{shade = TRUE} then \code{scol} is ignored. } \item{\dots}{ Other arguments that may be fed into \code{par()}. } In the above, \eqn{M} is the number of linear/additive predictors. } \details{ The most obvious features of \code{\link{plotvgam}} can be controlled by the above arguments. } \value{ A list with values matching the arguments. } \references{ Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. } \author{ Thomas W. Yee } %\note{ % This function enables \code{plotvgam()} to work in a similar % manner to S-PLUS's \code{plot.gam()}. % However, there are no interactive options yet. % %} \seealso{ \code{\link{plotvgam}}. } \examples{ plotvgam.control(lcol = c("red", "blue"), scol = "darkgreen", se = TRUE) } \keyword{models} \keyword{regression} \keyword{smooth} \keyword{dplot} VGAM/man/sc.studentt2.Rd0000644000176200001440000000715113135276753014426 0ustar liggesusers\name{sc.studentt2} \alias{sc.studentt2} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Scaled Student t Distribution with 2 df Family Function } \description{ Estimates the location and scale parameters of a scaled Student t distribution with 2 degrees of freedom, by maximum likelihood estimation. } \usage{ sc.studentt2(percentile = 50, llocation = "identitylink", lscale = "loge", ilocation = NULL, iscale = NULL, imethod = 1, zero = "scale") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{percentile}{ A numerical vector containing values between 0 and 100, which are the quantiles and expectiles. They will be returned as `fitted values'. } \item{llocation, lscale}{ See \code{\link{Links}} for more choices, and \code{\link{CommonVGAMffArguments}}. } \item{ilocation, iscale, imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for details. } } \details{ Koenker (1993) solved for the distribution whose quantiles are equal to its expectiles. Its canonical form has mean and mode at 0, and has a heavy tail (in fact, its variance is infinite). % This is called Koenker's distribution here. The standard (``canonical'') form of this distribution can be endowed with a location and scale parameter. The standard form has a density that can be written as \deqn{f(z) = 2 / (4 + z^2)^{3/2}}{% f(z) = 2 / (4 + z^2)^(3/2) } for real \eqn{z}. Then \eqn{z = (y-a)/b} for location and scale parameters \eqn{a} and \eqn{b > 0}. The mean of \eqn{Y} is \eqn{a}{a}. By default, \eqn{\eta_1=a}{eta1=a} and \eqn{\eta_2=\log(b)}{eta2=log(b)}. The expectiles/quantiles corresponding to \code{percentile} are returned as the fitted values; in particular, \code{percentile = 50} corresponds to the mean (0.5 expectile) and median (0.5 quantile). Note that if \eqn{Y} has a standard \code{\link{dsc.t2}} then \eqn{Y = \sqrt{2} T_2}{Y = sqrt(2) * T_2} where \eqn{T_2} has a Student-t distribution with 2 degrees of freedom. The two parameters here can also be estimated using \code{\link{studentt2}} by specifying \code{df = 2} and making an adjustment for the scale parameter; however, this \pkg{VGAM} family function is more efficient since the EIM is known (Fisher scoring is implemented). } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Koenker, R.
(1993) When are expectiles percentiles? (solution) \emph{Econometric Theory}, \bold{9}, 526--527. } \author{ T. W. Yee } %\note{ % %} \seealso{ \code{\link{dsc.t2}}, \code{\link{studentt2}}. } \examples{ set.seed(123); nn <- 1000 kdata <- data.frame(x2 = sort(runif(nn))) kdata <- transform(kdata, mylocat = 1 + 3 * x2, myscale = 1) kdata <- transform(kdata, y = rsc.t2(nn, loc = mylocat, scale = myscale)) fit <- vglm(y ~ x2, sc.studentt2(perc = c(1, 50, 99)), data = kdata) fit2 <- vglm(y ~ x2, studentt2(df = 2), data = kdata) # 'same' as fit coef(fit, matrix = TRUE) head(fitted(fit)) head(predict(fit)) # Nice plot of the results \dontrun{ plot(y ~ x2, data = kdata, col = "blue", las = 1, sub = paste("n =", nn), main = "Fitted quantiles/expectiles using the sc.studentt2() distribution") matplot(with(kdata, x2), fitted(fit), add = TRUE, type = "l", lwd = 3) legend("bottomright", lty = 1:3, lwd = 3, legend = colnames(fitted(fit)), col = 1:3) } fit@extra$percentile # Sample quantiles } \keyword{models} \keyword{regression} VGAM/man/gengamma.Rd0000644000176200001440000001345213135276753013643 0ustar liggesusers\name{gengamma.stacy} \alias{gengamma.stacy} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Generalized Gamma distribution family function } \description{ Estimation of the 3-parameter generalized gamma distribution proposed by Stacy (1962). } \usage{ gengamma.stacy(lscale = "loge", ld = "loge", lk = "loge", iscale = NULL, id = NULL, ik = NULL, imethod = 1, gscale.mux = exp((-4:4)/2), gshape1.d = exp((-5:5)/2), gshape2.k = exp((-5:5)/2), probs.y = 0.3, zero = c("d", "k")) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lscale, ld, lk}{ Parameter link function applied to each of the positive parameters \eqn{b}, \eqn{d} and \eqn{k}, respectively. See \code{\link{Links}} for more choices. } \item{iscale, id, ik}{ Initial value for \eqn{b}, \eqn{d} and \eqn{k}, respectively. The defaults mean an initial value is determined internally for each. } \item{gscale.mux, gshape1.d, gshape2.k}{ See \code{\link{CommonVGAMffArguments}} for information. Replaced by \code{iscale}, \code{id} etc. if given. } \item{imethod, probs.y, zero}{ See \code{\link{CommonVGAMffArguments}} for information. % An integer-valued vector specifying which % linear/additive predictors are modelled as intercepts only. % The values must be from the set \{1,2,3\}. % The default value means none are modelled as intercept-only terms. } } \details{ The probability density function can be written \deqn{f(y;b,d,k) = d b^{-d k} y^{d k-1} \exp[-(y/b)^d] / \Gamma(k)}{% f(y;b,d,k) = d * b^(-d*k) * y^(d*k-1) * exp(-(y/b)^d) / gamma(k)} for scale parameter \eqn{b > 0}, Weibull-type shape parameter \eqn{d > 0}, gamma-type shape parameter \eqn{k > 0}, and \eqn{y > 0}. The mean of \eqn{Y} is \eqn{b \times \Gamma(k+1/d) / \Gamma(k)}{b*gamma(k+1/d)/gamma(k)} (returned as the fitted values), which equals \eqn{bk}{b*k} if \eqn{d=1}. There are many special cases, as given in Table 1 of Stacy and Mihram (1965). In the following, the parameters are in the order \eqn{b,d,k}.
The special cases are: Exponential \eqn{f(y;b,1,1)}, Gamma \eqn{f(y;b,1,k)}, Weibull \eqn{f(y;b,d,1)}, Chi Squared \eqn{f(y;2,1,a/2)} with \eqn{a} degrees of freedom, Chi \eqn{f(y;\sqrt{2},2,a/2)}{f(y;sqrt(2),2,a/2)} with \eqn{a} degrees of freedom, Half-normal \eqn{f(y;\sqrt{2},2,1/2)}{f(y;sqrt(2),2,1/2)}, Circular normal \eqn{f(y;\sqrt{2},2,1)}{f(y;sqrt(2),2,1)}, Spherical normal \eqn{f(y;\sqrt{2},2,3/2)}{f(y;sqrt(2),2,3/2)}, Rayleigh \eqn{f(y;c\sqrt{2},2,1)}{f(y;c sqrt(2),2,1)} where \eqn{c>0}. Also the log-normal distribution corresponds to when \code{k = Inf}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Stacy, E. W. (1962) A generalization of the gamma distribution. \emph{Annals of Mathematical Statistics}, \bold{33}(3), 1187--1192. Stacy, E. W. and Mihram, G. A. (1965) Parameter estimation for a generalized gamma distribution. \emph{Technometrics}, \bold{7}, 349--358. Prentice, R. L. (1974) A log gamma model and its maximum likelihood estimation. \emph{Biometrika}, \bold{61}, 539--544. } \section{Warning }{ Several authors have considered maximum likelihood estimation for the generalized gamma distribution and have found that the Newton-Raphson algorithm does not work very well and that the existence of solutions to the log-likelihood equations is sometimes in doubt. Although Fisher scoring is used here, it is likely that the same problems will be encountered. It appears that large samples are required, for example, the estimator of \eqn{k} became asymptotically normal only with 400 or more observations. It is not uncommon for maximum likelihood estimates to fail to converge even with two or three hundred observations. With covariates, even more observations are needed to increase the chances of convergence. Using covariates is not advised unless the sample size is at least a few thousand, and even if so, modelling 1 or 2 parameters as intercept-only is a very good idea (e.g., \code{zero = 2:3}). Monitoring convergence is also a very good idea (e.g., set \code{trace = TRUE}). Half-stepping is not uncommon, and if this occurs, then the results should be viewed with more suspicion. } \author{ T. W. Yee } \note{ The notation used here differs from Stacy (1962) and Prentice (1974). Poor initial values may result in failure to converge so if there are covariates and there are convergence problems, try using or checking the \code{zero} argument (e.g., \code{zero = 2:3}) or the \code{ik} argument or the \code{imethod} argument, etc. } \seealso{ \code{\link{rgengamma.stacy}}, \code{\link{gamma1}}, \code{\link{gamma2}}, \code{\link{prentice74}}, \code{\link{simulate.vlm}}, \code{\link{chisq}}, \code{\link{lognormal}}, \code{\link{rayleigh}}, \code{\link{weibullR}}. 
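% A sketch checking the Weibull special case f(y;b,d,1) listed above
% (this assumes dgengamma.stacy() is available, as the d-type
% counterpart of rgengamma.stacy()):
% y <- c(0.5, 1, 2); b <- 2; d <- 3
% all.equal(dgengamma.stacy(y, scale = b, d = d, k = 1),
%           dweibull(y, shape = d, scale = b))  # Expect TRUE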
} \examples{ k <- exp(-1); Scale <- exp(1); dd <- exp(0.5); set.seed(1) gdata <- data.frame(y = rgamma(2000, shape = k, scale = Scale)) gfit <- vglm(y ~ 1, gengamma.stacy, data = gdata, trace = TRUE) coef(gfit, matrix = TRUE) } \keyword{models} \keyword{regression} %# Another example %gdata <- data.frame(x2 = runif(nn <- 5000)) %gdata <- transform(gdata, Scale = exp(1), % d = exp( 0 + 1.2* x2), % k = exp(-1 + 2 * x2)) %gdata <- transform(gdata, y = rgengamma.stacy(nn, scale = Scale, d = d, k = k)) %fit <- vglm(y ~ x2, gengamma.stacy(zero = 1, iscale = 6), data = gdata, trace = TRUE) %fit <- vglm(y ~ x2, gengamma.stacy(zero = 1), data = gdata, trace = TRUE, maxit = 50) %coef(fit, matrix = TRUE) VGAM/man/inv.binomial.Rd0000644000176200001440000000663013135276753014454 0ustar liggesusers\name{inv.binomial} \alias{inv.binomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{Inverse Binomial Distribution Family Function} \description{ Estimates the two parameters of an inverse binomial distribution by maximum likelihood estimation. } \usage{ inv.binomial(lrho = extlogit(min = 0.5, max = 1), llambda = "loge", irho = NULL, ilambda = NULL, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lrho, llambda}{ Link function for the \eqn{\rho}{rho} and \eqn{\lambda}{lambda} parameters. See \code{\link{Links}} for more choices. } \item{irho, ilambda}{ Numeric. Optional initial values for \eqn{\rho}{rho} and \eqn{\lambda}{lambda}. } \item{zero}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ The inverse binomial distribution of Yanagimoto (1989) has density function \deqn{f(y;\rho,\lambda) = \frac{ \lambda \,\Gamma(2y+\lambda) }{\Gamma(y+1) \, \Gamma(y+\lambda+1) } \{ \rho(1-\rho) \}^y \rho^{\lambda}}{% f(y;rho,lambda) = (lambda * Gamma(2y+lambda)) * [rho*(1-rho)]^y * rho^lambda / (Gamma(y+1) * Gamma(y+lambda+1))} where \eqn{y=0,1,2,\ldots}{y=0,1,2,...} and \eqn{\frac12 < \rho < 1}{0.5 < rho < 1}, and \eqn{\lambda > 0}{lambda > 0}. The first two moments exist for \eqn{\rho>\frac12}{rho>0.5}; then the mean is \eqn{\lambda (1-\rho) /(2 \rho-1)}{lambda*(1-rho)/(2*rho-1)} (returned as the fitted values) and the variance is \eqn{\lambda \rho (1-\rho) /(2 \rho-1)^3}{lambda*rho*(1-rho)/(2*rho-1)^3}. The inverse binomial distribution is a special case of the generalized negative binomial distribution of Jain and Consul (1971). It holds that \eqn{Var(Y) > E(Y)} so that the inverse binomial distribution is overdispersed compared with the Poisson distribution. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Yanagimoto, T. (1989) The inverse binomial distribution as a statistical model. \emph{Communications in Statistics: Theory and Methods}, \bold{18}, 3625--3633. Jain, G. C. and Consul, P. C. (1971) A generalized negative binomial distribution. \emph{SIAM Journal on Applied Mathematics}, \bold{21}, 501--513. Jorgensen, B. (1997) \emph{The Theory of Dispersion Models}. London: Chapman & Hall } \author{ T. W. Yee } \note{ This \pkg{VGAM} family function only works reasonably well with intercept-only models. Good initial values are needed; if convergence failure occurs use \code{irho} and/or \code{ilambda}. Some elements of the working weight matrices use the expected information matrix while other elements use the observed information matrix. 
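% Evaluating the moment formulas in the Details section for
% hypothetical parameter values illustrates the overdispersion
% Var(Y) > E(Y):
% rho <- 0.8; lambda <- 2
% (meanY <- lambda * (1 - rho) / (2 * rho - 1))         # E(Y), about 0.67
% (varY <- lambda * rho * (1 - rho) / (2 * rho - 1)^3)  # Var(Y), about 1.48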
Yet to do: using the mean and the reciprocal of \eqn{\lambda}{lambda} results in an EIM that is diagonal. } \seealso{ \code{\link{negbinomial}}, \code{\link{poissonff}}. } \examples{ idata <- data.frame(y = rnbinom(n <- 1000, mu = exp(3), size = exp(1))) fit <- vglm(y ~ 1, inv.binomial, data = idata, trace = TRUE) with(idata, c(mean(y), head(fitted(fit), 1))) summary(fit) coef(fit, matrix = TRUE) Coef(fit) sum(weights(fit)) # Sum of the prior weights sum(weights(fit, type = "work")) # Sum of the working weights } \keyword{models} \keyword{regression} %fit <- vglm(y ~ 1, inv.binomial(ilambda = 1), trace = TRUE, crit = "c", checkwz = FALSE) VGAM/man/lms.bcn.Rd0000644000176200001440000002204513135276753013421 0ustar liggesusers\name{lms.bcn} \alias{lms.bcn} %- Also NEED an '\alias' for EACH other topic documented here. \title{ LMS Quantile Regression with a Box-Cox Transformation to Normality } \description{ LMS quantile regression with the Box-Cox transformation to normality. } \usage{ lms.bcn(percentiles = c(25, 50, 75), zero = c("lambda", "sigma"), llambda = "identitylink", lmu = "identitylink", lsigma = "loge", idf.mu = 4, idf.sigma = 2, ilambda = 1, isigma = NULL, tol0 = 0.001) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{percentiles}{ A numerical vector containing values between 0 and 100, which are the quantiles. They will be returned as `fitted values'. % or expectiles. % 20140624; withdrawn 'expectiles'. % isigma = NULL, tol0 = 0.001, expectiles = FALSE } \item{zero}{ Can be an integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set \{1,2,3\}. The default value usually increases the chance of successful convergence. Setting \code{zero = NULL} means they all are functions of the covariates. For more information see \code{\link{CommonVGAMffArguments}}. } \item{llambda, lmu, lsigma}{ Parameter link functions applied to the first, second and third linear/additive predictors. See \code{\link{Links}} for more choices, and \code{\link{CommonVGAMffArguments}}. } \item{idf.mu}{ Degrees of freedom for the cubic smoothing spline fit applied to get an initial estimate of mu. See \code{\link{vsmooth.spline}}. } \item{idf.sigma}{ Degrees of freedom for the cubic smoothing spline fit applied to get an initial estimate of sigma. See \code{\link{vsmooth.spline}}. This argument may be assigned \code{NULL} to get an initial value using some other algorithm. } \item{ilambda}{ Initial value for lambda. If necessary, it is recycled to be a vector of length \eqn{n} where \eqn{n} is the number of (independent) observations. } \item{isigma}{ Optional initial value for sigma. If necessary, it is recycled to be a vector of length \eqn{n}. The default value, \code{NULL}, means an initial value is computed in the \code{@initialize} slot of the family function. } \item{tol0}{ Small positive number, the tolerance for testing if lambda is equal to zero. } % \item{expectiles}{ % Experimental; please do not use. % A single logical. If \code{TRUE} then the method is LMS-expectile % regression; \emph{expectiles} are returned rather than quantiles. % The default is LMS quantile regression based on the normal distribution. % } } \details{ Given a value of the covariate, this function applies a Box-Cox transformation to the response to best obtain normality. The parameters chosen to do this are estimated by maximum likelihood or penalized maximum likelihood. 
In more detail, the basic idea behind this method is that, for a fixed value of \eqn{x}, a Box-Cox transformation of the response \eqn{Y} is applied to obtain standard normality. The 3 parameters (\eqn{\lambda}{lambda}, \eqn{\mu}{mu}, \eqn{\sigma}{sigma}, which start with the letters ``L-M-S'' respectively, hence its name) are chosen to maximize a penalized log-likelihood (with \code{\link{vgam}}). Then the appropriate quantiles of the standard normal distribution are back-transformed onto the original scale to get the desired quantiles. The three parameters may vary as a smooth function of \eqn{x}. The Box-Cox power transformation here of the \eqn{Y}, given \eqn{x}, is \deqn{Z = [(Y / \mu(x))^{\lambda(x)} - 1] / ( \sigma(x) \, \lambda(x) )}{ Z = [(Y / mu(x))^{lambda(x)} - 1] / (sigma(x) * lambda(x))} for \eqn{\lambda(x) \neq 0}{lambda(x) != 0}. (The singularity at \eqn{\lambda(x) = 0}{lambda(x) = 0} is handled by a simple function involving a logarithm.) Then \eqn{Z} is assumed to have a standard normal distribution. The parameter \eqn{\sigma(x)}{sigma(x)} must be positive, therefore \pkg{VGAM} chooses \eqn{\eta(x)^T = (\lambda(x), \mu(x), \log(\sigma(x)))}{eta(x)^T = (lambda(x), mu(x), log(sigma(x)))} by default. The parameter \eqn{\mu}{mu} is also positive, but while \eqn{\log(\mu)}{log(mu)} is available, it is not the default because \eqn{\mu}{mu} is more directly interpretable. Given the estimated linear/additive predictors, the \eqn{100\alpha}{100*alpha} percentile can be estimated by inverting the Box-Cox power transformation at the \eqn{100\alpha}{100*alpha} percentile of the standard normal distribution. Of the three functions, it is often a good idea to allow \eqn{\mu(x)}{mu(x)} to be more flexible because the functions \eqn{\lambda(x)}{lambda(x)} and \eqn{\sigma(x)}{sigma(x)} usually vary more smoothly with \eqn{x}. This is somewhat reflected in the default value for the argument \code{zero}, viz. \code{zero = c(1, 3)}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Cole, T. J. and Green, P. J. (1992) Smoothing Reference Centile Curves: The LMS Method and Penalized Likelihood. \emph{Statistics in Medicine}, \bold{11}, 1305--1319. Green, P. J. and Silverman, B. W. (1994) \emph{Nonparametric Regression and Generalized Linear Models: A Roughness Penalty Approach}, London: Chapman & Hall. Yee, T. W. (2004) Quantile regression via vector generalized additive models. \emph{Statistics in Medicine}, \bold{23}, 2295--2315. %Documentation accompanying the \pkg{VGAM} package at %\url{http://www.stat.auckland.ac.nz/~yee} %contains further information and examples. } \author{ Thomas W. Yee } \note{ The response must be positive because the Box-Cox transformation cannot handle negative values. The LMS-Yeo-Johnson-normal method can handle both positive and negative values. % LMS-BCN expectile regression is a \emph{new} methodology proposed % by myself! In general, the lambda and sigma functions should be more smoother than the mean function. Having \code{zero = 1}, \code{zero = 3} or \code{zero = c(1, 3)} is often a good idea. See the example below. % While it is usual to regress the response against a single % covariate, it is possible to add other explanatory variables, % e.g., gender. % See % \url{http://www.stat.auckland.ac.nz/~yee} % for further information and examples about this feature. 
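% A sketch of inverting the Box-Cox transformation above to obtain the
% 100*alpha percentile (nonzero lambda assumed; all values hypothetical):
% lambda <- 0.5; mu <- 25; sigma <- 0.1; alpha <- 0.75
% z <- qnorm(alpha)
% (q.alpha <- mu * (1 + lambda * sigma * z)^(1 / lambda))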
} \section{Warning }{ The computations are not simple, therefore convergence may fail. Set \code{trace = TRUE} to monitor convergence if it isn't set already. Convergence failure will occur if, e.g., the response is bimodal at any particular value of \eqn{x}. In case of convergence failure, try different starting values. Also, the estimate may diverge quickly near the solution, in which case try prematurely stopping the iterations by assigning \code{maxits} to be the iteration number corresponding to the highest likelihood value. One trick is to fit a simple model and use it to provide initial values for a more complex model; see in the examples below. } \seealso{ \code{\link{lms.bcg}}, \code{\link{lms.yjn}}, \code{\link{qtplot.lmscreg}}, \code{\link{deplot.lmscreg}}, \code{\link{cdf.lmscreg}}, \code{\link{alaplace1}}, \code{\link{amlnormal}}, \code{\link{denorm}}, \code{\link{CommonVGAMffArguments}}. % \code{\link{bmi.nz}}, } \examples{ \dontrun{ require("VGAMdata") mysubset <- subset(xs.nz, sex == "M" & ethnicity == "Maori" & study1) mysubset <- transform(mysubset, BMI = weight / height^2) BMIdata <- na.omit(mysubset) BMIdata <- subset(BMIdata, BMI < 80 & age < 65, select = c(age, BMI)) # Delete an outlier summary(BMIdata) fit <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), data = BMIdata) par(mfrow = c(1, 2)) plot(fit, scol = "blue", se = TRUE) # The two centered smooths head(predict(fit)) head(fitted(fit)) head(BMIdata) head(cdf(fit)) # Person 46 is probably overweight, given his age 100 * colMeans(depvar(fit, drop = TRUE) < fitted(fit)) # Empirical proportions # Convergence problems? Try this trick: fit0 is a simpler model used for fit1 fit0 <- vgam(BMI ~ s(age, df = 4), lms.bcn(zero = c(1, 3)), data = BMIdata) fit1 <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), data = BMIdata, etastart = predict(fit0)) } \dontrun{ # Quantile plot par(bty = "l", mar = c(5, 4, 4, 3) + 0.1, xpd = TRUE) qtplot(fit, percentiles = c(5, 50, 90, 99), main = "Quantiles", xlim = c(15, 66), las = 1, ylab = "BMI", lwd = 2, lcol = 4) # Density plot ygrid <- seq(15, 43, len = 100) # BMI ranges par(mfrow = c(1, 1), lwd = 2) (aa <- deplot(fit, x0 = 20, y = ygrid, xlab = "BMI", col = "black", main = "Density functions at Age = 20 (black), 42 (red) and 55 (blue)")) aa <- deplot(fit, x0 = 42, y = ygrid, add = TRUE, llty = 2, col = "red") aa <- deplot(fit, x0 = 55, y = ygrid, add = TRUE, llty = 4, col = "blue", Attach = TRUE) aa@post$deplot # Contains density function values } } \keyword{models} \keyword{regression} % BMIdata <- subset(mysubset, select = c(age, BMI)) % BMIdata <- mysubset[, c("age", "BMI")] VGAM/man/calibrate-methods.Rd0000644000176200001440000000121413135276753015447 0ustar liggesusers\name{calibrate-methods} \docType{methods} \alias{calibrate,rrvglm-method} \alias{calibrate,Coef.qrrvglm-method} \title{ Calibration for Constrained Regression Models } \description{ \code{calibrate} is a generic function applied to RR-VGLMs, QRR-VGLMs and RR-VGAMs etc. } %\usage{ % \S4method{calibrate}{cao,Coef.cao}(object, ...) %} \section{Methods}{ \describe{ \item{object}{ The object from which the calibration is performed. } } } %\note{ % See \code{\link{lvplot}} which is very much related to biplots. % %} \keyword{methods} \keyword{classes} %\keyword{ ~~ other possible keyword(s)} \keyword{models} \keyword{regression} VGAM/man/mix2exp.Rd0000644000176200001440000001044513135276753013462 0ustar liggesusers\name{mix2exp} \alias{mix2exp} %- Also NEED an '\alias' for EACH other topic documented here. 
%- Adapted from mix2poisson.Rd \title{ Mixture of Two Exponential Distributions } \description{ Estimates the three parameters of a mixture of two exponential distributions by maximum likelihood estimation. } \usage{ mix2exp(lphi = "logit", llambda = "loge", iphi = 0.5, il1 = NULL, il2 = NULL, qmu = c(0.8, 0.2), nsimEIM = 100, zero = "phi") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lphi, llambda}{ Link functions for the parameters \eqn{\phi}{phi} and \eqn{\lambda}{lambda}. The latter is the rate parameter and note that the mean of an ordinary exponential distribution is \eqn{1 / \lambda}. See \code{\link{Links}} for more choices. } \item{iphi, il1, il2}{ Initial value for \eqn{\phi}{phi}, and optional initial value for \eqn{\lambda_1}{lambda1} and \eqn{\lambda_2}{lambda2}. The last two have values that must be positive. The default is to compute initial values internally using the argument \code{qmu}. } \item{qmu}{ Vector with two values giving the probabilities relating to the sample quantiles for obtaining initial values for \eqn{\lambda_1}{lambda1} and \eqn{\lambda_2}{lambda2}. The two values are fed in as the \code{probs} argument into \code{\link[stats]{quantile}}. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ The probability density function can be loosely written as \deqn{f(y) = \phi\,Exponential(\lambda_1) + (1-\phi)\,Exponential(\lambda_2)}{% f(y) = phi * Exponential(lambda1) + (1-phi) * Exponential(lambda2)} where \eqn{\phi}{phi} is the probability an observation belongs to the first group, and \eqn{y>0}. The parameter \eqn{\phi}{phi} satisfies \eqn{0 < \phi < 1}{0 < phi < 1}. The mean of \eqn{Y} is \eqn{\phi / \lambda_1 + (1-\phi) / \lambda_2}{phi/lambda1 + (1-phi)/lambda2} and this is returned as the fitted values. By default, the three linear/additive predictors are \eqn{(logit(\phi), \log(\lambda_1), \log(\lambda_2))^T}{(logit(phi), log(lambda1), log(lambda2))^T}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } % \references{ ~put references to the literature/web site here ~ } \section{Warning }{ This \pkg{VGAM} family function requires care for a successful application. In particular, good initial values are required because of the presence of local solutions. Therefore running this function with several different combinations of arguments such as \code{iphi}, \code{il1}, \code{il2}, \code{qmu} is highly recommended. Graphical methods such as \code{\link[graphics]{hist}} can be used as an aid. This \pkg{VGAM} family function is experimental and should be used with care. } \author{ T. W. Yee } \note{ Fitting this model successfully to data can be difficult due to local solutions, uniqueness problems and ill-conditioned data. It pays to fit the model several times with different initial values and check that the best fit looks reasonable. Plotting the results is recommended. This function works better as \eqn{\lambda_1}{lambda1} and \eqn{\lambda_2}{lambda2} become more different. The default control argument \code{trace = TRUE} is to encourage monitoring convergence. } \seealso{ \code{\link[stats:Exponential]{rexp}}, \code{\link{exponential}}, \code{\link{mix2poisson}}. 
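% The mixture density can be written out directly; the helper name
% dmix2exp() below is hypothetical (not part of VGAM). A quick check
% that it integrates to unity:
% dmix2exp <- function(y, phi, l1, l2)
%   phi * dexp(y, l1) + (1 - phi) * dexp(y, l2)
% integrate(dmix2exp, 0, Inf, phi = 0.3, l1 = 1, l2 = 3)$value  # Expect 1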
} \examples{ \dontrun{ lambda1 <- exp(1); lambda2 <- exp(3) (phi <- logit(-1, inverse = TRUE)) mdata <- data.frame(y1 = rexp(nn <- 1000, lambda1)) mdata <- transform(mdata, y2 = rexp(nn, lambda2)) mdata <- transform(mdata, Y = ifelse(runif(nn) < phi, y1, y2)) fit <- vglm(Y ~ 1, mix2exp, data = mdata, trace = TRUE) coef(fit, matrix = TRUE) # Compare the results with the truth round(rbind('Estimated' = Coef(fit), 'Truth' = c(phi, lambda1, lambda2)), digits = 2) with(mdata, hist(Y, prob = TRUE, main = "Orange = estimate, blue = truth")) abline(v = 1 / Coef(fit)[c(2, 3)], lty = 2, col = "orange", lwd = 2) abline(v = 1 / c(lambda1, lambda2), lty = 2, col = "blue", lwd = 2) } } \keyword{models} \keyword{regression} VGAM/man/alaplaceUC.Rd0000644000176200001440000000706113135276753014060 0ustar liggesusers\name{alaplaceUC} \alias{dalap} \alias{palap} \alias{qalap} \alias{ralap} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The Asymmetric Laplace Distribution } \description{ Density, distribution function, quantile function and random generation for the 3-parameter asymmetric Laplace distribution with location parameter \code{location}, scale parameter \code{scale}, and asymmetry parameter \code{kappa}. } \usage{ dalap(x, location = 0, scale = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)), log = FALSE) palap(q, location = 0, scale = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)), lower.tail = TRUE, log.p = FALSE) qalap(p, location = 0, scale = 1, tau = 0.5, kappa = sqrt(tau/(1-tau)), lower.tail = TRUE, log.p = FALSE) ralap(n, location = 0, scale = 1, tau = 0.5, kappa = sqrt(tau/(1-tau))) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{ number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{location}{ the location parameter \eqn{\xi}{xi}. } \item{scale}{ the scale parameter \eqn{\sigma}{sigma}. Must consist of positive values. } \item{tau}{ the quantile parameter \eqn{\tau}{tau}. Must consist of values in \eqn{(0,1)}. This argument is used to specify \code{kappa} and is ignored if \code{kappa} is assigned. } \item{kappa}{ the asymmetry parameter \eqn{\kappa}{kappa}. Must consist of positive values. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \details{ There are many variants of asymmetric Laplace distributions (ALDs) and this one is known as \emph{the} ALD by Kotz et al. (2001). See \code{\link{alaplace3}}, the \pkg{VGAM} family function for estimating the three parameters by maximum likelihood estimation, for formulae and details. } \value{ \code{dalap} gives the density, \code{palap} gives the distribution function, \code{qalap} gives the quantile function, and \code{ralap} generates random deviates. } \references{ Kotz, S., Kozubowski, T. J. and Podgorski, K. (2001) \emph{The Laplace distribution and generalizations: a revisit with applications to communications, economics, engineering, and finance}, Boston: Birkhauser. } \author{ T. W. Yee and Kai Huang } %\note{ % The \pkg{VGAM} family function \code{\link{alaplace3}} % estimates the three parameters by maximum likelihood estimation. %} \seealso{ \code{\link{alaplace3}}. % \code{\link{dloglap}}.
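% A quick check of the tau <--> kappa mapping used in the usage section
% above (the value of tau is arbitrary):
% tau <- 0.25
% kappa <- sqrt(tau / (1 - tau))
% kappa^2 / (1 + kappa^2)  # Recovers tau, by simple algebra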
} \examples{ x <- seq(-5, 5, by = 0.01) loc <- 0; sigma <- 1.5; kappa <- 2 \dontrun{ plot(x, dalap(x, loc, sigma, kappa = kappa), type = "l", col = "blue", main = "Blue is density, orange is cumulative distribution function", ylim = c(0, 1), sub = "Purple are 5, 10, ..., 95 percentiles", las = 1, ylab = "", cex.main = 0.5) abline(h = 0, col = "blue", lty = 2) lines(qalap(seq(0.05, 0.95, by = 0.05), loc, sigma, kappa = kappa), dalap(qalap(seq(0.05, 0.95, by = 0.05), loc, sigma, kappa = kappa), loc, sigma, kappa = kappa), col = "purple", lty = 3, type = "h") lines(x, palap(x, loc, sigma, kappa = kappa), type = "l", col = "orange") abline(h = 0, lty = 2) } pp <- seq(0.05, 0.95, by = 0.05) # Test two functions max(abs(palap(qalap(pp, loc, sigma, kappa = kappa), loc, sigma, kappa = kappa) - pp)) # Should be 0 } \keyword{distribution} VGAM/man/betabinomialff.Rd0000644000176200001440000002040613135276753015026 0ustar liggesusers\name{betabinomialff} \alias{betabinomialff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Beta-binomial Distribution Family Function } \description{ Fits a beta-binomial distribution by maximum likelihood estimation. The two parameters here are the shape parameters of the underlying beta distribution. } \usage{ betabinomialff(lshape1 = "loge", lshape2 = "loge", ishape1 = 1, ishape2 = NULL, imethod = 1, ishrinkage = 0.95, nsimEIM = NULL, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{lshape1, lshape2}{ Link functions for the two (positive) shape parameters of the beta distribution. See \code{\link{Links}} for more choices. } \item{ishape1, ishape2}{ Initial values for the shape parameters. The first must be positive, and is recycled to the necessary length. The second is optional. If a failure to converge occurs, try assigning a different value to \code{ishape1} and/or using \code{ishape2}. } \item{zero}{ Can be an integer specifying which linear/additive predictor is to be modelled as an intercept only. If assigned, the single value should be either \code{1} or \code{2}. The default is to model both shape parameters as functions of the covariates. If a failure to converge occurs, try \code{zero = 2}. See \code{\link{CommonVGAMffArguments}} for more information. } \item{ishrinkage, nsimEIM, imethod}{ See \code{\link{CommonVGAMffArguments}} for more information. The argument \code{ishrinkage} is used only if \code{imethod = 2}. Using the argument \code{nsimEIM} may offer large advantages for large values of \eqn{N} and/or large data sets. } } \details{ There are several parameterizations of the beta-binomial distribution. This family function directly models the two shape parameters of the associated beta distribution rather than the probability of success (however, see \bold{Note} below). The model can be written \eqn{T|P=p \sim Binomial(N,p)}{T|P=p ~ Binomial(N,p)} where \eqn{P} has a beta distribution with shape parameters \eqn{\alpha}{alpha} and \eqn{\beta}{beta}. Here, \eqn{N} is the number of trials (e.g., litter size), \eqn{T=NY} is the number of successes, and \eqn{p} is the probability of a success (e.g., a malformation). That is, \eqn{Y} is the \emph{proportion} of successes. Like \code{\link{binomialff}}, the fitted values are the estimated probability of success (i.e., \eqn{E[Y]} and not \eqn{E[T]}) and the prior weights \eqn{N} are attached separately on the object in a slot.
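% The two-stage model above is easily simulated, which gives a quick
% check that the fitted values target E(Y); a minimal sketch with
% arbitrary shape parameter values:
% set.seed(1); NN <- 1e5; size <- 10; shape1 <- 2; shape2 <- 3
% pvec <- rbeta(NN, shape1, shape2)       # P ~ Beta(alpha, beta)
% Tvec <- rbinom(NN, size, prob = pvec)   # T | P = p ~ Binomial(N, p)
% c(mean(Tvec / size), shape1 / (shape1 + shape2))  # Both approximate E(Y)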
The probability function is \deqn{P(T=t) = {N \choose t} \frac{B(\alpha+t, \beta+N-t)} {B(\alpha, \beta)}}{% P(T=t) = choose(N,t) B(alpha+t, beta+N-t) / B(alpha, beta)} where \eqn{t=0,1,\ldots,N}, and \eqn{B} is the beta function with shape parameters \eqn{\alpha}{alpha} and \eqn{\beta}{beta}. Recall \eqn{Y = T/N} is the real response being modelled. The default model is \eqn{\eta_1 = \log(\alpha)}{eta1 = log(alpha)} and \eqn{\eta_2 = \log(\beta)}{eta2 = log(beta)} because both parameters are positive. The mean (of \eqn{Y}) is \eqn{p = \mu = \alpha / (\alpha + \beta)}{p = mu = alpha / (alpha + beta)} and the variance (of \eqn{Y}) is \eqn{\mu(1-\mu)(1+(N-1)\rho)/N}{mu(1-mu)(1+(N-1)rho)/N}. Here, the correlation \eqn{\rho}{rho} is given by \eqn{1/(1 + \alpha + \beta)}{1/(1 + alpha + beta)} and is the correlation between the \eqn{N} individuals within a litter. A \emph{litter effect} is typically reflected by a positive value of \eqn{\rho}{rho}. It is known as the \emph{over-dispersion parameter}. This family function uses Fisher scoring. The two diagonal elements of the second-order expected derivatives with respect to \eqn{\alpha}{alpha} and \eqn{\beta}{beta} are computed numerically, which may fail for large \eqn{\alpha}{alpha}, \eqn{\beta}{beta}, \eqn{N} or else take a long time. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}. Suppose \code{fit} is a fitted beta-binomial model. Then \code{fit@y} (better: \code{depvar(fit)}) contains the sample proportions \eqn{y}, \code{fitted(fit)} returns estimates of \eqn{E(Y)}, and \code{weights(fit, type = "prior")} returns the number of trials \eqn{N}. } \references{ Moore, D. F. and Tsiatis, A. (1991) Robust estimation of the variance in moment methods for extra-binomial and extra-Poisson variation. \emph{Biometrics}, \bold{47}, 383--401. Prentice, R. L. (1986) Binary regression using an extended beta-binomial distribution, with discussion of correlation induced by covariate measurement errors. \emph{Journal of the American Statistical Association}, \bold{81}, 321--327. } \author{ T. W. Yee } \note{ This function processes the input in the same way as \code{\link{binomialff}}. But it does not handle the case \eqn{N=1} very well because there are two parameters to estimate, not one, for each row of the input. Cases where \eqn{N=1} can be omitted via the \code{subset} argument of \code{\link{vglm}}. Although the two linear/additive predictors given above are in terms of \eqn{\alpha}{alpha} and \eqn{\beta}{beta}, basic algebra shows that the default amounts to fitting a logit link to the probability of success; subtracting the second linear/additive predictor from the first gives that logistic regression linear/additive predictor. That is, \eqn{logit(p) = \eta_1 - \eta_2}{logit(p) = eta1 - eta2}. This is illustrated in one of the examples below. The \emph{extended} beta-binomial distribution of Prentice (1986) is currently not implemented in the \pkg{VGAM} package as it has range-restrictions for the correlation parameter that are currently too difficult to handle in this package. } \section{Warning }{ This family function is prone to numerical difficulties due to the expected information matrices not being positive-definite, or being ill-conditioned, over some regions of the parameter space. If problems occur, try setting \code{ishape1} to be some other positive value, using \code{ishape2} and/or setting \code{zero = 2}.
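% The mean and variance formulas in the details section can be verified
% by simulation; a minimal sketch with arbitrary parameter values:
% N <- 10; s1 <- 2; s2 <- 3
% yy <- rbetabinom.ab(1e5, size = N, shape1 = s1, shape2 = s2) / N
% mu <- s1 / (s1 + s2); rho <- 1 / (1 + s1 + s2)
% c(mean(yy), mu)  # Should agree closely
% c(var(yy), mu * (1 - mu) * (1 + (N - 1) * rho) / N)  # Should agree closely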
This family function may be renamed in the future. See the warnings in \code{\link{betabinomial}}. } \seealso{ \code{\link{betabinomial}}, \code{\link{Betabinom}}, \code{\link{binomialff}}, \code{\link{betaff}}, \code{\link{dirmultinomial}}, \code{\link{lirat}}, \code{\link{simulate.vlm}}. } \examples{ # Example 1 N <- 10; s1 <- exp(1); s2 <- exp(2) y <- rbetabinom.ab(n = 100, size = N, shape1 = s1, shape2 = s2) fit <- vglm(cbind(y, N-y) ~ 1, betabinomialff, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) head(fit@misc$rho) # The correlation parameter head(cbind(depvar(fit), weights(fit, type = "prior"))) # Example 2 fit <- vglm(cbind(R, N-R) ~ 1, betabinomialff, data = lirat, trace = TRUE, subset = N > 1) coef(fit, matrix = TRUE) Coef(fit) fit@misc$rho # The correlation parameter t(fitted(fit)) t(depvar(fit)) t(weights(fit, type = "prior")) # A "loge" link for the 2 shape parameters is a logistic regression: all.equal(c(fitted(fit)), as.vector(logit(predict(fit)[, 1] - predict(fit)[, 2], inverse = TRUE))) # Example 3, which is more complicated lirat <- transform(lirat, fgrp = factor(grp)) summary(lirat) # Only 5 litters in group 3 fit2 <- vglm(cbind(R, N-R) ~ fgrp + hb, betabinomialff(zero = 2), data = lirat, trace = TRUE, subset = N > 1) coef(fit2, matrix = TRUE) coef(fit2, matrix = TRUE)[, 1] - coef(fit2, matrix = TRUE)[, 2] # logit(p) \dontrun{ with(lirat, plot(hb[N > 1], fit2@misc$rho, xlab = "Hemoglobin", ylab = "Estimated rho", pch = as.character(grp[N > 1]), col = grp[N > 1])) } \dontrun{ # cf. Figure 3 of Moore and Tsiatis (1991) with(lirat, plot(hb, R / N, pch = as.character(grp), col = grp, las = 1, xlab = "Hemoglobin level", ylab = "Proportion Dead", main = "Fitted values (lines)")) smalldf <- with(lirat, lirat[N > 1, ]) for (gp in 1:4) { xx <- with(smalldf, hb[grp == gp]) yy <- with(smalldf, fitted(fit2)[grp == gp]) ooo <- order(xx) lines(xx[ooo], yy[ooo], col = gp) } } } \keyword{models} \keyword{regression} VGAM/man/loglinb2.Rd0000644000176200001440000000722113135276753013574 0ustar liggesusers\name{loglinb2} \alias{loglinb2} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Loglinear Model for Two Binary Responses } \description{ Fits a loglinear model to two binary responses. } \usage{ loglinb2(exchangeable = FALSE, zero = "u12") } %loglinb2(exchangeable = FALSE, zero = 3) %- maybe also 'usage' for other objects documented here. \arguments{ \item{exchangeable}{ Logical. If \code{TRUE}, the two marginal probabilities are constrained to be equal. Should be set \code{TRUE} for ears, eyes, etc. data. } \item{zero}{ Which linear/additive predictors are modelled as intercept-only? A \code{NULL} means none of them. See \code{\link{CommonVGAMffArguments}} for more information. } } \details{ The model is \deqn{P(Y_1=y_1,Y_2=y_2) = \exp(u_0+u_1 y_1+u_2 y_2+u_{12} y_1 y_2)}{% P(Y1=y1,Y2=y2) = exp(u0 + u1*y1 + u2*y2 + u12*y1*y2)} where \eqn{y_1}{y1} and \eqn{y_2}{y2} are 0 or 1, and the parameters are \eqn{u_1}{u1}, \eqn{u_2}{u2}, \eqn{u_{12}}{u12}. The normalizing parameter \eqn{u_0}{u0} can be expressed as a function of the other parameters, viz., \deqn{u_0 = -\log[1 + \exp(u_1) + \exp(u_2) + \exp(u_1 + u_2 + u_{12})].}{% u0 = -log[1 + exp(u1) + exp(u2) + exp(u1 + u2 + u12)].} The linear/additive predictors are \eqn{(\eta_1,\eta_2,\eta_3)^T = (u_1,u_2,u_{12})^T}{(eta1,eta2,eta3) = (u1,u2,u12)}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). 
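% The expression for u0 above ensures that the four joint probabilities
% sum to unity; a quick numerical check with arbitrary parameter values:
% u1 <- 0.5; u2 <- -0.3; u12 <- 0.2
% u0 <- -log(1 + exp(u1) + exp(u2) + exp(u1 + u2 + u12))
% yy <- expand.grid(y1 = 0:1, y2 = 0:1)
% sum(exp(u0 + u1 * yy$y1 + u2 * yy$y2 + u12 * yy$y1 * yy$y2))  # Equals 1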
The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. When fitted, the \code{fitted.values} slot of the object contains the four joint probabilities, labelled as \eqn{(Y_1,Y_2)}{(Y1,Y2)} = (0,0), (0,1), (1,0), (1,1), respectively. } \references{ Yee, T. W. and Wild, C. J. (2001) Discussion to: ``Smoothing spline ANOVA for multivariate Bernoulli observations, with application to ophthalmology data (with discussion)'' by Gao, F., Wahba, G., Klein, R., Klein, B. \emph{Journal of the American Statistical Association}, \bold{96}, 127--160. McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. } \author{ Thomas W. Yee } \note{ The response must be a two-column matrix of ones and zeros only. This is more restrictive than \code{\link{binom2.or}}, which can handle more types of input formats. Note that each of the 4 combinations of the multivariate response needs to appear in the data set. } \seealso{ \code{\link{binom2.or}}, \code{\link{binom2.rho}}, \code{\link{loglinb3}}. } \examples{ coalminers <- transform(coalminers, Age = (age - 42) / 5) # Get the n x 4 matrix of counts fit0 <- vglm(cbind(nBnW,nBW,BnW,BW) ~ Age, binom2.or, data = coalminers) counts <- round(c(weights(fit0, type = "prior")) * depvar(fit0)) # Create a n x 2 matrix response for loglinb2() # bwmat <- matrix(c(0,0, 0,1, 1,0, 1,1), 4, 2, byrow = TRUE) bwmat <- cbind(bln = c(0,0,1,1), wheeze = c(0,1,0,1)) matof1 <- matrix(1, nrow(counts), 1) newminers <- data.frame(bln = kronecker(matof1, bwmat[, 1]), wheeze = kronecker(matof1, bwmat[, 2]), wt = c(t(counts)), Age = with(coalminers, rep(age, rep(4, length(age))))) newminers <- newminers[with(newminers, wt) > 0,] fit <- vglm(cbind(bln,wheeze) ~ Age, loglinb2(zero = NULL), weight = wt, data = newminers) coef(fit, matrix = TRUE) # Same! (at least for the log odds-ratio) summary(fit) # Try to reconcile this with McCullagh and Nelder (1989), p.234 (0.166-0.131) / 0.027458 # 1.275 is approximately 1.25 } \keyword{models} \keyword{regression} VGAM/man/garma.Rd0000644000176200001440000001500213135276753013147 0ustar liggesusers\name{garma} \alias{garma} %- Also NEED an '\alias' for EACH other topic documented here. \title{GARMA (Generalized Autoregressive Moving-Average) Models} \description{ Fits GARMA models to time series data. } \usage{ garma(link = "identitylink", p.ar.lag = 1, q.ma.lag = 0, coefstart = NULL, step = 1) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to the mean response. The default is suitable for continuous responses. The link \code{\link{loge}} should be chosen if the data are counts. The link \code{\link{reciprocal}} can be chosen if the data are counts and the variance assumed for this is \eqn{\mu^2}{mu^2}. The links \code{\link{logit}}, \code{\link{probit}}, \code{\link{cloglog}}, and \code{\link{cauchit}} are supported and suitable for binary responses. Note that when the log or logit link is chosen, zero values can be replaced by \code{bvalue}. See \code{\link{loge}} and \code{\link{logit}} etc. for specific information about each link function. } \item{p.ar.lag}{ A positive integer, the lag for the autoregressive component. Called \eqn{p} below. } \item{q.ma.lag}{ A non-negative integer, the lag for the moving-average component. Called \eqn{q} below. } \item{coefstart}{ Starting values for the coefficients. Assigning this argument is highly recommended.
For technical reasons, the argument \code{coefstart} in \code{\link{vglm}} cannot be used. } \item{step}{ Numeric. Step length, e.g., \code{0.5} means half-stepsizing. } % \item{constant}{ % Used when the log or logit link is chosen. % For log, zero values are replaced by \code{constant}. % For logit, zero values are replaced by \code{constant} and % unit values replaced by \code{1-constant}. % } } \details{ This function draws heavily on Benjamin \emph{et al.} (1998). See also Benjamin \emph{et al.} (2003). GARMA models extend the ARMA time series model to generalized responses in the exponential family, e.g., Poisson counts, binary responses. Currently, this function is rudimentary and can handle only certain continuous, count and binary responses. The user must choose an appropriate link for the \code{link} argument. The GARMA(\eqn{p, q}) model is defined by firstly having a response belonging to the exponential family \deqn{f(y_t|D_t) = \exp \left\{ \frac{y_t \theta_t - b(\theta_t)}{\phi / A_t} + c(y_t, \phi / A_t) \right\}}{% f(y_t|D_t) = \exp [ (y_t theta_t - b(theta_t)) / (phi / A_t) + c(y_t, phi / A_t) ] } where \eqn{\theta_t}{theta_t} and \eqn{\phi}{phi} are the canonical and scale parameters respectively, and \eqn{A_t} are known prior weights. The mean \eqn{\mu_t=E(Y_t|D_t)=b'(\theta_t)}{mu_t=E(Y_t|D_t)=b'(theta_t)} is related to the linear predictor \eqn{\eta_t}{eta_t} by the link function \eqn{g}. Here, \eqn{D_t=\{x_t,\ldots,x_1,y_{t-1},\ldots,y_1,\mu_{t-1},\ldots,\mu_1\}}{ D_t={x_t,\ldots,x_1,y_(t-1),\ldots,y_1,mu_(t-1),\ldots,mu_1}} is the previous information set. Secondly, the GARMA(\eqn{p, q}) model is defined by \deqn{g(\mu_t) = \eta_t = x_t^T \beta + \sum_{k=1}^p \phi_k (g(y_{t-k}) - x_{t-k}^T \beta) + \sum_{k=1}^q \theta_k (g(y_{t-k}) - \eta_{t-k}).}{% g(mu_t) = eta_t = x_t^T beta + \sum_{k=1}^p phi_k (g(y_{t-k}) - x_{t-k}^T beta) + \sum_{k=1}^q theta_k (g(y_{t-k}) - eta_{t-k}).} Parameter vectors \eqn{\beta}{beta}, \eqn{\phi}{phi} and \eqn{\theta}{theta} are estimated by maximum likelihood. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}. } \references{ Benjamin, M. A., Rigby, R. A. and Stasinopoulos, M. D. (1998) Fitting Non-Gaussian Time Series Models. Pages 191--196 in: \emph{Proceedings in Computational Statistics COMPSTAT 1998} by Payne, R. and P. J. Green. Physica-Verlag. Benjamin, M. A., Rigby, R. A. and Stasinopoulos, M. D. (2003) Generalized Autoregressive Moving Average Models. \emph{Journal of the American Statistical Association}, \bold{98}: 214--223. Zeger, S. L. and Qaqish, B. (1988) Markov regression models for time series: a quasi-likelihood approach. \emph{Biometrics}, \bold{44}: 1019--1031. } \author{ T. W. Yee } \note{ This function is unpolished and requires \emph{lots} of improvements. In particular, initialization is \emph{very poor}. Results appear \emph{very} sensitive to the quality of initial values. A limited amount of experience has shown that half-stepsizing is often needed for convergence; therefore choosing \code{crit = "coef"} is not recommended. Overdispersion is not handled. For binomial responses it is currently best to input a vector of 1s and 0s rather than the \code{cbind(successes, failures)} because the initialize slot is rudimentary. } \section{Warning}{ This \pkg{VGAM} family function is 'non-standard' in that the model does need some coercing to get it into the VGLM framework. Special code is required to get it running.
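% For intuition: with the identity link, p = 1 and q = 0, the GARMA
% recursion in the details section reduces to a Gaussian AR(1) model.
% A minimal simulation sketch of that special case (all values are
% arbitrary; this does not call garma() itself):
% beta0 <- 10; phi1 <- 0.5; nn <- 200
% y <- numeric(nn); y[1] <- beta0
% for (tt in 2:nn)
%   y[tt] <- beta0 + phi1 * (y[tt - 1] - beta0) + rnorm(1)
% acf(y)  # Autocorrelations decay roughly as phi1^k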
A consequence is that some methods functions may give wrong results when applied to the fitted object. } %\seealso{ % The site \url{http://www.stat.auckland.ac.nz/~yee} contains % more documentation about this family function. % \code{\link{identity}}, % \code{\link{logit}}. %} \examples{ gdata <- data.frame(interspike = c(68, 41, 82, 66, 101, 66, 57, 41, 27, 78, 59, 73, 6, 44, 72, 66, 59, 60, 39, 52, 50, 29, 30, 56, 76, 55, 73, 104, 104, 52, 25, 33, 20, 60, 47, 6, 47, 22, 35, 30, 29, 58, 24, 34, 36, 34, 6, 19, 28, 16, 36, 33, 12, 26, 36, 39, 24, 14, 28, 13, 2, 30, 18, 17, 28, 9, 28, 20, 17, 12, 19, 18, 14, 23, 18, 22, 18, 19, 26, 27, 23, 24, 35, 22, 29, 28, 17, 30, 34, 17, 20, 49, 29, 35, 49, 25, 55, 42, 29, 16)) # See Zeger and Qaqish (1988) gdata <- transform(gdata, spikenum = seq(interspike)) bvalue <- 0.1 # .Machine$double.xmin # Boundary value fit <- vglm(interspike ~ 1, trace = TRUE, data = gdata, garma(loge(bvalue = bvalue), p = 2, coefstart = c(4, 0.3, 0.4))) summary(fit) coef(fit, matrix = TRUE) Coef(fit) # A bug here \dontrun{ with(gdata, plot(interspike, ylim = c(0, 120), las = 1, xlab = "Spike Number", ylab = "Inter-Spike Time (ms)", col = "blue")) with(gdata, lines(spikenum[-(1:fit@misc$plag)], fitted(fit), col = "orange")) abline(h = mean(with(gdata, interspike)), lty = "dashed", col = "gray") } } \keyword{models} \keyword{regression} VGAM/man/amlbinomial.Rd0000644000176200001440000001133113135276753014345 0ustar liggesusers\name{amlbinomial} \alias{amlbinomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Binomial Logistic Regression by Asymmetric Maximum Likelihood Estimation } \description{ Binomial quantile regression estimated by maximizing an asymmetric likelihood function. } \usage{ amlbinomial(w.aml = 1, parallel = FALSE, digw = 4, link = "logit") } %- maybe also 'usage' for other objects documented here. \arguments{ \item{w.aml}{ Numeric, a vector of positive constants controlling the percentiles. The larger the value the larger the fitted percentile value (the proportion of points below the ``w-regression plane''). The default value of unity results in the ordinary maximum likelihood (MLE) solution. } \item{parallel}{ If \code{w.aml} has more than one value then this argument allows the quantile curves to differ by the same amount as a function of the covariates. Setting this to be \code{TRUE} should force the quantile curves to not cross (although they may not cross anyway). See \code{\link{CommonVGAMffArguments}} for more information. } \item{digw }{ Passed into \code{\link[base]{Round}} as the \code{digits} argument for the \code{w.aml} values; used cosmetically for labelling. } \item{link}{ See \code{\link{binomialff}}. } } \details{ The general methodology behind this \pkg{VGAM} family function is given in Efron (1992) and full details can be obtained there. This model is essentially a logistic regression model (see \code{\link{binomialff}}) but the usual deviance is replaced by an asymmetric squared error loss function; it is multiplied by \eqn{w.aml} for positive residuals. The solution is the set of regression coefficients that minimize the sum of these deviance-type values over the data set, weighted by the \code{weights} argument (so that it can contain frequencies). Newton-Raphson estimation is used here. % Equation numbers below refer to that article. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. 
} \references{ Efron, B. (1992) Poisson overdispersion estimates based on the method of asymmetric maximum likelihood. \emph{Journal of the American Statistical Association}, \bold{87}, 98--107. } \author{ Thomas W. Yee } \note{ On fitting, the \code{extra} slot has list components \code{"w.aml"} and \code{"percentile"}. The latter is the percent of observations below the ``w-regression plane'', which is the fitted values. Also, the individual deviance values corresponding to each element of the argument \code{w.aml} is stored in the \code{extra} slot. For \code{amlbinomial} objects, methods functions for the generic functions \code{qtplot} and \code{cdf} have not been written yet. See \code{\link{amlpoisson}} about comments on the jargon, e.g., \emph{expectiles} etc. In this documentation the word \emph{quantile} can often be interchangeably replaced by \emph{expectile} (things are informal here). } \section{Warning }{ If \code{w.aml} has more than one value then the value returned by \code{deviance} is the sum of all the (weighted) deviances taken over all the \code{w.aml} values. See Equation (1.6) of Efron (1992). } \seealso{ \code{\link{amlpoisson}}, \code{\link{amlexponential}}, \code{\link{amlnormal}}, \code{\link{alaplace1}}, \code{\link{denorm}}. } \examples{ # Example: binomial data with lots of trials per observation set.seed(1234) sizevec <- rep(100, length = (nn <- 200)) mydat <- data.frame(x = sort(runif(nn))) mydat <- transform(mydat, prob = logit(-0 + 2.5*x + x^2, inverse = TRUE)) mydat <- transform(mydat, y = rbinom(nn, size = sizevec, prob = prob)) (fit <- vgam(cbind(y, sizevec - y) ~ s(x, df = 3), amlbinomial(w = c(0.01, 0.2, 1, 5, 60)), mydat, trace = TRUE)) fit@extra \dontrun{ par(mfrow = c(1,2)) # Quantile plot with(mydat, plot(x, jitter(y), col = "blue", las = 1, main = paste(paste(round(fit@extra$percentile, digits = 1), collapse = ", "), "percentile-expectile curves"))) with(mydat, matlines(x, 100 * fitted(fit), lwd = 2, col = "blue", lty = 1)) # Compare the fitted expectiles with the quantiles with(mydat, plot(x, jitter(y), col = "blue", las = 1, main = paste(paste(round(fit@extra$percentile, digits = 1), collapse = ", "), "percentile curves are red"))) with(mydat, matlines(x, 100 * fitted(fit), lwd = 2, col = "blue", lty = 1)) for (ii in fit@extra$percentile) with(mydat, matlines(x, 100 * qbinom(p = ii/100, size = sizevec, prob = prob) / sizevec, col = "red", lwd = 2, lty = 1)) } } \keyword{models} \keyword{regression} VGAM/man/makeham.Rd0000644000176200001440000001100713135276753013464 0ustar liggesusers\name{makeham} \alias{makeham} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Makeham Distribution Family Function } \description{ Maximum likelihood estimation of the 3-parameter Makeham distribution. } \usage{ makeham(lscale = "loge", lshape = "loge", lepsilon = "loge", iscale = NULL, ishape = NULL, iepsilon = NULL, gscale = exp(-5:5),gshape = exp(-5:5), gepsilon = exp(-4:1), nsimEIM = 500, oim.mean = TRUE, zero = NULL, nowarning = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{nowarning}{ Logical. Suppress a warning? Ignored for \pkg{VGAM} 0.9-7 and higher. } \item{lshape, lscale, lepsilon}{ Parameter link functions applied to the shape parameter \code{shape}, scale parameter \code{scale}, and other parameter \code{epsilon}. All parameters are treated as positive here (cf. \code{\link{dmakeham}} allows \code{epsilon = 0}, etc.). See \code{\link{Links}} for more choices. 
} % \item{eshape, escale, eepsilon}{ % List. Extra argument for each of the links. % See \code{earg} in \code{\link{Links}} for general information. % } \item{ishape, iscale, iepsilon}{ Optional initial values. A \code{NULL} means a value is computed internally. A value must be given for \code{iepsilon} currently, and this is a sensitive parameter! } \item{gshape, gscale, gepsilon}{ See \code{\link{CommonVGAMffArguments}}. } \item{nsimEIM, zero}{ See \code{\link{CommonVGAMffArguments}}. Argument \code{probs.y} is used only when \code{imethod = 2}. } \item{oim.mean}{ To be currently ignored. } } \details{ The Makeham distribution, which adds another parameter to the Gompertz distribution, has cumulative distribution function \deqn{F(y; \alpha, \beta, \varepsilon) = 1 - \exp \left\{ -y \varepsilon + \frac {\alpha}{\beta} \left[ 1 - e^{\beta y} \right] \right\} }{% F(y; alpha, beta, epsilon) = 1 - exp(-y * epsilon + (alpha / beta) * [1 - e^(beta * y)]) } which leads to a probability density function \deqn{f(y; \alpha, \beta, \varepsilon) = \left[ \varepsilon + \alpha e^{\beta y} \right] \; \exp \left\{ -y \varepsilon + \frac {\alpha}{\beta} \left[ 1 - e^{\beta y} \right] \right\}, }{% f(y; alpha, beta, epsilon) = (epsilon + alpha * e^(beta y) ) * exp(-y * epsilon + (alpha / beta) * [1 - e^(beta * y)]) } for \eqn{\alpha > 0}{alpha > 0}, \eqn{\beta > 0}{beta > 0}, \eqn{\varepsilon \geq 0}{epsilon >= 0}, \eqn{y > 0}. Here, \eqn{\beta}{beta} is called the scale parameter \code{scale}, and \eqn{\alpha}{alpha} is called a shape parameter. The moments for this distribution do not appear to be available in closed form. Simulated Fisher scoring is used and multiple responses are handled. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } %\references{ % %} \author{ T. W. Yee } \section{Warning }{ A lot of care is needed because this is a rather difficult distribution for parameter estimation, especially when the shape parameter is large relative to the scale parameter. If the self-starting initial values fail then try experimenting with the initial value arguments, especially \code{iepsilon}. Successful convergence depends on having very good initial values. More improvements could be made here. Also, monitor convergence by setting \code{trace = TRUE}. A trick is to fit a \code{\link{gompertz}} distribution and use it for initial values; see below. However, this family function is currently numerically fraught. } \seealso{ \code{\link{dmakeham}}, \code{\link{gompertz}}, \code{\link{simulate.vlm}}. 
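% A quick numerical check that the Makeham density integrates to unity
% (parameter values arbitrary; the argument names follow those used for
% rmakeham() in the examples below):
% integrate(dmakeham, 0, Inf, shape = 0.5, scale = 1, eps = 0.2)  # Approx. 1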
} \examples{ \dontrun{ set.seed(123) mdata <- data.frame(x2 = runif(nn <- 1000)) mdata <- transform(mdata, eta1 = -1, ceta1 = 1, eeta1 = -2) mdata <- transform(mdata, shape1 = exp(eta1), scale1 = exp(ceta1), epsil1 = exp(eeta1)) mdata <- transform(mdata, y1 = rmakeham(nn, shape = shape1, scale = scale1, eps = epsil1)) # A trick is to fit a Gompertz distribution first fit0 <- vglm(y1 ~ 1, gompertz, data = mdata, trace = TRUE) fit1 <- vglm(y1 ~ 1, makeham, data = mdata, etastart = cbind(predict(fit0), log(0.1)), trace = TRUE) coef(fit1, matrix = TRUE) summary(fit1) } } \keyword{models} \keyword{regression} %# fit1 <- vglm(y1 ~ 1, makeham, data = mdata, trace = TRUE) %# fit2 <- vglm(y1 ~ 1, makeham(imeth = 2), data = mdata, trace = TRUE) VGAM/man/biplackettcop.Rd0000644000176200001440000000602613135276753014712 0ustar liggesusers\name{biplackettcop} \alias{biplackettcop} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Plackett's Bivariate Copula Family Function } \description{ Estimate the association parameter of Plackett's bivariate distribution (copula) by maximum likelihood estimation. } \usage{ biplackettcop(link = "loge", ioratio = NULL, imethod = 1, nsimEIM = 200) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link}{ Link function applied to the (positive) odds ratio \eqn{\psi}{psi}. See \code{\link{Links}} for more choices and information. } \item{ioratio}{ Numeric. Optional initial value for \eqn{\psi}{psi}. If a convergence failure occurs, try assigning some value to this argument, or a different value if one has already been tried. } \item{imethod, nsimEIM}{ See \code{\link{CommonVGAMffArguments}}. } } \details{ The defining equation is \deqn{\psi = H \times (1-y_1-y_2+H) / ((y_1-H) \times (y_2-H))}{% psi = H*(1-y1-y2+H) / ((y1-H)*(y2-H))} where \eqn{P(Y_1 \leq y_1, Y_2 \leq y_2) = H_{\psi}(y_1,y_2)}{P(Y1 <= y1, Y2 <= y2)= H(y1,y2)} is the cumulative distribution function. The density function is \eqn{h_{\psi}(y_1,y_2) =}{h(y1,y2) =} \deqn{\psi [1 + (\psi-1)(y_1 + y_2 - 2 y_1 y_2) ] / \left( [1 + (\psi-1)(y_1 + y_2) ]^2 - 4 \psi (\psi-1) y_1 y_2 \right)^{3/2}}{% psi*[1 + (psi-1)*(y1 + y2 - 2*y1*y2) ] / ( [1 + (psi-1)*(y1 + y2)]^2 - 4*psi*(psi-1)*y1*y2)^(3/2)} for \eqn{\psi > 0}{psi > 0}. Some writers call \eqn{\psi}{psi} the \emph{cross product ratio} but it is called the \emph{odds ratio} here. The support of the function is the unit square. The marginal distributions here are the standard uniform, although the copula is commonly generalized to other marginal distributions. If \eqn{\psi = 1}{psi=1} then \eqn{H_{\psi}(y_1,y_2) = y_1 y_2}{H(y1,y2) = y1*y2}, i.e., independence. As the odds ratio tends to infinity one has \eqn{y_1=y_2}{y1=y2}. As the odds ratio tends to 0 one has \eqn{y_2=1-y_1}{y2=1-y1}. Fisher scoring is implemented using \code{\link{rbiplackcop}}. Convergence is often quite slow. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Plackett, R. L. (1965) A class of bivariate distributions. \emph{Journal of the American Statistical Association}, \bold{60}, 516--522. } \author{ T. W. Yee } \note{ The response must be a two-column matrix. Currently, the fitted value is a 2-column matrix with 0.5 values because the marginal distributions correspond to a standard uniform distribution. } \seealso{ \code{\link{rbiplackcop}}, \code{\link{bifrankcop}}.
} \examples{ \dontrun{ ymat <- rbiplackcop(n = 2000, oratio = exp(2)) plot(ymat, col = "blue") fit <- vglm(ymat ~ 1, fam = biplackettcop, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) vcov(fit) head(fitted(fit)) summary(fit) } } \keyword{models} \keyword{regression} VGAM/man/margeff.Rd0000644000176200001440000001246413135276753013500 0ustar liggesusers\name{margeff} \alias{margeff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Marginal effects for several categorical response models } \description{ Marginal effects for the multinomial logit model, cumulative logit/probit/... models, continuation ratio models, stopping ratio models, and adjacent categories models: the derivative of the fitted probabilities with respect to each explanatory variable. } \usage{ margeff(object, subset = NULL, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ A \code{\link{vglm}} object, with one of the following family functions: \code{\link{multinomial}}, \code{\link{cumulative}}, \code{\link{cratio}}, \code{\link{sratio}} or \code{\link{acat}}. } \item{subset}{ Numerical or logical vector, denoting the required observation(s). Recycling is used if possible. The default means all observations. } \item{\dots}{ further arguments passed into the other methods functions. % e.g., \code{subset}. } } \details{ Computes the derivative of the fitted probabilities of the categorical response model with respect to each explanatory variable. Formerly one big function, this function now uses S4 dispatch to break up the computations. % 20151215 The function \code{margeff()} is \emph{not} generic. However, it calls the function \code{margeffS4VGAM()} which \emph{is}. This is based on the class of the \code{VGAMff} argument, and it uses the S4 function \code{\link[methods]{setMethod}} to correctly dispatch to the required methods function. The inheritance is given by the \code{vfamily} slot of the \pkg{VGAM} family function. } \value{ A \eqn{p} by \eqn{M+1} by \eqn{n} array, where \eqn{p} is the number of explanatory variables and the (hopefully) nominal response has \eqn{M+1} levels, and there are \eqn{n} observations. In general, if \code{is.numeric(subset)} and \code{length(subset) == 1} then a \eqn{p} by \eqn{M+1} matrix is returned. } % \references{ ~put references to the literature/web site here ~ } \author{ T. W. Yee, with some help and motivation from Stasha Rmandic. } \section{Warning }{ Care is needed in interpretation, e.g., the change is not universally accurate for a unit change in each explanatory variable because eventually the `new' probabilities may become negative or greater than unity. Also, the `new' probabilities will not sum to one. This function is not applicable to models with data-dependent terms such as \code{\link{bs}} and \code{\link{poly}}. Also the function should not be applied to models with any terms that have generated more than one column of the LM model matrix, such as \code{\link{bs}} and \code{\link{poly}}. For such models, try using numerical methods such as finite differences. The \code{formula} in \code{object} should comprise simple terms of the form \code{ ~ x2 + x3 + x4}, etc. Some numerical problems may occur if the fitted values are close to 0 or 1 for the \code{\link{cratio}} and \code{\link{sratio}} models. Models with offsets may result in an incorrect answer. } \note{ For \code{\link{multinomial}} this function should handle any value of \code{refLevel} and also any constraint matrices.
However, it does not currently handle the \code{xij} or \code{form2} arguments, nor \code{\link{vgam}} objects. % 20151211; this is now false, so can delete this: % For \code{\link{multinomial}}, % if \code{subset} is numeric then the function uses a \code{for} loop over % the observations (slow). % The default computations use vectorization; this uses more memory than a % \code{for} loop but is faster. Some other limitations are imposed, e.g., for \code{\link{acat}} models only a \code{\link{loge}} link is allowed. } \seealso{ \code{\link{multinomial}}, \code{\link{cumulative}}, \code{\link{propodds}}, \code{\link{acat}}, \code{\link{cratio}}, \code{\link{sratio}}, \code{\link{vglm}}. } \examples{ # Not a good example for multinomial() because the response is ordinal!! ii <- 3; hh <- 1/100 pneumo <- transform(pneumo, let = log(exposure.time)) fit <- vglm(cbind(normal, mild, severe) ~ let, multinomial, data = pneumo) fit <- vglm(cbind(normal, mild, severe) ~ let, cumulative(reverse = TRUE, parallel = TRUE), data = pneumo) fitted(fit)[ii, ] mynewdata <- with(pneumo, data.frame(let = let[ii] + hh)) (newp <- predict(fit, newdata = mynewdata, type = "response")) # Compare the difference. Should be the same as hh --> 0. round(digits = 3, (newp-fitted(fit)[ii, ])/hh) # Finite-difference approxn round(digits = 3, margeff(fit, subset = ii)["let",]) # Other examples round(digits = 3, margeff(fit)) round(digits = 3, margeff(fit, subset = 2)["let",]) round(digits = 3, margeff(fit, subset = c(FALSE, TRUE))["let",,]) # recycling round(digits = 3, margeff(fit, subset = c(2, 4, 6, 8))["let",,]) } % Add one or more standard keywords, see file 'KEYWORDS' in the % R documentation directory. \keyword{models} \keyword{regression} % set \code{i=1:n}. % hh * margeff(fit, i=ii)["let",] % cumulative(reverse=TRUE, parallel=TRUE), % cumulative(reverse=FALSE, parallel=TRUE), % cumulative(reverse=TRUE, parallel=FALSE), % cumulative(reverse=FALSE, parallel=FALSE), VGAM/man/erf.Rd0000644000176200001440000000344413135276753012643 0ustar liggesusers\name{erf} \alias{erf} \alias{erfc} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Error Function, and variants } \description{ Computes the error function, or its inverse, based on the normal distribution. Also computes the complement of the error function, or its inverse. } \usage{ erf(x, inverse = FALSE) erfc(x, inverse = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x}{ Numeric. } \item{inverse}{ Logical. Of length 1. } }
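% The precise link to pnorm() mentioned in the details section is
% erf(x) = 2 * pnorm(x * sqrt(2)) - 1; a quick check:
% xx <- c(-2, -0.5, 0, 1, 3)
% max(abs(erf(xx) - (2 * pnorm(xx * sqrt(2)) - 1)))  # Should be ~0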
} \examples{ \dontrun{ curve(erf, -3, 3, col = "orange", ylab = "", las = 1) curve(pnorm, -3, 3, add = TRUE, col = "blue", lty = "dotted", lwd = 2) abline(v = 0, h = 0, lty = "dashed") legend("topleft", c("erf(x)", "pnorm(x)"), col = c("orange", "blue"), lty = c("solid", "dotted"), lwd = 1:2) } } \keyword{math} VGAM/man/inv.gaussianff.Rd0000644000176200001440000000637113135276753015012 0ustar liggesusers\name{inv.gaussianff} \alias{inv.gaussianff} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Inverse Gaussian Distribution Family Function } \description{ Estimates the two parameters of the inverse Gaussian distribution by maximum likelihood estimation. } \usage{ inv.gaussianff(lmu = "loge", llambda = "loge", imethod = 1, ilambda = NULL, parallel = FALSE, ishrinkage = 0.99, zero = NULL) } %- maybe also 'usage' for other objects documented here. %apply.parint = FALSE, \arguments{ \item{lmu, llambda}{ Parameter link functions for the \eqn{\mu}{mu} and \eqn{\lambda}{lambda} parameters. See \code{\link{Links}} for more choices. } \item{ilambda, parallel}{ See \code{\link{CommonVGAMffArguments}} for more information. If \code{parallel = TRUE} then the constraint is not applied to the intercept. } \item{imethod, ishrinkage, zero}{ See \code{\link{CommonVGAMffArguments}} for information. } } \details{ The standard (``canonical'') form of the inverse Gaussian distribution has a density that can be written as \deqn{f(y;\mu,\lambda) = \sqrt{\lambda/(2\pi y^3)} \exp\left(-\lambda (y-\mu)^2/(2 \mu^2 y)\right)}{% f(y;mu,lambda) = sqrt(lambda/(2*pi*y^3)) * exp(-lambda*(y-mu)^2/(2*mu^2*y)) } where \eqn{y>0}, \eqn{\mu>0}{mu>0}, and \eqn{\lambda>0}{lambda>0}. The mean of \eqn{Y} is \eqn{\mu}{mu} and its variance is \eqn{\mu^3/\lambda}{mu^3/lambda}. By default, \eqn{\eta_1=\log(\mu)}{eta1=log(mu)} and \eqn{\eta_2=\log(\lambda)}{eta2=log(lambda)}. The mean is returned as the fitted values. This \pkg{VGAM} family function can handle multiple responses (inputted as a matrix). } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1994) \emph{Continuous Univariate Distributions}, 2nd edition, Volume 1, New York: Wiley. Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011) \emph{Statistical Distributions}, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition. } \author{ T. W. Yee } \note{ The inverse Gaussian distribution can be fitted (to a certain extent) using the usual GLM framework involving a scale parameter. This family function is different from that approach in that it estimates both parameters by full maximum likelihood estimation. } \seealso{ \code{\link{Inv.gaussian}}, \code{\link{waldff}}, \code{\link{bisa}}. The \R{} package \pkg{SuppDists} has several functions for evaluating the density, distribution function, quantile function and generating random numbers from the inverse Gaussian distribution. 
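% The mean and variance formulas in the details section can be checked
% by simulation (parameter values arbitrary):
% mu <- 2; lambda <- 3
% yy <- rinv.gaussian(n = 1e5, mu = mu, lambda = lambda)
% c(mean(yy), mu)            # Should agree closely
% c(var(yy), mu^3 / lambda)  # Should agree closely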
} \examples{ idata <- data.frame(x2 = runif(nn <- 1000)) idata <- transform(idata, mymu = exp(2 + 1 * x2), Lambda = exp(2 + 1 * x2)) idata <- transform(idata, y = rinv.gaussian(nn, mu = mymu, lambda = Lambda)) fit1 <- vglm(y ~ x2, inv.gaussianff, data = idata, trace = TRUE) rrig <- rrvglm(y ~ x2, inv.gaussianff, data = idata, trace = TRUE) coef(fit1, matrix = TRUE) coef(rrig, matrix = TRUE) Coef(rrig) summary(fit1) } \keyword{models} \keyword{regression} VGAM/man/exppoissonUC.Rd0000644000176200001440000000440513135276753014524 0ustar liggesusers\name{exppois} \alias{exppois} \alias{dexppois} \alias{pexppois} \alias{qexppois} \alias{rexppois} \title{The Exponential Poisson Distribution} \description{ Density, distribution function, quantile function and random generation for the exponential poisson distribution. } \usage{ dexppois(x, rate = 1, shape, log = FALSE) pexppois(q, rate = 1, shape, lower.tail = TRUE, log.p = FALSE) qexppois(p, rate = 1, shape, lower.tail = TRUE, log.p = FALSE) rexppois(n, rate = 1, shape) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. If \code{length(n) > 1} then the length is taken to be the number required. } \item{shape, rate}{ positive parameters. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dexppois} gives the density, \code{pexppois} gives the distribution function, \code{qexppois} gives the quantile function, and \code{rexppois} generates random deviates. } \author{ Kai Huang and J. G. Lauder } \details{ See \code{\link{exppoisson}}, the \pkg{VGAM} family function for estimating the parameters, for the formula of the probability density function and other details. } %\note{ %} \seealso{ \code{\link{exppoisson}}. } \examples{ \dontrun{ rate <- 2; shape <- 0.5; nn <- 201 x <- seq(-0.05, 1.05, len = nn) plot(x, dexppois(x, rate = rate, shape), type = "l", las = 1, ylim = c(0, 3), ylab = paste("fexppoisson(rate = ", rate, ", shape = ", shape, ")"), col = "blue", cex.main = 0.8, main = "Blue is the density, orange the cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles") lines(x, pexppois(x, rate = rate, shape), col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qexppois(probs, rate = rate, shape) lines(Q, dexppois(Q, rate = rate, shape), col = "purple", lty = 3, type = "h") lines(Q, pexppois(Q, rate = rate, shape), col = "purple", lty = 3, type = "h") abline(h = probs, col = "purple", lty = 3); abline(h = 0, col = "gray50") max(abs(pexppois(Q, rate = rate, shape) - probs)) # Should be 0 } } \keyword{distribution} VGAM/man/enzyme.Rd0000644000176200001440000000170313135276753013372 0ustar liggesusers\name{enzyme} \alias{enzyme} \docType{data} \title{ Enzyme Data} \description{ Enzyme velocity and substrate concentration. } \usage{data(enzyme)} \format{ A data frame with 12 observations on the following 2 variables. \describe{ \item{conc}{a numeric explanatory vector; substrate concentration} \item{velocity}{a numeric response vector; enzyme velocity} } } \details{ Sorry, more details need to be included later. } \source{ Sorry, more details need to be included later. } \references{ Watts, D. G. (1981) An introduction to nonlinear least squares. In: L. Endrenyi (Ed.), \emph{Kinetic Data Analysis: Design and Analysis of Enzyme and Pharmacokinetic Experiments}, pp.1--24. 
New York: Plenum Press. } \seealso{ \code{\link[VGAM]{micmen}}. } \examples{ \dontrun{ fit <- vglm(velocity ~ 1, micmen, data = enzyme, trace = TRUE, form2 = ~ conc - 1, crit = "crit") summary(fit) } } \keyword{datasets} VGAM/man/gompertzUC.Rd0000644000176200001440000000430213135276753014160 0ustar liggesusers\name{Gompertz} \alias{Gompertz} \alias{dgompertz} \alias{pgompertz} \alias{qgompertz} \alias{rgompertz} \title{The Gompertz Distribution} \description{ Density, cumulative distribution function, quantile function and random generation for the Gompertz distribution. } \usage{ dgompertz(x, scale = 1, shape, log = FALSE) pgompertz(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) qgompertz(p, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) rgompertz(n, scale = 1, shape) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{number of observations. Same as in \code{\link[stats]{runif}}. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } \item{scale, shape}{positive scale and shape parameters. } } \value{ \code{dgompertz} gives the density, \code{pgompertz} gives the cumulative distribution function, \code{qgompertz} gives the quantile function, and \code{rgompertz} generates random deviates. } \author{ T. W. Yee and Kai Huang } \details{ See \code{\link{gompertz}} for details. } %\note{ % %} \seealso{ \code{\link{gompertz}}, \code{\link{dgumbel}}, \code{\link{dmakeham}}. } \examples{ probs <- seq(0.01, 0.99, by = 0.01) Shape <- exp(1); Scale <- exp(1) max(abs(pgompertz(qgompertz(p = probs, Scale, shape = Shape), Scale, shape = Shape) - probs)) # Should be 0 \dontrun{ x <- seq(-0.1, 1.0, by = 0.001) plot(x, dgompertz(x, Scale,shape = Shape), type = "l", col = "blue", las = 1, main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles", ylab = "") abline(h = 0, col = "blue", lty = 2) lines(x, pgompertz(x, Scale, shape = Shape), col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qgompertz(probs, Scale, shape = Shape) lines(Q, dgompertz(Q, Scale, shape = Shape), col = "purple", lty = 3, type = "h") pgompertz(Q, Scale, shape = Shape) - probs # Should be all zero abline(h = probs, col = "purple", lty = 3) } } \keyword{distribution} VGAM/man/Huggins89.t1.Rd0000644000176200001440000001210213135276753014166 0ustar liggesusers\name{Huggins89.t1} \alias{Huggins89.t1} \alias{Huggins89table1} \docType{data} \title{ Table 1 of Huggins (1989) } \description{ Simulated capture data set for the linear logistic model depending on an occasion covariate and an individual covariate for 10 trapping occasions and 20 individuals. %% ~~ A concise (1-5 lines) description of the dataset. ~~ } \usage{ data(Huggins89table1) data(Huggins89.t1) } \format{ The format is a data frame. %chr "Huggins89.t1" } \details{ Table 1 of Huggins (1989) gives this toy data set. Note that variables \code{t1},\ldots,\code{t10} are occasion-specific variables. They correspond to the response variables \code{y1},\ldots,\code{y10} which have values 1 for capture and 0 for not captured. Both \code{Huggins89table1} and \code{Huggins89.t1} are identical. The latter used variables beginning with \code{z}, not \code{t}, and may be withdrawn very soon. 
%% ~~ If necessary, more details than the __description__ above ~~ } %\source{ %% ~~ reference to a publication or URL from which the data were obtained ~~ %} \references{ Huggins, R. M. (1989) On the statistical analysis of capture experiments. \emph{Biometrika}, \bold{76}, 133--140. %% ~~ possibly secondary sources and usages ~~ } \examples{ Huggins89table1 <- transform(Huggins89table1, x3.tij = t01, T02 = t02, T03 = t03, T04 = t04, T05 = t05, T06 = t06, T07 = t07, T08 = t08, T09 = t09, T10 = t10) small.table1 <- subset(Huggins89table1, y01 + y02 + y03 + y04 + y05 + y06 + y07 + y08 + y09 + y10 > 0) # fit.tbh is the bottom equation on p.133. # It is an M_tbh model. fit.tbh <- vglm(cbind(y01, y02, y03, y04, y05, y06, y07, y08, y09, y10) ~ x2 + x3.tij, xij = list(x3.tij ~ t01 + t02 + t03 + t04 + t05 + t06 + t07 + t08 + t09 + t10 + T02 + T03 + T04 + T05 + T06 + T07 + T08 + T09 + T10 - 1), posbernoulli.tb(parallel.t = TRUE ~ x2 + x3.tij), data = small.table1, trace = TRUE, form2 = ~ x2 + x3.tij + t01 + t02 + t03 + t04 + t05 + t06 + t07 + t08 + t09 + t10 + T02 + T03 + T04 + T05 + T06 + T07 + T08 + T09 + T10) # These results differ a bit from Huggins (1989), probably because # two animals had to be removed here (they were never caught): coef(fit.tbh) # First element is the behavioural effect sqrt(diag(vcov(fit.tbh))) # SEs constraints(fit.tbh, matrix = TRUE) summary(fit.tbh, presid = FALSE) fit.tbh@extra$N.hat # Estimate of the population size N; cf. 20.86 fit.tbh@extra$SE.N.hat # Its standard error; cf. 1.87 or 4.51 fit.th <- vglm(cbind(y01, y02, y03, y04, y05, y06, y07, y08, y09, y10) ~ x2, posbernoulli.t, data = small.table1, trace = TRUE) coef(fit.th) constraints(fit.th) coef(fit.th, matrix = TRUE) # M_th model summary(fit.th, presid = FALSE) fit.th@extra$N.hat # Estimate of the population size N fit.th@extra$SE.N.hat # Its standard error fit.bh <- vglm(cbind(y01, y02, y03, y04, y05, y06, y07, y08, y09, y10) ~ x2, posbernoulli.b(I2 = FALSE), data = small.table1, trace = TRUE) coef(fit.bh) constraints(fit.bh) coef(fit.bh, matrix = TRUE) # M_bh model summary(fit.bh, presid = FALSE) fit.bh@extra$N.hat fit.bh@extra$SE.N.hat fit.h <- vglm(cbind(y01, y02, y03, y04, y05, y06, y07, y08, y09, y10) ~ x2, posbernoulli.b, data = small.table1, trace = TRUE) coef(fit.h, matrix = TRUE) # M_h model (version 1) coef(fit.h) summary(fit.h, presid = FALSE) fit.h@extra$N.hat fit.h@extra$SE.N.hat Fit.h <- vglm(cbind(y01, y02, y03, y04, y05, y06, y07, y08, y09, y10) ~ x2, posbernoulli.t(parallel.t = TRUE ~ x2), data = small.table1, trace = TRUE) coef(Fit.h) coef(Fit.h, matrix = TRUE) # M_h model (version 2) summary(Fit.h, presid = FALSE) Fit.h@extra$N.hat Fit.h@extra$SE.N.hat } \keyword{datasets}
%But this version uses manual construction of the constraint matrices: %tau <- 10 %Hlist <-list("(Intercept)" = cbind(bhvr.effect = c(rep(0, len = tau), % rep(1, len = tau-1)), % overall.intercept = 1), % x2 = cbind(rep(1, len = 2*tau-1)), % Zedd = cbind(rep(1, len = 2*tau-1))) %fit.tbh <- % vglm(cbind(y01, y02, y03, y04, y05, y06, y07, y08, y09, y10) ~ x2 + Zedd, % xij = list(Zedd ~ z01 + z02 + z03 + z04 + z05 + z06 + z07 + z08 + z09 + z10 + % Z02 + Z03 + Z04 + Z05 + Z06 + Z07 + Z08 + Z09 + Z10 - 1), % posbernoulli.tb, data = small.t1, trace = TRUE, % constraints = Hlist, % form2 = ~ x2 + Zedd + % z01 + z02 + z03 + z04 + z05 + z06 + z07 + z08 + z09 + z10 + % Z02 + Z03 + Z04 + Z05 + Z06 + Z07 + Z08 + Z09 + Z10) VGAM/man/lms.bcg.Rd0000644000176200001440000000762713135276753013423 0ustar liggesusers\name{lms.bcg} \alias{lms.bcg} %- Also NEED an '\alias' for EACH other topic documented here. \title{ LMS Quantile Regression with a Box-Cox transformation to a Gamma Distribution } \description{ LMS quantile regression with the Box-Cox transformation to the gamma distribution. } \usage{ lms.bcg(percentiles = c(25, 50, 75), zero = c("lambda", "sigma"), llambda = "identitylink", lmu = "identitylink", lsigma = "loge", idf.mu = 4, idf.sigma = 2, ilambda = 1, isigma = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{percentiles}{ A numerical vector containing values between 0 and 100, which are the quantiles. They will be returned as `fitted values'. } \item{zero}{ See \code{\link{lms.bcn}}. } \item{llambda, lmu, lsigma}{ See \code{\link{lms.bcn}}. } \item{idf.mu, idf.sigma}{ See \code{\link{lms.bcn}}. } \item{ilambda, isigma}{ See \code{\link{lms.bcn}}. } } \details{ Given a value of the covariate, this function applies a Box-Cox transformation to the response to best obtain a gamma distribution. The parameters chosen to do this are estimated by maximum likelihood or penalized maximum likelihood. Similar details can be found at \code{\link{lms.bcn}}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Lopatatzidis A. and Green, P. J. (unpublished manuscript) Semiparametric quantile regression using the gamma distribution. Yee, T. W. (2004) Quantile regression via vector generalized additive models. \emph{Statistics in Medicine}, \bold{23}, 2295--2315. %Documentation accompanying the \pkg{VGAM} package at %\url{https://www.stat.auckland.ac.nz/~yee} %contains further information and examples. } \author{ Thomas W. Yee } \note{ Similar notes can be found at \code{\link{lms.bcn}}. } \section{Warning }{ This \pkg{VGAM} family function comes with the same warnings as \code{\link{lms.bcn}}. Also, the expected value of the second derivative with respect to lambda may be incorrect (my calculations do not agree with the Lopatatzidis and Green manuscript.) } \seealso{ \code{\link{lms.bcn}}, \code{\link{lms.yjn}}, \code{\link{qtplot.lmscreg}}, \code{\link{deplot.lmscreg}}, \code{\link{cdf.lmscreg}}, \code{\link{bmi.nz}}, \code{\link{amlexponential}}. } \examples{ # This converges, but deplot(fit) and qtplot(fit) do not work fit0 <- vglm(BMI ~ sm.bs(age, df = 4), lms.bcg, data = bmi.nz, trace = TRUE) coef(fit0, matrix = TRUE) \dontrun{ par(mfrow = c(1, 1)) plotvgam(fit0, se = TRUE) # Plot mu function (only) } # Use a trick: fit0 is used for initial values for fit1. 
fit1 <- vgam(BMI ~ s(age, df = c(4, 2)), etastart = predict(fit0), lms.bcg(zero = 1), bmi.nz, trace = TRUE) # Difficult to get a model that converges. # Here, we prematurely stop iterations because it fails near the solution. fit2 <- vgam(BMI ~ s(age, df = c(4, 2)), maxit = 4, lms.bcg(zero = 1, ilam = 3), bmi.nz, trace = TRUE) summary(fit1) head(predict(fit1)) head(fitted(fit1)) head(bmi.nz) # Person 1 is near the lower quartile of BMI amongst people his age head(cdf(fit1)) \dontrun{ # Quantile plot par(bty = "l", mar = c(5, 4, 4, 3) + 0.1, xpd = TRUE) qtplot(fit1, percentiles = c(5, 50, 90, 99), main = "Quantiles", xlim = c(15, 90), las = 1, ylab = "BMI", lwd = 2, lcol = 4) # Density plot ygrid <- seq(15, 43, len = 100) # BMI ranges par(mfrow = c(1, 1), lwd = 2) (aa <- deplot(fit1, x0 = 20, y = ygrid, xlab = "BMI", col = "black", main = "Density functions at Age = 20 (black), 42 (red) and 55 (blue)")) aa <- deplot(fit1, x0 = 42, y = ygrid, add = TRUE, llty = 2, col = "red") aa <- deplot(fit1, x0 = 55, y = ygrid, add = TRUE, llty = 4, col = "blue", Attach = TRUE) aa@post$deplot # Contains density function values } } \keyword{models} \keyword{regression} VGAM/man/zero.Rd0000644000176200001440000000614213135276753013044 0ustar liggesusers\name{zero} % \alias{zeroarg} \alias{zero} \title{ The zero Argument in VGAM Family Functions } \description{ The \code{zero} argument allows users to conveniently model certain linear/additive predictors as intercept-only. } % \usage{ % VGAMfamilyFunction(zero = 3) % } \value{ Nothing is returned. It is simply a convenient argument for constraining certain linear/additive predictors to be an intercept only. } \section{Warning }{ The use of other arguments may conflict with the \code{zero} argument. For example, using \code{constraints} to input constraint matrices may conflict with the \code{zero} argument. Another example is the argument \code{parallel}. In general users should not assume any particular order of precedence when there is potential conflict of definition. Currently no checking for consistency is made. The argument \code{zero} may be renamed in the future to something better. } \section{Side Effects}{ The argument creates the appropriate constraint matrices internally. } \details{ Often a certain parameter needs to be modelled simply while other parameters in the model may be more complex, for example, the \eqn{\lambda}{lambda} parameter in LMS-Box-Cox quantile regression should be modelled more simply than its \eqn{\mu}{mu} parameter. Another example is the \eqn{\xi}{xi} parameter in a GEV distribution, which should be modelled more simply than its \eqn{\mu}{mu} parameter. Using the \code{zero} argument allows this to be fitted conveniently without having to input all the constraint matrices explicitly. The \code{zero} argument can be assigned an integer vector from the set \{\code{1:M}\} where \code{M} is the number of linear/additive predictors. Full details about constraint matrices can be found in the references. See \code{\link{CommonVGAMffArguments}} for more information. } \author{T. W. Yee } \note{ In all \pkg{VGAM} family functions \code{zero = NULL} means none of the linear/additive predictors are modelled as intercepts-only. Almost all \pkg{VGAM} family functions have \code{zero = NULL} as the default, but there are some exceptions, e.g., \code{\link{binom2.or}}. Typing something like \code{coef(fit, matrix = TRUE)} is a useful way to ensure that the \code{zero} argument has worked as expected. } \references{ Yee, T. W. and Wild, C. J.
(1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. %\url{http://www.stat.auckland.ac.nz/~yee} } \seealso{ \code{\link{CommonVGAMffArguments}}, \code{\link{constraints}}. } \examples{ args(multinomial) args(binom2.or) args(gpd) #LMS quantile regression example fit <- vglm(BMI ~ sm.bs(age, df = 4), lms.bcg(zero = c(1, 3)), data = bmi.nz, trace = TRUE) coef(fit, matrix = TRUE) } \keyword{models} \keyword{regression} \keyword{programming} %zz Here is a conflict which is not picked up (no internal checking done) VGAM/man/vgam.control.Rd0000644000176200001440000001565113135276753014503 0ustar liggesusers\name{vgam.control} \alias{vgam.control} %- Also NEED an `\alias' for EACH other topic documented here. \title{ Control Function for vgam() } \description{ Algorithmic constants and parameters for running \code{\link{vgam}} are set using this function. } \usage{ vgam.control(all.knots = FALSE, bf.epsilon = 1e-07, bf.maxit = 30, checkwz=TRUE, Check.rank = TRUE, Check.cm.rank = TRUE, criterion = names(.min.criterion.VGAM), epsilon = 1e-07, maxit = 30, Maxit.outer = 10, noWarning = FALSE, na.action = na.fail, nk = NULL, save.weights = FALSE, se.fit = TRUE, trace = FALSE, wzepsilon = .Machine$double.eps^0.75, xij = NULL, gamma.arg = 1, ...) } %- maybe also `usage' for other objects documented here. \arguments{ % zz na.action differs from vglm \item{all.knots}{ logical indicating if all distinct points of the smoothing variables are to be used as knots. By default, \code{all.knots=TRUE} for \eqn{n \leq 40}{n <= 40}, and for \eqn{n > 40}, the number of knots is approximately \eqn{40 + (n-40)^{0.25}}{40 + (n-40)^0.25}. This increases very slowly with \eqn{n} so that the number of knots is approximately between 50 and 60 for large \eqn{n}. } \item{bf.epsilon}{ tolerance used by the modified vector backfitting algorithm for testing convergence. Must be a positive number. } \item{bf.maxit}{ maximum number of iterations allowed in the modified vector backfitting algorithm. Must be a positive integer. } \item{checkwz}{ logical indicating whether the diagonal elements of the working weight matrices should be checked whether they are sufficiently positive, i.e., greater than \code{wzepsilon}. If not, any values less than \code{wzepsilon} are replaced with this value. } \item{Check.rank, Check.cm.rank}{ See \code{\link{vglm.control}}. } \item{criterion}{ character variable describing what criterion is to be used to test for convergence. The possibilities are listed in \code{.min.criterion.VGAM}, but most family functions only implement a few of these. } \item{epsilon}{ positive convergence tolerance epsilon. Roughly speaking, the Newton-Raphson/Fisher-scoring/local-scoring iterations are assumed to have converged when two successive \code{criterion} values are within \code{epsilon} of each other. } \item{maxit}{ maximum number of Newton-Raphson/Fisher-scoring/local-scoring iterations allowed. } \item{Maxit.outer}{ maximum number of outer iterations allowed when there are \code{\link{sm.os}} or \code{\link{sm.ps}} terms. See \code{\link{vgam}} for a little information about the default \emph{outer iteration}. 
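For example, a hypothetical call such as \code{vgam(y ~ sm.os(x2), poissonff, data = pdata, Maxit.outer = 5)} (here \code{y}, \code{x2} and \code{pdata} are illustrative names only) would allow at most 5 outer iterations, since \code{\link{vgam}} passes such arguments on to this control function.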
Note that one can use \emph{performance iteration} by setting \code{Maxit.outer = 1}; then the smoothing parameters will be automatically chosen at each IRLS iteration (some specific programming allows this). % Was Maxit.outer = 20 } \item{na.action}{ how to handle missing values. Unlike the SPLUS \code{gam} function, \code{\link{vgam}} cannot handle \code{NA}s when smoothing. } \item{nk}{ vector of length \eqn{d} containing positive integers, where \eqn{d} is the number of \code{\link{s}} terms in the formula. Recycling is used if necessary. The \eqn{i}th value is the number of B-spline coefficients to be estimated for each component function of the \eqn{i}th \code{s()} term. \code{nk} differs from the number of knots by some constant. If specified, \code{nk} overrides the automatic knot selection procedure. } \item{save.weights}{ logical indicating whether the \code{weights} slot of a \code{"vglm"} object will be saved on the object. If not, it will be reconstructed when needed, e.g., \code{summary}. } \item{se.fit}{ logical indicating whether approximate pointwise standard errors are to be saved on the object. If \code{TRUE}, then these can be plotted with \code{plot(..., se = TRUE)}. } \item{trace}{ logical indicating if output should be produced for each iteration. } \item{wzepsilon}{ Small positive number used to test whether the diagonals of the working weight matrices are sufficiently positive. } % \item{xij}{ % formula giving terms making up a covariate-dependent term. % % } \item{noWarning}{ Same as \code{\link{vglm.control}}. } \item{xij}{ Same as \code{\link{vglm.control}}. } \item{gamma.arg}{ Numeric; same as \code{gamma} in \code{\link[mgcv]{magic}}. Inflation factor for optimizing the UBRE/GCV criterion. If given, a suggested value is 1.4 to help avoid overfitting, based on the work of Gu and co-workers (values between 1.2 and 1.4 appeared reasonable, based on simulations). A warning may be given if the value is deemed out-of-range. } \item{\dots}{ other parameters that may be picked up from control functions that are specific to the \pkg{VGAM} family function. % zz see later. } } \details{ Most of the control parameters are used within \code{vgam.fit} and you will have to look at that to understand the full details. Many of the control parameters are used in a similar manner by \code{vglm.fit} (\code{\link{vglm}}) because the algorithm (IRLS) is very similar. Setting \code{save.weights=FALSE} is useful for some models because the \code{weights} slot of the object is often the largest and so less memory is used to store the object. However, for some \pkg{VGAM} family functions, it is necessary to set \code{save.weights=TRUE} because the \code{weights} slot cannot be reconstructed later. } \value{ A list with components matching the input names. A little error checking is done, but not much. The list is assigned to the \code{control} slot of \code{\link{vgam}} objects. } \references{ Yee, T. W. and Wild, C. J. (1996) Vector generalized additive models. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{58}, 481--493. % \url{http://www.stat.auckland.ac.nz/~yee} % For gamma=1.4: % Kim, Y.-J. and Gu, C. 2004, % Smoothing spline Gaussian regression: % more scalable computation via efficient approximation. %\emph{Journal of the Royal Statistical Society, Series B, Methodological}, %\bold{66}, 337--356. %\bold{66}(2), 337--356. } \author{ Thomas W.
Yee} \note{ \code{\link{vgam}} does not implement half-stepsizing, therefore parametric models should be fitted with \code{\link{vglm}}. Also, \code{\link{vgam}} is slower than \code{\link{vglm}}. } \section{Warning}{ See \code{\link{vglm.control}}. } \seealso{ \code{\link{vgam}}, \code{\link{vglm.control}}, \code{\link{vsmooth.spline}}, \code{\link{vglm}}. } \examples{ pneumo <- transform(pneumo, let = log(exposure.time)) vgam(cbind(normal, mild, severe) ~ s(let, df = 2), multinomial, data = pneumo, trace = TRUE, eps = 1e-4, maxit = 10) } \keyword{models} \keyword{regression} \keyword{smooth} % xij = NULL, VGAM/man/bisaUC.Rd0000644000176200001440000000473713135276753013237 0ustar liggesusers\name{Bisa} \alias{Bisa} \alias{dbisa} \alias{pbisa} \alias{qbisa} \alias{rbisa} \title{The Birnbaum-Saunders Distribution} \description{ Density, distribution function, quantile function and random generation for the Birnbaum-Saunders distribution. } \usage{ dbisa(x, scale = 1, shape, log = FALSE) pbisa(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) qbisa(p, scale = 1, shape, lower.tail = TRUE, log.p = FALSE) rbisa(n, scale = 1, shape) } \arguments{ \item{x, q}{vector of quantiles.} \item{p}{vector of probabilities.} \item{n}{ Same as in \code{\link[stats]{runif}}. } \item{scale, shape}{ the (positive) scale and shape parameters. } \item{log}{ Logical. If \code{TRUE} then the logarithm of the density is returned. } \item{lower.tail, log.p}{ Same meaning as in \code{\link[stats:Normal]{pnorm}} or \code{\link[stats:Normal]{qnorm}}. } } \value{ \code{dbisa} gives the density, \code{pbisa} gives the distribution function, \code{qbisa} gives the quantile function, and \code{rbisa} generates random deviates. } \author{ T. W. Yee and Kai Huang } \details{ The Birnbaum-Saunders distribution is a distribution used in survival analysis. See \code{\link{bisa}}, the \pkg{VGAM} family function for estimating the parameters, for more details. } %\note{ %} \seealso{ \code{\link{bisa}}. } \examples{ \dontrun{ x <- seq(0, 6, len = 400) plot(x, dbisa(x, shape = 1), type = "l", col = "blue", ylab = "Density", lwd = 2, ylim = c(0, 1.3), lty = 3, main = "X ~ Birnbaum-Saunders(shape, scale = 1)") lines(x, dbisa(x, shape = 2), col = "orange", lty = 2, lwd = 2) lines(x, dbisa(x, shape = 0.5), col = "green", lty = 1, lwd = 2) legend(x = 3, y = 0.9, legend = paste("shape = ", c(0.5, 1, 2)), col = c("green", "blue", "orange"), lty = 1:3, lwd = 2) shape <- 1; x <- seq(0.0, 4, len = 401) plot(x, dbisa(x, shape = shape), type = "l", col = "blue", las = 1, ylab = "", main = "Blue is density, orange is cumulative distribution function", sub = "Purple lines are the 10,20,...,90 percentiles", ylim = 0:1) abline(h = 0, col = "blue", lty = 2) lines(x, pbisa(x, shape = shape), col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qbisa(probs, shape = shape) lines(Q, dbisa(Q, shape = shape), col = "purple", lty = 3, type = "h") pbisa(Q, shape = shape) - probs # Should be all zero abline(h = probs, col = "purple", lty = 3) lines(Q, pbisa(Q, shape = shape), col = "purple", lty = 3, type = "h") } } \keyword{distribution} VGAM/man/ABO.Rd0000644000176200001440000000447113135276753012467 0ustar liggesusers\name{ABO} \alias{ABO} %- Also NEED an '\alias' for EACH other topic documented here. \title{ The ABO Blood Group System } \description{ Estimates the two independent parameters of the ABO blood group system.
} \usage{ ABO(link.pA = "logit", link.pB = "logit", ipA = NULL, ipB = NULL, ipO = NULL, zero = NULL) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{link.pA, link.pB}{ Link functions applied to \code{pA} and \code{pB}. See \code{\link{Links}} for more choices. } \item{ipA, ipB, ipO}{ Optional initial value for \code{pA} and \code{pB} and \code{pO}. A \code{NULL} value means values are computed internally. } \item{zero}{ Details at \code{\link{CommonVGAMffArguments}}. } } \details{ The parameters \code{pA} and \code{pB} are probabilities, so that \code{pO=1-pA-pB} is the third probability. The probabilities \code{pA} and \code{pB} correspond to A and B respectively, so that \code{pO} is the probability for O. It is easier to make use of initial values for \code{pO} than for \code{pB}. In documentation elsewhere I sometimes use \code{pA=p}, \code{pB=q}, \code{pO=r}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}} and \code{\link{vgam}}. } \references{ Lange, K. (2002) \emph{Mathematical and Statistical Methods for Genetic Analysis}, 2nd ed. New York: Springer-Verlag. } \author{ T. W. Yee } \note{ The input can be a 4-column matrix of counts, where the columns are A, B, AB, O (in order). Alternatively, the input can be a 4-column matrix of proportions (so each row adds to 1) and the \code{weights} argument is used to specify the total number of counts for each row. } \seealso{ \code{\link{AA.Aa.aa}}, \code{\link{AB.Ab.aB.ab}}, \code{\link{A1A2A3}}, \code{\link{MNSs}}. % \code{\link{AB.Ab.aB.ab2}}, } \examples{ ymat <- cbind(A = 725, B = 258, AB = 72, O = 1073) # Order matters, not the name fit <- vglm(ymat ~ 1, ABO(link.pA = "identitylink", link.pB = "identitylink"), trace = TRUE, crit = "coef") coef(fit, matrix = TRUE) Coef(fit) # Estimated pA and pB rbind(ymat, sum(ymat) * fitted(fit)) sqrt(diag(vcov(fit))) } \keyword{models} \keyword{regression} VGAM/man/betabinomUC.Rd0000644000176200001440000002065413135276753014261 0ustar liggesusers\name{Betabinom} \alias{Betabinom} \alias{dbetabinom} \alias{pbetabinom} %\alias{qbetabinom} \alias{rbetabinom} \alias{dbetabinom.ab} \alias{pbetabinom.ab} %\alias{qbetabinom.ab} \alias{rbetabinom.ab} %\alias{Zoibetabinom} \alias{dzoibetabinom} \alias{pzoibetabinom} %\alias{qzoibetabinom} \alias{rzoibetabinom} \alias{dzoibetabinom.ab} \alias{pzoibetabinom.ab} %\alias{qzoibetabinom.ab} \alias{rzoibetabinom.ab} \title{The Beta-Binomial Distribution} \description{ Density, distribution function, and random generation for the beta-binomial distribution and the inflated beta-binomial distribution. 
} \usage{ dbetabinom(x, size, prob, rho = 0, log = FALSE) pbetabinom(q, size, prob, rho, log.p = FALSE) rbetabinom(n, size, prob, rho = 0) dbetabinom.ab(x, size, shape1, shape2, log = FALSE, Inf.shape = exp(20), limit.prob = 0.5) pbetabinom.ab(q, size, shape1, shape2, log.p = FALSE) rbetabinom.ab(n, size, shape1, shape2, .dontuse.prob = NULL) dzoibetabinom(x, size, prob, rho = 0, pstr0 = 0, pstrsize = 0, log = FALSE) pzoibetabinom(q, size, prob, rho, pstr0 = 0, pstrsize = 0, lower.tail = TRUE, log.p = FALSE) rzoibetabinom(n, size, prob, rho = 0, pstr0 = 0, pstrsize = 0) dzoibetabinom.ab(x, size, shape1, shape2, pstr0 = 0, pstrsize = 0, log = FALSE) pzoibetabinom.ab(q, size, shape1, shape2, pstr0 = 0, pstrsize = 0, lower.tail = TRUE, log.p = FALSE) rzoibetabinom.ab(n, size, shape1, shape2, pstr0 = 0, pstrsize = 0) } % Infinity.shape = 1e5 .dontuse.prob = NULL \arguments{ \item{x, q}{vector of quantiles.} % \item{p}{vector of probabilities.} \item{size}{number of trials.} \item{n}{number of observations. Same as \code{\link[stats]{runif}}. } \item{prob}{ the probability of success \eqn{\mu}{mu}. Must be in the unit closed interval \eqn{[0,1]}. } \item{rho}{ the correlation parameter \eqn{\rho}{rho}. Usually must be in the unit open interval \eqn{(0,1)}, however, the value 0 is sometimes supported (if so then it corresponds to the usual binomial distribution). } \item{shape1, shape2}{ the two (positive) shape parameters of the standard beta distribution. They are called \code{a} and \code{b} in \code{\link[base:Special]{beta}} respectively. } \item{log, log.p, lower.tail}{ Same meaning as \code{\link[stats]{runif}}. } \item{Inf.shape}{ Numeric. A large value such that, if \code{shape1} or \code{shape2} exceeds this, then special measures are taken, e.g., calling \code{\link[stats]{dbinom}}. Also, if \code{shape1} or \code{shape2} is less than its reciprocal, then special measures are also taken. This feature/approximation is needed to avoid numerical problems with catastrophic cancellation of multiple \code{\link[base:Special]{lbeta}} calls. } \item{limit.prob}{ If either shape parameter is \code{Inf} then the binomial limit is taken, with \code{shape1 / (shape1 + shape2)} as the probability of success. In the case where both are \code{Inf} this probability will be a \code{NaN = Inf/Inf}, however, the value \code{limit.prob} is used instead. Hence the default is to assume that both shape parameters are equal as the limit is taken. Purists may assign \code{NaN} to this argument. } \item{.dontuse.prob}{ An argument that should be ignored and unused. } \item{pstr0}{ Probability of a structural zero (i.e., ignoring the beta-binomial distribution). The default value of \code{pstr0} corresponds to the response having a beta-binomial distribution inflated only at \code{size}. } \item{pstrsize}{ Probability of a structural maximum value \code{size}. The default value of \code{pstrsize} corresponds to the response having a beta-binomial distribution inflated only at 0. } } \value{ \code{dbetabinom} and \code{dbetabinom.ab} give the density, \code{pbetabinom} and \code{pbetabinom.ab} give the distribution function, and \code{rbetabinom} and \code{rbetabinom.ab} generate random deviates.
% \code{qbetabinom} and \code{qbetabinom.ab} gives the quantile function, and \code{dzoibetabinom} and \code{dzoibetabinom.ab} give the inflated density, \code{pzoibetabinom} and \code{pzoibetabinom.ab} give the inflated distribution function, and \code{rzoibetabinom} and \code{rzoibetabinom.ab} generate random inflated deviates. } \author{ T. W. Yee and Xiangjie Xue} \details{ The beta-binomial distribution is a binomial distribution whose probability of success is not a constant but is generated from a beta distribution with parameters \code{shape1} and \code{shape2}. Note that the mean of this beta distribution is \code{mu = shape1/(shape1+shape2)}, which therefore is the mean or the probability of success. See \code{\link{betabinomial}} and \code{\link{betabinomialff}}, the \pkg{VGAM} family functions for estimating the parameters, for the formula of the probability density function and other details. For the inflated beta-binomial distribution, the probability mass function is \deqn{P(Y = y) = (1 - pstr0 - pstrsize) \times BB(y) + pstr0 \times I[y = 0] + pstrsize \times I[y = size]}{% P(Y = y) = (1 - pstr0 - pstrsize) * BB(y) + pstr0 * I[y = 0] + pstrsize * I[y = size]} where \eqn{BB(y)} is the probability mass function of the beta-binomial distribution with the same shape parameters (\code{\link[VGAM]{pbetabinom.ab}}), \code{pstr0} is the inflated probability at 0 and \code{pstrsize} is the inflated probability at \code{size}. The default values of \code{pstr0} and \code{pstrsize} mean that these functions behave like the ordinary \code{\link[VGAM]{Betabinom}} when only the essential arguments are inputted. } \note{ \code{pzoibetabinom}, \code{pzoibetabinom.ab}, \code{pbetabinom} and \code{pbetabinom.ab} can be particularly slow. The functions here ending in \code{.ab} are called from those functions which don't. The simple transformations \eqn{\mu=\alpha / (\alpha + \beta)}{mu=alpha/(alpha+beta)} and \eqn{\rho=1/(1 + \alpha + \beta)}{rho=1/(1+alpha+beta)} are used, where \eqn{\alpha}{alpha} and \eqn{\beta}{beta} are the two shape parameters. } \seealso{ \code{\link{betabinomial}}, \code{\link{betabinomialff}}, \code{\link{Zoabeta}}.
} \examples{ set.seed(1); rbetabinom(10, 100, prob = 0.5) set.seed(1); rbinom(10, 100, prob = 0.5) # The same since rho = 0 \dontrun{ N <- 9; xx <- 0:N; s1 <- 2; s2 <- 3 dy <- dbetabinom.ab(xx, size = N, shape1 = s1, shape2 = s2) barplot(rbind(dy, dbinom(xx, size = N, prob = s1 / (s1+s2))), beside = TRUE, col = c("blue","green"), las = 1, main = paste("Beta-binomial (size=",N,", shape1=", s1, ", shape2=", s2, ") (blue) vs\n", " Binomial(size=", N, ", prob=", s1/(s1+s2), ") (green)", sep = ""), names.arg = as.character(xx), cex.main = 0.8) sum(dy * xx) # Check expected values are equal sum(dbinom(xx, size = N, prob = s1 / (s1+s2)) * xx) cumsum(dy) - pbetabinom.ab(xx, N, shape1 = s1, shape2 = s2) # Should be all 0 y <- rbetabinom.ab(n = 1e4, size = N, shape1 = s1, shape2 = s2) ty <- table(y) barplot(rbind(dy, ty / sum(ty)), beside = TRUE, col = c("blue", "orange"), las = 1, main = paste("Beta-binomial (size=", N, ", shape1=", s1, ", shape2=", s2, ") (blue) vs\n", " Random generated beta-binomial(size=", N, ", prob=", s1/(s1+s2), ") (orange)", sep = ""), cex.main = 0.8, names.arg = as.character(xx)) N <- 1e5; size <- 20; pstr0 <- 0.2; pstrsize <- 0.2 kk <- rzoibetabinom.ab(N, size, s1, s2, pstr0, pstrsize) hist(kk, probability = TRUE, border = "blue", ylim = c(0, 0.25), main = "Blue/green = inflated; orange = ordinary beta-binomial", breaks = -0.5 : (size + 0.5)) sum(kk == 0) / N # Proportion of 0 sum(kk == size) / N # Proportion of size lines(0 : size, dbetabinom.ab(0 : size, size, s1, s2), col = "orange") lines(0 : size, col = "green", type = "b", dzoibetabinom.ab(0 : size, size, s1, s2, pstr0, pstrsize)) } } \keyword{distribution} % \item{Inf.shape}{ % Numeric. A large value such that, % if \code{shape1} or \code{shape2} exceeds this, then % it is taken to be \code{Inf}. % Also, if \code{shape1} or \code{shape2} is less than its reciprocal, % then it might be loosely thought of as being effectively \code{0} % (although not treated exactly as so in the code). % This feature/approximation is needed to avoid numerical % problem with catastrophic cancellation of % multiple \code{\link[base:Special]{lbeta}} calls. % } VGAM/man/marital.nz.Rd0000644000176200001440000000234313135276753014143 0ustar liggesusers\name{marital.nz} \alias{marital.nz} \docType{data} \title{ New Zealand Marital Data } \description{ Some marital data mainly from a large NZ company collected in the early 1990s. } \usage{data(marital.nz)} \format{ A data frame with 6053 observations on the following 3 variables. \describe{ \item{\code{age}}{a numeric vector, age in years} \item{\code{ethnicity}}{a factor with levels \code{European} \code{Maori} \code{Other} \code{Polynesian}. Only Europeans are included in the data set. } \item{\code{mstatus}}{a factor with levels \code{Divorced/Separated}, \code{Married/Partnered}, \code{Single}, \code{Widowed}. } } } \details{ This is a subset of a data set collected from a self-administered questionnaire administered in a large New Zealand workforce observational study conducted during 1992--3. The data were augmented by a second study consisting of retirees. The data can be considered a reasonable representation of the white male New Zealand population in the early 1990s. } \source{ Clinical Trials Research Unit, University of Auckland, New Zealand. } \references{ See \code{\link{bmi.nz}} and \code{\link{chest.nz}}. 
} \examples{ summary(marital.nz) } \keyword{datasets} VGAM/man/multinomial.Rd0000644000176200001440000002755213135276753014427 0ustar liggesusers\name{multinomial} \alias{multinomial} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Multinomial Logit Model } \description{ Fits a multinomial logit model to a (preferably unordered) factor response. } \usage{ multinomial(zero = NULL, parallel = FALSE, nointercept = NULL, refLevel = "(Last)", whitespace = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{zero}{ Can be an integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. Any values must be from the set \{1,2,\ldots,\eqn{M}\}. The default value means none are modelled as intercept-only terms. See \code{\link{CommonVGAMffArguments}} for more information. } \item{parallel}{ A logical, or formula specifying which terms have equal/unequal coefficients. } \item{nointercept, whitespace}{ See \code{\link{CommonVGAMffArguments}} for more details. } \item{refLevel}{ Either a (1) single positive integer or (2) a value of the factor or (3) a character string. If inputted as an integer then it specifies which column of the response matrix is the reference or baseline level. The default is the last one (the \eqn{(M+1)}th one). If used, this argument will be usually assigned the value \code{1}. If inputted as a value of a factor then beware of missing values of certain levels of the factor (\code{drop.unused.levels = TRUE} or \code{drop.unused.levels = FALSE}). See the example below. If inputted as a character string then this should be equal to (A) one of the levels of the factor response, else (B) one of the column names of the matrix response of counts; e.g., \code{vglm(cbind(normal, mild, severe) ~ let, multinomial(refLevel = "severe"), data = pneumo)} if it was (incorrectly because the response is ordinal) applied to the \code{\link{pneumo}} data set. Another example is \code{vglm(ethnicity ~ age, multinomial(refLevel = "European"), data = xs.nz)} if it was applied to the \code{\link[VGAMdata]{xs.nz}} data set. } } \details{ In this help file the response \eqn{Y} is assumed to be a factor with unordered values \eqn{1,2,\dots,M+1}, so that \eqn{M} is the number of linear/additive predictors \eqn{\eta_j}{eta_j}. The default model can be written \deqn{\eta_j = \log(P[Y=j]/ P[Y=M+1])}{% eta_j = log(P[Y=j]/ P[Y=M+1])} where \eqn{\eta_j}{eta_j} is the \eqn{j}th linear/additive predictor. Here, \eqn{j=1,\ldots,M}, and \eqn{\eta_{M+1}}{eta_{M+1}} is 0 by definition. That is, the last level of the factor, or last column of the response matrix, is taken as the reference level or baseline---this is for identifiability of the parameters. The reference or baseline level can be changed with the \code{refLevel} argument. In almost all the literature, the constraint matrices associated with this family of models are known. For example, setting \code{parallel = TRUE} will make all constraint matrices (except for the intercept) equal to a vector of \eqn{M} 1's. If the constraint matrices are unknown and to be estimated, then this can be achieved by fitting the model as a reduced-rank vector generalized linear model (RR-VGLM; see \code{\link{rrvglm}}). In particular, a multinomial logit model with unknown constraint matrices is known as a \emph{stereotype} model (Anderson, 1984), and can be fitted with \code{\link{rrvglm}}. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). 
The object is used by modelling functions such as \code{\link{vglm}}, \code{\link{rrvglm}} and \code{\link{vgam}}. } \references{ Yee, T. W. (2010) The \pkg{VGAM} package for categorical data analysis. \emph{Journal of Statistical Software}, \bold{32}, 1--34. \url{http://www.jstatsoft.org/v32/i10/}. Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. McCullagh, P. and Nelder, J. A. (1989) \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall. Agresti, A. (2013) \emph{Categorical Data Analysis}, 3rd ed. Hoboken, NJ, USA: Wiley. Hastie, T. J., Tibshirani, R. J. and Friedman, J. H. (2009) \emph{The Elements of Statistical Learning: Data Mining, Inference and Prediction}, 2nd ed. New York, USA: Springer-Verlag. Simonoff, J. S. (2003) \emph{Analyzing Categorical Data}, New York, USA: Springer-Verlag. Anderson, J. A. (1984) Regression and ordered categorical variables. \emph{Journal of the Royal Statistical Society, Series B, Methodological}, \bold{46}, 1--30. Tutz, G. (2012) \emph{Regression for Categorical Data}, Cambridge University Press. %Further information and examples on categorical data analysis %by the \pkg{VGAM} package can be found at %\url{http://www.stat.auckland.ac.nz/~yee/VGAM/doc/categorical.pdf}. } \author{ Thomas W. Yee } \note{ The response should be either a matrix of counts (with row sums that are all positive), or a factor. In both cases, the \code{y} slot returned by \code{\link{vglm}}/\code{\link{vgam}}/\code{\link{rrvglm}} is the matrix of sample proportions. The multinomial logit model is more appropriate for a nominal (unordered) factor response than for an ordinal (ordered) factor response. Models more suited for the latter include those based on cumulative probabilities, e.g., \code{\link{cumulative}}. \code{multinomial} is prone to numerical difficulties if the groups are separable and/or the fitted probabilities are close to 0 or 1. The fitted values returned are estimates of the probabilities \eqn{P[Y=j]} for \eqn{j=1,\ldots,M+1}. See \pkg{safeBinaryRegression} for the logistic regression case. Here is an example of the usage of the \code{parallel} argument. If there are covariates \code{x2}, \code{x3} and \code{x4}, then \code{parallel = TRUE ~ x2 + x3 - 1} and \code{parallel = FALSE ~ x4} are equivalent. This would constrain the regression coefficients for \code{x2} and \code{x3} to be equal; those of the intercepts and \code{x4} would be different. In Example 5 below, a conditional logit model is fitted to an artificial data set that explores how cost and travel time affect people's decision about how to travel to work. Walking is the baseline group. The variable \code{Cost.car} is the difference between the cost of travel to work by car and walking, etc. The variable \code{Time.car} is the difference between the travel duration/time to work by car and walking, etc. For other details about the \code{xij} argument see \code{\link{vglm.control}} and \code{\link{fill}}. The \code{\link[nnet]{multinom}} function in the \pkg{nnet} package uses the first level of the factor as baseline, whereas the last level of the factor is used here. Consequently, the estimated regression coefficients differ. } % In the future, this family function may be renamed to % ``\code{mum}'' (for \bold{mu}ltinomial logit \bold{m}odel). % Please let me know if you strongly agree or disagree about this. \section{Warning }{ No check is made to verify that the response is nominal.
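For example, as noted under \code{refLevel} above, \code{multinomial} can be applied to the (ordinal) \code{\link{pneumo}} data without any warning being issued, even though \code{\link{cumulative}} would be more appropriate there.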
See \code{\link{CommonVGAMffArguments}} for more warnings. } \seealso{ \code{\link{margeff}}, \code{\link{cumulative}}, \code{\link{acat}}, \code{\link{cratio}}, \code{\link{sratio}}, \code{\link{dirichlet}}, \code{\link{dirmultinomial}}, \code{\link{rrvglm}}, \code{\link{fill1}}, \code{\link[stats:Multinom]{Multinomial}}, \code{\link{multilogit}}, \code{\link[datasets]{iris}}. The author's homepage has further documentation about categorical data analysis using \pkg{VGAM}. } % \code{\link[base:Multinom]{rmultinom}} % \code{\link{pneumo}}, \examples{ # Example 1: fit a multinomial logit model to Edgar Anderson's iris data data(iris) \dontrun{ fit <- vglm(Species ~ ., multinomial, iris) coef(fit, matrix = TRUE) } # Example 2a: a simple example ycounts <- t(rmultinom(10, size = 20, prob = c(0.1, 0.2, 0.8))) # Counts fit <- vglm(ycounts ~ 1, multinomial) head(fitted(fit)) # Proportions fit@prior.weights # NOT recommended for extraction of prior weights weights(fit, type = "prior", matrix = FALSE) # The better method depvar(fit) # Sample proportions; same as fit@y constraints(fit) # Constraint matrices # Example 2b: Different reference level used as the baseline fit2 <- vglm(ycounts ~ 1, multinomial(refLevel = 2)) coef(fit2, matrix = TRUE) coef(fit , matrix = TRUE) # Easy to reconcile this output with fit2 # Example 3: The response is a factor. nn <- 10 dframe3 <- data.frame(yfactor = gl(3, nn, labels = c("Control", "Trt1", "Trt2")), x2 = runif(3 * nn)) myrefLevel <- with(dframe3, yfactor[12]) fit3a <- vglm(yfactor ~ x2, multinomial(refLevel = myrefLevel), dframe3) fit3b <- vglm(yfactor ~ x2, multinomial(refLevel = 2), dframe3) coef(fit3a, matrix = TRUE) # "Treatment1" is the reference level coef(fit3b, matrix = TRUE) # "Treatment1" is the reference level margeff(fit3b) # Example 4: Fit a rank-1 stereotype model fit4 <- rrvglm(Country ~ Width + Height + HP, multinomial, data = car.all) coef(fit4) # Contains the C matrix constraints(fit4)$HP # The A matrix coef(fit4, matrix = TRUE) # The B matrix Coef(fit4)@C # The C matrix concoef(fit4) # Better to get the C matrix this way Coef(fit4)@A # The A matrix svd(coef(fit4, matrix = TRUE)[-1, ])$d # This has rank 1; = C %*% t(A) # Classification (but watch out for NAs in some of the variables): apply(fitted(fit4), 1, which.max) # Classification colnames(fitted(fit4))[apply(fitted(fit4), 1, which.max)] # Classification apply(predict(fit4, car.all, type = "response"), 1, which.max) # Ditto # Example 5: The use of the xij argument (aka conditional logit model) set.seed(111) nn <- 100 # Number of people who travel to work M <- 3 # There are M+1 models of transport to go to work ycounts <- matrix(0, nn, M+1) ycounts[cbind(1:nn, sample(x = M+1, size = nn, replace = TRUE))] = 1 dimnames(ycounts) <- list(NULL, c("bus","train","car","walk")) gotowork <- data.frame(cost.bus = runif(nn), time.bus = runif(nn), cost.train= runif(nn), time.train= runif(nn), cost.car = runif(nn), time.car = runif(nn), cost.walk = runif(nn), time.walk = runif(nn)) gotowork <- round(gotowork, digits = 2) # For convenience gotowork <- transform(gotowork, Cost.bus = cost.bus - cost.walk, Cost.car = cost.car - cost.walk, Cost.train = cost.train - cost.walk, Cost = cost.train - cost.walk, # for labelling Time.bus = time.bus - time.walk, Time.car = time.car - time.walk, Time.train = time.train - time.walk, Time = time.train - time.walk) # for labelling fit <- vglm(ycounts ~ Cost + Time, multinomial(parall = TRUE ~ Cost + Time - 1), xij = list(Cost ~ Cost.bus + Cost.train + Cost.car, Time ~ 
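                        # (Comment added for exposition only: each formula in
                        # 'xij' pairs a generic term, e.g., Time, with its
                        # alternative-specific versions for each linear predictor.)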
Time.bus + Time.train + Time.car), form2 = ~ Cost + Cost.bus + Cost.train + Cost.car + Time + Time.bus + Time.train + Time.car, data = gotowork, trace = TRUE) head(model.matrix(fit, type = "lm")) # LM model matrix head(model.matrix(fit, type = "vlm")) # Big VLM model matrix coef(fit) coef(fit, matrix = TRUE) constraints(fit) summary(fit) max(abs(predict(fit) - predict(fit, new = gotowork))) # Should be 0 } \keyword{models} \keyword{regression} % 20100915; this no longer works: % # Example 2c: Different input to Example 2a but same result % w <- apply(ycounts, 1, sum) # Prior weights % yprop <- ycounts / w # Sample proportions % fitprop <- vglm(yprop ~ 1, multinomial, weights=w) % head(fitted(fitprop)) # Proportions % weights(fitprop, type="prior", matrix=FALSE) % fitprop@y # Same as the input VGAM/man/binormalUC.Rd0000644000176200001440000000743713135276753014120 0ustar liggesusers\name{Binorm} \alias{Binorm} \alias{pnorm2} \alias{dbinorm} \alias{pbinorm} \alias{rbinorm} \title{Bivariate Normal Distribution Cumulative Distribution Function} \description{ Density, cumulative distribution function and random generation for the bivariate normal distribution. } % quantile function \usage{ dbinorm(x1, x2, mean1 = 0, mean2 = 0, var1 = 1, var2 = 1, cov12 = 0, log = FALSE) pbinorm(q1, q2, mean1 = 0, mean2 = 0, var1 = 1, var2 = 1, cov12 = 0) rbinorm(n, mean1 = 0, mean2 = 0, var1 = 1, var2 = 1, cov12 = 0) pnorm2(x1, x2, mean1 = 0, mean2 = 0, var1 = 1, var2 = 1, cov12 = 0) } % dbinorm(x1, x2, mean1 = 0, mean2 = 0, sd1 = 1, sd2 = 1, rho = 0, log = FALSE) \arguments{ \item{x1, x2, q1, q2}{vector of quantiles.} \item{mean1, mean2, var1, var2, cov12}{ vector of means, variances and the covariance. % standard deviations and correlation parameter. } % \item{sd1, sd2, rho}{ % vector of standard deviations and correlation parameter. % } \item{n}{number of observations. Same as \code{\link[stats]{rnorm}}. } \item{log}{ Logical. If \code{log = TRUE} then the logarithm of the density is returned. } % \item{rho}{ % See \code{\link{binormal}}. % } } \value{ \code{dbinorm} gives the density, \code{pbinorm} gives the cumulative distribution function, and \code{rbinorm} generates random deviates (\eqn{n} by 2 matrix). % \code{qnorm2} gives the quantile function, and } % \author{ T. W. Yee } \details{ The default arguments correspond to the standard bivariate normal distribution with correlation parameter \eqn{\rho = 0}{rho = 0}. That is, two independent standard normal distributions. Let \code{sd1} (say) be \code{sqrt(var1)} and written \eqn{\sigma_1}{sigma_1}, etc. Then the general formula for the correlation coefficient is \eqn{\rho = cov / (\sigma_1 \sigma_2)}{rho = cov / (sigma_1 * sigma_2)} where \eqn{cov} is argument \code{cov12}. Thus if arguments \code{var1} and \code{var2} are left alone then \code{cov12} can be inputted with \eqn{\rho}{rho}. One can think of this function as an extension of \code{\link[stats]{pnorm}} to two dimensions, however note that the argument names have been changed for \pkg{VGAM} 0.9-1 onwards. } \references{ \code{pbinorm()} is based on Donnelly (1973), the code was translated from FORTRAN to ratfor using struct, and then from ratfor to C manually. The function was originally called \code{bivnor}, and TWY only wrote a wrapper function. Donnelly, T. G. (1973) Algorithm 462: Bivariate Normal Distribution. \emph{Communications of the ACM}, \bold{16}, 638. % It gives the probability that a bivariate normal exceeds (ah, ak).
% Here, gh and gk are 0.5 times the right tail areas of ah, ak under a N(0, 1) % distribution. } \section{Warning}{ Being based on an approximation, the results of \code{pbinorm()} may be negative! Also, \code{pnorm2()} should be withdrawn soon; use \code{pbinorm()} instead because it is identical. % this function used to be called \code{pnorm2()}. % \code{dbinorm()}'s arguments might change! % Currently they differ from \code{pbinorm()} % and \code{rbinorm()}, so use the full argument name % to future-proof possible changes! } \note{ For \code{rbinorm()}, if the \eqn{i}th variance-covariance matrix is not positive-definite then the \eqn{i}th row is all \code{NA}s. } \seealso{ \code{\link[stats]{pnorm}}, \code{\link{binormal}}, \code{\link{uninormal}}. } \examples{ yvec <- c(-5, -1.96, 0, 1.96, 5) ymat <- expand.grid(yvec, yvec) cbind(ymat, pbinorm(ymat[, 1], ymat[, 2])) \dontrun{ rhovec <- seq(-0.95, 0.95, by = 0.01) plot(rhovec, pbinorm(0, 0, cov12 = rhovec), type = "l", col = "blue", las = 1) abline(v = 0, h = 0.25, col = "gray", lty = "dashed") } } \keyword{distribution} VGAM/man/fittedvlm.Rd0000644000176200001440000000747213135276753014067 0ustar liggesusers\name{fittedvlm} \alias{fittedvlm} \alias{fitted.values.vlm} %- Also NEED an '\alias' for EACH other topic documented here. \title{Fitted Values of a VLM object} \description{ Extractor function for the fitted values of a model object that inherits from a \emph{vector linear model} (VLM), e.g., a model of class \code{"vglm"}. } \usage{ fittedvlm(object, drop = FALSE, type.fitted = NULL, percentiles = NULL, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ a model object that inherits from a VLM. } \item{drop}{ Logical. If \code{FALSE} then the answer is a matrix. If \code{TRUE} then the answer is a vector. } % \item{matrix.arg}{ % Logical. Return the answer as a matrix? % If \code{FALSE} then it will be a vector. % } \item{type.fitted}{ Character. Some \pkg{VGAM} family functions have a \code{type.fitted} argument. If so then a different type of fitted value can be returned. It is recomputed from the model after convergence. Note: this is an experimental feature and not all \pkg{VGAM} family functions have this implemented yet. See \code{\link{CommonVGAMffArguments}} for more details. } \item{percentiles}{ See \code{\link{CommonVGAMffArguments}} for details. } \item{\dots}{ Currently unused. } } \details{ The ``fitted values'' usually correspond to the mean response; however, because the \pkg{VGAM} package fits so many models, this sometimes refers to quantities such as quantiles. The mean may not even exist, e.g., for a Cauchy distribution. Note that the fitted value is output from the \code{@linkinv} slot of the \pkg{VGAM} family function, where the \code{eta} argument is the \eqn{n \times M}{n x M} matrix of linear predictors. } \value{ The fitted values evaluated at the final IRLS iteration. } \references{ Chambers, J. M. and T. J. Hastie (eds) (1992) \emph{Statistical Models in S}. Wadsworth & Brooks/Cole. } \author{ Thomas W. Yee } \note{ This function is one of several extractor functions for the \pkg{VGAM} package. Others include \code{coef}, \code{deviance}, \code{weights} and \code{constraints} etc. This function is equivalent to the methods function for the generic function \code{fitted.values}. If \code{fit} is a VLM or VGLM then \code{fitted(fit)} and \code{predict(fit, type = "response")} should be equivalent (see \code{\link{predictvglm}}).
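(A quick added check, assuming some fitted \code{"vglm"} object \code{fit} exists: \code{max(abs(fitted(fit) - predict(fit, type = "response")))} should be effectively zero.)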
The latter has the advantage in that it handles a \code{newdata} argument so that the fitted values can be computed for a different data set. } \seealso{ \code{\link[stats]{fitted}}, \code{\link{predictvglm}}, \code{\link{vglmff-class}}. } \examples{ # Categorical regression example 1 pneumo <- transform(pneumo, let = log(exposure.time)) (fit1 <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)) fitted(fit1) # LMS quantile regression example 2 fit2 <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), data = bmi.nz, trace = TRUE) head(predict(fit2, type = "response")) # Equal to the following two: head(fitted(fit2)) predict(fit2, type = "response", newdata = head(bmi.nz)) # Zero-inflated example 3 zdata <- data.frame(x2 = runif(nn <- 1000)) zdata <- transform(zdata, pstr0.3 = logit(-0.5, inverse = TRUE), lambda.3 = loge(-0.5 + 2*x2, inverse = TRUE)) zdata <- transform(zdata, y1 = rzipois(nn, lambda = lambda.3, pstr0 = pstr0.3)) fit3 <- vglm(y1 ~ x2, zipoisson(zero = NULL), data = zdata, trace = TRUE) head(fitted(fit3, type.fitted = "mean" )) # E(Y), which is the default head(fitted(fit3, type.fitted = "pobs0")) # P(Y = 0) head(fitted(fit3, type.fitted = "pstr0")) # Prob of a structural 0 head(fitted(fit3, type.fitted = "onempstr0")) # 1 - prob of a structural 0 } \keyword{models} \keyword{regression} VGAM/man/latvar.Rd0000644000176200001440000000433713135276753013362 0ustar liggesusers\name{latvar} \alias{lv} \alias{latvar} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Latent Variables } \description{ Generic function for the \emph{latent variables} of a model. } \usage{ latvar(object, ...) lv(object, ...) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{object}{ An object for which the extraction of latent variables is meaningful. } \item{\dots}{ Other arguments fed into the specific methods function of the model. Sometimes they are fed into the methods function for \code{\link{Coef}}. } } \details{ Latent variables occur in reduced-rank regression models, as well as in quadratic and additive ordination models. For the latter two, latent variable values are often called \emph{site scores} by ecologists. Latent variables are linear combinations of the explanatory variables. } \value{ The value returned depends specifically on the methods function invoked. } \references{ Yee, T. W. and Hastie, T. J. (2003) Reduced-rank vector generalized linear models. \emph{Statistical Modelling}, \bold{3}, 15--41. Yee, T. W. (2004) A new technique for maximum-likelihood canonical Gaussian ordination. \emph{Ecological Monographs}, \bold{74}, 685--701. Yee, T. W. (2006) Constrained additive ordination. \emph{Ecology}, \bold{87}, 203--213. } \author{ Thomas W. Yee } \section{Warning}{ \code{\link{latvar}} and \code{\link{lv}} are identical, but the latter will be deprecated soon. Latent variables are not really applicable to \code{\link{vglm}}/\code{\link{vgam}} models. } \seealso{ \code{latvar.qrrvglm}, \code{latvar.rrvglm}, \code{latvar.cao}, \code{\link{lvplot}}.
} \examples{ \dontrun{ hspider[, 1:6] <- scale(hspider[, 1:6]) # Standardized environmental vars set.seed(123) p1 <- cao(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, family = poissonff, data = hspider, Rank = 1, df1.nl = c(Zoraspin = 2.5, 3), Bestof = 3, Crow1positive = TRUE) var(latvar(p1)) # Scaled to unit variance c(latvar(p1)) # Estimated site scores } } \keyword{models} \keyword{regression} VGAM/man/rhobit.Rd0000644000176200001440000000552713135276753013360 0ustar liggesusers\name{rhobit} \alias{rhobit} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Rhobit Link Function } \description{ Computes the rhobit link transformation, including its inverse and the first two derivatives. } \usage{ rhobit(theta, bminvalue = NULL, bmaxvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{theta}{ Numeric or character. See below for further details. } \item{bminvalue, bmaxvalue}{ Optional boundary values, e.g., values of \code{theta} which are less than or equal to -1 can be replaced by \code{bminvalue} before computing the link function value. And values of \code{theta} which are greater than or equal to 1 can be replaced by \code{bmaxvalue} before computing the link function value. See \code{\link{Links}}. } \item{inverse, deriv, short, tag}{ Details at \code{\link{Links}}. } } \details{ The \code{rhobit} link function is commonly used for parameters that lie between \eqn{-1} and \eqn{1}. Numerical values of \code{theta} close to \eqn{-1} or \eqn{1} or out of range result in \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}. } \value{ For \code{deriv = 0}, the rhobit of \code{theta}, i.e., \code{log((1 + theta)/(1 - theta))} when \code{inverse = FALSE}, and if \code{inverse = TRUE} then \code{(exp(theta) - 1)/(exp(theta) + 1)}. For \code{deriv = 1}, then the function returns \emph{d} \code{eta} / \emph{d} \code{theta} as a function of \code{theta} if \code{inverse = FALSE}, else if \code{inverse = TRUE} then it returns the reciprocal. } %\references{ %Documentation accompanying the \pkg{VGAM} package at %\url{http://www.stat.auckland.ac.nz/~yee} %contains further information and examples. % % %} \author{ Thomas W. Yee } \note{ Numerical instability may occur when \code{theta} is close to \eqn{-1} or \eqn{1}. One way of overcoming this is to use \code{bminvalue}, etc. The correlation parameter of a standard bivariate normal distribution lies between \eqn{-1} and \eqn{1}, therefore this function can be used for modelling this parameter as a function of explanatory variables. The link function \code{rhobit} is very similar to \code{\link{fisherz}}, e.g., just twice the value of \code{\link{fisherz}}. } \seealso{ \code{\link{Links}}, \code{\link{binom2.rho}}, \code{\link{fisherz}}.
} \examples{ theta <- seq(-0.99, 0.99, by = 0.01) y <- rhobit(theta) \dontrun{ plot(theta, y, type = "l", las = 1, ylab = "", main = "rhobit(theta)") abline(v = 0, h = 0, lty = 2) } x <- c(seq(-1.02, -0.98, by = 0.01), seq(0.97, 1.02, by = 0.01)) rhobit(x) # Has NAs rhobit(x, bminvalue = -1 + .Machine$double.eps, bmaxvalue = 1 - .Machine$double.eps) # Has no NAs } \keyword{math} \keyword{models} \keyword{regression} VGAM/man/oazetaUC.Rd0000644000176200001440000000407013135276753013576 0ustar liggesusers\name{Oazeta} \alias{Oazeta} \alias{doazeta} \alias{poazeta} \alias{qoazeta} \alias{roazeta} %- Also NEED an '\alias' for EACH other topic documented here. \title{ One-Altered Zeta Distribution } \description{ Density, distribution function, quantile function and random generation for the one-altered zeta distribution with parameter \code{pobs1}. } \usage{ doazeta(x, shape, pobs1 = 0, log = FALSE) poazeta(q, shape, pobs1 = 0) qoazeta(p, shape, pobs1 = 0) roazeta(n, shape, pobs1 = 0) } %- maybe also 'usage' for other objects documented here. \arguments{ \item{x, q, n, p}{ Same as \code{\link[stats:Uniform]{Unif}}.} \item{shape, log}{ Same as \code{\link{Otzeta}}. } \item{pobs1}{ Probability of (an observed) one, called \eqn{pobs1}. The default value of \code{pobs1 = 0} corresponds to the response having a 1-truncated zeta distribution. } } \details{ The probability function of \eqn{Y} is 1 with probability \code{pobs1}, else a 1-truncated zeta distribution. } \value{ \code{doazeta} gives the density and \code{poazeta} gives the distribution function, \code{qoazeta} gives the quantile function, and \code{roazeta} generates random deviates. } %\references{ } \author{ T. W. Yee } \note{ The argument \code{pobs1} is recycled to the required length, and must have values which lie in the interval \eqn{[0,1]}. } \seealso{ \code{\link{oazeta}}, \code{\link{Oizeta}}, \code{\link{Otzeta}}, \code{\link{zeta}}. } \examples{ shape <- 1.1; pobs1 <- 0.10; x <- (-1):7 doazeta(x, shape = shape, pobs1 = pobs1) table(roazeta(100, shape = shape, pobs1 = pobs1)) \dontrun{ x <- 0:10 barplot(rbind(doazeta(x, shape = shape, pobs1 = pobs1), dzeta(x, shape = shape)), beside = TRUE, col = c("blue", "orange"), cex.main = 0.7, las = 1, ylab = "Probability", names.arg = as.character(x), main = paste("OAZ(shape = ", shape, ", pobs1 = ", pobs1, ") [blue] vs", " zeta(shape = ", shape, ") [orange] densities", sep = "")) } } \keyword{distribution} VGAM/man/AR1.Rd0000644000176200001440000002346113135276753012453 0ustar liggesusers\name{AR1} \alias{AR1} % \alias{AR1.control} %- Also NEED an '\alias' for EACH other topic documented here. \title{ Autoregressive Process with Order-1 Family Function } \description{ Maximum likelihood estimation of the three-parameter AR-1 model. } \usage{ AR1(ldrift = "identitylink", lsd = "loge", lvar = "loge", lrho = "rhobit", idrift = NULL, isd = NULL, ivar = NULL, irho = NULL, imethod = 1, ishrinkage = 0.95, type.likelihood = c("exact", "conditional"), type.EIM = c("exact", "approximate"), var.arg = FALSE, nodrift = FALSE, print.EIM = FALSE, zero = c(if (var.arg) "var" else "sd", "rho")) } % zero = c(-2, -3) % AR1.control(epsilon = 1e-6, maxit = 30, stepsize = 1, ...) % deviance.arg = FALSE, %- maybe also 'usage' for other objects documented here. \arguments{ \item{ldrift, lsd, lvar, lrho}{ Link functions applied to the scaled mean, standard deviation or variance, and correlation parameters. The parameter \code{drift} is known as the \emph{drift}, and it is a scaled mean.
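(Specifically, under the model given in \bold{Details} below, the drift equals \eqn{\mu (1-\rho)}{mu * (1 - rho)}, where \eqn{\mu}{mu} is the stationary mean of the series.)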
See \code{\link{Links}} for more choices. } \item{idrift, isd, ivar, irho}{ Optional initial values for the parameters. If failure to converge occurs then try different values and monitor convergence by using \code{trace = TRUE}. For an \eqn{S}-column response, these arguments can be of length \eqn{S}, and they are recycled by the columns first. A value \code{NULL} means an initial value for each response is computed internally. } \item{ishrinkage, imethod, zero}{ See \code{\link{CommonVGAMffArguments}} for more information. The default for \code{zero} assumes there is a drift parameter to be estimated (the default for that argument), so if a drift parameter is suppressed and there are covariates, then \code{zero} will need to be assigned the value 1 or 2 or \code{NULL}. } \item{var.arg}{ Same meaning as \code{\link{uninormal}}. } \item{nodrift}{ Logical, for determining whether to estimate the drift parameter. The default is to estimate it. If \code{TRUE}, the drift parameter is set to 0 and not estimated. } \item{type.EIM}{ What type of expected information matrix (EIM) is used in Fisher scoring. By default, this family function calls \code{\link[VGAM:AR1EIM]{AR1EIM}}, which recursively computes the exact EIM for the AR process with Gaussian white noise. See Porat and Friedlander (1986) for further details on the exact EIM. If \code{type.EIM = "approximate"} then an approximate expression for the EIM of autoregressive processes is used; this approach holds when the number of observations is large enough. Succinct details about the approximate EIM are given in Porat and Friedlander (1987). } \item{print.EIM}{ Logical. If \code{TRUE}, then the first few EIMs are printed. Here, the result shown is the sum of each EIM. } \item{type.likelihood}{ What type of likelihood function is maximized. The first choice (default) is the sum of the marginal likelihood and the conditional likelihood. Choosing the conditional likelihood means that the first observation is effectively ignored (this is handled internally by setting the value of the first prior weight to be some small positive number, e.g., \code{1.0e-6}). See the note below. } % \item{epsilon, maxit, stepsize,...}{ % Same as \code{\link[VGAM:vglm.control]{vglm.control}}. % % % } } \details{ The AR-1 model implemented here has \deqn{Y_1 \sim N(\mu, \sigma^2 / (1-\rho^2)), }{% Y(1) ~ N(mu, sigma^2 / (1-rho^2)), } and \deqn{Y_i = \mu^* + \rho Y_{i-1} + e_i, }{% Y(i) = mu^* + rho * Y(i-1) + e(i) } where the \eqn{e_i}{e(i)} are i.i.d. Normal(0, sd = \eqn{\sigma}{sigma}) random variates. Here are a few notes: (1). A test for weak stationarity might be to verify whether \eqn{1/\rho}{1/rho} lies outside the unit circle. (2). The mean of all the \eqn{Y_i}{Y(i)} is \eqn{\mu^* /(1-\rho)}{mu^* / (1-rho)} and these are returned as the fitted values. (3). The correlation of all the \eqn{Y_i}{Y(i)} with \eqn{Y_{i-1}}{Y(i-1)} is \eqn{\rho}{rho}. (4). The default link function ensures that \eqn{-1 < \rho < 1}{-1 < rho < 1}. % (1). ... whether \eqn{\mu^*}{mu^*} is intercept-only. } \section{Warning}{ Monitoring convergence is urged, i.e., set \code{trace = TRUE}. Moreover, if the exact EIMs are used, set \code{print.EIM = TRUE} to compare the computed exact to the approximate EIM. Under the VGLM/VGAM approach, parameters can be modelled in terms of covariates. Particularly, if the standard deviation of the white noise is modelled in this way, then \code{type.EIM = "exact"} may lead to unstable results.
The reason is that white noise is a stationary process, and consequently, its variance must remain constant. Hence, the use of covariates to model this parameter contradicts the assumption of stationary random components to compute the exact EIMs proposed by Porat and Friedlander (1987). To prevent convergence issues in such cases, this family function internally verifies whether the variance of the white noise remains constant at each Fisher scoring iteration. If this assumption is violated and \code{type.EIM = "exact"} is set, then \code{AR1} automatically shifts to \code{type.EIM = "approximate"}. A warning is also displayed accordingly. %Thus, if modelling the standard deviation of the white noise %is required, the use of \code{type.EIM = "approximate"} is %highly recommended. %Yet to do: add an argument that allows the scaled mean parameter %to be deleted, i.e, a 2-parameter model is fitted. %Yet to do: \code{ARff(p.lag = 1)} should hopefully be written soon. } \value{ An object of class \code{"vglmff"} (see \code{\link{vglmff-class}}). The object is used by modelling functions such as \code{\link{vglm}}, and \code{\link{vgam}}. } \references{ Porat, B. and Friedlander, B. (1987) The Exact Cramer-Rao Bound for Gaussian Autoregressive Processes. \emph{IEEE Transactions on Aerospace and Electronic Systems}, \bold{AES-23(4)}, 537--542. } \author{ Victor Miranda (exact method) and Thomas W. Yee (approximate method).} \note{ %For \code{type.likelihood = "conditional"}, %the prior weight for the first observation is set to %some small positive number, which has the effect of deleting %that observation. %However, \eqn{n} is still the original \eqn{n} so that %statistics such as the residual degrees of freedom are %unchanged (uncorrected possibly). Multiple responses are handled. The mean is returned as the fitted values. % Argument \code{zero} can be either a numeric or a character % vector. It must specify the position(s) or name(s) of the % parameters to be modeled as intercept-only. If names are used, % notice that parameter names in this family function are % \deqn{c("drift", "var" or "sd", "rho").} %Practical experience has shown that half-stepping is a very %good idea. The default options use step sizes that are %about one third the usual step size. Consequently, %\code{maxit} is increased to about 100, by default. } \seealso{ \code{\link{AR1EIM}}, \code{\link{vglm.control}}, \code{\link{dAR1}}, \code{\link[stats]{arima.sim}}. } \examples{ ### Example 1: using arima.sim() to generate a 0-mean stationary time series. nn <- 500 tsdata <- data.frame(x2 = runif(nn)) ar.coef.1 <- rhobit(-1.55, inverse = TRUE) # Approx -0.65 ar.coef.2 <- rhobit( 1.0, inverse = TRUE) # Approx 0.50 set.seed(1) tsdata <- transform(tsdata, index = 1:nn, TS1 = arima.sim(nn, model = list(ar = ar.coef.1), sd = exp(1.5)), TS2 = arima.sim(nn, model = list(ar = ar.coef.2), sd = exp(1.0 + 1.5 * x2))) ### An autoregressive intercept--only model. ### ### Using the exact EIM, and "nodrift = TRUE" ### fit1a <- vglm(TS1 ~ 1, data = tsdata, trace = TRUE, AR1(var.arg = FALSE, nodrift = TRUE, type.EIM = "exact", print.EIM = FALSE), crit = "coefficients") Coef(fit1a) summary(fit1a) \dontrun{ ### Two responses. Here, the white noise standard deviation of TS2 ### ### is modelled in terms of 'x2'. Also, 'type.EIM = exact'.
\dontrun{
### Two responses. Here, the white noise standard deviation of TS2  ###
### is modelled in terms of 'x2'. Also, 'type.EIM = exact'.          ###
fit1b <- vglm(cbind(TS1, TS2) ~ x2,
              AR1(zero = NULL, nodrift = TRUE,
                  var.arg = FALSE,
                  type.EIM = "exact"),
              constraints = list("(Intercept)" = diag(4),
                                 "x2" = rbind(0, 0, 1, 0)),
              data = tsdata, trace = TRUE, crit = "coefficients")
coef(fit1b, matrix = TRUE)
summary(fit1b)

### Example 2: another stationary time series
nn <- 500
my.rho <- rhobit(1.0, inverse = TRUE)
my.mu  <- 1.0
my.sd  <- exp(1)
tsdata <- data.frame(index = 1:nn, TS3 = runif(nn))

set.seed(2)
for (ii in 2:nn)
  tsdata$TS3[ii] <- my.mu/(1 - my.rho) +
                    my.rho * tsdata$TS3[ii-1] + rnorm(1, sd = my.sd)
tsdata <- tsdata[-(1:ceiling(nn/5)), ]  # Remove the burn-in data

### Fitting an AR(1). The exact EIMs are used.
fit2a <- vglm(TS3 ~ 1, AR1(type.likelihood = "exact",  # or "conditional"
                           type.EIM = "exact"),
              data = tsdata, trace = TRUE, crit = "coefficients")

Coef(fit2a)
summary(fit2a)        # SEs are useful to know

Coef(fit2a)["rho"]    # Estimate of rho, for intercept-only models
my.rho                # The 'truth' (rho)

Coef(fit2a)["drift"]  # Estimate of drift, for intercept-only models
my.mu / (1 - my.rho)  # The 'truth' (drift)
}
}
\keyword{models}
\keyword{regression}

%fit1a <- vglm(cbind(TS1, TS2) ~ x2, AR1(zero = c(1:4, 6)),
%              data = tsdata, trace = TRUE)

%head(weights(fit2a, type = "prior"))    # First one is effectively deleted
%head(weights(fit2a, type = "working"))  # Ditto

VGAM/man/zabinomUC.Rd0000644000176200001440000000461013135276753013752 0ustar liggesusers\name{Zabinom}
\alias{Zabinom}
\alias{dzabinom}
\alias{pzabinom}
\alias{qzabinom}
\alias{rzabinom}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Zero-Altered Binomial Distribution }
\description{
  Density, distribution function, quantile function and random
  generation for the zero-altered binomial distribution with
  parameter \code{pobs0}.

}
\usage{
dzabinom(x, size, prob, pobs0 = 0, log = FALSE)
pzabinom(q, size, prob, pobs0 = 0)
qzabinom(p, size, prob, pobs0 = 0)
rzabinom(n, size, prob, pobs0 = 0)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{x, q}{vector of quantiles.}
  \item{p}{vector of probabilities.}
  \item{n}{number of observations.
  If \code{length(n) > 1} then the length is taken to be the
  number required. }
  \item{size, prob, log}{
  Parameters from the ordinary binomial distribution
  (see \code{\link[stats:Binomial]{dbinom}}).

  }
  \item{pobs0}{
  Probability of (an observed) zero, called \eqn{pobs0}.
  The default value of \code{pobs0 = 0} corresponds to the response
  having a positive binomial distribution.

  }
}
\details{
  The probability mass function of \eqn{Y} places probability
  \code{pobs0} at zero; otherwise \eqn{Y} follows a
  positive-binomial(\code{size}, \code{prob}) distribution.

}
\value{
  \code{dzabinom} gives the density,
  \code{pzabinom} gives the distribution function,
  \code{qzabinom} gives the quantile function, and
  \code{rzabinom} generates random deviates.

}
%\references{ }
\author{ T. W. Yee }
\note{
  The argument \code{pobs0} is recycled to the required length,
  and must have values which lie in the interval \eqn{[0,1]}.

}
\seealso{
  \code{\link{zibinomial}},
  \code{\link{rposbinom}}.

% \code{\link{zabinomial}},

}
\examples{
size <- 10; prob <- 0.15; pobs0 <- 0.05; x <- (-1):7
dzabinom(x, size = size, prob = prob, pobs0 = pobs0)
table(rzabinom(100, size = size, prob = prob, pobs0 = pobs0))
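### Two checks implied by the definition above (a minimal sketch):
dzabinom(0, size = size, prob = prob, pobs0 = pobs0)  # Equals pobs0
sum(dzabinom(0:size, size = size, prob = prob, pobs0 = pobs0))  # Equals 1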
\dontrun{ x <- 0:10
barplot(rbind(dzabinom(x, size = size, prob = prob, pobs0 = pobs0),
              dbinom(x, size = size, prob = prob)),
        beside = TRUE, col = c("blue", "orange"), cex.main = 0.7, las = 1,
        ylab = "Probability", names.arg = as.character(x),
        main = paste("ZAB(size = ", size, ", prob = ", prob, ", pobs0 = ",
                     pobs0, ") [blue] vs",
                     " Binom(size = ", size, ", prob = ", prob,
                     ") [orange] densities", sep = "")) }
}
\keyword{distribution}
VGAM/man/skewnormUC.Rd0000644000176200001440000000463613135276753014160 0ustar liggesusers\name{skewnorm}
\alias{skewnorm}
\alias{dskewnorm}
%\alias{pskewnorm}
%\alias{qskewnorm}
\alias{rskewnorm}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Skew-Normal Distribution }
\description{
  Density and random generation
  for the univariate skew-normal distribution.

% , distribution function, quantile function and

}
\usage{
dskewnorm(x, location = 0, scale = 1, shape = 0, log = FALSE)
rskewnorm(n, location = 0, scale = 1, shape = 0)
}
%pskewnorm(q, lambda)
%qskewnorm(p, lambda)
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{x}{vector of quantiles.}
% \item{x, q}{vector of quantiles.}
% \item{p}{vector of probabilities.}
  \item{n}{number of observations.
  Same as in \code{\link[stats]{runif}}.

  }
  \item{location}{
  The location parameter \eqn{\xi}{xi}. A vector.

  }
  \item{scale}{
  The scale parameter \eqn{\omega}{w}. A positive vector.

  }
  \item{shape}{
  The shape parameter. It is called \eqn{\alpha}{alpha} in
  \code{\link{skewnormal}}.

  }
  \item{log}{
  Logical.
  If \code{log = TRUE} then the logarithm of the density is returned.

  }
}
\details{
  See \code{\link{skewnormal}}, which currently only estimates the
  shape parameter.
  More generally here, \eqn{Z = \xi + \omega Y}{Z = xi + w * Y}
  where \eqn{Y} has a standard skew-normal distribution
  (see \code{\link{skewnormal}}),
  \eqn{\xi}{xi} is the location parameter and
  \eqn{\omega}{w} is the scale parameter.

}
\value{
  \code{dskewnorm} gives the density,
  \code{rskewnorm} generates random deviates.

% \code{pskewnorm} gives the distribution function,
% \code{qskewnorm} gives the quantile function, and

}
\references{
  \code{http://tango.stat.unipd.it/SN}.

% \url{http://tango.stat.unipd.it/SN}.

}
\author{ T. W. Yee }
\note{
  The default values of all three parameters correspond to the
  skew-normal being the standard normal distribution.

}
\seealso{
  \code{\link{skewnormal}}.

}
\examples{
\dontrun{ N <- 200  # Grid resolution
shape <- 7; x <- seq(-4, 4, len = N)
plot(x, dskewnorm(x, shape = shape), type = "l", col = "blue", las = 1,
     ylab = "", lty = 1, lwd = 2)
abline(v = 0, h = 0, col = "grey")
lines(x, dnorm(x), col = "orange", lty = 2, lwd = 2)
legend("topleft", leg = c(paste("Blue = dskewnorm(x, ", shape,")", sep = ""),
       "Orange = standard normal density"), lty = 1:2, lwd = 2,
       col = c("blue", "orange")) }
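### A location--scale identity implied by Z = xi + w * Y above
### (a minimal sketch; the parameter values are arbitrary):
### dskewnorm(x, xi, w, alpha) equals dskewnorm((x - xi)/w, shape = alpha)/w.
xx <- seq(-2, 5, by = 0.5)
max(abs(dskewnorm(xx, location = 1, scale = 2, shape = 3) -
        dskewnorm((xx - 1)/2, shape = 3)/2))  # Should be ~ 0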
}
\keyword{distribution}
VGAM/man/calibrate.Rd0000644000176200001440000000535013135276753014013 0ustar liggesusers\name{calibrate}
\alias{calibrate}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Model Calibrations }
\description{
  \code{calibrate} is a generic function used to produce calibrations
  from various model fitting functions. The function invokes
  particular `methods' which depend on the `class' of the first
  argument.

}
\usage{
calibrate(object, ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{object}{ An object for which a calibration is desired. }
  \item{\dots}{
  Additional arguments affecting the calibration produced.
  Usually the most important argument in \code{\dots} is
  \code{newdata} which, for \code{calibrate}, contains new
  \emph{response} data, \bold{Y}, say.

  }
}
\details{
  Given a regression model with explanatory variables \bold{X} and
  response \bold{Y}, calibration involves estimating \bold{X} from
  \bold{Y} using the regression model.
  It can be loosely thought of as the opposite of
  \code{\link{predict}} (which takes an \bold{X} and returns a
  \bold{Y}).

}
\value{
  In general, given a new response \bold{Y}, the explanatory
  variables \bold{X} are returned.
  However, for constrained ordination models such as CQO and CAO
  models, it is usually not possible to return \bold{X}, so the
  latent variables are returned instead (they are linear combinations
  of the \bold{X}). See the specific \code{calibrate} methods
  functions to see what they return.

}
%\references{
%}
\author{ T. W. Yee }
\note{
  This function was not called \code{predictx} because of the
  inability of constrained ordination models to return \bold{X};
  they can only return the latent variable values (site scores)
  instead.

}
\seealso{
  \code{\link{predict}},
  \code{\link{calibrate.rrvglm}},
  \code{\link{calibrate.qrrvglm}}.

}
\examples{
\dontrun{
hspider[, 1:6] <- scale(hspider[, 1:6])  # Stdzed environmental vars
set.seed(123)
p1 <- cao(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull, Zoraspin) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          family = poissonff, data = hspider, Rank = 1,
          df1.nl = c(Zoraspin = 2, 1.9),
          Bestof = 3, Crow1positive = TRUE)

siteNos <- 1:2  # Calibrate these sites
cp1 <- calibrate(p1, trace = TRUE,
                 new = data.frame(depvar(p1)[siteNos, ],
                                  model.matrix(p1)[siteNos, ]))

# Graphically compare the actual site scores with their calibrated values
persp(p1, main = "Solid=actual, dashed=calibrated site scores",
      label = TRUE, col = "blue", las = 1)
# Actual site scores:
abline(v = latvar(p1)[siteNos], lty = 1, col = 1:length(siteNos))
abline(v = cp1, lty = 2, col = 1:length(siteNos))  # Calibrated values
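### A numerical comparison of the actual and calibrated site scores
### (a sketch; 'p1', 'cp1' and 'siteNos' are defined above):
cbind(actual = latvar(p1)[siteNos], calibrated = cp1)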
}
}
\keyword{models}
\keyword{regression}
VGAM/man/lvplot.rrvglm.Rd0000644000176200001440000001642513135276753014712 0ustar liggesusers\name{lvplot.rrvglm}
\alias{lvplot.rrvglm}
\alias{biplot.rrvglm}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Latent Variable Plot for RR-VGLMs }
\description{
  Produces an \emph{ordination diagram} (also known as a \emph{biplot}
  or \emph{latent variable plot}) for \emph{reduced-rank vector
  generalized linear models} (RR-VGLMs). For rank-2 models only,
  the x- and y-axes are the first and second canonical axes
  respectively.

}
\usage{
lvplot.rrvglm(object, A = TRUE, C = TRUE, scores = FALSE,
    show.plot = TRUE, groups = rep(1, n), gapC = sqrt(sum(par()$cxy^2)),
    scaleA = 1, xlab = "Latent Variable 1", ylab = "Latent Variable 2",
    Alabels = if (length(object@misc$predictors.names))
              object@misc$predictors.names else paste("LP", 1:M, sep = ""),
    Aadj = par()$adj, Acex = par()$cex, Acol = par()$col, Apch = NULL,
    Clabels = rownames(Cmat), Cadj = par()$adj, Ccex = par()$cex,
    Ccol = par()$col, Clty = par()$lty, Clwd = par()$lwd,
    chull.arg = FALSE, ccex = par()$cex, ccol = par()$col,
    clty = par()$lty, clwd = par()$lwd,
    spch = NULL, scex = par()$cex, scol = par()$col,
    slabels = rownames(x2mat), ...)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{object}{ Object of class \code{"rrvglm"}. }
  \item{A}{ Logical. Allow the plotting of \bold{A}? }
  \item{C}{ Logical. Allow the plotting of \bold{C}?
  If \code{TRUE} then \bold{C} is represented by arrows emanating
  from the origin.

  }
  \item{scores}{ Logical. Allow the plotting of the \eqn{n} scores?
  The scores are the values of the latent variables for each
  observation.

  }
  \item{show.plot}{ Logical. Plot it? If \code{FALSE}, no plot is
  produced and the matrix of scores (\eqn{n} latent variable values)
  is returned; in that case, the rank of \code{object} need not be 2.

  }
  \item{groups}{ A vector whose distinct values indicate
  which group the observation belongs to. By default, all the
  observations belong to a single group. Useful for the multinomial
  logit model (see \code{\link{multinomial}}).}
  \item{gapC}{ The gap between the end of the arrow and the text
  labelling of \bold{C}, in latent variable units.}
  \item{scaleA}{ Numerical value that is multiplied by \bold{A},
  so that \bold{C} is divided by this value. }
  \item{xlab}{ Caption for the x-axis. See
  \code{\link[graphics]{par}}. }
  \item{ylab}{ Caption for the y-axis. See
  \code{\link[graphics]{par}}. }
  \item{Alabels}{ Character vector to label \bold{A}. Must be of
  length \eqn{M}. }
  \item{Aadj}{ Justification of text strings for labelling \bold{A}.
  See the \code{adj} argument of \code{\link[graphics]{par}}. }
  \item{Acex}{ Numeric. Character expansion of the labelling of
  \bold{A}. See the \code{cex} argument of
  \code{\link[graphics]{par}}. }
  \item{Acol}{ Line color of the labelling of \bold{A}. See the
  \code{col} argument of \code{\link[graphics]{par}}. }
  \item{Apch}{ Either an integer specifying a symbol or a single
  character to be used as the default in plotting points. See
  \code{\link[graphics]{par}}. The \code{pch} argument can be of
  length \eqn{M}, the number of species. }
  \item{Clabels}{ Character vector to label \bold{C}. Must be of
  length \eqn{p2}. }
  \item{Cadj}{ Justification of text strings for labelling \bold{C}.
  See the \code{adj} argument of \code{\link[graphics]{par}}. }
  \item{Ccex}{ Numeric. Character expansion of the labelling of
  \bold{C}. See the \code{cex} argument of
  \code{\link[graphics]{par}}. }
  \item{Ccol}{ Line color of the arrows representing \bold{C}.
  See the \code{col} argument of \code{\link[graphics]{par}}. }
  \item{Clty}{ Line type of the arrows representing \bold{C}.
  See the \code{lty} argument of \code{\link[graphics]{par}}. }
  \item{Clwd}{ Line width of the arrows representing \bold{C}.
  See the \code{lwd} argument of \code{\link[graphics]{par}}. }
  \item{chull.arg}{ Logical. Plot the convex hull of the scores?
  This is done for each group (see the \code{group} argument). }
  \item{ccex}{ Numeric. Character expansion of the labelling of the
  convex hull. See the \code{cex} argument of
  \code{\link[graphics]{par}}. }
  \item{ccol}{ Line color of the convex hull. See the \code{col}
  argument of \code{\link[graphics]{par}}. }
  \item{clty}{ Line type of the convex hull. See the \code{lty}
  argument of \code{\link[graphics]{par}}. }
  \item{clwd}{ Line width of the convex hull. See the \code{lwd}
  argument of \code{\link[graphics]{par}}. }
  \item{spch}{ Either an integer specifying a symbol or a single
  character to be used as the default in plotting points. See
  \code{\link[graphics]{par}}. The \code{spch} argument can be of
  length \eqn{M}, the number of species. }
  \item{scex}{ Numeric. Character expansion of the labelling of the
  scores. See the \code{cex} argument of \code{\link[graphics]{par}}.
  }
  \item{scol}{ Line color of the scores. See the \code{col}
  argument of \code{\link[graphics]{par}}. }
  \item{slabels}{ Character vector to label the scores. Must be of
  length \eqn{n}. }
  \item{\dots}{ Arguments passed into the \code{plot} function when
  setting up the entire plot. Useful arguments here include
  \code{xlim} and \code{ylim}. }
}
\details{
  For RR-VGLMs, a \emph{biplot} and a \emph{latent variable} plot
  coincide.
  In general, many of the arguments starting with ``A'' refer to
  \bold{A} (of length \eqn{M}), ``C'' to \bold{C} (of length
  \eqn{p2}), ``c'' to the convex hull (of length
  \code{length(unique(groups))}), and ``s'' to scores (of length
  \eqn{n}).

  As the result is a biplot, its interpretation is based on the
  inner product.

}
\value{
  The matrix of scores (\eqn{n} latent variable values) is returned
  regardless of whether a plot was produced or not.

}
\references{

Yee, T. W. and Hastie, T. J. (2003)
Reduced-rank vector generalized linear models.
\emph{Statistical Modelling},
\bold{3}, 15--41.

}
\author{ Thomas W. Yee }
\note{
%  Further work to be done: this function could be hooked up
%  to the normalization code of \code{\link{rrvglm}} to allow
%  uncorrelated latent variables etc.

  The functions \code{\link{lvplot.rrvglm}} and
  \code{\link{biplot.rrvglm}} are equivalent.

  In the example below the predictor variables are centered, which
  is a good idea.

}
\seealso{
  \code{\link{lvplot}},
  \code{\link[graphics]{par}},
  \code{\link{rrvglm}},
  \code{\link{Coef.rrvglm}},
  \code{\link{rrvglm.control}}.

}
\examples{
nn <- nrow(pneumo)  # x1, x2 and x3 are some unrelated covariates
pneumo <- transform(pneumo, slet = scale(log(exposure.time)),
                    x1 = rnorm(nn), x2 = rnorm(nn), x3 = rnorm(nn))
fit <- rrvglm(cbind(normal, mild, severe) ~ slet + x1 + x2 + x3,
              multinomial, data = pneumo, Rank = 2,
              Corner = FALSE, Uncorrel = TRUE)
\dontrun{
lvplot(fit, chull = TRUE, scores = TRUE, clty = 2, ccol = "blue",
       scol = "red", Ccol = "darkgreen", Clwd = 2, Ccex = 2,
       main = "Biplot of some fictional data")
}
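### A sketch of the inner-product interpretation mentioned in Details:
### the reduced-rank part of the coefficient matrix factors as C %*% t(A).
### This assumes the A and C slots of the object returned by Coef()
### (see Coef.rrvglm); 'cc' is a hypothetical name.
cc <- Coef(fit)
cc@C %*% t(cc@A)  # Rows: slet, x1, x2, x3; columns: the linear predictors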
}
\keyword{models}
\keyword{regression}
\keyword{graphs}

% pneumo$slet = scale(log(pneumo$exposure.time))
VGAM/man/nakagamiUC.Rd0000644000176200001440000000515213135276753014065 0ustar liggesusers\name{Nakagami}
\alias{Nakagami}
\alias{dnaka}
\alias{pnaka}
\alias{qnaka}
\alias{rnaka}
\title{Nakagami Distribution }
\description{
  Density, cumulative distribution function, quantile function and
  random generation for the Nakagami distribution.

}
\usage{
dnaka(x, scale = 1, shape, log = FALSE)
pnaka(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE)
qnaka(p, scale = 1, shape, ...)
rnaka(n, scale = 1, shape, Smallno = 1.0e-6)
}
\arguments{
  \item{x, q}{vector of quantiles.}
  \item{p}{vector of probabilities.}
  \item{n}{number of observations.
  Same as in \code{\link[stats]{runif}}.

% Must be a positive integer of length 1.

  }
  \item{scale, shape}{
  arguments for the parameters of the distribution.
  See \code{\link{nakagami}} for more details.
  For \code{rnaka}, arguments \code{shape} and \code{scale} must be
  of length 1.

  }
  \item{Smallno}{
  Numeric, a small value used by the rejection method for determining
  the upper limit of the distribution.
  That is, \code{pnaka(U) > 1-Smallno} where \code{U} is the upper
  limit.

  }
  \item{\ldots}{
  Arguments that can be passed into \code{\link[stats]{uniroot}}.

  }
  \item{log}{
  Logical.
  If \code{log = TRUE} then the logarithm of the density is returned.

  }
  \item{lower.tail, log.p}{
  Same meaning as in \code{\link[stats:Normal]{pnorm}}
  or \code{\link[stats:Normal]{qnorm}}.

  }
}
\value{
  \code{dnaka} gives the density,
  \code{pnaka} gives the cumulative distribution function,
  \code{qnaka} gives the quantile function, and
  \code{rnaka} generates random deviates.

}
\author{ T. W. Yee and Kai Huang }
\details{
  See \code{\link{nakagami}} for more details.

}
%\note{
%
%}
\seealso{
  \code{\link{nakagami}}.

}
\examples{
\dontrun{ x <- seq(0, 3.2, len = 200)
plot(x, dgamma(x, shape = 1), type = "n", col = "black", ylab = "",
     ylim = c(0, 1.5), main = "dnaka(x, shape = shape)")
lines(x, dnaka(x, shape = 1), col = "orange")
lines(x, dnaka(x, shape = 2), col = "blue")
lines(x, dnaka(x, shape = 3), col = "green")
legend(2, 1.0, col = c("orange", "blue", "green"), lty = rep(1, len = 3),
       legend = paste("shape =", c(1, 2, 3)))

plot(x, pnorm(x), type = "n", col = "black", ylab = "", ylim = 0:1,
     main = "pnaka(x, shape = shape)")
lines(x, pnaka(x, shape = 1), col = "orange")
lines(x, pnaka(x, shape = 2), col = "blue")
lines(x, pnaka(x, shape = 3), col = "green")
legend(2, 0.6, col = c("orange", "blue", "green"), lty = rep(1, len = 3),
       legend = paste("shape =", c(1, 2, 3))) }

probs <- seq(0.1, 0.9, by = 0.1)
pnaka(qnaka(p = probs, shape = 2), shape = 2) - probs  # Should be all 0
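### A further sketch: pnaka() agrees with numerical integration of
### dnaka() (shape = 2 and the default scale = 1 are arbitrary choices):
integrate(dnaka, lower = 0, upper = 1.5, shape = 2)$value -
  pnaka(1.5, shape = 2)  # Should be ~ 0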
}
\keyword{distribution}
VGAM/man/reciprocal.Rd0000644000176200001440000000412013135276753014202 0ustar liggesusers\name{reciprocal}
\alias{reciprocal}
\alias{negreciprocal}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Reciprocal Link Function }
\description{
  Computes the reciprocal transformation, including its inverse and
  the first two derivatives.

}
\usage{
reciprocal(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
           short = TRUE, tag = FALSE)
negreciprocal(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
              short = TRUE, tag = FALSE)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{theta}{
  Numeric or character. See below for further details.

  }
  \item{bvalue}{
  See \code{\link{Links}}.

  }
  \item{inverse, deriv, short, tag}{
  Details at \code{\link{Links}}.

  }
}
\details{
  The \code{reciprocal} link function is a special case of the power
  link function.
  Numerical values of \code{theta} close to 0 result in
  \code{Inf}, \code{-Inf}, \code{NA} or \code{NaN}.

  The \code{negreciprocal} link function computes the negative
  reciprocal, i.e., \eqn{-1/ \theta}{-1/theta}.

}
\value{
  For \code{reciprocal}:
  for \code{deriv = 0}, the reciprocal of \code{theta}, i.e.,
  \code{1/theta} when \code{inverse = FALSE};
  since the reciprocal is its own inverse, \code{1/theta} is also
  returned when \code{inverse = TRUE}.

  For \code{deriv = 1}, the function returns
  \emph{d} \code{theta} / \emph{d} \code{eta} as a function of
  \code{theta} if \code{inverse = FALSE},
  else if \code{inverse = TRUE} then it returns the reciprocal.

}
\references{
  McCullagh, P. and Nelder, J. A. (1989)
  \emph{Generalized Linear Models}, 2nd ed. London: Chapman & Hall.

}
%\section{Warning}{
%}
\author{ Thomas W. Yee }
\note{
  Numerical instability may occur when \code{theta} is close to 0.

}
\seealso{
  \code{\link{identity}},
  \code{\link{powerlink}}.

}
\examples{
reciprocal(1:5)
reciprocal(1:5, inverse = TRUE, deriv = 2)
negreciprocal(1:5)
negreciprocal(1:5, inverse = TRUE, deriv = 2)

x <- (-3):3
reciprocal(x)  # Has Inf
reciprocal(x, bvalue = .Machine$double.eps)  # Has no Inf
}
\keyword{math}
\keyword{models}
\keyword{regression}
VGAM/man/pgamma.deriv.unscaled.Rd0000644000176200001440000000424213135276753016233 0ustar liggesusers\name{pgamma.deriv.unscaled}
\alias{pgamma.deriv.unscaled}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Derivatives of the Incomplete Gamma Integral (Unscaled Version) }
\description{
  The first two derivatives of the incomplete gamma integral without
  its gamma function scaling.

}
\usage{
pgamma.deriv.unscaled(q, shape)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{q, shape}{
  As in \code{\link[stats]{pgamma}} and \code{\link{pgamma.deriv}}
  but these must be vectors of positive values only and finite.

  }
}
\details{
  Define
  \deqn{G(x, a) = \int_0^x t^{a-1} e^{-t} dt}{%
        G(x, a) = integral_0^x t^(a-1) exp(-t) dt}
  so that \eqn{G(x, a)} is \code{pgamma(x, a) * gamma(a)}.
  Write \eqn{x = q} and \code{shape =} \eqn{a}.
  The 0th, first and second derivatives with respect to \eqn{a}
  of \eqn{G} are returned.
  This function is similar in spirit to \code{\link{pgamma.deriv}}
  but here there is no gamma function to scale things.
  Currently a 3-column matrix is returned (in the future this may
  change and an argument may be supplied so that only what is
  required by the user is computed).
  This function is based on Wingo (1989).

}
\value{
  The 3 columns, running from left to right, are the \code{0:2}th
  derivatives with respect to \eqn{a}.

}
\references{
  See \code{\link{truncweibull}}.

}
\author{ T. W. Yee. }
%\note{
% If convergence does not occur then try increasing the value of
% \code{tmax}.
%
%}
\section{Warning }{
  This function seems inaccurate for \code{q = 1} and \code{q = 2};
  see the plot below.

}
\seealso{
  \code{\link{pgamma.deriv}},
  \code{\link[stats]{pgamma}}.

}
\examples{
x <- 3; aa <- seq(0.3, 4, by = 0.01)
ans.u <- pgamma.deriv.unscaled(x, aa)
head(ans.u)
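### A sketch: the first column is G(x, a) itself, which in principle
### equals pgamma(x, a) * gamma(a), as noted in the Details above.
max(abs(ans.u[, 1] - pgamma(x, aa) * gamma(aa)))  # Should be ~ 0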
\dontrun{ par(mfrow = c(1, 3))
for (jay in 1:3) {
  plot(aa, ans.u[, jay], type = "l", col = "blue", cex.lab = 1.5,
       cex.axis = 1.5, las = 1, main = colnames(ans.u)[jay],
       log = "", xlab = "shape", ylab = "")
  abline(h = 0, v = 1:2, lty = "dashed", col = "gray")  # Inaccurate at 1 and 2
}
}
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{math}
VGAM/LICENCE.note0000755000176200001440000000421113135276753012752 0ustar liggesusersSoftware and datasets to support 'Vector Generalized Linear and
Additive Models: With an Implementation in R', first edition,
by T. W. Yee. Springer, 2015.

This file is intended to clarify ownership and copyright: where
possible individual files also carry brief copyright notices.
This file was adapted from the file of the same name from the
MASS bundle.

Copyrights
==========

Some slightly-modified FORTRAN subroutines from
http://pages.cs.wisc.edu/~deboor/pgs/
are used for the B-spline computations.

Some modified LINPACK subroutines appear in the files
./src/vlinpack?.f

Portions of the smoothing code called by vsmooth.spline() are based
on an adaptation of F. O'Sullivan's BART code.

Regarding file ./src/lerchphi.c, this program is copyright by
Sergej V. Aksenov (http://www.geocities.com/saksenov) and
Ulrich D. Jentschura (jentschura@physik.tu-dresden.de), 2002.
Version 1.00 (May 1, 2002)

R function pgamma.deriv() operates via a wrapper function to a
Fortran subroutine written by R. J. Moore. The subroutine was
modified to run using double precision. The original code came from
http://lib.stat.cmu.edu/apstat/187.

R functions expint(x), expexpint(x), expint.E1(x) operate via
wrapper functions to code downloaded from
http://www.netlib.org/specfun/ei

My understanding is that the dataset files VGAM/data/* and
VGAMdata/data/* are not copyright.

All other files are copyright (C) 1998-2015 T. W. Yee.

Licence
=======

This is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2 or 3 of the License (at your
option).

This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.

Files share/licenses/GPL-2 and share/licenses/GPL-3 in the R
(source or binary) distribution are copies of versions 2 and 3
of the 'GNU General Public License'.
These can also be viewed at https://www.r-project.org/Licenses/

t.yee@auckland.ac.nz