ipred/inst/NEWS

# $Id: CHANGES,v 1.48 2009/09/09 15:40:28 hothorn Exp $

0.9-6 (01.03.2017)
    register C routines

0.9-5 (28.07.2015)
    fix NAMESPACE

0.9-4 (20.02.2015)
    register predict.ipredknn

0.9-3 (20.12.2013)
    use trapezoid rule to compute integrated Brier score in sbrier

0.9-2 (02.09.2013)
    NAMESPACE issues, TH.data

0.9-0 (22.10.2012)
    Due to interface changes in rpart 3.1-55, the bagging function had
    to be rewritten. Results of previous version are not exactly
    reproducible.

0.8-13 (21.02.2012)
    import(survival)

0.8-12 (20.02.2012)
    use prodlim to compute censoring distributions in sbrier
    (makes a difference for tied survival times)
    GPL (>= 2) and no require in .onLoad

0.8-11 (08.02.2011)
    depends R >= 2.10

0.8-10 (02.02.2011)
    compress data files

0.8-9 (27.01.2011)
    fix nrow problem in sbrier, spotted by Phil Boonstra
    avoid partial matches of function arguments

0.8-8 (09.09.2009)
    documentation fixes

0.8-7 (27.03.2009)
    survival fixes

0.8-6 (28.07.2008)
    make R devel happy ($ is no longer allowed)

0.8-4 (09.10.2007)
    change maintainer

0.8-3 (29.06.2005)
    terms(formula, data) needs `data' argument (suggested by BDR).
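The 0.9-3 entry above can be made concrete. A hedged sketch of the trapezoid rule for an integrated Brier score (this is not the actual sbrier() source; 'times' and 'bs' are hypothetical example values, with bs[i] the Brier score evaluated at times[i]):

```r
## trapezoid rule over a grid of evaluation times: sum interval widths
## times the mean of the endpoint values, then scale by total follow-up
times <- c(0, 1, 2, 4, 7)
bs    <- c(0.25, 0.22, 0.20, 0.15, 0.10)
ibs <- sum(diff(times) * (head(bs, -1) + tail(bs, -1)) / 2) /
       diff(range(times))
ibs
```

Compared with a left-endpoint Riemann sum, the trapezoid rule averages the two endpoint values of each interval, which matters when evaluation times are unevenly spaced (e.g. at tied survival times).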
0.8-2 (09.12.2004)
    - slda: correct for one explanatory variable:
      ewp <- svd(solve(diag(diag(Snull), ncol = ncol(Snull)))%*%Snull)
                                         ^^^^^^^^^^^^^

0.8-1 (25.11.2004)
    - change #!/bin/bash -> #!/bin/sh

0.8-0 (02.06.2004)
    - correction of NAMESPACES

0.7-9 (13.05.2004)
    - description file, insert suggests: mvtnorm

0.7-8 (21.04.2004)
    - don't run selected examples and ipred-tests.R

0.7-7 (02.02.2004)
    - return predicted values for error estimations "boot" and
      "632plus" if required
    - optional argument determining which observations are included
      in each sample within 'errorest'
    - "boot" and "632plus" can be computed simultaneously

0.7-6 (16.01.2004)
    fix tests/ipred-segfault

0.7-5 (19.12.2003)
    examples of inbagg and predict.inbagg (don't use mvtnorm)

0.7-4 (16.12.2003)
    R-1.9.0 fixes

0.7-3 (03.11.2003)
    fix documentation bugs found by `codoc'

0.7-2 (29.10.2003)
    `rpart' is sensitive to compilers / optimization flags: the results
    we compare `ipred's tests with are produced with an optimized
    version of `rpart' (gcc -O2).
    `eigen' in `slda' replaced by `svd'

0.7-1 (08.08.2003)
    adapt to changes in R-devel and lda (package MASS)

0.7-0 (08.08.2003)
    add namespaces

0.6-15 (----)
    new argument "getmodels=TRUE" to cv: the returned object has an
    element "models", a list which contains the models for each fold.
    new interface for inclass and adding method inbagg.

0.6-14 (13.03.2003)
    clean up bagging.Rd

0.6-12 (12.03.2003)
    methods for "integer" for the generics "bagging", "cv" and
    "bootest"
    do not call methods to generics directly, since they may be hidden
    (because not exported: predict.lda)

0.6-11 (05.03.2003)
    632plus was wrong when the no-information error rate was less than
    the raw bootstrap estimator (eq. 29 was used instead of eq. 32 in
    Efron & Tibshirani, 1997). Thanks to Ramon Diaz for reporting.
    changed the RNGkind to RNGkind("Wichmann-Hill", "Kinderman-Ramage")
    or RNGversion("1.6.2") making the regression tests pass R CMD check
    with R-devel (1.7.0)
    ipred is allowed to import rpart.{anova, class, exp, poisson,
    matrix} from package rpart, thanks to BDR.

0.6-9 (25.02.2003)
    the terms attribute of data in errorest.data.frame may cause
    problems with some predict methods -> deleted

0.6-7 (17.02.2003)
    use a formula / data framework in cv and bootest. "model" now deals
    with the original variable names (and formula) instead of "y" and
    "X". "model" is now allowed to return a function with newdata
    argument for prediction. This is especially useful for estimating
    the error of both variable selection and model building
    simultaneously; the vignette gives a simple example.
    cv.numeric and bootest.numeric were broken and gave faulty
    estimates of MSE, both problems fixed.
    if the maximum of votes for any class is not unique, the class is
    chosen at random in predict.classbagg now. Formerly, the class with
    the lowest level was chosen by mistake.

0.6-6 (06.02.2003)
    fixes required by attached "methods" package

0.6-4 (18.12.2002)
    R CMD build problems

0.6-3 (03.12.2002)
    cv in errorest incorrectly used all observations for estimating the
    error, which led to over-optimistic results

0.6-2 (18.10.2002)
    documentation updates and copyright status added

0.6-1 (02.10.2002)
    documentation fixes

0.6-0 (27.09.2002)
    added vignette
    documentation updates

0.5-7 (23.09.2002)
    add internal functions irpart and predict.irpart for speeding up
    standard bagging
    use error.control for the specification of control parameters
    cv can be used to calculate an "honest" prediction for each
    observation

0.5-6 (12.09.2002)
    code factors in GBSG2 data as factors. Documentation update.
    Add keepX argument to ipredbagg

0.5-5 (10.09.2002)
    set rpart.control(..., xval=0) by default

0.5-4 (05.08.2002)
    added k-NN with formula interface and stabilized LDA

0.5-3 (01.08.2002)
    use rpart.control() for regression and survival
    new documentation for bagging and friends

0.5-2 (30.07.2002)
    new low-level functions cv and bootest for error rate estimators
    (misclassification, mse, brier score)

0.5-1 (25.07.2002)
    bagging code completely rewritten

0.4-6 (27.06.2002)
    out-of-bag error for regression trees fixed.

0.4-5 (17.06.2002)
    use "minsplit = 2" in `rpart.control' passed to `bagging'

0.4-4 (17.05.2002)
    use predict.lda in bagging and predict.bagging
    bagging(..., method="double") did not work for factors.

0.4-3 (07.05.2002)
    bugfix in bagging (in models with one regressor), changes in
    documentation errorest

0.4-2 (10.04.2002)
    predict.bagging much faster, OOB much faster

0.4-1 (08.04.2002)
    bugfix in print.inclass, predict.inclass

0.4-0 (26.03.2002)
    pre-release for CRAN/devel

ipred/inst/COPYRIGHTS

COPYRIGHT STATUS
----------------

The bulk of this code is
  Copyright (C) 2002-2012 Andrea Peters and Torsten Hothorn
except the code in
  .R/irpart.R
  .R/predict.irpart.R
which are modifications from the files rpart.s and predict.rpart.s
from package `rpart', version 3.1-8, which is
  Copyright (C) 2000 Mayo Foundation for Medical Education and Research
with modifications for R by Brian D. Ripley.

All code is subject to the GNU General Public License, Version 2.  See
the file COPYING for the exact conditions under which you may
redistribute it.
ipred/inst/doc/ipred-examples.R

### R code from vignette source 'ipred-examples.Rnw'

###################################################
### code chunk number 1: preliminaries
###################################################
options(prompt=">", width=50)
set.seed(210477)


###################################################
### code chunk number 2: bagging
###################################################
library("ipred")
library("rpart")
library("MASS")
data("GlaucomaM", package="TH.data")
gbag <- bagging(Class ~ ., data = GlaucomaM, coob=TRUE)


###################################################
### code chunk number 3: print-bagging
###################################################
print(gbag)


###################################################
### code chunk number 4: double-bagging
###################################################
scomb <- list(list(model=slda, predict=function(object, newdata)
                   predict(object, newdata)$x))
gbagc <- bagging(Class ~ ., data = GlaucomaM, comb=scomb)


###################################################
### code chunk number 5: predict.bagging
###################################################
predict(gbagc, newdata=GlaucomaM[c(1:3, 99:102), ])


###################################################
### code chunk number 6: indirect.formula
###################################################
data("GlaucomaMVF", package="ipred")
GlaucomaMVF <- GlaucomaMVF[,-63]
formula.indirect <- Class~clv + lora + cs ~ .
###################################################
### code chunk number 7: indirect.fit
###################################################
classify <- function (data) {
  attach(data)
  res <- ifelse((!is.na(clv) & !is.na(lora) & clv >= 5.1 &
                 lora >= 49.23372) |
                (!is.na(clv) & !is.na(lora) & !is.na(cs) & clv < 5.1 &
                 lora >= 58.55409 & cs < 1.405) |
                (is.na(clv) & !is.na(lora) & !is.na(cs) &
                 lora >= 58.55409 & cs < 1.405) |
                (!is.na(clv) & is.na(lora) & cs < 1.405), 0, 1)
  detach(data)
  factor(res, labels = c("glaucoma", "normal"))
}
fit <- inclass(formula.indirect, pFUN = list(list(model = lm)),
               cFUN = classify, data = GlaucomaMVF)


###################################################
### code chunk number 8: print.indirect
###################################################
print(fit)


###################################################
### code chunk number 9: predict.indirect
###################################################
predict(object = fit, newdata = GlaucomaMVF[c(1:3, 86:88),])


###################################################
### code chunk number 10: bagging.indirect
###################################################
mypredict.rpart <- function(object, newdata) {
  RES <- predict(object, newdata)
  RET <- rep(NA, nrow(newdata))
  NAMES <- rownames(newdata)
  RET[NAMES %in% names(RES)] <- RES[NAMES[NAMES %in% names(RES)]]
  RET
}
fit <- inbagg(formula.indirect, pFUN = list(list(model = rpart,
              predict = mypredict.rpart)), cFUN = classify,
              nbagg = 25, data = GlaucomaMVF)


###################################################
### code chunk number 11: plda
###################################################
mypredict.lda <- function(object, newdata){
  predict(object, newdata = newdata)$class
}


###################################################
### code chunk number 12: cvlda
###################################################
errorest(Class ~ ., data= GlaucomaM, model=lda, estimator = "cv",
         predict= mypredict.lda)


###################################################
### code chunk number 13: cvindirect
###################################################
errorest(formula.indirect, data = GlaucomaMVF, model = inclass,
         estimator = "632plus", pFUN = list(list(model = lm)),
         cFUN = classify)


###################################################
### code chunk number 14: varsel-def
###################################################
mymod <- function(formula, data, level=0.05) {
  # select all predictors that are associated with a
  # univariate t.test p-value of less than level
  sel <- which(lapply(data, function(x) {
    if (!is.numeric(x)) return(1) else
      return(t.test(x ~ data$Class)$p.value)
  }) < level)
  # make sure that the response is still there
  sel <- c(which(colnames(data) %in% "Class"), sel)
  # compute a LDA using the selected predictors only
  mod <- lda(formula, data=data[,sel])
  # and return a function for prediction
  function(newdata) {
    predict(mod, newdata=newdata[,sel])$class
  }
}


###################################################
### code chunk number 15: varsel-comp
###################################################
errorest(Class ~ . , data=GlaucomaM, model=mymod, estimator = "cv",
         est.para=control.errorest(k=5))

ipred/inst/doc/ipred-examples.Rnw

\documentclass[11pt]{article}
\usepackage[round]{natbib}
\usepackage{bibentry}
\usepackage{amsfonts}
\usepackage{hyperref}
\renewcommand{\baselinestretch}{1.3}
\newcommand{\ipred}{\texttt{ipred }}
%\VignetteIndexEntry{Some more or less useful examples for illustration.}
%\VignetteDepends{ipred}
%\textwidth=6.2in
%\VignetteDepends{mvtnorm,TH.data,rpart,MASS}

\begin{document}

\title{\ipred: Improved Predictors}
\date{}

\SweaveOpts{engine=R,eps=TRUE,pdf=TRUE}

<<preliminaries>>=
options(prompt=">", width=50)
set.seed(210477)
@

\maketitle

This short manual is heavily based on
\cite{Rnews:Peters+Hothorn+Lausen:2002} and needs some improvements.
\section{Introduction}

In classification problems, there are several attempts to create rules
which assign future observations to certain classes. Common methods
are, for example, linear discriminant analysis or classification trees.
Recent developments have led to substantial reductions of the
misclassification error in many applications. Bootstrap aggregation
\citep[``bagging'',][]{breiman:1996} combines classifiers trained on
bootstrap samples of the original data. Another approach is indirect
classification, which incorporates a priori knowledge into a
classification rule \citep{hand:2001}. Since the misclassification
error is a criterion to assess classification techniques, its
estimation is of main importance. A nearly unbiased but highly variable
estimator can be calculated by cross-validation. \cite{efron:1997}
discuss bootstrap estimates of the misclassification error. As a
by-product of bagging, \cite{out-of-bag:1996} proposes the out-of-bag
estimator. \\
However, the calculation of the desired classification models and their
misclassification errors is often aggravated by the different and
specialized interfaces of the various procedures. We propose the \ipred
package as a first attempt to create a unified interface for improved
predictors and various error rate estimators. In the following we
demonstrate the functionality of the package using the example of
glaucoma classification. We start with an overview of the disease and
the data and review the implemented classification and estimation
methods in the context of their application to glaucoma diagnosis.

\section{Glaucoma}

Glaucoma is a slowly progressing and irreversible disease that affects
the optic nerve head. It is the second most common cause of blindness
worldwide. Glaucoma is usually diagnosed based on a reduced visual
field, assessed by a medical examination of perimetry, and a smaller
number of intact nerve fibers at the optic nerve head.
One opportunity to examine the amount of intact nerve fibers is the
Heidelberg Retina Tomograph (HRT), a confocal laser scanning tomograph,
which performs a three-dimensional topographical analysis of the optic
nerve head morphology. It produces a series of $32$ images, each of
$256 \times 256$ pixels, which are converted to a single topographic
image. A less complex, though also less informative, examination tool
is $2$-dimensional fundus photography. However, in cooperation with
clinicians and a priori analysis we derived a diagnosis of glaucoma
based on three variables only: $w_{lora}$ represents the loss of nerve
fibers and is obtained by a $2$-dimensional fundus photography,
$w_{cs}$ and $w_{clv}$ describe the visual field defect
\citep{ifcs:2001}.

\begin{center}
\begin{figure}[h]
\begin{center}
{\small
\setlength{\unitlength}{0.6cm}
\begin{picture}(14.5,5)
\put(5, 4.5){\makebox(2, 0.5){$w_{clv}\geq 5.1$}}
\put(2.5, 3){\makebox(2, 0.5){$w_{lora}\geq 49.23$}}
\put(7.5, 3){\makebox(2, 0.5){$w_{lora} \geq 58.55$}}
\put(0, 1.5){\makebox(2, 0.5){$glaucoma$}}
\put(3.5, 1.5){\makebox(2, 0.5){$normal$}}
\put(6.5, 1.5){\makebox(2, 0.5){$w_{cs} < 1.405$}}
\put(10, 1.5){\makebox(2, 0.5){$normal$}}
\put(3.5, 0){\makebox(2, 0.5){$glaucoma$}}
\put(6.5, 0){\makebox(2, 0.5){$normal$}}
\put(6, 4.5){\vector(-3, -2){1.5}}
\put(6, 4.5){\vector(3, -2){1.5}}
\put(3.5, 3){\vector(3, -2){1.5}}
\put(3.5, 3){\vector(-3, -2){1.5}}
\put(8.5, 3){\vector(3, -2){1.5}}
\put(8.5, 3){\vector(-3, -2){1.5}}
\put(6.5, 1.5){\vector(3, -2){1.5}}
\put(6.5, 1.5){\vector(-3, -2){1.5}}
\end{picture}
}
\end{center}
\caption{Glaucoma diagnosis. \label{diag}}
\end{figure}
\end{center}

Figure \ref{diag} represents the diagnosis of glaucoma in terms of a
medical decision tree. A complication of the disease is that damage to
the optic nerve head morphology precedes a measurable visual field
defect.
Furthermore, early detection is of main importance, since an adequate
therapy can only slow down the progression of the disease. Hence, a
classification rule for detecting early damage should include
morphological information, rather than visual field data only.
Two example datasets are included in the package. The first one
contains measurements of the eye morphology only (\texttt{GlaucomaM}),
including $62$ variables for $196$ observations. The second dataset
(\texttt{GlaucomaMVF}) contains additional visual field measurements
for a different set of patients. In both example datasets, the
observations in the two groups are matched by age and sex to prevent
any bias.

\section{Bagging}

Referring to the example of glaucoma diagnosis we first demonstrate the
functionality of the \texttt{bagging} function. We fit
\texttt{nbagg = 25} (default) classification trees for bagging by
<<bagging>>=
library("ipred")
library("rpart")
library("MASS")
data("GlaucomaM", package="TH.data")
gbag <- bagging(Class ~ ., data = GlaucomaM, coob=TRUE)
@
where \texttt{GlaucomaM} contains explanatory HRT variables and the
response of glaucoma diagnosis (\texttt{Class}), a factor at two levels
\texttt{normal} and \texttt{glaucoma}. \texttt{print} returns
information about the returned object, i.e. the number of bootstrap
replications used and, as requested by \texttt{coob=TRUE}, the
out-of-bag estimate of the misclassification error
\citep{out-of-bag:1996}.
<<print-bagging>>=
print(gbag)
@
The out-of-bag estimate uses the observations which are left out in a
bootstrap sample to estimate the misclassification error at almost no
additional computational cost. \cite{double-bag:2002} propose to use
the out-of-bag samples for a combination of linear discriminant
analysis and classification trees, called ``Double-Bagging''.
For example, a combination of a stabilised linear discriminant analysis
with classification trees can be computed along the following lines
<<double-bagging>>=
scomb <- list(list(model=slda, predict=function(object, newdata)
                   predict(object, newdata)$x))
gbagc <- bagging(Class ~ ., data = GlaucomaM, comb=scomb)
@
\texttt{predict} predicts future observations according to the fitted
model.
<<predict.bagging>>=
predict(gbagc, newdata=GlaucomaM[c(1:3, 99:102), ])
@
Both \texttt{bagging} and \texttt{predict} rely on the \texttt{rpart}
routines. The \texttt{rpart} routine for each bootstrap sample can be
controlled in the usual way. By default \texttt{rpart.control} is used
with \texttt{minsplit=2} and \texttt{cp=0} and it is wise to turn
cross-validation off (\texttt{xval=0}). The function \texttt{prune} can
be used to prune each of the trees to an appropriate size.

\section{Indirect Classification}

Especially in a medical context it often occurs that a priori knowledge
about a classifying structure is given. For example, it might be known
that a disease is assessed on a subgroup of the given variables or,
moreover, that class memberships are assigned by a deterministically
known classifying function. \cite{hand:2001} proposes the framework of
indirect classification which incorporates this a priori knowledge into
a classification rule. In this framework we subdivide a given data set
into three groups of variables: those to be used for predicting the
class membership (explanatory), those to be used for defining the class
membership (intermediate) and the class membership variable itself
(response). For future observations, an indirect classifier predicts
values for the appointed intermediate variables based on explanatory
variables only. The observation is classified based on its predicted
intermediate variables and a fixed classifying function.
This indirect way of classification using the predicted intermediate
variables offers possibilities to incorporate a priori knowledge by the
subdivision of variables and by the construction of a fixed classifying
function. We apply indirect classification by using the function
\texttt{inclass}. Referring to the glaucoma example, explanatory
variables are HRT and anamnestic variables only, intermediate variables
are $w_{lora}, \, w_{cs}$ and $w_{clv}$. The response is the diagnosis
of glaucoma which is determined by a fixed classifying function and
therefore not included in the learning sample \texttt{GlaucomaMVF}. We
assign the given variables to explanatory and intermediate by
specifying the input formula.
<<indirect.formula>>=
data("GlaucomaMVF", package="ipred")
GlaucomaMVF <- GlaucomaMVF[,-63]
formula.indirect <- Class~clv + lora + cs ~ .
@
The variables on the left-hand side represent the intermediate
variables, modeled by the explanatory variables on the right-hand side.
Almost any modeling technique can be used to predict the intermediate
variables. We chose a linear model by
\texttt{pFUN = list(list(model = lm))}.
<<indirect.fit>>=
classify <- function (data) {
  attach(data)
  res <- ifelse((!is.na(clv) & !is.na(lora) & clv >= 5.1 &
                 lora >= 49.23372) |
                (!is.na(clv) & !is.na(lora) & !is.na(cs) & clv < 5.1 &
                 lora >= 58.55409 & cs < 1.405) |
                (is.na(clv) & !is.na(lora) & !is.na(cs) &
                 lora >= 58.55409 & cs < 1.405) |
                (!is.na(clv) & is.na(lora) & cs < 1.405), 0, 1)
  detach(data)
  factor(res, labels = c("glaucoma", "normal"))
}
fit <- inclass(formula.indirect, pFUN = list(list(model = lm)),
               cFUN = classify, data = GlaucomaMVF)
@
\texttt{print} displays the subdivision of variables and the chosen
modeling technique
<<print.indirect>>=
print(fit)
@
Furthermore, indirect classification predicts the intermediate
variables based on the explanatory variables and classifies them
according to a fixed classifying function in a second step; that means
a deterministically known function for the class membership has to be
specified. In our example this function is given in Figure \ref{diag}
and implemented in the function \texttt{classify}.\\
Prediction of future observations is now performed by
<<predict.indirect>>=
predict(object = fit, newdata = GlaucomaMVF[c(1:3, 86:88),])
@
We perform a bootstrap aggregated indirect classification approach by
choosing \texttt{pFUN = bagging} and specifying the number of bootstrap
samples \citep{ifcs:2001}. Regression or classification trees are
fitted for each bootstrap sample, with respect to the measurement scale
of the specified intermediate variables
<<bagging.indirect>>=
mypredict.rpart <- function(object, newdata) {
  RES <- predict(object, newdata)
  RET <- rep(NA, nrow(newdata))
  NAMES <- rownames(newdata)
  RET[NAMES %in% names(RES)] <- RES[NAMES[NAMES %in% names(RES)]]
  RET
}
fit <- inbagg(formula.indirect, pFUN = list(list(model = rpart,
              predict = mypredict.rpart)), cFUN = classify,
              nbagg = 25, data = GlaucomaMVF)
@
The call for the prediction of values remains unchanged.

\section{Error Rate Estimation}

Classification rules are usually assessed by their misclassification
rate.
Hence, error rate estimation is of main importance. The function
\texttt{errorest} implements a unified interface to several resampling
based estimators. Referring to the example, we apply a linear
discriminant analysis and specify the error rate estimator by
\texttt{estimator = "cv", "boot"} or \texttt{"632plus"}, respectively.
A 10-fold cross-validation is performed by choosing
\texttt{estimator = "cv"} and
\texttt{est.para = control.errorest(k = 10)}. The options
\texttt{estimator = "boot"} or \texttt{estimator = "632plus"} deliver a
bootstrap estimator and its bias corrected version {\sl .632+}
\citep[see][]{efron:1997}; we specify the number of bootstrap samples
to be drawn by \texttt{est.para = control.errorest(nboot = 50)}.
Further arguments are required to particularize the classification
technique. The argument \texttt{predict} represents the chosen
predictive function. For a unified interface \texttt{predict} has to be
based on the arguments \texttt{object} and \texttt{newdata} only;
therefore a wrapper function \texttt{mypredict} is necessary for
classifiers which require more than those arguments or do not return
the predicted classes by default. For a linear discriminant analysis
with \texttt{lda}, we need to specify
<<plda>>=
mypredict.lda <- function(object, newdata){
  predict(object, newdata = newdata)$class
}
@
and calculate a 10-fold cross-validated error rate estimator for a
linear discriminant analysis by calling
<<cvlda>>=
errorest(Class ~ ., data= GlaucomaM, model=lda, estimator = "cv",
         predict= mypredict.lda)
@
For the indirect approach the specification of the call becomes
slightly more complicated.
%Again for a unified interface a wrapper
%function has to be used, which incorporates the fixed classification rule
The bias corrected estimator {\sl .632+} is computed by
<<cvindirect>>=
errorest(formula.indirect, data = GlaucomaMVF, model = inclass,
         estimator = "632plus", pFUN = list(list(model = lm)),
         cFUN = classify)
@
Because of the subdivision of variables and a formula describing the
modeling between explanatory and intermediate variables only, the class
membership variable must be supplied as well. Hence, in contrast to the
function \texttt{inclass}, the data set \texttt{GlaucomaMVF} used in
\texttt{errorest} must contain explanatory, intermediate and response
variables.
Sometimes it may be necessary to reduce the number of predictors before
training a classifier. Estimating the error rate after the variable
selection leads to biased estimates of the misclassification error and
therefore one should estimate the error rate of the whole procedure.
Within the \texttt{errorest} framework, this can be done as follows.
First, we define a function which does both variable selection and
training of the classifier. For illustration purposes, we select the
predictors by comparing their univariate $P$-values of a two-sample
$t$-test with a prespecified level and train an LDA using the selected
variables only.
<<varsel-def>>=
mymod <- function(formula, data, level=0.05) {
  # select all predictors that are associated with a
  # univariate t.test p-value of less than level
  sel <- which(lapply(data, function(x) {
    if (!is.numeric(x)) return(1) else
      return(t.test(x ~ data$Class)$p.value)
  }) < level)
  # make sure that the response is still there
  sel <- c(which(colnames(data) %in% "Class"), sel)
  # compute a LDA using the selected predictors only
  mod <- lda(formula, data=data[,sel])
  # and return a function for prediction
  function(newdata) {
    predict(mod, newdata=newdata[,sel])$class
  }
}
@
Note that \texttt{mymod} does not return an object of class
\texttt{lda} but a function with argument \texttt{newdata} only.
Thanks to lexical scoping, this function is used for computing
predicted classes instead of a function \texttt{predict} passed to
\texttt{errorest} as argument. Computing a $5$-fold cross-validated
error rate estimator now is approximately a one-liner.
<<varsel-comp>>=
errorest(Class ~ . , data=GlaucomaM, model=mymod, estimator = "cv",
         est.para=control.errorest(k=5))
@
%%To summarize the performance of the different classification
%%techniques in the considered example of glaucoma diagnosis, the
%%10-fold cross-validated error estimator delivers the
%%results given in Table \ref{tenf}.
%%\begin{figure}
%%\begin{center}
%%\begin{tabular}{ rrr }
%%\hline
%%dataset & method & error estimate \\
%%\hline
%%\texttt{GlaucomaM} & {\sl slda} & 0.168 \\
%%\texttt{GlaucomaM} & {\sl bagging} & 0.158 \\
%%\texttt{GlaucomaM} & {\sl double-bagging} & 0.153 \\
%%\texttt{GlaucomaMVF} & {\sl inclass-bagging} & 0.206 \\
%%\texttt{GlaucomaMVF} & {\sl inclass-lm} & 0.229 \\
%%\hline
%%\end{tabular}
%%\caption{10-fold cross-validated error estimation of
%%the misclassification error for several classification
%%methods: {\sl slda} - stabilised linear discriminant analysis,
%%{\sl bagging} - bagging with 50 bootstrap samples,
%%{\sl double-bagging} - bagging with 50 bootstrap samples,
%%combined with sLDA, {\sl inclass-bagging} -
%%indirect classification using bagging,
%%{\sl inclass-lm} indirect classification using
%%linear modeling. \label{tenf}}
%%\end{center}
%%\end{figure}
%%Note that an estimator of the variance is available for the ordinary
%%bootstrap estimator (\texttt{estimator="boot"}) only, see
%%\cite{efron:1997}.

\section{Summary}

\ipred tries to implement a unified interface to some recent
developments in classification and error rate estimation. It is by no
means finished nor perfect and we very much appreciate comments,
suggestions and criticism. Currently, the major drawback is speed.
Calling \texttt{rpart} $50$ times for each bootstrap sample is
relatively inefficient, but the design of interfaces was our main focus
instead of optimization. Besides the examples shown, \texttt{bagging}
can be used to compute bagging for regression trees and
\texttt{errorest} computes estimators of the mean squared error for
regression models.

\bibliographystyle{plainnat}
\bibliography{ipred}

\end{document}
su |*W7Hoֹ}u]2 ?)';Fʳ d) #R< NbyH8ϘYoije6_vAZ*)vON-'Aց[;o׬J\RHR-r#mP.ZJ+qWn/8&F+vUG4QRnzTr0qWB?bB @ΪAj%r( !LΤDed%7go-E*wyi5F/Ҋ 5j8 (Ws:)^7uEW`c(%2;T3,Oyjb3bJqږr=nX6Ja>"A;?=쐓%P9VP뢆-S`q&1( A>+yq+l- PIhsD )?GJCfLft)>x(ԬBI<.x% ?IY2I? `0D9d3sS{9G?,=4u〘{ Rnb3M> ;p&ǻ;3! ^>{ \ '^ o ~bo ~y,]L=ݙ{x|4zjENDϐ-"xΰ Nq"q.$C&WkW =8ZqA'g1VJ+8|2tnx }Z r?#r2#Mwe9-`+:Z=eA.vy7b},&ޠ }Ob ׇqͧ/ ]Դ5!AF֟~&9{|d5)yb|7gK4ʣ;Km/&͢<{Ya_l+TVy?*:d7uxwKEjiM! wuE^!fr:?T}B'_YU01P&77N귓W'Jƫendstream endobj 112 0 obj << /Filter /FlateDecode /Subtype /Type1C /Length 7109 >> stream xy TS׺J<{يypdF)2 D K<!"8P:VZUuzC}}lz[\+˄NU7J X8A~p7oBMޞh ja~З ACvF88}1Ə0'+l[ _@&amh3|#"B>qn|>9!rGp^ "Vz977:/80D"+,s  la%;FE/X{k8uGkq'L4)=m(}ʉJQ5ԇZj8AFRQFj4KmQc-|jZ@}D-S(Gj1H-&Q˨rj ZIM({j 5AuzP@7eCS}~-%S,I #(`PbR!ohuz'~fva='<^{{`G^%}}$%oʾxo{o4 OuꁵT6?n0[כIh[^aӞF/ jۗ)X<%{thcz@hmZ^C0CVHDXuFk! W`êh 2eu 9,ƷTd÷nmQ! =cq"Fc*$kﵬX$h9S/_ݍ= ZK}04V\p|C!2Z|JVM&ҕKzdE̳/ywcdrY ˶LBDYpy-BtS#tT7>J٧\x?X=o`ad~"gefUUqo>Lb+d~^P}+{~/QwHPnײa4@8Xul?a5xJ]\6c-55ڞzׇB?Mh%I2VqIxt,f%޴FmhsHpD~/E'L('-!kVHWtV>{=ERo-~V~Ay~{!bH@} E,Bt4<>?+"r-6t >kDu{hCŀF!|Z𸈹x60ؾG\)* MQ2uEXDHjqklA]7'{utcܛԭj_aNz2AƑ1yMTNLP&lĐW[Pߜ >㶓u eaK hAnv~7e]G+" =%Hu)੻FsQFgDhals2=?Զ6D|6@y >N,c6z̜s(NJ L<$GKɹjui)р!U3}˯IȓzfeZ8[cS'f|H/l%$s݉ jwENkZߢ*D6ZuAjLԡ0#< ?K{ Y4nJK3ުu@)% ieޒe[aqbR#쯏I O{OvCDF"u 9; ס`#r $(cP#TF2q fӟ[p_}L/m!NAYG ˵6 2@ߍ ^HOںh6}[ 0Q`uyo T66WilftVVIvVBM{Mv;JP\!+|.# RE@8lĨMUT8[Zy>ƾ JUk@ &@0ˈӤC>eTHg(hFVh}!:Fm^Apmy{*uqs:=|uʍ\:,WpR9L/s s%m) =dKMeIIYRM>2%~<ζjSTHjY#HIUU#)s幁QqK#K҇Tؠ\qyjM2A^3j{dAkB1"}#'-7>AS/_#xMrk\wSakar_t9M>M3}@]Leu2/nZRU$#='Hi g.pdKpxi+1vlq?z.ĘD(B%{;RQilh.l558T %ڠ4LOhv3N!ʰl e,gu]W@juzYի;>ՕP- JPs_3.KK11qWQ7M+55yL :wKmElKo]-xjꬂVF|$eA]d ,"7;FsR"wJA[tdV JwpoS/>įh5a5YX{\6L$ -B\QOnIM+>YiHNqU%گ6g)+DqpY;0:ON{ k&? 
9|M$Ó W"(%e@wEȉDc+fHQ=D]cou#=.?˟!蠆Mx$fdBE8Pˠ2+F(e1[]M ڬ8MlVl͡2㾺P-.1dьY-BUO]<:,i@tBUB(/NNOQgv jU&0Y*U WE~Ekkpt̢- ,Cs$%^ 3 h=^`^AE"^Fh'!f64'I')9a6Ǟ4KS+ć{',ڎG0/c$N 8<-DKh4p{;'mo^2/cL}m0Ke) GGn<|   !˛t}z(h0( l>@(,%*U&֡PB`,KD0; }#@Ԭ}ɿlƄ>5#XAIZ.t)m1(EA#U{`jAyM_Χۡ;2,[>5;iM)ÏO|62 (SK0v'-ʾqFyd3@a |ÇPwړY' qrNTT( 6 %y2)i)ip %U2k~[8#Dp2KxMS![X@syT|",^m\$>_LF6SǭQ,w ҲpKzCN/v#OM+@Y-y>4 ,o<Ŷ $(yZjFΣsꖇpdCa/Ǣ.(. $Չ 428\($QS;QDtPZud 8 hEwaxw .t il2^|WTB Y^~O箔C,W_Z^^u-sលâx t&Hv!eQw>\f␆NVUq˂W:KrT&9=5=mDGNgDj{&FH,>Uȧ,Vߑ*ضĵivzA]uWoGX6[~^N07}” Nslukqomk'2D$; OsN7 $].fr"}dvYbDANx`+g+@X6 uVmLG!!G-IJtF Qh py7ٷxĝ=/[l|4aQ5WFW&)Aʿ8x>VH:'3p{U$b&ޕ ^E헣?'tBs_I}Q]PK<~#KÝ巆Yy7֛h:;p;Mn8:߃VD>>۱-2Y!ڢ'ݞz4[(ꡉףm,9%nFZ'K%r Ez0B$2qKbmeIZ%4QX >BGCRAF5-+[ o9{zn#`ϵD +6TT|M< YIĿ"OHl/1[:Knz1QX>{6dcQh .?y &077;ǿ_~~!'Ku ;[AtԷXE|.E{κ~6 ǃ0Cbb?E\'rE)r+zr<+/noUQaXaJaIS[asO6F8t=qB~H؏=?|w*W.qmz0c0S _> @5NG@ͩeɠaUĪ'(i˚IY|4A<[R`k5z7f`D6LMWԧWqpi^lI c#QPn^hc=VSu{՛TRendstream endobj 113 0 obj << /Filter /FlateDecode /Subtype /Type1C /Length 3247 >> stream xV TTW}eA"&s$qy⋚"(58T TJBACa@bLC C3Hok?Ik իӫ]*Vu={1CLz. ˨WQ%6\\) mar &8F&Zatpw7p&[O7;ԛUJ5#ş|TZ뼖d՞!K|^ЙNךp&,RȂOeXPSY};*Fv6bGI󈒌4ǝ;Uai7r%yɴSK#nY,6Ϋ6H\ +[@.m(:jom;%cw|<OK cѐ!F^;&b-l8Q %g >a -'ao }xedRAK8Re-h i{6NTu<&ێarcq4OGRP8W֔B@ɐiȂdN_ȑ sM1JO ʹݷjڵbPUN X]^kqѝN}tPA4U~J3UҢC=ɓp]ٶ |<$UwX26|PJކ-hy1m$Ck8~ #XQozV~48;chc)UQ Zti:gB{U~Id%\5~endRFWMX+yVzz.{A]2Q'icLo1H nԶX6YEw)s (g~cWFG [?*poܗcw2| %\V2,Xkt15MPm2sluk^̦Ob⊳uS$؀ Ð0 BPmgz?Rӧ{j z;_͓o?:y|cO!#aE|ܿ=nx*wc ;0>37l0V'g },:sakv4YV7$ ӵd0E3C yPgK3JʵuTp/:Q?gCt SrU+*+)k3^q6y~8J.0W}_Rt/Hn? :`Ao5*D4. +SZ[PQVٷ8}M%x*nѯm mEkԑY}>̪\}aߢ--{Tq:ikv>Y4b6*;8;o0@:dqy`O& '5Ri5{B.9|mTQk< I)}Q)2[ozD= ASi\H:\'0Xu|gX~Abz(suaT7~ZC+mgNSِ*l>:em-O7LR 5Q$Ur  %֎(0MoO\[V:ZeAV> stream xY XS׶>1prT%;j'jZ Z' c !dR &ΊC!Vmmj[[]nwA}}g׿:QV}(H$Y9u0\$G.%^[#1jxmNp Ŷ~%I Q ?iqNSg~k4gN. 
m>!N>Jo,V˷*Ε)sLQՓ}'~MtR+eN|}*NK!J>N&&>ʃC# 'v_EEQ]B\ C+/UF>[Gm޾wd< ~kֈ9q͓&Oq:mz3fRj%5zzzMPXj O>\BjZDM6R)ʙZJMޥܨ;A-fR[ ʁrpR 5G lA`ʎRC(oRʊ~Zp}vH#kv4Gk%RI%3)d j[l0y`cllaAw3n]tQ9dŐsCإl*{ahC~mnbΎʁB%=>!@ ЪMB!(%]:-R( E354_5̤FZQPp5*ѥaoʞl>93i l:}ݭ=v i~s!l*ƹ1 "o EDnްr̹"JK3dOC?\N?21VKo4Do 'YLS5m< kcg֊,氊ֆTƥ`}a9.dT4zmޛiܜq;q|~">3:#1HWaTz=!st]ߜ};Nqxa MA ~~}WT?/}\Zv_aFU6>ֳ/)ޟwJQ \!vAK{s9f+Rґ(x-Ic/1(0KKV?\ pTV쫈;2eQ=yLpZ >+s+(G C{ߢgxQ 󡎅W@+ll.my0tyv!N>3*ԙ쐬Cǯ2$F|&r:oIrߔQ[◱kiu]R 1ƈ%1Hѐ_RȵK#)^Y]`6 hczQjIZ_ 56t;ىG8laZ̙I_zS|FtN\mw۬Jpm>f7p솗# U|Rb>;ċwc-ZЕܓL<ᖷ$GE{Dxl=a۷-0*IG4(Xa!h8ġ]Fj]c^x~x,gXo3 }MR [Ķ򈞈k:SJ!%ہc+V#INsԅej6"( %[-1#Xdd%giFVKvy%ؼ, i$Y*R%JEG$/;]RdHry5i_dD(*f*B=9Y\opbaغ3U%FnrA̅7?U%rm5buF"/`%~m:a4@^tjpTfBCHr<9 SsvV+1Mա&K+ T%y!,]R̊SNI,L*D{QQ^^Qf7SxI _yVҤjRvi˜^ p7ɿ`u陨9"9HL 7Kzk\-dHuܦRFюlT+%/F7@蟧Y?=YҬ FKlE&b)`ȾzIV{͚h'?ݼ~ZjO3N|)_fLtif$)y: ˻v"džV^ȩx .yZO2 ы8r.fue?㡸1|:Xp8ݑI|khZp:[&2[t47޾T\dሪ{`U(>BW#ފH;wGtKjN(,D4@hQai0rb1ف5$}c\QURwmorkǘ+S|eEͪ6׬=r=֋Kƾ[=ry%']8vGG6]*c3S[myxΆ:^WX7='*ibJ "oH7#fҢ3GRL$DC"E~ZW7-Q?TNEb:Ţ JFqEEi(eeea?ʎYpXY~SS_Sn! [0Hw0N]Wм? gьN > 1L(S6nzߙ_Hg r=:|;b 9ֻI]Ҽ5&y E 䎂Kb j?ZXD4>1uG8w%TqqJ؁%ľGpEƚ`W <2i-}*ZETryeD]]ee[S`-`+ }5F7LD 4r:eJ2gtFk+Uxpx&:bneW3/.:yqQÔIi76-y,؂ĺ٢lQi<O&fln,& /oHO1=rvM? Bw o,7x /kZ0vr[n?8ABļj/4?ɓƪ;暵XwXP:cGR&hLeRC<|)vm-nyWiBxѡo;UǷ#}ڡmE(6Ah_zr;'Uȏi[ᓇ9DL;_ i[c9yYbQ6< 삗P0Z\ZZPeC LW T](c""ywzج;f4@8}=bb]=m? ׭vbw ]`:f[s:0Nղ h: =~EM,v^XbYF{pR/*2cǏQwE ̓ydf ދ5:TKX[c0v7,M0X w+%6W!×Ѣu7;nZOOyi;LOdZȕ?r{Nfc5,*Nʊ*ă ГVa%Y1K [OLm^ϝsyyX7?^{kA{8bDMARnfaQ1AzΚ)H$ka 8@l/F4TVS̍HJLn7Hc;7a1F}d1T0.l&>}0F҅Y&C8#$q䫬x$ddsa7::;ˮ(6gM1qvJAjFS]#6epobNnzFA-_'n7#Di1Ʒ]p BkK\;r^(+Db-[PS)UG$ibIsIRLBQ{s>$116+F=t?\ CzÀ4C? ܾusǺ~aH õqq)u1iT$ge)sZq]s%ʟe;Po|MHLNAZ&Lr'Зt衫'/}Ξ.TWUZW7!+>q<}<l||se'F}`sc,67. 
V%ٝ56~b|Lp㴟WF)RRW H5;ArW^m^0>;; BLudB }Rdy>Fl` !lܯ݇_ngl$&>9%9 hN[WGHVkB"PSSR5Ֆ=#7BuCu%UPzo&(ŭM%C Oj Sٻ yu׼z99Rb ~aAϬ3IۿN3vg; 8hYNa$+M,%QHT't")?vwu{ʷˉGrG%*MpY_/+s?(\٤~u "xTB^ąeښ҆6ƓT`/N+_@`N܇ кh 8ì;ަ֧0e!JGȴR]|{uj!70@X h+3YEmml 6(endstream endobj 115 0 obj << /Filter /FlateDecode /Subtype /Type1C /Length 486 >> stream x%AoAgZVLIbl&6xa,lk.q]`A-PMg65˗=|!M66[얇%nuF"A(jT8錂VUO$X=a9+ y [Ѧ$frVep}]Uը+F%9xu Y%^"x= z!0x+,|ZƂ뒰x|-Чe ΨwRcH0˂{ʠiTIHjF"trMk6I-~00v%rhYuݞmq'2=E7R4vXsvW6vyhB| 0u߁i77'.I*U]tv=89QV j=`kf|7p2"qF?fDUG$P<9d~r%<<\NШendstream endobj 116 0 obj << /Filter /FlateDecode /Subtype /Type1C /Length 1999 >> stream xU{P iZQo7zr7y=rR" I $MHB0,o@Ayb}獶3^Z[;Ulgz%eego\v,N`^+ffHCrbbk**IEc|OYnR jYT+(\.\v [W'eb([Jb-pDSY(h RV7u:*\J.~g NvH4uHR},KDV^rUVe+$ja Z[oXöc9Nl76aL#,{{KsD/JHJ87''œ9S$2It"6Ha5J_?,tTkZP[Sp{B. ux#M}no8v/|"p$;i!ҬuN>eMA38Qm#t th1SCYy۴bA}QJVv.MaTmITLaOE_>yz&;?Dih"2a38G7R'ۋt钭7dOaѕo9-?Q%*`>D@-F(1XPf}{J&Yl^ |咽a0وCKs52+ꚏuzN[/[FM/ qցp/tkDNdًppNuG8`m5j[Kf!,c^~{&{} Z(>Q߷0 VVWƤ&gTdCtHw=05 }t0FEg%NkU4Lt۴#EIJR%@yx<64 EmRZ+|vH7_F&Q{ۭ~N^?i*\V2u' S,/[+ra1m2zeR \v>Kv [GzoOM⡒ˑʭU(iNԘ'FVT;lHh32WA}5@([;]6fj|FVA TU;+kLGz:GofiRvI O?< mNؐUgOD-0~aE bh]f2EtKtD>AeNz6\!wp]]lpA]ӲK_v@hg;62Ō00it**&nX"HaTkIy_3+B7W/SY!.. w"+9w.ONRAxX4YPVT7DVQUy {^s:La}V]01(8?=݇#{Ό~e9d✉NqIAͼ+tCV,#,. )npמ9&vNGХt>!NwL>aKgR*-rRw: likc&"7.ϣd|].iMR[y̔O) 6nܛ{p5;R2!o[6%?X3mE`3k0oSk-bYl`Lc1܋V4Ne3礠~vw;|P/*ѩhٱP7$> stream x5]L[uϡ8}MQA6LaB c RP@iC | SMcHdeFq <^X!M޼<88'C$:Qeb Îm}y=HvCy1U RCUVHi24|;--5/endstream endobj 118 0 obj << /Filter /FlateDecode /Subtype /Type1C /Length 1119 >> stream x5kLSwϡLJ:lsDc݂sLA.n@WZ NiO[ B9ؖJWEEyCef8/ۢd8i~zߓef`8/o+o2"@2l%j\z]pYc0*ekw7n,^OWT$W7Ȥt7IHad r@ogYUIa^/6i uGk-:y-if i~VbJ&rVn0l1褲F ۅbOOrH2؟23x{6KT 3y"S(,F k|(a>F8Ey(„WV><~61Cy-q[4(J $E)Q<:&#gHdd@hLW& 7*|Pmgڎ0'!o/NhU&EEYTׇJaUӪ; qG#/rϹiq.un>OXK,q]z|WnKbB#,nTD5#ЇlwQ۬*K.Ujd!E GtlYh rbC\^K1D!Cd, t! 
XCi~{/\KQ+숴&(e%cBSި{tk/{'g''z4YDZT ꪁ:">t{Bf_vP¯V o yno$yK8c `趩,`.)-X2$(Ow/N}Dkoc,|b k[MfE]oI£+) 7wn;{Ϗο&b@K\(v\i a]@}x}>#6Vl&K %`BtXwӗS!S8Cd’)=:^t 5iXa"\wpo4N OS7ldgekY:{T>K|#HY$.}\YZ2ڛaJendstream endobj 119 0 obj << /Filter /FlateDecode /Length 178 >> stream x]O  @eX%C8""dߗGҡ-l8]'k"& "ƪۃ> stream xcd`ab`ddds541U~H3a!\^kO'nnBs``gd/mhnw/,L(QHT04Q020TpM-LNSM,HM,rr3SK*4l2JJ s4u3K2RSRSJsS ӃP>!&Ɖ ,ځ ]݌= ,6ɳUS|_K?{Ǵ) ]r귵vJ6Li/׽?L~'[7Gw|Wb= MM -:UniSL>_7Ue}a`0f מ`endstream endobj 121 0 obj << /Filter /FlateDecode /Subtype /Type1C /Length 6484 >> stream xXtW!4 % 6JK`c"ɒ%Vj,E.r6`z$@Hd)%w8$}۷rs_xD.hՆ7N't#1dzЋzuu 7/wNy$v0Q% >*hҬY3M8qVЂ𰄠Ua0'.h0<:B$9'J$J=aBzzyGGD$E Z*Lza,'"V F$'lApQ%KSeY)_wuľȨb6n{?~Йf~kF>fq'fNNMɚ:m:A #kup"@ 6#M(}b3XHl%mbb<@,%ˉw) b*1XIL'V3Dх$ +1$&э`N ׈yDoїA#v]n&8CQ1O[;ea"T~aס]?l|Bn#v{j{l=蹴g[J_}|}뮾}{_D?wgtI"Np;0 0,)A<p!!C2ysԛwkUp^qhKk=w$#S^ ק!{7FZi4&;]Mu) Enh~-e%AHIǍ纺ݞb\x 8t#K6]Æb ewgrdPJF!;]W0$]}]|)ȥ$6yY)9"pb?Cll2׶`}OU}Cʽ@#0kOU[&Z[spw hq2dhُHjې! |Qˮ67[ia :8'[*ee)ZM\>nP4Dш;#;yvzViAb ( `RugvPU<̃I9~|)NeJLDf""V.}Yx_$;-[ad,dj%h*FSaµNdImGQORؑ 2H ({AAiICi~͋ 3#`~i!Z:V)ܱwV蜥fk3Yl7pO= "pϠX^QƔP ̰ٙ1'|h9,olU$nm"/yQ V h^n 9gW9} ГbM }A[ _/4"p0 uʎ׎d":ȸ,+)\XGJag /@2HH ߶on%ؚ)_>frn$ K yhf⃹N3|4|@ӼVѥ5-*h4 ?h3eM곀&SdK*KZ͌LOړ͑-}ɘ\8q&ѧ C,hzEH4Mg R$t)s Dzed&y :Ze1wH# 5L +MAK഑(a.6=NC ;.p=܌0jU:Yz2`?.+{ oo|$QOJ[ M;t_!hR&tRg>DoZ저*QV$H; ܽ͹:exz Z2 +ɩUEm/¾5m!kt@I@GcvT4vp>v2~q邲bK1e >OH $w9O[V&v$EQzR{.kr Ԛd=_pB4nmNʋ1ZJ躆W]"Y~ ꆂQ[ >lMO4MRS"\\W.s%"?d%LJ7ܼ`#=nJ_ᐱ͋Qz㟂o@pˋ Z&ٞUn03Etܗ^4~hw4+ZZoSh30v ;\`*TdN:-&q\6gp\x7#HFP4H>VU*غJS̱,-7wIX@ :yB#Ψa3DT D4m|!H}A}hN&vwķt}bU֬f5JZְudz/*/͍ܫI=,#࿚ռ"mkZ> _m Q7D=vJadQO^mT s3Trz n,؋hSGoVu.dWFjESQ/}LuEiQ&@CONFՒU&fk2,qV1ΖRR\ ZX83\/,g>畁WZ-̊IM LeG<B>X, _C-5HMN-Ǜd .WUbYWj}KM%xlăD^m4-/<'̧q&8G!oI>~X~ %6Se+\lCBL6ДLjLڕLOILJHmuuLF7]og1 =vͳukɧ 4j۳9iSVqĿ.][îFb4Ы7~;v^hl ?ՇYmK'z:a&]P MLlyiVu|&Lg6L5&@ ϫ|̺qakJz/2jTa/(pp*I} Gp,K⍞B束|Tz B驩7Z`|ŏktr9{ۿj㷨RHfzʠӺDci8 pvDg@:%/Jw]儰uWhRdl+1 NMxD }}ry'yB"4rYi߹|.{wvz LJ|"uU>WϽAC=( |:'& 7 *A+\+ ihU A`6WYO@H 
ΣCKD4[܊i8anJ6[Ū1^ZA0YmF{4IjK/74;^/vagJEԮ,I*TFs"T"O:ZV,}>]0R4M'=YQцxu< "XS 2 T6ܷ{4̿zasPZgUMʦ0UG >}g>]1\ޗ*9}~H2Wm,;) %K;rO~,Kq)31u]yaE]tuGc6-՘[TU]^oS[9n(TۑĽ i x]nFi a/\F6]U.VkrT'VrʚSY.)ƚj_<'6U$bJV,.+3{щj,ԅrQԂFGuiC 'Cn;h84],$/q]H@LōV&F'Yj .@!x̿+ :%,ܼ[8FԩI&^}PYd*(`NdiT{㦍c-q\Z x(U%l}o<^q'8q&u{$=x t9'O?>6ԗUjL"JT"thۺ1[pTFY 8ֆLjZ)%> ^Zh.`F n<~b9zPT-> stream x[[o~W#<ɶ uҢ)PCM<:dG(s 9b+( !;xب^} WGG8sϧ& I>;OձnRJtߡ՞(][쟍HNU8*/ԕ0%;v|W$,L#tF2Jش[5kqʺQ3}7-j{x(bHۉ!.,ʷ<ĄRim|v"#9L@r] ϒl)is^AԾ -3 ѷ=LJ}SR9)ˬ^z"*ZJ@)J]CV`ANi*VSrKBSRj*-;rp~9/Z z\'n}Z`Ieօ&I]&DwsR-qjK=A\ Gw;UV" vI`k$i4ASp~wTa_hsa^`.qc0Jr;˽vO'z˓.Nܬ8,ز]C`_&tM1}$wCZa;Y !00],C*SW%B&QQ8p QDZEqm0d?08E__gB;p^ F"ec+SbM=WL@Ȏ"}dZߋw&$kݪWU(*WJ 3)qq]K#ԟ<1IhY+m;\ p[Wrf%+!\$,WO1@G޼򹓙Fkkɩdo[=~@'s: /Df E˼?BGFf]K#Nc_؎UB=gAO.q2\DeG$/#8[0dY5c[7##VP8X3n!XxPkn\V:Z6CR`H([䆦FM>'OPu%m»bj} >  |r 4BPb2   DVKpj?0wP2AހUojr(D=CaF~H3-r(ž$99`}ұKL2ym9z*#Y%wgAB:/)7m"3\嘷@{_s†ŷÁ38j]"ۈZQ܌&-D͐*ޡY'85 ~ V~,Pc.P aSoYE='uuҝMH8'$Du2,a9wěe:1qY_tsd"R-]E5cFM>NU93#BpG9̍JTFAYfX7S_ oZHM}j?#He~U@d8FNa E pa8?k; ZhwQRN5γ5Y*%g 5njC%RE  }$ލWkPw/dFlHNz6ɥK׾qX׬?׺q;?-#pPa?'㮵6Ox[l?# GeM gF_We ?.CSFfš}]pQei4zG~e~UG '#XP6 }2n=F̂Ϻ|ze]ߔ2j֋VHˮ O񗻞;{m9we(|#}v&if*M4vnnẍX-X;xuX~23( (aH֨j%JC/7L!îOÏjB,  /W)ƒ)` SKxx?4EDˬ1e@6Y+o@RDR v@Ta)O=HcaEXsJ^Մ&Bϭu%Lύ|mωG:DaOXxk,,*Y7d[IxS&{WEjӔ_6 EiFi1ېr9g 㒮+ \m+`?wS9a!꜅yV e|~!{ƄB@0eF-tsldg7i1*Ίt>y97U9U4?byo?‰e0 /vNTQR,Qʡ ~9 wOI ]aZ(Z 4v3XlFu2˴{ Ttmzxl>qLs_u] ۆ44{RfeVSnYM6ݶ=;/ jC?qn|^2?wH#s0w\_,_ߥA;o-EIM>,"hvJN*HYWf7eʂߔw_Sv؞y9Q;Riύ;Y9ѿ> stream x[[o~#v H EjRJZY+Usp4( '4>rN \}J͑=m=,tQDy|r~>Rp 'GJt:F~BwzZœ1J P0\(!%v92ZwA+hR`ჶnqqࢫ(SErqYfݤim.- ذApJb=F G{W0 L)qYP   /zamp:FiQKr lh5)[D>Y 8g;uF _oޜY5=N+5 NH%u {D ]XXmY'>873]_/QM˰`_ё>8]g5AL:;8>ʦňےqbrZX6JK۪`GYj:̏䶩^ط: TcP.fҪo0Na:P]wpژ* _K'6۶N-WyT[?tmWExp7U^7=3ilo5J{UėE y򃣽>d'b# L(곆I)a[E.ⶈ0"YU_"f]SΑGsgGFBl)iE@dy ?Yp41} _oC/x\þ9|7FﮈgML_S2[C%iUi*}_ao}ͅ1sqGk\XJ_9Y}f˶ >cfʘ>6Wl9[T w[0ݭЭ @gxcFs"_W~>w5$G>a:|5&"jPC}Ny\}KwoCdtK͘ Ni5!``_Љct"#1Mj`!1"7PypnO!=oPZY͒[8A`BbXfGD2|XNueȳZI%n -hpY䕠Zi좽W=4smSDt2#C!Uuk֜6D9i!/HV"Ղ 
zP1B]Qǟ*%-ЋԺǯcF)0"vm'ӊp,Xvͦn\w,- T$L6BeY]Ёmr{J=fL|>'?U$G%p䮅w? kvx=?4)6cgEmѫ~:T^—q;4lSsֹFsTxZا^W,W>*_<*wsg35yٴșƳ2 )UCY!JVgH(pӳ~2qljE3Y~|2sU%l ,PJw\z\YܙmyP"202&kMb`rT պt9Uyn PO*O<`UXIQCˬko]x9#`bR; f5٩PA+QӒdT+;@xXen33~{ͭ4-?ޱ^q:.Y}L/-47D3@h(wGOJ]+/ _|TT,5]!RwGGӉ* {C_S Z笸QXg!alPf6a>GPㄼ_W۲NOtLK6`A!(D"N^r0I!V9\m훏whH}f?CFI!|] MgE,v,J)?:7N['!_i5݈1xW߶gm{;Ʈjف+.")`nj᡹ޙ<fz8}Wf/W+AȜ)qDGteGh|:=Ծdi҄2d+;rG!�r6&Ĵ.qxK GDCIJIW0'e3y)S 7M8zT<%( NVOfm5/u OݤW5rU/@qrxTUr8ތ8zA=_U\=FٟHPb* Fu7pXɌ‡1]Tt0YO{xΑ8MA-2ć0+ ;D%ex֨ (2==3 [B0§QU7́cpWă ^')G0gj("μ{F܊p'񑒈,.au* q{H7-L⯫/Ŧkrl}-ƂQԌW!̀f98L /go:]LJ~K *+]3Lfy;{מ+> O$] O/;WEo%3o&50 i=p9@z"*N//J2E&,CcB,]==w"䏻ttb-^jCLXZRe*b]> y4wxiuV;OO m%ДIC=-Jʨ(f뢲 ܾ|}83drtAj1Ȣs ]Kt(- ) ?6ݵT'R_s墀ɏ.~8J?Ž3O+ 'i tWАlec<}R 6_H7OLdy<XC1~0\#7CQ QXk CAg ;xnZC n9OLgFg'W%th?=D-䧣'G?\Pendstream endobj 124 0 obj << /Filter /FlateDecode /Length 2967 >> stream xZo {o.кE".Voiw,S$;F!j.'2P9$7Ùa!zSo~-W3uÙL]?a30G,/ lcX8+{e˕^Bv_-WV^׭2UeVk|GL+ nY q 5!-ű0dqHz 8_390Ih7RN>ilk~zJ޳^8}_Vew>P skGbbo]((^Y #Ű 0Q;g~}Y_t IAbNZ{=ۤ8eqJyxju:d<JsͲ|[J]MSort7t <{s3fXj]\ XWځa 't|(7qNJL+ԛ?T!X<uFPA0I-c, ' )wܑx 7Z-VS*W5Е188,΄ ϥWڃ!eKjQB9Fv+𞕱wp*ztbL>*mW т+:+d j ^6٢r1Y+(A(QvРP`:.{?8-i4(Dj?.=i}]ou[ :ݫZ[bZ 0л\&pu3oD:F#FTtx+@iff( F-cWN.ait;Apq ]!k>kC]ئQC"~,BĦɟg `KHVL*& 댲@[0f_qh 8YE%7互Bx6EPT|B2Uu2h 9-1 $yܳ%4nm1T${BGoyP21\' &cC jvD-T& :[z"42a,8;> Cĺݡ[U*t8mq翦[T)&'rinLB!aZ"ZaTQaf?5<3&E!io2j)$a3f03QFckT )?X5O9&֐!R0\텉Rys/t]Ӻwm/TU%Ih[9K?$n#5#ϝvJC|OϗnOx=>@n*\l6b6YEJ~$^,K(˻e>oJ'-$)k۳/+ɗɳ؞UQdjz_;gYd  =+V^[V! 
c՟s$KVħoX5|J:ROMqfA|`}@ rBy`ۖ!#{GKji+XsD_}%铸A U4>Y2){ZlPrTtZJ^PV*IzDM#KXkX^ӍO@FIA|t9kj岩k {x:$I.5ˌO~,Ir6/Lj {VB+KZH${Nfe3-S<0Eɬ^^K(i@)%B,DJNxg&Kl!)r 8I6R:s4)CVS7:D4ɁUKvJBen jFA1샍; N_bb;q@1I  7|Rv4ĂqEj<8CqހT(U!4^kŰ5M{Uɪȷ-DSr1KӐ1,(87/OjdFY5Ρbnf׷J>n^ cp󈠢(?2+ n$H˩FQ` 7}Z30T}'@ٸŒmHgۼrlav#m=buCK0> stream xM]HSq8hgr(&JSR MOnvqAf$7scM-Ȩ$'NM%>7aX<#sm#%bRRCeV"IPjT4V0ȸܬQ:*`N2 aګ-Nh zTiM%q+'-yѨgCTs6nٳt U89⠩rad&ffH[C038 @1CP@ k,V8Wf)W,NQN,10 `֊{IV|Cې&ʪKGv5Y6@shժB<B~FJaᑼ3Y20"a1 ;@;#Ӗcesv*jsN {a:ͺPG@u )KzN[_V Z'JjgZ k>~S#CZ <\{Zb0W򌦗Kˁ8s$D͡DЂBڏʉ!8JO~3F7{[x᲻'JcQG j_Q`endstream endobj 126 0 obj << /Filter /FlateDecode /Subtype /Type1C /Length 1553 >> stream xT{PweCv+4NnѶ=TڱX˨ X4H$0$DH@y@*b{`;rVzcO=漇իBgw?H滳<~4hVflߕ~)i--g9)mm {_#Gaں }a@we@SJ ="(,F1lgB< Cjj߳Ā7uuwV|ά% Vvnz=G+Q^H\U C ?ܧL%JJTT&ueTxDI"{Xyo |!ke'DCow;pDCZI01( U0V!q mt>R:<>*SsS<8rrҦ+zԿz:]Ur(eS֕\9d%~Slnﰼc؇lQ !mS^ˋm+Nl8u?tq@m%/K-.227G.8ul>'TAp7Uj[&#}KHZ! KOC "| -%,$xmf%W#6O:4n {7?JR>dO4|5}ck'. ]vEqB ypoxoSz_OɤT,ϳpSf?PK_^F혋{kZԭPo(o6Hb rW|bSd!5_bPxk}*P5ϸ 2\2 c>oC*@:,q b;wfye) DT7r<IzZ-E壮r^qvEOFɑɼa9@7 ];e{gԒUʹG~*s3?/ [|ʪH.lwq)%Y?oGΟ\y?.}ۉm};G$oL2.-{ G!PgGgG Sz:oAsS[XI YqA,AXp OL&>LQ_~G3endstream endobj 127 0 obj << /Filter /FlateDecode /Length 2744 >> stream xZo G(y(No=cy(P)-$d@]rwGU#͒ǯ9nus}}xs89R[XiRQT+횐w.^\dir {жilԙt1@$iӴm\BO %WDlbB5["Ղy+')BQPȆ+"B^,HѯNtB|t[ǚ%m_|Mgp͔hSZ}&J`'Ȗ$* uUvb% &w!q=%C }Ea#>W\K!eD_U qNR6T wx7.XqJuMP \C'J>ELiA-gNr.^Xfgkkd'~Jf r tx+3 B,gvn(ƈZx $t쯩^{2N⊫˘\5g0#H=* 9OROU2"\3& S-㜙7J-$ž0 .'XBLxM D}wb*sp:yLX&XW+n II/NَUBPoSQR/AȠXSePZU s;\cv'˘}ƕU)~@p smrgU \bnrz 0rݪ9WYGTͭ@а>΃Y{0;IuCYjhVpB-]`/Pb7=.VX r`rˑ?H摈 B 70j͂Ҿ@0F71l<xw4yuU<ׂ=sq0Rb늘Mҡ8e2bfGǍ fVzVGy\Zpa!x~h༊I+0@f w5̚ sW7Υ|LQ58AꮨSQt 'N^ͣ*+ RPJ$3awZ%dKa]=,`[6.4tPҽǡޗ@iII+@L 3Hu%cA`ה`nFO˜c]g5}X]x`t_V1VUf hsM/Ț3C4 *5e)Q؟츺tfa(w {ڻ+ЅNu]S ab2MAbSru@VA#312M)mS6FLBNjP`o 8tNGDDJn \d'/>p{?p,bʌ|OZsx=$WMQ: tYL6bW@zl!/;Ϥsgv:gT f<;I!MFp~%qvRiPuta,GhW|o>҂3"e ^vM~b&]a0|HCɁ=GZD>%2rRx>h+"gntݫϤ%@nM|GǮؕ>9<#{G+sF\q/&́G\V\er_=}$0'Ce1immYWMM.437ۏ&KPrˢ<2ƫ&͆fUu*Zޱa)R> stream x)CMSL10$8.  
i^WouCopyright (c) 1997, 2009 American Mathematical Society (), with Reserved Font Name CMSL10.CMSL10Computer Modern+.236 `~$8-Tzw{z|}\yzwwZ]rvipmmtt`rBrwK}z`cDӵ̋2-:IcyoȰƋ֬SGBj: j\~}u$ ɤ0nuok(Z )h=p|hy\H6VW4J'A5Pxsc~ڏsSsHcVeXh_xu"粤P͚ҧ͋gnstms|SS%M:+ǽʜ֋[ F\VbtcZ王ՋՙJbt~qOf@<uCnb  7 r;endstream endobj 129 0 obj << /Filter /FlateDecode /Length 2332 >> stream xZKoG ߏEC ^ ^vsbfH䷧Hv7=5KH6Y,VzPs~wl8_3?0Hv9͍n~5L1_e}&3:W\eAE&J5V Τqձngb: k5"h>a&k7a8iŘZ׊ù"oLA,7Ӱ&zqx^Wc]+im eTr򌰷d &_ 8 zhݪq*|ES.)pz|ȷ&3.[nl3:LB |[i\dI~CeQOuQK 9<=i" @w~{ (qNpjKDerO.{ɄSRV߷'y!w`Yr}}UCLHE& l&'QB1Ȫnpx8doN53J"БAdC`Ivu$QnI-Ю2Lƍ#I^erKHf7\1"Tc $;<R'm_X)mtMtpHlH#<"h$!۾PҎq(ny6i:g\ϙ=,U=" BY-cC>5|QƀjXh+۬ `ˢ-K%>EaD&:WPaFUߨ#žm_pe{лφV;qN.)iĴ ]*7S:w)i w}.swMXjRa ņ%.嚉hѤ7BɾMۺQJj; ^̤S^8 SDb'}Ds[߫H (I&56Lj[7P |%`Uj|H#B CAp wxK)mJ9r i'o_rM}DB!> 5kL't{}#J@*uOH 2ʢ:Re*D^-/pSeɤ$)oɣ=pJTt:fG2/3~NFșw~tEhIVw.䟃 0drI^QS~L,Z$}X:n#NO]DŽG̢ozGܱ6˃VȞ+bGzh5A*F{HDx3DE4mRAF[\oՋgZ ?Mendstream endobj 130 0 obj << /Filter /FlateDecode /Length 3209 >> stream x[mo$9'p7~A tHDd&ل2vewSIrH.?Ue;ߟNS_ݜ~ sߞV7o҈=]Q]OSƞ:;gɷn"\mT|a7kCۼ={WڼS\p]7{vewu {Y{o)Va'޳k:כ|ڼ]mnb3c;?H9i'ߜڼͫ~2W+zdt>kv[d1.. `5?67dYalv6wl/ )&AMY e)a)%L/DJm*:Pu ht}1_$mLu4H'0Hy4A0f4‚BauTB;kWIh,NǠ˃2VW@8Z2[&S%B`㚢5qT {lD%QdRkiȨd6tF:Am㤖16Z;p! 
D&Q&!ﳱ+/_܇ )Lgls΋n @j0 ~7/x10g߃^(~ub!}ppV7CA[-\ҷ%@i$~>eI%$TtpPc,;E hmNrb2I|*iG= @"YlhIr~:iǡ$-2xm4~eXDqc& M,]셵:e `jԆԕ#`4S7{lGk>/IlZ*I9X0#Б8#^ohEXI"܋RD$Lv\f@V0ߐ\(Xk _$YʖtNb&$24UM}ԊA >Q ̕`*zifM҂F{IƲBwIp 2b7C P$GLP \IC1hK8J w3庁, i'u4Zbc!wTx9,%u/3;"-V3IqH .-<^S FWĀ3F[q~-׹fj2kPel3ÙB$I,EJIQw祥̟p(wCA-z ak~pS}OSP/ O`+U{PHD{$j/TY1 $A u_?w|*`yr{ \2h!\ԫq+ !uFvpMOR=Ev_ tyso@K oW TS6atvE 7e )k0O>TI.WyזufQ:ab=SxTl̛e` 5vƖ-#9Q:CdIC>H%kw`$O;eg51" }?3C[i:v?oS\PtݰM}A97GQ!dў;wn~`agkN7FBPi+xp{e`}Mмb'n&]m>ϒ.SA5Y4kErR?9RZt8"YUF1u]y$h'ɂΦ^値P΁g B j&RkaΆ}(\Z_!;˝[E-(cޯsF\1Uї\r;Gqx9P&x#֒bQ /LOm㜹s;̇.66*KD`NM,oq>q7}bVܐy➔/ B=VW=QBmt3Aǯ܍'άӣ_E%cU϶6 P7iϣ#71䓇yO;@ٸ)DϏ%Ž>!.zma=|F6>0EHʡ<+zXnXܻxsV'~43|,bh9<@,;q|@"S~P f)N ?Zendstream endobj 131 0 obj << /Filter /FlateDecode /Subtype /Type1C /Length 4711 >> stream xXyTgOύV0VN][o]Z[UVAqK K,=Ov (*X ҢlV3m2oF쭝{n;ww8@8}9fGڸugOG-KFx&]Y6;+W*KT JpsŸ_`iʌd 1>3|c|NZrF|%=<*+QS>崜Ѳg웗%N}e|ANZ}5Y93ÇO7o۪ QnN8|cVR8bXJx_N o$n,LڔE1ssXiV$k:k3k+k'5\V u<.j|ZBuXYXYXYcXcY j,;:bƈ#7%u&j Ǟ'`7L Ycϟ8ab]hv>4ɓM6,o԰'pV4j.͠0 HM%nZ,6}+ *P)L>wZ=P$Qq 9Hp ] Ue*; L 3ZNQPë7GdnHO Qj?AhZ}egTR^umdL@_yTlsɻlKb0hH4{3PyFoyQtKY+BRLO Z sH0PJXL&V}T#d*zNʬ=Q zQT㩴W鐀G\Ck yD^]<< Ϝ~iqCг4^RӍv?zSͱqӅL֝{b o \3lPJA ͕qW7WD'`!rov 6{ҥF *=/7,j Ѵ hM4CsB3k-v?V8_I;V6J(zizf)} GLUdԛ>M' 9'C-qځM$7-ԩDF 1 +Wd*!Ѫ Tλít yہfBR##_"Ǟ‡:dg@EJ\(4TءLJG 9W`*l3YBU u@`()K23bSrl/G{q,p奴M/Pn>{R$KbWgԢs=Z[mG5 ^UQ\ -)*^Y{.?Ga:-G'8z2恌R/پ>*;mzaMS "8ڲ8~!4zrf8'خl Q80!w0-]Gz1zsR~j0N1BkD'}ECvnY%ܸl+wZOј#LC]DzvZT9L9;9}?yM4zwm|eJʮ<s -`5 Q8Vт {|| Z'Tf5 $HKҏԡ\fn0e9K(=j fbAm"0j&(;,J%Q%3]¤vqmπr t A-xK/;v\LH8Ln0Q9FDM ^Բp< LQB|'{`#//] .Yd2Mi V.'ivt o@)@lf#GyĨ?d#")pqחQ!8u w8 h uMSq~tM/^ʭJrB{{o<ɴ_9}F@ܟ4ܠw̹?1 BO)Q^Iēp(^ ?C],1ѻqR{3L^ZPhw647S?Dr=Lfs!ThӳTr'j^_ Fz3..Ic3z&XM/muƪ,ԕ u"0p~`N3^e*:k {vڜY#}AmPC@h#w?  z% 5ɶkA x =kP|7LC/1^ENR!)޳WNO9a~f&^[xP|hɷPDRK48f}Hl DВġ8'b%. GO(8O =ЛQи( az}@}A~T߾Y9G 9x_̀ď~Zcoku[z8˹YvZC4C+TnфbePmuij. 
jtmc22sHE2x^i\pUB,U2},Q@B^nr=]!L*Ch42:1iGvVgk'BѤ'ގDXFR9R%X^uUNnH+?9ƻ6g"S-a2/K֑yۏ~SJ^5;dy,q>4Ѯ;5I|Ba6÷W>WwJ6lC_Ys!ZWhdq`6 %D+ QO<ƪMfv{\e.L5=B[]m/iXӝZ[]ԭO~^ڕF.6F=K@#{~%' C{{Κ2`ej@Ւ,$p!?m6?_"ޫ@E"um?,vE}noٽ::66V]Q" fȐi?E|}E.zŴ^umTBy.i^vaģEkbwed=D[ Ey5wIoG抅x-X/8&o}]m3OuXN/5#M|ۇ% X5j 2 -U)ք?ġ)81 Fêy!z櫹h)d{:8=4Q**UTQrJF~M2~T 9AX#mVR݁k FjW^n2u^-.optY@JKm;9@4J.+Ji `+sK/jOEj'ԥz MF#:? #dX?|6yf5{y smH ~/gw*nݚ?]B!oql $&\1 bLLO肹]1y _]=c_C3ьf:Q桳O׆Q/ Ԡ =&qeDqZNY[S.!*7gqq{8z5dzX endstream endobj 132 0 obj << /Type /XRef /Length 140 /Filter /FlateDecode /DecodeParms << /Columns 5 /Predictor 12 >> /W [ 1 3 1 ] /Info 3 0 R /Root 2 0 R /Size 133 /ID [<9ce43e33752d0272ef0f0bbb6aebb35f><93f997da1b9218e42968b18986c455f3>] >> stream xcb&F~0 $8J@g.? d}hz9l>P H>V0Y&@ DrhH ed "ނH)/D< 6[fi`[.HC +` endstream endobj startxref 81429 %%EOF ipred/tests/0000755000175100001440000000000012555703762012551 5ustar hornikusersipred/tests/Examples/0000755000175100001440000000000013055552443014321 5ustar hornikusersipred/tests/Examples/ipred-Ex.Rout.save0000644000175100001440000007510213055552611017610 0ustar hornikusers R version 3.3.2 (2016-10-31) -- "Sincere Pumpkin Patch" Copyright (C) 2016 The R Foundation for Statistical Computing Platform: x86_64-pc-linux-gnu (64-bit) R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. Natural language support but running in an English locale R is a collaborative project with many contributors. Type 'contributors()' for more information and 'citation()' on how to cite R or R packages in publications. Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. 
> pkgname <- "ipred"
> source(file.path(R.home("share"), "R", "examples-header.R"))
> options(warn = 1)
> library('ipred')
> 
> base::assign(".oldSearch", base::search(), pos = 'CheckExEnv')
> cleanEx()
> nameEx("DLBCL")
> ### * DLBCL
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: DLBCL
> ### Title: Diffuse Large B-Cell Lymphoma
> ### Aliases: DLBCL
> ### Keywords: datasets
> 
> ### ** Examples
> 
> 
> set.seed(290875)
> 
> data("DLBCL", package="ipred")
> library("survival")
> survfit(Surv(time, cens) ~ 1, data=DLBCL)
Call: survfit(formula = Surv(time, cens) ~ 1, data = DLBCL)

      n  events  median 0.95LCL 0.95UCL 
   40.0    22.0    36.0    15.5      NA 
> 
> 
> 
> 
> cleanEx()

detaching ‘package:survival’

> nameEx("GlaucomaMVF")
> ### * GlaucomaMVF
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: GlaucomaMVF
> ### Title: Glaucoma Database
> ### Aliases: GlaucomaMVF
> ### Keywords: datasets
> 
> ### ** Examples
> 
> ## Not run: 
> ##D 
> ##D data("GlaucomaMVF", package = "ipred")
> ##D library("rpart")
> ##D 
> ##D response <- function (data) {
> ##D   attach(data)
> ##D   res <- ifelse((!is.na(clv) & !is.na(lora) & clv >= 5.1 & lora >=
> ##D         49.23372) | (!is.na(clv) & !is.na(lora) & !is.na(cs) &
> ##D         clv < 5.1 & lora >= 58.55409 & cs < 1.405) | (is.na(clv) &
> ##D         !is.na(lora) & !is.na(cs) & lora >= 58.55409 & cs < 1.405) |
> ##D         (!is.na(clv) & is.na(lora) & cs < 1.405), 0, 1)
> ##D   detach(data)
> ##D   factor (res, labels = c("glaucoma", "normal"))
> ##D }
> ##D 
> ##D errorest(Class~clv+lora+cs~., data = GlaucomaMVF, model=inclass, 
> ##D          estimator="cv", pFUN = list(list(model = rpart)), cFUN = response)
> ## End(Not run)
> 
> 
> 
> cleanEx()
> nameEx("bagging")
> ### * bagging
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: bagging
> ### Title: Bagging Classification, Regression and Survival Trees
> ### Aliases: bagging ipredbagg ipredbagg.factor ipredbagg.integer
> ###   ipredbagg.numeric ipredbagg.Surv ipredbagg.default bagging.data.frame
> ###   bagging.default
> ### Keywords: tree
> 
> ### ** Examples
> 
> 
> library("MASS")
> library("survival")
> 
> # Classification: Breast Cancer data
> 
> data("BreastCancer", package = "mlbench")
> 
> # Test set error bagging (nbagg = 50): 3.7% (Breiman, 1998, Table 5)
> 
> mod <- bagging(Class ~ Cl.thickness + Cell.size +
+                Cell.shape + Marg.adhesion +
+                Epith.c.size + Bare.nuclei +
+                Bl.cromatin + Normal.nucleoli +
+                Mitoses, data=BreastCancer, coob=TRUE)
> print(mod)

Bagging classification trees 
with 25 bootstrap replications 

Call: bagging.data.frame(formula = Class ~ Cl.thickness + Cell.size + 
    Cell.shape + Marg.adhesion + Epith.c.size + Bare.nuclei + 
    Bl.cromatin + Normal.nucleoli + Mitoses, data = BreastCancer, 
    coob = TRUE)

Out-of-bag estimate of misclassification error:  0.0381 

> 
> # Test set error bagging (nbagg=50): 7.9% (Breiman, 1996a, Table 2)
> data("Ionosphere", package = "mlbench")
> Ionosphere$V2 <- NULL # constant within groups
> 
> bagging(Class ~ ., data=Ionosphere, coob=TRUE)

Bagging classification trees 
with 25 bootstrap replications 

Call: bagging.data.frame(formula = Class ~ ., data = Ionosphere, coob = TRUE)

Out-of-bag estimate of misclassification error:  0.0912 

> 
> # Double-Bagging: combine LDA and classification trees
> 
> # predict returns the linear discriminant values, i.e. linear combinations
> # of the original predictors
> 
> comb.lda <- list(list(model=lda, predict=function(obj, newdata)
+                  predict(obj, newdata)$x))
> 
> # Note: out-of-bag estimator is not available in this situation, use
> # errorest
> 
> mod <- bagging(Class ~ ., data=Ionosphere, comb=comb.lda)
> 
> predict(mod, Ionosphere[1:10,])
 [1] good bad  good bad  good bad  good bad  good bad 
Levels: bad good
> 
> # Regression:
> 
> data("BostonHousing", package = "mlbench")
> 
> # Test set error (nbagg=25, trees pruned): 3.41 (Breiman, 1996a, Table 8)
> 
> mod <- bagging(medv ~ ., data=BostonHousing, coob=TRUE)
> print(mod)

Bagging regression trees 
with 25 bootstrap replications 

Call: bagging.data.frame(formula = medv ~ ., data = BostonHousing, coob = TRUE)

Out-of-bag estimate of root mean squared error:  4.0618 

> 
> library("mlbench")
> learn <- as.data.frame(mlbench.friedman1(200))
> 
> # Test set error (nbagg=25, trees pruned): 2.47 (Breiman, 1996a, Table 8)
> 
> mod <- bagging(y ~ ., data=learn, coob=TRUE)
> print(mod)

Bagging regression trees 
with 25 bootstrap replications 

Call: bagging.data.frame(formula = y ~ ., data = learn, coob = TRUE)

Out-of-bag estimate of root mean squared error:  2.8532 

> 
> # Survival data
> 
> # Brier score for censored data estimated by 
> # 10 times 10-fold cross-validation: 0.2 (Hothorn et al,
> # 2002)
> 
> data("DLBCL", package = "ipred")
> mod <- bagging(Surv(time,cens) ~ MGEc.1 + MGEc.2 + MGEc.3 + MGEc.4 + MGEc.5 +
+                MGEc.6 + MGEc.7 + MGEc.8 + MGEc.9 +
+                MGEc.10 + IPI, data=DLBCL, coob=TRUE)
> 
> print(mod)

Bagging survival trees 
with 25 bootstrap replications 

Call: bagging.data.frame(formula = Surv(time, cens) ~ MGEc.1 + MGEc.2 + 
    MGEc.3 + MGEc.4 + MGEc.5 + MGEc.6 + MGEc.7 + MGEc.8 + MGEc.9 + 
    MGEc.10 + IPI, data = DLBCL, coob = TRUE)

Out-of-bag estimate of Brier's score:  0.2098 

> 
> 
> 
> 
> cleanEx()

detaching ‘package:mlbench’, ‘package:survival’, ‘package:MASS’

> nameEx("dystrophy")
> ### * dystrophy
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: dystrophy
> ### Title: Detection of muscular dystrophy carriers.
> ### Aliases: dystrophy
> ### Keywords: datasets
> 
> ### ** Examples
> 
> ## Not run: 
> ##D 
> ##D data("dystrophy")
> ##D library("rpart")
> ##D errorest(Class~CK+H~AGE+PK+LD, data = dystrophy, model = inbagg, 
> ##D pFUN = list(list(model = lm, predict = mypredict.lm), list(model = rpart)), 
> ##D ns = 0.75, estimator = "cv")
> ## End(Not run)
> 
> 
> 
> cleanEx()
> nameEx("errorest")
> ### * errorest
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: errorest
> ### Title: Estimators of Prediction Error
> ### Aliases: errorest errorest.data.frame errorest.default
> ### Keywords: misc
> 
> ### ** Examples
> 
> 
> # Classification
> 
> data("iris")
> library("MASS")
> 
> # force predict to return class labels only
> mypredict.lda <- function(object, newdata)
+   predict(object, newdata = newdata)$class
> 
> # 10-fold cv of LDA for Iris data
> errorest(Species ~ ., data=iris, model=lda, 
+          estimator = "cv", predict= mypredict.lda)

Call:
errorest.data.frame(formula = Species ~ ., data = iris, model = lda, 
    predict = mypredict.lda, estimator = "cv")

	 10-fold cross-validation estimator of misclassification error 

Misclassification error:  0.02 

> 
> data("PimaIndiansDiabetes", package = "mlbench")
> ## Not run: 
> ##D # 632+ bootstrap of LDA for Diabetes data
> ##D errorest(diabetes ~ ., data=PimaIndiansDiabetes, model=lda, 
> ##D          estimator = "632plus", predict= mypredict.lda)
> ## End(Not run)
> 
> #cv of a fixed partition of the data
> list.tindx <- list(1:100, 101:200, 201:300, 301:400, 401:500,
+                    501:600, 601:700, 701:768)
> 
> errorest(diabetes ~ ., data=PimaIndiansDiabetes, model=lda, 
+          estimator = "cv", predict = mypredict.lda, 
+          est.para = control.errorest(list.tindx = list.tindx))

Call:
errorest.data.frame(formula = diabetes ~ ., data = PimaIndiansDiabetes, 
    model = lda, predict = mypredict.lda, estimator = "cv", 
    est.para = control.errorest(list.tindx = list.tindx))

	 8-fold cross-validation estimator of misclassification error 

Misclassification error:  0.2227 

> 
> ## Not run: 
> ##D #both bootstrap estimations based on fixed partitions
> ##D 
> ##D list.tindx <- vector(mode = "list", length = 25)
> ##D for(i in 1:25) {
> ##D   list.tindx[[i]] <- sample(1:768, 768, TRUE)
> ##D }
> ##D 
> ##D errorest(diabetes ~ ., data=PimaIndiansDiabetes, model=lda, 
> ##D          estimator = c("boot", "632plus"), predict= mypredict.lda, 
> ##D          est.para = control.errorest(list.tindx = list.tindx))
> ##D 
> ## End(Not run)
> data("Glass", package = "mlbench")
> 
> # LDA has cross-validated misclassification error of
> # 38% (Ripley, 1996, page 98)
> 
> # Pruned trees about 32% (Ripley, 1996, page 230)
> 
> # use stratified sampling here, i.e. preserve the class proportions
> errorest(Type ~ ., data=Glass, model=lda, 
+          predict=mypredict.lda, est.para=control.errorest(strat=TRUE))

Call:
errorest.data.frame(formula = Type ~ ., data = Glass, model = lda, 
    predict = mypredict.lda, est.para = control.errorest(strat = TRUE))

	 10-fold cross-validation estimator of misclassification error 

Misclassification error:  0.3785 

> 
> # force predict to return class labels
> mypredict.rpart <- function(object, newdata)
+   predict(object, newdata = newdata,type="class")
> 
> library("rpart")
> pruneit <- function(formula, ...)
+   prune(rpart(formula, ...), cp =0.01)
> 
> errorest(Type ~ ., data=Glass, model=pruneit, 
+          predict=mypredict.rpart, est.para=control.errorest(strat=TRUE))

Call:
errorest.data.frame(formula = Type ~ ., data = Glass, model = pruneit, 
    predict = mypredict.rpart, est.para = control.errorest(strat = TRUE))

	 10-fold cross-validation estimator of misclassification error 

Misclassification error:  0.3178 

> 
> # compute sensitivity and specificity for stabilised LDA
> 
> data("GlaucomaM", package = "TH.data")
> 
> error <- errorest(Class ~ ., data=GlaucomaM, model=slda, 
+                   predict=mypredict.lda, est.para=control.errorest(predictions=TRUE))
> 
> # sensitivity 
> 
> mean(error$predictions[GlaucomaM$Class == "glaucoma"] == "glaucoma")
[1] 0.8163265
> 
> # specificity
> 
> mean(error$predictions[GlaucomaM$Class == "normal"] == "normal")
[1] 0.8367347
> 
> # Indirect Classification: Smoking data
> 
> data(Smoking)
> # Set three groups of variables:
> # 1) explanatory variables are: TarY, NicY, COY, Sex, Age
> # 2) intermediate variables are: TVPS, BPNL, COHB
> # 3) response (resp) is defined by:
> 
> resp <- function(data){
+   data <- data[, c("TVPS", "BPNL", "COHB")]
+   res <- t(t(data) > c(4438, 232.5, 58))
+   res <- as.factor(ifelse(apply(res, 1, sum) > 2, 1, 0))
+   res
+ }
> 
> response <- resp(Smoking[ ,c("TVPS", "BPNL", "COHB")])
> smoking <- cbind(Smoking, response)
> 
> formula <- response~TVPS+BPNL+COHB~TarY+NicY+COY+Sex+Age
> 
> # Estimation per leave-one-out estimate for the misclassification is
> # 36.36% (Hand et al., 2001), using indirect classification with
> # linear models
> ## Not run: 
> ##D errorest(formula, data = smoking, model = inclass,estimator = "cv", 
> ##D pFUN = list(list(model=lm, predict = mypredict.lm)), cFUN = resp, 
> ##D est.para=control.errorest(k=nrow(smoking)))
> ## End(Not run)
> 
> # Regression
> 
> data("BostonHousing", package = "mlbench")
> 
> # 10-fold cv of lm for Boston Housing data
> errorest(medv ~ ., data=BostonHousing, model=lm, 
+          est.para=control.errorest(random=FALSE))

Call:
errorest.data.frame(formula = medv ~ ., data = BostonHousing, 
    model = lm, est.para = control.errorest(random = FALSE))

	 10-fold cross-validation estimator of root mean squared error 

Root mean squared error:  5.877 

> 
> # the same, with "model" returning a function for prediction
> # instead of an object of class "lm"
> 
> mylm <- function(formula, data) {
+   mod <- lm(formula, data)
+   function(newdata) predict(mod, newdata)
+ }
> 
> errorest(medv ~ ., data=BostonHousing, model=mylm, 
+          est.para=control.errorest(random=FALSE))

Call:
errorest.data.frame(formula = medv ~ ., data = BostonHousing, 
    model = mylm, est.para = control.errorest(random = FALSE))

	 10-fold cross-validation estimator of root mean squared error 

Root mean squared error:  5.877 

> 
> 
> # Survival data
> 
> data("GBSG2", package = "TH.data")
> library("survival")
> 
> # prediction is fitted Kaplan-Meier
> predict.survfit <- function(object, newdata) object
> 
> # 5-fold cv of Kaplan-Meier for GBSG2 study
> errorest(Surv(time, cens) ~ 1, data=GBSG2, model=survfit, 
+          predict=predict.survfit, est.para=control.errorest(k=5))

Call:
errorest.data.frame(formula = Surv(time, cens) ~ 1, data = GBSG2, 
    model = survfit, predict = predict.survfit, est.para = control.errorest(k = 5))

	 5-fold cross-validation estimator of Brier's score 

Brier's score:  0.1927 

> 
> 
> 
> 
> cleanEx()

detaching ‘package:survival’, ‘package:rpart’, ‘package:MASS’

> nameEx("inbagg")
> ### * inbagg
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: inbagg
> ### Title: Indirect Bagging
> ### Aliases: inbagg inbagg.default inbagg.data.frame
> ### Keywords: misc
> 
> ### ** Examples
> 
> 
> library("MASS")
> library("rpart")
> y <- as.factor(sample(1:2, 100, replace = TRUE))
> W <- mvrnorm(n = 200, mu = rep(0, 3), Sigma = diag(3))
> X <- mvrnorm(n = 200, mu = rep(2, 3), Sigma = diag(3))
> colnames(W) <- c("w1", "w2", "w3")
> colnames(X) <- c("x1", "x2", "x3")
> DATA <- data.frame(y, W, X)
> 
> 
> pFUN <- list(list(formula = w1~x1+x2, model = lm, predict = mypredict.lm),
+              list(model = rpart))
> 
> inbagg(y~w1+w2+w3~x1+x2+x3, data = DATA, pFUN = pFUN)
Indirect bagging, with 25 bootstrap samples and intermediate variables: 
 w1 w2 w3 
> 
> 
> 
> cleanEx()

detaching ‘package:rpart’, ‘package:MASS’

> nameEx("inclass")
> ### * inclass
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: inclass
> ### Title: Indirect Classification
> ### Aliases: inclass inclass.default inclass.data.frame
> ### Keywords: misc
> 
> ### ** Examples
> 
> data("Smoking", package = "ipred")
> # Set three groups of variables:
> # 1) explanatory variables are: TarY, NicY, COY, Sex, Age
> # 2) intermediate variables are: TVPS, BPNL, COHB
> # 3) response (resp) is defined by:
> 
> classify <- function(data){
+   data <- data[,c("TVPS", "BPNL", "COHB")]
+   res <- t(t(data) > c(4438, 232.5, 58))
+   res <- as.factor(ifelse(apply(res, 1, sum) > 2, 1, 0))
+   res
+ }
> 
> response <- classify(Smoking[ ,c("TVPS", "BPNL", "COHB")])
> smoking <- data.frame(Smoking, response)
> 
> formula <- response~TVPS+BPNL+COHB~TarY+NicY+COY+Sex+Age
> 
> inclass(formula, data = smoking, pFUN = list(list(model = lm, predict =
+ mypredict.lm)), cFUN = classify)
Indirect classification, with 3 intermediate variables: 
 TVPS BPNL COHB 

 Predictive model per intermediate is lm 
> 
> 
> 
> 
> cleanEx()
> nameEx("ipredknn")
> ### * ipredknn
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: ipredknn
> ### Title: k-Nearest Neighbour Classification
> ### Aliases: ipredknn
> ### Keywords: multivariate
> 
> ### ** Examples
> 
> 
> library("mlbench")
> learn <- as.data.frame(mlbench.twonorm(300))
> 
> mypredict.knn <- function(object, newdata)
+   predict.ipredknn(object, newdata, type="class")
> 
> errorest(classes ~., data=learn, model=ipredknn, 
+          predict=mypredict.knn)

Call:
errorest.data.frame(formula = classes ~ ., data = learn, model = ipredknn, 
    predict = mypredict.knn)

	 10-fold cross-validation estimator of misclassification error 

Misclassification error: 
0.0533 > > > > > > cleanEx() detaching ‘package:mlbench’ > nameEx("kfoldcv") > ### * kfoldcv > > flush(stderr()); flush(stdout()) > > ### Name: kfoldcv > ### Title: Subsamples for k-fold Cross-Validation > ### Aliases: kfoldcv > ### Keywords: misc > > ### ** Examples > > > # 10-fold CV with N = 91 > > kfoldcv(10, 91) [1] 10 9 9 9 9 9 9 9 9 9 > > ## Don't show: > k <- sample(5:15, 1) > k [1] 7 > N <- sample(50:150, 1) > N [1] 87 > stopifnot(sum(kfoldcv(k, N)) == N) > ## End(Don't show) > > > > > cleanEx() > nameEx("predict.bagging") > ### * predict.bagging > > flush(stderr()); flush(stdout()) > > ### Name: predict.classbagg > ### Title: Predictions from Bagging Trees > ### Aliases: predict.classbagg predict.regbagg predict.survbagg > ### Keywords: tree > > ### ** Examples > > > data("Ionosphere", package = "mlbench") > Ionosphere$V2 <- NULL # constant within groups > > # nbagg = 10 for performance reasons here > mod <- bagging(Class ~ ., data=Ionosphere) > > # out-of-bag estimate > > mean(predict(mod) != Ionosphere$Class) [1] 0.07977208 > > # predictions for the first 10 observations > > predict(mod, newdata=Ionosphere[1:10,]) [1] good bad good bad good bad good bad good bad Levels: bad good > > predict(mod, newdata=Ionosphere[1:10,], type="prob") bad good [1,] 0.00 1.00 [2,] 1.00 0.00 [3,] 0.00 1.00 [4,] 0.64 0.36 [5,] 0.00 1.00 [6,] 1.00 0.00 [7,] 0.00 1.00 [8,] 1.00 0.00 [9,] 0.00 1.00 [10,] 1.00 0.00 > > > > > cleanEx() > nameEx("predict.inbagg") > ### * predict.inbagg > > flush(stderr()); flush(stdout()) > > ### Name: predict.inbagg > ### Title: Predictions from an Inbagg Object > ### Aliases: predict.inbagg > ### Keywords: misc > > ### ** Examples > > > library("MASS") > library("rpart") > y <- as.factor(sample(1:2, 100, replace = TRUE)) > W <- mvrnorm(n = 200, mu = rep(0, 3), Sigma = diag(3)) > X <- mvrnorm(n = 200, mu = rep(2, 3), Sigma = diag(3)) > colnames(W) <- c("w1", "w2", "w3") > colnames(X) <- c("x1", "x2", "x3") > DATA <- data.frame(y, W, X) > > pFUN 
<- list(list(formula = w1~x1+x2, model = lm), + list(model = rpart)) > > RES <- inbagg(y~w1+w2+w3~x1+x2+x3, data = DATA, pFUN = pFUN) > predict(RES, newdata = X) [1] 1 1 2 2 1 2 2 2 2 1 1 1 2 1 2 1 2 2 1 2 2 1 2 1 1 1 1 1 2 1 1 2 1 1 2 2 2 [38] 1 2 1 2 2 2 2 2 2 1 1 2 2 1 2 1 1 1 1 1 2 2 1 2 1 1 1 2 1 1 2 1 2 1 2 1 1 [75] 1 2 2 1 2 2 1 2 1 1 2 1 2 1 1 1 1 1 2 2 2 2 1 1 2 2 1 1 2 2 1 2 2 2 2 1 1 [112] 1 2 1 2 1 2 2 1 2 2 1 2 1 1 1 1 1 2 1 1 2 1 1 2 2 2 1 2 1 2 2 2 2 2 2 1 1 [149] 2 2 1 2 1 1 1 1 1 2 2 1 2 1 1 1 2 1 1 2 1 2 1 2 1 1 1 2 2 1 2 2 1 2 1 1 2 [186] 1 2 1 1 1 1 1 2 2 2 2 1 1 2 2 Levels: 1 2 > > > > cleanEx() detaching ‘package:rpart’, ‘package:MASS’ > nameEx("predict.inclass") > ### * predict.inclass > > flush(stderr()); flush(stdout()) > > ### Name: predict.inclass > ### Title: Predictions from an Inclass Object > ### Aliases: predict.inclass > ### Keywords: misc > > ### ** Examples > > ## Not run: > ##D # Simulation model, classification rule following Hand et al. (2001) > ##D > ##D theta90 <- varset(N = 1000, sigma = 0.1, theta = 90, threshold = 0) > ##D > ##D dataset <- as.data.frame(cbind(theta90$explanatory, theta90$intermediate)) > ##D names(dataset) <- c(colnames(theta90$explanatory), > ##D colnames(theta90$intermediate)) > ##D > ##D classify <- function(Y, threshold = 0) { > ##D Y <- Y[,c("y1", "y2")] > ##D z <- (Y > threshold) > ##D resp <- as.factor(ifelse((z[,1] + z[,2]) > 1, 1, 0)) > ##D return(resp) > ##D } > ##D > ##D formula <- response~y1+y2~x1+x2 > ##D > ##D fit <- inclass(formula, data = dataset, pFUN = list(list(model = lm)), > ##D cFUN = classify) > ##D > ##D predict(object = fit, newdata = dataset) > ##D > ##D > ##D data("Smoking", package = "ipred") > ##D > ##D # explanatory variables are: TarY, NicY, COY, Sex, Age > ##D # intermediate variables are: TVPS, BPNL, COHB > ##D # response is defined by: > ##D > ##D classify <- function(data){ > ##D data <- data[,c("TVPS", "BPNL", "COHB")] > ##D res <- t(t(data) > c(4438, 232.5, 58)) > ##D
res <- as.factor(ifelse(apply(res, 1, sum) > 2, 1, 0)) > ##D res > ##D } > ##D > ##D response <- classify(Smoking[ ,c("TVPS", "BPNL", "COHB")]) > ##D smoking <- cbind(Smoking, response) > ##D > ##D formula <- response~TVPS+BPNL+COHB~TarY+NicY+COY+Sex+Age > ##D > ##D fit <- inclass(formula, data = smoking, > ##D pFUN = list(list(model = lm)), cFUN = classify) > ##D > ##D > ##D predict(object = fit, newdata = smoking) > ## End(Not run) > > data("GlaucomaMVF", package = "ipred") > library("rpart") > glaucoma <- GlaucomaMVF[,(names(GlaucomaMVF) != "tension")] > # explanatory variables are derived from laser scanning images and intraocular pressure > # intermediate variables are: clv, cs, lora > # response is defined by > > classify <- function (data) { + attach(data) + res <- ifelse((!is.na(clv) & !is.na(lora) & clv >= 5.1 & lora >= + 49.23372) | (!is.na(clv) & !is.na(lora) & !is.na(cs) & + clv < 5.1 & lora >= 58.55409 & cs < 1.405) | (is.na(clv) & + !is.na(lora) & !is.na(cs) & lora >= 58.55409 & cs < 1.405) | + (!is.na(clv) & is.na(lora) & cs < 1.405), 0, 1) + detach(data) + factor(res, labels = c("glaucoma", "normal")) + } > > fit <- inclass(Class~clv+lora+cs~., data = glaucoma, + pFUN = list(list(model = rpart)), cFUN = classify) > > data("GlaucomaM", package = "TH.data") > predict(object = fit, newdata = GlaucomaM) [1] normal normal normal normal glaucoma glaucoma normal normal [9] normal normal normal normal glaucoma normal normal normal [17] normal normal normal glaucoma normal normal glaucoma normal [25] normal normal glaucoma normal glaucoma normal normal normal [33] normal normal normal normal glaucoma normal normal normal [41] normal normal glaucoma normal normal glaucoma normal normal [49] normal normal normal glaucoma glaucoma glaucoma glaucoma normal [57] glaucoma glaucoma normal normal glaucoma normal glaucoma normal [65] glaucoma normal normal normal normal normal glaucoma glaucoma [73] glaucoma normal normal normal glaucoma normal normal normal [81]
glaucoma normal normal normal normal glaucoma glaucoma glaucoma [89] glaucoma normal normal normal glaucoma normal normal normal [97] normal normal glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma [105] normal glaucoma normal glaucoma glaucoma glaucoma glaucoma glaucoma [113] glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma [121] glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma [129] glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma [137] glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma [145] glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma [153] glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma [161] glaucoma glaucoma glaucoma normal glaucoma glaucoma glaucoma glaucoma [169] glaucoma glaucoma glaucoma glaucoma normal glaucoma glaucoma glaucoma [177] glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma glaucoma [185] glaucoma glaucoma normal glaucoma glaucoma glaucoma glaucoma glaucoma [193] glaucoma glaucoma glaucoma glaucoma Levels: glaucoma normal > > > > > cleanEx() detaching ‘package:rpart’ > nameEx("prune.bagging") > ### * prune.bagging > > flush(stderr()); flush(stdout()) > > ### Name: prune.classbagg > ### Title: Pruning for Bagging > ### Aliases: prune.classbagg prune.regbagg prune.survbagg > ### Keywords: tree > > ### ** Examples > > > data("Glass", package = "mlbench") > library("rpart") > > mod <- bagging(Type ~ ., data=Glass, nbagg=10, coob=TRUE) > pmod <- prune(mod) > print(pmod) Bagging classification trees with 10 bootstrap replications Call: bagging.data.frame(formula = Type ~ ., data = Glass, nbagg = 10, coob = TRUE) Out-of-bag estimate of misclassification error: 0.285 > > > > > > cleanEx() detaching ‘package:rpart’ > nameEx("rsurv") > ### * rsurv > > flush(stderr()); flush(stdout()) > > ### Name: rsurv > ### Title: Simulate Survival Data > ### Aliases: rsurv > ### Keywords: survival > > ### ** 
Examples > > > library("survival") > # 3*X1 + X2 > simdat <- rsurv(500, model="C") > coxph(Surv(time, cens) ~ ., data=simdat) Call: coxph(formula = Surv(time, cens) ~ ., data = simdat) coef exp(coef) se(coef) z p X1 3.1555 23.4648 0.2023 15.60 < 2e-16 X2 1.1015 3.0086 0.1628 6.77 1.3e-11 X3 -0.2103 0.8104 0.1525 -1.38 0.168 X4 0.0466 1.0477 0.1488 0.31 0.754 X5 0.2709 1.3111 0.1536 1.76 0.078 Likelihood ratio test=289 on 5 df, p=0 n= 500, number of events= 500 > > > > > cleanEx() detaching ‘package:survival’ > nameEx("sbrier") > ### * sbrier > > flush(stderr()); flush(stdout()) > > ### Name: sbrier > ### Title: Model Fit for Survival Data > ### Aliases: sbrier > ### Keywords: survival > > ### ** Examples > > > library("survival") > data("DLBCL", package = "ipred") > smod <- Surv(DLBCL$time, DLBCL$cens) > > KM <- survfit(smod ~ 1) > # integrated Brier score up to max(DLBCL$time) > sbrier(smod, KM) [,1] [1,] 0.2237226 attr(,"names") [1] "integrated Brier score" attr(,"time") [1] 1.3 129.9 > > # integrated Brier score up to time=50 > sbrier(smod, KM, btime=c(0, 50)) Warning in sbrier(smod, KM, btime = c(0, 50)) : btime[1] is smaller than min(time) [,1] [1,] 0.2174081 attr(,"names") [1] "integrated Brier score" attr(,"time") [1] 1.3 39.6 > > # Brier score for time=50 > sbrier(smod, KM, btime=50) Brier score 0.249375 attr(,"time") [1] 50 > > # a "real" model: one single survival tree with Intern. 
Prognostic Index > # and mean gene expression in the first cluster as predictors > mod <- bagging(Surv(time, cens) ~ MGEc.1 + IPI, data=DLBCL, nbagg=1) > > # this is a list of survfit objects (==KM-curves), one for each observation > # in DLBCL > pred <- predict(mod, newdata=DLBCL) > > # integrated Brier score up to max(time) > sbrier(smod, pred) [,1] [1,] 0.1442559 attr(,"names") [1] "integrated Brier score" attr(,"time") [1] 1.3 129.9 > > # Brier score at time=50 > sbrier(smod, pred, btime=50) Brier score 0.1774478 attr(,"time") [1] 50 > # artificial examples and illustrations > > cleans <- function(x) { attr(x, "time") <- NULL; names(x) <- NULL; x } > > n <- 100 > time <- rpois(n, 20) > cens <- rep(1, n) > > # checks, Graf et al. page 2536, no censoring at all! > # no information: \pi(t) = 0.5 > > a <- sbrier(Surv(time, cens), rep(0.5, n), time[50]) > stopifnot(all.equal(cleans(a),0.25)) > > # some information: \pi(t) = S(t) > > n <- 100 > time <- 1:100 > mod <- survfit(Surv(time, cens) ~ 1) > a <- sbrier(Surv(time, cens), rep(list(mod), n)) > mymin <- mod$surv * (1 - mod$surv) > cleans(a) [,1] [1,] 0.1682833 > sum(mymin)/diff(range(time)) [1] 0.1683333 > > # independent of ordering > rand <- sample(1:100) > b <- sbrier(Surv(time, cens)[rand], rep(list(mod), n)[rand]) > stopifnot(all.equal(cleans(a), cleans(b))) > > ## Don't show: > # total information: \pi(t | X) known for every obs > > time <- 1:10 > cens <- rep(1,10) > pred <- diag(10) > pred[upper.tri(pred)] <- 1 > diag(pred) <- 0 > # > # a <- sbrier(Surv(time, cens), pred) > # stopifnot(all.equal(a, 0)) > # > ## End(Don't show) > > # 2 groups at different risk > > time <- c(1:10, 21:30) > strata <- c(rep(1, 10), rep(2, 10)) > cens <- rep(1, length(time)) > > # no information about the groups > > a <- sbrier(Surv(time, cens), survfit(Surv(time, cens) ~ 1)) > b <- sbrier(Surv(time, cens), rep(list(survfit(Surv(time, cens) ~1)), 20)) > stopifnot(all.equal(a, b)) > > # risk groups known > > mod <- 
survfit(Surv(time, cens) ~ strata) > b <- sbrier(Surv(time, cens), c(rep(list(mod[1]), 10), rep(list(mod[2]), 10))) > stopifnot(a > b) > > ### GBSG2 data > data("GBSG2", package = "TH.data") > > thsum <- function(x) { + ret <- c(median(x), quantile(x, 0.25), quantile(x,0.75)) + names(ret)[1] <- "Median" + ret + } > > t(apply(GBSG2[,c("age", "tsize", "pnodes", + "progrec", "estrec")], 2, thsum)) Median 25% 75% age 53.0 46 61.00 tsize 25.0 20 35.00 pnodes 3.0 1 7.00 progrec 32.5 7 131.75 estrec 36.0 8 114.00 > > table(GBSG2$menostat) Pre Post 290 396 > table(GBSG2$tgrade) I II III 81 444 161 > table(GBSG2$horTh) no yes 440 246 > > # pooled Kaplan-Meier > > mod <- survfit(Surv(time, cens) ~ 1, data=GBSG2) > # integrated Brier score > sbrier(Surv(GBSG2$time, GBSG2$cens), mod) [,1] [1,] 0.1939366 attr(,"names") [1] "integrated Brier score" attr(,"time") [1] 8 2659 > # Brier score at 5 years > sbrier(Surv(GBSG2$time, GBSG2$cens), mod, btime=1825) Brier score 0.2499984 attr(,"time") [1] 1825 > > # Nottingham prognostic index > > GBSG2 <- GBSG2[order(GBSG2$time),] > > NPI <- 0.2*GBSG2$tsize/10 + 1 + as.integer(GBSG2$tgrade) > NPI[NPI < 3.4] <- 1 > NPI[NPI >= 3.4 & NPI <=5.4] <- 2 > NPI[NPI > 5.4] <- 3 > > mod <- survfit(Surv(time, cens) ~ NPI, data=GBSG2) > plot(mod) > > pred <- c() > survs <- c() > for (i in sort(unique(NPI))) + survs <- c(survs, getsurv(mod[i], 1825)) > > for (i in 1:nrow(GBSG2)) + pred <- c(pred, survs[NPI[i]]) > > # Brier score of NPI at t=5 years > sbrier(Surv(GBSG2$time, GBSG2$cens), pred, btime=1825) Brier score 0.233823 attr(,"time") [1] 1825 > > > > > > cleanEx() detaching ‘package:survival’ > nameEx("slda") > ### * slda > > flush(stderr()); flush(stdout()) > > ### Name: slda > ### Title: Stabilised Linear Discriminant Analysis > ### Aliases: slda slda.default slda.formula slda.factor > ### Keywords: multivariate > > ### ** Examples > > > library("mlbench") > library("MASS") > learn <- as.data.frame(mlbench.twonorm(100)) > test <- 
as.data.frame(mlbench.twonorm(1000)) > > mlda <- lda(classes ~ ., data=learn) > mslda <- slda(classes ~ ., data=learn) > > print(mean(predict(mlda, newdata=test)$class != test$classes)) [1] 0.047 > print(mean(predict(mslda, newdata=test)$class != test$classes)) [1] 0.025 > > > > > cleanEx() detaching ‘package:MASS’, ‘package:mlbench’ > nameEx("varset") > ### * varset > > flush(stderr()); flush(stdout()) > > ### Name: varset > ### Title: Simulation Model > ### Aliases: varset > ### Keywords: misc > > ### ** Examples > > > theta90 <- varset(N = 1000, sigma = 0.1, theta = 90, threshold = 0) > theta0 <- varset(N = 1000, sigma = 0.1, theta = 0, threshold = 0) > par(mfrow = c(1, 2)) > plot(theta0$intermediate) > plot(theta90$intermediate) > > > > > graphics::par(get("par.postscript", pos = 'CheckExEnv')) > ### *
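The kfoldcv() example earlier returns fold sizes that always sum to N (kfoldcv(10, 91) gives 10 9 9 9 9 9 9 9 9 9, and the hidden test checks sum(kfoldcv(k, N)) == N). A minimal sketch of that fold-size rule — inferred from the output shown, not taken from ipred's source, and using the hypothetical name my_kfoldcv:

```r
# Sketch of the k-fold size rule suggested by the kfoldcv() output above:
# every fold gets floor(N/k) observations and the remainder N mod k is
# spread over the first folds, so the sizes always sum to N.
# (my_kfoldcv is a hypothetical name, not ipred's implementation.)
my_kfoldcv <- function(k, N) {
  sizes <- rep(N %/% k, k)         # base size for every fold
  r <- N %% k                      # leftover observations
  if (r > 0) sizes[seq_len(r)] <- sizes[seq_len(r)] + 1
  sizes
}
my_kfoldcv(10, 91)  # 10 9 9 9 9 9 9 9 9 9
```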